Flimsy And Barely Functional

See DrawingHand and YouArentGonnaNeedIt for further discussion on this topic. This contribution of Dion's needs to be better integrated over there ...


OK, OK, I know this was somewhat hashed over in YouArentGonnaNeedIt, but I must make a few observations. By intentionally creating a thing that is flimsy and barely functional for your needs (much less someone else's), you have created an artifact so handcrafted and rudimentary that it will probably be unsuitable when examined as a candidate for reuse or other repurposing. Being rudimentary, it probably lacks robustness and general-purpose integrity, and consequently won't be suitable (without expensive maintenance) even for the system it was intended for as that system continues to mature.

From my experience, systems that were constructed with ruggedized, robust, and mature components were the ones that suffered low failure and repair rates and rarely needed expensive maintenance visits to upgrade functionality that could easily have been forecast.

I can see how putting in "placeholders" or sparse code could be useful in an iterative development process where a barely functioning component is guaranteed to be revisited and brought up to expected quality levels (in/out parameter range checking, full exception handling, debug support, asserts, etc.). But the YouArentGonnaNeedIt philosophy would seem to imply that your lack of time and resources absolves you of responsibility for creating a house of cards that will probably collapse in the next breeze. Iterative development carries more responsibility than just barely passing each iteration. Quality and functionality must transcend and survive each iterative phase.
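
As a rough illustration only (Python, with names invented for this page, not anyone's production code), the gap between a placeholder and a component brought up to those quality levels might look like this:

 # A placeholder that "works" for the happy path only.
 def scale_reading(raw):
     return raw * 0.1

 # The same routine brought up to expected quality levels:
 # range checking, explicit failure, and an assertion on the result.
 def scale_reading_checked(raw):
     if not isinstance(raw, (int, float)):
         raise TypeError("raw must be numeric, got %r" % type(raw))
     if not (0 <= raw <= 4095):  # e.g. a 12-bit ADC reading
         raise ValueError("raw reading %r out of range 0..4095" % raw)
     result = raw * 0.1
     assert 0.0 <= result <= 409.5, "scaled result out of expected range"
     return result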

Sure, it is expensive to create robust, capable components that can meet a range of requirements, but you tend to get what you pay for. There is a middle ground when it comes to design issues such as these. You don't want to build a Corvair that would be UnsafeAtAnySpeed, nor do you need to build a BMW 750 with marginal additional features that come at an exorbitant cost. Usually, a balance is appropriate when it comes to cost-to-quality issues.

I'm somewhat adamant on this issue because so many of the systems I examine are built on code that is unsuitable for reuse or even revision because the developer ended up building a contraption that would just barely solve the problem. Also, a little extra work would have gone a long way towards preventing consignment of rigid, simplistic code or designs to the scrap heap. Consequently, YouArentGonnaNeedIt is rarely a good solution to project pressure (in my most humble opinion). Resistance to those project pressures is usually the best, and most difficult, course of action. -- DionHinchcliffe

I think "placeholders" and "sparse code" are exactly what YouArentGonnaNeedIt refers to. Placeholders for future features usually get in the way of implementing those very features; having a clean code base is a lot more useful in practice. --JeffPetkau?


Dion, I think we haven't clearly communicated what "the simplest thing that could possibly work" means. It most certainly does not mean, "the fewest programming environment operations to get the next test case running". That would result in exactly the code you rightly dis.

What you are looking for is the simplest code to implement both the current set of test cases and the test case you are currently trying to bring to life.

The first test case for a class can typically be implemented quite simply: one object, one method, away you go. Then you go to write the next test case, and you need a more complex operation in the method. The quickest way to get it to work would be to put a conditional in with the rest of the code (we've all seen 100-200 line methods that grew this way).

We're not looking for the quickest way, we're looking for the simplest result. So, we first break the existing method into pieces. That leaves the existing test cases running. Then we modify (simply, now) one of the little methods to handle the next test case.

I call this SwitchingHats?. You are always either 1) adding functionality or 2) restructuring the system so it computes the same result but is more amenable to new functionality. Trying to wear both hats at once is a disaster.

My favorite "simplest thing" story is a project review that ChetHendrickson did. The project had CORBA and databases and this and that. He asked, "why can't you fedex floppy disks with the data?" No answer. In that case, I say, by God fedex the floppies. --KentBeck


Regardless of this explanation, which seems to only help matters slightly, there are several problems. First, how many times do you need to refactor something? I'd be willing to bet that doing it well the first time is faster than multiple rounds of revision. The issue here is that the programmer can be so focused on the short term (the technical term is "hack") that they don't consider all the changes that can occur down the road. More importantly, programmers only occasionally have the ability to determine what changes are likely down the road; they can't design the better solution if they don't know where change will occur. Thus, up-front engineering in a few broader areas can be well justified.

Second, what makes you think programmers know how to refactor worth a damn? If someone knows how to factor code well, it's usually the case that he or she knows how to write the code well the first time around, and will tend to do so. The force at work here on the experienced programmer is having to go back and make revisions to code. Programmers generally learn where their work in the past has been flimsy and then try to make sure it can stand up to inevitable changes.

I think the real mistake with this recommendation is that the decision to hack something should really be based on the priority of that code, not on unilateral dogma. If you know that something like a test case is only going to be used internally, won't be shipped to anyone, etc., then by all means hack it. If you know the code is going to need to be adapted down the road, or, even more importantly, if you don't know but understand that the code is pivotal, then you have to do it right the first time. The problem for software engineering comes in the cruft generated by hack upon hack, as well as in regressions. Refactoring can never really fix the problems in this approach unless all code is bound by an abstract contract (which is sadly very seldom the case). Furthermore, an existing, well-engineered design can drive the ability to do something down the road. The ability to meet users' needs quickly depends on a nimble system; if it has to be refactored, expanded, and retested, this isn't going to happen.

Enough. Feh.


Hacking something I don't understand for the first time is good. Then I hack it a second time around. Then maybe a third. A few rewrites later, I understand the problem well enough to write it simply. This is using the SecondSystemEffect to my advantage. But here, instead of building the entire system again--which is expensive and vulnerable to failure--I only build a small part of it over and over again. The motto I use here is, "It's ok to throw away code." Alternatively, "How many lines can I delete today!" -- SunirShah

It's never correct to assume that you have to do it right the first time. That's a meaningless suggestion because you do not know what is right the first time. Or anytime. The code base is always in flux as you learn more about the system and the environment changes.

As I say at the office, "It's not working." "Did you update?" "10 minutes ago." "God, you're obsolete." -- SunirShah

And your coworkers actually put up with that? ;)

Suggestion: redefine "first time" to include the "getting it right" step. Before that, you've only completed a fraction of an iteration. This way, "done" always means something consistent.

Let me add that "getting it right" is a highly subjective determination. XP is the only methodology I know of that defines getting it right, i.e. passing the unit tests.

No. That is simply wrong. "Getting it right" has nothing at all to do with subjective determination. It has everything to do with getting the requirements sorted out into what is fixed and what is changeable. If you can't do that then you simply don't know what you're talking about in the first place. If you know what a component has to do before you start then you can design code that will meet those needs regardless of whatever fluffy feature requirements may be thrown at the component later. Make all the changes you want, as long as the core functionality of the original component remains the same.

Knowing that, now we can create code that is robust, purposeful, and flexible -- all within the constraints of the original fixed requirements. That part is not subject to analysis or second guessing of any kind.

Having stable requirements certainly makes everything easier! But, in many cases, clients have at best a rough idea of what they want - a mix of "I'll know it when I see it" and "You're the expert, you tell me". Further, their ability to communicate what they want is even rougher than their understanding - it's a lot easier when they have prototypes, so they can say "like this, but different" with increasing precision. And there is flexibility at both ends: one can adjust the requirements to meet the implementation, e.g. using adapter services and program 'glue'. In this environment, the requirements - and even the vision or overall goal - are iterative, incomplete, and flexible. Rather than "fixed" requirements, the concerns are stability, inertia, and utility. "Getting it right" is subjective. Unit tests serve both to stabilize and to describe what 'right' means, but even those tests are subject to change.

You say this means "you simply don't know what you're talking about in the first place", but communication has many possible challenges: You simply don't know what you're talking about. You don't know how to talk about what you know. You don't understand what someone else is trying to say. And even the first case, which omits the foibles of human communication, is not a sign of incompetence. It could be that your vision is unclear, that you are willing to refine your 'vision' and requirements for your software as you progress in implementing it. Exploratory programming, game development, etc. would certainly fall into this class. Only hindsight is 20/20.

Hmm. Perhaps application programming has this sort of wishy-washy tolerance for clients who don't know what they want before starting a project. EmbeddedSystems has no room at all for this kind of thing because the product is based on a piece of hardware. That hardware has to be built to provide certain functionality in the Real World that can't be replicated by software. So, you can start with a coffee maker or a toaster, but you have to pick one of them.

There is some crossover, however: I've had clients ask for a piece of dedicated hardware to do something that could be done with a browser and web application. I have talked myself out of work this way, but ended up with a happier client in the end. In this case the client had no clue what they really wanted, only what they wanted the "thing" to do. It was up to my expertise, wisdom, and professionalism to find them the best solution. I would hope that custom business applications would be built the same way; a SpikeSolution finds out what the client really wants the "thing" to do, then refinement works it up from there. Discovery is a process.

However, all of this is still secondary to the matter of determining what is the Critical Success Factor for any particular project. Once you determine what a product absolutely must do then everything else is up for discussion. I contend that it is pointless to even begin building code until that basic division of "must have" and "would like" is determined. So, do you want the business application equivalent of a toaster or a coffee maker? Gotta know this going in.

My day job involves Applied R&D for robotics. Though it's outside my niche, EmbeddedSystems are a considerable part of that - the group lathes out new boards on a regular basis, hooks them into experimental devices. In practice, we have a sort of overall vision - such as: build a robot that can map a building or an underwater tunnel, build a robot that experts can use to remotely inspect a potential improvised explosive device, or a UAV that can help the coast guard quickly locate people in need of rescue. But all the intermediate tasks - mobility, balance, perception, world modeling, recognizing and opening doors, dropping communications repeaters when signal strength is poor, payload management, et cetera - are all much less well defined. The strategy for achieving the goal isn't stable. The hardware requirements aren't stable. The models, e.g. to recognize obstacles vs. vegetation, aren't stable. The integration requirements aren't stable. The software requirements aren't stable. And, in a rather unintuitive twist, building new hardware - i.e. a new robot with new devices and so on - is often faster and cheaper than developing new software to control it. Suffice to say, I have very little faith in any sort of inherent 'stability' of requirements - not for embedded systems, and not even for 'Critical Success Factors'.

My hobby involves studying language designs. There are a few programming models that are very close to domain modeling or requirements (and preferences) specification - most notably: term rewrite systems, temporal logic, temporal concurrent constraint programming, the calculus of constructions, declarative meta-programming. Some day we may reach a point where writing software for a toaster or coffee maker involves independent steps of modeling the physical device and then specifying the software's functional requirements. (In that case, we could have the device carry its own, query-able model - or a reference to one - allowing for far more pluggable hardware systems.) I contend that, even then, there will be a great deal of essential difficulty in determining requirements - that our vision, our strategies, our constraints, our goals will tend to be incomplete, iterative, unstable. "Getting it right" would remain subjective.

Your focus on stable, rewarding requirements - those 'CriticalSuccessFactors' - makes excellent business sense! Ultimately, we should write code where we anticipate a return on investment - a decision involving anticipated risk, anticipated cost, anticipated reward that must be weighed alongside many competing options. MissionCritical software will certainly have much lower risk tolerances than rapid prototypes, interactive arts programming (games, toys, social portals), business applications, report generators.

Okay. I still think we're talking about the need for spike solutions here rather than building flimsy production products. In the case of the robotics applications you simply don't know what the product is really supposed to do. Yeah, you can come up with some pie-in-the-sky ideas about what you'd like it to do, but you still don't know what it really has to do.

Not so for the bulk of embedded products. Even though there is going to be discovery and mind-changing going on while a product is being developed there needs to be some bedrock functionality that is very well defined before development ever begins. The toaster versus coffee maker comparison remains valid. For your robotics apps think ocean surface examiner versus IED examiner. Completely different domains; different hardware, different mission, different, different, different.

My contention remains that any project has to have a specified critical success factor. The project can be broken into sub-projects, each of which has its own critical success factor, and so on. Unless we can define these things, we really don't know what we're doing and can't possibly tell whether we're making progress or not. N'est-ce pas?


a little extra work would have gone a long way towards preventing consignment of rigid, simplistic code or designs to the scrap heap.

If it only takes "a little extra work" now, why doesn't it take just a little extra work later? I am not sure what, exactly, FlimsyAndBarelyFunctional implies, but building code adequate to the needs is sufficient. I have seen too many deadlines blown while debugging that "little extra work" that the user did not request. I also have heard far too many users ask "Why can't developers simply do what we ask?"

Because once the code hits the field it builds dependencies: in the code itself, in the developers who have learned how to use the code, and in the artifacts surrounding the code and developers (books, training, etc.). The more dependencies, the harder it is to change. If, for example, you make radical changes in the win32 api, you break gazillions of tools and lines of code, and you lose an immense investment in programmer training.

If a function is adequate to its needs, why does it (or its interface) need to be broken? If you correct or improve the functionality of a method, you do not need to change its interface. If you are providing a different functionality, you should also provide a different interface. Why should "radical changes" in the interface be required?
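
A tiny hypothetical illustration of that point (Python, invented names): a correction changes only the body of a function, so its callers are untouched, while genuinely different functionality gets its own interface:

 def parse_amount(text):
     # The original version ignored surrounding whitespace; the fix
     # changes only the body, so every existing caller keeps working.
     return int(text.strip())

 def parse_amount_lenient(text):
     # Different functionality (tolerating thousands separators) gets a
     # different interface instead of a changed one.
     return int(text.strip().replace(",", ""))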

One of the assumptions of XP is that there isn't a lot of "concrete" around the software (books, training, etc.). This comes from having developers very close to users, frequent releases, etc. PureXp isn't for ShrinkWrap? development.

Is that true? You can have frequent iterations without having frequent releases. If there is a high cost to releasing, then you should of course do it rarely. But I don't see why you can't do PureXp in a ShrinkWrap? environment as long as you have a good product manager (as your customer) and a long release cycle.


It seems like some people are reading "the simplest thing that could possibly work" to mean "the laziest thing" or "the shoddiest thing". I read it pretty differently.

One example that someone mentioned at XpUniverse was storage. If I'm building an app that at first needs to save a single value between runs, I don't start with replicated Oracle servers and an O/R mapping package just because I can imagine future needs that would be solved if I did that. Instead, I start with writing to a file.
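
As a sketch of that starting point (Python, hypothetical file name), the entire "persistence layer" for a single value can be a handful of lines:

 # The simplest storage that could possibly work for one value.
 def save_counter(value, path="counter.txt"):
     with open(path, "w") as f:
         f.write(str(value))

 def load_counter(path="counter.txt", default=0):
     try:
         with open(path) as f:
             return int(f.read())
     except (IOError, OSError, ValueError):
         return default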

Consider an analogy. If you need to take some pictures at a Christmas party, you could run out and blow a few grand on a fancy digital camera, a printer, supplies, and so on. After all, what if you want to publish these photos in a magazine? What if you need to blow them up to wall-poster size? Or if you decide later that you want to send copies to everyone at the party, 50 prints times 50 people is very expensive, so by getting a great digital camera and a box to use as a web server, you're probably saving money, right? Especially since you are sure you'll start taking a lot of photos once you have a good camera.

The truth is that most people for most uses can get by happily with an $8 disposable camera. Taking the "simplest thing" approach means that you start by getting that disposable. If you find that you're buying them once or twice a month and are getting to be a better photographer, then maybe you want to move up to a cheap digital or 35mm camera. And if you grow out of that you can keep moving up.

Sure, if you eventually end up buying the $5000 set of gear, then the money on the interim steps is wasted. But how many people reach that level?

One reason "the simplest thing that could possibly work" seems wrong to people is that it's much easier to notice when things break than when they are overengineered. Plus, complicated things are interesting and challenging. But given the ridiculously large number of software projects that run so far over budget and schedule that they are cancelled before shipping, it makes sense to start small.

-- WilliamPietri


Sometimes, "the simplest thing" is in fact equal to "shoddy", even if you reach the local optimum (because you're more likely to miss the global optimum). A perfect example is the use of Excel and its macro system as a substitute for a real database and programming language. In perfect keeping with the DoTheSimplestThingThatCouldPossiblyWork "philosophy," a couple of Excel formulas and a little VBA is very often the very simplest thing that could conceivably work, for now. But, as soon as your problem scales up in any way whatsoever, you are making a fatal mistake if you respond in any way other than rewriting everything from scratch in a real programming language, on a real database.

However, the DoTheSimplestThingThatCouldPossiblyWork philosophy directs us to incrementally add hack upon hack to our VBA macros, throwing in a little AccidentalComplexity every time (but it's locally the simplest change that could possibly implement each new requirement, and therefore rewriting it is always too much work).

The problem is that as soon as you hit Excel's arbitrary limitations (for example, the hard limit of 65,536 rows in any Excel spreadsheet), you're guaranteed to need to rewrite all of your code from scratch. The longer you stick with the Excel system because it was the local optimum, the more functionality you will have to re-implement.

The VBA code within Excel is perfectly non-reusable and unportable, and by the time your spreadsheet is big enough to hit the limits, the language limitations of VBA will have forced you to introduce so much AccidentalComplexity that you would have been better off (both in terms of functionality and overall simplicity) starting with a good design from the beginning, even if it didn't look like the simplest thing that could possibly work at first.


I like to find the 3rd option. I want the option people have missed that will solve all the issues, both for now, and for later. I don't start coding until I find that third option, which should be simple to implement, and provide for growth and new features later. To continue the photographer analogy, I'll give you an example that I recommend to people. Everyone wants to take their pictures, develop them, scan them, etc, etc and they want to know what cameras and scanners and software and such to get. Of course, few people understand that the scanner you need to view a picture on your screen is relatively cheap. You don't need a 32 bit scanner that scans at 1200 dpi!! You aren't doing professional publishing for print of highly detailed artwork. It's a picture of your family! So, I ask if they have a camcorder .. most people do .. for $20-$50 you can get a small NTSC capture device that will pull stills (or videos) off your camcorder .. some are even as simple as USB.

So, I tell em to take their camcorder and take different angles and such with it - you don't have to keep it on, but get a bunch of small video clips. Then, you can connect the camcorder to your computer and pull individual frames off the system to your PC. The resolution will be as good as a TV, not the greatest, but considerably better than most web cams and the cheaper digital cameras, and the number of shots they can get is nearly unlimited. Software can print them decently or put them on the web, and they already have 90% of what they need.

I think this approach can be applied to software design. Find a way to use existing components that integrate well, and can be implemented quickly and efficiently. I think this is what it means to take a Simple approach. It's about the design, not about lazy programming. You always have to consider future growth. In programming, to use the example above about using flat files over an Oracle database, you would abstract the data reads and stores in their own methods, and implement them as flat files. In the future, you keep the interface but can re-implement those methods to do your Oracle stuff later. Part of simplicity is a set of abstractions to abstract away the details, and then implement those details as simply as possible, and always try to find the 3rd option that gives you the best of both worlds with fewer compromises.

-- EvanLanglois
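
A minimal sketch of the abstraction Evan describes (Python, invented names): every read and store goes through two methods, so the flat-file bodies could later be swapped for database calls without touching the callers:

 import json

 class SettingsStore:
     """All reads and writes go through these two methods."""

     def __init__(self, path="settings.json"):
         self._path = path

     def load(self):
         # Today: a flat file. Later: this body could issue SQL instead,
         # while every caller of load()/save() stays the same.
         try:
             with open(self._path) as f:
                 return json.load(f)
         except (IOError, OSError, ValueError):
             return {}

     def save(self, data):
         with open(self._path, "w") as f:
             json.dump(data, f)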


Evan, these abstractions you talk about do not match the "simplest thing" criteria discussed above. Spend time implementing a file I/O abstraction when it becomes necessary, not just on the chance that you might have to change to an Oracle database at some point in the future.

It's about noticing what is liable to change. If you have set up a flat file and the user wants it changed to a database, change it to a database but be aware that the user is likely to change his mind again. This is the point where it might be worthwhile to implement an abstraction layer.

Developing abstractions upfront is a recipe for disaster, because YouArentGonnaNeedIt.

-- MattRyall

Not quite... the time to abstract is when it makes the code simpler. Using the file to oracle example, the time to abstract is the second time you enter the code 'open "myfile.txt" for update ... doSomething() ... close "myfile.txt"'. In other words, when you make a duplication. Once you remove the duplication, you find that you've created an abstraction, which will be easy to read and use, and whadda-ya-know, it makes swapping to oracle, or an oodb, or super-reliable memory, a snap!
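
In rough Python terms (an invented example, not the original code), pulling out that second copy of the open/close boilerplate might look like this:

 # First use: open, work, close - written inline.
 # Second use: the same open/close boilerplate appears again, so the
 # duplication is factored out and an abstraction falls out of it.
 def with_data_file(work, path="myfile.txt"):
     # The open/close boilerplate, written once instead of at every call site.
     with open(path, "a+") as f:
         f.seek(0)
         return work(f)

 # Both former call sites now shrink to one line each, and swapping the
 # storage behind with_data_file later touches only that one function.
 with_data_file(lambda f: f.write("logged a line\n"))
 lines = with_data_file(lambda f: f.readlines())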

Except in cases where you should have realized this was likely to happen in the first place and used, say, ADO.NET or ODBC which would take no extra effort at the time, and then no effort at all down the track (except for testing with a new connection string). In other words, planning for an uncertain future does not always add complexity or overhead.
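
One way to picture that point (a sketch only, with Python's standard DB-API standing in for ODBC/ADO.NET): code written against the generic interface from day one mostly needs a different connect call, not a rewrite, when the backend changes:

 import sqlite3

 # Written against the generic DB-API from the start; moving to another
 # DB-API driver later is mostly a different connect() call and
 # connection string, not a rewrite of the calling code.
 def fetch_customer_names(conn):
     cur = conn.cursor()
     cur.execute("SELECT name FROM customers")
     return [row[0] for row in cur.fetchall()]

 conn = sqlite3.connect(":memory:")
 conn.execute("CREATE TABLE customers (name TEXT)")
 conn.execute("INSERT INTO customers VALUES ('Ada')")
 print(fetch_customer_names(conn))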

Of course, there are always examples and counter-examples. Being sensible and knowledgeable enough to know the options and draw the line is the key.

I think in the past I erred on the side of over-engineering in my area of work because I was trying to become established alongside permanent employees who were incompetent (for some reason, in some places it is harder to get permanent work when you are competent). For some of the people I worked with, their mantra was a misinterpreted form of YouArentGonnaNeedIt and DoTheSimplestThingThatCouldPossiblyWork.

In one case, the application that ended up in production only "worked" (and I use the term with some reservation) because the users were secretly bypassing the interface and hacking the database directly when it stuffed up, and were spending half a week every fortnight staying back on overtime manually fixing the thousand duplicate records that had glitched into existence (which also meant recalling an order for a thousand duplicate cards from the local ID card factory, or, worse yet, paying for them and shredding them). This is no exaggeration. It really happened, and on a system that had implications for national security and international trade (say no more).

