Do The Most Complicated Thing That Could Possibly Work

Leadership AntiPattern: the team guru, who is responsible for project infrastructure, crafts the most complicated way to make things work. As a result, the team works hard to understand the framework rather than use it. Furthermore, when changes are required, changing the 'complicated' thing takes a long time, paralyzing the entire team and forcing another iteration through understanding.

AKA the "JobSecurity Pattern". It creates more work and also gives the perpetrator an edge, because he/she may be the only one who knows how to navigate their own swamp.

Correction: ScapeGoat pattern


[Warning, Will Robinson!!] This is a total StrawMan. Sorry, but nobody sets out to make something more complex than it needs to be. It often ends up that way through mismanagement and incompetence.


I disagree. This is definitely the case at my place of work: we have inherited several complex frameworks from a guru. Most of the complexity is due to the guru building in flexibility where it isn't required.

Yep, I too have this anti-pattern. Our guru believes strongly in his own ability to predict the future, so tends to build lots of stuff that "we're certain to need", which then clutters the code up until we finally take it out because it's never used.

It's a sort of Darwinian exercise, isn't it? Inexperienced teams take out that "clutter" that is "never used", creating lots of consulting revenue for seasoned veterans who can pick the resulting pieces up off the floor. Jaded gurus add reams of voodoo-chicken code because they forgot why they had to do it once, and just keep adding it through force of habit. The trouble with "clutter" is that some of it is like personal hygiene - you can ignore it for a while, but if it becomes a way of life you eventually get really, really sick. A great fallacy of YAGNI is the assertion that the cost of "clutter" that proves unnecessary always exceeds the cost of adding missing functionality in the future.

Er, I'm not following your argument here, Tom. I added the confirmatory note that's italicized above. Perhaps I should expand and clarify. Your comment invites the inference that some proportion of guru-clutter is always valuable, but the inexperienced team pre-emptively takes it out, at the cost of paying some other guru to put it back later. Is that right?

I'm suggesting that there are code smells/problems that aren't immediately evident, and that don't show up in unit tests. One example is code that works today but will not work tomorrow - I call this a TimeBomb. The classic case is the Y2K stuff, but that's only a very obvious example. If the test designer isn't experienced enough to know and exercise the time-dependent portion (which many aren't), then it will pass the (inadequate) unit tests with flying colors. Another example is code that works until touched - I call this a LandMine. An example is an index or loop counter whose boundary checks don't work but aren't exercised (or don't exist at all - YAGNI, after all). A third example is patterns or habits that develop through experience - accessors are one. Another is whether and how back-pointers are handled when references are created. I'm suggesting that there is value in wisdom and experience, and that some of that value exists even when it is implicit rather than explicit. The implication is that when the inexperienced team pre-emptively takes out code that they don't understand, purely because they don't understand it - and especially if they carry a cultural bias against "gurus" - they risk incurring the added cost of re-inserting it later. -- TomStambaugh
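The LandMine described above can be sketched concretely. The function and tests below are a hypothetical illustration (all names invented for this sketch): a boundary case that the "adequate" tests never exercise, so the suite passes with flying colors until someone touches it.

```python
# Hypothetical "LandMine": a boundary bug that inadequate tests never
# exercise. In Python, items[-0:] is the same as items[0:], so asking
# for the last zero items returns the WHOLE list instead of an empty one.

def last_n_items(items, n):
    # Bug: correct for n >= 1, silently wrong for n == 0.
    return items[-n:]

# The tests everyone thought to write -- all of them pass:
assert last_n_items([1, 2, 3, 4], 2) == [3, 4]
assert last_n_items([1, 2, 3, 4], 4) == [1, 2, 3, 4]

# The untested boundary that detonates later:
assert last_n_items([1, 2, 3, 4], 0) == [1, 2, 3, 4]  # expected [], got everything
```

An experienced reviewer recognizes the `n == 0` case as exactly the sort of thing unit tests written by the original author tend to miss.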

Right or wrong, that's not my experience. My experience is that a guru, for instance, puts in features that are not shown anywhere as required. He puts them in because he's put his mighty brain in gear and worked out to his own satisfaction that they will be required. He believes himself smarter, and more knowledgeable about the customer's own domain than the customer is, so he feels entitled to do that.

Suppose the domain is the user interface of a financial application that must handle multiple currencies. The customer knows that they want to be able to accept input and display output in, say, Pounds, Dollars, and Yen. The user interface builder requires that the code specify a field width, including fixed-part and decimal-part, when the widget is built. The customer says "That's good enough. Just build it." Is it? It seems to me that a reason why an experienced developer is paid more than a green coder is that the experienced developer has an intuition, honed through experience, of things that need to be in the code - whether or not the customer understands their necessity. Sure, sure: suggest to the customer that they might have overlooked something; discuss it with them. Explore the possibilities. Don't just go ahead and spend their money on stuff you haven't convinced them they need, though. That's fraud. And anyway, this is orthogonal to whether or not you let the local guru implement the widget (whatever it does) such that no-one else can maintain it, which is what this page's title is about.

Then, because these unneeded features are implemented in an overly general, and usually highly "clever" way (he's a very bright guru), they become an ever increasing resource drain as they are maintained by the less bright team through subsequent releases (where they remain unused). Then, when a suitable breathing space comes along, the team refactor the unused, overly general stuff away, and go faster subsequently.

Until the customer reports that the system no longer works for those banks whose currency (like the yen) is always expressed as a whole number.
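The yen problem lends itself to a small sketch. The minimal formatter below is hypothetical (the currency table and names are invented for illustration); it shows why the decimal part has to be a per-currency parameter rather than a hard-coded two digits.

```python
# Sketch: format monetary amounts held in minor units (cents, pence, yen).
# A widget that hard-codes a two-digit decimal part works for dollars and
# pounds, then breaks the day a zero-decimal currency like the yen arrives.

DECIMAL_PLACES = {"USD": 2, "GBP": 2, "JPY": 0}  # illustrative subset

def format_amount(minor_units, currency):
    places = DECIMAL_PLACES[currency]
    if places == 0:
        return str(minor_units)              # yen: whole numbers only
    major, minor = divmod(minor_units, 10 ** places)
    return f"{major}.{minor:0{places}d}"     # pad the decimal part

assert format_amount(123456, "USD") == "1234.56"
assert format_amount(123456, "JPY") == "123456"
assert format_amount(5, "GBP") == "0.05"
```

Whether that parameter belongs in the first release is exactly the YAGNI argument this page is having.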

The great tautology of YAGNI is the assertion that adding currently out-of-scope features is always more expensive than not adding them. -- KeithBraithwaite

My point is that whether you describe it as a "tautology" or a "fallacy", it is simply not supportable unless one widens the definition of "out-of-scope", "features", or both. Customers do sometimes miss requirements that experienced developers see. Customers - especially naive and arrogant customers (they do exist, try selling software to the pharma industry) - sometimes do not have the discipline, patience, ability, or inclination to understand the implications of mistakes they are making. Professional developers do have some responsibility to not knowingly deliver code that will not work (see TimeBomb, above). This is what I meant by the "Darwinian" paradox. There is no pure answer, one side or the other. -- TomStambaugh

Yes, customers do sometimes miss requirements that experienced developers see, and yes, the developer has a professional responsibility to point this out, if they notice. I'm not sure that this has anything to do with the topic of this page, which I understand to be that the professional developer has a responsibility not to spend their clients' money on, for instance, building an all-singing, all-dancing, world-subsuming framework that solves the problem at hand (eventually, as a side effect) and a few dozen more, rather than solving the problem at hand first. -- KB


This is what I meant by the "Darwinian" paradox. There is no pure answer, one side or the other. -- TomStambaugh

Having had this argument before: the XP answer is still to put it to the customer and let them decide. Your experience stops at what you can convince the customer of. I never agreed with this, as I don't like being a coding robot, but there it is.

How would XP respond to the big pharma manufacturer who asserts that there is no need to put the PERL scripts for their data pipeline under source-code control because "it changes too fast to be practical"? The customer is always aware of what the customer is aware of - that is a tautology. But the customer is often utterly unaware of the sometimes devastating implications of their short-sighted and ill-informed technical decision making.

I think this is a reason why even XP is still resisted in many quarters. It (like some of the PairProgramming material) smacks too much of dogma to be accepted in many organizations. -- TomStambaugh


[...] when the inexperienced team pre-emptively takes out code that they don't understand, purely because they don't understand it [...]

We seem to be talking at cross purposes. I don't see anyone claiming that anyone should remove anything "purely because they don't understand it." What I, and I believe others, are claiming is that we shouldn't put in things we don't understand purely because some grey-beard asserts that we should. -- KB


How would XP respond to the big pharma manufacturer who asserts that there is no need to put the PERL scripts for their data pipeline under source-code control because "it changes too fast to be practical"?

Well, if I were on that XP team, and if we were responsible for those PERL scripts, I'd put the scripts under source-code control. After first choosing a versioning tool good enough to handle the rate of change. Neither guru-osity nor YAGNIfication applies here. The professional responsibility to put the scripts under control is at least as strong as the responsibility noted above to remind the bank that Yen come in integers, no?


Yeah, I think we're basically on the same page. The version control system is a reasonable example of what I mean. The customer (a big Pharma) had a "software team" of around 10 Perl hackers, all under 25, who felt that I (the "graybeard") was forcing them to do complicated things by, for example, insisting that they think about version control and dependency management. The team was "managed" by a scientist who knew a lot about biology and almost nothing about software. The problem in such cases is that the manager doesn't know enough about software to make informed decisions, and the in-house team doesn't have enough experience to believe it matters. Yet the project was still interesting, and the money was good. I'm not, at all, arguing that complexity should be introduced for its own sake. I'm saying that sometimes I, as an experienced developer, know more about what's going to happen down the road than the customer does, and I have a professional obligation to act on that even when the customer doesn't understand why. A balance is involved, which is what I meant by "Darwinian" paradox - if I go too far towards arrogance, I lose by being the graybeard who introduces useless complication. If I don't go far enough, I watch a good client suffer because I failed to insist on obvious precautions (like version control). That's the dilemma.

I suppose it sets up a counterbalance to YAGNI - DoTheRightThing. -- TomStambaugh

I would hardly claim to be a greybeard, but I've been around long enough to agree that you probably 'are gonna need a version number in the header, or some other form of VersionControl. Even if you don't, the cost of putting it there is usually low enough not to worry. The cost of needing it when it isn't there can be much higher, and the price of that dirty hack that can almost always guess the version number can be worse...
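A minimal sketch of what "a version number in the header" buys, with all names invented for illustration: the writer stamps a format version into each file it emits, so a later reader can dispatch on it instead of resorting to the dirty hack of guessing.

```python
# Sketch (hypothetical names): stamp a format version into each emitted
# file so future readers never have to guess how to parse old ones.
import os
import tempfile

FORMAT_VERSION = 2

def write_record(path, payload):
    with open(path, "w") as f:
        f.write(f"# format-version: {FORMAT_VERSION}\n")
        f.write(payload + "\n")

def read_version(path):
    with open(path) as f:
        header = f.readline().strip()
    if header.startswith("# format-version:"):
        return int(header.split(":")[1])
    return 1  # the guessing fallback, needed only for pre-header files

path = os.path.join(tempfile.mkdtemp(), "record.txt")
write_record(path, "hello")
assert read_version(path) == 2
```

The cost of the header line is one line of code today; the cost of its absence is an unbounded amount of forensic guessing tomorrow.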

I'm not sure YAGNI wouldn't work fine in this example. If the development team hasn't started coding yet and they don't think they'll need VersionControl, well, more power to 'em, maybe we'll all be surprised. But most likely after a week or so they will encounter real problems some form of VersionControl might help with. Is it so costly to wait until then to introduce it? -- ChristianTaubman

A lot of this comes down to knowing how to listen for the right sort of feedback. A shocking number of web shops, for example, use no VersionControl on their code, and so after a year or so every site has pages like index.test.php and index.test-2001-04-05-new-db-schema.php lying around. But your junior programmer might not think this is even a problem that needs fixing; if you didn't know any better you might think wading through meaningless garbage files is just part of the job. Knowing when you are actually gonna need it is tricky at times, especially if you have team members with vastly different opinions on what's necessary and what's extraneous. -- francis

Yes, SeeingProblems is very important if you're going to follow YAGNI. If the team can't do that well they probably are better off following a guru's advice to DoTheRightThing. -- ChristianTaubman


[Note: This diatribe was written long before the FDA finally pulled the plug on this syringe infusion pump product, known as Baxter Syndeo. Everything I wrote here was true seven years ago and remained true right up to the time the FDA forced Baxter to recall all existing units in the field. -- MartySchrader, 24 Sep 10]

The XP guys often talk about "features" and "customer requests" and such things, as if each User Story can have a set of code to support it in a complete vacuum. Wrong-o. Tom's comments about a professional developer in the position of guru being responsible to his customer are exactly spot on. I have had many clients where I was the greybeard whether I was the oldest or not. In some gratifying cases the client and his staff listened to my advice and gained the benefit of my experience. In other cases the client went into the mode of Tell The Expert as opposed to Ask The Expert, and suffered as a result.

On my last medical instrument gig I was supposed to be developing instrument architecture as part of my assignment. I came up with a layered approach that was not overly complex, provided the command and control structure needed for both the immediate and projected needs of the client, and could be implemented right now. It also had the benefit of testability because each of the little hardware "modules" could be tested individually by sending requests to their "servlets." The client's staff and other contract types hated the "extra" work involved in sticking to my design and convinced the project technical lead to ashcan it.

Three years later (Jul 03) these guys are still bashing away at the product, there is no instrument architecture in place, and it is a piece of kaka. This is not to say that my work would have cured all the problems associated with that project -- far from it. There were so many management screwups that it would have required a team of water-walkers to salvage it anyway. But the fact that the client didn't take my advice on something they had asked me to do in the first damn place did not bode well for their continued development efforts.

Gurus aren't building empires to strengthen their own resume. They are trying to help the stupid-ass client out even when the client is determined to shoot off his own foot.

Some say YAGNI isn't a good reason not to build architecture if an architecture will help deliver the features the client has requested. What is it about this "features" business I can't seem to get across? This particular instrument I refer to simply will not work at all until they implement at least some of the command & control from my architecture design. This has nothing to do with making sure the client's feature requests are honored. It has to do with establishing a baseline of operation upon which you can safely add this or that feature. Do you think operating system designers worry about registering a handler for URIs in the user's workspace before they create an entire IP stack? This stuff has to be built in layers, and the architect/greybeard is the one person who has a handle on it all. Features come much, much later. Stop talking about features. Gurus build system solutions, and that's what I always expect to do from here on out.

-- MartySchrader

"This particular instrument I refer to simply will not work at all until they implement at least some of the command & control from my architecture design."

YAGNI doesn't mean, in this sort of case, "don't build any architecture". It means build in the first instance only as much of the architecture as is required to get the instrument working at all. You seem to be saying yourself that there is only some sub-set of the architecture that's necessary for any functionality at all.

So build that sub-set first. And be prepared to go back and change it. What YAGNI does say is, "don't build the whole architecture before building anything that uses any part of the architecture."

In this case the architecture defined the need for an Instrument Manager component that was -- and remains to this day -- completely missing. The original instrument design (which was created before I joined the team) was lacking in that there was no guiding intelligence to centralize the instrument's operations. Additionally, the architecture pointed out the need to separate functional components by their command and control interfaces, which were never even created. There were a bunch of standalone function blocks that did their business without knowing any other condition within the instrument. Each component simply sought its own stimulus signal and Bang! went on its merry way. The instrument was a pile of isolated units with no coordination and no overall controlling intelligence.

My architecture design separated the units and defined how their C&C interfaces should look to the "outside" world; meaning, everything not involved with that component. Since I never even got cooperation in establishing those interfaces the whole mess could not get off the ground.
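A toy sketch of the servlet-style C&C idea described above, with all class names invented for illustration: each module exposes a small command-and-control interface, so it can be tested in isolation by sending it requests, and an InstrumentManager supplies the central coordination that the original design lacked.

```python
# Hypothetical sketch of the layered architecture: modules present a
# C&C "servlet" interface to the outside world, and a central manager
# routes commands instead of each block reacting to its own stimulus.

class ModuleServlet:
    """The C&C interface every hardware module presents."""
    def handle(self, command):
        raise NotImplementedError

class PumpModule(ModuleServlet):
    def __init__(self):
        self.running = False
    def handle(self, command):
        if command == "start":
            self.running = True
            return "ok"
        if command == "status":
            return "running" if self.running else "idle"
        return "unknown-command"

class InstrumentManager:
    """The 'guiding intelligence' that was missing: routes commands."""
    def __init__(self):
        self.modules = {}
    def register(self, name, servlet):
        self.modules[name] = servlet
    def send(self, name, command):
        return self.modules[name].handle(command)

mgr = InstrumentManager()
mgr.register("pump", PumpModule())
assert mgr.send("pump", "status") == "idle"
assert mgr.send("pump", "start") == "ok"
assert mgr.send("pump", "status") == "running"
```

The testability benefit falls out directly: each module can be driven through `handle` on a bench, with or without the rest of the instrument present.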

However, while the point about implementing part of the architecture before the whole may be valid in general, it certainly doesn't fit this case. And by the way, here we are, three years after I started working on the instrument, and the client still doesn't have a working instrument or even a unified design. This is the first medical device project I have ever worked on that I would not want used on me.


Over at YouArentGonnaNeedIt, PeterMerel wrote:

Traditionally, ordering your implementation efforts has been done by subsystem - you implement everything in a subsystem's interface, and then move on to implementing the next subsystem, and that makes for a lot of useless overhead with a BigBang at the end where you do all your testing and QA. YouArentGonnaNeedIt is suited to an iterative or evolutionary development method whereby you do a fair amount of planning and architecture definition up front, and then implement by feature-set rather than by subsystem.

Which may explain part of the reason for disagreement here. XP would seem to rest on the assumption that even in large systems, subsystems are less planned in advance than extracted from whatever software satisfies a growing list of required features. -- francis

XP makes the assertion that, even in large systems, subsystems often enough turn out better when less planned in advance and more extracted from whatever software satisfies a growing list of required features. So much so that this becomes the default option.


In many programming environments, most notably enterprise application projects, programmers ARE pretty much implementing only "features" (or UserStories). The infrastructure (which the customer doesn't often care about) is already in place. Your database is OracleDatabase or InformixDatabase or SqlServer or MicrosoftAccess or whatever. You're either a Microsoft shop, a Unix shop, a heterogeneous shop, or something else. You're writing code in JavaLanguage, CsharpLanguage, VisualBasic, FoxPro, etc. Your component framework may be ComponentObjectModel, DotNet, JavaBeans, CommonObjectRequestBrokerArchitecture, or none at all.

But the point is - in such environments, the application developer has little control over the underlying plumbing. No need to develop a framework or a message passing model or anything else that is often the speciality of the guru. ExtremeProgramming, with its focus on features, works well in such environments. (Unless, of course, the application developers - and the customers - hear that a feature won't work because the ArchitectureWontSupportIt.)

Those of us who write embedded systems software, shrinkwrap applications, etc., often don't have that luxury (though as embedded processors become cheaper and more powerful, technologies that would previously have been considered too slow and bloated for embedded applications are now showing up in the embedded space). And here we do have gurus and greybeards, at least some of whom have been known to overestimate their software architecture skills, and/or assume that expertise in the customer's problem domain implies expertise in software architecture. I myself have been involved in designs which were way too complicated, and got my reward by having to maintain them much later. (Hopefully, I have learned something.)

engineer_scotty (ScottJohnson)


I was an architect/greybeard for shared system services. I wrote and/or maintained a homegrown database, message oriented middleware, networking, UI library, etc. Everything I wrote was there to provide a feature to a customer. My customers were other programmers. -- EricHodges


I worked at a place that lost a year's worth of work in the last big earthquake. I think they used the XP form of logic that said, "do we really need backups?"

That might be an XP question, it isn't XP logic. And the XP answer would be, "yes, you do really need backups".


A lot of the comments on this page seem to be talking past each other. Some have said "guru" as a sarcastic term for someone extremely arrogant but not really very good, others have meant "guru" to literally mean someone whose judgement is excellent. Both kinds of people exist, obviously, but there doesn't seem to be much recognition of that fact.

YagNi done wrong means that you will eventually need it after all. A true guru (rather than the sarcastic term) will recognize bad attempts at YagNi, and recommend or add features that others think are not necessary, but are in fact necessary.

A sarcastic-term guru will be clueless about "good" YagNi and add scads of unnecessary clutter. Both happen, but are not the same phenomenon.

A lot of the problem here is that judgements differ as to what "good" YagNi really is. Hindsight, at least, can tell us which was which - if we pay attention to learning from hindsight. Which theoretically is what non-sarcastically-termed gurus have already done.

There are always issues with clients, but the old cliché "the customer is always right" doesn't mean they will ask for what they need; it means that you should make them happy. Ideally that never means making them happy only in the short term and predictably unhappy long after you finish the job and leave for another gig, but in practice that may be unavoidable if the customer insists on something that will predictably be bad for them.

The classic formula for making clients happy is to give them both what they want and what they need, and to make sure they are aware that that occurred. We typically have a legal obligation to give them what they ask for (which may be quite different from what they want or need), and a moral obligation to give them what they need, which leads to conflicts when these three things are not in alignment.

Most of the time, however, if that happens, it's a reflection on our communication skills. If we're good enough at communicating, we may find ways to persuade the customer to want what they also need; no more and no less. This is difficult most of the time, not just some of the time, but one does the best one can, and tries to improve communication skills on an ongoing basis.

There are grey areas where one might slide in something that wasn't explicitly asked for but that you are certain will be needed, if it doesn't cost the client extra money, and if they didn't forbid it, but it's a lot cleaner to avoid those grey areas whenever possible. -- DougMerritt (signature added much later)


See: DoTheThingThatCouldPossiblyWork, DoTheMostComplexThingThatCouldPossiblyWork, JustDoIt, DoTheThingThatMightWorkWell, TheCustomersAreIdiots


EditText of this page (last edited September 24, 2010) or FindPage with title or text search