Analysis Paralysis

Analysis Paralysis is the term for the situation where a team of otherwise intelligent and well-meaning analysts enters a phase of analysis that ends only when the project is cancelled. [Warning: cynicism]

Frequently in such situations, designers and developers are already staffed but have no work to do. They are given busy work and training just to keep them from quitting before the analysts deliver. This waste is often the very reason the project is cancelled.

Common causes are:

interject: Yes to BigProjectSyndrome especially! This seems like a big problem in highly technical crowds, or where there is less client pressure... Being a hopelessly impractical academic myself, I often fall victim to this, and try to conceive of the most flexible architecture that the world has ever seen - but then the project fizzles because it is too hard or too impractical. Does anyone have a good pattern, paradigm, or mantra to help us let go of our visions of the UltimateArchitecture? -- RusHeywood

In one project (no names mentioned), I found a fairly large and bright team of people who were underqualified in OO analysis and design methods, and who spent well over a year producing only a few slim documents (which they'd obtained from an outside source just weeks before their presentation). Analysis Paralysis had set in, and nothing had been done. The project was cancelled as a "failure", creating a negative emotional association with OO methods in the company.

Whatever the particulars, analysts go into analysis...but they never come out.

When the going gets analytical, the analytical turn pro.


Keep anyone playing with paper for a while and they start to feel it. They work and work ... and see no product. They begin to lose sight of why they're working in the first place. No feedback means no quality in what they do. The sheer abstraction of their work means the only testing it gets is against other analysts' models - and then the analysts start trying to integrate their models...

Months pass. Management gets concerned and starts calling meetings. The developers, to justify the time they've spent, wrap the managers up in a GordianKnot of diagrams and charts. The meetings become the project. Everyone's busy, everyone can see the deadlines looming, but no one has the courage to break the gestalt by actually doing some implementing...

Solutions:

 -- PeterMerel

Peter: Wonderful! -- RonJeffries

Peter: You are definitely right! -- MauricioNaranjo?

Peter: Thanks for writing it so clearly. Can someone tell me how to convince a client that the project should have small incremental cycles, starting with a quick prototype and/or proof of concept? -- Sanjay M

Peter: "cycle"? What's a "cycle"? Don't tell me its one of those things programmers do to waste time --PointyHairedBoss


"Building a bigger model doesn't add knowledge - it destroys knowledge."

Thanks so much for saying that. Last week, in an architect's shop in Corsica, I saw a model of a villa the architect had designed. It was beautiful. I assume it was very useful in imparting knowledge and gaining feedback from the customer in advance of building. Software "models" somehow seem to do exactly the opposite - unless they're both executable and usable.

The problem with software models: you have to keep an adequate granularity in any given model. Then you decompose it into parts, which you again model at an adequate level of detail. By the time you're finished doing so, you have just finished writing the source code.
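To make that point concrete, here is a minimal sketch (Python; the order-fulfilment domain and every name in it are invented for illustration, not taken from any project mentioned on this page). The coarse "model" is a function calling unspecified parts; each round of modelling a part at finer granularity simply produces more code, until nothing is left to implement:

 from dataclasses import dataclass

 # Level 1 of the "model": the whole process, no internal detail.
 @dataclass
 class Order:
     lines: list  # (sku, qty) pairs

 def fulfil_order(order):
     items = pick_items(order)
     parcel = pack_items(items)
     return ship_parcel(parcel)

 # Level 2: decompose one part and model it at finer granularity.
 # Notice that the finer model is simply more source code.
 def pick_items(order):
     return [(sku, qty) for sku, qty in order.lines]

 def pack_items(items):
     return {"contents": items}

 def ship_parcel(parcel):
     return "shipped %d line(s)" % len(parcel["contents"])

 print(fulfil_order(Order(lines=[("SKU-1", 2), ("SKU-2", 1)])))

Keep decomposing pick/pack/ship this way and the "finished model" is the finished program.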

"Employ a professional architect. Just one architect is what you want - never more than that ... He's not responsible for analyzing the project's requirements either. He's responsible for providing generic tools to coordinate and support the other developers."

Hmm. Does the architect, so defined, make decisions on "infrastructure" without reference to "requirements"?

Maybe, maybe not (it isn't specified therein), but you need him all the same. Obviously he should be somebody who has a lot of development experience... -- DanielKnapp

[How can building a bigger model destroy knowledge?!? I'm confused by this statement.]

When you go into too much detail, many more questions arise. You can't answer them all. Or they seem to have a good answer only if you give up some assumption of the previous, smaller model. Maybe you should give it up - so maybe you didn't know enough to "correctly" build the first model. Suddenly, you no longer know what you previously thought you knew. As I see it, that is why a bigger model "destroys knowledge". Of course, you can get the very same result while coding the first model, then trying to code in some more detail. I always thought that it is better to change the model than the code. People here seem to think that it is better to refactor the code than the model. Indeed, at least that way you can test what you think you should change (and that is why I love this page:-) I still get into AnalysisParalysis:-) -- OmCandea
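To illustrate that last point - refactoring code, unlike reworking a paper model, gives you something to test - a minimal sketch using Python's standard unittest module (the function and its numbers are made up for the example):

 import unittest

 def total_price(quantities, unit_price):
     # Before refactoring, this body read:
     #     total = 0
     #     for q in quantities:
     #         total += q * unit_price
     #     return total
     return unit_price * sum(quantities)

 class TotalPriceTest(unittest.TestCase):
     def test_behaviour_unchanged(self):
         # The test pins the behaviour down, so the rewrite above
         # can be made with confidence - feedback a paper model
         # never provides.
         self.assertEqual(total_price([1, 2, 3], 10), 60)

 if __name__ == "__main__":
     unittest.main()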


Ron, that may or may not be the case. I think it all depends on how the analysis is done. It's true that you can't figure out all the details beforehand, because if you did, you'd actually be coding in English, which is a mess. The analysis should be limited to a broad scope, or be done as an overall picture plus the game strategy. You define what the puzzle probably looks like, you identify your gray areas and your puzzle's areas in green (grass?), and then you start coding (trying to actually fit the pieces, maybe first separating by colors and other signs (strategy), then trying to complete some small areas). The puzzle analogy is only two-dimensional, but I think it's useful. If you have a puzzle with 200 pieces, things may be easy and no analysis is required. You start trying to fit things into place, and refactor (done areas) until you have the puzzle solved. But if it's a 3D puzzle of 5000 pieces, and some surfaces have different rules for fitting, then you need some vision and some strategy. You don't care about implementation details or the little details, and certainly the working "papers" shouldn't be longer than 10 pages. If they are, then I'd say time was wasted. -- FedeRico?


That's such an intriguing idea, I'd like to elaborate on it for a moment... There are details at any level of resolution in a model. Perhaps he meant: choose a level of detail for your model that doesn't make it so thick with detail that you can't see the forest for the trees. Sometimes projects with many pieces have specific rules for how those pieces interact that need to be understood. You could look at it like a top-down diagram: on level one, you can see only the entirety of the 3D puzzle as a whole (a huge penguin, for example), with no distinctions between parts. After breaking it down, on level two you notice that there is a distinction between the fins, the beak, and the rest of the body. The next distinction you might make happens within those previous distinctions: we distinguish between colors and textures. The making of distinctions can continue infinitely (in physical reality, anyway; because of the granularity of computers, perhaps there is a limit when making distinctions in software); the art consists in knowing when to stop. It's probably similar to the art of deciding when a bit of code should become its own function. -- ScottWilliams
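As a small illustration of that last sentence (Python again; the penguin follows ScottWilliams's example, and all names are invented): each function holds one level of resolution, and the decision not to split describe_part any further is exactly the "knowing when to stop":

 # One level of resolution per function: the top level is the
 # "level one" view; the helper holds the finer distinctions.
 def describe_penguin(parts):
     return ", ".join(describe_part(p) for p in parts)

 def describe_part(part):
     # Colour and texture distinctions would live one level down;
     # stopping here, rather than splitting further, is the
     # judgement call described above.
     name, colour = part
     return "%s %s" % (colour, name)

 print(describe_penguin([("beak", "orange"), ("fin", "black")]))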


There are two key issues here: level of detail and SoftwareBlueprints. -- BobBockholt


Risk avoidance: Exhaustive analysis appears to offer the hope of being able to make risk-free decisions. Unfortunately, there is never a point when everyone is comfortable that they know enough, so another round begins. -- ScottParnell?


This seems to be part of BigDesignUpFront.


AnalysisParalysis - nicely put. Does it qualify as an OxyMoron? Not having clear goals is the root cause. Any lack of clarity in the actual goal will eventually cause failure. In the analysis phase, it's paralysis; if the project manages to get into the development phase, it's the path to destruction! -- ArunPrakash?


See TheCurseOfXanadu for an extreme example of this (and several other management problems) in action.

See also: AntiPattern, AnalysisSmells, ShakespeareanAnalysis

CategoryAntiPattern CategoryAnalysis

