A tool or technique that makes the first 80% of a project easy but the last 20% very difficult or impossible. For example, 80% of the instances follow a nice pattern, but the other 20% deviate in unpredictable or seemingly random ways that break or gum up our framework. RAD and boxed solutions often suffer from this when new requirements don't fit what the original designers anticipated.
Another way of saying this is that often 80 percent of occurrences or instances may fit a relatively nice, clean, simple abstraction. Yet the other 20% does not fit the nice abstraction, so we have to find uncomfortable work-arounds or clumsy adjustments to our model. See AllAbstractionsLie.
One might, then, qualify languages and frameworks by how well they handle that last 20%. I.e., is it easy to create and integrate a companion model or component that can handle another 80% of that last 20%? Many languages do well by supporting two mechanisms for integrating components rather than relying on just one. These paired paradigms, well chosen, pick up another 80% of the remainder, cutting the 80:20 rule down to a 96:36 rule. We may be able to do better even than that.
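To make that 96:36 arithmetic concrete, here is a small sketch (plain Python, added for illustration; not from the original discussion) that applies the 80:20 split repeatedly: each pass picks up 80% of the remaining results for 20% of the remaining effort.

 # Sketch: applying the 80:20 split recursively.
 def coverage(passes, hit=0.8, cost=0.2):
     results, effort = 0.0, 0.0
     remaining_results, remaining_effort = 1.0, 1.0
     for _ in range(passes):
         results += hit * remaining_results
         effort += cost * remaining_effort
         remaining_results *= (1 - hit)
         remaining_effort *= (1 - cost)
     return results, effort

 for n in range(1, 4):
     r, e = coverage(n)
     print(f"{n} pass(es): {r:.0%} of the results for {e:.0%} of the causes")
 # 1 pass(es): 80% of the results for 20% of the causes
 # 2 pass(es): 96% of the results for 36% of the causes
 # 3 pass(es): 99% of the results for 49% of the causes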
YinYangVersusSinglism is perhaps related.
Economics
The 80:20 rule originated from Vilfredo Pareto, an Italian economist who studied the distribution of wealth in a variety of countries in the late 1800s. He discovered a common phenomenon: about 80% of the wealth in most countries was controlled by a consistent minority - about 20% of the people. Pareto called this a "predictable imbalance".
Some versions of the story say that Pareto was also an avid gardener, and that while gardening he later observed that 20% of the peapods in his garden yielded 80% of the peas that were harvested.
In about 1941, Pareto's writings were discovered by Dr Joseph Juran, and popularized under the name "Pareto Principle". Juran was working in the field of quality control, and stated the principle in the form "80% of problems come from 20% of causes."
The 80:20 rule has been extended far beyond its origins in economics and quality control. One might quibble about the 80% or 20% (it is sometimes 60:40 or 90:10), but the term is normally used to reflect the notion that most of the results (of a life, of a program, of a financial campaign) come from a minority of the effort (or people, or input).
This principle is essentially neutral. If 80% of the benefit has been delivered after 20% of the effort was exerted, then: Huzzah! Big win. On the other hand: completing the remaining 20% will take four times as long again as getting the first 80% finished did, goddamnit!
The EightyTwentyRule is in practice used to refer to almost any kind of somewhat skewed distribution of anything at all. It doesn't have to be a Pareto/Bradford distribution; it doesn't have to be 80:20, and it especially doesn't have to have been measured.
Let's see just how broad this set of distributions is. We can view any distribution as a nondecreasing function that passes through (0, 0) and (100%, 100%). In practice, people often refer to anything that is more skewed than 60:40 as following the "80:20 rule". It can be skewed in either direction. So draw a box with corners at (40%, 60%) and (60%, 40%), and consider all of the nondecreasing functions that avoid the box.
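As a rough sketch of that criterion (illustrative Python, added here; it checks skew in one direction only), one could test whether the biggest contributors clear the box:

 # Sketch: do the top `causes` fraction of contributors account for at least
 # the `effects` fraction of the total?  (One-sided version of the box test.)
 def is_skewed(shares, causes=0.4, effects=0.6):
     shares = sorted(shares, reverse=True)              # biggest contributors first
     top = shares[:max(1, round(causes * len(shares)))]
     return sum(top) / sum(shares) >= effects

 print(is_skewed([50, 20, 10, 5, 5, 4, 3, 1, 1, 1]))    # True: top 40% hold 85%
 print(is_skewed([10] * 10))                            # False: perfectly even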
The actual Pareto distribution (http://en.wikipedia.org/wiki/Pareto_distribution) is more specific than this, but when was the last time you heard the "80:20 rule" being used to refer to something that had really been measured and found to satisfy a PowerLaw with just the right parameters? -- DavidSarahHopwood
This may just be the BellCurve? in disguise.
Maybe. Pareto's observation is that a histogram of wealth in a country will always fit under a curve where
 population/wealth = k * (1/wealth^E),  with 2 <= E <= 3, for some constant k

Recently, some French physicists have managed to derive this rule by considering an economy to be composed of a population that trade randomly, those trades having random outcomes (neither random distribution is uniform). This allows them to draw a close analogy between the behaviour of an economy and the behaviour of "ill-condensed" materials (e.g. glass). This analogy suggests that the exponent in Pareto's Law can be made larger, giving a more equitable society by raising the analogic temperature; this means having more people trading more often.
It also suggests that, even if one were to start with a uniform distribution of wealth, an economy close to a FreeMarket will eventually produce a Pareto distribution of wealth.
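As an illustrative sketch (a common toy exchange model, not necessarily the rule used in the papers alluded to above), random pairwise trades starting from perfectly equal wealth quickly produce a heavily skewed distribution:

 # Sketch: random pairwise trades starting from uniform wealth (toy model).
 import random

 agents = [100.0] * 1000                    # everyone starts equal
 for _ in range(200_000):
     a, b = random.sample(range(len(agents)), 2)
     stake = random.random() * min(agents[a], agents[b])   # random trade size
     if random.random() < 0.5:              # random trade outcome
         agents[a], agents[b] = agents[a] + stake, agents[b] - stake
     else:
         agents[a], agents[b] = agents[a] - stake, agents[b] + stake

 agents.sort(reverse=True)
 print(f"Top 20% of agents now hold {sum(agents[:200]) / sum(agents):.0%} of the wealth")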
The Brookings Institution has also developed an explanation of how these power law distributions come about and are maintained. ''The Emergence of Firms in a Population of Agents: Local Increasing Returns to Scale, Unstable Nash Equilibria, and Power Law Size Distributions.'' http://www.brookings.edu/reports/1999/06technology_axtell.aspx http://www.brookings.edu/~/media/Files/rc/reports/1999/06technology_axtell/firms.pdf
City sizes also follow a Zipf distribution, presumably for the same reasons that companies do. (Or perhaps companies are constrained by the sizes of the cities they are in, or vice versa.)
http://www.brookings.edu/~/media/Files/rc/reports/2006/02fixtopicname_axtell/csed_wp44.pdf http://dharrison.ba.ttu.edu/Real%20Estate%20Investments/Zipfs%20Law%20--%20QJE%201999.pdf
The latter paper proposes that "If cities grow randomly, with the same expected growth rate and the same standard deviation, the limit distribution will converge to Zipf's law." In particular, it is the result of applying a bell curve to the percentage growth rate of each city -- but having the variance of the percentage growth rate not be a function of the size of the city.
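A rough sketch of that mechanism (Gibrat-style proportional random growth with an arbitrary lower floor; all parameters here are made-up illustration, not taken from the paper):

 # Sketch: every city gets the same bell-curve percentage growth each period,
 # independent of its size; a small floor keeps cities from vanishing.
 import random

 cities = [1000.0] * 500
 for _ in range(2000):
     cities = [max(size * (1 + random.gauss(0.0, 0.05)), 1.0) for size in cities]

 cities.sort(reverse=True)
 for rank in (1, 2, 4, 8, 16, 32):
     print(rank, round(cities[rank - 1]))
 # Under Zipf's law, size is roughly proportional to 1/rank,
 # so doubling the rank should roughly halve the size.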
The first eighty percent of the project takes the first eighty percent of the time available. The second twenty percent of the project takes the second eighty percent of the time. ;-)
Here's how to be 16 times as productive as your co-workers. Always pick those jobs that appear to solve 80% of the problem, but only take 20% of the time, and leave the remaining 80% of the work to some honest but unambitious co-worker. Assuming 5 jobs that each take time t to complete, you alone will produce 5 * 80% = 4 jobs worth of problem solution in 5 * 20% * t = 1 * t time, while your co-worker will need 4 * t time to clean up behind you, delivering just 1 job's worth of problem solution. So your productivity is 4 jobs/t, while your co-worker's is 1/4 jobs/t.
In other words, being productive consists of a) recognizing which work produces maximum (perceived) impact, and b) being unscrupulous. -- AxelWienberg?
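The arithmetic above, checked as a throwaway sketch:

 # Sketch: the 16x "productivity" arithmetic from the comment above.
 t = 1.0                                                   # time to complete one whole job
 jobs = 5
 my_output, my_time     = jobs * 0.80, jobs * 0.20 * t     # 4 jobs' worth in 1t
 your_output, your_time = jobs * 0.20, jobs * 0.80 * t     # 1 job's worth in 4t
 print(my_output / my_time)                                # 4.0  jobs per t
 print(your_output / your_time)                            # 0.25 jobs per t
 print((my_output / my_time) / (your_output / your_time))  # 16.0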
This is especially unscrupulous when it becomes a middle-level manager's basis for corporate politics. It catches up with them, since companies need projects 100% complete eventually. At first it seems great, because the cost estimates are 20% of competing managers' estimates, but after a few projects it is noted that they are stalling at 80% complete.
I'm not so sure about whether "companies need projects 100% complete eventually". In my current job, I got in just a few months before the project (a large, public service portal) finished its "development phase" and went into the "maintenance/support phase". I - who like to think of myself as someone who is both a sysadmin and a developer - plunged into the mess without hesitation. Of course, the developers had only done 80% of the job, and the 20% left undone includes virtually everything that could have made the system easy to maintain and support. Documentation, for example. And of course there is just about 0% left of the budget (and only a few developers, only one having been on the original team) to fix this, yet we still have to maintain and support it. Don't leave supportability in the 20% on your projects, please! -- LasseHp
I'm sure that companies do not require projects to complete 100% eventually. In fact, my understanding of how this law is used in practice is that it allows various levels of management to cherry-pick the easiest work that will provide the greatest (apparent) results the fastest. This way of thinking is also then applied to sales, and to marketing. The law is often presented as a means of optimising something (like your time, your effort, your sales call-back lists, cost of sales or marketing and so on). When used in this way, it becomes a guarantee of mediocrity. When used in the context of competition in the marketplace, then the team that only aims at getting an 80% solution onto the market will beat the competitors in one-fifth of the time (80% of features in 20% of the time). Furthermore, most of the market won't even notice the difference (80% of users only require 20% of the features). Repeat and rinse. Eventually, someone realises that you can release 20% of the features in (let's say) 5% of the time, and 80% of the users will never know the difference. THAT product wins 80% market share. The 80% solution comes in at second place. The 100% solution is eventually open-sourced too late to make any difference, except to the diminishingly small percentage of people who still care. GlenelgSmith
My favorite variant: programs spend ninety percent of their time in ten percent of the code. Good news: in a hundred line program, you only have to tune the critical ten lines. Bad news: in a hundred thousand line program, you "only" have to tune the critical ten thousand lines. Gee, I'm glad we've got it narrowed down. :-) Seriously, in a hundred thousand line program, there's probably fewer than a thousand lines of code (perhaps copied and pasted and replicated in various places, unfortunately) where it "lives" from a performance point of view. That's not to say you're guaranteed to be able to significantly improve the performance of that thousand lines, but if it's theoretically possible, it's likely to be intellectually manageable. -- PaulChisholm
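A minimal sketch of how one might hunt for that hot code in practice (using Python's standard cProfile module; the workload functions below are made up purely for illustration):

 # Sketch: measure where the time actually goes before tuning anything.
 import cProfile

 def hot_loop():                       # hypothetical workload
     return sum(i * i for i in range(1_000_000))

 def cold_setup():                     # hypothetical workload
     return [str(i) for i in range(1_000)]

 def main():
     cold_setup()
     hot_loop()

 # The top few entries by cumulative time are usually the "ten percent"
 # worth looking at; everything else is noise for optimization purposes.
 cProfile.run("main()", sort="cumulative")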
This is of course recursive, so in that thousand-line subset you will get the biggest bang per buck by hitting a hundred lines of it.
This is a special case of Zipf's Law (see PowerLaw), which is clearer if you consider the above comment on it being recursive.
But consider that, of the time the program spends in those ten thousand lines, it spends ninety percent of it in ten percent of that code.
You then only have to find the thousand lines that make the most difference to that chunk of the code.
Just keep narrowing it down until you find a manageable chunk that makes a large difference to the speed.
It's entirely possible (and I've not tried to optimize any MLOC programs) that this doesn't scale up properly -- that is, you can't recursively find the most important lines of code.
From another perspective, if optimizing that 10% of the code doubles the speed of the program, as you recursively narrow your 10% down, you're only doubling the speed of the smaller pieces. The speed-up (when measured overall) might not be worth the effort of finding those critical lines.
Other rules such as AmdahlsLaw are much more useful in practice for deciding what to optimize. (The description at AmdahlsLaw concentrates on parallel processing; http://en.wikipedia.org/wiki/Amdahl%27s_law is more relevant.) -- DavidSarahHopwood
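For instance, a quick Amdahl-style calculation (illustrative numbers only) shows why recursively optimizing ever-smaller hot spots gives diminishing overall returns:

 # Sketch: overall speedup when a fraction p of the runtime is sped up by factor s
 # (Amdahl's law applied to optimization rather than parallelism).
 def overall_speedup(p, s):
     return 1.0 / ((1.0 - p) + p / s)

 print(overall_speedup(0.90, 2))    # hot 90% made 2x faster -> ~1.82x overall
 print(overall_speedup(0.09, 2))    # next hot spot (9% of total) 2x -> ~1.05x overall
 print(overall_speedup(0.90, 100))  # even a 100x win on 90% caps out near 9.2x overall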
Vender "dog and pony" demos often use this principle by showing you all the grand and wonderful things their product does out of the box or with minor configuration. However, what they don't or can't show is all the oddities and/or exceptions to the rule that cause their product to grind into a halt or turn it into a mass of pasta.
The EightyTwentyRule only applies 20% of the time. -- MichaelFeathers
The 80/20 rule also applies to politicians. They will spend 80% more of your money during the last 20% of their period of office (which of course is election time). [Evidence?]
See also TwentyEightyRule, AbstractionDeviationDomainSmell, TheRadBottleneck, BusinessPatternIrregularity