One common trade-off in any sort of economic analysis (and engineering is a form of economics) is the trade-off between the predictability of a system/process and its performance. It is rather common that a system/process has some parameter we seek to optimize (return on investment, speed, memory use, completion time, etc.), and that the final result cannot be predicted with absolute certainty. Instead, we can estimate it in terms of a probability distribution.
In many such cases, by design variations or other decisions, we can alter the probability distribution. However, it is quite common that a decision which optimizes the mean (expected value) for the parameter will increase the variation (expected variability) of the parameter, and vice versa--we can reduce variability with tighter control, but that moves the mean in the opposite direction of where we want it.
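As a minimal sketch of this trade-off (the two "designs" and their distributions below are purely hypothetical), consider a completion-time parameter where one design has the better mean and the other the tighter spread:

    import random

    # Hypothetical completion times (seconds) for two designs; the numbers
    # are illustrative only. Design A optimizes the mean; design B tightens
    # control at the cost of a worse (higher) mean.
    random.seed(42)
    design_a = [random.gauss(10.0, 4.0) for _ in range(100_000)]
    design_b = [random.gauss(12.0, 0.5) for _ in range(100_000)]

    def summarize(name, samples):
        samples = sorted(samples)
        mean = sum(samples) / len(samples)
        p99 = samples[int(0.99 * len(samples))]  # a near-worst-case bound
        print(f"{name}: mean={mean:.2f}s, 99th percentile={p99:.2f}s")

    summarize("Design A (better mean, more variation)", design_a)
    summarize("Design B (worse mean, less variation)", design_b)

Design A wins on the average; design B wins on the near-worst-case bound. Which is "better" depends entirely on the requirement.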
This trade-off between predictability and performance is a key design decision, one which is often overlooked (or argued by platitude from an absolutist position).
Examples:
Alternate View
I find it hard to conceive that there is benefit to unpredictable high performance, and I do not see how the above examples support that premise. Performance without predictability is merely a lottery; if the cost is low and the reward very high, one may decide to play, but in most situations performance without predictability is of no value.
If one is primarily concerned with the average case as opposed to a bound on the worst case, and the two do in fact trade off (in many cases, they do not), there are times when going for performance is appropriate. An RDBMS, for instance, is typically optimized for maximum throughput given a particular pattern of queries; RDBMS vendors do not optimize for the worst case. Microprocessor caches have already been mentioned--they noticeably improve CPU performance, despite adding considerable variation to the time required to execute any given instruction.
Of course, both schemes can (potentially) be foiled; it is possible to write programs with poor locality of reference, which consequently perform worse with the cache than they would without it.
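As a minimal sketch of that effect (a hypothetical micro-benchmark; absolute timings vary by machine, and interpreter overhead dilutes the cache effect considerably), the same elements are summed twice, once in cache-friendly order and once in a scattered order:

    import random
    import time
    from array import array

    N = 5_000_000
    data = array("l", range(N))  # much larger than typical CPU caches

    sequential = list(range(N))
    scattered = sequential[:]
    random.shuffle(scattered)    # same indices, poor locality of reference

    def walk(indices):
        total = 0
        for i in indices:
            total += data[i]
        return total

    for name, order in (("sequential", sequential), ("scattered", scattered)):
        start = time.perf_counter()
        walk(order)
        print(f"{name} walk: {time.perf_counter() - start:.2f}s")

Both walks touch exactly the same elements; the scattered one merely defeats the cache's assumption of locality, so it typically runs slower.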
Having designed real-time systems, both on the hardware and software sides, I can assert that designing for the worst case does not imply anything about the average case, the best case, or the distribution. "Worst case" is a subjective set of conditions under which one wants to ensure a specific level of operation. The alternative is to permit failures, and it would be a hard sell to push the idea of having high performance except when the system fails.
Predictability must precede Performance. If one reads Dr. WilliamEdwardsDeming, he asserts that if one has an unpredictable system, one improves it by eliminating identifiable causes of degradation and repeating identifiable causes of improved performance. Once one has established a stable system, improvement comes only through experimentation, and the results of an experiment will be unpredictable in terms of both performance and variation.
The important point is: know your requirements. In the context of manufacturing, I agree with you--I cannot think of any instance in manufacturing where performance would be preferred; but the economics of manufacturing dictates that result. (Were the cost of testing, rework, and/or scrap negligible, you might have a different case, however.) Also, keep in mind that by "predictability versus performance" I do not mean "correctness versus performance"; I'm not suggesting that that is an acceptable trade-off at all. If a less-predictable design might lead to an incorrect design, then the trade-off no longer exists; you must choose correctness over any other concern.
Caveat: Not everything advertised as reducing variation actually does so.
There still does not appear to be a justification for performance over predictability; where is the value of performance if it is not predictable? Yes, it is a case of "know your requirements"; but how many real-world software systems have a performance requirement that only needs to be met on occasion? Within a given context, performance requirements must be met; there is no room for unpredictability. I still fail to see any support for the statement that a trade-off between performance and predictability is common, nor even that it is meaningful.
The terminology used above is not precise, that's all. Typically, systems are at least statistically predictable, but usually only hard real-time systems are predictable in the sense of having exactly the same performance every single time on every single operation. [It may be valuable to correct the terminology to communicate the message accurately. The discussion seems to switch between "predictability", some notion of range of variation, and mean values, making the message quite unclear.]
Use of e.g. a CPU cache leads to stochastic behavior, which can be contrasted with "predictability" if the audience understands what you're talking about.
Try another analogy: You have two options: 1) I give you $1000 at the end of next week. 2) I toss a coin at the end of next week and give you either $1000 or $2000, depending on the outcome. I'd guess that you'll ask me to toss the coin: nothing to lose, right? Well, in some circumstances, there's plenty to lose. For example, under the American tax system's "AMT", marginal tax rates can become much greater than 100%. If you were getting close to the cutoff, you might prefer the guaranteed $1000, because then you know how much money you're going to get.
How does this relate to predictability and performance? You have described a very predictable situation. Also note that if my requirement is for $2000, neither option is acceptable; if my requirement is for $1500, option 2 is still probably not acceptable and option 1 is not acceptable; if my requirement is for $1000, either option is acceptable. I can only evaluate the options based on their predictability.
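As a minimal sketch, the arithmetic behind that evaluation (payoffs and requirements taken from the analogy above):

    # Each option is a list of (probability, payoff) pairs.
    option1 = [(1.0, 1000)]               # the guaranteed $1000
    option2 = [(0.5, 1000), (0.5, 2000)]  # the fair coin toss

    def expected_value(option):
        return sum(p * payoff for p, payoff in option)

    def chance_of_meeting(option, requirement):
        return sum(p for p, payoff in option if payoff >= requirement)

    print("Expected values:", expected_value(option1), expected_value(option2))
    for requirement in (1000, 1500, 2000):
        print(f"Requirement ${requirement}: option 1 meets it "
              f"{chance_of_meeting(option1, requirement):.0%} of the time, "
              f"option 2 meets it {chance_of_meeting(option2, requirement):.0%}")

Option 2 has the better mean ($1500 versus $1000), yet both options meet a $1000 requirement with certainty, and neither reliably meets $1500 or $2000--the mean alone cannot tell you whether a requirement will be met.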
Somewhere there's a semi-related subject about how business prefers predictability over high performance but possibly volatile or hard-to-replace employees and work-processes, but I cannot re-find it after googling and bing-ing. It's similar to PlugCompatibleInterchangeableEngineers, but also focuses on processes. It might be PowerfulTechniquesAreRisky, but that doesn't sound right either.
If a less-predictable design might lead to an incorrect design, then the trade-off no longer exists; you must choose correctness over any other concern.
This is not necessarily the case. Consider a computer manufacturer deciding what type of RAM to install in a computer for a cost-conscious customer. RAM with error correction and buffering works more "correctly" than RAM without such features, but the better cost (the "performance" here) of the latter may make it preferable to the customer--they may be willing to accept the occasional flipped bit in exchange for a lower price (and usually better speed). This would seem to be true for many physical goods--the more reliable (correct) its operation, the more it is going to cost.
CategoryProcess (sorta, maybe needs thinking about, eh?)