A Moore Year is a replacement for the less realistic CPU Year as a unit for long-term computations. You'll often see a complex calculation, such as breaking an encryption key, described as requiring some thousands of CPU years to carry out, based on current technology. But as a metric, "current workstation CPU years" is barely more useful than "mechanical calculator years", a favorite from the early days of computers, or "abacus years".
MooresLaw tells us that CPU power doubles roughly every 18 months at constant cost. So why would someone wait 1,000 years for a workstation calculation to complete when it could be computed in one year, 15 years from now? Fifteen years is ten doublings, a factor of about 1,000: a CPU from 15 years in the future can compute in a year what today's equivalent can compute in 1,000 years. Thus 1,000 CPU years = 15 MooreYears. A convenient approximation that you can do in your head is:
Moore Years = 5 log10(CPU Years)
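A minimal sketch of the conversion, in Python (the function names are mine; it assumes the 18-month doubling above and ignores the final year spent actually computing):

 import math

 def moore_years(cpu_years):
     # Years to wait until one year on a then-current workstation can
     # finish the job, given CPU power doubling every 18 months.
     return 1.5 * math.log2(cpu_years)

 def moore_years_approx(cpu_years):
     # The do-it-in-your-head version: 5 * log10(CPU years).
     return 5 * math.log10(cpu_years)

 print(moore_years(1000))          # ~14.9
 print(moore_years_approx(1000))   # 15.0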
[Graph: MooreYears (0 to 40) versus CPU years (1 to 100M, logarithmic scale)]

This shows that P=NP (see NpComplete) during the Moore Era, because as the graph shows, a calculation requiring exponential time, measured in CPU years, requires linear time in MooreYears. It also shows that a problem requiring 1 million CPU years to compute is far from incalculable. I hope to be able to carry out such a calculation on a workstation in my lifetime.
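To see the exponential-to-linear collapse numerically, here is a toy illustration (the problem sizes are arbitrary):

 import math

 # An algorithm needing 2^n CPU years takes only about 1.5*n MooreYears,
 # since 5 * log10(2^n) = 5 * n * log10(2), and log10(2) ~= 0.3.
 for n in (10, 20, 40, 80):
     cpu_years = 2 ** n
     print(n, cpu_years, round(5 * math.log10(cpu_years), 1))
     # prints 15.1, 30.1, 60.2, 120.4 MooreYears respectively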
This is analogous to inflation (well, deflation) and discount rates. A million instructions is "worth" more today than it will be in a year. Fortunately, a lot of interesting problems cost the same nominal number of instructions whenever you run them, so we can solve them for less money by waiting until CPU cycles are cheaper.
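A rough sketch of that deflation, assuming the cost per cycle halves every 18 months and ignoring real interest rates (the dollar figures are purely illustrative):

 def future_cost(cost_today, years_from_now):
     # Nominal cost of running the same fixed computation later,
     # if the cost per CPU cycle halves every 18 months.
     return cost_today * 0.5 ** (years_from_now / 1.5)

 print(future_cost(1000000, 15))   # ~977: today's million-dollar job, 15 years on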
The deflation of MIPS leads to the question "Where did the value of those MIPS go?" One can find articles written at any point during the Moore era with the theme: "Computers have at last become powerful enough to solve this important and valuable problem that we have been unable to deal with until now." Often there will be a picture of the research team posing around their new supercomputer, or sitting on it, if it was an early Cray. The pictures, and the team's enthusiasm for their new million-dollar supercomputer (the equivalent of a PalmPilot now), can be poignant.
But it makes you wonder: if it was worth several million dollars in 1970 to solve some problem using fewer MIPS than a PlayStation can give, was it really a good idea to invest those millions back then? Or to put it another way, assume it WAS a good idea, and that the calculation was worth millions. Why, now that anyone can do the calculation, aren't we all rich? What happened to the value of that calculation? Maybe expensive calculations are only valuable in that they go slightly beyond whatever the competition (in business, in the military, in scientific research) is doing. The actual result has no intrinsic value to justify the millions of dollars spent obtaining it, but only short-lived value as one of the first results of its type.
Now in the military, being the first to achieve something is vitally important. In business, it might be cost-justified if the opportunity is big enough. But in pure scientific research that is not trying to produce a monetary payoff under a deadline, it seems foolish to spend a lot of money for MIPS. Spend the money on people, travel, or whatever and make do with what we can compute on a desktop PC. Twenty years ago the best-funded scientists couldn't dream of having such power.
It's analogous to traveling from Europe to the Americas - the first one to do it was very important, because we couldn't get those things (spices, gold, whatever) until then. As the trip got easier, it was worth less, because there was less of a differential between the two sides, and because so many were doing it. --PeteHardie
I would also point out that, if everyone waited for technology to "catch up" to the problems of the day, then we would never get anything done. (This reminds me of an XkCd comic I read recently--after cryogenics was perfected, all the engineers froze themselves so they could wake up thirty years later to see all the cool advances, only to discover that nothing cool had been done since they were frozen.) Sometimes we just have to get on with life, and forge ahead with the tools we have. I am not aware of anyone knowing about "Moore's Law" in 1970; and even today, it's not at all clear that it's going to deliver on its promise!
That, and I'm not familiar enough with all the money thrown into those computer systems, and what those research teams and/or companies got out of it, to say whether or not it was a good investment. Certainly, oil companies finding new oil wells by using supercomputers to crunch geological data would probably be worthwhile (and oil companies will always have an edge over the average smart-phone user in making those calculations, and in making use of the results, by virtue of the equipment they own, even if the smartphone in his hand has more computational power). It could also be argued that the spending done on most research projects is probably useless--at least, from a commercial point of view--and always will be, whether it is done on a Cray X-MP or a PlayStation. A company would be far more likely to ask "Why can't you wait ten or twenty years?" than a bureaucrat, though.
Finally, I'm fascinated by *anything* that can compute, whether it fits in a keyboard or takes up a full room, regardless of age, and regardless of computational power. It doesn't matter what comes before or after it, except in terms of a historical "continuum" understanding: if it still works, and can be run economically, it is worth using. (Tube computers in particular would be prohibitively expensive to run!) --Alpheus
This almost looks like it could be the foundation of a game. Given an objective (O), resources (R), and time (T), and a choice of strategies, one could invest R toward an improved R (build a better computer) that would require less T to achieve O (but at more risk, since the R->R^n gamble may fail), as opposed to investing R toward a "brute force" approach to attaining O that requires more T but less risk.
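One way such a game might be scored, as a toy simulation (every parameter below is invented for illustration):

 import random

 # S1: brute force with current resources, guaranteed to reach O in T years.
 # S2: spend some years improving R (a faster machine), which succeeds with
 #     probability p; on failure, fall back to brute force afterwards.
 def brute_force_time(T):
     return T

 def invest_time(T, build_years, speedup, p_success):
     if random.random() < p_success:
         return build_years + T / speedup
     return build_years + T   # the gamble failed; do it the slow way

 T = 200
 trials = [invest_time(T, build_years=20, speedup=10, p_success=0.7)
           for _ in range(10000)]
 print(sum(trials) / len(trials))   # roughly 0.7*40 + 0.3*220 = 94
 print(brute_force_time(T))         # 200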
Computers aren't the only context where this plays. Build a bridge (B) across the chasm. Strategy (S1) might be "cut down trees and build a conventional trestle bridge" (requiring a T of 200) while strategy (S2) might be "invent a lighter-weight suspension technology and build a bridge requiring fewer trees" (requiring a T of ... 100? 150?). If S1 gets the job done first, then S2 loses. Iteratively, however, S2 might win by B3 or B5.
Naturally, this is pretty half-baked, but we do something like this in real life all the time. There's that little-known deal toward the end of WW II where Germany came up with a working fighter jet that should have meant the end of any Allied strategic air advantage and yet, because its superior speed was squandered by turning it into a bomber/attack plane, the few they were able to make were easy prey for conventional fighters. On the other side of the world, a different scenario played out when new bomb technology was applied in a way that rendered land and sea forces all but irrelevant. (You know, I really should look for less controversial examples when I do this ...)
It's not like the concept is new, but it's the first time I've framed it this way for myself.