End Of Moores Law

People continue to believe in MooresLaw, despite all evidence to the contrary. But there are well-known fundamental physics problems that will prevent MooresLaw from holding forever. Eventually, computers will be as fast as it is physically possible for them to be. Of course, you can still make more of them, but they will take more space, time, and power, and will cost money, which, even at its minimum, cannot be reduced any further.

<--- you are here

Computers will continue to get faster, for a while, after the hardware stops getting faster. The reason is that the incentives will shift. No longer will software writers be able to justify bloated software by telling people they can buy a faster machine - because people won't be able to buy a faster machine. The value of software that runs faster and takes less memory will be more obvious. And so, I think that when the speed of hardware finally tops out, programmers will find it necessary to develop faster and leaner software, and their customers will appreciate it more.

Of course, those of us programmers who already try to minimize the use of CPU and memory will have the advantage of experience when the EndOfMooresLaw finally comes.

Oh, I get it -- you're justifying all the PrematureOptimization you like to do, by pointing out that at some unknown time significantly far into the future, computers will stop getting faster. And therefore it's OK to prematurely optimize now. Is that it?

The end of MooresLaw won't mean that computing power goes backwards; it will stay the same. So programmers who write so-called bloated software today will be able to keep writing bloated software forever.


Good thing Gordon covered himself with MooresSecondLaw. -MichaelLeach


While chips get AsSmallAsTheyCanPossiblyGet?, I am sure that if the power is needed, we can get more of it by increasing the number of bits manipulated in each operation: 32-bit to 64-bit to 128-bit words, and so on up to 1-kilobit, 1-megabit, even 1-gigabit words. Chips will get as small as they can possibly get when NanoTechnology takes the computer to the limit. Before that happens, programming as we know it today will be as obsolete as the bicycle is to the modern jet aircraft.

For the record, bicycles are alive and well.

Couldn't we also use more than binary along the signal paths to get more out of the chips? Instead of 0 volts meaning binary 0 and 2-5 volts meaning binary 1, we could use a range of voltages to represent more than one bit per signal. I know this would make the chips much more difficult to build, but chip makers will have nothing else to do, so this shouldn't be a problem ;)
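A minimal sketch of the idea in Python (the four levels and thresholds are made up for illustration; real multi-level schemes such as PAM-4 signalling and multi-level-cell flash work along these lines):

 # Hypothetical 4-level signalling: each symbol carries 2 bits instead of 1.
 # The voltage levels below are illustrative, not taken from any real chip.
 LEVELS = {0.0: 0b00, 1.5: 0b01, 3.0: 0b10, 4.5: 0b11}   # volts -> 2-bit symbol
 
 def decode(voltage):
     """Map a measured voltage to the nearest level's 2-bit value."""
     nearest = min(LEVELS, key=lambda v: abs(v - voltage))
     return LEVELS[nearest]
 
 def decode_word(voltages):
     """Pack a stream of symbols into an integer, most significant first."""
     word = 0
     for v in voltages:
         word = (word << 2) | decode(v)
     return word
 
 print(bin(decode_word([1.4, 3.1, 0.2, 4.4])))   # 0b1100011, i.e. 01 10 00 11

Four noisy voltage samples carry a full byte; the price, as noted, is that the receiver must now distinguish four levels instead of two.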

Only 50-75 Years? Lemme grab the back of that envelope ... 50 x 12 / 18 = ~33 generations. 2^33 = ~9*10^9. The fastest computer on the planet right now is ASCI White, which peaks at about 12 * 10^12 floating point operations / second. So in 50 years, you'll have one that hits about 1 * 10^23 FLOPS.
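The same envelope in a few lines of Python (assuming the 18-month doubling and the ASCI White peak figure above):

 # Back-of-the-envelope MooresLaw extrapolation.
 generations = 50 * 12 // 18                     # ~33 doublings in 50 years
 growth = 2 ** generations                       # ~8.6e9
 asci_white_flops = 12e12                        # ASCI White peak, ~12 teraflops
 print(f"{asci_white_flops * growth:.1e} FLOPS") # ~1.0e+23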

Isn't Earth Simulator, at 35+ teraflops, now much faster?

But that doesn't get anywhere near ultimate physical limits. The PlanckTime is what, 5 * 10^-44 seconds? Well then, lemmee see ... one operation per PlanckTime is about 2 * 10^43 operations per second, which is about 100 doublings away from 12 * 10^12, or roughly 150 years until the EndOfMooresLaw. If, with all that compute power, we can't then figure out a way to continue MooresLaw beyond Planck, even if it means constructing our own universe, I'll be very much surprised. Of course I've been surprised before ...
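A sketch of that arithmetic (assuming one operation per PlanckTime is the ceiling and the 18-month doubling holds):

 import math
 
 planck_time = 5.39e-44            # seconds
 planck_rate = 1 / planck_time     # ~1.9e43 ops/second, taken as the ceiling
 current = 12e12                   # ASCI White, from above
 doublings = math.log2(planck_rate / current)    # ~100
 print(f"{doublings * 1.5:.0f} years")           # ~150 years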

Lemme see: 12 * 10^12 floating point operations per second, using, say, a 64-bit floating point representation. 1.5 years later, same speed but double the word size to 128 bits: now effectively 2.4 * 10^13 flops (March 2003), etc., etc. We get to a 1-gigabit word in 36 years, which is 24 doublings, a factor of 2^24 from word size alone, or about 2 * 10^20 effective flops. Now if raw speed also increases by 10 times in that period, call it 2 * 10^21. Correct me if I'm wrong - I'll admit I probably am again.
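The word-size arithmetic, checked in code ("effective flops" here just means bit throughput relative to 64-bit operations; the doubling-word scheme is the speculative one above):

 word_bits = 64
 flops = 12e12                    # starting point, as above
 years = 0
 while word_bits < 2 ** 30:       # until a 1-gigabit word
     word_bits *= 2
     years += 1.5
 scaling = word_bits / 64                    # 2**24
 print(years, f"{flops * scaling:.1e}")      # 36.0 years, ~2.0e+20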

I think that we may have a little trouble using the power available well before we get to a 1-megabit word. So the limit may arrive not when Moore's law ends, but when the hardware outpaces our ability to use it. We may be approaching that limit in the next 10 years. We might decide not to build computers more powerful than we can use. We have a historical parallel in our declining to build a supersonic commercial transport.

''Self-assembling materials will go to molecular scales. Custom polymers can work in a variety of ways, and odd micro-structures can do all sorts of weird stuff with trapped electrons e.g. zero-power computation. The future lies in materials, and computing fabrics will return with a vengeance. This suggests Wolfram's cellular automata techniques will become more important. They also lend themselves well to evolutionary scenarios. At some point the computation and the material it runs on will not be distinct. FieldProgrammableGateArrays already have some features like this. I tend to think that custom computational arrays will be evolved and then assembled using a cellular paradigm. I doubt it will be wetware, more likely a complex connective blob of plastic. Compilation will go all the way down to reality itself. JavaRealMachine? from your HomeFoundry? anyone? -- RichardHenderson''
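As a toy illustration of the cellular paradigm (nothing to do with any real material; this is just Wolfram's Rule 110, which is known to be computation-universal, in a few lines of Python):

 # Elementary cellular automaton: each cell's next state depends only on
 # itself and its two neighbours - the "computing fabric" idea in miniature.
 RULE = 110
 
 def step(cells):
     n = len(cells)
     return [(RULE >> (cells[(i - 1) % n] << 2 | cells[i] << 1 | cells[(i + 1) % n])) & 1
             for i in range(n)]
 
 row = [0] * 31 + [1] + [0] * 32    # a single live cell
 for _ in range(16):
     print("".join(".#"[c] for c in row))
     row = step(row)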

You seem to be talking about computing in things as opposed to computing about things: processors within objects, altering stuff. A blob of plastic - put it in a cranium, connect it, and presto, "Borgs" (a StarTrek scenario).

We already compute in things. I'm thinking of materials which are already being assembled this way. Rather conventional looking in that they tend to have a chip substrate (for connecting). The point is that the only limit to scale at the moment is due to our fabrication tools. Not much has changed since LSI chips were invented. Scale down the fabrication technology and you can build some cool stuff. Stuff that photolithography can't do. 'There's plenty of room at the bottom', as a wise man once said.


I remember reading a paper about 15 years ago which predicted the end of Moore's law for conventional computers due to fundamental physical limits, somewhere around 2030 if I remember correctly. The basic argument was that there is a minimum entropy associated with the irreversible flip of a bit. (This is why MaxwellsDemon is not actually possible.) Therefore if you flip bits at a given rate, there is a minimum energy consumption, and an associated heat production, that you cannot avoid. This rate is readily calculated. The author then took the maximum thermal conductivity of any known substance and the maximum temperature that any known substance can survive, and found the rate of computation whose waste heat would melt anything with that conductivity at that temperature. Projecting Moore's law forward, he found when we would reach that rate of computation, and predicted that Moore's law would not survive to that point.

In a brief search, I did not find that paper or the specific date it predicted for the end of Moore's law. But, for instance, page 3 of http://www.cwi.nl/~paulv/papers/dagstuhl00.ps does confirm my understanding of the inherent limitations of classical computing.
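For concreteness, the minimum is kT ln 2 joules per irreversibly erased bit (LandauersPrinciple). A quick sketch of the arithmetic (the 100-watt budget and the temperatures are illustrative picks, not the paper's numbers):

 import math
 
 k_B = 1.380649e-23               # Boltzmann constant, J/K
 
 def landauer_joules_per_bit(temp_kelvin):
     """Minimum energy to irreversibly erase one bit at temperature T."""
     return k_B * temp_kelvin * math.log(2)
 
 power_budget = 100.0             # watts, illustrative
 for T in (300, 3695):            # room temperature; melting point of tungsten
     e = landauer_joules_per_bit(T)
     print(f"T={T} K: {e:.2e} J/bit, max {power_budget / e:.2e} bit erasures/s")
 # At 300 K: ~2.9e-21 J/bit, so 100 W caps you near 3.5e22 irreversible erasures/s.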

Note that QuantumComputers use only reversible operations (ReversibleLogic), and therefore don't have to produce entropy. They are therefore exempt from the limitations of ClassicalComputers?. Of course, building them has so far proven to be challenging, and programming them will be... interesting.

-- BenTilly
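To make "reversible operations" concrete, here is a sketch of the Toffoli gate (controlled-controlled-NOT), the classic universal reversible gate. It permutes the 3-bit states rather than discarding information, and it is its own inverse, so in principle no Landauer heat need be produced:

 from itertools import product
 
 def toffoli(a, b, c):
     """Controlled-controlled-NOT: flip c iff a and b are both 1."""
     return (a, b, c ^ (a & b))
 
 states = list(product((0, 1), repeat=3))
 outputs = [toffoli(*s) for s in states]
 assert sorted(outputs) == states                         # a permutation: nothing lost
 assert all(toffoli(*toffoli(*s)) == s for s in states)   # its own inverse
 print("Toffoli is reversible")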

You don't need QuantumComputers to have ReversibleLogic. You do need asynchronous chips though. But we already have that. For reversible logic, you just need incentive.

I've heard this before, but I don't get it. Asynchronous chips are just as irreversible as standard clocked-logic chips, right? Is there some connection between "asynchronous" and "reversible" that I'm missing? Both are "ideas that might someday make computers better", but I fail to see any deeper connection. -- DavidCary


http://www.gotw.ca/publications/concurrency-ddj.htm (HerbSutter, "The Free Lunch Is Over: A Fundamental Turn Toward Concurrency in Software")


Naturally, if the PeakOil issues are valid, MooresLaw falls apart. ...modulo TheScarcityGame


For a long article on the end of Moore's Law, see http://home.earthlink.net/~moores-law/ , the only site dedicated to the topic. It shows how Peak Oil and Moore's Law fall under the same general economic and natural laws. It includes a frequently updated blog of news items relating to Moore's Law, and the only known graph of Moore's Law that includes its end (based on a logistic-regression analysis).


MooresLaw may end not because of chip technology limits, but because other components, such as hard drives or the bus, become the bottleneck. At that point it makes sense to move research into the bottlenecks instead of the chips. People are not going to pay more for chips whose extra speed does nothing but run no-ops while waiting for the hard drive.
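That is AmdahlsLaw applied to the whole system: if the disk dominates, a faster CPU barely helps. A sketch with made-up times:

 # Amdahl-style bottleneck arithmetic with illustrative numbers.
 io_seconds = 9.0                 # time spent waiting on the hard drive
 cpu_seconds = 1.0                # time actually computing
 
 for cpu_speedup in (1, 2, 10, 1_000_000):
     total = io_seconds + cpu_seconds / cpu_speedup
     print(f"CPU {cpu_speedup}x faster -> overall {10.0 / total:.2f}x speedup")
 # Even an infinitely fast CPU gives at most 10/9 ~ 1.11x overall.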


600 Years!

WikiPedia: "Some see the limits of the law as being far in the distant future. Lawrence Krauss and Glenn D. Starkman announced an ultimate limit of around 600 years in their paper,[47] based on rigorous estimation of total information-processing capacity of any system in the Universe." ([47] = http://arxiv.org/abs/astro-ph/0404510)

And maybe we can quantum-tunnel into a 4th+ dimension to stack them ... if our dimensional neighbors don't charge rent.


There's another limit I've observed in keyboards, mice, speakers, and even to some extent monitors: that a certain level is "good enough" for most users, and thus there will be less incentive to advance beyond that. I have often wondered if microprocessors will hit that limit before they hit the physical limits of Moore's Law.

There *are* a handful of technology drivers that might continue to push the technological limit--namely high-computation stuff, like engineering, meteorology, and computer games--so even if the "average" microprocessor doesn't "advance", microprocessors themselves might continue to be pushed to the physical limits of Moore's Law. --Alpheus.

