JohnVonNeumann wrote some papers around 1945 summarizing his view of what a general-purpose electronic computer ought to be like. A computer has a "von Neumann architecture" if it follows his recipe (a minimal simulator sketch follows the list below):
- Consists of ALU, control unit, memory, and I/O devices.
- The memory just stores numbers (integers of limited size).
- The program is encoded numerically and stored in the memory along with the data.
- One instruction is executed at a time. It may cause the transfer of a bounded (and small) amount of data to and from the memory and I/O devices.
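To make the recipe concrete, here is a minimal sketch in Python of such a machine. The opcode names and encoding are invented for illustration; the point is that a single flat memory of integers holds both the program and its data, and one instruction executes at a time.

    # Toy von Neumann machine: one flat memory of integers holds both the
    # program and the data it operates on.  (Opcodes here are invented for
    # illustration; real encodings differ.)
    LOAD, ADD, STORE, HALT = 1, 2, 3, 0

    def run(memory):
        pc = 0          # program counter: an address into the same memory
        acc = 0         # single accumulator register
        while True:
            op, arg = memory[pc], memory[pc + 1]   # fetch one instruction
            pc += 2
            if op == LOAD:       # acc <- memory[arg]
                acc = memory[arg]
            elif op == ADD:      # acc <- acc + memory[arg]
                acc += memory[arg]
            elif op == STORE:    # memory[arg] <- acc
                memory[arg] = acc
            elif op == HALT:
                return memory

    # Code occupies cells 0..8 (with padding); its data lives in cells 9..11
    # of the very same memory, so the program could in principle rewrite itself.
    mem = [LOAD, 9, ADD, 10, STORE, 11, HALT, 0, 0,   # code
           2, 3, 0]                                   # data: 2 + 3 -> cell 11
    print(run(mem)[11])   # -> 5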
A friend of mine who was studying JohnVonNeumann's work claims that the stored-program aspect was particularly clever. The original notion was to improve on pure-plugboard computing by having a table of writable values to use for things like polynomial coefficients -- obviously it's handy to be able to compute such things on the fly -- but von Neumann kept generalizing the idea until the things stored in the table were not just changeable coefficients but changeable operators as well, so that there needn't be any difference between operators and data.
Most computers since 1945 have worked more or less along those lines. Many variations are possible, and most have been explored to some extent. Taking the features listed above in order:
- The system could be really, radically different, like hardware NeuralNetworks or dataflow machines.
- The data stored in memory could have types attached, so that it need no longer be treated numerically by the CPU, and so that (e.g.) an ADD instruction can operate on integer or floating-point data indiscriminately. (The LispMachines did that; a rough software analogy follows this list.)
- The program could be represented in a fundamentally different way from the data, or stored in separate memory. (HarvardArchitecture)
- There could be many processors, each with its own memory and links to the others. (The ConnectionMachine was like this, as is the CellProcessor.)
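As a rough software analogy of the tagged-memory variation above (only an analogy -- real LispMachines did this in hardware), each memory cell below carries a type tag alongside its value, so a single generic ADD can dispatch on the tags:

    # Rough software analogy of a tagged architecture: every memory cell is a
    # (tag, value) pair, and a generic ADD dispatches on the tags rather than
    # on which instruction the programmer chose.
    INT, FLOAT = "int", "float"

    memory = [(INT, 2), (FLOAT, 3.25)]   # typed cells, not bare bit patterns

    def add(cell_a, cell_b):
        (tag_a, a), (tag_b, b) = cell_a, cell_b
        result = a + b
        # The result's tag is chosen by the machine, not by the program.
        tag = FLOAT if FLOAT in (tag_a, tag_b) else INT
        return (tag, result)

    print(add(memory[0], memory[1]))   # -> ('float', 5.25)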
Parallel processing is the most common non-von-Neumann-ness in computers today. Even a relatively humble 2-way PC goes beyond the von Neumann architecture. Some small computing devices such as PICs (as well as many DigitalSignalProcessors) use a HarvardArchitecture.
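For contrast with the unified-memory sketch above, the same toy machine can be restructured Harvard-style, with the program held in a separate memory from the data. This is only an illustrative sketch, not a model of any particular PIC or DSP:

    # Harvard-style variant of the toy machine above: the program lives in its
    # own read-only memory, separate from the data memory, so instruction
    # fetches and data accesses use different paths.
    LOAD, ADD, STORE, HALT = 1, 2, 3, 0

    def run_harvard(program, data):
        pc, acc = 0, 0
        while True:
            op, arg = program[pc], program[pc + 1]   # fetch from program memory
            pc += 2
            if op == LOAD:
                acc = data[arg]                      # operands come from data memory
            elif op == ADD:
                acc += data[arg]
            elif op == STORE:
                data[arg] = acc
            elif op == HALT:
                return data

    code = (LOAD, 0, ADD, 1, STORE, 2, HALT, 0)      # immutable: code cannot rewrite itself
    print(run_harvard(code, [2, 3, 0])[2])           # -> 5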
Also, long ago Intel chips gained an extra unit (originally the x87 coprocessor) to do floating-point calculations in special hardware, in parallel with the main ALU. In time this unit became integrated into the main chip design.
That fourth feature of the von Neumann architecture - the narrow data path between memory and CPU - is often known as the VonNeumannBottleneck. It afflicts moderately parallel machines like those 2-way PCs, as well as sequential machines; MassivelyParallel computers avoid the bottleneck, at the cost of being terribly hard to program for.
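A back-of-the-envelope calculation shows why the bottleneck bites. The two hardware figures below are hypothetical round numbers, not measurements of any real CPU:

    # Back-of-the-envelope illustration of the VonNeumannBottleneck.  The two
    # hardware numbers below are hypothetical round figures, not any real CPU.
    PEAK_FLOPS = 100e9          # assumed: 100 GFLOP/s of arithmetic throughput
    MEM_BANDWIDTH = 25e9        # assumed: 25 GB/s between memory and CPU

    # A streaming operation like summing an array reads 8 bytes (one float64)
    # per addition, so the memory channel, not the ALU, sets the pace.
    bytes_per_op = 8
    ops_limited_by_memory = MEM_BANDWIDTH / bytes_per_op      # ~3.1e9 adds/s
    utilisation = ops_limited_by_memory / PEAK_FLOPS          # ~3% of the ALU busy

    print(f"memory-bound rate: {ops_limited_by_memory:.2e} adds/s")
    print(f"ALU utilisation:   {utilisation:.1%}")

Under these assumed figures a streaming workload keeps the ALU only a few percent busy; the narrow path to memory, not the arithmetic hardware, limits the machine.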
FlowBasedProgramming is an approach to application development that simulates a number of von Neumann machines linked together by BoundedBuffer connections, thus combining several of the non-von-Neumann approaches listed above. Its proponents find it a better way to build large applications, one that affords significantly improved maintainability.
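The core plumbing being described -- independent little machines joined only by bounded buffers -- can be sketched with ordinary threads and queues. This is an illustration of the idea, not any particular FBP library's API:

    # Sketch of the FlowBasedProgramming idea: independent little "machines"
    # (threads here) linked only by BoundedBuffer connections (queue.Queue
    # with a maxsize).  Illustrative only, not a real FBP library's API.
    import queue, threading

    SENTINEL = object()

    def generate(out):                  # upstream component
        for n in range(10):
            out.put(n)                  # blocks when the buffer is full
        out.put(SENTINEL)

    def square(inp, out):               # midstream component
        while (n := inp.get()) is not SENTINEL:
            out.put(n * n)
        out.put(SENTINEL)

    def report(inp):                    # downstream component
        while (n := inp.get()) is not SENTINEL:
            print(n)

    a, b = queue.Queue(maxsize=4), queue.Queue(maxsize=4)   # bounded connections
    threads = [threading.Thread(target=generate, args=(a,)),
               threading.Thread(target=square, args=(a, b)),
               threading.Thread(target=report, args=(b,))]
    for t in threads: t.start()
    for t in threads: t.join()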
I would add another distinguishing point: VonNeumann machines are not built for arbitrary levels of recursion. -- MarkJanssen
True, because hardware support for arbitrary recursion is considered unnecessary. However, such machines can be built to support recursion up to the limit of available memory, and languages supporting TailCallOptimization turn certain recursive calls into simple iteration.
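Python happens not to perform TailCallOptimization itself, but the transformation is easy to show by hand: a tail-recursive sum and the loop a TCO-capable compiler would turn it into.

    # A tail-recursive sum and the iteration a TailCallOptimization-ing
    # compiler would produce.  (Python does not do this automatically; the
    # recursive version is limited by the interpreter's stack depth.)
    def total_recursive(n, acc=0):
        if n == 0:
            return acc
        return total_recursive(n - 1, acc + n)   # tail call: nothing left to do after

    def total_iterative(n):
        acc = 0
        while n != 0:            # the same computation, constant stack space
            acc, n = acc + n, n - 1
        return acc

    print(total_recursive(100), total_iterative(100))   # -> 5050 5050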
CategoryMachineOrientation