The page on MalbolgeLanguage discusses whether or not MalbolgeLanguage is TuringComplete. As an attempted ReFactoring, much of that debate has been moved here, since it quickly went beyond Malbolge (an EsotericProgrammingLanguage).
But first, some practical requirements for TuringCompleteness:
Two uncontroversial ones:
- A selection (conditional) construct. Any language limited to a linear sequence of instructions (whether terminating or not) is not TuringComplete.
- A looping/iteration/recursion construct (any one of the three will suffice), capable of indefinite iteration. This is needed to construct non-terminating programs; without it, any program of finite length must terminate. (One interesting language without this construct is BlooP--see BloopFloopandGloop--which has bounded loops that are always guaranteed to terminate. It isn't TuringComplete; however, it "suffices" in most cases, as you can construct a loop that will execute until the sun goes nova.)
And the source of controversy:
- A memory model which emulates (or provides the illusion of) an infinite memory store. It was argued that MalbolgeLanguage, which has a fixed memory model (and no way to get more memory), is thus a FiniteStateMachine and not a TuringMachine.
Of course, all buildable computers are FiniteStateMachines, as it is impossible to build a machine with infinite memory. There are roughly 10^80 atoms in the known universe, so there's one trivial upper bound. However, many programming languages attempt to preserve the illusion of infinite memory, with varying degrees of success. The set of programming languages can be categorized as follows:
- Those that have language-defined limits on memory access. Malbolge is an example; it has a fixed memory space. (I suppose its rudimentary I/O capabilities might offer an escape valve--one could conceivably hook the thing up to a TuringTapeMachine?).
- Those that have implementation-defined limits on memory access. C/C++ is an example; the size of a pointer is bounded. Implementations can specify the bound--64-bit pointers can access more memory than most machines have, 128-bit pointers could address more memory than has been physically constructed, and 256-bit pointers can uniquely address every atom in the universe, give or take a few bits--but the bound must be finite. The expression "sizeof (void *)" is not allowed to return infinity. C/C++ also have the I/O escape available; an escape which is put to practical use in C/C++ programs. (Many C/C++ programs use persistent storage to manage datasets larger than the address space on the machines on which they run.)
- Those with no implementation-defined limits on memory access. JavaLanguage specifies no restrictions on how "big" a reference can be; one could imagine a JavaLanguage implementation where a reference is essentially a BigInt.
And the important question is: Does any of this matter?
No, but it's fun.
It's also interesting, and not entirely unrelated, that Java programs will magically (without a recompile) be able to take advantage of 64-bit address spaces, while C programs will not. Essentially, Java tries to preserve the illusion of a virtual machine which is undeniably TuringComplete, while C allows you to interact with a real machine, which is (as mentioned) necessarily an FSM. As a result, when our FSMs become quantitatively better approximations of a TuringMachine, Java programs just hork with strange errors less often, while C is confused. --AdamBerger
I don't think there is any reason why the bound in C should be finite. "sizeof" tells you how many "units" of storage a pointer needs; it doesn't tell you what the unit is. There have been C-to-Lisp compilers on which the size of all the fundamental C types (pointers, all numerical types) is 1: exactly one Lisp cell. Whether the underlying Lisp used an 18-bit, 32-bit, or 64-bit wide Lisp cell was completely invisible.