[A problem is sometimes said to be Turing-complete if solving it requires the full power of a Turing machine (or of some TuringEquivalent system). More commonly, it is programming languages and other computational systems that are TuringEquivalent which are said to be TuringComplete.]
A given programming language is said to be Turing-complete if it can be shown that it is computationally equivalent to a Turing machine. That is, any problem that can be solved on a Turing machine using a finite amount of resources (i.e., time and tape) can be solved in the other language using a finite amount of its resources.
Typically, one proves a given language is Turing-complete by providing a recipe for translating any given Turing machine program into an equivalent program in the language in question. Alternatively, one can provide a translation scheme from another language that has already been proven to be Turing-complete.
Nearly every existing computer language is Turing-complete. About the only computer languages that aren't Turing-complete are a handful of special languages that are capable of LESS than a Turing machine - usually because some limitation is "hard-wired" into the language's structure or definition. (For example, Hofstadter designed a language called BlooP so that it is impossible to write a loop without specifying an upper bound on its iterations in advance.)
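As a rough illustration of that restriction (a sketch in Python, not Hofstadter's actual BlooP/FlooP notation; the function names are made up):

 # A BlooP-style loop must declare its maximum number of iterations up front:
 def bounded_search(predicate, bound):
     """Return the first n < bound satisfying predicate, or None."""
     for n in range(bound):          # the bound is fixed before the loop starts
         if predicate(n):
             return n
     return None                     # may give up even though an answer exists

 # An unbounded loop has no such ceiling; this is what BlooP forbids:
 def unbounded_search(predicate):
     n = 0
     while not predicate(n):         # no upper bound declared anywhere
         n += 1
     return n                        # may also never return at all

Hofstadter's FlooP is essentially BlooP plus this kind of unbounded loop, and that single addition is enough to make it TuringComplete.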
Because of this fact, the exact definition of a Turing machine isn't particularly important, and in fact any given CS instructor (or textbook) is going to provide a slightly different version of a Turing machine. Basically, a Turing machine is an abstract model for a programmable machine, something that nowadays we would think of as a programming language. It was invented by Alan Turing as a theoretical object which he essentially used to give a concrete basis for proving theorems about algorithms, in the days before electronic computers. Here's my general, inclusive description of the Turing machine:
The Turing machine is usually presented as a read-write head over an arbitrarily long (though finite) length of tape. At each position on the tape is recorded a symbol which the head can read and/or overwrite with a new symbol. The machine's programming is mainly determined by a set of states. At each tick of the Turing machine's clock, the Turing machine reads the symbol recorded at its current position on the tape, writes a new symbol at that position (or possibly retains the existing symbol), moves the read-write head one position to the left or right (or perhaps remains at the current location), and determines the new state to be in at the start of the next tick (again, possibly the same as the current one). One state is specially marked as the initial state; the machine begins a run in that state. Any number of states may also be marked as final states; the machine ends its run upon reaching one of those states.
-- taken from a posting by Brian Raiter
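That description translates almost directly into code. Here is a minimal sketch of such a simulator in Python; the encoding of the transition table, the step limit that keeps the sketch from running forever, and the little bit-flipping example machine are my own choices, not part of any standard definition:

 # A machine is a transition table mapping
 #   (state, symbol read) -> (symbol to write, move, next state)
 # where move is -1 (left), 0 (stay) or +1 (right).
 def run(transitions, initial, finals, tape, blank='_', max_steps=10000):
     cells = dict(enumerate(tape))        # tape contents, extended on demand
     state, head = initial, 0
     for _ in range(max_steps):           # step limit so the sketch always returns
         if state in finals:
             break                        # reached a final state: the run ends
         symbol = cells.get(head, blank)  # read the symbol under the head
         write, move, state = transitions[(state, symbol)]
         cells[head] = write              # overwrite (or retain) the symbol
         head += move                     # move the read-write head
     span = range(min(cells), max(cells) + 1)
     return state, ''.join(cells.get(i, blank) for i in span)

 # Example: a tiny machine that flips a string of 0s and 1s in place.
 flipper = {
     ('scan', '0'): ('1', +1, 'scan'),
     ('scan', '1'): ('0', +1, 'scan'),
     ('scan', '_'): ('_',  0, 'done'),
 }
 print(run(flipper, 'scan', {'done'}, '0110'))   # ('done', '1001_')

Anything that can express that handful of operations - read, write, move, branch on the symbol read, and keep repeating - can simulate any Turing machine, which is the usual informal argument that a language is TuringComplete.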
What is interesting about Turing-completeness is that it is the most rigorous definition we have of "what can be done by algorithmic computation": it is a hypothesis, impossible to prove, that there is no problem that can be solved by mechanical computation but cannot be solved by a Turing machine. This is impossible to prove because the concept of "mechanical computation" is an intuitive one, not a formal one.
Of course, there are systems that are more efficient at solving problems than Turing machines.
An interesting aside is that there are problems that are known to be impossible to solve by a Turing machine. As a result, they are suspected not to be solvable by any mechanical computational system.
But that is less the subject of Turing Completeness as such, and more the subject of the Church-Turing Thesis - in its strong, physical form, "the physical Universe is Turing-computable". Whether a given mathematical entity is physically realisable or not makes no difference to the mathematics, only to its applicability to physical reality. That's the distinction between pure and applied mathematics. Turing Completeness can be defined without positing any axioms to the effect that "things really are like that".
Sometimes, TuringCompleteness is defined by the kind of languages a system can recognize (Chomsky's language hierarchy). These are:
* regular languages - recognized by finite automata
* context-free languages - recognized by pushdown automata
* context-sensitive languages - recognized by linear-bounded automata
* recursively enumerable languages - recognized by Turing machines
A system is TuringComplete exactly when it can recognize the last class, the recursively enumerable languages.
The keyword efficient above is important. TuringCompleteness alone gives you very little and therefore it cannot sensibly be used to compare real programming languages.
TuringCompleteness gives you a great deal. Of course it can't be used to contrast real programming languages, because Turing completeness is the very property being assumed when one says 'real (i.e. Turing complete) programming languages'.
Many of the most popular languages in use today are TuringComplete; many instances of LittleLanguage are not.
Interestingly enough, SQL is not Turing complete.
Somebody has disputed that, but I don't have a reference. It might be a vendor-specific issue.
There are extensions of SQL that allow recursive queries, e.g. to find all ancestors in a person table with a parent id column. These could be used to build arbitrary loops. But in general it is a good thing that SQL is not TuringComplete, because otherwise no upper limit on the runtime of a statement could be assumed.
ANSI-standard SQL has supported such constructs (termed "recursive common table expressions") since 1999.
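For the curious, here is roughly what such an ancestors query looks like. This sketch is mine, using Python's built-in sqlite3 module (SQLite has supported recursive common table expressions since version 3.8.3) and an invented person table with a parent id column:

 import sqlite3

 db = sqlite3.connect(':memory:')
 db.execute('CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT, parent_id INTEGER)')
 db.executemany('INSERT INTO person VALUES (?, ?, ?)', [
     (1, 'grandparent', None),
     (2, 'parent', 1),
     (3, 'child', 2),
 ])

 # Walk the parent_id chain upwards from person 3 using a recursive CTE.
 ancestors = db.execute('''
     WITH RECURSIVE ancestor(id, name, parent_id) AS (
         SELECT id, name, parent_id FROM person WHERE id = ?
         UNION ALL
         SELECT p.id, p.name, p.parent_id
         FROM person p JOIN ancestor a ON p.id = a.parent_id
     )
     SELECT name FROM ancestor WHERE id <> ?
 ''', (3, 3)).fetchall()

 print(ancestors)   # the two ancestors of 'child': parent and grandparent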
HTML is also not TuringComplete.
-it's also not a programming language...
HTML is certainly a language, at least in the formal sense (it is, after all, the HyperText Markup Language); while it is not a language that describes the behaviour of a general-purpose numerical computation machine, it does describe the behaviour of at least one programmable machine: the web renderer. By any reasonably general definition of “programming,” HTML is a programming language.
By the same general definition of "programming", the PPM format is a programming language for image viewers.
If it did describe the "behaviour of a general-purpose numerical computation machine" then - datatype limitations notwithstanding - it would be Turing-complete. It does not describe the behaviour of even one programmable machine. The program describes the behaviour of the programmable machine; that's what a program is: the formalised description of a machine. The HTML document passed in as input data is just that - input data. As a program in its own right, the document is literally a non-starter.
Conversely, many computational systems that are not intended to be TuringComplete turn out to be so after all. If a given problem domain is complex enough, then adequate modelling of it may result in a system that can indeed perform arbitrary computations. Scheduling and resource-management systems, for example, may involve algorithms that, taken together, can serve as an "instruction set" that can be used to perform tasks. A specific instance is described in Bangert, Bratus, Shapiro & Smith, "The Page-Fault Weird Machine: Lessons in Instruction-less Computation" (http://www.cs.dartmouth.edu/~sws/pubs/bbss13.pdf), which uses Intel's IA32 page-fault handler to perform computations. One can imagine distributed programs that encode state in the distribution of shipping containers in ships' holds, with processing performed by the ordinary use of routing software used at ports to decide loading/unloading schedules.
Interesting as in surprising?
Yes. A year and a half ago, I sat down and really learned the language (instead of knowing odds and ends). When I actually sat down and thought through the spec, I was surprised at how limited the language (SQL-89, to be precise) really is. (SqlFlaws)
It no longer surprises me, of course. It now surprises me that I ever thought anything else.
It's also surprising from another point of view - I think most of us would tend to say SQL is not a LittleLanguage. But it falls on the "wrong side" of the TuringComplete dividing line - while a LittleLanguage may be TuringComplete, it is rather surprising for a BigLanguage to not be TuringComplete.
And why should SQL be Turing complete? It is a declarative language, with a very specialized purpose. XML and UML are also called "languages" and are also quite big, but it would be quite hard to discuss either of them from the Turing-completeness point of view. -- DmitryJemerov
XSLT is TuringComplete (well, I prefer the term TuringEquivalent...)
Many declarative languages are TuringComplete, whereas the UPDATE/DELETE/INSERT part of SQL is not declarative. If we allowed conditional constructions and recursion in views, the SELECT part of SQL might well be TuringComplete.
Interesting as in interesting from a mathematician's point of view...
And I'd add... correct from a mathematician's point of view. But if you allow for a person to sit there and press the button to start the next query, then I bet you could implement a Turing machine no problem. Use one table to store the states. Look up the next state with a query. Use another table to store the strip of paper. All the person has to do is provide the dumb looping capability and watch out for a STOP instruction. -- DanielEarwicker
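Here is a rough sketch of that scheme, with SQLite (via Python's sqlite3 module) standing in for the database and a trivial Python loop standing in for the person pressing the button; the table layout and the bit-flipping example machine are my own invention, not anything standard:

 import sqlite3

 db = sqlite3.connect(':memory:')
 db.executescript('''
     CREATE TABLE rules (state TEXT, reads TEXT, writes TEXT, move INTEGER, nextstate TEXT);
     CREATE TABLE tape  (pos INTEGER PRIMARY KEY, symbol TEXT);
     CREATE TABLE head  (state TEXT, pos INTEGER);
 ''')
 # One table stores the states (the transition rules), another the strip of paper.
 db.executemany('INSERT INTO rules VALUES (?, ?, ?, ?, ?)', [
     ('scan', '0', '1', 1, 'scan'),
     ('scan', '1', '0', 1, 'scan'),
     ('scan', '_', '_', 0, 'STOP'),
 ])
 db.executemany('INSERT INTO tape VALUES (?, ?)', enumerate('0110'))
 db.execute("INSERT INTO head VALUES ('scan', 0)")

 def step():
     # "Look up the next state with a query": one SELECT plus two UPDATEs.
     db.execute("INSERT OR IGNORE INTO tape SELECT pos, '_' FROM head")   # blank cells appear on demand
     writes, move, nextstate, pos = db.execute('''
         SELECT r.writes, r.move, r.nextstate, h.pos
         FROM head h
         JOIN tape t  ON t.pos = h.pos
         JOIN rules r ON r.state = h.state AND r.reads = t.symbol
     ''').fetchone()
     db.execute('UPDATE tape SET symbol = ? WHERE pos = ?', (writes, pos))
     db.execute('UPDATE head SET state = ?, pos = pos + ?', (nextstate, move))
     return nextstate

 # The person: nothing but dumb looping and watching out for STOP.
 while step() != 'STOP':
     pass

 print(''.join(s for _, s in db.execute('SELECT pos, symbol FROM tape ORDER BY pos')))

As the next comment points out, the dumb loop lives outside SQL; each individual step, though, is nothing more than a SELECT and a couple of UPDATEs.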
That scenario still doesn't make SQL TuringComplete - SQL can't provide buttons, so you must be using some other language to handle the button press and produce SQL to retrieve the data. Thus SQL is a means to an end - it isn't TuringComplete, but the language generating the SQL calls is.
What are the actual features required to be TuringComplete? Given a pair of TuringComplete languages, could you convert from one to the other via these minimal features?
Yes. But some languages that are TuringEquivalent are so alike that you don't even need to have a TuringEquivalent language to formalize the conversion between them.
The presentation of a Turing machine above can be used (roughly) as a list of required features - if you can write a Turing machine simulator in the language (as in the sketch further up the page), that language is TuringComplete (and vice versa). Requirements:
* read/write storage that can grow without any fixed limit (the tape)
* the ability to choose between actions depending on the data just read (the transition on the current state and symbol)
* the ability to keep repeating actions indefinitely (the ticks of the clock), whether through loops, recursion or goto
Yep. Considering the FrameProblem, clearly it leaves out:
* any notion of input/output, or of interaction with an environment that can change while the machine runs
* any notion of time, or of how long a computation is allowed to take
Actually, in the real world, TuringComplete doesn't even say much about "whether it is possible". For example, JavaLanguage is clearly TuringEquivalent, but as of Java 1.4, it's not possible to write the Java equivalent of the "ping" utility (ICMP echo request). Not just hard, but impossible. In the same way, CommonLisp doesn't give you a way to access command line arguments, and you can't code it in pure CommonLisp either. Also, I believe that most B&D languages (like JavaLanguage and ObjectiveCaml) don't allow you to write a generic object-marshaling library within the language itself. -- VladimirSlepnev
Hmmm... after reading all this page I just have one little doubt: "What the hell is the meaning of Turing-Complete??"
The Turing machine has been proven capable of describing any arbitrary computable function from a bounded input string to a bounded output string. A TuringComplete language of computation, thus, must also be capable of describing any arbitrary computable function from a bounded input string to a bounded output string. This can be achieved by any means - procedural, functional/substitution, constraint/logic, etc. Issues of effort to describe the function, demonstrate it correct, modify a function to create another, combine different functional units, efficiency of the computation, degree to which the program directly represents programmer intention, etc. are not relevant. Neither is dealing with unbounded I/O (potentially infinite streams of inputs or outputs) or with reactive I/O (where inputs react to outputs). There is much more to language quality than Turing completeness.
That really is the question. From what I can gather, Turing invented a mathematical model of a machine and found that this machine can compute anything that is computable. So I think it's up to the reader to figure out what properties are needed to compute everything that is computable. I don't think there is a definite answer to that. [There is. Wikipedia goes into greater depth on the subject than you'll find here.] A Turing machine also doesn't take time and dynamic I/O into consideration, so a Turing machine has limited practical use. This is a frustration to people interested in functional theory because if the operational environment of your functional language is also functional, you can't actually get anything done. It becomes the God problem. Nothing can initiate the system in the first place and no one can use the output except for God.
Same thing for a Turing machine. It cannot exist (mathematically or otherwise) unless its external environment is NOT a Turing machine at some point up the hierarchy. So either the Universe is a Turing machine and something ELSE exists higher up, or the Universe is NOT a Turing machine in the first place - which is more likely, and easy to explain if you understand that functions and execution loops are unnecessary to enable computations. This is where you get into the situation where a Turing machine will not do, but an equivalent one will (and where the halting problem can no longer be asked, nor does it make sense to ask it), but where this equivalent machine CAN and MUST handle dynamic I/O.
So you add two additional tapes to the Turing machine, each with different limitations on the operations that can be performed on them. One is read-only and advances by one square every time it scans; the other is write-only and advances by one square every time it writes. I call these two tapes "stdin" and "stdout" because they do the same job. The content of the stdin tape is undetermined by the machine (that's the external environment you insist upon). It SHOULD go without saying that any dynamic I/O that can be handled by a computer can be emulated via stdin/stdout (as NealStephenson implied in his essay "In the Beginning was the Command Line", all the world's a teletype as far as a CPU is concerned), and that this three-tape machine can be emulated by a conventional one-tape machine - which can in turn be emulated by one that monitors it and halts in the event that stdout contains an instance of a certain computable class of pattern. Or doesn't halt it, but instead switches on an indicator light for anyone who's interested. What consequences accrue from switching that light on are not its concern - not even if some of those consequences influence what is being read from stdin.
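To make that concrete, here is a sketch of the three-tape machine just described, again in Python; the encoding is my own, and for simplicity the stdin tape is scanned on every step:

 # Transitions map (state, work symbol, input symbol) to
 # (symbol to write, move, output symbol or None, next state).
 def run_io(transitions, initial, finals, stdin, blank='_', max_steps=10000):
     work, state, head = {}, initial, 0
     stdin = iter(stdin)                       # read-only tape: advances one square per scan
     stdout = []                               # write-only tape: advances one square per write
     for _ in range(max_steps):
         if state in finals:
             break
         inp = next(stdin, blank)              # content not determined by the machine itself
         write, move, out, state = transitions[(state, work.get(head, blank), inp)]
         work[head] = write
         head += move
         if out is not None:
             stdout.append(out)
     return ''.join(stdout)

 # Example: echo stdin to stdout until the input runs out (reads as blank).
 echo = {('run', '_', c): (c, +1, c, 'run') for c in '01'}
 echo[('run', '_', '_')] = ('_', 0, None, 'halt')
 print(run_io(echo, 'run', {'halt'}, '0110'))   # 0110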
It's worth pointing out too that in his original paper Turing himself never mentioned "halting", using instead the question of whether or not the machine ever wrote a specific symbol; presumably the machine was free to continue operating afterwards. But, like I already said, that's equivalent to the halting problem. Perhaps insistence on "dynamic I/O" blocks thinking about the functionality of Turing machines.