Not Natural Ona Turing Machine

From NonTuringComputing:


I am interested in what is more naturally done not on a Turing machine, even if a Turing machine could simulate it. Possibilities might be:


Optical computing performs parallel Fourier transforms as a matter of nature. NeuralNets, again, can be simulated, but painfully. The use of NeuralNets to control helicopters has something to do with the "naturalness" of the computation in that medium, which in turn has to do with the engineering difficulty and also the price/performance constraints.

Or analog computation that drives control systems. It could of course be done with a Turing machine running the GameOfLife, with NAND gates constructing a computer that computes the control logic, and so on. But for price and speed reasons it is just much more natural to do with op amps, resistors, and feedback circuits. So: real-time control of certain things.

Modern DigitalSignalProcessing technology is (thanks to MooresLaw) rapidly replacing analog filters and control systems in more and more applications; I'm not aware of any control applications that can be solved with an analog solution but not with a digital one. (There are quite a few high-frequency filtering and signal-processing tasks which must still be done in the analog domain, due to limitations of today's hardware.)
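To make the "digital replacement of analog filters" point concrete, here is a minimal sketch (plain Python, not part of the original page) of a first-order low-pass IIR filter, the discrete-time counterpart of a simple analog RC filter. The cutoff and sample-rate parameters are purely illustrative.

 # A minimal sketch of a first-order digital low-pass filter, the
 # discrete-time counterpart of an analog RC filter.
 import math

 def rc_lowpass(samples, cutoff_hz, sample_rate_hz):
     """Filter a sequence of samples; returns the filtered sequence."""
     rc = 1.0 / (2.0 * math.pi * cutoff_hz)   # analog time constant R*C
     dt = 1.0 / sample_rate_hz
     alpha = dt / (rc + dt)                   # smoothing factor in (0, 1)
     out, y = [], (samples[0] if samples else 0.0)
     for x in samples:
         y = y + alpha * (x - y)              # y[n] = y[n-1] + alpha*(x[n] - y[n-1])
         out.append(y)
     return out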

It seems somehow wrong to me to use the GameOfLife to simulate NAND gates to construct a simulation of hardware gates to construct a Turing machine to compute something. It is more appropriate to ask what 2D cellular automata are good at doing on their own. Ditto FuzzyNeuralNets.

-- AlistairCockburn
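Regarding the remark above about what 2D cellular automata do on their own: here is a minimal sketch (plain Python, not part of the original page) of a single GameOfLife step over a set of live cells, for anyone who wants to experiment.

 # A minimal sketch of one GameOfLife generation. The board is a set of
 # live (x, y) cells; everything else is dead.
 from collections import Counter

 def life_step(live):
     """Return the set of live cells in the next generation."""
     neighbour_counts = Counter(
         (x + dx, y + dy)
         for (x, y) in live
         for dx in (-1, 0, 1)
         for dy in (-1, 0, 1)
         if (dx, dy) != (0, 0)
     )
     return {cell for cell, n in neighbour_counts.items()
             if n == 3 or (n == 2 and cell in live)}

 # Example: a "blinker" oscillates between a horizontal and a vertical bar.
 # life_step({(0, 0), (1, 0), (2, 0)}) == {(1, -1), (1, 0), (1, 1)}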

How about DNA Computing? People are figuring out how to crack DES, solve the TravelingSalesmanProblem, etc. using DNA computing.

Indeed. Just that. The time it takes humans to pour the chemicals from one tube to another and adjust dials is on the order of hours; but the computation that the pouring replaces would take on the order of months!

But is this super-Turing computation, or just massively parallel computation? Certainly there are a finite number of DNA strands composed of a finite number of molecules in that test tube...

I think there's a reason you haven't heard much about DNA computing since that one news article in, what, 2002? First you have to generate all possible answers. Then you have to generate the key. Then you mix the two together and see what the key stuck to. Sure, that last step is fast... but while you're building "every possible answer" into DNA, it's obviously easier to just find the minimum at that point, instead of going through the whole process. Plus there's very little power in the computation; basically, it's a hash-table lookup, and an unreliable one at that, which can only ask a very, very limited set of questions. It may be neat that it could solve the TravelingSalesmanProblem, but the simple traveling salesman problem is about all it seems it could solve!

It is faintly possible that you could genetically engineer an organism or set of organisms that might give you "all possible answers" in DNA more quickly than a computer could compute them, but probably not if you need any sophistication beyond "all permutations" (and problems like the traveling salesman do; not all answers are permissible). DNA computation only looks inefficient to simulate on a TM if you ignore the set-up steps; if you include the DNA generation steps in the cost, then DNA computing is a clumsy and inefficient simulation of an extremely limited TM (not a UTM)!
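To make the "generate all possible answers, then filter" objection concrete, here is a rough sketch (plain Python, not from the original page) of that approach applied to the TravelingSalesmanProblem; enumerating the candidates already visits every tour, so keeping the best one directly is free.

 # A rough sketch of the generate-then-filter approach: enumerating every
 # candidate tour already visits each one, so we can keep the shortest
 # directly instead of encoding the candidates into DNA and filtering.
 from itertools import permutations

 def shortest_tour(cities, dist):
     """Brute-force TSP. dist maps a pair (a, b) to the distance from a to b."""
     start = cities[0]
     best_tour, best_len = None, float("inf")
     for rest in permutations(cities[1:]):      # the "all possible answers" step
         tour = (start,) + rest + (start,)
         length = sum(dist[a, b] for a, b in zip(tour, tour[1:]))
         if length < best_len:                  # the "filtering" is free at this point
             best_tour, best_len = tour, length
     return best_tour, best_len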

Fundamentally, DNA is a storage mechanism, not a computation mechanism.

I consider DNA computing a canonical example of how important it is to critically read the news; it was geeky cool, but there has never been any technical reason to believe it's a revolution waiting to happen. (I don't expect Quantum Computers to ever work as promised either (full massive q-bit-based computation, not the weaker "transistors exploiting quantum effects" which is nearly inevitable), but at least if they do work, they'll be a real revolution, so some degree of excitement and research is justified.)


Here's one: how about an interpreter for another, higher-level language, say Scheme? Or LambdaCalculus (since they are equivalent, this should be easy to do). It could actually be run on a TM emulator such as http://www2.lns.mit.edu/~dsw/turing/turing.html . Or a subset-of-C interpreter; then you would be running (small) C, written in TM, written in C.

It's a common misconception that the interpreted programming language is somehow "less powerful" than the language the interpreter is written in.

VirtualPC, Bochs, etc. are software that runs on the Mac and allows any x86 program (in particular, Microsoft Windows) to execute.

Fusion, Executor, vMac, etc. are software that runs on x86 machines and allows any Mac program (in particular, MacOS 7.5) to execute.

You can start with either hardware platform and stack these 2 kinds of programs in alternating layers as high as you want, until you are overcome with boredom.

[In practical terms you could not go on forever. The program would get slower as memory resources are consumed. Each interpreted language can be more powerful than the metalanguage in terms of expressiveness, etc., but it can't be faster than the underlying language implementation. Each successive layer would be slower and slower. That's why you should not use encryption scheme 1 within encryption scheme 2 within ... encryption scheme n without engineering it properly, e.g. EAP-TTLS over IPSEC, or even WEP over PPTP - transmission speed slows down by as much as 80%.

It would be interesting to simulate a grammar with a TM (TM1), simulate another TM (TM2) with that grammar, then simulate another TM (TM3) with the second TM (then a FiniteStateMachine on top of that?). These are all done one by one in separate chapters of "LanguagesAndMachines", i.e. they use a 3-tape TM simulating a grammar to prove theorems about SemiThueGrammars (tape 1 is the input string, tape 2 is the grammar, tape 3 is the derivation), simulate TM' with TM as part of explaining GoedelsIncompletenessTheorem, etc. But it would be fun to try to stack them all together.]

All these languages are TuringEquivalent; using one to emulate another is not a problem (from a computability point of view).
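As a small illustration of how easy the "downhill" direction is, here is a minimal sketch (plain Python, not part of the original page) of a Turing machine simulator; the painful direction is encoding an interpreter as a TM transition table, not simulating the TM itself.

 # A minimal sketch of a Turing machine simulator. The machine is given as a
 # transition table: (state, symbol) -> (new_state, new_symbol, move).
 def run_tm(transitions, tape, state="start", accept="halt", blank="_", max_steps=10000):
     cells = dict(enumerate(tape))   # sparse tape, blanks everywhere else
     head = 0
     for _ in range(max_steps):
         if state == accept:
             break
         symbol = cells.get(head, blank)
         state, new_symbol, move = transitions[(state, symbol)]
         cells[head] = new_symbol
         head += 1 if move == "R" else -1
     return state, "".join(cells[i] for i in sorted(cells))

 # Example: a one-state machine that flips bits until it reaches a blank.
 flip = {
     ("start", "0"): ("start", "1", "R"),
     ("start", "1"): ("start", "0", "R"),
     ("start", "_"): ("halt",  "_", "R"),
 }
 # run_tm(flip, "0110") -> ("halt", "1001_")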


I've been told that it's theoretically possible for physicists and chemists to build a simulation of what happens when one mixes a recipe for chocolate brownies. But as far as I know, no one has actually simulated it -- whenever someone wonders what would happen if he used more chocolate, less flour, or no nuts, he just goes ahead and mixes them up and sees what happens.

Lemmee see. Cost for brownie ingredients--a couple of dollars (or insert local currency here). Cost for the computer time to run the simulation????

I think the economical choice is to head to the kitchen.

Exactly. That's why finding the results of food recipe variations is NotNaturalOnaTuringMachine. Or at least Not Economically Feasible Ona Turing Machine.


I've been told that anyone who solves the "protein folding problem" will most likely win a Nobel prize.

Until then, biologists will continue to find the shape generated by a DNA segment by expressing it as a protein, then letting that protein physically fold. (After that, the shape is determined by crystallography or NMR.)

See also BlueGene

Unfortunately, the "natural" way is inadequate for certain proteins. Some proteins can fold in more than one way. The "natural" way typically shows just one way that each protein can fold. If the environment is changed just so, the "natural" way can consistently show the alternate folding result. But if you do not know that the protein can have multiple shapes, the "natural" way is unlikely to tell you, let alone tell you how to find the alternate shape(s). For example, "mad cow disease" is caused by such a protein. We have literally no idea how many other proteins have multiple possible shapes.


An interpreter for a really high-level language such as NaturalLanguage. If it passed the TuringTest, would that be MetaIrony (a TuringMachine passing the TuringTest)?

It's fun wordplay, but of course computers are Turing Equivalent, and the StrongAiHypothesis is that humans are (at best) Turing Equivalent. I.e., "of course" the Turing Test can only be passed by a Turing Machine.

Nor is this a coincidence. Turing's interests were strongly related to each other.

(Humans do not always function as strongly as Turing machines; we're prone to believing in self-contradictory things, for instance.)

There is no reason a Turing Machine can not believe self-contradictory things, for some definition of "believe".


Wait-Free Synchronization by Maurice Herlihy (1993) http://portal.acm.org/citation.cfm?doid=114005.102808 ( see WaitFreeSynchronization )

A wait-free implementation of a concurrent data object is one that guarantees that any process can complete any operation in a finite number of steps, regardless of the execution speeds of the other processes. -- Abstract ACM

shows some things that *can* be done by processes that have an atomic compare&swap instruction available, that *cannot* be done by processes that only have standard atomic read, write, or even fetch&add instructions.

Since one can make a TuringMachine that only has atomic read and write instructions, this seems to imply that a collection of processes (that have an atomic compare&swap instruction available) can do something that a collection of TuringMachines cannot.

-- DavidCary

No. First off, the Turing Machine could just as easily have atomic compare&swap as atomic read/write, so that's comparing apples and oranges; the simple Turing machines are all sequential, not parallel, anyway, so on a purely sequential Turing machine everything is "atomic", or nothing is, depending on how you look at it. Secondly, the conclusion is incorrect in any case, because the thing that you're claiming can't be done (basically, having N TuringMachines cooperate via whatever atomic operation) can, in fact, be simulated perfectly well on a single Turing Machine. Just simulate the N-way machine, and simulate whatever atomic operation you like.

You're forgetting that the point has never been that Turing Machines are identical to all other computational devices, merely that they are computational equivalent in power, typically proved by simulating the other device.

Nondeterminism is related to this subject, and regular vanilla Turing Machines are in fact deterministic, and can only simulate pseudo-nondeterminism via pseudo-random numbers, but that's getting into a different discussion.
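To make the compare&swap discussion concrete, here is a small sketch (plain Python, not from the original page) of the classic CAS retry loop. The atomicity is simulated here with a lock, which is exactly the kind of simulation the rebuttal above appeals to; real hardware provides it as a single instruction.

 # A small sketch of the compare-and-swap retry loop. cas() is made atomic
 # here with a lock; on real hardware it is a single instruction, but
 # simulating it is precisely what the rebuttal above appeals to.
 import threading

 class SimulatedCasCell:
     def __init__(self, value=0):
         self.value = value
         self._lock = threading.Lock()

     def cas(self, expected, new):
         """Set value to new iff it currently equals expected; report success."""
         with self._lock:
             if self.value == expected:
                 self.value = new
                 return True
             return False

 def cas_increment(cell):
     # The standard lock-free pattern: read, compute, try to swap, retry on conflict.
     while True:
         old = cell.value
         if cell.cas(old, old + 1):
             return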


Turing machines cannot simulate rotating-head disks exactly. I use nanosecond jitter in seek times to get random numbers; a simulation would yield only predictable pseudo-random numbers.


possibly related to TheFutureOfComputing

