Interactive Computation Is More Powerful Than Non Interactive

Some computer scientists have made the claim that classical, non-interactive models of computation (the TuringMachine and the pure LambdaCalculus, to name two) are inadequate models for "interactive" computations, which are able to interact with their environment while the computation is in progress.

For papers on this topic, see the links further down the page. The substance of the claims comes from a restriction on the behaviour of a TuringMachine (and of a function in the pure LambdaCalculus): a TuringMachine processes an "input tape" that is predetermined and fixed before computation begins, and produces an "output tape" that is not examined until the TuringMachine halts.

Likewise for the pure LambdaCalculus, which assumes that functions depend only on their inputs and have no effect but their outputs. (Note that variants of the LambdaCalculus extended with I/O primitives are interactive models.)
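As a rough illustration of this restriction (the sketch and its names are invented here, not taken from any of the cited papers), the two kinds of computation have different shapes: a non-interactive computation is a function from a complete input to a complete output, while an interactive one consumes input and emits output while it runs.

  from typing import Callable, Iterator

  # Non-interactive (a-machine-like): the whole input tape is fixed in advance,
  # and the output is only examined after the function returns (i.e. halts).
  def run_batch(machine: Callable[[str], str], input_tape: str) -> str:
      return machine(input_tape)

  # Interactive: inputs arrive one at a time (each may depend on outputs already
  # emitted), and partial output is visible while the computation is running.
  def run_interactive(step: Callable[[str], str],
                      inputs: Iterator[str],
                      emit: Callable[[str], None]) -> None:
      for symbol in inputs:
          emit(step(symbol))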

Turing himself dealt with this issue in his papers, and proposed several classes of TuringMachines; these are discussed by PeterMerel on ChurchTuringThesis.

Here is the relevant part of "OnComputableNumbers, with an Application to the EntscheidungsProblem":

  For some purposes we might use machines (choice machines or c-machines) whose motion is only partially
  determined by the configuration (hence the use of the word "possible" in §1). When such a machine reaches
  one of these ambiguous configurations, it cannot go on until some arbitrary choice has been made
  by an external operator.

So although Turing did mention several classes of machine, he said essentially nothing about c-machines (unless there is more about them in another paper)? "TuringMachine" in practice always refers to Turing's a-machine, which is a non-interactive, deterministic model.

In any case, this does not really affect the position asserted on this page, which is that interactive models are more powerful than non-interactive ones - not that interactive models are more powerful than any computation model invented by AlanTuring. (The original argument on ModelsOfComputation mentioned TuringMachines only as an example of a non-interactive model, making it clear that a-machines were meant.)


It is provable that an n-tape Turing machine is equivalent in power to a single-tape TM.

Equivalent in algorithmic power, yes. Note, however, that there are interactive TM variants, such as Persistent Turing Machines, that make essential use of multiple tapes to model interactivity. These are not Turing a-machines.

One can consider a computer that interacts with the outside world to be a Turing machine with two tapes - one for the internal memory, and another that holds the transcript of the input. We may also restrict the TM so that it cannot write on the input tape. The fact that, in practice, the input tape is not filled in until input actually arrives is irrelevant - it could just as well have been fixed in advance; all that matters is that the TM does not read the appropriate part of the tape until the corresponding I/O time in the real computer. To achieve this, we can consider each cell to hold the result of the real computer checking for input, which may of course be absent.
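To make the transcript idea concrete, here is a minimal sketch (the names and the shape of the step function are assumptions made for this example, not part of the argument above):

  from typing import Callable, List, Optional, Tuple

  # The input tape is read-only: cell t holds whatever the real computer saw
  # when it checked for input at time t (possibly nothing). Cell t is never
  # read before step t, so the tape could just as well have been fixed in
  # advance without changing the computation.
  def run_with_transcript(step: Callable[[str, Optional[str]], Tuple[str, Optional[str]]],
                          input_cell: Callable[[int], Optional[str]],
                          steps: int) -> List[str]:
      state: str = "start"
      outputs: List[str] = []
      for t in range(steps):
          symbol = input_cell(t)            # read input-tape cell t only now
          state, out = step(state, symbol)  # update the internal-memory tape
          if out is not None:
              outputs.append(out)           # transcript of outputs so far
      return outputs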

You've essentially described a "Lazy Turing Machine" as mentioned below (except that you didn't consider output). That section gives my response.

Thus, we can see that a TM is an interactive model of computation.

No, we can only see that it is possible to define an interactive model of computation in terms of a modified TM - which was not in dispute.


Seems relevant, but I haven't had a chance to read it yet:

"Interaction, Computability, and Church's Thesis". Abstract: This article formalizes the claim that interactive finite computing agents are more expressive than Turing machines. The impact of models of interaction on Church's thesis and Godel's incompleteness result is explored. The evolution from algorithmic to interactive models of computation in computer architecture, software engineering, and AI is considered in a final section.

(Note that the paper defines "more expressive" in a way that is essentially equivalent to the usage of "more powerful" on this page.)

Some of Dina Goldin's other papers <http://www.cs.umb.edu/~dqg/papers/> may also be relevant. For example, <http://www.cs.umb.edu/~dqg/papers/god.txt> is a less formal discussion of this topic. Also, at <http://lambda-the-ultimate.org/node/view/203> she addresses the argument that interactive models can be defined in terms of TMs:

DominicFox?:

DinaGoldin?: Goldin's I&C paper is of interest, in part because the explicitly non-Turing aspects of the PTM setup there become clear. A step in a PTM computation is actually a "macrostep" with a (potentially unbounded) computation using a three-tape Turing machine (N3TM). Under Definition 6, a "macrostep" of the PTM allows the PTM to return the symbol "sdiv", for "divergence", in the case that the underlying N3TM does not halt. In short, running a PTM assumes that you can solve the halting problem in some fashion. (In this fashion, it resembles hyper-computation).


See also the thread on the TYPES mailing list archived at


Quoting from <http://www.cse.buffalo.edu/~rapaport/510/actors-vs-church-thesis.txt>:

The second sentence of this quote is actually a stronger claim than the one made on this page. The fact that interactive models are more powerful (in the sense of being able to model a wider range of behavioural specifications) refutes some formulations of the ChurchTuringThesis, but not others. There is not really any single agreed formulation; see <http://plato.stanford.edu/entries/church-turing/>. For the application to the EntscheidungsProblem, it was sufficient to consider only non-interactive computations. For the more general philosophical consequences, or as a guide to what practical models of computers need to be able to express, it is not sufficient.

See also "The Origins of the Turing Thesis Myth", at <http://www.engr.uconn.edu/~dqg/papers/myth.pdf>:

So how strong is the claim being made on this page? I thought the quote above was the claim being made on this page. Which parts of this page are not claims being made on this page?

The claims are: If interactive models can be emulated by TMs (everyone seems to agree to that elsewhere on the page), then how can they have "greater modeling power"? What is "modeling power" and how is it measured?

Copied from below:

The litmus test of whether two models differ in power should be whether it is possible to implement a program with a given behavioural specification in one model, but impossible in the other. Interactive models are certainly more powerful than non-interactive models by this criterion.

That means some folks here don't agree that interactive models can be emulated by TMs, because if they can then any behavior implemented in an interactive model can be implemented by a TM emulating that interactive model.

When an interactive model is emulated by one or more TMs or TM-variants, it is still an interactive model. It is also still a distinct model from a TM. So there is no contradiction in saying that a program cannot be implemented in the TM model, but that it can be implemented in a model that could be emulated by one or more TMs.

I don't buy it. If a program is implemented in an interactive model that is being emulated by a TM, then the program is implemented by a TM.

No, it is not. If you think it is, please explain how to implement a web browser on a machine that requires all of its input in advance, and that cannot react interactively to any responses from other machines.

There's nothing special about the word "emulated" that would negate the word "implemented". Every TM program emulates a theoretical machine. Every implementation is an emulation. And multiple communicating TMs can be emulated by a single TM.

{Even if every implementation is an emulation (which might be true in some hand-wavy sense) I think the converse is not true. An implementation of an emulation of Foo is not an implementation of Foo.}


If you have an infinite number of processing nodes in the ActorsModel, etc., they're probably right. If you have a finite number of nodes, that claim smells. Note that any finite parallel model can be simulated on a Turing machine by interleaving computations - the Turing machine performs one step from each node in sequence.
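A hedged sketch of that interleaving trick (Python generators stand in for the nodes' step functions here; nothing in this sketch is specific to the ActorsModel):

  from typing import Iterator, List, Sequence

  # Run a finite, fixed collection of "nodes" on one sequential machine:
  # one step of each node per round, dropping a node once it halts.
  def interleave(nodes: Sequence[Iterator]) -> List:
      live = list(nodes)
      trace: List = []
      while live:
          for node in list(live):
              try:
                  trace.append(next(node))   # one step of this node
              except StopIteration:
                  live.remove(node)          # this node has halted
      return trace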

All of these models have a finite-but-unbounded number of nodes at any point in the computation - like the length of a TuringMachine tape. The point of the critique is not that an actor configuration can't be simulated - as a function from all its inputs to all its outputs - by a TuringMachine. It can, provided you know all of the inputs in advance, and only care about the output after the machine has stopped. Interactive models are more powerful because they model the fact that additional inputs can be submitted, and partial outputs obtained, while the computation is progressing.

Of course, there are various ways to model interaction using variants of TuringMachines, for example:

* "Lazy" TuringMachines, which read cells of their input tape on demand while the computation is running, and whose partial output can be read before they halt;
* Persistent (iterated) TuringMachines, which are run repeatedly, carrying state over on a work tape from one run to the next.

But this misses the point of the critique, since these are distinct interactive computation models built on top of the TuringMachine (i.e. a-machine) model.

A practical reason to use models that are specifically designed to express interaction, rather than lazy or iterated Turing machines, is that they are more tractable for that use. For instance, the ActorsModel, JoinCalculus, PiCalculus and CommunicatingSequentialProcesses all model encapsulated processes that can only affect each other's states in controlled ways. This makes them a good basis for the foundations of ObjectOriented languages (or imperative features of functional languages). I don't know of any proposal for a TuringMachine-based interactional model that is able to model encapsulation.

Uh, you're confusing apples and oranges (or perhaps I am, or someone else). Don't confuse computational power, which describes the set of algorithms that can be executed in a finite amount of time on a given machine, with expressiveness - whether or not the machine is convenient to program. The ChurchTuringThesis states that all intuitive (and constructible - things with infinitely many processors don't count) computational models are equivalent in computational power; my suspicion is that the ActorsModel and the PiCalculus are exactly equivalent to the TuringMachine. I would be shocked were it to be demonstrated otherwise.

You've defined computational power as describing the set of algorithms that can be executed in finite time on a given machine. What is an algorithm? It seems that you're considering it to be a non-interactive computation - one that is defined by a function from its input (presented entirely in advance) to its output (read only after the algorithm has completed). Of course any definition of "computational power" that ignores interactive behaviour entirely will ignore the additional power obtained by interactive computation.

Interaction with what? Interaction with a user? Interaction among several TuringMachines? Interaction with a hypothetical HyperTuringMachine??

Interaction with any real-world process (users, other computers or processes in the same computer, a CNC machine, etc.). That process is not part of the model, but the computation's interaction with it is.

If a TuringMachine is allowed to interact with a hypothetical machine that is more powerful than a Turing machine (in such a manner that the other machine could influence the TuringMachine's computations), then all bets are off. Likewise for interaction with users - it is unknown what computational model best represents the human brain, but there are numerous AI problems which we don't know how to solve with computers but are trivial for humans to solve.

Since the process that the machine is interacting with is not part of the machine itself, it should not matter (for most purposes) whether the model is able to simulate that process.

In any case, either there are real-world processes that cannot be modelled by a TuringMachine, or there are real-world computational processes whose interactions cannot be described by modelling the processes as TuringMachines. Either way, the view that TuringMachines are sufficiently powerful to model all computation is unjustified.

However, neither of these cases [interaction with hyper-Turing machines or interaction with users] is relevant to the issue of whether or not the PiCalculus or the ActorsModel is hyper-Turing. The interesting case is what happens when multiple Turing machines (though a finite number) are hooked up together - what is the computational model of such a thing? The answer, of course, is that a finite network of TuringMachines is no more powerful than a single TuringMachine.

Correct. But that does not address the issue at all. Such a network must have all its input presented in advance, and no output can be read until the combined TuringMachine has stopped.

Assuming that actor configurations are TuringEquivalent (likewise with processes in the PiCalculus), there is no reason to assume that an aggregation of many such entities (again, a finite number!) is somehow more powerful than a TuringMachine.

The point is not that aggregation increases power (although in general it can, as demonstrated by the fact that individual actors in the pure ActorsModel are not TuringEquivalent), but that the ability to perform interactive input and output increases power.

The litmus test of whether two models differ in power should be whether it is possible to implement a program with a given behavioural specification in one model, but impossible in the other. Interactive models are certainly more powerful than non-interactive models by this criterion. You can call this "behavioural power" rather than "computational power" if you like, but it is not the same as expressiveness.

Again, apples and oranges. I can't write a JavaApplet to format my hard drive (I could try, but it wouldn't get past the SecurityManager in the Java runtime, nor past the OS, assuming I'm not dumb enough to run it as root), but that doesn't make JavaApplets non-Turing equivalent.

It does make them less powerful, as a result of restricting what other processes are connected to the applet. (Incidentally, I have written JavaApplets that did get past the SecurityManager. -- DavidSarahHopwood)

You're correct in that I/O capabilities are orthogonal to the issue of computational power. You're incorrect in implying that I/O capabilities should nonetheless be construed to make something "more powerful" than a Turing machine - it's orthogonal.

Only if a definition of "power" is used that a priori excludes considering interactivity to be an increase in power - not for the "Litmus test" above that considers the set of behavioural specifications that it is possible to model.

Further, claims that the ActorsModel is somehow capable of richer I/O (or more powerful interaction) than a single-threaded process are sorely lacking in evidence - and given how computers are designed and built (simulating many processes on few or one processing element(s) is a well-known trick), the claim is extremely dubious. What exactly do you think the ActorsModel can do that a program written with "industry standard" paradigms cannot? Keep in mind, we're discussing possibility here, not convenience - I don't doubt that there are things more convenient in the ActorsModel; that's irrelevant to the claims at issue here.

"Industry standard" computers are certainly interactive. Their behaviour cannot be modelled by a TuringMachine (other than by building an interactive model that is not itself a TuringMachine). How would a TuringMachine model the behaviour of a web browser or a Wiki server, for example? These can be accurately specified by typical interactive models.

Yes, that's one of the differences. Interactive models always allow sources of non-determinism from external inputs, and may allow internal non-determinism. But note that a probabilistic TuringMachine that does not have access to I/O would still be less powerful than interactive models.


This is a strange sort of non-determinism being talked about. If I understand it correctly, the ActorsModel is said to be non-deterministic because messages race from sender to receiver. If we know the distances, the speeds and the starting times (and why shouldn't we?) then we know the order of arrival. It sounds to me like the ActorsModel proponents think ignorance of determinism is equal to non-determinism. -- EricHodges

"Non-determinism" is in practice used to refer to any unpredictability in the outcome of a process. IOW, we as observers cannot predict the outcome given our current knowledge. There are several factors that can lead to this:

In attempting to predict something like the weather, for instance, all of these factors apply. They also apply to computer systems: for instance, the spinning of a disk drive (and therefore its observable response time) is subject to air turbulence, which again involves all of these factors.

In sequential computation, the non-determinism that is always present in the implementation of the computer system can be easily hidden. In most forms of concurrent computation (excluding DeclarativeConcurrency), it cannot be hidden. It really is not usually the case that we know the "distances, speeds and starting times" of messages - and even if we did know them, we would not want to have to rely on that knowledge.

A model of computation is a specification for a computer system. There is another kind of unpredictability that applies only to specifications: the spec may intentionally allow multiple outcomes from a given initial or intermediate state (this is sometimes called "don't care" or specification non-determinism).

There are two main reasons for introducing specification non-determinism:

* to give implementations freedom, so that any of several acceptable outcomes (for example, from different schedulings or optimizations) satisfies the spec;
* to abstract away details of the environment or the implementation (such as timing) that the specification cannot, or should not, pin down.

So in a sense, "ignorance of determinism" is a form of non-determinism. Does that answer your objection?

Not really. The distances, speeds and starting times are knowable quantities.

Knowable after the fact, but not predictable in an efficient implementation. The model must take that into account.

No, they are knowable before the fact. Efficiency doesn't matter.

A model of computation is a specification for a class of real machines. If any machine in that class can be non-deterministic (which may be partly motivated by efficiency considerations), then the model must reflect this.

Saying a TuringMachine is slower than an ActorsModel is qualitatively different from saying it's an "inadequate" model.

That is not what I'm saying. Speed is irrelevant if the model doesn't purport to be able to predict timing. Non-interactive models are inadequate because they cannot model interaction, and interaction is extremely important in practice.

It may not be what you're saying but it's in the quote at the top of this page.

It's no longer at the top, and is qualified as being a stronger claim than what this page is asserting.

The distance, speed and time are themselves deterministic.

The distance, speed and time are in general NOT deterministic. They are unpredictable due to all four factors listed above, and both reasons for introducing specification non-determinism apply.

The first two factors (ignorance) don't produce non-determinism.

Do you consider the outcome of rolling a well-shaken die to be non-deterministic? That is unpredictable due to precisely the same factors.

No, I don't. Complexity doesn't negate determinism. No matter how well ActorsModel shakes its dice, the dice behave deterministically.

Well, I submit that you have an unconventional view of non-determinism. This is tangential to the main point of this page, though: as pointed out below, interactive models can be deterministic, and non-interactive models can be non-deterministic.

Refusing to predict an event doesn't make the event unpredictable. Model error is an error and can be corrected. If the ActorsModel mandated amplification of quantum effects so that they decided which message won the race, then that would produce non-deterministic output. But it doesn't.

Ignoring them doesn't grant this model any abilities lacking in a TuringMachine. We could close our eyes while that part of the tape goes by and make the same claim for the TuringMachine. It wouldn't make the machine more powerful in any way. -- EH


In any case, non-determinism is only one of the reasons why interactive models are more powerful, and not the most important one (I/O is more important). For instance, DeclarativeConcurrency is a deterministic interactive model:

                       increasing power
                      ----------------->

  TuringMachine ------------------> probabilistic TM
  (non-interactive confluent)       (non-interactive nonconfluent)
       |                                 .
       |                                 .
       v                                 .
  DeclarativeConcurrency ---------> PiCalculus, ActorsModel
  (interactive confluent)           (interactive nonconfluent)

(confluent ~= "all computations have a deterministic outcome in terms of their explicit inputs".)
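As a toy illustration of the "interactive confluent" corner of the diagram (this sketch and its names are mine, not taken from the cited material): with single-writer, single-reader streams and no other shared state, the observable output depends only on the input stream, not on how the two threads happen to be scheduled.

  import threading
  from queue import Queue

  def producer(out: Queue, items) -> None:
      for x in items:
          out.put(x)
      out.put(None)                          # end-of-stream marker

  def doubler(inp: Queue, out: Queue) -> None:
      while (x := inp.get()) is not None:    # consume the input stream
          out.put(2 * x)                     # produce the output stream
      out.put(None)

  def run(items) -> list:
      a, b = Queue(), Queue()
      threading.Thread(target=producer, args=(a, items)).start()
      threading.Thread(target=doubler, args=(a, b)).start()
      result = []
      while (y := b.get()) is not None:
          result.append(y)
      return result                          # run([1, 2, 3]) is always [2, 4, 6]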

I understand that I can emulate any ActorsModel with a TuringMachine. -- EH

Yes, that has already been made clear above ("The point of the critique is not that an actor configuration can't be simulated - as a function from all its inputs to all its outputs - by a TuringMachine."). Read that section again.

The word "non-deterministic" seems to be used in several different fashions. In computing theory, we have the "non-deterministic Turing machine" - which has entries in its transition table with more than one possible state, and an "oracle" to guide it to the correct next state). This device can solve many problems faster than a traditional Turing machine (in particular, it can solve NP problems in polynomial time - that's the definition of NP, after all). However, there is no problem that a NDTM can solve that a DTM cannot - the two are computationally equivalent. Obviously, we cannot build NDTMs, as we have no way to fashion the oracle. At any rate, "non-deterministic" in this sense is a completely different thing than non-deterministic meaning "exhibiting unpredictable behavior". The ActorsModel is non-deterministic in the latter sense - the order in which two messages sent from two actors arrive is not specified - but it isn't a computationally helpful non-determinism.

Yes, "non-deterministic" is used in several different fashions. This is conventional usage and there is little that can be done about it, other than to disambiguate where necessary, as the discussion above tries to do.

NDTMs are a bit of a RedHerring here. First, as you point out they are unimplementable. Second, they are misnamed: an NDTM isn't non-deterministic according to any of the variants of non-determinism discussed above; it just somehow picks the "correct" next state. The probabilistic TM in the diagram above is not an NDTM (and unlike an NDTM, it is directly implementable).

As for whether the increase in power provided by interactivity is "computationally helpful", that is completely missing the point. An actor configuration cannot compute any pure function that cannot also be computed by a TuringMachine. However, it can model I/O. This is extremely useful, and represents an increase in modelling power, regardless of what pure functions can be computed.

For that reason, I suspect some of the initial claims above are a tad vacuous. Claiming that a model which handles I/O is more powerful at handling I/O than a model which ignores I/O altogether - strikes me as a tautology. It may be true, but so what?

So, we should not attempt to use models that cannot handle I/O to model real-world systems in which I/O is required. This implies that TuringMachines and the pure LambdaCalculus should not be used to model such systems.

ActorsModel cannot compute functions that TuringMachines cannot compute; we agree on this point. ActorsModel takes I/O into account; TuringMachines do not. About the best you can really say is that ActorsModel, PiCalculus (or other ProcessCalculi like CommunicatingSequentialProcesses) are better at modelling RealWorld computing than purely mathematical constructs like TuringMachines - and I'll agree with you there. TheoryOfObjects discusses this at length; go have a read of that book. Some of the phrasing of the initial claims above leads one to believe that the ActorsModel invalidates the ChurchTuringThesis - but the ChurchTuringThesis itself only deals with computation of functions, not with I/O.

Church's Thesis only deals with computation of functions. Some of Turing's papers made much more general claims about the sufficiency of TuringMachines (or equivalent) as a model for all computation. For example, in "Computing machinery and intelligence"

The implications of this for programming languages? Practically zilch. Other than AlgolSixty, which was notorious for its lack of I/O facilities as part of the language definition (see AlgolSixtySyndrome), all interesting programming languages can do I/O.

Yes, the issue is what kind of model should be used to describe computation in those languages.

In other words, they are equivalent to Turing's c-machine. And my strong suspicion is that the ProcessCalculi are equivalent to the c-machine.

"Turing's c-machine" is defined by the following two sentences in a single paper:

  For some purposes we might use machines (choice machines or c-machines) whose motion is only partially
  determined by the configuration (hence the use of the word "possible" in §1). When such a machine reaches
  one of these ambiguous configurations, it cannot go on until some arbitrary choice has been made
  by an external operator.

Is this an adequate description of a model of computation? I don't think so. In any case, if the definition of a c-machine were expanded on in order to allow useful things to be said about it, it would be an interactive model.


Here's the thing: once you add (pure) interactivity, formality goes out the window. It's no longer a "closed" system. The question then is: what is the notion of computation once interactivity enters in? The answer: well, either the other user/machine is substituting for some of your computation (which can be described by just another, bigger, state machine), or you're no longer in a formally-definable system and you're now playing a game, in which case there is no application of the ChurchTuringThesis. --MarkJanssen

Interactivity in no way diminishes the need for formality. Interactivity only, at best, increases the number of external inputs and/or outputs of a system. I'm not clear why you conclude that "playing a game" results in "no application of the ChurchTuringThesis".

Interactivity may or may not diminish the "need" for some kind of formality, but it may make it undecidable, in which case, formality is meaningless. This is why I call it a game -- which, sloppily, I suppose is just saying that it is no longer deterministic, and therefore the notion of computation is not applicable, because "computation" is practically defined by a deterministic mapping between input and output. In a game, this cannot be so. --MarkJanssen

Non-determinism does not preclude formalism. They are orthogonal.

Perhaps in abstract math, but not in the machine, where things must be formally and exhaustively deterministic. The fact that some of these deterministic formalisms are unfamiliar to you does not make this any less true.

{With what "deterministic formalisms" do you believe your correspondent lacks familiarity?}

{In the machine, things are not necessarily "exhaustively deterministic". Some encryption mechanisms rely on non-determinism, for example, and useful computational processes can be rendered non-deterministic by depending on a hardware random number generator or external input.}


See also ModelsOfComputation, ActorsModel, ProcessCalculus, PiCalculus, CommunicatingSequentialProcesses, ChurchTuringThesis.


CategoryMath CategoryConcurrency

