Some computer scientists have made the claim that classical, non-interactive models of computation (the TuringMachine and the pure LambdaCalculus, to name two) are inadequate models for "interactive" computations, which are able to interact with their environment while the computation is in progress.
For papers on this topic, see the references cited below.
The substance of the claims comes from a restriction on the behavior of a TuringMachine (and of a function in the pure LambdaCalculus): a TuringMachine processes an "input tape" that is predetermined and fixed before computation begins, and produces an "output tape" which is not examined until the TuringMachine halts.
Likewise for the pure LambdaCalculus, which assumes that functions depend only on their inputs and have no effect but their outputs. (Note that variants of the LambdaCalculus extended with I/O primitives are interactive models.)
Turing himself dealt with this issue in his papers, and proposed several classes of TuringMachines; these are discussed by PeterMerel on ChurchTuringThesis.
Here is the relevant part of "OnComputableNumbers, with an Application to the EntscheidungsProblem":
- If at each stage the motion of a machine (in the sense of §1) is completely determined by the configuration, we shall call the machine an "automatic machine" (or a-machine). For some purposes we might use machines (choice machines or c-machines) whose motion is only partially determined by the configuration (hence the use of the word "possible" in §1). When such a machine reaches one of these ambiguous configurations, it cannot go on until some arbitrary choice has been made by an external operator. This would be the case if we were using machines to deal with axiomatic systems. In this paper I deal only with automatic machines, and will therefore often omit the prefix a-.
So although Turing did mention several classes of machine, he said essentially nothing about c-machines (unless there is more about them in another paper)? "TuringMachine" in practice always refers to Turing's a-machine, which is a non-interactive, deterministic model.
In any case, this does not really affect the position asserted on this page, which is that interactive models are more powerful than non-interactive ones - not that interactive models are more powerful than any computation model invented by AlanTuring. (The original argument on ModelsOfComputation mentioned TuringMachines only as an example of a non-interactive model, making it clear that a-machines were meant.)
It is provable that an n-tape TuringMachine is equivalent in power to a single-tape TM.
Equivalent in algorithmic power, yes. Note, however, that there are interactive TM variants, such as Persistent Turing Machines, that make essential use of multiple tapes to model interactivity. These are not Turing a-machines.
One can consider a computer that interacts with the outside world to be a TuringMachine with two tapes: one for the internal memory, and another that holds the transcript of the input. We may also restrict the TM to not writing on the input tape. The fact that in practice the input tape is not written until something happens is irrelevant; it could just as well be fixed in advance. All that matters is that the TM does not read the appropriate part of the tape until the corresponding I/O time in the real computer. To achieve this, we can consider each cell to hold the result of the real computer checking for input, which may of course be absent.
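(As a rough illustration of this two-tape construction, here is a minimal sketch in Python; the names TranscriptTM and poll_input are invented purely for illustration and do not come from any paper:)

 # Minimal sketch (illustrative only) of the construction described above:
 # a work tape plus a read-only "transcript" tape. Each transcript cell is
 # filled by polling for input the first time the machine reads it, so the
 # cell holds "the result of checking for input", which may be absent (None).

 class TranscriptTM:
     def __init__(self, poll_input):
         self.work = {}            # work tape: position -> symbol
         self.transcript = {}      # input tape: position -> symbol or None
         self.poll_input = poll_input

     def read_input(self, pos):
         # The transcript is conceptually fixed in advance; we merely defer
         # reading cell `pos` until the corresponding step of the computation.
         if pos not in self.transcript:
             self.transcript[pos] = self.poll_input()   # may return None (no input yet)
         return self.transcript[pos]

 # Example use: a machine that copies whatever input it has seen so far.
 def demo():
     pending = iter(["h", "i", None, "!"])
     tm = TranscriptTM(poll_input=lambda: next(pending, None))
     for step in range(4):
         sym = tm.read_input(step)
         if sym is not None:
             tm.work[step] = sym    # "output" is just written to the work tape here
     return tm.work                 # {0: 'h', 1: 'i', 3: '!'}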
You've essentially described a "Lazy Turing Machine" as mentioned below (except that you didn't consider output). That section gives my response.
Thus, we can see that a TM is an interactive model of computation.
No, we can only see that it is possible to define an interactive model of computation in terms of a modified TM - which was not in dispute.
Seems relevant, but I haven't had chance to read it yet:
Interaction, Computability, and Church's Thesis
- Peter Wegner, Brown University
- Dina Goldin, University of Massachusetts at Boston
- Draft, May 25, 1999
Abstract: This article formalizes the claim that interactive finite computing agents are more expressive than Turing machines. The impact of models of interaction on Church's thesis and Godel's incompleteness result is explored. The evolution from algorithmic to interactive models of computation in computer architecture, software engineering, and AI is considered in a final section.
(Note that the paper defines "more expressive" in a way that is essentially equivalent to the usage of "more powerful" on this page.)
Some of Dina Goldin's other papers <http://www.cs.umb.edu/~dqg/papers/> may also be relevant. For example, <http://www.cs.umb.edu/~dqg/papers/god.txt> is a less formal discussion of this topic. Also, at <http://lambda-the-ultimate.org/node/view/203> she addresses the argument that interactive models can be defined in terms of TMs:
DominicFox?:
- But at one end of the interaction, what you have is still a Turing machine [...] it's a control system based on a Turing machine nevertheless.
DinaGoldin?:
- I am the author of the "Origins of the Turing Thesis Myth" paper, as well as other papers on interaction. It is true that there is a Turing machine inside a sequential interaction system (i.e. a Persistent Turing machine, or PTM). However, this does NOT imply that that their expressiveness is the same. After all, if one goes back to the definition of the Turing machine, its computation is nothing more than a series of lookups in a transition table, and it could be viewed as a sequence of computations by a finite automaton. However, as we all know from our Theory of Computation courses, the expressiveness of Turing machines is greater than that of finite automata. Similarly, as our I&C paper shows (http://www.cse.uconn.edu/~dqg/papers/its.pdf), the expressiveness of PTMs is greater than that of Turing machines.
Goldin's I&C paper is of interest, in part because the explicitly non-Turing aspects of the PTM setup there become clear. A step in a PTM computation is actually a "macrostep" with a (potentially unbounded) computation using a three-tape Turing machine (N3TM). Under Definition 6, a "macrostep" of the PTM allows the PTM to return the symbol "sdiv", for "divergence", in the case that the underlying N3TM does not halt. In short, running a PTM assumes that you can solve the halting problem in some fashion. (In this fashion, it resembles hyper-computation).
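(A minimal sketch in Python of the macrostep structure as described above; the names macrostep, n3tm_run and diverges are invented for illustration, and the diverges check is deliberately left as a stub, precisely because deciding it amounts to solving the halting problem:)

 # Sketch (illustrative only) of the PTM "macrostep" loop described above.
 # Each macrostep runs an underlying machine on (worktape, input) and yields
 # (new worktape, output), or the special symbol "sdiv" if that run never
 # halts. Deciding the "never halts" branch is the halting problem, which is
 # the point being made in the text.

 SDIV = "sdiv"

 def macrostep(n3tm_run, diverges, worktape, input_token):
     if diverges(worktape, input_token):      # uncomputable in general!
         return worktape, SDIV
     return n3tm_run(worktape, input_token)   # halting run: (worktape', output)

 def run_ptm(n3tm_run, diverges, inputs, worktape=""):
     outputs = []
     for tok in inputs:                        # interaction stream, not known in advance
         worktape, out = macrostep(n3tm_run, diverges, worktape, tok)
         outputs.append(out)
     return outputs

 # Example with a trivially halting underlying machine:
 outs = run_ptm(lambda w, x: (w + x, len(w + x)),   # append input, output new length
                lambda w, x: False,                  # this toy machine always halts
                ["a", "b", "c"])
 # outs == [1, 2, 3]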
See also the thread on the TYPES mailing list archived at
Quoting from <http://www.cse.buffalo.edu/~rapaport/510/actors-vs-church-thesis.txt>:
- "The evidence that interactive problem solving cannot be expressed by computable functions appears overwhelming. The non-algorithmic nature of interactive computation contradicts Church's hypothesis that the intuitive notion of computability is captured by Turing computability."
The second sentence of this quote is actually a stronger claim than the one made on this page. The fact that interactive models are more powerful (in the sense of being able to model a wider range of behavioural specifications) refutes some formulations of the ChurchTuringThesis, but not others. There is not really any single agreed formulation; see <http://plato.stanford.edu/entries/church-turing/>. For the application to the EntscheidungsProblem, it was sufficient to consider only non-interactive computations. For either the more general philosophical consequences or as a guide to what practical models of computers need to be able to express, it is not.
See also "The Origins of the Turing Thesis Myth", at <http://www.engr.uconn.edu/~dqg/papers/myth.pdf>:
- In this paper, we identify and analyze the historical reasons for the widespread acceptance of what we call the Turing Thesis myth, which extends the Turing Thesis to imply that Turing Machines model all computers.
So how strong is the claim being made on this page? I thought the quote above was the claim being made on this page. Which parts of this page are not claims being made on this page?
- It seems to me that there is a fundamental confusion going on here: UTMs are a model of computation in the abstract, not of computers as acting systems. The Church-Turing Thesis only applies to the ability of a ModelOfComputation to complete a stateless algorithm ('stateless' here meaning that the only persistent result of the computation is the computed value). It is not capable of modeling interactive systems because interaction by its nature is not a formal computation and AFAIK cannot be modeled by one (though I gather that doing so is one of the goals of both CommunicatingSequentialProcesses and the PiCalculus). Interactions have state, which UTMs do not (or more precisely, they have no state before and after the computation other than the starting and ending points). This confusion lends credence to the argument that the English language term 'computer' is an unfortunate one, as most of what one does with a computer is not, strictly speaking, computation. -- JayOsako, [2008:01:30-1830]
- I've always worked with the adage: Computation = Calculation + Communication. You can't actually perform a calculation without performing communication, and vice versa. The notion of 'state' is encompassed in communication (since signals exist in time or (depending on perspective) vice versa: time exists because there are signals). In the sense that I utilize the terms, the Church-Turing Thesis effectively claims that a UTM can perform any (digital) calculation. Other models of computation more fully encompass the communications and interactions.
The claims are:
- classical, non-interactive models of computation (the TuringMachine and the pure LambdaCalculus, to name two) are inadequate models for "interactive" computations, which are able to interact with their environment while the computation is in progress.
- interactive models such as the PiCalculus, ActorsModel, CommunicatingSequentialProcesses, etc. are (usually) adequate for this purpose.
- the difference in adequacy is due to interactive models having greater modeling power.
If interactive models can be emulated by TMs (everyone seems to agree to that elsewhere on the page), then how can they have "greater modeling power"? What is "modeling power" and how is it measured?
Copied from below:
The Litmus test of whether two models differ in power, should be whether it is possible to implement a program with a given behavioural specification in one model, but impossible in the other. Interactive models are certainly more powerful than non-interactive models by this criterion.
That means some folks here don't agree that interactive models can be emulated by TMs, because if they can then any behavior implemented in an interactive model can be implemented by a TM emulating that interactive model.
When an interactive model is emulated by one or more TMs or TM-variants, it is still an interactive model. It is also still a distinct model from a TM. So there is no contradiction in saying that a program cannot be implemented in the TM model, but it can be implemented in a model that could be emulated by one or more TMs. In the latter case, one of the following is true:
- multiple communicating TMs are used.
- modified machines that are only based on TMs are used.
- a single probabilistic TM is used, but it does not execute the program in a way that meets the program's behavioural specification, because all of the input is required in advance.
I don't buy it. If a program is implemented in an interactive model that is being emulated by a TM, then the program is implemented by a TM.
No, it is not. If you think it is, please explain how to implement a web browser that requires all its input in advance, and that cannot react interactively to any responses from other machines.
- One of us has a drastically different understanding of TMs than the other one. My current browser (IE) is implemented on a Pentium, which is a physical manifestation of a Turing machine. It (the TM) gets all of its input (the program) in advance. The program running on the TM gets input from other places, but not the TM it's running on.
There's nothing special about the word "emulated" that would negate the word "implemented". Every TM program emulates a theoretical machine. Every implementation is an emulation. And multiple communicating TMs can be emulated by a single TM.
{Even if every implementation is an emulation (which might be true in some hand-wavy sense) I think the converse is not true. An implementation of an emulation of Foo is not an implementation of Foo.}
If you have an infinite number of processing nodes in ActorsModel, etc., they're probably right. If you have a finite number of nodes, that claim smells. Note that any finite parallel model can be simulated on a Turing machine by interleaving computations: the Turing machine performs one step from each node in sequence.
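(A minimal sketch of the interleaving trick, in Python; representing each node as a generator that yields after every step is an assumption made purely for illustration:)

 # Sketch of the interleaving trick mentioned above: a single sequential
 # simulator performs one step of each (finite) node in round-robin order.

 def interleave(nodes):
     """Run a finite collection of step-wise machines on one sequential machine."""
     live = list(nodes)
     while live:
         still_live = []
         for node in live:
             try:
                 next(node)          # perform exactly one step of this node
                 still_live.append(node)
             except StopIteration:   # this node has halted
                 pass
         live = still_live

 # Example: two trivial "machines", each a generator taking a bounded number of steps.
 def counter(name, n, log):
     for i in range(n):
         log.append((name, i))
         yield

 log = []
 interleave([counter("A", 3, log), counter("B", 2, log)])
 # log == [('A', 0), ('B', 0), ('A', 1), ('B', 1), ('A', 2)]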
All of these models have a finite-but-unbounded number of nodes at any point in the computation - like the length of a TuringMachine tape. The point of the critique is not that an actor configuration can't be simulated - as a function from all its inputs to all its outputs - by a TuringMachine. It can, provided you know all of the inputs in advance, and only care about the output after the machine has stopped. Interactive models are more powerful because they model the fact that additional inputs can be submitted, and partial outputs obtained, while the computation is progressing.
Of course, there are various ways to model interaction using variants of TuringMachines, for example:
- Execute the machine lazily, by making it block when it sees an unmarked cell, and adding some mechanism for changing and reading out parts of the tape;
- Model a computation by multiple iterations of a TuringMachine applied to a state and I/O streams. This is also called a SequentialInteractionMachine? (a minimal sketch of this variant follows the list);
- Use "PersistentTuringMachine?s" that have multiple tapes and preserve a "worktape" between interactions; see <http://www.cs.umb.edu/~dqg/papers/foiks.ps>.
But this misses the point of the critique, since these are distinct interactive computation models built on top of the TuringMachine (i.e. a-machine) model.
A practical reason to use models that are specifically designed to express interaction, rather than lazy or iterated Turing machines, is that they are more tractable for that use. For instance, the ActorsModel, JoinCalculus, PiCalculus and CommunicatingSequentialProcesses all model encapsulated processes that can only affect each other's states in controlled ways. This makes them a good basis for the foundations of ObjectOriented languages (or imperative features of functional languages). I don't know of any proposal for a TuringMachine-based interactional model that is able to model encapsulation.
Uh, you're confusing apples and oranges (or perhaps I am, or someone else). Don't confuse computational power, which describes the set of algorithms that can be executed in a finite amount of time on a given machine, with expressiveness, i.e. whether or not the machine is convenient to program. The ChurchTuringThesis states that all intuitive (and constructible; things with infinitely many processors don't count) computational models are equivalent in computational power; my suspicion is that ActorsModel and the PiCalculus are exactly equivalent to the TuringMachine. I would be shocked were it to be demonstrated otherwise.
You've defined computational power as describing the set of algorithms that can be executed in finite time on a given machine. What is an algorithm? It seems that you're considering it to be a non-interactional computation - one that is defined by a function from its input (presented entirely in advance) to its output (read only after the algorithm has completed). Of course any definition of "computational power" that ignores interactive behaviour entirely, will ignore the additional power obtained by interactive computation.
Interaction with what? Interaction with a user? Interaction among several TuringMachines? Interaction with a hypothetical HyperTuringMachine??
Interaction with any real-world process (users, other computers or processes in the same computer, a CNC machine, etc.) That process is not part of the model, but the computation's interaction with it is.
If a TuringMachine is allowed to interact with a hypothetical machine that is more powerful than a Turing machine (in such a manner that the other machine could influence the TuringMachine's computations), then all bets are off. Likewise for interaction with users - it is unknown what computational model best represents the human brain, but there are numerous AI problems which we don't know how to solve with computers but are trivial for humans to solve.
Since the process that the machine is interacting with is not part of the machine itself, it should not matter (for most purposes) whether the model is able to simulate that process.
In any case, either there are real-world processes that cannot be modelled by a TuringMachine, or there are real-world computational processes whose interactions cannot be described by modelling the processes as TuringMachines. Either way, the view that TuringMachines are sufficiently powerful to model all computation is unjustified.
However, neither of these cases [interaction with hyper-Turing machines or interaction with users] is relevant to the issue of whether or not the PiCalculus or the ActorsModel is hyper-Turing. The interesting case is what happens when multiple Turing machines (though a finite number) are hooked up together - what is the computational model of such a thing? The answer, of course, is that a finite network of TuringMachines is no more powerful than a single TuringMachine.
Correct. But that does not address the issue at all. Such a network must have all its input presented in advance, and no output can be read until the combined TuringMachine has stopped.
Assuming that actor configurations are TuringEquivalent (likewise with processes in the PiCalculus), there is no reason to assume that an aggregation of many such entities (again, a finite number!) is somehow more powerful than a TuringMachine.
The point is not that aggregation increases power (although in general it can, as demonstrated by the fact that individual actors in the pure ActorsModel are not TuringEquivalent), but that the ability to perform interactive input and output increases power.
The Litmus test of whether two models differ in power, should be whether it is possible to implement a program with a given behavioural specification in one model, but impossible in the other. Interactive models are certainly more powerful than non-interactive models by this criterion. You can call this "behavioural power" rather than "computational power" if you like, but it is not the same as expressiveness.
Again, apples and oranges. I can't write a JavaApplet to format my hard drive (I could try, but it wouldn't get past the SecurityManager in the Java runtime, nor would it get past the OS, assuming I'm not dumb enough to run it as root), but that doesn't make JavaApplets non-Turing-equivalent.
It does make them less powerful, as a result of restricting what other processes are connected to the applet. (Incidentally, I have written JavaApplets that did get past the SecurityManager. -- DavidSarahHopwood)
You're correct in that I/O capabilities are orthogonal to the issue of computational power. You're incorrect in implying that I/O capabilities should nonetheless be construed to make something "more powerful" than a Turing machine - it's orthogonal.
Only if a definition of "power" is used that a priori excludes considering interactivity to be an increase in power - not for the "Litmus test" above that considers the set of behavioural specifications that it is possible to model.
Further, claims that the ActorsModel is somehow capable of richer I/O (or more powerful interaction) than a single-threaded process are sorely lacking in evidence - and given how computers are designed and built (simulating many processes on few or one processing element(s) is a well-known trick), the claim is extremely dubious. What do you think, exactly, that the ActorsModel can do that a program written with "industry standard" paradigms cannot? Keep in mind, we're discussing possibility here, not convenience - I don't doubt that there are things more convenient in the ActorsModel; that's irrelevant to the claims at issue here.
"Industry standard" computers are certainly interactive. Their behaviour cannot be modelled by a TuringMachine (other than by building an interactive model that is not itself a TuringMachine). How would a TuringMachine model the behaviour of a web browser or a Wiki server, for example? These can be accurately specified by typical interactive models.
- A TuringMachine has no source of non-determinism, they are strictly deterministic.
Yes, that's one of the differences. Interactive models always allow sources of non-determinism from external inputs, and may allow internal non-determinism. But note that a probabilistic TuringMachine that does not have access to I/O would still be less powerful than interactive models.
This is a strange sort of non-determinism being talked about. If I understand it correctly, the ActorsModel is said to be non-deterministic because messages race from sender to receiver. If we know the distances, the speeds and the starting times (and why shouldn't we?) then we know the order of arrival. It sounds to me like the ActorsModel proponents think ignorance of determinism is equal to non-determinism. -- EricHodges
"Non-determinism" is in practice used to refer to any unpredictability in the outcome of a process. IOW, we as observers cannot predict the outcome given our current knowledge. There are several factors that can lead to this:
- Ignorance of initial conditions.
- Ignorance of inputs.
- Model error. Our best known model of the process may be inaccurate.
- Physical non-determinism. The usual interpretations of QuantumTheory hold that physical events would be inherently unpredictable even given perfect knowledge of initial conditions, inputs, and model.
In attempting to predict something like the weather, for instance, all of these factors apply. They also apply to computer systems: for instance, the spinning of a disk drive (and therefore its observable response time) is subject to air turbulence, which again involves all of these factors.
In sequential computation, the non-determinism that is always present in the implementation of the computer system can be easily hidden. In most forms of concurrent computation (excluding DeclarativeConcurrency), it cannot be hidden. It really is not usually the case that we know the "distances, speeds and starting times" of messages - and even if we did know them, we would not want to have to rely on that knowledge.
A model of computation is a specification for a computer system. There is another kind of unpredictability that applies only to specifications: the spec may intentionally allow multiple outcomes from a given initial or intermediate state (this is sometimes called "don't care" or specification non-determinism).
There are two main reasons for introducing specification non-determinism:
- to model hardware that is actually unpredictable, or to allow such hardware to be used.
- to allow the model to be useful despite partial knowledge of the system. If two processes do not have any necessary dependency on each other, we do not want the model to introduce an artificial dependency. For that reason concurrency models do not specify details of scheduling, since it would introduce many artificial dependencies and make the model all but useless.
So in a sense, "ignorance of determinism" is a form of non-determinism. Does that answer your objection?
Not really. The distances, speeds and starting times are knowable quantities.
Knowable after the fact, but not predictable in an efficient implementation. The model must take that into account.
No, they are knowable before the fact. Efficiency doesn't matter.
A model of computation is a specification for a class of real machines. If any machine in that class can be non-deterministic (which may be partly motivated by efficiency considerations), then the model must reflect this.
Saying a TuringMachine is slower than an ActorsModel is qualitatively different from saying it's an "inadequate" model.
That is not what I'm saying. Speed is irrelevant if the model doesn't purport to be able to predict timing. Noninteractive models are inadequate because they cannot model interaction, and interaction is extremely important in practice.
It may not be what you're saying but it's in the quote at the top of this page.
It's no longer at the top, and is qualified as being a stronger claim than what this page is asserting.
The distance, speed and time are themselves deterministic.
The distance, speed and time are in general NOT deterministic. They are unpredictable due to all four factors listed above, and both reasons for introducing specification non-determinism apply.
The first two factors (ignorance) don't produce non-determinism.
Do you consider the outcome of rolling a well-shaken die to be non-deterministic? That is unpredictable due to precisely the same factors.
No, I don't. Complexity doesn't negate determinism. No matter how well ActorsModel shakes its dice, the dice behave deterministically.
Well, I submit that you have an unconventional view of non-determinism. This is tangential to the main point of this page, though: as pointed out below, interactive models can be deterministic, and non-interactive models can be non-deterministic.
Refusing to predict an event doesn't make the event unpredictable. Model error is an error and can be corrected. If the ActorsModel mandated amplification of quantum effects so that they decided which message won the race, then that would produce non-deterministic output. But it doesn't.
- Eric, you just do not get it. Run any "program" (the same tape) twice (or in parallel on two TMs) and you'll always get the same output. The capability of running the same program and getting distinct output is non-determinism. That's something TMs do not have. Therefore you cannot ever "model" a non-deterministic machine inside a TM, because you'd have to be able to run the same program twice and be able to obtain distinct results. The ActorsModel is non-deterministic by definition; it has nothing to do with QM or whatever. It's a mathematical abstraction, not a description of physical phenomena.
- I'm not saying you can model a non-deterministic machine inside a TM. I'm saying using the ActorsModel doesn't make a machine non-deterministic. It isn't non-deterministic by definition. ...
- Correct, using the ActorsModel does not make a machine non-deterministic. It models the non-determinism that is present in the machine for other reasons.
- {{And to add to David's point, ActorsModel is not physical. It is a mathematical abstraction. It is non-deterministic by definition. You won't get the same output because the mathematical machines are specified not to make deterministic choices. End of discussion. Whether you like it or not that's what it is. Arguing against that is arguing against a definition. The fact that you cannot imagine any physical machines to implement that has no bearing on the main point}}
- although the fact that such machines do exist has a bearing on the utility of the ActorsModel. {{Correct, that's why we need to have a separate discussion on the UsefulnessOfNonDeterminism, but Eric needs to acknowledge that discussing the practical utility of a mathematical theory is separate from discussing the theory itself. There's not only one imaginable category of physical implementations for a theory, and even if we accept for the sake of discussion that the universe is strictly deterministic - a philosophical claim, in essence, that is beyond our concern, but dear to Eric nevertheless - non-deterministic theories can still be practically useful. It's the same story with the axiom of parallels or the axiomatic construction of the reals. It may turn out that we can't correlate an axiomatic line with any "physical" line, and the same can be said about the axiomatic construction of the real and complex numbers; it may turn out that we live in an entirely discrete universe. I really do not care: the practical utility of a mathematical theory is not decided by whether the theory "admits" a "physical" model or not.}}
- Well, the ActorsModel is perhaps a special case - see UsefulnessOfNonDeterminism. -- DavidSarahHopwood
- ... Run the same program on two ActorsModel machines (and by the same program, I mean the same inputs, the same start times, the same everything) and you'll get the same output. The act of sending a message doesn't introduce non-determinism.
- No, you won't necessarily get the same answers. The ActorsModel can be implemented on machines that are physically non-deterministic. However, even if it is implemented on a physically deterministic machine, the model still needs to incorporate specification non-determinism, for the second reason given above - to allow the model to be useful despite partial knowledge of the system.
- The physical machine doesn't matter. I'm talking about the theoretical machine. It doesn't matter if the ActorsModel machine is built on partial knowledge of a system. Full knowledge of the system is possible, so it can be emulated with a TM. The TM programmer may have to know more than the (theoretical) ActorsModel programmer, but there's nothing in the ActorsModel itself that prevents the TM programmer from knowing those things.
- You're still completely missing why the model incorporates specification non-determinism. If you try to express the ActorsModel in terms of a TM in such a way that a user of the model needs to have full knowledge of the system, then that model will be utterly useless in practice. And if you do it in such a way that the user reasons only in terms of the ActorsModel, then why should that user care whether the model can be expressed in terms of a modified TM?
- {{Again, full knowledge of the system is not possible a priori. That's why it is non-deterministic. A TM has its input tape pre-defined. Other models have sources of non-determinism. All you did is "emulate" a run of an ActorsModel after the system actually ran. Otherwise, if you are given a machine specified in any non-deterministic formal system, before you can produce a trace of the run, you cannot produce a TM that will behave exactly the same, because TMs by definition are deterministic.}}
- {{And even after you had a run of a non-deterministic system, we still may not have a global total order on events (for example messages received) as they happened in the system. And this lack of global ordering is there by definition. A run of the TM will by necessity have a trivial global and total order on the processing steps. Therefore your claim that any interactive model can be "emulated" by a TM is refuted.}}
- Actually this argument is wrong for the ActorsModel [but see correction below]. There always exists a global total order on events, even though the model does not specify what it is. This is an entirely non-obvious property of the ActorsModel that takes several pages to prove: see section II and Theorem 1 in <http://www.cypherpunks.to/erights/history/actors/AITR-633.pdf>. Anyway, this has no bearing on simulability by a TM, because even a non-total ordering could be simulated after the fact if traces were defined as DirectedAcyclicGraphs rather than sequences. After-the-fact simulability is simply irrelevant to whether interactive models are more powerful.
- {{Actually the way I read section II (and I'm not very familiar with ActorsModel so ...) is that the initial definition did not provide for a global total ordering, then Clinger constructs one that is compatible with the partial ordering specified originally which is an axiom (first weak axiom of realizability) and then [adds] the strong axiom of realizability that restricts the initial ActorsModel. Well, so be it, we can define the ActorsModel to include this strongest axiom. But its addition to the system is justified by Clinger's claim that "some notion of global time is essential to any model of concurrent computation". This is kind of too strong of a claim. So will the real ActorsModel stand up? Is it with or without global time? I for one do not care much for the "global time" and most things (or most that I read anyways) in distributed systems assume only the usual "logical time" (aka Lamport's logical clock, etc) which gives us the usual partial ordering. Even if a global time existed we couldn't make use of it in any algorithms by any stretch of the imagination.}}
- Sorry, I described the "realizability in global time" property incorrectly. For any given partial ordering of events induced by the ordering laws, there exists at least one global ordering consistent with the partial ordering. However, that does not imply that any execution has a globally ordered trace, only that at least one possible execution does. So your argument "And even after you had a run of a non-deterministic system, we still may not have a global total order on events (for example messages received) as they happened in the system." was correct. (In fact global time may not even be well-defined, due to GeneralRelativity.) In any case, I was correct in saying that a TM could still simulate a non-total ordering after the fact (a small sketch of such a replay follows this thread), but that this is irrelevant to whether interactive models are more powerful.
- As for the differences between the models defined in "Actors and Continuous Functionals" and Clinger's "Foundations of Actor Semantics", this is a bit more complicated. The laws in "Actors and Continuous Functionals" did imply the existence of a global total ordering for at least one possible execution; that is Theorem 1. So Clinger doesn't add the strong axiom of realizability - he suggests that it may be weakened (although this weakening isn't actually very important because in practice we can always assume that a system has a well-defined initial state). Essentially Theorem 1 provides just a different (arguably less ad hoc) way of expressing the ordering laws, not a fundamental change in the model.
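(A minimal sketch of such an after-the-fact replay, in Python: given a trace recorded as a DirectedAcyclicGraph of events, a sequential machine can replay it in some total order consistent with the partial order. The event names are invented for illustration.)

 # Sketch (illustrative only): replaying a partially ordered trace sequentially.
 from graphlib import TopologicalSorter

 def replay_order(depends_on):
     """depends_on: event -> set of events that must precede it."""
     return list(TopologicalSorter(depends_on).static_order())

 # Example: e3 depends on e1 and e2, which are unordered relative to each other.
 print(replay_order({"e1": set(), "e2": set(), "e3": {"e1", "e2"}}))
 # e.g. ['e1', 'e2', 'e3'] - one of the total orders consistent with the DAG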
Ignoring them doesn't grant this model any abilities lacking in a TuringMachine. We could close our eyes while that part of the tape goes by and make the same claim for the TuringMachine. It wouldn't make the machine more powerful in any way. -- EH
- Closing your eyes is not a valid operation for TMs.
- Nor should it be for ActorsModel machines.
- Rolling a non-deterministic die is a valid operation in ActorsModel, period. End of the story. It is so by definition.
In any case, non-determinism is only one of the reasons why interactive models are more powerful, and not the most important one (I/O is more important). For instance, DeclarativeConcurrency is a deterministic interactive model:
                    increasing power
                    --------------->

 TuringMachine ------> probabilistic TM ------------------------.
 (non-interactive,     (non-interactive,                        |
  confluent)            nonconfluent)                           |
       .                                                        |
       .                                                        v
       .------> DeclarativeConcurrency ------> PiCalculus, ActorsModel
                (interactive, confluent)       (interactive, nonconfluent)

(confluent ~= "all computations have a deterministic outcome in terms of their explicit inputs".)
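(A minimal sketch of the confluent/nonconfluent distinction, using Python threads as a stand-in for the concurrency primitives of these models; the function names are invented for illustration:)

 # Sketch (illustrative only) of the confluent vs. nonconfluent distinction
 # in the diagram above, using futures as a stand-in for dataflow variables.
 from concurrent.futures import ThreadPoolExecutor

 def confluent():
     # Declarative concurrency: each value is produced exactly once, and
     # consumers block until it is available. The result is the same under
     # any scheduling of the two tasks.
     with ThreadPoolExecutor() as pool:
         a = pool.submit(lambda: 2 * 3)
         b = pool.submit(lambda: 4 + 1)
         return a.result() + b.result()      # always 11

 def nonconfluent():
     # Nonconfluent concurrency: the observable outcome depends on which
     # update "wins the race", so the model must allow several outcomes.
     cell = []
     with ThreadPoolExecutor() as pool:
         pool.submit(cell.append, "first")
         pool.submit(cell.append, "second")
     return cell[0]                          # may be "first" or "second"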
I understand that I can emulate any ActorsModel with a TuringMachine. -- EH
Yes, that has already been made clear above ("The point of the critique is not that an actor configuration can't be simulated - as a function from all its inputs to all its outputs - by a TuringMachine."). Read that section again.
The word "non-deterministic" seems to be used in several different fashions. In computing theory, we have the "non-deterministic Turing machine" - which has entries in its transition table with more than one possible state, and an "oracle" to guide it to the correct next state). This device can solve many problems faster than a traditional Turing machine (in particular, it can solve NP problems in polynomial time - that's the definition of NP, after all). However, there is no problem that a NDTM can solve that a DTM cannot - the two are computationally equivalent. Obviously, we cannot build NDTMs, as we have no way to fashion the oracle. At any rate, "non-deterministic" in this sense is a completely different thing than non-deterministic meaning "exhibiting unpredictable behavior". The ActorsModel is non-deterministic in the latter sense - the order in which two messages sent from two actors arrive is not specified - but it isn't a computationally helpful non-determinism.
Yes, "non-deterministic" is used in several different fashions. This is conventional usage and there is little that can be done about it, other than to disambiguate where necessary, as the discussion above tries to do.
NDTMs are a bit of a RedHerring here. First, as you point out they are unimplementable. Second, they are misnamed: an NDTM isn't non-deterministic according to any of the variants of non-determinism discussed above; it just somehow picks the "correct" next state. The probabilistic TM in the diagram above is not an NDTM (and unlike an NDTM, it is directly implementable).
- A probabilistic Turing machine isn't any more powerful than a deterministic one. PTMs are useful in that for some hard problems, repeated algorithms involving "guessing" can converge to the correct result (and do so probabilistically faster than deterministic algorithms).
- Again you're missing the point by focussing solely on pure functions. What if we want to model cryptographic constructs, for example?
As for whether the increase in power provided by interactivity is "computationally helpful", that is completely missing the point. An actor configuration cannot compute any pure function that cannot also be computed by a TuringMachine. However, it can model I/O. This is extremely useful, and represents an increase in modelling power, regardless of what pure functions can be computed.
- Modelling power != computational power. This is, I think, the crux of the argument - which is why I added the "for some definition of powerful" line above (which you deleted). The ChurchTuringThesis is couched in terms of computational power, not modeling power (indeed, we have no formal means to measure "modeling" power), so the claim that ActorsModel invalidates the ChurchTuringThesis is highly suspicious.
- <http://www.cs.brown.edu/people/pw/papers/bcj1.pdf> formalizes a notion of power that seems to be consistent with the one used on this page. The paper calls it "expressiveness", but don't be confused by that: it is talking about what behaviours are possible, not whether they are conveniently expressed.
For that reason, I suspect some of the initial claims above are a tad vacuous. Claiming that a model which handles I/O is more powerful at handling I/O than a model which ignores I/O altogether strikes me as a tautology. It may be true, but so what?
So, we should not attempt to use models that cannot handle I/O to model real-world systems in which I/O is required. This implies that TuringMachines and the pure LambdaCalculus should not be used to model such systems.
- No, it implies that they are limited in scope. As mentioned above, there are ways (though inconvenient) to shoehorn I/O into the TM.
- Yes, it is possible to build an interactive model in terms of a TM. I'm not sure why you would want to, given that other interactive models are much more tractable.
- And as for the computational aspects of systems (which there are many), the LambdaCalculus and the TM are perfectly good models to use - the presence of I/O in a system doesn't invalidate use of either in portions of the system which are algorithmic in nature. I view ProcessCalculi as important additions to the mathematical toolbox, not replacements for the algorithmic calculi.
- It wasn't argued that they are replacements for non-interactive algorithmic calculi. These calculi are still useful largely because they are less powerful. See PrincipleOfLeastPower.
ActorsModel cannot compute functions that TuringMachines cannot compute, we agree on this point. ActorsModel takes I/O into account, TuringMachines do not. About the best you can really say is that ActorsModel, PiCalculus (or other ProcessCalculi like CommunicatingSequentialProcesses) are better at modelling RealWorld computing than purely mathematical constructs like TuringMachines - and I'll agree with you there. TheoryOfObjects discusses this at length, go have a read at that book. Some of the phrasing of the initial claims above leads one to believe that the ActorsModel invalidates the ChurchTuringThesis - but the ChurchTuringThesis itself only deals with computation of functions, not with I/O.
Church's Thesis only deals with computation of functions. Some of Turing's papers made much more general claims about the sufficiency of TuringMachines (or equivalent) as a model for all computation. For example, in "Computing machinery and intelligence"
- http://www.abelard.org/turpap/turpap.htm#universality_of_digital_computers:
- This special property of digital computers, that they can mimic any discrete state machine, is described by saying that they are universal machines. The existence of machines with this property has the important consequence that, considerations of speed apart, it is unnecessary to design various new machines to do various computing processes. They can all be done with one digital computer, suitably programmed for each case. It will be seen that as a consequence of this all digital computers are in a sense equivalent.
- And in a large sense - that's true. It still remains unnecessary to design specialized machines. Of course, the machines we do have are not strictly Turing a-machines with strict provide-input/crunch-algorithm/read-output phases, but ever since the advent of the interactive terminal (which freed computers from being limited to batch processing - an event that occurred long after Turing's death), we've had to deal with I/O. And the machines based on his model, augmented with whatever I/O is appropriate, still remain sufficient for all computational tasks.
- It sounds like you're agreeing with me: to formally model interactive computers we need to use interactive models, not non-interactive ones.
- ActorsModel and the like are perhaps better formalisms for modern machinery, but that's all they are - formalisms. ActorsModel hasn't enabled us to compute things we couldn't previously compute before (though it probably has made things more convenient).
- This is a very strange argument. TuringMachines and the pure LambdaCalculus are just formalisms. The question is which kinds of formalism are adequate to the task of describing interactive computer systems.
- The PiCalculus is an attempt to formalize a computing paradigm (OO) that arose in industry despite not being previously formalized. In this sense, they are useful. However, some of the links above seem to suggest that Turing, and the ChurchTuringThesis, have been overthrown or invalidated by the ProcessCalculi - which strikes me as rather excessive grandstanding.
- Who said anything about having "overthrown" Turing? As stated above, "the position asserted on this page [...] is that interactive models are more powerful than non-interactive ones - not that interactive models are more powerful than any computation model invented by AlanTuring."
- The ActorsModel and JoinCalculus are closer to OO than the PiCalculus, incidentally.
The implications of this for programming languages? Practically zilch. Other than AlgolSixty, which was notorious for its lack of I/O facilities as part of the language definition (see AlgolSixtySyndrome), all interesting programming languages can do I/O.
Yes, the issue is what kind of model should be used to describe computation in those languages.
In other words, they are equivalent to Turing's c-machine. And my strong suspicion is that the ProcessCalculi are equivalent to the c-machine.
"Turing's c-machine" is defined by the following two sentences in a single paper:
For some purposes we might use machines (choice machines or c-machines) whose motion is only partially
determined by the configuration (hence the use of the word "possible" in �1). When such a machine reaches
one of these ambiguous configurations, it cannot go on until some arbitrary choice has been made
by an external operator.
Is this an adequate description of a model of computation? I don't think so. In any case, if the definition of a c-machine were expanded on in order to allow useful things to be said about it, it would be an interactive model.
Here's the thing: once you add (pure) interactivity, formality goes out the window. It's no longer a "closed" system. The question then is: what is the notion of computation once interactivity enters in? The answer: well, either the other user/machine is substituting for some of your computation (which can be described by just another, bigger, state machine), or you're no longer in a formally-definable system and you're now playing a game, in which case there is no application of the ChurchTuringThesis. --MarkJanssen
Interactivity in no way diminishes the need for formality. Interactivity only, at best, increases the number of external inputs and/or outputs of a system. I'm not clear why you conclude that "playing a game" results in "no application of the ChurchTuringThesis".
Interactivity may or may not diminish the "need" for some kind of formality, but it may make it undecidable, in which case, formality is meaningless. This is why I call it a game -- which, sloppily, I suppose is just saying that it is no longer deterministic, and therefore the notion of computation is not applicable, because "computation" is practically defined by a deterministic mapping between input and output. In a game, this cannot be so. --MarkJanssen
Non-determinism does not preclude formalism. They are orthogonal.
Perhaps in abstract math, but not in the machine, where things must be formally and exhaustively deterministic. The fact that some of these deterministic formalisms are unfamiliar to you does not make this any less true.
{With what "deterministic formalisms" do you believe your correspondent lacks familiarity?}
{In the machine, things are not necessarily "exhaustively deterministic". Some encryption mechanisms rely on non-determinism, for example, and useful computational processes can be rendered non-deterministic by depending on a hardware random number generator or external input.}
See also ModelsOfComputation, ActorsModel, ProcessCalculus, PiCalculus, CommunicatingSequentialProcesses, ChurchTuringThesis.
CategoryMath CategoryConcurrency