Part 1: Halting Problem
The HaltingProblem states that it is impossible to write a program, itself guaranteed to run in finite time, that can decide whether an arbitrary program will halt (on a given input).
The HaltingProblem does not state that it is impossible to decide for a specific program whether it is going to halt. It is even possible to write programs that make this decision; they simply work only on a subset of all programs.
To be correct about terms, the HaltingProblem is actually the question, "Given this code X, will it terminate (halt)?" Usually the only reason to formulate the question this way is to give an example of a problem for which there is no recursive algorithm, i.e. an algorithm that will itself always halt with a result.
See Also: WhatDoesHaltingMean
Here is the proof of the HaltingProblem:
- Assume I have a program P that can tell you whether any program halts or not for given input data.
- I construct a program Q based on P which, if P says its input program doesn't halt, immediately halts, and if P says the program halts, goes into an infinite loop.
- Feeding Q(Q) to P I can see that if P says it halts, it won't, and if P says it doesn't halt, it will.
- Therefore I don't have any such program P and anyone else who says they do is full of it.
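The construction above is short enough to write down. Here is a minimal sketch in Python, where halts(source, input) stands in for the hypothetical program P (the name and calling convention are inventions of this sketch, not anything real):

  # `halts` is the supposed halting decider P; the point is that it cannot exist.
  def make_q(halts):
      def q(source):
          if halts(source, source):   # P says "this program halts on itself"...
              while True:             # ...so q loops forever,
                  pass
          return "halted"             # ...otherwise q halts immediately.
      return q

  # Now ask P about q applied to its own source, q_source:
  #   halts(q_source, q_source) == True   forces q(q_source) to loop forever;
  #   halts(q_source, q_source) == False  forces q(q_source) to halt.
  # Either answer is wrong, so no such `halts` can exist.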
WhatDoesHaltingMean?
That argument applies only to a TuringMachine or equivalent. A real machine has a finite amount of state, so you can just run the program in question until it either halts or repeats a state. The machine P runs on would have to be larger than the one Q runs on (a TortoiseAndHare approach would have P's machine large enough to contain two copies of Q's machine state), so Q cannot fully simulate P.
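The TortoiseAndHare idea can be written out directly. A sketch, assuming the machine under test is given as a deterministic step function returning the next state, with None marking the halt state (both conventions are assumptions of the sketch):

  def halts_finite(step, start):
      # Floyd's tortoise-and-hare: the hare advances two states per turn,
      # the tortoise one; only two copies of the machine state are kept.
      tortoise, hare = start, start
      while True:
          for _ in range(2):              # hare takes two steps
              hare = step(hare)
              if hare is None:
                  return True             # reached the halting state
          tortoise = step(tortoise)
          if tortoise == hare:
              return False                # a state repeated: it loops forever

  print(halts_finite(lambda s: None if s == 0 else s - 1, 10))  # True
  print(halts_finite(lambda s: (s + 1) % 3, 0))                 # False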
Quite so. But the general HaltingProblem isn't interesting because it indicates a limit to what debuggers do. It's interesting because (a) it's demonstrably insoluble and (b) it's easy to transform large classes of problems into it. This is why it's one of the fundamentals of computability.
Are there any known interesting problems to which this theorem is applicable?
Yes. The answer to the question, "Can I please have 3 million dollars for my research project to detect by inspection whether a program will halt?" is no. Surprisingly useful. Just as the claim "Ha hahahaha, my computer's instruction set is more powerful than yours!!" is nonsense, thanks to how quickly it can be established that they are all TuringMachine equivalent. That saved decades of advertising one-upmanship.
Actually, the ChurchTuringThesis hasn't been proved and isn't proved by alluding to the general HaltingProblem. It just hasn't been disproved. Cf. the ScientificMethod.
However, that doesn't affect the fact that proving a computer's instruction set Turing-equivalent demonstrates that it is no more powerful than a Turing machine.
Known interesting problems... yes. "Will this line of code ever be executed?" If we replace the line of code with the HALT instruction, we can see the question is equivalent to the halting problem. (Hence perfect optimization is impossible.)
"Does this program do the same thing as this other program?" If we choose as our second program the one-liner "HALT" (which always halts), we can see this question is, in general, equivalent to the halting problem.
The theorem shows there is a limit to what you can do with a static type checker. If you make a static type system too powerful, it becomes UniversalTuringMachine-capable, and hence you may have to solve the halting problem before you can say whether a given program is type-safe. (Avoiding this is essentially what the word "static" means here.) This is why designing a good static type system is hard, and part of why so many of us prefer dynamically checked languages like Smalltalk.
There are lots of more-or-less trivial variations. I was recently asked to give a bound on the stack size for an arbitrary program. Clearly we can construct a program which halts if and only if a given program exceeds such and such a bound, so finding the bound is equivalent to solving the halting problem.
In a way, this theorem is saying there are no short cuts. Turing earlier showed that a UniversalTuringMachine can tell you what any other TuringMachine will do by emulating it, which is a powerful positive result, but the halting theorem says that emulating it is the only way, which is a powerful negative result. Turing gave with one hand and took away with the other.
You can think of many (perhaps all) physical processes as being computations, with inputs, outputs and evolving states. A UniversalTuringMachine is rather a simple machine, therefore we might expect many physical processes to be UniversalTuringMachines if you look at them right. Hence predicting their behaviour by way of short cuts like partial differential equations is not going to work. Not only is computer-modelling a respectable approach to physics, it may ultimately be the only respectable approach. -- DaveHarris
But...
It's also a good excuse for giving up too early.
For example, if you want to know the maximum stack depth of a program, and find that there are no recursive loops of calls in the program, then finding the maximum depth is really quite doable.
However, if there's a loop in the static "call tree" of the program, then you're back to the halting problem - and can't be sure how much stack will be needed.
-- JeffGrigg
Unless you happen to be able to prove some relevant property of the specific algorithm it implements, and prove that it does implement that algorithm, of course. I hesitate to make this comment here, because it tends to confuse the student more than it clarifies things, prompting thoughts such as "What about a program which will either prove that a TuringMachine halts, or prove that it is impossible to do so for that specific TuringMachine?" (which are answered by other theorems...), but I trust that anybody who considers this seriously harmful will delete it. Wiki is fun that way. -- DanielKnapp
Another important consequence of the halting problem is being unable to know whether a given mathematical formula is a theorem or not (given a set of axioms and inference rules). It is possible to construct an effective enumeration of all theorems: start with the axioms, then the set of all formulas that can be deduced from combinations of axioms and the inference rules, then the set of all formulas that can be deduced from combinations of the formulas earlier listed and inference rules and so on.
You can create a program P such that given a formula f as its input, it traverses the enumeration of all theorems and stops when it finds f (and, if f is not a theorem, it never stops as the enumeration is infinite). If we could decide whether P, given a certain formula, will halt, we would be able to know if a given formula is a theorem or not.
Unfortunately, we are not.
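A sketch of that program P, with the formal system passed in as parameters since none is fixed here: axioms is a set of formulas, and consequences(known) is assumed to return everything derivable from known in one application of the inference rules.

  def find_theorem(f, axioms, consequences):
      # Breadth-first enumeration of theorems: halts iff f is a theorem.
      known = set(axioms)
      while f not in known:
          known = known | consequences(known)
      return f

  # Toy system: axiom "a", one rule "append b". Halts, because "abb" is derivable.
  print(find_theorem("abb", {"a"}, lambda ks: {k + "b" for k in ks}))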
See also: GeneralHaltingProblemProblem
It is false to consider the HaltingProblem a major thorn in the side of those who would like to be able to use mathematics to prove whether or not a program is correct. Such proofs are carried out, just as many specific programs are shown to halt. It is just impossible to generalize this into an algorithm or program which works on all programs.
But wait a minute - doesn't this imply a constant amount of time for each instruction the machine executes? If we could build a computer that executes each instruction in half the time of the previous instruction, each instruction still will have a non-zero execution time, but it would be possible to execute an unlimited number of instructions in limited time. -- MW
If (something false) then (anything you want to do can be true).
Also, "time" is measured in terms of "clock cycles", not "clock cycles per second."
I am reminded of TheInfinityMachine?, see http://www.chiark.greenend.org.uk/~sgtatham/infinity.html -- MatthewAstley
It has been speculated that the physical conditions in the universe under gravitational collapse could enable computation in this manner (exponentially decreasing time needed per computation), due to the contraction in space and increase in energy density. See http://www.math.tulane.edu/~tipler/summary.html
From: http://www.math.hawaii.edu/~dale/godel/godel.html#HaltingProblem, a page about GoedelsIncompletenessTheorem:
HaltingProblem Theorem. There is no program R(p,i) which, for each program p and each input i, can correctly determine ("yes" or "no") whether p halts on i.
Proof.
- Suppose there is such a program R(p,i).
- Let h be the program which on input p computes R(p,0), R(p,1), R(p,2), ... until it finds an i such that R(p,i) is "no".
- On finding such an i, it outputs i and halts.
- If there is no such i, it searches forever and doesn't halt.
- Now for any program p, we can decide whether or not p halts on all of its inputs:
- p doesn't halt on all its inputs iff
- h does halt on input p iff
- R(h,p) is "yes".
- Contradiction: by the lemma above, this is undecidable.
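The program h in that proof is mechanical, given the supposed R. A sketch with R passed in as a parameter, since R is exactly the thing that cannot exist:

  def h(p, R):
      # Search R(p,0), R(p,1), ... for an input on which p does not halt.
      i = 0
      while R(p, i) != "no":
          i += 1
      return i   # halts (outputting i) iff p fails to halt on some input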
See also:
http://www.cs.washington.edu/homes/morrison/598/computability.html
One should note that the proof above applies only to a general purpose halting prover. If the program to be decided (p) is restricted to be smaller than the prover (R), then R(R, <any input>) is not allowed. Note that the size of the prover (R) could be much larger than the largest program allowed (p).
For instance, consider a limited prover which determines whether any 16-byte program in 6502 assembly language will halt given less than 8 bytes of input i. It is conceivable that a small problem like this could be solved using a prover R which is much larger than 16 bytes. Indeed, one could imagine a 4-gigabyte prover program which could decide the halting problem for all 1024-byte sequences of 80386 code with up to 512 bytes of input. (I'm not volunteering to write it.)
- Why not just use a big lookup table?
- Because there are 2^8192 different 1024-byte sequences (if we agree that a byte consists of 8 bits). Considering that there are only about 10^80 atoms in the universe, it is just impossible to save all byte sequences, even if every atom in the universe could hold a googol bits. -- PhilipBusch
Another way around the problem is to create environments where programs always halt. (One could argue that some operating systems act this way in practice. :-) For instance, in a Java environment one could count bytecodes and halt after a certain number. The prover in this case is trivial (if the programs cannot alter the counter).
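A sketch of such an always-halting environment, with the program again modelled as a step function over states (an assumption of the sketch, standing in for the bytecode counter):

  def run_bounded(step, state, budget=1_000_000):
      # Every run halts: either the program stops on its own,
      # or the instruction budget runs out and we stop it.
      for _ in range(budget):
          if state is None:        # convention: None marks normal termination
              return "halted"
          state = step(state)
      return "killed"              # budget exhausted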
Still another approach would be to create a compiler which rejects programs unless they are proven to terminate. Such a compiler may reject otherwise legal programs. One would build programs by making legal transformations which always guarantee the program will halt. (One rule might prevent altering a pointer unless one could guarantee that it did not cause a circular data structure. Alternately, a possibly-circular structure could only be traversed using algorithms which terminate.)
In short, while it is not possible to write a completely general halting prover, it may be possible to write a prover for any finite interesting program. I suspect that RealTime theorists are still working on the related problem of ensuring that certain instruction sequences will complete within a certain time period. -- OpenAuthor
Er, so the prover itself wouldn't qualify as a "finite interesting program"...?
Great clarification, thank you. But does this lend any hope to the prove-the-program-correct people? Their cause looks pretty bleak to me.
Not so. It is possible to make a contribution without reaching a stated (or implied) goal. A quick web search for WeakestPrecondition? seems to indicate that that approach to reasoning is valued in the cryptographic community at least.
See also EwDijkstra, A DisciplineOfProgramming, Prentice-Hall, 1976.
The fact that there are programs that cannot be proven to halt does not mean that people who want to prove programs correct are wrong. In general, the people who are trying to prove programs correct are not trying to take some random program and automatically prove things about it. Instead, they are constructing programs that they can prove things about. Programs should be understandable. If you can't prove that a program even halts then it is going to be hard to understand. Don't write programs like that!
The problem with proving program correctness is that it is too much effort for too little benefit. The benefit varies, of course. There have been a few times in my life where it was worthwhile to prove some properties about a program, and I did it. But most of the time, it is not worth the time involved. -- RalphJohnson
Here's another way to think about it:
Programs can be written that are equivalent to mathematical problems. For example, you could write a program that searches for a counter-example to Fermat's last theorem. If it finds one, it halts.
If you had a tool that solves the halting problem for any program, it would automatically be able to solve all sorts of hard mathematics problems with no work. It's not reasonable to expect such a magical tool to exist, so we shouldn't be surprised that it doesn't.
However, does this say anything about the programs we actually write? Many of them aren't equivalent to tough mathematical problems. A program prover that works for many cases but sometimes says "I don't know" would still be quite useful. At least, it would focus our attention on the cases it can't prove.
-- BrianSlesinsky
It seems to me that it might be useful to write a program that took a time-limit and had three return values:
- The program provably halts
- The program provably loops forever
- It could not be determined either way with the resources allotted.
The third return value might also provide a means to continue the search (a continuation, in Scheme parlance).
Really I don't think the complexity of the halt-detector is a concern so much as the time or memory that it uses. Here's my halt detector:
Define such a thing as the "state" of the program under test, including the pointer that points to the next part of the program's input. Start the test-subject in its start state. Move the test-subject to its next state, repeatedly - but remember all the states the test-subject has ever been in. If the test-subject moves into a state it has been in before, then it will loop forever. If it terminates, then it will provably terminate. The concern is that the test-subject will just accumulate distinct states without ever terminating; then, your memory will run out! You can trade memory for time; you can, after all, calculate any state on demand, given the start state and the number of moves to take. But still, if the program can keep coming up with distinct states, it can out-run the halt detector.
A human would be able to detect patterns in the states and perform induction upon them. What's hard is figuring out how to get the halt-detector to do that. Even if it does do it, it still falls prey to the original problem: somebody who knows what sequence patterns it will detect can make a sequence which follows a pattern it won't detect. Feeding the halt-detector a version of itself guarantees that this will occur.
-- EdwardKiser
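Here is a sketch combining the two ideas above - the remembered states just described, with the three-valued answer proposed earlier when resources run out (same step/state conventions as the earlier sketches):

  def halt_detector(step, state, max_states=1_000_000):
      seen = set()
      while len(seen) < max_states:
          if state is None:
              return "provably halts"
          if state in seen:
              return "provably loops forever"   # deterministic, so the cycle repeats
          seen.add(state)
          state = step(state)
      return "undetermined"   # memory for states ran out; `state` could seed a resumption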
It isn't necessary to know how to find a pattern; its existence is enough. If such a pattern exists it is possible to create a proof enumerator which enumerates all possible proofs until it finds the right one describing the pattern and the induction upon it. It'll take some (lots of) time, but it is computable. -- AnonymousForPrivacyReasons
The previous solution gets considerably harder once you factor in the fact that the "verifier" should cope with more than a single input per program. What if the program being verified could accept input of any length? By definition there is an infinite set of states. -- ArnonKlein
The way the problem has been classically defined, the particular input of the program is part of the question. In other words, we are not just asking if program X halts; we are asking if program X halts when given input Y. Whether the machine halts when given the same program but input Z is considered to be a separate question. -- EdwardKiser
There have been a few times in my life where it was worthwhile to prove some properties about a program, and I did it.
LoopInvariants? are a great help in writing all but the simplest loops.
Similarly, StateInvariants? are helpful in writing classes with non-trivial state. These can sometimes be checked at runtime using assertions to help in testing.
But the most useful proof technique is abstraction, i.e. specifying what is required separately from how it is implemented. Abstraction is crucial to good class and subsystem design. SystemMetaphors are often stated in terms of abstractions of the components of the system.
-- GlynNormington
But does this lend any hope to the prove-the-program-correct people? Their cause looks pretty bleak to me.
This is a common misunderstanding of the HaltingProblem Theorem.
The HPT does not mean that, given a particular program, it is impossible to prove that the program halts. Proving that a particular program halts can be arbitrarily difficult, but for any program that is understandable by a human being, it is usually trivial.
So there is actually no theoretical problem with the idea of proving programs correct. As a matter of fact, people are doing these things as we speak.
-- StephanHouben
Besides, a perfectly correct program needn't necessarily halt; consider a program that finds solutions for FermatsLastTheorem or simply think of the implementation of an endless loop. Both can be correct, but won't halt if they are. -- PhilipBusch
"Proving that a particular program halts can be arbitrarily difficult, but
for any program that is understandable by a human being, it is usually trivial."
- *Usually* trivial. For example, the naive program that halts if there is an odd perfect number and keeps searching if there is not is extremely simple. However, since it is not known whether there is an odd perfect number, it is also unknown whether the program halts. But the programmer will probably lose his funding after a few years of searching without results, so he'll close the program - it will halt.
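That naive program really is only a few lines. A sketch that halts if and only if an odd perfect number exists - and nobody knows whether it does:

  def sum_of_proper_divisors(n):
      return sum(d for d in range(1, n) if n % d == 0)

  # Halts iff some odd number equals the sum of its proper divisors.
  # Whether this loop ever ends is an open problem in number theory.
  n = 3
  while sum_of_proper_divisors(n) != n:
      n += 2
  print(n)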
The real question with proving programs correct is not: Can I prove this program correct? But: Can I build all interesting and useful programs using only two things:
- Things that we know about (they either halt, or they don't, and we know which - in other words, instructions), and
- Rules that allow us to combine these things in such a way that we still know about them (in other words, the most restrictive programming environment ever created).
It's actually fairly easy to set up systems using these two components; however, the programming exercise becomes a constant battle with the machine, trying to figure out how to express your ideas using only those constructs the machine will find acceptable. Oh, and don't expect to be able to feed this thing your employee payroll database as input if you have any kind of complexity going on...
Yes, the HaltingProblem is often misunderstood:
Some years ago, I was working on a CASE tool project in the R&D division of a large multi-national company.
They claimed that their CASE tool, and all the code it produced, was "provably correct" because they added a counter to every loop, ensuring that it would terminate.
Instead of...
  while (not eof(file)) {
      read;
      process;
  }
They'd do this:
  counter = 65535;
  while (not eof(file) and counter-- > 0) {
      read;
      process;
  }
As you can see, proving that their programs terminated was a trivial exercise:
All loops had a fixed maximum number of iterations.
(And they did no recursion.)
However, in fact, their programs often crashed, often produced incorrect results, and the code they generated often didn't work.
I can prove that my Windows OS terminates. But I don't think that the BlueScreenOfDeath is a good thing. ;-> -- JeffGrigg
A rarely thought about consequence of the HaltingProblem is that if you have a LittleLanguage that you really want to optimize well, it may be worthwhile to keep it from being TuringComplete. For instance a RelationalDatabase is supposed to analyze and optimize queries, so SQL is not TuringComplete.
-- BenTilly
Why can't you implement a Turing machine in SQL?
Because any possible SQL statement can be implemented by a program that you can easily prove halts. This is because there is no construct in SQL with which you can implement a general loop. -- BenTilly
Are you sure? I'm no SQL expert, and I know SQL has no explicit looping mechanism, but how is it that you know some perverse and impractical combination of other SQL constructs and tables cannot be made to simulate a loop? I can believe that the relational algebra is not TuringComplete (I've read theoreticians who ought to know claim as much), but SQL gives you more than the relational algebra. For example, it seems I could model a "general loop" by writing a query that reads rows from some table T, and then adds new rows to the end of that same table T. Plus, if you look at the mathematics of relations, lots of non-trivial computations can be easily modelled in SQL, such as finite arithmetic, logical satisfiability, general constraint satisfaction, etc. SQL may well be Turing-incomplete, but to me, at least, it's far from obvious.
I am sure. The execution of any select statement can be implemented by constructing the set of all combinations of all tables mentioned in the FROM clause, removing those rows that do not fit the conditions specified in the WHERE clause (if present), applying the grouping rules specified in the GROUP BY clause (if present), filtering again by the conditions specified in the HAVING clause (if present), sorting this output according to the ORDER BY clause (if present), and then displaying results as specified in the SELECT clause. This is a program that executes in polynomial space and time, where the polynomial has the degree of the number of tables specified either directly in the FROM clause or mentioned in any way in any subquery.
Every other basic type of SQL statement can also be implemented naively in polynomial time. (Ideally, of course, your database finds a more efficient representation.)
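The terminating evaluation strategy just described can be written out literally. A sketch for a toy two-table SELECT, with the tables, WHERE predicate, ORDER BY key, and SELECT projection all passed in (names illustrative):

  from itertools import product

  def naive_select(tables, where, order_key, project):
      rows = list(product(*tables))           # FROM: polynomial-size cross product
      rows = [r for r in rows if where(r)]    # WHERE: bounded work per row
      rows.sort(key=order_key)                # ORDER BY: n log n
      return [project(r) for r in rows]       # SELECT: projection/display

  # SELECT t1.name, t2.name FROM t1, t2 WHERE t1.id = t2.id ORDER BY t1.id
  t1 = [(1, 'a'), (2, 'b')]
  t2 = [(1, 'x'), (3, 'y')]
  print(naive_select([t1, t2],
                     where=lambda r: r[0][0] == r[1][0],
                     order_key=lambda r: r[0][0],
                     project=lambda r: (r[0][1], r[1][1])))    # [('a', 'x')]

Every stage visibly terminates, which is the point: there is nowhere for an unbounded loop to hide.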
- Indeed. In fact, any computer program could be viewed as a series of constant-time bit-setting operations. I don't see how this relates to the halting problem. There are plenty of programs - and SQL queries - that cannot run in polynomial time with respect to the size of their input.
- Nope! The important thing about computer programs is not bit-setting in variables, it's that they can do arbitrary (non-deterministic, potentially infinite) looping and/or recursion. SQL allows neither. It is quite trivial to prove that SQL allows neither. You can't write an infinite loop in SQL no matter what you do. It is also the case that you can't have TuringEquivalence? without such. As a comparatively minor side-issue, your assertion that SQL queries cannot run in polynomial time of their input is misleading to the point of being incorrect - they always run in polynomial time of their input, when input is defined to be both the query plus the database size. You presumably are thinking only of the query as input, but that is an erroneous approach; the result of such analysis is not helpful, whereas including database size as part of the input does yield a useful analysis.
- Define the size of the input. My definition is the amount of data in the tables being processed. (The length of the query is, of course, the length of the program. Which has nothing to do with its algorithmic scaling.) If you disbelieve that with this definition of input there are plenty of SQL queries that cannot be run in polynomial time, then I would be interested to see a specific example. -- BenTilly
This means that it not only halts, but is guaranteed to do so in polynomial time.
- No. It doesn't follow from all of a language's operations being polynomial-time that all programs in that language halt. It doesn't even follow that all halting programs do so in polynomial time with respect to their input size (which is the standard definition of polynomial and exponential time). Consider this loop:

  while 1 == 1:
      pass

- All the operations in this loop run in polynomial time, so by your argument, this halts in polynomial time. How can you be sure that some cheap trick like this can't be played in SQL to create a non-terminating query? Why can't I make a query on some table T that contains a sub-query that, say, keeps appending rows to T, so that the enclosing query never hits the end of T?
- You cannot make that query because there is no way to specify it in the SQL language. You cannot have a query that recursively refers to itself. Queries may have subqueries, and subqueries their own subqueries, but there is only a finite amount of nesting.
- What about "recursive WITH" in SQL-99? (used for parts-explosion and similar "transitive closure" queries)
- More specifically your example does not follow my argument. My argument is that any given query has a formal description of how to execute it where you construct a polynomial amount of data (the set of combinations of all tables in the queries), and pass this data through filters that are at worst polynomial per row, grouping and sorting operations that are n*log(n), other filters that are at worst polynomial, and a display operation. Since this comes down to doing polynomial things on a polynomial amount of data, the result is polynomial.
- A full proof would have to proceed by strong induction on the depth of nesting of subqueries. Any SQL select statement without subqueries can be executed by taking a polynomial amount of data (size the product of the FROM tables), passing it through a series of constant-time filtering operations (as specified in the WHERE), then possibly sending the whole through an n*log(n) grouping operation (if GROUP BY is present), possibly followed by constant-time per-output-row filtering (specified with HAVING), followed by an n*log(n) sorting operation (ORDER BY), and then the display operation (SELECT).
- After that you can demonstrate that if every depth n or less SQL query can be executed in time that is polynomial of degree the number of tables times some sub-linear factor (from the n*log(n) operations), then so must be every depth n+1 SQL query. Which proceeds as before except that various operations that were previously constant-time might involve subqueries that execute in polynomial time.
- But I've written an infinite loop as a single SQL statement. --BottomMind
Another well-known LittleLanguage that is not TuringComplete is any type of RegularExpression. First of all, any true regular expression can execute in time proportional to the length of the string times the length of the regular expression with the right DFA engine. (Some DFA engines execute in linear time but possibly have exponential compilation time. Others finish in quadratic time.) Most things that we are used to calling regular expressions have irregular extensions, and matching in those engines is NP-complete - but still guaranteed to finish in exponential time.
-- BenTilly
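To make the contrast concrete, here is a sketch of a hand-built DFA for the regular expression (ab)*: one table lookup per input character, so matching is linear in the length of the string:

  # DFA for (ab)*: state -> {symbol: next state}; state 0 is start and accept.
  DFA = {0: {'a': 1}, 1: {'b': 0}}
  ACCEPTING = {0}

  def matches(s):
      state = 0
      for ch in s:
          state = DFA.get(state, {}).get(ch)
          if state is None:    # no transition: reject immediately
              return False
      return state in ACCEPTING

  print(matches("abab"))   # True
  print(matches("aba"))    # False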
I believe most common operating systems already support this. Run the program. Look at the task scheduler in a little while to see if it has halted. Don't waste time trying to analyze the program, just run it and see what happens.
Most people intuitively think of this solution when they first encounter the Halting Problem. But like the other 'solutions' to the HaltingProblem, it only works for a subset of all the possible programs. In this case, an easy counter-example would be a simple program which waits for 60 minutes, then halts.
"Gödel's Incompleteness Theorem And Turing's Solution To The Halting Problem: Linking The Foundations Of Mathematics And Computer Science" by Philip Rash
http://www.philip.rash.com/research.html (mirror: http://web.archive.org/web/20010519155036/www.philip.rash.com/research.html)
Would you actually ever hit the paradox in the halting problem? It seems to me like it would end up infinitely calling itself before starting the evaluation. (Which might be detectable by the fictitious check_for_halt program).
I agree. It does seem to me that the example programs given as "proof" either infinitely repeat the calling up of a program or end with a program trying to execute with no input.
Does trouble(t) halt? (where string t represents the algorithm trouble)
  function trouble(string s)
      if halt(s, s) = false
          return true
      else
          loop forever
Doesn't trouble(t) just infinitely repeat "if halt(t,t); if halt(t,t)...?"
Question: Can anyone write a simple program (that would actually run) as an example of the halting problem, that doesn't just boil down to the illogical statement:
"IF P(Q)=no, then P(Q)=yes",
or a quest for a mathematical conjecture?
- That depends on what you mean by "a quest for a mathematical conjecture"? What does that mean? -- EricHodges
- My mistake, I meant a quest for a solution or proof of a mathematical conjecture (a mathematical statement which has been proposed as true, but which no one has yet been able to prove or disprove). Example: Will program E2P halt? (Program E2P is defined as a program that tries to prove "every even number greater than 2 can be written as the sum of two primes", or halts when it finds an even number greater than 2 that can't be written as the sum of two primes. See Goldbach'sConjecture http://en.wikipedia.org/wiki/Goldbach%27s_conjecture.) -- DavidMonroe
- I'm confused. Are you trying to remove the set of problems that led to the discovery of the halting problem in the first place? -- EH
The "proof" of the halting problem given in section 2 of this page has a problem:
"program P that can tell you whether any program halts or not for given input data." and "program Q based on P which, if P says its input program doesn't halt, immediately halts, and if P says the program halts, goes into an infinite loop.* Feeding Q(Q) to P..."
When you feed Q(Q) to P, P begins to execute Q(Q), which makes Q begin to execute P(Q), which makes P begin to execute Q, which makes P begin to execute with no input.
It seems to me that all you need to do is add one more line of code to program P that reads something like "If P has no input, output = ILLOGICAL PROGRAM, halt."
-- DavidMonroe (yes, I'm an amateur programmer, but expert toxicologist at the US EPA)
See WhatDoesHaltingMean
Part 2: GeneralHaltingProblemProblem
[[What follows below is simply a misunderstanding of the Halting Problem and of the proof that it is undecidable. Instead of P and Q, below, let me use Halt for the alleged program (of one argument) that solves the Halting Problem, and Spoiler for the program (of zero arguments) that shows that Halt cannot work. It is a plain and simple result that Spoiler will cause Halt to fail. That is unavoidable. If you want, you can take the text of Spoiler, and incorporate it into another program Halt1, which tests to see if its input is the program Spoiler. But that program Halt1 will have its own Spoiler1. There is, quite simply, no way to avoid this. It simply does not matter what sort of examination Halt makes of its input tape, or whatever. Spoiler takes Halt as given, and is constructed to make Halt fail. That's it. End of story.
Nor are the attempted objections to the misunderstanding relevant. Even if Spoiler includes the text of Halt directly in some sense, Halt will not be able to exploit that fact. Spoiler is constructed by definition to make Halt fail, the proof that it does so is simple. There is no need to invoke an "obfuscated" version of Halt inside a virtual machine to thwart it. In fact, the problem of whether or not an obfuscated version of a program is equivalent to the original version is also undecidable (it is the problem of program equivalence), but this fact is proven by reducing the halting problem to the problem of program equivalence, not vice versa.
But ... there is apparently an even more basic misunderstanding which has to be addressed, because of the potential mechanics of Turing Machines and how they might be put together.
It was to clear up this misunderstanding that the argument about an "interpreter" was invoked below. Let's summarize it up here. It relies specifically on Turing Machine formulations of the problem. Halt is defined so that, on a tape that contains nothing but a description of a 0-argument Turing Machine, it runs and leaves a tape which contains only a 0 or a 1, depending on whether or not the given Turing Machine halts on an empty tape. (This is all without loss of generality, I think everyone can agree.) The alleged refutation relies on the fact that when Halt is run, it need not assume that the tape is empty except for its own input. The tape may contain some other markings that a containing program, such as Spoiler, has placed on the tape. It is alleged that by examining these markings, Halt can alter its output to get around the problem Spoiler presents it with. But this is quite simply false. This is where the "interpreter" comes into play (not to obfuscate, as I thought was being suggested, but) by preventing the version of Halt contained within it from accessing anything outside what it would be able to access had it been started under normal conditions. In fact, it is not Halt that is contained within Spoiler, but a lobotomized version of it that forces it to work only on the Halting Problem and not anything else weird involving going outside its own input. It may be objected that this is not fair, restricting Halt in this way, but of course it is totally fair, because there is no obligation to include Halt inside Spoiler. All that is required for the simple proof to go through is a program that is input-output equivalent to Halt when it is started on a tape containing nothing but the program whose halting status it is supposed to ascertain. An "interpreter" of some form may be required to do this (such as one that simulates that otherwise empty tape, even while the program doing the simulation writes other stuff on the "real" tape). But the key is that the modified Halt, contained as a subroutine in Spoiler, is input-output equivalent to the original Halt on the actual problem that was given to the original Halt. It is irrelevant that it is not input-output equivalent on some other strange problem.]]
- Well said, that's right on target. On the last point, though, it's worth noting that there are many, many different ways of proving that program equivalence is undecidable, and equivalently, that there are an infinite number of algorithms/functions/machines that compute any given thing, not just the proofs that depend on the halting problem as a lemma. One of the more startling ones is the Speedup Theorem: there is no such thing as the fastest program; there's an infinite series of programs, each a bit faster than the previous, yet all computing the same thing. But yes, in any case, that result demonstrates that you cannot hope to simply recognize which machine is input, since there are an infinite number of variations on the same theme. -- Doug
- So let me see if I have this version of your collective arguments straight now: to prove that there is no P, we invoke P(Q(I(P(Q(P'], where I is a provably uncrackable (by P) UTM or LC equivalent interpreter, and P' is any source that expresses exactly the same function as P, but without having the same representation as P, this being to prevent the invocation of P that's invoked inside the outer Q from determining that P' is computationally equivalent to P. And I'm to take it on faith that, both without assuming the antecedent of non-recursive halting (which, being the fundament of so many such proofs, I rather suspect is after all assumed, but will let it go), you can demonstrate the undecidability of program equivalence, and the uncrackability of I. I do not mean to cause you any more upset than is strictly necessary here, Doug, I just want to make sure I understand what you're claiming before generating more ThreadMess. -- Pete
[[Uh, no. There is no need to be concerned with the undecidability of program equivalence here. There is no need for "uncrackable" anything, whatever that means, and certainly the version of Halt used by Spoiler (Halt') is more than "any source that expresses the same function" as Halt (how would one find such a thing if not as a provably equivalent minor modification of Halt itself?) It is in fact essentially Halt, but Halt restricted to run on a tape that it was required to be designed for. Note that Halt must be able to run on a tape that starts with nothing on it but a representation of the program it is required to test. It may contain some extra code to look elsewhere on that tape in order to see if it is "being tricked", however, it must be able to solve the Halting Problem without applying that code, in the circumstance that it is started normally on a tape containing only its own input. It is thus a simple matter for Spoiler to take the tuples defining Halt, and alter them so in effect they are running on a virtual tape/head combination, instead of the real tape/head (which is simulating them), and, most importantly, deliver the same answer as would be delivered by Halt running on the real tape/head combination, which, by the claim of its author, has input/output behaviour defining the Halting Problem, and which is the minimum anyone claiming to solve the Halting Problem must be able to provide. Given this using of the simulated Halt, which has input/output behavior equivalent to the original Halt, it is the usual simple proof to show that Halt applied to Spoiler cannot deliver a consistent result.
You know, computability theory is a highly developed and very interesting discipline. Do you really think you have found some problem at the very basis of it that renders all of the deep and complex results (such as those describing the highly complex structure of the recursively enumerable degrees) somehow flawed? Perhaps you should have a little less hubris. I see this has all been explained before, below, although perhaps the explanation that stays at a more concrete level (keeping always in mind that we are dealing with concrete Turing Machines, not lisp machines, abstract environments and interpreters not directly reduced to Turing Machines, etc) may be more helpful. But maybe nothing is helpful for a troll?]]
Brand new semi-humorous argument: my machine P is both a GeneralHaltingProblem solver and also good at inferring the intent of its input. When invoked as Q's inner loop, P will examine its input (source of Q) and immediately realize that it (P) is being used in a misguided attempt to prove that it (P) cannot actually exist. It (P as Q's inner loop) will consequently print "0" - causing Q to halt - while simultaneously adding states to its (Q's) FSM which, when inspected by the outer invocation (P), explain the subterfuge, permitting it - the outer invocation of P - to assert the correct halting value. Which is, correctly, "1".
Please email the Nobel to Pete@that-place-I've-gone-to.com. Thanks!
But seriously, where's the logic error above?
You're brilliant, Pete, but you just overstepped the limitation of Turing Machines. Programs for TM are deterministic, and therefore two runs of the same program for the same input cannot give different results.
If we allow for non-determinism, then, yes, you can make a program P that somehow (magically) distinguishes when it is run from the supervision of Q and does whatever trick it needs. But then, you haven't disproved the HaltingTheorem because such a program cannot be constructed in TMs. So, you don't get the Nobel.
- Ah, but P doesn't act non-deterministically; it merely inspects all the contents of the tape - including its own outer invocation. This is to say, the way it makes its distinction about whether to print a subterfugal "0" is to see all the universal-turing-machine gear, the source of its own outer invocation, and so on. It's all there on the tape, and in finite form, so it's easy for it to do deterministically.
- Well, you assume that it is easy for your program P to determine the context in which it is invoked, but that ain't the case. To make it more clear, let's put it first in Scheme, then in pure lambda calculus:
;assume we have a program p
;without side-effects, in Scheme
(define p (lambda (x) <body-of-p>))
; and p solves the halting question for a program x returning
; true if (x) halts and false otherwise.
; then we define the program q
(define (q) (if (p q)
                (loop-forever)
                #t))
; and if p doesn't have side-effects (it is purely functional)
; then (p q) should yield the same result in all context,
; and it is easy to see that
; (p q) = true implies q loops forever
; Now the inventive Peter will protest that q has access to an environment
; (which makes the recursive invocation (p q) possible); so if p has access to the
; environment then p can distinguish the places of invocation.
; Therefore the only technical trick to be done is to de-recursify q:
; remove all access to the environment while keeping it a program equivalent to the above.
; To do that we use a fixed-point combinator like U
; http://okmij.org/ftp/Computation/fixed-point-combinators.html
(define U (lambda (f) (f f)))
; we can then define q as
(define q (U (lambda (f)
               (if (p f)
                   ((lambda (x) (x x)) (lambda (x) (x x))) ; loop forever
                   #t))))
; now we eliminated the recursion while keeping q the same
; let's inline the lambda expressions for U and p
(define q ((lambda (f) (f f))
           (lambda (f)
             (if ((lambda (x) <body-of-p>) ; p, applied to
                  (f f))                   ; (f f), which is a stand-in for q
                 ((lambda (x) (x x)) (lambda (x) (x x))) ; then: loop forever
                 #t))))                                  ; else: halt
;therefore if <body-of-p> exists with the above property,
;there exists the lambda expression denoted by q
;and evaluating
((lambda (x) <body-of-p>)    ; p, applied to
 ((lambda (f) (f f))         ; q, unrolled:
  (lambda (f)
    (if ((lambda (x) <body-of-p>) (f f))
        ((lambda (x) (x x)) (lambda (x) (x x)))
        #t))))
; we reach the conclusion that <body-of-p> cannot exist with the specified properties
- Now, because in pure lambda calculus a program is simply evaluated (or technically reduced according to the rules) and doesn't have any kind of access to any environment, inspecting the stack, etc., it is a basic property that if (f a) "terminates" it always yields the "same" result (modulo renaming, etc.). Then constructing the above lambda expression, if it evaluates to "true" ( known as (lambda (x y) x) in lambda calculus), it follows that it doesn't evaluate at all. So that's it.
- Because of the restriction of lambda calculus it is thus less appealing to claim that <body-of-p> can detect the context in which it is evaluated (there's simply no such mechanism in the lambda calculus formalism), so if p cannot exist in lambda calculus, p cannot exist inside a TM (unless you want to disprove the equivalence between LC and TM).
- I don't have to - you just did! So now we need two Nobels. What a good day.
- Actually, I've been told by reliable sources that TM and LC are really computationally equivalent :) While you certainly have not produced code for your P. Let's call your P, P_TM. I produced a mapping from a P_LC to a Q_LC that proves that P_LC cannot exist. This strongly suggests that your P_TM does not exist, unless the widely held conviction about the equivalence between TM and LC is wrong. So somebody needs to dig up a reference to Turing's proof about the equivalence between LC and TM and verify that, but I rather suspect it's just a matter of technique, cause it's obvious to me how to implement a TM in a purely functional Scheme (and consequently in LC). Less obvious is implementing an LC interpreter in TM, but in the end we'll implement somehow some assembly language primitives in TM, use them to emulate x86 assembly, and then existing interpreters for Scheme are good enough to show that LC can be encoded inside the TM universe.
- I had heard that somewhere, yes :) But the gap in the TM frame I drove my truck through is that nothing prevents P or Q from examining and modifying any and all of the tape. I had thought of self-refuting by showing that the inner P couldn't count on the Q code dwelling within a finite distance on the tape ... but of course then that would go for Q not being able to get at P too. You adroitly spotted the hole in my whole and have ably plugged it for Post. Basically, P can't be permitted to figure out that it's testing itself. As far as I can see, nothing in the Turing UTM construction prevents this, so that's that for him. Worth noting that a C-machine could always receive signals via sensors that would permit it to know that it was testing itself - reviving the A-irrelevance claim on CASP ...
- Let's say you were correct about this ("P can't be permitted to figure out that it's testing itself"), for the sake of argument. The HaltingProblem, however, is about figuring out whether any and all programs halt -- it was never claimed that it can't be solved for some programs. So say you've got a program P, which, if allowed to figure out it is testing itself, can solve the HaltingProblem when given itself as input: P(P). Ok. Now take modified program P' which cannot test to see whether it tests itself. You appear to agree that P'(P') fails to solve the HaltingProblem. But then your other program, P, will fail to resolve the HaltingProblem when given P(P'(P')) - and P' is not P, so the notion of seeing whether it is testing itself or not is of no help in any case. -- Doug
- While my latter statement was informal, if I have some modified P'(P') my very clever P can ascertain what's going on with its modification by exactly the same logic as P(Q(Q)). In fact, my P is so clever I'm going to give it a new letter just so we don't confuse it with any old P. Let's see ... how about D, in honour of DougMerritt, the man who used it to prove that LC is not TuringEquivalent?
- Nope, didn't prove that, didn't say that, don't believe it. -- DougMerritt
- If it will please you, and since deleting ThreadMess usually displeases you I'm asking before doing, I'll delete those portions of this text where I refer to such an inference on the conclusions you did draw. -- Pete
- Peter was onto something, because the construction of Q in "popularization of science" books was rather informal (along the lines of: Q invokes P and then takes the opposite decision, to halt or not to halt). This is rather sloppy, because for a particular program P the construction of Q fails to provide the counter-example. Imagine if we're inside Java (and I saw Java and C based sketches of the proof that fail to consider this), and Q invokes P, and then P tests to see if the stack depth is greater than 1 and does the trick suggested by Peter if so. That means the sketch of a proof fails to give a correct algorithm for deriving Q from P.
- But then there is a trivial technical trick that solves the problem in the naive construction of Q. Rather than calling P directly, Q constructs a Java (or C, or Scheme, or TM, etc.) interpreter, and runs P inside the interpreter, so that the inner P appears, to itself, to be running top-level. So now the inner instance of P cannot do the trick, and thus the outer instance is condemned to fail to decide correctly the outcome of Q. Thus the pseudo-code for q becomes
(define q (if (scheme_interpreter '(p q))
              (loop-forever)
              #t))
- And therefore now there is no programmer distinguishable difference between the outer top level (p q) and the inner (p q). Sorry, Peter, now the Nobel has to wait a little bit. -- CostinCozianu
- As I said to Doug, I don't buy this in the TM construction; P can always drop down to the tape and analyze it bit by bit to figure out what the whole of Q is doing. LC prevents this exformally, so closes the barn door before the horse has bolted. -- Pete
- Again, the inner instance of (p q) is run inside an interpreter. So it cannot analyze the "whole tape" that is really running the show, but only the virtual tape that is offered to it by the interpreter. And then no matter what decision it takes, with the sole condition that it is deterministic, even if P has the knowledge that it's analyzing exactly the Q that it knows in advance, Q will take the opposite decision, and P is in a lose-lose situation. The only way P could try to avoid the defeat was if it could distinguish the situation when it's running inside Q from when it's being run from the "top level" (so to speak). But then Q, by running P inside an interpreter, made the two situations indistinguishable. So whatever decision you take, even knowing in advance the Q that I'm going to write, you're doomed to give bad results.
- As a proof of that, given the generation of the Q above (it really should be a Scheme macro, but you get the drill), try to write a program P that decides not the halting of all programs, but just of the program Q that will be derived from P itself. You won't be able to write it. Because if you return TRUE you'll return TRUE both from the top level and from inside Q, and then my Q will not halt. And if you return FALSE, my Q will halt. Replace a Scheme interpreter written in Scheme with a TM interpreter written in TM and you get the same effect. -- Costin
- Yes, quite right, this would sink it on the TM, if you could prove you had an interpreter that can't be cracked. And not on a C-Machine where each P can employ sensors that observe its situation. -- Pete, before thinking harder below, for which see the stuff below the next horizontal line. Damn WikiBase poops out after 4 levels of indent.
- Possibly on the C-machine, but then your argument becomes mystico-metaphysical. P is no longer a program (or mathematical formalism) but a machine connected intrinsically to the reality surrounding it, a sentient physical entity.
- Ability to sense does not imply sentience. All our computers are C-machines, and that's sufficient to bollocks up the notion that they can't solve halting for C-machines. Naturally the opportunity to solve halting does not imply the ability to solve halting. No more than the opportunity to walk implies the ability to walk. Our C-machines are horribly handicapped by the VonNeumannArchitecture ...
- That would be truly deserving a Nobel prize, but you have to construct it first. Don't hurry though, we're patient :) In the meantime, for all practical purposes we're stuck with ChurchTuringThesis.
- You misunderstand the point made here. A C-machine P ("CP"), by deploying sensors that permit it to distinguish its situation with respect to Q, could perform the feat quoted in my first para above. Therefore a C-machine not limited by Turing's A-machine computability results: they are irrelevant to C-Machines. -- Pete
- I don't see how you think you've established that. On the other hand, if you really believe that, then go ahead and build such a machine. If your claim for it is true, there are certainly commercial applications, never mind jokes about Nobel Prizes. -- Doug
- As I said it's trivially demonstrated that if CP can inspect itself as it runs it can make the distinction about whether it is running as the inner or outer invocation.
- Nope, not demonstrated at all, let alone "trivially" -- DougMerritt
- [[Not only that, but more importantly, again (sigh) it is irrelevant whether or not CP can make this distinction, because a CP that has the opportunity to do such an inspection is not allowed to run in what you call the "inner invocation". Running there is merely a version of CP in a pristine environment with no information provided to it except its own original input and its own resources, as by its very definition it must be able to do. There is no way this version of CP can tell who has called it or what its caller is trying to accomplish. It must simply go ahead and "solve" the Halting Problem as best it can (which is not going to be good enough, of course).]]
In any case, you guys can quit your crowing on the A-machine. Costin is, above, assuming the existence of an interpreter that can't be cracked. Let's call it I. If you'll kindly refer me to a proof that such an interpreter can exist, then you're out of the woods. To forestall the usual squawking, no, I don't know that you can't prove such a thing - but the burden of proof remains with you. It sure doesn't smell recursive ... -- Pete
Ahem, I beg to differ, Pete. If we restrict the discussion to a purely functional subset of Scheme, then interpreters for such a subset are written by students around the globe all the time. That they cannot be "broken" follows trivially from the semantics of Scheme: if all the primitives (car, cdr, define, let, if, etc.) are deterministic, and all the composition constructs are deterministic, it follows by structural induction that all constructed procedures are deterministic, so that evaluating the same expression in any correctly implemented interpreter will always yield the same result.
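A sketch of such a student interpreter for a tiny pure subset (in Python rather than Scheme, purely for brevity): every case either returns directly or recurses on strictly smaller subexpressions, which is exactly the structural induction being appealed to.

  def evaluate(expr, env):
      if isinstance(expr, str):             # variable reference
          return env[expr]
      if not isinstance(expr, list):        # self-evaluating literal
          return expr
      if expr[0] == 'if':
          _, test, conseq, alt = expr
          return evaluate(conseq if evaluate(test, env) else alt, env)
      if expr[0] == 'lambda':
          _, params, body = expr
          return lambda *args: evaluate(body, {**env, **dict(zip(params, args))})
      fn, *args = [evaluate(e, env) for e in expr]   # application
      return fn(*args)

  import operator
  env = {'+': operator.add, '*': operator.mul, '=': operator.eq}
  print(evaluate([['lambda', ['x'], ['*', 'x', 'x']], 7], env))   # 49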
In principle, of course, I agree. Or else I would have popped up a counter-example. But a specification is not a proof; you'll need to start by proving that each of these primitives cannot be cracked, then show that their principle of combination cannot be cracked. Given RicesTheorem (http://mathworld.wolfram.com/RicesTheorem.html) this is not a trivial undertaking. At each step you risk the AntecedentAssumedFallacy. This is why I say the uncrackable interpreter smells non-recursive.
- For one thing, your conclusion is incorrect. CC's argument is in fact straightforward, trivially so, so Rice's Theorem doesn't have anything really to do with it. Secondly, this is really ironic, because in fact Rice's Theorem can be used to prove the various points CC and I have been trying to make. It is one of the many results that is essentially equivalent to the HaltingTheorem and Goedel's theorem.
- Yes, that's exactly why I was warning CC against the antecedent being assumed. If Rice's theorem is based on halting, and Rice's theorem were to apply to the rudiments of the interpreter CC wanted to prove halting, then a pickle occurs. Like CC, I'm not going there - life's too short!
- This is amusing considering the conversation, and somehow I suspect it won't help, but nonetheless see http://users.informatik.haw-hamburg.de/~owsnicki/rice.html. -- DougMerritt
Mapping from a purely functional subset of Scheme to lambda calculus is also a beaten path, so if you want a full-blown interpreter to be presented to you as part of the proof, you'd better give us some good reason rather than asserting that the burden of proof is on the other side. Because it's surely boring to write down that interpreter, and it provides no insight into the HaltingProblem whatsoever. A quick reference could be SiCp, but examples of such interpreters are so many, I don't know where to begin.
I don't actually expect you do any such thing. It sounds like a frightful waste of time. But until you or someone does, the A-machine version of the halting problem remains cracked. And then as Doug points out most likely TM != LC.
- Damnit, stop saying that, I never claimed that, and I don't believe it (also CC just said the opposite at bottom of page), and the only reason I didn't contradict you last time you claimed I said that is that I get tired of contradicting every single thing you say on these topics. But for the record, please, do not speak for me again on these topics, because we are in massive disagreement, with no sign of reaching consensus on even the smallest of issues. -- DougMerritt
- Well, we agree on that anyway ;-) -- Pete
- Just as a btw, there's the nice and simple SIOD -- Scheme In One Defun -- not that I'm going to waste effort in that direction. -- DougMerritt
- You know, speaking purely as an old Prolog guy, you're really beginning to make me feel like I was missing out on something when I didn't learn Scheme ...
- P.S. it's great to try to figure out a way to beat the system, but you should be a little less convinced that any particular brainstorm has already succeeded.
- Um, that's the humor part.
- I usually read my email on Tuesdays. Please have the Nobel committee bear this in mind.
- [Although it depends very much on the kind of non-determinism and class of machine; there are many possibilities.]
- Not certain how the construction here depends on non-determinism and class of machine - this sally self-consciously restricts itself to the A-machine.
- [I was commenting on CC's sentence, "If we allow for non-determinism, then, yes, you can make a program P that ..." - in other words, although CC is right, the operative word there is can, as in, under some circumstances, with some kinds of non-determinism, with some kinds of machines. It is not the case that any old kind of non-determinism, with any old kind of machine, gives that power. -- Doug]
There's a valid conclusion to be drawn that maybe TMs are not as good as real computers (those can do all kinds of non-deterministic tricks, like being connected through networks, running MS Windows, or using non-ECC memory :) ), but the stuff that we need TMs for can be adequately modeled using TMs.
Even more serious than CC's critique: there's no Nobel prize in math. There's the Fields Medal, but you may be a bit long in the tooth for that.
Yes, apparently a mathematician ran off with Nobel's wife. Strange thing, science ;-)
- FYI this rumor has been endlessly investigated, and there appears to be no supporting evidence for it at all, it seems to simply be a colorful urban legend. As such, it's a good one to repeat at parties, but it's nice to know when one is repeating a myth. :-)
- I didn't know. Anyway let's just re-express the above results in a DNA computer and have them give us the prizes for chemistry and biology. Which one do you want? Whichever, that's the one that I want.
Keep in mind that Turing's original machines are far from the only model of computation ever studied. Machines with oracles are non-deterministic, for instance, and there are also randomized machines, which do indeed have different performance. For instance, the computational complexity of some algorithms depends on whether the Extended Riemann Hypothesis is true, which no one knows to be true or not - however, such algorithms can typically be converted into the faster computational class simply by using randomness, which doesn't exist in the original Turing machines. It's a pretty big topic, and TuringMachines are simply a starting place. -- Doug
The most fundamental plank of ComputerScience is the General HaltingProblem. Unfortunately there appears to be a problem with the GHP, as follows:
- I have here the code for a program, P, that can tell you whether any program halts or not for given input data.
- I construct a program Q based on P which, if P says its input program doesn't halt, immediately halts, and if P says the program halts, goes into an infinite loop.
- Feeding Q(Q) to P I can see that if P says it halts, it won't, and if P says it doesn't halt, it will.
Therefore, I don't have any such program P and anyone else who says they do is full of it.
But:
- If I don't have any such program P then there's no way I can construct the derivative work Q.
- Since the proof of P's non-existence requires me to feed Q(Q) to P, which I can't do without Q, then I can't prove P's non-existence. In effect, the proof is subject to a subtle AntecedentAssumedFallacy.
- Most likely all we've proved is that a program P must be smart enough to recognize its own algorithm, and then output a third result, to wit, that the halting behavior of the input depends on its output.
Realizing this, it's plain that the whole form of the proof is identical with the convict's invalid reasoning in the UnexpectedExecutionParadox. All the computability results that rest on the GHP are therefore null and void, and we better start over on this topic. Oh, another Nobel for me? Pesky things, I keep tripping over 'em. -- PeterMerel
I can't quite tell whether this is a joke or not. Assuming it's serious (DeleteMe if it's just a joke): There is no fallacy. Here's one way to adjust the proof that may make it clearer that there is no fallacy.
- Consider any program P. It needn't be a halting tester.
- Define a new program Q, which passes itself as input to P, runs P, and (a) returns immediately if P returns 0, (b) goes into an infinite loop if P returns 1, and (c) does anything you like if P doesn't halt or doesn't return a single value that's either 0 or 1.
- Observe, trivially, that if you feed Q to P then P doesn't return 0 if Q fails to halt and doesn't return 1 if Q halts.
- Um, did you mean to feed Q to Q? Obviously if P is any program it is free to act however it likes, including returning 0 if Q fails to halt, and returning 1 if Q halts - right? Nope, I meant what I said. See below.
- Now consider the statement "For any inputless program R, feeding R into P yields 1 if R halts and 0 if it doesn't".
- That statement is false, because what it asserts of all inputless programs isn't true for the particular inputless program Q.
Hence, we've shown that no program P can be a halting tester in the precise sense described by that statement. And we haven't had to assume that any program is a halting tester in order to do it. We started with any candidate program, and produced a new program for which it doesn't give the right answer. -- GarethMcCaughan
I expected this semi-humorous argument to go down in flames, but couldn't quite put my finger on why just yet. Anyway you're quite right, many thanks. A more serious if exformal objection is raised on WhatDoesHaltingMean and discussed on GeneralHaltingProblemProblemProblem, suggesting that InfiniteLooping is in fact exactly what halting means. Ergo Q can not fail to halt, and the Turing fence is down again ... -- PeterMerel
The clearest technical book on the subject that I've ever seen is The Language of Machines: An Introduction to Computability and Formal Languages by Floyd and Beigel, ISBN 0716782669. Obviously, these things cannot be adequately covered non-technically/non-mathematically. -- DougMerritt
Here's a program (in an imaginary language, for an imaginary machine) that might explicate things.
# ALLEGED_TESTER is a function which accepts a single argument.
# We return a new function F that proves that ALLEGED_TESTER isn't
# a perfect termination tester: in other words, either F terminates
# and ALLEGED_TESTER(F) doesn't return 1, or else F doesn't terminate
# and ALLEGED_TESTER(F) doesn't return 0.
def thing_not_tested_validly(alleged_tester):
    def f():
        tester_result = alleged_tester(f) # same f as we're now defining
        if tester_result == 1 then loop_forever()
        return
    return f
Proving the assertions in the comments there is easy. If ALLEGED_TESTER(F) returns 1 then F loops for ever. If ALLEGED_TESTER(F) returns something other than 1 then F doesn't loop for ever.
If you're bothered by the "same f as we're now defining" thing, don't be: there are ways of working round that. But this is one of the subtle points of the proof. -- GarethMcCaughan
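One standard workaround, sketched here in Scheme so it lines up with the other code on this page: instead of letting F refer to itself by name, hand a "maker" procedure to itself, so F can rebuild a behaviorally identical copy of itself by self-application. alleged-tester and the assumed loop-forever helper play the same roles as in the pseudocode above:

(define (thing-not-tested-validly alleged-tester)
  (let ((maker (lambda (self)
                 (lambda ()                          ; this thunk is F
                   (if (= (alleged-tester (self self)) 1)
                       (loop-forever)                ; assumed: never returns
                       'done)))))
    (maker maker)))                                  ; F, built without naming itself

Note that (self self) merely constructs the copy of F and hands it to the tester as data; it never runs it, so there is no infinite regress at that point.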
What is wrong with the first proof? It's proof by reductio ad absurdum. To prove that P can't exist you assume P does exist and make it lead to a false conclusion, given your knowledge base.
If you do away with this kind of proof you do away not just with the GHP but with most of mathematics too. :-)
Nothing is wrong with it. I provided the second one because I thought it might evade some mental blocks. It might be worth mentioning that a similar thing can be done to many proofs by reductio ad absurdum, for the sake of those who find RAA hard to trust. -- GarethMcCaughan
Let me know if I get this:
- We have some program Q.
- We have some other program P that when fed some program tells if that program terminates or not.
- Q is a program that feeds itself as input to P and then runs P.
- When P terminates, if P says Q halts then Q goes into an infinite loop, else, if P says Q doesn't halt, Q exits.
Nice...
What this is really telling me is that there's no way to actually completely test a program except by running it. -- CristiOpris
Yes, that's the idea. It's a proof that actually we don't have a program P that tells whether things terminate or not. And it's even worse than you say :-), because even running a program can't prove that it doesn't terminate. -- GarethMcCaughan
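To make that asymmetry concrete: halting is semi-decidable. A sketch, assuming hypothetical helpers halted? and run-one-step that test and advance a simulated machine:

(define (semi-decide-halts machine)
  (if (halted? machine)
      'halts                                       ; a halting machine is eventually caught
      (semi-decide-halts (run-one-step machine)))) ; otherwise keep simulating

If the machine halts, this answers 'halts after finitely many steps; if it doesn't, the tester itself never returns, and no finite amount of watching lets you announce "loops forever".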
Yes, the "general halting problem" proof is one that is commonly given in college courses.
(At least it was in the early '80s. ;-)
But I think it's flawed.
Briefly, to know if Q halts, you need to know its input.
Its input is Q, but to know if that Q halts, you need to know its input.
So, you're really testing this:
Q(Q(Q(Q(Q(Q(Q(Q(Q(Q(Q(Q(Q(Q(Q(Q(Q(Q(Q(Q(Q(Q(Q(Q( ... ))))))))))))))))))))))))
Where there's an infinite number of Qs.
The definition of the thing you wish to run doesn't halt, so you can never get to the point of running it.
-- JeffGrigg
Ahhh, yes: THAT is the approach I was looking for. Thanks, Jeff! -- TomStambaugh
No, what you input to Q is the "source code" of Q, so no need to get all recursive here.
No, there's no way to know if a program will halt or not unless you know the input. Consider this program: { if input = "halt" then halt else loop_forever } Now, does it halt or run forever? You can't know if it halts unless you know if the input is "halt".
So, to evaluate if Q halts, you have to know if its input halts. That input would be Q, so...
Yes, but the input to Q is not the result of executing Q, it's nothing at all. Here's a slightly reworded definition that makes that a little clearer:
- program P(source, input) = { if source_code_will_terminate(source, input) then return 'halts' else return 'loops' }
- program Q() = { execute P with (source = source of Q, input = null); if result is 'halts' then infinite_loop }
- The call to P inside Q is trying to work out if running Q with no input data will halt or not.
- Therefore running Q with no input data will halt if running Q with no input data doesn't halt, and loop if running Q with no input data halts. This is clearly a paradox.
- The only way to resolve this paradox is to assert that source_code_will_terminate doesn't know what the hell it is talking about.
-- GavinLambert
I think the problem might be relating to the definition of 'input'. I'd say that, although there is no explicit input to 'Q()' (according to the signature), there is an implicit input, in the form of the call to 'P' ('Q' asks for input from 'P'). With that call, 'P' ends up having to analyse itself with the associated input of 'Q'... and now we've got a problem.
As an aside, there would seem to be no reason that 'Q' couldn't detect the situation, that it has processed itself multiple times, or even processed itself at all, and therefore respond to the question with a question, so to speak: "Does 'Q' halt? I don't know... do you want it to halt?"
-- WilliamUnderwood
OK, so suppose we have this program "R" which can analyze an input and halts if the input is equivalent to itself...
"Does 'Q' halt? I don't know..." Well if 'P' doesn't know then it isn't solving the halting problem. If you ask the question "does this program halts?" and for some programs like 'Q' you are forced to answer something else than 'yes' or 'no', then Turing is right: you cannot write a program which can answer that question with a 'yes' or 'no' for any given input program. -- Samuel Gélineau
To break down what I see as the problem here, at least on the top part of the page, the proof is this: "Even if you had a halting predictor, you don't have a halting predictor. I can do this for all putative halting predictors." To put something else in that form, "Even if you think you have an integer that is greater than all others, I can prove that your integer isn't greater than all others (by simply adding 1 to yours). I can do this for all putative largest integers."
Your inability to produce the integer that is greater than all others isn't a problem. I never really depended on your specific integer, I made a statement about all of them, by showing concretely that each and every one of them is inadequate. -- JeremyBowers
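Jeremy's analogy even fits in one line of Scheme - a counterexample-maker that defeats every putative largest integer, just as the Q-construction defeats every putative halting tester:

(define (defeat n) (+ n 1))    ; for every integer n, (defeat n) > n

No specific n is ever depended on; the statement is about all of them at once.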
Hmm...
Assuming that a program could be defined which can recognize itself as input:
From any halting detector, you can provide a program which halts or not based on the opposite of the decision of that detector.
And for any such program you provide me which does that, I can provide a program which can detect trickery of that specific nature.
We can't solve the halting problem in general, but we can get as close as we'd like.
Now, "Can this result be made useful?" is a completely different question.
-- WilliamUnderwood
P.S., where n = the biggest number you can quote to me, the number returned by (n+1) is the largest integer. Prove me otherwise :p
- (n+1)+1
- I don't think you really understand mathematical symbology. Anything goes in those variables, not just pure numbers.
It is well known that it is, on the other hand, feasible to solve the halting problem for all machines up to size K, for any given finite constant K you care to name. The machine that solves that particular finite K-halting problem will be bigger than K (O(2^K) does it nicely). Unsurprisingly, the class of machines it can solve does not, therefore, include itself. -- dm
Hence the answer of "I don't know. Find a bigger machine." :)
Also, yes, it is useful. Many proof techniques are based on exhaustive examination of state space for things that are equivalent to solving the halting problem. Since the state space is finite, it is sometimes feasible, whereas in the general case it clearly isn't. This is another critical thing to understand about the Halting Problem non-Problem: It specifically refers to the general case, and does not apply in less general cases. Though even in the less general cases it does tend to imply an irreducible exponential cost. -- JeremyBowers
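A sketch of dm's finite-K tester, assuming hypothetical helpers halted? and step, and measuring machine size as k bits of state: a deterministic machine with at most k bits has at most 2^k distinct states, so if it runs longer than that without halting it must have revisited a state, and will therefore cycle forever.

(define (bounded-halts? machine k)
  (let ((limit (expt 2 k)))
    (let loop ((m machine) (steps 0))
      (cond ((halted? m) #t)
            ((> steps limit) #f)          ; pigeonhole: some state recurred, so it loops
            (else (loop (step m) (+ steps 1)))))))

As dm notes, the tester is necessarily bigger than what it tests - the step counter alone needs more than k bits - so bounded-halts? cannot be pointed at itself.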
Why need a "Halting Predictor" to prove the point...
How about "Is it possible to predict the output of the function f(int i){return i;}"? The function takes an int (let's say only 0 or 1) and spits it out. It is not possible to tell exactly what it spits out, whether a 0 or a 1 (without knowing the input). Doesn't this qualify as a "proof" that it is not possible to predict the output?
If NO (it doesn't qualify), then how is the "Halting Problem" different from this one, other than disguising the input and output in some fancy programmer's formalism?
-- Mohyneen Mehdi
You don't necessarily need something fancier, but you do need to consider all possibilities one way or another. You're basically assuming the result you want to prove; that's why this is not a proof.
Maybe I am missing something or maybe I am not, but I can't give in to the "circular reasoning" that lurks in the proof of the Halting Problem.
In standard English, the solution goes like this... P has the task of predicting Q's output (or behavior). But Q has the final say in the matter and waits till P makes its prediction, after which Q conveniently adapts its behavior only to violate that prediction.
So, as I see it, P has to look at "Q AND Q's Input" in order to predict its behavior (or output). But Q's INPUT is P's OUTPUT. So, effectively, P has to look at "Q AND P's OUTPUT".
But P's OUTPUT is obtained only by looking at "Q AND Q's Input". Back to square one!
It's important to understand that by Q's INPUT I don't mean Q's parameter values; I mean information that Q eventually uses to behave the way it does.
P needs information which is obtained from Q, who in turn needs the "same" information from P.
But according to the proof, that information is generated on the fly and P is given the credit (or discredit) for generating it. That information (i.e. the prediction P makes) couldn't have been generated because P doesn't know Q's complete Input information, information which can come only AFTER P makes its prediction (i.e. the prediction itself).
Where exactly am I going wrong?
-- Mohyneen Mehdi
Here's how I see it: Q's input doesn't have to be a precalculated boolean value equal to P's output. P is supposed to (among other things) look at its own source code and figure out for itself what P's output is supposed to be. If P wants to execute this code in order to find out, then P will loop forever, won't ever give an answer for this particular program, and cannot solve the HaltingProblem in general. If P chooses to guess Q's input without executing its own source code, then whatever its guess, it will give a wrong final answer about Q, so P still isn't solving the HaltingProblem in all cases. -- Samuel Gélineau
Where exactly am I going wrong?
There's several common spots of confusion. Maybe you can help us re-state or otherwise explain the problem so it's less confusing. (But I don't think there's any way of getting around the contradiction, since it's a proof-by-contradiction.)
Confusion between what people mean by Q:
- the source code, a string like "int f(int i){return ~i;}".
- the running executable, which either returns with its final result (halts), or gets stuck in an infinite loop (doesn't halt).
- the actual result returned by the executable.
Other people are misled into
- thinking of Q and P as unique, singular things - thinking that there's only 2 things, each of them has to make a decision, and so one of them must make the decision first.
The same sorts of confusion have led many people to claim that QuinePrograms are impossible, when clearly several do exist (one is sketched below).
Also, a few of the "proofs" of FreeWill seem to be based on the same confusion.
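For what it's worth, here is the classic Scheme quine - an expression whose value is the expression itself - as a concrete witness that this style of self-reference is constructible:

((lambda (x) (list x (list 'quote x)))
 '(lambda (x) (list x (list 'quote x))))

Evaluating it yields the very same list, quote and all.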
P needs information which is obtained from Q, who in turn needs the "same" information from P.
Yes, sort of. When P runs, it inspects the source code for Q, and also inspects the input that will be fed to Q.
Please note that P does not actually call() or eval() or execute Q. (Although the sorts of analysis it does are probably very similar to the sorts of analysis the compiler does when it compiles and partially evaluates Q.)
But according to the proof, that information is generated on the fly and P is given the credit (or discredit) for generating it.
Yes, exactly.
That information (i.e. the prediction P makes) couldn't have been generated because P doesn't know Q's complete Input information, information which can come only AFTER P makes its prediction (i.e. the prediction itself).
I think you're very close to understanding the HaltingProblem.
In this case, P *does* know Q's complete input information. Tell me the source code to P (long before you ever *compile* P), and I can tell you the long string that makes up Q's complete input information. It is, in fact, the source code to Q (which includes a copy of the source code to P that you gave me).
Say I want to know whether this function halts (i.e., whether it is an algorithm), and if so what value it returns:
int count_apples(){
    int left = count_apples_on_left();
    int right = count_apples_on_right();
    return( left + right );
}
By "input", we normally mean input from *outside* the computer. The value returned by count_apples_on_left() does *not* count as "input". I inspect that function and see that it calculates a value without any input from *outside* the computer. Say I run count_apples_on_left() once and it returns "17". I can be confident that it will return "17" every time I or anyone else runs that subroutine. Right?
So, what does count_apples() return? Or does it never halt?
I inspect count_apples_on_right(), and I see that it reads in 2 characters from standard input. Those 2 characters count as "input", not just to count_apples_on_right(), but to any program that calls it (such as count_apples() ), and to any program that indirectly calls it (perhaps by calling count_apples() ).
If I know what those 2 characters will be, do I have any hope of figuring out the output of count_apples()?
I just came here to read; correct me if I misunderstood something.
With respect to the halting property, there are three kinds of programs.
- Programs that always halt.
- Programs that never halt.
- Programs that halt on some inputs and not on others. Let's call these "conditionally halting".
The halting problem states that (I took this from the first line in HaltingProblem)
"It is impossible to write a program that runs in finite time that is able to decide whether an arbitrary program will halt (with a certain input)."
Let
P be the halt tester program, and
Q be the program that will be fed to P, and that goes into an infinite loop if P returns "Q will halt", and halts if P says "Q never halts".
Now the statement With a certain input is where the flaw is.
What is the input to the first Q that is fed to P?
The input is empty, or a dummy string; it does not matter at all. As a matter of fact you can consider the input to be part of the program, and that settles all the useless discussion about input. You cannot define a program P that decides for all programs whether they halt with an empty input. The counter-example Q defined by
(define (Q) (if (eval '(P Q)) (loopforever) true))
settles it. If you notice, Q does not use any input at all. So the input can be anything; it's better left out of the picture.
Technically, Q is often constructed along the lines of
(define (Q X) (if (P X X) (loopforever) true))
And the counter-example is that P cannot decide for
(P Q Q)
Which is more faithful to the original formulation of the halting problem, where P had to have 2 arguments (P X I), where X is the program and I is the input. But this latter arrangement, while valid, makes it less intuitive, as the reader is distracted by the artifice of using Q both as program and input, and by the role of the dummy variable X in the body of Q. So reducing the problem to programs with empty input makes the intuition much more accessible. So we prove that we cannot even construct a program that decides the termination of all programs with an empty input.
I took the liberty to delete the flawed discussion about the lookup tables, and the relative sizes of programs and lookup tables. It just distracts from a technically trivial proof. -- CostinCozianu
I don't see why you did so. I said explicitly that it was the starting place for one of the classic approaches to proofs, which I was trying to lead up to one step at a time - maybe not one of the proofs that you prefer, but I don't see why that's reason to delete it. It's possible that your explanation will be better understood by the current audience, but even that is hard to say - obviously, many people do not find the topic clear even when I think it's clearly presented (and yes, I think your statements are clear). I mean, I do see your point that the topic should be simpler, the way you presented it; the question is, will your audience get it. If they do, cool. -- DougMerritt
I just reread the above and I see specifically what was subconsciously bothering me. I do not think you were clear in the critical part, the conclusion: "And the counter-example is that P cannot decide for (P Q Q)". I think you need to do something to make that much more glaringly obvious, or else your "simplified" approach won't get the point across. -- Doug
But that was exactly the point: that if we discuss a program/procedure, say P2, that takes 2 parameters: (P2 X I), we make it far less obvious to come up with (Q2 X) that evals (P2 X X) and turns the result on its head. Instead, if we focus just on P1 that takes as input a program that runs with an empty (or otherwise fixed) input, then we construct Q1 that evaluates (P1 Q1) and turns it on its head. So assuming a purely functional subset of Scheme, the proof is:
;Proof 1
;assume there is a P1 that decides termination for procedures with no parameters
(define (P1 X) (...))
;where forall X, (P1 X) evaluates true if (X) halts, false otherwise [1]
;construct the procedure with no parameters
(define (Q1) (if (eval '(P1 Q1)) (loopforever) true)) ;[QDEF]
; then
; if (P1 Q1) evaluates to true, it follows according to [QDEF] that (Q1) loops forever, contradicting [1] -- [step A]
; if (P1 Q1) evaluates to false, it follows according to the definition of Q1 that (Q1) terminates, contradicting [1] -- [step B]
; therefore, because (P1 Q1) cannot evaluate to either true or false without contradicting [1],
; the assumption that there exists P1 leads to a contradiction and is therefore false (reductio ad absurdum)
This is the complete sketch of the proof about the non-existence of a Scheme procedure P1 that can decide for all Scheme procedures with no argument if they terminate or not. Using code in a real language makes it easier to follow, while keeping it reasonably formal. If on the other hand we try to prove the version with (P2 X I), coming up with [QDEF] and verifying [step A] and [step B] is less intuitive:
;Proof 2
;assume there is a P2 that decides termination for procedures with one parameter ("input")
(define (P2 X I) (...))
;where forall X and I, (P2 X I) evaluates true if (X I) halts, false otherwise [1]
;construct the procedure with one parameter
(define (Q2 R) (if (eval '(P2 R R)) (loopforever) true)) ;[QDEF]
; then
; if (P2 Q2 Q2) evaluates to true, it follows according to [QDEF] that (Q2 Q2) loops forever, contradicting [1] -- [step A]
; if (P2 Q2 Q2) evaluates to false, it follows according to the definition of Q2 that (Q2 Q2) terminates, contradicting [1] -- [step B]
; therefore, because (P2 Q2 Q2) cannot evaluate to either true or false without contradicting [1],
; the assumption that there exists P2 leads to a contradiction and is therefore false (reductio ad absurdum)
So I think that it is clear that Proof 2 is both harder to follow and harder to remember at a later time than Proof 1, and this responds directly to the anonymous contributor's question as to what happened with the input for Q. There is no input, and I presented the justification for why it is better to look at the version with no input.
Now for GoldPlating we can tighten some details by replacing recursion with a fixed-point combinator to make the proof adequate for the halting problem within LambdaCalculus {there's no lambda expression P such that for any other lambda expression X the application (P X) reduces to "true" (aka (lambda (x y) x)) if X has a normal form, and to "false" if X doesn't have a normal form}. Because LC and TM are known to be computationally equivalent, this is as good as a proof of the halting problem for TM.
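A hedged sketch of that GoldPlating step, reusing P1 and loopforever from Proof 1: the named recursion in [QDEF] can be eliminated with Z, the applicative-order fixed-point combinator. The fixed point has to be a procedure for the eta-expansion inside Z to be legal in Scheme, so Q1 acquires a dummy parameter here; in pure LambdaCalculus every term is a function and the dummy disappears. self is then a behaviorally identical copy of Q1 rather than the very same object, which is all the contradiction requires:

(define Z
  (lambda (f)
    ((lambda (x) (f (lambda (v) ((x x) v))))
     (lambda (x) (f (lambda (v) ((x x) v)))))))

(define Q1
  (Z (lambda (self)
       (lambda (dummy)                          ; dummy argument, see note above
         (if (P1 self) (loopforever) true)))))  ; self: a copy of Q1, passed as data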
Part 3 GeneralHaltingProblemProblemProblem
- WhatDoesHaltingMean seems equivalent to saying "halt doesn't mean halt"
- No, what it says is: Consider the behavior of a machine whose FSM is a static infinite loop. It sits on the same "square" and does not move. In the context of ComputationAsSignalProcessing, this is identical with its "halt" behavior - it's a fixed-point behavior. But Turing has explicitly constructed a universal machine whose "state" is represented by a trajectory over multiple squares. There is nothing to suggest that the "halt state" is not also so defined. To imagine the machine does not iterate over this trajectory is thus completely exformal to the model. And so the distinction between halting and InfiniteLooping makes a FrameProblem.
- At one point I thought so as well, but it turns out that that difficulty does not exist in the mathematical analysis.
- Of course it's not in the math - it's a FrameProblem. This is to say, the problem is in the exformal ontologies of the model. I'm picking on just the one to do with the distinction of halting here, but others that may give rise to similar weaknesses in the proof include the initial orientation of the FSM, the limitations on motion and plurality of the FSM, the source of the initial configuration of the tape, and most importantly the topology and navigability of the tape. These are obviously exformal because they are not encoded on the tape of the "universal" TM.
- Indeed the only real distinction to draw among behaviors of the A-Machine is one Turing entirely neglects - whether a particular machine's behavior is cyclic or not. In the context of ComputationAsSignalProcessing, Turing's proof amounts only to the fact that many machines do not fall into a fixed point attractor. A shorter, more constructive, and more generative result in this vein is Feigenbaum's analysis of the LogisticMap.
- Basically, computation is equivalent to string recognition and rewriting via grammar at one of the 4 levels of the ChomskyHierarchy. Regular expressions are handy because they are guaranteed to either accept or reject an input string in finite time. It either matches the grammar specified by the RE or it doesn't. But RegularExpressions are computationally not very powerful. They're the least powerful of the 4 levels. Nonetheless it's great that they're guaranteed to give an answer in finite time. That actually makes them more practical for some things. That's why we use them.
- Being more physically minded, I regard computation as equivalent to signal processing. REs are delightful critters, I agree, but there are many simpler constant time signal processing functions - interferometry for example - that REs can't perform. As to the rest of the ChomskyHierarchy, it's only significant if you restrict computation to serial procedural string functions. You show me where a PinSort or an FPGA shows up in this pretty hierarchy and I'll take your "Basically" comment more seriously.
- You misunderstand. REs are not about "constant time". It is trivial to show REs with exponential behavior in time and space; they actually come up daily in real life, as a matter of fact.
- I'm not making a complexity argument here; the constant time point is that the interferometer is emphatically guaranteed to operate in finite time, like the lowest level of the ChomskyHierarchy, but is computationally more powerful - or at least non-implementable by the highest level of the ChomskyHierarchy. Same goes for PinSort. So I don't see this hierarchy as constraining any computation outside the limited if presently popular frame of VNA string processing.
- It also doesn't matter how you personally wish to regard computation; what's important is that there are various interesting mathematical proofs on the topic, which obviously transcend matters of personal taste. The weakest such area is the Church-Turing Hypothesis, which amounts to the notion that anything that is computable can be computed via Turing machines, and this is simply a hypothesis because it doesn't formalize the definition of "computable" in a non-circular way, so it remains possible that someday someone will come up with a new notion of "computable" that fits everyone's intuitions, and yet is larger than the class of functions that TuringMachines can handle. So you could attack it on that front -- but not just by handwaving about your personal preferences.
- The ChurchTuringThesis was disproved in 2002 by Willem Fouche - WikiPedia has a link to the relevant paper, which regrettably requires commercial subscription to read at present. Suffice to say DouglasAdams proved prescient again - Fouche's work really did use a hot cup of tea. The reason I raise the matter of taste is I can see no other reason why the Turing/Cantor/Newton/Descartes models should be preferred over physically realizable devices.
- You are gravely mistaken. The ChurchTuringThesis is not the kind of thing that is disprovable, so it can't have been disproven. That doesn't mean it's right. I already addressed this above. It may be right, it may be wrong, but it is an informal statement, that's why it's not "disprovable". It's a sort of rule of thumb that appears to be adequate for now (yes, including today). See for instance http://arxiv.org/pdf/cs.CC/0412022, which mentions work by Fouche.
- The stronger physical version of same ("Every function that can be physically computed can be computed by a Turing machine.") is the thesis in question. See http://en.wikipedia.org/wiki/Church_Turing_Thesis for the reference. I haven't actually read the Fouche paper - Tien Kieu's was good enough for the likes of cranks like me. Esoteric HyperComputation? hardware isn't my interest here.
- I'm afraid that, since you have really gotten yourself mixed up about this entire subject, I'm going to have to give up on you on this topic for the moment...you need to overcome a large number of misconceptions that you have acquired about these mathematical topics, first. I suggest reading mathematical, rather than philosophical, books on the topic. Trust me -- this is not a question of opinion, you really do misunderstand the mathematics of the situation, and you just cannot get philosophical about misunderstood math.
- We are obviously at an impasse. I'm well aware of the math - was originally trained in classical CS, number theory, logic, etc. But I don't buy any of it any more, which makes me a horrible crackpot. Crackpots have a choice - they can be cranky in silence or they can be cranky out loud. I find the latter less stressful. Anyway, thanks for playing, and if you come across any of the proofs you've handwaved at - the limited-space-limited-information proof, or the one about the TuringInterferometer?, do please let me know. -- PeterMerel
- FPGAs are no different than any other electronic circuit, in that they can be simulated by a TuringMachine. The equivalence relation on different levels of power is precisely whether something can be simulated by something else -- even if it takes exponential time to do the simulation (which is quite common with TuringMachines; they're not very practical without enhancement). Some other page talks about open versus closed systems and claims that FPGAs are more powerful than TuringMachines because they are open systems interfacing with the outside world, but that's a complete red herring. One can analyze both FPGAs and TuringMachines either as closed or as open systems. One should compare apples to apples.
- Referring again to Turing's 1936 paper, FPGAs are clearly C-Machines, not A-Machines. Their "state" depends on the behavior of the exformal universe. Turing explicitly scopes his A-Machine results to exclude C-Machines. There's nothing wrong with this herring.
- Interferometry (and various other optical computation devices, like the Hughes light valve, or indeed simply using lenses to compute Fourier transforms) is interesting because we can use it to achieve high levels of parallelism. However, this doesn't have much to do with the fundamentals of mathematical complexity analysis. Turing machines take exponential time for many problems that von Neumann machines such as the Pentium do not, but this is not because of a gaping defect in math, it's because that kind of Turing Machine is simpler to analyze.
- We're talking about computability, not complexity. Pentiums are as tightly bound by Turing complexity results as any other VNA; they just trade off time for space as per usual.
- There are variants that do not have that exponential behavior, but then the math is more tedious. Similarly there are Turing Machine variants that can compute exactly the same problem as does interferometry, and with the same time bounds, so long as one limits the problem to something physically realizable: a small finite problem computed in a small finite time. Interferometry only becomes "magical" in the unrealizable limit of infinite wavelengths, infinite lens apertures (see the fundamental theorem of Fourier optics for why that matters), non-quantized purely wavelike light, non-analytic functions of the optical wavefront, etc.
- This would be extremely interesting to me - can you provide any reference for your Turing Machine variant that works as an interferometer?
- However, if you insist on more computational power, like being able to match indefinitely nested parentheses, then you have to move up the hierarchy. It's not intuitive, but a machine that has just an ACCEPT state (showing it accepted the input string as being an acceptable match to the grammar) can be TuringEquivalent, yet if you add a REJECT state (indicating that it has definitely rejected the input string), then in general the latter cannot be TuringEquivalent! Basically because adding that state means deleting the program nodes that would otherwise allow it to keep cycling around trying for infinite time.
- I'm afraid I don't see the relevance here.
- When you get to the unrestricted grammars, equivalent to partial recursive functions and equivalent to TuringMachines, you cannot in general guarantee that you will get an answer in a finite amount of time. It's not that it will run forever, it's that you don't know if it will or not. This is not a trifling matter of terminology about states, it's about whether the computation time is bounded, and you can't define that away by labelling one state as "halt" or as "runs forever" - that actually is irrelevant to the problem (which is in fact always true of word definitions in math).
- It's not a trifling matter if you trouble yourself to consider it seriously for a moment. A "universal" state can be defined by any recursively enumerable trajectory over any recognizable pattern on the tape. Consequently, a "halt" state can be defined to match any "noncomputable" behavior of the machine. The program that produces the "noncomputable" behavior will itself serve as an adequate exformal definition of this state. If one could get rid of this exformation - if one could, for example, show that only a limited set of entirely computable behaviors can serve to signify halt states - you could distinguish between a "noncomputable" behavior and a halt state. But the BurdenOfProof remains on the Turing fans to establish this.
- The converse may be more clear: if a machine is guaranteed to complete its calculation in a finitely bounded time, it cannot compute all the total recursive functions. You would not like a programming language that enforced that, you really wouldn't. In such a language, you wouldn't be able to e.g. have conditionals change program flow. No non-trivial conditional recursion nor looping.
- The hell I wouldn't. ESOPs [Exclusive-or Sum Of Products] are my bread and butter. Constant time and don't spare the transistors. Let the exformal universe do all the looping and conditionals - my machines just run boolean polynomials to recombine signals, then squirt the mess out again.
- You're in trouble immediately. A large percentage of real world problems suffer a combinatorial explosion such that insisting on constant time will lead to a requirement for an exponential number of gates, and not only is that impractical for all but small problem sizes, communication delays rapidly become a dominating issue even if you try to push the limits a little.
- I'm well aware of combinatorial intractability. There are two exformal operations necessary to dispense with it. The first is that the bits represented by your computer must represent the most significant empirical distinctions on the signal under interpretation. The second is that the remaining state required for your computation is represented by the source of signal itself, not by your computer. SaulAmarel described one example of such reformulation with his CannibalsAndMissionaries. But reformulation of a general class of problems remains an area of active research.
- For instance, even just modelling a full 32x32 multiply tends to cause an exponential explosion, e.g. using BDD's, a problem that was only worked around relatively recently, and there are many related problems where extended BDD's still show exponential explosion in gates. An obvious example is factoring integers. This is potentially solvable as a satisfaction problem over boolean operators on boolean variables, e.g. each bit of the input and output being a boolean variable. But exponential explosion means this is unrealizable except on the tiniest of problems, whereas my little program with looping and conditionals can solve vastly larger factoring problems.
- I don't question that these problems, as formulated, are not amenable to constant time computation. I question whether any of the number systems and operations on them you mention are actually the most efficient way to deal with empirical classification and transformation - which is exformal to any computational intent.
- The proof is via diagonalization, as pioneered by Cantor for proving the existence of higher orders of infinity. This is not a matter of common sense. :-)
- At last we agree on something. ThereIsNoInfinity, so Cantor is just as worthless as Turing. More worthless - he's wasting more brainspace in otherwise useful minds.
- But I don't agree. There are no infinities that we know of in the physical universe, but infinity is critically important to mathematics. Diagonalization, and different orders of infinity, are important even for so simple a thing as demonstrating the existence of real numbers as opposed to rationals.
- Yes, there are no real numbers in reality. Even Turing says they're not computable. Ask yourself then, why do we take them seriously? Answer: because LimitTheory? depends on them. Standard and non-standard analysis depend on 'em. They are conventional to the vast bulk of modern mathematics. So there's a really wonderful AdPopulam? argument for their necessity. But this isn't to say other systems are not feasible and possibly preferable. GeneralizedBalancedTernary provides the basis for one. Combine that with GeometricAlgebra and we might have something ...
- Even the ancient greeks knew this, although they rather disliked it.
- See http://www.maa.org/reviews/mpa.html for Fowler's really excellent refutation of this idea. A brilliant book.
- There's no escaping this; it's been extremely thoroughly explored by people who share your tastes, and mathematically, you're stuck with it. Math in general just fails without it. And no one ever claimed it applied to the real world.
- I'd like to agree with you on this, but unfortunately reals and complex and surreal etc. numbers are used, without questioning their provenance, throughout theoretical physics. LimitTheory? is accepted without question by vast numbers of physicists. This leads to physical FrameProblems like infinities of virtual particles, ReNormalization, etc. Mathematicians may not intend that their systems be thought of as applying to the real world - but like it or not they are. As for Turing, I'm willing to bet that 99.9% of CS grads believe without question that his proofs apply to their hardware.
- This is not a question of personal opinion and taste and philosophy, this is a question of studying existing mathematical theory and history and understanding why all attempts to topple this inevitably failed. Unless you're arguing about the real world. But no one said anything different, there.
- FrameProblems have a mathematical basis. So does ExFormation. If you can explain to me just which mathematical constructs you think I misconstrue here, I'd be much obliged.
- The tricky part of this is probably simply that people tend to say "well, if it doesn't exist in the real world, then I can safely reject it in math, too". But this is false. The ancient greeks discovered irrational numbers -- the hypotenuse of a triangle in general has an irrational length, so if you want to solve real world problems, the best way is to allow irrational numbers as a useful abstraction, even if they don't exist in the real world.
- Again please see Fowler's book for an excellent alternative construction of the Greek approach to this. As to irrationals we've already agreed that all we can ever have access to are approximations of same. One way to represent empirical shapes, then, is to use a number scheme that makes the regions you want to interrelate into first class mathematical objects. This is why I mention GeneralizedBalancedTernary.
- The alternative is too awkward.
- It is taste - you see?
- Similarly with negative numbers. For a long time (millennia), they were either not conceived as "numbers", or else philosophically rejected as "numbers". But they are a great way to model, e.g., debt in the real world.
- Can be treated as one dimensional vector math without loss of generality. Which is why I mention GeometricAlgebra.
- Imaginary and complex numbers were rejected for a long time. But again they're the best way to solve many problems.
- The binding between 2-D vector math and complex numbers is so intimate that EulersEquation? is regarded by many as some fundamental result about the mind of god - rather than being, as it is, a concise representation of the classical mathematical frame. Again GBT provides an excellent empirical alternative - so excellent, in fact, that it underpins some of the largest imagery databases in use today. See http://www.spaceimaging.com for example.
- Infinity and infinitesimals have often been rejected as physically unrealistic - but they are often the best model. Models of computation, such as Turing Machines, are sometimes rejected by people's intuition, and yet they are sometimes the best model. It doesn't matter worth a damn what someone's personal intuition is; this space has been thoroughly explored long since by people much smarter and more expert than you and I, on both sides of the issue, and it's long since been settled. There's rarely anything new found to add to these discussions.
- I believe I've worked with only a few folk smarter than you and I. As a rule the smarter they are, the bigger kooks they are. History demonstrates that the idea that we have already worked out the "best" or even a particularly good basis for mathematics, information theory, or computer science is ridiculous. The more CS fundamentals I read, the more arbitrary the whole frame appears. I don't believe any of these smart guys have BigOmega's ear, or that any of their results are for the ages. And I think the stupidest thing you and I can do is to accept what we've received from them without question.
- But going along with the spirit of WhatDoesHaltingMean it is formally the case that the class of machines all of which run forever without halting is not TuringEquivalent; such machines are at minimum one step down in the ChomskyHierarchyOfComputationalPower?, and are merely equivalent to context sensitive grammars, not to TuringMachines.
- Um, a machine that computes the digits of pi would need to run forever, but is quite useful. We could say the same about any InfiniteStateMachine. I'd suggest that an InfiniteStateMachine is actually more powerful than an FSM, not the other way round. Cf. LeibnizianDefinitionOfConsciousness.
- I'm not sure what you think you're arguing there, but it is in fact true that TuringMachines cannot compute pi. Does that help? Transcendental numbers are not computable. It's quite parallel to the way that you yourself cannot compute pi on paper, you can only compute a series of rational approximations - although you have the advantage of being able to reason about the existence of a limit if that series can be proven to converge. The latter activity happens in a larger system than does the computation of the series of rationals.
- I spoke of computing the digits of pi. These digits are obviously Turing-computable and, just as obviously, their limit is not. Similarly, the digital expansion of rational numbers like 1.00000000000000000[etc] is not Turing-computable. I fail to understand the relevance of this to the utility of the approximations of intermediary terms of expansion. These approximations are all that we will ever get from any physically realizable machine.
There is no spoon. Intuitionistically speaking, all programs that cannot be inductively proven to terminate are illegal. For reactive systems, use coinduction proofs and replace termination with progress. Are there any interesting programs left?