- WhatDoesHaltingMean seems equivalent to saying "halt doesn't mean halt"
- No, what it says is: Consider the behavior of a machine whose FSM is a static infinite loop. It sits on the same "square" and does not move. In the context of ComputationAsSignalProcessing this is identical to its "halt" behavior - it's a fixed-point behavior. But Turing has explicitly constructed a universal machine whose "state" is represented by a trajectory over multiple squares. There is nothing to suggest that the "halt state" is not also so defined. To imagine the machine does not iterate over this trajectory is thus completely exformal to the model. And so the distinction between halting and InfiniteLooping creates a FrameProblem.
- At one point I thought so as well, but it turns out that that difficulty does not exist in the mathematical analysis.
- Of course it's not in the math - it's a FrameProblem. That is to say, the problem lies in the exformal ontologies of the model. I'm picking on just the one to do with the distinction of halting here, but others that may give rise to similar weaknesses in the proof include the initial orientation of the FSM, the limitations on motion and plurality of the FSM, the source of the initial configuration of the tape, and most importantly the topology and navigability of the tape. These are obviously exformal because they are not encoded on the tape of the "universal" TM.
- Indeed the only real distinction to draw among behaviors of the A-Machine is one Turing entirely neglects - whether a particular machine's behavior is cyclic or not. In the context of ComputationAsSignalProcessing, Turing's proof amounts only to the fact that many machines do not fall into a fixed point attractor. A shorter, more constructive, and more generative result in this vein is Feigenbaum's analysis of the LogisticMap.
- Basically, computation is equivalent to string recognition and rewriting via a grammar at one of the four levels of the ChomskyHierarchy. Regular expressions are handy because they are guaranteed to either accept or reject an input string in finite time. It either matches the grammar specified by the RE or it doesn't. But RegularExpressions are not computationally very powerful - they're the least powerful of the four levels. Nonetheless it's great that they're guaranteed to give an answer in finite time; that actually makes them more practical for some things. That's why we use them.
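For concreteness, a minimal sketch of why the lowest level always answers: a DFA for the regular expression (ab)* reads each input symbol exactly once, so it accepts or rejects after exactly len(s) steps. The transition table here is hand-built for just this one RE.

 # DFA for the regular expression (ab)* over the alphabet {a, b}.
 DFA = {('S', 'a'): 'A', ('S', 'b'): 'DEAD',
        ('A', 'a'): 'DEAD', ('A', 'b'): 'S',
        ('DEAD', 'a'): 'DEAD', ('DEAD', 'b'): 'DEAD'}
 ACCEPTING = {'S'}                # zero or more repetitions of "ab"

 def matches(s):
     state = 'S'
     for ch in s:                 # one table lookup per input symbol
         state = DFA[(state, ch)]
     return state in ACCEPTING

 print(matches('ababab'))         # -> True
 print(matches('abba'))           # -> False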
- Being more physically minded, I regard computation as equivalent to signal processing. REs are delightful critters, I agree, but there are many simpler constant-time signal processing functions - interferometry, for example - that REs can't perform. As to the rest of the ChomskyHierarchy, it's only significant if you restrict computation to serial procedural string functions. You show me where a PinSort or an FPGA shows up in this pretty hierarchy and I'll take your "Basically" comment more seriously.
- You misunderstand. REs are not about "constant time". It is trivial to exhibit REs with exponential behavior in time and space; in fact they come up daily in real life.
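For example (assuming a backtracking engine such as Python's re module, rather than a DFA-based one), an innocuous-looking RE can take exponential time just to fail:

 import re, time

 pattern = re.compile(r'(a+)+b')            # an ordinary-looking RE
 for n in range(18, 25):
     start = time.perf_counter()
     pattern.match('a' * n)                 # no 'b', so the match must fail...
     elapsed = time.perf_counter() - start  # ...but only after ~2**n attempts
     print(n, round(elapsed, 3), 'seconds')
 # The time roughly doubles with every extra 'a' in the input.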
- I'm not making a complexity argument here; the point about constant time is that the interferometer is emphatically guaranteed to operate in finite time, like the lowest level of the ChomskyHierarchy, yet is computationally more powerful - or at least not implementable by the highest level of the ChomskyHierarchy. The same goes for PinSort. So I don't see this hierarchy as constraining any computation outside the limited, if presently popular, frame of VNA string processing.
- It also doesn't matter how you personally wish to regard computation; what's important is that there are various interesting mathematical proofs on the topic, which obviously transcend matters of personal taste. The weakest such area is the Church-Turing Hypothesis, which amounts to the notion that anything that is computable can be computed via Turing machines. It is simply a hypothesis because it doesn't formalize the definition of "computable" in a non-circular way, so it remains possible that someday someone will come up with a new notion of "computable" that fits everyone's intuitions, and yet is larger than the class of functions that TuringMachines can handle. So you could attack it on that front -- but not just by handwaving about your personal preferences.
- The ChurchTuringThesis was disproved in 2002 by Willem Fouche - WikiPedia has a link to the relevant paper, which regrettably requires a commercial subscription to read at present. Suffice it to say DouglasAdams proved prescient again - Fouche's work really did use a hot cup of tea. The reason I raise the matter of taste is that I can see no other reason why the Turing/Cantor/Newton/Descartes models should be preferred over physically realizable devices.
- You are gravely mistaken. The ChurchTuringThesis is not the kind of thing that is disprovable, so it can't have been disproven. That doesn't mean it's right - I already addressed this above. It may be right, it may be wrong, but it is an informal statement; that's why it's not "disprovable". It's a sort of rule of thumb that appears to be adequate for now (yes, including today). See for instance http://arxiv.org/pdf/cs.CC/0412022, which mentions work by Fouche.
- The stronger physical version of same ("Every function that can be physically computed can be computed by a Turing machine.") is the thesis in question. See http://en.wikipedia.org/wiki/Church_Turing_Thesis for the reference. I haven't actually read the Fouche paper - Tien Kieu's was good enough for a crank like me. Esoteric HyperComputation? hardware isn't my interest here.
- I'm afraid that, since you have really gotten yourself mixed up about this entire subject, I'm going to have to give up on you on this topic for the moment...you need to overcome a large number of misconceptions that you have acquired about these mathematical topics, first. I suggest reading mathematical, rather than philosophical, books on the topic. Trust me -- this is not a question of opinion, you really do misunderstand the mathematics of the situation, and you just cannot get philosophical about misunderstood math.
- We are obviously at an impasse. I'm well aware of the math - I was originally trained in classical CS, number theory, logic, etc. But I don't buy any of it any more, which makes me a horrible crackpot. Crackpots have a choice - they can be cranky in silence or they can be cranky out loud. I find the latter less stressful. Anyway, thanks for playing, and if you come across any of the proofs you've handwaved at - the limited-space-limited-information proof, or the one about the TuringInterferometer? - do please let me know. --PeterMerel
- FPGAs are no different than any other electronic circuit, in that they can be simulated by a TuringMachine. The equivalence relation on different levels of power is precisely whether something can be simulated by something else -- even if it takes exponential time to do the simulation (which is quite common with TuringMachines; they're not very practical without enhancement). Some other page talks about open versus closed systems and claims that FPGAs are more powerful than TuringMachines because they are open systems interfacing with the outside world, but that's a complete red herring. One can analyze both FPGAs and TuringMachines either as closed or as open systems. One should compare apples to apples.
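As a sketch of that simulation claim, here is a purely sequential program stepping a made-up netlist (a half adder) of the sort one would configure into an FPGA fabric; a clocked circuit would simply call step() once per clock tick. The gate set and wire names are invented for illustration.

 GATES = {'AND': lambda a, b: a & b,
          'OR':  lambda a, b: a | b,
          'XOR': lambda a, b: a ^ b}

 # A half adder: sum = a XOR b, carry = a AND b.
 NETLIST = [('sum', 'XOR', 'a', 'b'),
            ('carry', 'AND', 'a', 'b')]

 def step(inputs):
     """Evaluate every gate once, in sequence, on a conventional machine."""
     wires = dict(inputs)
     for out, gate, x, y in NETLIST:
         wires[out] = GATES[gate](wires[x], wires[y])
     return wires

 print(step({'a': 1, 'b': 1}))   # -> {'a': 1, 'b': 1, 'sum': 0, 'carry': 1}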
- Referring again to Turing's 1936 paper, FPGAs are clearly C-Machines, not A-Machines. Their "state" depends on the behavior of the exformal universe. Turing explicitly scopes his A-Machine results to exclude C-Machines. There's nothing wrong with this herring.
- Interferometry (and various other optical computation devices, like the Hughes light valve, or indeed simply using lenses to compute Fourier transforms) is interesting because we can use it to achieve high levels of parallelism. However, this doesn't have much to do with the fundamentals of mathematical complexity analysis. Turing machines take exponential time for many problems that von Neumann machines such as the Pentium do not, but this is not because of a gaping defect in math; it's because that kind of Turing Machine is simpler to analyze.
- We're talking about computability, not complexity. Pentiums are as tightly bound by Turing's results as any other VNA; they just trade off time for space as per usual.
- There are variants that do not have that exponential behavior, but then the math is more tedious. Similarly there are Turing Machine variants that can compute exactly the same problem as does interferometry, and with the same time bounds, so long as one limits the problem to something physically realizable: a small finite problem computed in a small finite time. Interferometry only becomes "magical" in the unrealizable limit of infinite wavelengths, infinite lens apertures (see the fundamental theorem of Fourier optics for why that matters), non-quantized purely wavelike light, non-analytic functions of the optical wavefront, etc.
- This would be extremely interesting to me - can you provide any reference for your Turing Machine variant that works as an interferometer?
- However, if you insist on more computational power, like being able to match indefinitely nested parentheses, then you have to move up the hierarchy. It's not intuitive, but a machine that has just an ACCEPT state (showing it accepted the input string as an acceptable match to the grammar) can be TuringEquivalent, yet if you add a REJECT state (indicating that it has definitely rejected the input string), then in general the latter cannot be TuringEquivalent! Basically because adding that state means deleting the program nodes that would otherwise allow it to keep cycling around trying for an unbounded time.
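A rough illustration of the ACCEPT-only idea, modelling "computations" as Python generators purely for the sake of example: the recognizer below can confirm that a computation halts, but it has no sound way to say REJECT, because at any finite step the computation might still halt later.

 def semi_decide_halts(make_computation):
     """ACCEPT iff the computation eventually stops yielding; never rejects."""
     for _ in make_computation():   # runs forever if the computation does
         pass
     return True                    # reached only if the computation halted

 def halting_example():
     for i in range(10):            # halts after 10 steps
         yield i

 def looping_example():
     while True:                    # never halts
         yield 0

 print(semi_decide_halts(halting_example))   # -> True
 # semi_decide_halts(looping_example)        # would run forever; no REJECT is possible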
- I'm afraid I don't see the relevance here.
- When you get to the unrestricted grammars, equivalent to partial recursive functions and equivalent to TuringMachines, you cannot in general guarantee that you will get an answer in a finite amount of time. It's not that it will run forever, it's that you don't know whether it will or not. This is not a trifling matter of terminology about states; it's about whether the computation time is bounded, and you can't define that away by labelling one state as "halt" or as "runs forever" - that actually is irrelevant to the problem (which is in fact always true of word definitions in math).
- It's not a trifling matter if you trouble yourself to consider it seriously for a moment. A "universal" state can be defined by any recursively enumerable trajectory over any recognizable pattern on the tape. Consequently, a "halt" state can be defined to match any "noncomputable" behavior of the machine. The program that produces the "noncomputable" behavior will itself serve as an adequate exformal definition of this state. If one could get rid of this exformation - if one could, for example, show that only a limited set of entirely computable behaviors can serve to signify halt states - one could distinguish between a "noncomputable" behavior and a halt state. But the BurdenOfProof remains on the Turing fans to establish this.
- The converse may be clearer: if a machine is guaranteed to complete its calculation in a finitely bounded time, it cannot compute all the total recursive functions. You would not like a programming language that enforced that, you really wouldn't. In such a language you couldn't, for example, let a conditional drive an unbounded loop or recursion - no non-trivial conditional recursion or looping.
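A sketch of the trade-off being claimed (the bound and the predicate below are made up for illustration): once every run must finish within a bound fixed in advance, an unbounded search has to be cut off, and "gave up at the bound" is a weaker answer than a genuine "no such value exists".

 def bounded_search(predicate, bound):
     for n in range(bound):        # always terminates
         if predicate(n):
             return ('found', n)
     return ('gave up', None)      # inconclusive, unlike a genuine REJECT

 print(bounded_search(lambda n: n * n == 1369, 100))   # -> ('found', 37)
 print(bounded_search(lambda n: n * n == 1369, 10))    # -> ('gave up', None)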
- The hell I wouldn't. ESOPs [Exclusive-or Sum Of Products] are my bread and butter. Constant time and don't spare the transistors. Let the exformal universe do all the looping and conditionals - my machines just run boolean polynomials to recombine signals, then squirt the mess out again.
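For instance, a toy ESOP - an exclusive-or of AND terms - evaluates as a fixed-depth boolean polynomial, with no loops or data-dependent branches in the machine itself. The particular three-input function here is invented for illustration.

 from functools import reduce
 from operator import xor

 # Product terms, each given as the set of input indices it ANDs together:
 # x0 XOR (x1 AND x2) XOR (x0 AND x1 AND x2)
 TERMS = [(0,), (1, 2), (0, 1, 2)]

 def esop(bits):
     products = [all(bits[i] for i in term) for term in TERMS]
     return reduce(xor, products, False)

 print(esop([1, 0, 1]))   # -> True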
- You're in trouble immediately. A large percentage of real-world problems suffer a combinatorial explosion such that insisting on constant time leads to a requirement for an exponential number of gates, and not only is that impractical for all but small problem sizes, communication delays rapidly become a dominating issue even if you try to push the limits a little.
- I'm well aware of combinatorial intractability. There are two exformal operations necessary to dispense with it. The first is that the bits represented by your computer must represent the most significant empirical distinctions on the signal under interpretation. The second is that the remaining state required for your computation is represented by the source of signal itself, not by your computer. SaulAmarel described one example of such reformulation with his CannibalsAndMissionaries. But reformulation of a general class of problems remains an area of active research.
- For instance, even just modelling a full 32x32 multiply tends to cause an exponential explosion, e.g. using BDDs, a problem that was only worked around relatively recently, and there are many related problems where extended BDDs still show exponential explosion in gates. An obvious example is factoring integers. This is potentially solvable as a satisfaction problem over boolean operators on boolean variables, e.g. with each bit of the input and output being a boolean variable. But exponential explosion means this is unrealizable except on the tiniest of problems, whereas my little program with looping and conditionals can solve vastly larger factoring problems.
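The "little program with looping and conditionals" can be as humble as trial division, and it already handles numbers far beyond anything a constant-depth gate formulation could realistically be built for:

 def factor(n):
     factors, d = [], 2
     while d * d <= n:             # loop and conditionals doing the work
         while n % d == 0:
             factors.append(d)
             n //= d
         d += 1
     if n > 1:
         factors.append(n)
     return factors

 print(factor(2**32 + 1))          # -> [641, 6700417]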
- I don't question that these problems, as formulated, are not amenable to constant time computation. I question whether any of the number systems and operations on them you mention are actually the most efficient way to deal with empirical classification and transformation - which is exformal to any computational intent.
- The proof is via diagonalization, as pioneered by Cantor for proving the existence of higher orders of infinity. This is not a matter of common sense. :-)
- At last we agree on something. ThereIsNoInfinity, so Cantor is just as worthless as Turing. More worthless - he's wasting more brainspace in otherwise useful minds.
- But I don't agree. There are no infinities that we know of in the physical universe, but infinity is critically important to mathematics. Diagonalization, and the different orders of infinity it reveals, are important even for so simple a thing as demonstrating that the real numbers are strictly more numerous than the rationals.
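The diagonal trick itself is tiny. In finite miniature (the table of rows below is made up): given any list of digit sequences, build one that differs from the i-th sequence at its i-th position, so it cannot appear anywhere in the list.

 def diagonal_escape(listed_sequences):
     # Change the i-th digit of the i-th sequence, so the result disagrees
     # with every listed sequence somewhere.
     return [(seq[i] + 1) % 10 for i, seq in enumerate(listed_sequences)]

 table = [[3, 1, 4, 1],
          [2, 7, 1, 8],
          [1, 4, 1, 4],
          [0, 0, 0, 0]]
 print(diagonal_escape(table))   # -> [4, 8, 2, 1], differs from row i at position i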
- Yes, there are no real numbers in reality. Even Turing's analysis says almost all of them are not computable. Ask yourself then, why do we take them seriously? Answer: because LimitTheory? depends on them. Standard and non-standard analysis depend on 'em. They are conventional to the vast bulk of modern mathematics. So there's a really wonderful AdPopulum? argument for their necessity. But this isn't to say other systems are not feasible and possibly preferable. GeneralizedBalancedTernary provides the basis for one. Combine that with GeometricAlgebra and we might have something ...
- Even the ancient Greeks knew this, although they rather disliked it.
- See http://www.maa.org/reviews/mpa.html for Fowler's really excellent refutation of this idea. A brilliant book.
- There's no escaping this; it's been extremely thoroughly explored by people who share your tastes, and mathematically, you're stuck with it. Math in general just fails without it. And no one ever claimed it applied to the real world.
- I'd like to agree with you on this, but unfortunately real, complex, surreal, and other such numbers are used, without questioning their provenance, throughout theoretical physics. LimitTheory? is accepted without question by vast numbers of physicists. This leads to physical FrameProblems like infinities of virtual particles, ReNormalization, etc. Mathematicians may not intend that their systems be thought of as applying to the real world - but like it or not they are. As for Turing, I'm willing to bet that 99.9% of CS grads believe without question that his proofs apply to their hardware.
- This is not a question of personal opinion and taste and philosophy, this is a question of studying existing mathematical theory and history and understanding why all attempts to topple this inevitably failed. Unless you're arguing about the real world. But no one said anything different, there.
- FrameProblems have a mathematical basis. So does ExFormation. If you can explain to me just which mathematical constructs you think I misconstrue here, I'd be much obliged.
- The tricky part of this is probably simply that people tend to say "well, if it doesn't exist in the real world, then I can safely reject it in math, too". But this is false. The ancient Greeks discovered irrational numbers -- the hypotenuse of a right triangle in general has an irrational length, so if you want to solve real world problems, the best way is to allow irrational numbers as a useful abstraction, even if they don't exist in the real world.
- Again please see Fowler's book for an excellent alternative construction of the Greek approach to this. As to irrationals we've already agreed that all we can ever have access to are approximations of same. One way to represent empirical shapes, then, is to use a number scheme that makes the regions you want to interrelate into first class mathematical objects. This is why I mention GeneralizedBalancedTernary.
- The alternative is too awkward.
- It is taste - you see?
- Similarly with negative numbers. For a long time (millennia), they were either not conceived as "numbers", or else philosophically rejected as "numbers". But they are a great way to model, e.g., debt in the real world.
- Negative numbers can be treated as one-dimensional vector math without loss of generality. Which is why I mention GeometricAlgebra.
- Imaginary and complex numbers were rejected for a long time. But again they're the best way to solve many problems.
- The binding between 2-D vector math and complex numbers is so intimate that EulersEquation? is regarded by many as some fundamental result about the mind of god - rather than being, as it is, a concise representation of the classical mathematical frame. Again GBT provides an excellent empirical alternative - so excellent, in fact, that it underpins some of the largest imagery databases in use today. See http://www.spaceimaging.com for example.
- Infinity and infinitesimals have often been rejected as physically unrealistic -- but they are often the best model. Models of computation, such as Turing Machines, are sometimes rejected by people's intuition, and yet they are sometimes the best model. It doesn't matter worth a damn what someone's personal intuition is; this space has been thoroughly explored long since by people much smarter and more expert than you and I, on both sides of the issue, and it's long since been settled. There's rarely anything new found to add to these discussions.
- I believe I've worked with only a few folk smarter than you and I. As a rule the smarter they are, the bigger kooks they are. History demonstrates that the idea that we have already worked out the "best" or even a particularly good basis for mathematics, information theory, or computer science is ridiculous. The more CS fundamentals I read, the more arbitrary the whole frame appears. I don't believe any of these smart guys have BigOmega's ear, or that any of their results are for the ages. And I think the stupidest thing you and I can do is to accept what we've received from them without question.
- But going along with the spirit of WhatDoesHaltingMean it is formally the case that the class of machines all of which run forever without halting is not TuringEquivalent; such machines are at minimum one step down in the ChomskyHierarchyOfComputationalPower?, and are merely equivalent to context sensitive grammars, not to TuringMachines.
- Um, a machine that computes the digits of pi would need to run forever, but is quite useful. We could say the same about any InfiniteStateMachine. I'd suggest that an InfiniteStateMachine is actually more powerful than an FSM, not the other way round. Cf. LeibnizianDefinitionOfConsciousness.
- I'm not sure what you think you're arguing there, but it is in fact true that a TuringMachine can never finish computing pi. Does that help? No machine ever writes a transcendental number out in full. It's quite parallel to the way that you yourself cannot compute pi on paper; you can only compute a series of rational approximations - although you have the advantage of being able to reason about the existence of a limit if that series can be proven to converge. The latter activity happens in a larger system than does the computation of the series of rationals.
- I spoke of computing the digits of pi. These digits are obviously Turing-computable, one after another, and just as obviously their limit is never attained by any finite run. Similarly, the digital expansion of a rational number like 1.00000000000000000[etc] never completes. I fail to see the relevance of this to the utility of the approximations given by the intermediate terms of the expansion. Those approximations are all that we will ever get from any physically realizable machine.
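For what it's worth, a machine of exactly the kind mentioned above is easy to exhibit. This sketch uses Gibbons' unbounded spigot algorithm: it emits the decimal digits of pi one at a time and never halts, yet every digit it has emitted so far is exact.

 def pi_digits():
     """Yield the decimal digits of pi forever (Gibbons' unbounded spigot)."""
     q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
     while True:
         if 4 * q + r - t < n * t:
             yield n               # the digit n is now safe to emit
             q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
         else:                     # absorb another term of the series
             q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                 (q * (7 * k + 2) + r * l) // (t * l), l + 2)

 digits = pi_digits()
 print([next(digits) for _ in range(10)])   # -> [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]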
- I believe that the halting problem was designed to see if a program ever reaches a certain state. That is, given a Turing machine described by the first half of the tape, determine whether it ever transitions to the state described by the second half of the tape. The halting-program buster (if (this program halts) while(1); else return;) could be written to run the equivalent of the halting program and, if that produces a 1 output for "halts", transition to the forever state, which always transitions to itself. In this way it will never reach the halt state.
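Spelled out as a sketch in Python, with halts(program) standing for the hypothetical decider described above:

 def make_buster(halts):
     """Given any claimed decider halts(f) -> bool, build a program it gets wrong."""
     def buster():
         if halts(buster):      # the decider says buster halts...
             while True:        # ...so buster loops forever instead
                 pass
         # the decider says buster loops, so buster halts here instead
     return buster

 # Any concrete decider is refuted. For instance, one that always answers "halts":
 b = make_buster(lambda prog: True)
 # b() now loops forever, so that decider was wrong about b; the symmetric argument
 # disposes of a decider that answers "loops". No halts() can be right about buster.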
Is it possible to prove that the halting problem applies to computers that are given constructors allowing them to construct more computers? (This kind of extension has a habit of breaking things that were considered proven.)
Yes. Even if computers constructing more computers were to result in something more powerful than a TM, it would still be unable to solve its own halting problem. If it is more powerful (which I doubt), it might be able to solve the halting problem for TMs.
See HaltingProblem, GeneralHaltingProblem, GeneralHaltingProblemProblem (seems somebody got into a MentalLoop?)