A model or definition so complex and flexible that it can be bent to fit just about any observation. This makes science difficult: something so bendable cannot be meaningfully tested against observation. While it may be useful for making predictions of known and yet-to-be-discovered phenomena (UsefulLie), it does not necessarily help reveal the underlying mechanisms, because the actual mechanism may be simpler than, or different from, the TautologyMachine's model.
StringTheory has been accused of this, as has creationism.
See also TautologicalDefinitionFallacy
Rejection
A model, definition, or theory isn't flawed simply because it covers a lot of ground or happens to be 'complex'. A 'unifying theory of everything' is actively sought by scientists, with high hopes and aspirations for models that cover every observation in this universe. Beyond our universe, 'Math', 'Logic', and 'Philosophy' operate with intent to cover just about everything else... though they are, in practice, limited to the computable.
In Science, a theory or model can be rejected as incorrect if (and only if) it either (1) does not make any falsifiable predictions, or (2) makes a prediction that is falsified, or (3) makes contradictory predictions. A model that makes no predictions at all fails on count (1). Now, just because a theory isn't rejected doesn't mean it's accepted, either... for example, simpler models are preferred, so when scientists have a choice between two models that make the same predictions, they pick the simpler one. Einstein summed this approach up as: "as simple as possible, but no simpler". Models that are too simple ought to be rejected, and models that are unnecessarily complex oughtn't be accepted.
Some of the more general models of unified string theory do fail on item (3), and many religious 'explanations' for our existence fail on item (1). These fail as models without any need to resort to the concept of 'TautologyMachine'.
In Math (and formally computed Philosophies and Logics), theories and models are fully disassociated from empirical observation of the real world. Instead, each theory has its own, personal universe, constructed from a finite set of axioms (given truths). In any 'useful' mathematical theory, there will be rules (among those axioms) that allow the discovery of 'new' truths from other 'known' truths. The computational process of actually applying these rules is known as 'inference', and a (formal) description of the known valid rules applied to move from known truths to a particular statement is called a 'proof' of that statement. One should note that 'new' truths may also exist in the form of rules that take other truths and produce new ones, just like the primitive rules among the axioms - this allows for the production of 'domain-specific' models within a mathematical model.
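The inference process described above can be sketched as a toy forward-chainer (the function names and the encoding of facts and rules here are invented purely for illustration): the axioms supply the initial facts and the production rules, applying rules to known truths discovers 'new' truths, and remembering each step yields a proof.

```python
# Toy sketch of inference as computation. Facts are strings; a rule is a
# (premises, conclusion) pair. Axioms = initial facts + rules.

def infer(facts, rules):
    """Forward-chain: apply rules to known truths until a fixpoint,
    recording the step that produced each truth (that record IS a proof)."""
    proof = {f: ('axiom',) for f in facts}
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in proof and all(p in proof for p in premises):
                proof[conclusion] = ('rule', premises)  # remember the step
                changed = True
    return proof

axioms = {'P', 'P->Q'}
rules = [(('P', 'P->Q'), 'Q'),   # modus ponens, instantiated
         (('Q',), 'Q or R')]     # disjunction introduction, instantiated
proof = infer(axioms, rules)
print('Q' in proof)              # True: 'Q' is a discovered truth
print('Q or R' in proof)         # True: derived from the derived 'Q'
```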
Because Mathematical/Philosophical/Logic models and theories are completely disassociated from empirical input, they can only be rejected if they violate item (3) of the scientific list above - that is, if they make contradictory statements. If you do manage to produce a contradiction, it means the theory or model is 'inconsistent' -- i.e. you've just proven that 'true' is the same as 'false', and therefore all 'false' things are 'true' and, similarly, all 'true' things are 'false'. Thus, proof of even one single contradiction is sufficient to completely reject the model or theory. You don't even get to call it a UsefulLie.
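The "one contradiction poisons everything" point is the classical principle of explosion, and it can be sketched mechanically (the encoding below is an invented toy, not any standard library): given both p and not-p, disjunction introduction plus disjunctive syllogism derive any statement q whatsoever.

```python
# Principle of explosion, sketched. Propositions are strings;
# ('not', p) negates p; ('or', p, q) is the disjunction of p and q.

def explode(p, q, facts):
    """From a fact set containing both p and not-p, derive arbitrary q."""
    assert p in facts and ('not', p) in facts   # the contradiction
    facts.add(('or', p, q))                     # disjunction intro: p |- p or q
    if ('or', p, q) in facts and ('not', p) in facts:
        facts.add(q)                            # disjunctive syllogism: q
    return q in facts

theory = {'2+2=4', ('not', '2+2=4')}            # an inconsistent theory
print(explode('2+2=4', 'pigs fly', theory))     # True: anything is a 'theorem'
```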
On decidability and Goedel's Incompleteness Theorem (because so many smart people still get confused): A prediction formed by following production rules in axioms is not the same as a decision, the difference being the direction of the computation. By the time you reach a prediction, you already have a proof - a description of the known valid rules you applied to get there. You just need to remember your steps. With a 'decision', however, you're offering a statement and demanding a proof (or a disproof - a proof of the negation). In any sufficiently powerful system, it is possible to construct a statement for which no proof or disproof exists within the system, so no finite search will ever find one. These statements are called 'undecidable'.
What Goedel did that is so amazing is he proved that, given any consistent, effectively-specified axiomatization of the natural numbers and their arithmetic, it is always possible to construct a statement that is true but undecidable. More generally, since all usable axiomatizations must (in practice) be effectively specified, all mathematical theories rich enough to express Peano arithmetic are 'incomplete'. There will always be truths they cannot discover. I'm not sure whether that conclusion excites most mathematicians or exhausts them. What it does mean is that they'll never be out of work... there will always, always, always be more fundamental truths to find, and new axioms to play with.
On the Use and MisuseOfMath: It is worth noting that a set of axioms can be quite arbitrary, and that not all axiomatizations are of equal 'value' in the economic sense. 'Usefulness' has just about the same meaning for mathematical theories as it does for scientific ones: a theory is useful if it helps you make money (or provides you something else of inherent value). This, of course, requires attaching math to reality by some mechanism or another in order to apply one's mathematical theories and models. The usual means of attaching the two is directly through science: 'hard' scientific theories are formulated within a mathematical model that allows one to plug measurement-values in one side and (via application of rules or a search for a proof) make a decision or provide an answer that (if the scientific theory is any good) will also be true - that is, true as empirically observed and measured.
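As a small illustration of plugging measurement-values into one side of a model (the 'measured' number below is hypothetical, chosen only for the example): a Newtonian free-fall model predicts a drop distance, and the scientific theory is 'good' insofar as that prediction matches what is empirically measured.

```python
# Attaching math to reality: the model d = (1/2) g t^2 is pure math;
# comparing its output to a measurement is where science happens.

G = 9.81  # m/s^2, accepted gravitational acceleration near Earth's surface

def predicted_drop(t_seconds):
    """The mathematical model: distance fallen from rest after t seconds."""
    return 0.5 * G * t_seconds ** 2

measured = 4.95                      # hypothetical measured drop (m) at t = 1.0 s
predicted = predicted_drop(1.0)      # 4.905 m
print(abs(predicted - measured) < 0.1)   # True: model survives this test
```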
Some interesting directions mathematical theory has taken over the last century or so are in the modeling of mathematical models themselves: theories of inference systems, computation systems, communicating systems, classification systems, and so on. These have grown into the fields of computability theory, information theory, type theory, and more. Such models of models (TypeTheory, InformationTheory, computability theory) tend to be a step more indirect in their application (and, thus, in possessing 'usefulness'). They are applied to proposed models and computations, and are used to draw conclusions and statements about those. These models-of-models are 'useful' if the statements and conclusions they make are 'useful'.
These tend to be of greatest value in fields that treat models and computations themselves as products... e.g. the fields of Engineering Design and ComputerScience. TypeTheory, for example, has found a great deal of 'usefulness' in ComputerScience on two different fronts: one is the classification of computations as 'safe' or 'unsafe' (a property known as TypeSafety); the other is the automatic analysis of expressions/statements/objects in attempting to divine programmer 'intent', which enables static polymorphic dispatch, various optimizations, and some of the more complex syntaxes and grammars (such as those that possess context-sensitive order-of-operations). These are 'useful' indirectly... e.g. to the programmer who gets to use context-sensitive macros, or to the developer, who can (with automated static type-safety) find 'obvious' errors before fielding the product or (with dynamic type-safety) at least fail gracefully. These help the user make money, work with reduced trepidation, get more done in less time, whatever - something useful.
Anyhow, back to the subject: Is it possible for 'math theories' to be 'TautologyMachines'? It is true that any complete proof within a particular mathematical theory is certainly a tautology within that theory: the 'complete' proof will necessarily always go all the way back to those axioms, which themselves are true by virtue of being axioms (which is circular logic from an outside perspective). However, the above description of TautologyMachine (that the models "can be bent to fit just about any observation made") simply doesn't apply; mathematical theories are disassociated from real-world observations.
Indeed, after determining that the theory is consistent - that 'true' and 'false' don't mean the same thing - one should only be concerned as to whether the mathematical model is of any 'use'. And, as Goedel's incompleteness theorem indicates, it will almost always be possible to make a theory more useful by carefully selecting axioms that decide the undecidable in ways users of the theory find 'useful'.
Conclusion: The word 'TautologyMachine' is misleading and of no value. The only thing that comes vaguely close to the 'TautologyMachine' is any model that self-contradicts - wherein 'true' and 'false' are the same - and we already have words for those: inconsistent or self-contradictory. That's all we need.
Now, I know that TopMind created this page on TautologyMachine - creating a new word with which to brand the opposition using his trademark unsubstantiated claims. To be honest, I wrote the above mostly to pull the rug out from under him, but I'll allow myself some small, optimistic hope that he'll actually try (maybe even succeed) at comprehending what I said before objecting to it. I'd appreciate it if he keeps his fuming objection to my above rejection (and my snark) below my sig, lest my statements be mutilated beyond readability via spurious injection of aggressive comments.
To be honest, I find your writing too meandering for that. Note that a TautologyMachine may not be a valid concern if not connected to reality, because in a purely mathematical realm it may not matter whether a model is "unnecessarily complex". Equivalency is equivalency. (It just makes the model hard for humans to work with.) However, for physical models we generally seek parsimony - the simplest explanation that fits observation. A model that is too complex is usually considered suspicious.
The TautologyMachine is not valid ever. And I agree that we seek 'as simple as possible, but no simpler', but never forget the 'but no simpler' part - parsimony itself is never a reason to reject a model. You reject a good explanation when you already have TWO good explanations, and one is simpler than the other.
That is true, but often the complicated one "smells" unnecessarily complex. One test of smelliness is difficulty in falsification, at the instance level and/or the model level. Another is being TuringComplete (NEEDS PROOF). If it can match anything by emulating anything, then it has only limited scientific use (NEEDS PROOF).
What it means for a model to be 'TuringComplete' is only that it is possible to pose problems that, to 'solve' using the axiomatic rules in the model, require computation equivalent to an arbitrary Turing machine. For example, the physics model can rather trivially emulate finite-tape Turing machines: one can formulate a problem equivalent to finite-tape Turing machine X by representing a complete, physical implementation of 'machine X' in the physics model. Then one can ask arbitrary questions of it (like "does it halt?"). And one can test it by finding the right equations and crunching all the numbers. Despite this, the physics model in use remains falsifiable - it need only make some predictions that aren't true as measured empirically. And it's quite easy to see that the physics model remains of scientific use.
You too often use 'TuringComplete' as a form of hand-waving, abracadabra magic to dismiss people's arguments or reinforce a claim, and I'm rather irritated with how you insist on repeating these unjustified (and often unjustifiable) statements as though we're supposed to either accept them on your word alone or perform ALL the legwork in disproving you. From now on I'm writing (NEEDS PROOF) or (NEEDS FURTHER EXPLANATION) whenever I'm irritated in this manner - nice, big, glaring, demanding real attention. Hopefully it will work better at getting you to explain your points and locate your errors than telling you where you're wrong or providing counter-examples (which are simply more effort than I've decided your comprehension is worth).
(Why is TuringComplete here [in see also]? Nothing about 'TautologyMachine' even begins to associate with it.)
A model that is TuringComplete can be any other model (NEEDS PROOF), thus a TautologyMachine (NEEDS PROOF). It is a technique that can be used to build a TautologyMachine (NEEDS PROOF).
A model that is TuringComplete allows you to represent the computation of a problem from any other model in terms of a problem described in the TuringComplete model. However, that does not imply any semantic equivalence between the models (the set of axioms and rules and their association to the world). Nor does it mean that the use of the model in this manner will seem at all sensible, or that the problem transformation will be easy to describe. Your statement here is simply untrue. It's rather like saying that "Brainfuck IS every model" simply because you can represent the computation of problems for any model within BrainfuckLanguage.
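To make the Brainfuck point concrete, here is a minimal BrainfuckLanguage interpreter (a sketch; real BF also has the ',' input command, omitted here). BF can compute - with unbounded tape it is TuringComplete - yet encoding some other model's problem into it, while always possible, obviously confers no semantic equivalence and no convenience.

```python
# Minimal Brainfuck interpreter: >, <, +, -, ., [, ] over a byte tape.

def run_bf(code, tape_len=30000):
    tape, ptr, pc, out = [0] * tape_len, 0, 0, []
    jumps, stack = {}, []
    for i, c in enumerate(code):          # pre-match the brackets
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    while pc < len(code):
        c = code[pc]
        if c == '>': ptr += 1
        elif c == '<': ptr -= 1
        elif c == '+': tape[ptr] = (tape[ptr] + 1) % 256
        elif c == '-': tape[ptr] = (tape[ptr] - 1) % 256
        elif c == '.': out.append(chr(tape[ptr]))
        elif c == '[' and tape[ptr] == 0: pc = jumps[pc]
        elif c == ']' and tape[ptr] != 0: pc = jumps[pc]
        pc += 1
    return ''.join(out)

# Sixty-five '+' then '.' emits chr(65): computation, yes; a model of
# anything in particular, no.
print(run_bf('+' * 65 + '.'))    # A
```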
In a sense, the BF people are right. That is why there are so many HolyWars in IT: there is no objective formulaic falsifiability (NEEDS PROOF) (except with a layer of human interpretation/assignment (NEEDS FURTHER EXPLANATION)). There is no consensus algorithm to say something belongs to one paradigm or another, for example (fallacy: can't prove an absence (of objective formulaic falsifiability) by use of an example).
In the end it's all just moving bits about, no matter how much you hide that away. The important thing is whether or not the language can move the bits about in all the ways they can possibly be moved.
In the end, a computation is a computation, perhaps. However, that doesn't mean that a model is every other model. Models aren't computations, and models aren't "just moving bits about".
A formal model is usually subject to computation; i.e. we can run scenarios through it. Yes, there are informal models, but they are more subject to subjectivity.
Agreed. Good, formal models should be subject to formal computation.
Note that being a TautologyMachine can be a matter of degree. There can be various degrees of "overly complex" (AbstractionInversion) and "difficult to falsify".
See also: UsefulLie, TuringComplete, AbstractionInversion