According to StephanHouben in the page discussing TheEmperorsNewMind, RogerPenrose's biggest mistake is to be one of those HardCorePlatonists. This means that, at least in Stephan's non-platonist mind, he is unable to give a fair account of the implications of GoedelsIncompletenessTheorem. Stephan also complains that Penrose uses CommonSense arguments while in quantum land, which is "not the most reliable guide". I make a few amateur comments in defence of Penrose in these areas in TheEmperorsNewMind and ShadowsOfTheMind. -- RichardDrake
I never accused RogerPenrose of actually making mistakes. However, I did accuse him of interpreting GoedelsIncompletenessTheorem in a particular, Platonist, way. I read TheEmperorsNewMind before I started studying mathematics, and at that point in time I thought his conclusion (that Fk(k) is true) was inevitable. It was only at university that I met the other viewpoint, which is arguably a bit counter-intuitive but perfectly valid. That doesn't mean that RogerPenrose's view is a mistake, it is just that it's not as clear-cut as one may think after reading the book. Perhaps RogerPenrose's biggest "mistake" is to be such a good writer ;-) -- StephanHouben
MeaCulpa, Stephan. Mostly I wanted your point of view on this page. StephenHawking would sympathize with you. (Looking at your name again, you're not a Dutch transcription of his are you?)
[Penrose made a huge mistake. He completely forgets that human mathematicians are inconsistent systems: they make mistakes and believe in provable falsehoods fairly frequently (unintentionally, of course). They are thus perfectly well handled by Goedel's theorems.]
Another objection runs as follows:
I think everyone agrees that experimental results have not yet confirmed any variety of the BoseEinsteinCondensates? thesis. Whether it has been shown to be false, for all possible "locations" in the brain, I also doubt, from what I've read. Particular hypotheses as to how BE condensates arise and are shielded for a time within the brain, before interacting with more conventional "switching", may well have been falsified by experiment. I think TheJurysStillOut? on this one. That's partly why I thought it would be interesting to track on Wiki.
I think the above objections to Penrose are worth more discussion on Wiki, so I thought I'd put 'em together here. But strangely, I think it's even more helpful, for those interested in this area, to record some "mistakes" Penrose himself admits to making:
But the most amazing "mistake", to my mind, I've left to last, because it's one that, as Penrose himself says, has been a tacit assumption of "many other physicists" since the 1920s (and I assume that includes some with pretty good reputations). This is taken from p461 of TheEmperorsNewMind, where Penrose has just shown that the probabilities involved in quantum state reduction are different if time is reversed - the first inkling in all of mainstream physics since Newton of time-asymmetry:
Sure, but this concept of past, present and future can adequately be explained by Thermodynamics - no need to add additional time-asymmetric facets to the theory. Of course, RogerPenrose notes that this raises the question of what caused the universe to be so ordered in the first place.
"can be explained by Thermodynamics" - I wish someone would explain how. And how is that related to the news that many systems will run in both directions? (see http://www.sciencenews.org/20020727/fob1.asp)
About this time-asymmetry: I'm not an expert in QuantumMechanics, but it is important to note the way QM is used to compute things. RogerPenrose also talks about this, but perhaps I should stress it again. When you want to compute, for example, the orbit of an electron, you only use Schroedinger's equation - Penrose calls this the "U" procedure. You then get a completely deterministic answer, phrased in terms of a linear combination of "classic" states for the electron. When at the end of the day your boss walks in and asks you where the electron really is, you apply the "R" procedure and get a statistical distribution for the position of the electron. That is, the "R" procedure is not considered to be something which happens at a particular point of time; rather, it is a kind of "post-processing" to interpret the results of QM.
Penrose suggests that "R" is actually something which happens in reality. "R" is the part of QM which is irreversible. So if "R" is actually a physical process, then QM would indeed be time-asymmetric.
The traditional "Kopenhagen" interpretation of QM, on the other hand, says that QM is just a procedure to compute the behaviour of a given system, and that no physical meaning should be attached to this "R" procedure. I agree with Penrose that this is not very attractive from a philosophical point of view. But that is the traditional view on QM. In this view, there is no time-asymmetry.
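The "U" and "R" procedures described above can be sketched numerically. This is only an illustration: the two-state system and the Hamiltonian are my own minimal choices, not anything from the book.

```python
import numpy as np

# "U": deterministic, unitary (reversible) evolution of the state vector
# under the Schroedinger equation, psi(t) = exp(-i H t) psi(0).
H = np.array([[0.0, 1.0],
              [1.0, 0.0]])                    # a simple Hermitian Hamiltonian
t = 0.7
eigvals, eigvecs = np.linalg.eigh(H)
U = eigvecs @ np.diag(np.exp(-1j * eigvals * t)) @ eigvecs.conj().T

psi0 = np.array([1.0, 0.0], dtype=complex)    # start in "classic" state 0
psi_t = U @ psi0                              # now a linear combination of states

# "R": the Born rule, applied as "post-processing" - squared amplitudes
# give a statistical distribution over the measurement outcomes.
probs = np.abs(psi_t) ** 2
print(probs, probs.sum())                     # the probabilities sum to 1
```

Note that "U" alone is perfectly reversible (replace t by -t and you recover psi0); all the irreversibility enters with the final "R" step, which is exactly the point under discussion.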
Isn't it fair to say that only a small minority of real physicists in the last seventy five years have taken a strict "Kopenhagen" view of QM? Traditional it may be, but "not very attractive from a philosophical point of view" to many of us and not very widely held by those most familiar with the subject matter?
-- RichardDrake
Yes, I think this is fair to say. -- StephanHouben
This sounds somewhat off. Copenhagen insists ultimately that wave functions have to collapse, which means that it's the combination of U and R that describes reality. The main alternative, many-worlds, only uses the U procedure and relies on thermodynamics to separate everything out.
Sure, but this concept of past, present and future can adequately be explained by Thermodynamics - no need to add additional time-asymmetric facets to the theory.
Hmm, two points here. The Second Law of Thermodynamics is, according to Penrose and a lot of other physicists, simply the result of the extraordinarily low entropy in the very first milliseconds of the big bang. There's no time-asymmetry in the more fundamental laws of physics and stochastics that give rise to the Second Law, given the very low entropy starting point. Interestingly, although TheEmperorsNewMind has a light-hearted cartoon showing a bearded God choosing an incredibly low entropy point in the total phase space on "booting" the universe, Penrose prefers to ascribe the initial entropy to fundamental physical law in the form of the unproven (and presumably unprovable) Weyl Curvature Hypothesis. Penrose never declares whether he considers such physical laws themselves, which he agrees were key to the evolution of self-conscious human life, to be part of a divine design. He certainly started out as a materialist and sceptic, by his own admission.
But point two is that nobody is claiming, are they, that the time-asymmetry we observe with entropy can have anything to do with the detailed working of our minds and consciousness? Whereas Penrose and a few others that he's now persuaded believe that the not-yet-discovered rules of a non-computable theory of QuantumGravity will be both time-asymmetric (like current Quantum Theory's "R" state reduction) and key to the working of human brains. -- RichardDrake
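Point one - that time-symmetric micro-dynamics plus an extraordinarily low-entropy start is enough to produce a Second-Law-like arrow of time - can be illustrated with a toy simulation. The random walk and the coarse-graining scheme here are my own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)

# Each particle steps +1 or -1 with equal probability: the dynamics has no
# built-in time direction. But all particles start bunched at the origin
# (a very low-entropy state), so coarse-grained entropy rises anyway.

def coarse_entropy(positions, bins):
    """Shannon entropy of the coarse-grained (binned) position distribution."""
    counts, _ = np.histogram(positions, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return -(p * np.log(p)).sum()

n_particles = 5000
positions = np.zeros(n_particles)        # the low-entropy initial condition
bins = np.linspace(-60, 60, 25)

entropies = []
for step in range(200):
    positions += rng.choice([-1, 1], size=n_particles)
    entropies.append(coarse_entropy(positions, bins))

print(entropies[0], entropies[-1])       # entropy grows from step 1 to step 200
```

Run the film backwards and the same dynamics would look entropy-decreasing, which is the sense in which the arrow comes entirely from the initial condition, not from the laws.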
Penrose believes that this argument is very strong logically, especially as he's re-expressed it in ShadowsOfTheMind. But he also accepts that it is likely to be just one example of a much broader issue: that human understanding of anything, even if very much less open to logical scrutiny than Goedel, is going to require non-computable, time-asymmetric physics. From the moment I first opened TheEmperorsNewMind, I've found this a compelling point of view. And quite honestly a terrific relief from what had seemed to be the extraordinary combination of astounding claims, great pride yet complete non-achievement in reductionist artificial intelligence. There is an underlying influence here that I've come to believe at least partly explains why we in software have so often eschewed ExtremeHumility and produced such mediocre results. Since the original speculations and projections of AlanTuring we've had self-delusion at the heart of our discipline. We need some hard science to get us back on track.
Those last four sentences were intended to stir up some debate. Please don't DisagreeByDeleting but do use all other means WardCunningham has made available to you. I of course reserve the right to change my own flawed understanding.
-- RichardDrake
The error lies in the fact that Goedel's Incompleteness Theorem is not Goedel's Theorem!
I will clarify the last sentence. The fact that you know (or, more exactly, believe) that Fk(k) = true, and the fact that you understand Goedel's Incompleteness Theorem are totally unrelated. Why?
Simple. Because Goedel DID prove his theorem in the Whitehead/Russell set of axioms. So the proof of this theorem could be accomplished by a complex enough Turing machine. VERY different from being unable to prove a theorem yet believing in its truth.
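The point that formal derivation is mechanical - that a machine can enumerate the theorems of a formal system without any "insight" - can be shown with a trivial made-up system. The axiom and rule below are mine, purely for illustration; nothing here is specific to Goedel's actual system:

```python
from collections import deque

# A minimal "formal system": one axiom, "0=0", and one inference rule,
# "from x=y derive Sx=Sy". A Turing machine can mechanically enumerate
# every theorem of such a system by breadth-first rule application.

def enumerate_theorems(limit):
    theorems = []
    queue = deque(["0=0"])                    # start from the axiom
    while queue and len(theorems) < limit:
        t = queue.popleft()
        theorems.append(t)
        left, right = t.split("=")
        queue.append(f"S{left}=S{right}")     # apply the inference rule
    return theorems

print(enumerate_theorems(3))                  # ['0=0', 'S0=S0', 'SS0=SS0']
```

For any effectively axiomatized system the same enumeration idea applies; what Goedel showed is that for sufficiently strong consistent systems, some sentences never turn up in such a list, nor do their negations.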
Also, if you just believe that Fk(k) = true, you are just limiting yourself. We could create a Turing Machine that:
I think I agree entirely, though I can't be sure - which is the whole point. Penrose believes in absolute proof. He believes in an infallible creature called a Mathematician. It knows when it has got something correct. And it is correct about that knowledge. It just knows. Mistakes aren't a problem for it. If that's true of Penrose, then I guess if he says something, we can't argue! But I doubt it really is.

There are two explanations for that 'Eureka' feeling you get when you think you've got something right: 1. You got it right. Well done! Or 2. You're fooling yourself. Proof was invented to encourage self-doubt and careful checking, to eliminate reliance on false assumptions, but your stomach may rumble while you are thinking something through, and your concentration drifts. This can happen at any time. How do you know you really know something is true? How do you prove you've proven it?

So the apparent ability of a mathematician to "see" truth doesn't require any magic - just an acceptance that they are often wrong, and that we as a race make progress by bumbling through difficulties and occasionally getting lucky. No contradiction with Goedel. No quantum magic required.

How was Fermat's Last Theorem proven? How was the Four Colour conjecture proven? (And how do we know they are proven now, anyway?!) It took a lot of people hundreds of years, many of whom thought they'd done it, several times, only to have a flaw pointed out to them. And someone may yet find a flaw in either of those famous examples - only a few dozen people in the world are really capable of doing so, due to the obscure techniques involved. That doesn't give much scope for effective peer review. (Looking at it from a programming point of view, I expect that Penrose is rather like one of those lucky programmers who never has any bugs in his code, no siree! Until someone finds one...) -- DanielEarwicker
For discussion of the idea that, as computing is not purely algorithmic, Penrose (and others) may be challenging a strawman, see AlgorithmsOrInteraction.
PenroseCannotConsistentlyAssert or can he? I've moved the detailed discussion of the Goedel stuff there. It's a very important foundation for Penrose's argument and it deserves discussion. But I'd like to make room if possible for other aspects of Penrose's proposals and their implications both for HumanBeings and ComputerScience. Trouble is, Wiki may be short of real bio-neuro-psycho-quanto-cosmo-philosophy experts compared with the Goedel stuff. Or maybe it's easier to bluff in the Goedel arena ...
I don't know which is worse: mathematicians pretending to be philosophers, or philosophers pretending to be mathematicians... undoubtedly the former, because mathematicians are less likely to be led astray by non-rigorous math (but rigor in philosophy is just tedious, eh?), and then we have physicists pretending to be mathematicians and philosophers...
Finally it becomes clear to me why Plato/Socrates argued that each man should have one (and only one) area of expertise.
Yeah, Philosophy is not for the dilettante.
Philosophy is only worthwhile if they know as little as possible about everything else?
Some online criticisms of Penrose:
TheoVerelst : Always good to read about fundamental science, or is it?
Without reading them back, some remarks about the subjects I found on the page.
First, quantum physics isn't a completed building that a mathematician can model with complete coverage of every physical and other observable phenomenon. It's not that it has been disproven for what it claims to be, the way Newtonian Physics was; it simply isn't a Complete Theory of Everything, nor does it claim to be.

Some 'laws' are well known in their context. For instance, the Heisenberg relation states something about simultaneous measurements of position and momentum ('impulse'): it is not possible to measure both with infinite accuracy, with as extremes that full measurement-wise knowledge of either leaves you without a clue about the other, no matter how those measurements are conceptualized as 'ordered'.
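The position/momentum trade-off can be checked numerically. In this sketch (grid sizes and the Gaussian packet are my own choices, with hbar set to 1), the spreads of a wave packet are computed in position space and, via Fourier transform, in momentum space:

```python
import numpy as np

hbar = 1.0
N = 4096
x = np.linspace(-40, 40, N)
dx = x[1] - x[0]

# A Gaussian wave packet; Gaussians saturate the uncertainty bound.
sigma = 2.0
psi = np.exp(-x**2 / (4 * sigma**2))
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)            # normalize on the grid

# Position spread from |psi(x)|^2.
sigma_x = np.sqrt(np.sum(x**2 * np.abs(psi)**2) * dx)

# Momentum spread from |psi(k)|^2, with p = hbar * k.
psi_k = np.fft.fftshift(np.fft.fft(psi)) * dx
k = np.fft.fftshift(np.fft.fftfreq(N, d=dx)) * 2 * np.pi
dk = k[1] - k[0]
psi_k /= np.sqrt(np.sum(np.abs(psi_k)**2) * dk)        # renormalize in k-space
sigma_p = hbar * np.sqrt(np.sum(k**2 * np.abs(psi_k)**2) * dk)

print(sigma_x * sigma_p)   # ~0.5 = hbar/2, the minimum-uncertainty value
```

Narrowing the packet in x (smaller sigma) widens it in k by the same factor, which is the "full knowledge of either leaves you without a clue about the other" extreme in the limit.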
Quantum physics has a complete and strong basis in observations and statistical foundations. It speaks about probabilities, and its fundamental integrals speak mainly about combined probability densities, though one can phrase important parts in other terminology; expectation values, variances, and (advanced) statistical mathematics based on observables are the main mathematical vehicles.

Functional integrals, especially of the squared kind (...), lend themselves well to the important Gaussian function/curve, to which every repeated observation is statistically inclined ('law' of big numbers) to converge in the presence of (on average) neutral measurement noise. They (or their differential counterparts) form a big part of most quantum computations.
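The convergence claim can be illustrated directly: repeated observations of a fixed value through zero-mean noise average out to the true value, with a spread shrinking like 1/sqrt(n). The numbers and the (deliberately non-Gaussian) uniform noise model are my own choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# A "true" value observed through zero-mean, non-Gaussian measurement noise.
true_value = 3.0
n_repeats = 1000          # observations averaged per experiment
n_experiments = 5000      # independent repetitions of the whole experiment

noise = rng.uniform(-1, 1, size=(n_experiments, n_repeats))
averages = (true_value + noise).mean(axis=1)

# The averages cluster tightly around the true value, and (central limit
# theorem) their distribution tends to a Gaussian even though each single
# noise sample is uniform, not Gaussian.
print(averages.mean())    # close to 3.0
print(averages.std())     # close to sigma_noise / sqrt(n_repeats)
```

The uniform noise has standard deviation 1/sqrt(3), so the averages should spread by roughly (1/sqrt(3))/sqrt(1000) ≈ 0.018, regardless of the noise's non-Gaussian shape.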
Goedel: labeling is about the possibility of continuing to name all the things you want to consider, which is not so much the direction of most physics, where most laws state generalized behaviour of matter - so a law may be complicated, but not much in need of more indices. In principle, Fock space (an infinite number of Hilbert spaces, representing a completely general (convolvable, and possibly limited to normalized) mapping from any Hilbert space signal to any Hilbert space range result) is dimensional enough not to put you in Goedel games when the number of particles considered is bounded or countable. And possibly beyond.

The idea is that all the, let's say, quarks in the universe are probably never enough to describe everything about all quarks in the universe when we assume they must describe themselves, so that a computer consisting of the whole of the universe can never contain all data about itself, let alone predict its own future.

That depends on the aggregation possibilities, on how much exact information we require for our particular computation, and on the laws involved. We would run into a Goedel-like problem if we needed a number of atoms (say, one for each of 64 bits) to store the position of each atom in the universe in each dimension. THAT is a certified Goedel-like problem.

If, however, we were interested in the average mass and momentum of the whole universe, we'd only need the sum of all masses and a point of inertia (which we don't have) - definitely not a mathematical theory problem at all, but a major physical and philosophical one.
The 'state' of the mind in the quantum mechanical/theoretical sense is hard to define, and therefore hard to reason about, especially when it isn't well-defined or knowledge of the essential boundaries of quantum theory isn't present.

Some remarks remind me of study work I did before '96, when I had been into quantum physics more than a bit (I'm a network-theory (academic) electrical engineer). An interesting book in the context of Penrose is 'Ultimate Computing' by Stuart R. Hameroff (1986), in which an anesthesiologist brings forward the idea of consciousness and describes the then-newer area of nanotechnology; I happened to come across it during a library literature search on parallel computing.
About the importance of the right physical models and decent model-wise thinking, what is the state of the mind when the tide is high, as compared to when the tide is low? Strictly physically speaking, there is an observable difference, which is rarely accounted for.
Even stronger: unless gravity is seriously quantized (and there is to date no reason to even try rigidly assuming that), and the same holds for EM fields, all models of anything containing matter known to man essentially have to deal with intrinsic holistic features of both quantum and classical models of reality.

Making and working with a model of reality is always a matter of common sense, except that experimental results, when they don't fit expectation patterns immediately or at all, may not appeal to a given human being's common sense. The makers of quantum theory made perfect sense to themselves, but when a newcomer, or a less knowledgeable (or less intelligent) person, looks at it, it is possibly counter-natural. It makes perfect theoretical sense, though, as long as you see it for what it is: a theory based in major ways on (heavy) statistics, in which (decent and full-out) Fourier analysis and Dirac theory are assumed known.

Reasoning in mathematics - or, alternatively, mathematical reasoning - is another story, but every mathematician should know that certain (hopefully correct) reasoning steps may yield results which seem counter-intuitive; yet the reasoning steps have to have been understood to come into existence at all, let alone be applied.

To quote the most important philosopher of science of the 20th century: every theory is valid until it is falsified. The main mathematical assumptions, axioms and proofs have gone unfalsified for a long human time, and an amazingly short geological or cosmic time - unless the dinosaurs, or the little green men from somewhere in outer space, can contribute to the weighing of that statement against time.
[To clarify, this is using one particular technical definition of the word "valid". This could confuse readers; people are confused about Goedel's theorem and undecidability etc quite frequently. The above statement is not necessarily true given other common definitions of the word "valid".]
Theo Verelst Not knowing who I'm responding to: in normal scientific and reasonable academic language, and in all reasonability, my statements are in common language. The statement added above represents an opinion which to me indicates the reader did not understand the basics of the theories discussed to the normal level at which they apply; then those theories are valid, and Goedel is about what I described, for most practical use. The rest is gibberish, or not to the point for the areas I brought forward - I'm quite sure, and more than qualified. The confusion comes from invalid use and explanations of the often indeed counter-intuitive, non-deterministic (which is not necessarily non-Platonian; that's a philosophically tinted discussion, whereas my paragraphs are about the fundamental sciences, which I assure everyone are, in proper use, quite, quite valid AND have limitations) quantum mechanical theories. The expectation value of throwing a die may be hard to define with statistical accuracy, but it is a quite valid concept. And it is quite valid theory to assume that on average you'll throw 3 1/2 after many throws. Quite valid reasoning, theory, and practical application, in normal language. Believe me.
["Gibberish"? How terribly unkind. To clarify, if you do not mean "valid" in a technical sense, if you mean "every theory is valid, until it is falsified" using everyday definitions of the word "valid", then you are wrong.]
TV Unless you mean the rather childish or boring play upon words that of course you can think of some theory which is untrue, and then not mention it or refuse to disprove it - and don't blame me for not stating things with great accuracy on a wiki - you'd have to talk to one of the top philosophers of this time (he might still be alive). That you don't understand the essence of what he himself called 'Logik der Forschung' doesn't mean he's wrong. No serious scientist would say so, I'm quite sure. Unless you prove you are one, I think you don't understand the point very well.
[Many things have turned out to be technically "independent of the axioms of the system". This is not identical to "valid until falsified" except using a technical definition of "valid". The most general 4 truth values are "provably true", "provably false", "inconsistent" (paradoxical), and "undecidable", which is the same as independent of axioms.]
TV Then you are talking about a combination of language and formal logic. 'Provably true' is challenged by that philosopher, which is an interesting and important part of his work. Do your homework first, I suggest. Just to draw the setting: this guy was from the time of Einstein, and not unknown to him. In your language: every theory (also in normal language) is at best undecidable in mathematical language, unless proven false. You never know for sure whether some theory or thought is really, actually true; but when, given a well-defined one, you can prove it false using normal derivations, you are certain it is false. Major, century-old, mainstream, quite accepted and acceptable philosophy of science, the denial of which probably indicates you're either a bad scientist or indeed don't understand what it is supposed to say.

It actually says, in a way, that you're not a good scientist when you simply state something, don't challenge what you're saying, and call it true. Who agrees with your four truths? The maharadji? [Who?] Your mother? Your friends? A reasonably intelligent bystander? The power of the natural sciences, whose manner of speaking you (and others, at times) use, is in a very clear science of research. Experiments are supposed to be repeatable, and the logic follows usually understandable, mathematically foundable reasoning lines. Sort of like: if I do a clearly defined experiment, under fixed circumstances, it should always give me the same results. That game works until it is proven that at some point your theory fails. Until that point, you can maintain your theory.

Many things independent of the axioms of the system. Oh boy. Who are you - a mathematics minor (minor, mind you...) with an idea of a political position in science or something? Your definitions are so far away from most things in decent science or human reasoning that it's tempting to repeat words I used. I mean, I am serious about the subject, but such a phrase either doesn't indicate the same in return, or (to me) indicates that you indeed do not understand what this stuff is about.

Major philosophers have dealt with 'proof', 'existence', 'reasoning', 'reality', and such concepts for centuries and longer, and you state from the outset that you know it all? How do you know you are not dreaming your whole existence and interactions with the world? A quite reasonable and maintainable philosophical line (though probably quite useless in practice) could see you and me as little brains in some brain tank, invisibly linked to our bodies, which just exist in dreamspace. Or something. I mean, whatever. The whole point of good science and philosophy, also in normal life, is that you are able to take a little distance from your opinions and theories and try to be correct in general.
[The oft-repeated summary of Goedel as saying that many things are true but unprovable is a wrong summary. They are not "true but unprovable", they are simply "unprovable with regard to axioms of system A", and for any given enlarged system B in which such a theorem is provably true, there is an alternate system C enlarged with different axioms in which that same theorem is provably false.]
[TV] I didn't give that summary. You have to be something of a mathematician to appreciate Goedel correctly; if I remember correctly (and I don't mean from some popularistic book, which is fine, but not necessarily a scientific source), he was talking about mathematical proofs, and thought along the lines I indicated.

Assuming you're serious and worthy of some of the self-ascribed authority: how do you prove that the day after tomorrow the gravitational constant will still be the same, and that we won't all be floating around on the surface of the earth like on the moon?

Not to be obnoxious, but that IS a serious physics question. The assumption of the repeatability of experiments, and the practical applicability of the theory of gravity, determine when you are right to think gravity will be the same at some point in the future.

No one will find it hard to share the assumption that, in all human normality and practice, it will. So a normal person would call the theory of gravity true. But strictly speaking you can prove that only by extrapolation from the often repeated and repeatable experiment of walking on earth, and from how gravity has behaved thus far. Which is not problematic here, but it is for your type of thinking in other areas of the fundamental sciences.

The idea of the natural sciences is that they provide models of reality, rely on correct (formal) mathematical proof mechanisms, and - especially the fundamental ones - give us a highly reliable view of what is repeatably true. But we have nothing but repeated experiments and observations to base our 'truths' on. Really. That's a fundamental point in science.

And when you don't want to accept or understand that, in the longer run you're doomed not to be taken all too seriously as a scientist, or you risk being written off as someone who merely wants to use (abuse) mathematical formulations to sound more convincing in some argumentative way, without having understood the rules of either decent mathematics or natural science.

And when you claim only mathematics - which is rather limited, because OF COURSE the mathematical thinking of human beings is influenced by their normal observations of reality, and at least by human modes of thinking - then you aren't saying much at all, and at the least you suggest I make mistakes that, in all reasonability, I don't make in these fields. And just to be clear: I have formal and heavyweight university training in philosophy (of science), formal (mathematical) reasoning - that is, proof mechanisms - advanced mathematics, and quantum physics. None of which would be needed to understand the high-school physics basic that everything is a model, or the main thoughts of Karl Popper. I'm not just talking, and my points are, to me and I'm sure to every reasonable scientist in the relevant field, clear and nothing out of the ordinary.
The bit about microtubules should be a dead giveaway that something is wrong. Amoebae are composed partly of microtubules, and they don't exhibit the sorts of behaviors most people would count as intelligent.
A human is more intelligent than a rat. How the brain works in each is the same, though. Intelligence is a layering on the base functionality, and it's the base functionality that may use microtubules via quantum means. It is now thought that proteins reach their conformation, out of a shape space of 500 billion options, using quantum tunneling. Google this and you'll find a bunch on it. Something more subtle has to be going on for such complexity at such a scale to happen so fast.
[That shows something interesting about cell growth and metabolism. Cells are remarkably complex and intelligent (in the very loose sense) in their design, and microtubules may have something to do with this. But none of that is any kind of support for Penrose's theory about microtubules and human-level intelligence. Occam's razor says microtubules have more or less the same function in rat brains as in human brains, and that human intelligence derives from higher level architecture.]
Rats are very intelligent. They are not as intelligent as humans. So part of it is architecture, which may not be related to microtubules, but the architecture part still relies on the underlying capability of the brain, which would use microtubules and tunneling. Rather than invoking Occam to come to a conclusion, isn't it better to just say you don't know how it works and do the research?
[Unfair retort. That does not follow at all. Yes, the higher level architecture depends on the functioning of the lower level architecture, which itself uses microtubules, that is true. Your jump immediately after that is the unfair part. No, it is not better to prefer "we don't know" over "use Occam's razor".]
[Penrose makes a fairly outrageous claim, and doubt is shed on his claim by the obvious fact that non-humans, though intelligent, nowhere approach human intelligence, even though they have micro-tubules.]
You are making an unsupported judgement about the relative intelligence of humans to other animals and then using that to make an equally unsupported conclusion. When you are trying to say how something works, not just give a general opinion, it usually is better to say I don't know, instead of jumping to a conclusion based on no evidence. The razor doesn't give you any evidence to say how something really works so it isn't a valid principle when talking about descriptive topics.
[I'm not following. Surely you don't mean that it is an "unsupported judgement" to say that humans are more intelligent than other animals?]
It's unsupported that animals "nowhere approach human intelligence", and therefore unsupported that micro-tubules are not part of an explanation for human-level intelligence stemming from the evolution of the cerebral cortex.
[In addition to my neuroscience books, I have a whole section on animal and insect intelligence: I am far from being a human-intelligence bigot. But I'm a little stumped here. No animal has complex grammar (neither non-trivial symbol ordering nor nested constructions), and in addition to what this means for linguistic intelligence, there is also strong evidence that this is either the same as, or at least very similar to, more general cognitive capacities in humans. And there is no evidence of similar complex cognition in animals: all of the contenders are instinctive, completely rote-learned behavior, etc.; there is no example of nontrivial conditional sequencing or nesting in any behavior, including animal language, that is not instinctive.]
[So just what are you trying to claim here? That some animals might be as smart as humans? I sure hope not, because that is unsupported, no matter how amazing it is to find out that, in their own ways, some animals (like dolphins and chimpanzees and parrots and eusocial insects) exhibit surprising levels of intelligence compared with what human-intelligence-bigots used to think. That doesn't make it human level intelligence. No such evidence.]
I think the question is this: is human intelligence qualitatively different than animal intelligence? That is, is there a fundamental difference between them? Is human intelligence akin to animal intelligence, only more developed? Is there a fundamental limit on animal intelligence that humans have surpassed by some mechanism? The latter position is favoured by some people for non-scientific reasons, but is not supported.
[The answers seem clear (albeit not proven beyond unreasonable doubt). Compare the sonar processing cortex of dolphins; it is still beyond human ability to reproduce all its capabilities. Does that make it qualitatively different than other kinds of animal/human cognition? Depends on exactly what you mean, but largely "no, not in any dramatic sense". It does what it does because of the evolutionary demands of its environment, which caused a novel higher level architecture. The low level architecture appears to be identical to that of other mammals.]
[The evidence is overwhelming (albeit not proven beyond unreasonable doubt), that the same is true of human cognition.]
We would seem to be in agreement then.
Sonar isn't a form of cognition at all. It's a form of perception. Really now, how hard is it to keep these two radically different things separate?
Human language is a form of cognition. It is in fact the basis for human cognition. And yet, no animal has anything resembling human language. Thus, no animal has anything resembling human cognition.
On this point, I think we should give animal cognition more credit. For example, how about gorillas learning human sign language to express their own ideas? Or prairie dogs' calls that can distinguish among different types of people they see? What type of intelligence does a cat need to climb a tree? I believe a lot of conscious decision-making is involved, even if a cat won't tell you about it in English. What type of consciousness is involved in elephant paintings, or humpback whale songs?
RogerPenrose has released a new book called "The Road to Reality". It is over 1000 pages long and is a no-holds-barred introduction to mathematical physics for the (extremely) motivated layman. It has little on consciousness, but does discuss his latest views on mathematical Platonism in a few dozen pages. There are lots of links to reviews and forum discussion threads on this page:
http://www.321books.co.uk/reviews/the-road-to-reality-by-roger-penrose.htm.
Argumentum ad Populum
1000 pages to introduce mathematical platonism. Gobsmackingly astonishing when you consider that mathematical platonism hasn't been a serious position since the field of mathematical logic was invented almost a century ago.
-- Note I said "a few dozen pages". The rest of the book covers maths/physics that doesn't need the assumptions of Platonism.
In fact, excluding religious fanatics like Roger Penrose who are simply incapable of making any coherent argument for their chosen position, mathematical platonism is dead, dead, dead. It was given mortal wounds by Russell, killed by Goedel, and utterly annihilated by Paul Cohen, who showed that a system including the opposite of the Continuum Hypothesis is consistent, a result which really is nothing but the heir to non-Euclidean geometries.
Mathematical platonism has all the credence of creation science. That's probably because it IS creation science. There is only one true mathematics. Sound familiar to anyone?
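For readers unfamiliar with the independence result invoked above, here is a sketch (my summary, not part of the original discussion) of what Goedel and Cohen actually established about the Continuum Hypothesis (CH), which is the formal content of the "non-Euclidean geometries" analogy:

```latex
% Independence of the Continuum Hypothesis from ZFC set theory.
% Goedel (1940), via the constructible universe L:
\mathrm{Con}(\mathrm{ZFC}) \implies \mathrm{Con}(\mathrm{ZFC} + \mathrm{CH})
% Cohen (1963), via forcing:
\mathrm{Con}(\mathrm{ZFC}) \implies \mathrm{Con}(\mathrm{ZFC} + \neg\mathrm{CH})
% Hence, if ZFC is consistent, neither CH nor its negation is provable
% from the ZFC axioms -- analogous to the parallel postulate being
% independent of the remaining axioms of Euclidean geometry.
```

Whether this independence supports formalism or is compatible with platonism is, of course, exactly what is being disputed on this page.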
Aren't these awfully cheap arguments? Mathematical platonism is out of fashion, therefore pooh-pooh on RogerPenrose? BTW, KurtGoedel was a mathematical platonist, not that it matters. See the "snob appeal" section of http://philosophy.lander.edu/logic/popular.html.
So by your reasoning, the fact that creationism has been a forbidden subject among biologists for the past century informs us of nothing at all. Yet in practice, that is the precise reason why creationism is dismissed by all right-thinking people.
The fact is that any explanation of why mathematical platonism is crap would be impenetrable and incomprehensible without an introduction to mathematical logic. And I'm not here to give an introductory course in mathematical logic. If you want to remedy the glaring deficiencies in your education, any half-assed university will let you take a course, to say nothing of the countless books on the subject. So demanding that I provide a defense of mathematical formalism, or concrete arguments against mathematical platonism, is utterly unreasonable and disingenuous in the extreme.
The only thing people need to know is that the top experts in the relevant field (mathematical logic) have all rejected it, and that the situation will never, ever reverse itself. The entire field of mathematical logic is the rejection of mathematical platonism. Much like chemistry is the rejection of vitalism from alchemy, or biology is the rejection of creationism from natural history. So mathematical formalism is about as likely to go away, or let up enough for mathematical platonism to make a comeback, as evolution is to do so. And just like evolution is the basis of all biology, mathematical formalism (the antithesis of mathematical platonism) is the basis of all mathematical logic. The analogy is exact.
Except for one thing. Mathematics was more than ten times as old as natural history when the latter got turned upside down. And having such a long tradition, mathematicians are a very traditionalist lot, which means that mathematical platonism remains reputable even many decades after it's been formally disproved, rejected and supplanted. This contrasts sharply with the kangaroo courts that were held against creationists after evolution became accepted. And it's why ignoramuses like you can get away with the crap you do. Crap like implying that I have the burden of proof just because you're an undereducated fool! -- RK
Except that the only argument that you present for the supposed demise of mathematical "platonism" and the victory of "formalism" (whatever brand of the two you are referring to), is "because RichardKulisz says so". Oh, well, maybe this has some value in some places, who knows?
No, you nitwit. Because mathematical logic says so.
No, really. When and where has this mathematical logic been heard to say such a thing?
And if you want to know why it says so, then I suggest you study the damn subject!
['Scuse the intrusion, but both of you, please note that mathematical logic has been in a search for better foundations ever since it was invented; we have a provisional system that is used meanwhile, but it is not some kind of ultimate, and research in this topic remains white hot. JohnMcCarthy for instance has been working on that for some decades, although I personally don't care aesthetically for his approach. -- DM]
Yeah, whatever. Like this "singularity of mathematics" had something to do with anything. If you speak about a domain you hardly know, it would pay to use its concepts and terminology rather than inventing your own mumbo jumbo to put up an appearance of knowledge.
<deleteWhenCooked>
I don't need to provide sources for my definition any more than I ever needed to provide any for my definition of life. The fact is, this is the way the words are used. These are the concepts they are attached to. And just because everyone else fucks up the definitions of these concepts doesn't make it any less obvious that my definition is right (fits actual usage in the literature) and the "common" definition is dead wrong (destroys any understanding of the literature). And if YOU think my definition is wrong then it's up to YOU to explain why there's a completely separate word for the positions of mathematical existence vs non-existence (realism vs irrealism).
This is not "the way words are used", see for example http://www.amazon.com/exec/obidos/ASIN/0195143981, http://www.sm.luth.se/~torkel/eget/thesis/chapter1.html . Sure you can claim your definition is "right", but then the reader need be aware that you're operating with your definition. Under your definition I can hardly see a justification to attach your version of platonism to RogerPenrose, in particular with regards to TheRoadToReality.
So you claim that modern platonists adhere to "singularity" and "irrealism". Where can we verify this assertion, i.e. which particular mathematician calling himself a "modern platonist" exposed (in what paper/book) the "singular + irrealist" view? In contrast, one has little trouble finding contemporary mathematicians who call themselves platonists, adhere to realism (actually, that's their working definition of platonism) and don't proclaim any kind of "singularity of mathematics" thesis.
Well, that's a first for you, Costin: providing evidence and argument instead of insult and condemnation. But I think that an entire book with Realism in the title trumps a single article with platonism buried within it. An article which supports my definition, since Platonistic == Modern Platonism.
The problem comes entirely because modern people don't understand exactly how fucked up classical thinking was. It's so far outside their frame of reference that they literally can't conceive of it so they try to force-fit it to their modern conception of the world and end up confusing everything. So a little history lesson is in order.
Classical platonism was of the form "the four elements do not correspond to but ARE the first four regular solids, such that physical reality's atoms are the tiny little geometric shapes of mathematics" (singular + realist). Modern platonism has rejected the realism of classical platonism but kept the "we don't KNOW whether a Goedel sentence is true or false, but we know that only one of the two CAN hold in the one true mathematics".
You meant to say a book recently published by Oxford University Press on the very subject (platonism and anti-platonism)? I don't care, call it "realism" if it suits you and call "platonism" whatever you want to call "platonism". If you provide your definition, I can kind of make sense of what you're trying to say.
But then it's not OK, if you want to discredit what you call platonism, and then use the label to discredit other folks who call themselves platonist but do not operate with your peculiar definition for platonism.
Here's one more quote for you: "Views of mathematical objects as independently existing abstract entities are generally called a form of platonism." Solomon Feferman ISBN 0195080300
Yet another quote, for the laymen (from http://research.microsoft.com/~gurevich/Opera/123.pdf):
"Working in mathematics, one develops a strong feeling of dealing with objective reality. As in archaeology, one digs and discovers things and does not create them in any way. One feels, as you do, that mathematical objects really exist and have existed before anyone discovers them. That is mathematical Platonism."
First, your own reference supports me. Is this handwaving or wishful thinking?
Third, you really shouldn't feel bad about this because this isn't your field of expertise. This isn't mathematics we're discussing. It's metaphysics and philosophy of mathematics.
Indeed, we don't discuss mathematics, we're discussing your trolling. You haven't provided a single reference to support your handwaving.
Why would I need to when you're doing the work for me? You've yet to produce any references for your own position that platonism == realism. And you'll have to actually counter your own reference which says that platonism == singularity. And even if I lost on references, you'd still have to justify using your confusing terminology when mine makes so much more sense. You'll note that using my terminology, we can intelligently discuss every stance from classical mathematical platonism to modern mathematical irrealistic formalism. If you choose to redefine platonism as realism, then what terms will you use to refer to singularity vs plurality?
That's the point, Richard. I don't give a damn (and neither does anyone else) about discussing your terminology, and I'd be wasting my time for nothing to convince you to accept commonly used terms for what they are.
Accept them how they're used in the literature? Or accept them how idiots like you CLAIM they are used?
Yes, accept them as they're used in the literature, which is the same as I claim they are used. I provided enough references, and I could provide even more, if it weren't a useless exercise, because you're just an irrelevant troll trying to spin it any way that fits your ridiculous prejudices.
Now the bottom line is this: did you get that classification of yours from somewhere or did you make it yourself? That's a simple question, demanding a simple answer.
As already stated, I got it from the fact that mathematical realism is treated as a completely different position from mathematical platonism, despite your claim that they are identical. But then, listening to people is not your forte.
Your interpretation of "fact" is always to suit your purpose, but then I don't have time to waste arguing subjective matters with you. I asked you whether you took that classification from any kind of relevant reference (like a book/article on the history of mathematics, an encyclopedia, or a book on philosophy of mathematics).
Back to DisagreeByDeleting, Richard?
At the risk of stirring stagnant waters, Penrose makes many mistakes. One obvious problem is that he only considers Turing Machines, which are internally deterministic. Remove this constraint (by opening the system) and his arguments fall away as straw men. Open systems are not constrained to stasis, and thus need not be limited by Goedel's theorems. I suppose this is equivalent to the diagonalization implied in Goedel sentences, and the response to them, though I think it is probably a lot messier than that, since the influence of external factors is likely to be somewhat random.
Indeed it needs to be, since a system cannot know which direction to go to lift itself out of Goedel's trap. Nevertheless, it can recognize when it has been lifted out, giving us a simple algorithm which broadly reflects the way that neuronal nets interact with their environment.
I don't need quanta to give me a random input. Just heat and stir. Sorry Roger. Like your tiles though. -- RichardHenderson
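The diagonalization mentioned above can be made concrete with a toy sketch (a hypothetical illustration of mine, not from the discussion): given any finite table of 0/1 sequences, the diagonal construction produces a sequence that provably differs from every row. This is the same engine that drives Cantor's theorem and the Goedel/Turing self-reference arguments.

```python
# Toy illustration of diagonalization: given a finite table of 0/1
# sequences, build a sequence that differs from row i at position i,
# so it cannot equal any row in the table.

def diagonalize(table):
    """Return a sequence whose i-th entry flips table[i][i]."""
    return [1 - table[i][i] for i in range(len(table))]

table = [
    [0, 1, 0, 1],
    [1, 1, 0, 0],
    [0, 0, 0, 1],
    [1, 0, 1, 1],
]

anti = diagonalize(table)
# anti disagrees with each row at the diagonal position, so it is not
# in the table -- analogous to how a Goedel sentence escapes any fixed
# formal system, or a "defeater" program escapes any claimed decider.
for i in range(len(table)):
    assert anti[i] != table[i][i]
print(anti)  # → [1, 0, 1, 0]
```

This is only a finite analogue, of course; the actual theorems apply the same construction to infinite enumerations of programs or proofs.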