Without Cause

Some people claim certain events happen "without a cause". Others claim that practically every event is caused by something else, which was caused by something else, all the way back to the BigBang.

I plan to put the debate here ExactlyOnce RealSoonNow. Unless a helpful WikiGnome gets here first. -- DavidCary

A debate here is superfluous. Some folks around here refuse to accept anything less than pure unadulterated determinism, solely on personal philosophical grounds. They have some kind of metaphysical revulsion at even contemplating the possibility of non-determinism in the universe. As such, HolyWar ensues, because every time a phenomenon exhibits true randomness/non-determinism, they will claim that there's a hidden cause that we just haven't found yet, or that we cannot measure, or some such non-refutable claim. This ends all discussions. Such debates are best left to physicists rather than polluting wiki with even more amateurish rants on the subject. -- Costin

Todo: refactor from TrueRandom, WhyIamNotConscious, ... any other pages where this discussion was repeated? InteractiveComputationIsMorePowerfulThanNonInteractive (from about halfway down the page).



The definition at the top of this page was added after much back-and-forth on the TrueRandom page.

However, I find it is still lacking, for two reasons. Firstly, the useful parts of the definition are *still* not different from 'random'. Secondly, it conflates causality with determinism, but causality is not useful here: this assertion of a lack of any cause is a bit strange. If what you really want to do is assert the existence of supernatural events, why not do so? This assertion may have philosophical utility for you. It cannot have physical utility (by definition). It has nothing useful to add to the concept of randomness (in other words, anything useful it adds is completely separable from the concept of randomness). So in effect, you are looking for a definition which mixes up two separate, and separably useful, ideas. Why do you wish to do this? In common usage we have 'random': synonymous with 'stochastic', which originates from the Greek for 'pertaining to chance', and is the opposite of 'deterministic' (meaning a process whose workings can in principle be calculated).

If you are, as it appears, of a philosophical mind about this, here is a question to consider. What is the difference between something that has no cause, and something whose cause we may not determine, even in principle? Is there utility in a definition that differentiates these two? If so, what is it?

Rude of me not to answer this before. Sigh. Okay, what is the difference? Ugh, can you first clarify that question a bit? You added "even in principle" and that throws me off just a bit. I'll attempt it anyway. Something that has no cause... seems to deny the logic of current science (as I know it; if I'm wrong, please point me to the scientific source that dictates otherwise). Something that still has a cause, but one we can't determine, does not necessarily seem to deny that logic. Most of the scientific community realizes that we may not be able to determine everything at any one point in time. Note that I said "seems". It's my philosophy that humans have a hard time with the idea that our logic may only limit us and things within our perception (and come on... perception is very important; you know nothing of the world without perception), and that there may exist things that can defy our logic and interfere with us in OUR world without breaking the logical constraints on us and our perceptions. It may sound like I'm backtracking in the same sentence, saying defying but not breaking, but I discern a difference. I declare these are concepts I want to be ready for when we discover the holes in our logic.

On the other hand, it appears on further reading (How did I miss this before?) that the entire purpose of this 'abstraction' is to attempt to patch up one of the inconsistencies in a logically inconsistent system: free will + omniscience + omnipotence ... so perhaps it is all for naught.

One point at a time please, with explanation.

You will probably get further by being less demanding (nobody here owes you the exposition, or a spellchecking pass). Everything in the above paragraph is quite straightforward. It is mostly a reiteration of other parts of this page. What, exactly, were you having trouble with?

Here are some objections, then:

(1) It refers to the "useful parts", but doesn't say which parts are useful and which are useless, except for a later assertion that causality is useless, which isn't apparent.
(2) It compares with "random", but doesn't define that term.
(3) It says that causality is conflated with determinism, but determinism isn't mentioned in the definition.
(4) It embeds an unwarranted and irrelevant reference to the supernatural.
(5) It introduces the concept of "physical utility" without explaining what that is, why a definition of truerandom should have it, or how its absence is directly implied by or stated in the definition.
(6) It asserts "mixes up two separate, and separably useful ideas", which implies that "separably" does not mean "separately", leaving one without a clue what "separably useful" means.
(7) It's unclear how mixing useful ideas can possibly cause one of the ideas to be useless.
(8) It asks "why do you wish to do this?", which is a leading question.
(9) It gives a dictionary-style definition of the common usage of random which doesn't come from any dictionary, and pointlessly mentions the synonym "stochastic", which comes from the Greek "stochastikos", which means "skilful in aiming".
(10) It gives an unusual definition of "deterministic".
(11) It proceeds to ask a rigged question ("What is the difference between something that has no cause, and something whose cause we may not determine, even in principle?"), related to causality, despite having condemned causality as being a useless part of the definition.
(12) It drags in an unjustified reference to "free will + omniscience + omnipotence" as logically inconsistent, when it's usually agreed that such apparent inconsistency can be avoided by careful choice of definitions of the terms involved, "free will" especially.

You appear to be quite confused about the contents of this page. The above comments were clearly meant to be read in the context of the page as a whole. Briefly:

1) Causality is not related to randomness; attempting to add it doesn't add anything useful.
2) Random has been hashed out elsewhere on this page.
3) See 1); the lack of deterministic behaviour is key to randomness.
4) The reference to the supernatural is warranted as far as I can see. Everything in the natural world has a cause, thus the top poster's definition explicitly invokes the supernatural.
5) Physical utility is something this definition lacks, since there is no way, even in principle, that a human being can physically determine the difference between a random event and a 'truerandom' event. Hence if this definition has any utility, it must not be physical.
6) The idea of random and the (philosophical) idea of an ability to operate outside the physical are conflated here. What is described is a supernatural event, *not* a random event. Calling it 'random' as well adds no value.
7) It doesn't cause anything to be useless. However, it may cause confusion, because in this case the whole is no greater than the sum of its parts; using the term asserts that it is, and will thus cause confusion.
8) Incorrect; see 7).
9 & 10) These seem to be non-technical reiterations of the mathematical sense of these words, which *are* concepts with great utility.
11) This is not rigged, but points out the fundamental flaw in attempting to hinge randomness on causality; see 1) again.
12) Unjustifiable? It is from further down the page. It seems to be the original poster's motivation. How could it possibly be more justified?

About the 'apparent inconsistency': it is a hard inconsistency, even without free will. There is no problem with this, so long as you realize when you are outside the realm in which you may claim logical inferences.

Re (12), what realm is that? When you have stepped outside the limitations of nature. Some questions are not even meaningful in this context: for example, "what happened before the universe existed" is not a scientifically valid question, but that doesn't make it an uninteresting question...

Using one's imagination to speculate about something outside nature doesn't imply in any way that logical inferences are no longer permitted. Certain questions don't have scientific validity, but that's irrelevant - it's not what's being discussed here.

Certainly, logical inferences are permitted, but they are not of the same order as when you are working within a logically consistent system. So you have to be very careful, and 'truth' has a different meaning. This does not invalidate the exercise, of course! However, it is worthwhile, for the effective communication of ideas, to try to separate this sort of fundamentally speculative thinking explicitly. My objections to this page have nothing to do with disliking this sort of thinking; quite the opposite, in fact. What I am objecting to is the introduction of a new term that seems to serve *no* useful purpose (i.e. anything that can be achieved by the use of this term is better achieved by using one or more separate concepts). In my experience, this sort of addition of rather vaguely defined terminology can only promote muddy thinking and confusion in the communication of ideas. If you see a real need for a new terminology, first define what that need is, show how the term differs from the tools we already have in the language, and demonstrate that it makes sense in some context. In other words, does the new term have any utility? In my opinion, none of this has been achieved on this page.

That's too vague. What you said before implied that outside of the realm you had in mind, you could not claim logical inferences. Now, you're saying logical inferences are permitted, but are of a different order and use a different meaning of truth. One can't deal with that unless you explain the different order and different truth you mention. What you go on to say amounts to objecting to dictionary definitions. You should have said that at the outset. Dictionary definitions aren't expected to be ideal in all respects. In particular, the principle of parsimony does not apply.

Ok, fair enough. It is sort of vague, but isn't an objection to dictionary definitions (notwithstanding poor-quality dictionaries, of which there are far too many). Let me be more specific. When you allow the supernatural and/or begin with a logical inconsistency, you cannot rely on chains of logical inference. You can still *make* chains of logical inference, but they are no longer compelling. Any result you infer may be countered by the application of forces outside the system that your logic relies on. The reductio ad absurdum of this is last-Thursdayism and the like, but you can't make the subtle ones go away either. It is not (logically) possible to open this door just a crack, and only let in the supernatural/superlogical events and entities that you want, and disallow others. So the net result is that logical inference is no longer true in the sense that it is if you are operating in a consistent system.

Chains of logical inference are always compelling. However, the results needn't be correct if the starting points are incorrect or contradict each other. Reaching obviously incorrect conclusions is a normal way of invalidating the starting points. The poster of the definition (at the top of the page) didn't seem to intend to presume the supernatural or to give starting points that were inconsistent or implied the supernatural.

Ah, but (s)he did presume this, later in the page *explicitly*.

How do you know? My impression is that the person who posted the dictionary-style definition at the top was not, in almost all cases, the same person who commented further down the page.

The original versions of this 'definition' seemed to be looking for a definition of something that was only different from 'random' in supernatural ways.

Can you specify these versions so that I can evaluate this?

Logical inference is *never* compelling if your starting point is false.

No. Logical inference is always compelling by virtue of its logicality. The possibility of an incorrect conclusion (due to an incorrect premise) is irrelevant in assessing whether the chain of inferences is compelling.

I think the problem here is we are talking at cross purposes. If you start with a false premise, your chain of inference is worthless. It doesn't mean that chains of inference are in general worthless. We aren't talking about the same things!

I would ask you to use appropriate words. You originally wrote "never compelling", not "worthless".

I would ask you to read for understanding. I meant that the argument is not compelling; you meant that the logic is compelling.

I would be guessing your meaning if I did that. Your original words were "logical inference", not "the argument".

What you refer to is proof by contradiction, and that opens a rather subtle can of worms, which is why constructive proofs are preferable (some even reject the validity of p-b-c).

Proof by contradiction is accepted by most people. It's often rather impractical to construct a counterexample. It's not a dislike of p-b-c that motivates most constructivists.

The constructivists are irrelevant. Constructive proofs are *preferred* (see above) by everyone except undergrads. The fact that p-b-c is useful means that most people choose to accept it, warts and all.

I don't know what you've got against undergrads, but such a comment is not proper support for your assertion. I am not aware that constructive proofs are generally preferred to proofs by contradiction, and you've given no evidence for such preference. You've referred to warts, but given no references which detail them.

Don't just take my word for it, ask any 100 mathematicians or logicians what they think. I have nothing against undergrads. They do, however, often fall into characteristic traps due to ignorance.

If there's a simple mistake, either explain it properly or cite a textbook or website which does so. My mathematics textbooks use proof by contradiction without comment on its validity.

Additionally, the principle of parsimony most certainly does apply here. This is *not* a dictionary definition, and shows no sign of being useful as one.

You may have misunderstood me. I meant that the principle of parsimony isn't required of dictionary definitions.

Which isn't what we were talking about. Parsimony does apply.

I took the dictionary-style definition at the top to be actually a dictionary definition, and so was commenting on such a definition, which doesn't require parsimony. I was deliberately taking little notice of apparent intentions from later in the page, because I didn't think they were from the same contributor.

Well, you took it wrong. While that was written in the style of a dictionary definition, it isn't one. If you had followed the page, you would see that parsimony was suggested as a reason that the definition has no value.

Clearly, you didn't write the definition, so how do you know it isn't from a dictionary?

So find me the dictionary. If you are using a source, cite it.

It wasn't my definition. I saw that it was set out in the style of a dictionary definition, and could see little point in doing this if it wasn't taken from, or closely based on, an actual dictionary definition. Also, I took "without a cause" to include "without a supernatural cause", suggesting that the poster did not think that everything in nature has a cause. Perhaps the original poster could elaborate.

[Original poster: The dictionary police are going to get me... it's my own personal dictionary. Maybe one day we'll all be using it. Worse new words have been formed. "Without a cause" should (and does in my mind) indeed include "without a supernatural cause."]

I don't claim that the *concepts* are of no utility, but that this definition, for the reasons given on this page, has no utility. Neither you nor any other contributor has demonstrated utility so far as I can see. My claim is that the definition adds nothing useful to random. Care to supply an argument that it does?

You have assumed that the definition is trying to add something to random. It is not. It is its own definition that is similar to random, but is not random. [Who wrote that? Not me. The original poster?] [Yes, the original. Actually, let me say that random can include it, but its definition can (not must) fall inside of random.]

You seem to have misunderstood. I claim that the definition is *not* different from random in any useful way. If you wish to introduce a new terminology, it stands to reason that the new terminology provides something that was not already in the language. Anything else is a barrier to communication. Nothing on this page provides an argument for why 'truerandom' differs from random in a useful way.

I'm not about to supply an argument for adding something to random...

Why not? That is exactly the expectation of anybody introducing a new term: that you can justify and motivate its usage.

For starters, I asked earlier if anyone wanted to come up with a new name for the term. No takers beyond PerfectRandom?. Example: To justify the usage of the word "television", you are not required to prove that it adds something to "vision". You might, but you aren't required.

You aren't helping your position here. The term 'television' became *used* because it had utility. It described something new, and people chose to adopt it. PerfectRandom? doesn't have any usage (outside this page), and hasn't demonstrated any utility.

You haven't given people a chance to adopt it. How many people have to adopt it to prove its utility? How quickly must they adopt it? I already use it. If I find 50 in Sodom, may I keep it? I also refer back to an argument further below... about influence. A difference between random and truerandom, beyond cause, is the ability to influence. A random event (not pseudo) can be influenced, unknowingly if you insist, and still be random as long as you still can't predict it. A truerandom event cannot be influenced, because there are no causes to influence. I may be mistaken in my assertion, but if so, it is due to my lack of knowledge (which can be called ignorance) of dictionary sources that agree with your currently argued view of the definition of random. No dictionary definition of random that I have seen specifies the inability to influence. Please refer to http://www.webster.com as an example of the type of definition of random that I'm referring to. Another argument I presented below is that the above definition is defined in terms of these more common definitions of random. I would still be happy to read and consider a more precise or scientific definition of random.


6) What's the difference between a supernatural event and a random event?

A random event is an event that cannot be predicted either in principle or, with somewhat sloppy usage of 'random', in practice. However, we may know *many* things about this event. For example, to use an imperfect example: if you flip a fair coin, I can't tell you which side will land up. I *can* tell you that on average you will see one half heads, etc. The atomic decay example is more complicated, but captures more of this, since a large number of things are known about decays, none of which help in the prediction of individual events.
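To make the coin example concrete, here is a minimal simulation sketch (plain Python, purely illustrative; the flip count is arbitrary). Each individual flip is unpredictable, yet the aggregate frequency is reliably predictable:

  import random

  flips = [random.choice("HT") for _ in range(10000)]  # no single flip can be called in advance
  print(flips[:10])                                    # e.g. ['H', 'T', 'T', ...] - no usable pattern
  print(flips.count("H") / len(flips))                 # reliably close to 0.5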

On the other hand, a supernatural event is one that by definition defies empirical study. It is an event that somehow lives outside of nature. While no such events have been rigorously demonstrated, they are often used philosophically (not to mention theologically).

Above, (1) says "causality is not related to randomness". However, if there is a cause of an event, the cause can, in principle, provide information which can be used to predict the event, so the event is not random.

No. The above is not a logical conclusion. *Some* causes provide information, some don't.

No need to be enigmatic. Give an example of a cause which can't in principle provide information. If most causes can, in principle, provide information allowing prediction, that's enough to justify linking causality to randomness in a dictionary definition.

If all causes could, in principle, provide this information, that would be one thing. It still wouldn't be a valid argument for adding to the 'dictionary definition' unless its addition adds some utility to the definition (this has not been demonstrated). Furthermore, examples on this page show that this is not in fact true (notwithstanding the confusion below). Coupling causation with randomness adds nothing to the existing definition of random; worse, it fails in physical examples as well as philosophical ones. The principle of parsimony strongly suggests we don't do this.

What are the physical examples you refer to?

Have you actually read the whole page? The characterization below of causality in atomic decays is wrong (by you or someone else). Atomic decay is a good example of exactly the sort of thing in which knowledge of the mechanism does not help a bit.

Above, (4) says "Everything in the natural world has a cause", but current evidence doesn't support this (for example, we expect 10%, say, of some radioactive material to decay in a given time interval, but cannot predict which nuclei will decay; moreover, there doesn't appear to be a cause for certain nuclei to decay rather than others).

You are confused. Just because we can't *predict* the timing of decay events, doesn't mean that there is no cause for the decay. We can know a lot about the process of the decay without it helping us at all in predicting when a decay event will happen. This has to do with apparently fundamental limits on how much we can know about the state of the isotope, not on our understanding of what would happen if we *did* know the state exactly (although we don't have a complete theory there either, at the moment). It means we can't predict when it will happen, nothing more.

No. I didn't claim our inability to predict meant there was no cause. I claimed that current evidence doesn't point to the existence of a cause, which is quite different. You're speculating that we may develop a theory as to the cause, but that doesn't mean there is a cause.

This is also incorrect. There is a theoretical background for what is happening, and there is very good experimental support for this theory. You probably shouldn't make pronouncements like this without some rough idea about the current state of knowledge in the field. I'm not talking about detailed knowledge here, just a layman's acquaintance with developments of the last 50+ years.

Tell me here how the theory accounts for the apparent randomness of the outcome, and cite a layman's introduction on the internet or in print. Can you also cite an article on the internet which supports your view, even if it doesn't go into great detail?

To repeat other parts of this page, the randomness of the outcome is tied to the randomness of the initial conditions. The catch is that the theory suggests the randomness of the initial conditions is fundamental and unavoidable. As stated elsewhere, I don't have web references handy, though I assume some decent ones exist. I have given you some search terms on this page. It is, however, not the sort of thing you can get much of from a page or two. How deep an understanding do you want? Do you have any sort of background in maths? If you want to understand the theory in any non-superficial way, it is going to take some work. You can try the opendirectory: http://dmoz.org/Science/Physics/Quantum_Mechanics/, but I can't really say whether the material there is good or not, as I haven't read it. It at least appears to have some breadth.

You've just stated "the randomness of the initial conditions is fundamental", yet you earlier stated "everything in the natural world has a cause". That would require the initial randomness to have a cause, so it wouldn't be fundamental.

(for dmoz.org see OpenDirectoryProject)


Do scientists believe they will never discover the pattern behind the rate of atomic decay?

[If there's no non-random cause of the decay, why are rates of decay different for different elements?]

The atomic type describes the "shape" of the decay. In other words, knowing what sort of atom you have lets you predict *probabilities* of decay. But this doesn't help you to predict the timing of any particular event, just what a large number of them will do on average.

[How does some nuclear particle of radium, say, know that it should be more likely to decay than an equivalent particle of another element?]

It doesn't "know" anything. But the quark content is different, therefore the physics is different, therefore the probabilities are different. Any sort of complete discussion is way beyond the scope of this page, but can be found in any decent elementary particle physics text.

["The physics is different" admits that something in the structure of the atom significantly affects decay probabilities, and hence the half-life of every nucleus. Hence, I speculate that each decay event is directly caused by normally minor local changes which can't be directly observed. This would imply that the decay rate can be affected by quite simple means, such as heating up the substance. However, if too little heat is applied, no change is observed, just as too little stretching of a rubber band doesn't cause it to snap, so experimental verification might be very difficult. (Until someone makes a color force field generator!)]
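To illustrate the 'shape of the decay' point above, here is a hedged sketch (Python; the isotope names and half-lives are made up, and a simple exponential decay law is assumed). Knowing the half-life pins down the ensemble statistics, but not any individual timing:

  import math
  import random

  def decay_time(half_life):
      # One sample from the exponential distribution fixed by the half-life.
      return random.expovariate(math.log(2) / half_life)

  # Two hypothetical isotopes; half-lives in arbitrary time units.
  for name, half_life in [("isotope A", 1.0), ("isotope B", 5.0)]:
      times = [decay_time(half_life) for _ in range(100000)]
      mean = sum(times) / len(times)  # predictable: about half_life / ln 2
      print(name, "mean lifetime:", round(mean, 3))
      print(name, "five individual decay times:", [round(t, 3) for t in times[:5]])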

I'm not sure where you are going with this. 'Heat' doesn't make sense in the context of a single atom. If what you really meant was to add energy, that is fine, but you can only do this in discrete amounts (hence 'quantum'). Of course the decay is caused by the internal state of the particular atom, but one of the central tenets of the standard theory implies that you will not be able to observe this state in such a way that you can predict the decay of that atom. Decay rates are predictable, because they describe the statistical behavior of an ensemble of such atoms... I also don't understand the 'color force field' comment above. Color is a quantum number carried by quarks.

[Heat was just one example. Another way of adding energy might be easier. If a radioactive substance is bombarded with particles in an attempt to disrupt the nuclei, has any experiment been done to determine whether the bombardment, in addition to or instead of causing some nuclei to split, can affect the rate of conventional (unpredictable) decay for the substance? Quantum theory might be able to suggest the type of bombardment that would be suitable if one had some idea how much energy needed to be transferred. I don't see how else your point about energy quanta is relevant. I'm happy that the states of the interior of a nucleus may be unobservable, but how does that imply that applied forces can't affect the probability of the internal states? I'm happy with the statistical point, as expressed. What I'm suggesting is that if the contents of a nucleus affect the decay rate (as observed), and the internal state is unobservable, we don't know that the decay doesn't have an internal cause, nor can we entirely exclude an external cause. Regarding 'color force', see http://hyperphysics.phy-astr.gsu.edu/hbase/forces/color.html, which states it is the force between quarks (of a baryon).]

Ok, 'color force' is being used as a non-standard name for the strong nuclear force then. It is probably sensible to avoid it, and hence avoid confusion, but it is a reasonable use because of the connection between color and strong interactions. My comment about heat was that it doesn't even make *sense* in terms of a single atom, ok? The point you seem to be missing is this: it doesn't matter if the decay is deterministic given the internal state of the nucleus. It doesn't matter because we can't know that state, and hence can't make predictions based on it. Certainly, we can externally affect the atom, and some of these effects are fairly well understood. What we can't do is affect it in such a way that it helps us predict the timing of decays. This may be a bit difficult to understand without understanding the basic physics, but there is not space (or time) here to go over all of that... The point about the quantized energy states is that the theory (and experiment, for that matter) tell us a lot about what sorts of energy states we can put the atom into (as in your suggestion of how much energy should we provide), but they *explicitly* disallow knowing much about the particulars. I can tell you how to get an atom into an excited state, and I can tell you on average how long it will take until falling back into the ground state, but I am not able to predict exactly when that may occur.

[If we can put an atom into an excited state, we can (or might be able to) put a nucleus into an excited state. Do we know experimentally that such a change doesn't affect its radioactive decay rate? We know the decay rate is affected if we change the number of protons (or neutrons?). If we find a way of affecting the decay rate without changing the numbers of protons and neutrons, we don't need to know the internal state - just the probability that the new decay rate has been established in each nucleus. Sorry about the confusion over 'color force' - edit it to 'strong nuclear force' if you wish.]

Some of the above comments suggest to me an ignorance of atomic theory that will probably get in the way of chasing this example much further... Anyway, as mentioned below, the decay rate is completely irrelevant, as it is predictable. What is unpredictable is the timing of a particular decay. These machinations do not help you with that. Look at it this way. I hand you an isotope, and ask you for a prediction of exactly when it will decay. You can do anything you want to measure it (you can't change the isotope, or we have to start over), and then give me a prediction. The standard model suggests that you cannot do this, period.

[Firstly, I didn't refer to exact prediction (since we can't measure time exactly, exact prediction isn't necessary) - it would suffice to be able to modify certain atoms to make them more likely to decay in some time period. Secondly, we know that this can be accomplished if we change the number of protons in the nuclei, so what fundamental principle excludes the possibility that less drastic changes to the atoms could also affect them enough to change the decay rate? Also, if a proton that's part of a nucleus affects the decay rate, why must other protons have no effect, even if they are made to collide with the nucleus?]

Look, I am not going to recapitulate atomic theory here for you. If you are genuinely interested, read up on it a bit. You are repeating assertions that simply do not make sense, while apparently missing the relevant points, so I am going to leave this after one last try: the precision of the timing doesn't matter. There are many things about atoms that are very predictable, and a few that aren't. You can't change the atom's type without changing the question, so forget that. Part of the fundamental principles is that you simply cannot make arbitrary 'small' changes. Atomic physics does not behave like macroscopic physics, and you will have to get used to that idea if you want to tackle this question seriously. So, we are back to the problem: I hand you an isotope and tell you it is likely (but not certain) to decay in the next hour. You can make any measurements that you want. You *can't* change the isotope, because that does not do what you seem to think it does - it resets the problem. The randomness here is the question of when a decay event will happen when left to its own devices. So anyway, forget about precise time measurements. You can't even predict if it will happen in the first half-hour or the second.
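(An aside that makes the half-hour point concrete, assuming the standard exponential decay law: the distribution of a single decay time T with decay rate \lambda is memoryless,

  P(T > t) = e^{-\lambda t}

  P(T > s + t | T > s) = e^{-\lambda (s + t)} / e^{-\lambda s} = e^{-\lambda t} = P(T > t)

so even watching the atom survive the first half-hour changes nothing about the odds for the second.)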

[I accept that arbitrary changes are not possible. I didn't suggest they are. What you are implying is that although certain changes are possible, the only ones that affect the chance of decay in a particular time interval are changes of type (i.e., change of neutron or proton count). If you mean that, why not just say it? However, isn't this simply how atomic nuclei are observed to behave rather than something deduced from quantum theory?]

Sort of. The source of randomness being discussed is the decay events of a particular isotope. You cannot change the isotope without changing the entire question. The theory does predict all of this, at a level below neutrons and protons...(i.e. quarks)

[Fine, you're saying that the chance that some neutron within the nucleus decays (I have beta decay in mind) in a given time interval is determined by all the quarks of the nucleus. You're then asserting that anything external to the nucleus either has such a limited effect on those quarks that their influence on the probability of decay remains unaltered, or alternatively has such a great effect that the number of neutrons or protons is changed. Since you've given nothing of the underlying theory of this, which you say exists, would you care to point me to a relevant article on the internet?]

I don't know of any web sites. I can provide book titles if that helps. Searches for the 'standard model', 'QED' and 'QCD' should get you started (those last two stand for quantum electrodynamics and quantum chromodynamics, respectively).

[Doesn't quantum theory presuppose ideas that certain events have no specific cause, but instead a probability?]

No. This is not what quantum theory says. ('presuppose' is probably not the right term to use here...). In fact, you can recast quantum theory completely deterministically (see Bohmian mechanics), with the exception that you are still stuck with an inability to know your initial state *except* probabilistically. So you can in principle know all possible interactions, but it still won't help you because you can't know what state you started in. Basically it is a probabilities in, probabilities out problem: it doesn't matter how perfectly you know the inner workings...
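A toy analogy for 'probabilities in, probabilities out' (this is not quantum mechanics, just an illustration; the logistic-map rule and its 3.9 parameter are arbitrary choices): a perfectly deterministic rule, fed an initial state known only as a distribution, yields outputs known only as a distribution.

  import random

  def evolve(state, steps=10):
      # A fixed, fully deterministic rule: the logistic map.
      for _ in range(steps):
          state = 3.9 * state * (1.0 - state)
      return state

  # The rule is known exactly; only the initial state is known probabilistically.
  outcomes = [evolve(random.random()) for _ in range(5)]
  print(outcomes)  # scattered over (0, 1): probabilities in, probabilities out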

[Why should one want to recast it? Doesn't that mean the usual 'cast' suggests, and is compatible with, a non-deterministic view?]

The Copenhagen interpretation of QM leads us to things like 'observers collapsing the wave function', and 'multi-universes', etc. so badly abused by the popular press. It is possible to interpret things in a different way, where none of the indeterminism is inherent to the physics. The indeterminism is still there, but only in the inability to specify initial conditions exactly. Note that I am not advocating this approach, merely telling you that it can be done. If you are interested in this sort of thing, search for "foundations of quantum mechanics" and you should turn up some references.

[How does such inability account for the apparently random results? Moreover, if we can't specify initial conditions exactly, why should we suppose they're not random?] Why not read about it and find out? Argument from ignorance is hardly ever convincing.

You specifically said the indeterminism (in a suitable interpretation) arises solely from uncertainty about the initial conditions. That can't exclude randomness resulting from initial randomness. If the initial conditions can't be determined exactly, and lead to random results, initial randomness can't be excluded. Reading about a deterministic version of QM isn't going to change that. I'm not denying that such a version is possible.

Irrelevant. Initial randomness is meaningless in this context - the point being that you simply cannot know the initial conditions, period. The context was the statement that lack of cause was at the root of the indeterminism. I was pointing out that it doesn't matter if we know the mechanisms *exactly*, the theory suggests that this cannot help in predicting events. This is not a feature of standard or non-standard interpretations, btw.

Not knowing initial conditions doesn't imply that initial randomness is meaningless. Why is it meaningless, as distinct from non-verifiable? As for your reference to the context, there is no previous use of "root of" in the page.

[It seems to me that there's likely a misunderstanding inherent in the above discussion. It has been proven (see EPR, Bell's Theorem, experimentally verified) that there is no hidden local state. If, e.g., radioactive decay event timings depend on initial conditions, then they do so only in the sense that they are "caused" by the state of all other particles (and the vacuum itself) in the entire universe (this is the view taken in Bohmian mechanics). In the standard interpretation, a decay happens at a truly random time, unpredictable even if you know the precise quantum states of the particles in the nucleus, individually and in aggregate.]

[It is definitely not the case that there is some kind of little internal timer in a nucleus that causes it to decay at a particular time. It truly does happen WithoutCause -- or in nonstandard models, like Bohm's, is caused by literally everything else, which amounts to the same thing, since every radioactive decay is involved in the "cause" of all other decays, and if A causes B while B causes A, then that makes the word "cause" pretty meaningless. Thus the preferred standard interpretation: random. -- DougMerritt]


(moved from WhyIamNotConscious)

Quantum leap WITHOUT CAUSE?

Yes.

The PRECISE CAUSE is being struck by a photon that it absorbs, bringing it to a higher energy state.

Yes, that's why electrons go to a higher energy state. However, electrons spontaneously (without cause) return to the lower energy state, releasing a photon in a random (without cause) direction.

Allow me to quote a famous book (is this an "argument by authority" fallacy?):

I hope you are all familiar with the phenomenon that light is partly reflected by some surfaces, such as water. ... Try as we might to invent a reasonable theory that can explain how a photon "makes up its mind" whether to go through glass or bounce back, it is impossible to predict which way a given photon will go. Philosophers have said that if the same circumstance doesn't always predict the same results, predictions are impossible and science will collapse. Here is a circumstance -- identical photons are always coming down in the same direction to the same piece of glass -- that produces different results. We cannot predict whether a given photon will arrive at A or B. All we can predict is that out of 100 photons that come down, an average of 4 will be reflected by the front surface. Does this mean that physics, a science of great exactitude, has been reduced to calculating only the probability of an event, and not predicting exactly what will happen? Yes. That's a retreat, but that's the way it is: Nature permits us to calculate only probabilities. Yet science has not collapsed. While partial reflection by a single surface is a deep mystery and a difficult problem, partial reflection by two or more surfaces is absolutely mind-boggling. ... -- p. 19, QED by RichardFeynman
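Feynman's 4-in-100 figure is easy to rehearse numerically (a toy Bernoulli sketch, not a physical model; the 4% probability is taken from the quote above): no single photon's outcome can be called, while the ensemble rate is steady.

  import random

  def photon_reflected():
      # Identical setup every time; only the 4% probability is fixed.
      return random.random() < 0.04

  reflected = sum(photon_reflected() for _ in range(100))
  print(reflected)  # usually near 4, but any given photon is a toss-up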


A SimpleMinded observation: "bouncing back" or "penetration" might be explained in simple terms: what bounces back has collided with something more massive, and what penetrates either misses or collides with something less massive. To think of a "surface" as continuous and connected, such that it presents to oncomers a barrier which is successful 4 times out of 100, might more SimpleMinded-ly be likened to presenting a "mesh" which is 4% blocking: non-continuous, having "holes" or "passages" which allow, either by size or alignment, the penetration of a large percentage of the "particles", photons, or whatever you want to call the arrivers. This makes sense to me, because even after the surface is penetrated, additional encounters make it so that the deeper one goes, the less penetration there is. "It's dark at the bottom of a deep ocean"; "why does the spoon seem to bend under water when viewed from above at an angle?"; "why does a lens magnify?" -- DonaldNoyes

That's fine if you just want a SimpleMinded intuition on the subject, but FYI it is incorrect physics which is insufficient to explain a large array of phenomena, which is precisely why quantum physics was invented. Indeed, it is even in opposition to the wave theory of optics (which explains some phenomena well), in taking the particle stance (which explains other, different, phenomena well), an opposition which stood for 300 years or so, and was only finally resolved with quantum physics. -- DougMerritt

