Are We Code

Several pages on wiki suggest that people's minds are made of code. This seems highly unlikely, and an even stronger claim than the strongest StrongAi claims. But then again, if people's minds aren't made of code, what are they made of? NeuralNetworks have failed even worse than the non-connectionist versions of AI.

Thesis? Antithesis? Analysis? Synthesis?

Well, our brains (and for that matter our whole bodies) are built from the GeneticCode, and their functioning is also regulated by genes. The problem is the density of connectivity, which we don't have manufacturing processes to replicate; the fact that our limited artificial neuron networks don't seem as impressive as those honed by millions of years of evolution does not negate the fact that they are the basis of mind. Also, artificial ones are components of lots of commercial software (image and handwriting recognition, motion control, etc.), so it is false to say they have failed. We may not have made a complete mind yet, but then we have only just gotten started. When neurons get damaged, the mind stops working properly, so there is a high correlation. But what do we mean by code? Are the laws of physics and chemistry themselves also a kind of code, a simulation continually executing to create the story of the universe?


"Soylent Green is Buggy!"


Similar discussion on BioRobots?. The answer depends upon your WorldView. Actually, the answer just is, but whether you think we are Code or not depends upon your WorldView. -- BrucePennington


This begs the question of "what is code?". (And if it requires the question "what is syntax", oh boy are you in for a load of fun.) Related: DataAndCodeAreTheSameThing.

Code is frozen BusinessRules. Hence, yes, We Are Code.


Then, it begs the question - "Do we have FreeWill?" -- BrucePennington

Do you believe a sufficiently advanced artificial intelligence would have free will?

I, personally, don't believe it matters whether we have free will, only that we act as though we have it. It is physically impossible to prove we have or do not have what I'd consider true 'free will' (I'd deny that free will is compatible with determinism). Since the presence of free will cannot (physically) be proven either way, we're left with a choice on what to believe. I also happen to believe that free will is a necessary and sufficient precondition for moral responsibility, so denying we have it would free us to abdicate all moral responsibility for our own actions. As I think encouraging people to reject all responsibility would result in a horrible society, I think we ought believe and encourage belief in free will. It's rational, but not based in truth.

So, suppose a sufficiently advanced artificial intelligence had behaviors of a complexity beyond prediction by any human. Suppose that this artificial intelligence exhibits complex behavior in part as an emergent result of 'artificial' emotions such as pleasure, pain, curiosity, and shame.

This intelligence would occasionally perform what seem to be irrational acts based upon its 'artificial' emotional state. E.g. being curious, it might skin a cat to learn what is inside. If you then inflict shame, it would probably associate this pain with skinning the cat, but it might associate it instead with getting caught skinning the cat. This intelligence would grow, over time, into something quite distinct from others that started with the exact same seed but encountered very different environments. The choices it makes would be beyond fathoming by any human without an attached debugger... and possibly even by that human (since making sense of the intelligence's knowledge-store would be difficult, too).

If you believe that such an 'artificial' intelligence is capable of becoming 'good' or 'evil', you must also believe that it has free will. If not, you must believe that the robot's choices and actions are a direct consequence of its programming... in which case responsibility for its actions must lie with the programmers (possibly including, in your mind, the robot's 'educators' as one sort of 'programmer').

My own position: the collective programmers and educators can only be responsible for the robot's behavior to the degree they influenced and ought be able to predict its actions. I.e. if an educator teaches this robot to kill squirrels, and the educator knows the robot has trouble distinguishing between squirrels and puppies, then this educator has some responsibility when the robot kills a puppy instead of a squirrel. (In my moral philosophy, this 'responsibility of the educator' applies as much to humans as it does to any other intelligence.) However, if complex behavior beyond prediction of the individual educators and programmers were to emerge as a result of the intelligence synthesizing knowledge, belief, etc., then no 'programmer' can be held responsible for this behavior. As such, responsibility for this unpredicted behavior can lie only with the intelligence itself and its collective environment. To all observations, this is the exact same level of free will as we possess.

Since I hold, for reasons of moral responsibility, that we possess free will, and because the artificial intelligence exhibits, to all observations, the same sort of behavior as do we, I'd also hold that this artificial intelligence possesses free will... for the exact same reasons; the intelligence has progressed to the point that only it can be held morally responsible for its actions, so it -must- have free will.

Thus I wouldn't say that asking 'AreWeCode' "begs the question" as to whether we have FreeWill. Whether we have free will if we are code depends only upon what we consider to be sufficient preconditions for a claim that 'free will' exists. I imagine different people would offer different sets of preconditions. My own are grounded more in the practical and moral than in truth.


I really like your presentation and your view. I too believe that it doesn't matter whether we have free will, only that we act as if we have it. That's my point on the FreeWillIllusion? topic: The meaning of free will is not dependent on its reduction to physics. It can be (and was) defined in terms of observable psychological and/or social categories.

I also like your interpretation (or rather explanation) of emotions. But I feel that there is something missing.

By starting with pleasure and pain and later using terms like goal and importance, you seem to imply that these two emotions act as the axioms of the intelligence's goal seeking. I'm not sure whether you intend it that way, and if you do I don't fully agree. In any case, you should make the connection between emotions and goals and their importance more clear.

On WhatIsIntelligence, I presented a (kind of) definition of intelligence that doesn't explicitly use the concept of emotions, but entirely relies on a concept of a goal to ensure active behavior. I'm not sure whether something like emotion is strictly necessary, or whether emotional behavior necessarily emerges from goals, or whether to 'implement' goals as something emotion-like is the only means to do so.

Going on to society, I don't think discarding FreeWill implies chaos. I think there could be other means of ensuring social stability. It just depends on the goals the people actually follow. Problems ensue only if people consciously consider alternatives.

So to finish this, I agree that the assumption of FreeWill is needed for a conscious society.

-- GunnarZarncke

Intelligence and a goal-oriented behavior system that leverages this intelligence are necessary (but insufficient) preconditions for emotions. To make use of pleasure and pain, you must have a behavior-system that makes a goal of seeking pleasure and avoiding pain... and you need an intelligence that tells you how to accomplish this goal. Many of the higher-level emotions tie directly into both the intelligence and the behavior system. To feel guilt, an intelligence system needs to automatically report pain upon producing or executing a plan that includes actions associated with memories of shame. To feel sympathy, the system needs to automatically report sympathetic pleasure or pain based upon recognition of the same in others.
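
A minimal sketch of those two hooks, assuming emotions are just signals computed over plans and perceptions (the names guilt, sympathy, shame_memories, and read_emotion are invented for the example, not any existing API):

 # Illustrative sketch only: guilt and sympathy as automatic pain/pleasure reports.
 def guilt(plan, shame_memories):
     """Report pain when a plan contains actions associated with memories of shame."""
     return sum(shame_memories.get(action, 0.0) for action in plan)

 def sympathy(observed_others, read_emotion):
     """Report sympathetic pleasure (positive) or pain (negative) upon
     recognizing the same emotion in others."""
     return sum(read_emotion(other) for other in observed_others)

 # e.g. a plan that includes "skin the cat" hurts once that action is tied to shame:
 print(guilt(["open the crate", "skin the cat"], {"skin the cat": 0.8}))   # 0.8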

My understanding of intelligence is that it is the ability to answer how questions... whereas I ascribe to wisdom the ability to answer why questions, and to knowledge the ability to answer what questions (what happened, what will happen). I wish to avoid a LaynesLaw debate, so to be clear: these are 'my' definitions, used because they are somewhat more precise than the common understanding of 'intelligence', 'wisdom', and 'knowledge' in general. An artificial intelligence, in my mind, must possess at least a general intelligence. That is, it must be capable of learning to answer almost any sort of 'how' question. I tend to call AIs dedicated to a topic 'savants'. Anything that is capable of general intelligence can cheaply become capable of general wisdom, including common sense, with only slight modifications to the inference engine (WhyIsHowBackwards). I'll go into slightly more detail before touching on how emotions are attached... and why I brought them into the discussion.

Anyhow, that's what I mean by intelligence. You should note that intelligence does not presuppose that the how question has actually been formulated or that a motivation for answering the question exists. Intelligence does not presuppose motivation! The most intelligent man in the world might want nothing more than a case of beer, a bucket of popcorn, and a marathon of CSI. Answering how, and the ability to do so, simply doesn't indicate any volition to actually perform.

Thus goals must arise from some other source. For an intelligence with no needs and no emotions, they must be introduced artificially... externally, from someplace or someone else. Goals are handed to the behavior system, which then queries how of its attached intelligence, possibly asking for a certain level of detail, and offering certain constraints (no hitting, no biting, minimize cost, maximize future flexibility if things change, etc.). If the intelligence-system can, it will produce a plan at the requested level of detail consisting of sub-goals (over the hill, through the woods... partially ordered) that it is confident can be accomplished. Otherwise, the intelligence says 'haven't a clue'. If offered a plan whose sub-goals are still too high-level, the behavior-system again asks how for the foremost sub-goal, repeating the question until it receives details at a low enough level to send signals to actuators in a particular order... i.e. a 'program'. Then it begins to actualize the plan. Meanwhile, the behavior-system must continue observing the world and checking to see if the current plan must be revised (which can be optimized: test 'does this plan still approach that goal' or, better, set up the knowledge-system to automatically recognize and report any discovered patterns that counter-indicate current, future, and ongoing plans).
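
A rough sketch of that loop, under the assumption that the attached intelligence exposes a how() query and a check that a plan still approaches its goal (every name here is illustrative, not a reference to any real system):

 # Illustrative sketch of the behavior-system loop described above.
 # 'intelligence', 'actuators', and 'observe' are assumed interfaces:
 # intelligence.how(goal, constraints) returns a partially ordered plan
 # of sub-goals, or None when it hasn't a clue.
 def pursue(goal, intelligence, constraints, actuators, observe):
     plan = intelligence.how(goal, constraints)          # ask "how?"
     if plan is None:
         return "haven't a clue"
     while plan:
         step = plan[0]
         if step.is_primitive():                         # low enough level to actuate
             actuators.execute(step)                     # part of the 'program'
             plan.pop(0)
         else:                                           # too high-level: ask "how?" again
             detail = intelligence.how(step, constraints)
             if detail is None:
                 return "haven't a clue"
             plan[0:1] = detail                          # expand the foremost sub-goal
         world = observe()                               # keep watching the world
         if not intelligence.still_approaches(plan, goal, world):
             plan = intelligence.how(goal, constraints)  # revise the plan
             if plan is None:
                 return "haven't a clue"
     return "done"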

Emotions are intrinsic behavior-adjusters... possession of emotions causes or encourages certain behaviors. The tasking of the artificial or synthetic emotions for an artificial intelligence is to produce certain goals continuously and internally to the behavior system. A primitive pair of these goals are 'seek pleasure' and 'avoid pain'. The presence of these emotions can be used to provide an automatic, inferred set of constraints and heuristics on one's own behavior that are then leveraged effectively by the intelligence system; the heuristics introduced by emotions allow one to effectively narrow down choices, thus reducing the search space and computational cost (you'll never accuse a person driven by emotion of thinking too much). The use of emotions to introduce useful desires can do much to ease the burden on commanders of the artificial-intelligence systems because they'll automatically handle the small stuff... e.g. ensuring that they remain well-fueled, pumping time into data-processing and study during off-hours, choosing solutions that avoid damage to their systems, breaking pesky rules to save lives when the action is necessary (based on irritation), following rules like not harming others or not telling others' secrets because breaking them would produce associated 'guilt', etc.
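
Read as code, that amounts to the emotions continuously injecting standing goals into the behavior system and supplying a heuristic cost that prunes the plan search. A sketch, with every name and weight invented for the example:

 # Illustrative sketch: emotions as intrinsic behavior-adjusters.
 def emotional_goals(state):
     """Goals produced continuously and internally to the behavior system."""
     goals = []
     if state["fuel"] < 0.2:
         goals.append("refuel")                      # avoid the pain of running dry
     if state["off_duty"]:
         goals.append("process and study data")      # curiosity: pleasure of learning
     return goals

 def emotional_cost(action, state):
     """Heuristic used by the intelligence system to narrow down choices."""
     cost = action["base_cost"]
     cost += 10.0 * action["damage_to_self"]                 # pain
     cost += 20.0 * state["guilt"].get(action["name"], 0.0)  # guilt
     cost -= 5.0 * action["pleasure_caused_in_others"]       # sympathy
     return cost

 # The behavior system would append emotional_goals(state) to whatever goals
 # its commanders assign, and rank candidate actions by emotional_cost rather
 # than weighing every option from scratch.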

Other emotions like admiration (social-hierarchy), envy & jealousy (which arise from desire for position in social hierarchy), anger (and rage, and wrath; big cousins of lingering irritation), lust (arises from sexual pleasure and desire to rut), disgust (and its darker cousin, hatred), and sloth... probably ought to be avoided. Some would be actively harmful to human society if present in millions of robots (envy, jealousy, rage, hatred, sadism) whilst others are simply not useful to humans (admiration, lust, sloth). And no need to mention sex-bots... just give them pride in pleasuring others that have requested it; lust would encourage robot-rapists, which really ought to be avoided. (Every technology is turned inevitably toward sex...).

Goals normally provided to the system externally can still be provided that way. However, the intelligence system will still choose paths developed in the long training phase that avoid sources of pain and seek sources of pleasure. It will just be 'ingrained' because the intelligence system has long since learned what works. Further, there is no need to turn the emotions off during any of this... which allows the intelligence system to continue pursuing knowledge (out of curiosity) and continue seeking to produce happiness in others (emerging from sympathy) even as it goes about its assigned tasks.

The pitfall of the emotions-based approach is that sometimes apparent irrationality will result. For example, if the pleasure associated with pride is able to overcome the pain of guilt or physical destruction of one's own components, one might see robots that act quite recklessly in pursuit of their goals... at least until they've grown wise and intelligent enough to recognize more 'efficient' (less painful) paths to the same goal. Similarly, curiosity might cause the intelligence to skin that cat before it has learned to feel guilty about harming animals (or causing pain in others in general). I mostly introduced them above because of this complexity. Slightly different balances of emotional strengths can result in complex, emergent behavior that is very difficult to predict and seemingly irrational (at least without a debugger... see DatingIsHarderThanProgramming).
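
A toy illustration of that sensitivity, with made-up actions and weights: two agents that differ only in how heavily guilt weighs against pride and curiosity end up making opposite choices.

 # Toy example: slightly different emotional weightings flip the chosen action.
 ACTIONS = {
     "skin the cat to see inside": {"pride": 0.0, "curiosity": 0.9, "guilt": 0.8},
     "look it up instead":         {"pride": 0.0, "curiosity": 0.5, "guilt": 0.0},
     "rush in to finish the job":  {"pride": 0.9, "curiosity": 0.0, "guilt": 0.3},
     "wait for a safer approach":  {"pride": 0.4, "curiosity": 0.0, "guilt": 0.0},
 }

 def choose(weights):
     def appeal(feelings):
         return (weights["pride"] * feelings["pride"]
                 + weights["curiosity"] * feelings["curiosity"]
                 - weights["guilt"] * feelings["guilt"])
     return max(ACTIONS, key=lambda a: appeal(ACTIONS[a]))

 print(choose({"pride": 1.0, "curiosity": 1.0, "guilt": 1.5}))  # "look it up instead"
 print(choose({"pride": 1.0, "curiosity": 1.0, "guilt": 0.9}))  # "rush in to finish the job"

With dozens of interacting emotional weights and a learned knowledge-store in the loop, predicting which way such a system will tip becomes exactly the problem described above.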

If there is no such thing as free will, then moral collapse is an impossibility because you are not ultimately responsible for your own behavior, but that doesn't mean behavior would change any... in fact, if there is no free will, then we're all acting exactly as we are now without any choice on our part. However, if there is free will, and everyone rejects its presence, that will hardly encourage action to change things "for the better". You and everyone else would believe that there isn't a "right" or "wrong" thing to do. There are just things to do.

And that's what would be a horrible society... not what you're thinking of, with everyone going around and killing each other.

Even in such extreme circumstances, there would be little or no benefit in becoming such a killer, so it wouldn't happen.

There are no right or wrong things to do. There are just things to do.

The point is that society mediates what things it wants people to do. Society dictates what is wrong and what is right tacitly or explicitly. You quote the words yet you do not seem to want to accept the relativism you are inherently supporting - what is "for the better" is entirely up for grabs unless one can point to an oracle that will provide it.

Which basically means that the "horrible society" you're describing is one in which the algorithm is equivalent but you just don't like its style.

Either way, unless free will is posited as some non-computational choice function working by some magic, then clearly choices are the result of either deterministic or non-deterministic processes in the brain. As such, the best free will can do is say that you have some random elements to your decision making process but most free-will people don't seem to like that much more than the deterministic undermining of free will.

Anyway, the main point is that I would rather call a spade a spade and not worry too much if my chosen ontology is going to cause social collapse. We can hardly make any accurate assessment of what we are if we're going to say, "we are definitely X because I don't like the ideas implied by not X"; that is not really helpful.

-- JamesHollidge

You don't need to be a moral absolutist to believe in morality. A society that believed there was no such thing as right or wrong would have no reason, no justification, to punish those our society deems "criminal". Criminals are just people doing things.

You are entirely wrong in equating non-deterministic to random simply because you cannot imagine any other possibility. It's a common mistake in young philosophers. And rationality doesn't require truth, especially when truth is provably impossible to obtain... it only requires reason.


I never said you did, I just recognize that fundamentally what is "right" and "wrong" can only be framed when one decides what the goals are. The simple fact is that conflicting goals exist. If everyone agreed on what the goals of a society should be, there would be no politics.

As such, what is criminal is what acts against the goals of a society. There is no need to frame justification in terms of the sort of responsibility one might confer on an entity possessing free will - you simply seek to correct behaviour that works against the goals of society. It has always been this way. Society fosters behaviour that contributes to the goals of society and punishes behaviour that does not.

Non-determinism does equate to randomness - it is not a matter of being unable to imagine any other possibility, it is simply an argument from informational entropy. Other possibilities are excluded.

-- JamesHollidge

Non-determinism is not randomness. Random will cannot be free will; free will implies control. Both true randomness and non-determinism have some similar properties in that neither allows one to make absolute prediction of the future while in the possession of absolute knowledge of the past and present... or, more accurately, neither allows absolute prediction of any pattern in any temporal-spatial volume given absolute knowledge of every other temporal-spatial volume. However, randomness cannot admit to individual free will. If you believe one's volition is determined at random, you have no ground in saying that the individual has any individual responsibility for his or her own actions. Same goes if there are 'mystical, supernatural agents' that travel about and whimsically affect the volition of individuals. Randomness is, properly, a subset of non-determinism. The two are not the same concept at all.

If no free will exists, or you don't admit to its existence, then you are irrational in blaming any individual for having goals that might be contrary to your own or those of society. You'd still punish them as individuals, and you'd have reason to do so... just not a moral reason. I just recognize that fundamentally what is "right" and "wrong" can only be framed when one decides what the goals are. - "Right" and "wrong" in the sense of accomplishing or being contrary to certain goals is NOT necessarily a moral right and wrong. To believe so requires that you believe that morality is tied to the goals of individuals. And, of course, that requires that you decide which individuals have the ability to decide what is right and wrong. Every individual? That would admit to the fact that the individual you labeled 'criminal' was doing something right simply because the criminal decided he was. Society at large? I'd disagree. If society has a tradition of enslaving blacks and disenfranchising women, or killing Jews, you'd be saying that they are "right" to do so... believing that society should be changed would be "wrong". That's a rather ugly world view. I tend more towards a belief in morality grounded in how actions affect the human condition... However, this page is not the right place to discuss this. I suggest moving it elsewhere.


Sorry, but you can't escape the problems of FreeWill's mechanism by invoking control - you just beg the rather obvious question of what it means to 'control'. This leads us to ask for a mechanism for control, which is just going to lead to control being some sort of computation or requiring some other 'free' mechanism which is not helping the argument for the 'free' part of free will. Now you may be getting hung up on any number of the other definitions of random but when it comes to non-determinism it's pretty clear - it is not simply that you have some mechanisms with branching outcomes but that you are literally unable to make anything other than a stochastic prediction. That's where randomness comes in - if it is not random then the informational entropy of the mechanism will be strictly less than 1 and there will be a bias. True randomness has no bias.

You see the problem is clear. Randomness cannot allow free will. Predictability cannot allow free will. You are left with nothing to hang free will on at all. Your problem is that you have decided to equate responsibility with making a decision using a process that is not computational in nature. You are trying to invoke some kind of 'soul' here that truly does work by magic in somehow allowing us some sort of ability to make choices in which we have not previously made any sort of calculation, evaluation and selection based on prior knowledge, personal bias and reasoning heuristics. As such you set up a standard for responsibility that cannot be met. Am I responsible for being born? For my DNA? Etc... continually able to shift the blame for all the various variables that don't let me be 'free'. Not to mention of course it makes the mistaken assumption that the human self is some sort of unified contiguous thing anyway.

I don't need to believe that morality is tied to individual goals - that's exactly what the physical evidence suggests. We have brain regions for moral reasoning. As to who gets to impose their morality, well, that's not a matter of sitting about and thinking about it; that's just about whoever can manage to organize it so that they have the ability to impose it.

And you miss the point, it doesn't matter if I think it is wrong to enslave black people, keep women down or kill Jews - if someone else manages to construct a society where that is right then it becomes right for that society. It's not about whether or not I think it's right to change another society - the simple fact is that if I can effect that change then I can effect that change. The point is that just because you don't like the philosophical ramifications doesn't mean it is not true. Ugliness is a matter of aesthetics but ugliness does not preclude success. That is just a fact of life. What the Nazis managed to achieve is both awe-inspiring and an utterly disturbing example of what is possible when you can focus the efforts of a nation. Evolution doesn't care about who is "right" and "wrong" - just who is "dead" and "alive". It is the tyranny of genetics - there is far more than one way for the game to be played.

As to it being inappropriate, I don't think so - fundamentally you are invoking free will to resist the idea of a world that is fundamentally computational and as such the premise of this page AreWeCode? Free will asks us to believe that decisions, whilst being based on past experience, being generated by what we know of our options, being mediated by our genetic prejudices and being calculated by our neurons, nonetheless, can transcend the physical processes and somehow allow the illusion of the self to make a 'free' selection. The idea of there being a 'free' selection is an inherently difficult one to support because any analysis will show so many clear constraints.

-- JamesHollidge

I do not need to answer the question of free will's mechanism. You don't need to prove how something might exist to prove it exists. Should I demand the same of gravity, space, and time? - yes... they don't exist until you find the mechanism! Seriously, I'm not entirely convinced gravity, space, and time do exist rather than being tricks of perception. One often mentions the Illusion of Free Will. Does that exist, or is it another trick of perception? Either way, accepting it exists is of some value when taking certain lines of thought and making certain decisions.

I, of course, have not argued that free will exists... only that, if it does, a free will mechanism must (necessarily) also exist. Deductively, the existence of a mechanism of non-determinism that is non-random is proven if free will exists. One directly follows the other. Your or my inability to understand the mechanism is entirely immaterial, just like your or my inability to understand a mechanism for time, space, or gravity. Your belief that all laws of a universe must be deterministic is, itself, a flawed one... which is probably blinding you to other possibilities. What I have argued is that even though disproving or proving free will is impossible based on inductive evidence, it is quite rational to act as though it exists... i.e. to accept it as an axiom for choosing one's actions. If you accept it as an axiom, you MUST accept (as a rational entity) that there is a mechanism that is both non-random and non-deterministic, even if you do not know what this mechanism is. QED.

You are left with nothing to hang free will on at all. - Free will does not need a peg. Free will is, itself, a foundation. Consider that I'm also an epistemological solipsist; the only things I accept 100% are my perceptions, my thoughts, and my own attempts to act. I think therefore I am. Where would a person like me hang such an idea as "a world exists that obeys physical laws"? I cannot. I make that an axiomatic foundation for other ideas and organizing perception, and might accept it when deciding how I shall attempt to act, but it isn't "hanging on" anything at all. I do not know how such a world could exist, and I cannot prove that it exists... I just accept that it does for purposes of deciding action upon it. I.e. I act as though it exists.

Your problem is that you have decided to equate responsibility with making a decision using a process that is not computational in nature. If our decisions are determined algorithmically based on our inputs, we have no more moral responsibility for our decisions than does a hydrogen molecule drifting through space, or an asteroid doing the same, under precise physical laws. You could not call such an asteroid "evil" if it were to crash into our planet and fry all human life out of existence, and you could not call Stalin evil for nuking his own people. Moral responsibility requires a non-deterministic decision-making process. IF you believe in moral responsibility AND you are rational, THEN you believe your decisions are non-deterministic. I happen to believe in moral responsibility. The rest follows.

And you miss the point, it doesn't matter if I think it is wrong to enslave black people, keep women down or kill Jews - if someone else manages to construct a society where that is right then it becomes right for that society. ... and you'd be wrong for thinking contrariwise, yes? No. You'd not be wrong, and society would not be right. If you don't believe in Free Will, you cannot rationally believe that "morality" is meaningful. Nothing is right for that society. Nothing is wrong for that society. There is only the current state of affairs (or perception) and judging anything as right or wrong is both pointless and immaterial. Perhaps that is what you believe. Why you'd bother trying to change someone else's belief is beyond me if this is what you believe... after all, it's rather hypocritical. But, perhaps, it was just what your brain made you do. One could ask what you'd think you'd gain from the attempt... but that, too, wouldn't matter if you have no free will.

This discussion is still off-topic for this page. I did not invoke free will as existing as a counterargument to "AreWeCode"... check my original statement: it doesn't matter whether it exists, only that we act as though it does.

You don't need to prove how something might exist to prove it exists. Should I demand the same of gravity, space, and time? - yes... they don't exist until you find the mechanism!

That's exactly the point of epistemology - until you've proven something exists it doesn't. Before we had mathematics underlying the description, concepts such as space and time were fuzzy and the notion of 'something that holds us down' was mysterious. Having explanations and having formal theories are entirely different things.

Either way, accepting it exists is of some value when taking certain lines of thought and making certain decisions.

The problem is that it is an artificial value. We can sit around making 'what if' proposals about how we should interpret reality if we invoke certain entities - I could invoke angels and demons playing a cosmic chess game with humans. Invoking free will as a magic explanation for human choice doesn't fare much better than that if you can't explain how it works.

Deductively, the existence of a mechanism of non-determinism that is non-random is proven if free will exists.

Such a thing is not mathematically constructible from the arguments about the nature of randomness. Non-determinism means that you cannot make any predictions about the next event based on the former events. This is exactly what randomness is all about. Arguing that people make decisions in some way that is not determined by either their past or their genetics is clearly ludicrous - the former provides the context for which decisions can be formulated and judged against prior experience, the latter provides a bias to certain decision classes. The only freedom is to be unpredictable, non-deterministic, and that requires randomness. It's just the law of the excluded middle - either something is predictable or it is not; there is no middle that allows for anything that is not on that scale.

Your belief that all laws of a universe must be deterministic is, itself, a flawed one...

At no point did I say this was my belief.

If you accept it as an axiom, you MUST accept (as a rational entity) that there is a mechanism that is both non-random and non-deterministic, even if you do not know what this mechanism is. QED.

This is like saying I must accept there is a number between 0.9 recurring and 1. If I do so, I have an unsound system.

Consider that I'm also an epistemological solipsist; the only things I accept 100% are my perceptions, my thoughts, and my own attempts to act.

That is a mistake in the face of the known problems with human cognition.

If our decisions are determined algorithmically based on our inputs, we have no more moral responsibility for our decisions than does a hydrogen molecule drifting through space, or an asteroid doing the same, under precise physical laws.

If you choose to define moral responsibility as requiring free will, it does. If responsibility is not contingent on 'free' choice - and the concept of responsibility does not require it - then it is not. Responsibility only requires labelling the source of the action, not the properties of the instigator.

As such a moral agent is responsible for moral actions by virtue of being labelled a moral agent. Don't forget that there's still a great deal of humanity that assigns moral value to natural disasters - for some people, the idea is that even hydrogen molecules may be divinely guided as the ultimate expression of the ultimate moral agent.

... and you'd be wrong for thinking contrariwise, yes?

To them I would. But then that's the point - free will forces you to pretend that everyone has the opportunity to understand moral decisions in the same way and hence choose the 'good' options over the 'bad'. I'm sorry to have to tell you this: people exist who literally cannot do this. Either they lack the intelligence to understand the options or they lack the emotional reasoning to make a determination as to what is good or evil.

Nothing is right for that society. Nothing is wrong for that society.

No, what is right or wrong is ultimately what that society has chosen to view as such based on the abstract goals everyone's agreed to share. Within that society what is right and wrong will be clear. Only survival will say whether or not those were good or bad choices and it is an unfortunate fact of life that societies that employ strategies that could be called 'inhumane' can be successful.

People would make poor social animals if they really had free will - being in a society requires that you can be influenced to some significant degree by a larger group of people without which cohesion is impossible.

As such the real question is not what I'd have to gain by trying to influence what would clearly have to be a very influenceable thing in a world without free will - after all, if I can know how decision making works I can alter the process - the question is how do you influence anyone in a world with free will?

-- JamesHollidge

You're wrong about epistemology, to start with. Your argument that "Gravity didn't exist before Aristotle, and Time still doesn't exist, because we didn't or don't have mathematics underlying them" is ridiculous. Your insistence on an explanation is immaterial, despite your desire otherwise. According to your statement: You don't think, and you don't have emotions... or they're just fuzzy, nonsensical things until you can prove how they exist. I'm left mostly thinking: "What arrogance this man has to believe human understanding is necessary for existence!"

Free will requires non-determinism, but if one's decisions were entirely unconstrained there'd simply be no intelligence behind them (i.e. with people seemingly randomly choosing to walk through walls). Free will does not require unconstrained will. It only requires that one's will be free when it comes to a set of possible actions when the volition to act arises. And to stem an argument I've seen before: Free will is independent of whatever mechanism causes a volition to act, though volition is necessary to have will (free or otherwise).

You're wrong about epistemology, to start with. Your argument that "Gravity didn't exist before Aristotle, and Time still doesn't exist, because we didn't or don't have mathematics underlying them" is ridiculous.

Not my argument, but I am not surprised you didn't get it. Until we invented these terms, we couldn't talk about it at all. Until we refined these terms, we didn't know what we were talking about anyway. The underlying phenomenon responsible for gravity, one assumes, has always existed. The ability to talk about it in a sensible way required IsaacNewton.

According to your statement: You don't think, and you don't have emotions... or they're just fuzzy, nonsensical things until you can prove how they exist.

No, according to my statement we have underlying phenomenal causes for our operation, but until we label them with accurate labels based upon observation we don't know what we are really talking about - and if we labelled them in a different, equivalent way, then we could talk about them another way. Abstractions don't really exist. Thought and emotions are abstractions, not real things - they are the aggregate terms we give to the operation of more basic chemical and physical principles.

If you adjusted your argument to include 'you don't have a soul, you don't think and you don't have emotions', would I still be ridiculous? Would a soul not be exactly a fuzzy, nonsensical thing until you can prove how it exists? Many people tell me I am stupid for not believing humans have souls but then they can't tell me exactly what a soul is supposed to be or what it does.

It may seem anal, but you have got to realize that English is an arbitrary set of labels. It is no good starting from an assumption that it is 'obvious' what a soul does, or thinking does or emotions do - first you must prove the obvious.

I'm left mostly thinking: "What arrogance this man has to believe human understanding is necessary for existence!"

I am not an Idealist. What I am pointing out is that you must be very clear about what you mean when you use words - you cannot assume anything is obvious because the construction of human language imposes a great number of cognitive assumptions in the user's brain just because of the way it is acquired. It is not sufficient when one is attempting precision to assume that your understanding of the words is the same as mine.

It only requires that one's will be free when it comes to a set of possible actions when the volition to act arises. And to stem an argument I've seen before: Free will is independent of whatever mechanism causes a volition to act, though volition is necessary to have will (free or otherwise).

Which doesn't mean a damn thing at all - if free will is independent of whatever mechanism actually goes and selects an action, then it does in fact have nothing to do with anything. If free will is the mechanism and it's deterministic then it's not free will. If free will is the mechanism and it's non-deterministic (i.e. random) then it's not free will. If free will is the mechanism and it's non-deterministic and non-random then it is the mechanism you need, but no such mechanism can exist.

If you insist on that definition of free will, you are not going to get anywhere.

-- JamesHollidge

Abstractions don't really exist. - Almost every word is an abstraction. Horse, Unicorn, Red, Anger... every single noun, every single verb. The only exceptions are names, which are identifiers rather than abstractions. When you say a Horse exists, you mean that at least one instance of something that meets the constraints defined by "horse" has existence. Thus to say that "Abstractions don't really exist" is incorrect.

If an abstraction is vague, the number of possible instantiations for its existence can only increase. E.g. "something" is an abstraction, and you can prove that "something" exists by pointing to any "thing" you care to - a cup, a horse, a chair, etc. More specific abstractions introduce more constraints. E.g. any instantiation of 'a unicorn' must have unicorn properties, and any instantiation of 'a chair' must have chair properties. You say "Until we refined these terms we didn't know what we were talking about anyway.". I'd disagree on one level and agree on another: an unrefined or unspecific term can mean exactly what you want it to mean if the vagueness, that lack of constraint, is intentional. If the vagueness is due to a lack of ability to specify constraint, to put into words your intended meaning, then the term needs further refinement.

You are wrong about Newton. Newton did nothing to further refine the definition of gravity... he simply further described gravity. If the universe had a different equation for gravity than product of masses and inverse square of distance, Newton would have described gravity differently. Descriptions are incident, not intrinsic. Even before Newton, people knew exactly what was meant by the word "gravity", and gravity certainly existed.

I think you should figure out what you mean by "exists" and "prove something exists" before you even bother dealing with horses, chairs, or souls. Can you prove the world exists, that the things you perceive "exist", or must you first assume that as an axiom upon which to found other beliefs? To be sure, it's the latter; you can't PROVE, deductively, that the world outside you "exists" in any meaningful sense of the term. I offer that as a challenge if you believe the contrary.

you have got to realize that English is an arbitrary set of labels - I do realize this, perhaps better than you're willing to accept.

I am not an Idealist. (What the heck does idealism have to do with existence and communication among humans?)

"Free will independent of whatever the mechanism causes a volition to act, though volition is necessary to have will (free or otherwise)." - If free will is independent of whatever the mechanism actually goes and selects an action(...) - you misunderstand. I said that free will is independent of volition to act. I.e. you don't need to make a decision to make a decision. The requirement to act, to make a decision, can be forced upon you without violating free will. I said nothing of the mechanism to select an action once the volition to act occurs.

I will not pursue the idea of non-determinism vs. non-random further. You've closed your mind to that subject without even touching on such things as "mechanism for randomness" and other such problems your notions introduce. I don't believe discussion from me will take you any further... but I encourage you to explore it further on your own.

In any case, whether or not Free Will exists, I still think that teaching children and everyone that "every decision you or anyone else makes is determined and fated or otherwise outside your control" is an utterly stupid thing to do. Even if free will doesn't exist, it is only rational to act as though it does. I bet you do it, too... act as though it does, even if you don't believe in it.

Thus to say that "Abstractions don't really exist" is incorrect. The horse is not its abstraction. That is the point - abstractions require that we throw away information about the thing we are really talking about. [Unless you talk about abstractions. The loss of information isn't often pertinent... accuracy and precision can remain 100% in the domain of abstractions themselves. It is in moving towards the abstraction that information is lost... in particular, irrelevant information, since its removal is what defines useful abstraction. In any case, the abstraction still exists if it can be instantiated. "I perceived a horse" is true if your perceptions match the horse abstraction (whether or not a horse was there to be perceived).] This is especially evident when we find edge cases. Since we're talking biology, one very pertinent example is the problem with defining when an animal is part of one species and when it is not. We have ring species and all other sorts of complications that can make our taxonomical categorisations messy. [Hierarchical taxonomies are a mistake in some domains. When they are used in an inappropriate domain, you get complications. There are other options.]

If the vagueness is due to a lack of ability to specify constraint, to put into words your intended meaning, then the term needs further refinement. Exactly my point about free will. You want me to accept a sloppy definition for an emotional, not rational reason. I mean come on, you use a 'think of the children' argument! [All you need to know about the definition of free will for the argument thus far is: free will is incompatible with determinism, and free will cannot be truly random. I'm not attempting to define it here because I'm making no claim that free will actually exists... but, if it exists, then it definitely has those properties (among others). You ask me to tell you the color of the unicorn's hooves... a ridiculous request when discussing a hypothetical unicorn.]

Newton did nothing to further refine the definition of gravity... he simply further described gravity. Erm, before Newton came along the concept was fuzzy at best. He produced a seminal mathematical work providing a concise and accurate description of its operation; by saying he 'simply described' gravity, you certainly devalue his contribution to science. [Wrong. Before Newton came along, the concept of Gravity was: "whatever it is that holds me to the ground and makes apples fall". AFTER Newton came along, the concept of Gravity was: "whatever it is that holds me to the ground and makes apples fall." Newton didn't redefine the term... he simply provided an "It seems Gravity happens to be a Force (according to definition X) and has a magnitude calculated by equation Y". Newton described Gravity. That is NOT a small contribution, but it is also NOT a refinement of the original definition. The features of gravity are incident upon it; if Newton had described Gravity wrongly, he'd be wrong... he'd have not refined it out of existence.]

To be sure, it's the latter; you can't PROVE, deductively, that the world outside you "exists" in any meaningful sense of the term. I offer that as a challenge if you believe the contrary.

It is not helpful to presume that the world only exists in my head. I know my interpretation of it does but I am assuming there is a knowable world outside of that interpretation. [Who said anything about the world existing in your head? I'm talking about existence, period. Anyhow, it's your assumption I'm talking about. You can't prove that assumption.]

In any case, whether or not Free Will exists, I still think that teaching children and everyone that "every decision you or anyone else makes is determined and fated or otherwise outside your control" is an utterly stupid thing to do.

That opinion is not well substantiated - you can't really tell us what the tangible effects of rejecting free will if it actually exists would be - you just presume it would be an 'ugly society' - and you agree with me that such a rejection of free will would not affect anything if free will does not actually exist. [If free will doesn't actually exist, then you aren't free to accept or reject it. Everything will be as it is, and argument is a rather pointless exercise. In the presence of free will, its rejection can only ever be a psychological burden to useful change. In its presence, embracing it, especially having all of society embrace it, will have sociological effects. It will lead to something I'd consider 'ugly'.]

You've closed your mind to that subject without even touching on such things as "mechanism for randomness" and other such problems your notions introduce.

Hardly. I could accuse you of being closed-minded to free will and frankly there's a lot more justification for it - you won't abandon it as a concept even if it's not viable because for you the importance of the concept goes beyond whether or not it is true. [laughs. I offer free will as a hypothetical. You reject it as a contradiction without properly proving it. I'll let you know this much: if you can provide a mechanism for the existence of randomness in this universe, then you can just as easily provide a mechanism for free will. I've explored this issue before. Your rejection of one and not the other, without exploring the two and your own reasons for rejecting one but not the other, is why I consider your mind closed to the subject.] Sorry, [again...] but to me this seems exactly the same as presuming the existence of some deity so you don't feel sad about the universe not really caring about us. [That is a reason to accept a belief in god. Again, rationality has nothing to do with truth, only with reason.] And I have been told I am closed minded there as well. [Perhaps you are. Know thyself.] Is it really impossible for you to consider I may actually have an honest disagreement based on my interpretation of the facts as I see them? I would really rather you do not close down the discussion based on some nebulous accusation of being 'closed' to an idea especially when you would have to be just as guilty. [If you keep bringing up as a postulate in your discussion that "no such mechanism can exist" despite knowing that is a point under contention, this discussion cannot continue profitably. Perhaps we need to clearly define what "deterministic" and "random" mean, fundamentally. If what you mean by random is simply "non-deterministic", then free will and randomness are not incompatible... but "random" in that case cannot truly mean "without pattern or basis" as it would allow stochastic bias despite still being non-deterministic. I don't consider a non-deterministic system with stochastic bias to be random, thus I can allow for non-deterministic and non-random systems. E.g. consider a number generator that chooses numbers between 1 and 100, but you can narrow it down to exactly two numbers based on the last thousand observations, and choose the correct one of two with 95% accuracy. I would NOT call this a random number generator. But it is still non-deterministic.]
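
To make that number-generator example concrete, here is a toy model of such a generator: the next output is one of exactly two history-determined candidates, with a 95% bias toward one of them (the functions and constants are invented purely for illustration).

 # Toy model: non-deterministic but heavily biased, hence (in my usage) not random.
 import random

 def candidates(history):
     """The two values the next output can take, given past observations."""
     s = sum(history[-1000:]) % 100
     return s + 1, (s + 50) % 100 + 1          # two distinct values in 1..100

 def next_value(history):
     likely, unlikely = candidates(history)
     return likely if random.random() < 0.95 else unlikely

 history, hits, trials = [1], 0, 10000
 for _ in range(trials):
     guess = candidates(history)[0]            # the observer bets on the likely branch
     value = next_value(history)
     hits += (guess == value)
     history.append(value)

 print(hits / trials)                          # about 0.95, never 1.0

No observer can predict the next value with certainty, yet the stochastic bias is plain; that gap between 'non-deterministic' and 'random' is the one under dispute here.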

Even if free will doesn't exist, it is only rational to act as though it does. I bet you do it, too... act as though it does, even if you don't believe in it.

No, I don't choose to believe I have free will at all. I believe in something I consider far superior - I have the ability to adjust my reasoning according to feedback from the environment in order to produce better decisions in the future. In your world my decisions are at the mercy of some magical free will - in mine I get to continually improve and refine them. [In your world, your decisions are at the mercy of the environment. Your reasoning may continually improve and refine, but you don't get to do it... that action is determined for you, too. Free will is not incompatible with improving and refining one's own reasoning.] It is ironic to me that you have set up free will as some way to make meaningful decisions [morally meaningful.] when free will has pretty much removed this meaning by not allowing us to make a true analysis [sure you can make a true analysis. Free will allows a decision to analyze one's own reasoning. Further, it can still happen without explicit will to do so... free will only needs to apply to at least one action to "exist", not every action] of how we decide things and hence correct faulty reasoning [without free will, you cannot make a choice to correct your faulty reasoning; the will to do so is simply part of what you are]. A self-improving algorithm (or possibly not improving as the case may be) if you will.

-- JamesHollidge

You have some funky ideas of what it would mean to have "free will"... like the idea that people with free will would be unable to improve their reasoning. Many of my responses were embedded, above.

If free will exists, your decisions can still be constrained by the environment, your experience, your emotion, and your biology. After all these constraints, you are left with a finite but very large set of possible actions. Without free will, the choice among this set is determined either randomly or by a deterministic heuristic. Two possible ways free will might exert itself are deciding among this set (i.e. narrowing it to one choice) or further narrowing it prior to allowing random or deterministic choice. The mechanism for free will is not relevant to its properties. If free will exists, then it performs its function even if we don't know how. We can, however, describe some constraints on said mechanism. If the 'free will' is, itself, deterministic in its operation, then it isn't "free". Similarly, 'free will' cannot be truly random; if it is, then it is detached from the individual, and cannot be "will". Where, when, why, and how free will might narrow a selection of choices would be an open question. Since non-determinism is only relevant with regard to past inputs and outputs, one possibility is that a 'free will' operates in a temporal field viewing future possibilities and probabilities... obviously not random, but also definitely not deterministic. There are a great number of other non-contradictory possibilities.

A couple proposed mechanisms for 'free will' flip it on its head, making the free will affect circumstance that leads to decision. One (based in Calvinism) essentially says that you made all your decisions before birth when choosing your lot in life; your actions are deterministic, as is your circumstance, but your circumstance was subject to free will. That would imply that free will is atemporal and external to an otherwise deterministic universe, and that the universe in which the circumstance is chosen provides a mechanism for free will (simply moving that problem out a step). Another similarly uncommon view (I've seen it only among quantum physicists) is that the universe itself is subject to our free will, including our environment, emotions, experiences, etc. This one need not be atemporal, and implies a non-deterministic universe... in which case the non-determinism is no more "magical" than any other random source.

Of course, I don't really care how free will exists... or even whether it does. Such things simply aren't relevant to me. They are to you, who insists that free will must be self-contradictory. Me? I've studied the subject enough to know you can't prove its existence one way or the other, let alone touch on the mechanism. I know it isn't self-contradictory; just like randomness and the concept of the world outside us with physical laws, free will is a foundation concept. You cannot "hang it on" anything... and if it exists, then it exists. Is it "magic"? No. No more than your lack of knowledge of a mechanism for time, space, or randomness would make those "magic".


To save a lot of back and forth on the previous points, I think I can sum up our positions respectively:

You believe that freedom has to be given. That means that in order for the being to possess free will there must be some fundamental mechanism. The universe must give us some way of tapping freedom. There is no computer, no rule that can give freedom because blind order is fated to always be what it will be. Similarly, there is no source of randomness that can give you the freedom you want: you cannot be in the lure of chaos and be free - to be in chaos is to be locked into the mercy of fate as well.

I believe that freedom has to be taken. There is no tap of freedom in a universe of rules. We decide how much freedom we can take by understanding these rules and using them to procure more freedom. Freedom is the freedom of control. Will is not a force; it's a matter of whether or not you had the good luck to be born with the equipment that will allow you to win; it doesn't matter what you thought if your thoughts never affected anything.

Would you consider that to be accurate?

-- JamesHollidge

Only partially accurate. I state only that if there is free will, then it must both be free (not chosen by someone or something else, be that a god, a random number generator, or deterministic laws of a universe) and will (connected to an individual). This deductively implies various other properties, such as free will being neither deterministic nor random. Thus, if free will exists, then those properties also exist in this universe. Am I saying free will was "given" by the universe? No. That would imply a decision by the universe. Free will would, rather, be a property of the universe just as much as space or time or gravity... or even randomness. Now, if all you're saying is that I believe that, for free will to exist, it must be a property of the universe, then yes.

Under your worldview, you can't "take" freedom even if you want it. Understanding of rules cannot "procure" even a little freedom, and there is none to start with so you'll be left at no freedom no matter how you struggle. You don't even have an option to pretend whether you have freedom or not... you have no options. You're a tool of the universe, and every decision you make and every thought you have is determined by birth, circumstance, factors that are entirely outside you. If you're fortunate, you'll be born to circumstances that lend themselves to a rather pleasant existence. Otherwise, you may have been more fortunate to have simply not been born. Your will exists, to be sure... but even that is determined by factors outside yourself. You are not free, and never will be.

Only partially accurate. I state only that if there is free will, then it must both be free (not chosen by someone or something else, be that a god, a random number generator, or deterministic laws of a universe) and will (connected to an individual)

This pushes your problem to what it means to be an individual and what it means for something to not be chosen by something else. You want there to be a centre, an essence of self that is not built upon anything else when it is very clear that no such thing exists for the human animal. [You claim this is very clear, and I must ask what you mean by that. In any case, for discussion of free will in humans, it is very clear that the human animal is the level of 'self' under discussion.]

You can't argue with me that nature has a factor - you've already agreed that a lack of the right equipment precludes free will. You can't argue with me that nurture has a factor - you've already agreed that the choices one will formulate depend on the experience of knowing the options available.

You are not resisting my conclusion for any logical reasons. You just don't like the emotional implications - you'd rather deal with a universe you like the idea of than the one that can be shown to exist. [Which conclusion of yours is it you think I'm resisting? That you can have freedom and no free will? Yeah, that's a direct contradiction.] Hardly, I just presented a meaning that has relevance in the given ontology. [It's a joke of an idea, filling that emotional void you left when you rejected free will. You should reject it, too.] So you are going to admit that your so-called rational call for free will is in fact emotional then? (Did I not? Emotional, based on the sociological effects of its emotional presence. Go back to the top of the damn argument.) [That particular conclusion can easily be dismissed for logical reasons. Which other conclusion might you be discussing? That you should embrace your lack of free will?] No, that you should deal with things as they present themselves, not as you might wish them to be. I cannot disprove free will because you have set it up in such a way that it is undetectable. [You can't prove that... still haven't. I don't resist the possibility that free will might not exist, but it'd be foolishness to accept that it doesn't exist without real proof.] On the contrary, it is sensible not to accept that the infinite number of possible things one might dream up actually exist. You should only deal with the ones that actually exist. (You can't prove that anything actually exists, other than your perceptions. Going from "I perceive X" to "X exists" is missing a necessary step... one that requires assumption. You can only prove that you believe that you've perceived existence directly or indirectly. And it is not logically correct to accept OR dismiss something simply because you dreamed it up... or perceived it.)

You're a tool of the universe, and every decision you make and every thought you have is determined by birth, circumstance, factors that are entirely outside you.

What factors exactly are entirely inside me when my self is entirely contingent on the existence of the universe? You seem to want to resist the minor little issue that you are a colony of little cells. If I cut out the right piece of your brain I can change your sense of self profoundly. Give you the right drugs and turn your perceptions upside down. [If you associate self with a material being, then self includes only that material within you. If free will is to exist in that circumstance, then it must exist as part of your material being. If free will doesn't exist, then no such mechanism needs to exist. Occam's razor appeals to a lack of free will, a lack of randomness, etc. until the moment they, as assumptions, are needed to explain observations. But it is no disproof. If you like that appeal more than you like the appeal of freedom, you can believe one instead of the other. Either choice is rational, but neither is based in truth.]

You are not free, and never will be.

Not if I have to rely on some fountain of freedom that cannot be shown to exist as you define it, because it precludes detection. You resist the idea of freedom as an illusion not because my arguments are poor but simply because you do not like the alternative. My idea of freedom is pragmatic - we have an algorithm capable of reasoning about itself. [Your arguments are insufficient. You cannot, deductively, show much of anything to exist... be it a fountain of freedom or the computer you're typing at. All conclusions you make about the world are inductive. You observe people making decisions and the actions they perform, but think that no such thing as free will could ever be detected? I wonder why you think such a thing. One possible test would be to create an artificial agent that matches, exactly, a living human brain, and compare its input-output sequences to those of a living human. With such a thing you could determine whether the human's behavior diverges from the predictions (given an experiment of high enough quality), or calculate the maximum effect free will could have on individual actions (e.g. if the behavior is extremely close, then you can assert that if there is free will at the level of particular choices, it is quite limited). It wouldn't touch versions of free will that affect circumstance or interpretation of experience... but it would be a start. Anyhow, if free will exists then, given sufficient technology, it can likely be detected. It just can't be proven or disproven deductively.]
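[A rough sketch, in Python, of the comparison step in that proposed test. It is purely illustrative and assumes away the hard part: an artificial agent whose predicted choices, and a human's observed choices, have somehow already been recorded as matching sequences of discrete labels. All names here are hypothetical.]

 # Hypothetical sketch: given the agent's predicted choices and the human's
 # observed choices as equal-length sequences of labels, the mismatch rate
 # bounds how much room is left for choice-level free will.
 def divergence_rate(predicted, observed):
     if not predicted or len(predicted) != len(observed):
         raise ValueError("need non-empty sequences of equal length")
     mismatches = sum(1 for p, o in zip(predicted, observed) if p != o)
     return mismatches / len(predicted)

 # Example: near-perfect agreement would mean any free will acting on
 # particular choices is, at most, confined to the small residual rate.
 predicted = ["tea", "left", "stay", "tea", "right"]
 observed  = ["tea", "left", "stay", "coffee", "right"]
 print(divergence_rate(predicted, observed))  # 0.2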

Even in your own argument about what a machine would have to attain to have free will, you have missed that you are proposing not that the system has an inherent freedom from its construction, but that it is sufficiently chaotic in its output that it is hard to predict what it will do. [If a machine can be morally responsible, it is only rational to attribute to it free will. Otherwise blame for a machine's actions shall lie on its programmers. It's quite sensible. To my own thinking, any admission of moral responsibility requires first admitting to free will. Your belief that moral responsibility can be attributed to an individual for actions outside that individual's control is not a rational one... how would you like to be blamed for Chernobyl? That's also not rational, but that sort of disconnected responsibility is, essentially, what you've been arguing for.]

You also seem to be conveniently ignoring the facts about human behaviour - it is not as unpredictable as you assert. There are plenty of people who take advantage of this fact for their own gain, be they conmen or illusionists. [Tell me how unpredictable I asserted it to be. I don't remember saying human behavior was unpredictable... only that, if free will exists, it isn't 100% predictable.]

I cannot take on an idea that is demonstratively irrelevant and use that as the basis for what you consider to be practical. Free will is not practical. [Sociologically, the idea of free will is remarkably practical. I've not tried to convince you it actually exists.] Theologically, the idea of god is remarkably practical. I've not tried to convince you it actually exists.

-- JamesHollidge

I offer only scorn to those people who make any absolute assertion based on inductive evidence or its absence. They deserve it. Unless they're young and impressionable... in which case I teach them the difference between induction and deduction. I'm thinking you're not impressionable at all.

And I can only shake my head when people offer argument from ignorance as a valid way of doing things.

[Sociologically, the idea of free will is remarkably practical. I've not tried to convince you it actually exists.]

Theologically, the idea of god is remarkably practical. I've not tried to convince you it actually exists.

I would suspect some abduction wouldn't go amiss for you.

-- JamesHollidge

Belief in god because it makes you more comfortable is a perfectly rational reason, especially when there is neither contradiction nor harm in doing so. If you know your reasons, if you know thyself, if you know what you assume and WHY, and your reasons are rational, then you are rational. Rational reasons for beliefs include, but are not limited to, truth. Comfort is among them. Encouragement to do the right thing is among them. An axiom that leads one to accept responsibility for one's own actions, good and bad, is among them.

But you should know your reasons, and you shouldn't confuse belief for a reason other than truth with belief because of knowledge. You still have yet to examine your own reasons for your beliefs. You accept on faith that your perception is caused by an external reality. You reject without contradiction that free will can exist. You accept without proof that free will doesn't exist. Argument from Ignorance works both ways, buddy: saying something is false because you can't prove it true is just as much a violation. So, start looking at yourself... and shaking that head of yours.

I don't accept as truth either that free will exists or that it doesn't. If I've made any arguments from ignorance, it's only that I claim that encouraging a belief that there is no free will, and thus no moral responsibility, would have a negative impact on society and the human condition. I cannot offer any statistical evidence for this... no such experiment has been performed. There is only anecdotal evidence. E.g. most historical societies that put a great deal of weight on destiny regarding birth and circumstance are caste systems with a few lucky individuals at the top and slaves on the bottom, without anyone even bothering to try changing things, because that's the way things are. Psychologically, belief that one is a slave to circumstance becomes deeply ingrained. Consider truly extending this belief, to the point that you teach children that even the most heinous of criminals have no morally meaningful choice in their actions; in that case, if you punish or cause pain to these criminals, you teach people that they will be punished and caused pain for actions not their own... and if you don't punish them, they can continue causing pain to others. Either way it's a bad thing.

And I agree: Theologically, the idea of god is incredibly useful. Belief in it... less so, at least theologically. Sociologically, belief in god is useful for controlling populations. There's plenty of historical (aka anecdotal) evidence to back up that notion.

Abduction is a good part of scientific reasoning. Abduction, like induction, is also never 100%... and every abductive argument relies on inductively gathered evidence (e.g. the argument: I perceived big brown eyes and floppy ears, therefore I perceived a dog. I perceived a dog, therefore there was a dog). I'm not sure why you're bringing it into this discussion, or what you meant by "it wouldn't go amiss for you".


I'd like to make you two aware that I am following your discussion. I find it quite interesting and insightful, especially in the beginning when it was quite balanced. It's very interesting to use the contingency of free will as a base and not rely on whether it actually exists. But I urge you two to step back and consider: what does the other want to say? Probably you are both right (at least partially) but cannot get your point across (or maybe you can, but its base is not understood).

As my own few bits:

What would it mean if free will were found to be absent, or at least to be limited to a smaller area? The latter is actually slightly the case compared to 100 years ago. Would that imply a gradual change toward your conjured-up 'ugly society'?

-- GunnarZarncke

Small area? Like you only have Free Will if you happen to be in Jerusalem? ^_^ I suppose you really meant that knowledge (and proof) of the absence of free will is in the hands of a small group of people. No, I don't believe that a small group in possession of such knowledge would lead to an ugly society. I simply can't see the idea spreading... at least not in anything but a highly bastardized form.

Uhh, area was the wrong word. Not the area where the knowledge was available, but the scope of scientific research into the brain (like neuroscience). But your interpretation is interesting nonetheless. -- .gz

The small group of people that have unassailable proof that free will doesn't exist either will or won't try to push that idea onto others... of course, the rational among them would recognize the effort as pointless and potentially dangerous, so perhaps the group will splinter into the survivalists, the lazy, and those zealots who desire to bring truth to others. The former and the latter will argue a little, but ultimately their particular actions aren't by any meaningful choice (even if they comfort themselves that their version of "freedom" is the ability for the universe to control other things through controlling their own actions). At this point, the influence of the belief is too small to have any wide sociological effects. Soon, though, the idea and knowledge of said unassailable proof will be spread via magazines, television, newspapers, and maybe even door-to-door "missionaries" for the cause.

Unfortunately for the "you're a tool of the universe" zealots, most people aren't ruled by reason. Most people embrace ideas taught to them in youth (Republicans good, Democrats bad, Church good, Evolution bad, etc.) without really exploring them when they are finally old enough and intelligent enough to make an independent assessment, and for those who do make such an assessment, it is still often tainted and biased by their earlier teachings. This will be especially visible in the media they utilize to spread their ideas. The "opposite" idea will be given equal screen-time (whereby "opposite" probably means "the Christian god gave us free will, the bible says so" people). Further, it will be given just as much legitimacy by biased newscasters trying to show how unbiased they are ("we give both sides of the story"... as though there are only two sides and they're always of equal quality). The media will ultimately subvert their efforts just as much as they do to the idea of evolution.

There will be those, however, that seriously entertain the idea, and those people that offer more credence to science and evidence. However, even among these people, the idea that "I truly have no free will, no control over my own destiny, I'm just part of the universe and my thoughts, will, emotions, and actions are ultimately completely outside me" will be psychologically abhorrent... so it won't be accepted in that form. Even in the presence of unassailable proof, it won't be accepted; even these people are affected in their decisions by emotion. Alternative theories and views will quickly arise... e.g. the calvinist and quantum theorist views I mentioned above, where free will is something that exerts itself unto your circumstance either due to choices within or prior to existence in this universe. If those, too, were disproven (don't ask how), then some less rational view will develop, like: "I might not have free will, but I can fight for and take my freedom from this universe by (attempting to understand it | building technology | etc.)"... despite the contradiction that neither said fighting nor the resultant freedom are more than illusion. Their actions, and their results, are still outside their control. However, they'd have reified the illusion of free will, given it substance as something more than illusion... called it freedom.

These people with their bastardized views will then be given airtime to espouse them, spreading their more... psychologically acceptable... meme to those who hadn't yet crystalized either way. This would leave some of the original group banging their heads, but might convert others. Hell, there's a good chance that it is someone in the original group that develops this version to make the proof of no free will more acceptable to themselves.

Popularity of an idea has almost nothing to do with the truth it offers, and far more with psychological appeal and day to day utility. The bastardized version would ultimately win out over the raw truth offered by the proof. In further discussions, people will find a middle ground... that, ultimately, this reified illusion of free will has all the properties of free will: people can still be attributed moral responsibility, people can still take meaningful action to change the world around them, people's choices matter, etc. This will lead people to, essentially, believe in free will despite its absence... even among those who gave credence to the proof. They had no real choice in the matter, of course, but that's what they'd believe. And, ultimately, the entire free will scare would be largely forgotten, except as part of history and among those "tool of the universe" zealots who are still pushing the unbastardized truth. People, for the most part, wouldn't teach their children of it directly... the children would only hear that they are responsible for their actions, then go on to learn about free will, then get to philosophy or science courses at age 16-22 that would be as powerless as they've always been to change ingrained belief.

And, IMO, society would be better for not embracing it. Whether free will is absent or present, I'm still convinced that there are rational reasons to believe in it. I hold that position on several related ideas. A human needs to believe, whether it is true or not, that people are basically good. That honor, courage, and virtue mean everything; that money and power mean nothing. That good always triumphs over evil. That one man can make a difference, and it is never too late to start making it; that your choices are meaningful. That love, true love, never dies. I couldn't really put it in words before watching Secondhand Lions, but this has always been my belief. And there are reasons, not reasons based in truth but rather in psychology and sociology, for believing each of these.


Notwithstanding the issues of the arbitrary and undetectable nature of free will - which you have basically capitulated to, even if you won't admit it, by defining free will as something that can only appear to be random but isn't really (leaving us no way to discern free will from randomness... which is surprisingly similar to ontological arguments for deities...) - you clearly have decided that giving opiates to the people is a good idea, whatever form they might take.

It is a comfortable world you have created sure, but it's not the one we live in. People are not basically good nor basically evil. Honour, courage and virtue are invented concepts. Money and power do mean something. Good does not always triumph over evil. Love is fleeting. Clearly you do not like this but you tacitly admit that this is far more likely the true case.

I would rather deal with the world as it presents itself, not as I might wish it would be. If I refuse to do that I can hardly even begin to propose sensible ways of actually attaining that fantasy world in reality.

And you are, of course, forgetting that in the whole history of ideas the concept of free will has not always existed...

-- JamesHollidge

Hmm. For good/bad and true love, we know that these don't exist; but for free will, we don't. I'd agree with James that one should teach that the ideals are, well, ideals that cannot be reached. But I agree that the ideals should still be shown as desirable goals. If you don't have any goals, the effects for society might be bad. -- .gz

The problem I have with free will is that it is NOT a good thing to teach. It teaches that somehow, if you just will it enough, you can really do anything you want, including being 'good'. Reality does not bear this out - people exist who have problems being 'good' for any number of reasons. If we are going to treat these people properly, starting from the perspective of "well, you've got free will, just will yourself to act right" is not helpful. How well is that attitude working in the WarOnDrugs? in the US? Poorly, last time I checked.

If you want something fixed you address the real problem - not a made up one.

I think I will start a WikiPage on the nature of randomness/non-determinism/determinism and such so that those issues can carry on there. Not sure what to call it though. Any ideas?

-- JamesHollidge

You confuse will with psychic powers. [Odd, I can't think of a better description of will. - It's interesting you say that when I wasn't even talking about free will. You admit to psychic powers? No, I'm just saying that it seems etymologically apt.] Will doesn't mean that simply thinking at something hard enough will let you move it with your mind, and that includes yourself. Go ahead. Try it. "Arm! Move! Move, I say!" Hmm... doesn't seem to be working. No, I can move my arm by my will, but my will is not merely desire or statement of desire to move my arm. Free will teaches that, except where you've been coerced, you ARE doing something because you chose to do so. That if a person desires to stop taking a drug, and the situation allows it, they can take meaningful action to make a difference. You somehow think teaching people that they have no power over themselves will make things better? I must ask how. [Simple: we stop telling people to just 'do it' and find the tools that will enable them to achieve it. (You plan to just do that, too? Or shall you first find the tools to enable yourself to find the tools to enable others? Without free will, you have no position to start making decisions from. You'll just do something. [Ah, but the self-correcting program need not be free; it merely needs to do what it does to achieve what it does. If it makes more 'correct' decisions than 'wrong' ones then there is progress - the freedom of that correctness/wrongness is irrelevant.]) Be that simple therapy, use of drugs, and so forth. Instead of pretending we all have the same ability to recognize and correct our cognitive errors, we recognize that we do not, and act in our social capacity to help correct those with problems instead of just insisting they figure it out for themselves or punishing them. Now that's what I call immoral.]
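[To make the "self-correcting program" aside concrete: a toy sketch, in Python, of a decision loop that improves purely by feedback. Everything here - the options, the reward rule - is invented for illustration; the point is only that 'progress' needs correction, not freedom.]

 # Toy illustration: a program that corrects itself from feedback alone.
 # Whether the loop is 'free' never enters into it.
 import random

 WORLD_PREFERS = "B"            # the environment rewards "B"; the program doesn't know this
 weights = {"A": 1.0, "B": 1.0}

 def choose():
     total = sum(weights.values())
     r = random.uniform(0, total)
     for option, w in weights.items():
         r -= w
         if r <= 0:
             return option
     return option

 for step in range(1000):
     pick = choose()
     reward = 1.0 if pick == WORLD_PREFERS else -0.5
     weights[pick] = max(0.1, weights[pick] + 0.1 * reward)   # reinforce what worked

 print(weights)                 # "B" ends up dominant: more 'correct' than 'wrong' decisions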

You live in a world without morality, of course... where good and evil, honor and malice, courage and cowardice are just invented concepts, and a person deserves neither more nor less respect for being a malicious coward than an honorable warrior. You'd teach people this, too. "Yes, my child. You might be dishonest, malicious, evil... you don't have a choice, you see. But those are mere labels." Not that you have any proof for this, though you can appeal to Occam's razor. [I can appeal to the simple fact that if I ask ten different cultures what these concepts mean they will come up with ten different answers that contradict each other.] What it really comes down to is that you just happen to like that particular version of reality more than one in which you might have some control. [If you insist that's my motivation. I am just taking things at face value - you're the one who wants to impose your preferences on reality. I neither desire nor prefer a universe where control is essentially illusory - it's just the one I happen to live in.] On the other hand, it wasn't too far above where you went ahead and embraced the illusion of freedom as something to be taken... [No, I simply pointed out that the most 'true' freedom you'll ever get is the freedom to control something else.]

Our perceptions are our only mechanism for viewing the world; thus the world we live in depends largely upon how we interpret our perceptions; it is subject to psychology. The world in which an autistic or paranoid or uneducated person lives is remarkably different from yours or mine. [Yes, now what makes you think that you really have anything more in your grey matter than they do that wraps you up in 'free will'?] How we act upon the world will depend both upon our perception and upon how we desire to manipulate the world through apparent action (as we can only perceive our own actions through our perceptions). Wise people will recognize this and choose, where possible, to equip themselves with the tools to better understand the world they see, to derive more comfort from it, and to shape their own behaviors to match those they believe ideal. [So in other words you agree with me; you just don't like the way I say it. It's odd that you mock my idea of freedom as the freedom to control when you say such things.]

Anyhow, for Title suggestions: OnDeterminism?, WhatIsRandom [As you can see from this page, it is quite meaningless to say free will is a non-deterministic non-random process - it is a contradiction in terms.]


(also directed at James) How does a non-free-will philosophy compare, given that most such philosophies devolve into an abdication of personal responsibility? "I had no choice but to do that bad thing, I'm a <downtrodden group of preference here>!" already rings out at many criminal trials.

The question you ask is essentially: WhatAreJusticeSystemsFor?

If you think justice is essentially a balancing act then what you seek is to extract compensation from a transgressor - be that monetarily, in terms of jail time or even death.

If you think justice is essentially about making people behave correctly then you seek to find out why there was a transgression and correct the transgressor in order to prevent the same transgression from reoccurring.

If you believe a little bit of both, want to mix in a bit of free will against the fact that, as exemplified, we do not all have the same ability to act in the way society as a whole wishes us to, then you end up with a mess like that.

Now I'm not going to tell you what mix is right because ultimately that depends on your personal prejudices as to what situation you would like as an end-point. However if you think correcting behaviour is a valid goal of the justice system then by chaos you should do it correctly and actually deal with why people behave the way they do and place free will in the bin of unhelpfulness.

-- JamesHollidge

I'm going to let this page cool a few weeks. We've been butting heads, and egos are bruising. They need a little time to heal. I'd like to thank you for providing an interesting perspective on the world. I'm not conceding anything, mind you... you can take to heart that I still find your arguments at least as insufficient as you find mine.

No need to assume any emotional damage. Ideals must be brutally assaulted if we want to be sure of their worth.

-- JamesHollidge

I second the last sentence. You both have your points. To me, it looks as if the logic there is sound. But the inferences he draws for society seem to me less strict, and I cannot fully agree there. On the other hand, James' critique of these points and his grasp of the social effects of non-free will sound very plausible to me, and I could (currently) rather agree with him here. But he doesn't seem to accept the logic provided. You both can learn something from each other. [The argument for free-will is not actually for free-will though; it's just about whether or not the concept will give us a warm and fuzzy society - you could substitute for free will anything that achieved the same effect.]

My own ideas:

I cannot share the fears of an ugly society ensuing. Besides 'ugly' being in the eye of the beholder, there surely can be very different non-free-will-based societies besides caste systems. E.g. I remember comments about well-balanced tribal societies where individuals have no pronounced ego (and thus no dependence on free will).

I propose you two sketch a "user-story" of a non-free-will-based society and compare them. As an example, I'd use the court example. How would a judge argue in such a society? Would he 'coldly' say, "You had to act thus, and so do we"?

I don't think that just because we lose free will we also lose our emotions. Could a judge quite empathically phrase it thus: "I know that you had to do so. But that doesn't solve our problem. I act on behalf of many who cannot help but want a certain treatment for your deeds. You will find your way with this treatment, and I wish you luck that it fits your character. And I cannot, and do not want to, do otherwise."

-- GunnarZarncke

Interesting and maybe relevant reference: http://www.owlnet.rice.edu/~rcb1/Robert%20Bishop,%20Oxford%20Philosophy_files/Descriptions.pdf, a chapter from http://www.imprint.co.uk/books/atmanspacher.html.

Nice links, Gunnar. I'm thinking that Voorhis's monster discriminates far too little. It tends to delete previous contributions from a named party if the WikiGnome they've banned and branded 'GrammarVandal' subsequently edits them. You might want to edit with your IP while these petty edit-wars are happening.

I have documented this under SharkBot. Sadly, WardsWiki provides no reliable mechanism for distinguishing a GrammarVandal edit from a legitimate edit if GrammarVandal spoofs your UserName. I have implemented some heuristics that sometimes detect this situation, but nine times out of ten, for all useful intents and purposes, the edit appears to be GrammarVandal's and the outcome is, therefore, predictable. -- DaveVoorhis
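[Purely as an illustration of why such heuristics are weak - the actual SharkBot rules are not described above, so everything below is hypothetical - one imaginable check flags edits that only shuffle punctuation and casing and come from an unfamiliar host. A vandal who changes hosts or makes substantive edits slips straight through, which is consistent with the heuristics failing nine times out of ten.]

 # Hypothetical heuristic, not the real SharkBot code: flag an edit as a likely
 # GrammarVandal spoof if it changes only punctuation/casing and comes from an
 # unfamiliar host. Easy to defeat, hence unreliable.
 import string

 def looks_like_grammar_vandal(old_text, new_text, editing_host, trusted_hosts):
     def normalize(text):
         return [w.strip(string.punctuation).lower() for w in text.split()]
     actually_edited   = old_text != new_text
     content_unchanged = normalize(old_text) == normalize(new_text)
     unfamiliar_host   = editing_host not in trusted_hosts
     return actually_edited and content_unchanged and unfamiliar_host

 print(looks_like_grammar_vandal("are we code?", "Are we Code!",
                                 "203.0.113.9", {"192.0.2.1"}))   # True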


Understand me now. I accept something non-deterministic, non-random. The answer to the title question is not to be found within the four dimensions of space and time. I have first-hand evidence that there is more.

Care to elaborate? -- BrucePennington


Interesting article in the June '07 Discover magazine: "Soul Search", pg. 46. Scientists have devised a method of testing NearDeathExperiences?. The first attempt didn't work because none of the subjects experienced an NDE. Interesting test, though. I bring it up here because the activity of an NDE is supposed to take place while the brain is flatlined - no software running. To be continued, hopefully! -- BrucePennington

It's hard to test when the actual memories were made. They may "seem" to have been made at a certain time, but the time-stamp system(s) the brain uses may be unreliable during extreme trauma or oxygen deprivation.


Extracted lengthy discussion about AltruismTowardNonRelatives? there.


MayZeroSeven

CategoryDiscussion

