Objectivity Is An Illusion

Based on discussions in MostHolyWarsTiedToPsychology, I'm beginning to think that objectivity is an illusion. Objective reality either does not exist, or is not accessible to us, at least not in a way in which we can separate out the subjectivity from it to know which is which.

For your limitations argue, and yours sure enough they are! --Yoda

[One way to phrase it: objective reality is essentially those realities and truths unaffected by our objections to them in belief or interpretation. This reality is quite accessible to us (most people live in it), and recognizing that it is accessible is quite useful (i.e. "useful objectivity"). Sure, you might be able to reinterpret your 'falling' as you standing still while the planet rushes towards you, but that doesn't change the 'objective' truth that, without intervention, you and the planet will be colliding in a few moments. It is essential to note that cars, clouds, numbers of people, numbers, baseballs, verbs, and other such abstractions over observed properties are part of 'objective' reality as people speaking English use the word. This whole page is HumptyDumpty sophistry, with the proponent of the topic title ultimately redefining objectivity as something far different from the accepted English meaning in a sad attempt to justify major equivocation fallacies in other pages.]

Take the classification of a "baseball". There are plenty of potential balls that could fall into a gray area in which people could rightfully squabble about the judgment or criteria used.

(Moved "prototype" objection below)

We could make a machine that determines baseball-ness, but people could argue that the criteria (algorithm) used is subjective and/or arbitrary. Every baseball detector builder would do it differently. Thus, a digital model may not even qualify as "objective". It is merely a codification of subjective or arbitrary criteria. Repeating arbitrariness in a consistent way does not make it objective, only predictable.
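
A minimal sketch of that point, in Python, with the thresholds and the borderline measurements invented purely for illustration: two detector builders each codify perfectly repeatable criteria, yet they disagree on a borderline ball.

  # Two hypothetical "baseball detectors". Each is deterministic and repeatable,
  # but each builder picked the tolerance thresholds differently.
  def detector_a(circumference_mm, mass_g):
      # Builder A: generous tolerances around a regulation ball
      return 225 <= circumference_mm <= 238 and 139 <= mass_g <= 151

  def detector_b(circumference_mm, mass_g):
      # Builder B: tighter tolerances
      return 228 <= circumference_mm <= 235 and 141 <= mass_g <= 149

  borderline_ball = (236.0, 150.0)         # a scuffed, slightly swollen ball (made up)
  print(detector_a(*borderline_ball))      # True
  print(detector_b(*borderline_ball))      # False

Both detectors are perfectly consistent; that consistency by itself does not settle which set of thresholds is the "right" one.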

Truth may exist in math, but there are plenty of potential gotchas when assigning the real world to a given math model. Is that really "one brick", or 0.98372 bricks because it has small chips? An accountant who counts fractional bricks may get fired for gumming up the works via complicated pedantic ideas, but they are not necessarily technically wrong.

Perhaps we can only call it "objective" if there is no dispute about it. But, this is merely a UsefulLie so that two or more parties can move on to the rest of the stuff they don't agree on rather than quibble over things they do agree on.

Is anything in the real world (i.e. excluding math) truly objective?


RE: "Perhaps we can only call it "objective" if there is no dispute about it."

Example: Whether NASA sent a rocket to the moon is subjective because it is disputed. Because it is subjective, it didn't 'really' happen. Therefore NASA didn't really send a rocket to the moon. QED. Because it has logically been proven that it didn't 'really' happen, there should be no more dispute about it. Objectively, we did not send a rocket to the moon.

Wow. That suggestion seems like a real logic trap. It is probably a HarmfulLie? more than a 'useful' one.

RE: Is anything in the real world (i.e. excluding math) truly objective?

We cannot prove deductively that the real world exists without assuming that it does. But, based on the consistency of our percepts and our seeming inability to influence our percepts merely by attempting to reinterpret them, especially when combined with our inability to anticipate all such percepts, there is considerable and cogent reason to believe that a 'real world' of some sort does exist as a cause for these observations. Based on further reasoning, we can say with a great deal of confidence that mobile life forms that require the ability to locate food and shelter resources (which would presumably include us) would do very well to have senses adapted to detecting such resources in this 'real world'. But all this abductive and inductive reasoning is imperfect. At best we can say: we have very strong reason to believe that a 'real world' exists independently of us, and we are sorely lacking any convincing counter-evidence, so we find it irrational to believe the real world does not exist.

Trivially, if there is a 'real world', being defined as something that exists outside of our percepts but influences our percepts, then everything in this world is "truly objective". That's a tautological truth. Such truths aren't, as a rule, particularly useful (unless someone contradicts one... at which point they become useful for ArgumentumAdAbsurdium? to reject one premise).

Less trivially, we can recognize that this 'real world' affects our senses and those of others in our species in some rather consistent manners. As such, observations made with our senses, especially observations consistent with others - i.e. empirical observations of any sort - are acceptably, accessibly, and usefully 'truly objective'. Same goes for any properties derived systematically from our senses (e.g. probabilities, derived abstractions like 'rocket' or 'speed', etc.). For rockets, these could include such things as: does it fly? how fast? did it explode? what is its historic mission reliability? More concretely, we can recognize that 'objective' is an English word and communicates a meaning in English that is likely consistent with what we are taught in grade school and the dictionary. This happens to include examples much like those for the rocket listed above - speed, reliability, payload, and various other empirical and derived properties.

It is also worth noting that our knowledge of the objective consists largely of the indirectly observable, and is often incomplete. There is little reason to believe that a lack of empirical data corresponds to a lack of knowledge about what is objective. For example, we might be able to figure out the 'average' speed of the rocket during launch but still lack sufficient information to determine the average speed during a fractional period of that time. We know that if we collected the right data, we could derive it objectively, and so we have good reason to believe this speed must be objective even if we didn't collect it.

Ultimately, the answer to the question is: yes, we have considerable and cogent reason to believe in the 'real world', in 'true objectivity', and, further, that this objectivity is accessible. Perfect knowledge is inaccessible (even in math, due to GoedelsIncompletenessTheorem), but is also not necessary to make the objectivity useful and accessible (see reasons with the Speed of Light example, below).


Ostensive Definitions, Cultural Dependence, and Objectivity

Re: "Take the classification of a "baseball". There are plenty of potential balls that could fall into a gray area in which people could rightfully squabble about the judgment or criteria used."

That's an issue with prototype-based (x-like) concepts in general; 'spherical' (sphere-like) has the same problem. Prototype-based concepts are founded on simile and pattern-comparisons. A system that can leverage prototype-based concepts will necessarily possess some form of fuzzy logic, and will answer with varying degrees of 'confidence'. False positives and false negatives for prototype-based concepts depend on the culturally accepted meaning of the word.

But any non-pure-model (non-math) object so far appears subject to the same problems. Each brain has its own little "culture".

[There are plenty of non-math concepts that don't have the problems associated with prototype-based concepts, including especially those defined extensively (by enumeration) and intensively (in terms of properties). It is true that prototype-based concepts, typically defined ostensively or by exemplification, are heavily utilized in English. These concepts take advantage of our advanced associative memory and partial-pattern-matching capabilities... something we are much better at than computers (at least today).]

[As far as each brain having its own little "culture", that is a theory accepted by some psychologists (see, for example, MarvinMinsky's SocietyOfMind (ISBN 978-0671657130)). However, that ultimately is a non-issue. Prototype and ostensive definitions can be taught and tested objectively in a manner independent of the mind's "culture"... such that even Aliens and Machines can get it right and be tested on 'getting it right'. As a note, this mechanism doesn't eliminate the dependence on "culture" outside the brain, but it does reject the implied argument that "each brain has its own little culture" is a statement of useful relevance. Here's how it works:

[This works because training_set != testing_set. This, by nature, prevents the brain/machine/alien/whatever from merely memorizing and regurgitating the answers: there won't be many (if any) exact matches in the testing set, therefore the learning agent or developer of the algorithm is forced to pick out properties and develop associations and generalizations. Both examples and anti-examples are needed to avoid legitimizing EverythingIsa algorithms (the value in any definition is its ability to make and communicate a distinction).]

[While the result of such training is probability based, it is still objective. It is objective because the learning agent cannot (legitimately, as an individual) contest the sample set... i.e. when comes time for testing, they get those right or wrong just like everyone else does, and appealing to their beliefs/interpretations/prejudices/etc. are futile. Further, this is useful because you can with high confidence believe that anyone (from the same culture) who has 'learned' the word will agree with probability=PRH with your application of the word... essentially making it usable in communication. (If a culture happens to pick precise numbers for PRH rather than just winging it, one can even estimate information loss due to fuzziness.)]
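
A minimal sketch of the training_set != testing_set point, in Python; the measurements, labels, and the crude "learned rule" are all hypothetical stand-ins. The agent generalizes from one labeled sample set and is then scored against a disjoint one, so its own beliefs about the test items cannot change its score.

  # Items are hypothetical (circumference_mm, mass_g) pairs; labels are the
  # culture-supplied "is a baseball" judgments.
  training_set = [((232, 145), True), ((210, 145), False),
                  ((233, 148), True), ((240, 160), False)]
  testing_set  = [((231, 146), True), ((205, 140), False),
                  ((234, 147), True)]      # disjoint from training_set

  def learn(samples):
      # A crude generalization from the positive examples: a bounding box
      # with a little slack (a stand-in for whatever the agent really learns).
      pos = [item for item, label in samples if label]
      lo_c, hi_c = min(c for c, m in pos), max(c for c, m in pos)
      lo_m, hi_m = min(m for c, m in pos), max(m for c, m in pos)
      return lambda c, m: lo_c - 3 <= c <= hi_c + 3 and lo_m - 3 <= m <= hi_m + 3

  rule = learn(training_set)
  agreement = sum(rule(*item) == label for item, label in testing_set) / len(testing_set)
  print(agreement)    # a measured stand-in for PRH; the agent can't argue with it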

[Note that this is as opposed to 'extensive' definitions, where one essentially has the training_set == testing_set and the brain is expected to memorize an enumerative list. This is also as opposed to 'intensive' definitions, where one must (usually by measurement followed by inductive or abductive reasoning based on probability) verify particular properties. These can combine and compose with prototype properties, and any of them can be objective.]

[A prototype-based or ostensive definition would be 'fully subjective' only if you (as an individual) were free to develop your own sample set, nobody could usefully contest it, and you couldn't usefully contest anyone else's.]

[With prototype-based concepts, one can usefully contest someone's classification for something outside the culture-defined testing and training sets. They can do so by describing properties and probabilities relative to the primary sets, or by using simplification principles like OccamsRazor to select one model or definition over another. However, there is still some wiggle-room due to the probabilistic and fuzzy nature of the communication. This wiggle-room is acceptable (and still objective) so long as the individuals are not also 'wiggling' their learned 'algorithms' just to accommodate the new examples to their preference (which easily results in much fallacy: equivocation, internal inconsistency, cherry picking, moving goal-posts, and HumptyDumpty all rolled into one).]

[Unfortunately, people aren't well trained to maintain internal consistency, and so what is initially a 'fixed' (and therefore 'objective') algorithm tends to slip over time and becomes inconsistent with the original sample set. This is natural because humans are continuous learners, but it is problematic for communication. It can be fixed via retraining or via formalizing the definition for a working context. Formalizing an ostensive definition merely means coming up with one that, using only other words with formal definitions, is consistent with the culture-defined sample set with probability at least PRH, then using it as the reference definition for the rest of the context. Making it 'useful' also involves eliminating unnecessary dependencies, simplifying it as much as possible, etc.]

[meta-discussion about 'troll logic' excised]


Is the Speed of Light objective? (modulo Epistemological Solipsism)

My argument on the speed of light is that it is not necessarily subjective: IF there is an objective speed of light, then it exists independently of your ability to measure it, and there exists the possibility of error between the speed you infer and the actual speed. (Indeed, only objective properties admit to the possibility of objective error. For truly subjective concepts, only YOU get to determine whether YOU are in error.) This, once again, does not assume the objective existence of the speed-of-light. That assumption is not required.

When you can start proving that distance, height, speeds, positions, topography, referents, mass, etc. are either truly "subjective" or aren't accessible to us even indirectly, maybe I'll start believing that you AREN'T the delusional one. But jumping from "baseball" to "every non-math thing" remains an extremely hasty generalization. (Clarification: 'indirectly' simply means that the properties aren't directly observable, but rather are observable through their effects upon our observations. I.e. we don't actually see a face, we see light reflected off a surface filtered through our neurons and associated with the lines and shapes that form a face. That's indirect.)

Perhaps "incomplete" is a better word than "indirect". If our observations are incomplete, then there are gaps in our knowledge. As described below (PageAnchor "Null Propagation Principle"), an objective conclusion based on nulls cannot be called "objective" because the gap(s) may actually be subjective/arbitrary/relative. All our observations carry at least some nulls because we don't have a god-eye view. Objectivity that we can access and use can only exist if there are no nulls in our knowledge.

There's no such thing as "approximately objective"

[Perhaps not. But there can be such a thing as "probabilistically objective". For example, take the claim P(X :: Movie Title) = "<X> is more than 95 minutes long OR <X>'s action sequences were distasteful". Given a set of movie titles, one could easily locate those "more than 95 minutes long" and determine the probability of this possibility. If this probability were, say, 45%, then (ignoring the "is time subjective?" regression) the property P(X) would be objective 45% of the time. Further, given any particular movie title, we could look up the running time for that title and utilize partial-evaluation. E.g. P("The Incredible Hulk (2008)"), at 114 minutes long, reduces to an objective True, 'nulls' notwithstanding.]
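
A minimal sketch of that partial evaluation, in Python; the running-time table is made up apart from the Hulk entry quoted above, and None stands in for the subjective disjunct.

  # The subjective disjunct ("<X>'s action sequences were distasteful") is left
  # as None (unknown/subjective); the objective disjunct uses a running-time table.
  running_time = {"The Incredible Hulk (2008)": 114,   # minutes, per the example above
                  "Some Hypothetical Short Film": 80}

  def p(title):
      objective_part = running_time[title] > 95
      if objective_part:
          return True        # partial evaluation: the OR is objectively True
      return None            # otherwise the answer still hangs on the subjective part

  print(p("The Incredible Hulk (2008)"))       # True, 'nulls' notwithstanding
  print(p("Some Hypothetical Short Film"))     # None: left to the observer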

Something being 'relative' or 'arbitrary' doesn't establish it as 'subjective'. Speeds are relative, for example, and you've yet to establish that as subjective. Gaps in our objective knowledge may indeed be subjective (it is oft thus that the toddler learns that putting one's hand on the steaming casserole dish is 'painful'), but that doesn't mean you can't determine whether the gap in your knowledge is ultimately objective or subjective by examining the question that describes the gap. Merely being "unknown" is insufficient to establish "not objective"; indeed, if one is willing to admit that some things can be objectively determined, one must admit to unknowns that are known to be objective.

And, quantum physics has shown that we cannot measure every property at the same time. Thus, our indirect methods will always have some incompleteness to them.

[And such incompleteness doesn't always propagate. (Clipping the quantum junction here.)]

Perhaps, but we cannot know that with certainty. Thus, I say "objectivity, if it exists, is not fully accessible to humans."


Your primary thought experiment was based upon a prototype-based concept. However, that's just ONE kind of concept... among many. There are also comparative concepts, for example. Like "taller" and "before". And there are referent concepts. There are ordinal concepts, like "first" and "second", "top" and "bottom", "inside" and "outside". There are predicative concepts (and their dual: descriptive concepts) such as "masses more than 1 kg?" (the question) and "masses more than 1 kg." (the assertion). One might also note that some concepts, like "baseball" and "chair", have a dependency upon shared culture. I.e. they possess a dependency upon, essentially, shared opinions. This dependency makes them more difficult to clearly define (in addition to their prototype-based nature).

I suggest your assertion that 'ObjectivityIsAnIllusion' is very much a hasty generalization from a sample-set that was far, far too small to support the sort of sweeping claims you're making. Whether you are right or wrong, you should consider a much wider range of kinds of concepts before making a generalization.


Do you claim that "intent" is a culture-free concept? Can you name ANY concept you are confident is culture-free?

Yes, intent is a culture-free concept. Anything that knows its own goals when planning actions has, by definition, "intent". Prime numbers, geometric shapes (polygons, spheres, toroids), information, representation, and the idea of communication and language itself are also culture-free. Comparative concepts like "larger" and "smaller", and topographical concepts like "inside" and "outside" also have no dependency upon shared culture. There are plenty more. Actually, even concepts like 'spherical' aren't particularly dependent upon culture, except insofar as it is relevant in determining how un-spherical something can be before you're no longer permitted to call it 'spherical'. It's concepts like 'baseball' and 'chair' and 'flower', where the concept itself is of something created, constructed, or observed by culture, that possess significant dependency.

Cultural independence doesn't mean everyone will understand the concept. It means only that the concept itself possesses no dependency upon common experience or shared opinion.

Without studying more intelligent alien cultures, we don't know if many of these are merely human-centric, vertebrate-centric, etc. I've read about isolated tribes who viewed time very differently than we are used to. It was tough to translate speech about time with them.

Further, things like circles are mathematical models. The "culture" is the model. Actual circles may not even exist, unless we choose an arbitrary threshold for matching, and the threshold range may vary from person-to-person or even culture-to-culture. (I am not claiming it is culture free, only asking you for examples of things that are for sure culture-free. "For sure" hasn't been established. It is un-tested at this point.)

We don't need to study alien cultures to determine whether concepts possess a dependency upon shared experience. We only need to establish that the concepts are founded entirely within pure mathematics (including logics, information theory, computability theory, etc.) or universal physical laws. That another culture understands the concept is NOT required for it to be culture-independent. I doubt you'll find anything on linear algebra in Ancient Egypt.


Re: "My opinion is that top is a foolish boy" might be true, but "top is a foolish boy" is subject to one's criterion for "foolish" (and "boy")."

Is it really? Your opinion cannot be verified as accurately reflecting your belief. You could be lying about your belief. Perhaps if we had a God-eye view of your neurons, we could verify, but we don't. Plus, deciding which neuron patterns constitute a "foolish" determination brings us back to the same issues as the baseball classifier problem above, even if we did have a god-eye view.

You seem to be confusing detectability and verifiability with objectivity. It is, quite frankly, 100% irrelevant that you lack a god-eye view or the ability to verify. The ability to directly verify a property is ultimately non-consequential to the question of objectivity vs subjectivity. Nothing in the definition or concept "objective" says "oh, by the way, you must also have the tools to see it".

You're also confusing truth-value with objectivity (since something can be objectively false). But I won't wholly fault you for that; in hindsight, I should have worded it better as: the truth-property of the proposition "D's opinion is that top is a foolish boy" is objective. Even if D was lying, the truth-property is objective. It is objective because the value of the property does not depend upon the observer of the proposition. The observer's thoughts, feelings, and interpretations are irrelevant. The truth-property depends only upon D's actual opinion.

I think you, top, will need to review what "objective" means before you move forward to consider whether "ObjectivityIsAnIllusion". You might be thinking about something else entirely. I'd rather avoid a great big argument over nothing at all simply because you have your own concept of what objective is supposed to mean that isn't listed in any dictionary on this planet.

Objective: (adjective; ignoring definitions synonymous with 'goal' and optics) (objective. (n.d.). Dictionary.com Unabridged (v 1.1). Retrieved October 18, 2007, from Dictionary.com website: http://dictionary.reference.com/browse/objective)

  1. intent upon or dealing with things external to the mind rather than with thoughts or feelings, as a person or a book.
  2. being the object of perception or thought; belonging to the object of thought rather than to the thinking subject (opposed to subjective).
  3. of or pertaining to something that can be known, or to something that is an object or a part of an object; existing independent of thought or an observer as part of reality.

Where's the alleged discrepancy?

In your head. (1) You believe (wrongly) that verifiability is relevant to objectivity. (2) You believe (wrongly) that something must be "useful" to be objective. (You believe 'rightly' that something must be verifiable to be useful, but nothing about "objective" requires the "useful".) (3) You believe (wrongly) that ascription of objectivity requires an assumption of a god's-eye view. This belief seems to derive from the other two (since objectivity of properties isn't particularly useful if you lack means to detect or verify them).

Because the definitions are colloquial, they don't really address the observation issue. If something cannot be tested and examined, then whether it's "objective" or not is moot. Perhaps we could say the answer would be "null". As far as the "useful" issue, if we are not talking about anything use-able, then any issue where it comes up can only produce moot UselessTruths. We are not really interested in that, are we?

I, as usual, disagree on almost every point. First, the definitions YOU use might be "colloquial" as in "limited to top's head alone". I prefer to utilize the dictionary for its purpose. The dictionary definitions don't address an "observation issue" because observation is not an issue. Second, on the "moot"ness, I disagree fundamentally; properties that are not verifiable by any test are not particularly useful. But they are not particularly "moot" to a question of whether ObjectivityIsAnIllusion. There are tons of things that cannot be verified that are objective. E.g. "The library of Alexandria had over ten scrolls on the subject of breeding dogs." or "Ten years ago, squirrels hid nuts under the primary root of this tree. Then, during the winter, they removed and ate those nuts." Nobody can verify these because we lack the tools (i.e. the records, the time-machines) to do so, but whether each is true or not remains objective. This is essentially the same issue you bring up with the inability to read minds: just because you lack the tools to read the mind doesn't mean it's not objective. I consider your entire line of argument to be "moot" because you haven't based it on accepted definitions of "objective". Thirdly, when describing useable truth, you say: "tested and examined." Unless you mean "verified at least indirectly", I disagree with you yet again, though the reasons behind that disagreement are a lot deeper and based largely in computability theory, information theory, and epistemology - topics that would diverge far too much from the subject of ObjectivityIsAnIllusion... and that would probably be beyond your ability to casually understand.

Finally, whether a truth is useful depends only upon your goals. I.e. even verifiable truths are useless unless they somehow support you in the making of decisions.

You're saying someone has an agenda with a UsefulTruth. This is quite the hostile view toward society, that people, even hard scientists, have secret agendas. Generally scientists who are discovering truth about the universe couldn't care less about an agenda; they care what is right. If their theory about the universe is wrong (false) then their useful truth is corrected, updated, and revised. An example is the theory of evolution - lots of people work hard on this theory because they care what is true, and only care what is true. Evolution is not subjective. The only agenda a scientist could have is fundraising - but a scientist thankfully gets more funding if he is right about many things. If he is wrong, he gets less funding. The fact that people have cancer is a UsefulTruth. The people working to stop cancer are doing so because they objectively agree that cancer is bad and exists. Saying objectivity is an illusion is the first sign of a crackpot, crank, or quack - since then anything can be true, as nothing is really objective. There are people out there who deny cancer even exists - likely these same people also think objectivity is an illusion, and that cancer is just an illusion in the body. Actually Uri Geller (found to be a fraud) is also the type who performs paranormal "psychic surgeries" on people to remove cancer that was apparently just in their mind. This is also the kind of guy who probably wants to convince people objectivity is an illusion.

I should note, here, that my goal is to utilize reason to examine (and, as it turns out, counter, since logic and reason do not support your conclusion) yet another of your grand-standing sweeping claims, this one entitled: "ObjectivityIsAnIllusion". Any truths that help me do this are proving quite useful. One of the truths that is useful is that verifiability is not required for objectivity.

Anyhow, you seem more interested in UselessLies. Even if "ObjectivityIsAnIllusion" weren't just another in a long line of hasty generalizations and silly "oh, I understand now!" eureka moments from your brilliant mind, where the hell would it be "useful"?


Re: [...] The truth-property depends only upon D's actual opinion.

Let me see if I am reading you right. You are saying that the fact the statement was made is an objective fact. We could get into the issue of speech parsing having the same arbitrary boundaries that a "baseball detector" did above and thus is not really 100% objective, but I want to stay away from low-level nits like that right now. Thus, for the sake of argument, I will agree with you in this context, but find that a UselessTruth if the topic is about objective determining of "intent".

You're not reading me right. To start, I didn't say that "the fact that the statement was made is an objective fact." Despite having not stated it, I'd agree with that statement. It will NOT, as you imagine, run into the "speech parsing" issues. Communications and Signals are not prototype-based concepts. Whether I made the statement is not actually dependent upon you understanding it. Even if you could only understand Korean, it wouldn't change the fact that I made the statement. It just wouldn't be useful to you that I made the statement. Even if I said "Dogs are better pets than cats.", that is, objectively, a statement in the English language. (See "dog speech example" below.) But the issue of grammatical statements objectively being statements is rather more associated with the whole "(excluding Math)" caveat I injected for you, above. You probably want to stay away from high-level nits like that right now. It's why I chose to avoid the issue above... I rather hoped to avoid this silly little tangent that could do nothing but confuse you.

What I actually said is that the truth-property of the statement "D's opinion is that top is a foolish boy" is objective. I.e. that statement is either true or false, and it is objectively true or false. And that it is "objective" by definition of "objective": the truth-property, whatever it happens to be, does not depend upon the thoughts or interpretations of the person reading the proposition - the "thinking subject". It belongs, instead, to the object of thought - the proposition.

This is NOT true of ALL statements. E.g. "Dogs are better pets than cats." - if you consider this proposition, the answer to "is this true?" belongs to YOU; you cannot be called wrong for answering differently than does someone else. This is because "better" involves a subjective value judgement. (If one instead replaced it with: "Dogs are [on average, heavier] pets than cats", that would have objective truth-value.)

Generally, when one says that a proposition or statement is "objective", that person is referring to objective truth-property, not the property of objectively being a proposition.


Because opinion cannot be fully tested, I hesitate to call it an objective statement. These both have similar problems:

because "has opinion X" is fraught with measurement problems just as "foolish" is. You seem to be saying that an opinion-detector can be objective but that a foolishness-detector cannot be. I don't see any important difference between the two kinds of detectors. Perhaps the best we can do is:

This would reduce the room for subjectivity, but not totally eliminate it (as per "dog speech" example below.) They are all subject to one or more of the following:

I will agree that the higher ones are probably more likely to produce disputes than the lower ones, but "dispute rate" and objectivity are usually considered something different. (Although dispute-rate may serve as a practical stand-in for objectivity.)

Even if I believed detectors mattered for the issue of objective vs subjective (which, based upon the accepted definitions of these words, I do not) I still wouldn't agree that there is no important difference between detectors of foolishness and detectors of opinion. I see at least one rather important difference between them: you can test an opinion detector to see how well it works utilizing double-blind studies and mechanisms to verify opinion that have a high likelihood of succeeding (such as asking for verification on non-sensitive questions). You could later take the accuracy and precision of the device into account when using it in the future.

With a generic "foolishness" detector, though, there's no way to test it. If you choose to develop tests, the closest you'll come is a "detector of behaviors that D, E, and/or F are likely to believe to be foolish" - which is a detector of objective things. I'm already a decent detector of things I believe to be foolish, but if you do invent a machine that can do it that I can tune to my own settings, be sure to let me know.

One can make a foolishness detector by throwing together somewhat arbitrary criteria. But my point is that most detectors, perhaps all, use arbitrary criteria. Nobody's identified one yet that is not subject to some amount of arbitrary criteria. (See speed-of-light example below.)

You should probably say what you mean by "arbitrary criteria". Consider a typical detector for, say, blood-alcohol content. I could detect this by taking a blood sample, or by using a sample of breath, and analyzing these for alcohol. Or I could look at the day of the week and hour of day, and judge by the fact that BAL for Susan is, with 95% probability, over 0.12 by 2200 on Thursday. Why might this be, or not be, arbitrary? Or I could use a combination of these. What makes one criteria arbitrary and another non-arbitrary? (If you think all criteria are arbitrary, would you also consider that a TautologyMachine?)

Let's see if we can find a single case of non-arbitrary criteria outside of a model (because models can create their own absolutes out of thin air).

That's not a reasonable request, top, unless you can explain what makes a criterion arbitrary or non-arbitrary. Do your own work, first. Seriously examine this and determine whether you've created what you, in your personal dialect, call a "TautologyMachine".


There is no accurate way to prove that a stated opinion really is a held opinion. At the best, your tests could detect consistency in such statements.

This verifiability issue - the one you keep describing - does apply to held opinion. It also applies to everything else in the real world. All truths about the real world are limited by the fact that we can only learn them through inference over percepts. We have no accurate way to prove that our percepts actually reflect reality, that we aren't, for example, in TheMatrix. But all that is a verifiability issue, not an objectivity issue. If there is a real-world (or even a matrix) with properties that are true independent of observation, those properties are, by definition, objective in nature.

Whether someone holds a particular opinion or not is true or false based on their actual state of mind. It is not based on your interpretation of their state of mind - that would make it subjective. It is not based on your ability to confirm their state of mind (i.e. verify it at least indirectly) - an inability to do so would make it "unverifiable", though in this case opinions ARE verifiable (via inference over percepts). It is not based on your ability to detect it directly (i.e. read their mind) - an inability to do so makes it "undetectable".

Compare this to "foolish"; whether a particular action or decision is or is not "foolish" depends on... what, exactly? hmm... Aha! It depends on the interpretation of the action or decision. One's interpretation of "foolishness" probably depends upon perceived goals and emotional beliefs. For example, you might deem a particular act "foolish" unless you believe the performer's intent is "suicide". And you might deem "suicide" foolish based on nothing but your own beliefs about the value of life. And every other observer can hold a different opinion with none of them being "wrong". This dependency upon interpretations of the thinking subject makes "foolish", by definition, subjective.

If you wish to contend that there is no spoon, that there are no people, that there are no states of mind, all because you cannot be absolutely confident that your senses aren't lying to you, feel free. I'm not going to contest a world paradigm; my own is unpopular enough (I'm an epistemological solipsist). However, I will contest you drawing completely arbitrary lines and saying that some things are verifiable and other things aren't when the mechanism for both - inference over percept - is exactly the same. Further, I'll contest that these issues of verifiability are particularly relevant to objectivity. If there are no objects, then it would be true that nothing is objective outside math and logic. But, if there are no objects, nothing is subjective, either - there would also be no observer-objects.

Saying there are no objective entities and saying they don't exist are two different things. You are putting arguments in my mouth. "Reality" might just all come down to probability and confidence levels, or at least mutual agreement between parties.

Saying that "there are no objective entities" and saying "objective entities don't exist" is exactly the same thing. And here is another truth: if there are at least two "parties" that can mutually agree, then there are at least two parties. That is objectively true. Even considering that "reality" is some sort of shared fiction implies, also, that there are some things, some 'reality', even more objective: the things sharing the fiction and the mechanism for sharing it.


"Is anything truly objective?"

PageAnchor: Light Example

Certainly measurements of the fundamental constants of physics are (e.g. the speed of light in vacuum, Planck's constant h, etc). But that's getting off-topic. A more useful question might be "where does the line between objectivity and subjectivity lie?"

But there can be differing opinions about the suitability of the detector/ruler being used to measure it.

Differing opinions are only relevant if the concept has a dependency upon a shared culture. "Baseball" qualifies. "Speed of light" does not.

Any analog measurement is going to require some subjective interpretation or at least an automated detector in which the rules programmed into it are subject to personal judgment and/or arbitrary thresholds.

Please convince me that you aren't just lying through your teeth. I'd say that the "suitability of the detector" utilized to measure the speed of light is determined by its accuracy and precision in telling you the actual speed of light.

That sounds like circular reasoning to me. You use the results to test the accuracy of a ruler? Subjectivity can creep in from many angles. For example, even if everyone agrees on the tests and instruments, there may be disputes over whether to use averages or medians to select the final result from multiple measurements.

It would be rather silly to use the results of a test to measure the accuracy of the test. No, I said actual speed of light. This assumes that there is some precision or accuracy error in your test. It also means there is no circular reasoning. The proper mechanism to verify a result is not to look at the numbers (unless you already have a known good measurement with reasonable accuracy and precision). A typical scientist would use, instead, the following means: make a prediction based on sound inference using the proposed result that should be true if your measurement is correct and false if incorrect. You test that prediction. If your prediction was accurate, increase confidence in measurement. If not, decrease confidence in measurement. If it is possible, allow other people to help you out. Repeat until confident. I.e. utilize the ScientificMethod.
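
A schematic sketch of that loop, in Python; the prediction and experiment functions are hypothetical placeholders, and the confidence-update rule is purely illustrative rather than a claim about how scientists actually quantify confidence.

  def verify(proposed_value, derive_prediction, run_experiment,
             confidence=0.5, threshold=0.95, max_trials=20):
      # Repeat: predict from the proposed result, test the prediction,
      # then nudge confidence up or down accordingly.
      for _ in range(max_trials):
          prediction = derive_prediction(proposed_value)
          observed = run_experiment()
          if prediction == observed:
              confidence = confidence + (1 - confidence) * 0.3   # prediction held up
          else:
              confidence = confidence * 0.5                      # prediction failed
          if confidence >= threshold:
              return True, confidence
      return False, confidence

  # Toy usage with stand-in functions:
  accepted, conf = verify(299792458,
                          derive_prediction=lambda v: v > 2e8,
                          run_experiment=lambda: True)
  print(accepted, round(conf, 3))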

The only real problem is that the actual speed of light isn't directly accessible. But science is based around that fact: that nothing is directly accessible. Simply because you can't directly access it doesn't mean that there isn't an actual speed of light. If your measurements regarding the speed of light are wrong, they're wrong, and if they're right, they're right, and it will prove itself one way or the other when you start trying to use the measurement for something else. It simply isn't relevant whether other people have different opinions about the suitability of the detector. What matters is the actual speed of light.

That all just may mean that the variances in our realities shrink the more we observe something. Or, we may be creating the range we observe merely by observing it. (Such an idea may have been considered absurd 100 years ago, but seems within the realm of the quantum world.) Thus, we may be creating our own "reality" (both for ourselves and a group). To us, this may look like we are getting closer to the "real" value, but this could all be an illusion. And, we'll never reach the "real" value anyhow.

If that is the nature of reality, science will eventually model that phenomenon, too. But first you need to be able to make predictions with it.

I will agree there is a kind of "convergence phenomenon" where more and different observations seem to narrow the range of uncertainty (or disagreement) over time. The One True Value is the traditional way to explain this. However, quantum studies offer a second model: our observations change reality (or change which "path" of reality we are traveling down). There may be yet other models to explain convergence.

Warning: approaching LaynesLaw invocation

I'd rather avoid a great big argument over nothing at all simply because top has his own concept of what objective is supposed to mean that isn't listed in any dictionary on this planet. That does seem to be part of the problem here. However, perhaps unfortunately, this argument isn't entirely over the definition; it's also over hasty generalizations and sweeping claims made without any support whatsoever. You know... sweeping claims like: "Any analog measurement requires subjective interpretation". Seems made up to me. As such, one can't quite invoke LaynesLaw and call the case closed.

You need to clarify how you are interpreting the stated definitions. (They do not appear to be "clinical-grade" to answer the tough questions, but if you can show how they clearly settle such, please do.)

The dictionary definitions say that something is "objective" if it is a property of the "object" and not of the "thinking subject". This says nothing of verifiability, and therefore verifiability is not required.

In the intent case, it is *both*, so which one does (your interpretation of) the definition apply to? It is mind A trying to ascertain what is in mind B. Both sides are "in the brain" such that it is not an "external" object. Thus, a literal interpretation would disqualify both sides.

You're incorrect about the intent case unless you're thinking about your own intent. If you're thinking about my intent, then you are the "thinking subject" and I am the "object". It is true that each side is in a brain, but there isn't just one brain under discussion. One of the brains is external to the other. If intent were subjective, then, at least to A, the intent held by B would actually depend upon the interpretations and thoughts of A. There would be no real way for A to be wrong. But intent is objective. B's intent is part of B, and if A attempts to discern the intent, A's conclusions can be, quite objectively, wrong.

It is *implicit* that it is assumed to exist in the reality as we know it, and reality as we know it does not share the god-eye view. If it doesn't say which reality model it is using, then why assume the god-eye-view as the default? Some interpretations may indeed assume a god-eye-view. But if the god-eye-view does not exist or is not accessible to us, then ObjectivityIsAnIllusion, or at least something we don't have access to. In that case, perhaps the title should be "objectivity is either an illusion or something we don't have access to". However, that is a bit long for a wiki title, and the results for us mortals are probably the same either way. Pick your poison.

If all you wish to argue is that reality is not directly accessible to us, I'd not contest it: I'm already an epistemological solipsist. But you can't prove that something is or is not an illusion when it's beyond your reach.

Despite your assertions, applicable definitions do imply certain world-views. I.e. if something is "objective", that implies that there is more than just one "object". Indeed, for something to be objective, there need to be at least two objects (in reality, behind that magic curtain): the subject doing the thinking, and the object of thought.

Again, if the definition does assume a god-eye-view, then it depends on something that either does not exist, or is not accessible to us. Is it still usable to us despite this gaping flaw? Well, maybe, but only as a stand-in for rate of disputes, as already described.

I disagree that there is a gaping flaw any more than the likely non-existence of perfect spheres is a "gaping flaw" for having a word for "sphere".

"Sphere" is a word from a model (geometry). How to assign actual real-world objects to it gets us back to the issue of "detectors" and whether it is possible to make an "objective" sphere detector. I don't think there can be such a thing. The thresholds and metrics would always be arbitrary to some degree, and thus "subjective". Now, I know you've asked me to describe how to detect "arbitrary", and I'm afraid that is probably another yet long sticky philosophical battle.

"Objective" is also a word from a model (actually, a very popular class of world-models) in which there exist real objects (or properties) that exert their reality whether or not we detect them. Indeed, by definition, a property is objective, if and only if the interpretations, thoughts, and feelings of a thinking subject or observer are non-consequential to its reality. A failure by an observer to detect, think about, or interpret the property is, by extension, also irrelevant to the "objectivity". Anyhow, to properly discuss the idea that you can't have a perfect detector for "real-world objects", or to even discuss the possibility that detectors might have "flaws", you need the concept described by "objective", and you need "objective" to, very explicitly, NOT also mean "verifiable". Objectivity and verifiability being orthogonal is of far greater value than making a lame attempt to tie them together. Orthogonality is a strength for the definition, making it considerably more useful, not a "gaping flaw".

(I created a dedicated section for answering the divergence involving arbitrariness.)


RE: The thresholds and metrics would always be arbitrary to some degree, and thus "subjective". Now, I know you've asked me to describe how to detect "arbitrary", and I'm afraid that is probably yet another long, sticky philosophical battle.

Arbitrary metrics are not necessarily "subjective". It is possible (supposing you admit to any existence of objectivity) to create some rather arbitrary objective metrics. One example is 'greater than 10.27313 kilograms mass'. I'm assuming you'd agree that a number pulled out of thin air is rather arbitrary. To avoid infinite and irrelevant regression (on this point, which is about "arbitrariness"), we'll just assume mass and measurements are objective. Since measured mass is objective, and various particular objects may have a mass so much greater or less than 10.27313 kilograms that error in measurement leaves no ambiguity (e.g. a Carbon-14 atom, planet Earth), then the arbitrary 'property' of being 'greater than 10.27313 kilograms mass' must also be objective.
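
A minimal sketch, in Python; the threshold is the arbitrary one from the text and the masses are rounded textbook values. However the threshold was chosen, its history never enters into the evaluation.

  def more_than_threshold(mass_kg, threshold_kg=10.27313):
      # An arbitrary cutoff, but an observer-independent property.
      return mass_kg > threshold_kg

  carbon_14_atom = 2.3e-26     # kg, approximate
  planet_earth   = 5.97e24     # kg, approximate
  print(more_than_threshold(carbon_14_atom))   # False, for every observer
  print(more_than_threshold(planet_earth))     # True, for every observer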

But the choice of and weights of a given criteria are, at least to some extent, subjective.

You can say that the choice of and weights of a given criteria are "arbitrary". But you're assuming, again, that arbitrariness implies subjectivity... which would be circular logic in this context.

Hearkening back to the 'brick' example at the top of the page, I assert that the 'greater than 10.27313 kilograms' property is objective regardless of whatever history and reasons went into producing it. It is objective because masses and measurements are objective and that is all the rule depends upon. The observation of the 'greater than 10.27313 kilograms mass' property doesn't depend upon its history. It may be that subjective considerations went into the construction of the property, but that simply wouldn't be relevant.

[Divergent argument irrelevant to 'arbitrariness' issue excised. There didn't appear to be any new material not already iterated elsewhere on this page.]


PageAnchor: Null Propagation Principle. This cannot happen:

  objective = unknown * subjective * objective * etc...

Because if one or more terms is subjective or unknown (null), then the result cannot be considered (fully) objective. Perhaps it should be called the "weakest link principle" because ANY amount of subjectivity also spoils the pot, not just null, but NPP has already stuck.

Which logic are you using? And what does the * operator mean above? In ThreeValuedLogic you can do this:

  T = U or T or F
  F = U and T and F

Similarly, one can translate to objective/subjectives:
  O = O or U
  S = U and S

In logics where you don't hemorrhage information by collapsing unknowns, you can even gather more values. For example,
  ((P == NP) or not (P == NP)) with  (P == NP) is unknown

Using the law of the excluded middle we can prove ((P == NP) or not (P == NP)) to be True. However, should we take your suggestion and 'propagate nulls', we end up with ((P == NP) or not (P == NP)) => (U or not U) => U.

This loss of information is not necessary; it is merely a consequence of choosing an inadequate logic, and is one of the big complaints against ThreeValuedLogic. You should, perhaps, choose a better logic... ideally one better adapted to handling unknowns without loss of information. In such a logic, this 'Null Propagation Principle' or 'Weakest Link Principle' is unlikely to be a problem.
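
A minimal sketch of the contrast, in Python, using Kleene-style three-valued truth tables with None standing for 'unknown': the unknown does not propagate through 'U or T', yet this particular logic still loses the classically true excluded middle.

  # Kleene three-valued logic: None stands for Unknown (U).
  def k_not(a):
      return None if a is None else (not a)

  def k_or(a, b):
      if a is True or b is True:
          return True          # T = U or T: the unknown does not propagate
      if a is False and b is False:
          return False
      return None

  print(k_or(None, True))                 # True
  P_eq_NP = None                          # the open question, value unknown
  print(k_or(P_eq_NP, k_not(P_eq_NP)))    # None: the excluded-middle tautology is lost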


Also, a good case can be made for objective <=(from)= (subjective X subject). A subjective statement held to be true, when combined with whoever is holding it, yields a property that is either objectively true or false. Though this assumes that subjects themselves are objective.


The thing is, ANY decision based on one or more arbitrary factors is going to be arbitrary. It is sort of like Null in SQL expressions (for some dialects): one null renders the entire thing null. Or, a proof where one critical path element in it involves supernatural influences. It takes just one to spoil the pot.

I'll agree to some degree. Of course, for this statement to be anything but a UselessTruth to you, you, top, will first need to define "arbitrary" (I'm still waiting...). By my own understanding: Whether a decision is arbitrary depends upon whether you can provide a reason for it. If a decision is partially arbitrary (e.g. the exact path you take from your house to McDonalds, including amount of swerve and precise speed) it doesn't necessarily make the whole decision arbitrary (the action of going to McDonalds in order to get food ("because the kids voted on it"), the choice of using certain lanes of traffic ("in order to get there efficiently and reduce risk of being cut off")).

The criteria you lay out could be viewed as a "local model", and within that local model "reduce risk of being cut off" is a criterion it puts forth, such that within that model it could be described as "objective" (see "single result" below). However, the choice of that model, more specifically the choice of which criteria to make king, is a subjective, arbitrary one. It is "arbitrary" in the sense that there are multiple local models that can be presented for the same task. We again face the inside-model versus outside-model dilemma.

The model is subjective? Which one? The model of distances and roads between my house and McDonalds? If so, I don't readily believe you - you'll need to convince me first that 'distance' is a subjective property... i.e. that if I merely interpreted reality properly, my hand would already be within arm's reach of your neck (that I may invoke the RemoteStrangulationProtocol). The choice of criterion for what makes a "better" route to McDonalds, I'll agree, is relative to one's goals. Someone else might want, for example, to take the "scenic" route, and call it "better" because they have a goal of either wasting time or observing more of the city. What makes a "better" route depends upon how well the plan meets the goals. But this doesn't make either decision "arbitrary" in the sense that reasons can't be given for them. Nor does it necessarily mean the goals are "arbitrary". Maybe you can claim that our existence is ultimately "arbitrary" (no reason for "Why are we here?") but, really, there are plenty of answers to "why? why? why?" on the way towards such grandiose questions.

The "distance" example is not different enough from the speed-of-light example so far to bother exploring it just yet. And again, if you make a given set of goals your model, then indeed WITHIN that model, objectivity might be found. You have not settled the in-model-out-of-model issue yet. Your IF IF IF tendencies are simply throwing models at the problem rather than escape from the models to discover true reality.

(1) The "distance" example is certainly more directly usable than the speed-of-light example. Perhaps we should explore it, instead. Why isn't your neck within arms reach of my hand, top? (2) A set of goals is NOT a 'model'. (3) I'm quite unclear on exactly what you mean by this "in-model-out-of-model" issue, and I'm not convinced one exists. Models are something that exist in our heads. If we model reality, that doesn't mean that reality IS a model. The most you can rationally say is that (a) reality ought to somehow influence our model, and (b) our model of reality ought to allow us empirically correct predictions on future observations of reality. (4) I don't pretend that we can directly access reality. But I also don't pretend that you can prove it can't be even indirectly accessed. (5) My tendencies to use 'IF' statements are largely associated with my tendency to avoid making unsubstantiated or unreasonable claims. Perhaps you should give it a try.

Reality may influence our models, and a lot of other things; but that does not mean that reality is objective. If we don't get the full picture, then we cannot rule out that the portion we don't see is subjective. I am not claiming it is, only that we can't USE IT as an objectivity ruler until we know. This is back to the null propagation analogy.

Actually, if reality outside us influences us, that DOES mean it (or at least that portion of it) is objective. That would follow from the definition of 'objective'. Also, recall that a property being 'subjective' means that it belongs to us: our interpretations, our thoughts, our feelings. The only things ever subject to our thoughts, feelings, and interpretations are those parts of the picture that we do observe. Everything else can be ruled out. If there are parts of reality never observed, they are definitely objective (as in: by definition), for those parts of reality would exist without dependence upon thoughts, feelings, and interpretations.

Please clarify "follow from the definition". As far as that which exists outside of our observation, it either does not exist and/or we don't have access to it. Thus, a definition that depends on it (if it does) is not usable to us (except perhaps as a model that may approximately mirror our observations).

The definition of "objective" does not depend upon the possibility that things exist that we have never observed (which is quite different from saying we can't observe it). This is true because the definition of "objective" allows for the possibility that we did not have any thoughts, interpretations, or feelings towards something... i.e. that we never noticed it. If we didn't notice something, that doesn't mean the something wasn't there to be noticed. (And the definition of "unicorn" does not depend on the existence of unicorns. (Correct.)) However, the definition of subjective DOES depend upon the fact that we've already become the "observer or thinking subject".

And regardless, the definition of "objective" is quite usable; indeed, we're using it right now to discuss the things never noticed. I think you have an extremely skewed and inappropriate view of what "usable" actually means. Words are 'usable' if they are useful for communication.

Again, being useful as an approximation or notation is not the same as implying the model fully reflects reality. If it only partially reflects "it", I invoke NPP again.

[NPP? You mean that principle that appeals to your reliance upon poor choice of logics for handling unknowns? (I'm clipping some LaynesLaw conversation here about usefulness of 'objective' as a word.)]

For all we know -deductively-, perhaps. Inductively and abductively, we know much more about the particle than you're admitting. The vast majority of everything we know is through experience and thus inductive and abductive. And, yes, "objective" is still useful in communication (and better as orthogonal to "determinable") even in the strange universe you describe. (I'm assuming that's the statement to which you refer with "would your statement still hold?", based on immediate context.) If my monitor (one of many things with many properties) really did cease to exist the moment I wasn't in sight-range of it, and creep back into existence when I came nearby, that would still be an objective property of the universe. Actually, also objective would be any rules or properties relating these observations; I could still come up with 'F = ma' and determine that it applies to anything I ever observe. That universe would be plenty objective. Unless, of course, I'm just interpreting my monitor into existence. I'm supposing you're true to your word here: these things actually exist contingent upon my observation of them, but don't exist contingent upon my thoughts, feelings, and interpretations of them. There's a difference.

If your monitor ceasing to exist while you are not looking at it is an "objective property of the universe", it is not accessible to us, and thus objectivity is not accessible to us. You forgot the very first paragraph of this all. I don't necessarily dispute the possible existence of objectivity, I only dispute that we have access to it. (The F=ma analogy is so similar to the speed-of-light analogy that I don't see where it adds anything new. It has the same unsolved problems.)

Indirect and incomplete access is still access. You can't rationally conclude that missing access to one aspect of objectivity means complete lack of access to objectivity. If objective reality exists (and, by predictive forms of induction and abduction, it almost certainly does), we probably access it indirectly and incompletely, through our senses, without much effort at all. We just can't deductively prove reality exists. Regardless, your statement, that 'objectivity' (aka 'objective reality') is not accessible, remains unsupported. Unless you can provide support for the idea that we don't already have access to reality, it's just another of your unsubstantiated statements. (It's actually possible to prove that you can't deductively support this statement. And you can also prove that you can't prove its negation. It was actually an exercise in one of my philosophy courses. But such meta-proofs are difficult to understand or explain to someone without a BookStop. So, ignoring the issue of whether or not you can support your claim that objective reality is not accessible: without support it remains merely an unsubstantiated claim, and shouldn't be given any more credence than should be any other unsubstantiated claim.)

[... editing ...]

As far as "arbitrary" goes, I'm still waiting on your meaning of the word, but I (personally) believe that any choice is non-arbitrary when you can provide a reason (be it sound or fallacious) for choosing it over other possible choices. Improved precision or accuracy (as verified through other mechanisms) and reduced cost are two fine reasons. That the choice managed to reach your attention and bear your scrutiny is a traditionally acceptable reason (i.e. for choosing it over the infinite set of possible criteria you didn't consider), especially if you examined more than a couple of possibilities. Non-intrusiveness, automation, portability, range, ease of use, reusability, battery life, and a number of other reasons are quite validly applied when choosing criteria for devices like "detectors", and can make a choice non-arbitrary. I'd consider it quite non-arbitrary to say: "this is the best criterion we've found for automatically detecting XYZ for under $150 per test. Oh, and here are the records of test trials run blind against known sample sets that show it works with 98% confidence, 1.8% False Negatives, and 0.2% False Positives". Even magic numbers can be non-arbitrary ("We tried a million magic numbers for this constant, and this one here is the one that gave a 3.9% accuracy improvement over the second best in the tests we ran. And so we're using it.")
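
To make the blind-trial idea concrete, here is a hypothetical sketch (the evaluate function, the toy circumference criterion, and the sample data are all invented for illustration) of how one might compute false negative and false positive rates against a labelled sample set:

 # Evaluate a detector against samples with known labels.
 def evaluate(detector, samples):
     false_neg = false_pos = positives = negatives = 0
     for item, truth in samples:
         predicted = detector(item)
         if truth:
             positives += 1
             if not predicted:
                 false_neg += 1
         else:
             negatives += 1
             if predicted:
                 false_pos += 1
     return {"false_negative_rate": false_neg / positives if positives else 0.0,
             "false_positive_rate": false_pos / negatives if negatives else 0.0}

 # A toy "baseball detector" judged purely on circumference in centimeters:
 is_baseball = lambda circumference_cm: 22.9 <= circumference_cm <= 23.5
 print(evaluate(is_baseball, [(23.0, True), (23.2, True), (30.0, False)]))

Whether circumference alone is a good criterion is exactly the kind of choice under discussion; the point is only that once the labels are fixed, the error rates are not up to anyone's interpretation.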

I don't believe that "subjective" implies "arbitrary criterion"; by how I utilize the word, most people will have non-arbitrary criteria for recognizing "fools" and "attractive women"; they'll just have their own non-arbitrary criteria. And, because the concepts or properties in question are "subjective", nobody can say these people are wrong; no amount of evidence can make them wrong. Subjective concepts admit of no accepted means (not even accepted cultural mechanisms) of judging whether any particular set of criteria is good, bad, better, or worse than any other particular set. That is up to the thoughts, feelings, and interpretations of the observer. (e.g. "No matter how attractive you find skinny butts...") I'd argue that properties are subjective only if the metric for what constitutes a "good" or "better" metric is ultimately up to the observer or thinking subject.

But it seems your concept of what "arbitrary" means is different from my own. And so I continue to wait.

I shall tentatively supply the definition of "multiple possible answers to a given question or problem". Your definition seems to be that there is some amount of planning and logic behind it. I would lean toward "non-random" to describe that.

Clarify: multiple possible answers, or multiple correct answers? (i.e. any "true or false" question has at least two possible answers, and the path of a baseball in free flight is always one of many possible paths)

Clarify: if multiple answers apply, does offering an abstraction that covers these answers (and only them) qualify as 'arbitrary'?

Clarify: if the answer-language provides multiple ways of saying or representing the same answer (i.e. "false" and "not true") is the answer necessarily arbitrary?

Clarify: are stochastic processes arbitrary?

Clarify: are properties of the universe arbitrary? e.g. Planck's constant is just one of many possible values.

Clarify: how is 'arbitrary' logically related to 'subjective'? can you provide sound reason to assert that something can't be both arbitrary and objective?


PageAnchor: Dog Speech Example

Regarding allegedly objective quotes such as, "Dogs are better pets than cats" (example above).

One could perhaps claim that the person really said, "Dogzer bit her pits, think its," and that claim could not be 100% refuted. (Speech recognition of "ordinary" speech that does not use subject context tends to produce translations like this.) News reporters cannot achieve 100% objectivity; they can only hope that nothing they report is disputed enough to harm their reputation. Speech recognition is a combination of threshold probability and "best fit" as far as meaning goes.
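
As a toy illustration of "threshold probability plus best fit" (the candidate transcriptions, scores, and threshold below are invented, not taken from any real recognizer), a recognizer might score each candidate and accept the best one only if it clears a confidence threshold:

 # Invented candidate transcriptions with made-up confidence scores.
 candidates = {
     "Dogs are better pets than cats": 0.87,
     "Dogzer bit her pits, think its": 0.09,
     "Dog czar bitter pits thin kits": 0.04,
 }

 def transcribe(scored, threshold=0.5):
     best, score = max(scored.items(), key=lambda kv: kv[1])
     return best if score >= threshold else None   # None = no confident fit

 print(transcribe(candidates))  # -> "Dogs are better pets than cats"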

The idea that something needs to be beyond all possible dispute is not relevant to objectivity. The requirement that someone was around to interpret or misinterpret the quote is not relevant to objectivity. You're still confusing objectivity with verifiability. If you say "Dogzer bit her pits, think its" in a forest, and nobody is around to hear you, you still said "Dogzer bit her pits, think its."

You are slipping into your unjustified god-eye view of truth again. If there is One Universal Truth, we mortals don't have direct access to it, and thus our tools (such as definitions) should not assume we do. We only have detectors (our senses and/or instruments); we can never peek behind the magic curtain.

You are, once again, imposing your view of what properties you believe definitions should have, ignoring those properties the definitions actually possess. It is true that our senses can't verify anything in the real world to 100%. Nonetheless, that is irrelevant to the accepted English definition of "objective". Additionally, I find even your "ideals" regarding language on this matter extremely dubious. If there is anything behind that magic curtain, we ought to possess words to refer to it. If there is nothing behind the magic curtain, then we ought to possess words to refer to the things that we believe to be there. Words, after all, are used for communication of both facts and beliefs.

Both choices are merely models that we *subjectively* choose to use. How ironic that the definition of "objective" may depend on a subjective model (assuming your interpretation of it).

You can tie arbitrary meanings to words... call it "foobar" for all we like. It just means you aren't speaking English, and makes moot your conclusions due to fallacy of equivocation (since you're really saying that "FoobarIsAnIllusion" when you're not talking about "Objectivity" as it is accepted by speakers of English). That you seem to think this even moderately profound has me rolling my eyes. This is doubly true when you consider that your entire argument, thus far, is more like: "RealityIsInaccessible"; if you were a bit more intelligent, you would have avoided the issue of "objectivity" entirely and skipped straight to the real issue, which is "verifiability". You can't even prove that much, of course - logic can only prove that you can't prove and can't disprove the existence of reality.

What is the "accepted" interpretation? You just seem to be claiming that your interpretation is the only or most common interpretation. Why should I take your word for it? Perhaps you are arguing that a One True Reality model is the most accepted. That may indeed be the case, but it does not change the fact that objectivity under that model is not fully accessible to us, and thus for us mortals objectivity is still an illusion, a pretend concept. Accepting your view of the definition leaves us with a broken mess. My view of how it is commonly used is that it's a stand-in for dispute-rate.

(Request: can you please stop splitting my paragraphs unless there is real reason? I'd prefer you utilize a 'RE: whatever' approach.)

You should use the dictionary as your primary source for a definition if you wish to avoid dispute. That is what I did. You don't need to "take my word for it". Dictionaries have plenty of words and will speak for themselves.

As to whether "pretend concepts [...] leave us with a broken mess": I don't accept your word for it. Can you explain, in clear terms, exactly which problems it causes? Or is this "mess" also a "pretend concept"?


Can you prove that the definition assumes a god-eye view?

Why should I? YOU are the person who makes that claim, not me. Can you prove that the definition assumes a god-eye view?

Again, I find the definition too colloquial to answer tough philosophical questions. (For one, it doesn't say how to tell for sure if something is "outside" the observer or observer's bias.) I'll welcome a more precisely stated working definition from you, as long as you let me test the hell out of it.

Very well. To avoid being incomplete, I'll come back to this one when I'm ready to answer WhatIsObjective?.


Please clarify: "Actually, also objective would be any rules or properties relating these observations;"

If you wanted clarification on my funky English idioms: "Rules or properties relating these observations are also objective." If you wanted clarification on the greater concept: when making observations over any predictable objective universe, you can begin to (via inference, hypothesis, and testing) observe rules that apply across observations. The 'F = ma' was an example of such a rule: if you apply an observed net-force to an object of an observed mass, it results in a predictable level of acceleration. This relationship between observations is not subject to interpretation, thoughts, or feelings of the observer. It is, therefore, objective.

F=ma is true by definition. We humans defined "force" that way. We made a model and made definitions based on that model.

It doesn't matter if 'force' was defined this way. What matters is that the rule is consistent across observations, allows us to make predictions in a predictable objective universe, and that you can't simply re-interpret your observations to make it consistent. It is the relationship between observations that allows you to make predictions. Our observations were already part of a 'model' of reality. This rule becomes part of our model of reality only BECAUSE it allows us to make predictions over these observations.

I'll further note that correctness isn't even required for objectivity. Scientists are fairly certain that Newton's equations, for example, break down at high velocities and at extremely low and extremely high masses. The relationship may be in error, but when utilizing it you still plug in the observed masses and observed velocities and make predictions which you then attempt to observe. To the contrary, the fact that we can even note that they broke down, that the predictions failed, that they were falsifiable and not fixable by merely reinterpreting observations, is evidence for their objectivity. Only objective things, after all, can be falsified by something outside control of the subject.
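
As a minimal sketch of the falsifiability point (the numbers, function names, and tolerance are invented for illustration), one can plug observed force and mass into F = ma, predict the acceleration, and check the prediction against what is actually observed; merely reinterpreting the inputs does not change the verdict:

 def predicted_acceleration(force_newtons, mass_kg):
     return force_newtons / mass_kg           # a = F / m

 def rule_holds(force, mass, observed_accel, tolerance=0.05):
     # True if the observation matches the prediction within a relative tolerance.
     prediction = predicted_acceleration(force, mass)
     return abs(prediction - observed_accel) <= tolerance * prediction

 print(rule_holds(10.0, 2.0, 5.01))   # prediction is 5.0 -> True
 print(rule_holds(10.0, 2.0, 7.50))   # prediction fails  -> False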


Below is an illustration of the "weakest link principle". Subjectivity combined with objectivity results in a subjective answer, similar to how natural explanations combined with supernatural explanations render the result a "supernatural" one, or at least something not usable as a natural explanation. The supernatural tie "pollutes" the result.

False Analogy (ArgumentByAnalogy). And you've clearly not given this more than surface thought. Mixing the two can result in objectivity in several forms.

Your "weakest link principle" or "null propagation principle" or whatever you choose to call it is not a logically valid theorem, and is incorrect when applied in the manner you have been attempting to apply it.

You have not shown that clearly. And why are you spreading that fight here instead of where WL is defined?


Common Trip-Ups from the Pro-Objectivity Side

These are repeated trip-ups that I've encountered. Please make sure you've reviewed them so as to not make the same fallacies.

Are you ready to formally define "usable objectivity" yet? Or is it just another sophist device, designed and defined to escape all possible argument?


Sadly it seems this whole discussion occurred during my holiday. I'd have liked to chip in, as it seems Top is slowly moving toward VaguesDependingOnVagues?, while nobody else seems to get what it is about. Sad.

I don't want to intrude into the ongoing ThreadMess of a discussion, which I had some trouble reading/following, so I add my two cents here.

First I'd like to note that I see Top's position on ObjectivityIsAnIllusion as an instance of TautologicalDefinitionFallacy: His strict interpretation of objectivity, i.e. fully separating objective from subjective statements, disconnects these domains so completely as to make the objective indeed not wrong but useless.

Then I have the feeling that Top has grasped the core of the vague context problem:

[The criteria [he] lay[s] out could be viewed as a "local model", and within that local model [X] is criteria it puts forth such that within that model it could be described as "objective" [...]. However, the choice of that model, more specifically the choices of which criteria to make king, is a subjective, arbitrary choice. It is "arbitrary" in the sense that there are multiple local models that can be presented for the same task. We again face the inside-model versus outside-model dilemma.]

Any criteria of any property in any model are ultimately embedded in a vague context. "Vague" is meant here in the sense of depending on the vagueness of the host of the model, which may be a human. Humans use the vagueness of natural language to bootstrap modeling at all. In such a vague context, choices become arbitrary (in the sense of not being objective). And these outer/meta/bootstrapping contexts are mostly historically developed and arbitrary. Thus no real use comes out of using these vague contexts - except for bootstrapping the really useful models.

-- .gz


Strange it is that programmers (persons), who use machines that manipulate numbers (values) via the shifting of electrons (matter), should argue against objects, about which they have constructed an entire paradigm (way of doing things), thinking that the abstractions (ideas) by which they think about methods, models, and manipulations are anything other than virtual representations of how they deal with real things (artifacts). But strangeness is frequently present in those who do not distinguish that which is real from that which is imaginary. -- ThinkingOutLoud.DonaldNoyes.200809042032

Related topic: SoftwarePlatonism


ObjectivityIsAnIllusionContinued?


See also: ObjectiveEvidenceNeverFound, ThereIsNoObjectiveEvidenceThatKeyboardsAreUseful

OctoberZeroSeven and again SeptemberZeroEight

CategorySubjectivityAndRelativism

