Many lengthy disagreements on this wiki can be directly or indirectly traced to differences in personal psychology or general models (assumptions) about human psychology. I don't know of a single HolyWar topic that was eventually settled by objective information. It would be nice if math or science directly guided us toward the "correct" approach, but that is not going to happen. At best, math might tell us that something is inconsistent or needs more work.
Sometimes. If the argument is whether uppercase keywords (a la SQL, COBOL) or lowercase (a la C++, Java) are better, I would agree that there is no correct answer; nor is science likely to provide one. If the argument is whether or not quicksort is O(n*log n) in the average case, science has a definite answer - and anyone who says otherwise is a fruitcake (or had better present a formal proof).
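(For the curious, here is a minimal Python sketch - purely illustrative, not anyone's production code - that checks the average-case claim empirically by counting comparisons. An empirical check is not the formal proof, but it shows the kind of objective answer available here:)

 import math, random

 def quicksort(xs, counter):
     # Recursive quicksort that counts one comparison per element examined
     # while partitioning; counter is a one-element list used as a mutable cell.
     if len(xs) <= 1:
         return xs
     pivot = xs[len(xs) // 2]
     less, equal, greater = [], [], []
     for x in xs:
         counter[0] += 1
         if x < pivot:
             less.append(x)
         elif x > pivot:
             greater.append(x)
         else:
             equal.append(x)
     return quicksort(less, counter) + equal + quicksort(greater, counter)

 # If the average case is O(n*log n), the ratio comparisons / (n * log2(n))
 # should stay roughly constant as n grows.
 for n in (1000, 2000, 4000, 8000):
     counter = [0]
     trials = 20
     for _ in range(trials):
         quicksort([random.random() for _ in range(n)], counter)
     print(n, counter[0] / trials / (n * math.log2(n)))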
- But for the most part, those are not the kind of issues that generate the most friction.
Many of the questions argued about on Wiki - many of the HolyWars - are ones science could eventually weigh in on; however, much of that research hasn't been done yet. In the meantime, lots of folks here like to extrapolate their personal preferences into sound engineering principles, often claiming "science" as their ally when it's convenient (even when science has far less to say on a topic than they claim), and dismissing well-established scientific findings that they find inconvenient, often on grounds of psychology. The former is simply bad science; the latter frequently ventures towards PostModernist anti-science. Both are ScientificSins.
- I would be interested to see examples of people rejecting "well-established scientific findings" on this wiki.
Wikizens must learn to face this fact. It is inescapable. No software model can perfectly represent the entire world, and thus must make some assumptions (abstractions) to simplify information enough for human beings to be able to deal with it. But since
AllAbstractionsLie, we have to pick the most
UsefulLie among many alternatives. So far there are very few useful metrics to choose among them. And, even the metrics that do exist often depend on assumptions about the path and probability of future changes. Since no one can predict the future, estimating the future depends on studying the past. But, humans have selective memory, and which memories a person selects to make decisions from is all part of psychology. Most of us have at least a few memories of being reprimanded or faulted for problems that popped up due to unanticipated changes. We are more likely to remember those occurrences, and adjust our behavior (and memories) in a reverse-
Pavlov sense to avoid them in the future.
Complicating this even more is that human psychology varies vastly from individual to individual. Thus, finding and deciding on which human to use as the reference for study or decision making is basically a political process, and politics depends heavily on psychology.
{Moved discussion related to "types" into TopOnTypes.}
It's wise to have respect for others, instead of fighting. For example, I respect many programmers who use different languages. The fact is: you really only have time to master one or a couple of languages. So respect the other people, rather than fighting and debating whose view is better. There are, however, some situations where a solution is simply not the best one for a problem at present, and proof (or science) really helps out. Example: If you see someone completing a project in 5 days using one tool, and the other guy takes 2 years using another tool, you should make a point and offer facts or explanations about why it took you less time. It is far better to point out facts or explanations, and hopefully real world examples, than it is to fight.
There might be a few people who can master many languages and paradigms. However, those are probably rare and difficult to tell from the pretenders. Related: MindOverhaulEconomics.
(moved from MultiParadigmDatabase)
You mean like: "There is an objective definition of 'classification'".
Yep. Exactly like that. And that one would be easy. The definition of classification I use is quite objective: any computation performed with intent or purpose of making a decision as to whether a something (no matter what that something is - object, behavior, phenomenon, etc.) belongs inside or outside a group of somethings (no matter what that group of somethings is). Of course, I'm quite sure that explaining this to someone who's proven to be an utter HostileStudent (and who thumbs his nose at academics) would be a rather pointless exercise in frustration. Hmmm... I seem to have done it again... I habitually explained a claim, an action you call "meandering". I'm trying to avoid making claims, for now, and just tear apart your fallacious arguments. It's a lot more direct.
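(For concreteness, a minimal Python sketch of a computation that fits that definition; the particular group chosen is invented purely for illustration:)

 # A "classification" under the definition above: a computation whose intent
 # or purpose is to decide whether a given something is inside or outside a
 # group of somethings.
 PRIMARY_COLORS = {"red", "yellow", "blue"}   # the group of somethings

 def is_primary_color(name):
     # The purpose of this computation is the in/out decision itself.
     return name.lower() in PRIMARY_COLORS

 # The same set-membership test performed incidentally, with no decision-making
 # purpose behind it, would not count under that definition.
 assert is_primary_color("Red") is True
 assert is_primary_color("green") is False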
You do realize that "intent" and "purpose" are highly subjective things. I don't thumb my nose at academics. I thumb my nose at people who hide behind academics. They remind me of contractors who convolute stuff out of purpose or habit for the sake of job security or kingdom protection. Gorbachev, tear down these walls!
I don't believe you. Can you support your statement that "intent" and "purpose" are highly subjective things? Or are you just blowing more hot air? Intent and purpose being "subjective" would imply that, given a person who performed an action, whether that action was performed intentionally or on purpose would literally be "subject" to the opinion of whomever you asked (right alongside: "Was what he did 'good' or 'bad'? Did it make you 'happy' or 'sad'?"). But it seems to me that, unlike the questions of "good" or "bad" and "happy" or "sad", there's exactly one authority on this question of "intent" and "purpose": the person who performed the action. The opinions of others do not affect this reality. Thus, 'intent' and 'purpose', to my understanding, fall quite squarely in the "objective" category.
Yet another pedantic UselessTruth. A software engineering definition is not very useful if it depends on asking the person who made something. That means if the author died in a plane-crash, there may be no objective way to see if their work has a certain intent. And even if they describe alleged intent, there is no external way to verify it.
If you put aside your HostileStudent hat, you might learn something: Intent and purpose are relevant because, in any implemented computation, you have representations of things. You cannot prove that an operation is or is not integer-addition, for example, without knowing whether the objects involved are intended to represent integers. The same truth applies to the act of classification. It is true that you often cannot deductively prove intent (even your own). I'm supposing you've never heard of abduction and induction? Deductive reasoning isn't the only sort available. We have access to induction, abduction, evidence, and probability - and we can verify intent as it exists within a computation or specification with exactly the same approach we utilize to discern every other truth about our physical universe.
One can prove that an operator results in integer addition (for all tested scenarios) regardless of where it came from. Something with identical behavior as an integer adder is for all intents and purposes an integer adder. But human intent in a definition renders it a subjective definition. Otherwise, an alien compiler from a long-dead civilization may not qualify for "types" or "classification" even if it does stuff (has behaviors) that we normally associate with types. I am not against subjectivity-linked definitions. However, one shouldn't be so insistent that it is the only approach if alternative and useful psychological models can be provided. You sell TypeTheory with the haughtiness of purity and singularity, when in fact it's in the same subjectivity boat as TypesAreSideFlags.
Re: "One can prove that an operator results in integer addition (for all tested scenarios) regardless of where it came from." - but can you prove the opposite? not knowing the intended integer representations, can you prove that an operator doesn't result in integer addition? Please give this question some thought before reading on.
Identifying something as an integer adder is a less difficult task than proving that addition has occurred or is occurring in arbitrary computation systems (consider that most computation systems that perform addition don't tell you what they think the answer is when they're finished... they just store the result away and allow it to subtly influence future behavior; it's rather difficult to prove they're doing addition). Showing that something is an adder requires first identifying a representation for the integers that the adder-device happens to utilize when performing addition. Unfortunately, you will need to eventually face the issue that there exists an infinite set of possible representations for integers. This can make it remarkably more difficult to take a device and prove that it is not an adder - because you can't ever be completely sure you're not just representing the inputs or interpreting the output incorrectly. And that's even before you deal with such questions as: might the representation for the output integer depend upon the representation or values of the other two? might the integer-representations differ for the two input integers? And is an adder that decrypts the inputs and encrypts the result still an adder? Dealing with representation borderlines as between bignums and smallnums is quite difficult enough even without resorting to systems designed to confound you!
- "Evidence" is sufficent. You don't need proof to have evidence. The more tests it passes, the higher our confidence can be that it is an "adder" even if we're never 100% certain.
- What the hell is your point? You're completely ignoring the issue I presented to you: How do you prove that something ISN'T an adder? What "tests" do you run? The way you're saying it, you've already made a guess as to the representation for integers, and you're running tests with that representation. How did you guess that representation? You can either run a bunch of tests, and decide that the representation is something that matches all those tests, or you can decide on a representation, and run tests until it fails. Either way, how do you prove that something ISN'T an adder? Until you can answer this question, you're at a very serious impasse. And I'll state this very clearly so you have something to target with your future statements: You cannot prove that you don't have an integer adder without knowing the intended integer representations. Period.
- One can determine if something can *serve* as an "adder" (based on past behavior). Whether it "is" an adder gets into B.Clinton/Yoko philosophy fits that we don't need to get into. And what does not being able to prove something is not an adder have to do with anything here? The only "Boolean" statement we can probably make is that either we found a way to use something AS-A (an) adder or we haven't found a way to use it as one. That's it. We cannot answer "is" or "isn't" as mortals. Even an individual atom may be doing, or may potentially do, addition on a quantum level outside of our inspection tool abilities. -t
- A definition is impractical unless you can decide whether a particular instance is or is not covered by that definition. You are failing on the "is not" part. A definition that doesn't allow for "is not" answers either covers everything (in your words: a TautologyMachine) or is undecidable (and thus not usable in practice - impractical). Your failing is of the 'undecidable' sort - it takes infinite time to decide that any potential "adder" device will NOT *serve* as an adder. You parrot yourself about the accepted statement that you can determine (with some confidence) that a particular device will add two input numbers and return you a correct result, and can thus be called an "adder". This is true, but it covers only half the problem. No matter how much you parrot it, you will not cover the other half: "How do you prove that something ISN'T an adder?".
- It is both unrealistic and unnecessary to determine "is not" with certainty. Your drive for IS-A instead of HAS-A is making the same kind of mistakes that OO inheritance fans do. In the practical world, we'd probably strive for "has use by us as an adder". If nobody on our team can figure it out, then it is not a "usable adder" for us. Because EverythingIsRelative, our models often have to focus on the goals we are trying to achieve in a given frame, which is usually not an absolute model of the universe (unless we can do it for nearly the same cost, which is not likely). Your head can be used as a walnut cracker, but only mad-men and really angry trolls will consider it as such :-)
- You don't need any more "certainty" in answering "is not" than you have for answering "is"... but the amount of certainty required depends on the formality of the definition, and can be 100% (for completely formal definitions). However, it IS necessary that you be able to determine "is not". And the "IS-A instead of HAS-A" is no mistake here: we're talking about DEFINITIONS, not inheritance hierarchies. Definitions are much more akin to sets, or set-descriptors, where a single referent can be (and not be) many different things at once (animal, bird, visible-thing, flying-thing, living-thing, etc.). If you, in your limited wisdom, think that this is "making the same kind of mistakes that OO inheritance fans do", then please provide some examples and reasoning - at the moment I'm extremely skeptical. And I find it quite funny that you're backpedalling far enough that whether something is "useable as an adder" is now entirely subjective, depending on whether each person (or team) can "figure it out". Intent, at least, is objective, and imagined subjectivity (on your part) was supposedly its primary sin.
- A perfect objective definition is probably impossible, but at least I don't pretend that my fav defs are "perfect". The only perfect definitions exist in made-up worlds (simulations, math, etc.). As far as "it IS necessary that you be able to determine 'is not'", that is why I suggested reframing adders as "can be used by [party of interest] as an adder". Absolute adder determination external to the user(s) is not necessary for 99.9% of all practical work. I don't think there are any absolute definitions for real-world things. Even in the animal kingdom, you and I may have genetic material stolen from arthropods by promiscuous microbes and viruses. We may be, say, 0.001% arthropod. (That may explain your crabby attitude.)
- One can easily form and utilize perfect objective definitions over perfect data of any sort, including percepts and linguistic constructs. However, I'll agree that it is impossible to have perfect data over the real world. For the real world, then, you can choose between having a perfect definition with imperfect algorithms that identify it, or having an imperfect definition with perfect algorithms that identify it. I.e. you can define "has taken drug XYZ" as having actually taken it, or define "has taken drug XYZ" as identifying positive for it under some particular test. You and I seem to see differently on which of these is (fundamentally) the better choice. With the "adder", your definition is something like: "any device I figured out could add and verified to my confidence within X seconds of attempts"... which I find ludicrous, seeing as you're saying the exact same calculator can be an adder and not be an adder under said definition from one person to another and from one day to another (depending on memory and learning), etc. I, meanwhile, would say that a calculator is an adder even if the 2-year-old is using it only as a chew-toy.
- It is possible that EVERYTHING is an adder, a chair, and a chew-toy if you merely knew how to use it or press the right molecules to trigger (currently) hidden internal properties. Maybe God has a pool stick such that when the right atom hits the right spot of any object, it becomes the thing in question. He just doesn't let us mortals know how it works. We can never prove this is NOT the case. But it would be a UselessTruth regardless for mortals. This is why I adopt the "usable as X by party of interest".
- I wouldn't say that it is possible that EVERYTHING is an adder (or chair, or chew toy) - if you recall, that's where Intent enters the picture. It keeps things from being UselessTruth for mortals. If you wish to define "usable adder", that's fine too, but it is, quite frankly, not the same as defining "adder". An adder/compiler/chair/etc. might be unusable for a number of reasons - i.e. if you have no power for the computer, you can't run the software on it... or the chair is simply outside of your reach. Whether something is a "usable by party X as a chair" is useful to know, but it simply isn't the same as asking whether something is a "chair".
- "Intent" issue addressed below.
- As far as "evidence" not being deductive (100%) proof: I agree. The exact same thing applies to "evidence" of intent and purpose. It is exactly what was referred to when was stated: "We have access to induction, abduction, evidence, and probability - and we can verify intent as it exists within a computation or specification with exactly the same approach we utilize to discern every other truth about our physical universe." Induction and abduction applies just as easily to proofs about computational systems being integer adders as it does to intent or purpose of computation.
- Barring neuron dissection, intent is probably not absolutely provable. It's a fuzzy ruler.
- Intent is absolutely provable over a specification of operation through formal codifications in semantics. Intent is not absolutely provable over an arbitrary black-box computation system. However, this isn't a serious problem. You can't prove anything 'absolutely' about black-box systems in general; you must always rely upon induction and abduction, evidence and probability, and other tools we've developed to study black-box systems. This is just as true for discerning intent as it is for determining the presence of objects or describing laws in our own physical universe, which is equally subject to induction based on observations and experiments on another black-box system (our universe). Intent is fundamentally no more fuzzy and no less provable (and no less objective) than these other things. However, I'll agree that our tools to discern intent are obviously less precise than our tools that measure, for example, distances between objects.
- Your version of intent can only be solidified by interpreting behavior and putting it into your own private model. Perfection may indeed exist in the model, but that does not mean what you are modeling is perfect. There may be multiple incompatible, but internally consistent models for any given real-world thing. And, can you prove that "intent" exists in a given model? Can an algorithm be run that says Yes or No regarding the existence of intent in a model? Or, do you have to rely on some human's "expertise" for that?
- Intent is a real-world thing and, thus, has all the same issues as every real-world thing. And intent doesn't need to be in MY private model any more than identifying a "baseball" or "bird" needs to be done in MY private model. Can you prove a "bird" exists in a given model without being told it is there? How about a "baseball"? or a "lake"? How about "intent"? The answer and methodology is exactly the same for each: yes, via induction or abduction, which (as methodologies) are imperfect by nature, seeing as they rely upon limited knowledge and such things as evidence and probability. I.e. you can recognize that a ball of an apparent size with particular stitching is probably a baseball, though you'd need to weigh it and perform a series of other tests to determine whether it is regulation, but just by looking you can't tell -absolutely- whether it's a baseball or a grenade or a man-eating monster in disguise. Similarly, you can recognize that nature and physics don't create balls of apparent leather with that stitching, so you can say with a great deal of certainty that there was intent behind its construction. However, as with all real-world things, you can't know this absolutely. And you don't need to be a human to recognize intent any more than you'd need a human to recognize a bird, a lake, or a baseball. It is possible to create algorithms that recognize baseballs and intent with varying degrees of inaccuracy, just like recognizing any other real-world thing.
- It is possible to design an image recognition system that can recognize "baseball" (possibly with a certainty scale). But, I've never seen such a contraption for "intent". And even if one was made, its results would probably be highly debatable. In this sense, "intent" can be made "objective", but being objective and useful are not necessarily the same. The "objective" baseball identifier may still pick items that Bob or Joe may not consider baseballs themselves. Indeed, any criteria we pick for a definition is going to be subject to dispute, but "intent" has that problem more so than other potential criteria [and this is true because TopMind says so.]. It should be ranked very low on the choices for ideas or concepts to couple a definition to. One could argue that there is no better choice, but you seem to be completely satisfied tying IT definitions to intent. It is much easier to dissect a baseball than human neuron processing (at this point in history). Tying a definition to intent thus kicks practicality and usability of the definition right smack in the nuts. One has to do complex and muddy philosophical juggling and processing to apply it. Perhaps you find that fun, but most don't, at least when there is work to get done.
- Practically, one can with a high degree of confidence recognize 'intent' based on the approximated ratio of something being true by 'coincidence' or 'accident' vs. being true due to 'effort with knowledge'. In IT it is particularly easy to recognize, as these approximated ratios are much easier to make accurate (i.e. random bit patterns vs. patterns recognizable by a particular syntax). This particular 'algorithm' doesn't recognize hidden intent, but, practically, we rarely need to do so any more than we need to recognize hidden baseballs or distinguish grenades disguised as baseballs. One shouldn't tie IT definitions to 'intent' if they don't need to, any more than they should tie IT definitions to 'baseball' when they don't need to. But there is no practical reason to be dissatisfied with an IT definition needing intent. Your strong objections to it are based more on imagined reasons than on practical ones. As far as 'imagined reasons' go, I'd agree those are likely a common cause for holy wars tied to psychology.
- You claim intent is sufficient even if not perfect. However, you've offered no evidence of such.
- I've offered reasoning for this claim by several different approaches (including computation-theoretic and analogy). Intent may not be necessary, but I've yet to see any alternatives to 'intent' that are sufficient to the demands of the requirements.
- I think most would agree that given a choice, definitions should avoid relying on "intent" if they can.
- I agree with this conclusion. However, our reasons apparently differ. I agree for the exact same reason that I believe definitions should avoid relying on "baseball" if they can. Unnecessary dependencies introduce accidental complexity. Of course, this is all heuristic and hardly a set-in-stone rule... if a definition relies on "baseball", it (tautologically) does so. There are no 'tests' definitions must pass; there are only tests to estimate whether they are more or less useful. In any case, if you remove a dependency and it breaks the definition by making it useless in practice (as does removing 'intent' from 'adder') then that is evidence of 'need'. You might be able to replace 'intent' with something else ('purpose', for example) but unless you can do so while eliminating the complaints you possess against 'intent', it's pointless to quibble. What matters is that "intent" works, "intent" is usable, and "intent" can't be just excised from the definition of "adder" without significant negative consequences for usability of "adder".
- We want to analyze things, not the observer or the mental state of the inventor, to determine what things are or aren't. For one, automating intent determination has proven very difficult, often beyond the best AI.
- This last point is where you're wrong. Automating intent determination hasn't proven particularly difficult. We do it all the time: detecting signals in white noise, filtering algorithms, TypeInference, gestures and keyboard macros, drag-and-drop, protocol handshakes and yes/no message boxes to confirm anticipated intent, etc. No matter how you 'object' to reality, the truth is that we rely considerably on detecting intent, and we're good at it. It is true that these are all heuristic, imperfect, specific-case deals. But need hasn't arisen for detecting intent in the most generic case any more than need has ever arisen for detecting baseballs in the most generic case (e.g. in a ring, orbiting Saturn, disguised as a meteor).
- Only when the model/environment is sufficiently constrained, and even then it's not always perfect. If I use integer A when I meant to use integer B, the compiler may not have a clue that I made such a mistake.
- True. But all detection algorithms are 'imperfect'. The existence of false positives and false negatives is considered an acceptable characteristic for any detection system, be it for baseballs or intent. In any case, detecting the presence of intent vs. predicting specific intent are about as different as detecting a baseball vs. predicting the final stopping point of a baseball in flight. A device that can perform the former isn't necessarily capable of the latter. Conflating the problems increases conceptual complexity in unnecessary ways, especially since you're reluctant to even admit to 'detecting' intent, much less 'predicting' the properties of a specific intent.
- I'll also note that your complaint involved verification in particular. Failing any verification requires recognizing a discrepancy generally of the form (a statement of what is + a statement of what should be). Examples: (actual code structure + language syntax), (unit + unit test), (value + inferred type-descriptor), (command sequence + StaticAssert), etc. The ability to identify a discrepancy requires that there be a redundancy under normal conditions, though this redundancy can exist in context (e.g. type inference) rather than content (e.g. manifest typing). Even you must grant that identifying two intents of specific nature in a program and further recognizing them as incompatible is, overall, a strictly more difficult problem than probabilistically inferring intent vs. accident (though we still do it, thus such things as type-safety, lint-style tools, unused variable checking, etc.).
- I'm not sure what your unit test example is meant to illustrate. I placed a related issue in DefinitionsThatRelyOnIntent.
- As far as: "being objective and useful are not necessarily the same" - this is true. But the value of any classifier is external to the recognizer itself, instead being determined by its influence on decisions. A tool that recognizes landmines would potentially be of a value measured in lives saved. A tool that recognizes baseballs (distinguishing them from clumps of grass and base mats) might be attached to a robot to save a little human labor picking up after batting practice. On the other hand, either of these tools could be useless if left to gather dust. A tool that measures 'intent' is similarly only as useful as its application.
- Intent cannot "be made" objective; it ALREADY IS objective. Understand this, top: whether a concept is objective is COMPLETELY ORTHOGONAL to whether you can detect it. "Objective" (as opposed to subjective) means only that it is independent of the observer or thinker. Intent is objective because your intent does NOT depend upon my thoughts or observation of your intent.
- You are assuming a God-eye view of the universe. Perhaps "objective" is relative to whether one is a god or not. But for practical purposes, I shall assume the participants are not gods.
- WRONG! I am making no such assumption. "Objective" only requires that the property be independent of the person observing or thinking about it. Period. If you wish to be practical about it, this simply means that "objective" properties are there whether you (or anyone) detects it or not. Godhood is not required. Another practical feature: in the observation of 'objective' properties, you can be "wrong" or "right"; i.e. you can answer "Is there any Mildew?" or "Was this performed intentionally?", but your answer may be in error. This is as opposed to "subjective" properties (like "good behavior"), where you can't really be in error (unless you deny your own thoughts) because the property itself depends on what you're thinking. So, if you can conceivably be in error, the property is most likely objective, and vice versa. Perhaps a god-eye view of the universe would keep you from being in error, but I do not assume or require that your detection of objective properties (like the presence of "mildew" or "intent" or "baseball") be perfect. Indeed, detection of real-world objective properties is USUALLY imperfect.
- EVERYTHING is "objective" according to your working definition. For example, your opinion is independent of my observing of you; therefore, your opinion would always be "objective" by that reconning. Your gift for turning every definition you enounter into a watered-down TautologyMachine is cemented.
- You are (as usual) WRONG! From the proper definitions of subjective/objective (i.e. those that come from the dictionary, and the sort I prefer to adopt... unlike you, who prefers to make shit up), whether a property is "objective" depends on whether it is a property of the thinker/observer or a property of the object thought of/observed. It is true that "my opinion is XYZ" is an objective statement - its truth is not subject to your thoughts. However, "XYZ", by itself, can still be subjective. i.e. "My opinion is that top is a foolish boy" might be true, but "top is a foolish boy" is subject to one's criterion for "foolish" (and "boy").
- See ObjectivityIsAnIllusion.
- And, the algorithm for a person determining "good" in your example can have the same arbitrariness as a person's "baseball" determination algorithm.
- You are once again insinuating your ridiculous belief that "algorithms" are required for definitions. I challenge you to actually propose and defend that statement rather than use repetition, insinuation, and sophistry to try to make the audience (me) believe it. Sure, an algorithm for "goodness" can be arbitrary, but unlike an algorithm for "baseball", the algorithm for "goodness" really can't be judged 'wrong'. However, if someone used (as arbitrary criterion): "any object I can fit in one hand and throw more than 60 feet I shall call a baseball", they would be (quite objectively) WRONG according to the accepted concept referenced by the word "baseball". After all, throwing knives and grenades and apples and oranges also meet the criterion of fitting in one hand and being more than 60 feet throwable... all of which would be "false positives". Now, no matter what algorithm they use, there will likely be some false positives and false negatives... but the -ability- to have false positives and false negatives is sufficient evidence to show that the concept is independent of the algorithm (otherwise there would be no wiggle room for judging "false"). Anyhow, definitions, by nature, attempt to utilize a language to describe a concept. I'd agree that you -can- use algorithms to describe a concept. But the vast majority of definitions are simply abstractions (that might imply but certainly don't explicitly tell you how to identify them). Algorithms are not necessary for definitions.
- I am not claiming algorithmization is necessary to qualify for a definition. However, without it, it's difficult to test the validity, usefulness, and per-instance fit of the definition. Lack of algorithmizationability (word?) is usually a smell that somebody has chosen a bad definition or has not refined it enough to be useful. You appear to be saying essentially, "Definition X gets to be stupid and weak, therefore mine should also get to be stupid and weak."
- Since your memory seems to be failing you: I've made it quite clear, top, that I don't believe a definition is practical (meaning: usable in practice) unless one can construct a computable algorithm (i.e. one that terminates) that can classify whether a referent "IS" or "IS NOT" an instance of a particular definition... and do so with a very reasonably high rate of success. I'm assuming that what you mean by "algorithmizationability" is exactly that - the property that you can construct a computable algorithm by which to classify referents. However, you should be aware, top, that it is often quite easy to prove that one can construct an algorithm without actually providing one. Further, the provision of an algorithm does not make it "THE" official algorithm... but, unless the author intentionally provides an algorithm with obvious failings, it damn well sure looks like one - which is NOT a good thing in an argument, since then the algorithm gets attacked despite the attack being irrelevant. The failings of any particular choice of algorithm are not the failings of the definition. Only descriptions of failings common to ALL POSSIBLE algorithms constitute failings of a definition.
- As far as "memory failing you", if you used proper references, I wouldn't need to rely on it. Don't write just for me, write for any wiki reader who may not be aware of our argument history. As far as an algorithm, if one can be provided, even a working algorithm, it should because that makes it easier to test. Definitions (and their representations) should be tested and studied to see if they have leaks, unintended results, or contradictions. If you leave it in napkin scratch form, it will not likely be subject to such testing. And algorithmic form is more conducive to more wiki readers than formal academic-speak.
- When you practice what you preach, perhaps I'll listen.
- Me aside, so you admit you can do it better? That's a start.
- "I can always do better" is a motto of mine. It's because I'm an EternalStudent. I encourage you to become one, too.
- Can't say I see many signs of it from you.
- Can you honestly say you've been looking?
- (This is as opposed to things like "good behavior" - what is 'good' behavior depends on who you ask, and is, thus, subjective.) And your argument that it is harder to recognize intent than to recognize a baseball is one that deserves extreme skepticism. To start, consider where intent is made plainly available: intent is extremely easy to recognize where it is either codified in the semantics of a programming language or available for straightforward deductive implication (as per TypeInference), for example, and it is quite easy to identify if described in comments.
- That is claimed intent, not verified intent.
- Indeed. But obscured intent with misleading signals is harder to discern... and was discussed -below- (since you so unceremoniously split the discussion with your premature injected comment). "To start, consider where intent is made plainly available: [...]" - obviously you couldn't manage this simple directive for even one complete sentence. You shouldn't be so reactive; give yourself some time to properly read and think before you open your mouth (or 'EditText' window, in this case).
- I have no idea what you are talking about here.
- Beyond that, in the general sense, one can recognize intent with high probability of success by the very simple approach of comparing the probability of an event against the probability of it happening on accident. This simple approach has been used to recognize file-types by looking at their contents, to identify covert signals in communications systems, and a number of other useful tasks that are associated with intent.
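(A minimal Python sketch of that file-type example; the magic-number byte values are the real ones for these formats, while the structure of the code is purely illustrative:)

 # Guessing a file's intended format from its contents: a few leading bytes are
 # astronomically unlikely to match a format's "magic number" by accident, so a
 # match is strong (though not deductive) evidence of intent.
 MAGIC_NUMBERS = {
     b"\xff\xd8\xff": "JPEG",
     b"\x89PNG\r\n\x1a\n": "PNG",
     b"%PDF-": "PDF",
 }

 def guess_format(data):
     for magic, name in MAGIC_NUMBERS.items():
         if data.startswith(magic):
             # A random byte string starts with an n-byte magic value with
             # probability 1/256**n - hence the high confidence on a match.
             return name
     return None    # no evidence of a known intended format

 print(guess_format(b"\xff\xd8\xff\xe0" + bytes(100)))   # JPEG
 print(guess_format(b"just some random bytes"))          # None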
- For one, the probability values themselves are subjective. One algorithm may give something a 96% chance of being a JPEG, the other 97%. We can select an agreed-upon benchmark, but you cannot vote something objective. The benchmark agreement is merely to create human harmony so work can get done. It is not necessarily meant as a final universal declaration of something. Working with universal truths is often wrought with problems such that in practice it is usually easier to find out what givens are agreed upon and then work from those. That way we don't have to start arguing about whether a chair exists all the time. One can say, "assuming this is a chair, and this is a foo, then we can logically deduce that..."
- What "high probability of success" means is that the algorithm says "JPEG" and has a 96% or 97% chance of being correct as measured against the known origins of the file; the algorithm wouldn't be required to provide the percent chance - that would be something a user would need to build over time. And you are incorrect: probability values and probability measurements are not subjective. When you flip a normal coin 100 times and get 42 heads and 58 tails, that doesn't make the probability of the next flip being tails 58%. It's still at or very near 50%... and that percentage chance doesn't depend upon what you think of it. I do agree that you cannot vote something objective. Properties are either objective (independent of observers) or subjective (dependent on observers) independently of a vote. Now, you might claim that all properties are subjective, i.e. that no properties exist without observers. I'd be quite willing to entertain such an idea (being an epistemological solipsist). However, even if I accepted it, I can't excuse your utter fallacy on these issues: "intent" remains neither more nor less objective (or subjective) than "baseball".
- And working with universal truths isn't particularly difficult. You do it exactly as you stated: "assuming x, y, z, using logic L, deduce a, b, c" can be a universal truth (much the same as "assuming x, using classic logic, deduce x"). However, one always determines the presence of a "chair" or "baseball" or "intent" with mechanisms utilizing evidence and probability, primarily utilizing induction and abduction, not deduction (though you may use deduction in later inference, after assuming probable truths). The relevant probabilities for induction are: (a) the probability a property is absent, given the detected evidence for its presence, and (b) the probability a property is present, given that certain evidence that should be suggested by its presence has not been detected. One can usefully 'confirm' a guess by seeking evidence that is predicted -should- exist that was not utilized to perform the induction... i.e. confirmation of a prediction that is the consequence of a hypothesis. That is, at least, the scientific method for confirming objective knowledge.
- You still started out with "assuming that...". The weakest-link principle would result in a subjective result.
- The reasoning supports that "intent" is just as objective as baseballs, chairs, lengths, speeds, reliability, engines, trucks, etc. If you're going to play HumptyDumpty and emit sophisms claiming that all these things are "subjective", then you aren't in a logical position to argue that 'subjective' things are necessarily "tied to psychology".
- We don't have *useful* and *reliable* ways to measure "intent" in many circumstances. One often has to play Sherlock Holmes. If we agree that something represents "intent", then it's not an issue in the practical sense (see below). But if there is a dispute, we cannot rip neurons out of people's heads and have to use indirect means, which are pretty much guesses.
- So? We also don't have *useful* and *reliable* ways to measure "baseball" in many circumstances. One often has to play guessing games with criterion development. And everything we observe is by "indirect means, which are pretty much guesses". Literally. Light reflected off surfaces stimulating rods and cones then interpreted as lines and blobs of color with further interpretation as faces, chairs, and baseballs is quite indirect. Further making estimates of distance and dividing it by estimates of time to produce estimates of 'speed' is a step more indirect. In fact, the analysis of "intent" via indirect means only reinforces the notion that "intent" is just as objective as everything else on that list.
- Like I said elsewhere, if nobody disputes whether something is a baseball, it does not matter in a practical sense. If and when it becomes disputed, then more rigor will be expected (size, weight, materials, etc.)
- I agree with what you say here. More information and greater precision helps resolve disputes that are caused by incomplete information (as opposed to disputes caused by differing definitions, ideals, assumptions, etc.). But I'm not sure why you're saying it, as it isn't obvious to me how it counters anything I've said. At a guess, do you believe that collecting size, weight, materials, etc. information about objects to better discern whether they are baseballs is somehow direct? or that ripping out neurons and examining them would be direct? Neither of these are direct. Even size, weight, etc. are collected, inferred, and communicated indirectly, then synthesized inductively, abductively, and through deductive models. The tools used to measure weight or guess at material composition are based on the same sort of heuristic approaches as one uses to guess at intent in a segment of writing. These are all "indirect means", and even collecting more evidence for a better inference is 'indirect'. There is no essential difference between baseballs and intent on this issue.
- The potential fuzziness for "intent" is far greater than for "baseball". True, I have no clear means to measure fuzziness, but most will agree that it's easier to find means to resolve disputes about baseball-ness than intent-ness. Unless it is the only choice left, it is a factor one should avoid tying a definition to. If you have a choice between using a tree or a fog bank as a reference point, which would be better in most cases? Do you really *have* to tie the definition of, say, "classification" to intent?
- Your notion of 'potential fuzziness' and even your belief about what 'most will agree' is speculation, so I'm going to ignore it. As far as spatial references, one would generally choose a tree before a fog bank... but for simple reasons: smaller volume (offering more precision), relatively fixed location, and (usually) greater longevity... but one might say different in a large forest with a relatively small and consistent fog bank; in either case, 'fuzziness' needn't come into such a decision. As far as your last question there: no. One doesn't *have* to tie "classification" to "intent"; after all, near-synonyms such as "purpose" and "goal" would serve equally well in solving the various problems 'intent' was brought in to solve. There may be other solutions, but neither you nor I could think of any in the above discussion. Because 'classification' actions are, by nature, ultimately 'for purpose of' class-based decision making (aka branching), I'm fairly certain that 'intent' (or synonym) is fundamental to any proper definition of 'classification'.
- As far as speculation, any counter claim will likely be speculation also. Thus, it's fair game for at least consideration purposes.
- It's fair game for fiction and brainstorming, perhaps. As a point in an argument, though, speculation is worse than wasted breath: it makes you look like you're incompetently (if not more maliciously) waving your hands and inventing facts to support your arguments.
- BickerFlag: I did not "invent facts". If it was speculation on my part, it was clear it was speculation (or at least not purposely mislabeled). If you see speculation that could potentially be interpreted as something more rigorous, then point out the SPECIFIC words that produce the potential confusion, rather than sweeping generalizations or accusations. Learn to solve problems without having to be rude. Your accusations are embellished with exaggeration and implications. Generally if I encounter speculation that I don't care to hear, I simply ignore it and move on. This is the civilized way to handle it. Further, there is no rule on this wiki that speculation is not allowed. Thus, I am not breaking any rules that I know of. If you make its existence an issue, it will only trigger angry exchanges. In short, it is rude. Stop making mountains out of mole-hills. I deny any damned "hand waving" on my part and am sick of that phrase.
- When you state any point in an argument, it is inherently implied that it has weight, and this applies to your speculation too. You need to explicitly label it as speculation if you don't want it to be interpreted as something more rigorous. But speculation has no place in argument. It's "hand waving" no matter how you deny it, and you deserve every accusation you receive on that subject. If you're 'sick of' hearing about your "hand waving", then I suggest you stop doing it. If you were 'civilized', you'd not have been doing it in the first place, and you'd have stopped the first time you received a warning - many years ago. It is you that is being rude by violating the implied rules of any intellectual forum. At the moment, you're just a bastard charlatan and fraud who has been caught red-handed and is attempting to defend his sophistry and fallacy as though it were righteous. It's disgusting.
- Most people would not be bothered by the way I worded it, based on my past experience. If you would sign your damned content, then I'd make an effort to adjust my style in the areas where you are more sensitive. Note that your "implied rules" is speculation on your part. Should I be offended and fly off the handle like you? Hypocrite! And take your type-safety junk science and shove it! I don't have to live with fast and loose insults from a drama queen like you. You cannot produce realistic coded examples of betterment because you are full of self-deceiving shit! --top
- People have accused you of "hand waving" plenty often before I ever joined this forum: that you continue to do so "based on your past experience" is hardly indicative of real introspection. As for hypocrisy: what you're doing right now is flying off the handle because I'm violating what you believe to be "implied rules" about civility. And my "implied rules" is more than speculation: there are rules to fair and reasonable debate and among them is to never present lies or speculation as though it were a valid point. That you practice otherwise is the art of a fraud.
- The "handwaving" phrasology is recent. And, I am flying off the handle because you called me a "charlatan bastard". If you said that to my face one of us would be in the hospital and the other in jail. As written, my statement is not misleading to normal people. It's not my fault that you are a freak. And, why not simply offer an alternative ("fixed") wording instead of insulting me? Your solution to every conflict is name-calling.
- [I'm not surprised your correspondent has popped a gasket, Top. You have a -- and I mean this as a helpful observation, not an attack -- way of debating that is particularly frustrating; your arguments are repetitive, frequently superficial, often demand extensive effort on the part of your correspondents to illustrate things that most consider obvious, you're agonisingly persistent in spite of contradictory evidence, and have the pervasive (and perverse) flavour of someone who has absolutely made up his mind, will brook no opposition, and believes he should be considered the definitive font of all computing knowledge.]
- Many of those are true of my competitors, including repetition and a frustrating style. If my arguments are repetitive, then suggest topics to factor them into one place. I tried to do that once, but some balked at "creating too many topics". As far as "strong contradictory evidence", I've seen no such thing. You guys mostly mistake personal preference for universal truths or pound on one HobbyHorse piano key and mistake it for the whole orchestra. You don't show them as universal truths in any rigorous way though. But at least I don't insult people directly as a first resort. That is NOT something I am guilty of. --top
- [*Sigh* You obviously have no insight into your own behaviour. Yeah, yeah, I know -- we don't either. Don't bother writing it.]
- Obviously we have very different ways of viewing things. That's about the only reason I stay in these debates: I am fascinated by people who think differently than me. It's just that it's damn hard to communicate with such people because their root mental framework is so different.
- [That may well be true. Some of the discussions here have the flavour of a reflexologist criticising Western medicine practitioners, or an Intelligent Design believer addressing a group of evolutionary biologists. At best, there may be some (probably tense) superficial talk in areas of grudgingly-agreed commonality, but the differing axiomatic bases for our respective views -- combined with your lack of formal ComputerScience background -- precludes most meaningful and in-depth conversation.]
- I believe in practical demonstrations as a way to vet theoretical ideas. Your car design must win races or endurance tests, not merely look great on paper. If that truly is a flawed idea, then maybe I really am the bad guy you paint me as. --top
- [But various practical demonstrations have been provided, most recently on CrossToolTypeAndObjectSharing with the Complex number and Tree examples, and going back over numerous realistic OO examples like TopsQueryResultSet. We do presume, of course, that you can appreciate simplified examples and extrapolate them into the "real world" yourself. It would be quite unreasonable to expect any of us to write a full-blown ten or hundred thousand line application merely to illustrate what should be an obvious point.]
- "Practical"? Bwaaa haa ha ha! You a funny man. (TopsQueryResultSet ended up being a systems software example.)
- TopMind considers systems software examples to be impractical? Or perhaps he is saying that databases and data-driven design are 'custom biz app' only and shouldn't be made better for systems software?
- [Top, you write business applications, and you don't deal with ResultSets??? The mind, as it were, boggles. One is forced to wonder if what you do can be considered "programming" in the usual sense. At my place of work, there is a roomful of people that write forms, queries and reports for the large, well-known off-the-shelf information system used for ERP, HR, CRM, etc., but they're not considered programmers -- they're the "<productName> Configuration Team".]
- Where did I say I didn't deal with result sets? And please take any long reply to that topic, not here. And I didn't say that systems software examples are "impractical". I merely am not challenging your pet techniques in that domain. If your claim is universality, then you need to show them beyond SS.
- [Why is the universality of my example not self-evident, in this case? How is dealing with a ResultSet uniquely system software, when you (as an admitted non-systems-software programmer) deal with ResultSets? In short, where do you draw the line?]
- They are dealt with on the systems software side and the app developer side in different ways. As an app developer, I am not going to write an ODBC driver. That would be an inappropriate use of time and specialties. Note that in my Exbase days, I'd sometimes store results in temporary Exbase tables. But, that's a different animal than the examples in TopsQueryResultSet that deal directly with RAM and other low-level stuff (although they may reduce the need for them).
- [Curious. When I created the example on TopsQueryResultSet, it was, in part, inspired by the ability to store results in temporary ExBase tables. I see creating such facilities as very much a component of end-user application development (when needed, of course), but then I tend to blur the distinction between systems programming and application development, and have often done the two in mutually-supportive conjunction.]
- As far as "purpose" and "goal", they have the same issues as "intent". If {quote} 'intent' (or synonym) is fundamental to any proper definition of 'classification' {end-quote}, then it will be a problematic definition with no consensus and/or difficult ways to actually measure it. [I need to revisit the grammar there.] I invite WikiZens to find an alternative. --top
- You keep saying it is problematic, but I'm really not inclined to agree. Determining and measuring 'intent' can be performed by the same mechanism one uses to obtain confidence in any other inductively or abductively inferred truth. For example, the evidence on this page readily supports that you're quite intent on rejecting the use of 'intent'.
- Usually only with mass trials of various combinations, or very indirect reasoning with lots of layers.
- So you keep saying, as though your regular repetition of it makes it true.
- Now, recognizing intent where it is obfuscated by misleading signals is more like trying to recognize a camouflaged baseball mixed in with a tubful of other balls - or, worse, detecting intent obfuscated by a bunch of signals designed to look like the real intent, which is like detecting the real baseball amongst a bunch of grenades dressed to look like baseballs. This isn't an easy task whether you're talking about recognition of intent or the baseball. But I think you're treating 'intent' as a much greater problem in practice than it actually is; it is very rare that IT folks deal with computer systems that need to recognize hidden intent any more often than they deal with computer systems that need to recognize hidden baseballs from camouflaged grenades (or, more practically, hidden faces). There are no MindControlWithDerrenBrown HCIs that need to "cold read" people yet, though that may happen - the ability to recognize intent would be a vast stride forward in HCI and in security systems.
- It is not a real problem because most don't get caught up in definitions. They just use something they find useful. It is only when somebody INSISTS their definition is the ONLY logical way to view something that it becomes an issue and the dissection needs to be careful and precise. I perfectly agree that subjective things can be useful and objective things can be useless.
- I do consider support for subjectivity to be quite useful... e.g. the ability to support multiple 'views' of the same data. It's something more ObjectOriented code should support. Nonetheless, intent is not subjective. No matter how you attempt to view it, intent is something attached to an object... not a 'view' of that object.
- Intent being objective requires a god's-eye view of the universe. Even if it is objective (I'm skeptical), we don't have "access" to it in objective form. We've been over this already in ObjectivityIsAnIllusion. If you tie a definition to something we as mortals cannot objectively test/see, then it risks being a problematic definition. As it stands, we currently have poor tools to reliably measure "intent".
- What sophistry! According to you in ObjectivityIsAnIllusion, any objective property "requires a god's-eye view of the universe, and we don't have 'access' to it in objective form", and "risks being a problematic definition". As to whether the tools we have to reliably measure intent are "poor" or not - that is subjective.
- In a strict sense, yes. In a practical sense we tend to agree on most things such that they are not an issue. If both parties agree that a given object is a "baseball", it's not an issue in the practical sense. If they don't, we have professional judges and juries to settle disputes. In short, we have practical ways to work around our subjective world.
- As far as "ranking" choices for coupling a definition for a particular concept: no unnecessary couplings should be included, so ranking should be irrelevant. Intent is, for better or worse, fundamental to representation and communication and associated directly with the very concept of 'goal', making it fundamental to ALL non-random computational processes.
- I may agree with "necessary evil", but intent is still a messy, subjective (to non-deities) thing to tie a definition to, creating problems.
- I don't consider it "evil". Just "necessary". And intent is objective (being just as "subjective" as baseballs, chairs, and mildew when regarding real-world intent, and just as "subjective" as integers, numbers, and triangles when discussing intent codified within formal programming-language semantics). As such, intent is NOT a problem in definitions. And one doesn't need to be a deity for it to be objective any more than one needs to be a god to identify or describe chairs and baseballs. Your rejection of intent and purpose is founded in your own inventive imagination rather than in reality or experience.
- Experience? My experience tells me that EverythingIsRelative (or at least that humans don't have access to the central truth vault).
- Your experience doesn't tell you that EverythingIsRelative. That would be your extremely selective memory of your experience, cherry-picked for just this conversation. When you last drove in your car, did you or did you not strike the car in front of you while pulling up to a light? Is it relative? Are they (relatively) entitled to an insurance claim? If they do make a claim, do your insurance rates go up, or would that be relative too? It is true that humans don't have direct access to epistemological truth (because our senses, and our reasoning over them, and sensory synthesis are all indirect), nor do we have access to the whole of epistemological truth (because our senses and our reasoning are limited spatially, temporally, and selective to only a few stimuli), but to say that we don't have access at all is a statement more of faith than of reasoning.
- Sometimes our approximate models of a central truth are good enough to get us through life. But more to the point, our software models are often semi-divorced from real-world objects, and the relativism is thus magnified. --top
- Can you explain to me the logic by which you came to the conclusion, "and the relativism is thus magnified"?
- See SoftwareGivesUsGodLikePowers. The less something is connected to the real world, the less we can compare it to something concrete outside the model. Other than finding internal contradictions in the model, there is little or nothing to say the model is "wrong" if there is minimal need to tie it to the real world. There are multiple paths, perhaps infinite, to convert input X to output Y.
- Sure, if you don't connect your model to the real world then you're inventing fictions... castles in the software clouds, so to speak. One might look at InformLanguage for examples. I can't see how you came to call this "relativism", but I'm uninterested in a definition debate at the moment. More importantly, software written for any particular purpose will have a testable output Y for input X. This seems to contradict your premise that "our software models are often semi-divorced from real-world objects".
- No it doesn't. That's why I used "semi". Yes, the results put constraints on us, but otherwise they leave a hell of a lot open-ended as far as how to achieve results.
- The word "semi" means "half". Which half do you mean is divorced from reality: the inputs or the outputs? As far as your "leaves a hell of a lot open-ended as how to achieve results" goes, that's true even outside of software. You can make plans to obtain some result or another, and there is a "hell of a lot open-ended as to how to achieve results", even if all you're planning is to make a peanut butter and jelly sandwich. That doesn't give you god-like powers. Nor does it mean that the plan is divorced from reality. I'm starting to think you're really attempting to push some sort of sophism here.
- SoftwareGivesUsGodLikePowers is a topic name, not a claim for this topic. You appear to have mistaken it for a topic-related claim. As far as peanut butter sandwich making goes, it is much more closely tied to the real world than software. We can measure the labor cost, material cost, electricity cost, etc. In that sense, it's more comparable to the rocket analogy than the software analogy. Unless it introduces a new element that the rocket example cannot, let's please stick with the rocket analogy. (MacroAndMicroRigor probably has the most detailed version of it.)
- We can measure all sorts of costs for software, too. Your claim that the "peanut butter sandwich making is much closer tied to the real world than software" doesn't seem to have a valid foundation.
- I'll leave that debate in MacroAndMicroRigor.
Knowing intent (in this case the intended representation for integers) solves this otherwise undecidable problem. As mentioned above, where 'intent' as to this representation is not specified directly, it can be obtained via non-deductive inference (induction, abduction, evidence, probability - one traditionally expects certain bit-orders for integer representations because that's how one has seen them before).
- No it doesn't. It merely trades one imperfect technique (testing I/O) for another. Perhaps they reinforce our evidence body, but it is hardly a perfect replacement. Intent is low on the EvidenceTotemPole, or its definitional equivalent.
- Knowing intent solves the problem of deciding whether something is NOT an adder. Testing I/O does not (and cannot) - even if you test I/O forever, you cannot know whether you just haven't determined the correct representation for integers yet. And stop making shit up. Means, Motive, and Opportunity are all quite high on the "EvidenceTotemPole" when it comes to crime. Decision, Representation, and Semantics - all things that relate to psychology, and all things that codify intent - are quite fundamental to computation. At the very top of the EvidenceTotemPole is "symbolic" logic, which relies upon "symbols", which themselves are "representations", which are thus subject to the "intent" to represent something. Perhaps you in your extravagant fallacy believe that intent is meaningless, but most people, both intelligent and otherwise, do not.
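(A minimal sketch of the asymmetry being argued over here, assuming a hypothetical black-box candidate and an assumed intent of unsigned 8-bit wraparound addition - a finite batch of I/O tests can turn up a counter-example, but passing every sampled case proves nothing about the untested ones:)
 # Hypothetical: finite I/O testing can refute a claim, but never conclusively confirm it.
 import random

 def looks_like_8bit_adder(box, trials=10000):
     for _ in range(trials):
         a, b = random.randrange(256), random.randrange(256)
         expected = (a + b) % 256       # assumed intent: unsigned 8-bit wraparound addition
         if box(a, b) != expected:
             return False, (a, b)       # definitely NOT an adder (under that assumed intent)
     return True, None                  # merely "no counter-example found yet" - not a proof

 # A deliberately deceptive box: correct everywhere except on one pair.
 def sneaky_box(a, b):
     return 0 if (a, b) == (200, 201) else (a + b) % 256

 print(looks_like_8bit_adder(sneaky_box))  # usually (True, None), yet the box is broken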
- Fuzzy and useless are not necessarily the same thing. I am only pointing out fuzziness, not claiming uselessness. And, crime court is hardly a standard we want computer science to emulate.
- Intent is not fuzzy, and I/O testing is useless. It can take infinite time to decide that something is not an integer adder, and can therefore take unbounded time to decide that something IS an integer adder. And Computer Science would do very well indeed to learn to understand such things as 'Motive' and 'Intent' as well as a crime court; it would be a major boon to HCI at the very least, and would probably open a whole new programming paradigm (GoalBasedProgramming).
- The infinite testing issue is addressed above.
As a note, I said nothing of human intent. When a compiler transforms a higher-level programming language into a lower-level one, the intent to which the compiler pays attention is well codified in the semantics of the higher-level programming language. Even higher-level intent and purpose can be inferred from the comments and overall structure of the code... e.g. whether or not the code is supposed to describe an FTP server. This approach towards inferring intent can be applied even to systems that were NOT made by humans. For example: what is the purpose of the gastrointestinal system? Why do bears hibernate? Etc. But often, inference isn't heavily required. Intent can be described and codified, turned into specifications, or specifications for specifications, or specifications for specifications for specifications. Intent does allow for hypothetical infinite regression and (in practice) unbounded arbitrary regression. Why? Okay, but why? Okay, but why? (...) but eventually you'll always reach a "just because". A codified intent (i.e. a specification) is likely to be far more precise and far less mutable (though no less "objective") than an intent held wholly in someone's head.
And, if an intent or codification thereof sayeth: "make unto this object a decision as to whether it be IN or OUT" (e.g. in the group of 'TRUE' things, or not), then it's clearly an intent to classify said object.
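(To make the "codified intent" point above concrete - a minimal sketch, assuming a made-up toy expression language in which the intended meaning of each form is written into the evaluator rather than held in anyone's head:)
 # Hypothetical toy language: the intent behind each form is fixed by its semantics.
 def evaluate(expr):
     op = expr[0]
     if op == "lit":    # ("lit", n) is intended to denote the integer n
         return expr[1]
     if op == "add":    # ("add", a, b) is intended to denote integer addition
         return evaluate(expr[1]) + evaluate(expr[2])
     if op == "neg":    # ("neg", a) is intended to denote negation
         return -evaluate(expr[1])
     raise ValueError("unknown form: %r" % (op,))

 print(evaluate(("add", ("lit", 2), ("neg", ("lit", 5)))))  # -3
A compiler or interpreter for such a language pays attention to exactly this codified intent and nothing else.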
RE: "But human intent in a definition renders it a subjective definition. Otherwise, an alien compiler [...]" - I appreciate you presenting your reasoning here. However, to be a pedantic logician who can't stand fallacy, your reasoning doesn't support your statement. If 'human intent' is needed, then 'alien intent', very objectively, doesn't qualify (excepting human aliens). It'd be wiser to say that use of the 'human' qualifier should be avoided except in defining those words that demand it (e.g. man-made). Stick with plain, unqualified "intent" and "purpose" which are quite objective for the reasons I've already described.
I wonder, do you iron your socks?
I wonder, do you ever concern yourself with more than a superficial veneer of analytical thought?
Yes, but not for the sock comment. Anyhow, weaving purpose or intent into a definition is poor form in my opinion, even if it's possible to somehow claim those are objective. One could perhaps argue that there are no alternatives to linking to those, but they are a consolation prize. We'll just have to AgreeToDisagree and move on.
It isn't very intelligent to disagree with an argument just because you don't like its implications.
One can make a definition be anything they please. However, existing and being a usable definition are two different things. Definitions tied to measuring human intentions are ripe for HolyWar status. The less a definition is tied to personal psychology, the easier it is to agree on and measure. It also begs the question: Does the definition of X need to be tied to intentions/psychology? If so, why? Given a choice, it seems logical to avoid/reduce such ties.
One can assign arbitrary meaning to arbitrary sounds or sequences of characters if there are no other constraining factors. However, under the constraints of discussing any particular subject, one is NOT free to "make a definition be anything they please". Very simply, one's definitions determine that which one is talking about, thus if you wish to talk about something in particular, you cannot have arbitrary definitions. And every part of your argument is (once again) misleading and unsubstantiated: (A) "Definitions tied to measuring human intentions are ripe for HolyWar status" - this isn't self-evident, top, and needs more support than your word (I certainly don't believe you; the difference between Murder and Homicide is one of intent. The nature of a tool is one of purpose. Semantics for a programming language are direct codifications of intent and purpose.) (B) "The less a definition is tied to personal psychology, the easier it is to agree on and measure" - this I'll agree with, but find completely irrelevant to the discussion at hand. I've already established above that intent and purpose are NOT tied necessarily to personal psychology. If you're going to argue that the opposite is true, don't rely upon an assumption that it is true. (C) "It begs the question: Does the definition of X need to be tied to intentions/psychology? [...]". So what? Maybe X is necessarily tied to intent, maybe not; it depends on the concept described by "X". I agree that such ties should be avoided where possible (under the general principle that ALL unnecessary coupling should be avoided), but you shouldn't get all fussy simply because the coupling turns out to be necessary. And, for "representation" and "murder" and "classification", ties to intent and purpose are, indeed, quite necessary.
I find it interesting that you use court-room juries as an example, given that I find the jury system the opposite of logic and clarity. But to narrow the discussion for now, can you please elaborate on "I've already established above that intent and purpose are NOT tied necessarily to personal psychology"? What is an example of them being clearly not tied? (See comments above about the adder example.)
I find it interesting that you make shit up all the effin' time; I recall saying nothing of court-rooms or juries. And one example of intent and purpose being clearly not tied to personal psychology: codification of intent and purpose within formalized semantics of a programming language.
Why don't you simply clarify your "homicide" statement instead of acting like I snuck taco hot sauce up your ass while you were sleeping? The communication problem is mostly yours in this case. Accept your fault instead of projecting it onto me.
You silly little idiot. The meanings of homicide vs. murder have little to do with courtrooms and juries; the concepts they represent have been recognized (under a variety of names) since long before juries were invented, and the definitions for these concepts do not depend upon courtrooms or juries. As far as clarity: murder (by definition) requires intent, whereas homicide does not - this is accepted and is not particularly "ripe for a HolyWar", and thus offers one counter-example (among several others) to your previous unjustified statements. Now, I think it reasonable to assume that any moderately educated and intelligent English-speaking person from Europe or the Americas would know such things as "murder" and "homicide", their connection to intent, and the fact that they are defined (and even possible) without "courtrooms" or "juries". But if that reference was beyond your ken, either above your level of intelligence and education or completely outside your culture or vocabulary, then I'll accept fault for the communication problem. Which of those do you admit to?
Again, where's the universal algorithm for determining "intent"?
Again, one is not necessary. You have this notion in your head that a "universal algorithm" is somehow required for definitions, but that's simply untrue. It's another made-up notion on your part, top.
I never claimed it was. You keep implying it, but stop short of outright admitting that you believe this.
Despite what you've inferred, I've never implied an algorithm must exist (especially an "official" or "universal" one). What I've said is that, for a definition to be practical (usable in practice), it must be possible to formulate a computable algorithm that can decide whether or not (don't forget the "or not") something meets that definition. Some definitions (like that for the "halting property") deductively imply that no such decision algorithm can exist - the problem is undecidable - and that would make the definition unusable in practice. The same was true for your initial attempts to handle "adder". However, the definition is NOT the algorithm; indeed, there can be many algorithms that work for one definition, and (when algorithms allow for false positives or false negatives, or have a high resource cost) some may be better than others. But the definition clearly precedes the algorithm. Perhaps, with your tendency to think of things backwards (from implementation to definition, as with TypesAreSideFlags), you find this confusing?
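(A small sketch of the point that the definition is not the algorithm, using an uncontroversial example: one definition of "even", two distinct decision procedures that both satisfy it:)
 # Definition (prose): an integer n is even iff n is divisible by 2.
 # Two different algorithms, either of which decides that single definition.

 def is_even_by_division(n):
     return n % 2 == 0

 def is_even_by_bit_test(n):
     return (n & 1) == 0

 assert all(is_even_by_division(n) == is_even_by_bit_test(n) for n in range(-10, 11))
 print(is_even_by_division(42), is_even_by_bit_test(42))  # True True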
- Well, it's no longer "practical" because the debate has grown finer in scope. Time to pony up and tighten up the meaning of "intent". If a definition doesn't produce controversy, then it is indeed perfectly fine ("practical") for it to stay somewhat vague. But when controversy comes about that requires a more precise meaning, then the definition needs to be refined. The vague definition no longer serves its purpose.
- Now, you will probably argue that I cause the controversy around the def. However, I dispute this in the general case. The usual cause is that somebody implies something is universally/objectively/clearly true or better. I simply come in to dissect such claims, which often requires first settling on base or key definitions. In short, I slay the claims of objective-claiming zealots by demanding rigor. Strong claims require strong evidence. Wishy-washy notions in your head are no longer sufficient. I don't care how strongly you FEEL something in your brain. We cannot dissect and analyze your feelings. (I may be a T.O.P. zealot, but I don't claim it is objectively better in general.) The root cause of a need for a tighter definition is the introduction of a claim of an objective property of software, not me (unless asking for evidence of strong claims is a "bad thing"). -t
I've asked before that you try to establish it on some grounds, and you failed utterly. I've provided counter-evidence to this notion in the course of a previous argument, and it has gone unanswered. Definitions can imply requirements for or properties of an algorithm (e.g. some definitions, like that for the "termination property", imply that no algorithm, no matter how you build it, can decide the matter), but definitions do not imply an algorithm exists. What makes you think that demanding this "again" will make it any more relevant than it was last time you demanded it?
What is "it" in "establish it"? I'm just asking for rigor if you make a strong claim. The form of that rigor doesn't have to be an algorithm nor a formal logical proof. Those are the most common, but you haven't offered an alternative sufficiently rigorous device.
I am confused!!
Don't we have anything else to discuss?
Nope, this is it :-)
Ahh, but why not MostHolyWarsTiedToPsychiatry?. Then think of WherePsychiatryMatters?, just try not to LaughOutLoud.
I'm not sure what your point is. If you mean most debates are caused by nutcases, you may have a point. However, each side thinks the other side is the nutcase. This brings us to the psychology of nutcaseness. There may be some correlation with being a nutcase and holding an intense position.
Why would it bring you to the psychology of the nutcaseness rather than the psychiatry of the nutcaseness?
See also: DisciplineEnvy, WherePsychologyMatters, WhatIsIntent, TopsLaw
SeptemberZeroSeven, SeptemberZeroEight (is this the SeptemberThatNeverEnded?) No, that was 1993, if you follow the reference to the MeatBall page.
CategoryPhilosophy, CategorySubjectivityAndRelativism