(Eventual destination for material in MostHolyWarsTiedToPsychology, which is growing TooBigToEdit)
PageAnchor: Same_Design_Different_Intent
Here's one problem (among others) I have with using intent in a definition (such as "classification" or "types"). We have program A and program B. A is built by Bob and B is built by Mary. A and B ended up being identical, perhaps accidentally. However, Bob's and Mary's intents were very different; they just coincidentally made A and B the same. An intent-based definition may say that A has "types" but that B does not because Mary did not intend there to be types when she made B. I find this silly and potentially very confusing. --top
First, everyone programs with some notion of "types", or at least I've never met anyone who has not. Even in a language where every value is a string, people will have "types" of strings in their minds (e.g. strings representing integers vs. strings representing names). It's just that unityped languages are less expressive because they cannot express this notion or take advantage of it. Even Mary will be programming with types, but she might have different informal 'type' distinctions in mind than Bob and end up producing the same program because she has different 'intents' for what the strings represent. Second, if program A and B are identical, then whatever formalized notion of types they possess must also be identical (unless they're writing the same code but using two different programming languages, which is rather unlikely). There is such a thing as codified semantics and 'intent'.
Anyhow, you can't know the 'intent' of anything except by also knowing its context. For sub-program elements, this is easy: the context is the programming language plus the surrounding program elements. For whole programs, this is more difficult because the intent of the program depends on how you're planning to use it, which needn't be codified in the program... though it is usually fairly obvious (at the macro-level, programs are usually quite specialized in what they can be used for). Bob and Mary could be using their programs for two different things. Bob could be using the program for "classification" while Mary is using it for "transform". The difference is in the intent, which becomes obvious in the greater context of where the inputs are coming from and how the output of the program is used. Given an idea of where the inputs are coming from and where the outputs are going, you can probably make a 95% or better guess as to what Bob's and Mary's respective 'intents' are.
- "TypesAreSideFlags" is about 95% accurate, but that didn't seem good enough to you.
- TypesAreSideFlags is about 5% accurate, which doesn't seem good enough for me.
- Untrue. Roughly 95% of production code uses languages that use side-flags to track types.
- Irrelevant. That's like saying TypesAreBits because 100% of production code using languages involving types at some point uses bits to track types. TheMapIsNotTheTerritory. TheRepresentationIsNotTheRealThing.
- But the reverse is not the case with bits, whereas it generally is the case with side flags. A "type-free" variable has no side-flags, at least not any "types" outside of the observer's mind. (Of course your definition may differ, but that's a different issue than your bit analogy.) --top
- Certainly the reverse IS the case with bits. An implementation of a "type-free" language won't ever have any 'bits' dedicated to tracking types. How do you justify claiming otherwise?
- We don't have to use bits to construct a flag. A machine with mechanical flags can be built. True, we are talking about frequency. I'll see if I can reformulate my statement to make it less "leaky".
- It won't help. Bits is just one example. If you can't get past thinking the representation of a type is the same as a type, your statement won't become any better no matter how you 'reformulate' it.
- But we can only use objective analysis on representations. That is all that will ever exist in the physical world. Your mental world is not objective by definition. By the way, values are not necessarily "types", but are still usually represented by bits. Thus, the bit statement is falsified by at least one common example.
- It seems you're assuming that 'side flags' are somehow better, that those simplistic meta-data annotations aren't ever used for anything but 'types'. You'd be wrong. Side flags are as bad as 'bits' in this regard. If you'd like examples, check out Youtube or the 'Keywords' section on an HTML page or a whitepaper. Or check out any modern IDE/syntax highlighter/debugger. Types might be represented by side-flags, but (just as not all bits represent types) not all such 'side-flag' annotations represent types. As far as "we can only use objective analysis on representations", that's simply untrue - we can, for example, certainly use objective analysis over formal semantics of representation.
- How is "keywords" a "side flag" anymore than the other HTML attributes? I used "side" to mean more or less hidden or "out of the way" from the more visible values. But a "keyword" attribute is no more "out of the way" than any other attribute. And it could be argued that keywords are a form of "types" anyhow.
- Huh? I did not claim or imply that keywords are somehow more on the "side", just that they are on the side and that they are annotations (meta-data). As far as your "it could be argued" comment: just because something can be argued doesn't mean said argument will be sound or cogent.
- Let's hope you remember your own lectures. Note that there are often multiple paths of reason, depending on how one defines or classifies things. I don't automatically assume there is only one path to the "truth".
- True, there are multiple paths of correct reasoning. And while not all of them will lead to 'truth' (due to GoedelsIncompletenessTheorem), none of them will lead you to conclusions contradicting the truth. Your problem is that YOU don't even bother staying on the many proven (aka 'logical') paths of reasoning. You use circular logic, jumping to unsubstantiated conclusions then shifting the burden of proof ("I can't support it but you can't argue the opposite, neener neener"), and HumptyDumpty equivocation, just to name a few. These are fallacies because they have proven time and time again to lead people to conclusions inconsistent with or unsupported by future and past observations. There is more than one path to the "truth" but there are also even more paths to "nonsense" and it seems you favor the latter.
- If you have a clean logic path, it is lost in your meandering diatribes and voluminous insults. I'd suggest presenting it in a cleaner format, kind of like a geometry proof: here are the givens, here are the symbols and descriptions of them, and here is the math based on the givens and symbols presented above.
- I've offered clean logic as to the source of your fallacy, but it is consistently lost on your ignorance, your lack of education, your arrogance, and your further sophistry (you use more fallacy to defend your earlier fallacies). This is what prompts the insults and diatribes. For example, it is NOT a 'given' that representation and that which is represented are the same thing (indeed, the opposite seems to be true). Because of this, your claim that 'TypesAreSideFlags is 95% accurate' on the basis that 'Roughly 95% of production code uses languages that use side-flags to track types' is fallacious and illogical nonsense. It simply doesn't follow: types being represented by side-flags does not make types the same as side-flags because there is no 'given' to allow that logical leap. When I point this out to you, you become pissy, and start waving your hands and making stuff up to defend your earlier fallacy, such as your claim "but the reverse is not the case with bits". You deserve diatribes and insults. Actually, you deserve a HardBan for your disrespect and intellectual dishonesty, but I can't do the HardBan.
- If you document your claim's logic path properly and clearly, then it will stand on its own regardless of any alleged personal faults of mine. If I have a problem with a given or what-not, there is a clear place to reference it. Scattered, notional verbal reasoning doesn't serve that well. Maybe you've worked with people who can decipher your chickenshit scribbling and they spoiled you. I'm not one of them. --top
- I should NOT need to list 'givens' that anybody with a proper education to join a conversation on this forum already knows and knows the reasons for. I should NOT need to basically reiterate the whole of 'logic'. You thumb your nose at education and logic, and that makes you an irrational and unreasonable person who can't understand reason and logic when it slaps you in the face. That is YOUR problem. Not mine. When you start assuming 'givens' that have been disproven and explained in even elementary logic courses (such as your neglecting the distinction between representation and what is represented) that is YOUR fallacy, and merely pointing it out SHOULD be enough, and you SHOULDN'T have even made the mistake in the first place - that you did is a problem on YOUR end. Pointing out your assumption certainly is enough for anyone who is better than an anti-intellectual like yourself. That you expect me to accommodate your particular lack of education is arrogant and disrespectful. You've been too busy thumbing your nose at academia to learn how to profitably think.
- As far as representation versus "being" goes, that assumes a UniversalTruth outside of the model. Inside a model one can call something a "type" and that is a "truth" within that model. If you want to show that it conflicts with a universal definition of "type" or "classification", be my guest. So far you've only entangled with the gray-art psychology of "intent". --top
- Incorrect on two counts. (1) Representation versus "being" is also about truth within models and languages - such as the distinction between syntax and semantics, or between implementation and content.
- Semantics are in the head of the observer. We can only test, measure, and compare implementations. In the observer's case, the "implementation" may be neurons (which we have very limited analytical access to). --top
- Semantics are also in the formal definition of the language and in the language's description of a model. And you are making bad assumptions to defend your other bad assumptions, again, thus proving your tendency to use more fallacy to defend your fallacy.
- Semantics is about meaning, and meaning is in the head. Computers just follow orders blindly, they don't care about "meaning". (Although "meaning" is probably as fuzzily defined as "intent".) Stop waving your hands and start moving your fingers to type up a real proof.
- CPUs and service programs do care about "meaning" by giving "meaning" to and following the "meaning" of the orders they are issued. Semantics (as the word is used in computer science) is a highly formalized comprehension of "meaning". That it happens to be "in the head" or some other program does not make semantics of a statement relative or subjective.
- (2) Representation versus "being" of things outside of an applied model doesn't assume "a" UniversalTruth, but only that there "is a" 'universe' and some sort of association between inputs to the model and the universe such that the outputs of the model will also be associated with the universe. Actually verifying the model is what science is all about. You're the one waving your hands and making bad assumptions here.
- You are waving your hands trying to avoid a clear definition of "intent" by claiming that I am ignorant of your "proof". You have no real proof or definition, just hand wavy excuses to avoid stating it. Either you are not capable of or are too lazy to formalize your claims. --top
- If you don't wish to be a hypocrite, I suggest you avoid calling others 'lazy' until you are enthusiastic to make the occasional BookStop or take a brush-up logic course for no other purpose than to refresh your comprehension of a subject or learn something new... because it takes a supremely lazy, unprofessional slob to not maintain his skills. Programmers are expected to know logic.
- The problem is you, not me. You are just too lazy to present logic in the clean format already described, or are afraid of the scrutiny that doing so would invite. Almost everybody on the web calls the other person dumb when they have no real argument or get themselves into a bind. It's a worn-out excuse. --top
- Oh? The problem is me? Then I look forward to you presenting your future and past arguments and claims in this 'clean' format you describe, such that I'd have a consistent model of how you expect arguments from others.
- It will depend on the definition of words, and we will not agree on them and run into endless LaynesLaw fractal trails. EverythingIsRelative and I'm just the messenger. It's my head model against your head model.
- Back to waving your hands and making up fallacious excuses for your hypocrisy and sophistry, I see. Well, I should have expected it.
- It's not "waving my hands". You ask why I don't use the formal proof format suggested. The reason is clear: I don't claim there is any formal proof that backs my position and only my position because the definitions and concepts involved are subjective or relative. Why is that "hand waving"? Stop trying to turn this back on me: you claim universal truth, you show universal truth via clean logic outlines. If you can't do it, I shall consider you an empty blowhard. --top
- Claiming that you don't need proof of your conclusion because you assume everything involved is subjective or relative is a perfect example of "hand waving". Indeed, it is circular logic combined with playing HumptyDumpty, fallacy of equivocation, and ShiftingTheBurdenOfProof. As far as "universal truth" I have claimed only that "intent" is just as objective as "baseball" and "chair" and "rocket" and "distance" and "time" and "velocity". Like these other things, it is detected inductively and probabilistically but is not defined as a probabilistic entity or pattern.
- Shifting from what? What is the default?
- The default is to say "maybe", not to assume "X is subjective". There's a huge difference. If you aren't willing to prove your claims, you might as well be waving your hands and screaming, "The end of days is coming! The great spaghetti monster will kill you all with golden hammers! If you don't believe me, then it is your job to prove me wrong!". It's just religious bullshit. And. you. do. it. all. the. fucking. time.
- You are the claimer, not me. I made no objective claims and therefore have no obligation to prove diddly squat. (I can offer evidence that EverythingIsRelative, but not objective proof.)
- If EverythingIsRelative, then so must be my claims and therefore by your logic I have no more obligation than do you. But EverythingIsRelative is a stupid self-contradictory lie that is just one part of your religious bullshit. As far as your "I made no objective claims" objective claim, it's a lie.
- From a practical standpoint, you've still not shown a reliable way to define types and classification. If we have to play Sherlock Holmes against the brains of language designers, then we have an ugly situation. It's as useful as replacing snow with shaved ice so that we can claim "there is no snow here". --top
- From a practical standpoint, you've been waving your arms and spouting nonsense like 'ObjectivityIsAnIllusion' and claiming that distances, measurements, time, snow, baseballs, etc. are all 'subjective' just so you can say the same of intent, never mind your contradiction of the typical usage of English. From a practical standpoint, your complaints have long been sunk. From a practical standpoint, 'intent' and 'classification' and 'types' are all sufficiently well defined for real use in real conversations, models, and mathematics. From a practical standpoint, TopMind is a raving lunatic.
- Bull! From a practical standpoint you've made no semi-consensus way to determine what is NOT a classification. Everything in a language or app can easily be viewed as a classification by your "standards" (cough). You haven't even offered enough rigor to nail Jello to a cloud using melting gummy-nails.
- And you are back to your fallacy, implying without proof that "viewed as a" is the same as "is a". What I've offered is rigorous if you don't assume illogical things. You assume illogical things.
- You haven't shown is-a, was-a, shitta, fuzzy-waza-beara, etc. If you have real logic, then put it in the form described. Let me restate that for the accusationally narrow: The range of valid interpretations of the given definitions rules out very little.
- None of my assertions require or assume that 'can be viewed as a' be 'is a', so I have no need to assert or defend it. Indeed, I'd attack the notion in general: TheRepresentationIsNotTheRealThing. But that very assumption is the one nail upon which all your accusations and arguments have been hanging. And now you're back to making another claim ("The range of valid interpretations of the given definitions rules out very little"), the argument for which relies, once again, on this same bad assumption. Your complaints about 'definitions', and your accusations that my claims are 'subjective' based on irrelevant word games, are just so much smoke, hot air, circular logic, and sophistry.
- And so far definitions of "intent" are either so broad that everything is intent or too fuzzy.
- No more fuzzy than "baseball" and "chair" and "rocket" and "distance" and "time" and "velocity" and other things that the word "objective" was coined to refer to.
- You just claim the reader is poorly educated if they don't buy your fuzz.
- If you provided legitimate arguments and counterpoints I'd consider you well educated even if you didn't buy what I was saying. Instead, you wave your hands, make up indefensible shit, claim that you don't even need to defend it, attempt to use the shit you made up to defend the shit you made up, and attempt HumptyDumpty word games to excuse your sophistry.
- Does determining a baseball require your PhD?
- No. And neither does determining intent.
- You still have not produced a usable definition of "classification", and tying it to "intent" has only made it worse.
- You still have not produced a sound or cogent argument to support either of those accusations. Your wild gesticulation and rampant fallacy does not qualify. I have every reason to believe the definition of "classification" I provided is usable, and that tying it to intent does not harm it. I have, after all, been able to use them both.
- And you are indulging in fallacy. Again. Among other things, definitions don't need to be "proven", and I've already given an equation (no less) on how to detect and verify intent as it is defined, which applies regardless of how 'formal' or 'clear' the definition happens to be; this was done below. Further, you are now attempting to distract from the subject at hand... this discussion started with your spurious injection about TypesAreSideFlags, and has not been about "intent" since that point. If you must diverge, you should at least be able to stay on-topic within your divergence.
- I didn't say that definitions need to be "proved" there. You are insulting the hell out of me for something I didn't even say. This is strong evidence you are so biased against me that you read what you want to read in order to see me in a sinister light.
- You've wandered into that "sinister light" on your own steam; it isn't as though you've been pre-judged to be there. If you think every grievance I have against you is one big misunderstanding from seeing you in a 'sinister light', you're delusional. And if "avoid a clear definition of 'intent' by claiming ignorance of 'proof'" isn't intended to imply that the subject of the "proof" is a "definition" or that one can somehow replace the other, then you seriously need to brush up on your English.
- If it was unclear to you, then simply ask for clarification. That's the civilized way to handle it.
- If it were unclear, I'd have asked for clarification. Instead, what you said was clearly wrong.
- What specifically did I say that is "clearly wrong"?
- "avoid a clear definition of "intent" by claiming that I am ignorant of your "proof"", is one example. "TypesAreSideFlags" is about 95% accurate" is another. Nothing following the latter statement is even about definitions of 'intent'; it's about representation vs. semantics. Of course, most of what you say is clearly wrong, such as "EverythingIsRelative".
- The issue was "Among other things, definitions don't need to be 'proven'". When caught red-handed you changed the subject. Typical of charlatans.
- Whatever. You can't see that the 'subject' was referenced in the first line. Typical of the blind.
I don't see how you can find this confusing. The meaning of any value/string/BLOB depends on how it is used, and the intent of creating and communicating any value/string/BLOB depends on how one expects it to be used. It is pretty obvious to me. But I've studied language, logic, and math all my life... I can't say how it confuses people who thumb their nose at education.
Let's make the example a little more extreme just to make sure we are not introducing unintended side-issues into this. Suppose program B is produced by a monkey on a typewriter that accidentally creates a program identical to Bob's. (Imagine Bob's program is a one-liner Perl script, just to make the scenario sound more likely :-) --top
What does the monkey plan to use the program for? :-)
More seriously, what the monkey created was line-noise that just happens to qualify as a Perl program (or even a TECO program). The program has no meaning because it has no context, no destination or purpose or intent. Trying to find meaning in it is a little like the Bible Code or the Barnum effect. If you look hard enough, you could probably interpret those little puffy things on your ceiling as a useful program in some invented language or another.
So program B (monkey version) has no "classification" but program A does, despite the fact that they are binary identical? Simply because the authors were thinking something different while creating it? You seem to be on a mission to make quantum physics seem non-weird by comparison.
Program B may not be a "classifier" whereas program A might be, despite the fact that they are identical. But not 'simply' because the authors were thinking something different while creating it. There is nothing simple about the context being different.
Think about it this way: it would be trivial for me to create a program that can accept any BLOB and treat it as a program. That will guarantee that any program you write, be it an SQL query or javascript CRUD screen, will be a program in this language. Now, given this fact, what properties can you say are true of that BLOB that looks vaguely like an SQL statement? Are you saying you cannot take advantage of the fact that you know where it is going? How can you write SQL in the first place if you don't know what properties you're aiming for?
We can potentially define SQL-ness by how many SQL keywords and SQL-fitting syntax elements it has. This is independent of the creator's mindset. It would be nearly useless to call something "SQL" just because someone is trying to put it into the "SQL slot", such as copying "asdfkl;aksd asdf;asd &^%$# fasdfl%k" to file "my_query.sql" (although it would probably pass a Perl test ;-).
You would tend to vet 'SQL' down to strings that are within the official language definition (i.e. that can be parsed and have semantic meaning in the language). In a sense, that's where typing comes in. Only certain things can go into an "SQL slot" and actually mean something in communication. But remember: for every string that 'looks like' valid SQL, it also happens to be a TECO program and a BLOB program in the language mentioned above. The semantics would be entirely different. I.e. it is impossible for you to say what a piece of code means without knowing its context.
You make it sound singular and absolute. I'd lean toward, "knowing that it can be used in context X" rather than "Is *for* context X". We don't need to assume a singular absolute, otherwise we are assuming info we cannot back. We had this conversation before somewhere with regard to "is-a" adder versus "can-serve-as" an adder. We seem to be going in circles.
Those views aren't incompatible; one may classify objects in terms of "is a thing that can be used for X". It is defining 'classification' itself that requires use of 'intent'. Words describing different sorts of computations (including 'classification') generally follow this pattern of being defined in terms of their use because the definition itself is aimed to avoid implying or requiring a certain implementation. It may be that 'classification' and 'addition' overlap for some implementation or particular instance, but are still distinct in the English language due to their different purpose, evolution, history, origin, future, etc. English has many words like this, especially for man-made things (e.g. 'traffic light', 'weapon'), and so if you have any desire to continue speaking English, you'll be forced to set what you "favor" aside in order to accommodate the language. Words that naturally describe a pattern in terms of its usage inherently have 'intent' as a component, and neither you nor I can do anything about it... except get over it. Perhaps, when you build your own language and get other people to speak it, you can fix what you perceive to be language flaws; until then, your arguments, your assertions, your ideals that this shouldn't be the case is just noise and hot air.
As far as your regular complaints about how 'practical' the "is *for* X" is for technical definitions, I don't believe you. Given perfect information, as in a math problem or model, use of 'intent' is similarly perfect. And given imperfect information, as humans are always limited in the real world, then 'intent' is not any worse off than other technical definitions.
You can make very good reasonable abductive and heuristic guesses as to the nature of a piece of code using such things as SQL keywords and SQL-fitting syntax elements. Fundamentally, such an algorithm takes advantage of the fact that today we don't have many languages that have a high probability of looking like SQL but possess highly divergent semantics. E.g. the probability of a TECO program looking like an SQL program is 100% for a program of zero characters, but reduces exponentially for each character thereafter.
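For concreteness, here is a minimal sketch of such a guessing heuristic, assuming a purely lexical approach; the keyword list, the scoring rule, and the function name are my own illustrative choices rather than anything standardized:

  # Hypothetical SQL-ness scorer: a purely lexical heuristic, not a parser.
  # The keyword list and the scoring rule are illustrative assumptions.
  import re

  SQL_KEYWORDS = {"select", "from", "where", "insert", "update", "delete",
                  "join", "group", "order", "by", "values", "into", "and", "or"}

  def sqlness(text):
      """Return a rough 0..1 score of how SQL-like a string looks."""
      tokens = re.findall(r"[A-Za-z_]+", text.lower())
      if not tokens:
          return 0.0
      hits = sum(1 for t in tokens if t in SQL_KEYWORDS)
      return hits / len(tokens)

  # A plausible query scores much higher than keyboard noise:
  print(sqlness("SELECT name FROM users WHERE age > 30"))   # ~0.5
  print(sqlness("asdfkl;aksd asdf;asd &^%$# fasdfl%k"))      # 0.0

A real detector would also weigh syntax (clause order, quoting, operators), but the principle is the same: count evidence and produce a confidence score rather than a yes/no verdict.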
Anyhow, I feel some need to point out that defining SQL-ness in terms of this 'guessing algorithm' is neither technically nor philosophically any better than calling it 'SQL-intent-ness' and using the exact same guessing algorithm. You're simply looking for evidence of what code is meant to be. Arguments made from that basis might take you in an interesting circle, but a circle it will be.
- It removes having to Sherlock the author. The definition is tied to the object and only the object, not the creator AND the object, which is what intent-based definitions do.
- You're only fooling yourself, there. Thing is, I can casually invent a language that looks exactly like SQL but has very divergent semantics. It breaks your solution.
- PageAnchor "double_noun": All things being equal, the fewer nouns or participants that the definition has to involve, the better. Determining if X can serve as an adder (or SQL) is a smaller job than determining if its intended to be an adder (or SQL). It usually simplifies life to not have to ascertain features about the author (at least as far as automation).
- I don't believe this "double_noun" principle of yours is valid. It doesn't follow from any logical principles of which I'm aware. Your conclusion about what constitutes a "smaller job" doesn't match my experience. Greater specialization often offers more specific search criteria and reduces error rates and search-space, which make the job smaller. It's a bit like running a string match: the more characters you have to match, the easier it becomes to reject a non-matching page.
Using intent to classify it would be a very, very last resort. In real life we can sometimes use intent as a lazy shortcut, but it's usually not good enough for automated processes that lack fuzzy thinking, social cue recognition, and intuition.
I feel you are once again assuming that intent is somehow 'smellier than' the options you've been presenting. I still reject that as a premise. I also don't believe recognizing intent requires social cue recognition or intuition. Inductively inferring intent does require fuzzy thinking, but so does every other option - including the ones you've been presenting (e.g. the whole idea of 'SQL-ness' is fuzzy).
We can write an algorithm that measures sql-ness far more easily than we can write one that determines the intent of the author. True, the criteria for sql-ness may be subject to dispute, especially for a "half-broken" query. But it's better to have somewhat arbitrary criteria for just the SQL than to have it for BOTH the SQL and the author (see the "double_noun" problem above).
We can trivially use an algorithm that measures sql-ness and use it to determine whether the author likely intended to write SQL. How is doing that far more difficult? Or any more subject to dispute?
That is using something to estimate intent, not the other way around.
And that's a valid complaint because you weren't using something to estimate sql-ness?
If automation-related, I would rather face the problem of estimating sql-ness than the author's intent for reasons already stated recently. I'd usually rather be faced with the task of parsing an ASCII file than doing OCR on handwriting from scratch.
The OCR vs. ASCII is simply a FalseAnalogy. Better is "I'd rather be faced with the task of translating lines visible upon paper into ASCII than translating characters drawn upon a paper into ASCII". After all, the difference between 'lines' and 'characters' is one of assumed intent. I'll note three critical things: (a) the algorithm will be the same, (b) but the latter problem captures intent and context, and (c) in order to verify the correctness of the algorithm, you must know its intent and purpose (e.g. the reasoning behind the translation). One could justify any translation of lines to characters for the first version of the problem.
Where's a practical example of a computerizable algorithm verifying intent?
You verify intent the same way you scientifically verify any other claim: you make a prediction based on the assumption of intent, then you verify that prediction. To be good, of course, it requires that you don't predict anything you already used to infer the statement (the claim of intent) in the first place, but this can be done by splitting your knowledge into two sets: one set to detect intent, another set to verify it. As with verification of anything else, verification of intent usually happens only AFTER detection of intent. Detecting intent, also, is done the same way you go about detecting anything else in this world: probabilistic inference. P(Intent|Event) = P(Intent and Event)/P(Event with or without intent). We use the same mechanisms to detect baseballs and SQL-ness. There is nothing special about algorithms that detect or verify intent.
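To make that formula concrete, here is a minimal sketch of the inference step, assuming Bayes' rule; the prior and likelihood values are made-up illustrative numbers, not measurements:

  # Minimal sketch of P(Intent|Event) = P(Intent and Event) / P(Event).
  # The prior and the likelihoods below are made-up illustrative numbers.

  def p_intent_given_event(p_intent, p_event_given_intent, p_event_given_no_intent):
      """Condition the prior belief in intent on the observed event (Bayes' rule)."""
      p_event = (p_intent * p_event_given_intent
                 + (1.0 - p_intent) * p_event_given_no_intent)
      return (p_intent * p_event_given_intent) / p_event

  # Event: the submitted string parses as valid SQL. Producing valid SQL by
  # accident is assumed to be vastly less likely than producing it on purpose.
  print(p_intent_given_event(p_intent=0.5,
                             p_event_given_intent=0.9,
                             p_event_given_no_intent=1e-6))
  # -> ~0.999998: observing valid SQL makes "SQL was intended" very probable.

The same arithmetic drives detecting baseballs or SQL-ness; only the event and the likelihoods change.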
As far as practical examples of verifying intent (as opposed to merely detecting it) go, you'll find them all over the place if you look into communications hardware, signature security, typechecking, etc. If you accept interactive verifiers then confirmation dialogs, tab-completions, and various expert systems would also qualify. If your actual question was about practical examples of computerizable algorithms that detect intent, then OCR, gesture recognizers, focus heuristics, speech recognition, and various expert systems can all provide fine examples.
Tab-completions? They are prediction engines, not intent engines. At least, one does not have to assume intent to make an algorithm. Same with OCR: essentially predictors. We can predict rain and planet movements also, but there's no intent involved (unless you are Pat Robertson) even though they use (or can use) similar techniques.
It seems you are assuming that "prediction engines" and "intent verifiers" are mutually exclusive. They are not. Indeed, I'd say the opposite is clearly true: there is a great deal of overlap. All verifiers based on the ScientificMethod rely on prediction, including verifiers of intent. And what is it you believe tab-completion is aimed to predict, if not the command you wish to execute or search you intend to perform? As far as your "OCR: essentially predictors", that is similar: OCR is 'predicting the intended character', and OCR software is written with the assumption that the lines on the paper are intended to represent characters from a limited set.
- Without a consensus way to test "intent", this is essentially meaningless. You are trying to anthropomorphize machines for no clear purpose. You can declare that a prediction engine is also an "intent detection" engine out of an invented equivalency, but it won't get us anywhere that viewing it as a prediction engine can't.
- What is wrong with the scientific way I provided earlier to test "intent"? And where did I "invent equivalency"? I do see where I said that verifiers of intent rely on prediction, but I don't see where I said that all things that make predictions are verifiers of intent.
While "one does not have to assume intent to make an algorithm", the probability of writing an algorithm in a given language without intent (i.e. by accident) has at its upper bound the probability of parsing a random character strings. For many languages, that probability is infinitesimal. Further, to verify an algorithm (e.g. OCR against actual handwriting) does require that one assume intent.
Please clarify. This did not make sense to me. It's not our process's concern whether intent was involved. Machines have no emotional attachment to GIGO either way.
- How does: "you cannot call randomly arranged commands worse than your algorithm unless you know purpose" work for you? And intent has nothing to do with emotional attachment... or free will, for that matter.
[I think you may be confusing "intent", which is a simple notion, with "conscious intention."]
No, the definition is not a consensus. While it's true we may call something "intent" for communication purposes, it's often just the human habit of anthropomorphizing everything. But it's also a technique that risks problems when precision in definitions is required. "The prompt wants a password" may make it easier for us as humans to relate to, but from a scientific standpoint, we cannot measure "wants" without an arbitrary model of what "wants" is.
Please clarify: "if program A and B are identical, then whatever formalized notion of types they possess must also be identical". How do you arrive at this? By "formalized", do you mean built-in to the language?
If the language has a type-system, with such things as int and float and struct { int, int, float }, then these must also be identical if the programs are identical. Yes, in essence "formalized" means "built-in to the language".
For the moment, let's focus on classification instead of "types" so that we don't get caught up in language design. Suppose the program classifies stuff, like maybe a parser for a simplified little language.
That's fine. But notice that you just gave me a context. It's important to realize how fundamental that is to communication... even communication of programs.
Yeah, and it is relative. So "classification" is relative? Good.
It seems you assume that 'context-sensitive' and 'relative' are interchangeable notions. If you cannot defend this assumption, then no, your claim is not "Good." or even reasonable.
But context is "in the mind" in this case. We are back to analyzing the observer, not the target thing of issue.
Communications context is as formal and objective as the language provides. Do you believe that math is 'relative' merely because it is "in the mind"?
Math *is* relative. However, this property may or may not be related to it being "in the mind". I'm not prepared to answer that right now. To be Clintonian, it may depend on what "is" is.
Ah. And 'true = false' is relative, too. And 'P = NP' is relative. You assert math is relative with enough confidence to mark it bold. I think we can end this conversation now if "math *is* relative" in your faith and religion. It isn't as though we can ever come to agreement based on rational thought if you happen to believe that logic (which is a math) is relative.
Logic *is* relative. It's only in the mind. The real world appears to be probabilistic, not Boolean. But we usually agree to use it as a UsefulLie for testing statements. As long as both parties agree on a given UsefulLie, its "reality" is a non-issue between them. If you can present your arguments in formal logic (English-itized please), that would be great. But most likely the disagreements will be in translating the real world into givens, not in the actual formal logic computations.
"It's in the mind", whether it is 'only' in the mind or not, does not (by itself) make 'it' subjective or relative. The truth of 'P=NP', for example, DOES NOT DEPEND on who is observing it, his point of view, thoughts, feelings, or interpretations. It only depends on the language and context - what 'P' and 'NP' and '=' happen to represent. Truly relative statements are relative even if both parties get the language right. Truly subjective statements are subjective even if both parties get the language right. 'P=NP' is not subjective or relative. Logic is not subjective or relative. Math is not subjective or relative.
It's only "truth" as far as one accepts all the premises. Whether those premises are tied to the "real world" if often the key issue.
The issue of premises being tied to the real world is NOT one of 'relative' or 'subjective' principles, and is thus NOT a "key issue" in your little argument. Indeed, it still seems irrelevant. Even if a logic leads you to untruths, the logic itself isn't necessarily subjective nor relative; it's simply fallacious.
As far as models of the real world go, I don't see how they are particularly relevant to this discussion. If you're assuming that whether the physical world is or is not 'probabilistic' (maybe based on some sort of quantum structure) has some bearing on issues of communications context or whatnot, I'll need you to explain this connection to me.
The "context" you talk about makes some assumptions. These assumptions not necessarily universal truth. We may agree with assumptions for the purpose of mutual communication as a practical convenience.
I still see no connection; you keep waving your hands and shouting 'universal truth', but you did not answer the question of how this is relevant. As far as I can tell, (a) communications context is not about universal truth, and (b) even if it were about universal truth, ripping on its faults is still a completely non-sequitur and fallacious approach to arguing that math and logic or other things that are "only in the mind" are somehow 'relative'.
Let me try to restate this: The number of internally valid models is probably infinite. However, the number of all possible models that fit the real world is very limited. For example, we can create a model of a universe strikingly similar to ours in which an always-existing intelligent deity created everything. It's an "internally valid" model, and it tends to match what we see in the real world. (Think of an Enterprise Pro++ edition of The Sims.) However, it could still be a flat wrong model. -t
WhatIsIntent
SeptemberZeroEight
CategorySubjectivityAndRelativism, CategoryDefinition