Definitions That Rely On Intent

(Eventual destination for material in MostHolyWarsTiedToPsychology, which is growing TooBigToEdit)


PageAnchor: Same_Design_Different_Intent

Here's one problem (among others) I have with using intent in a definition (such as "classification" or "types"). We have program A and program B. A is built by Bob and B is built by Mary. A and B ended up being identical, perhaps accidentally. However, Bob's and Mary's intents were very different. They just coincidentally made A and B the same. An intent-based definition may say that A has "types" but that B does not because Mary did not intend there to be types when she made B. I find this silly and potentially very confusing. --top

First, everyone programs with some notion of "types", or at least I've never met anyone who has not. Even in a language where every value is a string, people will have "types" of strings in their minds (e.g. strings representing integers vs. strings representing names). It's just that unityped languages are less expressive because they cannot express this notion or take advantage of it. Even Mary will be programming with types, but she might have different informal 'type' distinctions in mind than Bob and end up producing the same program because she has different 'intents' for what the strings represent. Second, if programs A and B are identical, then whatever formalized notion of types they possess must also be identical (unless they're writing the same code but using two different programming languages, which is rather unlikely). There is such a thing as codified semantics and 'intent'.

Anyhow, you can't know the 'intent' of anything except by also knowing its context. For sub-program elements, this is easy: the context is the programming language plus the surrounding program elements. For whole programs, this is more difficult because the intent of the program depends on how you're planning to use it, which needn't be codified in the program... though it is usually fairly obvious (at the macro-level, programs are usually quite specialized in what they can be used for). Bob and Mary could be using their programs for two different things. Bob could be using the program for "classification" while Mary is using it for "transformation". The difference is in the intent, which becomes obvious in the greater context of where the inputs are coming from and how the output of the program is used. Given an idea of where the inputs are coming from and where the outputs are going, you can probably make a 95% or better guess as to what Bob's and Mary's respective 'intents' are.

I don't see how you can find this confusing. The meaning of any value/string/BLOB depends on how it is used, and the intent of creating and communicating any value/string/BLOB depends on how one expects it to be used. It is pretty obvious to me. But I've studied language, logic, and math all my life... I can't say how it confuses people who thumb their nose at education.


Let's make the example a little more extreme just to make sure we are not introducing unintended side-issues into this. Suppose program B is produced by a monkey on a typewriter that accidentally creates a program identical to Bob's. (Imagine Bob's program is a one-liner Perl script, just to make the scenario sound more likely :-) --top

What does the monkey plan to use the program for? :-)

More seriously, what the monkey created was line-noise that just happens to qualify as a Perl program (or even a TECO program). The program has no meaning because it has no context, no destination or purpose or intent. Trying to find meaning in it is a little like the Bible Code or the Barnum effect. If you look hard enough, you could probably interpret those little puffy things on your ceiling as a useful program in some invented language or another.

So program B (monkey version) has no "classification" but program A does, despite the fact that they are binary identical? Simply because the authors were thinking something different while creating it? You seem to be on a mission to make quantum physics seem non-weird by comparison.

Program B may not be a "classifier" whereas program A might be, despite the fact that they are identical. But not 'simply' because the authors were thinking something different while creating it. There is nothing simple about the context being different.

Think about it this way: it would be trivial for me to create a program that can accept any BLOB and treat it as a program. That will guarantee that any program you write, be it an SQL query or a JavaScript CRUD screen, will be a program in this language. Now, given this fact, what properties can you say are true of that BLOB that looks vaguely like an SQL statement? Are you saying you cannot take advantage of the fact that you know where it is going? How can you write SQL in the first place if you don't know what properties you're aiming for?
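(For illustration only: a minimal sketch, in Python, of the kind of "accept any BLOB" language described above. The function name run_blob and the instruction set are invented for this example; the only point is that every byte string whatsoever is a valid program in such a language, which says nothing about what its author meant.)

 # Toy interpreter for an invented language in which *every* byte string
 # is a valid program: each byte, taken modulo 3, selects an "instruction".
 # The opcode scheme is purely illustrative.
 def run_blob(blob: bytes) -> int:
     acc = 0
     for b in blob:
         op = b % 3
         if op == 0:
             acc += b        # "add" instruction
         elif op == 1:
             acc -= b        # "subtract" instruction
         else:
             acc = 0         # "reset" instruction
     return acc

 # Any BLOB at all -- an SQL query, a JavaScript file, line noise --
 # "means something" in this language, though not what its author meant.
 print(run_blob(b"SELECT * FROM customers"))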

We can potentially define SQL-ness by how many SQL keywords and SQL-fitting syntax elements it has. This is independent of the creator's mindset. It would be nearly useless to call something "SQL" just because someone is trying to put it into the "SQL slot", such as copying "asdfkl;aksd asdf;asd &^%$# fasdfl%k" to the file "my_query.sql" (although it would probably pass a Perl test ;-).
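(A rough sketch of the kind of keyword-counting "SQL-ness" measure described above. The function name sql_ness, the keyword list, and the scoring rule are arbitrary illustrative choices, not a real SQL detector.)

 import re

 # Hypothetical keyword-counting heuristic for "SQL-ness"; the keyword
 # list and the scoring rule are arbitrary choices for illustration.
 SQL_KEYWORDS = {"SELECT", "FROM", "WHERE", "INSERT", "UPDATE", "DELETE",
                 "JOIN", "GROUP", "ORDER", "BY", "AND", "OR"}

 def sql_ness(text: str) -> float:
     tokens = re.findall(r"[A-Za-z_]+", text)
     if not tokens:
         return 0.0
     hits = sum(1 for t in tokens if t.upper() in SQL_KEYWORDS)
     return hits / len(tokens)   # fraction of tokens that are SQL keywords

 print(sql_ness("SELECT name FROM users WHERE id = 5"))   # high score
 print(sql_ness("asdfkl;aksd asdf;asd &^%$# fasdfl%k"))    # near zero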

You would tend to vet 'SQL' down to strings that are within the official language definition (i.e. that can be parsed and have semantic meaning in the language). In a sense, that's where typing comes in. Only certain things can go into an "SQL slot" and actually mean something in communication. But remember: for every string that 'looks like' valid SQL, it also happens to be a TECO program and a BLOB program in the language mentioned above. The semantics would be entirely different. I.e. it is impossible for you to say what a piece of code means without knowing its context.

You make it sound singular and absolute. I'd lean toward "knowing that it can be used in context X" rather than "is *for* context X". We don't need to assume a singular absolute; otherwise we are assuming info we cannot back up. We had this conversation before somewhere with regard to "is-a" adder versus "can-serve-as" an adder. We seem to be going in circles.

Those views aren't incompatible; one may classify objects in terms of "is a thing that can be used for X". It is defining 'classification' itself that requires use of 'intent'. Words describing different sorts of computations (including 'classification') generally follow this pattern of being defined in terms of their use because the definition itself is aimed to avoid implying or requiring a certain implementation. It may be that 'classification' and 'addition' overlap for some implementation or particular instance, but are still distinct in the English language due to their different purpose, evolution, history, origin, future, etc. English has many words like this, especially for man-made things (e.g. 'traffic light', 'weapon'), and so if you have any desire to continue speaking English, you'll be forced to set what you "favor" aside in order to accommodate the language. Words that naturally describe a pattern in terms of its usage inherently have 'intent' as a component, and neither you nor I can do anything about it... except get over it. Perhaps, when you build your own language and get other people to speak it, you can fix what you perceive to be language flaws; until then, your arguments, your assertions, your ideals that this shouldn't be the case are just noise and hot air.

As far as your regular complaints about how 'practical' the "is *for* X" is for technical definitions, I don't believe you. Given perfect information, as in a math problem or model, use of 'intent' is similarly perfect. And given imperfect information, as humans are always limited in the real world, then 'intent' is not any worse off than other technical definitions.

You can make very good reasonable abductive and heuristic guesses as to the nature of a piece of code using such things as SQL keywords and SQL-fitting syntax elements. Fundamentally, such an algorithm takes advantage of the fact that today we don't have many languages that have a high probability of looking like SQL but possess highly divergent semantics. E.g. the probability of a TECO program looking like an SQL program is 100% for a program of zero characters, but reduces exponentially for each character thereafter.

Anyhow, I feel some need to point out that defining SQL-ness in terms of this 'guessing algorithm' is neither technically nor philosophically any better than calling it 'SQL-intent-ness' and using the exact same guessing algorithm. You're simply looking for evidence of what code is meant to be. Arguments made from that basis might take you in an interesting circle, but a circle it will be.

Using intent to classify it would be a very, very last resort. In real life sometimes we can use intent as a lazy shortcut, but it's usually not good enough for automated processes that lack fuzzy thinking, social cue recognition, and intuition.

I feel you are once again assuming that intent is somehow 'smellier than' the options you've been presenting. I still reject that as a premise. I also don't believe recognizing intent requires social cue recognition or intuition. Inductively inferring intent does require fuzzy thinking, but so does every other option - including the ones you've been presenting (e.g. the whole idea of 'SQL-ness' is fuzzy).

We can write an algorithm that measures sql-ness far more easily than we can write one that determines the intent of the author. True, the criteria for sql-ness may be subject to dispute, especially for a "half-broken" query. But it's better to have somewhat arbitrary criteria for just the SQL than to have it for BOTH the SQL and the author (see "double_noun" problem above).

We can trivially use an algorithm that measures sql-ness and use it to determine whether the author likely intended to write SQL. How is doing that far more difficult? Or any more subject to dispute?

That is using something to estimate intent, not the other way around.

And that's a valid complaint because you weren't using something to estimate sql-ness?

If automation-related, I would rather face the problem of estimating sql-ness than the author's intent for reasons already stated recently. I'd usually rather be faced with the task of parsing an ASCII file than doing OCR on handwriting from scratch.

The OCR vs. ASCII comparison is simply a FalseAnalogy. Better is "I'd rather be faced with the task of translating lines visible upon paper into ASCII than translating characters drawn upon a paper into ASCII". After all, the difference between 'lines' and 'characters' is one of assumed intent. I'll note three critical things: (a) the algorithm will be the same, (b) the latter problem captures intent and context, and (c) in order to verify the correctness of the algorithm, you must know its intent and purpose (e.g. the reasoning behind the translation). One could justify any translation of lines to characters for the first version of the problem.

Where's a practical example of a computerizable algorithm verifying intent?

You verify intent the same way you scientifically verify any other claim: you make a prediction based on the assumption of intent, then you verify that prediction. To be good, of course, it requires that you don't predict anything you already used to infer the statement (the claim of intent) in the first place, but this can be done by splitting your knowledge into two sets: one set to detect intent, another set to verify it. As with verification of anything else, verification of intent usually happens only AFTER detection of intent. Detecting intent, also, is done the same way you go about detecting anything else in this world: probabilistic inference. P(Intent|Event) = P(Intent and Event)/P(Event with or without intent). We use the same mechanisms to detect baseballs and SQL-ness. There is nothing special about algorithms that detect or verify intent.
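(An illustrative numeric sketch of the probabilistic inference described above; it merely applies the P(Intent|Event) formula to made-up probabilities, which are assumptions for the example rather than measurements.)

 # Toy Bayesian "intent detector" following the formula above; all
 # probabilities are invented illustrative numbers.
 p_intent = 0.10                      # prior: author intended to write SQL
 p_event_given_intent = 0.90          # chance the text looks like SQL if so
 p_event_given_no_intent = 0.001      # chance random text looks like SQL

 # P(Event) = P(Event|Intent)P(Intent) + P(Event|~Intent)P(~Intent)
 p_event = (p_event_given_intent * p_intent +
            p_event_given_no_intent * (1 - p_intent))

 # P(Intent|Event) = P(Intent and Event) / P(Event)
 p_intent_given_event = (p_event_given_intent * p_intent) / p_event
 print(round(p_intent_given_event, 4))   # ~0.99: strong evidence of intent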

As far as practical examples of verifying intent (as opposed to merely detecting it) go, you'll find them all over the place if you look into communications hardware, signature security, typechecking, etc. If you accept interactive verifiers then confirmation dialogs, tab-completions, and various expert systems would also qualify. If your actual question was about practical examples of computerizable algorithms that detect intent, then OCR, gesture recognizers, focus heuristics, speech recognition, and various expert systems can all provide fine examples.

Tab-completions? They are prediction engines, not intent engines. At least, one does not have to assume intent to make an algorithm. Same with OCR: essentially predictors. We can predict rain and planet movements also, but there's no intent involved (unless you are Pat Robertson) even though they use (or can use) similar techniques.

It seems you are assuming that "prediction engines" and "intent verifiers" are mutually exclusive. They are not. Indeed, I'd say the opposite is clearly true: there is a great deal of overlap. All verifiers based on the ScientificMethod rely on prediction, including verifiers of intent. And what is it you believe tab-completion is aimed to predict, if not the command you wish to execute or search you intend to perform? As far as your "OCR: essentially predictors", that is similar: OCR is 'predicting the intended character', and OCR software is written with the assumption that the lines on the paper are intended to represent characters from a limited set.

While "one does not have to assume intent to make an algorithm", the probability of writing an algorithm in a given language without intent (i.e. by accident) has at its upper bound the probability of parsing a random character strings. For many languages, that probability is infinitesimal. Further, to verify an algorithm (e.g. OCR against actual handwriting) does require that one assume intent.

Please clarify. This did not make sense to me. It's not our process's concern whether intent was involved. Machines have no emotional attachment to GIGO either way.

[I think you may be confusing "intent", which is a simple notion, with "conscious intention."]

No, there is no consensus definition. While it's true we may call something "intent" for communication purposes, it's often just the human habit of anthropomorphizing everything. But it's also a technique that risks problems when precision in definitions is required. "The prompt wants a password" may make it easier for us as humans to relate to, but from a scientific standpoint, we cannot measure "wants" without an arbitrary model of what "wants" is.


Please clarify: "if program A and B are identical, then whatever formalized notion of types they possess must also be identical". How do you arrive at this? By "formalized", do you mean built-in to the language?

If the language has a type-system, with such things as int and float and struct { int, int, float }, then these must also be identical if the programs are identical. Yes, in essence "formalized" means "built-in to the language".

For the moment, let's focus on classification instead of "types" so that we don't get caught up in language design. Suppose the program classifies stuff, like maybe a parser for a simplified little language.

That's fine. But notice that you just gave me a context. It's important to realize how fundamental that is to communication... even communication of programs.

Yeah, and it is relative. So "classification" is relative? Good.

It seems you assume that 'context-sensitive' and 'relative' are interchangeable notions. If you cannot defend this assumption, then no, your claim is not "Good." or even reasonable.

But context is "in the mind" in this case. We are back to analyzing the observer, not the target thing of issue.

Communications context is as formal and objective as the language provides. Do you believe that math is 'relative' merely because it is "in the mind"?

Math *is* relative. However, this property may or may not be related to it being "in the mind". I'm not prepared to answer that right now. To be Clintonian, it may depend on what "is" is.

Ah. And 'true = false' is relative, too. And 'P = NP' is relative. You assert math is relative with enough confidence to mark it bold. I think we can end this conversation now if "math *is* relative" in your faith and religion. It isn't as though we can ever come to agreement based on rational thought if you happen to believe that logic (which is a math) is relative.

Logic *is* relative. It's only in the mind. The real world appears to be probabilistic, not Boolean. But we usually agree to use it as a UsefulLie for testing statements. As long as both parties agree on a given UsefulLie, its "reality" is a non-issue between them. If you can present your arguments in formal logic (English-itized, please), that would be great. But most likely the disagreements will be in translating the real world into givens, not in the actual formal logic computations.

"It's in the mind", whether it is 'only' in the mind or not, does not (by itself) make 'it' subjective or relative. The truth of 'P=NP', for example, DOES NOT DEPEND on who is observing it, his point of view, thoughts, feelings, or interpretations. It only depends on the language and context - what 'P' and 'NP' and '=' happen to represent. Truly relative statements are relative even if both parties get the language right. Truly subjective statements are subjective even if both parties get the language right. 'P=NP' is not subjective or relative. Logic is not subjective or relative. Math is not subjective or relative.

It's only "truth" as far as one accepts all the premises. Whether those premises are tied to the "real world" if often the key issue.

The issue of premises being tied to the real world is NOT one of 'relative' or 'subjective' principles, and is thus NOT a "key issue" in your little argument. Indeed, it still seems irrelevant. Even if a logic leads you to untruths, the logic itself isn't necessarily subjective nor relative; it's simply fallacious.

As far as models of the real world go, I don't see how they are particularly relevant to this discussion. If you're assuming that whether the physical world is or is not 'probabilistic' (maybe based on some sort of quantum structure) has some bearing on issues of communications context or whatnot, I'll need you to explain this connection to me.

The "context" you talk about makes some assumptions. These assumptions not necessarily universal truth. We may agree with assumptions for the purpose of mutual communication as a practical convenience.

I still see no connection; you keep waving your hands and shouting 'universal truth', but you did not answer the question of how this is relevant. As far as I can tell, (a) communications context is not about universal truth, and (b) even if it were about universal truth, ripping on its faults is still a completely non-sequitur and fallacious approach to arguing that math and logic or other things that are "only in the mind" are somehow 'relative'.

Let me try to restate this: The number of internally valid models is probably infinite. However, the number of all possible models that fit the real world is very limited. For example, we can create a model of a universe strikingly similar to ours in which an always-existing intelligent deity created everything. It's an "internally valid" model, and it tends to match what we see in the real world. (Think of an Enterprise Pro++ edition of The Sims.) However, it could still be a flat wrong model. -t


WhatIsIntent


SeptemberZeroEight


CategorySubjectivityAndRelativism, CategoryDefinition

