Classification Definitions Without Intent

It's been suggested in DefinitionsThatRelyOnIntent that definitions of "classification" must involve "intent". This topic explores the possibility of definitions that don't involve intent or related concepts such as "goal" and "purpose". It's an important issue because it relates to other issues, such as the definition of "types".


Classification is compression. For example, to a regular fish, "moving", "large", and "big teeth" are compressed into "dangerous". Multiple attributes are compressed into a single attribute.

  if (object.size > threshold and object.isMoving and object.hasTeeth)
  then alarm_flag = true;

The fish doesn't even need to be conscious of this; it may be instinct. Imagine similar behavior in a tiny worm or bacterium, which has no "intent" and is acting merely on instinct, yet still does classification: certain sensory input patterns trigger a response or condition. A single neuron can perform such classification: if the summed signal from enough of the inputs exceeds a threshold, then the output fires a signal. (An analog version is slightly more involved, but the concept is the same.)
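
Here is a minimal sketch of such a threshold unit in Python (the weights and threshold are made-up illustrative values, not measurements of any real neuron):

  # A toy threshold "neuron": it fires when the weighted sum of its
  # inputs exceeds a threshold. Pure arithmetic; no intent required.
  def fires(inputs, weights, threshold):
      return sum(i * w for i, w in zip(inputs, weights)) > threshold

  # Inputs: is_large, is_moving, has_teeth (each 0 or 1).
  print(fires([1, 1, 1], [0.5, 0.3, 0.4], threshold=1.0))  # True  (1.2 > 1.0)
  print(fires([1, 0, 1], [0.5, 0.3, 0.4], threshold=1.0))  # False (0.9 < 1.0)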

In practice, it's usually compression for local or task-specific needs. We use it to simplify our models. But this is the "why", and does not need to be part of the definition.

--top


Classification is Mapping (candidate definition)

This is a more general view than compression in that reduction of information is not assumed. Mapping turns one set of variables into a different set; the new values are influenced by the values of the originals but are not (in all cases) identical to them. In the fish example, "large", "moving", and "teeth" are remapped into "danger".
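
A small sketch of the mapping view in Python (names are illustrative): the function maps one set of variables onto a different set, and nothing in it mentions intent, goal, or purpose.

  # Classification as mapping: (large, moving, teeth) -> danger.
  # The output depends on the inputs but is not identical to them.
  def remap(large, moving, teeth):
      return {"danger": large and moving and teeth}

  print(remap(large=True, moving=True, teeth=True))   # {'danger': True}
  print(remap(large=True, moving=False, teeth=True))  # {'danger': False}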

Are you claiming that ALL functional mappings are classifications? If not, how do you tell the difference?

I know of no exception at this point, but won't rule one out.

You appeal to your ignorance?

It's called keeping an open mind. You should try it sometime.

Keeping an open mind requires I give new ideas a fair trial as I attempt to shoot them down, not that I embrace them without filter, give them pet names, then fight to defend them until somebody else kills them. I mean, there's "open" and then there's "Johnny on the Spot".

Then I shall offer you the opportunity to explore and shoot down your own ideas. Please justify how each of these either are not mappings or are classifications: integer representations, diagonalization of rational numbers, language translations, language compilation, MP3 encodings, integer arithmetic, and HTML displays.

You have not identified the inputs and outputs of the mappings in many of those cases.

I trust you are able to ask questions if you are unfamiliar with these cases.

As for language translation, we probably would not call it "classification" per se, although classification is probably involved in the process. In ordinary speech, there is usually compression of information for classification. But I couldn't say there is always compression. It may be comparable to the DefinitionOfLife. No single criterion may be enough.


(Regarding "Classification is Compression")

Are you claiming that ANY compression algorithm qualifies as classification? If not, how do you tell the difference?

Perhaps only loss-full compression.

So you are claiming that any LOSSY compression is classification. Can you please explain this in the context of my MP3 collection?

It re-classifies sound waves into the frequency realm. It also discards what we humans cannot perceive, such as quiet sounds close in frequency to loud ones, thus making a human-centric abstraction of sound.
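
As a toy sketch of that in Python (simple amplitude thresholding, nowhere near a real psychoacoustic model): each frequency bin is classified as audible or masked, and the masked bins are discarded.

  # Toy "perceptual" discarding: classify each frequency bin as audible
  # or masked relative to the loudest bin, then keep only the audible
  # ones. Real MP3 masking is far more elaborate; this only shows the
  # shape of the idea: lossy compression built on a classification step.
  def keep_audible(bins, mask_ratio=0.01):
      loudest = max(amp for _, amp in bins)
      return [(freq, amp) for freq, amp in bins if amp > loudest * mask_ratio]

  bins = [(440, 1.00), (445, 0.002), (880, 0.30)]  # (Hz, amplitude) pairs
  print(keep_audible(bins))  # the quiet 445 Hz bin is classified away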

I can hardly believe a logical and astute fellow such as yourself would regress to 're-classify' as an explanation of 'classification'. And it seems from the second sentence that you know what 'lossy compression' means, but I wouldn't have brought up MP3s if I didn't know that much. Be honest: are you just rambling in hopes you'll hit a good answer?


Also, since you wish to get away from "intent or related concepts such as 'goal' and 'purpose'", are you trying to say that instinct has no purpose?

That is a very interesting philosophical question. Generally I consider "purpose" part of an observer's perspective. Is an amoeba reacting to stimuli any more "purposeful" than a water droplet hitting a pond? It's almost a religious question. We tend to anthropomorphise life much more than inanimate objects.

If you consider issues of computation to be religious discussions, is it any wonder you get into so many holy wars?

Or vice versa. Until "intent" is measurable in a consensus way, it will continue to be a problematic concept to tie definitions to.

["Intent" is measurable in a consensus way. The objection appears to be yours and yours alone.]

Bullsh8t! "I know it when I see it and that's good enough" is not good enough.

[Are you claiming that the intent of QuickSort is not to put items in sorted order? Or that the intent of opening a Berkeley socket is not to establish a connection with another socket? You appear to be equating "intent" with "conscious will."]

It may be to win a Turing Award for the inventor. I don't really know for sure. I can give a guess (see probability below), but it is only a guess based on a somewhat arbitrary (non-rigorous but experience-based) model of human behavior that I construct in my mind to try to figure out people. We don't always know the exact steps our mind takes to come up with a conclusion. It's "intuition" that is somewhat distant from science-quality scrutiny. Even in the lab, where the guts can be monitored, some neural nets have grown too complicated for the researchers to dissect the steps behind a result.

[You are referring to the intent of the inventor and not the invention itself.]

The "intent" of the program? WTF? It's just following mechanical rules. Does a snowflake crystal seed have intent to form a snowflake?

[Certainly. This is further evidence that you are equating "intent" with "conscious will", which it is not.] (written prior to 'snowflake' addendum)

This appears to contradict your DefinitionsThatRelyOnIntent PageAnchor Same_Design_Different_Intent claim. The "intent" would then be independent of the author. If a monkey accidentally types quick-sort code, does it still have intent?

Technically, for anything a monkey writes, one can invent a language for which that monkey-typing qualifies as quick sort code. This goes the other direction: if everything a monkey types can be quick-sort code in some language or another, how do you judge? You make assumptions as to the language, that's how. And by doing so, you make assumptions as to the semantics, and therefore to the purpose and intent of the program. You cannot logically make that assumption about semantics without implicitly taking a leap of faith to intent and purpose. Unless you make that assumption, the monkey-typing is just that: monkey-typing. It isn't even a program.

Maybe EverythingIsRelative. But for practical considerations we are not dealing with all possible potential languages. Usually we are interested in something more specific. We can create an SQL-detector (classifier), for example, without having to mention "intent".
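
For instance, a crude sketch of such a detector in Python (the keyword heuristic is illustrative; a real detector would parse a grammar):

  import re

  # A crude SQL-detector: classifies a string as "looks like SQL" or
  # not. Nowhere does the classifier mention "intent".
  SQL_PATTERN = re.compile(r"^\s*(SELECT|INSERT|UPDATE|DELETE)\b", re.IGNORECASE)

  def looks_like_sql(text):
      return bool(SQL_PATTERN.match(text))

  print(looks_like_sql("SELECT name FROM fish"))  # True
  print(looks_like_sql("Once upon a time"))       # False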

By resorting to claims of 'practical considerations' in a debate that is fundamentally philosophical, it seems what you really want to do is treat 'intent' as 'the-word-that-shall-not-be-named' and sweep it under a rug with your other unmentionables. For philosophical considerations, we ARE dealing with all possible potential languages. We can assume something more specific, and in making that assumption we introduce semantics and, therefore, intent. We repeat the process for the program output: you can't even say an SQL-in garbage-out program is not an SQL detector unless you assume a particular language for the output. I don't know about you, but I do not believe it proper to ignore the assumptions you are being forced to make and pretend they are inconsequential. It's like hiding that "then a miracle happens here" step in (bad) math or logic. -- #2


Classification isn't compression so much as it is a let-assignment. Assuming the same context as the fish above, something is dangerous if it is moving, has teeth, and is large. That is:

  given self as fish:
    let dangerous other = (hasTeeth other) /\ (isLarge other) /\ (isMoving other)
    in  ...etc...

I'm sure that most EverythingIsa's have a take on it. --top


RE: We tend to anthropomorphise life much more than inanimate objects.

Perhaps you do. I tend to objectify humans just as much as other animate objects. Which, if either, of these sounds logical to you: (1) "Every human I know may be categorized as an animate construct of physical matter." (2) "Every animate construct of physical matter I know may be categorized as human." I don't know which logic you use to make decisions, but to me (1) is logical (barring dead humans), and (2) is not, being contradicted by such things as plastic wind-up clapping monkeys.

Where you keep saying that "intent" and "purpose" require humanizing things, I see you spouting a bunch of religious mumbo-jumbo. It's as though you're claiming that "intent" can't be represented in any computational system except a human brain, but you just expect me to take you at your word. If human brains can represent intent, do we have any reason to believe that other computational systems may not? Are you going to invoke "free will"?

When a human is the cause of an event, even just tightening a fist, that event is either intentional or unintentional. However, intent has limited extent. It may be that you intended to tighten your fist, but that you did not intend to pull the trigger. It may be that you intended to pull the trigger, but did not intend to fire a bullet. It may be that you intended to fire a bullet, but did not intend to shoot yourself in the foot. It may even be that you intended to shoot yourself in the foot. If human intent has limited extent, is there any reason to believe that actions caused by lesser computational systems may not possess intent with limited extent?

Is the intent of the visual cortex not to classify imagery? Is the intent of the neuron not to fire in response to the correct balance of chemical stimuli? What happens when these computational processes begin delivering signals unintentionally?

If intent has limited extent, should you not judge intent in part by examining its extent? How would you handle a man who claims: "I intended to load my revolver, point my piece at his head, and pull the trigger... but I did not intend to fire a bullet." This man is claiming a certain extent to his intent. How would you judge it, and by what reasoning?

What happens if you consistently apply this reasoning to judging other claims of intent? How do you compare this reasoning to the reasoning by which you detect 'baseball-ness'? Are you willing to even try this introspection?

I feel that much of our conflict about 'intent' comes from your anthropomorphizing of it and my not doing so. Perhaps if you consider intent without anthropomorphic features, getting rid of that 'unnecessary' dependency on human-ness, you'll have a better understanding of my past discussions.


Classification is applying a distinction for the purpose of the distinction. See also DifferenceThatMakesNoDifference.

This may be because classification is related to compression insofar as any significant correlation with other distinctions implies a redundancy that can be exploited technically for space.
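
A small Python illustration of that redundancy (the attributes are made up): when the class bit is fully determined by other stored attributes, a decoder can reconstruct it, so it never needs to be stored at all.

  # "dangerous" is fully correlated with three stored attribute bits,
  # so the class label costs zero extra bits: correlation = redundancy.
  def decompress(large, moving, teeth):
      dangerous = large and moving and teeth  # reconstructed, not stored
      return (large, moving, teeth, dangerous)

  stored = (True, True, True)  # only 3 bits go on the wire
  print(decompress(*stored))   # (True, True, True, True)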

