Approaches To Definitions

The classic way to define terms is via boundaries of strict categories. A more recent and increasingly widely used approach is to define things by specifying prototypes that are excellent examples of a category; candidate elements are then categorized in a fuzzy way by measuring their distance (in terms of appropriate characteristics) from a prototype. The smaller the distance, the greater the degree of fuzzy set membership.
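For concreteness, here is a minimal sketch of definition by prototype. The feature names, the Euclidean distance, and the exponential mapping from distance to membership degree are all illustrative assumptions, not anything canonical:

 import math
 def membership(candidate, prototype, scale=2.0):
     """Fuzzy membership in the prototype's category: 1.0 at the prototype
     itself, falling off smoothly with distance in feature space."""
     distance = math.sqrt(sum((candidate[k] - prototype[k]) ** 2 for k in prototype))
     return math.exp(-distance / scale)
 dog_prototype = {"legs": 4, "fur": 1.0, "bays_at_moon": 1.0}
 three_legged_dog = {"legs": 3, "fur": 1.0, "bays_at_moon": 1.0}
 goldfish = {"legs": 0, "fur": 0.0, "bays_at_moon": 0.0}
 print(membership(three_legged_dog, dog_prototype))  # ~0.61, still quite dog-like
 print(membership(goldfish, dog_prototype))          # ~0.12, far from the prototype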

The two approaches are dual to each other; if we treat boundaries as primary, the space thus inscribed may have a center/prototypes/moment/skeletonization, but we may not care. If we treat prototypes as primary, the resulting categories may or may not have strict boundaries, and we may or may not care.

In both cases we are carving out subsets of an N-dimensional space, nominally one where each dimension is a salient attribute of the elements under consideration.

Although obviously very useful, classic definition via category boundary has produced a very large number of difficulties in application (which in turn have fueled any number of flame wars).

In AI a classic example is an expert system, which might have a proposition that defines a dog as something that IS-A mammal, has fur, four legs, and a tail, hunts in packs, bays at the moon, and so on. I'm being slightly fanciful here, but this sort of thing has been approached with great care and attempts at rigor.

Suppose we are relatively happy with our definition: what do we now say about a 3-legged dog? The obvious answer is to adjust the definition, first to make it a little less fanciful, but in any case to take the exception into account.

The problem is that this is often extraordinarily hard to do, because there are so very many exceptions of different sorts, and because some of the exceptions strike right at the foundations of our definition.
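As a toy illustration of the patching problem (my own, not drawn from any particular expert system; every field name below is made up), each exception weakens one clause of the boundary and suggests several more exceptions of its own:

 def is_dog_v1(x):
     return (x["mammal"] and x["fur"] and x["legs"] == 4
             and x["tail"] and x["hunts_in_packs"])
 def is_dog_v2(x):
     return (x["mammal"]
             and (x["fur"] or x["hairless_breed"])        # e.g. the Xoloitzcuintle
             and x["legs"] in (3, 4)                       # amputees
             and (x["tail"] or x["docked_tail"])           # docked or bobbed tails
             and (x["hunts_in_packs"] or x["domesticated"]))  # lap dogs don't hunt
 # ...and then v3, v4, ...; each patch strikes closer to the foundations of the definition.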

In biology, monotremes are a classic example of such problems. Is a duck-billed platypus a mammal or not? Eventually it was put into its own category, but that led to a lot of gnashing of teeth and pulling of hair.

Worse still is the biological problem of defining "life": every proposed definition is notorious for miscategorizing something, in part because we're not in full agreement about which category some things belong in. What about viruses, prions, dormant seeds and spores, frozen sperm, or even animals that can be thawed and "brought back to life"? Any given reader may have firm opinions about these, but they nonetheless lead to difficulties in definitions and to many arguments, and different specialists hold different opinions.

Therefore the newer approach of definition by prototype is often used instead, to short-circuit those difficulties. That doesn't mean the classical approach of strict categories is wrong; it just means that we should consider which to use in a given circumstance.

Definition by prototype potentially includes definition by strict categorization as a special case: restrict every degree of membership to 0% or 100% and you are back to crisp sets.
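In sketch form (continuing the hypothetical membership function above), the classical case just forces the membership function onto the two endpoints:

 def crisp_membership(candidate, predicate):
     """A classical boundary-style category as a degenerate fuzzy one:
     membership is exactly 0.0 or 1.0, nothing in between."""
     return 1.0 if predicate(candidate) else 0.0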


This is related to the mathematical approach of specifying sets either by list or by rule, but none of this tends to be an issue for mathematical definitions, and in any case, formally, terminology definitions are not necessary to axiomatic derivations; they are annotations to assist human understanding.

This is strongly related to LimitsOfHierarchies, but not identical.


The above is all from Jul 2004


A brief but interesting discussion of the history of the above concepts appears in a review of TheBigBookOfConcepts? at www.bookshelved.org/cgi-bin/wiki.pl?TheBigBookOfConcepts? The lack of comment on this page over the last year gave me the impression people thought I was spouting off random half-baked ideas of my own. They may be half-baked, but I'm not talking about my own ideas. :-) -- DougMerritt

And then there's a third way to define things which is completely different. You break down a concept into its constituent particles and lay bare its essence. The human mind can't see individual concepts by introspection but it can make them interact with other concepts. You can deduce the constituent particles of a concept from the way it scatters other concepts.

Of course, you can think of this as an improved form of bounds-definition, however it doesn't necessarily establish bounds. DefinitionOfLife involves a lot of fuzzy concepts because the concept of life is itself fuzzy. -- RK

True. And in fact there are not just three but many ways. I was just singling out the two best-known schools of thought that have been formalized: the original one, in use for several millennia, and a second one that began to be formalized around 1970. The prototype approach doesn't inherently rest on fuzzy sets/logic; despite what I implied above, it merely allows for it. The older Aristotelian approach, similarly, was extended by ZadehLotfi to include fuzziness. Fuzziness is mostly an orthogonal issue, important though it can be. -- Doug

That's not clear to me, but it reminds me of a great old saying: The working mathematician is a Platonist on weekdays and a formalist on weekends.

For working mathematicians, this is a matter of functional pragmatism. The original is in P. J. Davis and R. Hersh, "The Mathematical Experience" (Boston: Birkhäuser, 1981), p. 321:

Or 'The mathematician qua mathematician is a realist, the mathematician qua philosopher is a formalist.'

But to attempt an answer to your question, one can make a prototype definition by relating a new term to something previously understood. E.g. say that best and worst have been previously defined, and now we want to add a definition by fuzzy prototype of good and bad. It's straightforward, e.g. perhaps (good best 0.75), (bad worst 0.75). There's an infinite regress problem if we say, wait, how did we previously define the abstractions best and worst? That is precisely the SymbolGroundingProblem?, and it arises in all symbolic contexts regardless of the choice of formalization. (There is not yet a final answer, which is one of the things that has plagued AI, but one approach is to define abstractions relative to the concrete, to bootstrap. Well, that's vague enough that it probably covers many approaches.) -- DougMerritt
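One hypothetical way to render the "(good best 0.75)" idea in running code, assuming the anchor terms have already been grounded somehow; the representation is my own guess at what those s-expressions mean:

 # Each new fuzzy term is defined relative to a previously defined anchor term,
 # with a degree saying how strongly it resembles that anchor.
 definitions = {
     "good": ("best", 0.75),
     "bad":  ("worst", 0.75),
 }
 def degree(term, anchor_memberships):
     """Membership of a candidate in 'term', given its membership in the anchors."""
     anchor, weight = definitions[term]
     return weight * anchor_memberships[anchor]
 print(degree("good", {"best": 1.0, "worst": 0.0}))  # 0.75
 print(degree("bad",  {"best": 1.0, "worst": 0.2}))  # 0.15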

Note that there's no good way to define 'dog' without relying on 'a member of the species that includes X' where X is a prototype. Otherwise you run into problems when you try to exclude wolves, foxes and hyenas. And if you want to include dingoes then "a member of the domesticated species of canids" doesn't work. And even if it did, it would only reduce the problem to defining the canid family. -- RK

...which illustrates still more of the issues in the SymbolGroundingProblem?. Historically, a lot of AI work got gummed up in the details of such problems before they ever quite realized what the shape of the overall difficulty was. -- DougMerritt

Sorry Doug, but I don't understand what the grounding of symbols has to do with platonism vs formalism. To my mind, there's no difference in either case. I think rather that it has a lot to do with the problem of substances(?) wherein reality is continuous and made into discrete concepts by our minds vs reality being discrete and made continuous by our senses. -- rk

I didn't mean to say there was a pertinent connection here, and in fact my intention was to steer things away from Platonism and towards the SymbolGroundingProblem?. I was merely reminded of a quote (because it is sort of a math parallel to the SymbolGroundingProblem?, which otherwise doesn't have one, since there's no connection to physical reality in pure math). I just reordered text above to try to make that more clear; did I succeed? -- Doug


In the GoldenAgeOfScienceFiction? A.E. van Vogt wrote the novel "The World of Null-A", and sequels, about a vision of futuristic impacts of non-Aristotelian logic. It's not interesting from a theoretical point of view (aside from perhaps being a ZenSlap for those who think that Aristotelian logic is the be-all and end-all of existence; actual technical publications serve better IMHO for those who care), but it is of historical note, and kind of fun if you like campy ancient sci-fi (I say that deliberately; it's more "sci-fi" than "science fiction" :-).


See also SelfLanguage (prototype-based OO language); ExtendedSetTheory and ExtendedSetTheoryStorageModel (unclear whether this is half-baked or not in its fleshing-out, but there's no strong theoretical reason to prevent some extended set theory from being pragmatically useful), and lots of other potentially related issues and pages.


The basic idea mentioned at the top of this page seems isomorphic to the way VoronoiDiagrams and DelaunayTriangulations are related to each other.
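In other words (a sketch with made-up coordinates), classifying every point by its nearest prototype implicitly partitions the space into the prototypes' Voronoi cells, so the category boundaries come along for free:

 prototypes = {"dog": (0.0, 0.0), "wolf": (4.0, 3.0)}
 def nearest_prototype(point):
     """Nearest-prototype classification; the implied boundaries between
     categories are exactly the Voronoi cell walls around the prototypes."""
     return min(prototypes,
                key=lambda name: sum((p - q) ** 2
                                     for p, q in zip(prototypes[name], point)))
 print(nearest_prototype((1.0, 1.0)))  # 'dog': the point falls in dog's cell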


Of interest in AugustZeroFive


CategoryDefinition

