Ontological Thinking

Def: ontology = "The philosophical study of existence and the nature of reality." (http://machaut.uchicago.edu/cgi-bin/WEDT1.sh?Submit=Search&word=ontology)

See also http://www.dictionary.com/cgi-bin/dict.pl?db=web1913&term=ontology


(musings inviting response, more than actual claims)

I was at a meeting last week and one of the participants piped up with a brief discussion of reuse. He made a claim, one that he's made before (though this time it was more clearly thought through), that the only areas in which OO programmers have achieved reuse are those in which, for whatever reason, there is now a well-understood, if not completely formalized, ontology.

The implied claim was, of course, that ontologies are the important things. Which was buttressed by another participant's comment that scalability is directly proportional to ontological thinking (to build really large systems, you must think more, and more clearly, about ontologies).

Kind of an interesting, if unsubstantiated, claim. I'll get back to it in a moment.

I've sort of thought, on occasion, that some of the discussions here (and on the mailing lists) border on the ontological. For example, the "Are there really more than 5 patterns" discussion that occasionally crops up. And the ways in which we reuse patterns ("distributed programming is sort of like a generalization of client-server database programming. So maybe the patterns used there are applicable here") have an ontological component as well.

And "pattern formats" with their 6 (or 10 or whatever) distinguished slots seem to be a form of object-modeling as well.

Are we, in our way, groping towards an ontology? Are the ontologies somehow fundamental? Or are they necessary but not very important (e.g. some ontology is going to emerge and be useful, but the exact details aren't very important)? Or are they a complete by-product (when people think, classifications emerge like sparks from the forge)?

Back to the claim:

Is the implied claim at all valid?

There's also the issue of scalability. Things can scale in many ways. Two of the most significant are: number of simultaneous users and number of lines of code. I've never built anything that had more than 500 simultaneous users, and never worked on anything with more than a million lines of code. So I'm not really in a position to judge the scalability claim.

On the other hand, it smells, to my nose, a lot like the claims various Eiffel people have made about reuse.

And, in scalability, the ontological notions are pretty much orthogonal to the (design) patterns ones.

-- WilliamGrosso


The biggest problem with signing up for this notion is that you'd have to admit that Ted Codd had one of the best handles on reality when he invented the relational model and normalization. OTOH, Ted would most likely agree.

When we finally get something right, or nearly so, are we just discovering truths in the universe? "Listen to what Smalltalk is telling you", say WardAndKent.

Are we the author of our works, or merely the pen in the hand of reality?

Or do we not care, as long as the checks keep rolling in? -- RonJeffries


Ontologies are just a projection from the WholeSortOfGeneralMishMash, and I accordingly find them relative, limited and suspicious. The WholeSortOfGeneralMishMash keeps turning up in our OO modeling course (probably because I am teaching it). Put two people together to discuss their relative ontologies, and you already have your first double projection from the WholeSortOfGeneralMishMash, and the mishy mashiness of it starts to become apparent... but then I live in relativist overload these days... -- AlistairCockburn, CulturalRelativist


I think computer systems are simulations of the operation of abstract models: A General Ledger, for example, simulates the actions of an abstract accounting model.
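A rough sketch of what such an abstract accounting model might look like as code (a hedged illustration only; the Account and Ledger names and the posting rule are made up, not taken from any real system):

 // The program manipulates these model objects; the voltages and magnetic
 // fluctuations underneath only represent them.
 import java.util.ArrayList;
 import java.util.List;

 class Account {
     final String name;
     long balanceInCents = 0;
     Account(String name) { this.name = name; }
 }

 class Ledger {
     private final List<String> journal = new ArrayList<String>();

     // A posting moves value between two accounts *in the model*;
     // nothing physical moves anywhere.
     void post(Account from, Account to, long cents, String memo) {
         from.balanceInCents -= cents;
         to.balanceInCents   += cents;
         journal.add(memo + ": " + cents);
     }
 }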

A file system of an O/S simulates the creation and manipulation of "named files" which contain "sequential streams of bytes that can be accessed and changed." The physical voltage levels and magnetic fluctuations that actually exist in the computer aren't really accounts or files at all; we just use them to represent our abstract concepts of such entities.

Thus an ontology, an abstract model representing a given system, is essential for building any computer system at all. You'll have one, even if you don't formalize it:

If you do ad-hoc system building, without formal design or even refactoring, the abstract model of your system will be as much a convoluted unmaintainable mess as your code, but the abstract model will still be there.

Thus, when a domain has a widely accepted ontology, it will be easier to build a computer system for it. Reuse, OO design, and all other parts of the task will be easier too, but that just follows from the first. -- JeffGrigg


Isn't it funny that OO systems designed for generality tend to gravitate towards some control/data split? Strategies and metadata? -- MichaelFeathers
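For what it's worth, a small sketch of what that split tends to look like (the DiscountStrategy and PricingRules names are invented here purely for illustration, not from any particular framework):

 import java.util.HashMap;
 import java.util.Map;

 // "Control": behaviour captured in small interchangeable strategy objects.
 interface DiscountStrategy {
     double apply(double price);
 }

 class NoDiscount implements DiscountStrategy {
     public double apply(double price) { return price; }
 }

 class PercentOff implements DiscountStrategy {
     private final double fraction;
     PercentOff(double fraction) { this.fraction = fraction; }
     public double apply(double price) { return price * (1.0 - fraction); }
 }

 // "Data": a metadata table that says which behaviour applies when.
 class PricingRules {
     private final Map<String, DiscountStrategy> byCustomerType =
         new HashMap<String, DiscountStrategy>();

     void register(String customerType, DiscountStrategy s) {
         byCustomerType.put(customerType, s);
     }

     double priceFor(String customerType, double listPrice) {
         DiscountStrategy s = byCustomerType.get(customerType);
         return (s == null) ? listPrice : s.apply(listPrice);
     }
 }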


"...that the only areas in which OO programmers have achieved reuse are those in which, for whatever reason, there is now a well-understood, if not completely formalized, ontology."

Surely the biggest area of reuse is in window widgets. I missed the ontology on that one. Then there's relational database access, and while Ted may see ontology in there, I don't. I see math. Let's see, report generation, we all recall MrAristotle On Reporting. Second only to Aquinas' writings on HotDraw.

Maybe the original claim isn't quite so true? -- RonJeffries

One of the standard OO examples is a bank account. If it's a CORBA example, there is an account object; if it's an OLTP example, there's a teller object. Since the ontology is artificial anyway (invented by banks to explain how they deal with money they don't really have), it's well-defined. Every bank uses the ontology, every bank customer understands it. But I don't see any code re-use happening there :-(.
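The well-understood part is easy to see; the shared ontology is more or less captured by a tiny interface like this (hypothetical names, just an illustration of how little there is to it):

 // About all there is to the shared "account" ontology.
 interface Account {
     void deposit(long amountInCents);
     void withdraw(long amountInCents) throws InsufficientFundsException;
     long balance();
 }

 class InsufficientFundsException extends Exception {
 }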

It seems the most successful business ontologies are those supported by ERP software such as SAP, because companies change their business practices to match the software.

-- JohnFarrell


Reminds me of the "birds" analogy which I use when discussing learning. When I was little and saw something hopping about in the garden or flying in the sky, it was just a "bird", so anything I learned from watching these strange creatures was just associated with "bird". Where my observations conflicted, they were either written off as essentially meaningless differences (some birds are black, some are brown, and some are multi-coloured. Whatever), or ignored (all birds go "tweet tweet", so they don't go "quack quack"). All the time I thought like this I really knew very little about birds. At some point in my youth, however, someone named some types of bird for me. This is a robin, that is a blackbird, over there is a duck. Suddenly I was able to associate my observations on a much finer-grained level. A robin has a red front, a blackbird is black, a duck goes "quack quack", but they are all also birds. I have since progressed to a much wider range of ornithological knowledge.

So my contention is that we learn by continually adding observations and examples, but without names to attach them to, our observations may end up being ignored or generalized and therefore of little use. In the context of software reuse, I find naming vital. Only once someone knows that such a concept as a "list" or "hash table" exists can they start associating observed behaviour, needs and products with it. The alternative is to associate all observations with just "software". 'Software is infinitely flexible; in this case we need some "software" which does...' With that viewpoint, there is no drive to look for available things to reuse or to produce things which may be reused. -- FrankCarver
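A trivial illustration of the difference a name makes (the word-counting example is made up; the point is only that knowing the name "hash table" lets you reach for an existing one):

 import java.util.HashMap;
 import java.util.Map;

 class NamingExample {
     public static void main(String[] args) {
         // Knowing the concept has a name means reusing an existing implementation...
         Map<String, Integer> wordCounts = new HashMap<String, Integer>();
         for (String w : "the quick brown fox the".split(" ")) {
             Integer n = wordCounts.get(w);
             wordCounts.put(w, n == null ? 1 : n + 1);
         }
         System.out.println(wordCounts);
         // ...instead of growing an unnamed, unreusable lookup structure
         // somewhere inside "the software".
     }
 }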


Yessir, I lump most of my problems into this WholeSortOfGeneralMishMash, via WhatTheMeaningOfIsIs. -- JohnClonts

