This refers to WilliamKent's DataAndReality, in which he outlines his simultaneous preference for irreducible n-ary relations and pseudo-binary relationships. It may shed some light on the ObjectRelationalPsychologicalMismatch.
DataAndReality, though written in the 1970s, speaks mainly from a generic "record-oriented" model of data and argues about the tradeoffs between the record approach and other approaches (hierarchical, relational, etc.). I think it's pretty clear the Object Oriented "data model" is really a record-oriented data model with data/type inheritance.
In a pure-binary model, the pointer is the relationship, and the model denies the existence of relationships involving more than two things at a time. In a pseudo-binary approach, the pointer is just... well, a pointer. It becomes just glue, and the relationship itself becomes an intersection record.
Let me quote (he probably explains it better than I):
- It's really simple: out of all this assortment, I prefer one model - which is simultaneously the pseudo binary and the irreducible n-ary. Let me try to explain why I see negligible difference between them, at least in the essential structure of the model of a ternary relation. In the pseudo binary model, we link four objects to model a relationship among a part, a warehouse, and a supplier. In the irreducible n-ary relational model, we also have four objects. There is one record each for the part, the warehouse, and the supplier, containing information about those entities. And there is the intersection record with three fields, containing the keys (identifiers) for that part, that warehouse and that supplier.
- We can make the geometries of these two representations look alike. We are only dealing with differences in the portrayal of the linkages. In the pseudo binary model, we draw explicit lines, suggesting some kind of internal pointer mechanism in the implementation. In the relational model, on the other hand, linkages are discovered by matching symbols (in this case, the identifiers in the intersection record match the keys of the other records). If we draw lines between the matching symbols, we see a topology quite identical to that of the pseudo binary model. Thus if we ignore the specific technique for achieving linkages, the two models look quite alike.
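Here is a minimal sketch in Python of the two shapes Kent describes for the part/warehouse/supplier example; the class and field names are illustrative, not taken from the book.

 from dataclasses import dataclass

 @dataclass
 class Part:
     key: str

 @dataclass
 class Warehouse:
     key: str

 @dataclass
 class Supplier:
     key: str

 # Pseudo-binary: the relationship is a fourth object holding three pointers;
 # each pointer is one of the "explicit lines" in Kent's diagram.
 @dataclass
 class Supply:
     part: Part
     warehouse: Warehouse
     supplier: Supplier

 # Irreducible n-ary: the relationship is an intersection record whose three
 # fields are symbols matching the keys of the part, warehouse, and supplier records.
 supply_row = {"part": "P1", "warehouse": "W1", "supplier": "S1"}

As Kent notes, if you draw lines between the matching symbols in the second form, the topology is the same as the first; only the linkage mechanism differs.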
On the other hand, the problem with pseudo-binaries is that they may not scale conceptually. Just think:
- for an N-ary relationship, we require N binary relationships
- that implies N cardinality constraints, arguably dispersed across the model
- and if these links are bi-directional, you have to figure out how to keep both sides of each relationship updated consistently (sketched below).
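To make that last point concrete, here is a rough sketch (hypothetical classes, not any particular framework's API) of the bookkeeping a bidirectional pseudo-binary link forces on you: every change to the ternary relationship has to update three forward pointers and three back-reference collections, and each cardinality rule is enforced on one link in isolation.

 class Entity:
     def __init__(self, key):
         self.key = key
         self.supplies = []          # reverse side of the link, one list per participant

 class Supply:
     def __init__(self, part, warehouse, supplier):
         # a cardinality rule such as "a part participates in at most 5 supply links"
         # ends up checked here, on this one link, apart from any other constraints
         if len(part.supplies) >= 5:
             raise ValueError("cardinality constraint violated for part")
         self.part, self.warehouse, self.supplier = part, warehouse, supplier
         for obj in (part, warehouse, supplier):
             obj.supplies.append(self)   # keep the reverse side of each binary link in sync

     def remove(self):
         # forgetting any one of these leaves a dangling back-reference
         for obj in (self.part, self.warehouse, self.supplier):
             obj.supplies.remove(self)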
But note that all of the above is mainly an implementation limitation of today's database management systems and/or object-relational mapping frameworks. There are two practical alternatives to this "pseudo-binary tarpit":
- Forgo modelling such data in an object model and instead use SQL's declarative constraints, triggers, and procedures directly to maintain the integrity of such a structure.
- Build a general set-oriented relational integrity layer inside your domain model support framework or on top of your object database (a sketch follows this list).
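A minimal sketch of what the second alternative might look like, assuming an in-memory domain model; Relation and its insert method are hypothetical names, not a real framework's API. The ternary relationship becomes one relation whose declared key and references are checked set-wise, rather than by per-link pointer maintenance.

 class Relation:
     def __init__(self, name, key, references=None):
         self.name = name
         self.key = key                       # tuple of attribute names forming the key
         self.references = references or {}   # attribute -> Relation it must match
         self.rows = []

     def insert(self, row):
         k = tuple(row[a] for a in self.key)
         if any(tuple(r[a] for a in self.key) == k for r in self.rows):
             raise ValueError(f"{self.name}: duplicate key {k}")
         # referential integrity: each foreign value must match some key in the target
         # relation (this sketch assumes single-attribute keys on referenced relations)
         for attr, target in self.references.items():
             if not any(r[target.key[0]] == row[attr] for r in target.rows):
                 raise ValueError(f"{self.name}.{attr}: no matching {target.name} row")
         self.rows.append(row)

 parts      = Relation("parts", key=("id",))
 warehouses = Relation("warehouses", key=("id",))
 suppliers  = Relation("suppliers", key=("id",))
 supplies   = Relation("supplies", key=("part", "warehouse", "supplier"),
                       references={"part": parts, "warehouse": warehouses,
                                   "supplier": suppliers})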
Neither is a completely satisfying solution.
For modelers, there's the HigherOrderEntityRelationshipModel (HERM). A discussion of it vs PseudoBinaryRelationships is here: http://www.informatik.tu-cottbus.de/~thalheim/HERM/hermdiscussion/.
See also: MultiParadigmDatabase