Delta Isolation

The concept of isolating the differences in programming code in order to extend or alter existing behavior. It is most often found in ObjectOrientedProgramming inheritance. In other words, "grouping by the differences". It is essentially saying, "B is like A, except for the following...".


I personally don't find that it works as well as advertised, despite being an interesting concept. I would like to see set-based differentials rather than just tree-based (OO) ones. Trees are too limiting and don't reflect the graph-like real world, IMO. MultipleInheritance is sometimes cited as a solution to this, but it seems like an afterthought to OOP rather than something built in. For example, MI does not have clean ways to deal with overlaps and multiple "matches". (See SetsAndPolymorphism.)

Plus, it is tough to manage the granularity of difference. The granularity might not fall on existing method boundaries. One may find one needs to override 1/3 of a method (see below). Sure, one can refactor the code, but ANY technique of difference management can be "corrected" by reworking code. Refactoring as a "solution" is a truism. And a cop-out. (RefactoringMeansFailure)


Procedural Delta Isolation Techniques

Some procedural languages may provide the ability for "nearby" routines to override those further away in the search path, so that we don't have to duplicate the list of routine calls for each "instance". However, this is generally not easy to do with the data. Current procedural philosophy would probably tend to use a database, rather than a code tree, to manage any taxonomy of data. DeltaIsolation may be one view of the data among many, and thus it does not make sense to hard-wire in one particular structure shape. "Grouping by isolation" is simply a viewpoint, a query; a "grouping" is a relativistic view. But current code tools are not ready to integrate subroutines and scope management into databases. An alternative, file-based delta-ing, is described below.
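As a rough illustration, here is a minimal Java sketch of such a search path, simulated with a layered routine table; the two layers, the routine names, and the Map-based dispatch are all hypothetical:

   import java.util.HashMap;
   import java.util.Map;

   public class RoutineSearchPath {
       // "Far" layer: the standard routines shared by every instance.
       static Map<String, Runnable> standard = new HashMap<>();
       // "Near" layer: per-instance overrides, consulted first.
       static Map<String, Runnable> nearby = new HashMap<>();

       // Resolve a routine by searching the nearby layer before the standard one.
       static Runnable resolve(String name) {
           return nearby.getOrDefault(name, standard.get(name));
       }

       public static void main(String[] args) {
           standard.put("print", () -> System.out.println("standard print"));
           standard.put("total", () -> System.out.println("standard total"));
           nearby.put("total", () -> System.out.println("customized total"));

           // The call list is written once; only the differing routine is redefined.
           resolve("print").run();  // standard print
           resolve("total").run();  // customized total
       }
   }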


Copied from OoVsFunctional:

The granularity of difference is often not on class or method boundaries. It is tough to override 1/3 of a method without mass refactoring. You have to pick the right chunking size up front [to avoid problems].

I am not sure whether this is an issue with inheritance in general, or using inheritance for DeltaIsolation in particular. If the first, then perhaps it does not belong here, but under Inheritance instead.

I disagree that it is tough to override 1/3 of a method without mass refactoring; I do that quite often. When you find that you need to override 1/3 of a method, that means you only have one implementation right now (or that the other implementations do not need overriding). Simply put the 1/3 you need to override in another protected method, then override it in your sub-class; nothing else needs to be changed. This is very easy to do with current IDEs like EclipseIde.

Doesn't that create some duplication? (OnceAndOnlyOnce) Perhaps I am not interpreting this correctly. Factoring purists would possibly disagree with that approach. It would also seem to clutter up your method name-space.

That is precisely for removing duplication. If you have

   class A  {  public void fn() { f1(); f2(); f3(); } }

where f1, f2, and f3 could each really be many statements, and you need to create a class B whose fn() calls f1(); f4(); f3(); (i.e. overrides the middle 1/3), you change class A to

   class A  {  public void fn() { f1(); fx(); f3(); }  protected void fx() { f2(); } }
   class B extends A {  protected void fx() { f4(); } }

so you avoid repeating f1() and f3() in class B. If f2() has a lot of ties with f1() and f3(), things will get more difficult, but the basic approach still works fine.
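For concreteness, here is a minimal runnable sketch of that extraction in Java; the System.out prints are placeholders for whatever f1 through f4 really do:

   public class DeltaDemo {
       static class A {
           public void fn() { f1(); fx(); f3(); }
           protected void f1() { System.out.println("f1"); }
           protected void fx() { f2(); }  // extracted override point
           protected void f2() { System.out.println("f2"); }
           protected void f3() { System.out.println("f3"); }
       }
       static class B extends A {
           // Override only the middle step; f1 and f3 are not repeated.
           @Override protected void fx() { System.out.println("f4"); }
       }
       public static void main(String[] args) {
           new A().fn();  // prints f1, f2, f3
           new B().fn();  // prints f1, f4, f3
       }
   }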

Does class B inherit from class A here? [Yes, clarified above.] This still appears messy to me, especially if you continue to have granularity issues over time. The "exception" methods get mixed in with the primary methods. IOW, it does not abstract away the niggly details. And you still had to alter method "fn". If you use a simple IF statement, you don't have to birth out new methods or functions (unless it gets long).

If you add an IF statement, you are still modifying fn(), and when f1, f2, f3, f4 consist of many statements instead of a single one, you will be making fn() into a convoluted mess. As for abstracting away niggly details, it is certainly no worse than adding an IF in fn().

No better either. Overall, the IF appears to me to involve *less* code shuffling.

Plus, since you are overriding the f2 portion of fn(), it is very likely that you can give a meaningful name to fx();

Use a comment. Besides, make a sub-function if that is really what you want.

see MethodsVsCodeFragments for discussion on adding new methods. The fact that you need to override 1/3 of fn() is a good indication that fn() is not of the right granularity.

Right, but it is hard to know the "proper" granularity ahead of time. The granularity is based on requests that come along, often not anything you can ascertain by thinking about it at design time. In my domain one is often modeling a manager's mind rather than any logical universal concept. The popularity of XP and the recognized drawbacks of BigDesignUpFront are testament to this.

[I disagree; it's quite easy to know the proper granularity ahead of time: every method does one thing, or is a list of calls to other methods, i.e. a method is either a manager or a worker. That's it, it's that simple. If each method does one thing, then down the road you can override any single step. The example above is wrong... it should be:]

   class A {
        public void fn() { f1(); f2(); f3(); }
        public virtual void f1() { /* default step 1 */ }
        public virtual void f2() { /* default step 2 */ }
        public virtual void f3() { /* default step 3 */ }
   }
   class B : A {
        public override void f2() { /* do what would have been in f4 */ }
   }
   class C : A {
        public override void f1() { /* a different step 1 */ }
   }
   class D : A {
        public override void f3() { /* a different step 3 */ }
   }

[No need to extract the call to f2 into another method; it's already another method, so just override it directly. Now I have 4 classes, all executing fn differently without duplicating it. Make each method do one thing, and there is nothing to think about; it's a simple rule that always works and always provides the appropriate flexibility down the road.]

Also, if strict polymorphism is used, then one may have to overhaul an interface in order to get finer granularity. If you have, say, 100 things that use an interface, but we need to overhaul that interface to give just one of those 100 a new feature that we did not anticipate, then aren't we letting the tail wag the dog?

[What is "strict polymorphism"? Not sure what you mean about overhauling an interface; examples please? In the above case, all we add is a protected method in the class containing the original implementation, which is transparent to all users of class A.]

Try turning IS-A driven feature selection into HAS-A driven.
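A minimal sketch of the HAS-A version in Java, reusing the same hypothetical f1/f2/f3 steps from above: the varying step is a plugged-in part rather than a subclass override.

   public class HasaDemo {
       interface Step { void run(); }
       static class A {
           private final Step middle;  // HAS-A: the variable step is a component
           A(Step middle) { this.middle = middle; }
           public void fn() {
               System.out.println("f1");
               middle.run();
               System.out.println("f3");
           }
       }
       public static void main(String[] args) {
           // Feature selection happens at construction time, not by subclassing.
           A standard = new A(() -> System.out.println("f2"));
           A variant  = new A(() -> System.out.println("f4"));
           standard.fn();
           variant.fn();
       }
   }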


Fragile Parent Problem

Moved from DoubleDispatchExample

Here's the actual situation I have in mind: you're coding a wargame. Moving into a hex has a movement cost, which depends on the terrain type of the hex, and the movement type of the unit. Different people should be able to add new terrain types and movement types without having to recompile existing code. There would be some sort of hierarchy: "hoofed movement is just like foot movement, except that roads are cheaper and forests are more expensive", "swamp is just like forest, except it's even harder for wheeled and tracked units", etc.
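For illustration, a minimal Java sketch of such a "like X, except..." cost table, using a child delta map that falls back to its parent; the terrain names and costs are made up:

   import java.util.HashMap;
   import java.util.Map;

   public class MovementCosts {
       // Look in the child's delta first, then fall back to the parent table.
       static int cost(Map<String, Integer> delta, Map<String, Integer> parent,
                       String terrain) {
           return delta.getOrDefault(terrain, parent.get(terrain));
       }

       public static void main(String[] args) {
           Map<String, Integer> foot = new HashMap<>();
           foot.put("clear", 2);
           foot.put("road", 2);
           foot.put("forest", 2);

           // Hoofed is just like foot, except roads are cheaper
           // and forests are more expensive.
           Map<String, Integer> hoofed = new HashMap<>();
           hoofed.put("road", 1);
           hoofed.put("forest", 3);

           System.out.println(cost(hoofed, foot, "clear"));   // 2, from foot
           System.out.println(cost(hoofed, foot, "forest"));  // 3, overridden
       }
   }

Note that looking up through the parent at query time couples the child to later parent changes, which is exactly the fragility discussed next.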

I disagree that a hierarchical DeltaIsolation is the way to go. After a while you build up a vine of differences such that you cannot change a parent without risking busting some or all of the children. I prefer to view features like shopping in a supermarket: you browse the available features and put them into your cart as you need them. If there is something similar that you want to borrow from, then simply copy that set of features and change what is different. This de-couples you from changes to the original. Only apply OnceAndOnlyOnce factoring if there are, say, 3 or 4 copies; two is usually insufficient in my experience. Sure, sometimes you want to "inherit" differences from the original, but sometimes you don't. Maybe there is a compromise, but the set approach seems safer in my opinion. Maybe tie together related features such that there is a notification if the original changes; upon notification, you decide whether to update to match or leave it alone. However, this requires a much more complex structure/system than either trees or sets. -- top
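A rough sketch of that notify-on-change compromise, assuming a simple listener callback; the FeatureSet class and its members are hypothetical:

   import java.util.ArrayList;
   import java.util.List;

   public class FeatureSetDemo {
       static class FeatureSet {
           private final List<Runnable> onChange = new ArrayList<>();
           void subscribe(Runnable listener) { onChange.add(listener); }
           void change() {
               // ... alter the original feature set ...
               for (Runnable r : onChange) r.run();
           }
       }
       public static void main(String[] args) {
           FeatureSet original = new FeatureSet();
           // The copy stays decoupled: it only hears about changes, and a human
           // decides whether to re-sync the copy or leave it alone.
           original.subscribe(() ->
               System.out.println("Original changed; review the copy?"));
           original.change();
       }
   }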


File-based Approach

One can also use a file-based approach to create a custom "build" for a particular customer. In some cases a custom build is preferred over constant changes to a central code base. However, it may complicate the upgrade process because not all customizations may be compatible with new features. Thus, there may be custom code forks that are left behind. Some customers may have to forgo access to frequent new versions in order to have custom features.

A simple approach to file-based DeltaIsolation is to make a directory (folder) structure that mirrors that of the original, and then put only the files that are different into that folder structure. The DOS commands to create a build set from these two sets then look something like:

  xcopy standardSrc\*.*  customerXsrc\*.*  /s /y
  xcopy customerXdelta\*.*  customerXsrc\*.*  /s /y

(Option "/s" means copy all subfolders, and option "/y" means don't prompt for overwrite confirmation if the file already exists.)

One problem with this approach is that the granularity of files is sometimes too large. We may want delta-ing at the function level instead of the module or class level, for example. Techniques for getting around this depend greatly on the language and IDEs being used. Maybe when file systems start to use RDBMS technology, like that being explored by Microsoft, and IDEs recognize smaller units, this problem will solve itself. Until then, the file level may be "good enough".


Ideally, DeltaIsolation is a view, not a hard-wired grouping of code or info. See SeparationAndGroupingAreArchaicConcepts about code grouping and classification. -top

What, technically, would it mean for DeltaIsolation to be a "view"? I would greatly appreciate support for DeltaIsolation on top of e.g. RDBMS views and other classes of data-fusion, especially in combination with PublishSubscribeModel (so I can have the deltas pushed to me as they occur rather than via polling). So, I can see how these two independent concepts combine by layering. But what would "DeltaIsolationIsaView" look like?

Examples:

  SELECT * FROM thingTracked WHERE daysDiff(sysDate(), changedDate) <=3

  SELECT * FROM defaults d JOIN thing t ON d.ID=t.ID WHERE t.value IS NOT NULL AND d.value <> t.value

So you were talking about viewing some sort of delta in a database with temporal metadata. Where does the 'isolation' fit in?


I've been using 'DeltaIsolation' in a somewhat broader sense, I think, than some of the above.

DeltaIsolation in terms of communication or SystemsSoftware refers generally to the ability to isolate the changes (the delta) to a data-set, object graph, plan, or SceneGraph, and to communicate just the deltas. I.e. in terms of an RDBMS, DeltaIsolation would refer to identifying a set of inserts+deletes+updates necessary to update a remote 'cached' view or table to match a local view or table.
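For instance, a minimal Java sketch of isolating that insert/delete/update set between two keyed snapshots; the map contents here are made up:

   import java.util.HashMap;
   import java.util.Map;

   public class SnapshotDelta {
       public static void main(String[] args) {
           Map<String, String> remote = new HashMap<>();  // the stale cached copy
           remote.put("a", "1");
           remote.put("b", "2");
           Map<String, String> local = new HashMap<>();   // the current state
           local.put("b", "3");
           local.put("c", "4");

           Map<String, String> inserts = new HashMap<>();
           Map<String, String> updates = new HashMap<>();
           Map<String, String> deletes = new HashMap<>(remote);

           for (Map.Entry<String, String> e : local.entrySet()) {
               String old = deletes.remove(e.getKey());
               if (old == null) inserts.put(e.getKey(), e.getValue());
               else if (!old.equals(e.getValue())) updates.put(e.getKey(), e.getValue());
           }
           // Ship only the deltas to bring the remote copy in line.
           System.out.println("insert " + inserts);           // {c=4}
           System.out.println("update " + updates);           // {b=3}
           System.out.println("delete " + deletes.keySet());  // [a]
       }
   }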

DeltaIsolation in this sense is useful for graphics applications (e.g. RefreshRectangles), DataflowProgramming and ReactiveProgramming, distributed programming of all sorts, data-fusion, and so on. It saves bandwidth and processing power, and can thereby reduce update latencies.

This broader sense of DeltaIsolation probably derives from DataAndCodeAreTheSameThing. I.e. updating a typical InteractiveSceneGraph, such as sending deltas to an HtmlDomJsCss application, will involve isolating the JavaScript code for the updates.

Perhaps this deserves a separate WikiWord, such as DataDeltaIsolation.


Can DllHell be considered an example of failure of DeltaIsolation?


See Also: LimitsOfHierarchies, GroupRelatedInformation, FragileBaseClassProblem, AttributeShufflingReduction, ClassificationProblem.


CategoryReuse, CategoryInfoPackaging, CategoryInheritance?

