Method Of Extensibility

I want to be able to use this for a long time into the future, or: I want to do more with this! ThinkingOutLoud.DonaldNoyes.20110606


When designing a system which is to be used for an extended period of time, say ten or twenty years, one can expect changes in technology, systems and equipment which, under normal system design methods, will make the system inefficient, slow, and awkward to use, given the rapid pace of change.

What I am thinking about here is the application of a method which will make both the executive programs and the data they employ extensible.

To do so, at least two characteristics of operation need to be held fixed in the executive and the data.

This means choosing mechanisms, structures and procedures that are almost certain to persist into the future: ones whose data can be located and retrieved, which can be made to function on the local machine until connections are restored, and whose operations on a local store can then be synchronized with any Web-based store.
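A minimal sketch of that last property in Python, assuming hypothetical LocalStore and WebStore classes (none of these names come from the page itself): every operation is recorded locally first, then replayed to the Web-based store whenever a connection returns.

    import json
    from pathlib import Path

    class LocalStore:
        """Append-only local log of operations; keeps working while offline."""
        def __init__(self, path="local_store.jsonl"):
            self.path = Path(path)

        def record(self, op):
            with self.path.open("a") as f:
                f.write(json.dumps(op) + "\n")

        def pending(self):
            if not self.path.exists():
                return []
            with self.path.open() as f:
                return [json.loads(line) for line in f]

    class WebStore:
        """Stand-in for any Web-based store reachable over the network."""
        def apply(self, op):
            print("synced:", op)

    def synchronize(local, web):
        """Replay locally recorded operations once connections are restored."""
        for op in local.pending():
            web.apply(op)

    local = LocalStore()
    local.record({"action": "put", "key": "note", "value": "draft"})
    synchronize(local, WebStore())  # run whenever connectivity returns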

How this is to be done is what I call the MethodOfExtensibility. If it works and is not broken, but it can work better without severing its past reachability, then ExtendIt!

Involved in making this happen are the systems of operation and the systems of storage. We have already made at least one pair of the everyday artifacts we use a candidate for study in arriving at such a method: the InternetDocument and the InternetBrowser. The processes, platforms and displays are ever changing, and yet these remain useful because of a common purpose and reasoning: I need it; I put it somewhere, or it exists somewhere; and I can acquire and display it on whatever I am using now.

What more is needed:


There are many 'methods of extensibility' we could favor: PluginArchitecture, PluggableArchitecture, MultiCaster, BlackboardMetaphor, PublishSubscribeModel, ContentCentricNetworking, DataBusPattern, AlternateHardAndSoftLayers (embedding scripts, calling ForeignFunctionInterface), LateBinding and open recursion, adding slots and callbacks (AccessOrientedProgramming), ad-hoc reflection and MonkeyPatching, TransClusion, MultiMethods, et cetera. Admittedly, the set of valid options shrinks a great deal if we want nice composition properties (such as safety, security, scalability, performance, persistence, concurrency control, failure handling, resource management).
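To make one of these concrete, here is a minimal PublishSubscribeModel sketch in Python (the EventBus name is invented for illustration): publishers and subscribers never reference one another, so new behavior can be added years later without editing existing code.

    from collections import defaultdict

    class EventBus:
        """Tiny publish/subscribe hub; either side extends independently."""
        def __init__(self):
            self._subscribers = defaultdict(list)

        def subscribe(self, topic, handler):
            self._subscribers[topic].append(handler)

        def publish(self, topic, payload):
            for handler in self._subscribers[topic]:
                handler(payload)

    bus = EventBus()
    bus.subscribe("document.saved", lambda doc: print("indexing", doc))
    # Added later, without touching the publisher or other subscribers:
    bus.subscribe("document.saved", lambda doc: print("backing up", doc))
    bus.publish("document.saved", "MethodOfExtensibility")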

We can potentially benefit from utilizing multiple orthogonal methods of extensibility, e.g. distinguishing configuration from composition. I've been idly pursuing such models (cf. PolicyInjection) to achieve more resilient configurations. One interesting possibility I've found is that we can import modules based on their API and a few assertions - i.e. a sort of ContentCentricNetworking approach to resource discovery. This is very extensible and resilient against renaming, relocation, name conflicts, et cetera. It can be secured by controlling the search location.
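A sketch of what such an import might look like in Python; the discover function and the 'vendor' search package are hypothetical, not an existing API. A module is requested by the API it exports and an assertion it must satisfy, not by name or location, and restricting the search packages is the security control mentioned above.

    import importlib
    import pkgutil

    def discover(api_name, requirement, search_packages=("vendor",)):
        """Return the first exported `api_name` that passes `requirement`.
        Limiting `search_packages` controls where we are willing to look."""
        for pkg_name in search_packages:
            pkg = importlib.import_module(pkg_name)
            for info in pkgutil.iter_modules(pkg.__path__):
                mod = importlib.import_module(pkg_name + "." + info.name)
                candidate = getattr(mod, api_name, None)
                if candidate is None:
                    continue
                try:
                    if requirement(candidate):
                        return candidate
                except Exception:
                    continue  # a crashing candidate is simply skipped
        raise ImportError("no module satisfies '%s'" % api_name)

    # Resilient to renaming and relocation: any module under 'vendor'
    # exporting a passing `sort` will do.
    sort = discover("sort", lambda f: f([2, 1, 3]) == [1, 2, 3])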

As far as the maintenance element goes, I believe we can achieve technological pressure for systematic refactoring and maintenance across diverse projects. We can learn from WikiWiki here: there is potential for AccidentalLinking, both HappyCollisions and 'sad' ones (e.g. due to ambiguity or name collision). Happy collisions allow us to discover what we need, start refining it to meet our purpose, and then share the refinements. And sad collisions still have a useful consequence: they cause developers to review, refine, and discuss code and ontologies from other projects in order to eliminate the collision.

One possibility is discovering modules by API (unique function names) and requirements testing (e.g. that 'sort [2,1,3] == [1,2,3]', or that it is type safe). This allows arbitrary collisions between projects, unlike niche namespaces (such as com.example.sort), which prevent them. Further, the discovery mechanism supports a relatively high ratio of happy collisions, supports easy refinement and adapters, and allows an IDE to provide a list of near-matches. The 'search' potentially allows linkers to act as constraint solvers, composing a tree of compatible modules that meets all stated requirements.
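Continuing the sketch above (the classify helper is again hypothetical), a discovery step can report exact matches and near-misses: a single exact match is a happy collision, several exact matches are a 'sad' one demanding review and disambiguation, and the near-misses are what an IDE could list.

    def classify(candidate_modules, api_name, tests):
        """Partition candidates into exact matches and near-misses."""
        matches, near = [], []
        for mod in candidate_modules:
            fn = getattr(mod, api_name, None)
            if fn is None:
                continue
            passed = 0
            for test in tests:
                try:
                    passed += bool(test(fn))
                except Exception:
                    pass  # a crashing test simply does not count
            if passed == len(tests):
                matches.append(mod)  # exact match
            elif passed:
                near.append(mod)     # close enough to suggest in an IDE
        return matches, near

    tests = [lambda f: f([2, 1, 3]) == [1, 2, 3],
             lambda f: f([]) == []]
    # matches, near = classify(discovered_modules, "sort", tests)
    # len(matches) == 1 -> happy collision; len(matches) > 1 -> a sad one:
    # review, refine, and disambiguate before linking.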

We can use linked DistributedVersionControl repositories to systematically share resources. With DVCS, developers will be pressured to give as much as they take - i.e. code is pushed and pulled through public repositories, but code in a public repository is subject to change, and those changes tend to flow back to the original developer to avoid branching. Thus, when you share code, you are also under pressure to accept changes to it. Using intermediate company or project repositories still allows for security - an implicit web of trust based on relationships between repositories. Further, DVCS should also eliminate the concern that a UsefulUsableUsed module will be 'destroyed' or 'abandoned': it will still be maintained in user repositories and shared via public repositories even if the original developer leaves.



