Technique With Many Prerequisites

Proponents of certain GoldenHammers, or perhaps bronze hammers even, often cite a list of prerequisites that must be met before the technique can succeed. When the technique fails, the failure is often, if not usually, "traced" back to one or more missing prerequisites.

I consider this a smell, because meeting all the prerequisites is often impractical, or at least unlikely to happen, for social and political reasons beyond the control of the technology itself or of individual technicians. A big stack of prerequisites is a ripe target for excuses when the technique fails to deliver.

Typical prerequisites include:

Incremental techniques are usually easier to try, integrate, and dabble with, and they require a smaller up-front investment. This does not mean that prerequisite-heavy techniques should be ignored, but they must deliver a significant advantage over incremental approaches to compensate for their higher risk. They are high-risk because they require all cylinders to fire: three out of four firing is still enough to doom the effort. (Otherwise, they wouldn't be "prerequisites".)

A physical example is the jet engine. Many WWII technicians recognized its huge potential fairly early. However, it required so many other surrounding technologies to be improved or changed that it was arguably a poor bet at the time. Germany could have spent the resources on a much larger quantity of conventional planes for a better kill-per-dollar ratio. True, had the war lasted longer, the expense might have started to pay off. Many other Hitler projects arguably fit this pattern, such as the V2 and Germany's later-model tanks.

In these cases, time eventually caught up with them. But we can't be sure this will always happen. Spending R&D money on the next-generation vacuum tube may not be worth it if transistors are about to replace all tubes, for example.

--top

As a side note, there are still applications where vacuum tubes are state of the art. MicrowaveOvens, CycloTrons, IonThrusters, and TravellingWaveTubes are examples.

I was generally thinking of "spending relative to transistors".


A GoldenHammer that doesn't require change is probably fool's gold. Low-hanging fruit gets picked early, so it is only readily available to pioneers of a new technique. Incremental improvement has diminishing returns.

Languages and techniques have ceilings. Fortunately, one can upgrade the language/technique itself over time, using code-walkers for preprocessing, widespread CompileTimeResolution, AbstractFactory, PolicyInjection, third-party analyzers, AutomatedCodeGeneration, and new frameworks and protocols written to overlay the existing language. These will raise those ceilings somewhat, delegating new features to a combination of SelfDiscipline and compile-time analysis. But at that point you know the language is heading towards an entropy-death, with ever-increasing configuration-management, complexity, and composability issues. These sorts of improvements are the death knell of a language. Indeed, you effectively aren't writing in the original language anymore... you're instead writing in some lingua frameworka (FrameworkIsLanguage) and juggling frameworks that were never designed to work together.
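To make the lingua frameworka point concrete, here is a minimal sketch in Python (all class and file names are hypothetical) of how an AbstractFactory plus PolicyInjection overlay a feature - retry on IO failure - that the surrounding code doesn't otherwise provide; note that the application ends up written against the framework's vocabulary rather than the language's.

 class RetryPolicy:
     """Injected policy: retry a callable a fixed number of times."""
     def __init__(self, attempts=3):
         self.attempts = attempts
     def run(self, fn, *args):
         last_error = None
         for _ in range(self.attempts):
             try:
                 return fn(*args)
             except OSError as e:
                 last_error = e
         raise last_error

 class FileStore:
     def __init__(self, name):
         self.name = name
     def read(self):
         with open(self.name) as f:
             return f.read()

 class PolicyWrappedStore:
     """Wraps a store so every read goes through the injected policy."""
     def __init__(self, store, policy):
         self.store, self.policy = store, policy
     def read(self):
         return self.policy.run(self.store.read)

 class StorageFactory:
     """AbstractFactory: callers name the factory, never the concrete classes."""
     def __init__(self, policy):
         self.policy = policy
     def open(self, name):
         return PolicyWrappedStore(FileStore(name), self.policy)

 # Usage: application code now "speaks StorageFactory", not plain file IO.
 store = StorageFactory(RetryPolicy(attempts=3)).open("config.txt")
 # store.read() would retry transient IO failures up to three times.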

That's the time to start building a GoldenHammer that raises those ceilings, reducing entropy, allowing a wider range of features to be achieved more universally among projects, allowing more composition between them, and eliminating dependence on frameworks and AutomatedCodeGeneration: safety, security, modularity and composability, distribution, concurrency, consistency, memory management, persistence, policy injection for failure handling and recovery, disruption tolerance, graceful degradation, resilience or regeneration after attack or crash, IDE support for syntax highlighting and refactoring, fast compilation after changes, and so on. Achieving these features will require significant reorganization of the language, making distinctions and separating concerns that the previous language did not.

Using any real GoldenHammer requires change. Improvement is change.

That said, modularity properties with loose coupling (e.g. good support for asynchronous MessagePassing, AlternateHardAndSoftLayers, PolicyInjection, etc.) do much to allow a beachhead-style approach to integrating with existing toolsets, and thus can help dodge the problems of the NashEquilibrium. You might not get all the useful features without buy-in from both clients and servers (i.e. you can't readily achieve DistributedTransactions or various forms of security unless everyone is helping out).
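As a sketch of that loose-coupling point (hypothetical names, using Python's standard queue and threading modules), a new consumer establishes a beachhead beside legacy producer code; neither side needs the other's internals, only the shared mailbox:

 import queue
 import threading

 def legacy_producer(outbox):
     """Stands in for an existing toolset; it only knows about the mailbox."""
     for i in range(3):
         outbox.put({"event": "order", "id": i})
     outbox.put(None)                 # sentinel: no more messages

 def new_consumer(inbox):
     """The new technique handles messages without modifying the producer."""
     while True:
         msg = inbox.get()
         if msg is None:
             break
         print("handling", msg)

 mailbox = queue.Queue()
 worker = threading.Thread(target=new_consumer, args=(mailbox,))
 worker.start()
 legacy_producer(mailbox)
 worker.join()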

Your opinion is noted. But "outsiders" will want more evidence that your prerequisite-heavy approach actually delivers and overcomes the high cost of getting all the pieces and personnel right. It would likely require a SteveJobs-style techno-evangelist to inspire, motivate, teach, and punish sufficiently to pull it all off.

That sort of charisma is only necessary if you want to push a new technique before the old one has made considerable progress towards its inevitable entropy-death. When existing systems reach the point where programmers are spending much of their effort juggling frameworks (in this modern era, said frameworks cover concurrency and consistency, distribution, persistence, data-flow, dependency injection, encryption and secrets protection, GUI, and network IO), people will be looking for change. At that point, new techniques - the good, the bad, and the ugly - will have their chance. Many people will buy the fool's gold. You are more aware than many that technology, like everything else, goes through fads. Think about it logically, then ask yourself: "Would fads really be possible if 'outsiders' actually required as much evidence as I am hypothesizing?"

LifeIsaBigMessyGraph, and no magic wrench will fully fix that fact. In my opinion, it can be helped somewhat by meta-tizing the rules/logic so that they are easier to sift, query, and re-project for study; you instead want to use tons of "strict" rules in the hope of preventing inconsistencies and leaks. I focus on making the system easier to study, and you focus on making it more formal and self-protective. Same goal, different approach.
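Here is a minimal sketch of what I mean by meta-tizing, with made-up rule names: the rules live in a plain data structure instead of scattered if/else branches, so they can be sifted, queried, and re-projected for study.

 # Rules as data rather than hard-coded logic.
 rules = [
     {"name": "max_discount", "applies_to": "order",   "severity": "strict"},
     {"name": "audit_trail",  "applies_to": "payment", "severity": "strict"},
     {"name": "style_guide",  "applies_to": "report",  "severity": "advisory"},
 ]

 # Sift/query: which strict rules touch payments?
 strict_payment_rules = [r["name"] for r in rules
                         if r["severity"] == "strict"
                         and r["applies_to"] == "payment"]

 # Re-project: group rule names by the entity they apply to.
 by_entity = {}
 for r in rules:
     by_entity.setdefault(r["applies_to"], []).append(r["name"])

 print(strict_payment_rules)   # ['audit_trail']
 print(by_entity)              # {'order': [...], 'payment': [...], 'report': [...]}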

I, too, aim to make the system easier to understand... but I do so by making it more formal - based on the axiom that it's easier to understand any given system or subsystem when you can make correct assumptions and predictions about specific, relevant properties of it. There are also secondary effects. For example, I believe extant code will tend to be easier to study when programmers know that certain cross-abstraction-layer optimizations can and will be performed without hand-intervention, allowing them to write code closer to the problem domain instead of 'hand-optimized' code - which tends to be more complex and therefore harder to study.

I agree that better 'search' tools that can provide many different 'views' of the same codebase, as you described in the misnamed SeparationAndGroupingAreArchaicConcepts, together with a good RefactoringBrowser, are a truly excellent way to advance the state of the art in making code systems easier to grok. They can expose and make obvious patterns that would otherwise be buried as a subtle signal in a different view. I fully encourage you to pursue it. But that isn't at all contradictory to having more formal and self-protective code; indeed, in a formal system where code has formal properties against which you may query, you will likely be better able to provide many useful views of the same code without the cost of a bunch of hand-written annotations. (Not that annotations are restricted in such a system, either.) Improving IDEs to provide greater ability to grok the impact of changes to code is also a goal of mine, though I've been pursuing different avenues. I'd like to see ZeroButtonTesting become common: a WikiIde with neat little alerts telling you exactly which UnitTests, IntegrationTests, and project pages are now failing due to your changes and need to be repaired. I'd like to see code with WebOfTrust signatures allowing you to rapidly pick out versions that you feel you can trust.

Even some of those properties are aided by more formalism. Consider testing: not only can more formally restricted systems guarantee useful properties for tests (like confinement - the inability of a test to communicate or cause SideEffects outside its borders), but in a system with a TransactionalActorModel, where the initial actor may abort a behavior yet still return a value based on the 'failed' transaction, one may also perform a full test inside a transaction and then 'undo' it before returning pass/fail. This allows moderate IntegrationTesting even of systems that would otherwise have SideEffects, so long as those effects are undo-able. Neat, eh? But it also requires some 'deep' integration of the formalism's features.
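A minimal sketch of that run-then-undo idea, with an in-memory snapshot standing in for a real TransactionalActorModel (all names hypothetical): the IntegrationTest mutates shared state, the 'transaction' is aborted on exit, and only the pass/fail verdict escapes.

 import copy

 class Transaction:
     """Snapshot-and-restore stand-in for a transactional actor's abort."""
     def __init__(self, state):
         self.state = state
     def __enter__(self):
         self.snapshot = copy.deepcopy(self.state)
         return self.state
     def __exit__(self, exc_type, exc, tb):
         # Always abort: restore the snapshot so the test leaves no SideEffects.
         self.state.clear()
         self.state.update(self.snapshot)
         return False

 def transfer(accounts, src, dst, amount):
     accounts[src] -= amount
     accounts[dst] += amount

 def integration_test(accounts):
     """Exercises real behavior against real state, then checks an invariant."""
     transfer(accounts, "alice", "bob", 25)
     return accounts["alice"] == 75 and accounts["bob"] == 125

 accounts = {"alice": 100, "bob": 100}
 with Transaction(accounts) as state:
     passed = integration_test(state)

 print(passed)     # True: the test really ran...
 print(accounts)   # {'alice': 100, 'bob': 100}: ...but its effects were undone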

Formalisms give the user, compiler, and IDE ways to determine, precisely, which properties of which parts of the 'big messy graph' cannot be affected by particular changes or particular potential sources of error. Since you're relying upon SelfDiscipline (FourLevelsOfFeature), the number of assumptions and predictions users (and compilers/interpreters) can make about code - their own or anyone else's - is quite limited. This means that software architects need to examine more of the system to understand it. The need to process more information to find flaws isn't a PsychologyMatters issue; it's an information-processing issue.
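As a sketch of that information-processing point (hypothetical function and table names), suppose each function formally declares what it may read or write; a tool can then cut away everything a given change provably cannot affect, without anyone having to read the bodies:

 # Declared effects: a small formal property attached to each function.
 EFFECTS = {
     "render_report": {"reads": {"orders"},   "writes": set()},
     "post_payment":  {"reads": {"accounts"}, "writes": {"accounts"}},
     "rebuild_index": {"reads": {"orders"},   "writes": {"search_index"}},
 }

 def possibly_affected_by(changed_table):
     """Functions that touch the changed table; everything else is provably
     out of scope and need not be examined."""
     return [name for name, eff in EFFECTS.items()
             if changed_table in eff["reads"] | eff["writes"]]

 print(possibly_affected_by("accounts"))   # ['post_payment']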

Your opinion is noted.


See also: HighDisciplineMethodology


See: NashEquilibrium, QwertySyndrome, NetworkEffects, PathDependence


CategoryOopDiscomfort

