Discontinuity Spike

A term derived from BertrandMeyer's ContinuityPrinciple (which holds that a small change in a specification should require only a small change in the resulting system): a DiscontinuitySpike occurs where a small change demands a disproportionately large effort. For example, if your PC has 4 ports and you are only using 3, then adding a 4th connection is relatively simple. But when you need a 5th, you encounter a DiscontinuitySpike: getting the 5th connection takes far greater effort than the 4th did, such as buying an expansion card or even a new computer altogether. A bar graph of the effort may look something like this:

  |*******************  (port 1 - original purchase)
  |***                  (port 2)
  |***                  (port 3)
  |***                  (port 4)
  |******************   (port 5 - the spike)
  |***                  (port 6)
  |etc.
  ----------------------
  Effort -->

Note that all paradigms and techniques have DiscontinuitySpikes in at least a few places; I have yet to see one without. Their spikes may fall in different spots, however, so that a change which spikes under one paradigm does not spike under another, while the reverse holds for some other change. Sometimes preventing a DiscontinuitySpike may look something like this:

  |**************        (port 1 - original purchase)
  |**********            (port 2)
  |**********            (port 3)
  |**********            (port 4)
  |**********            (port 5)
  |**********            (port 6)
  |etc.
  ----------------------
  Effort -->

Here, we "spread the pain" to avoid big spikes, but we do not necessarily reduce the total effort; we may even increase it. This has the comfort of fewer "9pm surprises", but may still cost a lot in the end.
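
A minimal sketch of this trade-off in Python, using a growable array as a stand-in for the port example (the growth policies and numbers below are illustrative assumptions, not from any particular system):

  # Two growth policies for a dynamic array. "Doubling" concentrates
  # effort into occasional big spikes; "grow by one" spreads the pain
  # evenly but does far more copying in total.
  def total_copies(n_appends, grow):
      capacity, size, copies = 1, 0, 0
      for _ in range(n_appends):
          if size == capacity:          # buffer full: allocate and copy
              capacity = grow(capacity)
              copies += size            # cost of moving existing items
          size += 1
      return copies

  print(total_copies(1000, lambda c: c * 2))  # spiky:   1023 copies total
  print(total_copies(1000, lambda c: c + 1))  # smooth: 499500 copies total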

It is quite easy to produce artificial and unnecessary DiscontinuitySpikes in software systems. For example, using a fixed-length process queue in a scheduler predictably creates a DiscontinuitySpike the moment the number of processes exceeds the queue's capacity. Such spikes constitute a form of AccidentalComplexity that could easily have been avoided by following the ZeroOneInfinity rule and using a dynamically sized collection, especially in a language where such collections are already part of the common libraries.
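
A minimal sketch of the difference, in Python (the limit and names are made up for illustration):

  # Fixed-length queue: process 65 forces a redesign -- the spike.
  MAX_PROCESSES = 64
  slots = [None] * MAX_PROCESSES

  # Dynamically sized queue, per ZeroOneInfinity: process 5 and
  # process 5000 cost the same effort to enqueue.
  from collections import deque
  ready_queue = deque()
  ready_queue.append("process-65")   # no arbitrary limit, no spike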

Since it is possible to create unnecessary DiscontinuitySpikes, it is necessarily possible to avoid creating them. However, there is some debate over whether we ought even to try - see the debate regarding ProgramInTheFutureTense (leveraging our limited experience and foresight) vs. DoTheSimplestThingThatCouldPossiblyWork (myopically and actively disregarding any foresight or experience we might possess). There are good points on both sides, and a decent balance likely lies somewhere between the two extremes, in some as-yet-unnamed programming philosophy - one likely involving the use of known rules and patterns, and the choice of programming languages designed so that myopic programmers can later make changes without great expense.


Not to be confused with a continuity spike, where things get harder for no reason whatsoever.


While it may be infeasible to wholly prevent DiscontinuitySpikes (the WaterbedTheory certainly lacks an existing counter-example), the real trick in language-design (and framework design) would be to take lessons learned from previous languages and shift, as much as possible, all known DiscontinuitySpikes into places that the target-audience shouldn't or doesn't really care to explore. This, of course, can leave various unknown DiscontinuitySpikes, but language designers can't do much about those until they've removed the known clutter in order to discover them.

Finding 'places nobody cares to explore' is actually fairly straightforward once you know your target audience and goals - very easy for DomainSpecificLanguages, and somewhat less so for GeneralPurposeProgrammingLanguages. Examples would be making it difficult to do cheap bit-manipulations in a scripting language not intended for hardware, or making it difficult to violate abstractions and communication layers via assembler in a network language where security is paramount.

Languages with great SymmetryOfLanguage, and ExtensibleProgrammingLanguages, might also resist 'fixing' DiscontinuitySpikes in any one place, instead allowing savvy programmers to pick them up and move them someplace else whenever a DiscontinuitySpike sits where they happen to be working. This, of course, still involves a DiscontinuitySpike in modifying the syntax and semantics to move the other known DiscontinuitySpikes, but a language design can aim to shift that cost into the one-time language-implementation cost.

Of course, one would still be left with the StumblingBlocksForDomainSpecificLanguages - i.e. it would be really easy to write code for whatever purpose, but reading other people's code would create a bunch of micro-DiscontinuitySpikes while learning the language they designed in order to write it. Then again, that isn't so different from the multi-language solutions in common use today - SQL, C, Java or Ruby, HTML, logic, statistics, arithmetic, even programming libraries and frameworks, etc. - so writing this off as a relative loss may be difficult to justify. A good RefactoringBrowser that fully parses and helps transform the code might also help programmers learn the language, again at the expense of implementing the RefactoringBrowser and learning to use it.


One-time DiscontinuitySpikes, however, are certainly worth paying for if they reduce the expense of most later changes. Examples include the relational model, creating an RDBMS, web browsers, protocol specifications, the creation of and adherence to known formats and standards, programming-language design and implementation, and the creation of the WikiWiki (which reduced the average cost of adding content).

Paying for these just takes one person with LazinessImpatienceHubris, Time, Money, and an itch to scratch.


The same phenomenon occurs in physics. The different superstring theories (paradigms) all describe the same physics (computations). A sequence of transformations that is discontinuous in one theory may be continuous in a different theory. Being able to switch to a different theory whenever the situation is too difficult to handle in the current one is an extremely important tool in the field. Unfortunately, switching programming paradigms doesn't seem to be as easy.

[Is this still true? I was under the impression that the various superstring theories had been resolved into a single unified theory - currently unprovable, but one single theory nonetheless.]

M-theory, yes.


Example from RelationalDatabases: you have multiple billing or account categories and a column for each. After a while you realize that adding and changing category columns is a pain, so you make the categories into rows instead of columns. It may be quite a bit of work to overhaul the system to accept such information in a row-wise fashion.
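
A minimal sketch of the two layouts, using Python dicts as stand-in records (the field names are made up for illustration):

  # Column-wise: one field per category. A new category changes the
  # schema and every piece of code touching these records -- the spike.
  account = {"id": 1, "billing": 10.00, "shipping": 2.50}

  # Row-wise: a category is just data; adding one is an ordinary insert.
  charges = [
      {"account_id": 1, "category": "billing",  "amount": 10.00},
      {"account_id": 1, "category": "shipping", "amount": 2.50},
      {"account_id": 1, "category": "tax",      "amount": 1.20},  # new, no overhaul
  ]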

This can happen in OOP designs as well, if we decide to make dedicated classes for such categories instead of using multiple attributes within a single class.
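
The same trade-off sketched in hypothetical OOP code (the class names are illustrative):

  # One dedicated class per category: a new category means writing a
  # new class and wiring it in wherever categories are handled.
  class BillingCharge:
      def __init__(self, amount):
          self.amount = amount

  class ShippingCharge:
      def __init__(self, amount):
          self.amount = amount

  # Category as plain data in a single class: a new category is a value.
  class Charge:
      def __init__(self, category, amount):
          self.category = category
          self.amount = amount

  tax = Charge("tax", 1.20)   # no new class required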


Arguments that tables have fewer DiscontinuitySpikes than dedicated structures such as lists, maps, stacks, etc. have been removed because they are covered in AreTablesGeneralPurposeStructures.


Also see TestDrivenDesignPhaseShift, WaterbedTheory


CategoryComplexity, CategoryDecisionMaking

