Leaky Abstraction

The Law of Leaky Abstractions

An article by Joel Spolsky, making the claim that "All non-trivial abstractions, to some degree, are leaky."

Excerpted example:

Something as simple as iterating over a large two-dimensional array can have radically different performance if you do it horizontally rather than vertically, depending on the "grain of the wood" -- one direction may result in vastly more page faults than the other direction, and page faults are slow. Even assembly programmers are supposed to be allowed to pretend that they have a big flat address space, but virtual memory means it's really just an abstraction, which leaks when there's a page fault and certain memory fetches take way more nanoseconds than other memory fetches.
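
A minimal C sketch of the effect described above, assuming the usual row-major layout of C arrays (the function names and array dimensions are invented for illustration):

 #include <stddef.h>

 #define ROWS 4096
 #define COLS 4096

 /* Both functions sum the same elements; only the traversal order differs.
    C lays the array out row by row, so sum_by_rows walks memory
    sequentially, while sum_by_columns jumps COLS ints at every step and
    touches far more pages and cache lines for the same amount of work. */
 long sum_by_rows(int (*a)[COLS])
 {
     long total = 0;
     for (size_t i = 0; i < ROWS; i++)
         for (size_t j = 0; j < COLS; j++)
             total += a[i][j];
     return total;
 }

 long sum_by_columns(int (*a)[COLS])
 {
     long total = 0;
     for (size_t j = 0; j < COLS; j++)
         for (size_t i = 0; i < ROWS; i++)
             total += a[i][j];
     return total;
 }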

Contains nothing that wasn't already covered 10 years ago by Gregor Kiczales, in:

"Towards a New Model of Abstraction in the Engineering of Software"

Full article: http://www.joelonsoftware.com/articles/LeakyAbstractions.html

but read this one instead:

http://www2.parc.com/csl/groups/sda/publications/papers/Kiczales-IMSA92/for-web.pdf


Assertion: All of the examples in the article are either (a) problems with C++ or (b) PrematureOptimization: worrying about performance factors that may or may not matter in the grand scheme of things.


Counter-assertion: Performance will always get you in the end. Reality is messier than logical abstractions make it out to be. PrematureOptimization is not about avoiding that reality; it's about the futility of fixing small inefficiencies up-front. Blindly believing in an abstraction (whether TCP/IP, the relational model, or an OO domain model) is the source of many failures. DesignForPerformance.


I find his point is best made in the ASP.NET example.

I disagree with your assertion. His very first example is that TCP is a leaky abstraction because it somehow has to send data reliably using only an unreliable tool (IP), which is in general impossible. TCP works pretty well--most of the time. If the underlying IP layer is losing or garbling half the packets, TCP will still work, just slower. Occasionally, unpredictable real-world events (such as network outages) cause the underlying IP to stop working entirely, and then the unreliability of IP *leaks through* TCP. The reliable abstraction of TCP has failed, and the client using TCP now has to deal with the failure anyway! I submit that the TCP example is neither (a) nor (b) above.
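
A rough sketch of what that leak looks like in practice, using the POSIX socket API (send_all is a made-up helper name, not anything TCP itself provides): even though TCP presents a reliable byte stream, every call on the connection can still fail, and the caller has to cope.

 #include <errno.h>
 #include <stdio.h>
 #include <unistd.h>

 /* TCP promises an ordered, reliable byte stream, yet any write can still
    fail: the peer may reset the connection, the network may partition,
    the connection may time out.  The caller cannot escape handling it;
    the unreliability of the layer underneath leaks through. */
 int send_all(int sockfd, const char *buf, size_t len)
 {
     while (len > 0) {
         ssize_t n = write(sockfd, buf, len);
         if (n < 0) {
             if (errno == EINTR)
                 continue;          /* interrupted by a signal: retry */
             perror("write");       /* e.g. ECONNRESET, EPIPE, ETIMEDOUT */
             return -1;             /* the "reliable" stream failed anyway */
         }
         buf += n;
         len -= (size_t)n;
     }
     return 0;
 }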

Leaky abstractions are a fundamental problem in programming, because AllAbstractionsLie. Programs model real-world data, relationships, interactions, etc. using abstractions which are simpler than the actual domain being modelled. Abstraction of domain elements makes the programming more tractable, but it also means the real world can violate our expectations in ways that our abstraction doesn't handle very well.

The problem with the TCP example is that TCP doesn't actually promise completely reliable delivery. It promises to work around a large set of known transport problems, but not every possible problem. Thus (from RFC 793, emphasis added):

As long as the TCPs continue to function properly and the internet system does not become completely partitioned, no transmission errors will affect the correct delivery of data. TCP recovers from internet communication system errors.

TCP explicitly allows for dropped connections, either due to network failure or to the other party explicitly hanging up the phone. Note that this is still an abstraction. Again quoting from RFC 793:

This simple model of the operation [i.e., the "Model of Operation" of the spec] glosses over many details. One important feature is the type of service.

Indeed. TCP has become wildly successful exactly because it glosses over, i.e., abstracts from, this level of detail. TCP works over everything from slow serial lines to gigabit networks and beyond. It doesn't even need IP, though TCP/IP is of course the best-known flavor.


If an abstraction is leaky, it means that it needs to be generalized further. One of the examples Joel gives is NFS, which is supposed to make a remote filesystem behave as if it were mounted locally.

That abstraction is definitely leaky, but it's also wrong to use it in the first place. Instead, flip the abstraction around and say that all files are NFS shares.

That is to say, all files:
 1. May take arbitrarily long to access
 2. May be inaccessible at any given time (and return an error code when they are)

Then a file kept on a local hard drive is essentially nothing more than an NFS share that has performance and availability guarantees.
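
A small C sketch of that flipped abstraction (count_bytes is an invented name for the example): the caller treats every file, local or remote, as something that may be unavailable or may fail mid-read; a local disk file is merely the case where that rarely happens.

 #include <errno.h>
 #include <stdio.h>
 #include <string.h>

 /* Treat every file -- local disk, NFS mount, anything -- as something
    that may be slow or unavailable.  The caller always checks for
    failure; a local file is just the case where failure is rare and
    latency is low. */
 long count_bytes(const char *path)
 {
     FILE *f = fopen(path, "rb");
     if (f == NULL) {
         fprintf(stderr, "open %s: %s\n", path, strerror(errno));
         return -1;                 /* unavailable, just like a dead share */
     }

     long total = 0;
     char buf[4096];
     size_t n;
     while ((n = fread(buf, 1, sizeof buf, f)) > 0)
         total += (long)n;

     if (ferror(f)) {               /* I/O errors mid-read can happen locally too */
         fprintf(stderr, "read %s: %s\n", path, strerror(errno));
         fclose(f);
         return -1;
     }
     fclose(f);
     return total;
 }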

Same idea with the 2D array abstraction. The most general abstraction is to assume that each element is stored in a completely separate linked-list node. Then, if your specific implementation happens to guarantee that consecutive elements are stored sequentially in memory, you can take advantage of that for performance improvements.

In fact, that's basically how optimization works, isn't it? You replace something slower and more general with something more tweaked and targeted to a specific situation.
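
A rough C illustration of that move (both function names are invented for the example): the general copy assumes nothing about layout and fetches each element through an accessor, as if every element lived in its own node; the specialized copy exploits a contiguity guarantee and collapses the loop into one block copy.

 #include <string.h>

 /* General version: assumes nothing about layout and fetches each element
    through an accessor, as if the elements lived in separate list nodes. */
 void copy_general(int *dst, size_t n, int (*get)(void *ctx, size_t i), void *ctx)
 {
     for (size_t i = 0; i < n; i++)
         dst[i] = get(ctx, i);
 }

 /* Specialized version: exploits the guarantee that the source is a
    contiguous array, replacing n indirect fetches with one block copy. */
 void copy_contiguous(int *dst, const int *src, size_t n)
 {
     memcpy(dst, src, n * sizeof *src);
 }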


CategoryAbstraction CategoryPaper

