The Law of Leaky Abstractions
An essay by Joel Spolsky arguing that "All non-trivial abstractions, to some degree, are leaky."
Full article: http://www.joelonsoftware.com/articles/LeakyAbstractions.html
But read this one instead, Gregor Kiczales's "Towards a New Model of Abstraction in the Engineering of Software": http://www2.parc.com/csl/groups/sda/publications/papers/Kiczales-IMSA92/for-web.pdf
Assertion: All of the examples in the essay are either (a) problems with C++ or (b) PrematureOptimization: worrying about performance factors that may or may not matter in the grand scheme of things.
Counter-assertion: Performance will always get you in the end. Reality is messier than logical abstractions make it out to be. PrematureOptimization is not about avoiding that reality; it's about the futility of fixing small inefficiencies up-front. Blindly believing in an abstraction (whether TCP/IP, the relational model, or an OO domain model) is the source of many failures. DesignForPerformance.
I find his point is best made in the ASP.NET example.
I disagree with your assertion. His very first example is that TCP is a leaky abstraction because it somehow has to send data reliably using only an unreliable tool (IP), which is impossible in general. TCP works pretty well--most of the time. If the underlying IP layer is losing or garbling half the packets, TCP will still work, just more slowly. Occasionally, unpredictable real-world events (such as network outages) cause the underlying IP to stop working entirely, and then the unreliability of IP *leaks through* TCP. The reliable abstraction of TCP has failed, and the client using TCP now has to deal with the failure anyway! I submit that the TCP example is neither (a) nor (b) above.
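To make that concrete, here is a minimal Python sketch (the host, port, and request are hypothetical, not from Joel's essay). Even though TCP presents a reliable byte stream, the caller still has to write the failure-handling path itself:

  import socket

  def fetch(host, port, request):
      # TCP presents a reliable, ordered byte stream built on unreliable IP.
      try:
          with socket.create_connection((host, port), timeout=5) as s:
              s.sendall(request)
              return s.recv(4096)
      except OSError as e:
          # When the network fails outright, IP's unreliability leaks
          # through TCP, and the caller must deal with it anyway.
          print("abstraction leaked:", e)
          return None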
Leaky abstractions are a fundamental problem in programming, because AllAbstractionsLie. Programs model real-world data, relationships, interactions, etc. using abstractions which are simpler than the actual domain being modelled. Abstraction of domain elements makes the programming more tractable, but it also means the real world can violate our expectations in ways that our abstraction doesn't handle very well.
If an abstraction is leaky, that means it needs to be generalized further. One of the examples Joel gives is NFS, which is supposed to make remote files function as if they were stored locally.
That abstraction is definitely leaky, but it's also wrong to use it in the first place. Instead, flip the abstraction around and say that all files are NFS shares.
That is to say, all files:
1. May take arbitrarily long to access.
2. May turn out to be inaccessible at any time (and return an error code when that happens).
Then a file kept on a local hard drive is essentially nothing more than an NFS share that has performance and availability guarantees.
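As a sketch of that flipped abstraction (read_file is a made-up helper, not a real API), every read returns either data or an error code, and no latency guarantee is assumed:

  import errno

  def read_file(path):
      # Generalized abstraction: every file is an NFS share. The read
      # may take arbitrarily long, and access may fail at any time.
      try:
          with open(path, "rb") as f:
              return f.read(), None
      except OSError as e:
          # Surface the error code instead of pretending that local
          # files never fail.
          return None, e.errno

  data, err = read_file("/mnt/share/report.txt")
  if err is not None:
      print("unavailable:", errno.errorcode.get(err, err))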
Same idea with the 2D array abstraction. The most general abstraction is to assume that every element is stored in a completely separate linked list node. Then, if your specific implementation happens to guarantee that consecutive elements are stored sequentially in memory, you can take advantage of that for performance improvements.
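For illustration, a toy Python sketch of the two extremes (Node, get_general, and get_contiguous are invented names):

  class Node:
      # Most general assumption: each element lives in its own node,
      # reachable only by following links, so access is O(n).
      def __init__(self, value, next=None):
          self.value = value
          self.next = next

  def get_general(head, n):
      node = head
      for _ in range(n):
          node = node.next
      return node.value

  def get_contiguous(flat, cols, i, j):
      # Specialized version: if storage is guaranteed to be contiguous
      # and row-major, element (i, j) is plain index arithmetic, O(1).
      return flat[i * cols + j]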
In fact, that's basically how optimization works, isn't it? You replace something slower and more general with something more tweaked and targeted to a specific situation.