I've never liked code that uses exceptions extensively, but until recently I wasn't sure why. The thought occurred to me that exceptions are really a feeble attempt at making a time machine.
Think about it. Wouldn't it be wonderful if whenever you detected an error you could just step back in time to a point before the error occurred and do things over again? We could do this. All it takes is the creation of a complete snapshot of memory and the stack at a safe point in execution. When we discover that we are in trouble, we just overwrite the current contents of memory with that snapshot. Hmm... how is this different from exception handling?
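To make the analogy concrete, here is a minimal sketch in Python; the state dictionary standing in for "memory" and the failing withdrawal are invented for illustration, and note that the try/except by itself only rewinds the stack - the explicit snapshot and overwrite are what supply the "time machine":

 import copy

 state = {"balance": 100, "log": []}     # stand-in for "the contents of memory"

 snapshot = copy.deepcopy(state)         # the complete snapshot at a safe point
 try:
     state["balance"] -= 150             # mutate freely...
     state["log"].append("withdraw 150")
     if state["balance"] < 0:
         raise ValueError("overdrawn")   # ...until we discover we are in trouble
 except ValueError:
     state = snapshot                    # overwrite current memory with the snapshot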
Your time machine exists; it's called restarts, in CommonLisp. -- AlainPicard
Could we have an example, maybe in LispRestartExample?
Simple example added. -- BrianDowning?
MichaelJackson uses this kind of thinking to ResolveRecognitionDifficulties? - places where the obvious (JSP) way to write a program breaks down because there is no way to evaluate the conditions. In other words, if the program could make the correct decision then the coding would be easy, but it cannot. He taught some standard ways of moving from the obvious code to working code. One of them is the Time Machine method. He also suggested some ReadAhead? and LookAhead? strategies. His BackTrackingMethod? is probably best coded using exceptions.
For me, exception handling is an implementation technique that resolves a class of difficulties that emerge when we try to code a special kind of solution - one where the code would be easy if only it could make a decision or a choice first. See the ideas on NonDeterministicProgramming? and Backtracking - Cohen, "Non-Deterministic Algorithms", ACM Computing Surveys, Vol. 11, No. 2 (June 1979), pp. 79-94.
-- DickBotting
Well, obviously, exceptions don't wind the heap back. But you can fix the heap as you wind back the stack - check out http://ose.sourceforge.net/browse-manual.php?manual=OTC_Reaper for an example. This doesn't wind back the whole heap, mind you, just the bits that make you feel most nervous. And in any distributed or multithreaded system you've got a lot more state hanging around than the bits reapers can keep track of.
Of course you could keep your whole state in an in-memory pseudo-time database - all changes to all objects leave mementos so you can wind 'em back en masse. I've actually worked on expert systems that did just that. You also want to keep mementos of the pruned branches so you can base your execution decisions on them - cf. HerbertGeorgeWells' "The Man Who Could Work Miracles" for a cautionary tale about what happens if you don't. Or David Gerrold's The Man Who Folded Himself; the protagonist fails to exercise due care and attention and causes himself all sorts of grief.
Such a strategy is, however, hellishly inefficient. Looks like you're going to have to wait a couple of years for the first general-purpose quantum computers if you want to program in an environment like this; with one of these new babies you don't need to control your program's execution at all; it can happily explore all possible program states and report back to you the ones that work - in linear time.
Singularity, here we come ... (See TheSingularity.)
-- PeterMerel
Nah, it is not just the heap; it is everything you touch that you may not be able to redo the second time through. Anyway, the point was that exceptions are not, for me, something to use liberally. I usually use them only when I touch something on the system's periphery that I do not have control over. When programming we have intentions. In most cases, we can make sure that our intentions are carried out. When we talk to databases, UIs or hardware, our intentions are at their mercy. ExceptionalConditions may result. -- MichaelFeathers
Ah, then maybe it'd be better to think of TransactionsVsExceptions.
MichaelFeathers's description of when he uses exceptions is one very important use, but by no means the only one. As described in ClassifyingExceptions, it is also very useful to use them for logic errors (primarily problems related to DesignByContract).
Exceptions are simply not a very good control-flow construct, since they tend to have unpredictable timing. They should be thrown only to indicate: "Help! I cannot do what I promised..." -- RussellGold
Yes. I didn't want to sound like a zealot. I've just seen in C++ that there are some programmers who avoid them (after all, they've been able to do okay without them for years) and those who use them pathologically for some very non-exceptional situations (the ultimate non-local goto). I do admit that there are many cases where the use of an exception is more elegant than any alternative. The one RalphJohnson provides in ClassifyingExceptions makes a lot of sense.
The DesignByContract use seems very reasonable. In C++, I typically don't do that, because I'd rather use an assert macro and halt the program than attempt to handle the exception and allow things to be silently wrong. Depends on the culture and the values.
If the condition is under your control, it is silly to write code to fly in the face of your own errors unless you are in a real fault-tolerant domain. One place I went had a boolean attribute on a base class. You could check this attribute after a call to any method and it would tell you whether the parameters that were passed to the method were in range and correct. I forget what it was called, but I dubbed it the WasIstupidFlag? and lobbied for its retirement. Suppose you used it and found that there was an error. Then what? -- MichaelFeathers
Then you have a bug. I don't see a reason to write extra code to check for erroneous code, since you really can't do much about it (so I agree with you about that flag). There is a good reason for throwing and catching exceptions on contract violations, however. Suppose (I know it's a stretch, but...) that you actually release code which still has errors in it. What would you rather have happen?
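For illustration, the two options might look like this in Python; the function, its precondition, and MAX_SEATS are all invented:

 MAX_SEATS = 10   # hypothetical limit, purely for illustration

 def reserve_seats(count):
     # Option 1: assert and halt -- loud during development, and nothing is
     # ever allowed to be silently wrong.
     assert count > 0, "precondition violated: count must be positive"

     # Option 2: throw -- in released code a top-level handler can log the
     # contract violation and abandon this unit of work instead of taking
     # the whole system down with it.
     if count > MAX_SEATS:
         raise ValueError(f"contract violation: count={count} exceeds MAX_SEATS")
     return list(range(count))           # pretend these are the reserved seats

Which of those two behaviours you'd rather ship is exactly the question being asked.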
One of my management techniques has been to identify and sort out three "streams of code" - basic logic, UI, and exception handling. In my opinion, never should the trwain (3 + twain?!) intertwine. Regarding exception handling, I've vilified methods with true/false return values for indicating their successful execution, and objects with status flags that you're supposed to check after every call to them; thus, I've liked the try/catch construct for separating out exception-handling concerns (a quick sketch of the contrast follows below).
However, I just now realized that I've never fully defined "exception-handling". I like MichaelFeathers' definition, so let me give it a thumbs up. My experiences in dealing with external, non-controllable objects, such as databases, network hosts, vendor APIs, etc., have given me a comfortable feeling for using exception handling. But, only for those cases. -- KevinLacobie
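A small Python contrast of the two styles, with a made-up Db class and DatabaseError standing in for the external system:

 class DatabaseError(Exception):
     pass

 class Db:
     def save_flag_style(self, record):
         self.ok = "id" in record        # status flag to be checked after the call
     def save(self, record):
         if "id" not in record:
             raise DatabaseError("record has no id")

 db, record = Db(), {"name": "widget"}   # no "id", so the save will fail

 # Status-flag style: basic logic and error handling interleave at every call site.
 db.save_flag_style(record)
 if not db.ok:
     print("save failed")

 # try/except style: the basic logic reads straight through; the handling
 # of external failures is gathered in one place.
 try:
     db.save(record)
     print("saved")
 except DatabaseError as err:
     print("save failed:", err)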
No, I don't want a time machine. If I went back in time to just before the exception and tried again, I'd surely make all the same mistakes again. Then I'd be in an infinite time loop, and those are nasty! (Just watch StarTrek for a while; you'll see. ;-)
I find exceptions useful for "Help! Something horrible and unexpected happened down here, and we have no idea how to recover. Please try another method, roll back the transaction, and/or go on to the next unit of work; we wouldn't want the entire system to crash."
If you don't think horrible and unexpected things can happen, then you haven't worked with relational databases or networks.
-- JeffGrigg
Exceptions do work like a time machine provided that you don't use any mutable state. It's not that exceptions are evil, it is simply that set! is evil.
Think about it.
Mutable state as such wouldn't hurt if there were a way to undo each and every language statement. In the case of an exception, everything within a try block would be rolled back before entering the catch block. In other words, YouDontWantAnExceptionYouWantaTransaction without having to code it yourself, just like SQL transactions, but for systems languages. Alas, this would probably be a trifle inefficient... -- FalkBruegmann
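A rough Python approximation of that wish, packaged as a reusable wrapper and assuming all the state you care about lives in one dictionary (which is exactly the assumption real programs violate):

 import copy
 from contextlib import contextmanager

 @contextmanager
 def transaction(state):
     snapshot = copy.deepcopy(state)     # checkpoint on entry to the "try"
     try:
         yield state
     except Exception:
         state.clear()                   # roll everything back before the
         state.update(snapshot)          # caller's catch block ever runs
         raise

 accounts = {"alice": 50, "bob": 50}
 try:
     with transaction(accounts):
         accounts["alice"] -= 80
         accounts["bob"] += 80
         if accounts["alice"] < 0:
             raise ValueError("insufficient funds")
 except ValueError:
     pass                                # accounts is back to {"alice": 50, "bob": 50}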
What about this: exceptions are there because somewhere in our code someone failed to correctly handle an error situation. If we tested every bit of software, and maybe hardware, as we should, exceptions would not be necessary. So, my point is: test better, and look further than just your software unit when coding.
Think about it.
Rob Kamp
But that implies infinite knowledge about the contextual usage of the code. Maybe it'd work for an embedded system, but for general-purpose code... there is no way to limit the scope of "further than just your software unit"... At some point a trade-off needs to be made between squelching errors, "handling errors locally," and admitting that you could be called from a context whose response to errors is something you cannot predict, nor should you limit it by "knowing better."
-- DougPhilips
 (define time-machine #f)
 (call/cc (lambda (tm) (set! time-machine tm)))
Does call/cc restore data consed in the heap too?
Not sure. It restores the stack and the PC.
[None of the ContinuationImplementation techniques I've seen do so. It should be theoretically possible though. The system doesn't actually need to track every cons; these can only be reached through "roots" in the stack or registers, so as long as all continuations are "live", the garbage collector won't reclaim them. It does need to track every assignment, but most GenerationalGarbageCollection algorithms do this already. Instead of just using these to track which older objects may point into new objects, the collector could save the old value in any idle continuations, keeping it as a root for the garbage collector. When a continuation is reactivated, it would have to restore any reachable heap variables to their original values, saving the current ones in the previously-active continuation.
The effect is like an OS context switch, but could end up being much more expensive. The performance hit is worst when there's lots of shared data. Tree-structured data could potentially save some of this hit by swapping out whole subsections of a tree, letting continuations switch between one set of heap data and another merely by swapping a pointer.
And of course, this can't undo external side-effects. Once a network connection has been made, there's no recalling the data. Once an e-mail's been sent, there's no unsending it, unless you're on AOL ;). Once a page has been printed, there's no unprinting. This somewhat limits the usefulness of this time machine. -- JonathanTang]
Actually, you'd be surprised. PostScript (a quite powerful, Turing-complete graphics programming language) uses a snapshot/restore model of memory management (newer versions also support GC, but never mind that for the moment), which does in fact restore the prior value of every collection object changed since the snapshot was taken.
When I implemented this in a naive way, yes, it soaked up 95% of the total execution time in rendering pages, all by itself, but when I leaned into it and did it in a clever way, it then used up approximately 0% (less than 1%) of the overall execution time.
I then checked this with friends working at Adobe (who of course couldn't talk directly about trade secrets, but could confirm things I'd already figured out) who confirmed that snapshot/restore was insignificant in the Adobe implementation, and not one of the things they were trying to accelerate with a special-purpose cpu they were then experimenting on.
Anyway, bottom line is that it might turn out to be quite reasonable to restore the heap; intuition is sometimes misleading.
-- DougMerritt
Interesting. Is there anything published about the "clever" way? The only thing I can think of offhand would be to maintain multiple areas of memory, addressing objects relatively, and then swap the areas on a restore. Virtual memory hardware could probably help - maybe just map in pages differently. It'd be kind of like how StopAndCopy swaps the heaps, but there wouldn't necessarily be any copying. When you re-checkpoint, you just throw away the old memory area and re-use it for new snapshots.
It seems like this would be really memory-intensive though. Oh, hrmm, if you're using the VM facilities, you can probably share a page between the checkpoint and the working heap, and then use copy-on-write to duplicate it for fresh changes. The overhead is no more than normal operation of the virtual memory hardware.
Is this how they did it? Or am I missing something?
-- JonathanTang
Good brainstorming, and I've heard of things similar to part of that being used in other contexts. However, I was aiming for a cost significantly lower than a context switch -- just a few machine instructions worth of overhead, and that turns out to be achievable, similarly to the cost of intercepting cell access in some of the GC schemes you've discussed.
So, it was surprisingly efficient to just intercept writes (it helped that only writes to collections needed this, since that's how the language is defined), and stash the old value into a restore area before allowing the write to go through. There are complications concerning doing this only once, and keeping generation numbers, since snapshots can be nested, but it still was only a few added instructions per write altogether.
And those are similar to the kinds of issues you'd need to address for the topic on this page. (A toy rendering of the write interception is sketched below.)
This is probably yet another piece of evidence for not prematurely optimizing -- but that is really hard to apply in practice, because it doesn't seem like premature optimization; it seems like just being commonsensical about avoiding things that are so pessimal as to be unworkable. And yet those intuitions can be quite wrong, so sometimes it is premature after all.
p.s. I can't swear Adobe did it that way, only that my friends there confirmed that I wasn't doing it idiotically based on what they knew. But < 1% overhead speaks for itself.
-- DougMerritt
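To make the write-interception scheme DougMerritt describes above a little more concrete, here is a toy Python rendering. It loops over the open snapshots where his version used generation numbers to pay only a few instructions per write, and it is of course not PostScript's or Adobe's actual implementation:

 _MISSING = object()                     # marks "key did not exist at snapshot time"

 class SnapshotDict:
     """A dict-like collection whose writes are intercepted to build undo logs."""
     def __init__(self):
         self._data = {}
         self._undo = []                 # one undo log per open snapshot; they nest

     def snapshot(self):                 # like PostScript 'save'
         self._undo.append({})

     def restore(self):                  # like PostScript 'restore'
         for key, old in self._undo.pop().items():
             if old is _MISSING:
                 del self._data[key]
             else:
                 self._data[key] = old

     def __setitem__(self, key, value):
         # Intercept the write: record the old value once per open snapshot,
         # then let the write go through.
         for log in self._undo:
             log.setdefault(key, self._data.get(key, _MISSING))
         self._data[key] = value

     def __getitem__(self, key):
         return self._data[key]

 d = SnapshotDict()
 d["x"] = 1
 d.snapshot()
 d["x"] = 2
 d["y"] = 3
 d.restore()                             # d["x"] is 1 again, and "y" is gone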
You can do that in all APL dialects I know of.
-- Olivier Lefevre
What? I don't remember any such thing. How? -- dm
There is a neat concurrent programming abstraction for the HaskellLanguage called SoftwareTransactionalMemory, which was designed to handle thread synchronization in a completely lock-free way. With STM, you can just go ahead and modify a data structure without locking it, as long as you're in the context of an STM action. Any reads or writes to shared mutable variables are recorded in the action's transaction log. When the action completes, the STM library examines the log and determines whether the transaction saw a consistent view -- that is, whether any of the variables it read were modified by other threads in the meantime. If the transaction is valid, its writes are committed; otherwise the log is thrown away and the transaction is restarted from the beginning. This gives you all the flexibility of thread-based locking (and then some) without the possibility of deadlock.
See the paper: http://research.microsoft.com/~simonpj/papers/stm/stm.pdf
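The paper above is the real thing; purely to make the transaction-log mechanics concrete, here is a deliberately crude, single-commit-lock toy in Python (the names TVar and atomically are borrowed from Haskell, but nothing about this implementation is):

 import threading

 _commit_lock = threading.Lock()         # the toy serializes commits; real STMs are cleverer

 class TVar:
     """A shared mutable cell with a version stamp."""
     def __init__(self, value):
         self.value, self.version = value, 0

 def atomically(action):
     """Run action(read, write) against a private log; commit only if nothing read has changed."""
     while True:
         reads, writes = {}, {}          # the transaction log

         def read(tv):
             if tv in writes:
                 return writes[tv]
             reads.setdefault(tv, tv.version)     # remember the version we saw
             return tv.value

         def write(tv, value):
             writes[tv] = value          # buffered; invisible to other threads until commit

         result = action(read, write)
         with _commit_lock:
             if all(tv.version == seen for tv, seen in reads.items()):
                 for tv, value in writes.items():   # consistent: publish the writes
                     tv.value, tv.version = value, tv.version + 1
                 return result
         # someone committed under us: throw the log away and restart the action

 balance = TVar(100)
 atomically(lambda read, write: write(balance, read(balance) - 30))

Note that a transaction that keeps getting invalidated simply loops and retries, which is exactly the livelock worry raised below.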
But what about livelock? LockFree? doesn't imply WaitFree?. Can threads repeatedly invalidate each other's transactions and be restarted without one of them ever finishing? -- GunnarZarncke (by the way, this belongs on another page, I guess)
Smalltalk joins Lisp and Ruby in providing restartable exceptions.
The key heuristic (is that the right word?) that restartable exceptions enable is a separation between raising and handling an exception. Without restart capability, the developer generally needs to know, somewhere around where the exception is about to be raised, whether or not a restart is or will be required. By the time an exception handler is reached in Java, C++, JavaScript, Perl, Python, and all the others, the stack has already been unwound, so it is TOO LATE to do anything except complain and quit.
With a restartable exception system, a DIFFERENT DEVELOPER - or the same developer at a different time - can decide whether or not a particular exception is something stupid, a bug, or something more routine. The extreme, apocryphal case (I don't know if it was ever actually attempted or not) is the Smalltalk system whose exception handler unloads the failing module, loads a different version, then restarts the failing method.
In environments with restartable exceptions, the code that raises an exception is easy, clean, and straightforward. The developer does not, in general, even need to think about anything except whether or not the exception condition occurred. The handler code can be put in one readily accessible place - for those situations where all or most exceptions are to be handled in a similar fashion - and can still specialize the handler as need be where different behavior is required. This separation leads to MUCH cleaner environments. (A toy sketch of this separation follows below.)
-- TomStambaugh
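None of Python's standard machinery works this way, but a small emulation may make the separation concrete. Every name below is invented for illustration, and real CommonLisp restarts and Smalltalk resumable exceptions are richer than this:

 import contextlib

 _handlers = []                          # innermost-first stack of caller-installed policies

 @contextlib.contextmanager
 def handler(fn):
     """Install a policy: fn(condition) returns ("use_value", x), ("retry", None), or None to decline."""
     _handlers.insert(0, fn)
     try:
         yield
     finally:
         _handlers.pop(0)

 def signal(condition, retry, use_value):
     """Called at the raise site: consult the handlers *before* the stack unwinds."""
     for fn in _handlers:
         decision = fn(condition)
         if decision is not None:
             verb, arg = decision
             return use_value(arg) if verb == "use_value" else retry()
     raise RuntimeError(condition)       # nobody volunteered: fall back to ordinary unwinding

 def parse_entry(text):
     try:
         return int(text)
     except ValueError:
         # The raising code only reports the condition and offers the restarts;
         # it does not decide the policy.
         return signal(f"bad entry {text!r}",
                       retry=lambda: parse_entry("0"),
                       use_value=lambda v: v)

 # A different developer, at a different time, decides what a bad entry means here:
 with handler(lambda cond: ("use_value", -1)):
     print(parse_entry("seven"))         # prints -1; parse_entry was never unwound

The point is visible in the last two lines: the decision lives with the caller, while the code that detects the problem stays clean.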