CeePlusPlus destructors should never throw exceptions (they should provide the "nofail" ExceptionGuarantee). This comes down to the fact that destructors have a special role to play when exceptions are thrown -- they clean up resources as per ResourceAcquisitionIsInitialization -- and it's not clear what should happen if an exception is thrown from a destructor during the process of UnwindingTheStack. C++ defines a harsh rule, which says that an exception escaping from a destructor during stack unwinding will cause program termination, and that's not what you want.
(Java makes another bad choice; the first exception is silently discarded, even if it's important and informative and the second exception is something dull like a NullPointerException.)
Given the C++ rule on throwing from destructors during stack unwinding, here are some reasons not to allow your destructors to throw:
Because destructors cannot throw exceptions, they must use other mechanisms to report errors. The best approach is to ensure that your destructors do not generate errors; if releasing a resource might fail, have an explicit release() method which might throw, and a destructor which calls release and catches its exceptions as a last resort. Well-written code will use the release() method, but the fall-back in the destructor ensures that even badly written code won't cause resource leaks. An example of something which can fail is the destructor of an fstream, which might fail to flush the file contents to disk. Calling close() is always the correct thing to do (though of course toy code doesn't always need correctness).
close() can fail only if a flush fails, which can happen only for output streams. When using an input stream it is OK to omit an explicit close(). -- YakovGalka?
[ This page could do with a lot more information. ]
More information on how to program well in the presence of destructors can be found in HerbSutter's excellent book "ExceptionalCpp", as well as the sequels "More Exceptional C++" and "ExceptionalCppStyle".
-- JamesDennett
C++ destructors could (and should?) throw exceptions.
As is stated in (*1), for example, Stroustrup says: "The standard library function uncaught_exception() returns true if an exception has been thrown but hasn't yet been caught. This allows the programmer to specify different actions in a destructor depending on whether an object is destroyed normally or as part of stack unwinding." This works in standard-conforming compilers such as GCC, but it is reported not to work in some non-conforming ones, such as certain versions of Visual C++.
 try {
     // [ ... ]
 } catch(...) {
     if (!std::uncaught_exception())
         throw;
 }

(*1) "The C++ Programming Language (Third Edition)" by Bjarne Stroustrup
“Industrial Strength C++” http://hem.passagen.se/erinyq/industrial/IndustrialStrength.11.html
http://www.devx.com/tips/Tip/12850
http://docs.sun.com/app/docs/doc/805-4955/6j4mg807b?a=view
-- Miguel Mira
Note that even the standard implementation of std::uncaught_exception can give false positives. Consider checking for it in a destructor:
 X::~X() {
     try {
         /* ... */
     } catch(...) {
         if (!std::uncaught_exception())
             throw;
     }
 }

Now, if the following destructor is called during stack unwinding, X's destructor will swallow the exception, although it should not:
 Y::~Y() {
     try {
         X x;
         // ...
     } catch(...) {}
 }

-- YakovGalka?
C++ destructors could (and should?) throw exceptions.
And I will politely disagree. It goes back to the simple observation that program execution is inherently single-threaded (though you may have multiple threads of execution): barring explicitly spawned threads, you can only do one thing at a time. What I am getting at is that you can only handle one error at a time, barring extremely complicated code. That is the inherent problem. When you encounter an error (#1) and, while handling it, encounter another error (#2), what do you do? Drop one in favor of the other (Java)? Try to save both and process them one at a time? (What about out-of-memory errors? You might not be able to save a second error because you're out of memory.) Spawn a new thread to deal with the second error? (Oh god no.)
The inherent problem is that handling an error may raise another error. At some point, eventually your program must "give up". The C++ default way of giving up is to kill the program.
Luckily, almost all programs in any language can be phrased as something like: 1- acquire resources to perform a calculation or task, 2- do that calculation or task, possibly failing for any number of task-specific reasons, 3- release resources. Acquiring resources may reasonably fail. Doing the calculation or task may fail for a multitude of reasons. However, releasing a resource generally does not fail: acquiring memory can fail because there is no memory left, but the runtime will never complain when you return a piece of memory that was previously allocated. This is true of every resource known to the author. Freeing resources does not fail. (If it does, such as from a failed sanity check, your program is FUBAR anyway, and taking it down isn't such a bad idea.)
Bjarne Stroustrup made this observation himself, and wrote a new programming language around it, CeePlusPlus. The first thing he added to CeeLanguage to make C++ was destructors. (This was years before virtual functions, exceptions, templates, etc.) To allocate a resource, put an object on the stack which allocates it. Building the stack acquires the resources to perform the task; it builds up the environment. Eventually you have the environment to perform the task. Then you need to free those resources, aka take down the environment, aka unwind the stack. It follows that destructors should (generally) only free resources. Any other use and you will run into this problem of what to do when encountering multiple errors at once.
This gets to transactional support: your program must always be in a destructible state, except during those very small handover pieces which are guaranteed to have the strong exception guarantee (or similar): either 1- it succeeded completely, or 2- nothing went through. This is otherwise known as commit.
[...] This is otherwise known as commit.
I cannot agree more. I've always seen destructors as doing a "rollback" on the program state. Consider the following pieces of code:
 A& operator=(const A& x) {
     if (this != &x) {
         A y(x);     // OP
         swap(y);    // COMMIT
     }
     return *this;
 }

 void f() {
     transaction t(db);
     db.exec("INSERT INTO table ...");  // OP
     t.commit();                        // COMMIT
 }

 void g() {
     ofstream f("out.txt");
     f << "xyz\n";  // OP
     f.flush();     // COMMIT
 }

They all have the same OP/COMMIT pattern. The first two have the strong exception guarantee; the last has the basic exception guarantee. Yet all three guarantee that no error is silently ignored.
The point is that operations which can fail should not be thought of as releasing resources; they should be done as a separate commit operation. Freeing resources never fails. If it does fail, it is doing more than freeing, and that additional part is not the duty of the destructor.
-- YakovGalka?
This topic is always raised in the context of destructors as they appear in OO languages, and C++ destructors specifically; sometimes even to prove the superiority of error codes over exception handling. However, this is a fallacy. The discussion should be about error handling in general, as it is irrelevant which mechanism we use:
 Status f() {
     Status ret = Status_Ok;
     A *x = alloc_A();
     if (!x) { ret = Status_OutOfMemory; goto E1; }
     A *y = alloc_A();
     if (!y) { ret = Status_OutOfMemory; goto E2; }
     // do something
 E2:
     Status s2 = free_A(y);
 E1:
     Status s1 = free_A(x);
     // How shall we combine s2, s1 and ret?
     return ret;
 }

As was written above, the problem is that program execution is inherently single-threaded. In fact, the problem is that Turing machines are inherently single-threaded. Therefore no programming language can ever magically solve this. The solution, as I see it, is to accept that there can be only one active error at a time (or that errors shall be well-nested), and that, upon encountering the first error, the program shall abandon the normal flow, rolling back to a state where the error can be handled.
-- YakovGalka?
What does single-threaded programming have to do with destructor exceptions? The problem with destructor exceptions is that C++ unwinds the stack before entering the exception handler, which leads to ambiguity when a destructor throws an exception during the unwinding process. The solution is to enter the exception handler first and allow programmers to control when the stack is unwound, so that exceptions caused by stack unwinding can be caught in the handler itself (or at a higher level, if appropriate). This is what you get with conditions in Common Lisp, no magic needed.
-- BenKreuter?