Destructors
What’s the deal with destructors?
A destructor gives an object its last rites.
Destructors are used to release any resources allocated by the object. E.g., class Lock might lock a semaphore, and the destructor will release that semaphore. The most common example is when the constructor uses new and the destructor uses delete.
Destructors are a “prepare to die” member function. They are often abbreviated “dtor”.
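For instance, a minimal sketch of the Lock idea (the Semaphore type and its acquire()/release() members are assumptions made for illustration):
class Semaphore {
public:
  void acquire();   // hypothetical: blocks until the semaphore is obtained
  void release();   // hypothetical: releases the semaphore
  // ...
};

class Lock {
public:
  Lock(Semaphore& s) : sem_(s) { sem_.acquire(); }   // constructor grabs the resource
  ~Lock()                      { sem_.release(); }   // destructor gives it back
private:
  Semaphore& sem_;
};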
What’s the order that local objects are destructed?
In reverse order of construction: First constructed, last destructed.
In the following example, b’s destructor will be executed first, then a’s destructor:
void userCode()
{
  Fred a;
  Fred b;
  // ...
}
What’s the order that objects in an array are destructed?
In reverse order of construction: First constructed, last destructed.
In the following example, the order for destructors will be a[9], a[8], …, a[1], a[0]:
void userCode()
{
  Fred a[10];
  // ...
}
What’s the order that sub-objects of an object are destructed?
In reverse order of construction: First constructed, last destructed.
The body of an object’s destructor is executed, followed by the destructors of the object’s data members (in reverse order of their appearance in the class definition), followed by the destructors of the object’s base classes (in reverse order of their appearance in the inheritance list).
In the following example, the order for destructor calls when d goes out of scope will be ~local1(), ~local0(), ~member1(), ~member0(), ~base1(), ~base0():
struct base0 { ~base0(); };
struct base1 { ~base1(); };
struct member0 { ~member0(); };
struct member1 { ~member1(); };
struct local0 { ~local0(); };
struct local1 { ~local1(); };

struct derived : base0, base1
{
  member0 m0_;
  member1 m1_;
  ~derived()
  {
    local0 l0;
    local1 l1;
  }
};

void userCode()
{
  derived d;
}
Can I overload the destructor for my class?
No.
You can have only one destructor for a class Fred. It’s always called Fred::~Fred(). It never takes any parameters, and it never returns anything.
You can’t pass parameters to the destructor anyway, since you never explicitly call a destructor (well, almost never).
Should I explicitly call a destructor on a local variable?
No!
The destructor will get called again at the close } of the block in which the local was created. This is a guarantee of the language; it happens automagically; there’s no way to stop it from happening. But you can get really bad results from calling a destructor on the same object a second time! Bang! You’re dead!
What if I want a local to “die” before the close } of the scope in which it was created? Can I call a destructor on a local if I really want to?
No! [For context, please read the previous FAQ].
Suppose the (desirable) side effect of destructing a local File object is to close the File. Now suppose you have an object f of class File and you want f to be closed before the end of the scope (i.e., before the closing }) in which f was created:
void someCode()
{
  File f;
  // ...code that should execute when f is still open...
  //   ← We want the side-effect of f's destructor here!
  // ...code that should execute after f is closed...
}
There is a simple solution to this problem (see the next FAQ). But in the meantime, remember: do not explicitly call the destructor!
Okay, okay, already; I won’t explicitly call the destructor of a local; but how do I handle the situation from the previous FAQ?
[For context, please read the previous FAQ].
Simply wrap the extent of the lifetime of the local in an artificial block {...}:
void someCode()
{
  {
    File f;
    // ...code that should execute when f is still open...
  }
  // ↑ f's destructor will automagically be called here!
  // ...code that should execute after f is closed...
}
What if I can’t wrap the local in an artificial block?
Most of the time, you can limit the lifetime of a local by wrapping the local in an artificial block ({...}). But if for some reason you can’t do that, add a member function that has a similar effect to the destructor. But do not call the destructor itself!
For example, in the case of class File, you might add a close() method. Typically the destructor will simply call this close() method. Note that the close() method will need to mark the File object so a subsequent call won’t re-close an already-closed File. E.g., it might set the fileHandle_ data member to some nonsensical value such as -1, and it might check at the beginning to see if the fileHandle_ is already equal to -1:
class File {
public:
  void close();
  ~File();
  // ...
private:
  int fileHandle_;   // fileHandle_ >= 0 if/only-if it's open
};

File::~File()
{
  close();
}

void File::close()
{
  if (fileHandle_ >= 0) {
    // ...code that calls the OS to close the file...
    fileHandle_ = -1;
  }
}
Note that the other File methods may also need to check if the fileHandle_ is -1 (i.e., check if the File is closed).
Note also that any constructors that don’t actually open a file should set fileHandle_ to -1.
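For example (a sketch: the default constructor and isOpen() helper shown here are assumptions, and they would also need to be declared inside class File):
File::File()             // a constructor that doesn't open anything (assumed)
  : fileHandle_(-1)      // mark the File as closed from the start
{ }

bool File::isOpen() const   // hypothetical helper the other methods could call
{
  return fileHandle_ >= 0;
}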
But can I explicitly call a destructor if I’ve allocated my object with new?
Probably not.
Unless you used placement new, you should simply delete the object rather than explicitly calling the destructor. For example, suppose you allocated the object via a typical new expression:
Fred* p = new Fred();
Then the destructor Fred::~Fred() will automagically get called when you delete it via:
delete p; // Automagically calls p->~Fred()
You should not explicitly call the destructor, since doing so won’t release the memory that was allocated for the Fred object itself. Remember: delete p does two things: it calls the destructor and it deallocates the memory.
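Conceptually (ignoring details such as null checks and exactly which deallocation function gets selected), delete p behaves roughly like this:
// Rough moral equivalent of "delete p" for a non-null Fred* p:
p->~Fred();           // step 1: run the destructor
operator delete(p);   // step 2: release the memory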
What is “placement new” and why would I use it?
There are many uses of placement new. The simplest use is to place an object at a particular location in memory. This is done by supplying the place as a pointer parameter to the new part of a new expression:
#include <new>       // Must #include this to use "placement new"
#include "Fred.h"    // Declaration of class Fred

void someCode()
{
  char memory[sizeof(Fred)];     // Line #1
  void* place = memory;          // Line #2
  Fred* f = new(place) Fred();   // Line #3 (see "DANGER" below)
  // The pointers f and place will be equal
  // ...
}
Line #1 creates an array of sizeof(Fred) bytes of memory, which is big enough to hold a Fred object. Line #2 creates a pointer place that points to the first byte of this memory (experienced C programmers will note that this step was unnecessary; it’s there only to make the code more obvious). Line #3 essentially just calls the constructor Fred::Fred(). The this pointer in the Fred constructor will be equal to place. The returned pointer f will therefore be equal to place.
ADVICE: Don’t use this “placement new” syntax unless you have to. Use it only when you really care that an object is placed at a particular location in memory. For example, when your hardware has a memory-mapped I/O timer device, and you want to place a Clock object at that memory location.
DANGER: You are taking sole responsibility that the pointer you pass to the “placement new” operator points to a region of memory that is big enough and is properly aligned for the object type that you’re creating. Neither the compiler nor the run-time system makes any attempt to check whether you did this right. If your Fred class needs to be aligned on a 4-byte boundary but you supplied a location that isn’t properly aligned, you can have a serious disaster on your hands (if you don’t know what “alignment” means, please don’t use the placement new syntax). You have been warned.
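If you have C++11 or later, one way to address the alignment concern is to ask the compiler for suitably aligned raw storage, for example with alignas (a sketch; other approaches exist):
#include <new>
#include "Fred.h"

void someCode()
{
  alignas(Fred) char memory[sizeof(Fred)];   // raw storage aligned for a Fred
  Fred* f = new(memory) Fred();              // placement new into that storage
  // ...
  f->~Fred();                                // destroying it is still your job (see below)
}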
You are also solely responsible for destructing the placed object. This is done by explicitly calling the destructor:
void someCode()
{
  char memory[sizeof(Fred)];
  void* p = memory;
  Fred* f = new(p) Fred();
  // ...
  f->~Fred();   // Explicitly call the destructor for the placed object
}
This is about the only time you ever explicitly call a destructor.
Note: there is a much cleaner but more sophisticated way of handling the destruction / deletion situation.
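One possibility (not necessarily the approach that note alludes to) is to let an RAII object make the explicit destructor call for you, e.g. std::unique_ptr (C++11) with a custom deleter that destroys but does not deallocate:
#include <memory>
#include <new>
#include "Fred.h"

void someCode()
{
  alignas(Fred) char memory[sizeof(Fred)];
  auto destroyOnly = [](Fred* p) { p->~Fred(); };   // deleter: destructor only, no delete
  std::unique_ptr<Fred, decltype(destroyOnly)> f(new(memory) Fred(), destroyOnly);
  // ...use *f...
}   // f's deleter runs the destructor automagically here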
Is there a placement delete?
No, but if you need one you can write your own.
Consider placement new used to place objects in a set of arenas:
#include <cstddef>   // for size_t

class Arena {
public:
  void* allocate(size_t);
  void deallocate(void*);
  // ...
};

void* operator new(size_t sz, Arena& a)
{
  return a.allocate(sz);
}

Arena a1(/* some arguments */);
Arena a2(/* some arguments */);
Given that, we can write:
X* p1 = new(a1) X;
Y* p2 = new(a1) Y;
Z* p3 = new(a2) Z;
// ...
But how can we later delete those objects correctly? The reason that there is no built-in “placement delete” to match placement new is that there is no general way of assuring that it would be used correctly. Nothing in the C++ type system allows us to deduce that p1 points to an object allocated in Arena a1. A pointer to any X allocated anywhere can be assigned to p1.
However, sometimes the programmer does know, and there is a way:
template<class T> void destroy(T* p, Arena& a)
{
  if (p) {
    p->~T();   // explicit destructor call
    a.deallocate(p);
  }
}
Now, we can write:
destroy(p1,a1);
destroy(p2,a1);
destroy(p3,a2);
If an Arena keeps track of what objects it holds, you can even write destroy() to defend itself against mistakes.
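For instance, if Arena had a (hypothetical) owns() member reporting whether a pointer came from that arena, destroy() could refuse to touch the wrong arena:
#include <cassert>

template<class T> void destroy(T* p, Arena& a)
{
  if (p) {
    assert(a.owns(p));   // 'owns()' is hypothetical: the Arena would need to track its allocations
    p->~T();             // explicit destructor call
    a.deallocate(p);
  }
}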
It is also possible to define matching operator new() and operator delete() pairs for a class hierarchy; see TC++PL(SE) 15.6. See also D&E 10.4 and TC++PL(SE) 19.4.5.
When I write a destructor, do I need to explicitly call the destructors for my member objects?
No. You never need to explicitly call a destructor (except with placement new).
A class’s destructor (whether or not you explicitly define one) automagically invokes the destructors for member objects. They are destroyed in the reverse order they appear within the declaration for the class.
class Member {
public:
  ~Member();
  // ...
};

class Fred {
public:
  ~Fred();
  // ...
private:
  Member x_;
  Member y_;
  Member z_;
};

Fred::~Fred()
{
  // Compiler automagically calls z_.~Member()
  // Compiler automagically calls y_.~Member()
  // Compiler automagically calls x_.~Member()
}
When I write a derived class’s destructor, do I need to explicitly call the destructor for my base class?
No. You never need to explicitly call a destructor (except with placement new).
A derived class’s destructor (whether or not you explicitly define one) automagically invokes the destructors for base class subobjects. Base classes are destructed after member objects. In the event of multiple inheritance, direct base classes are destructed in the reverse order of their appearance in the inheritance list.
class Member {
public:
  ~Member();
  // ...
};

class Base {
public:
  virtual ~Base();   // A virtual destructor
  // ...
};

class Derived : public Base {
public:
  ~Derived();
  // ...
private:
  Member x_;
};

Derived::~Derived()
{
  // Compiler automagically calls x_.~Member()
  // Compiler automagically calls Base::~Base()
}
Note: Order dependencies with virtual inheritance are trickier. If you are relying on order dependencies in a virtual inheritance hierarchy, you’ll need a lot more information than is in this FAQ.
Should my destructor throw an exception when it detects a problem?
Beware!!! See this FAQ for details.
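The short version of that advice is that an exception should never be allowed to escape from a destructor. A common pattern (a sketch; the Connection and shutdown() names are made up for illustration) is to catch and absorb the error inside the destructor:
class Connection {
public:
  ~Connection()
  {
    try {
      shutdown();   // cleanup that might throw
    }
    catch (...) {
      // swallow (or log) the error; never let it propagate out of a destructor
    }
  }
private:
  void shutdown();
  // ...
};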
Is there a way to force new to allocate memory from a specific memory area?
Yes. The good news is that these “memory pools” are useful in a number of situations. The bad news is that I’ll have to drag you through the mire of how it works before we discuss all the uses. But if you don’t know about memory pools, it might be worthwhile to slog through this FAQ — you might learn something useful!
First of all, recall that a memory allocator is simply supposed to return uninitialized bits of memory; it is not supposed to produce “objects.” In particular, the memory allocator is not supposed to set the virtual-pointer or any other part of the object, as that is the job of the constructor which runs after the memory allocator. Starting with a simple memory allocator function, allocate(), you would use placement new to construct an object in that memory. In other words, the following is morally equivalent to new Foo():
void* raw = allocate(sizeof(Foo)); // line 1
Foo* p = new(raw) Foo(); // line 2
Assuming you’ve used placement new and have survived the above two lines of code, the next step is to turn your memory allocator into an object. This kind of object is called a “memory pool” or a “memory arena.” This lets your users have more than one “pool” or “arena” from which memory will be allocated. Each of these memory pool objects will allocate a big chunk of memory using some specific system call (e.g., shared memory, persistent memory, stack memory, etc.; see below), and will dole it out in little chunks as needed. Your memory-pool class might look something like this:
class Pool {
public:
  void* alloc(size_t nbytes);
  void dealloc(void* p);
private:
  // ...data members used in your pool object...
};

void* Pool::alloc(size_t nbytes)
{
  // ...your algorithm goes here...
}

void Pool::dealloc(void* p)
{
  // ...your algorithm goes here...
}
Now one of your users might have a Pool called pool, from which they could allocate objects like this:
Pool pool;
// ...
void* raw = pool.alloc(sizeof(Foo));
Foo* p = new(raw) Foo();
Or simply:
Foo* p = new(pool.alloc(sizeof(Foo))) Foo();
The reason it’s good to turn Pool into a class is that it lets users create N different pools of memory rather than having one massive pool shared by all users. That allows users to do lots of funky things. For example, if they have a chunk of the system that allocates memory like crazy then goes away, they could allocate all their memory from a Pool, then not even bother doing any deletes on the little pieces: just deallocate the entire pool at once. Or they could set up a “shared memory” area (where the operating system specifically provides memory that is shared between multiple processes) and have the pool dole out chunks of shared memory rather than process-local memory. Another angle: many systems support a non-standard function often called alloca() which allocates a block of memory from the stack rather than the heap. Naturally this block of memory automatically goes away when the function returns, eliminating the need for explicit deletes. Someone could use alloca() to give the Pool its big chunk of memory, then all the little pieces allocated from that Pool act like they’re local: they automatically vanish when the function returns. Of course the destructors don’t get called in some of these cases, and if the destructors do something nontrivial you won’t be able to use these techniques, but in cases where the destructor merely deallocates memory, these sorts of techniques can be useful.
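As a concrete (and deliberately simplistic) sketch of the “grab one big chunk, never recycle the little pieces” idea, here is a bump-style pool; the class name and the malloc()-based backing store are illustrative assumptions, and alignment and error checking are ignored for brevity:
#include <cstdlib>
#include <new>

class BumpPool {   // illustrative stand-in, not the Pool skeleton above
public:
  explicit BumpPool(size_t total)
    : begin_((char*)std::malloc(total)), next_(begin_), end_(begin_ + total) { }
  ~BumpPool() { std::free(begin_); }   // the whole chunk goes away at once
  void* alloc(size_t nbytes)
  {
    if (next_ + nbytes > end_) throw std::bad_alloc();
    void* p = next_;
    next_ += nbytes;                   // just bump a pointer; nothing is tracked per allocation
    return p;
  }
  void dealloc(void*) { }              // individual pieces are never recycled
private:
  char* begin_;
  char* next_;
  char* end_;
};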
Assuming you survived the 6 or 8 lines of code needed to wrap your allocate function as a method of a Pool class, the next step is to change the syntax for allocating objects. The goal is to change from the rather clunky syntax new(pool.alloc(sizeof(Foo))) Foo() to the simpler syntax new(pool) Foo(). To make this happen, you need to add the following function just below the definition of your Pool class:
inline void* operator new(size_t nbytes, Pool& pool)
{
  return pool.alloc(nbytes);
}
Now when the compiler sees new(pool) Foo(), it calls the above operator new and passes sizeof(Foo) and pool as parameters, and the only function that ends up using the funky pool.alloc(nbytes) method is your own operator new.
Now to the issue of how to destruct/deallocate the Foo objects. Recall that the brute force approach sometimes used with placement new is to explicitly call the destructor then explicitly deallocate the memory:
void sample(Pool& pool)
{
  Foo* p = new(pool) Foo();
  // ...
  p->~Foo();         // explicitly call dtor
  pool.dealloc(p);   // explicitly release the memory
}
This has several problems, all of which are fixable:

1. The memory will leak if Foo::Foo() throws an exception.
2. The destruction/deallocation syntax is different from what most programmers are used to, so they’ll probably screw it up.
3. Users must somehow remember which pool goes with which object. Since the code that allocates is often in a different function from the code that deallocates, programmers will have to pass around two pointers (a Foo* and a Pool*), which gets ugly fast (example: what if they had an array of Foos, each of which potentially came from a different Pool; ugh).

We will fix them in the above order.
Problem #1: plugging the memory leak. When you use the “normal” new operator, e.g., Foo* p = new Foo(), the compiler generates some special code to handle the case when the constructor throws an exception. The actual code generated by the compiler is functionally similar to this:
// This is functionally what happens with Foo* p = new Foo():
Foo* p;

// don't catch exceptions thrown by the allocator itself
void* raw = operator new(sizeof(Foo));

// catch any exceptions thrown by the ctor
try {
  p = new(raw) Foo();   // call the ctor with raw as this
}
catch (...) {
  // oops, ctor threw an exception
  operator delete(raw);
  throw;   // rethrow the ctor's exception
}
The point is that the compiler deallocates the memory if the ctor throws an exception. But in the case of the “new with parameter” syntax (commonly called “placement new”), the compiler won’t know what to do if the exception occurs, so by default it does nothing:
// This is functionally what happens with Foo* p = new(pool) Foo():
void* raw = operator new(sizeof(Foo), pool);
// the above function simply returns "pool.alloc(sizeof(Foo))"
Foo* p = new(raw) Foo();
// if the above line "throws", pool.dealloc(raw) is NOT called
So the goal is to force the compiler to do something similar to what it does with the global new operator. Fortunately it’s simple: when the compiler sees new(pool) Foo(), it looks for a corresponding operator delete. If it finds one, it does the equivalent of wrapping the ctor call in a try block as shown above. So we would simply provide an operator delete with the following signature (be careful to get this right; if the second parameter has a different type from the second parameter of the operator new(size_t, Pool&), the compiler doesn’t complain; it simply bypasses the try block when your users say new(pool) Foo()):
void operator delete(void* p, Pool& pool)
{
  pool.dealloc(p);
}
After this, the compiler will automatically wrap the ctor calls of your new expressions in a try block:
// This is functionally what happens with Foo* p = new(pool) Foo():
Foo* p;

// don't catch exceptions thrown by the allocator itself
void* raw = operator new(sizeof(Foo), pool);
// the above simply returns "pool.alloc(sizeof(Foo))"

// catch any exceptions thrown by the ctor
try {
  p = new(raw) Foo();   // call the ctor with raw as this
}
catch (...) {
  // oops, ctor threw an exception
  operator delete(raw, pool);   // that's the magical line!!
  throw;   // rethrow the ctor's exception
}
In other words, the one-liner function operator delete(void* p, Pool& pool) causes the compiler to automagically plug the memory leak. Of course that function can be, but doesn’t have to be, inline.
Problems #2 (“ugly therefore error prone”) and #3 (“users must manually associate pool-pointers with the object that allocated them, which is error prone”) are solved simultaneously with an additional 10-20 lines of code in one place. In other words, we add 10-20 lines of code in one place (your Pool header file) and simplify an arbitrarily large number of other places (every piece of code that uses your Pool class).
The idea is to implicitly associate a Pool* with every allocation. The Pool* associated with the global allocator would be NULL, but at least conceptually you could say every allocation has an associated Pool*. Then you replace the global operator delete so it looks up the associated Pool*, and if non-NULL, calls that Pool’s deallocate function. For example, if(!) the normal deallocator used free(), the replacement for the global operator delete would look something like this:
void operator delete(void* p)
{
  if (p != NULL) {
    Pool* pool = /* somehow get the associated 'Pool*' */;
    if (pool == NULL)
      free(p);
    else
      pool->dealloc(p);
  }
}
If you’re not sure whether the normal deallocator was free(), the easiest approach is to also replace the global operator new with something that uses malloc(). The replacement for the global operator new would look something like this (note: this definition ignores a few details such as the new_handler loop and the throw std::bad_alloc() that happens if we run out of memory):
void* operator new(size_t nbytes)
{
  if (nbytes == 0)
    nbytes = 1;   // so all alloc's get a distinct address
  void* raw = malloc(nbytes);
  // ...somehow associate the NULL 'Pool*' with 'raw'...
  return raw;
}
The only remaining problem is to associate a Pool* with an allocation. One approach, used in at least one commercial product, is to use a std::map<void*,Pool*>. In other words, build a look-up table whose keys are the allocation-pointer and whose values are the associated Pool*. For reasons I’ll describe in a moment, it is essential that you insert a key/value pair into the map only in operator new(size_t,Pool&). In particular, you must not insert a key/value pair from the global operator new (e.g., you must not say poolMap[p] = NULL in the global operator new). Reason: doing that would create a nasty chicken-and-egg problem, since std::map probably uses the global operator new: it would end up inserting a new entry every time it inserts a new entry, leading to infinite recursion. Bang, you’re dead!
Even though this technique requires a std::map look-up for each deallocation, it seems to have acceptable performance, at least in many cases.
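A sketch of that bookkeeping (poolMap is a hypothetical name, and this assumes the malloc()-based global operator new shown above; as warned, only the pool-flavored operator new touches the map):
#include <map>
#include <cstdlib>

std::map<void*, Pool*> poolMap;   // hypothetical global look-up table

void* operator new(size_t nbytes, Pool& pool)
{
  if (nbytes == 0)
    nbytes = 1;                   // so all alloc's get a distinct address
  void* raw = pool.alloc(nbytes);
  poolMap[raw] = &pool;           // only the pool-flavored new inserts into the map
  return raw;
}

void operator delete(void* p)
{
  if (p != NULL) {
    std::map<void*, Pool*>::iterator it = poolMap.find(p);
    if (it == poolMap.end()) {
      std::free(p);               // not in the map: it came from the malloc()-based global new
    } else {
      it->second->dealloc(p);     // it came from a Pool
      poolMap.erase(it);          // the erased node itself goes through the global path above
    }
  }
}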
Another approach that is faster but might use more memory and is a little trickier is to prepend a Pool* just before all allocations. For example, if nbytes was 24, meaning the caller was asking to allocate 24 bytes, we would allocate 28 (or 32 if you think the machine requires 8-byte alignment for things like doubles and/or long longs), stuff the Pool* into the first 4 bytes, and return the pointer 4 (or 8) bytes from the beginning of what you allocated. Then your global operator delete backs off the 4 (or 8) bytes, finds the Pool*, and if NULL, uses free(), otherwise calls pool->dealloc(). The parameter passed to free() and pool->dealloc() would be the pointer 4 (or 8) bytes to the left of the original parameter, p. If(!) you decide on 4-byte alignment, your code would look something like this (although as before, the following operator new code elides the usual out-of-memory handlers):
void* operator new(size_t nbytes)
{
  if (nbytes == 0)
    nbytes = 1;   // so all alloc's get a distinct address
  void* ans = malloc(nbytes + 4);   // overallocate by 4 bytes
  *(Pool**)ans = NULL;              // use NULL in the global new
  return (char*)ans + 4;            // don't let users see the Pool*
}

void* operator new(size_t nbytes, Pool& pool)
{
  if (nbytes == 0)
    nbytes = 1;   // so all alloc's get a distinct address
  void* ans = pool.alloc(nbytes + 4);   // overallocate by 4 bytes
  *(Pool**)ans = &pool;                 // put the Pool* here
  return (char*)ans + 4;                // don't let users see the Pool*
}

void operator delete(void* p)
{
  if (p != NULL) {
    p = (char*)p - 4;   // back off to the Pool*
    Pool* pool = *(Pool**)p;
    if (pool == NULL)
      free(p);            // note: 4 bytes left of the original p
    else
      pool->dealloc(p);   // note: 4 bytes left of the original p
  }
}
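Note that the hard-coded 4s above assume a 32-bit Pool*; a more portable variant of the same trick (a sketch; it still punts on alignment of the returned block) computes the header size from the pointer type itself:
#include <cstdlib>   // malloc, free, size_t

const size_t kHeaderSize = sizeof(Pool*);   // possibly rounded up to the strictest alignment you need

void* operator new(size_t nbytes)
{
  if (nbytes == 0)
    nbytes = 1;
  void* ans = std::malloc(nbytes + kHeaderSize);
  *(Pool**)ans = NULL;                      // NULL means "came from the global allocator"
  return (char*)ans + kHeaderSize;
}

void operator delete(void* p)
{
  if (p != NULL) {
    p = (char*)p - kHeaderSize;             // back off to the Pool*
    Pool* pool = *(Pool**)p;
    if (pool == NULL)
      std::free(p);
    else
      pool->dealloc(p);
  }
}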
Naturally the last few paragraphs of this FAQ are viable only when you are allowed to change the global operator new and operator delete. If you are not allowed to change these global functions, the first three quarters of this FAQ is still applicable.