Memory allocation failures can happen. Wu Yongwei investigates when they happen and suggests a strategy to deal with them.
By Wu Yongwei
From the article:
C++ exceptions are habitually disabled in many software projects. A related issue is that these projects also encourage the use of new (nothrow) instead of the more common new, as the latter may throw an exception. This choice is somewhat self-deceptive, as people don't usually disable all the mechanisms that can potentially throw exceptions, such as standard library containers and string. In fact, every time we initialize or modify a map, we may be allocating memory on the heap. If we believe that new will end in an exception (and therefore choose to use new (nothrow)), exceptions may equally occur when we use these mechanisms. In a program that has disabled exceptions, the result will inevitably be a program crash.
However, it seems that the crashes I described are unlikely to occur… When was the last time you saw a memory allocation failure? Before I tested to check this issue, the last time I saw a memory allocation failure was when there was memory corruption in the program: there was still plenty of memory in the system, but the memory manager of the program could no longer work reliably. In this case, there was already undefined behaviour, and checking for memory allocation failure ceased to make sense. A crash of the program was inevitable, and it was a good thing if the crash occurred earlier, whether due to an uncaught exception, a null pointer dereference, or something else.
Now the question is: If there is no undefined behaviour in the program, will memory allocation ever fail? This seems worth exploring.