Articles & Books

Episode Eleven: To Kill a Move Constructor--Agustín "K-ballo" Bergé and Howard Hinnant

Here is an advanced article about the behavior of C++ copy and move operations. A simpler version is also provided for a quicker and easier understanding of the best practices. The takeaway: don't declare a move constructor as deleted!

A more user-friendly version

by Howard Hinnant

Episode Eleven: To Kill a Move Constructor

by Agustín "K-ballo" Bergé

From the article:

Unlike copy operations, which are provided by the compiler if not user declared, move operations can and often are suppressed such that a class might not have one. Furthermore, it is possible for a class to have a —user declared— move operation which is both defined as deleted, and at the same time ignored by overload resolution, as if it didn't exist...
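The gist is easier to see in code. Below is a minimal sketch of my own (not the article's example, which goes into much more detail): an explicitly deleted move constructor still participates in overload resolution and so poisons moves, while a defaulted move constructor that ends up defined as deleted is ignored by overload resolution, so copying is used instead.

#include <utility>

struct NotMovable {
    NotMovable() = default;
    NotMovable(const NotMovable&) = default;
    NotMovable(NotMovable&&) = delete;   // explicitly deleted: still takes part in overload resolution
};

struct Holder {
    NotMovable member;
    Holder() = default;
    Holder(const Holder&) = default;
    Holder(Holder&&) = default;          // defaulted but defined as deleted (member can't be moved);
                                         // such a move constructor is ignored by overload resolution
};

int main() {
    Holder a;
    Holder b = std::move(a);             // OK: silently falls back to the copy constructor

    NotMovable x;
    // NotMovable y = std::move(x);      // error: the deleted move constructor is selected
    (void)b; (void)x;
}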


Top 15 C++ Exception handling mistakes and how to avoid them--Deb Haldar

This article changed my view of exceptions:

Top 15 C++ Exception handling mistakes and how to avoid them

by Deb Haldar

From the article:

Do you use exception handling in your C++ code?

If you don’t, why not?

Perhaps you’ve been conditioned to believe that exception handling is bad practice in C++. Or maybe you think that it’s prohibitively expensive in terms of performance. Or maybe it’s just not the way your legacy code is laid out and you’re stuck in the rut.

Whatever your reason is, it’s probably worth noting that using C++ Exceptions instead of error codes has a lot of advantages. So unless you’re coding some real-time or embedded systems, C++ exceptions can make your code more robust, maintainable and performant in the normal code path (yes performant, you read that right!).

In this article we’re going to look at 15 mistakes that a lot of developers make when just starting off with C++ exceptions or considering using C++ exceptions...
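A minimal sketch of the "normal code path" point (my own illustration, not code from the article; all names below are made up): with error codes every caller pays for a check even when nothing goes wrong, whereas with exceptions the happy path is straight-line code and failures unwind to a single handler.

#include <stdexcept>
#include <string>

// Error-code style: each level must check and propagate the code.
int parse_config_code(const std::string& text, int& out_value) {
    if (text.empty()) return -1;              // error handling mixed into the normal flow
    out_value = static_cast<int>(text.size());
    return 0;
}

int load_code(const std::string& text, int& out_value) {
    int rc = parse_config_code(text, out_value);
    if (rc != 0) return rc;                   // boilerplate branch on the happy path
    return 0;
}

// Exception style: the happy path carries no error-checking branches.
int parse_config(const std::string& text) {
    if (text.empty()) throw std::runtime_error("empty config");
    return static_cast<int>(text.size());
}

int load(const std::string& text) {
    return parse_config(text);                // no per-call error plumbing
}

int main() {
    int v = 0;
    load_code("abc", v);
    try { v = load("abc"); } catch (const std::exception&) { /* one place to handle failures */ }
    return v == 3 ? 0 : 1;
}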

How C++14 and C++17 help to write faster (and better) code. Real world examples -- Dan Levin

An article on writing high-performance code with the newer C++ standards, illustrated with real-world code examples.

How C++14 and C++17 help to write faster (and better) code. Real world examples

by Dan Levin

From the article:

Writing high performance code is always a difficult task. Pure algorithms are not always a good fit for real-world architectures.

Often, we’re limited to just two options: either beautiful and slow or fast and ugly.

In this article I’ll show when C++11 and C++14 can help to write fast, compact and well-structured code.
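As a generic illustration of the kind of win such articles describe (my own sketch, not taken from the article), C++14 relaxed constexpr lets an ordinary, readable loop build a lookup table entirely at compile time, so the well-structured version costs nothing extra at run time:

// Wrap a plain array so a constexpr function can build and return it (valid C++14).
struct PopcountTable { int values[256]; };

constexpr PopcountTable make_popcount_table() {
    PopcountTable t{};
    for (int i = 0; i < 256; ++i) {
        int bits = 0;
        for (unsigned v = static_cast<unsigned>(i); v != 0; v >>= 1)
            bits += static_cast<int>(v & 1u);
        t.values[i] = bits;
    }
    return t;
}

constexpr PopcountTable popcount_table = make_popcount_table();
static_assert(popcount_table.values[0xFF] == 8, "computed at compile time");

int popcount8(unsigned char byte) {
    return popcount_table.values[byte];       // a single table lookup at run time
}

int main() {
    return popcount8(0x0F) == 4 ? 0 : 1;
}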

[C++17] Structured Bindings - Convert struct to a tuple (simple reflection)

An interesting piece of code!

[C++17] Structured Bindings - Convert struct to a tuple (simple reflection)

From the article:

Very simple approach to convert any struct (up to N members) to a tuple using C++17 structured bindings and the idea from Boost.DI (http://boost-experimental.github.io/di/cppnow-2016/#/7/11) used to detect type constructor traits.
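A rough reconstruction of how the trick works (my own sketch of the same idea, not the article's exact code, and limited to three members for brevity): probe how many initializers the aggregate accepts using a type that converts to anything, then take the structured binding of the matching size.

#include <tuple>
#include <type_traits>
#include <utility>

struct any_type {
    template <class T>
    operator T();                       // never called: used only in unevaluated contexts
};

template <class T, class... Args>
decltype(void(T{std::declval<Args>()...}), std::true_type{}) braces_test(int);
template <class T, class... Args>
std::false_type braces_test(...);

template <class T, class... Args>
constexpr bool is_braces_constructible_v = decltype(braces_test<T, Args...>(0))::value;

template <class T>
auto to_tuple(T&& object) {
    using type = std::decay_t<T>;
    if constexpr (is_braces_constructible_v<type, any_type, any_type, any_type>) {
        auto&& [a, b, c] = std::forward<T>(object);
        return std::make_tuple(a, b, c);
    } else if constexpr (is_braces_constructible_v<type, any_type, any_type>) {
        auto&& [a, b] = std::forward<T>(object);
        return std::make_tuple(a, b);
    } else if constexpr (is_braces_constructible_v<type, any_type>) {
        auto&& [a] = std::forward<T>(object);
        return std::make_tuple(a);
    } else {
        return std::make_tuple();
    }
}

struct point { int x; double y; };

int main() {
    auto t = to_tuple(point{1, 2.5});
    static_assert(std::is_same_v<decltype(t), std::tuple<int, double>>);
    return std::get<0>(t) == 1 ? 0 : 1;
}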

How Bloomberg is Advancing C++ at Scale--John Lakos

Interesting interview:

How Bloomberg is Advancing C++ at Scale

by John Lakos

From the article:

John Lakos manages the Bloomberg Development Environment group, which offers a set of C++ software libraries, development tools, and methodology to well over a thousand Bloomberg developers. He is an authority on large-scale C++ software infrastructure, receiving recent acclaim for two publications by Pearson Education on methodology for industrial software development [Part 1, Part 2]. BDE and its libraries are open source and can be found on GitHub. In this conversation, Lakos discusses the importance of instilling process and discipline in all software development projects...

Quick Q: Efficiency of postincrement v.s. preincrement in C++

Quick A: The difference is only marginal with optimizations enabled.

Recently on SO:

Efficiency of postincrement v.s. preincrement in C++

It is true - although perhaps overly strict. Pre increment doesn't necessarily introduce a data dependency - but it can.

A trivial example for exposition:

a = b++ * 2;

Here, the increment can be executed in parallel with the multiplication. The operands of both the increment and the multiplication are immediately available and do not depend on the result of either operation.

Another example:

a = ++b * 2;

Here, the multiplication must be executed after the increment, because one of the operands of the multiplication depends on the result of the increment.

Of course, these statements do slightly different things, so the compiler might not always be able to transform the program from one form to the other while keeping the semantics the same - which is why using the post increment might make a slight difference in performance.

A practical example, using a loop:

for (int i = 0; arr[i++];)
    count++;

for (int i = -1; arr[++i];)
    count++;

One might think that the latter is necessarily faster if they reason that "post-increment makes a copy", which would indeed be true for non-fundamental types. However, due to the data dependency (and because int is a fundamental type with no overloaded increment operator), the former can theoretically be more efficient. Whether it is depends on the CPU architecture and the ability of the optimizer.

For what it's worth - in a trivial program, on x86 arch, using g++ compiler with optimization enabled, the above loops had identical assembly output, so they are perfectly equivalent in that case.


Rules of thumb:

If the counter is a fundamental type and the result of increment is not used, then it makes no difference whether you use post/pre increment.

If the counter is not a fundamental type and the result of the increment is not used and optimizations are disabled, then pre increment may be more efficient. With optimizations enabled, there is no difference.

If the counter is a fundamental type and the result of increment is used, then post increment can theoretically be marginally more efficient - in some cpu architecture - in some context - using some compiler.

If the counter is a complex type and the result of the increment is used, then pre increment is typically faster than post increment. Also see R Sahu's answer regarding this case.
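To make the last rule concrete, here is a small sketch of my own (not code from the linked answers): the canonical postfix operator++ for a class type must return the previous value, which typically costs an extra copy that the prefix form avoids.

#include <iostream>

struct Counter {
    int value = 0;

    Counter& operator++() {            // prefix: modify in place, return *this
        ++value;
        return *this;
    }

    Counter operator++(int) {          // postfix: copy the old state, then increment
        Counter old = *this;           // this copy is the typical extra cost
        ++value;
        return old;
    }
};

int main() {
    Counter c;
    ++c;                               // no copy
    Counter before = c++;              // copies the previous state
    std::cout << before.value << ' ' << c.value << '\n';   // prints "1 2"
}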

Take benefit from simple laziness -- Krzysztof Ostrowski

A presentation of simple techniques for delaying the execution of costly computations.

Take benefit from simple laziness

by Krzysztof Ostrowski

From the article:

In the C++ world, lazy evaluation is usually linked to templates and their property of separating definition from actual instantiation. Given that, we can, for instance, delay the binding of a symbol.

Besides the above, we have plain old short-circuiting inherited from the C language: logical operators such as && (and) and || (or), and the ternary operator ?:. They can be used as constructs to lazily execute some of the expressions (which must still be valid C++). With short-circuiting we want to delay or skip the execution of costly operations.
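As a small illustration of that last point (my own sketch, not code from the article; the function below is made up), the built-in && short-circuits, so a costly call can be skipped entirely on the common path:

#include <iostream>

// Hypothetical costly check, used only for illustration.
bool expensive_lookup(int key) {
    std::cout << "expensive_lookup called\n";
    return key % 2 == 0;
}

int main() {
    bool cached = true;   // a cheap condition we can test first
    int key = 42;

    // && evaluates its right-hand side only if the left-hand side is true,
    // so expensive_lookup is never called when the value is already cached.
    if (!cached && expensive_lookup(key)) {
        std::cout << "refreshed from lookup\n";
    } else {
        std::cout << "served from cache\n";
    }
}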

Turning Egyptian Division Into Logarithms -- David Sanders

Insights I've had while reading Elements of Programming and From Mathematics to Generic Programming.

Turning Egyptian Division into Logarithms

by David Sanders

From the article:

I have benefitted greatly from multiple readings of Elements of Programming by Alexander Stepanov and Paul McJones as well as From Mathematics to Generic Programming by Stepanov and Daniel Rose. Each time I read either work, I learn something new.

In this article, I describe an extension to the ancient Egyptian division algorithm to yield the logarithm in addition to the quotient and remainder.
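As a rough sketch of the idea (my own code, not the author's; the article develops it far more carefully), Egyptian division proceeds by repeated doubling, and the number of doublings performed is exactly the floor of the base-2 logarithm of the quotient, so the logarithm falls out of the same loop:

#include <cstdint>
#include <iostream>
#include <tuple>

// Egyptian (doubling) division extended to also report floor(log2(quotient)).
// Preconditions: d > 0 and n >= d.
std::tuple<std::uint64_t, std::uint64_t, int> egyptian_divide_log(std::uint64_t n, std::uint64_t d) {
    // Double d until the next doubling would exceed n, counting the doublings.
    std::uint64_t doubled = d;
    int log2_q = 0;                          // becomes floor(log2(n / d))
    while (n - doubled >= doubled) {         // i.e. doubled * 2 <= n, written without overflow
        doubled += doubled;
        ++log2_q;
    }

    // Walk back down through the doublings, accumulating the quotient bit by bit.
    std::uint64_t quotient = 0;
    std::uint64_t remainder = n;
    std::uint64_t power = std::uint64_t{1} << log2_q;
    while (power > 0) {
        if (remainder >= doubled) {
            remainder -= doubled;
            quotient += power;
        }
        doubled >>= 1;
        power >>= 1;
    }
    return std::make_tuple(quotient, remainder, log2_q);
}

int main() {
    auto result = egyptian_divide_log(100, 7);
    std::cout << std::get<0>(result) << ' '   // 14
              << std::get<1>(result) << ' '   // 2
              << std::get<2>(result) << '\n'; // 3, since floor(log2(14)) == 3
}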