Quick Q: Efficiency of postincrement v.s. preincrement in C++

Quick A: The difference is only marginal with optimizations enabled.

Recently on SO:

Efficiency of postincrement v.s. preincrement in C++

It is true, although perhaps overly strict. Pre-increment doesn't necessarily introduce a data dependency, but it can.

A trivial example for exposition:

a = b++ * 2;

Here, the increment can be executed in parallel with the multiplication. The operands of both the increment and the multiplication are immediately available and do not depend on the result of either operation.

Another example:

a = ++b * 2;

Here, the multiplication must be executed after the increment, because one of the operands of the multiplication depends on the result of the increment.

Of course, these statements do slightly different things, so the compiler might not always be able to transform the program from one form to the other while keeping the semantics the same - which is why choosing post-increment might make a slight difference in performance.

A practical example, using a loop:

for (int i = 0; arr[i++];)
    count++;

for (int i = -1; arr[++i];)
    count++;

One might think that the latter is necessarily faster, reasoning that "post-increment makes a copy" - which would indeed be true for non-fundamental types. However, due to the data dependency (and because int is a fundamental type with no overloaded increment operator), the former can theoretically be more efficient. Whether it actually is depends on the CPU architecture and on the ability of the optimizer.

For what it's worth - in a trivial program on the x86 architecture, using the g++ compiler with optimization enabled, the above loops had identical assembly output, so they are perfectly equivalent in that case.
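For reference, a minimal test harness along these lines (an assumed setup, not the exact program behind that observation) could look like this, wrapping each loop in a function so the generated assembly can be compared:

#include <cstddef>

// Count the non-zero elements at the front of a zero-terminated array.
// Post-increment version: test arr[i], then increment i.
std::size_t count_post(const int* arr)
{
    std::size_t count = 0;
    for (int i = 0; arr[i++];)
        count++;
    return count;
}

// Pre-increment version: increment i, then test arr[i].
std::size_t count_pre(const int* arr)
{
    std::size_t count = 0;
    for (int i = -1; arr[++i];)
        count++;
    return count;
}

Compiling both with optimizations (for example, g++ -O2) and comparing the output is enough to check the claim on a given platform.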


Rules of thumb:

  • If the counter is a fundamental type and the result of the increment is not used, it makes no difference whether you use post- or pre-increment.

  • If the counter is not a fundamental type, the result of the increment is not used, and optimizations are disabled, then pre-increment may be more efficient. With optimizations enabled, there is no difference.

  • If the counter is a fundamental type and the result of the increment is used, then post-increment can theoretically be marginally more efficient - on some CPU architectures, in some contexts, with some compilers.

  • If the counter is a non-fundamental type and the result of the increment is used, then pre-increment is typically faster than post-increment; a brief sketch of why follows below. Also see R Sahu's answer regarding this case.
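To see where the cost comes from in that last case, here is a minimal sketch (a hypothetical Counter type, not taken from the answers) of the conventional pair of increment overloads; the postfix form must copy the old value in order to return it:

struct Counter {
    int value = 0;

    // Prefix: increment in place and return a reference; no copy involved.
    Counter& operator++()
    {
        ++value;
        return *this;
    }

    // Postfix: copy the old state, increment, return the copy.
    // For heavier types, this copy is what the result-is-used case pays for.
    Counter operator++(int)
    {
        Counter old = *this;
        ++value;
        return old;
    }
};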

CppCon 2015 std::allocator Is to Allocation what std::vector Is to Vexation--Andrei Alexandrescu

Have you registered for CppCon 2016 in September? Don’t delay – Registration is open now.

While we wait for this year’s event, we’re featuring videos of some of the 100+ talks from CppCon 2015 for you to enjoy. Here is today’s feature:

std::allocator Is to Allocation what std::vector Is to Vexation

by Andrei Alexandrescu

(watch on YouTube) (watch on Channel 9)

Summary of the talk:

std::allocator has an inglorious past, murky present, and cheerless future. STL introduced allocators as a stop gap for the now antiquated segmented memory models of the 1990s. Their design was limited and in many ways wasn't even aiming at helping allocation that much. Because allocators were there, they simply continued being there, up to the point they became impossible to either uproot or make work, in spite of valiant effort spent by the community.

But this talk aims at spending less time on poking criticism at std::allocator and more on actually defining allocator APIs that work.

Scalable, high-performance memory allocation is a topic of increasing importance in today's demanding applications. For such, std::allocator simply doesn't work. This talk discusses the full design of a memory allocator created from first principles. It is generic, componentized, and composable for supporting application-specific allocation patterns.
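As a taste of what "componentized and composable" can mean in practice, here is a minimal sketch of a fallback composition; the names and interface are illustrative assumptions, not the exact API presented in the talk:

#include <cstddef>

// A block of raw memory handed out by an allocator.
struct Blk {
    void*       ptr;
    std::size_t size;
};

// Compose two allocators: try Primary first, fall back to Fallback.
// Each policy is expected to provide allocate, deallocate and owns.
template <class Primary, class Fallback>
class FallbackAllocator : private Primary, private Fallback {
public:
    Blk allocate(std::size_t n)
    {
        Blk b = Primary::allocate(n);
        if (!b.ptr)
            b = Fallback::allocate(n);
        return b;
    }

    void deallocate(Blk b)
    {
        if (Primary::owns(b))
            Primary::deallocate(b);
        else
            Fallback::deallocate(b);
    }

    bool owns(Blk b) { return Primary::owns(b) || Fallback::owns(b); }
};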

Take benefit from simple laziness -- Krzysztof Ostrowski

Presentation of simple techniques that can be used to delay execution of costly computations.

Take benefit from simple laziness

by Krzysztof Ostrowski

From the article:

In the C++ world, lazy evaluation is usually linked to templates and their property of separating definition from actual instantiation. Given that, we can, for instance, delay the binding of a symbol.

Besides the above, we have plain old short-circuiting inherited from the C language: the logical operators && (and) and || (or), and the ternary operator ?:. They can be used as constructs to lazily execute some of the expressions (which must still be valid C++). With short-circuiting we want to delay or skip the execution of costly operations.
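A small illustration of the short-circuiting idea (a minimal sketch with hypothetical functions, not code from the article): the costly call is skipped entirely whenever the cheap check already decides the outcome.

#include <iostream>

bool cheap_check()  { return false; }   // fast test that often settles the question
bool costly_check()                     // slow test, only needed sometimes
{
    std::cout << "expensive work runs\n";
    return true;
}

int main()
{
    // Because && short-circuits, costly_check() is evaluated only
    // when cheap_check() returns true.
    if (cheap_check() && costly_check())
        std::cout << "both checks passed\n";
    else
        std::cout << "expensive work was skipped\n";
}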

CppCon 2015 Lessons in Sustainability: How to Maintain a C++ Codebase for Decades--Titus Winters

Have you registered for CppCon 2016 in September? Don’t delay – Registration is open now.

While we wait for this year’s event, we’re featuring videos of some of the 100+ talks from CppCon 2015 for you to enjoy. Here is today’s feature:

Lessons in Sustainability: How to Maintain a C++ Codebase for Decades

by Titus Winters

(watch on YouTube) (watch on Channel 9)

Summary of the talk:

Google maintains (we believe) the largest monolithic C++ codebase in the world with over 100M lines of C++ code. Early commits to this repository date back to the late 1990s. About 4000 engineers submit at least one change in C++ every week. We’ve learned a few things about what it takes to maintain a codebase at this scale.

In this talk I’ll present some of the lessons we’ve learned over the years with respect to policies, technology, education, design, and maintenance of a long-lived monolithic codebase.

CppCast Episode 67: CMake Server with Stephen Kelly

Episode 67 of CppCast, the only podcast for C++ developers by C++ developers. In this episode, Rob and Jason are joined by Stephen Kelly to discuss his work on the CMake Server project, which will enable advanced tooling for CMake.

CppCast Episode 67: CMake Server with Stephen Kelly

by Rob Irving and Jason Turner

About the interviewee:

Stephen Kelly first encountered CMake while working on KDE and, like many C++ developers, did his best to ignore the build system completely. That worked well for four years, until 2011, when the modularization of the KDE libraries led to a desire to simplify and upstream as much as possible to Qt and CMake. Since then, Stephen has been responsible for many core features and designs of "Modern CMake" and now tries to lead designs for its future.

Turning Egyptian Division Into Logarithms -- David Sanders

Insights I've had while reading Elements of Programming and From Mathematics to Generic Programming

Turning Egyptian Division into Logarithms

by David Sanders

From the article:

I have benefitted greatly from multiple readings of Elements of Programming by Alexander Stepanov and Paul McJones as well as From Mathematics to Generic Programming by Stepanov and Daniel Rose. Each time I read either work, I learn something new.

In this article, I describe an extension to the ancient Egyptian division algorithm to yield logarithm and remainder in addition to quotient and remainder.
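For readers who have not seen the starting point, here is a minimal sketch of the classic Egyptian (doubling-based) division returning quotient and remainder; the logarithm extension the article develops is not reproduced here:

#include <utility>

// Egyptian (binary) division: compute n / d and n % d by repeated doubling.
// Preconditions: d > 0 and 2 * d does not overflow along the recursion.
std::pair<unsigned, unsigned> quotient_remainder(unsigned n, unsigned d)
{
    if (n < d)
        return {0u, n};

    // Divide by the doubled divisor first, then fix up the last step.
    auto [q, r] = quotient_remainder(n, 2 * d);
    q *= 2;
    if (r >= d) {
        r -= d;
        ++q;
    }
    return {q, r};
}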

CppCon 2016: David Schwartz Keynote and Jason Turner Plenary

David Schwartz, the Chief Cryptographer of the Ripple distributed payment system, will be presenting a keynote at CppCon 2016 about developing blockchain software in C++.

Also, Jason Turner will give a plenary talk about using C++17 to write high-performance code on the Commodore 64.

You can read more about their talks here.

There’s still time to register for CppCon 2016! Come join us in September!

Cppcheck-1.75 has been released--Daniel Marjamäki

A new version is here!

Cppcheck-1.75 has been released

by Daniel Marjamäki

From the article:

General changes:

  • Replaced internal preprocessor by the brand-new preprocessor 'simplecpp'
  • Improved Windows installer: Install a copy of the license instead of asking to accept it
  • The Windows x64 binaries are now compiled with profile guided optimization, resulting in a speedup of 11%
  • Improved manual, especially the chapter about Libraries
  • Improved CWE mapping
  • --append is deprecated and will be removed in 1.80...