Articles & Books

Use CRTP for Polymorphic Chaining -- Marco Arena

Use CRTP for Polymorphic Chaining

by Marco Arena

 

This suggested video reminded me of another use of CRTP I described some months ago: how to support method chaining when inheritance is involved.

Consider this simple scenario:

struct Printer
{
   Printer(ostream& stream) : m_stream(stream) {}

   template<typename T>
   Printer& println(T&& message)
   {
      m_stream << message << endl; // print to the stored stream, not directly to cout
      return *this;
   }

protected:
   ostream& m_stream;
};

struct CoutPrinter : public Printer
{
   CoutPrinter() : Printer(cout) {}

   CoutPrinter& color(Color c) 
   {
      // change console color
      return *this;
   }
};

//...

CoutPrinter printer;
printer.color(red).println("Hello").color( // OOPS

println("Hello") returns a Printer& and not a CoutPrinter&, so color is no longer available.

In this post I describe a very simple approach based on CRTP to maintain the chaining relationship when inheritance is involved. I also propose a complicated-at-first-sight way to avoid duplication by employing a bit of metaprogramming.
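
The general shape of such a CRTP fix looks roughly like this (a minimal sketch with illustrative names, not the post's exact code): the base is parameterized on the derived type, so the chaining functions can return the derived type instead of the base.

#include <iostream>
#include <ostream>
#include <utility>

using namespace std;

// Minimal sketch (illustrative names, not the post's exact code): the base is a
// template on the derived type, so chaining members can return the derived type.
template<typename Derived>
struct BasePrinter
{
   BasePrinter(ostream& stream) : m_stream(stream) {}

   template<typename T>
   Derived& println(T&& message)
   {
      m_stream << std::forward<T>(message) << endl;
      return static_cast<Derived&>(*this);
   }

protected:
   ostream& m_stream;
};

struct ColorConsolePrinter : BasePrinter<ColorConsolePrinter>
{
   ColorConsolePrinter() : BasePrinter<ColorConsolePrinter>(cout) {}

   ColorConsolePrinter& color(int /*c*/)  // placeholder: a real version would change the console color
   {
      return *this;
   }
};

// ColorConsolePrinter{}.color(1).println("Hello").color(2).println("world"); // chains fine now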

Continue reading...

Another Alternative to Lambda Move Capture -- John Bandela

[Ed.: John Bandela correctly points out some hazards of one approach to capture-by-move in C++11 lambdas. In this blog post, he offers his own safer solution.]

Another Alternative to Lambda Move Capture

by John Bandela

Today the post on isocpp.org called Learn How to Capture By Move caught my attention. I found the post informative and thought provoking, and you should go and read it before reading the rest of this post.

The problem is this: how do we capture a large object in a lambda when we want to avoid copying it?

The motivating example is below:

function<void()> CreateLambda()  
{  
  vector<HugeObject> hugeObj;  
  // ...preparation of hugeObj...  
  auto toReturn = [hugeObj] { /*...operate on hugeObj...*/ };  
  return toReturn;  
}

The solution proposed is a template class move_on_copy that is used like this:

auto moved = make_move_on_copy(move(hugeObj));
auto toExec = [moved] { /*...operate on moved.value...*/ };
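
A wrapper of that kind might look roughly like this (a sketch of the idea, not necessarily the post's exact code): it moves on construction and, by design, also moves when "copied".

#include <type_traits>
#include <utility>

// Sketch of the idea (not necessarily the post's exact code): the "copy" constructor
// actually moves, which is exactly the auto_ptr-like behavior discussed below.
template<typename T>
struct move_on_copy
{
    explicit move_on_copy(T&& t) : value(std::move(t)) {}
    move_on_copy(const move_on_copy& other) : value(std::move(other.value)) {}
    move_on_copy& operator=(const move_on_copy&) = delete;

    mutable T value;
};

template<typename T>
move_on_copy<typename std::remove_reference<T>::type> make_move_on_copy(T&& t)
{
    return move_on_copy<typename std::remove_reference<T>::type>(std::move(t));
}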

However, there are problems with this approach, mainly regarding safety. move_on_copy acts like auto_ptr: it silently performs moves instead of copies (by design).

I present here a different take on the problem that accomplishes much of the above, trading a little more verbosity for more clarity and safety.

Continue reading...

 

Tour of C++: Second chapter posted

This 2nd chapter of my Tour of C++ introduces the basic C++ abstraction mechanisms. If you have a 1990s view of C++, you are in for a few surprises: the integrated set of abstraction mechanisms in C++11 allows for simpler and more powerful abstractions than previously. For starters, pointers basically disappear from sight (hiding inside resource handles). Naturally, this is achieved without loss of performance. Enjoy!

Constructive comments would be most welcome.

Compile-Time Computations -- Andrzej Krzemieński

[Ed.: This is an old post but we feel it has valuable information about constexpr.]

This is a good article about constant expressions and computations at compile time. It gives good coverage of the new constexpr keyword, which allows both compile-time and run-time evaluation. constexpr is discussed rigorously, with information on the rules the facility requires.

Compile-Time Computations

by Andrzej Krzemieński

[...] But now, consider the following example:

const int i = 2;

const char array[ i == 2 ? 64 : throw exception() ];

Throwing an exception (which is an expression) cannot be evaluated at compile time, but since i == 2 is true, the third argument of the conditional operator should not be evaluated, at least at run time. So is the above valid code? Not in C++03. It is valid, however, in C++11. Does this sound incredible? [...]
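
For a flavor of what constexpr buys you, here is a minimal illustration of my own (not taken from the article): the same function can be evaluated at compile time, e.g. to produce an array bound, or at run time with ordinary arguments.

#include <cstddef>

// Minimal illustration (mine, not from the article): a constexpr function is
// evaluated at compile time when used where a constant expression is required.
constexpr std::size_t square(std::size_t n) { return n * n; }

int buffer[square(8)];              // compile-time evaluation: array bounds must be constant

std::size_t at_runtime(std::size_t n)
{
    return square(n);               // the same function, evaluated at run time
}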

 

Continue reading...

 

Learn How to Capture by Move -- Marco Arena

[Ed.: C++11 lambdas support capturing local variables by value and by reference... but not by move. This will likely be addressed in the next standard, but in the meantime we need workarounds, usually involving a shared_ptr, to get the right behavior. In this article, Marco Arena looks at the shared_ptr approach and offers another potential helper. Observing the tradeoffs in these workarounds can be instructive.]

Learn How to Capture by Move

by Marco Arena

What if I want to capture an object by moving it, instead of either copying or referencing it? Consider this plausible scenario:

function<void()> CreateLambda()
{
    vector<HugeObject> hugeObj;
    // ...preparation of hugeObj...

    auto toReturn = [hugeObj] { /*...operate on hugeObj...*/ };
    return toReturn;
}

This fragment of code prepares a vector of HugeObject (i.e., expensive to copy) and returns a lambda that uses this vector (the vector is captured by copy because it goes out of scope when the lambda is returned). Can we do better?

"Yes, of course we can!" – I heard. "We can use a shared_ptr to reference-count the vector and avoid copying it."  ...

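The shared_ptr workaround he refers to looks roughly like this (a sketch; HugeObject here is just a placeholder type): the vector lives on the heap and the lambda captures only the cheap-to-copy shared_ptr.

#include <functional>
#include <memory>
#include <vector>

struct HugeObject { /* ... */ };    // placeholder type for the sketch

// Sketch of the shared_ptr workaround: the lambda copies only the pointer,
// while the vector itself is heap-allocated and reference-counted.
std::function<void()> CreateLambda()
{
    auto hugeObj = std::make_shared<std::vector<HugeObject>>();
    // ...preparation of *hugeObj...
    return [hugeObj] { /*...operate on *hugeObj...*/ };
}
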
Continue reading...

C++ Benchmark: std::vector vs. std::list -- Baptiste Wicht

[Ed.: Another reason why vector is the default (not the only, but the preferred) standard container.]

C++ Benchmark: std::vector vs. std::list

by Baptiste Wicht

In C++, two of the most used data structures are std::vector and std::list. In this article, their performance is compared in practice on several different workloads.

It is generally said that a list should be used when random inserts and removes will be performed (they are O(1) for a list versus O(n) for a vector). If we look only at complexity, search in both data structures should be roughly equivalent, being O(n) in each. When random insert/remove operations are performed on a vector, all the subsequent data needs to be moved, and so each element will be copied. That is why the size of the data type is an important factor when comparing these two data structures.

However, in practice there is a huge difference: the usage of the memory caches. All the data in a vector is contiguous, whereas std::list allocates memory separately for each element. How does that change the results in practice?

This article shows the difference in performance between these two data structures. 
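
A minimal sketch of that kind of measurement (mine, not the article's benchmark harness) might look like this: time repeated inserts in the middle of each container and compare.

#include <chrono>
#include <cstddef>
#include <iostream>
#include <iterator>
#include <list>
#include <vector>

// Minimal sketch (not the article's harness): time repeated inserts in the middle
// of a container. Note that for std::list even reaching the middle costs O(n).
template<typename Container>
long long middle_insert_us(std::size_t n)
{
    Container c;
    auto start = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < n; ++i)
    {
        auto pos = c.begin();
        std::advance(pos, c.size() / 2);
        c.insert(pos, typename Container::value_type());
    }
    auto stop = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::microseconds>(stop - start).count();
}

int main()
{
    std::cout << "vector<int>: " << middle_insert_us<std::vector<int>>(20000) << " us\n"
              << "list<int>:   " << middle_insert_us<std::list<int>>(20000) << " us\n";
}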

Continue reading...

On vector<bool> -- Howard Hinnant

On vector<bool>

by Howard Hinnant

 

vector<bool> has taken a lot of heat over the past decade, and not without reason. However I believe it is way past time to draw back some of the criticism and explore this area with a dispassionate scrutiny of detail.

There are really two issues here:

  1. Is the data structure of an array of bits a good data structure?
  2. Should the aforementioned data structure be named vector<bool>?

I have strong opinions on both of these questions. And to get this out of the way up front:

  1. Yes.
  2. No.

The array of bits data structure is a wonderful data structure. It is often both a space and speed optimization over the array of bools data structure if properly implemented. However it does not behave exactly as an array of bools, and so should not pretend to be one.

First, what's wrong with vector<bool>?

Because vector<bool> holds bits instead of bools, it can't return a bool& from its indexing operator or iterator dereference. This can play havoc with quite innocent-looking generic code. For example:

template <class T>
void
process(T& t)
{
    // do something with t
}

template <class T, class A>
void
test(std::vector<T, A>& v)
{
    for (auto& t : v)
        process(t);
}

The above code works for all T except bool. When instantiated with bool, you will receive a compile-time error along the lines of:

error: non-const lvalue reference to type 'std::__bit_reference<std::vector<bool, std::allocator<bool>>, true>' cannot bind to
      a temporary of type 'reference' (aka 'std::__bit_reference<std::vector<bool, std::allocator<bool>>, true>')
    for (auto& t : v)
               ^ ~
note: in instantiation of function template specialization 'test<bool, std::allocator<bool>>' requested here
    test(v);
    ^
vector:2124:14: note: selected 'begin' function
      with iterator type 'iterator' (aka '__bit_iterator<std::vector<bool, std::allocator<bool>>, false>')
    iterator begin()
             ^
1 error generated.

This is not a great error message. But it is about the best the compiler can do. The user is confronted with implementation details of vector, and the message in a nutshell says that vector is not working with a perfectly valid range-based for statement. The conclusion the client comes to here is that the implementation of vector is broken. And he would be at least partially correct.

But consider: if, instead of vector<bool> being a specialization, there existed a separate class template std::bit_vector<A = std::allocator<bool>> and the coder had written:

template <class A>
void
test(bit_vector<A>& v)
{
    for (auto& t : v)
        process(t);
}

Now one gets a similar error message:

error: non-const lvalue reference to type 'std::__bit_reference<std::bit_vector<std::allocator<bool>>, true>' cannot bind to
      a temporary of type 'reference' (aka 'std::__bit_reference<std::bit_vector<std::allocator<bool>>, true>')
    for (auto& t : v)
               ^ ~
note: in instantiation of function template specialization 'test<std::allocator<bool>>' requested here
    test(v);
    ^
bit_vector:2124:14: note: selected 'begin' function
      with iterator type 'iterator' (aka '__bit_iterator<std::bit_vector<std::allocator<bool>>, false>')
    iterator begin()
             ^
1 error generated.

And although the error message is similar, the coder is far more likely to see that he is using a dynamic array-of-bits data structure, and it is understandable that you can't form a reference to a bit.

I.e. names are important. And creating a specialization that has different behavior than the primary, when the primary template would have worked, is poor practice.

But what's right with vector<bool>?

For the rest of this article assume that we did indeed have a std::bit_vector<A = std::allocator<bool>> and that vector was not specialized on bool. bit_vector<> can be much more than simply a space optimization over vector<bool>; it can also be a very significant performance optimization. But to achieve this higher performance, your vendor has to adapt many of the std::algorithms to have specialized code (optimizations) when processing sequences defined by bit_vector<>::iterators.

find

For example consider this code:

template <class C>
typename C::iterator
test()
{
    C c(100000);
    c[95000] = true;
    return std::find(c.begin(), c.end(), true);
}

How long does std::find take in the above example for:

  1. A hypothetical non-specialized vector<bool>?
  2. A hypothetical bit_vector<> using an optimized find?
  3. A hypothetical bit_vector<> using the unoptimized generic find?

I'm testing on an Intel Core i5 in 64-bit mode. I am normalizing all answers such that the speed of case 1 is 1.0 (smaller is faster):

  1. 1.0
  2. 0.013
  3. 1.6

An array of bits can be a very fast data structure for a sequential search! The optimized find is inspecting 64 bits at a time. And due to the space optimization, it is much less likely to cause a cache miss. However if the implementation fails to do this, and naively checks one bit at a time, then this giant 75X optimization turns into a significant pessimization.
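
The core of such an optimized find might look roughly like this (a sketch over raw words, not the library's actual bit-iterator code; __builtin_ctzll is a GCC/Clang intrinsic):

#include <cstddef>
#include <cstdint>
#include <vector>

// Sketch only (not a library implementation): scan a packed bit array a word at a
// time; a single comparison rejects 64 bits, and the intrinsic locates the set bit.
std::size_t find_first_true(const std::vector<std::uint64_t>& words)
{
    for (std::size_t w = 0; w < words.size(); ++w)
        if (words[w] != 0)
            return w * 64 + static_cast<std::size_t>(__builtin_ctzll(words[w]));
    return words.size() * 64;   // "not found"
}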

count

std::count can be optimized much like std::find to process a word of bits at a time:

template <class C>
typename C::difference_type
test()
{
    C c(100000);
    c[95000] = true;
    return std::count(c.begin(), c.end(), true);
}

My results are:

  1. 1.0
  2. 0.044
  3. 1.02

Here the results are not quite as dramatic as for the std::find case. However, any time you can speed up your code by a factor of 20, you should do so!
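
Again as a rough sketch (not the library's code), a word-at-a-time count boils down to a population count per word; __builtin_popcountll is a GCC/Clang intrinsic:

#include <cstddef>
#include <cstdint>
#include <vector>

// Sketch only: count set bits 64 at a time using a per-word population count.
std::size_t count_true(const std::vector<std::uint64_t>& words)
{
    std::size_t n = 0;
    for (std::uint64_t w : words)
        n += static_cast<std::size_t>(__builtin_popcountll(w));
    return n;
}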

fill

std::fill is yet another example:

template <class C>
void
test()
{
    C c(100000);
    std::fill(c.begin(), c.end(), true);
}

My results are:

  1. 1.0
  2. 0.40
  3. 38.

The optimized fill is over twice as fast as the non-specialized vector<bool>. But if the vendor neglects to specialize fill for bit-iterators the results are disastrous! Naturally the results are identical for the closely related fill_n.

copy

std::copy is yet another example:

template <class C>
void
test()
{
    C c1(100000);
    C c2(100000);
    std::copy(c1.begin(), c1.end(), c2.begin());
}

My results are:

  1. 1.0
  2. 0.36
  3. 34.

The optimized copy approaches three times the speed of the non-specialized vector<bool>. But if the vendor neglects to specialize copy for bit-iterators, the results are not good. If the copy is not aligned on word boundaries (as in the above example), then the optimized copy slows down to the same speed as the copy for case 1. Results for copy_backward, move and move_backward are similar.

swap_ranges

std::swap_ranges is yet another example:

template <class C>
void
test()
{
    C c1(100000);
    C c2(100000);
    std::swap_ranges(c1.begin(), c1.end(), c2.begin());
}

My results are:

  1. 1.0
  2. 0.065
  3. 4.0

Here bit_vector<> is 15 times faster than an array of bools, and over 60 times as fast as working a bit at a time.

rotate

std::rotate is yet another example:

template <class C>
void
test()
{
    C c(100000);
    std::rotate(c.begin(), c.begin()+c.size()/4, c.end());
}

My results are:

  1. 1.0
  2. 0.59
  3. 17.9

Yet another example of good results with an optimized algorithm and very poor results without this extra attention.

equal

std::equal is yet another example:

template <class C>
bool
test()
{
    C c1(100000);
    C c2(100000);
    return std::equal(c1.begin(), c1.end(), c2.begin());
}

My results are:

  1. 1.0
  2. 0.016
  3. 3.33

If you're going to compare a bunch of bools, it is much, much faster to pack them into bits and compare a word of bits at a time, rather than compare individual bools, or individual bits!

Summary

The dynamic array of bits is a very good data structure if attention is paid to optimizing algorithms that can process up to a word of bits at a time. In this case it becomes not only a space optimization but a very significant speed optimization. If such attention to detail is not given, then the space optimization leads to a very significant speed pessimization.

But it is a shame that the C++ committee gave this excellent data structure the name vector<bool>, and that it gives neither guidance nor encouragement on the critical generic algorithms that need to be optimized for this data structure. Consequently, few std::lib implementations go to this trouble.

Rule of Zero -- R. Martinho Fernandes

Long-time C++98 developers know about the Rule of Three. But with C++11's addition of move semantics, we need to be talking instead about the Rule of Five.

This article argues for limiting the use of the Rule of Five, and instead mostly aiming for a "Rule of Zero" -- why you don't have to, and don't want to, write logic for copying, moving, and destruction all the time in C++, and why you should encapsulate that logic as much as possible in the remaining situations when you actually need to write it. (And, in passing, he shows once again why C++11 developers seem to be drawn to reusing unique_ptrs with custom deleters like moths to flames (er, bees to flowers). See also recent examples like this one on std-proposals.)

Rule of Zero

In C++ the destructors of objects with automatic storage duration are invoked whenever their scope ends. This property is often used to handle cleanup of resources automatically in a pattern known by the meaningless name RAII.

An essential part of RAII is the concept of resource ownership: the object responsible for cleaning up a resource in its destructor owns that resource. Proper ownership policies are the secret to avoid resource leaks...
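
As a minimal illustration of the idea (my example, not the article's): a class whose members already manage their resources needs no user-written destructor, copy, or move operations.

#include <memory>
#include <string>
#include <utility>
#include <vector>

// Minimal Rule of Zero illustration (not from the article): every resource is owned
// by a member that already knows how to copy/move/destroy itself, so this class
// declares none of the special member functions.
class Document
{
    std::string              title_;
    std::vector<std::string> lines_;
    std::unique_ptr<int>     cached_word_count_;  // makes Document move-only, which is fine here
public:
    explicit Document(std::string title) : title_(std::move(title)) {}
};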

Continue reading...

scope(exit) in C++11

A C++ note in passing from an indie game developer. He notes that Boost has a similar feature already (see the BOOST_SCOPE_EXIT macros and discussion of alternatives) but likes that you can write the basic feature yourself in under 10 lines, have it work like a statement instead of a BEGIN/END macro pair, and use it with other C++11 features like lambdas.

scope(exit) in C++11

I really like the scope(exit) statements in D. They allow you to keep the cleanup code next to the initialization code making it much easier to maintain and understand. I wish we had them in C++.

Turns out this can be implemented easily using some of the new C++ features...
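
The gist of such an implementation (my sketch, not the post's exact code) is a small RAII object that runs a lambda from its destructor:

#include <cstdio>
#include <functional>
#include <utility>

// Sketch (not the post's exact code): run an arbitrary callable when the scope ends.
class ScopeExit
{
    std::function<void()> f_;
public:
    explicit ScopeExit(std::function<void()> f) : f_(std::move(f)) {}
    ~ScopeExit() { if (f_) f_(); }
    ScopeExit(const ScopeExit&) = delete;
    ScopeExit& operator=(const ScopeExit&) = delete;
};

void example()
{
    std::FILE* fp = std::fopen("data.txt", "r");
    ScopeExit close_fp([&] { if (fp) std::fclose(fp); });  // cleanup lives next to the setup
    // ...use fp...
}   // close_fp's destructor closes the file here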

Continue reading...

Storage Layout of Polymorphic Objects -- Dan Saks

It's great to see Dan writing about C++ again. Here's his latest today, in Dr. Dobb's:

Storage Layout of Polymorphic Objects

By Dan Saks

Adding at least one virtual function to a class alters the storage layout for all objects of that class type. In this article, I begin to explain how C++ compilers typically implement virtual functions by explaining how the use of virtual functions affects the storage layout for objects.
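
A tiny way to see the effect he describes (my illustration, not from the article; exact sizes are implementation-defined):

#include <iostream>

struct Plain    { int x; };                         // no virtual functions
struct WithVptr { int x; virtual ~WithVptr() {} };  // at least one virtual function

int main()
{
    // Typically Plain is 4 bytes, while WithVptr also stores a vptr (plus padding),
    // e.g. 16 bytes on a common 64-bit implementation. The exact layout is implementation-defined.
    std::cout << sizeof(Plain) << ' ' << sizeof(WithVptr) << '\n';
}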

Continue reading...