Articles & Books

Exploring C++20 - Designated initialisers--Tobias Widlund

Will be very useful.

Exploring C++20 - Designated initialisers

by Tobias Widlund

From the article:

Compared to many other C++20 additions this one is quite small, only needing a single box of explanation on cppreference. Despite this, I quite like the feature as it helps code that uses aggregate initialisation to be more readable and less error prone - both are well needed in that area!
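
A minimal sketch of the feature (the aggregate below is made up for illustration, not taken from the article): members are named at the point of initialisation and may be skipped, as long as the ones you do name appear in declaration order.

#include <iostream>
#include <string>

struct Connection   // hypothetical aggregate, purely for illustration
{
    std::string host = "localhost";
    int         port = 80;
    bool        use_tls = false;
};

int main()
{
    // C++20 designated initialisers: name the members you set, in declaration
    // order; skipped members (port here) keep their default member initialisers.
    Connection c{ .host = "example.com", .use_tls = true };
    std::cout << c.host << ':' << c.port << (c.use_tls ? " (tls)" : "") << '\n';
}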

Quick Q: Type inference of 'auto' when using range-for loop in multidimensional arrays

Quick A: a pointer has no size information.

Recently on SO:

Type inference of 'auto' when using range-for loop in multidimensional arrays

The type of row is deduced to be int*. That means, just like any other int*, the compiler doesn't know how big the array it points to is, or even that it's a pointer to the first element of an array at all. All of that information is lost when an array decays into a pointer to its first element.

If instead you use something like

for (auto& row : ia) //<-- NOTE: row is now a reference
    for (int* j = std::begin(row); j != std::end(row); ++j)
        std::cout << *j << '\n';

then the type of row will be deduced to be int (&)[4]: a reference to an array of 4 ints. The length information is retained, so std::begin and std::end have the information they need.

PS: Just as a note: range-for works by using std::begin and std::end internally, so the above can be a bit more concisely written as

for (auto& row : ia)
    for (auto j : row)
        std::cout << j << '\n';
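
For completeness, here is a self-contained version of the same idea (the definition of ia is an assumption, a plain 3×4 int array as in the question):

#include <iostream>

int main()
{
    int ia[3][4] = { {0, 1, 2, 3}, {4, 5, 6, 7}, {8, 9, 10, 11} };

    // row deduces to int(&)[4], so the array extent survives and the inner
    // range-for (or std::begin/std::end) still knows where each row ends.
    for (auto& row : ia)
        for (auto j : row)
            std::cout << j << ' ';
    std::cout << '\n';
}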

Threads are not the answer -- Lucian Radu Teodorescu

You have a performance problem, you want to employ parallelism, you start adding threads... pause a moment and reflect.

Threads are not the answer

by Lucian Radu Teodorescu

From the article:

Whatever the problem is, threads are not the answer. At least, not for most software engineers.

In this post I will attack the traditional way in which we write concurrent applications (mostly in C++, but also in other languages). It’s not that concurrency is bad, it’s just that we are doing it wrong.

The abstractions that we apply in industry to make our programs concurrent-enabled are wrong. Moreover, it seems that the way this subject is taught in universities is also wrong.

Look ma, no locks -- Lucian Radu Teodorescu

How can you solve the core problem in concurrent programming? How can you avoid using locks, but still have safety for your resources?

Look ma, no locks

by Lucian Radu Teodorescu

From the article:

We previously argued that threads are not the answer to parallelism problems, and also that we should avoid locks as much as possible. But how can we have a world in which threads do not block each other?

We want to explore here a systematic way for replacing locks (mutexes, read-write mutexes, semaphores) in our systems with tasks. We argue that there is a general schema that allows us to move from a world full of locks to a world without locks, in which the simple planning and execution of tasks makes things simpler and more efficient.
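
As a rough illustration of that schema (a toy sketch, not the task system the article builds on): instead of guarding shared state with a mutex, every piece of work that touches the state is enqueued onto a single serialized worker, so the operations can never overlap. The queue itself still needs internal synchronisation, but the application state no longer does.

#include <condition_variable>
#include <functional>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

// Toy "serializer": tasks enqueued here run one at a time, in order, on a
// single worker thread, so the state they touch needs no lock of its own.
class serializer
{
public:
    serializer() : worker_([this] { run(); }) {}

    ~serializer()
    {
        {
            std::lock_guard<std::mutex> lk(m_);
            done_ = true;
        }
        cv_.notify_one();
        worker_.join();           // drains remaining tasks, then stops
    }

    void enqueue(std::function<void()> task)
    {
        {
            std::lock_guard<std::mutex> lk(m_);
            tasks_.push(std::move(task));
        }
        cv_.notify_one();
    }

private:
    void run()
    {
        for (;;)
        {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lk(m_);
                cv_.wait(lk, [this] { return done_ || !tasks_.empty(); });
                if (tasks_.empty())
                    return;       // done_ set and queue drained
                task = std::move(tasks_.front());
                tasks_.pop();
            }
            task();               // runs outside the queue lock
        }
    }

    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::function<void()>> tasks_;
    bool done_ = false;
    std::thread worker_;
};

int main()
{
    int counter = 0;              // shared state, never locked by user code
    {
        serializer s;
        for (int i = 0; i < 1000; ++i)
            s.enqueue([&counter] { ++counter; });
    }                             // destructor drains the queue and joins
    std::cout << counter << '\n'; // prints 1000
}

Real task systems add work stealing, continuations and more; the point of the toy is only the shape of the transformation the article argues for: the lock disappears from the application code and scheduling of tasks takes its place.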

How to Design Function Parameters That Make Interfaces Easier to Use (2/3)--Jonathan Boccara

Agree with the logic?

How to Design Function Parameters That Make Interfaces Easier to Use (2/3)

by Jonathan Boccara

From the article:

Let’s continue exploring how to design function parameters that help make both interfaces and their calling code more expressive.

If you missed the previous episode on this topic, here is what this series of articles contains:

  • Part 1: interface-level parameters, one-parameter functions, const parameters,
  • Part 2: calling contexts, strong types, parameters order,
  • Part 3: packing parameters, processes, levels of abstraction.
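
To make the Part 2 topics concrete, here is a minimal hand-rolled sketch of the strong-type idea (the Width/Height wrappers are made up for illustration; the article develops the technique more fully):

#include <iostream>

// Hypothetical strong types: thin wrappers so two parameters with the same
// underlying type cannot be swapped silently at the call site.
struct Width  { double value; };
struct Height { double value; };

double area(Width w, Height h) { return w.value * h.value; }

int main()
{
    // With raw doubles, area(3.0, 2.0) compiles either way round; with strong
    // types the intent is visible and the parameter order is checked.
    std::cout << area(Width{3.0}, Height{2.0}) << '\n';
}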

How to Boost Performance with Intel Parallel STL and C++17 Parallel Algorithms--Bartlomiej Filipek

Another one.

How to Boost Performance with Intel Parallel STL and C++17 Parallel Algorithms

by Bartlomiej Filipek

From the article:

C++17 brings us parallel algorithms. However, there are not many implementations where you can use the new features. The situation is getting better and better, as we have the MSVC implementation and now Intel’s version will soon be available as the base for libstdc++ for GCC.

Since the library is important, I’ve decided to see how to use it and what it offers...
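
For a taste of what the library enables, a short sketch of the C++17 interface (this needs a standard library that actually ships the parallel algorithms, e.g. MSVC, or GCC's libstdc++ built on the Intel/TBB backend the article discusses):

#include <algorithm>
#include <execution>
#include <iostream>
#include <numeric>
#include <random>
#include <vector>

int main()
{
    std::vector<double> v(1'000'000);
    std::mt19937 gen{42};
    std::uniform_real_distribution<double> dist(0.0, 1.0);
    std::generate(v.begin(), v.end(), [&] { return dist(gen); });

    // C++17 parallel algorithms: pass an execution policy as the first argument.
    std::sort(std::execution::par, v.begin(), v.end());

    // std::reduce is the policy-friendly counterpart of std::accumulate.
    double sum = std::reduce(std::execution::par, v.begin(), v.end(), 0.0);
    std::cout << "sum: " << sum << '\n';
}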

Python-Like enumerate() In C++17--Nathan Reed

Handy little piece of code.

Python-Like enumerate() In C++17

by Nathan Reed

From the article:

Python has a handy built-in function called enumerate(), which lets you iterate over an object (e.g. a list) and have access to both the index and the item in each iteration. You use it in a for loop, like this:

for i, thing in enumerate(listOfThings):
    print("The %dth thing is %s" % (i, thing))

Iterating over listOfThings directly would give you thing, but not i, and there are plenty of situations where you’d want both (looking up the index in another data structure, progress reports, error messages, generating output filenames, etc).

C++ range-based for loops work a lot like Python’s for loops. Can we implement an analogue of Python’s enumerate() in C++? We can!
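
A sketch along those lines (close in spirit to, but not necessarily identical with, the code in the article): a small wrapper whose iterator yields an index/element pair that a C++17 structured binding can unpack.

#include <cstddef>
#include <iostream>
#include <iterator>
#include <string>
#include <tuple>
#include <vector>

// Wrap any range in an adapter whose iterator produces {index, element}.
template <typename T,
          typename TIter = decltype(std::begin(std::declval<T>()))>
auto enumerate(T&& iterable)
{
    struct iterator
    {
        std::size_t i;
        TIter iter;
        bool operator!=(const iterator& other) const { return iter != other.iter; }
        void operator++() { ++i; ++iter; }
        auto operator*() const { return std::tie(i, *iter); }
    };
    struct wrapper
    {
        T iterable;
        auto begin() { return iterator{0, std::begin(iterable)}; }
        auto end()   { return iterator{0, std::end(iterable)}; }
    };
    return wrapper{std::forward<T>(iterable)};
}

int main()
{
    std::vector<std::string> things{"apple", "banana", "cherry"};
    for (auto [i, thing] : enumerate(things))
        std::cout << "The " << i << "th thing is " << thing << '\n';
}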

Quick Q: Regarding shared_ptr reference count block

Quick A: it is an implementation detail of the standard library.

Recently on SO:

Regarding shared_ptr reference count block

(1) Regarding size: How can I programmatically find the exact size of the control block for a std::shared_ptr?

There is no way. It's not directly accessible.
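
At most you can observe it indirectly and non-portably. A hedged sketch (the probing allocator below is made up for illustration): std::allocate_shared performs a single allocation holding both the object and the control block, so a counting allocator shows the combined size. That bounds the control block from above on one implementation, but it is not an exact or portable answer.

#include <cstddef>
#include <iostream>
#include <memory>
#include <new>

// Hypothetical probing allocator: records the size of the single allocation
// that std::allocate_shared performs (object + control block together).
template <typename T>
struct probe_alloc {
    using value_type = T;
    std::size_t* observed;                       // total bytes requested

    explicit probe_alloc(std::size_t* out) : observed(out) {}
    template <typename U>
    probe_alloc(const probe_alloc<U>& other) : observed(other.observed) {}

    T* allocate(std::size_t n) {
        *observed = n * sizeof(T);
        return static_cast<T*>(::operator new(n * sizeof(T)));
    }
    void deallocate(T* p, std::size_t) { ::operator delete(p); }

    template <typename U>
    bool operator==(const probe_alloc<U>&) const { return true; }
    template <typename U>
    bool operator!=(const probe_alloc<U>&) const { return false; }
};

int main() {
    std::size_t bytes = 0;
    {
        auto sp = std::allocate_shared<long>(probe_alloc<long>(&bytes), 42L);
    }
    // The difference is roughly the control block (plus any padding), and it
    // will vary between standard library implementations.
    std::cout << "combined allocation: " << bytes << " bytes, "
              << "object alone: " << sizeof(long) << " bytes\n";
}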

(2) Regarding logic: Additionally, the boost::shared_ptr documentation mentions that it is completely lock-free with respect to changes in the control block ("Starting with Boost release 1.33.0, shared_ptr uses a lock-free implementation on most common platforms."). I don't think std::shared_ptr follows the same approach - is this planned for any future C++ version? Doesn't this also mean that boost::shared_ptr is a better idea for multithreaded cases?

Absolutely not. Lock-free implementations are not always better than implementations that use locks. Having an additional constraint, at best, doesn't make the implementation worse but it cannot possibly make the implementation better.

Consider two equally competent programmers each doing their best to implement shared_ptr. One must produce a lock-free implementation. The other is completely free to use their best judgment. There is simply no way the one that must produce a lock-free implementation can produce a better implementation all other things being equal. At best, a lock-free implementation is best and they'll both produce one. At worse, on this platform a lock-free implementation has huge disadvantages and one implementer must use one. Yuck.