December 2014

Common reasons of using namespaces in C++ projects -- CoderGears Team

The CoderGears team draws some conclusions on how namespaces are used in C++ projects:

Common reasons of using namespaces in C++ projects

by CoderGears Team

From the article:

Namespaces in C++ are most often used to avoid naming collisions. Although namespaces are used extensively in recent C++ code, most older code does not use this facility. After exploring the source code of many C++ projects, here are some common reasons for using namespaces in these projects...
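
As a minimal illustration of the collision-avoidance motivation (not taken from the article; the library names are purely hypothetical), two libraries can each define a log function without clashing when each wraps it in its own namespace:

#include <iostream>

// Hypothetical libraries, each with its own log() -- names are illustrative only.
namespace audio {
    void log(const char* msg) { std::cout << "[audio] " << msg << '\n'; }
}
namespace network {
    void log(const char* msg) { std::cout << "[network] " << msg << '\n'; }
}

int main() {
    audio::log("buffer underrun");   // fully qualified calls disambiguate
    network::log("socket closed");
}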

Quick Q: Are std::threads run in the order they're created?--StackOverflow

Quick A: There is no guarantee that threads will run in a given order.

Recently on SO:

Why don't these threads run in order?

When I run this code:

#include <iostream>
#include <thread>
#include <mutex>
#include <vector>     // std::vector
#include <algorithm>  // std::for_each

std::mutex m;

int main()
{
    std::vector<std::thread> workers;
    for (int i = 0; i < 10; ++i)
    {
        workers.emplace_back([i]
        {
            std::lock_guard<std::mutex> lock(m);
            std::cout << "Hi from thread " << i << std::endl;
        });
    }

    std::for_each(workers.begin(), workers.end(), [] (std::thread& t)
    { t.join(); });
}

I get the output:

Hi from thread 7
Hi from thread 1
Hi from thread 4
Hi from thread 6
Hi from thread 0
Hi from thread 5
Hi from thread 2
Hi from thread 3
Hi from thread 9
Hi from thread 8

Even though I used a mutex so that only one thread accesses std::cout at a time, why isn't the output in order?
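
As the Quick A says, the scheduler gives no ordering guarantee. If ordered output is actually required, the usual fix is to impose the order yourself; one simple (if fully serializing) sketch joins each thread before launching the next:

#include <iostream>
#include <thread>

int main()
{
    // Joining each thread before creating the next imposes the order,
    // at the cost of giving up all parallelism.
    for (int i = 0; i < 10; ++i)
    {
        std::thread worker([i] { std::cout << "Hi from thread " << i << std::endl; });
        worker.join();
    }
}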

C++ STL for Embedded Developers--John Hinke

E.S.R. Labs presents its STL library adapted to embedded environments and discusses some of the choices made for it:

C++ STL for Embedded Developers

by John Hinke

From the article:

C++ embedded programming is very difficult. There are limitations that are not always present in traditional programming environments, such as limited memory, slower processors, and older C++ compilers.

We have developed a set of best-practice processes and frameworks to support writing high-quality embedded applications...
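
The article discusses the library E.S.R. Labs built; as a rough, hypothetical sketch of one common embedded adaptation (not the library's actual code), containers are often given a fixed compile-time capacity so they never touch the heap:

#include <array>
#include <cstddef>

// Hypothetical fixed-capacity vector: storage is a member array, so no
// dynamic allocation ever occurs -- a common pattern in embedded C++.
template <typename T, std::size_t Capacity>
class static_vector {
public:
    bool push_back(const T& value) {
        if (size_ == Capacity) return false;  // report failure instead of growing
        storage_[size_++] = value;
        return true;
    }
    std::size_t size() const { return size_; }
    T&       operator[](std::size_t i)       { return storage_[i]; }
    const T& operator[](std::size_t i) const { return storage_[i]; }
private:
    std::array<T, Capacity> storage_{};
    std::size_t size_ = 0;
};

int main() {
    static_vector<int, 8> v;
    v.push_back(42);  // succeeds until the fixed capacity is reached
}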

Valgrind 3.10.1 released

Valgrind, a useful tool for any C++ developer, released a new version today:

Valgrind 3.10.1 released

From the announcement:

3.10.1 is a bug fix release. It fixes various bugs reported in 3.10.0 and backports fixes for all reported missing AArch64 ARMv8 instructions and syscalls from the trunk. If you package or deliver 3.10.0 for others to use, you might want to consider upgrading to 3.10.1 instead.

A gotcha with Boost.Optional --Andrzej Krzemieński

Have you used the Boost.Optional library? There is one thing you might need to keep an eye on, as Andrzej explains.

A gotcha with Optional

by Andrzej Krzemieński

From the article:

This post is about one gotcha in the Boost.Optional library. When starting to use it, you might get the impression that when you try to put optional<T> where T is expected, you will get a compile-time error. In most cases it is exactly so, but sometimes you may get really surprised...
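
Without giving away the post's exact example, here is one behaviour of the library that routinely surprises people (it may or may not be the gotcha Andrzej has in mind): Boost.Optional provides mixed comparison operators, so comparing an optional<T> with a plain T compiles, and an empty optional simply compares unequal to every value:

#include <boost/optional.hpp>
#include <iostream>

int main() {
    boost::optional<int> count;          // empty: no value has been set

    // This compiles: Boost.Optional supplies mixed comparisons with T.
    // An empty optional compares unequal to every int, so this prints "no".
    if (count == 0)
        std::cout << "yes\n";
    else
        std::cout << "no\n";
}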

Quick Q: Is an acquire_release fence enough for Dekker synchronization?--StackOverflow

Quick A: No, because acq_rel doesn't prevent reordering a store to X followed by a load from Y... seq_cst does.

Recently on SO:

Why isn't a C++11 acquire_release fence enough for Dekker synchronization?

The failure of Dekker-style synchronization is typically explained with reordering of instructions. I.e., if we write

#include <atomic>

std::atomic<int> X;
std::atomic<int> Y;
int r1, r2;
static void t1() {
    X.store(1, std::memory_order_relaxed);
    r1 = Y.load(std::memory_order_relaxed);
}
static void t2() {
    Y.store(1, std::memory_order_relaxed);
    r2 = X.load(std::memory_order_relaxed);
}

Then the loads can be reordered with the stores, leading to r1==r2==0.

I was expecting an acquire_release fence to prevent this kind of reordering:

static void t1() {
    X.store(1, std::memory_order_relaxed);
    std::atomic_thread_fence(std::memory_order_acq_rel);
    r1 = Y.load(std::memory_order_relaxed);
}
static void t2() {
    Y.store(1, std::memory_order_relaxed);
    std::atomic_thread_fence(std::memory_order_acq_rel);
    r2 = X.load(std::memory_order_relaxed);
}

The load cannot be moved above the fence and the store cannot be moved below the fence, and so the bad result should be prevented.

However, experiments show r1==r2==0 can still occur. Is there a reordering-based explanation for this? Where's the flaw in my reasoning?
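
Following the Quick A above, the fix is to use sequentially consistent fences, which do order the earlier store against the later load. A minimal sketch of the corrected functions (X, Y, r1 and r2 declared as in the question):

static void t1() {
    X.store(1, std::memory_order_relaxed);
    std::atomic_thread_fence(std::memory_order_seq_cst);  // store/load reordering now forbidden
    r1 = Y.load(std::memory_order_relaxed);
}
static void t2() {
    Y.store(1, std::memory_order_relaxed);
    std::atomic_thread_fence(std::memory_order_seq_cst);
    r2 = X.load(std::memory_order_relaxed);
}

With both fences sequentially consistent, the outcome r1 == r2 == 0 is no longer allowed.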

Sneak Preview of C++17 -- Gabriel Ha

A preview of what is going into C++17 after the recent Standards Committee meeting.

GoingNative 32: Sneak Preview of C++17

By Gabriel Ha

From the article: 

Join us with ISO Committee member (and Microsoft as well, of course =P) Gabriel Dos Reis, who graciously took the time to give us the inside scoop of some things that made it into C++17, as well as things that got taken out. All this is fresh off the press of the most recent C++ Standards Meeting...

C++ Has Become More Pythonic -- Jeff Preshing

A nice article pointing out similarities between modern C++ and Python:

C++ Has Become More Pythonic

by Jeff Preshing

From the article:

C++ has changed a lot in recent years. The last two revisions, C++11 and C++14, introduce so many new features that, in the words of Bjarne Stroustrup, “It feels like a new language.”

It’s true. Modern C++ lends itself to a whole new style of programming – and you can’t help but feel Python’s influence on this new style. Range-based for loops, type deduction, vector and map initializers, lambda expressions. The more you explore modern C++, the more you find Python’s fingerprints all over it.

Was Python a direct influence on modern C++? Or did Python simply adopt a few common idioms before C++ got around to it? You be the judge...
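
A small sketch (not from the article) of the features the excerpt lists, all of which read much like their Python counterparts:

#include <iostream>
#include <map>
#include <string>
#include <vector>

int main() {
    // Vector and map initializers, much like Python literals.
    std::vector<int> primes = {2, 3, 5, 7};
    std::map<std::string, int> ages = {{"alice", 30}, {"bob", 25}};

    // Range-based for with type deduction, like Python's `for x in xs`.
    for (auto p : primes)
        std::cout << p << ' ';
    std::cout << '\n';

    // A lambda expression, similar in spirit to Python's lambda.
    auto square = [](int x) { return x * x; };
    std::cout << square(7) << '\n';

    for (const auto& kv : ages)
        std::cout << kv.first << " is " << kv.second << '\n';
}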

Efficiency with Algorithms, Performance with Data Structures -- Chandler Carruth

At the recent CppCon 2014, Chandler Carruth gave a great talk on using Modern C++ for writing high-performance applications.

Efficiency with Algorithms, Performance with Data Structures

by Chandler Carruth

From the video introduction:

C++ programmers throughout the industry have an insatiable desire for writing high performance code. Unfortunately, even with C++, this can be really challenging. Over the past twenty years processors, memory, software libraries, and even compilers have radically changed what makes C++ code fast. Even measuring the performance of your code can be a daunting task. This talk will dig into how modern processors work, what makes them fast, and how to exploit them effectively with modern C++ code.
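
The title hints at the data-structure side of the argument. As a sketch not taken from the talk itself, one well-known cache-locality point is that contiguous containers such as std::vector are usually far friendlier to modern processors than node-based ones such as std::list:

#include <list>
#include <numeric>
#include <vector>

// Traversing contiguous storage (std::vector) streams through memory and
// makes good use of caches and prefetching; traversing node-based storage
// (std::list) chases pointers and tends to miss the cache far more often.
long long sum_vector(const std::vector<int>& v) {
    return std::accumulate(v.begin(), v.end(), 0LL);
}

long long sum_list(const std::list<int>& l) {
    return std::accumulate(l.begin(), l.end(), 0LL);
}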