July 2021

Extended Aggregate Initialisation in C++17--Jonathan Boccara

Were you aware of the change?

Extended Aggregate Initialisation in C++17

by Jonathan Boccara

From the article:

After a compiler upgrade to C++17, a certain piece of code that looked reasonable stopped compiling.

This code doesn’t use any deprecated features such as std::auto_ptr or std::bind1st that were removed in C++17, but it stopped compiling nonetheless.

Understanding this compile error will let us better understand a new feature of C++17: extended aggregate initialisation...
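
Under extended aggregate initialisation, a class with a public base class and no user-provided constructors can now be an aggregate. The following is a minimal sketch of the kind of code that can stop compiling after the upgrade (not the article's exact example):

    struct Base {
    protected:
        Base() = default;   // only derived classes may construct a Base
    };

    struct Derived : Base {};

    int main() {
        // C++14: Derived is not an aggregate (it has a base class), so Derived{}
        //        value-initialises via the implicit default constructor, which is
        //        allowed to call Base's protected default constructor.
        // C++17: Derived is an aggregate, so Derived{} performs aggregate
        //        initialisation and tries to initialise the Base subobject directly
        //        from {} -- here in main, where Base() is inaccessible: compile error.
        Derived d{};
        (void)d;
    }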

C++20 three way comparison operator — ensure backward compatibility: Part 8--Gajendra Gulgulia

The series continues.

C++20 three way comparison operator — ensure backward compatibility: Part 8

by Gajendra Gulgulia

From the article:

In parts one through seven of the tutorial series, we looked at how to use C++20’s three-way comparison operator. In this part of the tutorial series, we’ll look at the compatibility issues that arise when objects of classes written before C++20 are used with the three-way comparison operator, and how to resolve them...
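
As a minimal sketch of the kind of compatibility problem involved (the class name and the non-member approach here are illustrative, not necessarily the article's): a pre-C++20 class that only defines == and < does not support <=> out of the box, but an ordering can be synthesised non-intrusively from the operators it already has:

    #include <compare>

    // A hypothetical class written before C++20 that only provides == and <.
    struct Legacy {
        int value;
        bool operator==(const Legacy& other) const { return value == other.value; }
        bool operator<(const Legacy& other) const { return value < other.value; }
    };

    // Legacy{1} <=> Legacy{2} does not compile as-is. One non-intrusive fix is a
    // non-member operator<=> that builds the ordering from the existing operators.
    std::strong_ordering operator<=>(const Legacy& a, const Legacy& b) {
        if (a == b) return std::strong_ordering::equal;
        return a < b ? std::strong_ordering::less : std::strong_ordering::greater;
    }

    int main() {
        Legacy x{1}, y{2};
        return (x <=> y) < 0 ? 0 : 1;   // returns 0: x orders before y
    }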

CppCon 2019 Reducing Template Compilation Overhead, Using C++11, 14, 17, and 20--Jorg Brown

Registration is now open for CppCon 2021, which starts on October 24 and will be held both in person and online. To whet your appetite for this year’s conference, we’re posting videos of some of the top-rated talks from our most recent in-person conference in 2019 and our online conference in 2020. Here’s another CppCon talk video we hope you will enjoy – and why not register today for CppCon 2021 to attend in person, online, or both!

Reducing Template Compilation Overhead, Using C++11, 14, 17, and 20

by Jorg Brown

Summary of the talk:

At their best, new C++ standards offer simpler, clearer, and faster-to-compile ways to write your code. But many information sources, for example Andrei Alexandrescu’s Modern C++ Design, haven’t been updated.

More importantly, template metaprogramming is not something we generally seek to optimize because a good compiler handles it well, and problems generally only show up in the form of long compile times.

In this presentation, I'll describe techniques you can use to simplify, clarify, and improve the compile speed of your code (a minimal sketch of a few of these follows the list below), including:
* Using C++17 "if constexpr"
* Using C++11 variadic function / template arguments (often without needing recursion!)
* Using decltype on auto-return functions in order to compute types in a more readable way.
* Using C++20 constraints rather than std::enable_if
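
The following sketch (illustrative, not from the talk) shows three of the techniques listed above: a fold expression instead of a recursive variadic template, if constexpr instead of SFINAE overloads, and a C++20 requires-clause instead of std::enable_if:

    #include <type_traits>

    // C++17 fold expression: a variadic sum with no recursive instantiations
    // and no separate base-case overload.
    template <typename... Ts>
    constexpr auto sum(Ts... values) {
        return (values + ... + 0);
    }

    // C++17 if constexpr: one function body instead of a pair of SFINAE overloads.
    template <typename T>
    auto doubled_if_integral(T value) {
        if constexpr (std::is_integral_v<T>) {
            return value * 2;
        } else {
            return value;
        }
    }

    // C++20 constraint instead of std::enable_if.
    template <typename T>
        requires std::is_floating_point_v<T>
    T half(T value) {
        return value / 2;
    }

    static_assert(sum(1, 2, 3) == 6);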

Why do smart pointers null out the wrapped pointer before destroying it?--Raymond Chen

Ever thought about it?

Why do smart pointers null out the wrapped pointer before destroying it?

by Raymond Chen

From the article:

When you null out a smart pointer, the smart pointer class nulls out the wrapped pointer before releasing it, rather than releasing the member and then setting it to null. Why does the old value get detached from the smart pointer before being released? Why not release it, and then set it to null?
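
A minimal sketch of the two orders being compared (the class and member names are illustrative, not from the article):

    // Reset on a simplified smart pointer: detach first, then release.
    template <typename T>
    struct SmartPtr {
        T* ptr_ = nullptr;

        void reset() {
            // Detach the old pointer first, so any code that re-enters this
            // object while the delete runs observes a null pointer.
            T* old = ptr_;
            ptr_ = nullptr;
            delete old;

            // The alternative order -- "delete ptr_; ptr_ = nullptr;" -- leaves a
            // dangling pointer visible to re-entrant code during destruction.
        }

        ~SmartPtr() { reset(); }
    };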

CppCon 2020 2020: The Year of Sanitizers?--Victor Ciura

Registration is now open for CppCon 2021, which starts on October 24 and will be held both in person and online. To whet your appetite for this year’s conference, we’re posting videos of some of the top-rated talks from our most recent in-person conference in 2019 and our online conference in 2020. Here’s another CppCon talk video we hope you will enjoy – and why not register today for CppCon 2021 to attend in person, online, or both!

2020: The Year of Sanitizers?

by Victor Ciura

Summary of the talk:

Clang-tidy is the go-to assistant for most C++ programmers looking to improve their code, whether to modernize it or to find hidden bugs with its built-in checks. Static analysis is great, but you also get tons of false positives.

Now that you’re hooked on smart tools, you have to try dynamic/runtime analysis. After years of improvements and successes for Clang and GCC users, LLVM AddressSanitizer (ASan) is finally available on Windows, in the latest Visual Studio 2019 versions. Let's find out what this experience is like for MSVC projects.

We’ll see how AddressSanitizer works behind the scenes (compiler and ASan runtime) and analyze the instrumentation impact, both in perf and memory footprint. We’ll examine a handful of examples diagnosed by ASan and see how easy it is to read memory snapshots in Visual Studio, to pinpoint the failure.

Want to unleash the memory vulnerability beast? Put your unit tests on steroids by spinning up fuzzing jobs with ASan in Azure, leveraging the power of the Cloud from the comfort of your Visual Studio IDE.
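
As a minimal, self-contained sketch of the kind of defect ASan flags (the file name and build lines are illustrative; /fsanitize=address is the MSVC switch, -fsanitize=address the Clang/GCC one):

    // overflow.cpp -- build with AddressSanitizer enabled:
    //   MSVC:       cl /fsanitize=address /Zi overflow.cpp
    //   Clang/GCC:  clang++ -fsanitize=address -g overflow.cpp
    int main() {
        int* data = new int[8];
        int value = data[8];   // one element past the allocation:
                               // ASan reports a heap-buffer-overflow at this access
        delete[] data;
        return value;
    }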

HPX V1.7.0 released -- STE||AR Group

The STE||AR Group has released V1.7.0 of HPX -- A C++ Standard library for parallelism and concurrency.

HPX V1.7.0 Released

The newest version of HPX (V1.7.0) is now available for download! This release continues the focus on C++20 conformance with multiple new algorithms adapted to be C++20 conformant and becoming customization point objects (CPOs). We’ve also added experimental support for using GCC’s SIMD data types with our parallel algorithms. Finally, we've implemented a large subset of sender/receiver functionality based on current proposals (mainly P0443, P1897, and P2300). HPX futures fulfill the sender concept, and senders can explicitly be turned into futures, which means that codebases can gradually adopt senders where appropriate. The full list of improvements, fixes, and breaking changes can be found in the release notes.

    HPX is a general purpose parallel C++ runtime system for applications of any scale. It implements all of the related facilities as defined by the C++ Standard. As of this writing, HPX provides one of the only widely available open-source implementations of the new C++17 parallel algorithms. Additionally, HPX implements functionalities proposed as part of the ongoing C++ standardization process, such as large parts of the features related to parallelism and concurrency as specified by the C++20 Standard, the C++ Concurrency TS, the Parallelism TS V2, data-parallel algorithms, executors, senders/receivers, and many more. It also extends the existing C++ Standard APIs to the distributed case (e.g. compute clusters) and to heterogeneous systems (e.g. GPUs).

    HPX seamlessly enables a new Asynchronous C++ Standard Programming Model that tends to improve the parallel efficiency of our applications and helps reduce the complexities usually associated with parallelism and concurrency.

 

C++20 three way comparison operator: Part 7--Gajendra Gulgulia

The series continues.

C++20 three way comparison operator: Part 7

by Gajendra Gulgulia

From the article:

In the fifth and sixth parts of the tutorial series, I explained the comparison categories std::strong_ordering and std::weak_ordering, respectively, with examples and use cases. In this part of the tutorial series, we take a closer look at the third and final comparison category, std::partial_ordering...
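
As a quick refresher (a minimal sketch, not taken from the article): std::partial_ordering is the category produced when comparing floating-point values, because some values, such as NaN, compare as unordered:

    #include <cmath>
    #include <compare>
    #include <type_traits>

    int main() {
        double nan = std::nan("");
        auto r = 1.0 <=> nan;   // floating-point <=> yields std::partial_ordering
        static_assert(std::is_same_v<decltype(r), std::partial_ordering>);
        // NaN is neither less than, equal to, nor greater than 1.0:
        return (r == std::partial_ordering::unordered) ? 0 : 1;   // returns 0
    }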

Classic Railroad Field Trip Announced--CppCon 2021

Will you participate?

Classic Railroad Field Trip Announced

From the article:

The CppCon 2021 Field Trip will be an adventure into the mountains to sample classic mountain cuisine from Beau Jo’s, followed by a train trip over the far-famed Georgetown Loop. Spend a fun-filled Sunday on October 24 with fellow attendees as we ascend to 9,101 ft (2,774 m) via air-conditioned buses and railroad coaches. This trip will encompass Geologic, Historic, Natural, and Culinary wonders west of Denver...

CppCon 2019 Abusing Your Memory Model for Fun and Profit--Samy Al Bahra, Paul Khuong

Registration is now open for CppCon 2021, which starts on October 24 and will be held both in person and online. To whet your appetite for this year’s conference, we’re posting videos of some of the top-rated talks from our most recent in-person conference in 2019 and our online conference in 2020. Here’s another CppCon talk video we hope you will enjoy – and why not register today for CppCon 2021 to attend in person, online, or both!

Abusing Your Memory Model for Fun and Profit

by Samy Al Bahra, Paul Khuong

Summary of the talk:

The most efficient concurrent C++ data structures used in the wild today usually achieve break-neck performance by either constraining their workload or constraining correctness to a particular memory model. The audience will learn about the Wild West of abusing memory models for performance and simplification, through real-world examples. Non-blocking data structures and their benefits often come at the cost of increased latency, because they require additional complexity in the common case. There are plenty of exceptions to this if the requirements of the data structure are relaxed, such as supporting only a bounded level of write or read concurrency, or if correctness is constrained to a particular memory model. For this reason, well-designed specialized non-blocking data structures guarantee improved resiliency, throughput, and latency in all cases compared to alternatives relying on traditional concurrency primitives. Specialized concurrent structures are commonplace in the Linux kernel and other performance-critical systems.

You will learn about the foundational concepts for understanding your underlying hardware's memory model, and about abusing memory models for fun and profit (a small litmus-test sketch follows this list):
* Cache coherency
* Store Buffers
* Pipelines and speculative execution
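
The classic illustration of the store-buffer item above is the store-buffering litmus test; the following sketch (illustrative, not from the talk) shows the reordering that x86-TSO permits when the stores are not sequentially consistent:

    #include <atomic>
    #include <thread>

    std::atomic<int> x{0}, y{0};
    int r1 = -1, r2 = -1;

    int main() {
        // Each thread's store may still sit in its core's store buffer when the
        // subsequent load executes, so both loads can observe the old value 0.
        std::thread t1([] {
            x.store(1, std::memory_order_relaxed);
            r1 = y.load(std::memory_order_relaxed);
        });
        std::thread t2([] {
            y.store(1, std::memory_order_relaxed);
            r2 = x.load(std::memory_order_relaxed);
        });
        t1.join();
        t2.join();
        // r1 == 0 && r2 == 0 is a legal outcome here; promoting the operations to
        // memory_order_seq_cst forbids it (the compiler emits a full fence / XCHG).
        return (r1 == 0 && r2 == 0) ? 1 : 0;
    }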

This talk provides real-world examples that exploit the x86-TSO model to their advantage:
* A general technique to turn literally any open-addressed hash table into a concurrent hash table with low to negligible (near-zero) cost. The transformation makes your hash table wait-free for writers and mostly wait-free for readers (lock-free in hypothetical worst cases), and is practical for languages such as C++. The mechanism is superior to the previously popular Azul lock-free hash table and, even more importantly, practical for any non-garbage-collected environment. The overhead is negligible on TSO and low on non-TSO.
* Blazingly fast event counters: an extremely efficient replacement for condition variables that is faster than the alternatives. This is implemented without any heavy-weight atomic operations on the fast path, by exploiting properties of the x86-TSO model.
* Scalable memory management: Exploit the ordering and visibility constraints of the underlying architecture for blazingly fast implementations of RCU and other safe memory reclamation schemes.
* and more.