CppCon 2024 From Macro to Micro in C++ -- Conor Spilsbury

Registration is now open for CppCon 2025! The conference starts on September 15 and will be held in person in Aurora, CO. To whet your appetite for this year’s conference, we’re posting videos of some of the top-rated talks from last year's conference. Here’s another CppCon talk video we hope you will enjoy – and why not register today for CppCon 2025!

Lightning Talk: From Macro to Micro in C++

by Conor Spilsbury

Summary of the talk:

Our continuous real-time monitoring surfaced an anomaly in the data about our system's performance. This led us to investigate and identify the culprit: a specific data structure used in our code and the way that structure was being initialized.

Streamlined Iteration: Exploring Keys and Values in C++20 -- Daniel Lemire

Modern C++ offers a variety of ways to work with key-value data structures like std::map and std::unordered_map, from traditional loops to sleek functional-style expressions using C++20 ranges. By exploring both styles and benchmarking them across platforms, we can better understand how newer language features affect readability, expressiveness, and performance.

Streamlined Iteration: Exploring Keys and Values in C++20

by Daniel Lemire

From the article:

In software, we often use key-value data structures, where each key is unique and maps to a specific value. Common examples include dictionaries in Python, hash maps in Java, and objects in JavaScript. If you combine arrays with key-value data structures, you can represent most data.

In C++, we have two standard key-value data structures: the std::map and the std::unordered_map. Typically, the std::map is implemented as a tree (e.g., a red-black tree) with sorted keys, providing logarithmic time complexity O(log n) for lookups, insertions, and deletions, and maintaining keys in sorted order. The std::unordered_map is typically implemented as a hash table with unsorted keys, offering average-case constant time complexity O(1) for lookups, insertions, and deletions. In the std::unordered_map, the hash table uses a hashing function to map keys to indices in an array of buckets. Each bucket is essentially a container that can hold multiple key-value pairs, typically implemented as a linked list. When a key is hashed, the hash function computes an index corresponding to a bucket. If multiple keys hash to the same bucket (a collision), they are stored in that bucket’s linked list.

Quite often, we only need to look at the keys, or look at the values. The C++20 standard makes this convenient through the introduction of ranges (std::ranges::views::keys and std::ranges::views::values). Let us consider two functions using the ‘modern’ functional style. The first function sums the values and the next function counts how many keys (assuming that they are strings) start with a given prefix.
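The article benchmarks complete implementations; the sketch below is only a rough illustration in that spirit, with hypothetical function names that are not taken from the article, showing the general shape of such functions with std::ranges::views::values and std::ranges::views::keys:

#include <cstddef>
#include <cstdint>
#include <ranges>
#include <string>
#include <string_view>
#include <unordered_map>

// Sum all values in the map by iterating over the values view only.
std::uint64_t sum_values(const std::unordered_map<std::string, std::uint64_t>& map) {
  std::uint64_t sum = 0;
  for (std::uint64_t value : map | std::ranges::views::values) {
    sum += value;
  }
  return sum;
}

// Count the keys that start with the given prefix, using the keys view.
std::size_t count_keys_with_prefix(const std::unordered_map<std::string, std::uint64_t>& map,
                                   std::string_view prefix) {
  std::size_t count = 0;
  for (const std::string& key : map | std::ranges::views::keys) {
    if (key.starts_with(prefix)) {
      ++count;
    }
  }
  return count;
}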

CppCon 2024 To Int or to Uint, This is the Question -- Alex Dathskovsky

Registration is now open for CppCon 2025! The conference starts on September 15 and will be held in person in Aurora, CO. To whet your appetite for this year’s conference, we’re posting videos of some of the top-rated talks from last year's conference. Here’s another CppCon talk video we hope you will enjoy – and why not register today for CppCon 2025!

To Int or to Uint, This is the Question

by Alex Dathskovsky

Summary of the talk:

In our daily work, we often use integral data types to perform arithmetic calculations, but we may not always consider how the selection of the data type can affect performance and compiler optimizations. This talk will delve into the importance of choosing the correct data type for the job and how it impacts compiler optimizations. We will also examine the overall performance implications for the application. We will explore specific algorithms where using unsigned data types is more beneficial and other situations where signed data types are the best choice. Furthermore, this talk will dive into the differences between signed and unsigned integers, show how the processor handles certain operations, and explain many of the surprising pitfalls of using integral types.

Attendees will come away with a deeper understanding of how data type selection can impact their code and how to make better choices for optimal performance.

This session will follow the guidelines from my short article on LinkedIn, but it will go into greater detail and contain more examples and explanations.
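Purely as a flavor of the kind of signed/unsigned surprises involved (my own illustration, not an example taken from the talk or the article), consider:

#include <iostream>
#include <vector>

int main() {
  // size() returns an unsigned type, so subtracting from it can wrap around:
  std::vector<int> empty;
  std::cout << empty.size() - 1 << '\n';  // huge value on 64-bit platforms, not -1

  // Mixed signed/unsigned comparisons convert the signed operand to unsigned:
  int i = -1;
  unsigned u = 1;
  std::cout << std::boolalpha << (i < u) << '\n';  // prints false: -1 becomes a huge unsigned value
}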

Owning and non-owning C++ Ranges -- Hannes Hauswedell

This is the first article in a series discussing some of the underlying properties of C++ ranges and in particular range adaptors. At the same time, I introduce the design of an experimental library which aims to solve some of the problems discussed here.

Owning and non-owning C++ Ranges

by Hannes Hauswedell

From the article:

We will begin by having a look at ranges from the standard library prior to C++20, since this is what people are most used to. Note that although the ranges themselves are from C++17, I will use some terminology/concepts/algorithms introduced later to explain how they relate to each other. Remember that to count as a range in C++, a type needs to have just begin() and end(). Everything else is bonus.

[…]

Containers are the ranges everybody already used before Ranges were a thing. They own their elements, i.e. the storage of the elements is managed by the container and the elements disappear when the container does. Containers are multi-pass ranges, i.e. you can iterate over them multiple times and will always observe the same elements.

[…]

If containers are owning ranges, what are non-owning ranges? C++17 introduced a first example: std::string_view, a range that consists just of a begin and end pointer into another range’s elements.

[…]

However, the most important (and controversial) change came by way of P2415, which allowed views to become owning ranges. It was also applied to C++20 as a defect report, although it was quite a significant design change. This is a useful feature; however, it resulted in the std::ranges::view concept being changed so that it no longer means “non-owning range”.
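As a rough sketch of that distinction (my own illustration, not code from the article): std::views::all over an lvalue container yields a non-owning ref_view, while since P2415 the same call over an rvalue container yields an owning_view, and both still model std::ranges::view:

#include <ranges>
#include <string_view>
#include <vector>

int main() {
  std::vector<int> v{1, 2, 3};

  // Non-owning: a ref_view that merely refers to v's elements.
  auto non_owning = std::views::all(v);

  // Owning (post-P2415): an owning_view that takes over the rvalue's elements.
  auto owning = std::views::all(std::vector<int>{4, 5, 6});

  static_assert(std::ranges::view<decltype(non_owning)>);
  static_assert(std::ranges::view<decltype(owning)>);

  // std::string_view remains the classic non-owning range: two pointers into
  // characters owned by someone else.
  std::string_view sv = "hello";

  return non_owning.front() + owning.front() + static_cast<int>(sv.size());
}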

[…]


Visit Meeting C++ 2025 with assistance and your wheelchair

Sharing an opportunity for those needing assistance to travel and lodge: the conference hotel of Meeting C++ has special rooms for you!

Visit Meeting C++ 2025 with assistance and your wheelchair

by Jens Weller

From the article:

As you may not be aware of this opportunity, I wanted to highlight that the Vienna House Andel's Berlin Hotel offers accessibility rooms for those who need them.

a picture showing a shower with hand rails and a chair

Meeting C++ in Berlin has been visited by folks in wheelchairs, and I thought I'd highlight this possibility. Recently, when looking through pictures provided by my hotel contact, I saw the above picture of an accessible bathroom, which sparked my interest in finding out more about them. While I knew they existed, I didn't know that the hotel actually has 14 such rooms, and that each has a twin room for an assistant to stay in. So if such a room is needed for your stay, whether you bring a wheelchair or not - now you know that it's possible...


How to break or continue from a lambda loop? -- Vittorio Romeo

Is it possible to write a simple iteration API that hides implementation details and lets users break and continue?

Here's a new article about a lightweight solution using a `ControlFlow` enumeration!

How to break or continue from a lambda loop?

by Vittorio Romeo

From the article:

Here’s an encapsulation challenge that I frequently run into: how to let users iterate over an internal data structure without leaking implementation details, but still giving them full control over the loop?

Implementing a custom iterator type requires significant boilerplate and/or complexity, depending on the underlying data structure.

Coroutines are simple and elegant, but the codegen is atrocious – definitely unsuitable for hot paths.
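The article develops the idea in full; a minimal sketch of the general shape (with a hypothetical Registry type and forEach member, not the article's actual code) might look like this:

#include <iostream>
#include <vector>

// The callback returns a ControlFlow value telling the iteration function
// whether to keep going or stop early.
enum class ControlFlow { Continue, Break };

class Registry {
  std::vector<int> data_{1, 2, 3, 4, 5};  // internal detail we don't want to expose

public:
  template <typename F>
  void forEach(F&& f) const {
    for (const int x : data_) {
      if (f(x) == ControlFlow::Break) {
        return;  // the user asked to stop, like `break`
      }
    }
  }
};

int main() {
  Registry r;
  r.forEach([](int x) {
    if (x == 4) return ControlFlow::Break;         // behaves like `break`
    if (x % 2 == 0) return ControlFlow::Continue;  // behaves like `continue`
    std::cout << x << '\n';
    return ControlFlow::Continue;
  });
}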


Raw Loops for Performance? -- Sandor Dargo

Using ranges or algorithms has several advantages over raw loops, notably readability. On the other hand, as we’ve just seen, sheer performance is not necessarily among those advantages. Using ranges can be slightly slower than a raw loop version. But that’s not necessarily a problem; it really depends on your use case. Most probably it won’t make a big difference.

Raw Loops for Performance?

by Sandor Dargo

From the article:

To my greatest satisfaction, I’ve recently joined a new project. I started to read through the codebase before joining and at that stage, whenever I saw a possibility for a minor improvement, I raised a tiny pull request. One of my pet peeves is rooted in Sean Parent’s 2013 talk at GoingNative, C++ Seasoning, where he advocated for no raw loops.

When I saw this loop, I started to think about how to replace it:

(The original article shows the loop in question as a code screenshot at this point.)

Please note that the example is simplified and slightly changed so that it compiles on its own.

Let’s focus on foo, the rest is there just to make the example compilable.

It seems that we could use std::transform. But heck, we use C++20 and have ranges at hand, so let’s go with std::ranges::transform!
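Since the screenshot isn't reproduced here, the following is only a hypothetical stand-in for foo (not the article's code), showing the raw-loop shape and the std::ranges::transform rewrite being discussed:

#include <algorithm>
#include <iterator>
#include <string>
#include <vector>

// Raw-loop version: build a vector of strings from a vector of ints.
std::vector<std::string> fooRawLoop(const std::vector<int>& numbers) {
  std::vector<std::string> result;
  result.reserve(numbers.size());
  for (const int n : numbers) {
    result.push_back(std::to_string(n));
  }
  return result;
}

// Ranges version: the same transformation expressed with std::ranges::transform.
std::vector<std::string> fooRanges(const std::vector<int>& numbers) {
  std::vector<std::string> result;
  result.reserve(numbers.size());
  std::ranges::transform(numbers, std::back_inserter(result),
                         [](int n) { return std::to_string(n); });
  return result;
}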

CppCon 2024 When Nanoseconds Matter: Ultrafast Trading Systems in C++ -- David Gross

Registration is now open for CppCon 2025! The conference starts on September 13 and will be held in person in Aurora, CO. To whet your appetite for this year’s conference, we’re posting videos of some of the top-rated talks from last year's conference. Here’s another CppCon talk video we hope you will enjoy – and why not register today for CppCon 2025!

When Nanoseconds Matter: Ultrafast Trading Systems in C++

by David Gross

Summary of the talk:

Achieving low latency in a trading system cannot be an afterthought; it must be an integral part of the design from the very beginning. While low latency programming is sometimes seen under the umbrella of "code optimization", the truth is that most of the work needed to achieve such latency is done upfront, at the design phase. How to translate our knowledge about the CPU and hardware into C++? How to use multiple CPU cores, handle concurrency issues and cost, and stay fast?

In this talk, I will be sharing with you some industry insights on how to design from scratch a low latency trading system. I will be presenting building blocks that application developers can directly re-use in their trading systems (or in other high-performance, highly concurrent applications).

Additionally, we will delve into several algorithms and data structures commonly used in trading systems, and discuss how to optimize them using the latest features available in C++. This session aims to equip you with practical knowledge and techniques to enhance the performance of your systems and make informed decisions about the tools and technologies you choose to employ.

constexpr Functions: Optimization vs Guarantee -- Andreas Fertig

Constexpr has been around for a while now, but many don’t fully understand its subtleties. Andreas Fertig explores its use and when a constexpr expression might not be evaluated at compile time.

constexpr Functions: Optimization vs Guarantee

by Andreas Fertig

From the article:

The feature of constant evaluation is nothing new in 2023. constexpr has been available since C++11. Yet, in many of my classes, I see that people still struggle with constexpr functions. Let me shed some light on them.

What you get is not what you see

One thing, which is a feature, is that constexpr functions can be evaluated at compile-time, but they can run at run-time as well. That evaluation at compile-time requires all values to be known at compile-time is reasonable. But I often see the assumption that once all values for a constexpr function are known at compile-time, the function will be evaluated at compile-time.

I can say that I find this assumption reasonable, and discovering the truth isn’t easy. Let’s consider an example (Listing 1).

constexpr auto Fun(int v)
{
  return 42 / v; ①
}

int main()
{
  const auto f = Fun(6); ②
  return f;              ③
}
Listing 1

The constexpr function Fun divides 42 by a value provided by the parameter v ①. In ②, I call Fun with the value 6 and assign the result to the variable f.

Last, in ③, I return the value of f to prevent the compiler from optimizing this program away. If you use Compiler Explorer to look at the resulting assembly, GCC with -O1 brings this down to:

  main:
          mov     eax, 7
          ret

As you can see, the compiler has evaluated the result of 42 / 6, which, of course, is 7. Aside from the final number, there is also no trace at all of the function Fun.

Now, this is what, in my experience, makes people believe that Fun was evaluated at compile-time thanks to constexpr. Yet this view is incorrect. You are looking at compiler optimization, something different from constexpr functions.
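One way to see the difference (my own illustration, not part of the article's listing) is to force a constant-expression context, where evaluation at compile-time is guaranteed by the language rather than left to the optimizer:

constexpr auto Fun(int v)
{
  return 42 / v;
}

int main()
{
  constexpr auto guaranteed = Fun(6);  // constant-expression context: must be evaluated at compile-time
  // constexpr auto broken = Fun(0);   // would be ill-formed: 42 / 0 is not a constant expression
  const auto maybe = Fun(6);           // may or may not be evaluated at compile-time
  return guaranteed + maybe;
}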

Results summary: 2025 Annual C++ Developer Survey "Lite"

Thank you to everyone who responded to our 2025 annual global C++ developer survey. As promised, here is a summary of the results, including one-page AI-generated summaries of your answers to the free-form questions:

CppDevSurvey-2025-summary.pdf

A 100-page version of this report that also includes all individual write-in responses has now been forwarded to the C++ standards committee and C++ product vendors, to help inform C++ evolution and tooling.

Your feedback is valuable, and appreciated.