Articles & Books

C++26: A User-Friendly assert() macro -- Sandor Dargo

C++26 is bringing some long-overdue changes to assert(). But why are those changes needed? And when do we actually use assert, anyway?

At its core, assert() exists to validate runtime conditions. If the given expression evaluates to false, the program aborts. I’m almost certain you’ve used it before — at work, in personal projects, or at the very least in examples and code snippets.

So what’s the problem?

C++26: A User-Friendly assert() macro

by Sandor Dargo

From the article:

assert() is a macro — and a slightly sneaky one at that. Its name is written in lowercase, so it doesn’t follow the usual SCREAMING_SNAKE_CASE convention we associate with macros. There’s a good chance you’ve been using it for years without ever thinking about its macro nature.

Macros, of course, aren’t particularly popular among modern C++ developers. But the issue here isn’t the usual - but valid - “macros are evil” argument. The real problem is more specific:

The preprocessor only understands parentheses for grouping. It does not understand other C++ syntax such as template angle brackets or brace-initialization.

As a result, several otherwise perfectly valid-looking assertions fail to compile:

// https://godbolt.org/z/9sqM7PvWh
using Int = int;
int x = 1, y = 2;

assert(std::is_same<int, Int>::value);
assert([x, y]() { return x < y; }() == 1);
assert(std::vector<int>{1, 2, 3}.size() == 3);

Glaze 7.2 - C++26 Reflection | YAML, CBOR, MessagePack, TOML and more

Glaze is a high-performance C++23 serialization library with compile-time reflection. It has grown to support many more formats and features, and in v7.2.0 C++26 Reflection support has been merged!

Glaze 7.2 - C++26 Reflection | YAML, CBOR, MessagePack, TOML and more

From the article:

Glaze now supports C++26 reflection with experimental GCC and Clang compilers. GCC 16 will soon be released with this support. When enabled, Glaze replaces the traditional __PRETTY_FUNCTION__ parsing and structured binding tricks with proper compile-time reflection primitives (std::meta).

The API doesn't change at all. You just get much more powerful automatic reflection that still works with Glaze overrides! Glaze was designed with automatic reflection in mind and still lets you customize reflection metadata using glz::meta on top of what std::meta provides via defaults.

Behold the power of meta::substitute -- Barry Revzin

What if string formatting could do far more than just substitute values—and do it all at compile time? This deep dive explores how modern C++ features like reflection unlock powerful new possibilities for parsing, analyzing, and transforming format strings before your program even runs.

Behold the power of meta::substitute

by Barry Revzin

From the article:

Over winter break, I started working on a proposal for string interpolation. It was a lot of fun to work through implementing, basically an hour a day during my daughter’s nap time. The design itself is motivated by wanting a lot more functionality than just formatting — and one of the examples in the paper was implementing an algorithm that does highlighting of the interpolations, such that:

[code sample shown as an image in the original post]
would print this:

x=5 and y=*10* and z=hello!

without doing any additional parsing work. I got the example from Vittorio Romeo’s original paper.

Now, when I wrote the paper, I considered this to be a simple example demonstrating something that was possible with the design I was proposing that was not possible with the other design. I thought that because obviously you need the format string as a compile-time constant in order to parse it at compile time to get the information that you need.

Let's check vibe code that acts like optimized C++ one but is actually a mess

The value of a skilled developer is shifting toward the ability to effectively review code. Although generating code now is easier than ever, evaluating it for proper decomposition, correctness, efficiency, and security is still important. To see why it's important to understand generated code and to recognize what lies beneath a program's elegant syntax, let's look at a small project called markus, created using Claude Opus.

Let's check vibe code that acts like optimized C++ one but is actually a mess

by Andrey Karpov

From the article:

Clearly, the 64-bit code is much more efficient, except when SSE2 is involved. Even then, though, everything runs quickly. The code Claude Opus generated for the markus project is the worst of all the options. Not only is the simplest implementation with a regular loop faster, it's also shorter. The extra code lines only made things worse.


Devirtualization and Static Polymorphism -- David Álvarez Rosa

Ever wondered why your clean, object-oriented design sometimes slows things down? This piece breaks down how virtual dispatch impacts performance—and how techniques like devirtualization and static polymorphism can eliminate that overhead entirely.

Devirtualization and Static Polymorphism

by David Álvarez Rosa

From the article:

Ever wondered why your “clean” polymorphic design underperforms in benchmarks? Virtual dispatch enables polymorphism, but it comes with hidden overhead: pointer indirection, larger object layouts, and fewer inlining opportunities.

Compilers do their best to devirtualize these calls, but it isn’t always possible. On latency-sensitive paths, it’s beneficial to manually replace dynamic dispatch with static polymorphism, so calls are resolved at compile time and the abstraction has effectively zero runtime cost.

Power of C++26 Reflection: Strong (opaque) type definitions -- r/cpp

Inspired by a similar previous thread showcasing cool uses for C++26 reflection.

Power of C++26 Reflection: Strong (opaque) type definitions 

From the article:

With reflection, you can easily create "opaque" type definitions, i.e. "strong types". It works by having an inner value stored, and wrapping over all public member functions.

Note: I am using queue_injection { ... } with the EDG experimental reflection, which afaik wasn't actually integrated into the C++26 standard, but without it you would simply need two build stages for codegen. This is also just a proof of concept; some features aren't fully developed (e.g. aggregate initialization).

struct Item { /* ... */ }; // name, price as methods

struct FoodItem;
struct BookItem;
struct MovieItem;

consteval { 
    make_strong_typedef(^^FoodItem, ^^Item); 
    make_strong_typedef(^^BookItem, ^^Item); 
    make_strong_typedef(^^MovieItem, ^^Item); 
}

// Fully distinct types
void display(FoodItem &i) {
    std::cout << "Food: " << i.name() << ", " << i.price() << std::endl;
}
void display(BookItem &i) {
    std::cout << "Book: " << i.name() << ", " << i.price() << std::endl;
}

int main() {
    FoodItem fi("apple", 10); // works if Item constructor isn't marked explicit
    FoodItem fi_conversion(Item{"chocolate", 5}); // otherwise
    BookItem bi("the art of war", 20);
    MovieItem mi("interstellar", 25);

    display(fi);
    display(bi);
    // display(Item{"hello", 1}); // incorrect, missing display(Item&) function
    // display(mi); // incorrect, missing display(MovieItem&) function
}

std::vector — Four Mechanisms Behind Every push_back() -- Gracjan Olbinski

A walkthrough of four mechanisms working behind every push_back() call — exponential growth and amortized O(1), the growth factor's effect on memory reuse, cache performance from contiguity, and the silent noexcept trap in move semantics during reallocation.

std::vector — Four Mechanisms Behind Every push_back()

by Gracjan Olbinski

From the article:

"You call push_back() a thousand times. The vector reallocates about ten. Behind that simple interface, four mechanisms are working together — each one invisible during normal use, each one shaping your performance in ways that push_back() will never tell you about."


The hidden compile-time cost of C++26 reflection -- Vittorio Romeo

How much does C++26 Reflection actually cost your build?

In this article, we'll perform some early compilation time benchmarks of one of the most awaited C++26 features.

The hidden compile-time cost of C++26 reflection

by Vittorio Romeo

From the article:

Fast compilation times are extremely valuable to keep iteration times low, productivity and motivation high, and to quickly see the impact of your changes.

I would love to see a world where C++26 reflection is as close as possible to a lightweight language feature [...]

So, I decided to take some early measurements.

Stop Choosing: Get C++ Performance in Python Algos with C++26 -- Richard Hickling

In algorithmic trading, the Python-vs-C++ debate is usually framed as flexibility versus speed — rapid strategy development on one side, ultra-low-latency execution on the other. But with C++26 reflection, that trade-off starts to disappear, making it possible to generate Python bindings automatically while keeping the core logic running at native C++ performance.

Stop Choosing: Get C++ Performance in Python Algos with C++26

by Richard Hickling

From the article:

The “religious war” between Python and C++ in algorithmic trading usually boils down to a single trade-off: Python is faster for getting ideas to market, while C++ is faster for getting orders into the book.

But why choose? With the advent of C++26 Reflection, you can now have the flexibility of a Python-based strategy without the performance penalty of slow loops.

Bring in C++ Rapidly: How Reflection Works

The biggest hurdle in hybrid trading systems has always been the “bridge.” Traditionally, if you wrote a complex pricer in C++, you had to manually write “boilerplate” code to tell Python how to talk to it. If you added a new function, you had to update the bridge. It was tedious and error-prone.

Reflection changes the game by allowing the code to “look in the mirror”. Instead of you manually describing your C++ functions to Python, the compiler does it for you. It programmatically inspects your classes and generates the bindings automatically.

C++26: std::is_within_lifetime -- Sandor Dargo

When I first came across std::is_within_lifetime, I expected another small type-traits utility — not a feature tied to checking whether a union alternative is active. But once you look closer, this seemingly narrow addition turns out to solve a surprisingly fundamental problem in constant evaluation.

C++26: std::is_within_lifetime

by Sandor Dargo

From the article:

When I was looking for the next topic for my posts, my eyes stopped on std::is_within_lifetime. Dealing with lifetime issues is quite a common source of bugs, after all. Then I clicked on the link and I read Checking if a union alternative is active. I scratched my head. Is the link correct?

It is — and it totally makes sense.

Let’s get into the details and first check what P2641R4 is about.

What does std::is_within_lifetime do?

C++26 adds consteval bool std::is_within_lifetime(const T* p) to the <type_traits> header. Being consteval, it can only be called during constant evaluation, where it checks whether p points to an object that is currently within its lifetime.
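A sketch of the motivating use case, assuming C++26 support (no shipping compiler implements this yet, and the union's member names here are illustrative):

```cpp
#include <type_traits>

union Number {
    int i;
    float f;
};

// During constant evaluation we may ask which alternative is active,
// something that was previously impossible to express in constexpr code.
constexpr int as_int(const Number& n) {
    if (std::is_within_lifetime(&n.i))
        return n.i;
    return static_cast<int>(n.f);
}

constexpr Number n{42};        // activates the int alternative
static_assert(as_int(n) == 42);
```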