A Brief History of Bjarne Stroustrup, the Creator of C++ -- CultRepo

In this portrait, we meet Bjarne Stroustrup and talk about his childhood, his accidental entry into computer science (what is "datologi" anyway?), and the ideas that shaped one of the most influential programming languages ever made -- among many, many other things... like how pronouncing his last name involves a potato.

A Brief History of Bjarne Stroustrup, the Creator of C++

by CultRepo

Watch now:

<iframe width="560" height="315" src="https://www.youtube.com/embed/uDtvEsv730Y?si=QD_WmemwpsAfNsVI" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>

Announcing Meeting C++ 2026

This year's Meeting C++ conference is special: it's the 15th conference that Meeting C++ has organized, and it's also the 5th time the event is hybrid!

Announcing Meeting C++ 2026

by Jens Weller

From the article:

We'll be meeting from the 26th to the 28th of November in Berlin! You have the unique chance to spend the first Advent in Berlin with C++ and the Christmas markets open!

With Mateusz Pusz and Kate Gregory, I've chosen two well-known speakers for the keynotes this year. Mateusz is well known for his units library, which is currently also proposed for the standard. It is also an important contribution to making C++ safer and more secure, the big topic of last year. Kate Gregory will be visiting us in Berlin again; she is known for her ability to create great talks around the technical and social aspects of our daily lives as devs. You might remember her keynote from 2017, or her talk about the aging programmer two years ago.

For the 15th time, Meeting C++ will organize a great event: 3 days filled with lots of content about C++. Like last year, the plan is to host 3 tracks in parallel in Berlin, with an optional 4th track that will be unlocked either by sponsorships or ticket sales. You can be a part of this great C++ event by attending onsite or online. There is already great news for onsite attendees: Hudson River Trading is again this year's t-shirt sponsor, and a great and unique Meeting C++ 2026 t-shirt is an exclusive perk for onsite attendees!

CppCon 2025 Threads vs Coroutines: Why C++ Has Two Concurrency Models -- Conor Spilsbury

Registration is now open for CppCon 2026! The conference starts on September 12 and will be held in person in Aurora, CO. To whet your appetite for this year’s conference, we’re posting videos of some of the top-rated talks from last year's conference. Here’s another CppCon talk video we hope you will enjoy – and why not register today for CppCon 2026!

Threads vs Coroutines: Why C++ Has Two Concurrency Models

by Conor Spilsbury

Summary of the talk:

The C++11 standard introduced a powerful set of tools for concurrency such as threads, mutexes, condition variables, and futures. More recently, C++20 introduced another powerful but fundamentally different concurrency abstraction in the form of coroutines. But coroutines are not just an evolution of or a replacement for threads. Instead, they each solve different problems in different ways. Choosing the right tool for the job requires understanding how each works under the hood and where each shines. This talk will help build that intuition by looking at how each interacts with the operating system and hardware, which will help you make better decisions when choosing which to use.

We'll explore how threads and synchronization primitives work at the operating-system and hardware level, from thread creation and scheduling to where context switching and synchronization introduce performance costs. We’ll then contrast this to the coroutine model introduced in C++20 which takes a fundamentally different approach by using a cooperative model based on the suspension and resumption of work.

Given this understanding, we’ll finish by applying this intuition to a set of real-world scenarios to identify whether threads or coroutines are a better fit for the problem at hand.

C++26: std::is_within_lifetime -- Sandor Dargo

When I first came across std::is_within_lifetime, I expected another small type-traits utility — not a feature tied to checking whether a union alternative is active. But once you look closer, this seemingly narrow addition turns out to solve a surprisingly fundamental problem in constant evaluation.

C++26: std::is_within_lifetime

by Sandor Dargo

From the article:

When I was looking for the next topic for my posts, my eyes stopped on std::is_within_lifetime. Dealing with lifetime issues is quite a common source of bugs, after all. Then I clicked on the link and I read Checking if a union alternative is active. I scratched my head. Is the link correct?

It is — and it totally makes sense.

Let’s get into the details and first check what P2641R4 is about.

What does std::is_within_lifetime do?

C++26 adds bool std::is_within_lifetime(const T* p) to the <type_traits> header. This function checks whether p points to an object that is currently within its lifetime during constant evaluation.

CppCon 2025 More Speed & Simplicity: Practical Data-Oriented Design in C++ -- Vittorio Romeo

Registration is now open for CppCon 2026! The conference starts on September 12 and will be held in person in Aurora, CO. To whet your appetite for this year’s conference, we’re posting videos of some of the top-rated talks from last year's conference. Here’s another CppCon talk video we hope you will enjoy – and why not register today for CppCon 2026!

CppCon 2025 More Speed & Simplicity: Practical Data-Oriented Design in C++

by Vittorio Romeo

Summary of the talk:

Data-Oriented Design (DOD) presents a different way of thinking: prioritizing data layout not only unlocks significant performance gains via cache efficiency but can also lead to surprising simplicity in the code that actually processes the data.

This talk is a practical introduction for C++ developers familiar with OOP. Through a step-by-step refactoring of a conventional OOP design, we’ll both cover how data access patterns influence speed and how a data-first approach can clarify intent.

We’ll measure the performance impact with benchmarks and analyze how the refactored code, particularly the data processing loops, can become more direct and conceptually simpler.

Key techniques like Structure-of-Arrays (SoA) vs. Array-of-Structures (AoS) will be explained and benchmarked, considering their effects on both execution time and code clarity. We’ll pragmatically weigh the strengths (performance, simpler data logic) and weaknesses of DOD, highlighting how it can complement, not just replace, OOP.

We’ll also demonstrate that DOD doesn’t necessitate abandoning robust abstractions, showcasing C++ techniques for creating safe, expressive APIs that manage both complexity and performance.

Let’s learn how thinking “data-first” can make your C++ code faster and easier to reason about!
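To make the AoS/SoA distinction concrete, here is a small sketch. The ParticleAoS/ParticlesSoA names and the update step are illustrative assumptions, not code from the talk:

```cpp
#include <cstddef>
#include <vector>

// Array-of-Structures: all fields of one particle sit together in memory.
// A loop that touches only positions still drags velocities through cache.
struct ParticleAoS { float x, y, vx, vy; };

// Structure-of-Arrays: each field is contiguous, so a loop over positions
// streams through memory and is straightforward for the compiler to vectorize.
struct ParticlesSoA {
    std::vector<float> x, y, vx, vy;
};

void step_aos(std::vector<ParticleAoS>& ps, float dt) {
    for (auto& p : ps) {
        p.x += p.vx * dt;
        p.y += p.vy * dt;
    }
}

void step_soa(ParticlesSoA& ps, float dt) {
    for (std::size_t i = 0; i < ps.x.size(); ++i) {
        ps.x[i] += ps.vx[i] * dt;  // contiguous reads and writes per field
        ps.y[i] += ps.vy[i] * dt;
    }
}
```

Both functions compute the same result; the difference the talk benchmarks is how the two layouts interact with caches and SIMD units as the particle count grows.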

The cost of a function call -- Daniel Lemire

Function calls are cheap — but they are not free — and in tight loops their cost can dominate your runtime. Modern compilers rely on inlining to remove that overhead and unlock deeper optimizations, sometimes turning an ordinary loop into dramatically faster SIMD code.

The cost of a function call

by Daniel Lemire

From the article:

When programming, we chain functions together. Function A calls function B. And so forth.

You do not have to program this way; you could write an entire program using a single function. It would be a fun exercise to write a non-trivial program using a single function… as long as you delegate the code writing to AI, because human beings quickly struggle with long functions.

A key compiler optimization is ‘inlining’: the compiler takes your function definition and tries to substitute it at the call location. It is conceptually quite simple. Consider the following example, where the function add3 calls the function add.

int add(int x, int y) { 
     return x + y; 
} 
int add3(int x, int y, int z) { 
     return add(add(x, y), z); 
} 

You can manually inline the call as follows.

int add3(int x, int y, int z) { 
     return x + y + z; 
}


Implementing vector -- Quasar Chunawala

logo.pngFinding out how to implement features from the standard library can be a useful learning exercise. Quasar Chunawala explores implementing your own version of std::vector.

Implementing vector<T>

by Quasar Chunawala

From the article:

The most fundamental STL data structure is the vector. In this article, I am going to explore writing a custom implementation of the vector data structure. The standard library implementation std::vector is a work of art: it is extremely efficient at managing memory and has been tested ad nauseam. It is much better, in fact, than a homegrown alternative would be.

Why then write our own custom vector?

  • Writing a naive implementation is challenging and rewarding. It is a lot of fun!
  • Coding up these training implementations, thinking about corner cases, getting your code reviewed, and revisiting your design is very effective for understanding the inner workings of STL data structures and writing good C++ code.
  • It's a good opportunity to learn about low-level memory management algorithms.

We are not aiming for an exhaustive implementation, but we will write test cases for all the basic functionality expected of a vector-like data structure.

Informally, a std::vector<T> represents a dynamically allocated array that can grow as needed. As with any array, a std::vector<T> is a sequence of elements of type T arranged contiguously in memory. We will put our homegrown version of vector<T> under the dev namespace.

Flavours of Reflection -- Bernard Teo

Reflection is coming to C++26 and is arguably the most anticipated feature accepted into this version of the standard. With substantial implementation work already done in Clang and GCC, compile-time reflection in C++ is no longer theoretical — it’s right around the corner.

Flavours of Reflection: Comparing C++26 reflection with similar capabilities in other programming languages

by Bernard Teo

From the article:

Reflection is imminent. It is set to be part of C++26, and it is perhaps the most anticipated feature accepted into this version of the standard. Lots of implementation work has already been done for Clang and GCC — they pretty much already work (both are on Compiler Explorer), and it hopefully won’t be too long before they get merged into their respective compilers.

Many programming languages already provide some reflection capabilities, so some people may think that C++ is simply playing catch-up. However, C++’s reflection feature does differ in important ways. This blog post explores the existing reflection capabilities of Python, Java, C#, and Rust, comparing them with one another as well as with the flavour of reflection slated for C++26.

In the rest of this blog post, knowledge of C++ fundamentals is assumed (but not of the other programming languages, where non-obvious code will be explained).


IIFE for Complex Initialization -- Bartlomiej Filipek

What do you do when the code for a variable initialization is complicated? Do you move it to another method or write inside the current scope? Bartlomiej Filipek presents a trick that allows computing a value for a variable, even a const variable, with a compact notation.

IIFE for Complex Initialization

by Bartlomiej Filipek

In this article:

I hope you’re initializing most variables as const (so that the code is more explicit, and the compiler can reason better about the code and optimize).

For example, it’s easy to write:

const int myParam = inputParam * 10 + 5; 

or even:

const int myParam = bCondition ? inputParam*2 : inputParam + 10; 

But what about complex expressions, when we have to use several lines of code, or when the ?: operator is not sufficient?

‘It’s easy’ you say: you can wrap that initialization into a separate function.

While that’s the right answer in most cases, I’ve noticed that in reality a lot of people write the code in the current scope. That forces you to stop using const, and the code is a bit uglier.
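The trick itself looks like this: an immediately invoked lambda expression (IIFE) runs a multi-statement computation and hands its result to a const variable, all in one initializer. The digits helper below is a hypothetical example of mine, not from the article:

```cpp
#include <string>

// digits(n) builds "01...n-1"; the body needs a loop, so a plain
// single-expression initializer would not do, yet the result stays const.
inline std::string digits(int n) {
    const std::string result = [&] {
        std::string s;
        for (int i = 0; i < n; ++i)
            s += std::to_string(i);
        return s;
    }();  // the trailing () invokes the lambda immediately
    return result;
}
```

The lambda keeps the helper logic right at the point of use, while const still documents (and enforces) that the variable never changes afterwards.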

PVS-Studio 7.41: MISRA C 2023, enhanced Unreal Engine integration, new logging system

PVS-Studio 7.41 has been released. It brings improvements for Unreal Engine, support for MISRA C 2023, an update to the IntelliJ IDEA plugin, and other useful changes.

PVS-Studio 7.41: MISRA C 2023, enhanced Unreal Engine integration, new logging system, and much more

by Valerii Filatov

From the article:

Throughout the year, we have been working to cover more of the MISRA C 2023 standard. Currently, the PVS-Studio analyzer covers 86% of the standard. You can find more information on this page. In future releases, we will continue to expand MISRA C 2023 standard coverage.