Video & On-Demand

CppCon 2016: C++ Coroutines: Under the covers--Gor Nishanov

Have you registered for CppCon 2017 in September? Don’t delay – Registration is open now.

While we wait for this year’s event, we’re featuring videos of some of the 100+ talks from CppCon 2016 for you to enjoy. Here is today’s feature:

C++ Coroutines: Under the covers

by Gor Nishanov

(watch on YouTube) (watch on Channel 9)

Summary of the talk:

Coroutines feel like magic: functions that can suspend and resume in the middle of execution without blocking a thread! We will look under the covers to see what transformations compilers perform on coroutines, and what happens when a coroutine is started, suspended, resumed or cancelled. We will look at optimizations that can make a coroutine disappear into thin air.
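
To give a feel for what the compiler is working with, here is a minimal sketch (not from the talk) of a generator-style coroutine. The names are the ones that ended up in C++20's <coroutine>; at the time of the talk the same machinery lived in the Coroutines TS under std::experimental.

    #include <coroutine>
    #include <cstdio>
    #include <exception>
    #include <utility>

    // A minimal generator: just enough scaffolding to show the pieces the
    // compiler expects (a promise_type plus suspend points).
    struct generator {
        struct promise_type {
            int current = 0;
            generator get_return_object() {
                return generator{std::coroutine_handle<promise_type>::from_promise(*this)};
            }
            std::suspend_always initial_suspend() noexcept { return {}; }
            std::suspend_always final_suspend() noexcept { return {}; }
            std::suspend_always yield_value(int v) noexcept { current = v; return {}; }
            void return_void() noexcept {}
            void unhandled_exception() { std::terminate(); }
        };

        std::coroutine_handle<promise_type> handle;
        explicit generator(std::coroutine_handle<promise_type> h) : handle(h) {}
        generator(generator&& other) noexcept : handle(std::exchange(other.handle, {})) {}
        generator(const generator&) = delete;
        ~generator() { if (handle) handle.destroy(); }

        bool next() { handle.resume(); return !handle.done(); }
        int value() const { return handle.promise().current; }
    };

    // The compiler rewrites this body into a state machine: the locals move
    // into a coroutine frame and each co_yield becomes a suspend point.
    generator counter(int limit) {
        for (int i = 0; i < limit; ++i)
            co_yield i;
    }

    int main() {
        auto gen = counter(3);
        while (gen.next())
            std::printf("%d\n", gen.value());   // prints 0, 1, 2
    }

The optimization story in the talk is about when that coroutine frame allocation and the suspend/resume indirection can be elided entirely.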

CppCon 2016: std::accumulate: Exploring an Algorithmic Empire--Ben Deane

Have you registered for CppCon 2017 in September? Don’t delay – Registration is open now.

While we wait for this year’s event, we’re featuring videos of some of the 100+ talks from CppCon 2016 for you to enjoy. Here is today’s feature:

std::accumulate: Exploring an Algorithmic Empire

by Ben Deane

(watch on YouTube) (watch on Channel 9)

Summary of the talk:

What is the most powerful algorithm in the STL? In the world? There are many cases to be made. But this talk explores what I think is a pretty good candidate, which C++ calls std::accumulate(). Tucked away in <numeric>, perhaps relatively unregarded when compared with workhorses like std::find_if() and std::partition(); nevertheless, std::accumulate() is in some sense the ur-algorithm on sequences.

Let’s explore the result of looking at code through an accumulate-shaped lens, how tweaking the algorithm for better composability can unlock many more uses, and how it can be further genericized with applications to parallelism, tree structures, and heterogeneous sequences.

std::accumulate(): it’s not just for adding things up!
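
As a small taste of that accumulate-shaped lens, here is a sketch (not from the talk) using std::accumulate both as a plain sum and as a general left fold that builds a string:

    #include <iostream>
    #include <iterator>
    #include <numeric>
    #include <string>
    #include <vector>

    int main() {
        const std::vector<int> v{1, 2, 3, 4};

        // The familiar use: summation.
        const int sum = std::accumulate(v.begin(), v.end(), 0);

        // A general fold: any binary operation works, e.g. joining into a CSV string.
        const std::string csv = std::accumulate(
            std::next(v.begin()), v.end(), std::to_string(v.front()),
            [](std::string acc, int x) { return std::move(acc) + "," + std::to_string(x); });

        std::cout << sum << '\n' << csv << '\n';   // 10 and 1,2,3,4
    }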

CppCon 2016: Improving Performance Through Compiler Switches...--Tim Haines

Have you registered for CppCon 2017 in September? Don’t delay – Registration is open now.

While we wait for this year’s event, we’re featuring videos of some of the 100+ talks from CppCon 2016 for you to enjoy. Here is today’s feature:

Improving Performance Through Compiler Switches...

by Tim Haines

(watch on YouTube) (watch on Channel 9)

Summary of the talk:

Much attention has been given to what modern optimizing compilers can do with your code, but little is ever said as to how to make the compiler invoke these optimizations. Of course, the answer is compiler switches! But which ones are needed to generate the best code? How many switches does it take to get the best performance? How do different compilers compare when using the same set of switches? I explore all of these questions and more to shed light on the interplay between C++ compilers and modern hardware, drawing on my work in high-performance scientific computing.

Enabling modern optimizing compilers to exploit current-generation processor features is critical to success in this field. Yet, modernizing aging codebases to utilize these processor features is a daunting task that often results in non-portable code. Rather than relying on hand-tuned optimizations, I explore the ability of today's compilers to breathe new life into old code. In particular, I examine how industry-standard compilers like those from gcc, clang, and Intel perform when compiling operations common to scientific computing without any modifications to the source code. Specifically, I look at streaming data manipulations, reduction operations, compute-intensive loops, and selective array operations. By comparing the quality of the code generated and time to solution from these compilers with various optimization settings for several different C++ implementations, I am able to quantify the utility of each compiler switch in handling varying degrees of abstractions in C++ code. Finally, I measure the effects of these compiler settings on the up-and-coming industrial benchmark High Performance Conjugate Gradient that focuses more on the effects of the memory subsystem than current benchmarks like the traditional High Performance LinPACK suite.
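
The talk is about measurements rather than recipes, but as a rough illustration of the kind of experiment it describes, consider a simple streaming reduction compiled with a few widely used GCC/Clang switches (the flags below are common examples, not the talk's exact configurations):

    #include <cstddef>
    #include <iostream>
    #include <vector>

    // A streaming reduction of the kind the talk benchmarks. Whether the
    // compiler vectorizes this loop depends heavily on the switches used.
    double dot(const std::vector<double>& a, const std::vector<double>& b) {
        double sum = 0.0;
        for (std::size_t i = 0; i < a.size(); ++i)
            sum += a[i] * b[i];
        return sum;
    }

    // Illustrative invocations:
    //   g++     -O2 dot.cpp                              # baseline optimization
    //   g++     -O3 -march=native dot.cpp                # target the host CPU's vector units
    //   g++     -O3 -march=native -ffast-math dot.cpp    # relax FP rules so the reduction can vectorize
    //   clang++ -O3 -march=native -ffast-math dot.cpp
    int main() {
        const std::vector<double> a(1 << 20, 1.5), b(1 << 20, 2.0);
        std::cout << dot(a, b) << '\n';
    }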

CppCast Episode 108: Teaching Concepts with Christopher Di Bella

Episode 108 of CppCast, the only podcast for C++ developers by C++ developers. In this episode, Rob and Jason are joined by Christopher Di Bella to talk about his experience teaching C++ and his proposed changes to Concepts.

CppCast Episode 108: Teaching Concepts with Christopher Di Bella

by Rob Irving and Jason Turner

About the interviewee:

Christopher Di Bella will soon be a Runtime Technology Engineer at Codeplay, and was previously a university tutor (teaching assistant) for the course 'Advanced C++ Programming' at the University of New South Wales, Australia. He is an avid C++ programmer, and also enjoys film, board games, and snowboarding in his spare time.

5 years of Meeting C++

Meeting C++ has now existed for 5 years; let's celebrate on the blog:

5 years of Meeting C++

by Jens Weller

From the article:

Just a little bit more than 5 years ago, Meeting C++ went public. Since then, it has been a wild ride and a huge success. Today, Meeting C++ reaches over 50k in social media, and the conference itself has grown from 150 to 600 attendees over its 5 editions...

Meeting C++ live: C++17 with Tony van Eerd

A really great new episode with Tony van Eerd was live on YouTube; watch the recording:

Meeting C++ live: C++17 with Tony van Eerd

by Jens Weller & Tony van Eerd

Watch on YouTube:

CppCon 2016: C++14 Reflections Without Macros, Markup nor External Tooling..--Antony Polukhin

Have you registered for CppCon 2017 in September? Don’t delay – Registration is open now.

While we wait for this year’s event, we’re featuring videos of some of the 100+ talks from CppCon 2016 for you to enjoy. Here is today’s feature:

C++14 Reflections Without Macros, Markup nor External Tooling..

by Antony Polukhin

(watch on YouTube) (watch on Channel 9)

Summary of the talk:

C++ has lacked reflection for a long time. But a new metaprogramming trick was discovered recently: we can get some information about a POD structure by probing its braced initializers. Combining that trick with variadic templates, constexpr functions, implicit conversion operators, SFINAE, decltype and integral constants, we can count a structure's fields and even deduce the type of each field.

Now the best part: everything works without any of the additional markup or macros typically needed to implement reflection in C++.

In this talk I'll explain most of the tricks in detail, starting from a very basic implementation that is only capable of detecting the field count and ending with a fully functional prototype capable of dealing with nested PODs, const/volatile-qualified pointers, pointers-to-pointers and enum members. Highly useful use cases will be shown at the end of the talk. You may start experimenting right now using the implementation at https://github.com/apolukhin/magic_get.
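
To make the first step of the trick concrete, here is a minimal, hand-rolled sketch (an illustration only; the real library at the link above is far more complete): counting an aggregate's fields by probing braced initialization with a placeholder that converts to anything.

    #include <cstddef>
    #include <utility>

    // A placeholder that claims to convert to anything. It is only ever used
    // in an unevaluated context, so the conversion is never actually called.
    struct any_type {
        template <class T>
        operator T() const;
    };

    // Chosen (via SFINAE) only if T can be brace-initialized with
    // sizeof...(I) arguments; the field count is then sizeof...(I).
    template <class T, std::size_t... I>
    constexpr auto fields_count_impl(std::index_sequence<I...>, int)
        -> decltype(T{(static_cast<void>(I), any_type{})...}, std::size_t{}) {
        return sizeof...(I);
    }

    // Otherwise retry with one placeholder fewer.
    template <class T, std::size_t... I>
    constexpr std::size_t fields_count_impl(std::index_sequence<I...>, long) {
        return fields_count_impl<T>(std::make_index_sequence<sizeof...(I) - 1>{}, 0);
    }

    // Start from an upper bound: a POD cannot have more fields than bytes.
    template <class T>
    constexpr std::size_t fields_count() {
        return fields_count_impl<T>(std::make_index_sequence<sizeof(T)>{}, 0);
    }

    struct point { int x; int y; double z; };
    static_assert(fields_count<point>() == 3, "counted without any markup or macros");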

CppCon 2016: Embracing Standard C++ for the Windows Runtime--Kenny Kerr & James McNellis

Have you registered for CppCon 2017 in September? Don’t delay – Registration is open now.

While we wait for this year’s event, we’re featuring videos of some of the 100+ talks from CppCon 2016 for you to enjoy. Here is today’s feature:

Embracing Standard C++ for the Windows Runtime

by Kenny Kerr & James McNellis

(watch on YouTube) (watch on Channel 9)

Summary of the talk:

Believe it or not, avoiding language extensions and embracing modern C++ will make it easier for you to write code for Windows. The Universal Windows Platform in Windows 10 provides the ability for developers to write apps for many devices in many languages. To achieve this goal, it uses the Windows Runtime platform technology to expose functionality from the operating system into languages, including C++. Microsoft wants to make the Windows Runtime naturally and easily available to standard C++ developers. "C++/WinRT" (formerly moderncpp.com) is a standard C++ library and toolset currently under development at Microsoft. It includes a standalone compiler, which converts Windows Runtime metadata into a header-only library. The source code uses standard syntax consumable by any C++ compiler, making it easier for developers to use Windows Runtime APIs from C++.

We will begin this session with the goals of the "C++/WinRT" project. We'll look at the primitives of the Windows Runtime ABI and how this C++ library provides a natural projection of those primitives. We'll look at how C++11 and C++14 language features make it easier to encapsulate the COM infrastructure that underpins the Windows Runtime. Finally, we'll look at how we've optimized the implementation and discuss how a handful of compiler optimizations can make this C++ library efficient and effective for building a wide range of applications.
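
For flavor, consuming a Windows Runtime class through C++/WinRT looks roughly like the following sketch, in which Uri is a projected Windows Runtime type used as an ordinary C++ value (toolchain setup omitted):

    #include <winrt/Windows.Foundation.h>
    #include <cstdio>

    using namespace winrt;
    using namespace Windows::Foundation;

    int main() {
        // Initialize the COM apartment that underpins the Windows Runtime.
        init_apartment();

        // Uri is a Windows Runtime class, projected as a plain C++ value type.
        Uri uri{L"https://cppcon.org"};
        std::printf("%ls\n", uri.Domain().c_str());   // prints cppcon.org
    }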

C++ Weekly Episode 70: C++ IIFE in quick-bench.com--Jason Turner

Episode 70 of C++ Weekly.

C++ IIFE in quick-bench.com

by Jason Turner

About the show:

We are commonly taught to const everything that we can in C++. One way to accomplish this goal in the post-C++11 world is to use an immediately invoked lambda (the equivalent of an IIFE in the JavaScript world) that generates a value for us, which we then use to initialize a const variable. But what impact does this design decision have on the quality of the generated code and on performance? In this episode of C++ Weekly we use the new website quick-bench to test the various options available.
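
A quick sketch of the pattern (not the exact code benchmarked in the episode):

    #include <vector>

    int main() {
        // The lambda is invoked immediately, so 'squares' is const from the
        // moment it exists, yet its construction uses ordinary imperative code.
        const std::vector<int> squares = [] {
            std::vector<int> v;
            v.reserve(10);
            for (int i = 0; i < 10; ++i)
                v.push_back(i * i);
            return v;
        }();

        return static_cast<int>(squares.size());
    }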

CppCon 2016: Deploying C++ modules to 100s of millions of lines of code--Manuel Klimek

Have you registered for CppCon 2017 in September? Don’t delay – Registration is open now.

While we wait for this year’s event, we’re featuring videos of some of the 100+ talks from CppCon 2016 for you to enjoy. Here is today’s feature:

Deploying C++ modules to 100s of millions of lines of code

by Manuel Klimek

(watch on YouTube) (watch on Channel 9)

Summary of the talk:

Compile times are a pain point for C++ programmers all over the world. Google is no exception. We have a single unified codebase with hundreds of millions of lines of C++ code, all of it built from source. As the size of the codebase and the depth of interrelated interfaces exposed through textually included headers grew, the scaling of compiles became a critical issue.

Years ago we started working to build technology in the Clang compiler that could help scale builds more effectively than textual inclusion. This is the core of C++ Modules: moving away from the model of textual inclusion. We also started preparing our codebase to migrate to this technology en masse, and through a highly automated process. It's been a long time and a tremendous effort, but we'd like to share where we are as well as what comes next.

In this talk, we will outline the core C++ Modules technology in Clang. This is just raw technology at this stage, not an integrated part of the C++ programming language. That part is being worked on by a large group of people in the ISO C++ standards committee. But we want to share how Google is using this raw technology internally to make today's C++ compiles faster, what it took to get there, and how you too can take advantage of these features. We will cover everything from the details of migrating a codebase of this size to use a novel compilation model to the ramifications for both local and distributed build systems. We hope to give insight into the kinds of benefits that technology like C++ Modules can bring to a large scale C++ development environment.
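
For readers who want to experiment with the raw technology themselves, the usual entry point to Clang's implementation is a module map plus the -fmodules flag. The sketch below is a minimal illustration (mylib.h and mylib_answer are hypothetical names, and exact flags vary by Clang version), not Google's build setup:

    // module.modulemap, placed next to the headers (syntax per Clang's
    // Modules documentation):
    //
    //   module mylib {
    //     header "mylib.h"
    //     export *
    //   }
    //
    // Build with modules enabled, e.g.:
    //   clang++ -std=c++14 -fmodules main.cpp    # some Clang versions also want -fcxx-modules
    //
    // With the module map present, Clang satisfies the #include below from a
    // compiled module rather than by textual inclusion.
    #include "mylib.h"

    int main() {
        return mylib_answer();   // hypothetical function declared in mylib.h
    }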