Video & On-Demand

CppCon 2016: Constant Fun--Dietmar Kühl

Have you registered for CppCon 2017 in September? Don’t delay – Registration is open now.

While we wait for this year’s event, we’re featuring videos of some of the 100+ talks from CppCon 2016 for you to enjoy. Here is today’s feature:

Constant Fun

by Dietmar Kühl

(watch on YouTube) (watch on Channel 9)

Summary of the talk:

This presentation discusses why it is useful to move some processing to compile time and shows some applications of doing so. In particular, it shows how to create associative containers at compile time and what is needed from the types involved to make that possible. The presentation also includes some analysis to estimate the costs in terms of compile time and object-file size.

Specifically, the presentation discusses:
- implications of static and dynamic initialization
- the C++ language rules for implementing constexpr functions and classes supporting constexpr objects
- differences in error handling with constant expressions
- sorting sequences at compile time and the needed infrastructure
- creating constant associative containers with compile-time and run-time look-up
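As a small illustration of the kind of compile-time machinery the talk covers (a minimal C++17 sketch, not code from the presentation; the table contents and the lookup() helper are invented), an associative look-up can be built as a constexpr table that is usable both at compile time and at run time:

    #include <array>
    #include <stdexcept>
    #include <string_view>

    // Entries of a tiny fixed-size "map" whose contents are known at compile time.
    struct Entry {
        std::string_view key;
        int value;
    };

    constexpr std::array<Entry, 3> table{{
        {"red", 1}, {"green", 2}, {"blue", 3}
    }};

    // A linear look-up usable in constant expressions.  Throwing during
    // constant evaluation turns a missing key into a compile-time error,
    // which illustrates the different error handling of constant expressions.
    constexpr int lookup(std::string_view key) {
        for (const auto& e : table)
            if (e.key == key) return e.value;
        throw std::out_of_range("unknown key");
    }

    static_assert(lookup("green") == 2);                        // resolved at compile time
    int at_runtime(std::string_view k) { return lookup(k); }    // same code at run time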

CppCon 2016: There and Back Again: An Incremental C++ Modules Design--Richard Smith

Have you registered for CppCon 2017 in September? Don’t delay – Registration is open now.

While we wait for this year’s event, we’re featuring videos of some of the 100+ talks from CppCon 2016 for you to enjoy. Here is today’s feature:

There and Back Again: An Incremental C++ Modules Design

by Richard Smith

(watch on YouTube) (watch on Channel 9)

Summary of the talk:

The Clang project has been working on Modules in one form or another for many years, starting with C and Objective-C. Today, we have a C++ compiler that can transparently use C++ Modules with existing C++ code, and we have deployed that at scale. However, this is very separate from the question of how to integrate a modular compilation model into the language itself. That is an issue that several groups working on C++ have been trying to tackle over the last few years.

Based on our experience deploying the core technology behind Modules, we have learned a tremendous amount about how they interact with existing code. This has informed the particular design we would like to see for C++ Modules, and it centers around incremental adoption. In essence, how do we take the C++ code we have today and migrate it to directly leverage C++ Modules in its very syntax, while still interacting cleanly with C++ code that will always and forever be stuck in a legacy mode without Modules?

In this talk we will present our ideas on how C++ Modules should be designed in order to interoperate seamlessly with existing patterns, libraries, and codebases. However, these are still early days for C++ Modules. We are all still experimenting and learning about what the best design is likely to be. Here, we simply want to present a possible and still very early design direction for this feature.
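As a rough sketch of that incremental story (this is not code from the talk; the syntax shown is the form that later standardized in C++20, and the file names are invented), a new module can wrap existing code while ordinary headers keep working alongside it:

    // math.cppm -- a module interface unit (hypothetical file name)
    module;                  // global module fragment: legacy headers go here
    #include <cmath>
    export module math;      // everything below belongs to the module

    export double hypotenuse(double a, double b) {
        return std::sqrt(a * a + b * b);
    }

    // main.cpp -- existing, non-modular code adopts the module with one import
    #include <iostream>      // ordinary includes still work next to imports
    import math;

    int main() {
        std::cout << hypotenuse(3.0, 4.0) << '\n';
    }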

CppCast Episode 110: Coroutines with Gor Nishanov

Episode 110 of CppCast, the only podcast for C++ developers by C++ developers. In this episode Rob and Jason are joined by Gor Nishanov to talk about the C++ Coroutines proposal.

CppCast Episode 110: Coroutines with Gor Nishanov

by Rob Irving and Jason Turner

About the interviewee:

Gor Nishanov is a Principal Software Design Engineer on the Microsoft C++ team. He works on the design and standardization of C++ Coroutines and on asynchronous programming models. Prior to joining the C++ team, Gor worked on distributed systems on the Windows Clustering team.

CppCon 2016: The C++17 Parallel Algorithms Library and Beyond--Bryce Adelstein Lelbach

Have you registered for CppCon 2017 in September? Don’t delay – Registration is open now.

While we wait for this year’s event, we’re featuring videos of some of the 100+ talks from CppCon 2016 for you to enjoy. Here is today’s feature:

The C++17 Parallel Algorithms Library and Beyond

by Bryce Adelstein Lelbach

(watch on YouTube) (watch on Channel 9)

Summary of the talk:

One of the major library features in C++17 is a parallel algorithms library (formerly the Parallelism Technical Specification v1). The parallel algorithms library has both parallel versions of the existing algorithms in the standard library and a handful of new algorithms inspired by common patterns from parallel programming (such as std::reduce() and std::transform_reduce()).

We’ll talk about what’s in the parallel algorithms library, and how to utilize it in your code today. We’ll also discuss some exciting future developments relating to the parallel algorithms library which are targeted for the second version of the Parallelism Technical Specification: executors and asynchronous parallel algorithms.
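As a brief illustration of what the library looks like in use (a minimal sketch, not code from the talk), a sequential accumulation can be turned into a parallel std::reduce() simply by passing an execution policy:

    #include <execution>
    #include <numeric>
    #include <vector>

    double sum(const std::vector<double>& v) {
        // std::reduce with std::execution::par may process chunks on several
        // threads; unlike std::accumulate it assumes the operation is
        // associative and commutative.
        return std::reduce(std::execution::par, v.begin(), v.end(), 0.0);
    }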

CppCon 2016: Make Friends with the Clang Static Analysis Tools--Gabor Horvath

Have you registered for CppCon 2017 in September? Don’t delay – Registration is open now.

While we wait for this year’s event, we’re featuring videos of some of the 100+ talks from CppCon 2016 for you to enjoy. Here is today’s feature:

Make Friends with the Clang Static Analysis Tools

by Gabor Horvath

(watch on YouTube) (watch on Channel 9)

Summary of the talk:

This talk is an overview of the open-source static analysis tools for C++, with an emphasis on Clang-based tools. While it is not intended to be a tutorial on how to develop such tools, I will cover the algorithms, methods, and interesting heuristics they utilize. Understanding these methods is useful because it helps us write more analysis-friendly code, understand the cause of false positive results, and appreciate the limitations of the currently available tools. I will also present some guidelines on how to make a library static-analysis friendly, to the benefit of clients interested in such tools, and give a short tutorial on how to use these tools and how to integrate them into the workflow.
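As a small, contrived example of the kind of path-sensitive bug these tools report (not code from the talk), running the Clang Static Analyzer, for instance via clang --analyze, over the following function flags the dereference on the branch where the pointer is null:

    // The analyzer tracks the branch where s == nullptr and reports that
    // control still reaches the dereference of s below.
    int length(const char* s) {
        int n = 0;
        if (s == nullptr)
            n = -1;               // s is known to be null on this path...
        while (*s != '\0') {      // ...so *s is a possible null dereference
            ++n;
            ++s;
        }
        return n;
    }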

CppCast Episode 109: CopperSpice with Barbara Geller and Ansel Sermersheim

Episode 109 of CppCast, the only podcast for C++ developers by C++ developers. In this episode Rob and Jason are joined by Barbara Geller and Ansel Sermersheim to talk about the CopperSpice C++ GUI Library.

CppCast Episode 109: CopperSpice with Barbara Geller and Ansel Sermersheim

by Rob Irving and Jason Turner

About the interviewees:

Barbara is an independent consultant who has worked as a programmer and software developer for over 25 years. She has been a featured speaker at more than a dozen trade shows and computer conferences in the US, and on two separate occasions she taught an extended class in software architecture and GUI design for the Panama Canal Commission in Panama.

Ansel has been working as a programmer for over 15 years. He spent 8 years at a communications company designing scalable, high-performance, multi-threaded network daemons in C++, and he is currently a software consultant for RealityShares in San Francisco.

CppCon 2016: C++ Coroutines: Under the covers--Gor Nishanov

Have you registered for CppCon 2017 in September? Don’t delay – Registration is open now.

While we wait for this year’s event, we’re featuring videos of some of the 100+ talks from CppCon 2016 for you to enjoy. Here is today’s feature:

C++ Coroutines: Under the covers

by Gor Nishanov

(watch on YouTube) (watch on Channel 9)

Summary of the talk:

Coroutines feel like magic. Functions that can suspend and resume in the middle of execution without blocking a thread! We will look under the covers to see what transformations compilers perform on coroutines, and what happens when a coroutine is started, suspended, resumed or cancelled. We will look at optimizations that can make a coroutine disappear into thin air.
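As a tiny illustration of the feature itself (a minimal sketch using the <coroutine> header as later standardized in C++20; the task type here is invented and deliberately bare-bones), a function becomes a coroutine the moment it uses co_await or co_return, and the compiler generates the frame, promise, and suspension machinery the talk dissects:

    #include <coroutine>
    #include <iostream>

    // A deliberately minimal coroutine return type: just enough promise_type
    // for the compiler to build the coroutine frame and run it eagerly.
    struct task {
        struct promise_type {
            task get_return_object() { return {}; }
            std::suspend_never initial_suspend() { return {}; }
            std::suspend_never final_suspend() noexcept { return {}; }
            void return_void() {}
            void unhandled_exception() {}
        };
    };

    task hello() {
        std::cout << "before the suspension point\n";
        co_await std::suspend_never{};   // a suspension point that never actually suspends
        std::cout << "after resumption\n";
    }

    int main() { hello(); }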

CppCon 2016: std::accumulate: Exploring an Algorithmic Empire--Ben Deane

Have you registered for CppCon 2017 in September? Don’t delay – Registration is open now.

While we wait for this year’s event, we’re featuring videos of some of the 100+ talks from CppCon 2016 for you to enjoy. Here is today’s feature:

std::accumulate: Exploring an Algorithmic Empire

by Ben Deane

(watch on YouTube) (watch on Channel 9)

Summary of the talk:

What is the most powerful algorithm in the STL? In the world? There are many cases to be made. But this talk explores what I think is a pretty good candidate, which C++ calls std::accumulate(). Tucked away in <numeric> and perhaps relatively unregarded when compared with workhorses like std::find_if() and std::partition(), std::accumulate() is nevertheless in some sense the ur-algorithm on sequences.

Let’s explore the result of looking at code through an accumulate-shaped lens, how tweaking the algorithm for better composability can unlock many more uses, and how it can be further genericized with applications to parallelism, tree structures, and heterogeneous sequences.

std::accumulate(): it’s not just for adding things up!
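As a quick illustration of that point (a minimal sketch, not code from the talk), std::accumulate() is really a general left fold, so it concatenates strings or computes a running maximum just as readily as it sums numbers:

    #include <algorithm>
    #include <numeric>
    #include <string>
    #include <vector>

    int main() {
        std::vector<std::string> words{"fold", "all", "the", "things"};
        // Folding with operator+ over strings: concatenation, not addition.
        std::string joined =
            std::accumulate(words.begin(), words.end(), std::string{});

        std::vector<int> v{3, 1, 4, 1, 5};
        // Folding with a custom binary operation: a running maximum.
        int max_value = std::accumulate(v.begin(), v.end(), v.front(),
                                        [](int a, int b) { return std::max(a, b); });
        (void)joined;
        (void)max_value;
    }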

CppCon 2016: Improving Performance Through Compiler Switches...--Tim Haines

Have you registered for CppCon 2017 in September? Don’t delay – Registration is open now.

While we wait for this year’s event, we’re featuring videos of some of the 100+ talks from CppCon 2016 for you to enjoy. Here is today’s feature:

Improving Performance Through Compiler Switches...

by Tim Haines

(watch on YouTube) (watch on Channel 9)

Summary of the talk:

Much attention has been given to what modern optimizing compilers can do with your code, but little is ever said as to how to make the compiler invoke these optimizations. Of course, the answer is compiler switches! But which ones are needed to generate the best code? How many switches does it take to get the best performance? How do different compilers compare when using the same set of switches? I explore all of these questions and more to shed light on the interplay between C++ compilers and modern hardware, drawing on my work in high-performance scientific computing.

Enabling modern optimizing compilers to exploit current-generation processor features is critical to success in this field. Yet, modernizing aging codebases to utilize these processor features is a daunting task that often results in non-portable code. Rather than relying on hand-tuned optimizations, I explore the ability of today's compilers to breathe new life into old code. In particular, I examine how industry-standard compilers like those from gcc, clang, and Intel perform when compiling operations common to scientific computing without any modifications to the source code. Specifically, I look at streaming data manipulations, reduction operations, compute-intensive loops, and selective array operations. By comparing the quality of the code generated and time to solution from these compilers with various optimization settings for several different C++ implementations, I am able to quantify the utility of each compiler switch in handling varying degrees of abstractions in C++ code. Finally, I measure the effects of these compiler settings on the up-and-coming industrial benchmark High Performance Conjugate Gradient that focuses more on the effects of the memory subsystem than current benchmarks like the traditional High Performance LinPACK suite.
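As a small illustration of the kind of code under discussion (a hedged sketch, not one of the kernels measured in the talk), a compute-intensive reduction like the following is exactly where switches matter; with gcc or clang, flags such as -O3 and -march=native, plus -ffast-math where its semantics are acceptable, typically allow the loop to be vectorized:

    #include <cstddef>

    // A floating-point reduction: vectorizing it requires the compiler to
    // reassociate the additions, which is why switches like -ffast-math
    // can change the generated code (and, slightly, the result).
    double dot(const double* a, const double* b, std::size_t n) {
        double sum = 0.0;
        for (std::size_t i = 0; i < n; ++i)
            sum += a[i] * b[i];
        return sum;
    }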

CppCast Episode 108: Teaching Concepts with Christopher Di Bella

Episode 108 of CppCast, the only podcast for C++ developers by C++ developers. In this episode Rob and Jason are joined by Christopher Di Bella to talk about his experience teaching C++ and his proposed changes to Concepts.

CppCast Episode 108: Teaching Concepts with Christopher Di Bella

by Rob Irving and Jason Turner

About the interviewee:

Christopher Di Bella will soon be a Runtime Technology Engineer at Codeplay, and was previously a university tutor (teaching assistant) for the course 'Advanced C++ Programming' at the University of New South Wales, Australia. He is an avid C++ programmer, and also enjoys film, board games, and snowboarding in his spare time.