The CppCon 2015 conference program has been posted for the upcoming September conference. We’ve received requests that the program continue to be posted in “bite-sized” posts, a few sessions at a time, to make the 100+ sessions easier to absorb, so here is another set of talks. This series of posts will conclude once the entire conference program has been posted in this way.
C++ is also a language that we never stop discovering and mastering. The following interrelated CppCon 2015 talks explore these themes and more (part 1).
In this post:
- Tuning C++: Benchmarks, and Compilers, and CPUs! Oh My!
- Value Semantics: It ain't about the syntax!, Part I and Part II
- Five amazing things you couldn't imagine C++ can do
- C++ Atomics: The Sad Story of memory_order_consume: A Happy Ending at Last?
- A C++14 approach to dates and times
Tuning C++: Benchmarks, and Compilers, and CPUs! Oh My! by Chandler Carruth, C++ Language and Compiler Lead, Google
A primary use case for C++ is low latency, low overhead, high performance code. But C++ does not give you these things for free; it gives you the tools to control them and achieve them where needed. How do you realize this potential of the language? How do you tune your C++ code and achieve the necessary performance metrics?
This talk will walk through the process of tuning C++ code from benchmarking to performance analysis. It will focus on small scale performance problems ranging from loop kernels to data structures and algorithms. It will show you how to write benchmarks that effectively measure different aspects of performance even in the face of advanced compiler optimizations and bedeviling modern CPUs. It will also show how to analyze the performance of your benchmark, understand its behavior as well as the CPU's behavior, and use a wide array of tools available to isolate and pinpoint performance problems. The tools and some processor details will be Linux and x86 specific, but the techniques and concepts should be broadly applicable.
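One well-known family of tricks for keeping the optimizer honest in microbenchmarks is the empty inline-assembly "barrier"; the sketch below shows the general idea. The helper names escape and clobber are illustrative, and the inline assembly is GCC/Clang specific.

```cpp
#include <vector>

// Optimization barriers of the kind microbenchmarks often rely on
// (GCC/Clang extended inline assembly; names are illustrative).
static void escape(void* p) {
  // Pretend the pointed-to memory is used by something the optimizer
  // cannot see, so the work that produced it is not elided.
  asm volatile("" : : "g"(p) : "memory");
}

static void clobber() {
  // Pretend all of memory is read and written, forcing stores to happen.
  asm volatile("" : : : "memory");
}

int main() {
  for (int i = 0; i < 1000000; ++i) {
    std::vector<int> v;
    v.reserve(1);        // the work we actually want to measure
    escape(v.data());    // keep the compiler from optimizing it away
  }
  clobber();
}
```

In a real benchmark the loop would be driven and timed by a framework; the barriers are only there to stop the measured work from being optimized out.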
Value Semantics: It ain't about the syntax!, Part I and Part II by John Lakos, Software Infrastructure Manager, Bloomberg LP
When people talk about a type as having *value semantics*, they are often thinking about its ability to be passed to (or returned from) a function by value. In order to do that, the C++ language requires that the type implement a copy constructor, and so people routinely implement copy constructors on their classes, which begs the question, "Should an object of that type be copyable at all?" If so, what should be true about the copy? Should it have the same state as the original object? Same behavior? What does copying an object mean?!

By *value type*, most people assume that the type is specifically intended to represent a member of some set (of values). A *value-semantic type*, however, is one that strives to approximate an abstract *mathematical* type (e.g., integer, character set, complex-number sequence), which comprises operations as well as values. When we copy an object of a value-semantic type, the new object might not have the same state, or even the same behavior, as the original object; for proper value-semantic types, however, the new object will have the same *value*.

In this talk, we begin by gaining an intuitive feel for what we mean by *value* by identifying *salient attributes*, i.e., those that contribute to value, and by contrasting types whose objects naturally represent values with those that don't. After quickly reviewing the syntactic properties common to typical value types, we dive into the much deeper issues that *value semantics* entail. In particular, we explore the subtle *Essential Property of Value*, which applies to every *salient* mutating operation on a value-semantic object, and then profitably apply this property to realize a correct design for each of a variety of increasingly interesting (value-semantic) classes.
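As a small illustration of "same value, not necessarily the same state" (ours, not taken from the talk), consider std::vector, whose capacity is not a salient attribute and so need not survive a copy:

```cpp
#include <cassert>
#include <iostream>
#include <vector>

int main() {
  std::vector<int> original;
  original.reserve(100);            // internal state: capacity of 100
  original.assign({1, 2, 3});

  std::vector<int> copy = original; // copies the *value*, i.e. the elements

  assert(copy == original);         // same value: equality compares elements only
  std::cout << original.capacity() << " vs " << copy.capacity() << '\n';
  // Typically prints "100 vs 3": capacity is not a salient attribute,
  // so the copy need not reproduce it -- same value, different state.
}
```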
Five amazing things you couldn't imagine C++ can do by Sasha Goldshtein
Modern C++ is incredibly flexible. It's truly the multi-paradigm language Bjarne Stroustrup envisioned. In this talk, we will explore five functions or small libraries that make use of modern C++'s amazing features. Among them: compile-time- and runtime-safe printf, automatic recursive serialization of containers, a simple but complete parallelism framework using only portable C++, and two others to keep you motivated.
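As a taste of the kind of machinery involved, here is a toy variadic-template "safe printf" sketch; it is purely illustrative and unrelated to the actual libraries presented in the talk:

```cpp
#include <iostream>
#include <stdexcept>
#include <string>
#include <utility>

// Toy type-safe "printf": each '%' in the format consumes the next argument,
// and the compiler rejects any argument type that is not streamable.
void safe_printf(std::ostream& os, const char* fmt) {
  os << fmt;
}

template <typename T, typename... Rest>
void safe_printf(std::ostream& os, const char* fmt, const T& value, Rest&&... rest) {
  for (; *fmt; ++fmt) {
    if (*fmt == '%') {
      os << value;   // formatting chosen from the argument's static type
      return safe_printf(os, fmt + 1, std::forward<Rest>(rest)...);
    }
    os << *fmt;
  }
  throw std::runtime_error("too many arguments for format string");
}

int main() {
  safe_printf(std::cout, "point (%, %) has label %\n", 1.5, 2.5, std::string("A"));
}
```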
C++ Atomics: The Sad Story of memory_order_consume: A Happy Ending at Last? by Paul McKenney, Distinguished Engineer, IBM Linux Technology Center
One of the big advantages of C++ atomics is that developers can now let the compiler know about the intent behind their multi-threaded synchronization mechanisms. At least they can tell the compiler as long as the synchronization mechanism in question is not RCU. You see, all production compilers promote RCU's memory_order_consume to memory_order_acquire. Although this promotion does ensure correctness, it also ensures the additional overhead of memory-barrier instructions on weakly ordered systems and of needlessly suppressed compiler optimizations on all systems.
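For context, a consume load looks like this in source code; as the abstract notes, today's compilers treat it as an acquire load. The sketch is illustrative, not code from the talk or from the kernel patches it discusses.

```cpp
#include <atomic>

struct Node {
  int value;
};

std::atomic<Node*> published{nullptr};

// Writer: initialize the node, then publish it with release semantics.
void publish(Node* n) {
  published.store(n, std::memory_order_release);
}

// Reader (RCU-style): a consume load is meant to order only the accesses
// that are data-dependent on the loaded pointer, which is cheaper than
// acquire on weakly ordered hardware. Production compilers currently
// promote it to memory_order_acquire, as described above.
int read_value() {
  Node* n = published.load(std::memory_order_consume);
  return n ? n->value : -1;   // the dereference carries the data dependency
}
```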
All previous attempts to resolve this issue have foundered on standard-committee reluctance to eviscerate the standard for a special case, compiler-writer reluctance to eviscerate their compilers for a special case, or kernel-developer reluctance to eviscerate their source base for late-to-the-party compiler support.
But now there is a glimmer of hope in the guise of a small set of small patches to the Linux kernel that eliminate the most challenging use cases. Will this hope be realized? Come to this talk to hear the story, which by September will hopefully have a happy ending!
A C++14 approach to dates and times by Howard Hinnant, Senior Software Engineer, Ripple Labs
A new date and date/time library designed for C++14 is presented. This library stresses ease of use, easy-to-read code, catching common errors at compile time, and uncompromising run-time performance.
The design starts with the C++11 std::chrono library and extends it into the realm of calendars, giving a seamless experience built upon chrono::system_clock::time_point and the durations you already know, such as chrono::hours and chrono::nanoseconds. Functionality that allows easy and efficient conversions between the std::chrono types and year/month/day and hh:mm:ss data structures is presented.
When dates (and times) are known at compile-time (e.g. leap second transitions), all computations are available at compile time (constexpr). When only parts of a date are known at compile time, run-time efficiencies are still gained by compile-time computing parts of the date.
The syntax of the library is built around a few easy-to-learn rules, and strictly checked at compile time. This makes it easy to learn, and very forgiving for the novice.
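The library presented here later formed the basis of the calendar additions standardized in C++20's &lt;chrono&gt;; the following sketch uses those standardized names (compiled as C++20) as a stand-in for the 2015 library's own syntax:

```cpp
#include <chrono>
#include <iostream>

int main() {
  using namespace std::chrono;

  // Calendar dates known at compile time can be constexpr and are
  // validity-checked without any run-time cost.
  constexpr year_month_day leap_day{year{2016}, month{2}, day{29}};
  static_assert(leap_day.ok(), "valid calendar date");

  // Seamless round trip: calendar date -> day-precision system_clock
  // time_point (sys_days) -> finer-grained time_point.
  constexpr sys_days as_days = leap_day;
  auto noon = as_days + hours{12};

  // And back: truncate to days, then re-form the year/month/day fields.
  year_month_day round_trip{floor<days>(noon)};
  std::cout << static_cast<int>(round_trip.year()) << '\n';   // prints 2016
}
```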