August 2014

Near-final version of Effective Modern C++ available -- Scott Meyers

Scott's long-awaited book on using C++11 and C++14 is nearing completion:

Near-Final Draft of Effective Modern C++ Now Available (plus TOC and sample Item)

by Scott Meyers

From the announcement:

Effective Modern C++ is moving closer and closer to reality. This post contains:

  •     Information about availability of an almost-final draft of the book.
  •     The current (and probably final) table of contents.
  •     A link to the I-hope-I-got-it-right-this-time version of my Item on noexcept.

Note: Scott's session at CppCon ("Type Deduction and Why You Care") is based on the first chapter of Effective Modern C++.

CppCon Evening Panel Topics Confirmed -- Boris Kolpackov

Announced this morning, as part of the free-and-open-to-all portion of the CppCon program:

Evening Panel Topics Confirmed

by Boris Kolpackov 

From the announcement:

We have now confirmed details for the Monday [8:30pm], Wednesday [8:30pm], and Friday [2:00pm] panels:

Monday: "Meet the Authors"

Moderator: Chandler Carruth

Panelists: Ade Miller, Alex Allain, Kate Gregory, Pablo Halpern, Scott Meyers, Peter Sommerlad, Herb Sutter

Come to this panel to put your questions to many of the world’s top published C++ authors, and hear them discuss what they think is most important about C++ today. The CppCon 2014 program includes many of these authors, so we’re taking advantage of their being in town to bring them together in our opening panel for a discussion and Q&A session.

Wednesday: "Grill the Committee"

Moderator: Jon Kalb

Panelists: Chandler Carruth, Nevin Liber, Alisdair Meredith, Herb Sutter, Michael Wong

What would you like to know about how the C++ Standard happens? The panel is made up of members of the C++ Standards Committee and the audience asks what’s on their mind.

Friday: "Paying for Lunch: C++ in the ManyCore Age"

Moderator: Herb Sutter

Panelists: Jared Hoberock, Artur Laksberg, Ade Miller, Gor Nishanov, Michael Wong, Pablo Halpern

If you’re serious about efficient computation, from efficient battery-sipping apps on mobile devices to efficient use of compute cloud nodes, you need to know how to exploit the massive parallelism already available in all of today’s mainstream devices. Even small tablets and smartphones already contain multiple CPU/GPU cores and vector units. CppCon 2014 includes lots of talks about implementing such parallelism in C++ using existing products and techniques, and the standardization committee is actively working on standardizing several C++ extensions for concurrency and parallelism, including resumable functions, a Parallel STL, and transactional memory support. In this panel, we bring together several experts, including the primary authors of these products and standard specifications – in other words the who’s-who driving C++ parallelism forward – to discuss this topic across all devices and form factors, large and small.

G3log released, asynchronous logging with custom log sinks

[In addition to major product announcements, we're also interested in announcements like this of major releases of smaller and indie projects that our C++ readers might like to know about via our Products category. When you have a big release of your own project and would like to provide a writeup to let people know, feel free to suggest an article like this one. The "Suggest an Article" link appears in the top isocpp.org navbar when you are logged in; logins are free. --Ed.]

G3log is an asynchronous logger with support for adding custom-made logging sinks.

G3log is open source and cross-platform. It builds on g2log, the asynchronous logger released in 2011. The current release adds dynamic addition of logging sinks and brings significant performance improvements.

G3log features compelling functionality such as:

  • Logging and design-by-contract framework
  • LOG calls are asynchronous to avoid slowing down the calling thread
  • LOG calls are thread-safe
  • Queued LOG entries are flushed to the log sinks before exit, so no entries are lost at shutdown
  • Catching and logging of SIGSEGV and other fatal signals ensures a controlled shutdown
  • On Linux/OS X, a caught fatal signal will generate a stack dump to the log
  • G3log is cross-platform, currently in use on Windows, various Linux platforms, and OS X

G3log can be built with Visual Studio 2013, Clang, and GCC 4.7 or newer.
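
As a quick taste of the style of API involved, here is a minimal setup sketch. The names below (g3::LogWorker::createLogWorker, addDefaultLogger, initializeLogging) are assumptions based on the project's documentation, so check the Bitbucket repository for the exact interface of the current release.

    #include <g3log/g3log.hpp>       // header names may differ per release
    #include <g3log/logworker.hpp>

    int main(int argc, char* argv[]) {
        // Create the background worker and attach the default file sink.
        auto worker = g3::LogWorker::createLogWorker();
        auto handle = worker->addDefaultLogger(argv[0], "/tmp");

        // LOG calls are legal only after the logger is initialized.
        g3::initializeLogging(worker.get());

        LOG(INFO) << "Hello from g3log";  // asynchronous and thread-safe
        CHECK(argc > 0) << "design-by-contract: fatal if the condition fails";
    }   // queued entries are flushed to the sinks before shutdown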

Release information can be found at the author's blog. See also the project's page on Bitbucket.

CppCon Program Highlights, 14 of N: Parallel Computation on GPUs

The CppCon 2014 conference program has been posted for the upcoming September conference. We've received requests that the program continue to be posted in "bite-sized" posts, a few sessions at a time, to make the 100+ sessions easier to absorb, so here is another set of talks. This series of posts will conclude once the entire conference program has been posted in this way.

 

In addition to other performance-focused CppCon talks posted yesterday, CppCon 2014 also has thorough coverage of a very specific form of high performance parallel code -- using GPUs for general-purpose computation (aka GPGPU). This is important because every desktop machine and notebook, and nearly every tablet and smartphone, contains not only multiple CPU cores and vector units, but also a "compute-class" GPU -- these three forms of hardware parallelism are part of the mainstream hardware platform for the foreseeable future in all form factors. If you have a computationally intensive app or an app that could benefit from faster local processing, and you're not exploiting the GPU, you're leaving performance (and battery life) on the table and should be sure to attend these sessions.

In this post:

  • Writing Data Parallel Algorithms on GPUs
  • Another fundamental shift in Parallelism Paradigm? OpenMP 4.0 for GPU/Accelerators and other things
  • Introduction to C++ AMP (GPGPU Computing)

 

Writing Data Parallel Algorithms on GPUs

Today most PCs, tablets and phones support multi-core processors and most programmers have some familiarity with writing (task) parallel code. Many of those same devices also have GPUs but writing code to run on a GPU is harder. Or is it?

Getting to grips with GPU programming is really about understanding things in a data parallel way. This talk will look at some of the common patterns for implementing algorithms on today's GPUs using examples from the C++ AMP Algorithms Library. Along the way it will cover some of the unique aspects of writing code for GPUs and contrast them with more conventional code running on a CPU.
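
For a feel of what "thinking data parallel" means in code, here is a minimal element-wise sketch of our own using core C++ AMP (the talk itself draws its examples from the C++ AMP Algorithms Library): a lambda runs once per element of the extent, on an accelerator where available.

    #include <amp.h>
    #include <vector>

    // Add two vectors element-wise on the accelerator.
    void add(std::vector<float>& a, const std::vector<float>& b) {
        using namespace concurrency;
        const int n = static_cast<int>(a.size());
        array_view<float, 1> av(n, a);        // wraps host data for the GPU
        array_view<const float, 1> bv(n, b);
        parallel_for_each(av.extent, [=](index<1> i) restrict(amp) {
            av[i] += bv[i];                   // one logical thread per element
        });
        av.synchronize();                     // copy results back to 'a'
    }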

Speaker: Ade Miller. Ade Miller writes C++ for fun. He wrote his first N-body model in BASIC on an 8-bit microcomputer 30 years ago and never really looked back. Recently, he's written two books on parallel programming with C++: "C++ AMP: Accelerated Massive Parallelism with Microsoft Visual C++" and "Parallel Programming with Microsoft Visual C++". Ade spends the long winters in Washington contributing to the open source C++ AMP Algorithms Library as well as a few other projects. His summers are mostly spent crashing expensive bicycles into trees.

 

Another fundamental shift in Parallelism Paradigm? OpenMP 4.0 for GPU/Accelerators and other things

Another fundamental shift in Parallelism Paradigm? Sure. When was the last time you heard that before?

But seriously, as the number of threads and cores continues to increase, there is growing pressure on applications to exploit more of the available parallelism in their code, including coarse-, medium-, and fine-grained parallelism. OpenMP has been one of the dominant shared-memory programming models, but it is evolving beyond that with a new Mission Statement (no, really!), making it well suited for exploiting medium- and fine-grained parallelism.

OpenMP 4.0 exhibits many of these features to support the next step in consumer, high-performance, and exascale computing, with one of the world's first programming models offering high-level language support for GPUs/accelerators and vector SIMD across not one but three high-level languages: C++, C, and that language whose name we dare not speak, but which starts with F.
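
To make that concrete, here is a small saxpy sketch of our own (not from the talk) using the accelerator and SIMD directives new in OpenMP 4.0:

    // y = a*x + y, offloaded to an accelerator if one is available,
    // with the loop distributed across teams, threads, and SIMD lanes.
    void saxpy(float a, const float* x, float* y, int n) {
        #pragma omp target teams distribute parallel for simd \
            map(to: x[0:n]) map(tofrom: y[0:n])
        for (int i = 0; i < n; ++i)
            y[i] = a * x[i] + y[i];
    }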

Speaker: Michael Wong, OpenMP CEO/Architect, IBM/OpenMP. His interests include anything C++, transactional memory, parallel programming, OpenMP, stars, tennis, travel, and the best food.

 

Introduction to C++ AMP (GPGPU Computing)

Meet C++ AMP (Accelerated Massive Parallelism), an abstraction layer on top of accelerators such as GPUs. In its current version it allows you to run code on any DX11 GPU, independent of the vendor, and it will even distribute workload across GPUs of different vendors simultaneously. C++ AMP was originally designed by Microsoft but is now an open standard. C++ AMP can deliver orders of magnitude performance increase with certain algorithms by utilizing the GPU to perform mathematical calculations. This talk will give a high level overview of what C++ AMP is and what it can do for you. It is time to start taking advantage of the computing power of GPUs!
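
As a taste of the vendor independence described above, this short sketch (ours, not from the talk) enumerates every accelerator C++ AMP can target on the current machine:

    #include <amp.h>
    #include <iostream>

    int main() {
        using namespace concurrency;
        // List all C++ AMP accelerators: physical GPUs from any vendor,
        // plus software fallbacks such as WARP and the reference device.
        for (const accelerator& acc : accelerator::get_all()) {
            std::wcout << acc.description << L"\n";
        }
    }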

Speaker: Marc Gregoire, Nikon Metrology. Marc Gregoire worked for six years as a software engineering consultant for Siemens and Nokia Siemens Networks on critical 2G and 3G software running on Solaris for telecom operators. This required working in international teams stretching from South America and the USA to EMEA and Asia. Now, Marc is working for Nikon Metrology on 3D scanning software. Marc is the author of "Professional C++, Second and Third Edition", published by Wiley/Wrox, is the founder of the Belgian C++ Users Group (www.becpp.org), and has written a number of articles which have been published on CodeGuru and/or his personal blog. He also creates freeware and shareware programs that are distributed through his website at www.nuonsoft.com, and maintains a blog at www.nuonsoft.com/blog/.

Slashdot AMA responses from Bjarne Stroustrup

Hot off the presses, Stroustrup's answers to the most-upvoted questions (and a few more) from last week's AMA question thread.

Interviews: Bjarne Stroustrup Answers Your Questions 

Last week you had a chance to ask Bjarne Stroustrup about programming and C++. Below you'll find his answers to those questions. If you didn't get a chance to ask him a question, or want to clarify something he said, don't forget he's doing a live Google+ Q&A today at 12:30pm Eastern.

And don't forget that said live Q&A session starts in 20 minutes...

auto considered awesome -- Jarryd Beck

Today's op-ed from our Australian correspondent:

auto considered awesome

by Jarryd Beck

From the article:

 

My last post about why you might not want to use auto may have left some people thinking that I think you shouldn’t use it. In fact I think you should almost always use it...
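
A classic illustration of that argument (our example, not taken from the post): with an explicit type it is easy to write a declaration that compiles but does the wrong thing, while auto deduces the right type every time.

    #include <map>
    #include <string>

    void visit(const std::map<std::string, int>& m) {
        // Compiles, but subtly wrong: the element type is
        // std::pair<const std::string, int>, so each iteration
        // silently copies the element into a temporary.
        for (const std::pair<std::string, int>& p : m) { (void)p; }

        // With auto there is no type to get wrong, and no copies.
        for (const auto& p : m) { (void)p; }
    }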

 

CppCon Program Highlights, 13 of N: Performance, Performance, and Parallel Performance

The CppCon 2014 conference program has been posted for the upcoming September conference. We've received requests that the program continue to be posted in "bite-sized" posts, a few sessions at a time, to make the 100+ sessions easier to absorb, so here is another set of talks. This series of posts will conclude once the entire conference program has been posted in this way.

 

Many CppCon talks touch on efficiency and performance optimization, with good reason. Here are five specific talks that focus on this topic, from algorithmic efficiency to full-throttle performance on modern parallel hardware, and how support for the latest mainstream architectures is coming quickly to a standard near you. As always, these talks are by Those Who Know and Those Who Do -- several of these speakers are personally leading the work on high performance and parallelism at major C++ compiler companies and in the C++ standards process.

In this post:

  • Efficiency with Algorithms, Performance with Data Structures
  • It's About Time
  • Parallelism in the Standard C++: What to Expect in C++ 17
  • Overview of Parallel Programming in C++
  • Decomposing a Problem for Parallel Execution

 

Efficiency with Algorithms, Performance with Data Structures

Why do you write C++ code? There is a good chance it is in part because of concerns about the performance of your software. Whether they stem from needing to run on ever-smaller mobile devices, from squeezing the last few effects into a video game, or from the cost of every watt of power in your data center, C++ programmers throughout the industry have an insatiable desire for writing high performance code.

Unfortunately, even with C++, this can be really challenging. Over the past twenty years processors, memory, software libraries, and even compilers have radically changed what makes C++ code fast. Even measuring the performance of your code can be a daunting task. This talk will dig into how modern processors work, what makes them fast, and how to exploit them effectively with modern C++ code. It will teach you how modern C++ optimizers see your code today, and how that is likely to change in the coming years. It will teach you how to reason better about the performance of your code, and how to write your code so that it performs better. You will even learn some tricks about how to measure the performance of your code.
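
One concrete instance of the theme, sketched by us rather than taken from the talk: the same algorithm over contiguous and node-based storage can behave very differently on modern cache hierarchies, and only measurement will tell you how much.

    #include <list>
    #include <numeric>
    #include <vector>

    // Identical algorithm, very different memory behavior:
    long long sum(const std::vector<int>& v) {  // contiguous, prefetch-friendly
        return std::accumulate(v.begin(), v.end(), 0LL);
    }
    long long sum(const std::list<int>& l) {    // pointer chasing, cache-hostile
        return std::accumulate(l.begin(), l.end(), 0LL);
    }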

Speaker: Chandler Carruth, Google and Member of Board of Directors, Standard C++ Foundation. Chandler Carruth leads the Clang team at Google, building better diagnostics, tools, and more. Previously, he worked on several pieces of Google’s distributed build system. He makes guest appearances helping to maintain a few core C++ libraries across Google’s codebase, and is active in the LLVM and Clang open source communities. He received his M.S. and B.S. in Computer Science from Wake Forest University, but disavows all knowledge of the contents of his Master’s thesis. He is regularly found drinking Cherry Coke Zero in the daytime and pontificating over a single malt scotch in the evening.

 

It's About Time

This session will build up your mental model about time at the very small scale.

We'll examine progressively smaller units of time and what modern computers can accomplish in these units of time. We'll put these in perspective with human factors on these scales. For example, we'll consider how fast a computer must respond in order for a human to consider it "instantaneous."

Then we'll build up our mental model of how expensive it is for us to use specific data types, containers, and operations in our code.

We'll close by looking at why and how we should measure, always measure.
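
In that spirit, here is a minimal C++11 timing harness of our own; the point is to measure real code on real data rather than reason from folklore.

    #include <chrono>

    // Run 'work' once and return the elapsed wall-clock time in microseconds.
    template <typename F>
    long long microseconds_to_run(F&& work) {
        using clock = std::chrono::steady_clock;  // monotonic: right for intervals
        const auto start = clock::now();
        work();
        return std::chrono::duration_cast<std::chrono::microseconds>(
                   clock::now() - start).count();
    }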

Speaker: Jon Kalb. Jon has been programming in C++ for over twenty years. During the last two decades he has written C++ for Apple, Dow Chemical, Intuit, Lotus, Microsoft, Netscape, Sun, Yahoo! and some less well-known companies. He taught C++ in the graduate school at Golden Gate University for three years and is a founding moderator of the Boost-User and Boost-Interest mailing lists. Jon is active in the Silicon Valley chapter of the ACCU and programs the C++ track at the Silicon Valley Code Camp. Jon blogs at http://slashshlash.info, is @_JonKalb on Twitter, and is JonKalb on G+ and LinkedIn.

 

Parallelism in the Standard C++: What to Expect in C++ 17

It is 2014 and parallel programming has entered the mainstream. No longer is it the domain of a few highly trained experts. The tools available in C++ today make parallelism accessible - if not yet easy - to average developers.

However, writing efficient cross-platform parallel code in C++ is still hard. The standard constructs available in C++ 11/14 are too basic and too low-level. More advanced tools exist, but most are either vendor-specific or don't work on all platforms.

In this presentation, we'll talk about the joint effort spearheaded by several members of the ISO C++ Committee to bring parallelism into the C++ Standard Template Library. The project, known as the "Parallel STL", aims to bring multicore and SIMD parallelism into the next revision of the ISO C++ Standard.
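
The proposal's central idea is an execution-policy argument on the familiar standard algorithms. A sketch of the style (header, namespace, and policy names follow the proposal circulating in 2014 and may change before standardization):

    #include <experimental/algorithm>
    #include <experimental/execution_policy>
    #include <vector>

    void sort_fast(std::vector<int>& v) {
        using namespace std::experimental::parallel;
        sort(seq, v.begin(), v.end());  // sequential, as today
        sort(par, v.begin(), v.end());  // implementation may parallelize
    }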

Speaker: Artur Laksberg, Microsoft. Artur Laksberg leads the Visual C++ Libraries development team at Microsoft. His interests include concurrency, programming language and library design, and modern C++. Artur is one of the co-authors of the Parallel STL proposal; his team is now working on the prototype implementation of the proposal.

 

Overview of Parallel Programming in C++

Parallel programming was once considered to be the exclusive realm of weather forecasters and particle physicists working on multi-million dollar supercomputers, while the rest of us relied on chip manufacturers to crank out faster CPUs every year. That era has come to an end. Clock speedups have been largely replaced by having more CPUs on a chip. Your typical smartphone now has 2 to 4 cores and your typical laptop or tablet has 4 to 8 cores. Servers have dozens of cores and supercomputers have thousands of cores.

If you want to speed up a computation on modern hardware, you need to take advantage of the multiple cores available. This talk provides an overview of the parallelism landscape. We'll explore the what, why, and how of parallel programming, discuss the distinction between parallelism and concurrency and how they overlap, and learn about the problems that one runs into. We'll conclude with an overview of existing parallelism technologies in C++ and the future directions being considered for parallel programming in standard C++.
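
As a reminder of the baseline such an overview starts from, here is the kind of plain C++11 task parallelism (our sketch) that already runs on every mainstream compiler:

    #include <future>
    #include <numeric>
    #include <vector>

    // Sum a vector on two cores: one half on another thread, one half here.
    long long parallel_sum(const std::vector<int>& v) {
        const auto mid = v.begin() + v.size() / 2;
        auto lower = std::async(std::launch::async, [&] {
            return std::accumulate(v.begin(), mid, 0LL);
        });
        const long long upper = std::accumulate(mid, v.end(), 0LL);
        return lower.get() + upper;
    }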

 

Decomposing a Problem for Parallel Execution

So you want to speed up your computation using multicore parallel execution and you've picked a parallelism framework. What now? Parallelism frameworks give you the tools you need, but they don't actually parallelize the code; that's your job. To take advantage of parallel hardware, you must decompose your computation into tasks that can be computed in parallel. In this session, I'll present a real-world problem (the n-bodies problem) and guide you through several different ways in which it can be decomposed for parallel execution. We'll look at how to achieve scalability, resolve data races, and avoid negative multi-core cache effects. At the end of this session, you should have a conceptual understanding of parallel programming fundamentals that can be applied to a wide range of problems using a variety of frameworks.
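
To give a feel for the first step, here is a decomposition sketch of our own, using plain std::async rather than any of the frameworks discussed in the talk: split an index range into independent chunks, run each chunk as a task, then join.

    #include <algorithm>
    #include <cstddef>
    #include <future>
    #include <vector>

    // Apply body(i) for i in [0, n), decomposed into 'tasks' chunks.
    template <typename F>
    void parallel_for(std::size_t n, std::size_t tasks, F body) {
        std::vector<std::future<void>> futures;
        const std::size_t chunk = (n + tasks - 1) / tasks;
        for (std::size_t lo = 0; lo < n; lo += chunk) {
            const std::size_t hi = std::min(lo + chunk, n);
            futures.push_back(std::async(std::launch::async, [=] {
                for (std::size_t i = lo; i < hi; ++i) body(i);
            }));
        }
        for (auto& f : futures) f.get();  // join: all chunks are done here
    }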

Speaker: Pablo Halpern, Parallel Programming Languages Architect, Intel. Pablo Halpern has been programming in C++ since 1989 and has been a member of the C++ Standards Committee since 2007. He is currently the Parallel Programming Languages Architect at Intel Corp., where he coordinates the efforts of teams working on Cilk Plus, TBB, OpenMP, and other parallelism languages, frameworks, and tools targeted to C++, C, and Fortran users. Pablo came to Intel from Cilk Arts, Inc., which was acquired by Intel in 2009. During his time at Cilk Arts, he co-authored the paper "Reducers and other Cilk++ Hyperobjects", which won best paper at the SPAA 2009 conference. His current work is focused on creating simpler and more powerful parallel programming languages and tools for Intel's customers and promoting adoption of parallel constructs into the C++ and C standards. He lives with his family in southern New Hampshire, USA. When not working on parallel programming, he enjoys studying the viola, skiing, snowboarding, and watching opera. Twitter handle: @PabloGHalpern

A visitor’s guide to C++ allocators -- Thomas Köppe

The standard library allocators are one of the more mysterious parts of namespace std, as well as one of the more flexible parts. In this "under construction" article and GitHub repo, Thomas Köppe undertakes to demystify the feature.

A visitor’s guide to C++ allocators (repo)

by Thomas Köppe

From the README:

This repository contains a collection of documents that describe the allocator concept in the standard library of C++11 and beyond. The main guide covers the following topics.

  • Allocator traits
  • Statefulness
  • Fancy pointers
  • Allocator propagation in breadth (container copy, POC{CA,MA,S}) and depth (scoped_allocator_adaptor)

Start reading with the main guide.

Furthermore, there are several worked-out end-to-end examples:

The code for the end-to-end examples is available separately in the example_code directory.
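
For orientation before diving into the guide, here is roughly the smallest conforming stateless C++11 allocator (our sketch, not taken from the repository); std::allocator_traits supplies defaults for everything not written out, which is a large part of what the guide explains.

    #include <cstddef>
    #include <new>
    #include <vector>

    template <typename T>
    struct minimal_allocator {
        using value_type = T;

        minimal_allocator() = default;
        template <typename U>
        minimal_allocator(const minimal_allocator<U>&) {}  // rebind support

        T* allocate(std::size_t n) {
            return static_cast<T*>(::operator new(n * sizeof(T)));
        }
        void deallocate(T* p, std::size_t) { ::operator delete(p); }
    };

    // All instances are interchangeable, hence always equal (stateless).
    template <typename T, typename U>
    bool operator==(const minimal_allocator<T>&, const minimal_allocator<U>&) { return true; }
    template <typename T, typename U>
    bool operator!=(const minimal_allocator<T>&, const minimal_allocator<U>&) { return false; }

    // Usage: std::vector<int, minimal_allocator<int>> v{1, 2, 3};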