performance

CppCon 2014 Viewing The World Through Array-Shaped Glasses--Łukasz Mendakiewicz

While we wait for CppCon 2015 in September, we’re featuring videos of some of the 100+ talks from CppCon 2014. Here is today’s feature:

Viewing The World Through Array-Shaped Glasses

by Łukasz Mendakiewicz

(watch on YouTube) (watch on Channel 9)

Summary of the talk:

It's agreed among experts that the most performant data structure in C++ is an array. Or a vector. Or a dynarray. Indeed, until recently there was no standardized approach in C++ to view these types in a uniform manner. It was even murkier when the data had logically more than one dimension. This talk is an introduction to the new features proposed for C++17 in N3851 [TBD: update after Rapperswil], bringing all contiguous data into harmony and lifting it to higher dimensions: index, bounds, array_view and more. Attendees will also learn how indexable algorithms differ from the traditional elemental ones, and what that means for parallelism.
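The proposed interface was still in flux when the talk was given, but the central idea, a non-owning multi-dimensional view over contiguous storage, is easy to sketch by hand. Here is a minimal illustration; the names are ours, not the N3851 API:

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

// Hand-rolled sketch of the idea behind N3851: a non-owning 2D view
// over contiguous data, addressed through a (row, col) pair. The
// names are illustrative, not the proposed standard interface.
template <class T>
struct view_2d {
    T* data;
    std::size_t rows, cols;                // the "bounds" of the view
    T& operator()(std::size_t r, std::size_t c) const {
        return data[r * cols + c];         // row-major linearization
    }
};

int main() {
    std::vector<int> storage(6);           // one contiguous buffer...
    view_2d<int> m{storage.data(), 2, 3};  // ...seen as a 2x3 matrix
    for (std::size_t r = 0; r < m.rows; ++r)
        for (std::size_t c = 0; c < m.cols; ++c)
            m(r, c) = static_cast<int>(r * 10 + c);
    std::printf("%d\n", m(1, 2));          // prints 12
}
```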

CppCon 2014 Metaprogramming with Boost.Hana: Unifying Boost.Fusion and Boost.MPL--Louis Dionne

While we wait for CppCon 2015 in September, we’re featuring videos of some of the 100+ talks from CppCon 2014. Here is today’s feature:

Metaprogramming with Boost.Hana: Unifying Boost.Fusion and Boost.MPL

by Louis Dionne

(watch on YouTube) (watch on Channel 9)

Summary of the talk:

Template metaprogramming sucks. No, seriously; you might like the imposed purely functional paradigm, but not the templates themselves. While C++11 has made our life easier, even simple metaprograms are often hard to write, impossible to maintain and slow to compile; we need better abstractions. In this talk, I will present Boost.Hana[1], an experimental C++14 library for heterogeneous computation. The library takes metaprogramming to a whole new level of expressiveness by unifying the well-known Boost.MPL and Boost.Fusion libraries under a single generic, purely functional interface. The library incorporates some of the most recent advances in C++ metaprogramming; I will give an overview of the most interesting implementation techniques used internally. Finally, I will show concrete ways to use the library so you, as a developer, can write less template black magic, increase your productivity and spend less time in coffee breaks waiting for the compiler (sorry).
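For a taste of the style the talk describes, here is a small example (assuming Boost.Hana and a C++14 compiler are available): the same value-syntax interface covers both a Fusion-style heterogeneous sequence and an MPL-style type-level computation.

```cpp
#include <boost/hana.hpp>
#include <cstdio>
#include <string>
namespace hana = boost::hana;

int main() {
    // Fusion-style: one generic algorithm over a heterogeneous sequence.
    auto xs = hana::make_tuple(1, 2.5, std::string{"three"});
    hana::for_each(xs, [](auto const& x) {
        // each call sees a different static type
        std::printf("element size: %zu\n", sizeof(x));
    });

    // MPL-style: compute with types by wrapping them as values.
    auto types = hana::tuple_t<int, char, double>;
    auto sizes = hana::transform(types, [](auto t) { return hana::sizeof_(t); });
    static_assert(hana::front(sizes) == hana::size_c<sizeof(int)>, "");
}
```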

CppCon 2014 ...Scaling Visualization in concurrent C++ programs--Fedor G Pikus

While we wait for CppCon 2015 in September, we’re featuring videos of some of the 100+ talks from CppCon 2014. Here is today’s feature:

...Scaling Visualization in concurrent C++ programs

by Fedor G Pikus

(watch on YouTube) (watch on Channel 9)

Summary of the talk:

High performance is one of the main reasons programmers choose C++ for their applications. If you are writing in C++, odds are you need every bit of computing power your hardware can provide. Today, this means writing multi-threaded programs to effectively utilize the multiple CPU cores that the hardware manufacturers keep adding. Everyone knows that writing multi-threaded programs is hard. Writing correct multi-threaded programs is even harder. Only after spending countless hours debugging race conditions and weird intermittent bugs do many programmers learn that writing efficient multi-threaded programs is harder yet. Have you ever wanted to see what all your threads are doing when they should be busy computing? This talk will show you how.

We begin by studying several techniques necessary for collecting large amounts of data from the running program very efficiently, with little overhead and minimal disruption to the program itself. We will cover efficient thread-safe memory management and efficient thread-safe disk I/O. Along the way we dabble in lock-free programming just enough to meet our needs, lest the subject spiral into an hour-long talk of its own. With all these techniques put together, we can collect information about what each thread is doing, which threads are computing and what exactly, and which threads are slacking off waiting on locks, and do it at a time scale of tens of microseconds if necessary. Then we process the collected data and create a timeline that shows exactly what the program was doing at every moment in time.
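The talk walks through the real machinery; as a rough sketch of the "minimal disruption" idea (illustrative names, not the talk's actual profiler), each thread can append timestamped records to its own preallocated buffer so the hot path never takes a lock:

```cpp
#include <chrono>
#include <cstdint>
#include <vector>

// Rough sketch of low-overhead event collection: each thread appends
// timestamped records to its own buffer, so recording takes no lock.
struct event {
    std::uint64_t timestamp_ns;
    std::uint32_t tag;  // e.g. "entered section X", "acquired lock Y"
};

class thread_recorder {
    std::vector<event> buf_;
public:
    thread_recorder() { buf_.reserve(1 << 20); }  // preallocate: no malloc while recording
    void record(std::uint32_t tag) {
        auto now = std::chrono::steady_clock::now().time_since_epoch();
        buf_.push_back({static_cast<std::uint64_t>(
            std::chrono::duration_cast<std::chrono::nanoseconds>(now).count()), tag});
    }
    std::vector<event> const& events() const { return buf_; }
};

// One recorder per thread; after the run, the buffers are merged and
// sorted by timestamp to build the timeline the talk describes.
thread_local thread_recorder g_recorder;
```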

Vector Hosted Lists--Thomas Young

Want performance and speed? Vectors are the solution:

Vector Hosted Lists

by Thomas Young

From the article:

Vectors are great when adding or removing elements at the end of a sequence, but not so hot when deleting elements at arbitrary positions.

If that's a requirement, you might find yourself reaching for a pointer-based list.

Not so fast!

Memory locality is important, contiguous buffers are a really good thing, and a standard vector will often out-perform pointer-based lists even where you perform non-contiguous, list-style modifications such as arbitrary element deletion.

And we can 'host' a list within a vector to get the advantages of a contiguous buffer at the same time as O(1) complexity for these kinds of manipulations...
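The article develops this in full; as a minimal sketch of the idea (illustrative only, the article's implementation differs in detail), nodes live in one vector and link by index, with erased slots recycled through a free list:

```cpp
#include <cstddef>
#include <vector>

// A doubly linked list whose nodes live in a std::vector and link by
// index: deletion is O(1) unlinking, while the data stays contiguous.
template <class T>
class vector_hosted_list {
    static constexpr std::size_t npos = static_cast<std::size_t>(-1);
    struct node { T value; std::size_t prev, next; };
    std::vector<node> nodes_;
    std::size_t head_ = npos, tail_ = npos, free_ = npos;
public:
    std::size_t push_back(T value) {
        std::size_t i;
        if (free_ != npos) {                  // reuse an erased slot
            i = free_;
            free_ = nodes_[i].next;
            nodes_[i] = {std::move(value), tail_, npos};
        } else {
            i = nodes_.size();
            nodes_.push_back({std::move(value), tail_, npos});
        }
        if (tail_ != npos) nodes_[tail_].next = i; else head_ = i;
        tail_ = i;
        return i;                             // the index is a stable handle
    }
    void erase(std::size_t i) {               // O(1): unlink, then free-list
        node& n = nodes_[i];
        if (n.prev != npos) nodes_[n.prev].next = n.next; else head_ = n.next;
        if (n.next != npos) nodes_[n.next].prev = n.prev; else tail_ = n.prev;
        n.next = free_;
        free_ = i;
    }
    T& operator[](std::size_t i) { return nodes_[i].value; }
    std::size_t first() const { return head_; }
    std::size_t next(std::size_t i) const { return nodes_[i].next; }
};
```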

CppCon 2014 Overview of Parallel Programming in C++--Pablo Halpern

While we wait for CppCon 2015 in September, we’re featuring videos of some of the 100+ talks from CppCon 2014. Here is today’s feature:

Overview of Parallel Programming in C++

by Pablo Halpern

(watch on YouTube) (watch on Channel 9)

Summary of the talk:

Parallel programming was once considered to be the exclusive realm of weather forecasters and particle physicists working on multi-million dollar supercomputers while the rest of us relied on chip manufacturers to crank out faster CPUs every year. That era has come to an end. Clock speedups have been largely replaced by having more CPUs on a chip. Your typical smart phone now has 2 to 4 cores and your typical laptop or tablet has 4 to 8 cores. Servers have dozens of cores and supercomputers have thousands of cores.

If you want to speed up a computation on modern hardware, you need to take advantage of the multiple cores available. This talk provides an overview of the parallelism landscape. We'll explore the what, why, and how of parallel programming, discuss the distinction between parallelism and concurrency and how they overlap, and learn about the problems that one runs into. We'll conclude with an overview of existing parallelism technologies in C++ and the future directions being considered for parallel programming in standard C++.
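As a tiny taste of the task parallelism the talk surveys, using nothing beyond the standard library of the time:

```cpp
#include <cstdio>
#include <future>
#include <numeric>
#include <vector>

// Split a sum across two tasks that may run on separate cores.
int main() {
    std::vector<int> v(1000000, 1);
    auto mid = v.begin() + v.size() / 2;

    // Sum the first half on another thread while this one does the rest.
    auto lo = std::async(std::launch::async,
                         [&] { return std::accumulate(v.begin(), mid, 0LL); });
    long long hi = std::accumulate(mid, v.end(), 0LL);

    std::printf("total: %lld\n", lo.get() + hi);  // prints "total: 1000000"
}
```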

Ranges and Iterators for numerical problems

A guest blog post by Karsten Ahnert at Meeting C++ as a follow-up to his talk!

Ranges and Iterators for numerical problems

by Karsten Ahnert

From the article:

In this blog post I am going to show some ideas for how one can implement numerical algorithms with ranges. Examples are the classical Newton algorithm for finding the root of a function and ordinary differential equations. The main idea is to put the main loop of the algorithm into a range such that the user can...
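To make the idea concrete, here is a hypothetical sketch (our names, not the article's code) of Newton's method with the loop state hidden behind an iterator, so stepping the algorithm is just ++:

```cpp
#include <cstdio>

// An iterator whose ++ applies one Newton step:
//   x_{n+1} = x_n - f(x_n) / f'(x_n)
template <class F, class DF>
struct newton_iterator {
    F f; DF df; double x;
    double operator*() const { return x; }
    newton_iterator& operator++() { x -= f(x) / df(x); return *this; }
};

int main() {
    // Find sqrt(2) as the root of f(x) = x^2 - 2, starting from x = 1.
    auto f  = [](double x) { return x * x - 2.0; };
    auto df = [](double x) { return 2.0 * x; };
    newton_iterator<decltype(f), decltype(df)> it{f, df, 1.0};
    for (int i = 0; i < 6; ++i, ++it)
        std::printf("x_%d = %.12f\n", i, *it);
}
```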

the asynchronous library - Christophe Henry @ Meeting C++ 2014

A new video from Meeting C++ 2014:

the asynchronous library

by Christophe Henry

From the talk description:

An infrastructure library on which Boost Meta State Machine can build. This will be provided by the Asynchronous library: Active Objects, proxies, threadpools, parallelization algorithms, work-stealing, distributed programming...
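The talk covers the library itself; as background, here is a minimal sketch of the Active Object pattern it builds on (a generic illustration, not the Asynchronous library's API): callers enqueue work, and a single worker thread executes it.

```cpp
#include <condition_variable>
#include <cstdio>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

// Active Object: work is posted to a queue and run on one worker thread.
class active_object {
    std::queue<std::function<void()>> queue_;
    std::mutex m_;
    std::condition_variable cv_;
    bool done_ = false;
    std::thread worker_;
public:
    active_object() : worker_([this] { run(); }) {}
    ~active_object() {
        { std::lock_guard<std::mutex> lk(m_); done_ = true; }
        cv_.notify_one();
        worker_.join();              // drains remaining work, then exits
    }
    void post(std::function<void()> f) {
        { std::lock_guard<std::mutex> lk(m_); queue_.push(std::move(f)); }
        cv_.notify_one();
    }
private:
    void run() {
        for (;;) {
            std::function<void()> f;
            {
                std::unique_lock<std::mutex> lk(m_);
                cv_.wait(lk, [this] { return done_ || !queue_.empty(); });
                if (queue_.empty()) return;   // done_ is set and queue drained
                f = std::move(queue_.front());
                queue_.pop();
            }
            f();                     // execute outside the lock
        }
    }
};

int main() {
    active_object ao;
    ao.post([] { std::puts("ran on the worker thread"); });
}   // destructor runs pending work, then joins the worker
```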