Threads are an illusion - asynchronous programming with boost::asio
By Meeting C++ | Mar 13, 2015 03:17 AM | Tags: performance intermediate efficiency c++11 boost basics
This talk was recorded during the LWG Meeting in Cologne:
Threads are an illusion - asynchronous programming with boost::asio
by Chris Kohlhoff
By Adrien Hamelin | Mar 4, 2015 08:00 AM | Tags: performance intermediate
While we wait for CppCon 2015 in September, we’re featuring videos of some of the 100+ talks from CppCon 2014. Here is today’s feature:
Where did my performance go? Scaling visualization in concurrent C++ programs
by Fedor G Pikus
Summary of the talk:
High performance is one of the main reasons programmers choose C++ for their applications. If you are writing in C++, odds are you need every bit of computing power your hardware can provide. Today, this means writing multi-threaded programs to effectively utilize the multiple CPU cores that the hardware manufacturers keep adding. Everyone knows that writing multi-threaded programs is hard. Writing correct multi-threaded programs is even harder. Only after spending countless hours debugging race conditions and weird intermittent bugs do many programmers learn that writing efficient multi-threaded programs is harder yet. Have you ever wanted to see what all your threads are doing when they should be busy computing? This talk will show you how.
We begin by studying several techniques necessary for collecting large amounts of data from the running program very efficiently, with little overhead and minimal disruption to the program itself. We will cover efficient thread-safe memory management and efficient thread-safe disk I/O. Along the way we dabble in lock-free programming just enough to meet our needs, lest the subject spiral into an hour-long talk of its own. With all these techniques put together, we can collect information about what each thread is doing, which threads are computing and what exactly, and which threads are slacking off waiting on locks, and do it at a time scale of tens of microseconds if necessary. Then we process the collected data and create a timeline that shows exactly what the program was doing at every moment in time.
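To make that collection style concrete, here is a minimal sketch (ours, not the talk's actual code) of the kind of low-overhead, lock-free event recording the summary describes: each thread appends timestamped events to its own preallocated buffer, so the hot path is a plain store plus an index bump, with no locks. All names below are hypothetical.

```cpp
#include <atomic>
#include <chrono>
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical sketch: one preallocated event buffer per thread, so
// recording an event never takes a lock and barely disturbs the program.
struct Event {
    std::uint64_t timestamp_ns;  // when the event happened
    std::uint32_t tag;           // what the thread was doing
};

class ThreadEventBuffer {
public:
    explicit ThreadEventBuffer(std::size_t capacity) : events_(capacity) {}

    void record(std::uint32_t tag) {
        std::size_t i = count_.load(std::memory_order_relaxed);
        if (i >= events_.size()) return;  // buffer full: drop, never block
        events_[i].timestamp_ns = now_ns();
        events_[i].tag = tag;
        // Publish the new count so a reader thread sees a complete event.
        count_.store(i + 1, std::memory_order_release);
    }

private:
    static std::uint64_t now_ns() {
        using namespace std::chrono;
        return duration_cast<nanoseconds>(
            steady_clock::now().time_since_epoch()).count();
    }

    std::vector<Event> events_;
    std::atomic<std::size_t> count_{0};
};

// Each thread writes only to its own buffer; a separate thread can
// periodically read the published prefix and write it to disk.
thread_local ThreadEventBuffer tls_events{1 << 20};
```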
By Adrien Hamelin | Mar 2, 2015 12:01 AM | Tags: performance intermediate
Want performance and speed? Vectors are the solution:
Vector Hosted Lists
by Thomas Young
From the article:
Vectors are great when adding or removing elements at the end of a sequence, but not so hot when deleting elements at arbitrary positions.
If that's a requirement, you might find yourself reaching for a pointer-based list.
Not so fast!
Memory locality is important, contiguous buffers are a really good thing, and a standard vector will often out-perform pointer-based lists even where you perform non-contiguous, list-style modifications such as arbitrary element deletion.
And we can 'host' a list within a vector to get the advantages of a contiguous buffer at the same time as O(1) complexity for these kinds of manipulations...
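To make the technique concrete, here is a minimal sketch (ours, not the article's code) of a list hosted in a vector: nodes live contiguously, links are indices rather than pointers, and erased slots are recycled through a free list, giving O(1) insertion and erasure without giving up memory locality.

```cpp
#include <cstddef>
#include <vector>

// Minimal sketch of a list "hosted" in a vector: nodes are stored
// contiguously, links are indices, and deleted slots go on a free list.
template <typename T>
class VectorHostedList {
    static constexpr std::size_t npos = static_cast<std::size_t>(-1);
    struct Node { T value; std::size_t next; };
    std::vector<Node> nodes_;
    std::size_t head_ = npos;
    std::size_t free_ = npos;  // head of the free list of recycled slots

public:
    std::size_t push_front(const T& value) {
        std::size_t i;
        if (free_ != npos) {            // reuse a recycled slot
            i = free_;
            free_ = nodes_[i].next;
            nodes_[i] = Node{value, head_};
        } else {                        // grow the hosting vector
            i = nodes_.size();
            nodes_.push_back(Node{value, head_});
        }
        head_ = i;
        return i;
    }

    // Erase the node after 'prev' (or the head if prev == npos) in O(1):
    // just relink indices and push the victim slot onto the free list.
    void erase_after(std::size_t prev) {
        std::size_t victim = (prev == npos) ? head_ : nodes_[prev].next;
        std::size_t next = nodes_[victim].next;
        if (prev == npos) head_ = next; else nodes_[prev].next = next;
        nodes_[victim].next = free_;
        free_ = victim;
    }
};
```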
By Adrien Hamelin | Feb 27, 2015 08:00 AM | Tags: performance intermediate
While we wait for CppCon 2015 in September, we’re featuring videos of some of the 100+ talks from CppCon 2014. Here is today’s feature:
Overview of Parallel Programming in C++
by Pablo Halpern
Summary of the talk:
Parallel programming was once considered to be the exclusive realm of weather forecasters and particle physicists working on multi-million dollar super computers while the rest of us relied on chip manufacturers to crank out faster CPUs every year. That era has come to an end. Clock speedups have been largely replaced by having more CPUs on a chip. Your typical smart phone now has 2 to 4 cores and your typical laptop or tablet has 4 to 8 cores. Servers have dozens of cores and supercomputers have thousands of cores.
If you want to speed up a computation on modern hardware, you need to take advantage of the multiple cores available. This talk provides an overview of the parallelism landscape. We'll explore the what, why, and how of parallel programming, discuss the distinction between parallelism and concurrency and how they overlap, and learn about the problems that one runs into. We'll conclude with an overview of existing parallelism technologies in C++ and the future directions being considered for parallel programming in standard C++.
By Meeting C++ | Feb 20, 2015 03:49 AM | Tags: performance meeting c++ intermediate experimental community c++14 c++11 basics advanced
Yesterday I uploaded the last video of Meeting C++ 2014:
All videos of Meeting C++ 2014 are online!
by Jens Weller
From the article:
Yesterday I uploaded the last video of last year's Meeting C++ conference! You can now watch all 21 talks and the 2 keynotes on YouTube!
By Meeting C++ | Feb 12, 2015 09:29 AM | Tags: performance intermediate c++11 boost
A guest blog post by Karsten Ahnert at Meeting C++ as a follow-up to his talk!
Ranges and Iterators for numerical problems
by Karsten Ahnert
From the article:
In this blog post I am going to show some ideas for how one can implement numerical algorithms with ranges. Examples are the classical Newton algorithm for finding the root of a function and ordinary differential equations. The main idea is to put the main loop of the algorithm into a range such that the user can...
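As one hedged illustration of the idea (our sketch, not code from the post): an iterator whose increment performs a single Newton step turns the algorithm's main loop into a sequence the caller can iterate and truncate however they like.

```cpp
#include <functional>
#include <iostream>

// Sketch: an iterator whose increment applies one Newton step
// x <- x - f(x) / f'(x), so the whole iteration becomes a sequence.
class newton_iterator {
    std::function<double(double)> f_, df_;
    double x_;
public:
    newton_iterator(std::function<double(double)> f,
                    std::function<double(double)> df, double x0)
        : f_(std::move(f)), df_(std::move(df)), x_(x0) {}

    double operator*() const { return x_; }
    newton_iterator& operator++() {
        x_ -= f_(x_) / df_(x_);  // one Newton step
        return *this;
    }
};

int main() {
    // Root of f(x) = x^2 - 2, i.e. sqrt(2), starting from x0 = 1.
    newton_iterator it([](double x) { return x * x - 2.0; },
                       [](double x) { return 2.0 * x; }, 1.0);
    for (int i = 0; i < 6; ++i, ++it)
        std::cout << *it << '\n';  // converges quadratically to 1.41421...
}
```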
By Meeting C++ | Feb 11, 2015 10:24 AM | Tags: performance parallelism intermediate experimental c++11
A new video from Meeting C++ 2014:
the asynchronous library
by Christophe Henry
From the talk description:
An infrastructure library on which Boost Meta State Machine can build. This will be provided by the Asynchronous library: Active Objects, proxies, threadpools, parallelization algorithms, work-stealing, distributed programming...
By Adrien Hamelin | Feb 9, 2015 09:39 AM | Tags: performance intermediate
Here is a new library:
Release of C++ Format 1.0, a small, safe and fast formatting library for C++
by Victor Zverovich
From the main page:
C++ Format is an open-source formatting library for C++. It can be used as a safe alternative to printf or as a fast alternative to IOStreams...
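A quick usage sketch, assuming the fmt::format and fmt::print entry points shown in the project's documentation for the 1.0 release:

```cpp
#include "format.h"  // single-header cppformat, as distributed with 1.0
#include <string>

int main() {
    // Type-safe: a bad format string or mismatched argument throws an
    // exception instead of corrupting memory like a bad printf call can.
    std::string s = fmt::format("The answer is {}.", 42);
    fmt::print("Elapsed time: {0:.2f} seconds\n", 1.23);
}
```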
By Adrien Hamelin | Feb 6, 2015 08:00 AM | Tags: performance intermediate
While we wait for CppCon 2015 in September, we’re featuring videos of some of the 100+ talks from CppCon 2014. Here is today’s feature:
Writing Data Parallel Algorithms on GPUs
by Ade Miller
Summary of the talk:
Today most PCs, tablets and phones support multi-core processors and most programmers have some familiarity with writing (task) parallel code. Many of those same devices also have GPUs but writing code to run on a GPU is harder. Or is it?
Getting to grips with GPU programming is really about understanding things in a data parallel way. This talk will look at some of the common patterns for implementing algorithms on today's GPUs using examples from the C++ AMP Algorithms Library. Along the way it will cover some of the unique aspects of writing code for GPUs and contrast them with more conventional code running on a CPU.
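For a taste of the data parallel style, here is a minimal plain C++ AMP kernel (our sketch, not an example from the C++ AMP Algorithms Library itself) that squares every element of a vector on the GPU, one thread per element:

```cpp
#include <amp.h>
#include <vector>

// Minimal C++ AMP sketch: launch one GPU thread per element.
void square_all(std::vector<float>& data) {
    concurrency::array_view<float, 1> av(static_cast<int>(data.size()), data);
    concurrency::parallel_for_each(av.extent,
        [=](concurrency::index<1> i) restrict(amp) {
            av[i] = av[i] * av[i];  // each thread handles one element
        });
    av.synchronize();  // copy results back to the host vector
}
```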
By Adrien Hamelin | Feb 4, 2015 02:08 PM | Tags: performance intermediate
Everything is in the title:
What Every Programmer Should Know About Compiler Optimizations
by Hadi Brais
From the article:
High-level programming languages offer many abstract programming constructs such as functions, conditional statements and loops that make us amazingly productive. However, one disadvantage of writing code in a high-level programming language is the potentially significant decrease in performance. Ideally, you should write understandable, maintainable code without compromising performance. For this reason, compilers attempt to automatically optimize the code to improve its performance, and they've become quite sophisticated in doing so nowadays. They can transform loops, conditional statements, and recursive functions; eliminate whole blocks of code; and take advantage of the target instruction set architecture (ISA) to make the code fast and compact. It's much better to focus on writing understandable code than to make manual optimizations that result in cryptic, hard-to-maintain code. In fact, manually optimizing the code might prevent the compiler from performing additional or more efficient optimizations...
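As a small illustration of that closing point (our example, not the author's), consider the two functions below: the straightforward loop is easy for a compiler to analyze, vectorize, and unroll, while the manually unrolled version bakes in one machine's tuning and can get in the optimizer's way.

```cpp
#include <cstddef>

// Simple and analyzable: most optimizing compilers can vectorize
// and unroll this loop on their own.
int sum_simple(const int* a, std::size_t n) {
    int total = 0;
    for (std::size_t i = 0; i < n; ++i)
        total += a[i];
    return total;
}

// Hand-unrolled "optimization": same result, but the fixed factor of two
// reflects one machine's tuning and may hinder the compiler's own
// vectorization and unrolling choices rather than help them.
int sum_unrolled(const int* a, std::size_t n) {
    int t0 = 0, t1 = 0;
    std::size_t i = 0;
    for (; i + 2 <= n; i += 2) { t0 += a[i]; t1 += a[i + 1]; }
    if (i < n) t0 += a[i];
    return t0 + t1;
}
```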