By Meeting C++ | Feb 28, 2018 04:02 AM | Tags: parallelism meetingcpp concurrency c++20 c++17
Rainer Grimm speaks on Concurrency and Parallelism with C++17 & C++20
Threads and Locks must Go
by Rainer Grimm
By Meeting C++ | Jan 25, 2018 03:20 AM | Tags: performance parallelism multithreading meetingcpp concurrency boost
This talk is about a new future, used in Sean Parent's concurrency library
There is a new future
by Felix Petriconi
By Felix Petriconi | Jun 10, 2017 01:11 PM | Tags: performance efficiency concurrency c++14
Version 1.0 of a new C++ future and channel library has been released.
by Sean Parent, Foster Brereton and Felix Petriconi
About the library:
This library provides high-level abstractions for implementing algorithms that ease the use of multiple CPU cores while minimizing contention.
The future implementation differs in several aspects from the C++11/14/17 standard futures: it provides continuations and joins, which were only added in the Concurrency TS. More importantly, these futures propagate values through the graph, not futures, which makes splits easy to create: a single future can have multiple continuations going in different directions. Another important difference is that these futures support cancellation. If one is no longer interested in the result of a future, one can simply destroy it without having to wait until it is fulfilled, as is the case with std::future (and boost::future). A task that has already started will run to completion, but will not trigger any continuation, so none of the chained continuations will ever run. Additionally, the future interface is designed so that one can use built-in or custom executors.
Since futures can only describe single-use graphs, the library also provides channels, with which one can build graphs that can be used for multiple invocations.
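For illustration, here is a minimal sketch of the split and cancellation behavior described above. The names (stlab::async, future::then, blocking_get and the concurrency headers) are taken from the library's documentation and may differ between releases, so treat this as a sketch rather than canonical usage:

    // Minimal sketch against the stlab concurrency library described above.
    // Header and function names follow its documentation; newer releases
    // deprecate blocking_get in favor of stlab::await.
    #include <stlab/concurrency/default_executor.hpp>
    #include <stlab/concurrency/future.hpp>
    #include <stlab/concurrency/utility.hpp>
    #include <iostream>

    int main() {
        // Values, not futures, flow through the graph...
        auto answer = stlab::async(stlab::default_executor, [] { return 42; });

        // ...so one future can be split into several continuations.
        auto doubled = answer.then([](int x) { return 2 * x; });
        auto printed = answer.then([](int x) { std::cout << "got " << x << '\n'; });

        std::cout << stlab::blocking_get(doubled) << '\n';  // prints 84

        // Cancellation: when `printed` goes out of scope at the end of main,
        // its continuation is simply dropped if it has not been scheduled yet.
    }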
By Meeting C++ | Jan 27, 2016 09:29 AM | Tags: parallelism intermediate concurrency c++17 c++14 c++11 basics
A new video from Meeting C++ 2015:
The Landscape of Parallelism
by Michael Wong
By Marco Arena | Apr 30, 2015 12:48 AM | Tags: intermediate concurrency
A new blog post containing runnable code from the Italian C++ Community:
Serializing access to Streams
by Marco Foco
From the article:
Two or more threads were writing to cout using the form:
    cout << someData << "some string" << someObject << endl;

And one of the problems was that data sent from one thread often interrupted another thread, so the output was always messed up. [...] I started designing a solution by giving myself some guidelines, here listed in order of importance...
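This is not the article's design, but as a minimal illustration of one common approach using only the standard library: format the whole message locally, then emit it under a lock so output from different threads cannot interleave (locked_print and cout_mutex are made-up names):

    #include <iostream>
    #include <mutex>
    #include <sstream>
    #include <thread>

    std::mutex cout_mutex;  // guards std::cout below (illustrative name)

    template <typename... Args>
    void locked_print(Args&&... args) {
        std::ostringstream line;
        (line << ... << args);                       // build the complete line first (C++17 fold)
        std::lock_guard<std::mutex> lock(cout_mutex);
        std::cout << line.str();                     // one locked write, no interleaving
    }

    int main() {
        std::thread a([] { locked_print("thread A: ", 1, '\n'); });
        std::thread b([] { locked_print("thread B: ", 2, '\n'); });
        a.join();
        b.join();
    }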
By Marco Arena | Mar 19, 2015 04:11 AM | Tags: generators concurrency advanced
From a totally unnecessary blog (we beg to differ):
Range comprehensions with C++ lazy generators
by Paolo Severini
From the article:
Lazy evaluation is a powerful tool and a pillar of functional programming; it gives the ability to construct potentially infinite data structures, and increases the performance by avoiding needless calculations ...
... Functional languages like Haskell have the concept of list comprehensions ... In C#, of course, we have LINQ ... It would be nice to have something similar in an eager language like C++ ... now the lazy, resumable generators proposed by N4286 seem perfect for this purpose ... We can use the VS2015 CTP prototype to experiment with this idea ...
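N4286's resumable functions eventually evolved into C++20 coroutines and C++23's std::generator, so the idea sketched in the article can be written today roughly like this (the VS2015 CTP keywords and types differed; this is a sketch in current syntax, not the article's code):

    #include <generator>   // C++23
    #include <iostream>

    // Lazily yields an infinite sequence of squares; nothing is computed
    // until the consumer asks for the next value.
    std::generator<long long> squares() {
        for (long long i = 0; ; ++i)
            co_yield i * i;
    }

    int main() {
        for (long long s : squares()) {
            if (s > 100) break;      // consume only a finite prefix
            std::cout << s << ' ';
        }
    }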
By Felix Petriconi | Mar 18, 2015 01:07 AM | Tags: intermediate concurrency
In his recent blog article, Max Khiszinsky describes different approaches to developing concurrent containers.
Lock-Free Data Structures. The Evolution of a Stack
by Max Khiszinsky
From the article:
Describing the known algorithms would be quite boring, as there would be a lot of [pseudo-]code, plenty of details that are important but quite specific. After all, you can always find them in the references I provide in articles. What I wanted was to tell you an interesting story about exciting things. I wanted to show the development of approaches to designing concurrent containers.
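As a taste of where that story usually starts, here is a sketch of the classic Treiber stack push built on compare-and-swap (illustrative code, not taken from the article):

    #include <atomic>
    #include <utility>

    template <typename T>
    class lock_free_stack {
        struct node {
            T value;
            node* next;
        };
        std::atomic<node*> head{nullptr};

    public:
        void push(T value) {
            node* n = new node{std::move(value), head.load(std::memory_order_relaxed)};
            // Retry until no other thread changed head between our read and our CAS.
            while (!head.compare_exchange_weak(n->next, n,
                                               std::memory_order_release,
                                               std::memory_order_relaxed)) {
            }
        }
        // pop() is where the real trouble begins (ABA, safe memory reclamation),
        // which is exactly the evolution the article describes.
    };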
By Meeting C++ | Jan 27, 2015 12:59 PM | Tags: performance parallelism openmp intermediate experimental efficiency concurrency advanced
A new video from Meeting C++ 2014:
C++ SIMD parallelism with Intel Cilk Plus and OpenMP 4.0
by Georg Zitzlsberger
From the talk description:
Performance is one of the most important aspects that comes to mind when deciding on a programming language. Utilizing the performance of modern processors is not as straightforward as it was decades ago. Modern processors only rarely improve serial execution of applications by increasing their frequency or adding more execution units.
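For a flavor of what explicit SIMD programming looks like, a hedged sketch using OpenMP 4.0's simd pragma (Cilk Plus offers comparable explicit vectorization through array notation); compile with an OpenMP-enabled compiler, e.g. with -fopenmp or -fopenmp-simd:

    #include <cstddef>
    #include <vector>

    // OpenMP 4.0: the simd pragma asks the compiler to vectorize this loop
    // explicitly instead of relying on auto-vectorization heuristics.
    void saxpy(float a, const std::vector<float>& x, std::vector<float>& y) {
        #pragma omp simd
        for (std::size_t i = 0; i < x.size(); ++i)
            y[i] = a * x[i] + y[i];
    }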
By Meeting C++ | Jan 23, 2015 06:47 AM | Tags: templates performance parallelism intermediate generic experimental concurrency c++11 advanced
A new video from Meeting C++ 2014:
Generic parallel programming for scientific and technical applications
by Guntram Berti
From the talk description:
Technical and scientific applications dealing with a high computational load today face the challenge of matching the increasingly parallel nature of current and future hardware. The talk shows how the increased complexity of software can be controlled by using generic programming technologies. The process and its advantages are introduced using many concrete examples...
By Meeting C++ | Jan 15, 2015 03:39 AM | Tags: performance parallelism intermediate efficiency concurrency advanced
A new video from Meeting C++ 2014:
The C++ Memory Model
by Valentin Ziegler
From the talk description:
The C++ memory model defines how multiple threads interact with memory and shared data, enabling developers to reason about concurrent code in a platform independent way. The talk will explain multi-threaded executions and data races in C++...
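A tiny example of the kind of code the memory model is about: two threads incrementing shared state. With a plain int this is a data race and therefore undefined behavior; std::atomic gives it defined semantics (illustrative code, not from the talk):

    #include <atomic>
    #include <iostream>
    #include <thread>

    int main() {
        std::atomic<int> counter{0};   // with a plain int this would be a data race

        auto work = [&counter] {
            for (int i = 0; i < 100000; ++i)
                counter.fetch_add(1, std::memory_order_relaxed);
        };

        std::thread t1(work), t2(work);
        t1.join();
        t2.join();

        std::cout << counter.load() << '\n';   // always 200000
    }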