performance

CppCon 2014 Lock-Free Programming (or, Juggling Razor Blades), Part II--Herb Sutter

While we wait for CppCon 2015 in September, we’re featuring videos of some of the 100+ talks from CppCon 2014. Here is today’s feature:

Lock-Free Programming (or, Juggling Razor Blades), Part II

by Herb Sutter

(watch on YouTube) (watch on Channel 9)

Summary of the talk:

Example-driven talk on how to design and write lock-free algorithms and data structures using C++ atomic<> -- something that can look deceptively simple, but contains very deep topics. (Important note: This is not the same as my "atomic<> Weapons" talk; that talk was about the "what they are and why" of the C++ memory model and atomics, and did not cover how to actually use atomics to implement highly concurrent algorithms and data structures.)

CppCon 2014 Lock-Free Programming (or, Juggling Razor Blades), Part I--Herb Sutter

While we wait for CppCon 2015 in September, we’re featuring videos of some of the 100+ talks from CppCon 2014. Here is today’s feature:

Lock-Free Programming (or, Juggling Razor Blades), Part I

by Herb Sutter

(watch on YouTube) (watch on Channel 9)

Summary of the talk:

Example-driven talk on how to design and write lock-free algorithms and data structures using C++ atomic<> -- something that can look deceptively simple, but contains very deep topics. (Important note: This is not the same as my "atomic<> Weapons" talk; that talk was about the "what they are and why" of the C++ memory model and atomics, and did not cover how to actually use atomics to implement highly concurrent algorithms and data structures.)
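
Not from the talk itself, but as a taste of the subject matter: below is a minimal sketch of a Treiber-style lock-free stack push using std::atomic and compare_exchange_weak. The names (lock_free_stack, node) are illustrative rather than code from the talk, and a correct pop() additionally needs a memory-reclamation scheme of the kind the talk discusses.

#include <atomic>
#include <utility>

// Minimal lock-free stack push (Treiber stack). Illustrative sketch only;
// popping safely requires hazard pointers, reference counting, or similar.
template <typename T>
class lock_free_stack {
    struct node {
        T value;
        node* next;
    };
    std::atomic<node*> head{nullptr};

public:
    void push(T value) {
        node* n = new node{std::move(value), head.load(std::memory_order_relaxed)};
        // Retry until head is atomically swung from n->next to n. On failure,
        // compare_exchange_weak reloads the current head into n->next for us.
        while (!head.compare_exchange_weak(n->next, n,
                                           std::memory_order_release,
                                           std::memory_order_relaxed)) {
        }
    }
};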

Template Code Bloat Revisited -- Jason Turner

In his recent article, Jason Turner discusses the size of instantiated template code.

Template Code Bloat Revisited: A Smaller make_shared

by Jason Turner

From the article:

Back in 2008 I wrote an article on template code bloat. In that article I concluded that the use of templates does not necessarily cause your binary code to bloat and may actually result in smaller code!

However, after spending the last few months optimizing and evaluating ChaiScript I've learned that the misuse of templates, particularly when inheritance is involved, can have a huge impact on code size.
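
As a rough, made-up illustration of the pattern the article talks about (the names below are not from ChaiScript): hoisting type-independent members out of a class template into a plain base class means that code is compiled once rather than once per instantiation.

#include <string>
#include <utility>
#include <vector>

// Hypothetical example: everything left inside boxed_value<T> is stamped out
// again for every T, so only the genuinely type-dependent part stays there.
class boxed_value_base {
protected:
    std::string type_name_;                  // type-independent bookkeeping,
    std::vector<std::string> attributes_;    // compiled exactly once
public:
    const std::string& type_name() const { return type_name_; }
};

template <typename T>
class boxed_value : public boxed_value_base {
    T value_;
public:
    explicit boxed_value(T v) : value_(std::move(v)) {}
    const T& get() const { return value_; }
};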

CppCon 2014 Another fundamental shift in Parallelism Paradigm?--Michael Wong

While we wait for CppCon 2015 in September, we’re featuring videos of some of the 100+ talks from CppCon 2014. Here is today’s feature:

Another fundamental shift in Parallelism Paradigm?

by Michael Wong

(watch on YouTube) (watch on Channel 9)

Summary of the talk:

Another fundamental shift in Parallelism Paradigm? Sure. When was the last time you heard that before?

But seriously, as the number of threads/cores continue to increase, there is a growing pressure on applications to exploit more of the available parallelism in their codes, including coarse-, medium-, and fine-grain parallelism. OpenMP has been one of the dominant shared-memory programming models but is evolving beyond that with a new Mission Statement (no, really!) making it well suited for exploiting medium- and fine-grained parallelism.

OpenMP 4.0 exhibits many of these features to support the next step in consumer, high-performance, and exascale computing, with one of the world's first programming models to offer high-level language support for GPU/accelerators and vector SIMD across not one but three high-level languages: C++, C, and that language whose name we dare not speak, but which starts with F.
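
As a small sketch of the OpenMP 4.0 features mentioned above (not material from the talk): #pragma omp simd asks the compiler to vectorize a loop, and #pragma omp target offloads it to an accelerator. It needs a compiler with OpenMP 4.0 support, and the target construct falls back to the host if no device is available.

#include <vector>

// OpenMP 4.0: fine-grained SIMD parallelism plus accelerator offload.
void saxpy(float a, std::vector<float>& x, const std::vector<float>& y)
{
    const int n = static_cast<int>(x.size());
    float* xp = x.data();
    const float* yp = y.data();

    // Ask the compiler to vectorize this loop.
    #pragma omp simd
    for (int i = 0; i < n; ++i)
        xp[i] = a * xp[i] + yp[i];

    // Offload the same loop to an accelerator, mapping the data explicitly.
    #pragma omp target teams distribute parallel for map(tofrom: xp[0:n]) map(to: yp[0:n])
    for (int i = 0; i < n; ++i)
        xp[i] = a * xp[i] + yp[i];
}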

Tippet: Use reference_wrapper to create views of data -- Indi

Explicit C++ describes how to use std::reference_wrapper to create alternative views of data.

Tippet: Use reference_wrapper to create views of data

by Indi

From the article:

When working with objects indirectly, always use references. Only use pointers to indicate optional referencing. But there’s one little hitch: because you can’t rebind references, you can’t simply have a container of references. Enter std::reference_wrapper.
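
A minimal sketch of the idea (details and more examples are in the article): a vector of std::reference_wrapper forms a sorted view over the real objects without copying, moving, or reordering them.

#include <algorithm>
#include <functional>
#include <iostream>
#include <string>
#include <vector>

int main()
{
    // The actual data, owned in exactly one place.
    std::vector<std::string> names{"Charlie", "Alice", "Bob"};

    // A rebindable, copyable view over the same strings -- no ownership.
    std::vector<std::reference_wrapper<std::string>> sorted_view(names.begin(), names.end());
    std::sort(sorted_view.begin(), sorted_view.end(),
              [](const std::string& a, const std::string& b) { return a < b; });

    for (const std::string& n : sorted_view)   // implicit conversion to std::string&
        std::cout << n << '\n';                // Alice, Bob, Charlie

    std::cout << names.front() << '\n';        // the underlying data is untouched: Charlie
}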

CppCast Episode 9: Asynchronous Programming with Hartmut Kaiser

Episode 9 of CppCast, the only podcast by C++ developers for C++ developers, is out. In this episode Rob and Jason are joined by Hartmut Kaiser to talk about asynchronous programming and the HPX framework.

CppCast Episode 9: Asynchronous Programming with Hartmut Kaiser

by Rob Irving and Jason Turner

About the interviewee:

Hartmut Kaiser is an Adjunct Professor of Computer Science at Louisiana State University. At the same time, he holds the position of a senior scientist at the Center for Computation and Technology at LSU. He received his doctorate from the Technical University of Chemnitz (Germany) in 1988. He is probably best known through his involvement in open source software projects, mainly as the author of several C++ libraries he has contributed to Boost, which are in use by thousands of developers worldwide. He is a voting member of the ISO C++ Standards Committee and his current research is focused on leading the STE||AR group at CCT working on the practical design and implementation of the ParalleX execution model and related programming methods. In addition, he architected and developed the core library modules of SAGA for C++, a Simple API for Grid Applications.

rapidjson 1.0.0 released

Want a fast JSON parser? Check out this one…

rapidjson 1.0.0 released

From the article:

This is the final v1.0.0 release of RapidJSON.

After the v1.0-beta, a lot of effort has been put into making RapidJSON 100% line-of-code covered by the unit tests...
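
For a flavour of the library, parsing a document looks roughly like this (the JSON field names below are invented for the example):

#include <cstdio>
#include "rapidjson/document.h"

int main()
{
    const char json[] = R"({"project":"rapidjson","stars":1000})";

    rapidjson::Document doc;
    doc.Parse(json);                       // DOM parse; in-situ and SAX APIs also exist

    if (doc.HasParseError() || !doc.IsObject())
        return 1;

    std::printf("%s has %d stars\n",
                doc["project"].GetString(),
                doc["stars"].GetInt());
}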

GCC 5.1 released

A new version of GCC is out, with a lot of improvements:

GCC 5.1 released

Some changes:

C++

  • G++ now supports C++14 variable templates.
  • -Wnon-virtual-dtor doesn't warn anymore for final classes.
  • Excessive template instantiation depth is now a fatal error. This prevents excessive diagnostics that usually do not help to identify the problem.
  • G++ and libstdc++ now implement the feature-testing macros from Feature-testing recommendations for C++.
  • G++ now allows typename in a template template parameter.
template<template<typename> typename X> struct D; // OK
  • G++ now supports C++14 aggregates with non-static data member initializers.
struct A { int i, j = i; };
A a = { 42 }; // a.j is also 42
  • G++ now supports C++14 extended constexpr.
constexpr int f (int i)
{
  int j = 0;
  for (; i > 0; --i)
    ++j;
  return j;
}
constexpr int i = f(42); // i is 42
  • G++ now supports the C++14 sized deallocation functions.
void operator delete (void *, std::size_t) noexcept;
void operator delete[] (void *, std::size_t) noexcept;
  • A new One Definition Rule violation warning (controlled by -Wodr) detects mismatches in type definitions and virtual table contents during link-time optimization.
  • New warnings -Wsuggest-final-types and -Wsuggest-final-methods help developers to annotate programs with final specifiers (or anonymous namespaces) to improve code generation. These warnings can be used at compile time, but they are more useful in combination with link-time optimization.
  • G++ no longer supports N3639 variable length arrays, as they were removed from the C++14 working paper prior to ratification. GNU VLAs are still supported, so VLA support is now the same in C++14 mode as in C++98 and C++11 modes.
  • G++ now allows passing a non-trivially-copyable class via C varargs, which is conditionally-supported with implementation-defined semantics in the standard. This uses the same calling convention as a normal value parameter.
  • G++ now defaults to -fabi-version=0 and -fabi-compat-version=2. So various mangling bugs are fixed, but G++ will still emit aliases with the old, wrong mangling where feasible. -Wabi continues to warn about differences.
libstdc++
  • A Dual ABI is provided by the library. A new ABI is enabled by default. The old ABI is still supported and can be used by defining the macro _GLIBCXX_USE_CXX11_ABI to 0 before including any C++ standard library headers.
  • A new implementation of std::string is enabled by default, using the small string optimization instead of copy-on-write reference counting.
  • A new implementation of std::list is enabled by default, with an O(1) size() function;
  • Full support for C++11, including the following new features:
    • std::deque and std::vector<bool> meet the allocator-aware container requirements;
    • movable and swappable iostream classes;
    • support for std::align and std::aligned_union;
    • type traits std::is_trivially_copyable, std::is_trivially_constructible, std::is_trivially_assignable etc.;
    • I/O manipulators std::put_time, std::get_time, std::hexfloat and std::defaultfloat;
    • generic locale-aware std::isblank;
    • locale facets for Unicode conversion;
    • atomic operations for std::shared_ptr;
    • std::notify_all_at_thread_exit() and functions for making futures ready at thread exit.
  • Support for the C++11 hexfloat manipulator changes how the num_put facet formats floating point types when ios_base::fixed|ios_base::scientific is set in a stream's fmtflags. This change affects all language modes, even though the C++98 standard gave no special meaning to that combination of flags. To prevent the use of hexadecimal notation for floating point types use str.unsetf(std::ios_base::floatfield) to clear the relevant bits in str.flags().
  • Full experimental support for C++14, including the following new features:
    • std::is_final type trait;
    • heterogeneous comparison lookup in associative containers (see the sketch after this list);
    • global functions cbegin, cend, rbegin, rend, crbegin, and crend for range access to containers, arrays and initializer lists.
  • Improved experimental support for the Library Fundamentals TS, including:
    • class std::experimental::any;
    • function template std::experimental::apply;
    • function template std::experimental::sample;
    • function template std::experimental::search and related searcher types;
    • variable templates for type traits;
    • function template std::experimental::not_fn.
  • New random number distributions logistic_distribution and uniform_on_sphere_distribution as extensions.
  • GDB Xmethods for containers and std::unique_ptr.
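
As a quick sketch of the heterogeneous comparison lookup mentioned above: with a transparent comparator such as std::less<>, C++14 associative-container lookups can take any comparable key type, so no temporary std::string has to be constructed here.

#include <cassert>
#include <map>
#include <string>

int main()
{
    // std::less<void> is a "transparent" comparator, enabling the C++14
    // heterogeneous overloads of find/count/lower_bound/equal_range.
    std::map<std::string, int, std::less<>> stars{{"gcc", 5}, {"clang", 3}};

    // Looks up with the const char* directly -- no temporary std::string key.
    auto it = stars.find("gcc");
    assert(it != stars.end() && it->second == 5);
}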

CppCon 2014 Pragmatic Type Erasure: Solving OOP Problems w/ Elegant Design Pattern--Zach Laine

While we wait for CppCon 2015 in September, we’re featuring videos of some of the 100+ talks from CppCon 2014. Here is today’s feature:

Pragmatic Type Erasure: Solving OOP Problems w/ Elegant Design Pattern

by Zach Laine

(watch on YouTube) (watch on Channel 9)

Summary of the talk:

There are numerous, serious OOP design problems that we have all encountered in production code. These include, among others:

  • object lifetime/ownership
  • how to make classes from different class hierarchies conform to a common interface
  • writing classes that can present multiple interfaces
  • separating interface and implementation
  • how to write virtual functions so that subclasses override them properly
  • the virtual inheritance "diamond of death"

Proper use of type erasure can mitigate, or outright eliminate, these and other problems, without sacrificing performance.

This talk will cover the OOP design problems above and more, and will cover hand-rolled and library-based type erasure approaches that solve those problems. Performance metrics will be provided for the different approaches, and source code will be available after the talk.
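
For readers who want a taste before watching: below is a condensed, hand-rolled type-erasure sketch in the spirit of the talk (the names drawable, circle, and label are illustrative, not the talk's code). A value type with a templated constructor hides the concrete type behind a small internal interface, so unrelated classes share one interface without a common base class.

#include <iostream>
#include <memory>
#include <utility>
#include <vector>

// Anything with a draw() member can be stored in a drawable --
// no inheritance is required on the stored types themselves.
class drawable {
    struct concept_t {                        // the erased interface
        virtual ~concept_t() = default;
        virtual void draw() const = 0;
    };
    template <typename T>
    struct model_t final : concept_t {        // one model per stored type
        T value;
        explicit model_t(T v) : value(std::move(v)) {}
        void draw() const override { value.draw(); }
    };
    std::shared_ptr<const concept_t> self;

public:
    template <typename T>
    drawable(T value) : self(std::make_shared<model_t<T>>(std::move(value))) {}
    void draw() const { self->draw(); }
};

struct circle { void draw() const { std::cout << "circle\n"; } };
struct label  { void draw() const { std::cout << "label\n";  } };

int main()
{
    std::vector<drawable> scene{circle{}, label{}};   // unrelated types, one interface
    for (const auto& d : scene) d.draw();
}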

CppCon 2014 Introduction to C++ AMP (GPGPU Computing)--Marc Gregoire

While we wait for CppCon 2015 in September, we’re featuring videos of some of the 100+ talks from CppCon 2014. Here is today’s feature:

Introduction to C++ AMP (GPGPU Computing)

by Marc Gregoire

(watch on YouTube) (watch on Channel 9)

Summary of the talk:

Meet C++ AMP (Accelerated Massive Parallelism), an abstraction layer on top of accelerators such as GPUs. In its current version it allows you to run code on any DX11 GPU, independent of the vendor, and it will even distribute workload across GPUs of different vendors simultaneously. C++ AMP was originally designed by Microsoft but is now an open standard. C++ AMP can deliver orders of magnitude performance increase with certain algorithms by utilizing the GPU to perform mathematical calculations. This talk will give a high level overview of what C++ AMP is and what it can do for you. It is time to start taking advantage of the computing power of GPUs!
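
The canonical first C++ AMP example is element-wise vector addition; here is a minimal sketch (it needs a compiler with C++ AMP support, such as Visual C++, plus a DX11-capable device):

#include <amp.h>
#include <vector>

// Element-wise vector addition executed on the GPU via C++ AMP.
std::vector<int> add(const std::vector<int>& a, const std::vector<int>& b)
{
    const int n = static_cast<int>(a.size());
    std::vector<int> c(n);

    concurrency::array_view<const int, 1> av(n, a);
    concurrency::array_view<const int, 1> bv(n, b);
    concurrency::array_view<int, 1>       cv(n, c);
    cv.discard_data();                       // c's initial contents need not be copied over

    concurrency::parallel_for_each(cv.extent,
        [=](concurrency::index<1> i) restrict(amp) {
            cv[i] = av[i] + bv[i];
        });

    cv.synchronize();                        // copy the results back into c
    return c;
}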