N4373: Atomic View (Rev. N4142, Atomic Operations on a Very Large Array) -- C Edwards, H Boehm

New WG21 papers are available. If you are not a committee member, please use the comments section below or the std-proposals forum for public discussion.

Document number: N4373

Date: 2015-01-26

Atomic View (Revision to N4142, Atomic Operations on a Very Large Array)

by Carter Edwards and Hans Boehm

Excerpt:

This paper proposes an extension to the atomic operations library [atomics] for atomic operations applied to non-atomic objects. The proposal is in three parts: (1) the concept of an atomic view, (2) application of this concept for atomic operations applied to members of a very large array in a high performance computing (HPC) code, and (3) application of this concept for atomic operations applied to an object in a large legacy code which cannot replace this object with a corresponding atomic object.
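For a rough sense of the usage pattern the paper targets, here is a minimal sketch written with C++20's std::atomic_ref, the facility this line of proposals eventually produced; the paper's own atomic_view interface is spelled differently, so treat the code as an illustration of the idea rather than the proposed API:

#include <atomic>
#include <cstddef>
#include <vector>

// A very large array owned as plain, non-atomic data, so sequential phases
// of the code can use it without atomic overhead.
std::vector<double> forces(1'000'000);

void accumulate_force(std::size_t i, double contribution) {
    // Wrap a single element in an atomic "view" only while concurrent
    // updates are in flight; the underlying object itself stays non-atomic.
    std::atomic_ref<double> elem(forces[i]);
    elem.fetch_add(contribution, std::memory_order_relaxed);
}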

CppCon 2014: Writing Data Parallel Algorithms on GPUs--Ade Miller

While we wait for CppCon 2015 in September, we’re featuring videos of some of the 100+ talks from CppCon 2014. Here is today’s feature:

Writing Data Parallel Algorithms on GPUs

by Ade Miller

(watch on YouTube) (watch on Channel 9)

Summary of the talk:

Today most PCs, tablets and phones support multi-core processors and most programmers have some familiarity with writing (task) parallel code. Many of those same devices also have GPUs but writing code to run on a GPU is harder. Or is it?

Getting to grips with GPU programming is really about understanding things in a data parallel way. This talk will look at some of the common patterns for implementing algorithms on today's GPUs, using examples from the C++ AMP Algorithms Library. Along the way it will cover some of the unique aspects of writing code for GPUs and contrast them with more conventional code running on a CPU.
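For a taste of what "understanding things in a data parallel way" looks like in practice, here is a minimal C++ AMP sketch (illustrative, not taken from the talk): instead of a sequential loop, one logical thread is launched per element.

#include <amp.h>
#include <vector>

void scale_on_gpu(std::vector<float>& data, float factor) {
    using namespace concurrency;

    // Expose the host data to the accelerator as a one-dimensional view.
    array_view<float, 1> view(static_cast<int>(data.size()), data);

    // Data-parallel formulation: one logical thread per element, in place
    // of the sequential for loop a CPU version would use.
    parallel_for_each(view.extent, [=](index<1> idx) restrict(amp) {
        view[idx] *= factor;
    });

    view.synchronize();  // copy results back to the host vector
}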

Sqlpp11, An EDSL For Type-Safe SQL In C++11

A new video from Meeting C++ 2014:

Sqlpp11, An EDSL For Type-Safe SQL In C++11

by Roland Bock

From the talk description:

Most C/C++ interfaces to SQL databases are string based. These strings effectively hide expression structures, names and types from the compiler. And they are vendor-specific. And they defer expression parsing and validation until the test phase or (even worse) production...
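To make the contrast concrete, here is a rough sqlpp11-style sketch next to its string-based equivalent. The table type tab_person (with columns name and age) and the connection object db are assumed to have been set up beforehand, e.g. via the library's schema code generator; the spellings are illustrative and not taken from the talk.

// String-based: the driver receives opaque text, so a misspelled column or a
// type mismatch is only found at test time or in production:
//   "SELECT name FROM person WHERE age > 42"

// sqlpp11-style: the query is a typed C++ expression checked at compile time.
for (const auto& row : db(select(tab_person.name)
                              .from(tab_person)
                              .where(tab_person.age > 42)))
{
    std::cout << row.name << '\n';
}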

Quick Q: What is move_iterator for?

Quick A: To enable moving out of *iterator.

Recently on SO:

What is move_iterator for?

If I understand it correctly, a = std::move(b) binds reference a to the address of b, and after this operation the content that b points to is not guaranteed.

The implementation of move_iterator here has this line

auto operator[](difference_type n) const -> decltype(std::move(current[n]))
  { return std::move(current[n]); }

However, I don't think it makes sense to std::move an element in an array. What happens if a=std::move(b[n])?

The following example confuses me also:

std::string concat = std::accumulate(
    std::move_iterator<iter_t>(source.begin()),
    std::move_iterator<iter_t>(source.end()),
    std::string("1234"));

Since concat will itself allocate a contiguous chunk of memory to store the result, which will not overlap with source, the data in source will be copied to concat but not moved.
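For reference, the canonical use case is wrapping a range in move iterators so that whatever consumes the range moves from each element instead of copying it; a small sketch (with illustrative names):

#include <iterator>
#include <string>
#include <vector>

int main() {
    std::vector<std::string> source{"some long string", "another long string"};

    // Plain iterators: every string buffer is duplicated.
    std::vector<std::string> copied(source.begin(), source.end());

    // Move iterators: *it yields an rvalue, so each string's buffer is
    // stolen; the strings in source are left valid but unspecified.
    std::vector<std::string> moved(std::make_move_iterator(source.begin()),
                                   std::make_move_iterator(source.end()));
}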

What Every Programmer Should Know About Compiler Optimizations--Hadi Brais

Everything is in the title:

What Every Programmer Should Know About Compiler Optimizations

by Hadi Brais

From the article:

High-level programming languages offer many abstract programming constructs such as functions, conditional statements and loops that make us amazingly productive. However, one disadvantage of writing code in a high-level programming language is the potentially significant decrease in performance. Ideally, you should write understandable, maintainable code—without compromising performance. For this reason, compilers attempt to automatically optimize the code to improve its performance, and they’ve become quite sophisticated in doing so nowadays. They can transform loops, conditional statements, and recursive functions; eliminate whole blocks of code; and take advantage of the target instruction set architecture (ISA) to make the code fast and compact. It’s much better to focus on writing understandable code than on making manual optimizations that result in cryptic, hard-to-maintain code. In fact, manually optimizing the code might prevent the compiler from performing additional or more efficient optimizations...
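As a small illustration of the article's point (this example is not from the article): the straightforward loop below leaves a loop-invariant computation in plain sight, and a typical optimizer hoists it out of the loop, so the hand-"optimized" version usually buys nothing while being harder to read.

#include <cstddef>

// Readable version: scale * offset does not change inside the loop, and a
// typical optimizer hoists it out (loop-invariant code motion).
void adjust(float* data, std::size_t n, float scale, float offset) {
    for (std::size_t i = 0; i < n; ++i)
        data[i] = data[i] * scale + scale * offset;
}

// Hand-"optimized" version: usually the same machine code, harder to read.
void adjust_by_hand(float* data, std::size_t n, float scale, float offset) {
    const float bias = scale * offset;
    for (std::size_t i = 0; i < n; ++i)
        data[i] = data[i] * scale + bias;
}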

Expressions can have Reference Type--Scott Meyers

Did you know that...

Expressions can have Reference Type

by Scott Meyers

From the article:

Today I got email about some information in Effective Modern C++. The email included the statement, "An expression never has reference type." This is easily shown to be incorrect, but people assert it to me often enough that I'm writing this blog entry so that I can refer people to it in the future...
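One way to see the kind of case at issue (the example is illustrative and not necessarily the one from the post): for a function declared to return int&, the standard describes the call expression as initially having that reference type, which is adjusted to int prior to any further analysis, and decltype applied to the unparenthesized call reports int&.

#include <type_traits>

int  g = 0;
int& f() { return g; }

// decltype of an unparenthesized function call is the function's declared
// return type -- here a reference type.
static_assert(std::is_same<decltype(f()), int&>::value, "decltype(f()) is int&");

int main() {
    f() = 42;  // the call expression is an lvalue, so it can be assigned to
}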