A Look at C++14: Papers Part I -- Meeting C++

[Ed. Note: Although labeled "A look at C++14," not all of these papers are for C++14, and regardless of timeframe not all will be adopted. But it is a useful look at what's on the committee's radar.]

As we gear up for the Bristol meeting that starts on April 15, committee members are now busily reading and preparing positions on the many papers to be considered at the meeting.

Since our last meeting in Portland, we have 160 technical papers, a record for recent mailings; most of them are proposals that will need to be considered and discussed at the face-to-face meeting.

Those papers have been posted here; see the blog's Standardization category. Most recently, the papers have been posted individually with crafted excerpts, which many committee members have found useful; they already follow the mailings primarily here on the blog, like everyone else.

However, it's not just the committee members who are interested in these proposals. The Meeting C++ blog has posted the first of a multi-part series of "digest" summaries of the papers in the March mailing (they skipped the January mailing). In this first installment, they summarize a first batch of papers...

A look at C++14: Papers Part I

This is the first part of n, or let's say many, entries in this blog. In total I hope to be able to cover most papers in 3-4 blog posts, giving the reader an overview of the suggestions and changes for C++ at the coming C++ committee meeting in April. In total there are 98 papers, so I will skip some, but try to cover as much as possible. I'll skip papers with meeting minutes for sure, and try to focus on those dealing with C++11 or C++14 features. As the papers are ordered by their number (N3522 being the first), I'll go top down, so every blog post will contain different papers from different fields of C++ standardization. As N3522-24 are reports about active issues, defects, and closed issues, I'll skip them for now; I will also not read through meeting minutes and such.


Comments (7)


Bjarne Stroustrup said on Mar 28, 2013 09:05 PM:

Constraints ("concepts lite") checks calls; they do not check that a template definition use only the properties of argument types that it required from its callers.

Bartosz Bielecki said on Mar 29, 2013 12:27 AM:

I really like the idea of summarizing the proposals. It makes it much easier to keep track of what might be changing in the standard, and what the current pitfalls are.

Evgeny Panasyuk said on Mar 29, 2013 01:52 PM:

Regarding the currently proposed Constraints syntax, I don't see how a template definition can be checked against an arbitrary set of constraints in the general case, especially considering that a "requires clause" may contain arbitrarily complex conditions.
For instance, what to do with old tricks like has_member_xxx based on SFINAE+overload+sizeof - is the compiler supposed to turn them into an archetype?

N3580 says: "We expect this feature to be a part of a final design for concepts".
Does that mean that only a partial set of conditions (like only "requires expressions") can be turned into an archetype?
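
[Ed. Note: for readers unfamiliar with the "old trick" mentioned above, here is a sketch of classic SFINAE+overload+sizeof member detection; the trait name has_member_foo and the member signature void foo() are just examples.]

    #include <iostream>

    template<typename T>
    class has_member_foo {
        typedef char yes[1];
        typedef char no[2];

        // Forming check<U, &U::foo> fails (SFINAE) unless U has a member void foo().
        template<typename U, void (U::*)()> struct check;

        template<typename U> static yes& test(check<U, &U::foo>*); // picked if U has void foo()
        template<typename U> static no&  test(...);                // fallback otherwise

    public:
        static const bool value = sizeof(test<T>(0)) == sizeof(yes);
    };

    struct WithFoo    { void foo() {} };
    struct WithoutFoo {};

    int main() {
        std::cout << has_member_foo<WithFoo>::value    << '\n';  // 1
        std::cout << has_member_foo<WithoutFoo>::value << '\n';  // 0
    }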

Evgeny Panasyuk said on Mar 29, 2013 03:02 PM:

Regarding N3530: "Leveraging OpenMP infrastructure for language level parallelisation".

C++ has good support for higher-order functions (HOFs): lambdas and the upcoming polymorphic lambdas, plus a well-designed STL with a lot of HOFs.
There are ready library-only solutions which provide great parallel facilities, like Intel Threading Building Blocks or the Microsoft Parallel Patterns Library, without any kind of language extension. There is also the library-only proposal N3554: "A Parallel Algorithms Library".

I was shocked to see a proposal for adding OpenMP-like features (language extensions) to ISO C++.
OK, many compilers already support OpenMP, and where it is available it can be used just as it is. But why should we integrate that into the language?
OpenMP infrastructure can be leveraged during the *implementation* of libraries like N3554, but that is purely the library implementer's choice - they may choose OpenMP or just use std::thread. There is no need to add parasitic language extensions, which would put a burden on the core language.

I am convinced that we should not drag every rudimentary facility into the language (however practical it was at the time of its creation) just because it already has some infrastructure.
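
[Ed. Note: a sketch of what "leveraging OpenMP during implementation" could look like; my_parallel_for is a made-up library function, not N3554's actual interface. Callers see plain C++; the OpenMP pragma is an internal implementation choice, and without OpenMP enabled the loop simply runs serially.]

    #include <cstddef>
    #include <iostream>
    #include <vector>

    template<typename Func>
    void my_parallel_for(std::size_t n, Func f) {
        // The implementer's private choice: OpenMP here, std::thread elsewhere.
        #pragma omp parallel for
        for (long i = 0; i < static_cast<long>(n); ++i)
            f(static_cast<std::size_t>(i));
    }

    int main() {
        std::vector<int> a(100);
        my_parallel_for(a.size(), [&](std::size_t i) { a[i] = static_cast<int>(i) * 2; });
        std::cout << a[10] << '\n';  // 20
    }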

Meeting C++ said on Apr 1, 2013 04:52 PM:

Published Part 2 today:
http://www.meetingcpp.com/index.php/br/items/a-look-at-c14-papers-part-2.html

@Evgeny Panasyuk
There are a lot of proposals for parallelism, and in the end they are all going to be put into a certain order. OpenMP has its advantages, and the paper clearly states that it does not want to add OpenMP as-is to the standard, but rather to leverage its APIs. The 15 years of experience with this de facto industry standard for parallelism can only be a win for C++. There is also a proposal for adding Cilk-like fork-join parallelism, etc. There is so much that, for now, most of these papers are there to be discussed, and maybe later unified and streamlined into better (task-based) parallelism support in C++.

Evgeny Panasyuk said on Apr 2, 2013 05:54 AM:

@JensW:

"OpenMP has its advantages, also the paper states clearly to not add OpenMP as is to the Standard, but rather leverage on its apis"

I saw the proposal; for instance, it says: "For example, to define a parallel loop, the keyword 'parallelfor' could be used instead of '#pragma omp parallel for'".

1. It is the same OpenMP-style programming, just moved from pragmas to a language keyword.

2. parallel_for is too low-level. Most of the time developers don't really need parallel_for; instead they need higher-level primitives like parallel_reduce, parallel_transform, etc.

3. In OpenMP, "parallel for" has its own semantic restrictions - you cannot turn an arbitrary "for" into a "parallel for" (for instance http://liveworkspace.org/code/2W7wwB$0 ). Those restrictions would have to be standardized, bloating the *core* language specification.
A library-only solution, meanwhile, can enforce some of those restrictions just by its interface/syntax, like:

parallel_for_each( irange(0,100), [&](int i) { a[i] = i*2; } );
or
parallel_transform( irange(0,100), a, [](int i) { return i*2; } );


Also, they propose: "The second parallelisation approach is the 'parallel task'. The keyword 'paralleltask' would be used to indicate a block of code that should be placed on the task queue and executed by either the current thread or one of the threads available in the thread pool."

Why do we need any language keyword for this? We already have the library-only std::async, which overlaps greatly with that 'paralleltask'.


"The 15 years of expierence with this defacto industry standard for parallelism can only be a win for C++."

Yes, OpenMP is supported by many popular compilers. And yes, OpenMP is good for C and Fortran.
But it is obviously a foreign construct in modern C++. Intel TBB, Microsoft PPL, and N3554 show much better, higher-level *library-only* alternatives. Such libraries are widely used.

C++ already has all the tools required to implement task-based parallelism as a library-only solution.

OpenMP can be freely leveraged by library implementers, without adding any new language features.
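
[Ed. Note: a minimal example of the library-only std::async mentioned above; heavy_work is a stand-in for a real task, and no new keyword is involved.]

    #include <future>
    #include <iostream>

    int heavy_work(int x) { return x * x; }   // stand-in for a real task

    int main() {
        // Launch the task on another thread; the calling thread is free to do
        // other work until it asks for the result.
        std::future<int> task = std::async(std::launch::async, heavy_work, 21);
        std::cout << task.get() << '\n';      // waits for the task; prints 441
    }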


Meeting C++ said on Apr 4, 2013 05:09 AM:

@Evgeny Panasyuk
N3554 is more or less the front end, which could be used to bring parallelism into the STL.
But there is also N3557, which argues that a task-based approach, from the point of view of Cilk Plus, does not make sense without language integration.
As I said, there are a lot of papers about parallelism, and most of them kind of fit together.
Bristol will bring further clarity in this area, I hope. :)