By Mantosh Kumar | Nov 10, 2015 10:13 PM | Tags: performance intermediate efficiency
A discussion of a systematic approach to code modernization.
What is Code Modernization?
by Mike Pearce (Intel)
From the article:
By Bjarne Stroustrup | Nov 10, 2015 11:10 AM | Tags: None
I’m writing this immediately after the October 19-26, 2015 ISO C++ standards meeting in Kona, Hawaii. I expect that this report will make you drool for some of the new features, make you terribly frustrated from not being able to use them right away, and a bit worried that some instability might ensue. If you also feel inspired to help or to experiment, so much the better.
I am describing features aimed for C++17, but this is not science fiction. Essentially all features exist in experimental form, and some are already shipping, so you will soon be able to experiment with them. If all you are interested in is what you can use in production tomorrow, please stop reading here. If you want a view of where C++ is going over the next couple of years, please read on and try to imagine what we could do better with the new facilities.
When people hear “Kona” and “Hawaii”, they immediately think something like “loafing on the beach.” Well, the meetings started at 8am and carried on to 10pm, with about an hour for lunch and about two for dinner. Outside the formal meetings, there are preparation, document revision, negotiation, setting up collaborations, and more. The large meeting rooms have no A/C, poor ventilation, and are generally very hot and humid. This goes on for six days – well, some of us did stop at 1pm Saturday while others continued until 4pm.
I came into the meeting with a reasonably clear idea of what I wanted (Thoughts about C++17, N4492) and what I could do to get it. At this meeting, I spent all my time in the Evolution Working Group trying to make sure that proposals I considered important progressed and that proposals I deemed wrongheaded, distracting, irrelevant, or not ready did not. Outside that, I spoke with people working on libraries, keeping an eye on the progress there. There is also constant communication between the various working group chairmen, the convener (Herb Sutter from Microsoft), and me: face-to-face, email, and phone.
Apologies for drastically simplifying issues in my summaries. Typically an hour’s serious discussion based on written proposals is reported in less than a paragraph, if at all. If you want more information, follow the links.
As for stability: Long-term stability is a feature. If you want your programs broken every five years, try a proprietary language. The ISO C++ standards committee goes out of its way to avoid breaking old code. That’s one reason ISO standardization is hard.
Let me give you an idea of scale: I counted 97 people in the room on the first morning. We processed over 140 papers. There are 4 major working groups: Core (CWG), Evolution (EWG), Library (LWG), and Library Evolution (LEWG).
The evolution groups are focused on new “stuff” and the other two on polishing the specification. I chaired the EWG for 25 years before handing it over to Ville to get more time for technical work. In addition, there are several “study groups” looking at specific areas:
Most of these groups met in Kona, so we almost always ran four-way parallel, and typically more. This is not “four men and a dog” sitting around idly chatting. Rather, it’s a lot of people working devotedly to improve C++. We are all volunteers. There are no full-time C++ standards people. Most are supported by their companies, but there are also individual contributors, some even paying their own way.
You can see this organization described on the Standard C++ Foundation’s Website: www.isocpp.org. Morgan Stanley is a gold member of the Standard C++ Foundation. In general, isocpp.org is an excellent source of information about C++, such as technical videos, articles, blogs, user group meetings, and conferences.
The documents used by the committee are found on the group’s web site:
Below, I refer to paper numbers, such as N4531 and P0013R1. You can find such documents here:
There were evening sessions (8pm-10pm or later):
The variant standard-library type. There had been endless discussions (in the committee and elsewhere) about the design of a variant type (a type-safe alternative to unions). We reached consensus on a variant (sic!) of Axel Naumann’s (CERN) proposal (P0086R0, P0087R0, P0088R0). The main difference from previous versions is that it has no undefined behavior (UB is standards jargon).
variant<int,string> v = {"asdf"};
string s = get<string>(v);
v = 42;
string s2 = get<string>(v); // doesn’t hold a string: throw bad_variant_access
The same behavior (a throw) happens if we try to retrieve a value from a variant that has been damaged by a failed move- or copy-constructor. This variant can be efficiently and compactly implemented.
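For concreteness, here is a minimal compilable sketch of that behavior, using the interface as it later shipped in C++17’s std::variant (the names std::get and std::bad_variant_access are from the shipped version, not necessarily the paper’s wording):
#include <iostream>
#include <string>
#include <variant>

int main() {
    std::variant<int, std::string> v = std::string{"asdf"};
    std::string s = std::get<std::string>(v);       // fine: v holds a string
    v = 42;                                         // now v holds an int
    try {
        std::string s2 = std::get<std::string>(v);  // wrong alternative
    } catch (const std::bad_variant_access&) {
        std::cout << "v does not hold a string\n";  // thrown, as described
    }
}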
Contracts (N4415, N4378): We had two evening sessions. I chaired the first and Herb Sutter the second. The first was (deliberately) quite wide-ranging; the second tried to focus on one simple example. The first reached a result that many found unsatisfactory. The second made further progress. All in all, we appeared to reach a consensus on a design that I happen to like. However, we’ll see how that works out.
Unfortunately, I have been “volunteered” to write it all up with the support of Alisdair Meredith (Bloomberg), Nathan Myers (Bloomberg), Gabriel Dos Reis (Microsoft), and J. Daniel Garcia (U. Carlos III, Madrid). This could be tricky, but I expect we’ll be able to write things like this:
T& operator[](int i) [[pre(bad_range{}): 0<=i && i<size()]]
{
return elem[i];
}
Depending on an assertion level (a build mode), the precondition will be evaluated and if it fails, the specified bad_range{} exception will be thrown. If no exception is mentioned, a global assertion violation handler will be invoked. In addition to preconditions, we should be able to handle postconditions and in-function-body assertions. Many details are still being resolved.
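No compiler implements the proposed attribute yet, so here is a hand-written sketch of the intended semantics; checked_vector and the explicit check are my illustration, not the proposed mechanism:
#include <stdexcept>
#include <vector>

struct bad_range : std::out_of_range {
    bad_range() : std::out_of_range("index out of range") {}
};

template<class T>
struct checked_vector {
    std::vector<T> elem;
    int size() const { return int(elem.size()); }
    T& operator[](int i)
    {
        // what [[pre(bad_range{}): 0<=i && i<size()]] would do when enabled:
        if (!(0 <= i && i < size())) throw bad_range{};
        return elem[i];
    }
};

int main() {
    checked_vector<int> v{{1, 2, 3}};
    return v[2]; // v[3] would throw bad_range
}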
Modules (N4465): I would have preferred for the implementers (primarily Gabriel Dos Reis (Microsoft), Richard Smith (Google), and Jason Merrill (Red Hat)) to be locked up in a room without distractions, but many people turned up so that the discussion became more general, more philosophical, more confused, and less constructive. Sometimes, it is best to let a few experts work out the hard decisions in private. Here is an example:
import std.vector; // like #include <vector>
import std.string; // like #include <string>
import std.iostream; // like #include <iostream>
import std.iterator; // like #include <iterator>
int main() {
using namespace std;
vector<string> v = {
"Socrates", "Plato", "Descartes", "Kant", "Bacon"
};
copy(begin(v), end(v), ostream_iterator<string>(cout, "\n"));
}
I have high hopes for significantly improved compile times from modules. Two approximations of the design exist (Clang’s and Microsoft’s; the Microsoft implementation should become available within weeks). Finally, we will be able to state directly how our programs are structured, rather than approximating that through automated copy-and-paste. We get the semantics that Dennis Ritchie wanted for C and I wanted for C++, but that was infeasible then.
The variant and contracts evening sessions had 50+ attendees, the module session a few less than that.
Most of the changes are – from a high-level view – minor, but all are important to some groups of programmers. I’ll not list them here. For details, see the trip report from Stephan Lavavej (Microsoft’s standard library expert, usually known by his initials: STL) listed below.
From my perspective the major action was in the new Technical Specifications (TSs):
Somewhere along the line, we approved array_view (from the GSL, renamed span), string_view (the GSL version was renamed string_span, to coexist with the existing string_view), optional, and variant.
In aggregate, this is of massive importance. It is a huge amount of foundational library support and most are (or will soon be) available for trying out. One advantage of libraries over language features is that you usually don’t need the next generation compilers to use a library.
See also the Kona trip reports listed at the end of this report.
I spent most of my time in the Evolution Working Group (EWG). In fact, I was in that group for every proposal discussed. That turned out to be important. It is also a bit unusual. At most meetings, I get dragged into other groups to help with issues affecting the whole language, its general use, or issues that are simply unusually tricky. At this meeting, my coordination and bushfire-fighting were done over meals and after the evening sessions.
As usual, Ville Voutilainen ran the EWG meeting with a firm hand, so we managed to get through about 50 proposals. Fortunately, the majority didn’t pass. Some proposals ought to fail. The reasons for that include
If everything else is right, I ask myself: Does it solve a problem on my top-20 list? We can’t accept a feature just because it is good, reasonably designed, and implementable; it must also add significant value to developers. Every feature has a cost: committee time, implementation effort, teaching/learning time for millions of users (now and in the future), and maintenance “forever.”
Weeding out problematic proposals is important. Many are the result of serious work and contain good ideas, so giving feedback to proposers and considering what might become viable in the future is a significant part of the process. Some proposers find that hard to take, but I point out that there is hardly a meeting where I haven’t had one of my proposals rejected. More often than not, I learn from the experience and succeed later, sometimes much later. During a lengthy boring discussion, I amused myself by digging out one of my proposals from 2003 (N1705). I had proposed things like
int f(auto); // a template taking an argument of any type and returning an int
auto g(int); // a function returning a value of a type deduced from its return statement
auto h(auto); // yes, you guessed it
We now have that, with the syntax and semantics I proposed. I don’t recall any proposal being less well received. That proposal was incomplete, its motivation insufficiently articulated, flawed in details, and a decade too early for much of the C++ community. Now, we can do better still with concepts. This is a story to remember next time your friends “don’t get” your latest and greatest idea or your proposal is voted down. It happens to us all.
The attendance of EWG varied from just under 20 to over 50; attendance tracks how important, and how controversial, a proposal is perceived to be. In no particular order:
enum class Index : uint32_t { }; // Note: no enumerator
Index i {42}; // note: no cast
Basically, if you don’t give enumerators, you can use values of the underlying type in {}s. Importantly, this is ABI-compatible with the underlying type. This was accepted, but versions of it kept bouncing back and forth between EWG and CWG (the Core Working Group, the group that refines Working Paper wording), so details are still up in the air.
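A minimal sketch of the accepted behavior in use (my reading of the current wording; as noted, details are still in flux):
#include <cstdint>

enum class Index : std::uint32_t { }; // no enumerators: a distinct integer-like type

int main() {
    Index i{42};                            // OK: list-initialization from the underlying type
    // Index j = 42;                        // error: still no implicit conversion
    auto n = static_cast<std::uint32_t>(i); // explicit conversion back; ABI-compatible
    (void)n;
}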
Default comparisons were discussed:
o Should <= be generated from < and ==, or just from <? We reaffirmed that we use < and ==. It’s better for floating point (remember NaN) and (surprisingly) not slower.
o How should a user-defined == be found? There were two alternatives: use “ordinary, traditional C++ lookup” (N4532), or make sure that == is always unique for a type. The latter proposal (mine) was voted down for being “unusual and complex”. I was somewhat annoyed because I insist that x==y should be either always true or always false (wherever you are in a program) when the values of x and y don’t change. The other proposal then got itself tangled up in complexities because you can get different results for x==y in different namespaces:
struct X { int v; X(int i) : v(i) { } }; // some user-defined type with no built-in ==
X x = 1;
X y = 1;
// …
namespace N {
bool operator==(X,X) { return true; }
bool b = (x==y); // true
}
namespace M {
bool operator==(X,X) { return false; }
bool b = (x==y); // false
}
This does violence to any reasonable idea of equality. I will bring up that issue again at the next meeting. Then, I should be much better prepared.
Coroutines were also discussed; for example, a recursive generator:
auto flatten(node* n) -> recursive_generator<decltype(n->value)>
{
if (n != nullptr) {
co_yield flatten(n->left);
co_yield n->value;
co_yield flatten(n->right);
}
}
Each call will return the next node; the position in the tree is represented as the state of the coroutine. This will ship in Microsoft C++ soon (or maybe it is already shipping). These coroutines are blindingly fast (P0054R0; equivalent to function call/return).
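A sketch of how such a generator might be driven, assuming the node type below and a recursive_generator along the lines of the proposal (not yet standard; Microsoft’s experimental implementation and libraries such as cppcoro provide equivalents):
#include <iostream>

struct node { int value; node* left; node* right; };

int main() {
    node l{1, nullptr, nullptr}, r{3, nullptr, nullptr};
    node root{2, &l, &r};
    for (int v : flatten(&root)) // each resume of the coroutine yields one value
        std::cout << v << '\n';  // prints 1 2 3 (in-order)
}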
Structured bindings, a general form of multiple return values: suppose we have
tuple<T1,T2,T3> f(/*...*/) { /*...*/ return {a,b,c}; }
auto {x,y,z} = f(); // x has type T1, y has type T2, z has type T3
The auto {x,y,z} introduces three new variables of the types of the corresponding class members. The EWG insisted that this should be done without compromising copy elision and with a protocol to allow values to be extracted from user-defined types. I agree wholeheartedly. That was the plan. Also, they wanted us to explore the possibility of optionally adding a type for an introduced variable. For example:
auto {x,string y,z} = f(); // x has type T1, y has type string, z has type T3
The point being that there are many examples of a C-style string being returned and needing conversion to string.
The proposal was accepted by acclamation! That never happens at first try. There will be a paper in the post-Kona mailing (see the WG21 site). So it seems that C++ will get a very general form of multiple return values. The fact that this looks a bit like simple pattern matching was of course noted.
Why bother with such a “little detail”? Shouldn’t we be concentrating on grand schemes and deep theoretical issues? “Structured decomposition” gives us a little convenience, but convenience is pleasant and expressing things concisely is an important consideration. Also, what are the alternatives? We could write:
auto t = f();
auto x = get<0>(t); // get<> is zero-based: the first element, of type T1
auto y = get<1>(t);
auto z = get<2>(t);
In comparison, that’s verbose (four times the size!) and potentially inefficient. Also, part of what motivated “cleaning up” multiple return values now was examples like this:
int x;
string y;
double z;
// …
tie(x,y,z) = f();
Again (for a suitable definition of f()), this code is roughly equivalent to the example of structured binding. However, here we see one of the very last reasons to use uninitialized variables, and also a default initialization of a string followed by an assignment (i.e., redundant overhead). Eliminating uninitialized variables (with their associated error opportunities) and eliminating redundant initialization (with its associated temptations for low-level performance hacks) are part of a grander scheme of things. Details can be important.
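For comparison, a sketch in the square-bracket spelling that was eventually standardized for C++17; it avoids both the uninitialized variables and the redundant assignment:
#include <string>
#include <tuple>

std::tuple<int, std::string, double> f() { return {1, "text", 4.2}; }

int main() {
    auto [x, y, z] = f(); // x: int, y: string, z: double; nothing left uninitialized
    return x + static_cast<int>(z) + static_cast<int>(y.size());
}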
The reason to reject finding f(x) from x.f() was fear of getting unstable interfaces. I consider that an unreasonable fear based on fear of overloading (see P0131R0), but many in EWG thought otherwise. The extension methods proposal (P0079R0) was rejected; many thought decorating functions to make them callable was ugly, special-purpose, and backwards. We may be able to do better with modules.
Note that we don’t approve proposals by simple majority; we aim for consensus, and consensus is defined as a very large majority (3-to-1 would be marginal). The alternative would be for a large angry minority to lobby against “agreed upon” decisions and/or to create dialects. Consensus is essential for long-term stability.
Refining the order of expression evaluation (P0145) was also discussed. Consider:
void f()
{
std::string s = "but I have heard it works even if you don't believe in it";
s.replace(0, 4, "").replace(s.find("even"), 4, "only").replace(s.find(" don't"), 6, "");
assert(s == "I have heard it works only if you believe in it");
}
It looks sensible until you realize that under the old C and C++ rules its meaning is undefined. Oops! Some compilers generate code that violates the expectations vis-à-vis chaining.
Here is another (more recent example):
#include <map>
int main()
{
std::map<int, int> m;
m[0] = m.size();
}
Under the new rules, this works: the right-hand side is evaluated first, so m.size() is 0 and that is the value stored. Today, the order of evaluation is unspecified, so m.size() may be evaluated before or after m[0] inserts the new element, and the stored value can be 0 or 1. Might you have something like that in your code today?
Performance was a concern: will performance suffer? It will for some older machine architectures (1% was mentioned), but there can be backwards-compatibility switches, and performance-sensitive applications mostly moved years ago to architectures where there is no performance degradation.
This proposal was accepted.
The proposed compile-time selection statement, constexpr_if, lets a template choose between alternatives at compile time:
template <class T, class... Args>
unique_ptr<T> make_unique(Args&&... args)
{
constexpr_if (is_constructible_v<T, Args...>) {
return unique_ptr<T>(new T(forward<Args>(args)...));
}
constexpr_else {
return unique_ptr<T>(new T{forward<Args>(args)...});
}
}
This proposal doesn’t mess with scope rules or AST-based compilation models the way earlier proposals for compile-time selection had.
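For readers who want to try the idea today, here is the same function in the if constexpr spelling that C++17 ultimately adopted (my restatement, not the proposal’s text):
#include <memory>
#include <type_traits>
#include <utility>

template <class T, class... Args>
std::unique_ptr<T> make_unique_sketch(Args&&... args)
{
    if constexpr (std::is_constructible_v<T, Args...>)
        return std::unique_ptr<T>(new T(std::forward<Args>(args)...)); // ()-initialization
    else
        return std::unique_ptr<T>(new T{std::forward<Args>(args)...}); // {}-initialization
}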
Aggregate initialization was extended to handle base classes:
struct base { int bm; };
struct derived : base { int m; };
derived d {42,43}; // bm==42, m==43
This reflects the view that a base is an unnamed member (e.g., see the ARM (1989)).
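A compilable sketch of the extension, showing both the flat form and the explicitly nested form:
#include <iostream>

struct base { int bm; };
struct derived : base { int m; };

int main() {
    derived d{42, 43};    // flat form: the base subobject is initialized first
    derived d2{{42}, 43}; // equivalent, with explicit braces for the base
    std::cout << d.bm << ' ' << d2.m << '\n'; // 42 43
}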
Do you like having to write make-functions, such as make_pair?
f(make_pair("foo"s,12)); // make a pair<string,int> and pass it to f()
I don’t, but the alternative has up until now been worse:
pair<string,int> x = { "foo"s,12 };
f(x);
C++17 will allow
f(pair{"foo"s,12});
and
pair y {"foo"s,12};
This is a general deduction mechanism, rather than something just for std::pair. The template arguments for the constructor are deduced exactly as for a make-function. This removes boilerplate.
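To see that the mechanism is general, here is a sketch with a hypothetical user-defined template, box; any template whose constructors pin down the template arguments gets the same deduction:
#include <string>
#include <utility>

template<class T>
struct box {
    T value;
    box(T v) : value(std::move(v)) {}
};

int main() {
    box b{std::string{"hi"}}; // deduced as box<std::string>, exactly as a make_box() would choose
    (void)b;
}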
The [[fallthrough]] attribute lets us state explicitly that falling through from one case to the next is intentional:
switch (n) {
case 22:
case 33: // OK: no statements between case labels
f();
case 44: // WARNING: no fallthrough statement
g();
[[fallthrough]];
case 55: // OK
// …
}
This should save us from nasty bugs and improve warning messages. Similarly, the attributes [[unused]] and [[nodiscard]] can make assumptions accessible to compilers and analysis tools. I have my doubts about the utility of those last two attributes outside C-style code, but experienced people disagree. [[unused]] may be renamed [[maybe_unused]]; there is a “bikeshed discussion” about that name. Naming is always hard.
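For completeness, a sketch of these attributes in use (using the [[maybe_unused]] spelling, since that rename looks likely):
[[nodiscard]] int compute() { return 42; }     // callers ignoring the result get a warning

void debug_hook([[maybe_unused]] int level) {} // suppresses unused-parameter warnings

int main() {
    compute();       // warning: discarded [[nodiscard]] result
    (void)compute(); // explicit discard: no warning
    debug_hook(1);
}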
I have listed under half of the proposals considered by EWG. Most (probably all) of the rest were rejected.
There has been no work on a stack_array (a standard-library type with its elements guaranteed to be on the stack). This forces people who want to place arrays on the stack to use alloca() or other non-standard facilities. As stack allocation is becoming increasingly important because of caches and concurrency, that’s rather bad. Also, we are assured that progress is being made on compile-time reflection in SG7, but I fear that nothing will be ready for C++17.
Note that there are TSs for good proposals that don’t make C++17, and after that, there is C++20.
There will be other trip reports. Here are some early ones:
The next WG21 meetings:
o Jacksonville, Florida; Feb 29 – (Perennial and Foundation)
o Oulu, Finland; June 20 – (Symbio). The C++17 feature set should be fixed here.
o Oklahoma (most likely); October
o San Diego (most likely); January or February
o “London”; June or July – (Morgan Stanley?). Vote out C++17! (if all goes well).
o Somewhere in the US; November
We seem to be on track to deliver C++17 on time as an exciting and extremely useful new standard. I have reason to believe that C++17 will be a step up of the magnitude of the step from C++98 to C++11, but easier to make because of improved work on compatibility and usability. For those of you who are not even at C++11, get there ASAP for correctness, productivity, performance, and fun, but also because C++11 will get you most of the way to C++14 and C++17.
Summary of the features (discussed in this meeting) that I consider most significant:
I’m sure that most of the time was spent on “other”. That’s important too, but I can’t describe everything without boring all.
You can compare that to the top-ten list I made about a year ago (“Thoughts about C++17”, N4492):
Given that nobody can expect to get exactly what they want, we seem to be on track.
So, what is the big picture? What are we really trying to do? N4492 is my answer – the only answer I have seen articulated:
That paper provides details.
The C++ Core Guidelines effort (https://channel9.msdn.com/Events/CPP/CppCon-2015/Writing-Good-C-14 , https://channel9.msdn.com/Events/CPP/CppCon-2015/Writing-Good-C14-By-Default ) is an attempt to set a direction that can lead us from today’s “legacy” code to the envisioned future. For starters, we offer a promise of no dangling pointers and no resource leaks (guaranteed).
By Roger Orr | Nov 7, 2015 09:17 AM | Tags: community

A reminder that the ACCU 2016 conference call for papers closes on Fri 13th Nov. The conference dates are 19th–23rd April 2016 and the venue will again be Bristol, England.
ACCU 2016 call for papers
ACCU has a strong C++ track, though it is not a C++-only conference. If you have something to share, check out the call for papers on the ACCU website:
By Adrien Hamelin | Nov 7, 2015 05:42 AM | Tags: web intermediate
If you want your C++ code to run on the web, there is Emscripten, but there is also Cheerp:
Cheerp 1.1 - C++ for the Web with fast startup times, dynamic memory and now, more speed!
by Leaning Technologies Ltd.
From the article:
Cheerp is a C++ compiler for the Web platform. Roughly a year ago we released Cheerp 1.0 with the promise of making C++ a first class language for the Web, with full access to DOM and HTML5 APIs (including WebGL) and great performance. At that time, we could only partially meet that promise.
With our early adopters starting to use Cheerp on real world, large scale applications, we were proud to see that Cheerp could indeed be used to seamlessly integrate C++ code into HTML5 apps. But we also realized that the performance of the compiled code was disappointing on real codebases.
As an example, our first benchmarking result on our first large-scale (~1M sloc) customer code was around forty times (40x) slower than native. Not only was this result disappointing, but it also was much worse than what we were expecting based on our internal benchmarks.
One year later, after significant effort focused on performance optimizations, we are here today to announce Cheerp 1.1...
By Axel | Nov 6, 2015 04:32 PM | Tags: intermediate experimental efficiency
Variant is like a union, only it tells you what it currently contains, and it will barf if you try to get something out of it that it does not currently contain. It's the type-safe sibling of the union:
variant<double, string> v = 42.; double d = get<double>(v);
I had proposed a variant many moons ago (N4218). After many discussions it seemed that the committee cannot agree on what the ideal C++ variant would look like. I will resolve this cliffhanger -- but before doing that let me introduce you to some of the key discussion points.
An ideal variant will always contain one of its alternative types. But look at this code snippet:
variant<string, MyClass> v = "ABC"; v = MyClass();
The second line will destroy the old value contained in the variant and construct the new value of a different type. Now suppose that the MyClass construction threw: what will the variant contain? What happens when you call get<1>(v)? What happens when the variant gets destroyed?
Either we provide the strong exception guarantee (the variant would still contain the string), which requires double buffering, as for instance boost::variant does. Or we restrict the alternative types to only those that are nothrow_move_constructible. Or we make this a new state -- "invalid, because the variant has been derailed due to an exception thrown during type-changing assignment". Or we say "you shouldn't write code that behaves like this; if you do you're on your own", i.e. undefined behavior. The committee was discussing what to do, and so was The Internet. There are other design decisions -- default construction, visitation etc. -- but they are all insignificant compared to how to deal with the throwing, type-changing assignment.
I have tried to present the options and their pros and cons in P0086. In short: it's incredibly difficult and fragile to predict whether a type is_nothrow_move_constructible. And double buffering -- required for a strong exception guarantee -- kills the quest for an efficient variant. But efficiency is one of the main motivations for using a discriminated union.
After the second Library Evolution Working Group (LEWG) review in Lenexa, we got P0088R0: a design that was making this invalid state extremely rare. But if it happened, access to the value would result in undefined behavior. This caused a vivid reaction from the committee members. And from The Internet. Hundreds of emails on the committee email lists. Many many smart and convincing blog posts.
In the end, different parts of the committee strongly supported different designs -- and vetoed other designs. Massive disagreement. So when we came to our C++ Standards Meeting in Kona, it was immediately clear that we needed to expose this to the full committee (and not just LEWG). The expectation was that we would declare variant dead, and keep it locked away for the next five years. At least. (And I would have time to water my fishes again.)
So back to the cliffhanger. On stage before the full committee on Monday evening in Kona were David Sankel and I. We presented (and represented) the different design options. While we were discussing with the committee members, live and uncut and on stage, David and I realized that we could make it happen. "The Kona Kompromise": similar to P0088R0, but instead of undefined behavior when extracting the value of such a zombie variant, it would just throw!
The Kona Kompromise means that we don't pay any efficiency cost for the extremely rare case of a throwing move. The interface stays nice and clean. A variant of n alternatives is "mostly" an n-state type. It offers the basic exception guarantee at no relevant performance loss. It is a safe vocabulary type for every-day use, also for novices. The vast majority of the committee was convinced by this idea. Almost everyone in the room was happy!
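A sketch of the Kompromise in action, using the names that later shipped in C++17's std::variant (valueless_by_exception and bad_variant_access; MyClass and its throwing constructor are my illustration):
#include <iostream>
#include <stdexcept>
#include <string>
#include <variant>

struct MyClass {
    MyClass() { throw std::runtime_error("construction failed"); }
};

int main() {
    std::variant<std::string, MyClass> v = std::string{"ABC"};
    try {
        v.emplace<MyClass>();   // destroys the string, then the constructor throws
    } catch (const std::runtime_error&) {}
    if (v.valueless_by_exception()) {          // the rare "zombie" state
        try {
            std::get<MyClass>(v);              // the Kompromise: throws, not UB
        } catch (const std::bad_variant_access&) {
            std::cout << "zombie variant access throws\n";
        }
    }
}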
Do we have a std::variant now? Not yet. But we are a giant leap closer: variant is now under wording review with the Library Working Group (LWG); I will publish a new revision in the Kona post-mailing (P0088R1). This will get re-reviewed in Jacksonville, first week of March. Once LWG gives the green light, the full committee can vote variant into a Technical Specification (TS) as std::experimental::variant. Now that a large fraction of the committee has expressed its consent (happiness, even!), I expect that this will be in the TS called Library Fundamentals, v3. It might or might not make it into C++17 -- that depends mostly on how quickly I manage to bring P0088 into an acceptable state, and how quickly we will gain use experience with variant.
So there is one thing I'd really appreciate your help with: std::experimental::variant will show up in library implementations near you, likely in their first releases next year. It would be fantastic if you could try it out, and as importantly: give feedback, on the public forums or by contacting me directly ([email protected]). Your feedback will tell us whether the design decisions we took are the right ones, for instance regarding default construction, visitation, performance, and especially converting construction and assignment. As they say here: Mahalo!
Axel Naumann, CERN ([email protected])
By robwirving | Nov 6, 2015 07:39 AM | Tags: None
Episode 33 of CppCast, the only podcast for C++ developers by C++ developers. In this episode Rob and Jason are joined by Tobias Hunger to discuss the Qt Creator IDE for C++.
CppCast Episode 33: Qt Creator with Tobias Hunger
by Rob Irving and Jason Turner
About the interviewee:
Tobias graduated from the University of Kaiserslautern in Germany with a degree in computer engineering. Before joining Nokia in 2009 to work on Qt Creator, he was a consultant specializing in systems administration and later Qt software development. He went with Qt to Digia and now works for The Qt Company in Berlin, Germany.
Tobias has been an open source contributor ever since his student days and is now a maintainer in the Qt project, responsible for the version control plugins in Qt Creator. He also is heavily involved with the project management plugins.
In his spare time he does way too many computer-related things, but also manages to read books, go to the movies, and play with his son.
By Meeting C++ | Nov 6, 2015 03:17 AM | Tags: intermediate community basics
I posted an update on my founding C++ User Groups article from 2 years ago:
6 topics on starting and running a User Group
by Jens Weller
From the article:
Almost two years ago I blogged about founding C++ User Groups; since then I have learned a lot more on the topic, and I want to share that experience with you in this blog post. While my focus here at Meeting C++ is C++, this post is more on the topic of a User Group, so it's also useful to you if you want to start a user group on something else. Yet, I might stray away into C++ lands in this post...
By Adrien Hamelin | Nov 5, 2015 09:03 AM | Tags: intermediate c++11
Everything is in the title:
Becoming a Rule of Zero Hero
by Glennan Carnie
From the article:
Previously, we’ve looked at The Rule of Zero which, in essence, says: avoid doing your own resource management; use a pre-defined resource-managing type instead.
This is an excellent guideline and can significantly improve the quality of your application code. However, there are some circumstances where you might not get exactly what you were expecting. It’s not that the code will fail; it just might not be as efficient as you thought.
Luckily, the solution is easy to implement and has the additional side-effect of making your code even more explicit.
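As a minimal illustration of the guideline (a sketch of my own, not from the article):
#include <memory>
#include <string>
#include <vector>

// Rule of Zero: every member manages its own resource, so the
// compiler-generated destructor, copy, and move operations are correct
// and no special member functions need to be written.
class Session {
    std::string user_;
    std::vector<int> history_;
    std::shared_ptr<int> shared_state_;
};

int main() {
    Session a;
    Session b = a; // member-wise copy; all resources handled correctly
    (void)b;
}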
By Andrey Karpov | Nov 5, 2015 01:57 AM | Tags: None
The idea to check Cfront occurred to me after reading an article devoted to the 30th anniversary of the first release version of this compiler: "30 YEARS OF C++".
Celebrating 30-th anniversary of the first C++ compiler: let's find bugs in it
by Andrey Karpov, Bjarne Stroustrup
From the article:
Bjarne warned me that checking Cfront could be troublesome: "Please remember this is *very* old software designed to run on a 1MB 1MHz machine and also used on original PCs (640KB). It was also done by one person (me) as only part of my full time job".
By Adrien Hamelin | Nov 4, 2015 03:18 PM | Tags: experimental
You can now compile using Visual C++ without Visual Studio:
Announcing Visual C++ Build Tools 2015 – standalone C++ tools for build environments
by Marian Luparu
From the article:
Together with the availability of Visual Studio 2015 Update 1 RC, we’re also announcing a new way of acquiring the C++ tools: as a standalone installer that only lays down the tools required to build C++ projects without installing the Visual Studio IDE. This new installer is meant to streamline the delivery of the C++ build tools in your build environments and continuous-integration systems.