September 2015

CppCon 2015 Program Highlights, 14 of N

The CppCon 2015 conference program has been posted for the upcoming September conference. We’ve received requests that the program continue to be posted in “bite-sized” posts, a few sessions at a time, to make the 100+ sessions easier to absorb, so here is another set of talks. This series of posts will conclude once the entire conference program has been posted in this way.

 

The following interrelated CppCon 2015 talks tackle interesting issues and more.

In this post:

  • An Overview on Encryption in C++
  • Demystifying Floating Point Numbers
  • What is Open Source, and Why Should You Care?
  • Live lock-free or deadlock (practical Lock-free programming), Part I, and Part II
  • Programming with less effort in C++
  • Functional Design Explained

 

An Overview on Encryption in C++ by Jens Weller, Meeting C++

Encryption has become a very important topic for C++ developers, and this session will serve as an introduction to and overview of it. I will present an overview of the popular encryption libraries cryptopp, botan, and libsodium, give you an update on the popular encryption algorithms AES and RSA, and explain why cryptoboxes can be a great help.
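The abstract contains no code, but as a rough illustration of what a “cryptobox” is, here is a minimal sketch of authenticated public-key encryption using libsodium’s crypto_box API (error handling is trimmed, and the message and variable names are illustrative only, not taken from the talk):

#include <sodium.h>

int main() {
    if (sodium_init() < 0) return 1;   // the library must be initialised first

    // Each party has a Curve25519 key pair.
    unsigned char alice_pk[crypto_box_PUBLICKEYBYTES], alice_sk[crypto_box_SECRETKEYBYTES];
    unsigned char bob_pk[crypto_box_PUBLICKEYBYTES],   bob_sk[crypto_box_SECRETKEYBYTES];
    crypto_box_keypair(alice_pk, alice_sk);
    crypto_box_keypair(bob_pk, bob_sk);

    const unsigned char msg[] = "attack at dawn";
    unsigned char nonce[crypto_box_NONCEBYTES];
    randombytes_buf(nonce, sizeof nonce);              // never reuse a nonce with the same key pair

    // Encrypt and authenticate from Alice to Bob in one call.
    unsigned char boxed[crypto_box_MACBYTES + sizeof msg];
    crypto_box_easy(boxed, msg, sizeof msg, nonce, bob_pk, alice_sk);

    // Bob decrypts and verifies; a non-zero return means tampering or a wrong key.
    unsigned char opened[sizeof msg];
    return crypto_box_open_easy(opened, boxed, sizeof boxed, nonce, alice_pk, bob_sk) == 0 ? 0 : 1;
}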


Demystifying Floating Point Numbers by John Farrier, Sr. Lead Technologist, Booz Allen Hamilton

Every day we develop software that relies on math, yet we often overlook the implications of using IEEE floats. From the often-cited “floating point error” to unstable algorithms, this talk will cover why floats matter, how they are stored, how IEEE floating point affects math, and how to design better algorithms around it. Finally, the talk will conclude with a quick case study of storing time for games and simulations.
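For readers who want a concrete taste of the issues mentioned above, here is a tiny illustrative snippet (not from the talk); the time-accumulation loop hints at the kind of problem the closing case study on storing time addresses:

#include <cstdio>

int main() {
    // 0.1 and 0.2 have no exact binary representation, so the sum is not exactly 0.3.
    double a = 0.1, b = 0.2;
    std::printf("%.17g\n", a + b);                        // prints 0.30000000000000004
    std::printf("%s\n", (a + b == 0.3) ? "equal" : "not equal");

    // Accumulating a small time step in a 32-bit float drifts: after an hour of
    // 60 Hz frames the total is no longer exactly 3600 seconds.
    float t = 0.0f;
    for (int i = 0; i < 60 * 60 * 60; ++i) t += 1.0f / 60.0f;
    std::printf("%.6f\n", t);
}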


What is Open Source, and Why Should You Care? by Kevin P. Fleming, Member of the CTO Office, Bloomberg

In this session, Kevin will present a condensed history of open source software: its origins, motivations and effect on the world of software development. He'll then talk about open source *beyond* software, and various ways that students can get involved in open source projects to develop useful (and marketable) skills. These are skills which are not taught in most degree programs, but are very valuable for jobs in scientific and engineering disciplines.


Live lock-free or deadlock (practical Lock-free programming), Part I, and Part II by Fedor Pikus, Chief Scientist, Mentor Graphics

Part I: an introduction to lock-free programming. We will cover the fundamentals of lock-free vs lock-based programming and explore the reasons to write lock-free programs, as well as the reasons not to. We will learn, or be reminded of, the basic tools of lock-free programming and consider a few simple examples. To make sure you stay on for Part II, we will try something beyond the simple examples, such as a lock-free list, just to see how insanely complex the problems can get.

Part II: having been burned by the complexities of generic lock-free algorithms in Part I, we take a more practical approach: assuming we are not all writing the STL, what limitations can we really live with? It turns out that there are some inherent limitations imposed by the nature of the concurrent problem: is there really such a thing as a “concurrent queue”? (Yes, sort of.) And we can take advantage of these limitations (what an idea, concurrency actually makes something easier!). Then there are practical limitations that most application programmers can accept: is there really such a thing as a “lock-free queue”? (Maybe, and you don’t need it.) We will explore practical examples of (mostly) lock-free data structures, with actual implementations and performance measurements. Even if the specific limitations and simplifying assumptions used in this talk do not apply to your problem, the main idea to take away is how to find such assumptions and take advantage of them, because, chances are, you can use lock-free techniques to write code that works for you and is much simpler than what you learned before.
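As a flavour of the “basic tools” Part I refers to, here is a minimal sketch (not taken from the talk) of the push side of a classic Treiber lock-free stack; pop is deliberately omitted, because safely reclaiming popped nodes is exactly where the complexity the speaker warns about begins:

#include <atomic>
#include <utility>

template <typename T>
class LockFreeStack {
    struct Node { T value; Node* next; };
    std::atomic<Node*> head_{nullptr};
public:
    void push(T v) {
        Node* n = new Node{std::move(v), head_.load(std::memory_order_relaxed)};
        // Retry until head_ is swung from n->next to n without interference;
        // on failure, compare_exchange_weak reloads the current head into n->next.
        while (!head_.compare_exchange_weak(n->next, n,
                                            std::memory_order_release,
                                            std::memory_order_relaxed)) {
        }
    }
};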


Programming with less effort in C++ by Ferreira Da Costa and Sylvain Jubertie

The C++ language and libraries offer different ways to implement code, for example using explicit loops or STL algorithms to traverse containers and process data. C++11 and C++14 also bring new language features aimed at simplifying the writing of code. But what gain can we expect in terms of development effort when using these different possibilities and features? Or, as a developer might ask: is it worthwhile for me to spend time learning new C++ libraries or standards in order to spend less effort and time on my future code?

Before answering these questions, we must define development effort and a way to measure it. Thus, we first propose to describe existing software metrics, from simple Source Lines of Code (SLOC) to the more complex Halstead metrics, then to implement them in an automatic tool based on Clang tooling, and finally to apply them to several codebases to compare their respective development efforts.

First results show that using modern C++ features like auto, decltype, and lambdas helps to dramatically reduce development effort. These results may help convince developers to use new C++ features, to port their code from old standards to new ones, or even to switch from other languages to C++!
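To make the comparison concrete, here is an illustrative pair of snippets (not from the talk) computing the same result with an explicit loop and with a standard algorithm plus a C++14 generic lambda; metrics such as SLOC or Halstead volume would be computed over exactly this kind of variant:

#include <cstddef>
#include <numeric>
#include <vector>

double sum_of_squares_loop(const std::vector<double>& v) {
    double sum = 0.0;
    for (std::size_t i = 0; i < v.size(); ++i) {
        sum += v[i] * v[i];
    }
    return sum;
}

double sum_of_squares_stl(const std::vector<double>& v) {
    // std::accumulate with a generic lambda: fewer tokens, no index bookkeeping.
    return std::accumulate(v.begin(), v.end(), 0.0,
                           [](double acc, auto x) { return acc + x * x; });
}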


Functional Design Explained by David Sankel, Stellar Science

An oft-cited benefit of learning a functional language is that it changes one's approach to solving problems for the better. The functional approach places such a strict emphasis on simple, highly composable solutions that an otherwise varied landscape of solution possibilities narrows down to only a few novel options.

This talk introduces functional design and showcases its application to several real-world problems. It will briefly cover denotational semantics and several math-based programming abstractions. Finally, the talk will conclude with a comparison of functional solutions to the results of more traditional design methodologies.

No prior knowledge of functional programming or functional programming languages is required for this talk. All the examples make use of the C++ programming language.

CppCon 2014 C++ in Huge AAA Games--Nicolas Fleury

Have you registered for CppCon 2015 in September? Don’t delay – Registration is open now.

While we wait for this year’s event, we’re featuring videos of some of the 100+ talks from CppCon 2014 for you to enjoy. Here is today’s feature:

C++ in Huge AAA Games

by Nicolas Fleury

(watch on YouTube) (watch on Channel 9)

Summary of the talk:

Video games like Assassin's Creed or Rainbow Six are among the biggest code bases linked statically into a single executable. Iteration time is critical to developing a great game, and keeping a complete compilation time of a few minutes is a constant challenge. This talk will explain the reality of C++ usage at Ubisoft Montreal for huge projects. Ideas will be shared regarding performance, debugging, and iteration time.

CppCon 2015 Program Highlights, 13 of N

The CppCon 2015 conference program has been posted for the upcoming September conference. We’ve received requests that the program continue to be posted in “bite-sized” posts, a few sessions at a time, to make the 100+ sessions easier to absorb, so here is another set of talks. This series of posts will conclude once the entire conference program has been posted in this way.

 

Today's processors have many cores, and using them well and safely in concurrent programs is not always easy.

The following interrelated CppCon 2015 talks tackle these issues and more.

In this post:

  • Racing the file system
  • Transactional Memory in Practice
  • Single Threaded Functional to Massively Parallel Stochastic with AMP
  • How to make your data structures wait-free for reads
  • Parallel Program Execution using Work Stealing
  • Executors for C++ - A Long Story

 

Racing the file system by Niall Douglas, Consultant for Hire, ned Productions Ltd

Almost every programmer knows about and fears race conditions on memory, where one strand of execution may concurrently update data in use by another strand of execution, leading to an inconsistent, and usually dangerous, read of program state. Almost every programmer is therefore aware of mutexes, memory ordering, semaphores, and the other techniques used to serialise access to memory.

Interestingly, most programmers are only vaguely aware of potential race conditions on the filing system, and as a result write code which assumes that the filing system does not suddenly change underneath them while they are working on it. This assumption of a static filing system introduces many potential security bugs, not to mention ways of crashing your program or causing data loss and corruption.

This workshop will cover some of the ways in which filing system races can confound you, and the portable idioms and patterns you should employ to prevent misbehaviour, even across networked Samba shares. Finally, the proposed Boost library AFIO will be introduced, which can help application developers write filing-system-race-free code portably.
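For readers unfamiliar with filing system races, here is a small, deliberately naive check-then-use pattern (the classic TOCTOU bug), purely for illustration and not from the talk; it uses C++17's std::filesystem for brevity, though the same hazard exists with any API:

#include <filesystem>
#include <fstream>

// Time-of-check-to-time-of-use: between exists() and the open, another process
// may delete, replace, or re-point the path (for example via a symlink), so the
// check guarantees nothing about the file that is actually opened.
void append_entry(const std::filesystem::path& p) {
    if (std::filesystem::exists(p)) {            // time of check
        std::ofstream out(p, std::ios::app);     // time of use
        out << "entry\n";
    }
}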


Transactional Memory in Practice by Brett Hall, Principal Software Engineer, Wyatt Technology

Transactional memory has been held up as a panacea for concurrent programming in some quarters. The C++ standardization committee is even looking at including it in the standard. But is it really a panacea? Has anyone used it in a shipping piece of software? There are scattered examples, mostly from the high-performance and super-computing realms. On the other end of the spectrum, at Wyatt Technology we've been using transactional memory in a desktop application that does data acquisition and analysis for the light-scattering instruments we build. That application is called Dynamics, and we've been using a software transactional memory system in it for four years now. This talk will detail how our system works, how well it has worked, and what pitfalls we've run into. Prior experience with transactional memory will not be assumed, though it will help if you have experience programming threads with locks, and an open mind about alternatives and why we're looking for them.
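The standardization work mentioned above is the Transactional Memory TS. As a rough illustration of its surface syntax (this is the TS sketch, not Wyatt's in-house STM that the talk describes, and it requires an experimental compiler such as GCC with -fgnu-tm):

#include <vector>

std::vector<int> readings;   // shared between threads

void record(int value) {
    // A synchronized block behaves as if all such blocks were protected by one
    // global lock; the implementation is free to run them speculatively as
    // transactions and roll back on conflict.
    synchronized {
        readings.push_back(value);
    }
}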


Single Threaded Functional to Massively Parallel Stochastic with AMP by Kevin Carpenter, Software Engineer, Carpenter Systems LLC

Come with us as we take a legacy MFC financial modelling application that is largely functional in design and transform it into something new. We will take a portion of this large financial simulation application and change its single-threaded ways into a parallel stochastic model, transforming single classes with a hodge-podge of functions into an object-oriented parallel design using C++ AMP and a stochastic modelling methodology. Aside from focusing on the key portions of converting functional single-threaded code to a parallel design, we will also touch on some of the details of financial modelling for interest rate risk.
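For readers who haven't seen C++ AMP, here is a minimal illustrative kernel (not from the talk; the function and variable names are made up, and C++ AMP is a Microsoft extension, so this needs Visual C++):

#include <amp.h>
#include <vector>

void scale_on_gpu(std::vector<float>& data, float factor) {
    using namespace concurrency;
    array_view<float, 1> av(static_cast<int>(data.size()), data);
    // The lambda body is compiled for the accelerator; restrict(amp) limits it
    // to the subset of C++ that can run there.
    parallel_for_each(av.extent, [=](index<1> idx) restrict(amp) {
        av[idx] *= factor;
    });
    av.synchronize();   // copy results back to the host vector
}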


How to make your data structures wait-free for reads by Pedro Ramalhete, Cisco

In this talk we will describe a new concurrency control algorithm with blocking write operations and wait-free, population-oblivious read operations, which we named the Left-Right algorithm.

We will show a new pattern in which this algorithm is applied. It requires two instances of a given resource and can be used with any data structure, allowing concurrent access to it much like a reader-writer lock, but in a non-blocking manner for reads, with safe memory management and no need for a garbage collector.
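The Left-Right algorithm itself is what the talk presents. As a much simpler point of comparison, here is an illustrative copy-on-write sketch using the C++11 atomic shared_ptr free functions, which also gives readers a non-blocking path and reclaims memory through reference counting rather than a garbage collector; reads here are lock-free at best, not wait-free, which is part of what Left-Right improves on (all names below are illustrative):

#include <map>
#include <memory>
#include <string>

class Snapshotted {
    using Map = std::map<std::string, int>;
    std::shared_ptr<const Map> current_ = std::make_shared<const Map>();
public:
    int lookup(const std::string& key) const {
        // Readers grab a snapshot; the old map stays alive as long as any reader holds it.
        auto snap = std::atomic_load(&current_);
        auto it = snap->find(key);
        return it == snap->end() ? -1 : it->second;
    }
    void update(const std::string& key, int value) {
        // Writers copy, modify, then publish. Concurrent writers would also
        // need a mutex to avoid lost updates; omitted here for brevity.
        auto next = std::make_shared<Map>(*std::atomic_load(&current_));
        (*next)[key] = value;
        std::atomic_store(&current_, std::shared_ptr<const Map>(std::move(next)));
    }
};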


Parallel Program Execution using Work Stealing by Pablo Halpern, Intel Corp.

If you've used a C++ parallel-programming system in the last decade, you've probably run across the term "work stealing." Work stealing is a scheduling strategy that automatically balances a parallel workload among the available CPUs of a multi-core computer, achieving near-optimal theoretical utilization of the computational resources. Modern C++ parallel template libraries such as Intel's TBB or Microsoft's PPL, and language extensions such as Intel Cilk Plus or OpenMP tasks, are implemented using work-stealing runtime libraries.

Most C++ programmers pride themselves on understanding how their programs execute on the underlying machine. Yet, when it comes to parallel programming, many programmers mistakenly believe that if you understand threads, then you understand parallel runtime libraries. In this talk, we'll investigate how work stealing applies to the semantics of a parallel C++ program. We'll look at the theoretical underpinnings of work stealing, how it achieves near-optimal machine utilization, and a bit about how it's implemented. In the process, we'll discover some pitfalls and how to avoid them. You should leave this talk with a deeper appreciation of how parallel software runs on real systems.

Previous experience with parallel programming is helpful but not required. A medium level of expertise in C++ is assumed.
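As a concrete picture of the programming model the talk describes, here is an illustrative divide-and-conquer sketch using Intel TBB, one of the work-stealing libraries named above (the cutoff value is arbitrary and the example is not from the talk):

#include <tbb/parallel_invoke.h>

// Each parallel_invoke call exposes two independent tasks to the scheduler.
// Idle worker threads steal pending tasks from busy ones, so the recursion
// tree is balanced across cores without any manual thread management.
long fib(long n) {
    if (n < 20) {                                   // serial cutoff: below this, task overhead dominates
        return n < 2 ? n : fib(n - 1) + fib(n - 2);
    }
    long a = 0, b = 0;
    tbb::parallel_invoke([&] { a = fib(n - 1); },
                         [&] { b = fib(n - 2); });
    return a + b;
}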


Executors for C++ - A Long Story by Detlef Vollmann

Executors will be a basic building block in C++ for asynchronous, concurrent, and parallel work. The job of an executor is simple: run the tasks that are posted to it. So the first proposals for executors in C++ had a very simple interface. However, being a building block, the executor should provide an interface that is useful for all kinds of higher-level abstractions and needs to work together with different types of concurrency, like co-operative multi-tasking or GPU-like hardware. This presentation will look at the evolution of the executor proposals for C++ and what they'll provide for normal application programmers.
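As a back-of-the-envelope illustration of the "simple interface" idea, here is a hypothetical sketch; it is not any of the actual proposals the talk traces, and the names are made up:

#include <thread>
#include <utility>

// Hypothetical minimal executors: something you can post work to, with the
// executor deciding where and when it runs.
struct InlineExecutor {
    template <typename F>
    void post(F&& f) { std::forward<F>(f)(); }                        // run right here, right now
};

struct ThreadPerTaskExecutor {
    template <typename F>
    void post(F&& f) { std::thread(std::forward<F>(f)).detach(); }    // fire-and-forget thread
};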

Declaring the move constructor--Andrzej Krzemieński

An interesting question, with a proposed answer:

Declaring the move constructor

by Andrzej Krzemieński

From the article:

I am not satisfied with the solution I gave in the previous post. The proposed interface was this:


class Tool
{
private:
  ResourceA resA_;
  ResourceB resB_;
  // more resources
 
public:
  // Tool's interface
 
  Tool(Tool &&) = default;           // noexcept is deduced
  Tool& operator=(Tool&&) = default; // noexcept is deduced
 
  Tool(Tool const&) = delete;
  Tool& operator=(Tool const&) = delete;
};

static_assert(std::is_move_constructible<Tool>::value, "...");
static_assert(std::is_move_assignable<Tool>::value, "...");

In a way, it is self-contradictory. The whole idea behind departing from the Rule of Zero is to separate the interface from the current implementation. Yet, as the comments indicate, the exception specification is deduced from the current implementation, and thus unstable...

CoderPower exclusive C++ Back to School Challenge

Summer is almost over, meaning it’s time to return to reality. To help you reconnect with coding and kick off a new season, CoderPower offers an exclusive opportunity to discover the innovative interface of the new HP all-in-one computer, the HP Sprout.

Until Oct. 26th, you can get to know the Sprout SDK in an environment that will show you, step by step, how easy and fun coding on Sprout really is.

Your mission: follow the three warm-ups created by the @CoderPower team, which will teach you how to return the SDK version number and how to use the monitor GUID as well as the touch mat. Do the complementary exercises and level up your skills! Write, copy, and paste your code in the panel on the right side of the screen. The technology is simulated throughout by a mocking system that sends back an answer. If it is the correct one, well done! If not, use your super-coder abilities to spot your mistake and try again!

Then enter the weekly challenges, for example “Show the mat keyboard” or “Extract Outline”. For starters, two challenges will be released in the first two weeks, and then one challenge per week until Oct. 26th. So you have to perform well in 10 challenges to be the best C++ coder!

The rules are easy: you code against the clock. The quicker you are, the more points you get. Points are cumulative across challenges, and at the end of the contest, the contestant with the highest score on the leaderboard will win an HP Sprout. Be aware that you can only take a challenge once! Do your best on the first try!

To get started, click here http://bit.ly/1KhxOw9

CppCon 2014 Embarcadero Case Study: Bringing CLANG/LLVM To Windows--John "JT" Thomas

Have you registered for CppCon 2015 in September? Don’t delay – Registration is open now.

While we wait for this year’s event, we’re featuring videos of some of the 100+ talks from CppCon 2014 for you to enjoy. Here is today’s feature:

Embarcadero Case Study: Bringing CLANG/LLVM To Windows

by John "JT" Thomas

(watch on YouTube) (watch on Channel 9)

Summary of the talk:

CLANG/LLVM delivers a highly conforming C++ compiler and architecture for targeting multiple CPUs, and, as such, has seen success in iOS and other operating systems. Embarcadero has successfully delivered the first commercial compiler for Windows based on CLANG/LLVM. This session describes the benefits of CLANG/LLVM as well as the challenges in bringing it to the Windows operating system. Particular emphasis is placed on managing the changes in CLANG as well as the additional features added to enable Windows development.

CppCon 2015: More Lightning Talks -- Kate Gregory

Reminder: The CppCon regular registration rate ends in 2 1/2 days! After that, late registration will still be available.

CppCon 2015: More Lightning Talks

by Kate Gregory

From the announcement:

One of the big surprises last year at CppCon was the tremendous response to the lightning talks. People kept submitting them, and we just kept adding sessions. This year, we’re adding those sessions in advance as the submissions come in, so that you can plan to attend. (And yes, you can still submit a talk. We have time slots we can hold more lightning talk sessions in.) We’ve just added two more sessions – Tuesday lunch and Wednesday morning – to accommodate the submissions already received. You’ll see the lightning talk sessions in yellow on the program. The abstract is vague, and it’s not going to get less vague. You don’t know precisely what you’re going to get until you show up.

What roughly will you get? A number of different talks – some funny, some very technical, some personal, some inspirational, some that will make you grateful you have the job you do and not the speaker’s job. Some will be 5 minutes long and some 15 minutes long. A few might be followups to something that’s already happened. Others might be a way to invite you to something that hasn’t happened yet. Some will be the very first public speaking that speaker has ever done, and some will be a chance to let your hair down with a speaker you’ve seen being serious many times before. Some might not interest you, but that’s ok – they’re short, you can be bored for 5 or 15 minutes, and then there will be a different one. They’re little bite-size goodies, and for many of us they were a very enjoyable highlight of the conference. Add some to your schedule now, and be prepared to get up a little early or stay on site a little late to get the full benefit of your time here!
