Yaser Zhian, Dead Mage. IGDI, Workshop 10, May 30th-31st, 2013

Today: auto, decltype, range-based for, etc.; lambdas; rvalue references and moving; variadic templates. Tomorrow: threads, atomics and the memory model; other features (initializer lists, constexpr, etc.); library updates (new containers, smart pointers, etc.); general Q&A.

Ways to write code that is: cleaner and less error-prone; faster; richer and able to do more (occasionally). Know thy language: you can never have too many tools. Elegance in the interface; complexity (if any) in the implementation. Take everything here with a grain of salt!

We will use Visual Studio 2012 (with the Nov 2012 CTP compiler update). Go ahead: open it up, make a project, add a file, and set the toolset in the project options. Write a simple "hello, world" and run it. Please do try and write code; the sound of keyboards does not disrupt the workshop.

We will also use the ideone online compiler. You might want to register an account there; do so while I talk about unimportant stuff and answer any questions… Remember to select the C++11 compiler. Write and run a simple program there as well.

What is the type of a + b? a? int? double? It depends on operator+ and on a and b, and on a whole lot of name lookup, type deduction and overload resolution rules. Even if you don't know, the compiler always does: decltype(a + b) c; c = a + b; (instead of, e.g., double c;)

What's the return type of this function? template <typename T, typename U> ??? Add (T const & a, U const & b) { return a + b; } One answer is decltype(T() + U()). Not entirely correct. (Why? For one thing, it requires T and U to be default-constructible.) The correct answer is decltype(a + b). But that won't compile.

What is wrong with this? template <typename T, typename U> decltype(a + b) Add (T const & a, U const & b) { return a + b; } (At the point where decltype(a + b) appears, a and b have not been declared yet.) This is basically the motivation behind the new function declaration syntax in C++11.

auto Fun (type1 p1) -> returntype; The previous function template then becomes: template <typename T, typename U> auto Add (T const & a, U const & b) -> decltype(a + b) { return a + b; } This works for ordinary functions too: auto Sqr (float x) -> float { return x * x; }
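A minimal compilable sketch of the Add template above, with a couple of calls (the test values are mine):

    #include <iostream>
    #include <string>

    // Trailing return type: a and b are in scope inside decltype(a + b),
    // so mixed-type addition deduces the right result type.
    template <typename T, typename U>
    auto Add (T const & a, U const & b) -> decltype(a + b)
    {
        return a + b;
    }

    auto Sqr (float x) -> float { return x * x; }

    int main ()
    {
        std::cout << Add(2, 3.5) << "\n";                    // double: 5.5
        std::cout << Add(std::string("foo"), "bar") << "\n"; // std::string: foobar
        std::cout << Sqr(3.0f) << "\n";                      // 9
    }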

Putting auto where a type name is expected instructs the compiler to infer the type from the initializing expression, e.g. auto foo = a * b + c * d; auto bar = new std::map<std::string, int>; auto baz = new std::map<std::string, std::vector<int>>::const_iterator; (the element types here are only illustrative; the point is the long type names auto saves you from spelling out.)

Some more examples: auto x = 0; auto y = do_stuff (x); auto const & y = do_stuff (x); auto f = std::bind (foo, _1, 42); for (auto i = c.begin(), e = c.end(); i != e; ++i) {…}

Sometimes, you have to be very careful with auto and decltype: std::vector<int> const v (1); auto a = v[0]; // int decltype(v[1]) b = 1; // int const & auto c = 0; // int auto d = c; // int decltype(c) e = 1; // int decltype((c)) f = c; // int & decltype(0) g; // int

How common is this code snippet? vector<string> v; for (vector<string>::iterator i = v.begin(); i != u.end(); i++) cout << *i << endl; How many problems can you see? Here's a better version: for (auto i = v.cbegin(), e = v.cend(); i != e; ++i) cout << *i << endl; This is the best version: for (auto const & s : v) cout << s << endl;
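For reference, a compilable side-by-side of the three versions (the contents of v are made up):

    #include <iostream>
    #include <string>
    #include <vector>
    using namespace std;

    int main ()
    {
        vector<string> v { "new", "brave", "world" };

        // C++98 style (with the slide's problems fixed just enough to compile:
        // verbose type, end() re-evaluated every pass, post-increment):
        for (vector<string>::iterator i = v.begin(); i != v.end(); i++)
            cout << *i << endl;

        // Better: auto iterators, end cached, pre-increment.
        for (auto i = v.cbegin(), e = v.cend(); i != e; ++i)
            cout << *i << endl;

        // Best: range-based for.
        for (auto const & s : v)
            cout << s << endl;
    }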

This loop: for (for-range-declaration : expression) statement will get expanded to something like this: { auto && __range = range-init; for (auto __begin = begin-expr, __end = end-expr; __begin != __end; ++__begin) { for-range-declaration = *__begin; statement } }

Introducing more functionality into C++

Lambdas are unnamed functions that you can write almost anywhere in your code (anywhere you can write an expression). For example: [] (int x) -> int {return x * x;} [] (int x, int y) {return x < y ? y : x;} What does this do? [] (double v) {cout << v;} (4.2);

Storing lambdas: auto sqr = [] (int x) -> int {return x * x;}; auto a = sqr(42); std::function<int (int, int)> g = [] (int a, int b) {return a + b;}; int d = g(43, -1); auto h = std::bind ([] (int x, int y) {return x < y ? y : x;}, _1, 0); auto n = h (-7);

Consider these functions: template <typename C, typename F> void Apply (C & c, F const & f) { for (auto & v : c) f(v); } template <typename C, typename T> void Apply2 (C & c, function<void (T &)> const & f) { for (auto & v : c) f(v); } Used like this: int a [] = {10, 3, 17, -1}; Apply (a, [] (int & x) {x += 2;});

Apply (a, [](int x) {cout << x << endl;}); int y = 2; Apply (a, [y](int & x) {x += y;}); int s = 0; Apply (a, [&s](int x) {s += x;}); Apply (a, [y, &s](int x) {s += x + y;} );
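Here is a self-contained sketch putting Apply and the calls above together (the template parameter names are assumed; the printed values are what these particular calls produce):

    #include <iostream>
    using namespace std;

    // Apply calls f on every element of container c.
    template <typename C, typename F>
    void Apply (C & c, F const & f)
    {
        for (auto & v : c)
            f(v);
    }

    int main ()
    {
        int a [] = {10, 3, 17, -1};

        Apply (a, [] (int & x) { x += 2; });            // mutate elements
        Apply (a, [] (int x) { cout << x << endl; });   // prints: 12 5 19 1

        int y = 2;
        Apply (a, [y] (int & x) { x += y; });           // capture y by value

        int s = 0;
        Apply (a, [&s] (int x) { s += x; });            // capture s by reference
        cout << "sum: " << s << endl;                   // 45
    }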

Capture by value: int y = 2; auto f = [y](int & x) {x += y;}; y = 10; Apply (a, f); // adds 2: y was copied when f was created. Capture by reference: int y = 2; auto f = [&y](int & x) {x += y;}; y = 10; Apply (a, f); // adds 10: the capture refers to y itself. By the way, you can capture everything by value ( [=] ) or by reference ( [&] ).

C++ used to have a tendency to copy stuff around if you weren't paying attention! What happens when we call this function? vector<string> GenerateNames () { return vector<string> (50, string(100, '*')); } A whole lot of useless stuff is created and copied around. All sorts of techniques and tricks exist to avoid those copies.

string s = string("Hello") + " " + "world."; 1. string (char const *) 2. string operator + (string const &, char const *) 3. string operator + (string const &, char const *) 4. this ultimately called the copy ctor string (string const &). In total, there can be as many as 5 (or even 7) temporary strings here. (Unrelated note) Some allocations can be avoided with Expression Templates.

When dealing with anonymous temporary objects, the compiler can elide their (copy-) construction, which is called copy elision. This is a unique kind of optimization, as the compiler is allowed to remove code that has side effects! Return Value Optimization is one kind of copy elision.

C++11 introduces rvalue references to let you work with (kinda) temporary objects. Rvalue references are denoted with &&. e.g. int && p = 3; or void foo (std::string && s); or Matrix::Matrix (Matrix && that){…}

In situations where you used to copy the data from an object into another object, if your first object is an rvalue (i.e. temporary) now you can move the data from that to this. Two important usages of rvalue references are move construction and move assignment. e.g. string (string && that);// move c'tor and string & operator = (string && that); // move assignment

template <typename T> class Matrix { private: T * m_data; unsigned m_rows, m_columns; public: Matrix (unsigned rows, unsigned columns); ~Matrix (); Matrix (Matrix const & that); template <typename U> Matrix (Matrix<U> const & that); Matrix & operator = (Matrix const & that); Matrix (Matrix && that); Matrix & operator = (Matrix && that);... };

template <typename T> class Matrix {... unsigned rows () const; unsigned columns () const; unsigned size () const; T & operator () (unsigned row, unsigned col); // m(5, 7) = 0; T const & operator () (unsigned row, unsigned col) const; template <typename U> auto operator + (Matrix<U> const & rhs) const -> Matrix<decltype(T() + U())>; template <typename U> auto operator * (Matrix<U> const & rhs) const -> Matrix<decltype(T() * U())>; };

Matrix (unsigned rows, unsigned columns) : m_rows (rows), m_columns (columns), m_data (new T [rows * columns]) { } ~Matrix () { delete[] m_data; } Matrix (Matrix const & that) : m_rows (that.m_rows), m_columns (that.m_columns), m_data (new T [that.m_rows * that.m_columns]) { std::copy ( that.m_data, that.m_data + (m_rows * m_columns), m_data ); }

Matrix & operator = (Matrix const & that) { if (this != &that) { T * new_data = new T [that.m_rows * that.m_columns]; std::copy ( that.m_data, that.m_data + (that.m_rows * that.m_columns), new_data ); delete[] m_data; m_data = new_data; m_rows = that.m_rows; m_columns = that.m_columns; } return *this; }

Matrix (Matrix && that) : m_rows (that.m_rows), m_columns (that.m_columns), m_data (that.m_data) { that.m_rows = that.m_columns = 0; that.m_data = nullptr; }

Matrix & operator = (Matrix && that) { if (this != &that) { delete[] m_data; m_rows = that.m_rows; m_columns = that.m_columns; m_data = that.m_data; that.m_rows = that.m_columns = 0; that.m_data = nullptr; } return *this; }

struct SomeClass { string s; vector<int> v; public: // WRONG! WRONG! WRONG! // This doesn't move, it just copies: SomeClass (SomeClass && that) : s (that.s), v (that.v) {} // Correct: SomeClass (SomeClass && that) : s (std::move(that.s)), v (std::move(that.v)) {} };
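Since every member here is itself movable, C++11 also lets you ask the compiler to generate exactly these member-wise moves. A sketch (the element type of v is my guess, and note that some compilers of that era, including the VS2012 CTP used in this workshop, did not yet accept defaulted move operations):

    #include <string>
    #include <vector>

    struct SomeClass
    {
        std::string s;
        std::vector<int> v;                                    // element type assumed

        SomeClass () = default;
        SomeClass (SomeClass && that) = default;               // member-wise std::move
        SomeClass & operator = (SomeClass && that) = default;  // member-wise move-assign
    };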

In principle, std::move should look like this: template <typename T> ??? move (??? something) { return something; } What should the argument type be? T&&? T&? Both? Neither? We need to be able to pass in both lvalues and rvalues.

We could overload move() like this: move (T && something) move (T & something) But that would lead to an exponential explosion of overloads if the function had more arguments. Reference collapsing rule in C++98: int& & collapses to int&. In C++11, the rules are (in addition to the above): int& && collapses to int&. int&& & collapses to int&. int&& && collapses to int&&.

Therefore, the T&& version alone should be enough. If you pass an lvalue to our move, the actual argument type will collapse into T&, which is what we want (probably). So move looks like this thus far: template <typename T> ??? move (T && something) { return something; }

Now, what is the return type? T&&? It should be T&& in the end. But if we declare it so and move() is called on an lvalue, then T will be SomeType&, then T&& will be SomeType& &&, then it will collapse into SomeType&, and then we will be returning an lvalue reference from move(), which will prevent any moving at all. We need a way to remove the & if T already has one.

We need a mechanism to map one type to another; in this case, to map T& and T&& to T, and T to T. There is no simple way to describe the process, but this is how it's done: template <typename T> struct RemoveReference { typedef T type; }; With that, RemoveReference<int>::type will be equivalent to int. But we are not done.

Now we specialize: template <typename T> struct RemoveReference<T&> { typedef T type; }; template <typename T> struct RemoveReference<T&&> { typedef T type; }; Now, RemoveReference<int&>::type and RemoveReference<int&&>::type will be int too.

Our move now has the correct signature: template <typename T> typename RemoveReference<T>::type && move (T && something) { return something; } But it's not correct. That something in there is an lvalue, remember?

…so we cast it to an rvalue reference: template <typename T> typename RemoveReference<T>::type && move (T && something) { return static_cast<typename RemoveReference<T>::type &&> (something); } Hopefully, this is correct now!
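Putting the last few slides together, here is a self-contained sketch; it is renamed MyMove so it does not collide with std::move:

    #include <iostream>
    #include <string>

    template <typename T> struct RemoveReference       { typedef T type; };
    template <typename T> struct RemoveReference<T&>   { typedef T type; };
    template <typename T> struct RemoveReference<T&&>  { typedef T type; };

    // Hand-rolled equivalent of std::move: accept anything (T&& plus reference
    // collapsing), strip the reference, and cast to an rvalue reference.
    template <typename T>
    typename RemoveReference<T>::type && MyMove (T && something)
    {
        return static_cast<typename RemoveReference<T>::type &&>(something);
    }

    int main ()
    {
        std::string a (100, '*');
        std::string b = MyMove(a);   // moves; a is left in a valid but unspecified state
        std::cout << b.size() << "\n";
    }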

There is no such thing as universal references! But, due to C++11 reference collapsing, sometimes when you write T && v you can get anything: both lvalues and rvalues. These can be thought of as universal references. Two preconditions: there must be T&&, and there must be type deduction.

Any questions?

A Simple Method to Do RAII and Transactions

This is an extremely common pattern in programming: if (<acquire a resource>) { if (!<do some work>) <undo the work>; <release the resource>; } For example: if (OpenDatabase()) { if (!WriteNameAndAge()) UnwriteNameAndAge(); CloseDatabase (); }

The object-oriented way might be: class RAII { RAII () { <acquire> } ~RAII () { <release> } }; … RAII raii; try { <do some work> } catch (...) { <undo the work> throw; }

What happens if you need to compose actions? if (<acquire A>) { if (<acquire B>) { if (!<do some work>) { <undo the work> } <release B>; } else <handle the failure>; <release A>; }

What if we could write this: SCOPE_EXIT { <cleanup code> }; SCOPE_FAIL { <rollback code> };

Extremely easy to compose: SCOPE_EXIT { <cleanup 1> }; SCOPE_FAIL { <rollback 1> }; SCOPE_EXIT { <cleanup 2> }; SCOPE_FAIL { <rollback 2> };

To start, we want some way to execute code when execution is exiting the current scope. The key idea here is to write a class that accepts a lambda at construction and calls it at destruction. But how do we store a lambda for later use? We can use std::function, but should we? (It adds type erasure and possibly a heap allocation; keeping the lambda's exact type as a template parameter avoids that.)

Let's start like this: template <typename F> class ScopeGuard { public: ScopeGuard (F f) : m_f (std::move(f)) {} ~ScopeGuard () {m_f();} private: F m_f; }; And a helper function: template <typename F> ScopeGuard<F> MakeScopeGuard (F f) { return ScopeGuard<F> (std::move(f)); }

This is used like this: int * p = new int [1000]; auto g = MakeScopeGuard([&]{delete[] p;}); //… Without MakeScopeGuard(), we can't construct ScopeGuard instances that use lambdas, because lambdas don't have type names we can spell out. But we don't have a way to tell the scope guard not to execute its clean-up code (in case we don't want to roll back).

So we add a flag and a method to dismiss the scope guard when needed: template <typename F> class ScopeGuard { public: ScopeGuard (F f) : m_f (std::move(f)), m_dismissed (false) {} ~ScopeGuard () {if (!m_dismissed) m_f();} void dismiss () {m_dismissed = true;} private: F m_f; bool m_dismissed; };

A very important part is missing though… A move constructor: ScopeGuard (ScopeGuard && that) : m_f (std::move(that.m_f)), m_dismissed (std::move(that.m_dismissed)) { that.dismiss (); } And we should disallow copying, etc. private: ScopeGuard (ScopeGuard const &); ScopeGuard & operator = (ScopeGuard const &);

Our motivating example becomes: auto g1 = MakeScopeGuard([&]{ <cleanup 1> }); auto g2 = MakeScopeGuard([&]{ <rollback 1> }); auto g3 = MakeScopeGuard([&]{ <cleanup 2> }); auto g4 = MakeScopeGuard([&]{ <rollback 2> }); g2.dismiss(); g4.dismiss();
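For reference, here is the scope-guard machinery from the previous slides assembled into one compilable sketch (the main() body and messages are made up):

    #include <cstdio>
    #include <utility>

    template <typename F>
    class ScopeGuard
    {
    public:
        explicit ScopeGuard (F f) : m_f (std::move(f)), m_dismissed (false) {}

        ScopeGuard (ScopeGuard && that)
            : m_f (std::move(that.m_f)), m_dismissed (that.m_dismissed)
        {
            that.dismiss ();
        }

        ~ScopeGuard () { if (!m_dismissed) m_f(); }

        void dismiss () { m_dismissed = true; }

    private:
        ScopeGuard (ScopeGuard const &);              // non-copyable
        ScopeGuard & operator = (ScopeGuard const &);

        F m_f;
        bool m_dismissed;
    };

    template <typename F>
    ScopeGuard<F> MakeScopeGuard (F f)
    {
        return ScopeGuard<F> (std::move(f));
    }

    int main ()
    {
        int * p = new int [1000];
        auto free_p = MakeScopeGuard ([&] { delete[] p; std::puts("buffer freed"); });

        auto rollback = MakeScopeGuard ([&] { std::puts("rolling back"); });
        // ... do work; if we got this far, the "transaction" succeeded:
        rollback.dismiss ();
    }   // free_p still fires here; rollback does not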

Do you feel lucky?!

Templates with a variable number of arguments. For example: template <typename... Ts> size_t log (int severity, char const * msg, Ts&&... vs); Remember the old way? size_t log (int severity, char const * msg, ...); using va_list, va_start, va_arg and va_end from <cstdarg>, or #define LOG_ERROR(msg, ...) log (SevError, msg, __VA_ARGS__)

Almost the same for classes: template <typename... Ts> class ManyParents : Ts... { ManyParents () : Ts ()... {} }; Now these are valid (for any classes A, B, C): ManyParents<A> a; ManyParents<A, B, C> b;

template <typename T, typename... PTs> T * Create (T * parent, PTs&&... ps) { T* ret = new T; ret->create (parent, std::forward<PTs> (ps)...); return ret; } PTs and ps are not types, values, arrays, tuples or initializer lists. They are parameter packs, a new kind of entity.

Rules of expansion are very interesting:
Ts...          expands to  T1, T2, …, Tn
Ts&&...        expands to  T1&&, …, Tn&&
A<Ts>...       expands to  A<T1>, …, A<Tn>
f(42, vs...)   expands to  f(42, v1, …, vn)
f(42, vs)...   expands to  f(42, v1), …, f(42, vn)
One more operation you can do: size_t items = sizeof...(Ts); // or sizeof...(vs)

Let's implement the sizeof... operator ourselves as an example. template <typename... Ts> struct CountOf; template <> struct CountOf<> { enum { value = 0 }; }; template <typename T, typename... Ts> struct CountOf<T, Ts...> { enum { value = CountOf<Ts...>::value + 1 }; }; Use CountOf like this: size_t items = CountOf<int, float, char>::value;
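A compilable version of CountOf, with static_asserts standing in for the usage line (the type lists are arbitrary):

    #include <cstddef>

    template <typename... Ts> struct CountOf;

    // Base case: empty pack.
    template <>
    struct CountOf<> { enum { value = 0 }; };

    // Recursive case: peel off one type, count the rest.
    template <typename T, typename... Ts>
    struct CountOf<T, Ts...> { enum { value = CountOf<Ts...>::value + 1 }; };

    static_assert (CountOf<>::value == 0, "empty pack");
    static_assert (CountOf<int, float, char>::value == 3, "three types");

    int main ()
    {
        std::size_t items = CountOf<int, double>::value;   // 2
        return items == 2 ? 0 : 1;
    }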

Let's implement a function named IsOneOf() that can be used like this: IsOneOf(42, 3, -1, 42.0f, 0), which should return true, or IsOneOf (0, "hello"), which should fail to compile. How do we start the implementation? Remember, think recursively!

template <typename A, typename T> bool IsOneOf (A && a, T && t0) { return a == t0; } template <typename A, typename T0, typename... Ts> bool IsOneOf (A && a, T0 && t0, Ts&&... ts) { return a == t0 || IsOneOf(a, std::forward<Ts> (ts)...); }
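And a self-contained sketch of IsOneOf with the calls from two slides ago (note that IsOneOf(0, "hello") really does fail to compile, because an int variable cannot be compared to a pointer):

    #include <iostream>
    #include <utility>

    // Base case: one candidate left.
    template <typename A, typename T>
    bool IsOneOf (A && a, T && t0)
    {
        return a == t0;
    }

    // Recursive case: compare against the first candidate, then the rest.
    template <typename A, typename T0, typename... Ts>
    bool IsOneOf (A && a, T0 && t0, Ts &&... ts)
    {
        return a == t0 || IsOneOf (a, std::forward<Ts>(ts)...);
    }

    int main ()
    {
        std::cout << std::boolalpha
                  << IsOneOf (42, 3, -1, 42.0f, 0) << "\n"   // true
                  << IsOneOf (7, 1, 2, 3) << "\n";           // false
        // IsOneOf (0, "hello");   // would fail to compile, as intended
    }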

Finally!

The machine we code for (or want to code for): each statement in your high-level program gets translated into several machine instructions; the (one) CPU runs the instructions in the program one by one; all interactions with memory finish before the next instruction starts. This is absolutely not true, even in a single-threaded program running on a single-CPU machine. It hasn't been true for about 2-3 decades now; CPU technology, cache and memory systems, and compiler optimizations make it not true.

Even in a multi-core world, we assume that: each CPU runs its instructions one by one; all interactions of each CPU with memory finish before the next instruction starts on that CPU; memory ops from different CPUs are serialized by the memory system and take effect one before the other; the whole system behaves as if we were executing some interleaving of all threads as a single stream of operations on a single CPU. This is even less true (if that's at all possible!).

YOUR COMPUTER DOES NOT EXECUTE THE PROGRAMS YOU WRITE. If it did, your programs would be tens or hundreds of times slower. It just makes it appear as though your program is being executed.

The memory model is the expected behavior of the hardware with respect to shared data among threads of execution. Obviously important for correctness. Also important for optimization, and if you want to have the slightest chance of knowing what the heck is going on!

We have sequential consistency if: the result of any execution is the same as if the operations of all the processors were executed in some sequential order, and the operations of each individual processor appear in this sequence in the order specified by its program. E.g., if A, B, C are threads in a program, this is SC: A0, A1, B0, C0, C1, A2, C2, C3, C4, B1, A3. This is not: A0, A1, B0, C0, C2, A2, C1, C3, C4, B1, A3.

You have a race condition if a memory location can be simultaneously accessed by two threads and at least one thread is a writer. "Memory location" is defined as either a non-bitfield variable or a sequence of non-zero-length bitfields. "Simultaneously" is defined as: you can't prove that one access happens before the other. Remember that in case of a race condition in your code, anything can happen. Anything.

Transformations (reorder, change, add, remove) happen to your code: the compiler may eliminate or combine subexpressions, move code around, etc.; the processor may execute your code out of order or speculatively, etc.; the caches may delay your writes, poison or share data with each other, etc. But you don't care about all this. What you care about are: the code that you wrote, and the code that finally gets executed. You don't (usually) care who did what; you only care that your correctly-synchronized program behaves as if some sequentially-consistent interleaving of the instructions (especially memory ops) of your threads is being executed. Also, all writes become visible atomically, globally and simultaneously.

Consider Peterson's algorithm (both flags are atomic and initially zero). Does this actually work? Thread 1: flag1 = 1; // (1) if (flag2 != 0) // (2) <back off> else <enter the critical section> Thread 2: flag2 = 1; // (3) if (flag1 != 0) // (4) <back off> else <enter the critical section>
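To see why the question is not trivial, here is a hedged recasting of the two-thread fragment above as a runnable test (the names and the assert are mine, not from the slides). With the default sequentially consistent atomics, both threads cannot read 0, so at most one of them would enter the critical section; with plain ints or relaxed atomics, both loads could observe 0:

    #include <atomic>
    #include <cassert>
    #include <thread>

    std::atomic<int> flag1 (0), flag2 (0);
    int r1 = -1, r2 = -1;

    void thread1 ()
    {
        flag1.store (1);        // (1) seq_cst by default
        r1 = flag2.load ();     // (2)
    }

    void thread2 ()
    {
        flag2.store (1);        // (3)
        r2 = flag1.load ();     // (4)
    }

    int main ()
    {
        std::thread t1 (thread1), t2 (thread2);
        t1.join ();
        t2.join ();
        // With sequentially consistent atomics this can never fire:
        assert (!(r1 == 0 && r2 == 0));
    }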

The system (compiler, processor, memory) gives you sequentially consistent execution, as long as your program is data-race free. This is the memory model that C++11 (and C11) expect compilers and hardware to provide for the programmer. The memory model is a contract between the programmer and the system: the programmer promises to correctly synchronize her program (no race conditions); the system promises to provide the illusion that it is executing the program you wrote.

Transaction: a logical operation on related data that maintains an invariant. Atomic: all or nothing. Consistent: takes the system from one valid state to another. Independent: correct in the presence of other transactions on the same data. Example (we have two bank accounts, A and B): Begin transaction (we acquire exclusivity) 1. Add X units to account B 2. Subtract X units from account A End transaction (we release exclusivity)

Critical Region (or Critical Section): code that must be executed in isolation from the rest of the program; a tool that is used to implement transactions. E.g., you'd implement a CR using a mutex like this: mutex MX; // MX is a mutex protecting X … { lock_guard<mutex> lock (MX); // Acquire <use X> } // Release The same principle applies when using atomic variables, etc.

Important rule: code can't move out of a CR. E.g., if you have: MX.lock (); // Acquire x = 42; MX.unlock (); // Release the system can't transform it to: x = 42; MX.lock (); // Acquire MX.unlock (); // Release nor to: MX.lock (); // Acquire MX.unlock (); // Release x = 42;

If we have: x = 7; M.lock(); y = 42; M.unlock(); z = 0; Which of these can/can't be done? 1) M.lock(); x = 7; y = 42; z = 0; M.unlock(); 2) M.lock(); z = 0; y = 42; x = 7; M.unlock(); 3) z = 0; M.lock(); y = 42; M.unlock(); x = 7;

A pattern emerges! For SC acquire/release: you can't move things up across an acquire; you can't move things down across a release; you can't move an acquire up across a release. Acquire and release are also called one-way barriers (or one-way fences). A release store makes its prior accesses visible to an acquire load that sees (pairs with) that store. Important: a release pairs with an acquire in another thread. A mutex lock or a load from an atomic variable is an acquire. A mutex unlock or a store to an atomic variable is a release.
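A minimal sketch (my example, not from the slides) of a release store pairing with an acquire load: the producer's write to the ordinary string is guaranteed to be visible to the consumer that sees the flag.

    #include <atomic>
    #include <cassert>
    #include <string>
    #include <thread>

    std::string payload;                  // ordinary (non-atomic) data
    std::atomic<bool> ready (false);

    void producer ()
    {
        payload = "hello";                               // prior access...
        ready.store (true, std::memory_order_release);   // ...published by the release
    }

    void consumer ()
    {
        while (!ready.load (std::memory_order_acquire))  // pairs with the release store
            ;                                            // spin (fine for a demo)
        assert (payload == "hello");                     // guaranteed to see the write
    }

    int main ()
    {
        std::thread t1 (producer), t2 (consumer);
        t1.join ();
        t2.join ();
    }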

Weapons of Mass Destruction

Defined in header <atomic>. Use like std::atomic<T> x; e.g. std::atomic<int> ai; or std::atomic_int ai; or std::atomic<SomeType> as; Might use locks (spinlocks) under the hood; check with x.is_lock_free(). No operation works on two atomics at once or returns an atomic. Available ops are =, implicit conversion to T (load), ++, --, +=, -=, &=, |=, ^=. There is also: T exchange (T desired, …) bool compare_exchange_strong (T& expected, T desired, …) bool compare_exchange_weak (T& expected, T desired, …) You can also use std::atomic_flag, which has test_and_set(…) and clear(…). (And don't forget ATOMIC_FLAG_INIT.)
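A small, hedged sketch exercising some of the operations listed above (the particular values are mine):

    #include <atomic>
    #include <iostream>

    int main ()
    {
        std::atomic<int> ai (0);

        std::cout << ai.is_lock_free () << "\n";   // usually 1 for int

        ai = 5;                  // atomic store
        int x = ai;              // implicit load (conversion to T)
        ++ai;                    // atomic increment
        ai += 10;                // atomic add

        int expected = 16;
        // CAS: if ai == expected, set it to 100; otherwise expected receives ai's value.
        bool ok = ai.compare_exchange_strong (expected, 100);
        std::cout << x << " " << ai << " " << ok << "\n";   // 5 100 1

        std::atomic_flag f = ATOMIC_FLAG_INIT;
        if (!f.test_and_set ())                  // returns the previous value
            std::cout << "flag was clear, now set\n";
        f.clear ();
    }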

Represented by class std::thread (in header <thread>). Default-constructible and movable (not copyable). template <class F, class... Args> explicit thread (F&& f, Args&&... args); Should always call join() or detach(): t.join() waits for thread t to finish its execution; t.detach() detaches t from the actual running thread; otherwise the destructor will terminate the program. Get information about a thread object using: std::thread::id get_id () bool joinable ()
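A minimal usage sketch (the worker function and its arguments are made up for the example):

    #include <iostream>
    #include <thread>

    void worker (int id, int n)
    {
        long long sum = 0;
        for (int i = 0; i < n; ++i)
            sum += i;                                 // pretend work
        std::cout << "worker " << id << " sum " << sum << "\n";   // output may interleave
    }

    int main ()
    {
        std::cout << std::thread::hardware_concurrency () << " hardware threads\n";

        std::thread t1 (worker, 1, 1000);             // f and its args, forwarded
        std::thread t2 ([] { std::cout << "lambda thread\n"; });

        std::cout << (t1.get_id () == t2.get_id ()) << "\n";   // 0: distinct ids

        t1.join ();                                   // always join or detach
        t2.join ();
    }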

The static function unsigned std::thread::hardware_concurrency() returns the number of threads that the hardware can run concurrently. There is also a namespace std::this_thread with these members: std::thread::id get_id () void yield () void sleep_for (a std::chrono duration) void sleep_until (a std::chrono time point)

There are four types of mutexes in C++ (in header <mutex>): mutex: basic mutual exclusion device; timed_mutex: provides locking with a timeout; recursive_mutex: can be acquired more than once by the same thread; recursive_timed_mutex. They all provide lock(), unlock() and bool try_lock(). The timed versions also provide bool try_lock_for (a duration) and bool try_lock_until (a time point). Generally, you want to use a std::lock_guard to lock/unlock the mutex: it locks the mutex on construction and unlocks it on destruction.

It is not uncommon to need to do something once and exactly once, e.g., initialization of some state, setting up of some resource, etc. Multiple threads might attempt this, because they all need the result of the initialization, setup, etc. You can use (from header <mutex>): template <class F, class... Args> void call_once (std::once_flag & flag, F && f, Args&&... args); Like this (remember that it also acts as a barrier): std::once_flag init_done; void ThreadProc () { std::call_once (init_done, []{InitSystem();}); }

async() can be used to run functions asynchronously (from header <future>): template <class F, class... Args> std::future<R> async (F && f, Args&&... args); (where R is the result type of f(args...)). It returns immediately, but runs f(args...) asynchronously (possibly on another thread), e.g. future<int> t0 = async(FindMin, v); or future<int> t1 = async([&]{return FindMin(v);}); An object of type std::future<T> basically means that someone has promised to put a T in there in the future. Incidentally, the other half of future is called promise. The key operation is T get(), which waits for the promised value.

#include <algorithm> #include <future> #include <iostream> #include <string> #include <vector> using namespace std; string flip (string s) { reverse (s.begin(), s.end()); return s; } int main () { vector<future<string>> v; v.push_back (async ([] {return flip( ",olleH");})); v.push_back (async ([] {return flip(" weN evarB");})); v.push_back (async ([] {return flip( "!dlroW");})); for (auto& i : v) cout << i.get(); cout << endl; return 0; }

Really Getting Rid of NULL

What do you do when you have: char const * get_object_name (int id), and the object ID does not exist among your objects? unsigned get_file_size (char const * path), and the file does not exist? double sqrt (double x), and x is negative? You might use NULL, special error values or even exceptions, but the fact remains that sometimes you don't want to return (or pass around) anything. You want some values to be optional. Aha! Let's write a class that allows us to work with such values…
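Tomorrow's session implements such a class; as a teaser, here is a rough sketch of the idea (not the workshop's actual implementation): a value plus an "engaged" flag. For simplicity it requires T to be default-constructible; a real implementation would use raw storage and placement new.

    #include <iostream>
    #include <string>
    #include <utility>

    template <typename T>
    class Optional
    {
    public:
        Optional () : m_engaged (false), m_value () {}
        Optional (T value) : m_engaged (true), m_value (std::move(value)) {}

        explicit operator bool () const { return m_engaged; }
        T const & value () const { return m_value; }   // precondition: engaged

    private:
        bool m_engaged;
        T m_value;
    };

    Optional<std::string> get_object_name (int id)
    {
        if (id == 42)
            return std::string("the answer");
        return Optional<std::string>();   // "nothing", instead of NULL or a magic value
    }

    int main ()
    {
        auto name = get_object_name (7);
        if (name)
            std::cout << name.value() << "\n";
        else
            std::cout << "no such object\n";
    }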

Any questions?

Implementation of Optional; discussion of wrapping objects with locking; general wrapping of asynchronous transactions; initializer lists and uniform initialization; constexpr; the std::unordered_* containers; smart pointers (std::unique_ptr, std::shared_ptr); implementing a shared pointer.

"If you write C-style code, you'll end up with C-style bugs." -- Bjarne Stroustrup. "If you write Java-style code, you'll have Java-level performance."

Contact us at And me at