Assignment 6 Recitation
Concurrency Winter 2017
Recitation Outline
Concurrency and threads - Theory
Threads - Practical
Problems with parallel execution? Enter: Mutexes and Semaphores
The Dining Philosophers Problem
Concurrency and Threads - Theory
What Is Concurrency?
Concurrency basically means that you can break a program up into some number of pieces, execute them in any order, and the result of the program will still be the same.
This property correlates nicely with parallelizability – you can execute those pieces simultaneously and the end result is always the same.
Consider the following problem:
Input: a 100 x 100 matrix of integers.
Output: a vector of 100 elements, the sum of each row.
How would you write code to solve this problem right now? Write a loop to sum up each row one by one.
But…what happens if we want the computer to return the result faster?
What Is Concurrency?
Notice something interesting about this problem?
Say I break this matrix up into two matrices of fifty rows each. If I calculate the sums for each matrix individually, isn't that the same problem?
What if I break the matrix up into four matrices of 25 rows each? Or 10? Or even 100?
If I could get the computer to multitask, calculating the sums for both matrices at the same time, I'd get the results in half the time! (theoretically)
Big question – how do we do that?
Enter Multiple Threads
What is a thread? It's a single line of execution. Most of what you've been coding so far can run on a single thread.
But computers can run more than one thread simultaneously!
So that matrix problem? You can break it up and tell the computer to calculate the sums for each of the pieces. And it can do it. Yayayayayay!
I'm just going to break the matrix up into 100 matrices and the computer can do all the sum calculations in the time it takes to do one! Yippee!
Caveats
Thread creation is pretty expensive relative to not creating a thread at all.
So…it might be worth it to use four threads in the matrix problem, but probably not 100 – the cost of creating that many threads could outweigh the time saved.
Applications (Ray Tracing)
In Assignment 6, you will get a taste of raytracing, which is a highly parallelizable problem. (I highly recommend you guys take CS171, because graphics is great – and CS179 if you're interested in parallel programming.)
Threads - Practical
Thread Creation in C++

#include <iostream>
#include <thread>   // This allows you to use threads in the first place.
using namespace std;

// Some function we want to run on different threads.
void int_print(int low_range, int high_range) {
    for (int i = low_range; i <= high_range; i++)
        cout << i << endl;
}

int main(int argc, char *argv[]) {
    int a = 5, b = 1000, c = -10, d = 10000;
    thread t_a(int_print, a, b);  // Start int_print(5, 1000) on one thread called t_a
    thread t_c(int_print, c, d);  // Start int_print(-10, 10000) on another thread t_c
    // The two threads are running simultaneously!
    t_a.join();  // Wait for thread t_a to stop execution
    t_c.join();  // Wait for thread t_c to stop execution
    return 0;
}
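To connect this back to the row-sum problem from the theory slides, here is a rough sketch of how those sums could be split across four threads. The names (sum_rows, NUM_THREADS) and the all-ones placeholder matrix are made up for illustration; they are not part of the assignment.

#include <functional>  // For ref() and cref()
#include <iostream>
#include <thread>
#include <vector>
using namespace std;

// Sum rows [start, end) of the matrix into the sums vector.
// Each thread writes to its own slice of sums, so no locking is needed.
void sum_rows(const vector<vector<int>> &matrix, vector<int> &sums,
              int start, int end) {
    for (int row = start; row < end; row++) {
        int total = 0;
        for (int x : matrix[row])
            total += x;
        sums[row] = total;
    }
}

int main() {
    const int N = 100, NUM_THREADS = 4;
    vector<vector<int>> matrix(N, vector<int>(N, 1));  // Placeholder data: all 1s
    vector<int> sums(N);

    // Give each thread a contiguous block of N / NUM_THREADS rows.
    vector<thread> workers;
    for (int t = 0; t < NUM_THREADS; t++) {
        int start = t * (N / NUM_THREADS);
        int end = (t == NUM_THREADS - 1) ? N : start + N / NUM_THREADS;
        workers.emplace_back(sum_rows, cref(matrix), ref(sums), start, end);
    }
    for (thread &w : workers)
        w.join();  // Wait for all four threads to finish

    cout << "Row 0 sums to " << sums[0] << endl;  // Prints 100 for the placeholder data
    return 0;
}

Since each thread only touches its own rows of the output, the threads never interfere with each other – that's what makes this problem so easy to parallelize.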
A Few Notes
When you call join(), the calling function stops executing until the thread finishes execution and returns.
t_a.join(); – main() waits for t_a to finish before continuing.
If you want both main() and t_a to keep running simultaneously, use detach():
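A minimal sketch of detach() – the function name and sleep durations here are just for illustration:

#include <chrono>
#include <iostream>
#include <thread>
using namespace std;

void background_work() {
    this_thread::sleep_for(chrono::milliseconds(50));
    cout << "background thread done" << endl;
}

int main() {
    thread t(background_work);
    t.detach();  // main() does NOT wait for t; both keep running independently

    cout << "main keeps going immediately" << endl;

    // A detached thread can never be join()ed, so make sure the program
    // outlives its work (here, by sleeping longer than the work takes).
    this_thread::sleep_for(chrono::milliseconds(100));
    return 0;
}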
Problems with Parallel Execution
Resource Management
There are problems in parallel programming that you wouldn't really encounter with sequential programming. Consider the following problem:
I have a vector full of integers.
I create two threads. One thread multiplies each number in the vector by two. The other thread adds four to each number in the same vector.
I start them both at the same time.
What's the problem here? Both threads read and write the same elements with no coordination, so each final value depends on how the two threads happen to interleave. This is what we call a race condition.
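Here is a rough sketch of that scenario with no synchronization at all (the variable names are illustrative); the final values depend entirely on how the two threads interleave:

#include <iostream>
#include <thread>
#include <vector>
using namespace std;

int main() {
    vector<int> v(1000, 1);

    // No mutex: both threads read and write the same elements at the same time.
    thread t_mult([&v]() {
        for (size_t i = 0; i < v.size(); i++)
            v[i] *= 2;
    });
    thread t_add([&v]() {
        for (size_t i = 0; i < v.size(); i++)
            v[i] += 4;
    });

    t_mult.join();
    t_add.join();

    // Each element could end up as 6 or 10 depending on the order of the two
    // updates, or even 2 or 5 if one update is lost entirely.
    cout << "v[0] = " << v[0] << endl;
    return 0;
}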
How to Fix? Enter the mutex.
A mutex (short for mutual exclusion) ensures that only one thread can use a certain resource at a time. So in the previous scenario, we would use a mutex so that only one thread can access the vector at any one time.
lock() – Take the mutex. No other thread can lock this mutex until the current thread unlocks it.
unlock() – The thread has finished with the resources associated with this mutex. Other threads can now lock this mutex and use the resources associated with it.
#include <thread>
#include <mutex>       // Allows you to use mutexes
#include <vector>
#include <functional>  // For ref()
using namespace std;

// Some function to multiply each number by 2.
void mult2(vector<int> &vec, mutex &m) {
    for (size_t i = 0; i < vec.size(); i++) {
        m.lock();      // Lock the mutex
        vec[i] *= 2;   // Modify the vector without fear of interference
        m.unlock();    // Unlock the mutex so that the other thread can access the vector
    }
}

// Some function to add 4 to each number.
void add4(vector<int> &vec, mutex &m) {
    for (size_t i = 0; i < vec.size(); i++) {
        m.lock();      // Lock the mutex
        vec[i] += 4;   // Modify the vector without fear of interference
        m.unlock();    // Unlock the mutex
    }
}

int main(int argc, char *argv[]) {
    vector<int> v(100, 1);  // Initialize vector v to some arbitrary ints
    mutex m;  // IMPORTANT: t_a and t_c use the same mutex, which cannot be locked twice at once.
    // ref() is needed because thread copies its arguments by default, and a mutex cannot be copied.
    thread t_a(mult2, ref(v), ref(m));
    thread t_c(add4, ref(v), ref(m));
    t_a.join();  // Wait for thread t_a to stop execution
    t_c.join();  // Wait for thread t_c to stop execution
    return 0;
}
Be Careful with Mutexes: Deadlock
Mutexes are great, but like everything in CS, if you don't use them properly, things can go wrong. Look at the code again. What happens if I modify mult2 so that it never unlocks the mutex?

void mult2(vector<int> &vec, mutex &m) {  // Some function to multiply each number by 2.
    for (size_t i = 0; i < vec.size(); i++) {
        m.lock();      // Lock the mutex
        vec[i] *= 2;   // Modify the vector without fear of interference
    }                  // Note: the mutex is never unlocked!
}

More explicitly, since a mutex can only be locked by one thread at a time, can either thread ever make progress once mult2 finishes its first iteration? It can't: mult2 blocks on its own next m.lock(), and add4 blocks waiting for the lock mult2 still holds. This is called a deadlock – the threads are stuck waiting forever.
Semaphores
What if we want to allow a resource to be used by more than one, but still a limited number of, threads?
Enter semaphores – which are basically mutexes that allow up to n threads to hold them at once.
They are not available in C++11. Soooooo…we implement our own:

Semaphore *s = new Semaphore(2);  // Up to 2 threads can hold this semaphore
s->dec();     // Count is now 1
s->dec();     // Count is now 0; more dec() calls will block
s->inc();     // Count is back to 1, so there is space again
s->value();   // Returns 1
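One possible way to build such a Semaphore is on top of a mutex and a condition variable. This is just a sketch with the inc()/dec()/value() interface shown above; the class you use in the assignment may be implemented differently.

#include <condition_variable>
#include <mutex>
using namespace std;

class Semaphore {
public:
    explicit Semaphore(int initial) : count(initial) {}

    // Decrement the count, blocking while it is already 0.
    void dec() {
        unique_lock<mutex> lock(m);
        cv.wait(lock, [this]() { return count > 0; });
        count--;
    }

    // Increment the count and wake one thread blocked in dec().
    void inc() {
        lock_guard<mutex> lock(m);
        count++;
        cv.notify_one();
    }

    // How many more dec() calls would currently succeed without blocking.
    int value() {
        lock_guard<mutex> lock(m);
        return count;
    }

private:
    mutex m;
    condition_variable cv;
    int count;
};

Note that with an initial count of 1, this behaves exactly like a mutex: dec() is lock() and inc() is unlock().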
The Dining Philosophers Problem
The Problem Statement
Five philosophers sit around a table with a bowl of spaghetti in the middle, and there is a single fork between each pair of neighbors (five forks total).
The philosophers alternate between thinking and eating.
A philosopher must hold both the left and right forks in order to eat.
Forks are put back on the table after eating.
How do you design the behavior of the philosophers so that none of them starves?
Seems like a silly problem, but it is a simplification of a problem that can arise when using threads:
Philosophers = threads
Forks = mutexes
Spaghetti = resources
The Actual Problem
What if all five philosophers pick up their right forks at the exact same time? Each one then waits forever for a left fork that a neighbor is already holding: DEADLOCK.
Your goal in the assignment is to design the philosophers' behavior so that this can't happen.
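For concreteness, here is a sketch of the naive behavior described above – every philosopher grabs the right fork and then the left one. This is the version that can deadlock, not the solution the assignment asks for, and the names are purely illustrative.

#include <mutex>
#include <thread>
#include <vector>
using namespace std;

const int NUM_PHILOSOPHERS = 5;
mutex forks[NUM_PHILOSOPHERS];  // One mutex per fork

// Naive behavior: lock the right fork, then the left fork.
// If all five philosophers lock their right forks at the same time, every
// left fork is already held by a neighbor, so nobody ever eats: deadlock.
void philosopher(int i) {
    mutex &right_fork = forks[i];
    mutex &left_fork = forks[(i + 1) % NUM_PHILOSOPHERS];
    for (int meal = 0; meal < 10; meal++) {
        right_fork.lock();
        left_fork.lock();
        // ... eat ...
        left_fork.unlock();
        right_fork.unlock();
        // ... think ...
    }
}

int main() {
    vector<thread> philosophers;
    for (int i = 0; i < NUM_PHILOSOPHERS; i++)
        philosophers.emplace_back(philosopher, i);
    for (thread &p : philosophers)
        p.join();  // May never return if the philosophers deadlock
    return 0;
}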