Assignment 6 Recitation


Assignment 6 Recitation: Concurrency (Winter 2017)

Recitation Outline
- Concurrency and threads: theory
- Threads: practical
- Problems with parallel execution? Enter: mutexes
- Semaphores
- The Dining Philosophers Problem

Concurrency and Threads - Theory

What Is Concurrency?
Concurrency means that a program can be broken up into some number of pieces that can execute in any order while the result of the program stays the same. This property correlates nicely with parallelizability: independent pieces of a calculation can run simultaneously, and the end result is always the same.
Consider the following problem:
- Input: a 100 x 100 matrix of integers.
- Output: a vector of 100 elements, the sum of each row.
How would you write code to solve this problem right now? Write a loop to sum up each row, one by one. But what happens if we want the computer to return the result faster?
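Before parallelizing anything, it helps to see the sequential baseline. Below is a minimal sketch, assuming the matrix is stored as a vector of vectors (row_sums is an illustrative name, not part of the assignment):

#include <vector>
using namespace std;

// Sequential baseline: sum each row of the matrix, one row at a time.
vector<int> row_sums(const vector<vector<int>> &matrix) {
    vector<int> sums(matrix.size(), 0);
    for (size_t r = 0; r < matrix.size(); r++)
        for (size_t c = 0; c < matrix[r].size(); c++)
            sums[r] += matrix[r][c];
    return sums;
}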

What Is Concurrency?
Notice something interesting about this problem? Say I break the matrix into two pieces of fifty rows each. If I calculate the sums for each piece individually, isn't each piece just the same problem, smaller? What if I break the matrix into four pieces of 25 rows each? Or 10? Or even 100? If I could get the computer to multitask, calculating the sums for both pieces at the same time, I would get the results in half the time (theoretically). Big question: how do we do that?

Enter Multiple Threads
What is a thread? It's a single line of execution. Most of what you've been coding so far runs on a single thread. But computers can run more than one thread simultaneously! So for that matrix problem, you can break it up and tell the computer to calculate the sums for each of the pieces at the same time. And it can do it. Yayayayayay! I'm just going to break the matrix up into 100 pieces and the computer can do all the sum calculations in the time it takes to do one! Yippee!

Caveats
Thread creation is pretty expensive compared to not creating one at all. So it might be worth having four threads in the matrix problem, but probably not 100: the cost of creating 100 threads might outweigh the time they save.
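How many threads is a sensible number? A common starting point is the number of hardware threads the machine actually supports. A small sketch (std::thread::hardware_concurrency() is standard C++11, but note it may return 0 if the value cannot be determined):

#include <iostream>
#include <thread>

int main() {
    // Number of concurrent threads the hardware supports.
    // May return 0 if the value is not computable; treat 0 as "unknown".
    unsigned int n = std::thread::hardware_concurrency();
    std::cout << "This machine can run about " << n
              << " threads in parallel." << std::endl;
    return 0;
}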

Applications (Ray Tracing)
In Assignment 6, you will get a taste of ray tracing, a problem that is highly parallelizable. (I highly recommend you take CS171, because graphics is great, and CS179 if you're interested in parallel programming.)

Threads - Practical

Thread Creation in C++

#include <iostream>  // For cout and endl
#include <thread>    // This allows you to use threads in the first place.
using namespace std;

void int_print(int low_range, int high_range) {
    // Some function we want to run on different threads.
    for (int i = low_range; i <= high_range; i++)
        cout << i << endl;
}

int main(int argc, char *argv[]) {
    int a = 5, b = 1000, c = -10, d = 10000;
    thread t_a(int_print, a, b);  // Start int_print(5, 1000) on one thread called t_a
    thread t_c(int_print, c, d);  // Start int_print(-10, 10000) on another thread t_c
    // ------- The two threads are now running simultaneously! -------
    t_a.join();  // Wait for thread t_a to stop execution
    t_c.join();  // Wait for thread t_c to stop execution
    return 0;
}

(Compile with -pthread on most systems.)

A Few Notes
When you call join(), the calling thread blocks until the joined thread finishes execution and returns.
t_a.join(); - main() waits for t_a to finish before continuing.
If you don't want main() to ever wait on a thread, let it run independently with detach(): http://www.cplusplus.com/reference/thread/thread/detach/
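Putting join() together with the matrix problem from earlier, here is one possible two-thread version. This is only a sketch: sum_rows, the 50/50 split, and the all-ones test matrix are illustrative choices, not the assignment's required design. Note that std::thread copies its arguments, so references must be wrapped in std::ref / std::cref:

#include <functional>  // For ref and cref
#include <iostream>
#include <thread>
#include <vector>
using namespace std;

// Sum rows [low, high) of matrix into the matching slots of sums.
// Each thread writes to a disjoint range of sums, so no locking is needed.
void sum_rows(const vector<vector<int>> &matrix, vector<int> &sums,
              size_t low, size_t high) {
    for (size_t r = low; r < high; r++)
        for (int x : matrix[r])
            sums[r] += x;
}

int main() {
    vector<vector<int>> matrix(100, vector<int>(100, 1));  // 100 x 100 of 1s
    vector<int> sums(100, 0);
    thread t1(sum_rows, cref(matrix), ref(sums), 0, 50);    // First half
    thread t2(sum_rows, cref(matrix), ref(sums), 50, 100);  // Second half
    t1.join();  // Wait for both halves to finish
    t2.join();
    cout << "First row sum: " << sums[0] << endl;  // Prints 100
    return 0;
}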

Problems with Parallel Execution

Resource Management
Parallel programming raises problems that sequential programming doesn't. Consider the following: I have a vector full of integers, and I create two threads. One thread multiplies each number in the vector by two; the other thread adds four to each number in the same vector. I start them both at the same time. What's the problem here? The final value of each element depends on the order in which the threads happen to touch it: starting from 10, (10 * 2) + 4 = 24, but (10 + 4) * 2 = 28. Worse, the threads can interleave in the middle of an update, and one write can be lost entirely. This is what we call a race condition.

How to Fix?
Enter the mutex (short for "mutual exclusion"). A mutex guarantees that only one thread can use a certain resource at a time. In the previous scenario, we would guard the vector with a mutex so that only one thread can access it at any one time.
lock() - the mutex is taken. No other thread can lock this mutex until the current thread unlocks it.
unlock() - the thread has finished with the resources associated with this mutex. Other threads can now lock this mutex / use the resources associated with it.

#include <functional>  // For ref
#include <mutex>       // Allows you to use mutexes
#include <thread>
#include <vector>
using namespace std;

void mult2(vector<int> &vec, mutex &m) {
    // Some function to multiply each number by 2.
    for (size_t i = 0; i < vec.size(); i++) {
        m.lock();      // Lock the mutex
        vec[i] *= 2;   // Modify the vector without fear of interference
        m.unlock();    // Unlock the mutex so the other thread can access the vector
    }
}

void add4(vector<int> &vec, mutex &m) {
    // Some function to add 4 to each number.
    for (size_t i = 0; i < vec.size(); i++) {
        m.lock();
        vec[i] += 4;   // Modify the vector without fear of interference
        m.unlock();
    }
}

int main(int argc, char *argv[]) {
    vector<int> v = {1, 2, 3, 4, 5};  // Initialize vector v to some arbitrary ints
    mutex m;  // IMPORTANT: t_a and t_c use the same mutex, which cannot be
              // locked twice at once.
    // std::thread copies its arguments, and a mutex cannot be copied,
    // so pass both the vector and the mutex with std::ref.
    thread t_a(mult2, ref(v), ref(m));
    thread t_c(add4, ref(v), ref(m));
    t_a.join();  // Wait for thread t_a to stop execution
    t_c.join();  // Wait for thread t_c to stop execution
    return 0;
}

Be Careful with Mutexes: Deadlock
Mutexes are great, but like everything in CS, things go wrong if you don't use them properly. Look at the code again. What happens if I modify mult2 so that it never unlocks the mutex?

void mult2(vector<int> &vec, mutex &m) {
    // Some function to multiply each number by 2.
    for (size_t i = 0; i < vec.size(); i++) {
        m.lock();     // Lock the mutex... and never unlock it!
        vec[i] *= 2;
    }
}

More explicitly: since a mutex can only be locked once at a time, can either thread ever make progress once this thread finishes its first loop iteration? No. mult2 blocks trying to lock a mutex it already holds, and add4 blocks waiting for a mutex that will never be released. This is called a deadlock - the threads are frozen.
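One common way to make the unlock impossible to forget is C++11's std::lock_guard, which unlocks the mutex automatically when it goes out of scope. This is an aside, not something the assignment requires; a sketch of mult2 rewritten with it:

#include <mutex>
#include <vector>
using namespace std;

void mult2(vector<int> &vec, mutex &m) {
    for (size_t i = 0; i < vec.size(); i++) {
        // guard locks m here and automatically unlocks it when it goes
        // out of scope at the end of the iteration, even if an exception
        // is thrown - there is no unlock() call to forget.
        lock_guard<mutex> guard(m);
        vec[i] *= 2;
    }
}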

Semaphores
What if we want to allow a resource to be used by more than one, but still a limited number of, threads? Enter semaphores, which are basically mutexes that allow up to n threads to hold them at once. A semaphore keeps a count: dec() decrements it, blocking if the count is already zero; inc() increments it, waking a blocked thread if there is one. Semaphores are not available in C++11. Soooooo... we implement our own :)

Semaphore *s = new Semaphore(2);
s->dec();
s->dec();     // Semaphore count is now 0; further dec() calls will block
s->inc();     // Semaphore now has space again
s->value();   // Returns 1
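Since we have to implement our own, here is one minimal sketch built from a mutex and a condition variable. The dec() / inc() / value() interface matches the usage above; the internals are just one reasonable implementation, not necessarily what the assignment's starter code does:

#include <condition_variable>
#include <mutex>

// A minimal counting semaphore: a counter protected by a mutex, plus a
// condition variable so dec() can block while the count is zero.
class Semaphore {
public:
    explicit Semaphore(int count) : count_(count) {}

    void dec() {  // a.k.a. wait / P
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return count_ > 0; });  // Block until positive
        count_--;
    }

    void inc() {  // a.k.a. signal / V
        std::lock_guard<std::mutex> lock(m_);
        count_++;
        cv_.notify_one();  // Wake one blocked dec(), if any
    }

    int value() {
        std::lock_guard<std::mutex> lock(m_);
        return count_;
    }

private:
    std::mutex m_;
    std::condition_variable cv_;
    int count_;
};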

The Dining Philosophers Problem

The Problem Statement
You have the classic setup (pictured on the original slide): five philosophers sit around a table with a bowl of spaghetti, and a single fork lies between each pair of neighbors. The philosophers alternate between thinking and eating. A philosopher must hold both the left and right forks in order to eat, and replaces the forks after eating. How do you design the behavior of the philosophers so that none of them starve?
Seems like a silly problem, but it is a simplification of a real problem that arises with threads:
- Philosophers = threads
- Forks = mutexes
- Spaghetti = resources

The Actual Problem
What if all five philosophers pick up their right forks at the exact same time? Each is then stuck waiting forever for a left fork that a neighbor holds: DEADLOCK. Your goal in the assignment is to design the behavior so that this can't happen.
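For concreteness, here is a sketch of exactly that broken strategy (philosopher, forks, and N are illustrative names, not the assignment's starter code). Every philosopher grabs the right fork first, so if all five threads run in lockstep, all five block forever waiting for a left fork. Changing this strategy so the deadlock cannot occur is your job:

#include <mutex>
using namespace std;

const int N = 5;
mutex forks[N];  // One mutex per fork; fork i sits to philosopher i's right.

// The naive philosopher: grab the right fork, then the left. If all five
// threads acquire their right fork at the same moment, every call to lock
// the left fork blocks forever - deadlock.
void philosopher(int id) {
    while (true) {
        // think()...
        forks[id].lock();            // Pick up right fork
        forks[(id + 1) % N].lock();  // Pick up left fork - may block forever!
        // eat()...
        forks[(id + 1) % N].unlock();  // Replace the forks after eating
        forks[id].unlock();
    }
}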