Programming -- Levels of Difficulty
Sequential
– Termination
– Determinism
Concurrent
– Non-Termination
– Non-Determinism
Distributed
– Multiple Computers
– Partial Failure
Byzantine
– Failure at the worst time, in the worst way

Sequential Programs
– Deterministic: x = x + 1;
– Terminate: exit(0)
[Figure: Current State --( x = x + 1; )--> Next State]

Concurrent Programs
– Non-terminating
– Non-deterministic (exhibit interference)

Concurrent Programs
Synchronize -- perform actions in a desired order
Communicate -- transfer value(s) from one thread/process to another

POSIX
(NIST) POSIX® (Portable Operating System Interface), FIPS 151-2, ISO/IEC 9945:2003 (IEEE Std 1003.1-2001)
Option Groups
_POSIX_SPIN_LOCKS -- The implementation supports the Spin Locks option. If this #define has a value other than -1 or 0, it shall have the value 200112L.
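
A minimal sketch of how portable code tests this option group at compile time (hedged: the demo body and messages are illustrative, not from the slides):

#include <unistd.h>    /* defines feature-test macros such as _POSIX_SPIN_LOCKS */
#include <pthread.h>
#include <stdio.h>

#if defined(_POSIX_SPIN_LOCKS) && _POSIX_SPIN_LOCKS > 0
int main(void) {
    pthread_spinlock_t lock;
    pthread_spin_init(&lock, PTHREAD_PROCESS_PRIVATE);  /* private to this process */
    pthread_spin_lock(&lock);
    printf("Spin Locks option supported: %ld\n", (long)_POSIX_SPIN_LOCKS);
    pthread_spin_unlock(&lock);
    pthread_spin_destroy(&lock);
    return 0;
}
#else
int main(void) {
    printf("Spin Locks option not supported here\n");
    return 0;
}
#endif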

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>
#include <assert.h>

void *p(void *arg) {
    int i;
    for (i = 0; i < 5; i++) {
        printf("X\n");
        sleep(1);
    }
    pthread_exit((void *)99);
}

int main() {
    pthread_t x;
    void *r;
    int i;
    assert(pthread_create(&x, NULL, p, (void *)34) == 0);
    for (i = 0; i < 5; i++) {
        printf("Y\n");
        sleep(1);
    }
    assert(pthread_join(x, &r) == 0);
    return 0;
}

OUTPUT (one possible interleaving):
Y X Y X X Y X Y X Y

Parallel DAG (directed acyclic graph) and Happens-before Edges
[Figure: a parallel DAG whose nodes are x=0, x=1, x=2, and WriteLine(x), connected by happens-before edges]

Schedule, Informally
A topological sort (serialization) of the nodes in a parallel DAG -- a sequential ordering of the nodes that respects the happens-before edges.

Different schedules, different outputs
Schedule 1: x=0; x=1; WriteLine(x); x=2 -- prints x = 1
Schedule 2: x=0; x=2; WriteLine(x); x=1 -- prints x = 2

Determinism
For the same initial state, observe the same final state, regardless of the schedule.
Determinism is desirable for most data-parallel problems.

Another Example -- 2 robots in a room
Data Structures:

struct RoomPoint {
    public int X;
    public int Y;
}
class Robot {
    public RoomPoint Location;
}
List<Robot> _robots;
Robot[][] _roomCells;

[Figure: a grid of room cells (_roomCells), origin (0,0), containing robots r1 and r2]

MoveOneStep(Robot r1)
1. Find new empty cell for r1
2. Move r1 to new cell, if not already occupied
[Figure: before/after grids showing r1 moving to a neighboring cell while r2 stays put]
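
A minimal C sketch of this step (the slides use C#; the grid dimensions, the helper next_cell, and the coordinates here are hypothetical). Note the gap between the emptiness check and the move -- the race examined two slides below:

#include <stdio.h>
#include <stddef.h>

#define ROOM_W 6
#define ROOM_H 3

typedef struct { int x, y; } RoomPoint;
typedef struct { RoomPoint location; } Robot;

static Robot *room_cells[ROOM_H][ROOM_W];  /* NULL = empty cell */

/* Hypothetical helper: step one cell to the right (clamped at the wall). */
static RoomPoint next_cell(const Robot *r) {
    RoomPoint p = r->location;
    if (p.x + 1 < ROOM_W) p.x += 1;
    return p;
}

/* The racy step: the emptiness check and the move are separate actions,
   so another thread can move into `next` in between. */
static void move_one_step(Robot *r) {
    RoomPoint next = next_cell(r);                 /* 1. find a new cell   */
    if (room_cells[next.y][next.x] == NULL) {      /* 2. check it is empty */
        room_cells[r->location.y][r->location.x] = NULL;
        room_cells[next.y][next.x] = r;            /* 3. ...then move      */
        r->location = next;
    }
}

int main(void) {
    Robot r1 = {{0, 0}}, r2 = {{0, 2}};
    room_cells[0][0] = &r1;
    room_cells[2][0] = &r2;
    move_one_step(&r1);
    printf("r1 is now at (%d,%d)\n", r1.location.x, r1.location.y);
    return 0;
}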

PerformSimulationStep
[Figure: room-grid snapshots of robots r1 and r2 during one simulation step]

[Figure: additional room-grid snapshots of robots r1 and r2]

Different schedules, different outputs
Erroneous schedule: r1: Is (0,2) empty? r2: Is (0,2) empty? r1: Move to (0,2). r2: Move to (0,2). -- both robots end up in the same cell
Correct schedule: r1: Is (0,2) empty? r1: Move to (0,2). r2: Is (0,2) empty? -- the cell is occupied, so r2 does not move

REENTRANT
A procedure with no external communication or synchronization is reentrant.

Identifying Communication Points or Data Races
1. Is a variable that is Read by one thread Written by any other thread?
2. Is a variable that is Written by one thread Read by any other thread?
3. Do two threads Write the same variable?
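
A minimal sketch of case 3 -- two threads writing (here, read-modify-writing) the same variable; the iteration count is arbitrary:

#include <pthread.h>
#include <stdio.h>

long counter = 0;            /* shared: written by both threads */

void *bump(void *arg) {
    for (int i = 0; i < 100000; i++)
        counter++;           /* load, add, store: not atomic */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, bump, NULL);
    pthread_create(&t2, NULL, bump, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Often prints less than 200000: increments are lost when the
       threads' load-add-store sequences interleave. */
    printf("counter = %ld\n", counter);
    return 0;
}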

Interference point -- bad communication point
An Interference Point for a Singly-Linked List

A.  p = q.head;               /* remove the list element */
B.  q.head = q.head->pCB;     /* update the list */
C.  return p;

Possible Execution Sequences
Ai denotes that thread i executes statement A. (B1B2) denotes that statement B is executed simultaneously by Threads 1 and 2.

A1B1A2B2 -- If T1 executes statements A and B, then T2 executes A and B, the two threads are returned different context blocks and no error occurs. When threads execute the code independently like this, it is referred to as serial execution.

A1A2(B1B2) -- If T1 executes A, then T2 executes A, and then both simultaneously execute B, both threads are returned the same context block, which is an error. Note that (B1B2) and (B2B1) are identical.

(A1A2)B1B2 -- If both threads execute A simultaneously and then execute B sequentially, both threads are returned the same context block, yet two blocks are deleted from the free list.

The other possible execution sequences are A2B2A1B1, A1A2B1B2, A1A2B2B1, A2A1B1B2, A2A1B2B1, A2A1(B1B2), (A1A2)B2B1, (A1A2)(B1B2), A1(A2B1)B2, A2(A1B2)B1.

Interference Points are referred to as CRITICAL SECTIONS

Critical Section Solution
1. The procedures in a critical section are executed indivisibly with respect to the shared variables or resources that are accessed.
2. A thread must not halt inside a critical section.
3. A thread outside a critical section cannot block another thread from entering the critical section.
4. (Optional: fairness and finite progress) A thread trying to enter a critical section will eventually do so.
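
As a sketch of such a critical section in Pthreads, the singly-linked-list removal from the previous slides can be made indivisible with a mutex (the names head and pCB follow the slide; the CB type and the lock field are assumptions):

#include <pthread.h>
#include <stddef.h>

typedef struct cb {
    struct cb *pCB;            /* next context block on the free list */
} CB;

struct {
    CB *head;
    pthread_mutex_t lock;
} q = { NULL, PTHREAD_MUTEX_INITIALIZER };

CB *remove_element(void) {
    CB *p;
    pthread_mutex_lock(&q.lock);    /* enter critical section */
    p = q.head;                     /* A. remove the list element */
    if (p != NULL)
        q.head = q.head->pCB;       /* B. update the list */
    pthread_mutex_unlock(&q.lock);  /* leave critical section */
    return p;                       /* C. */
}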

Important Points!
All concurrent programs must be certified interference-free!!
This property cannot be validated by testing!!
However, proper use of Pthreads can guarantee it!!

POSIX FUNCTIONS
int pthread_join(pthread_t thread, void **out);    -- waits for termination, retrieves return value
pthread_t pthread_self(void);                      -- know thyself! or who am I really?
int pthread_equal(pthread_t t1, pthread_t t2);     -- are we or are they twins?
int pthread_detach(pthread_t thread);              -- no one can join on this thread
int pthread_cancel(pthread_t thread);              -- terminate the thread
int sched_yield(void);                             -- give up the CPU
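
A small usage sketch exercising several of these calls (the thread body and the messages are illustrative):

#include <pthread.h>
#include <stdio.h>
#include <sched.h>

void *worker(void *arg) {
    (void)arg;
    printf("worker running\n");
    sched_yield();               /* give up the CPU, as on the slide */
    return (void *)42;
}

int main(void) {
    pthread_t t;
    void *result;
    pthread_create(&t, NULL, worker, NULL);
    /* pthread_t is opaque: compare ids with pthread_equal, never == */
    if (!pthread_equal(t, pthread_self()))
        printf("main: the worker is a different thread\n");
    pthread_join(t, &result);    /* waits, retrieves the return value */
    printf("worker returned %ld\n", (long)result);
    return 0;
}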

THREAD STATE DIAGRAM
[Figure: the thread state diagram itself is not reproduced in the transcript]

A Little History
In 1967, E. W. Dijkstra submitted a paper[6] on the "THE Multiprogramming System" to an operating systems conference. It contained the following quotation:

"Therefore, we have arranged the whole system as a society of sequential processes, progressing with undefined speed ratios. ... Their harmonious cooperation is regulated by means of explicit mutual synchronization statements. On the one hand, this explicit mutual synchronization is necessary, as we do not make any assumption about speed ratios; on the other hand, this mutual synchronization is possible because 'delaying the progress of a process temporarily' can never be harmful to the interior logic of the process delayed."

SEMAPHORE USE DETERMINED BY INITIAL VALUE

1  -- CRITICAL SECTION
      P ... V

>1 -- RESOURCE COUNTER
      P(RC)
      P(MUTEX)  // allocate
      V(MUTEX)
      ...
      P(MUTEX)  // deallocate
      V(MUTEX)
      V(RC)

0  -- USED TO IMPLEMENT PROCESS SYNCHRONIZATION GRAPHS
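
A sketch of the resource-counter pattern with POSIX semaphores (the pool of NBUF buffers and the helper names are hypothetical):

#include <semaphore.h>
#include <stdio.h>

#define NBUF 3

sem_t rc;                               /* counts free buffers, starts at NBUF */
sem_t mutex;                            /* protects the bookkeeping, starts at 1 */
int buffer_free[NBUF] = {1, 1, 1};

int acquire_buffer(void) {
    int i, got = -1;
    sem_wait(&rc);                      /* P(RC): wait until a buffer exists */
    sem_wait(&mutex);                   /* P(MUTEX): allocate under mutex */
    for (i = 0; i < NBUF; i++)
        if (buffer_free[i]) { buffer_free[i] = 0; got = i; break; }
    sem_post(&mutex);                   /* V(MUTEX) */
    return got;
}

void release_buffer(int i) {
    sem_wait(&mutex);                   /* P(MUTEX): deallocate under mutex */
    buffer_free[i] = 1;
    sem_post(&mutex);                   /* V(MUTEX) */
    sem_post(&rc);                      /* V(RC): one more buffer available */
}

int main(void) {
    sem_init(&rc, 0, NBUF);             /* initial value > 1: resource counter */
    sem_init(&mutex, 0, 1);             /* initial value 1: critical section */
    int b = acquire_buffer();
    printf("got buffer %d\n", b);
    release_buffer(b);
    return 0;
}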

Semaphores to Solve the C.S. Problem

typedef struct {
    long count;     /* initially 1 */
    Queue q;        /* initially EMPTY */
} Semaphore;
Semaphore s;

P(s):  if (--s.count < 0) SleepOn(s.q);
V(s):  if (++s.count <= 0) Wakeup(s.q);
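
The slide's P/V pseudocode can be realized with Pthreads primitives. A hedged sketch (the names are mine, and the slide's negative-count bookkeeping is replaced by the non-negative form that a condition variable's recheck loop requires):

#include <pthread.h>

typedef struct {
    long count;             /* number of free "permits" */
    pthread_mutex_t lock;   /* protects count */
    pthread_cond_t  q;      /* plays the role of the sleep queue */
} Semaphore;

void sem_init_local(Semaphore *s, long initial) {
    s->count = initial;     /* 1 for the critical-section use */
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->q, NULL);
}

void P(Semaphore *s) {
    pthread_mutex_lock(&s->lock);
    while (s->count == 0)                /* SleepOn(s.q) */
        pthread_cond_wait(&s->q, &s->lock);
    s->count--;
    pthread_mutex_unlock(&s->lock);
}

void V(Semaphore *s) {
    pthread_mutex_lock(&s->lock);
    s->count++;
    pthread_cond_signal(&s->q);          /* Wakeup(s.q) */
    pthread_mutex_unlock(&s->lock);
}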

PROCESS SYNCHRONIZATION GRAPH
Semaphore_t ac=0, bc=0, cd=0, ce=0;

PA: ....          V(ac)
PB: ....          V(bc)
PC: P(ac) P(bc)   ....   V(cd) V(ce)
PD: P(cd)         ....
PE: P(ce)         ....

Graph:
A   B
 \ /
  C
 / \
D   E
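
A sketch of this synchronization graph with POSIX semaphores and Pthreads (the printf bodies stand in for the processes' real work):

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

sem_t ac, bc, cd, ce;    /* all initialized to 0 */

void *A(void *x) { printf("A\n"); sem_post(&ac); return NULL; }
void *B(void *x) { printf("B\n"); sem_post(&bc); return NULL; }
void *C(void *x) {
    sem_wait(&ac); sem_wait(&bc);        /* wait for both A and B */
    printf("C\n");
    sem_post(&cd); sem_post(&ce);        /* release D and E */
    return NULL;
}
void *D(void *x) { sem_wait(&cd); printf("D\n"); return NULL; }
void *E(void *x) { sem_wait(&ce); printf("E\n"); return NULL; }

int main(void) {
    pthread_t t[5];
    void *(*body[5])(void *) = { A, B, C, D, E };
    sem_init(&ac, 0, 0); sem_init(&bc, 0, 0);
    sem_init(&cd, 0, 0); sem_init(&ce, 0, 0);
    for (int i = 0; i < 5; i++) pthread_create(&t[i], NULL, body[i], NULL);
    for (int i = 0; i < 5; i++) pthread_join(t[i], NULL);
    return 0;
}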

Mutex – a semaphore used only for C.S.

#include <pthread.h>
#include <assert.h>

pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

void *func(void *arg) {      /* Note: void * arg and return value!! */
    assert(pthread_mutex_lock(&m) == 0);
    /**** CRITICAL SECTION ****/
    assert(pthread_mutex_unlock(&m) == 0);
    return 0;
}

int main() {
    pthread_t tA, tB;
    assert(pthread_create(&tA, NULL, func, NULL) == 0);
    assert(pthread_create(&tB, NULL, func, NULL) == 0);
    assert(pthread_join(tA, NULL) == 0);
    assert(pthread_join(tB, NULL) == 0);
    return 0;
}