Introduction to Concurrency


Introduction to Concurrency
Concurrency: execute two or more pieces of code "at the same time"
Why? By necessity:
- Geographically distributed data
- Interoperability of different machines
- A piece of code must "serve" many other client processes
- To achieve reliability
By choice:
- To achieve speedup
- Sometimes makes programming easier (e.g., UNIX pipes)

Possibilities for Concurrency
Architecture and the corresponding programming style:
- Uniprocessor with an I/O channel, I/O processor, or DMA: multiprogramming, multiple-process systems
- Network of uniprocessors: distributed programming
- Multiple CPUs: parallel programming

Definitions
Concurrent process execution can be:
- interleaved, or
- physically simultaneous
Interleaved: multiprogramming on a uniprocessor
Physically simultaneous: uni- or multiprogramming on a multiprocessor
Process, thread, or task: schedulable unit of computation
Granularity: process "size", or the computation-to-communication ratio
- Too small: excessive overhead
- Too large: less concurrency

Precedence Graph
Consider writing a program as a set of tasks; a precedence graph specifies the execution ordering among the tasks.
S0: A := X - Y
S1: B := X + Y
S2: C := Z + 1
S3: C := A - B
S4: W := C + 1
(Graph: S0, S1, and S2 may start concurrently; S3 must follow S0, S1, and S2; S4 must follow S3.)
Parallelizing compilers for computers with vector processors build such dependency graphs.

Cyclic Precedence Graphs
What does a graph containing a cycle (S1 → S2 → S3 → back to S1) represent?

Examples of Concurrency in Uniprocessors
Example 1: UNIX pipes
Motivations: fast to write the code, fast to execute
Example 2: Buffering
Motivation: required when two asynchronous processes must communicate
Example 3: Client/server model
Motivation: geographically distributed computing

Operating System Issues
Synchronization: what primitives should the OS provide?
Communication: what primitives should the OS provide to interface with the communication protocol?
Hardware support: needed to implement OS primitives
Remote execution: what primitives should the OS provide?
- Remote procedure call (RPC)
- Remote command shell
Sharing address spaces: makes programming easier
Lightweight threads: can process creation be as cheap as a procedure call?

Parallel Language Constructs: FORK and JOIN
FORK L starts parallel execution at the statement labeled L and at the statement following the FORK.
JOIN Count recombines Count concurrent computations:
Count := Count - 1;
If (Count > 0) Then Quit;
JOIN is an atomic operation.
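The JOIN counter must be decremented atomically, or two tasks could both read Count, both see a value greater than 1, and neither would continue past the join. A minimal sketch of the decrement using C11 atomics (the helper name join_point is ours, not part of the construct):

```c
#include <stdatomic.h>

/* Decrement the shared join counter atomically.
 * Returns 1 for the task that arrives last (which continues
 * past the JOIN) and 0 for every earlier arrival (which quits). */
int join_point(atomic_int *count)
{
    /* atomic_fetch_sub returns the value *before* the decrement */
    return atomic_fetch_sub(count, 1) == 1;
}
```

With Count initialized to 3, the first two arriving tasks get 0 and quit; only the third gets 1 and proceeds to the code after the JOIN.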

Definition: Atomic Operation
If I am a process executing on a processor and I execute an atomic operation, then all other processes executing on this or any other processor can see the state of the system before I execute or after I execute, but cannot see any intermediate state while I am executing.
Example: bank teller
/* Joe has $1000, split equally between savings and checking accounts */
Subtract $100 from Joe's savings account
Add $100 to Joe's checking account
Other processes should never read Joe's balances and find only $900 in total — the state between the subtraction and the addition.
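As a sketch, the teller's two updates can be made atomic with respect to other threads by surrounding them with a mutual-exclusion lock; the struct and function names here are illustrative, not from the slides:

```c
#include <pthread.h>

struct account { long savings; long checking; };

static pthread_mutex_t account_lock = PTHREAD_MUTEX_INITIALIZER;

/* Move `amount` from savings to checking as one atomic step:
 * no other thread that acquires the lock can observe the state
 * between the subtraction and the addition. */
void transfer_to_checking(struct account *a, long amount)
{
    pthread_mutex_lock(&account_lock);
    a->savings  -= amount;
    a->checking += amount;
    pthread_mutex_unlock(&account_lock);
}
```

Any other thread that reads the balances while holding the same lock sees either $500/$500 or $400/$600, never the $400/$500 intermediate state.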

Concurrency Conditions
Let Si denote a statement.
Read set of Si: R(Si) = {a1, a2, ..., an}, the set of all variables referenced in Si
Write set of Si: W(Si) = {b1, b2, ..., bm}, the set of all variables changed by Si
C := A - B
R(C := A - B) = {A, B}
W(C := A - B) = {C}
scanf("%d", &A)
R(scanf("%d", &A)) = {}
W(scanf("%d", &A)) = {A}

Bernstein's Conditions
The following conditions must hold for two statements S1 and S2 to execute concurrently with valid results:
R(S1) ∩ W(S2) = {}
W(S1) ∩ R(S2) = {}
W(S1) ∩ W(S2) = {}
These are called the Bernstein conditions.
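The three conditions are easy to check mechanically once each statement's read and write sets are written down as lists of variable names. A small sketch (the helper names intersects and can_run_concurrently are ours):

```c
#include <string.h>

/* Do two NULL-terminated lists of variable names share an element? */
static int intersects(const char **a, const char **b)
{
    for (int i = 0; a[i]; i++)
        for (int j = 0; b[j]; j++)
            if (strcmp(a[i], b[j]) == 0)
                return 1;
    return 0;
}

/* Bernstein's conditions: S1 and S2 may execute concurrently iff
 * R(S1)∩W(S2), W(S1)∩R(S2), and W(S1)∩W(S2) are all empty. */
int can_run_concurrently(const char **r1, const char **w1,
                         const char **r2, const char **w2)
{
    return !intersects(r1, w2)
        && !intersects(w1, r2)
        && !intersects(w1, w2);
}
```

For the slides' statements, S1: A := X + Y and S2: B := Z + 1 pass all three tests, while S1 and S3: C := A - B fail because W(S1) ∩ R(S3) = {A}.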

Fork and Join Example #1
S1: A := X + Y
S2: B := Z + 1
S3: C := A - B
S4: W := C + 1
(Graph: S1 and S2 may run concurrently; both precede S3, which precedes S4.)

Count := 2;
FORK L1;
A := X + Y;
Goto L2;
L1: B := Z + 1;
L2: JOIN Count;
C := A - B;
W := C + 1;

Structured Parallel Constructs: PARBEGIN / PAREND
PARBEGIN: sequential execution splits off into several concurrent sequences
PAREND: the parallel computations merge

PARBEGIN
Statement 1;
Statement 2;
...
Statement N;
PAREND;

PARBEGIN
Q := C mod 25;
Begin N := N - 1; T := N / 5; End;
Proc1(X, Y);
PAREND;

Fork and Join Example #2
(Graph: S1 precedes S2 and S3; S2 precedes S4; S4 precedes S5 and S6; S5, S6, and S3 all precede S7.)

S1;
Count := 3;
FORK L1;
S2;
S4;
FORK L2;
S5;
Goto L3;
L2: S6;
Goto L3;
L1: S3;
L3: JOIN Count;
S7;

Up to three tasks may execute concurrently.

Parbegin / Parend Examples
Begin
PARBEGIN
A := X + Y;
B := Z + 1;
PAREND;
C := A - B;
W := C + 1;
End

Begin
S1;
PARBEGIN
S3;
Begin
S2;
S4;
PARBEGIN
S5;
S6;
PAREND;
End;
PAREND;
S7;
End;

Comparison
Unfortunately, the structured concurrent statement is not powerful enough to model all precedence graphs.
(Figure: a modified precedence graph that PARBEGIN/PAREND alone cannot express.)

Comparison (cont'd)
Fork and Join code for the modified precedence graph:
S1;
Count1 := 2;
FORK L1;
S2;
S4;
Count2 := 2;
FORK L2;
S5;
Goto L3;
L1: S3;
L2: JOIN Count1;
S6;
L3: JOIN Count2;
S7;

Comparison (cont'd)
There is no corresponding structured-construct code for the same graph.
However, other synchronization techniques can supplement the structured constructs.
Also, not all precedence graphs need to be implemented for real-world problems.

Overview
System calls:
- fork()
- wait()
- pipe()
- write()
- read()
Examples

Process Creation: fork()
NAME
fork - create a new process
SYNOPSIS
#include <sys/types.h>
#include <unistd.h>
pid_t fork(void);
RETURN VALUE
On success: the parent receives the child's PID; the child receives 0.
On failure: -1.

fork() system call - example
#include <sys/types.h>
#include <unistd.h>
#include <stdio.h>

int main(void)
{
    printf("[%ld] parent process id: %ld\n", (long)getpid(), (long)getppid());
    fork();
    printf("[%ld] parent process id: %ld\n", (long)getpid(), (long)getppid());
    return 0;
}

fork() system call - example output
[17619] parent process id: 12729
[2372] parent process id: 17619
(The printf after fork() runs in both the parent and the child; the output order may vary.)

fork() - program structure
#include <sys/types.h>
#include <unistd.h>
#include <stdlib.h>
#include <stdio.h>

int main(void)
{
    pid_t pid;

    if ((pid = fork()) > 0) {
        /* parent */
    } else if (pid == 0) {
        /* child */
    } else {
        /* cannot fork */
    }
    exit(0);
}

wait() system call
wait() - wait for a child process to finish executing and collect its termination status
SYNOPSIS
#include <sys/types.h>
#include <sys/wait.h>
pid_t wait(int *stat_loc);
stat_loc points to an integer that receives the child's termination status (it may be NULL).
RETURN VALUE
On success: the terminated child's PID.
On failure: -1, and errno is set.

wait() - program structure
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <stdlib.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    pid_t childPID;

    if ((childPID = fork()) == 0) {
        /* child */
    } else {
        /* parent */
        wait(0);
    }
    exit(0);
}
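The status word that wait() fills in carries the child's exit code, which the parent can extract with the WEXITSTATUS macro after checking WIFEXITED. A self-contained sketch (the helper name spawn_and_collect is ours):

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Fork a child that exits with `code`; wait for it and
 * return the exit status the parent observes (-1 on error). */
int spawn_and_collect(int code)
{
    pid_t pid = fork();
    if (pid < 0)
        return -1;                  /* fork failed */
    if (pid == 0)
        _exit(code);                /* child: terminate immediately */

    int status;
    if (wait(&status) != pid || !WIFEXITED(status))
        return -1;
    return WEXITSTATUS(status);     /* parent: child's exit code */
}
```

Note the child calls _exit() rather than exit(), so it does not flush stdio buffers it inherited from the parent.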

pipe() system call
pipe() - create a read/write pipe that may later be used to communicate with a process we'll fork off
SYNOPSIS
#include <unistd.h>
int pipe(int pfd[2]);
PARAMETER
pfd is an array of 2 integers that will hold the two file descriptors used to access the pipe
RETURN VALUE
0 on success; -1 on error.

pipe() - structure
/* first, define an array to store the two file descriptors */
int pfds[2];

/* now, create the pipe */
int rc = pipe(pfds);
if (rc == -1) {
    /* pipe() failed */
    perror("pipe");
    exit(1);
}

If the call to pipe() succeeds, a pipe is created; pfds[0] will contain its read file descriptor and pfds[1] its write file descriptor.

write() system call
write() - write data to a file or other object identified by a file descriptor
SYNOPSIS
#include <unistd.h>
ssize_t write(int fildes, const void *buf, size_t nbyte);
PARAMETERS
fildes is the file descriptor; buf is the base address of the memory area the data is copied from; nbyte is the number of bytes to write
RETURN VALUE
The number of bytes actually written; if this differs from nbyte, something has gone wrong (or the write was partial). -1 on error.

read() system call
read() - read data from a file or other object identified by a file descriptor
SYNOPSIS
#include <unistd.h>
ssize_t read(int fildes, void *buf, size_t nbyte);
ARGUMENTS
fildes is the file descriptor; buf is the base address of the memory area into which the data is read; nbyte is the maximum number of bytes to read
RETURN VALUE
The number of bytes actually read (0 at end of file, -1 on error). The file offset is advanced by the amount read.
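Putting pipe(), fork(), write(), and read() together: the child writes a message into the pipe and the parent reads it back. A sketch following the usual convention of closing the unused pipe end in each process (the helper name pipe_roundtrip is ours):

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <string.h>

/* Child writes `msg` into a pipe; parent reads it into `buf`
 * (capacity `cap`, NUL-terminated on return).
 * Returns the number of bytes read, or -1 on error. */
ssize_t pipe_roundtrip(const char *msg, char *buf, size_t cap)
{
    int pfds[2];
    if (pipe(pfds) == -1)
        return -1;

    pid_t pid = fork();
    if (pid < 0)
        return -1;

    if (pid == 0) {                       /* child: uses write end only */
        close(pfds[0]);
        write(pfds[1], msg, strlen(msg));
        close(pfds[1]);
        _exit(0);
    }

    close(pfds[1]);                       /* parent: uses read end only */
    ssize_t n = read(pfds[0], buf, cap - 1);
    if (n >= 0)
        buf[n] = '\0';
    close(pfds[0]);
    wait(0);                              /* reap the child */
    return n;
}
```

Closing the parent's copy of the write end before reading matters: read() on a pipe only reports end of file once every write descriptor for that pipe has been closed.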