1 Concurrent Programming Andrew Case Slides adapted from Mohamed Zahran, Jinyang Li, Clark Barrett, Randy Bryant and Dave O’Hallaron

2 Concurrency vs. Parallelism Concurrency: At least two tasks make progress within the same time frame.  – Not necessarily at the same instant  – Includes techniques like time-slicing  – Can be implemented on a single processing unit  – A more general concept than parallelism Parallelism: At least two tasks execute literally at the same time.  – Requires hardware with multiple processing units

3 Concurrency vs. Parallelism

4 Concurrent Programming is Hard! The human mind tends to be sequential The notion of time is often misleading Thinking through all possible orderings of events in a computer system is at best error-prone and frequently impossible

5 Concurrent Programming is Hard! Classical problem classes of concurrent programs:  Races: outcome depends on arbitrary scheduling decisions elsewhere in the system  Example: who gets the last seat on the airplane?  Deadlock: improper resource allocation prevents forward progress  Example: traffic gridlock  Livelock / Starvation / Fairness: external events and/or system scheduling decisions can prevent sub-task progress  Example: people always jump in front of you in line Many aspects of concurrent programming are beyond the scope of this class

6 Echo Server Echo Service  Originally used for testing round-trip times / connectivity checks  Superseded by “ping/ICMP” Protocol:  Client connects to server  Client sends a message to server  Server sends the same message back to client (echoes)

7 Sequential Echo Server
(figure: client/server timeline — the client calls open_clientfd and connect; the server calls open_listenfd (socket, bind, listen) and accept; after the connection request is accepted, the two alternate write/read until the client closes, the server reads EOF and closes the connection, and the server awaits a connection request from the next client)

8 Problems with Sequential Servers Low utilization  Server is idle while waiting for client 1’s requests  It could have served another client during those idle times! Increased latency  Client 2 waits for client 1 to finish before getting served Solution: implement concurrent servers  Serve multiple clients at the same time

9 Sequential Echo Server Iterative servers process one request at a time
(figure: timeline with client 1, server, client 2 — client 2’s connect completes and it calls write and read, but its read blocks while the server is still serving client 1; only after client 1 closes does the server accept client 2, and client 2’s read finally returns)

10 Creating Concurrent Flows  Allow server to handle multiple clients simultaneously 1. Processes  Fork a new server process to handle each connection  Kernel automatically interleaves multiple logical flows  Each process has its own private address space 2. Threads (same process, multiple flows)  Create one server thread to handle each connection  Kernel automatically interleaves multiple logical flows  Each flow shares the same address space 3. I/O multiplexing with select()  Programmer manually interleaves multiple logical flows  All flows share the same address space  Relies on lower-level system abstractions

11 Concurrent Servers: Multiple Processes Spawn a separate process for each client
(figure: timeline — the server forks child 1 after accepting client 1’s connection, then immediately calls accept again; when client 2 connects, the server forks child 2; each child loops on read/write with its own client, e.g. while client 1 blocks waiting for the user to type in data)

12 Sequential Echo Server

int main(int argc, char **argv)
{
    int listenfd, connfd;
    listenfd = socket(…);           /* create socket */
    bind(listenfd, …);              /* bind socket to port */
    listen(listenfd, …);            /* listen for incoming connections */
    while (1) {
        connfd = accept(listenfd, …);       /* connect to client */
        while (readline(connfd, …) > 0)     /* read from client */
            write(connfd, …);               /* write to client */
        close(connfd);
    }
    exit(0);
}

 Accept a connection request
 Handle echo requests until client terminates

13 Process-Based Concurrent Server

int main(int argc, char **argv)
{
    int listenfd, connfd;
    signal(SIGCHLD, sigchld_handler);
    listenfd = socket(…);           /* create socket */
    bind(listenfd, …);              /* bind socket to port */
    listen(listenfd, …);            /* listen for incoming connections */
    while (1) {
        connfd = accept(listenfd, …);       /* client connects */
        if (fork() == 0) {
            close(listenfd);                /* Child closes its listening socket */
            while (readline(connfd, …) > 0) /* Child services client */
                write(connfd, …);
            close(connfd);                  /* Child closes connection with client */
            exit(0);                        /* Child exits */
        }
        close(connfd);  /* Parent closes connected socket (important!) */
    }
}

Fork a separate process for each client Does not allow any communication between different client handlers

14 Process Concurrency Requirements Listening server process must reap zombie children  to avoid a fatal memory leak Listening server process must close its copy of connfd  Kernel keeps a reference count for each socket/open file  After fork, refcnt(connfd) = 2  Connection will not be closed until refcnt(connfd) == 0

15 Process-Based Concurrent Server – Reaping

void sigchld_handler(int sig)
{
    while (waitpid(-1, 0, WNOHANG) > 0)
        ;
    return;
}

 Reap all zombie children

16 Process-Based Execution Model  Each client handled by an independent process  No shared state between them  Both parent & child have copies of listenfd and connfd
(figure: the listening server process receives connection requests and hands each off to its own per-client server process; client 1 and client 2 data flow only to their respective child processes)

17 View from Server’s TCP Manager

cl1> ./echoclient greatwhite.ics.cs.cmu.edu
srv> ./echoserverp
srv> connected to ( ), port
cl2> ./echoclient greatwhite.ics.cs.cmu.edu
srv> connected to ( ), port

(table: the TCP manager’s per-connection state — Connection / Host / Port / Host / Port — with one row for the listening socket and one row each for the cl1 and cl2 connections; the addresses and port numbers are missing from the transcript)

18 View from Server’s TCP Manager Port Demultiplexing  TCP manager maintains a separate stream for each connection  Each is represented to the application program as a socket  New connections are directed to the listening socket  Data from clients is directed to one of the connection sockets
(table: same Connection / Host / Port listing as the previous slide)

19 Pros and Cons of Process-Based Designs + Handle multiple connections concurrently + Clean sharing model  Shared file tables with separate file descriptors  No shared address space (no shared variables/globals) + Simple and straightforward – Additional overhead for process control – Nontrivial to share data between processes  Requires IPC (interprocess communication) mechanisms  FIFOs (named pipes), shared memory, and semaphores

20 Approach #2: Multiple Threads (Multithreading) Very similar to approach #1 (multiple processes),  but with threads instead of processes

21 Traditional View of a Process

Process = process context + code, data, and stack

Process context
 Program context: data registers, condition codes, stack pointer (SP), program counter (PC)
 Kernel context: VM structures, descriptor table, brk pointer

Code, data, and stack (address space, from address 0 up)
 read-only code/data, read/write data, run-time heap (brk), shared libraries, stack (SP)

22 Alternate View of a Process

Process = thread + code, data, and kernel context

Thread (main thread)
 stack (SP)
 Thread context: data registers, condition codes, stack pointer (SP), program counter (PC)

Code, data, and kernel context
 read-only code/data, read/write data, run-time heap (brk), shared libraries
 Kernel context: VM structures, descriptor table, brk pointer

23 Multi-Threaded Process A process can have multiple threads  Each thread has its own logical control flow (PC, registers, etc.) and its own stack  Each thread shares the same code, data, and kernel context  Threads share a common virtual address space
(figure: thread 1 (main thread) with stack 1 and context — data registers, condition codes, SP1, PC1; thread 2 (peer thread) with stack 2 and context — data registers, condition codes, SP2, PC2; both over shared code and data — read-only code/data, read/write data, run-time heap, shared libraries — and a shared kernel context: VM structures, descriptor table, brk pointer)

24 Thread Execution Single-core processor  time slicing Multi-core processor  true concurrency
(figure: timelines for threads A, B, and C — on a single core the three threads interleave over time; on two cores, three threads run with true parallelism, two at any instant)

25 Threads vs. Processes Similarities  Each has its own logical control flow  Each can run concurrently with others (often on different cores/CPUs)  Each is context switched Differences  Threads share code and some data  Processes (typically) do not  Threads are less resource intensive than processes  Process control (creating and reaping) is ~2x as expensive as thread control  Scheduling may be skewed in favor of processes  If one thread blocks, more likely to bring down the whole lot

26 POSIX Threads (Pthreads) Pthreads: Standard interface for 60+ functions that manipulate threads from C programs (pthread.h) Compile with GCC: gcc/g++ -pthread programname Programmer responsible for synchronizing/protecting shared resources

27 POSIX Threads (Pthreads)  Creating and reaping threads  pthread_create()  pthread_join()  Terminating threads  pthread_cancel()  pthread_exit()  exit() [terminates all threads]  Synchronizing access to shared variables  pthread_mutex_init()  pthread_mutex_[un]lock()  pthread_cond_init()  pthread_cond_[timed]wait()

28 pthread_join() blocks the calling thread until the thread with the specified thread ID exits.

29 The Pthreads "hello, world" Program

/*
 * hello.c - Pthreads "hello, world" program
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

void *thread(void *vargp);   /* thread function */

int main()
{
    pthread_t tid;
    pthread_create(&tid, NULL, thread, NULL);  /* thread attributes (usually NULL),
                                                  thread arguments (void *p) */
    pthread_join(tid, NULL);                   /* return value (void **p) */
    exit(0);
}

/* thread routine */
void *thread(void *vargp)
{
    printf("Hello, world!\n");
    return NULL;
}

30 Execution of Threaded "hello, world"
Main thread: calls pthread_create() → pthread_create() returns → calls pthread_join() → main thread waits for peer thread to terminate → pthread_join() returns → exit() terminates main thread and any peer threads
Peer thread: printf() → return NULL → (peer thread terminates)

31 Thread-Based Concurrent Echo Server

int main(int argc, char **argv)
{
    …
    pthread_t tid;
    listenfd = socket(…);
    bind(listenfd, …);
    listen(listenfd, …);
    while (1) {
        connfd = accept(listenfd, …);
        /* Create a new thread for each connection */
        pthread_create(&tid, NULL, echo_thread, (void *) connfd);
        pthread_detach(tid);   /* if not planning to pthread_join */
    }
}

void *echo_thread(void *p)
{
    int connfd = (int) p;      /* Get connection file descriptor */
    while (readline(connfd, …) > 0)
        write(connfd, …);
    close(connfd);             /* Close file descriptor when done */
    return NULL;               /* Not returning any data to main thread */
}

32 Threaded Execution Model  Multiple threads within a single process  Shared state between them
(figure: the listening server thread accepts connection requests and spawns a per-client server thread; client 1 and client 2 data are handled by their own threads within the shared process)

33 Problems: Race Conditions A race condition occurs when correctness of the program depends on one thread or process reaching point x before another thread or process reaches point y Any ordering of instructions from multiple threads may occur race.c

34 Potential Form of Unintended Sharing

main thread:
    while (1) {
        int connfd = Accept(listenfd, (SA *) &clientaddr, &clientlen);
        pthread_create(&tid, NULL, echo_thread, (void *) &connfd);
    }

(figure: connfd lives on the main thread’s stack, and every peer thread’s vargp points at that same location — peer 1 may execute connfd = *vargp only after the main thread has already overwritten connfd with connection 2’s descriptor, and likewise for peer 2. Race!)

Why would both copies of vargp point to the same location?

35 Potential Race Condition

Main:
    int i;
    for (i = 0; i < 100; i++) {
        pthread_create(&tid, NULL, thread, &i);
    }

Thread:
    void *thread(void *p)
    {
        int i = *((int *)p);
        pthread_detach(pthread_self());
        save_value(i);
        return NULL;
    }

Race Test  If there were no race, each thread would get a different value of i  The set of saved values would then consist of one copy each of 0 through 99

36 Experimental Results
(figure: “number of times each value is used” histograms for two runs of the race test — a multicore server and a single-core laptop; one run shows no race, with exactly one use of each value, while the other shows duplicated and missing values)
More on this later

37 Issues With Thread-Based Servers Must run “detached” to avoid a memory leak.  At any point in time, a thread is either joinable or detached.  A joinable thread can be reaped and killed by other threads.  must be reaped (with pthread_join) to free its memory resources.  A detached thread cannot be reaped or killed by other threads.  its resources are automatically reaped on termination.  Default state is joinable.  use pthread_detach(pthread_self()) to make detached. Must be careful to avoid unintended sharing.  For example, passing a pointer to the main thread’s stack: Pthread_create(&tid, NULL, thread, (void *)&connfd); All functions called by a thread must be thread-safe  Stay tuned

38 Pros and Cons of Thread-Based Designs + Easy to share data structures between threads  e.g., logging information, file cache. + Threads are more efficient than processes. – Unintentional sharing can introduce subtle and hard-to-reproduce errors!  Race conditions, data corruption from other threads, etc.

39 Event-Based Servers Using I/O Multiplexing OS provides detection for input among a list of FDs  select(), poll(), and epoll() Server maintains set of active connections  Array of connfd’s Repeat:  Determine which connections have pending inputs  If listenfd has input, then accept connection  Add new connfd to array  Service all connfd’s with pending inputs Details in book

40 I/O Multiplexed Event Processing
(figure: an array of descriptors — listenfd = 3 plus clientfd slots marked active, inactive, or never used; on each iteration the server checks which descriptors have pending inputs and reads from just those active descriptors)

41 Pros and Cons of I/O Multiplexing + One logical control flow + Easier to debug + No process or thread control overhead – More complex to code than process/thread based designs – Cannot take advantage of multi-core  Single thread of control

42 Approaches to Concurrency Processes  Hard to share resources: easy to avoid unintended sharing  High overhead in adding/removing clients Threads  Easy to share resources: perhaps too easy  Medium overhead  Not much control over scheduling policies  Difficult to debug  Event orderings not repeatable I/O Multiplexing  Tedious / more code to write (scheduler)  Total control over scheduling  Very low overhead  Only one process, so it can only run on one CPU core