Carnegie Mellon
Slide 1: Concurrent Programming
15-213 / 18-213: Introduction to Computer Systems, 23rd Lecture, Nov. 15, 2012
Instructors: Dave O'Hallaron, Greg Ganger, and Greg Kesden

Slide 2: Concurrent Programming is Hard!
- The human mind tends to be sequential
- The notion of time is often misleading
- Thinking about all possible sequences of events in a computer system is at least error prone and frequently impossible

Slide 3: Concurrent Programming is Hard!
Classical problem classes of concurrent programs:
- Races: outcome depends on arbitrary scheduling decisions elsewhere in the system
  Example: who gets the last seat on the airplane?
- Deadlock: improper resource allocation prevents forward progress
  Example: traffic gridlock
- Livelock / Starvation / Fairness: external events and/or system scheduling decisions can prevent sub-task progress
  Example: people always jump in front of you in line
Many aspects of concurrent programming are beyond the scope of 15-213 (but not all).

Slide 4: Client / Server Session (Reminder: Iterative Echo Server)
[Sequence diagram: the client sets up with open_clientfd and connect; the server sets up with open_listenfd (socket, bind, listen) and blocks in accept. Once the connection request is accepted, client and server exchange lines via rio_writen and rio_readlineb until the client calls close; the server sees EOF, closes, and awaits a connection request from the next client.]

Slide 5: Iterative Servers
Iterative servers process one request at a time.
[Timeline: client 1 connects and is accepted; while the server is in its read/write loop with client 1, client 2's connect must wait. Only after client 1 closes does the server's accept return for client 2.]

Slide 6: Where Does the Second Client Block?
The second client attempts to connect to the iterative server:
- Call to connect returns, even though the connection has not yet been accepted. The server-side TCP manager queues the request, a feature known as the "TCP listen backlog".
- Call to rio_writen returns: the server-side TCP manager buffers the input data.
- Call to rio_readlineb blocks: the server hasn't written anything for it to read yet.

Slide 7: Fundamental Flaw of Iterative Servers
[Timeline: client 1 connects and its user goes out to lunch, so client 1 blocks waiting for the user to type in data; the server blocks waiting for data from client 1; meanwhile client 2 connects and blocks waiting to read from the server.]
Solution: use concurrent servers instead. Concurrent servers use multiple concurrent flows to serve multiple clients at the same time.

Slide 8: Server Concurrency (3 Approaches)
Allow the server to handle multiple clients simultaneously:
1. Processes
   - Kernel automatically interleaves multiple logical flows
   - Each flow has its own private address space
2. Threads
   - Kernel automatically interleaves multiple logical flows
   - Each flow shares the same address space
3. I/O multiplexing with select()
   - Programmer manually interleaves multiple logical flows
   - All flows share the same address space
   - Relies on lower-level system abstractions

Slide 9: Concurrent Servers: Multiple Processes
Spawn a separate process for each client.
[Timeline: client 1 connects; the server's accept returns and it forks child 1 to handle the connection, then calls accept again. Client 2 connects and the server forks child 2. Each child runs its own read/write loop; when client 1's user goes out to lunch, only child 1 blocks waiting for data, while child 2 keeps serving client 2.]

Slide 10: Review: Iterative Echo Server

int main(int argc, char **argv)
{
    int listenfd, connfd;
    int port = atoi(argv[1]);
    struct sockaddr_in clientaddr;
    int clientlen = sizeof(clientaddr);

    listenfd = Open_listenfd(port);
    while (1) {
        connfd = Accept(listenfd, (SA *)&clientaddr, &clientlen);
        echo(connfd);
        Close(connfd);
    }
    exit(0);
}

- Accept a connection request
- Handle echo requests until client terminates

Slide 11: Process-Based Concurrent Echo Server
Fork a separate process for each client. Does not allow any communication between different client handlers.

int main(int argc, char **argv)
{
    int listenfd, connfd;
    int port = atoi(argv[1]);
    struct sockaddr_in clientaddr;
    int clientlen = sizeof(clientaddr);

    Signal(SIGCHLD, sigchld_handler);
    listenfd = Open_listenfd(port);
    while (1) {
        connfd = Accept(listenfd, (SA *)&clientaddr, &clientlen);
        if (Fork() == 0) {
            Close(listenfd); /* Child closes its listening socket */
            echo(connfd);    /* Child services client */
            Close(connfd);   /* Child closes connection with client */
            exit(0);         /* Child exits */
        }
        Close(connfd);       /* Parent closes connected socket (important!) */
    }
}

Slide 12: Process-Based Concurrent Echo Server (cont)

void sigchld_handler(int sig)
{
    while (waitpid(-1, 0, WNOHANG) > 0)
        ;
    return;
}

- Reap all zombie children

Slide 13: Process Execution Model
- Each client is handled by an independent process
- No shared state between them
- Both parent and child have copies of listenfd and connfd
  - Parent must close connfd
  - Child must close listenfd
[Diagram: the listening server process receives connection requests and spawns one server process per client; client 1 data flows to client 1's server process, client 2 data to client 2's.]

Slide 14: Concurrent Server: accept Illustrated
1. Server blocks in accept, waiting for a connection request on listening descriptor listenfd(3).
2. Client makes a connection request by calling connect on clientfd.
3. Server returns connfd(4) from accept and forks a child to handle the client. The connection is now established between clientfd and connfd.

Slide 15: Implementation Must-Dos With Process-Based Designs
- Listening server process must reap zombie children, to avoid a fatal memory leak.
- Listening server process must close its copy of connfd:
  - The kernel keeps a reference count for each socket/open file
  - After fork, refcnt(connfd) == 2
  - The connection will not be closed until refcnt(connfd) == 0

Slide 16: Pros and Cons of Process-Based Designs
+ Handle multiple connections concurrently
+ Clean sharing model: descriptors (no), file tables (yes), global variables (no)
+ Simple and straightforward
- Additional overhead for process control
- Nontrivial to share data between processes: requires IPC (interprocess communication) mechanisms such as FIFOs (named pipes), System V shared memory and semaphores

Slide 17: Approach #2: Multiple Threads
Very similar to approach #1 (multiple processes), but with threads instead of processes.

Slide 18: Traditional View of a Process
Process = process context + code, data, and stack
- Process context
  - Program context: data registers, condition codes, stack pointer (SP), program counter (PC)
  - Kernel context: VM structures, descriptor table, brk pointer
- Code, data, and stack: stack (SP), shared libraries, run-time heap (brk), read/write data, read-only code/data (PC), down to address 0

Slide 19: Alternate View of a Process
Process = thread + code, data, and kernel context
- Thread (main thread): stack (SP); thread context: data registers, condition codes, stack pointer (SP), program counter (PC)
- Code and data: shared libraries, run-time heap (brk), read/write data, read-only code/data (PC)
- Kernel context: VM structures, descriptor table, brk pointer

Slide 20: A Process With Multiple Threads
Multiple threads can be associated with a process:
- Each thread has its own logical control flow
- Each thread shares the same code, data, and kernel context
  - Threads share a common virtual address space (including the stacks)
- Each thread has its own thread id (TID)
[Diagram: thread 1 (main thread, stack 1, context: data registers, condition codes, SP1, PC1) and thread 2 (peer thread, stack 2, context: data registers, condition codes, SP2, PC2) share the code and data (shared libraries, run-time heap, read/write data, read-only code/data) and the kernel context (VM structures, descriptor table, brk pointer).]

Slide 21: Logical View of Threads
Threads associated with a process form a pool of peers, unlike processes, which form a tree hierarchy.
[Diagram: process hierarchy P0 → P1 → sh, foo, bar, versus threads T1–T5 associated with process foo, all sharing code, data and kernel context.]

Slide 22: Thread Execution
- Single-core processor: simulate concurrency by time slicing
- Multi-core processor: can have true concurrency
[Diagram: threads A, B, C interleaved over time on one core, versus 3 threads run on 2 cores.]

Slide 23: Logical Concurrency
Two threads are (logically) concurrent if their flows overlap in time; otherwise, they are sequential.
Examples (from the timeline of threads A, B, C):
- Concurrent: A & B, A & C
- Sequential: B & C

Slide 24: Threads vs. Processes
How threads and processes are similar:
- Each has its own logical control flow
- Each can run concurrently with others (possibly on different cores)
- Each is context switched
How threads and processes are different:
- Threads share code and some data; processes (typically) do not
- Threads are somewhat less expensive than processes
  - Process control (creating and reaping) is twice as expensive as thread control
  - Linux numbers: ~20K cycles to create and reap a process; ~10K cycles (or less) to create and reap a thread

Slide 25: Posix Threads (Pthreads) Interface
Pthreads: standard interface for ~60 functions that manipulate threads from C programs.
- Creating and reaping threads: pthread_create(), pthread_join()
- Determining your thread ID: pthread_self()
- Terminating threads: pthread_cancel(), pthread_exit(); exit() [terminates all threads], RET [terminates current thread]
- Synchronizing access to shared variables: pthread_mutex_init, pthread_mutex_[un]lock, pthread_cond_init, pthread_cond_[timed]wait

Slide 26: The Pthreads "hello, world" Program

/*
 * hello.c - Pthreads "hello, world" program
 */
#include "csapp.h"
void *thread(void *vargp);

int main()
{
    pthread_t tid;
    Pthread_create(&tid, NULL, thread, NULL);  /* args: thread attributes
                                                  (usually NULL), thread routine,
                                                  thread arguments (void *p) */
    Pthread_join(tid, NULL);                   /* second arg: return value (void **p) */
    exit(0);
}

/* thread routine */
void *thread(void *vargp)
{
    printf("Hello, world!\n");
    return NULL;
}

Slide 27: Execution of Threaded "hello, world"
[Diagram: the main thread calls Pthread_create(), which returns; the main thread then calls Pthread_join() and waits for the peer thread to terminate. The peer thread calls printf(), returns NULL, and terminates; Pthread_join() then returns, and exit() terminates the main thread and any peer threads.]

Slide 28: Thread-Based Concurrent Echo Server

int main(int argc, char **argv)
{
    int port = atoi(argv[1]);
    struct sockaddr_in clientaddr;
    int clientlen = sizeof(clientaddr);
    pthread_t tid;

    int listenfd = Open_listenfd(port);
    while (1) {
        int *connfdp = Malloc(sizeof(int));
        *connfdp = Accept(listenfd, (SA *)&clientaddr, &clientlen);
        Pthread_create(&tid, NULL, echo_thread, connfdp);
    }
}

- Spawn a new thread for each client
- Pass it a copy of the connection file descriptor
- Note the use of Malloc()! (without a corresponding Free())

Slide 29: Thread-Based Concurrent Server (cont)

/* thread routine */
void *echo_thread(void *vargp)
{
    int connfd = *((int *)vargp);
    Pthread_detach(pthread_self());
    Free(vargp);
    echo(connfd);
    Close(connfd);
    return NULL;
}

- Run the thread in "detached" mode: it runs independently of other threads and is reaped automatically (by the kernel) when it terminates
- Free the storage allocated to hold connfd: a "producer-consumer" model

Slide 30: Threaded Execution Model
- Multiple threads within a single process
- Some state shared between them (e.g., file descriptors)
[Diagram: the listening server receives connection requests and spawns one server thread per client; client 1 data and client 2 data flow to their respective threads.]

Slide 31: Potential Form of Unintended Sharing

while (1) {
    int connfd = Accept(listenfd, (SA *)&clientaddr, &clientlen);
    Pthread_create(&tid, NULL, echo_thread, (void *)&connfd);
}

[Diagram: the main thread's stack holds connfd; peer 1's vargp and peer 2's vargp both point to that same location. Interleaving: main thread sets connfd = connfd1; peer 1 reads connfd = *vargp; main thread sets connfd = connfd2; peer 2 reads connfd = *vargp. Race!]
Why would both copies of vargp point to the same location?

Slide 32: Could This Race Occur?

/* main thread */
int i;
for (i = 0; i < 100; i++) {
    Pthread_create(&tid, NULL, thread, &i);
}

/* thread routine */
void *thread(void *vargp)
{
    int i = *((int *)vargp);
    Pthread_detach(pthread_self());
    save_value(i);
    return NULL;
}

Race test: if there is no race, then each thread would get a different value of i, and the set of saved values would consist of one copy each of 0 through 99.

Slide 33: Experimental Results
The race can really happen!
[Charts of the saved values from the race test, one labeled "No Race", one for a multicore server, and one for a single-core laptop, showing that on some platforms the race actually occurs.]

Slide 34: Issues With Thread-Based Servers
Must run "detached" to avoid memory leak:
- At any point in time, a thread is either joinable or detached
- A joinable thread can be reaped and killed by other threads; it must be reaped (with pthread_join) to free its memory resources
- A detached thread cannot be reaped or killed by other threads; its resources are automatically reaped on termination
- The default state is joinable; use pthread_detach(pthread_self()) to make a thread detached
Must be careful to avoid unintended sharing, for example passing a pointer to the main thread's stack:
    Pthread_create(&tid, NULL, thread, (void *)&connfd);
All functions called by a thread must be thread-safe (next lecture).

Slide 35: Pros and Cons of Thread-Based Designs
+ Easy to share data structures between threads (e.g., logging information, file cache)
+ Threads are more efficient than processes
- Unintentional sharing can introduce subtle and hard-to-reproduce errors!
  - The ease with which data can be shared is both the greatest strength and the greatest weakness of threads
  - Hard to know which data is shared and which is private
  - Hard to detect by testing: the probability of a bad race outcome is very low, but nonzero! (future lectures)

Slide 36: Approaches to Concurrency
Processes:
- Hard to share resources (easy to avoid unintended sharing)
- High overhead in adding/removing clients
Threads:
- Easy to share resources (perhaps too easy)
- Medium overhead
- Not much control over scheduling policies
- Difficult to debug: event orderings not repeatable
I/O multiplexing:
- Tedious and low level
- Total control over scheduling
- Very low overhead
- Cannot create as fine-grained a level of concurrency
- Does not make use of multi-core

Slide 37: View from Server's TCP Manager
[Terminal transcript: cl1 runs ./echoclient against greatwhite.ics.cs.cmu.edu; the server (./echoserverp) prints "connected to" with cl1's host and port; cl2 then runs ./echoclient and the server prints a second "connected to" line.]
[Connection table with columns Connection, Host, Port, Host, Port: one row for the listening socket and one row each for the cl1 and cl2 connections.]

Slide 38: View from Server's TCP Manager (Port Demultiplexing)
- The TCP manager maintains a separate stream for each connection, each represented to the application program as a socket
- New connections are directed to the listening socket
- Data from clients is directed to one of the connection sockets