
Threads

Progressing With Parallel Processing
eWeek (09/18/06) Vol. 23, No. 37, P. D5
Multithreading skills are becoming essential as parallel processing hardware proliferates, and developers ignore the signs of this trend, such as Intel's investments in college curricula and resources for multithreaded-development training, at their own peril. The Java programming language lets developers express concurrency in the same language they already use for application logic, while concurrency toolkits and frameworks offer powerful abstractions for C++. According to Brian Goetz, principal author of "Java Concurrency in Practice," the days when developers could count the threads in their programs on the fingers of one hand are over. He and his five co-authors note that "The need for thread safety is contagious," because "frameworks may create threads on your behalf, and code called from these threads must be thread-safe." The authors contend that developers should keep their focus on application state rather than becoming overwhelmed by threading mechanisms, and should remember that careless habits that go unpunished in single-threaded environments may be exposed in multithreaded environments.

2.2 Threads
Process: address space + code execution
There is no law that says a process cannot have more than one "line" of execution.
Threads: a single address space + many threads of execution
Threads of one process share its global variables, its address space (including stack memory), open files, signals, child processes, etc.

Threads
Process – used to group resources together
Thread – the entity scheduled for execution on the CPU
Threads are sometimes called lightweight processes
Multithreading – multiple threads in the same process


Thread functions
Create a thread:
#include <pthread.h>
int pthread_create ( pthread_t* thread, pthread_attr_t* attr,
                     void* (*start_routine)(void *), void* arg );
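As a quick illustration (a minimal sketch of my own, not taken from the original slides), the call takes a start function and a single argument for it; error checking is omitted:

#include <pthread.h>
#include <stdio.h>

void* hello ( void* arg ) {
    printf( "hello from thread %d \n", *(int*)arg );
    return 0;
}

int main ( void ) {
    pthread_t t;
    int id = 1;
    pthread_create( &t, NULL, hello, &id );   //4th argument is passed to hello()
    pthread_join( t, NULL );                  //wait for the thread (covered below)
    return 0;
}

Compile with g++ (or gcc) and -lpthread, as in the examples later in this deck.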

Thread termination
void pthread_exit ( void* retval );
int pthread_cancel ( pthread_t thread );
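A small sketch (my own illustration, not from the slides) of both calls: one thread ends itself with pthread_exit and its value is collected with pthread_join; another is terminated with pthread_cancel. sleep() is a cancellation point, so the cancel takes effect there.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

void* worker ( void* arg ) {
    pthread_exit( (void*)42 );        //same effect as: return (void*)42;
}

void* spinner ( void* arg ) {
    for (;;) sleep( 1 );              //runs until cancelled
    return 0;
}

int main ( void ) {
    pthread_t t1, t2;
    void* status;

    pthread_create( &t1, NULL, worker, NULL );
    pthread_join( t1, &status );      //collect the exit value
    printf( "worker returned %ld \n", (long)status );

    pthread_create( &t2, NULL, spinner, NULL );
    pthread_cancel( t2 );             //ask the thread to terminate
    pthread_join( t2, &status );      //status is PTHREAD_CANCELED here
    return 0;
}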

Why threads?
Easier to create a thread than a process
Threads share resources
Compute and I/O within a single process on a single processor can be overlapped
Compute and compute within a single process on a multiprocessor can be overlapped

Example: word processor
Multiple threads:
1. User interaction
2. Document reformatting
3. Automatic saving/backups
The alternative: everything else stops whenever (2) or (3) occurs.

Example: web server
Dispatcher – handles incoming requests
Workers – perform the requests:
check the cache
read from disk if necessary
add the result to the cache
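A rough sketch of this dispatcher/worker structure (the helpers accept_request and handle_request below are stand-ins invented for illustration, not part of the deck):

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static int accept_request ( void ) {         //stand-in: block until a request arrives
    static int next = 0;
    sleep( 1 );
    return ++next;
}

static void handle_request ( int req ) {     //stand-in: check cache, read disk, update cache
    printf( "handling request %d \n", req );
}

void* worker ( void* arg ) {
    handle_request( (int)(long)arg );
    return 0;
}

void* dispatcher ( void* arg ) {
    for (;;) {
        int req = accept_request();          //the dispatcher only accepts...
        pthread_t t;
        pthread_create( &t, NULL, worker, (void*)(long)req );
        pthread_detach( t );                 //...and hands each request to a worker
    }
    return 0;
}

int main ( void ) {
    pthread_t d;
    pthread_create( &d, NULL, dispatcher, NULL );
    sleep( 5 );                              //let the server run for a while
    return 0;
}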

Threads and alternatives
Threads retain the simple sequential, blocking model while allowing parallelism.
A single-threaded server retains the simple sequential, blocking model, but performance suffers (no parallelism).
Finite state machine – each computation has a saved state, and some set of events can occur to change that state:
high performance through parallelism
uses nonblocking calls and interrupts (not simple)
Threads may be implemented in user space or in the kernel.

Threads
Thread types: detached, joinable (also pop-up)
Implementations: user space, kernel, hybrid
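For example, a joinable thread is the default; to create a detached thread instead, set the detach state in the attributes (a minimal sketch, not from the slides):

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

void* worker ( void* arg ) {
    printf( "detached worker running \n" );
    return 0;                                //no one can pthread_join this thread
}

int main ( void ) {
    pthread_attr_t attr;
    pthread_t t;

    pthread_attr_init( &attr );
    pthread_attr_setdetachstate( &attr, PTHREAD_CREATE_DETACHED );
    pthread_create( &t, &attr, worker, NULL );
    pthread_attr_destroy( &attr );

    sleep( 1 );                              //give the detached thread a chance to run
    return 0;
}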

Pitfall: global variables

#include <stdio.h>

bool err = false;

void* t1 ( void* p ) {
    ...
    err = true;
    if (err) puts( "hello" );   //may never get here!
}

void* t2 ( void* p ) {
    err = false;
}

int main ( int argc, char* argv[] ) {
    if (err) puts( "something bad happened." );
}

Pitfalls: global variables

Pitfall: global variables
Solution: don't use 'em.
Solution: create thread-wide globals (using your own libraries)
Solution: (other mechanisms that we will explore in the future)
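One standard-pthreads way to get "thread-wide globals" is thread-specific data: one key, but a separate value per thread. This is a sketch of my own, not code from the deck:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static pthread_key_t err_key;                //one key; each thread has its own value

void* worker ( void* arg ) {
    int* err = (int*)malloc( sizeof(int) );  //this thread's private "global"
    *err = (int)(long)arg;
    pthread_setspecific( err_key, err );

    //...later, anywhere in this thread:
    int* my_err = (int*)pthread_getspecific( err_key );
    printf( "thread %d sees err=%d \n", (int)(long)arg, *my_err );
    return 0;
}

int main ( void ) {
    pthread_t t[2];
    pthread_key_create( &err_key, free );    //free() is run on each value at thread exit
    for (long i=0; i<2; i++) pthread_create( &t[i], NULL, worker, (void*)i );
    for (int  i=0; i<2; i++) pthread_join( t[i], NULL );
    return 0;
}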

Other pitfalls
Libraries may not be reentrant
Solution: rewrite the library
Solution: wrap each library call with a wrapper
Signals, alarms, and I/O may not be thread specific.
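A sketch of the wrapper idea (legacy_format below is a stand-in I made up for a non-reentrant library routine; it hands back a pointer to one static buffer, which is exactly what makes many old library calls unsafe). The wrapper serializes calls with a lock and copies the result out; locks are covered in more detail later.

#include <pthread.h>
#include <stdio.h>
#include <string.h>

static char* legacy_format ( int n ) {       //stand-in for a non-reentrant library call
    static char buf[32];                     //one shared buffer: two threads collide here
    sprintf( buf, "value=%d", n );
    return buf;
}

static pthread_mutex_t legacy_lock = PTHREAD_MUTEX_INITIALIZER;

void safe_format ( int n, char* out, size_t outlen ) {
    pthread_mutex_lock( &legacy_lock );      //serialize access to the shared buffer
    strncpy( out, legacy_format( n ), outlen-1 );
    out[outlen-1] = '\0';
    pthread_mutex_unlock( &legacy_lock );
}

void* worker ( void* arg ) {
    char buf[32];
    safe_format( (int)(long)arg, buf, sizeof(buf) );
    printf( "%s \n", buf );
    return 0;
}

int main ( void ) {
    pthread_t t[2];
    for (long i=0; i<2; i++) pthread_create( &t[i], NULL, worker, (void*)i );
    for (int  i=0; i<2; i++) pthread_join( t[i], NULL );
    return 0;
}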

Other pthread functions
int sched_yield ( void );   (pthread_yield() is a common, non-standard alternative)
int pthread_join ( pthread_t th, void** thread_return );
Many, many others

Multithreaded example 1

/* file:    pt1.cpp
   date:    23-sep-2005
   author:  george j. grevera, ph.d.
   compile: g++ -o pt1.exe pt1.cpp -lpthread -lm
   desc.:   shell of a multithreaded app          */
#include <math.h>
#include <pthread.h>
#include <stdio.h>

//global variables
const int N = 10;        //max number of threads
pthread_t thread[N];     //for thread id storage
//----------------------------------------------------------------------

//----------------------------------------------------------------------
//program execution begins here.
int main ( const int argc, const char* const argv[] ) {
    //create N threads
    for (int i=0; i<N; i++) {
        pthread_create( &thread[i], 0, start_routine, (void*)i );
        printf( "main: thread %d created. \n", i );
    }

    //wait for the N threads to finish
    void* v;
    printf( "main: waiting \n" );
    for (int i=0; i<N; i++) {
        pthread_join( thread[i], &v );
    }
    printf( "main: returning \n" );
    return 0;
}

//----------------------------------------------------------------------
//example worker thread function. this function does a lot of "work"
// (i.e., computation).
double doWork ( const int whoAmI ) {
    double sum = 0.0;
    for (int i=0; i<10000000; i++) {
        sum += sin( i ) * cos( i );
    }
    return sum;
}
//----------------------------------------------------------------------
//thread start function (each created thread begins executing here)
void* start_routine ( void* p ) {
    int whoAmI = (int)p;
    printf( "%d processing \n", whoAmI );
    doWork( whoAmI );
    printf( "%d exit \n", whoAmI );
    return 0;
}

The complete pt1.cpp is simply the three pieces above in one file: the header and globals, then doWork and start_routine, and finally main (start_routine must appear, or be declared, before main refers to it).

Multithreading discussion
pthread_create only allows a single parameter to be passed to the thread.
pthread_join only allows a single parameter to be returned from a thread.
How can we pass and return many parameters?
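One common answer (a sketch of my own, not part of the original slides): pass a pointer to a struct of inputs, and return a pointer to a heap-allocated struct of outputs.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct args   { int a; double b; };              //many inputs
struct result { double sum; double product; };   //many outputs

void* worker ( void* p ) {
    struct args* in = (struct args*)p;
    struct result* out = (struct result*)malloc( sizeof(struct result) );
    out->sum     = in->a + in->b;
    out->product = in->a * in->b;
    return out;                                  //retrieved by pthread_join
}

int main ( void ) {
    struct args in = { 3, 2.5 };
    pthread_t t;
    void* v;

    pthread_create( &t, NULL, worker, &in );     //&in must stay valid until the thread finishes
    pthread_join( t, &v );

    struct result* out = (struct result*)v;
    printf( "sum=%g product=%g \n", out->sum, out->product );
    free( out );
    return 0;
}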

Multithreading: advanced topic
We know that process memory is shared among all threads.
We know that the stack is part of the process memory.
Therefore the stack is part of the memory that is shared among the threads.
How can we demonstrate that the stack is shared among threads?

From where are local variables allocated?

/* This program demonstrates that, although stack variables are not shared
   among threads, stack memory (_all_ process memory) is indeed shared by
   threads.
   compile: g++ -o sharedStack.exe sharedStack.cpp -lpthread -lm -lrt      */
#include <iostream>
#include <math.h>
#include <pthread.h>
#include <sched.h>
#include <unistd.h>
using namespace std;

const int N = 2;   //max number of threads

//this will be a pointer to a local variable in thread 0.
static int* whoAmIPointer = NULL;
//----------------------------------------------------------------------

//----------------------------------------------------------------------
int main ( const int argc, const char* const argv[] ) {
    pthread_t thread[::N];   //for thread id storage

    //create N threads
    for (int i=0; i< ::N; i++) {
        pthread_create( &thread[i], 0, start_routine, (void*)i );
        cout << "main: thread " << i << " created with id=" << thread[i] << endl;
    }

    //wait for the N threads to finish
    void* v;
    cout << "main: wait" << endl;
    for (int i=0; i< ::N; i++) {
        pthread_join( thread[i], &v );
    }
    cout << "main: returning" << endl;
    return 0;
}

Nothing new here.

//----------------------------------------------------------------------
void* start_routine ( void* p ) {
    int whoAmI = (int)p;
    int whoAmICopy = whoAmI;
    cout << whoAmI << " processing" << endl;

    if (whoAmI==0) {   //is this thread 0?
        //make the global var point to my local var
        ::whoAmIPointer = &whoAmI;
        sched_yield();
        sleep( 5 );
    } else {
        //this is not thread 0 so wait until thread 0 sets the global var
        // that points to thread 0's local var.
        while (::whoAmIPointer==NULL) { }
        //change thread 0's local var
        *::whoAmIPointer = 92;
    }

    if (whoAmI!=whoAmICopy) {
        cout << "Hey! Wait a minute! Somebody changed who I am from "
             << whoAmICopy << " to " << whoAmI << "!" << endl;
    }
    cout << whoAmI << " done" << endl << whoAmI << " exit" << endl;
    return 0;
}

What does :: mean in C++?

(Same code as the previous slide.)

What does :: mean in C++? :: is the C++ scope resolution operator. In this case, it refers to a global variable.

How could this ever be true (if threads didn't share stack memory)?
The check in question is the if (whoAmI!=whoAmICopy) test in start_routine above: it fires in thread 0 only because another thread wrote 92 into thread 0's local (stack) variable through the shared pointer.

The complete sharedStack.cpp is the pieces shown above in one file: the header comment, includes, and globals, then start_routine, and finally main.

Win32 thread functions
CreateThread
ExitThread
TerminateThread
WaitForSingleObject
GetExitCodeThread
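A minimal Win32 counterpart to the pthread examples (a sketch of my own, Windows only; compile with a Win32 toolchain):

#include <stdio.h>
#include <windows.h>

DWORD WINAPI worker ( LPVOID param ) {
    printf( "worker %d running \n", (int)(INT_PTR)param );
    return 0;                                //the thread's exit code (or call ExitThread(0))
}

int main ( void ) {
    DWORD  tid;
    HANDLE h = CreateThread( NULL, 0, worker, (LPVOID)(INT_PTR)1, 0, &tid );

    WaitForSingleObject( h, INFINITE );      //roughly analogous to pthread_join
    DWORD code;
    GetExitCodeThread( h, &code );
    printf( "worker exited with code %lu \n", (unsigned long)code );

    CloseHandle( h );
    return 0;
}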

Win32 threads & fibers
Fibers (in Win32 – not available in Linux):
A lightweight thread, owned by a thread
Threads are preemptively scheduled; fibers are not
When the owning thread is preempted, so is its fiber; when the thread is resumed, so is the fiber
A fiber may be scheduled by its owning thread
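A minimal sketch of the fiber calls (Windows only, my own illustration): the main thread converts itself into a fiber, creates a second fiber, and switches to it; the fiber must explicitly switch back, since fibers are never preempted.

#include <stdio.h>
#include <windows.h>

static LPVOID main_fiber;                    //the fiber the main thread becomes

VOID CALLBACK fiber_func ( PVOID param ) {
    printf( "fiber says: %s \n", (char*)param );
    SwitchToFiber( main_fiber );             //cooperative: explicitly yield back
}

int main ( void ) {
    main_fiber = ConvertThreadToFiber( NULL );   //a thread must become a fiber before switching
    LPVOID f = CreateFiber( 0, fiber_func, (PVOID)"hello" );

    SwitchToFiber( f );                      //runs fiber_func until it switches back
    printf( "back in the main fiber \n" );
    DeleteFiber( f );
    return 0;
}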