Thread Programming 김 도 형
2 Table of Contents
- What is a Thread?
- Designing Threaded Programs
- Synchronizing Threads
- Managing Threads
3 What is a Thread?
Relationship between a process and a thread: a process is created by the operating system. Processes contain information about program resources and program execution state, including:
- Process ID, process group ID, user ID, and group ID
- Environment
- Working directory
- Program instructions
- Registers
- Stack
- Heap
- File descriptors
- Signal actions
- Shared libraries
- Inter-process communication tools (such as message queues, pipes, semaphores, or shared memory)
4 A Process
5 A Thread
Threads use and exist within these process resources, yet are able to be scheduled by the operating system and run as independent entities within a process. A thread can possess an independent flow of control and be schedulable because it maintains its own:
- Stack pointer
- Registers
- Scheduling properties (such as policy or priority)
- Set of pending and blocked signals
- Thread-specific data
6 Threads
7 A process can have multiple threads, all of which share the resources within a process and all of which execute within the same address space. Within a multi-threaded program, there are at any time multiple points of execution. Because threads within the same process share resources:
- Changes made by one thread to shared system resources (such as closing a file) will be seen by all other threads.
- Two pointers having the same value point to the same data.
- Reading and writing to the same memory locations is possible, and therefore requires explicit synchronization by the programmer (see the sketch below).
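As a minimal sketch of that last point (not from the slides; the thread count and loop bounds are illustrative), two threads below update a shared counter and a mutex serializes the updates. Compile with gcc -pthread.

#include <pthread.h>
#include <stdio.h>

static long counter = 0;                          /* shared by both threads */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *increment(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);                /* without the lock, updates can interleave and be lost */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);           /* always 200000 with the mutex */
    return 0;
}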
8 User-Level Threads
- Usually run on top of the operating system
- The threads within the process are invisible to the kernel; they are visible only from within the process
- These threads compete among themselves for the resources allocated to the process
- Scheduled by a thread runtime system that is part of the process code
- Advantages
  - Inexpensive, low overhead
  - Need not create their own address space
  - Switching between threads does not involve switching address spaces
9 Kernel-Level Threads
- The kernel is aware of each thread as a schedulable entity
- Threads compete for processor resources on a system-wide basis
- Advantage
  - Can exploit multiple processors
10 Hybrid Threads
- Take advantage of both user-level and kernel-level threads
- User-level threads are mapped onto kernel-schedulable entities as the process runs, to achieve parallelism
- Sun Solaris
  - The user-level entities are simply called threads
  - Kernel-schedulable entities are called lightweight processes
  - The user can specify that a particular thread have a dedicated lightweight process
11 Why Use Threads Over Processes?
- Creating a new process can be expensive
  - Time
    - A call into the operating system is needed
    - The operating system's context-switching mechanism is involved
  - Memory
    - The entire process must be replicated
  - The cost of inter-process communication and synchronization of shared data
    - May involve calls into the operating system kernel
- Threads can be created without replicating an entire process
  - Creating a thread is done in user space rather than kernel space
12 Benefits of Thread Programming
- Inter-thread communication is more efficient and, in many cases, easier to use than inter-process communication.
- Threaded applications offer potential performance gains and practical advantages over non-threaded applications in several other ways:
  - Overlapping CPU work with I/O: for example, a program may have sections where it performs a long I/O operation. While one thread waits for an I/O system call to complete, other threads can perform CPU-intensive work (see the sketch after this list).
  - Priority/real-time scheduling: tasks which are more important can be scheduled to supersede or interrupt lower-priority tasks.
  - Asynchronous event handling: tasks which service events of indeterminate frequency and duration can be interleaved. For example, a web server can both transfer data from previous requests and manage the arrival of new requests.
  - SMP machine performance
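As referenced in the I/O bullet above, a minimal sketch (illustrative workloads, not from the slides): one thread blocks in read() while the other keeps the CPU busy.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *io_task(void *arg)
{
    char buf[256];
    ssize_t n = read(STDIN_FILENO, buf, sizeof buf);   /* thread blocks here waiting for input... */
    printf("I/O thread read %zd bytes\n", n);
    return NULL;
}

static void *cpu_task(void *arg)
{
    double sum = 0.0;
    for (long i = 1; i < 100000000L; i++)              /* ...while this thread keeps computing */
        sum += 1.0 / i;
    printf("CPU thread computed %f\n", sum);
    return NULL;
}

int main(void)
{
    pthread_t io, cpu;
    pthread_create(&io, NULL, io_task, NULL);
    pthread_create(&cpu, NULL, cpu_task, NULL);
    pthread_join(io, NULL);
    pthread_join(cpu, NULL);
    return 0;
}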
13 What are Pthreads?
- IEEE POSIX Section 1003.1c
  - IEEE: Institute of Electrical and Electronics Engineers
  - POSIX: Portable Operating System Interface
- Pthreads is a standardized model for dividing a program into subtasks whose execution can be interleaved or run in parallel
- Specifics differ between implementations
  - Mach threads and NT threads
  - Linux Pthreads are kernel-level threads, implemented with the clone() system call (see the sketch below)
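To illustrate that last bullet, a hypothetical sketch of creating a concurrent task directly with clone(): CLONE_VM shares the caller's address space, which is essentially what makes a kernel-schedulable task behave like a thread rather than a separate process. The stack size and flags here are illustrative; a real Pthreads library passes many more flags.

#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>

static int child_fn(void *arg)
{
    printf("child sees %d through the shared address space\n", *(int *)arg);
    return 0;
}

int main(void)
{
    enum { STACK_SIZE = 1024 * 1024 };
    char *stack = malloc(STACK_SIZE);      /* the new task needs its own stack */
    int value = 42;

    /* The stack grows downward on most architectures, so pass its top.
       CLONE_VM: share the address space; SIGCHLD: let waitpid() reap the child. */
    pid_t pid = clone(child_fn, stack + STACK_SIZE, CLONE_VM | SIGCHLD, &value);
    if (pid == -1) { perror("clone"); exit(1); }

    waitpid(pid, NULL, 0);                 /* wait for the cloned task to finish */
    free(stack);
    return 0;
}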
14 Process/Thread Creation Overhead
Timings (real / user / sys) for fork() vs. pthread_create():
- IBM 332 MHz 604e, 4 CPUs/node, 512 MB memory, AIX 4.3: fork() 92.4 / 2.7 / 105.3, pthread_create() 8.7 / 4.9 / 3.9
- IBM 375 MHz POWER3, 16 CPUs/node, 16 GB memory, AIX 5.1: fork() 173.6 / 13.9 / 172.1, pthread_create() 9.6 / 3.8 / 6.7
- INTEL 2.2 GHz Xeon, 2 CPUs/node, 2 GB memory, RedHat Linux 7.3: fork() 17.4 / 3.9 / 13.5, pthread_create() 5.9 / 0.8 / 5.3
fork_vs_thread.txt
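A rough sketch of how figures like these might be collected (purely illustrative: the iteration count, the timer, and the do-nothing workloads are assumptions; the table above does not state its methodology).

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>
#include <sys/wait.h>
#include <unistd.h>

#define ITERATIONS 50000                    /* assumed, not taken from the table */

static void *do_nothing(void *arg) { return NULL; }

static double elapsed(struct timeval a, struct timeval b)
{
    return (b.tv_sec - a.tv_sec) + (b.tv_usec - a.tv_usec) / 1e6;
}

int main(void)
{
    struct timeval start, end;
    pthread_t tid;

    gettimeofday(&start, NULL);
    for (int i = 0; i < ITERATIONS; i++) {  /* cost of creating (and reaping) a process */
        pid_t pid = fork();
        if (pid == 0) _exit(0);
        waitpid(pid, NULL, 0);
    }
    gettimeofday(&end, NULL);
    printf("fork():           %.2f s wall time\n", elapsed(start, end));

    gettimeofday(&start, NULL);
    for (int i = 0; i < ITERATIONS; i++) {  /* cost of creating (and joining) a thread */
        pthread_create(&tid, NULL, do_nothing, NULL);
        pthread_join(tid, NULL);
    }
    gettimeofday(&end, NULL);
    printf("pthread_create(): %.2f s wall time\n", elapsed(start, end));
    return 0;
}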
15 The Pthreads API
The Pthreads API is defined in the ANSI/IEEE POSIX 1003.1-1995 standard. The subroutines which comprise the Pthreads API can be informally grouped into three major classes:
1. Thread management: the first class of functions works directly on threads - creating, detaching, joining, etc. It includes functions to set/query thread attributes (joinable, scheduling, etc.).
2. Mutexes: the second class of functions deals with synchronization via the "mutex", an abbreviation for "mutual exclusion". Mutex functions provide for creating, destroying, locking and unlocking mutexes. They are supplemented by mutex attribute functions that set or modify attributes associated with mutexes.
3. Condition variables: the third class of functions addresses communication between threads that share a mutex. They are based upon programmer-specified conditions. This class includes functions to create, destroy, wait and signal based upon specified variable values. Functions to set/query condition variable attributes are also included.
16 The Pthreads API The Pthreads API contains over 60 subroutines. This tutorial will focus on a subset of these - specifically, those which are most likely to be immediately useful to the beginning Pthreads programmer. The pthread.h header file must be included in each source file using the Pthreads library. The current POSIX standard is defined only for the C language.
17 Creating Threads
Initially, your main() program comprises a single, default thread. All other threads must be explicitly created by the programmer.
Routines:
- pthread_create (thread, attr, start_routine, arg)
  - Creates a new thread and makes it executable. Once created, threads are peers, and may create other threads.
  - Returns the new thread ID via the thread argument; the routine's return code should be checked to ensure that the thread was successfully created.
18 Creating Threads
- attr: used to set thread attributes. You can specify a thread attributes object, or NULL for the default values. Thread attributes are discussed later (see the sketch below).
- start_routine: the C routine that the thread will execute once it is created.
- arg: a single argument that may be passed to start_routine. It must be passed by reference as a pointer cast of type void.
The maximum number of threads that may be created by a process is implementation dependent.
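A short sketch of the attr argument in use (the detach-state attribute is just one example of an attribute; error handling is minimal).

#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg)
{
    printf("detached worker running\n");
    return NULL;                       /* nobody will join this thread */
}

int main(void)
{
    pthread_t tid;
    pthread_attr_t attr;

    pthread_attr_init(&attr);                                     /* start from default attributes   */
    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);  /* the new thread cannot be joined */

    int rc = pthread_create(&tid, &attr, worker, NULL);
    if (rc != 0)
        fprintf(stderr, "pthread_create failed with code %d\n", rc);

    pthread_attr_destroy(&attr);       /* the attributes object is no longer needed   */
    pthread_exit(NULL);                /* exit main only; the detached thread runs on */
}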
19 Terminating Thread Execution
There are several ways in which a Pthread may be terminated:
- The thread returns from its starting function
- The thread makes a call to the pthread_exit function
- The thread is canceled by another thread via the pthread_cancel function
- The entire process is terminated due to a call to either exec or exit
Routines:
- pthread_exit (status)
20 Terminating Thread Execution
Routines:
- pthread_exit (status)
- If main() finishes before the threads it has created, and exits with pthread_exit(), the other threads will continue to execute. Otherwise, they will be automatically terminated when main() finishes.
- Status: the programmer may optionally specify a termination status, which is stored as a void pointer for any thread that may join the calling thread (see the sketch below).
- Cleanup: the pthread_exit() routine does not close files; any files opened inside the thread will remain open after the thread is terminated.
- Recommendation: use pthread_exit() to exit from all threads... especially main().
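A minimal sketch of that status round trip (the value 42 is arbitrary): one thread passes a status to pthread_exit(), and a joining thread picks it up with pthread_join().

#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg)
{
    pthread_exit((void *)42);              /* termination status, stored for any joiner */
}

int main(void)
{
    pthread_t tid;
    void *status;

    pthread_create(&tid, NULL, worker, NULL);
    pthread_join(tid, &status);            /* blocks until worker terminates            */
    printf("worker exited with status %ld\n", (long)status);

    pthread_exit(NULL);                    /* recommended way to leave main()           */
}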
21 Pthread Creation and Termination

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#define NUM_THREADS 5

void *PrintHello(void *threadid)
{
    printf("\n%ld: Hello World!\n", (long)threadid);
    pthread_exit(NULL);
}

int main(int argc, char *argv[])
{
    pthread_t threads[NUM_THREADS];
    int rc;
    long t;

    for (t = 0; t < NUM_THREADS; t++) {
        printf("Creating thread %ld\n", t);
        rc = pthread_create(&threads[t], NULL, PrintHello, (void *)t);
        if (rc) {
            printf("ERROR; return code from pthread_create() is %d\n", rc);
            exit(-1);
        }
    }
    pthread_exit(NULL);
}
22 Passing Arguments to Threads The pthread_create() routine permits the programmer to pass one argument to the thread start routine. For cases where multiple arguments must be passed, this limitation is easily overcome by creating a structure which contains all of the arguments, and then passing a pointer to that structure in the pthread_create() routine. All arguments must be passed by reference and cast to (void *).
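A sketch of that approach (the structure name and fields are hypothetical): the arguments are bundled into a heap-allocated struct so they remain valid after the creating function returns.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct thread_args {                   /* hypothetical bundle of arguments */
    int  id;
    char message[32];
};

static void *worker(void *arg)
{
    struct thread_args *a = (struct thread_args *)arg;   /* cast back from (void *) */
    printf("thread %d says: %s\n", a->id, a->message);
    free(a);                           /* this thread now owns the struct */
    return NULL;
}

int main(void)
{
    pthread_t tid;
    struct thread_args *a = malloc(sizeof *a);
    a->id = 1;
    snprintf(a->message, sizeof a->message, "hello from main");

    pthread_create(&tid, NULL, worker, (void *)a);       /* one pointer carries everything */
    pthread_join(tid, NULL);
    return 0;
}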
Designing Threaded Programs
24 Designing Threaded Programs In order for a program to take advantage of Pthreads, it must be able to be organized into discrete, independent tasks which can execute concurrently.
25 Designing Threaded Programs
26 Designing Threaded Programs
- Splitting
  - CPU-based (e.g., computation)
  - I/O-based (e.g., read a data block from a file on disk)
- Potential parallelism
  - The property that statements can be executed in any order without changing the result
27 Potential Parallelism
Reasons for exploiting potential parallelism:
- Obvious: make a program run faster on a multiprocessor
- Additional reasons
  - Overlapping I/O
    - Parallel execution of I/O-bound and CPU-bound jobs
    - E.g., a word processor that keeps editing while printing
  - Asynchronous events
    - If one or more tasks are subject to the indeterminate occurrence of events of unknown duration and unknown frequency, it may be more efficient to allow other tasks to proceed while the task subject to asynchronous events is in some unknown state of completion.
    - E.g., a network-based server
28 Designing Threaded Programs
Several common models for threaded programs exist:
- Manager/worker: a single thread, the manager, assigns work to other threads, the workers. Typically, the manager handles all input and parcels out work to the other tasks. At least two forms of the manager/worker model are common: static worker pool and dynamic worker pool.
- Pipeline: a task is broken into a series of suboperations, each of which is handled in series, but concurrently, by a different thread. An automobile assembly line best describes this model.
- Peer: similar to the manager/worker model, but after the main thread creates other threads, it participates in the work.
29 Manager/Worker Model
30 Manager/Worker Model
Example 1 (manager/worker model program)

Main(void)   /* the manager */
{
    forever {
        get a request;
        switch (request) {
            case X: pthread_create( … taskX );
            case Y: pthread_create( … taskY );
            ……..
        }
    }
}

taskX()   /* workers processing requests of type X */
{
    perform the task, synchronize as needed if accessing shared resources;
    done;
}
31 Manager/Worker Model
Thread pool
- A variant of the manager/worker model
- The manager can save some run-time overhead by creating all worker threads up front

Example 2 (manager/worker model with a thread pool)

Main(void)   /* manager */
{
    for the number of workers
        pthread_create( ….pool_base );
    forever {
        get a request;
        place request in work queue;
        signal sleeping threads that work is available;
    }
}
32 Manager/Worker Model
Example 2 (cont'd)

pool_base()   /* all workers */
{
    forever {
        sleep until awoken by boss;
        dequeue a work request;
        switch {
            case request X: taskX();
            case request Y: taskY();
            ….
        }
    }
}
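A hypothetical C sketch of Example 2, with a mutex and condition variable standing in for the queue handling and the "sleep until awoken" step (the queue layout, names, and the sleep() pacing are all illustrative, not from the slides).

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define NUM_WORKERS 4
#define QUEUE_SIZE  16

static int queue[QUEUE_SIZE];
static int head = 0, tail = 0, count = 0;
static pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  work_available = PTHREAD_COND_INITIALIZER;

static void *pool_base(void *arg)              /* all workers run this           */
{
    long id = (long)arg;
    for (;;) {
        pthread_mutex_lock(&qlock);
        while (count == 0)                     /* sleep until awoken by the boss */
            pthread_cond_wait(&work_available, &qlock);
        int request = queue[head];             /* dequeue a work request         */
        head = (head + 1) % QUEUE_SIZE;
        count--;
        pthread_mutex_unlock(&qlock);
        printf("worker %ld handling request %d\n", id, request);
    }
    return NULL;
}

int main(void)                                 /* the manager                    */
{
    pthread_t workers[NUM_WORKERS];
    for (long i = 0; i < NUM_WORKERS; i++)     /* create the pool up front       */
        pthread_create(&workers[i], NULL, pool_base, (void *)i);

    for (int request = 0; ; request++) {       /* forever: get a request...      */
        pthread_mutex_lock(&qlock);
        if (count < QUEUE_SIZE) {              /* ...place it in the work queue  */
            queue[tail] = request;
            tail = (tail + 1) % QUEUE_SIZE;
            count++;
            pthread_cond_signal(&work_available);  /* wake a sleeping worker     */
        }
        pthread_mutex_unlock(&qlock);
        sleep(1);                              /* stand-in for waiting for real requests */
    }
}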
33 Pipeline Model
34 Pipeline Model
Example (pipeline model program)

Main(void)
{
    pthread_create( …stage1 );
    pthread_create( …stage2 );
    …
    wait for all pipeline threads to finish;
    do any clean up;
}

stage1()
{
    forever {
        get next input for the program;
        do stage1 processing of the input;
        pass result to next thread in pipeline;
    }
}
35 Pipeline Model
Example (cont'd)

stage2()
{
    forever {
        get input from previous thread in pipeline;
        do stage2 processing of the input;
        pass result to next thread in pipeline;
    }
}

stageN()
{
    forever {
        get input from previous thread in pipeline;
        do stageN processing of the input;
        pass result to program output;
    }
}
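A hypothetical two-stage C version of the same skeleton, with a one-slot hand-off buffer standing in for "pass result to next thread in pipeline" (the names and the squaring "processing" step are illustrative).

#include <pthread.h>
#include <stdio.h>

static int slot;                             /* hand-off between stage1 and stage2 */
static int full = 0;
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  c = PTHREAD_COND_INITIALIZER;

static void *stage1(void *arg)
{
    for (int i = 0; i < 5; i++) {            /* "get next input for the program"   */
        int result = i * i;                  /* stage1 processing                  */
        pthread_mutex_lock(&m);
        while (full)                         /* wait until stage2 took the last item */
            pthread_cond_wait(&c, &m);
        slot = result;                       /* pass result to next thread         */
        full = 1;
        pthread_cond_signal(&c);
        pthread_mutex_unlock(&m);
    }
    return NULL;
}

static void *stage2(void *arg)
{
    for (int i = 0; i < 5; i++) {
        pthread_mutex_lock(&m);
        while (!full)                        /* get input from previous stage      */
            pthread_cond_wait(&c, &m);
        int value = slot;
        full = 0;
        pthread_cond_signal(&c);
        pthread_mutex_unlock(&m);
        printf("stage2 output: %d\n", value);  /* pass result to program output    */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, stage1, NULL);
    pthread_create(&t2, NULL, stage2, NULL);
    pthread_join(t1, NULL);                  /* wait for all pipeline threads to finish */
    pthread_join(t2, NULL);
    return 0;
}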
36 Peer Model
37 Peer Model
Example (peer model program)

Main(void)
{
    pthread_create( …thread1 … task1 );
    pthread_create( …thread2 … task2 );
    …
    signal all workers to start;
    wait for all workers to finish;
    do any clean up;
}

task1()
{
    wait for start;
    perform task, synchronize as needed if accessing shared resources;
    done;
}
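A hypothetical C sketch of the peer model: a pthread_barrier_t (an optional POSIX feature) stands in for "signal all workers to start", and main() performs a share of the work itself; the peer count and task body are illustrative.

#include <pthread.h>
#include <stdio.h>

#define NUM_PEERS 3                           /* main plus two created threads */

static pthread_barrier_t start_barrier;

static void *task(void *arg)
{
    long id = (long)arg;
    pthread_barrier_wait(&start_barrier);     /* wait for start                 */
    printf("peer %ld performing its share of the work\n", id);
    return NULL;
}

int main(void)
{
    pthread_t peers[NUM_PEERS - 1];
    pthread_barrier_init(&start_barrier, NULL, NUM_PEERS);

    for (long i = 1; i < NUM_PEERS; i++)
        pthread_create(&peers[i - 1], NULL, task, (void *)i);

    task((void *)0L);                         /* main participates as peer 0    */

    for (int i = 0; i < NUM_PEERS - 1; i++)   /* wait for all workers to finish */
        pthread_join(peers[i], NULL);

    pthread_barrier_destroy(&start_barrier);
    return 0;
}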
To be continued…