CSCI 511 Operating Systems Ch3, 4 (Part C) Cooperating Threads


1 CSCI 511 Operating Systems Ch3, 4 (Part C) Cooperating Threads
Dr. Frank Li

2 Review: Per Thread State
Each thread has a Thread Control Block (TCB):
Execution state: CPU registers, program counter, pointer to stack
Scheduling info: state, priority, CPU time
Accounting info
Various pointers (for implementing scheduling queues)
Pointer to enclosing process (PCB)
Etc.
The OS keeps track of TCBs in protected memory, in arrays, linked lists, or other structures.
[Figure: ready queue with head and tail pointers linking TCBs (TCB9, TCB6, TCB16), each holding registers, a link, and other state]
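To make this bookkeeping concrete, here is a minimal C sketch of a TCB and a ready queue. The field names and types (tcb_t, ready_queue_t) are illustrative assumptions, not the layout of any real OS:

    /* Illustrative TCB: execution state, scheduling info, accounting
       info, and links for scheduling queues (see the slide above). */
    #include <stdint.h>

    typedef enum { READY, RUNNING, BLOCKED } thread_state_t;

    typedef struct tcb {
        uint64_t       regs[32];   /* saved CPU registers              */
        uint64_t       pc;         /* saved program counter            */
        void          *stack_ptr;  /* pointer to this thread's stack   */
        thread_state_t state;      /* scheduling info: state           */
        int            priority;   /* scheduling info: priority        */
        uint64_t       cpu_time;   /* accounting info                  */
        struct pcb    *process;    /* pointer to enclosing process PCB */
        struct tcb    *next;       /* link for scheduling queues       */
    } tcb_t;

    typedef struct {               /* ready queue kept by the OS       */
        tcb_t *head;               /* next thread to run               */
        tcb_t *tail;               /* where new TCBs are enqueued      */
    } ready_queue_t;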

3 Goals for Today
Multithreading Models
Benefits of Multithreaded Programming
Cooperating Threads

4 Kernel threads vs. User threads
Kernel threads: native threads supported directly by the kernel
Every thread can run or block independently
All modern OSs support kernel threads
Downside: a bit expensive, since scheduling requires a crossing into kernel mode
User threads (a lighter version): the user program provides the scheduler and thread package
May have several user threads per kernel thread
More economical
Downside: when one thread blocks on I/O, all threads block, and the kernel cannot adjust scheduling among the threads

5 Multithreading Models
There must exist a relationship between user threads and kernel threads.
Many-to-One Model: thread management is done by the thread library in user space
Efficient
The entire process will block if one thread makes a blocking system call
Unable to run in parallel on a multiprocessor architecture
E.g., the Green threads library in Solaris
(By contrast, the many-to-many model, available in POSIX Pthreads, is probably the best model.)

6 Multithreading Models
One-to-One Model
Provides more concurrency than the many-to-one model:
Allows another thread to run when a thread makes a blocking system call
Allows multiple threads to run in parallel on multiprocessors
Drawback: creating a user thread requires creating the corresponding kernel thread, which means larger overhead
E.g., Windows 95, 98, NT, 2000

7 Multithreading Models
Many-to-Many Model
Multiplexes many user threads to a smaller or equal number of kernel threads
The most flexible multithreading model:
Application programmers can create as many user threads as necessary
The corresponding kernel threads can run in parallel on a multiprocessor
The two-level model is one popular variation on the many-to-many model
E.g., supported by HP-UX, Tru64 UNIX, Solaris

8 Thread Libraries
A thread library provides the programmer an API for creating and managing threads.
Two ways of implementing a thread library:
Provide a library entirely in user space, with no kernel support
Implement a kernel-level library supported directly by the OS; invoking a function in this API results in a system call to the kernel
Three main thread libraries:
POSIX Pthreads: the thread extension of the POSIX standard; may be provided as either a user-level or a kernel-level library
Win32 thread library: a kernel-level library on Windows
Java threads: allow thread creation and management directly in Java programs; typically implemented using a thread library available on the host OS
Read the examples in the text.
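As a quick illustration of the Pthreads API, here is a minimal sketch (assuming a POSIX system; compile with -pthread) that creates one thread, passes it an argument, and waits for it to finish:

    #include <pthread.h>
    #include <stdio.h>

    static void *worker(void *arg) {
        int id = *(int *)arg;                 /* argument passed by the creator */
        printf("hello from thread %d\n", id);
        return NULL;
    }

    int main(void) {
        pthread_t tid;
        int id = 1;
        if (pthread_create(&tid, NULL, worker, &id) != 0)
            return 1;                         /* creation failed       */
        pthread_join(tid, NULL);              /* wait for it to finish */
        return 0;
    }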

9 Goals for Today
Multithreading Models
Benefits of Multithreaded Programming
Cooperating Threads

10 Benefits of Multithreaded Programming
Responsiveness: allows a program to continue running even if part of it is blocked or performing a lengthy operation. Useful for interactive applications (in a multithreaded web browser, one thread can interact with the user while another thread downloads an image).
Resource sharing: threads share the memory and the resources of the process to which they belong; threads within a process share the address space.
Economy: allocating memory and resources for process creation is costly, so it is more economical to create and context-switch threads. E.g., in Solaris, creating a process is about 30 times slower than creating a thread, and a process context switch is about 5 times slower.
Utilization of multiprocessor architectures: in a multiprocessor architecture, threads may run in parallel on different processors.

11 Example: Web Server
Objective: a web server must handle many requests.
Heavyweight process solution:
    serverLoop() {
        con = AcceptCon();
        ProcessFork(ServiceWebPage(), con);
    }
What are some disadvantages of this technique?
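For concreteness, below is a hedged C sketch of this fork-per-request loop; accept() on a listening socket stands in for AcceptCon(), and service_web_page() is a hypothetical handler that is declared but not implemented here:

    #include <sys/socket.h>
    #include <sys/wait.h>
    #include <unistd.h>

    void service_web_page(int con);              /* hypothetical handler */

    void server_loop(int listen_fd) {
        for (;;) {
            int con = accept(listen_fd, NULL, NULL);
            if (con < 0)
                continue;
            if (fork() == 0) {                   /* child: one process per request */
                service_web_page(con);
                _exit(0);
            }
            close(con);                          /* parent keeps only listen_fd */
            while (waitpid(-1, NULL, WNOHANG) > 0)
                ;                                /* reap any finished children  */
        }
    }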

12 Example: Threaded Web Server
Now use a single-process, multithreaded version:
    serverLoop() {
        connection = AcceptCon();
        ThreadFork(ServiceWebPage(), connection);
    }
Looks almost the same, but has many advantages:
Share file caches kept in memory, results of CGI scripts, etc.
Threads are much cheaper to create than processes
Q1. Would a many-to-one thread package make sense here?
When one request blocks on disk, all block…
Q2. Are there any other potential problems?
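A corresponding hedged sketch of the threaded version, again with a hypothetical service_web_page() handler standing in for ServiceWebPage():

    #include <pthread.h>
    #include <stdlib.h>
    #include <sys/socket.h>
    #include <unistd.h>

    void service_web_page(int con);               /* hypothetical handler */

    static void *service_thread(void *arg) {
        int con = *(int *)arg;
        free(arg);
        service_web_page(con);                    /* shares caches, CGI results, ... */
        close(con);
        return NULL;
    }

    void server_loop(int listen_fd) {
        for (;;) {
            int con = accept(listen_fd, NULL, NULL);
            if (con < 0)
                continue;
            int *argp = malloc(sizeof *argp);     /* heap copy outlives this iteration */
            *argp = con;
            pthread_t tid;
            pthread_create(&tid, NULL, service_thread, argp);
            pthread_detach(tid);                  /* no join needed */
        }
    }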

13 Example: Thread Pools
Problem with the previous version: unbounded threads. When the web site becomes too popular, throughput sinks.
Instead, allocate a bounded "pool" of threads, representing the maximum level of multiprogramming.
[Figure: a master thread enqueues connections onto a queue serviced by a pool of worker threads]
Two benefits of a thread pool:
Servicing a request with an existing thread is faster
Prevents exhausting the system resources
    master() {
        allocThreads(slave, queue);
        while(TRUE) {
            con = AcceptCon();
            Enqueue(queue, con);
            wakeUp(queue);
        }
    }

    slave(queue) {
        while(TRUE) {
            con = Dequeue(queue);
            if (con == null) sleepOn(queue);
            else ServiceWebPage(con);
        }
    }
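One way to realize sleepOn()/wakeUp() with standard primitives is a mutex plus a condition variable. The sketch below is an illustrative C version of the pool, with an assumed fixed-size queue and a hypothetical accept_con()/service_web_page() pair:

    #include <pthread.h>

    #define POOL_SIZE 8
    #define QUEUE_CAP 128

    int  accept_con(void);                           /* hypothetical */
    void service_web_page(int con);                  /* hypothetical */

    static int queue[QUEUE_CAP];
    static int q_head, q_tail, q_len;
    static pthread_mutex_t q_lock     = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  q_nonempty = PTHREAD_COND_INITIALIZER;

    static void *slave(void *unused) {
        (void)unused;
        for (;;) {
            pthread_mutex_lock(&q_lock);
            while (q_len == 0)                       /* sleepOn(queue) */
                pthread_cond_wait(&q_nonempty, &q_lock);
            int con = queue[q_head];                 /* Dequeue(queue) */
            q_head = (q_head + 1) % QUEUE_CAP;
            q_len--;
            pthread_mutex_unlock(&q_lock);
            service_web_page(con);
        }
        return NULL;
    }

    void master(void) {
        pthread_t workers[POOL_SIZE];
        for (int i = 0; i < POOL_SIZE; i++)          /* allocThreads(slave, queue) */
            pthread_create(&workers[i], NULL, slave, NULL);
        for (;;) {
            int con = accept_con();
            pthread_mutex_lock(&q_lock);
            if (q_len < QUEUE_CAP) {                 /* Enqueue(queue, con) */
                queue[q_tail] = con;
                q_tail = (q_tail + 1) % QUEUE_CAP;
                q_len++;
                pthread_cond_signal(&q_nonempty);    /* wakeUp(queue) */
            }                                        /* else: queue full, request dropped */
            pthread_mutex_unlock(&q_lock);
        }
    }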

14 Goals for Today
Multithreading Models
Benefits of Multithreaded Programming
Cooperating Threads

15 Fork() and exec() System Calls
We've discussed the fork() and exec() system calls for heavyweight processes. Now let's look at the multithreaded case.
Q1. If one thread in a process calls fork(), does the new process duplicate all threads or just the calling thread?
A1. UNIX has two versions of fork(), one for each behavior.
Q2. Which one should we choose?
Q3. What happens when one thread in a process calls exec()?
A3. The program specified in the parameter to exec() replaces the entire process, including all threads in the process.
Back to Q2: it depends on the application.
If exec() is called immediately after fork(), duplicating all threads is unnecessary, so duplicate only the calling thread.
If the new process does not call exec(), duplicate all threads.
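A minimal sketch of the fork()-then-exec() case discussed above; on POSIX systems the child of fork() contains only the calling thread, so duplicating just the caller is exactly what is wanted here:

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();                          /* child gets only the calling thread */
        if (pid == 0) {
            execlp("ls", "ls", "-l", (char *)NULL);  /* replaces the whole child process */
            perror("execlp");                        /* reached only if exec fails */
            _exit(1);
        }
        waitpid(pid, NULL, 0);                       /* parent (and its other threads) continue */
        return 0;
    }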

16 Thread Cancellation
Thread cancellation: terminating a thread before it has completed. A thread that is to be cancelled is called the "target thread."
Q. Why terminate a thread before it has completed? (Can you give an example?) E.g., loading a web page using one thread for the text and one thread per image: if the user cancels loading the page, the remaining download threads should be cancelled too.
Two scenarios of thread cancellation:
Asynchronous cancellation: one thread immediately terminates the target thread
Deferred cancellation: the target thread periodically checks whether it should terminate, giving it the opportunity to terminate itself gracefully
Comparing the two scenarios: if the target thread holds system resources or is updating data shared with other threads, asynchronous cancellation may cause problems.
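With POSIX Pthreads, deferred cancellation (the default mode) can be sketched as follows: another thread requests cancellation with pthread_cancel(), and the target checks for it at safe points with pthread_testcancel():

    #include <pthread.h>
    #include <unistd.h>

    static void *target(void *arg) {
        (void)arg;
        for (;;) {
            /* ... do one unit of work while holding no shared state ... */
            pthread_testcancel();       /* cancellation point: exit gracefully here */
            sleep(1);
        }
        return NULL;
    }

    int main(void) {
        pthread_t tid;
        pthread_create(&tid, NULL, target, NULL);
        sleep(3);                       /* let the target run for a while       */
        pthread_cancel(tid);            /* request deferred cancellation        */
        pthread_join(tid, NULL);        /* wait until the target has terminated */
        return 0;
    }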

17 Signal Handling
A signal notifies a process that a particular event has occurred.
Three steps of signal handling:
A signal is generated by an event
The signal is delivered to a process
Once delivered, the signal must be handled
A signal may be received synchronously or asynchronously.
Synchronous signals are delivered to the same process that performed the operation causing the signal. E.g., an illegal memory access.
Asynchronous signals are generated by an event external to the running process and are sent to another process. E.g., <CTRL>+<C>.

18 Signal Handling
A signal may be handled in one of two possible ways:
Default signal handler: every signal has a default handler run by the kernel
User-defined signal handler: the default action can be overridden by a user-defined handler
Handling signals in a heavyweight process is straightforward. Handling signals in a multithreaded process has four options:
Deliver the signal to the thread to which the signal applies
Deliver the signal to every thread in the process
Deliver the signal to certain threads in the process
Assign a specific thread to receive all signals for the process
Q: Which option should we choose?
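As a small example of overriding a default action, the sketch below installs a user-defined handler for SIGINT (<CTRL>+<C>) via sigaction():

    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static volatile sig_atomic_t got_sigint = 0;

    static void on_sigint(int signo) {
        (void)signo;
        got_sigint = 1;                 /* only async-signal-safe work in a handler */
    }

    int main(void) {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = on_sigint;      /* user-defined handler overrides the default */
        sigemptyset(&sa.sa_mask);
        sigaction(SIGINT, &sa, NULL);

        while (!got_sigint)
            pause();                    /* wait for a signal to be delivered */
        printf("caught SIGINT, exiting cleanly\n");
        return 0;
    }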

19 Signal Handling
Answer: the method for delivering a signal depends on the type of the signal.
Synchronous signals need to be delivered to the thread causing the signal.
There are different situations for asynchronous signals:
Some asynchronous signals should be sent to all threads, e.g., terminate this process (<CTRL>+<C>)
Some asynchronous signals may be sent only to certain threads
UNIX allows a thread to specify which signals it will accept and which it will block, so an asynchronous signal is delivered only to threads that are not blocking it.
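The "assign a specific thread to receive all signals" option can be sketched with pthread_sigmask() and sigwait(): block the signal in every thread, then let one dedicated thread wait for it (SIGUSR1 is just an illustrative choice):

    #include <pthread.h>
    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static void *signal_thread(void *arg) {
        sigset_t *set = arg;
        int signo;
        sigwait(set, &signo);                     /* only this thread handles it */
        printf("signal %d handled by dedicated thread\n", signo);
        return NULL;
    }

    int main(void) {
        sigset_t set;
        sigemptyset(&set);
        sigaddset(&set, SIGUSR1);
        pthread_sigmask(SIG_BLOCK, &set, NULL);   /* mask inherited by new threads */

        pthread_t tid;
        pthread_create(&tid, NULL, signal_thread, &set);

        kill(getpid(), SIGUSR1);                  /* process-directed asynchronous signal */
        pthread_join(tid, NULL);
        return 0;
    }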

20 Concurrent Threads
Q: What does it mean to run two threads "concurrently"?
The scheduler is free to run threads in any order and interleaving:
Run each thread to completion, or
Run each thread in big time slices, or
Run each thread in small time slices
[Figure: three threads A, B, C interleaved under multiprogramming]
Skipped sections: , , , 3.5, 3.6, 4.3.2, 4.4.5, and 4.5.1

21 Correctness for concurrent threads
If the scheduler can schedule threads in any way, programs must work under all circumstances!
Q1. Can you test for this?
Q2. How can you know if your program works?
Independent threads:
No state shared with other threads
Deterministic: input state determines results
Reproducible: can recreate starting conditions and I/O
Scheduling order doesn't matter
Cooperating threads:
Shared state between multiple threads
Non-deterministic and non-reproducible
Non-deterministic and non-reproducible means that bugs can be intermittent (Heisenbugs).
Heisenberg principle: the more you look at it, the less you see.
Bohr bugs are "reliable" bugs: given a particular input, they always manifest themselves. Heisenbugs are bugs that are difficult to reproduce reliably; they appear to depend on the phase of the moon (environmental factors like timing, particular memory allocations, etc.). A Heisenbug is very often the result of pointer errors, such as using memory that has not been allocated.
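A tiny sketch of such a Heisenbug: two cooperating threads increment a shared counter with no synchronization, so the final value depends on the scheduler's interleaving and usually differs from run to run (and from the expected total):

    #include <pthread.h>
    #include <stdio.h>

    #define N_ITERS 1000000

    static long counter = 0;                  /* shared state */

    static void *increment(void *arg) {
        (void)arg;
        for (int i = 0; i < N_ITERS; i++)
            counter++;                        /* read-modify-write: not atomic */
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, increment, NULL);
        pthread_create(&b, NULL, increment, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("counter = %ld (expected %d)\n", counter, 2 * N_ITERS);
        return 0;
    }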

