
Presentation on theme: "University of Massachusetts, Amherst, Department of Computer Science. Emery Berger. Operating Systems CMPSCI 377, Lecture 5: Threads & Scheduling" — Presentation transcript:

1 University of Massachusetts, Amherst, Department of Computer Science. Emery Berger. Operating Systems CMPSCI 377. Lecture 5: Threads & Scheduling

2 Last Time: Processes
- Process = unit of execution
- Process control blocks: process state, scheduling info, etc.
- Process states: New, Ready, Waiting, Running, Terminated
- One process runs at a time (on a uniprocessor); processes change via a context switch
- Multiple processes communicate by message passing or shared memory

3 This Time: Threads & Scheduling
- What are threads? How do they differ from processes?
- Where does the OS implement threads? User level vs. kernel
- How does the OS schedule threads?

4 Processes versus Threads
- Process = control + address space + resources; created with fork()
- Thread = control only (PC, stack, registers); created with pthread_create()
- One process may contain many threads
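As a rough illustration (an assumed example, not code from the slides), the sketch below contrasts the two calls: fork() duplicates the entire process, so the child gets its own private copy of memory, while pthread_create() starts a new thread that shares the parent's address space. The counter variable exists only for this sketch.

    #include <pthread.h>
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static int counter = 0;

    static void * thread_body (void * arg) {
        counter++;                      /* increments the one shared counter */
        return NULL;
    }

    int main (void) {
        pid_t pid = fork ();            /* new process: separate copy of counter */
        if (pid == 0) {
            counter++;                  /* only changes the child's private copy */
            return 0;
        }
        wait (NULL);

        pthread_t t;
        pthread_create (&t, NULL, thread_body, NULL);   /* new thread: shared memory */
        pthread_join (t, NULL);

        printf ("counter = %d\n", counter);   /* prints 1: the thread's increment is
                                                 visible, the forked child's is not */
        return 0;
    }

The forked child's increment is lost to the parent because the child modified its own copy of the address space; the thread's increment is visible because threads within a process share memory.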

5 Threads Diagram
- The address space in a process is shared among its threads
- Thread communication is cheaper and faster than IPC between processes

6 Threads Example, C/C++
POSIX threads standard (pthreads):

    #include <pthread.h>
    #include <stdio.h>

    /* stand-in for the slide's expensiveComputation() */
    static int expensiveComputation (int i) { return i; }

    void * run (void * d) {
      long q = (long) d;
      long v = 0;
      for (long i = 0; i < q; i++) {
        v = v + expensiveComputation (i);
      }
      return (void *) v;               /* result handed back through pthread_join */
    }

    int main (void) {
      pthread_t t1, t2;
      void *r1, *r2;
      pthread_create (&t1, NULL, run, (void *) 100L);   /* NULL = default attributes */
      pthread_create (&t2, NULL, run, (void *) 100L);
      pthread_join (t1, &r1);          /* wait for t1; the slide's pthread_wait is pthread_join */
      pthread_join (t2, &r2);
      printf ("r1 = %ld, r2 = %ld\n", (long) r1, (long) r2);
      return 0;
    }
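A usage note: on Linux this example compiles with gcc example.c -pthread, where the -pthread flag links the POSIX threads library. Passing the iteration count and the result through the void * argument is a slide-level shortcut; it assumes an integer value fits in a pointer on the target platform.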

7 Threads Example, Java

    import java.lang.*;

    class Worker extends Thread implements Runnable {   // implements Runnable is redundant: Thread already does
        public Worker (int q) { this.q = q; this.v = 0; }
        public void run () {
            for (int i = 0; i < q; i++) {
                v = v + i;
            }
        }
        public int v;
        private int q;
    }

    public class Example {
        public static void main (String args[]) {
            Worker t1 = new Worker (100);
            Worker t2 = new Worker (100);
            try {
                t1.start ();
                t2.start ();
                t1.join ();
                t2.join ();
            } catch (InterruptedException e) {}
            System.out.println ("r1 = " + t1.v + ", r2 = " + t2.v);
        }
    }
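A usage note: saved as Example.java, this compiles and runs with javac Example.java followed by java Example. Reading t1.v after t1.join() returns is safe because Thread.join() establishes a happens-before edge, so the main thread sees the worker's final value.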

8 Classifying Threaded Systems
- Axes: one or many address spaces, and one or many threads per address space
- One address space, one thread: MS-DOS

9 Classifying Threaded Systems (continued)
- One address space, many threads: embedded systems

10 Classifying Threaded Systems (continued)
- Many address spaces, one thread per address space: UNIX, Ultrix, MacOS (< X), Win95

11 Classifying Threaded Systems (continued)
- Many address spaces, many threads per address space: Mach, Linux, Solaris, WinNT
- The full classification:

                           One thread per address space     Many threads per address space
    One address space      MS-DOS                           Embedded systems
    Many address spaces    UNIX, Ultrix, MacOS (< X),       Mach, Linux, Solaris, WinNT
                           Win95

12 This Time: Threads
- What are threads? How do they differ from processes?
- Where does the OS implement threads? User level vs. kernel
- How does the OS schedule threads?

13 Kernel Threads
- Kernel threads are scheduled by the OS; a.k.a. lightweight processes (LWPs)
- Switching threads requires a context switch (PC, registers, stack pointers)
- BUT there is no memory-management change, so no TLB "shootdown": switching is faster than for processes
- Hide latency (the process doesn't block on I/O)
- Can be scheduled on multiple processors

14 User-Level Threads
- No OS involvement with user-level threads; the OS only knows about the process containing them
- A thread library manages the threads: creation, synchronization, scheduling
- Example: Java green threads
- Cannot be scheduled on multiple processors

15 User-Level Threads: Advantages
- No kernel context switch when switching threads
- Flexible: allows a problem-specific thread scheduling policy (computations first, service I/O second, etc.); each process can use a different scheduling algorithm
- No system calls for creation, context switching, or synchronization, so user-level threads can be much faster than kernel threads
- But… (see the disadvantages next)

16 User-Level Threads: Disadvantages
- Requires cooperative threads: each thread must yield when done working (there are no quanta), so an uncooperative thread can take over (see the sketch below)
- The OS knows about processes, not threads: if a thread blocks on I/O, the whole process stops
- More threads ≠ more CPU time: the process gets the same time as always
- Can't take advantage of multiple processors
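To make "requires cooperative threads" concrete, here is a minimal sketch (an assumption for illustration, not the lecture's code) of two user-level threads built on the POSIX ucontext API. The kernel sees only one thread; execution moves between task_a and task_b only when one of them voluntarily yields by calling swapcontext().

    #include <stdio.h>
    #include <ucontext.h>

    #define STACK_SIZE (64 * 1024)

    static ucontext_t main_ctx, ctx_a, ctx_b;
    static char stack_a[STACK_SIZE], stack_b[STACK_SIZE];

    static void task_a (void) {
        for (int i = 0; i < 3; i++) {
            printf ("A runs step %d\n", i);
            swapcontext (&ctx_a, &ctx_b);      /* voluntary yield to B */
        }
    }

    static void task_b (void) {
        for (int i = 0; i < 3; i++) {
            printf ("B runs step %d\n", i);
            swapcontext (&ctx_b, &ctx_a);      /* voluntary yield to A */
        }
    }

    int main (void) {
        getcontext (&ctx_a);
        ctx_a.uc_stack.ss_sp   = stack_a;
        ctx_a.uc_stack.ss_size = sizeof stack_a;
        ctx_a.uc_link          = &main_ctx;    /* resume main when task_a returns */
        makecontext (&ctx_a, task_a, 0);

        getcontext (&ctx_b);
        ctx_b.uc_stack.ss_sp   = stack_b;
        ctx_b.uc_stack.ss_size = sizeof stack_b;
        ctx_b.uc_link          = &main_ctx;
        makecontext (&ctx_b, task_b, 0);

        swapcontext (&main_ctx, &ctx_a);       /* start A; the kernel still sees one thread */
        return 0;
    }

The program alternates A and B for three steps each. If task_a never called swapcontext(), task_b would never run: that is the "uncooperative thread can take over" problem, and it is also why a thread blocking in a system call stalls the whole process.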

17 Solaris Threads
- Hybrid model: user-level threads are mapped onto LWPs (kernel threads)

18 Threads Roundup
- User-level threads: cheap and simple, but not scheduled by the OS, block the whole process on I/O, use a single CPU, and require cooperative threads
- Kernel-level threads: involve the OS and time-slicing (quanta); context switches and synchronization are more expensive, but they don't block the process on I/O and can use multiple CPUs
- Hybrid: "best of both worlds," but requires load balancing

19 Load Balancing
- Spread user-level threads across LWPs so that each processor does the same amount of work
- Solaris scheduler: only adjusts load when I/O blocks a thread
- (Diagram: threads, thread scheduler, kernel thread scheduler, processes, processors)

20 Load Balancing
- Two classic approaches: work sharing and work stealing
- Work sharing: give excess work away
- Can waste time

21 Load Balancing
- Two classic approaches: work sharing and work stealing
- Work stealing: get (steal) threads from someone else; an optimal approach
- Used in the Sun and IBM Java runtimes, but what about the OS?
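As a rough sketch of the work-stealing idea (an assumption for illustration; real runtimes typically use carefully engineered, mostly lock-free deques), each worker below owns a queue of integer "tasks" and, when its own queue is empty, steals from another worker's queue. All tasks deliberately start on worker 0, so the other workers must steal to do any work.

    #include <pthread.h>
    #include <stdio.h>

    #define NWORKERS 4
    #define QSIZE    64

    typedef struct {
        int tasks[QSIZE];
        int count;
        pthread_mutex_t lock;
    } queue_t;

    static queue_t queues[NWORKERS];
    static int results[NWORKERS];

    /* Pop a task from queue q; return -1 if it is empty. */
    static int try_pop (queue_t * q) {
        int task = -1;
        pthread_mutex_lock (&q->lock);
        if (q->count > 0) {
            task = q->tasks[--q->count];
        }
        pthread_mutex_unlock (&q->lock);
        return task;
    }

    static void * worker (void * arg) {
        long id = (long) arg;
        int done = 0;
        while (1) {
            int task = try_pop (&queues[id]);               /* local work first */
            for (int victim = 0; victim < NWORKERS && task < 0; victim++) {
                if (victim != id) {
                    task = try_pop (&queues[victim]);       /* steal from a victim */
                }
            }
            if (task < 0) {
                break;                                      /* nothing left anywhere */
            }
            done += task;                                   /* "run" the task */
        }
        results[id] = done;
        return NULL;
    }

    int main (void) {
        pthread_t tid[NWORKERS];
        for (int i = 0; i < NWORKERS; i++) {
            queues[i].count = 0;
            pthread_mutex_init (&queues[i].lock, NULL);
        }
        for (int i = 0; i < QSIZE; i++) {
            queues[0].tasks[i] = 1;                         /* unbalanced start: everything on worker 0 */
        }
        queues[0].count = QSIZE;
        for (long i = 0; i < NWORKERS; i++) {
            pthread_create (&tid[i], NULL, worker, (void *) i);
        }
        for (int i = 0; i < NWORKERS; i++) {
            pthread_join (tid[i], NULL);
        }
        for (int i = 0; i < NWORKERS; i++) {
            printf ("worker %d ran %d tasks\n", i, results[i]);
        }
        return 0;
    }

Because stealing happens only when a worker runs dry, balancing work is moved only when a processor would otherwise idle, which is the usual argument for stealing over sharing.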

22 This Time: Threads
- What are threads? How do they differ from processes?
- Where does the OS implement threads? User level vs. kernel
- How does the OS schedule threads?

23 Scheduling Overview
- Metrics
- Long-term vs. short-term scheduling
- Interactive systems vs. servers
- Example algorithm: FCFS

24 Scheduling
- Multiprogramming: run multiple processes
- Improves system utilization and throughput
- Overlaps I/O and CPU activities

25 Scheduling Processes
- Long-term scheduling: how does the OS determine the degree of multiprogramming, i.e., the number of jobs executing at once?
- Short-term scheduling: how does the OS select a program from the ready queue to execute?
  - Policy goals
  - Policy options
  - Implementation considerations

26 Short-Term Scheduling
- The kernel runs the scheduler at least: when a process switches from running to waiting, on interrupts, and when processes are created or terminated
- Non-preemptive system: the scheduler must wait for one of these events
- Preemptive system: the scheduler may interrupt a running process

27 Comparing Scheduling Algorithms
Important metrics:
- Utilization = % of time that the CPU is busy
- Throughput = processes completed per unit time
- Response time = time between becoming ready and the next I/O
- Waiting time = time a process spends on the ready queue

28 Scheduling Issues
- Ideally: maximize CPU utilization and throughput, and minimize waiting time and response time
- These goals conflict: no algorithm can optimize all criteria simultaneously, so the choice depends on the system type
  - Interactive systems
  - Servers

29 Scheduling: Interactive Systems
Goals for interactive systems:
- Minimize average response time (time between becoming ready and the next I/O): provide output to the user as quickly as possible, and process input as soon as it is received
- Minimize the variance of response time: predictability is often important, so a somewhat higher average with low variance is better than a low average with high variance

30 Scheduling: Servers
Goals differ from those of interactive systems:
- Maximize throughput (jobs done per unit time): minimize OS overhead and context switching, and make efficient use of the CPU and I/O devices
- Minimize waiting time: give each process the same time on the CPU, even though this may increase average response time

31 Scheduling Algorithms Roundup
- FCFS: first-come, first-served
- Round-robin: use a quantum and preemption to alternate between jobs
- SJF: shortest job first
- Multilevel feedback queues: round-robin within each priority queue
- Lottery scheduling: jobs get tickets and the scheduler randomly picks a winner (a small sketch follows this list)
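To make the lottery idea concrete, here is a tiny sketch (an assumed illustration, not the lecture's code): each job holds some tickets, the scheduler draws a random ticket each quantum, and the job owning that ticket runs, so each job's long-run share of the CPU is proportional to its tickets.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main (void) {
        const char * job[] = { "A", "B", "C" };
        int tickets[]      = { 50, 30, 20 };      /* assumed ticket counts */
        int total          = 100;
        int runs[3]        = { 0, 0, 0 };

        srand ((unsigned) time (NULL));
        for (int quantum = 0; quantum < 10000; quantum++) {
            int winner = rand () % total;         /* draw a ticket in 0..99 */
            int j = 0;
            while (winner >= tickets[j]) {        /* find the job that owns that ticket */
                winner -= tickets[j];
                j++;
            }
            runs[j]++;                            /* that job gets this quantum */
        }
        for (int j = 0; j < 3; j++) {
            printf ("job %s: %d of 10000 quanta (holds %d%% of the tickets)\n",
                    job[j], runs[j], tickets[j]);
        }
        return 0;
    }

With these assumed ticket counts, job A receives roughly half the quanta, B about 30%, and C about 20%, without the scheduler tracking any per-job history.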

32 Scheduling Policies
- FCFS (a.k.a. FIFO = first-in, first-out): the scheduler executes jobs to completion in arrival order
- In early versions, jobs did not relinquish the CPU even for I/O
- Assume here that the scheduler does run when a process blocks on I/O
- Non-preemptive

33 FCFS Scheduling: Example
- Processes arrive 1 time unit apart: what is the average wait time in these three cases?
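Since the slide's figure is not part of the transcript, the burst times below (4, 2, 1) and the two orderings are assumed purely for illustration; the point is the one the slide is after: with FCFS and arrivals one time unit apart, average waiting time swings widely depending on which job happens to arrive first.

    #include <stdio.h>

    /* Average waiting time under FCFS, jobs arriving 1 time unit apart. */
    static double fcfs_avg_wait (const int burst[], int n) {
        double total_wait = 0.0;
        int finish = 0;
        for (int i = 0; i < n; i++) {
            int arrival = i;                              /* job i arrives at time i */
            int start   = finish > arrival ? finish : arrival;
            total_wait += start - arrival;                /* time spent in the ready queue */
            finish      = start + burst[i];
        }
        return total_wait / n;
    }

    int main (void) {
        int long_first[]  = { 4, 2, 1 };                  /* longest job arrives first */
        int short_first[] = { 1, 2, 4 };                  /* shortest job arrives first */
        printf ("long job first : avg wait = %.2f\n", fcfs_avg_wait (long_first, 3));
        printf ("short job first: avg wait = %.2f\n", fcfs_avg_wait (short_first, 3));
        return 0;
    }

With these numbers, the long-job-first order gives an average wait of about 2.33 time units, while the short-job-first order gives about 0.33: exactly the "short jobs may wait behind long jobs" problem noted on the next slide.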

34 FCFS: Advantages & Disadvantages
+ Advantage: simple
- Disadvantages:
  - Average wait time is highly variable: short jobs may wait behind long jobs
  - May lead to poor overlap of I/O and CPU: CPU-bound processes force I/O-bound processes to wait for the CPU, leaving I/O devices idle

35 Summary
- Thread = single execution stream within a process; implemented at user level, in the kernel, or as a hybrid
- No perfect scheduling algorithm: the selection is a policy decision, based on the processes being run and the goals (minimize response time, maximize throughput, etc.)
- Next time: much more on scheduling

