COT 5611 Operating Systems Design Principles Spring 2014


COT 5611 Operating Systems Design Principles Spring 2014
Dan C. Marinescu
Office: HEC 304
Office hours: M-Wd 3:30 – 5:30 PM

Lecture 22
Reading assignment: Chapter 9 from the on-line text
Last time:
- Conditions for thread coordination: safety, liveness, bounded-wait, fairness
- Critical sections – a solution to the critical-section problem
- Deadlocks
- Signals
- Semaphores
- Monitors
- Thread coordination with a bounded buffer: WAIT, NOTIFY, AWAIT

Today:
- Scheduling
- Virtual memory and multi-level memory management
- Atomic actions: all-or-nothing and before-or-after atomicity
- Applications of atomicity

Scheduling
- Basic concepts; scheduling objectives
- Scheduling policies: First-Come First-Served (FCFS), Shortest Job First (SJF), Round Robin (RR)
- Preemptive/non-preemptive scheduling
- Priority scheduling; priority inversion
- Schedulers
- CPU burst; estimation of the CPU burst
- Multi-level queues; multi-level queues with feedback
- Example: the UNIX scheduler

Scheduling – basic concepts
Scheduling → assigning jobs to machines. A schedule S → a plan for how to process N jobs using one or more machines. Scheduling in the general case is an NP-complete problem.
A job j, 1 <= j <= N, is characterized by:
- C_j(S) → completion time of job j under schedule S
- p_j → processing time
- r_j → release time; the time when the job is available for processing
- d_j → due time; the time when the job should be completed
- u_j = 0 if C_j(S) <= d_j and u_j = 1 otherwise
- L_j = C_j(S) – d_j → lateness
A schedule S is characterized by:
- the makespan C_max = max_j C_j(S)
- the average completion time (1/N) * sum_j C_j(S)
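A minimal sketch of this job model in Python (the class and function names are mine, not the text's):

    from dataclasses import dataclass

    @dataclass
    class Job:
        release: float   # r_j: when the job becomes available for processing
        work: float      # p_j: processing time
        due: float       # d_j: when the job should be completed

    def lateness(finish, due):
        # L_j = C_j(S) - d_j; positive lateness means the due time was missed
        return finish - due

    def makespan(finish_times):
        # C_max = max over all jobs of C_j(S)
        return max(finish_times)

    print(lateness(finish=10, due=8), makespan([3, 8, 10]))   # 2 10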

Scheduling objectives
Performance metrics:
- CPU utilization → fraction of time the CPU does useful work over total time
- Throughput → number of jobs finished per unit of time
- Turnaround time → time spent by a job in the system
- Response time → time to get the results
- Waiting time → time spent waiting to start processing
All of these are random variables → we are interested in averages!!
The objectives – system managers (M) and users (U):
- Maximize CPU utilization → M
- Maximize throughput → M
- Minimize turnaround time → U
- Minimize waiting time → U
- Minimize response time → U
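These metrics follow directly from each job's release, burst, and finish times; a small illustration (the numbers are the FCFS finish times of jobs A, B, C from the comparison tables later in this lecture):

    jobs = {"A": (0, 3, 3), "B": (1, 5, 8), "C": (3, 2, 10)}  # release, burst, finish
    for name, (release, burst, finish) in jobs.items():
        turnaround = finish - release     # time the job spent in the system
        waiting = turnaround - burst      # time spent waiting, not processing
        print(name, turnaround, waiting)  # A 3 0 / B 7 2 / C 7 5
    print(len(jobs) / 10)                 # throughput: 3 jobs per 10 time units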

Other concepts related to scheduling
- Burst time → time needed by a thread/process to use the processor/core
- Time slice/quantum → time a thread/process is allowed to use the processor/core
- Preemptive scheduling → a thread/process can be forced to release control of the processor
- Non-preemptive scheduling → a thread/process once in control of the processor/core is allowed to finish its allocated time quantum
- Scheduling policies → decide the order in which threads/processes get control of the processor/core: First-Come First-Served (FCFS), Shortest Job First (SJF), Round Robin (RR)
- Release time → the time when the job/task/process/thread arrives, or is released to the system, and is available for execution

First-Come First-Served (FCFS)
Thread  Burst time
P1      24
P2      3
P3      3
Processes arrive in the order P1, P2, P3. Gantt chart for the schedule:
| P1 (0–24) | P2 (24–27) | P3 (27–30) |
Waiting time for P1 = 0; P2 = 24; P3 = 27
Average waiting time: (0 + 24 + 27)/3 = 17
Convoy effect → short processes wait behind a long process
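The arithmetic is easy to reproduce; a sketch assuming, as in the example, that all three threads are released at time 0:

    def fcfs(bursts):
        # each thread waits for the total burst time of all earlier arrivals
        t, waits = 0, []
        for burst in bursts:
            waits.append(t)
            t += burst
        return waits

    waits = fcfs([24, 3, 3])                 # arrival order P1, P2, P3
    print(waits, sum(waits) / len(waits))    # [0, 24, 27] 17.0

Re-running fcfs([3, 3, 24]) gives waits of [0, 3, 6] and an average of 3, the case on the next slide.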

The effect of the release time on FCFS scheduling
Now the threads arrive in the order P2, P3, P1. Gantt chart:
| P2 (0–3) | P3 (3–6) | P1 (6–30) |
Waiting time for P1 = 6; P2 = 0; P3 = 3
Average waiting time: (6 + 0 + 3)/3 = 3. Much better!!

Shortest-Job-First (SJF)
Use the length of the next CPU burst to schedule the thread/process with the shortest time.
SJF is optimal → it gives the minimum average waiting time for a given set of threads/processes.
Two schemes:
- Non-preemptive → the thread/process cannot be preempted until it completes its burst
- Preemptive → if a new thread/process arrives with a burst length less than the remaining time of the currently executing one, preempt; known as Shortest-Remaining-Time-First (SRTF)

Example of non-preemptive SJF
Thread  Release time  Burst time
P1      0.0           7
P2      2.0           4
P3      4.0           1
P4      5.0           4
SJF (non-preemptive) schedule:
| P1 (0–7) | P3 (7–8) | P2 (8–12) | P4 (12–16) |
Average waiting time = (0 + 6 + 3 + 7)/4 = 4
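The non-preemptive discipline is a short loop: at every completion, pick the shortest burst among the released, not-yet-run threads. A sketch (ties broken by list order):

    def sjf(jobs):   # jobs: list of (name, release, burst)
        t, done, waits = 0, set(), {}
        while len(done) < len(jobs):
            ready = [j for j in jobs if j[1] <= t and j[0] not in done]
            if not ready:                 # idle until the next release
                t = min(j[1] for j in jobs if j[0] not in done)
                continue
            name, release, burst = min(ready, key=lambda j: j[2])
            waits[name] = t - release
            t += burst
            done.add(name)
        return waits

    print(sjf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]))
    # {'P1': 0, 'P3': 3, 'P2': 6, 'P4': 7} -> average 4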

Example of Shortest-Remaining-Time-First (SRTF) (preemptive SJF)
Thread  Release time  Burst time
P1      0.0           7
P2      2.0           4
P3      4.0           1
P4      5.0           4
SRTF schedule:
| P1 (0–2) | P2 (2–4) | P3 (4–5) | P2 (5–7) | P4 (7–11) | P1 (11–16) |
Average waiting time = (9 + 1 + 0 + 2)/4 = 3
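Preemption can be modeled by re-deciding after every time unit; a sketch assuming integer bursts, with remaining-time ties broken by the earlier release:

    def srtf(jobs):   # jobs: dict name -> (release, burst)
        remaining = {n: b for n, (r, b) in jobs.items()}
        t, finish = 0, {}
        while remaining:
            ready = [n for n in remaining if jobs[n][0] <= t]
            if not ready:
                t += 1
                continue
            n = min(ready, key=lambda m: (remaining[m], jobs[m][0]))
            remaining[n] -= 1             # run the chosen thread for one unit
            t += 1
            if remaining[n] == 0:
                finish[n] = t
                del remaining[n]
        # waiting time = finish - release - burst
        return {n: finish[n] - r - b for n, (r, b) in jobs.items()}

    print(srtf({"P1": (0, 7), "P2": (2, 4), "P3": (4, 1), "P4": (5, 4)}))
    # {'P1': 9, 'P2': 1, 'P3': 0, 'P4': 2} -> average 3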

Round Robin (RR)
Each thread/process gets a small unit of CPU time (a time quantum), usually 10–100 milliseconds. After this time has elapsed, the thread/process is preempted and added to the end of the ready queue.
If there are n threads/processes in the ready queue and the time quantum is q, then each thread/process gets 1/n of the processor time in chunks of at most q time units at once. No thread/process waits more than (n – 1)q time units.
Performance:
- q large → RR behaves like FIFO
- q small → q must still be large with respect to the context-switch time, otherwise the overhead is too high

Round Robin (RR) with time slice q = 20
Thread  Burst time
P1      53
P2      17
P3      68
P4      24
Schedule:
| P1 (0–20) | P2 (20–37) | P3 (37–57) | P4 (57–77) | P1 (77–97) | P3 (97–117) | P4 (117–121) | P1 (121–134) | P3 (134–154) | P3 (154–162) |
Typically RR gives a higher average turnaround than SJF, but better response.
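With all four threads released at time 0, a deque reproduces the schedule above; a sketch:

    from collections import deque

    def round_robin(bursts, q):
        queue = deque(bursts.items())          # (name, remaining work)
        t, slices = 0, []
        while queue:
            name, remaining = queue.popleft()
            run = min(q, remaining)            # run one quantum (or less)
            slices.append((name, t, t + run))
            t += run
            if remaining > run:                # preempted: back of the queue
                queue.append((name, remaining - run))
        return slices

    for s in round_robin({"P1": 53, "P2": 17, "P3": 68, "P4": 24}, q=20):
        print(s)    # ('P1', 0, 20), ('P2', 20, 37), ..., ('P3', 154, 162)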

Time slice (quantum) and context-switch time
Assume a rather long processing time, 10 time units. How many context switches occur for different time quanta? The job is preempted at the end of every full quantum except the last, so a quantum of q costs ceil(10/q) – 1 context switches.
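The original slide's figure is not reproduced here; a one-liner over some illustrative quanta makes the same point:

    import math

    for q in (12, 6, 1):
        print(q, math.ceil(10 / q) - 1)   # 12 -> 0 switches, 6 -> 1, 1 -> 9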

A comparison of scheduling strategies: FCFS, SJF, and RR

FCFS
Job  Release time  Work  Start time  Finish time  Wait time till start  Time in system
A    0             3     0           3            0                     3
B    1             5     3           3 + 5 = 8    3 – 1 = 2             8 – 1 = 7
C    3             2     8           8 + 2 = 10   8 – 3 = 5             10 – 3 = 7

SJF
Job  Release time  Work  Start time  Finish time  Wait time till start  Time in system
A    0             3     0           3            0                     3
B    1             5     5           5 + 5 = 10   4                     10 – 1 = 9
C    3             2     3           3 + 2 = 5    0                     5 – 3 = 2

RR
Job  Release time  Work  Start time  Finish time  Wait time till start  Time in system
A    0             3     0           6            0                     6 – 0 = 6
B    1             5     1           10           1 – 1 = 0             10 – 1 = 9
C    3             2     5           8            5 – 3 = 2             8 – 3 = 5

Scheduling policy  Average waiting time till the job started  Average time in system
FCFS               7/3                                         17/3
SJF                4/3                                         14/3
RR                 3/3                                         20/3

Priority scheduling
Each thread/process has a priority, and the one with the highest priority (smallest integer → highest priority) is scheduled next. Can be preemptive or non-preemptive.
SJF is priority scheduling where the priority is the predicted length of the next CPU burst.
Problem → starvation: low-priority threads/processes may never execute.
Solution to starvation → aging: as time progresses, increase the priority of the waiting thread/process.
Priorities may be computed dynamically.
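A sketch of aging (the 0.1 rate and the thread records are illustrative choices, not the text's):

    def pick_next(ready, now, aging_rate=0.1):
        # effective priority improves with waiting time; the smallest value runs
        return min(ready, key=lambda t: t["priority"] - aging_rate * (now - t["arrival"]))

    low = {"name": "low", "priority": 9, "arrival": 0}
    # a freshly arrived high-priority thread competes with the old one
    print(pick_next([low, {"name": "hi", "priority": 1, "arrival": 10}], now=10)["name"])    # hi
    print(pick_next([low, {"name": "hi", "priority": 1, "arrival": 100}], now=100)["name"])  # low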

Priority inversion → a lower-priority thread/process prevents a higher-priority one from running:
- T3 has the highest priority, T1 has the lowest priority; T1 and T3 share a lock.
- T1 acquires the lock, then it is suspended when T3 starts.
- Eventually T3 requests the lock and is suspended waiting for T1 to release it.
- T2, with priority higher than T1 (but lower than T3), runs; now neither T3 nor T1 can run: T1 due to its low priority, T3 because it needs the lock held by T1.
Solution (priority inheritance): allow a low-priority thread holding a lock to run with the higher priority of the thread that requests the lock.
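A sketch of priority inheritance (illustrative classes, not a real OS API): while a thread holds a lock wanted by a higher-priority thread, it runs at the waiter's priority, so a medium-priority T2 can no longer starve it.

    class Thread:
        def __init__(self, name, priority):
            self.name, self.base = name, priority
            self.effective = priority          # may be boosted while holding a lock

    class Lock:
        def __init__(self):
            self.holder = None
        def acquire(self, thread):
            if self.holder is None:
                self.holder = thread
                return True
            # smaller number = higher priority: boost the holder if needed
            self.holder.effective = min(self.holder.effective, thread.effective)
            return False                       # caller must wait
        def release(self):
            self.holder.effective = self.holder.base   # drop the boost
            self.holder = None

    t1, t3, lock = Thread("T1", 9), Thread("T3", 1), Lock()
    lock.acquire(t1)        # T1 holds the lock
    lock.acquire(t3)        # T3 blocks; T1 is boosted to priority 1
    print(t1.effective)     # 1: T2 (priority between 1 and 9) cannot preempt T1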

Virtual memory and multi-level memory management
Recall that there is tension between pipelining and virtual-memory management: the page targeted by a load or store instruction may not be in real memory, and we experience a page fault.
- Multi-level memory management → brings a page from the secondary device (e.g., disk) into a frame in main memory.
- Virtual memory management → performs dynamic address translation; maps virtual to physical addresses.
Modular design: separate the multi-level memory management from the virtual memory management.

Name resolution in multi-level memories
We consider pairs of layers: the upper level of the pair is the primary device, the lower level the secondary device.
- The top level is managed by the application, which generates LOAD and STORE instructions moving data between CPU registers and named memory locations.
- The processor issues READs/WRITEs to named memory locations. The name first goes to the primary memory device located on the same chip as the processor, which searches the name space of the on-chip cache (the L1 cache); here the L1 cache is the primary device, with the L2 cache as secondary device.
- If the name is not found in the L1 cache name space, the Multi-Level Memory Manager (MLMM) looks in the L2 cache (the off-chip cache), which becomes the primary device, with the main memory as secondary device.
- If the name is not found in the L2 cache name space, the MLMM looks in the main memory name space; now the main memory is the primary device.
- If the name is not found in the main memory name space, the Virtual Memory Manager (VMM) is invoked.
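The cascade above is, in essence, a chain of lookups; a sketch with dicts standing in for the cache and memory hardware:

    def resolve(name, levels, vmm):
        # levels: L1 cache, L2 cache, main memory, searched in order
        for level in levels:
            if name in level:
                return level[name]
        return vmm(name)          # missed everywhere: page fault, invoke the VMM

    l1, l2, main_memory = {}, {}, {"x": 42}
    print(resolve("x", [l1, l2, main_memory], vmm=lambda n: None))   # 42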

The modular design
- The VMM attempts to translate the virtual memory address to a physical memory address.
- If the page is not in main memory, the VMM generates a page-fault exception.
- The exception handler uses a SEND to send the page number to an MLMM port.
- The SEND invokes ADVANCE, which wakes up a thread of the MLMM.
- The MLMM invokes AWAIT on behalf of the thread interrupted due to the page fault.
- The AWAIT releases the processor to the SCHEDULER thread.

Atomicity
Atomicity → the ability to carry out an action involving multiple steps as an indivisible action, hiding the structure of the action from an external observer.
- All-or-nothing atomicity (AONA) → to an external observer (e.g., the invoker), an atomic action appears as if it either completed or had never taken place.
- Before-or-after atomicity (BOAA) → allows several actions operating on the same resources (e.g., shared data) to act without interfering with one another; to an external observer the atomic actions appear to complete either before or after each other.
Atomicity:
- simplifies the description of the possible states of the system, as it hides the structure of a possibly complex atomic action;
- allows us to treat systematically, using the same strategy, two critical problems in system design and implementation: (a) recovery from failures and (b) coordination of concurrent activities.

Atomicity in computer systems
- Hardware: interrupt and exception handling (AONA) + register renaming (BOAA)
- OS: SVCs (AONA) + non-sharable device (e.g., printer) queues (BOAA)
- Applications: layered design (AONA) + process coordination (BOAA)
- Databases: updating records (AONA) + sharing records (BOAA)
Example: exception handling when one of the following events occurs:
- hardware faults
- external events
- program exceptions
- fair-share scheduling
- preemptive scheduling when priorities are involved
- process termination to avoid deadlock
- user-initiated process termination
Register renaming avoids the unnecessary serialization of program operations imposed by the reuse of registers by those operations. High-performance CPUs have more physical registers than can be named directly in the instruction set, so they rename registers in hardware to achieve additional parallelism. In the example below, the two three-instruction sequences are logically independent, but the reuse of r1 creates a false dependency between them; mapping the second sequence's r1 to a different physical register lets the two sequences execute in parallel:
r1 → m(1000)
r1 → r1 + 5
m(1000) → r1
r1 → m(2000)
r1 → r1 + 8
m(2000) → r1

Atomicity in databases and application software
Recovery from system failures and coordination of multiple activities are not possible if actions are not atomic.
Database example: a procedure to transfer an amount from a debit account (A) to a credit account (B):
Procedure TRANSFER (debit_account, credit_account, amount)
  GET (temp, debit_account)
  temp → temp – amount
  PUT (temp, debit_account)
  GET (temp, credit_account)
  temp → temp + amount
  PUT (temp, credit_account)
What if: (a) the system fails after the first PUT; (b) multiple transactions on the same account take place concurrently?
Layered application software example: a calendar program with three layers of interpreters: the calendar program, the JVM, and the physical layer.
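A sketch of failure case (a): if the system stops between the two PUTs, the amount vanishes. The intention-log recovery below is one standard way to obtain all-or-nothing behavior, not the text's specific mechanism:

    accounts = {"A": 100, "B": 50}
    log = []                                    # intention log, written first

    def transfer(debit, credit, amount, crash=False):
        log.append((debit, credit, amount, dict(accounts)))   # snapshot first
        accounts[debit] -= amount               # first PUT
        if crash:
            raise RuntimeError("system failure between the two PUTs")
        accounts[credit] += amount              # second PUT
        log.append("done")

    try:
        transfer("A", "B", 30, crash=True)
    except RuntimeError:
        if log[-1] != "done":                   # incomplete transfer: undo it
            accounts.update(log[-1][3])

    print(accounts)   # {'A': 100, 'B': 50}: the all-or-nothing outcome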

All-or-nothing atomicity
AONA is required to:
1. Handle interrupts (e.g., a page fault in the middle of a pipelined instruction). We need to retrofit AONA at the machine-language interface: if every machine instruction is all-or-nothing, then the OS can save, as the next instruction, the one where the page fault occurred. There are additional complications with a user-supplied exception handler.
2. Handle supervisor calls (SVCs); an SVC requires a kernel action to change the PC, the mode bit (from user to kernel), and the code to carry out the required function. The SVC should appear as an extension of the hardware.
Design solutions for a typewriter driver activated by a user-issued SVC, READ (see the sketch below):
- Implement the "nothing" option → blocking read; when no input is present, reissue the READ as the next instruction. This solution allows users to supply their own exception handlers.
- Implement the "all" option → non-blocking read; if no input is available, return control to the user program with a zero-length input.
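The two options, sketched with a list standing in for the typewriter driver's input buffer (not the text's kernel code):

    def read_all(buf):
        # "all" option: complete immediately, possibly with zero-length input
        return buf.pop(0) if buf else b""

    def read_nothing(buf):
        # "nothing" option: if there is no input, behave as if the SVC never
        # started; the kernel reissues READ (modeled here as a retry loop)
        while not buf:
            pass    # a real kernel would suspend the thread instead of spinning
        return buf.pop(0)

    buf = [b"line 1"]
    print(read_all(buf))   # b'line 1'
    print(read_all(buf))   # b'': the caller must check for empty input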

Before-or-after atomicity
Two approaches to concurrent action coordination:
- Sequence coordination, e.g., "action A should occur before action B" → strict ordering.
- BOAA: the effect of A and B is the same whether A occurs before B or B before A → non-strict ordering. BOAA is more general than sequence coordination.
Example: two transactions operate on account A, and each performs a GET followed by a PUT. There are six possible sequences of actions: (G1, P1, G2, P2), (G2, P2, G1, P1), (G1, G2, P1, P2), (G1, G2, P2, P1), (G2, G1, P1, P2), (G2, G1, P2, P1). Only the first two lead to correct results (see the check below). Solution: the sequence GETi → PUTi should be atomic.
Correctness condition for coordination → every possible result is guaranteed to be the same as if the actions were applied one after another in some order. Before-or-after atomicity guarantees the correctness of coordination → indeed, it serializes the actions.
Stronger correctness requirements are sometimes necessary:
- External time consistency → e.g., in banking, transactions should be processed in the order they are issued.
- Sequential consistency → e.g., instruction reordering should not affect the result.
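The claim about the six orderings can be checked by brute force; a sketch with two deposit transactions on a balance of 100:

    from itertools import permutations

    def run(order, d1=10, d2=20):
        balance, temp = 100, {}
        for step in order:
            t = step[1]                        # transaction id: '1' or '2'
            if step[0] == "G":
                temp[t] = balance              # GET: read the shared balance
            else:
                balance = temp[t] + (d1 if t == "1" else d2)   # PUT
        return balance

    for order in permutations(["G1", "P1", "G2", "P2"]):
        if order.index("G1") < order.index("P1") and order.index("G2") < order.index("P2"):
            result = run(order)                # the correct total is 130
            print(order, result, "OK" if result == 130 else "lost update")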

Common strategy and side effects of atomicity
The common strategy for BOAA and AONA → hide the internal structure of a complex action; prevent an external observer from discovering the structure and the implementation of the atomic action.
Atomic actions can have "good" (benevolent) side effects:
- An audit log records the cause of a failure and the recovery steps for later analysis.
- Performance optimization: when adding a record to a file, the data-management system may restructure/reorganize the file to improve the access time.