
COT 4600 Operating Systems Spring 2011 Dan C. Marinescu Office: HEC 304 Office hours: Tu-Th 5:00-6:00 PM

Lecture 21 - Thursday, April 7, 2011
Last time:
- Scheduling
Today:
- The scheduler
- Multi-level memories
Next time:
- Memory characterization
- Multilevel memory management using virtual memory
- Adding multi-level memory management to virtual memory
- Page replacement algorithms

The scheduler
The system component that manages the allocation of the processor/core. It runs inside the processor thread and implements the scheduling policies. Other functions:
- determines the CPU burst
- manages multiple queues of threads

CPU burst
CPU burst: the time required by the thread/process to execute.

Estimating the length of the next CPU burst
Done using the lengths of previous CPU bursts, with exponential averaging.

Exponential averaging
The estimate is $\tau_{n+1} = \alpha t_n + (1 - \alpha)\tau_n$, where $t_n$ is the measured length of the $n$-th CPU burst and $\tau_n$ the predicted one.
- $\alpha = 0$: $\tau_{n+1} = \tau_n$; recent history does not count.
- $\alpha = 1$: $\tau_{n+1} = t_n$; only the actual last CPU burst counts.
If we expand the formula, we get:
$\tau_{n+1} = \alpha t_n + (1-\alpha)\alpha t_{n-1} + \dots + (1-\alpha)^j \alpha t_{n-j} + \dots + (1-\alpha)^{n+1}\tau_0$
Since both $\alpha$ and $(1-\alpha)$ are less than or equal to 1, each successive term has less weight than its predecessor.
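
A minimal sketch of this estimator in Python (the burst lengths, alpha, and the initial guess tau0 below are illustrative values, not from the lecture):

    # Exponential averaging of CPU burst lengths (illustrative values).
    def predict_next_burst(bursts, alpha=0.5, tau0=10.0):
        """Return the predicted length of the next CPU burst."""
        tau = tau0                       # initial estimate tau_0
        for t in bursts:                 # tau_{n+1} = alpha*t_n + (1-alpha)*tau_n
            tau = alpha * t + (1 - alpha) * tau
        return tau

    print(predict_next_burst([6.0, 4.0, 6.0, 4.0, 13.0, 13.0, 13.0]))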

Predicting the length of the next CPU burst (figure)

Multilevel queue
The ready queue is partitioned into separate queues, each with its own scheduling algorithm:
- foreground (interactive): RR
- background (batch): FCFS
Scheduling between the queues:
- Fixed priority scheduling (i.e., serve all from foreground, then from background). Possibility of starvation.
- Time slice: each queue gets a certain amount of CPU time which it can schedule amongst its processes; e.g., 80% to foreground in RR and 20% to background in FCFS.

Multilevel queue scheduling (figure)

Multilevel feedback queue
A process can move between the various queues; aging can be implemented this way. A multilevel-feedback-queue scheduler is characterized by:
- the number of queues
- the scheduling algorithm for each queue
- the strategy for when to upgrade/demote a process
- the strategy for deciding which queue a process enters when it needs service

Example of a multilevel feedback queue
Three queues:
- Q0: RR with time quantum 8 milliseconds
- Q1: RR with time quantum 16 milliseconds
- Q2: FCFS
Scheduling:
- A new job enters queue Q0, which is served FCFS. When it gains the CPU, the job receives 8 milliseconds. If it does not finish in 8 milliseconds, it is moved to queue Q1.
- At Q1 the job is again served FCFS and receives 16 additional milliseconds. If it still does not complete, it is preempted and moved to queue Q2.
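
A simplified sketch of this policy for a batch of jobs that all arrive at once (the job names and burst times are made up; preemption by newly arriving jobs is ignored):

    # Three-level feedback queue: Q0 = RR/8 ms, Q1 = RR/16 ms, Q2 = FCFS.
    from collections import deque

    def mfq(jobs):                        # jobs: iterable of (name, cpu_ms)
        q0, q1, q2 = deque(jobs), deque(), deque()
        for queue, quantum, lower in ((q0, 8, q1), (q1, 16, q2)):
            while queue:
                name, left = queue.popleft()
                left -= min(left, quantum)          # run one quantum
                if left > 0:
                    lower.append((name, left))      # demote to the next queue
                else:
                    print(f"{name} done in the {quantum} ms queue")
        while q2:                                   # FCFS: run to completion
            name, left = q2.popleft()
            print(f"{name} done in Q2 after {left} more ms")

    mfq([("A", 5), ("B", 20), ("C", 40)])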

Multilevel feedback queues (figure)

Unix scheduler
The higher the number quantifying the priority, the lower the actual process priority.
Priority = (recent CPU usage)/2 + base
Recent CPU usage: how often the process has used the CPU since the last time priorities were calculated.
Does this strategy raise or lower the priority of a CPU-bound process?
Example:
- base = 60
- recent CPU usage: P1 = 40, P2 = 18, P3 = 10
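
Working the example through (a sketch using the formula from the slide):

    # Priority = recent_cpu_usage/2 + base; a larger number means a
    # lower priority.
    base = 60
    usage = {"P1": 40, "P2": 18, "P3": 10}
    for name, u in usage.items():
        print(name, u // 2 + base)        # P1: 80, P2: 69, P3: 65

P1, the process with the heaviest recent CPU usage, ends up with the largest number, i.e., the lowest priority: the strategy lowers the priority of CPU-bound processes.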

Comparison of scheduling algorithms: throughput and response time
- Round Robin: throughput may be low if the quantum is too small; shortest average response time if the quantum is chosen correctly.
- FIFO: throughput not emphasized; response time may be poor.
- MFQ (multi-level feedback queue): throughput may be low if the quantum is too small; response time good for I/O-bound but poor for CPU-bound processes.
- SJF (shortest job first): high throughput; response time good for short processes but may be poor for longer ones.
- SRJN (shortest remaining job next): high throughput; response time good for short processes but may be poor for longer ones.

Comparison of scheduling algorithms: I/O-bound processes and infinite postponement
- Round Robin: no distinction between CPU-bound and I/O-bound; infinite postponement does not occur.
- FIFO: no distinction between CPU-bound and I/O-bound; infinite postponement does not occur.
- MFQ: an I/O-bound process gets a high priority if CPU-bound processes are present; infinite postponement may occur for CPU-bound processes.
- SJF: no distinction between CPU-bound and I/O-bound; infinite postponement may occur for processes with long estimated running times.
- SRJN: no distinction between CPU-bound and I/O-bound; infinite postponement may occur for processes with long estimated running times.

Comparison of scheduling algorithms: overhead and CPU-bound processes
- Round Robin: low overhead; no distinction between CPU-bound and I/O-bound.
- FIFO: the lowest overhead; no distinction between CPU-bound and I/O-bound.
- MFQ: overhead can be high (complex data structures and processing routines); a CPU-bound process gets a low priority if I/O-bound processes are present.
- SJF: overhead can be high (a routine to find the shortest job runs at each reschedule); no distinction between CPU-bound and I/O-bound.
- SRJN: overhead can be high (a routine to find the minimum remaining time runs at each reschedule); no distinction between CPU-bound and I/O-bound.

Memory virtualization
A process runs in its own address space; multiple threads may share an address space. Process/thread management and memory management are important functions of an operating system.
Virtual memory: allows programs to run on systems with different sizes of real memory; it may lead to a performance penalty. A virtual address space has a fixed size determined by the number of bits in an address; e.g., if n = 32 then size = 2^32 ≈ 4 GB.
Swap area: the image on disk of the virtual memory of a process.
Page: a group of consecutive virtual addresses, e.g., of size 4 KB, brought from the swap area to the main memory at once. Not all pages of a process are in real memory at the same time.
Caching: blocks of virtual memory addresses are brought into faster memory; this leads to a performance improvement.

Locality of reference
Locality of reference: when a memory location is referenced, the next references are likely to be in close proximity.
- Programs include sequential code.
- Data structures often include data that are processed together, e.g., an array.
There is both spatial and temporal locality of reference. Thus it makes sense to group a set of consecutive addresses into units and transfer such units at once between a slow but larger storage space and a faster but smaller one. The size of the units differs:
- virtual memory: page sizes measured in KB
- cache: 16 - 256 words

Virtual memory
Several strategies:
- paging
- segmentation
- paging + segmentation
At the time a process/thread is created, the system creates a page table for it; an entry in the page table contains:
- the location of the page in the swap area
- the address in main memory where the page resides, if the page has been brought in from the disk
- other information, e.g., the dirty bit
Page fault: a process/thread references an address in a page which is not in main memory.
On-demand paging: a page is brought into main memory from the swap area on the disk when the process/thread references an address in that page.
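
A toy page-table entry and translation path, just to make the bookkeeping concrete (the field names and the frame-allocation stand-in are mine, not from the lecture):

    # Toy page table for on-demand paging; field names are illustrative.
    from dataclasses import dataclass

    PAGE_SIZE = 4096

    @dataclass
    class PageTableEntry:
        swap_slot: int          # location of the page in the swap area
        frame: int = -1         # main-memory frame, -1 if not resident
        dirty: bool = False     # set when the page is modified

    def bring_in(slot):         # stand-in for the real disk I/O + allocation
        print(f"page fault: loading swap slot {slot}")
        return slot             # pretend the frame number equals the slot

    def translate(page_table, page, offset):
        pte = page_table[page]
        if pte.frame < 0:                        # page fault
            pte.frame = bring_in(pte.swap_slot)  # on-demand paging
        return pte.frame * PAGE_SIZE + offset

    pt = [PageTableEntry(swap_slot=i) for i in range(4)]
    print(translate(pt, 2, 100))    # faults once, returns 2*4096 + 100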

Dynamic address translation (figure)

Multi-level memories

System components involved in memory management
- Virtual memory manager (VMM): dynamic address translation.
- Multi-level memory manager (MLMM): moves pages between the levels of the memory hierarchy, e.g., between the disk and the main memory.

The modular design
- The VMM attempts to translate the virtual memory address to a physical memory address.
- If the page is not in main memory, the VMM generates a page-fault exception.
- The exception handler uses a SEND to send the page number to an MLMM port.
- The SEND invokes ADVANCE, which wakes up a thread of the MLMM.
- The MLMM invokes AWAIT on behalf of the thread interrupted due to the page fault.
- The AWAIT releases the processor to the SCHEDULER thread.
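
A schematic of this interaction, with SEND/ADVANCE/AWAIT modeled by a toy queue and event (the port, handler, and thread names are mine; this only mirrors the message flow, not the real primitives):

    # Page-fault path: SEND puts the page number on the MLMM port and
    # ADVANCEs the MLMM thread; AWAIT parks the faulting thread.
    import queue, threading

    mlmm_port = queue.Queue()

    def page_fault_handler(page):
        page_ready = threading.Event()
        mlmm_port.put((page, page_ready))   # SEND (+ ADVANCE on the MLMM)
        page_ready.wait()                   # AWAIT: release the processor

    def mlmm_thread():
        while True:
            page, page_ready = mlmm_port.get()
            print(f"MLMM: bringing page {page} in from the swap area")
            page_ready.set()                # ADVANCE the faulting thread

    threading.Thread(target=mlmm_thread, daemon=True).start()
    page_fault_handler(7)
    print("faulting thread resumes")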

Name resolution in multi-level memories
We consider pairs of layers:
- the upper level of the pair: the primary
- the lower level of the pair: the secondary
The top level is managed by the application, which generates LOAD and STORE instructions to/from CPU registers from/to named memory locations. The processor issues READs/WRITEs to named memory locations. The name goes first to the primary memory device located on the same chip as the processor, which searches the name space of the on-chip cache (L1); here the L1 cache is the primary device and the L2 cache the secondary device. If the name is not found in the L1 cache name space, the Multi-Level Memory Manager (MLMM) looks at the L2 cache (the off-chip cache), which becomes the primary, with the main memory as secondary. If the name is not found in the L2 cache name space, the MLMM looks at the main memory name space; now the main memory is the primary device. If the name is not found in the main memory name space, the Virtual Memory Manager is invoked.
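
The cascade amounts to a chain of lookups in which, on each miss, the next level down becomes the primary (a sketch; the levels here are plain dictionaries):

    # Name resolution cascade: L1 -> L2 -> main memory -> VMM.
    def resolve(name, levels, vmm):
        for level_name, contents in levels:
            if name in contents:              # hit at this level
                return level_name, contents[name]
        return "VMM", vmm(name)               # missed everywhere: page it in

    levels = [("L1", {"x": 1}), ("L2", {"y": 2}), ("RAM", {"z": 3})]
    vmm = lambda n: f"<bring the page holding {n} in from the swap area>"
    print(resolve("z", levels, vmm))          # found in RAM
    print(resolve("w", levels, vmm))          # VMM invoked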

The performance of a two-level memory
The latencies: L_P << L_S
- L_P: the latency of the primary device, e.g., 10 nsec for RAM
- L_S: the latency of the secondary device, e.g., 10 msec for disk
Hit ratio h: the probability that a reference will be satisfied by the primary device.
Average latency: AS = h × L_P + (1 − h) × L_S
Example: L_P = 10 nsec (the primary device is main memory), L_S = 10 msec (the secondary device is the disk).
- h = 0.90: AS = 0.9 × 10 + 0.1 × 10,000,000 = 1,000,009 nsec ≈ 1,000 microseconds = 1 msec
- h = 0.99: AS = 0.99 × 10 + 0.01 × 10,000,000 ≈ 100,010 nsec ≈ 100 microseconds = 0.1 msec
- h = 0.999: AS = 0.999 × 10 + 0.001 × 10,000,000 ≈ 10,010 nsec ≈ 10 microseconds = 0.01 msec
- h = 0.9999: AS = 0.9999 × 10 + 0.0001 × 10,000,000 ≈ 1,010 nsec ≈ 1 microsecond
This considerable slowdown is due to the very large discrepancy (six orders of magnitude) between the primary and the secondary device.
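
The same computation as a small helper, using the slide's values:

    # Average latency of a two-level memory: AS = h*L_P + (1 - h)*L_S.
    def avg_latency(h, l_p=10, l_s=10_000_000):    # both in nanoseconds
        return h * l_p + (1 - h) * l_s

    for h in (0.90, 0.99, 0.999, 0.9999):
        print(f"h = {h}: AS = {avg_latency(h):,.1f} ns")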

The performance of a two-level memory (cont'd)
Statement: if each reference occurs with equal frequency to every cell of the primary and of the secondary device, then the combined memory will operate at the speed of the secondary device.
The sizes: Size_P << Size_S, with Size_S = K × Size_P for K large (1/K small).
- Size_P: the number of cells of the primary device
- Size_S: the number of cells of the secondary device
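
A one-step justification of the statement (my derivation, under the uniform-reference assumption above): a uniformly random reference lands in the primary device with probability equal to its share of the cells, so

\[
h = \frac{Size_P}{Size_S} = \frac{1}{K}, \qquad
AS = \frac{1}{K}\,L_P + \Bigl(1 - \frac{1}{K}\Bigr) L_S \approx L_S
\quad \text{for large } K \text{ and } L_P \ll L_S .
\]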

Memory management elements at each level
1. The string of references directed at that level.
2. The capacity at that level.
3. The bring-in policies:
   - on demand: bring the cell to the primary device from the secondary device when it is needed, e.g., demand paging
   - anticipatory, e.g., pre-paging
4. The replacement policies:
   - FIFO: first in, first out.
   - OPTIMAL: what a clairvoyant multi-level memory manager would do; alternatively, construct the string of references and use it for a second execution of the program (with the same data as input).
   - LRU (least recently used): replace the page that has not been referenced for the longest time.
   - MRU (most recently used): replace the page that was referenced most recently.

Page replacement policies; Belady's anomaly
In the following examples we use a given string of references to illustrate several page replacement policies. We consider a primary device (main memory) with a capacity of three or four blocks and a secondary device (the disk) where a replica of all pages resides.
Once a block has the "dirty bit" on, it means that the page residing in that block was modified and must be written back to the secondary device before being replaced.
The capacity of the primary device is important. One expects that increasing the capacity, in our case the number of blocks in RAM, leads to a higher hit ratio. That is not always the case, as our examples will show. This is Belady's anomaly.
Note: different results are obtained with a different string of references!
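
A small FIFO simulator reproduces the anomaly directly. On the classic reference string 0 1 2 3 0 1 4 0 1 2 3 4 (the same one used in the LRU simulation later in this lecture), FIFO takes 9 faults with 3 blocks but 10 with 4:

    # FIFO page replacement; returns the number of page faults.
    from collections import deque

    def fifo_faults(refs, capacity):
        frames, order, faults = set(), deque(), 0
        for page in refs:
            if page not in frames:
                faults += 1
                if len(frames) == capacity:        # evict the oldest page
                    frames.discard(order.popleft())
                frames.add(page)
                order.append(page)
        return faults

    refs = [0, 1, 2, 3, 0, 1, 4, 0, 1, 2, 3, 4]
    print(fifo_faults(refs, 3))   # 9 faults
    print(fifo_faults(refs, 4))   # 10 faults: Belady's anomaly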

FIFO page replacement algorithm (PS: primary storage). The table traces, for each time interval, the reference string, the contents of blocks 1-3 of PS (and, in a second run, of blocks 1-4), the page brought IN, the page moved OUT, and the total number of page faults.

OPTIMAL page replacement algorithm: the same trace format (reference string, block contents for 3-block and 4-block PS, pages IN/OUT, and the total number of page faults).

LRU page replacement algorithm: the same trace format as above.

LRU, OPTIMAL, MRU
LRU looks only at the history. OPTIMAL "knows" not only the history but also the future. In some particular cases the most-recently-used (MRU) algorithm performs better than LRU. Example: on a primary device with 4 cells, the slide's reference string makes LRU fault on every reference, while MRU, after the initial faults, mostly hits.
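
The effect is easy to reproduce with a cyclic reference string one page larger than the primary device; the string below is my own example, not necessarily the slide's:

    # LRU vs MRU on a cyclic scan of 5 pages with only 4 frames.
    def faults(refs, capacity, evict_most_recent):
        frames, count = [], 0           # frames kept in recency order
        for page in refs:
            if page in frames:
                frames.remove(page)     # refresh recency on a hit
            else:
                count += 1
                if len(frames) == capacity:
                    frames.pop(-1 if evict_most_recent else 0)  # MRU vs LRU
            frames.append(page)         # most recent at the end
        return count

    refs = [0, 1, 2, 3, 4] * 3          # cyclic scan, 15 references
    print("LRU:", faults(refs, 4, evict_most_recent=False))  # 15 faults
    print("MRU:", faults(refs, 4, evict_most_recent=True))   # 7 faults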

The OPTIMAL replacement policy keeps in the 3-block primary memory the same pages as it does in the case of the 4-block primary memory.

The FIFO replacement policy does not keep in the 3-block primary memory the same pages as it does in the case of the 4-block primary memory.

The LRU replacement policy keeps in the 3-block primary memory the same pages as it does in the case of the 4-block primary memory.

How to avoid Belady's anomaly
The OPTIMAL and the LRU algorithms have the subset property: a primary device with a smaller capacity holds a subset of the pages that a primary device with a larger capacity would hold.
The subset property creates a total ordering. If the primary device with one block contains page A, then a device with two blocks adds page B, and a device with three blocks adds page C. Thus we have a total ordering A, then B, then C, or (A, B, C).
Replacement algorithms that have the subset property are called "stack" algorithms.
If we use a stack replacement algorithm, a device with a larger capacity can never have more page faults than one with a smaller capacity:
- m: the pages held by a primary device with smaller capacity
- n: the pages held by a primary device with larger capacity
- m is a subset of n

Simulation analysis of page replacement algorithms
Given a reference string, we can carry out the simulation for all possible cases, with the capacity of the primary storage device varying from 1 to n, in a single pass. At each new reference, some page moves to the top of the ordering and the pages that were above it either move down or stay in the same place, as dictated by the replacement policy. We record whether this movement corresponds to paging out, i.e., to movement to the secondary storage.
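
For LRU this single pass amounts to computing stack distances: keep the pages in recency order, and a reference found at depth d is a hit for every capacity of at least d and a fault for all smaller capacities (a sketch):

    # Single-pass LRU simulation for all capacities via stack distances.
    def lru_faults_all_sizes(refs, max_size):
        stack, faults = [], [0] * (max_size + 1)    # faults[c] for capacity c
        for page in refs:
            if page in stack:
                depth = stack.index(page) + 1       # 1 = top of the stack
                stack.remove(page)
            else:
                depth = float("inf")                # first touch: all fault
            for c in range(1, max_size + 1):
                if depth > c:
                    faults[c] += 1
            stack.insert(0, page)                   # page moves to the top
        return faults[1:]

    refs = [0, 1, 2, 3, 0, 1, 4, 0, 1, 2, 3, 4]
    print(lru_faults_all_sizes(refs, 5))            # [12, 12, 10, 8, 5]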

Simulation of the LRU page replacement algorithm (in/out = page brought in / page evicted after each reference; "-/-" marks a hit; the last column is the total number of page faults):

Time            1    2    3    4    5    6    7    8    9    10   11   12   Faults
Reference       0    1    2    3    0    1    4    0    1    2    3    4
Size 1 in/out  0/-  1/0  2/1  3/2  0/3  1/0  4/1  0/4  1/0  2/1  3/2  4/3     12
Size 2 in/out  0/-  1/-  2/0  3/1  0/2  1/3  4/0  0/1  1/4  2/0  3/1  4/2     12
Size 3 in/out  0/-  1/-  2/-  3/0  0/1  1/2  4/3  -/-  -/-  2/4  3/0  4/1     10
Size 4 in/out  0/-  1/-  2/-  3/-  -/-  -/-  4/2  -/-  -/-  2/3  3/4  4/0      8
Size 5 in/out  0/-  1/-  2/-  3/-  -/-  -/-  4/-  -/-  -/-  -/-  -/-  -/-      5

Simulation of the OPTIMAL page replacement algorithm: the victim chosen at each reference for primary device capacities 1 through 5 on the same reference string, together with the stack contents after each reference and the total number of page faults.

Clock replacement algorithm
Approximates LRU with a minimum of additional hardware (one reference bit for each page) and little overhead.
The algorithm is activated when a new page must be brought in:
- move the pointer of a virtual clock in the clockwise direction
- if the arm points to a block with the reference bit TRUE: set the bit FALSE and move to the next block
- if the arm points to a block with the reference bit FALSE: the page in that block can be removed (it has not been referenced for a while); write it back to the secondary storage if the "dirty" bit is on (i.e., if the page has been modified)
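
A compact sketch of the clock hand (the frame contents and bit values are illustrative):

    # Clock (second-chance) replacement: sweep until a frame whose
    # reference bit is FALSE is found, clearing bits as the hand passes.
    def clock_select(frames, hand):
        """frames: list of dicts with 'page', 'ref', 'dirty' keys.
        Returns (victim_index, new_hand_position)."""
        while True:
            f = frames[hand]
            if f["ref"]:                        # recently used: second chance
                f["ref"] = False
                hand = (hand + 1) % len(frames)
            else:
                if f["dirty"]:                  # modified: write back first
                    print(f"write page {f['page']} back to the swap area")
                return hand, (hand + 1) % len(frames)

    frames = [{"page": 10, "ref": True,  "dirty": False},
              {"page": 11, "ref": False, "dirty": True},
              {"page": 12, "ref": True,  "dirty": False}]
    victim, hand = clock_select(frames, hand=0)
    print("evict the page in frame", victim)    # frame 1 (page 11)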