
Disk Scheduling Because disk I/O is so important, it is worth our time to investigate some of the issues involved in disk I/O. One of the biggest issues is disk performance.

Seek time is the time required for the read head to move to the track containing the data to be read.

Rotational delay, or latency, is the time required for the sector to move under the read head.

Performance Parameters A disk request passes through several phases: wait for the device, wait for the channel, seek, rotational delay, and data transfer.

Seek time is the time required to move the disk arm to the specified track:
  Ts = (number of tracks crossed) * (disk constant) + startup time

Rotational delay is the time required for the data on that track to come underneath the read heads. For a hard drive rotating at 3600 rpm, a full rotation takes 16.7 ms, so the average rotational delay (half a rotation) is 8.3 ms.

Transfer time:
  Tt = bytes to transfer / (rotation speed * bytes on the track)
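A minimal Python sketch of these timing formulas, assuming the 3600 rpm figure from this slide; the per-track seek constant and startup time are placeholder parameters, not values for any particular drive:

# Rough disk-timing helpers based on the formulas above.
ROTATION_RPM = 3600
ROTATION_TIME_MS = 60_000 / ROTATION_RPM     # 16.7 ms per full rotation

def seek_time_ms(tracks_crossed, ms_per_track, startup_ms):
    # Ts = (number of tracks crossed) * (disk constant) + startup time
    return tracks_crossed * ms_per_track + startup_ms

def avg_rotational_delay_ms():
    # On average the target sector is half a rotation away.
    return ROTATION_TIME_MS / 2              # ~8.3 ms at 3600 rpm

def transfer_time_ms(bytes_to_read, bytes_per_track):
    # Tt = bytes / (rotation_speed * bytes_on_track), converted to ms.
    return ROTATION_TIME_MS * bytes_to_read / bytes_per_track

print(avg_rotational_delay_ms())             # ~8.3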

Data Organization vs. Performance Consider a file where the data is stored as compactly as possible; in this example the file occupies all of the sectors on 8 adjacent tracks (32 sectors x 8 tracks = 256 sectors total). The time to read the first track will be:

  average seek time    20 ms
  rotational delay      8.3 ms
  read 32 sectors      16.7 ms
  total                45 ms

Assuming that there is essentially no seek time on the remaining tracks, each successive track can be read in 8.3 ms + 16.7 ms = 25 ms.

Total read time = 45 ms + 7 * 25 ms = 220 ms = 0.22 seconds

If the data is randomly distributed across the disk, then for each sector we have:

  average seek time    20 ms
  rotational delay      8.3 ms
  read 1 sector         0.5 ms
  total per sector     28.8 ms

Total time = 256 sectors * 28.8 ms/sector = 7.37 seconds
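A short sketch that reproduces both calculations, using the numbers assumed in this example (20 ms average seek, 8.3 ms rotational delay, 16.7 ms per 32-sector track):

# Sequential vs. random placement of a 256-sector file (8 tracks of 32 sectors).
AVG_SEEK_MS = 20.0
ROT_DELAY_MS = 8.3
TRACK_READ_MS = 16.7            # read all 32 sectors on one track
SECTOR_READ_MS = 0.5            # read a single sector (16.7 ms / 32, rounded)
SECTORS, TRACKS = 256, 8

# Compact layout: one seek, then 8 adjacent tracks.
first_track = AVG_SEEK_MS + ROT_DELAY_MS + TRACK_READ_MS            # 45 ms
sequential = first_track + (TRACKS - 1) * (ROT_DELAY_MS + TRACK_READ_MS)
print("sequential:", sequential, "ms")                              # 220 ms

# Random layout: every sector pays a full seek and rotational delay.
per_sector = AVG_SEEK_MS + ROT_DELAY_MS + SECTOR_READ_MS            # 28.8 ms
print("random:", SECTORS * per_sector / 1000, "s")                  # ~7.37 s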

In the previous example, what is the biggest factor affecting performance? Seek time! To improve performance, we need to reduce the average seek time.

The operating system keeps a queue of requests to read/write the disk. In these exercises we assume that all of the requests are already on the queue.

If requests are scheduled in random order, then we would expect the disk tracks to be visited in a random order.

First-come, First-served Scheduling: requests are serviced in the order in which they arrive. If there are only a few processes competing for the drive, then we can hope for good performance. If there are a large number of processes competing for the drive, then performance approaches the random scheduling case.

First-come, First-served example: while at track 15, assume some random set of read requests -- tracks 4, 40, 11, 35, 7 and 16.

  Head Path    Tracks Traveled
  15 to 4      11
  4 to 40      36
  40 to 11     29
  11 to 35     24
  35 to 7      28
  7 to 16      9
  Total        137 tracks
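A small sketch of this calculation; the head starts at track 15 and the queue holds the six requests above:

def fcfs_tracks_traveled(start, requests):
    # Service the requests in arrival order and total up the head movement.
    total, position = 0, start
    for track in requests:
        total += abs(track - position)
        position = track
    return total

print(fcfs_tracks_traveled(15, [4, 40, 11, 35, 7, 16]))    # 137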

Shortest Seek Time First: always select the request that requires the shortest seek time from the current position.

Shortest Seek Time First example: while at track 15, assume the same random set of read requests -- tracks 4, 40, 11, 35, 7 and 16.

  Head Path    Tracks Traveled
  15 to 16     1
  16 to 11     5
  11 to 7      4
  7 to 4       3
  4 to 35      31
  35 to 40     5
  Total        49 tracks

Problem? In a heavily loaded system, incoming requests with shorter seek times will constantly push requests with long seek times to the end of the queue. This results in what is called starvation.
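A sketch of shortest-seek-time-first on the same request queue:

def sstf_tracks_traveled(start, requests):
    # Always service the pending request closest to the current head position.
    pending, position, total = list(requests), start, 0
    while pending:
        nearest = min(pending, key=lambda track: abs(track - position))
        total += abs(nearest - position)
        position = nearest
        pending.remove(nearest)
    return total

print(sstf_tracks_traveled(15, [4, 40, 11, 35, 7, 16]))    # 49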

The Elevator Algorithm (scan-look): search for the shortest seek time from the current position in one direction only. Continue in this direction until all requests in that direction have been satisfied, then go in the opposite direction. In the scan variant, the head moves all the way to the first (or last) track before it changes direction; in the look variant, it reverses as soon as no requests remain ahead of it.

Scan-Look example: while at track 15, assume the same random set of read requests -- tracks 4, 40, 11, 35, 7 and 16. The head is moving toward higher-numbered tracks.

  Head Path    Tracks Traveled
  15 to 16     1
  16 to 35     19
  35 to 40     5
  40 to 11     29
  11 to 7      4
  7 to 4       3
  Total        61 tracks
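A sketch of the same count; note that the example above reverses at track 40 rather than running to the edge of the disk, which is the look behavior:

def look_tracks_traveled(start, requests, direction="up"):
    # Sweep toward higher tracks first, then reverse for the remaining requests.
    up = sorted(track for track in requests if track >= start)
    down = sorted((track for track in requests if track < start), reverse=True)
    order = up + down if direction == "up" else down + up
    total, position = 0, start
    for track in order:
        total += abs(track - position)
        position = track
    return total

print(look_tracks_traveled(15, [4, 40, 11, 35, 7, 16]))    # 61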

Which algorithm would you choose if you were implementing an operating system? Issues to consider when selecting a disk scheduling algorithm:
* Performance is based on the number and types of requests.
* What scheme is used to allocate unused disk blocks?
* How and where are directories and i-nodes stored?
* How does paging impact disk performance?
* How does disk caching impact performance?

Disk Cache The disk cache holds a number of disk blocks in memory, usually in RAM on the disk controller. When an I/O request is made for a particular block, the disk cache is checked first. If the block is in the cache, it is read from there. Otherwise, the required block (and often some adjacent blocks) is read from the disk into the cache.

Replacement Strategies
* Least Recently Used (LRU): replace the block that has been in the cache the longest without being referenced.
* Least Frequently Used (LFU): replace the block that has been referenced the fewest times.
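A minimal sketch of an LRU block cache, assuming a read_from_disk callback supplied by the caller; the two-block capacity and fake disk below are just for illustration:

from collections import OrderedDict

class LRUBlockCache:
    # Evict the block that has gone unreferenced the longest.
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()            # block number -> block contents

    def get(self, block_no, read_from_disk):
        if block_no in self.blocks:            # cache hit: mark as recently used
            self.blocks.move_to_end(block_no)
            return self.blocks[block_no]
        data = read_from_disk(block_no)        # cache miss: go to the disk
        self.blocks[block_no] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)    # evict the least recently used
        return data

cache = LRUBlockCache(capacity=2)
fake_disk = lambda n: f"block-{n}"             # stand-in for a real disk read
for n in [1, 2, 1, 3]:                         # block 2 is evicted, block 1 survives
    cache.get(n, fake_disk)
print(list(cache.blocks))                      # [1, 3]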

RAID: Redundant Array of Independent Disks. The goals are to push performance and to add reliability.

RAID Level 0: Striping. The logical disk is divided into strips, and disk management software distributes the strips round-robin across the physical drives (strip 0 on drive 1, strip 1 on drive 2, strip 2 on drive 1 again, and so on). The set of strips at the same position on every drive forms a stripe. Striping improves performance but adds no redundancy.
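A sketch of the striping arithmetic; the round-robin layout is the one described above, and the two-drive array is just an example:

def locate_strip(logical_strip, num_drives):
    # RAID 0 lays strips out round-robin across the physical drives.
    drive = logical_strip % num_drives         # which physical drive
    offset = logical_strip // num_drives       # position of the strip on that drive
    return drive, offset

# With two drives, strips 0, 2, 4 land on drive 0 and strips 1, 3, 5 on drive 1.
for strip in range(6):
    print(strip, locate_strip(strip, num_drives=2))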

RAID Level 1: Mirroring, for high reliability. Every strip is written to two physical drives, so each drive in a mirrored pair holds an identical copy of its strips (in the figure, drives 1 and 2 hold the same strips, as do drives 3 and 4). Disk management software duplicates each write to both drives of the pair.

RAID Level 3: Parity, for high throughput. Data strips are striped across all but one of the physical drives, and the remaining drive holds a parity strip for each stripe (strips 0, 1, 2 with parity a; strips 3, 4, 5 with parity b; and so on). If a single drive fails, its strips can be rebuilt from the surviving strips and the parity.
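The parity strip is the bitwise XOR of the data strips in its stripe, which is what makes reconstruction possible; a small sketch (the byte strings stand in for real strip contents):

from functools import reduce

def parity(strips):
    # Parity = byte-wise XOR across all strips of the stripe.
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*strips))

stripe = [b"AAAA", b"BBBB", b"CCCC"]           # strips 0, 1, 2 of one stripe
par = parity(stripe)                           # stored on the parity drive

# If the drive holding strip 2 fails, XOR the survivors with the parity strip.
recovered = parity([stripe[0], stripe[1], par])
print(recovered)                               # b'CCCC'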

Thinking about what you have learned

Suppose that 3 processes, p1, p2, and p3, are attempting to concurrently use a machine with interrupt-driven I/O. Assuming that no two processes can be using the CPU or the physical device at the same time, what is the minimum amount of time required to execute the three processes, given the following (ignore context switches)?

  Process   Compute time   Device time
  1         10             50
  2         30             10
  3         15             35

Answer: 105 time units. One schedule that achieves this minimum: p1 computes from 0 to 10 and then uses the device from 10 to 60; while p1's I/O is in progress, p3 computes from 10 to 25 and p2 computes from 25 to 55; p3 then uses the device from 60 to 95, and p2 uses it from 95 to 105. The device cannot start before time 10 and must be busy for 50 + 10 + 35 = 95 time units, so no schedule can finish earlier than 105.

Consider the case where the device controller is double buffering I/O. That is, while the process is reading a character from one buffer (A), the device is writing into the second (B). What is the effect on the running time of the process if the process is I/O bound and requests characters faster than the device can provide them?

The process reads from buffer A. It tries to read from buffer B, but the device is still filling it, so the process blocks until the data has been stored in buffer B. The process then wakes up, reads the data, and tries to read buffer A again. Double buffering has not helped performance.

Consider the case where the device controller is double buffering I/O. That is, while the process is reading a character from one buffer (A), the device is writing into the second (B). What is the effect on the running time of the process if the process is compute bound and requests characters much more slowly than the device can provide them?

The process reads from buffer A and then computes for a long time. Meanwhile, buffer B is filled. When the process asks for that data, it is already there. The process does not have to wait, and performance improves.

Suppose that the read/write head is at track 97, moving toward the highest numbered track on the disk, track 199. The disk request queue contains read/write requests for blocks on tracks 84, 155, 103, 96, and 197, respectively. How many tracks must the head step across using a FCFS strategy?

Using FCFS, the head visits the tracks in queue order:

  Head Path      Steps
  97 to 84       13
  84 to 155      71
  155 to 103     52
  103 to 96      7
  96 to 197      101
  Total          244 steps

Suppose that the read/write head is at track 97, moving toward the highest numbered track on the disk, track 199. The disk request queue contains read/write requests for blocks on tracks 84, 155, 103, 96, and 197, respectively. How many tracks must the head step across using an elevator strategy?

Using the elevator strategy, the head sweeps up to the end of the disk and then back down:

  Head Path      Steps
  97 to 103      6
  103 to 155     52
  155 to 197     42
  197 to 199     2
  199 to 96      103
  96 to 84       12
  Total          217 steps
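A small sketch that checks both answers; the elevator sweep here runs all the way out to track 199 before reversing, matching the count above:

def fcfs_steps(start, requests):
    # Visit the requested tracks in queue order.
    total, position = 0, start
    for track in requests:
        total += abs(track - position)
        position = track
    return total

def elevator_steps(start, requests, last_track):
    # Sweep up to the last track on the disk, then come back down.
    up = sorted(track for track in requests if track >= start) + [last_track]
    down = sorted((track for track in requests if track < start), reverse=True)
    total, position = 0, start
    for track in up + down:
        total += abs(track - position)
        position = track
    return total

queue = [84, 155, 103, 96, 197]
print(fcfs_steps(97, queue))                       # 244
print(elevator_steps(97, queue, last_track=199))   # 217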

In our class discussion on directories it was suggested that directory entries are stored as a linear list. What is the big disadvantage of storing directory entries this way, and how could you address this problem? Consider what happens when you look up a file: the directory must be searched linearly, so lookup time grows with the number of entries. This can be addressed by keeping the entries in a sorted structure such as a B-tree, or by using a hash table keyed on the file name.

Which file allocation scheme discussed in class gives the best performance? What are some of the concerns with this approach? Contiguous allocation gives the best performance. Two big problems are:
* Finding space for a new file (it must all fit in contiguous blocks).
* Allocating space when we don't know how big the file will be, or handling files that grow over time.

What is the difference between internal and external fragmentation? Internal fragmentation occurs when only a portion of an allocated file block is actually used by the file. External fragmentation occurs when the free space on a disk is broken into pieces too small to hold a file, even though enough total free space exists.

Linked allocation of disk blocks solves many of the problems of contiguous allocation, but it does not work very well for random access files. Why not? To access a random block of the file, you must walk through the chain of blocks from the beginning until you reach the block you need, reading every block along the way.
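A toy sketch of that traversal; the dictionary below stands in for the disk, and the block numbers and contents are made up for illustration:

# Toy "disk": each block holds some data and the number of the next block.
disk = {7: {"data": "A", "next": 12},
        12: {"data": "B", "next": 3},
        3: {"data": "C", "next": None}}

def read_nth_block(first_block, n):
    # Reaching the n-th block of a linked file means reading every block
    # before it just to follow the "next" pointers: n + 1 disk reads in all.
    block = disk[first_block]
    for _ in range(n):
        block = disk[block["next"]]
    return block["data"]

print(read_nth_block(7, 2))    # "C", but only after touching blocks 7 and 12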

Linked allocation of disk blocks has a reliability problem. What is it? If a link breaks for any reason, the disk blocks after the broken link are inaccessible.