Lecture 17 I/O Optimization

Disk Organization
Tracks: concentric rings around a disk surface
Sectors: arcs of a track; the minimum unit of transfer
Cylinder: the set of corresponding tracks on all surfaces
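Given this geometry, the OS addresses the disk by logical block number rather than by physical position. The usual mapping from a (cylinder, head, sector) address to a logical block address can be sketched as follows; the 4-head, 63-sectors-per-track geometry is a made-up example:

```python
def chs_to_lba(cylinder, head, sector, heads_per_cylinder, sectors_per_track):
    """Map a (cylinder, head, sector) address to a logical block address.
    Sectors are conventionally numbered from 1 within a track."""
    return (cylinder * heads_per_cylinder + head) * sectors_per_track + (sector - 1)

# Hypothetical geometry: 4 heads, 63 sectors per track.
lba = chs_to_lba(cylinder=1, head=0, sector=1,
                 heads_per_cylinder=4, sectors_per_track=63)
print(lba)  # first sector of cylinder 1: 4 tracks * 63 sectors = 252
```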

Disk Performance
Seek: position the heads over a cylinder (~10 ms to move across the disk)
Rotational delay: wait for the sector to rotate underneath the head (~8 ms per full rotation)
Transfer rate: ~4 MB/s
Tradeoff:
Small sectors: seek time dominates
Large sectors: transfer approaches full disk bandwidth, but space is wasted if the file is small
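These numbers plug into the standard access-time formula: seek + average rotational delay (half a rotation) + transfer time. A small sketch using the slide's figures of 10 ms seek, 8 ms per rotation, and 4 MB/s bandwidth:

```python
def access_time_ms(seek_ms, rotation_ms, transfer_bytes, bandwidth_bytes_per_s):
    """Seek + average rotational delay (half a rotation) + transfer time, in ms."""
    rotational_delay = rotation_ms / 2
    transfer = transfer_bytes / bandwidth_bytes_per_s * 1000  # seconds -> ms
    return seek_ms + rotational_delay + transfer

small = access_time_ms(10, 8, 512, 4_000_000)        # one 512 B sector
large = access_time_ms(10, 8, 1_000_000, 4_000_000)  # a 1 MB sequential read
print(small, large)  # positioning dominates the small transfer
```

For the 512 B read, about 14 of the ~14.1 ms is positioning; for the 1 MB read, 250 of the 264 ms is useful transfer, which is the tradeoff stated above.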

Block size optimization Tradeoff for small blocks:
Con: many transfers for a large file, and potentially more seeks
Con: high disk-space overhead (inter-record gaps between sectors, more inode block pointers)
Pro: quick transfer of each block
Pro: less internal fragmentation
Berkeley FFS uses 4 KB blocks; it also uses fragments of 1/4 block to save space
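Internal fragmentation is easy to quantify: only the last block of a file is partially filled. A minimal sketch (the 5000-byte file size is an arbitrary example):

```python
def internal_fragmentation(file_size, block_size):
    """Bytes wasted in the last, partially filled block of a file."""
    remainder = file_size % block_size
    return 0 if remainder == 0 else block_size - remainder

# Arbitrary example: a 5000-byte file.
print(internal_fragmentation(5000, 4096))  # 3192 bytes wasted with 4 KB blocks
print(internal_fragmentation(5000, 1024))  # 120 bytes wasted with 1 KB blocks
```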

Disk Arm Scheduling How to choose the next request to serve:
FIFO: fair, but may result in long seeks.
SSTF: shortest seek time first. Reduces seeks, but prone to starvation.
SCAN: like an elevator; move the arm in one direction, serving the closest request ahead of it, until no requests remain in that direction, then reverse. Fairer than SSTF but does not perform as well; favors cylinders in the center of the disk.
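The difference between FIFO and SSTF can be seen by totaling arm movement over a request queue. A sketch, using a commonly cited textbook queue with the head starting at cylinder 53:

```python
def seek_distance(order, start):
    """Total number of cylinders the arm crosses serving requests in order."""
    total, pos = 0, start
    for cyl in order:
        total += abs(cyl - pos)
        pos = cyl
    return total

def sstf(requests, start):
    """Shortest Seek Time First: repeatedly serve the closest pending request."""
    pending, order, pos = list(requests), [], start
    while pending:
        nearest = min(pending, key=lambda c: abs(c - pos))
        pending.remove(nearest)
        order.append(nearest)
        pos = nearest
    return order

queue = [98, 183, 37, 122, 14, 124, 65, 67]
print(seek_distance(queue, 53))            # FIFO order: 640 cylinders
print(seek_distance(sstf(queue, 53), 53))  # SSTF order: 236 cylinders
```

SSTF's weakness is also visible here: a steady stream of requests near the head would keep pushing distant ones (like 183) to the back, starving them.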

More Disk Scheduling CSCAN: like SCAN, but serves requests in only one direction; requests are skipped while the head moves back. Fairer than SCAN (wait times are more uniform across cylinders), but performance is a little worse. In practice, if locality is good, few seeks are required, so any algorithm works well.
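SCAN and C-SCAN can be sketched the same way: both serve the requests above the head in ascending order, then differ in how they handle the rest. This simplified version only orders the pending requests and ignores travel to the disk edge:

```python
def scan(requests, start):
    """SCAN (elevator): sweep upward serving requests, then sweep back down."""
    up = sorted(c for c in requests if c >= start)
    down = sorted((c for c in requests if c < start), reverse=True)
    return up + down

def cscan(requests, start):
    """C-SCAN: sweep upward, then jump back and sweep upward again."""
    up = sorted(c for c in requests if c >= start)
    wrapped = sorted(c for c in requests if c < start)
    return up + wrapped

queue = [98, 183, 37, 122, 14, 124, 65, 67]
print(scan(queue, 53))   # [65, 67, 98, 122, 124, 183, 37, 14]
print(cscan(queue, 53))  # [65, 67, 98, 122, 124, 183, 14, 37]
```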

Rotational Scheduling
SRLTF (shortest rotational latency time first) works well.
Skip-sector: allocate a sequential file to interleaved sectors, so the next logical sector has not already passed under the head by the time the previous one has been processed.
Track offset for head switching: since switching heads takes time, offset the start position of each track by a few sectors so the new head can be selected before its first sector arrives.
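Skip-sector (interleaved) allocation can be illustrated by computing where consecutive logical sectors land on a track. A sketch, with the track size and interleave factor chosen arbitrarily:

```python
def interleave(sectors_per_track, factor):
    """Physical layout of logical sectors 0..n-1 with the given interleave
    factor (factor 2 puts consecutive logical sectors two slots apart)."""
    layout = [None] * sectors_per_track
    slot = 0
    for logical in range(sectors_per_track):
        while layout[slot] is not None:       # skip slots already assigned
            slot = (slot + 1) % sectors_per_track
        layout[slot] = logical
        slot = (slot + factor) % sectors_per_track
    return layout

print(interleave(8, 2))  # [0, 4, 1, 5, 2, 6, 3, 7]
```

With factor 2, logical sector 1 sits two physical slots after sector 0, giving the controller one sector's worth of rotation time to finish processing before the next transfer begins.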

File placement Locality of reference: placing files that are frequently accessed together in the same cylinder minimizes seek time. E.g., inodes of files in the same directory are placed in the same cylinder group, which speeds up commands such as ls. Seek frequency can also be reduced by placing commonly used files on different disks.

Disk Caching Exploit locality by caching recently used blocks in memory; use LRU for replacement. Works well as long as memory is big enough to hold the working set of files; the hit ratio is then high (70-90%). The problem, as with any LRU scheme, is thrashing when the working set is larger than the file system cache.
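An LRU block cache of the kind described here can be sketched in a few lines. Real caches also track dirty blocks, pinning, and concurrency; the read_block callback standing in for a disk read is a placeholder:

```python
from collections import OrderedDict

class BlockCache:
    """Tiny LRU cache for disk blocks; read_block stands in for a disk read."""
    def __init__(self, capacity, read_block):
        self.capacity = capacity
        self.read_block = read_block
        self.blocks = OrderedDict()   # block number -> data, in LRU order
        self.hits = self.misses = 0

    def get(self, blockno):
        if blockno in self.blocks:
            self.hits += 1
            self.blocks.move_to_end(blockno)      # now most recently used
        else:
            self.misses += 1
            if len(self.blocks) >= self.capacity:
                self.blocks.popitem(last=False)   # evict least recently used
            self.blocks[blockno] = self.read_block(blockno)
        return self.blocks[blockno]

cache = BlockCache(capacity=2, read_block=lambda n: f"data-{n}")
for n in [1, 2, 1, 3, 1]:
    cache.get(n)
print(cache.hits, cache.misses)  # 2 3: block 1 stays hot, block 2 is evicted
```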

Prefetching Since the most common access pattern is sequential, prefetch subsequent disk blocks ahead of the current read request. Typically the entire track is loaded into the cache on every read. Problem with prefetching too many blocks: it delays concurrent disk requests from other processes.
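A common way to decide when to prefetch is to detect sequential access: if the requested block immediately follows the previous one, read ahead. A minimal sketch; the readahead window of 4 blocks is an arbitrary choice:

```python
def plan_reads(requested_block, previous_block, readahead=4):
    """If access looks sequential, fetch the next few blocks too."""
    if requested_block == previous_block + 1:   # sequential pattern detected
        return list(range(requested_block, requested_block + readahead))
    return [requested_block]                    # random access: no prefetch

print(plan_reads(11, 10))  # [11, 12, 13, 14]
print(plan_reads(50, 10))  # [50]
```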

Write behind Batch writes so the disk scheduler can efficiently order many requests. Avoid some writes entirely, since many temporary files never need to reach the disk. UNIX has a 30-second write-behind policy: writes go to kernel buffers, and the buffers are flushed every 30 seconds. Problem: a system crash can lose data. This can be addressed by using NVRAM (non-volatile RAM) as the write cache.
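The write-behind policy can be sketched as a dirty-block buffer with a periodic flush. The class and method names are illustrative; flushing in sorted block order is one way to hand the disk scheduler well-ordered requests:

```python
import time

class WriteBehindBuffer:
    """Hold dirty blocks in memory and flush them to disk in batches."""
    def __init__(self, flush_interval=30.0):   # UNIX-style 30-second policy
        self.flush_interval = flush_interval
        self.dirty = {}                        # block number -> latest data
        self.last_flush = time.monotonic()

    def write(self, blockno, data):
        self.dirty[blockno] = data             # later writes absorb earlier ones

    def maybe_flush(self, write_to_disk, now=None):
        now = time.monotonic() if now is None else now
        if now - self.last_flush >= self.flush_interval:
            for blockno in sorted(self.dirty):  # sorted order helps the scheduler
                write_to_disk(blockno, self.dirty[blockno])
            self.dirty.clear()
            self.last_flush = now

buf = WriteBehindBuffer()
buf.write(7, b"a"); buf.write(3, b"b"); buf.write(7, b"c")
flushed = []
buf.maybe_flush(lambda n, d: flushed.append((n, d)), now=buf.last_flush + 31)
print(flushed)  # [(3, b'b'), (7, b'c')] -- block 7 reaches disk only once
```

The absorbed write to block 7 is exactly the saving the slide describes: short-lived data may never touch the disk at all.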

RAID Idea: improve performance through parallel reads, and improve reliability through redundancy. Keep an array of disks and distribute data over them, so reads can be done in parallel. Keep a check disk that stores the parity of every bit, used to recover from a failure. This works only if at most one disk fails at a time.
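Parity-based recovery rests on a property of XOR: the parity block is the XOR of the data blocks, so any single missing block equals the XOR of the survivors and the parity. A sketch:

```python
def parity(blocks):
    """Bytewise XOR across equal-sized blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

data = [b"disk0", b"disk1", b"disk2"]   # one block per data disk
check = parity(data)                    # stored on the check disk

# Disk 1 fails: its block is the XOR of the survivors and the parity.
rebuilt = parity([data[0], data[2], check])
print(rebuilt)  # b'disk1'
```

If two disks fail at once, two unknowns remain in each XOR equation, which is why this scheme tolerates only a single failure.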

RAID, continued Problem: the parity disk can become a bottleneck, since it must be updated on every write. Solution: distribute the parity information uniformly over all disks, eliminating the single-disk bottleneck.
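Rotating the parity across disks stripe by stripe is what spreads the update load. A sketch of one common rotation scheme; the exact placement varies between implementations:

```python
def raid5_layout(stripe, num_disks):
    """One common rotation: for each stripe, pick the parity disk by rotating
    downward from the last disk; the remaining disks hold data."""
    parity_disk = (num_disks - 1 - stripe) % num_disks
    data_disks = [d for d in range(num_disks) if d != parity_disk]
    return parity_disk, data_disks

# With 4 disks, parity rotates 3, 2, 1, 0, then wraps back to 3.
print([raid5_layout(s, 4)[0] for s in range(5)])  # [3, 2, 1, 0, 3]
```

Over many stripes each disk holds an equal share of parity, so parity-update traffic is spread evenly rather than hammering one disk.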