University of Massachusetts Amherst, Department of Computer Science
Emery Berger
Operating Systems (CMPSCI 377)
Lecture 18: I/O Systems & Storage

Last Time: File Systems Implementation
- How disks work
- How to organize data (files) on disks
- Data structures
- Placement of files on disk

Today
- I/O Systems
- Storage

I/O Devices
- Many different kinds of I/O devices
- Software that controls them: device drivers

Connection to CPU
- Devices connect to ports or a system bus:
  - Allows devices to communicate with the CPU
  - Typically shared by multiple devices
- Two ways of communicating with the CPU:
  - Ports (programmed I/O)
  - Direct Memory Access (DMA)

Ports (a.k.a. Programmed I/O)
- Device port = 4 registers:
  - Status: indicates device busy, data ready, or error
  - Control: indicates the command to perform
  - Data-in: read by the host to get input from the device
  - Data-out: written by the CPU to send output to the device
- The controller receives commands from the bus, translates them into actions, and reads/writes data onto the bus

Polling
- CPU:
  - Busy-wait until status = idle
  - Set the command register and data-out (for output)
  - Set status = command-ready
- Controller: sets status = busy
- Controller:
  - Reads the command register, performs the command
  - Places data in data-in (for input)
  - Changes state to idle or error

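The handshake above can be sketched as a toy simulation. Everything here (the `ToyDevice` class, the made-up "echo" command) is illustrative scaffolding, not a real driver API; only the four-register protocol follows the slide.

```python
# Toy simulation of the programmed-I/O polling handshake.
# The register names (status, control, data_in, data_out) mirror the
# four port registers above; the device itself is fake.

IDLE, COMMAND_READY, BUSY, ERROR = "idle", "command-ready", "busy", "error"

class ToyDevice:
    def __init__(self):
        self.status = IDLE
        self.control = None     # command register
        self.data_out = None    # host -> device
        self.data_in = None     # device -> host

    def step(self):
        """Controller side: notice a command, perform it, go idle."""
        if self.status == COMMAND_READY:
            self.status = BUSY
            if self.control == "echo":        # a made-up command
                self.data_in = self.data_out  # "perform" the I/O
                self.status = IDLE
            else:
                self.status = ERROR

def polled_write_read(dev, byte):
    """Host side of the handshake from the slide."""
    while dev.status != IDLE:                # 1. busy-wait until idle
        dev.step()
    dev.data_out = byte                      # 2. set data-out and command
    dev.control = "echo"
    dev.status = COMMAND_READY               # 3. signal command-ready
    while dev.status not in (IDLE, ERROR):   # 4. poll until done
        dev.step()
    return dev.data_in                       # 5. retrieve data-in

dev = ToyDevice()
print(polled_write_read(dev, 0x41))  # 65
```

The two busy-wait loops are exactly the cost that interrupts (next slide) eliminate.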
Interrupts
- Avoids busy-waiting: the device interrupts the CPU when an I/O operation completes
- On interrupt:
  - Determine which device caused the interrupt
  - If the last command to the device was an input operation, retrieve the data from the register
  - Initiate the next operation for the device

DMA
- Ports ("programmed I/O"):
  - Fine for small amounts of data at low speed
  - Too expensive for large data transfers!
- Solution: Direct Memory Access (DMA)
  - Device & CPU communicate through RAM
  - Pointers to source, destination, & number of bytes
  - The DMA controller interrupts the CPU when the entire transfer is complete

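A minimal model of that interaction, with all names (`dma_transfer`, `on_complete`) invented for illustration: the CPU supplies source, destination, and byte count, and a completion callback stands in for the interrupt.

```python
# Toy model of a DMA transfer: the CPU programs source, destination,
# and byte count; the "controller" copies the data and then fires a
# callback standing in for the completion interrupt.

def dma_transfer(ram, src, dst, nbytes, on_complete):
    """One-shot DMA: copy nbytes within 'ram' (a bytearray), then interrupt."""
    ram[dst:dst + nbytes] = ram[src:src + nbytes]
    on_complete()   # completion interrupt

ram = bytearray(64)
ram[0:4] = b"disk"               # pretend a device deposited data here
done = []
dma_transfer(ram, src=0, dst=32, nbytes=4,
             on_complete=lambda: done.append(True))
print(bytes(ram[32:36]), done)   # b'disk' [True]
```

The point of the design is what is absent: no per-byte loop on the CPU between programming the transfer and taking the interrupt.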
I/O – Outline
- I/O hardware basics
- Services provided by the OS
- How the OS implements these services

Low-Level Device Characteristics
- Transfer unit: character, block
- Access method: sequential, random
- Timing: synchronous, asynchronous
- Sharing: dedicated, sharable (by multiple threads/processes)
- Speed: latency, seek time, transfer rate, delay
- I/O direction: read-only, write-only, read-write
- Examples: terminal, CD-ROM, modem, keyboard, tape, graphics controller

Application I/O Interface
- High-level abstractions hide device details:
  - Block devices (read, write, seek); also memory-mapped; the file abstraction
  - Character-stream devices (get, put): keyboards, printers, etc.
  - Network devices (socket connections)
- But how do we hide huge differences in latency?

Blocking & Non-Blocking I/O
- Blocking: wait until the operation completes
- Non-blocking: returns immediately (or after a timeout) with whatever data is available (none, some, or all); e.g., the select call
- Note: asynchronous I/O also returns immediately, but signals completion later via a callback or interrupt

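The select call mentioned above can be demonstrated directly on a POSIX pipe (this assumes a POSIX system; select on pipes does not work on Windows). A zero timeout turns it into a pure poll:

```python
# Non-blocking readiness check with select() on a POSIX pipe:
# a zero timeout polls instead of blocking.
import os
import select

r, w = os.pipe()

# Nothing written yet: select returns immediately with no ready fds.
ready, _, _ = select.select([r], [], [], 0)
print(ready)             # []

os.write(w, b"hello")    # now there is data to read
ready, _, _ = select.select([r], [], [], 0)
print(ready == [r])      # True
print(os.read(r, 5))     # b'hello'
os.close(r)
os.close(w)
```

With no timeout argument, the same call would block until some descriptor becomes ready, which is the usual way a server waits on many slow devices at once.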
I/O – Outline
- I/O hardware basics
- Services provided by the OS
- How the OS implements these services

OS Services
- Buffering
- Caching

I/O Buffering
- Buffer = memory area that stores data in transit between a device and the CPU
- A disk buffer stores a block when it is read from disk
- The block is transferred over the bus by the DMA controller into a buffer in physical memory
- The DMA controller interrupts the CPU when the transfer is complete

Why Buffering?
- Speed mismatches between device and CPU
  - Compute the contents of a display in a buffer (slow), then send the buffer to the screen (fast) = double buffering
- Different data transfer sizes
  - ftp brings a file over the network one packet at a time, but stores it to disk one block at a time
- Minimizes the time a user process blocks on a write
  - Write = copy data to a kernel buffer, then return to the user
  - The kernel writes from the buffer to disk at a later time

Caching
- Keep recently used disk blocks in main memory after the I/O call completes
- read(diskAddress): check memory first, then read from disk
- write(diskAddress): if the block is in memory, update it there; otherwise, read the block in and update it in place
- Provide "synchronization" operations to commit writes to disk: flush(), msync()

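A minimal sketch of such a cache, with a dict standing in for the disk (the `BlockCache` class and its method names are invented for illustration). It behaves as a write-back cache: writes stay in memory until flush() commits them, matching the flush()/msync() operations above.

```python
# Minimal write-back buffer cache over a fake disk (a dict of blocks).

class BlockCache:
    def __init__(self, disk):
        self.disk = disk       # backing store: {address: data}
        self.cache = {}        # in-memory copies of blocks
        self.dirty = set()     # blocks modified since the last flush

    def read(self, addr):
        if addr not in self.cache:       # miss: go to "disk"
            self.cache[addr] = self.disk[addr]
        return self.cache[addr]

    def write(self, addr, data):
        self.cache[addr] = data          # write-back: memory only...
        self.dirty.add(addr)

    def flush(self):                     # ...until flush() commits
        for addr in self.dirty:
            self.disk[addr] = self.cache[addr]
        self.dirty.clear()

disk = {7: b"old"}
bc = BlockCache(disk)
bc.write(7, b"new")
print(disk[7])   # b'old'  (disk not yet updated)
bc.flush()
print(disk[7])   # b'new'
```

Making write() also update self.disk immediately would turn this into the write-through policy on the next slide.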
Cache Write Policies
- Trade-off between speed & reliability
- Write-through: write to all levels of memory simultaneously (the memory containing the block & the disk); high reliability
- Write-back: write only to memory, committing to disk later

Full Example: Read Operation
1. User process requests a read from a device
2. OS checks if the data is in a buffer; if not, the OS tells the device driver to perform the input
3. Device driver tells the DMA controller what to do, then blocks
4. DMA controller transfers the data to a kernel buffer
5. DMA controller interrupts the CPU when the transfer is complete
6. OS transfers the data to the user process and places the process in the ready queue
7. The process continues at the point after the system call

I/O Summary
- I/O is expensive for several reasons:
  - Slow devices & slow communication links
  - Contention from multiple processes
  - Supported via system calls & interrupt handling (slow)
- Approaches to improving performance:
  - Caching reduces data copying
  - Large data transfers reduce interrupt frequency
  - DMA controllers offload computation from the CPU

Storage
- Goal: improve the performance of storage systems (i.e., the disk)
  - Scheduling
  - Interleaving
  - Read-ahead

Disk Operations
- Disks are SLOW! 1 disk seek ≈ 40,000,000 cycles
- Latency:
  - Seek: position the head over the track/cylinder
  - Rotational delay: time for the sector to rotate underneath the head
- Bandwidth:
  - Transfer time: move bytes from disk to memory

Calculating Disk Operation Time
- Read/write n bytes: I/O time = seek + rotational delay + transfer(n)
- We can't reduce transfer time; can we reduce latency?

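The formula can be made concrete with some assumed drive parameters (8 ms average seek, 7200 RPM, 100 MB/s transfer; these numbers are illustrative, not from the lecture). For small requests, latency dominates; transfer time only matters for large ones:

```python
# Worked instance of: I/O time = seek + rotational delay + transfer(n).
# All parameter values below are assumptions for illustration.

seek = 8e-3                   # 8 ms average seek (assumed)
rpm = 7200
rotational = (60 / rpm) / 2   # half a rotation on average, about 4.17 ms
bandwidth = 100e6             # 100 MB/s sustained transfer (assumed)

def io_time(nbytes):
    return seek + rotational + nbytes / bandwidth

print(round(io_time(4096) * 1e3, 2))       # 12.21 ms for a 4 KB block
print(round(io_time(4 * 2**20) * 1e3, 2))  # 54.11 ms for 4 MB
```

For the 4 KB read, over 99% of the time is seek plus rotation; this is why the following slides focus on reducing latency rather than transfer time.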
Reducing Latency
- Minimize seek time & rotational latency:
  - Smaller disks
  - Faster disks
  - Sector size trade-off
  - Layout
  - Scheduling

Disk Head Scheduling
- Change the order of disk accesses to reduce the length & number of seeks
- Algorithms:
  - FCFS
  - SSTF
  - SCAN, C-SCAN
  - LOOK, C-LOOK

FCFS
- First-come, first-served
- Example: disk tracks (65, 40, 18, 78), head at track 50
- Total seek time: 15 + 25 + 22 + 60 = 122

SSTF
- Shortest-seek-time-first (like SJF)
- Example: disk tracks (65, 40, 18, 78), head at track 50
- Total seek time: 10 + 22 + 47 + 13 = 92
- Is this optimal?

SCAN
- Always move back and forth across the disk (a.k.a. the elevator algorithm)
- Example: disk tracks (65, 40, 18, 78), head at track 50, moving forward
- Total seek time: 15 + 13 + 22 + 60 + 22 = 132
- When is this good?

C-SCAN
- Circular SCAN: go back to the start of the disk after reaching the end
- Example: disk tracks (65, 40, 18, 78), head at track 50, moving forward
- Total seek time: 15 + 13 + 22 + 100 + 18 + 22 = 190
- Can be expensive, but more fair

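The totals on the last few slides are easy to check by summing head movement along each service path. The SCAN and C-SCAN totals imply the disk spans tracks 0..100 (the 78 to 100 and 100 to 0 hops); that assumption is ours, read off from the arithmetic:

```python
# Check the seek totals for the request queue (65, 40, 18, 78) with
# the head at 50, assuming tracks 0..100 as the slide totals imply.

def total_seek(path):
    return sum(abs(b - a) for a, b in zip(path, path[1:]))

print(total_seek([50, 65, 40, 18, 78]))          # FCFS:   122
print(total_seek([50, 40, 18, 65, 78]))          # SSTF:    92
print(total_seek([50, 65, 78, 100, 40, 18]))     # SCAN:   132
print(total_seek([50, 65, 78, 100, 0, 18, 40]))  # C-SCAN: 190
```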
LOOK Variants
- Instead of going to the end of the disk, go only as far as the last request in each direction
- SCAN, C-SCAN → LOOK, C-LOOK

Exercises
- 5000 blocks (first = 0, last = 4999), head currently at 143, moving forward
- Request queue: (86, 1470, 913, 1774, 948, 1509, 1022, 1750, 130)
- Find the service sequence & total seek time for: FCFS, SSTF, SCAN, LOOK, C-SCAN, C-LOOK

Solutions
- FCFS: 143, 86, 1470, 913, 1774, 948, 1509, 1022, 1750, 130; distance: 7,081
- SSTF: 143, 130, 86, 913, 948, 1022, 1470, 1509, 1750, 1774; distance: 1,745
- SCAN: 143, 913, 948, 1022, 1470, 1509, 1750, 1774, 4999, 130, 86; distance: 9,769
- LOOK: 143, 913, 948, 1022, 1470, 1509, 1750, 1774, 130, 86; distance: 3,319
- C-SCAN: 143, 913, 948, 1022, 1470, 1509, 1750, 1774, 4999, 0, 86, 130; distance: 9,985
- C-LOOK: 143, 913, 948, 1022, 1470, 1509, 1750, 1774, 86, 130; distance: 3,363

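These answers can be re-derived with a short simulator (a sketch; the function names are ours). Each scheduler returns the service order, and total() sums the head movement:

```python
# Re-derive the exercise answers by simulating each scheduler.

def fcfs(head, reqs):
    return list(reqs)                         # serve in arrival order

def sstf(head, reqs):
    pending, order = list(reqs), []
    while pending:                            # always pick the closest track
        nxt = min(pending, key=lambda t: abs(t - head))
        pending.remove(nxt)
        order.append(nxt)
        head = nxt
    return order

def look(head, reqs):
    up = sorted(t for t in reqs if t >= head)              # sweep up...
    down = sorted((t for t in reqs if t < head), reverse=True)
    return up + down                                       # ...then back down

def scan(head, reqs, last=4999):
    return look(head, list(reqs) + [last])    # like LOOK, but visit the end

def c_look(head, reqs):
    up = sorted(t for t in reqs if t >= head)
    low = sorted(t for t in reqs if t < head)
    return up + low                           # jump to lowest, sweep up again

def c_scan(head, reqs, first=0, last=4999):
    up = sorted(t for t in reqs if t >= head)
    low = sorted(t for t in reqs if t < head)
    return up + [last, first] + low           # visit the end, wrap to block 0

def total(head, order):
    dist, pos = 0, head
    for t in order:
        dist += abs(t - pos)
        pos = t
    return dist

head, reqs = 143, [86, 1470, 913, 1774, 948, 1509, 1022, 1750, 130]
for name, order in [("FCFS", fcfs(head, reqs)), ("SSTF", sstf(head, reqs)),
                    ("SCAN", scan(head, reqs)), ("LOOK", look(head, reqs)),
                    ("C-SCAN", c_scan(head, reqs)), ("C-LOOK", c_look(head, reqs))]:
    print(name, total(head, order))
# FCFS 7081, SSTF 1745, SCAN 9769, LOOK 3319, C-SCAN 9985, C-LOOK 3363
```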
Storage – Outline
- Improving the performance of storage systems (i.e., the disk):
  - Scheduling
  - Interleaving
  - Read-ahead
- Tertiary storage

Disk Interleaving
- Contiguous allocation requires an OS response before the disk spins past the next block
- Instead of physically contiguous allocation, interleave blocks
- The interleave factor is chosen relative to OS speed and rotational speed

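One classic way to lay out an interleaved track, sketched under assumed parameters (8 slots per track, and the skip-occupied placement rule; real controllers varied): consecutive logical blocks are placed every k-th physical slot, giving the OS k-1 slot times to respond before the next block arrives.

```python
# Sketch of block interleaving: place logical block i every k-th
# physical slot, skipping slots that are already occupied.

def interleave(n_slots, k):
    """Return the physical layout: slot index -> logical block number."""
    slots, pos = [None] * n_slots, 0
    for block in range(n_slots):
        while slots[pos] is not None:   # skip occupied slots
            pos = (pos + 1) % n_slots
        slots[pos] = block
        pos = (pos + k) % n_slots       # leave k-1 slots of slack
    return slots

print(interleave(8, 1))  # [0, 1, 2, 3, 4, 5, 6, 7]  (contiguous)
print(interleave(8, 2))  # [0, 4, 1, 5, 2, 6, 3, 7]  (2:1 interleave)
```

With the 2:1 layout, reading logical blocks 0 and 1 in order never requires waiting a full rotation, as long as the OS can issue the next request within one slot time.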
Read-Ahead
- Reduce seeks by reading blocks from disk ahead of the user's request
- Place them in a buffer on the disk controller
- Similar to pre-paging
- Is it easier to predict future accesses on disk?

RAID
- One way to really improve throughput: add disks
  - Data spread across 100 disks = 100x improvement in bandwidth
  - But reliability drops by a factor of 100
- Solution: RAID (Redundant Array of Inexpensive Disks)
  - Replicate data, but use some disks/blocks for checking

RAID Levels
- RAID-1: mirroring
  - Just copy disks = 2x the disks, half of them for checking
- RAID-2: add error-correcting checks
  - Interleave disk blocks with ECC codes (parity, XOR)
  - 10 disks require 4 check disks
  - Same performance as level 1
- RAID-4: striping data
  - Spread blocks across disks
  - Improves read performance, but impairs writes
- RAID-5: striping data & check information
  - Removes the bottleneck on the check disks

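The XOR parity idea behind RAID-4/5 fits in a few lines: the check block is the XOR of the data blocks in a stripe, so any single lost block equals the XOR of the survivors. The block contents below are made up for illustration.

```python
# XOR parity as used in RAID-4/5 stripes: the parity block is the XOR
# of the data blocks, so one lost block can be rebuilt from the rest.

def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]   # one stripe across three data disks
parity = xor_blocks(data)            # stored on the check disk

lost = data[1]                                    # disk 1 fails...
rebuilt = xor_blocks([data[0], data[2], parity])  # ...rebuild from survivors
print(rebuilt == lost)   # True
```

This also shows why RAID-4 writes are expensive: updating one data block means recomputing and rewriting the parity block too, and in RAID-4 every such update hits the same check disk.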
Storage Summary
- The OS can improve storage system performance: scheduling, interleaving, read-ahead
- Adding intelligent hardware (RAID) improves performance & reliability