RAID
COP 5611 Advanced Operating Systems
Adapted from Andy Wang’s slides at FSU
Parallel Disk Access and RAID
A single disk can deliver data no faster than its maximum transfer rate
To get more data faster, fetch it from multiple disks simultaneously
Also amortizes rotational latency and seek time over larger transfers
Utilizing Disk Access Parallelism
Some parallelism is available just from having several disks, but not much
Instead of satisfying each access from one disk, use multiple disks for each access
Store part of each data block on several disks
Disk Parallelism Example
[diagram: file system dispatching open(foo), read(bar), and write(zoo) requests across several disks in parallel]
Data Striping
Transparently distributing data over multiple disks
Benefits:
+ Increases disk parallelism
+ Faster response for big requests
Major parameters: the number of disks and the size of the data interleaving unit (the striping unit)
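As a concrete illustration, here is a minimal sketch of round-robin striping, mapping a logical block number to a disk and an offset within that disk. NUM_DISKS and stripe_map are illustrative names, not part of any particular implementation.

#include <stdio.h>

#define NUM_DISKS 4   /* assumed array width, for illustration */

/* Round-robin striping: consecutive logical blocks go to
 * consecutive disks; the offset advances once per full stripe. */
static void stripe_map(long logical_block, int *disk, long *offset)
{
    *disk   = (int)(logical_block % NUM_DISKS);
    *offset = logical_block / NUM_DISKS;
}

int main(void)
{
    for (long b = 0; b < 8; b++) {
        int disk;
        long off;
        stripe_map(b, &disk, &off);
        printf("logical block %ld -> disk %d, offset %ld\n", b, disk, off);
    }
    return 0;
}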
Fine-Grained vs. Coarse-Grained Data Interleaving
Fine-grained data interleaving (see the sketch after this list):
+ High data rate for all requests
- But only one request served by the array at a time
- Lots of time spent positioning
Coarse-grained data interleaving:
+ Large requests access many disks
+ Many small requests handled at once
Small I/O requests access only a few disks
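To make the granularity tradeoff concrete, this small sketch (reusing the illustrative NUM_DISKS from above) counts how many disks a single request touches for a given stripe-unit size: a small unit spreads even a 4 KB read across every disk, while a large unit keeps it on one.

#include <stdio.h>

#define NUM_DISKS 4

/* Number of distinct disks touched by a request of `len` bytes
 * starting at byte `offset`, for a given stripe-unit size. */
static int disks_touched(long offset, long len, long stripe_unit)
{
    long first = offset / stripe_unit;
    long last  = (offset + len - 1) / stripe_unit;
    long units = last - first + 1;
    return units < NUM_DISKS ? (int)units : NUM_DISKS;
}

int main(void)
{
    /* Fine-grained (512 B stripe unit): a 4 KB read hits all disks. */
    printf("fine   (512 B unit): %d disks\n", disks_touched(0, 4096, 512));
    /* Coarse-grained (64 KB stripe unit): the same read hits one disk. */
    printf("coarse (64 KB unit): %d disks\n", disks_touched(0, 4096, 65536));
    return 0;
}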
Reliability of Disk Arrays
Without disk arrays, failure of one disk among N loses 1/Nth of the data
With disk arrays (fine-grained striping across all N disks), failure of one disk loses all the data
So an array of N disks is roughly 1/Nth as reliable as one disk
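The 1/N claim follows if disk failures are assumed independent and memoryless; a one-line derivation with illustrative numbers:

\[
\mathrm{MTTF}_{\text{array}} \approx \frac{\mathrm{MTTF}_{\text{disk}}}{N},
\qquad\text{e.g., } \frac{100{,}000\ \text{hours}}{100\ \text{disks}} = 1{,}000\ \text{hours} \approx 6\ \text{weeks}
\]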
Adding Reliability to Disk Arrays
Buy more reliable disks
Build redundancy into the disk array
Multiple levels of disk array redundancy are possible
Most redundant organizations can prevent any data loss from a single disk failure
Basic Reliability Mechanisms
Duplicate the data
Parity, for error detection
Error-correcting codes, for detection and correction
Parity Methods
A single parity bit detects any odd number of bit errors, but is typically used to detect a single error
If hardware errors are self-identifying (the failed disk is known), parity can also correct errors
When data is written, the parity must be written, too
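A minimal sketch of how XOR parity supports correction when the failure is self-identifying: the parity block is the XOR of the data blocks, and a lost block is recovered by XOR-ing the parity with the surviving blocks. The block count and size are illustrative.

#include <stdio.h>
#include <string.h>

#define NUM_DATA   3
#define BLOCK_SIZE 8   /* tiny blocks, for illustration only */

/* Parity block = XOR of all data blocks. */
static void compute_parity(unsigned char data[NUM_DATA][BLOCK_SIZE],
                           unsigned char parity[BLOCK_SIZE])
{
    memset(parity, 0, BLOCK_SIZE);
    for (int d = 0; d < NUM_DATA; d++)
        for (int i = 0; i < BLOCK_SIZE; i++)
            parity[i] ^= data[d][i];
}

/* Rebuild a lost block: XOR the parity with the surviving blocks. */
static void reconstruct(unsigned char data[NUM_DATA][BLOCK_SIZE],
                        unsigned char parity[BLOCK_SIZE],
                        int lost, unsigned char out[BLOCK_SIZE])
{
    memcpy(out, parity, BLOCK_SIZE);
    for (int d = 0; d < NUM_DATA; d++)
        if (d != lost)
            for (int i = 0; i < BLOCK_SIZE; i++)
                out[i] ^= data[d][i];
}

int main(void)
{
    unsigned char data[NUM_DATA][BLOCK_SIZE] = {
        "block0!", "block1!", "block2!"
    };
    unsigned char parity[BLOCK_SIZE], rebuilt[BLOCK_SIZE];

    compute_parity(data, parity);
    reconstruct(data, parity, 1, rebuilt);  /* pretend disk 1 failed */
    printf("rebuilt: %s\n", (char *)rebuilt);  /* prints "block1!" */
    return 0;
}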
Error-Correcting Codes
Mostly based on Hamming codes
Not only detect an error, but identify which bit is wrong
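For intuition, a compact Hamming(7,4) sketch (purely illustrative; real disk-array ECC uses larger codes): for a valid codeword, the XOR of the positions of its set bits is zero, and after a single bit flip that XOR names the flipped position.

#include <stdio.h>

/* Encode 4 data bits into a 7-bit Hamming(7,4) codeword.
 * Bit i of the return value is codeword position i+1.
 * Parity bits sit at positions 1, 2, 4; data at 3, 5, 6, 7. */
static unsigned encode(unsigned d /* 4 bits */)
{
    unsigned bit[8] = {0};
    bit[3] = (d >> 0) & 1;
    bit[5] = (d >> 1) & 1;
    bit[6] = (d >> 2) & 1;
    bit[7] = (d >> 3) & 1;
    bit[1] = bit[3] ^ bit[5] ^ bit[7];
    bit[2] = bit[3] ^ bit[6] ^ bit[7];
    bit[4] = bit[5] ^ bit[6] ^ bit[7];
    unsigned code = 0;
    for (int i = 1; i <= 7; i++)
        code |= bit[i] << (i - 1);
    return code;
}

/* Return the 1-based position of a single-bit error, or 0 if none. */
static unsigned syndrome(unsigned code)
{
    unsigned s = 0;
    for (int i = 1; i <= 7; i++)
        if ((code >> (i - 1)) & 1)
            s ^= i;   /* XOR of the positions of the set bits */
    return s;
}

int main(void)
{
    unsigned code = encode(0xB);       /* data bits 1011 */
    unsigned bad  = code ^ (1u << 4);  /* flip codeword position 5 */
    printf("error at position %u\n", syndrome(bad));  /* prints 5 */
    return 0;
}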
RAID Architectures
Redundant Arrays of Independent Disks
Basic architectures for organizing disks into arrays
Assuming independent control of each disk
A standard classification scheme divides the architectures into levels
Non-Redundant Disk Arrays (RAID Level 0)
No redundancy at all: plain striping, as just described
- Any single disk failure causes data loss
Non-Redundant Disk Array Diagram (RAID Level 0)
[diagram: file system issuing open(foo), read(bar), and write(zoo) to a striped, non-redundant array]
Mirrored Disks (RAID Level 1)
Each disk has a second disk that mirrors its contents
Writes go to both disks
No data striping
+ Reliability: survives failure of either disk in a pair
+ Reads are faster (either copy can serve them)
- Writes are slower (both copies must be updated)
- Expensive and space-inefficient
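A toy sketch of the mirroring policy above, with in-memory arrays standing in for the two disks: writes go to both copies, and a read may be served by whichever copy is less busy (one plausible policy among several; real arrays differ).

#include <stdio.h>

#define NBLOCKS 16

/* Toy in-memory "disks" standing in for a mirrored pair. */
static int disk_a[NBLOCKS], disk_b[NBLOCKS];
static int pending_a = 0, pending_b = 0;  /* queued requests per disk */

/* RAID 1 write: the data must go to both copies. */
static void mirror_write(int block, int value)
{
    disk_a[block] = value;
    disk_b[block] = value;
}

/* RAID 1 read: either copy will do; pick the less loaded disk. */
static int mirror_read(int block)
{
    if (pending_a <= pending_b) { pending_a++; return disk_a[block]; }
    else                        { pending_b++; return disk_b[block]; }
}

int main(void)
{
    mirror_write(3, 42);
    printf("block 3 = %d\n", mirror_read(3));
    return 0;
}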
Mirrored Disk Diagram (RAID Level 1)
[diagram: file system issuing open(foo), read(bar), and write(zoo); each write goes to both disks of a mirrored pair]
Memory-Style ECC (RAID Level 2)
Some disks in the array are used to hold ECC
E.g., 4 data disks require 3 ECC disks (Hamming code)
+ More space-efficient than mirroring
+ Can correct, not just detect, errors
- Still fairly inefficient
Memory-Style ECC Diagram (RAID Level 2)
[diagram: file system issuing open(foo), read(bar), and write(zoo) to data disks plus ECC disks]
Bit-Interleaved Parity (RAID Level 3)
Each disk stores one bit of each data block
One disk in the array stores parity for the other disks
+ More efficient than Levels 1 and 2
- Parity disk doesn’t add bandwidth
- Can detect errors, but not correct them on its own
Bit-Interleaved RAID Diagram (Level 3)
[diagram: file system issuing open(foo), read(bar), and write(zoo) to bit-interleaved data disks plus one parity disk]
Block-Interleaved Parity (RAID Level 4)
Like bit-interleaved, but data is interleaved in blocks of arbitrary size
The block size is called the striping unit
Small read requests use only one disk
+ More efficient data access than Level 3
+ Satisfies many small requests at once
- Parity disk can be a bottleneck
- Small writes require 4 I/Os (see the sketch below)
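The 4-I/O small write comes from the parity update identity: new parity = old parity XOR old data XOR new data, so the array must read the old data and old parity before writing the new data and new parity. A minimal sketch, with single-byte "blocks" and illustrative names:

#include <stdio.h>

/* Toy single-byte "blocks" standing in for disk contents. */
static unsigned char data_disk = 0x5A, parity_disk = 0x33;

/* RAID 4/5 small write: 2 reads + 2 writes. */
static void small_write(unsigned char new_data)
{
    unsigned char old_data   = data_disk;    /* I/O 1: read old data   */
    unsigned char old_parity = parity_disk;  /* I/O 2: read old parity */

    /* new parity = old parity XOR old data XOR new data */
    unsigned char new_parity = old_parity ^ old_data ^ new_data;

    data_disk   = new_data;    /* I/O 3: write new data   */
    parity_disk = new_parity;  /* I/O 4: write new parity */
}

int main(void)
{
    small_write(0xC7);
    printf("data=%02X parity=%02X\n", data_disk, parity_disk);
    return 0;
}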
Block-Interleaved Parity Diagram (RAID Level 4)
[diagram: file system issuing open(foo), read(bar), and write(zoo) to block-interleaved data disks plus a dedicated parity disk]
Block-Interleaved Distributed-Parity (RAID Level 5)
The most general (and most widely used) of these RAID levels
Spread the parity out over all the disks
+ No parity-disk bottleneck
+ All disks contribute read bandwidth
- Still requires 4 I/Os for small writes
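One way to spread the parity is to rotate it stripe by stripe; a sketch of such a layout (the simple round-robin rotation and the names here are illustrative assumptions; real arrays use layouts such as left-symmetric):

#include <stdio.h>

#define NUM_DISKS 5

/* RAID 5: the parity block's disk rotates from stripe to stripe. */
static int parity_disk(long stripe)
{
    return (int)(stripe % NUM_DISKS);
}

/* Map the k-th data block of a stripe (k in 0..NUM_DISKS-2) to a
 * disk, skipping the slot occupied by parity. */
static int data_disk(long stripe, int k)
{
    int p = parity_disk(stripe);
    return (p + 1 + k) % NUM_DISKS;
}

int main(void)
{
    for (long s = 0; s < 5; s++) {
        printf("stripe %ld: parity on disk %d, data on", s, parity_disk(s));
        for (int k = 0; k < NUM_DISKS - 1; k++)
            printf(" %d", data_disk(s, k));
        printf("\n");
    }
    return 0;
}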
Block-Interleaved Distributed-Parity Diagram (RAID Level 5)
[diagram: file system issuing open(foo), read(bar), and write(zoo) to an array with parity rotated across all disks]
Where Did RAID Look for Performance Improvements?
Parallel use of disks
Improve overall delivered bandwidth by fetching data from multiple disks at once
The biggest remaining problem is small-write performance
But we know how to deal with small writes...