1
Eng. Mohammed Timraz, Electronics & Communication Engineer
University of Palestine, Faculty of Engineering and Urban Planning, Software Engineering Department
Computer System Architecture ESGD2204
Saturday, 23rd April 2010
Chapter 8, Lecture 14
2
Chapter 7 Storage Systems
3
Outline
– Magnetic Disks
– RAID
– Advanced Dependability/Reliability/Availability
– I/O Benchmarks, Performance and Dependability
– Conclusion
4
Storage Hierarchy: Third Level - I/O (Disks)
5
I/O (Disk) Performance
6
Disk Parameters
7
Disk Performance
8
Disk Performance Example
9
Disk Performance: Queuing Theory
10
Extensions to Conventional Disks
11
More Extensions to Conventional Disks
12
Disk Figure of Merit: Areal Density
Bits recorded along a track
– Metric is Bits Per Inch (BPI)
Number of tracks per surface
– Metric is Tracks Per Inch (TPI)
Disk designs brag about bit density per unit area
– Metric is Bits Per Square Inch: Areal Density = BPI x TPI
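(Not part of the original slide.) A minimal Python sketch of the areal-density formula above; the BPI and TPI values are hypothetical, chosen only to show the arithmetic:

    # Minimal sketch: areal density (bits per square inch) = BPI x TPI.
    def areal_density(bpi: float, tpi: float) -> float:
        return bpi * tpi

    # Hypothetical figures, not real product specs:
    print(areal_density(bpi=500_000, tpi=300_000))  # 1.5e11 bits/sq. in.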
13
Historical Perspective
1956 IBM Ramac - 1963 IBM Dual CPU Disk - early 1970s Winchester
– Developed for mainframe computers, proprietary interfaces, ’63.25MB 50 in. platter
– Steady shrink in form factor: 27 in. to 14 in.
Form factor and capacity drive the market more than performance
1970s developments
– 8 inch => 5.25 inch floppy-disk form factor (microcode into mainframe)
– Emergence of industry-standard disk interfaces
Early 1980s: PCs and first-generation workstations
Mid 1980s: Client/server computing
– Centralized storage on file server
  » accelerates hard disk downsizing: 8 inch to 5.25 inch
– Mass-market disk drives become a reality
  » industry standards: SCSI, IPI (Intelligent Peripheral Interface), IDE
  » 5.25 inch to 3.5 inch drives for PCs; end of proprietary interfaces
1990s: Laptops => 2.5 inch drives
2000s: What new devices are leading to new drives? (iPhones)
14
Case for Storage
Shift in focus from computation to communication and storage of information
– E.g., Cray Research/Thinking Machines vs. Google/Yahoo
– “The Computing Revolution” (1960s to 1980s) vs. “The Information Age” (1990 to today)
Storage emphasizes reliability and scalability as well as cost-performance
What is the “software king” that determines which HW features are actually used?
– Operating System for storage
– Compiler for processor
Storage also has its own performance theory, queuing theory, which balances throughput vs. response time
15
Use Arrays of Small Disks?
Randy Katz and Dave Patterson asked in 1987: can smaller disks be used to close the gap in performance between disks and CPUs?
[Figure: the conventional approach uses 4 disk designs (14”, 10”, 5.25”, 3.5”) spanning low end to high end; a disk array uses 1 disk design (3.5”)]
16
Replace Small Number of Large Disks with Large Number of Small Disks! (1988 Disks)

              IBM 3390K      IBM 3.5" 0061    x70 => RAID
  Capacity    20 GBytes      320 MBytes       23 GBytes
  Volume      97 cu. ft.     0.1 cu. ft.      11 cu. ft.     (9X better)
  Power       3 KW           11 W             1 KW           (3X better)
  Data Rate   15 MB/s        1.5 MB/s         120 MB/s       (8X better)
  I/O Rate    600 I/Os/s     55 I/Os/s        3900 I/Os/s    (6X better)
  MTTF        250 KHrs       50 KHrs          ??? Hrs
  Cost        $250K          $2K              $150K          (1.7X better)

Disk arrays have the potential for large data and I/O rates, high MB per cu. ft., and high MB per KW, but what about reliability?
17
Array Reliability
Reliability of N disks = Reliability of 1 disk ÷ N
– 50,000 hours ÷ 70 disks = 700 hours (a year is roughly 9K hours)
– Disk system MTTF drops from 6 years to 1 month!
Arrays (without redundancy) are too unreliable to be useful!
Hot spares support reconstruction in parallel with access: very high media availability can be achieved
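(Not part of the original slide.) A minimal sketch of the arithmetic above, using the slide's own simplification that an unprotected array's MTTF is the single-disk MTTF divided by the number of disks:

    # Slide's simplification: with independent failures and no redundancy,
    # array MTTF is roughly disk MTTF / N.
    def array_mttf(disk_mttf_hours: float, n_disks: int) -> float:
        return disk_mttf_hours / n_disks

    mttf = array_mttf(50_000, 70)     # ~714 hours
    print(mttf, mttf / (24 * 30))     # roughly one month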
18
Redundant Arrays of Inexpensive Disks (RAID)
Files are "striped" across multiple disks
Redundancy yields high data availability
– Availability: service still provided to user, even if some components have failed
Disks will still fail
– Contents are reconstructed from data redundantly stored in the array
– Capacity penalty to store redundant info
– Bandwidth penalty to update redundant info
19
RAIDs: Disk Arrays (Redundant Array of Inexpensive Disks)
Arrays of small and inexpensive disks
– Increase potential throughput by having many disk drives
– Data is spread over multiple disks
– Multiple accesses are made to several disks at a time
Reliability is lower than that of a single disk
But availability can be improved by adding redundant disks (RAID)
– Lost information can be reconstructed from redundant information
– MTTR: mean time to repair is on the order of hours
– MTTF: mean time to failure of disks is tens of years
20
RAID: Level 0 (No Redundancy; Striping)
Multiple smaller disks as opposed to one big disk
– Spreading sectors over multiple disks (striping) means that multiple blocks can be accessed in parallel, increasing the performance
  - A 4-disk system gives four times the throughput of a 1-disk system
– Same cost as one big disk, assuming 4 small disks cost the same as one big disk
No redundancy, so what if one disk fails?
– Failure of one or more disks is more likely as the number of disks in the system increases
[Figure: sectors sec1–sec4 spread across four disks; sector 1 striped as blocks sec1,b0–sec1,b3]
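(Not part of the original slide.) A minimal sketch of block-level striping: logical block i goes to disk (i mod N) at stripe row (i div N). The mapping rule is an illustrative assumption, not a spec from the slides:

    # RAID 0 striping sketch: map a logical block to a (disk, offset) pair.
    def raid0_locate(logical_block: int, n_disks: int):
        disk = logical_block % n_disks      # which disk holds the block
        offset = logical_block // n_disks   # stripe row on that disk
        return disk, offset

    for blk in range(8):                    # with 4 disks, blocks 0-3 form one stripe
        print(blk, raid0_locate(blk, 4))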
21
RAID: Level 1 (Redundancy via Mirroring)
Uses twice as many disks as RAID 0 (e.g., 8 smaller disks, with the second set of 4 duplicating the first set), so there are always two copies of the data
– # redundant disks = # of data disks, so twice the cost of one big disk
  - Writes have to be made to both sets of disks, so writes will be only 1/2 the performance of RAID 0
What if one disk fails?
– If a disk fails, the system just goes to the “mirror” for the data
[Figure: sectors sec1–sec4 on four data disks, duplicated on four redundant (check) disks]
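(Not part of the original slide.) A minimal sketch of mirroring: every write touches both copies, and a read can be served from either copy, including the surviving one after a failure. The class and method names are made up for illustration:

    # RAID 1 mirroring sketch: two complete copies of the data.
    class Raid1:
        def __init__(self, n_blocks):
            self.primary = [None] * n_blocks
            self.mirror = [None] * n_blocks

        def write(self, block, data):
            self.primary[block] = data   # both copies must be updated,
            self.mirror[block] = data    # so write bandwidth is halved vs. RAID 0

        def read(self, block, primary_failed=False):
            return self.mirror[block] if primary_failed else self.primary[block]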
22
RAID: Level 0+1 (Striping with Mirroring)
Combines the best of RAID 0 and RAID 1: data is striped across four disks and mirrored to four disks, called “a mirror of stripes” or RAID 01, but may be marketed as “RAID 10”
– + Four times the throughput (due to striping)
– - # redundant disks = # of data disks, so 2X the cost of one big disk
– - Writes have to be made to both sets of disks, so writes will be only 1/2 the performance of RAID 0
What if one disk fails?
– + If a disk fails, the system just goes to the “mirror” for the data
[Figure: blocks blk1–blk4 striped across four disks and mirrored on four redundant (check) disks]
23
RAID: Level 2 (Redundancy via Hamming ECC)
ECC (Hamming Error Correcting Code) disks contain the even parity of the data on a set of distinct, overlapping disks
– # ECC redundant disks = log2(total # of data + ECC disks), so almost twice the cost of one big disk if a small # of data disks is used
  - Writes require computing parity to write to “half” the ECC disks
  - Reads require reading “half” the ECC disks and confirming parity
Can tolerate limited disk failure, since the data can be reconstructed; this was the first correction method, and it is no longer used
[Figure: disk positions 1–7 holding data blocks sec1,b0–sec1,b3 and three ECC disks; ECC disk 4 checks positions 4,5,6,7; ECC disk 2 checks 2,3,6,7; ECC disk 1 checks 1,3,5,7. Why it works: each position's binary index (001 … 111) tells which ECC disks cover it, so if ECC disks 4 and 2 point to either data disk 6 or 7 as being bad, but ECC disk 1 says disk 7 is okay, then disk 6 must be in error]
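(Not part of the original slide.) A minimal sketch of the RAID 2 idea using the slide's grouping (check positions 1, 2 and 4 cover positions 1,3,5,7 / 2,3,6,7 / 4,5,6,7): the positions of the failing checks add up to the index of the bad disk. The stored bit values are made up for illustration:

    # Even parity over overlapping groups of positions 1..7 (Hamming code).
    GROUPS = {4: [4, 5, 6, 7], 2: [2, 3, 6, 7], 1: [1, 3, 5, 7]}

    def syndrome(bits):
        # Return the position of the single bad disk (0 if all parities hold).
        bad = 0
        for check, members in GROUPS.items():
            if sum(bits[m] for m in members) % 2 != 0:   # even parity violated
                bad += check                             # add this check's weight
        return bad

    bits = {1: 0, 2: 1, 3: 1, 4: 0, 5: 0, 6: 1, 7: 1}    # a valid codeword
    assert syndrome(bits) == 0
    bits[6] ^= 1                                          # simulate disk 6 failing
    print(syndrome(bits))                                 # -> 6, the bad disk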
24
RAID: Level 3 (Bit-Interleaved Parity)
Cost of higher availability is reduced to 1/N, where N is the number of disks in a protection group
– # redundant disks = 1 × # of protection groups
  - Writes require writing the new data to the data disk as well as computing the parity, meaning reading the other disks, so that the parity disk can be updated
  - After a failure, reads require reading all the operational data disks as well as the parity disk to calculate the missing data that was stored on the failed disk
Can tolerate limited (single) disk failure, since the data can be reconstructed
[Figure: bits sec1,b0–sec1,b3 (values 1 0 0 1) spread bit-by-bit across four data disks, plus an (odd) bit-parity disk]
25
RAID: Level 3 (Bit-Interleaved Parity): handling a disk failure
[Figure: same layout as the previous slide, but one data disk fails; its bit is recalculated from the remaining data bits and the (odd) bit-parity disk]
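(Not part of the original slide.) A minimal sketch of parity-based reconstruction; it uses the usual XOR (even-parity) formulation, whereas the slide's figure happens to show odd parity. The block values are made up:

    # The parity disk stores the XOR of the data disks, so any single missing
    # disk can be rebuilt by XOR-ing everything that survives.
    from functools import reduce

    def parity(blocks):
        return reduce(lambda a, b: a ^ b, blocks)

    data = [0b1011, 0b0110, 0b1110, 0b0001]   # four data disks (toy values)
    p = parity(data)                          # contents of the parity disk

    failed = 2                                # pretend data disk 2 is lost
    survivors = [d for i, d in enumerate(data) if i != failed]
    assert parity(survivors + [p]) == data[failed]   # recovered the lost block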
26
RAID: Level 4 (Block-Interleaved Parity)
Cost of higher availability is still only 1/N, but the parity is stored as blocks associated with sets of data blocks
– Four times the throughput (striping)
– # redundant disks = 1 × # of protection groups
– Supports “small reads” and “small writes” (reads and writes that go to just one, or a few, data disks in a protection group)
  - By watching which bits change when writing new information, we need only change the corresponding bits on the parity disk
  - The parity disk must be updated on every write, so it is a bottleneck for back-to-back writes
Can tolerate limited (single) disk failure, since the data can be reconstructed
[Figure: sectors sec1–sec4 on four data disks plus one block-parity disk]
27
Small Writes
RAID 3 writes: writing new D1 data means reading D2, D3 and D4, then writing the new D1 and the recomputed parity P
– 3 reads and 2 writes, involving all the disks
RAID 4 small writes: writing new D1 data means reading the old D1 and the old P, then writing the new D1 and the new P
– 2 reads and 2 writes, involving just two disks
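(Not part of the original slides.) A minimal sketch of the RAID 4/5 small-write parity update: new parity = old parity XOR old data XOR new data, which is why only the target data disk and the parity disk are touched (2 reads and 2 writes). Values are made up:

    def small_write_parity(old_parity, old_data, new_data):
        return old_parity ^ old_data ^ new_data

    d = [0b1011, 0b0110, 0b1110, 0b0001]        # toy data blocks D1..D4
    p = d[0] ^ d[1] ^ d[2] ^ d[3]               # full-stripe parity

    new_d1 = 0b0101                             # overwrite D1
    p = small_write_parity(p, d[0], new_d1)     # read old D1 and old P ...
    d[0] = new_d1                               # ... write new D1 and new P
    assert p == d[0] ^ d[1] ^ d[2] ^ d[3]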
28
RAID: Level 5 (Distributed Block-Interleaved Parity)
Cost of higher availability is still only 1/N, but any single parity block may be located on any of the disks, so there is no single bottleneck for writes
– Still four times the throughput (striping)
– # redundant disks = 1 × # of protection groups
– Supports “small reads” and “small writes” (reads and writes that go to just one, or a few, data disks in a protection group)
– Allows multiple simultaneous writes as long as the accompanying parity blocks are not located on the same disk
Can tolerate limited (single) disk failure, since the data can be reconstructed
[Figure: five disks; for each stripe, one of them is assigned as the block-parity disk]
29
Distributing Parity Blocks
By distributing parity blocks across all the disks, some small writes can be performed in parallel using RAID 5

RAID 4 (single parity disk, a bottleneck for every write):
   1   2   3   4   P0
   5   6   7   8   P1
   9  10  11  12   P2
  13  14  15  16   P3

RAID 5 (parity rotated across the disks, so independent small writes can be done in parallel over time):
   1   2   3   4   P0
   5   6   7   P1   8
   9  10   P2  11  12
  13   P3  14  15  16
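(Not part of the original slide.) A minimal sketch of a rotating parity placement that reproduces the RAID 5 layout above; the exact rotation rule is an illustrative assumption, since the slide only shows that parity moves from disk to disk:

    # Stripe i places its parity on disk (n_disks - 1 - i) mod n_disks and the
    # data blocks on the remaining disks.
    def raid5_stripe(stripe, n_disks, first_block):
        parity_disk = (n_disks - 1 - stripe) % n_disks
        row, blk = [], first_block
        for disk in range(n_disks):
            if disk == parity_disk:
                row.append("P%d" % stripe)
            else:
                row.append(str(blk))
                blk += 1
        return row

    blk = 1
    for stripe in range(4):              # prints the RAID 5 layout shown above
        print(raid5_stripe(stripe, 5, blk))
        blk += 4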