Module: Storage Systems


1 Module: Storage Systems
ECE 4100/6100 Fall 2004

2 Reading
Storage technologies and trends: Section 7.2
Interfacing storage devices to the CPU: Section 7.3
Redundant arrays of inexpensive disks (RAID): Section 7.5, a very good reference for an introductory explanation of the basics of RAID systems

3 Motivation: Who Cares About I/O?
CPU performance has been doubling every 18 months
I/O system performance grows much more slowly, limited by the speed of mechanical components; disk seek speeds have improved on the order of only 10%/yr
Amdahl's Law: system speed-up is limited by the slowest part! Time spent in I/O determines application speedup: if a fraction s of execution time is spent in I/O, speedup is limited to 1/s (see the sketch below)
Faster CPUs therefore do not lead to corresponding reductions in execution time
The ancestor of Java had no I/O
CPU vs. peripheral; primary vs. secondary storage
What makes portable devices and PDAs exciting?
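A quick numerical check of the Amdahl's Law limit above; this is a minimal sketch, and the 10% I/O fraction is illustrative rather than from the slides:

```python
# Amdahl's Law: if a fraction s of execution time is spent in I/O and
# only the remaining CPU portion is sped up, overall speedup is bounded
# by 1/s no matter how fast the CPU part becomes.
def overall_speedup(io_fraction, cpu_speedup):
    """Whole-program speedup when only the non-I/O part is sped up."""
    return 1.0 / (io_fraction + (1.0 - io_fraction) / cpu_speedup)

s = 0.10  # illustrative: 10% of execution time spent in I/O
for k in (2, 10, 100, 1_000_000):
    print(f"CPU speedup {k:>9,}x -> overall {overall_speedup(s, k):.2f}x")
# Output converges to 1/s = 10x: the I/O fraction caps the benefit.
```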

4 The I/O Subsystem
[Diagram: a processor with L1 and L2 caches attached to the processor-memory bus along with main memory; a bus bridge connects the processor-memory bus to the I/O bus, which hosts an I/O controller, graphics, and a network interface]

5 Storage Technology Drivers
Continued migration of data into electronic form
- Businesses: financial, operational, marketing
- Personal: consumer electronics, home computing
Dependability matters more than additional performance: we cannot lose data!

6 Price/Capacity
The rate of price decline has accelerated in the recent past
Disk capacity has increased by a factor of 4000 in 18 years
Smaller disks are playing a bigger role: it is more efficient to spin platters of smaller mass and smaller diameter

7 Price/Gigabyte
Price per gigabyte has dropped by a factor of over 10,000 since 1983, making personal storage applications affordable!
[Chart: price per gigabyte over time across media technologies]

8 Cost/Access Time
There is a gap of several orders of magnitude in cost and access time between disk and semiconductor memory
The performance gain of semiconductor memory comes at a relative cost of roughly a factor of 100 more per gigabyte

9 Disk Drive Terminology
[Diagram: platters, read/write heads, arms, and the actuator; sectors along a track]
Data is recorded on concentric tracks on both sides of each platter
Tracks are organized as fixed-size (in bytes) sectors
Corresponding tracks on all platters form a cylinder
Data is addressed by three coordinates: cylinder, platter (head), and sector (see the sketch below)
The actuator moves (seeks) the correct read/write head over the correct sector, under the control of the disk controller
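To make the three-coordinate addressing concrete, here is a minimal sketch of the classic CHS-to-LBA mapping; the geometry numbers are illustrative, not taken from any specific drive:

```python
def chs_to_lba(cylinder, head, sector, heads_per_cylinder, sectors_per_track):
    """Map cylinder/head/sector coordinates to a logical block address.
    By convention sectors are numbered from 1; cylinders and heads from 0."""
    return ((cylinder * heads_per_cylinder + head) * sectors_per_track
            + (sector - 1))

# Illustrative geometry: 16 heads per cylinder, 63 sectors per track.
print(chs_to_lba(cylinder=2, head=3, sector=1,
                 heads_per_cylinder=16, sectors_per_track=63))  # -> 2205
```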

10 Disk Performance
[Diagram: head, arm, actuator, and platters]
Disk latency = controller overhead + seek time + rotational delay + transfer delay
Seek time and rotational delay are limited by the mechanical parts

11 Disk Performance
Seek time is determined by the current position of the head (i.e., which track it is covering) and its target position
Average rotational delay is the time for 0.5 revolutions
Transfer rate is a function of bit density
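Putting the latency equation from the previous slide together, a sketch with illustrative numbers (7,200 RPM, 8.5 ms average seek, 50 MB/s transfer rate, 0.2 ms controller overhead; none of these are taken from the slides):

```python
RPM = 7200                # rotational speed
avg_seek_ms = 8.5         # illustrative average seek time
transfer_mb_s = 50.0      # illustrative sustained transfer rate
controller_ms = 0.2       # illustrative controller overhead

rotation_ms = 60_000 / RPM            # one full revolution: ~8.33 ms
avg_rotational_ms = rotation_ms / 2   # wait half a turn on average: ~4.17 ms
transfer_ms = 4096 / (transfer_mb_s * 1e6) * 1000  # a 4 KB block: ~0.08 ms

latency_ms = controller_ms + avg_seek_ms + avg_rotational_ms + transfer_ms
print(f"average access time for a 4 KB read: {latency_ms:.2f} ms")
# Seek + rotation dominate: the mechanical parts set the floor.
```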

12 Areal Density
Linear density along a track is measured in bits per inch (BPI)
Track density is measured in tracks per inch (TPI)
Areal density is bit density per unit area, i.e., BPI x TPI (e.g., 500,000 BPI x 100,000 TPI = 50 Gbits/sq in)

13 Physical Layout
Older designs had the same number of sectors on all tracks, so outer tracks had lower bit density
Constant bit density recording: outer tracks have more sectors than inner tracks
In practice bit density is not quite constant, with inner tracks having a higher recording density

14 Disk Performance Model/Trends
Capacity: growth has more recently accelerated to 100%/year
Transfer rate: 40%/yr
Rotation + seek time: closer to 10%/yr
MB/$: see Figure 7.4
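These growth rates compound to very different places; a quick sketch of the widening gap, using the rates above over an illustrative ten-year span:

```python
years = 10
growth_rates = {
    "capacity":      2.00,  # 100%/yr, i.e., doubling every year
    "transfer rate": 1.40,  # 40%/yr
    "seek/rotation": 1.10,  # ~10%/yr
}

for name, g in growth_rates.items():
    print(f"{name:>14}: {g ** years:7.1f}x after {years} years")
# capacity: 1024.0x, transfer rate: 28.9x, seek/rotation: 2.6x --
# mechanical latency falls ever further behind capacity and bandwidth.
```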

15 State of the Art: Barracuda 7200.7
200 GB, 3.5 inch disk
16,383 cylinders (default)
7,200 RPM
8.5 ms average seek time
100 MB/s max external transfer rate
47 down to 26 MB/s sustained transfer rate (outer to inner tracks)
Recording density 671,500 BPI
7.5 W (idle) to 12.5 W (seek)
Source: Seagate product documentation

16 Microdrives
Hitachi Microdrive, 2004:
- 56.5 Gbits/sq in
- 3,600 RPM
- 8.3 ms average seek
- 4.2 to 7.3 Mbits/sec data rate
2006, IBM (speculation)?
- 6 GB for 1-inch drives
- 100 Gbits/sq in

17 Disk Drive Performance
Published seek times assume uniformly distributed requests; in reality the layout is optimized, particularly when large data sets are involved
Published data rates include the reading of error-correction bits, so "actual" data rates are lower

18 Improving Disk I/O Performance
Single-disk performance is limited by the drive mechanics
Multiple disks operating together can present a single, faster, larger logical disk that offers:
- higher aggregate data transfer rates on large data sets
- higher aggregate I/O request service rates on small accesses distributed across the disks

19 Arrays of Inexpensive Disks: Throughput
[Diagram: a CPU read request satisfied by Blocks 0-3 striped across four disks]
Data is striped across all disks
The visible performance overhead of the drive mechanics is amortized across multiple accesses
Scientific workloads are well suited to such organizations

20 Arrays of Inexpensive Disks: Request Rate
[Diagram: multiple CPU read requests directed at different disks]
Consider multiple read requests for small blocks of data
Several I/O requests can be serviced concurrently

21 Reliability of Disk Arrays
The reliability of an array of N disks is lower than the reliability of a single disk: any single disk failure causes the array to fail, so the array is N times more likely to fail (see the sketch below)
Use redundant disks and redundant information to recover from failures, similar to the use of error-correcting codes
Overhead: bandwidth and cost
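A minimal sketch of the "N times more likely to fail" claim in terms of mean time to failure (MTTF); the single-disk MTTF figure is illustrative:

```python
disk_mttf_hours = 500_000   # illustrative single-disk MTTF
N = 8                       # number of disks in the array

# With no redundancy the first disk failure kills the array, so the
# array's MTTF is roughly the single-disk MTTF divided by N.
array_mttf_hours = disk_mttf_hours / N
print(f"array MTTF: {array_mttf_hours:,.0f} hours "
      f"(~{array_mttf_hours / 8760:.1f} years, vs. "
      f"~{disk_mttf_hours / 8760:.0f} years for one disk)")
```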

22 Redundant Arrays of Small Disks (RAID)
What size disks should we use? Smaller disks have the advantages of:
- lower cost/MB
- higher MB/volume
- higher MB/watt
Classic paper: R. Katz, G. Gibson, and D. Patterson, "Disk System Architectures for High Performance Computing," Proceedings of the IEEE, vol. 77, no. 12, December 1989

23 RAID Level 0
[Diagram: blocks 0-7 striped across the disks]
RAID 0 corresponds to the use of striping with no redundancy
It provides the highest performance but the lowest reliability
Frequently used in scientific and supercomputing applications where data throughput is important
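A minimal sketch of the striping address arithmetic under block-level striping; the function name and geometry are illustrative:

```python
def locate_block(logical_block, num_disks):
    """Map a logical block number to (disk, offset) under RAID 0 striping."""
    disk = logical_block % num_disks
    offset = logical_block // num_disks  # stripe index within that disk
    return disk, offset

# With 4 disks, blocks 0..7 land as in the diagram above:
for b in range(8):
    disk, stripe = locate_block(b, 4)
    print(f"block {b} -> disk {disk}, stripe {stripe}")
```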

24 RAID Level 1
[Diagram: each data disk paired with a mirror]
The disk array is "mirrored" or "shadowed" in its entirety
Reads can be optimized: pick the copy with the smaller queuing and seek times
Writes sacrifice performance, since every write goes to both copies

25 RAID Level 10
[Diagram: blocks 0-7 striped across one set of disks and mirrored on a second set]
Uses a combination of mirroring and striping

26 RAID Level 3
[Diagram: bit-level parity computed across the data disks onto a dedicated parity disk]
RAID 2 evolved to bit-level striping with error-correcting codes
Key observation: a failed disk can always be recognized, so full error-correcting codes are unnecessary
This gave way to RAID level 3, which uses parity only: the contents of the failed disk can be reconstructed from the surviving disks and the parity disk (see the sketch below)
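A sketch of why parity suffices for reconstruction: XOR parity lets any one missing value be recovered from the rest, because x ^ x = 0 cancels every other term. The data values are illustrative:

```python
from functools import reduce
from operator import xor

data = [0b1011, 0b0110, 0b1110]   # illustrative per-disk contents
parity = reduce(xor, data)        # parity disk = XOR of all data disks

# Suppose disk 1 fails: XOR the survivors with the parity to rebuild it.
survivors = [d for i, d in enumerate(data) if i != 1]
recovered = reduce(xor, survivors) ^ parity
assert recovered == data[1]
print(f"recovered contents of failed disk: {recovered:04b}")
```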

27 RAID Level 4
[Diagram: block-level parity; Blocks 0-3 plus a parity block and Blocks 4-7 plus a parity block, with all parity on a dedicated parity disk]
Data is interleaved in blocks; the block size is referred to as the striping unit and the number of disks as the striping width
Small reads can access a subset of the disks
A write to a single disk requires 4 accesses: read the old block, write the new block, and read and write the parity disk
The parity disk can become a bottleneck

28 The Small Write Problem
[Diagram: (1) read the old data block, (2) read the old parity, (3) XOR the old data, the new data B1-New, and the old parity to form the new parity, (4) write the new block and the new parity]
Two disk read operations followed by two disk write operations
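A sketch of the read-modify-write parity update behind those four accesses; the block values are illustrative:

```python
old_data   = 0b1010  # read 1: old contents of the target block
old_parity = 0b0110  # read 2: old contents of the parity block

new_data = 0b1100    # the block the host wants to write

# new parity = old parity XOR old data XOR new data: only the target
# disk and the parity disk are touched, not the whole stripe.
new_parity = old_parity ^ old_data ^ new_data

# write 1: new_data to the data disk; write 2: new_parity to the parity disk
print(f"new parity: {new_parity:04b}")
```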

29 RAID 4 Logical View
B0 B1 B2 B3 parity
B4 B5 B6 B7 parity
B8 B9 B10 B11 parity
B12 B13 B14 B15 parity
Small reads can take place concurrently
Large reads/writes use multiple disks and (for writes) the parity disk
Small writes take four accesses
The parity disk is a bottleneck

30 RAID 5 Logical View
B0 B1 B2 B3 parity
B4 B5 B6 parity B7
B8 B9 parity B10 B11
B12 parity B13 B14 B15
The block-interleaved distributed parity organization distributes write traffic among the disks
Best combination of small-read and small-write performance
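A sketch of one common distributed-parity placement (left-asymmetric rotation, matching the table above); the layout convention is an assumption, and real arrays vary:

```python
def raid5_layout(stripe, num_disks):
    """Return (parity_disk, data_disks) for one stripe under a
    left-asymmetric layout: parity rotates one disk to the left
    on each successive stripe."""
    parity_disk = (num_disks - 1 - stripe) % num_disks
    data_disks = [d for d in range(num_disks) if d != parity_disk]
    return parity_disk, data_disks

for s in range(4):
    p, _ = raid5_layout(s, 5)
    print(f"stripe {s}: parity on disk {p}")
# stripe 0: disk 4, stripe 1: disk 3, ... as in the table above
```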

31 Summary: Storage Systems
Extraordinary advances in densities and packaging
Raw performance is limited by the drive mechanics
System-level organizations such as RAID evolved to continue the growth in performance

