CS61C L25 Input/Output, Networks II, Disks (1) Garcia, Fall 2005 © UCB Lecturer PSOE, new dad Dan Garcia inst.eecs.berkeley.edu/~cs61c CS61C : Machine Structures Lecture #25 Input / Output, Networks II, Disks
There is one handout today at the front and back of the room!
Maxell's 300GB HVDs! We all fondly remember the days of Zip and SyQuest drives. InPhase Technologies has developed 300 GB Holographic Versatile Discs, with 1.6 TB discs to come later!
CPS today!

CS61C L25 Input/Output, Networks II, Disks (2) Garcia, Fall 2005 © UCB Review
- I/O gives computers their 5 senses
- I/O speed range is 12.5 million to one
- Processor speed means we must synchronize with I/O devices before use
- Polling works, but is expensive: the processor repeatedly queries devices
- Interrupts work, but are more complex: a device causes an exception, and the OS runs and deals with the device
- I/O control leads to Operating Systems
- The integrated circuit ("Moore's Law") is revolutionizing network switches as well as processors; a switch is just a specialized computer
- Trend from shared to switched networks to get faster links and scalable bandwidth

CS61C L25 Input/Output, Networks II, Disks (3) Garcia, Fall 2005 © UCB ABCs of Networks: 2 Computers
- Starting point: send bits between 2 computers
- Queue (First In First Out) on each end
- Can send both ways ("Full Duplex"); one-way communication is called "Half Duplex"
- Information sent is called a "message" (note: messages are also called packets)
(Diagram: each computer runs an app on an OS and connects to the link through a network interface device.)

CS61C L25 Input/Output, Networks II, Disks (4) Garcia, Fall 2005 © UCB A Simple Example: 2 Computers
- What is the message format? Similar idea to instruction format: fixed size? How many bits?
- Header (Trailer): information needed to deliver the message
- Payload: the data in the message
- What can be in the data? Anything that you can represent as bits: values, chars, commands, addresses...
(Format diagram: an 8-bit Length field followed by 32 x Length bits of Data.)
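As a rough sketch (not from the slides, and with made-up field names), this simple message format could be modeled in C as a length field plus a payload of 32-bit words:

```c
#include <stdint.h>

/* Hypothetical sketch of the slide's simple message format:
 * an 8-bit Length field followed by Length 32-bit words of Data. */
#define MAX_WORDS 255                 /* Length fits in 8 bits */

struct simple_msg {
    uint8_t  length;                  /* number of 32-bit data words */
    uint32_t data[MAX_WORDS];         /* payload: anything representable as bits */
};
```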

CS61C L25 Input/Output, Networks II, Disks (5) Garcia, Fall 2005 © UCB Questions About Simple Example
- What if more than 2 computers want to communicate? Need a computer "address field" in the packet to know which computer should receive it (destination), and which computer it came from for the reply (source) [just like envelopes!]
(Format diagram: Header = destination Net ID, source Net ID, Len (8 bits each); Payload = CMD/Address/Data (32 x n bits).)

CS61C L25 Input/Output, Networks II, Disks (6) Garcia, Fall 2005 © UCB ABCs: Many Computers
- Switches and routers interpret the header in order to deliver the packet
- The source encodes and the destination decodes the content of the payload
(Diagram: applications and OSes on the end hosts, each connected through a network interface device to the switched network.)

CS61C L25 Input/Output, Networks II, Disks (7) Garcia, Fall 2005 © UCB Questions About Simple Example
- What if the message is garbled in transit? Add redundant information that is checked when the message arrives to be sure it is OK
- An 8-bit sum of the other bytes, called a "checksum": upon arrival, compare the checksum to the sum of the rest of the information in the message. XOR is also popular.
- Math 55 talks about what a checksum is…
(Format diagram: Header [Net ID, Len], Payload [CMD/Address/Data], Trailer [Checksum].)
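A minimal sketch of the two checksum flavors the slide mentions (the function names are mine, not the course's):

```c
#include <stdint.h>
#include <stddef.h>

/* 8-bit additive checksum: sum every byte, letting the result wrap mod 256.
 * The receiver recomputes this and compares it to the checksum in the trailer. */
uint8_t checksum8(const uint8_t *msg, size_t len) {
    uint8_t sum = 0;
    for (size_t i = 0; i < len; i++)
        sum += msg[i];
    return sum;
}

/* XOR variant, also mentioned on the slide. */
uint8_t checksum_xor(const uint8_t *msg, size_t len) {
    uint8_t x = 0;
    for (size_t i = 0; i < len; i++)
        x ^= msg[i];
    return x;
}
```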

CS61C L25 Input/Output, Networks II, Disks (8) Garcia, Fall 2005 © UCB Questions About Simple Example
- What if the message never arrives? The receiver tells the sender when it arrives ("ACK", for ACKnowledgment) [a la registered mail]; the sender retries if it waits too long
- Don't discard the message until you get the ACK; also, if the checksum fails, don't send an ACK
(Format diagram: Header [Net ID, Len, ACK INFO], Payload [CMD/Address/Data], Trailer [Checksum].)

CS61C L25 Input/Output, Networks II, Disks (9) Garcia, Fall 2005 © UCB Observations About Simple Example
- Simple questions such as those above lead to more complex procedures to send/receive a message and more complex message formats
- Protocol: an algorithm for properly sending and receiving messages (packets)

CS61C L25 Input/Output, Networks II, Disks (10) Garcia, Fall 2005 © UCB Software Protocol to Send and Receive
SW send steps:
1. Application copies data to OS buffer
2. OS calculates checksum, starts timer
3. OS sends data to network interface HW and says start
SW receive steps:
3. OS copies data from network interface HW to OS buffer
2. OS calculates checksum; if OK, send ACK; if not, delete message (sender resends when timer expires)
1. If OK, OS copies data to user address space and signals application to continue
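A minimal stop-and-wait sketch of how these steps and the ACK/timeout rule from the previous slides fit together; the buffer, NIC, and timer helpers below are stand-in stubs I made up, not a real OS or driver API:

```c
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>
#include <string.h>

#define MAX_MSG 1500

static uint8_t os_buf[MAX_MSG + 1];              /* payload + 1-byte checksum trailer */

static uint8_t checksum8(const uint8_t *p, size_t n) {
    uint8_t s = 0;
    while (n--) s += *p++;
    return s;
}

/* Stubs standing in for the network interface hardware and the OS timer. */
static void nic_send(const uint8_t *p, size_t n) { (void)p; (void)n; }
static bool nic_recv(uint8_t *p, size_t n)       { (void)p; (void)n; return false; }
static bool ack_received_before_timeout(void)    { return true; }

/* Send steps 1-3: copy to OS buffer, checksum + timer, hand to HW; resend on timeout. */
void sw_send(const uint8_t *app_data, size_t n) {
    memcpy(os_buf, app_data, n);                 /* 1: app data -> OS buffer        */
    os_buf[n] = checksum8(os_buf, n);            /* 2: checksum (timer starts)      */
    do {
        nic_send(os_buf, n + 1);                 /* 3: hand the message to the NIC  */
    } while (!ack_received_before_timeout());    /* no ACK in time -> resend        */
}

/* Receive steps 3-1: copy from HW, verify checksum, ACK and deliver, or drop. */
bool sw_receive(uint8_t *app_data, size_t n) {
    if (!nic_recv(os_buf, n + 1)) return false;  /* 3: NIC -> OS buffer             */
    if (checksum8(os_buf, n) != os_buf[n])       /* 2: bad checksum -> no ACK,      */
        return false;                            /*    message is dropped           */
    nic_send((const uint8_t *)"ACK", 3);         /*    good -> send ACK             */
    memcpy(app_data, os_buf, n);                 /* 1: deliver to the application   */
    return true;
}
```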

CS61C L25 Input/Output, Networks II, Disks (11) Garcia, Fall 2005 © UCB Protocol for Networks of Networks?
- Internetworking: allows computers on independent and incompatible networks to communicate reliably and efficiently
- Enabling technology: SW standards that allow reliable communication without reliable networks
- Hierarchy of SW layers, giving each layer responsibility for a portion of the overall communications task; called protocol families or protocol suites
- Abstraction to cope with the complexity of communication vs. abstraction for the complexity of computation

CS61C L25 Input/Output, Networks II, Disks (12) Garcia, Fall 2005 © UCB Protocol Family Concept
(Diagram: each layer wraps the message with its own header (H) and trailer (T); peers exchange the message logically at the same layer, while the actual transfer goes down through the lower layers, across the physical link, and back up.)

CS61C L25 Input/Output, Networks II, Disks (13) Garcia, Fall 2005 © UCB Protocol Family Concept
- Key to protocol families is that communication occurs logically at the same level of the protocol, called peer-to-peer…
- …but is implemented via services at the next lower level
- Encapsulation: carry higher-level information within a lower-level "envelope"
- Fragmentation: break a packet into multiple smaller packets and reassemble

CS61C L25 Input/Output, Networks II, Disks (14) Garcia, Fall 2005 © UCB Protocol for Network of Networks
- Transmission Control Protocol / Internet Protocol (TCP/IP)
- This protocol family is the basis of the Internet, a WAN protocol
- IP makes a best effort to deliver; TCP guarantees delivery
- TCP/IP is so popular it is used even when communicating locally: even across a homogeneous LAN

CS61C L25 Input/Output, Networks II, Disks (15) Garcia, Fall 2005 © UCB Message: TCP/IP packet, Ethernet packet, protocols
- Application sends message
- TCP breaks it into 64 KiB segments and adds a 20 B header
- IP adds a 20 B header and sends it to the network
- If Ethernet, it is broken into 1500 B packets with headers and trailers (24 B)
- All headers and trailers have a length field, destination, ...
(Diagram: Ethernet header wraps [IP header + IP data], which wraps [TCP header + TCP data], which holds the application message.)
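To make the layering concrete, here is a rough back-of-the-envelope sketch using only the sizes quoted on the slide (20 B TCP header, 20 B IP header, 24 B of Ethernet header/trailer, 1500 B Ethernet payload); it ignores real-world details such as options, padding, and exact fragmentation rules:

```c
#include <stdio.h>

int main(void) {
    const long msg_bytes   = 64 * 1024;  /* one 64 KiB TCP segment                  */
    const long tcp_hdr     = 20;         /* TCP header, per the slide               */
    const long ip_hdr      = 20;         /* IP header, per the slide                */
    const long eth_ovh     = 24;         /* Ethernet header + trailer, per the slide */
    const long eth_payload = 1500;       /* Ethernet packet payload, per the slide  */

    long ip_datagram = msg_bytes + tcp_hdr + ip_hdr;                 /* after TCP and IP */
    long frames      = (ip_datagram + eth_payload - 1) / eth_payload; /* round up        */
    long wire_bytes  = ip_datagram + frames * eth_ovh;

    printf("%ld B message -> %ld Ethernet packets, %ld B on the wire (%.1f%% overhead)\n",
           msg_bytes, frames, wire_bytes,
           100.0 * (wire_bytes - msg_bytes) / msg_bytes);
    return 0;
}
```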

CS61C L25 Input/Output, Networks II, Disks (16) Garcia, Fall 2005 © UCB Overhead vs. Bandwidth
- Networks are typically advertised using the peak bandwidth of the network link: e.g., 100 Mbits/sec Ethernet ("100 base T")
- Software overhead to put a message into the network or get a message out of the network often limits useful bandwidth
- Assume the overhead to send and receive = 320 microseconds (µs), and we want to send 1000 Bytes over "100 Mbit/s" Ethernet
- Network transmission time: 1000 B x 8 b/B / 100 Mb/s = 8000 b / (100 b/µs) = 80 µs
- Effective bandwidth: 8000 b / (320 + 80) µs = 20 Mb/s
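A small sketch of the same effective-bandwidth arithmetic (my own helper function, not course code):

```c
#include <stdio.h>

/* Effective bandwidth = bits sent / (software overhead + transmission time).
 * Note that Mb/s is the same as bits per microsecond, which keeps the units simple. */
double effective_bw_mbps(double bytes, double peak_mbps, double overhead_us) {
    double bits        = bytes * 8.0;
    double transmit_us = bits / peak_mbps;       /* peak_mbps = bits per microsecond */
    return bits / (overhead_us + transmit_us);   /* result is again in Mb/s          */
}

int main(void) {
    /* 1000 Bytes over 100 Mb/s Ethernet with 320 us of overhead -> 20 Mb/s */
    printf("%.1f Mb/s\n", effective_bw_mbps(1000, 100, 320));
    return 0;
}
```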

CS61C L25 Input/Output, Networks II, Disks (17) Garcia, Fall 2005 © UCB Peer Instruction
- (T / F) P2P filesharing has been the dominant application on many links!
- Suppose we have 2 networks; which has a higher effective bandwidth as a function of the transferred data size?
  BearsNet: TCP/IP overhead 300 µs, peak BW 10 Mb/s
  CalNet: TCP/IP overhead 500 µs, peak BW 100 Mb/s
- Answer choices: (TRUE) 1: B always, 2: C always, 3: B small C big, 4: B big C small, 5: The same! / (FALSE) 6: B always, 7: C always, 8: B small C big, 9: B big C small, 0: The same!

CS61C L25 Input/Output, Networks II, Disks (18) Garcia, Fall 2005 © UCB Peer Instruction Answer
- TRUE: P2P filesharing has dominated many links (e.g., a 2002 Sprint gateway link)
- BearsNet (TCP/IP overhead 300 µs, peak BW 10 Mb/s) dominates at small transfer sizes; CalNet (TCP/IP overhead 500 µs, peak BW 100 Mb/s) dominates at big sizes

CS61C L25 Input/Output, Networks II, Disks (19) Garcia, Fall 2005 © UCB Administrivia
- Only 2 lectures to go (after this one)! :-(
- Project 4 (Cache simulator) due Friday
- Compete in the Performance contest! Deadline is Mon, 11:59pm, ~12 days from now
- HW4 and HW5 are done; regrade requests are due by …
- Project 3 will be graded face-to-face; check the web page for scheduling
- Final: 12:30pm in 2050 VLSB!

CS61C L25 Input/Output, Networks II, Disks (20) Garcia, Fall 2005 © UCB Upcoming Calendar
Week #14 (this week): Mon: I/O Basics & Networks I. Wed: I/O Networks II & Disks. Thu lab: I/O Polling. Cache project due yesterday.
Week #15 (last week o' classes): Mon: Performance. Wed: LAST CLASS: Summary, Review, & HKN Evals. Thu lab: I/O Networking & 61C Feedback Survey.
Week #16: Sun 2pm review, 10 Evans. Performance competition due midnight. FINAL EXAM Sat 12:30pm-3:30pm, 2050 VLSB. Performance awards.

CS61C L25 Input/Output, Networks II, Disks (21) Garcia, Fall 2005 © UCB Magnetic Disks
- Purpose: long-term, nonvolatile, inexpensive storage for files
- Large, inexpensive, slow level in the memory hierarchy
(Diagram: the five classic components of a computer: Processor (active), with Control ("brain") and Datapath ("brawn"); Memory (passive, where programs and data live when running); and Devices: Input (keyboard, mouse) and Output (display, printer), with disk and network as I/O devices.)

CS61C L25 Input/Output, Networks II, Disks (22) Garcia, Fall 2005 © UCB Photo of Disk Head, Arm, Actuator
(Photo labels: actuator, arm, head, platters (12), spindle.)

CS61C L25 Input/Output, Networks II, Disks (23) Garcia, Fall 2005 © UCB Disk Device Terminology
- Several platters, with information recorded magnetically on both surfaces (usually)
- The actuator moves the head (at the end of the arm) over a track ("seek"), waits for the sector to rotate under the head, then reads or writes
- Bits are recorded in tracks, which in turn are divided into sectors (e.g., 512 Bytes)
(Diagram labels: platter, outer track, inner track, sector, actuator, head, arm.)

CS61C L25 Input/Output, Networks II, Disks (24) Garcia, Fall 2005 © UCB Disk Device Performance
- Disk Latency = Seek Time + Rotation Time + Transfer Time + Controller Overhead
- Seek Time? Depends on the number of tracks the arm must move and the seek speed of the disk
- Rotation Time? Depends on the speed at which the disk rotates and how far the sector is from the head
- Transfer Time? Depends on the data rate (bandwidth) of the disk (bit density) and the size of the request
(Diagram labels: platter, arm, actuator, head, sector, inner track, outer track, controller, spindle.)
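A small sketch of the latency formula above; the example numbers in main are made up for illustration (loosely echoing the Cheetah-class figures later in the lecture), not prescribed by the slides:

```c
#include <stdio.h>

/* Disk latency = seek + rotation + transfer + controller overhead.
 * The average rotational delay is taken as half a revolution. */
double disk_latency_ms(double seek_ms, double rpm,
                       double transfer_mb_per_s, double request_kb,
                       double controller_ms) {
    double avg_rotation_ms = 0.5 * 60000.0 / rpm;               /* half a revolution */
    double transfer_ms     = (request_kb / 1024.0)              /* KB -> MB          */
                             / transfer_mb_per_s * 1000.0;      /* s  -> ms          */
    return seek_ms + avg_rotation_ms + transfer_ms + controller_ms;
}

int main(void) {
    /* e.g., 3.5 ms seek, 15,000 RPM, 200 MB/s transfer, 4 KB request, 0.1 ms controller */
    printf("%.2f ms\n", disk_latency_ms(3.5, 15000, 200, 4, 0.1));
    return 0;
}
```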

CS61C L25 Input/Output, Networks II, Disks (25) Garcia, Fall 2005 © UCB Data Rate: Inner vs. Outer Tracks
- To keep things simple, disks originally had the same number of sectors per track; since the outer track is longer, that means fewer bits per inch
- Competition decided to keep bits/inch (BPI) high for all tracks ("constant bit density"): more capacity per disk, more sectors per track toward the edge
- Since the disk spins at constant speed, outer tracks have a faster data rate
- Bandwidth of the outer track is 1.7X that of the inner track!

CS61C L25 Input/Output, Networks II, Disks (26) Garcia, Fall 2005 © UCB Disk Performance Model / Trends
- Capacity: +100% / year (2X / 1.0 yrs); over time, capacity has grown so fast that the number of platters has been reduced (some drives even use only 1 now!)
- Transfer rate (BW): +40% / yr (2X / 2 yrs)
- Rotation + seek time: -8% / yr (1/2 in 10 yrs)
- Areal density: bits recorded along a track (Bits/Inch, BPI) x tracks per surface (Tracks/Inch, TPI); we care about bit density per unit area (Bits/Inch^2), so Areal Density = BPI x TPI
- MB/$: > 100% / year (2X / 1.0 yrs), from fewer chips + areal density
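For a feel for what those growth rates compound to, here is a tiny sketch; the starting values are arbitrary, and only the rates come from the slide:

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    double capacity_gb = 100.0, bw_mb_s = 50.0, seek_ms = 8.0;  /* arbitrary year-0 values */
    for (int year = 0; year <= 10; year += 5) {
        printf("year %2d: capacity %7.0f GB, BW %6.1f MB/s, seek %4.2f ms\n",
               year,
               capacity_gb * pow(2.0, year / 1.0),   /* +100%/yr: 2x every year      */
               bw_mb_s     * pow(2.0, year / 2.0),   /* +40%/yr: roughly 2x / 2 yrs  */
               seek_ms     * pow(0.5, year / 10.0)); /* -8%/yr: roughly 1/2 in 10 yrs */
    }
    return 0;
}
```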

CS61C L25 Input/Output, Networks II, Disks (27) Garcia, Fall 2005 © UCB Disk History (IBM)
Data density in Mibit/sq. in.; capacity of unit shown in GiBytes.
- 1973: 1.7 Mibit/sq. in, 0.14 GiBytes
- 1979: 7.7 Mibit/sq. in, 2.3 GiBytes
source: New York Times, 2/23/98, page C3, "Makers of disk drives crowd even more data into even smaller spaces"

CS61C L25 Input/Output, Networks II, Disks (28) Garcia, Fall 2005 © UCB Disk History
- 1989: 63 Mibit/sq. in, 60 GiBytes
- 1997: 1450 Mibit/sq. in, 2.3 GiBytes
- 1997: 3090 Mibit/sq. in, 8.1 GiBytes
source: New York Times, 2/23/98, page C3, "Makers of disk drives crowd even more data into even smaller spaces"

CS61C L25 Input/Output, Networks II, Disks (29) Garcia, Fall 2005 © UCB Historical Perspective
- Form factor and capacity drive the market, more than performance
- 1970s: Mainframes → 14" diameter disks
- 1980s: Minicomputers, servers → 8", 5.25" diameter disks
- Late 1980s / early 1990s: pizzabox PCs → 3.5 inch diameter disks; laptops, notebooks → 2.5 inch disks
- Palmtops didn't use disks, so 1.8 inch diameter disks didn't make it
(Photo: the five most popular internal form factors for PC hard disks. Clockwise from the left: 5.25", 3.5", 2.5", PC Card and CompactFlash.)

CS61C L25 Input/Output, Networks II, Disks (30) Garcia, Fall 2005 © UCB State of the Art: Two Camps (2005)
Performance (enterprise apps, servers), e.g., Seagate Cheetah 15K.4:
- Serial-Attached SCSI, Ultra320 SCSI, 2 Gbit Fibre Channel interfaces
- 146 GB, 3.5-inch disk, 15,000 RPM, 4 discs, 8 heads
- 13 watts (idle), 3.5 ms avg. seek, 200 MB/s transfer rate
- 1.4 million hrs MTBF, 5 year warranty
- $1000 = $6.8 / GB
Capacity (mainstream, home uses), e.g., Seagate Barracuda:
- Serial ATA 3 Gb/s, Ultra ATA/… interfaces; … GB, 3.5-inch disk, 7,200 RPM, ? discs, ? heads
- 7 watts (idle), 8.5 ms avg. seek, 300 MB/s transfer rate
- ? million hrs MTBF, 5 year warranty
- $330 = $0.66 / GB
(source: …)

CS61C L25 Input/Output, Networks II, Disks (31) Garcia, Fall 2005 © UCB 1 inch disk drive!
- 2005 Hitachi Microdrive: 40 x 30 x 5 mm, 13 g; 8 GB, 3600 RPM, 1 disk, 10 MB/s, 12 ms seek
- 400 G operational shock, 2000 G non-operational; can detect a fall in 4" and retract heads to safety
- For iPods, cameras, phones
- 2006 Microdrive? 16 GB, 12 MB/s! (assuming past trends continue)

CS61C L25 Input/Output, Networks II, Disks (32) Garcia, Fall 2005 © UCB Where does Flash memory come in?
- Microdrives and Flash memory (e.g., CompactFlash) are going head-to-head; both are non-volatile (no power, data OK)
- Flash benefits: durable and lower power (no moving parts)
- Flash limitations: finite number of write cycles (wear on the insulating oxide layer around the charge storage mechanism); OEMs work around this by spreading writes out
- How does Flash memory work? An NMOS transistor with an additional conductor between gate and source/drain which "traps" electrons. The presence/absence of trapped charge is a 1 or 0. wikipedia.org/wiki/Flash_memory

CS61C L25 Input/Output, Networks II, Disks (33) Garcia, Fall 2005 © UCB Use Arrays of Small Disks…
- Katz and Patterson asked in 1987: can smaller disks be used to close the gap in performance between disks and CPUs?
(Diagram: conventional designs span 14", 10", 5.25", and 3.5" disks from low end to high end, vs. a disk array built from one small disk design.)

CS61C L25 Input/Output, Networks II, Disks (34) Garcia, Fall 2005 © UCB Replace a Small Number of Large Disks with a Large Number of Small Disks! (1988 Disks)
- IBM 3390K: 20 GBytes, 97 cu. ft., 3 KW, 15 MB/s data rate, 600 I/Os/s, 250 KHrs MTTF, $250K
- IBM 3.5" disk: … MBytes, 0.1 cu. ft., 11 W, 1.5 MB/s data rate, 55 I/Os/s, 50 KHrs MTTF, $2K
- Array of 70 small disks (x70): 23 GBytes, 11 cu. ft., 1 KW, 120 MB/s data rate, 3900 I/Os/s, ??? Hrs MTTF, $150K
- vs. the 3390K, the array takes 9X less volume, uses 3X less power, and has 8X the data rate and 6X the I/O rate
- Disk arrays are potentially high performance, high MB per cu. ft., high MB per KW... but what about reliability?

CS61C L25 Input/Output, Networks II, Disks (35) Garcia, Fall 2005 © UCB Array Reliability
- Reliability: whether or not a component has failed; measured as Mean Time To Failure (MTTF)
- Reliability of N disks = Reliability of 1 disk ÷ N (assuming failures are independent)
- 50,000 hours ÷ 70 disks ≈ 700 hours
- The disk system MTTF drops from 6 years to 1 month!
- Disk arrays are too unreliable to be useful!
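The same arithmetic as a tiny sketch (my own illustration of the slide's numbers):

```c
#include <stdio.h>

/* With independent failures, the array's MTTF is one disk's MTTF divided by N. */
int main(void) {
    const double disk_mttf_hours = 50000.0;
    const int    n_disks         = 70;
    double array_mttf = disk_mttf_hours / n_disks;

    printf("1 disk  : %.0f hours (~%.1f years)\n",
           disk_mttf_hours, disk_mttf_hours / (24 * 365));
    printf("%d disks: %.0f hours (~%.1f weeks, about a month)\n",
           n_disks, array_mttf, array_mttf / (24 * 7));
    return 0;
}
```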

CS61C L25 Input/Output, Networks II, Disks (36) Garcia, Fall 2005 © UCB Redundant Arrays of (Inexpensive) Disks
- Files are "striped" across multiple disks
- Redundancy yields high data availability
- Availability: service is still provided to the user, even if some components have failed
- Disks will still fail; contents are reconstructed from data redundantly stored in the array
  → Capacity penalty to store redundant info
  → Bandwidth penalty to update redundant info

CS61C L25 Input/Output, Networks II, Disks (37) Garcia, Fall 2005 © UCB Berkeley History, RAID-I
- RAID-I (1989) consisted of a Sun 4/280 workstation with 128 MB of DRAM, four dual-string SCSI controllers, …-inch SCSI disks, and specialized disk striping software
- Today RAID is a > $27 billion industry; 80% of non-PC disks are sold in RAIDs

CS61C L25 Input/Output, Networks II, Disks (38) Garcia, Fall 2005 © UCB "RAID 0": No redundancy = "AID"
- Assume we have 4 disks of data for this example, organized in blocks
- Large accesses are faster, since we transfer from several disks at once
(This and the next 5 slides are from RAID.edu.)

CS61C L25 Input/Output, Networks II, Disks (39) Garcia, Fall 2005 © UCB RAID 1: Mirror data
- Each disk is fully duplicated onto its "mirror"
- Very high availability can be achieved
- Bandwidth reduced on write: 1 logical write = 2 physical writes
- Most expensive solution: 100% capacity overhead

CS61C L25 Input/Output, Networks II, Disks (40) Garcia, Fall 2005 © UCB RAID 3: Parity (RAID 2 has bit-level striping)
- Parity is computed across the group to protect against hard disk failures, and stored in the P disk
- Logically, a single high-capacity, high-transfer-rate disk
- 25% capacity cost for parity in this example vs. 100% for RAID 1 (5 disks vs. 8 disks)
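A minimal sketch of the parity idea (not any particular RAID implementation): the parity block is the XOR of the data blocks, so a lost block can be rebuilt from the parity and the surviving blocks:

```c
#include <stdint.h>
#include <stdio.h>

#define NDATA 4
#define BLOCK 8                       /* tiny blocks just for the demo */

static void xor_into(uint8_t *dst, const uint8_t *src) {
    for (int i = 0; i < BLOCK; i++) dst[i] ^= src[i];
}

int main(void) {
    uint8_t data[NDATA][BLOCK] = {{1,2,3,4,5,6,7,8},
                                  {9,9,9,9,9,9,9,9},
                                  {0,1,0,1,0,1,0,1},
                                  {42,42,42,42,42,42,42,42}};
    uint8_t parity[BLOCK] = {0};

    for (int d = 0; d < NDATA; d++)   /* compute parity across the group */
        xor_into(parity, data[d]);

    /* Pretend disk 2 fails: rebuild it from parity + the other data disks. */
    uint8_t rebuilt[BLOCK] = {0};
    xor_into(rebuilt, parity);
    for (int d = 0; d < NDATA; d++)
        if (d != 2) xor_into(rebuilt, data[d]);

    printf("rebuilt disk 2, first byte: %d (expected %d)\n", rebuilt[0], data[2][0]);
    return 0;
}
```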

CS61C L25 Input/Output, Networks II, Disks (41) Garcia, Fall 2005 © UCB RAID 4: Parity plus small-sized accesses
- RAID 3 relies on the parity disk to discover errors on a read
- But every sector has an error detection field
- Rely on the error detection field to catch errors on read, not on the parity disk
- Allows small independent reads to different disks simultaneously

CS61C L25 Input/Output, Networks II, Disks (42) Garcia, Fall 2005 © UCB Inspiration for RAID 5
- Small writes (write to one disk):
  Option 1: read the other data disks, create the new sum, and write it to the parity disk (accesses all disks)
  Option 2: since P has the old sum, compare the old data to the new data and add the difference to P: 1 logical write = 2 physical reads + 2 physical writes
- The parity disk is the bottleneck for small writes: writes to A0 and B1 both write to the P disk
(Diagram: stripes of data blocks A0 B0 C0 D0 and A1 B1 C1 D1, each with a parity block P.)
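A compile-only sketch of "Option 2" above; the in-memory "disks" and the disk_read/disk_write helpers are stand-ins I invented so the example is self-contained, not a real disk API:

```c
#include <stdint.h>
#include <string.h>

#define NDISKS  5
#define NBLOCKS 16
#define BLOCK   512

/* Tiny in-memory stand-in for the disks, just so the sketch is self-contained. */
static uint8_t disks[NDISKS][NBLOCKS][BLOCK];

static void disk_read(int d, int b, uint8_t *buf)        { memcpy(buf, disks[d][b], BLOCK); }
static void disk_write(int d, int b, const uint8_t *buf) { memcpy(disks[d][b], buf, BLOCK); }

/* "Option 2" small write: 2 physical reads + 2 physical writes.
 * new_parity = old_parity XOR old_data XOR new_data. */
void raid5_small_write(int data_disk, int parity_disk, int block,
                       const uint8_t new_data[BLOCK]) {
    uint8_t old_data[BLOCK], parity[BLOCK];

    disk_read(data_disk, block, old_data);       /* physical read 1  */
    disk_read(parity_disk, block, parity);       /* physical read 2  */

    for (int i = 0; i < BLOCK; i++)
        parity[i] ^= old_data[i] ^ new_data[i];  /* fold the difference into P */

    disk_write(data_disk, block, new_data);      /* physical write 1 */
    disk_write(parity_disk, block, parity);      /* physical write 2 */
}
```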

CS61C L25 Input/Output, Networks II, Disks (43) Garcia, Fall 2005 © UCB RAID 5: Rotated Parity, faster small writes
- Independent writes are possible because of the interleaved parity
- Example: writes to A0 and B1 use disks 0, 1, 4, 5, so they can proceed in parallel
- Still, 1 small write = 4 physical disk accesses

CS61C L25 Input/Output, Networks II, Disks (44) Garcia, Fall 2005 © UCB Peer Instruction
A. RAID 1 (mirror) and 5 (rotated parity) help with performance and availability
B. RAID 1 has higher cost than RAID 5
C. Small writes on RAID 5 are slower than on RAID 1
Answer choices (ABC): 1: FFF, 2: FFT, 3: FTF, 4: FTT, 5: TFF, 6: TFT, 7: TTF, 8: TTT

CS61C L25 Input/Output, Networks II, Disks (45) Garcia, Fall 2005 © UCB Peer Instruction Answer
A. RAID 1 (mirror) and 5 (rotated parity) help with performance and availability: TRUE. All RAID levels (0-5) help with performance; only RAID 0 doesn't help availability.
B. RAID 1 has higher cost than RAID 5: TRUE. Surely! You must buy 2x the disks rather than 1.25x (from the diagram; in practice even less).
C. Small writes on RAID 5 are slower than on RAID 1: TRUE. RAID 5 (2 reads, 2 writes) vs. RAID 1 (2 writes): latency is worse, though throughput (parallel writes) is better.

CS61C L25 Input/Output, Networks II, Disks (46) Garcia, Fall 2005 © UCB "And in conclusion…"
- Protocol suites allow heterogeneous networking: another form of the principle of abstraction; protocols → operation in the presence of failures; standardization is key for LAN, WAN
- Magnetic disks continue their rapid advance: 60%/yr capacity, 40%/yr bandwidth, slow improvements in seek and rotation, MB/$ improving 100%/yr? Designs to fit high-volume form factors
- RAID: higher performance with more disk arms per $; adds the option for a small # of extra disks; today RAID is a > $27 billion industry, 80% of non-PC disks are sold in RAIDs; started at Cal