CS152 – Computer Architecture and Engineering
Lecture 18 – ECC, RAID, Bandwidth vs. Latency (2004-10-28)
John Lazzaro, Dave Patterson
www-inst.eecs.berkeley.edu/~cs152/
Fall 2004 © UC Regents

Review
Buses are an important technique for building large-scale systems.
–Their speed is critically dependent on factors such as length and the number of devices.
–Critically limited by capacitance.
Direct Memory Access (DMA) allows fast, burst transfers into the processor's memory:
–The processor's memory acts like a slave.
–Probably requires some form of cache coherence, so that DMA'ed memory can be invalidated from the cache.
Networks and switches are popular for LANs and WANs.
Networks and switches are starting to replace buses on the desktop, even inside chips.

Review: ATA cables
Serial ATA, rounded parallel ATA, and ribbon parallel ATA cables.
Serial ATA cables are thin and may run up to 40 inches, vs. the 18-inch maximum for parallel ATA.

Outline
ECC
RAID: Old School & Update
Latency vs. Bandwidth (if time permits)

Error-Detecting Codes
Computer memories can occasionally make errors.
To guard against errors, some memories use error-detecting codes or error-correcting codes (ECC): extra bits are added to each memory word.
When a word is read out of memory, the extra bits are checked to see whether an error has occurred and, if using ECC, to correct it.
Data + extra bits are called "code words".

Error-Detecting Codes
Given 2 code words, we can determine how many corresponding bits differ: just compute the bitwise Boolean EXCLUSIVE OR of the two code words and count the number of 1 bits in the result.
The number of bit positions in which two code words differ is called the Hamming distance.
If two code words are a Hamming distance d apart, it will require d single-bit errors to convert one into the other.

Error-Detecting Codes
For example, the code words 11110001 and 00110000 are a Hamming distance 3 apart, because it takes 3 single-bit errors to convert one into the other:
11110001 XOR 00110000 = 11000001, which has three 1 bits → Hamming distance 3.
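The distance computation above is just an XOR followed by a population count. A minimal C sketch (the function name is ours, not from the lecture):

```c
#include <stdio.h>
#include <stdint.h>

/* Hamming distance = number of 1 bits in a XOR b. */
static int hamming_distance(uint32_t a, uint32_t b)
{
    uint32_t x = a ^ b;            /* bits that differ     */
    int d = 0;
    for (; x != 0; x &= x - 1)     /* clear lowest set bit */
        d++;
    return d;
}

int main(void)
{
    /* The slide's example: 11110001 vs. 00110000 -> distance 3. */
    printf("%d\n", hamming_distance(0xF1, 0x30));
    return 0;
}
```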

Error-Detecting Codes
As a simple example of an error-detecting code, consider a code in which a single parity bit is appended to the data.
The parity bit is chosen so that the number of 1 bits in the code word is even (or odd). E.g., with even parity, the parity bit for 1110000 is 1.
Such a parity code has Hamming distance 2, since any single-bit error produces a code word with the wrong parity.
It takes 2 single-bit errors to go from one valid code word to another valid code word, so the code detects single-bit errors.
Whenever a word containing the wrong parity is read from memory, an error condition is signaled. The program cannot continue, but at least no incorrect results are computed.
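A sketch of the parity scheme in C, assuming the parity bit is appended as the low-order bit (that placement is our choice for illustration, not something the slide specifies):

```c
#include <stdio.h>
#include <stdint.h>

static unsigned popcount(uint32_t x)
{
    unsigned n = 0;
    for (; x != 0; x &= x - 1) n++;
    return n;
}

int main(void)
{
    uint32_t data = 0x70;  /* 1110000: three 1s, so even-parity bit = 1 */

    /* Writer side: append the parity bit (low-order bit, our choice). */
    uint32_t stored = (data << 1) | (popcount(data) & 1);

    /* Reader side: an odd total 1-count signals a single-bit error.   */
    printf("parity bit = %u\n", stored & 1);                 /* 1  */
    printf("error on read? %s\n",
           (popcount(stored) & 1) ? "yes" : "no");           /* no */
    return 0;
}
```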

Error-Correcting Codes
A Hamming distance of 2k + 1 is required to be able to correct k errors in any data word.
As a simple example of an error-correcting code, consider a code with only four valid code words: 0000000000, 0000011111, 1111100000, and 1111111111.
This code has a distance of 5, which means that it can correct double errors.
If the code word 0000000111 arrives, the receiver knows that the original must have been 0000011111 (if there was no more than a double error).
If, however, a triple error changes 0000000000 into 0000000111, the error cannot be corrected.
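Correction here is "decode to the nearest valid code word". A C sketch over the slide's 10-bit code (the table and names are ours):

```c
#include <stdio.h>
#include <stdint.h>

static int hdist(uint32_t a, uint32_t b)
{
    uint32_t x = a ^ b;
    int d = 0;
    for (; x != 0; x &= x - 1) d++;
    return d;
}

int main(void)
{
    /* The slide's 10-bit code: 0000000000, 0000011111,
     * 1111100000, 1111111111.                          */
    const uint32_t code[4] = { 0x000, 0x01F, 0x3E0, 0x3FF };
    uint32_t rx = 0x007;           /* received 0000000111         */

    int best = 0;                  /* decode to nearest code word */
    for (int i = 1; i < 4; i++)
        if (hdist(rx, code[i]) < hdist(rx, code[best]))
            best = i;

    printf("decoded to %03X\n", code[best]);  /* 01F = 0000011111 */
    return 0;
}
```

Note that the same decode rule is what mis-corrects the triple-error case: 0000000111 is distance 2 from 0000011111 but distance 3 from 0000000000.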

Hamming Codes
How many parity bits are needed? m parity bits can protect up to 2^m − 1 − m information bits:

Info bits    Parity bits
< 5          3
< 12         4
< 27         5
< 58         6
< 121        7

How do we correct a single error (SEC) and detect 2 errors (DED)?
How many "SEC/DED" bits are needed for 64 bits of data?
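The table follows from requiring 2^m − 1 − m ≥ k. A sketch that finds the smallest such m (the helper name is ours); the comment gives the standard answer to the slide's closing question:

```c
#include <stdio.h>

/* Smallest m such that 2^m - 1 - m >= k information bits. */
static int parity_bits_needed(int k)
{
    int m = 1;
    while ((1 << m) - 1 - m < k)
        m++;
    return m;
}

int main(void)
{
    printf("k=4  -> m=%d\n", parity_bits_needed(4));    /* 3 */
    printf("k=11 -> m=%d\n", parity_bits_needed(11));   /* 4 */
    printf("k=64 -> m=%d\n", parity_bits_needed(64));   /* 7 */
    /* SEC/DED adds one extra overall-parity bit: 7 + 1 = 8
     * check bits for 64 data bits (the standard (72,64) code). */
    return 0;
}
```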

Administrivia – HW 3, Lab 4
Lab 4 is next: plan by Thursday for the TA, meet with the TA Friday, final Monday.

ECC Hamming Code
Hamming coding is a method for detecting and correcting errors.
–For ECC, the Hamming distance between any 2 coded words must be ≥ 3.
Number the bits from the right, starting with 1. All bits whose bit number is a power of 2 are parity bits. We use EVEN parity in this example, which has 4 data bits:

7-bit code word:                 D D D P D P P
P1 (bit 1) checks bits 1,3,5,7:  D - D - D - P
P2 (bit 2) checks bits 2,3,6,7:  D D - - D P -
P4 (bit 4) checks bits 4,5,6,7:  D D D P - - -

Bit 1 checks (parity over) all bit positions that have a 1 in their binary bit number; bit 2 checks all positions that have a 2 in their number; bit 4 checks all positions that have a 4 in their number; etc.

Example: Hamming Code
The message 1101 would be sent as 1100110, since:

Bit:        7 6 5 4 3 2 1
Code word:  1 1 0 0 1 1 0   (even parity)

EVEN PARITY: if the number of 1s in the positions a parity bit checks is even, then the parity bit = 0; else the parity bit = 1.
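A sketch of the encoder used in this example, with data bits in positions 7, 6, 5, 3 and even-parity bits in positions 4, 2, 1 as the slides lay them out (the function name is ours):

```c
#include <stdio.h>
#include <stdint.h>

/* Hamming(7,4) encoder matching the slides: bits numbered 1..7 from the
 * right, data in positions 7,6,5,3, even-parity bits in 4,2,1.         */
static uint8_t hamming74_encode(uint8_t data)   /* 4 data bits           */
{
    uint8_t b7 = (data >> 3) & 1, b6 = (data >> 2) & 1;
    uint8_t b5 = (data >> 1) & 1, b3 = data & 1;
    uint8_t b4 = b5 ^ b6 ^ b7;                  /* checks bits 4,5,6,7   */
    uint8_t b2 = b3 ^ b6 ^ b7;                  /* checks bits 2,3,6,7   */
    uint8_t b1 = b3 ^ b5 ^ b7;                  /* checks bits 1,3,5,7   */
    return (uint8_t)(b7 << 6 | b6 << 5 | b5 << 4 |
                     b4 << 3 | b3 << 2 | b2 << 1 | b1);
}

int main(void)
{
    uint8_t cw = hamming74_encode(0xD);         /* message 1101          */
    for (int i = 6; i >= 0; i--)
        putchar('0' + ((cw >> i) & 1));         /* prints 1100110        */
    putchar('\n');
    return 0;
}
```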

Example: Hamming Code
transmitted message 1100110 → received message 1110110 (bit 5 flipped by an error on the channel)
The above error (in bit 5) can be corrected by examining which of the three parity checks are violated by the bad bit:

Bit:        7 6 5 4 3 2 1
Received:   1 1 1 0 1 1 0   (even parity)
Parity bit 4 (checks bits 4,5,6,7): NOT even! → 1
Parity bit 2 (checks bits 2,3,6,7): OK!       → 0
Parity bit 1 (checks bits 1,3,5,7): NOT even! → 1

The bad parity checks, labeled 101, point directly to the bad bit, since 101 binary equals 5.

Example: Hamming Code
Will a Hamming code detect and correct errors in the parity bits themselves? Yes!
transmitted message 1100110 → received message 1100111 (parity bit 1 flipped)
The above error in a parity bit (bit 1) can be corrected by examining the checks as below:

Bit:        7 6 5 4 3 2 1
Received:   1 1 0 0 1 1 1   (even parity)
Parity bit 4 (checks bits 4,5,6,7): OK!       → 0
Parity bit 2 (checks bits 2,3,6,7): OK!       → 0
Parity bit 1 (checks bits 1,3,5,7): NOT even! → 1

The bad parity checks, labeled 001, point directly to the bad bit, since 001 binary equals 1. In this example the error in parity bit 1 is detected and corrected by flipping it back to 0.
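Both corrections above fall out of one syndrome computation: recompute the three parity checks and read the failures as a bit position. A C sketch (names are ours):

```c
#include <stdio.h>
#include <stdint.h>

/* Hamming(7,4) single-error correction: recompute the three even-parity
 * checks; the failing checks, read as a binary number (P4 P2 P1), give
 * the position of the bad bit (0 means no error detected).              */
static uint8_t hamming74_correct(uint8_t cw)    /* code bit i+1 in cw bit i */
{
    uint8_t b[8];
    for (int i = 1; i <= 7; i++)
        b[i] = (cw >> (i - 1)) & 1;
    int syndrome = (b[4] ^ b[5] ^ b[6] ^ b[7]) << 2
                 | (b[2] ^ b[3] ^ b[6] ^ b[7]) << 1
                 | (b[1] ^ b[3] ^ b[5] ^ b[7]);
    if (syndrome)
        cw ^= (uint8_t)(1 << (syndrome - 1));   /* flip the bad bit      */
    return cw;
}

int main(void)
{
    /* Both slide examples recover the transmitted word 1100110 (0x66). */
    printf("%02X\n", hamming74_correct(0x76));  /* data bit 5 flipped    */
    printf("%02X\n", hamming74_correct(0x67));  /* parity bit 1 flipped  */
    return 0;
}
```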

RAID Beginnings
We had worked on 3 generations of Reduced Instruction Set Computer (RISC) processors, 1980–1987.
Our expectation: I/O will become a performance bottleneck if it doesn't get faster.
Randy Katz gets a Macintosh with a disk alongside it.
"Use PC disks to build fast I/O to keep pace with RISC?"

Redundant Array of Inexpensive Disks (1988)
Hard to explain the ideas, given past disk-array efforts. A paper to educate, differentiate?
The RAID paper spread like a virus: products from Compaq, EMC, IBM, …
RAID I: Sun 4/280, 128 MB of DRAM, 4 dual-string SCSI controllers, 5.25" 340 MB disks + SW RAID.
RAID II: Gbit/s network + 3.5" 320 MB disks; 1st Network Attached Storage.
Ousterhout: Log Structured File System, widely used (NetApp).
Today RAID is roughly a $25B industry; 80% of server disks are in RAIDs.
1998 IEEE Storage Award.
Students: Peter Chen, Ann Chervenak, Garth Gibson, Ed Lee, Ethan Miller, Mary Baker, John Hartman, Kim Keeton, Mendel Rosenblum, Ken Shirriff, …

Latency Lags Bandwidth
Over the last 20 to 25 years, for network, disk, DRAM, and MPU, latency lags bandwidth:
–Bandwidth improved 120X to 2200X
–But latency improved only 4X to 20X
Let's look at examples and the reasons for it.

Disks: Archaic (Nostalgic) vs. Modern (Newfangled)
CDC Wren I, 1983
–3600 RPM
–0.03 GBytes capacity
–Tracks/Inch: 800
–Bits/Inch: 9550
–Three 5.25" platters
–Bandwidth: 0.6 MBytes/sec
–Latency: 48.3 ms
–Cache: none
Seagate 373453, 2003
–15,000 RPM (4X)
–73.4 GBytes (2500X)
–Tracks/Inch: 64,000 (80X)
–Bits/Inch: 533,000 (60X)
–Four 2.5" platters (in 3.5" form factor)
–Bandwidth: 86 MBytes/sec (140X)
–Latency: 5.7 ms (8X)
–Cache: 8 MBytes

Latency Lags Bandwidth (for last ~20 years)
[Log-log plot: relative bandwidth improvement vs. relative latency improvement for each performance milestone.]
Performance Milestones
–Disk: 3600, 5400, 7200, 10,000, 15,000 RPM (8x latency, 143x bandwidth)
(latency = simple operation w/o contention; BW = best-case)

Memory: Archaic (Nostalgic) vs. Modern (Newfangled)
1980 DRAM (asynchronous)
–0.06 Mbits/chip
–64,000 xtors, 35 mm²
–16-bit data bus per module, 16 pins/chip
–13 MBytes/sec
–Latency: 225 ns
–(no block transfer)
2000 Double Data Rate Synchronous (clocked) DRAM
–256 Mbits/chip (4000X)
–256,000,000 xtors, 204 mm²
–64-bit data bus per DIMM, 66 pins/chip (4X)
–1600 MBytes/sec (120X)
–Latency: 52 ns (4X)
–Block transfers (page mode)
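The 1600 MBytes/sec figure can be sanity-checked from the bus width and transfer rate, assuming DDR-200 timing (100 MHz bus clock, two transfers per cycle), which is our reading of the module described:

```latex
\mathrm{BW} = 8~\tfrac{\text{bytes}}{\text{transfer}} \times 100~\text{MHz} \times 2~\tfrac{\text{transfers}}{\text{cycle}} = 1600~\text{MB/s}
```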

Latency Lags Bandwidth (last ~20 years)
Performance Milestones
–Memory Module: 16-bit plain DRAM, Page Mode DRAM, 32b, 64b, SDRAM, DDR SDRAM (4x latency, 120x bandwidth)
–Disk: 3600, 5400, 7200, 10,000, 15,000 RPM (8x latency, 143x bandwidth)
(latency = simple operation w/o contention; BW = best-case)

LANs: Archaic (Nostalgic) vs. Modern (Newfangled)
Ethernet 802.3
–Year of Standard: 1978
–10 Mbits/s link speed
–Latency: 3000 µsec
–Shared media
–Coaxial cable: copper core, insulator, braided outer conductor, plastic covering
Ethernet 802.3ae
–Year of Standard: 2003
–10,000 Mbits/s link speed (1000X)
–Latency: 190 µsec (15X)
–Switched media
–Category 5 copper wire: "Cat 5" is 4 twisted pairs in a bundle; copper, 1 mm thick, twisted to avoid the antenna effect

Latency Lags Bandwidth (last ~20 years)
Performance Milestones
–Ethernet: 10 Mb, 100 Mb, 1000 Mb, 10,000 Mb/s (16x latency, 1000x bandwidth)
–Memory Module: 16-bit plain DRAM, Page Mode DRAM, 32b, 64b, SDRAM, DDR SDRAM (4x, 120x)
–Disk: 3600, 5400, 7200, 10,000, 15,000 RPM (8x, 143x)
(latency = simple operation w/o contention; BW = best-case)

CPUs: Archaic (Nostalgic) vs. Modern (Newfangled)
1982 Intel 80286
–12.5 MHz
–2 MIPS (peak)
–Latency: 320 ns
–134,000 xtors, 47 mm²
–16-bit data bus, 68 pins
–Microcode interpreter, separate FPU chip
–(no caches)
2001 Intel Pentium 4
–1500 MHz (120X)
–4500 MIPS (peak) (2250X)
–Latency: 15 ns (20X)
–42,000,000 xtors, 217 mm²
–64-bit data bus, 423 pins
–3-way superscalar, dynamic translation to RISC ops, superpipelined (22 stages), out-of-order execution
–On-chip 8 KB data cache, 96 KB instruction trace cache, 256 KB L2 cache

Latency Lags Bandwidth (last ~20 years)
Performance Milestones
–Processor: '286, '386, '486, Pentium, Pentium Pro, Pentium 4 (21x latency, 2250x bandwidth)
–Ethernet: 10 Mb, 100 Mb, 1000 Mb, 10,000 Mb/s (16x, 1000x)
–Memory Module: 16-bit plain DRAM, Page Mode DRAM, 32b, 64b, SDRAM, DDR SDRAM (4x, 120x)
–Disk: 3600, 5400, 7200, 10,000, 15,000 RPM (8x, 143x)
(latency = simple operation w/o contention; BW = best-case)
Note: Processor shows the biggest change, Memory the smallest.

Annual Improvement per Technology
How do we summarize the BW vs. latency change? And what about the recent change?
[Table: annual bandwidth improvement and annual latency improvement for CPU, DRAM, LAN, and Disk, over all milestones and over the last 3 milestones; the numeric entries were lost in transcription.]
Again, CPU shows the fastest change and DRAM the slowest.

Towards a Rule of Thumb
How long does it take for bandwidth to double? How much does latency improve in that time? And what about recently?
[Table: time for bandwidth to double (years) and the latency improvement over that time, per technology, for all milestones and for the last 3 milestones; the numeric entries were lost in transcription.]
Despite the faster LAN, all are 1.2X to 1.4X.

Rule of Thumb for Latency Lagging BW
In the time that bandwidth doubles, latency improves by no more than a factor of 1.2 to 1.4.
Stated alternatively: bandwidth improves by more than the square of the improvement in latency.
(And capacity improves faster than bandwidth.)
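A quick check of the "square" phrasing as a worked inequality: if latency improves by a factor of at most 1.4 while bandwidth doubles, then

```latex
(\text{latency gain})^2 \le 1.4^2 = 1.96 < 2 = \text{bandwidth gain}
```

so the bandwidth gain indeed exceeds the square of the latency gain.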

6 Reasons Latency Lags Bandwidth
1. Moore's Law helps BW more than latency
–Faster transistors, more transistors, and more pins help bandwidth:
  MPU transistors: 0.13 vs. 42 M xtors (300X)
  DRAM transistors: 0.064 vs. 256 M xtors (4000X)
  MPU pins: 68 vs. 423 pins (6X)
  DRAM pins: 16 vs. 66 pins (4X)
–Smaller, faster transistors, but they communicate over (relatively) longer lines, which limits latency:
  Feature size: 1.5 to 3 vs. 0.18 micron (8X, 17X)
  MPU die size: 47 vs. 217 mm² (ratio of sqrt → 2X)
  DRAM die size: 35 vs. 204 mm² (ratio of sqrt → 2X)

6 Reasons Latency Lags Bandwidth (cont'd)
2. Distance limits latency
–The size of a DRAM block → long bit and word lines → most of the DRAM access time
–Speed of light limits computers on a network
–Do 1. & 2. explain linear latency vs. square BW?
3. Bandwidth is easier to sell ("bigger = better")
–E.g., 10 Gbits/s Ethernet ("10 Gig") vs. 10 µsec latency Ethernet; a 4400 MB/s DIMM ("PC4400") vs. a 50 ns latency DIMM
–Even if it's just marketing, customers are now trained
–Since bandwidth sells, more resources are thrown at bandwidth, which further tips the balance

6 Reasons Latency Lags Bandwidth (cont'd)
4. Latency helps BW, but not vice versa
–Spinning the disk faster improves both bandwidth and rotational latency:
  3600 RPM → 15,000 RPM = 4.2X
  Average rotational latency: 8.3 ms → 2.0 ms
  Other things being equal, this also helps BW by 4.2X
–Lower DRAM latency → more accesses/second (higher bandwidth)
–Higher linear density helps disk BW (and capacity), but not disk latency:
  9,550 BPI → 533,000 BPI → 60X in BW
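The rotational-latency numbers follow directly from the spindle speed, since the average wait is half a revolution:

```latex
t_{\mathrm{rot}} = \frac{1}{2}\cdot\frac{60~\mathrm{s}}{\mathrm{RPM}}
\quad\Rightarrow\quad
t_{3600} = \frac{30}{3600}~\mathrm{s} \approx 8.3~\mathrm{ms},\qquad
t_{15000} = \frac{30}{15000}~\mathrm{s} = 2.0~\mathrm{ms}
```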

6 Reasons Latency Lags Bandwidth (cont'd)
5. Bandwidth hurts latency
–Queues help bandwidth but hurt latency (queuing theory)
–Adding chips to widen a memory module increases bandwidth, but the higher fan-out on the address lines may increase latency
6. Operating system overhead hurts latency more than bandwidth
–Long messages amortize overhead; overhead is a bigger part of short messages
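Reason 6 can be made concrete with the usual overhead model (our formulation, not the slide's): for a message of size s, per-message overhead o, and link bandwidth B,

```latex
\text{effective BW} = \frac{s}{o + s/B}
```

which approaches B for long messages but is dominated by the o term for short ones.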

3 Ways to Cope with Latency Lags Bandwidth
1. Caching (leveraging capacity): processor caches, file cache, disk cache
2. Replication (leveraging capacity): read from the nearest head in a RAID, from the nearest site in content distribution
3. Prediction (leveraging bandwidth): branch prediction + prefetching (disk, caches)
"If a problem has no solution, it may not be a problem, but a fact--not to be solved, but to be coped with over time" — Shimon Peres ("Peres's Law")

HW BW Example: Micro Massively Parallel Processor (µMPP)
Intel 4004 (1971): 4-bit processor, 2312 transistors, 0.4 MHz, 10 micron PMOS, 11 mm² chip
RISC II (1983): 32-bit, 5-stage pipeline, 40,760 transistors, 3 MHz, 3 micron NMOS, 60 mm² chip
–The 4004 shrinks to ~1 mm² at 3 micron
Processor = the new transistor? Cost of ownership, dependability, security vs. cost/performance => µMPP
A 250 mm² chip at 0.090 micron CMOS = 2312 RISC IIs + Icache + Dcache
–RISC II shrinks to ~0.05 mm² at 0.09 micron
–Caches via DRAM or 1-transistor SRAM?
–Proximity communication via capacitive coupling at > 1 TB/s (Ivan Sutherland, Sun)

Too Optimistic So Far? (It's Even Worse)
Optimistic: caching, replication, and prefetching get more popular to cope with the imbalance.
Pessimistic: these 3 are already fully deployed, so we must find the next set of tricks to cope; that is hard!
It's even worse: bandwidth gains are multiplied by replicated components → parallelism:
–simultaneous communication in a switched LAN
–multiple disks in a disk array
–multiple memory modules in a large memory
–multiple processors in a cluster or SMP

Conclusion: Latency Lags Bandwidth
For disk, LAN, memory, and MPU, in the time that bandwidth doubles, latency improves by no more than 1.2X to 1.4X.
–BW improves by the square of the latency improvement, or more.
Innovations may yield a one-time latency reduction, but bandwidth improvement is unrelenting.
If everything improves at the same rate, then nothing really changes.
–When rates vary, real innovation is required.
HW and SW developers should innovate assuming Latency Lags Bandwidth.