Computer Architecture I/O Systems

Recap: Virtual Machines
User Virtual Machines or ABIs
–ABI = ISA + Environment
–All or part of ABI may be implemented in software
–Easier if ABI designed with software implementation in mind (e.g., AS/400 or JVM)
System Virtual Machine Revival
–Overcome security flaws of modern OSes
–Processor performance no longer highest priority
–Manage software, manage hardware
“… VMMs give OS developers another opportunity to develop functionality no longer practical in today’s complex and ossified operating systems, where innovation moves at geologic pace.” –[Rosenblum and Garfinkel, 2005]

I/O Systems
[Diagram: processor with cache attached to a memory-I/O bus; main memory and I/O controllers for disk, graphics, and network hang off the bus; devices signal the processor via interrupts]

What Is a Bus?
A bus is a shared communication link: a single set of wires used to connect multiple subsystems
A bus is also a fundamental tool for composing large, complex systems
–Systematic means of abstraction
[Diagram: the five classic components (processor control and datapath, memory, input, output) joined by a bus]

Advantages of Buses
Versatility:
–New devices can be added easily
–Peripherals can be moved between computer systems that use the same bus standard
Low cost:
–A single set of wires is shared in multiple ways

Disadvantages of Buses
A bus creates a communication bottleneck
–The bandwidth of the bus can limit the maximum I/O throughput
The maximum bus speed is largely limited by:
–The length of the bus
–The number of devices on the bus
–The need to support a range of devices with:
»Widely varying latencies
»Widely varying data transfer rates

The General Organization of a Bus
Control lines:
–Signal requests and acknowledgments
–Indicate what type of information is on the data lines
Data lines carry information between the source and the destination:
–Data and addresses
–Complex commands

Master versus Slave
A bus transaction includes two parts:
–Issuing the command (and address) – the request
–Transferring the data – the action
The master is the one who starts the bus transaction by:
–Issuing the command (and address)
The slave is the one who responds to the address by:
–Sending data to the master if the master asks for data
–Receiving data from the master if the master wants to send data
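As a rough illustration of the request/action split (this is a toy model, not any real bus protocol; every class and method name below is invented):

# Toy model of a bus transaction: the master issues a command and
# address (the request), then data moves either way (the action).
class BusSlave:
    def __init__(self):
        self.storage = {}

    def respond(self, command, address, data=None):
        if command == "read":
            return self.storage.get(address, 0)   # send data to the master
        elif command == "write":
            self.storage[address] = data          # receive data from the master
            return None
        raise ValueError(f"unknown command: {command}")

class BusMaster:
    def __init__(self, slave):
        self.slave = slave

    def transaction(self, command, address, data=None):
        # Part 1: issue the command (and address) -- the request
        # Part 2: transfer the data -- the action
        return self.slave.respond(command, address, data)

bus = BusMaster(BusSlave())
bus.transaction("write", 0x1000, 42)
assert bus.transaction("read", 0x1000) == 42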

Types of Buses
Processor-memory bus (design specific)
–Short and high speed
–Only needs to match the memory system
»Maximize memory-to-processor bandwidth
–Connects directly to the processor
–Optimized for cache block transfers
I/O bus (industry standard)
–Usually lengthy and slower
–Needs to match a wide range of I/O devices
–Connects to the processor-memory bus or backplane bus
Backplane bus (standard or proprietary)
–Backplane: an interconnection structure within the chassis
–Allows processors, memory, and I/O devices to coexist
–Cost advantage: one bus for all components
Serial buses (wholesale move to serial interconnects)

A Computer System with One Bus: Backplane Bus
A single bus (the backplane bus) is used for:
–Processor-to-memory communication
–Communication between I/O devices and memory
Advantages: simple and low cost
Disadvantages: slow, and the bus can become a major bottleneck
Example: IBM PC-AT
[Diagram: processor, memory, and I/O devices all attached to one backplane bus]

A Two-Bus System
I/O buses tap into the processor-memory bus via bus adaptors:
–Processor-memory bus: mainly for processor-memory traffic
–I/O buses: provide expansion slots for I/O devices
Example: Apple Macintosh II
–NuBus: processor, memory, and a few selected I/O devices
–SCSI bus: the rest of the I/O devices
[Diagram: processor and memory on a processor-memory bus, with bus adaptors bridging down to multiple I/O buses]

A Three-Bus System (+ Backside Cache)
A small number of backplane buses tap into the processor-memory bus
–Processor-memory bus is used only for processor-memory traffic
–I/O buses are connected to the backplane bus
Advantage: loading on the processor bus is greatly reduced
[Diagram: processor with an L2 cache on a backside cache bus; bus adaptors bridge the processor-memory bus to a backplane bus, which in turn hosts the I/O buses]

The Move from Parallel to Serial I/O
Shared parallel bus wires with a central bus arbiter:
–Parallel bus clock rate limited by clock skew across a long bus (~100 MHz)
–High power to drive a large number of loaded bus lines
–Central bus arbiter adds latency to each transaction; sharing limits throughput
–Expensive parallel connectors and backplanes/cables (all devices pay the costs)
–Examples: VMEbus, Sbus, ISA bus, PCI, SCSI, IDE
Dedicated point-to-point serial links:
–Run at multi-gigabit speeds using advanced clock/signal encoding (requires lots of circuitry at each end)
–Lower power, since each link drives only one well-behaved load
–Multiple simultaneous transfers
–Cheap cables and connectors (trade greater endpoint transistor cost for lower physical wiring cost); bandwidth can be customized per device using multiple links in parallel
–Examples: Ethernet, InfiniBand, PCI Express, SATA, USB, FireWire, etc.

Main Components of an Intel Chipset: Pentium 4
Northbridge:
–Handles memory
–Graphics
Southbridge: I/O
–PCI bus
–Disk controllers
–USB controllers
–Audio
–Serial I/O
–Interrupt controller
–Timers

Disk Storage
Shift in focus from computation to communication and storage of information
–E.g., Cray Research/Thinking Machines vs. Google/Yahoo
–“The Computing Revolution” (1960s to 1980s) => “The Information Age” (1990 to today)
Storage emphasizes reliability and scalability as well as cost-performance
What is the “software king” that determines which HW features are actually used?
–Operating system for storage
–c.f. compiler for processor

Disk Figure of Merit: Areal Density
Bits recorded along a track
–Metric is Bits Per Inch (BPI)
Number of tracks per surface
–Metric is Tracks Per Inch (TPI)
Disk designs brag about bit density per unit area
–Metric is Bits Per Square Inch: Areal Density = BPI x TPI
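A quick sketch of the areal-density arithmetic; the BPI/TPI figures below are made up for illustration, not taken from any particular drive:

# Areal density = BPI x TPI (bits per square inch).
bits_per_inch   = 900_000     # bits recorded along a track (BPI, assumed)
tracks_per_inch = 190_000     # tracks per surface (TPI, assumed)

areal_density = bits_per_inch * tracks_per_inch   # bits per square inch
print(f"Areal density: {areal_density / 1e9:.0f} Gbit/in^2")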

Historical Perspective
1956 IBM RAMAC — early 1970s Winchester
–Developed for mainframe computers, proprietary interfaces
–Steady shrink in form factor: 27 in. to 14 in.
–Form factor and capacity drive the market more than performance
1970s developments
–8" and 5.25" floppy disk form factors (microcode into mainframe)
–Emergence of industry-standard disk interfaces
Early 1980s: PCs and first-generation workstations
Mid 1980s: Client/server computing
–Centralized storage on file server
»Accelerates disk downsizing: 8 inch to 5.25 inch
–Mass-market disk drives become a reality
»Industry standards: SCSI, IPI, IDE
»5.25 inch to 3.5 inch drives for PCs; end of proprietary interfaces
1990s: Laptops => 2.5 inch drives
2000s: What new devices will lead to new drives?

Future Disk Size and Performance
Continued advance in capacity (60%/yr) and bandwidth (40%/yr)
Slow improvement in seek, rotation (8%/yr)
Time to read whole disk:

Year   Sequentially   Randomly (1 sector/seek)
1990   4 minutes      6 hours
2000   12 minutes     1 week(!)
2006   56 minutes     3 weeks (SCSI)
2006   171 minutes    7 weeks (SATA)
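A back-of-the-envelope sketch of where that sequential/random gap comes from; the capacity, bandwidth, and latency parameters below are assumed for illustration:

# Time to read a whole disk: sequentially vs. one 512 B sector per seek.
capacity_bytes  = 500e9      # 500 GB drive (assumed)
bandwidth       = 100e6      # 100 MB/s sustained transfer (assumed)
seek_plus_rot_s = 8e-3       # avg seek + rotational latency (assumed)
sector_bytes    = 512

sequential_s = capacity_bytes / bandwidth
random_s     = (capacity_bytes / sector_bytes) * seek_plus_rot_s

print(f"Sequential: {sequential_s / 60:.0f} minutes")      # ~83 minutes
print(f"Random:     {random_s / 86400 / 7:.0f} weeks")      # ~13 weeks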

Use Arrays of Small Disks?
Katz and Patterson asked in 1987: “Can smaller disks be used to close the gap in performance between disks and CPUs?”
[Diagram: conventional approach uses four disk designs (14", 10", 5.25", 3.5") spanning low end to high end; a disk array uses just one small-disk design]

Advantages of Small-Form-Factor Disk Drives
–Low cost/MB
–High MB/volume
–High MB/watt
–Low cost/actuator
Cost and environmental efficiencies

Replace Small Number of Large Disks with Large Number of Small Disks! (1988 Disks)

              IBM 3390K     IBM 3.5" 0061    x70 array
Capacity      20 GBytes     320 MBytes       23 GBytes
Volume        97 cu. ft.    0.1 cu. ft.      11 cu. ft.   (9X smaller)
Power         3 KW          11 W             1 KW         (3X lower)
Data Rate     15 MB/s       1.5 MB/s         120 MB/s     (8X higher)
I/O Rate      600 I/Os/s    55 I/Os/s        3900 I/Os/s  (6X higher)
MTTF          250 KHrs      50 KHrs          ??? Hrs
Cost          $250K         $2K              $150K

Disk arrays have the potential for large data and I/O rates, high MB per cu. ft., and high MB per KW, but what about reliability?
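The X-factor annotations in the table follow directly from its figures; a small sanity-check sketch:

# Reproduce the ratios for the 70-disk array vs. the IBM 3390K (1988 figures).
large = {"volume_cuft": 97, "power_W": 3000, "data_rate_MBps": 15, "io_per_s": 600}
array = {"volume_cuft": 11, "power_W": 1000, "data_rate_MBps": 120, "io_per_s": 3900}

print(f"Volume:    {large['volume_cuft'] / array['volume_cuft']:.0f}X smaller")
print(f"Power:     {large['power_W'] / array['power_W']:.0f}X lower")
print(f"Data rate: {array['data_rate_MBps'] / large['data_rate_MBps']:.0f}X higher")
print(f"I/O rate:  {array['io_per_s'] / large['io_per_s']:.1f}X higher")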

Array Reliability
Reliability of N disks = reliability of 1 disk ÷ N
–50,000 hours ÷ 70 disks = 700 hours
–Disk system MTTF drops from 6 years to 1 month!
Arrays (without redundancy) are too unreliable to be useful!
Hot spares support reconstruction in parallel with access: very high media availability can be achieved
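A minimal sketch of that MTTF arithmetic, assuming independent, exponentially distributed disk failures:

# Without redundancy, array MTTF scales as 1/N.
single_disk_mttf_hours = 50_000
n_disks = 70

array_mttf_hours = single_disk_mttf_hours / n_disks
print(f"Array MTTF: {array_mttf_hours:.0f} hours "
      f"(~{array_mttf_hours / 720:.1f} months)")   # ~714 hours, about 1 month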

CS252 Administrivia
Next week is project meetings
–Should have “interesting” results by then
November 12 is Veterans’ Day holiday
–Schedule: Monday project meetings on Tuesday, 1-3pm
Second midterm Tuesday Nov 20 in class
–Focus on multiprocessor/multithreading issues
–We’ll assume you’ll have worked through the practice questions, i.e., be familiar with cache protocols

Redundant Arrays of (Inexpensive) Disks
Files are “striped” across multiple disks
Redundancy yields high data availability
–Availability: service still provided to user, even if some components have failed
Disks will still fail
Contents are reconstructed from data redundantly stored in the array
–Capacity penalty to store redundant info
–Bandwidth penalty to update redundant info

RAID 1: Disk Mirroring/Shadowing
Each disk is fully duplicated onto its “mirror”
–Very high availability can be achieved
Bandwidth sacrifice on write: logical write = two physical writes
Reads may be optimized
Most expensive solution: 100% capacity overhead
(RAID 2 is not interesting, so we skip it)
[Diagram: each disk paired with its mirror inside a recovery group]

RAID 3: Parity Disk
P contains the sum of the other disks per stripe, mod 2 (“parity”)
If a disk fails, subtract P from the sum of the other disks to find the missing information
[Diagram: logical records striped as physical records across the data disks, with a dedicated parity disk P]

RAID 3
Sum computed across the recovery group to protect against hard disk failures, stored on the P disk
Logically a single high-capacity, high-transfer-rate disk: good for large transfers
Wider arrays reduce capacity costs but decrease availability
33% capacity cost for parity with 3 data disks and 1 parity disk
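A minimal sketch of the parity idea: the mod-2 sum is a bytewise XOR, and the same XOR recovers a lost disk (the data values below are arbitrary):

# Parity P is the bytewise XOR of the data disks in a stripe.
d0 = bytes([0x0F, 0xAA])
d1 = bytes([0xF0, 0x55])
d2 = bytes([0x33, 0xCC])

parity = bytes(a ^ b ^ c for a, b, c in zip(d0, d1, d2))

# Suppose d1 fails: rebuild it by XOR-ing P with the survivors.
rebuilt_d1 = bytes(p ^ a ^ c for p, a, c in zip(parity, d0, d2))
assert rebuilt_d1 == d1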

Inspiration for RAID 4
RAID 3 relies on the parity disk to discover errors on read
But every sector has an error-detection field
To catch errors on read, rely on the error-detection field rather than the parity disk
This allows independent reads to different disks simultaneously

RAID 4: High I/O Rate Parity
Example: small read of D0 & D5, large write of D12-D15
[Diagram: insides of 5 disks; data blocks D0-D23 striped across four disk columns, with the parity block P for each stripe on the fifth disk; logical disk addresses increase across the stripes]

Inspiration for RAID 5
RAID 4 works well for small reads
Small writes (write to one disk):
–Option 1: read the other data disks, create the new sum, and write it to the parity disk
–Option 2: since P has the old sum, compare old data to new data and add the difference to P
Small writes are limited by the parity disk: writes to D0 and D5 must both also write to the P disk
[Diagram: two stripes (D0-D3 + P, D4-D7 + P) sharing the same parity disk, so every small write queues behind it]

RAID 5: High I/O Rate Interleaved Parity
Independent writes are possible because of interleaved parity
Example: writes to D0 and D5 use disks 0, 1, 3, and 4
[Diagram: data blocks D0-D23 with parity blocks P rotated across all five disk columns; logical disk addresses increase across the stripes]

Problems of Disk Arrays: Small Writes
RAID-5 small write algorithm: 1 logical write = 2 physical reads + 2 physical writes
–(1) Read old data D0
–(2) Read old parity P
–(3) Write new data D0'
–(4) Write new parity P' = (old data XOR new data) XOR old parity
[Diagram: new data D0' combined by XOR with old data D0 and old parity P to produce the new parity P']
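A sketch of that parity-update step, assuming bytewise XOR parity as above (the helper name and values are illustrative):

# RAID-5 small write: read old data and old parity, write new data
# and new parity. New parity = old parity XOR old data XOR new data.
def raid5_new_parity(old_data: bytes, new_data: bytes,
                     old_parity: bytes) -> bytes:
    return bytes(p ^ o ^ n
                 for p, o, n in zip(old_parity, old_data, new_data))

old_d0, new_d0 = bytes([0x12]), bytes([0x34])
old_p = bytes([0x77])
new_p = raid5_new_parity(old_d0, new_d0, old_p)
print(new_p.hex())   # the parity disk now reflects the updated D0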

RAID 6: Recovering from 2 Failures
Why recover from more than 1 failure?
–An operator accidentally replaces the wrong disk during a failure
–Since disk bandwidth is growing more slowly than disk capacity, the MTTR (mean time to repair) of a disk in a RAID system is increasing => a greater chance of a 2nd failure during repair, since repair takes longer
–Reading much more data during reconstruction means an increased chance of an uncorrectable media failure, which would result in data loss
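A rough way to see why longer repairs matter, under a simple assumed model of independent failures (all parameters below are illustrative):

# Probability that one of the remaining N-1 disks fails while the
# first failed disk is being repaired (first-order approximation).
disk_mttf_hours = 500_000
n_disks = 100
repair_hours = 24          # MTTR grows as disks get bigger

p_second_failure = (n_disks - 1) * repair_hours / disk_mttf_hours
print(f"P(2nd failure during repair) ~ {p_second_failure:.2%}")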

Berkeley History: RAID-I
RAID-I (1989)
–Consisted of a Sun 4/280 workstation with 128 MB of DRAM, four dual-string SCSI controllers, 5.25-inch SCSI disks, and specialized disk-striping software
Today RAID is a $24 billion industry; 80% of non-PC disks are sold in RAIDs

Summary: RAID Techniques
Goal was performance; popularity is due to reliability of storage
Disk mirroring/shadowing (RAID 1)
–Each disk is fully duplicated onto its “shadow”
–Logical write = two physical writes
–100% capacity overhead
Parity data bandwidth array (RAID 3)
–Parity computed horizontally
–Logically a single high-data-bandwidth disk
High I/O rate parity array (RAID 5)
–Interleaved parity blocks
–Independent reads and writes
–Logical write = 2 reads + 2 writes

I/O Performance
Metrics: response time vs. throughput
Response time = queue time + device service time
[Diagram: producer feeds a queue in front of the I/O controller (IOC) and device; plot of response time (ms) vs. throughput (% of total BW) rises steeply as throughput approaches 100%]
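To see the shape of that curve, here is a sketch using a simple M/M/1 queueing model (an assumption; real devices differ), where response time = service time / (1 - utilization):

# Response time grows without bound as throughput nears 100% of BW.
service_time_ms = 5.0
for utilization in (0.2, 0.5, 0.8, 0.95):
    response_ms = service_time_ms / (1 - utilization)   # queue + service
    print(f"{utilization:>4.0%} of total BW -> {response_ms:6.1f} ms")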

I/O Benchmarks
For better or worse, benchmarks shape a field
–Processor benchmarks classically aimed at response time for a fixed-size problem
–I/O benchmarks typically measure throughput, possibly with an upper limit on response times (or on 90% of response times)
Transaction Processing (TP, or On-Line TP = OLTP)
–If the bank computer fails when a customer withdraws money, the TP system guarantees the account is debited if the customer gets the money, and the account is unchanged if not
–Airline reservation systems and banks use TP
–Atomic transactions make this work
Classic metric is Transactions Per Second (TPS)

I/O Benchmarks: Transaction Processing
Early 1980s: great interest in OLTP
–Expecting demand for high TPS (e.g., ATMs, credit cards)
–Tandem’s success implied the medium-range OLTP market would expand
–Each vendor picked its own conditions for TPS claims, reporting only CPU times with widely different I/O
–Conflicting claims led to disbelief of all benchmarks => chaos
1984: Jim Gray (Tandem) distributed a paper to Tandem and 19 people at other companies proposing a standard benchmark
Published “A measure of transaction processing power,” Datamation, 1985, by Anonymous et al.
–To indicate that this was the effort of a large group
–To avoid delays from the legal department of each author’s firm
–(Tandem still gets mail addressed to the author “Anonymous”)
Led to the Transaction Processing Council in 1988

I/O Benchmarks: TP1 by Anon. et al.
DebitCredit scalability: size of account file, branches, tellers, and history are a function of throughput

TPS      Number of ATMs   Account-file size
10       1,000            0.1 GB
100      10,000           1.0 GB
1,000    100,000          10.0 GB
10,000   1,000,000        100.0 GB

–Each reported TPS requires 100,000 account records, 10 branches, and 100 ATMs
–Accounts must grow, since a person is not likely to use the bank more frequently just because the bank has a faster computer!
Response time: 95% of transactions take ≤ 1 second
Report price (initial purchase price + 5-year maintenance = cost of ownership)
Hire an auditor to certify results
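The scaling rule is mechanical enough to write down; a sketch assuming roughly 100 bytes per account record (an illustrative figure consistent with the table above):

# DebitCredit scaling: each TPS of claimed throughput must come with
# 100,000 account records, 10 branches, and 100 ATMs.
def debitcredit_scale(tps: int) -> dict:
    accounts = tps * 100_000
    return {
        "ATMs": tps * 100,
        "branches": tps * 10,
        "account_file_GB": accounts * 100 / 1e9,   # ~100 B/record (assumed)
    }

print(debitcredit_scale(10))      # -> 1,000 ATMs, 0.1 GB account file
print(debitcredit_scale(10_000))  # -> 1,000,000 ATMs, 100 GB account file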

Unusual Characteristics of TPC
Price is included in the benchmarks
–Cost of HW, SW, and 5-year maintenance agreements is included => price-performance as well as performance
The data set generally must scale in size as the throughput increases
–Trying to model real systems: demand on the system and the size of the data stored in it grow together
The benchmark results are audited
–Must be approved by a certified TPC auditor, who enforces the TPC rules => only fair results are submitted
Throughput is the performance metric, but response times are limited
–E.g., TPC-C: 90% of transaction response times < 5 seconds
An independent organization maintains the benchmarks
–COO ballots on changes, meetings to settle disputes, ...

TPC Benchmark History/Status

I/O Benchmarks via SPEC: SFS 3.0
Attempt by NFS companies to agree on a standard benchmark
–Run on multiple clients & networks (to prevent bottlenecks)
–Same caching policy in all clients
–Reads: 85% full block & 15% partial blocks
–Writes: 50% full block & 50% partial blocks
–Average response time: 40 ms
–Scaling: for every 100 NFS ops/sec, increase capacity by 1 GB
Results: plot of server load (throughput) vs. response time & number of users
–Assumes 1 user => 10 NFS ops/sec
–Version 3.0 of the benchmark is for NFS 3.0
SPEC also added SPECMail (mail server) and SPECWeb (web server) benchmarks

Example SPEC SFS Result: NetApp FAS3050c NFS Servers
–2.8 GHz Pentium Xeon microprocessors, 2 GB of DRAM per processor, 1 GB of non-volatile memory per system
–4 FDDI networks; 32 NFS daemons; 24 GB file size
–168 Fibre Channel disks: 72 GB, 15,000 RPM, 2 or 4 FC controllers

Flash: The Future of Disks?
The advent of digital cameras, MP3 players, etc. has driven a market for low-cost non-volatile flash memory
–Other promising technologies in development: phase-change RAM, magnetic RAM (core returns!), but flash has a big lead
In 2007, several announcements of flash-based disk replacements for laptops/servers
–SanDisk, Samsung, I/O fusion, ...
Flash drive advantages:
–Lower power (no moving parts)
–Much faster seek time, 100X I/Os per second (no moving parts)
–Greater reliability (no moving parts)
–Lower noise (no moving parts)
Flash disadvantages:
–Cost (20-100X disk cost/GB)
–Slow writes with current designs (competitive with disks)
–Write endurance: not an issue for most applications, since wear-leveling spreads writes around the blocks on a chip
Potential benefit of flash is hidden behind the standard disk interface
–Scope for massive rethinking of storage architecture if non-volatile memory moves into the memory hierarchy and is accessed via processor loads/stores rather than seek/read/write