CS152 Computer Architecture and Engineering Lecture 25 I/O and Storage Systems Continued April 24, 2001 John Kubiatowicz (http.cs.berkeley.edu/~kubitron) lecture slides:

The Big Picture: Where are We Now? °Today’s Topic: I/O Systems [Figure: processor (control, datapath), memory, input, and output connected by a network/bus]

Recap: A Multi-Bus System [Figure: processor with a backside cache bus to the L2 cache, a memory bus through the NorthBridge to memory, and I/O buses attached via bus adaptors and the SouthBridge] °Separate sets of pins for different functions: memory bus, caches, graphics bus (for fast frame buffer); I/O buses are connected to the backplane bus °Advantage: buses can run at different speeds, with much less overall loading!

Recap: Main components of Intel Chipset: Pentium II/III °Northbridge: Handles memory, graphics °Southbridge: I/O, PCI bus, disk controllers, USB controllers, audio, serial I/O, interrupt controller, timers

Recap: Bus Summary °Buses are an important technique for building large-scale systems Their speed is critically dependent on factors such as length, number of devices, etc. Critically limited by capacitance Tricks: esoteric drive technology such as GTL °Important terminology: Master: The device that can initiate new transactions Slaves: Devices that respond to the master °Two types of bus timing: Synchronous: bus includes clock Asynchronous: no clock, just REQ/ACK strobing °Direct Memory Access (DMA) allows fast, burst transfers into the processor’s memory: Processor’s memory acts like a slave Probably requires some form of cache coherence so that DMA’ed memory can be invalidated from the cache

Recap: Technology Trends °Disk capacity now doubles every 18 months; before 1990 it doubled every 36 months °Today: Processing power doubles every 18 months °Today: Memory size doubles every 18 months (4X/3yr) °Today: Disk capacity doubles every 18 months °Disk positioning rate (seek + rotate) doubles every ten years! °The I/O GAP

Recap: MBits per square inch: DRAM as % of Disk over time [Chart of DRAM vs. disk areal density over time: 0.2 v. 1.7 Mb/si, 9 v. 22 Mb/si, 470 v. Mb/si] source: New York Times, 2/23/98, page C3, “Makers of disk drives crowd even more data into even smaller spaces”

Recap: Nano-layered Disk Heads °Special sensitivity of the disk head comes from the “Giant Magneto-Resistive effect” (GMR) °IBM is the leader in this technology Same technology as the TMJ-RAM breakthrough we described in an earlier class [Figure: head cross-section showing the coil for writing]

Typical Numbers of a Magnetic Disk °Rotational Latency: Most disks rotate at 3,600 to 7,200 RPM Approximately 16 ms to 8 ms per revolution, respectively An average latency to the desired information is halfway around the disk: 8 ms at 3600 RPM, 4 ms at 7200 RPM °Transfer Time is a function of: Transfer size (usually a sector): 1 KB / sector Rotation speed: 3600 RPM to 7200 RPM Recording density: bits per inch on a track Diameter: typical diameter ranges from 2.5 to 5.25 inches Typical values: 2 to 40 MB per second [Figure: disk geometry: platter, head, cylinder, track, sector]
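The rotational numbers above follow directly from the spin rate; a minimal sketch (the helper name is mine, not from the slides) that reproduces them:

```python
def avg_rotational_latency_ms(rpm):
    """Average rotational latency = half a revolution, in milliseconds."""
    return (60_000 / rpm) / 2

for rpm in (3600, 7200):
    print(f"{rpm} RPM: {60_000 / rpm:.1f} ms/rev, "
          f"avg latency {avg_rotational_latency_ms(rpm):.1f} ms")
# 3600 RPM: 16.7 ms/rev, avg latency 8.3 ms  (slide rounds to 16 ms and 8 ms)
# 7200 RPM:  8.3 ms/rev, avg latency 4.2 ms  (slide rounds to 8 ms and 4 ms)
```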

Disk I/O Performance °Disk Access Time = Seek time + Rotational Latency + Transfer time + Controller Time + Queueing Delay °Estimating Queue Length: Utilization = u = Request Rate / Service Rate = λ/µ Mean Queue Length = u / (1 - u) As Request Rate -> Service Rate, Mean Queue Length -> Infinity [Figure: processor issues requests at rate λ into a queue feeding the disk controller and disk, which serve at rate µ]
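A quick numeric check (names are mine) of how the mean queue length u/(1-u) blows up as the request rate approaches the service rate:

```python
def mean_queue_length(u):
    """Mean queue length = u / (1 - u), for utilization 0 <= u < 1."""
    return u / (1 - u)

for u in (0.2, 0.5, 0.8, 0.95, 0.99):
    print(f"u = {u:.2f}: mean queue length = {mean_queue_length(u):.2f}")
# 0.25, 1.00, 4.00, 19.00, 99.00 -- grows without bound as u -> 1
```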

Disk Device Terminology °Disk Latency = Queueing Time + Controller time + Seek Time + Rotation Time + Xfer Time °Order of magnitude times for 4K byte transfers: Average Seek: 8 ms or less Rotate: 4.2 ms @ 7200 rpm (8.3 ms @ 3600 rpm) Xfer: 1 ms @ 7200 rpm

Example °512 byte sector, rotate at 5400 RPM, advertised seek is 12 ms, transfer rate is 4 MB/sec, controller overhead is 1 ms, queue idle so no service time °Disk Access Time = Seek time + Rotational Latency + Transfer time + Controller Time + Queueing Delay °Disk Access Time = 12 ms + 0.5 rev / 5400 RPM + 0.5 KB / 4 MB/s + 1 ms + 0 °Disk Access Time = 12 ms + 0.5 / 90 RPS + 0.125 / 1024 s + 1 ms + 0 °Disk Access Time = 12 ms + 5.5 ms + 0.1 ms + 1 ms + 0 ms °Disk Access Time = 18.6 ms °If real seeks are 1/3 of advertised seeks, then it’s 10.6 ms, with rotation delay at 50% of the time!
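A small sketch (function and parameter names are mine) that reproduces the slide’s arithmetic:

```python
def disk_access_time_ms(seek_ms, rpm, sector_kb, xfer_mb_s, ctrl_ms, queue_ms=0.0):
    """Seek + average rotational latency + transfer + controller + queueing."""
    rotation_ms = 0.5 * 60_000 / rpm                     # half a revolution on average
    transfer_ms = sector_kb / (xfer_mb_s * 1024) * 1000  # sector size / transfer rate
    return seek_ms + rotation_ms + transfer_ms + ctrl_ms + queue_ms

print(disk_access_time_ms(12, 5400, 0.5, 4, 1))   # ~18.7 ms (slide rounds to 18.6 ms)
print(disk_access_time_ms(4, 5400, 0.5, 4, 1))    # ~10.7 ms with seeks at 1/3 advertised
```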

Reliability and Availability °Two terms that are often confused: Reliability: Is anything broken? Availability: Is the system still available to the user? °Availability can be improved by adding hardware: Example: adding ECC on memory °Reliability can only be improved by: Better environmental conditions Building more reliable components Building with fewer components -Improving availability may come at the cost of lower reliability

Simple Producer-Server Model [Figure: producer -> queue -> server] °Throughput: The number of tasks completed by the server in unit time In order to get the highest possible throughput: -The server should never be idle -The queue should never be empty °Response time: Begins when a task is placed in the queue Ends when it is completed by the server In order to minimize the response time: -The queue should be empty -The server will be idle

Disk I/O Performance °Response time = Queue + Device Service time °Metrics: Response Time, Throughput °Latency goes as T ser x u/(1-u), where u = utilization [Plot: response time (ms) vs. throughput (utilization, % total BW); response time rises steeply as utilization approaches 100%]

Introduction to Queueing Theory °Queueing Theory applies to long term, steady state behavior => Arrival rate = Departure rate °Little’s Law: Mean number of tasks in system = arrival rate x mean response time Observed by many, Little was first to prove Simple interpretation: you should see the same number of tasks in queue when entering as when leaving °Applies to any system in equilibrium, as long as nothing in the black box is creating or destroying tasks [Figure: “Black Box” queueing system with arrivals and departures]

A Little Queuing Theory: Notation °Queuing models assume state of equilibrium: input rate = output rate °Notation: λ average number of arriving customers/second T ser average time to service a customer (traditionally µ = 1/T ser ) u server utilization (0..1): u = λ x T ser (or u = λ/µ) T q average time/customer in queue T sys average time/customer in system: T sys = T q + T ser L q average length of queue: L q = λ x T q L sys average length of system: L sys = λ x T sys °Little’s Law: L sys = λ x T sys (Mean number of customers = arrival rate x mean time in system) [Figure: Proc -> queue -> server (IOC, device); queue plus server make up the system]

A Little Queuing Theory: Use of random distributions °Server spends a variable amount of time with customers Weighted mean m1 = (f1 x T1 + f2 x T2 + … + fn x Tn)/F = Σ p(T)xT Variance = (f1 x T1² + f2 x T2² + … + fn x Tn²)/F – m1² = Σ p(T)xT² – m1² Squared coefficient of variance: C = variance/m1² -Unitless measure (100 ms² vs. 0.1 s²) °Exponential distribution, C = 1: most short relative to average, few others long; 90% < 2.3 x average, 63% < average °Hypoexponential distribution, C < 1: most close to average; C = 0.5 => 90% < 2.0 x average, 57% < average °Hyperexponential distribution, C > 1: further from average; C = 2.0 => 90% < 2.8 x average, 69% < average [Figure: Proc -> queue -> server system; distributions sketched around their averages]
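A minimal sketch (helper name is mine) computing m1, variance, and C from samples, and confirming that exponentially distributed service times give C close to 1:

```python
import random

def dist_stats(samples):
    """Weighted mean m1, variance, and squared coefficient of variance C."""
    n = len(samples)
    m1 = sum(samples) / n
    var = sum(t * t for t in samples) / n - m1 * m1
    return m1, var, var / (m1 * m1)

random.seed(0)
exp_times = [random.expovariate(1 / 20.0) for _ in range(100_000)]  # mean 20 ms
m1, var, c = dist_stats(exp_times)
print(f"m1 = {m1:.1f} ms, C = {c:.2f}")   # C comes out close to 1
```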

A Little Queuing Theory: Variable Service Time °Disk response times have C ≈ 1.5 (majority of seeks < average) °Yet we usually pick C = 1.0 for simplicity Memoryless, exponential distribution Many complex systems are well described by a memoryless distribution! °Another useful value is the average time one must wait for the server to complete its current task: m1(z) Called “Average Residual Wait Time” Not just 1/2 x m1 because that doesn’t capture variance Can derive m1(z) = 1/2 x m1 x (1 + C) No variance: C = 0 => m1(z) = 1/2 x m1 Exponential: C = 1 => m1(z) = m1 [Figure: Proc -> queue -> server system]
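The residual-wait formula in three lines (a sketch; the function name is mine), checking the two limiting cases from the slide:

```python
def residual_wait(m1, c):
    """Average residual wait time: m1(z) = 1/2 x m1 x (1 + C)."""
    return 0.5 * m1 * (1 + c)

print(residual_wait(20.0, 0.0))   # 10.0: no variance  => half the mean
print(residual_wait(20.0, 1.0))   # 20.0: exponential  => the full mean
print(residual_wait(20.0, 1.5))   # 25.0: disk-like C of ~1.5
```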

A Little Queuing Theory: Average Wait Time °Calculating average wait time in queue T q : All customers in line must complete; avg time per customer: m1 = T ser = 1/µ If something is at the server, it takes on average m1(z) to complete -Chance the server is busy = u = λ/µ; average delay is u x m1(z) T q = u x m1(z) + L q x T ser T q = u x m1(z) + λ x T q x T ser (Little’s Law) T q = u x m1(z) + u x T q (Defn of utilization u) T q x (1 – u) = m1(z) x u T q = m1(z) x u/(1-u) = T ser x {1/2 x (1+C)} x u/(1-u) °Notation: λ average number of arriving customers/second T ser average time to service a customer u server utilization (0..1): u = λ x T ser T q average time/customer in queue L q average length of queue: L q = λ x T q m1(z) average residual wait time = T ser x {1/2 x (1+C)}

A Little Queuing Theory: M/G/1 and M/M/1 °Assumptions so far: System in equilibrium Times between two successive arrivals in line are random Server can start on next customer immediately after prior finishes No limit to the queue: works First-In-First-Out Afterward, all customers in line must complete; each avg T ser °Described “memoryless” or Markovian request arrival (M for C=1, exponentially random), General service distribution (no restrictions), 1 server: M/G/1 queue °When service times have C = 1, M/M/1 queue: T q = T ser x u / (1 – u) T ser average time to service a customer u server utilization (0..1): u = λ x T ser T q average time/customer in queue
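Both queue-time formulas as code (a sketch under the slide's assumptions; names are mine), showing M/M/1 is just M/G/1 at C = 1:

```python
def tq_mg1(t_ser, u, c):
    """M/G/1: Tq = Tser x (1/2 x (1 + C)) x u / (1 - u)."""
    return t_ser * 0.5 * (1 + c) * u / (1 - u)

def tq_mm1(t_ser, u):
    """M/M/1 (the C = 1 special case): Tq = Tser x u / (1 - u)."""
    return t_ser * u / (1 - u)

print(tq_mg1(20.0, 0.2, 1.0))   # 5.0 ms
print(tq_mm1(20.0, 0.2))        # 5.0 ms -- identical when C = 1
```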

A Little Queuing Theory: An Example °Processor sends 10 x 8KB disk I/Os per second; requests & service exponentially distributed; avg. disk service = 20 ms This number comes from the disk equation: Service time = Ave seek + ave rot delay + transfer time + ctrl overhead °On average, how utilized is the disk? What is the number of requests in the queue? What is the average time spent in the queue? What is the average response time for a disk request? °Notation: λ average number of arriving customers/second = 10 T ser average time to service a customer = 20 ms (0.02s) u server utilization (0..1): u = λ x T ser = 10/s x 0.02s = 0.2 T q average time/customer in queue = T ser x u / (1 – u) = 20 x 0.2/(1-0.2) = 20 x 0.25 = 5 ms (0.005s) T sys average time/customer in system: T sys = T q + T ser = 25 ms L q average length of queue: L q = λ x T q = 10/s x 0.005s = 0.05 requests in queue L sys average # tasks in system: L sys = λ x T sys = 10/s x 0.025s = 0.25
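The whole worked example in a few lines of Python (variable names are mine), reproducing every number on the slide:

```python
lam = 10.0        # λ: disk I/Os arriving per second
t_ser = 0.020     # Tser: 20 ms average disk service time

u = lam * t_ser              # utilization          = 0.2
t_q = t_ser * u / (1 - u)    # M/M/1 time in queue  = 0.005 s (5 ms)
t_sys = t_q + t_ser          # response time        = 0.025 s (25 ms)
l_q = lam * t_q              # requests in queue    = 0.05  (Little's Law)
l_sys = lam * t_sys          # tasks in system      = 0.25  (Little's Law)

print(u, t_q, t_sys, l_q, l_sys)
```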

Administrivia: Not much left °Only 2 groups have put up project descriptions! Go to “Projects” link on home page °Tomorrow: Sections in lab again (119 Cory) Bring complete Lab 6 status with you -What are you doing? -How are you doing? -What is your testing strategy? °NO LECTURE ON THURSDAY! (4/26) °Midterm II next Tuesday (5/1) 277 Cory as before Pizza afterwards Topics: -Pipelining, Out-of-order scheduling, Caches, Memory, Buses, I/O °Review session this Sunday (4/29) °Remaining schedule: Lecture on Power/assorted topics (Quantum computing?) on 5/3 Wrap up lecture on 5/8 Oral presentations/contest on 5/10 Grades out by 5/12

Giving Commands to I/O Devices °Two methods are used to address the device: Special I/O instructions Memory-mapped I/O °Special I/O instructions specify: Both the device number and the command word -Device number: the processor communicates this via a set of wires normally included as part of the I/O bus -Command word: this is usually sent on the bus’s data lines °Memory-mapped I/O: Portions of the address space are assigned to I/O devices Reads and writes to those addresses are interpreted as commands to the I/O devices User programs are prevented from issuing I/O operations directly: -The I/O address space is protected by the address translation
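What memory-mapped I/O looks like from software: plain loads and stores to a reserved physical address range reach device registers instead of RAM. A hedged, Linux-only sketch using /dev/mem; the base address and register offsets are made up, and real systems normally do this in a kernel driver:

```python
import mmap
import os
import struct

DEVICE_BASE = 0xFE00_0000        # hypothetical register block; real addresses vary
PAGE = mmap.PAGESIZE

# Requires root: ordinary user programs are denied this mapping, which is
# exactly the protection-by-address-translation the slide describes.
fd = os.open("/dev/mem", os.O_RDWR | os.O_SYNC)
regs = mmap.mmap(fd, PAGE, offset=DEVICE_BASE)

status = struct.unpack_from("<I", regs, 0x00)[0]   # load  = read a status register
struct.pack_into("<I", regs, 0x04, 0x1)            # store = issue a device command
```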

Memory Mapped I/O °Single memory & I/O bus, no separate I/O instructions [Figure: CPU with L2 cache on a memory bus; a bus adaptor connects to the I/O bus, where an interface attaches each peripheral; the physical address space contains ROM, RAM, and I/O regions]

I/O Device Notifying the OS °The OS needs to know when: The I/O device has completed an operation The I/O operation has encountered an error °This can be accomplished in two different ways: I/O Interrupt: -Whenever an I/O device needs attention from the processor, it interrupts the processor from what it is currently doing Polling: -The I/O device puts information in a status register -The OS periodically checks the status register

I/O Interrupt °An I/O interrupt is just like an exception except: An I/O interrupt is asynchronous Further information needs to be conveyed °An I/O interrupt is asynchronous with respect to instruction execution: I/O interrupt is not associated with any instruction I/O interrupt does not prevent any instruction from completing -You can pick your own convenient point to take an interrupt °I/O interrupt is more complicated than an exception: Needs to convey the identity of the device generating the interrupt Interrupt requests can have different urgencies: -Interrupt requests need to be prioritized

Example: Device Interrupt
User program: add $r1,$r2,$r3; subi $r4,$r1,#4; slli $r4,$r4,#2; Hiccup(!); lw $r2,0($r4); lw $r3,4($r4); add $r2,$r2,$r3; sw 8($r4),$r2
On external interrupt: PC saved, all interrupts disabled, enter supervisor mode
“Interrupt Handler”: Raise priority; Reenable All Ints; Save registers; lw $r1,20($r0); lw $r2,0($r1); addi $r3,$r0,#5; sw $r3,0($r1); Restore registers; Clear current Int; Disable All Ints; Restore priority; RTI (restore PC, return to user mode)
°Advantage: User program progress is only halted during actual transfer °Disadvantage: special hardware is needed to: Cause an interrupt (I/O device) Detect an interrupt (processor) Save the proper states to resume after the interrupt (processor)

Alternative: Polling
Same user code, with the external interrupt “handler” inlined at a polling point: Disable Network Intr; … subi $r4,$r1,#4; slli $r4,$r4,#2; lw $r2,0($r4); lw $r3,4($r4); add $r2,$r2,$r3; sw 8($r4),$r2
Polling point (check device register): lw $r1,12($zero); beq $r1,no_mess
“Handler”: lw $r1,20($r0); lw $r2,0($r1); addi $r3,$r0,#5; sw 0($r1),$r3; Clear Network Intr
no_mess: …

Polling: Programmed I/O [Flowchart: CPU asks “Is the data ready?”; if no, loop (busy wait); if yes, read data from device, store data to memory, check “done?”, and repeat until done] °Advantage: Simple: the processor is totally in control and does all the work °Disadvantage: Polling overhead can consume a lot of CPU time -Busy waiting is not an efficient way to use the CPU unless the device is very fast! -But checks for I/O completion can be dispersed among computation-intensive code
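The busy-wait loop from the flowchart as a sketch; read_status and read_data stand in for hypothetical device-register accessors:

```python
DATA_READY = 0x1   # hypothetical "ready" bit in the device status register

def polled_read(read_status, read_data, nbytes):
    """Programmed I/O: spin on the status register, then copy data by hand."""
    buf = bytearray()
    while len(buf) < nbytes:
        while not (read_status() & DATA_READY):
            pass                   # busy wait loop: the CPU does no useful work here
        buf += read_data()         # read data, store data
    return bytes(buf)

# Toy device: always ready, hands out 4 bytes at a time.
print(polled_read(lambda: DATA_READY, lambda: b"data", 8))   # b'datadata'
```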

Polling is faster/slower than Interrupts °Polling is faster than interrupts because: Compiler knows which registers are in use at the polling point. Hence, it does not need to save and restore registers (or not as many) Other interrupt overhead avoided (pipeline flush, trap priorities, etc.) °Polling is slower than interrupts because: Overhead of polling instructions is incurred regardless of whether or not the handler is run. This could add to inner-loop delay Device may have to wait for service for a long time °When to use one or the other? Multi-axis tradeoff -Frequent/regular events good for polling, as long as device can be controlled at user level -Interrupts good for infrequent/irregular events -Interrupts good for ensuring regular/predictable service of events

Delegating I/O Responsibility from the CPU: DMA °Direct Memory Access (DMA): External to the CPU Acts as a master on the bus Transfers blocks of data to or from memory without CPU intervention °CPU sends a starting address, direction, and length count to the DMAC, then issues “start” °DMAC provides handshake signals for the peripheral controller, and memory addresses and handshake signals for memory [Figure: CPU, memory, DMAC, and I/O controller with device on a shared bus]
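A toy simulation of the handshake the slide describes: the CPU programs the controller with address, direction, and length, then goes back to work; the DMAC moves the block and "interrupts" on completion. All class and method names are illustrative, not a real driver API:

```python
class ToyDMAC:
    """Toy DMA controller: moves a block without further CPU involvement."""
    def __init__(self, memory):
        self.memory = memory

    def start(self, addr, length, direction, device, on_done):
        # The CPU has already supplied starting address, direction, and length count.
        if direction == "to_memory":
            self.memory[addr:addr + length] = device.read(length)
        else:
            device.write(bytes(self.memory[addr:addr + length]))
        on_done()   # completion interrupt back to the CPU

class ToyDevice:
    def read(self, n): return b"x" * n
    def write(self, data): pass

mem = bytearray(64)
ToyDMAC(mem).start(addr=0, length=16, direction="to_memory",
                   device=ToyDevice(), on_done=lambda: print("DMA done interrupt"))
```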

Delegating I/O Responsibility from the CPU: IOP [Figure: CPU and IOP share the main memory bus; devices D1 … Dn hang off the I/O bus under the IOP] (1) CPU issues instruction to IOP: OP, Device, Address (target device, where commands are) (2) IOP looks in memory for commands: OP, Addr, Cnt, Other (what to do, where to put data, how much, special requests) (3) Device to/from memory transfers are controlled by the IOP directly; IOP steals memory cycles (4) IOP interrupts CPU when done

Responsibilities of the Operating System °The operating system acts as the interface between: The I/O hardware and the program that requests I/O °Three characteristics of I/O systems: The I/O system is shared by multiple programs using the processor I/O systems often use interrupts (externally generated exceptions) to communicate information about I/O operations -Interrupts must be handled by the OS because they cause a transfer to supervisor mode The low-level control of an I/O device is complex: -Managing a set of concurrent events -The requirements for correct device control are very detailed

Operating System Requirements °Provide protection to shared I/O resources Guarantees that a user’s program can only access the portions of an I/O device to which the user has rights °Provides abstraction for accessing devices: Supply routines that handle low-level device operation °Handles the interrupts generated by I/O devices °Provide equitable access to the shared I/O resources All user programs must have equal access to the I/O resources °Schedule accesses in order to enhance system throughput

OS and I/O Systems Communication Requirements °The Operating System must be able to prevent: The user program from communicating with the I/O device directly °If user programs could perform I/O directly: Protection to the shared I/O resources could not be provided °Three types of communication are required: The OS must be able to give commands to the I/O devices The I/O device must be able to notify the OS when the I/O device has completed an operation or has encountered an error Data must be transferred between memory and an I/O device

Manufacturing Advantages of Disk Arrays [Figure: conventional disk product families span 4 disk designs (14”, 10”, 5.25”, 3.5”) from low end to high end; a disk array uses 1 disk design (3.5”)]

Small # of Large Disks => Large # of Small Disks!
                IBM 3390 (K)   IBM 3.5"      x70
Data Capacity   20 GBytes      320 MBytes    23 GBytes
Volume          97 cu. ft.     0.1 cu. ft.   11 cu. ft.
Power           3 KW           11 W          1 KW
Data Rate       15 MB/s        1.5 MB/s      120 MB/s
I/O Rate        600 I/Os/s     55 I/Os/s     3900 I/Os/s
MTTF            250 KHrs       50 KHrs       ??? Hrs
Cost            $250K          $2K           $150K
Disk Arrays have potential for large data and I/O rates, high MB per cu. ft., high MB per KW, but what about reliability?

Array Reliability °Reliability of N disks = Reliability of 1 Disk ÷ N °50,000 Hours ÷ 70 disks = 700 hours °Disk system MTTF: Drops from 6 years to 1 month! °Arrays (without redundancy) too unreliable to be useful! °Hot spares support reconstruction in parallel with access: very high media availability can be achieved
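The MTTF division on this slide, as a quick check (function name is mine):

```python
def array_mttf_hours(disk_mttf_hours, n_disks):
    """Without redundancy: array MTTF = single-disk MTTF / N."""
    return disk_mttf_hours / n_disks

mttf = array_mttf_hours(50_000, 70)
print(f"{mttf:.0f} hours, about {mttf / (24 * 30):.1f} months")
# ~714 hours (the slide rounds to 700): from ~6 years per disk to ~1 month per array
```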

Techniques: Redundant Arrays of Disks °Files are "striped" across multiple spindles; redundancy yields high data availability °Disks will fail; contents reconstructed from data redundantly stored in the array °Capacity penalty to store the redundancy; bandwidth penalty to update °Techniques: Mirroring/Shadowing (high capacity cost) Horizontal Hamming Codes (overkill) Parity & Reed-Solomon Codes Failure Prediction (no capacity overhead!) VaxSimPlus (technique is controversial)

RAID 1: Disk Mirroring/Shadowing °Each disk is fully duplicated onto its "shadow" (recovery group) Very high availability can be achieved °Bandwidth sacrifice on write: Logical write = two physical writes Reads may be optimized °Most expensive solution: 100% capacity overhead °Targeted for high I/O rate, high availability environments

RAID 3: Parity Disk [Figure: a logical record striped across physical records on data disks, plus a parity disk P] °Parity computed across recovery group to protect against hard disk failures 33% capacity cost for parity in this configuration Wider arrays reduce capacity costs, decrease expected availability, increase reconstruction time °Arms logically synchronized, spindles rotationally synchronized Logically a single high capacity, high transfer rate disk °Targeted for high bandwidth applications: Scientific, Image Processing
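The parity scheme in code: the parity block is the bytewise XOR of the data blocks, so XORing the parity with the survivors regenerates any single lost block. A minimal sketch (names are mine):

```python
from functools import reduce

def parity(blocks):
    """Parity = bytewise XOR across the data blocks of a recovery group."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

def reconstruct(survivors, parity_block):
    """A single lost block is the XOR of the parity block and the survivors."""
    return parity(list(survivors) + [parity_block])

data = [bytes([i] * 4) for i in (1, 2, 3)]             # striped data blocks
p = parity(data)
assert reconstruct([data[0], data[2]], p) == data[1]   # lost disk recovered
```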

RAID 5+: High I/O Rate Parity °A logical write becomes four physical I/Os °Independent writes possible because of interleaved parity °Reed-Solomon Codes ("Q") for protection during reconstruction °Targeted for mixed applications [Figure: data blocks D0–D23 and parity blocks P rotated across 5 disk columns in stripe units; logical disk addresses increase down the columns]

Problems of Disk Arrays: Small Writes °RAID-5: Small Write Algorithm °1 Logical Write = 2 Physical Reads + 2 Physical Writes [Figure: new data D0' is XORed with old data D0 (1. Read) and old parity P (2. Read); then new data is written (3. Write) and new parity P' is written (4. Write)]
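The small-write parity update in one line of XOR; a sketch (names are mine) of the four-I/O algorithm the figure shows:

```python
def raid5_small_write(old_data, new_data, old_parity):
    """New parity = old data XOR new data XOR old parity.
    Cost: 2 physical reads (old data, old parity) + 2 physical writes
    (new data, new parity) for one logical write."""
    return bytes(od ^ nd ^ op for od, nd, op in zip(old_data, new_data, old_parity))

old_d, new_d, old_p = b"\x0f\xf0", b"\xff\x00", b"\xaa\x55"
new_p = raid5_small_write(old_d, new_d, old_p)   # then write new_d and new_p to disk
print(new_p.hex())   # 5aa5
```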

Hewlett-Packard (HP) AutoRAID °HP has an interesting solution which combines both mirroring and RAID level 5 Dynamically adapts disk storage -For recent or highly used data, uses mirroring -For less recently used data, uses RAID 5 Gets speed of mirroring when it matters and density of RAID 5 on average

Subsystem Organization [Figure: host with host adapter -> array controller -> single-board disk controllers -> disks] °Host adapter: manages interface to host, DMA °Array controller: control, buffering, parity logic °Single-board disk controllers: physical device control, often piggy-backed in small format devices °Striping software off-loaded from host to array controller: no applications modifications, no reduction of host performance

System Availability: Orthogonal RAIDs [Figure: array controller fanning out to multiple string controllers, each driving a string of disks] °Data Recovery Group: unit of data redundancy °Redundant Support Components: fans, power supplies, controller, cables °End to End Data Integrity: internal parity protected data paths

System-Level Availability °Goal: No Single Points of Failure [Figure: host with fully dual-redundant paths through I/O controllers and array controllers down to shared recovery groups] °With duplicated paths, higher performance can be obtained when there are no failures

Network Attached Storage °High performance storage service on a high speed network °Decreasing disk diameters: 14" » 10" » 8" » 5.25" » 3.5" » 2.5" » 1.8" » 1.3" » … high bandwidth disk systems based on arrays of disks °Increasing network bandwidth: 3 Mb/s » 10 Mb/s » 50 Mb/s » 100 Mb/s » 1 Gb/s » 10 Gb/s networks capable of sustaining high bandwidth transfers °Network provides well defined physical and logical interfaces: separate CPU and storage system! °Network file services: OS structures supporting remote file access

OceanStore: The Oceanic Data Utility:

OceanStore Context: Ubiquitous Computing °Computing everywhere: Desktop, Laptop, Palmtop Cars, Cellphones Shoes? Clothing? Walls? °Connectivity everywhere: Rapid growth of bandwidth in the interior of the net Broadband to the home and office Wireless technologies such as CDMA, satellite, laser

Questions about information: °Where is persistent information stored? Want: Geographic independence for availability, durability, and freedom to adapt to circumstances °How is it protected? Want: Encryption for privacy, signatures for authenticity, and Byzantine commitment for integrity °Can we make it indestructible? Want: Redundancy with continuous repair and redistribution for long-term durability °Is it hard to manage? Want: Automatic optimization, diagnosis and repair °Who owns the aggregate resources? Want: Utility Infrastructure!

First Observation: Want Utility Infrastructure °Mark Weiser from Xerox: Transparent computing is the ultimate goal Computers should disappear into the background °In storage context: Don’t want to worry about backup Don’t want to worry about obsolescence Need lots of resources to make data secure and highly available, BUT don’t want to own them Outsourcing of storage already becoming popular °Pay monthly fee and your “data is out there” Simple payment interface => one bill from one company

Second Observation: Want Automatic Maintenance °Can’t possibly manage billions of servers by hand! °System should automatically: Adapt to failure Repair itself Incorporate new elements °Can we guarantee data is available for 1000 years? New servers added from time to time Old servers removed from time to time Everything just works °Many components with geographic separation System not disabled by natural disasters Can adapt to changes in demand and regional outages Gain in stability through statistics

Utility-based Infrastructure [Figure: federation of provider clouds: Pac Bell, Sprint, IBM, AT&T, Canadian OceanStore] °Transparent data service provided by federation of companies: Monthly fee paid to one service provider Companies buy and sell capacity from each other

OceanStore: Everyone’s Data, One Big Utility °How many files in the OceanStore? Assume 10^10 people in world Say 10,000 files/person (very conservative?) So 10^14 files in OceanStore! If 1 gig files (ok, a stretch), get 1 mole of bytes! Truly impressive number of elements… … but small relative to physical constants °Aside: new results: 1.5 Exabytes/year (1.5 x 10^18 bytes)
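The back-of-the-envelope chain, spelled out (the slide's "mole" is an order-of-magnitude flourish; the exact ratio comes out below 1):

```python
people = 10**10            # slide's assumption
files = people * 10_000    # 10**14 files
total = files * 10**9      # at 1 GB ("gig") per file: 10**23 bytes
mole = 6.022e23            # Avogadro's number

print(f"{total:.1e} bytes = {total / mole:.2f} mole")
# 1.0e+23 bytes = 0.17 mole -- within a stretch of the slide's "1 mole"
```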

OceanStore Assumptions °Untrusted Infrastructure: The OceanStore is comprised of untrusted components Only ciphertext within the infrastructure °Responsible Party: Some organization (i.e., a service provider) guarantees that your data is consistent and durable Not trusted with content of data, merely its integrity °Mostly Well-Connected: Data producers and consumers are connected to a high-bandwidth network most of the time Exploit multicast for quicker consistency when possible °Promiscuous Caching: Data may be cached anywhere, anytime °Optimistic Concurrency via Conflict Resolution: Avoid locking in the wide area Applications use object-based interface for updates

Use of Moore’s law gains: The OceanStore Creed °Question: Can we use Moore’s law gains for something other than just raw performance? °Examples: Stability through Statistics -Use of redundancy of servers, network packets, etc. in order to gain more predictable behavior -Systems version of Thermodynamics! Extreme Durability (1000-year time scale?) -Use of erasure coding and continuous repair Security and Authentication -Signatures and secure hashes in many places Continuous dynamic optimization

Basic Structure: Irregular Mesh of “Pools”

I/O Summary: °I/O performance limited by weakest link in chain between OS and device °Queueing theory is important 100% utilization means very large latency Remember, for M/M/1 queue (exponential source of requests/service): -queue size goes as u/(1-u) -latency goes as T ser x u/(1-u) For M/G/1 queue (more general server, exponential sources): -latency goes as m1(z) x u/(1-u) = T ser x {1/2 x (1+C)} x u/(1-u) °Three Components of Disk Access Time: Seek Time: advertised to be 8 to 12 ms; may be lower in real life Rotational Latency: 4.1 ms at 7200 RPM and 8.3 ms at 3600 RPM Transfer Time: 2 to 12 MB per second °I/O device notifying the operating system: Polling: it can waste a lot of processor time I/O interrupt: similar to exception except it is asynchronous °Delegating I/O responsibility from the CPU: DMA, or even IOP °Today: Researchers thinking about the wide scale for I/O