Presentation transcript:

I/O Lecture notes from MKP and S. Yalamanchili

(2) Reading: Sections 6.1-6.9

(3) Overview
- The individual I/O devices
  - Performance properties: latency, throughput, energy
  - Disks, network interfaces, SSDs, graphics processors
- Interconnects within and between nodes
  - Standards play a key role here
  - Third-party ecosystem
- Protocols for CPU-I/O device interaction
  - How do devices of various speeds communicate with the CPU and memory?
- Metrics
  - How do we assess I/O system performance?

(4) Typical x86 PC I/O System
[Figure: typical x86 PC I/O system — CPU, memory, GPU, network interface, and disks connected through the chipset interconnect; the front-side bus is replaced with QuickPath Interconnect (QPI). Software interaction/control shown alongside the hardware paths.]
- Note the flow of data (and control) in this system!

(5) Overview
- The individual I/O devices
- Interconnects within and between nodes
- Protocols for CPU-I/O device interaction
- Metrics

(6) Disk Storage
- Nonvolatile, rotating magnetic storage

(7) Disk Drive Terminology
- Data is recorded on concentric tracks on both sides of each platter
- Tracks are organized as fixed-size sectors (in bytes)
- Corresponding tracks on all platters form a cylinder
- Data is addressed by three coordinates: cylinder, surface (side of a platter), and sector
[Figure: disk drive showing the actuator, arm, heads, and platters]

(8) Disk Sectors and Access
- Each sector records
  - Sector ID
  - Data (512 bytes; 4096 bytes proposed)
  - Error correcting code (ECC), used to hide defects and recording errors
  - Synchronization fields and gaps
- Access to a sector involves
  - Queuing delay if other accesses are pending
  - Seek: move the heads
  - Rotational latency
  - Data transfer
  - Controller overhead

(9) Disk Performance
- The actuator moves (seeks) the read/write head over the correct track, under the control of the disk controller
- Disk latency = controller overhead + seek time + rotational delay + transfer delay
- Seek time and rotational delay are limited by the mechanical parts
[Figure: actuator, arm, head, and platters]

(10) Disk Performance
- Seek time is determined by the current position of the head (which track it is over) and its new position; typically a few milliseconds
- Average rotational delay is the time for 0.5 revolutions
- Transfer rate is a function of bit density

(11) Disk Access Example
- Given: 512B sector, 15,000 rpm, 4ms average seek time, 100MB/s transfer rate, 0.2ms controller overhead, idle disk
- Average read time:
  4ms seek time
  + 0.5 / (15,000/60) s = 2ms average rotational latency
  + 512B / (100MB/s) = 0.005ms transfer time
  + 0.2ms controller delay
  = 6.2ms
- If the actual average seek time is 1ms, the average read time = 3.2ms
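A quick check of this arithmetic as a minimal C sketch (the parameters are the ones given above; MB is taken as 10^6 bytes, as the slide's numbers imply):

```c
#include <stdio.h>

int main(void) {
    double sector_bytes  = 512.0;     /* bytes per sector           */
    double rpm           = 15000.0;   /* disk rotation speed        */
    double seek_ms       = 4.0;       /* average seek time (ms)     */
    double transfer_MBps = 100.0;     /* sustained transfer rate    */
    double controller_ms = 0.2;       /* controller overhead (ms)   */

    /* Average rotational latency: half a revolution */
    double rotation_ms = 0.5 / (rpm / 60.0) * 1000.0;

    /* Time to transfer one sector */
    double transfer_ms = sector_bytes / (transfer_MBps * 1e6) * 1000.0;

    double total_ms = seek_ms + rotation_ms + transfer_ms + controller_ms;
    printf("rotational latency = %.3f ms\n", rotation_ms);  /* 2.000 */
    printf("transfer time      = %.3f ms\n", transfer_ms);  /* 0.005 */
    printf("average read time  = %.3f ms\n", total_ms);     /* 6.205 */
    return 0;
}
```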

(12) Disk Performance Issues
- Manufacturers quote average seek time
  - Based on all possible seeks
  - Locality and OS scheduling lead to smaller actual average seek times
- Smart disk controllers allocate physical sectors on the disk
  - Present a logical sector interface to the host
  - Standards: SCSI, ATA, SATA
- Disk drives include caches
  - Prefetch sectors in anticipation of access
  - Avoid seek and rotational delay
  - Caches may also be maintained in host DRAM

(13) Arrays of Inexpensive Disks: Throughput
- Data is striped across all disks
- The visible performance overhead of the drive mechanics is amortized across multiple accesses
- Scientific workloads are well suited to such organizations
[Figure: a single CPU read request served by Block 0, Block 1, Block 2, and Block 3 striped across the disks]

(14) Arrays of Inexpensive Disks: Request Rate
- Consider multiple read requests for small blocks of data
- Several I/O requests can be serviced concurrently
[Figure: multiple CPU read requests serviced concurrently by different disks in the array]

(15) Reliability of Disk Arrays
- The reliability of an array of N disks is lower than the reliability of a single disk
  - Any single disk failure will cause the array to fail
  - The array is N times more likely to fail
- Use redundant disks to recover from failures
  - Similar to the use of error correcting codes
- Overhead
  - Bandwidth and cost
  - Redundant information
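As a rough back-of-the-envelope sketch (assuming independent failures; the single-disk MTTF below is illustrative, not from the slides), "N times more likely to fail" means the array's MTTF is the single-disk MTTF divided by N:

```c
#include <stdio.h>

int main(void) {
    /* Illustrative numbers only: single-disk MTTF of 1,000,000 hours */
    double disk_mttf_hours = 1.0e6;
    int n_disks = 100;

    /* With independent failures and no redundancy, any one failure
       kills the array, so its MTTF shrinks by a factor of N. */
    double array_mttf_hours = disk_mttf_hours / n_disks;

    printf("array MTTF = %.0f hours (about %.1f years)\n",
           array_mttf_hours, array_mttf_hours / (24.0 * 365.0));
    return 0;
}
```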

(16) RAID
- Redundant Array of Inexpensive (Independent) Disks
- Use multiple smaller disks (cf. one large disk)
  - Parallelism improves performance
- Plus extra disk(s) for redundant data storage
  - Provides a fault-tolerant storage system
  - Especially if failed disks can be hot swapped

(17) RAID Level 0
- RAID 0 is striping with no redundancy
- Provides the highest performance
- Provides the lowest reliability
- Frequently used in scientific and supercomputing applications where data throughput is important

(18) RAID Level 1
- The disk array is mirrored (shadowed) in its entirety
- Reads can be optimized
  - Pick the copy with the smaller queuing and seek times
- Performance sacrifice on writes: every write goes to both arrays (mirrors)

(19) RAID 3: Bit-Interleaved Parity
- N + 1 disks
  - Data striped across N disks at the byte level
  - The redundant disk stores parity
- Read access
  - Read all disks
- Write access
  - Generate new parity and update all disks
- On failure
  - Use parity to reconstruct the missing data
- Not widely used
[Figure: N data disks with bit-level parity stored on a dedicated parity disk]

(20) RAID Level 4: N+1 Disks
- Data is interleaved in blocks, referred to as the striping unit and striping width
- Small reads can access a subset of the disks
- A write to a single disk requires 4 accesses: read the old data block, write the new data block, and read and write the parity disk
- The parity disk can become a bottleneck
[Figure: block-level parity — Blocks 0-3 with their parity block and Blocks 4-7 with their parity block, all parity stored on a dedicated parity disk]

(21) The Small Write Problem
- A small write requires two disk read operations followed by two disk write operations
[Figure: updating block B1 in a stripe B0-B3 with parity P — the old B1 and old P are read, XORed with the new B1, and the new B1 and new P are written]
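A minimal sketch of the parity update behind the figure (assuming block-level parity as in RAID 4/5; the buffer names are illustrative):

```c
#include <stddef.h>
#include <stdint.h>

/* Small-write parity update for RAID 4/5:
   new_parity = old_parity XOR old_data XOR new_data.
   The old data block and old parity block must first be read from
   disk (two reads); the new data and new parity are then written
   back (two writes) -- hence the "small write problem". */
void update_parity(const uint8_t *old_data, const uint8_t *new_data,
                   uint8_t *parity, size_t block_size)
{
    for (size_t i = 0; i < block_size; i++) {
        parity[i] ^= old_data[i] ^ new_data[i];
    }
}
```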

(22) RAID 5: Distributed Parity
- N + 1 disks
- Like RAID 4, but parity blocks are distributed across the disks
  - Avoids the parity disk becoming a bottleneck
- Widely used

(23) RAID Summary
- RAID can improve performance and availability
  - High availability requires hot swapping
- Assumes independent disk failures
  - Too bad if the building burns down!
- See "Hard Disk Performance, Quality and Reliability"

(24) Flash Storage
- Nonvolatile semiconductor storage
- 100× – 1000× faster than disk
- Smaller, lower power, more robust
- But more $/GB (between disk and DRAM)

(25) Flash Types
- NOR flash: bit cell like a NOR gate
  - Random read/write access
  - Used for instruction memory in embedded systems
- NAND flash: bit cell like a NAND gate
  - Denser (bits/area), but block-at-a-time access
  - Cheaper per GB
  - Used for USB keys, media storage, …
- Flash bits wear out after 1000s of accesses
  - Not suitable for direct RAM or disk replacement
  - Wear leveling: remap data to less-used blocks

(26) Solid State Disks
- Replace mechanical drives with solid-state drives
  - Superior access performance
- Adds another level to the memory hierarchy
  - "Disk is the new tape!"
- Wear-leveling management
[Images: a PCIe SSD (Wikipedia); DRAM and SSD; Fusion-io]

(27) Overview
- The individual I/O devices
- Interconnects within and between nodes
- Protocols for CPU-I/O device interaction
- Metrics

(28) Interconnecting Components

(29) Interconnecting Components
- Need interconnections between the CPU, memory, and I/O controllers
- Bus: shared communication channel
  - Parallel set of wires for data and for synchronization of data transfers
  - Can become a bottleneck
- Performance limited by physical factors
  - Wire length, number of connections
- More recent alternative: high-speed serial connections with switches
  - Like networks
- What do we want?
  - Processor independence, control, buffered isolation

(30) Bus Types
- Processor-memory buses
  - Short, high speed
  - Design is matched to the memory organization
- I/O buses
  - Longer, allowing multiple connections
  - Specified by standards for interoperability
  - Connect to the processor-memory bus through a bridge

(31) Bus Signals and Synchronization
- Data lines
  - Carry address and data
  - Multiplexed or separate
- Control lines
  - Indicate data type, synchronize transactions
- Synchronous
  - Uses a bus clock
- Asynchronous
  - Uses request/acknowledge control lines for handshaking

(32) I/O Bus Examples
- Firewire: external; 63 devices per channel; data width 4; peak bandwidth 50MB/s or 100MB/s; hot pluggable: yes; max length 4.5m; standard: IEEE 1394
- USB 2.0: external; 127 devices per channel; data width 2; peak bandwidth 0.2MB/s, 1.5MB/s, or 60MB/s; hot pluggable: yes; max length 5m; standard: USB Implementers Forum
- PCI Express: internal; 1 device per channel; data width 2/lane; peak bandwidth 250MB/s per lane (1×, 2×, 4×, 8×, 16×, 32×); hot pluggable: depends; max length 0.5m; standard: PCI-SIG
- Serial ATA: internal; 1 device per channel; data width 4; peak bandwidth 300MB/s; hot pluggable: yes; max length 1m; standard: SATA-IO
- Serial Attached SCSI: external; 4 devices per channel; data width 4; peak bandwidth 300MB/s; hot pluggable: yes; max length 8m; standard: INCITS TC T10

(33) PCI Express
- Standardized local bus
- Load/store, flat address model
- Packet-based, split-transaction protocol
- Reliable data transfer

(34) PCI Express: Operation
- Packet-based, memory-mapped operation
[Figure: PCIe packet layering — the Transaction Layer produces the Header and Data, the Data Link Layer adds the sequence number (Seq#) and CRC, and the Physical Layer adds the framing]

(35) The Big Picture
[Figure from electronicdesign.com]

(36) Local Interconnect Standards
- HyperTransport
  - Packet-switched, point-to-point link
  - HyperTransport Consortium (AMD)
- QuickPath Interconnect
  - Packet-switched, point-to-point link
  - Intel Corporation
[Images: arstechnica.com, hypertransport.org]

(37) Overview
- The individual I/O devices
- Interconnects within and between nodes
- Protocols for CPU-I/O device interaction
- Metrics

(38) I/O Management

(39) I/O Management
- I/O is mediated by the OS
- Multiple programs share I/O resources
  - Need protection and scheduling
- I/O causes asynchronous interrupts
  - Same mechanism as exceptions
- I/O programming is fiddly
  - The OS provides abstractions to programs

(40) I/O Commands
- I/O devices are managed by I/O controller hardware
  - Transfers data to/from the device
  - Synchronizes operations with software
- Command registers
  - Cause the device to do something
- Status registers
  - Indicate what the device is doing and whether errors have occurred
- Data registers
  - Write: transfer data to the device
  - Read: transfer data from the device

(41) I/O Register Mapping
- Memory-mapped I/O
  - Registers are addressed in the same space as memory
  - An address decoder distinguishes between them
  - The OS uses the address translation mechanism to make them accessible only to the kernel
- I/O instructions
  - Separate instructions access the I/O registers
  - Can only be executed in kernel mode
  - Example: x86
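As a minimal sketch of memory-mapped I/O (the device, its register layout, and the base address below are hypothetical; a real layout comes from the device's datasheet), a driver typically declares the registers volatile so every access really reaches the bus:

```c
#include <stdint.h>

/* Hypothetical register layout for an imaginary device.
   'volatile' prevents the compiler from caching or reordering
   accesses to these locations. */
typedef struct {
    volatile uint32_t command;   /* write: tell the device what to do  */
    volatile uint32_t status;    /* read: device state and error bits  */
    volatile uint32_t data;      /* read/write: transfer data          */
} dev_regs_t;

/* Hypothetical physical address where the registers are decoded.
   In a real OS, the kernel maps this region into its address space. */
#define DEV_BASE_ADDR 0x40001000u

static dev_regs_t *const dev = (dev_regs_t *)DEV_BASE_ADDR;

void dev_start_read(void)
{
    dev->command = 0x1;          /* issue a hypothetical "read" command */
}
```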

(42) Polling
- Periodically check the I/O status register
  - If the device is ready, do the operation
  - If there is an error, take action
- Common in small or low-performance real-time embedded systems
  - Predictable timing
  - Low hardware cost
- In other systems, it wastes CPU time
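A busy-wait polling loop over status and data registers like those sketched above might look like this (the READY and ERROR bit positions are invented for illustration):

```c
#include <stdint.h>

#define STATUS_READY  (1u << 0)   /* hypothetical "data ready" bit */
#define STATUS_ERROR  (1u << 1)   /* hypothetical "error" bit      */

/* Spin on a status register, then read one word from the data
   register. The register pointers come from a memory-mapped layout
   such as the one sketched on the previous slide.
   Returns -1 on a device error. */
int poll_read(volatile uint32_t *status_reg,
              volatile uint32_t *data_reg,
              uint32_t *out)
{
    for (;;) {
        uint32_t status = *status_reg;   /* re-read every iteration   */
        if (status & STATUS_ERROR)
            return -1;                   /* error: take action        */
        if (status & STATUS_READY) {
            *out = *data_reg;            /* device ready: do the read */
            return 0;
        }
        /* Otherwise keep spinning -- the CPU time this burns is why
           polling suits only simple or hard-real-time systems.       */
    }
}
```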

(43) Interrupts
- When a device is ready or an error occurs, the controller interrupts the CPU
- An interrupt is like an exception
  - But not synchronized to instruction execution
  - The handler can be invoked between instructions
  - Cause information often identifies the interrupting device
- Priority interrupts
  - Devices needing more urgent attention get higher priority
  - Can interrupt the handler for a lower-priority interrupt

(44) I/O Data Transfer
- Polling and interrupt-driven I/O
  - The CPU transfers data between memory and the I/O data registers
  - Time consuming for high-speed devices
- Direct memory access (DMA)
  - The OS provides a starting address in memory
  - The I/O controller transfers to/from memory autonomously
  - The controller interrupts on completion or error

(45) Direct Memory Access
- Program the DMA engine with
  - Start (source) and destination addresses
  - Transfer count
- Interrupt-driven or polling completion interface
- What about the use of virtual vs. physical addresses?
- Example: see the sketch below
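A minimal sketch of programming a hypothetical memory-mapped DMA controller (all register names, offsets, and bits here are invented for illustration; real controllers differ, and the addresses handed to the engine must be physical, not virtual):

```c
#include <stdint.h>

/* Hypothetical DMA controller register block (memory-mapped). */
typedef struct {
    volatile uint32_t src_addr;    /* physical source address       */
    volatile uint32_t dst_addr;    /* physical destination address  */
    volatile uint32_t count;       /* number of bytes to transfer   */
    volatile uint32_t control;     /* start / interrupt-enable bits */
    volatile uint32_t status;      /* done / error bits             */
} dma_regs_t;

#define DMA_CTRL_START   (1u << 0)
#define DMA_CTRL_IRQ_EN  (1u << 1)
#define DMA_STATUS_DONE  (1u << 0)

/* Kick off one DMA transfer. The caller supplies *physical* addresses,
   typically obtained from the OS after pinning and translating the
   buffer (the virtual-vs-physical question raised on the slide). */
void dma_start(dma_regs_t *dma, uint32_t src_phys, uint32_t dst_phys,
               uint32_t nbytes)
{
    dma->src_addr = src_phys;
    dma->dst_addr = dst_phys;
    dma->count    = nbytes;
    /* Enable a completion interrupt and start the engine; alternatively,
       software could poll dma->status for DMA_STATUS_DONE. */
    dma->control  = DMA_CTRL_IRQ_EN | DMA_CTRL_START;
}
```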

(46) DMA/Cache Interaction
- If DMA writes to a memory block that is cached
  - The cached copy becomes stale
- If a write-back cache holds a dirty block and DMA reads that memory block
  - DMA reads stale data
- Need to ensure cache coherence
  - Flush blocks from the cache if they will be used for DMA
  - Or use non-cacheable memory locations for I/O

(47) I/O System Design
- Satisfying latency requirements
  - For time-critical operations
  - If the system is unloaded, add up the latencies of the components
- Maximizing throughput
  - Find the weakest link (the lowest-bandwidth component); see the sketch below
  - Configure it to operate at its maximum bandwidth
  - Balance the remaining components in the system
- If the system is loaded, simple analysis is insufficient
  - Need to use queuing models or simulation
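A small illustration of the "weakest link" reasoning (the component names and bandwidths are made-up examples): in an unloaded system, the sustainable throughput of a path is simply the minimum bandwidth along it.

```c
#include <stdio.h>

int main(void) {
    /* Hypothetical path from disk to memory; bandwidths in MB/s. */
    const char  *component[] = { "disk", "SATA link", "I/O bus", "memory bus" };
    const double bandwidth[] = {  100.0,      300.0,    1000.0,       8000.0  };
    const int n = 4;

    /* The throughput of the whole path is limited by the slowest
       component -- the weakest link. */
    int weakest = 0;
    for (int i = 1; i < n; i++)
        if (bandwidth[i] < bandwidth[weakest])
            weakest = i;

    printf("bottleneck: %s at %.0f MB/s\n",
           component[weakest], bandwidth[weakest]);
    return 0;
}
```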

(48) Overview
- The individual I/O devices
- Interconnects within and between nodes
- Protocols for CPU-I/O device interaction
- Metrics

(49) Measuring I/O Performance
- I/O performance depends on
  - Hardware: CPU, memory, controllers, buses
  - Software: operating system, database management system, application
  - Workload: request rates and patterns
- I/O system design can trade off response time against throughput
- Measurements of throughput are often done with a constrained response time

(50) Transaction Processing Benchmarks
- Transactions
  - Small data accesses to a DBMS
  - Interested in I/O rate, not data rate
- Measure throughput
  - Subject to response-time limits and failure handling
  - ACID (Atomicity, Consistency, Isolation, Durability)
  - Overall cost per transaction
- Transaction Processing Performance Council (TPC) benchmarks
  - TPC-App: B2B application server and web services
  - TPC-C: on-line order entry environment
  - TPC-E: on-line transaction processing for a brokerage firm
  - TPC-H: decision support; business-oriented ad-hoc queries

(51) File System & Web Benchmarks
- SPEC System File System (SFS)
  - Synthetic workload for an NFS server, based on monitoring real systems
  - Results
    - Throughput (operations/sec)
    - Response time (average ms/operation)
- SPEC Web Server benchmark
  - Measures simultaneous user sessions, subject to a required throughput per session
  - Three workloads: Banking, Ecommerce, and Support

(52) I/O vs. CPU Performance
- Amdahl's Law: don't neglect I/O performance as parallelism increases compute performance
- Example
  - Benchmark takes 90s CPU time, 10s I/O time
  - Double the number of CPUs every 2 years; I/O unchanged

  Year   CPU time   I/O time   Elapsed time   % I/O time
  now    90s        10s        100s           10%
  +2     45s        10s        55s            18%
  +4     23s        10s        33s            31%
  +6     11s        10s        21s            47%
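A quick sketch that reproduces the table's arithmetic (CPU time halves every two years while I/O time stays fixed; the slide's table rounds the results to whole seconds):

```c
#include <stdio.h>

int main(void) {
    double cpu_s = 90.0;   /* initial CPU time (s)       */
    double io_s  = 10.0;   /* I/O time, assumed constant */

    printf("Year   CPU      I/O     Elapsed   %% I/O\n");
    for (int year = 0; year <= 6; year += 2) {
        double elapsed = cpu_s + io_s;
        printf("+%-4d  %5.1fs  %5.1fs  %6.1fs   %4.1f%%\n",
               year, cpu_s, io_s, elapsed, 100.0 * io_s / elapsed);
        cpu_s /= 2.0;      /* CPU time halves every 2 years */
    }
    return 0;
}
```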

(53) I/O System Characteristics
- Dependability is important
  - Particularly for storage devices
- Performance measures
  - Latency (response time)
  - Throughput (bandwidth)
  - Desktops & embedded systems: mainly interested in response time & diversity of devices
  - Servers: mainly interested in throughput & expandability of devices

(54) Dependability
- Fault: failure of a component
  - May or may not lead to system failure
- Service accomplishment: service delivered as specified
- Service interruption: deviation from the specified service
[Figure: two-state diagram — a failure moves the system from service accomplishment to service interruption; a restoration moves it back]

(55) Dependability Measures
- Reliability: mean time to failure (MTTF)
- Service interruption: mean time to repair (MTTR)
- Mean time between failures: MTBF = MTTF + MTTR
- Availability = MTTF / (MTTF + MTTR)
- Improving availability
  - Increase MTTF: fault avoidance, fault tolerance, fault forecasting
  - Reduce MTTR: improved tools and processes for diagnosis and repair
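A tiny worked example of these formulas (the MTTF and MTTR values are illustrative, not from the slides):

```c
#include <stdio.h>

int main(void) {
    /* Illustrative numbers: fails on average once per 1000 hours of
       operation and takes 2 hours to repair. */
    double mttf_hours = 1000.0;
    double mttr_hours = 2.0;

    double mtbf = mttf_hours + mttr_hours;      /* MTBF = MTTF + MTTR  */
    double availability = mttf_hours / mtbf;    /* fraction of time up */

    printf("MTBF = %.1f hours\n", mtbf);
    printf("availability = %.4f (%.2f%%)\n",
           availability, 100.0 * availability);
    return 0;
}
```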

(56) Concluding Remarks
- I/O performance measures
  - Throughput, response time
  - Dependability and cost also important
- Buses used to connect the CPU, memory, and I/O controllers
  - Polling, interrupts, DMA
- I/O benchmarks
  - TPC, SPECSFS, SPECWeb
- RAID
  - Improves performance and dependability

(57) Study Guide
- Provide a step-by-step example of how each of the following works: polling, DMA, interrupts, read/write accesses in a RAID configuration, memory-mapped I/O
- Compute the bandwidth for data transfers to/from a disk
- Delineate and explain the different types of benchmarks
- How is the I/O system of a desktop or laptop different from that of a server?
- Recognize the following standards: QPI, HyperTransport, PCIe

(58) Glossary
- Asynchronous bus
- Direct Memory Access (DMA)
- Interrupts
- Memory-mapped I/O
- MTTR
- MTBF
- MTTF
- PCI Express
- Polling
- RAID
- Solid State Disk
- Synchronous bus