Penn ESE680-002 Spring 2007 -- DeHon 1 ESE680-002 (ESE534): Computer Organization Day 25: April 16, 2007 Defect and Fault Tolerance.

Penn ESE Spring DeHon 1 ESE680-002 (ESE534): Computer Organization Day 25: April 16, 2007 Defect and Fault Tolerance

Penn ESE Spring DeHon 2 Today Defect and Fault Tolerance –Problem –Defect Tolerance –Fault Tolerance

Penn ESE Spring DeHon 3 Warmup Discussion Where do we guard against defects and faults today?

Penn ESE Spring DeHon 4 Motivation: Probabilities Given: –N objects –P yield probability What's the probability for yield of composite system of N items? –Assume iid faults –P(N items good) = P^N

Penn ESE Spring DeHon 5 Probabilities P(N items good) = P^N N=10^6, P=0.999999: P(all good) ~= 0.37 N=10^7, P=0.999999: P(all good) ~= 0.000045
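
To make the arithmetic concrete, here is a minimal Python sketch (the helper name is ours; the values follow the slide's example) of P(all N good) = P^N:

```python
# Minimal sketch: probability that all N i.i.d. components are good.
def p_all_good(p, n):
    return p ** n

p = 0.999999                        # per-component yield (slide's example value)
for n in (10**6, 10**7):
    print(n, p_all_good(p, n))      # ~0.37 for N=1e6, ~4.5e-5 for N=1e7
```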

Penn ESE Spring DeHon 6 Simple Implications As N gets large –must either increase reliability –…or start tolerating failures N –memory bits –disk sectors –wires –transmitted data bits –processors –transistors –molecules –As devices get smaller, failure rates increase – chemists think P=0.95 is good –As devices get faster, failure rate increases

Penn ESE Spring DeHon 7 Defining Problems

Penn ESE Spring DeHon 8 Three “problems” Manufacturing imperfection –Shorts, breaks –wire/node X shorted to power, ground, another node –Doping/resistance variation too high Parameters vary over time –Electromigration –Resistance increases Incorrect operation –node X value flips: crosstalk, alpha particle, bad timing

Penn ESE Spring DeHon 9 Defects Shorts are an example of a defect Persistent problem –reliably manifests Occurs before computation Can test for at fabrication / boot time and then avoid (1st half of lecture)

Penn ESE Spring DeHon 10 Faults An alpha-particle bit flip is an example of a fault Fault occurs dynamically during execution At any point in time, can fail –(produce the wrong result) (2nd half of lecture)

Penn ESE Spring DeHon 11 Lifetime Variation Starts out fine Over time changes –E.g. resistance increases until out of spec. Persistent –So can use defect techniques to avoid But, onset is dynamic –Must use fault detection techniques to recognize?

Penn ESE Spring DeHon 12 Shekhar Borkar, Intel Fellow, Micro37 (Dec. 2004)

Penn ESE Spring DeHon 13 Defect Rate Device with 10^11 elements (100BT) 3 year lifetime = 10^8 seconds Accumulating up to 10% defects 10^10 defects in 10^8 seconds → 1 new defect every 10ms At 10GHz operation: One new defect every 10^8 cycles P_newdefect = 10^-19
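
A quick sketch reproducing the slide's back-of-the-envelope arithmetic (variable names are ours; the numbers follow the slide's assumptions):

```python
# Sketch: rough defect-rate arithmetic from the slide's assumptions.
devices         = 1e11      # ~100 billion transistors (100BT)
lifetime_s      = 1e8       # ~3 years in seconds
defect_fraction = 0.10
clock_hz        = 10e9      # 10 GHz

defects            = defect_fraction * devices             # 1e10 defects over lifetime
seconds_per_defect = lifetime_s / defects                   # ~1e-2 s = 10 ms
cycles_per_defect  = seconds_per_defect * clock_hz          # ~1e8 cycles
p_newdefect        = 1 / (devices * cycles_per_defect)      # ~1e-19 per device per cycle
print(seconds_per_defect, cycles_per_defect, p_newdefect)
```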

Penn ESE Spring DeHon 14 First Step to Recover Admit you have a problem (observe that there is a failure)

Penn ESE Spring DeHon 15 Detection Determine if something is wrong –Some things easy ….won't start –Others tricky …one AND gate computes False & True → True Observability –can see effect of problem –some way of telling if defect/fault present

Penn ESE Spring DeHon 16 Detection Coding –space of legal values << space of all values –should only see legal –e.g. parity, ECC (Error Correcting Codes) Explicit test (defects, recurring faults) –ATPG = Automatic Test Pattern Generation –Signature/BIST=Built-In Self-Test –POST = Power On Self-Test Direct/special access –test ports, scan paths

Penn ESE Spring DeHon 17 Coping with defects/faults? Key idea: redundancy Detection: –Use redundancy to detect error Mitigating: use redundant hardware –Use spare elements in place of faulty elements (defects) –Compute multiple times so can discard faulty result (faults) –Exploit Law-of-Large Numbers

Penn ESE Spring DeHon 18 Defect Tolerance

Penn ESE Spring DeHon 19 Two Models Disk Drives (defect map) Memory Chips (perfect chip)

Penn ESE Spring DeHon 20 Disk Drives Expose defects to software –software model expects faults Create table of good (bad) sectors –manages by masking out in software (at the OS level) Never allocate a bad sector to a task or file –yielded capacity varies
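
A minimal sketch of the defect-map model (the class and method names are hypothetical, not a real OS or filesystem API): record the bad sectors and never allocate one, so yielded capacity varies with the defects found.

```python
# Sketch: defect-map model -- mask bad sectors in software; yielded capacity varies.
class DefectMappedDisk:
    def __init__(self, total_sectors, bad_sectors):
        self.bad = set(bad_sectors)
        self.free = [s for s in range(total_sectors) if s not in self.bad]

    def capacity(self):            # yielded capacity depends on defects found
        return len(self.free)

    def allocate(self):            # never hand out a bad sector
        return self.free.pop() if self.free else None

disk = DefectMappedDisk(total_sectors=1000, bad_sectors={17, 256, 511})
print(disk.capacity())             # 997
```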

Penn ESE Spring DeHon 21 Memory Chips Provide model in hardware of perfect chip Model of perfect memory at capacity X Use redundancy in hardware to provide perfect model Yielded capacity fixed –discard part if not achieve

Penn ESE Spring DeHon 22 Example: Memory Correct memory: –N slots –each slot reliably stores last value written Millions, billions, etc. of bits… –have to get them all right?

Penn ESE Spring DeHon 23 Memory Defect Tolerance Idea: –few bits may fail –provide more raw bits –configure so yield what looks like a perfect memory of specified size

Penn ESE Spring DeHon 24 Memory Techniques Row Redundancy Column Redundancy Block Redundancy

Penn ESE Spring DeHon 25 Row Redundancy Provide extra rows Mask faults by avoiding bad rows Trick: –have address decoder substitute spare rows in for faulty rows –use fuses to program
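
A minimal sketch of the spare-row trick (the class is hypothetical; a real chip programs the remapping into the address decoder with fuses): faulty row addresses are silently redirected to spare rows.

```python
# Sketch: spare-row substitution -- the decoder remaps faulty row addresses
# to spare rows (in hardware this mapping is programmed with fuses).
class RowRedundantMemory:
    def __init__(self, rows, cols, spare_rows, faulty_rows):
        self.store = [[0] * cols for _ in range(rows + spare_rows)]
        # assign one spare row per faulty row; with too many faults the part is discarded
        assert len(faulty_rows) <= spare_rows, "not enough spares: chip discarded"
        self.remap = {bad: rows + i for i, bad in enumerate(sorted(faulty_rows))}

    def _row(self, r):
        return self.remap.get(r, r)   # substitute spare if row is faulty

    def write(self, r, c, v): self.store[self._row(r)][c] = v
    def read(self, r, c):     return self.store[self._row(r)][c]

mem = RowRedundantMemory(rows=8, cols=4, spare_rows=2, faulty_rows={3})
mem.write(3, 0, 1)
print(mem.read(3, 0))   # 1, served from the spare row
```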

Penn ESE Spring DeHon 26 Spare Row

Penn ESE Spring DeHon 27 Column Redundancy Provide extra columns Program decoder/mux to use subset of columns

Penn ESE Spring DeHon 28 Spare Memory Column Provide extra columns Program output mux to avoid

Penn ESE Spring DeHon 29 Block Redundancy Substitute out entire block –e.g. memory subarray include 5 blocks –only need 4 to yield perfect (N+1 sparing more typical for larger N)

Penn ESE Spring DeHon 30 Spare Block

Penn ESE Spring DeHon 31 Yield M of N P(at least M of N) = P^N + (N choose N-1) P^(N-1)(1-P) + (N choose N-2) P^(N-2)(1-P)^2 + … + (N choose M) P^M (1-P)^(N-M) [think binomial coefficients]

Penn ESE Spring DeHon 32 M of 5 example 1*P^5 + 5*P^4(1-P) + 10*P^3(1-P)^2 + 10*P^2(1-P)^3 + 5*P(1-P)^4 + 1*(1-P)^5 Consider P=0.9 –1*P^5: M=5, P(sys)=0.59 –+5*P^4(1-P): M=4, P(sys)=0.92 –+10*P^3(1-P)^2: M=3, P(sys)=0.99 –+10*P^2(1-P)^3, +5*P(1-P)^4, +1*(1-P)^5: lower-M terms Can achieve higher system yield than individual components!
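
The same binomial sum, as a short sketch using Python's math.comb (the helper name is ours); it reproduces the P=0.9 numbers above:

```python
# Sketch: P(at least M of N components good), i.i.d. per-component yield P.
from math import comb

def p_at_least(m, n, p):
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(m, n + 1))

for m in (5, 4, 3):
    print(m, round(p_at_least(m, 5, 0.9), 2))   # 0.59, 0.92, 0.99
```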

Penn ESE Spring DeHon 33 Repairable Area Not all area in a RAM is repairable –memory bits spare-able –I/O, power, ground, control not redundant

Penn ESE Spring DeHon 34 Repairable Area P(yield) = P(non-repair) * P(repair) P(non-repair) = P^N –N << N_total –Maybe P > P_repair, e.g. use coarser feature size P(repair) ~ P(yield M of N)
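
A sketch combining the two factors (function names and the example numbers are ours, purely illustrative, not from the slide):

```python
# Sketch: chip yield = P(non-repairable part perfect) * P(repairable part yields).
from math import comb

def p_at_least(m, n, p):
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(m, n + 1))

def chip_yield(p_nr, n_nr, p_r, n_raw, n_needed):
    return (p_nr ** n_nr) * p_at_least(n_needed, n_raw, p_r)

# illustrative numbers only: 100 non-repairable elements, 1050 raw rows, 1024 needed
print(chip_yield(p_nr=0.9999, n_nr=100, p_r=0.99, n_raw=1050, n_needed=1024))  # ~0.99
```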

Penn ESE Spring DeHon 35 Consider a Crossbar Allows me to connect any of N things to each other –E.g. N processors, N memories –N/2 processors + N/2 memories

Penn ESE Spring DeHon 36 Crossbar Buses and Defects Two crossbars Wires may fail Switches may fail Provide more wires –Any wire fault avoidable M choose N

Penn ESE Spring DeHon 37 Crossbar Buses and Defects Two crossbars Wires may fail Switches may fail Provide more wires –Any wire fault avoidable M choose N

Penn ESE Spring DeHon 38 Crossbar Buses and Faults Two crossbars Wires may fail Switches may fail Provide more wires –Any wire fault avoidable M choose N

Penn ESE Spring DeHon 39 Crossbar Buses and Faults Two crossbars Wires may fail Switches may fail Provide more wires –Any wire fault avoidable M choose N –Same idea

Penn ESE Spring DeHon 40 Simple System P Processors M Memories Wires Memory, Compute, Interconnect

Penn ESE Spring DeHon 41 Simple System w/ Spares P Processors M Memories Wires Provide spare –Processors –Memories –Wires

Penn ESE Spring DeHon 42 Simple System w/ Defects P Processors M Memories Wires Provide spare –Processors –Memories –Wires...and defects

Penn ESE Spring DeHon 43 Simple System Repaired P Processors M Memories Wires Provide spare –Processors –Memories –Wires Use crossbar to switch together good processors and memories

Penn ESE Spring DeHon 44 In Practice Crossbars are inefficient [Day13] Use switching networks with –Locality –Segmentation …but basic idea for sparing is the same

Penn ESE Spring DeHon 45 Fault Tolerance

Penn ESE Spring DeHon 46 Faults Bits, processors, wires –May fail during operation Basic Idea same: –Detect failure using redundancy –Correct Now –Must identify and correct online with the computation

Penn ESE Spring DeHon 47 Simple Memory Example Problem: bits may lose/change value –Alpha particle –Molecule spontaneously switches Idea: –Store multiple copies –Perform majority vote on result

Penn ESE Spring DeHon 48 Redundant Memory

Penn ESE Spring DeHon 49 Redundant Memory Like M-choose-N Only fail if >(N-1)/2 faults P=0.9, P(at least 2 of 3): All good: (0.9)^3 = 0.729 Any 2 good: 3(0.9)^2(0.1) = 0.243 Total = 0.972
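
A minimal sketch of the majority-vote idea and the corresponding reliability calculation (helper names are ours):

```python
# Sketch: triple-modular redundancy on a stored bit -- majority vote of 3 copies.
from collections import Counter

def majority(bits):
    return Counter(bits).most_common(1)[0][0]

print(majority([1, 1, 0]))   # 1: a single flipped copy is out-voted

# Reliability: fails only if 2 or more copies are bad (P = per-copy reliability).
P = 0.9
p_ok = P**3 + 3 * P**2 * (1 - P)
print(round(p_ok, 3))        # 0.972
```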

Penn ESE Spring DeHon 50 Better: Less Overhead Don’t have to keep N copies Block data into groups Add a small number of bits to detect/correct errors

Penn ESE Spring DeHon 51 Row/Column Parity Think of NxN bit block as array Compute row and column parities –(total of 2N bits)

Penn ESE Spring DeHon 52 Row/Column Parity Think of NxN bit block as array Compute row and column parities –(total of 2N bits) Any single bit error

Penn ESE Spring DeHon 53 Row/Column Parity Think of NxN bit block as array Compute row and column parities –(total of 2N bits) Any single bit error By recomputing parity –Know which one it is –Can correct it
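
A minimal sketch of row/column parity correction (block contents are arbitrary; function names are ours): a single flipped bit is located by the one row parity and one column parity that change, and flipped back.

```python
# Sketch: row/column parity on an N x N block -- 2N check bits locate and
# correct any single-bit error (recompute parities, flip the crossing bit).
def parities(block):
    rows = [sum(r) % 2 for r in block]
    cols = [sum(c) % 2 for c in zip(*block)]
    return rows, cols

block = [[1, 0, 1],
         [0, 1, 1],
         [1, 1, 0]]
saved_rows, saved_cols = parities(block)

block[1][2] ^= 1                      # inject a single-bit error
rows, cols = parities(block)
bad_row = [i for i, (a, b) in enumerate(zip(rows, saved_rows)) if a != b][0]
bad_col = [j for j, (a, b) in enumerate(zip(cols, saved_cols)) if a != b][0]
block[bad_row][bad_col] ^= 1          # correct it
print((bad_row, bad_col), parities(block) == (saved_rows, saved_cols))  # (1, 2) True
```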

Penn ESE Spring DeHon 54 In Use Today Conventional DRAM Memory systems –Use 72b ECC (Error Correcting Code) –On 64b words –Correct any single bit error –Detect multibit errors CD blocks are ECC coded –Correct errors in storage/reading

Penn ESE Spring DeHon 55 Interconnect Also uses checksums/ECC –Guard against data transmission errors –Environmental noise, crosstalk, trouble sampling data at high rates… Often just detect error Recover by requesting retransmission –E.g. TCP/IP (Internet Protocols)

Penn ESE Spring DeHon 56 Interconnect Also guards against whole path failure Sender expects acknowledgement If no acknowledgement will retransmit If have multiple paths –…and select well among them –Can route around any fault in interconnect

Penn ESE Spring DeHon 57 Interconnect Fault Example Send message Expect Acknowledgement

Penn ESE Spring DeHon 58 Interconnect Fault Example Send message Expect Acknowledgement If Fail

Penn ESE Spring DeHon 59 Interconnect Fault Example Send message Expect Acknowledgement If Fail –No ack

Penn ESE Spring DeHon 60 Interconnect Fault Example If Fail  no ack –Retry –Preferably with different resource

Penn ESE Spring DeHon 61 Interconnect Fault Example If Fail  no ack –Retry –Preferably with different resource Ack signals success
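
A minimal sketch of the ack/retry idea (an abstract model with hypothetical names, not a real network API): if no acknowledgement arrives, retry, preferably over a different path.

```python
import random

# Sketch: send a message, expect an acknowledgement, retry on a different
# path if none arrives (abstract model of detect-and-retransmit).
def send_with_retry(message, paths, max_tries=4):
    for attempt in range(max_tries):
        path = paths[attempt % len(paths)]   # prefer a different resource each try
        if path(message):                    # path returns True if ack received
            return True
    return False

good_path  = lambda m: True
flaky_path = lambda m: random.random() < 0.5
print(send_with_retry("hello", [flaky_path, good_path]))   # True
```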

Penn ESE Spring DeHon 62 Transit Multipath Butterfly (or Fat-Tree) networks with multiple paths

Penn ESE Spring DeHon 63 Multiple Paths Provide bandwidth Minimize congestion Provide redundancy to tolerate faults

Penn ESE Spring DeHon 64 Routers May be faulty (links may be faulty) Dynamic –Corrupt data –Misroute –Send data nowhere

Penn ESE Spring DeHon 65 Multibutterfly Performance w/ Faults

Penn ESE Spring DeHon 66 Compute Elements Simplest thing we can do: –Compute redundantly –Vote on answer –Similar to redundant memory

Penn ESE Spring DeHon 67 Compute Elements Unlike Memory –State of computation important –Once a processor makes an error All subsequent results may be wrong Response –“reset” processors which fail vote –Go to spare set to replace failing processor

Penn ESE Spring DeHon 68 In Use NASA Space Shuttle –Uses set of 4 voting processors Boeing 777 –Uses voting processors (different architectures, code)

Penn ESE Spring DeHon 69 Forward Recovery Can take this voting idea to gate level –von Neumann 1956 Basic gate is a majority gate –Example 3-input voter Alternate stages –Compute –Voting (restoration) Number of technical details… High level bit: –Requires Pgate>0.996 –Can make whole system as reliable as individual gate
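
A simplified sketch (our own recurrence, not von Neumann's full multiplexing construction) of alternating compute/restore stages where the 3-input voter itself is unreliable. It illustrates why a single voter per stage caps reliability near the voter's own reliability, which is what motivates the multiplexed-bundle schemes on the next slide:

```python
# Sketch (simplified, not von Neumann's full construction): each restoration
# stage votes over 3 independent copies, each correct with probability p;
# the voter gate itself errs with probability eps.
def restore(p, eps):
    vote_ok = p**3 + 3 * p**2 * (1 - p)    # majority of the 3 inputs correct
    return (1 - eps) * vote_ok + eps * (1 - vote_ok)

for eps in (0.001, 0.02):
    q = 0.99
    for _ in range(10):                     # 10 alternating compute/restore stages
        q = restore(q, eps)
    print(eps, round(q, 4))   # settles near 1 - eps: the single voter is the limit
```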

Penn ESE Spring DeHon 70 Majority Multiplexing [Roy+Beiu/IEEE Nano2004] Maybe there’s a better way…

Penn ESE Spring DeHon 71 Rollback Recovery Commit state of computation at key points –to memory (ECC, RAID protected...) –…reduce to previously solved problem… On faults (lifetime defects) –recover state from last checkpoint –like going to last backup…. –…(snapshot)
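
A minimal sketch of rollback recovery (an abstract model; names are ours): commit state at key points, and on a detected fault restore the last committed state and re-execute.

```python
import copy, random

# Sketch: rollback recovery -- commit state periodically; on a detected fault,
# restore the last checkpoint and re-execute from there.
def run_with_checkpoints(steps, fault_rate=0.1):
    state = {"i": 0, "acc": 0}
    checkpoint = copy.deepcopy(state)            # committed to protected storage
    while state["i"] < steps:
        state["acc"] += state["i"]
        state["i"] += 1
        if random.random() < fault_rate:         # fault detected this step
            state = copy.deepcopy(checkpoint)    # roll back to last good state
        elif state["i"] % 10 == 0:
            checkpoint = copy.deepcopy(state)    # commit a new checkpoint
    return state["acc"]

print(run_with_checkpoints(100))   # 4950 despite the injected faults
```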

Penn ESE Spring DeHon 72 Defect vs. Fault Tolerance Defect –Can tolerate large defect rates (10%) Use virtually all good components Small overhead beyond faulty components Fault –Require lower fault rate (e.g. VN <0.4%) Overhead to do so can be quite large

Penn ESE Spring DeHon 73 Summary Possible to engineer practical, reliable systems from –Imperfect fabrication processes (defects) –Unreliable elements (faults) We do it today for large scale systems –Memories (DRAMs, Hard Disks, CDs) –Internet …and critical systems –Space ships, Airplanes Engineering Questions –Where invest area/effort? Higher yielding components? Tolerating faulty components? –Where do we invoke law of large numbers? Above/below the device level

Penn ESE Spring DeHon 74 Admin Final class on Wednesday –Will have course feedback forms André traveling April 18-26 –Won't find him in office Final exercise –Due Friday, May 4th –Proposals for Problem 3 before Friday, April 27th …same goes for clarifying questions

Penn ESE Spring DeHon 75 Big Ideas Left to itself: –reliability of system << reliability of parts Can design –system reliability >> reliability of parts [defects] –system reliability ~= reliability of parts [faults] For large systems –must engineer reliability of system –…all systems becoming “large”

Penn ESE Spring DeHon 76 Big Ideas Detect failures –static: directed test –dynamic: use redundancy to guard Repair with Redundancy Model –establish and provide model of correctness Perfect component model (memory model) Defect map model (disk drive model)