Reliability and Fault Tolerance

Reliability and Fault Tolerance Setha Pan-ngum

Introduction From a survey by the American Society for Quality Control [1], the ten most important product attributes (average score):
- Performance: 9.5
- Lasts a long time (reliability): 9.0
- Service: 8.9
- Easily repaired (maintainability): 8.8
- Warranty: 8.4
- Ease of use: 8.3
- Appearance: 7.7
- Brand name: 6.3
- Packaging/display: 5.8
- Latest model: 5.4
Reliability and maintainability rank near the top.

Introduction Major requirements of embedded systems:
- Low failure rate, which leads to fault-tolerant design
- Graceful degradation

Failures, errors, faults
- Fault – a defect that causes malfunction. Hardware fault: e.g. a broken wire, stuck logic. Software fault: e.g. a bug.
- Error – an unintended state caused by a fault, e.g. a software bug leads to a wrong calculation and hence a wrong output.
- Failure – an error leads to system failure: the system operates differently from what is intended.

Causes of Failures
- Errors in specification or design
- Component defects
- Environmental effects

Errors in specification or design Probably the hardest to detect. Embedded system development proceeds from specification to design to implementation; if the specification is wrong, all the following steps will be wrong, e.g. the unit-incompatibility problem in the well-known rocket example.

Component defects Depend on the device. Electronic components can have defects from manufacturing as well as from wear and tear.

Operating environment Stresses: temperature, moisture, vibration.

Classification of failures
Nature:
- Value – incorrect output
- Timing – correct output, but delivered too late
Perception (as seen by users):
- Persistent – all users see the same result, e.g. a sensor reading stuck at '0'
- Inconsistent – users see different results, e.g. a floating sensor reading (say between 1 and 3 V) that could be seen as either '1' or '0'. These are called malicious or Byzantine failures.

Classification of failures
Effects:
- Benign – not serious, e.g. a broken TV
- Malign – serious, e.g. a plane crash
Oftenness:
- Permanent – broken equipment
- Transient – a loose wire, processors under stress (EMI, power supply, radiation)
Transient failures occur a lot more often!

Example of transient failures From a report on the fire-control radar of F-16 fighters [3]:
- Pilots noticed a malfunction every 6 hours
- Pilots requested maintenance every 31 hours
- Only 1/3 of the requests could be reproduced in the workshop
- Overall, less than 10% of transient failures could be reproduced!

Types of errors
- Transient – occurs regularly, e.g. an electrical glitch causes a temporary value error
- Permanent – e.g. a transient fault can be kept in a database, making the error permanent

Classification of faults
- Nature: by chance (e.g. a broken wire) or intentional (e.g. a virus)
- Perception: physical or design
- Boundary: internal (e.g. component breakdown) or external (e.g. faults caused by EMI)

Classification of faults
- Origin: development (e.g. a fault in the program or device) or operation (e.g. a user entering wrong input)
- Persistence: transient (e.g. glitches caused by lightning) or permanent (faults that need repair)

Definitions
- Reliability R(t): probability that the system will perform its intended function in the specified environment up to time t.
- Maintainability M(t): probability that the system can be restored within t time units after a failure.
- Availability A(t): probability that the system is available to perform the specified service at time t (the percentage of time the system is working).

Reliability [4]
- R(0) = 1, and R(t) decreases towards 0 as t grows.
- Failure density: f(t) = -dR(t)/dt
- Failure rate: λ(t) = f(t)/R(t)
- λ(t)·dt is the conditional probability that the system will fail in the interval [t, t+dt], given that it was operational at the beginning of this interval.
- When λ(t) = λ is constant, R(t) = e^(-λt) and MTTF = 1/λ (Mean Time To Failure).
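As a quick numerical illustration (not part of the original slides), here is a minimal Python sketch of these formulas for a constant failure rate; the value λ = 1e-4 failures per hour is an assumed example.

```python
import math

def reliability(t_hours: float, failure_rate: float) -> float:
    """R(t) = exp(-lambda * t) for a constant failure rate lambda."""
    return math.exp(-failure_rate * t_hours)

lam = 1e-4          # assumed failure rate: 1e-4 failures per hour
mttf = 1.0 / lam    # MTTF = 1/lambda = 10,000 hours

print(f"MTTF = {mttf:.0f} h")
print(f"R(1000 h) = {reliability(1000, lam):.3f}")  # ~0.905
print(f"R(MTTF)   = {reliability(mttf, lam):.3f}")  # ~0.368 = 1/e
```

Note that at t = MTTF the reliability has already dropped to 1/e, i.e. roughly a 37% chance the system is still working.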

Failure rate (t) Burn-in Wear-out Late Early faillures Real-time Period of constant Failure Rate Early faillures Late Burn-in Wear-out

Failure rate vs. cost [4] US Air Force observation: within a given technology, the failure rate of electronic systems increases with increasing system cost.

Maintainability
- Measured by the repair rate μ.
- When μ(t) = μ is constant, M(t) = 1 - e^(-μt) and MTTR = 1/μ (Mean Time To Repair).
- Preventive maintenance: if λ increases over time, it makes sense to replace the aging unit. If λ evolves differently for different units, preventive maintenance consists in replacing the "Smallest Replaceable Units" with a growing λ.

Reliability vs. Maintainability Reliability and maintainability are, to a certain extent, conflicting goals. Example: connectors.
- Inside an SRU, reliability must be optimized: soldered connections (good reliability, bad maintainability).
- Between SRUs, maintainability is important: plug connectors (good maintainability, bad reliability).

Availability A = MTTF / (MTTF + MTTR) Good availability can be achieved either by a high MTTF or by a small MTTR. A high system MTTF can be achieved by means of fault tolerance: the system continues to operate properly even when some components have failed. Fault tolerance also reduces the MTTR requirements.
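A small worked example of the formula (the MTTF and MTTR values below are assumed for illustration, not taken from the slides):

```python
def availability(mttf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability A = MTTF / (MTTF + MTTR)."""
    return mttf_hours / (mttf_hours + mttr_hours)

# Assumed values: MTTF = 10,000 h, MTTR = 2 h.
print(f"A = {availability(10_000, 2):.5f}")  # 0.99980 -> about 99.98% uptime
```

Halving the MTTR has the same effect on availability as doubling the MTTF, which is why fast repair (or fast switch-over to a redundant unit) matters so much.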

Fault tolerance is obtained through redundancy: more resources are assigned to a task than strictly required. Redundancy can be used for fault detection and for fault correction, and can be implemented at various levels:
- at component level
- at processor level
- at system level

Redundancy at component level Error detection/correction in memories:
- Error detection by a parity bit.
- Error correction by multiple parity bits.
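A minimal sketch (not from the slides) of single-bit even parity over a data word; the word width and helper names are illustrative only.

```python
def parity_bit(word: int) -> int:
    """Even parity: the parity bit makes the total number of 1-bits even."""
    return bin(word).count("1") % 2

def check(word: int, stored_parity: int) -> bool:
    """Detects any single-bit error (any odd number of flipped bits)."""
    return parity_bit(word) == stored_parity

data = 0b1011_0010
p = parity_bit(data)            # stored alongside the data word
corrupted = data ^ 0b0000_1000  # one bit flipped by a transient fault
print(check(data, p))           # True  - no error detected
print(check(corrupted, p))      # False - single-bit error detected
```

A single parity bit only detects errors; correcting them requires multiple parity bits (e.g. a Hamming code), as the slide notes.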

Redundancy at component level Stripe sets with parity (RAID).
[Figure: Disk 1, Disk 2, Disk 3 – each block on one disk is the XOR of the corresponding blocks on the two other disks.]
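A small sketch of the XOR-parity idea from the figure, assuming the three-disk layout shown; the block contents are made-up example data.

```python
# Three "disks": two data blocks and one parity block (a block is a bytes object).
d1 = bytes([0x12, 0x34, 0x56, 0x78])
d2 = bytes([0xAB, 0xCD, 0xEF, 0x01])

def xor_blocks(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equally sized blocks."""
    return bytes(x ^ y for x, y in zip(a, b))

parity = xor_blocks(d1, d2)      # written to the third disk

# Disk 2 fails: rebuild its block from the surviving disk and the parity.
rebuilt_d2 = xor_blocks(d1, parity)
print(rebuilt_d2 == d2)          # True
```

Because XOR is its own inverse, any single failed disk can be reconstructed from the remaining disks, at the cost of one disk's worth of parity storage.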

Redundancy at component level Error detection in an ALU.
[Figure: the ALU result is checked by an independent "proof by 9" unit; a mismatch raises an error.]
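A minimal sketch of the idea behind a "proof by 9" (residue) check: the result of an addition must have the same value modulo 9 as the sum of the operands' residues. The fault-injection parameter below is purely illustrative.

```python
def residue9(x: int) -> int:
    """Residue modulo 9 - the classic 'casting out nines' check value."""
    return x % 9

def add_with_check(a: int, b: int, fault: int = 0) -> int:
    result = (a + b) ^ fault  # 'fault' simulates a flipped bit in the ALU result
    # Independent check computed in the small residue domain.
    if residue9(result) != (residue9(a) + residue9(b)) % 9:
        raise RuntimeError("ALU error detected by proof-by-9 check")
    return result

print(add_with_check(123456, 654321))            # 777777, check passes

try:
    add_with_check(123456, 654321, fault=1)      # low bit of the result flipped
except RuntimeError as e:
    print(e)                                     # error detected
```

The check is cheap because the residue unit is much smaller than the ALU, but errors whose numeric effect is a multiple of 9 escape detection.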

Redundancy in components
Error detection:
- to correct transient errors by retry
- to avoid using corrupted data
Error correction:
- to correct transient errors on the fly
- to remain operational after a catastrophic component failure
- to allow scheduled maintenance instead of urgent repair

Fault detection at Processor Level
[Figure: CPU 1 and CPU 2 execute the same task; a comparator flags an error when their outputs differ.]

Fault correction at Processor Level
[Figure: CPU 1, CPU 2 and CPU 3 execute the same task; voting logic selects the majority output (triple modular redundancy).]
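A minimal sketch (not from the slides) of the 2-out-of-3 majority vote that such voting logic implements; the replica outputs are assumed example values.

```python
from collections import Counter

def vote(outputs):
    """Return the majority value among replica outputs, or raise if there is none."""
    value, count = Counter(outputs).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority - more than one replica disagrees")
    return value

# CPU 2 suffers a transient fault; the two healthy replicas outvote it.
print(vote([42, 41, 42]))  # -> 42
```

This masks any single faulty processor, but only as long as the healthy replicas agree, which is exactly where replica determinism (next slides) comes in.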

Replica Determinism A set of replicated RT objects is "replica determinate" if all objects of this set visit the same states at about the same time. "At about the same time" is a concession to the finite precision of the clock synchronization. Replica determinism is needed for:
- consistent distributed actions
- fault tolerance by active redundancy

Replica Determinism Lack of replica determinism makes voting meaningless. Example: an airplane on take-off.
- System 1: decides "take off" – Accelerate Engine
- System 2: decides "abort" – Stop Engine
- System 3 (fault): Stop Engine
- Majority: Stop Engine
Lack of replica determinism causes the faulty channel to win!!!

Fault Correction at System Level Hot Stand-By
[Figure: System 1 and System 2 both run; error detection on the active system triggers a switch-over to the stand-by.]

Fault Correction at System Level Cold Stand-By
[Figure: System 1 and System 2 share a common memory; on error detection, the stand-by system is started and takes over using the state kept in the common memory.]

Fault Correction at System Level Distributed Common Memory
[Figure: System 1 and System 2 with error detection and a distributed common memory.] In fact, each processor has access to the memory of the other in order to keep a copy of the state of all critical processes.

Fault Correction at System Level Load Sharing
[Figure: several identical systems share a common memory and the workload; if one system fails, the remaining systems take over its load.]

Safety Critical Systems
[Figure: four systems (SY 1 to SY 4) feed voting logic.] Fail once, still operational; fail twice, still safe.

Safety Critical Systems But what happens in the case of a software bug???

Space Shuttle Computer System
[Figure: four computers (SY 1 to SY 4) feed voting logic; a fifth computer (SY 5), running independently developed backup software, guards against a common software bug.]

References
[1] Ebeling, C., An Introduction to Reliability and Maintainability Engineering, McGraw-Hill, 1997.
[2] Krishna, C., Real-Time Systems, McGraw-Hill, 1997.
[3] Kopetz, H., Real-Time Systems: Design Principles for Distributed Embedded Applications, Kluwer, 1997.
[4] Tiberghien, J., Real-Time System Fault Tolerance, lecture slides.