1
Reliability and Fault Tolerance Setha Pan-ngum
2
Introduction
From the survey by the American Society for Quality Control [1]: the ten most important product attributes and their average scores.

Attribute                            Avg. score
Performance                          9.5
Last a long time (reliability)       9.0
Service                              8.9
Easily repaired (maintainability)    8.8
Warranty                             8.4
Ease of use                          8.3
Appearance                           7.7
Brand name                           6.3
Packaging/display                    5.8
Latest model                         5.4
3
Introduction
Major requirements of embedded systems:
–Low failure rate
–Leads to fault-tolerant design
–Graceful degradation
4
Failures, errors, faults
Fault: a defect that causes malfunction
–Hardware fault, e.g. broken wire, stuck logic
–Software fault, e.g. a bug
Error: an unintended system state caused by a fault, e.g. a software bug leads to a wrong calculation and hence a wrong output
Failure: an error leads to system failure (the system operates differently from what was intended)
5
Causes of Failures
–Errors in specification or design
–Component defects
–Environmental effects
6
Errors in specification or design
Probably the hardest to detect.
Embedded system development proceeds through:
–Specification
–Design
–Implementation
If the specification is wrong, all the following steps will be wrong, e.g. the unit-compatibility problem in the rocket example.
7
Component defects
Depend on the device.
Electronic components can have defects from manufacturing, and from wear and tear.
8
Operating environment
–Stresses
–Temperatures
–Moisture
–Vibration
9
Classification of failures
Nature
–Value: incorrect output
–Timing: correct output, but too late
Perception (as seen by users)
–Consistent: all users see the same result, e.g. a sensor reading stuck at ‘0’
–Inconsistent: users see different results, e.g. a floating sensor reading (say between 1 and 3 V) may be seen as ‘1’ by one user and ‘0’ by another. Such failures are called malicious or Byzantine failures.
10
Classification of failures
Effects
–Benign: not serious, e.g. a broken TV
–Malign: serious, e.g. a plane crash
Oftenness
–Permanent: broken equipment
–Transient: loose wire, processors under stress (EMI, power supply, radiation)
–Transient failures occur far more often!
11
Example of transient failures
From a report on the fire-control radar of F-16 fighters [3]:
–Pilots noticed a malfunction on average every 6 hours
–Pilots requested maintenance on average every 31 hours
–Only 1/3 of the requests could be reproduced in the workshop
–Overall, less than 10% of transient failures could be reproduced!
12
Types of errors
Transient
–Occur regularly, e.g. an electrical glitch causes a temporary value error
Permanent
–A transient fault can be recorded in a database, making the resulting error permanent
13
Classifications of faults
Nature
–By chance: broken wire
–Intentional: virus
Perception
–Physical
–Design
Boundary
–Internal: component breakdown
–External: EMI causes faults
14
Classifications of faults
Origin
–Development, e.g. in a program or device
–Operation, e.g. a user entering wrong input
Persistence
–Transient: glitches caused by lightning
–Permanent: faults that need repair
15
Definitions
Reliability R(t)
–The probability that a system will perform its intended function in the specified environment up to time t.
Maintainability M(t)
–The probability that a system can be restored within t time units after a failure.
Availability A(t)
–The probability that the system is available to perform the specified service at time t (the percentage of time the system is working).
16
Reliability [4]
> R(0) = 1, and R(t) → 0 as t → ∞
> Failure density: f(t) = −dR(t)/dt
> Failure rate: λ(t) = f(t)/R(t); λ(t)·dt is the conditional probability that the system will fail in the interval dt, provided it has been operational at the beginning of this interval
> When λ(t) = λ is constant, then R(t) = e^(−λt) and MTTF = 1/λ (Mean Time To Failure)
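As a quick worked example of these formulas, here is a minimal Python sketch; the failure rate λ = 10⁻⁴ failures per hour is an assumed, illustrative value, not from the slides.

import math

lam = 1e-4                            # assumed constant failure rate, failures per hour

def reliability(t, lam):
    # R(t) = exp(-lambda * t): probability of operating failure-free up to time t
    return math.exp(-lam * t)

mttf = 1 / lam                        # MTTF = 1/lambda when the failure rate is constant
print(reliability(1000, lam))         # ~0.905: about a 90.5% chance of surviving 1000 h
print(mttf)                           # 10000.0 hours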
17
Failure rate λ(t)
[Bathtub curve: λ(t) plotted over real time, showing early failures (burn-in), a period of constant failure rate, and late failures (wear-out).]
18
Failure rate vs. Costs [4]
[Plot: λ(t) versus cost of system.]
US Air Force: within a given technology, the failure rate of electronic systems increases with increasing system cost.
19
Maintainability
> Measured by the repair rate μ(t)
> When μ(t) = μ is constant, then M(t) = 1 − e^(−μt) and MTTR = 1/μ (Mean Time To Repair)
> Preventive maintenance:
–If λ increases over time, it makes sense to replace the aging unit.
–If the λ of different units evolves differently, preventive maintenance consists in replacing the “Smallest Replaceable Units” (SRUs) whose λ is growing.
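A matching Python sketch for the repair side; the repair rate μ = 0.5 repairs per hour is again an assumed, illustrative value.

import math

mu = 0.5                              # assumed constant repair rate, repairs per hour

def maintainability(t, mu):
    # M(t) = 1 - exp(-mu * t): probability that the repair is completed within t
    return 1 - math.exp(-mu * t)

mttr = 1 / mu                         # MTTR = 1/mu when the repair rate is constant
print(maintainability(4, mu))         # ~0.865: about an 86.5% chance of repair within 4 h
print(mttr)                           # 2.0 hours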
20
Reliability vs. Maintainability
> Reliability and maintainability are, to a certain extent, conflicting goals.
> Example: connectors
> Inside an SRU, reliability must be optimized
> Between SRUs, maintainability is important

                  Plug    Solder
Reliability       bad     good
Maintainability   good    bad
21
Availability
> A = MTTF / (MTTF + MTTR)
> Good availability can be achieved either
–by a high MTTF, or
–by a small MTTR
> A high system MTTF can be achieved by means of fault tolerance: the system continues to operate properly even when some components have failed.
> Fault tolerance also reduces the MTTR requirements.
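Plugging in the illustrative figures from the two sketches above (MTTF = 10 000 h and MTTR = 2 h, both assumed values):

mttf = 10_000.0                       # hours, assumed
mttr = 2.0                            # hours, assumed

availability = mttf / (mttf + mttr)   # A = MTTF / (MTTF + MTTR)
print(availability)                   # ~0.9998: the system is up about 99.98% of the time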
22
Fault tolerance
Fault tolerance is obtained through redundancy (more resources assigned to a task than strictly required).
Redundancy can be used for
–fault detection
–fault correction
and can be implemented at various levels
–at component level
–at processor level
–at system level
23
Redundancy at component level
Error detection/correction in memories:
–Error detection by a parity bit.
–Error correction by multiple parity bits.
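A minimal Python sketch of the detection part, using a single even-parity bit over an 8-bit word; the word width and helper names are illustrative, and real memories use multi-bit codes (e.g. Hamming SEC-DED) for correction.

def even_parity(bits):
    # Parity bit that makes the total number of 1s even
    return sum(bits) % 2

def write_word(bits):
    # Append the parity bit when the word is stored
    return bits + [even_parity(bits)]

def read_check(word):
    # On read, an odd number of 1s reveals a single-bit error (detection only)
    return sum(word) % 2 == 0

word = write_word([1, 0, 1, 1, 0, 0, 1, 0])
print(read_check(word))               # True: word is consistent
word[3] ^= 1                          # simulate a transient single-bit fault
print(read_check(word))               # False: the error is detected (but not located)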
24
Redundancy at component level
Stripe sets with parity (RAID):
[Diagram: Disk 1, Disk 2, Disk 3; the parity block on one disk is the XOR of the corresponding blocks on the two other disks.]
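A toy Python sketch of the parity idea (the block contents and three-disk layout are assumed for illustration): the parity disk holds the XOR of the data disks, so any single lost disk can be rebuilt from the survivors.

disk1 = bytes([0x12, 0x34, 0x56, 0x78])              # data block (assumed contents)
disk2 = bytes([0x9A, 0xBC, 0xDE, 0xF0])              # data block (assumed contents)
disk3 = bytes(a ^ b for a, b in zip(disk1, disk2))   # parity block = XOR of the others

# Disk 2 fails: its block is recovered as the XOR of the surviving disks
recovered = bytes(a ^ p for a, p in zip(disk1, disk3))
print(recovered == disk2)                            # True: missing block reconstructed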
25
Redundancy at component level
Error detection in an ALU:
[Diagram: the ALU result is checked by a “proof by 9” (casting-out-nines) residue check; a mismatch signals an error.]
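“Proof by 9” compares an arithmetic result against a cheap mod-9 residue computed from the operands. A hedged toy model in Python, with the fault injected by hand (the function names and the single-bit fault model are assumptions):

def residue9(n):
    # mod-9 residue, the quantity compared in a proof-by-9 check
    return n % 9

def checked_multiply(a, b, inject_fault=False):
    product = a * b
    if inject_fault:
        product ^= 0x10                               # model a single-bit ALU fault
    if residue9(product) != (residue9(a) * residue9(b)) % 9:
        raise ArithmeticError("residue mismatch: ALU error detected")
    return product

print(checked_multiply(1234, 5678))                   # 7006652, check passes
# checked_multiply(1234, 5678, inject_fault=True)     # raises ArithmeticError

Note that the check is not perfect: a fault that changes the result by a multiple of 9 escapes detection.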
26
Redundancy in components
Error detection
–to correct transient errors by retry
–to avoid using corrupted data
Error correction
–to correct transient errors on the fly
–to remain operational after a catastrophic component failure
–scheduled maintenance instead of urgent repair
27
Fault detection at processor level
[Diagram: CPU1 and CPU2 execute the same task; a comparator checks their outputs and signals an error on mismatch.]
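A Python sketch of the idea (names and the way a faulty channel is modelled are illustrative): both channels compute the same result and a comparator flags any mismatch. The duplex arrangement detects the fault but cannot tell which channel is wrong.

def duplex_run(task, x, channel_a, channel_b):
    # Run the same task on two channels and compare the outputs
    out_a = channel_a(task, x)
    out_b = channel_b(task, x)
    if out_a != out_b:
        raise RuntimeError("comparator mismatch: fault detected")
    return out_a

def healthy(task, x):
    return task(x)

def faulty(task, x):
    return task(x) + 1                # models a channel that computes wrongly

print(duplex_run(lambda v: v * v, 7, healthy, healthy))   # 49
# duplex_run(lambda v: v * v, 7, healthy, faulty)         # raises RuntimeError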
28
Fault correction at processor level
[Diagram: CPU1, CPU2 and CPU3 execute the same task; voting logic selects the majority output.]
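A sketch of 2-out-of-3 majority voting (triple modular redundancy); the Python voter below is an illustrative stand-in for the voting logic in the figure.

from collections import Counter

def tmr_vote(r1, r2, r3):
    # Return the value produced by at least two of the three channels
    value, count = Counter([r1, r2, r3]).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority: more than one channel faulty")
    return value

print(tmr_vote(42, 42, 42))           # 42: all channels agree
print(tmr_vote(42, 7, 42))            # 42: the single faulty channel is out-voted

A single faulty channel is masked on the fly, which is what turns mere detection into correction.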
29
Replica Determinism
A set of replicated RT objects is “replica determinate” if all objects of this set visit the same state at about the same time.
“At about the same time” makes a concession to the finite precision of the clock synchronization.
Replica determinism is needed for
–consistent distributed actions
–fault tolerance by active redundancy
30
Replica Determinism
Lack of replica determinism makes voting meaningless.
Example: airplane on takeoff
–System 1 (correct): take off → accelerate engine
–System 2 (correct): abort → stop engine
–System 3 (faulty): stop engine
–Majority: stop engine
Lack of replica determinism causes the faulty channel to win!!!
31
Fault Correction at System Level: Hot Stand-By
[Diagram: SYSTEM 1 and SYSTEM 2 both running; error detection on SYSTEM 1 triggers a switch-over to SYSTEM 2.]
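A Python sketch of hot stand-by fail-over (class and function names are assumptions, not from the slides): both systems run, and error detection on the primary simply switches the active role to the backup.

class HotStandby:
    # Forward requests to the active system; switch to the backup on failure
    def __init__(self, primary, backup):
        self.primary, self.backup = primary, backup
        self.active = primary

    def call(self, request):
        try:
            return self.active(request)
        except Exception:
            self.active = self.backup          # error detected: fail over
            return self.active(request)        # backup is already running ("hot")

def primary(request):
    raise RuntimeError("primary failed")       # models a crashed primary system

def backup(request):
    return "handled %s on backup" % request

pair = HotStandby(primary, backup)
print(pair.call("sensor read"))                # handled sensor read on backup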
32
Fault Correction at System Level: Cold Stand-By
[Diagram: SYSTEM 1 and SYSTEM 2 with error detection and a common memory holding the state needed for take-over.]
33
Fault Correction at System Level: Distributed Common Memory
[Diagram: SYSTEM 1 and SYSTEM 2 with error detection and a distributed common memory.]
In fact, each processor has access to the memory of the other, to keep a copy of the state of all critical processes.
34
Fault Correction at System Level: Load Sharing
[Diagram: several identical systems connected to a common memory, sharing the load.]
35
Safety Critical Systems
[Diagram: SYS 1 to SYS 4 with voting logic.]
Fail once: still operational; fail twice: still safe.
36
Safety Critical Systems
But what happens in case of a software bug?
37
Space Shuttle Computer System
[Diagram: SYS 1 to SYS 5 with voting logic.]
The fifth computer runs independently developed backup software, so a common software bug in the primary set does not bring the whole system down.
38
References
[1] Ebeling, C., An Introduction to Reliability and Maintainability Engineering, McGraw-Hill, 1997.
[2] Krishna, C., Real-Time Systems, McGraw-Hill, 1997.
[3] Kopetz, H., Real-Time Systems: Design Principles for Distributed Embedded Applications, Kluwer, 1997.
[4] Tiberghien, J., Real-Time System Fault Tolerance, lecture slides.