GE 116 Lecture 1 ENGR. MARVIN JAY T. SERRANO Lecturer.


SAFETY ENGINEERING  The engineering discipline which assures that engineered systems provide acceptable levels of safety. It is strongly related to systems engineering, industrial engineering, and the subset system safety engineering. Safety engineering assures that a life-critical system behaves as needed even when components fail.

ANALYSIS TECHNIQUES  Analysis techniques can be split into two categories: qualitative and quantitative methods. Both approaches share the goal of finding causal dependencies between a system-level hazard and failures of individual components.  Qualitative approaches focus on the question "What must go wrong such that a system hazard may occur?"  Quantitative methods aim at providing estimates of the probabilities, rates, and/or severity of consequences.

TRADITIONAL METHODS FOR SAFETY ANALYSIS  The two traditional techniques are failure mode and effects analysis and fault tree analysis.  These techniques are ways of finding problems and of making plans to cope with failures, as in probabilistic risk assessment. One of the earliest complete studies using these techniques on a commercial nuclear plant was the WASH-1400 study, also known as the Reactor Safety Study or the Rasmussen Report.

FAILURE MODE AND EFFECTS ANALYSIS (FMEA)  A bottom-up, inductive analytical method which may be performed at either the functional or piece-part level.  For functional FMEA, failure modes are identified for each function in a system or equipment item, usually with the help of a functional block diagram. For piece-part FMEA, failure modes are identified for each piece-part component (such as a valve, connector, resistor, or diode).  The effects of each failure mode are described and assigned a probability based on the failure rate and failure mode ratio of the function or component. This quantification is difficult for software: a bug either exists or it does not, and the failure models used for hardware components do not apply. Temperature, age, and manufacturing variability affect a resistor; they do not affect software.  Failure modes with identical effects can be combined and summarized in a Failure Mode Effects Summary. When combined with criticality analysis, FMEA is known as Failure Mode, Effects, and Criticality Analysis (FMECA).
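FMECA's criticality ranking is often summarized with a Risk Priority Number, RPN = severity x occurrence x detection. The sketch below is illustrative only: the component names, ratings, and 1-10 scales are assumptions for this example, not values taken from any particular standard.

```python
# Minimal FMEA/FMECA worksheet sketch. All failure modes and ratings here
# are invented for illustration; real analyses take ratings from agreed
# scales and field failure data.
from dataclasses import dataclass

@dataclass
class FailureMode:
    component: str
    mode: str
    effect: str
    severity: int    # 1 (negligible) .. 10 (catastrophic)
    occurrence: int  # 1 (rare) .. 10 (frequent)
    detection: int   # 1 (certain to detect) .. 10 (undetectable)

    @property
    def rpn(self) -> int:
        # Risk Priority Number: higher means mitigate sooner.
        return self.severity * self.occurrence * self.detection

modes = [
    FailureMode("valve", "sticks open", "coolant overflow", 7, 3, 4),
    FailureMode("resistor", "open circuit", "sensor reads zero", 5, 2, 2),
    FailureMode("connector", "intermittent contact", "spurious alarm", 4, 5, 6),
]

# Rank failure modes so the highest-risk ones are addressed first.
for fm in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"{fm.component:10s} {fm.mode:22s} RPN={fm.rpn}")
```

The ranking, not the absolute RPN value, is what drives mitigation priority.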

FAULT TREE ANALYSIS (FTA)  A top-down, deductive analytical method.  In FTA, initiating primary events such as component failures, human errors, and external events are traced through Boolean logic gates to an undesired top event such as an aircraft crash or nuclear reactor core melt. The intent is to identify ways to make top events less probable, and to verify that safety goals have been achieved.  FTA may be qualitative or quantitative. When failure and event probabilities are unknown, qualitative fault trees may be analyzed for minimal cut sets. For example, if any minimal cut set contains a single basic event, then the top event may be caused by a single failure. Quantitative FTA is used to compute top event probability, and usually requires computer software such as CAFTA from the Electric Power Research Institute or SAPHIRE from the Idaho National Laboratory.
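The minimal-cut-set idea can be sketched in a few lines. The tree structure and event names below are invented for illustration; real tools such as CAFTA or SAPHIRE use far more sophisticated algorithms and handle much larger trees.

```python
# Toy minimal-cut-set search for a small fault tree. A cut set is a set of
# basic events whose joint occurrence causes the top event; a minimal cut
# set contains no smaller cut set.
from itertools import product

def cut_sets(node):
    """Return the cut sets of a gate or basic event as a list of frozensets."""
    if isinstance(node, str):                 # basic event
        return [frozenset([node])]
    gate, children = node
    child_sets = [cut_sets(c) for c in children]
    if gate == "OR":                          # any child alone suffices
        return [cs for sets in child_sets for cs in sets]
    if gate == "AND":                         # one cut set from every child
        return [frozenset().union(*combo) for combo in product(*child_sets)]
    raise ValueError(f"unknown gate {gate!r}")

def minimal(sets):
    """Keep only cut sets that contain no other cut set as a proper subset."""
    return [s for s in sets if not any(o < s for o in sets)]

# Hypothetical top event: loss of cooling occurs if the pump fails, OR if
# the main valve AND its backup both stick.
tree = ("OR", ["pump_fails",
               ("AND", ["valve_stuck", "backup_valve_stuck"])])

mcs = minimal(cut_sets(tree))
print(mcs)
```

A single-event minimal cut set (here, the pump) immediately flags a single point of failure, exactly the qualitative insight FTA is used for.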

 Fault trees are the logical inverse of success trees, and may be obtained by applying de Morgan's theorem to success trees (which are directly related to reliability block diagrams).
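The duality can be checked directly: negating a success tree's AND ("all subsystems work") yields a fault tree's OR ("some subsystem fails"), and vice versa. A brute-force truth-table check:

```python
# Verify de Morgan's theorem over all truth assignments, as it applies to
# converting success trees (AND of working subsystems) into fault trees
# (OR of failed subsystems).
from itertools import product

for a, b in product([False, True], repeat=2):
    assert (not (a and b)) == ((not a) or (not b))  # NOT-AND -> OR of NOTs
    assert (not (a or b)) == ((not a) and (not b))  # NOT-OR -> AND of NOTs

print("de Morgan holds for all truth assignments")
```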

PREVENTING FAILURE  Once a failure mode is identified, it can usually be mitigated by adding extra or redundant equipment to the system. For example, nuclear reactors contain dangerous radiation, and nuclear reactions can produce so much heat that no substance could contain them. Therefore reactors have emergency core cooling systems to keep the temperature down, shielding to contain the radiation, and engineered barriers (usually several, nested, surmounted by a containment building) to prevent accidental leakage. Safety-critical systems are commonly required to permit no single event or component failure to result in a catastrophic failure mode.  Most biological organisms have a certain amount of redundancy: multiple organs, multiple limbs, etc.  For any given failure, a fail-over or redundancy can almost always be designed and incorporated into a system.

SAFETY AND RELIABILITY  Safety is not reliability. If a medical device fails, it should fail safely; other alternatives will be available to the surgeon. If an aircraft fly-by-wire control system fails, there is no backup. Electrical power grids are designed for both safety and reliability; telephone systems are designed for reliability, which becomes a safety issue when emergency (e.g. US "911") calls are placed.  Probabilistic risk assessment has created a close relationship between safety and reliability. Component reliability, generally defined in terms of component failure rate, and external event probability are both used in quantitative safety assessment methods such as FTA.  Related probabilistic methods are used to determine system Mean Time Between Failures (MTBF), system availability, or probability of mission success or failure. Reliability analysis has a broader scope than safety analysis, in that non-critical failures are considered. On the other hand, higher failure rates are considered acceptable for non-critical systems.
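Under the common constant-failure-rate (exponential) model, failure rate and MTBF are reciprocals, and reliability over a mission time t is R(t) = exp(-lambda*t). The numbers below are illustrative, not data for any real component:

```python
# Failure rate vs. MTBF under an exponential failure model.
# lambda = 1 / MTBF, and R(t) = exp(-lambda * t) is the probability of
# surviving to time t without failure. All figures are illustrative.
import math

mtbf_hours = 50_000.0
lam = 1.0 / mtbf_hours          # failure rate, failures per hour
t = 8_760.0                     # one year of continuous operation

reliability = math.exp(-lam * t)
print(f"failure rate = {lam:.1e}/h, R(1 year) = {reliability:.3f}")
```

Note that the MTBF far exceeding one year does not make a one-year failure unlikely in absolute terms; the exponential model still predicts a noticeable chance of failure.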

 Safety generally cannot be achieved through component reliability alone. Catastrophic failure probabilities of 10⁻⁹ per hour correspond to the failure rates of very simple components such as resistors or capacitors. A complex system containing hundreds or thousands of components might be able to achieve an MTBF of 10,000 to 100,000 hours, meaning it would fail at 10⁻⁴ or 10⁻⁵ per hour. If a system failure is catastrophic, usually the only practical way to achieve a 10⁻⁹ per hour failure rate is through redundancy.  When adding equipment is impractical (usually because of expense), then the least expensive form of design is often "inherently fail-safe". That is, change the system design so its failure modes are not catastrophic. Inherent fail-safes are common in medical equipment, traffic and railway signals, communications equipment, and safety equipment.
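Why redundancy closes the gap: if two independent channels must both fail in the same one-hour interval, their per-hour failure probabilities multiply. The figures below are illustrative and assume perfect independence and failure detection, both strong assumptions in practice:

```python
# Illustrative redundancy arithmetic. A single channel at the complex-system
# failure rate quoted above (1e-5 per hour, i.e. MTBF of 100,000 h) cannot
# meet a 1e-9 per hour catastrophic-failure target, but two independent
# channels that must both fail in the same interval can.
p_single = 1e-5            # assumed per-hour failure probability, one channel
p_both = p_single ** 2     # both independent channels fail in the same hour

print(f"single channel: {p_single:.0e}/h, dual redundant: {p_both:.0e}/h")
```

Any dependence between the channels (shared power, shared software, shared environment) erodes this squaring effect, which is why common cause failures receive so much attention.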

 The typical approach is to arrange the system so that ordinary single failures cause the mechanism to shut down in a safe way (for nuclear power plants, this is termed a passively safe design, although more than ordinary failures are covered). Alternatively, if the system contains a hazard source such as a battery or rotor, then it may be possible to remove the hazard from the system so that its failure modes cannot be catastrophic.  One of the most common fail-safe systems is the overflow tube in baths and kitchen sinks. If the valve sticks open, rather than causing an overflow and damage, the tank spills into an overflow. Another common example is the elevator, in which the cable supporting the car keeps spring-loaded brakes open. If the cable breaks, the brakes grab the rails and the elevator cabin does not fall.

 Some systems can never be made fail-safe, as continuous availability is needed. For example, loss of engine thrust in flight is dangerous, so redundancy, fault tolerance, or recovery procedures are used for such situations (e.g. multiple independently controlled and independently fuel-fed engines). This also makes the system less sensitive to reliability prediction errors or quality-induced uncertainty in the separate items. On the other hand, failure detection and correction, and avoidance of common cause failures, become increasingly important to ensure system-level reliability.
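One way to see why common cause failures dominate redundant designs is the beta-factor model used in probabilistic risk assessment, in which a fraction beta of a channel's failures is assumed to disable all channels at once. The rates below are assumptions chosen for illustration:

```python
# Beta-factor sketch: even a small common-cause fraction swamps the
# independent dual-failure term in a two-channel redundant system.
# lam and beta are assumed values, not data for any real system.
lam = 1e-4      # per-hour failure rate of one channel (assumed)
beta = 0.05     # fraction of failures that are common-cause (assumed)

p_independent = ((1 - beta) * lam) ** 2   # both channels fail independently
p_common = beta * lam                     # one shared cause fails both

print(f"independent pair failure: {p_independent:.2e}/h, "
      f"common-cause failure: {p_common:.2e}/h")
```

Here the common-cause term is several hundred times larger than the independent term, which is why diversity (different hardware, different software, different suppliers) matters as much as duplication.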