7/22/04 Report Back: Performance Analysis Track


7/22/04 Report Back: Performance Analysis Track
Dr. Carol Smidts, Wes Deadrick

Track Members
- Carol Smidts (UMD), Track Chair: Integrating Software into PRA
- Ted Bennett and Paul Wennberg (Triakis): Empirical Assurance of Embedded Software Using Realistic Simulated Failure Modes
- Dolores Wallace (GSFC): System and Software Reliability
- Bojan Cukic (WVU): Compositional Approach to Formal Models
- Kalynnda Berens (GRC): Software Safety Assurance of Programmable Logic; Injecting Faults for Software Error Evaluation of Flight Software
- Hany Ammar (WVU): Risk Assessment of Software Architectures

Agenda
- Characterization of the Field
- Problem Statement
- Benefits of Performance Analysis
- Future Directions
- Limitations
- Technology Readiness Levels

Characterization of the Field
- Goal: prediction and assessment of software risk/assurance level (mitigation optimization)
- System characteristics of interest:
  - Risk (off-nominal situations)
  - Reliability, availability, and maintainability (together, dependability)
  - Failures, in the general sense
- Performance analysis techniques: modeling and simulation, data analysis, failure analysis, and design analysis focused on criticality
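As a small worked example of the dependability attributes named above, the sketch below computes steady-state availability from mean time to failure and mean time to repair; the numbers are made up and are not from any of the track's projects.

# Worked example (made-up numbers): steady-state availability from the
# reliability/maintainability measures named above.
mttf_hours = 2000.0   # mean time to failure (hypothetical)
mttr_hours = 4.0      # mean time to restore/repair (hypothetical)

availability = mttf_hours / (mttf_hours + mttr_hours)
print(f"steady-state availability = {availability:.4%}")   # ~99.80%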

Problem Statement
Why should NASA do performance analysis? Because we care when things fail. Successfully conducting software and system performance analysis yields the data needed to make informed decisions that improve performance and overall quality.
Performance analysis permits:
- Determining if and when a system meets its requirements
- Risk reduction and quantification
- Application of new knowledge to future systems
- A better understanding of the processes by which systems are developed, enabling NASA to pursue continual improvement

Benefits of Performance Analysis
- Reduced development and operating costs
- Management and optimization of current processes, resulting in more efficient and effective processes
- A defined, repeatable process: reduced time to do the same volume of work
- Reduced risk and increased safety and reliability
- Better software architecture designs
- More maintainable systems
- Enables NASA to handle more complex systems in the future
- Puts responsibility where it belongs from an organizational perspective, focusing accountability

Future Directions for Performance Analysis
- Automation of modeling and data collection, for increased efficiency and accuracy
- A more useful, better reliability model:
  - useful = user friendly (enable the masses, not just the domain experts), with increased usability of the data (learn more from what we already have)
  - better = greater accuracy and predictability
- Define and follow repeatable methods and processes for data collection and analysis, including education and training and the use of simulation; the gold nugget is accurate and complete data

Future Directions for Performance Analysis (Cont.)
- Develop a method for establishing accurate performance predictions earlier in the life cycle
- Evolve to refine system-level assessment and factor in the human element
- Establish and define an approach to performing trade-offs among attributes (reliability, etc.)
- Provide early guidance on the criticality of components
- Optimize a defect removal model
- Develop methods and metrics for calculating and defending the return on investment of conducting performance analysis

Why Not? Standard Traps and Obstacles
- Costs and benefits: difficult to assess and quantify; long-term tracking of project benefits is recommended
- Uncertainty about scalability
- User friendliness
- Lack of generality
- "Not invented here" syndrome

Technology Readiness Levels
- Integrating Software into PRA: taxonomy (7); test-based approach for integrating SW into PRA (3)
- Empirical Assurance of Embedded Software Using Realistic Simulated Failure Modes (5); maintaining system and SW test consistency (8)
- System Reliability (3); Software Reliability (9)
- Compositional Approach to Formal Models (2)
- Software Safety Assurance of Programmable Logic (2)
- Injecting Faults for Software Error Evaluation of Flight Software (9)
- Risk Assessment of Software Architectures (5)

Research Project Summaries

Integrating Software Into PRA
Dr. Carol Smidts, Bin Li
Objective: Probabilistic Risk Assessment (PRA) is a methodology for assessing the risk of large technological systems. The objective of this research is to extend the classical PRA methodology to account for the impact of software on mission risk.

Integrating Software Into PRA (Cont.)
Achievements:
- Developed a software-related failure mode taxonomy
- Validated the taxonomy on multiple projects (ISS, Space Shuttle, X-38)
- Proposed a step-by-step approach to integration into the classical PRA framework, with quantification of input and functional failures
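The step-by-step integration itself is described in the project's publications; as an illustration only, the sketch below shows how a quantified software failure probability could enter a simple event-tree quantification alongside a hardware branch. All event names and probabilities are hypothetical, not values from this study.

# Illustrative only: folding a software failure probability into a simple
# event-tree quantification. Event names and probabilities are hypothetical.
p_initiating_event = 1e-3      # per-mission probability of the initiating event
p_hw_fails = 5e-4              # hardware mitigation branch fails
p_sw_fails = 2e-3              # software function fails (input + functional failures)

# Mission-loss sequence: initiating event AND (hardware fails OR software fails),
# treating the branch failures as independent.
p_mitigation_fails = p_hw_fails + p_sw_fails - p_hw_fails * p_sw_fails
p_sequence = p_initiating_event * p_mitigation_fails

print(f"P(mitigation fails) = {p_mitigation_fails:.3e}")
print(f"P(mission-loss sequence) = {p_sequence:.3e}")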

Problem (Triakis Corporation)
Most embedded software faults found at integration test are traceable to requirements and interface misunderstandings. A disconnect exists between the system and software development loops.
[Diagram: the system loop (design/debug and analyze/test/V&V using modeling, simulation, prototyping, executable specifications, etc.) hands a software interpretation of the requirements to a separate software loop (analyze/test/verify, design/debug, build) that meets the system again only at integration testing.]
Slide notes: The status quo process for embedded systems development begins with a system design that is developed using any number of tools created in support of the process. For example, the system designer may use modeling, simulation, prototyping, specification, and other tools to validate control algorithms, component interactions, etc., and to document the requirements for the implementation teams to follow. The system architects verify and validate their design through analysis, possibly tests, and possibly by similarity with reused components. The implementation teams take the requirements, interpret them as best they can (whether written in natural language, a specification design language, or executable specifications), and derive a hardware and software design from them. The software developers write their own tests to verify conformance to the requirements as they have interpreted them; they may use some form of simulation, hardware development boards, inspection, analysis, or similarity comparison to facilitate the verification of their code. When a major part of the system functionality has been coded, they create a build and begin loading their software onto hardware connected to test equipment, and perhaps other system elements, to begin system integration testing. The faults discovered here are filed as problem reports and passed back to the development team to resolve. The integration hardware test setup is generally very costly; consequently, it is rare to have more than one or two setups available for project use. Not only does the high demand for integration test time create a resource bottleneck that often delays the project, but finding faults during this phase of development is by far the most costly. According to Lutz's JPL spacecraft software study, 98% of the bugs discovered here "arise most commonly from (1) discrepancies between the documented requirements specifications and the requirements needed for correct functioning of the system and (2) misunderstanding of the software's interface with the rest of the system." [1] We feel this is largely due to what we see as a disconnect between the system development/verification and software development/verification loops. If the software could be tested using the same tests developed to verify the system design, it might prove to be an effective means of reducing the number of software faults discovered during integration test.

Approach (Triakis Corporation)
- Develop and simulate the entire system design using executable specifications (ES)
- Verify the total system design with a suite of tests
- Simulate the controller hardware
- Replace the controller ES with the simulated hardware running the object (flight) software
- Test the software using the system verification tests
When the software passes all system verification tests, it has correctly implemented all of the tested requirements.
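A minimal sketch of the idea of reusing the system verification suite: the same tests run first against the executable specification and then against the flight software on simulated hardware. The interfaces, test cases, and expected values below are hypothetical stand-ins, not the Triakis tooling.

# Minimal sketch (hypothetical interfaces): run the same system verification
# tests against the executable specification and against the flight software
# running on simulated controller hardware.

class ExecutableSpec:
    """Executable specification of the controller behavior."""
    def command(self, setpoint):
        return max(0.0, min(100.0, setpoint))  # spec: clamp commands to 0..100

class FlightSWOnSimulatedHW:
    """Stand-in for the object (flight) software running on simulated hardware."""
    def command(self, setpoint):
        return max(0.0, min(100.0, setpoint))

SYSTEM_VERIFICATION_TESTS = [
    ("nominal", 42.0, 42.0),
    ("below range", -5.0, 0.0),
    ("above range", 250.0, 100.0),
]

def run_suite(unit_under_test):
    """Run every system verification test and collect any mismatches."""
    failures = []
    for name, stimulus, expected in SYSTEM_VERIFICATION_TESTS:
        actual = unit_under_test.command(stimulus)
        if actual != expected:
            failures.append((name, expected, actual))
    return failures

# The same suite verifies the system design and then the flight software.
print("spec failures:", run_suite(ExecutableSpec()))
print("flight SW failures:", run_suite(FlightSWOnSimulatedHW()))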

Empirical Assurance of Embedded SW Using Realistic Simulated Failure Modes (Mini-AERCam)
IV&V Facility / Triakis Corporation
Problem: FMEA limitations
- Expensive and time-consuming
- The list of possible failure modes is extensive
- FMEA focuses on a prioritized subset of failure modes
Approach: test the software with simulated failures
- Create a pure virtual simulation of the Mini-AERCam hardware and flight environment running on a PC
- Induce realistic component/subsystem failures
- Observe the flight software's response to the induced failures
Can we improve coverage by testing software response to simulated failures? Compare results with the project-sponsored FMEA, FTA, etc.: number of failure modes evaluated, number of issues uncovered, and effort involved.

Software and System Reliability
Dolores Wallace, Bill Farr, Swapna Gokhale
Addresses the need to evaluate and assess the reliability and availability of large, complex, software-intensive systems by predicting (with associated confidence intervals):
- The number of software/system faults
- Mean time to failure and mean time to restore/repair
- Availability
- Estimated release time from testing
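To make the kind of prediction listed above concrete, here is a sketch that fits a simple Goel-Okumoto reliability growth model to cumulative fault counts. This is an illustration only, not the Enhanced Schneidewind or Hypergeometric models used in SMERFS^3, and the failure data are made up.

# Illustrative sketch (not SMERFS^3): fit a Goel-Okumoto NHPP model
# mu(t) = a * (1 - exp(-b*t)) to cumulative fault counts and use it to
# predict remaining faults. The data below are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def goel_okumoto(t, a, b):
    """Expected cumulative number of faults detected by time t."""
    return a * (1.0 - np.exp(-b * t))

# Hypothetical test data: weeks of testing vs. cumulative faults found
weeks = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], dtype=float)
faults = np.array([5, 9, 13, 16, 18, 20, 21, 22, 22, 23], dtype=float)

(a_hat, b_hat), cov = curve_fit(goel_okumoto, weeks, faults, p0=[30.0, 0.2])
a_err, b_err = np.sqrt(np.diag(cov))   # rough 1-sigma parameter uncertainties

remaining = a_hat - faults[-1]                        # predicted latent faults
failure_rate_now = a_hat * b_hat * np.exp(-b_hat * weeks[-1])
mttf_now = 1.0 / failure_rate_now                     # crude current MTTF (weeks)

print(f"total faults a = {a_hat:.1f} +/- {a_err:.1f}")
print(f"detection rate b = {b_hat:.3f} +/- {b_err:.3f}")
print(f"predicted remaining faults: {remaining:.1f}, current MTTF ~ {mttf_now:.1f} weeks")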

2003 and 2004 Research
2003 (software based):
- Completed a literature search
- Selected new models: (1) the Enhanced Schneidewind model (includes risk assessment and trade-off analysis) and (2) the Hypergeometric model
- Incorporated the new software models into the established public-domain tool SMERFS^3
- Applied the new models to a Goddard software project
- Made the latest version of SMERFS^3 available to the general public
2004 (system based):
- Conducting a similar research effort for system reliability and availability
- Will enhance SMERFS^3 and validate the system models on a Goddard data set

A Compositional Approach to Validation of Formal Models
Dejan Desovski, Bojan Cukic
Problem: A significant number of faults in real systems can be traced back to specifications, and current methodologies for specification assurance have problems:
- Theorem proving: complex
- Model checking: state-explosion problems
- Testing: incomplete
Approach: Combine them. Use test coverage to build abstractions; abstractions reduce the size of the state space for model checking. Develop visual interfaces to improve the usability of the method.

Software Fault Injection Process
Kalynnda Berens, Dr. John Crigler, Richard Plastow
A standardized approach to testing systems with COTS and hardware interfaces; provides a roadmap of where to look to determine what to test.
[Flow diagram steps: Start; Obtain Source Code and Documentation; Identify Interfaces and Critical Sections; Error/Fault Research; Estimate Effort Required; decision point: Sufficient time and funds? (Yes); Importance Analysis; Select Subset; Test Case Generation; Fault Injection Testing; Document Results, Metrics, Lessons Learned; Feedback to FCF Project; End.]
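As a minimal sketch of the fault injection step itself (not the GRC process or tooling), the code below wraps a hypothetical hardware interface and injects stuck, spike, and dropout faults so the software's response can be observed. The sensor interface, fault types, and thresholds are all made up for illustration.

# Minimal sketch of interface-level fault injection (illustrative only; the
# sensor interface and fault types here are hypothetical).
import random

class SensorInterface:
    """Stand-in for a COTS/hardware interface the flight software reads."""
    def read_temperature(self):
        return 21.7  # nominal reading

class FaultInjectingSensor:
    """Wraps the real interface and injects faults at a configured rate."""
    def __init__(self, real, fault_rate=0.2, seed=42):
        self.real = real
        self.fault_rate = fault_rate
        self.rng = random.Random(seed)

    def read_temperature(self):
        value = self.real.read_temperature()
        if self.rng.random() < self.fault_rate:
            fault = self.rng.choice(["stuck", "spike", "dropout"])
            if fault == "stuck":
                return -999.0                          # stuck-at / invalid reading
            if fault == "spike":
                return value * 100.0                   # out-of-range spike
            raise TimeoutError("sensor dropout")       # no response at all
        return value

def control_loop(sensor, cycles=20):
    """Toy 'flight software' loop whose fault response we want to observe."""
    anomalies = 0
    for _ in range(cycles):
        try:
            t = sensor.read_temperature()
            if not (-50.0 <= t <= 150.0):
                anomalies += 1     # software detects out-of-range data
        except TimeoutError:
            anomalies += 1         # software detects missing data
    return anomalies

if __name__ == "__main__":
    injected = FaultInjectingSensor(SensorInterface(), fault_rate=0.25)
    print("anomalies detected:", control_loop(injected))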

Programmable Logic at NASA
Kalynnda Berens, Jacqueline Somos
Issues:
- Lack of good assurance for PLCs and PLDs
- Increasing complexity means increasing problems
- Usage and assurance survey: software assurance (SA) was involved in less than 1/3 of the projects, with limited knowledge
Recommendations:
- Trained SA for PLCs
- For PLDs, determine what is complex and use process assurance (SA or QA)
Training created:
- Basic PLC and PLD training aimed at SA
- Process assurance for hardware QA

Year 2 of Research
What are industry and other government agencies doing for assurance and verification?
- An intensive literature search of white papers, manuals, standards, and other documents illustrating what various organizations are doing
- Focused interviews with industry practitioners: interviews were conducted with assurance personnel (both hardware and software) and engineering practitioners in various industries, including biomedical, aerospace, and control systems
- Meetings with FAA representatives: discussions led to a more thorough understanding of their approach and the pitfalls they have encountered along the way
- A position paper with recommendations for NASA Code Q

Current Effort
Implement some of the recommendations:
- Develop coursework to educate software and hardware assurance engineers; three courses:
  - PLCs for Software Assurance personnel
  - PLDs for Software Assurance personnel
  - Process Assurance for Hardware QA
- Guidebook
- Other recommendations, for Code Q to implement if desired
- Follow-up CSIP to try software-style assurance on complex electronics

Severity Analysis Methodology
Hany Ammar, Katerina Goseva-Popstojanova, Ajith Guedem, Kalaivani Appukutty, Walib AbdelMoez, and Ahmad Hassan
We have developed a methodology to assess the severity of failures of components, connectors, and scenarios based on UML models. The methodology has been applied to NASA's Earth Observing System (EOS).

Requirements Risk Analysis Methodology
We have developed a methodology for assessing requirements-based risk using normalized dynamic complexity and severity of failures. This can be used in the DDP process developed at JPL.
[Matrix: rows are requirements R1..Rm, mapped to scenarios S1, S2, S3, ...; columns are failure modes FM1, FM2, ..., FMn; each cell holds a risk factor, e.g., the risk factor of scenario S1 in failure mode FM2.]
According to Dr. Martin Feather's DDP process, "The Requirements matrix maps the impacts of each failure mode on each requirement." Requirements are mapped to UML use cases and scenarios; a failure mode refers to the way in which a scenario fails to achieve its requirement.
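As a minimal sketch of the kind of matrix described above, the code below computes a scenario-by-failure-mode risk factor as normalized dynamic complexity times severity. The scenario names, complexity values, and severity indices are hypothetical, not EOS data.

# Illustrative risk matrix (hypothetical data):
# risk factor = normalized dynamic complexity of the scenario x severity of the failure mode.
complexity = {"S1": 0.8, "S2": 0.5, "S3": 0.3}        # normalized dynamic complexity per scenario
severity = {"FM1": 0.25, "FM2": 0.95, "FM3": 0.50}    # severity index per failure mode (0..1)

risk = {s: {fm: round(c * sev, 3) for fm, sev in severity.items()}
        for s, c in complexity.items()}

# e.g., the risk factor of scenario S1 under failure mode FM2:
print(risk["S1"]["FM2"])   # 0.8 * 0.95 = 0.76

# Rank scenario/failure-mode pairs to focus mitigation on the highest risks.
ranked = sorted(((s, fm, r) for s, row in risk.items() for fm, r in row.items()),
                key=lambda x: x[2], reverse=True)
for s, fm, r in ranked[:3]:
    print(s, fm, r)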

What to Read
- Key works in the field
- Tutorials
- Web sites
(To be completed at a later time.)