Software Risk Assessment based on UML models


Software Risk Assessment based on UML models Requirements-based and Performance-based Risk Assessment By: Kalaivani Appukkutty, MS thesis, WVU This work is funded in part by grants to West Virginia University NASA Office of Safety and Mission Assurance (OSMA) Software Assurance Research Program (SARP) managed through the NASA Independent Verification and Validation (IV&V) Facility in Fairmont, West Virginia.

Overview Introduction Research Contribution Requirements-based Risk Analysis Performance-based Risk Analysis Conclusions and Future work

Introduction Risk is the possibility of suffering loss. Software risk is defined as a combination of two factors: the probability of failure of the software and the severity of that failure. Risk assessment is the process of identifying potential risks and their impact. It identifies potentially troublesome software components that require careful development and the allocation of more testing effort, and it can be performed at various phases of the software life cycle.

Research Objectives Requirements-based Risk Assessment: to identify the requirements that are at high risk early in the software life cycle; to assess requirements-based risk from UML models; to develop a methodology that can be automated. Performance-based Risk Assessment: to assess performance-based risk taking into account the severity of performance failures; to develop a methodology based on the UML Profile for Schedulability, Performance, and Time.

Requirements-based Risk Software risk is the product of two factors: the probability of malfunctioning of the software (failure) and the consequence of malfunctioning (severity). The probability of failure is proportional to the complexity of the system. Requirements-based risk is assessed for a requirement in a specific failure mode.

Requirements-based Risk Requirements are mapped to UML use cases and scenarios. A failure mode of a scenario refers to the way in which the scenario fails to achieve its requirement. The risk factor of a particular scenario a in a particular failure mode b is: Rf(a,b) = Normalized Dynamic Complexity(a,b) × Severity(a,b)

Calculating Complexity We estimate complexity as Complexity = CC × Msg, where CC is McCabe's cyclomatic complexity of the scenario for each failure mode, and Msg is the number of messages exchanged between the components in the sequence diagram up to the point where the failure mode occurs in the scenario. According to Fenton et al., "The product of cyclomatic complexity (CC) and sigFF was shown to be a good predictor of fault proneness ... The combined metrics appear to be better than both sigFF and cyclomatic complexity on their own and also better than the size metric."

Requirement Risk Matrix The rows of the matrix are the requirements R1, R2, …, Rm and their scenarios S1, S2, S3, …; the columns are the failure modes FM1, FM2, …, FMn. Each cell holds the risk factor Rf of a scenario in a failure mode, e.g. the risk factor of scenario S1 in failure mode FM2.

Requirements-based Risk Assessment Methodology
STEP 1: Map the requirements to UML sequence diagrams.
For each sequence diagram:
STEP 2: Construct the control flow graph of the scenario from the sequence diagram.
STEP 3: Identify the different failure modes on the sequence diagram and the control flow graph.
For each failure mode:
STEP 4: Assess the severity of the failure mode using Function Failure Analysis (FFA).
STEP 5: Determine the cyclomatic complexity of the sub-control flow graph.
STEP 6: Measure the number of messages exchanged between the components in the sequence diagram up to the failure point.
STEP 7: Calculate the complexity of the scenario for the failure mode as cyclomatic complexity × number of messages.
STEP 8: Calculate the risk factor of the scenario for the failure mode as complexity × severity.
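Steps 4–8 can be sketched as a small computation. This is a minimal sketch, not part of the methodology itself: the function name and dictionary layout are ours, and the example inputs are the AVI-scenario values from the pacemaker case study that follows.

```python
def risk_factors(cc, msg, severity):
    """Risk factors of one scenario, per failure mode.

    cc[fm]:       cyclomatic complexity of the sub-control flow graph (Step 5)
    msg[fm]:      messages exchanged up to the failure point (Step 6)
    severity[fm]: severity factor from Function Failure Analysis (Step 4)
    """
    complexity = {fm: cc[fm] * msg[fm] for fm in cc}        # Step 7: CC * Msg
    total = sum(complexity.values())                        # normalize within the scenario
    return {fm: (complexity[fm] / total) * severity[fm]     # Step 8: complexity * severity
            for fm in cc}

# AVI scenario of the pacemaker case study (severity 0.95 for all three modes):
avi = risk_factors(cc={"FM1": 3, "FM2": 6, "FM3": 6},
                   msg={"FM1": 4, "FM2": 7, "FM3": 9},
                   severity={fm: 0.95 for fm in ("FM1", "FM2", "FM3")})
```

With these inputs the function reproduces the AVI risk factors reported later (0.11, 0.37, 0.475 after rounding).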

Cardiac Pacemaker system A cardiac pacemaker is an implanted device that assists cardiac functions by pacing the heart, when the heart pulse fails.

Atrial-Ventricular Inhibit Scenario: Control Flow Graph Requirement R2 of the pacemaker is mapped to the Atrial-Ventricular Inhibit (AVI) scenario, whose control flow graph is constructed from the sequence diagram. The failure modes and their Msg values, marked on the sequence diagram: FM1 (Msg = 4), FM2 (Msg = 7), FM3 (Msg = 9).

Calculating Risk
Complexity, per failure mode: cyclomatic complexity (CC), number of messages (Msg), complexity (CC × Msg), and normalized complexity:
FM1: CC = 3, Msg = 4, Complexity = 12, Normalized = 0.111
FM2: CC = 6, Msg = 7, Complexity = 42, Normalized = 0.389
FM3: CC = 6, Msg = 9, Complexity = 54, Normalized = 0.500
Severity is assessed using Function Failure Analysis (FFA). The severity factor corresponding to the failure of requirement R2 in failure modes FM1, FM2 and FM3 is 0.95 (catastrophic category).
Risk, requirement R2, scenario AVI: FM1 = 0.11, FM2 = 0.37, FM3 = 0.475.

Requirement Risk Matrix (Cardiac Pacemaker)
R1, scenario AVI: FM1 = 0.11, FM2 = 0.37, FM3 = 0.475
R2, scenario AAI: FM1 = 0.151, FM2 = 0.283, FM3 = 0.566
R3, scenario Programming: risk factors 0.017 and 0.233 (two failure modes)
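One way to read the matrix is to rank requirements by their worst failure mode. This is a hypothetical sketch; in particular, which failure-mode columns the two R3 values occupy is not legible in the slide, so they are labeled generically here.

```python
# Requirement risk matrix from the slide; keys are (requirement, scenario) pairs.
matrix = {
    ("R1", "AVI"):         {"FM1": 0.11,  "FM2": 0.37,  "FM3": 0.475},
    ("R2", "AAI"):         {"FM1": 0.151, "FM2": 0.283, "FM3": 0.566},
    ("R3", "Programming"): {"FM-a": 0.017, "FM-b": 0.233},
}

# Rank each requirement by the risk factor of its worst failure mode;
# the top entry is the one needing the most development and testing effort.
ranked = sorted(matrix, key=lambda k: max(matrix[k].values()), reverse=True)
```

Here the ranking puts R2/AAI first (worst-case risk 0.566), then R1/AVI, then R3/Programming.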

Requirement Risk Chart (Cardiac Pacemaker)

Performance-based Risk Analysis Considers the non-functional performance requirements of the system, which can sometimes be more important than the functional requirements. Highlights the performance-critical components and scenarios at the architectural level. Uses the software and system execution models, which are well-defined ways of modeling the performance aspects of a system.

Performance-based Risk Analysis Methodology
For each use case and for each scenario in that use case:
STEP 1: Assign a demand vector to each action/interaction in the sequence diagram and build a software execution model for that scenario.
STEP 2: Add hardware platform characteristics to the deployment diagram and conduct a stand-alone analysis.
STEP 3: Devise the workload parameters, build a system execution model, conduct a contention-based analysis, and estimate the probability of performance failure.
STEP 4: Conduct severity analysis and estimate the severity of a performance failure of the scenario.
STEP 5: Estimate the performance risk of the scenario and identify high-risk components.

Step 1 Each action in the sequence diagram is assigned a demand vector, specifying for example the number of CPU instructions, the amount of disk access, and the data passed from one component to another. A software execution model is built that considers the maximum-service-demand path of the scenario execution.

Step 2 The annotated deployment diagram is used to identify the hardware platform characteristics on which the software system is deployed. The system execution model is built by mapping the software execution model onto the hardware component service times. A stand-alone analysis is conducted by comparing the total service time of the scenario against the response time objective of that scenario.

Step 3 Contention-based analysis is performed for the given workload and the response time objective (OBJ) of the scenario. The lower bound (LB) on the response time is derived from the total demand of the scenario, and the upper bound (UB) from the maximum demand in that scenario. The probability of performance failure is determined by three zones:
Z1 (OBJ ≥ UB): failure probability = 0
Z2 (LB < OBJ < UB): failure probability = (UB − OBJ) / (UB − LB)
Z3 (OBJ ≤ LB): failure probability = 1
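The three zones can be written as a single piecewise function (a sketch under our own naming; the Z2 linear interpolation is the formula from the slide):

```python
def perf_failure_probability(obj, lb, ub):
    """Probability that the scenario misses its response time objective OBJ,
    given the lower (LB) and upper (UB) bounds on its response time."""
    if obj >= ub:                     # zone Z1: objective beyond the worst case
        return 0.0
    if obj <= lb:                     # zone Z3: objective below the best case
        return 1.0
    return (ub - obj) / (ub - lb)     # zone Z2: linear interpolation
```

With the e-commerce numbers used later (OBJ = 1.5 s, LB = 1.35 s, UB = 2.0295 s) this reproduces the 0.7792 failure probability reported in the case study.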

Steps 4 and 5
Step 4: Severity analysis is conducted to obtain the severity factor of the performance failures of the scenario. We use a framework for conducting severity analysis of these performance failures, based on Function Failure Analysis (FFA), Failure Mode and Effects Analysis (FMEA), and Fault Tree Analysis (FTA).
Step 5: To identify performance-critical components, we first estimate the overall residence time of each component in a given scenario and then normalize it by the response time of the scenario. A high-risk component is one with a high normalized residence time in a scenario or across many scenarios.
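Step 5's component screening can be sketched as follows. The residence-time values and the 30% cutoff are illustrative assumptions, not numbers from the case study.

```python
def critical_components(residence, response_time, cutoff=0.30):
    """Flag components whose residence time is a large share of the scenario
    response time; cutoff is an illustrative threshold, not from the slides."""
    normalized = {c: t / response_time for c, t in residence.items()}
    return sorted((c for c, v in normalized.items() if v >= cutoff),
                  key=lambda c: normalized[c], reverse=True)

# Illustrative residence times (seconds) for one scenario:
hot = critical_components({"WebServer": 0.20, "Database": 0.90, "Cache": 0.05},
                          response_time=1.5)
```

In this made-up example only the database exceeds the cutoff (0.90 / 1.5 = 0.6 of the response time), so it would be flagged for further investigation and testing.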

The methodology is illustrated with a typical e-commerce case study

Sequence Diagram Annotations and Execution Graph The sequence diagram shows the PlaceRequisition scenario and the various performance annotations

Demand Vectors The sequence diagram is converted to a software execution graph, and the execution path with the longest service demand is considered for the estimation. (Execution-graph nodes: PLACE-REQ, RA4, CA4, CI3, CA5, RS1, DOA1, OS1, SPLIT.)

Service Time and Demand Vectors The deployment diagram is annotated with hardware characteristics. The completion time of the PlaceRequisition scenario is 0.1326 seconds.

Estimating Failure Probability The performance objectives are a response time objective of 1.5 seconds and a workload of 15 customers. The results are UB = 2.0295 s, LB = 1.35 s, and OBJ = 1.5 s, giving a probability of performance failure of 0.7792. The severity assigned to this scenario is catastrophic (0.95), so the risk factor of the scenario is 0.7792 × 0.95 = 0.74024.

Performance-based Risk: E-commerce case study

Residence Time of Components The components with taller bars (high risk) that are colored red (high severity) are the more performance-critical ones and require further investigation and testing.

Conclusion and Future Work The requirements-based risk assessment methodology presented fills the gap between completely domain-expert-dependent methods applied at the requirements analysis stage and formal analytical methods that do not assess risk at the requirements level. The performance-based risk assessment methodology gives a systematic approach that uses the UML performance profile to assess risk based on failure probability and severity.

Conclusion and Future Work Future work will focus on developing tool support to assist the analyst with the steps of the methodology. We also intend to apply the methodologies and supporting tools on large scale applications.

Publications
K. Appukkutty, H. H. Ammar, and K. Goseva-Popstojanova, "Software Requirement Risk Assessment Using UML," accepted at the 3rd ACS/IEEE International Conference on Computer Systems and Applications, Jan. 2005.
K. Appukkutty, H. H. Ammar, and K. Goseva-Popstojanova, "Early Risk Assessment of Software Systems," accepted at the 3rd International Conference on Computer Science, Software Engineering, Information Technology, e-Business, and Applications, Dec. 2004.
V. Cortellessa, K. Goseva-Popstojanova, K. Appukkutty, A. Guedem, A. Hassan, R. Elnaggar, W. Abdelmoez, and H. H. Ammar, "Performance-based Risk Analysis of UML Models," submitted to IEEE Transactions on Software Engineering.