1
FY 2003 Initiative: IV&V of UML
WVU UI: Performance-Based Risk Assessment
Hany Ammar, Katerina Goseva-Popstojanova, V. Cortelessa, Ajith Guedem, Kalaivani Appukutty, Walid AbdelMoez, Ahmad Hassan, and Rania Elnaggar, LANE Department of Computer Science and Electrical Engineering, West Virginia University
Ali Mili, College of Computing Science, New Jersey Institute of Technology
Less risk, sooner
2
Outline
Objectives
What we can do
Why UML
UML & NASA
Project Overview
Performance-Based Risk
Accomplishments
Future Work
Publications
3
Objectives
Automated techniques for V&V of dynamic specifications:
–Performance and timing analysis
–Fault-injection based analysis; reliability-based and performance-based risk assessment
Technologies:
–UML
–Architectures
–Risk assessment methodology
Benefits:
–Find and rank critical use cases, scenarios, components, and connectors
[Images: What keeps satellites working 24/7? The ARIANE 5 explosion.]
4
What we can do
Estimate performance-based risk at the scenario level
Identify and rank performance-critical components
How? Details follow.
5
Why UML?
Unified Modeling Language
–Rational Software
–The three amigos: Booch, Rumbaugh, and Jacobson
An international standard in system specification
6
UML & NASA
Increasing use at NASA
A (very) informal survey:
–Google search for "rational rose nasa": 10,000 hits
–3 definite projects, just in the first ten results
We use a case study based on the UML specifications of the Earth Observing System
7
The Case Study
The methodology is illustrated on the Flight Operations Segment (FOS) of NASA's Earth Observing System (EOS)
NASA's EOS is the first observing system to offer integrated measurements of the Earth's processes
The FOS is responsible for planning, scheduling, commanding, and monitoring the spacecraft and the instruments on board
We have evaluated the performance-based risk of the Commanding service
8
Project Overview
FY01:
–Developed an automated simulation environment for UML dynamic specifications and suggested an observer component to detect errors
–Conducted performance and timing analysis of the NASA case study
FY02:
–Developed a fault-injection methodology
–Defined a fault model for components at the specification level
–Developed a methodology for architectural-level risk analysis
–Determined the critical use case list
–Determined the critical component/connector list
FY03:
–Develop a methodology for performance-based / reliability-based risk assessment
–Validate the risk analysis methodology
9
Performance-Based Risk
Performance is a non-functional software attribute that plays a crucial role in application domains ranging from safety-critical systems to e-commerce web sites
We introduce the concept of performance-based risk: the risk resulting from software failures that originate in behaviors that do not meet performance requirements
A performance failure is the inability of the system to meet its performance objective(s)
Performance-based risk is defined as:
Probability of performance failure × Severity of the failure
10
What do we need and what do we get?
Input:
–UML diagrams: use case diagram, sequence diagram, and deployment diagram
–Performance objectives (requirements)
Output:
–Performance-based risk factor of the scenarios modeled as sequence diagrams
–Identification of the performance-critical components in each scenario
11
Performance-Based Risk Methodology
For each use case, for each scenario:
STEP 1 – Assign a demand vector to each "action" in the sequence diagram; build a software execution model
STEP 2 – Add hardware platform characteristics to the deployment diagram; conduct stand-alone analysis
STEP 3 – Devise the workload parameters; build a system execution model; conduct contention-based analysis and estimate the probability of failure as a violation of a performance objective
STEP 4 – Conduct severity analysis and estimate the severity of a performance failure for the scenario
STEP 5 – Estimate the performance risk of the scenario; identify high-risk components
(A code sketch of this pipeline follows below.)
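To show how the five steps compose per scenario, here is a minimal Python skeleton of the pipeline. All helper names and scenario attributes are hypothetical stand-ins for the analyses described on the following slides, not the authors' tool:

    # Hypothetical skeleton of the per-scenario risk pipeline; each helper
    # stands in for one methodology step and is left as a stub here.
    def build_software_execution_model(scenario): ...            # STEP 1
    def standalone_analysis(sw_model, deployment): ...           # STEP 2
    def contention_analysis(sw_model, workload, objective): ...  # STEP 3
    def severity_analysis(scenario): ...                         # STEP 4

    def assess_performance_risk(use_cases):
        risks = {}
        for use_case in use_cases:
            for scenario in use_case.scenarios:
                sw_model = build_software_execution_model(scenario)
                standalone_analysis(sw_model, scenario.deployment)
                p_fail = contention_analysis(sw_model, scenario.workload,
                                             scenario.objective)
                severity = severity_analysis(scenario)
                risks[scenario.name] = p_fail * severity         # STEP 5
        return risks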
12
Performance-Based Risk Methodology (roadmap slide repeated; STEP 1 is next)
13
STEP 1 – Assign a Demand Vector to Every "Action" in the SD
Build a software execution model from the demand vectors and the SD. (A sketch of the data structures follows below.)
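Conceptually, a demand vector records how much service an action in the sequence diagram demands from each platform device, and the software execution model is the ordered sequence of these actions. A minimal sketch; the device names and numbers are hypothetical illustrations, not values from the case study:

    # A demand vector gives, per action, the service demand placed on each
    # hardware device. All names and values here are illustrative only.
    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        demand: dict  # device name -> service demand (e.g., seconds)

    # The software execution model of a scenario: the ordered actions of its
    # sequence diagram(s).
    sd1 = [
        Action("prepare_command_group", {"cpu": 0.020, "disk": 0.005, "net": 0.0}),
        Action("uplink_commands",       {"cpu": 0.002, "disk": 0.0,   "net": 1.5}),
    ]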
14
The Preplanned Emergency Scenario
The preplanned emergency scenario comprises two sequence diagrams:
–Preparation of the command groups to be uplinked (SD1)
–Handling of a transmission failure during uplink (SD2)
For the purpose of illustration, we assume that SD1 is executed once and SD2 (the retransmission) twice before there is a mission failure
15
Annotated SD1 (Step 1)
16
Demand vectors for SD1 (Step 1)
17
Annotated SD2 (Step 1)
18
Demand vectors for SD2 (Step 1)
19
The Software Execution Graph (Step 1)
[Graph: the software execution graph composed from SD1 (Transmit Preplanned Command) and SD2 (Retransmit on Failure, repeated 2 times), with processing nodes EOC1-EOC7, ICC1-ICC3, IDB1, T1, T2, R1, SE1, and TC1.]
20
Performance-Based Risk Methodology (roadmap slide repeated; STEP 2 is next)
21
STEP 2 – Add Hardware Platform Characteristics to the Deployment Diagram
[Deployment diagram: Spacecraft, ECOM, Communication Subsystem, Ground N/W, EOC, ICC, and IDB nodes, each annotated with its hardware platform characteristics.]
22
Conduct Stand-alone Analysis (Step 2)
Stand-alone analysis evaluates the completion time of the whole SD as if it were executed on a dedicated hardware platform with a single-user workload
The total service time consumed by the steps (as shown in the software execution graph) is 9.949 seconds. (A sketch of this computation follows below.)
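With no contention, the completion time is just the accumulated demand of every step along the execution graph. A minimal sketch reusing the hypothetical Action type from the STEP 1 sketch; the factor of 2 reflects the scenario's assumption that SD2 (the retransmission) executes twice:

    # Stand-alone completion time: sum of all per-device demands of every
    # action, with SD2 repeated once per assumed retransmission.
    def standalone_time(sd1_actions, sd2_actions, retransmissions=2):
        def total(actions):
            return sum(sum(a.demand.values()) for a in actions)
        return total(sd1_actions) + retransmissions * total(sd2_actions)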
23
Performance-Based Risk Methodology (roadmap slide repeated; STEP 3 is next)
24
Asymptotic Bounds and Failure Probability Estimate (Step 3)
[Plot: asymptotic lower bound (LB) and upper bound (UB) on response time as functions of the workload, built from the total demand D (curves of the form N*D and N*D_max), with three example performance objectives Z1, Z2, and Z3 drawn against the bounds.]
Failure probability (Z1) = 0 (the objective is never violated)
Failure probability (Z2) = 0.7958 (the objective falls between the bounds)
Failure probability (Z3) = 1 (the objective is always violated)
(A sketch of this estimator follows below.)
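One simple way to turn the bounds into a failure probability, assuming the response time is equally likely to fall anywhere between the lower and upper bounds, is the fraction of the [LB, UB] interval that violates the objective. The uniform assumption is this sketch's, not necessarily the authors' exact estimator:

    # Failure probability from asymptotic bounds, assuming the response time
    # is uniformly distributed between the lower bound lb and upper bound ub.
    def failure_probability(objective, lb, ub):
        if objective >= ub:  # objective looser than the worst case: never violated
            return 0.0
        if objective <= lb:  # objective tighter than the best case: always violated
            return 1.0
        return (ub - objective) / (ub - lb)  # fraction of the interval exceeding it

Under this estimator, a failure probability of 0.7958 corresponds to an objective sitting roughly a fifth of the way up the [LB, UB] interval.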
25
Performance-Based Risk Methodology (roadmap slide repeated; STEP 4 is next)
26
STEP 4 – Conduct Severity Analysis
For severity analysis we use Functional Failure Analysis (FFA) based on the UML use case and scenario diagrams
The inputs to FFA are:
–A list of the events of a use case (under a specific scenario)
–A list of guide words
The output is the severity level (catastrophic, critical, marginal, or minor), tabulated by the FFA. (A sketch of the severity mapping follows below.)
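To feed the risk product in STEP 5, the qualitative severity levels must be mapped to numeric severity indices. Only the catastrophic value below (0.95) is taken from the worked example on the STEP 5 slide; the other values are assumptions for illustration:

    # Hypothetical mapping of FFA severity levels to numeric severity indices.
    # 0.95 matches the worked example; the rest are illustrative placeholders.
    SEVERITY_INDEX = {
        "catastrophic": 0.95,
        "critical":     0.75,
        "marginal":     0.50,
        "minor":        0.25,
    }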
27
FFA Table for the Emergency Scenario in EOS-FOS (Step 4)
Since we are dealing with performance-based risk, we apply only the guide word "LATE"
28
Performance-Based Risk Methodology (roadmap slide repeated; STEP 5 is next)
29
STEP 5 – Estimate the Performance Risk
The performance risk of a scenario is defined as the product of:
–the probability that the system will fail to meet the required performance objective for a given workload (e.g., a desired response time), estimated in STEP 3, and
–the severity associated with this performance failure of the system in this scenario, estimated in STEP 4
Performance-based risk = probability of performance failure × severity of the failure = 0.7958 × 0.95 = 0.756
(The sketch below reproduces this computation.)
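Putting the STEP 3 and STEP 4 pieces together reproduces the number on this slide; SEVERITY_INDEX is the hypothetical mapping from the STEP 4 sketch:

    # Scenario risk = P(performance failure) * severity index.
    p_fail = 0.7958                            # from the STEP 3 bounds analysis
    severity = SEVERITY_INDEX["catastrophic"]  # 0.95, from the STEP 4 FFA
    risk = p_fail * severity                   # 0.7958 * 0.95 = 0.756 (rounded)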
30
Identify High-risk Components (Step 5)
–Estimate the overall residence time of each component in a given sequence diagram
–Sum the times of all processing steps that belong to that component in the given scenario
–Normalize by the response time of the sequence diagram
–Components that contribute significantly to the scenario's response time are the high-risk components
(A sketch of this ranking follows below.)
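A minimal sketch of the normalized residence-time ranking. It reuses the hypothetical Action type from the STEP 1 sketch and additionally assumes each action carries a component attribute naming the component that executes it:

    from collections import defaultdict

    # Rank components by the share of the scenario response time they consume.
    # Assumes each action has a `component` attribute (an assumption of this
    # sketch; the STEP 1 Action type would need that extra field).
    def rank_components(actions, response_time):
        residence = defaultdict(float)
        for a in actions:
            residence[a.component] += sum(a.demand.values())
        normalized = {c: t / response_time for c, t in residence.items()}
        # Highest normalized residence time first: the high-risk components.
        return sorted(normalized.items(), key=lambda kv: kv[1], reverse=True)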
31
Identify High-risk Components (Step 5)
[Bar chart: normalized service times per component.]
The ground (GN) and space (ECOM) networks are the most critical components
The service times of the other components are significantly smaller than those of the GN and ECOM network components and hence are not visible on the graph
32
Identify High-risk Components (Step 5)
[3-D graph: components on the x-axis, scenarios on the y-axis, and normalized service times on the z-axis.]
This graph is based on a different case study and is presented here for illustration
33
Accomplishments
Developed analytical techniques and a methodology for reliability-based risk analysis:
–A lightweight approach based on static analysis of dynamic specifications was developed and automated
–A tool was presented at the ICSE Tools session
–Applied the methodology and tool to the NASA HCS-ISS case study
Developed analytical techniques and a methodology for performance-based risk analysis:
–Applied the methodology to the NASA EOS case study
34
Publications
1. H. H. Ammar, T. Nikzadeh, and J. B. Dugan, "Risk Assessment of Software Systems Specifications," IEEE Transactions on Reliability, to appear, September 2001.
2. Hany H. Ammar, Sherif M. Yacoub, and Alaa Ibrahim, "A Fault Model for Fault Injection Analysis of Dynamic UML Specifications," International Symposium on Software Reliability Engineering, IEEE Computer Society, November 2001.
3. Rania M. Elnaggar, Vittorio Cortellessa, and Hany Ammar, "A UML-based Architectural Model for Timing and Performance Analyses of GSM Radio Subsystem," 5th World Multi-Conference on Systemics, Cybernetics and Informatics, July 2001. Received Best Paper Award.
4. Ahmed Hassan, Walid M. Abdelmoez, Rania M. Elnaggar, and Hany H. Ammar, "An Approach to Measure the Quality of Software Designs from UML Specifications," 5th World Multi-Conference on Systemics, Cybernetics and Informatics and the 7th International Conference on Information Systems, Analysis and Synthesis (ISAS), July 2001.
5. Hany H. Ammar, Vittorio Cortellessa, and Alaa Ibrahim, "Modeling Resources in a UML-based Simulative Environment," ACS/IEEE International Conference on Computer Systems and Applications (AICCSA'2001), Beirut, Lebanon, June 26-29, 2001.
6. A. Ibrahim, Sherif M. Yacoub, and Hany H. Ammar, "Architectural-Level Risk Analysis for UML Dynamic Specifications," Proceedings of the 9th International Conference on Software Quality Management (SQM2001), Loughborough University, England, April 18-20, 2001, pp. 179-190.
URL: http://www.csee.wvu.edu/~ammar/papers/2001
35
Publications
7. T. Wang, A. Hassan, A. Guedem, W. Abdelmoez, K. Goseva-Popstojanova, and H. Ammar, "Architectural Level Risk Assessment Tool Based on UML Specifications," 25th International Conference on Software Engineering, Portland, Oregon, May 3-10, 2003.
8. A. Hassan, K. Goseva-Popstojanova, and H. Ammar, "Methodology for Architecture Level Hazard Analysis," ACS/IEEE International Conference on Computer Systems and Applications (AICCSA'03), Tunis, Tunisia, July 14-18, 2003.
9. A. Hassan, W. Abdelmoez, A. Guedem, K. Apputkutty, K. Goseva-Popstojanova, and H. Ammar, "Severity Analysis at Architectural Level Based on UML Diagrams," 21st International System Safety Conference, Ottawa, Ontario, Canada, August 4-8, 2003.
10. K. Goseva-Popstojanova, A. Hassan, A. Guedem, W. Abdelmoez, D. Nassar, H. Ammar, and A. Mili, "Architectural-Level Risk Analysis Using UML," IEEE Transactions on Software Engineering (accepted for publication).
URL: http://www.csee.wvu.edu/~ammar/papers/2001