Software Testing and Reliability
Reliability and Risk Assessment
Aditya P. Mathur, Purdue University
August 12-16, 2002, at Guidant Corporation, Minneapolis/St. Paul, MN
Graduate Assistants: Ramkumar Natarajan, Baskar Sridharan
Last update: August 16, 2002

Reliability and Risk Assessment
Learning objectives:
1. What is software reliability?
2. How to estimate software reliability?
3. What is risk assessment?
4. How to estimate risk using application architecture?

References
1. Nozer D. Singpurwalla and Simon P. Wilson, Statistical Methods in Software Engineering: Reliability and Risk, Springer, 1999.
2. John D. Musa, Anthony Iannino, and Kazuhiro Okumoto, Software Reliability: Measurement, Prediction, Application, McGraw-Hill, 1987.
3. S. M. Yacoub and H. H. Ammar, "A Methodology for Architecture-Level Reliability Risk Analysis," IEEE Transactions on Software Engineering, vol. 28, no. 6, pp. 529-547, June 2002.
4. Bruce Powell Douglass, Real-Time UML: Developing Efficient Objects for Embedded Systems, Addison-Wesley, 1998.

Software Reliability
Software reliability is the probability of failure-free operation of an application in a specified operating environment over a specified time period. Reliability is one quality metric; others include performance, maintainability, portability, and interoperability.

Operating Environment
- Hardware: machine and configuration
- Software: operating system, libraries, etc.
- Usage (operational profile)

Uncertainty
Uncertainty is a common phenomenon in our daily lives. In software engineering, uncertainty occurs in all phases of the software life cycle. Examples: Will the schedule be met? How many faults remain? How many testers should be deployed? How many months will it take to complete the design?

Probability and Statistics
Uncertainty can be quantified and managed using probability theory and statistical inference. Probability theory assists with the quantification and combination of uncertainties. Statistical inference assists with the revision of uncertainties in light of the available data.

Probability Theory
In any software process there are known and unknown quantities. The known quantities constitute the history, denoted by H. The unknown quantities are referred to as random quantities. Each unknown quantity is denoted by a capital letter such as T or X.

Random Variables
When a random quantity can assume numerical values it is known as a random variable. Specific values of T and X are denoted by the lower-case letters t and x and are known as realizations of the corresponding random quantities. Example: if X denotes the outcome of a coin toss, then X can assume the value 0 (for heads) or 1 (for tails). X is a random variable under the assumption that the outcome of each toss is not known with certainty.

Probability
The probability of an event E, computed at time $\tau$ in light of history H, is written $P(E; \tau \mid H)$. For brevity we suppress H and $\tau$, and denote the probability of E simply as $P(E)$.

Random Events
A random quantity that may assume one of two values, say e1 and e2, is a random event, often denoted by E. Examples: Program P will fail on the next run. The design for application A will be completed in less than 3 months. The time to the next failure of application A will be greater than t. Application A contains no errors.

Binary Random Variables
When e1 and e2 are numerical values, such as 0 and 1, E is known as a binary random variable. A discrete random variable is one whose realizations are countable. Example: the number of failures encountered over four hours of application use. A continuous random variable is one whose realizations are not countable. Example: the time to the next failure.

Probability Distribution Function
For a random variable X, let E be the event that $X \le x$. Then $P(E)$ is known as the distribution function of X and is denoted $F_X(x) = P(X \le x)$. If, for the event $X = x$, we have $P(X = x) > 0$, then X is said to have a point mass at x. Note that $F_X(x)$ is nondecreasing in x and ranges from 0 to 1.

Probability Density Function
If X is continuous, takes all values in some interval I, and $F_X(x)$ is differentiable with respect to x for all x in I, then $F_X$ is absolutely continuous. The derivative of $F_X$ at x, denoted $f_X(x)$, is known as the probability density function of X; $f_X(x)\,dx$ is the approximate probability that X takes a value in the interval $(x, x + dx)$.

Exponential Distribution
Density function of a continuous exponential random variable:
$f_X(x) = \lambda e^{-\lambda x}$, for $x > 0$ and $\lambda > 0$.
[Figure: plot of the exponential density, decaying from $\lambda$ at x = 0.]

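As a quick numerical illustration, here is a minimal Python sketch of the exponential density and its distribution function; the rate value lam = 0.3 is an assumed example, not from the slides:

```python
import math

def exp_density(x: float, lam: float) -> float:
    """Exponential density f(x) = lam * exp(-lam * x), for x > 0."""
    if x <= 0 or lam <= 0:
        raise ValueError("x and lam must be positive")
    return lam * math.exp(-lam * x)

def exp_cdf(x: float, lam: float) -> float:
    """Distribution function F(x) = 1 - exp(-lam * x)."""
    return 1.0 - math.exp(-lam * x)

lam = 0.3  # assumed example rate
print(exp_density(1.0, lam))  # density at x = 1
print(exp_cdf(1.0, lam))      # P(X <= 1) ~= 0.259
```
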
Binomial Distribution
Suppose that an application is executed N times, each time with a distinct input. We want to know the number of inputs, X, on which the application will fail. Note that the proportion of correct outputs is a measure of the reliability of the application. X can assume the values x = 0, 1, 2, …, N; we are interested in the probability that X = x. Each input to the application can be regarded as a Bernoulli trial. This gives us Bernoulli random variables $X_i$, i = 1, 2, …, N, where $X_i$ is 1 if the application fails on the ith input and 0 otherwise. Note that $X = X_1 + X_2 + \dots + X_N$.

Binomial Distribution [contd.]
Under certain assumptions, the following probability model, known as the Binomial distribution, is used:
$P(X = x \mid H) = \binom{N}{x} p^x (1 - p)^{N - x}$, for x = 0, 1, …, N.
Here p is the probability that $X_i = 1$ for i = 1, …, N; in other words, p is the probability of failure on any single run.

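A short Python sketch of this model; the run count and per-run failure probability below are assumed example values:

```python
from math import comb

def binomial_pmf(x: int, n: int, p: float) -> float:
    """P(X = x): probability of exactly x failures in n independent runs,
    each run failing with probability p."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

n, p = 100, 0.02  # assumed example values
print(binomial_pmf(0, n, p))   # P(no failures) ~= 0.133, a reliability estimate
print(sum(binomial_pmf(x, n, p) for x in range(3)))  # P(at most 2 failures)
```
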
Poisson Distribution
When the application under test is almost error-free and is subjected to a large number of inputs, N is large, p is small, and Np is moderate. This leads to a simplification of the Binomial distribution into the Poisson distribution, given by
$P(X = x \mid H) = \frac{(Np)^x e^{-Np}}{x!}$, for x = 0, 1, 2, …

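To see the approximation numerically, here is a small comparison of the two distributions in Python, with assumed values of N and p:

```python
from math import comb, exp, factorial

def binomial_pmf(x, n, p):
    return comb(n, x) * p**x * (1 - p)**(n - x)

def poisson_pmf(x, mu):
    return mu**x * exp(-mu) / factorial(x)

# Large N, small p: Poisson(mu = N*p) closely tracks the Binomial.
n, p = 10_000, 0.0003  # assumed example values
mu = n * p             # = 3.0
for x in range(5):
    print(x, round(binomial_pmf(x, n, p), 6), round(poisson_pmf(x, mu), 6))
```
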
Software Reliability: Types
Reliability on a single execution: P(X = 1 | H), modeled by the Bernoulli distribution. Reliability over N executions: P(X = x | H), for x = 0, 1, 2, …, N, given by the Binomial distribution, or by the Poisson distribution for large N and small p. Reliability over an unbounded number of executions: P(X = x | H), for x = 1, 2, …; here we are interested in the number of inputs after which the first failure occurs, which is given by the geometric distribution.

Software Reliability: Types [contd.]
When the inputs to the software arrive continuously over time, we are interested in P(X >= x | H), i.e., the probability that the first failure occurs after x time units. This is given by the exponential distribution. The time of occurrence of the kth failure is given by the Gamma distribution. There are many other models of reliability, over one hundred in all!

Software Failures: Sources of Uncertainty
- Uncertainty about the presence and location of defects.
- Uncertainty about the use of run types.
- Will a run for a given input state cause a failure?

Failure Process
Inputs arrive at an application at random times. Some inputs cause failures and others do not. T1, T2, … denote the (CPU) times between successive application failures. Most reliability models are centered around these interfailure times.

Failure Intensity and Reliability
Failure intensity is the number of failures experienced per unit of time. For example, the failure intensity of an application might be 0.3 failures/hr. Failure intensity is an alternative way of expressing reliability $R(\tau)$, the probability of no failures over a time duration $\tau$. For a constant failure intensity $\lambda$ we have $R(\tau) = e^{-\lambda \tau}$. It is safe to assume that during testing and debugging the failure intensity decreases with time, and thus the reliability increases.

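A minimal sketch of this relationship in Python, reusing the slide's example intensity of 0.3 failures/hr:

```python
from math import exp

def reliability(tau_hours: float, failure_intensity: float) -> float:
    """R(tau) = exp(-lambda * tau) for a constant failure intensity lambda."""
    return exp(-failure_intensity * tau_hours)

lam = 0.3  # failures per hour, the slide's example value
print(reliability(1.0, lam))  # ~0.741: P(no failure in the next hour)
print(reliability(8.0, lam))  # ~0.091: P(no failure over an 8-hour period)
```
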
Jelinski-Moranda Model [1972]
Assumptions:
- The application contains an unknown number N of defects.
- Each time the application fails, the defect that caused the failure is removed; debugging is perfect.
- The failure rate during $T_i$, the interval between the (i-1)st and ith failures, is proportional to $(N - i + 1)$: a constant relationship between the number of remaining defects and the failure rate.

Jelinski-Moranda Model [contd.]
Given supposed software failure times $0 = S_0 \le S_1 \le \dots \le S_i$, i = 1, 2, …, and some constant c, the failure rate during $T_i = S_i - S_{i-1}$ is
$r_{T_i}(t) = c\,(N - i + 1)$, for $S_{i-1} \le t < S_i$.
Note that the failure rate drops by a constant amount after each repair.
[Figure: step plot of the failure rate versus time, dropping at each failure time S1, S2, S3, …]

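A small Python sketch of the Jelinski-Moranda failure rate; the values of N and c are assumed for illustration:

```python
def jm_failure_rate(i: int, n_defects: int, c: float) -> float:
    """Failure rate while awaiting the i-th failure, after i-1 defects
    have been found and (perfectly) removed."""
    return c * (n_defects - i + 1)

N, c = 10, 0.05  # assumed: 10 initial defects, per-defect rate 0.05/hr
for i in range(1, 6):
    rate = jm_failure_rate(i, N, c)
    # Interfailure times are exponential, so the mean is 1/rate.
    print(f"interval {i}: rate={rate:.2f}/hr, mean interfailure time={1/rate:.1f} hr")
```
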
Musa-Okumoto Model: Terminology
- Execution time: $\tau$
- Initial failure intensity: $\lambda_0 = f K \omega_0$
- Average number of failures experienced by a given time: $\mu(\tau)$
- Total number of failures in infinite time: $\nu_0 = \omega_0 / B$
- Fault reduction factor: $B$
- Per-fault hazard rate: $\phi$; note that $\lambda_0 / \nu_0 = B\phi$
- Execution time measured from the current time: $\tau'$

Musa-Okumoto Model: Terminology [contd.]
- Number of inherent faults: $\omega_0 = \omega_I \cdot I_s$
- Number of source instructions: $I_s$
- Instruction execution rate: $r$
- Executable object instructions: $I$
- Linear execution frequency: $f = r / I$
- Fault exposure ratio: $K$
- Number of inherent faults per source instruction: $\omega_I$

Musa-Okumoto: Basic Model
Failure intensity for the basic execution time model:
$\lambda(\mu) = \lambda_0 \left(1 - \frac{\mu}{\nu_0}\right)$,
where $\mu$ is the average number of failures experienced.

Musa-Okumoto: Logarithmic Poisson Model
Failure intensity decay parameter: $\theta$. Failure intensity for the logarithmic Poisson model:
$\lambda(\mu) = \lambda_0\, e^{-\theta \mu}$

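For comparison, a Python sketch of both failure intensity functions; the parameter values are assumed for illustration only:

```python
from math import exp

def basic_intensity(mu: float, lam0: float, nu0: float) -> float:
    """Basic execution time model: intensity decays linearly in mu,
    the expected number of failures experienced."""
    return lam0 * (1.0 - mu / nu0)

def log_poisson_intensity(mu: float, lam0: float, theta: float) -> float:
    """Logarithmic Poisson model: intensity decays exponentially in mu."""
    return lam0 * exp(-theta * mu)

lam0, nu0, theta = 10.0, 100.0, 0.025  # assumed example parameters
for mu in (0, 25, 50, 75, 100):
    print(mu, basic_intensity(mu, lam0, nu0),
          round(log_poisson_intensity(mu, lam0, theta), 3))
```
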
Failure Intensity Comparison as a Function of Average Failures Experienced
[Figure: failure intensity $\lambda(\mu)$ versus the average number of failures experienced $\mu$; the basic model decays linearly from $\lambda_0$, the logarithmic Poisson model decays exponentially.]

Failure Intensity Comparison as a Function of Execution Time
[Figure: failure intensity $\lambda(\tau)$ versus execution time $\tau$ for the basic and logarithmic Poisson models.]

Which Model to Use?
Uniform operational profile: use the basic model. Non-uniform operational profile: use the logarithmic Poisson model.

Other Issues
- Counting failures
- When is a defect repaired?
- Impact of imperfect repair

Independent Check Against Code Coverage
A reliability estimate can be cross-checked against the code coverage attained during testing (C_L/C_H: low/high coverage; R_L/R_H: low/high reliability estimate):

Code coverage | Reliability estimate | Verdict
C_L | R_L | Unreliable estimate
C_L | R_H | Unreliable estimate
C_H | R_L | Reliable estimate
C_H | R_H | Reliable estimate

Operational Profile
A quantitative characterization of how an application will be used. This characterization requires knowledge of the input variables. An input state is a vector of the values of all input variables. Input variables: an interrupt is an input variable, as are all environment variables and variables whose values are input by the user via the keyboard or from a file in response to a prompt. Internal variables, computed from one or more input variables, are not input variables. Intermediate results, and interrupts generated during the execution as a result of the execution, should not be considered input variables.

Operational Profile [contd.]
Runs of an application that begin with identical input states belong to the same run type. Example 1: two withdrawals by the same person, from the same account, and of the same dollar amount belong to the same run type. Example 2: reservations made for two different people on the same flight belong to different run types.

Operational Profile [contd.]
Function: a set of different run types. A function is conceived at the time of requirements analysis and is analogous to a use-case. Operation: a set of run types for the application as built.

Input Space: Graphical View
[Figure: the input space partitioned into functions 1 through k, each function covering a set of input states.]

Functional Profile
Function | Probability of occurrence
F1 | 0.60
F2 | 0.35
F3 | 0.05

Operational Profile
Function | Operation | Probability of occurrence
F1 | O11 | 0.40
F1 | O12 | 0.10
F1 | O13 | 0.10
F2 | O21 | 0.05
F2 | O22 | 0.15
F3 | O31 | 0.15
F3 | O33 | 0.05

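One way such a profile is put to work is to select test operations in proportion to their probabilities of occurrence. The sketch below is a minimal illustration of that idea and is not part of the original slides:

```python
import random

# Operational profile from the table above: operation -> probability
profile = {
    "O11": 0.40, "O12": 0.10, "O13": 0.10,
    "O21": 0.05, "O22": 0.15,
    "O31": 0.15, "O33": 0.05,
}
assert abs(sum(profile.values()) - 1.0) < 1e-9  # probabilities must sum to 1

# Choose operations to test in proportion to their expected field usage.
operations, weights = zip(*profile.items())
test_plan = random.choices(operations, weights=weights, k=10)
print(test_plan)
```
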
Modes and Operational Profile
Mode | Function | Operation | Probability of occurrence
Normal | F1 | O11 | 0.40
Normal | F1 | O12 | 0.10
Normal | F1 | O13 | 0.10
Normal | F2 | O21 | 0.05
Normal | F2 | O22 | 0.15
Normal | F3 | O31 | 0.15
Normal | F3 | O33 | 0.05

Modes and Operational Profile [contd.]
Mode | Function | Operation | Probability of occurrence
Administrative | AF1 | AO11 | 0.40
Administrative | AF1 | AO12 | 0.10
Administrative | AF2 | AO21 | 0.50

Reliability Estimation Process
[Flowchart: develop the operational profile, perform system test, collect failure data, compute reliability, then ask "objective met?"; if no, remove defects and continue system testing; if yes, the application is ready for release.]

Risk Assessment
Risk is a combination of two factors: the probability of malfunction and the consequence of malfunction. Dynamic complexity and coupling metrics can be used to account for the probability of a fault manifesting itself as a failure. Risk assessment is useful for:
- identifying complex modules that need more attention
- identifying potential trouble spots
- estimating test effort

Question of Interest
Given the architecture of an application, how does one quantify the risk associated with it? Note that risk analysis, as described here, can be performed before any code is developed, as soon as the system architecture, in terms of its components and connectors, is available.

Risk Assessment Procedure
1. Develop the system architecture.
2. Develop operational scenarios and their likelihoods.
3. Determine component and connector complexity.
4. Perform severity analysis.
5. Develop risk factors.
6. Develop the CDG (component dependency graph).
7. Perform risk analysis.

Cardiac Pacemaker: Behavior Modes
A behavior mode is indicated by a three-letter acronym L1 L2 L3:
- L1 (what is paced?): A: Atrium, V: Ventricle, D: Dual (both)
- L2 (which chamber is being monitored?): A: Atrium, V: Ventricle, D: Dual (both)
- L3 (what is the mode type?): I: Inhibited, T: Triggered, D: Dual pacing
Example: VVI: the Ventricle is paced when a Ventricular sense does not occur; the pace is Inhibited if a sense does occur.

Pacemaker: Components and Communication
[Figure: a magnet closes the Reed Switch, which enables communication; the programmer sends programming pulses through the Coil Driver to the Communications Gnome; the Gnome commands the Atrial and Ventricular Models, which pace and sense the heart.]

Component Description
- Reed Switch (RS): magnetically activated switch; must be closed before programming can begin.
- Coil Driver (CD): pulsed by the programmer to send 0's and 1's.
- Atrial Model (AR): controls heart pacing.
- Communications Gnome (CG): receives commands as bytes from the CD and sends them to the AR and VT.
- Ventricular Model (VT): controls sensing and the refractory period.

Scenarios
- Programming: the programmer sets the operation mode of the device.
- AVI: VT monitors the heart; when a heartbeat is not sensed, AR paces the heart and a refractory period takes effect.
- AAI: the AR component paces the heart when it does not sense any pulse.
- VVI: the VT component paces the heart when it does not sense any pulse.
- VVT: the VT component continuously paces the heart.
- AAT: the AR component continuously paces the heart.

Static Complexity for OO Designs
Coupling Between Classes (CBC): the total number of other classes to which a class is coupled. Two classes are considered coupled if methods of one class use methods or instance variables of the other class.

Operational Complexity for Statecharts
The dynamic complexity factor for each component is based on the cyclomatic complexity of the component's statechart specification. Given a program graph G with e edges and n nodes, the cyclomatic complexity is V(G) = e - n + 2. For each execution scenario $S_k$, a subset of the component's statechart specification is executed, exercising state entries, state exits, and fired transitions. The cyclomatic complexity of the path executed in component $C_i$ is called the operational complexity, denoted $cpx_k(C_i)$.

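In code the formula is a one-liner; the edge and node counts below are assumed example values:

```python
def cyclomatic_complexity(edges: int, nodes: int) -> int:
    """V(G) = e - n + 2 for a connected control-flow or statechart path graph."""
    return edges - nodes + 2

# Example: a statechart path with 9 fired transitions (edges) over 7 states (nodes)
print(cyclomatic_complexity(edges=9, nodes=7))  # -> 4
```
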
Dealing with Composite States
[Figure: statechart with composite states s1 (substates s11, s21) and s2 (substate s22), init pseudo-states, and transitions t11, t12, t13.]
The cyclomatic complexity of the s11-to-s22 transition path p is $VG_p = VG_x + VG_a + VG_e$, where x, a, and e denote the exit, action, and entry code segments executed along p.

Dynamic Complexity for Statecharts
Each component of the model is assigned a complexity variable. For each execution scenario, this variable is updated with the complexity measure of the thread that is triggered for that scenario. At the end of the simulation, the tool reports the dynamic complexity value of each component. The average operational complexity of component $C_i$ is then
$cpx(C_i) = \sum_{k=1}^{|S|} p_k \cdot cpx_k(C_i)$,
where $p_k$ is the probability of scenario $S_k$ and $|S|$ is the total number of scenarios.

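A minimal Python sketch of this weighted average, with assumed scenario probabilities and per-scenario complexities:

```python
def average_operational_complexity(cpx_by_scenario, scenario_probs):
    """cpx(Ci) = sum_k p_k * cpx_k(Ci): each scenario's operational
    complexity weighted by that scenario's probability."""
    return sum(p * c for p, c in zip(scenario_probs, cpx_by_scenario))

# Assumed values for one component over three scenarios
probs = [0.6, 0.3, 0.1]
cpx   = [4.0, 7.0, 2.0]
print(average_operational_complexity(cpx, probs))  # 0.6*4 + 0.3*7 + 0.1*2 = 4.7
```
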
Component Complexity
Sequence diagrams are developed for each scenario, and each sequence diagram is used to simulate the corresponding scenario. Simulation is used to compute the dynamic complexity of each component. Domain experts determine the relative probability of occurrence of each scenario; this is akin to the operational profile of an application. The average operational complexity is then computed as the sum of the scenario component complexities weighted by the scenario probabilities. Finally, the component complexities are normalized against the highest component complexity.

Connector Complexity
Export coupling, $EC_k(C_i, C_j)$, measures the coupling of component $C_i$ with respect to component $C_j$. It is the percentage of the messages sent from $C_i$ to $C_j$ relative to the total number of messages exchanged during the execution of scenario $S_k$. The export coupling metric for a pair of components is extended to an operational profile by averaging over all scenarios, weighted by the scenarios' probabilities of occurrence.

Connector Complexity [contd.]
Simulation is used to determine the dynamic coupling measure of each connector. Coupling among components is represented in the form of a matrix, and the coupling values are normalized against the highest coupling value.

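A sketch of the export coupling computation over a hypothetical message trace; the component names follow the pacemaker example, but the trace itself is invented for illustration (and the result is expressed as a fraction rather than a percentage):

```python
from collections import Counter

def export_coupling(messages, sender, receiver):
    """EC_k(Ci, Cj): fraction of all messages in one scenario's trace
    that were sent from Ci to Cj. messages is a list of (src, dst) pairs."""
    counts = Counter(messages)
    total = sum(counts.values())
    return counts[(sender, receiver)] / total if total else 0.0

trace = [("CG", "VT"), ("CG", "VT"), ("VT", "Heart"), ("CG", "AR"), ("AR", "Heart")]
print(export_coupling(trace, "CG", "VT"))  # 2 of 5 messages -> 0.4
```
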
Component Complexity Values
Each scenario row gives the percentage of that scenario's complexity contributed by each component.

Scenario (probability) | RS | CD | CG | AR | VT
Programming (0.01) | 8.3 | 67.4 | 24.3 | - | -
AVI (0.29) | - | - | - | 53.2 | 46.8
AAT (0.15) | - | - | - | 100 | -
AAI (0.20) | - | - | - | 100 | -
VVI (0.15) | - | - | - | - | 100
VVT (0.20) | - | - | - | - | 100
% of architecture complexity | .083 | .674 | .243 | 50.428 | 48.572
Normalized | .002 | .013 | .005 | 1 | .963

Coupling Matrix
Rows are senders, columns are receivers.

 | RS | CD | CG | AR | VT | Prog. | Heart
RS | - | - | .0014 | - | - | - | -
CD | - | - | .003 | - | - | .011 | -
CG | - | - | - | .002 | .0014 | - | -
AR | - | - | - | - | .25 | - | 1
VT | - | - | - | .27 | - | - | .873
Programmer | .0014 | .006 | - | - | - | - | -
Heart | - | - | - | .123 | .307 | - | -

Severity Analysis
Apart from their complexity, risk also depends on the severity of failure of components and connectors. Risk factors are associated with each component and connector by performing severity analysis. The basic failure mode(s) of each component and connector, and their effects on the overall system, are studied using failure mode and effects analysis (FMEA). A simulation tool is used to inject faults, one by one, into each component and each connector; the effect of each fault, and the resulting failure, is studied. Domain experts can then rank the severity of the failures, thus ranking the effect of a component or connector failure.

Severity Ranking
Domain experts assign severity indices ($svrty_i$) to the severity classes:
- Catastrophic (0.95): failure may cause death or total system loss.
- Critical (0.75): failure may cause severe injury, property damage, system damage, or loss of production.
- Marginal (0.50): failure may cause minor injury, property damage, system damage, or delay or loss of production.
- Minor (0.25): failure is not serious enough to cause injury, property damage, or system damage, but will result in unscheduled maintenance or repair.

Heuristic Risk Factor
By comparing the result of the simulation with the expected operation, the severity level of each faulty component in a given scenario is determined. The highest severity index corresponding to a severity level of failure of a given component i is assigned as its severity value, $svrty_i$. A heuristic risk factor is then computed for each component from its complexity and severity value: $hrf_i = cpx_i \times svrty_i$.

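A small sketch combining the two factors; the complexity and severity values below are taken from the pacemaker tables later in the deck:

```python
def heuristic_risk_factor(complexity: float, severity: float) -> float:
    """hrf_i = cpx_i * svrty_i, with complexity normalized to [0, 1] and
    severity drawn from the indices on the Severity Ranking slide."""
    return complexity * severity

# Normalized dynamic complexity and severity per component
components = {
    "RS": (0.002, 0.25), "CD": (0.013, 0.25), "CG": (0.005, 0.50),
    "AR": (1.000, 0.95), "VT": (0.963, 0.95),
}
for name, (cpx, sev) in components.items():
    print(name, heuristic_risk_factor(cpx, sev))  # AR -> 0.95, VT -> 0.91485
```
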
FMEA for Components (sample)
Component | Failure | Cause | Effect | Criticality
RS | Communication not enabled | Error in translating the magnet command | Unable to program the pacemaker; schedule a maintenance task | Minor
VT | No heart pulses are sensed although the heart is working fine | Heart sensor is malfunctioning | Heart is paced incorrectly; the patient could be harmed | Critical

FMEA for Connectors (sample)
Connector | Failure | Cause | Effect | Criticality
AR-Heart | Failed to pace the heart in AVI mode | Pacing hardware malfunction | Heart operation is irregular | Catastrophic
CG-VT | Sends an incorrect command (e.g., ToOff instead of ToIdle) | Incorrect interpretation of program bytes | Incorrect operation mode and pacing of the heart; the device is still monitored by the physician, but immediate maintenance is required | Marginal

Component Risk Factors: Using Dynamic Complexity
 | RS | CD | CG | AR | VT
Dynamic complexity | .002 | .013 | .005 | 1 | .963
Severity | .25 | .25 | .50 | .95 | .95
Risk factor | .0005 | .00325 | .0025 | .95 | .91485

Connector Risk Factors: Using Dynamic Complexity
Rows are senders, columns are receivers.

 | RS | CD | CG | AR | VT | Prog. | Heart
RS | - | - | .00035 | - | - | - | -
CD | - | - | .00075 | - | - | .00275 | -
CG | - | - | - | .0005 | .0007 | - | -
AR | - | - | - | - | .2375 | - | .95
VT | - | - | - | .2565 | - | - | .82935
Prog. | .00035 | .0015 | - | - | - | - | -
Heart | - | - | - | .11685 | .2916 | - | -

Component Risk Factors: Using Static Complexity
 | RS | CD | CG | AR | VT
CBC (normalized) | 0.47 | 0.8 | 1 | 0.6 | 0.6
Severity | 0.25 | 0.25 | 0.5 | 0.95 | 0.95
Risk factor based on CBC | 0.119 | 0.2 | 0.5 | 0.57 | 0.57

Component Risk Factors: Comparison
Dynamic metrics better distinguish the AR and VT components as high-risk compared with RS, CD, and CG. Using static metrics, CG is considered to be at the same risk level as AR and VT. In the pacemaker, AR and VT control the heart and hence are the highest-risk components, which is confirmed when the risk factors are computed using dynamic metrics.

Component Dependency Graphs (CDGs)
A CDG is described by sets N and E, where N is a set of nodes and E is a set of edges; s and t, both in N, are designated the start and termination nodes. Each node $n \in N$ is a tuple $\langle C_i, RC_i, EC_i \rangle$, where $C_i$ is the component corresponding to n, $RC_i$ is the reliability of $C_i$, and $EC_i$ is the average execution time of $C_i$. Each edge $e \in E$ is a tuple $\langle T_{ij}, RT_{ij}, PT_{ij} \rangle$, where $T_{ij}$ is the transition from node $C_i$ to $C_j$, $RT_{ij}$ is the reliability of this transition, and $PT_{ij}$ is the transition probability. In the methodology described here, risk factors replace the reliabilities of components and transitions.

Generation of CDGs
1. Estimate the execution probability of each scenario.
2. For each scenario, estimate the execution time of each component; then, using the scenario probabilities, compute the average execution time of each component.
3. Estimate the probability of each transition.
4. Estimate the complexity factor of each component.
5. Estimate the complexity factor of each connector.

CDG for the Pacemaker
[Figure: component dependency graph for the pacemaker, from start node s to termination node t; not all transition labels are shown.]

Reliability Risk Analysis
The architecture risk factor is obtained by aggregating the risk factors of the individual components and connectors. Example: let L be the length of an execution sequence, i.e., the number of components executed along the sequence. The risk factor of the sequence is then
$HRF = 1 - \prod_{i=1}^{L} (1 - hrf_i)$,
where $hrf_i$ is the risk factor associated with the ith component, or connector, in the sequence.

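A minimal Python rendering of this aggregation; the sample hrf values are taken from the pacemaker tables above, but the chosen sequence itself is hypothetical:

```python
from math import prod

def sequence_risk(hrfs):
    """HRF of an execution sequence: 1 - prod(1 - hrf_i) over the
    components and connectors executed along the sequence."""
    return 1.0 - prod(1.0 - h for h in hrfs)

# Hypothetical sequence: CG, then the CG-VT connector, then VT
print(sequence_risk([0.0025, 0.0007, 0.91485]))  # dominated by VT's risk
```
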
Risk Analysis Algorithm: "OR" Paths
Traverse the CDG starting at node s; stop when either t is reached or the average application execution time is consumed. Breadth expansions correspond to "OR" paths: the complemented risk factors of all nodes along the breadth expansion are weighted by the transition probabilities and summed. Example: from s, node n1 (hrf = 0.5) is reached via edge e1 (PT = 0.3), and node n2 (hrf = 0.6) via edge e2 (PT = 0.7):
HRF = 1 - [(1 - 0.5)(0.3) + (1 - 0.6)(0.7)] = 0.57.
[Figure: node s branching to n1 via e1 and to n2 via e2.]

Risk Analysis Algorithm: "AND" Paths
The depth of a path implies sequential execution, and "AND" paths also take the connector risk factors ($hrf_{ij}$) into consideration. Example: suppose node n1 (hrf = 0.5, execution time 5) is reached from node s via edge e1 (PT = 0.3), and node n2 (hrf = 0.6, execution time 12) is reached from n1 via edge e2 (PT = 0.7). Then
HRF = 1 - [(1 - 0.5)(0.3) x (1 - 0.6)(0.7)] = 0.958, and Time = Time + 5 + 12.
[Figure: sequential path from s to n1 via e1, then to n2 via e2.]

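A sketch implementing both expansions, reproducing the two worked examples; pairing each hrf with a PT value is an assumption read off the slides' numbers:

```python
from math import prod

def or_risk(branches):
    """'OR' (breadth) expansion: branches is a list of (pt, hrf) pairs;
    HRF = 1 - sum(pt_j * (1 - hrf_j))."""
    return 1.0 - sum(pt * (1.0 - hrf) for pt, hrf in branches)

def and_risk(steps):
    """'AND' (depth) expansion: steps is a list of (pt, hrf) pairs along
    a sequential path; HRF = 1 - prod(pt_i * (1 - hrf_i))."""
    return 1.0 - prod(pt * (1.0 - hrf) for pt, hrf in steps)

# The slides' worked examples
print(or_risk([(0.3, 0.5), (0.7, 0.6)]))   # 1 - (0.15 + 0.28) = 0.57
print(and_risk([(0.3, 0.5), (0.7, 0.6)]))  # 1 - (0.15 * 0.28) = 0.958
```
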
Pacemaker Risk
Given the architecture and the risk factors associated with its components and connectors, the risk factor of the pacemaker is computed to be approximately 0.9. This value is considered high: it implies that the pacemaker architecture is critical and that failures are likely to be catastrophic. Risk analysis identifies VT and AR as the highest-risk components, and the connectors among VT, AR, and the heart as the highest-risk connectors.

Advantages of Risk Analysis
The CDG is useful for the risk analysis of hierarchical systems: risks can be computed for subsystems and then aggregated to compute the risk of the entire system. The CDG is also useful for sensitivity analysis: one can study the impact of changing the risk factor of one component on the risk of the entire system. Since the analysis is most likely performed prior to coding, one might revise the architecture, or keep it and allocate resources for coding and testing according to the individual risk factors.

Summary
Reliability: modeling uncertainty, failure intensity, operational profiles, reliability growth models, parameter estimation. Risk assessment: architecture, severity analysis, risk factors, CDGs.