Software Risk Management Practices
LiGuo Huang, Computer Science and Engineering, Southern Methodist University

1 Software Risk Management Practices
LiGuo Huang
Computer Science and Engineering, Southern Methodist University

2 Outline
Risk Management Principles and Practices
–Risk Assessment: risk identification, analysis, and prioritization
–Risk Control: risk management planning, risk resolution, and risk monitoring
Risk Management Implementation Concerns
–RM in the software life cycle: the spiral model

3 Software Risk Management (taxonomy)
Risk Assessment
–Risk Identification: checklists, decision driver analysis, assumption analysis, decomposition
–Risk Analysis: performance models, cost models, network analysis, decision analysis, quality factor analysis
–Risk Prioritization: risk exposure, risk leverage, compound risk reduction
Risk Control
–Risk Management Planning: buying information, risk avoidance, risk transfer, risk reduction, risk element planning, risk plan integration
–Risk Resolution: prototypes, simulations, benchmarks, analyses, staffing
–Risk Monitoring: milestone tracking, top-10 tracking, risk reassessment, corrective action

4 Risk Assessment

5 Risk Assessment Role in Each Life-Cycle Phase
Inputs:
–Objectives: cost, schedule, function, operation, support, reuse
–Alternatives: physical architecture, logical architecture, COTS software, reused software, special software
–Risk ID checklists and techniques; models and analysis aids
Process:
–Evaluate alternatives with respect to objectives
–Identify potential high-risk areas (no alternatives satisfy objectives; major satisfaction uncertainty)
–Analyze risk items; prioritize risk items
Outputs: Risk Management Plan (RMPn); revised objectives; subset of alternatives; prioritized risk items

6 Risk Assessment ─ Risk Identification

7 Risk Identification Techniques
Risk-item checklists
Decision driver analysis
–Comparison with experience
–Win-lose, lose-lose situations
Decomposition
–Pareto 80–20 phenomena
–Task dependencies
–Murphy’s law
–Uncertainty areas

8 Top 10 Risk Items: 1989 and 1995
1989:
1. Personnel shortfalls
2. Schedules and budgets
3. Wrong software functions
4. Wrong user interface
5. Gold plating
6. Requirements changes
7. Externally-furnished components
8. Externally-performed tasks
9. Real-time performance
10. Straining computer science
1995:
1. Personnel shortfalls
2. Schedules, budgets, process
3. COTS, external components
4. Requirements mismatch
5. User interface mismatch
6. Architecture, performance, quality
7. Requirements changes
8. Legacy software
9. Externally-performed tasks
10. Straining computer science

9 Example Risk-item Checklist: Staffing
Will your project really get all the best people?
Are there critical skills for which nobody is identified?
Are there pressures to staff with available warm bodies?
Are there pressures to overstaff in the early phases?
Are the key project people compatible?
Do they have realistic expectations about their project job?
Do their strengths match their assignment?
Are they committed full-time?
Are their task prerequisites (training, clearances, etc.) satisfied?

10 Personnel Shortfalls − UniWord Case Study
Critical skills for WYSIWYG-type user interface development
Nobody was identified for file management
Incompatibilities between key people, particularly Brown and Gray
Unsatisfied prerequisites of familiarization with the UniWindow technology
Weakness in four components: availability, mix, experience, management

11 Unrealistic Schedules and Budgets − UniWord Case Study
Technology experience: the window-based WYSIWYG workstation was unfamiliar technology for all SoftWizards personnel
Requirements definition and stability: no requirements baseline for the UniWord project
Reusable software modifications: the UniWindow workstation and its system software underwent several considerable changes impacting project window-management procedures, mouse conventions, etc.
Configuration management: no configuration management control

12 Cost Models as Risk-Identification Checklists
Cost driver factors in software cost estimation models provide a checklist of risk items strongly correlated with cost/schedule increase
Conflicting cost driver ratings

13 Cost Models as Risk-Identification Checklists − UniWord Case Study
Virtual machine experience: the SoftWizards team’s inexperience with the UniWindow-type workstation
Schedule constraint: an ambitious delivery schedule creates additional project costs

14 COTS Cost and Risk Modeling: Some Results from USC-CSSE Affiliates’ Workshop
Cost and risk drivers:
–COTS immaturity and lack of support
–Staff inexperience with COTS integration, COTS
–Incompatibility of COTS with the application; other COTS and reused software; computing platform and infrastructure; performance, scalability, reliability, … requirements
–Licenses, tailoring, training
–Lack of COTS evolution controllability

15 Risk ID: Examining Decision Drivers
Political versus technical
–Choice of equipment
–Choice of subcontractor
–Schedule, budget
–Allocation of responsibilities
Marketing versus technical
–Gold plating
–Choice of equipment
–Schedule, budget
Solution-driven versus problem-driven
–In-house components, tools
–Artificial intelligence
–Schedule, budget
Short-term versus long-term
–Staffing availability versus qualification
–Reused software production engineering
–Premature SRR, PDR
–Outdated experience

16 Potential Win-Lose, Lose-Lose Situations
–Quick, cheap, sloppy product: “winners” developer and customer; loser user
–Lots of bells and whistles: “winners” developer and user; loser customer
–Driving too hard a bargain: “winners” customer and user; loser developer
Actually, nobody wins in these situations

17 Murphy’s Law
“If anything can go wrong, it will.”

18 Decomposition − Pareto 80-20 Phenomena
20% of the modules contribute 80% of the cost
20% of the modules contribute 80% of the errors
20% of the errors cause 80% of the down time
20% of the errors consume 80% of the cost to fix
20% of the modules consume 80% of the execution time

19 Decomposition − Task Dependencies
A task dependency network with high fan-in or high fan-out nodes

20 Decomposition − Uncertainty Areas
Mission requirements
Life-cycle concept of operation
Software performance drivers
User interface characteristics
Interfacing system characteristics

21 Risk Assessment ─ Risk Analysis

22 Risk Analysis
Decision analysis
Network analysis
Cost risk analysis
–Cost model and cost driver analysis
–Schedule analysis
Other risk analysis techniques
–Satisfying both functional and non-functional requirements
 Performance analysis (e.g., simulation, benchmarking, modeling, prototyping, instrumentation, tuning)
 Reliability, availability, and safety analysis (e.g., FMEA, static and dynamic analysis of specs and programs)
–Automated risk analysis aids
 Monte Carlo approach
 Leading candidate risk analysis software packages: PROMAP V, PROSIM, RISNET, SLAM

23 Network Schedule Risk Analysis
Two parallel tasks each take 2 months (p = 0.5) or 6 months (p = 0.5), so each task’s expected value is 4 months.
But the project finishes when the slower task does. The four equally likely outcomes (2,2), (2,6), (6,2), (6,6) yield completion times 2, 6, 6, 6, so the expected completion time is 20/4 = 5 months, not 4.
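The slide’s arithmetic can be reproduced numerically. Below is a small sketch (not from the original deck) that enumerates the four outcomes exactly, and also estimates the same expected value with the Monte Carlo approach listed on slide 22, which is what you would use when the task network is too large to enumerate:

```python
import itertools
import random

# Each of two parallel tasks takes 2 or 6 months, each with probability 0.5.
durations = [2, 6]

# Exact enumeration: project duration is the max over the parallel tasks.
outcomes = [max(a, b) for a, b in itertools.product(durations, repeat=2)]
ev_exact = sum(outcomes) / len(outcomes)
print(ev_exact)  # 5.0 -- not the 4 months the per-task expectation suggests

# Monte Carlo estimate of the same quantity.
random.seed(0)
trials = 100_000
ev_mc = sum(max(random.choice(durations), random.choice(durations))
            for _ in range(trials)) / trials
print(round(ev_mc, 1))  # ~5.0
```

The gap between the naive 4-month estimate and the true 5-month expectation is exactly the risk that network analysis is meant to surface: expectations do not commute with `max` across parallel paths.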

24 Risk Assessment ─ Risk Prioritization

25 Risk Prioritization
Risk exposure
Risk leverage
Risk probability assessment
–Betting analogy
–Adjective calibration
–Delphi/group techniques
Compound risk reduction
Prioritization examples
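The first two items have standard definitions in this framework: risk exposure RE = Prob(loss) × Size(loss), and risk leverage = (RE before − RE after) / cost of risk reduction, used to rank candidate risk-reduction activities. A minimal sketch with made-up numbers:

```python
def risk_exposure(prob_loss, size_loss):
    # RE = probability of an unsatisfactory outcome * loss if it occurs
    return prob_loss * size_loss

def risk_leverage(re_before, re_after, cost_of_reduction):
    # Relative cost-benefit of a candidate risk-reduction activity
    return (re_before - re_after) / cost_of_reduction

# Made-up example: a $20K prototype cuts the probability of a
# $1M rework loss from 0.4 to 0.1.
re_before = risk_exposure(0.4, 1_000_000)   # 400000
re_after = risk_exposure(0.1, 1_000_000)    # 100000
print(risk_leverage(re_before, re_after, 20_000))  # 15.0
```

A leverage well above 1 (here, $15 of exposure removed per dollar spent) marks the activity as worth funding; activities with leverage below 1 cost more than the risk they retire.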

26 Risk Probability Assessment
Calculate probabilities, utilities
–Hard to do in general
Betting analogy
–Define a “satisfactory” level
–Establish a personally meaningful amount of money, say $100
–Determine how much money you would be willing to risk in betting on a satisfactory outcome

27 Risk Probability Assessment: Example
Establish the proposition
–Using Java will not cause us to slip our schedule
Establish betting odds
–No schedule slip: you win $100
–Schedule slip: you lose $500
Determine willingness to bet
–Willing: low risk
–Unwilling: high risk
–Not sure: risk due to uncertainty; buy information
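The betting odds above imply a break-even probability: taking the bet is rational only if your subjective probability of no schedule slip exceeds loss/(win + loss). A quick check, assuming the $100/$500 odds on the slide:

```python
def breakeven_probability(win_amount, loss_amount):
    # Expected value p*win - (1-p)*loss is zero at this p;
    # above it the bet has positive expected value.
    return loss_amount / (win_amount + loss_amount)

p = breakeven_probability(win_amount=100, loss_amount=500)
print(round(p, 3))  # 0.833 -- betting means you judge P(no slip) > 5/6
```

This is what makes the analogy useful: a manager who hesitates at these odds is implicitly assigning the schedule slip a probability above one in six, which is a concrete, discussable risk estimate.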

28 (figure)

29 Watch Out For Compound Risks
Pushing technology on more than one front
Pushing technology with key staff shortages
Vague user requirements with ambitious schedule
Untried hardware with ambitious schedule
Unstable interfaces with untried subcontractor
Reduce to non-compound risks if possible
Otherwise, devote extra attention to compound-risk containment

30 Risk Control ─ Risk Management Planning

31 Risk Management (RM) Planning Process
Inputs: prioritized risk items; candidate RM techniques; risk leverage analyses
Steps:
–Choose the best cost-benefit mix of RM activities
–Develop individual RM plans for each risk item
–Coordinate RM plans with each other and with the project plan
Outputs: draft RM plan n; revised RM plan n; draft RM plan n+1

32 Risk Management Plans
For each risk item, answer the following questions:
1. Why? Risk item importance, relation to project objectives
2. What, when? Risk resolution deliverables, milestones, activity nets
3. Who, where? Responsibilities, organization
4. How? Approach (prototypes, surveys, models, …)
5. How much? Resources (budget, schedule, key personnel)
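The five questions map naturally onto a per-risk-item record. The sketch below uses a hypothetical RiskItemPlan structure (field names are illustrative, not from any standard RM tool) to show how one plan entry might be captured:

```python
from dataclasses import dataclass, field

# Hypothetical record capturing the five questions a risk management
# plan answers for each risk item.
@dataclass
class RiskItemPlan:
    why: str                 # importance, relation to project objectives
    what_when: list          # deliverables, milestones, activity nets
    who_where: str           # responsibilities, organization
    how: list                # approach: prototypes, surveys, models, ...
    how_much: dict = field(default_factory=dict)  # budget, schedule, personnel

plan = RiskItemPlan(
    why="Personnel shortfall on UI work threatens the delivery objective",
    what_when=["UI prototype by month 2", "staffing review at month 1"],
    who_where="Project lead; engineering organization",
    how=["prototyping", "reference checking"],
    how_much={"budget": 20_000, "key_personnel": ["UI designer"]},
)
print(plan.why)
```

Keeping each item in a uniform structure makes the later coordination step (merging individual RM plans into the project plan) a matter of iterating over records rather than reconciling free-form documents.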

33 Risk Control ─ Risk Resolution

34 Risk Resolution Techniques (1)
Staffing and prescheduling key people
–Proactive approach
Team building
–Shared values
–Participative planning and objective-setting activities, blue-sky sessions, group-consensus techniques
Cost and schedule estimation
Design to cost/schedule
Incremental development
–Early increments cover high-priority user requirements and high-risk but well-understood requirements
–Late increments cover poorly defined or less well understood requirements
–Device-oriented software increments are synchronized with device availability

35 Risk Resolution Techniques (2)
Requirements scrubbing
–Cost-benefit analysis
–Affordability review
–Prototyping
Prototyping
–May be throwaway or build-upon
Mission analysis
–Organizational analysis
–Operational concept formulation
–Cost-benefit analysis
–User engineering
Information hiding
Reference checking
Preaward audits

36 Risk Resolution Techniques (3)
Performance engineering
–Simulation, benchmarking
–Modeling
–Prototyping performance-critical elements
–Instrumentation and tuning
Technical analysis
–Techniques analogous to performance analysis are available for other critical success factors

37 Outline
Risk Management Principles and Practices
–Risk Assessment: risk identification, analysis, and prioritization
–Risk Control: risk management planning, risk resolution, and risk monitoring
Risk Management Implementation Concerns
–RM in the software life cycle: the spiral model

38 Spiral Model and MBASE
Spiral experience
Critical success factors
Invariants and variants
Stud poker analogy
Spiral refinements
WinWin spiral
Life cycle anchor points
MBASE

39 “Spiral Development Model”: Candidate Definition
The spiral development model is a risk-driven process model generator. It is used to guide multi-stakeholder concurrent engineering of software-intensive systems. It has two main distinguishing features. One is a cyclic approach for incrementally growing a system’s degree of definition and implementation. The other is a set of anchor point milestones for ensuring stakeholder commitment to feasible and mutually satisfactory system solutions.

40 Spiral Invariants and Variants (1)
Invariant 1: Concurrent rather than sequential determination of artifacts (OCD, Rqts, Design, Code, Plans) in each spiral cycle.
–Why: avoids premature sequential commitments to Rqts, Design, COTS, combination of cost/schedule/performance (critical success factor example: 1-sec response time)
–Variants: 1a. Relative amount of each artifact developed in each cycle. 1b. Number of concurrent mini-cycles in each cycle.
Invariant 2: Consideration in each cycle of critical-stakeholder objectives and constraints, product and process alternatives, risk identification and resolution, stakeholder review and commitment to proceed.
–Why: avoids commitment to stakeholder-unacceptable or overly risky alternatives; avoids wasted effort in elaborating unsatisfactory alternatives (example: Mac-based COTS)
–Variants: 2a. Choice of risk resolution techniques: prototyping, simulation, modeling, benchmarking, reference checking, etc. 2b. Level of effort on each activity within each cycle.
Invariant 3: Level of effort on each activity within each cycle driven by risk considerations.
–Why: determines “how much is enough” of each activity: domain engr., prototyping, testing, CM, etc. (example: pre-ship testing); avoids overkill or belated risk resolution.
–Variants: 3a. Choice of methods used to pursue activities: MBASE/WinWin, Rational USDP, JAD, QFD, ESP, … 3b. Degree of detail of artifacts produced in each cycle.

41 Spiral Invariant 1: Concurrent Determination of Key Artifacts (Ops Concept, Rqts, Design, Code, Plans)
Why invariant
–Avoids premature sequential commitments to system requirements, design, COTS, combination of cost/schedule/performance (example: 1-sec response time)
Variants
–1a. Relative amount of each artifact developed in each cycle
–1b. Number of concurrent mini-cycles in each cycle
Models excluded
–Incremental sequential waterfalls with high risk of violating waterfall model assumptions

42 Sequential Engineering Buries Risk
(Figure: development cost versus required response time for two architectures: Arch. A, custom with many cache processors, at $100M, and Arch. B, modified client-server, at $50M. The original spec’s 1-sec response time forces the costly Arch. A; after prototyping, a response time in the 4-sec range proved acceptable, enabling the far cheaper Arch. B.)

43 Waterfall Model Assumptions
1. The requirements are knowable in advance of implementation.
2. The requirements have no unresolved, high-risk implications
–e.g., risks due to COTS choices, cost, schedule, performance, safety, security, user interfaces, organizational impacts
3. The nature of the requirements will not change very much
–During development; during evolution
4. The requirements are compatible with all the key system stakeholders’ expectations
–e.g., users, customer, developers, maintainers, investors
5. The right architecture for implementing the requirements is well understood.
6. There is enough calendar time to proceed sequentially.

44 Windows-Only COTS Example: Digital Library Artifact Viewer
Great prototype using ER Mapper
–Tremendous resolution
–Incremental-resolution artifact display
–Powerful zoom and navigation features
Only runs well on Windows
–Mac, Unix user communities forced to wait
–Overoptimistic assumptions on length of wait
Eventual decision to drop ER Mapper

45 Spiral Invariant 2: Each Cycle Does Objectives, Constraints, Alternatives, Risks, Review, Commitment to Proceed
Why invariant
–Avoids commitment to stakeholder-unacceptable or overly risky alternatives (example: Windows-only COTS)
–Avoids wasted effort in elaborating unsatisfactory alternatives
Variants
–2a. Choice of risk resolution techniques: prototyping, simulation, modeling, benchmarking, reference checking, etc.
–2b. Level of effort on each activity within each cycle
Models excluded
–Sequential phases with key stakeholders excluded

46 Models Excluded: Sequential Phases Without Key Stakeholders
High risk of win-lose even with spiral phases
–Win-lose evolves into lose-lose
Key criteria for IPT members (AFI 63-123)
–Representative, empowered, knowledgeable, collaborative, committed
(Figure: Inception involves user and customer; Elaboration and Construction involve customer and developer; Transition involves developer, user, and maintainer. Each phase excludes a key stakeholder.)

47 Spiral Invariant 3: Level of Effort Driven by Risk Considerations
Why invariant
–Determines “how much is enough” of each activity: domain engr., prototyping, testing, CM, etc. (example: pre-ship testing)
–Avoids overkill or belated risk resolution
Variants
–3a. Choice of methods used to pursue activities: MBASE/WinWin, Rational RUP, JAD, QFD, ESP, …
–3b. Degree of detail of artifacts produced in each cycle
Models excluded
–Risk-insensitive evolutionary or incremental development

48 Pre-Ship Test Risk Exposure
Risk exposure RE = Size(Loss) × Prob(Loss)
(Figure: as the amount of testing grows and time to market slips, RE from defect losses falls while RE from market share losses rises; total RE is minimized at an intermediate amount of testing.)
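The tradeoff in the figure can be sketched numerically. The curves and constants below are made up for illustration; only the shape (falling defect RE, rising market-share RE, interior minimum of the total) comes from the slide:

```python
# Illustrative model of pre-ship test risk exposure (made-up numbers).
def re_defects(weeks_of_testing):
    # Assumed: each week of testing halves the chance of a costly
    # field defect; RE = P(loss) * S(loss), in $M.
    return 10.0 * 0.5 ** weeks_of_testing

def re_market(weeks_of_testing):
    # Assumed: each week of delayed shipment loses market share.
    return 0.8 * weeks_of_testing

weeks = range(0, 13)
total = {w: re_defects(w) + re_market(w) for w in weeks}
best = min(total, key=total.get)
print(best, round(total[best], 2))  # 3 3.65
```

With these assumed constants the total risk exposure bottoms out at three weeks of testing: less testing ships too many defects, more testing cedes too much market. The same sweep, with calibrated loss models, is how the slide’s "how much testing is enough" question gets a quantitative answer.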

49 Spiral Invariants and Variants (2)
Invariant 4: Degree of detail of artifacts produced in each cycle driven by risk considerations.
–Why: determines “how much is enough” of each artifact (OCD, Rqts, Design, Code, Plans) in each cycle; avoids overkill or belated risk resolution.
–Variants: 4a. Choice of artifact representations (SA/SD, UML, MBASE, formal specs, programming languages, etc.)
Invariant 5: Managing stakeholder life-cycle commitments via the LCO, LCA, and IOC anchor point milestones (getting engaged, getting married, having your first child).
–Why: avoids analysis paralysis, unrealistic expectations, requirements creep, architectural drift, COTS shortfalls and incompatibilities, unsustainable architectures, traumatic cutovers, useless systems.
–Variants: 5a. Number of spiral cycles or increments between anchor points. 5b. Situation-specific merging of anchor point milestones.
Invariant 6: Emphasis on system and life cycle activities and artifacts rather than software and initial development activities and artifacts.
–Why: avoids premature suboptimization on hardware, software, or initial development considerations.
–Variants: 6a. Relative amount of hardware and software determined in each cycle. 6b. Relative amount of capability in each life cycle increment. 6c. Degree of productization (alpha, beta, shrink-wrap, etc.) of each life cycle increment.

50 Spiral Invariant 4: Degree of Detail Driven by Risk Considerations
Why invariant
–Determines “how much is enough” of each artifact (OCD, Rqts, Design, Code, Plans) in each cycle (example: screen layout rqts.)
–Avoids overkill or belated risk resolution
Variants
–4a. Choice of artifact representations (SA/SD, UML, MBASE, formal specs, programming languages, etc.)
Models excluded
–Complete, consistent, traceable, testable requirements specification for systems involving significant levels of GUI, COTS, or deferred decisions

51 Risk-Driven Specifications
If it’s risky not to specify precisely, do
–Hardware-software interface
–Prime-subcontractor interface
If it’s risky to specify precisely, don’t
–GUI layout
–COTS behavior

52 Spiral Invariant 5: Use of LCO, LCA, IOC Anchor Point Milestones
Why invariant
–Avoids analysis paralysis, unrealistic expectations, requirements creep, architectural drift, COTS shortfalls and incompatibilities, unsustainable architectures, traumatic cutovers, useless systems
Variants
–5a. Number of spiral cycles or increments between anchor points
–5b. Situation-specific merging of anchor point milestones (can merge LCO and LCA when adopting an architecture from a mature 4GL or product line)
Models excluded
–Evolutionary or incremental development with no life cycle architecture

53 Life Cycle Anchor Points
Common system/software stakeholder commitment points
–Defined in concert with Government, industry affiliates
–Coordinated with the Rational Unified Process
Life Cycle Objectives (LCO)
–Stakeholders’ commitment to support architecting
–Like getting engaged
Life Cycle Architecture (LCA)
–Stakeholders’ commitment to support full life cycle
–Like getting married
Initial Operational Capability (IOC)
–Stakeholders’ commitment to support operations
–Like having first child

54 Win Win Spiral Anchor Points
(Risk-driven level of detail for each element; *WWWWWHH: Why, What, When, Who, Where, How, How Much)
Definition of Operational Concept
–LCO: Top-level system objectives and scope (system boundary; environment parameters and assumptions; evolution parameters). Operational concept (operations and maintenance scenarios and parameters; organizational life-cycle responsibilities of stakeholders).
–LCA: Elaboration of system objectives and scope by increment. Elaboration of operational concept by increment.
Definition of System Requirements
–LCO: Top-level functions, interfaces, quality attribute levels, including growth vectors and priorities; prototypes. Stakeholders’ concurrence on essentials.
–LCA: Elaboration of functions, interfaces, quality attributes, and prototypes by increment; identification of TBDs (to-be-determined items). Stakeholders’ concurrence on their priority concerns.
Definition of System and Software Architecture
–LCO: Top-level definition of at least one feasible architecture (physical and logical elements and relationships; choices of COTS and reusable software elements). Identification of infeasible architecture options.
–LCA: Choice of architecture and elaboration by increment (physical and logical components, connectors, configurations, constraints; COTS, reuse choices; domain-architecture and architectural style choices). Architecture evolution parameters.
Definition of Life-Cycle Plan
–LCO: Identification of life-cycle stakeholders (users, customers, developers, maintainers, interoperators, general public, others). Identification of life-cycle process model (top-level stages, increments). Top-level WWWWWHH* by stage.
–LCA: Elaboration of WWWWWHH* for Initial Operational Capability (IOC); partial elaboration, identification of key TBDs for later increments.
Feasibility Rationale
–LCO: Assurance of consistency among elements above (via analysis, measurement, prototyping, simulation, etc.); business case analysis for requirements, feasible architectures.
–LCA: Assurance of consistency among elements above. All major risks resolved or covered by risk management plan.
System Prototype(s)
–LCO: Exercise key usage scenarios; resolve critical risks.
–LCA: Exercise range of usage scenarios; resolve major outstanding risks.

55 Initial Operational Capability (IOC)
Software preparation
–Operational and support software
–Data preparation, COTS licenses
–Operational readiness testing
Site preparation
–Facilities, equipment, supplies, vendor support
User, operator, and maintainer preparation
–Selection, teambuilding, training

56 Evolutionary Development Assumptions
1. The initial release is sufficiently satisfactory to key system stakeholders that they will continue to participate in its evolution.
2. The architecture of the initial release is scalable to accommodate the full set of system life cycle requirements (e.g., performance, safety, security, distribution, localization).
3. The operational user organizations are sufficiently flexible to adapt to the pace of system evolution.
4. The dimensions of system evolution are compatible with the dimensions of evolving out the legacy systems it is replacing.

57 Spiral Model and Incremental Commitment: Stud Poker Analogy
Evaluate alternative courses of action
–Fold: save resources for other deals
–Ante: buy at least one more round
Using incomplete information
–Hole cards: competitive situation
–Rest of deck: chance of getting winner
Anticipating future possibilities
–Likelihood that next round will clarify outcome
Commit incrementally rather than all at once
–Challenge: DoD POM process makes this hard to do

58 Anchor Points and Rational RUP Phases
(Figure: the Rational Unified Process phases Inception, Elaboration, Construction, and Transition span an engineering stage, with feasibility and architecture iterations, and a manufacturing stage, with usable iterations and product releases. The LCO milestone falls between Inception and Elaboration, LCA between Elaboration and Construction, and IOC between Construction and Transition; requirements, design, implementation, and deployment work proceed concurrently within each phase.)

59 Spiral Model Refinements
Where do objectives, constraints, alternatives come from?
–Win Win extensions
Lack of intermediate milestones
–Anchor Points: LCO, LCA, IOC
–Concurrent-engineering spirals between anchor points
Need to avoid model clashes, provide more specific guidance
–MBASE
The WinWin Spiral Model (steps 1-3 are Win-Win extensions; steps 4-7 the original spiral):
1. Identify next-level stakeholders
2. Identify stakeholders’ win conditions
3. Reconcile win conditions; establish next-level objectives, constraints, alternatives
4. Evaluate product and process alternatives; resolve risks
5. Define next level of product and process, including partitions
6. Validate product and process definitions
7. Review, commitment

60 Spiral Invariant 6: Emphasis on System and Life Cycle Activities and Artifacts
Why invariant
–Avoids premature suboptimization on hardware, software, or development considerations (example: Scientific American)
Variants
–6a. Relative amount of hardware and software determined in each cycle
–6b. Relative amount of capability in each life cycle increment
–6c. Degree of productization (alpha, beta, shrink-wrap, etc.) of each life cycle increment
Models excluded
–Purely logical object-oriented methods insensitive to operational, performance, cost risks

61 Problems With Programming-Oriented Top-Down Development
(Figure: “Scientific American” subscription processing example)

62 Spiral Model as Process-Model Generator
Determine process objectives and constraints
Identify process-model alternatives
Evaluate process model alternatives with respect to objectives and constraints
Analyze risks
Use risk-analysis results to determine the project-tailored process model

63 Critical Process Drivers
Growth envelope
Understanding of requirements
Robustness
Available technology
Architecture understanding

64 Summary: Hazardous Spiral Look-Alikes
Incremental sequential waterfalls with significant COTS, user interface, or technology risks
Sequential spiral phases with key stakeholders excluded from phases
Risk-insensitive evolutionary or incremental development
Evolutionary development with no life-cycle architecture
Insistence on complete specs for COTS, user interface, or deferred-decision situations
Purely logical object-oriented methods with operational, performance, or cost risks
Impeccable spiral plan with no commitment to managing risks

65 Summary: Successful Spiral Examples
Rapid commercial: C-Bridge’s RAPID process
Large commercial: AT&T/Lucent/Telcordia spiral extensions
Commercial hardware-software: Xerox Time-to-Market process
Large aerospace: TRW CCPDS-R
Variety of projects: Rational Unified Process, SPC Evolutionary Spiral Process, USC MBASE approach

66 References (MBASE material available at http://sunset.usc.edu/MBASE)
[Boehm, 1988]. “A Spiral Model of Software Development and Enhancement,” Computer, May 1988, pp. 61-72.
[Boehm, 1989]. Software Risk Management, IEEE Computer Society Press, 1989.
[Boehm-Ross, 1989]. “Theory W Software Project Management: Principles and Examples,” IEEE Trans. Software Engr., July 1989.
[Boehm-Bose, 1994]. “A Collaborative Spiral Software Process Model Based on Theory W,” Proceedings, ICSP 3, IEEE, Reston, Va., October 1994.
[Boehm-Port, 1999b]. “When Models Collide: Lessons from Software Systems Analysis,” IEEE IT Professional, January/February 1999, pp. 49-56.

67 References (continued)
B. Boehm and D. Port, “Escaping the Software Tar Pit: Model Clashes and How to Avoid Them,” ACM Software Engineering Notes, January 1999, pp. 36-48.
B. Boehm et al., “Using the WinWin Spiral Model: A Case Study,” IEEE Computer, July 1998, pp. 33-44.
B. Boehm et al., “Developing Multimedia Applications with the WinWin Spiral Model,” Proceedings, ESEC/FSE 97, Springer Verlag, 1997.
M.J. Carr, S.L. Konda, I. Monarch, F.C. Ulrich, and C.F. Walker, “Taxonomy-Based Risk Identification,” CMU/SEI-93-TR-06, Software Engineering Institute, 1993.
R.N. Charette, Software Engineering Risk Analysis and Management, McGraw Hill, 1989.
M.A. Cusumano and R.W. Selby, Microsoft Secrets, Free Press, 1995.
E. Hall, Managing Risk, Addison Wesley, 1998.
W.E. Royce, Software Project Management: A Unified Framework, Addison Wesley, 1998.
J. Thorp and DMR, The Information Paradox, McGraw Hill, 1998.

