Software Risk Management and the COCOMO Model

Presentation transcript:

Software Risk Management and the COCOMO Model
Barry Boehm, USC, Fall 2011

Outline
- What is Software Risk Management?
- When should you do it? Continuously from Day One
- How should you do it?
  - Risk Assessment
  - Cost risk assessment and COCOMO II
  - Risk Control
- Conclusions

What is Software Risk Management?
- An approach for early identification and mitigation of critical project uncertainties
  - Learning early and cheaply
  - Narrowing the Cone of Uncertainty
  - Avoiding expensive late rework

Risk of Delaying Risk Management: Software
[Figure: relative cost to fix a defect (log scale, 1 to 1000) vs. the phase in which the defect was fixed (requirements, design, code, development test, acceptance test, operation). Curves for smaller and larger software projects, with the median and 20%/80% bands from a TRW survey; data points from SAFEGUARD, GTE, and IBM-SSD.]

Steeper Cost-to-Fix for High-Risk Elements
[Figure: cumulative % of cost to fix software problem reports (SPRs) vs. cumulative % of SPRs, for TRW Project A (373 SPRs) and TRW Project B (1005 SPRs); a small fraction of the SPRs accounts for most of the rework cost.]
Major rework sources were off-nominal architecture-breakers: A - network failover; B - extra-long messages.

Risk of Delaying Risk Management: Systems
[Figure: Cone of Uncertainty; Blanchard-Fabrycky, 1998]

The Fundamental Risk Management Metric
Risk Exposure: RE = Prob(Loss) × Size(Loss)
"Loss" is a loss of stakeholders' value: financial; reputation; quality of service; …
For multiple sources of loss:
RE = Σ_sources [Prob(Loss) × Size(Loss)]_source
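A minimal sketch of this metric in Python; the loss sources and their numbers below are hypothetical placeholders, not from the slides:

```python
# Risk exposure: RE = Prob(Loss) * Size(Loss), summed over loss sources.
# The sources below are hypothetical, for illustration only.
loss_sources = [
    ("schedule slip",    0.3, 200_000),   # (name, probability, loss in $)
    ("key staff leaves", 0.1, 500_000),
    ("COTS API change",  0.2,  50_000),
]

total_re = sum(prob * loss for _, prob, loss in loss_sources)

# Sorting by RE is the prioritization step discussed on the next slides.
for name, prob, loss in sorted(loss_sources, key=lambda s: s[1] * s[2], reverse=True):
    print(f"{name:18s} RE = ${prob * loss:>10,.0f}")
print(f"{'total':18s} RE = ${total_re:>10,.0f}")
```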

Prioritizing Risks: Risk Exposure
Risk Exposure = (Probability) × (Loss of Utility)
[Figure: quadrant chart, risk probability (low to high) on the vertical axis vs. loss of utility (low to high) on the horizontal axis. High probability + high loss: major risk. High probability + low loss: check the probability estimate. Low probability + high loss: check the utility-loss estimate. Low probability + low loss: little risk.]

Risk Exposure Factors (Satellite Experiment Software)

Unsatisfactory Outcome (UO)                                 | Prob(UO) | Loss(UO) | Risk Exposure
A. S/W error kills experiment                               | 3-5      | 10       | 30-50
B. S/W error loses key data                                 | 3-5      | 8        | 24-40
C. Fault tolerance features cause unacceptable performance  | 4-8      | 7        | 28-56
D. Monitoring software reports unsafe condition as safe     | 5        | 9        | 45
E. Monitoring software reports safe condition as unsafe     | 5        | 3        | 15
F. Hardware delay causes schedule overrun                   | 6        | 4        | 24
G. Data reduction software errors cause extra work          | 8        | 1        | 8
H. Poor user interface causes inefficient operation         | 6        | 5        | 30
I. Processor memory insufficient                            | 1        | 7        | 7
J. DBMS software loses derived data                         | 2        | 2        | 4

Risk Exposure Factors and Contours: Satellite Experiment Software

Risk Reduction Leverage (RRL) - Equivalent to Return on Investment

RRL = (RE_before − RE_after) / Risk Reduction Cost

Spacecraft example:

                 | Long Duration Test | Failure Mode Tests
Loss(UO)         | $20M               | $20M
Prob(UO) before  | 0.2                | 0.2
RE before        | $4M                | $4M
Prob(UO) after   | 0.05               | 0.07
RE after         | $1M                | $1.4M
Cost             | $2M                | $0.26M
RRL              | (4−1)/2 = 1.5      | (4−1.4)/0.26 = 10
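A small Python rendering of this calculation, using the numbers from the spacecraft example above:

```python
def risk_exposure(prob, loss):
    """RE = Prob(Loss) * Size(Loss)."""
    return prob * loss

def rrl(re_before, re_after, cost):
    """Risk Reduction Leverage = (RE_before - RE_after) / risk reduction cost."""
    return (re_before - re_after) / cost

loss = 20.0                             # $M lost if the unsatisfactory outcome occurs
re_before = risk_exposure(0.2, loss)    # $4M, same starting point for both options

options = {                             # (RE after, cost of the risk-reduction activity), in $M
    "long duration test": (risk_exposure(0.05, loss), 2.00),
    "failure mode tests": (risk_exposure(0.07, loss), 0.26),
}
for name, (re_after, cost) in options.items():
    print(f"{name}: RRL = {rrl(re_before, re_after, cost):.1f}")
# failure mode tests win: RRL of 10 vs. 1.5
```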

Outline
- What is Software Risk Management?
- When should you do it? Continuously from Day One
- How should you do it?
  - Risk Assessment
  - Cost risk assessment and COCOMO II
  - Risk Control
- Conclusions

Early Risk Resolution Quotes
"In architecting a new software program, all the serious mistakes are made on the first day." - Robert Spinrad, VP-Xerox, 1988
"If you don't actively attack the risks, the risks will actively attack you." - Tom Gilb, 1988

Day One Temptations to Avoid
- "It's too early to think about risks. We need to:
  - Finalize the requirements
  - Maximize our piece of the pie
  - Converge on the risk management organization, forms, tools, and procedures."
- Don't put the cart before the horse. The real horse is the risks, and it's leaving the barn.
- Don't sit around laying out the seats on the cart while this happens. Work this concurrently.

Waterfall Model Risk Profile
[Figure: risk vs. time across the waterfall phases: requirements analysis → specifications & models; design → specifications & models; code and unit test → tested code; subsystem integration → executable subsystem; system integration → executable system.]
The most critical risks are architectural:
- Performance (size, time, accuracy)
- Interface integrity
- System qualities (adaptability, portability, etc.)

ICM Risk Profile
[Figure: risk vs. time across the Incremental Commitment Model phases: Exploration → architectural prototype, draft specifications & models; Foundations → initial executable system, refined specifications & models; Development (Iteration 3, …) → intermediate system; Operations → final system.]

The ICM and Continuous Risk Management
- Anchor point milestones synchronize and stabilize concurrency via FEDs
- Risk patterns determine the life cycle process

Speaker notes (The Incremental Commitment Life Cycle Process: Overview): This slide shows how the ICM spans the life cycle process from concept exploration to operations. Each phase culminates with an anchor point milestone review. At each anchor point, there are four options, based on the assessed risk of the proposed system; some options involve go-backs. These options result in many possible process paths. The life cycle is divided into two stages: Stage I of the ICM (Definition) has three decision nodes with four options per node, culminating with incremental development in Stage II (Development and Operations). Stage II has an additional two decision nodes, again with four options per node. One can use ICM risk patterns to generate frequently used processes with confidence that they fit the situation. Initial risk patterns can generally be determined in the Exploration phase. One then proceeds with development as a proposed plan with risk-based evidence at the VCR milestone, adjusting in later phases as necessary. For complex systems, a result of the Exploration phase would be the Prototyping and Competition Plan discussed above. Risks associated with the system drive the life cycle process: information about the risks (feasibility assessments) supports the decision to proceed, adjust scope or priorities, or cancel the program.

Nature of FEDs and Anchor Point Milestones
- Evidence provided by the developer and validated by independent experts that, if the system is built to the specified architecture, it will:
  - Satisfy the specified operational concept and requirements: capability, interfaces, level of service, and evolution
  - Be buildable within the budgets and schedules in the plan
  - Generate a viable return on investment
  - Generate satisfactory outcomes for all of the success-critical stakeholders
- Shortfalls in evidence are uncertainties and risks: they should be resolved or covered by risk management plans
- Assessed in increasing detail at major anchor point milestones
  - Serves as a basis for stakeholders' commitment to proceed
  - Serves to synchronize and stabilize concurrently engineered elements
  - Can be used to strengthen current schedule- or event-based reviews

Speaker notes: The Feasibility Evidence Description (FED) does not assess a single sequentially developed system definition element, but the consistency, compatibility, and feasibility of several concurrently engineered elements. To make this concurrency work, a set of anchor point milestone reviews are performed to ensure that the many concurrent activities are synchronized, stabilized, and risk-assessed at the end of each phase. Each of these reviews is focused on developer-produced and expert-validated evidence, documented in the FED, to help the key stakeholders determine whether to proceed into the next level of commitment. The FED is based on evidence from simulations, models, or experiments with planned technologies, and on increasingly detailed analysis of development approaches and projected productivity rates. The parameters used in the analyses should be based on measured component performance or on historical data showing relevant past performance, cost estimation accuracy, and actual developer productivity rates.

A shortfall in feasibility evidence indicates a level of program execution uncertainty and a source of program risk. It is often not possible to fully resolve all risks at a given point in the development cycle, but known, unresolved risks need to be identified and covered by risk management plans, including the necessary staffing and funding to address them. The nature of the evidence shortfalls, the strength and affordability of the risk management plans, and the stakeholders' degrees of risk acceptance or avoidance will determine their willingness to commit the necessary resources to proceed. A program with risks is not necessarily bad, particularly if it has strong risk management plans. A program with no risks may be high on achievability, but low on ability to produce a timely competitive advantage.

A program more familiar with a sequential waterfall or V-model can achieve most of the effect of a FED-based anchor point milestone review by adding a FED to the set of artifacts to be developed and reviewed for adequacy at a System Requirements Review (SRR), System Functional Review (SFR), or Preliminary Design Review (PDR). In principle, some guidance documents indicate that such feasibility evidence should be produced and reviewed; but since the evidence is not generally specified as a developer deliverable and is vaguely defined, it is generally inadequate as a basis for stakeholder commitments.

Problems Encountered without FED: 15-Month Architecture Rework Delay
[Figure: cost vs. response time (1 to 5 sec). Meeting the original spec required a custom architecture with many cache processors at far above the original cost; the original modified client-server architecture sufficed for the response time found acceptable after prototyping.]

Speaker notes: In the early 1980s, a large government organization contracted with TRW to develop an ambitious information query and analysis system. The system would provide more than 1,000 users, spread across a large building complex, with powerful query and analysis capabilities for a large and dynamic database. TRW and the customer specified the system using a classic sequential-engineering waterfall development model. Based largely on user need surveys, an oversimplified high-level performance analysis, and a short deadline for getting the TBDs out of the requirements specification, they fixed into the contract a requirement for a system response time of less than one second.

Subsequently, the software architects found that subsecond performance could only be provided via a highly customized design that attempted to anticipate query patterns and cache copies of data so that each user's likely data would be within one second's reach (a 1980s precursor of Google). The resulting hardware architecture had more than 25 super-midicomputers busy caching data according to algorithms whose actual performance defied easy analysis. The scope and complexity of the hardware-software architecture brought the estimated cost of the system to nearly $100 million, driven primarily by the requirement for a one-second response time.

Faced with this unattractive prospect (far more than the customer's budget for the system), the customer and developer decided to develop a prototype of the system's user interface and representative capabilities to test. The results showed that a four-second response time would satisfy users 90 percent of the time. A four-second response time, with special handling for high-priority transactions, dropped development costs closer to $30 million. Thus, the premature specification of a one-second response time neglected the risk of creating an overexpensive and time-consuming system development. Fortunately, in this case, the only loss was the wasted effort on the expensive-system architecture and a 15-month delay in delivery. More frequently, such rework is done only after the expensive full system is delivered and found still too slow and too expensive to operate.

Problems Avoidable with FED
- Attempt to validate 1-second response time
  - Commercial system benchmarking and architecture analysis: needs expensive custom solution
  - Prototype: 4-second response time OK 90% of the time
- Negotiate response time ranges
  - 2 seconds desirable
  - 4 seconds acceptable, with some 2-second special cases
- Benchmark commercial system add-ons to validate their feasibility
- Present solution and feasibility evidence at anchor point milestone review
- Result: acceptable solution with minimal delay

Speaker notes: Had the developers been required to deliver a FED showing evidence of feasibility of the one-second response time, they would have run benchmarks on the best available commercial query systems, using representative user workloads, and would have found that the best that they could do was about a 2.5-second response time, even with some preprocessing to reduce query latency. They would have performed a top-level architecture analysis of custom solutions, and concluded that such solutions were in the $100M cost range. They would have shared these results with the customer in advance of any key reviews, and found that the customer would prefer to explore the feasibility of a system with a commercially supportable response time. They would have done user interface prototyping and found much earlier that a 4-second response time was acceptable 90% of the time.

As some uncertainties still existed about the ability to address the remaining 10% of the queries, the customer and developer would have agreed to avoid repeating the risky specification of a fixed response time requirement, and instead to define a range of desirable-to-acceptable response times, with an award fee provided for faster performance. They would also have agreed to reschedule the next milestone review to give the developer time and budget to present evidence of the most feasible solution available, using the savings over the prospect of a $100M system development as rationale. This would have put the project on a more solid success track over a year before the actual project discovered and rebaselined itself, and without the significant expense that went into the unaffordable architecture definition.

Reducing Software Cost-to-Fix: CCPDS-R (Royce, 1998)
- Architecture first: integration during the design phase; demonstration-based evaluation
- Risk management
- Configuration baseline change metrics:
[Figure: hours per change vs. project development schedule (months 10 to 40), showing design changes, implementation changes, and maintenance changes and ECPs. Source: Rational Software Corporation.]

Outline
- What is Software Risk Management?
- When should you do it? Continuously from Day One
- How should you do it?
  - Risk Assessment
  - Cost risk assessment and COCOMO II
  - Risk Control
- Conclusions

Software Risk Management
- Risk Assessment
  - Risk Identification: checklists; decision driver analysis; assumption analysis; decomposition
  - Risk Analysis: performance models; cost models; network analysis; decision analysis; quality factor analysis
  - Risk Prioritization: risk exposure; risk leverage; compound risk reduction
- Risk Control
  - Risk Management Planning: buying information; risk avoidance; risk transfer; risk reduction; risk element planning; risk plan integration
  - Risk Resolution: prototypes; simulations; benchmarks; analyses; staffing
  - Risk Monitoring: milestone tracking; top-10 tracking; risk reassessment; corrective action

Risk Identification Techniques
- Risk-item checklists
- Decision driver analysis
- Comparison with experience
  - Win-lose, lose-lose situations
- Decomposition
  - Pareto 80-20 phenomena
  - Task dependencies
  - Murphy's law
  - Uncertainty areas
  - Model clashes

Top 10 Risk Items: 1989 and 1995

1989                                  | 1995
1. Personnel shortfalls               | 1. Personnel shortfalls
2. Schedules and budgets              | 2. Schedules, budgets, process
3. Wrong software functions           | 3. COTS, external components
4. Wrong user interface               | 4. Requirements mismatch
5. Gold plating                       | 5. User interface mismatch
6. Requirements changes               | 6. Architecture, performance, quality
7. Externally-furnished components    | 7. Requirements changes
8. Externally-performed tasks         | 8. Legacy software
9. Real-time performance              | 9. Externally-performed tasks
10. Straining computer science        | 10. Straining computer science

Example Risk-Item Checklist: Staffing
- Will your project really get all the best people?
- Are there critical skills for which nobody is identified?
- Are there pressures to staff with available warm bodies?
- Are there pressures to overstaff in the early phases?
- Are the key project people compatible?
- Do they have realistic expectations about their project job?
- Do their strengths match their assignment?
- Are they committed full-time?
- Are their task prerequisites (training, clearances, etc.) satisfied?

The Top Ten Software Risk Items and Risk Management Techniques
1. Personnel shortfalls: staffing with top talent; key personnel agreements; incentives; team-building; training; tailoring process to skill mix; peer reviews
2. Unrealistic schedules and budgets: business case analysis; design to cost; incremental development; software reuse; requirements descoping; adding more budget and schedule
3. COTS; external components: qualification testing; benchmarking; prototyping; reference checking; compatibility analysis; vendor analysis; evolution support analysis
4. Requirements mismatch; gold plating: stakeholder win-win negotiation; business case analysis; mission analysis; ops-concept formulation; user surveys; prototyping; early users' manual; design/develop to cost
5. User interface mismatch: prototyping; scenarios; user characterization (functionality, style, workload)

The Top Ten Software Risk Items (concluded)
6. Architecture, performance, quality: architecture tradeoff analysis and review boards; simulation; benchmarking; modeling; prototyping; instrumentation; tuning
7. Requirements changes: high change threshold; information hiding; incremental development (defer changes to later increments)
8. Legacy software: design recovery; phaseout options analysis; wrappers/mediators; restructuring
9. Externally-performed tasks: reference checking; pre-award audits; award-fee contracts; competitive design or prototyping; team-building
10. Straining computer science capabilities: technical analysis; cost-benefit analysis; prototyping; reference checking

Network Schedule Risk Analysis
[Figure: activity network in which each of two parallel activities takes 2 months (p = 0.5) or 6 months (p = 0.5).]
A single such activity has an expected duration of EV = 4 months. But the project finishes only when the slower activity finishes, so the four equally likely outcomes (2,2), (2,6), (6,2), (6,6) yield completion times of 2, 6, 6, and 6 months: EV = 20/4 = 5 months.
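A short Python check of this effect; the two parallel activities and their 2-or-6-month durations come from the slide:

```python
import itertools

durations = [2, 6]     # each activity: 2 months or 6 months, p = 0.5 each
n_parallel = 2         # two activities in parallel

# Enumerate all equally likely combinations of activity durations.
outcomes = list(itertools.product(durations, repeat=n_parallel))

ev_single = sum(durations) / len(durations)                  # 4.0 months
ev_network = sum(max(o) for o in outcomes) / len(outcomes)   # 5.0 months (max = critical path)

print(f"EV of one activity:     {ev_single} months")
print(f"EV of parallel network: {ev_network} months")
```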

Cost Risk Analysis: COCOMO II
- Cone of Uncertainty methods and estimate ranges
  - Applications Composition, Early Design, Post-Architecture
- Nature of the COCOMO II model
  - Size parameters, scale factors, cost drivers
  - Sensitivity and tradeoff analyses
- Monte Carlo analysis (see the sketch after this list)
  - Enter size, scale factor, cost driver uncertainty ranges
  - Run model many times with random parameter selections within ranges
  - Produces cumulative probability distribution curve
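A minimal Monte Carlo sketch of that procedure in Python, assuming a simplified effort model PM = A × Size^B × EM (the full COCOMO II form, with its scale factors and effort multipliers, appears later in these slides); the size range and multiplier range here are hypothetical:

```python
import random

A, B = 2.94, 1.10    # illustrative COCOMO II-style constants
runs = 10_000

samples = []
for _ in range(runs):
    size = random.triangular(30, 80, 50)    # KSLOC: (low, high, most likely), hypothetical range
    em = random.triangular(0.9, 1.3, 1.0)   # combined effort-multiplier uncertainty, hypothetical
    samples.append(A * size**B * em)

# Points on the resulting cumulative probability distribution curve.
samples.sort()
for pct in (10, 50, 90):
    print(f"P{pct}: {samples[runs * pct // 100]:.0f} person-months")
```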

COCOMO Baseline Overview
[Block diagram: inputs - software product size estimate; software product, process, computer, and personnel attributes; software reuse, maintenance, and increment parameters → COCOMO → outputs - software development and maintenance cost and schedule estimates; cost and schedule distribution by phase, activity, and increment. The software organization's project data feeds a COCOMO recalibrated to the organization's data.]

COCOMO II Model Stages

Factors for Project Scaling Effects
Nominal person-months = A × (Size)^B, where B = 0.91 + 0.01 × Σ(scale factor ratings)
- B ranges from 0.91 to 1.23
- 5 scale factors; 6 rating levels each
Scale factors:
- Precedentedness (PREC)
- Development flexibility (FLEX)
- Architecture/risk resolution (RESL)
- Team cohesion (TEAM)
- Process maturity (PMAT, derived from SEI CMM)

Rating the Scale Factors

PM_estimated = 2.94 × (Size)^B × Π_i EM_i,  with  B = 0.91 + 0.01 × Σ_i SF_i

Scale factor ratings (Very Low / Low / Nominal / High / Very High / Extra High):
- PREC: thoroughly unprecedented / largely unprecedented / somewhat unprecedented / generally familiar / largely familiar / thoroughly familiar
- FLEX: rigorous / occasional relaxation / some relaxation / general conformity / some conformity / general goals
- RESL: little (20%) / some (40%) / often (60%) / generally (75%) / mostly (90%) / full (100%)
- TEAM: very difficult interactions / some difficult interactions / basically cooperative / largely cooperative / highly cooperative / seamless interactions
- PMAT: weighted sum of 18 KPA achievement levels
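A small Python sketch of this effort equation; the scale-factor and effort-multiplier values below are illustrative ratings, not calibrated project data:

```python
def cocomo_ii_effort(size_ksloc, scale_factors, effort_multipliers, a=2.94):
    """COCOMO II-style effort: PM = A * Size^B * product(EM_i),
    with B = 0.91 + 0.01 * sum(SF_i)."""
    b = 0.91 + 0.01 * sum(scale_factors)
    em_product = 1.0
    for em in effort_multipliers:
        em_product *= em
    return a * size_ksloc**b * em_product

# Hypothetical project: 50 KSLOC, mid-range scale factors, a few non-nominal drivers.
sfs = [3.72, 3.04, 4.24, 3.29, 4.68]   # PREC, FLEX, RESL, TEAM, PMAT (illustrative values)
ems = [1.10, 0.87]                     # e.g. one driver rated high, one low (illustrative)

pm = cocomo_ii_effort(50, sfs, ems)
print(f"B = {0.91 + 0.01 * sum(sfs):.3f}, effort = {pm:.0f} person-months")
```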

COCOMO II.2000 Productivity Ranges
[Figure: bar chart of productivity ranges (1.0 to 2.4) for each driver; scale factor ranges computed at 10, 100, and 1000 KSLOC. Drivers, in increasing order of productivity range: Development Flexibility (FLEX), Team Cohesion (TEAM), Develop for Reuse (RUSE), Precedentedness (PREC), Architecture and Risk Resolution (RESL), Platform Experience (PEXP), Data Base Size (DATA), Required Development Schedule (SCED), Language and Tools Experience (LTEX), Process Maturity (PMAT), Storage Constraint (STOR), Use of Software Tools (TOOL), Platform Volatility (PVOL), Applications Experience (AEXP), Multi-Site Development (SITE), Documentation Match to Life Cycle Needs (DOCU), Required Software Reliability (RELY), Personnel Continuity (PCON), Time Constraint (TIME), Programmer Capability (PCAP), Analyst Capability (ACAP), Product Complexity (CPLX).]

USC COCOMO II Model

Tradeoffs Among Cost, Schedule, and Reliability: COCOMO II
[Figure: cost vs. schedule curves for RELY ratings, expressed as MTBF in hours, for a 100-KSLOC set of features.]
- Cost/schedule/RELY: "pick any two" points for the 100-KSLOC set of features
- Can "pick all three" with a 77-KSLOC set of features

COCOMO II Estimation Accuracy
Percentage of sample projects within 30% of actuals, without / with calibration to data source:

             | COCOMO81 | COCOMOII.1997 | COCOMOII.2000
# Projects   | 63       | 83            | 161
Effort       | 81%      | 52% / 64%     | 75% / 80%
Schedule     | 61%      | 62% / 72%     | 65%
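The accuracy measure here, often written PRED(.30), is the fraction of projects whose estimate falls within 30% of the actual value; a sketch in Python with hypothetical project data:

```python
def pred(estimates, actuals, tolerance=0.30):
    """Fraction of projects whose estimate is within `tolerance` of the actual."""
    hits = sum(abs(est - act) / act <= tolerance
               for est, act in zip(estimates, actuals))
    return hits / len(actuals)

# Hypothetical (estimate, actual) effort pairs in person-months.
est = [100, 210, 55, 400, 90]
act = [120, 180, 80, 390, 100]
print(f"PRED(.30) = {pred(est, act):.0%}")  # 80%: 4 of the 5 estimates are within 30%
```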

Software Risk Management

Risk Exposure Factors and Contours: Satellite Experiment Software

Risk Management Options
- Buying information
  - C: prototype fault tolerance features' effect on performance
- Risk avoidance
- Risk transfer
- Risk reduction
- Risk acceptance

Risk Management Plans
For each risk item, answer the following questions:
1. Why? Risk item importance, relation to project objectives
2. What, when? Risk resolution deliverables, milestones, activity nets
3. Who, where? Responsibilities, organization
4. How? Approach (prototypes, surveys, models, …)
5. How much? Resources (budget, schedule, key personnel)

Risk Management Plan: Fault Tolerance Prototyping
1. Objectives (the "Why")
   - Determine, and reduce, the level of risk of the software fault tolerance features causing unacceptable performance
   - Create a description of and a development plan for a set of low-risk fault tolerance features
2. Deliverables and Milestones (the "What" and "When")
   - By week 3:
     1. Evaluation of fault tolerance options
     2. Assessment of reusable components
     3. Draft workload characterization
     4. Evaluation plan for prototype exercise
     5. Description of prototype
   - By week 7:
     6. Operational prototype with key fault tolerance features
     7. Workload simulation
     8. Instrumentation and data reduction capabilities
     9. Draft description of, and plan for, fault tolerance features
   - By week 10:
     10. Evaluation and iteration of prototype
     11. Revised description of, and plan for, fault tolerance features

Risk Management Plan: Fault Tolerance Prototyping (concluded)
3. Responsibilities (the "Who" and "Where")
   - System engineer: G. Smith - tasks 1, 3, 4, 9, 11; support of tasks 5, 10
   - Lead programmer: C. Lee - tasks 5, 6, 7, 10; support of tasks 1, 3
   - Programmer: J. Wilson - tasks 2, 8; support of tasks 5, 6, 7, 10
4. Approach (the "How")
   - Design-to-schedule prototyping effort
   - Driven by hypotheses about fault tolerance-performance effects
   - Use real-time OS; add prototype fault tolerance features
   - Evaluate performance with respect to representative workload
   - Refine prototype based on results observed
5. Resources (the "How Much")
   - $60K - full-time system engineer, lead programmer, programmer: (10 weeks) × (3 staff) × ($2K/staff-week)
   - $0K - 3 dedicated workstations (from project pool)
   - $0K - 2 target processors (from project pool)
   - $0K - 1 test co-processor (from project pool)
   - $10K - contingencies
   - $70K - total

Risk Monitoring
- Milestone tracking: monitoring of risk management plan milestones
- Top-10 risk item tracking (see the sketch after this list)
  - Identify top-10 risk items
  - Highlight these in monthly project reviews
  - Focus on new entries and slow-progress items
  - Focus review on manager-priority items
- Risk reassessment
- Corrective action
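One way to keep such a top-10 list machine-readable is a small record type; a sketch in Python whose fields mirror the tracking table on the next slide (the sample entry echoes that table; the helper methods are hypothetical conveniences):

```python
from dataclasses import dataclass

@dataclass
class RiskItem:
    name: str
    rank_this_month: int | None   # None once the item drops off the list
    rank_last_month: int | None   # None for new entries
    months_on_list: int
    resolution_progress: str

    def is_new(self) -> bool:
        return self.rank_last_month is None

    def is_worsening(self) -> bool:
        # Flag items rising in rank (toward #1) as candidates for review focus.
        return (self.rank_this_month is not None
                and self.rank_last_month is not None
                and self.rank_this_month < self.rank_last_month)

# Entry in the style of the satellite-experiment list below:
item = RiskItem("Target hardware delivery delays", 2, 5, 2,
                "Procurement procedural delays")
print(item.is_new(), item.is_worsening())  # False True
```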

Project Top 10 Risk Item List: Satellite Experiment Software

Risk Item                                         | This Mo. | Last Mo. | # Mo. | Risk Resolution Progress
Replacing sensor-control software developer       | 1        | 4        | 2     | Top replacement candidate unavailable
Target hardware delivery delays                   | 2        | 5        | 2     | Procurement procedural delays
Sensor data formats undefined                     | 3        | 3        | 3     | Action items to software, sensor teams; due next month
Staffing of design V&V team                       | 4        | 2        | 3     | Key reviewers committed; need fault-tolerance reviewer
Software fault-tolerance may compromise performance | 5      | 1        | 3     | Fault tolerance prototype successful
Accommodate changes in data bus design            | 6        | -        | 1     | Meeting scheduled with data bus designers
Testbed interface definitions                     | 7        | 8        | 3     | Some delays in action items; review meeting scheduled
User interface uncertainties                      | 8        | 6        | 3     | User interface prototype successful
TBDs in experiment operational concept            | -        | 7        | 3     | TBDs resolved
Uncertainties in reusable monitoring software     | -        | 9        | 3     | Required design changes small, successfully made

Conclusions
- Risk management starts on Day One
  - Delay and denial are serious career risks
  - Data provided to support early investment
- ICM provides a process framework for early risk resolution
  - Stakeholder identification and win-condition reconciliation
  - Evidence- and risk-driven commitment milestones
- Risk analysis helps determine "how much is enough"
  - Testing, planning, specifying, prototyping, …
  - Balance the risks of doing too little and doing too much