1
Barry Boehm, USC CSSE CS 577 Lecture, Fall 2017
Evidence-Based Reviews for Systems of Systems: Needs, Principles, Practices, and Experience Developing and Using Feasibility Evidence for Life Cycle Commitment Milestones

Success-critical stakeholders for systems under development have the authority and responsibility to exercise accountable governance of their organization's resources when committing such resources to the next phase of system development. In many cases, these stakeholders are presented with insufficient information upon which to make a responsible decision. As a result, many projects proceed into system development with little idea of whether their requirements, architectures, plans, budgets, and schedules are feasible and consistent. This is one of the major reasons for the relatively low (20-30%) rates of on-time, in-budget system deliveries reported in the commercial Standish Reports and government GAO reports.

This presentation provides guidance for avoiding such system overruns and shortfalls by using Anchor Point milestones (APs) and Feasibility Evidence Descriptions (FEDs). It begins by pointing out that the main distinction of APs and FEDs is that they provide EVIDENCE of feasibility as part of the system definition's core deliverables, not just assertions, assumptions, or aspirations placed in optional appendices. It emphasizes that the evidence supplied by developers should be evaluated by independent experts as part of the review process, and that shortfalls in evidence should be identified as sources of project risk and covered by risk mitigation plans.

APs and FEDs are an integral part of the Incremental Commitment Model discussed in [Boehm-Lane, 2007; Boehm-Lane, 2008; Pew-Mavor, 2007]. However, a good approximation to their use may be obtained by adding FEDs to the content of traditional waterfall or V-model milestones such as System Requirements Reviews (SRRs), System Functional Reviews (SFRs), and Preliminary Design Reviews (PDRs).
2
Summary
- Schedule-based and event-based reviews are risk-prone
- Evidence-based reviews enable early risk resolution
  - They require more up-front systems engineering effort
  - They have a high ROI for high-risk projects
  - They synchronize and stabilize concurrent engineering
- The evidence becomes a first-class deliverable
  - It requires planning and earned value management
- Evidence-based reviews can be added to traditional review processes
3
Types of Milestone Reviews
- Schedule-based reviews (contract-driven)
  - "We'll hold the PDR on April 1 whether we have a design or not"
  - High probability of proceeding into a Death March
- Event-based reviews (artifact-driven)
  - "The design will be done by June 1, so we'll have the review then"
  - Large "Death by PowerPoint and UML" event
  - Hard to avoid proceeding with many unresolved risks and interfaces
- Evidence-based commitment reviews (risk-driven)
  - Evidence provided in a Feasibility Evidence Description (FED), a first-class deliverable
  - Shortfalls in evidence are uncertainties and risks, and should be covered by risk mitigation plans
  - Stakeholders decide to commit based on the risks of going forward
4
Nature of FEDs and Anchor Point Milestones
Evidence provided by the developer and validated by independent experts that, if the system is built to the specified architecture, it will:
- Satisfy the specified operational concept and requirements
  - Capability, interfaces, level of service, and evolution
- Be buildable within the budgets and schedules in the plan
- Generate a viable return on investment
- Generate satisfactory outcomes for all of the success-critical stakeholders

Shortfalls in evidence are uncertainties and risks, and should be resolved or covered by risk management plans. The evidence is assessed in increasing detail at major anchor point milestones. It serves as the basis for stakeholders' commitment to proceed, and serves to synchronize and stabilize concurrently engineered elements.

The Feasibility Evidence Description (FED) does not assess a single sequentially developed system definition element, but the consistency, compatibility, and feasibility of several concurrently engineered elements. To make this concurrency work, a set of anchor point milestone reviews is performed to ensure that the many concurrent activities are synchronized, stabilized, and risk-assessed at the end of each phase. Each of these anchor point milestone reviews is focused on developer-produced and expert-validated evidence, documented in the FED, to help the key stakeholders determine whether to proceed into the next level of commitment.

The FED is based on evidence from simulations, models, or experiments with planned technologies and increasingly detailed analysis of development approaches and projected productivity rates. The parameters used in the analyses should be based on measured component performance or on historical data showing relevant past performance, cost estimation accuracy, and actual developer productivity rates.

A shortfall in feasibility evidence indicates a level of program execution uncertainty and a source of program risk. It is often not possible to fully resolve all risks at a given point in the development cycle, but known, unresolved risks need to be identified and covered by risk management plans, including the necessary staffing and funding to address them. The nature of the evidence shortfalls, the strength and affordability of the risk management plans, and the stakeholders' degrees of risk acceptance or avoidance will determine their willingness to commit the necessary resources to proceed. A program with risks is not necessarily bad, particularly if it has strong risk management plans. A program with no risks may be high on achievability, but low on ability to produce a timely competitive advantage.

A program more familiar with a sequential waterfall or V-model can achieve most of the effect of a FED-based anchor point milestone review by adding a FED to the set of artifacts to be developed and reviewed for adequacy at a System Requirements Review (SRR), System Functional Review (SFR), or Preliminary Design Review (PDR). In principle, some guidance documents indicate that such feasibility evidence should be produced and reviewed. But since the evidence is not generally specified as a developer deliverable and is vaguely defined, it is generally inadequate as a basis for stakeholder commitments. FEDs can thus be used to strengthen current schedule- or event-based reviews.
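As a minimal sketch of the core FED idea (our own illustration; the ICM does not prescribe a data schema): each feasibility claim carries its evidence, and any claim without evidence is a risk that must be covered by a mitigation plan.

```python
# Illustrative only: a FED as claims + evidence, with shortfalls surfaced
# as risks. Class and field names are our own, not an ICM-defined schema.
from dataclasses import dataclass, field

@dataclass
class FeasibilityClaim:
    claim: str                      # e.g., "buildable within budget/schedule"
    evidence: list[str] = field(default_factory=list)  # models, benchmarks...
    mitigation_plan: str | None = None                 # required if no evidence

    def shortfall(self) -> bool:
        """A claim with no supporting evidence is an uncertainty, hence a risk."""
        return not self.evidence

fed = [
    FeasibilityClaim("satisfies operational concept and requirements",
                     evidence=["UI prototype results", "benchmark report"]),
    FeasibilityClaim("viable return on investment"),  # hypothetical: no evidence yet
]

for c in fed:
    if c.shortfall() and not c.mitigation_plan:
        print(f"RISK (uncovered): {c.claim}")
```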
5
Problems Encountered without FED: 15-Month Architecture Rework Delay
[Figure: cost vs. response time. The original 1-second spec required a custom architecture with many cache processors at roughly the original ~$100M cost; after prototyping, a 4-second response time allowed the original modified client-server architecture at far lower cost.]

In the early 1980s, a large government organization contracted with TRW to develop an ambitious information query and analysis system. The system would provide more than 1,000 users, spread across a large building complex, with powerful query and analysis capabilities for a large and dynamic database. TRW and the customer specified the system using a classic sequential-engineering waterfall development model. Based largely on user need surveys, an oversimplified high-level performance analysis, and a short deadline for getting the TBDs out of the requirements specification, they fixed into the contract a requirement for a system response time of less than one second.

Subsequently, the software architects found that subsecond performance could only be provided via a highly customized design that attempted to anticipate query patterns and cache copies of data so that each user's likely data would be within one second's reach (a 1980s precursor of Google). The resulting hardware architecture had more than 25 super-midicomputers busy caching data according to algorithms whose actual performance defied easy analysis. The scope and complexity of the hardware-software architecture brought the estimated cost of the system to nearly $100 million, driven primarily by the requirement for a one-second response time.

Faced with this unattractive prospect (far more than the customer's budget for the system), the customer and developer decided to develop and test a prototype of the system's user interface and representative capabilities. The results showed that a four-second response time would satisfy users 90 percent of the time. A four-second response time, with special handling for high-priority transactions, dropped development costs closer to $30 million. Thus, the premature specification of a 1-second response time neglected the risk of creating an overexpensive and time-consuming system development. Fortunately, in this case, the only loss was the wasted effort on the expensive-system architecture and a 15-month delay in delivery. More frequently, such rework is done only after the expensive full system is delivered and found still too slow and too expensive to operate.
6
Problems Avoidable with FED
- Attempt to validate 1-second response time
  - Commercial system benchmarking and architecture analysis: needs expensive custom solution
  - Prototype: 4-second response time OK 90% of the time
- Negotiate response time ranges
  - 2 seconds desirable
  - 4 seconds acceptable, with some 2-second special cases
- Benchmark commercial system add-ons to validate their feasibility
- Present solution and feasibility evidence at anchor point milestone review
- Result: acceptable solution with minimal delay

Had the developers been required to deliver a FED showing evidence of the feasibility of the one-second response time, they would have run benchmarks on the best available commercial query systems, using representative user workloads, and would have found that the best they could do was about a 2.5-second response time, even with some preprocessing to reduce query latency. They would have performed a top-level architecture analysis of custom solutions and concluded that such solutions were in the $100M cost range. They would have shared these results with the customer in advance of any key reviews, and found that the customer would prefer to explore the feasibility of a system with a commercially supportable response time. They would have done user interface prototyping and found much earlier that a 4-second response time was acceptable 90% of the time. As some uncertainties still existed about the ability to address the remaining 10% of the queries, the customer and developer would have agreed to avoid repeating the risky specification of a fixed response time requirement, and instead to define a range of desirable-to-acceptable response times, with an award fee provided for faster performance. They would also have agreed to reschedule the next milestone review to give the developer time and budget to present evidence of the most feasible solution available, using the savings over the prospect of a $100M system development as rationale. This would have put the project on a more solid success track over a year before the actual project discovered and rebaselined itself, and without the significant expense that went into the unaffordable architecture definition.
7
Need for FED in Large Systems of Systems
[Figure: total % added schedule vs. percent of project schedule devoted to initial architecture and risk resolution, for 10 KSLOC, 100 KSLOC, and 10,000 KSLOC projects. Total added schedule is the sum of the architecture/risk-resolution investment and the added schedule devoted to rework (COCOMO II RESL factor). Each curve has a sweet spot; rapid change moves the sweet spot leftward, high assurance moves it rightward.]
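The shape of these curves can be reproduced with a toy model. The sketch below is purely illustrative: the coefficients are invented rather than the published COCOMO II RESL calibration, and only the qualitative sweet-spot behavior (moving rightward as size grows) is the point.

```python
# Illustrative sketch, NOT the published COCOMO II calibration: total added
# schedule = up-front architecting time + rework time, where rework falls as
# architecting rises and grows with project size. Coefficients are invented
# to reproduce the qualitative "sweet spot" shape of the chart.

def total_added_schedule(arch_pct, ksloc):
    """Percent schedule added: architecting investment plus resulting rework."""
    # Hypothetical rework model: bigger systems pay more for skimpy architecting.
    rework_pct = (ksloc ** 0.35) * 18.0 / (arch_pct + 2.0)
    return arch_pct + rework_pct

for ksloc in (10, 100, 10_000):
    # Brute-force the sweet spot over 0-60% architecting investment.
    sweet = min(range(0, 61), key=lambda a: total_added_schedule(a, ksloc))
    print(f"{ksloc:>6} KSLOC: sweet spot ~{sweet}% architecting, "
          f"total added ~{total_added_schedule(sweet, ksloc):.0f}%")
```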
8
Summary
- Schedule-based and event-based reviews are risk-prone
- Evidence-based reviews enable early risk resolution
  - They require more up-front systems engineering effort
  - They have a high ROI for high-risk projects
  - They synchronize and stabilize concurrent engineering
- The evidence becomes a first-class deliverable
  - It requires planning and earned value management
- Evidence-based reviews can be added to traditional review processes
9
The Incremental Commitment Life Cycle Process: Overview
Stage I: Definition. Stage II: Development and Operations. Anchor Point Milestones.

This slide shows how the ICM spans the life cycle process from concept exploration to operations. Each phase includes a number of concurrent activities (shown on the next chart) that are synchronized and stabilized with a FED-based anchor point milestone review at points corresponding with the major DoD acquisition milestones. At each anchor point, there are four options, based on the assessed risk of the proposed system. Some options involve go-backs. These options result in many possible process paths.

The life cycle is divided into two stages: Stage I of the ICM (Definition) has three decision nodes with four options per node, culminating with incremental development in Stage II (Development and Operations). Stage II has an additional two decision nodes, again with four options per node. One can use ICM risk patterns to generate frequently used processes with confidence that they fit the situation. Initial risk patterns can generally be determined in the Exploration phase. One then proceeds with development as a proposed plan with risk-based evidence at the VCR/Concept Definition milestone, adjusting in later phases as necessary.

Risks associated with the system drive the life cycle process. Information about the risks (feasibility assessments) supports the decision to proceed, adjust scope or priorities, or cancel the program. The downward arrows at the end of each phase provide built-in off-ramps for clearly infeasible programs. The horizontal arrows provide additional options to skip phases or activities if the risk of doing so is negligible.

Synchronize and stabilize concurrency via FEDs. Risk patterns determine the life cycle process.
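A back-of-envelope count implied by the numbers above (our own arithmetic, ignoring go-backs and off-ramps):

```python
# 4 options at each of the 3 Stage I + 2 Stage II decision nodes gives
# this many nominal option combinations; go-backs and off-ramps add more.
print(4 ** (3 + 2))  # 1024
```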
10
Nature of Feasibility Evidence
Not just traceability matrices and PowerPoint charts. Evidence can include results of:
- Prototypes: of networks, robots, user interfaces, COTS interoperability
- Benchmarks: for performance, scalability, accuracy
- Exercises: for mission performance, interoperability, security
- Models: for cost, schedule, performance, reliability; tradeoffs
- Simulations: for mission scalability, performance, reliability
- Early working versions: of infrastructure, data fusion, legacy compatibility
- Previous experience
- Combinations of the above

Validated by independent experts for:
- Realism of assumptions
- Representativeness of scenarios
- Thoroughness of analysis
- Coverage of key off-nominal conditions

Key point: the need to show evidence. It was a significant step forward for DoD to progress from schedule-based reviews (where the review took place at the scheduled time whether the project was ready or not) to event-based reviews (where the review would not take place until its artifacts, e.g., functional requirements, were completed). However, many event-based reviews focused on hastily produced artifacts that had not been validated for compatibility or feasibility, such as the example in Chart 4. Thus, it is another significant step forward to progress from event-based reviews to evidence-based reviews. The major difference between the example project proceeding without and with a FED was that the need to provide evidence of feasibility requires the project to develop and exercise benchmarks and prototypes that expose both infeasible solutions and unnecessarily ambitious requirements. Not only does the evidence need to be produced, but it needs to be validated by independent experts. The risk of validating just nominal-case scenarios is shown in the next chart.
11
Common Examples of Inadequate Evidence
- "Our engineers are tremendously creative. They will find a solution for this."
- "We have three algorithms that met the KPPs on small-scale nominal cases. At least one will scale up and handle the off-nominal cases."
- "We'll build it and then tune it to satisfy the KPPs."
- "The COTS vendor assures us that they will have a security-certified version by the time we need to deliver."
- "We have demonstrated solutions for each piece from our NASA, Navy, and Air Force programs. It's a simple matter of integration to put them together."

Numerous projects have repeated the consequences of proceeding without evidence that their plans and specifications are compatible and feasible. Why does this happen? Most often because the project has been pushed into an inadequate budget and schedule, particularly for systems engineering, and because its progress payments and award fees are initially based on the existence of specifications rather than on their feasibility. As a result, one often hears statements such as the above in place of evidence at milestone reviews. What can be done about this? First, enough budget and schedule needs to be provided (models such as COSYSMO [Valerdi, 2005] are a good cross-check). Next, plans need to be made and progress monitored for both producing and reviewing the feasibility evidence. And if progress reviews uncover statements such as the above, corrective action such as the examples on the next slide needs to be performed.
12
Examples of Making the Evidence Adequate
- Have the creative engineers prototype and evaluate a solution on some key nominal and off-nominal scenarios.
- Prototype and evaluate the three algorithms on some key nominal and off-nominal scenarios.
- Develop prototypes and/or simulations and exercise them to show that the architecture will not break while scaling up or handling off-nominal cases.
- Conduct a scaled-down security evaluation of the current COTS product. Determine this and other vendors' track records for getting certified in the available time. Investigate alternative solutions.
- Have a tiger team prototype and evaluate the results of the "simple matter of integration."

These are candidate actions to resolve the issues listed on the previous chart.
13
Summary
- Schedule-based and event-based reviews are risk-prone
- Evidence-based reviews enable early risk resolution
  - They require more up-front systems engineering effort
  - They have a high ROI for high-risk projects
  - They synchronize and stabilize concurrent engineering
- The evidence becomes a first-class deliverable
  - It requires planning and earned value management
- Evidence-based reviews can be added to traditional review processes
14
FED Development Process Framework
- As with other ICM artifacts, FED process and content are risk-driven
- A generic set of steps is provided, but needs to be tailored to the situation
- Can apply at increasing levels of detail in the Exploration, Valuation, and Foundations phases
- Can be satisfied by pointers to existing evidence
- Also applies to the Stage II Foundations rebaselining process
- Examples provided for a large simulation and testbed evaluation process and evaluation criteria

This chart highlights the fact that the appropriate level of detail for the contents of the FED is based on the perceived risks and criticality of the system to be developed. It is NOT a "one size fits all" process, but rather a framework to help developers and stakeholders determine the appropriate level of analysis and evaluation. It is also not limited to new system developments, but can be effective in major upgrades to existing systems or systems of systems.
15
Steps for Developing Feasibility Evidence
A. Develop phase work products/artifacts
   - For examples, see the ICM Anchor Point Milestone Content charts
B. Determine the most critical feasibility assurance issues
   - Issues for which lack of feasibility evidence is program-critical
C. Evaluate feasibility assessment options
   - Cost-effectiveness; risk reduction leverage/ROI; rework avoidance
   - Tool, data, and scenario availability
D. Select options; develop feasibility assessment plans
E. Prepare FED assessment plans and earned value milestones
   - Try to relate earned value to risk exposure avoided rather than budgeted cost (see the sketch below)

This chart and the next one outline a process that can be used for developing feasibility evidence. The process starts with developing the appropriate work products for the phase (operational concept, requirements specification, architecture, etc.). As part of the engineering work, feasibility assurance issues are identified that are critical to the success of the system development program. These are the issues for which assessment options are explored and, for potentially viable options, further investigated with respect to cost, schedule, ROI, etc. Also note that the FED activities need to be planned and assigned an appropriate earned value. The earned value should be based on the potential risk exposure costs, not the perceived available budget. See the Competitive Prototyping handout for additional guidance on how much "information" to buy via the feasibility assessment process.

The "steps" are denoted by letters rather than numbers to indicate that many are done concurrently.
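As a hedged sketch of relating earned value to risk exposure avoided, using Boehm's risk exposure RE = (probability of loss) x (size of loss); the tasks and numbers below are hypothetical:

```python
# Illustrative only: weight earned value by the share of total risk exposure
# a feasibility-assessment task retires, not by its share of budgeted cost.

def risk_exposure(p_loss, size_loss):
    """Risk exposure RE = probability of loss x size of loss (dollars)."""
    return p_loss * size_loss

# Hypothetical tasks: RE before and after the evidence each is planned to produce.
tasks = {
    "benchmark COTS query engines": (risk_exposure(0.6, 70e6), risk_exposure(0.1, 70e6)),
    "prototype user interface":     (risk_exposure(0.4, 20e6), risk_exposure(0.1, 20e6)),
}

total_avoided = sum(before - after for before, after in tasks.values())
for name, (before, after) in tasks.items():
    # Earned-value weight = fraction of total risk exposure this task retires.
    print(f"{name}: EV weight {100 * (before - after) / total_avoided:.0f}%")
```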
16
Steps for Developing Feasibility Evidence (continued)
F. Begin monitoring progress with respect to plans
   - Also monitor project/technology/objectives changes and adapt plans
G. Prepare evidence-generation enablers
   - Assessment criteria
   - Parametric models, parameter values, bases of estimate
   - COTS assessment criteria and plans
   - Benchmarking candidates, test cases
   - Prototypes/simulations, evaluation plans, subjects, and scenarios
   - Instrumentation, data analysis capabilities
H. Perform pilot assessments; evaluate and iterate plans and enablers
I. Assess readiness for the Commitment Review
   - Shortfalls identified as risks and covered by risk mitigation plans
   - Proceed to Commitment Review if ready
J. Hold Commitment Review when ready; adjust plans based on review outcomes
17
Large-Scale Simulation and Testbed FED Preparation Example
For large, complex systems, it is often very cost-effective to begin development of a modeling and simulation testbed to support both the system infrastructure and application developers, as well as those trying to evaluate the feasibility of the various alternatives.
18
Focus of Each Commitment Review
Each commitment review evaluates the review package created during the current phase:
- Work products
- Feasibility evidence
  - Prototypes
  - Studies
  - Estimates
  - Bases of estimate

The goal is to determine whether:
- Efforts should proceed into the next phase: commit to next phase (risk acceptable or negligible)
- More work should be done in the current phase: do more work before deciding to commit to the next phase (risk high, but probably addressable)
- Efforts should be discontinued (risk too high or unaddressable)

Enter-Next-Phase Commitment Review / Source of Package Information:
- Valuation (VCR/CD): Exploration phase
- Foundations (FCR/MS-A): Valuation phase
- Development (DCR/MS-B): Foundations phase
- Operations (OCR): Development phase

This chart summarizes the types of evidence that are evaluated at each commitment review, as well as the potential results of the review: proceed ahead, do more work, or discontinue the program. Also note that the title of each review indicates the type of commitment to the next phase of increased investment that will result from a successful review.
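A minimal encoding of the three-way decision above (our own illustrative mapping; the risk labels are shorthand, not ICM-defined terms):

```python
# Map the assessed risk of going forward to the review outcomes on this slide.

def commitment_decision(risk: str) -> str:
    outcomes = {
        "negligible":       "Proceed: commit to next phase",
        "acceptable":       "Proceed: commit to next phase",
        "high-addressable": "Do more work in current phase before committing",
        "too-high":         "Discontinue effort",
    }
    return outcomes[risk]

print(commitment_decision("high-addressable"))
```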
19
Exploration Phase Activities
- Protagonist identifies a need or opportunity worth exploring
  - Service, agency, joint entity
- Protagonist identifies additional success-critical stakeholders (SCSs)
  - Technical, managerial, financial, DOTMLPF
- SCS working groups explore needs, opportunities, scope, solution options
  - Materiel and non-materiel options
  - Compatibility with strategic guidance
  - SCS benefits realization
- Analysis of alternatives
  - Define evaluation criteria
  - Filter out unacceptable alternatives
  - Identify most promising alternative(s)
- Identify common-special-case process if possible
- Develop top-level VCR/CD Package
- Approval bodies review VCR/CD Package

Major starting points are in sequence, but the activities are concurrent.

The activities listed on this chart are those typically conducted in response to an identified need or a promising opportunity. Note that many of the exploratory activities often go on concurrently with each other and can change in scope as potential stakeholders and alternatives are identified.
20
Top-Level VCR/CD Package
- Operations/life cycle concept
  - Top-level system boundary and environment elements
  - Benefits chain or equivalent: links initiatives to desired benefits and identifies associated SCSs, including production and life cycle support SCSs
  - Representative operational and support scenarios
- Prototypes (focused on top development and operational risks), objectives, constraints, and priorities
- Initial Capabilities Document
- Leading solution alternatives
  - Top-level physical, logical, capability, and behavioral views
- Life Cycle Plan key elements
  - Top-level phases, capability increments, roles, responsibilities, required resources
- Feasibility Evidence Description
  - Evidence of ability to meet objectives within budget and schedule constraints
  - Evidence of ability to provide desired benefits to stakeholders
  - Mission effectiveness evidence

This chart summarizes the work products and artifacts that are reviewed at the end of the Exploration Phase as part of the VCR/CD Commitment Review.
21
ICM Anchor Point Milestone Content (1)
(Risk-driven level of detail for each element)

Definition of Operational Concept
- FCR/MS-A package: system shared vision update; top-level system objectives and scope; system boundary, environment parameters and assumptions; top-level operational concepts; production, deployment, operations, and sustainment scenarios and parameters; organizational life-cycle responsibilities (stakeholders)
- DCR/MS-B package: elaboration of system objectives and scope by increment; elaboration of operational concept by increment, including all mission-critical operational scenarios; generally decreasing detail in later increments

System Prototype(s)
- FCR/MS-A package: exercise key usage scenarios; resolve critical risks (e.g., quality attribute levels, technology maturity levels)
- DCR/MS-B package: exercise range of usage scenarios; resolve major outstanding risks

Definition of System Requirements
- FCR/MS-A package: top-level functions, interfaces, quality attribute levels, including growth vectors and priorities; project and product constraints; stakeholders' concurrence on essentials
- DCR/MS-B package: elaboration of functions, interfaces, quality attributes, and constraints by increment, including all mission-critical off-nominal requirements; stakeholders' concurrence on their priority concerns

This chart and the next two show the work products and artifacts that are evaluated at the end of the Valuation and Foundations phases as part of the FCR/MS-A and DCR/MS-B reviews, respectively. They also show the major differences in the content of the FCR/MS-A and DCR/MS-B milestone packages. For example, the Operational Concept focuses primarily on the major system-level operational stakeholders' roles, responsibilities, and usage scenarios in the FCR package, while the DCR package elaborates the scope and operational concept of each increment of capability to be delivered. The content is risk-driven, so simple scenarios are less emphasized, while critical or complex scenarios are elaborated in more detail. Similarly, the operational scenarios for early increments are defined in more detail, while scenarios for later increments have less detail because of the risk that operational usage is likely to change their nature and priorities. Still, some detail is needed to enable feasibility evaluation of the architecture's ability to support the overall operational capability.
22
ICM Anchor Point Milestone Content (2)
(Risk-driven level of detail for each element)

Definition of System Architecture
- FCR/MS-A package: top-level definition of at least one feasible architecture (physical and logical elements and relationships; choices of Non-Developmental Items (NDI)); identification of infeasible architecture options
- DCR/MS-B package: choice of architecture and elaboration by increment and component (physical and logical components, connectors, configurations, constraints); NDI choices; domain-architecture and architectural style choices; architecture evolution parameters

Definition of Life-Cycle Plan
- FCR/MS-A package: identification of life-cycle stakeholders (users, customers, developers, testers, sustainers, interoperators, general public, others); identification of life-cycle process model (top-level phases, increments); top-level WWWWWHH* by phase and function (production, deployment, operations, sustainment)
- DCR/MS-B package: elaboration of WWWWWHH* for Initial Operational Capability (IOC) by phase and function; partial elaboration and identification of key TBDs for later increments

This slide describes the expected level of detail for the system architecture definition and the life cycle plan at the FCR/MS-A and DCR/MS-B reviews. Missing or weak evidence should be identified as a risk as part of the reviews.

*WWWWWHH: Why, What, When, Who, Where, How, How Much
23
ICM Anchor Point Milestone Content (3)
(Risk-driven level of detail for each element)

Feasibility Evidence Description (FED)
- FCR/MS-A package: evidence of consistency and feasibility among the elements above (via physical and logical modeling, testbeds, prototyping, simulation, instrumentation, analysis, etc.); mission cost-effectiveness analysis for requirements and feasible architectures; identification of evidence shortfalls and risks; stakeholders' concurrence on essentials
- DCR/MS-B package: all major risks resolved or covered by a risk management plan; stakeholders' concurrence on their priority concerns and commitment to development

The FED for the FCR/Milestone-A package needs only to show feasibility for at least one candidate architecture, while at the DCR/Milestone-B, a particular architecture is proposed for feasibility evaluation and commitment. Again, any shortfalls in feasibility evidence are uncertainties and risks that need to be covered by risk management plans to be reviewed for commitment to proceed into the next phase.
24
Overview of Example Review Process: DCR/MS-B
Review Entrance Criteria
- Successful FCR/MS-A
- Required inputs available

Review Planning Tasks
- Collect/distribute review products
- Determine readiness
- Identify stakeholders, expert reviewers
- Identify review leader and recorder
- Identify location/facilities
- Prepare/distribute agenda

Review Inputs
- DCR/MS-B Package: operational concept, prototypes, requirements, architecture, life cycle plans, feasibility evidence

Pre-Review Technical Activities
- Experts and stakeholders review the DCR/MS-B package and submit issues
- Developers prepare responses to issues

Conduct DCR/MS-B Review Meeting
- Discuss and resolve issues
- Identify action plans, risk mitigation plans

Review Exit Criteria
- Evidence of DCR/MS-B package feasibility validated
- Feasibility shortfalls identified as risks and covered by risk mitigation plans
- Stakeholder agreement on DCR/MS-B package content
- Stakeholder commitment to support the Development phase
- All open issues have action plans
- Otherwise, the review fails

Review Outputs
- Action plans
- Risk mitigation plans

Post-Review Tasks
- Publish review minutes
- Publish and track open action items
- Document lessons learned

This chart highlights the activities that need to be performed in preparation for the review, the actual review, and the post-review activities and follow-up. Key to this process are the questions to answer in assessing the ability to demonstrate feasibility:
- What has been done so far?
- What is the rate of progress toward achievement?
- What are some intermediate achievable benchmarks?
- What further needs to be done?
  - Concise risk mitigation plans for key uncertainty areas
  - Why-What-When-Who-Where-How-How Much, including benchmarks and required resources

These questions also need to be addressed in monitoring the steps for developing the feasibility evidence, as discussed below.
25
Lean Risk Management Plan: Fault Tolerance Prototyping
1. Objectives (the "Why")
- Determine and reduce the level of risk of the fault tolerance features causing unacceptable performance (e.g., throughput, response time, power consumption)
- Create a description of, and a development plan for, a set of low-risk fault tolerance features

2. Deliverables and Milestones (the "What" and "When")
- By week 3:
  1. Evaluation of fault tolerance options
  2. Assessment of reusable components
  3. Draft workload characterization
  4. Evaluation plan for prototype exercise
  5. Description of prototype
- By week 7:
  6. Operational prototype with key fault tolerance features
  7. Workload simulation
  8. Instrumentation and data reduction capabilities
  9. Draft description of, and plan for, fault tolerance features
- By week 10:
  10. Evaluation and iteration of prototype
  11. Revised description of, and plan for, fault tolerance features

The word "plan" often conjures up a heavyweight document full of boilerplate and hard-to-remember sections. Risk management plans should be particularly lean and risk-driven. Here is an example outline for a risk management plan to conduct fault tolerance prototyping, mitigating identified risks that the fault tolerance features might seriously compromise performance aspects such as throughput, real-time deadline satisfaction, and power consumption. Its essential content fits onto two PowerPoint charts.
26
Lean Risk Management Plan: Fault Tolerance Prototyping (continued)
3. Responsibilities (the "Who" and "Where")
- System Engineer: G. Smith (tasks 1, 3, 4, 9, 11; support of tasks 5, 10)
- Lead Programmer: C. Lee (tasks 5, 6, 7, 10; support of tasks 1, 3)
- Programmer: J. Wilson (tasks 2, 8; support of tasks 5, 6, 7, 10)

4. Approach (the "How")
- Design-to-schedule prototyping effort, driven by hypotheses about fault tolerance-performance effects
- Use multicore processor and real-time OS; add prototype fault tolerance features
- Evaluate performance with respect to a representative workload
- Refine prototype based on results observed

5. Resources (the "How Much")
- $60K: full-time system engineer, lead programmer, programmer: (10 weeks) x (3 staff) x ($2K/staff-week)
- $0K: 3 dedicated workstations (from project pool)
- $0K: 2 target processors (from project pool)
- $0K: 1 test co-processor (from project pool)
- $10K: contingencies
- $70K: Total

This chart shows examples of the types of information to be provided for responsibilities, the risk management approach, and the needed funding resources.
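As a sketch, the lean plan can be captured in a WWWWWHH-keyed structure; this is our own encoding, not a format the ICM prescribes, and it doubles as a check on the slide's budget arithmetic:

```python
# Illustrative encoding of the lean plan above; keys follow WWWWWHH.
lean_plan = {
    "Why":  "reduce risk of fault tolerance features degrading performance",
    "What": ["description of low-risk FT features", "development plan"],
    "When": {"week 3": [1, 2, 3, 4, 5], "week 7": [6, 7, 8, 9],
             "week 10": [10, 11]},                     # deliverable numbers
    "Who/Where": {"G. Smith": "system engineer", "C. Lee": "lead programmer",
                  "J. Wilson": "programmer"},
    "How": "design-to-schedule prototyping on multicore + real-time OS",
    "How Much": {"labor": 3 * 10 * 2_000, "contingency": 10_000},  # dollars
}

budget = lean_plan["How Much"]
assert budget["labor"] == 60_000                       # matches the slide
print(f"Total: ${budget['labor'] + budget['contingency']:,}")  # $70,000
```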
27
Example of FED Risk Evaluation Criteria
- Negligible: anticipated 0-5% budget and/or schedule overrun; identified only minor shortfalls and imperfections expected to affect the delivered system
- Low: anticipated 5-10% budget and/or schedule overrun; identified 1-3 moderate shortfalls and imperfections expected to affect the delivered system
- Moderate: anticipated 10-25% budget and/or schedule overrun; identified >3 moderate shortfalls and imperfections expected to affect the delivered system
- Major: anticipated 25-50% budget and/or schedule overrun; identified 1-3 mission-critical shortfalls and imperfections expected to affect the delivered system
- Severe: anticipated >50% budget and/or schedule overrun; identified >3 mission-critical shortfalls and imperfections expected to affect the delivered system

In order to evaluate the findings of a feasibility assessment and determine the appropriate next steps, criteria are needed to help determine the associated risk for each finding. The information on this chart can be used to assign a risk level to the various findings and guide the outcome of the associated Commitment Review.
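These criteria can be read as a lookup. The sketch below takes its thresholds from the slide; combining the overrun and shortfall criteria by taking the worse of the two is our own interpretation of the "and/or":

```python
# Thresholds from the slide; the "worse criterion wins" rule is our reading.

def fed_risk_level(overrun_pct, moderate_shortfalls, critical_shortfalls):
    """Map anticipated overrun (%) and shortfall counts to a risk level."""
    if overrun_pct > 50 or critical_shortfalls > 3:
        return "Severe"
    if overrun_pct > 25 or critical_shortfalls >= 1:
        return "Major"
    if overrun_pct > 10 or moderate_shortfalls > 3:
        return "Moderate"
    if overrun_pct > 5 or moderate_shortfalls >= 1:
        return "Low"
    return "Negligible"

print(fed_risk_level(8, 2, 0))   # -> Low
print(fed_risk_level(30, 0, 2))  # -> Major
```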
28
Off-Nominal Architecture-Breakers
[Figure: Pareto chart of % of cost to fix Software Problem Reports (SPRs) vs. % of SPRs, for TRW Project A (373 SPRs) and TRW Project B (1005 SPRs). Major rework sources were off-nominal architecture-breakers: A - network failover; B - extra-long messages.]

Often, projects with tight schedules will overfocus their architecture and development effort on the nominal-case requirements. This chart shows the frequent result: a critical off-nominal condition that cannot be handled by the nominal-case architecture, causing large amounts of rework and delays, and reproducing the common Pareto diagram in which 20% of the defects cause 80% of the rework. This also happens on programs that suboptimize their architecture to succeed on the initial increments. This analysis enabled TRW not only to extend its risk management guidelines to cover critical off-nominal scenarios, but also to help convince future customers to reinterpret traditional waterfall milestones to include validation of feasibility evidence. A resulting example success story, the CCPDS-R project, is summarized in the backup charts.
29
The Incremental Commitment Life Cycle Process: Overview
Stage I: Definition. Stage II: Development and Operations. Anchor Point Milestones. Concurrently engineer OpCon, requirements, architecture, plans, and prototypes in Stage I; concurrently engineer Increment N (operations), N+1 (development), and N+2 (architecting) in Stage II.

Stage II of the Incremental Commitment Life Cycle provides a framework for concurrent engineering and development of multiple increments. More on this concurrency follows on the next slides.

Note: the term "concurrent engineering" fell into disfavor when behind-schedule developers applied it to the practice of proceeding into development while the designers worked on finishing the design. Not surprisingly, the developers encountered a high rework penalty for going into development with weak architecture and risk resolution. "Concurrent engineering" as applied in the ICM is much different. It is focused on doing a cost-effective job of architecture and risk resolution in Stage I, and on performing stabilized development, verification, and validation of the current system increment while concurrently handling the system's change traffic and preparing a feasibility-validated architecture and set of plans for the next increment in Stage II.
30
ICM Detailed Activity Descriptions
This slide provides more detailed descriptions of the activities concurrently going on during the various ICM phases. A DCR/MS-B package contains all of the Foundations material (in risk-driven levels of detail) needed for its independent expert review preceding its Development/Milestone B Commitment Review: operations concept, requirements, architecture, plans, business case, feasibility evidence, and risk management plans as necessary. Even further detail on the contents of this material is provided next.
31
Case Study: CCPDS-R Project Overview
- Domain: ground-based C3 development
- Size/language: 1.15M SLOC Ada
- Average number of people: 75
- Schedule: 75 months; 48-month IOC
- Process/standards: DOD-STD-2167A; iterative development
- Environment: Rational host, DEC host, DEC VMS targets
- Contractor: TRW
- Customer: USAF
- Current status: delivered on-budget, on-schedule

Reference: [Royce, 1998], Appendix D
32
CCPDS-R Reinterpretation of SSR, PDR
[Figure: CCPDS-R development life cycle timeline (months 5-25): Inception, Elaboration, and Construction phases, with architecture iterations followed by release iterations; milestones SSR, IPDR, PDR, and CDR. Annotations: contract award; high-risk prototypes; architecture baseline under change control (LCO); competitive design phase with architectural prototypes, planning, and requirements analysis (LCA); working network OS with validated failover; early delivery of "alpha" capability to user.]
33
CCPDS-R Results: No Late 80-20 Rework
Architecture first:
- Integration during the design phase
- Demonstration-based evaluation
- Risk management

[Figure: configuration baseline change metrics (hours per change) for design changes, implementation changes, and maintenance changes and ECPs across the project development schedule (months 10-40), showing no late 80-20 rework.]
34
Summary
- Schedule-based and event-based reviews are risk-prone
- Evidence-based reviews enable early risk resolution
  - They require more up-front systems engineering effort
  - They have a high ROI for high-risk projects
  - They synchronize and stabilize concurrent engineering
- The evidence becomes a first-class deliverable
  - It requires planning and earned value management
- Evidence-based reviews can be added to traditional review processes
35
Conclusions
- Anchor Point milestones enable synchronization and stabilization of concurrent engineering
  - Have been successfully applied on small to large projects
  - CCPDS-R large-project example provided in backup charts
  - They also provide incremental stakeholder resource commitment points
- The FED enables evidence of program feasibility to be evaluated
  - Produced by the developer
  - Evaluated by stakeholders and independent experts
  - Shortfalls in evidence are sources of uncertainty and risk, and should be covered by risk management plans
- Most of the benefit can be obtained by adding a FED to traditional milestone content and reviews
36
References

B. Boehm and J. Lane, "Guide for Using the Incremental Commitment Model (ICM) for Systems Engineering of DoD Projects, v.0.5," USC-CSSE-TR.
B. Boehm, "Anchoring the Software Process," IEEE Software, July 1996.
B. Boehm, A.W. Brown, V. Basili, and R. Turner, "Spiral Acquisition of Software-Intensive Systems of Systems," CrossTalk, May 2004, pp. 4-9.
B. Boehm and J. Lane, "Using the ICM to Integrate System Acquisition, Systems Engineering, and Software Engineering," CrossTalk, October 2007, pp. 4-9.
J. Maranzano et al., "Architecture Reviews: Practice and Experience," IEEE Software, March/April 2005.
R. Pew and A. Mavor, Human-System Integration in the System Development Process: A New Look, National Academy Press, 2007.
W. Royce, Software Project Management, Addison Wesley, 1998.
R. Valerdi, "The Constructive Systems Engineering Cost Model," Ph.D. dissertation, USC, August 2005.
CrossTalk articles:
37
Backup Charts
38
AT&T Experience with AP Reviews
AT&T and its spinoffs (Telcordia, Lucent, Avaya, regional Bell companies) have been successfully using versions of AP reviews for many years. More detail on their Architecture Review experience is in J. Maranzano et al., "Architecture Reviews: Practice and Experience," IEEE Software, March/April 2005. USC has used APs and FEDs to achieve a 92% on-time, in-budget, satisfied-client success rate on over 50 fixed-schedule campus e-service projects. AP reviews are also an integral part of the Rational Unified Process, with many successful implementations.
39
ICM Levels of Activity for Complex Systems
This chart shows the primary system aspects that are being concurrently engineered at an increasing level of understanding, definition, and development. It should be emphasized that the magnitude and shape of the levels of effort will be risk-driven and likely to vary from project to project. In particular, they are likely to have mini risk/opportunity-driven peaks and valleys, rather than the smooth curves shown for simplicity in this slide. The main intent of this view is to emphasize the necessary concurrency of the primary success-critical activities shown as rows. Thus, in interpreting the Exploration column, although system scoping is the primary objective of the Exploration phase, doing it well involves a considerable amount of activity in understanding needs, envisioning opportunities, identifying and reconciling stakeholder goals and objectives, architecting solutions, life cycle planning, evaluation of alternatives, and negotiation of stakeholder commitments. The number of concurrent activities (each row may have a good many concurrent activities going on) makes it clear that an anchor point milestone that brings everything together and assesses their compatibility and feasibility is necessary for concurrent engineering of large systems. Note that the development of feasibility evidence is a continuous process, although it peaks toward the end of each phase. More details on this process are provided later.
40
List of Acronyms
- CD: Concept Development
- CP: Competitive Prototyping
- DCR: Development Commitment Review
- DoD: Department of Defense
- ECR: Exploration Commitment Review
- EV: Expected Value
- FCR: Foundations Commitment Review
- FED: Feasibility Evidence Description
- GAO: Government Accountability Office
- ICM: Incremental Commitment Model
- KPP: Key Performance Parameter
- MBASE: Model-Based Architecting and Software Engineering
- OCR: Operations Commitment Review
- RE: Risk Exposure
- RUP: Rational Unified Process
- V&V: Verification and Validation
- VB: Value of Bold approach
- VCR: Valuation Commitment Review