
1 ICSM Principles 3 and 4
3. Concurrent multidiscipline engineering
4. Evidence- and risk-based decisions
Barry Boehm, USC CS 577 Lecture, Fall 2014
5/27/2014 © USC-CSSE

2 Principle 3: Concurrent multidiscipline engineering
- Problems with sequential engineering
- What concurrent engineering is and isn’t
- Failure story: Total Commitment RPV control
- Success story: Incremental Commitment RPV control
- Concurrent hardware-software-human engineering
5/27/2014 © USC-CSSE

3 Problems with Sequential Engineering
- Functionality first (build it now; tune it later): often fails to satisfy needed performance
- Hardware first: inadequate capacity for software growth; physical component hierarchy vs. software layered services
- Software first: layered services vs. component hierarchy; non-portable COTS products
- Human interface first: unscalable prototypes
- Requirements first: solutions often infeasible
5/27/2014 © USC-CSSE

4 What Concurrent Engineering Is and Isn’t
Is:
- Enabling activities to be performed in parallel via synchronization and stabilization
- ICSM concurrency view and evidence-based synchronization
- Agile sprints with timeboxing, refactoring, prioritized backlog
- Kanban pull-based systems evolution
- Architected agile development and evolution
Isn’t:
- Catching up on schedule by starting activities before their prerequisites are completed and validated
- “The requirements and design aren’t ready, but we need to hurry up and start coding, because there will be a lot of debugging to do” (a self-fulfilling prophecy)
5/27/2014 © USC-CSSE

5 ICSM Principle 3. Concurrent Multidiscipline Engr.
5/27/2014 © USC-CSSE

6 Scalable Remotely Controlled Operations: Agent-based 4:1 Remotely Piloted Vehicle demo
Scalable remotely controlled operations – ICM Case Study: An example to illustrate ICM benefits is the Unmanned Aerial Vehicle (UAV), or Remotely Piloted Vehicle (RPV), system enhancement discussed in Chapter 5 of the NRC HSI report [Pew and Mavor, 2007]. The RPVs are airplanes or helicopters operated remotely by humans. These systems are designed to keep humans out of harm’s way. However, the current system is human-intensive, requiring two people to operate a single vehicle. If there is a strong desire to change the 2:1 ratio (2 people to one vehicle) to allow a single operator to control 4 aircraft (a 1:4 ratio), based on a proof-of-principle agent-based prototype demo showing 1:4 performance of some RPV tasks, how should one proceed?
5/27/2014 © USC-CSSE

7 Total vs. Incremental Commitment – 4:1 RPV
Total Commitment
- Agent technology demo and PR: can do 4:1 for $1B
- Winning bidder: $800M; PDR in 120 days; 4:1 capability in 40 months
- PDR: many outstanding risks, undefined interfaces
- $800M, 40 months: “halfway” through integration and test
- 1:1 IOC after $3B, 80 months
Incremental Commitment [with a number of competing teams]
- $25M, 6 mo. to VCR [4]: may beat 1:2 with agent technology, but not 4:1
- $75M, 8 mo. to FCR [3]: agent technology may do 1:1; some risks
- $225M, 10 mo. to DCR [2]: validated architecture, high-risk elements
- $675M, 18 mo. to IOC [1]: viable 1:1 capability
- 1:1 IOC after $1B, 42 months
Total vs. Incremental Commitment – 4:1 RPV: This slide outlines two approaches to the RPV question: total commitment and incremental commitment. While this is a hypothetical case for developing a solution to the RPV manning problem, it shows how a premature total commitment without significant modeling, analysis, and feasibility assessment will often lead to large overruns in costs and schedule, and a manning ratio that is considerably less than initially desired. However, by “buying information” early and validating high-risk elements, the more technologically viable option is identified much earlier and can be provided for a much lower cost and much closer to the desired date. The ICM approach leads to the same improved manning ratio as the total commitment approach, but sooner and at a much reduced cost. The ICM approach also employs a competitive downselect strategy, which both reduces risk and enables a buildup of trust among the acquirers, developers, and users.
5/27/2014 © USC-CSSE
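As a quick check on the slide’s figures, here is a minimal sketch (in Python, not part of the original deck) that totals the incremental-commitment phases and compares them with the total-commitment outcome quoted above; the numbers are exactly those listed on the slide.

```python
# Illustrative arithmetic only; all figures are the ones quoted on the slide.
incremental_phases = [
    ("VCR", 25, 6),    # $25M, 6 months: agent technology may beat 1:2, but not 4:1
    ("FCR", 75, 8),    # $75M, 8 months: agent technology may do 1:1; some risks
    ("DCR", 225, 10),  # $225M, 10 months: validated architecture, high-risk elements
    ("IOC", 675, 18),  # $675M, 18 months: viable 1:1 capability
]

total_cost_m = sum(cost for _, cost, _ in incremental_phases)      # in $M
total_months = sum(months for _, _, months in incremental_phases)  # in months

print(f"Incremental commitment: ${total_cost_m}M over {total_months} months")
print("Total commitment (as reported on the slide): $3000M over 80 months")
```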

8 ICSM Activity Levels for Complex Systems
Concurrency needs to be synchronized and stabilized; see Principle 4.
ICSM HSI Levels of Activity for Complex Systems: As mentioned earlier, with the ICSM, a number of system aspects are being concurrently engineered at an increasing level of understanding, definition, and development. The most significant of these aspects are shown in this slide, an extension of a similar view of concurrently engineered software projects developed as part of the RUP (shown in a backup slide). As with the RUP version, it should be emphasized that the magnitude and shape of the levels of effort will be risk-driven and likely to vary from project to project. In particular, they are likely to have mini risk/opportunity-driven peaks and valleys, rather than the smooth curves shown for simplicity in this slide. The main intent of this view is to emphasize the necessary concurrency of the primary success-critical activities shown as rows. Thus, in interpreting the Exploration column, although system scoping is the primary objective of the Exploration phase, doing it well involves a considerable amount of activity in understanding needs, envisioning opportunities, identifying and reconciling stakeholder goals and objectives, architecting solutions, life cycle planning, evaluation of alternatives, and negotiation of stakeholder commitments.
5/27/2014 © USC-CSSE

9 Principle 4. Evidence- and risk-based decisions
- Evidence provided by the developer and validated by independent experts that, if the system is built to the specified architecture, it will:
  - Satisfy the requirements: capability, interfaces, level of service, and evolution
  - Support the operational concept
  - Be buildable within the budgets and schedules in the plan
  - Generate a viable return on investment
  - Generate satisfactory outcomes for all of the success-critical stakeholders
- All major risks resolved or covered by risk management plans (shortfalls in evidence are uncertainties and risks)
- Serves as the basis for stakeholders’ commitment to proceed
- Can be used to strengthen current schedule- or event-based reviews
Anchor Point Feasibility Rationales: To make ICSM concurrency work, the anchor point milestone reviews are the mechanism by which the many concurrent activities are synchronized, stabilized, and risk-assessed at the end of each phase. Each of these anchor point milestone reviews is focused on developer-produced evidence, documented in a Feasibility Evidence Description (FED), to help the key stakeholders determine the next level of commitment. At each program milestone/anchor point, feasibility assessments and the associated evidence are reviewed and serve as the basis for the stakeholders’ commitment to proceed. The FED is not just a document, a set of PowerPoint charts, or Unified Modeling Language (UML) diagrams. It is based on evidence from simulations, models, or experiments with planned technologies and detailed analysis of development approaches and projected productivity rates. The detailed analysis is often based on historical data showing reuse realizations, software size estimation accuracy, and actual developer productivity rates. It is often not possible to fully resolve all risks at a given point in the development cycle, but known, unresolved risks need to be identified and covered by risk management plans.
5/27/2014 © USC-CSSE

10 Feasibility Evidence: a First-Class ICSM Deliverable
- Not just an optional appendix
- General Data Item Description drafted
  - Based on one used on a large program
  - Includes plans, resources, monitoring
  - Identification of models, simulations, prototypes, benchmarks, usage scenarios, and experience data to be developed or used
  - Earned Value tracking of progress vs. plans
- Serves as a way of synchronizing and stabilizing the concurrent activities in the hump diagram
5/27/2014 © USC-CSSE

11 Failure Story: 1 Second Response Rqt.
[Chart: estimated cost vs. required response time (1 to 5 sec), contrasting the original spec with the requirement after prototyping; the 1-second spec required a custom architecture with many cache processors at roughly $100M, while the original modified client-server architecture could meet the relaxed requirement at far lower cost.]
Problems Encountered without a FED: In the early 1980s, a large government organization contracted with TRW to develop an ambitious information query and analysis system. The system would provide more than 1,000 users, spread across a large building complex, with powerful query and analysis capabilities for a large and dynamic database. TRW and the customer specified the system using a classic sequential-engineering waterfall development model. Based largely on user need surveys, an oversimplified high-level performance analysis, and a short deadline for getting the TBDs out of the requirements specification, they fixed into the contract a requirement for a system response time of less than one second. Subsequently, the software architects found that subsecond performance could only be provided via a highly customized design that attempted to anticipate query patterns and cache copies of data so that each user’s likely data would be within one second’s reach (a 1980s precursor of Google). The resulting hardware architecture had more than 25 super-midicomputers busy caching data according to algorithms whose actual performance defied easy analysis. The scope and complexity of the hardware-software architecture brought the estimated cost of the system to nearly $100 million, driven primarily by the requirement for a one-second response time. Faced with this unattractive prospect (far more than the customer’s budget for the system), the customer and developer decided to develop and test a prototype of the system’s user interface and representative capabilities. The results showed that a four-second response time would satisfy users 90 percent of the time. A four-second response time, with special handling for high-priority transactions, dropped development costs closer to $30 million. Thus, the premature specification of a one-second response time neglected the risk of creating an overexpensive and time-consuming system development. Fortunately, in this case, the only loss was the wasted effort on the expensive-system architecture and a 15-month delay in delivery. More frequently, such rework is done only after the expensive full system is delivered and found still too slow and too expensive to operate.
07/09/2010 © USC-CSSE File: Feasibility Evidence Developmentv8

12 Problems Avoidable with a FED (Feasibility Evidence Description)
- Attempt to validate the 1-second response time
  - Commercial system benchmarking and architecture analysis: needs an expensive custom solution
  - Prototype: 4-second response time OK 90% of the time
- Negotiate response time ranges
  - 2 seconds desirable
  - 4 seconds acceptable, with some 2-second special cases
- Benchmark commercial system add-ons to validate their feasibility
- Present solution and feasibility evidence at the anchor point milestone review
- Result: acceptable solution with minimal delay
Problems Avoidable with a FED: Had the developers been required to deliver a FED showing evidence of the feasibility of the one-second response time, they would have run benchmarks on the best available commercial query systems, using representative user workloads, and would have found that the best they could do was about a 2.5-second response time, even with some preprocessing to reduce query latency. They would have performed a top-level architecture analysis of custom solutions and concluded that such solutions were in the $100M cost range. They would have shared these results with the customer in advance of any key reviews, and found that the customer would prefer to explore the feasibility of a system with a commercially supportable response time. They would have done user interface prototyping and found much earlier that a 4-second response time was acceptable 90% of the time. As some uncertainties still existed about the ability to address the remaining 10% of the queries, the customer and developer would have agreed to avoid repeating the risky specification of a fixed response time requirement, and instead to define a range of desirable-to-acceptable response times, with an award fee provided for faster performance. They would also have agreed to reschedule the next milestone review to give the developer time and budget to present evidence of the most feasible solution available, using the savings over the prospect of a $100M system development as rationale. This would have put the project on a more solid success track over a year before the actual project discovered and rebaselined itself, and without the significant expense that went into the unaffordable architecture definition.
07/09/2010 © USC-CSSE File: Feasibility Evidence Developmentv8

13 Steps for Developing FED
A. Develop phase work products/artifacts: For a Development Commitment Review, this would include the system’s operational concept, prototypes, requirements, architecture, life cycle plans, and associated assumptions.
B. Determine the most critical feasibility assurance issues: Issues for which lack of feasibility evidence is program-critical.
C. Evaluate feasibility assessment options: Cost-effectiveness; necessary tool, data, and scenario availability.
D. Select options; develop feasibility assessment plans: What, who, when, where, how…
E. Prepare FED assessment plans and earned value milestones: Example to follow…
F. Begin monitoring progress with respect to plans: Also monitor changes to the project, technology, and objectives, and adapt plans.
G. Prepare evidence-generation enablers: Assessment criteria; parametric models, parameter values, bases of estimate; COTS assessment criteria and plans; benchmarking candidates and test cases; prototypes/simulations, evaluation plans, subjects, and scenarios; instrumentation and data analysis capabilities.
H. Perform pilot assessments; evaluate and iterate plans and enablers: Short bottom-line summaries and pointers to evidence files are generally sufficient.
I. Assess readiness for Commitment Review: Shortfalls identified as risks and covered by risk mitigation plans; proceed to Commitment Review if ready.
J. Hold Commitment Review when ready; adjust plans based on review outcomes: Review of evidence and independent experts’ assessments.
NOTE: “Steps” are denoted by letters rather than numbers to indicate that many are done concurrently.
07/09/2010 © USC-CSSE

14 Success Story: CCPDS-R
Characteristics of CCPDS-R:
- Domain: Ground-based C3 development
- Size/language: 1.15M SLOC Ada
- Average number of people: 75
- Schedule: 75 months
- Process/standards: DOD-STD-2167A, iterative development
- Environment: Rational host, DEC host, DEC VMS targets
- Contractor: TRW
- Customer: USAF
- Performance: Delivered on-budget, on-schedule
5/27/2014 © USC-CSSE

15 CCPDS-R Evidence-Based Commitment
[Timeline chart: development life cycle phases (Inception, Elaboration, Construction) with architecture iterations followed by release iterations and milestones SSR, IPDR, PDR, and CDR across months 5 through 25. Annotations: contract award; architecture baseline under change control (LCO); competitive design phase with architectural prototypes, planning, and requirements analysis (LCA); early delivery of an “alpha” capability to the user.]
5/27/2014 © USC-CSSE

16 Reducing Software Cost-to-Fix: CCPDS-R - Royce, 1998
- Architecture first
  - Integration during the design phase
  - Demonstration-based evaluation
- Risk management
[Chart: configuration baseline change metrics, hours per change vs. project development schedule (months 10 through 40), showing design changes, implementation changes, and maintenance changes and ECPs.]
5/27/2014 © USC-CSSE

17 CCPDS-R and 4 Principles
Stakeholder value-based guidance
- Reinterpreted DOD-STD-2167A; users involved
- Extensive user, maintainer, and management interviews and prototypes
- Award fee flowdown to performers
Incremental commitment and accountability
- Stage I: incremental technology validation, prototyping, architecting
- Stage II: 3 major-user-organization increments
Concurrent multidiscipline engineering
- Small, expert, concurrent-SysE team during Stage I
- Stage II: 75 parallel programmers working to validated interface specs; integration preceded programming
Evidence- and risk-driven decisions
- High-risk prototyping and distributed OS developed before PDR
- Performance validated via executing architectural skeleton
5/27/2014 © USC-CSSE

18 Master Net and 4 Principles
Stakeholder value-based guidance
- Overconcern with Voice of the Customer: 3.5 MSLOC of rqts.
- No concern with maintainers, interoperators: Prime vs. IBM
Incremental commitment and accountability
- Total commitment to an infeasible budget and schedule
- No contract award fees or penalties for under/overruns
Concurrent multidiscipline engineering
- No prioritization of features for incremental development
- No prototyping of operational scenarios and usage
Evidence- and risk-driven decisions
- No evaluation of Premier Systems scalability, performance
- No evidence of ability to satisfy budgets and schedules
5/27/2014 © USC-CSSE

19 Meta-Principle 4+: Risk Balancing
How much (system scoping, planning, prototyping, COTS evaluation, requirements detail, spare capacity, fault tolerance, safety, security, environmental protection, documenting, configuration management, quality assurance, peer reviewing, testing, use of formal methods, and feasibility evidence) are enough? Answer: Balancing the risk of doing too little and the risk of doing too much will generally find a middle-course sweet spot that is about the best you can do. 5/27/2014 © USC-CSSE

20 Risk Exposure RE = Prob (Loss) * Size (Loss)
Using Risk to Determine “How Much Is Enough” - testing, planning, specifying, prototyping…
- Risk Exposure RE = Prob(Loss) * Size(Loss)
- “Loss”: financial; reputation; future prospects, …
- For multiple sources of loss: RE = Σ over sources of [Prob(Loss) * Size(Loss)] per source
5/27/2014 © USC-CSSE
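A minimal sketch of the slide’s formula, assuming some illustrative loss sources and probabilities (none of these numbers come from the deck): total risk exposure is the probability-weighted loss summed over all sources.

```python
# Risk exposure RE = sum over sources of Prob(Loss) * Size(Loss).
# The sources and numbers below are purely illustrative, not from the slides.
loss_sources = {
    # source: (probability of loss, size of loss in $M)
    "critical defect escapes to the field": (0.10, 20.0),
    "schedule slip erodes market share":    (0.30, 5.0),
    "key COTS product is discontinued":     (0.05, 8.0),
}

risk_exposure = sum(p * s for p, s in loss_sources.values())
print(f"Total risk exposure: ${risk_exposure:.2f}M")  # 0.10*20 + 0.30*5 + 0.05*8 = 3.90
```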

21 Example RE Profile: Time to Ship - Loss due to unacceptable dependability
[Chart: RE = P(L) * S(L) vs. time to ship (amount of testing). Shipping early means many defects (high P(L)) and critical defects (high S(L)), hence high risk exposure; shipping later means few defects (low P(L)) and minor defects (low S(L)), hence low dependability risk.]
5/27/2014 © USC-CSSE

22 Example RE Profile: Time to Ship
- Loss due to unacceptable dependability
- Loss due to market share erosion
[Chart: RE = P(L) * S(L) vs. time to ship (amount of testing), now with two curves. The dependability curve falls with more testing: many defects (high P(L)) and critical defects (high S(L)) early; few defects (low P(L)) and minor defects (low S(L)) late. The market-share-erosion curve rises with more testing: few rivals (low P(L)) and weak rivals (low S(L)) early; many rivals (high P(L)) and strong rivals (high S(L)) late.]
5/27/2014 © USC-CSSE

23 Example RE Profile: Time to Ship - Sum of Risk Exposures
[Chart: summing the dependability and market-share-erosion risk exposure curves over time to ship (amount of testing) yields a U-shaped total; its minimum is the “sweet spot” amount of testing. A sketch of this calculation follows.]
5/27/2014 © USC-CSSE
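To make the sweet-spot idea concrete, here is a minimal sketch (not from the deck) that models a falling dependability-risk curve and a rising market-erosion-risk curve over the amount of testing, then picks the testing level that minimizes their sum. The curve shapes and constants are illustrative assumptions only.

```python
# Illustrative sweet-spot calculation: total RE = dependability RE + market-erosion RE.
# The exponential curve shapes and constants are assumptions, not values from the slides.
import math

def dependability_re(test_weeks: float) -> float:
    """Falls as testing increases: fewer, less critical defects remain."""
    return 10.0 * math.exp(-0.15 * test_weeks)

def market_erosion_re(test_weeks: float) -> float:
    """Rises as shipping is delayed: more and stronger rivals enter."""
    return 0.5 * math.exp(0.08 * test_weeks)

candidates = range(0, 41)  # weeks of testing considered
total = {w: dependability_re(w) + market_erosion_re(w) for w in candidates}
sweet_spot = min(total, key=total.get)

print(f"Sweet spot: ~{sweet_spot} weeks of testing, total RE = {total[sweet_spot]:.2f}")
```

The same scan shifts right when S(L) for defects is higher (safety-critical systems) and left when S(L) for delays is higher (startups), which is what the next two slides illustrate.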

24 Comparative RE Profile: Safety-Critical System
[Chart: RE = P(L) * S(L) vs. time to ship (amount of testing). For a safety-critical system, the higher S(L) for defects shifts the total-RE curve so that its high-Q sweet spot occurs at more testing than the mainstream sweet spot.]
5/27/2014 © USC-CSSE

25 Comparative RE Profile: Internet Startup
[Chart: RE = P(L) * S(L) vs. time to ship (amount of testing). For an internet startup, the higher S(L) for delays shifts the total-RE curve so that its low-TTM (time to market) sweet spot occurs at less testing than the mainstream sweet spot.]
5/27/2014 © USC-CSSE

26 How Much Testing Is Enough? (LiGuo Huang, 1996)
[Chart: risk exposure vs. added testing time, with falling curves for risk due to low dependability for three business cases (Early Startup, Commercial, High Finance) and a rising curve for risk due to market share erosion; each business case has its own sweet spot. Data series shown on the chart:
- COCOMO II, added % test time: 12, 22, 34, 54
- COQUALMO P(L): 1.0, .475, .24, .125, .06
- S(L), Early Startup: .33, .19, .11, .06, .03
- S(L), Commercial: .56, .32, .18, .10
- S(L), High Finance: 3.0, 1.68, .96, .54, .30
- Market risk REm: .008, .027, .09]
5/27/2014 © USC-CSSE
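As a rough illustration of how these numbers combine (this sketch is mine, not from the deck), dependability risk exposure at each added-test-time level is the COQUALMO defect-escape probability P(L) times the business case’s size of loss S(L); adding the market-erosion risk exposure at the same level and taking the minimum of the total locates that business case’s sweet spot. The sketch uses only the complete P(L) and S(L) series quoted on the slide and omits the partially listed market-risk values.

```python
# Dependability risk exposure RE = P(L) * S(L) at each added-test-time level,
# using the series quoted on the slide. The market-erosion RE (only partially
# listed on the slide) would be added to each value before locating the sweet spot.
p_loss = [1.0, 0.475, 0.24, 0.125, 0.06]  # COQUALMO defect-escape P(L) per test level

size_of_loss = {
    "Early Startup": [0.33, 0.19, 0.11, 0.06, 0.03],
    "High Finance":  [3.0, 1.68, 0.96, 0.54, 0.30],
}

for business_case, s_loss in size_of_loss.items():
    re_dependability = [p * s for p, s in zip(p_loss, s_loss)]
    formatted = ", ".join(f"{re:.3f}" for re in re_dependability)
    print(f"{business_case}: dependability RE per test level = {formatted}")
```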

27 How Much Architecting Is Enough?
A COCOMO II Analysis
[Chart: percent of project schedule devoted to initial architecture and risk resolution (x-axis) vs. added schedule devoted to rework (COCOMO II RESL factor) and the resulting total % added schedule, for 10 KSLOC, 100 KSLOC, and 10,000 KSLOC projects; each size has its own sweet spot, which lies further right for larger projects. Sweet spot drivers: rapid change moves the sweet spot leftward; high assurance moves it rightward.]
5/27/2014 © USC-CSSE
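The analysis on the slide minimizes total added schedule: the investment in architecting plus the rework that remains. Here is a minimal sketch of that trade-off with a made-up rework curve whose severity grows with project size; the curve shape and constants are assumptions for illustration, not the calibrated COCOMO II RESL values.

```python
# Total % added schedule = % spent on architecting + % of schedule lost to rework.
# The rework model below (rework shrinks as architecting grows, and is worse for
# larger projects) is an illustrative assumption, not the calibrated COCOMO II
# RESL relationship.
def rework_pct(arch_pct: float, ksloc: float) -> float:
    severity = 10.0 * (ksloc ** 0.25)   # bigger projects pay more for weak architecture
    return severity / (1.0 + 0.2 * arch_pct)

for ksloc in (10, 100, 10_000):
    totals = {a: a + rework_pct(a, ksloc) for a in range(0, 61)}
    sweet_spot = min(totals, key=totals.get)
    print(f"{ksloc:>6} KSLOC: sweet spot ~{sweet_spot}% architecting, "
          f"total added schedule ~{totals[sweet_spot]:.1f}%")
```

Under these assumptions the sweet spot moves from a few percent of schedule for a 10 KSLOC project to well over ten percent for a 10,000 KSLOC project, matching the direction of the slide’s curves.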

