
1 University of Southern California Center for Systems and Software Engineering
Incremental Commitment Model: What It Is (and Isn't), How to Right-Size It, How It Helps Do Competitive Prototyping
Barry Boehm and Jo Ann Lane, USC Center for Systems and Software Engineering, http://csse.usc.edu
Presented at the ICM and CP Workshop, July 14, 2008
Tutorial; charts with notes

2 Goals of Tutorial
1. Provide an overview of the Incremental Commitment Model (ICM) and its capabilities
2. Provide guidance on using the ICM process decision table through elaboration and examples of special cases
3. Describe how the ICM supports the new DoD Competitive Prototyping (CP) policy

3 Tutorial Outline
ICM nature and context
ICM views and examples
ICM process decision table: guidance and examples for using the ICM
ICM and competitive prototyping
Summary, conclusions, and references

4 ICM Nature and Origins
Integrates hardware, software, and human factors elements of systems engineering
– Concurrent exploration of needs and opportunities
– Concurrent engineering of hardware, software, and human aspects
– Concurrency stabilized via anchor point milestones
Developed in response to DoD-related issues
– Clarify "spiral development" usage in DoD Instruction 5000.2: initial phased version (2005)
– Explain Future Combat Systems' system-of-systems spiral usage to GAO: underlying process principles (2006)
– Provide framework for human-systems integration: National Research Council report (2007)
Integrates strengths of current process models, but not their weaknesses

5 ICM Integrates Strengths of Current Process Models, But Not Their Weaknesses
V-Model: emphasis on early verification and validation
– But not its ease of sequential, single-increment interpretation
Spiral Model: risk-driven activity prioritization
– But not its lack of well-defined in-process milestones
RUP and MBASE: concurrent engineering stabilized by anchor point milestones
– But not their software-only orientation
Lean Development: emphasis on value-adding activities
– But not its repeatable-manufacturing orientation
Agile Methods: adaptability to unexpected change
– But not their software-only orientation and lack of scalability

6 The ICM: What It Is and Isn't
Risk-driven framework for tailoring system processes
– Not a one-size-fits-all process
– Focused on future process challenges
Integrates the strengths of phased and risk-driven spiral process models
Synthesizes principles critical to successful system development
– Commitment and accountability of system sponsors
– Success-critical stakeholder satisficing
– Incremental growth of system definition and stakeholder commitment
– Concurrent engineering
– Iterative development cycles
– Risk-based activity levels and evidence-based milestones
Principles trump diagrams: these principles were used by 60-80% of CrossTalk Top-5 projects, 2002-2005

7 Context: Current and Future DoD S&SE Challenges
Multi-owner, multi-mission systems of systems
Emergent requirements; rapid pace of change
Always-on, never-fail systems
Need to turn within adversaries' OODA loop (observe, orient, decide, act)
– Within asymmetric adversary constraints

8 Asymmetric Conflict and OODA Loops
Observe: new/updated objectives, constraints, alternatives
Orient: with respect to stakeholders' priorities, feasibility, risks
Decide: on new capabilities, architecture upgrades, plans
Act: on plans, specifications
Adversary: picks time and place; little to lose; lightweight, simple systems and processes; can reuse anything
Defender: must be ready for anything; much to lose; heavier, more complex systems and processes; reuse requires trust
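The cycle above can be sketched as a simple pipeline. The four functions below are placeholders that mirror the slide's four steps (their comments paraphrase the slide's wording); they are an illustration of the loop's shape, not a real command-and-control system.

```python
# Minimal sketch of one OODA cycle; all functions are assumed placeholders.

def observe(situation):
    # Observe new/updated objectives, constraints, alternatives
    return {"objectives": situation}

def orient(observations):
    # Orient with respect to stakeholders' priorities, feasibility, risks
    return {"priorities": observations["objectives"]}

def decide(orientation):
    # Decide on new capabilities, architecture upgrades, plans
    return {"plan": orientation["priorities"]}

def act(decision):
    # Act on plans, specifications
    return f"executing {decision['plan']}"

def ooda_cycle(situation):
    return act(decide(orient(observe(situation))))

print(ooda_cycle("counter new threat"))  # executing counter new threat
```

The ICM point in the slides that follow is that the defender's time around this loop must be shorter than the adversary's.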

9 Average Change Processing Time: Two Complex Systems of Systems
[chart: average workdays to process changes]
Incompatible with turning within an adversary's OODA loop

10 Lifecycle Evolution Motivators as Identified by CSSE Sponsors and Affiliates
Current lifecycle models and associated practices
– Based on outdated assumptions
– Don't always address today's and tomorrow's system development challenges, especially as systems become more complex
– Examples of outdated assumptions and shortfalls in guidance documents are included in the backup charts
Need better integration of
– Systems engineering
– Software engineering
– Human factors engineering
– Specialty engineering (e.g., information assurance, safety)
– Engineering-management interactions (feasibility of plans, budgets, schedules, risk management methods)
Need to better identify and manage critical issues and risks up front (Young memo)

11 System/Software Architecture Mismatches (Maier, 2006)
System hierarchy:
– Part-of relationships; no shared parts
– Function-centric; single data dictionary
– Interface dataflows
– Static functional-physical allocation
Software hierarchy:
– Uses relationships; layered multi-access
– Data-centric; class-object data relations
– Interface protocols; concurrency challenges
– Dynamic functional-physical migration

12 Examples of Architecture Mismatches
Fractionated, incompatible sensor data management
[diagram: Sensor 1 through Sensor n, each with its own SDMS]
"Touch football" interface-definition earned value
– Full earned value taken for defining interface dataflows
– No earned value left for defining interface dynamics: joining/leaving the network, publish-subscribe, interrupt handling, security protocols, exception handling, mode transitions
– Result: all-green EVMS turns red in integration

13 Tutorial Outline
ICM nature and context
ICM views and examples
ICM process decision table: guidance and examples for using the ICM
ICM and competitive prototyping
Summary, conclusions, and references

14 The Incremental Commitment Life Cycle Process: Overview
[chart] Stage I: Definition; Stage II: Development and Operations
Anchor point milestones synchronize and stabilize concurrency via FEDs
Risk patterns determine the life cycle process

15 ICM HSI Levels of Activity for Complex Systems
[chart]

16 Anchor Point Feasibility Evidence Description
Evidence provided by the developer and validated by independent experts that, if the system is built to the specified architecture, it will:
– Satisfy the requirements: capability, interfaces, level of service, and evolution
– Support the operational concept
– Be buildable within the budgets and schedules in the plan
– Generate a viable return on investment
– Generate satisfactory outcomes for all of the success-critical stakeholders
All major risks resolved or covered by risk management plans
Serves as the basis for stakeholders' commitment to proceed
Can be used to strengthen current schedule- or event-based reviews
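The FED criteria above behave as a gate: commitment proceeds only when every criterion has supporting evidence and residual risks are covered. The sketch below is an illustrative encoding of that rule; the criterion strings and function name are this transcript's paraphrase, not USC-CSSE tooling.

```python
# Hypothetical sketch: a FED review supports commitment only when every
# criterion has evidence and all major remaining risks are covered by plans.
FED_CRITERIA = [
    "satisfies requirements (capability, interfaces, service levels, evolution)",
    "supports the operational concept",
    "buildable within budgets and schedules in the plan",
    "generates a viable return on investment",
    "satisfactory outcomes for all success-critical stakeholders",
]

def fed_supports_commitment(evidence, risks_covered):
    """True only if every criterion is evidenced and major risks are covered."""
    shortfalls = [c for c in FED_CRITERIA if not evidence.get(c)]
    return not shortfalls and risks_covered

evidence = {c: True for c in FED_CRITERIA}
print(fed_supports_commitment(evidence, risks_covered=True))   # True
evidence["supports the operational concept"] = False
print(fed_supports_commitment(evidence, risks_covered=True))   # False
```

In ICM terms, any `shortfalls` found here are exactly the evidence shortfalls that get treated as risks at the milestone review.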

17 ICM Anchor Point Milestone Content (1) (risk-driven level of detail for each element)
Definition of Operational Concept
– FCR/MS-A package: system shared vision update; top-level system objectives and scope (system boundary; environment parameters and assumptions); top-level operational concepts (production, deployment, operations and sustainment scenarios and parameters; organizational life-cycle responsibilities of stakeholders)
– DCR/MS-B package: elaboration of system objectives and scope by increment; elaboration of operational concept by increment, including all mission-critical operational scenarios; generally decreasing detail in later increments
System Prototype(s)
– FCR/MS-A package: exercise key usage scenarios; resolve critical risks (e.g., quality attribute levels, technology maturity levels)
– DCR/MS-B package: exercise a range of usage scenarios; resolve major outstanding risks
Definition of System Requirements
– FCR/MS-A package: top-level functions, interfaces, quality attribute levels, including growth vectors and priorities; project and product constraints; stakeholders' concurrence on essentials
– DCR/MS-B package: elaboration of functions, interfaces, quality attributes, and constraints by increment, including all mission-critical off-nominal requirements; generally decreasing detail in later increments; stakeholders' concurrence on their priority concerns

18 ICM Anchor Point Milestone Content (2) (risk-driven level of detail for each element)
Definition of System Architecture
– FCR/MS-A package: top-level definition of at least one feasible architecture (physical and logical elements and relationships; choices of Non-Developmental Items (NDI)); identification of infeasible architecture options
– DCR/MS-B package: choice of architecture and elaboration by increment and component (physical and logical components, connectors, configurations, constraints; NDI choices; domain-architecture and architectural style choices); architecture evolution parameters
Definition of Life-Cycle Plan
– FCR/MS-A package: identification of life-cycle stakeholders (users, customers, developers, testers, sustainers, interoperators, general public, others); identification of life-cycle process model (top-level phases, increments); top-level WWWWWHH* by phase and function (production, deployment, operations, sustainment)
– DCR/MS-B package: elaboration of WWWWWHH* for Initial Operational Capability (IOC) by phase and function; partial elaboration and identification of key TBDs for later increments
*WWWWWHH: Why, What, When, Who, Where, How, How Much

19 ICM Anchor Point Milestone Content (3) (risk-driven level of detail for each element)
Feasibility Evidence Description (FED)
– FCR/MS-A package: evidence of consistency and feasibility among the elements above (via physical and logical modeling, testbeds, prototyping, simulation, instrumentation, analysis, etc.; mission cost-effectiveness analysis for requirements and feasible architectures); identification of evidence shortfalls and risks; stakeholders' concurrence on essentials
– DCR/MS-B package: evidence of consistency and feasibility among the elements above; identification of evidence shortfalls and risks; all major risks resolved or covered by a risk management plan; stakeholders' concurrence on their priority concerns and commitment to development

20 Incremental Commitment in Gambling
Total commitment: roulette
– Put your chips on a number (e.g., a value of a key performance parameter)
– Wait and see if you win or lose
Incremental commitment: poker, blackjack
– Put some chips in
– See your cards, and some of others' cards
– Decide whether, and how much, to commit to proceed

21 Scalable Remotely Controlled Operations
[image]

22 Total vs. Incremental Commitment: 4:1 RPV
Total commitment
– Agent technology demo and PR: can do 4:1 for $1B
– Winning bidder: $800M; PDR in 120 days; 4:1 capability in 40 months
– PDR: many outstanding risks, undefined interfaces
– $800M, 40 months: "halfway" through integration and test
– 1:1 IOC after $3B, 80 months
CP-based incremental commitment [number of competing teams]
– $25M, 6 months to VCR [4]: may beat 1:2 with agent technology, but not 4:1
– $75M, 8 months to ACR [3]: agent technology may do 1:1; some risks
– $225M, 10 months to DCR [2]: validated architecture, high-risk elements
– $675M, 18 months to IOC [1]: viable 1:1 capability
– 1:1 IOC after $1B, 42 months
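The CP-based bottom line follows directly from the phase figures above. A quick check (milestone names, costs, durations, and team counts taken from the slide):

```python
# Incremental-commitment phases from the 4:1 RPV example:
# (milestone, cost in $M, duration in months, competing teams entering it)
phases = [
    ("VCR", 25, 6, 4),
    ("ACR", 75, 8, 3),
    ("DCR", 225, 10, 2),
    ("IOC", 675, 18, 1),
]

total_cost = sum(cost for _, cost, _, _ in phases)
total_months = sum(months for _, _, months, _ in phases)
print(f"${total_cost}M over {total_months} months")  # $1000M over 42 months
```

That is the slide's $1B, 42-month outcome, versus $3B and 80 months on the total-commitment path.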

23 The Incremental Commitment Life Cycle Process: Overview
[chart] Stage I: Definition; Stage II: Development and Operations
Concurrently engineer the operational concept, requirements, architecture, plans, and prototypes
Concurrently engineer Increment N (operations), N+1 (development), and N+2 (architecture)

24 ICM Risk-Driven Scalable Development
[chart: short, stabilized development of Increment N works to the Increment N baseline, then Increment N transition/O&M; short development increments handle rapid change, stable development increments provide high assurance, and foreseeable change is planned for]

25 ICM Risk-Driven Scalable Development: Expanded View
[chart: agile rebaselining for future increments absorbs unforeseeable change (adapt) and produces future increment baselines; short, stabilized development of Increment N works to the Increment N baseline, passing deferrals (artifacts, concerns) back to rebaselining; continuous verification and validation (V&V) of Increment N uses current V&V resources, with future V&V resources reserved for later increments; Increment N then enters transition/operations and maintenance; short development increments handle rapid change, stable development increments provide high assurance, and foreseeable change is planned for]

26 Agile Change Processing and Rebaselining
[diagram] Change proposers propose changes
The agile future-increment rebaselining team assesses changes and proposes handling: formulates and analyzes options in the context of other changes; negotiates change dispositions (discuss, revise, defer, or drop); recommends no action with rationale, handling in the current increment, or deferral to future increments
The stabilized Increment-N development team accepts changes and handles them in the current rebaseline, and may defer some Increment-N capabilities
Future increment managers discuss and resolve deferrals to future increments, rebaseline future-increment Foundations packages, and prepare for rebaselined future-increment development
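The flow above amounts to a per-change triage decision. The sketch below is an assumed simplification for illustration: the inputs and the capacity rule are not part of the ICM definition, but the three outcomes mirror the slide's recommendations.

```python
# Illustrative triage of one proposed change (simplified from the slide's flow).
def triage_change(criticality, still_valuable_later, current_increment_capacity):
    # High-criticality changes go into the current increment if capacity remains.
    if criticality == "high" and current_increment_capacity > 0:
        return "recommend handling in current increment"
    # Otherwise, changes that stay valuable are deferred via rebaselining.
    if still_valuable_later:
        return "recommend deferral to a future-increment rebaseline"
    # Remaining changes are dropped, with the rationale recorded.
    return "recommend no action, provide rationale"

print(triage_change("high", True, current_increment_capacity=2))
print(triage_change("low", True, current_increment_capacity=0))
print(triage_change("low", False, current_increment_capacity=0))
```

The key ICM property is that triage happens off the critical path: the stabilized Increment-N team only sees changes already accepted for the current increment.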

27 Spiral View of Incremental Commitment Model
[chart] Stakeholder commitment review points: opportunities to proceed, skip phases, backtrack, or terminate
1. Exploration Commitment Review
2. Valuation Commitment Review
3. Architecture Commitment Review
4. Development Commitment Review
5. Operations 1 and Development 2 Commitment Review
6. Operations 2 and Development 3 Commitment Review
Phases: Exploration, Valuation, Architecting, Development, Operation 1, Operation 2
Cumulative level of understanding, cost, time, product, and process detail (risk-driven); concurrent engineering of products and processes

28 Use of Key Process Principles: Annual CrossTalk Top-5 Projects
Year          | Concurrent Engineering | Risk-Driven | Evolutionary Growth
2002          | 4                      | 3           | 3
2003          | 5                      | 4           | 3
2004          | 3                      | 3           | 4
2005          | 4                      | 4           | 5
Total (of 20) | 16                     | 14          | 15

29 Example ICM Commercial Application: Symbiq Medical Infusion Pump
Winner of the 2006 HFES Best New Design Award
Described in the NRC HSI report, Chapter 5

30 Symbiq IV Pump ICM Process - I
Exploration phase
– Stakeholder needs interviews, field observations
– Initial user interface prototypes
– Competitive analysis, system scoping
– Commitment to proceed
Valuation phase
– Feature analysis and prioritization
– Display vendor option prototyping and analysis
– Top-level life cycle plan, business case analysis
– Safety and business risk assessment
– Commitment to proceed while addressing risks

31 Symbiq IV Pump ICM Process - II
Foundations phase
– Modularity of pumping channels
– Safety feature and alarms prototyping and iteration
– Programmable therapy types, touchscreen analysis
– Failure modes and effects analyses (FMEAs)
– Prototype usage in a teaching hospital
– Commitment to proceed into development
Development phase
– Extensive usability criteria and testing
– Iterated FMEAs and safety analyses
– Patient-simulator testing; adaptation to concerns
– Commitment to production and business plans

32 ICM Summary
Current processes are not well matched to future challenges
– Emergent, rapidly changing requirements
– High assurance of scalable performance and qualities
The Incremental Commitment Model addresses these challenges
– Assurance via evidence-based milestone commitment reviews and stabilized incremental builds with concurrent V&V; evidence shortfalls treated as risks
– Adaptability via a concurrent agile team handling change traffic and providing evidence-based rebaselining of next-increment specifications and plans
– Use of critical success factor principles: stakeholder satisficing, incremental growth, concurrent engineering, iterative development, risk-based activities and milestones
Major implications for funding, contracting, and career paths

33 Implications for Funding, Contracting, Career Paths
Incremental vs. total funding
– Often with evidence-based competitive downselect
No one-size-fits-all contracting
– Separate instruments for build-to-spec, agile rebaselining, and V&V teams, with funding and award fees for collaboration and risk management
Compatible regulations, specifications, and standards
Compatible acquisition corps education and training
– Generally, schedule/cost/quality as the independent variable and the prioritized feature set as the dependent variable
Multiple career paths
– For people good at build-to-spec, agile rebaselining, or V&V
– For people good at all three: future program managers and chief engineers

34 Tutorial Outline
ICM nature and context
ICM views and examples
ICM process decision table: guidance and examples for using the ICM
ICM and competitive prototyping
Summary, conclusions, and references

35 The Incremental Commitment Life Cycle Process: Overview
[chart] Stage I: Definition; Stage II: Development and Operations
Anchor point milestones synchronize and stabilize concurrency via FEDs
Risk patterns determine the life cycle process

36 Different Risk Patterns Yield Different Processes
[chart]

37 The ICM as Risk-Driven Process Generator
Stage I of the ICM has 3 decision nodes with 4 options per node
– Culminating with incremental development in Stage II
– Some options involve go-backs
– Results in many possible process paths
Can use ICM risk patterns to generate frequently used processes
– With confidence that they fit the situation
Can generally determine this in the Exploration phase
– Develop as the proposed plan, with risk-based evidence at the VCR milestone
– Adjustable in later phases

38 The ICM Process Decision Table: Key Decision Inputs
Product and project size and complexity
Requirements volatility
Mission criticality
Nature of Non-Developmental Item (NDI)* support
– Commercial, open-source, reused components
Organizational and personnel capability
*NDI definition [DFARS]: (a) any product that is available in the commercial marketplace; (b) any previously developed product in use by a U.S. agency (federal, state, or local) or a foreign government that has a mutual defense agreement with the U.S.; (c) any product described in (a) or (b) that requires only modifications to meet requirements; (d) any product that is being produced, but not yet in the commercial marketplace, that satisfies the above criteria.

39 The ICM Process Decision Table: Key Decision Outputs
Key Stage I activities: incremental definition
Key Stage II activities: incremental development and operations
Suggested calendar time per build and per deliverable increment

40 Common Risk-Driven Special Cases of the ICM
(Each case lists: example; size/complexity; change rate (%/month); criticality; NDI support; organization and personnel capability; key Stage I activities (incremental definition); key Stage II activities (incremental development, operations); time per build; time per increment)
1. Use NDI: small accounting system; complete NDI support; Stage I: acquire NDI; Stage II: use NDI
2. Agile: e-services; low; 1-30; low-medium; good, in place; agile-ready, medium-high; Stage I: skip Valuation and Architecting phases; Stage II: Scrum plus agile methods of choice; <= 1 day per build; 2-6 weeks per increment
3. Architected agile: business data processing; medium; 1-10; medium-high; good, most in place; agile-ready, medium-high; Stage I: combine Valuation and Architecting phases, complete NDI preparation; Stage II: architecture-based Scrum of Scrums; 2-4 weeks per build; 2-6 months per increment
4. Formal methods: security kernel or safety-critical LSI chip; low; 0.3-1; extra high; no NDI support; strong formal methods experience; Stage I: precise formal specification; Stage II: formally-based programming language, formal verification; 1-5 days per build; 1-4 weeks per increment
5. HW component with embedded SW: multi-sensor control device; low; 0.3-1; medium-very high; good, in place; experienced, medium-high; Stage I: concurrent HW/SW engineering, CDR-level ICM DCR; Stage II: IOC development, LRIP, FRP, concurrent Version N+1 engineering; SW: 1-5 days per build; market-driven increments
6. Indivisible IOC: complete vehicle platform; medium-high; 0.3-1; high-very high; some NDI in place; experienced, medium-high; Stage I: determine likely minimum IOC and conservative cost, add deferrable SW features as risk reserve; Stage II: drop deferrable features to meet the conservative cost, strong award fee for features not dropped; SW: 2-6 weeks per build; platform: 6-18 months per increment
7. NDI-intensive: supply chain management; medium-high; 0.3-3; medium-very high; NDI-driven architecture; NDI-experienced, medium-high; Stage I: thorough NDI-suite life cycle cost-benefit analysis, selection, concurrent requirements/architecture definition; Stage II: proactive NDI evolution influencing, NDI upgrade synchronization; SW: 1-4 weeks per build; system: 6-18 months per increment
8. Hybrid agile/plan-driven system: C4ISR; medium-very high; mixed parts, 1-10; mixed parts, medium-very high; mixed parts; mixed parts; Stage I: full ICM, encapsulated agile in high-change, low-to-medium criticality parts (often HMI, external interfaces); Stage II: full ICM, three-team incremental development, concurrent V&V, next-increment rebaselining; 1-2 months per build; 9-18 months per increment
9. Multi-owner system of systems: net-centric military operations; very high; mixed parts, 1-10; very high; many NDIs, some in place; related experience, medium-high; Stage I: full ICM, extensive multi-owner team building, negotiation; Stage II: full ICM, large ongoing system/software engineering effort; 2-4 months per build; 18-24 months per increment
10. Family of systems: medical device product line; medium-very high; 1-3; medium-very high; some NDI in place; related experience, medium-high; Stage I: full ICM, full stakeholder participation in product line scoping, strong business case; Stage II: full ICM, extra resources for the first system, version control, multi-stakeholder support; 1-2 months per build; 9-18 months per increment
C4ISR: Command, Control, Computing, Communications, Intelligence, Surveillance, Reconnaissance. CDR: Critical Design Review. DCR: Development Commitment Review. FRP: Full-Rate Production. HMI: Human-Machine Interface. HW: Hardware. IOC: Initial Operational Capability. LRIP: Low-Rate Initial Production. NDI: Non-Developmental Item. SW: Software.
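A decision table like this can be read as a lookup from project characteristics to a recommended special case. The toy selector below encodes a few of the rows with simplified thresholds; the string encodings for size, criticality, and NDI support are assumptions for illustration and do not capture the full table.

```python
# Toy selector over a few ICM special cases; inputs use simplified, assumed
# encodings (size/criticality as strings, change rate in %/month).
def recommend_case(size, change_rate, criticality, ndi_support):
    if ndi_support == "complete":
        return "Case 1: Use NDI"
    if size == "low" and criticality in ("low", "medium") and change_rate >= 1:
        return "Case 2: Agile"
    if size == "medium" and 1 <= change_rate <= 10:
        return "Case 3: Architected Agile"
    if criticality == "extra high" and change_rate <= 1:
        return "Case 4: Formal Methods"
    return "Cases 5-10: consult the full decision table"

print(recommend_case("low", 20, "low", "good"))          # Case 2: Agile
print(recommend_case("low", 0.5, "extra high", "none"))  # Case 4: Formal Methods
```

As the slides emphasize, the real table's outputs also include the Stage I/Stage II activities and build/increment cadences, and the choice is revisitable at later anchor point milestones.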

41 Case 1: Use NDI
Exploration phase identifies NDI opportunities
NDI risk/opportunity analysis indicates risks are acceptable
– Product growth envelope fits within NDI capability
– Compatible NDI and product evolution paths
– Acceptable NDI volatility (some open-source components are highly volatile)
– Acceptable usability, dependability, interoperability
– NDI available or affordable
Example: small accounting system
Size/complexity: low
Anticipated change rate (% per month): low
Criticality: low
NDI support: complete
Organization and personnel capability: NDI-experienced
Key Stage I activities: acquire NDI
Key Stage II activities: use NDI
Time per build: driven by time to initialize/tailor NDI
Time per increment: driven by NDI upgrades

42 Case 2: Pure Agile Methods
Exploration phase determines
– Low product and project size and complexity
– Fixing increment defects in the next increment is acceptable
– Existing hardware and NDI support of the growth envelope
– Sufficient agile-capable personnel
– Need to accommodate rapid change, emergent requirements, early user capability
Example: e-services
Size/complexity: low
Anticipated change rate (% per month): 1-30%
Criticality: low to medium
NDI support: good; in place
Organization and personnel capability: agile-ready, medium to high capability
Key Stage I activities: skip Valuation and Architecting phases
Key Stage II activities: Scrum plus agile methods of choice
Time per build: daily
Time per increment: 2-6 weeks

43 Case 3: Architected Agile
Exploration phase determines
– Need to accommodate fairly rapid change, emergent requirements, early user capability
– Low risk of scalability up to 100 people
– NDI support of the growth envelope
– Nucleus of highly agile-capable personnel
– Moderate to high loss due to increment defects
Example: business data processing
Size/complexity: medium
Anticipated change rate (% per month): 1-10%
Criticality: medium to high
NDI support: good, most in place
Organization and personnel capability: agile-ready, medium to high capability
Key Stage I activities: combined Valuation and Architecting phase; complete NDI preparation
Key Stage II activities: architecture-based Scrum of Scrums
Time per build: 2-4 weeks
Time per increment: 2-6 months

44 Case 4: Formal Methods
Biggest risks: software/hardware does not accurately implement required algorithm precision, security, safety mechanisms, or critical timing
Example: security kernel or safety-critical LSI chip
Size/complexity: low
Anticipated change rate (% per month): 0.3%
Criticality: extra high
NDI support: none
Organization and personnel capability: strong formal methods experience
Key Stage I activities: precise formal specification
Key Stage II activities: formally-based programming language; formal verification
Time per build: 1-5 days
Time per increment: 1-4 weeks

45 Case 5: Hardware Component with Embedded Software
Biggest risks: device recall, lawsuits, production line rework, hardware-software integration
– DCR carried to Critical Design Review level
– Concurrent hardware-software design
Criticality makes pure agile too risky
– Continuous hardware-software integration, initially with simulated hardware
Low risk of overrun
– Low complexity, stable requirements and NDI
– Little need for risk reserve
– Likely single-supplier software

46 Case 5: Hardware Component with Embedded Software (continued)
Example: multi-sensor control device
Size/complexity: low
Anticipated change rate (% per month): 0.3-1%
Criticality: medium to very high
NDI support: good, in place
Organization and personnel capability: experienced; medium to high capability
Key Stage I activities: concurrent hardware and software engineering; CDR-level ICM DCR
Key Stage II activities: IOC development, LRIP, FRP, concurrent Version N+1 engineering
Time per build: 1-5 days (software)
Time per increment: market-driven

47 Case 6: Indivisible IOC
Biggest risk: complexity and NDI uncertainties cause cost-schedule overrun
– Similar strategies to Case 5 for criticality (CDR, concurrent hardware-software design, continuous integration)
– Add deferrable software features as risk reserve: adopt a conservative (90% sure) cost and schedule; drop software features to meet cost and schedule; strong award fee for features not dropped
– Likely multiple-supplier software makes longer (multi-week) builds more necessary

48 Case 6: Indivisible IOC (continued)
Example: complete vehicle platform
Size/complexity: medium to high
Anticipated change rate (% per month): 0.3-1%
Criticality: high to very high
NDI support: some in place
Organization and personnel capability: experienced, medium to high capability
Key Stage I activities: determine likely minimum IOC and conservative cost; add deferrable software features as risk reserve
Key Stage II activities: drop deferrable features to meet the conservative cost; strong award fee for features not dropped
Time per build: 2-6 weeks (software)
Time per increment: 6-18 months (platform)

49 Case 7: NDI-Intensive
Biggest risks: incompatible NDI; rapid change; business/mission criticality; low NDI assessment and integration experience; supply chain stakeholder incompatibilities
Example: supply chain management
Size/complexity: medium to high
Anticipated change rate (% per month): 0.3-3%
Criticality: medium to very high
NDI support: NDI-driven architecture
Organization and personnel capability: NDI-experienced; medium to high capability
Key Stage I activities: thorough NDI-suite life cycle cost-benefit analysis and selection; concurrent requirements and architecture definition
Key Stage II activities: proactive NDI evolution influencing; NDI upgrade synchronization
Time per build: 1-4 weeks (software)
Time per increment: 6-18 months (system)

50 University of Southern California Center for Systems and Software Engineering July 2008©USC-CSSE50 Case 8: Hybrid Agile/Plan-Driven System Biggest risks: large scale, high complexity, rapid change, mixed high/low criticality, partial NDI support, mixed personnel capability Example: C4ISR system Size/complexity: Medium to very high Anticipated change rate (% per month): Mixed parts; 1-10% Criticality: Mixed parts; medium to very high NDI support: Mixed parts Organization and personnel capability: Mixed parts Key Stage I activities: Full ICM; encapsulated agile in high-change, low-medium criticality parts (often HMI, external interfaces) Key Stage II activities: Full ICM, three-team incremental development, concurrent V&V, next-increment rebaselining Time/build: 1-2 months Time/increment: 9-18 months

51 University of Southern California Center for Systems and Software Engineering July 2008©USC-CSSE51 Case 9: Multi-Owner System of Systems Biggest risks: all those of Case 8 plus –Need to synchronize, integrate separately-managed, independently-evolving systems –Extremely large-scale; deep supplier hierarchies –Rapid adaptation to change extremely difficult Example: Net-centric military operations or global supply chain management Size/complexity: Very high Anticipated change rate (% per month): Mixed parts; 1-10% Criticality: Very high NDI support: Many NDIs; some in place Organization and personnel capability: Related experience, medium to high Key Stage I activities: Full ICM; extensive multi-owner teambuilding, negotiation Key Stage II activities: Full ICM; large ongoing system/software engineering effort Time/build: 2-4 months Time/increment: 18-24 months

52 University of Southern California Center for Systems and Software Engineering July 2008©USC-CSSE52 ICM Approach to Systems of Systems SoSs provide examples of how systems and software engineering can be better integrated when evolving existing systems to meet new needs Net-centricity and collaboration-intensiveness of SoSs have created more emphasis on integrating hardware, software, and human factors engineering Focus is on –Flexibility and adaptability –Use of creative approaches, experimentation, and tradeoffs –Consideration of non-optimal approaches that are satisfactory to key stakeholders Key focus for SoS engineering is to guide the evolution of constituent systems within SoS to –Perform cross-cutting capabilities –While supporting existing single-system stakeholders SoS process adaptations have much in common with the ICM –Supports evolution of constituent systems using the appropriate special case for each system –Guides the evolution of the SoS through long range increments that fit with the incremental evolution of the constituent systems –ICM fits well with the OSD SoS SE Guidebook description of SoSE core elements

53 University of Southern California Center for Systems and Software Engineering July 2008©USC-CSSE53 The ICM Life Cycle Process and the SoS SE Core Elements [Chart mapping the SoS SE core elements across Stage I (Definition) and Stage II (Development and Operations): Translating capability objectives; Understanding systems & relationships; Developing & evolving SoS architecture; Monitoring & assessing changes; Addressing requirements & solution options; Orchestrating upgrades to SoS; Assessing performance to capability objectives]

54 University of Southern California Center for Systems and Software Engineering July 2008©USC-CSSE54 Example: SoSE Synchronization Points

55 University of Southern California Center for Systems and Software Engineering July 2008©USC-CSSE55 Case 10: Family of Systems Biggest risks: all those of Case 8 plus –Need to synchronize, integrate separately-managed, independently-evolving systems –Extremely large-scale; deep supplier hierarchies –Rapid adaptation to change extremely difficult Example: Medical device product line Size/complexity: Medium to very high Anticipated change rate (% per month): 1-3% Criticality: Medium to very high NDI support: Some in place Organization and personnel capability: Related experience, medium to high capability Key Stage I activities: Full ICM; full stakeholder participation in product line scoping; strong business case Key Stage II activities: Full ICM; extra resources for first system, version control, multi-stakeholder support Time/build: 1-2 months Time/increment: 9-18 months

56 University of Southern California Center for Systems and Software Engineering July 2008©USC-CSSE56 Frequently Asked Question Q: Having all that ICM generality and then using the decision table to come back to a simple model seems like overkill. –If my risk patterns are stable, can’t I just use the special case indicated by the decision table? A: Yes, you can and should – as long as your risk patterns stay stable. But as you encounter change, the ICM helps you adapt to it. –And it helps you collaborate with other organizations that may use different special cases.

57 University of Southern California Center for Systems and Software Engineering July 2008©USC-CSSE57 Tutorial Outline ICM nature and context ICM views and examples ICM process decision table: Guidance and examples for using the ICM ICM and competitive prototyping –Motivation and context –Applying ICM principles to CP Summary, conclusions, and references

58 University of Southern California Center for Systems and Software Engineering July 2008©USC-CSSE58 Motivation and Context DoD is emphasizing CP for system acquisition –Young memo, September 2007 CP can produce significant benefits, but also has risks –Benefits discussed in RPV CP example –Examples of risks from experiences, workshops The risk-driven ICM can help address the risks –Primarily through its underlying principles

59 University of Southern California Center for Systems and Software Engineering July 2008©USC-CSSE59 Young Memo: Prototyping and Competition Discover issues before costly SDD phase –Producing detailed designs in SDD –Not solving myriad technical issues Services and Agencies to produce competitive prototypes through Milestone B –Reduce technical risk, validate designs and cost estimates, evaluate manufacturing processes, refine requirements Will reduce time to fielding –And enhance govt.-industry teambuilding, SysE skills, attractiveness to next generation of technologists Applies to all programs requiring USD(AT&L) approval –Should be extended to appropriate programs below ACAT I

60 University of Southern California Center for Systems and Software Engineering July 2008©USC-CSSE60 Example Risks Involved in CP Based on TRW, DARPA, SAIC experiences; workshops Seductiveness of sunny-day demos –Lack of coverage of rainy-day off-nominal scenarios –Lack of off-ramps for infeasible outcomes Underemphasis on quality factor tradeoffs –Scalability, performance, safety, security, adaptability Discontinuous support of developers, evaluators –Loss of key team members –Inadequate evaluation of competitors Underestimation of productization costs –Brooks factor of 9 for software –May be higher for hardware Underemphasis on non-prototype factors

61 University of Southern California Center for Systems and Software Engineering July 2008©USC-CSSE61 Milestone B Focus on Technology Maturity Misses Many OSD/AT&L Systemic Root Causes: 1. Technical process (35 instances); 2. Management process (31); 3. Acquisition practices (26); 4. Requirements process (25); 5. Competing priorities (23); 6. Lack of appropriate staff (23) - V&V, integration, modeling & simulation; 7. Ineffective organization (22); 8. Ineffective communication (21); 9. Program realism (21); 10. Contract structure (20). Some of these are root causes of technology immaturity. Can address these via evidence-based Milestone B exit criteria: Technology Development Strategy; Capability Development Document; evidence of affordability, KPP satisfaction, program achievability

62 University of Southern California Center for Systems and Software Engineering July 2008©USC-CSSE62 Applying ICM Principles and Practices to CP When, what, and how much to prototype? –Risk management principle: buying information to reduce risk Whom to involve in CP? –Satisficing principle: all success-critical stakeholders How to sequence CP? –Incremental growth, iteration principles How to plan for CP? –Concurrent engineering principle: more parallel effort What is needed at Milestone B besides prototypes? –Risk management principle: systemic analysis insights

63 University of Southern California Center for Systems and Software Engineering July 2008©USC-CSSE63 When, What, and How Much to Prototype? → Buying information to reduce risk When and what: Expected value of perfect information How much is enough: Simple statistical decision theory

64 University of Southern California Center for Systems and Software Engineering July 2008©USC-CSSE64 When and What to Prototype: Early RPV Example Bold approach: 0.5 probability of success, value VB_S = $100M; 0.5 probability of failure, value VB_F = -$20M Conservative approach: value VC = $20M Expected value with no information: EV_NI = max(EV_B, EV_C) = max(0.5($100M) + 0.5(-$20M), $20M) = max($50M - $10M, $20M) = $40M Expected value with perfect information: EV_PI = 0.5[max(VB_S, VC)] + 0.5[max(VB_F, VC)] = 0.5 * max($100M, $20M) + 0.5 * max(-$20M, $20M) = 0.5 * $100M + 0.5 * $20M = $60M Expected value of perfect information: EVPI = EV_PI - EV_NI = $20M Can spend up to $20M buying information to reduce risk
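The slide's expected-value arithmetic can be checked mechanically. A minimal sketch in Python, using the slide's own figures (the variable names are mine):

```python
# Expected value of perfect information (EVPI) for the RPV example.
# All dollar figures ($M) and probabilities come from the slide.
P_SUCCESS = 0.5          # probability the Bold approach succeeds
V_BOLD_SUCCESS = 100.0   # value if Bold succeeds (VB_S)
V_BOLD_FAILURE = -20.0   # value if Bold fails (VB_F)
V_CONSERVATIVE = 20.0    # value of the Conservative approach (VC)

# No information: commit up front to the approach with the better expected value.
ev_bold = P_SUCCESS * V_BOLD_SUCCESS + (1 - P_SUCCESS) * V_BOLD_FAILURE
ev_no_info = max(ev_bold, V_CONSERVATIVE)

# Perfect information: learn the outcome first, then pick the better approach.
ev_perfect_info = (P_SUCCESS * max(V_BOLD_SUCCESS, V_CONSERVATIVE)
                   + (1 - P_SUCCESS) * max(V_BOLD_FAILURE, V_CONSERVATIVE))

evpi = ev_perfect_info - ev_no_info
print(ev_no_info, ev_perfect_info, evpi)  # 40.0 60.0 20.0
```

The $20M EVPI is the ceiling on what a rational acquirer would spend on prototyping in this situation.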

65 University of Southern California Center for Systems and Software Engineering July 2008©USC-CSSE65 If Risk Exposure is Low, CP Has Less Value Risk Exposure RE = Prob(Loss) * Size(Loss) Value of CP (EVPI) would be very small if the Bold approach is less risky –Prob(Loss) = Prob(VB_F) is near zero rather than 0.5 –Size(Loss) = VB_F is near $20M rather than -$20M
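The sensitivity argument above can be made concrete by turning the EVPI calculation into a function of the two risk-exposure factors. An illustrative sketch reusing the RPV payoffs; the function and parameter names are mine:

```python
# EVPI as a function of the Bold approach's failure probability and
# failure value, holding the other RPV payoffs fixed.
def evpi(p_fail, v_fail, v_success=100.0, v_conservative=20.0):
    p_success = 1 - p_fail
    ev_bold = p_success * v_success + p_fail * v_fail
    ev_no_info = max(ev_bold, v_conservative)
    ev_perfect = (p_success * max(v_success, v_conservative)
                  + p_fail * max(v_fail, v_conservative))
    return ev_perfect - ev_no_info

print(evpi(0.5, -20.0))   # baseline RPV case: 20.0
print(evpi(0.05, -20.0))  # Prob(Loss) near zero: EVPI shrinks toward 0
print(evpi(0.5, 15.0))    # Size(Loss) small: EVPI also shrinks
```

Either way the risk exposure drops, the most you should pay for prototyping drops with it.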

66 University of Southern California Center for Systems and Software Engineering July 2008©USC-CSSE66 How Much Prototyping is Enough? → Value of imperfect information Larger CP investments reduce the probability of –False Negatives (FN): prototype fails, but approach would succeed –False Positives (FP): prototype succeeds, but approach would fail Can calculate EV(Prototype) from previous data plus P(FN), P(FP) Added CP decision criterion –The prototype can cost-effectively reduce the uncertainty

CP Cost   P(FP)   P(FN)    EV(CP)   EV(Info)   Net EV(CP)
0         --      --       $40M     0          0
$2M       0.3     0.2      $46M     $6M        $4M
$5M       0.2     0.1      $52M     $12M       $7M
$10M      0.15    0.075    $54M     $14M       $4M
$15M      0.1     0.05     $56M     $16M       $1M
$22M      0.0     0.0      $60M     $20M       -$2M
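The slide gives the table but not the formula behind the EV(CP) column. One standard decision-theory model reproduces every row: assume the acquirer simply follows the prototype's verdict, which is wrong with probability P(FN) when the Bold approach would actually succeed and with probability P(FP) when it would actually fail. This reconstruction (the model and names are assumptions, not stated on the slide) can be sketched as:

```python
# Reconstructing the slide's EV(CP) column from P(FN) and P(FP),
# assuming the acquirer follows the prototype's verdict.
P_S, V_S, V_F, V_C = 0.5, 100.0, -20.0, 20.0  # RPV payoffs ($M)

def ev_with_prototype(p_fp, p_fn):
    # Bold would succeed (prob P_S), but a false negative (p_fn)
    # diverts us to the Conservative approach.
    ev_if_succeeds = (1 - p_fn) * V_S + p_fn * V_C
    # Bold would fail, but a false positive (p_fp) lures us onto it.
    ev_if_fails = (1 - p_fp) * V_C + p_fp * V_F
    return P_S * ev_if_succeeds + (1 - P_S) * ev_if_fails

for cost, p_fp, p_fn in [(2, 0.3, 0.2), (5, 0.2, 0.1), (10, 0.15, 0.075),
                         (15, 0.1, 0.05), (22, 0.0, 0.0)]:
    ev = ev_with_prototype(p_fp, p_fn)
    # columns: CP cost, EV(CP), EV(Info) = EV(CP) - $40M, net = EV(Info) - cost
    print(cost, round(ev, 2), round(ev - 40, 2), round(ev - 40 - cost, 2))
```

Under this assumed model the $22M "perfect" prototype comes out with negative net value, matching the table's last row: perfect information is not worth buying if it costs more than the EVPI.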

67 University of Southern California Center for Systems and Software Engineering July 2008©USC-CSSE67 Summary: CP Pays Off When The basic CP value propositions are satisfied 1. There is significant risk exposure in making the wrong decision 2. The prototype can cost-effectively reduce the risk exposure There are net positive side effects 3. The CP process does not consume too much calendar time 4. The prototypes have added value for teambuilding or training 5. The prototypes can be used as part of the product

68 University of Southern California Center for Systems and Software Engineering July 2008©USC-CSSE68 Applying ICM Principles and Practices to CP When, what, and how much to prototype? –Risk management principle: buying information to reduce risk Whom to involve in CP? –Satisficing principle: all success-critical stakeholders How to sequence CP? –Incremental growth, iteration principles How to plan for CP? –Concurrent engineering principle: more parallel effort What is needed at Milestone B besides prototypes? –Risk management principle: systemic analysis insights

69 University of Southern California Center for Systems and Software Engineering July 2008©USC-CSSE69 Whom to Involve in CP? → Satisficing principle: All success-critical stakeholders Success-critical: high risk of neglecting their interests –Acquirers, Operators, Developers, Maintainers, Users, Interoperators, Testers, Others Risk-driven level of involvement –Interoperators: initially high-level; increasing detail Need to have CRACK stakeholder participants –Committed, Representative, Authorized, Collaborative, Knowledgeable

70 University of Southern California Center for Systems and Software Engineering July 2008©USC-CSSE70 How to Sequence CP? → Iterative cycles; incremental commitment principles [Chart (after Blanchard-Fabrycky, 1998): traditional degree of commitment vs. traditional degree of understanding over the life cycle, with incremental CP commitments overlaid]

71 University of Southern California Center for Systems and Software Engineering July 2008©USC-CSSE71 Actual CP Situation: Need to Conserve Momentum Need time to evaluate and rebaseline Eliminated competitors’ experience lost Need to keep competitors productive, compensated Need to capitalize on lost experience [Chart: degree of commitment and degree of understanding across alternating prototyping and evaluation rounds: Proto-1, Eval-1, Proto-2, Eval-2, Proto-3, Eval-3]

72 University of Southern California Center for Systems and Software Engineering July 2008©USC-CSSE72 Keeping Competitors Productive and Supported During Evaluations → Concurrent engineering principle Provide support for a core group within each competitor organization –Focused on supporting evaluation activities –Avoiding loss of tacit knowledge and momentum Key evaluation support activities might include –Supporting prototype exercises –Answering questions about critical success factors Important to keep evaluation and selection period as short as possible –Through extensive preparation activities (see next chart)

73 University of Southern California Center for Systems and Software Engineering July 2008©USC-CSSE73 Keeping Acquirers Productive and Supported During Prototyping Adjusting plans based on new information Preparing evaluation tools and testbeds –Criteria, scenarios, experts, stakeholders, detailed procedures Possibly assimilating downselected competitors –IV&V contracts as consolation prizes Identifying, involving success-critical stakeholders Reviewing interim progress Pursuing complementary acquisition initiatives –Operational concept definition, life cycle planning, external interface negotiation, mission cost-effectiveness analysis

74 University of Southern California Center for Systems and Software Engineering July 2008©USC-CSSE74 Applying ICM Principles and Practices to CP When, what, and how much to prototype? –Risk management principle: buying information to reduce risk Whom to involve in CP? –Satisficing principle: all success-critical stakeholders How to sequence CP? –Incremental growth, iteration principles How to plan for CP? –Concurrent engineering principle: more parallel effort What is needed at Milestone B besides prototypes? –Risk management principle: systemic analysis insights

75 University of Southern California Center for Systems and Software Engineering July 2008©USC-CSSE75 Later CP Rounds Need Increasing Focus on Complementary Practices → By all success-critical stakeholders Stakeholder roles, responsibilities, authority, accountability Capability priorities and sequencing of development increments Concurrent engineering of requirements, architecture, feasibility evidence Early preparation of development infrastructure (i.e., key parts of the architecture) Acquisition planning, contracting, management, staffing, test and evaluation

76 University of Southern California Center for Systems and Software Engineering July 2008©USC-CSSE76 When to Stop CP → Commitment and accountability principle: Off-ramps Inadequate technology base –Lack of evidence of scalability, security, accuracy, robustness, airworthiness, useful lifetime, … –Better to pursue as research, exploratory development Better alternative solutions emerge –Commercial, other government Key success-critical stakeholders decommit –Infrastructure providers, strategic partners, changed leadership Important to emphasize possibility of off-ramps…

77 University of Southern California Center for Systems and Software Engineering July 2008©USC-CSSE77 Acquiring Organization’s ICM-Based CP Plan Addresses issues discussed above –Risk-driven prototyping rounds, concurrent definition and development, continuity of support, stakeholder involvement, off-ramps Organized around key management questions –Objectives (why?): concept feasibility, best system solution –Milestones and Schedules (what? when?): Number and timing of competitive rounds; entry and exit criteria, including off-ramps –Responsibilities (who? where?): Success-critical stakeholder roles and responsibilities for activities and artifacts –Approach (how?): Management approach or evaluation guidelines, technical approach or evaluation methods, facilities, tools, and concurrent engineering –Resources (how much?): Necessary resources for acquirers, competitors, evaluators, other stakeholders across full range of prototyping and evaluation rounds –Assumptions (whereas?): Conditions for exercise of off-ramps, rebaselining of priorities and criteria Provides a stable framework for pursuing CP

78 University of Southern California Center for Systems and Software Engineering July 2008©USC-CSSE78 Tutorial Outline ICM nature and context ICM views and examples ICM process decision table: Guidance and examples for using the ICM ICM and competitive prototyping Conclusions, References, Acronyms –Copy of Young Memo

79 University of Southern California Center for Systems and Software Engineering July 2008©USC-CSSE79 ICM and CP Conclusions CP most effective in reducing technical risk –If project is low-risk, may not need CP May be worth it for teambuilding Other significant risks need resolution by Milestone B –Systemic Analysis DataBase (SADB) sources: management, acquisition, requirements, staffing, organizing, contracting CP requires significant, continuing preparation –Prototypes are just tip of iceberg –Need evaluation criteria, tools, testbeds, scenarios, staffing, procedures Need to sustain CP momentum across evaluation breaks –Useful competitor tasks to do; need funding support ICM provides effective framework for CP plan, execution –CP value propositions, milestone criteria, guiding principles CP will involve changes in cultures and institutions –Need continuous corporate assessment and improvement of CP-related principles, processes, and practices

80 University of Southern California Center for Systems and Software Engineering July 2008©USC-CSSE80 ICM Transition Paths Existing programs may benefit from some ICM principles and practices, but not others Problem programs may find some ICM practices helpful in recovering viability Primary opportunities for incremental adoption of ICM principles and practices –Supplementing traditional requirements and design reviews with development and review of feasibility evidence –Stabilized incremental development and concurrent architecture rebaselining –Using schedule as independent variable and prioritizing features to be delivered –Continuous verification and validation –Using the process decision table –Tailoring the Competitive Prototyping Plan

81 University of Southern California Center for Systems and Software Engineering July 2008©USC-CSSE81 References - I Beck, K., Extreme Programming Explained, Addison Wesley, 1999. Blanchard, B., and Fabrycky, W., Systems Engineering and Analysis, Prentice Hall, 1998 (3rd ed.). Boehm, B., “Some Future Trends and Implications for Systems and Software Engineering Processes,” Systems Engineering 9(1), pp. 1-19, 2006. Boehm, B., Brown, W., Basili, V., and Turner, R., “Spiral Acquisition of Software-Intensive Systems of Systems,” CrossTalk, Vol. 17, No. 5, pp. 4-9, 2004. Boehm, B., and Lane, J., "21st Century Processes for Acquiring 21st Century Software-Intensive Systems of Systems," CrossTalk, Vol. 19, No. 5, pp. 4-9, 2006. Boehm, B., and Lane, J., “Using the ICM to Integrate System Acquisition, Systems Engineering, and Software Engineering,” CrossTalk, October 2007, pp. 4-9. Boehm, B., and Lane, J., “A Process Decision Table for Integrated Systems and Software Engineering,” Proceedings, CSER 2008, April 2008. Boehm, B., “Future Challenges and Rewards for Software Engineers,” DoD Software Tech News, October 2007, pp. 6-12. Boehm, B., et al., Software Cost Estimation with COCOMO II, Prentice Hall, 2000. Boehm, B., Software Engineering Economics, Prentice Hall, 1981. Brooks, F., The Mythical Man-Month, Addison Wesley, 1995 (2nd ed.). Carlock, P., and Fenton, R., "System of Systems (SoS) Enterprise Systems for Information-Intensive Organizations," Systems Engineering, Vol. 4, No. 4, pp. 242-26, 2001. Carlock, P., and Lane, J., “System of Systems Enterprise Systems Engineering, the Enterprise Architecture Management Framework, and System of Systems Cost Estimation,” 21st International Forum on COCOMO and Systems/Software Cost Modeling, 2006. Checkland, P., Systems Thinking, Systems Practice, Wiley, 1980 (2nd ed., 1999).
Department of Defense (DoD), Defense Acquisition Guidebook, version 1.6, http://akss.dau.mil/dag/, 2006. Department of Defense (DoD), Instruction 5000.2, Operation of the Defense Acquisition System, May 2003. Department of Defense (DoD), Systems Engineering Plan Preparation Guide, USD(AT&L), 2004. Electronic Industries Alliance, EIA Standard 632: Processes for Engineering a System, 1999. Galorath, D., and Evans, M., Software Sizing, Estimation, and Risk Management, Auerbach, 2006.

82 University of Southern California Center for Systems and Software Engineering July 2008©USC-CSSE82 References - II Hall, E.T., Beyond Culture, Anchor Books/Doubleday, 1976. Highsmith, J., Adaptive Software Development, Dorset House, 2000. International Standards Organization, Information Technology Software Life Cycle Processes, ISO/IEC 12207, 1995. ISO, Systems Engineering – System Life Cycle Processes, ISO/IEC 15288, 2002. Jensen, R., “An Improved Macrolevel Software Development Resource Estimation Model,” Proceedings, ISPA 5, April 1983, pp. 88-92. Krygiel, A., Behind the Wizard’s Curtain, CCRP Publication Series, July 1999, p. 33. Lane, J., and Boehm, B., "System of Systems Cost Estimation: Analysis of Lead System Integrator Engineering Activities," Information Resources Management Journal, Vol. 20, No. 2, pp. 23-32, 2007. Lane, J., and Valerdi, R., “Synthesizing SoS Concepts for Use in Cost Estimation,” Proceedings of IEEE Systems, Man, and Cybernetics Conference, 2005. Lientz, B., and Swanson, E.B., Software Maintenance Management, Addison Wesley, 1980. Madachy, R., Boehm, B., and Lane, J., "Assessing Hybrid Incremental Processes for SISOS Development," USC CSSE Technical Report USC-CSSE-2006-623, 2006. Maier, M., “Architecting Principles for Systems-of-Systems,” Systems Engineering, Vol. 1, No. 4, pp. 267-284. Maier, M., “System and Software Architecture Reconciliation,” Systems Engineering 9(2), 2006, pp. 146-159. Northrop, L., et al., Ultra-Large-Scale Systems: The Software Challenge of the Future, Software Engineering Institute, 2006. Pew, R. W., and Mavor, A. S., Human-System Integration in the System Development Process: A New Look, National Academy Press, 2007. Putnam, L., “A General Empirical Solution to the Macro Software Sizing and Estimating Problem,” IEEE Trans. SW Engr., July 1978, pp. 345-361. Rechtin, E., Systems Architecting, Prentice Hall, 1991.
Schroeder, T., “Integrating Systems and Software Engineering: Observations in Practice,” OSD/USC Integrating Systems and Software Engineering Workshop, http://csse.usc.edu/events/2007/CIIForum/pages/program.html, October 2007.

83 University of Southern California Center for Systems and Software Engineering July 2008©USC-CSSE83 List of Acronyms B/L - Baselined; C4ISR - Command, Control, Computing, Communications, Intelligence, Surveillance, Reconnaissance; CD - Concept Development; CDR - Critical Design Review; COTS - Commercial Off-the-Shelf; CP - Competitive Prototyping; DCR - Development Commitment Review; DI - Development Increment; DoD - Department of Defense; ECR - Exploration Commitment Review; EVMS - Earned Value Management System; FCR - Foundations Commitment Review; FED - Feasibility Evidence Description; FMEA - Failure Modes and Effects Analysis; FRP - Full-Rate Production; GAO - Government Accountability Office; GUI - Graphical User Interface

84 University of Southern California Center for Systems and Software Engineering July 2008©USC-CSSE84 List of Acronyms (continued) HMI - Human-Machine Interface; HSI - Human-System Interface; HW - Hardware; ICM - Incremental Commitment Model; IOC - Initial Operational Capability; IRR - Inception Readiness Review; IS&SE - Integrating Systems and Software Engineering; LCO - Life Cycle Objectives; LRIP - Low-Rate Initial Production; MBASE - Model-Based Architecting and Software Engineering; NDI - Non-Developmental Item; NRC - National Research Council; OC - Operational Capability; OCR - Operations Commitment Review; OO&D - Observe, Orient and Decide; OODA - Observe, Orient, Decide, Act; O&M - Operations and Maintenance

85 University of Southern California Center for Systems and Software Engineering July 2008©USC-CSSE85 List of Acronyms (continued) PDR - Preliminary Design Review; PM - Program Manager; PR - Public Relations; PRR - Product Release Review; RUP - Rational Unified Process; SoS - System of Systems; SoSE - System of Systems Engineering; SSE - Systems and Software Engineering; SW - Software; SwE - Software Engineering; SysE - Systems Engineering; Sys Engr - Systems Engineer; S&SE - Systems and Software Engineering; USD(AT&L) - Under Secretary of Defense for Acquisition, Technology, and Logistics; VCR - Validation Commitment Review; V&V - Verification and Validation; WBS - Work Breakdown Structure; WMI - Warfighter-Machine Interface

86 University of Southern California Center for Systems and Software Engineering July 2008©USC-CSSE86 Competitive Prototyping Policy: John Young Memo

