Midterm 1 Content Q&A - COCOMO-Related Questions -- ICSM-Related Questions
Barry Boehm CS 510, Fall 2014
COCOMO-Related Questions
Fully explain the reuse model, with examples
- Why are some reuse decisions considered bad?
- Why is maintenance different from reuse? Why 20% per year?
- Should we use the reuse model for COTS or cloud services?
Fully explain the return on investment (ROI) model
- What is a good value for ROI?
- Should one always use 5 years for payoff?
How many significant digits to carry calculations?
What if we're told to use a nominal exponent of 1.10 rather than 1.0997?
How do you determine the size input (# of SLOC)?
How do you determine staff size from estimated effort?
- Is 0.28 + 0.2*(E - 0.91) the same as 0.30 + 0.2*(E - 1.01)?
- What if you estimate a non-integer staff size?
Why are some factors effort multipliers and others scale factors?
Why don't COCOMO estimates include Inception and Transition? Are there guidelines for estimating them?
COCOMO Reuse Model
A nonlinear estimation model to convert adapted (reused or modified) software into an equivalent size of new software:
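As published in the COCOMOII.2000 model definition (the example later in these charts uses the AAF > 50 branch; the automated-translation term is omitted here):

AAF = 0.4*DM + 0.3*CM + 0.3*IM
ESLOC = ASLOC * [AA + AAF*(1 + 0.02*SU*UNFM)] / 100, when AAF <= 50
ESLOC = ASLOC * [AA + AAF + SU*UNFM] / 100, when AAF > 50

The terms are defined on the next chart.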
COCOMO Reuse Model cont’d
- ASLOC: Adapted Source Lines of Code
- ESLOC: Equivalent Source Lines of Code
- AAF: Adaptation Adjustment Factor
- DM: Percent Design Modified. The percentage of the adapted software's design which is modified in order to adapt it to the new objectives and environment.
- CM: Percent Code Modified. The percentage of the adapted software's code which is modified in order to adapt it to the new objectives and environment.
- IM: Percent of Integration Required for Modified Software. The percentage of effort required to integrate the adapted software into an overall product and to test the resulting product, as compared to the normal amount of integration and test effort for software of comparable size.
- AA: Assessment and Assimilation. Effort needed to determine whether a fully-reused software module is appropriate to the application, and to integrate its description into the overall product description. See table.
- SU: Software Understanding. Effort increment as a percentage. Only used when code is modified (zero when DM = 0 and CM = 0). See table.
- UNFM: Unfamiliarity. The programmer's relative unfamiliarity with the software, applied multiplicatively to the software understanding effort increment (0-1).
Assessment and Assimilation Increment (AA)
For reference, the COCOMO II.2000 AA rating scale:

AA Increment | Level of AA Effort
0 | None
2 | Basic module search and documentation
4 | Some module Test and Evaluation (T&E), documentation
6 | Considerable module T&E, documentation
8 | Extensive module T&E, documentation
Software Understanding Increment (SU)
Rate the adapted code's structure, application clarity, and self-descriptiveness, and take the subjective average of the three categories (the COCOMO II.2000 scale runs from SU = 50 for very low ratings down to SU = 10 for very high). Do not use SU if the component is being used unmodified (DM = 0 and CM = 0).
Programmer Unfamiliarity (UNFM)
Only applies to modified software. The multiplier runs from 0.0 (completely familiar with the software) in steps of 0.2 up to 1.0 (completely unfamiliar).
Development vs. Reuse
Possibility of reusing a 40 KSLOC component whose reuse parameters are not a strong match:
- % design modified: DM = 40
- % code modified: CM = 50
- % integration redone: IM = 100
- Understanding penalty: SU = 50
- SW unfamiliarity: UNFM = 1.0
- Assessment and assimilation: AA = 5
Equivalent new lines of code = 40K * [(0.4*40 + 0.3*50 + 0.3*100)/100 + (5 + 50*1.0)/100] = 40K * (0.61 + 0.55) = 40K * 1.16 = 46.4 KSLOC
Since adapting the component costs more than the 40 KSLOC of new development it replaces, it is not a good decision to reuse.
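A minimal sketch of this calculation (hypothetical function name; it implements the two reuse-model branches given earlier and reproduces the example above):

```python
def equivalent_sloc(asloc, dm, cm, im, su, unfm, aa):
    """COCOMO II reuse model: equivalent-new SLOC for adapted code.

    dm, cm, im, su, aa are percentages; unfm ranges from 0.0 to 1.0.
    """
    aaf = 0.4 * dm + 0.3 * cm + 0.3 * im  # Adaptation Adjustment Factor
    if aaf <= 50:
        aam = (aa + aaf * (1 + 0.02 * su * unfm)) / 100
    else:
        aam = (aa + aaf + su * unfm) / 100
    return asloc * aam

# The 40 KSLOC example: AAF = 61 > 50, so ESLOC = 40K * (0.61 + 0.55)
print(equivalent_sloc(40_000, dm=40, cm=50, im=100, su=50, unfm=1.0, aa=5))
# -> 46400.0 equivalent new SLOC: costlier than building 40 KSLOC anew
```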
Other Reuse-Related Questions
Why is maintenance different from reuse?
- Different effort multipliers (e.g., RELY)
- Only parts of the code base are affected
Why 20% per year?
- Varies by domain: higher for user-intensive parts
Should we use the reuse model for COTS or cloud services?
- No: we don't know the size, and only parts are affected
COCOMO-Related Questions
How many significant digits to carry calculations? Generally 5.
Fully explain the ROI model
ROI = (Benefits - Costs) / Costs (error in class chart)
What is a good value for ROI?
- Better than alternate investments of the funds
- Better than the cost of borrowing development funds
Should one always use 5 years for payoff?
- Depends on marketplace dynamics
Software Process Improvement
UST currently at process maturity Level 2
- Planning & control, configuration management, quality assurance
Cost to achieve Level 3 (process group, training, product engineering):
- Process group: (2 yr) * (4 persons) * ($96K/yr) = $768K
- Training: (200 persons) * (3 weeks) * ($96K/yr / 52 weeks/yr) = $1108K
- Contingency = $124K; Total = $768K + $1108K + $124K = $2000K
Benefit: scale exponent reduced by .0156 (PMAT Low to Nominal: 0.01 * (6.24 - 4.68)), from 1.10 to 1.10 - .0156 = 1.0844 (generally OK to use 1.10)
- From 100^1.10 = 158.5 to 100^1.0844 = 147.5, or 7% less effort
- Annual savings = (200 persons) * ($96K/yr) * (.07) = $1344K
5-year ROI = [5 * $1344K - $2000K] / $2000K = 2.36
- Again, well worth the investment
- But should add costs of continuing process upkeep and training
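This calculation is small enough to check directly; a sketch (variable names are illustrative, figures from the slide):

```python
# Costs of reaching maturity Level 3, in $K
process_group = 2 * 4 * 96        # 2 yr * 4 persons * $96K/yr = 768
training = 200 * (3 / 52) * 96    # 200 persons * 3 weeks at $96K/yr ~ 1108
contingency = 124
cost = process_group + training + contingency        # ~ 2000 ($K)

# Benefit: the PMAT improvement trims the scale exponent by 0.0156
effort_ratio = 100 ** (1.10 - 0.0156) / 100 ** 1.10  # ~ 0.93, i.e. ~7% less effort
annual_savings = 200 * 96 * (1 - effort_ratio)       # ~ 1334 $K/yr

roi_5yr = (5 * annual_savings - cost) / cost
print(round(roi_5yr, 2))  # ~2.33; the slide's 2.36 uses the rounded 7% figure
```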
The Cost and Size Cone of Uncertainty
Improve sizing via analogy, bottom-up estimation, expert judgment, and further design.
Determining staff size from estimated effort
Use the schedule estimation model (next slide).
Is 0.28 + 0.2*(E - 0.91) the same as 0.30 + 0.2*(E - 1.01)?
- For E = 1.10, 0.28 + 0.2*(E - 0.91) = 0.318 = 0.30 + 0.2*(E - 1.01)
- Error in formula in EC-3 chart 8
For an estimated effort of 200 person-months:
- Schedule = 3.67 * (200)^0.318 = 19.8 months
- Average staff size = 200 PM / 19.8 Mo = 10.1 persons
What if you estimate a non-integer staff size?
- OK, since staff size is generally not constant
COCOMO II Schedule Model
Schedule (months) = C * (Effort)^(0.28 + 0.2*(E - 0.91)) * SCED% / 100
Where:
- Schedule is the calendar time in months from the requirements baseline to acceptance
- C is a constant derived from historical project data (currently C = 3.67 in COCOMOII.2000)
- Effort is the estimated person-months excluding the SCED effort multiplier
- E is the exponent in the effort equation
- SCED% is the compression/expansion percentage in the SCED cost driver
This is the COCOMOII.2000 calibration.
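A minimal sketch of this model and the staff-size derivation on the previous slide (function name is illustrative):

```python
def cocomo2_schedule_months(effort_pm, e_exponent, sced_pct=100, c=3.67):
    """COCOMO II.2000 schedule: C * PM^(0.28 + 0.2*(E - 0.91)) * SCED%/100."""
    f = 0.28 + 0.2 * (e_exponent - 0.91)
    return c * effort_pm ** f * sced_pct / 100

months = cocomo2_schedule_months(200, e_exponent=1.10)  # ~19.8 months
avg_staff = 200 / months                                # ~10.1 persons
print(round(months, 1), round(avg_staff, 1))
```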
Why are some factors effort multipliers and others scale factors?
Determined from calibration of COCOMO II parameters via regression analysis of 161 projects. For example, the Precedentedness scale factor overlaps the experience cost drivers APEX, LTEX, and PLEX, but all are statistically significant.
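For context, the COCOMO II effort equation shows the structural difference: the five scale factors enter the exponent, so their impact grows with project size, while the seventeen effort multipliers scale effort linearly at any size:

Effort (PM) = A * (Size)^E * (EM1 * EM2 * ... * EM17), where E = B + 0.01 * (SF1 + SF2 + ... + SF5)
(COCOMOII.2000 calibration: A = 2.94, B = 0.91, Size in KSLOC)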
Table 2.50 COCOMO II Scale Factors & Multipliers
(Table 2.50 lists the five scale factors and seventeen effort multipliers with their rating-level values; the table itself is not reproduced here.)
Why don’t COCOMO estimates include Inception and Transition?
- Percentages vary widely
- Calibration data unavailable
ICSM-Related Questions
Mostly related to diagrams in the chapters:
- Master Net PC, PD, PP, S model clashes
- Dual cones of uncertainty
- Three-team evolutionary development
- Commitment reviews preceding phases
- Nature of incremental commitments: TRW-SPS
- Principle 4 failure and success differences
- Sweet spot "how much is enough?" diagrams
- Nature of Kanban processes
ICSM Principles Counterexample: Bank of America Master Net
Doubtfulness: The Cones of Uncertainty – Need incremental vs. one-shot development
Uncertainties in competition, technology, organizations, mission priorities
Risk-Driven Scalable Spiral Model: Increment View
For each level of systems-of-interest:
(Diagram: three concurrent streams. An agile rebaselining team absorbs rapid and unforeseeable change by adapting future increment baselines and negotiating deferrals; a stabilized team performs short development increments of Increment N against foreseeable, planned change, feeding Increment N transition, operations, and maintenance; and current and future V&V resources perform continuous verification and validation (V&V) of Increment N and its high-assurance artifacts.)
Agile Change Processing and Rebaselining
(Diagram: change proposers submit proposed changes to an agile future-increment rebaselining team, which assesses the changes and proposes handling: accept changes and handle them in the current increment or rebaseline, defer some Increment-N capabilities, recommend deferrals to future increments, or recommend no action with rationale. The team negotiates change dispositions with the stabilized Increment-N development team and the future-increment managers, formulates and analyzes options in the context of other changes, rebaselines the future-increment Foundations packages, and prepares for rebaselined development.)

To handle unpredictable, asynchronous change traffic, it is not possible to do a sequential observe, then orient, then decide process. Instead, it requires handling unforeseen change requests involving new technology or COTS opportunities; changing requirements and priorities; changing external interfaces; low-priority current-increment features being deferred to a later increment; and user requests based on experience with currently fielded increments (including defect fixes). In the context of the agile rebaselining team in Chart 10, this chart shows how the agile team interacts with the change proposers, the current-increment development team, and the managers of future increments to evaluate proposed changes and their interactions with each other, and to negotiate rebaselined Milestone B packages for the next increment. Again, there is no precise way to forecast the budget and schedule of this team's workload. Within an available budget and schedule, the agile team will perform a continuing "handle now; incorporate in next increment rebaseline; defer or drop" triage in the top "Assess Changes, Propose Handling" box. Surges in demand would have to be met by surges in needed expertise and funding support.
The Incremental Commitment Spiral Process: Phased View
Anchor point milestones synchronize and stabilize concurrency via FEDs (Feasibility Evidence Descriptions).

This slide shows how the ICSM spans the life cycle process from concept exploration to operations. Each phase culminates with an anchor point milestone review, at which there are 4 options based on the assessed risk of the proposed system; some options involve go-backs, so the options generate many possible process paths. The life cycle is divided into two stages: Stage I (Definition) has 3 decision nodes with 4 options per node, culminating with incremental development in Stage II (Development and Operations), which has an additional 2 decision nodes, again with 4 options per node. One can use ICSM risk patterns to generate frequently used processes with confidence that they fit the situation. Initial risk patterns can generally be determined in the Exploration phase; one then proceeds with development as a proposed plan with risk-based evidence at the VCR milestone, adjusting in later phases as necessary. For complex systems, a result of the Exploration phase would be the Prototyping and Competition Plan discussed above. Risks associated with the system drive the life cycle process: information about the risks (feasibility assessments) supports the decision to proceed, adjust scope or priorities, or cancel the program. Risk patterns determine the life cycle process.
TRW-SPS, Exploration Phase
TRW-SPS, Validation Phase
TRW-SPS, Foundations Phase
Problems Encountered without FED: 15-Month Architecture Rework Delay
(Figure: development cost vs. response time, 1-5 seconds. Meeting the original 1-second spec required a custom architecture with many cache processors, far above the original cost; after prototyping showed 4 seconds acceptable, the originally planned modified client-server architecture sufficed.)

In the early 1980s, a large government organization contracted with TRW to develop an ambitious information query and analysis system. The system would provide more than 1,000 users, spread across a large building complex, with powerful query and analysis capabilities for a large and dynamic database. TRW and the customer specified the system using a classic sequential-engineering waterfall development model. Based largely on user need surveys, an oversimplified high-level performance analysis, and a short deadline for getting the TBDs out of the requirements specification, they fixed into the contract a requirement for a system response time of less than one second. Subsequently, the software architects found that subsecond performance could only be provided via a highly customized design that attempted to anticipate query patterns and cache copies of data so that each user's likely data would be within one second's reach (a 1980s precursor of Google). The resulting hardware architecture had more than 25 super-midicomputers busy caching data according to algorithms whose actual performance defied easy analysis. The scope and complexity of the hardware-software architecture brought the estimated cost of the system to nearly $100 million, driven primarily by the requirement for a one-second response time. Faced with this unattractive prospect (far more than the customer's budget for the system), the customer and developer decided to develop and test a prototype of the system's user interface and representative capabilities. The results showed that a four-second response time would satisfy users 90 percent of the time. A four-second response time, with special handling for high-priority transactions, dropped development costs closer to $30 million. Thus, the premature specification of a 1-second response time neglected the risk of creating an overexpensive and time-consuming system development. Fortunately, in this case, the only loss was the wasted effort on the expensive-system architecture and a 15-month delay in delivery. More frequently, such rework is done only after the expensive full system is delivered and found still too slow and too expensive to operate.
CCPDS-R Evidence-Based Commitment
(Figure: CCPDS-R development life cycle, months 0-25, spanning Inception, Elaboration, and Construction. A competitive design phase with architectural prototypes, planning, and requirements analysis precedes contract award; architecture iterations run through SSR and IPDR to PDR, with the architecture baseline placed under change control (the LCO and LCA anchor points); release iterations continue through CDR, with early delivery of "alpha" capability to the user.)
CCPDS-R and 4 Principles
Stakeholder Value-Based Guidance
- Reinterpreted DOD-STD-2167A; users involved
- Extensive user, maintainer, and management interviews and prototypes
- Award fee flowdown to performers
Incremental Commitment and Accountability
- Stage I: incremental technology validation, prototyping, architecting
- Stage II: 3 major-user-organization increments
Concurrent Multidiscipline Engineering
- Small, expert, concurrent systems engineering team during Stage I
- Stage II: 75 parallel programmers working to validated interface specs; integration preceded programming
Evidence and Risk-Driven Decisions
- High-risk prototyping and distributed OS developed before PDR
- Performance validated via executing architectural skeleton
Master Net and 4 Principles
Stakeholder value-based guidance
- Overconcern with voice of the customer: 3.5 MSLOC of requirements
- No concern with maintainers, interoperators: Prime vs. IBM
Incremental commitment and accountability
- Total commitment to an infeasible budget and schedule
- No contract award fees or penalties for underruns/overruns
Concurrent multidiscipline engineering
- No prioritization of features for incremental development
- No prototyping of operational scenarios and usage
Evidence and risk-driven decisions
- No evaluation of Premier Systems scalability, performance
- No evidence of ability to satisfy budgets and schedules
How Much Testing is Enough? (LiGuo Huang, 1996)
Risk exposures traded off in the sweet-spot chart:
- Early Startup: risk due to low dependability
- Commercial: risk due to low dependability
- High Finance: risk due to low dependability
- Risk due to market share erosion
Values read from the chart, from no added test time up to +54% (missing entries left as "..."):
- Added % test time (COCOMO II): baseline, then 12, 22, 34, 54
- P(L), COQUALMO: 1.0, .475, .24, .125, .06
- S(L), Early Startup: .33, .19, .11, .06, .03
- S(L), Commercial: .56, .32, .18, .10, ...
- S(L), High Finance: 3.0, 1.68, .96, .54, .30
- Market risk REm: .008, .027, .09, ...
The sweet spot is where total risk exposure is minimized.
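The chart's sweet-spot logic can be sketched directly: total risk exposure is RE = P(L) * S(L) + REm, evaluated at each test-time level and minimized. A sketch using the early-startup row (only the first three columns, where every row has values):

```python
# Values read from the chart above
p_loss = [1.0, .475, .24]        # P(L): probability of dependability-related loss
s_loss = [.33, .19, .11]         # S(L): size of loss, early-startup case
re_market = [.008, .027, .09]    # REm: market-share-erosion risk exposure

# Total risk exposure at each added-test-time level; the sweet spot minimizes it
re_total = [p * s + m for p, s, m in zip(p_loss, s_loss, re_market)]
sweet_spot = min(range(len(re_total)), key=re_total.__getitem__)
print([round(re, 3) for re in re_total], "-> sweet spot at column", sweet_spot)
# [0.338, 0.117, 0.116]: dependability risk falls while market risk rises
```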
Visual Representation: Kanban board shows flow and bottlenecks
- Kanban limit: regulates WIP (work in progress) at each stage in the process
- Pull flow: from Engineering Ready to Release Ready
(From David Anderson)
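A minimal sketch of the pull discipline such a board enforces (stage names from the slide; the WIP limits and work items are illustrative):

```python
# Stages with illustrative WIP limits; None means an unlimited queue
limits = {"Engineering Ready": None, "Development": 3,
          "Test": 2, "Release Ready": None}
wip = {stage: [] for stage in limits}
wip["Engineering Ready"] = ["A", "B", "C", "D", "E"]

def pull(dst, src):
    """Pull one item from src into dst unless dst is at its WIP limit."""
    limit = limits[dst]
    if wip[src] and (limit is None or len(wip[dst]) < limit):
        wip[dst].append(wip[src].pop(0))
        return True
    return False  # blocked: the bottleneck stays visible upstream

for _ in range(4):
    pull("Development", "Engineering Ready")
print(wip["Development"])  # ['A', 'B', 'C'] -- the 4th pull hit the limit of 3
```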