Chapter 3: Software Size and Cost Estimation
PROJECT EVALUATION
Overview
What is it? A systematic and objective assessment of an ongoing or completed project, covering its design, implementation and results.
- Involves gathering, analysing, interpreting and reporting information
- Should be based on credible data
Purpose
- Learning and improvement
- Accountability: provide useful feedback to stakeholders such as entrepreneurs, sponsors, donors, client groups, administrators, staff, and other relevant constituencies
Types of Evaluation
There are several types of evaluation. The classification is based on:
- purpose of the evaluation
- methodology
- timing
- position of the evaluators (who is involved in the evaluation)
Based on purpose:
- formative
- summative
Types of Evaluation
Based on timing:
- Ex-ante (prediction) evaluation
- Ex-post (after the fact) evaluation
Based on position of evaluator:
- External evaluation
- Internal evaluation or self-assessment
Types of Evaluation
Ex-ante evaluation:
- Conducted before the implementation of a project, as part of the planning
- Also referred to as appraisal or quality at entry
Ex-post evaluation:
- Conducted after the project is completed
- Used to assess the sustainability of project effects and impacts
- Identifies factors of success to inform other projects
Types of Evaluation
External evaluation:
- Initiated and controlled by the donor as part of a contractual agreement
- Conducted by independent people who are not involved in implementation
- Often guided by project staff
Types of Evaluation
Internal evaluation or self-assessment:
- Internally guided reflective process
- Initiated and controlled by the group for its own learning and improvement
- Sometimes done by consultants who are outsiders to the project
- Need to clarify ownership of information before the review starts
Types of Evaluation
By methodology employed:
- Quantitative
- Qualitative
Steps in Managing a Project Evaluation
1. Establishing the need for an evaluation
2. Initial planning and resourcing
3. Developing terms of reference
4. Engaging the evaluator or evaluation team
5. Approving the workplan
6. Implementing and monitoring the evaluation
7. Assessing the results of the evaluation
8. Developing a plan for follow-up
Step 1: Establishing the Need for an Evaluation
Project managers need to clarify the purpose of an evaluation, e.g.:
- Donor requirement
- Accountability
- Innovation
- Learning and change
- Responding to changed circumstances
Step 2: Initial Planning and Resourcing
- Evaluations take up significant time and resources; need to ensure that the costs are appropriate for the anticipated benefits
- Resourcing the evaluation: money? technical expertise?
- Defining scope and size:
  - clarify if external or internal
  - level of effort and resources required/available
  - stakeholder groups to be involved and how
Full stakeholder involvement is desirable, but could be limited to the following:
- Deciding whether or not to evaluate
- Defining the type of evaluation, its scope, and criteria
- Defining the evaluation questions: what are the key issues to explore in the evaluation?
- Defining the evaluation workplan; evaluation activities must be scheduled and fit into the stakeholders' agendas
- Deciding which recommendations to adopt and which to reject
- Disseminating and gathering feedback on the results
Providing Resources for the Evaluation
- Evaluations require substantial investments of financial and human resources
- The funding source would have been indicated in the project document
Developing Terms of Reference (TORs)
TORs are the key guide for an evaluation. They should:
- clarify reasons for the evaluation
- highlight issues that have become apparent
- indicate the general depth and scope required
- spell out any imperatives for the evaluators
- provide details about methodology, scheduling, cost and the qualifications of the members of the evaluation team
Developing Terms of Reference (TORs)
- The project manager is responsible for ensuring clear and focused TORs
- Ensuring this clarity and focus is the extent of the manager's responsibility for developing the TORs
Contents of Terms of Reference
- Context for the evaluation
- Purpose of the evaluation
- Evaluation issues and questions
- Evaluation stakeholders
- Methodology
- Qualifications of evaluators
- Schedule
- Outputs and deliverables
- Cost
- Action plan
- Appendices: evaluation matrix, evaluation policy, LFA
Engaging the Evaluator or Evaluation Team
Evaluators can be selected by you, imposed by donors, or jointly agreed to. Whichever it is, some guidance is useful here:
- The appropriate level of technical expertise or evaluation expertise
- The previous experience or profile of the evaluator
- Suggested profile of a good evaluation team
- Using peers as evaluators
- Roles and responsibilities
Reviewing and Approving the Workplan
- The evaluation workplan is developed by the evaluator and the evaluation team
- It should provide a roadmap for conducting the evaluation and include the proposed methodology and means of analysis
- A poor workplan leads to a poor evaluation
- It is important that the leadership of the project review and approve the evaluation workplan
Reviewing and Approving the Workplan
Suggested outline of a workplan:
- Introduction: purpose and stakeholders
- Evaluation questions (framework)
- Methodology (sources, methods)
- Schedule (Gantt chart)
- Resource allocation and budget
- Evaluation team
- Outline of the evaluation report
Implementing and Monitoring the Evaluation Work
Managers are required to facilitate the evaluators' work by:
- Supporting field data collection
- Making documents available
- Responding to regular evaluation reports and feedback
- Distributing draft reports for comments to appropriate partners
- Participating in donor and evaluator meetings when requested
- Reviewing drafts of findings and reports and providing feedback
Different Audiences May Have Different Needs
- Internal staff might need a verbal report and a memo with key points
- Donors and external stakeholders might need a full report
- Ministries might need an abstract
- The public at large might need an abstract of findings only
Know your audience and match your reporting approach.
Effective Communication of Evaluation Results
- Captures the data in its conclusions
- Speaks the language of users
- Takes a detached, non-possessive stance
- Objective: speaks "truth" to power, but
- Is pragmatic: goes only as far as the key stakeholders will accept
Assessing the Quality of an Evaluation Report and Process
- Meeting needs: commissioning managers, stakeholders
- Relevant scope
- Suitable methods
- Reliable data
- Sound analysis
- Credible findings
- Impartial conclusions
- Clear reporting
What to Evaluate
- Outcomes
- Processes
Steps in Evaluation Planning
1. Selecting the object (setting objectives)
2. Methodology
3. Deciding on standards
4. Choice of measures
5. Data collection
6. Data analysis
7. Implementing the evaluation
8. Reporting
Project Estimation
Estimation
"The single most important task of a project: setting realistic expectations. Unrealistic expectations based on inaccurate estimates are the single largest cause of software failure."
The Problems
- Predicting software cost
- Predicting software schedule
- Controlling software risk
- Managing/tracking the project as it progresses
Fundamental Estimation Questions
- How much effort is required to complete an activity?
- How much calendar time is needed to complete an activity?
- What is the total cost of an activity?
Project estimation and scheduling are interleaved management activities.
Software Cost Components
- Hardware and software costs
- Travel and training costs
- Effort costs (the dominant factor in most projects):
  - the salaries of engineers involved in the project
  - social and insurance costs
- Effort costs must take overheads into account:
  - costs of building, heating, lighting
  - costs of networking and communications
  - costs of shared facilities (e.g. library, staff restaurant, etc.)
(A small worked sketch follows.)
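A minimal sketch of the overhead arithmetic in Python; the salary figure and overhead rate below are invented for illustration, not taken from the slides.

    # Illustrative only: the salary and overhead rate are assumed numbers.
    def loaded_effort_cost(monthly_salary, overhead_rate, effort_months):
        """Effort cost = direct salary plus a proportional share of
        overheads (buildings, networking, shared facilities, ...)."""
        return monthly_salary * (1 + overhead_rate) * effort_months

    # Example: a 5 person-month task at $6,000/month with 100% overheads.
    print(loaded_effort_cost(6000, 1.0, 5))  # -> 60000.0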
Nature of Estimates
- Effort in man-months (or person-months), a man-month being defined as 152 man-hours of direct-charged labor
- Schedule in months (requirements complete to acceptance)
- Assumes a well-managed program
Common Estimation Models
- Expert judgment
- Analogy
- Top-down
- Bottom-up
- Price-to-win
- Parametric or algorithmic methods (using formulas and equations)
Criteria for a Good Model
- Defined: clear what is estimated
- Accurate
- Objective: avoids subjective factors
- Results understandable
- Detailed
- Stable: small changes in inputs produce small changes in estimates
- Right scope
- Easy to use
- Causal: future data not required
- Parsimonious: everything present is important
Expert Judgment
One or more experts in both software development and the application domain use their experience to predict software costs. The process iterates until some consensus is reached.
Advantages:
- Relatively cheap estimation method
- Can be accurate if experts have direct experience of similar systems
Disadvantages:
- Very inaccurate if there are no experts!
Estimation by Analogy
The cost of a project is computed by comparing the project to a similar project in the same application domain.
Advantages:
- May be accurate if project data is available and the people/tools are the same
Disadvantages:
- Impossible if no comparable project has been tackled
- Needs a systematically maintained cost database
(A rough sketch of the mechanics follows.)
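As a rough illustration of the mechanics (one common form of analogy estimation, not a method prescribed by these slides), the sketch below picks the most similar past project by size and scales its effort; the cost-database records are invented.

    # Hypothetical cost-database records: (name, size in KLOC, effort in person-months).
    past_projects = [("billing", 24, 80), ("inventory", 40, 150), ("payroll", 12, 35)]

    def estimate_by_analogy(new_size_kloc):
        """Pick the past project closest in size and scale its effort
        linearly by relative size (a simplifying assumption)."""
        name, size, effort = min(past_projects, key=lambda p: abs(p[1] - new_size_kloc))
        return effort * (new_size_kloc / size)

    print(round(estimate_by_analogy(30), 1))  # scales the 24-KLOC project: 100.0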
Pricing to Win
The project costs whatever the customer has to spend on it.
Advantages:
- You get the contract
Disadvantages:
- The probability that the customer gets the system he or she wants is small
- Costs do not accurately reflect the work required
- How do you know what the customer has?
Only a good strategy if you are willing to take a serious loss to get a first customer, or if delivery of a radically reduced product is a real option.
Top-Down and Bottom-Up Estimation
Any of these approaches may be used top-down or bottom-up.
Top-down:
- Start at the system level and assess the overall system functionality and how it is delivered through sub-systems
Bottom-up:
- Start at the component level and estimate the effort required for each component, then add these efforts to reach a final estimate
Top-Down Estimation
- Usable without knowledge of the system architecture and the components that might be part of the system
- Takes into account costs such as integration, configuration management and documentation
- Can underestimate the cost of solving difficult low-level technical problems
Advantages:
- Easy to calculate
- Effective early on (like initial cost estimates)
Disadvantages:
- Some models are questionable or may not fit
- Less accurate because it doesn't look at details
Bottom-Up Estimation
- Usable when the architecture of the system is known and components identified
- Can be an accurate method if the system has been designed in detail
- May underestimate the costs of system-level activities such as integration and documentation
Advantages:
- Works well if activities are well understood
Disadvantages:
- Specific activities are not always known
- More time consuming
(A minimal sketch of the arithmetic follows.)
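A minimal sketch of the bottom-up arithmetic, assuming invented component figures; the integration allowance is one way to compensate for the system-level costs this method tends to miss.

    # Hypothetical per-component estimates in person-months.
    components = {"ui": 4.0, "api": 6.5, "database": 3.0, "reports": 2.5}

    def bottom_up_estimate(parts, integration_allowance=0.15):
        """Sum component efforts, then add an allowance for system-level
        work (integration, documentation) that per-component estimates miss."""
        base = sum(parts.values())
        return base * (1 + integration_allowance)

    print(bottom_up_estimate(components))  # 16.0 * 1.15 -> 18.4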
Estimation Methods
- Each method has strengths and weaknesses
- Estimation should be based on several methods
- If these do not return approximately the same result, you have insufficient information to make an estimate, and some action should be taken to find out more in order to make more accurate estimates
- Pricing to win is sometimes the only applicable method
Pricing to Win
- This approach may seem unethical and un-businesslike
- However, when detailed information is lacking it may be the only appropriate strategy
- The project cost is agreed on the basis of an outline proposal, and the development is constrained by that cost
- A detailed specification may be negotiated, or an evolutionary approach used for system development
Algorithmic Measures
- Lines of code (LOC)
- Function points
- Feature points or object points
Other possibilities:
- Number of bubbles on a DFD
- Number of ERD entities
- Number of processes on a structure chart
LOC and function points are the most common of the algorithmic approaches; the majority of projects use none of the above.
Lines of Code
- What's a line of code? The measure was first proposed when programs were typed on cards, with one line per card. How does this correspond to statements, as in Java, which can span several lines, or where there can be several statements on one line?
- What programs should be counted as part of the system?
- This model assumes that there is a linear relationship between system size and volume of documentation
- A key thing to understand about early estimates is that the uncertainty is more important than the initial figure: don't seek one estimate, seek justifiable bounds
Code-Based Estimates
LOC advantages:
- Commonly understood metric
- Permits specific comparison
- Actuals easily measured
LOC disadvantages:
- Difficult to estimate early in the cycle
- Counts vary by language
- Many costs not considered (e.g. requirements)
- Programmers may be rewarded based on this (can use # defects / # LOC)
- Code generators produce excess code
LOC Estimate Issues
- How do you know how many in advance?
- What about different languages?
- What about programmer style?
- Stat: average programmer productivity is 3,000 LOC/yr
- Most algorithmic approaches are more effective after requirements (or have to be after)
(A sketch of one counting convention follows.)
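To make the counting ambiguity concrete, here is a minimal sketch of one convention (non-blank, non-comment physical lines of Python); a different convention or language would give a different count for the same program.

    def count_loc(path):
        """Count non-blank physical lines that are not pure '#' comments.
        This is just one convention; logical-statement counts differ."""
        loc = 0
        with open(path, encoding="utf-8") as src:
            for line in src:
                stripped = line.strip()
                if stripped and not stripped.startswith("#"):
                    loc += 1
        return loc

    print(count_loc("example.py"))  # "example.py" is a hypothetical file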
Function Points
- Software size should be measured by the number and complexity of the functions it performs
- More methodical than LOC counts
House analogy:
- a house's square feet ~= software LOC
- number of bedrooms and baths ~= function points
- the former is size only; the latter is size and function
Six basic steps.
Function Point Process
1. Count the number of business functions per category (categories: outputs, inputs, database inquiries, files or data structures, and interfaces)
2. Establish a complexity factor (simple, average, complex) for each, and set a weighting multiplier for each (ranging up to 15); this yields the "unadjusted function-point total"
3. Compute an "influence multiplier" and apply it; it ranges from 0.65 to 1.35 and is based on 14 factors
4. The result is the "function point total", which can be used in comparative estimates
(A sketch of the arithmetic follows.)
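A minimal sketch of the arithmetic in steps 1-3, assuming the commonly published function-point weights (which run from 3 to 15); the counts and ratings in the example are invented.

    # Commonly published weights (simple, average, complex) per category;
    # an assumption here, not stated on the slide.
    WEIGHTS = {
        "inputs":     (3, 4, 6),
        "outputs":    (4, 5, 7),
        "inquiries":  (3, 4, 6),
        "files":      (7, 10, 15),
        "interfaces": (5, 7, 10),
    }

    def function_points(counts, influence_ratings):
        """counts: {category: (simple, average, complex) counts};
        influence_ratings: the 14 factors, each rated 0..5."""
        unadjusted = sum(n * w
                         for cat, ns in counts.items()
                         for n, w in zip(ns, WEIGHTS[cat]))
        multiplier = 0.65 + 0.01 * sum(influence_ratings)  # 0.65..1.35
        return unadjusted * multiplier

    counts = {"inputs": (6, 2, 0), "outputs": (4, 1, 0), "inquiries": (3, 0, 0),
              "files": (1, 1, 0), "interfaces": (0, 1, 0)}
    print(function_points(counts, [3] * 14))  # 80 unadjusted * 1.07 -> ~85.6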
Parametric Method Issues
- Remember: most projects you'll run into don't use these
- Which is "normal", so don't be surprised, or come into a new job and say "Hey, let's use COCOMO"
- These are more effective on large projects, where a past historical base exists
- The primary issue for most projects is a lack of similar projects, and thus a lack of comparable data
COCOMO
- The Constructive Cost Model (COCOMO) is an algorithmic software cost estimation model developed by Barry Boehm
- The model uses a basic regression formula, with parameters derived from historical project data and current project characteristics, to estimate effort, cost, and schedule for software projects
- Outputs are in person-months
- Biggest weakness? It requires a product size estimate in LOC as input
Input Data
- Delivered source lines of code, in thousands (KSLOC)
- Various scale factors: experience, process maturity, required reliability, complexity, developmental constraints
COCOMO Modes and Models
Three development environments (modes):
- Organic mode
- Semidetached mode
- Embedded mode
Three increasingly complex models:
- Basic model
- Intermediate model
- Detailed model
COCOMO Modes
Organic mode:
- Developed in a familiar, stable environment
- Product similar to previously developed products
Semidetached mode:
- Somewhere between organic and embedded
- "Medium" teams with mixed experience, working with a mix of rigid and less-than-rigid requirements
Embedded mode:
- New product requiring a great deal of innovation
- Developed within a set of "tight" constraints (hardware, software, operational, ...)
COCOMO Models
Basic model:
- Used for early, rough estimates of project cost, performance, and schedule
- Accuracy: within a factor of 2 of actuals 60% of the time
Intermediate model:
- Uses an Effort Adjustment Factor (EAF)
- Doesn't account for 10-20% of cost (training, maintenance, TAD, etc.)
- Accuracy: within 20% of actuals 68% of the time
Detailed model:
- Uses different effort multipliers for each phase of the project
(Everybody uses the intermediate model.)
Basic Model Effort Equation
Effort = A * (Size)^exponent
- A is a constant based on the development mode: organic = 2.4, semidetached = 3.0, embedded = 3.6
- Size is in thousands of source lines of code (KSLOC)
- The exponent is a constant given the mode: organic = 1.05, semidetached = 1.12, embedded = 1.20
(A runnable sketch follows.)
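A minimal sketch of the Basic COCOMO arithmetic using the constants above; the schedule equation (2.5 * Effort^d, with d = 0.38, 0.35 or 0.32 by mode) is the standard Basic COCOMO companion formula, added here as an assumption because it is not stated on this slide.

    # Basic COCOMO constants from the slide: (A, exponent) per mode, plus
    # the standard schedule exponent d (an addition, not on the slide).
    MODES = {
        "organic":      (2.4, 1.05, 0.38),
        "semidetached": (3.0, 1.12, 0.35),
        "embedded":     (3.6, 1.20, 0.32),
    }

    def basic_cocomo(ksloc, mode="organic"):
        a, b, d = MODES[mode]
        effort = a * ksloc ** b        # person-months
        schedule = 2.5 * effort ** d   # calendar months
        staff = effort / schedule      # average headcount
        return effort, schedule, staff

    # Approximately reproduces the profiles in the next slide's table:
    for ksloc in (2, 8, 32, 128):
        e, s, n = basic_cocomo(ksloc)
        print(f"{ksloc:>3} KSLOC: {e:5.0f} MM, {s:4.1f} months, staff {n:4.1f}")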
Nominal Project Profiles

Size            2,000 SLOC   8,000 SLOC   32,000 SLOC   128,000 SLOC
Effort (MM)     5            21           91            392
Schedule (mo)   4.6          8            14            24
Staff           1.1          2.7          6.5           16
SLOC/MM         400          376          352           327
Estimation Issues
- Quality estimates are needed early, but information is limited
- Precise estimation data is available at the end, but is not needed. Or is it? What about the next project?
- Best estimates are based on past experience, but for many software projects there is little or none: technologies change, historical data is unavailable, and there is wide variance in project experiences/types
- Politics of estimation: you may anticipate a "cut" by upper management
- Subjective nature of software estimation
Over- and Under-Estimation
Over-estimation issues:
- The project will not be funded; conservative estimates guaranteeing 100% success may mean a funding probability of zero
- Parkinson's Law: work expands to fill the time allowed
- Danger of feature and scope creep
- Be aware of "double-padding": team member + manager
Under-estimation issues:
- Quality problems (short-changing key phases like testing)
- Inability to meet deadlines
- Morale and other team motivation issues