Software Effort Estimation


1 Software Effort Estimation

2 What makes a successful project?
Delivering:
- Agreed functionality
- On time
- At the agreed cost
- With the required quality

A key point here is that developers may in fact be very competent, but incorrect estimates leading to unachievable targets will lead to extreme customer dissatisfaction.

3 Difficulties with Estimation
- Subjective nature of estimation
- Changing technologies
- Political implications

4 Estimation
Wrong initial estimates lead to exceeding deadlines. They can go wrong in two ways:
- Over-estimation
- Under-estimation

5 Over and under-estimating
Parkinson’s Law: ‘Work expands to fill the time available.’ An over-estimate is therefore likely to cause the project to take longer than it otherwise would. Brooks’ Law: putting more people on a late job makes it later.

The answer to the problem of over-optimistic estimates might seem to be to pad out all estimates, but this itself can lead to problems. You might lose out to competitors who could underbid you if you were tendering for work. Generous estimates also tend to lead to reductions in productivity. On the other hand, setting aggressive targets in order to increase productivity could lead to poorer product quality. Ask how many students have heard of Parkinson’s Law – the response could be interesting! It is best to explain that C. Northcote Parkinson was to some extent a humourist rather than a heavyweight social scientist. Note that ‘zeroth’ is what comes before first. This is discussed in Section 5.3 of the text, which also covers Brooks’ Law.

6 Basis for s/w estimation
- Need historical data
- A measure of work
  - only a code measure
  - programmer dependent

7 Software effort estimation techniques
- Top-down
- Bottom-up

8 Bottom-up vs. top-down
Bottom-up:
- use when no past project data is available
- identify all the tasks that have to be done – so quite time-consuming
- use when you have no data about similar past projects

Top-down:
- produce an overall estimate based on project cost drivers
- based on past project data
- divide the overall estimate between the jobs to be done

There is often confusion between the two approaches, as the first part of the bottom-up approach is a top-down analysis of the tasks to be done, followed by the bottom-up adding up of effort for all the work to be done. Make sure your students understand this or it will return to haunt you (and them) at examination time.

9 Bottom-up estimating
1. Break the project into small components.
2. Stop when you reach tasks that one person can do in one or two weeks.
3. Estimate costs for the lowest-level activities.
4. At each higher level, calculate the estimate by adding the estimates for the lower levels.

The idea is that even if you have never done something before, you can imagine what you could do in about a week. Exercise 5.2 relates to bottom-up estimating.
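The roll-up in steps 3 and 4 can be pictured as summing a small work-breakdown tree. Below is a minimal sketch in Python; the component names and effort figures are hypothetical, not taken from the slides.

```python
# Minimal sketch of bottom-up estimating: leaf tasks carry their own
# estimates (person-days); every higher level is just the sum of the
# levels below it. All names and numbers here are hypothetical.

def bottom_up_estimate(component):
    """Return effort by adding up the lowest-level activity estimates."""
    if isinstance(component, (int, float)):   # a lowest-level activity
        return component
    return sum(bottom_up_estimate(sub) for sub in component.values())

project = {
    "design": {"data model": 5, "screen layouts": 8},
    "code":   {"module A": 10, "module B": 7},
    "test":   {"test plan": 3, "system test": 6},
}

print(bottom_up_estimate(project))   # 39 person-days for this made-up project
```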

10 Top-down estimates
Produce an overall estimate using effort driver(s), then distribute proportions of the overall estimate to the components.
[Figure: an overall project estimate of 100 days is split into design 30% (i.e. 30 days), code 30% (i.e. 30 days) and test 40% (i.e. 40 days).]
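A minimal sketch of the same split in Python; the 100-day figure and the 30/30/40 proportions come from the slide above, everything else is illustrative.

```python
# Top-down estimating: one overall figure from an effort driver is
# apportioned to the components by fixed proportions.
overall_estimate = 100                                     # days
proportions = {"design": 0.30, "code": 0.30, "test": 0.40}

component_estimates = {name: overall_estimate * share
                       for name, share in proportions.items()}
print(component_estimates)   # {'design': 30.0, 'code': 30.0, 'test': 40.0}
```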

11 Function Point
Function Count
Alan Albrecht, while working for IBM, recognized the problem of size measurement in the 1970s and developed a technique (which he called Function Point Analysis) that appeared to be a solution to the size measurement problem.

12 Function Count
The principle of Albrecht’s function point analysis (FPA) is that a system is decomposed into functional units:
- Inputs: information entering the system
- Outputs: information leaving the system
- Enquiries: requests for instant access to information
- Internal logical files: information held within the system
- External interface files: information held by other systems that is used by the system being analyzed

13 FPA's functional units
The FPA functional units are shown in the figure below:
[Figure: the system holds internal logical files (ILF) and references external interface files (EIF); users and other applications interact with the system through inputs, outputs and inquiries. ILF: internal logical files; EIF: external interfaces.]

14 Function Count
The five functional units are divided into two categories:
(i) Data function types
- Internal Logical Files (ILF): a user-identifiable group of logically related data or control information maintained within the system.
- External Interface Files (EIF): a user-identifiable group of logically related data or control information referenced by the system but maintained within another system. This means that an EIF counted for one system may be an ILF in another system.

15 Function Count
(ii) Transactional function types
- External Input (EI): an EI processes data or control information that comes from outside the system. The EI is an elementary process, which is the smallest unit of activity that is meaningful to the end user in the business.
- External Output (EO): an EO is an elementary process that generates data or control information to be sent outside the system.
- External Inquiry (EQ): an EQ is an elementary process made up of an input-output combination that results in data retrieval.

16 Software Project Planning
Table 1: Functional units with weighting factors

Functional Units                  Low   Average   High
External Inputs (EI)               3       4        6
External Outputs (EO)              4       5        7
External Inquiries (EQ)            3       4        6
Internal Logical Files (ILF)       7      10       15
External Interface Files (EIF)     5       7       10

17 Software Project Planning
Table 2: UFP calculation table. For each functional unit, record the count at each complexity level, multiply it by the weighting factor, sum the products to get the functional unit total, and add the functional unit totals to get the total unadjusted function point count.

Functional Units                  Complexity weightings
External Inputs (EIs)             Low x 3   Average x 4    High x 6
External Outputs (EOs)            Low x 4   Average x 5    High x 7
External Inquiries (EQs)          Low x 3   Average x 4    High x 6
Internal Logical Files (ILFs)     Low x 7   Average x 10   High x 15
External Interface Files (EIFs)   Low x 5   Average x 7    High x 10

Total Unadjusted Function Point Count = sum of the functional unit totals.
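The arithmetic behind Table 2 can be sketched in a few lines of Python. The weights are those of Table 1; the counts passed in at the bottom are hypothetical, purely to show the mechanics.

```python
# Unadjusted function point (UFP) count: multiply the number of occurrences
# of each functional unit at each complexity level by its weight, then sum.
WEIGHTS = {                       # (low, average, high) weights from Table 1
    "EI":  (3, 4, 6),
    "EO":  (4, 5, 7),
    "EQ":  (3, 4, 6),
    "ILF": (7, 10, 15),
    "EIF": (5, 7, 10),
}

def unadjusted_fp(counts):
    """counts maps a unit type to its (low, average, high) occurrence counts."""
    return sum(n * w
               for unit, per_level in counts.items()
               for n, w in zip(per_level, WEIGHTS[unit]))

# Hypothetical counts: 2 low and 3 average EIs; 1 low, 2 average, 1 high EOs; 2 average ILFs
print(unadjusted_fp({"EI": (2, 3, 0), "EO": (1, 2, 1), "ILF": (0, 2, 0)}))   # 59
```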

18 Software Project Planning
Table 3: Computing function points. Rate each factor on a scale of 0 to 5:
0 = No influence, 1 = Incidental, 2 = Moderate, 3 = Average, 4 = Significant, 5 = Essential

Number of factors considered (Fi):
1. Does the system require reliable backup and recovery?
2. Is data communication required?
3. Are there distributed processing functions?
4. Is performance critical?
5. Will the system run in an existing, heavily utilized operational environment?
6. Does the system require on-line data entry?
7. Does the on-line data entry require the input transaction to be built over multiple screens or operations?
8. Are the master files updated on line?
9. Are the inputs, outputs, files, or inquiries complex?
10. Is the internal processing complex?
11. Is the code designed to be reusable?
12. Are conversion and installation included in the design?
13. Is the system designed for multiple installations in different organizations?
14. Is the application designed to facilitate change and ease of use by the user?

19 IFPUG Complexity

20 Software Project Planning
Function points may be used to compute the following important metrics:
- Productivity = FP / person-months
- Quality = Defects / FP
- Cost = Rupees / FP
- Documentation = Pages of documentation / FP

These metrics are controversial and are not universally accepted. There are standards issued by the International Function Point User Group (IFPUG, covering the Albrecht method) and the United Kingdom Function Point User Group (UFPUG, covering the Mk II method). An ISO standard for the function point method is also being developed.
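The four metrics are simple ratios; the sketch below just evaluates them for made-up project figures (none of these numbers come from the slides).

```python
# FP-based metrics as plain ratios, using hypothetical project figures.
fp        = 672          # function points delivered
effort    = 30           # person-months
defects   = 47
cost      = 2_000_000    # rupees
doc_pages = 400

print("Productivity :", round(fp / effort, 1), "FP per person-month")   # 22.4
print("Quality      :", round(defects / fp, 3), "defects per FP")       # 0.07
print("Cost         :", round(cost / fp), "rupees per FP")              # 2976
print("Documentation:", round(doc_pages / fp, 2), "pages per FP")       # 0.6
```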

21 Software Project Planning
Example 4.1: Consider a project with the following functional units:
- Number of user inputs = 50
- Number of user outputs = 40
- Number of user enquiries = 35
- Number of user files = 06
- Number of external interfaces = 04
Assume all complexity adjustment factors and weighting factors are average. Compute the function points for the project.

22 FP Solution
We know UFP = 50 x 4 + 40 x 5 + 35 x 4 + 6 x 10 + 4 x 7 = 200 + 200 + 140 + 60 + 28 = 628
CAF = (0.65 + 0.01 ΣFi) = (0.65 + 0.01 x (14 x 3)) = 0.65 + 0.42 = 1.07
FP = UFP x CAF = 628 x 1.07 = 671.96 ≈ 672
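The same calculation as a short Python check. The counts come from Example 4.1, the average weights from Table 1, and the 0.65 + 0.01 x ΣFi form of the complexity adjustment factor is the one implied by the figures above.

```python
# Worked Example 4.1 re-done in code so the arithmetic can be verified.
counts  = {"EI": 50, "EO": 40, "EQ": 35, "ILF": 6, "EIF": 4}
weights = {"EI": 4,  "EO": 5,  "EQ": 4,  "ILF": 10, "EIF": 7}   # average column of Table 1

ufp = sum(counts[u] * weights[u] for u in counts)   # 628
caf = 0.65 + 0.01 * (14 * 3)                        # all 14 factors rated average (3) -> 1.07
fp  = ufp * caf                                     # 671.96, i.e. about 672

print(ufp, round(caf, 2), round(fp))                # 628 1.07 672
```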

23 Function points Mark II
Developed by Charles R. Symons: ‘Software Sizing and Estimating - Mk II FPA’, Wiley & Sons, 1991. Originally developed for the CCTA, so it should be compatible with SSADM; it is mainly used in the UK and has developed in parallel to the IFPUG FPs.

Once again, just a reminder that the lecture is just an overview of concepts. Mark II FPs is a version of function points developed in the UK and is only used by a minority of FP specialists. The US-based IFPUG method (developed from the original Albrecht approach) is more widely used. I use the Mark II version because it has simpler rules and thus provides an easier introduction to the principles of FPs. Mark II FPs are explained in more detail in Section 5.9. If you are really keen on teaching the IFPUG approach then look at the relevant section of the textbook; the IFPUG rules are really quite tricky in places, and for the full rules it is best to contact IFPUG.

24 Function points Mk II
For each transaction, count:
- the data items input (Ni)
- the data items output (No)
- the entity types accessed (Ne)

FP count = Ni x 0.58 + Ne x 1.66 + No x 0.26

For each transaction (cf. use case) count the number of input types (not occurrences, e.g. where a table of payments is input on a screen the account number is repeated a number of times), the number of output types, and the number of entities accessed. Multiply by the weightings shown and sum. This produces an FP count for the transaction, which by itself will not be very useful. Sum the counts for all the transactions in an application and the resulting index value is a reasonable indicator of the amount of processing carried out. The number can be used as a measure of size rather than lines of code. See the calculations of productivity etc. discussed earlier. There is an example calculation in Section 5.9 (Example 5.3), and Exercise 5.7 should give a little practice in applying the method.
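A sketch of the per-transaction count in Python. The 0.26 output weight appears on the slide; the 0.58 input and 1.66 entity weights are the commonly quoted industry-average Mark II weights, so treat them as an assumption if your course uses calibrated values. The counts in the call are hypothetical.

```python
# Mark II FP count for a single transaction:
#   FP = Ni * 0.58 + Ne * 1.66 + No * 0.26
def mk2_fp(n_input_items, n_entities_accessed, n_output_items):
    return (0.58 * n_input_items
            + 1.66 * n_entities_accessed
            + 0.26 * n_output_items)

# e.g. a transaction with 10 input items, 4 entity accesses and 6 output items
print(round(mk2_fp(10, 4, 6), 2))   # 14.0; summed over all transactions to size the system
```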

25 Productivity = size / effort
Effort = const1 + (size * const2)

26
Inputs:
1) Select new sale (control): user action expressed by selection of a command [Sale: Receipt_No].
2) Select product type (business): user chooses from a categorised drop-down list of pizza-related product types [Product: Type_Of_Item].
3) Select product “name” (business): user chooses from a drop-down list of pizza-related goods [Product: Product_Description > Receipt_No & Product_No].
4) Select number of items (business): a customer may order 3 large margarita pizzas [Item_sale: Quantity_Sold].
5) Confirm sale (control): this is a recursive menu selection system.
Outputs:
1) Error/confirmation (control): end of the sales data interaction cycle.
Entities:
1) Sale: occurrence of a sale recorded here [Write all data].
2) Item_sale: functional relationship to the Sale entity (list of products for this sale) [Write all data].
3) Recipe: required to determine the inventory items to be subtracted from the Inventory_item entity [Read all data].
4) Inventory_item: required to change the inventory (stock) level for items used [Read Item_No, Write Quantity_In_Stock] (quantity of item x - Recipe:Quantity_Used).
5) System: provides the automatically generated sales receipt number and date.

27 COCOMO
COCOMO is applied in three development modes:
- Semi-detached mode
- Organic mode
- Embedded mode

28 COCOMO81
- Based on industry productivity standards; the database is constantly updated
- Allows an organization to benchmark its software development productivity
- Basic model: effort = c x size^k
- c and k depend on the type of system: organic, semi-detached, embedded
- Size is measured in ‘kloc’, i.e. thousands of lines of code

Recall that the aim of this lecture is to give an overview of principles. COCOMO81 is the original version of the model, which has subsequently been developed into COCOMO II, some details of which are discussed in the textbook. For full details read Barry Boehm et al., Software Estimation with COCOMO II, Prentice-Hall, 2002.

29 The COCOMO constants

System type                               c     k
Organic (broadly, information systems)    2.4   1.05
Semi-detached                             3.0   1.12
Embedded (broadly, real-time)             3.6   1.20

An interesting question is what a ‘semi-detached’ system is exactly. To my mind, a project that combines elements of both real-time and information systems (i.e. has a substantial database) ought to be even more difficult than an embedded system. Another point is that COCOMO was based on data from very large projects. There are data from smaller projects suggesting that larger projects tend to be more productive because of economies of scale; at some point, however, the diseconomies of scale caused by the additional management and communication overheads start to make themselves felt. Exercise 5.10 in the textbook provides practice in applying the basic model.
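The basic model with these constants, as a small Python sketch; the 32-kloc example size is hypothetical.

```python
# Basic COCOMO81: effort = c * size^k, with size in kloc.
COCOMO81 = {
    "organic":       (2.4, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded":      (3.6, 1.20),
}

def basic_effort(kloc, mode):
    c, k = COCOMO81[mode]
    return c * kloc ** k              # nominal effort in person-months

print(round(basic_effort(32, "organic"), 1))   # about 91 person-months
```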

30 Development effort multipliers (dem)
According to COCOMO, the major productivity drivers include:
- Product attributes: required reliability, database size, product complexity
- Computer attributes: execution time constraints, storage constraints, virtual machine (VM) volatility
- Personnel attributes: analyst capability, application experience, VM experience, programming language experience
- Project attributes: modern programming practices, software tools, schedule constraints

Virtual machine volatility is where the operating system that will run your software is subject to change. This could particularly be the case with embedded control software in an industrial environment. Schedule constraints refer to situations where extra resources are deployed to meet a tight deadline. If two developers can complete a task in three months, it does not follow that six developers could complete the job in one month. There would be additional effort needed to divide up the work, co-ordinate effort and so on.

31 Software Project Planning
Multipliers of different cost drivers

Cost Drivers           Very low   Low    Nominal   High   Very high   Extra high
Product attributes
  RELY                 0.75       0.88   1.00      1.15   1.40        --
  DATA                 --         0.94   1.00      1.08   1.16        --
  CPLX                 0.70       0.85   1.00      1.15   1.30        1.65
Computer attributes
  TIME                 --         --     1.00      1.11   1.30        1.66
  STOR                 --         --     1.00      1.06   1.21        1.56
  VIRT                 --         0.87   1.00      1.15   1.30        --
  TURN                 --         0.87   1.00      1.07   1.15        --

32 Software Project Planning
Cost Drivers RATINGS Very low Low Nominal High Very high Extra high Personnel Attributes ACAP AEXP PCAP VEXP LEXP Project Attributes MODP TOOL SCED 1.46 1.19 1.00 0.86 0.71 -- 1.29 1.13 1.00 0.91 0.82 -- 1.42 1.17 1.00 0.86 0.70 -- 1.21 1.10 1.00 0.90 -- -- 1.14 1.07 1.00 0.95 -- -- 1.24 1.10 1.00 0.91 0.82 -- 1.24 1.10 1.00 0.91 0.83 -- 1.23 1.08 1.00 1.04 1.10 -- Table 5: Multiplier values for effort calculations

33 COCOMO I

34 Using COCOMO development effort multipliers (dem)
An example, for analyst capability:
- Assess capability as very low, low, nominal, high or very high
- Extract the multiplier:
    very low    1.46
    low         1.19
    nominal     1.00
    high        0.80
    very high   0.71
- Adjust the nominal estimate, e.g. nominal estimate x 0.80 = adjusted estimate in staff-months

Exercise 5.11 gives practice in applying these.
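Applying a multiplier is just a scaling of the nominal estimate; the sketch below uses a hypothetical nominal figure of 40 staff-months together with the 0.80 ‘high’ analyst-capability multiplier quoted on the slide.

```python
# Adjusting a nominal COCOMO estimate with a development effort multiplier.
nominal_effort  = 40.0      # staff-months (hypothetical nominal estimate)
acap_multiplier = 0.80      # analyst capability rated 'high' in the example

adjusted_effort = nominal_effort * acap_multiplier
print(adjusted_effort)      # 32.0 staff-months
```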

35 COCOMO II
Table 8: Stages of COCOMO-II

Stage I: Application composition estimation model
  Application for the types of projects: application composition.
  Applications: in addition to the application composition type of projects, this model is also used for the prototyping (if any) stage of application generators, infrastructure & system integration.

Stage II: Early design estimation model
  Application for the types of projects: application generators, infrastructure & system integration.
  Applications: used in the early design stage of a project, when less is known about the project.

Stage III: Post-architecture estimation model
  Application for the types of projects: application generators, infrastructure & system integration.
  Applications: used after the completion of the detailed architecture of the project.

