COCOMO II: Airborne Radar System Example


1 COCOMO II: Airborne Radar System Example
Barry Boehm, CSCI 510, Fall 2011

2 Outline
Overview of the Airborne Radar System (ARS)
Demonstrate progressive use of different COCOMO sub-models within an evolutionary spiral development process
Cover estimation of reuse, modification, COTS, and automated translation
Show how an aggregate estimate is refined in greater detail

Here is the outline of topics for the talk. First I'll give a broad overview of the radar system architecture and its functions. You will see that this example has three parts, corresponding to increasing levels of elaboration as the system is developed. The different COCOMO submodels we use are tied to the varying degrees of detail; these submodels are the Applications Composition, Early Design, and Post-Architecture models. The evolutionary spiral process is a risk-driven lifecycle used to iteratively develop the radar. If you want more background on the process, you can find references on the spiral model in our book. In addition to new software development, these estimates will cover reuse, modification, incorporation of commercial-off-the-shelf software from vendors, and automatic translation from one language to another. Our final estimate will be a refinement of a top-level estimate, whereby we use component-level detail instead of considering the system at the highest aggregate level. We will also point out some aids that can make the estimation process easier and more consistent.

3 ARS Estimation
Use the Applications Composition, Early Design and Post-Architecture submodels
Two Post-Architecture estimates are demonstrated: top-level and detailed
scale drivers apply to the overall system in both estimates
cost drivers are rated for the aggregate system in the top-level estimate (17 ratings)
cost drivers are refined for each individual software component in the detailed estimate (17 x 6 components = 102 ratings)

4 ARS System Overview
Here is a high-level diagram of the airborne radar system. It is a "hard" real-time system because its processing has to keep up with flying objects and human operator input. It is resident onboard a jet aircraft. This is a very complex development that typifies the systems integration work of large aerospace companies. It contains multiple computers: one for the display workstation, a central computer that directs the other devices, and a computer in the radar unit itself for passing data and radar hardware commands. The user interacts through several input devices on the workstation, and those actions are received by the central computer. The central computer performs some graphics processing, but the low-level primitives are executed on the display itself. Other sensor data comes into the system besides radar, so that it can be fused with the radar data.

5 Software Components
Radar Unit Control: controls radar hardware
Radar Item Processing: extracts information from returned radar to identify objects
Radar Database: maintains radar object tracking data
Display Manager: high-level display management
Display Console: user input device interface and primitive graphics processing
Built In Test: hardware monitoring and fault localization

These are the software components in the system. Radar Unit Control does as its name implies: it controls the radar and tells it where to scan in the skies, with what shape of radar beam and frequency. Radar Item Processing operates on the received radar signals and attempts to identify the types of objects it detects via the reflected radar; objects can be other vehicles in the radar's path. The Radar Database uses data from Radar Item Processing to keep track of the other objects' whereabouts. The Display Manager is in charge of drawing the displays and receiving user input. The Display Console works together with the Display Manager by performing low-level graphics processing and passing user inputs to the Display Manager. Finally, Built In Test is used to monitor the system hardware and identify where faults occur. More detail on these functions is found in the book writeup.

6 COCOMO Coverage in Evolutionary Lifecycle Process
This table ties together the standard development phases, the terminology used for the radar system development milestones, and the appropriate COCOMO submodels to use. The prototype is a very short development used to demonstrate feasibility of the new radar hardware design and associated radar processing algorithms. Many of the interfaces will be simulated. The Applications Composition submodel is used since the prototype is largely graphics screens.

The system is further elaborated on the breadboard. It is a true working system on non-production hardware, and only a subset of the full capabilities is demonstrated. At this stage there are still a good number of unknowns since the architecture is being fleshed out; hence we will use the Early Design submodel. It is coarse-grained, like the information about the system at this stage.

Finally, after some knowledge is gained from the breadboard, the Post-Architecture model is used for estimation during the construction phase to achieve initial operating capability, or IOC. The Post-Architecture model is the most detailed version of COCOMO II, and corresponds to the fuller system knowledge at this point.

* both top-level and detailed estimates shown

7 Prototype Size and Effort
This slide shows the table used to determine the total number of application points. The book includes an earlier table that quantifies the actual numbers of screens and reports, their associated complexities, and other third-generation software components from which the application points are calculated. Note that this procedure accounts for reuse by subtracting it out. Per the tables provided in the book, we assume a high productivity for our team of 25 new application points per person-month. At this productivity, the effort comes out to be 5.45 person-months. Finally, given a 6-week timeframe, this means we need about 4 full-time personnel to finish the prototype on time.

Productivity is "high" at 25 NAP/PM
Effort = NAP / Productivity = 136.3 / 25 = 5.45 PM (or 23.6 person-weeks)
Personnel = 23.6 person-weeks / 6 weeks ~ 4 full-time personnel
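The arithmetic on this slide is simple enough to script. Below is a minimal sketch of the Application Composition effort calculation, using the NAP and productivity figures from the slide; the function name and the 52/12 weeks-per-month conversion are our own choices, not from the book.

```python
# Application Composition effort sketch, using the figures on this slide.
# The 52/12 weeks-per-month conversion is an assumption on our part.

def app_composition_effort(nap: float, productivity: float) -> float:
    """Effort in person-months: Effort = NAP / Productivity."""
    return nap / productivity

nap = 136.3            # new application points, after the reuse adjustment
productivity = 25.0    # "high" rating: 25 NAP per person-month

effort_pm = app_composition_effort(nap, productivity)   # ~5.45 PM
effort_pw = effort_pm * 52 / 12                          # ~23.6 person-weeks
staff = effort_pw / 6                                    # 6-week schedule

print(f"{effort_pm:.2f} PM = {effort_pw:.1f} person-weeks "
      f"-> {staff:.1f} (~4) full-time personnel")
```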

8 Scale Factors for Breadboard
Factor                                Rating
Precedentedness (PREC)                Nominal
Development Flexibility (FLEX)        Low
Risk/Architecture Resolution (RESL)   High
Team Cohesion (TEAM)                  Nominal
Process Maturity (PMAT)               Nominal

Now let's get into the breadboard estimate, where we will use the Early Design model. Here are the scale factors that apply for the system; note that the book provides more detail on these ratings. The precedentedness is nominal since the company has developed these systems before, but there are some new aspects, so these considerations offset each other. There is low flexibility since the customer requirements for the system are strict. Due to a good development process that focuses on risk, the risk/architecture resolution driver is high. Team cohesion is nominal due to some offsetting factors again, and the process maturity is nominal for an SEI-CMM Level 2 rating.
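These five ratings set the exponent of the COCOMO II effort equation, E = B + 0.01 * sum(SF). A small sketch follows; the numeric weights are the COCOMO II.2000 calibration values as we recall them, so treat them as assumptions and verify against the book, especially if your tool uses a different calibration.

```python
# Scale-factor exponent sketch for the breadboard ratings on this slide.
# SF weights are the COCOMO II.2000 calibration values (an assumption if
# your tool is calibrated differently); B = 0.91 in that calibration.

SF_WEIGHTS = {  # factor -> {rating: weight}
    "PREC": {"VL": 6.20, "L": 4.96, "N": 3.72, "H": 2.48, "VH": 1.24, "XH": 0.0},
    "FLEX": {"VL": 5.07, "L": 4.05, "N": 3.04, "H": 2.03, "VH": 1.01, "XH": 0.0},
    "RESL": {"VL": 7.07, "L": 5.65, "N": 4.24, "H": 2.83, "VH": 1.41, "XH": 0.0},
    "TEAM": {"VL": 5.48, "L": 4.38, "N": 3.29, "H": 2.19, "VH": 1.10, "XH": 0.0},
    "PMAT": {"VL": 7.80, "L": 6.24, "N": 4.68, "H": 3.12, "VH": 1.56, "XH": 0.0},
}

ratings = {"PREC": "N", "FLEX": "L", "RESL": "H", "TEAM": "N", "PMAT": "N"}

sf_sum = sum(SF_WEIGHTS[f][r] for f, r in ratings.items())
exponent = 0.91 + 0.01 * sf_sum    # E = B + 0.01 * sum(SF)
print(f"Sum(SF) = {sf_sum:.2f}, exponent E = {exponent:.4f}")  # ~1.096
```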

9 Early Design Cost Drivers for Breadboard
Factor                                       Rating
Product Reliability and Complexity (RCPX)    High
Required Reuse (RUSE)                        Very High
Platform Difficulty (PDIF)                   High
Personnel Capability (PERS)                  High
Personnel Experience (PREX)                  Nominal
Facilities (FCIL)                            Nominal
Schedule (SCED)                              Nominal

Like the previous slide, the detailed rationale for these cost driver ratings is provided in the book. The system is highly complex and has to be very reliable, so the product reliability and complexity driver is high. Required reuse is very high because the corporation wants to evolve a product line around the software system and reuse as much as possible. Platform difficulty is high due to the custom nature of the complex system and the real-time processing and memory constraints. Personnel capability is high, while experience is nominal: the staff have high domain experience but very limited platform and tool experience. Facilities is nominal since the tools aren't very elaborate, and finally the schedule constraint is nominal since the breadboard isn't mandated to finish by a particular date.
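The seven Early Design drivers combine multiplicatively into the effort adjustment factor (EAF). Here is a sketch under the assumption that the multiplier values below match the COCOMO II.2000 Early Design table for these ratings; verify them against the book. Their product lands at the 1.64 EAF reported in the slide 11 output.

```python
# EAF for the breadboard: the product of the seven Early Design effort
# multipliers. The per-rating values below are our reading of the
# COCOMO II.2000 Early Design table (an assumption; verify in the book).
import math

effort_multipliers = {
    "RCPX": 1.33,  # High
    "RUSE": 1.15,  # Very High
    "PDIF": 1.29,  # High
    "PERS": 0.83,  # High (capable personnel reduce effort)
    "PREX": 1.00,  # Nominal
    "FCIL": 1.00,  # Nominal
    "SCED": 1.00,  # Nominal
}

eaf = math.prod(effort_multipliers.values())
print(f"EAF = {eaf:.2f}")   # ~1.64, matching the slide 11 output
```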

10 Breadboard System Size Calculations
This table is used to calculate the equivalent size for input to the USC COCOMO tool. It is mostly intended to be instructional in terms of how one might break up the components into their new and adapted portions, and which parameters are used to calculate equivalent size. [DESCRIBE MATRIX] All this information could be put into the tool itself for the different components as you see them here, but in this example we calculated equivalent size in a spreadsheet outside of USC COCOMO and input the final size as equivalent source lines of code. This method saved some inputting time.

Now I'd like to say a few words about how the estimating tools you use influence how you consolidate size. Different tools have different means of identifying new, reused and modified code. In USC COCOMO, for example, a component is either new or adapted, but not both. Other tools allow one to quantify the new and adapted portions within a single component. In any case, the estimator does have to quantify how much is new, reused, modified or COTS (remember: COTS is currently treated the same as reuse in our examples). The requirements evolution and volatility parameter applies to all software types except translated. Then for each adapted component, one has to quantify the percent of design modified, percent of code modified, integration required, assessment and assimilation, software understanding, and programmer unfamiliarity. All of these are fully defined in the book, and you are also encouraged to study the guidelines for quantifying the adaptation parameters. An important one to remember is that AA and SU apply only to modified code, not reused code. Note that the translated code is handled separately and does not enter into the equivalent size. For all other types, the adaptation equations are applied to come up with an equivalent size of new code for each sub-component, and these are simply added up in the total.
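For readers who want to reproduce the spreadsheet arithmetic, here is a sketch of the COCOMO II reuse model as defined in the book. The adaptation formulas are the book's; the sample component at the bottom is hypothetical, not one of the ARS figures.

```python
# COCOMO II reuse model: equivalent new SLOC for one adapted component.
# AAF = 0.4*DM + 0.3*CM + 0.3*IM, then the adaptation adjustment
# multiplier (AAM) converts adapted SLOC into equivalent new SLOC.
# Per the guidelines, AA and SU/UNFM apply to modified code; for
# black-box reuse pass su=0 and unfm=0. Translated code is excluded.

def equivalent_sloc(asloc: float, dm: float, cm: float, im: float,
                    aa: float = 0.0, su: float = 0.0,
                    unfm: float = 0.0) -> float:
    """dm, cm, im, aa, su are percentages (0-100); unfm is 0.0-1.0."""
    aaf = 0.4 * dm + 0.3 * cm + 0.3 * im
    if aaf <= 50:
        aam = (aa + aaf * (1 + 0.02 * su * unfm)) / 100
    else:
        aam = (aa + aaf + su * unfm) / 100
    return asloc * aam

# Hypothetical modified component: 10 KSLOC adapted, moderate changes.
esloc = equivalent_sloc(10_000, dm=15, cm=30, im=50, aa=4, su=30, unfm=0.4)
print(f"{esloc:,.0f} equivalent SLOC")   # 4,120
```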

11 Early Design Estimate for Breadboard
Here is the final output for the early design estimate. [DESCRIBE OUTPUT] The total equivalent size is given as a whole for the new and adapted code. The effort adjustment factor comes out to be 1.64 based on the cost drivers, and the corresponding effort is 459 person-months. Note that the translated code is a separate item here. It has a relatively small associated effort of about 29 person-months. Automatic translation is a rare practice, but this example shows that COCOMO II can handle it. The total effort sums the two for a combined effort of 488 person-months, and a corresponding schedule of 25.6 months. The translation effort is not used to calculate the schedule, since it is assumed to be off the critical path of activities; thus the 25.6 months is based only on the new and adapted code.
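Two calculations on this slide can be sketched: the separately-estimated translation effort and the schedule. The formulas are from the COCOMO II book; the constants C = 3.67, D = 0.28, B = 0.91 are the COCOMO II.2000 calibration, and the translation inputs (ASLOC, ATPROD) are illustrative values chosen to land near the reported 29 PM, not the book's actual inputs.

```python
# Translation effort is estimated separately:
#   PM_auto = ASLOC * (AT/100) / ATPROD
# Schedule is computed from new/adapted effort only (translation is
# assumed off the critical path):
#   TDEV = C * PM ** (D + 0.2 * (E - B))
# Constants below are the COCOMO II.2000 calibration (an assumption).

def translation_effort(asloc: float, at_pct: float,
                       atprod: float = 2400) -> float:
    return asloc * (at_pct / 100) / atprod

def tdev(pm: float, e_exponent: float,
         c: float = 3.67, d: float = 0.28, b: float = 0.91) -> float:
    return c * pm ** (d + 0.2 * (e_exponent - b))

# Illustrative: ~70 KSLOC fully translated at 2400 SLOC/PM -> ~29 PM.
print(f"Translation: {translation_effort(70_000, 100):.1f} PM")
# 459 PM of new/adapted effort at the slide 8 exponent -> ~25.6 months.
print(f"Schedule: {tdev(459, 1.0957):.1f} months")
```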

12 ARS Full Development for IOC
Use the Post-Architecture estimation model
same general techniques as the Early Design model for the Breadboard system, except for elaborated cost drivers
Two estimates are demonstrated: top-level and detailed
scale drivers apply to the overall system in both estimates
cost drivers are rated for the aggregate system in the top-level estimate (17 ratings)
cost drivers are refined for each individual software component in the detailed estimate (17 x 6 components = 102 ratings)

Now we will transition to the construction phase of development, where we use the Post-Architecture model. The same techniques apply as in the early design estimate, except the number of cost drivers increases; reference the book for their rationale and respective ratings. We show two estimates in the book: a top-level estimate whereby the system is rated in aggregate, and a detailed estimate where we take the information down to the individual component level. We rate the cost drivers separately for each component in the detailed estimate. Naturally this increases the workload of the estimator, but greater precision in the estimate should result.
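The Post-Architecture effort equation has the same shape as Early Design, just with the 17 elaborated drivers folded into the EAF: PM = A * Size^E * EAF. A sketch follows; A = 2.94 is the COCOMO II.2000 constant, and the size/EAF inputs are hypothetical placeholders rather than the ARS figures.

```python
# Post-Architecture effort sketch: PM = A * KSLOC**E * EAF, where EAF is
# the product of the 17 cost-driver multipliers. A = 2.94 per the
# COCOMO II.2000 calibration (an assumption if recalibrated locally).

def post_architecture_effort(ksloc: float, e_exponent: float,
                             eaf: float, a: float = 2.94) -> float:
    return a * ksloc ** e_exponent * eaf

# Hypothetical: 75 equivalent KSLOC, slide 8 exponent, EAF of 1.2.
print(f"{post_architecture_effort(75, 1.0957, 1.2):.0f} PM")  # ~400 PM
```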

13 ARS Top-Level Size Calculations

14 Post-Architecture Estimate for IOC (Top-level)
Here is the post-architecture top-level estimate. It was produced the same way as the early design estimate: we calculated the total equivalent size in a spreadsheet and input it into the tool. The automated translation component was again handled separately, and it remains a very small part of the overall effort. The top-level estimate gives us 565 person-months and 17.5 months of calendar time.

15 Post-Architecture Estimate for IOC (Detailed)
Here is the output for the detailed post-architecture estimate. We had to input all the sub-components separately in this case, since the cost drivers are rated differently for each one. The book shows all the detail for the components. As we stated before, improved precision in the estimate should result when more time is taken to rate individual components. Many unique circumstances are missed when rating a project in aggregate at a high level, since some details are glossed over. Performing a detailed estimate makes you think more about the job at hand, which is always a good idea for planning. In this example, the estimate increased by a sizable amount: it went up to 756 person-months of effort and 19.3 months of schedule. This more precise information is crucial for project success, and now more appropriate planning can take place.
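To see why per-component ratings can move the total, here is one way a detailed multi-component estimate can be aggregated: compute the nominal effort from total size (preserving the system-level scale factors), allocate it to components by size, and scale each share by that component's EAF. This allocation scheme is our assumption about the tool's internals, the sizes and EAFs below are hypothetical, and only the component names come from slide 5.

```python
# Detailed-estimate sketch: per-component EAFs applied to a size-based
# allocation of the nominal system effort. The allocation scheme is our
# assumption about the tool; sizes and EAFs are hypothetical.

components = {  # name: (hypothetical KSLOC, hypothetical component EAF)
    "Radar Unit Control":    (10, 1.10),
    "Radar Item Processing": (25, 1.45),
    "Radar Database":        (12, 1.20),
    "Display Manager":       (15, 1.15),
    "Display Console":       (8,  1.05),
    "Built In Test":         (5,  1.30),
}

A, E = 2.94, 1.0957   # COCOMO II.2000 constant and the slide 8 exponent
total_ksloc = sum(size for size, _ in components.values())
pm_nominal = A * total_ksloc ** E        # all multipliers at nominal

pm_detailed = sum(pm_nominal * (size / total_ksloc) * eaf
                  for size, eaf in components.values())
print(f"Nominal: {pm_nominal:.0f} PM -> detailed: {pm_detailed:.0f} PM")
```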

16 Sample Incremental Estimate

17 Increment Phasing

18 Increment Summary

19 Summary and Conclusions
We gave an overview of the ARS example provided in Chapter 3
We demonstrated using the COCOMO sub-models for differing lifecycle phases and levels of detail
the estimation model was matched to the known level of detail
We showed increasing the level of component detail in the Post-Architecture estimates
Incremental development was briefly covered

In summary, we just showed you some highlights from the airborne radar system example in the book. You saw how the various COCOMO sub-models can be tied to the different granularity of information available as system development progresses; it is very handy to match the model to the currently available information. As another type of iterative elaboration of a cost estimate, we showed how one could refine a top-level aggregate estimate into a detailed one by taking into account the component differences. Another interesting topic is how to handle incremental development; there is a short example at the end of Chapter 3 that we didn't have time for in this presentation, and you are encouraged to read it over. Thank you very much for your time, and happy estimating.

