1
Midterm 2 Questions and Answers
Barry Boehm, October 24, 2018
2
Which is the most pragmatic software cost estimation method that also takes into consideration requirements, hardware issues, and documentation, unlike the COCOMO model? The COCOMO II DOCU cost driver accounts for documentation. Requirements and hardware are still in the research stage. A recent research result used the number of itemized requirements ("shalls") and the application domain to produce a very early estimate of effort, within a factor of about 1.5 either way, based on 16 data points. Another early research approach for estimating hardware costs applies the COSYSMO estimate of systems engineering costs plus the application domain and number of copies to produce a very rough estimate, but with very few data points available for calibrating the model.
3
In EP schedule estimation, when you calculate the estimated schedule, you use the PM value excluding the SCED multiplier and plug the compression/expansion percentage directly into the formula as SCED%. However, when you calculate effort, SCED is treated the same as the other cost drivers, and its rating level is used in the math. Why is SCED handled so differently in schedule estimation? Is there a way to define the SCED factor more uniformly, so it can be used like the other factors (one representative form, such as the rating level) in both effort and schedule estimation? The different handling avoids getting caught in a loop. The estimated schedule in months is 3.67 × (person-months)^(0.28 + 0.2 × (E − 0.91)), where E is the effort scale exponent and the person-months exclude the SCED multiplier. Now if you want a 15% schedule compression, the Low rating applies a factor of 1.14 to the effort in person-months. But if that changed the number of person-months used to estimate the schedule, it would change the base schedule against which the compression was defined, resulting in an inconsistency with your initial base schedule estimate; see the sketch below.
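A minimal sketch of that arithmetic, assuming the published COCOMO II.2000 constants (B = 0.91, C = 3.67, D = 0.28) and illustrative values for the scale exponent and nominal effort:

```python
# Sketch: COCOMO II schedule estimation with SCED, assuming the
# COCOMO II.2000 constants B = 0.91, C = 3.67, D = 0.28.

def schedule_months(pm_ns: float, e: float, sced_pct: float) -> float:
    """TDEV = C * PM_NS**(D + 0.2*(E - B)) * (SCED% / 100).

    pm_ns    -- effort in person-months *excluding* the SCED multiplier
    e        -- effort scale exponent E
    sced_pct -- desired schedule as a percentage of nominal (85 = 15% compression)
    """
    B, C, D = 0.91, 3.67, 0.28
    return C * pm_ns ** (D + 0.2 * (e - B)) * (sced_pct / 100.0)

pm_ns = 100.0   # illustrative nominal-schedule effort
e = 1.10        # illustrative scale exponent

# Effort estimation treats SCED like any other driver: a Low rating
# (85% of nominal schedule) multiplies effort by 1.14.
pm = pm_ns * 1.14

# Schedule estimation deliberately uses pm_ns, not pm: feeding pm back in
# would change the base schedule the 15% compression was defined against.
tdev = schedule_months(pm_ns, e, sced_pct=85)
print(f"effort = {pm:.1f} PM, schedule = {tdev:.1f} months")
```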
4
On slide 23 there are 6 Cost-Effectiveness Decision Criteria, each with its own strengths and weaknesses. For a given project, these criteria may give conflicting results. If that is the case, which result should I consider the solution? The result that most criteria agree on? Are some criteria more important than others, like ROI? Or should the weight of a criterion be based on other factors about the project? Generally ROI is best, but often, if the investment segment of the payoff curve is small, ROI will be maximized at a very low volume of production, and a criterion like the effectiveness-cost difference may be more attractive from a total-profit standpoint.
5
In EC-11, I am confused about the difference between the DSC and the weighted sum, because in our homework and in the lecture example we use the DSC to calculate the weighted sum. Is the weighted sum a weighted DSC? The weighted sum is an additive combination of desired capabilities weighted by their relative importances. The DSC takes this weighted sum and applies two multiplicative factors: the fraction of the computing resources available for the application, and the fraction of time that the computer is up and running; see the sketch below.
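A minimal sketch of that relationship, with hypothetical capabilities, weights, and ratings (not the lecture's actual numbers):

```python
# Sketch: DSC = (weighted sum of capabilities) x (resource fraction) x (availability).
# Capabilities, weights, and ratings are hypothetical.

capabilities = {                  # capability: (weight, rating)
    "transaction throughput": (40, 8),
    "trace capability":       (30, 6),
    "diagnostics":            (30, 7),
}

weighted_sum = sum(w * r for w, r in capabilities.values())   # 40*8 + 30*6 + 30*7 = 710

resource_fraction = 0.7   # fraction of computing resources left for the application
availability = 0.95       # fraction of time the computer is up and running

dsc = weighted_sum * resource_fraction * availability
print(f"weighted sum = {weighted_sum}, DSC = {dsc:.1f}")      # DSC is about 472
```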
6
Pg 280 of EP7 introduces decision rules to be applied under conditions of complete uncertainty (maximax, maximin, Laplace, etc.). What criteria are used to determine which of these rules to use, beyond feeling particularly pessimistic or optimistic? Is there more concrete information we can base that decision on? The best one can do is to try applying each rule to your particular decision situation and see which one makes the most sense. The special cases where the payoffs are near their limits are good examples of poor criteria to use. Often finding the breakeven point will give you a good rationale for choosing which option is best, based on your best estimate of the probability of success; a sketch follows below.
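A small sketch of the three rules applied to a made-up two-option, two-state payoff matrix, showing that they can disagree:

```python
# Sketch: maximin, maximax, and Laplace rules on a made-up payoff matrix.
# Rows are options; columns are payoffs under states of nature 1 and 2.

payoffs = {
    "Option A": [100, 20],
    "Option B": [70, 60],
}

maximin = max(payoffs, key=lambda o: min(payoffs[o]))                    # best worst case
maximax = max(payoffs, key=lambda o: max(payoffs[o]))                    # best best case
laplace = max(payoffs, key=lambda o: sum(payoffs[o]) / len(payoffs[o]))  # best equal-weights average

print(maximin, maximax, laplace)   # Option B, Option A, Option B: the rules disagree
```

The breakeven analysis mentioned in the answer would ask for the probability p of state 1 at which the two options' expected payoffs are equal; here 100p + 20(1 − p) = 70p + 60(1 − p) gives p = 4/7, so Option A is preferred only if state 1 is judged more likely than about 57%.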
7
During the lecture on Risk Management, Prof. Boehm talked about the perception that risk management is fundamentally negative (slide 9). I used to work for a company that tried to minimize risk management because they were afraid to highlight problems. What methods can you use to change the perception of risk management and highlight the opportunity risk management brings? About the best one can do is to treat resolving the risk as an opportunity to produce greater benefits. If there is a risk of getting a poorly-matched user interface, there is an opportunity to invest in prototyping the user interface to be more satisfactory to management or attractive to purchasers. If there is a risk of slipping schedule, there is an opportunity to agree to defer the lowest-priority features and deliver an acceptable version on time.
8
How do you balance the early acceptance of risks in a project with keeping engineering-team morale high (i.e., for team members not always exposed to the strategic side)? [EC 13 slides 17-19, Day One Temptations to Avoid] If the risks are not well understood, it is good to buy information to understand or reduce them via prototyping, COTS evaluations, etc. This will generally either reduce the risks to an acceptable level or inform management of the level of risk, presenting options for risk reduction by deferring features, risk acceptance by adding budget or schedule, or risk transfer by establishing a risk reserve.
9
In slides 23-25 of EC-13, the top ten risks change nearly every decade. Between 2010 and 2011, the top ten risks seemed to have changed a considerable amount. Is there any research or development in predicting future high risks? It is always a good idea to keep up with the literature and marketplace trends. If your upgraded processors will have multicore chips, there will be risks that your legacy software will be hard to parallelize. If hackers are employing stronger penetration capabilities, there will be risks that your software will have security problems. If cloud services will be enabling much more powerful data analysis capabilities, there are risks that your conventional data analyses will be non-competitive.
10
On page 22 of the EC-9 MedFRS Econ Analysis I, it says "The best way to combat diseconomies of scale is to reduce the scale." What are ways to reduce the scale? The best way to reduce the scale is to prioritize the features and develop them incrementally. The increments will become larger as you go along, but often changes in priorities will make some parts of the system unnecessary. Another approach is to look for COTS products and cloud services that are already developed. Another is to identify product-line families in which later products can reuse earlier parts of the product line; a sketch of the scale effect follows below.
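A sketch of the arithmetic behind this, assuming a COCOMO-style effort model PM = A × KSLOC^E with an illustrative coefficient and diseconomy exponent:

```python
# Sketch: with diseconomies of scale (E > 1), four small increments cost
# less than one big build. A and E are illustrative, not a calibration.

A, E = 3.0, 1.2

def effort_pm(ksloc: float) -> float:
    return A * ksloc ** E          # COCOMO-style effort model

one_build = effort_pm(100)         # one 100-KSLOC development: ~754 PM
increments = 4 * effort_pm(25)     # four 25-KSLOC increments:  ~571 PM

print(f"{one_build:.0f} PM vs. {increments:.0f} PM")
# The savings ignore inter-increment integration effort, and in practice
# later increments also benefit from reprioritization dropping features.
```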
11
EC-9 page 21 (questioning the goodness of modularity): A) Based on the theory of queues, is it always worthwhile to divide the processors into multiple modules? How do we optimize the assignment of jobs to modules? B) What issues should be taken into consideration when providing for the availability of modules and their components? One can certainly overdo breaking up systems into many tiny modules. Some associated problems are determining which modules to develop next, whether they need to be modified, and when to reuse the original or a modified version of a module. Also, in some situations, modules may assume that other modules they rely on have been developed when they haven't. On large teams, there will also be problems when team members are unsure of what each module or module version does, or how to communicate what changes they have made to a module.
12
2. On slide 20 of EC-9, it talks about software "gold-plating." In a previous lecture, we saw that the client wanted a 1-second response time but later determined that this was not necessary and that a 4-second response time would suffice. If the client is asking the development team to develop something, how can we determine whether any of the features are gold-plating or actually necessary? There are several ways to get groups of stakeholders to prioritize features. One is to give each stakeholder a certain number (say, 10) of votes that they can cast for the features (all 10 on one option if they prefer). Adding up the votes for each feature determines the priorities. Or they can cast similar votes for the features they dislike, and those totals are subtracted from the positive votes; a sketch follows below. Other approaches are to compare return-on-investment business cases for candidate options, or to compare weighted-sum feature evaluations as in Chapter 15 of SW Engr. Economics; see the next chart.
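A sketch of that voting scheme, with hypothetical stakeholder ballots and feature names:

```python
# Sketch: 10-votes-per-stakeholder feature prioritization, with optional
# negative votes subtracted. Ballots and feature names are hypothetical.

from collections import Counter

def prioritize(positive_ballots, negative_ballots):
    """Each ballot maps feature -> votes cast (up to 10 per stakeholder)."""
    totals = Counter()
    for ballot in positive_ballots:
        totals.update(ballot)          # add "want" votes
    for ballot in negative_ballots:
        totals.subtract(ballot)        # subtract "dislike" votes
    return totals.most_common()        # highest net total = highest priority

positive = [{"fast response": 6, "audit log": 4},
            {"fast response": 2, "offline mode": 8}]
negative = [{"offline mode": 3}]

print(prioritize(positive, negative))
# [('fast response', 8), ('offline mode', 5), ('audit log', 4)]
```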
13
TPS Operating System Figure of Merit Calculation
[Chart: TPS operating system figure-of-merit calculation. Ten criteria (added cost, processor overhead, multiprocessor overhead, measurement capability, trace capability, diagnostics/error messages, maintenance support, accounting system, usage summaries, documentation) carry weights totaling 100; each system's characteristic on a criterion is rated, and the rating is multiplied by the weight. Recoverable entries: Added cost (weight 30): System A $0, rating 10, weighted 300; System A Plus $40K, rating 4, weighted 120. Measurement capability (weight 7): System A Poor, rating 2, weighted 14; System A Plus Good, rating 8, weighted 56. Trace capability: System A None; System A Plus Adequate, rating 6, weighted 48. Other recoverable weights: multiprocessor overhead 15, accounting system 12, documentation 5. Weighted-rating totals: System A 541, System A Plus 533.]
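The calculation itself is a weighted sum; a sketch using only the rows fully recoverable from the chart above:

```python
# Sketch of the figure-of-merit arithmetic: rating x weight, summed over
# criteria. Only rows fully recoverable from the chart above are used.

weights = {"added cost": 30, "measurement capability": 7}

ratings = {
    "System A":      {"added cost": 10, "measurement capability": 2},
    "System A Plus": {"added cost": 4,  "measurement capability": 8},
}

for system, r in ratings.items():
    merit = sum(weights[c] * r[c] for c in r)
    print(f"{system}: {merit}")    # System A: 314, System A Plus: 176

# Over all ten criteria the chart's totals are 541 vs. 533,
# favoring System A.
```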
14
Three decision rules for complete uncertainty, i.e. maximin, maximax, and Laplace, are mentioned in class and the reading material. None of them can be totally satisfactory, and the reading concludes that complete uncertainty about the states of nature is a very difficult position for good decision making; thus we need to make decisions with perfect or imperfect information. However, I think there still exist many situations where we must make decisions under complete uncertainty. My question is: how do we make a good decision under complete uncertainty? In my opinion, since no single decision rule achieves a good result, why not combine some of them? For example, we could assign a different weight to each rule and sum them up, with the weights calibrated by some advanced algorithm, such as a machine learning algorithm trained on large data. [Chapter 19.2, Page 282, EP-7] Other forms of decision making under uncertainty come within the category of buying information to reduce risk. One can use prototyping approaches, such as the fault tolerance risk-reduction example in the EC-13 Risk Management lecture, or the how-much-prototyping-is-enough analysis in Chapter 20 of SW Engr. Economics; see the next three charts. Other forms of buying information may address business-case analysis of alternative features or stakeholder preference analyses, as in the answer above.
15
Risk Management Plan: Fault Tolerance Prototyping
1. Objectives (The “Why”)
- Determine, reduce level of risk of the software fault tolerance features causing unacceptable performance
- Create a description of and a development plan for a set of low-risk fault tolerance features

2. Deliverables and Milestones (The “What” and “When”)
By week 3:
1. Evaluation of fault tolerance option
2. Assessment of reusable components
3. Draft workload characterization
4. Evaluation plan for prototype exercise
5. Description of prototype
By week 7:
6. Operational prototype with key fault tolerance features
7. Workload simulation
8. Instrumentation and data reduction capabilities
9. Draft description, plan for fault tolerance features
By week 10:
10. Evaluation and iteration of prototype
11. Revised description, plan for fault tolerance features
16
Risk Management Plan: Fault Tolerance Prototyping (concluded)
Responsibilities (The “Who” and “Where”)
- System Engineer: G. Smith. Tasks 1, 3, 4, 9, 11; support of tasks 5, 10
- Lead Programmer: C. Lee. Tasks 5, 6, 7, 10; support of tasks 1, 3
- Programmer: J. Wilson. Tasks 2, 8; support of tasks 5, 6, 7, 10

Approach (The “How”)
- Design-to-Schedule prototyping effort
- Driven by hypotheses about fault tolerance-performance effects
- Use real-time OS, add prototype fault tolerance features
- Evaluate performance with respect to representative workload
- Refine prototype based on results observed

Resources (The “How Much”)
- $60K: Full-time system engineer, lead programmer, programmer; (10 weeks) × (3 staff) × ($2K/staff-week)
- $0K: 3 dedicated workstations (from project pool)
- $0K: 2 target processors (from project pool)
- $0K: 1 test co-processor (from project pool)
- $10K: Contingencies
- $70K: Total
17
Net Expected Value of Prototype
Prototype cost ($K) | P(PS|SF) | P(PS|SS) | EV ($K) | Net EV ($K)
5 | 0.30 | 0.80 | 69.3 | 4.3
10 | 0.20 | 0.90 | 78.2 | 8.2
20 | 0.10 | 0.95 | 86.8 | 6.8
30 | 0.00 | 1.00 | 90.0 | 0.0

Expected value with no prototype: $60K; Net EV = EV - $60K - prototype cost.
[Figure: Net EV ($K) vs. prototype cost ($K), peaking at $8.2K net for the $10K prototype.]
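A sketch of how the chart's numbers combine, assuming (as the row arithmetic implies) a no-prototype expected value of $60K:

```python
# Sketch: net EV of prototyping = EV with prototype - EV without ($60K)
# - prototype cost, using the chart's rows.

EV_NO_PROTOTYPE = 60.0   # $K

ev_with = {5: 69.3, 10: 78.2, 20: 86.8, 30: 90.0}   # cost $K -> EV $K

net_ev = {cost: round(ev - EV_NO_PROTOTYPE - cost, 1)
          for cost, ev in ev_with.items()}
print(net_ev)                                 # {5: 4.3, 10: 8.2, 20: 6.8, 30: 0.0}

best = max(net_ev, key=net_ev.get)
print(f"best prototype budget: ${best}K")     # $10K, the peak of the curve
```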
18
For the ICSM, are there rules and/or guidelines for helping to best define the scope or boundaries between different increments? Or is the decision completely up to the specific program/project needs? Increment prioritization can be a function of stakeholder, business, or technical factors. Which stakeholders want or are willing to pay for it first or next? Which business activities need it first or next: multinational companies will generally build the first increment for their home country, then expand as a function of costs and benefits to other countries or regions. Very large systems building their own infrastructure will build the infrastructure first, because it is needed by the applications.
19
Question: In EC-13, the topic is risk management. There are five strategies that can help us manage risk. I have some confusion about buying information versus risk avoidance. Let's say we are prototyping a project in order to get some data, so that in the real process of development we could use the data to guide us. In my opinion and understanding, the process of prototyping is a way to avoid and reduce the risk. I don't know the difference between buying information and risk avoidance, and I have the same confusion about buying information and risk reduction. Please clarify. Thanks. Reference: EC-13 Risk Management. In general, if it is clear that some features are too risky to pursue, they can be avoided without the need to buy information about them. If it is clear that some options, such as reusing proven software, can reduce risk, one will not need to buy information about the reusable software. However, if there are uncertainties about the riskiness of features or the reusability of the software, one would generally invest in buying information about their levels of risk via prototyping, benchmarking, etc.
20
Question #3 - Software Risks: My organization recently implemented a feature that is dependent on a third-party data source. This data source has changed the format of its data without notice several times, breaking our app feature. We have no alternative data source for this data. In EC-13, slide 34, it's not clear to me which risk item this would fall under. I believe #3, "COTS; external components," is the appropriate place, but I'm not sure. COTS is the most common instance of this class. In general, COTS products have a new release about every 10 months, and drop support for a release after 3 new releases. A more general category is "co-dependent, independently evolving external interfaces." Many systems depend on or provide external services such as event management, food services, traffic analysis, weather analysis, etc., and have continual challenges in keeping consistent with their co-dependent systems.
21
3) In Tim Boyd's guest lecture, he introduced the notion of a technical entry point into a career followed by option shifts into both technical and non-technical managerial roles or more specialized, entirely technical roles. Given the increasingly accelerated nature of change in technology, in your estimation based on a lifetime of experience, is the purely technically focused role less feasible in modern software, given the constant change in the areas required to maintain expertise? There are risks in becoming narrowly technical and in becoming narrowly managerial. As you indicate, new technologies often make skills based on older technologies obsolete. But they also make technical-management skills obsolete even faster. When I was working at TRW for Dr. Simon Ramo (the R in TRW), he indicated that the most valuable performers at TRW were T-shaped people, who had deep skills in a given technical area but also broad skills in other disciplines. This is what we try to do in our MS program for I-shaped CS BA grads, who have so many CS courses to take as undergraduates that there is hardly any room for anything else: we include courses in management, economics, and human factors, and project courses enabling students to understand the whole life cycle and its activities.
22
The section discusses the graph of the production function with program size as output and man-months as input. But why are there three different curves on the graph? What does each of the three curves mean (organic, semidetached, embedded)? (Reference: EP-6) When the COCOMO model was first calibrated to TRW projects, their engineering complexity made scaling up more expensive, resulting in a calibrated exponent of 1.2 accounting for the diseconomies of scale. When I was able to gather data for non-TRW projects such as business data processing, I found that their diseconomies of scale were best fitted with an exponent of 1.05. Also, though, there were more complex business applications that were best fitted with an exponent of 1.12, leading to the three curves shown in Figure 11-3. Later, when we did COCOMO II, we found that some of the scaling was management-controllable, leading to the scale factors for the exponent in COCOMO II; a sketch of the three calibrations follows below.
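For reference, a sketch using the basic-COCOMO (1981) calibrations of the three modes:

```python
# Sketch: the basic-COCOMO (1981) calibrations behind the three curves.
# Effort PM = a * KDSI**b; steeper exponents mean stronger diseconomies.

MODES = {
    "organic":      (2.4, 1.05),   # small teams, familiar, flexible applications
    "semidetached": (3.0, 1.12),   # intermediate complexity
    "embedded":     (3.6, 1.20),   # tight hardware, software, regulatory constraints
}

def effort_pm(kdsi: float, mode: str) -> float:
    a, b = MODES[mode]
    return a * kdsi ** b

for mode in MODES:
    print(f"{mode:>12}: {effort_pm(100, mode):6.0f} PM for 100 KDSI")
# organic ~302 PM, semidetached ~521 PM, embedded ~904 PM
```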
23
From Slide 25, EC-9 MedFRS Econ Analysis I: "How can one use the Maximum Effectiveness/Cost Ratio decision criterion to make better decisions? Graphically, how is this criterion used to evaluate benefit at the different cost points?" The general criterion for the optimal solution is that it is within the feasible solution set, and no point with a higher value is in the feasible set. This is the case for the solution with the highest effectiveness/cost ratio. The contours of constant effectiveness/cost are straight lines emanating from the origin, and rotating these lines until they just touch the feasible set identifies the optimal MedFRS solution, as shown in the next chart.
24
Maximum Effectiveness/Cost Ratio
[Figure: Effectiveness E (tr/sec, axis 400 to 2400) versus cost C ($K, axis 100 to 700), with candidate solutions R, K, and L. Contours of constant effectiveness/cost are straight lines through the origin; the lines Eff/Cost = 8 and Eff/Cost = 3.69 are shown, and rotating the line until it just touches the feasible set locates the optimum.]
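A sketch of applying the criterion to a finite set of candidate (cost, effectiveness) points; the points are made up, but the best and worst ratios match the chart's 8 and 3.69 contours:

```python
# Sketch: maximum effectiveness/cost over a finite feasible set.

feasible = [          # (cost C in $K, effectiveness E in tr/sec)
    (100, 369),
    (200, 1600),
    (300, 1900),
    (500, 2200),
]

best = max(feasible, key=lambda p: p[1] / p[0])
print(best, best[1] / best[0])   # (200, 1600) with Eff/Cost = 8.0

# Geometrically each ratio is the slope of a line through the origin;
# the winner is the feasible point lying on the steepest such line.
```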
25
For the weighted-sum figure-of-merit calculation, I am wondering how the weight is decided for each criterion. And a step further: is the decision based on the specific number, or could it be based on a range? For example, System A rates 541 in total and System A Plus rates 533 in total, but we might decide to choose A Plus when it is less than 10 points below A. The weights are chosen by stakeholder consensus. In the example in Section 15.5 of the SW Engr. Economics book, the weights were determined by consensus among the team leaders for Engineering Text Processing, Economics, and Business Applications. The comparison between System A and System A Plus in Chart 20 of EC-10 has 541 points for System A and 533 points for System A Plus, which would lead to choosing System A (see next chart).
26
TPS Operating System Figure of Merit Calculation
[Chart: TPS operating system figure-of-merit calculation, repeated from earlier in the deck. Ten criteria (added cost, processor overhead, multiprocessor overhead, measurement capability, trace capability, diagnostics/error messages, maintenance support, accounting system, usage summaries, documentation) carry weights totaling 100; each system's characteristic on a criterion is rated, and the rating is multiplied by the weight. Recoverable entries: Added cost (weight 30): System A $0, rating 10, weighted 300; System A Plus $40K, rating 4, weighted 120. Measurement capability (weight 7): System A Poor, rating 2, weighted 14; System A Plus Good, rating 8, weighted 56. Trace capability: System A None; System A Plus Adequate, rating 6, weighted 48. Weighted-rating totals: System A 541, System A Plus 533.]
27
[Refer to EC-11, graph of TPS Reliability, Availability and Performance on Slide 3] The professor explained that the availability and the reliability of the system go down when the number of processors increases. Explain this with the help of the graph given on Slide 3 of EC-11. Also, state what impact an increase in the number of processors has on performance (trans/sec). The system architecture unfortunately has the whole system going down when one of the processors goes down. The more processors the system has, the more likely it is that the system will fail when one of the processors fails. However, when the system is up, the more processors it has, the more transactions it can process. That is what the graph shows; see the next chart and the sketch that follows it.
28
TPS Reliability, Availability, and Performance
[Figure: Reliability Rel(N) and availability Av(N) (axis 0.94 to 1.00) fall, while performance E(N) (axis 400 to 2400 trans/sec) rises, as the number of processors N goes from 1 to 6; DSC = (SC)(E(N))(Av(N)).]
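A sketch of the effect, assuming each processor is up with independent probability p and the series-style architecture described above (the whole system is down whenever any one processor is down); the per-processor figures are illustrative:

```python
# Sketch: series-style availability vs. throughput with N processors.

p = 0.99          # single-processor availability (illustrative)
per_proc = 400    # tr/sec contributed by each processor while the system is up

for n in range(1, 7):
    availability = p ** n                       # falls as N grows
    delivered = per_proc * n * availability     # expected delivered throughput
    print(f"N={n}: Av={availability:.3f}, delivered={delivered:6.0f} tr/sec")
# Raw capacity rises linearly with N, but availability decays as p**N,
# which is the trade-off the DSC = (SC)(E(N))(Av(N)) formula captures.
```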
29
Explain in detail the geometric view of finding the optimal solution, given the necessary and sufficient conditions. And what is an isoquant? Ref: PPT charts, Multiple-Goal Decision Analysis II. As shown in the next two charts, the necessary and sufficient conditions for an optimal solution are that the solution point be in the feasible area, and that no isoquant having a higher value contain any feasible points. An isoquant (iso: equal, quant: quantity) is a contour or area along which values are equal.
30
Optimal Solution: Necessary and Sufficient Conditions
The optimal solution (X1, X2, …, Xn)max and the optimal value Vmax are characterized by the necessary and sufficient conditions:
- (X1, X2, …, Xn)max is a feasible point on the isoquant f(X1, X2, …, Xn) = Vmax
- If V > Vmax, then its isoquant f(X1, X2, …, Xn) = V does not contain any feasible points
31
Geometric View
[Figure: Decision space plotted over decision variables x1 and xn. Constraint boundaries g1(x1, …, xn) = b1, g2(x1, …, xn) = b2, and g3(x1, …, xn) = b3 bound the feasible set; objective-function isoquants f(x1, …, xn) = v1, v2, v3, v4, vmax, v5 sweep across the space. The optimal solution is the feasible point on the isoquant f = vmax; isoquants with higher values, such as v5, contain only infeasible points.]
32
In the analysis of net value vs. activity level on page 6 of EC-10, the profitable phase should be the interval before net value reaches its maximum. But the next page says that if MNV < 0, one should decrease the activity level. So how can MNV < 0 occur in the profitable segment, which should be monotonically increasing? What's the actual business case for this situation? The profitable segment includes all values of the activity level x for which the total net value is greater than zero. This includes both its ascending (positive marginal net value) and descending (negative marginal net value) segments. The optimal activity level is at the peak of the net value curve, where the marginal net value is zero; see the next three charts.
33
Marginal Net Value Decision Rule
In the “profitable” segment:
- If MNV > 0, increase the activity level
- If MNV < 0, decrease the activity level
- If MNV = 0, the activity level is optimal

MNV = d(TV)/dx - dC/dx

For Option B, with VT = value of each tr/sec:
C(N) = 20N, so dC/dN = 20
TV(N) = VT(840N - 40N²), so d(TV)/dN = VT(840 - 80N)
Setting MNV = 0: 20 = VT(840 - 80Nmax)
Nmax = (840VT - 20) / 80VT = 10.5 - 1/(4VT)
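A sketch of the Option B solution above as code:

```python
# Sketch: Option B optimum from the chart above. MNV = d(TV)/dN - dC/dN
# with TV(N) = VT*(840N - 40N^2) and C(N) = 20N; setting MNV = 0 gives
# 20 = VT*(840 - 80*Nmax).

def n_max(vt: float) -> float:
    return 10.5 - 1.0 / (4.0 * vt)      # Nmax = (840*VT - 20) / (80*VT)

for vt in (0.5, 1.0, 2.0):              # illustrative values of $ per tr/sec
    print(f"VT = {vt}: Nmax = {n_max(vt):.2f}")
# As VT grows, marginal cost matters less and Nmax approaches 10.5.
```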
34
Net Value vs. Activity Level
[Figure: Net value NV = TV - C versus activity level X. The curve is negative in the investment segment, positive over the profitable segment between X1 and X2, and negative again under over-investment; it peaks at NVmax at Xmax.]
35
Marginal Net Value
[Figure: MNV = d(TV)/dx - dC/dx versus activity level X; MNV is positive below Xmax, zero at Xmax, and negative above it.]