
1 Lessons Learned for Development and Management of Large-Scale Software Systems
© Copyright 2011. Richard W. Selby. All rights reserved.
Rick Selby, Director of Software Products, Northrop Grumman Aerospace Systems, 310-813-5570, Rick.Selby@NGC.com
Adjunct Professor of Computer Science, University of Southern California

2 Organizational Charter Focuses on Embedded Software Products
 Embedded software for advanced robotic spacecraft platforms, high-bandwidth satellite payloads, and high-power laser systems
 Emphasis on both system management and payload software
 Reusable, reconfigurable software architectures and components
 Languages: O-O to C to assembly
 CMMI Level 5 for Software in February 2004; ISO/AS9100; Six Sigma
 High-reliability, long-life, real-time embedded software systems
[Figure: software process flow for each build, with 3-15 builds per program; peer review, development lab, and analysis activities shown; programs include Prometheus/JIMO, GeoLITE, AEHF, MTHEL, JWST, NPOESS, EOS Aqua/Aura, Chandra, and Airborne Laser]

3 Lessons Learned for Development and Management of Large-Scale Software Systems
 Early planning
  People are the largest lever
  Engage your stakeholders and set expectations
  Embrace change because change is value
 Lifecycle and architecture strategy
  Prioritize features and align resources for high payoff
  Develop products incrementally
  Iteration facilitates efficient learning
  Reuse drives favorable effort and quality economics
 Execution
  Organize to enable parallel activities
  Invest resources in high return-on-investment activities
  Automate testing for early and frequent defect detection
  Create schedule margin by delivering early
 Decision making
  Measurement enables visibility
  Modeling and estimation improve decision making
  Apply risk management to mitigate risks early

4 Lessons Learned for Development and Management of Large-Scale Software Systems
 Early planning
 Lifecycle and architecture strategy
 Execution
 Decision making

5 People are the Largest Lever
 Barry Boehm's comparisons of actual productivity rates across projects substantiated the COCOMO productivity multiplier factors, such as the factor of 4.18 attributable to personnel/team capability.
Source: B. Boehm, "Improving Software Productivity," IEEE Computer, Vol. 20, No. 9, September 1987, pp. 43-57.
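To show how cost-driver multipliers of this kind compound, here is a minimal COCOMO-style sketch. The model form (effort = a * KSLOC^b scaled by a product of effort multipliers) follows Boehm; the specific multiplier values and the two-driver comparison below are illustrative, not quoted from this presentation.

```python
# Illustrative COCOMO-style effort estimate. The multiplier values below
# are examples of capability/experience drivers, not figures from the slide.
def cocomo_effort(ksloc, multipliers, a=2.4, b=1.05):
    """Effort in person-months: a * KSLOC^b scaled by the product
    of the cost-driver effort multipliers."""
    em = 1.0
    for m in multipliers:
        em *= m
    return a * ksloc ** b * em

# A strong team (multipliers < 1) vs. a weak team (multipliers > 1).
strong = cocomo_effort(100, [0.71, 0.82])  # high capability, high experience
weak = cocomo_effort(100, [1.46, 1.29])    # low capability, low experience
print(round(weak / strong, 2))  # -> 3.23: a 3x swing from just two drivers
```

The size term cancels in the ratio, so the productivity gap comes entirely from the multiplier product, which is the "people are the largest lever" point in numeric form.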

6 Engage Your Stakeholders and Set Expectations
Source: B. Boehm, "Critical Success Factors for Schedule Estimation and Improvement," 26th International Forum on Systems, Software, and COCOMO Cost Modeling, November 2, 2011.

7 Embrace Change because Change is Value
Criteria for a high adoption rate for innovations:
 Relative advantage – The innovation is technically superior (in terms of cost, functionality, "image", etc.) to the technology it supersedes.
 Compatibility – The innovation is compatible with the existing values, skills, and work practices of potential adopters.
 Lack of complexity – The innovation is not relatively difficult to understand and use.
 Trialability – The innovation can be experimented with on a trial basis without undue effort and expense; it can be implemented incrementally and still provide a net positive benefit.
 Observability – The results and benefits of the innovation's use can be easily observed and communicated to others.
Source: E. M. Rogers, Diffusion of Innovations, Free Press, New York, 1983.

8 Lessons Learned for Development and Management of Large-Scale Software Systems
 Early planning
 Lifecycle and architecture strategy
 Execution
 Decision making

9 Prioritize Features and Align Resources for High Payoff
 Synchronize-and-stabilize lifecycle has planning, development, and stabilization phases
 Planning phase
  Vision statement – Product and program management use extensive customer input to identify and prioritize product features
  Specification document – Based on the vision statement, program management and the development group define feature functionality, architectural issues, and component interdependencies
  Schedule and feature team formation – Based on the specification document, program management coordinates the schedule and arranges feature teams that each contain approximately 1 program manager, 3-8 developers, and 3-8 testers (who work in parallel 1:1 with developers)
 Development phase
  Program managers coordinate evolution of the specification. Developers design, code, and debug. Testers pair up with developers for continuous testing.
  Subproject I – First 1/3 of features: most critical features and shared components
  Subproject II – Second 1/3 of features
  Subproject III – Final 1/3 of features: least critical features
 Stabilization phase
  Program managers coordinate OEMs and ISVs and monitor customer feedback. Developers perform final debugging and code stabilization. Testers recreate and isolate errors.
  Internal testing – Thorough testing of the complete product within the company
  External testing – Thorough testing of the complete product outside the company by "beta" sites such as OEMs, ISVs, and end-users
  Release preparation – Prepare the final release of the "golden master" version and documentation for manufacturing
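The subproject partitioning above (most critical third first, least critical third last) can be sketched as a simple split of a priority-ranked feature list. The feature names below are hypothetical placeholders, not from the presentation.

```python
def split_into_subprojects(ranked_features, parts=3):
    """Partition a priority-ranked feature list into roughly equal
    subprojects; the first subproject holds the most critical features."""
    n = len(ranked_features)
    size = -(-n // parts)  # ceiling division so no feature is dropped
    return [ranked_features[i:i + size] for i in range(0, n, size)]

# Features already ranked most- to least-critical (hypothetical names).
features = ["auth", "storage", "sync", "search", "export", "themes"]
subprojects = split_into_subprojects(features)
print(subprojects)  # -> [['auth', 'storage'], ['sync', 'search'], ['export', 'themes']]
```

Putting the most critical features and shared components in Subproject I means the riskiest work gets the most schedule in front of it, which is the point of the ordering.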

10 Develop Products Incrementally
 Synchronize-and-stabilize lifecycle timeline and milestones enable frequent incremental deliveries
[Figure: lifecycle timeline showing a development phase of 6-16 months]

11 Iteration Facilitates Efficient Learning
 Incremental software builds deliver early capabilities and accelerate integration and test
 Iteration helps refine problem statements, create potential solutions, and elicit feedback

12 Reuse Drives Favorable Effort and Quality Economics
 Analyses of component-based software reuse show favorable trends for decreasing faults
 Data from 25 NASA systems
 Overall difference is statistically significant (α < .0001). The number of components (or modules) in each category is 1629, 205, 300, 820, and 2954, respectively.
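The shape of this analysis is a per-category fault rate comparison. The sketch below uses invented category labels and fault counts purely to illustrate the computation; the actual NASA figures are in the underlying study.

```python
# Hypothetical per-category totals (both labels and numbers are invented);
# the analysis shape is faults normalized by component count per category.
categories = {
    # category: (component_count, total_faults)
    "complete reuse": (2954, 60),
    "slight revision": (820, 45),
    "major revision": (300, 40),
    "new development": (1629, 310),
}

fault_rates = {name: faults / count
               for name, (count, faults) in categories.items()}

for name, rate in sorted(fault_rates.items(), key=lambda kv: kv[1]):
    print(f"{name}: {rate:.3f} faults per component")
```

With the invented numbers, completely reused components show the lowest fault rate and newly developed components the highest, matching the favorable trend the slide reports.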

13 Lessons Learned for Development and Management of Large-Scale Software Systems
 Early planning
 Lifecycle and architecture strategy
 Execution
 Decision making

14 Organize to Enable Parallel Activities

15 Invest Resources in High Return-on-Investment Activities
                        Total   Ave.   Ave. / EKSLOC
 Reviews                  257     29     N/A
 Prevention cycles         15    1.7     N/A
 Defects                 2621    291     7.3
 Defects per review       N/A     15     N/A
 Defects out-of-phase     N/A   8.1%     1.3
 Return-on-investment (ROI) for software peer reviews ranges from 9:1 to 3800:1 per project
 Return-on-investment (ROI) = net cost avoidance divided by non-recurring cost
 2621 defects, 257 reviews, 9 systems, 1.5 years
 High ROI drivers:
  Mature and effective processes already in place
  Significant new scope under development
  Early lifecycle peer reviews (e.g., requirements phase)
  Four of the five programs with >80% requirements and design defects had relatively higher ROI
[Figure: ROI by project]
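The ROI definition on the slide reduces to simple arithmetic. A minimal sketch, using hypothetical cost figures chosen to land at 9:1, the low end of the reported range:

```python
def peer_review_roi(cost_avoidance, review_cost):
    """ROI = net cost avoidance divided by non-recurring review cost,
    per the definition on the slide."""
    return (cost_avoidance - review_cost) / review_cost

# Hypothetical project: reviews cost 50 units and avoid 500 units of
# downstream rework, giving (500 - 50) / 50 = 9.0, i.e. 9:1.
print(peer_review_roi(500, 50))  # -> 9.0
```

The huge spread in the reported range (9:1 to 3800:1) is driven by the numerator: catching a requirements defect before design avoids far more rework than catching it in test.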

16 Automate Testing for Early and Frequent Defect Detection
 Distribution of software defect injection phases based on using peer reviews across 12 system development phases
 3418 defects, 731 peer reviews, 14 systems, 2.67 years
 49% of defects injected during requirements phase

17 Create Schedule Margin by Delivering Early
 Critical path defines the path through the network containing activities with zero slack
[Figure: activity network diagram with the critical path highlighted]
Source: "The Network Diagram and Critical Path," www.slideshare.net, May 2010.
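The zero-slack definition above can be computed directly with the standard forward pass (earliest start/finish) and backward pass (latest start/finish). This is a generic critical-path sketch on a hypothetical four-activity network, not a network from the presentation.

```python
def critical_path(activities):
    """activities: {name: (duration, [predecessor names])}.
    Returns the zero-slack activities in insertion order."""
    # Forward pass: earliest start (es) and earliest finish (ef).
    es, ef = {}, {}
    def forward(a):
        if a not in ef:
            dur, preds = activities[a]
            es[a] = max((forward(p) for p in preds), default=0)
            ef[a] = es[a] + dur
        return ef[a]
    for a in activities:
        forward(a)
    finish = max(ef.values())

    # Backward pass: latest start (ls) and latest finish (lf).
    succs = {a: [] for a in activities}
    for a, (_, preds) in activities.items():
        for p in preds:
            succs[p].append(a)
    ls, lf = {}, {}
    def backward(a):
        if a not in ls:
            dur, _ = activities[a]
            lf[a] = min((backward(s) for s in succs[a]), default=finish)
            ls[a] = lf[a] - dur
        return ls[a]
    for a in activities:
        backward(a)

    # Critical path = activities with zero slack (earliest start == latest start).
    return [a for a in activities if es[a] == ls[a]]

# Hypothetical network: A feeds both B and C, which feed D; B is the long leg.
net = {"A": (2, []), "B": (4, ["A"]), "C": (1, ["A"]), "D": (3, ["B", "C"])}
print(critical_path(net))  # -> ['A', 'B', 'D']
```

Here C has 3 units of slack, which is exactly the schedule margin the slide's lesson recommends creating: finishing off-critical-path work early widens that margin without moving the delivery date.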

18 Lessons Learned for Development and Management of Large-Scale Software Systems
 Early planning
 Lifecycle and architecture strategy
 Execution
 Decision making

19 Measurement Enables Visibility
 Interactive metric dashboards provide a framework for visibility, flexibility, integration, and automation
 Interactive metric dashboards incorporate a variety of information and features to help developers and managers characterize progress, identify outliers, compare alternatives, evaluate risks, and predict outcomes

20 Modeling and Estimation Improve Decision Making
 Target: identify error-prone (top 25%) and effort-prone (top 25%) components
 16 large NASA systems
 960 configurations
 Models use metric-driven decision trees and networks
 Analyses trade off prediction consistency versus completeness
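The prediction target on this slide, the top 25% of components by fault count, is just a quartile labeling that a decision-tree model would then learn to predict from metrics. A sketch of that labeling with hypothetical component names and fault counts:

```python
# Label the top 25% most fault-prone components as the prediction target
# for a metric-driven model (component names and counts are invented).
def top_quartile_targets(fault_counts):
    """Return the set of component names in the top 25% by fault count."""
    ranked = sorted(fault_counts, key=fault_counts.get, reverse=True)
    cutoff = max(1, len(ranked) // 4)
    return set(ranked[:cutoff])

faults = {"nav": 42, "comms": 7, "power": 3, "thermal": 1,
          "gnc": 28, "telemetry": 5, "fdir": 19, "cmd": 2}
print(sorted(top_quartile_targets(faults)))  # -> ['gnc', 'nav']
```

Targeting a fixed quartile rather than an absolute fault threshold is what makes the consistency-versus-completeness tradeoff meaningful: the model is always hunting the same-sized "worst" set.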

21 Apply Risk Management to Mitigate Risks Early
[Figure: risk "burn down" chart plotting risk level (High/Moderate/Low) against program events (SDR, CSRR, SwRR, PDR, CDR) and technology readiness levels (TRL 4 through TRL 7); last updated 14-Dec-05]
Exit/success criteria:
1. BM1 complete; customer concurs with approach.
2. Software requirements scope estimated (preliminary).
3. Software control board established (preliminary); change control process established.
4. SDP released. Spec tree defined.
5. RTOS lab evaluation completed. Capabilities validated using sim.
6. Software requirements scope estimated (final).
7. System development process flow models implemented.
8. Spacecraft/subsystems/etc. users define use cases (for I/Fs, functions, nominal ops, off-nominal ops, etc.) completed. Validated using models/sim.
9. Finalize IFC1 requirements: Infrastructure SW completed. Validated requirements using models/sim.
10. Baseline allocation of SW requirements to IFCs with growth/correction/deficiency completed.
11. Software control board (final) established.
12. SwRR conducted. NASA customer agrees with software requirements.
13. Finalize IFC2 requirements: Inter-module & inter-subsystem I/Fs completed. Validated requirements using models/sim.
14. Initial end-to-end architecture model completed.
15. Finalize IFC3 requirements: Subsystems major functions completed. Validated requirements using models/sim.
16. Finalize IFC4 requirements: Nominal operations completed. Validated requirements using models/sim.
17. Deliver IFC3: Subsystems major functions completed. Validated capabilities using sim.
18. Finalize IFC5 requirements: Subsystems off-nominal operations completed. Validated requirements using models/sim.
19. Finalize IFC5 requirements: Subsystems off-nominal operations completed. Validated requirements using models/sim.
20. Deliver IFC7: No new capabilities; only system I&T corrections completed. SW complete for 1st mission.
 Projects define risk mitigation “burn down” charts with specific tasks and exit criteria
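A burn-down chart of this kind is essentially a count of open exit criteria remaining at each program event. The event names below are taken from the slide; the criteria counts are hypothetical.

```python
# Sketch of a risk "burn down": open exit criteria remaining at each
# program event. Event names from the slide; counts are invented.
events = ["SDR", "SwRR", "PDR", "CDR"]
open_criteria = {"SDR": 20, "SwRR": 14, "PDR": 8, "CDR": 2}

def burn_down(events, open_criteria):
    """Return (event, remaining, retired-since-previous-event) tuples."""
    rows, prev = [], None
    for e in events:
        remaining = open_criteria[e]
        rows.append((e, remaining, 0 if prev is None else prev - remaining))
        prev = remaining
    return rows

for event, remaining, retired in burn_down(events, open_criteria):
    print(f"{event}: {remaining} open, {retired} retired since last event")
```

Tracking the retired-per-event column is what lets a program tell early whether risks are actually burning down on the pace the plan assumes, rather than discovering the backlog at CDR.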

22 Lessons Learned for Development and Management of Large-Scale Software Systems
 Early planning
  People are the largest lever
  Engage your stakeholders and set expectations
  Embrace change because change is value
 Lifecycle and architecture strategy
  Prioritize features and align resources for high payoff
  Develop products incrementally
  Iteration facilitates efficient learning
  Reuse drives favorable effort and quality economics
 Execution
  Organize to enable parallel activities
  Invest resources in high return-on-investment activities
  Automate testing for early and frequent defect detection
  Create schedule margin by delivering early
 Decision making
  Measurement enables visibility
  Modeling and estimation improve decision making
  Apply risk management to mitigate risks early

