
 The progress toward project goals and the quality of software products must be measurable throughout the software development cycle.
 Metrics values provide an important perspective for managing the process.
 The most useful metrics are extracted directly from the existing artifacts.

The primary themes of a modern software development process tackle the central management issues of complex software:
 Getting the design right by focusing on architecture first.
 Managing risk through iterative development.
 Reducing the complexity with component-based techniques.
 Making software progress and quality tangible through instrumented change management.
 Automating the overhead and bookkeeping activities through the use of round-trip engineering and integrated environments.

The goals of software metrics are to provide the development team and the management team with the following:
 An accurate assessment of progress to date.
 Insight into the quality of the evolving software product.
 A basis for estimating the cost and schedule for completing the product with increasing accuracy over time.

THE SEVEN CORE METRICS
Management indicators:
 Work and progress (work performed over time)
 Budgeted cost and expenditures (cost incurred over time)
 Staffing and team dynamics (personnel changes over time)

Quality indicators:
 Change traffic and stability (change traffic over time)
 Breakage and modularity (average breakage per change over time)
 Rework and adaptability (average rework per change over time)
 Mean time between failures (MTBF) and maturity (defect rate over time)

The seven core metrics are based on common sense and field experience with both successful and unsuccessful metrics programs. Their attributes include the following:
 They are simple, objective, easy to collect, easy to interpret, and hard to misinterpret.
 Collection can be automated and non-intrusive.
 They provide for consistent assessments throughout the life cycle and are derived from the evolving product baselines rather than from a subjective assessment.
 They are useful to both management and engineering personnel for communicating progress and quality in a consistent format.
 Their fidelity improves across the life cycle.

MANAGEMENT INDICATORS
There are three fundamental sets of management metrics: technical progress, financial status, and staffing progress. Most managers know their resource expenditures in terms of costs and schedule. The management indicators recommended here include standard financial status based on an earned value system, objective technical progress metrics tailored to the primary measurement criteria for each major team of the organization, and staffing metrics that provide insight into team dynamics.

WORK AND PROGRESS
The various activities of an iterative development project can be measured by defining a planned estimate of the work in an objective measure, then tracking progress against that plan. The default perspectives of this metric would be as follows:
 Software architecture team: use cases demonstrated.
 Software development team: SLOC under baseline change management, SCOs closed.
 Software assessment team: SCOs opened, test hours executed, evaluation criteria met.
 Software management team: milestones completed.
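The planned-versus-actual tracking described above can be sketched in a few lines of Python. This is a minimal illustration: the team objectives, the planned totals, and the completed counts are invented sample numbers, not taken from any real project.

```python
# Hypothetical sketch: tracking work and progress against an objective plan.
# Planned totals and completed counts below are invented sample data.

def percent_complete(planned_total, completed):
    """Progress as a percentage of the planned objective total."""
    if planned_total <= 0:
        raise ValueError("planned_total must be positive")
    return 100.0 * completed / planned_total

# Default progress perspectives per team (objective measures from the text):
progress = {
    "architecture (use cases demonstrated)": percent_complete(20, 14),
    "development (SCOs closed)":             percent_complete(150, 90),
    "assessment (test hours executed)":      percent_complete(400, 260),
    "management (milestones completed)":     percent_complete(8, 5),
}
for team, pct in progress.items():
    print(f"{team}: {pct:.0f}% complete")
```

Each team's progress is expressed against its own objective measure, so the same percentage scale can be compared across teams on one status display.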

BUDGETED COST AND EXPENDITURES
With an iterative development process, it is important to plan the near-term activities (usually a window of time less than six months) in detail and leave the far-term activities as rough estimates to be refined as the current iteration is winding down and planning for the next iteration becomes crucial.

Modern software processes are amenable to financial performance measurement through an earned value approach. The basic parameters of an earned value system, usually expressed in units of dollars, are as follows:
 Expenditure plan: the planned spending profile for a project over its planned schedule.
 Actual progress: the technical accomplishment relative to the planned progress underlying the spending profile.
 Actual cost: the actual spending profile for a project over its actual schedule.
 Earned value: the value that represents the planned cost of the actual progress.
 Cost variance: the difference between the actual cost and the earned value.
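The parameters above map onto standard earned-value bookkeeping. Here is a hedged sketch using the common BCWS/BCWP/ACWP abbreviations and invented dollar figures; the cost-variance sign convention follows the definition above (actual cost minus earned value), and schedule variance (planned expenditure minus earned value) is standard earned-value vocabulary added for completeness:

```python
# Hedged sketch of an earned value report. Sample dollar figures are invented.
#   bcws: budgeted cost of work scheduled (expenditure plan to date)
#   bcwp: budgeted cost of work performed (earned value)
#   acwp: actual cost of work performed (actual cost to date)

def earned_value_report(bcws, bcwp, acwp):
    return {
        "earned_value": bcwp,
        # Positive cost variance: actual spending exceeds the value earned.
        "cost_variance": acwp - bcwp,
        # Positive schedule variance: plan expected more value by now.
        "schedule_variance": bcws - bcwp,
    }

report = earned_value_report(bcws=500_000, bcwp=450_000, acwp=480_000)
print(report)  # cost variance is positive: the project spent more than it earned
```

A project tracking these three parameters per iteration can refine its far-term estimates as each iteration closes, as described above.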

STAFFING AND TEAM DYNAMICS
 An iterative development project should start with a small team until the risks in the requirements and architecture have been suitably resolved.
 It is reasonable to expect the maintenance team to be smaller than the development team for small projects.
 For a commercial product development, the sizes of the maintenance and development teams may be the same.

Increases in staff can slow overall project progress as new people consume the productive time of existing people in coming up to speed. Low attrition of good people is a sign of success. Engineers are highly motivated by making progress in getting something to work; this is the recurring theme underlying an efficient iterative development process. If this motivation is not there, good engineers will migrate elsewhere. An increase in unplanned attrition – namely, people leaving a project prematurely – is one of the most glaring indicators that a project is destined for trouble.

QUALITY INDICATORS
The four quality indicators are based primarily on the measurement of software change across evolving baselines of engineering data (such as design models and source code):
1. Change traffic and stability
2. Breakage and modularity
3. Rework and adaptability
4. MTBF and maturity

CHANGE TRAFFIC AND STABILITY Overall change traffic is one specific indicator of progress and quality. Change traffic is defined as the number of software change orders opened and closed over the life cycle. This metric can be collected by change type, by release, across all releases, by team, by components, by subsystem, and so forth. Coupled with the work and progress metrics, it provides insight into the stability of the software and its convergence towards stability (or divergence towards instability). Stability is defined as the relationship between opened versus closed SCOs.
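The opened-versus-closed relationship described above can be sketched directly from SCO counts. The monthly counts below are invented sample data; a shrinking open backlog indicates convergence toward stability, a growing one indicates divergence.

```python
# Illustrative sketch: change traffic and stability from SCO counts per period.
# The per-period SCO counts are invented sample data.

opened_per_period = [5, 12, 18, 15, 9, 4]
closed_per_period = [2, 8, 15, 16, 12, 8]

def cumulative(xs):
    """Running total: change traffic over the life cycle."""
    total, out = 0, []
    for x in xs:
        total += x
        out.append(total)
    return out

cum_open = cumulative(opened_per_period)
cum_closed = cumulative(closed_per_period)

# Stability indicator: the gap between cumulative opened and closed SCOs.
backlog = [o - c for o, c in zip(cum_open, cum_closed)]
print(backlog)  # [3, 7, 10, 9, 6, 2] -> backlog peaks, then converges
```

The same computation can be partitioned by change type, release, team, or component, as the text notes.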

BREAKAGE AND MODULARITY Breakage is defined as the average extent of change, which is the amount of software baseline that needs rework. Modularity is the average breakage trend over time.

Rework and Adaptability Rework is defined as the average cost of change, which is the effort to analyze, resolve, and retest all changes to software baselines. Adaptability is defined as the rework trend over time. For a healthy project, the trend expectation is decreasing or stable. In a mature iterative development process, earlier changes (architectural changes, which affect multiple components and people) are expected to require more rework than later changes (implementation changes, which tend to be confined to a single component or person). Rework trends that are increasing with time clearly indicate that product maintainability is suspect.
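Breakage and rework are both per-change averages, so they can be computed from the same set of SCO records. The record fields and numbers below are hypothetical; modularity and adaptability would then be the trends of these averages across successive periods or releases.

```python
# Hedged sketch: average extent of change (breakage) and average cost of
# change (rework) from SCO records. Field names and values are invented.

scos = [
    {"broken_sloc": 400, "rework_hours": 40},  # early, architectural change
    {"broken_sloc": 250, "rework_hours": 24},
    {"broken_sloc": 60,  "rework_hours": 6},   # later, implementation change
    {"broken_sloc": 30,  "rework_hours": 3},
]

# Breakage: average amount of baseline software reworked per change.
breakage = sum(s["broken_sloc"] for s in scos) / len(scos)
# Rework: average effort to analyze, resolve, and retest per change.
rework = sum(s["rework_hours"] for s in scos) / len(scos)
print(breakage, rework)  # 185.0 18.25
```

In a healthy project the per-change averages computed this way should fall from early (architectural) to late (implementation) changes.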

MTBF and Maturity MTBF is the average usage time between software faults. MTBF is computed by dividing the test hours by the number of type 0 and type 1 SCOs. Maturity is defined as the MTBF trend over time. Early insight into maturity requires that an effective test infrastructure be established. Systems of components are more effectively tested by using statistical techniques. Consequently, the maturity metrics measure statistics over usage time rather than product coverage.
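The MTBF computation stated above is a single division. In this sketch, type 0 and type 1 are taken to be the most severe SCO categories (critical failures and serious errors, which is an assumption about the severity scheme); the sample numbers are invented.

```python
# Sketch of MTBF as defined above: test hours divided by the count of
# type 0 and type 1 SCOs. The severity meaning of type 0/1 is assumed
# (critical failures and serious errors); sample figures are invented.

def mtbf(test_hours, type0_scos, type1_scos):
    faults = type0_scos + type1_scos
    if faults == 0:
        return float("inf")  # no severe faults observed in this usage window
    return test_hours / faults

# Maturity is the MTBF trend over time: an increasing MTBF across
# releases indicates a maturing product.
print(mtbf(1200, 3, 9))  # 100.0 usage hours between failures
```

Because the denominator counts faults over usage time rather than product coverage, maximizing test hours (as the text recommends) directly improves the fidelity of this statistic.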

Software errors can be categorized into two types: deterministic and nondeterministic. Bohr-bugs represent a class of errors that always result when the software is stimulated in a certain way. These errors are predominantly caused by coding errors, and changes are typically isolated to a single component. Heisen-bugs are software faults that are coincidental with a certain probabilistic occurrence of a given situation. These errors are almost always design errors and typically are not repeatable even when the software is stimulated in the same apparent way.

The best way to mature a software product is to establish an initial test infrastructure that allows execution of randomized usage scenarios early in the life cycle and continuously evolves the breadth and depth of usage scenarios to optimize coverage across the reliability-critical components. As baselines are established, they should be continuously subjected to test scenarios. From this base of testing, reliability metrics can be extracted. Meaningful insight into product maturity can be gained by maximizing test time (through independent test environments, automated regression tests, randomized statistical testing, after-hours stress testing, etc.).

Life Cycle Expectations
The reasons for selecting the seven core metrics are:
 The quality indicators are derived from the evolving product rather than from the artifacts.
 They provide insight into the waste generated by the process.
 They recognize the inherently dynamic nature of an iterative development process. Rather than focus on the value, they explicitly concentrate on the trends or changes with respect to time.
 The combination of insight from the current value and the current trend provides tangible indicators for management action.

PRAGMATIC SOFTWARE METRICS
Measurement does not do the thinking for decision makers; measures only provide data to help them ask the right questions, understand the context, and make objective decisions. The basic characteristics of a good metric are as follows:
 It is considered meaningful by the customer, manager, and performer. If any one of these stakeholders does not see the metric as meaningful, it will not be used. Customers come to software engineering providers because the providers are more expert than they are at developing and managing software.

 It demonstrates quantifiable correlation between process perturbations and business performance. The only real goals and objectives are financial: cost reduction, revenue increase, and margin increase.
 It is objective and unambiguously defined. Objectivity should translate into some form of numeric representation as opposed to textual representations. Ambiguity is minimized through well-understood units of measurement (such as staff-month, SLOC, change, FP, class, scenario, requirement).
 It displays trends. Understanding the change in a metric's value with respect to time, subsequent projects, subsequent releases, and so forth is an extremely important perspective.

 It is a natural byproduct of the process.
 It is supported by automation. Experience has demonstrated that the most successful metrics are those that are collected and reported by automated tools, in part because software tools require rigorous definitions of the data they process.

METRICS AUTOMATION
For managing against a plan, a software project control panel (SPCP) that maintains an on-line version of the status of evolving artifacts provides a key advantage. The idea is to provide a display panel that integrates data from multiple sources to show the current status of some aspect of the project. The panel can support standard features such as warning lights, thresholds, variable scales, digital formats, and analog formats to present an overview of the current situation.

This automation support can improve management insight into progress and quality trends and improve the acceptance of metrics by the engineering team. To implement a complete SPCP, it is necessary to define and develop the following:
 Metrics primitives: indicators, trends, comparisons, and progressions.
 A GUI: support for a software project manager role and flexibility to support other roles.

 Metrics collection agents: data extraction from the environment tools that maintain the engineering notations for the various artifact sets.
 Metrics data management server: data management support for populating the metric displays of the GUI and storing the data extracted by the agents.
 Metrics definitions: actual metrics presentations for requirements progress, design progress, implementation progress, assessment progress, and other progress dimensions (extracted from manual sources, financial management systems, management artifacts, etc.).

Specific monitors (roles) include software project managers, software development team leads, software architects, and customers.
 Monitor: defines panel layouts from existing mechanisms, graphical objects, and linkages to project data; queries data to be displayed at different levels of abstraction.
 Administrator: installs the system; defines new mechanisms, graphical objects, and linkages; handles archiving functions; defines composition and decomposition structures for displaying multiple levels of abstraction.

The whole display is called a panel. Within a panel are graphical objects, which are types of layouts (such as dials and bar charts) for information. Each graphical object displays a metric.
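A graphical object of the kind described above, with the warning-light and threshold behavior mentioned earlier, could be sketched as a simple mapping from a metric value to a status color. The threshold values and the backlog example are invented for illustration.

```python
# Hypothetical sketch of an SPCP-style graphical object: a metric value
# checked against thresholds to drive a warning-light display.
# Threshold values and the example metric are invented.

def indicator_light(value, green_max, yellow_max):
    """Map a metric value onto a traffic-light status for the panel."""
    if value <= green_max:
        return "green"
    if value <= yellow_max:
        return "yellow"
    return "red"

# Example: an open-SCO backlog displayed as one graphical object on the panel.
print(indicator_light(4, green_max=5, yellow_max=15))   # green
print(indicator_light(22, green_max=5, yellow_max=15))  # red
```

A panel would then be a layout of many such objects, each bound by a metrics collection agent to one metric from the project data.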