
Slide 1: Introduction to Software Metrics, Quality, and Measurement
Emilia Mendes, CAPES Visiting Professor / Associate Professor, University of Auckland, NZ

Slide 2: Outline
– Metrics: software size, software quality
– How to measure: empirical investigations
– Threats to validity

Slide 3: Measuring Software Size (1)
The standard measure of software size is functional size measurement. Four different methods are standardised under ISO/IEC 14143:
– IFPUG Function Point Analysis (FPA)
– MkII FPA
– COSMIC Full Function Points
– NESMA Functional Size Measurement
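As a concrete illustration of the first of these, here is a minimal sketch of IFPUG-style unadjusted function point counting. The complexity weights are the commonly published IFPUG values; the example counts are invented for illustration and are not from the slides.

```python
# A minimal sketch of IFPUG-style unadjusted function point counting.
# The weights are the commonly published IFPUG complexity weights.

IFPUG_WEIGHTS = {
    "EI":  {"low": 3, "average": 4,  "high": 6},   # external inputs
    "EO":  {"low": 4, "average": 5,  "high": 7},   # external outputs
    "EQ":  {"low": 3, "average": 4,  "high": 6},   # external inquiries
    "ILF": {"low": 7, "average": 10, "high": 15},  # internal logical files
    "EIF": {"low": 5, "average": 7,  "high": 10},  # external interface files
}

def unadjusted_fp(counts):
    """counts maps (function_type, complexity) -> number of occurrences."""
    return sum(IFPUG_WEIGHTS[ftype][cplx] * n
               for (ftype, cplx), n in counts.items())

# Illustrative example: a small application with a handful of counted functions.
counts = {("EI", "low"): 4, ("EO", "average"): 2, ("ILF", "average"): 3}
print(unadjusted_fp(counts))  # 4*3 + 2*5 + 3*10 = 52
```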

Slide 4: The COSMIC-FFP Method
The Functional User Requirements (FUR) of the software to be measured are mapped onto the COSMIC-FFP software FUR model in three steps:
– Identify software boundaries
– Identify functional processes
– Identify data groups
The mapping takes the requirements from their original context into the COSMIC-FFP software model.

Slide 5: COSMIC-FFP Software Model
A Functional User Requirement is implemented by functional processes, each an ordered set of sub-processes performing either data movement or data manipulation. There are four types of data movement: entry and exit cross the software boundary at the front end (users or engineered devices), while read and write cross it at the back end (storage hardware).

Slide 6: Context for Web Applications
– Each HREF is counted as one functional sub-process containing 1 entry + 1 read + 1 exit.
– Each applet is counted as one functional sub-process containing 1 entry + 1 exit.
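A minimal sketch of COSMIC counting under these Web mapping rules, where each data movement contributes one unit of functional size (1 CFP); the page figures below are illustrative only.

```python
# COSMIC-FFP sizing per the slide's Web mapping rules:
# each HREF = 1 entry + 1 read + 1 exit, each applet = 1 entry + 1 exit.

MOVEMENTS_PER_HREF = 3    # entry + read + exit
MOVEMENTS_PER_APPLET = 2  # entry + exit

def cosmic_size(n_hrefs, n_applets):
    """Functional size in CFP: one unit per data movement."""
    return n_hrefs * MOVEMENTS_PER_HREF + n_applets * MOVEMENTS_PER_APPLET

# Illustrative example: a page with 12 hyperlinks and 2 applets.
print(cosmic_size(n_hrefs=12, n_applets=2))  # 12*3 + 2*2 = 40 CFP
```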

Slide 7: Measuring Software Quality
The ISO quality model, ISO/IEC 9126-1:2001 (Software engineering -- Product quality -- Part 1: Quality model), provides characteristics and sub-characteristics for the definition of software quality.

Slide 8: Characteristics and Sub-characteristics
[Figure: the ISO/IEC 9126-1 quality characteristics and their sub-characteristics.]
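The slide's figure did not survive extraction. For reference, here is the set of characteristics and sub-characteristics commonly reproduced from ISO/IEC 9126-1, held in a plain Python dictionary; note that the standard also attaches a compliance sub-characteristic to each characteristic, shown here only under functionality.

```python
# The ISO/IEC 9126-1 quality characteristics, as commonly cited.
ISO_9126_QUALITY_MODEL = {
    "functionality":   ["suitability", "accuracy", "interoperability",
                        "security", "compliance"],
    "reliability":     ["maturity", "fault tolerance", "recoverability"],
    "usability":       ["understandability", "learnability", "operability",
                        "attractiveness"],
    "efficiency":      ["time behaviour", "resource utilisation"],
    "maintainability": ["analysability", "changeability", "stability",
                        "testability"],
    "portability":     ["adaptability", "installability", "co-existence",
                        "replaceability"],
}

for characteristic, subs in ISO_9126_QUALITY_MODEL.items():
    print(f"{characteristic}: {', '.join(subs)}")
```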

Slide 9: ISO Quality Model
Three approaches to quality:
– Internal quality: attributes are measured directly from the product itself, without considering its interaction with the environment.
– External quality: attributes are measured by looking at the product's interaction with its environment (e.g. reliability).
– Quality in use: similar to external quality, but what counts here is the extent to which the application meets specific users' needs in the actual, specific context of use.

Slide 10: How Do You Plan to Measure?
– Surveys
– Case studies
– Formal experiments
– Post-mortem analysis

Slide 11: Empirical Investigation: Why?
– To improve (a process and/or product)
– To evaluate (a process and/or product)
– To support or reject a theory or hypothesis
– To understand (a scenario or situation)
– To compare (entities, properties, etc.)

Slide 12: Empirical Investigation: What?
– A person's performance
– A tool's performance
– A person's perceptions
– A tool's usability
– A document's understandability
– Development effort
– A program's complexity
– and many more

Slide 13: Empirical Investigation: Where and When?
– In the field
– In the lab
– In the classroom
The choice depends on what questions you are asking, i.e. on your measurement goals.

Slide 14: Empirical Investigation: How?
– Hypothesis/question generation
– Data collection
– Data evaluation
– Data interpretation
– Feed results back into the iterative process

Slide 15: SE Investigation: Examples
Experiments to confirm rules of thumb:
– Should the LOC in a method be less than 300?
– Should the number of classes in an OO hierarchy be less than 4?
Experiments to explore relationships:
– How does the team's experience with the application domain affect the quality of the code?
– How does requirements quality affect the designer's productivity?
– How does the design structure affect code maintainability?
Experiments to initiate novel practices:
– Would it be better to start the OO design of Web applications using OOHDM rather than UML?
– Would the use of XP improve software quality?

Slide 16: Investigation Principles
There are four main principles of investigation:
– Selecting an investigation technique: surveys, case studies, formal experiments, or post-mortem studies.
– Stating the hypothesis: what should be investigated?
– Maintaining control over variables: dependent and independent variables.
– Making meaningful investigations: verifying theories, evaluating the accuracy of models, validating measurement results.

Slide 17: SE Investigation Techniques
There are four ways to assess a method, tool, or technique:
– Survey: a retrospective study of a situation that tries to document relationships and outcomes.
– Case study: documents an activity by identifying the key factors (inputs, constraints, and resources) that may affect its outcomes.
– Formal experiment: a controlled investigation of an activity, in which its key factors are identified, manipulated, and documented. If replication is not possible, you cannot do a formal experiment.
– Post-mortem analysis: also a retrospective study of a situation, but one that applies only to subjects related to a single project.

Slide 18: Examples (1)
Formal experiment: research in the small. You have heard about XP (Extreme Programming) and its advantages, and want to investigate whether XP is a better choice than your current development methodology. You might create a dummy project and have people develop it using either XP or your company's current methodology, where those using XP are experienced in it. You could then measure effort (person-hours) and size (new Web pages), and compare development productivity between the two methodologies, as in the sketch below.
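A minimal sketch of the productivity comparison this experiment calls for, with productivity defined as size divided by effort; the team measurements below are purely illustrative.

```python
# Productivity = size / effort, with size in new Web pages and effort in
# person-hours, as on the slides. All data values are illustrative.

def productivity(pages, person_hours):
    return pages / person_hours

# Hypothetical measurements from the dummy project, one entry per team.
xp_teams      = [(40, 300), (35, 280), (42, 310)]  # (pages, person-hours)
inhouse_teams = [(38, 360), (33, 340), (41, 400)]

xp_prod      = [productivity(p, h) for p, h in xp_teams]
inhouse_prod = [productivity(p, h) for p, h in inhouse_teams]

print(f"XP mean productivity:       {sum(xp_prod)/len(xp_prod):.3f} pages/hour")
print(f"In-house mean productivity: {sum(inhouse_prod)/len(inhouse_prod):.3f} pages/hour")
```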

Slide 19: Examples (2)
Case study: research in the typical. You have heard about XP (Extreme Programming) and its advantages, and want to investigate whether to use XP in your company. You might perform a case study applying XP to a project that is typical of your organisation, with a team experienced in XP. You would measure effort (person-hours) and size (new Web pages), and compare the project's development productivity to a baseline obtained from similar projects developed with your in-house methodology.

Slide 20: Examples (3)
Survey: research in the large. After using XP on numerous projects, you might conduct a survey to capture the effort involved (person-hours) and the size (new Web pages) for all of those projects. You could then compare the productivity figures with those from projects using the company's current development methodology, to see whether XP leads to an overall improvement in productivity.

Slide 21: Examples (4)
Post-mortem: research in the past-and-typical. After using XP on one of your projects, you might conduct a post-mortem to capture the effort involved (person-hours) and the size (new Web pages) for that project. You could then compare its productivity figures with those from projects using the company's current development methodology, to see whether XP could lead to an overall improvement. A post-mortem generally involves interviewing the development team and examining the project documentation.

Slide 22: Case Study or Experiment?

Slide 23: Differences in Population (1)
Surveys and formal experiments try to generalise their findings to large populations:
– Ideally a random sample should be used.
– Formal experiments often end up using convenience sampling: Web/software developers in the vicinity of the researcher, or students as representatives of young professionals.
– Also, in formal experiments the sample will often determine the population, rather than the population determining the sample.

Slide 24: Differences in Population (2)
Case studies and post-mortem analyses:
– Results can only be generalised to projects and organisations similar to those used in the case study or post-mortem analysis.
– It is impossible to generalise the results to a wider population.

Slide 25: Hypothesis (1)
The first step is to decide what to investigate. The goal of the research is expressed as a hypothesis, in quantifiable terms, to be tested; the test results (the gathered data) will then support or refute it. Example (null and alternative hypotheses):
– H0: Using Dreamweaver produces Web applications of similar quality, on average, to using Witango.
– H1: Using Dreamweaver produces better-quality Web applications, on average, than using Witango.
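A minimal sketch of testing this H0 against the directional H1, assuming quality has been captured as one numeric score per Web application; the scores are illustrative, and the one-sided two-sample t-test shown here requires SciPy 1.6 or later.

```python
# One-sided two-sample t-test matching the directional H1 above.
from scipy import stats

# Hypothetical quality scores, one per developed Web application.
dreamweaver_scores = [7.2, 6.8, 7.9, 7.1, 6.5, 7.4]
witango_scores     = [6.1, 6.9, 5.8, 6.4, 6.7, 6.0]

# alternative="greater" tests H1: mean(Dreamweaver) > mean(Witango).
t, p = stats.ttest_ind(dreamweaver_scores, witango_scores,
                       alternative="greater")
print(f"t = {t:.2f}, p = {p:.4f}")
if p < 0.05:
    print("Reject H0 at the 5% level.")
else:
    print("Cannot reject H0.")
```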

Slide 26: Standard Design (1)
One independent variable (a single factor) with two values (C# or Java). Assuming other variables are held constant, the 50 subjects are randomly assigned to the C# group (25) or the Java group (25).
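A minimal sketch of this random assignment; the subject IDs are illustrative.

```python
# Randomly assign 50 subjects to two equal groups, one per factor value.
import random

subjects = [f"S{i:02d}" for i in range(1, 51)]  # illustrative subject IDs
random.shuffle(subjects)

csharp_group = subjects[:25]
java_group   = subjects[25:]
print(len(csharp_group), len(java_group))  # 25 25
```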

Slide 27: Standard Design (1)
Assume gender can have an effect on the results (blocking and balancing), but you only want to compare the languages, not the interaction between language and gender. The 50 people are blocked by gender and assigned within each block: the females (20) are split into a J2EE group (10) and an ASP.NET group (10), and the males (30) into a J2EE group (15) and an ASP.NET group (15).

Slide 28: Standard Design (2)
One independent variable with two values (C# or Java), in a paired design. Assuming other variables are held constant, subjects are randomly assigned to an order: 25 of the 50 use C# first and then Java, and the other 25 use Java first and then C#, so every subject experiences both treatments.
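A minimal sketch of this paired (crossover) assignment, where only the treatment order is randomised; subject IDs are illustrative.

```python
# Paired design: every subject uses both languages; randomise the order
# so each subject acts as their own control.
import random

subjects = [f"S{i:02d}" for i in range(1, 51)]
random.shuffle(subjects)

# First 25 shuffled subjects do C# then Java; the rest do Java then C#.
order = {s: ("C#", "Java") for s in subjects[:25]}
order.update({s: ("Java", "C#") for s in subjects[25:]})
```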

Slide 29: Standard Design (3)
One independent variable with more than two values (C#, Java, Smalltalk). Assuming other variables are held constant, the 60 subjects are randomly assigned to the C# group (20), the Java group (20), or the Smalltalk group (20).

Slide 30: Standard Design (4)
More than one factor (independent variable): experience with a particular language (high, medium, low, for each of the two languages) and the language itself (C#, Java). With 48 people there are 6 x 2 combinations, i.e. 4 people per combination.

Slide 31: Standard Design (4)
Nesting experience within language reduces the number of combinations from 12 to 6, giving 8 people per combination instead of 4.
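A minimal sketch contrasting the crossed design of slide 30 with this nested one, under the reading that the experience factor has six values (three levels for each particular language):

```python
# Crossed vs nested factor combinations for the design on slides 30-31.
from itertools import product

languages = ["C#", "Java"]
levels = ["high", "medium", "low"]

# Experience with a particular language: six values in total.
experience = [(lang, lvl) for lang in languages for lvl in levels]

# Crossed (slide 30): every experience value x every language used.
crossed = list(product(experience, languages))
print(len(crossed))  # 12 combinations -> 4 people each for 48 subjects

# Nested (this slide): experience with a language only applies when
# that language is the one being used.
nested = [(exp, lang) for exp, lang in product(experience, languages)
          if exp[0] == lang]
print(len(nested))   # 6 combinations -> 8 people each for 48 subjects
```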

Slide 32: Example: Blocking and Balancing (1)
You are investigating the comparative effects of three Web design techniques on the effort needed to design a given Web application. The experiment involves teaching the techniques to 15 students and measuring how long each student takes to design the application. Six of the students have previously worked in software development, and this prior experience may affect the way in which a design technique is understood and/or used.

Slide 33: Example: Blocking and Balancing (2)
To eliminate this possibility, two blocks are defined: the first contains all students with previous development experience, and the second all students without it. The treatments are then assigned at random to the students within each block: in the first block, two students are assigned to design method A, two to B, and two to C (balancing the number of students per treatment); in the second block, three are assigned to each method. A sketch of this assignment follows.
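A minimal sketch of the block-and-balance assignment just described; the student IDs are illustrative.

```python
# Blocking and balancing: 15 students, 6 with prior experience, three
# design techniques (A, B, C). Treatments are assigned at random within
# each block, balanced so each technique gets equal numbers per block.
import random

experienced   = [f"E{i}" for i in range(1, 7)]   # block 1: 6 students
inexperienced = [f"N{i}" for i in range(1, 10)]  # block 2: 9 students
techniques = ["A", "B", "C"]

def assign_balanced(block, treatments):
    """Shuffle the block, then deal students to treatments in equal shares."""
    random.shuffle(block)
    per = len(block) // len(treatments)
    return {t: block[i * per:(i + 1) * per] for i, t in enumerate(treatments)}

print(assign_balanced(experienced, techniques))    # 2 students per technique
print(assign_balanced(inexperienced, techniques))  # 3 students per technique
```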

Slide 34: Example: Blocking and Balancing (3)

Slide 35: Threats to Validity
Four types of validity must be considered:
– Internal: unknown factors that may affect the dependent variable, e.g. confounding factors we are unaware of.
– External: the extent to which the findings can be generalised.
– Conclusion: being able to draw correct conclusions about the relationship between the treatments and the experiment's outcome, e.g. through adequate statistical tests and proper measurement.
– Construct: the extent to which the independent and dependent variables precisely measure the concepts they claim to measure.

Slide 36: Exercise

