Introduction to Software Metrics, Quality, and Measurement. Emilia Mendes, CAPES Visiting Professor / Associate Professor, Univ. of Auckland, NZ.

Outline
–Metrics
–Software Size
–Software Quality
–How to Measure: Empirical Investigations
–Threats to Validity

Measuring Software Size (1)
The standard measure of software size is based on functional size measurement. Four different methods are covered by ISO standards (ISO/IEC 14143):
–IFPUG Function Point Analysis (FPA)
–MkII FPA
–COSMIC Full Function Points
–NESMA Functional Size Measurement

COSMIC-FFP method
[Diagram] The Functional User Requirements (FUR) of the software to be measured are mapped to the COSMIC-FFP software FUR model by identifying software boundaries, functional processes, and data groups; the result is the COSMIC-FFP software model.

COSMIC-FFP software model
[Diagram] A Functional User Requirement is implemented by functional processes: ordered sets of sub-processes performing either data movement or data manipulation. The data movements are entry and exit at the front end (between the user or engineered devices and the software), and read and write at the back end (between the software and storage hardware).

Context for Web applications
–Each “HREF” is counted as one functional sub-process containing 1 entry + 1 read + 1 exit.
–Each applet is counted as one functional sub-process containing 1 entry + 1 exit.
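The counting rule above amounts to a simple formula: every data movement is worth one COSMIC function point (CFP), so an HREF contributes 3 CFP and an applet 2 CFP. A minimal sketch of this rule (the function name and the example counts are illustrative, not part of the COSMIC standard):

```python
# Hypothetical sketch of the COSMIC-FFP counting rule for Web applications
# described above: each data movement (entry, exit, read, write) counts as
# 1 CFP, so an HREF contributes 3 CFP and an applet contributes 2 CFP.

def cosmic_ffp_size(num_hrefs: int, num_applets: int) -> int:
    """Return the functional size in COSMIC function points (CFP)."""
    CFP_PER_HREF = 3    # 1 entry + 1 read + 1 exit
    CFP_PER_APPLET = 2  # 1 entry + 1 exit
    return num_hrefs * CFP_PER_HREF + num_applets * CFP_PER_APPLET

# A page with 10 links and 2 applets:
print(cosmic_ffp_size(10, 2))  # 34
```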

Measuring Software Quality
ISO quality model: ISO/IEC 9126-1:2001, Software engineering – Product quality – Part 1: Quality model. It provides characteristics and sub-characteristics for the definition of software quality.

Characteristics and sub-characteristics

ISO Quality Model
Three approaches to quality:
–Internal quality: attributes are measured directly, without considering the software’s interaction with its environment.
–External quality: attributes are measured by looking at the software’s interaction with its environment (e.g. reliability).
–Quality in use: similar to external quality, but what counts here is the extent to which the application meets specific users’ needs in the actual, specific context of use.

How do you plan to measure?
–Surveys
–Case studies
–Formal experiments
–Post-mortem analysis

Empirical Investigation: Why?
–To improve (a process and/or product)
–To evaluate (a process and/or product)
–To support or reject a theory or hypothesis
–To understand (a scenario or situation)
–To compare (entities, properties, etc.)

Empirical Investigation: What?
–A person’s performance
–A tool’s performance
–A person’s perceptions
–A tool’s usability
–A document’s understandability
–Development effort
–A program’s complexity
–…and many more

Empirical Investigation: Where and When?
–In the field
–In the lab
–In the classroom
The choice depends on what questions you are asking => your measurement goals!

Empirical Investigation: How?
–Hypothesis/question generation
–Data collection
–Data evaluation
–Data interpretation
–Feed back into an iterative process

SE Investigation: Examples
Experiments to confirm rules of thumb:
–Should the LOC in a method be less than 300?
–Should the number of classes in an OO hierarchy be less than 4?
Experiments to explore relationships:
–How does the team’s experience with the application domain affect the quality of the code?
–How does the quality of the requirements affect the productivity of the designer?
–How does the design structure affect code maintainability?
Experiments to initiate novel practices:
–Would it be better to start the OO design of Web applications using OOHDM rather than UML?
–Would the use of XP (Extreme Programming) improve software quality?

Investigation Principles
There are four main principles of investigation:
–Selecting an investigation technique: surveys, case studies, formal experiments, or post-mortem studies.
–Stating the hypothesis: what should be investigated?
–Maintaining control over variables: dependent and independent variables.
–Making meaningful investigations: verifying theories, evaluating the accuracy of models, validating measurement results.

SE Investigation Techniques
There are four ways to assess a method, tool, or technique:
–Survey: a retrospective study of a situation that tries to document relationships and outcomes.
–Case study: documents an activity by identifying the key factors (inputs, constraints, and resources) that may affect its outcomes.
–Formal experiment: a controlled investigation of an activity, carried out by identifying, manipulating, and documenting the key factors of that activity. If replication is not possible, you cannot do a formal experiment.
–Post-mortem analysis: also a retrospective study of a situation, but one that applies only to subjects related to a single project.

Examples (1)
Formal experiment: research in the small.
You have heard about XP (Extreme Programming) and its advantages, and may want to investigate whether XP is a better choice than your current development methodology. You may create a dummy project and have people develop it using either XP or your company’s current development methodology; those using XP are experienced in its use. You may then experiment by measuring effort (person hours) and size (new Web pages), and compare development productivity between the two methodologies.

Examples (2)
Case study: research in the typical.
You have heard about XP (Extreme Programming) and its advantages, and may want to investigate whether to use XP in your company. You may perform a case study that applies XP to a project representative of the typical project in your organisation, measure effort (person hours) and size (new Web pages), and compare its development productivity to a baseline obtained from similar projects developed with your in-house methodology. Those using XP have experience with this methodology.

Examples (3)
Survey: research in the large.
After you have used XP in numerous projects, you may conduct a survey to capture the effort involved (person hours) and the size (new Web pages) for all of those projects. You may then compare the productivity figures with those from projects using the company’s current development methodology, to see whether XP could lead to an overall improvement in productivity.

Examples (4)
Post-mortem analysis: research in the past-and-typical.
After you have used XP in one of your projects, you may conduct a post-mortem to capture the effort involved (person hours) and the size (new Web pages) for that project. You may then compare the productivity figures with those from projects using the company’s current development methodology, to see whether XP could lead to an overall improvement in productivity. A post-mortem generally involves interviewing the development team and investigating the project documentation.
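All four examples compare the same ratio: productivity as size delivered per unit of effort. A minimal sketch of that comparison, with hypothetical project data and function names:

```python
from statistics import mean

def productivity(size_pages: float, effort_person_hours: float) -> float:
    """Productivity as output size per unit of effort (pages per person hour)."""
    return size_pages / effort_person_hours

# Hypothetical (size in new Web pages, effort in person hours) per project
xp_projects = [(40, 320), (25, 180), (60, 510)]
baseline_projects = [(40, 400), (30, 270), (55, 600)]

xp_prod = mean(productivity(s, e) for s, e in xp_projects)
base_prod = mean(productivity(s, e) for s, e in baseline_projects)
print(f"XP: {xp_prod:.3f}, baseline: {base_prod:.3f} pages/person-hour")
```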

Case study or Experiment?

Differences in Population (1)
Surveys and formal experiments try to generalise their findings to large populations.
–Ideally a random sample should be used.
–Often formal experiments end up using convenience sampling: Web/software developers in the vicinity of the researcher, or students as representatives of young professionals.
–Also, in formal experiments the sample will often determine the population, rather than the population determining the sample.

Differences in Population (2)
Case studies and post-mortem analyses:
–Results can only be generalised to projects and organisations similar to those used in the case study or post-mortem analysis.
–It is impossible to generalise the results to a wider population.

Hypothesis (1)
The first step is to decide what to investigate. The goal of the research can be expressed as a hypothesis, in quantifiable terms, to be tested; the test results (the gathered data) will either refute or support the hypothesis.
Example (null and alternative hypotheses):
–H0: Using Dreamweaver produces Web applications of similar quality, on average, to using Witango.
–H1: Using Dreamweaver produces better quality Web applications, on average, than using Witango.
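One distribution-free way to test such a pair of hypotheses is a permutation test: if H0 holds, randomly reassigning the quality scores between the two tools should produce mean differences at least as large as the observed one reasonably often. A sketch with hypothetical quality scores (the data and the function name are illustrative, not results from an actual study):

```python
import random
from statistics import mean

def permutation_p_value(group_a, group_b, n_perm=10_000, seed=1):
    """One-sided p-value for H1: mean(group_a) > mean(group_b),
    estimated by randomly reassigning scores to the two groups."""
    observed = mean(group_a) - mean(group_b)
    pooled = list(group_a) + list(group_b)
    rng = random.Random(seed)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = mean(pooled[:len(group_a)]) - mean(pooled[len(group_a):])
        if diff >= observed:
            count += 1
    return count / n_perm

# Hypothetical quality scores (higher = better)
dreamweaver = [7.1, 6.8, 7.4, 7.9, 6.5]
witango = [6.2, 6.9, 6.4, 6.0, 6.6]
p = permutation_p_value(dreamweaver, witango)
print(f"p = {p:.3f}")  # reject H0 at the 0.05 level if p < 0.05
```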

Standard Design (1)
One independent variable with two values (C# or Java); this is the same as one factor. Assuming other variables are constant, subjects are randomly assigned to C# or Java.
[Diagram] 50 subjects split into a C# group (25) and a Java group (25).
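The random assignment step can be sketched as follows (a minimal illustration; the function name and the fixed seed, used here for reproducibility, are assumptions):

```python
import random

def randomly_assign(subjects, treatments, seed=42):
    """Randomly split subjects evenly across the treatments of one factor."""
    assert len(subjects) % len(treatments) == 0
    rng = random.Random(seed)
    shuffled = subjects[:]
    rng.shuffle(shuffled)
    k = len(subjects) // len(treatments)
    return {t: shuffled[i * k:(i + 1) * k] for i, t in enumerate(treatments)}

groups = randomly_assign(list(range(50)), ["C#", "Java"])
print(len(groups["C#"]), len(groups["Java"]))  # 25 25
```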

Standard Design (1)
Assume gender can have an effect on the results (blocking and balancing), but you only want to compare the languages, not the interaction between language and gender.
[Diagram] 50 people: Females (20) split into a J2EE group (10) and an ASP.NET group (10); Males (30) split into a J2EE group (15) and an ASP.NET group (15).

Standard Design (2)
One independent variable with two values (C# or Java), in a paired design. Assuming other variables are constant, subjects are randomly assigned to C# or Java.
[Diagram] 50 people: a C# group (25) that then becomes a Java group (25), and a Java group (25) that then becomes a C# group (25).

Standard Design (3)
One independent variable with more than two values (C#, Java, Smalltalk). Assuming other variables are constant, subjects are randomly assigned to C#, Java, or Smalltalk.
[Diagram] 60 people split into a C# group (20), a Java group (20), and a Smalltalk group (20).

Standard Design (4)
More than one factor (independent variable): experience with a particular language (high, medium, low) and language (C#, Java). 48 people, 6x2 combinations, 4 people per combination.

Standard Design (4)
Nesting: reduces the number of combinations from 12 to 6, with 8 people per combination instead of 4.

Example: Blocking and Balancing (1)
You are investigating the comparative effects of three Web design techniques on the effort needed to design a given Web application. The experiment involves teaching the techniques to 15 students and measuring how long it takes each student to design a given Web application. It may be the case that six students have previously worked in software development, so their previous experience can affect the way in which a design technique is understood and/or used.

Example: Blocking and Balancing (2)
To eliminate this possibility, two blocks can be defined, so that the first block contains all the students with previous development experience and the second all the students without it. The treatments are then assigned at random to the students in each block: in the first block, two students are assigned to design method A, two to B, and two to C (to balance the number of students per treatment); in the second block, three are assigned to each method.
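The blocking-and-balancing procedure just described can be sketched as a stratified random assignment (the student identifiers and the function name are hypothetical):

```python
import random

def assign_within_blocks(blocks, treatments, seed=7):
    """Assign treatments at random within each block, as evenly as possible
    (blocking on experience, balancing the treatment counts)."""
    rng = random.Random(seed)
    assignment = {}
    for block_name, students in blocks.items():
        shuffled = students[:]
        rng.shuffle(shuffled)
        for i, student in enumerate(shuffled):
            assignment[student] = treatments[i % len(treatments)]
    return assignment

blocks = {
    "experienced": [f"E{i}" for i in range(6)],  # 6 students with experience
    "novice": [f"N{i}" for i in range(9)],       # 9 students without
}
assignment = assign_within_blocks(blocks, ["A", "B", "C"])
# Each technique gets 2 experienced and 3 novice students.
```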

Example: Blocking and Balancing (3)

Threats to Validity
Four types of validity must be considered:
–Internal: unknown factors may affect the dependent variable, e.g. confounding factors we are unaware of.
–External: the extent to which we can generalise the findings.
–Conclusion: the ability to draw correct conclusions about the relationship between the treatments and the experiment’s outcome, e.g. using an adequate statistical test and proper measurement.
–Construct: the extent to which the independent and dependent variables precisely measure the concepts they claim to measure.

Exercise