WERST – Methodology Group

Outline
– Taxonomies of studies
– Experiment design
– Reporting studies
– Research directions

Types of Studies
Controlled experiments vs. field studies
– Advantages of controlled experiments: internal validity, duration
– Drawbacks: scale, external validity
Academic vs. industrial studies
– Academic studies are a good first step: they yield ballpark figures, refine hypotheses, and provide additional information
Comparing vs. combining techniques
– Evaluating single techniques
– Choosing one technique out of several alternatives
– The ultimate goal should be to combine techniques
Human-based studies vs. simulations
– Advantage of human-based studies: they account for human factors (learning curve, error proneness)
– Drawbacks: statistical issues (variation, sample size, etc.) and bias

Design of Studies
Fault sampling (a mutant-generation sketch follows this slide)
– Mutant generation
– Experts seeding faults
– Actual project faults
Test case selection
– Automatic generation versus manual test cases
– Sampling from test pools versus tests written by humans
– Guidance to human subjects: maximum benefit versus expected benefit
Selection of subject programs
– Characterize, classify, and describe subjects
– Which aspects are relevant?
– Concurrency, distribution, embedded systems, programming language, development methodology, complexity, and size
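A minimal sketch of mutant generation for fault sampling, assuming Python subject programs; the mutation operator (replacing one '+' with '-') and all names are illustrative, not the API of any specific mutation tool:

    import ast

    class ArithmeticMutator(ast.NodeTransformer):
        """Replace one '+' with '-' to seed a single fault."""
        def __init__(self):
            self.mutated = False

        def visit_BinOp(self, node):
            self.generic_visit(node)
            if not self.mutated and isinstance(node.op, ast.Add):
                node.op = ast.Sub()  # the seeded fault
                self.mutated = True
            return node

    def make_mutant(source):
        """Return a mutated copy of the given program text."""
        tree = ast.parse(source)
        ArithmeticMutator().visit(tree)
        return ast.unparse(tree)  # requires Python 3.9+

    print(make_mutant("def total(a, b):\n    return a + b\n"))
    # -> def total(a, b):
    #        return a - b

Each mutant then serves as one sampled fault; a test suite's mutation score is the fraction of such mutants it distinguishes from the original program.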

Design of Studies II
Baselines for comparison in empirical studies (a random-baseline sketch follows this slide)
– "Random" selection of test sets
– Current practice
– Alternative techniques
Statistical variation in the performance of testing techniques
– Techniques and criteria are not deterministic
– Human factors
– Location and type of faults; subject programs
– Context: severity, risk
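A minimal sketch of the "random selection" baseline, assuming a test pool and a fault matrix mapping each test to the faults it detects; all names and numbers are illustrative. Repeating the draw many times exposes the statistical variation noted above, so a technique is compared against a distribution rather than a single lucky sample:

    import random
    import statistics

    def random_baseline(test_pool, faults_detected, size, trials=1000, seed=42):
        """Mean and spread of fault detection for random test sets of a given size."""
        rng = random.Random(seed)
        scores = []
        for _ in range(trials):
            sample = rng.sample(test_pool, size)
            found = set().union(*(faults_detected[t] for t in sample))
            scores.append(len(found))
        return statistics.mean(scores), statistics.stdev(scores)

    pool = ["t1", "t2", "t3", "t4", "t5"]
    matrix = {"t1": {"f1"}, "t2": {"f1", "f2"}, "t3": set(),
              "t4": {"f3"}, "t5": {"f2", "f3"}}
    print(random_baseline(pool, matrix, size=2))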

Reporting Studies
Subject programs: concurrency, distribution, embedded systems, programming language, development methodology, complexity, and size
Fault sampling: selection or seeding procedure
Human participants: training, background
Data collection procedures, e.g., for effort
Experiment design:
– Hypotheses
– Human studies: groups, task assignments, order of execution
– Simulations: test pools and the procedure for deriving test sets from a pool
Threats to validity
(A sketch of a machine-checkable version of this checklist follows this slide.)
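A minimal sketch of how the reporting items above could be made machine-checkable, e.g., as a pre-submission completeness check; the field names are an assumption drawn from this slide, not a published reporting standard:

    from dataclasses import dataclass, fields

    @dataclass
    class StudyReport:
        subject_programs: str     # language, size, complexity, domain, ...
        fault_sampling: str       # mutant generation, seeding, or real faults
        participants: str         # training and background, if human-based
        data_collection: str      # e.g., how effort was measured
        hypotheses: str
        design: str               # groups, task assignments, execution order
        threats_to_validity: str

    def missing_items(report):
        """List the reporting items that were left empty."""
        return [f.name for f in fields(report) if not getattr(report, f.name)]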

Conclusions
Properly reporting studies is key.
Choose designs that support replicability and meta-analysis.
Multiple kinds of studies are needed to answer a single research question.

Future Directions
Mutants versus real faults: what is the relationship?
Guidelines and templates for conducting empirical studies, both industrial and academic
– Group assignments and training
– Data collection: metrics and procedures
– Fault sampling
– Subject selection
– Statistical analysis: statistical inference testing, meta-analysis (a minimal sketch follows this slide)
– Qualitative analysis, e.g., of faults
Guidelines and templates for reporting empirical studies, both industrial and academic
– How to ensure replicability?
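A minimal sketch of a fixed-effect (inverse-variance) meta-analysis, one standard way to combine per-study effect sizes from replicated studies; the input numbers are hypothetical:

    def fixed_effect_meta(effects, variances):
        """Combine per-study effect sizes into one weighted estimate."""
        weights = [1.0 / v for v in variances]
        pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
        pooled_var = 1.0 / sum(weights)
        return pooled, pooled_var ** 0.5  # estimate and its standard error

    # Three hypothetical replications of the same technique comparison:
    print(fixed_effect_meta(effects=[0.42, 0.30, 0.55],
                            variances=[0.02, 0.05, 0.04]))

Studies are only combinable this way if they report comparable effect sizes and variances, which is one reason the reporting guidelines above matter.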

Proposals
White paper on how to perform and report empirical studies of software testing
Web repositories
Competitions on benchmarks
Software testing questions:
– Cost-effectiveness: how to measure the costs and benefits of testing? (One candidate measure is sketched after this slide.)
– Model-driven test automation and strategies
– Tailoring techniques to the specifics of development processes and application domains
– Prioritizing black-box, system-level regression test cases
– Correlation/synergy among test techniques (with respect to faults)
Meta-questions:
– Generalization/prediction
– Appropriate taxonomies of faults
– Consistency of results between lab and industrial contexts
– Improving the relevance of mutation systems
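One candidate cost-effectiveness measure, faults found per unit of effort; this metric choice is an assumption for illustration, not a community-agreed standard, and the numbers are hypothetical:

    def cost_effectiveness(faults_found, effort_hours):
        """Faults detected per hour of testing effort."""
        return faults_found / effort_hours

    # Hypothetical comparison of two techniques on the same subject program:
    for name, found, hours in [("technique A", 12, 30.0),
                               ("random baseline", 7, 30.0)]:
        print(f"{name}: {cost_effectiveness(found, hours):.2f} faults/hour")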