The Role of Empirical Study in Software Engineering


The Role of Empirical Study in Software Engineering
Victor R. Basili
University of Maryland and Fraunhofer Center - Maryland

Outline
Empirical Studies: motivation, specific methods, example (the SEL)
Applications: CeBASE, the NASA High Dependability Computing Project, the High Productivity Computing Systems Project, the Acquisition Best Practices Project

Motivation for Empirical Software Engineering
Understanding a discipline involves building models, e.g., of the application domain and of problem-solving processes, and checking that our understanding is correct, e.g., by testing our models and experimenting in the real world.
Analyzing the results involves learning: the encapsulation of knowledge and the ability to change or refine our models over time.
The understanding of a discipline evolves over time.
This is the empirical paradigm that has been used in many fields, e.g., physics, medicine, manufacturing.
Like other disciplines, software engineering requires an empirical paradigm.

Motivation for Empirical Software Engineering
Empirical software engineering requires the scientific use of quantitative and qualitative data to understand and improve the software product, the software development process, and software management.
It requires real-world laboratories.
Research needs laboratories to observe and manipulate the variables; these exist only where developers build software systems.
Development needs to understand how to build systems better; research can provide models to help.
Research and development therefore have a symbiotic relationship, which requires a working relationship between industry and academia.

Motivation for Empirical Software Engineering
For example, a software organization needs to ask:
What is the right combination of technical and managerial solutions?
What is the right set of processes for this business? How are they tailored?
How does it learn from its successes and failures?
How does it demonstrate sustained, measurable improvement?
More specifically:
When are peer reviews more effective than functional testing?
When is an agile method appropriate?
When do I buy rather than make my software product elements?

Examples of Useful Empirical Results: “Under specified conditions, …”
Technique Selection Guidance
Peer reviews are more effective than functional testing for faults of omission and incorrect specification. (UMD, USC)
Functional testing is more effective than reviews for faults concerning numerical approximations and control flow. (UMD, USC)
Technique Definition Guidance
For a reviewer with an average experience level, a procedural approach to defect detection is more effective than a less procedural one. (UMD)
Procedural inspections, based upon specific goals, will find defects related to those goals, so inspections can be customized. (UMD)
Readers of a software artifact are more effective in uncovering defects when each uses a different and specific focus. (UMD)
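A decision framework can package findings like these as conditioned rules keyed by fault class. Below is a minimal sketch in Python; the rule table, field names, and function are illustrative assumptions, not part of any existing tool, and the entries simply paraphrase the guidance above.

```python
# Minimal sketch of an empirically based technique-selection lookup.
# The rule entries paraphrase the guidance above; names and structure are illustrative.

RULES = [
    {"fault_class": "omission", "prefer": "peer review",
     "over": "functional testing", "evidence": "UMD, USC studies"},
    {"fault_class": "incorrect specification", "prefer": "peer review",
     "over": "functional testing", "evidence": "UMD, USC studies"},
    {"fault_class": "numerical approximation", "prefer": "functional testing",
     "over": "peer review", "evidence": "UMD, USC studies"},
    {"fault_class": "control flow", "prefer": "functional testing",
     "over": "peer review", "evidence": "UMD, USC studies"},
]

def recommend(fault_class: str):
    """Return the empirically preferred technique for a fault class, if any is recorded."""
    for rule in RULES:
        if rule["fault_class"] == fault_class:
            return rule
    return None  # no empirical evidence recorded for this context

if __name__ == "__main__":
    print(recommend("omission"))
```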

What do I believe?
Software engineering is an engineering discipline.
We need to understand products, processes, and the relationship between them (we assume there is one).
We need to experiment (human-based studies), analyze, and synthesize that knowledge.
We need to package (model) that knowledge for use and evolution.
Believing this changes how we think, what we do, what is important.

What do I believe?
Measurement is fundamental to any engineering science.
User needs must be made explicit (measurable models).
Organizations have different characteristics, goals, cultures; stakeholders have different needs.
Process is a variable and needs to be selected and tailored to solve the problem at hand.
We need to learn from our experiences, build software core competencies.
Interaction with various industrial, government and academic organizations is important to understand the problems.
To expand the potential competencies, we must partner.

Basic Concepts for Empirical Software Engineering
The following concepts have been applied in a number of organizations:
Quality Improvement Paradigm (QIP): an evolutionary learning paradigm tailored for the software business
Goal/Question/Metric Paradigm (GQM): an approach for establishing project and corporate goals and a mechanism for measuring against those goals
Experience Factory (EF): an organizational approach for building software competencies and supplying them to projects
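To make the GQM idea concrete, here is a minimal sketch of a goal refined into questions and metrics, expressed as data. The specific goal, questions, metrics, and the helper function are illustrative assumptions, not taken from a particular SEL study.

```python
# Minimal GQM sketch: a measurement goal refined into questions, each tied to metrics.
# The specific goal, questions, and metrics below are illustrative examples only.

gqm = {
    "goal": ("Analyze the inspection process in order to improve it "
             "with respect to defect detection effectiveness "
             "from the viewpoint of the development organization"),
    "questions": [
        {"question": "What fraction of the defects present are found by inspection?",
         "metrics": ["defects found in inspection", "defects found later (test, field)"]},
        {"question": "How much effort does an inspection take?",
         "metrics": ["preparation hours", "meeting hours", "rework hours"]},
    ],
}

def detection_effectiveness(found_in_inspection: int, found_later: int) -> float:
    """Proportion of known defects caught by inspection (one metric for question 1)."""
    total = found_in_inspection + found_later
    return found_in_inspection / total if total else 0.0

print(gqm["goal"])
print(detection_effectiveness(30, 10))  # -> 0.75
```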

Quality Improvement Paradigm
The QIP cycle: characterize and understand; set goals; choose processes, methods, techniques, and tools; execute the process; analyze the results; package and store the experience.
Two feedback loops close the cycle: project feedback (providing the executing process with feedback) and corporate learning.

The Experience Factory Organization
[Diagram] The Project Organization carries out steps 1-5 (characterize, set goals, choose process, execute, analyze), passing environment characteristics, execution plans, products, lessons learned, models, and project data to the Experience Factory. The Experience Factory packages this experience (step 6) into an Experience Base, generalizing, tailoring, and formalizing it (project analysis, process modification), and disseminates tailorable knowledge, consulting, and project support back to the projects.

The Experience Factory Organization: A Different Paradigm
Project Organization (problem solving) vs. Experience Factory (experience packaging):
Decomposition of a problem into simpler ones vs. unification of different solutions and re-definition of the problem
Instantiation vs. generalization and formalization
Design/implementation process vs. analysis/synthesis process
Validation and verification vs. experimentation
Product delivery within schedule and cost vs. delivery of experience and recommendations to the project

SEL: An Example Experience Factory Structure
[Diagram] The SEL (a NASA + CSC + U of MD partnership) has three components: developers (the source of experience, the project organization), process analysts (who package experience for reuse, the experience factory), and data base support (who maintain and QA the experience information). Developers supply development measures for each project and receive refinements to the development process in return.
Developers: 275-300 staff; typical project size 100-300 KSLOC; 6-10 active projects at any given time; project staff size of 5-25 people; 120 total projects (1976-1994).
Process analysts: 10-15 analysts; they set goals/questions/metrics, design studies and experiments, perform analysis and research, refine the software process, and produce reports and findings (about 300 reports/documents).
Data base support: 3-6 support staff; they process forms/data, QA all data, record/archive data, maintain the SEL data base (160 MB), and operate the SEL library (forms library of 220,000 forms; reports library of SEL reports, project documents, and reference papers).

Using Baselines to Show Improvement: 1987 vs. 1991 vs. 1995
Continuous improvement in the SEL:
Decreased development defect rates by 75% (87-91) and 37% (91-95)
Reduced cost by 55% (87-91) and 42% (91-95)
Improved reuse by 300% (87-91) and 8% (91-95)
Increased functionality five-fold (76-92)
CSC officially assessed as CMM level 5 and ISO certified (1998), starting with SEL organizational elements and activities
Fraunhofer Center for Experimental Software Engineering founded in 1998; CeBASE, the Center for Empirically-Based Software Engineering, in 2000
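A small worked example of how the two improvement periods compose, assuming each percentage is a reduction relative to the immediately preceding baseline (an assumption; the slide does not state the baseline for each period):

```python
# Composing the two SEL improvement periods, assuming each percentage is measured
# against the immediately preceding baseline (an assumption, not stated on the slide).

def compound_reduction(*reductions):
    """Overall fractional reduction from successive relative reductions."""
    remaining = 1.0
    for r in reductions:
        remaining *= (1.0 - r)
    return 1.0 - remaining

print(compound_reduction(0.75, 0.37))  # defect rate: ~0.84 -> roughly an 84% overall reduction
print(compound_reduction(0.55, 0.42))  # cost:        ~0.74 -> roughly a 74% overall reduction
```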

CeBASE Center for Empirically Based Software Engineering
CeBASE Project Goal: Enable a decision framework and experience base that form the basis and infrastructure needed to evaluate and choose among software development technologies.
CeBASE Research Goal: Create and evolve an empirical research engine for building the research methods that can provide the empirical evidence of what works and when.

CeBASE Approach
Observation and evaluation studies of development technologies and techniques yield empirical data, which feed predictive models (quantitative guidance) and general heuristics (qualitative guidance). The models and heuristics are the point; everything else supports them.
Example predictive model (COCOTS excerpt): Cost of COTS tailoring = f(# parameters initialized, complexity of script writing, security/access requirements, …)
Example defect reduction heuristic: For faults of omission and incorrect specification, peer reviews are more effective than functional testing.

CeBASE Three-Tiered Empirical Research Strategy (technology maturity / primary activities / evolving results)
Practical applications (government, industry, academia): practitioner tailoring, usage of, and feedback on the maturing ePackage; evolving results are increasing success rates in developing agile, dependable, scalable IP applications.
Applied research (e.g., NASA HDCP): exploratory use of the evolving ePackage, experimentation and analysis in selected areas; evolving results are a more mature, powerful ePackage and faster technology maturation and transition.
Basic research: explore, understand, and evolve the nature and structure of the ePackage; evolving results are evolving ePackage understanding and capabilities.
(ePackage = empirical research engine, eBase, empirical decision framework)

CeBASE Basic Research Activities
Define and improve methods to:
Formulate evolving hypotheses regarding software development decisions
Collect empirical data and experiences
Record influencing variables
Build models (lessons learned, heuristics/patterns, decision support frameworks, quantitative models and tools)
Integrate models into a framework
Test hypotheses by application

NASA High Dependability Computing Program
Project Goal: Increase the ability of NASA to engineer highly dependable software systems via the development of new techniques and technologies.
Research Goal: Quantitatively define dependability, develop high-dependability technologies, assess their effectiveness under varying conditions, and transfer them into practice.
Partners: NASA, CMU, MIT, UMD, USC, U. Washington, Fraunhofer-MD

Main Questions
How can I understand the stakeholders' quality needs?
How can I apply the available processes to deliver the required quality?
[Figure: both questions draw on an experience base shared with the system developers.]

What are the top-level research problems?
Research Problem 1: Can the quality needs be understood and modeled? (Product models)
Research Problem 2: What does a process do? Can it be empirically demonstrated?
Research Problem 3: What set of processes should be applied to achieve the desired quality? (Decision support)
[Figure relates these problems to system users, process developers, system developers, and the fault space.]

System User Issues
How do I elicit quality requirements? How do I express them in a consistent, compatible way?
How do I identify the non-functional requirements in a consistent way: across multiple stakeholders, in a common (failure-focused) terminology, and able to be integrated?
How can I take advantage of previous knowledge about failures relative to system functions, models and measures, and reactions to failures? Build an experience base.
How do I identify incompatibilities in my non-functional requirements for this particular project?

UMD - Unified Model of Dependability
The Unified Model of Dependability (UMD) is a requirements engineering framework for eliciting and modeling quality requirements.
Requirements are expressed by specifying the actual issue (failure and/or hazard), or class of issues, that should not affect the system or a specific service (scope). Since issues can happen, tolerable manifestations (measure) may be specified, along with a desired corresponding system reaction. External events that could be harmful to the system may also be specified.
For an on-line bookstore system, an example requirement is: “The book search service (scope) should not have a response time greater than 10 seconds (issue) more often than 1% of the cases (measure); if the failure occurs, the system should warn the user and recover full service in one hour (reaction)”.
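A minimal sketch of how the bookstore requirement above could be encoded using the UMD concepts of scope, issue, measure, event, and reaction. The field names and structure are hypothetical, mirroring the concepts rather than any actual UMD tooling.

```python
# Illustrative encoding of a UMD-style dependability requirement.
# Field names are hypothetical; they mirror the UMD concepts, not a real tool's schema.

requirement = {
    "scope": "book search service",
    "issue": {"type": "performance failure",
              "description": "response time greater than 10 seconds"},
    "measure": {"kind": "tolerable manifestation",
                "limit": "at most 1% of the cases"},
    "reaction": {"warn_user": True,
                 "recover_full_service_within": "1 hour"},
    "external_events": [],  # harmful external events, if any, would be listed here
}

def summarize(req: dict) -> str:
    return (f"{req['scope']}: {req['issue']['description']} "
            f"in {req['measure']['limit']}; reaction: warn user, "
            f"recover within {req['reaction']['recover_full_service_within']}")

print(summarize(requirement))
```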

UMD is a model builder

UMD assimilates new experience
Characterizations (e.g., types, severity) of the basic UMD modeling concepts of issue, scope, measure, and event depend on the specific context (project and stakeholders). They can be customized while applying UMD to build a quality model of a specific system, and enriched with each new application.

UMD: a framework for engineering decisions
UMD supports engineering decisions at the requirements phase: quality validation, negotiation, and trade-off analysis.
Requirements visualization.
Computation of aggregate values of dependability (availability, MTBF per service, etc.).
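As an example of such an aggregate value, availability can be derived from mean time between failures and mean time to recover. A small sketch using the standard reliability formula (the function and the example numbers are illustrative, not UMD-specific code):

```python
# Availability from MTBF and MTTR (standard steady-state reliability formula).

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# E.g., a service that fails on average every 500 hours and takes 1 hour to recover:
print(availability(500.0, 1.0))  # ~0.998 -> about 99.8% available
```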

Technology Researcher Issues
How well does my technology work? Where can it be improved?
How does one articulate the goals of a technology? Formulating measurable hypotheses.
How does one empirically demonstrate its goals? Performing empirical studies; validating expectations/hypotheses.
What are the requirements for a testbed? Fault seeding.
How do you provide feedback for improving the technology?

HDCP: Example Outcome
A process for inspections of object-oriented designs was developed using multiple iterations through this method.
Early iterations concentrated on feasibility: the effort required and the results due to the process, in the context of offline, toy systems. Is further effort justified?
Mid-process iterations concentrated on usability: usability problems and the results due to individual steps, in the context of small systems in actual development. What is the best ordering and streamlining of process steps to match user expectations?
The most recent iterations concentrated on effectiveness: effectiveness compared to other inspection techniques previously used by developers, in the context of real systems under development. Does the new technique represent a usable improvement to practice?

Using testbeds to transfer technology
Define testbeds: projects, operational scenarios, and detailed evaluation criteria representative of product needs.
Testbeds stress the technology and demonstrate its context of effectiveness
Help the researcher identify the strengths, bounds, and limits of the particular technology at different levels
Provide insight into the integration of technologies
Reduce costs by reusing software artifacts
Reduce risks by enabling technologies to mature before taking them to live project environments
Assist technology transfer of mature results
Conduct empirical evaluations of emerging processes
Establish evaluation support capabilities: instrumentation, a seeded defect base, experimentation guidelines

TSAFE testbed: Tactical Separation Assisted Flight Environment
TSAFE aids air-traffic controllers in detecting short-term aircraft conflicts. It was proposed as the principal component of a larger Automated Airspace Computing System by Heinz Erzberger at NASA Ames.
MIT TSAFE: a partial implementation by Gregory Dennis (trajectory synthesis and conformance monitoring).
TSAFE Testbed: developed at FC-MD, based on MIT TSAFE.
Added testbed-specific features, e.g., to monitor faults and output
Added features to make the testbed easier to run and experiment with
Synthesized faults that were seeded
Added documentation
Added functionality and algorithms
Used to evaluate a family of software architecture techniques

Testbed Development: Fault Seeding
Fault seeding drivers:
Intervention-driven: seed faults based upon what the intervention is expected to find. Assumes the interventionist can express goals in terms of fault classes. Focus is on the intervention (no direct dependability link); each intervention can generate a different set of faults.
History-driven: seed faults based upon the fault history from similar NASA projects. Assumes we have historical data from similar systems, and perhaps the relationship of failure to fault.
Dependability-driven: seed faults that create issues (failures) that the user sees as hindering dependability. Assumes we can identify issues that would impact stakeholders' needs; the fault may not be representative of what caused the failure.
The approaches can be combined, as sketched below.
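One way to combine the three drivers is a weighted score over a fault catalogue. The sketch below is purely illustrative: the candidate faults, weights, and function names are assumptions, not an actual HDCP seeding procedure.

```python
# Hypothetical sketch: combine intervention-, history-, and dependability-driven
# criteria into a single score for choosing which faults to seed in a testbed.

CANDIDATE_FAULTS = [
    {"id": "F1", "fault_class": "omission", "historical_frequency": 0.30, "dependability_impact": 0.9},
    {"id": "F2", "fault_class": "control flow", "historical_frequency": 0.20, "dependability_impact": 0.4},
    {"id": "F3", "fault_class": "interface", "historical_frequency": 0.10, "dependability_impact": 0.7},
]

def score(fault, targeted_classes, w_intervention=0.4, w_history=0.3, w_dependability=0.3):
    """Weighted combination of the three seeding drivers (weights are illustrative)."""
    intervention = 1.0 if fault["fault_class"] in targeted_classes else 0.0
    return (w_intervention * intervention
            + w_history * fault["historical_frequency"]
            + w_dependability * fault["dependability_impact"])

def select_faults(n, targeted_classes):
    ranked = sorted(CANDIDATE_FAULTS, key=lambda f: score(f, targeted_classes), reverse=True)
    return ranked[:n]

print([f["id"] for f in select_faults(2, targeted_classes={"omission"})])
```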

Problem: Satisfying dependability needs by applying the right interventions requires matching failure classes to fault classes, i.e., a mapping between the fault space and the failure space.

Applied Research: DoE High Productivity Computing Systems
Project Goal: Improve the buyer's ability to select the high-end computer for the problems to be solved based upon productivity, where productivity means time to solution = development time + execution time.
Research Goal: Develop theories, hypotheses, and guidelines that allow us to characterize, evaluate, predict, and improve how an HPC environment (hardware, software, human) affects the development of high-end computing codes.
Partners: MIT Lincoln Labs, MIT, UCSD, UCSB, UMD, USC, FC-MD

HPCS Phase II Teams
Goal: create a new generation of economically viable computing systems and a procurement methodology for the security/industrial community (2010).
Industry: IBM, Sun, Cray (PIs: Elnozahy, Mitchell, Smith).
Mission Partners: DoE, DoE Office of Science, NNSA, NASA, NSA, NRO, HPCMO.
Productivity Team (MIT Lincoln Laboratory lead): Lincoln, ISI, UMD, UCSD, UTK, MITRE, ONL, LANL, ANL, LBL, UCSB, LCS (MIT CSAIL), OSU (Ohio State), CodeSourcery; PIs include Kepner, Lucas, Basili, Benson & Snavely, Dongarra, Koester, Vetter, Lusk, Post, Bailey, Gilbert, Edelman, Ahalt, Mitchell.

Trade-offs Drive Designs
Motivation: metrics drive design ("you get what you measure").
Execution time (example): current metrics favor caches and pipelines; systems are ill-suited to applications with low spatial locality and low temporal locality.
Development time (example): no metrics are widely used; least-common-denominator standards are difficult to use and difficult to optimize.
[Chart: languages plotted by expressiveness vs. performance (Assembly/VHDL, C/Fortran, MPI/OpenMP, UPC/CAF, Matlab/Python, high-performance high-level languages) and benchmarks/applications by spatial vs. temporal locality (STREAM ADD, Top500 Linpack Rmax, RandomAccess/GUPS, large FFTs for reconnaissance, adaptive multi-physics such as weapons design, vehicle design, weather), with the HPCS trade-off space spanning them.]
Goal: allow quantitative trade-offs between execution time and development time. HPCS needs a validated assessment methodology that values the "right" vendor innovations (see HPCchallenge).

Building a knowledge base
Ultimate goal: provide empirical evidence on time to solution for mission partners, vendors, researchers, and practitioners via a knowledge base of data, models, hypotheses, and influencing variables.
Knowledge is often either high confidence in a very small region of the context space (e.g., a classroom experiment) or spread over a broad context region but with low confidence (e.g., folklore).
Learn over time: start small (crawl before you walk before you run) and evolve the studies, models, and hypotheses.

Stakeholder Needs
Mission partners: Which parallel programming technologies are most effective for the kinds of problems we work on? What model allows me to increase my effective work staff? How do I shrink the learning curve?
Vendors: Can I confidently demonstrate the elimination or minimization of effort in any workflow steps using my technology?
Parallel technology developers: How can I incorporate the study of development time into my work? What technologies help improve development time? What is the tradeoff of development time to speed-up benefits?
Practitioners: What is the best technology available for solving my problem?

Goals: Evolving studies
Pilot classroom studies on single-programmer assignments: identify variables, data collection problems, single-programmer workflows, and experimental designs. These lead to
Observational studies on single-programmer assignments: develop the variables and data we can collect with confidence, based upon our understanding of the problems. These lead to
Controlled single-programmer experiments: generate more confidence in the variables, data collection, and models; provide hypotheses about novices. These lead to
Team student projects: study scale-up and multi-developer workflows. These lead to
Professional developer studies: study scale-up, multi-developer workflows, porting, reusing, and developing from scratch.

Goals: Building models
Identify the relevant variables and context variables, programmer workflows, and mechanisms for identifying variables and relationships:
Developers: novices, experts
Problem spaces: various kernels; computationally-based vs. communication-based; …
Workflows: single-programmer research model, …
Mechanisms: controlled experiments, folklore elicitation, case studies
Identify which variables can be collected accurately, or which proxies can be substituted for them; understand data collection problems.
Identify the relationships among those variables and the contexts in which those relationships hold.
Build models of time to development, relative effectiveness of different programming models, and productivity, i.e., Productivity = Value/Cost. For example, in the context of a single programmer, productivity might be relative speedup / relative effort (how do you measure speed-up, and how do you measure effort?). A small sketch follows.
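A minimal sketch of the single-programmer productivity ratio mentioned above, relative speedup over relative effort. How speedup and effort should be measured is an open question on the slide, so the normalizations used here (serial baseline runtime, serial baseline effort) are assumptions.

```python
# Hypothetical single-programmer productivity sketch: relative speedup / relative effort.
# The normalizations below (serial baseline runtime and effort) are assumptions;
# the slide leaves the measurement of speedup and effort as open questions.

def relative_speedup(serial_runtime_s: float, parallel_runtime_s: float) -> float:
    return serial_runtime_s / parallel_runtime_s

def relative_effort(parallel_effort_hours: float, serial_effort_hours: float) -> float:
    return parallel_effort_hours / serial_effort_hours

def productivity(serial_runtime_s, parallel_runtime_s, parallel_effort_hours, serial_effort_hours):
    return (relative_speedup(serial_runtime_s, parallel_runtime_s)
            / relative_effort(parallel_effort_hours, serial_effort_hours))

# E.g., a code that runs 8x faster but took 3x the effort to parallelize:
print(productivity(serial_runtime_s=800, parallel_runtime_s=100,
                   parallel_effort_hours=30, serial_effort_hours=10))  # -> ~2.67
```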

Goals: Evolving Hypotheses
Identify folklore*: elicit expert opinion to identify the relevant variables and terminology and some simple relationships among variables, looking for consensus or disagreement.
Evolve the folklore: evolve the relationships and identify the context variables that affect their validity, using surveys and other mechanisms. (In practice, we identified a set of opinions from a particular set of experts and then tested these opinions against the larger community several times, modifying them after each data collection activity.)
Turn the folklore into hypotheses using variables that can be specified and measured.
Verify the hypotheses, or generate more confidence in their usefulness, in various studies about development, productivity, and the relative effectiveness of different programming models. E.g., OpenMP offers more speedup for novices in a shorter amount of time when the problem is more computationally-based than communication-based.
*Folklore: an often unsupported notion, story, or saying that is widely circulated.

The HEC community provides questions to study that lead to successively larger and more complex experiments.
[Chart: problem scale vs. time/duration. Studies progress from folklore, class assignments, and single-programmer classroom studies on kernels, through single-programmer expert studies, class projects, and team projects, to case studies porting compact applications, with measurement, models, and hypotheses evolving along the way through experiments and observational studies.]

The results of the experiments provide a variety of models and relationships useful for understanding HEC workflows.
[Chart: relationships among variables such as programmer experience, speedup, program performance, lines of code, effort, and productivity, supporting research, baseline data, workflows, and evaluation of measures.]

Experimental Packages
We will produce experimental packages (experimental artifacts, programming problems, classroom studies, industrial studies, data collection software) to increase our knowledge and to give advice back to the HEC community on how to program HEC machines effectively.
Advice to university professors, mission partners, and vendors: effective programming methods, language feature utilization, workflow models, student workflows, productivity models.

HPCS Research Activities
Development-time experiments with novices and experts yield empirical data, which feed predictive models (quantitative guidance) and general heuristics (qualitative guidance).
Example tradeoff between effort and performance: MPI will increase the development effort by y% and increase the performance by z% over OpenMP.
Example scalability heuristic: if you need high scalability, choose MPI over OpenMP.

HPCS Testbeds
We are experimenting with a series of testbeds ranging in size from classroom assignments (array compaction, the Game of Life, parallel sorting, LU decomposition, …), to compact applications (combinations of kernels, e.g., embarrassingly parallel, coherence, broadcast, nearest neighbor, reduction), to full scientific applications (nuclear simulation, climate modeling, protein folding, …).

Acquisition Best Practices Clearinghouse Project
Project Goal: Populate an experience base for acquisition best practices, defining context and impact attributes that allow users to understand the effects of applying the practices, based upon the best empirical evidence available.
Research Goal: Define a repeatable, model-based process for vetting empirical evidence, enabling different people to create profiles consistently and new evidence to be integrated.
Partners: OSD, UMD, FC-MD, DAU, Northrop Grumman, …

Clearinghouse Key Concepts
[Diagram] External sources (DoD, SEI, DACS, BMC, STSC, PMN, SWEBOK) and experts feed candidate best practices through a gateway into a vetting process. Analysts extract, analyze, group, and refine best practices into the Clearinghouse repository, which administrators maintain. Users submit best practices and feedback, browse, search, and retrieve practices, take part in discussions, and ask questions of experts (AskAnExpert); questions and answers are logged.

Best Practices Vetting Process
Each cycle of the practice/packaging maturation cycle allows more experience to be gathered and processed, leading to better characterization of the practice, improved recommendations, and more dependable implementation guidance.
Identification. Inputs: leads to practices. Activities: collect, categorize, filter, synthesize, prioritize. Outputs: a candidate set of practices, with rationale for consideration.
Characterization. Gather and research characteristics of the practice, including context (project, etc.), evidence of use, and lessons learned; complete the “story” profile. Outputs: a more detailed set of candidate practices with “stories”.
Analysis & Synthesis. Aggregate stories, create a profile of each practice, populate the repository, and identify/define interrelationships. Outputs: a single profile for each best practice, associated artifacts, and confidence levels.
Validation. Inputs: sets of practice data and validation criteria. Check the outputs from previous phases, color-code practices, and approve practices via a panel of experts. Outputs: validated practices.
Packaging & Dissemination. Packaging, publishing, promoting, providing user help, discussions, repository updates, papers and conference presentations, course materials and updates.
As our information on experience with each practice grows, we will have improved validation levels. Some practices (like peer reviews) will start off as Proven because there is sufficient data and experience already available. Other practices (maybe demonstration-based milestone reviews) will have less data and experience and may be presented as Nominated or Promising, with caveats on where they have been successful and where there are unknown edges. The idea is to present practices that are believed to be beneficial, but tag them with our overall confidence in their validity. If we provide sufficient information, the PM can decide whether the promised benefit outweighs the perceived risk of implementation.
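A minimal sketch of what a vetted practice profile with a validation level might look like as data. The Nominated/Promising/Proven labels follow the slide; the field names and structure are hypothetical, not the actual Clearinghouse schema.

```python
# Hypothetical sketch of a Clearinghouse-style practice profile with a validation level.
# Field names and structure are illustrative, not the actual Clearinghouse schema.

VALIDATION_LEVELS = ["Nominated", "Promising", "Proven"]  # labels mentioned on the slide

profile = {
    "practice": "Peer reviews",
    "context": {"domain": "software acquisition", "project_size": "any"},
    "evidence": [
        {"source": "published study", "finding": "effective for faults of omission"},
        {"source": "project lessons learned", "finding": "reduced rework"},
    ],
    "validation_level": "Proven",  # chosen here because substantial data and experience exist
}

def confidence_rank(p: dict) -> int:
    """Higher rank means more empirical confidence in the practice."""
    return VALIDATION_LEVELS.index(p["validation_level"])

print(profile["practice"], confidence_rank(profile))
```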

Empirically-Based Practices Profile
A profile records attributes and their values, a brief justification, and links to the empirical evidence; each piece of evidence carries a justification, summary, statement, source, and valuation, with links to its sources (full report/paper, summary/story).
We need models for evaluating the maturity of best practices and for weighting empirical evidence.

Where do we need to go?
Propagate the empirical discipline
Build an empirical research engine for software engineering
Build testbeds for experimentation and evolution of processes
Build product models that allow us to make trade-off decisions
Build decision support systems offering the best empirical advice for selecting and tailoring the right processes for the problem

Areas of Research Needed
Eliciting quality requirements in measurable terms
Evaluating technology in context
Building testbeds to support experimentation
Building heuristics to deal with the fault/failure relationship
Building decision support systems
Finding better ways to experiment and integrate the results of the studies

Conclusion
This talk is about the role of empirical study in software engineering and the synergistic relationship between research, applied research, and practice.
Software developers need to know what works and under what circumstances.
Technology developers need feedback on how well their technology works and under what conditions.
We need to continue to collect empirical evidence, analyze and synthesize the data into models and theories, and collaborate to evolve software engineering into an engineering discipline.