Slide 1: Test Planning
(c) Henrik Bærbak Christensen, DAIMI

Slide 2: Definition

Plan: a document that provides a framework or approach for achieving a set of goals.
Corollary: you have to define the goals in advance.
Burnstein provides templates for a company testing policy that states the overall goals.

Slide 3: Plan Contents

A testing plan must address issues like:
– Overall testing objectives: why are we testing, what are the risks, etc.
– Which pieces will be tested?
– Who performs the testing?
– How will testing be performed?
– When will testing be performed?
– How much testing is adequate?
These dimensions are orthogonal (independent), and each is a continuum: for each dimension you must decide where to place your project.

Slide 4: Which pieces will be tested?

Continuum extremes:
– every unit is tested
– no testing at all (i.e. the users do it)
Variations in between:
– a systematic approach for choosing what to test…
– ROI (return on investment) is important: where does one test hour find the most defects, or the 'most annoying' defects?
Strategies:
– "Defect Hunting", "Allocate by Profile"

Slide 5: Who performs testing?

Project roles:
– Developer: constructs the products
– Tester: detects failures in the products
Remember: these are roles, not persons.
Continuum extremes:
– the same persons do everything
– the roles are always split between different persons
All kinds of variations in between:
– unit level: often the same person has both roles; in XP pair programming the roles are often split within the pair
– system level: often separate testing teams
Testing psychology: do not test your own code…

Slide 6: How will testing be performed?

Continuum extremes:
– specification only (the "what"): black-box testing
– implementation also (the "how"): white-box testing
Levels:
– unit, integration, system
Documentation:
– the XP way: move forward as fast as possible
– the CMM way: produce as much paper as possible
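To make the black-box/white-box distinction concrete, here is a minimal JUnit 4 sketch (my illustration, not from the slides; the max utility is hypothetical): the first test is derived purely from the specification, while the second is chosen after reading the implementation in order to hit a particular branch.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class MaxTest {
        // Hypothetical unit under test: returns the larger of two ints.
        static int max(int a, int b) { return (a >= b) ? a : b; }

        @Test
        public void blackBoxFromSpecification() {
            // Derived from the "what": max returns the larger value.
            assertEquals(7, max(3, 7));
            assertEquals(7, max(7, 3));
        }

        @Test
        public void whiteBoxFromImplementation() {
            // Derived from the "how": a == b exercises the >= boundary in
            // the implementation, a case the spec alone may not single out.
            assertEquals(5, max(5, 5));
        }
    }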

Slide 7: When will testing be performed?

Continuum extremes:
– test every unit as it becomes available: "high-frequency integration", test-driven development
– delay until all units are available: "big-bang integration"
Variations in between:
– defects found early are usually cheaper to fix!!! Why??? (Kent Beck says that this is not true!!!)
– testing at the end of each increment / milestone

Slide 8: How much testing is adequate?

A continuum from none to very thorough… but when is enough enough???
– life-critical software; asset transaction handling
– a once-used converter; a research demo
Adequacy:
– defect detection cost versus the increase in quality
– standards: compare drug manufacturing with furniture manufacturing
Coverage:
– code coverage
– requirement coverage (use cases covered)

Slide 9: Test Plan Format

Slide 10: IEEE Test Plan

The IEEE standard for test plans defines a template that is independent of the particular testing level:
– system, integration, unit
If it is followed rigorously at every level, the cost may be very high...

Slide 11: Features to be Tested

Items to be tested:
– the "module view": the actual units to be put under test
Features to be tested / not to be tested:
– the "use case view": the system from the users' perspective, i.e. its use cases

Slide 12: Points to Note

Features not tested:
– incremental development means the large existing code base is relatively stable...
– each increment is additions plus changes to the base
But what do we retest?
– everything?
– just the added and changed code?
Exercise:
– Any ideas?
– What influences our views?
[Figure: increments n and n+1; the 'blob' size measures code size, its position shows whether the code is additions or changes]

Slide 13: Regression Testing

The simple answer is: test everything all the time, which is what XP says at the unit level. However:
– some tests run slowly (stress and deployment testing)
– or are expensive to perform (manual tests, hardware requirements)
The question is then: which test cases exercise code that has changed??? Any views?

Slide 14: Test Case Traceability

This actually points towards a very important problem, namely traceability between tests, specification, and code units.
A simple model (ontology):
– a test case is derived from a use case, a code unit implements a use case, and a test case exercises a code unit (the use case is thereby tested by the test case)
– the problem is the multiplicity: every one of these relations is many-to-many (* to *)
– tracing the dependencies is the hard part!
[Figure: ontology with nodes test case, use case, and code unit, connected by derived-from, tested-by, exercises, and implements associations, all with * to * multiplicity]
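A minimal sketch of this ontology in Java (my illustration; the type and field names are invented). The many-to-many link sets are exactly where the multiplicity problem lives, and given a set of changed code units, the regression question from the previous slide becomes a union over the exercisedBy links:

    import java.util.HashSet;
    import java.util.Set;

    // Hypothetical traceability model: every relation is many-to-many.
    class UseCase {
        final Set<TestCase> testedBy = new HashSet<>();
        final Set<CodeUnit> implementedBy = new HashSet<>();
    }
    class CodeUnit {
        final Set<TestCase> exercisedBy = new HashSet<>();
    }
    class TestCase {
        final Set<UseCase> derivedFrom = new HashSet<>();
        final Set<CodeUnit> exercises = new HashSet<>();
    }

    class Traceability {
        // Regression selection: rerun exactly those tests that exercise
        // at least one changed code unit.
        static Set<TestCase> testsToRerun(Set<CodeUnit> changed) {
            Set<TestCase> result = new HashSet<>();
            for (CodeUnit unit : changed) result.addAll(unit.exercisedBy);
            return result;
        }
    }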

Slide 15: Side Bar

At the CSMR 2004 conference an interesting problem was stated:
– a stock-trading application: test cases over 7½ million lines of C++ code
– no traceability between specification, units, and tests
So, what to do? Dynamic analysis:
– record the time when each test case runs
– record the time when each method runs (requires instrumentation)
– compare the time stamps!
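A toy version of that timestamp comparison (my illustration; the record format is invented): a method whose execution interval falls inside a test case's execution interval is assumed to be exercised by that test.

    import java.util.*;

    // Hypothetical instrumentation log entry: [start, end] in milliseconds.
    record Interval(String name, long start, long end) {}

    class DynamicTraceRecovery {
        // Link every method to the tests whose time window encloses it.
        static Map<String, List<String>> link(List<Interval> tests,
                                              List<Interval> methods) {
            Map<String, List<String>> exercises = new HashMap<>();
            for (Interval t : tests)
                for (Interval m : methods)
                    if (m.start() >= t.start() && m.end() <= t.end())
                        exercises.computeIfAbsent(t.name(), k -> new ArrayList<>())
                                 .add(m.name());
            return exercises;
        }
    }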

Slide 16: Approach

Section 5 of the template contains:
– managerial information that defines the testing process: degree of coverage, time and budget limitations, stop-test criteria
– and the actual test cases!
It is a bit odd to have both the framework for the testing and the tests themselves in the same document; usually the real test cases live in a separate document.

Slide 17: Pass/Fail Criteria

Pass/fail criteria:
– at unit level this is often a binary decision: the test either passes (computed = expected) or fails
– higher levels require severity levels: compare a failing "save" operation with a failing "reconfigure button panel"; this allows conditionally passing a test
– compare review terminology
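A sketch (hypothetical names and thresholds) of how the binary unit-level verdict generalizes to severity-based verdicts at higher levels:

    import java.util.List;

    // Hypothetical severity scale; at unit level only pass/fail exists.
    enum Severity { CRITICAL, MAJOR, MINOR, COSMETIC }

    class Verdict {
        // A run passes conditionally if all observed failures are below a
        // severity threshold: a broken "reconfigure button panel" may be
        // tolerated, a broken "save" operation is not.
        static String judge(List<Severity> failures) {
            if (failures.isEmpty()) return "PASS";
            boolean onlyMinor = failures.stream().allMatch(
                s -> s == Severity.MINOR || s == Severity.COSMETIC);
            return onlyMinor ? "CONDITIONAL PASS" : "FAIL";
        }
    }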

Slide 18: Suspension/Resumption Criteria

When to suspend testing:
– for instance, when a severity level 0 defect is encountered: "back to the developers, no point in wasting more time"
When to resume:
– redo all tests after a suspension? Or only those not yet run?

Slide 19: Contents

Deliverables: what is the output?
– test design specifications, test procedures, test cases
– test incident reports, logs, ...
Tasks:
– the work-breakdown structure
Environment:
– software / hardware / tools
Responsibilities:
– roles

Slide 20: Contents (continued)

Staff / training needs
Scheduling:
– PERT and Gantt charts
Risks

Slide 21: Testing Costs

Estimation, in the form of "staff hours", is known to be a hard problem.
– historical project data is important
– still, underestimation is more the rule than the exception
Suggestion (see the sketch below):
– 'prototype' the testing of a few 'typical' use cases/classes and measure the effort (staff hours)
– count or estimate the total number of use cases and classes, and extrapolate
Burnstein:
– look at project and organization characteristics, use models, gain experience
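A minimal sketch of that extrapolation (the numbers are invented for illustration):

    class TestEffortEstimate {
        public static void main(String[] args) {
            // Hypothetical prototype measurement: testing 3 typical use
            // cases took 24 staff hours, i.e. 8 hours per use case.
            double sampleHours = 24.0, sampleUseCases = 3.0;
            double hoursPerUseCase = sampleHours / sampleUseCases;

            // Hypothetical total: the system has 40 use cases.
            double totalUseCases = 40.0;
            double estimate = hoursPerUseCase * totalUseCases; // 320 hours

            System.out.printf("Estimated testing effort: %.0f staff hours%n",
                              estimate);
        }
    }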

Slide 22: Section 5

Section 5 contains the actual tests:
– the design of the tests, with IDs
– test cases: input, expected output, environment
– procedure: how the testing must be done, especially important for manual tests
Test result reports:
– test log: the "laboratory diary"
– incident report: reports a defect (alternatively kept in a defect-tracking tool like Bugzilla)
Summary:
– summary and approval

Slide 23: Monitoring the Testing Process

Slide 24: Motivation

Testing is a managed process:
– clear goals, and planned increments/milestones to achieve them
– progress must be monitored to ensure the plan is kept

Slide 25: Terms

Project monitoring: the activities and tasks defined to periodically check project status.
Project controlling: developing and applying corrective actions to get the project back on track.
Usually we simply use the term project management to cover both processes.

Slide 26: Measurements

Measuring should of course be done for a purpose. Thus there are several issues to consider:
– Which measures to collect?
– For what purpose?
– Who will collect them?
– Which tools/forms will be used to collect the data?
– Who will analyze the data?
– Who will have access to the reports?

Slide 27: Purpose

Why collect data? Data is important for monitoring:
– testing status (and indirectly: a quality assessment of the product)
– tester productivity
– testing costs
– failures, so we can remove the underlying defects

Slide 28: Metrics

Burnstein's suggested metrics:
– coverage
– test case development
– test execution
– test harness development
– tester productivity
– test cost
– failure tracking

Slide 29: Coverage

White-box metrics:
– statement (block), branch, flow, path, ...
– the ratio of actual coverage to planned coverage
Black-box metrics:
– number of requirements to be tested
– number of requirements covered
– equivalence classes (ECs) identified
– ECs covered
– ... and their ratios
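The ratio bookkeeping as a trivial sketch (the names and numbers are invented):

    class CoverageStatus {
        // Progress metric: achieved coverage relative to planned coverage.
        static double ratio(double actual, double planned) {
            return actual / planned;
        }

        public static void main(String[] args) {
            // Hypothetical white-box status: 72% branch coverage achieved
            // against an 85% plan.
            System.out.printf("branch goal attainment: %.2f%n",
                              ratio(0.72, 0.85));
            // Hypothetical black-box status: 40 of 55 identified ECs covered.
            System.out.printf("EC coverage: %.2f%n", ratio(40, 55));
        }
    }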

Slide 30: Test Case Development

Data to collect:
– number of planned test cases (based on time allocated / mean time to complete one test?)
– number of available test cases
– number of unplanned test cases
So, what does the last measure mean?
– a heavy "waterfall model" smell here?

Slide 31: Test Execution

Data collected:
– number of test cases executed ... and passed
– number of unplanned test cases executed ... and passed
– number of regression tests executed ... and passed
– and their ratios

Slide 32: XP Example

[From Jeffries' paper]
Functional tests ≠ unit tests:
– customer-owned
– feature-oriented
– not running at 100%
Status:
– not developed
– developed, and then: pass, fail, or expected output not yet validated

Slide 33: Test Harness Development

Data collected:
– LOC of the harness (planned, available)
Comments?
– Who commissions and develops the harness code?

Slide 34: Tester Productivity & Cost!

Slide 35: Defects

Data is collected on detected defects in order to:
– evaluate product quality
– evaluate testing effectiveness
– support the stop-test decision
– perform cause analysis
– drive process improvement
Metrics:
– number of incident reports (solved/unsolved, by severity level), defects/KLOC, number of failures, number of defects repaired
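The defect-density metric as a one-liner (the counts are invented):

    class DefectDensity {
        public static void main(String[] args) {
            // Hypothetical counts: 138 defects found in 46,000 lines of code.
            double defects = 138;
            double kloc = 46_000 / 1000.0;
            System.out.printf("defect density: %.1f defects/KLOC%n",
                              defects / kloc); // 3.0 defects/KLOC
        }
    }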

Slide 36: Test Completion

At some point, testing must stop... the question is: when?
Criteria:
– all planned tests pass (but what about the unplanned ones?)
– coverage goals are met: branch coverage at unit level, use case coverage at system level
– a specific number of defects has been found (estimated from historical data)
– the defect detection rate falls below a threshold, e.g. "fewer than 5 defects of severity level > 3 per week"

Slide 37: Test Completion Criteria

Criteria (continued):
– fault seeding ratios are favorable: seed the code with "representative defects" and see how many of them testing finds
– postulate: found seeded defects / total seeded defects = found actual defects / total actual defects
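Rearranging the postulate gives an estimator for the defects still in the product (a sketch; the counts are invented): total actual ≈ found actual × total seeded / found seeded.

    class FaultSeedingEstimate {
        public static void main(String[] args) {
            // Hypothetical numbers: 50 representative defects were seeded;
            // testing found 40 of them, plus 120 actual (non-seeded) defects.
            double seededTotal = 50, seededFound = 40, actualFound = 120;

            // Postulate: seededFound/seededTotal == actualFound/actualTotal.
            double actualTotal = actualFound * seededTotal / seededFound; // 150
            double remaining = actualTotal - actualFound;                 // 30

            System.out.printf("estimated total defects: %.0f (%.0f remaining)%n",
                              actualTotal, remaining);
        }
    }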

Slide 38: Summary

Plan testing:
– what, who, when, how, how much: each is a continuum where decisions must be made
Document testing:
– IEEE outlines a document template that is probably no worse than many others...
Monitor testing:
– collect data to make sound judgements about progress and about the stop-testing criteria
Record incidents:
– defects found/repaired