Chapter 10: Testing and Quality Assurance


Testing: Related Topics
1. Understand basic techniques for software verification and validation
2. Analyze the basics of software testing and testing techniques
3. Discuss the concept of the "inspection" process

Introduction
Quality assurance (QA): activities designed to measure and improve quality in a product and in the process that builds it.
Quality control (QC): activities designed to verify and validate the quality of the product by detecting faults and "fixing" the defects.
Both need good techniques, processes, tools, and teams.

What is "Quality"?
Two traditional definitions:
- Conforms to requirements
- Fit for use
Verification: checking that the software conforms to its requirements (did the software evolve from the requirements properly?)
Validation: checking that the software meets user requirements (is it fit for use?)

Some "Error Detection" Techniques (finding errors)
- Testing: executing the program in a controlled environment and verifying/validating its output
- Inspections and reviews
- Formal methods (proving the software correct)
- Static analysis: detecting "error-prone conditions"

Faults and Failures
- Error: a mistake made by a programmer or software engineer; it causes a fault, which in turn may cause a failure.
- Fault (defect, bug): a condition that may cause a failure in the system.
- Failure (problem): the inability of the system to perform a function according to its specification, due to some fault.
- Fault or problem severity is based on consequences; fault or problem priority is based on the importance of developing a fix, which in turn is based on severity.

Testing
An activity performed for:
- Evaluating product quality
- Improving the product by identifying defects and having them fixed prior to software release
Testing is the dynamic (running-program) verification of a program's behavior on a finite set of test cases, selected from the (usually infinite) execution domain.
Testing can NOT prove that the product works 100%, even though we use testing to demonstrate that parts of the software work.
Testing is not always done!

Testing: Why, Who, What, How
Why test:
- Acceptance (customer)
- Conformance (standards, laws, etc.)
- Configuration (user vs. developer)
- Performance, stress, security, etc.
Who tests:
- Programmers
- Testers / requirements analysts
- Users
What is tested:
- Unit (code) testing
- Functional (code) testing
- Integration/system testing
- User interface testing
How (test cases designed):
- Intuition
- Specification-based (black box)
- Code-based (white box)
- Existing cases (regression)

Progression of Testing (figure): unit tests feed into functional tests, functional tests into component tests, and component tests into system/regression testing.

Equivalence Class Partitioning
- Divide the input into several groups deemed "equivalent" for the purpose of finding errors.
- Pick one "representative" from each class for testing.
- Equivalence classes are determined by the requirements/design specifications and some intuition.
- Benefits: lessens duplication, provides complete coverage.
Example: pick the "larger" of two integers.
Class            Representative
First > Second   10, 7
Second > First   8, 12
First = Second   36, 36
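As a concrete sketch, the Python fragment below runs exactly one representative test case per equivalence class from the table above. The larger() function is a hypothetical unit under test introduced for illustration; it is not defined in the slides.

```python
# A minimal sketch of equivalence-class partitioning, assuming a hypothetical
# larger(a, b) function that returns the larger of two integers.

def larger(a, b):
    """Unit under test: return the larger of two integers."""
    return a if a >= b else b

# One representative test case per equivalence class (see the table above).
equivalence_cases = [
    ((10, 7), 10),   # class: first > second
    ((8, 12), 12),   # class: second > first
    ((36, 36), 36),  # class: first == second
]

if __name__ == "__main__":
    for (a, b), expected in equivalence_cases:
        actual = larger(a, b)
        assert actual == expected, f"larger({a}, {b}) returned {actual}, expected {expected}"
    print("All equivalence-class representatives passed.")
```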

Simple Example of Equivalence Testing
Suppose we have n distinct functional requirements r1, ..., rn such that r1 ∪ r2 ∪ ... ∪ rn covers all n requirements and ri ∩ rj = ∅ for i ≠ j. We can devise a test scenario ti for each requirement ri to check whether ri "works." Then t1 ∪ t2 ∪ ... ∪ tn is the set of test cases that covers the software's functionality. Note that there may be more than one candidate ti for a given ri; by picking only one from the set of potential test cases for ri, we form an equivalence class of test cases.

Boundary Value Analysis (a Black-Box Technique)
- Past experience shows that "boundaries" are error-prone.
- Do equivalence-class partitioning, then add test cases for the boundaries (at the boundary, outside it, inside it).
- Reduced cases: consider the boundary as falling between numbers. If the boundary is at 12, the normal approach tests 11, 12, 13; the reduced approach tests 12, 13 (the boundary falls between 12 and 13).
- Produces a large number of cases (~3 per boundary).
- Good for "ordinal" values.

Boundaries of the Input Values
Example constraints: 1 <= number of employees n <= 1,000,000 and 1 <= employee age <= 150.
The "basic" boundary value testing for a value would include:
1. at the "minimum" boundary
2. immediately above the minimum
3. between the minimum and maximum (nominal)
4. immediately below the maximum
5. at the "maximum" boundary
(Note that we did not include values outside the boundaries here.)
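A minimal sketch, assuming the employee-age range above (1 <= age <= 150); the basic_boundary_values helper name is an illustrative choice, not something from the slides.

```python
# Generate the five "basic" boundary value tests for a closed integer range:
# minimum, just above minimum, nominal, just below maximum, maximum.

def basic_boundary_values(minimum, maximum):
    """Return the five basic boundary test values for [minimum, maximum]."""
    nominal = (minimum + maximum) // 2
    return [minimum, minimum + 1, nominal, maximum - 1, maximum]

if __name__ == "__main__":
    # For 1 <= employee age <= 150 this yields [1, 2, 75, 149, 150].
    print(basic_boundary_values(1, 150))
```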

Path Analysis
- A white-box technique.
- Two tasks: analyze the number of paths in the program, then decide which ones to test.
- Levels of decreasing coverage: logical paths, independent paths, branch coverage, statement coverage.
Example (figure: an IF structure with statements S1, S2, S3, condition C1, and segments 1-4):
- Path 1: S1 - C1 - S3, i.e., segments (1, 4)
- Path 2: S1 - C1 - S2 - S3, i.e., segments (1, 2, 3)

A "CASE" Structure
For a CASE structure (figure: statements S1-S5, conditions C1-C3), the 4 independent paths cover:
- Path 1: includes S1 - C1 - S2 - S5
- Path 2: includes S1 - C1 - C2 - S3 - S5
- Path 3: includes S1 - C1 - C2 - C3 - S4 - S5
- Path 4: includes S1 - C1 - C2 - C3 - S5

A Simple Loop Structure
For a simple loop (figure: statements S1, S2, S3, condition C1, segments 1-4), the linearly independent paths are:
- Path 1: S1 - C1 - S3 (segments 1, 4)
- Path 2: S1 - C1 - S2 - C1 - S3 (segments 1, 2, 3, 4)

Linearly Independent Set of Paths
(Figure: a flow graph with statements S1, S2 and conditions C1, C2 over segments 1-6, plus a path-by-segment incidence table for path1 through path4.) Consider path1, path2, and path3 as the linearly independent set.
Remember McCabe's cyclomatic number? It equals the number of paths in a linearly independent set.

Total # of Paths vs. Linearly Independent Paths
Since each binary decision has 2 outcomes and there are 3 decisions in sequence, there are 2^3 = 8 total "logical" paths:
- path1: S1-C1-S2-C2-C3-S4
- path2: S1-C1-S2-C2-C3-S5
- path3: S1-C1-S2-C2-S3-C3-S4
- path4: S1-C1-S2-C2-S3-C3-S5
- path5: S1-C1-C2-C3-S4
- path6: S1-C1-C2-C3-S5
- path7: S1-C1-C2-S3-C3-S4
- path8: S1-C1-C2-S3-C3-S5
How many linearly independent paths are there? Using the cyclomatic number: 3 decisions + 1 = 4. One such set would be:
- path1: includes segments (1, 2, 4, 6, 9)
- path2: includes segments (1, 2, 4, 6, 8)
- path3: includes segments (1, 2, 4, 5, 7, 9)
- path5: includes segments (1, 3, 6, 9)
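To make the counting concrete, here is a small Python sketch (a hypothetical function, not taken from the slides) with three binary decisions in sequence: it has 2^3 = 8 logical paths, while its cyclomatic number is 3 + 1 = 4, so four well-chosen test cases already exercise both outcomes of every decision.

```python
# Hypothetical example: three if-statements in sequence give 2**3 = 8 logical
# paths, but cyclomatic complexity 3 + 1 = 4.

def classify(a, b, c):
    """Toy function with three independent binary decisions."""
    labels = []
    if a > 0:          # decision C1
        labels.append("a+")
    if b > 0:          # decision C2
        labels.append("b+")
    if c > 0:          # decision C3
        labels.append("c+")
    return labels

# Four test cases: together they take both the true and the false branch of
# every decision and follow four distinct paths out of the eight.
path_cases = [
    ((1, 1, 1), ["a+", "b+", "c+"]),   # all decisions true
    ((-1, -1, -1), []),                # all decisions false
    ((1, -1, 1), ["a+", "c+"]),        # mixed
    ((-1, 1, -1), ["b+"]),             # mixed (opposite)
]

if __name__ == "__main__":
    for args, expected in path_cases:
        assert classify(*args) == expected
    print("4 test cases cover every branch; exhaustive path testing needs 8.")
```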

Combinations of Conditions
- When behavior is a function of several related variables, full testing requires all possible combinations of (the equivalence classes of) their values.
- How to reduce testing:
  - Coverage analysis
  - Assess the "important" cases (e.g., main functionalities)
  - Test all pairs of values, rather than all combinations
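The sketch below contrasts exhaustive combination testing with an all-pairs reduction for a hypothetical configuration of three parameters with three values each (the parameter names are made up for illustration). With three 3-valued parameters, the classic modular construction gives 9 pairwise tests instead of 27 exhaustive ones.

```python
# Exhaustive combinations vs. a pairwise (all-pairs) covering set for three
# hypothetical parameters with three values each.
from itertools import product, combinations

browsers = ["Chrome", "Firefox", "Safari"]
systems = ["Windows", "Linux", "macOS"]
locales = ["en", "de", "ja"]

# Exhaustive testing: every combination -> 3 * 3 * 3 = 27 test cases.
exhaustive = list(product(browsers, systems, locales))

# Pairwise reduction: choosing the third value as (i + j) mod 3 yields only
# 9 cases, yet every pair of parameter values still appears together.
pairwise = [(browsers[i], systems[j], locales[(i + j) % 3])
            for i in range(3) for j in range(3)]

def covered_pairs(cases):
    """Collect every pair of (parameter index, value) that a test set covers."""
    pairs = set()
    for case in cases:
        pairs.update(combinations(enumerate(case), 2))
    return pairs

if __name__ == "__main__":
    print(len(exhaustive), "exhaustive cases vs.", len(pairwise), "pairwise cases")
    # The 9 pairwise cases cover exactly the same value pairs as all 27 cases.
    assert covered_pairs(pairwise) == covered_pairs(exhaustive)
```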

Unit Testing
- Test each individual unit; usually done by the programmer.
- Test each unit as it is developed (in small chunks).
- Keep test cases and results around (use JUnit or another xUnit framework):
  - Allows for regression testing
  - Facilitates refactoring
  - Tests become documentation!
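The slides mention JUnit and other xUnit frameworks; the sketch below uses Python's built-in unittest module, the xUnit member for Python. The average() function is a hypothetical unit under test, not from the slides.

```python
# xUnit-style unit test with Python's built-in unittest module.
import unittest

def average(values):
    """Unit under test: arithmetic mean of a non-empty list of numbers."""
    return sum(values) / len(values)

class TestAverage(unittest.TestCase):
    def test_single_value(self):
        self.assertEqual(average([4]), 4)

    def test_typical_list(self):
        self.assertEqual(average([1, 2, 3, 4]), 2.5)

    def test_negative_values(self):
        self.assertEqual(average([-2, 2]), 0)

if __name__ == "__main__":
    unittest.main()   # rerunning this file later is the regression test
```

Because the test cases are kept with the code, they can be rerun after every change (regression testing), they make refactoring safer, and they document how average() is meant to behave.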

Test-Driven Development
- Write unit-test cases BEFORE the code!
- Test cases "are" / "become" the requirements.
- Forces development in small steps.
Steps:
1. Write a test case and the code it exercises
2. Verify the test (it fails or runs)
3. Modify the code so the test succeeds
4. Rerun the test case and all previous tests
5. Refactor until success and satisfaction
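The sketch below walks one TDD cycle for a hypothetical is_leap_year() function (the example is chosen for illustration and is not from the slides): the tests are written first, fail while the function is only a stub, and pass once just enough code is added.

```python
# One test-driven development cycle: tests first, then just enough code.
import unittest

# Step 1-2: write the tests first; with only a stub implementation they fail
# (the "red" step), which verifies that the tests actually test something.
class TestIsLeapYear(unittest.TestCase):
    def test_divisible_by_four(self):
        self.assertTrue(is_leap_year(2024))

    def test_century_is_not_leap(self):
        self.assertFalse(is_leap_year(1900))

    def test_divisible_by_400(self):
        self.assertTrue(is_leap_year(2000))

# Step 3: modify the code so the tests succeed (the "green" step).
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Step 4-5: rerun this test case and all previous tests after every change,
# then refactor while staying green.
if __name__ == "__main__":
    unittest.main()
```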

When to Stop Testing?
Simple answer, stop when:
- All planned test cases have been executed
- All the problems that were found have been fixed
Other techniques:
- Stop when you are not finding any more errors
- Defect seeding: test until all (or a set percentage of) the seeded bugs have been found
NOT when you run out of time; that is poor planning!

Defect Seeding
- Seed the program (component): generate and scatter "x" bugs in it and do not tell the testers.
- Set a percentage (e.g., 95%) of seeded bugs found as the stopping criterion.
- Suppose "y" of the "x" seeded bugs are found:
  - If y/x > the stopping percentage, stop testing.
  - If y/x <= the stopping percentage, keep on testing.
- To get a feel for how many bugs may still remain: suppose you discovered "u" non-seeded bugs through testing. Setting y/x = u/v gives v = (u * x) / y, so most likely about (v - u) bugs are still left in the software.
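A tiny numeric sketch of the estimate above, with made-up numbers purely for illustration:

```python
# Defect-seeding estimate: y of x seeded bugs found, u real bugs found.
# Then v = (u * x) / y estimates the total number of real bugs, and
# (v - u) estimates how many real bugs are still left.

def estimate_remaining_defects(seeded, seeded_found, real_found):
    """Return the estimated number of real (non-seeded) defects remaining."""
    total_real_estimate = (real_found * seeded) / seeded_found   # v = u*x/y
    return total_real_estimate - real_found                      # v - u

if __name__ == "__main__":
    x, y, u = 50, 45, 27   # hypothetical: 50 seeded, 45 of them found, 27 real bugs found
    print(f"Seed-detection rate: {y / x:.0%}")                   # 90%
    remaining = estimate_remaining_defects(x, y, u)
    print(f"Estimated real defects remaining: {remaining:.0f}")  # about 3
```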

Decreasing Problem Find Rate (figure): the number of problems found per hour typically declines over time (Day 1 through Day 5), following a class of curves of the form y = a * e^(-b*x).
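As a hedged illustration of how that curve class might be used, the sketch below fits y = a * e^(-b*x) to two made-up daily observations and projects the find rate for the remaining days; all numbers are invented for the example.

```python
# Fit y = a * exp(-b * x) to two observed find rates, then extrapolate.
import math

# Hypothetical observations: problems found per hour on day 1 and day 3.
x1, y1 = 1, 12.0
x2, y2 = 3, 3.0

# From y1/y2 = exp(b * (x2 - x1)):  b = ln(y1/y2) / (x2 - x1),  a = y1 * e^(b*x1)
b = math.log(y1 / y2) / (x2 - x1)
a = y1 * math.exp(b * x1)

if __name__ == "__main__":
    for day in range(1, 6):
        rate = a * math.exp(-b * day)
        print(f"Day {day}: predicted find rate ~ {rate:.1f} problems/hour")
```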

Inspections and Reviews
- Review: any process involving human reviewers reading and understanding a document and then analyzing it with the purpose of detecting errors.
- Walkthrough: the author explains the document to a team of people.
- Software inspection: a detailed review of work in progress, following Fagan's method.

Software Inspections
- Focused on finding defects.
- Output: a list of defects.
- Team of 3-6 people:
  - Author included
  - People working on related efforts
  - Moderator, reader, scribe
- Steps: planning, overview, preparation, inspection, rework, follow-up.

Inspections vs. Testing
Inspections:
- Partially cost-effective
- Can be applied to intermediate artifacts
- Catch defects early
- Help disseminate knowledge about the project and best practices
Testing:
- Finds errors cheaply, but correcting them is expensive
- Can only be applied to code
- Catches defects late (after implementation)
- Necessary to gauge quality

Formal Methods
- Mathematical techniques used to prove that a program works.
- Used for requirements/design/algorithm specification.
- Prove that the implementation conforms to the specification, e.g., via pre- and post-conditions.
Problems:
- Require mathematical training
- Not applicable to all programs
- Provide only verification, not validation
- Not applicable to all aspects of a program (e.g., the UI or maintainability)
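As a lightweight stand-in for a full formal proof, the sketch below expresses pre- and post-conditions as runtime assertions for a hypothetical integer_sqrt() function (the example is illustrative, not from the slides).

```python
# Pre/post-conditions written as executable assertions.

def integer_sqrt(n):
    """Return the integer square root of n.

    Precondition:  n is a non-negative integer.
    Postcondition: result**2 <= n < (result + 1)**2.
    """
    assert isinstance(n, int) and n >= 0, "precondition violated: n must be a non-negative int"
    result = int(n ** 0.5)
    # Correct any floating-point rounding near perfect squares.
    while result * result > n:
        result -= 1
    while (result + 1) * (result + 1) <= n:
        result += 1
    assert result * result <= n < (result + 1) * (result + 1), "postcondition violated"
    return result

if __name__ == "__main__":
    print(integer_sqrt(17))  # 4
```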

Static Analysis
- Examination of the static structure of the design/code to detect error-prone conditions (e.g., poor cohesion or coupling).
- Automated analysis tools are the most useful.
- Can be applied to:
  - Intermediate documents (but only in a formal model)
  - Source code
  - Executable files
- The output needs to be checked by a programmer.
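A minimal sketch of the idea: the fragment below examines source code without executing it, using Python's standard ast module, and flags a simple error-prone condition (functions with many parameters, a rough coupling-style signal). The threshold and the sample code are arbitrary choices for illustration.

```python
# Static check: parse source code (no execution) and warn about functions
# that take more than MAX_PARAMS parameters.
import ast

SOURCE = """
def ok(a, b):
    return a + b

def suspicious(a, b, c, d, e, f, g):
    return a
"""

MAX_PARAMS = 5

tree = ast.parse(SOURCE)
for node in ast.walk(tree):
    if isinstance(node, ast.FunctionDef):
        n_params = len(node.args.args)
        if n_params > MAX_PARAMS:
            # A programmer still has to judge whether each warning is a real problem.
            print(f"warning: {node.name}() takes {n_params} parameters (> {MAX_PARAMS})")
```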