Software Quality Assurance


Software Quality Assurance Testing Overview

Testing Overview
- Concepts & Process
- Seven Principles of Testing
- Testing Levels
- Testing vs. Debugging
- Testing Related Questions
- Major Testing Techniques
- When to stop testing?

Testing
Software testing is the process of executing a system or component under specified conditions with the intent of finding defects and verifying that it satisfies specified requirements. Testing is one of the most important parts of quality assurance (QA), and it is the most commonly performed QA activity. The basic idea of testing is to execute the software and observe its behavior or outcome. If a failure is observed, the execution record is analyzed to locate and fix the fault(s) that caused it. Otherwise, we gain some confidence that the software under test is more likely to fulfill its designated functions.
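This execute-and-observe cycle can be sketched as a minimal check. The `discount` function and `run_test` driver below are hypothetical illustrations, not taken from the slides:

```python
def discount(price, percent):
    """Apply a percentage discount to a price (hypothetical unit under test)."""
    return price * (1 - percent / 100)

def run_test(func, args, expected):
    """Execute the software under specified conditions and observe the outcome."""
    actual = func(*args)
    # A mismatch between observed and expected behavior is a failure; its
    # execution record is what gets analyzed to locate the underlying fault.
    return ("pass", actual) if actual == expected else ("fail", actual)

print(run_test(discount, (100, 20), 80.0))   # expected behavior observed
print(run_test(discount, (100, 0), 100.0))   # boundary condition
```

A passing run only raises confidence; as the next slides note, it proves nothing about the absence of defects.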

Seven Principles of Testing
A number of testing principles have been suggested over the past 40 years; they offer general guidelines common to all testing.
Principle 1 – Testing shows the presence of defects
Testing can show that defects are present, but cannot prove the absence of defects. Even after testing the application or product thoroughly, we cannot say that it is 100% defect-free. Testing reduces the probability of undiscovered defects remaining in the software, but even if no defects are found, this is not a proof of correctness.

Seven Principles of Testing
Principle 2 – Exhaustive testing is impossible
Testing everything (all combinations of inputs and preconditions) is not feasible. Instead of exhaustive testing, risk analysis, time and cost constraints, and priorities should be used to focus testing efforts.
Principle 3 – Early testing
To find defects early, testing activities should start as early as possible in the software development life cycle and should be focused on defined objectives. Defects found earlier in the life cycle are much easier and cheaper to fix.
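Principle 2 can be made concrete with a quick back-of-the-envelope calculation: even a trivial function taking two 32-bit integers has far too many input combinations to run exhaustively (the billion-tests-per-second rate is an assumed, generous figure):

```python
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

# Two 32-bit integer inputs give 2**32 * 2**32 = 2**64 input combinations.
combinations = 2 ** 64

# Even at an (assumed) billion test executions per second:
years = combinations / (10 ** 9 * SECONDS_PER_YEAR)
print(f"{combinations} combinations, roughly {years:.0f} years at 1e9 tests/sec")
```

The run would take centuries, and that is before considering preconditions and state; hence risk and priority must drive test selection.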

Seven Principles of Testing
Principle 4 – Defect clustering
A small number of modules usually contains most of the defects discovered during pre-release testing, or is responsible for most of the operational failures. Defects are not evenly distributed across a test object: where one defect occurs, more are likely to be found. The testing process must be flexible and respond to this behavior.
Principle 5 – Pesticide paradox
If the same tests are repeated over and over again, eventually the same set of test cases will no longer find any new defects. To overcome this "pesticide paradox", test cases need to be regularly reviewed and revised, and new and different tests need to be written to exercise different parts of the software or system and find potentially more defects.

Seven Principles of Testing
Principle 6 – Testing is context dependent
Testing is done differently in different contexts. For example, safety-critical software is tested differently from an e-commerce site.
Principle 7 – Absence-of-errors fallacy
Finding and fixing defects does not help if the system built is unusable and does not fulfill the users' needs and expectations. Conversely, just because testing did not find any defects, it does not mean that the software is ready to be shipped.

Other Principles to note…
- Testing should be done by an independent party.
- Assign the best personnel to the task.
- Test for invalid and unexpected input conditions as well as valid conditions.
- Keep the software static during testing.
- Provide expected test results where possible.

Testing Levels
- Unit testing
- Integration testing
- System testing
- Acceptance testing

Testing vs. Debugging
Debugging and testing are different. Dynamic testing can show failures that are caused by defects; debugging is the development activity that finds, analyses and removes the cause of the failure.
- Testing and re-testing are test activities: testing shows system failures, and re-testing proves that the defect has been corrected.
- Debugging and correcting defects are developer activities: through debugging, developers can reproduce failures, investigate the state of the program, and find the corresponding defect in order to correct it.

Testing vs. Debugging (diagram): Test ==> Correcting defects ==> Re-test. Testing and re-testing are test activities; debugging and correcting defects are developer activities.

Roles & Responsibilities: Developer and Tester

Testing & QA alternatives
Defect & QA: defect ==> error/fault/failure. Defect prevention, removal and containment map to the major QA activities:
- Defect prevention: error blocking & error-source removal
- Defect removal: testing, inspection, review, walkthrough, etc.
- Defect containment: fault tolerance & failure containment (safety assurance)

QA and Testing
Testing as part of QA:
- Activities focus on the testing phase
- QA/testing in the Waterfall and V-models
- One of the most important parts of QA
- Defect removal

Testing: Key Questions
- WHY: quality demonstration vs. defect detection & removal
- HOW: techniques/activities/process/etc.
- VIEW: functional/external/black-box vs. structural/internal/white-box
- EXIT: coverage-based vs. usage-based

Testing: Why?
The purpose of software testing is to ensure that software systems work as expected when used by their target customers and users. The most natural way to show fulfillment of expectations is to demonstrate operation through "dry runs" or controlled experimentation in laboratory settings before the product is released or delivered. Such controlled experimentation through program execution is generally called testing.

Testing: Why?
Original/primary purpose: demonstration of proper behavior, or quality demonstration
- "Testing" in traditional settings
- Provides evidence of quality in the context of QA
New purpose: defect detection & removal
- Mostly defect-free software manufacturing vs. traditional manufacturing
- Flexibility of software (ease of change): failure observation ==> fault removal; defect detection ==> defect fixing
- This new purpose now eclipses the original one

Testing: Why?
In summary, testing fulfills two primary purposes:
- To demonstrate quality or proper behavior
- To detect and fix problems

Testing: How?
Run ==> Observe ==> Follow-up (particularly in case of failure observations). Refining these steps yields a generic testing process, which can be seen as an instantiation of the SQE process.

Generic Testing Process
Major test activities, in roughly chronological order:
- Test planning and preparation
- Test execution
- Analysis and follow-up

Testing: Activities & Generic Process
Major testing activities in the generic testing process:
- Test planning and preparation: set the goals for testing, select an overall testing strategy, and prepare specific test cases and the general test procedure.
- Test execution: includes the related observation & measurement of product behavior.
- Analysis and follow-up: includes result checking and analysis to determine whether a failure has been observed; if so, follow-up activities are initiated & monitored to ensure removal of the underlying causes/faults that led to the observed failures in the first place.

1) Test Planning and Preparation
- Goal setting based on the customer's quality perspectives & expectations
- Overall strategy based on the above and product/environment characteristics
- Test preparation: preparing test cases & test suites, and preparing the test procedure

2) Test Execution
General steps in test execution:
- Allocating test time (& resources)
- Invoking tests
- Identifying system failures (& gathering information for follow-up actions)
Key to execution: handling both normal and abnormal cases.
Activities closely related to execution: failure identification, data capturing & other measurement.

3) Test Analysis and Follow-up
Analysis of testing results:
- Result checking (as part of execution)
- Further result analyses: defect/reliability/etc. analyses
Follow-up activities (feedback based on analysis results):
- Immediate: defect removal (& re-test)
- Longer term: decision making (exiting testing, etc.) and test process improvement
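The three phases above can be sketched as a toy test driver; the names (`TestCase`, `run_suite`, the `square` unit under test) are illustrative assumptions, not from the slides:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    inputs: tuple
    expected: object

def square(x):
    """Hypothetical unit under test."""
    return x * x

def run_suite(func, suite):
    # 1) Planning & preparation happened up front: goals, strategy,
    #    and the prepared suite of test cases itself.
    failures = []
    for tc in suite:
        # 2) Execution: invoke the test and observe the product's behavior.
        actual = func(*tc.inputs)
        if actual != tc.expected:
            failures.append((tc.name, actual, tc.expected))
    # 3) Analysis & follow-up: the returned failure records feed debugging,
    #    fault removal, and a re-test of the corrected code.
    return failures

suite = [TestCase("zero", (0,), 0), TestCase("neg", (-3,), 9)]
print(run_suite(square, suite))   # an empty list means no failures observed
```

Real test frameworks add reporting and measurement around the same loop, but the plan/execute/analyze skeleton is the same.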

Testing: How?
Refine "how to test?" into three sets of questions: basic questions, testing technique questions, and activity/management questions. Basic questions include:
- What artifacts are tested?
- What to test, and from which view? (Related: the types of faults found)
- When to stop testing?

Functional vs. Structural Testing
Key distinction: the perspective on what needs to be checked/tested.
Functional testing:
- Tests external functions, as described by external specifications
- Black-box in nature: a functional mapping from input to output, without involving internal knowledge
Structural testing:
- Tests internal implementations: components and structures
- White-box in nature; "white" here means seeing through, i.e., internal elements are visible (really a clear/glass/transparent box)

Black-Box vs. White-Box View
Object abstraction/representation:
- High level: the whole system ==> black box
- Low level: individual statements, data, and other elements ==> white box
- Middle levels of abstraction ==> gray box: function/subroutine/procedure, module, subsystem; method, class, superclass
Gray-box (mixed black-box & white-box) testing covers many of the middle levels of testing. Example: procedures in modules, where each procedure is treated individually as a black box while procedure interconnections are tested as a white box at the module level.

White-Box Testing
Based on knowledge of program components/structure (implementation details):
- Statement/component checklists
- Path (control flow) testing
- Data (flow) dependency testing
Applicability:
- Testing in the small, early in the process
- Dual role of programmers/testers
Criteria for stopping:
- Mostly coverage goals
- Occasionally quality/reliability goals
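A small sketch of the white-box view: test inputs are derived from the code's control-flow paths, and a coverage goal serves as the stopping criterion. The `classify` function and hand-instrumented branch recording are illustrative assumptions:

```python
def classify(n, hits):
    """Unit under test, instrumented by hand to record which branches execute."""
    if n < 0:
        hits.add("negative")
        return "negative"
    if n == 0:
        hits.add("zero")
        return "zero"
    hits.add("positive")
    return "positive"

branches = set()
for value in (-5, 0, 7):       # one test input per control-flow path
    classify(value, branches)

coverage = len(branches) / 3   # 3 paths in total
print(f"branch coverage: {coverage:.0%}")
```

Coverage tools automate exactly this bookkeeping; the tester's job is choosing inputs that drive execution down each uncovered path.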

Black-Box Testing
Based on input/output relations or external functional behavior:
- Specification checklists
- Testing expected/specified behavior
Applicability:
- Late in testing: system testing, etc.
- Suitable for IV&V
Criteria for stopping:
- Traditional: functional coverage
- Usage-based: reliability target
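A black-box counterpart to the previous sketch: test cases are derived only from a specification, with no reference to the implementation. The spec here (a hypothetical rule that grades of 50 or above pass) and the `passes` function are illustrative:

```python
def passes(grade):
    """Implementation treated as opaque; only the spec matters to the tester."""
    return grade >= 50

# Specification-derived cases: one representative per equivalence partition,
# plus the boundary values around the pass mark.
spec_cases = [
    (0, False),    # low-partition representative
    (49, False),   # boundary: just below the pass mark
    (50, True),    # boundary: the pass mark itself
    (100, True),   # high-partition representative
]

results = [passes(g) == expected for g, expected in spec_cases]
print(all(results))   # True when the observed behavior matches the spec
```

Notice that nothing in the test cases depends on how `passes` is written; the same suite would apply to any implementation of the spec.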

Comparing BBT with WBT
Perspective:
- BBT views the object of testing as a black box, focusing on testing the input-output relations or external functional behavior
- WBT views the object as a glass box, where internal implementation details are visible & tested

Comparing BBT with WBT
Objects:
- WBT is generally used to test small objects (e.g., small software products or small units of large software products)
- BBT is generally more suitable for large software systems, or substantial parts of them, as a whole
Timeline:
- WBT is used more in the early sub-phases (e.g., unit and component testing)
- BBT is used more in the late sub-phases (e.g., system and acceptance testing)

Comparing BBT with WBT
Defect focus:
- In BBT, failures related to specific external functions can be observed, leading to the corresponding faults being detected & removed. The emphasis is on reducing the chance of target customers encountering functional problems.
- In WBT, failures related to internal implementations can be observed, leading to the corresponding faults being detected & removed directly. The emphasis is on reducing internal faults so that there is less chance of failures later on.

Comparing BBT with WBT
Defect detection & fixing:
- Defects detected through WBT are easier to fix than those detected through BBT
- WBT may miss certain types of defects (e.g., omission & design problems) that could be detected by BBT
- In general, BBT is effective in detecting & fixing problems of interfaces & interactions, while WBT is effective for problems localized within a small unit

Comparing BBT with WBT
Techniques: a specific technique is a BBT technique if external functions are modeled, and a WBT technique if internal implementations are modeled.

Comparing BBT with WBT
Tester:
- BBT is typically performed by dedicated professional testers, and may also be performed by third-party personnel in an IV&V setting
- WBT is often performed by the developers themselves

When to Stop Testing?
Exit criteria: not finding (any more) defects is NOT an appropriate criterion for stopping testing activities. (Why?)

When to Stop Testing?
Resource-based criteria: a decision is made based on resource consumption.
- Stop when you run out of time
- Stop when you run out of money
Such criteria are irresponsible as far as product quality is concerned.
Quality-based criteria: stop when the quality goals are reached.
- Direct quality measure: reliability, which resembles actual customer usage
- Indirect quality measure: coverage
- Other surrogate: activity completion ("stop when you complete the planned test activities")
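A minimal sketch of a quality-based exit criterion: stop when a measured coverage goal is met, rather than when time or money runs out. The function name, counts and the 90% threshold are illustrative assumptions:

```python
def should_stop(statements_covered, statements_total, goal=0.9):
    """Quality-based exit criterion: compare measured coverage with the goal."""
    coverage = statements_covered / statements_total
    return coverage >= goal, coverage

stop, cov = should_stop(450, 500)
print(f"coverage {cov:.0%}, stop testing: {stop}")
```

A usage-based variant would substitute an estimated reliability for `coverage` and a reliability target for `goal`; the decision structure is the same.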