Lecture 9: Testing. Readings: Chapter 8. CSCE 492 Software Engineering, Spring 2008.



– 2 – CSCE 492 Spring 2008
Overview
Last time: Achieving quality attributes (nonfunctional requirements)
Today's lecture: Testing, i.e. achieving functional requirements. Reference: Chapter 8, Testing
Next time: Requirements meetings with individual groups; start at 10:15. Sample test

– 3 – Testing
Why test? The earlier an error is found, the cheaper it is to fix.
Error/bug terminology:
A fault is a condition that causes the software to fail.
A failure is the inability of a piece of software to perform according to its specifications.

– 4 – Testing Approaches
Development-time techniques: automated tools (compilers, lint, etc.)
Offline techniques: walkthroughs, inspections
Online techniques: black box testing (not looking at the code), white box testing

– 5 – Testing Levels
Unit-level testing
Integration testing
System testing
Test cases/test suites
Regression tests

– 6 – Simple Test for a Simple Function
Test cases for the function convertToFahrenheit.
Formula: fahrenheit = celsius * scale + 32; // scale = 1.8
Test cases for f = convertToFahrenheit(input):
convertToFahrenheit(0); // result should be 32
convertToFahrenheit(100); // result should be 212
convertToFahrenheit(-10); // result should be ???
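The cases above can be sketched as a self-contained check; the class name and harness are invented for illustration, and the expected values follow directly from the formula fahrenheit = celsius * 1.8 + 32:

```java
// Hypothetical test harness for the convertToFahrenheit cases on this slide.
public class FahrenheitTest {
    static final double SCALE = 1.8;

    static double convertToFahrenheit(double celsius) {
        return celsius * SCALE + 32;
    }

    public static void main(String[] args) {
        // Expected results computed from the formula F = C * 1.8 + 32
        assert convertToFahrenheit(0) == 32.0;
        assert convertToFahrenheit(100) == 212.0;
        assert convertToFahrenheit(-10) == 14.0;  // -10 * 1.8 + 32 = 14
        System.out.println("all cases pass");
    }
}
```

Run with `java -ea FahrenheitTest` so that the `assert` statements are enabled.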

– 7 – Principles of Object-Oriented Testing
Object-oriented systems are built out of two or more interrelated objects.
Determining the correctness of O-O systems requires testing the methods that change or communicate the state of an object.
Testing methods in an object-oriented system is similar to testing subprograms in process-oriented systems.

– 8 – Testing Terminology
Error: any discrepancy between an actual, measured value and a theoretical, predicted value. Error also refers to some human action that results in a failure or fault in the software.
Fault: a condition that causes the software to malfunction or fail.
Failure: the inability of a piece of software to perform according to its specifications. Failures are caused by faults, but not all faults cause failures. A piece of software has failed if its actual behaviour differs in any way from its expected behaviour.

– 9 – Code Inspections
A formal procedure in which a team of programmers reads through code, explaining what it does. Inspectors play "devil's advocate", trying to find bugs.
A time-consuming process! Can be divisive and lead to interpersonal problems.
Often used only for safety- or time-critical systems.

– 10 – Walkthroughs
Similar to inspections, except that inspectors "mentally execute" the code using simple test data.
Expensive in terms of human resources, and impossible for many systems.
Usually used as a discussion aid.

– 11 – Test Plan
A test plan specifies how we will demonstrate that the software is free of faults and behaves according to the requirements specification.
A test plan breaks the testing process into specific tests, addressing specific data items and values.
Each test has a test specification that documents the purpose of the test.

– 12 – Test Plan
If a test is to be accomplished by a series of smaller tests, the test specification describes the relationship between the smaller and the larger tests.
The test specification must describe the conditions that indicate when the test is complete, and a means for evaluating the results.

– 13 – Example Test Plan (Deliverable 8.1, p. 267)
Test #15 specification: addPatron() while checking out a resource
Requirement #3
Purpose: create a new Patron object when a new patron attempts to check out a resource
Test description:
Enter the check-out screen
Press the "new patron" button
... (next slide)
Test messages
Evaluation: print the patron list to ensure uniqueness and that the data was entered correctly

– 14 – Example Test Description of Test Plan
3. Test description:
Enter the check-out screen
Press the "new patron" button
Enter "Jill Smith" in the Patron Name field
Enter "New Boston Rd." in the Address field
Enter ...
Choose "Student" from the status choice box
A new Patron ID number is generated if the name is new.

– 15 – Test Oracle
A test oracle is the set of predicted results for a set of tests, and is used to determine the success of testing.
Test oracles are extremely difficult to create, and are ideally created from the requirements specification.

– 16 – Test Cases
A test case is a set of inputs to the system.
Successfully testing a system hinges on selecting representative test cases.
Poorly chosen test cases may fail to illuminate the faults in a system.
In most systems exhaustive testing is impossible, so a white box or black box testing strategy is typically selected.

– 17 – Black Box Testing
The tester knows nothing about the internal structure of the code.
Test cases are formulated based on the expected output of methods.
The tester generates test cases to represent all possible situations, in order to ensure that the observed and expected behaviour are the same.

– 18 – Black Box Testing
In black box testing, we ignore the internals of the system and focus on the relationship between inputs and outputs.
Exhaustive testing would mean examining the output of the system for every conceivable input. Clearly not practical for any real system!
Instead, we use equivalence partitioning and boundary analysis to identify characteristic inputs.

– 19 – Equivalence Partitioning
Suppose the system asks for "a number between 100 and 999 inclusive". This gives three equivalence classes of input:
– less than 100
– 100 to 999
– greater than 999
We then test the system against characteristic values from each equivalence class. Example: 50 (invalid), 500 (valid), 1500 (invalid).
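A minimal sketch of this partitioning as code; the validator method isValid is an invented name standing in for the system's input check:

```java
// Equivalence-partitioning check for "a number between 100 and 999 inclusive".
// isValid is a hypothetical stand-in for the system's input validation.
public class EquivalencePartitioning {
    static boolean isValid(int n) {
        return n >= 100 && n <= 999;
    }

    public static void main(String[] args) {
        // One characteristic value per equivalence class
        assert !isValid(50);    // class: less than 100
        assert isValid(500);    // class: 100 to 999
        assert !isValid(1500);  // class: greater than 999
        System.out.println("one value per class checked");
    }
}
```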

– 20 – Boundary Values
Arises from the fact that most programs fail at input boundaries.
Suppose the system asks for "a number between 100 and 999 inclusive". The boundaries are 100 and 999.
We therefore test values at and around each boundary: the lower boundary (99, 100, 101) and the upper boundary (998, 999, 1000).
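The boundary cases can be checked the same way; as before, isValid is an invented stand-in for the system's input check:

```java
// Boundary-value checks for the "100 to 999 inclusive" rule: test the
// boundaries themselves and their immediate neighbours on each side.
public class BoundaryValues {
    static boolean isValid(int n) {
        return n >= 100 && n <= 999;
    }

    public static void main(String[] args) {
        assert !isValid(99);    // just below the lower boundary
        assert isValid(100);    // the lower boundary itself
        assert isValid(101);    // just above the lower boundary
        assert isValid(998);    // just below the upper boundary
        assert isValid(999);    // the upper boundary itself
        assert !isValid(1000);  // just above the upper boundary
        System.out.println("boundary cases checked");
    }
}
```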

– 21 – White Box Testing
The tester uses knowledge of the programming constructs to determine the test cases to use.
If one or more loops exist in a method, the tester would wish to test the execution of each loop for 0, 1, max, and max + 1 iterations, where max represents the maximum possible number of iterations.
Similarly, conditions would be tested for both true and false.
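The loop rule above (0, 1, max, max + 1 iterations) can be sketched against a small invented method; sumFirst and its harness are assumptions for illustration, not from the slides:

```java
// Loop testing sketch: exercise a loop for 0, 1, max, and max + 1 iterations.
// sumFirst is a hypothetical method summing the first `count` elements.
public class LoopTesting {
    static int sumFirst(int[] values, int count) {
        int sum = 0;
        // The guard i < values.length is what the max + 1 case exercises
        for (int i = 0; i < count && i < values.length; i++) {
            sum += values[i];
        }
        return sum;
    }

    public static void main(String[] args) {
        int[] data = {1, 2, 3};          // max = 3 iterations possible
        assert sumFirst(data, 0) == 0;   // loop body runs 0 times
        assert sumFirst(data, 1) == 1;   // exactly 1 iteration
        assert sumFirst(data, 3) == 6;   // max iterations
        assert sumFirst(data, 4) == 6;   // max + 1: the guard must hold
        System.out.println("loop iteration counts checked");
    }
}
```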

– 22 – White Box Testing
In white box testing, we use knowledge of the internal structure of the system to guide the development of tests.
The ideal: examine every possible run of a system. Not possible in practice!
Instead: aim to test every statement at least once!
Example:
if (x > 5) {
    System.out.println("hello");
} else {
    System.out.println("bye");
}
There are two possible paths through this code, corresponding to x > 5 and x <= 5.

– 23 – Unit Testing
The units comprising a system are individually tested.
The code is examined for faults in algorithms, data, and syntax.
A set of test cases is formulated, the inputs applied, and the results evaluated.
The module being tested should be reviewed in the context of the requirements specification.
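A minimal unit test in this spirit, written without a framework (in practice a framework such as JUnit would be used); the Patron class here is an invented sketch loosely following the library example on slides 13 and 14:

```java
// Hypothetical unit test: check one unit (Patron) in isolation against
// its specification, here that each new patron receives a unique ID.
public class PatronTest {
    static class Patron {
        private static int nextId = 1;
        final int id;
        final String name;

        Patron(String name) {
            this.name = name;
            this.id = nextId++;  // each patron gets a unique ID
        }
    }

    public static void main(String[] args) {
        Patron a = new Patron("Jill Smith");
        Patron b = new Patron("Jack Jones");
        // Evaluate results against the specification
        assert a.id != b.id;                  // IDs must be unique
        assert a.name.equals("Jill Smith");   // data entered correctly
        System.out.println("unit tests pass");
    }
}
```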

– 24 – Integration Testing
The goal is to ensure that groups of components work together as specified in the requirements document.
Four kinds of integration tests exist:
Structure tests
Functional tests
Stress tests
Performance tests

– 25 – System Testing
The goal is to ensure that the system actually does what the customer expects it to do.
Testing is carried out by customers mimicking real-world activities.
Customers should also intentionally enter erroneous values to determine the system's behaviour in those instances.

– 26 – Testing Steps
Determine what the test is supposed to measure.
Decide how to carry out the tests.
Develop the test cases.
Determine the expected results of each test (the test oracle).
Execute the tests.
Compare the results to the test oracle.

– 27 – Analysis of Test Results
The test analysis report documents testing and provides information that allows a failure to be duplicated, found, and fixed.
The test analysis report references the relevant sections of the requirements specification, the implementation plan, and the test plan, and connects these to each test.

– 28 – Special Issues for Testing Object-Oriented Systems
Because object interaction is essential to O-O systems, integration testing must be more extensive.
Inheritance makes testing more difficult by requiring more contexts (all subclasses) for testing an inherited module.

– 29 – Configuration Management
Software systems often have multiple versions or releases.
Configuration management is the process of controlling development that produces multiple software systems.
An evolutionary development approach often results in multiple versions of the system.
Regression testing is the process of retesting elements of the system that were tested in a previous version or release.

– 30 – Alpha/Beta Testing
In-house testing is usually called alpha testing.
For software products, there is usually an additional stage of testing, called beta testing. It involves distributing tested code to "beta test sites" (usually prospective customers) for evaluation and use.
Beta testing typically involves a formal procedure for reporting bugs.
Delivering buggy beta-test code is embarrassing!