Phase Testing (Janice Regan, 2008)


Phase Testing

Overview of Implementation phase
 Create Class Skeletons
 Define Implementation Plan (+ determine subphases)
 Define Coding Standards
 For each unit:
 Implement Methods in class(es)
 Code review
 Create unit Test plan
 Unit test
 Release unit for integration
 For each group of units:
 Create integration Test plan
 Integration Testing
 Create system Test plan
 System Testing

Testing
 Goal: deliberately trying to cause failures in a software system in order to detect any defects that might be present
 Testing effectively: uncover as many defects as possible
 Testing efficiently: find the largest possible number of defects using the fewest possible tests

Basic Definitions
 Fault: a condition that causes the software to malfunction or fail
 Failure: the inability of a piece of software to perform according to its specifications (a violation of functional or non-functional requirements)
 A piece of software has failed if its actual behaviour differs in any way from its expected behaviour

Basic Definitions
 Error: any discrepancy between an actual, measured value (the result of running the code) and a theoretical or predicted value (the expected result of the code according to the specifications)

 Failures are caused by faults, but not all faults cause failures. Unexercised faults go undetected.
 If a software system has not failed during testing (no errors were produced), we still cannot guarantee that the system is free of faults
 Testing can only show the presence of faults
 A successful test does not prove our system is faultless!
 fault --(may produce)--> failure --(produces)--> error

Faults and Failures: example
 Consider an array of integers, myarray[], declared to have 10 elements
 The code contains a loop to print the contents of the array. The array may be partially filled. Loop to print k elements:
   for (i = 0; i < k; i++) { print myarray[i]; }
 There is no check in the code to prevent printing more values than the number of available array elements
 The code is tested with an array holding 6 elements. No failures occur.
 There is an undetected fault in the code that will cause a failure if k > 9

Test Plan
 Specifies how we will demonstrate that the software produces the expected results and behaves according to the requirements specification
 Various tests (each representing a phase in our project schedule):
 Unit test  usually included within the implementation plan
 Integration test  may be included within the implementation plan
 System test
 User Acceptance test

Unit Test Plan (not unit testing)
 Unit test planning activity (performed by the programmer):
 1. Select and create test cases
 Plan how to test the unit in isolation from the rest of the system
 Related unit testing activity (performed by the tester, probably not the same person as the programmer):
 1. Implement the planned test cases in a unit test driver
 2. Exercise (run) the test cases and collect the results
 3. Record and report the test results

Unit Test
 Goal: ensure that a unit (class) works (e.g. methods post conditions are satisfied)
 Testing in isolation
 Activities:
 1. Implement the test cases created (write a unit test driver)
 2. Run the unit test driver(s)
 3. Report the results of the tests

Integration Test Plan (not the same as an integration and test plan)
 Activity:
 1. Select and create test cases
 Plan how we shall test our builds
 The related phase is the Integration Test phase, where we exercise the integration test cases

Integration Test
 Goal: performed on a partially constructed software system (builds) to verify that, after integrating additional pieces into our software, it still operates as planned (i.e., as specified in the requirements document)
 Testing in context
 Activities:
 1. Write drivers/stubs
 2. Run the integration test(s)
 3. Report the results of the tests

System Testing
 Goal: ensure that the system actually does what the customer expects it to do
 For us, in 275, our system test plan is to successfully execute all of our use cases
 Activities:
 1. Run the system test using the User Manual
 2. Report the results of the test

User Acceptance Testing
 Goal: the client/users test our software system by mimicking real-world activities
 Users should also intentionally enter erroneous values to determine the robustness of the software system

Developing Test Cases
 We could test the code using all possible inputs, but exhaustive testing is impractical
 Mission-critical software warrants more extensive testing
 Instead, we select a set of inputs (i.e., a set of test cases)

Test Case
 Selection of test cases is based on two strategies:
 Black box testing strategy: select test cases by examining the expected behaviour and results of the software system
 Glass (white) box testing strategy: select test cases by examining the internal structure of the code

Test Case Format
 Test id -> uniquely identifies the test case
 Test purpose -> what we are testing, coverage type, resulting sequence of statements, ...
 Requirement # -> from the SRS, to validate our test cases
 Inputs -> input values and states of objects
 Testing procedure -> steps to be performed by testers to execute the test case (the algorithm of the test driver, for automated testing)
 Evaluation -> steps to be performed by testers to decide whether the test succeeded
 Expected behaviours and results -> the test oracle
 Actual behaviours and results -> one way to report

Expected Results
 Expected results are created from the requirements specification, class invariants, post conditions of methods, etc.
 When the expected results are not observed, a failure has occurred
 Be careful when developing tests: keep in mind that the expected results themselves could be erroneous


Glass (White) Box Testing
 We treat the software system as a glass box
 We can see inside, i.e., we know about:
 the source code (the steps taken by the algorithms)
 internal data
 design documentation
 We select test cases using our knowledge of the source code, so test cases can exercise all aspects of each algorithm and its data
 More thorough, but more time-consuming, than black box testing

Select a Set of Test Cases using the Glass (White) Box Testing Strategy
 Goal: select test cases, i.e., determine input values, that cause the flow of execution to exercise as much of your method as possible (greatest coverage)
 Test as much of your method as possible with the minimum number of test cases
 Compromise between the extent of test coverage and the time needed to complete the tests:
 More robust testing -> longer test phase -> more expensive testing
 Less robust testing -> shorter test phase -> less expensive testing

Black Box Testing
 We treat the software system as a black box
 We cannot see inside, i.e., we know nothing about:
 the source code
 internal data
 design documentation
 So what do we know?
 The public interface of the function or class (methods, parameters)
 The class skeleton (pre and post conditions, description, invariants)
 Test cases are selected based on:
 input values or situations
 the expected behaviour and output of methods

Select a Set of Test Cases using the Black Box Testing Strategy
1. Determine what to test
 Which method of which class, and the type of test
2. For each parameter and object:
 a) Establish the parameter's equivalence classes, or consider the various states of the object
 Determine all valid (invalid) values or ranges of valid (invalid) values, and the boundaries between those ranges
 Determine valid and invalid states of objects (preconditions)
 b) Select representative values for each equivalence class (one from each range, plus boundary values) to use as the basis of your test cases