Chapter 8 Testing the Programs. Integration Testing: combining individual components into a working system. The test strategy determines why and how components are combined.


Chapter 8 Testing the Programs (Shari L. Pfleeger, Joann M. Atlee, 4th Edition)

Integration Testing
• Combine individual components into a working system.
• The test strategy specifies why and how components are combined to test the working system.
• The strategy affects not only the integration timing and coding order but also the cost and thoroughness of the testing.

• The system is viewed as a hierarchy of components.
• Approaches:
  - Bottom-up
  - Top-down
  - Big-bang
  - Sandwich testing
  - Modified top-down
  - Modified sandwich

Bottom-up Integration
• Merge components starting at the lowest level of the system hierarchy: each component is tested individually first; the next components tested are those that call the previously tested ones.
• Component driver: a routine that calls a particular component and passes a test case to it.
• Drawback for functionally decomposed systems: the top-level components are usually the most important, yet they are the last to be tested.
• Top-level components are more general, whereas bottom-level components are more specific.
• Bottom-up testing is often the most sensible choice for object-oriented programs.
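A component driver can be sketched as a small routine that feeds test cases to one low-level component. This is a minimal illustration, assuming a hypothetical component `apply_discount` at the bottom of the hierarchy; it is not from the textbook.

```python
def apply_discount(price, rate):
    """The low-level component under test (hypothetical example)."""
    return round(price * (1 - rate), 2)

def driver():
    """Component driver: calls the component and passes test cases to it."""
    test_cases = [
        ((100.0, 0.10), 90.0),   # 10% off 100
        ((50.0, 0.00), 50.0),    # no discount
        ((80.0, 0.25), 60.0),    # 25% off 80
    ]
    for args, expected in test_cases:
        actual = apply_discount(*args)
        assert actual == expected, f"{args}: got {actual}, want {expected}"
    return "all cases passed"

print(driver())   # prints "all cases passed"
```

Once the component passes, the driver is discarded and the real callers are integrated on top of the tested component.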

• The system is viewed as a hierarchy of components.
• The figure shows the sequence of tests and their dependencies.
• Component drivers are needed for E, F, and G.

Top-Down Integration
• The reverse of bottom-up: the one controlling component at the top level is tested by itself first.
• Then all components called by the tested component are combined and tested as a larger unit.
• Stub: a special-purpose program that simulates the activity of a missing component.
• The stub answers the calling sequence and passes back output data that lets the testing process continue.

• Only A is tested by itself; stubs are needed for B, C, and D.
• Top-down integration allows the test team to exercise one function at a time, so test cases are defined in terms of the functions being examined.
• Design faults concerning functionality are addressed at the beginning of testing.
• Driver programs are not needed.
• Writing stubs can be difficult, and their correctness may affect the validity of the test.
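A stub can be sketched as a canned stand-in for a missing callee. In this hypothetical example (the components and data are invented for illustration), top-level component A is exercised while its not-yet-integrated callee B is replaced by a stub that answers the calling sequence with fixed output:

```python
def component_b_stub(customer_id):
    """Stub for component B: simulates a lookup with canned output data."""
    canned = {1: "gold", 2: "standard"}
    return canned.get(customer_id, "unknown")

def component_a(customer_id, lookup=component_b_stub):
    """Top-level component under test; its callee is injected so that a
    stub can stand in for the real component B until B is integrated."""
    tier = lookup(customer_id)
    return f"customer {customer_id}: {tier} tier"

print(component_a(1))   # exercises A against the stub
```

When the real component B is ready, it is passed in place of the stub and the same tests are rerun, which is exactly where an incorrect stub can invalidate earlier results.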

• A drawback of top-down testing is that a very large number of stubs may be required. One way to avoid this is to alter the strategy slightly:
• Modified Top-Down Integration: each level's components are individually tested before the merger takes place.

Big-Bang Integration
• All components are merged at once. Usable for small systems, but not practical for large ones:
• First, it requires both stubs and drivers to test the independent components.
• Second, because all components are merged at once, it is difficult to find the cause of a failure.
• Finally, interface faults cannot easily be distinguished from other faults.

Sandwich Integration
• Combines top-down with bottom-up.
• The system is viewed as three layers: a target layer in the middle, the levels above the target, and the levels below the target.
• Top-down testing is used in the top layer and bottom-up testing in the lower layer.
• Testing converges toward the target layer, which is chosen on the basis of system characteristics and the structure of the component hierarchy.
• Example: in the component hierarchy shown, the target layer is the middle level, components B, C, and D.
• Drawback: it does not test the individual components thoroughly before integration.

Modified Sandwich Integration
• Allows upper-level components to be tested before merging them with others.

Comparison of Integration Strategies

                                   Bottom-up  Top-down  Modified   Big-bang  Sandwich  Modified
                                                        top-down                       sandwich
  Integration                      Early      Early     Early      Late      Early     Early
  Time to basic working program    Late       Early     Early      Late      Early     Early
  Component drivers needed         Yes        No        Yes        Yes       Yes       Yes
  Stubs needed                     No         Yes       Yes        Yes       Yes       Yes
  Work parallelism at beginning    Medium     Low       Medium     High      Medium    High
  Ability to test particular paths Easy       Hard      Easy       Easy      Medium    Easy
  Ability to plan and control
  sequence                         Easy       Hard      Hard       Easy      Hard      Hard

Testing Object-Oriented Systems
• Rumbaugh suggests asking several questions:
  - Is there a path that generates a unique result?
  - Is there a way to select a unique result?
  - Are there useful cases that are not handled?
• Next, check the objects and classes themselves for excesses and deficiencies:
  - missing objects, useless classes, associations, or attributes.

Differences Between OO and Traditional Testing
• Object-oriented components are often reused, which helps minimize testing.
• First, test the base classes that have no parents: test each function individually, then the interactions among them.
• Next, provide an algorithm to incrementally update the test history for the parent class.
• OO unit testing is less difficult, but integration testing is more extensive.

• In the figure, the farther out the gray line is, the greater the difference between OO and traditional testing, and the more that aspect requires special treatment.

Test Planning
• Test planning helps us design and organize the tests.
• Test steps:
  - Establish test objectives
  - Design test cases
  - Write test cases
  - Test the test cases
  - Execute the tests
  - Evaluate the test results
• The test objectives tell us what kinds of test cases to generate.

• Purpose of the Plan
  The test plan explains:
  - who does the testing
  - why the tests are performed
  - how the tests are conducted
  - when the tests are scheduled
• Contents of the Plan
  - what the test objectives are
  - how the tests will be run
  - what criteria will be used to determine when the testing is complete
  - statement, branch, and path coverage at the component level
  - top-down or bottom-up strategies at the integration level
• The resulting plan for merging the components into a whole is sometimes called a system integration test plan.

Automated Testing Tools
• Code analysis tools
  - Static analysis (examines the source program before it is run):
    code analyzer, structure checker, data analyzer, sequence checker
  - Dynamic analysis (performed while the program is running):
    program monitors, which watch and report the program's behavior
• Test execution tools automate the planning and running of the tests themselves:
  - capture and replay
  - stubs and drivers
  - automated testing environments
• Test case generators automate the generation of test cases.

When to Stop Testing
• Does finding more faults mean the software is more faulty, or that testing is more effective?
• The figure shows the probability of finding faults during each stage of development.

Stopping Approaches
• Fault seeding (error seeding) estimates the number of faults in a program:
  - one team seeds (inserts) a known number of faults
  - another team locates as many faults as possible
  - the number of undiscovered seeded faults acts as an indicator of the faults remaining
• The underlying assumption is the ratio:

    detected seeded faults / total seeded faults = detected nonseeded faults / total nonseeded faults

• Expressing the ratio formally: N = S n / s, where
  - N = estimated number of nonseeded faults in the program
  - S = number of seeded faults placed in the program
  - n = actual number of nonseeded faults detected during testing
  - s = number of seeded faults detected during testing
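The seeding estimate above can be computed directly; the numbers below are a made-up example, not data from the textbook:

```python
def estimate_faults(S, s, n):
    """Fault-seeding estimate: with S seeded faults, of which s were
    detected, and n nonseeded faults detected, the estimated total
    number of nonseeded faults is N = S * n / s."""
    if s == 0:
        raise ValueError("no seeded faults detected; cannot estimate")
    return S * n / s

# Example: 20 faults seeded, 15 of them found, plus 12 nonseeded faults
# found along the way; estimate N = 20 * 12 / 15 = 16 total nonseeded faults.
print(estimate_faults(S=20, s=15, n=12))   # prints 16.0
```

Since 12 of an estimated 16 nonseeded faults were found, roughly 4 faults are predicted to remain, which is the indicator used to decide whether to continue testing.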

• Confidence in the software: use the fault estimate to tell how much confidence we can place in the software being tested. Confidence is expressed as a percentage: if we claim the software has at most N faults, seed it with S faults, and test until all S seeded faults are found, having also found n nonseeded faults, then

    C = 1                   if n > N
    C = S / (S - N + 1)     if n ≤ N

• Coverage criteria can also serve as a stopping rule.
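The confidence measure translates directly into code; the example values are illustrative, not from the textbook:

```python
def confidence(S, N, n):
    """Confidence that the program has at most N nonseeded faults,
    given that all S seeded faults were found and n nonseeded faults
    were detected along the way: C = 1 if n > N, else S / (S - N + 1)."""
    if n > N:
        return 1.0
    return S / (S - N + 1)

# Example: claim the program is fault-free (N = 0), seed 10 faults,
# and find all 10 with no nonseeded faults: C = 10 / 11, about 91%.
print(round(confidence(S=10, N=0, n=0), 2))   # prints 0.91
```

Note how confidence grows with the number of seeded faults: seeding 99 faults under the same claim would give C = 99/100, so more seeding buys a stronger statistical statement.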

Identifying Fault-Prone Code
• Track the number of faults found in each component during development.
• Collect measurements (e.g., size, number of decisions) about each component.
• Classification trees: a statistical technique that sorts through large arrays of measurement information and creates a decision tree showing the best predictors.
• The tree helps in deciding which components are likely to have a large number of faults.
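The decision rules a classification tree produces can be sketched as nested threshold tests. The metrics, thresholds, and component names below are hypothetical; in practice the tree is derived statistically from past fault and measurement data rather than hand-coded:

```python
def fault_prone(lines_of_code, num_decisions):
    """Hand-coded stand-in for a learned classification tree:
    predicts whether a component is fault-prone from two metrics."""
    if lines_of_code > 300:
        return True          # large components: predicted fault-prone
    if num_decisions > 15:
        return True          # complex control flow: predicted fault-prone
    return False

# Hypothetical measurements: component -> (lines of code, number of decisions)
components = {
    "parser": (450, 30),
    "logger": (120, 5),
    "scheduler": (200, 22),
}
flagged = [name for name, (loc, dec) in components.items()
           if fault_prone(loc, dec)]
print(flagged)   # prints ['parser', 'scheduler']
```

The flagged components are the ones a test team would target with extra review and testing effort.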

An Example of a Classification Tree