
Software Testing

Admin Need to schedule meetings with graders for the user test. Rubrics for the Arch Eval (scheduled for next Tuesday) will be posted on Saturday.

Conventional wisdom: 95% of errors are in 5% of the code… or maybe 80% of the errors are in 20% of the code. Testing: can I find that 5% or 20% via a test suite?

Software Testing Goals Types of tests Levels of tests Test measures Test plan

Goals Verification: Have we built the software right? Validation: Have we built the right software? Bug-free and meets specs vs. meets the customer's needs. Are verification and validation different, or variations of the same idea? My argument: meeting specs should equal meeting customer needs... but that is not generally true (my kitchen, satellite systems).

Software Testing Goals Types of tests Levels of tests Test measures Test plan most of this discussion focuses on verification (more specifically bug testing)

Types of tests Black box White box Gray box

Black box tests: input → output through the interface. 1. Does it perform the specified functions? 2. Does it handle obvious errors in input? Examples: Ariane 5 (lousy error handling); the classic ints vs. floats and yards vs. meters mix-ups. Black box testing should catch these if there is adequate test coverage.

Example: ordered list of ints. class OrdInts: create, getFirst, getNext, insert, delete, print. Session: L = create(); L.insert(5); L.insert(-1); p = L.getFirst(); print(p); L.delete(p); p = L.getFirst(); print(p); p = L.getNext(p); print(p); p = L.getNext(p) → output: 5, error
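The slide's black-box session can be sketched in runnable form. This `OrdInts` is a hypothetical Python translation of the slide's interface; the method names, the use of positions as handles, and the error behavior when walking past the end are all assumptions, since the slide gives only pseudocode.

```python
import bisect

class OrdInts:
    """Hypothetical ordered-int-list matching the slide's interface sketch."""
    def __init__(self):
        self._vals = []

    def insert(self, x):
        bisect.insort(self._vals, x)    # keep the list ordered on insert

    def delete(self, p):
        del self._vals[p]

    def get_first(self):
        if not self._vals:
            raise IndexError("list is empty")
        return 0                         # position of the smallest element

    def get_next(self, p):
        if p + 1 >= len(self._vals):
            raise IndexError("no next element")
        return p + 1

    def value_at(self, p):
        return self._vals[p]

# Black-box session mirroring the slide: only the public interface is used.
L = OrdInts()
L.insert(5)
L.insert(-1)
p = L.get_first()
assert L.value_at(p) == -1      # ordered: -1 comes before 5
L.delete(p)
p = L.get_first()
assert L.value_at(p) == 5
try:
    p = L.get_next(p)           # walking past the end is an error
except IndexError as e:
    print("error:", e)
```

The point of the black-box style is visible here: nothing in the test depends on whether the list is backed by an array, a linked list, or `bisect`.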

Black box tests Advantage: the black box tester ≠ developer, so testing is unbiased by implementation details. E.g., Use Case testing: just work through all the Use Cases. Disadvantage: the black box tester is uninformed about implementation details: unnecessary tests (testing the same thing in different ways) and insufficient tests (missing the extremes, especially if actual use follows a different pattern).

Black box tests, input vs. code: choose a good distribution of inputs and hope that a good distribution of the code gets tested.

Unnecessary tests: a large range of inputs may exercise only a small part of the code. E.g., in the operator test of the satellite control stations, testers ran through every input and output light/key option, testing the same functions repeatedly, whereas no one had a test for my map function.

Insufficient tests: a small range of inputs may exercise a large range of code, but can you know this without knowing the code? Did we miss the 20%?

Sufficient tests: a small range of inputs may exercise a small but important, error-prone region of complex code.

White box tests: based on the code itself; tests (test 1, test 2, …) are designed from the source.

Example: ordered list of ints class ordInts { public: … private: int vals[1000]; int maxElements=1000; … }

Example: ordered list of ints. bool testMax() { L = create(); num = maxElements; for (int i = 0; i <= num; i++) { print(i); L.insert(i); } print(maxElements); } Note that the loop runs one past maxElements, deliberately overflowing the fixed-size array.
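A runnable sketch of the same white-box boundary test, in Python rather than the slide's pseudocode. The 1000-element capacity comes from the class fragment above; that the implementation signals overflow with an exception is an assumption made here so the test has something to observe.

```python
CAPACITY = 1000   # known from reading the source: white-box knowledge

class OrdInts:
    """Sketch of the slide's fixed-capacity ordered-list implementation."""
    def __init__(self):
        self._vals = []

    def insert(self, x):
        if len(self._vals) >= CAPACITY:
            raise OverflowError("list is full")
        self._vals.append(x)
        self._vals.sort()

def test_max():
    # White-box test: we know the array holds exactly 1000 ints,
    # so insert one element past the boundary and expect a failure.
    L = OrdInts()
    for i in range(CAPACITY):
        L.insert(i)                 # fills the list exactly to capacity
    try:
        L.insert(CAPACITY)          # the 1001st insert must be rejected
        return False                # silently accepting it is a bug
    except OverflowError:
        return True

print(test_max())   # → True
```

Only white-box knowledge of the source makes this boundary obvious; a black-box tester would have to stumble onto 1000 by luck.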

White box tests Advantage: – design tests to achieve good code coverage and avoid duplication – can stress complicated, error-prone code – can stress boundary values (fault injection) Disadvantage: – tester=developer may have bias – if code changes, tests may have to be redesigned (is this bad?)

Gray box tests Look at code to design tests But test through interface Best of both worlds - maybe

Example: ordered list of ints. class OrdInts: create, getFirst, getNext, insert, delete, print. Session: L = create(); L.insert(1); L.insert(2); L.insert(3); L.insert(1001); p = L.getFirst(); print(p); p = L.getNext(p); print(p); … … …
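A gray-box sketch of the session above: the 1000-element capacity is learned by reading the source, but the test drives only the public interface. The `insert` return value used to observe the refusal is an assumption; the slide does not say how a full list reports failure.

```python
class OrdInts:
    """Same public interface as before; the capacity is gray-box knowledge."""
    def __init__(self):
        self._vals = []
        self._cap = 1000

    def insert(self, x):
        if len(self._vals) < self._cap:
            self._vals.append(x)
            self._vals.sort()
            return True
        return False      # assumed behavior: quietly refuse when full

    def __len__(self):
        return len(self._vals)

# Gray-box test: target the boundary found in the code, but never
# touch internals; everything goes through insert() and len().
L = OrdInts()
for i in range(1, 1001):
    assert L.insert(i)            # every insert up to the boundary succeeds
assert not L.insert(1001)         # the 1001st insert is refused
assert len(L) == 1000
```

This is the "best of both worlds" the slide mentions: the test is as targeted as a white-box test but remains valid if the internals are rewritten, as long as the interface and capacity hold.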

Types of tests Black box: test based on the interface, through the interface. Gray box: test based on the code, through the interface. White box: test based on the code, through the code. A testing strategy can and should include all approaches! My experience: Black Box = non-developer, outside "testing idiots" (in your case, who?). White Box = developer, part of the development process (in your case, who?). Gray Box = hated non-developer within the developer organization (in your case, who?).

Levels of tests: Unit, Integration, System, System integration, ranging from white box at the unit level to black box at the system level.

Testing measures (white box) Code coverage: individual modules. Path coverage: sequence diagrams. Code coverage based on complexity: test the risks, the tricky parts of the code (e.g., the famous Unix comment "You are not expected to understand this").

Code coverage How much of code is tested by tests? – manually – profiling tools Design new tests to extend coverage Is 100% good?
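The idea of "how much of the code is tested" can be made concrete with hand instrumentation. This toy example (a profiling tool like coverage.py would do this automatically) records which branches a test suite reaches; the function and branch names are invented for illustration.

```python
executed = set()   # branches reached by the test suite so far

def classify(n):
    # each branch records itself so coverage can be measured by hand
    if n < 0:
        executed.add("negative")
        return "negative"
    if n == 0:
        executed.add("zero")
        return "zero"
    executed.add("positive")
    return "positive"

branches = {"negative", "zero", "positive"}

# a test suite that misses one branch
classify(-3)
classify(7)
print(f"coverage: {len(executed) / len(branches):.0%}")   # → coverage: 67%

# design a new test to extend coverage, as the slide suggests
classify(0)
print(f"coverage: {len(executed) / len(branches):.0%}")   # → coverage: 100%
```

Note that 100% here only means every branch ran once, not that every behavior was checked, which is one reason "Is 100% good?" is a real question.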

Path coverage How many execution paths have been exercised by tests? 100% path coverage is usually impossible aim to cover common paths and error prone (complex) paths aim to break code with tests – good testers are not liked by developers....
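Why 100% path coverage is usually impossible is easiest to see with independent branches: paths multiply. A small sketch (the function is invented for illustration):

```python
def adjust(x, flag_a, flag_b):
    if flag_a:
        x += 1
    if flag_b:
        x *= 2
    return x

# Two independent branches give 2 * 2 = 4 execution paths, so full path
# coverage needs four tests, even though two tests can reach every line
# (100% statement coverage is not 100% path coverage). With n independent
# branches the path count is 2**n, which is why testers settle for
# covering the common paths and the error-prone ones.
assert adjust(3, False, False) == 3
assert adjust(3, True,  False) == 4
assert adjust(3, False, True)  == 6
assert adjust(3, True,  True)  == 8
```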

Conventional wisdom: 95% of errors are in 5% of the code… or maybe 80% of the errors are in 20% of the code.

Code complexity measures Cyclomatic complexity measure (McCabe): measures the number of linearly independent paths through a program; ignores asynchronous interaction, fallibility of services, etc. Developer's best guess: what problems are likely and which will be hardest to diagnose. This line of work started with Knuth and others who gathered statistics on programs.
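For structured code, McCabe's measure reduces to one plus the number of decision points. A rough sketch using Python's `ast` module; counting only if/for/while/boolean operators is a simplification (a real tool also handles try/except, comprehensions, and counts each `and`/`or` operand pair), and the `grade` function is invented for illustration.

```python
import ast

def cyclomatic_complexity(source):
    """Rough McCabe estimate: 1 + number of decision points.

    Counts if/for/while statements and boolean-operator nodes;
    a production tool handles more constructs than this.
    """
    tree = ast.parse(source)
    decisions = sum(isinstance(node, (ast.If, ast.For, ast.While, ast.BoolOp))
                    for node in ast.walk(tree))
    return 1 + decisions

code = """
def grade(score):
    if score >= 90:
        return "A"
    elif score >= 80:
        return "B"
    return "C"
"""
# Two decisions (the if and the elif) give three linearly independent
# paths, which is also the number of tests needed to cover them all.
print(cyclomatic_complexity(code))   # → 3
```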

Test plan Collection of tests: unit, integration, system Rationale for test: why these tests? Strategy for developing/performing tests – e.g., test units as developed, test integration at each build, etc.

Testing Strategy TDD – test driven development, Regression, Test harness, Bug tracking, User tests. Write tests BEFORE you write the code!

TDD – Test Driven Development Unit tests are written first by the software engineers. Then, as code is written, it passes incrementally larger portions of the test suites. The test suites are continuously updated as new failure conditions and corner cases are discovered, and are integrated with the regression tests. Unit tests are maintained like software and integrated into the build process. The goal is to achieve continuous deployment with frequent updates.
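A minimal sketch of the TDD order using the standard `unittest` module. The `median` function is an invented example; in true TDD the test class below would exist first and fail until the implementation made it pass.

```python
import unittest

def median(xs):
    """Implementation written *after* the tests below, to make them pass."""
    s = sorted(xs)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

class MedianTests(unittest.TestCase):
    # Written first; each test encodes a requirement before any code exists.
    def test_odd_length(self):
        self.assertEqual(median([3, 1, 2]), 2)

    def test_even_length(self):
        self.assertEqual(median([4, 1, 3, 2]), 2.5)

    def test_single_element(self):
        self.assertEqual(median([7]), 7)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(MedianTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("all passed:", result.wasSuccessful())   # → all passed: True
```

When a corner case turns up later (say, an empty list), the TDD move is to add the failing test first, then change `median`, and keep the test as part of the regression suite.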

Testing Strategy TDD – test driven development Regression Test harness Bug tracking User tests Automate Build up tests Run tests before you check in code My mess up....

Testing Strategy TDD – test driven development, Regression, Test harness, Bug tracking, User tests. A test harness is an automated test framework, useful for interactive applications: I built one for the satellite console to run through all console interactions.

Test harness: stands in for "everything else" around the unit under test; inject test input, query state, and you need a way to evaluate the output.
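The three harness duties named above (inject input, query state, evaluate output) can be sketched directly. Both classes are invented for illustration; a real harness would stand in for the unit's actual environment.

```python
class Counter:
    """Hypothetical unit under test: reacts to command inputs."""
    def __init__(self):
        self.value = 0

    def handle(self, cmd):
        if cmd == "inc":
            self.value += 1
        elif cmd == "reset":
            self.value = 0

class Harness:
    """Drives the unit in isolation and records mismatches."""
    def __init__(self, unit):
        self.unit = unit
        self.failures = []

    def run(self, script, expected_value):
        for cmd in script:
            self.unit.handle(cmd)          # inject test input
        observed = self.unit.value         # query state
        if observed != expected_value:     # evaluate output
            self.failures.append((script, expected_value, observed))

h = Harness(Counter())
h.run(["inc", "inc", "inc"], 3)
h.run(["inc", "reset"], 0)
print("failures:", h.failures)   # → failures: []
```

Because the scripts are data, the same harness can replay an entire catalogue of interactions, which is what makes it practical for consoles and other interactive systems.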

Strategy TDD – test driven development, Regression, Test harness, Bug tracking, User test. Design a system for tracking bugs: scrupulously record problems, their context, their effect, ideas on the cause, and attempts to fix them; create new tests as bugs are uncovered.

Strategy TDD – test driven development, Regression, Test harness, User test. User tests target validation and nonfunctional requirements; they are not best for discovering bugs, since you need to know how the users generated the bug.

Test plan Collection of tests: unit, integration, system Rationale for test: why these tests? Strategy for developing/performing tests Be thoughtful in designing Be diligent in executing