Verification and Validation


Verification and Validation CSCI 5801: Software Engineering

Verification and Validation

Basic Facts about Errors
- Typical code contains 30-85 errors per 1000 lines of source code.
- Extensively tested software still contains 0.5-3 errors per 1000 lines of source code.
- Error distribution: roughly 60% design, 40% implementation.
- 66% of the design errors are not discovered until the software has become operational.

Faults vs. Failures
- Fault: a static flaw in a program; what we usually think of as "a bug".
- Failure: an observable incorrect behavior of a program, caused by a fault.
- Not every fault leads to a failure (at least not immediately).
- Ideal goal: detect and correct every failure before shipping. Impossible in practice!
- Better goal: detect and correct faults before they cause failures.
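A minimal sketch (hypothetical code, not from the slides) of how a fault can stay hidden: the flaw is always present in the source, but it only becomes a visible failure for certain inputs.

    public class MaxFinder {
        public static int max(int[] values) {
            int best = 0;                  // FAULT: should be values[0]
            for (int v : values) {
                if (v > best) best = v;
            }
            return best;
        }
    }
    // max({3, 7, 2}) returns 7   -- the fault stays hidden (no failure)
    // max({-5, -2, -9}) returns 0 -- the fault surfaces as a failure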

Verification vs. Validation
- Verification: evaluate a product to see whether it satisfies the specifications. Have we built the system right?
- Validation: evaluate a product to see whether it actually does what the customer wants/needs. Have we built the right system?
- Key assumption: we know the desired result of each test! Verification: the system passes all test cases. Validation: we have the correct test cases to begin with!

Verification vs. Validation
Verification testing:
- Discovers faults in the software, where its behavior is incorrect or not in conformance with its specification.
- A successful test makes the system perform incorrectly and so exposes a defect in the system.
- Tests show the presence, not the absence, of defects.
Validation testing:
- Demonstrates to the developer and the customer that the software meets its requirements.
- A successful test shows that the system operates as intended.

Stages of Testing
- Unit testing is the first phase, done by the developers of modules.
- Integration testing combines unit-tested modules and tests how they interact.
- System testing tests the whole program to make sure it meets requirements (including most nonfunctional requirements).
- Acceptance testing is done by users to see if the system meets actual user/customer requirements.

Glass Box Testing
- Use source code (or other structure beyond the input/output specifications) to design test cases.
- Also known as white box, clear box, or structural testing.
- Unit testing: based on the structure of individual methods (conditions, loops, data structures).
- Integration testing: based on the overall structure of the system (which methods call other methods).

Black Box Testing
- Testing that does not look at the source code or internal structure of the system: send the program a stream of inputs, observe the outputs, and decide whether the system passed or failed the test.
- Based on external requirements. Unit testing: do the methods of a class meet the requirements of the API? Integration/system testing: does the system as a whole meet the requirements of the requirements specification document (RSD)?
- Abstracts away the internals, a useful perspective for integration and system testing.
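A minimal sketch of a black-box unit test (the Pricing class and the discount rule are hypothetical): every case is derived from a stated requirement, "discount is 10% for orders of $100 or more, otherwise 0%", without ever reading the implementation.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class DiscountTest {
        @Test
        public void boundaryCasesFromTheSpecOnly() {
            assertEquals(0.0,  Pricing.discount(99.99),  0.0001); // just below the boundary
            assertEquals(10.0, Pricing.discount(100.00), 0.0001); // exactly on the boundary
            assertEquals(10.0, Pricing.discount(250.00), 0.0001); // well above it
        }
    }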

Test Suites
Key goal: create a test suite of cases most likely to find faults in the current code.
Problems:
- Cannot exhaustively try all possible test cases.
- Random statistical testing (choosing input values at random) does not work either: faults are generally not uniformly distributed.
- Related problem: how can we evaluate the "quality" of a test suite?

Test Suites
Need a systematic strategy for creating the test suite, with tests designed to find specific kinds of faults. A suite is best created by multiple types of people:
- Cases chosen by the development team are effective in testing known vulnerable areas (glass box).
- Cases chosen by experienced outsiders and clients are effective at checking requirements (black box).
- Cases chosen by inexperienced users can find other faults (validation, etc.).

Coverage-Based Testing
Test suite quality is often based on the idea of coverage: all requirements tested (black box testing), or all parts of the structure covered (glass box testing).
- Statement coverage (unit testing): at least one test case executes each statement.
- Decision coverage (unit testing): test cases make each condition in a control structure evaluate to both true and false.
- Path coverage (integration testing): all paths through the program's control flow graph are taken in the test suite.
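A minimal sketch (hypothetical method) of why decision coverage is stronger than statement coverage: a single test can execute every statement yet never exercise one outcome of a decision.

    public static int absValue(int x) {
        int result = x;
        if (x < 0) {
            result = -x;
        }
        return result;
    }
    // absValue(-3) alone executes every statement (statement coverage),
    // but the decision (x < 0) only ever evaluates to true.
    // Adding absValue(5) exercises the false outcome too: decision coverage.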

Fault Modeling
Idea: many programs have similar faults regardless of purpose. Example: without knowing the purpose of a program that prompts "Enter a number:", what would you try in order to "break" it?
- Non-numeric value ("Fred")
- Non-integer (0.5)
- Negative number (-1)
- Too large (1000000000000000000)
- Too small (0.00000000000000001)
- Illegal characters (^C, ^D)
- Buffer overflow (10000000000 characters)
(© SE, Testing, Hans van Vliet)
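A minimal sketch of fault-model testing (the input list and the use of Integer.parseInt as a stand-in for the program's own parser are assumptions): feed the classic hostile inputs and check that the program rejects them cleanly rather than crashing.

    import org.junit.Test;

    public class NumberInputTest {
        private static final String[] HOSTILE_INPUTS = {
            "Fred", "0.5", "-1", "1000000000000000000", "0.00000000000000001"
        };

        @Test
        public void hostileInputsDoNotCrashTheParser() {
            for (String input : HOSTILE_INPUTS) {
                try {
                    Integer.parseInt(input);   // stand-in for the program's input handling
                } catch (NumberFormatException expected) {
                    // a clean rejection is acceptable; any other exception
                    // escaping here would fail the test
                }
            }
        }
    }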

User Interface Attacks
Try all types of unexpected input:
- Test default values: try with no input; delete any default values in fields.
- Illegal combinations: time from 1100 to 1000; 1000 rows OK, 1000 columns OK, both not OK.
- Repeat the same command to force overflow: add 1000000 courses.
- Force a screen refresh: is everything still redrawn?

File System Attacks
- Full storage: save to a full diskette.
- Timeouts: drive being used by other processes and not available.
- Invalid filenames: save to "fred's file.txt".
- Access permission: save to a read-only device.
- Wrong format: database with missing fields, fields in the wrong format, files in the wrong form (XML vs. CSV, etc.).

Operating System Attacks
- No local memory available: new returns a null pointer.
- System overloaded: multiple apps running simultaneously, causing timeouts.
- Unable to access external devices: network, peripheral devices.
- May require fault injection software to simulate these conditions (see the sketch below).
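A minimal sketch of fault injection (class and message names are hypothetical): instead of unplugging a real device, hand the code under test a stream that always fails, and check that the error is handled gracefully.

    import java.io.IOException;
    import java.io.InputStream;

    // A stream that simulates an unavailable device by failing every read.
    public class FailingInputStream extends InputStream {
        @Override
        public int read() throws IOException {
            throw new IOException("injected fault: device unavailable");
        }
    }

    // Usage: pass new FailingInputStream() to the loader under test and
    // assert that it reports the error instead of crashing or hanging.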

Fault Seeding
How do we know that we have a "good" set of test cases, capable of finding faults? Idea:
- Deliberately "seed" the code with known faults.
- Run the test set to see if it finds the seeded faults.
- The higher the percentage of seeded faults found, the higher our confidence that the test set finds actual faults.
(© SE, Testing, Hans van Vliet)
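One common way to turn this into a number is the Mills fault-seeding estimator (an addition here, not stated on the slide): if S faults are seeded and testing finds s of them along with n real faults, the total number of real faults is estimated as

    N ≈ n * S / s

For example, seeding 20 faults and finding 16 of them alongside 12 real faults gives N ≈ 12 * 20 / 16 = 15, suggesting about 3 real faults remain undetected.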

Mutation Testing

    procedure insert(a, b, n, x);
    begin
        bool found := false;
        for i := 1 to n do
            if a[i] = x then
                found := true; goto leave
            endif
        enddo;
    leave:
        if found then
            b[i] := b[i] + 1
        else
            n := n + 1; a[n] := x; b[n] := 1
        endif
    end insert;

In each variation, called a mutant, one simple change is made, for example replacing the loop bound n by n-1. (© SE, Testing, Hans van Vliet)

How to Use Mutants in Testing
- If a test produces different results for one of the mutants, that mutant is said to be dead.
- If a test set leaves us with many live mutants, that test set is of low quality.
- If we have M mutants, and a test set results in D dead mutants, then the mutation adequacy score is D/M.
- A larger mutation adequacy score means a better test set.
(© SE, Testing, Hans van Vliet)
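A minimal sketch in Java (hypothetical method; the slide's example is the Pascal-style insert above): a mutant is dead once some test distinguishes it from the original.

    public class Eligibility {
        public static boolean isAdult(int age) {
            return age >= 18;            // original
            // mutant: return age > 18;  // one simple change
        }
    }
    // A test asserting isAdult(18) == true kills this mutant: the original
    // returns true, the mutant false. A suite that only checks isAdult(30)
    // and isAdult(5) leaves the mutant alive.
    // With M = 100 mutants and D = 85 dead, the adequacy score is D/M = 0.85.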

Regression Testing
Problem: fixing bugs can introduce other errors; 10% to 20% of bug fixes create new bugs. This often happens during maintenance, when the developer who changes one module is not familiar with the rest of the system.
Big problem: what if the new bugs are in code that has already been tested? Suppose module A is tested first and module B second; if fixing B creates bugs in A, and A is not retested, it will be shipped with bugs!

Regression Testing
Regression testing: retesting with all test cases after any change to the code. Must do:
- after any change to the code,
- before checking modified code back into the repository,
- before releasing code.
The cycle: maintain a comprehensive list of test cases; retest with all of them; find bugs; fix bugs; retest again. One way to make "retest everything" a single command is sketched below.
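A minimal sketch using a JUnit 4 suite (the module test class names are hypothetical): bundling every test class so the whole regression suite reruns with one command.

    import org.junit.runner.RunWith;
    import org.junit.runners.Suite;

    @RunWith(Suite.class)
    @Suite.SuiteClasses({
        ModuleATest.class,
        ModuleBTest.class
    })
    public class RegressionSuite {
        // no body needed; the annotations tell the runner what to execute
    }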

Automated Testing
Problem: there may be thousands of tests, too many for interactive debugging.
Goal: automate comprehensive testing; run all test cases and notify the developer of incorrect results.
Approaches:
- Create a testing "script" in a driver that reads test cases and desired results from a file (see the sketch below).
- Use testing tools (JUnit).
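A minimal sketch of a file-driven test driver (the file name, line format, and function under test are all assumptions): each line of cases.txt holds an input and its expected output, and the driver reports any mismatches.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class TestDriver {
        public static void main(String[] args) throws IOException {
            int failures = 0;
            for (String line : Files.readAllLines(Paths.get("cases.txt"))) {
                String[] parts = line.trim().split("\\s+");
                int input = Integer.parseInt(parts[0]);
                int expected = Integer.parseInt(parts[1]);
                int actual = input * input;   // stand-in for the function under test
                if (actual != expected) {
                    System.out.println("FAIL: f(" + input + ") = " + actual
                                       + ", expected " + expected);
                    failures++;
                }
            }
            System.out.println(failures == 0 ? "All tests passed"
                                             : failures + " failure(s)");
        }
    }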

JUnit Background
- Integrated into most Java IDEs (such as NetBeans), which will automatically generate "skeletons" to test each method.
- Based on assertions (as in C's assert package): http://www.junit.org/apidocs/junit/framework/Assert.html
- fail(message): marks the test as failed and reports the message.
- assertEquals(message, expected, actual): causes fail(message) if the values are not equal.
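A minimal sketch of the two primitives, written in JUnit 4 style (an assumption; the slide cites the older junit.framework.Assert API):

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.fail;

    public class PrimitivesDemoTest {
        @Test
        public void assertEqualsReportsMismatch() {
            assertEquals("2 + 2 should be 4", 4, 2 + 2);
        }

        @Test
        public void failMarksBranchesThatShouldNotBeReached() {
            try {
                Integer.parseInt("not a number");
                fail("expected NumberFormatException");
            } catch (NumberFormatException expected) {
                // the exception is the correct behavior here
            }
        }
    }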

JUnit Testing
1. Create the object for testing.
2. Call methods to put it in the expected state.
3. Use an inspector to get the actual state.
4. Use assertEquals to compare it to the desired state.
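A minimal sketch of the four steps above (the Counter class is hypothetical, with getValue() as its inspector):

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Hypothetical class under test.
    class Counter {
        private int value = 0;
        public void increment() { value++; }
        public int getValue() { return value; }
    }

    public class CounterTest {
        @Test
        public void incrementTwiceYieldsTwo() {
            Counter c = new Counter();                  // 1. create the object
            c.increment();                              // 2. put it in the expected state
            c.increment();
            int actual = c.getValue();                  // 3. inspect the actual state
            assertEquals("two increments", 2, actual);  // 4. compare to the desired state
        }
    }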

JUnit Testing
Create a method for each test to be performed:
- constructor state,
- normal methods,
- validation,
- …

Test-Driven Development
- First write the tests, then do the design/implementation.
- Makes sure testing is done as early as possible, and performed throughout development (regression testing).
- Based on ideas from agile/XP: write (automated) test cases first, then write the code to satisfy the tests.

Test-Driven Development
1. Add a test case that fails, but would succeed with the new feature implemented.
2. Run all tests; make sure only the new test fails.
3. Write code to implement the new feature.
4. Rerun all tests, making sure the new test succeeds (and no others break).
One such cycle is sketched below.
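A minimal sketch of one TDD cycle (the Slugifier class and its behavior are hypothetical). Step 1: write a failing test for the not-yet-implemented feature.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class SlugifierTest {
        @Test
        public void replacesSpacesWithDashes() {
            assertEquals("hello-world", Slugifier.slugify("Hello World"));
        }
    }

Steps 2-4: the test fails (Slugifier does not exist yet), so write just enough code to make it pass, then rerun the whole suite.

    public class Slugifier {
        public static String slugify(String title) {
            return title.trim().toLowerCase().replace(' ', '-');
        }
    }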

Test-Driven Development [slide: diagram of the TDD cycle]

Test-Driven Development
Advantages:
- Helps ensure all required features are covered.
- Incrementally develops an extensive regression test suite that covers all required features.
- Since code is written specifically to satisfy the tests, it ensures good test coverage of all code.

Test-Driven Development
Disadvantages:
- Management must understand that as much time will be spent writing tests as writing code.
- Requires extensive validation of requirements: any feature whose test case is missing or incorrect will not be implemented correctly.