Random Testing Tor Stålhane Jonas G. Brustad

What is random testing The principle of random testing is simple and can be described as follows:
1. For each input parameter, generate a random but legal value.
2. Apply the full set of inputs to the SUT.
3. Register the result and go back to step 1.
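
A minimal sketch of this loop in Python (the function names, the two-parameter SUT and the oracle are assumptions made purely for illustration):

    import random

    def random_testing(system_under_test, oracle, n_tests=1000, low=0.0, high=100.0):
        """Sketch of plain random testing for a SUT with two numeric inputs."""
        failures = []
        for _ in range(n_tests):
            # 1. For each input parameter, generate a random but legal value
            x = random.uniform(low, high)
            y = random.uniform(low, high)
            # 2. Apply the full set of inputs to the SUT
            result = system_under_test(x, y)
            # 3. Register the result and go back to step 1
            if not oracle(x, y, result):
                failures.append((x, y))
        return failures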

Chen’s observation Inputs that are close to each other in the input domain tend to go through the same path. Thus, in order to find most of the errors, we should spread the test cases as much as possible. This approach is called Adaptive Random Testing. We will look at four approaches:
Partition Adaptive Random Testing
Basic Random Testing – RT
Basic Adaptive Random Testing – ART
Mirror Adaptive Random Testing – MART

Block failure pattern

Strip failure pattern

Point failure pattern

Some notation
D: input domain size
n: number of test cases
m: number of tests that fail
F = D/m. Note that
– Large F => small m => few errors detected
– Small F => large m => many errors detected
θ = 1/F – the failure rate
Frel = Fobs * θ
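
A small worked example of these quantities (the numbers are invented for illustration): with D = 10 000 inputs of which m = 100 fail, we get F = D/m = 100 and θ = 1/F = 0.01. A strategy that on average finds its first failure after Fobs = 65 tests then has Frel = Fobs * θ = 65 * 0.01 = 0.65, which can be read as needing about 65 % of the tests that plain random testing would be expected to need.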

Partition Adaptive Random Testing Tor Stålhane Jonas G. Brustad

ART by random partitioning – 1 Algorithm for a two-dimensional case:
1. Start with C = {(Xmin, Ymin), (Xmax, Ymax)}.
2. Draw a random point (X1, Y1) in C. This will split C into four regions – R1, R2, R3 and R4. Select T = max area {R1, R2, R3, R4}. See next slide.

ART by random partitioning – 2 [Figure: C with corners (Xmin, Ymin) and (Xmax, Ymax), split by (X1, Y1) into the regions R1, T = R2, R3 and R4.] Select a test (X2, Y2) in T. If it is a failure, report the failure and stop. Otherwise, split T in the same way as we split C – see next slide.

ART by random partitioning – 3 Select a test (X2, Y2) in T. If it is a failure, report the failure and stop. Otherwise, repeat the process. [Figure: the same regions, now with the test (X2, Y2) placed inside T.]
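
A hedged Python sketch of the partitioning procedure above (the oracle is_failure, the domain bounds and the test budget are assumptions; every drawn point is treated as an executed test):

    import random

    def art_random_partitioning(is_failure, x_range=(0.0, 1.0), y_range=(0.0, 1.0),
                                max_tests=1000):
        """ART by random partitioning for a two-dimensional input domain."""
        (x_min, x_max), (y_min, y_max) = x_range, y_range
        region = (x_min, y_min, x_max, y_max)      # current region, starts as C
        for _ in range(max_tests):
            x_lo, y_lo, x_hi, y_hi = region
            # Draw a random test point inside the current region
            x = random.uniform(x_lo, x_hi)
            y = random.uniform(y_lo, y_hi)
            if is_failure(x, y):
                return (x, y)                      # report the failure and stop
            # The point splits the region into four rectangles R1..R4
            quadrants = [(x_lo, y_lo, x, y), (x, y_lo, x_hi, y),
                         (x_lo, y, x, y_hi), (x, y, x_hi, y_hi)]
            # Continue in the quadrant with the largest area, as on the slides
            region = max(quadrants, key=lambda r: (r[2] - r[0]) * (r[3] - r[1]))
        return None                                # no failure within the budget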

ART by bisection – 1 Algorithm for a two-dimensional case:
1. Start with C = {(Xmin, Ymin), (Xmax, Ymax)}.
2. Draw a random test (X1, Y1) in C. If it fails we are finished. Otherwise split C in two equal parts – see next slide.

ART by bisection – 2 [Figure: C split into two halves, with the first test (X1, Y1) in one half and the new test (X2, Y2) in the other.] Select a test (X2, Y2) in the untested half of C. If it is a failure, report the failure and stop. Otherwise, split C again – see next slide.

ART by bisection – 3 If (X2, Y2) is not a failure, split C again and select a test in each part that we have not tested yet – (X3, Y3) and (X4, Y4). If one of these tests fails, report the failure and stop. Otherwise, repeat the process. [Figure: C split into four parts, with the tests (X1, Y1), (X2, Y2), (X3, Y3) and (X4, Y4) – one in each part.]
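
A corresponding sketch of ART by bisection, with the same assumed oracle; bisecting along alternating axes matches the figures but is an implementation choice:

    import random

    def art_by_bisection(is_failure, x_range=(0.0, 1.0), y_range=(0.0, 1.0),
                         max_rounds=10):
        """ART by bisection: every cell without a test gets one random test per round."""
        (x_min, x_max), (y_min, y_max) = x_range, y_range
        cells = [(x_min, y_min, x_max, y_max)]
        tests = []                                 # executed, non-failing tests
        x, y = random.uniform(x_min, x_max), random.uniform(y_min, y_max)
        if is_failure(x, y):                       # first test anywhere in C
            return (x, y)
        tests.append((x, y))
        for round_no in range(max_rounds):
            new_cells = []
            for (xl, yl, xh, yh) in cells:         # bisect every cell
                if round_no % 2 == 0:              # split on X this round
                    xm = (xl + xh) / 2
                    new_cells += [(xl, yl, xm, yh), (xm, yl, xh, yh)]
                else:                              # split on Y this round
                    ym = (yl + yh) / 2
                    new_cells += [(xl, yl, xh, ym), (xl, ym, xh, yh)]
            cells = new_cells
            for (xl, yl, xh, yh) in cells:         # test the still untested cells
                if any(xl <= tx < xh and yl <= ty < yh for tx, ty in tests):
                    continue
                x, y = random.uniform(xl, xh), random.uniform(yl, yh)
                if is_failure(x, y):
                    return (x, y)
                tests.append((x, y))
        return None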

The exclusion factor – 1 All types of Adaptive Random Testing (ART) can be improved by introducing the exclusion factor, usually denoted by f. This factor will force the new tests away from the tests that have already been run. The optimal factor value will vary, depending on the failure rate and on the failure pattern – block, strip or point.
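
The slides do not define f formally. One common formulation (an assumption here, in the style of restricted random testing) treats f as the fraction of the input domain that is excluded, shared equally as circular zones around the already executed tests; candidates falling inside a zone are discarded:

    import math
    import random

    def next_test_with_exclusion(executed, f=0.4, x_range=(0.0, 1.0),
                                 y_range=(0.0, 1.0), max_draws=10000):
        """Draw random candidates until one lies outside every exclusion zone."""
        (x_min, x_max), (y_min, y_max) = x_range, y_range
        if not executed:
            return random.uniform(x_min, x_max), random.uniform(y_min, y_max)
        domain_area = (x_max - x_min) * (y_max - y_min)
        # Total excluded area f * |D|, split equally over the executed tests
        radius = math.sqrt(f * domain_area / (len(executed) * math.pi))
        for _ in range(max_draws):
            x = random.uniform(x_min, x_max)
            y = random.uniform(y_min, y_max)
            if all(math.hypot(x - tx, y - ty) >= radius for tx, ty in executed):
                return x, y
        return None                    # the zones cover (almost) the whole domain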

The exclusion factor – 2

The exclusion factor – 3 Based on this, we have chosen f = 0.4 as the best value.

Comparisons

Basic Adaptive Random Testing Tor Stålhane Jonas G. Brustad

Fixed Size Candidate Set - FSCS

Max distance – 1 Let a and b be two n-dimensional inputs {a1, a2, ..., an} and {b1, b2, ..., bn}, and let dist(a, b) be the Euclidean distance sqrt((a1 – b1)^2 + ... + (an – bn)^2). E.g. in a two-dimensional space we have the two parameters a = {1, 2} and b = {2, 5}. Then we have dist(a, b) = sqrt(1 + 9) = 3.16.

Max distance – 2 Let T and C be two disjoint sets: T = {t1, t2, ..., tn} is the set of executed tests and C = {c1, c2, ..., ck} is the candidate set. Find the ch in C that satisfies min(dist(ch, t1), ..., dist(ch, tn)) >= min(dist(ci, t1), ..., dist(ci, tn)) for every ci in C. This criterion will spread the test cases evenly by finding the largest minimum distance between the next test case – selected from C – and the already executed test cases in T.

Max distance algorithm

A small example – 1 We have two data sets:
T = {(1, 1), (3, 4)} – already executed tests
C = {(1, 2), (3, 1)} – candidate test set
[Figure: the points t1, t2, c1 and c2 plotted in the input plane.]

A small example – 2 Using the max distance algorithm we get: for c1, dist(c1, t1) = 1.0 and dist(c1, t2) = 2.8; for c2, dist(c2, t1) = 2.0 and dist(c2, t2) = 3.0. The minimum distance is 1.0 for c1 and 2.0 for c2. Since 2.0 > 1.0, the next test is c2 = (3, 1).
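
A small Python sketch of the max-distance (FSCS) selection; the candidate-set size k, the domain bounds and the function name are assumptions:

    import math
    import random

    def fscs_next_test(executed, k=10, x_range=(0.0, 1.0), y_range=(0.0, 1.0)):
        """From k random candidates, pick the one farthest from its nearest executed test."""
        candidates = [(random.uniform(*x_range), random.uniform(*y_range))
                      for _ in range(k)]
        if not executed:                           # nothing run yet: anything will do
            return candidates[0]

        def min_dist(c):                           # distance to the nearest executed test
            return min(math.hypot(c[0] - t[0], c[1] - t[1]) for t in executed)

        return max(candidates, key=min_dist)       # max-min distance criterion

With executed = [(1, 1), (3, 4)] and the candidates (1, 2) and (3, 1) from the example, min_dist gives 1.0 and 2.0 respectively, so (3, 1) is selected – the same result as above.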

Test comparison – 1 Defect types seeded:
AOR: Arithmetic Operator Replacement
ROR: Relational Operator Replacement
SVR: Scalar Variable Replacement
CR: Constant Replacement

Results with RT

Results with ART (a) and FSCS (r)

Test comparison – 2

Mirror Adaptive Random Testing Tor Stålhane Jonas G. Brustad

The problem with ART All versions of ART require a large amount of computation due to the distance calculations and comparisons. MART – Mirror ART – is simpler and requires less computation.

The MART procedure The procedure has four steps:
1. Partition the input domain into m disjoint subdomains. One is chosen as the source subdomain; the rest are mirror subdomains.
2. Apply the D-ART process to generate the next test case from the source subdomain. Execute this test case and quit if we find a defect.
3. Apply the mirror function to the test case from step 2 to generate a test case for each mirror subdomain. Execute these test cases in sequential order and stop when we find a defect.
4. Repeat steps 2 and 3 until finding the first failure or until reaching the stopping condition.

Mirror partitioning Below we see several ways to create mirror partitions:
X2Y1 => X is bisected, Y is unchanged
X2Y2 => both X and Y are bisected
X4Y2 => X is split into four parts, Y is bisected
X4Y1 => X is split into four parts, Y is unchanged
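
As a sketch of one possible mirror function (plain translation; the slides do not fix the mapping, so the function name and the choice of the lower-left cell as source subdomain are assumptions):

    def mirror_test_cases(x, y, m_x=2, m_y=1, x_range=(0.0, 1.0), y_range=(0.0, 1.0)):
        """Translate a source test into every mirror subdomain of an m_x * m_y split."""
        (x_min, x_max), (y_min, y_max) = x_range, y_range
        dx = (x_max - x_min) / m_x                 # width of one subdomain
        dy = (y_max - y_min) / m_y                 # height of one subdomain
        mirrors = []
        for i in range(m_x):
            for j in range(m_y):
                if i == 0 and j == 0:
                    continue                       # skip the source subdomain itself
                mirrors.append((x + i * dx, y + j * dy))
        return mirrors

For X2Y1 (m_x = 2, m_y = 1) a source test at (0.2, 0.3) gets one mirror image, (0.7, 0.3); for X2Y2 it gets three: (0.2, 0.8), (0.7, 0.3) and (0.7, 0.8).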

The D-ART process
1. Set E to be the empty set.
2. Select a random test case from the input domain and execute it. If no failure, add the test case to E; otherwise stop.
3. Construct C = {c1, c2, …, ck}, where all ci are randomly selected and E and C are disjoint.
4. Let n = |E| and select the cj in C whose minimum distance to the executed tests is largest, i.e. min(dist(cj, e1), ..., dist(cj, en)) >= min(dist(ci, e1), ..., dist(ci, en)) for every ci in C. Execute cj and, if it does not fail, add it to E.
5. Repeat steps 3 and 4 until the first defect is found.