Can Testing Prove Software Has No Bug?
Position statement, COMPSAC 2012 Panel 2: Software Testing, Software Quality and Trust in Software-Based Systems
Hong Zhu, Department of Computing and Communication Technologies, Oxford Brookes University, Oxford OX33 1HX, UK

"Program testing can be used to show the presence of bugs, but never their absence." (Dijkstra, 1972)

WHAT HAPPENED IN THE PAST 40 YEARS? Software engineers have never stopped testing in practice. More and more emphasis has been placed on software testing in modern software development methodologies, such as the test-driven approach of agile methods (Beck, K., Test-Driven Development: By Example, Addison-Wesley, 2002). It is widely recognized that confidence in software systems can be gained from systematic testing. Is there a theoretical foundation for this practice?

THE FIRST ATTEMPT In 1975, Goodenough and Gerhart made a significant breakthrough by proposing the notion of test adequacy criteria (J. B. Goodenough and S. L. Gerhart, Toward a theory of test data selection, IEEE Trans. Softw. Eng., SE-1(2), June 1975). A large number of adequacy criteria have since been proposed and investigated; for a survey, see Zhu, H., Hall, P. and May, J., Software unit test coverage and adequacy, ACM Computing Surveys, 29(4), Dec. 1997, pp. 366-427. Several theories have also been advanced in an attempt to prove that testing can guarantee software correctness.

TWO APPROACHES
Statistical theories: if a software system passes a certain number of random tests, the reliability of the system can be asserted with a certain confidence, according to the mathematical theory of probability and statistics.
Fault elimination theories: these rest on the assumption that there are only a limited number of ways in which a software system can contain faults. When the software passes a test case successfully, certain possible faults are regarded as eliminated; after passing a large number of well-selected test cases, most possible faults of the software have been eliminated. Fault-based testing methods (such as mutation testing and perturbation testing) and error-based testing methods (such as category-partition testing) have been advanced along these lines.
Neither approach leads to a definite answer to the foundational problem of software testing.
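
To make the statistical argument concrete, here is a minimal sketch (my own illustration, not part of the slides) of the standard confidence calculation behind such claims. The function name random_tests_needed and the target failure probability theta are illustrative choices; the calculation assumes test inputs drawn independently from the operational profile. If the true failure probability were at least theta, the chance of observing N consecutive passes is at most (1 - theta)^N, so requiring that chance to be below 1 - confidence gives N >= ln(1 - confidence) / ln(1 - theta).

```python
import math

def random_tests_needed(theta: float, confidence: float) -> int:
    """Number of consecutive passing random tests needed before we can claim,
    at the given confidence level, that the per-test failure probability is
    below theta.  Assumes independent tests drawn from the operational profile.
    If the true failure probability were >= theta, the probability of seeing
    N passes in a row would be at most (1 - theta)**N; we require that
    probability to be no more than 1 - confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - theta))

# Example: claiming a failure probability below 10^-3 with 99% confidence
# already takes about 4,603 passing random tests, and the number grows
# without bound as theta approaches 0 -- which is why this route yields
# reliability estimates but never a proof of correctness.
print(random_tests_needed(theta=1e-3, confidence=0.99))   # 4603
```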

THE INDUCTIVE INFERENCE THEORY This theory regards software testing as an inductive inference process: a tester observes a software system's correctness on a finite set of test cases and then tries to conclude that the system is correct on all inputs. In practice, such inference is done implicitly, or even omitted completely. (Zhu, H., A formal interpretation of software testing as inductive inference, Software Testing, Verification and Reliability, 6, July 1996, pp. 3-31.)

IDENTIFICATION IN THE LIMIT
Let M be an inductive inference device and a = a_1, a_2, ..., a_n, ... be an infinite sequence of instances of a given rule f. After seeing the first n instances, M outputs the conjecture f_n = M(a_1, a_2, ..., a_n).
If there is a natural number K such that f_m = f_n for all n, m ≥ K, we say that M converges to f_K on a and that M behaviorally identifies f correctly in the limit. If, in addition, f_K = f, we say that M explanatorily identifies f correctly in the limit.
A set P of rules is behaviorally (explanatorily) learnable by M if every f ∈ P is behaviorally (explanatorily) identified correctly in the limit by M. For example, the set of one-variable polynomials is learnable.
[Slide diagram: the device M reads the instances ..., a_n, ..., a_2, a_1 and emits the sequence of conjectures f_1, f_2, ..., f_n, ... converging to f.]
Case, J. and Smith, C., Comparison of identification criteria for machine inductive inference, Theoretical Computer Science, 25(2), 1983, pp. 193-220.
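
As a concrete illustration of this definition (my own sketch, not from the slides), the following code implements a simple inference device M for the example class mentioned on the slide, the one-variable polynomials. The names interpolate, device_M and the target rule f(x) = 2x^3 - x + 5 are illustrative. After each new sample (x, f(x)), M conjectures the least-degree polynomial interpolating everything seen so far, computed exactly over the rationals; once the number of samples exceeds the degree of f, the conjecture never changes again, so M identifies f in the limit.

```python
from fractions import Fraction
from itertools import count

def interpolate(samples):
    """Least-degree polynomial through the (x, f(x)) samples, returned as a
    coefficient list [c0, c1, ...] (Newton divided differences, exact)."""
    xs = [Fraction(x) for x, _ in samples]
    dd = [Fraction(y) for _, y in samples]
    n = len(samples)
    for j in range(1, n):                      # divided-difference table, in place
        for i in range(n - 1, j - 1, -1):
            dd[i] = (dd[i] - dd[i - 1]) / (xs[i] - xs[i - j])
    poly = [dd[n - 1]]                         # expand Newton form by Horner's rule
    for i in range(n - 2, -1, -1):
        shifted = [Fraction(0)] + poly         # poly * x
        for k, c in enumerate(poly):
            shifted[k] -= xs[i] * c            # poly * (x - xs[i])
        shifted[0] += dd[i]
        poly = shifted
    while len(poly) > 1 and poly[-1] == 0:     # canonical form, so equal conjectures compare equal
        poly.pop()
    return poly

def device_M(stream):
    """Inference device M: emit a conjecture after each new instance of the rule."""
    seen = []
    for sample in stream:
        seen.append(sample)
        yield interpolate(seen)

def f(x):                                      # the rule to be learned
    return 2 * x**3 - x + 5

conjectures = device_M((x, f(x)) for x in count())
for n in range(1, 7):
    print(n, next(conjectures))
# From the 4th sample onward every conjecture equals the coefficient list of
# 2x^3 - x + 5 and never changes again: M has converged, i.e. it
# (explanatorily) identifies f correctly in the limit.
```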

INTERPRETATION OF TESTING AS INDUCTION
A program under test is interpreted as a rule to learn.
A test case is interpreted as an instance of the rule, and a test set is then a set of instances.
The set P is taken to be a set of functions on a domain D of input values such that both the program p under test and its specification s are included in P.

INTERPRETATION OF TESTING AS INDUCTION
Proposition 1. Let P be a behaviourally learnable set of functions on a domain D, and let |·| be a complexity measure on the elements of D such that for any natural number N the subset {x ∈ D : |x| ≤ N} is finite. Then, for all functions p, s ∈ P, there exists a natural number N such that p ≡ s if and only if p(x) = s(x) for all x ∈ D with |x| ≤ N.
(Zhu, H., A formal interpretation of software testing as inductive inference, Software Testing, Verification and Reliability, vol. 6, July 1996, pp. 3-31.)
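
A toy instance of Proposition 1 (my illustration, under the assumption that P is restricted to one-variable integer polynomials of degree at most 3 and |x| is the usual absolute value): the finite test set {x : |x| ≤ 2} already decides equivalence, because two members of this class agreeing on those five inputs agree everywhere; otherwise their difference would be a non-zero polynomial of degree at most 3 with five roots. The helper names adequate_test, spec, good and buggy are hypothetical.

```python
def adequate_test(program, specification, bound=2):
    """Exhaustive test over the finite set {x : |x| <= bound} of Proposition 1.
    For the class assumed here (integer polynomials of degree <= 3), bound = 2
    gives five inputs, more than degree + 1, so passing this test establishes
    equality with the specification on *all* inputs, not just the tested ones."""
    return all(program(x) == specification(x) for x in range(-bound, bound + 1))

def spec(x):        # the specification s
    return x**3 - 2 * x + 7

def good(x):        # a program p that is equivalent to s
    return x**3 - 2 * x + 7

def buggy(x):       # a faulty program that deviates from s
    return x**3 - 2 * x**2 + 7

print(adequate_test(good, spec))    # True:  within the class, this proves p == s
print(adequate_test(buggy, spec))   # False: the finite test set reveals the fault
```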

THE THEORY The correctness of a software system can be validated by testing on a finite number of test cases provided that the program and specification are in a learnable set of functions. Testing that assures correctness can be performed without writing down a formal specification.

PRACTICAL IMPLICATION What current testing practice lacks is an analysis of the complexity of the program to determine a learnable set within which the program and the specification vary.

PROBLEMS THAT REMAIN OPEN
How many test cases do we need to assure that the program is correct? It depends on the complexity of the set P: the more we know about the software and its specification, the smaller the set P is and the fewer test cases are needed.
Can we define adequacy criteria according to complexity? Existing adequacy criteria already have the property that the more complex the software (either the program or the specification), the more test cases are required.
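
A small worked example of this dependence on complexity (my own, using the polynomial class from the earlier slides; the notation P_d is introduced here for illustration):

```latex
\[
  P_d = \{\, \text{one-variable polynomials of degree} \le d \,\},
  \qquad
  p \equiv s \iff p(x_i) = s(x_i) \ \text{for } d+1 \text{ distinct inputs } x_0, \dots, x_d,
\]
since the difference $p - s$ has degree at most $d$ and cannot have $d+1$
roots unless it is identically zero.  An adequate test set for $P_d$ thus has
exactly $d+1$ cases: knowing the program is linear ($d = 1$) reduces it to two
test cases, while every increase in the degree bound, i.e.\ in the complexity
of the set $P$, demands one more.
```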