Test-Driven Development


Test-Driven Development
CSE784 Software Studio Class Notes
Jim Fawcett, Fall 2006

Specifications and Testing
Requirements specifications and testing are intimately related. The software requirements document defines, completely and unambiguously, the obligations of the software; the purpose of testing is to show that those obligations are satisfied.
- Small programs may have simple, informal specifications, e.g., our project statements. Large programs and software systems need much more.
- Each subsystem has its own requirements document and a suite of tests run at various levels.
- The whole system has a suite of tests run at several levels.
- Testing requires a lot of planning, covered by a Software Test Plan document.

Testing
Testing is conducted at several different times and levels:
- Construction Test: occurs during implementation. Its purpose is to support the efficient construction of working code.
- Unit Test: started when construction is complete. Its goal is to verify correctness; it is a very detailed examination of the operation of each function in each module.
- Integration Test: started after unit tests. Its goal is to support the composition of tested modules built by more than one team.
- Validation Test: torture tests designed to show that the system does not crash and can recover from inputs or an environment outside specified ranges.
- Qualification Test: a test of legal completeness, e.g., have all the requirements been met?
- Regression Test: a summary-level test designed to show that an existing, tested system still works after a change in environment or the addition of new functionality during maintenance.

Construction Tests
Construction tests occur during implementation. Their goal is to support the efficient construction of working code.
- We usually build construction tests in a test stub attached to the product code, adding new tests as we implement each function: small tests are added to the test stub as small pieces of code are added to the growing baseline.
- Construction tests are universal: every module packaging an implementation has a construction test suite.
- Test stubs provide an excellent way for a potential user to learn how a module works, at a deeper level than the manual page provides. A sketch of such a stub follows.
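Below is a minimal sketch of this convention for a hypothetical Stack module, using a compile-time switch to attach the stub; the class name and the TEST_STACK macro are illustrative assumptions, not part of the course materials.

```cpp
// stack.cpp - product code with an attached construction test stub
#include <iostream>
#include <stdexcept>
#include <vector>

// Product code under construction (hypothetical example class).
class Stack {
public:
  void push(int v) { items_.push_back(v); }
  int pop() {
    if (items_.empty()) throw std::runtime_error("pop on empty stack");
    int v = items_.back();
    items_.pop_back();
    return v;
  }
  bool empty() const { return items_.empty(); }
private:
  std::vector<int> items_;
};

#ifdef TEST_STACK
// Construction test stub: compiled only when building the module
// stand-alone, e.g.  g++ -DTEST_STACK stack.cpp
int main() {
  Stack s;
  s.push(1);
  s.push(2);
  std::cout << "pop -> " << s.pop() << '\n';    // expect 2
  std::cout << "pop -> " << s.pop() << '\n';    // expect 1
  std::cout << "empty -> " << std::boolalpha << s.empty() << '\n';
  // A new small test is added here each time a function is added.
  return 0;
}
#endif
```

Built with -DTEST_STACK the module runs its own tests; built without it, the same file links into the product with no test code attached.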

Unit Test
Unit tests are started when construction is complete. The goal of unit test is to verify the correctness of each function.
- Unit tests are built as separate test-driver modules, one or more for each production module, organized by functionality.
- They are very detailed examinations of the operation of each function, usually executed manually with a debugger. The main issues here are to provide suitable probing inputs that exercise every path and toggle every predicate.
- Each test should be preceded by the development of a test description and procedure, incorporated as comments in the test-driver code, as in the sketch below.
- Because unit tests are labor-intensive, they are expensive; they are therefore not universal, but are applied to every module on which many other modules depend.
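A separate test-driver module for the same hypothetical Stack might look like the sketch below, with the test description and procedure carried as comments; the class is repeated inline so the driver stands alone, though in practice it would come from the product module's header.

```cpp
// teststack.cpp - unit test driver module for the Stack class
//
// Test description: verify push/pop ordering and empty-stack behavior.
// Procedure: push two values and pop both, confirming LIFO order; then
//            pop from an empty stack, confirming the exception path.
#include <cassert>
#include <iostream>
#include <stdexcept>
#include <vector>

class Stack {  // inline stand-in for #include "stack.h"
public:
  void push(int v) { items_.push_back(v); }
  int pop() {
    if (items_.empty()) throw std::runtime_error("pop on empty stack");
    int v = items_.back();
    items_.pop_back();
    return v;
  }
  bool empty() const { return items_.empty(); }
private:
  std::vector<int> items_;
};

int main() {
  Stack s;
  s.push(1);
  s.push(2);
  assert(s.pop() == 2);              // exercises the normal pop path
  assert(s.pop() == 1);
  assert(s.empty());

  bool threw = false;
  try { s.pop(); }                   // toggles the empty-stack predicate
  catch (const std::runtime_error&) { threw = true; }
  assert(threw);

  std::cout << "Stack unit test passed\n";
  return 0;
}
```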

Integration Test
Integration testing is started after unit test. Its goal is to support the efficient composition of modules constructed by more than one team.
- Each test should be preceded by the development of a test description and procedure, often incorporated as comments in the test code, called an integration test driver. Essentially the driver acts like a test executive that exercises the functionality of a group of modules; a sketch follows.
- Integration testing starts as a black-box test, e.g., using only an external model of the component based on its inputs and outputs. Often, however, it is necessary to look internally to find and fix the sources of integration problems.
- Integration testing is often conducted in a series of builds, where each build adds new functionality to a test baseline or fixes problems in the existing baseline.
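A minimal driver in this test-executive style might compose the public interfaces of two modules and check only the end-to-end result. Both modules below are hypothetical inline stand-ins; a real driver would include the headers published by each team.

```cpp
// testintegration.cpp - integration test driver (test executive) sketch
//
// Test description: verify that two separately built modules, a
// tokenizer and an accumulator, compose correctly end to end.
// Procedure: feed a known line through tokenize(), pass the tokens to
//            sum(), and compare against the expected total.
#include <cassert>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

// Team A's module: split a line into integer tokens.
std::vector<int> tokenize(const std::string& line) {
  std::vector<int> out;
  std::istringstream in(line);
  int v;
  while (in >> v) out.push_back(v);
  return out;
}

// Team B's module: accumulate the token values.
int sum(const std::vector<int>& vals) {
  int total = 0;
  for (int v : vals) total += v;
  return total;
}

int main() {
  // Black-box checks: only inputs and outputs, no internal probing.
  assert(sum(tokenize("2 3 4")) == 9);
  assert(sum(tokenize("")) == 0);
  std::cout << "tokenize/sum integration test passed\n";
  return 0;
}
```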

Validation Test
Validation testing is started during and/or after integration testing. Its goal is to ensure "reasonable" operation in the presence of inputs or an environment outside specified ranges; it is essentially a test of the robustness of the current system.
- Under these conditions we expect outputs that do not meet specifications, perhaps resulting in error messages. But we want to avoid crashes, or taking the system to states from which it cannot recover when inputs or the environment return to their specified ranges.
- Validation tests use out-of-specification inputs, random inputs, pattern tests, and carefully constructed torture tests.
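One way to sketch such a torture test is to feed a stream of random, mostly out-of-specification inputs to a function and require that every rejection is an orderly one. The parseInt function and its rules here are illustrative assumptions standing in for a real system interface.

```cpp
// testvalidation.cpp - validation (robustness) test sketch
#include <climits>
#include <iostream>
#include <random>
#include <stdexcept>
#include <string>

// Stand-in for the module under test: accepts only decimal digits.
int parseInt(const std::string& s) {
  if (s.empty()) throw std::invalid_argument("empty input");
  int value = 0;
  for (char c : s) {
    if (c < '0' || c > '9') throw std::invalid_argument("non-digit input");
    int digit = c - '0';
    if (value > (INT_MAX - digit) / 10) throw std::out_of_range("overflow");
    value = value * 10 + digit;
  }
  return value;
}

int main() {
  std::mt19937 gen(12345);                          // fixed seed: repeatable
  std::uniform_int_distribution<int> byte(0, 255);  // any byte, valid or not
  std::uniform_int_distribution<int> len(0, 32);

  for (int i = 0; i < 10000; ++i) {
    std::string junk;
    for (int j = 0, n = len(gen); j < n; ++j)
      junk += static_cast<char>(byte(gen));
    try {
      parseInt(junk);           // may succeed or throw; either is acceptable
    } catch (const std::exception&) {
      // Expected: out-of-spec input rejected with a diagnosable error.
    }
    // Any crash or uncaught exception here fails the validation test.
  }
  std::cout << "Validation test completed without crashes\n";
  return 0;
}
```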

Qualification Test
Qualification tests are started after product development has been completed. They test the legal completeness of the implementation, e.g., do we meet every requirement in the B-level specification?
- Each requirement ("shall") in the B-specification is allocated a test.
- Does the set of B-level requirements cover the requirements of the A-level specification?
- Qualification tests come in four flavors:
  - Inspection: verify this requirement by an inspection of code and/or documents.
  - Demonstration: verify this requirement by using the system as built.
  - Analysis: run the system and collect data that will be analyzed off-line.
  - Test: build signal-insertion and measurement-extraction software to instrument the system, to evaluate requirements that cannot be tested by inspection, demonstration, or analysis.
- Qualification testing is governed by a test plan and has test descriptions and procedures for each test. These are part of the Test Plan document and may also be incorporated as comments in qualification test drivers.
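As a rough illustration, not a prescribed format, the allocation of "shall" requirements to tests and verification methods can itself be captured in a small table-driven driver; every requirement ID and check below is hypothetical.

```cpp
// qualification.cpp - sketch of requirement-to-test allocation
#include <functional>
#include <iostream>
#include <string>
#include <vector>

// One of the four qualification flavors, recorded per requirement.
enum class Method { Inspection, Demonstration, Analysis, Test };

struct Requirement {
  std::string id;               // the "shall" from the B-level spec
  Method method;                // how the requirement is verified
  std::function<bool()> check;  // pass/fail hook for automated methods
};

int main() {
  std::vector<Requirement> reqs = {
    {"B-3.1: shall parse well-formed input", Method::Demonstration,
     [] { return true; /* run the demonstration scenario here */ }},
    {"B-3.2: shall reject malformed input", Method::Test,
     [] { return true; /* run the instrumented test here */ }},
  };
  bool allPass = true;
  for (const auto& r : reqs) {
    bool pass = r.check();
    allPass = allPass && pass;
    std::cout << (pass ? "Pass  " : "Fail  ") << r.id << '\n';
  }
  return allPass ? 0 : 1;   // legal completeness requires every Pass
}
```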

Regression Test
Regression tests are run any time the system's platform is changed or new functionality is added. These tests are fairly high-level but are designed to exercise all the required functionality. They are intended to verify expected operation when:
- the platform (computer and related hardware and software) has changed;
- new functionality has been added to the software during maintenance;
- latent errors have been fixed during maintenance.
Regression tests are automated: running a regression test requires a single command that establishes any required preconditions (directories, data, environment) and then exercises all the functionality of the system. Test automation is usually accomplished with a test harness. New regression tests are added every time new functionality is added to the system during maintenance.

Extensibility of Testing
Notice that every level of testing assumes that tests are extensible: it is easy to add new tests and integrate them with the existing ones.
- Most testing is automatic. For all but unit testing, we simply give a command and an extensive suite of tests is run.
- Test reports are simple and obvious, although a lot of test data may be logged for off-line examination.
- Each test has well-defined pass/fail criteria. For all but unit and integration testing this is a simple boolean condition: Pass == true or Pass == false.
Clearly, with this much effort applied to testing, it is cost-effective to provide some infrastructure for testing. This is what Project #1, Fall 03 explored; a minimal harness sketch follows.
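As an illustration of such infrastructure (not the Project #1 design, which is not reproduced here), a minimal harness can register boolean test functions and run them all with one command:

```cpp
// testharness.cpp - minimal extensible test harness sketch
// Each test is a function returning a boolean pass/fail result;
// adding a test means adding one registration line.
#include <functional>
#include <iostream>
#include <string>
#include <vector>

struct TestCase {
  std::string name;
  std::function<bool()> run;   // Pass == true, Fail == false
};

int main() {
  std::vector<TestCase> tests = {
    {"arithmetic sanity", [] { return 2 + 2 == 4; }},
    {"string round-trip", [] { return std::string("tdd") == "tdd"; }},
    // New tests are appended here without touching harness code.
  };

  int failures = 0;
  for (const auto& t : tests) {
    bool pass = false;
    try { pass = t.run(); }
    catch (...) { pass = false; }   // a throwing test counts as a failure
    std::cout << (pass ? "PASS  " : "FAIL  ") << t.name << '\n';
    if (!pass) ++failures;
  }
  std::cout << (tests.size() - failures) << '/' << tests.size()
            << " tests passed\n";
  return failures == 0 ? 0 : 1;
}
```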

Test-Driven Development
Test-Driven Development (TDD) attempts to invert the usual sequence of development: in TDD we start by developing a test before we write any code that the test executes. The sequence is often like this:
1. Write a test driver.
2. Add a shell for the product code the test exercises.
3. Attempt to compile and run, revising until this process succeeds.
4. Add code to the test driver to test a specific piece of required functionality.
5. Add code to the product to implement that functionality.
6. Continue this process until the module is complete.
Clearly, some kind of testing infrastructure is appropriate here. Test harnesses are almost universally used with TDD, since testing is continuous throughout development.
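The sketch below compresses steps 1 through 5 into one file for a hypothetical Counter module; in practice the test in main is written first, fails against the empty shell, and then drives the implementation until it passes.

```cpp
// tddexample.cpp - one TDD iteration for a hypothetical Counter module
#include <cassert>
#include <iostream>

// Steps 2 and 5: first an empty shell so the test compiles, then the
// bodies are filled in until the test below passes.
class Counter {
public:
  void increment() { ++count_; }
  int value() const { return count_; }
private:
  int count_ = 0;
};

// Steps 1 and 4: the test driver, written before the product code,
// states the required functionality.
int main() {
  Counter c;
  assert(c.value() == 0);   // new counters start at zero
  c.increment();
  c.increment();
  assert(c.value() == 2);   // increments accumulate
  std::cout << "Counter test passed\n";
  return 0;   // step 6: repeat with the next piece of functionality
}
```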

TDD Process
Here is an excerpt from "Test-Driven Development" by David Astels (Prentice Hall, 2003):
- You maintain an exhaustive suite of Programmer Tests.
- No code goes into production unless it has associated tests.
- You write the tests first.
- The tests determine what code you need to write.
The following slides provide excerpts and interpretations drawn from this book.

Programmer Tests
Unit tests are written to test that the code you've written works. Programmer tests, under TDD, define what it means for the implementation to work.
Test-driven development implies that you have an exhaustive set of tests: there is no code until there is a test that requires it in order to pass. There should be no code in the system that was not written in response to a test. So the test suite is, by definition, exhaustive.

Extreme Programming
Extreme Programming is one of a family of development processes, called Agile Development methods, that use TDD. Extreme Programming has several fundamental steps:
- Plan a series of releases, with some nominal functionality associated with each.
- Each release begins with a mini-plan, conducted with the customer, and continues with requirements analysis and top-level design.
- Test-driven development is used for all implementation, using programmer pairs: one person writes the test and product code; the other watches, critiques, suggests changes, and reviews.
- After a release is delivered, the delivered code is refactored to improve its maintainability and robustness before the next release begins. Refactoring uses the test apparatus developed for test-driven development.

Refactoring
Refactoring is driven by three code attributes:
- There is duplication in the code: we refactor to eliminate duplication.
- The code and/or its intent is not clear.
- The code "smells", i.e., there is a subtle or not-so-subtle indication of a problem with the code. Typical responses:
  - Commented code is replaced with cleaner, clearer code that needs no comments to be understood and maintained.
  - Data classes or structs are replaced with classes that provide member functions to manage all transformations of the data (see the sketch below).
  - Inappropriate intimacy: methods are moved so that pieces of code that need to know about each other are in the same module and, if possible, in the same class.
  - Large classes and modules are partitioned into smaller, more maintainable parts.
  - Overly dependent classes are removed: if a change to one class requires changes in many other places, we need to reduce dependencies, perhaps by defining interfaces to avoid using concrete names, or by rearranging functionality.
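As one concrete instance of the "data class" smell above, a bare struct whose fields every client mutates can be refactored into a class that owns all transformations of its data; both versions below are hypothetical.

```cpp
// refactorexample.cpp - replacing a data struct with a behavior-owning class
#include <stdexcept>

// Before: a data struct. Every client mutates balance directly, so the
// deposit/withdraw rules end up duplicated across the code base.
struct AccountData {
  double balance;
};

// After: the class owns all transformations of its data, so the rules
// live in one place and the invariants can be enforced.
class Account {
public:
  explicit Account(double opening) : balance_(opening) {}
  void deposit(double amount) {
    if (amount <= 0) throw std::invalid_argument("deposit must be positive");
    balance_ += amount;
  }
  void withdraw(double amount) {
    if (amount <= 0 || amount > balance_)
      throw std::invalid_argument("invalid withdrawal");
    balance_ -= amount;
  }
  double balance() const { return balance_; }
private:
  double balance_;
};

int main() {
  Account a(100.0);
  a.deposit(50.0);
  a.withdraw(25.0);
  return a.balance() == 125.0 ? 0 : 1;   // crude self-check
}
```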

End of Presentation