
1 TQS - Teste e Qualidade de Software (Software Testing and Quality)
Levels of Testing (testing along the software lifecycle)
João Pascoal Faria, jpf@fe.up.pt, www.fe.up.pt/~jpf

2 Index
Introduction
Unit testing
Integration testing
System testing
Acceptance, alpha and beta testing
Regression testing
(for both procedure-oriented and object-oriented systems)

3 Test levels and the extended V-model of software development (reprinted, revisited)
[Figure: the extended V-model pairs each development phase with a test level. For each level, the test plan and test cases are specified/designed/coded alongside the corresponding development phase and reviewed/audited together with that phase's reviews (requirements review, design review, code reviews); on the ascending branch the tests are executed in order: execute unit tests, execute integration tests, execute system tests, execute acceptance tests.]
(source: I. Burnstein, p. 15)

4 Typical tests and reviews (reprinted)
[Figure: typical tests and reviews across the project lifecycle]
(source: "Software Project Survival Guide", Steve McConnell)

5 Test levels or phases (reprinted)
Unit testing
- Testing of individual program units or components
- Usually the responsibility of the component developer (except sometimes for critical systems)
- Tests are based on experience, specifications and code
- A principal goal is to detect functional and structural defects in the unit
Integration testing
- Testing of groups of components integrated to create a sub-system
- Usually the responsibility of an independent testing team (except sometimes in small projects)
- Tests are based on a system specification (technical specifications, designs)
- A principal goal is to detect defects that occur on the interfaces of units
System testing
- Testing the system as a whole
- Usually the responsibility of an independent testing team
- Tests are usually based on a requirements document (functional requirements/specifications and quality requirements)
- A principal goal is to evaluate attributes such as usability, reliability and performance (assuming unit and integration testing have been performed)
Acceptance testing
- Usually the responsibility of the customer
- Tests are based on a requirements specification or a user manual
- A principal goal is to check if the product meets customer requirements and expectations

6 Types of testing (reprinted)
[Figure: types of testing arranged along three axes. Level of detail: unit, integration, system. Accessibility: white box, black box. Characteristics: functional behaviour, reliability, usability, performance, robustness, security.]

7 Where is object-oriented testing different? (1) (w.r.t. procedure-oriented testing)
Easier and harder parts of testing object-oriented systems:
- Easier: modularity, small methods, reuse, interfaces identified early, quicker development
- Harder: inheritance, polymorphism, encapsulation, complex interfaces, more integration
(source: Dorothy Graham, "Testing object-oriented systems", in Ovum Evaluates: Software Testing Tools; cited in S. Pfleeger, "Software Engineering")

8 Where is object-oriented testing different? (2) (w.r.t. procedure-oriented testing)
[Table not transcribed: significant aspects of the testing (and V&V) domain where object-oriented testing is different]
(source: Dorothy Graham, "Testing object-oriented systems", in Ovum Evaluates: Software Testing Tools; cited in S. Pfleeger, "Software Engineering")

9 Testing with encapsulation and interaction (1)
[Figure: a capsule (object, class, module, ...) exchanges input and output messages/data with its environment (user, disk, devices, ...). A (public) procedure, function or method takes as inputs its call (input) parameters and the capsule's initial (input) internal state, and produces as outputs its return (output) parameters and the final (output) internal state, held in internal state variables.]
(interaction with auxiliary methods and objects ignored)
(auxiliary calls may be needed to set the initial internal state and to check the final internal state)

10 Testing with encapsulation and interaction (2)
Is the specification of a test case the specification of all the messages exchanged (black box in a different sense)? And also of the internal states at the points where messages are exchanged (white box in a different sense)?
[Figure: a numbered sequence of messages between a client object (or test driver or user), the target object and another object (or test stub or mock object). The client sets the initial state and checks the final state of the target; during execution of the target method there is communication with the other object and with another method of the target, with possible re-entrance.]
To be resumed with integration testing ...
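A minimal Java sketch of this message-based view (all class names are hypothetical, and JUnit 5 is assumed as the test framework): the test driver plays the client role, sets the target object's initial internal state, substitutes a hand-written stub for the collaborating object so the auxiliary messages can be observed, calls the target method, and checks the return value, the final internal state and the messages exchanged.

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.*;

    // Hypothetical collaborator interface, plus a stub that records received messages.
    interface Logger { void log(String message); }

    class RecordingLoggerStub implements Logger {
        final java.util.List<String> received = new java.util.ArrayList<>();
        public void log(String message) { received.add(message); }
    }

    // Hypothetical capsule under test: one internal state variable, one public method.
    class Counter {
        private int count;            // internal state
        private final Logger logger;  // auxiliary object (stubbed in the test)
        Counter(int initial, Logger logger) { this.count = initial; this.logger = logger; }
        int increment() { count++; logger.log("count=" + count); return count; }
        int getCount() { return count; }  // auxiliary call to check the final state
    }

    class CounterTest {
        @Test
        void incrementUpdatesStateAndNotifiesCollaborator() {
            RecordingLoggerStub stub = new RecordingLoggerStub();
            Counter target = new Counter(41, stub);   // set the initial internal state
            assertEquals(42, target.increment());     // check the output (return value)
            assertEquals(42, target.getCount());      // check the final internal state
            assertEquals(java.util.List.of("count=42"), stub.received); // messages sent
        }
    }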

11 Index
Introduction
Unit testing
Integration testing
System testing
Acceptance, alpha and beta testing
Regression testing
(for both procedure-oriented and object-oriented systems)

12 Unit testing object-oriented systems
Testing levels in object-oriented systems:
- operations associated with objects: usually not tested in isolation, because of encapsulation and dimension (too small)
- classes -> unit testing
- clusters of cooperating objects -> integration testing
- the complete OO system -> system testing
Complete test coverage of a class involves:
- testing all operations associated with an object
- setting and interrogating all object attributes
- exercising the object in all possible states
Inheritance makes it more difficult to design class tests, as the information to be tested is not localised.

13 Challenges of Class Testing
- Encapsulation: difficult to obtain a snapshot of a class's state without building extra methods that expose that state
- Inheritance: each new context of use (subclass) requires re-testing, because a method may be implemented differently (polymorphism); other unaltered methods within the subclass may use the redefined method and need to be tested
- White box tests: basis path, condition, data flow and loop tests can all be applied to individual methods within a class, but they don't test interactions between methods
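Two of these challenges can be made concrete with a short hypothetical Java sketch: a package-private snapshot method works around encapsulation without widening the public API, and a subclass that redefines a method shows why the inherited context must be re-tested.

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.*;

    // Hypothetical class whose interesting state is private (the encapsulation problem).
    class Thermostat {
        private double setPoint = 20.0;
        private boolean heating = false;
        void setTarget(double celsius) { setPoint = celsius; heating = celsius > 20.0; }
        // Package-private snapshot used only by tests.
        String snapshot() { return "setPoint=" + setPoint + " heating=" + heating; }
    }

    // Subclass redefining setTarget: its new context of use requires re-testing
    // (the inheritance/polymorphism problem).
    class EcoThermostat extends Thermostat {
        @Override void setTarget(double celsius) { super.setTarget(Math.min(celsius, 22.0)); }
    }

    class ThermostatTest {
        @Test void snapshotRevealsPrivateState() {
            Thermostat t = new Thermostat();
            t.setTarget(25.0);
            assertEquals("setPoint=25.0 heating=true", t.snapshot());
        }
        @Test void subclassContextIsRetested() {
            Thermostat t = new EcoThermostat();
            t.setTarget(25.0);  // the redefined method caps the set point
            assertEquals("setPoint=22.0 heating=true", t.snapshot());
        }
    }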

14 Class Testing Example
Test cases are needed for all operations.
Use a state model to identify state transitions for testing.
Examples of testing sequences:
- Shutdown → Waiting → Shutdown
- Waiting → Calibrating → Testing → Transmitting → Waiting
- Waiting → Collecting → Waiting → Summarising → Transmitting → Waiting
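A sketch of testing the first two sequences in Java (the state machine below is a hypothetical stand-in for the real class; only the states and transitions named on the slide are modelled):

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.*;

    enum State { SHUTDOWN, WAITING, CALIBRATING, TESTING, TRANSMITTING }

    // Hypothetical class under test: each operation is a guarded state transition.
    class Station {
        private State state = State.SHUTDOWN;
        State state() { return state; }
        void powerOn()   { require(State.SHUTDOWN);     state = State.WAITING; }
        void shutdown()  { require(State.WAITING);      state = State.SHUTDOWN; }
        void calibrate() { require(State.WAITING);      state = State.CALIBRATING; }
        void test()      { require(State.CALIBRATING);  state = State.TESTING; }
        void transmit()  { require(State.TESTING);      state = State.TRANSMITTING; }
        void done()      { require(State.TRANSMITTING); state = State.WAITING; }
        private void require(State s) {
            if (state != s) throw new IllegalStateException("in " + state + ", expected " + s);
        }
    }

    class StationStateTest {
        @Test void shutdownWaitingShutdown() {
            Station s = new Station();
            s.powerOn();  assertEquals(State.WAITING, s.state());
            s.shutdown(); assertEquals(State.SHUTDOWN, s.state());
        }
        @Test void waitingCalibratingTestingTransmittingWaiting() {
            Station s = new Station();
            s.powerOn(); s.calibrate(); s.test(); s.transmit(); s.done();
            assertEquals(State.WAITING, s.state());
        }
    }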

15 Random Class Testing
- Identify methods applicable to a class
- Define constraints on their use, e.g. the class must always be initialized first
- Identify a minimum test sequence: an operation sequence that defines the minimum life history of the class
- Generate a variety of random (but valid) test sequences: this exercises more complex class instance life histories
Example:
- An account class in a banking application has open, setup, deposit, withdraw, balance, summarize and close methods
- The account must be opened first and closed on completion
- Minimum test sequence: open - setup - deposit - withdraw - close
- Template: open - setup - deposit - [deposit | withdraw | balance | summarize]* - withdraw - close
- Generate random test sequences using this template (see the sketch below)
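A sketch of the random-sequence idea in Java (the Account below is hypothetical, with just enough behaviour to reject calls that violate the constraints; a fixed seed keeps the random runs reproducible):

    import java.util.List;
    import java.util.Random;

    // Minimal hypothetical Account: operations fail if the account is not open.
    class Account {
        private boolean isOpen; private int balance;
        void open() { isOpen = true; }
        void setup() { requireOpen(); }
        void deposit(int amount) { requireOpen(); balance += amount; }
        void withdraw(int amount) { requireOpen(); balance -= amount; }
        int balance() { requireOpen(); return balance; }
        void summarize() { requireOpen(); }
        void close() { requireOpen(); isOpen = false; }
        private void requireOpen() { if (!isOpen) throw new IllegalStateException("not open"); }
    }

    class RandomAccountTester {
        public static void main(String[] args) {
            Random rnd = new Random(42);  // fixed seed: failures are reproducible
            List<String> middle = List.of("deposit", "withdraw", "balance", "summarize");
            for (int run = 0; run < 100; run++) {
                Account a = new Account();
                a.open(); a.setup(); a.deposit(100);       // mandatory prefix
                for (int i = rnd.nextInt(10); i > 0; i--)  // random middle section
                    switch (middle.get(rnd.nextInt(middle.size()))) {
                        case "deposit"   -> a.deposit(rnd.nextInt(50));
                        case "withdraw"  -> a.withdraw(rnd.nextInt(20));
                        case "balance"   -> a.balance();
                        case "summarize" -> a.summarize();
                    }
                a.withdraw(1); a.close();                  // mandatory suffix
            }
            System.out.println("100 random life histories exercised without failure");
        }
    }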

16 Partition Class Testing
Reduces the number of test cases required (similar to equivalence partitioning).
State-based partitioning
- Categorize and test methods separately based on their ability to change the state of the class
- Example: deposit and withdraw change state, but balance does not
Attribute-based partitioning
- Categorize and test operations based on the attributes that they use
- Example: attributes balance and creditLimit can define partitions
Category-based partitioning
- Categorize and test operations based on the generic function each performs
- Example: initialization (open, setup), computation (deposit, withdraw), queries (balance, summarize), termination (close)
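For instance, state-based partitioning applied to the hypothetical Account sketched above yields one test per partition rather than one per method combination (again assuming JUnit 5):

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.*;

    class AccountPartitionTest {
        private Account freshAccount() {
            Account a = new Account();
            a.open(); a.setup(); a.deposit(100);
            return a;
        }
        @Test void stateChangingPartitionAltersTheBalance() {
            Account a = freshAccount();
            a.deposit(50);  assertEquals(150, a.balance());
            a.withdraw(30); assertEquals(120, a.balance());
        }
        @Test void queryPartitionLeavesStateUntouched() {
            Account a = freshAccount();
            int before = a.balance();
            a.summarize();                 // query operations must not change state
            assertEquals(before, a.balance());
        }
    }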

17 Index
Introduction
Unit testing
Integration testing
System testing
Acceptance, alpha and beta testing
Regression testing
(for both procedure-oriented and object-oriented systems)

18 Integration testing
- Testing of groups of components integrated to create a sub-system
- Usually the responsibility of an independent testing team (except sometimes in small projects)
- Integration testing should be black-box testing, with tests derived from the specification
- A principal goal is to detect defects that occur on the interfaces of units
- The main difficulty is localising errors; incremental integration testing (as opposed to big-bang integration testing) reduces this problem

19 Incremental integration testing

20 Approaches to integration testing
Top-down testing
- Start with the high-level system and integrate downwards, replacing individual components by stubs where appropriate (see the sketch below)
Bottom-up testing
- Integrate individual components in levels until the complete system is created
In practice, most integration involves a combination of these strategies.
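A small Java sketch of the top-down case (all names hypothetical): the high-level ReportService is integrated and tested first, with the not-yet-integrated component behind it replaced by a stub that returns canned answers.

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.*;

    interface SensorReader { double[] readAll(); }

    // Stub standing in for the real, not-yet-integrated component.
    class SensorReaderStub implements SensorReader {
        public double[] readAll() { return new double[] {1.0, 2.0, 3.0}; }
    }

    // High-level component, integrated first in top-down testing.
    class ReportService {
        private final SensorReader reader;
        ReportService(SensorReader reader) { this.reader = reader; }
        double average() {
            double[] values = reader.readAll();
            double sum = 0;
            for (double v : values) sum += v;
            return sum / values.length;
        }
    }

    class TopDownIntegrationTest {
        @Test void averageOverStubbedReadings() {
            assertEquals(2.0, new ReportService(new SensorReaderStub()).average(), 1e-9);
        }
    }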

21 Top-down testing

22 Bottom-up testing

23 Advantages and disadvantages
- Architectural validation: top-down integration testing is better at discovering errors in the system architecture
- System demonstration: top-down integration testing allows a limited demonstration at an early stage in the development
- Test implementation: often easier with bottom-up integration testing
- Test observation: problems with both approaches; extra code may be required to observe tests

24 Interface testing
- Takes place when modules or sub-systems are integrated to create larger systems
- Objectives are to detect faults due to interface errors or invalid assumptions about interfaces
- Particularly important for object-oriented development, as objects are defined by their interfaces

25 Interface testing

26 Interface types
- Parameter interfaces: data passed from one procedure to another
- Shared memory interfaces: a block of memory is shared between procedures
- Procedural interfaces: a sub-system encapsulates a set of procedures to be called by other sub-systems
- Message passing interfaces: sub-systems request services from other sub-systems

27 Interface errors
- Interface misuse: a calling component calls another component and makes an error in its use of its interface, e.g. parameters in the wrong order
- Interface misunderstanding: a calling component embeds incorrect assumptions about the behaviour of the called component
- Timing errors: the called and the calling component operate at different speeds and out-of-date information is accessed

28 Interface testing guidelines
- Design tests so that parameters to a called procedure are at the extreme ends of their ranges
- Always test pointer parameters with null pointers
- Design tests which cause the component to fail
- Use stress testing in message passing systems
- In shared memory systems, vary the order in which components are activated
(a sketch of the first three guidelines follows)
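The first three guidelines, sketched in Java for a hypothetical procedure indexOfMax that another sub-system is expected to call (JUnit 5 assumed):

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.*;

    class Maths {
        // Hypothetical procedural interface offered to other sub-systems.
        static int indexOfMax(int[] values) {
            if (values == null || values.length == 0)
                throw new IllegalArgumentException("values must be a non-empty array");
            int best = 0;
            for (int i = 1; i < values.length; i++) if (values[i] > values[best]) best = i;
            return best;
        }
    }

    class IndexOfMaxInterfaceTest {
        @Test void parametersAtTheExtremeEndsOfTheirRanges() {
            assertEquals(0, Maths.indexOfMax(new int[] {Integer.MAX_VALUE, Integer.MIN_VALUE}));
            assertEquals(1, Maths.indexOfMax(new int[] {Integer.MIN_VALUE, Integer.MAX_VALUE}));
        }
        @Test void nullPointerParameter() {
            assertThrows(IllegalArgumentException.class, () -> Maths.indexOfMax(null));
        }
        @Test void testDesignedToMakeTheComponentFail() {
            assertThrows(IllegalArgumentException.class, () -> Maths.indexOfMax(new int[0]));
        }
    }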

29 Object-oriented integration testing
- OO systems do not have a hierarchical control structure, so conventional top-down and bottom-up integration tests have little meaning
- There is no obvious 'top' to the system for top-down integration and testing
- Approach: identify clusters using knowledge of the operation of objects and of the system features that are implemented by these clusters

30 Approaches to cluster testing
Use-case or scenario testing
- Testing is based on user interactions with the system
- Has the advantage that it tests system features as experienced by users
Thread testing
- Tests the system's response to events, following processing threads through the system
- See the button example
Object interaction testing
- Tests sequences of object interactions that stop when an object operation does not call on services from another object

31 Scenario-based testing
- Identify scenarios from use-cases and supplement them with interaction diagrams that show the objects involved in the scenario
- Consider the scenario in the weather station system where a report is generated

32 Collect weather data

33 Weather station testing
Thread of methods executed:
CommsController:request → WeatherStation:report → WeatherData:summarise
Inputs and outputs:
- input of a report request, with an associated acknowledgement, and a final output of a report
- can be tested by creating raw data and ensuring that it is summarised properly
- use the same raw data to test the WeatherData object (see the sketch below)
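A sketch of this thread test in Java (the three classes are hypothetical minimal stand-ins and the summary format is invented for the example): known raw data goes in at one end, the summary comes out at the other, and the same raw data then tests WeatherData on its own.

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.*;

    class WeatherData {
        private final double[] temps;
        WeatherData(double... temps) { this.temps = temps; }
        String summarise() {
            double min = temps[0], max = temps[0];
            for (double t : temps) { min = Math.min(min, t); max = Math.max(max, t); }
            return "min=" + min + " max=" + max;
        }
    }

    class WeatherStation {
        private final WeatherData data;
        WeatherStation(WeatherData data) { this.data = data; }
        String report() { return data.summarise(); }
    }

    class CommsController {
        private final WeatherStation station;
        CommsController(WeatherStation station) { this.station = station; }
        String request() { return station.report(); }  // start of the thread
    }

    class WeatherStationThreadTest {
        private final WeatherData raw = new WeatherData(10.0, 14.5, 9.0);
        @Test void wholeThreadFromRequestToSummary() {
            assertEquals("min=9.0 max=14.5",
                new CommsController(new WeatherStation(raw)).request());
        }
        @Test void sameRawDataTestsWeatherDataAlone() {
            assertEquals("min=9.0 max=14.5", raw.summarise());
        }
    }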

34 Index
Introduction
Unit testing
Integration testing
System testing
Acceptance, alpha and beta testing
Regression testing
(for both procedure-oriented and object-oriented systems)

35 System testing
- Testing the system as a whole
- Usually the responsibility of an independent testing team
- Often requires many resources, special laboratory equipment and long test times
- Usually based on a requirements document, specifying both functional and non-functional (quality) requirements
- Preparation should begin at the requirements phase, with the development of a master test plan and requirements-based (black-box) tests
- The goal is to ensure that the system performs according to its requirements, by evaluating both functional behaviour and quality requirements such as reliability, usability, performance and security
- This phase of testing is especially useful for detecting external hardware and software interface defects, for example those causing race conditions, deadlocks, problems with interrupts and exception handling, and ineffective memory usage
- Tests implemented for the parts and subsystems may be reused/repeated, and additional tests for the system as a whole may be designed

36 Types of system testing
[Figure: from a requirements document, user manuals, a usage profile and the fully integrated software system, the test team runs the system tests: functional tests, usability and accessibility tests, configuration tests, security tests, performance tests, stress and load tests, reliability and availability tests, recovery tests, ...; the outputs are the test results and a system ready for acceptance test.]
The tests applicable depend on the characteristics of the system and the available test resources.

37 Types of requirements
Functional requirements
- Describe what functions the software should perform
- Compliance is tested at the system level with functional system tests
Quality requirements
- Non-functional in nature (also called non-functional requirements)
- Describe the quality levels expected for the software
- The users and other stakeholders may have objectives for the system in terms of memory use, response time, throughput, etc.
- Should be quantified as much as possible
- Compliance is tested at the system level with non-functional system tests

38 Functional testing
- Ensure that the behaviour of the system adheres to the requirements specification
- Black-box in nature
- Equivalence class partitioning, boundary-value analysis and state-based testing are valuable techniques
- Document and track test coverage with a (tests to requirements) traceability matrix (a sketch follows)
- A defined and documented form should be used for recording test results from functional and other system tests
- Failures should be reported in test incident reports, which are useful for developers (together with test logs) and for managers, for progress tracking and quality assurance purposes
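A traceability matrix can be as simple as the hypothetical sketch below (requirement and test-case names invented for illustration): one row per requirement, one column per test case, a mark where a test covers a requirement, so an untested requirement shows up as an empty row.

    Requirement              TC-01  TC-02  TC-03  TC-04
    R1  user login             X             X
    R2  generate report               X
    R3  export data                          X      X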

39 Performance testing
Goals:
- see if the software meets the performance requirements
- see whether there are any hardware or software factors that impact the system's performance
- provide valuable information to tune the system
- predict the system's future performance levels
Results of performance tests should be quantified, and the corresponding environmental conditions should be recorded.
Resources usually needed:
- a source of transactions to drive the experiments, typically a load generator (see the sketch below)
- an experimental test bed that includes the hardware and software the system under test interacts with
- instrumentation or probes that help to collect the performance data (event logging, counting, sampling, memory allocation counters, etc.)
- a set of tools to collect, store, process and interpret data from the probes
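As an illustration of the load-generator idea, a minimal Java harness (hypothetical: a real performance test would drive the actual system under test, e.g. over HTTP, and record far more than the mean latency):

    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.atomic.AtomicLong;

    public class LoadGenerator {
        public static void main(String[] args) throws InterruptedException {
            final int clients = 50, requestsPerClient = 200;
            ExecutorService pool = Executors.newFixedThreadPool(clients);
            AtomicLong totalNanos = new AtomicLong();
            CountDownLatch done = new CountDownLatch(clients);
            for (int c = 0; c < clients; c++) {
                pool.submit(() -> {
                    for (int i = 0; i < requestsPerClient; i++) {
                        long t0 = System.nanoTime();
                        transaction();                        // the probed operation
                        totalNanos.addAndGet(System.nanoTime() - t0);
                    }
                    done.countDown();
                });
            }
            done.await();
            pool.shutdown();
            long requests = (long) clients * requestsPerClient;
            System.out.printf("mean latency: %.3f ms over %d requests%n",
                totalNanos.get() / (double) requests / 1e6, requests);
        }
        // Placeholder for a call into the real system under test.
        static void transaction() { Math.sqrt(123456.789); }
    }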

40 Stress and load testing
Stress testing: testing a system with a load that causes it to allocate its resources in maximum amounts.
Other definitions (Ron Patton):
- load testing: maximize the load imposed on the system (number of concurrent users, volume of data, ...)
- stress testing: minimize the resources available to the system (processor speed, available memory space, available disk space, ...)
- usually one is interested in a combination of both
The goal is to try to break the system, find the circumstances under which it will crash, and provide confidence that the system will continue to operate correctly (possibly with degraded performance, but with correct functional behaviour) under conditions of stress.
Example: a system is required to handle 10 interrupts/second and the load causes 20 interrupts/second.
Another example: a suitcase being tested for strength and endurance is stomped on by a multi-ton elephant.
Stress testing often uncovers race conditions, deadlocks, depletion of resources in unusual or unplanned patterns, and upsets in normal operation that are not revealed under normal testing conditions.
Supported by many of the resources used for performance testing.

41 Configuration testing
Typical software systems interact with multiple hardware devices, such as disc drives, tape drives and printers.
Objectives, according to [Beizer]:
- show that all the configuration-changing commands and menus work properly
- show that all interchangeable devices are really interchangeable, and that they each enter the proper states for the specified conditions
- show that the system's performance level is maintained when devices are interchanged or when they fail
Types of test to be performed:
- rotate and permute the positions of devices to ensure physical/logical device permutations work for each device
- induce malfunctions in each device, to see if the system properly handles the malfunction
- induce multiple device malfunctions, to see how the system reacts

42 Security testing
Evaluates system characteristics that relate to the availability, integrity and confidentiality of system data and services.
Computer software and data can be compromised by:
- criminals intent on doing damage, stealing data and information, causing denial of service, invading privacy
- errors on the part of honest developers/maintainers (and users?) who modify, destroy or compromise data because of misinformation, misunderstandings and/or lack of knowledge
Both can be perpetrated by those inside and outside of an organization.
Areas to focus on: password checking, legal and illegal entry with passwords, password expiration, encryption, browsing, trap doors, viruses, ...
Usually the responsibility of a security specialist.
See the Segurança em Sistemas Informáticos (Security in Computer Systems) course.

43 Recovery testing
Subject a system to losses of resources in order to determine if it can recover properly from these losses.
Especially important for transaction systems.
Example: loss of a device during a transaction. Tests would determine if the system can return to a well-known state, and that no transactions have been compromised. Systems with automated recovery are designed for this purpose.
Areas to focus on [Beizer]:
- Restart: the ability of the system to restart properly from the last checkpoint after the loss of a device
- Switchover: the ability of the system to switch to a new processor, as a result of a command or of the detection of a faulty processor by a monitor

44 Reliability and availability testing
Software reliability is the probability that a software system will operate without failure under given conditions for a given time interval.
- May be measured by the mean time between failures (MTBF): MTBF = MTTF (mean time to failure) + MTTR (mean time to repair)
Software availability is the probability that a software system is available for use.
- May be measured by the percentage of time the system is up (uptime), e.g. 99.9%
- A = MTTF / MTBF
Low reliability is compatible with high availability when the MTTR is low.
Requires statistical testing based on usage characteristics/profile: during testing, the system is loaded according to the usage profile.
More information: Ilene Burnstein, section 12.5.
Usually evaluated only by high-maturity organizations.
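A worked example with hypothetical figures: if MTTF = 999 hours and MTTR = 1 hour, then MTBF = 999 + 1 = 1000 hours and A = MTTF / MTBF = 999 / 1000 = 99.9%. Note how a short repair time keeps availability high even when failures are relatively frequent.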

45 Usability and accessibility testing
See the Interacção Pessoa Computador (Human-Computer Interaction) course.

46 Index
Introduction
Unit testing
Integration testing
System testing
Acceptance, alpha and beta testing
Regression testing
(for both procedure-oriented and object-oriented systems)

47 Acceptance, alpha and beta testing
For tailor-made software:
- acceptance tests: performed by users/customers; much in common with system tests
For packaged software (market-made software):
- alpha testing: at the developer's site
- beta testing: at user sites
For more information: Ilene Burnstein, section 6.15.

48 Index
Introduction
Unit testing
Integration testing
System testing
Acceptance, alpha and beta testing
Regression testing
(for both procedure-oriented and object-oriented systems)

49 Regression testing
Not really a new level of testing: just the repetition of tests (at any level) after changes, to ensure that previously detected bugs have been corrected and that no new bugs have been introduced.
Requires test automation, good configuration management and problem/bug tracking.
Safe attitude: repeat all tests, and not only the ones that failed in the previous test cycle, because the correction of a bug may introduce new bugs!
Problem: how to decide safely which tests need not be repeated (to accelerate regression testing)?

50 References and further reading
- Software Engineering, Ian Sommerville, 6th Edition, Addison-Wesley, 2000
- Practical Software Testing, Ilene Burnstein, Springer-Verlag, 2003
- Software Testing, Ron Patton, SAMS, 2001

