1
Software Testing, with Elements of Verification and Validation, and Software Product Quality. G. Berio
2
Introductory Topics
Definition(s) of testing
Acceptance testing and defect testing
Unit testing and testing in the large (integration testing and system testing) (testing strategies)
Test results and reaction to the test
Testing, software quality, verification and validation
3
Testing: Definition(s)
Pressman writes: Testing is the process of exercising a program with the specific intent of finding errors prior to delivery to the end user.
Better (SWEBOK): Testing is an activity performed for evaluating product quality, and for improving it, by identifying defects and problems. Software testing consists of the dynamic verification of the behavior of a program on a finite set of test cases, suitably selected from the usually infinite executions domain (inputs), against the expected behavior.
(See also Dijkstra, 1987.)
Note: in the Italian translation of Pressman, Test = Collaudo.
4
Selection of Test Cases
Validation (acceptance) testing (*)
– To demonstrate to the developer and the customer that the software meets its requirements;
– A successful test shows that the software operates as intended.
Defect testing
– To discover defects in the software where its behavior is incorrect or not in conformance with its specification;
– A successful test is a test that makes the software perform incorrectly and so exposes a defect in the software itself.
(*) Italian translation of Pressman: Validation testing = Collaudo di convalida or Collaudo di validazione
5
What is a Good Test Case?
A good defect test case has a high probability of finding an error (failure), i.e. an unexpected and unwanted behavior.
A good test case is not redundant.
A good test case should be neither too simple nor too complex.
A good test case should normally be repeatable (i.e. lead to the same results).
Elaborated from Pressman
6
Symptoms (Failure) & Causes (Fault, Defect)
[Diagram: a symptom (failure) connected to its cause (fault, defect)]
The symptom and the cause may be geographically separated.
The symptom may disappear when another problem is fixed.
The cause may be due to a combination of non-errors.
The cause may be due to a system or compiler error.
The cause may be due to assumptions that everyone believes.
The symptom may be intermittent.
Elaborated from Pressman
7
Testing and Lack of "Continuity"
Testing examines the behavior of software code only on a set of test cases.
It is impossible to extrapolate the behavior of software code from a finite set of test cases.
There is no continuity of behavior:
– the code can exhibit correct behavior in infinitely many cases, but may still be incorrect in some cases.
8
When to Stop Defect Testing?
If defect testing (on a set of test cases) does not detect failures, we cannot conclude that the software is defect-free.
Still, we need to do testing driven by sound and systematic principles…
9
What is the test executed on?
On the code. But which code? The code of the units under test, or sets of units suitably aggregated, or the whole software installed in its own environment.
10
A Unit under Test is not (necessarily) a Component
[Diagram: a component containing the units Calcolare Costo, ApplicaOpzioniCosto, Prossima richiesta?(), Creare()]
11
Testing Strategy
We begin by testing-in-the-small (or unit test) and move to testing-in-the-large.
The software architecture is the typical way for incrementally driving the testing-in-the-large.
For conventional components:
– The module or part of a module (unit) is the initial focus (in the small)
– Integration of modules
For object-oriented components:
– The OO class or part of a class (unit), which encompasses attributes and operations, is the initial focus (in the small)
– Integration is communication and collaboration among classes
Elaborated from Pressman
12
Software Testing Strategy
[Diagram: unit test, integration test, validation (acceptance) test, system test; unit testing is in the small (component testing), the others are in the large; a "defect testing" label groups the non-acceptance tests]
Elaborated from Pressman
13
Test Cases and Expected Behavior in Defect Testing
The expected behavior of a unit should be obtained from:
– the requirements specification.
However, the components introduced during design do not have a direct link to the requirements specification; nevertheless, in the design they have their own (more or less precise) specification, as opposed to the code:
– Statecharts
– Pre/post conditions
– Descriptions of the input and output types
– Sequence diagrams
– Etc.
The component specification must include the specification of the unit on which the test is to be executed (or allow that unit specification to be derived); from the unit specification one should obtain the test cases and the expected behavior.
The expected behavior of a unit is found in the component specification only if the component specification is well made, i.e. testability is guaranteed during design (indeed, in testing theory one often speaks of an ORACLE to indicate that the expected behavior is simply known).
14
Who Performs Defect Testing?
The developer: understands the system, but will test "gently" and is driven by "delivery".
The independent tester: must learn about the system, but will attempt to break it and is driven by quality.
From Pressman
15
Test Cases and Expected Behavior in Acceptance Testing
To be performed by the customer or end users.
The expected behavior (of the whole software) should be fixed from the beginning: a special section of the requirements document should explicitly be devoted to how to perform acceptance testing (or to which test cases to use for acceptance).
16
Example
[Diagram: requirements-specification use cases "fermare" and "partire", and an acceptance use case "Richiedi ed Ottieni Ascensore", which describes the test case and also the expected behavior. Pre: the user presses the button and it is the first request for the elevator. Post: the elevator arrives at 1. The request ("richiesta") is the test case; the arrival is the expected behavior.]
17
Effort to Repair Software (defects detected in different activities)
18
Effort to Repair Software
Testing should only confirm that the work performed has been done right.
It remains easier to build the software correctly in the first place, the main issue of Software Engineering… Eliciting and specifying requirements and moving systematically to design are also ways towards building correct software.
So, because of the high effort needed to repair defects, it is necessary to confirm at every point that the work is being performed in the right direction.
19
Context
[Diagram: the process framework, with framework activities Communication, Planning, Modeling (Requirements analysis, Design), Construction (Code generation, Testing), Deployment; each framework activity comprises work tasks, work products, milestones & deliverables and QA checkpoints; umbrella activities and system engineering surround the framework.]
Preventing non-quality: quality forecasting on work products (intermediate products).
Evaluating the quality: quality assessment on deliverables (part of the software product, delivered to the customer).
20
The Software Product
[Diagram contrasting the software product (prodotto software) with work products (prodotti del lavoro); items shown: User manual, Code, Installed Code, Software and System Requirement specifications, Design Models, Technical documentation, Expected Quality Attributes, Requirement Document, Development Plan, Project Reports.]
21
Testing, Quality, Verification and Validation
Quality assessment
– Software product: the produced code:
  – Testing (oriented to correctness, reliability, robustness, safety, performance)
  – Verification and Validation (oriented to correctness, reliability, robustness, safety, performance)
  – …any other quality attribute
Quality forecasting
– Intermediate products: the analysis and design models:
  – Verification and Validation (oriented to correctness, reliability, robustness, safety, performance)
  – …any other quality attribute
22
Verification and Validation
Verification generally applies to the models built during software development (for example, the requirements specification, the design model, etc.), and not necessarily (only) to the code; in general it applies to the intermediate products (e.g. whether the design model is equivalent to the analysis model, whether there are formal errors in the design model, whether the analysis model is consistent with the requirements document)
– (build the (work) product in the right way)
Validation includes assessing whether the requirements have been well understood (called requirements validation by Pressman) and, at any time, whether the intermediate products correspond to what the customer asked for
– (build the right (work) product)
23
Testing, V&V
Testing the code implies that the code is executed, with the aim of demonstrating either the existence of defects or the absence of defects in some predetermined cases.
Verification and validation (V&V) denote a set of activities carried out at several points of requirements engineering and design engineering, not only on the code.
Verification and validation may use static analysis techniques (automatic or manual) on the code, i.e. the code is not executed; but more generally they are carried out on the intermediate products. Sometimes the goal of verification is to prove correctness, that is, to prove properties that formally translate the quality attributes under consideration.
Since testing normally requires executing the code, it can be regarded as coinciding with the dynamic verification and validation techniques of software (but V&V is more general…).
Verification and validation are, in turn, part of product quality assurance, strongly focused on a few quality attributes (correctness, reliability, robustness, safety and performance).
Note: in the Italian translation of Pressman, Convalida = Validazione.
24
Verification and Validation Techniques
Dynamic --- they execute the software code, hence they are also testing techniques, and are classified as:
– Black box
– White box (or glass box)
Static --- they do not execute the software code, hence they are typical of V&V and, in turn, part of quality assurance; they are divided into:
– Automated
  Model checking
  Correctness proofs
  Symbolic execution
  Data flow analysis
– Manual (formal technical reviews)
  Inspection
  Walkthrough
Static and dynamic techniques can be applied together (i.e. they are not alternatives); sometimes the result of a static technique can be used as the starting point for applying a dynamic technique.
25
Summary
[Diagram: the process activities Communication, Planning, Modeling (Requirements analysis, Design), Construction (Code generation, Testing on the code), Deployment, annotated as follows.]
Testing of the code (white box and black box): oriented to correctness, reliability, robustness, safety and performance of the code.
Verification and Validation on work products and deliverables (verification on code = static techniques): oriented to correctness, reliability, robustness, safety and performance of work products or deliverables such as the analysis model, the design model, the code.
Quality forecasting on work products and quality assessment on deliverables: any quality attribute of the code and of other deliverables.
26
Foundations of Code Testing
27
Definitions (1)
P (the code), D (input domain), R (output domain):
– P: D → R; P may be defined as a partial function
The Expected Behavior is defined as a relation EB ⊆ D × R:
– P(d) behaves as expected iff (d, P(d)) ∈ EB
– P behaves as expected iff every P(d) behaves as expected
28
Definitions (2)
Test case t (in Italian, caso di test)
– Any element of D
Defect testing is successful if at least one of the planned test cases exhibits an unexpected behavior.
Acceptance testing is successful if, for every planned test case t, P(t) behaves as expected.
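A minimal sketch of these definitions in Python, under the assumption of a toy program P and an expected behavior EB (all names and values are illustrative, not part of the course material):

# P: D -> R modelled as a Python function, EB as a predicate over pairs (d, P(d)).
def P(d: int) -> int:
    # Toy program under test: intended to compute the absolute value,
    # but deliberately wrong for negative inputs.
    return d if d >= 0 else d + 1

def expected(d: int, r: int) -> bool:
    # EB: the pair (d, r) belongs to the expected behavior iff r == |d|.
    return r == abs(d)

def defect_testing_successful(test_cases) -> bool:
    # Defect testing succeeds if at least one test case shows unexpected behavior.
    return any(not expected(t, P(t)) for t in test_cases)

def acceptance_testing_successful(test_cases) -> bool:
    # Acceptance testing succeeds if every test case behaves as expected.
    return all(expected(t, P(t)) for t in test_cases)

print(defect_testing_successful([0, 3, -2]))   # True: -2 exposes the defect
print(acceptance_testing_successful([0, 3]))   # True: these cases behave as expected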
29
Definitions (3)
Ideal set of test cases (defect testing)
– If P does not behave as expected, there is at least one test case t in the set such that P(t) does not behave as expected
– If P: D → R corresponds to the normal behavior of the algorithm programmed in P, there is no algorithm for constructing an ideal set of test cases
However, a set of test cases that approximates an ideal set should nevertheless be defined, by defining which test cases belong to the set.
30
Test Case Design for Defect Testing
"Bugs lurk in corners and congregate at boundaries..." (Boris Beizer)
OBJECTIVE: to discover defects
COVERAGE: in a complete manner
CONSTRAINT: with a minimum of effort and time
From Pressman
31
Software Defect Testing Techniques
[Diagram: techniques (practices): white-box and black-box]
32
Conventional Unit Defect Testing
33
Black-box techniques derive test cases from the expected behavior on a given input domain
34
Black-Box Testing
The unit is viewed as a black box, which accepts some inputs and produces some outputs.
Test cases are derived solely from the expected behavior, without knowledge of the internal unit code.
The main problem is to design (a minimal set of) test cases that increase the probability of finding failures, if any.
35
Black-Box Test-Case Design Techniques
Equivalence class partitioning
Boundary value analysis
Cause-effect graphing
Other
36
Equivalence Class Partitioning
Identify the unit input domain and partition it into equivalence classes (i.e. assuming that data in a class are treated identically by the unit code)
– The basis of this technique is that a test of a representative value of each class is equivalent to a test of any other value of the same class.
Identify valid as well as invalid equivalence classes (valid equivalence classes are usually defined in terms of a given unit specification, stating how certain inputs are treated by the unit code and for which the expected behavior is known), i.e. D = DV ∪ DIV
For each equivalence class, generate one test case to exercise (execute) the unit with one input representative of that class.
37
Example
Possible input for x of type INT, but with the additional specification: 0 <= x <= max
Valid equivalence class for x: 0 <= x <= max
Invalid equivalence classes (with respect to the unit specification) for x: x < 0, x > max
3 test cases can be generated.
The additional specification might be part of the Expected Behavior (parameter types usually give an incomplete idea of the possible inputs); additionally, testing the invalid classes also evaluates robustness; finally, integration testing is simplified if we also know how a unit behaves in unexpected situations.
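As a concrete sketch in Python, one representative test case per equivalence class could look as follows; the unit process(x) and the value of MAX are assumptions introduced only for illustration:

# One representative test case per equivalence class (hypothetical unit).
MAX = 100  # assumed upper bound from the specification 0 <= x <= MAX

def process(x: int) -> int:
    # Hypothetical unit under test: must reject inputs outside [0, MAX].
    if x < 0 or x > MAX:
        raise ValueError("input out of range")
    return x * 2  # arbitrary valid-case behavior

# One representative per class: valid [0, MAX], invalid x < 0, invalid x > MAX.
representatives = {"valid": 50, "invalid_low": -7, "invalid_high": MAX + 10}

for name, x in representatives.items():
    try:
        print(name, "->", process(x))
    except ValueError as e:
        print(name, "-> rejected:", e)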
38
Guidelines for Identifying Equivalence Classes
Input specification: a range of values (e.g. 1 - 200). Valid eq classes: one (a value within the range). Invalid eq classes: two (one outside each end of the range).
Input specification: a number N of valid values. Valid eq classes: one. Invalid eq classes: two (less than N, more than N).
Input specification: a set of input values, each handled differently by the program (e.g. A, B, C). Valid eq classes: one per value. Invalid eq classes: one (e.g. any value not in the valid input set).
39
Guidelines for Identifying Equivalence Classes
Input specification: any other condition (e.g. ID_name must begin with a letter). Valid eq classes: one (e.g. it is a letter). Invalid eq classes: one (e.g. it is not a letter).
If you know that elements in an equivalence class are not handled identically by the program, split the equivalence class into smaller equivalence classes.
In very special cases, some of the generated classes cannot be tested (just because they are explicitly forbidden by the program).
40
Identifying Test Cases for Equivalence Classes
Assign a unique identifier to each equivalence class.
Until all valid equivalence classes have been covered by test cases, write a new test case covering as many of the uncovered valid equivalence classes as possible.
Cover each invalid equivalence class with a separate test case.
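A minimal sketch of this covering procedure in Python, assuming each candidate input has already been labelled with the valid classes it covers (all names and class labels are illustrative):

# Choose test cases that cover the valid equivalence classes greedily.
candidates = {
    "t1": {"V1", "V2"},
    "t2": {"V2", "V3"},
    "t3": {"V3"},
}
valid_classes = {"V1", "V2", "V3"}

chosen, uncovered = [], set(valid_classes)
while uncovered:
    # Pick the candidate covering the most still-uncovered valid classes.
    best = max(candidates, key=lambda t: len(candidates[t] & uncovered))
    if not candidates[best] & uncovered:
        break  # no remaining candidate covers a new class
    chosen.append(best)
    uncovered -= candidates[best]

print(chosen)  # e.g. ['t1', 't2'] covers V1, V2, V3
# Invalid classes (not shown) would each get their own separate test case.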
41
Boundary Value Analysis
Design test cases that exercise values lying at the boundaries of an input equivalence class, and for situations just beyond the ends.
Also identify output equivalence classes, and write test cases that generate output at the boundaries of the output equivalence classes, and just beyond the ends.
Example: input specification 0 <= x <= max
Test cases with values: 0, max (valid inputs); -1, max+1 (invalid inputs)
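Continuing the hypothetical process(x) unit sketched earlier, the boundary-value inputs for 0 <= x <= MAX would be:

# Boundary values for the specification 0 <= x <= MAX (hypothetical unit).
MAX = 100
boundary_inputs = [0, MAX]        # valid boundaries
just_beyond = [-1, MAX + 1]       # invalid values just beyond each end

for x in boundary_inputs + just_beyond:
    valid = 0 <= x <= MAX
    print(f"x = {x:4d}  expected: {'accepted' if valid else 'rejected'}")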
42
Why Testing Boundary Values
Equivalence classes partition the input domain into classes, assuming that behavior is "similar" for all data within a class.
Some typical code defects, however, happen to lie exactly at the boundary between different classes.
43
Example (from the requirements and design)
44
Application of the equivalence class partitioning technique
45
Valid and invalid classes
47
Invalid classes; valid ones already covered
48
Valid and invalid classes
49
50
[Figure slide fragment: "1995, speciale"]
51
Application of boundary value analysis
55
Black box test case design with pre-post conditions (see Ghezzi et al. for more details)
56
Pre-post conditions of the insertion of an invoice record in a file
x: Invoice, f: Invoice_File
Pre: { sorted_by_date(f) and not exist j, k (j ≠ k and f(j) = f(k)) }
insert(x, f)
Post: { sorted_by_date(f)
and for all k (old_f(k) = z implies exists j (f(j) = z))
and for all k ((f(k) = z and z ≠ x) implies exists j (old_f(j) = z))
and (exists j (f(j).date = x.date and f(j) ≠ x) implies j < pos(x, f))
and (result iff x.customer belongs_to customer_file)
and (warning iff (x belongs_to old_f or x.date < current_date or ....)) }
57
Rewrite them in a more convenient way…
TRUE implies sorted_by_date(f)
and for all k (old_f(k) = z implies exists j (f(j) = z))
and for all k ((f(k) = z and z ≠ x) implies exists j (old_f(j) = z))
and (x.customer belongs_to customer_file) implies result
and not (x.customer belongs_to customer_file and...) implies not result
and x belongs_to old_f implies warning
and x.date < current_date implies warning
and....
Then apply partitioning to the post-conditions…
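A sketch of the idea in Python: each implication in the rewritten post-condition suggests an equivalence class, and the clauses can be checked as runtime assertions over a toy in-memory "invoice file". All names, types and the simplified insert logic here are assumptions for illustration, not the actual specification above:

# Checking some post-condition clauses as assertions (assumed names only).
from dataclasses import dataclass

@dataclass(frozen=True)
class Invoice:
    date: int          # simplified date
    customer: str

customer_file = {"acme", "globex"}

def insert(x: Invoice, f: list):
    """Insert x keeping f sorted by date; return (new_f, result, warning)."""
    result = x.customer in customer_file
    warning = (x in f) or (x.date < 0)   # stand-in for "x.date < current_date or ..."
    new_f = sorted(f + [x], key=lambda inv: inv.date) if result else list(f)
    return new_f, result, warning

def check_post(old_f, x, new_f, result, warning):
    assert all(new_f[i].date <= new_f[i + 1].date for i in range(len(new_f) - 1)), "sorted_by_date"
    assert all(z in new_f for z in old_f), "old elements preserved"
    assert result == (x.customer in customer_file), "result clause"

# One test case per equivalence class suggested by the clauses:
for x in [Invoice(5, "acme"),      # known customer -> result expected True
          Invoice(5, "unknown"),   # unknown customer -> result expected False
          Invoice(-1, "acme")]:    # "old" date -> warning expected True
    old_f = [Invoice(1, "acme"), Invoice(3, "globex")]
    new_f, result, warning = insert(x, old_f)
    check_post(old_f, x, new_f, result, warning)
    print(x, result, warning)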
58
Coverage in Black-Box Testing
[Diagram: the requirements specification, the component/unit specification, the analysis model and the design model define a partition of the possible inputs and the expected outputs]
EB = Possible Inputs + Expected Outputs
59
Black-Box vs White-Box
Black-box testing can expose defects such as missing functionality, or functionality not behaving according to its expected behavior
– it tests what the unit is supposed to do
– It is less suitable for exposing defects such as unreachable code, hidden functionality (i.e. what is un-expected), run-time errors raised by the code
White-box testing can expose defects in the unit code, even disregarding its expected behavior
– it tests what the unit does
– It is suitable for exposing (especially unexpected) defects in the code (and sometimes in the design), but cannot find missing or incomplete functionality
Therefore, both techniques are required.
60
Black-Box vs White-Box
Black-box: tests the unit code against its expected behavior (EB); can find missing and incomplete behavior (EB - P); needs the expected behavior (e.g. the module specification); cannot find unexpected behavior (P - EB).
White-box: tests the (control) structure of the unit code (P); cannot find missing or incomplete behavior (EB - P); does not necessarily need the expected behavior (it tries to find unexpected behavior); can find unexpected behavior (P - EB).
where P: D → R and EB ⊆ D × R
61
White-box methods derive test cases from the unit code
62
White Box: Exhaustive Testing
[Flow-graph figure: a program with a loop executed fewer than 20 times]
There are 10^14 possible paths! If we execute one test per millisecond, it would take 3,170 years to test this program!!
Coverage is still important, as in black-box testing; however, here we require coverage of the executions of the program (control flow).
Elaborated from Pressman
63
White Box: Selective Testing
[Figure: the same flow graph with one selected path highlighted]
Coverage of control flow = executed (exercised) paths
Elaborated from Pressman
64
Coverage Types
Statement coverage
Decision coverage (edge coverage)
Condition coverage
Basis path coverage
Path coverage (with loop coverage)
65
Statement Coverage
Select test cases such that (coverage) every statement in the unit (or whatever P is) is executed at least once by some test case.
An input datum executes many statements: try to minimise the number of test cases while still preserving the coverage.
66
Example
read (x);
read (y);
if x > 0 then
  write ("1");
else
  write ("2");
end if;
if y > 0 then
  write ("3");
else
  write ("4");
end if;
{⟨x=1, y=1⟩, ⟨x=1, y=-1⟩, ⟨x=-1, y=1⟩, ⟨x=-1, y=-1⟩} covers all statements
{⟨x=1, y=-1⟩, ⟨x=-1, y=1⟩} is minimal
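A runnable sketch of the same example in Python, instrumented so that each test case records which write statements it executes (statement coverage); the instrumentation style is only illustrative:

# Statement coverage of the example, tracked with a set of executed statements.
def program(x: int, y: int, executed: set) -> None:
    if x > 0:
        executed.add("write 1")
    else:
        executed.add("write 2")
    if y > 0:
        executed.add("write 3")
    else:
        executed.add("write 4")

all_statements = {"write 1", "write 2", "write 3", "write 4"}
covered = set()
for x, y in [(1, -1), (-1, 1)]:   # the minimal set of test cases
    program(x, y, covered)

print(covered == all_statements)  # True: two test cases cover all four writes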
67
Weakness of Statement Coverage
if x < 0 then
  x := -x;
end if;
z := x;
{⟨x = -2⟩} covers all statements,
but it does not exercise the case where x is positive and the then branch is not entered.
68
Example
void function eval (int A, int B, int X) {
  if (A > 1) and (B = 0) then X = X / A;
  if (A = 2) or (X > 1) then X = X + 1;
}
Statement coverage test case:
1) A = 2, B = 0, X = 3 (X can be assigned any value)
69
Decisions and Conditions
void function eval (int A, int B, int X) {
  if (A > 1) and (B = 0) then X = X / A;
  if (A = 2) or (X > 1) then X = X + 1;
}
[Figure annotations: "(A > 1) and (B = 0)" is a decision; "(A > 1)" is a condition in the decision; "(B = 0)" is another condition in the decision (its negation is (B <> 0))]
Decisions are made of several conditions, usually combined by logic operators.
70
Decision and Condition Coverage
Decision coverage – write test cases that exercise (coverage) every decision at least once.
Condition coverage – write test cases that exercise (coverage) every condition at least once – these test cases may not always satisfy decision coverage.
71
Control Graph with Decisions: Construction Rules
[Figure: graph construction rules for: I/O, assignment, or procedure call; if-then-else; if-then; while loop; two sequential statements (composing subgraphs G, G1, G2)]
Slightly different from the rules in Pressman
72
Simplification: a sequence of edges can be collapsed into just one edge
73
Example: Euclid's algorithm
begin
  read (x);
  read (y);
  while x ≠ y loop
    if x > y then
      x := x - y;
    else
      y := y - x;
    end if;
  end loop;
  gcd := x;
end;
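A runnable sketch of the same algorithm in Python (assuming positive integer inputs, as in the pseudocode):

# Euclid's algorithm by repeated subtraction, following the slide's pseudocode.
def gcd(x: int, y: int) -> int:
    # Assumes x and y are positive integers.
    while x != y:
        if x > y:
            x = x - y
        else:
            y = y - x
    return x

print(gcd(12, 18))  # 6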
74
Example
void function eval (int A, int B, int X) {
  if (A > 1) and (B = 0) then X = X / A;
  if (A = 2) or (X > 1) then X = X + 1;
}
[Figure: the flow chart and the control graph of eval, with edges a, b, c, d, e; the T branch of "A > 1 and B = 0" leads to X = X/A, the T branch of "A = 2 or X > 1" leads to X = X+1]
Decision coverage test cases:
1) A = 3, B = 0, X = 3 (path acd)
2) A = 2, B = 1, X = 1 (path abe)
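A sketch of eval translated to Python, recording the outcome of each decision so that decision coverage of the two test cases can be checked mechanically (the instrumentation is illustrative, not part of the original example):

# eval() with each decision outcome recorded, to check decision coverage.
def eval_unit(A: int, B: int, X: int, outcomes: set) -> int:
    d1 = (A > 1) and (B == 0)
    outcomes.add(("decision 1", d1))
    if d1:
        X = X // A
    d2 = (A == 2) or (X > 1)
    outcomes.add(("decision 2", d2))
    if d2:
        X = X + 1
    return X

outcomes = set()
eval_unit(3, 0, 3, outcomes)   # test case 1: first decision True, second False
eval_unit(2, 1, 1, outcomes)   # test case 2: first decision False, second True
print(len(outcomes) == 4)      # True: both decisions exercised both ways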
75
Example (with the control graph)
[Figure: the control graph of eval with edges a, b, c, d, e; the edges are labelled with the decision outcomes "A > 1 and B = 0" / "A <= 1 or B != 0" and "A = 2 or X > 1" / "A != 2 and X <= 1"]
Decision coverage = exercising at least once each edge of the control graph
76
Weakness of Decision Coverage
if x ≠ 0 then
  y := 5;
else
  z := z - x;
end if;
if z > 1 then
  z := z / x;
else
  z := 0;
end if;
A pair of test cases such as {⟨x=0, z=1⟩, ⟨x=1, z=3⟩} causes the execution of all decisions, but fails to expose the risk of a division by zero (x = 0 and z > 1).
77
Control Graph with Conditions
[Figure: the first decision of eval in two forms. With decisions: a single node "A > 1 and B = 0" with T edge (c) leading to "X = X/A" and F edge (b). With conditions: separate nodes for the conditions, with edges labelled A > 1 / A <= 1 and B = 0 / B != 0, the path through A > 1 and B = 0 leading to "X = X/A".]
78
Example
Condition coverage test cases must cover the conditions:
A > 1, A <= 1, B = 0, B != 0
A = 2, A != 2, X > 1, X <= 1
Test cases (they do not satisfy decision coverage and do not cover all edges of the control graph with decisions or of the one extended with conditions):
1) A = 1, B = 0, X = 3
2) A = 2, B = 1, X = 1
[Figure: flow chart and control graph of eval as before]
79
Decision-Condition and Multiple-Condition Coverage
Decision-condition coverage – write test cases such that each condition in a decision is exercised at least once and each decision is exercised at least once – write test cases such that each edge of the control graph with conditions is exercised at least once.
Multiple-condition coverage – write test cases that exercise all possible combinations of conditions within every decision.
80
Example
Decision-condition coverage test cases must cover the conditions:
A > 1, A <= 1, B = 0, B != 0
A = 2, A != 2, X > 1, X <= 1
and the decisions:
(A > 1 and B = 0), (A <= 1 or B != 0)
(A = 2 or X > 1), (A != 2 and X <= 1)
Test cases:
1) A = 2, B = 0, X = 4
2) A = 1, B = 1, X = 1
[Figure: flow chart of eval as before]
81
Example
Multiple-condition coverage must cover the condition combinations:
1) A > 1 and B = 0      5) A = 2 and X > 1
2) A > 1 and B != 0     6) A = 2 and X <= 1
3) A <= 1 and B = 0     7) A != 2 and X > 1
4) A <= 1 and B != 0    8) A != 2 and X <= 1
Test cases (they cover the possible combinations of conditions, at least in this example, but this is not always the case):
1) A = 2, B = 0, X = 4 (covers 1, 5)
2) A = 2, B = 1, X = 1 (covers 2, 6)
3) A = 1, B = 0, X = 2 (covers 3, 7)
4) A = 1, B = 1, X = 1 (covers 4, 8)
(To cover the possible combinations of decision outcomes you should combine the pairs as T,T; T,F; F,T and F,F, and introduce new test cases accordingly, e.g. A = 3, B = 0, X = 1 for "A > 1 and B = 0" together with "A != 2 and X <= 1".)
82
Path Coverage
Select test cases that traverse all paths from the initial to the final node of P's control graph.
However, the number of paths may be too large (in the case of loops):
– additional constraints must be provided
83
Basis Path Testing
First, compute the cyclomatic complexity: the number of simple decisions + 1, or the number of enclosed regions + 1.
[Figure: a control graph with enclosed regions r1, r2, r3; referring to the figure, V(G) = 4]
84
Cyclomatic Complexity
A number of industry studies have indicated that the higher V(G), the higher the probability of defects.
[Figure: histogram of the number of modules by V(G); modules in the higher V(G) range are more defect prone]
From Pressman
85
Basis Path Testing
1. Draw the control graph of the program from the program's detailed design or code.
2. Compute the cyclomatic complexity V(G) of the control graph using any of the formulas:
   V(G) = #Edges - #Nodes + 2, or
   V(G) = #regions in the control graph + 1
3. Write at least V(G) test cases.
From Pressman
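A small check of the first formula in Python on a hypothetical control graph (an if-then followed by another if-then, roughly like eval above; node names are illustrative):

# V(G) = #Edges - #Nodes + 2 on a small hypothetical control graph.
edges = [("n1", "n2"), ("n1", "n3"),   # first decision: T and F branches
         ("n2", "n3"),                 # rejoin
         ("n3", "n4"), ("n3", "n5"),   # second decision: T and F branches
         ("n4", "n5")]                 # rejoin at the exit node
nodes = {n for e in edges for n in e}

v_g = len(edges) - len(nodes) + 2
print(v_g)  # 3 = number of simple decisions (2) + 1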
86
White-Box Testing Review
White-box testing is concerned with the degree to which test cases exercise or cover the logic (source code) of the program. This kind of testing is especially suitable for detecting unexpected behavior!
White-box test case design:
Statement coverage
Decision coverage
Condition coverage
Decision-condition coverage
Multiple condition coverage
Basis path testing
Loop testing
Data flow testing
87
Loop Testing
[Figure: four kinds of loops: simple loops, nested loops, concatenated loops, unstructured loops]
From Pressman
88
Loop Testing: Simple Loops
Minimum conditions for simple loops:
1. Skip the loop entirely
2. Only one pass through the loop
3. Two passes through the loop
4. m (average) passes through the loop, m < n
5. n (and n-1, too) passes through the loop
where n is the (expected) maximum number of allowable passes (if it exists)
Elaborated from Pressman. Changes from the textbook: the n+1 case mentioned in the book has been left out; the average case has been added.
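A small illustrative sketch, assuming a simple loop with a known maximum of n allowable passes (n is a placeholder):

# Iteration counts to exercise for a simple loop with at most n passes.
def simple_loop_test_counts(n: int) -> list:
    m = n // 2                      # an "average" number of passes, m < n
    return [0, 1, 2, m, n - 1, n]   # skip, one pass, two passes, average, n-1, n

print(simple_loop_test_counts(20))  # [0, 1, 2, 10, 19, 20]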
89
Loop Testing: Nested and Concatenated Loops
Nested loops:
1. Start at the innermost loop. Set all outer loops to their minimum iteration parameter values.
2. Test the min+1, typical, max-1 and max values for the innermost loop, while holding the outer loops at their minimum values.
3. Move out one loop and set it up as in step 2, holding all other inner loops at typical values. Continue this step until the outermost loop has been tested.
Concatenated loops:
If the loops are independent of one another, then treat each as a simple loop; else* treat them as nested loops.
* for example, when the final loop counter value of loop 1 is used to initialize loop 2.
From Pressman
90
Remember: White-Box and Black-Box Testing Are Not Alternatives!
What if the code omits the implementation of some part of the expected behavior? White-box test cases derived from the code will ignore that part of the expected behavior!
Perform black-box testing as well.
91
Further Problems in White-Box Testing
Unreachable statements, decisions, etc.
Easier case:
Read(x);
if x > 0 then
  if x < 0 then ...    (the inner branch is unreachable)
Complex case:
Read(x);
if x > 0 then
  x := f(x);
  if x > 0 then ...    (unreachable if f(x) always assigns negative values to x)