Building Trust into OO Components Using a Genetic Analogy
Yves Le Traon and Benoit Baudry, 2002
Yves.Le_Traon@irisa.fr
Summary
- The problem: trustable components
- Component qualification with mutation analysis
- Genetic algorithms for test enhancement
- A new model: bacteriological algorithms for test enhancement
- A mixed approach and tuning of the parameters
- Conclusion
Testing for trust
- Object-oriented paradigm: reusing components safely requires trust
- The better you test, the more trustable your component is
Testing for trust
- Specification vs. implementation
- V & V: checking the implementation against the specification
- Trust is based on their consistency
- Specifications are derived as executable contracts
Testing for trust
We use mutation analysis:
- we inject simple errors into the class, creating mutants
- we write tests to detect the mutants: the tests kill the mutants
- two ways to detect mutants: by comparing behaviors, or with an oracle function (the contracts)
Trusting Components?
- Turning a basic level of trust into a high level of trust
- "Replay" tests; context-adaptive test reuse
- Component test quality drives component trust: a test quality estimate yields a component trust estimate
The Design of a Trustable Component
- Specifying the component with design-by-contract: what is the accepted domain of use? what does the component execution guarantee? This yields executable contracts.
- Implementing it, then validating it: build unit tests, estimate test quality, debug until the implementation is OK
- Embedding the test suite (self-tests) turns "no trust" into trust
How Can You Trust a Component?
- Specification: the contract between the client and the component
- Implementation
- V & V: checking the implementation against the specification with an oracle (e.g., embedded tests)
- Measure of trust based on consistency
Trusting Components? The user's point of view: components "off-the-shelf"?
Trusting Components? The user's point of view: "off-the-shelf" components come with self-tests that can be "replayed" in the new context (the figure shows components qualified at 85%, 55% and 100%).
Trusting Components?
- Plugging the component into the system: each component carries its specification, implementation, tests and self-tests
- Continuity strategy: test the dependencies between a component and its environment
Principle & Applications
- Embed the test suite inside the component: it implements a SELF_TESTABLE interface
- Component unit test suite = test data + activator + oracle (mostly executable assertions from the component specification)
- Useful in conjunction with: estimating the quality of the component; integration testing
Example with UML

SET_OF_INTEGER
  empty : Boolean
  full : Boolean
  has (x : Integer) : Boolean
  put (x : Integer) {pre: not full} {post: has (x); not empty}
  prune (x : Integer) {pre: has (x); not empty} {post: not has (x); not full}
Example with Eiffel

class SET_OF_INTEGERS
      -- Implements a simplified set of integers.
inherit
   SELF_TESTABLE
creation
   make
feature -- Creation
   make
         -- Allocate a set of integers.
      ensure
         empty: empty
feature {ANY} -- Status report
   empty: BOOLEAN
         -- Is the set empty?
   full: BOOLEAN
         -- Is the set full?
feature {ANY} -- Access
   has (x: INTEGER): BOOLEAN
         -- Is x in the set?
feature {ANY} -- Element changes
   put (x: INTEGER)
         -- Put x into the set.
      require
         not_full: not full
      ensure
         has: has (x)
         not_empty: not empty
   prune (x: INTEGER)
         -- Remove x from the set.
      require
         has: has (x)
         not_empty: not empty
      ensure
         not_has: not has (x)
         not_full: not full
feature {TEST_DRIVER} -- For testing purposes only
   tst_add
         -- Test 'add'.
   tst_prune
         -- Test 'prune' and 'add'.
   tst_suite
         -- Called by template method 'self_test'
         -- to execute all tst_* methods.
end -- class SET_OF_INTEGERS
Execution of class SET_OF_INTEGERS Self Test

Test sequence nb. 1: is 'add' usable? (put - empty - full)
  - empty at create... Ok
  - not empty after put... Ok
  ...
>>>> DIAGNOSIS on class SET_OF_INTEGERS <<<<
test(s) are OK.
Method call statistics: has: 37, full: 33, prune: 3, empty: 47, index_of: 40, put: 14 ...
Component qualification with mutation analysis
- Mutation operators
- Case study
- The problem of automating the test enhancement process
The Triangle View of a Component
- Specification: the contract between the client and the component
- Implementation
- V & V: checking the implementation against the specification with an oracle (e.g., embedded tests)
- Measure of trust based on consistency
Assessing Consistency
- Assume the component passes all its tests according to its specification
- The component's implementation quality is linked to its test & specification quality
- How do we estimate test & spec consistency?
  - Introduce faults into the implementation: mutant implementations
  - Check whether the tests catch the faults: the tests kill the mutants
Limited Mutation Analysis

put (x : INTEGER) is
      -- put x in the set
   require
      not_full: not full
   do
      1: if not has (x) then
      2:    count := count + 1
      3:    structure.put (x, count)
         end -- if
   ensure
      has: has (x)
      not_empty: not empty
   end -- put

Example mutant: Remove-inst deletes statement 1.
Overall Process
1. Generate the mutants of class A (mutantA1 ... mutantA6)
2. Execute Selftest A against each mutant
3. Diagnosis:
   - mutantAj killed (error detected): SelfTest OK
   - mutantAj alive (error not detected), three cases:
     1. equivalent mutant: discard it
     2. incomplete specification: add contracts to the specification
     3. otherwise: enhance the Selftest
Mutant generation and test execution are automated; the diagnosis is not.
About Living Mutants
What if a mutant is not killed?
- Tests inadequate => add more tests
- Specification incomplete => add precision
- Equivalent mutant => remove the mutant (or the original!), e.g., x<y ? x : y vs. x<=y ? x : y
Component qualification
Trust estimate: the mutation score

  MS = d / m

where d is the number of killed mutants and m the number of generated mutants that are not equivalent.
Quality Estimate = Mutation Score
- Q(Ci) = mutation score for Ci = d_i / m_i
  - d_i = number of mutants killed
  - m_i = number of non-equivalent mutants generated for Ci
- WARNING: Q(Ci) = 100% does not imply bug-free; it depends on the mutation operators (see next slide)
- Quality of a system made of components: Q(S) = Σ d_i / Σ m_i
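The two scores above are simple ratios; as a minimal sketch (the function names are illustrative, not the authors' tool), they can be computed directly, using the figures from the next slide:

```python
def mutation_score(killed, generated, equivalent):
    """MS = d / m, where m counts only non-equivalent mutants."""
    return killed / (generated - equivalent)

def system_quality(components):
    """Q(S) over components given as (killed_i, nonequiv_generated_i) pairs."""
    d = sum(k for k, _ in components)
    m = sum(m_ for _, m_ in components)
    return d / m

# Component scores from the next slide: 120/184, 65/100, 378/378, 234/245.
q = system_quality([(120, 184), (65, 100), (378, 378), (234, 245)])
print(round(100 * q, 1))
```

Note that 797/907 is 87.87%, which the slide reports as roughly 87.8%.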
System test quality example (component scores 120/184, 65/100, 378/378, 234/245):
Q(S) = (120 + 65 + 378 + 234) / (184 + 100 + 378 + 245) = 797/907 ≈ 87.8%
Component qualification: mutation operators

Type | Description
EHF  | Exception Handling Fault
AOR  | Arithmetic Operator Replacement
LOR  | Logical Operator Replacement
ROR  | Relational Operator Replacement
NOR  | No Operation Replacement
VCP  | Variable and Constant Perturbation
MCR  | Methods Call Replacement
RFI  | Referencing Fault Insertion
Mutation operators (1)
- Exception Handling Fault (EHF): causes an exception
- Arithmetic Operator Replacement (AOR): replaces e.g. '+' by '-' and vice versa
- Logical Operator Replacement (LOR): logical operators (and, or, nand, nor, xor) are replaced by each of the other operators; the expression is also replaced by TRUE and by FALSE
Mutation operators (2)
- Relational Operator Replacement (ROR): relational operators (<, <=, >, >=, =, /=) are replaced by each of the other operators
- No Operation Replacement (NOR): replaces each statement by the null statement
- Variable and Constant Perturbation (VCP): each arithmetic constant/variable is incremented and decremented; each boolean is replaced by its complement
Mutation operators (3)
- Referencing Fault Insertion (RFI, alias/copy):
  - nullify an object reference after its creation
  - suppress a clone or copy instruction
  - insert a clone instruction for each reference assignment
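An operator such as AOR can be sketched as a small source-to-source transformation. The following Python sketch (using the standard `ast` module; the slides' tool targets Eiffel, so this is only an illustration of the idea) swaps '+' and '-'; a real mutation tool would generate one mutant per mutation site rather than mutating every site at once:

```python
import ast

class AOR(ast.NodeTransformer):
    """Arithmetic Operator Replacement: swap '+' and '-'."""
    SWAP = {ast.Add: ast.Sub, ast.Sub: ast.Add}

    def visit_BinOp(self, node):
        self.generic_visit(node)
        repl = self.SWAP.get(type(node.op))
        if repl is not None:
            node.op = repl()   # replace the arithmetic operator
        return node

def mutate(source):
    """Return the mutated source text."""
    tree = AOR().visit(ast.parse(source))
    return ast.unparse(ast.fix_missing_locations(tree))

print(mutate("count = count + 1"))  # count = count - 1
```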
Outline of a Testing Process
Select either:
- Quality driven: select the wanted quality level Q_wanted
- Effort driven: maximum number of test cases MaxTC
Mutation analysis and test-case enhancement:
  while Q(Ci) < Q_wanted and nTC <= MaxTC:
    enhance the test cases (nTC++)
    apply the test cases to each mutant
    eliminate equivalent mutants
    compute the new Q(Ci)
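The loop above can be sketched as follows. This is a toy model under stated assumptions: `kills`, `new_test` and the toy data are hypothetical stand-ins (the actual enhancement step is the tester's or, later, an algorithm's work), and equivalent-mutant elimination is omitted for brevity:

```python
def mutation_score(tests, mutants, kills):
    """Fraction of mutants killed by at least one test."""
    killed = {m for m in mutants if any(kills(t, m) for t in tests)}
    return len(killed) / len(mutants)

def enhance(tests, mutants, kills, new_test, q_wanted=0.95, max_tc=100):
    """Quality- and effort-driven enhancement loop from the slide."""
    while mutation_score(tests, mutants, kills) < q_wanted and len(tests) <= max_tc:
        tests.append(new_test())   # enhance the test cases (nTC++)
    return tests, mutation_score(tests, mutants, kills)

# Toy demo: mutant i is killed exactly by test input i.
mutants = list(range(10))
kills = lambda t, m: t == m
gen = iter(range(10))
tests, score = enhance([0, 1], mutants, kills, lambda: next(gen), q_wanted=0.5)
print(score)  # 0.5
```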
A test report
119 mutants, 99 dead, 15 equivalent; MS = 99/104 = 95%

id  | eq  | method                  | source                   | mutant                     | comment
2   | 1   | empty                   | count = lower_bound - 1  | count <= lower_bound - 1   | never <
6   | 2   | full                    | count = upper_bound      | count >= upper_bound       | never >
16  | 3   | index_of (loop variant) | count + 2                | count * 2                  | same
24  | 4   | index_of (loop until)   | count or else structure  | count or structure         | short-circuit test
30  | 5   | make                    | count := 0               | (null)                     | default value
45  | 6   | make                    | lower_bound, upper_bound | (lower_bound - 1), upper_bound | redundancy
46  | 7   | make                    | lower_bound, upper_bound | lower_bound, (upper_bound + 1) | redundancy
60  | I   | full                    | count =                  | + 1 =                      | insufficient test
63  | II  | full                    | upper_bound;             | (upper_bound - 1);         | insufficient test
72  | III | put                     | if not has (x) then      | if true then               | incomplete spec
75  | IV  | put                     | if not has (x) then      | if not false then          | incomplete spec
98  | 8   | index_of (loop variant) | - Result)                | - Result + 1)              | same
99  | 9   | index_of (loop variant) | - Result)                | - Result - 1)              | same
100 | 10  | index_of (loop variant) | count + 2 -              | (count + 2 + 1) -          | same
101 | 11  | index_of (loop variant) | count + 2 -              | (count + 2 - 1) -          | same
102 | 12  | index_of (loop variant) | count + 2 -              | (count + 1) + 2 -          | same
103 | 13  | index_of (loop variant) | count + 2 -              | (count - 1) + 2 -          | same
104 | 14  | index_of (loop variant) | count + 2 -              | (count + 3 -               | same
105 | 15  | index_of (loop variant) | count + 2 -              | (count + 1 -               | same
110 | V   | index_of (loop until)   | > count or               | > (count + 1) or           | insufficient test

Rows numbered 1-15 in the second column are equivalent mutants; Roman-numbered rows (I-V) are non-equivalent.
Global Process
1. Initial test generation and bug correction (tester's work)
2. Automatic optimization of the initial test set
3. Measure contract efficiency
4. Improve contracts: oracle function reconstruction
5. Suppress equivalent mutants
6. Correct remaining bugs
Each step acts on the (contracts, tests, implementation) triangle, with MS as the measure of trust; part of the process is automated.
Oracle
- Oracle by behavior comparison: traces; "program" objects
- Embedded oracles (contracts, assertions): internal to the component
- Oracle associated with the test data: external to the component; an oracle specific to each test datum
A short case study: building a self-testable library (date-time)
A short case study: results

                              | p_date.e | p_time.e | p_date_time.e
Total number of mutants       |    673   |    275   |    199
Number equivalent             |     49   |     18   |     15
Mutation score                |   100%   |   100%   |   100%
Initial contracts efficiency  |  10.35%  |  17.90%  |   8.7%
Improved contracts efficiency |  69.42%  |  91.43%  |  70.10%
First version test size       |    106   |     93   |     78
Reduced test size             |     72   |     33   |     44
A short case study: robustness of the selftest against an infected environment (figure: p_date_time selftest robustness).
Partial conclusion
An effective method to build (some level of) trust:
- estimate the quality of a component based on the consistency of its 3 aspects: specification (contracts), tests, implementation
- a good basis for integration and non-regression testing
A tool is currently being implemented for languages supporting design-by-contract: Eiffel, Java (with iContract), UML (with OCL + MSC)...
Mutation for Unit and System Testing
System testing:
- combinatorial explosion of the number of mutants
- determining equivalent mutants is unrealistic
- a subset of operators must be chosen to deal with these constraints
- a strategy for injecting faults? specific operators?
Case study: the class diagram of the library under test (client and inheritance relations), including Queue, Stack, Linked_list, Sequence_list, Date_const, Linked_iterator, Sequence_iterator, Tree_level_iterator, Tree_past_iterator, Tree_pre_iterator, Tree_post_iterator, Tree_iterator, Sortable, Container, Text_object_manager, Integer_decoder, Date_time, Time, Date, HASHABLE, Text_object, COMPARABLE, Iterator, Oneway_iterator, Mutator, Format, Dictionary, Table, Set, Searchable, List, Catalog, Hash_set, Prime, General_tree, Traversable, Tree, History_list, Oneway_traversable, Sequence, Dispenser, Node, Order_comparable, Order_relation, Tree_node, Linked_node, Random, System, Math, Dict_node.
Genetic algorithms for test enhancement
Test enhancement
- It is easy to reach a mutation score of 60%, but costly to improve beyond it
- Hence: test enhancement with genetic algorithms
GAs for test optimization
- The issue: improving test cases automatically
- A non-linear problem: GAs may be adapted to it
Case study: the process with a genetic algorithm
1. Generate the mutants of class A and execute Autotest A (the i-th version at step i) against them
2. Diagnosis on surviving mutants, three cases:
   1. look for equivalent mutants
   2. incomplete specification: add contracts to the specification
   3. automatic improvement of the selftest by the genetic algorithm while the new MS exceeds the old MS; otherwise, deterministic improvement of the selftest
Mutant generation, test execution and the GA improvement are automatic; the other steps are not.
Test enhancement: the test cases (Test1 ... Test6) are run against the population of preys, the mutants (mutantA1 ... mutantA10) of class A.
Test enhancement: the analogy
- Test cases = predators = individuals
- Mutants = population of prey
Test enhancement: genetic algorithm
- Genes, individuals
- Operators: reproduction, crossover, mutation
- Fitness function
Test enhancement: genetic algorithm (for unit class testing)
- Fitness function = mutation score
- Individual = test cases
- Gene = {initialization, function calls}
Test enhancement: genetic algorithm operators

Crossover:
  ind1 = {G1_1, ..., G1_i, G1_{i+1}, ..., G1_m}
  ind2 = {G2_1, ..., G2_i, G2_{i+1}, ..., G2_m}
  give
  ind3 = {G1_1, ..., G1_i, G2_{i+1}, ..., G2_m}
  ind4 = {G2_1, ..., G2_i, G1_{i+1}, ..., G1_m}

Mutation for unit testing (a gene G = [I, S] is an initialization I and a call sequence S = (m_1(p_1), ..., m_i(p_i), ..., m_n(p_n))):
  G = [I, S]  ->  G_mut = [I, S_mut]  with  S_mut = (m_1(p_1), ..., m_i(p_i^mut), ..., m_n(p_n))

Gene-level crossover:
  G1 = [I1, S1], G2 = [I2, S2]  give
  G3 = [I2, S1], G4 = [I1, S2], G5 = [I1, S1.S2], G6 = [I2, S2.S1]
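The two operators can be sketched directly from the gene model. In this minimal sketch the gene encoding (a pair of an initialization name and a list of (method, parameter) calls) and the helper names are assumptions for illustration, not the authors' implementation:

```python
import random

def crossover(ind1, ind2, i):
    """One-point crossover at position i: the slides' ind3 / ind4."""
    ind3 = ind1[:i] + ind2[i:]
    ind4 = ind2[:i] + ind1[i:]
    return ind3, ind4

def mutate_gene(gene, perturb):
    """Replace one call's parameter by a perturbed value (p_i -> p_i^mut)."""
    init, calls = gene
    i = random.randrange(len(calls))
    method, param = calls[i]
    return (init, calls[:i] + [(method, perturb(param))] + calls[i + 1:])

# Individuals = lists of genes (test cases).
ind1 = [("make", [("put", 1)]), ("make", [("put", 2)])]
ind2 = [("make", [("prune", 3)]), ("make", [("prune", 4)])]
ind3, ind4 = crossover(ind1, ind2, 1)
print(ind3)  # [('make', [('put', 1)]), ('make', [('prune', 4)])]
```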
Test enhancement: genetic algorithm
Gene modeling for system testing must be adapted to the particular system under test. For the studied system (a C# parser), if a file contains x nodes N, a gene can be represented as G = [N_1, ..., N_x].
Mutation operator for system testing: choose a gene at random in an individual and replace one node in that gene by another one:
  G = [N_1, ..., N_i, ..., N_x]  ->  G_mut = [N_1, ..., N_i^mut, ..., N_x]
Test enhancement: the global process of a genetic algorithm (figure).
Case study: unit testing. Class interfaces of the date-time library (figure, client and inheritance relations):
- p_date_time.e: make, is_equal, hash_code, set_date, set_time, set, is_valid, is_local, is_utc, timezone_bias, set_timezone_bias, local_timebias, internal_tz, to_iso, out, to_iso_long, to_rfc, add_iso_timezone
- p_date.e: is_equal, hash_code, set_year, set_month, set_day, set, is_valid_date, is_valid, is_leap_year, max_day, to_iso, out, to_iso_long, to_rfc
- p_time.e: is_equal, hash_code, set_hour, set_minute, set_second, set, is_am, is_pm, to_iso, out, to_rfc
- p_date_const.e: Rfc_january ... Rfc_december
- p_format.e: int_to_str_min
- plus comparable.e, hashable.e, p_text_object.e
Case study: unit testing

                    | p_date | p_date_time | p_time
# generated mutants |   673  |        199  |   275
Initial mutation scores of 53% and 58% (one value was lost in extraction).
Case study: unit testing. Results for unit testing (figure; mutation rate: 10%).
Case study: system testing. A .NET component: a C# parser (32 classes).
Case study: system testing. Initial population of 12 individuals of size 4 (mutation score = 55%); mutation rate: 2% (figure: score over generations).
Case study: system testing. Mutation rate: 10% (figure).
Case study: system testing. Mutation rate: 20%! (figure)
Case study: system testing. Some technical reasons for these disappointing results:
- The GA has many parameters, so tuning is difficult and the results are not stable
- Mutation plays a crucial role, so this is not a "classical" GA
- There is no memorization to guarantee growth of the MS
A new model: bacteriological algorithms for test enhancement
The bacteriological approach
- An adaptive approach: test cases have to be adapted to a given "environment"
- No crossover
- A new model taken from a biological analogy
The bacteriological loop
1. Choose an initial set of bacteria
2. Compute the fitness value of each bacterium
3. Memorize the best bacterium
4. Reproduction
5. Mutation on one or several bacteria
Several stopping criteria: x generations, a given fitness value reached, ...
- The number of bacteria is constant (except for the memorized ones)
- Reproduction randomly chooses bacteria with a probability proportional to their contribution to the new mutation score
- The best bacterium is memorized (constrained by a given threshold)
- The parameters of a bacteriological algorithm are the maximum size of a bacterium and the size of the population
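The loop can be sketched as follows. All names and the toy fitness are assumptions made for illustration (the real fitness is the mutation score of the memorized test set); the key difference from the GA is the memorization step, which guarantees the score never decreases:

```python
import random

def bacteriological_loop(init_pop, fitness, mutate, generations=30, threshold=0.0):
    """Memorize a bacterium when it improves the memorized set's fitness by
    more than `threshold`; reproduce in proportion to contribution; mutate."""
    pop, memory, best = list(init_pop), [], 0.0
    for _ in range(generations):
        # Contribution of each bacterium = fitness of (memorized set + it).
        scored = [(fitness(memory + [b]), b) for b in pop]
        top_score, top = max(scored, key=lambda sb: sb[0])
        if top_score - best > threshold:      # memorization step
            memory.append(top)
            best = top_score
        # Reproduction proportional to contribution (small floor so the
        # weights are never all zero), then mutation.
        weights = [max(s - best, 0.0) + 1e-9 for s, _ in scored]
        pop = [mutate(b) for b in random.choices(pop, weights, k=len(pop))]
    return memory, best

# Toy demo: a bacterium is a test input killing one of 10 mutants.
random.seed(1)
fitness = lambda bs: len(set(bs)) / 10
memory, best = bacteriological_loop([0, 1, 2], fitness,
                                    lambda b: random.randrange(10),
                                    generations=50)
print(best, sorted(set(memory)))
```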
The bacteriological approach: generic UML model (figure).
Case study: unit testing. Results for unit testing (figure).
Case study: system testing. Initial mutation score = 55% (figure).
Comparison
- Better performance and reduced tuning effort: fewer parameters (size of an individual, selection of individuals for reproduction)

Algorithm     | # generations | mutation score (%) | # mutants executed
Genetic       |          200  |                85  |           480,000
Bacteriologic |           30  |                96  |            46,375
A mixed approach: tuning of the parameters
Tuning of the models and clues for a mixed approach
- Fix the "size" of a bacterium
- Study of the C# parser: size of a bacterium = number of syntactic nodes
Tuning of the models and clues for a mixed approach (figure).
Tuning of the models and clues for a mixed approach (figure: top mutation score trend curve).
The mixed approach
- Looking for an intermediate solution between "pure" GAs and bacteriological algorithms: from no memorization to systematic memorization
- Trade-off between the number of test cases (= number of memorized bacteria) and the convergence speed (= number of generations)
Clues for a mixed approach
Let B be a bacterium and threshold_value the memorization threshold:
  if fitness_value(B) > threshold_value then memorize B
The category of the algorithm depends on the threshold value:
- threshold_value = 100: "pure" genetic (no memorization)
- threshold_value = 0: "pure" bacteriological (systematic memorization)
- 0 < threshold_value < 100: mixed approach
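The threshold is thus a single dial interpolating between the two algorithms. A toy illustration (the per-generation fitness gains below are invented numbers, only to show how the threshold changes the number of memorized bacteria):

```python
def memorized_count(threshold, gains):
    """Number of bacteria memorized for a given threshold, where `gains`
    is a sequence of per-generation fitness gains in percent."""
    return sum(1 for g in gains if g > threshold)

gains = [12, 0, 5, 30, 2, 8]
# threshold 100: pure GA, nothing memorized; threshold 0: pure
# bacteriological, every improvement memorized; in between: mixed.
print(memorized_count(100, gains),
      memorized_count(0, gains),
      memorized_count(6, gains))  # 0 5 3
```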
Clues for a mixed approach (figure: convergence speed).
Clues for a mixed approach (figure: number of test cases = number of memorized bacteria).
Clues for a mixed approach: a slice for the mixed approach
- Lower convergence speed for the mixed approach than for the bacteriological approach
- Test-case set reduced from 10 to 7
- But many parameters must be tuned => the bacteriological algorithm is still better
Conclusion
- A method to build trustable components
- Automated test improvement: GAs are not well adapted; BAs are better and easier to calibrate
- The benefit of the mixed approach is not obvious