1 Copyright 2015, Robert W. Hasker

2 Reviewing CI Setup
- So what do the steps do?
- Key concept: search path
  - Windows: for each command, run through PATH to find the first folder containing the specified command (an executable with the same name)
- Variables: strings that can be retrieved from the environment on demand
  - JDK_PATH, JAVA_HOME: so the Java compiler and runtime know where libraries are located
- Values of variables are retrieved from the "owner" (parent) process when new processes are created – variables are inherited (see the sketch below)
- System variables: shared by all users, recorded by the OS when it starts
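As a quick sanity check that variables really are inherited, a small Java program can ask the JVM for them; this is only a sketch and assumes JAVA_HOME and PATH are set on the machine:

    // ShowEnv.java – print two inherited environment variables
    public class ShowEnv {
        public static void main(String[] args) {
            // Each value comes from the parent ("owner") process that launched the JVM.
            System.out.println("JAVA_HOME = " + System.getenv("JAVA_HOME"));
            System.out.println("PATH      = " + System.getenv("PATH"));
        }
    }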

3 More on CI Setup
- DOS prompt: variables are referred to by %X%
  - E.g.: open a DOS prompt and enter the command echo %JAVA_PATH%
- Unix uses $JAVA_PATH, hence some of the typos
- Much of the setup has to do with getting the proper tools in your path (javac, ant, junit libraries)
- Windows services: background jobs

4 Reliability
- Systems engineering: in a linear system, the response to a complex input can be described as a sum of responses to simpler inputs
  - E.g.: wave propagation
  - Nonlinear examples: AC power flow, almost any complex system
- Reliability for linear systems: the product of the reliability of each component
  - Components A, B, C in series, each 90% reliable: overall reliability is 0.9³ ≈ 0.73 (worked out below)
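Written out, the series computation from the slide is:

    R_system = R_A × R_B × R_C = 0.9 × 0.9 × 0.9 = 0.729 ≈ 73%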

5 Reliability in Software
- A car that works 73% of the time: find a shop or dealer
- How many components in software?
- How can we achieve high reliability for SW?

6 Is unit testing enough?
- Integration, system, acceptance
- Performance testing
  - Based on an operational profile: given the expected types of inputs, ensure the system can handle the load (see the sketch below)
  - What can you learn from this type of testing?
  - What can't you learn from this type of testing?
- Stress test
  - Answers what happens at limits / high loads
  - What happens if the limits are exceeded? What do you want to happen?
  - Goal: graceful degradation – bend but don't break
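A minimal sketch of a performance-style test in JUnit 4, assuming a hypothetical OrderService and an operational profile of many small requests; the timeout is only an illustrative bound, not a real requirement:

    import org.junit.Test;
    import static org.junit.Assert.assertTrue;

    public class OrderServiceLoadTest {
        // OrderService and its place() method are hypothetical stand-ins for the system under test.
        @Test(timeout = 2000)   // crude bound: the typical workload must finish within 2 seconds
        public void handlesTypicalLoad() {
            OrderService service = new OrderService();
            for (int i = 0; i < 10_000; i++) {          // operational profile: many small requests
                assertTrue(service.place("item-" + (i % 50), 1));
            }
        }
    }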

7 More testing methods
- Security testing
  - Maybe something Sony should invest in?
  - How would you test security?
  - A particular problem for websites: database injection, cross-site scripting
  - Brakeman: Rails security scanner
  - Fuzz checkers – inserting random data (see the sketch below)
  - Wapiti: identifies scripts and forms into which data can be injected
- Usability
- Non-functional requirements in general
  - Must identify a testable component!
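A toy fuzz check in Java; Parser.parse is a hypothetical method under test, and the only property being checked is that random input produces either a normal result or a clean rejection, never a crash or hang:

    import java.util.Random;

    public class ParserFuzzTest {
        public static void main(String[] args) {
            Random rng = new Random(42);                  // fixed seed so any failure is repeatable
            for (int i = 0; i < 1_000; i++) {
                byte[] junk = new byte[rng.nextInt(200)];
                rng.nextBytes(junk);
                try {
                    Parser.parse(new String(junk));       // hypothetical parser under test
                } catch (IllegalArgumentException expected) {
                    // cleanly rejecting garbage is fine; a crash or hang is a robustness bug
                }
            }
            System.out.println("Survived 1000 random inputs");
        }
    }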

8 Testing strategy
- CI: run faster tests first
  - Generally, this means running unit tests first
  - System tests often require setups
  - Likely: run unit tests always, system tests on demand (one way to tag them is sketched below)
- Standing issue: a regression suite can easily get too large to run in full
  - Research question: can we identify a subset to run routinely, but save the full run for release testing?
  - But is that suggestion contrary to CI?
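One common way to separate the two groups in JUnit 4 is its Categories mechanism; this sketch assumes the team defines a SystemTests marker interface and configures CI to exclude that category on each commit:

    import org.junit.Test;
    import org.junit.experimental.categories.Category;

    // Marker interface for slow tests (an assumed team convention, not from the slides).
    interface SystemTests {}

    public class ScheduleTests {
        @Test
        public void unitTestRunsOnEveryBuild() {
            // fast, in-memory check of a single class
        }

        @Category(SystemTests.class)
        @Test
        public void systemTestRunsOnDemand() {
            // needs a database and test server; CI excludes this category on commits
        }
    }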

9 Qualities of good tests
- Thorough – coverage
  - Start with statement coverage
- Repeatable
  - Generate small data sets that have predictable outputs
  - Test runs should not depend on each other
- Understandable
- Maintainable
  - Easy to identify the failed case

10 Testing
- What about “grandma testing”?

11 Testing
- What about “grandma testing”?
  - Sexist? Ageist?
  - "Local grandmother finally uses printer"
  - CES 2009: hall for products for retirees

12 Testing
- What about “grandma testing”?
  - Sexist? Ageist?
  - "Local grandmother finally uses printer"
  - Is grandma-using-the-computer the modern equivalent of racist jokes?
  - CES 2009: hall for products for retirees

13 Testing
- What about “grandma testing”?
  - Sexist? Ageist?
- Fuzz testing tools

14 Testing
- What about “grandma testing”?
  - Sexist? Ageist?
- Fuzz testing tools
- What if you need large data sets?
  - Test oracles
  - Version matching

15 Testing
- What about “grandma testing”?
  - Sexist? Ageist?
- Fuzz testing tools
- What if you need large data sets?
  - Test oracles
  - Version matching (sketched below)
- Key: differentiate between robustness and correctness
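A sketch of version matching as a test oracle: generate large random inputs and compare a new implementation against the previous release, which serves as the oracle. OldSorter and NewSorter are hypothetical names:

    import java.util.Arrays;
    import java.util.Random;

    public class VersionOracleCheck {
        public static void main(String[] args) {
            Random rng = new Random(7);                          // fixed seed: repeatable runs
            for (int trial = 0; trial < 100; trial++) {
                int[] data = rng.ints(1_000, -500, 500).toArray();
                int[] expected = OldSorter.sort(data.clone());   // trusted previous version = oracle
                int[] actual   = NewSorter.sort(data.clone());   // version under test
                if (!Arrays.equals(expected, actual))
                    throw new AssertionError("Mismatch on trial " + trial);
            }
        }
    }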

16 Integration Testing
- Common errors:
  - Interface misuse: the client doesn't satisfy its obligations
    - E.g.: not calling an initialization routine before executing an operation (illustrated below)
  - Interface misunderstanding: incorrect assumptions made by the developer of the client
  - Timing errors: data producers and consumers may operate at different speeds; if this communication is not protected, one side may try to consume data before it is produced, or produce data when the consumer is not ready for new information
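A tiny, hypothetical illustration of interface misuse – the client skips the initialization call the interface requires:

    public class DeviceClient {
        public static void main(String[] args) {
            Device d = new Device();      // hypothetical component with an initialization obligation
            // d.initialize();            // <-- the client's obligation; omitting it is interface misuse
            d.read();                     // likely fails or returns garbage without initialization
        }
    }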

17 Integration Testing
- Guidelines
  - Test the extreme ends of ranges in calls
  - Test passing a NULL pointer whenever possible (example below)
  - Stress testing to reveal timing problems
  - If components use shared memory, test operations being invoked in different sequences
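For instance, the NULL-pointer and extreme-range guidelines as JUnit 4 tests against a hypothetical Accounts component; the expected exceptions are assumptions about its contract:

    import org.junit.Test;

    public class AccountsIntegrationTest {
        @Test(expected = IllegalArgumentException.class)
        public void rejectsNullAccount() {
            new Accounts().deposit(null, 100);   // NULL where a reference is required
        }

        @Test(expected = IllegalArgumentException.class)
        public void rejectsEmptyAccountName() {
            new Accounts().open("");             // extreme end of the range: zero-length name
        }
    }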

18 System Testing
- Focus: functional testing
  - That is, are there errors in features?
- In some sense, this is the only test that matters
  - Tests all functionality seen by the customer
- Why shouldn't we do all testing at this level?
  - Harder to test deeply
  - Harder to know how to fix the system if a test fails
- Acceptance testing: system testing with a focus on common scenarios

19 UI Testing in Java
- Assumption: painful but readily available
- Research: lots of vaporware, abandonware
  - Many solutions for the web, but not a lot for Java
  - FrogLogic, RAutomation, MarathonTesting, UISpec4J, Mspec, abbot
- Method that works: JavaWorld TestUtils; see also counter.zip

20 UI Testing in Java
- Each component: call .setName with a unique string
- Test code: TestUtils.getChildNamed(frame, name-of-child)
- Use .doClick(), .value(), etc. to exercise the code (see the sketch below)
- Robust
  - Doesn't depend on screen locations or specialized test frameworks
  - But no auto-capture/replay
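Put together, the pattern looks roughly like this; TestUtils is the helper from the JavaWorld article with the lookup call described on the slide, while CounterFrame, the component names, and the assertion are invented for illustration:

    import static org.junit.Assert.assertEquals;
    import javax.swing.JButton;
    import javax.swing.JTextField;
    import org.junit.Test;

    public class CounterUiTest {
        @Test
        public void clickingIncrementUpdatesTotal() {
            CounterFrame frame = new CounterFrame();   // hypothetical application window
            // Production code gave each component a unique name via setName(...)
            JButton inc = (JButton) TestUtils.getChildNamed(frame, "incrementButton");
            inc.doClick();                             // exercise the handler directly
            JTextField total = (JTextField) TestUtils.getChildNamed(frame, "totalField");
            assertEquals("1", total.getText());        // no screen coordinates involved
        }
    }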

21 Manual Test Procedure

Step | Req | Pass Conditions | Pass?
1. Select the Biology program. | UIR-2 | System displays biology classes w/ first class BIOLOGY 1150, Section 01, Title GENERAL BIOLOGY, Instructor Block, Anna, Filled/Seats 52/53, Class# 1311, Credits 5, Meets BOE 0221 MWF 8:00-8:52 | P / F
2. Double-click on Class# 1330 | UIR-1 | System includes Class# 1330 in the schedule at the bottom | P / F
3. Scroll down to Class# 1331 (BIOLOGY 1650, Section 01) | UIR-9 | System displays Class# 1331 with a pink background | P / F
4. | UIR-9 | All sections listed between #1311 and #1331 have a white background | P / F
5. Select the GENENG program. | UIR-2 | System displays general engineering courses | P / F
6. | UIR-9 | GENENG 1030 sections 01-07 have a pink background; all other sections of GENENG 1030 have a white background | P / F
7. | UIR-9 | All sections of GENENG 1000 have a white background | P / F

22 Coverage
- Exercise:
  - Each participant: write down 4 instructions
  - Input to the procedure: a value given by someone, plus which person (1-4) gave you the value (the source)
- Instruction set:
  - Add one to value
  - Subtract one from value
  - If expression then instructions else instructions, where expression is a Boolean expression over value and source
  - Give value to destination – each path must end this way

23 Coverage
- See se3800/samples/triangle.html
- Coverage from triangle 3 2 1
- Working with a partner…
  - Write tests which pass
  - Identify changes to the code which pass those tests but fail to meet the specification
- What is statement coverage?
- How does this differ from branch coverage?

24 Coverage
- Decision: Boolean expression
- Condition: (atomic) component of a Boolean expression
- Decision coverage: every Boolean expression evaluates to both true and false
  - How is this different from statement coverage?
- Condition coverage: every condition evaluates to both true and false
  - Too easy: consider the cases for a && b (worked out below)
- Multiple condition coverage: all combinations executed
  - How many tests are needed for a && b && c?
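To see the difference concretely, here is a made-up decide method standing in for the decision a && b, with comments showing which inputs each criterion requires:

    public class CoverageDemo {
        static boolean decide(boolean a, boolean b) { return a && b; }

        public static void main(String[] args) {
            // Condition coverage: each condition is true once and false once...
            System.out.println(decide(true, false));   // a=T, b=F -> false
            System.out.println(decide(false, true));   // a=F, b=T -> false
            // ...yet the decision never evaluates to true, so decision coverage is missed.

            // Multiple condition coverage needs all 2^2 combinations (2^3 = 8 for a && b && c):
            System.out.println(decide(true, true));    // T,T -> true
            System.out.println(decide(false, false));  // F,F -> false
        }
    }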

25 MCDC
- Modified Condition Decision Coverage
  - Every statement invoked at least once
  - Every entry and exit point invoked at least once
  - Every control statement has taken all possible outcomes at least once
  - Every (non-constant) Boolean expression evaluated to both true and false
  - Every (non-constant) condition evaluated to both true and false
  - Every non-constant condition in a Boolean expression shown to independently affect the outcome

26 MCDC
- Modified Condition Decision Coverage:
  - Every entry and exit point executed at least once
  - Every decision has taken all possible outcomes at least once
  - Every condition has taken all possible outcomes at least once
  - Every condition (in a decision) has been shown to independently affect that decision's outcome
- With this definition, the number of tests is O(N) (see the worked example below)
- See A Practical Tutorial on Modified Condition/Decision Coverage, NASA/TM-2001-210876, for a tutorial
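A worked example for a && b && c, showing that four tests (N + 1 rather than 2^N) satisfy MC/DC; decide is a made-up stand-in for the decision:

    public class McdcDemo {
        static boolean decide(boolean a, boolean b, boolean c) { return a && b && c; }

        public static void main(String[] args) {
            System.out.println(decide(true,  true,  true));   // baseline: true
            System.out.println(decide(false, true,  true));   // only a flipped -> false: a's independent effect
            System.out.println(decide(true,  false, true));   // only b flipped -> false: b's independent effect
            System.out.println(decide(true,  true,  false));  // only c flipped -> false: c's independent effect
        }
    }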

27 Path Testing
- Statement, decision, MCDC: all about executing logic
- Ultimate goal: execute every path
- Enumerate the paths for the GCD example:

    printf("The gcd of %d and %d", x, y);
    while ( x != y ) {
        if ( x > y )
            x = x - y;
        else
            y = y - x;
    }
    printf( " is %d.\n", x );

- E.g.: gcd(4, 4) never enters the loop; gcd(6, 4) takes the then-branch (x = 2) and then the else-branch (y = 2). Since the loop may iterate any number of times, the set of paths is unbounded.

28 Mutation Testing
- Change the code, run the tests
  - Some test should fail (an example mutant and the test that kills it are sketched below)
- How to generate mutants?
  - That is: what operators?
- Offutt, Rothermel, Lee, Untch, and Zapf. Sufficient Mutant Operators, TOSEM, April 1996
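To make the idea concrete, here is one hand-written mutant of a small, made-up method together with the boundary test that kills it (a real mutation tool generates such mutants automatically):

    public class SpeedCheck {
        static final int LIMIT = 65;

        static boolean ticket(int speed)       { return speed > LIMIT;  }  // original code under test
        static boolean ticketMutant(int speed) { return speed >= LIMIT; }  // ROR mutant: > replaced by >=

        public static void main(String[] args) {
            // A boundary-value test distinguishes them: at exactly 65 the original issues no ticket.
            System.out.println("original(65) = " + ticket(65));        // false
            System.out.println("mutant(65)   = " + ticketMutant(65));  // true -> the test kills the mutant
        }
    }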

29 Too many mutants!
- Number of mutants from the 22 operators
- Are the most expensive mutants necessary?
- Examined the impact of removing each

30 Result: minimum operator set
- ABS: insert calls to an absolute value function
- AOR: replace all arithmetic operators by every syntactically legal alternative
- LCR: replace AND, OR by all logical connectors
- ROR: replace (modify) relational operators
- UOI: insert unary operators (examples of each are sketched below)
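As a made-up illustration, a single small method and the kinds of mutants each of the five operators would generate from it:

    public class MutantExamples {
        // Original method under test.
        static int f(int x, int y) {
            if (x > 0 && y > 0)
                return x + y;
            return 0;
        }

        // A few of the mutants the five sufficient operators would produce:
        // ABS: return Math.abs(x) + y;        (insert absolute-value calls)
        // AOR: return x - y;  x * y;  x / y;  (replace the arithmetic operator)
        // LCR: if (x > 0 || y > 0)            (replace && with other logical connectors)
        // ROR: if (x >= 0 && y > 0)           (replace the relational operator)
        // UOI: return -x + y;                 (insert a unary operator)
    }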

31 Evaluation
- Five operators, responsible for ~17% of all mutants, were sufficient for 10 Fortran test programs
- In general: O(Lines + References) mutants
  - The constant for the O is large!
  - The 10 programs above (200 lines): 231,972 mutants
- Practical for critical routines

32 Acceptance Tests & APIs  How to write acceptance tests for an API?  Our model: acceptance test = story/scenario  Issue: an API is not a user!  Solution: Cohn: Writing User Stories for Back- end SystemsCohn: Writing User Stories for Back- end Systems  Personify subsystems  Epic: “As a bank, I want to receive a file showing all checks to be cleared so that I can debit and credit the right accounts.”  “As a bank, I want to receive a 5300 file with correctly formatted single-line deposit entry records so that I can process them.”  Write test to the resulting story.

33 Testing Review
- Unit, integration, system, acceptance
- Coverage: making sure all code is executed
- Mutation testing: testing your tests
- Acceptance testing and APIs

34 What next?
- How do we know when we're done?
  - Exercise: research network
  - Methods & Tools
- No risk, no need to test!
- Done depends on identified risks: the minimal criterion is that you are done testing when risks are reduced to an acceptable level
- Is there an acceptable level of risk for safety-critical software?

35 Goal-directed design
- What do you want to know about user interface design?

