Deriving Test Data for Web Applications from User Session Logs

Presentation transcript:

1 Deriving Test Data for Web Applications from User Session Logs
Jeff Offutt SWE 737 Advanced Software Testing

2 User-Session Testing
User session: a collection of user requests in the form of a URL and name-value pairs
- The name-value pairs are stored in logs on the server
User sessions to tests: each logged request is turned into an HTTP request
- Automated with a tool such as HttpUnit or Selenium
SWE 737 © Jeff Offutt
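The mapping from logged entries to replayable requests can be sketched as follows. This is a minimal illustration, not the HttpUnit/Selenium tooling the slide names; the log-entry format (a full URL with a query string) and the example URL are assumptions.

```python
# Hypothetical sketch: split one logged user-session entry into the base
# request URL and its name-value pairs, ready to be replayed as a test.
from urllib.parse import urlparse, parse_qsl

def parse_log_entry(entry):
    """Return (base URL, {name: value}) for one logged request."""
    url = urlparse(entry)
    base = url.scheme + "://" + url.netloc + url.path
    params = dict(parse_qsl(url.query))
    return base, params

base, params = parse_log_entry(
    "http://bookstore.example/search?title=testing&author=offutt")
# base   -> "http://bookstore.example/search"
# params -> {"title": "testing", "author": "offutt"}
```

In a real harness, each (base, params) pair would be turned into an HTTP request and issued against the application under test.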

3 User-Session Testing Example
- An efficient way to generate tests
- Empirical studies have found success in finding faults

4 Issues and Problems
- Hard to scale with large numbers of user sessions
- Tends to create "happy path" tests, not exceptional or unusual cases

5 1. Prioritizing Large Numbers of Tests
Researchers have tried to put tests in priority order according to various criteria:
- Semantic differences between programs
- Coverage of test requirements
- Fault exposure potential
- Likelihood of a fault occurring
- Length (or size) of tests

6 Criteria by Sampath et al.
Length-based
- Base Request Long to Short (Req-LtoS)
- Base Request Short to Long (Req-StoL)
- Parameter Value Long to Short (PV-LtoS)
- Parameter Value Short to Long (PV-StoL)
Frequency-based
- Most Frequently Accessed Sequence (MFAS)
- All Accessed Sequences (AAS)
Parameter value coverage
- Unique Coverage of Parameter Values (1-way)
- 2-way Parameter-Value Interaction Coverage (2-way)

7 Length-Based Prioritization
Order tests by the number of HTTP base requests in each test case
- The length of a test case is the number of base requests it contains (counting duplicates)
- Descending order of length (Req-LtoS)
- Ascending order of length (Req-StoL)
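The two request-count orderings reduce to sorting by test-case length. A minimal sketch, assuming a test case is simply a list of base requests (the session contents below are invented for illustration):

```python
# Sketch of Req-LtoS / Req-StoL: order user-session test cases by the
# number of base requests each contains, duplicates counted.
def req_ltos(test_cases):
    """Longest test case (most base requests) first."""
    return sorted(test_cases, key=len, reverse=True)

def req_stol(test_cases):
    """Shortest test case first."""
    return sorted(test_cases, key=len)

sessions = [["login", "search", "view", "buy"],
            ["login"],
            ["login", "search"]]
# req_ltos(sessions) puts the 4-request session first;
# req_stol(sessions) puts the single-request session first.
```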

8 Length-Based Prioritization
Order tests by the number of parameters
- PV-LtoS: longest parameter list to shortest
- PV-StoL: shortest parameter list to longest
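The parameter-count variants sort on the total number of parameter values instead of the number of requests. A sketch under an assumed test-case shape of (base URL, parameter dict) pairs; the URLs and names are illustrative:

```python
# Sketch of PV-LtoS / PV-StoL: order tests by the total number of
# parameter values across all requests in the test case.
def total_params(test_case):
    # test_case: list of (base_url, {name: value, ...}) requests
    return sum(len(params) for _, params in test_case)

def pv_ltos(test_cases):
    """Longest parameter list first."""
    return sorted(test_cases, key=total_params, reverse=True)

def pv_stol(test_cases):
    """Shortest parameter list first."""
    return sorted(test_cases, key=total_params)
```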

9 Frequency-Based Prioritization
Order the pages (JSPs and servlets) by how many times they are accessed
- Prioritize tests that cover the most frequently accessed pages (1-way)
- Also order pages that are used in sequences of length 2 (pairs, 2-way)
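The 1-way frequency idea can be sketched by counting page accesses across all sessions and scoring each test by the popularity of the pages it covers. The scoring rule (sum of access counts) is one plausible reading of the slide, not necessarily the exact rule from the paper:

```python
# Sketch of frequency-based (1-way) prioritization: rank tests so that
# those covering the most frequently accessed pages run first.
from collections import Counter

def prioritize_by_frequency(test_cases):
    # Count how often each page appears across all user sessions.
    freq = Counter(page for tc in test_cases for page in tc)
    # Score a test by the total access frequency of its pages.
    return sorted(test_cases,
                  key=lambda tc: sum(freq[p] for p in tc),
                  reverse=True)
```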

10 Parameter Value Coverage Prioritization
Unique Coverage of Parameter Values (1-way)
- Select as the next test the one that maximizes the number of parameter-value pairs that have NOT yet been used
- Example (blue marks the new parameter-value pairs)
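The 1-way selection rule is a greedy loop: repeatedly pick the test contributing the most unseen parameter-value pairs. A minimal sketch, modeling a test case as a list of (parameter, value) pairs (the pair values in the test data are invented):

```python
# Sketch of greedy 1-way parameter-value prioritization: at each step,
# choose the remaining test that covers the most not-yet-used pairs.
def prioritize_1way(test_cases):
    covered, ordered = set(), []
    remaining = list(test_cases)
    while remaining:
        # Pick the test adding the most uncovered (parameter, value) pairs.
        best = max(remaining, key=lambda tc: len(set(tc) - covered))
        ordered.append(best)
        covered |= set(best)
        remaining.remove(best)
    return ordered
```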

11 Parameter Value Coverage Prioritization
Parameter-Value Interaction Coverage (2-way)
- Select as the next test the one that maximizes the number of 2-way parameter-value interactions between pages that have not yet occurred in any selected test
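The same greedy loop extends to 2-way coverage by counting uncovered *pairs* of parameter-value settings within a test. A sketch under the same assumed test-case shape (lists of (parameter, value) pairs; the names are illustrative):

```python
# Sketch of greedy 2-way interaction prioritization: the coverage units
# are unordered pairs of (parameter, value) settings within one test.
from itertools import combinations

def interactions(test_case):
    # All 2-way combinations of the test's (parameter, value) settings.
    return set(combinations(sorted(test_case), 2))

def prioritize_2way(test_cases):
    covered, ordered = set(), []
    remaining = list(test_cases)
    while remaining:
        best = max(remaining, key=lambda tc: len(interactions(tc) - covered))
        ordered.append(best)
        covered |= interactions(best)
        remaining.remove(best)
    return ordered
```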

12 Experimental Evaluation (Sampath 2008)
Basis for evaluation:
- Fault detection rate
- Average percent of faults detected
- Execution time of the test suite
Subject applications:
- Book: an e-commerce bookstore
- CPM: creates grader accounts for TAs
- Masplas: a web application for a regional workshop

13 Evaluation Metrics
Rate of fault detection
- Finding the most faults in the first 10% of tests executed
- Percent of tests needed to find all faults
Average percentage of faults detected
- By all tests
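The "average percentage of faults detected" is commonly formalized as the APFD metric from the test-prioritization literature; the slide does not give the formula, so the sketch below uses that standard definition: for n tests and m faults, APFD = 1 - (TF1 + ... + TFm)/(n·m) + 1/(2n), where TFi is the 1-based position of the first test that reveals fault i.

```python
# Sketch of the standard APFD (Average Percentage of Faults Detected)
# metric, one common way to quantify rate of fault detection.
def apfd(first_detecting_positions, n_tests):
    """first_detecting_positions: 1-based index, per fault, of the first
    test in the prioritized order that reveals that fault."""
    m = len(first_detecting_positions)
    return 1 - sum(first_detecting_positions) / (n_tests * m) + 1 / (2 * n_tests)

# Example: 10 tests, 3 seeded faults first revealed by tests 1, 2, and 5:
# apfd([1, 2, 5], 10) == 1 - 8/30 + 1/20, about 0.783
```

Higher APFD means faults are found earlier in the prioritized order, which is exactly what the criteria above try to maximize.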

14 Types of Faults Seeded
- Data store: faults that exercise code interacting with data
- Logic: errors in control flow
- Form-based
- Appearance: changes in the user's view of the page
- Links: changes of hyperlink locations

15 Results Summary (Book, CPM, Masplas)
Finding the most faults in the first 10% of tests:
- 1-way parameter value interaction
- All accessed sequences (AAS)
- Base request long to short (Req-LtoS)
Finding all faults with the fewest tests:
- 1-way and most frequently accessed sequence (MFAS)
- 2-way parameter value interaction

16 Conclusions
- The study did not determine which criterion is best; it probably depends on the application
- Almost everything was better than random ordering

17 2. Testing Exceptional and Unusual Situations
The literature identified two problems (reference [1], Sampath 2008):
- Scalability
- Tests focus on "happy paths"
I could not find any publications that address the happy-path problem

18 References
[1] Sreedevi Sampath, Renée Bryce, Gokulanand Viswanath, Vani Kandimalla, and Güneş Koru, "Prioritizing User-Session-Based Test Cases for Web Applications Testing," First International Conference on Software Testing, Verification, and Validation, April 2008
[2] Sara Sprenkle, Emily Gibson, Sreedevi Sampath, and Lori Pollock, "Automated Replay and Failure Detection for Web Applications," International Conference on Automated Software Engineering, November 2005
[3] Sreedevi Sampath, Sara Sprenkle, Emily Gibson, and Lori Pollock, "Web Application Testing with Customized Test Requirements - An Experimental Comparison Study," International Symposium on Software Reliability Engineering, November 2006
[4] Sreedevi Sampath, Sara Sprenkle, Emily Gibson, Lori Pollock, and A. S. Greenwald, "Applying Concept Analysis to User-Session-Based Testing of Web Applications," IEEE Transactions on Software Engineering, 33(10), October 2007

