CS527 Topics in Software Engineering (Software Testing and Analysis) Darko Marinov September 7, 2010.

Similar presentations
Enjoying the Perks of Model-based Testing. Ibrahim K. El-Far, Florida Institute. Thursday, November 1, 2001. (c) 2001 Ibrahim K. El-Far. All rights reserved.
Feedback-directed Random Test Generation (to appear in ICSE 2007) Carlos Pacheco Shuvendu Lahiri Michael Ernst Thomas Ball MIT Microsoft Research January.
Author: Carlos Pacheco, Shuvendu K. Lahiri, Michael D. Ernst, Thomas Ball MIT CSAIL.
50.530: Software Engineering Sun Jun SUTD. Week 10: Invariant Generation.
1 Symbolic Execution for Model Checking and Testing Corina Păsăreanu (Kestrel) Joint work with Sarfraz Khurshid (MIT) and Willem Visser (RIACS)
CS527: Advanced Topics in Software Engineering (Software Testing and Analysis) Darko Marinov September 18, 2008.
Background for “KISS: Keep It Simple and Sequential” cs264 Ras Bodik spring 2005.
Feedback-Directed Random Test Generation Automatic Testing & Validation CSI5118 By Wan Bo.
Prioritizing User-session-based Test Cases for Web Applications Testing Sreedevi Sampath, Renée C. Bryce, Gokulanand Viswanath, Vani Kandimalla, A. Gunes.
November 2011, CSC7302: Testing & Metrics, Advanced Testing Techniques. The hangman problem … the final challenge. The rules of hangman have changed with respect.
Software Testing and Quality Assurance
Swami Natarajan, June 17, 2015. RIT Software Engineering. Reliability Engineering.
1 Today Another approach to “coverage” Cover “everything” – within a well-defined, feasible limit Bounded Exhaustive Testing.
RIT Software Engineering
SE 450 Software Processes & Product Metrics 1 Defect Removal.
OOP #10: Correctness Fritz Henglein. Wrap-up: Types A type is a collection of objects with common behavior (operations and properties). (Abstract) types.
Michael Ernst, page 1 Improving Test Suites via Operational Abstraction Michael Ernst MIT Lab for Computer Science Joint.
1 Advanced Material The following slides contain advanced material and are optional.
(c) 2007 Mauro Pezzè & Michal Young Ch 1, slide 1 Software Test and Analysis in a Nutshell.
Finding Errors in .NET with Feedback-Directed Random Testing By Carlos Pacheco, Shuvendu K. Lahiri and Thomas Ball Presented by Bob Mazzi 10/7/08.
Chapter 2: Developing a Program Extended and Concise Prelude to Programming Concepts and Design Copyright © 2003 Scott/Jones, Inc. All rights reserved.
1 Scenario-based Analysis of UML Design Class Models Lijun Yu October 4th, 2010 Oslo, Norway.
Finding Bugs in Web Applications Using Dynamic Test Generation and Explicit-State Model Checking -Shreyas Ravindra.
1 Joe Meehean. 2 Testing is the process of executing a program with the intent of finding errors. -Glenford Myers.
CS527: (Advanced) Topics in Software Engineering Overview of Software Quality Assurance Tao Xie ©D. Marinov, T. Xie.
University of Palestine software engineering department Testing of Software Systems Fundamentals of testing instructor: Tasneem Darwish.
Finding Errors in .NET with Feedback-Directed Random Testing Carlos Pacheco (MIT) Shuvendu Lahiri (Microsoft) Thomas Ball (Microsoft) July 22, 2008.
Reverse Engineering State Machines by Interactive Grammar Inference Neil Walkinshaw, Kirill Bogdanov, Mike Holcombe, Sarah Salahuddin.
Feedback-Directed Random Test Generation. Carlos Pacheco (MIT CSAIL), Shuvendu K. Lahiri (Microsoft Research), Michael D. Ernst (MIT CSAIL), and Thomas Ball (Microsoft Research). Presented.
CS527 Topics in Software Engineering (Software Testing and Analysis) Darko Marinov September 15, 2011.
CS527: (Advanced) Topics in Software Engineering Reading Papers Tao Xie ©D. Marinov, T. Xie.
Platform Support for Developing Analysis and Testing Plugins Shauvik Roy Choudhary with Jeremy Duvall, Wei Jin, Dan Zhao, Alessandro Orso School of Computer.
Empirically Revisiting the Test Independence Assumption Sai Zhang, Darioush Jalali, Jochen Wuttke, Kıvanç Muşlu, Wing Lam, Michael D. Ernst, David Notkin.
Michael Ernst, page 1 Collaborative Learning for Security and Repair in Application Communities Performers: MIT and Determina Michael Ernst MIT Computer.
What is Software Testing? And Why is it So Hard J. Whittaker paper (IEEE Software – Jan/Feb 2000) Summarized by F. Tsui.
Chapter 8 – Software Testing, Lecture 1. The bearing of a child takes nine months, no matter how many women are assigned. Many.
CS527: Advanced Topics in Software Engineering (Software Testing and Analysis) Darko Marinov August 28, 2008.
Which Configuration Option Should I Change? Sai Zhang, Michael D. Ernst University of Washington Presented by: Kıvanç Muşlu.
CS527 Topics in Software Engineering (Software Testing and Analysis) Darko Marinov September 9, 2010.
Java PathFinder (JPF) cs498dm Software Testing January 19, 2012.
CS527 Topics in Software Engineering (Software Testing and Analysis) Darko Marinov September 16, 2010.
CS527 Topics in Software Engineering (Software Testing and Analysis) Darko Marinov September 22, 2011.
1 Test Selection for Result Inspection via Mining Predicate Rules Wujie Zheng
Feedback-directed Random Test Generation Carlos Pacheco Shuvendu Lahiri Michael Ernst Thomas Ball MIT Microsoft Research January 19, 2007.
“Isolating Failure Causes through Test Case Generation “ Jeremias Rößler Gordon Fraser Andreas Zeller Alessandro Orso Presented by John-Paul Ore.
Directed Random Testing Evaluation. FDRT evaluation: high-level – Evaluate coverage and error-detection ability large, real, and stable libraries tot.
When Tests Collide: Evaluating and Coping with the Impact of Test Dependence Wing Lam, Sai Zhang, Michael D. Ernst University of Washington.
ITCS 6265 Details on Project & Paper Presentation.
CS527 Topics in Software Engineering (Software Testing and Analysis) Darko Marinov August 30, 2011.
Test and Verification Solutions, 28 October 2009. Improved time to market through automated software testing. Mike Bartley,
CAPP: Change-Aware Preemption Prioritization Vilas Jagannath, Qingzhou Luo, Darko Marinov Sep 6th 2011.
Automated Test Generation CS Outline Previously: Random testing (Fuzzing) – Security, mobile apps, concurrency Systematic testing: Korat – Linked.
Cs498dm Software Testing Darko Marinov January 27, 2009.
PHY 107 – Programming For Science. Announcements. Need to learn concepts: no magic formulas exist; a single solution is not useful;
CS 5150 Software Engineering Lecture 22 Reliability 3.
Random Test Generation of Unit Tests: Randoop Experience
Week 6 schedule (Monday–Friday): Testing III, Reading due, Group meetings, Testing IV, Section, ZFR due, ZFR demos, Progress report due, Readings out.
Objective ICT : Internet of Services, Software & Virtualisation FLOSSEvo some preliminary ideas.
CS 5150 Software Engineering Lecture 21 Reliability 2.
CS223: Software Engineering Lecture 25: Software Testing.
Test Case Purification for Improving Fault Localization presented by Taehoon Kwak SoftWare Testing & Verification Group Jifeng Xuan, Martin Monperrus [FSE’14]
Cs498dm Software Testing Darko Marinov January 24, 2012.
CS527: (Advanced) Topics in Software Engineering (Software Testing and Analysis) Darko Marinov August 25, 2011.
Marcelo d’Amorim (UIUC)
Eclat: Automatic Generation and Classification of Test Inputs
Random Unit-Test Generation with MUT-aware
CSE 1020: Software Development
(presenter: Jee-weon Jung)
Darko Marinov February 5, 2009
Presentation transcript:

CS527 Topics in Software Engineering (Software Testing and Analysis)
Darko Marinov
September 7, 2010

Schedule
First few lectures to help you select projects
– Previously: Intro, ReAssert, UDITA, Pex
– Today: Randoop (test generation)
– Sep 9: JPF (model checking), note: journal paper
– Sep 14: CHESS (multithreaded testing)
– Sep 16: Regression testing, note: survey paper
– Sep 21: Static (code) analysis?
– Sep 23: GUI testing?
– Sep 28: Analysis of code comments?
– Your suggestions?

Reports, Project, Presentations
Paper reports due for every lecture
– 4 items for now (fewer after proposals)
– Did you get feedback for the first report? No more feedback except for projects (and non-ASCII :)
Project proposals due September 30
– It's hard to write a good proposal in a day/week
– Start the discussion with Sandro and me early
Your presentations start on October 7
– Need to choose papers well in advance
– Can sign up for slots even now
– Bonus given to those who sign up for the first lecture

Paper Today
Feedback-directed random test generation, by Carlos Pacheco, Shuvendu K. Lahiri, Michael D. Ernst, and Thomas Ball (ICSE 2007)
Slides courtesy of Carlos Pacheco

Paper Overview
Problem (Question)
– Generate unit tests (with high coverage?)
Solution (Result)
– Generate sequences of method calls
– Random choice of methods and parameters
– Publicly available tool for Java (Randoop)
Evaluation (Validation)
– Data structures (JPF is next lecture)
– Checking API contracts
– Regression testing (lecture next week)
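To make the solution sketched above concrete, here is a minimal, self-contained Java illustration of feedback-directed generation. It is only a sketch of the idea, not Randoop's actual code or API: it picks a random public method of a class under test, draws the receiver and arguments from a pool of previously created values, discards calls that throw, checks one simple API contract (reflexivity of equals, which is among the contracts Randoop checks by default), and feeds legal results back into the pool so later calls extend earlier sequences. The class name MiniFeedbackDirectedGenerator, the choice of java.util.ArrayList as the subject, the seed values, and the 200-call budget are all illustrative assumptions.

import java.lang.reflect.Method;
import java.lang.reflect.Modifier;
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

/**
 * Toy illustration of feedback-directed random test generation in the spirit
 * of Randoop (NOT Randoop's implementation). It grows a pool of values by
 * calling random public methods of a class under test, feeds legal results
 * back into the pool, discards calls that throw, and checks a trivial
 * API contract after each call.
 */
public class MiniFeedbackDirectedGenerator {

    private static final Random RANDOM = new Random(0);

    public static void main(String[] args) throws Exception {
        Class<?> classUnderTest = java.util.ArrayList.class; // example subject
        List<Object> pool = new ArrayList<>();
        pool.add(classUnderTest.getDeclaredConstructor().newInstance()); // seed receiver
        pool.add(42);      // seed primitive argument
        pool.add("hello"); // seed String argument

        Method[] methods = classUnderTest.getMethods();
        for (int i = 0; i < 200; i++) {
            Method m = methods[RANDOM.nextInt(methods.length)];
            if (Modifier.isStatic(m.getModifiers())) continue; // keep the sketch simple

            // Pick a receiver and arguments from the pool of previously created values.
            Object receiver = randomOfType(pool, classUnderTest);
            Object[] actuals = new Object[m.getParameterCount()];
            boolean feasible = receiver != null;
            for (int p = 0; feasible && p < actuals.length; p++) {
                actuals[p] = randomOfType(pool, m.getParameterTypes()[p]);
                feasible = actuals[p] != null;
            }
            if (!feasible) continue; // no suitable arguments in the pool yet

            try {
                Object result = m.invoke(receiver, actuals);
                // Contract check: Randoop checks richer contracts (equals, hashCode, ...);
                // here we only check reflexivity of equals on the receiver.
                if (!receiver.equals(receiver)) {
                    System.out.println("Contract violation after calling " + m.getName());
                }
                // Feedback: reuse legal, non-null results in later calls.
                if (result != null) {
                    pool.add(result);
                }
            } catch (Throwable ignored) {
                // Illegal call or runtime failure (even StackOverflowError from
                // pathological states): discard it and do not extend this sequence.
            }
        }
        System.out.println("Pool size after generation: " + pool.size());
    }

    /** Returns a random pool value assignable to {@code type}, or null if none exists. */
    private static Object randomOfType(List<Object> pool, Class<?> type) {
        Class<?> boxed = box(type);
        List<Object> candidates = new ArrayList<>();
        for (Object o : pool) {
            if (boxed.isInstance(o)) {
                candidates.add(o);
            }
        }
        return candidates.isEmpty() ? null : candidates.get(RANDOM.nextInt(candidates.size()));
    }

    /** Maps primitives to wrapper classes so isInstance works for reflective arguments. */
    private static Class<?> box(Class<?> t) {
        if (t == int.class) return Integer.class;
        if (t == boolean.class) return Boolean.class;
        if (t == long.class) return Long.class;
        if (t == double.class) return Double.class;
        if (t == char.class) return Character.class;
        return t;
    }
}

The feedback step is what distinguishes this from purely random generation: without reusing earlier results, the generator would rarely reach object states that take several calls to set up.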

Questions for Discussion (1)
Is there a limit on the length of the sequences generated by Randoop? [ST]
Is sequence extension effective when/if a different violation results (e.g., non-determinism)? [MK]
Can this method also be effective for other types of classes, such as UI or business logic? [HY]
How can we combine systematic testing and random testing? [QL]
Why is BFS deemed preferable to DFS in systematic testing? [DM]

Questions for Discussion (2)
Can we deploy symbolic execution to prioritize some values when coverage of some methods is small? [MG]
What are some other options for adding repetition to the generator? [JN]
What is the tradeoff between random and systematic testing? [AL]
Other applications/domains?
How do we know what coverage is good?
What are the tool's limitations?
What about methods that change their arguments?

Questions for Discussion (3)
What is the relation between test coverage and correct code? High coverage and bugs?
What is Randoop well suited for? What is Randoop not well suited for?
Why is Randoop not used more often?
What is the number of false positives?
Combining Randoop and Pex (or JPF or X)?
– Do they find complementary bugs?
Can we parallelize Randoop?
Can we use application-specific filters?

Past Questions for Discussion (1)
Evaluation considered mostly good, but
– Why evaluate on data structures?
– Why not compare with more tools?
What kind of bugs were found?
– Any feedback from developers on the bugs?
ISSTA 2008 paper (see optional paper)
– When is this technique good/bad?
– What to use to complement Randoop?
Why is this solution better than others?

Past Questions for Discussion (2)
What about non-primitive inputs?
– Can Randoop generate input programs?
What about testing business systems?
How to reduce the # of illegal sequences?
How many seeds to use in runs?
What are the general pros/cons of random testing?
Is time a good stopping condition?

Sample Project Ideas
First: would Randoop by itself be a good class project (is it too big or too small)?
Apply Randoop to some software
Extend Randoop, e.g., directed generation (see the sketch below)
– Improve effectiveness or performance
Apply random testing in another domain
Compare with more tools and techniques
Evaluate for regression testing
– Assertion failures? False alarms? Repair?
Check the ProjectIdeas page on the Randoop site
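For the "extend Randoop with directed generation" idea above, one possible starting point is to bias method selection toward under-covered code instead of choosing uniformly at random. The sketch below is purely hypothetical: it does not use Randoop's real extension points, and the class name CoverageBiasedSelector, the pick method, and the coverage map are invented for illustration only.

import java.lang.reflect.Method;
import java.util.List;
import java.util.Map;
import java.util.Random;

/**
 * Hypothetical sketch of "directed" method selection for a Randoop-style
 * generator: instead of picking methods uniformly at random, bias the choice
 * toward methods whose code has the fewest covered branches so far.
 * None of these types come from Randoop itself.
 */
public class CoverageBiasedSelector {

    private final Random random = new Random();

    /** Picks the next method to call; lower observed coverage means higher weight. */
    public Method pick(List<Method> candidates, Map<Method, Integer> coveredBranches) {
        if (candidates.isEmpty()) {
            throw new IllegalArgumentException("no candidate methods");
        }
        double[] weights = new double[candidates.size()];
        double total = 0;
        for (int i = 0; i < candidates.size(); i++) {
            int covered = coveredBranches.getOrDefault(candidates.get(i), 0);
            weights[i] = 1.0 / (1 + covered); // fewer covered branches -> higher weight
            total += weights[i];
        }
        // Roulette-wheel selection over the weights.
        double r = random.nextDouble() * total;
        for (int i = 0; i < weights.length; i++) {
            r -= weights[i];
            if (r <= 0) return candidates.get(i);
        }
        return candidates.get(candidates.size() - 1); // numerical fallback
    }
}

Plugging such a selector into a feedback-directed generation loop would direct the random search at under-tested methods while leaving the feedback mechanism itself unchanged.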