Quality Assurance: Test Development & Execution
Ian S. King
Test Development Lead, Windows CE Base OS Team
Microsoft Corporation

Introduction: Ian King
- Manager of Test Development for Windows CE Base OS (kernel, drivers, file systems)
- Previous projects at Microsoft: MSN 1.x online service, Site Server 3.0, TransPoint online service, Speech API 5.0
- Previously: business analyst, Pacific Telecom

Implementing Testing

What makes a good tester?
- Analytical: asks the right questions; develops experiments to get answers
- Methodical: follows experimental procedures precisely; documents observed behaviors, their precursors, and their environment
- Brutally honest: you can't argue with the data

How do test engineers fail?
- Desire to "make it work": the tester is an impartial judge, not a "handyman"
- Trust in opinion or expertise: trust no one; the truth (the data) is in there
- Failure to follow the defined test procedure: how did we get here?
- Failure to document the data
- Failure to believe the data

Testability
- Can all of the feature's code paths be exercised through APIs, events/messages, etc.? Watch for unreachable internal states.
- Can the feature's behavior be programmatically verified?
- Is the feature too complex to test? Consider configurations, locales, etc.
- Can the feature be tested in a timely way with available resources? Long test latency = late discovery of faults.

Test Categories
- Functional: does it work?
- Performance: how fast/big/high/etc.?
- Security: access only to those authorized
- Stress: working stress, and breaking stress (how does it fail?)
- Reliability/Availability

Manual Testing
Definition: a test that requires direct human interaction with the system under test (SUT).
Necessary when:
- the GUI is the tested element
- behavior is premised on physical activity (e.g. card insertion)
Advisable when:
- automation would be more complex than the SUT
- the SUT is changing rapidly (early development)

Automated Testing
- Good: replaces manual testing
- Better: performs tests that are difficult to run manually (e.g. timing-related issues)
- Best: enables other types of testing (regression, performance, stress, lifetime)
- Risks: time investment to write automated tests; tests may need to change when features change

Types of Automation Tools: Record/Playback
- Record a "proper" run through the test procedure (inputs and outputs)
- Play back the inputs; compare outputs with the recorded values (sketch below)
- Advantage: requires little expertise
- Disadvantage: little flexibility; easily invalidated by product change
- Disadvantage: updating recordings requires manual involvement
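A minimal sketch of the record/playback idea in Python, assuming a JSON recording format and a run_step callable that drives the SUT; both are illustrative, not any particular tool's API:

```python
import json

def record(steps, path):
    """Save a 'proper' manual run as (input, observed output) pairs."""
    with open(path, "w") as f:
        json.dump([{"input": i, "output": o} for i, o in steps], f)

def playback(path, run_step):
    """Re-drive the recorded inputs and diff outputs against the recording."""
    failures = []
    with open(path) as f:
        for step in json.load(f):
            actual = run_step(step["input"])   # send the recorded input to the SUT
            if actual != step["output"]:       # literal comparison with the recording
                failures.append((step["input"], step["output"], actual))
    return failures
```

The playback compares against literal recorded outputs, which is exactly the brittleness noted above: any intentional change to the product's output invalidates the recording.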

Types of Automation Tools: Scripted Record/Playback
- Fundamentally the same as simple record/playback
- The inputs/outputs recorded during a manual test run are converted to a script
- Advantage: existing tests can be maintained as programs
- Disadvantage: requires more expertise
- Disadvantage: fundamental product changes can ripple through MANY scripts

Types of Automation Tools: Script Harness
- Tests are programmed as modules, then run by the harness (toy sketch below)
- The harness provides control and reporting
- Advantage: tests can be very flexible
- Advantage: tests can exercise features the same way customers' code does
- Disadvantage: requires considerable expertise and a more abstract process
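A toy harness in that spirit; it assumes each test module exposes a run() function returning True or False, a convention invented here for illustration:

```python
import importlib
import traceback

def run_suite(module_names):
    """Load each test module, run it, and report results in one place."""
    results = {}
    for name in module_names:
        try:
            mod = importlib.import_module(name)        # tests are plain modules
            results[name] = "PASS" if mod.run() else "FAIL"
        except Exception:                              # a crash is a result too
            results[name] = "ERROR"
            traceback.print_exc()
    for name, outcome in sorted(results.items()):      # central reporting
        print(f"{outcome:5}  {name}")
    return results
```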

Types of Automation Tools: Verb-Based Scripting
- A module is programmed to invoke product behavior at a low level; each operation is associated with a 'verb' (sketch below)
- Tests are designed using the defined set of verbs
- Advantage: great flexibility
- Advantage: changes are usually localized to a given verb
- Disadvantage: requires considerable expertise and a high-level, abstract process
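A sketch of the verb idea against a stand-in "product" (a dict), so it runs end to end; the verbs and the fake product are hypothetical:

```python
PRODUCT = {}   # a tiny fake "product" so the sketch runs end to end

VERBS = {}
def verb(name):
    def register(fn):
        VERBS[name] = fn
        return fn
    return register

@verb("create")
def do_create(key):
    PRODUCT[key] = ""

@verb("append")
def do_append(key, data):
    PRODUCT[key] += data

@verb("check")
def do_check(key, expected):
    assert PRODUCT[key] == expected, (key, PRODUCT[key], expected)

def run_script(script):
    """A test is just a list of verbs; product changes stay inside one verb."""
    for name, *args in script:
        VERBS[name](*args)

run_script([("create", "f"), ("append", "f", "abc"), ("check", "f", "abc")])
```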

Test Corpus
- A body of data that generates known results
- Can be obtained from:
  - the real world, which demonstrates the customer experience
  - a test generator, which is more deterministic (sketch below)
- Caveats: is there bias in the data generation? Don't share the test corpus with developers!
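The generator route can be sketched with addition standing in for the SUT's real function; the expected value is known by construction, and fixing the seed keeps the corpus reproducible:

```python
import random

def make_corpus(n, seed=42):
    """Deterministically generate (input, expected) pairs."""
    rng = random.Random(seed)            # fixed seed -> same corpus every run
    corpus = []
    for _ in range(n):
        a = rng.randint(-10**6, 10**6)
        b = rng.randint(-10**6, 10**6)
        corpus.append(((a, b), a + b))   # expected result known by construction
    return corpus
```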

Instrumented Code: Test Hooks
- Code that enables non-invasive testing
- The code remains in the shipping product
- May be enabled through:
  - a special API
  - a special argument or argument value
  - a registry value or environment variable (sketch below)
- Example: Windows CE IOCTLs
- Risk: silly customers may find and use the hooks…
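A minimal sketch of an environment-variable-gated hook; the variable name MYAPP_TEST_HOOKS and the function are hypothetical, not a Windows CE mechanism:

```python
import os

# The hook stays in the shipping product but is inert unless the tester
# sets MYAPP_TEST_HOOKS=1 in the environment (hypothetical variable name).
TEST_HOOKS = os.environ.get("MYAPP_TEST_HOOKS") == "1"

def process(request, state):
    if TEST_HOOKS:
        print(f"[hook] state before: {state!r}")   # non-invasive observation point
    state = state + [request]
    if TEST_HOOKS:
        print(f"[hook] state after:  {state!r}")
    return state
```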

Instrumented Code: Diagnostic Compilers
- Create an 'instrumented' SUT for testing
- Profiling: where does the time go?
- Code coverage: what code was touched? (This really evaluates the testing, NOT code quality.)
- Syntax/coding style: discover bad coding (lint, the original syntax checker)
- Complexity: very esoteric, often disputed (religiously); example: function point counting

Advanced Tools: Modeling
- Example: AsmL
- Model behavior as a set of states and transitions (sketch below)
- Even multithreaded code is inherently serial
- Stochastic elements can be made explicit
- Advantage: the test design exists before the code is written
- Advantage: you can test the test code
- Disadvantage: creation and maintenance overhead
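Not AsmL, but the same idea in a minimal Python sketch: a toy media-player model whose seeded random walk (the stochastic element made explicit) produces an action sequence for driving the SUT:

```python
import random

# States and transitions for a toy media player (hypothetical SUT).
TRANSITIONS = {
    "stopped": {"play": "playing"},
    "playing": {"pause": "paused", "stop": "stopped"},
    "paused":  {"play": "playing", "stop": "stopped"},
}

def random_walk(steps, seed=0):
    """Walk the model; the trace is a test sequence to replay against the SUT."""
    rng = random.Random(seed)            # stochastic element, made explicit
    state, trace = "stopped", []
    for _ in range(steps):
        action, nxt = rng.choice(sorted(TRANSITIONS[state].items()))
        trace.append((state, action, nxt))
        state = nxt
    return trace   # drive the SUT with these actions and compare its states
```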

Instrumented Platforms
- Example: App Verifier
- Supports 'shims' that instrument standard system calls such as memory allocation (sketch of the idea below)
- Tracks all activity and reports errors such as unreclaimed allocations, multiple frees, use of freed memory, etc.
- Win32 includes 'hooks' for platform instrumentation
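App Verifier itself is a Win32 tool, but the shim idea can be sketched abstractly: route allocation and free through a tracker that flags double frees and leaks. This class is illustrative, not App Verifier's API:

```python
class AllocVerifier:
    """Stand-in for an allocation shim: all alloc/free traffic is tracked."""
    def __init__(self):
        self.live = {}        # handle -> size, for every outstanding allocation
        self.next_id = 0

    def alloc(self, size):
        self.next_id += 1
        self.live[self.next_id] = size
        return self.next_id

    def free(self, handle):
        if handle not in self.live:   # multiple free, or freeing garbage
            raise RuntimeError(f"double free or bad handle: {handle}")
        del self.live[handle]

    def report(self):
        """Anything still live at shutdown is an unreclaimed allocation."""
        return [f"leak: handle {h} ({n} bytes)" for h, n in self.live.items()]
```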

Environment Management Tools
- Predictably simulate real-world situations: MemHog (sketch below), DiskHog, CPU 'eater', data channel simulator
- Reliably reproduce the environment: source control tools, a consistent build environment, disk imaging tools
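The 'hog' tools are conceptually tiny. A crude MemHog sketch; the chunk size and hold-until-keypress behavior are illustrative:

```python
def mem_hog(mb):
    """Crudely apply memory pressure: grab roughly `mb` MiB and hold it."""
    hog = [bytearray(1024 * 1024) for _ in range(mb)]   # 1 MiB per chunk, kept alive
    input(f"Holding ~{len(hog)} MiB; press Enter to release...")
```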

Test Monkeys
- Generate random input and watch for a crash or hang (sketch below)
- Typically 'hooks' the UI through the message queue
- Primarily catches "local minima" in the state space (logic "dead ends")
- Useless unless the state at the time of failure is well preserved!
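A sketch of a monkey loop; sut_step and the action list stand in for message-queue injection, and the seed plus the action trace are what preserve the state at the time of failure:

```python
import random
import traceback

def monkey(sut_step, actions, iterations=10_000, seed=None):
    """Fire random actions at the SUT; on failure, dump enough to reproduce."""
    seed = seed if seed is not None else random.randrange(2**32)
    rng = random.Random(seed)
    trace = []
    try:
        for _ in range(iterations):
            action = rng.choice(actions)
            trace.append(action)
            sut_step(action)                 # inject the input into the SUT
    except Exception:
        # Preserve the failure state: seed + trace make the crash replayable.
        print(f"seed={seed}, last 20 actions={trace[-20:]}")
        traceback.print_exc()
        raise
```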

Finding and Managing Bugs

What is a bug?
- Formally, a "software defect":
  - the SUT fails to perform to spec
  - the SUT causes something else to fail
  - the SUT functions, but does not satisfy usability criteria
- If the SUT works to spec and someone wants it changed, that's a feature request

What do I do once I find one? Bug tracking is a valuable tool:
- Ensures the bug isn't forgotten
- Highlights recurring issues
- Supports a formal resolution/regression process
- Provides important product cycle data
- Can support 'higher level' metrics, e.g. root cause analysis
- Provides valuable information for field support

What are the contents of a bug report? (sketched as a record type below)
- Repro steps: how did you cause the failure?
- Observed result: what did it do?
- Expected result: what should it have done?
- Collateral information: return values/output, debugger output, etc.
- Environment: test platforms must be reproducible ("It doesn't do it on my machine")
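Those fields map naturally onto a record type; a hypothetical sketch:

```python
from dataclasses import dataclass, field

@dataclass
class BugReport:
    title: str
    repro_steps: list        # how did you cause the failure?
    observed: str            # what did it do?
    expected: str            # what should it have done?
    environment: str         # OS/build/hardware - platforms must be reproducible
    collateral: dict = field(default_factory=dict)  # return values, debugger output
    severity: int = 3
    priority: int = 3
```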

Tracking Bugs
- Raw bug count: the slope is a useful predictor
- Ratio by ranking: how bad are the bugs we're finding?
- Find rate vs. fix rate: one step forward, two back? (sketch below)
- Informs management choices: load balancing, review of development quality
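Find rate vs. fix rate is simple arithmetic; a sketch, with made-up weekly numbers in the comment:

```python
def net_open_bugs(found_per_week, fixed_per_week):
    """Running net of found minus fixed: a rising series means the backlog grows."""
    net, series = 0, []
    for found, fixed in zip(found_per_week, fixed_per_week):
        net += found - fixed
        series.append(net)
    return series

# e.g. net_open_bugs([30, 28, 25, 15], [10, 20, 27, 25]) -> [20, 28, 26, 16]
```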

Ranking Bugs (sketch below)
Severity:
- Sev 1: crash, hang, data loss
- Sev 2: blocks feature, no workaround
- Sev 3: blocks feature, workaround available
- Sev 4: trivial (e.g. cosmetic)
Priority:
- Pri 1: fix immediately (blocking)
- Pri 2: fix before the next release outside the team
- Pri 3: fix before ship
- Pri 4: fix if there's nothing better to do
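A sketch encoding the two scales as enums; note that they are independent axes, so e.g. a cosmetic Sev 4 misspelling of the product name could still be Pri 1:

```python
from enum import IntEnum

class Severity(IntEnum):     # how bad is the failure?
    CRASH_HANG_DATA_LOSS = 1
    BLOCKS_NO_WORKAROUND = 2
    BLOCKS_WITH_WORKAROUND = 3
    TRIVIAL = 4

class Priority(IntEnum):     # how soon must it be fixed?
    FIX_IMMEDIATELY = 1
    FIX_BEFORE_NEXT_EXTERNAL_RELEASE = 2
    FIX_BEFORE_SHIP = 3
    FIX_IF_TIME_PERMITS = 4
```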

A Bug’s Life

Regression Testing
- Good: rerun the test that failed, or write a test for what you missed
- Better: rerun related tests (e.g. at the component level)
- Best: rerun all product tests (selection sketch below)
- Automation can make this feasible!
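The good/better/best ladder expressed as a selection function; the test objects and their component attribute are hypothetical:

```python
def select_regression_tests(all_tests, failed_test, level):
    """good: rerun just the failure; better: its component; best: everything."""
    if level == "good":
        return [failed_test]
    if level == "better":
        return [t for t in all_tests if t.component == failed_test.component]
    return list(all_tests)   # "best" - automation makes full reruns feasible
```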

To beta, or not to beta
- Quality bar for a beta release: features mostly work if you use them right
- Pro: get early customer feedback on the design; real-world workflows find many important bugs
- Con: do you have time to incorporate beta feedback? A beta release takes time and resources

Developer Preview
- A different quality bar than a beta:
  - known defects, even crashing bugs
  - known conflicts with the previous version
  - setup/uninstall not completed
- Goals: review of the feature set; review of the API set by technical consumers

Dogfood
- "So good, we eat it ourselves"
- Advantage: real-world use patterns
- Disadvantage: impact on productivity
- At Microsoft, we model our customers: 50K employees, a broad range of work assignments and software savvy, a wide-ranging (worldwide) network

When can I ship?
- Test coverage is "sufficient"
- The bug slope and find vs. fix rates lead to convergence
- The severity mix is primarily low-sev
- The priority mix is primarily low-pri