1
Copyright © 1996-2002, Satisfice, Inc. v1.4.3
I grant permission to make digital or hard copies of this work for personal or classroom use, provided that (a) copies are not made or distributed for profit or commercial advantage; (b) copies bear this notice and full citation on the first page, and if you distribute the work in portions, the notice and citation must appear on the first page of each portion (abstracting with credit is permitted; the proper citation for this work is Rapid Software Testing (course notes, Fall 2002), www.testing-education.org); (c) each page that you use from this work must bear the notice “Copyright (c) James Bach, james@satisfice.com” or, if you modify the page, “Modified slide, originally from James Bach”; and (d) if a substantial portion of a course that you teach is derived from these notes, advertisements of that course must include the statement, “Partially based on materials provided by James Bach.” To copy otherwise, to republish or post on servers, or to distribute to lists requires prior specific permission and a fee. Request permission to republish from James Bach, james@satisfice.com.
2
Acknowledgements. Some of this material was developed in collaboration with Dr. Cem Kaner, of the Florida Institute of Technology. Many of the ideas in this presentation were reviewed and extended by my colleagues at the 7th Los Altos Workshop on Software Testing. I appreciate the assistance of the LAWST 7 attendees: Cem Kaner, Brian Lawrence, III, Jack Falk, Drew Pritsker, Jim Bampos, Bob Johnson, Doug Hoffman, Chris Agruss, Dave Gelperin, Melora Svoboda, Jeff Payne, James Tierney, Hung Nguyen, Harry Robinson, Elisabeth Hendrickson, Noel Nyman, Bret Pettichord, & Rodney Wilson.
3
Assumptions You test software. You have at least some control over the design of your tests and some time to create new tests. One of your goals is to find important bugs fast. You test things under conditions of uncertainty and time pressure. You have control over how you think and what you think about. You want to get very good at software testing.
4
Goal of this Class This class is about how to test a product when you have to test it right now, under conditions of uncertainty, in a way that stands up to scrutiny. (This skill comes in handy even when you have more time, more certainty, and are subject only to your own scrutiny.)
5
Internet Time! What is Internet time like? Less time than you’d like. Faster is better. Your plans will be interrupted. You are not fully in control of your time.
6
The Challenge. When the ball comes to you… Do you know you have the ball? Can you receive the pass? Do you know what your role and mission are? Do you know where your teammates are? Are you ready to act, right now? Can you let your teammates help you? Do you know your options? Is your equipment ready? Can you read the situation on the field? Are you aware of the criticality of the situation?
7
Your Moves: The Rapid Testing Cycle. START → Make sense of your status → Compare status against mission → Focus on what needs doing → Do a burst of testing → Report → loop back (or STOP).
8
What About “Slow” Testing? Rapid testing is a subset of all testing: you can do it no matter what. Rigorous or thorough testing demands automation, extensive preparation, super testability, and super skill. Management likes to talk about this… but they don’t fund it.
9
KEY IDEA
10
Testing is in Your Head. The important parts of testing don’t take place in the computer or on your desk. [diagram: the specifications and the product feed a tester’s head full of technical knowledge, domain knowledge, critical thinking, and experience; out of it come “Ah! Problem!”, coverage, problem reports, and communication]
11
Welcome to Epistemology: the study of knowledge.
12
Epistemology Epistemology is the study of how we know what we know. The philosophy of science belongs to Epistemology. All good testers practice Epistemology.
13
Basic Skills of Epistemology Ability to pose useful questions. Ability to observe what’s going on. Ability to describe what you perceive. Ability to think critically about what you know. Ability to recognize and manage bias. Ability to form and test conjectures. Ability to keep thinking despite already knowing. Ability to analyze someone else’s thinking.
14
Epistemology Lesson: Abductive Inference. Abductive inference means finding the best explanation for a set of data. 1. Collect data. 2. Find several explanations that account for the data. 3. Find more data that is either consistent or inconsistent with the explanations. 4. Choose the best explanation that accounts for the important data, or keep searching.
15
Epistemology Lesson: Conjecture and Refutation We never know product quality for certain. We conjecture about quality. To conjecture is to explore plausible realities. A good conjecture is falsifiable: we can imagine facts that would refute the conjecture. A conjecture can never be proven, only corroborated. Corroborating evidence is most interesting when gained as the result of a genuine attempt to refute. Good testing is a serious attempt to refute and corroborate conjectures about product quality.
16
Epistemology of Magic. Magic tricks work for the same reasons that bugs exist: our thinking is limited (we misunderstand probabilities, we use the wrong heuristics, we lack specialized knowledge, we forget details, we don’t pay attention to the right things) and the world is hidden (states, sequences, processes, attributes, variables, identities). Studying magic can help you develop the imagination to find better bugs. Sufficiently advanced technology is indistinguishable from magic.
17
Tester vs. Tourist: Testers Ask Critical Questions How well does the product work? How do you know how well it works? What evidence do you have about how it works? Is that evidence reliable and up to date? What does it mean for the product to “work”? What facts would cause you to believe that it doesn’t work? In what ways could it not work, yet seem to you that it does? In what ways could it work, yet seem to you that it doesn’t? What might cause the product not to work well (or at all)? What would cause you to suspect that it will soon stop working? Do other people use the product? How does it work for them? How important is it for this product to work? Are you qualified to answer these questions? Is anyone else qualified?
18
Heuristics: Generating Solutions Quickly adjective: “serving to discover.” noun: “a useful method that doesn’t always work.” “Heuristic reasoning is not regarded as final and strict but as provisional and plausible only, whose purpose is to discover the solution to the present problem.” - George Polya, How to Solve It
19
Some Everyday Heuristics. It’s dangerous to drink and drive. A bird in hand is worth two in the bush. Nothing ventured, nothing gained. Sometimes people stash their passwords near their computers. Try looking there. Stores are open later during the Holidays. If your computer is behaving strangely, try rebooting. If it’s very strange, reinstall Windows. If it’s a genuinely important task, your boss will follow up; otherwise, you can ignore it.
20
How Heuristics Differ from Other Procedures or Methods Heuristics are known to be wrong, at least some of the time. No one can say for sure when a heuristic will work. Heuristics aid or focus an open-ended problem-solving or solution-searching effort. Heuristics can substitute for complete or rigorous analysis.
21
Tunnel-Vision is Our Great Occupational Hazard. [diagram: the problems you can find with your biases are a small region within a much larger space of invisible problems]
22
A Tester’s Attitude. Cautious: Jump to conjectures, not conclusions. Practice admitting “I don’t know.” Have someone check your work. Curious: What would happen if…? How does that work? Why did that happen? Critical: Proceed by conjecture and refutation. Actively seek counter-evidence. Good testers are hard to fool. (and Courageous)
23
Important Vocabulary Abductive inference is a way to make respected guesses. Conjectures (a.k.a. hypotheses) help us remember that we may be wrong about our beliefs. Refutation means checking how you could be wrong, before trusting a conjecture. Evidence is either consistent with (corroborates) or inconsistent with a given conjecture. Heuristics help us get the right ideas at the right times. Raising questions and issues is the heart of testing.
24
The Triangle Program This program takes three numbers as input. The numbers represent the dimensions of a triangle. When you click on the check button, the program tells you what kind of triangle the sides represent: scalene (no side equal to any other) isosceles (two sides are equal) equilateral (all sides are equal) I want you to test this program. Any questions?
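The course does not supply the Triangle program’s source, so here is a minimal, hypothetical Python sketch of the classification logic described above, deliberately naive so you can see the kind of thing a tester’s questions should probe. Note what it never checks.

```python
# Hypothetical stand-in for the Triangle program described above (not the
# course's actual code). It classifies by side equality only.

def classify(a: float, b: float, c: float) -> str:
    """Classify a triangle by its side lengths."""
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# A tester's first probing inputs:
print(classify(3, 4, 5))     # "scalene" -- the happy path
print(classify(2, 2, 2))     # "equilateral"
print(classify(1, 2, 3))     # "scalene"? 1 + 2 = 3: these sides are degenerate
print(classify(0, 0, 0))     # "equilateral"? a zero-sided triangle is no triangle
print(classify(-1, -1, -1))  # negative sides are accepted without complaint
```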
25
KEY IDEA
26
The Universal Test Procedure: “Try it and see if it works.” Learn about it. Model it. Speculate about it. Configure it. Operate it. Know what to look for. See what’s there. Understand the requirements. Identify problems. Distinguish bad problems from not-so-bad problems. [diagram labels: Models, Coverage, Evaluation]
27
All Product Testing is Something Like This. [diagram: the Project Environment, Product Elements, and Quality Criteria feed Test Techniques, which yield Perceived Quality]
28
The Five Problems of Testing: the Logistics Problem, the Coverage Problem, the Evaluation Problem, the Reporting Problem, and the Stopping Problem. [diagram: Project Environment, Product Elements, Quality Criteria, Test Techniques, Perceived Quality]
29
Models A Model is… A map of a territory A simplified perspective A relationship of ideas An incomplete representation of reality A diagram, list, outline, matrix… No good test design has ever been done without models. The trick is to become aware of how you model the product, and learn different ways of modeling.
30
Coverage. Product coverage is the proportion of the product that has been tested. There are as many kinds of coverage as there are ways to model the product: structural, functional, data, platform, operations.
31
An evaluation strategy is how you know the product works. “...it works.” really means “...it appeared to meet some requirement to some degree.” Which requirement? Appeared how? To what degree? Perfectly? Just barely?
32
Rapid Evaluation: “HICCUPP” Consistent with History: Present function behavior is consistent with past behavior. Consistent with our Image: Function behavior is consistent with an image that the organization wants to project. Consistent with Comparable Products: Function behavior is consistent with that of similar functions in comparable products. Consistent with Claims: Function behavior is consistent with what people say it’s supposed to be. Consistent with User’s Expectations: Function behavior is consistent with what we think users want. Consistent within Product: Function behavior is consistent with behavior of comparable functions or functional patterns within the product. Consistent with Purpose: Function behavior is consistent with apparent purpose.
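Most of these consistency oracles take human judgment, but consistency with History is easy to mechanize. Below is a minimal sketch, assuming a hypothetical function under test and a “golden” results file recorded from a past build; the first run only records behavior, and later runs flag anything that changed.

```python
import json
from pathlib import Path

GOLDEN = Path("golden_output.json")  # hypothetical results saved from a past build

def behavior_under_test(x: int) -> int:
    return x * x  # stand-in for the function being tested

def check_consistent_with_history(inputs) -> dict:
    """Flag inputs whose present behavior differs from past behavior."""
    past = json.loads(GOLDEN.read_text()) if GOLDEN.exists() else {}
    current = {str(x): behavior_under_test(x) for x in inputs}
    diffs = {k: (past[k], v) for k, v in current.items()
             if k in past and past[k] != v}
    GOLDEN.write_text(json.dumps(current))  # record for the next cycle
    return diffs

print(check_consistent_with_history(range(10)))  # {} until behavior changes
```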
33
Exercise: Explain what you covered and what your evaluation strategies were for the Triangle program.
34
KEY IDEA
35
Test Cycle: What You Do With a Build. 1. Receive the product. Formal builds vs. informal builds. Save old builds. 2. Clean your system. Completely uninstall earlier builds. 3. Verify testability. Smoke testing (see the sketch below). Suspend the test cycle if the product is untestable.
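Step 3 is often automated. A minimal smoke-test gate, sketched in Python; the myapp command and sample.doc file are illustrative stand-ins, not anything from the course.

```python
"""Smoke test: can the build do anything at all? Run before accepting a
build into the test cycle (step 3 above)."""
import subprocess
import sys

CHECKS = [
    ["myapp", "--version"],           # hypothetical: does the product launch?
    ["myapp", "open", "sample.doc"],  # hypothetical: can it touch its main data?
]

def smoke() -> bool:
    for cmd in CHECKS:
        try:
            result = subprocess.run(cmd, capture_output=True, timeout=60)
        except (OSError, subprocess.TimeoutExpired):
            return False
        if result.returncode != 0:
            return False
    return True

if __name__ == "__main__":
    if not smoke():
        sys.exit("Build is untestable: suspend the test cycle.")
    print("Smoke tests passed; build accepted for testing.")
```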
36
Test Cycle: What You Do With a Build. 4. Determine what is new or changed. Change log. 5. Determine what has been fixed. Bug tracking system. 6. Test fixes. Many fixes fail! Also test nearby functionality. 7. Test new or changed areas. Exploratory testing.
37
Test Cycle: What You Do With a Build. 8. Perform regression testing. Not performed for an incremental cycle. Automated vs. manual. Important tests first! 9. Report results. Coverage. Observations. Bug status (new, existing, reopened, closed). Assessment of quality. Assessment of testability.
38
The Test Cycle. [chart: quality vs. time, showing typical quality growth rising toward “Ship it!”]
39
Risk Focus: Common and Critical Cases Core functions: the critical and the popular. Capabilities: can the functions work at all? Common situations: popular data and pathways. Common threats: likely stress and error situations. User impact: failures that would do a lot of damage. Most wanted: problems of special interest to someone else on the team.
40
Test Cycle Convergence. [chart: quality vs. time from cycle start to cycle end for each build, converging toward the Good Enough Quality line. Tests passed: known good. Tests failed: known bad. Tests not run: unknown. Labels: Ideal Test Cycle Convergence; Untestable Quality.]
41
Rapid Bug Investigation. Identify: Notice a problem. Recall what you were doing just prior to the problem. Examine symptoms of the problem without disturbing system state. Consider the possibility of tester error. Investigate: How can the problem be reproduced? What are the symptoms of the problem? How severe could the problem be? What might be causing the problem? Check: Do we know enough about the problem to report it? Is it important to investigate this problem right now? Is this problem, or any variant of it, already known? How do we know this is really a problem? Is there someone else who can help us?
42
Exercise “I’ve modified the Triangle program so that it now renders an image of the triangle. I’m also worried about the limitations of the input field. Please do another test cycle on it.” -- Your Friendly Programmer
43
Exercise: Provide a clear report on how completely you tested the Triangle program.
44
An Innocent Question from Management… “On a scale of 0 to 100, how completely did you test the Triangle program?” (0 = completely untested, 100 = completely tested) 1. Answer with a number. 2. You can also offer a comment, if you have one. 3. Any questions?
45
What Could We Report? Tasks: Things we are doing. Example: “We are trying to reinstall the server so we can do the smoke tests. We have 28 fix verifications to get through.” Product Coverage: What we have examined. Example: “We have tested printing, performance, and compatibility.” Product Risk: Problems or potential for problems in the product. Example: “We have found 53 bugs.” Agreement: What we specifically contracted to do. Example: “We have implemented 90% of the tests on our test plan. As we agreed, we have started testing for printer compatibility.”
46
What Could We Report? Project: Schedule, documents, resources, people, or anything else that makes it possible for us to test. Example: “Testing is on schedule. But Jaeline goes on vacation next week.” Mission: The ultimate goal, which may be to ship a product that meets quality criteria, satisfy customers, or some other goal that can be a guiding principle for the testing. Example: “We don’t yet know enough about feature X of the product.” Client Satisfaction: What our clients think of our work. Example: “The project manager is happy with our work.”
47
Reporting Considerations Reporter safety: What will they think if I made no progress? Client: Who am I reporting to and how do I relate to them? Rules: What rules and traditions are there for reporting, here? Significance of report: How will my report influence events? Subject of report: On what am I reporting? Other agents reporting: How do other reports affect mine? Medium: How will my report be seen, heard, and touched? Precision and confidence levels: What distinctions make a difference? Take responsibility for the communication.
48
KEY IDEA
49
Test Strategy: “How we plan to cover the product so as to develop an adequate assessment of quality.” A good test strategy is: product-specific, risk-focused, diversified, practical.
50
Test Strategy Test Approach and Test Architecture are other terms commonly used to describe what I’m calling test strategy. Example of a poorly stated (and probably poorly conceived) test strategy: “We will use black box testing, cause-effect graphing, boundary testing, and white box testing to test this product against its specification.”
51
Test Strategy Makes use of test techniques. May be expressed by test procedures and cases. Not to be confused with test logistics, which involve the details of bringing resources to bear on the test strategy at the right time and place. You don’t have to know the entire strategy in advance. The strategy should change as you learn more about the product and its problems.
52
Test Cases/Procedures Test cases and procedures should manifest the test strategy. If your strategy is to “execute the test suite I got from Joe Third-Party”, how does that answer the prime strategic questions: How will you cover the product and assess quality? How is that practical and justified with respect to the specifics of this project and product? If you don’t know, then your real strategy is that you’re trusting things to work out.
53
Exercise Produce a test strategy for DecideRight
54
Test Strategy Heuristic: Diverse Half-Measures. There is no single technique that finds all bugs. We can’t do any technique perfectly. We can’t do all conceivable techniques. Use “diverse half-measures”: lots of different points of view, approaches, and techniques, even if no one strategy is performed completely.
55
Strategy Heuristic: Function/Data Square. [2×2 diagram with Functions on one axis and Data on the other, placing smoke testing, function testing, reliability testing, and risk testing in its quadrants]
56
The Fallacy of Repeated Tests: Clearing Mines. [diagram: a field scattered with mines]
57
Totally Repeatable Tests Won’t Clear the Minefield. [diagram: fixes cluster along the repeated test paths, while mines remain everywhere else]
58
Variable Tests are Therefore More Effective. [diagram: varied test paths sweep more of the field, finding more mines]
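A toy sketch of why variation wins: a fixed suite touches the same inputs every cycle, while a seeded random suite touches new ground each cycle yet stays reproducible (log the seed and you can replay any cycle). All numbers here are illustrative.

```python
import random

def run_cycle(tests):
    return set(tests)  # the "spots" on the minefield this cycle touches

repeated = [(3, 4, 5), (2, 2, 2), (1, 2, 3)]  # a fixed, totally repeatable suite
touched_fixed, touched_variable = set(), set()

for cycle in range(10):
    touched_fixed |= run_cycle(repeated)
    rng = random.Random(cycle)  # seed per cycle: varied but reproducible
    variable = [tuple(rng.randint(0, 9) for _ in range(3)) for _ in range(3)]
    touched_variable |= run_cycle(variable)

print(len(touched_fixed))     # 3: the same spots, ten times over
print(len(touched_variable))  # ~30: new spots every cycle
```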
59
Exercise Find at least two situations where the heuristic: “Variable tests are better than repeated tests” is WRONG.
60
Test Techniques. A test technique is a recipe for performing these tasks in a way that will reveal something worth reporting: Analyze the situation. Model the test space. Select what to cover. Select evaluation methods. Configure the test system. Operate the test system. Observe the test system. Evaluate the test results.
61
General Test Techniques: Function testing, Domain testing, Stress testing, Flow testing, Scenario testing, User testing, Regression testing, Risk testing, Claims testing, Random testing. For any one of these techniques, there’s somebody, somewhere, who believes it is the only way to test.
62
Function Testing. Key idea: “March through the functions.” Summary: A function is something the product can do. Identify each function and sub-function. Determine how you would know if they worked. Test each function, one at a time. Good for: assessing capability rather than reliability.
63
Domain Testing. Key idea: “Divide the data, and conquer.” Summary: A domain is a set of test data. Identify each domain of input and output. Analyze limits and properties of each domain. Identify combinations of domains to test. Select a coverage strategy: e.g., exhaustive, boundaries, best representative. Good for: all purposes.
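A sketch of the boundary-selection step, assuming a hypothetical numeric input field valid from 1 to 100: test just outside, on, and just inside each limit, plus one best representative from the middle of the valid partition.

```python
def domain_test_points(low: int, high: int) -> list[int]:
    """Boundary values plus a best representative for a numeric range."""
    return [low - 1, low, low + 1,      # around the lower limit
            (low + high) // 2,          # best representative of the middle
            high - 1, high, high + 1]   # around the upper limit

print(domain_test_points(1, 100))  # [0, 1, 2, 50, 99, 100, 101]
```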
64
Stress Testing. Key idea: “Overwhelm the product.” Summary: Select test items and functions to stress. Identify data and platform elements that relate to them. Select or generate challenging data and platform configurations to test with: e.g., large or complex data structures, high loads, long test runs, many test cases. Good for: performance, reliability, and efficiency assessment.
65
Flow Testing. Key idea: “Do one thing after another.” Summary: Define test procedures or high-level cases that incorporate many events and states. The events may be in series, parallel, or some combination thereof. Don’t reset the system between events. Good for: finding problems fast (however, bug analysis is more difficult).
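One way to build such a flow is a seeded random walk over product operations, with no reset between events. In this sketch the App class is a hypothetical stand-in for the product under test.

```python
import random

class App:
    """Hypothetical product under test."""
    def open_doc(self): pass
    def edit(self): pass
    def undo(self): pass
    def save(self): pass
    def is_responsive(self) -> bool:
        return True

def flow_test(app: App, steps: int = 500, seed: int = 42) -> None:
    rng = random.Random(seed)  # log the seed so a failing flow can be replayed
    actions = [app.open_doc, app.edit, app.undo, app.save]
    for i in range(steps):
        action = rng.choice(actions)
        action()  # note: the system is never reset between events
        assert app.is_responsive(), f"unresponsive after step {i}: {action.__name__}"

flow_test(App())
```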
66
Scenario Testing. Key idea: “Test to a compelling story.” Summary: Design tests that involve meaningful and complex interactions with the product. A good scenario test is a plausible story of how someone who matters might do something that matters with the product. Incorporate multiple elements of the product, and data that is realistically complex. Good for: finding problems that seem important.
67
User Testing. Key idea: “Involve the users.” Summary: Identify categories of users. Understand favored and disfavored users. Find these users and have them do testing or help you design tests. User testing is powerful when you involve a variety of users. Good for: all purposes.
68
Regression Testing. Key idea: “Test the changes.” Summary: Identify what product elements changed. Identify what elements could have been impacted by the changes. Coverage strategy: e.g., recent bug fixes, past bug fixes, likely elements, all elements. Good for: managing risks related to product enhancement.
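A sketch of scoping a regression pass from a change list, assuming a hypothetical impact map of which product elements can affect which others.

```python
# Hypothetical impact map: element -> elements it can affect when it changes.
IMPACT = {
    "file i/o": {"file i/o", "printing", "converters"},
    "printing": {"printing", "page setup"},
}

def regression_scope(changed_elements: list[str]) -> set[str]:
    """Elements to retest, given what changed."""
    scope = set()
    for element in changed_elements:
        scope |= IMPACT.get(element, {element})
    return scope

print(regression_scope(["file i/o"]))  # {'file i/o', 'printing', 'converters'}
```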
69
Claims Testing. Key idea: “Verify every claim.” Summary: Identify specifications (implicit or explicit). Analyze individual claims about the product. Ask the customer to clarify vague claims. Verify each claim. Expect the specification and product to be brought into alignment. Good for: simultaneously testing the product and specification, while refining expectations.
70
Risk Testing. Key idea: “Imagine a problem, then look for it.” Summary: What kinds of problems could the product have? How would you detect them if they were there? Make a list of interesting problems and design tests specifically to reveal them. It may help to consult experts, design documentation, past bug reports, or apply risk heuristics. Good for: making best use of testing resources; leveraging experience.
71
Random Testing. Key idea: “Run a million different tests.” Summary: Look for an opportunity to automatically generate thousands of slightly different tests. Create an automated, high-speed evaluation strategy. Write a program to generate, execute, and evaluate all the tests. Good for: assessing reliability across input and time.
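A sketch of such a harness, reusing the hypothetical classify() from the Triangle sketch earlier: generate many random side lengths, then evaluate each result against a slow but obviously correct reference oracle.

```python
import random
from collections import Counter

def reference(a: int, b: int, c: int) -> str:
    """Slow but obviously correct oracle, including the triangle inequality."""
    if not (a + b > c and b + c > a and a + c > b):
        return "not a triangle"
    return {3: "equilateral", 2: "isosceles", 1: "scalene"}[4 - len({a, b, c})]

def random_test(classify, n: int = 100_000, seed: int = 0) -> Counter:
    """Generate, execute, and evaluate n random tests; tally the mismatches."""
    rng = random.Random(seed)
    failures = Counter()
    for _ in range(n):
        a, b, c = (rng.randint(-2, 10) for _ in range(3))
        if classify(a, b, c) != reference(a, b, c):
            failures[(classify(a, b, c), reference(a, b, c))] += 1
    return failures

# e.g., random_test(classify) with the naive classify() reports every
# degenerate or negative "triangle" that it happily misclassifies.
```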
72
KEY IDEA
73
Dynamic Quality Paradigm. [diagram: a quality scale from Awful to Perfect, with a floating “good enough” bar. Above the bar lies unnecessary quality; below it, unacceptable quality. Item A sits above the bar: further improvement would not be a good use of resources. Item B sits below it: further improvement is necessary. It’s more important to work on Item B.]
74
A Heuristic for Good Enough. 1. X has sufficient benefits. 2. X has no critical problems. 3. The benefits of X sufficiently outweigh the problems. 4. In the present situation, and all things considered, improving X would be more harmful than helpful. All conditions must apply. [diagram: a scale weighing Benefits against Problems]
75
Good Enough... …with what level of confidence? …to meet ethical obligations? …in what time frame? …compared to what? …for what purpose? …or else what? …for whom? Perspective is Everything
76
Test Project Dynamics: Context Model
79
Test Project Dynamics: Givens vs. Choices. Motivation: What testing does the situation require? Capability: Can we perform that testing in this situation? [diagram: GIVENS and CHOICES together determine motivation and capability]
80
Tailoring the Test Process Are GIVENS good enough? Do CHOICES about process exploit the GIVENS and address the MISSION well enough? Is MISSION achieved well enough? How do you know?
81
MISSION: The most important part Find important problems Assess quality Certify to standard Fulfill process mandates Satisfy stakeholders Assure accountability Advise about QA Advise about testing Advise about quality Maximize efficiency Minimize time Minimize cost The quality of testing depends on which of these possible missions matter and how they relate. Many debates about the goodness of testing are really debates over missions and givens.
82
Slow down the project to improve testability: more risks, lower speeds. Driving analogy: When on the freeway, go 65 mph. When on a suburban artery, go 40 mph. When in a neighborhood, go 25 mph. When in a parking lot, go 5 mph. Park very slowly. Change Control Funnel: the closer you are to shipping, the more reluctantly you should change code. To minimize retesting, carefully review proposed changes. “You can change what you like, but change may invalidate everything we thought we knew about quality.”
83
“Why didn’t you find that bug?” Highway Patrol analogy: Do you realize how hard it would be to patrol the highways so that all speeders are always caught? Risk-based improvement argument: Our goal is to find important problems fast. Was that bug important? Could we economically have anticipated it? If so, we’ll modify our approach. Testability reframe: We didn’t find it because the developers didn’t make the bug obvious enough for us to notice. The developers should put better bugs into the code. Let’s make a more testable product.
84
Exercise We may want to purchase DiskMapper. Analyze it and tell us what the testing issues might be.
85
KEY IDEA
86
Contrasting Approaches. In scripted testing, tests are first designed and recorded. Then they may be executed at some later time or by a different tester. In exploratory testing, tests are designed and executed at the same time, and they often are not recorded. [diagram: product and tests] (yes, this is a simplified view)
87
Forward-Backward Thinking. Forward: discover evidence and make conjectures. Backward: search for evidence to corroborate or refute the conjecture. [diagram: arrows running both ways between Evidence and Conjectures]
88
Lateral Thinking. Let yourself be distracted… ’cause you never know what you’ll find… but periodically take stock of your status against your mission.
89
Exploratory Testing Tasks. Explore, design tests, and execute tests across three concerns. Product (coverage): discover the elements of the product; discover how the product should work; decide which elements to test. Techniques: discover test design techniques that can be used; select & apply test design techniques. Quality (evaluation): observe product behavior; speculate about possible quality problems; evaluate behavior against expectations. Throughout: configure & operate the product. Outputs: testing notes, tests, problems found.
90
Taking Notes. Test Coverage Outline/Matrix. Evaluation Notes. Risk/Strategy List. Test Execution Log. Issues, Questions & Anomalies, such as: “It would be easier to test if you changed/added…” “How does … work?” “Is this important to test? How should I test it?” “I saw something strange…”
91
The Plunge In and Quit Heuristic. This is a tool for overcoming fear of complexity. Whenever you are called upon to test something very complex or frightening, plunge in! After a little while, if you are very confused or find yourself stuck, quit! Often, a testing problem isn’t as hard as it looks. Sometimes it is as hard as it looks, and you need to quit for a while and consider how to tackle the problem. It may take several plunge & quit cycles to do the job.
92
Exploration Trigger Heuristic: No Questions If you don’t have any issues or concerns about product testability or test environment, that itself may be a critical issue. When you are uncautious, uncritical, or uncurious, that’s when you are most likely to let important problems slip by. Don’t fall asleep! If you find yourself without any questions, ask yourself “Why don’t I have any questions?”
93
Key Ideas of Exploratory Testing The results from the tests you design and execute influence the next test you will choose to design and execute. You build a mental model of the product while you test it. This model includes what the product is and how it behaves, and how it’s supposed to behave. You test what you know about, and you are alert for clues about behaviors and aspects of the product that you don’t yet know about.
94
Doing Exploratory Testing Keep your mission clearly in mind. Keep notes that help you report what you did, why you did it, and support your assessment of product quality. Keep track of questions and issues raised in your exploration. To supercharge your testing, pair up with another tester and test the same thing on the same computer at the same time.
95
Exercise Write an explicit test procedure. Make it at least 10 steps long. Include coverage and evaluation instructions.
96
KEY IDEA
97
It Boils Down To… YOU: Skills, equipment, experience, attitude THE BALL: The product, testing tasks, bugs YOUR TEAM: Coordination, roles, support THE GAME: Risks, rewards, project environment, corporate environment, your mission as a tester YOUR MOVES: How you spend your attention and energy to help your team win the game.
98
Rapid Testing Develop your scientific mind. Know your coverage and evaluation strategy. Run crisp test cycles that focus first on areas of risk. Use a diversified test strategy that serves the mission. Assure that your testing fits the logistics of the project. Let your tests evolve as new information comes to light.
100
Rapid Test Management Establish a strong and supportive testing role. Re-visit your strategy and mission every day. Advocate for bugs and testability. Maintain three lists: risks, issues, coverage. Test with your team. Continuously report the status and dynamics of testing.
101
KEY IDEA
102
Testers light the way. This is our role. We make informed decisions about quality possible. We can do this under a variety of conditions.
103
This is the formula… You must demand: complete specs and quantifiable criteria; a protected schedule and early involvement; a zero defect philosophy and control over release; resources to achieve complete test coverage. …for hypochondria.
104
Look, testing is a difficult job. “The perfect cure for hypochondria, 100 percent effective, is to contract a potentially fatal disease. It cures you instantly. It happened to me.” (Gene Weingarten, The Hypochondriac’s Guide to Life and Death) That’s why they hire smart people, like us, to do it for them.
105
Too many textbooks seem to think testers are wind-up toys. “This model identifies a standards-based life cycle testing process that concentrates on developing formal test documentation to implement repeatable structured testing on a software or hardware/software system. The general intent is that the test documentation be developed based on a formal requirements specification document...Once the documentation is developed, the test is executed.” -- from a real article about testing. (I added the boldfacing to emphasize instructions) Where’s the tester in this picture?
106
Testing skills compensate for a difficult project environment. Instead of this… consider this: complete specs → implicit specs & inference; quantifiable criteria → meaningful criteria; protected schedule → risk-driven schedule; early involvement → good working relationship; zero defect philosophy → good enough quality; control over release → don’t be the gatekeeper; complete test coverage → enough information.
107
We think about failure, and that can be “negative”.
108
Eight Commitments worth making to developers We’ll test your code as soon as we can after it’s built. We’ll test important things first, and focus on important problems. We’ll write clear, thoughtful, and respectful problem reports. We’ll try not to be a bottleneck for development. We’ll tell you how we’re testing, and consider your suggestions. We’ll look for ways to test better and faster. We’ll try to accommodate how you like to work. We will not waste your time.
109
Our job is information. If they don’t want information… use the Reality Steamroller technique: Stop fretting and just be as helpful as you can. Let events play out. Carefully record what happens. Detach yourself from the outcome and refocus on improving the next project, instead. Help the team use the experience as a reference point for improvement.
110
KEY IDEA
111
Public Relations Problem “What’s the status of testing?” “What are you doing today?” “When will you be finished?” “Why is it taking so long?” “Have you tested _______, yet?”
112
The Problem Management has little patience for detailed test status reports. Management doesn’t understand testing. Testing is confused with improving. Testing is considered a linear, independent task. Testing is assumed to be exhaustive. Testing is assumed to be continuous. Test results are assumed to stay valid over time. Impact of regression testing is not appreciated. Test metrics are hard to interpret.
113
A Solution Report test cycle progress in a simple, structured way... ...that shows progress toward a goal... ...manages expectations... ...and inspires support... ...for an effective test process. No process? No Problem! Easy setup
114
The Dashboard Concept: a project conference room, a large dedicated whiteboard (“Do Not Erase”), and the project status meeting.
115
Test Cycle Report: a matrix of product areas vs. test effort, test coverage, and quality assessment, tracked over time.
116
Testing Dashboard. Updated: 2/21. Build: 38. [example whiteboard with columns Area, Effort, C. (coverage), Q. (quality), and Comments; rows for the areas file/edit, view, insert, format, tools, slideshow, online help, clipart, converters, install, compatibility, and general GUI; effort entries such as high, low, blocked, none, and start 3/17; coverage entries ranging over 0, 1, 1+, 2, 2+, and 3; comments such as “1345, 1363, 1401”, “automation broken”, “crashes: 1406, 1407”, “animation memory leak”, “new files not delivered”, “need help to test...”, and “lab time is scheduled”]
117
Product Area. 15-30 areas (keep it simple). Avoid sub-areas: they’re confusing. Areas should have roughly equal value. Areas together should be inclusive of everything reasonably testable. “Product areas” can include tasks or risks, but put them at the end. Minimize overlap between areas. Areas must “make sense” to your clients, or they won’t use the board.
118
Test Effort values:
None: Not testing; not planning to test.
Start: No testing yet, but expect to start soon.
Low: Regression or spot testing only; maintaining coverage.
High: Focused testing effort; increasing coverage.
Pause: Temporarily ceased testing, though area is testable.
Blocked: Can’t effectively test, due to blocking problem.
Ship: Going through final tests and signoff procedure.
119
Test Effort Use red to denote significant problems or stoppages, as in blocked, none, or pause. Color ship green once the final tests are complete and everything else on that row is green. Use neutral color (such as black or blue, but pick only one) for others, as in start, low, or high.
120
Test Coverage levels:
0: We don’t have good information about this area.
1 (Sanity Check): major functions & simple data.
1+: More than sanity, but many functions not tested.
2 (Common & Critical): all functions touched; common & critical tests executed.
2+: Some data, state, or error coverage beyond level 2.
3 (Complex Cases): strong data, state, error, or stress testing.
121
Test Coverage Color green if coverage level is acceptable for ship, otherwise color black. Level 1 and 2 focus on functional requirements and capabilities: can this product work at all? Level 2 may span 50%-90% code coverage. Level 2+ and 3 focus on information to judge performance, reliability, compatibility, and other “ilities”: will this product work under realistic usage? Level 3 or 3+ implies “if there were a bad bug in this area, we would probably know about it.”
122
Quality Assessment “We know of no problems in this area that threaten to stop ship or interrupt testing, nor do we have any definite suspicions about any.” “We know of problems that are possible showstoppers, or we suspect that there are important problems not yet discovered.” “We know of problems in this area that definitely stop ship or interrupt testing.”
123
Comments. Use the comment field to explain anything colored red, or any non-green quality indicator: problem ID numbers; reasons for pausing, or a delayed start; the nature of blocking problems; why an area is unstaffed.
124
Using the Dashboard Updates: 2-5/week, or at each build, or prior to each project meeting. Progress: Set expectation about the duration of the “Testing Clock” and how new builds reset it. Justification: Be ready to justify the contents of any cell in the dashboard. The authority of the board depends upon meaningful, actionable content. Going High Tech: Sure, you can put this on the web, but will anyone actually look at it???
125
KEY IDEA
126
Exploratory testing relies on tester intuition. It is unscripted and improvisational. How do I, as test manager, understand what’s happening, so I can direct the work and defend it to my clients?
127
SKILL: there’s no shortcut. This is a black box… just like your mind. No one can read your mind. You must gain the skill to explain your testing… so that you can be accountable for it. That requires a lot of practice; in our experience, several months of daily practice.
128
Introducing the Test Session: 1) Charter, 2) Time Box, 3) Reviewable Result, 4) Debriefing.
129
Charter: A clear mission for the session A charter may suggest what should be tested, how it should be tested, and what problems to look for. A charter is not meant to be a detailed plan. General charters may be necessary at first: “Analyze the Insert Picture function” Specific charters provide better focus, but take more effort to design: “Test clip art insertion. Focus on stress and flow techniques, and make sure to insert into a variety of documents. We’re concerned about resource leaks or anything else that might degrade performance over time.”
130
Time Box: focused test effort of fixed duration. Brief enough for accurate reporting. Brief enough to allow flexible scheduling. Brief enough to allow course correction. Long enough to get solid testing done. Long enough for efficient debriefings. Beware of overly precise timing. Short: 60 minutes (±15). Normal: 90 minutes (±15). Long: 120 minutes (±15).
131
Debriefing: Measurement begins with observation. The manager reviews the session sheet to assure that he understands it and that it follows the protocol. The tester answers any questions. Session metrics are checked. The charter may be adjusted. The session may be extended. New sessions may be chartered. Coaching happens.
132
Reviewable Result: A scannable session sheet. Sections: Charter (#AREAS), Start Time, Tester Name(s), Task Breakdown (#DURATION, #TEST DESIGN AND EXECUTION, #BUG INVESTIGATION AND REPORTING, #SESSION SETUP, #CHARTER/OPPORTUNITY), Data Files, Test Notes, Bugs (#BUG), Issues (#ISSUE). Example:
CHARTER: Analyze MapMaker’s View menu functionality and report on areas of potential risk.
#AREAS: OS | Windows 2000; Menu | View; Strategy | Function Testing; Strategy | Functional Analysis
START: 5/30/00 03:20 pm
TESTER: Jonathan Bach
TASK BREAKDOWN: #DURATION short; #TEST DESIGN AND EXECUTION 65; #BUG INVESTIGATION AND REPORTING 25; #SESSION SETUP 20
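Because the sheet is tagged, its metrics can be scraped mechanically. A minimal parser for the task-breakdown tags, assuming the “#TAG value” layout shown above:

```python
import re

TAGS = ["TEST DESIGN AND EXECUTION",
        "BUG INVESTIGATION AND REPORTING",
        "SESSION SETUP"]

def parse_breakdown(sheet_text: str) -> dict[str, int]:
    """Pull the T/B/S percentages out of one session sheet."""
    breakdown = {}
    for tag in TAGS:
        m = re.search(rf"#{tag}\s+(\d+)", sheet_text)
        breakdown[tag] = int(m.group(1)) if m else 0
    return breakdown

sample = """TASK BREAKDOWN
#DURATION short
#TEST DESIGN AND EXECUTION 65
#BUG INVESTIGATION AND REPORTING 25
#SESSION SETUP 20
"""
print(parse_breakdown(sample))
# {'TEST DESIGN AND EXECUTION': 65, 'BUG INVESTIGATION AND REPORTING': 25,
#  'SESSION SETUP': 20}
```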
133
The Breakdown Metrics. Testing is like looking for worms. The three categories: Test Design and Execution; Bug Investigation and Reporting; Session Setup.
134
Reporting the TBS Breakdown: a guess is okay, but follow the protocol. Test, Bug, and Setup are orthogonal categories. Estimate the percentage of charter work that fell into each category; the nearest 5% or 10% is good enough. If activities are done simultaneously, report the highest-precedence activity. Precedence goes in order: T, B, then S. All we really want is to track interruptions to testing. Don’t include Opportunity Testing in the estimate.
135
Activity Hierarchy: all test work fits here, somewhere. [tree diagram: all work splits into session and non-session (inferred) work; session work splits into on-charter and opportunity; on-charter work splits into test, bug, and setup]
136
Work Breakdown: Diagnosing the productivity Do these proportions make sense? How do they change over time? Is the reporting protocol being followed?
137
Coverage: Specifying coverage areas. These are text labels listed in the Charter section of the session sheet (e.g., “insert picture”). Coverage areas can include anything: areas of the product, test configuration, test strategies, system configuration parameters. Use the debriefings to check the validity of the specified coverage areas.
138
Coverage: Are we testing the right stuff? Is it a lop-sided set of coverage areas? Is it distorted reporting? Is this a risk-based test strategy? [bar chart: Distribution of On-Charter Testing Across Areas]
139
Using the Data to Estimate a Test Cycle. 1. How many perfect sessions (100% on-charter testing) does it take to do a cycle? (let’s say 40) 2. How many sessions can the team (of 4 testers) do per day? (let’s say 3 per day per tester = 12) 3. How productive are the sessions? (let’s say 66% is on-charter test design and execution) 4. Estimate: 40 / (12 × 0.66) ≈ 5 days. 5. We base the estimate on the data we’ve collected. When any conditions or assumptions behind this estimate change, we will update the estimate.
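The same arithmetic as a small function, so the estimate can be recomputed whenever an assumption changes.

```python
def cycle_days(perfect_sessions: int, testers: int,
               sessions_per_tester_day: int, productivity: float) -> float:
    """Days to complete a test cycle, per the calculation above."""
    sessions_per_day = testers * sessions_per_tester_day
    return perfect_sessions / (sessions_per_day * productivity)

print(round(cycle_days(40, 4, 3, 0.66), 1))  # 5.1 -- about 5 days, as above
```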
140
Challenges of High Accountability Exploratory Testing: Architecting the system of charters (test planning). Making time for debriefings. Getting the metrics right. Creating good test notes. Keeping the technique from dominating the testing. Maintaining commitment to the approach. For example session sheets and metrics, see http://www.satisfice.com/sbtm