THE TESTING OF AXIOMS. Paul Gerrard. Advancing Testing Using Axioms.


THE TESTING OF AXIOMS. Paul Gerrard. Advancing Testing Using Axioms

- Axioms – a Brief Introduction
- Advancing Testing Using Axioms
- First Equation of Testing
- Test Strategy and Approach
- Testing Improvement
- A Skills Framework for Testers
- Quantum Theory for Testing
- Close

Surely, there must be SOME things that ALL testers can AGREE ON? Or are we destined to argue FOREVER?

- Started as a ‘thought experiment’ in my blog in February 2008
- Some quite vigorous debate on the web:
  - ‘great idea’
  - ‘axioms don’t exist’
  - ‘Paul has his own testing school’
- Initial 12 ideas evolved to 16 test axioms
- Testers Pocketbook: testers-pocketbook.com
- Test Axioms website: test-axioms.com

Some very useful by-products:
- Test strategy, improvement, skills framework
- Interesting research areas! First Equation of Testing, Testing Uncertainty Principle, Quantum Theory, Relativity, Exclusion Principle... You can tell I like physics.

There are no agreed definitions of test or testing!

The words software, IT, program, technology, methodology, V-model, entry/exit criteria and risk do not appear in the definitions.

American Heritage Dictionary: Test: (noun) A procedure for critical evaluation; A means of determining the presence, quality, or truth of something; A trial.

A testing stakeholder is someone who is interested in the outcome of testing. You can be your OWN stakeholder (e.g. developers and users).

Let’s look at a few of the test axioms

Testing needs stakeholders

Test design is based on models

Testers need sources of knowledge to select things to test

Testing needs a test coverage model or models

Our sources of knowledge are fallible and incomplete

The value of testing is measured by the confidence of stakeholder decision making

Testing never goes as planned; evidence arrives in discrete quanta. [Cartoon caption: “Ohhhhh... Look at that, Schuster... Dogs are so cute when they try to comprehend quantum mechanics.”]

Testing never finishes; it stops

Consider Axioms as thinking tools

Axioms + Context + Values + Thinking = Approach

- Separation of Axioms, context, values and thinking
- Tools, methodologies, certification and maturity models promote approaches without reference to your context or values: no thinking is required!
- Without a unifying test theory you have no objective way of assessing these products.

Strategy is a thought process, not a document

[Diagram: Test Strategy at the centre, linked to Risks, Goals, Axioms, Artefacts, Constraints (human resources, environment, timescales, process (or lack of?), contract, culture) and Opportunities (user involvement, automation, de-duplication, early testing, skills, communication).]

Summary: Identify and engage the people or organisations that will use and benefit from the test evidence we are to provide.
Consequence if ignored or violated: There will be no mandate or authority for testing. Reports of passes, fails or enquiries have no audience.
Questions:
- Who are they?
- Whose interests do they represent?
- What evidence do they want?
- What do they need it for?
- When do they want it?
- In what format?
- How often?

Summary: Choose test models to derive tests that are meaningful to stakeholders. Recognise the models’ limitations and the assumptions that the models make (a sketch follows the questions below).
Consequence if ignored or violated: Test design will be meaningless and not credible to stakeholders.
Questions:
- Are design models available to use as test models? Are they mandatory?
- What test models could be used to derive tests from the Test Basis?
- Which test models will be used?
- Are test models to be documented or are they purely mental models?
- What are the benefits of using these models?
- What simplifying assumptions do these models make?
- How will these models contribute to the delivery of evidence useful to the acceptance decision makers?
- How will these models combine to provide sufficient evidence without excessive duplication?
- How will the number of tests derived from models be bounded?
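To make the axiom concrete, here is a minimal sketch of one familiar test model, boundary-value analysis, deriving test inputs from a specified valid range. The function and the example ‘age’ field are illustrative assumptions, not from the presentation; the model’s simplifying assumptions (integer input, faults clustering at the edges) are exactly the limitations the axiom asks us to recognise.

```python
def boundary_value_tests(lo, hi):
    """Derive test inputs from a boundary-value model of the valid range [lo, hi].

    The model deliberately simplifies: it assumes integer input and that
    faults cluster at the edges of the valid range.
    """
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

# Example: an 'age' field specified as valid for 18..65. Note what the
# model says nothing about (non-numeric input, for instance).
print(boundary_value_tests(18, 65))  # [17, 18, 19, 64, 65, 66]
```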

1. Test Plan Identifier
2. Introduction
3. Test Items
4. Features to be Tested
5. Features not to be Tested
6. Approach
7. Item Pass/Fail Criteria
8. Suspension Criteria and Resumption Requirements
9. Test Deliverables
10. Testing Tasks
11. Environmental Needs
12. Responsibilities
13. Staffing and Training Needs
14. Schedule
15. Risks and Contingencies
16. Approvals
Based on the IEEE 829 standard.

- Items 1, 2 – Administration
- Items 3–5 – Scope Management, Prioritisation
- Item 6 – All the Axioms are relevant
- Items 7, 8 – Good-Enough, Value
- Item 9 – Stakeholder, Value, Confidence
- Item 10 – All the Axioms are relevant
- Item 11 – Environment
- Item 12 – Stakeholder
- Item 13 – All the Axioms are relevant
- Item 14 – All the Axioms are relevant
- Item 15 – Fallibility, Event
- Item 16 – Stakeholder

1. Stakeholder objectives
   - Stakeholder management
   - Goal and risk management
   - Decisions to be made and how (acceptance)
   - How testing will provide confidence and be assessed
   - How scope will be determined
2. Design approach
   - Sources of knowledge (bases and oracles)
   - Sources of uncertainty
   - Models to be used for design and coverage
   - Prioritisation approach
3. Delivery approach
   - Test sequencing policy
   - Repeat-test policies
   - Environment requirements
   - Information delivery approach
   - Incident management approach
   - Execution and end-game approach
4. Plan (high or low-level)
   - Scope
   - Tasks
   - Responsibilities
   - Schedule
   - Approvals
   - Risks and contingencies

Test process improvement is a waste of time

- There are no “practice” Olympics to determine the best
- There is no consensus about which practices are best, unless consensus means “people I respect also say they like it”
- There are practices that are more likely to be considered good and useful than others, within a certain community and assuming a certain context
- Good practice is not a matter of popularity. It’s a matter of skill and context.
Derived from “No Best Practices”, James Bach.

Actually, it’s 11 (most were not software related)

- Google search:
  - “CMM” – 22,300,000
  - “CMM Training” – 48,200
  - “CMM improves quality” – 74 (BUT really 11 – most of these have NOTHING to do with software)
- A Gerrard Consulting client…
  - CMM level 3 and proud of it (chaotic, hero culture)
  - Hired us to assess their overall s/w process and make recommendations (quality and time to deliver were slipping)
  - 40+ recommendations, only 7 adopted – they couldn’t change
  - How on earth did they get through the CMM 3 audit?

- Using process change to fix cultural or organisational problems is never going to work
- Improving test in isolation is never going to work either
- Need to look at changing context rather than values…

Context (your context) + Values (your values) + Thinking (your thinking) = Approach (your approach)

Context + Values + Thinking = Approach (someone else’s)

Axioms (recognise) + Context (hard to change) + Values (could change?) + Thinking (just do some) = Approach (your approach)

- Axioms represent the critical things to think about
- Associated questions act as checklists to:
  - Assess your current approach
  - Identify gaps and inconsistencies in the current approach
  - QA your new approach in the future
- Axioms represent the WHAT; your approach specifies HOW

- Mission
- Coalition
- Vision
- Communication
- Action
- Wins
- Consolidation
- Anchoring
(Changes are identified here. If you must use one, this is where your ‘test model’ comes into play.)

Axioms indicate WHAT to think about......so the Axioms point to SKILLS

Summary: Choose test models to derive tests that are meaningful to stakeholders. Recognise the models’ limitations and the assumptions that the models make.
Consequence if ignored or violated: Test design will be meaningless and not credible to stakeholders.
Questions:
- Are design models available to use as test models? Are they mandatory?
- What test models could be used to derive tests from the Test Basis?
- Which test models will be used?
- Are test models to be documented or are they purely mental models?
- What are the benefits of using these models?
- What simplifying assumptions do these models make?
- How will these models contribute to the delivery of evidence useful to the acceptance decision makers?
- How will these models combine to provide sufficient evidence without excessive duplication?
- How will the number of tests derived from models be bounded?

A tester needs to understand:
- Test models and how to use them
- How to select test models from fallible sources of knowledge
- How to design test models from fallible sources of knowledge
- The significance, authority and precedence of test models
- How to use models to communicate
- The limitations of test models
- Familiarity with common models
Is this all that current certification provides?

- Functional testers are endangered:
  - Certification covers process and clerical skills
  - Functional testing is becoming a commodity and is easy to outsource
- To survive, testers need to specialise:
  - Management
  - Test automation
  - Test strategy, design, goal- and risk-based testing
  - Stakeholder management
  - Non-functional testing
  - Business domain specialists...

- Intellectual skills and capabilities are more important than clerical skills
- Need to re-focus on:
  - Testing thought processes (Axioms)
  - Real-world examples, not theory
  - Testing as information provision
  - Goal- and risk-based testing
  - Testing as a service (to stakeholders)
  - Practical, hands-on, real-world training, exercises and coaching.

If evidence arrives in discrete quanta... can we assign a value to it?

- Tests are usually run one by one
- Every individual test has some significance
- Some tests expose failures, but ultimately we want all tests to PASS
- When all tests pass, the stakeholders are happy, aren’t they?
- Can we measure confidence?
- But...

- Testers cannot usually:
  - Prepare all the tests they COULD do
  - Run ALL the tests in the plan
  - Re-test ALL fixes
  - Regression-test as much or as often as required
- How do we judge the significance of tests?
  - To include them in scope for planning (or not)
  - To execute them in the right order
  - To ensure the most significant tests are run

- What stakeholders ultimately want is for every test to pass
- The ideal situation:
  - We have run all our tests
  - All our tests pass
  - Acceptance is a formality
- Not all tests pass, though
- We track incidents, severity and priority – great
- But how do we track the significance or value of tests that pass?

- Significance varies by objective:
  - Criticality of the business goal it covers
  - Criticality of the risk it covers
- Significance varies by precedent:
  - The first end-to-end test pass is significant
  - Subsequent e2e passes are less significant
- Significance varies by functional dependence:
  - A test of shared functionality is more significant than a test of standalone functionality
- Significance varies by stakeholder:
  - Customer and sponsor tests are more significant than developer tests (see the sketch below).
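A minimal sketch of how these four factors might be folded into a single pre-execution score. The Test record, the coarse HIGH/MEDIUM/LOW scale and the additive weighting are all invented for illustration; the caveat later in the deck, that significance is qualitative and relative, still applies.

```python
from dataclasses import dataclass

HIGH, MEDIUM, LOW = 3, 2, 1  # a deliberately coarse, qualitative scale

@dataclass
class Test:
    name: str
    goal_or_risk: int           # objective: criticality of the goal/risk covered
    first_of_its_kind: bool     # precedent: e.g. the first end-to-end pass
    shared_functionality: bool  # functional dependence
    stakeholder_owned: bool     # customer/sponsor test vs developer test

def significance(t: Test) -> int:
    # Additive weighting: each factor bumps the test up the ranking.
    return (t.goal_or_risk
            + t.first_of_its_kind
            + t.shared_functionality
            + t.stakeholder_owned)

tests = [
    Test("first end-to-end order flow", HIGH, True, True, True),
    Test("tooltip wording", LOW, False, False, False),
]

# Scope and order the work by running the most significant tests first.
for t in sorted(tests, key=significance, reverse=True):
    print(f"{t.name}: significance {significance(t)}")
```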

- Stakeholders usually know how to judge the significance of failures when tests FAIL
- So why don’t we assess the significance of tests BEFORE we run them?
- If we did that:
  - We could scope and prioritise more effectively
  - We would know exactly which tests provide enough information for an acceptance decision
  - Acceptance criteria would be taken seriously.

- Using business goals, risks and coverage to drive testing is ‘advanced’, but it is still VERY CRUDE
- Quantum Testing proposal:
  - Assign a micro-significance to all tests
  - Assign a macro-significance to collections of tests
  - As tests are created and executed, evidence increases incrementally
  - Manage progress by monitoring EVIDENCE rather than by counting test cases (see the sketch below).
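A minimal sketch of that last point, assuming each test carries the kind of micro-significance score sketched above. The numbers and the ‘enough evidence’ threshold are invented for illustration.

```python
def evidence_progress(results, enough):
    """results: (significance, passed) pairs for the tests executed so far."""
    planned = sum(sig for sig, _ in results)
    delivered = sum(sig for sig, passed in results if passed)
    verdict = "enough" if delivered >= enough else "not enough"
    print(f"{delivered}/{planned} units of evidence delivered "
          f"({verdict} for the acceptance decision)")

# Five trivial tests pass and one significant test fails: counting test
# cases says we are 83% done; counting evidence says we are nowhere near.
evidence_progress([(1, True)] * 5 + [(6, False)], enough=8)
```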

- We and our stakeholders could know the value of tests BEFORE we run them
- Stakeholders would understand WHAT we are doing and WHY
- The problem of ‘enough testing’ becomes a shared challenge (testers and stakeholders)
- Caveats: we assign significance qualitatively rather than numerically; significance is RELATIVE rather than absolute!

- Axioms are context-neutral rules for testing
- The Equation of Testing separates axioms, context, values and thinking, so we can have sensible conversations about process
- Axioms and associated questions provide context-neutral checklists for test strategy, assessment/improvement and skills
- Quantum Testing aims to address the question, “how much testing is enough?”

Thank You! THE TESTING OF AXIOMS testaxioms.com testers-pocketbook.com gerrardconsulting.com