1
Software Testing: Testing Principles
2
Testing Testing involves operating a system or application under controlled conditions and evaluating the results. The controlled conditions should include both normal and abnormal conditions. Testing should intentionally attempt to make things go wrong, to determine whether things happen when they shouldn't or fail to happen when they should.
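As a minimal sketch of this principle (the `parse_age` function and its validity rules are hypothetical), one test exercises a normal input while another deliberately supplies abnormal inputs to confirm the failure is reported:

```python
import unittest

def parse_age(text):
    """Hypothetical function under test: accepts '0'..'150' as a valid age."""
    value = int(text)                # raises ValueError for non-numeric input
    if not 0 <= value <= 150:
        raise ValueError("age out of range")
    return value

class TestParseAge(unittest.TestCase):
    def test_normal_condition(self):
        self.assertEqual(parse_age("42"), 42)

    def test_abnormal_conditions(self):
        # Intentionally try to make things go wrong.
        with self.assertRaises(ValueError):
            parse_age("-5")
        with self.assertRaises(ValueError):
            parse_age("not a number")

if __name__ == "__main__":
    unittest.main()
```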
3
What is a Bug? A flaw in a system or system component that causes the system or component to fail to perform its required function. Most are simple, subtle failures, many so small that it's not always clear which ones are true failures and which ones aren't.
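A hypothetical illustration of how subtle such a flaw can be: the function below passes casual checks, yet fails its required behaviour at exactly one boundary value.

```python
def is_leap_year(year):
    """Intended rule: divisible by 4, except centuries, unless divisible by 400."""
    return year % 4 == 0 and year % 100 != 0   # bug: drops the 'divisible by 400' case

print(is_leap_year(2024))  # True  - correct
print(is_leap_year(1900))  # False - correct
print(is_leap_year(2000))  # False - wrong: 2000 was a leap year
```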
4
Terms for software failures:
1. Defect
2. Fault
3. Problem
4. Error
5. Incident
6. Anomaly
7. Variance
8. Failure
9. Inconsistency
10. Feature
11. Bug
5
Fault, failure & defect : Tend to imply a condition that’s really severe, may be even dangerous. It doesn’t sound right to call an incorrectly colored icon a fault. These words also tend to imply blame: “ it’s his fault that the s/w failed.”
6
Anomaly, incident & variance: These don't sound quite so negative and suggest unintended operation rather than an all-out failure: "the president stated that it was a software anomaly that caused the missile to go off course."
7
Problem, error & bug: Probably the most generic terms used. "A bug's a bug's a bug."
8
…but across the SDLC these terms differ:
Design deviation – Error
Coding deviation – Bug
Testing – Issue / Bug
Maintenance – Defect
At client – Failure
9
Why does software have bugs?
Miscommunication or no communication (of requirements)
Software complexity
Programming errors
Changing requirements
Time pressures
Poorly documented code
Software development tools
Egos
10
Fig: Bug’s are caused for numerous reasons, but the main cause can be traced to the specification.
11
The Cost of Bug’s Fig:The cost to fix bug’s increases dramatically over time
12
What is Verification? Verification ensures the product is designed to deliver all functionality to the customer. It typically involves reviews and meetings to evaluate documents, plans, code, requirements, and specifications.
13
Verification This can be done with checklists, issues lists, walkthroughs, and inspection meetings. Verification takes place before validation.
14
What is Validation? Validation typically involves actual testing and evaluates the product itself: the process of executing something to see how it behaves. The output of validation is the actual product, ideally a nearly defect-free one.
15
What kind of testing should be considered?
Black Box Testing
White Box Testing
Unit Testing
Incremental Integration Testing
Integration Testing
Functional Testing
System Testing
End-to-End Testing
Sanity or Smoke Testing
Regression Testing
Acceptance Testing
Load Testing
Stress Testing
Performance Testing
Usability Testing
Install / Uninstall Testing
16
Contd….
Recovery Testing
Failure Testing
Security Testing
Compatibility Testing
Exploratory Testing
Ad-hoc Testing
User Acceptance Testing
Comparison Testing
Alpha Testing
Beta Testing
Mutation Testing
17
Black Box Testing Not based on any knowledge of internal design or code. Tests are based on requirements and functionality. It cannot exercise hidden (internal) functions, so errors associated with them will not be found by black box testing.
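A small black-box sketch, assuming a requirement such as "orders of 100.00 or more receive a 10% discount"; the test cases are derived from that statement alone, without reading the implementation (the stand-in `discounted_total` is hypothetical):

```python
import unittest

def discounted_total(amount):
    """Stand-in implementation (hypothetical); a black-box tester would not see this."""
    return amount * 0.9 if amount >= 100.00 else amount

class TestDiscountRequirement(unittest.TestCase):
    """Cases derived only from the stated requirement:
    'Orders of 100.00 or more receive a 10% discount; smaller orders receive none.'"""

    def test_below_threshold_no_discount(self):
        self.assertAlmostEqual(discounted_total(99.99), 99.99)

    def test_at_threshold_gets_discount(self):
        self.assertAlmostEqual(discounted_total(100.00), 90.00)

    def test_above_threshold_gets_discount(self):
        self.assertAlmostEqual(discounted_total(250.00), 225.00)

if __name__ == "__main__":
    unittest.main()
```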
18
White Box Testing Based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths, and conditions. It will not detect missing functionality.
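A white-box sketch with a hypothetical `shipping_fee` function: here the tester reads the code and chooses inputs so that every branch is executed at least once:

```python
import unittest

def shipping_fee(weight_kg, express):
    # Implementation the tester reads to derive coverage targets.
    if weight_kg <= 0:
        raise ValueError("weight must be positive")   # branch 1
    fee = 5.0 if weight_kg < 10 else 12.0             # branches 2 and 3
    if express:
        fee *= 2                                      # branch 4
    return fee

class TestShippingBranches(unittest.TestCase):
    def test_invalid_weight_branch(self):
        with self.assertRaises(ValueError):
            shipping_fee(0, express=False)

    def test_light_parcel_branch(self):
        self.assertEqual(shipping_fee(2, express=False), 5.0)

    def test_heavy_parcel_branch(self):
        self.assertEqual(shipping_fee(20, express=False), 12.0)

    def test_express_branch(self):
        self.assertEqual(shipping_fee(2, express=True), 10.0)

if __name__ == "__main__":
    unittest.main()
```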
19
Unit Testing The process of testing the individual components of a program. It discovers discrepancies between a module's interface specification and its actual behavior, and verifies control flow and data flow. It requires knowledge of the code and is therefore done by developers.
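A minimal unit-test sketch (the `Counter` class and its interface specification are hypothetical); each test checks one clause of the spec in isolation:

```python
import unittest

class Counter:
    """Hypothetical unit under test. Interface spec: starts at 0,
    increment() adds 1 and returns the new value, reset() returns it to 0."""
    def __init__(self):
        self._value = 0

    def increment(self):
        self._value += 1
        return self._value

    def reset(self):
        self._value = 0

    @property
    def value(self):
        return self._value

class TestCounterUnit(unittest.TestCase):
    def test_starts_at_zero(self):
        self.assertEqual(Counter().value, 0)

    def test_increment_returns_new_value(self):
        c = Counter()
        self.assertEqual(c.increment(), 1)
        self.assertEqual(c.increment(), 2)

    def test_reset_returns_to_zero(self):
        c = Counter()
        c.increment()
        c.reset()
        self.assertEqual(c.value, 0)

if __name__ == "__main__":
    unittest.main()
```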
20
Incremental Integration Testing Done by programmers or by testers. Continuous testing of an application as new functionality is added. It requires that the various parts of an application's functionality be independent enough to work separately before all parts of the program are completed.
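A sketch of one such increment, assuming a hypothetical `OrderService` whose payment gateway is not yet written; a stub (here `unittest.mock.Mock`) stands in for the missing part so the new functionality can be tested as soon as it is added:

```python
import unittest
from unittest.mock import Mock

class OrderService:
    """Hypothetical component added in this increment; it depends on a
    payment gateway that may not be finished yet."""
    def __init__(self, gateway):
        self.gateway = gateway

    def checkout(self, amount):
        return self.gateway.charge(amount)

class TestOrderServiceIncrement(unittest.TestCase):
    def test_checkout_with_stubbed_gateway(self):
        # The real gateway is not ready, so a stub stands in for it.
        gateway = Mock()
        gateway.charge.return_value = "APPROVED"
        service = OrderService(gateway)

        self.assertEqual(service.checkout(25.0), "APPROVED")
        gateway.charge.assert_called_once_with(25.0)

if __name__ == "__main__":
    unittest.main()
```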
21
Integration Testing Testing of combined parts of an application to determine whether they function together correctly. Its aim is to discover errors in the interfaces between components and to verify communication between units. Done by developers / QA teams.
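A small integration sketch with two hypothetical components: both real parts are wired together and the test checks that they communicate correctly across their interface:

```python
import unittest

class InMemoryUserRepository:
    """First real component: stores users in a dict."""
    def __init__(self):
        self._users = {}

    def save(self, user_id, name):
        self._users[user_id] = name

    def find(self, user_id):
        return self._users.get(user_id)

class GreetingService:
    """Second real component: builds a greeting via the repository."""
    def __init__(self, repository):
        self.repository = repository

    def greet(self, user_id):
        name = self.repository.find(user_id)
        return f"Hello, {name}!" if name else "Hello, guest!"

class TestRepositoryServiceIntegration(unittest.TestCase):
    def test_components_communicate_correctly(self):
        repo = InMemoryUserRepository()
        repo.save(1, "Ada")
        service = GreetingService(repo)   # real parts combined, no stubs
        self.assertEqual(service.greet(1), "Hello, Ada!")
        self.assertEqual(service.greet(2), "Hello, guest!")

if __name__ == "__main__":
    unittest.main()
```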
22
Functional Testing A black-box type of testing. It detects discrepancies between a program's functional specification and its actual behavior, and verifies that the software provides the expected services. Done by testers.
23
System Testing A black-box type of testing. It attempts to demonstrate that the program or system does not meet its original requirements and objectives, as stated in the requirements specification. Done by the testing group before the product is made available to customers.
24
Types / Goals of System Testing:
Usability Testing
Performance Testing
Load Testing
Stress Testing
Volume Testing
Security Testing
Configuration Testing
Installability Testing
Recovery Testing
Serviceability Testing
Reliability / Availability Testing
25
Usability Testing Testing for "user-friendliness." User interviews, surveys, video recording of user sessions, and other techniques can be used. It aims to identify discrepancies between the user interface and the needs and expectations of its users.
26
Performance Testing Evaluates the compliance of a system or component with specified performance requirements. The term is often used interchangeably with 'stress' and 'load' testing. Ideally, the performance requirements to test against are defined in the requirements documentation.
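A sketch of checking one assumed performance requirement ("a 10,000-row report in under 0.5 seconds" is invented for illustration, as is `build_report`):

```python
import time
import unittest

def build_report(rows):
    """Hypothetical operation with a stated requirement:
    'a 10,000-row report must be produced in under 0.5 seconds'."""
    return "\n".join(f"{i},{value}" for i, value in enumerate(rows))

class TestReportPerformance(unittest.TestCase):
    def test_meets_specified_response_time(self):
        rows = list(range(10_000))
        start = time.perf_counter()
        build_report(rows)
        elapsed = time.perf_counter() - start
        # The 0.5 s budget comes from the (assumed) requirements document.
        self.assertLess(elapsed, 0.5)

if __name__ == "__main__":
    unittest.main()
```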
27
Load Testing Testing an application under heavy loads, such as testing of a website under a range of loads to determine at what point the system’s response time degrades or fails.
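A toy load-testing harness as one possible sketch (the `handle_request` function merely simulates a website endpoint with a fixed processing cost): it ramps the number of concurrent users and records the response time at each load level:

```python
import time
from concurrent.futures import ThreadPoolExecutor
from statistics import mean

def handle_request():
    """Hypothetical request handler standing in for a website endpoint."""
    time.sleep(0.01)          # simulated processing cost
    return "OK"

def timed_request(_):
    start = time.perf_counter()
    handle_request()
    return time.perf_counter() - start

# Drive the handler at increasing load levels and record response times.
for concurrent_users in (1, 10, 50, 100):
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        latencies = list(pool.map(timed_request, range(concurrent_users * 5)))
    print(f"{concurrent_users:>3} users: mean response {mean(latencies) * 1000:.1f} ms")
```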
28
Stress Testing Often used interchangeably with 'load' and 'performance' testing. Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements: functional testing of the system under unusually heavy loads, heavy repetition of certain actions or inputs, etc.
29
Configuration Testing To determine whether the program operates properly when the software or hardware is configured in a required manner.
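A sketch using `unittest`'s subTest to repeat the same check across several configurations (the `connection_string` function and the configuration entries are hypothetical):

```python
import unittest

def connection_string(config):
    """Hypothetical function whose output depends on the deployment configuration."""
    driver = "ssl" if config.get("use_ssl") else "tcp"
    return f"{driver}://{config['host']}:{config['port']}"

# Assumed set of required configurations for illustration.
CONFIGURATIONS = [
    {"host": "db-linux", "port": 5432, "use_ssl": False},
    {"host": "db-windows", "port": 5432, "use_ssl": True},
    {"host": "db-cloud", "port": 6432, "use_ssl": True},
]

class TestSupportedConfigurations(unittest.TestCase):
    def test_program_operates_in_each_required_configuration(self):
        for config in CONFIGURATIONS:
            with self.subTest(config=config):
                result = connection_string(config)
                self.assertIn(config["host"], result)
                self.assertTrue(result.endswith(str(config["port"])))

if __name__ == "__main__":
    unittest.main()
```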
30
Compatibility Testing To determine whether the compatibility objectives of the program have been met: testing how well the application performs in a particular hardware, software, operating system, or network environment.
31
Installability Testing Testing of full, partial, or upgrade install / uninstall processes. It aims to identify the ways in which the installation procedures lead to incorrect results. Installation options covered: new, upgrade, and customized / complete, under normal and abnormal conditions.
32
Recovery Testing Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems. Typically used interchangeably with “fail-over testing.”
33
Regression Testing Re-testing after fixes or modifications of the software or its environment. It verifies that changes or fixes have not introduced new problems. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle.
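A regression-test sketch (the `normalize_username` function and its earlier empty-input crash are hypothetical): the suite keeps a test pinned to the old defect and is re-run after every change:

```python
import unittest

def normalize_username(name):
    """Hypothetical function. An earlier release crashed on empty input;
    the fix made it return '' instead."""
    if not name:
        return ""
    return name.strip().lower()

class TestUsernameRegressions(unittest.TestCase):
    def test_existing_behaviour_still_works(self):
        self.assertEqual(normalize_username("  Alice "), "alice")

    def test_previously_fixed_empty_input_bug_stays_fixed(self):
        # Regression test: re-run after every modification so the old
        # defect cannot silently reappear.
        self.assertEqual(normalize_username(""), "")

if __name__ == "__main__":
    unittest.main()
```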
34
Acceptance Testing Final testing based on the specifications of the end-user or customer, or based on use by end-users / customers over some limited period of time. It determines whether the software is ready for final deployment. Done after the testing group has satisfactorily completed usability, functional, and system testing.
35
Contd… Organizations can arrange for alternative forms of acceptance testing: ALPHA and BETA.
36
ALPHA & BETA Both involve running & operating the s/w in production mode for a pre- specified period. The ALPHA test is usually performed by end users inside the development org. The BETA test is usually performed by a selected subset of actual customers outside the company, before the s/w is made available to all customers.