Testing Types
Iana Mourza, Sr. Quality Engineering Manager, VMware, Inc.
The “Box” approach: Black/White/Grey box testing
Black box: Treats the software as a "black box", examining functionality without any knowledge of the internal implementation. The testers are only aware of what the software is supposed to do, not how it does it.
Techniques used: equivalence partitioning, boundary value analysis, state-transition diagrams, decision tables, use case testing, exploratory and specification-based testing.
Testing levels: mostly integration and system.
Can be done manually or via UI-based automation.
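As a brief illustration of the equivalence-partitioning and boundary-value techniques listed above, here is a minimal pytest sketch. The `is_valid_age` function and its 18-65 valid range are hypothetical stand-ins for whatever behavior is under test.

```python
import pytest

def is_valid_age(age):
    # Hypothetical behavior under test: valid range is 18-65 inclusive.
    return 18 <= age <= 65

# Boundary values plus one representative from each equivalence class.
@pytest.mark.parametrize("age, expected", [
    (17, False),  # just below the lower boundary (invalid partition)
    (18, True),   # lower boundary (valid partition)
    (65, True),   # upper boundary (valid partition)
    (66, False),  # just above the upper boundary (invalid partition)
    (40, True),   # representative value from the middle of the valid partition
])
def test_age_boundaries(age, expected):
    assert is_valid_age(age) == expected
```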
The “Box” approach: Black/White/Grey box testing
White box (aka "clear/glass/transparent box" testing): Tests internal structures or workings of a program, as opposed to the functionality exposed to the end user. An internal perspective of the system, as well as programming skills, are used to design test cases.
Techniques used: API testing, fault injection, static testing (= static code analysis).
Testing levels: unit, integration, system.
Frequently done at the unit-testing level.
Automation should be used for API/fault-injection tests; code inspection can be done visually.
The “Box” approach: Black/White/Grey box testing
Grey box: Involves having knowledge of internal data structures and algorithms for purposes of designing tests, while executing those tests at the user, or black-box, level. The tester is not required to have full access to the software's source code.
Examples: modifying or verifying a back-end data repository such as a database or a log file (the user would not normally be able to change or verify the data repository from the back end in normal production operations).
The “Level” approach: Unit/Integration/System/E2E
Unit testing (aka "component testing"): Verifies the functionality of a specific section of code, usually at the function level. In an object-oriented environment, this is usually at the class level, and the minimal unit tests include the constructors and destructors.
Usually written by developers as they work on code (white-box style), to ensure that the specific function works as expected.
Unit testing alone cannot verify the functionality of a piece of software; rather, it is used to ensure that the building blocks of the software work independently from each other.
It aims to eliminate construction errors before code is promoted to QA; this strategy is intended to increase the quality of the resulting software as well as the efficiency of the overall development and QA process.
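A minimal sketch of a developer-written unit test at the class level, in pytest style. The `Account` class is a hypothetical unit under test, not from the presentation.

```python
class Account:
    """Hypothetical class under test."""
    def __init__(self):
        self.balance = 0

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit amount must be positive")
        self.balance += amount

def test_constructor_starts_with_zero_balance():
    # Minimal unit tests cover the constructor as well.
    assert Account().balance == 0

def test_deposit_increases_balance():
    account = Account()
    account.deposit(100)
    assert account.balance == 100
```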
The “Level” approach: Unit/Integration/System/E2E
Integration testing: Verifies the interfaces between components against a software design. Software components may be integrated iteratively or all together ("big bang"); normally the former is considered the better practice, since it allows interface issues to be located and fixed more quickly.
Works to expose defects in the interfaces and interactions between integrated components (modules).
Usually done by testers (can be done by developers as well).
If the modules do not have a UI, testing them requires special skills (API testing); if the integrated modules have a UI, testing can be done manually.
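A sketch of an integration test that exercises the interface between two components wired together (rather than mocked). The `OrderService` and `InventoryStore` components are hypothetical examples, not taken from the presentation.

```python
class InventoryStore:
    """Hypothetical component: tracks stock levels in memory."""
    def __init__(self, stock):
        self.stock = dict(stock)

    def reserve(self, item, quantity):
        if self.stock.get(item, 0) < quantity:
            raise RuntimeError(f"not enough {item} in stock")
        self.stock[item] -= quantity

class OrderService:
    """Hypothetical component under test: depends on InventoryStore."""
    def __init__(self, inventory):
        self.inventory = inventory

    def place_order(self, item, quantity):
        self.inventory.reserve(item, quantity)   # the interface being verified
        return {"item": item, "quantity": quantity, "status": "confirmed"}

def test_order_service_reserves_stock_through_inventory_interface():
    inventory = InventoryStore({"widget": 5})
    service = OrderService(inventory)

    order = service.place_order("widget", 2)

    assert order["status"] == "confirmed"
    assert inventory.stock["widget"] == 3   # side effect crossed the interface correctly
```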
The “Level” approach: Unit/Integration/System/E2E
System/E2E testing: Verifies a completely integrated system against its requirements.
Should also ensure that the program, as well as working as expected, does not destroy or partially corrupt its operating environment or cause other processes within that environment to become inoperative; this includes not corrupting shared memory, not consuming or locking up excessive resources, and leaving any parallel processes unharmed by its presence.
Example: a system test might involve testing a logon interface, then creating and editing an entry, plus sending or printing results, followed by summary processing or deletion (or archiving) of entries, then logoff.
Can be done manually or via UI or API automation.
The “Layer” approach: API/UI
API (= Application Programming Interface) testing: Verifies functionality of the module or application via the API layer.
The API layer is the set of procedures, functions, and other points of access which an application, an operating system, a library, etc., makes available to programmers in order to allow it to interact with other software.
An API is similar to a user interface, only instead of a user-friendly collection of windows, dialog boxes, buttons, and menus, the API consists of a set of direct software links, or calls, to lower-level functions and operations.
API testing requires programming skills.
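A minimal sketch of an API-level test using Python's `requests` library. The base URL, the `/users` endpoint, the payload fields and the expected status codes are all assumptions made for illustration, not part of the presentation.

```python
import requests

BASE_URL = "https://api.example.com"   # hypothetical service under test

def test_create_user_via_api():
    # Exercise the API layer directly: no UI involved.
    response = requests.post(
        f"{BASE_URL}/users",
        json={"name": "Alice", "email": "alice@example.com"},
        timeout=10,
    )
    assert response.status_code == 201

    user = response.json()
    assert user["name"] == "Alice"

    # Verify the created resource is retrievable through the same API layer.
    get_response = requests.get(f"{BASE_URL}/users/{user['id']}", timeout=10)
    assert get_response.status_code == 200
```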
The “Layer” approach: API/UI
Test harness (= test automation framework): A collection of software and test data configured to test a program unit by running it under varying conditions and monitoring its behavior and outputs.
It is a software application that robustly executes a series of test cases: it ensures the tests are executed and collects results even in the face of failures and crashes.
Usually includes logic and tools to support test case setup, execution, cleanup and result reporting.
A "must have" component for continuous integration systems.
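A toy sketch of what the core of a harness does: execute each test, keep going even when a test crashes, and report the results. Real frameworks (pytest, JUnit, etc.) do this plus setup/cleanup and much more; the functions below are illustrative only.

```python
import traceback

def run_tests(test_functions):
    """Minimal harness: execute each test and record results even on crashes."""
    results = {}
    for test in test_functions:
        try:
            test()                       # execution
            results[test.__name__] = "PASS"
        except AssertionError:
            results[test.__name__] = "FAIL"
        except Exception:                # a crash must not stop the whole run
            results[test.__name__] = "ERROR\n" + traceback.format_exc()
    return results

# Example tests the harness would drive.
def test_addition():
    assert 1 + 1 == 2

def test_that_fails():
    assert "expected" == "actual"

if __name__ == "__main__":
    for name, outcome in run_tests([test_addition, test_that_fails]).items():
        print(f"{name}: {outcome}")      # result reporting, e.g. for a CI system
```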
The “Layer” approach: API/UI
UI (= User Interface) based testing: Verifies functionality of the module or application via the UI layer. All black box tests are done via the UI layer.
The UI consists of a user-friendly collection of windows, dialog boxes, buttons, and menus through which the user can interact with the application.
UI testing does not require programming skills and can be done manually. However, UI-based tests can also be automated using UI-based test automation tools (e.g., Selenium) which interact with the application via the user interface.
A very good approach for functional testing (in some cases, the only approach) and the only approach for usability testing of applications with a UI layer.
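A minimal Selenium sketch of a UI-driven test. The application URL, the element IDs and the expected page title are assumptions about a hypothetical application; a browser driver (here ChromeDriver) must be installed locally.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

def test_login_via_ui():
    driver = webdriver.Chrome()          # requires a local ChromeDriver
    try:
        driver.get("https://app.example.com/login")   # hypothetical application URL

        # Element locators are assumptions about the page under test.
        driver.find_element(By.ID, "username").send_keys("demo_user")
        driver.find_element(By.ID, "password").send_keys("demo_password")
        driver.find_element(By.ID, "login-button").click()

        # Verify the UI reflects a successful login.
        assert "Dashboard" in driver.title
    finally:
        driver.quit()
```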
The “Non-Functional” approach: Performance/Load/Stress/Volume
Performance testing: Verifies how a system performs in terms of responsiveness and stability under a particular workload. Can also serve to investigate, measure, validate or verify other attributes of the system, such as scalability, reliability or resource usage.
During the test, the duration of the operation is measured (in time: microseconds, milliseconds, seconds, minutes, days).
Performance testing can be combined with load/stress/volume testing, where the durations of certain software operations or transactions are measured under various load conditions.
Example: during new user registration, how long does it take to validate the new user name against the database (with 1, 1,000 and 1 million user records in the DB)?
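A minimal sketch of measuring an operation's duration and asserting against a response-time budget. The `register_user` function and the 2-second budget are hypothetical placeholders for the operation and requirement under test.

```python
import time

def register_user(name):
    """Hypothetical operation under test, e.g. validating a new user name against the DB."""
    time.sleep(0.05)   # stand-in for the real work
    return True

def test_registration_completes_within_budget():
    start = time.perf_counter()
    assert register_user("new_user")
    duration = time.perf_counter() - start

    # Assumed performance requirement: registration completes in under 2 seconds.
    assert duration < 2.0, f"registration took {duration:.3f}s"
```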
The “Non-Functional” approach: Performance/Load/Stress/Volume
Load testing: Verifies the behavior of the system under a specific expected load. This load can be the expected number of concurrent users on the application performing a specific number of operations within a set duration. Frequently requires special tools to create the "load".
Example: test that a home banking application functions "normally" with the expected number of concurrent user sessions and can sustain this level of functionality for 5 days. During this test, the load would consist of the set of "normal" user transactions (e.g., log in, verify account info, transfer money, make payments, etc.) performed consistently by concurrent users over 5 days. This test would verify that the application can handle the load and process concurrent transactions without crashes, delays or performance degradation.
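A toy sketch of generating concurrent load from a test, using a thread pool to simulate user sessions. The session function, the user count of 100 and the 10-second budget are assumptions for illustration; real load tests typically use dedicated tools (e.g., JMeter, Locust) and run far longer.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def user_session(user_id):
    """Hypothetical 'normal' user transaction: log in, check account, transfer money."""
    time.sleep(0.01)   # stand-in for real requests to the application
    return True

def test_application_handles_expected_concurrent_load():
    concurrent_users = 100          # assumed expected load
    start = time.perf_counter()

    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        results = list(pool.map(user_session, range(concurrent_users)))

    elapsed = time.perf_counter() - start
    assert all(results), "some user sessions failed under load"
    assert elapsed < 10.0, f"load run took {elapsed:.1f}s"   # assumed budget
```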
The “Non-Functional” approach: Performance/Load/Stress/Volume
Stress testing: Used to understand the upper limits of capacity within the system. It is done to determine the system's robustness in terms of extreme load and helps application administrators to determine whether the system will perform sufficiently if the current load goes well above the expected maximum.
Example: per the requirements, the "average" capacity for a home banking application is a given number of concurrent transactions per minute. To stress the system, we test the application with higher numbers of transactions (e.g., 20,000; 100K; 500K; 1 million) to verify that the application does not crash. If the system cannot take more calls, it should handle new calls for service gracefully, for example by giving the user a meaningful message and asking them to try again in a few minutes.
The “Non-Functional” approach: Performance/Load/Stress/Volume
Volume testing: Testing a software application with a certain amount of data. This data can be the database size, or it could be the size of an interface file that is the subject of volume testing.
Example: to volume test the application with a specific database size (1 GB, 5 GB, 100 GB), the tester will need to expand the database to that size and then test the application's performance on it.
The “Functional” approach: Functional testing
Testing the functionality (= features) of the application.
How do you know what functionality to test for? This information comes from requirements (BRD, PRD), functional specifications, use cases and all other documentation available.
For each feature, a set of test cases has to be created and executed to determine the quality level of the feature.
The “How do we test for software changes” approach
Regression testing: Partial retesting of a modified program to make sure that no new errors were introduced while making changes to the code (developing new features, removing or fixing existing features).
Should be done for each new release (build).
If we do not have enough time to retest the software completely, risk analysis is involved when we decide which parts should be tested (partial retesting).
Second (after release acceptance/sanity) most frequently executed test.
Very good candidate for automation.
The “How do we test for software changes” approach
Smoke (aka "sanity" or "build acceptance test"/BAT) testing: Preliminary testing to reveal simple failures severe enough to reject a prospective software release or build.
A subset of test cases that cover the most important functionality of a component or system is selected and run, to ascertain whether the most crucial functions of a program work correctly.
A daily build acceptance or smoke test is among industry best practices.
Used by the QA team to determine whether the build should be taken into testing or rejected.
Smoke tests can be performed either manually or using an automated tool.
The most frequently executed type of testing and the best candidate for test automation.
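One common way to automate this split between a quick build-acceptance subset and the full regression set is with test markers; the pytest sketch below is a hypothetical illustration (the test bodies are stand-ins, and the custom markers would normally be registered in pytest.ini).

```python
import pytest

# Markers let the same suite serve both purposes:
# run the quick smoke subset on every build, and the full regression set later.

@pytest.mark.smoke
def test_application_starts():
    assert True   # stand-in for "the build installs and launches"

@pytest.mark.smoke
def test_user_can_log_in():
    assert True   # stand-in for the most crucial functionality

@pytest.mark.regression
def test_report_export_still_works():
    assert True   # stand-in for previously working functionality

# Build acceptance:  pytest -m smoke
# Full regression:   pytest -m regression   (or simply: pytest)
```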
The “Customer-facing” approach: Beta and User Acceptance testing
Beta testing: Testing of a software application which has reached the Beta-readiness phase.
Done by customers, via either public or private (subscription-based) Beta programs.
Verifies that the application works well in customers' environments and meets customers' expectations.
Bugs found during beta testing are usually high-priority bugs, especially if they were not seen in the in-house (internal) testing performed by the QA team(s).
The “Customer-facing” approach: Beta and User Acceptance testing
User Acceptance testing (UAT): Verifying that a solution/software application works for the user. It is not the same as system testing (ensuring the software does not crash and meets documented requirements); rather, it is there to ensure that the solution will work for the user, i.e., to test that the user accepts the solution.
This testing is usually done by a subject-matter expert (SME), preferably the owner or client of the solution under test, who provides a summary of the findings as confirmation to proceed after trial or review.
UAT is frequently one of the final stages of a project that occurs before a client or customer accepts the new system. Users of the system perform tests in line with what would occur in real-life scenarios. The results of these tests give the client(s) confidence as to how the system will perform in production. There may also be legal or contractual requirements for acceptance of the system.
The “Compatibility” approach
Browser compatibility testing: Verifies that the application functions well across different browsers.
Operating system compatibility testing: Verifies that the application functions well across different OSes.
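A possible sketch of running the same UI check across browsers with a parametrized fixture. The application URL, the expected title and the choice of Chrome/Firefox are assumptions, and each browser's driver must be installed locally.

```python
import pytest
from selenium import webdriver

# Run the same check once per browser listed in params.
@pytest.fixture(params=["chrome", "firefox"])
def driver(request):
    browser = webdriver.Chrome() if request.param == "chrome" else webdriver.Firefox()
    yield browser
    browser.quit()

def test_home_page_renders_in_each_browser(driver):
    driver.get("https://app.example.com")     # hypothetical application URL
    assert "Example App" in driver.title      # assumed page title
```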
The “Error handling” approach
Error (exception) handling testing: Verifies that the code can handle error conditions (invalid inputs). In order to establish that exception handling routines are sufficiently robust, it is necessary to present the application with a wide spectrum of invalid or unexpected inputs, such as can be created via software fault injection. Can be done either manually or via test automation.
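A minimal sketch of presenting a function with a spectrum of invalid inputs and asserting that it fails gracefully. The `parse_amount` function and the chosen bad inputs are hypothetical examples.

```python
import pytest

def parse_amount(text):
    """Hypothetical function under test: converts user input into a positive amount."""
    value = float(text)            # raises ValueError/TypeError on non-numeric input
    if value <= 0:
        raise ValueError("amount must be positive")
    return value

# Present the code with a spectrum of invalid or unexpected inputs.
@pytest.mark.parametrize("bad_input", ["", "abc", "-5", "0", None])
def test_invalid_inputs_are_rejected_gracefully(bad_input):
    with pytest.raises((ValueError, TypeError)):
        parse_amount(bad_input)
```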
The “Non-Systematic” approach
Ad-hoc testing: Random testing without planning or documentation; the least formal test method.
As such, it has been criticized because it is not structured, and hence defects found using this method may be harder to reproduce (since there are no written test cases). However, the strength of ad-hoc testing is that important and "tricky" defects can be found quickly. Performed manually.
Exploratory testing: Combines simultaneous software learning, test design and test execution.
The “Locale-specific” approach
Internationalization (i18n) testing: Verifies that the application can be installed in a non-EN locale and can perform basic functions with non-EN (non-ASCII) characters.
Done by installing the software on EN and non-EN locales and by running sanity testing with inputs containing non-ASCII characters.
Note: 18 stands for the number of letters between the first i and the last n in "internationalization" (a usage coined at DEC in the 1970s or 80s), i.e., "internationalization" = "i" + 18 letters + "n"; the lower-case i is used to distinguish it from the numeral 1 (one).
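A small sketch of the kind of sanity check run with non-ASCII inputs: feed locale-specific characters through the application and verify they come back intact. The round-trip function is a hypothetical stand-in for the application's storage or display path.

```python
import pytest

def save_and_load_display_name(name):
    """Hypothetical round trip through the application's storage layer."""
    encoded = name.encode("utf-8")       # what the application would persist
    return encoded.decode("utf-8")       # what it would read back and display

# Sanity inputs containing non-ASCII characters from several locales.
@pytest.mark.parametrize("name", ["Мария", "山田太郎", "Müller", "José", "اليونيكود"])
def test_non_ascii_names_survive_a_round_trip(name):
    assert save_and_load_display_name(name) == name
```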
The “Locale-specific” approach
Localization (L10n) testing: Verifies that the application is translated into a non-EN locale and can perform basic functions with specific locale characters.
Done by installing the translated software on non-EN locales and by running sanity testing with inputs containing characters of a certain non-EN locale. During testing, all strings (labels, messages) are verified for correctness of the translation to the locale.
Note: 10 stands for the number of letters between L and n, i.e., "localization" = "L" + 10 letters + "n"; the upper-case L is used to distinguish it from the numeral 1 (one).
Testing types – Overview (approximate!)
Unit testing: Does function A / the API work?
Integration testing: Does function A work with function B?
Functional testing (E2E/System): Does a feature work? Does Feature 1 work with Feature 2?
Non-functional testing: How does a feature work? (Is it fast enough? Does it handle stress? Is it translated properly?) Includes Performance, Stress, Load, i18n, L10n and Compatibility testing.
GUI testing (look & feel, usability): How does the app look and feel? Is it usable?
Tests can be positive or negative, manual or automated, black/white/grey box, and can target the GUI/client, the backend/server or (optionally) the database layer.
Testing types across QE cycles – Overview (approximate!)
Software release phases: Alpha phase (Phase 1), Beta phase (Phase 2), Final phase (RTM, GA).
Test cycles run throughout: Test Cycle 1, 2, 3, ... during Alpha; Test Cycle N+1, N+2, ... during Beta; up to Test Cycle X before the final release.
Beta testing (by customers) takes place during the Beta phase; UAT (by customers) takes place before the final release.
In each QA cycle:
Start with BAT/Sanity/Smoke (functional tests).
If it passes, proceed to regression: functional testing (positive, then negative), both via UI and API; the regression test set provides systematic coverage.
E2E tests are also executed once the software has enough quality (starting from the later part of the Alpha phase and until the release).
GUI testing.
Non-functional testing (performance, stress, compatibility, i18n, L10n, etc.).
Q&A