Presentation on theme: "Linda Hayes’ suggests that automation has to be taken up when: Different Server platforms are to be tested Huge number of tests are to be performed. If."— Presentation transcript:

1 Should I Go for Automated Testing? : Deciding on Automation
Linda Hayes suggests that automation should be taken up when:
– Different server platforms are to be tested.
– A large number of tests are to be performed.
If automation is done ‘just to automate’ and the scripts are not going to be run over and over again, it will not give any ROI (return on investment), because the effort spent building an automated script is 2-3 times the effort of executing the same script manually. Selecting which test cases to automate is itself a multi-stage effort based on ROI analysis, so an ROI calculation is desirable before going in for automation.

2 Deciding on Automation : Calculating ROI (Linda Hayes)
STEP 1 : Select the test cases to be automated.
– Categorization of test cases.
– Elimination of technically infeasible test cases.
– Frequency of (re-)testing of each test case (regression tests are good candidates for automation vs. one-time runs).
– Criticality of the test case (to reduce errors introduced by manual testing).
STEP 2 : Split the test cases into groups with logically similar ROI characteristics. Some examples are:
– Length of the test cases / average time to automate.
– Reliance upon a common data set.
– Similar test automation tool coding requirements.
– Similar business processes or AUT functionality.
Selecting test cases for automation, overall:
– Estimate initial automation opportunities.
– Evaluate test automation libraries.
– Estimate test data set creation and maintenance.
– Identify types and number of releases.
– Ascertain CRs per release and their effect.
– Calculate the ROI per group and exclude groups with poor ROI or technical infeasibilities.
Remember: if your resulting test sets combine manual and automated tests, test management is critical!

3 Deciding on Automation : Calculating ROI
STEP 3 : Calculate the following:
– Average time to automate per test case.
– Time needed to create test automation libraries.
– Number of major and minor AUT releases.
– Number of scripts requiring re-scripting, with effort estimates.
– Number of scripts requiring maintenance, with effort estimates.
Words of caution:
– Only include costs that are unique to automated testing and the corresponding manual costs that are directly eliminated.
– Decide whether to include manual script clean-up in the test automation calculation.
– A proof of concept (POC) will be needed to determine some numbers, such as the development costs and benefits of test automation libraries, and to make technical feasibility decisions.
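The STEP 3 numbers feed a per-group ROI figure. Below is a minimal sketch in Python of how such a calculation might look; the formula, function name, and all inputs are illustrative assumptions, not Hayes' published model.

```python
# Illustrative per-group ROI sketch. All numbers and names are hypothetical;
# the 2-3x automation factor and 50-90% maintenance share come from the deck.

def automation_roi(num_tests, manual_hours_per_test, automation_factor,
                   library_hours, maintenance_fraction, num_releases):
    """Estimate ROI of automating one group of similar test cases.

    automation_factor: effort to automate relative to one manual run.
    maintenance_fraction: share of initial scripting effort spent on
        maintenance per release.
    """
    # One-time costs unique to automation.
    scripting = num_tests * manual_hours_per_test * automation_factor
    build_cost = scripting + library_hours
    # Recurring maintenance across releases.
    upkeep = scripting * maintenance_fraction * num_releases
    total_automation_cost = build_cost + upkeep
    # Manual cost directly eliminated: running every test in every release.
    manual_cost_saved = num_tests * manual_hours_per_test * num_releases
    return manual_cost_saved / total_automation_cost

# A group only pays off if it is re-run often enough:
print(automation_roi(50, 1.0, 2.5, 40, 0.2, 1))   # single release: below 1
print(automation_roi(50, 1.0, 2.5, 40, 0.2, 10))  # ten releases: above 1
```

This makes the deck's point concrete: a one-time run never recovers the 2-3x scripting cost, while repeated regression runs do.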

4 Should I Go for Automated Testing? : Deciding on Automation
Other factors influencing the decision:
Software Configuration Management (SCM)
Automated script maintenance typically accounts for between 50% and 90% of total test automation costs. To minimize this, you must script only against stable, known code; for this, SCM is critical. When new releases of a product are pushed to QA for testing, existing automated scripts must be re-certified against the new build to identify needed maintenance. This is performed by:
– Cross-checking change requests (CRs) against existing paper-based test scripts.
– Re-executing all other automated scripts to identify any new tool-application interfacing issues.
Test Management (TM)
All dependencies that contribute to the final quality of the product should be managed through an automated tool such as PVCS, Quartet, StarTeam, RCS, etc. Items to be tracked include: code modules/releases, test environment configuration, paper-based test scripts, automated test scripts, and test data. Using a tool to manage the creation and execution of automated/manual test cases and test sets is imperative. Items to track include build certifications for automated scripts, test case pre- and post-conditions, test data sets, and test set executions.
Requirements-Based Testing (RBT)
Requirements-Based Testing (RBT) is necessary to ensure complete testing coverage and proper updating of test sets when change requests are implemented. When new releases of a product are moved to QA for testing, CRs are matched to test cases using RBT to determine which automated/manual scripts need to be updated. RBT can be implemented manually or via automated tools such as Starbase’s CaliberRBT.

5 Requirements : Automated Testing (Linda Hayes’ suggestions)
1. A dedicated team.
2. Expertise in automation, for tool selection and implementation.
3. Effective selection of the test cases to be made efficient through automation.
4. Domain expertise.

6 Infosys Strengths
We have over 100 resources across Infosys who are experienced with WinRunner.
We have an understanding with Mercury for annual licensing of Mercury tools.
We have 5 WinRunner licenses available in our dedicated test lab.
– We also have LoadRunner and TestDirector licenses in the lab.
– Typically, projects purchase more licenses depending on customer needs.
We have a dedicated Independent Validation Services group that focuses on testing-related projects and has rich experience in the field of testing and automation.
We have worked on several successful test automation projects in the past (refer to the case studies).

7 Methodology : Automated Testing
Input:
– Manual scripts
– Application to be tested
Process:
– Generate the automated script using WinRunner, preferably in Context Sensitive mode.
– Save the generated automated scripts under the directory structure created in TestDirector.
– Insert GUI checkpoints to verify that the test case has executed successfully.
– Data-drive the script to include different sets of data.
– Create a library module (compiled module) holding a set of common functions that can be used across different test scripts.
– Defects in the test script are logged by the reviewer and fixed by the script’s creator.
Output:
– Automated test script
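The data-driving and compiled-module steps above can be sketched as follows. The sketch is in Python rather than WinRunner's TSL, and every function name, field, and data value is hypothetical.

```python
# Data-driven test sketch: a shared "compiled module" function (login) is
# replayed once per data row, mirroring the WinRunner practice described
# above. All names and data are illustrative assumptions.
import csv
import io

def login(app_state, user, password):
    """Common library function shared across test scripts.
    Acts like a checkpoint: raises with a clear message on failure."""
    if app_state["users"].get(user) != password:
        raise AssertionError(f"login failed for {user}")

def run_login_tests(app_state, csv_text):
    """Data-drive one script: repeat the same steps for each data row."""
    results = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        try:
            login(app_state, row["user"], row["password"])
            results.append((row["user"], "PASS"))
        except AssertionError:
            results.append((row["user"], "FAIL"))
    return results

app = {"users": {"alice": "s3cret"}}          # stand-in for the AUT
data = "user,password\nalice,s3cret\nbob,wrong\n"
print(run_login_tests(app, data))  # [('alice', 'PASS'), ('bob', 'FAIL')]
```

The point of the pattern is that adding a new test case means adding a data row, not writing a new script.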

8 Methodology : Manual Testing
Not all test cases can be automated; manual testing is required for those that cannot.
Inputs:
– Manual scripts
– Application to be tested
Process:
– Baseline the manual scripts into TestDirector.
– Perform a round of manual testing on the application based on the steps mentioned in the script.
– Check whether the expected results match the actual results. If they match, mark the step Pass; otherwise mark it Fail.
– If a particular step has failed, raise it as an issue and wait for resolution from the customer.
Output:
– The modified manual script, which has been manually tested at least once.

9 Running Automated Test Scripts
Input:
– Automated test script
– Application to be tested
– Directory structure under which the automated test scripts are present
Process:
– Use TestDirector to build a test set of the tests to run on the application.
– Run all the tests, either remotely or locally.
– Check the results. If the tests pass, the application is working fine; otherwise, report the failures in the defect log.
Output:
– TestDirector report
– Test results
– Defect or bug report
– Log file and results generated by the test script
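The run phase above can be sketched as a small test-set runner that collects per-test results and turns failures into defect-log entries. The structure is illustrative only; in practice TestDirector manages test sets and defects through its own interface.

```python
# Hypothetical test-set runner: execute a named group of scripts, record
# PASS/FAIL per test, and report each failure to a defect log.

def run_test_set(set_name, scripts):
    """scripts: list of (test_name, callable). Returns (results, defects)."""
    results, defects = {}, []
    for test_name, script in scripts:
        try:
            script()                      # run locally; raises on failure
            results[test_name] = "PASS"
        except Exception as exc:
            results[test_name] = "FAIL"
            defects.append({"test_set": set_name,
                            "test": test_name,
                            "error": str(exc)})
    return results, defects

def login_valid():
    pass                                  # simulated passing script

def lockout():
    raise AssertionError("account not locked after 3 bad attempts")

results, defects = run_test_set("smoke", [("login_valid", login_valid),
                                          ("lockout", lockout)])
print(results)   # {'login_valid': 'PASS', 'lockout': 'FAIL'}
print(defects)
```

Keeping the defect entries structured (test set, test name, error) is what lets them be filed straight into a defect-tracking repository afterwards.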

10 Test Automation : Best Practices
– Generation of a compiled module, yielding test automation libraries that contain common functions and difficult automation code reusable across all test scripts.
– Categorization of test scripts based upon functionality.
– One common GUI map file per test suite, used across the different test scripts that comprise the suite.
– Incorporation of log file generation in the test script; this helps in debugging in the event of script failure during a test run.
– Identification of application features that are not going to change across releases; these stable features are the ideal candidates for automation.
– Test script generation for test cases that form part of the regression test suite.
– Parameterization of test scripts, which allows data-driving the scripts with different test data across different test runs.
– Coding and documenting to standards that follow technical best practices for the tools in use, through review checklists, coding standards, scripting guidelines, etc.
– Use of review-effectiveness techniques, viz. reviews and defect seeding.
– Defect prevention activities: causal analysis of defects, quality orientation sessions, knowledge sharing sessions.
– Effective measurement and analysis of quality metrics.
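The log-file practice above can be sketched like this: every step appends a timestamped PASS/FAIL line so a failed run can be diagnosed afterwards. The sketch is a generic Python illustration; the names are not tied to any particular test tool.

```python
# Minimal log-file generation sketch for a test run. All names are
# hypothetical; a real suite would log far more context per step.
import logging
import os
import tempfile

def make_test_logger(log_path):
    logger = logging.getLogger("testsuite")
    logger.setLevel(logging.INFO)
    handler = logging.FileHandler(log_path)
    handler.setFormatter(
        logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
    logger.addHandler(handler)
    return logger

def run_step(logger, name, action):
    """Run one test step, logging PASS or FAIL with the failure reason."""
    try:
        action()
        logger.info("STEP %s: PASS", name)
        return True
    except Exception as exc:
        logger.error("STEP %s: FAIL (%s)", name, exc)
        return False

log_path = os.path.join(tempfile.mkdtemp(), "run.log")
log = make_test_logger(log_path)
run_step(log, "open-app", lambda: None)        # logged as PASS
run_step(log, "check-title", lambda: 1 / 0)    # logged as FAIL
print(open(log_path).read())
```

Because each line carries a timestamp and step name, the log alone is usually enough to pinpoint where an unattended overnight run broke.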

11 Case Study 1 : CISCO
Business Issue & Challenges
– Business Domain : Technology services
– Application Type : Java-based applications
– Large turnaround time between testing the application and fixing the reported bugs.
– Lack of a well-defined methodology for the testing process. Testing was performed only at the module level; as a result, the application in the production environment had many hidden bugs.
Solution
– Setting up a dedicated offshore testing team entrusted with carrying out the testing activity.
– Reducing the turnaround time between the testing team and the development team by automating the testing activity.
– Resources trained in the Mercury suite of tools.

12 Benefits to CISCO
– Fully automated testing.
– Uncovering of critical bugs.
– Reusability of common functions via the compiled-module concept.
– Reduction in the turnaround time between the development team and the testing team, primarily due to automating certain features of the application.
– Consolidation of test cases, reported defects, and generated automated scripts in one common place: TestDirector was used as the repository for the test cases, test plan, and automated test scripts.
– Cost-effective solution through forming an offshore testing team.

13 Case Study 2 : AMEX Regression Test
Business Issue & Challenges
– Business Domain : Financial services
– Application Type : Internet/intranet based applications
– Functionality/regression testing and multi-browser/OS compatibility of web-based applications
– Acquiring skills on different testing tools within a short period
– Issues related to connectivity to different servers for each application
– Very short lead period before commencement of each project (application)
– Understanding and testing applications simultaneously
– Meeting the short and tight schedules of different development teams in AMEX
– Need for accurate estimation in line with the development cycle
– Testing end-customer-sensitive applications with an independent view
Solution
– Setting up a dedicated offshore testing team entrusted with carrying out the testing activity.
– Reducing the turnaround time between the testing team and the development team by automating the testing activity.
– Resources trained in the Mercury suite of tools.

14 Case Study 2 : Infosys Solution
System details
– WinRunner : used for functionality and regression testing.
– TestDirector : used to maintain the automated scripts as well as the manual test cases in one centralized location, giving the various teams access to the test documents in a single place.
– Test suites : logically related test cases were grouped together to form test suites, enabling better organization of test cases.
– WinRunner and TestDirector integration.
Infosys Value Add
– A dedicated testing team established offshore to carry out the testing activity.
– Leveraging scripting expertise in WinRunner, thereby producing robust automated test scripts.
– 70 percent automation achieved and 65 bugs reported; these bugs were caught during rigorous testing of the application.

15 Benefits to AMEX
– Streamlined testing processes that help define a standard process for testing centers.
– Acquisition of domain knowledge across several applications.
– Strong skills on the testing tools.
– Continuous improvement in quality and productivity.
– Continuous satisfaction and appreciation from the development teams.
– Helping other projects automate their testing.

