
1 Agenda
- Software testing: why is it needed?
- Testing methodology followed
- Test automation
- Different testing types and the tools used
- WinRunner: a functional testing tool

2 Software Testing
- Determine whether the system meets the requirements specified by the client.
- Find bugs and track them through defect-tracking tools.
- Improve the quality of the application and add value to the organization.
- Track usability issues that are not specified explicitly in the client requirements.
The basic process:
- Develop test plans and test cases.
- Determine the expected results.
- Execute the test cases.
- Compare actual and expected results.

3 Software Testing Methodologies
Two common methodologies: Waterfall and Incremental. The test levels involved:
- Unit Testing: individual units are tested. Goal: confirm that the module is coded correctly. Types: black box and white box.
- Integration Testing: tests subsystems and modules. Goals: does the subsystem meet the system requirements? Does it function properly? Tests the interface to the subsystem.
- System Testing: black-box testing of the system as a whole, after complete integration.
- User Acceptance Testing: testing the finished product against the user's perception.

4 E-Testing Methodology
[Process diagram] Requirement Analysis -> Strategy Formulation -> Test Planning -> Test Case Generation -> Scripting -> Test Execution -> Release -> Post-Deployment Evaluation, iterating between onsite and offshore around a shared knowledge repository. Inputs at each stage: tools, checklists, environment. Test types covered: functional, stress, performance, regression, defect retesting. Tooling: functional test tools, load test tools, a test management tool.

5 Automation - Why Required?
- Reduced testing time.
- Consistent test procedures: ensures process repeatability and resource independence.
- Eliminates the errors of manual testing.
- Reduced QA cost: the upfront cost of automated testing is easily recovered over the lifetime of the product.
- Improved testing productivity: test suites can be run earlier and more often.
- Proof of adequate testing.
- Handles the tedious work, so test team members can focus on quality areas.

6 Which Test Cases to Automate?
- Tests that need to be run for every build of the application (sanity checks, regression tests).
- Tests that use multiple data values for the same actions (data-driven tests); see the sketch after this list.
- Tests that require detailed information from application internals (e.g., SQL, GUI attributes).
- Stress/load testing.
The more repetitive the execution, the better the candidate for automation.
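A data-driven test in WinRunner's Test Script Language (TSL) typically loops over a data table with the ddt_* functions. The sketch below is illustrative only: the table file "orders.xls" and its "make" column are hypothetical stand-ins, while the form objects are borrowed from the recording example later in this deck.

    # Minimal data-driven sketch (TSL). Table file and column names are
    # hypothetical; WinRunner's DataDriver wizard generates similar code.
    table = "orders.xls";
    rc = ddt_open (table, DDT_MODE_READ);          # open the data table
    if (rc != E_OK && rc != E_FILE_OPEN)
        pause ("Cannot open the data table.");
    ddt_get_row_count (table, row_count);          # number of data rows
    for (i = 1; i <= row_count; i++)
    {
        ddt_set_row (table, i);                    # move to the next row
        set_window ("Automobile Purchase Form", 10);
        list_select_item ("Make", ddt_val (table, "make"));  # value from table
        button_press ("Insert Sale");
    }
    ddt_close (table);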

7 Which Test Cases Not to Automate?
- Usability testing: "How easy is the application to use?"
- One-time testing.
- "ASAP" testing: "We need to test NOW!"
- Ad hoc/random testing based on intuition and knowledge of the application.
- Tests without predictable results.

8 From Manual to Automated Testing
Manual:
1. Perform user actions.
2. Wait for processes to complete.
3. Verify that the AUT functions as expected.
4. Repeat the steps until all applications are verified compliant.
Automated:
1. Generate an automated script.
2. Synchronize script playback to application performance.
3. Add verification.
4. Run the test or suite of tests.
WinRunner performs the same actions as a manual tester. If you ever wonder how to accomplish a testing task using WinRunner, just ask yourself, "How would I do it manually?" A sketch of these four steps in script form follows below.
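Put together, the four automated steps look roughly like the TSL sketch below. The window and object names are borrowed from examples later in this deck; the checklist file "list1.ckl", expected-results name "gui1", and the "Insert Done"/"enabled" synchronization target are assumptions, not recorded output.

    # 1. Generated script: recorded user actions
    set_window ("Automobile Purchase Form", 10);
    edit_set ("Customer Name", "Thomas Paine");
    button_press ("Insert Sale");

    # 2. Synchronization: wait up to 60 seconds for the AUT to finish
    obj_wait_info ("Insert Done", "enabled", 1, 60);

    # 3. Verification: compare GUI state against a saved checklist
    obj_check_gui ("Insert Sale", "list1.ckl", "gui1", 1);

    # 4. Save the test, then run it on its own or from a suite.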

9 Testing Is a Team Effort
Team member - responsibility:
- Project Manager: manages the testing process.
- Business Analyst: analyzes the enterprise and creates tests.
- Developer: develops the applications and fixes defects.
- WinRunner Expert: creates automated tests based on planning documentation and requirement specifications.
- Subject Matter Expert (business user): understands how applications work in terms of navigation and data.
- System Administrator: manages the test environment.
The testing process is a team effort: as the WinRunner implementer, you will rely on your teammates throughout.

10 Automated Testing Process - A Team Effort: Typical Responsibilities
1. Generate automated script: WinRunner Expert, Subject Matter Experts, Business Analysts.
2. Synchronize script playback to application performance: WinRunner Expert, Business Analysts.
3. Add verification: WinRunner Expert.
4. Run test or suite of tests: WinRunner Expert, Business Analysts, System Administrators.
One example where you might need the assistance of your teammates is in step one, generating automated tests: you might need the Subject Matter Expert to help you navigate through a business process using the appropriate data, and the Business Analyst to specify which output fields to verify in the test script.

11 Testing Process
- Gather test documentation: What type of testing is required for the AUT? Which test data to use? What results are expected?
- Learn the AUT: What screens to navigate? What visual cues to establish?
- Confer with project team functional experts.

12 Mercury Interactive's WinRunner
Steps involved:
1. Script generation.
2. Customization of scripts.
3. Parameterization of data.
4. Maintenance of test scripts in test suites (see the driver sketch below).
5. Saving test results.
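Test suites are commonly maintained as a driver ("batch") test that invokes the individual tests in order via TSL's call statement. A minimal sketch, assuming hypothetical test paths:

    # Hypothetical driver test; each call invokes a saved WinRunner test.
    call "C:\\qa\\tests\\login" ();
    call "C:\\qa\\tests\\insert_sale" ();
    call "C:\\qa\\tests\\logout" ();

Each called test runs to completion before the next begins, so shared initial/end conditions can be established once in the driver.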

13 Recording and Playback
1. Record user actions in a script.
2. Synchronize the script to the application under test.
3. Add verification statements to check the AUT.
4. Run the test or suite of tests.
Topics in this section: recording and playback; initial/end conditions; analog vs. context sensitive scripts; the GUI map.

14 What Happens During Recording?
During recording, WinRunner listens to user actions on the application and creates a log describing those actions, called a test script. Filling in an "Automobile Purchase Form", for example, produces:

    set_window ("Automobile Purchase Form", 10);
    edit_set ("Customer Name", "Thomas Paine");
    edit_set ("Address", "234 Willow Drive");
    edit_set ("Date", "12/12/03");
    list_select_item ("Make", "BMW");
    edit_set ("Year", "1973");
    edit_set ("Model", "2002tii");
    button_press ("Insert Sale");

15 What Happens During Playback?
WinRunner replays the recorded statements, performing the user actions on the application just like a real user. Running the script from the previous slide re-enters the same data and presses "Insert Sale"; the AUT responds with its confirmation, "Purchase Completed...".

16 Two Recording Modes
Context Sensitive:
- Default mode; recommended.
- When the application is based on GUI objects.
Analog:
- When the application has non-GUI areas (e.g., a drawing application).
- When mouse tracks are necessary for correct execution.
- Use when you are unable to use Context Sensitive mode.
TIP: A test can combine both Context Sensitive and Analog statements.

17 Context Sensitive Recording
- Object based.
- Readable script.
- Maintainable (editable) script.
- Script not affected by user interface changes: if an object moves location on the GUI, the script will still replay correctly.
- Portable script: a context sensitive script can be ported to different platforms with different configurations.

18 A Closer Look at GUI Objects
Typical GUI objects: menu, window, static text, list item, scroll bar, edit field, frame, radio button, push button.
When using WinRunner to test a terminal emulator (TE) application, you will see one of three objects in a WinRunner test script:
1. Fields: areas of the application that can receive input (unprotected) or display output (protected) from the system.
2. Screens: the 80x24 character interface that displays the fields of the application; there are usually multiple screens in an application.
3. Window: the TE window that displays the screens of the application under test.

19 User Actions in the Test Script
- Specify the window for input: set_window ("Login", 10);
- Type input into an edit field: edit_set ("Username", "thomas");
- Type encrypted input into a password field: password_edit_set ("Password:", "kzptnzet");
- Press a button to submit: button_press ("OK");
- Specify another window: set_window ("Automobile Purchase Form", 10);
- Select a list box item: list_select_item ("Make", "BMW");
When recording a baseline script, you will typically see these six Test Script Language (TSL) statements appear repeatedly. Notice that each statement contains the user action (e.g., edit_set), the object the action is performed on ("Username"), and the input data ("thomas").

20 Analog vs. Context Sensitive Scripts
(Section divider; repeats the recording-and-playback outline from slide 13.)

21 Context Sensitive Script Review

    set_window ("Save As");
    edit_set ("File Name", "output14");
    button_press ("OK");

This script types "output14" into the File Name field of the Save As dialog and clicks OK.

22 Analog Recording
- Screen-coordinate (x, y) dependent.
- The test script describes mouse and keyboard activities.
- Three kinds of commands: mouse press/release, mouse move, keyboard type.
- Covers all types of applications.

23 Analog Script

    move_locator_track (1);               # mouse movement
    mtype (" <T55> <kLeft>-<kLeft>+");    # mouse click
    type (" <t3>output14");               # keyboard input
    move_locator_track (2);               # mouse movement
    mtype (" <T35><kLeft>-<kLeft>+ ");    # mouse click

The <T...> codes capture timing; the script types "output14" at the recorded screen coordinates.

24 Analog or Context-Sensitive?
- Context Sensitive statements describe actions made to GUI objects and are recommended for most situations.
- Analog statements are useful for literally describing the keyboard, mouse, and mouse-button input of the user.
Application / functionality under test / recommended mode:
- Graphics program / paintbrush stroke: Analog.
- Graphics program / preferences checkboxes: Context Sensitive.
- Virtual reality environment / mouse-based movement controls: Analog.
- Client/server database / data entry using standard GUI objects: Context Sensitive.

25 WinRunner Tracks the AUT's Windows and Objects With the GUI Map File
The GUI map file contains:
- the windows of the AUT;
- the objects within each window;
- the physical attributes that create each object's unique identification.
For example, a map for a Login window (logical name, then physical description):

    WINDOW: Login
      Name      {class: edit, attached_text: "Name"}
      Password  {class: edit, attached_text: "Password"}
      OK        {class: push_button, label: "OK"}

26 GUI Map Editor
The visual tree displays the windows and objects contained in the GUI map file:
- First level: all windows in the AUT (parent windows, by logical name).
- Second level: objects uniquely identified within each parent window (child objects, by logical name).
- The physical description of the highlighted window or object is shown beneath the tree.

27 The GUI Map: Characteristics and Strengths
Characteristics:
- Allows separation of physical attributes from test scripts.
- Enables WinRunner to uniquely identify objects in the AUT using physical attributes.
- Allows WinRunner to refer to objects in the script using an intuitive logical name.
- Provides the connection between logical names and physical attributes.
Strengths:
- Maintainability: if a button label changes in the application, update the button description once in the GUI map rather than in 500 tests.
- Readability: button_press("Insert") instead of button_press("{class: ThunderSSCommand}");
- Portability: use the same script for all platforms, with a different GUI map for each platform (see the sketch below).
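Portability in practice: load a platform-specific map at the start of the run and keep the script body unchanged. A minimal sketch under stated assumptions: the map file names and the PLATFORM environment variable are hypothetical, not part of any WinRunner default.

    # Load a platform-specific GUI map; file names and the PLATFORM
    # environment variable are hypothetical.
    if (getenv ("PLATFORM") == "motif")
        GUI_load ("maps\\flights_motif.gui");
    else
        GUI_load ("maps\\flights_win.gui");

    set_window ("Login", 10);    # logical names resolve through the loaded map
    edit_set ("Username", "thomas");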

28 Checkpoints
- GUI checkpoints
- Database checkpoints
- Bitmap checkpoints
- Text checks
TSL sketches of each type follow below.
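One hedged TSL example of each checkpoint type. All checklist, baseline, bitmap, and object names here ("list1.ckl", "gui1", "list1.cdl", "dbvf1", "Img1", "Flights Graph", "Order Status") are placeholders:

    # GUI checkpoint: compare object properties with a saved checklist
    obj_check_gui ("Insert Sale", "list1.ckl", "gui1", 1);

    # Database checkpoint: compare a result set with a saved baseline
    db_check ("list1.cdl", "dbvf1");

    # Bitmap checkpoint: pixel-compare an object with a captured image
    obj_check_bitmap ("Flights Graph", "Img1", 10);

    # Text check: read text from an object and verify it in the script
    obj_get_text ("Order Status", status_text);
    if (status_text != "Purchase Completed...")
        tl_step ("status text", 1, "Unexpected status: " & status_text);  # non-zero = fail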

29 Why Synchronize?
Without a synchronization point:
1. The script runs and inputs data to the AUT.
2. The AUT accepts the input, sends data to the database server, and waits for the server.
3. The script attempts the next step while the AUT cannot continue: the script fails.
With a synchronization point:
1. The script runs and inputs data to the AUT.
2. The AUT accepts the input and sends data to the database server.
3. The script waits at the synchronization point while the server processes the data and returns results.
4. The client affirms the transaction is complete, and the script continues.

30 Synchronization Points
- The AUT's performance may slow down as the number of users increases.
- Synchronization points allow WinRunner to wait for the AUT, just like a real user (see the sketch below).
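A typical synchronization point in TSL waits for an object property to reach a value instead of sleeping for a fixed time. The sketch below follows the familiar Flight Reservation style; the "Insert Done" object and its "enabled" attribute are assumptions about the AUT:

    set_window ("Flight Reservation", 10);
    button_press ("Insert Order");
    # Synchronization point: wait up to 60 seconds for the "Insert Done"
    # indicator to become enabled before continuing
    obj_wait_info ("Insert Done", "enabled", 1, 60);
    button_press ("Delete Order");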

31 Playback Test Results Report
- Each checkpoint outcome is either OK or mismatch.
- Checkpoint details can be opened in a separate window.
Custom entries can also be written into the report from the script, as sketched below.
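A sketch of script-driven reporting, reusing the hypothetical checkpoint names from above:

    # Record a custom pass/fail step based on a checkpoint's return code
    rc = obj_check_gui ("Insert Sale", "list1.ckl", "gui1", 1);
    if (rc == E_OK)
        tl_step ("insert_sale_check", 0, "GUI checkpoint passed.");   # 0 = pass
    else
        tl_step ("insert_sale_check", 1, "GUI checkpoint mismatch."); # non-zero = fail
    report_msg ("Insert Sale scenario finished.");   # informational entry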

32 Thank You

