WP4: Testing tools and methodologies


WP4: Testing tools and methodologies
István Forgács, 4D SOFT

Team: 4D SOFT's staff
- István Forgács: responsible for leading WP4; 4D Soft's own methods and tools; key scientific issues; cooperation with the coordinator, attending PMBs, etc.
- Anna Bánsághi, Zsolt Thalmeiner: study and evaluate testing tools and methods; write tutorials, descriptions, etc.; cooperate with the University of Wisconsin staff
- Éva Takács: DILIGENT connection
- Gyöngyi Kispál: financial issues
- Klára Tauszig: administrative issues
- Others: peer testing

A4.1 Collect requirements
Requirements for the whole testing project.
- General requirements are usually covered by IEEE Std 829-1998 and IEEE Std 1008-1987. They include:
  - General test planning
  - Unit testing
  - Test design specification
  - Test case specification (test planning)
  - Test procedure
  - Test log
  - Test incident report
  - Test summary report
- Special requirements should be studied and added.

Collect requirements
Other requirements address:
- Testing tools
- Test environment
- Integration testing
- System testing
- Load and stress testing
- Performance testing
- Installation testing
- Regression testing
  - Static regression testing (analyzers)
  - Dynamic regression testing

A4.1 Collect requirements
- The IEEE standard is a general description; we first study which parts can be neglected and which topics are missing for our special distributed case.
- The standard is very general, so we have to extend it to make it applicable.
- We rely on current projects, especially EGEE and DILIGENT.
- Our goal is a modified template, with good examples, that can be used as a starting document.
- We expect to finish this activity by the end of April (PM04).

A4.1 Collect requirements for testing methods
Test method selection:
- We rely on our own experience and on state-of-the-art methods from testing textbooks.
- We further study the category-partition and state transition testing methods: how and when they are applicable.
- We extend and improve the category-partition method, and we also improve CatGen.
- Related tools are studied and compared.
- Case studies and experience reports are analysed.
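As a minimal sketch of the category-partition method named above: categories of an input are identified, each is partitioned into choices, and test frames are generated as combinations of choices, pruned by feasibility constraints. The "file search" categories, choices, and constraint below are invented for illustration, not taken from the project.

```python
from itertools import product

# Hypothetical categories and their choices for a "file search" operation;
# all names and values are illustrative only.
categories = {
    "pattern": ["empty", "literal", "wildcard"],
    "file_size": ["zero", "small", "huge"],
    "permissions": ["readable", "unreadable"],
}

def generate_frames(categories, constraint=lambda frame: True):
    """Yield test frames (one choice per category), pruning
    infeasible combinations via the constraint."""
    names = list(categories)
    for combo in product(*(categories[n] for n in names)):
        frame = dict(zip(names, combo))
        if constraint(frame):
            yield frame

# Illustrative constraint: for an unreadable file, only test one size.
feasible = list(generate_frames(
    categories,
    constraint=lambda f: f["permissions"] == "readable"
                         or f["file_size"] == "small",
))
print(len(feasible))  # 18 raw frames, 12 after pruning
```

The value of the method is in the constraint step: without it the frame count explodes combinatorially; with it, only meaningful test cases remain.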

A4.1 Collect requirements for tools
Testing tool selection:
- Requirements can only be collected based on existing tools.
- We study experimental reports, papers and books to make a preselection needed for the requirement specification.
- Requirements will probably be based on available features; features are weighted based on our own and others' experience.
- Unit testing tools are studied against the following criteria:
  - Whether additional code is required
  - Which types of coverage criteria are supported
  - Private method testing capability
  - Test driver and stub simulation
  - Regression testing capability
  - Ease of use
  - Learning curve
  - Test automation
  - Experience and references
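To illustrate the "test driver and stub simulation" criterion from the list above: a unit is tested in isolation by replacing its collaborator with a stub and exercising it from a driver. The Scheduler/GridStub names are hypothetical, invented for this sketch.

```python
# Unit under test: a hypothetical scheduler that asks a (possibly remote)
# grid service how many nodes are free. Names are illustrative only.
class Scheduler:
    def __init__(self, grid_service):
        self.grid = grid_service

    def can_run(self, nodes_needed):
        return self.grid.free_nodes() >= nodes_needed

# Stub: stands in for the real grid service, so the unit can be tested
# without any grid installation.
class GridStub:
    def __init__(self, free):
        self._free = free

    def free_nodes(self):
        return self._free

# Test driver: exercises the unit against the stub with chosen inputs.
def run_tests():
    assert Scheduler(GridStub(free=8)).can_run(4) is True
    assert Scheduler(GridStub(free=2)).can_run(4) is False
    return "ok"

print(run_tests())
```

Whether a tool can generate such stubs and drivers automatically, rather than requiring them to be hand-written, is exactly what the criterion measures.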

A4.1 Collect requirements for the test environment
- Installing a grid is difficult, so a separate grid test environment is necessary.
- Building it is not a WP4 task, but the specification should be done here.
- It should be possible to restore the original state.
- It should be shared among, and usable by, several projects.
- A test suite is necessary to justify its initial correctness.
- Test procedure: the process by which the tests are executed, including setting the initial state.

A4.1 Collect requirements for testing phases
Load and stress testing:
- Very critical for large distributed projects.
- Simulation tools are needed: one machine can send hundreds of different queries, simulating many users.
- Free tools exist, but there are problems with their reports.
Integration testing:
- To our knowledge, no integration testing tools exist.
- The EBIT technology supports integration testing; unit and integration testing are united.
System and regression testing:
- NMI Build/Test Framework.
Deliverable D4.1: requirements for the different test phases (PM06).
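The load-testing idea above, one machine simulating many concurrent users, can be sketched with a thread pool. The `send_query` function here is a placeholder (it only sleeps); in a real load test it would issue an actual request to the service under test.

```python
import concurrent.futures
import time

def send_query(i):
    """Placeholder for one simulated user query; a real test would
    send an HTTP or grid service request here and time it."""
    time.sleep(0.01)
    return i

def load_test(n_users, queries_per_user):
    """Fire n_users * queries_per_user queries from n_users
    concurrent worker threads and report count and elapsed time."""
    start = time.time()
    with concurrent.futures.ThreadPoolExecutor(max_workers=n_users) as pool:
        results = list(pool.map(send_query,
                                range(n_users * queries_per_user)))
    return len(results), time.time() - start

sent, elapsed = load_test(n_users=20, queries_per_user=5)
print(sent)  # 100 simulated queries
```

Stress testing then amounts to raising `n_users` until the system under test degrades, while the reporting problem mentioned above is about aggregating the per-query timings meaningfully.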

A4.2 Testing process
- The key issue in testing is the methodology used.
- Test automation by itself is very dangerous: companies that buy testing tools often leave them as "shelfware", and using a testing tool may cost additional time without any quality improvement.
- We therefore collect the methods to be applied during testing and select the tools that support a selected method.
- Finally, we produce written guidance on how to use each tool with the selected method.

A4.2 Method selection
We investigate theoretical and industrial testing methods:
- Test planning: category-partition and state transition methods
- Unit testing: very important if development is largely distributed; test coverage criteria; EBIT vs. traditional approaches
- Integration testing: EBIT
- Regression testing: static regression testing; NMI Build/Test Framework
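The state transition method listed above models the system as states and events and derives tests that exercise every transition. The job life cycle below is a made-up example; a coverage measure then tells how many modelled transitions a given test sequence exercises.

```python
# Hypothetical job life cycle: (state, event) -> next state.
# States, events and the model itself are illustrative only.
transitions = {
    ("queued", "start"):   "running",
    ("queued", "abort"):   "failed",
    ("running", "finish"): "done",
    ("running", "abort"):  "failed",
}

def fire(state, event):
    """Return the next state, or None for an invalid transition."""
    return transitions.get((state, event))

def transition_coverage(test_steps):
    """Fraction of modelled transitions exercised by the test steps."""
    covered = {(s, e) for s, e in test_steps if (s, e) in transitions}
    return len(covered) / len(transitions)

# A test suite of three steps covers 3 of the 4 transitions.
steps = [("queued", "start"), ("running", "finish"), ("queued", "abort")]
print(transition_coverage(steps))  # 0.75: ("running", "abort") untested
```

The untested transition is exactly the kind of gap the method makes visible; a tester would add a step aborting a running job to reach full transition coverage.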

A4.2 Testing tool selection
- Testing tools are ranked based on the evaluation criteria.
- www.stickyminds.com lists 292 tools.
- Besides open source and free tools, we may include other tools where the free tools are too weak to be used.
- Methods are selected first; then the related tools are investigated against our special requirements (grid, distributed research projects).
- We rely on the experience of current research projects such as EGEE and DILIGENT, as well as on our own experience.

A4.2 Method description
We produce a unified, coherent description for both the methods and the tools. Each method description contains:
- Description of the method
- Scope
- How to use it
- When it is applicable
- Related tools
- Advantages
- Disadvantages
- Examples

A4.2 Tool description
Each test tool description contains:
- Related method(s)
- How to use it
- When it is applicable
- Advantages
- Disadvantages
- A quick introduction on how to use it, applying the same examples
- A qualitative and quantitative comparison to similar tools (if any)
- (Later) a success story of the tool: real experience from applying it under ETICS
Deliverable D4.2: distributed test execution system (PM12).
Milestone: test methods and tools have been selected (PM12).

A4.3 Problems with quality metrics
- The related testing literature is poor.
- Branch coverage is theoretically better than statement coverage.
- Reliability growth models take the failure distribution and yield the MTBF (Mean Time Between Failures), but different models give quite different MTBF results, with ratios as large as 10:1.
- Methods for evaluating the quality of the models themselves are also poor.
- Case studies are also weak: Component+ tried to evaluate the BIT technology, but all the studies differed, so we do not know whether BIT is effective (though we have a feeling it is).
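For concreteness, the MTBF discussed above is, in its simplest empirical form, the mean gap between consecutive observed failures; reliability growth models then fit a distribution to those gaps, which is where the model-to-model divergence arises. The failure timestamps below are made up for illustration.

```python
def mtbf(failure_times):
    """Empirical Mean Time Between Failures from ordered failure
    timestamps (e.g. hours of operation)."""
    gaps = [b - a for a, b in zip(failure_times, failure_times[1:])]
    return sum(gaps) / len(gaps)

# Illustrative data: failures observed at these hours of operation.
# The widening gaps suggest reliability growth during testing.
times = [10, 30, 70, 150, 310]
print(mtbf(times))  # (20 + 40 + 80 + 160) / 4 = 75.0
```

Note that the plain mean treats all gaps equally; a growth model that weights recent (longer) gaps more heavily would predict a much higher future MTBF from the same data, which is exactly the divergence the slide complains about.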

A4.3 Traditional method
- Collect the features.
- Give a weight to each of them.
- Evaluate the tool with respect to these features.
- Compute the result.
- Compare the results of different tools or methods.
Advantage: easy to apply.
Disadvantage: the weights and scores are subjective, so it may give erroneous results.
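The steps above amount to a weighted-sum score. The feature names, weights, and per-tool scores below are hypothetical placeholders, chosen only to show the computation.

```python
# Hypothetical feature weights and per-tool scores on a 0-5 scale;
# all values are illustrative, not actual evaluation data.
weights = {"coverage_support": 3, "ease_of_use": 2,
           "regression": 2, "docs": 1}
tools = {
    "ToolA": {"coverage_support": 4, "ease_of_use": 3,
              "regression": 5, "docs": 2},
    "ToolB": {"coverage_support": 5, "ease_of_use": 2,
              "regression": 3, "docs": 4},
}

def weighted_score(scores, weights):
    """Sum of (weight * score) over the evaluated features."""
    return sum(weights[f] * scores[f] for f in weights)

ranking = sorted(tools, key=lambda t: weighted_score(tools[t], weights),
                 reverse=True)
print(ranking)  # ToolA scores 30, ToolB scores 29
```

The slide's caveat is visible even here: a one-point change in any subjective score or weight flips the ranking, which is why the method is easy to apply but fragile.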

A4.3 Error seeding
- Assume we have a correct program.
- Insert different types of errors into it.
- Apply the competing tools to the error-seeded software.
- Validate the results.
Advantages:
- Removes the human factor, so it is more precise.
- Applicable to methods by applying the corresponding tool.
Disadvantages:
- Requires extra work to devise artificial errors.
- Artificial errors may differ from real ones.
Improvement: use the method on a former version of the software with known bugs.
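One classical way to quantify error seeding (not necessarily the one intended by the slide) is Mills' estimate: if testing finds the same fraction of seeded and real defects, the total number of real defects can be projected from the seeded-defect recall. The numbers below are invented for illustration.

```python
def estimate_total_defects(seeded, seeded_found, real_found):
    """Mills' error-seeding estimate: assuming testing detects seeded
    and real defects with equal probability, total real defects is
    approximately real_found * seeded / seeded_found."""
    return real_found * seeded / seeded_found

# Illustrative numbers: 20 bugs were seeded; testing found 10 of them
# plus 7 real (non-seeded) bugs.
estimate = estimate_total_defects(seeded=20, seeded_found=10, real_found=7)
print(estimate)  # 14.0 real defects estimated, so ~7 remain undetected
```

The "artificial errors may differ from real ones" disadvantage translates directly into a violated assumption here: if seeded bugs are easier to find than real ones, the estimate is too low.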

A4.3 Peer testing
- Two testers test the same code with different methods or tools.
- Since the testers differ, the tools and methods are then swapped between them.
- Real code has to be used.
- We plan to involve more testers for testing 4D Soft's code, and we will apply it to testing DILIGENT code as well.
Advantage: applicable to tool evaluation by applying the same method.
Disadvantage: duplicates the work necessary for testing.

A4.3 Software quality evaluation
- No exact method exists; the testing process is the main indicator of the quality of the code.
- Some indicators:
  - Which methods are used (advanced methods give better results)
  - Which tools are used (advanced tools give better results)
  - Testing time relative to the entire development time
  - Number of test cases relative to the LOC
- Justification at the maintenance phase: number of defects found / total number of defects.
Milestones: metrics selected (PM16); distributed test execution system, final release (PM22); coherent and unified documentation of the selected tools and methods (PM22).
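The indicators above are simple ratios and can be computed mechanically once the raw counts are collected; the function and all input numbers below are illustrative, not project data.

```python
def quality_indicators(defects_found, total_defects,
                       test_cases, loc, test_time, dev_time):
    """Compute the indirect quality indicators listed above.
    Field names and the set of indicators are illustrative."""
    return {
        # maintenance-phase justification: share of defects caught
        "defect_detection_ratio": defects_found / total_defects,
        # test-case density relative to code size
        "tests_per_kloc": test_cases / (loc / 1000),
        # share of development effort spent on testing
        "test_effort_ratio": test_time / dev_time,
    }

# Made-up project numbers, for illustration only.
m = quality_indicators(defects_found=45, total_defects=50,
                       test_cases=300, loc=20000,
                       test_time=400, dev_time=1600)
print(m)  # ratios: 0.9 detection, 15 tests/KLOC, 0.25 test effort
```

The hard part, as the slide notes, is not computing these ratios but obtaining `total_defects`, which is only known (approximately) after the maintenance phase.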

Relationship with other projects
- 4D SOFT is involved in DILIGENT; we have a lot of experience, though mainly in gLite installation. As the project proceeds, our new experience will be incorporated.
- Éva Takács has considerable grid experience and is involved in ETICS as well (until March).
- Relation to WP2: the selected methods and tools are deployed and maintained in the repository.
- Relation to WP5: the metrics are used for quality reports.

Risk
No high-level risk has been identified, so contingency planning can be neglected. Minor risks:
- NMI Build/Test Framework?
- Load and stress testing tools (if any) must be tailored for the grid.
- Unit testing tools are language specific.
- Selecting appropriate quantitative evaluation criteria is very difficult; no significant contribution is available in the literature.
- The evaluation methods for software are very weak; we must rely on real applications, even if they are weak.

Invitation
I hope we will meet in May in Budapest.