WP4: Testing tools and methodologies
István Forgács 4D SOFT
Team: 4D SOFT's staff
István Forgács - responsible for:
    leading WP4
    4D SOFT's own methods and tools
    key scientific issues
    cooperating with the coordinator, visiting PMBs, etc.
Anna Bánsághi, Zsolt Thalmeiner:
    study and evaluate testing tools and methods
    make tutorials, descriptions, etc.
    cooperate with the Univ. of Wisconsin staff
Éva Takács - DILIGENT connection
Gyöngyi Kispál - financial issues
Klára Tauszig - administrative issues
Others: peer testing
A4.1 Collect requirements
Requirements for the whole testing project
General requirements usually follow the IEEE standard. These requirements include:
    general test planning
    unit testing
    test design specification
    test case specification (test planning)
    test procedure
    test log
    test incident report
    test summary report
Special requirements should be studied and added
Collect requirements
Other requirements address:
    testing tools
    the test environment
    integration testing
    system testing
    load and stress testing
    performance testing
    installation testing
    regression testing:
        static regression testing (analyzers)
        dynamic regression testing
A4.1 Collect requirements
The IEEE standard is a general description; we first study which parts can be neglected and which topics are missing for our special distributed case
The standard is very general, so we have to extend it to be applicable
We rely on current projects, especially EGEE and DILIGENT
Our goal is a modified template and good examples that can be used as a starting document
We expect to finish this activity by the end of April (PM04)
A4.1 Collect requirements for testing methods
Test method selection
We rely on our experience and on the state-of-the-art methods in testing textbooks
We further study category-partition and state-transition testing: how and when they are applicable
We extend and improve the category-partition method
We also improve CatGen
Related tools are also studied and compared
Case studies and experiences are analysed
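The category-partition method mentioned above can be illustrated with a short sketch: each input parameter is split into categories of representative choices, and the test frames are the constraint-filtered cross product of those choices. A minimal Python sketch; the categories and the constraint below are invented for illustration, not taken from CatGen:

```python
from itertools import product

# Hypothetical categories for a file-transfer job; each category lists
# representative choices (equivalence classes) for one input dimension.
categories = {
    "file_size": ["empty", "small", "huge"],
    "permissions": ["readable", "unreadable"],
    "network": ["up", "down"],
}

def test_frames(categories, constraint=lambda frame: True):
    """Generate all combinations of choices, filtered by a constraint."""
    names = list(categories)
    for combo in product(*(categories[n] for n in names)):
        frame = dict(zip(names, combo))
        if constraint(frame):
            yield frame

# Constraint: a down network already yields an error regardless of
# permissions, so skip combining "down" with "unreadable".
frames = list(test_frames(
    categories,
    constraint=lambda f: not (f["network"] == "down"
                              and f["permissions"] == "unreadable"),
))
print(len(frames))  # 9 frames survive (12 combinations minus 3)
```

Real category-partition work lies in choosing the categories and constraints; the enumeration itself stays this mechanical.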
A4.1 Collect requirements for tools
Testing tool selection
Requirements can only be collected based on existing tools
We study experimental reports, papers and books to make the preselection needed for the requirement specification
Requirements will probably be based on the available features
Features are weighted based on our and others' experience
Unit testing tools are studied based on the following:
    whether additional code is required
    which types of coverage criteria are supported
    private method testing capability
    test driver and stub simulation
    regression testing capability
    ease of use
    learning curve
    test automation
    experiences and references
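One criterion above, test driver and stub simulation, can be shown with Python's standard unittest and unittest.mock; the catalog service and its lookup method are hypothetical names made up for the example:

```python
import unittest
from unittest.mock import Mock

def replica_count(catalog, filename):
    """Unit under test: asks a (possibly remote) catalog for replicas."""
    replicas = catalog.lookup(filename)
    return len(replicas)

class ReplicaCountTest(unittest.TestCase):
    def test_counts_replicas_without_a_real_catalog(self):
        # The stub stands in for the remote grid catalog service.
        catalog = Mock()
        catalog.lookup.return_value = ["se1.example.org", "se2.example.org"]
        self.assertEqual(replica_count(catalog, "data.root"), 2)
        catalog.lookup.assert_called_once_with("data.root")

# Run the single test case directly (no test runner process needed).
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(ReplicaCountTest))
print(result.wasSuccessful())  # True
```

A tool scores well on this criterion when, as here, no real grid service has to be installed to exercise the unit.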
A4.1 Collect requirements for test environment
Installing a Grid is difficult, so a separate grid test environment is necessary
This is not a WP4 task, but the specification should be done here:
    the original state should be recoverable
    the environment should be divided and used by several projects
    a test suite is necessary to justify its initial correctness
Test procedure:
    the process by which the tests are executed
    initial state setting
A4.1 Collect requirements for testing phases
Load and stress testing
    very critical for large distributed projects
    simulation tools are needed: one machine can send hundreds of different queries, simulating many users
    free tools exist, but their reporting is problematic
Integration testing
    no integration testing tools exist (to our knowledge)
    EBIT technology supports integration testing; unit and integration testing are united
System and regression testing
    NMI Build/Test Framework
D4.1 - requirements for the different test phases (PM06)
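The load-simulation idea above (one machine issuing many concurrent queries) can be sketched with a thread pool; send_query is a stand-in for a real grid service call, invented for the example:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def send_query(i):
    """Stand-in for one client query to the service under test."""
    start = time.perf_counter()
    time.sleep(0.01)          # pretend the service answers in ~10 ms
    return time.perf_counter() - start

def load_test(n_users, n_queries):
    """Fire n_queries from a pool simulating n_users concurrent clients."""
    with ThreadPoolExecutor(max_workers=n_users) as pool:
        latencies = list(pool.map(send_query, range(n_queries)))
    return {
        "queries": len(latencies),
        "max_latency": max(latencies),
        "mean_latency": sum(latencies) / len(latencies),
    }

report = load_test(n_users=20, n_queries=200)
print(report["queries"], round(report["mean_latency"], 3))
```

The reporting problem noted above shows up exactly here: a real tool must aggregate these latencies into percentiles and error rates, which is where the free tools tend to be weak.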
A4.2 Testing process
The key issue in testing is the methodology used
Test automation in itself is very dangerous:
    companies buying testing tools often leave them as "shelfware"
    using a testing tool may cost additional time without any quality improvement
Thus we collect the methods to be applied during testing and select the tools that support a selected method
Finally, we produce written guidance on how to use each tool with the selected method
A4.2 Method selection
We investigate theoretical and industrial testing methods:
    Test planning: category-partition, state-transition method
    Unit testing: very important if development is largely distributed; test coverage criteria; EBIT vs. traditional
    Integration testing: EBIT
    Regression testing: static regression testing, NMI Build/Test Framework
A4.2 Testing tool selection
Testing tools are ranked based on the evaluation criteria
The URL contains 292 tools
Besides open-source and free tools, we may involve other tools where the free ones are too weak to be used
Methods are selected first; then the related tools are investigated against our special requirements (grid, distributed research projects)
We rely on the experience of current research projects such as EGEE, DILIGENT, etc.
We also rely on our own experience
A4.2 Method description
We make a unified, coherent description of both the methods and the tools
A method description contains:
    method description
    scope
    how to use it
    when it is applicable
    related tools
    advantages
    disadvantages
    examples
A4.2 Tool description
A test tool description contains:
    related method(s)
    how to use it
    when it is applicable
    advantages
    disadvantages
    a quick introduction to its use, applying the same examples
    a qualitative and quantitative comparison to similar tools (if any)
    (later on) a success story of the tool: real experiences from applying it under ETICS
D4.2 Distributed test execution system (PM12)
Test methods and tools have been selected (PM12)
A4.3 Problems with quality metrics
The related testing literature is very poor
Branch coverage is theoretically better than statement coverage
Reliability growth models take the failure distribution and yield the MTBF (Mean Time Between Failures)
    different models give quite different results for the MTBF; the ratio can be as much as 1:10
Methods for evaluating model quality are also poor
Case studies are also weak
    Component+ tried to evaluate BIT technology, but all the studies were different
    we don't know whether BIT is effective (though we have a feeling it is)
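The MTBF disagreement above can be made concrete with a toy sketch: even two very simple estimators, fed the same inter-failure times, disagree. These are not the published reliability growth models, only invented stand-ins that show how the modelling assumption drives the MTBF (the failure data are made up):

```python
# Invented inter-failure times (hours); reliability was improving, so gaps grow.
gaps = [2, 3, 5, 9, 16, 30, 55]

# Model 1: plain average of all gaps (a "no growth" assumption).
mtbf_mean = sum(gaps) / len(gaps)

# Model 2: only the recent half of the history matters
# (a crude reliability-growth view).
recent = gaps[len(gaps) // 2:]
mtbf_recent = sum(recent) / len(recent)

print(round(mtbf_mean, 1), round(mtbf_recent, 1))  # 17.1 vs 27.5 hours
```

With real models (e.g. different reliability growth curves fitted to the same data) the spread can reach the factor of ten noted above.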
A4.3 Traditional method
Collect the features:
    give a weight to each of them
    evaluate the tool w.r.t. these features
    compute the result
    compare the results of different tools or methods
Advantage: easy to apply
Disadvantage: the quantitative elements (weights, scores) are subjective, thus the result may be erroneous
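The weighting procedure above can be sketched directly; the features, weights and per-tool scores below are invented placeholders, not actual evaluation results:

```python
# Invented weights (importance) and per-tool scores (0-10) for two
# hypothetical unit testing tools; real values would come from A4.1.
weights = {"coverage": 5, "stubs": 3, "learning_curve": 2}
scores = {
    "ToolA": {"coverage": 8, "stubs": 4, "learning_curve": 9},
    "ToolB": {"coverage": 6, "stubs": 9, "learning_curve": 5},
}

def weighted_score(tool_scores, weights):
    """Weighted sum of the feature scores for one tool."""
    return sum(weights[f] * tool_scores[f] for f in weights)

ranking = sorted(scores,
                 key=lambda t: weighted_score(scores[t], weights),
                 reverse=True)
print(ranking)  # ['ToolA', 'ToolB'] with scores 70 vs 67
```

The disadvantage noted above is visible here: nudging a single weight can flip the ranking, which is why the scores must not be read as objective.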
A4.3 Error seeding
Assume that we have a correct program:
    insert different types of errors into it
    apply the competing tools to the error-seeded software
    validate the results
Advantage:
    ignores the human factor, thus more precise
    applicable to methods by applying a tool
Disadvantage:
    requires extra work to devise artificial errors
    artificial errors may differ from the real ones
Improvement: use the method on a former version of the software with known bugs
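Error seeding also supports a classical estimate of the remaining real defects (Mills' method): if testing finds s of the S seeded errors and r real ones, and seeded and real errors are assumed equally detectable, the total real-defect count is roughly r·S/s. A small sketch with invented numbers:

```python
def estimated_real_defects(seeded, seeded_found, real_found):
    """Mills-style estimate: assume seeded and real errors are found at
    the same rate, so total_real ~= real_found * seeded / seeded_found."""
    if seeded_found == 0:
        raise ValueError("no seeded error found; cannot estimate")
    return real_found * seeded / seeded_found

# We planted 20 artificial errors; testing exposed 15 of them plus 6 real bugs.
total = estimated_real_defects(seeded=20, seeded_found=15, real_found=6)
print(total)  # 8.0 estimated real defects, i.e. ~2 still hiding
```

The equal-detectability assumption is exactly the disadvantage listed above: if the artificial errors are easier to find than real ones, the estimate is too optimistic.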
A4.3 Peer testing
Two testers test the same code with different methods or tools
Since the testers are different, the tools and methods are swapped between them
Real code has to be used:
    we plan to involve more testers for testing 4D SOFT's code
    we apply it for the testing of DILIGENT code as well
Advantage: applicable for tool evaluation by applying the same method
Disadvantage: duplicates the work necessary for testing
A4.3 Software quality evaluation
No exact method exists
The testing process is the main indicator of the quality of the code
Some details:
    which methods are used (advanced methods give better results)
    which tools are used (advanced tools give better results)
    testing time relative to the entire development time
    number of test cases relative to the LOC
Justification at the maintenance phase: number of defects found / total number of defects
Metrics selected (PM16)
Distributed test execution system, final release (PM22)
Coherent and unified documentation of the selected tools and methods (PM22)
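The last two indicators above can be computed mechanically; the figures below are invented, and the total defect count is of course only known retrospectively, e.g. from the maintenance phase:

```python
def quality_indicators(test_cases, loc, defects_found, defects_total):
    """Two simple test-process indicators from the list above:
    test density (cases per KLOC) and defect detection rate."""
    return {
        "tests_per_kloc": test_cases / (loc / 1000),
        "detection_rate": defects_found / defects_total,
    }

# Hypothetical project: 450 test cases over 30 KLOC; maintenance later
# revealed 60 defects in total, 57 of which testing had already found.
ind = quality_indicators(test_cases=450, loc=30_000,
                         defects_found=57, defects_total=60)
print(ind["tests_per_kloc"], ind["detection_rate"])  # 15.0 0.95
```

Such numbers only indicate quality; as the slide says, there is no exact method, so they should be read alongside which methods and tools produced them.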
Relationship with other projects
4D SOFT is involved in DILIGENT
    we have lots of experience, but mainly in gLite installation
    as the project proceeds, our new experiences will be incorporated
Éva Takács has lots of experience w.r.t. the Grid, and she is involved in ETICS as well (till March)
Relation to WP2: the selected methods and tools are deployed and maintained in the repository
Relation to WP5: the metrics are used for quality reports
Risk
No high-level risk can be identified, thus contingency planning can be neglected
Minor risks:
    NMI Build/Test Framework?
    load and stress testing tools (if any) should be tailored for the grid
    unit testing tools are language specific
    selecting appropriate quantitative evaluation criteria is very difficult; no significant contribution is available in the literature
    the evaluation methods for software are very weak; we must rely on real applications, even if they are weak
Invitation
I hope we meet in May in Budapest