SOFTWARE CONSTRUCTION AND TESTING


1 SOFTWARE CONSTRUCTION AND TESTING
CSSSPEC6 SOFTWARE DEVELOPMENT WITH QUALITY ASSURANCE <<professor>>

2 SOFTWARE CONSTRUCTION
SOFTWARE CONSTRUCTION is a fundamental act of software engineering: the creation of working, meaningful software through a combination of coding, validation, and unit testing by a programmer. Computer Science Department

3 LOW LEVEL SOFTWARE CONSTRUCTION
Verifying that the groundwork has been laid so that construction can proceed successfully
Determining how your code will be tested
Designing and writing classes and routines
Creating and naming variables and named constants
Selecting control structures and organizing blocks of statements
Unit testing, integration testing, and debugging your own code
Reviewing other team members' low-level designs and code, and having them review yours
Polishing code by carefully formatting and commenting it
Integrating software components that were created separately
Improving code and design

4 WHY IS SOFTWARE CONSTRUCTION IMPORTANT?
Some Reasons
Construction is a large part of software development
Construction is the central activity in software development
With a focus on construction, the individual programmer's productivity can improve enormously
Construction's product, the source code, is often the only accurate description of the software

5 WHY IS SOFTWARE CONSTRUCTION IMPORTANT?
Simple Answer
Construction is the only activity that's guaranteed to be done

6 IT'S ALL ABOUT QUALITY
How do you ensure that the software:
does what it should?
does it in the correct way?
is robust?
is reliable?
is easy to use?
is easy to change?
is easy to correct?
is easy to test?

7 WHAT IS SOFTWARE QUALITY ?
Formal Definitions
The totality of features and characteristics of a product or service that bear on its ability to satisfy stated or implied needs. (ISO 8402:1986, 3.1)
The degree to which a system, component, or process meets specified requirements. (IEEE)
A Practical One
A product that satisfies the stakeholders' needs (compliant product + good quality + delivery within budget/schedule)

8 QUALITY IS A COLLECTION OF “…ILITIES”
reliability: the ability to operate error free
reusability: the ability to use parts of the software to solve other software problems
extendibility: the ability to have enhancement changes made easily
understandability: the ability to understand the software readily, in order to change or fix it (also called maintainability)
efficiency: the speed and compactness of the software
usability: the ability to use the software easily
testability: the ability to construct and execute test cases easily
portability: the ability to move the software easily from one environment to another
functionality: what the product does

9 CHALLENGES IN SOFTWARE DEVELOPMENT
Reliability [correctness + robustness]
It should be easier to build software that functions correctly, and easier to guarantee what it does.
Reusability [modifiability + extendibility]
We should build less software! Software should be easier to modify.
Functionality [+ usability]
Ensure that the software does what the user expects, and does it in an easy-to-use way.

10 SOFTWARE CONSTRUCTION STRATEGIES
TOP-DOWN: high-level to low-level; user interface to detail logic
BOTTOM-UP: the reverse of the above
MIDDLE-OUT: some of both

11 QUALITY AND CONSTRUCTION
GOAL
The goal of software construction is to build a product that satisfies the quality requirements: "good enough software," not excellent software!

12 QUALITY AND SOFTWARE CONSTRUCTION
Functionality [+ usability]
Build software as early as possible and give it to the user, as often as possible
Reliability [correctness + robustness]
Run and test the software
Reusability [modifiability + extendibility]
Redesign and improve the source code

13 CONSTRUCTION PROCESS INFRASTRUCTURE

14 QUALITY
Quality means "conformance to requirements."
The best testers can only catch defects that are contrary to specification. Testing does not make the software perfect.
If an organization does not have good requirements engineering practices, it will be very hard to deliver software that fills the users' needs, because the product team does not really know what those needs are.

15 TEST PLANS
The goal of test planning is to establish the list of tasks which, if performed, will identify all of the requirements that have not been met in the software. The main work product is the test plan.
The test plan documents the overall approach to the test. In many ways, it serves as a summary of the test activities that will be performed. It shows how the tests will be organized, and outlines all of the testers' needs which must be met in order to properly carry out the test.
The test plan should be inspected by members of the engineering team and by senior managers.

16 TEST PLAN OUTLINE

17 TEST CASES
A test case is a description of a specific interaction that a tester will have with the software in order to test a single behavior.
Test cases are very similar to use cases, in that they are step-by-step narratives which define a specific interaction between the user and the software.
A typical test case is laid out in a table, and includes:
A unique name and number
The requirement which the test case is exercising
Preconditions which describe the state of the software before the test case (often a previous test case that must always be run before the current one)
Steps which make up the interaction
Expected results which describe the state of the software after the test case is executed
Test cases must be repeatable. Good test cases are data-specific, and describe each interaction necessary to repeat the test exactly.
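The table layout described above can be sketched as a simple data structure. This is an illustrative sketch only: the field names and the sample test case (document name, search terms, case numbering) are hypothetical, not taken from the slides.

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """One row of a test-case table: name, requirement, preconditions,
    steps, and expected results, as listed on the slide."""
    name: str                                  # unique name and number
    requirement: str                           # requirement being exercised
    preconditions: list = field(default_factory=list)
    steps: list = field(default_factory=list)
    expected_results: list = field(default_factory=list)

# A data-specific, repeatable test case (hypothetical example data).
tc = TestCase(
    name="TC-001 Replace lowercase word, case-insensitive",
    requirement="REQ-SR-3: search and replace honors the case-sensitivity flag",
    preconditions=["Document 'sample.txt' is open", "TC-000 has been run"],
    steps=[
        "Open Search and Replace",
        "Enter 'widget' in the search term field",
        "Enter 'Gadget' in the replacement field",
        "Ensure 'Match case' is unchecked and click Replace All",
    ],
    expected_results=["Every occurrence of 'widget' is replaced"],
)
```

Because each step names its exact data, any tester can repeat this interaction and get the same result.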

18 TEST CASES – GOOD EXAMPLE

19 TEST CASES – BAD EXAMPLE
Steps
1. Bring up search and replace.
2. Enter a lowercase word from the document in the search term field.
3. Enter a mixed-case word in the replacement field.
4. Verify that case sensitivity is not turned on, and execute the search.
Expected Results
1. Verify that the lowercase word has been replaced with the mixed-case term in lowercase.
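What makes the example above "bad" is that it is not data-specific: "a lowercase word" could be anything, so the test is not exactly repeatable. A data-specific version pins down the exact input and the exact expected output. The sketch below is hypothetical; `search_replace` is a stand-in for the feature under test, not an API from the slides.

```python
import re

def search_replace(text, term, replacement, ignore_case=True):
    """Stand-in for the feature under test: replace every occurrence of
    term with replacement, matching case-insensitively when flagged."""
    flags = re.IGNORECASE if ignore_case else 0
    return re.sub(re.escape(term), replacement, text, flags=flags)

# Data-specific test: exact document, exact terms, exact expected result.
document = "the quick brown fox"
result = search_replace(document, "quick", "Swift", ignore_case=True)
assert result == "the Swift brown fox"
```

Compare this with "enter a lowercase word": here, a second tester replaying the test performs exactly the same interaction.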

20 TEST EXECUTION
The software testers begin executing the test plan after the programmers deliver the alpha build, a build that they feel is feature complete. The alpha should be of high quality: the programmers should feel that it is ready for release and as good as they can get it.
There are typically several iterations of test execution. The first iteration focuses on new functionality that has been added since the last round of testing.
A regression test is a test designed to make sure that a change to one area of the software has not caused any part of the software which previously passed its tests to stop working. Regression testing usually involves executing all test cases which have previously been executed. There are typically at least two regression tests for any software project.

21 TEST EXECUTION
When is testing complete?
When no defects are found, or when the remaining defects meet the acceptance criteria outlined in the test plan

22 DEFECT TRACKING
The defect tracking system is a program that testers use to record and track defects. It routes each defect between testers, developers, the project manager, and others, following a workflow designed to ensure that the defect is verified and repaired.
Every defect encountered in the test run is recorded and entered into the defect tracking system so that it can be prioritized.
The defect workflow should track the interaction between the testers who find the defect and the programmers who fix it. It should ensure that every defect can be properly prioritized and reviewed by all of the stakeholders to determine whether or not it should be repaired. This process of review and prioritization is referred to as triage.

23 TEST ENVIRONMENT AND PERFORMANCE TESTING
The project manager should ask questions about desired performance as early as the vision and scope document: How many users? Concurrency? Peak times? Hardware? OS? Security? Updates and maintenance?
Adequate performance testing will usually require a large investment in duplicate hardware and automated performance evaluation tools. ALL hardware should match (routers, firewalls, load balancers).
If the organization cannot afford this expense, it should not be developing the software and should seek another solution. Computer Science Department

24 SMOKE TESTS
A smoke test is a subset of the test cases that is typically representative of the overall test plan.
Smoke tests are good for verifying proper deployment or other non-invasive changes. They are also useful for verifying that a build is ready to send to test.
Smoke tests are not a substitute for actual functional testing.
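The "representative subset" idea can be sketched in a few lines. The test names here are hypothetical stand-ins for real test functions.

```python
# Stand-ins for real test functions; each returns True on pass.
def check_login():   return True
def check_search():  return True
def check_report():  return True
def check_billing(): return True

FULL_SUITE = [check_login, check_search, check_report, check_billing]

# The smoke suite is a small, representative subset of the full plan.
SMOKE_SUITE = [check_login, check_search]

def run(suite):
    """Run every test in the suite; True only if all pass."""
    return all(test() for test in suite)
```

Running `run(SMOKE_SUITE)` after a deployment gives a quick go/no-go signal, but only `run(FULL_SUITE)` counts as functional testing.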

25 TEST AUTOMATION
Test automation is a practice in which testers employ a software tool to reduce or eliminate repetitive tasks. Testers either write scripts or use record-and-playback to capture user interactions with the software being tested.
This can save the testers a lot of time if many iterations of testing will be required. It costs a lot to develop and maintain automated test suites, so it is generally not worth developing them for tests that will be executed only a few times.
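The script-based flavor of automation can be sketched as a data-driven replay loop: interactions are captured once as data, then replayed against every build. The function under test and the recorded cases below are hypothetical.

```python
def apply_discount(price, percent):
    """Hypothetical function under test: apply a percentage discount."""
    return round(price * (1 - percent / 100), 2)

# Interactions "recorded" once as (price, percent, expected) triples;
# the script replays them on every test iteration at no extra cost.
RECORDED_CASES = [
    (100.0, 10, 90.0),
    (59.99, 0, 59.99),
    (20.0, 50, 10.0),
]

failures = [(price, pct) for price, pct, want in RECORDED_CASES
            if apply_discount(price, pct) != want]
assert not failures
```

The maintenance cost the slide mentions shows up here too: whenever `apply_discount` changes its interface, every recorded case must be updated.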

26 OVERVIEW OF SOFTWARE TESTING STRATEGIES
A testing strategy integrates software test case design techniques into a well-planned series of steps that result in the successful construction of software. A testing strategy must always incorporate test planning, test case design, test execution, and the resultant data collection and evaluation.
Please refer to pages of the textbook and page 9.2 of the Study Guide.
The strategy provides a road map that describes the steps to be conducted, when these steps are planned and then undertaken, and how much effort, time, and resources will be required.
A software testing strategy should be flexible enough to promote a customized testing approach.

27 OVERVIEW OF SOFTWARE TESTING STRATEGIES
Generic characteristics of all software testing strategies:
Testing begins at the module level and works "outward" toward the integration of the entire computer-based system
Different testing techniques are appropriate at different points in time
Testing is conducted by the developer of the software and (for large projects) by an independent test group
Testing and debugging are different activities, but debugging must be accommodated in any testing strategy
In addition to the above generic characteristics:
To perform effective testing, a software team should conduct effective formal technical reviews so that many errors are eliminated before testing commences.
A strategy must provide guidance for the practitioner and a set of milestones for the manager. Because the steps of the test strategy occur at a time when deadline pressure begins to rise, progress must be measurable and problems must surface as early as possible.

28 VERIFICATION AND VALIDATION
Verification refers to the set of activities that ensure that software correctly implements a specific function. It asks the question: "Are we building the product right?"
Validation refers to a different set of activities that ensure that the software that has been built is traceable to customer requirements. It asks the question: "Are we building the right product?"
Please refer to page 388 of the textbook and page 9.2 of the Study Guide.
It is important to note that verification and validation encompass a wide array of SQA activities, including formal technical reviews, quality and configuration audits, performance monitoring, simulation, feasibility study, documentation review, database review, algorithm analysis, development testing, qualification testing, and installation testing.
A tip to remember which is which: Verification => ...product right? Validation => ...right product? (In the alphabet, the letter a comes first, followed by the letter e.)

29 ORGANIZING FOR SOFTWARE TESTING
Misconceptions in software testing:
That the developer of software should not do any testing at all
That the software should be "tossed over the wall" to strangers who will test it mercilessly
That testers get involved with the project only when the testing steps are about to begin
Please refer to pages of the textbook and pages of the Study Guide.
Each of these statements is incorrect.

30 SOFTWARE TESTING STRATEGY
Please refer to page 390 of the textbook and page 9.3 of the Study Guide.
The software engineering process may be viewed as a spiral. Initially, system engineering defines the role of software and leads to software requirements analysis. Moving inward along the spiral, we come to design and finally to coding. To develop the software, we spiral in along streamlines that decrease the level of abstraction on each turn.
A strategy for software testing may be envisioned by moving outward along the same spiral, starting from unit testing, followed by integration, validation, and finally system testing. To test the software, we spiral out along streamlines that broaden the scope of testing with each turn.

31 SOFTWARE TESTING STRATEGY
A strategy for software testing moves outward along the spiral.
Unit testing begins at the vertex of the spiral and concentrates on each unit of the software as implemented in the source code.
Testing progresses by moving outward along the spiral to integration testing, where the focus is on the design and the construction of the software architecture.
Validation testing is next, where requirements established as part of software requirements analysis are validated against the software that has been constructed.
Finally, in system testing, the software and other system elements are tested as a whole.

32 SOFTWARE TESTING STRATEGY
Unit tests: focus on each module and make heavy use of white-box testing
Integration tests: focus on the design and construction of the software architecture; black-box testing is most prevalent, with limited white-box testing
High-order tests: conduct validation and system tests; make use of black-box testing exclusively
Please refer to page 391 of the textbook and page 9.4 of the Study Guide.
From a procedural point of view, testing within the context of software engineering is a series of three steps implemented sequentially: initially, tests focus on each module or unit individually; next, the units are integrated and tested; and finally, high-order testing is conducted.

33 UNIT TESTING
Unit testing focuses verification effort on the smallest unit of software design: the module.
Using the detail design description as a guide, important control paths are tested to uncover errors within the boundary of the module.
The unit test is always white-box oriented.
Please refer to pages of the textbook and pages of the Study Guide.
Unit testing focuses on the internal processing logic and data structures within the boundaries of a unit. Unit testing can be conducted in parallel for multiple units/modules.

34 UNIT TESTING
The module interface is tested to ensure that information properly flows into and out of the module under test.
The local data structures are examined to ensure that data stored temporarily maintains its integrity during all steps in the algorithm's execution.
Boundary conditions are tested to ensure that the module operates properly at the boundaries established to limit or restrict processing.
All independent paths through the control structure are exercised to ensure that all statements in the module have been executed at least once.
All error handling paths are tested.
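The checklist above maps naturally onto a small test class. This is a sketch: `clamp` is a hypothetical module under test, chosen only so that each test method can mirror one checklist item.

```python
import unittest

def clamp(value, low, high):
    """Hypothetical module under test: restrict value to the range [low, high]."""
    if low > high:
        raise ValueError("low must not exceed high")
    if value < low:
        return low
    if value > high:
        return high
    return value

class ClampUnitTest(unittest.TestCase):
    def test_module_interface(self):      # data flows in and out properly
        self.assertEqual(clamp(5, 0, 10), 5)

    def test_boundary_conditions(self):   # operates properly at the boundaries
        self.assertEqual(clamp(0, 0, 10), 0)
        self.assertEqual(clamp(10, 0, 10), 10)

    def test_independent_paths(self):     # every branch executed at least once
        self.assertEqual(clamp(-1, 0, 10), 0)
        self.assertEqual(clamp(11, 0, 10), 10)

    def test_error_handling_paths(self):  # error handling is exercised too
        with self.assertRaises(ValueError):
            clamp(5, 10, 0)

result = unittest.TextTestRunner().run(
    unittest.defaultTestLoader.loadTestsFromTestCase(ClampUnitTest))
```

Together the four methods cover the interface, the boundaries, all independent paths, and the error path of the unit.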

35 UNIT TESTING PROCEDURES
Because a module is not a stand-alone program, driver and/or stub software must be developed for each unit test.
A driver is nothing more than a "main program" that accepts test case data, passes such data to the module to be tested, and prints the relevant results.
Stubs serve to replace modules that are subordinate to (called by) the module to be tested. A stub, or "dummy subprogram," uses the subordinate module's interface, may do nominal data manipulation, prints verification of entry, and returns.
Drivers and stubs represent overhead; that is, both are software that must be written but is not delivered with the final software. They are meant for testing purposes only. If drivers and stubs are kept simple, the actual overhead is relatively low.
Unit testing is simplified when a module with high cohesion is designed. When a module has a single function, the number of test cases is reduced and errors can be more easily predicted and uncovered.
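A minimal driver-and-stub pair can be sketched as follows. Everything here is hypothetical: `process_order` stands in for the module under test, and `tax_stub` replaces a subordinate tax-calculation module it would normally call.

```python
def tax_stub(amount):
    """Stub: replaces the subordinate module. It uses the same interface,
    prints verification of entry, does nominal data manipulation, and returns."""
    print(f"tax_stub entered with amount={amount}")
    return 0.10 * amount  # fixed nominal rate, good enough for the test

def process_order(amount, tax_fn):
    """Hypothetical module under test: total = amount plus tax from a
    subordinate module (injected here so the stub can replace it)."""
    return amount + tax_fn(amount)

def driver():
    """Driver: a throwaway 'main program' that accepts test case data,
    passes it to the module under test, and prints the results."""
    for amount, expected in [(100.0, 110.0), (0.0, 0.0)]:
        got = process_order(amount, tax_stub)
        print(f"amount={amount} expected={expected} got={got}")
        assert got == expected

driver()
```

Both `driver` and `tax_stub` are overhead in the slide's sense: written for the test, never shipped.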

36 INTEGRATION TESTING
Integration testing is a technique for constructing the program structure while at the same time conducting tests to uncover errors associated with interfacing.
Objective: combine unit-tested modules and build a program structure that has been dictated by design.
Two types: top-down integration and bottom-up integration.
Please refer to pages of the textbook and page 9.6 of the Study Guide.
Once all modules have been unit-tested, why do we doubt that they will work when we put them together? The problem is "putting them together," i.e. interfacing. Data can be lost across an interface; one module can have an inadvertent, adverse effect on another; sub-functions, when combined, may not produce the desired major function; and many other problems can occur.
There is often a tendency to use a big-bang approach, i.e. non-incremental integration, whereby the entire program is tested as a whole; chaos usually results. When errors are encountered, correction is difficult, and once those errors are corrected, new ones appear and the process continues in a seemingly endless loop. Therefore, a systematic integration testing approach is required.

37 TOP-DOWN INTEGRATION PROCESS
The main control module is used as a test driver, and stubs are substituted for all modules directly subordinate to the main control module
Subordinate stubs are replaced one at a time with actual modules
Tests are conducted as each module is integrated
On the completion of each set of tests, another stub is replaced with the real module
Regression testing (i.e., conducting all or some of the previous tests) may be conducted to ensure that new errors have not been introduced
Please refer to pages of the Study Guide.

38 TOP-DOWN TESTING
For the program structure shown, the following test cases may be derived if top-down integration is conducted:
Test case 1: Modules A and B are integrated
Test case 2: Modules A, B and C are integrated
Test case 3: Modules A, B, C and D are integrated
(etc.)
Please refer to page 9.6 of the Study Guide
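The stub-replacement sequence can be sketched in code. This assumes a hypothetical structure in which control module A calls subordinates B, C, and D (consistent with the test cases above); all module bodies are invented for illustration.

```python
def stub(name):
    """Make a stub standing in for a not-yet-integrated subordinate module."""
    return lambda: f"{name}-stub"

# Real subordinate modules (trivial stand-ins).
def real_b(): return "b"
def real_c(): return "c"
def real_d(): return "d"

def module_a(b, c, d):
    """Main control module, used as the test driver's entry point."""
    return ",".join(f() for f in (b, c, d))

# Start with stubs for every direct subordinate of A.
b, c, d = stub("b"), stub("c"), stub("d")
assert module_a(b, c, d) == "b-stub,c-stub,d-stub"

b = real_b  # replace one stub with the actual module...
assert module_a(b, c, d) == "b,c-stub,d-stub"  # ...and re-run the test

c = real_c
assert module_a(b, c, d) == "b,c,d-stub"

d = real_d
assert module_a(b, c, d) == "b,c,d"  # fully integrated
```

Re-running the same assertion after each replacement is exactly the regression testing the process calls for.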

39 TOP-DOWN TESTING
There is a major problem in top-down integration: inadequate testing at upper levels when data flows at low levels in the hierarchy are required.
Solutions to this problem:
Delay many tests until stubs are replaced with actual modules; but this can make it difficult to determine the cause of errors, and tends to violate the highly constrained nature of the top-down approach
Develop stubs that perform limited functions simulating the actual module; but this can lead to significant overhead
Perform bottom-up integration
Please refer to page 399 of the textbook and page 9.6 of the Study Guide.

40 BOTTOM-UP TESTING
Low-level modules are combined into clusters (sometimes called builds) that perform a specific software subfunction
A driver (a control program for testing) is written to coordinate test case input and output
The cluster is tested
Drivers are removed and clusters are combined, moving upward in the program structure
Please refer to page 400 of the textbook and pages of the Study Guide.

41 BOTTOM-UP TESTING
Test case 1: Modules E and F are integrated
Test case 2: Modules E, F and G are integrated
Test case 3: Modules E, F, G and H are integrated
Test case 4: Modules E, F, G, H and C are integrated
(etc.)
Drivers are used throughout.
Please refer to page 9.7 of the Study Guide.
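The driver-per-cluster idea can be sketched as follows. The module bodies are invented; the names E, F, and C follow the example above, assuming E and F form a low-level cluster later combined with parent module C.

```python
def module_e(x): return x + 1   # low-level modules (trivial stand-ins)
def module_f(x): return x * 2

def cluster_ef(x):
    """The cluster under test: E and F combined into one subfunction."""
    return module_f(module_e(x))

def cluster_driver():
    """Throwaway control program: coordinates test case input and output
    for the cluster, before it is attached to its parent module."""
    for x, expected in [(0, 2), (3, 8)]:
        assert cluster_ef(x) == expected

cluster_driver()  # cluster passes in isolation

def module_c(x):
    """Parent module: once the cluster is combined upward, the driver is removed."""
    return cluster_ef(x) - 1

assert module_c(3) == 7
```

As integration moves upward, `module_c` itself takes over the role the driver played for the E/F cluster.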

42 VALIDATION TESTING
Validation testing ensures that software functions in a manner that can be reasonably expected by the customer. It is achieved through a series of black-box tests that demonstrate conformity with requirements.
A test plan outlines the classes of tests to be conducted, and a test procedure defines the specific test cases that will be used in an attempt to uncover errors in conformity with requirements.
A series of acceptance tests (including both alpha and beta testing) is conducted with the end users.
Please refer to pages of the textbook and page 9.7 of the Study Guide.
After each validation test case has been conducted, one of two possible conditions exists: (a) the function or performance characteristics conform to specification and are accepted, or (b) a deviation from specification is uncovered and a deficiency list is created.
A deviation or error discovered at this stage in a project can rarely be corrected prior to scheduled completion. It is often necessary to negotiate with the customer to establish a method for resolving deficiencies.

43 VALIDATION TESTING
Alpha testing:
Is conducted at the developer's site by a customer
The developer supervises
Is conducted in a controlled environment
Beta testing:
Is conducted at one or more customer sites by the end user of the software
The developer is generally not present
Is conducted in a "live" environment
It is virtually impossible for a software developer to foresee how the customer will really use a program. Instructions for use may be misinterpreted; strange combinations of data may be used; output that seemed clear to the tester may be unintelligible to a user in the field.
When custom software is built for one customer, a series of acceptance tests is conducted to enable the customer to validate all requirements. But if the software is developed as a product to be used by many customers, it is impractical to perform formal acceptance tests with each one. Therefore, alpha and beta testing are used to uncover errors that only the end user seems able to find.

44 SYSTEM TESTING
Ultimately, software is only one component of a larger computer-based system. Hence, once software is incorporated with other system elements (e.g. new hardware, information), a series of system integration and validation tests is conducted.
System testing is a series of different tests whose primary purpose is to fully exercise the computer-based system. Although each system test has a different purpose, all work to verify that all system elements have been properly integrated and perform their allocated functions.
Please refer to pages of the textbook and pages of the Study Guide.
Four types of system testing:
Recovery testing
Security testing
Stress testing
Performance testing

45 RECOVERY TESTING
A system test that forces software to fail in a variety of ways and verifies that recovery is properly performed.
If recovery is automatic, re-initialization, checkpointing mechanisms, data recovery, and restart are each evaluated for correctness.
If recovery is manual, the mean time to repair is evaluated to determine whether it is within acceptable limits.
Many computer-based systems must recover from faults and resume processing within a pre-specified time. In some cases, a system must be fault-tolerant, i.e. processing faults must not cause overall system function to cease. In other cases, a system failure must be corrected within a specified period of time or severe economic damage will occur.
Examples:
Test cases that include a power failure situation and the restart capability of the system
Special test cases for handling interrupts
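A checkpoint-and-restart recovery test can be sketched by injecting a fault mid-run and verifying that a restart resumes from the last checkpoint. The checkpointing scheme below is hypothetical, invented only to make the test concrete.

```python
import json
import os
import tempfile

def process_with_checkpoint(items, ckpt_path, fail_at=None):
    """Process items in order, checkpointing progress after each one;
    fail_at lets the recovery test force a failure at a chosen point."""
    start = 0
    if os.path.exists(ckpt_path):
        with open(ckpt_path) as f:
            start = json.load(f)["next"]  # resume from last checkpoint
    done = []
    for i in range(start, len(items)):
        if fail_at is not None and i == fail_at:
            raise RuntimeError("injected fault")  # forced failure
        done.append(items[i])
        with open(ckpt_path, "w") as f:
            json.dump({"next": i + 1}, f)  # checkpoint after each item
    return done

ckpt = os.path.join(tempfile.mkdtemp(), "ckpt.json")
try:
    process_with_checkpoint(list("abcde"), ckpt, fail_at=3)  # crash mid-run
except RuntimeError:
    pass
resumed = process_with_checkpoint(list("abcde"), ckpt)  # restart
assert resumed == ["d", "e"]  # resumed from the checkpoint, no rework
```

The test evaluates exactly what the slide lists for automatic recovery: the checkpointing mechanism and the restart are both checked for correctness.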

46 SECURITY TESTING
Security testing attempts to verify that protection mechanisms built into a system will in fact protect it from improper penetration.
This is particularly important for a computer-based system that manages sensitive information or is capable of causing actions that can improperly harm (or benefit) individuals. Any such system is a target for improper or illegal penetration: hackers who attempt to penetrate for revenge, or dishonest individuals who attempt to penetrate for illicit personal gain.
Examples:
Test cases designed to penetrate the multi-level password system to reach the protected data
The tester may attempt to acquire passwords through external clerical means
Special test cases designed to purposely cause system errors, in the hope of penetrating during recovery
Test cases that allow the tester to browse through insecure data and find the key to system entry
Good security testing will ultimately penetrate a system.

47 STRESS TESTING
Stress testing is designed to confront programs with abnormal situations where an unusual quantity, frequency, or volume of resources is demanded.
A variation is called sensitivity testing: it attempts to uncover data combinations within valid input classes that may cause instability or improper processing.
Examples:
Special tests may be designed that generate 10 interrupts per second, when one or two is the average rate
Input data rates may be increased by an order of magnitude to determine how input functions will respond
Test cases that require maximum memory or other resources may be executed
Test cases that may cause thrashing in a virtual operating system are designed
Test cases that may cause excessive hunting for disk-resident data are designed
Essentially, the tester attempts to break the program.
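One of the examples above, driving input at an order of magnitude above the nominal rate, can be sketched against a hypothetical bounded queue. The component and its capacity are invented for illustration; the stress test checks that overload is rejected gracefully rather than crashing the program.

```python
from collections import deque

class BoundedQueue:
    """Hypothetical component under stress: accepts work up to a fixed capacity."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = deque()

    def offer(self, item):
        if len(self.items) >= self.capacity:
            return False  # graceful rejection under overload, not a crash
        self.items.append(item)
        return True

q = BoundedQueue(capacity=100)
# Offer 10x the nominal load and observe the degradation behavior.
accepted = sum(q.offer(i) for i in range(1000))
assert accepted == 100        # everything beyond capacity was rejected
assert len(q.items) == 100    # internal state stayed consistent
```

The tester is trying to break the program; the assertion is that it bends (rejects work) instead.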

48 PERFORMANCE TESTING
This mode of testing seeks to test the run-time performance of software within the context of an integrated system.
Extra instrumentation can monitor execution intervals, log events (e.g., interrupts) as they occur, and sample machine states on a regular basis. The use of instrumentation can uncover situations that lead to degradation and possible system failure.
Performance testing occurs throughout all steps in the testing process. Even at the unit level, the performance of an individual module may be assessed as tests are conducted.
Performance tests are often coupled with stress testing, and usually require both hardware and software instrumentation to measure resource utilization (e.g. processor cycles) in an exacting fashion.
Examples:
Test cases designed to pressure the online retrieval system during its peak of operations
Test cases that increase the input data rate by an order of magnitude to determine how the system tolerates the load

