
1 Unified Process on Software Implementation & Testing ©Dr. David A. Workman School of EE and Computer Science April 13, 2010

2 Implementation (USP)
Purpose: To translate the design into machine-readable and executable form. Specifically, to:
– Plan the system integrations required in each implementation increment or iteration.
– Distribute the system by mapping executable components to nodes in the deployment model.
– Implement the design classes and subsystems found in the Design Model.
– Unit test the components, and integrate them by compiling and linking them into one or more executables, before they are sent to integration and system tests.
Artifacts:
– Implementation Model, consisting of Components, Implementation Subsystems, and Interfaces
– Build Plan

3 Integration & System Test (USP)
Purpose: To verify the result of each build and to validate the complete system via acceptance tests.
– Plan the tests required in each iteration, including integration and system tests. Integration tests are required after each build, while system tests are done as part of client acceptance and system delivery.
– Design and implement test plans by creating test cases. Test cases specify what to test and define the procedures and programs for conducting test exercises.
– Perform the test cases to capture and verify test results. Defects are formally captured, tracked, and removed before delivery.
Verification: testing a work product to determine whether or not it conforms to the product's specifications. "Did we build the system right?" (Boehm)
Validation: testing a complete system to determine whether or not it satisfies requirements and solves the problem users need to have solved. "Did we build the right system?" (Boehm)

4 Integration & Test (USDP)
Concepts
– Test Strategies
Black box testing: demonstrating correct component or subsystem behavior by observing the output generated at its interface as a function of the inputs supplied at its interface.
White box testing: demonstrating that valid computational paths and interactions are observed internal to a component or subsystem as a function of given inputs.
– Test Types
Installation tests: verify that the system can be installed and run correctly on the client's platform.
Configuration tests: verify that the system can run correctly in different configurations.
Negative tests: attempt to cause the system to fail by presenting inputs or loads for which it was not designed.
Stress tests: performance tests designed to identify problems when resources are limited.
Regression tests: after a change is made to a component, re-running tests that the component had successfully passed before the change was made, to ensure that previous capability remains valid (a minimal sketch follows this slide).
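
To make the regression idea concrete, the following is a minimal sketch (the component and its cases are hypothetical, not from the course materials): the suite stores the input/expected-output pairs the component passed before the change, and reruns all of them afterwards.

#include <iostream>
#include <string>
#include <vector>

// Hypothetical component under test: classify an amount of money in cents.
static std::string Classify(int cents) {
    if (cents < 0)    return "invalid";
    if (cents < 1000) return "small";
    return "large";
}

struct RegressionCase { int input; std::string expected; };

int main() {
    // Cases the component passed before the change; rerun every one of them.
    const std::vector<RegressionCase> suite = {
        {-1, "invalid"}, {0, "small"}, {999, "small"}, {1000, "large"}
    };
    int failures = 0;
    for (const RegressionCase& c : suite)
        if (Classify(c.input) != c.expected) {
            std::cout << "FAIL on input " << c.input << std::endl;
            ++failures;
        }
    std::cout << failures << " regression failure(s)" << std::endl;
    return failures == 0 ? 0 : 1;
}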

5 Integration & Test (USDP)
Artifacts
– Test Model
A set of Test Cases, a set of Test Procedures, and a set of Test Components.
– Test Cases
Designed to verify certain use cases or use-case scenarios.
Demonstrate that the post-conditions of use cases are satisfied if their pre-conditions are met.
Demonstrate that the sequence of actions defined by the use case is followed.
Identify the test components and their features to be tested.
Predict or describe expected component output and behavior.
Define the inputs necessary to produce the desired test output.
Specify the conditions that must exist to conduct the test case.
– Test Procedures
Specify how to perform one or several test cases.
Test programs (or "harnesses" or "benches") and shell scripts may have to be executed as part of a test procedure.
Define a sequence of steps that must be followed to complete the procedure, along with the inputs and outputs expected at each step.

6 Integration & Test (USDP)
Artifacts
– Test Components
Automate one or several test procedures or parts of them: test drivers, test scripts, test harnesses.
– Test Plan
Describes the testing requirements, strategies, resources, and schedule for each build and system release.
Describes what test cases should be performed and passed for each build and/or system release.
– Test Evaluations
Capture the results of test cases; declare whether or not each test case was successful; generate defect or anomaly reports for tracking.
– Anomaly Reports
Document test anomalies: unexpected test events or results.
Capture and track anomalies during development.
Ensure all anomalies have been satisfactorily addressed or removed before delivery (customer signoff).

7 IEEE Std 829 for Software Testing
Test Plan: prescribes the scope, approach, resources, and schedule of testing activities; identifies the items to be tested, the features to be tested, the testing tasks to be performed, the personnel responsible for each task, and the risks associated with the plan.
The standard also defines: Test Design Spec, Test Case Spec, Test Procedure Spec, Test Item Transmittal Report, Test Log, Test Incident Report, Test Summary Report.

8 Test Work Flow
[Workflow diagram, roles and activities: Test Engineer: Plan Tests, Design Tests. Component Engineer: Analyze a Class, Implement Tests. Integration Tester: Perform Integration Tests. System Tester: Perform System Tests.]

9 Generic Testing for OO Systems
Reference: Testing Object-Oriented Systems: Models, Patterns, and Tools by Robert V. Binder. Addison-Wesley, © 2000, ISBN 0-201-80938-9.

10 Testing OO Systems
Fault Models
A fault model answers a simple question about a test technique: why do the features called out by the technique warrant our effort? A fault model therefore identifies the relationships and components of the system under test that are most likely to have faults.
Bug Hazard
A circumstance that increases the chance of a bug. Example: type coercion in C++ is a hazard because the rules are complex and may depend on hidden declarations when working with a particular class.
Test Strategies
1. Conformance-directed testing: seeks to establish conformance to requirements and specifications. Tests are designed to be sufficiently representative of the essential features of the system under test. (Implies a non-specific fault model: any fault that violates conformance is equal to any other.)
2. Fault-directed testing: seeks to reveal implementation faults. It is motivated by the observation that conformance can be demonstrated for an implementation that contains faults. (Implies a specific fault model: directs testing toward particular kinds of faults.)

11 Testing OO Systems
Conformance- vs. Fault-directed Testing
– Conformance-directed tests should be feature sufficient: they should, at least, exercise all specified features of the system.
– Fault-directed tests should be fault efficient: they should have a high probability of revealing a fault of a particular type or types.
Fault Hazards in OO Programming Languages
"Object-oriented technology does not in any way obviate the basic motivation for software testing. In fact, it poses new challenges. ... Although the use of OOPLs may reduce some kinds of errors, they increase the chance of others."
– Dynamic binding and complex inheritance structures create many opportunities for faults due to unanticipated bindings or misinterpretation of correct usage.
– Interface programming errors are a leading cause of faults in procedural languages. OO programs typically have many small components and therefore more interfaces. Interface errors are more likely, other things being equal.
– State control errors are likely. Objects preserve state, but state control (the acceptable sequence of events) is typically distributed over an entire program.

12 OO Fault Studies
Steven P. Fiedler: "Object-oriented unit testing," Hewlett-Packard Journal, Vol. 40, No. 2, April 1989. In reference to a C++ system for a medical electronics application, Fiedler reports that "on the average, a defect was uncovered for every 150 LOC, and correspondingly, the mean density exceeded 5.1 per KSLOC."
V. Basili, L. C. Briand, and W. L. Melo: "A validation of object-oriented design metrics as quality indicators," IEEE Transactions on Software Engineering, Vol. 22, No. 10, October 1996. The conclusions of a study done on small C++ systems (less than 30 KSLOC) are:
– Classes that send relatively more messages to instance variables and message parameter objects are more likely to be buggy;
– Classes with greater depth (number of superclasses) and higher specialization (number of new and overridden methods) are more likely to be buggy;
– No significant correlation was found between classes that lack cohesion (number of pairs of methods that have no attributes in common) and the relative frequency of bugs.

13 OO Fault Studies
Martin Shepperd and M. Cartwright: "An empirical study of object-oriented metrics," Tech Report TR-97/01, Dept. of Computing, Bournemouth University, U.K. In reference to a 133 KSLOC C++ system for a telecommunications application, the report states, "classes that participated in inheritance structures were 3 times more defect prone than other classes."
Capers Jones: The Economics of Object-Oriented Software Development, Software Productivity Research Inc., Burlington, MA, April 1997. Summarizing data gathered from 150 development organizations and 600 projects, Jones reports:
1. The OO learning curve is very steep and causes many first-use errors.
2. OO analysis and design seem to have higher defect potential than older design methods.
3. Defect removal efficiency against OO design problems seems lower than against older design methods.
4. OOPLs seem to have lower defect potential than procedural languages.
5. Defect removal efficiency against programming errors is roughly equal to, or somewhat better than, removal efficiency against older procedural language errors.

14 An OO Testing Manifesto
Observations
– The hoped-for reduction in OO testing due to reuse is illusory.
– Inheritance, polymorphism, late binding, and encapsulation present some new problems for test case design, testability, and coverage analysis.
– To the extent that OO development is iterative and incremental, test planning, design, and execution must be similarly iterative and incremental.
– Regression testing and its antecedents must be considered essential techniques for professional OO development.
Guidance
– Unique bug hazards: test design must be based on the bug hazards that are unique to the OO programming paradigm.
– OO test automation: application-specific test tools must be OO and must offset obstacles to testability intrinsic to the OO programming paradigm.
– Test-effective process: the testing process must adapt to iterative and incremental development and mosaic modularity. The intrinsic structure of the OO paradigm requires that test design consider method, class, and cluster scope simultaneously.

15 An OO Testing Manifesto
Unique Bug Hazards
1. The interaction of individually correct superclass and subclass methods can be buggy and must be systematically exercised.
2. Superclass test suites must be rerun on subclasses and should be constructed so that they can be reused to test any subclass (see the sketch after this list).
3. Unanticipated bindings that result from scoping nuances in multiple and repeated inheritance can produce bugs that are triggered only by certain superclass/subclass interactions. Subclasses must be tested at "flattened" scope; superclass test cases must be reusable.
4. Poor design of class hierarchies supporting dynamic binding (polymorphic servers) can result in failures of a subclass to observe superclass contracts. All bindings must be systematically exercised to reveal these bugs.
5. The loss of intellectual control that results from spaghetti polymorphism (the yo-yo problem) is a bug hazard. A client of a polymorphic server can be considered adequately tested only if all server bindings that the client can generate have been exercised.
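
Hazard 2 above suggests writing every superclass suite against the base-class contract alone, so that the identical suite can be rerun on each subclass. A minimal sketch with hypothetical classes (not from Binder's book):

#include <cassert>

class Counter {                              // superclass: the contract under test
public:
    virtual ~Counter() {}
    virtual void Increment() { ++n; }
    void Reset() { n = 0; }
    int  Get() const { return n; }
protected:
    int n = 0;
};

class SaturatingCounter : public Counter {   // subclass overriding a method
public:
    void Increment() { if (n < 10) ++n; }    // caps the count at 10
};

// The suite touches only the base interface, so it is reusable on any subclass.
void RunCounterSuite(Counter& c) {
    c.Reset();     assert(c.Get() == 0);
    c.Increment(); assert(c.Get() == 1);     // contract: +1 per increment
    c.Reset();     assert(c.Get() == 0);     // post-condition of Reset()
}

int main() {
    Counter base;          RunCounterSuite(base);
    SaturatingCounter sub; RunCounterSuite(sub);   // the superclass suite, rerun
    return 0;
}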

16 An OO Testing Manifesto
Unique Bug Hazards
6. Classes with sequential constraints on method activation, and their clients, can have control bugs. The required control behavior can be systematically tested using a state machine model (see the sketch after this list).
7. Subclasses can accept illegal superclass method sequences or generate corrupt states by failing to observe the state model of the superclass. Where sequential constraints exist, subclass testing must be based on a flattened state model.
8. A generic class instantiated with a type parameter for which the generic class has not been tested is almost the same as completely untested code. Each generic instantiation must be tested to verify it for that parameter.
9. The difficulty and complexity of implementing multiplicity constraints can easily lead to incorrect state/output when an element of a composition group is added, updated, or deleted. The implementation of multiplicity must be systematically tested.
10. Bugs can easily hide when an update method (a method that changes instance state) computes a corrupt state, the class interface does not make this corrupt state visible (i.e., by providing a public feature for reporting a corrupt state or throwing an exception), and the corruption does not inhibit other operations. Def-use sequences of method calls must be systematically tested.
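
Hazard 6 above calls for driving a sequentially constrained class through its state machine. A minimal sketch with a hypothetical class: Read() is legal only between Open() and Close(), and the test exercises both an illegal and a legal sequence.

#include <cassert>
#include <stdexcept>

class Channel {
public:
    void Open()  { open = true; }
    void Close() { open = false; }
    int  Read()  {                       // sequential constraint: must be open
        if (!open) throw std::logic_error("Read before Open");
        return 42;
    }
private:
    bool open = false;
};

int main() {
    Channel ch;
    bool rejected = false;
    try { ch.Read(); }                   // illegal sequence: Read in state "closed"
    catch (const std::logic_error&) { rejected = true; }
    assert(rejected);
    ch.Open();
    assert(ch.Read() == 42);             // legal sequence succeeds
    ch.Close();
    return 0;
}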

17 Functional vs. Object-Oriented Architectures
[Diagram contrasting the two styles for related computations on different objects: (a) Functional Architecture, in which a use case flows as data and control through functions X, Y (Y1, Y2), and Z (Z1, Z2); (b) Object-Oriented Architecture, in which the same computations are distributed across Class-A, Class-B, and Class-C behind a public service layer, coordinated by a control/boundary object exchanging messages.]

18 Functional Dependencies Among Methods
[Diagram: a dependency graph over methods of classes A through E; edge labels 1, 2, and 3 mark the functional thread (use case) each dependency belongs to.]

19 Functional Threads (Use Cases)
[Diagram: the functional threads through methods A1-A3, B1-B2, C1, D1-D3, and E1-E2 of classes A through E.]

20 Hierarchical View of Design
[Diagram: a call hierarchy rooted at main() over A::One(), A::Two(), A::Three(), B::One(), B::Two(), C::One(), D::One(), D::Two(), D::Three(), E::One(), and E::Two(), partitioned into Use Case #1, Use Case #2, and Use Case #3.]

21 Design & Testing Principles
Principle 1: Design should be performed "top-down" for each functional thread defined by a Use Case; that is, the interface and detailed design of a module should follow the design of all modules that functionally depend on it.
Rationale: By performing interface and detailed design top-down, we ensure that all requirements flow from dependent modules toward the modules they depend on. This principle attempts to postpone detailed design decisions until all functional requirements for a module are known.
Principle 2: Coding and unit testing should be performed "bottom-up" for a functional thread; that is, the unit testing of a module should precede the unit testing of all modules that functionally depend on it.
Rationale: By performing unit testing bottom-up, we ensure that all subordinate modules have been verified before verifying the module that depends on them. This principle attempts to localize and limit the scope and propagation of changes resulting from unit testing. (A sketch of computing such a bottom-up order follows.)
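
Principle 2 amounts to unit-testing modules in reverse order of the functional-dependence graph. A minimal sketch (hypothetical module names A through D, not the course's example) that computes a bottom-up test order with a topological sort:

#include <iostream>
#include <map>
#include <queue>
#include <string>
#include <vector>

int main() {
    // depends[x] = modules that x functionally depends on (x calls them).
    std::map<std::string, std::vector<std::string>> depends = {
        {"A", {"B", "C"}}, {"B", {"D"}}, {"C", {"D"}}, {"D", {}}
    };
    // For each module, count how many of its dependencies are still untested.
    std::map<std::string, int> remaining;
    std::map<std::string, std::vector<std::string>> dependents;
    for (const auto& kv : depends) {
        remaining[kv.first] += 0;              // ensure every module has an entry
        for (const std::string& d : kv.second) {
            ++remaining[kv.first];
            dependents[d].push_back(kv.first);
        }
    }
    // Modules with no untested dependencies may be unit tested now.
    std::queue<std::string> ready;
    for (const auto& kv : remaining)
        if (kv.second == 0) ready.push(kv.first);
    while (!ready.empty()) {
        std::string m = ready.front(); ready.pop();
        std::cout << "unit test " << m << std::endl;   // prints D, then B and C, then A
        for (const std::string& u : dependents[m])
            if (--remaining[u] == 0) ready.push(u);
    }
    return 0;
}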

22 Design & Testing Schedules
[Diagram: effort vs. time across the development schedule; development layers for detailed design and coding proceed top-down while development layers for unit testing proceed bottom-up, feeding Build #1 (Integration Test 1), Build #2 (Integration Test 2), and Build #3 (System Test). See notes.]

23 McCabe's* Cyclomatic Complexity
For a flow graph G: V(G)* = E - N + p + 1, where
E = # edges in the flow graph
N = # nodes in the flow graph
p = # of independent program components.
A component can be represented by a 1-entry/1-exit DAG (directed acyclic graph). McCabe proved that his metric gives the number of linearly independent flow paths through the DAG. The number of LI paths relates strongly to the testing complexity of the component.
[Example flow graph with nodes 0-7: branch nodes = 1, 3 (fan-out > 1); join nodes = 6, 7 (fan-in > 1); sequential nodes = 0, 2, 4, 5.] In this example: E = 9, N = 8, p = 1, and V(G)* = 3.
It can also be shown that if G is a planar graph, then V(G)* is the number of bounded regions or faces of the graph.
*NOTE: This formula is actually a variant of McCabe's metric proposed by Brian Henderson-Sellers. McCabe's metric for p isolated components is given by V(G) = E - N + 2p. Henderson-Sellers showed that his variant gives a consistent value when the isolated components are treated as procedures connected to their call sites.
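
Both formulas are easy to check mechanically. The sketch below encodes them and verifies the numbers on this slide and the next.

#include <cassert>

// McCabe's original metric for p isolated components.
int McCabe(int E, int N, int p) { return E - N + 2 * p; }

// Henderson-Sellers' variant, the V(G)* used on these slides.
int HendersonSellers(int E, int N, int p) { return E - N + p + 1; }

int main() {
    assert(HendersonSellers(9, 8, 1) == 3);    // this slide: E = 9, N = 8, p = 1
    assert(McCabe(9, 8, 1) == 3);              // the two agree when p = 1
    assert(HendersonSellers(19, 18, 3) == 5);  // next slide: V(main+A+B)* = 5
    assert(McCabe(19, 18, 3) == 7);            // McCabe's value differs when p > 1
    return 0;
}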

24 McCabe's* Cyclomatic Complexity
[Flow graphs for Main() (nodes 0-6, including Call A and Call B nodes), A() (nodes A1-A4), and B() (nodes B1-B7).]
V(main)* = 7 - 7 + 2 = 2
V(A)* = 4 - 4 + 2 = 2
V(B)* = 8 - 7 + 2 = 3
V(main+A+B)* = 19 - 18 + 4 = 5
Note: p = 3 (independent modules = Main, A, B).

25 McCabe's* Cyclomatic Complexity
[The same flow graphs with the call nodes "split" (adding nodes 3' and 5'): call nodes are split to match the single entry and exit nodes of the called component, so 1 node and 2 edges are added for each called component.]
V(main+A+B)* = 23 - 20 + 2 = 5
Linearly independent paths are:
(0,1,2,4,5,B1,B2,B7,5',6)
(0,1,2,4,5,B1,B3,B4,B6,B7,5',6)
(0,1,2,4,5,B1,B3,B5,B6,B7,5',6)
(0,1,3,A1,A2,A4,3',4,5,B1,B3,B4,B6,B7,5',6)
(0,1,3,A1,A3,A4,3',4,5,B1,B3,B5,B6,B7,5',6)

26 Example Method (C++)

void Server::Work()
{
    ifstream input;    // input file stream
    ofstream output;   // output file stream
    Tray tray;
    cout << "McDonald's Implementation in C++" << endl;
    cout << "by Amit Hathiramani and Neil Lott" << endl;
    cout << endl;
    while(1) {
        string szInputFileName;
        cout << "Please enter the name of the input file: ";
        cin >> szInputFileName;
        input.open(szInputFileName.c_str());
        if(!input)
            cerr << endl << "No file named " << szInputFileName
                 << " found." << endl;
        else
            break;
    }
    // ... Insert Segment A (next slide) here ...
} //Server::Work

[Flow graph node numbers 1-6 and 16 annotate this code on the slide.]

27 Example Method (C++)

// Segment A
FoodItems *pFood;
while(!input.eof()) {
    char szMarker[4];
    input >> szMarker;
    strupper(szMarker);
    if(strcmp(szMarker, "$D") == 0)
        pFood = new Drinks;        // drink
    else if(strcmp(szMarker, "$S") == 0)
        pFood = new Sandwiches;    // sandwich
    else if(strcmp(szMarker, "") == 0)
        continue;                  // blank line; skip it
    else
        throw InputException("Unknown type found " + string(szMarker));
    pFood->Get(input);
    tray.Add_Item(pFood);
} //while

[Flow graph node numbers 6-15 annotate this code on the slide.]

28 Example Method (C++)
[Flow graph for the complete Server::Work() with nodes 1-17, showing an exception exit, a normal exit, and a system exit.]
V(G)* = 19 - 16 + 2 = 5, or V(G)* = 21 - 17 + 2 = 6.

29 Design & Test Example: Discrete Event Simulator ©Dr. David A. Workman School of Computer Science University of Central Florida

30 Use Case Diagram: Simulator
[Use case diagram for the Simulation System: the User actor participates in the use cases Construct World, Specify Input, Specify Output, Initialize Simulation, Simulate World, Output World Objects, and Report Simulation Data; the system reads a Simulation Input File and writes a Simulation Log File.]

31 Simulation Architecture

32 Simulation Architecture: Design Template
[Layered class diagram: an Interface and Control Layer containing the Virtual World class; an Agent Layer of active objects (Agent subclasses, EventMgr, Event); a Message Layer (Message and its subclasses, including Players); and a Passive Class Layer. SimMgmt classes supply the management side, SimModels classes the model side.]
The Passive layer contains all classes that model problem data and inanimate objects of the simulated world. Agents make direct calls on passive objects, but must account for the simulation time consumed when doing so. Passive objects make direct calls to each other, if necessary. Passive objects may be passed from one Agent to another as part of an instance of some Message subclass.
The Message layer contains all the derived subclasses of Message. These classes are used to pass data for servicing interaction events between Agents. Only recipient Agent classes know the content and use of instances of these classes. Methods of Agents receiving messages optionally take parameters which are instances of one (or more) of the Passive classes and return an instance of class Message or one of its subclasses. Instances of the root class Message carry no data and denote signals that require some action on the part of the receiver Agent.
The Agent layer consists of all active object classes. Active objects must be instances of some subclass of abstract class Agent. The simulation progresses as a result of Events created and serviced by Agents. An Event has four components: a Sender agent, a Recvr agent, an instance of some Message subclass, and an event time. When one Agent wishes to interact with another, it must do so by creating an Event that defines a "future" time at which the interaction will occur. The message component defines an action to the Recvr agent and possibly data necessary to complete the action. (A minimal sketch of such an Event follows.)
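
A minimal C++ sketch of the Event just described, with assumed names and types (not the course's actual SimMgmt code): the four components, plus the time-ordered queue an EventMgr might keep so that the earliest event is serviced first.

#include <memory>
#include <queue>
#include <vector>

class Agent;                                      // active object; a pointer suffices here
class Message { public: virtual ~Message() {} };  // root message: a data-free signal

struct Event {
    Agent* sendr;                    // Sender agent
    Agent* recvr;                    // Recvr agent
    std::shared_ptr<Message> msg;    // instance of some Message subclass
    double time;                     // "future" simulation time of the interaction
};

// Order pending events so the smallest time is on top of the queue.
struct LaterFirst {
    bool operator()(const Event& a, const Event& b) const { return a.time > b.time; }
};

int main() {
    std::priority_queue<Event, std::vector<Event>, LaterFirst> pending;
    pending.push(Event{nullptr, nullptr, std::make_shared<Message>(), 5.0});
    pending.push(Event{nullptr, nullptr, std::make_shared<Message>(), 2.0});
    return pending.top().time == 2.0 ? 0 : 1;     // the time-2.0 event is serviced first
}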

33 Simulation Architecture: Student Conversation
[The design template instantiated for the Student Conversation example: Conversation as the virtual world in the Interface and Control Layer; Student as the Agent subclass, with EventMgr and Event, in the Agent Layer; Message subclasses Players, QuestionMsg, and AnswerMsg in the Message Layer; Question and Answer in the Passive Class Layer.]

34 Design Graph: 1 (Use Case 1)
0: Main()
Class Conversation: 1: Conversation()
Class Agent: 2: Agent(), 3: operator>>(), 4: Get()
Class Student: 5: Student(), 6: Extract(), 7: Get()
4 Reusable Methods, 9 New Methods
[Dependency graph over these methods.]

35 Design Graph: 2 (Use Case 2)
0: Main()
Class Conversation: 1: Conversation(), 8: Initialize()
Class Agent: 2: Agent(), 3: operator>>(), 4: Get(), 11: NameOf(), 21: ~Agent()
Class Student: 5: Student(), 6: Extract(), 7: Get(), 13: Initialize(), 16: AcceptQuest()
Class Players: 9: Players(), 12: setAgent(), 14: getAgent(), 15: getOther(), 20: ~Players()
Class Message: 10: Message()
Class SpeakMsg: 17: SpeakMsg()
Class Event: 18: Event()
Class EventMgr: -3: EventMgr(), 19: postEvent()
5 Reusable Methods, 9 New Methods
[Dependency graph over these methods.]

36 Design Graph: 3 (Use Case 3)
0: Main()
Class Conversation: 1: Conversation(), 8: Initialize(), 22: Insert()
Class Agent: 2: Agent(), 3: operator>>(), 4: Get(), 11: NameOf(), 21: ~Agent(), 23: oper<<(), 26: Put()
Class Student: 5: Student(), 6: Extract(), 7: Get(), 13: Initialize(), 16: AcceptQuest(), 24: Insert(), 25: Put()
Class Players: 9: Players(), 12: setAgent(), 14: getAgent(), 15: getOther(), 20: ~Players()
Class Message: 10: Message()
Class SpeakMsg: 17: SpeakMsg()
Class Event: 18: Event()
Class EventMgr: -3: EventMgr(), 19: postEvent()
2 Reusable Methods, 3 New Methods
[Dependency graph over these methods.]

37 Design Graph: 4 (Use Case 4)
0: Main()
Class Conversation: 1: Conversation(), 8: Initialize(), 22: Insert(), 44: Simulate()
Class Agent: 2: Agent(), 3: operator>>(), 4: Get(), 11: NameOf(), 21: ~Agent(), 23: oper<<(), 26: Put()
Class Student: 5: Student(), 6: Extract(), 7: Get(), 13: Initialize(), 16: AcceptQuest(), 24: Insert(), 25: Put(), 37: Dispatch(), 39: doQuestion(), 40: AcceptAnswr(), 41: doAnswer()
Class Players: 9: Players(), 12: setAgent(), 14: getAgent(), 15: getOther(), 20: ~Players()
Class Message: 10: Message(), 30: Oper<<(), 31: Insert(), 32: Put(), 42: ~Message()
Class SpeakMsg: 17: SpeakMsg(), 33: Insert(), 34: Put(), 38: getHandlr()
Class Event: 18: Event(), 29: oper<<(), 35: getRecvr(), 36: getMsg(), 43: ~Event()
Class EventMgr: -3: EventMgr(), 19: postEvent(), 27: moreEvents(), 28: getNextEvent()
10 Reusable Methods, 8 New Methods
[Dependency graph over these methods.]

38 Design Graph: 5 (Use Case 5)
0: Main()
Class Conversation: 1: Conversation(), 8: Initialize(), 22: Insert(), 44: Simulate(), 45: WrapUp(), 46: ~Conversation()
Class Agent: 2: Agent(), 3: operator>>(), 4: Get(), 11: NameOf(), 21: ~Agent(), 23: oper<<(), 26: Put()
Class Student: 5: Student(), 6: Extract(), 7: Get(), 13: Initialize(), 16: AcceptQuest(), 24: Insert(), 25: Put(), 37: Dispatch(), 39: doQuestion(), 40: AcceptAnswr(), 41: doAnswer(), 47: ~Student()
Class Players: 9: Players(), 12: setAgent(), 14: getAgent(), 15: getOther(), 20: ~Players()
Class Message: 10: Message(), 30: Oper<<(), 31: Insert(), 32: Put(), 42: ~Message()
Class SpeakMsg: 17: SpeakMsg(), 33: Insert(), 34: Put(), 38: getHandlr()
Class Event: 18: Event(), 29: oper<<(), 35: getRecvr(), 36: getMsg(), 43: ~Event()
Class EventMgr: -3: EventMgr(), 19: postEvent(), 27: moreEvents(), 28: getNextEvent(), 48: ~EventMgr()
1 Reusable Method, 3 New Methods
Totals: 21 reusable + 27 new = 48 methods; 4 reusable + 4 new = 8 classes.

39 Scheduling
[Development schedule diagram: methods 0-47 from the design graphs allocated across the builds for Use Cases 1 through 5.]

40 Software Unit Testing Case Study: Money Class Conversation Simulation COP 4331 OO Processes for Software Development Dr. David A. Workman School of EE and Computer Science October 2, 2006 / March 28, 2007

41 Money Class
[UML class diagram]
Attributes: char sign; int dollars; int cents
Constructors: Money(); Money(ifstream&); Money(char, int, int)
Inspectors: getSign(): char; getDollars(): int; getCents(): int
Operators: operator-(): Money; operator-(Money): Money; operator+(Money): Money; operator==(Money): bool; operator<(Money): bool; operator>(Money): bool; operator<=(Money): bool; operator>=(Money): bool
I/O: Extract(ifstream&) => TokenError; Insert(ostream&); operator>>(ifstream&, Money&); operator<<(ostream&, Money&); #Get(ifstream&); #Put(ostream&)
Conversions (private): toInt(Money): int; toMoney(int): Money; initially make these public so the first test driver can call them.
[A second diagram groups the members into four unit-test phases: roughly, phase 1 covers the constructors, inspectors, and toInt/toMoney; phase 2 the boolean and binary operators; phases 3 and 4 the stream I/O methods and TokenError handling, matching Test Drivers 1-3 below.]
A C++ sketch of this interface follows.
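
Read as plain C++, the diagram corresponds to a declaration along these lines (a sketch only: the exact signatures are inferred from the diagram and may differ from the course's actual Money.h; the test-phase comments follow the grouping described above).

#include <fstream>
#include <iostream>

class TokenError { };                        // thrown when Extract meets bad input

class Money {
public:
    Money();                                 // phase 1: constructors and inspectors
    Money(std::ifstream& fin);
    Money(char sign, int dollars, int cents);
    char getSign() const;
    int  getDollars() const;
    int  getCents() const;
    Money operator-() const;                 // phase 2: unary, binary, and
    Money operator-(Money m) const;          //          boolean operators
    Money operator+(Money m) const;
    bool  operator==(Money m) const;
    bool  operator<(Money m) const;          // >, <=, >= are declared alike
    void  Extract(std::ifstream& fin);       // phases 3-4: stream I/O; may throw TokenError
    void  Insert(std::ostream& fout) const;
    friend std::ifstream& operator>>(std::ifstream& fin, Money& m);
    friend std::ostream&  operator<<(std::ostream& fout, Money& m);
protected:
    void Get(std::ifstream& fin);            // '#' members of the diagram are protected
    void Put(std::ostream& fout) const;
private:
    char sign;                               // '+' or '-'
    int  dollars;
    int  cents;
    // '-' members are private; "initially make public" so Test Driver 1 can call them.
    static int   toInt(Money m);
    static Money toMoney(int cents);
};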

42 Money Class Test Driver 1

#include "IOMgmt.h"
using namespace IOMgmt;
#include "Money.h"

int main()
{
    InMgr finMgr("Enter Input File (Test1):");
    ifstream& fin = finMgr.getStream();
    OutMgr foutMgr("Enter Output File(Test1):");
    ostream& fout = foutMgr.getStream();
    int num_cases;
    char sign;
    int dollars, cents;
    fin >> num_cases;
    for(int I = 1; I <= num_cases; I++) {
        fin >> sign >> dollars >> cents;
        fout << "Case #" << num_cases - I + 1 << endl;
        fout << "Input Data: " << sign << dollars << cents << endl;
        Money M1(sign, dollars, cents);
        int M1cents = toInt(M1);
        Money M2 = toMoney(M1cents);
        char Sign = M2.getSign();
        int Dollars = M2.getDollars();
        int Cents = M2.getCents();
        fout << "M1cents: " << M1cents << endl;
        fout << "Sign = " << Sign << ", Dollars= " << Dollars
             << ", Cents= " << Cents << endl;
    } //for
} //main-1

43 Money Class Test Driver 2

#include "IOMgmt.h"
using namespace IOMgmt;
#include "Money.h"

int main()
{
    InMgr finMgr("Enter Input File(Test2):");
    ifstream& fin = finMgr.getStream();
    OutMgr foutMgr("Enter Output File(Test2):");
    ostream& fout = foutMgr.getStream();
    int num_cases;
    char sign1, sign2;
    int dollars1, cents1, dollars2, cents2;
    fin >> num_cases;
    for(int I = 1; I <= num_cases; I++) {
        fin >> sign1 >> dollars1 >> cents1 >> sign2 >> dollars2 >> cents2;
        Money M1(sign1, dollars1, cents1), M2(sign2, dollars2, cents2);
        fout << "Case #" << I << endl;
        fout << "Input Data (M1): " << sign1 << dollars1 << cents1 << endl;
        fout << "Input Data (M2): " << sign2 << dollars2 << cents2 << endl;
        bool Reql, Rneq, Rless, Rgtr, Rleq, Rgeq;
        Reql  = M1 == M2;
        Rneq  = M1 != M2;
        Rless = M1 <  M2;
        Rgtr  = M1 >  M2;
        Rleq  = M1 <= M2;
        Rgeq  = M1 >= M2;
        fout << "(==) " << Reql << " (!=) " << Rneq << endl; //etc.
        Money Sum, Diff, Neg;
        Sum = M1 + M2;
        Diff = M1 - M2;
        Neg = -M1;
        fout << "Sum = " << Sum << ", Diff = " << Diff
             << ", Neg(M1) = " << Neg << endl;
        fout << "M1 = " << M1 << ", M2 = " << M2 << endl;
    } //for
} //main-2

44 Money Class Test Driver 3

#include "IOMgmt.h"
using namespace IOMgmt;
#include "Money.h"

int main()
{
    InMgr finMgr("Enter Input File(Test3):");
    ifstream& fin = finMgr.getStream();
    OutMgr foutMgr("Enter Output File(Test3):");
    ostream& fout = foutMgr.getStream();
    int num_cases;
    fin >> num_cases;
    for(int I = 1; I <= num_cases; I++) {
        Money M;
        try {
            fout << "Case# " << I;
            fin >> M;
            fout << ": Money = " << M << endl;
        } //try
        catch(TokenError &e) {
            fout << ": " << e.getMsg() << endl;
        } //catch
    } //for
} //main-3

45 Testing Agent Behaviors
Once the constructors, Insert, Put, Extract, and Get methods have been written and tested, the way is clear to test simulation initialization and execution behaviors. There are a couple of techniques that can be used to test specific Events and, consequently, specific Agent behaviors.

class Conversation {                    //Virtual World
public:
    Conversation();
    ~Conversation();
    void Initialize();
    void Simulate();
    void WrapUp();
    void Put();
private:
    int debugOption;                    //new simulation input parameter
    int numStudents;
    int numEvents;
    int lastEvent;
    AGENTPTR *students;                 //dynamic array of Agent*
    MSGPTR players;                     //pointer to Players message
    void ExtractEvent();                //private method(s) for testing
}; //Conversation

46 Testing Agent Behaviors
SimModels.cpp:

Conversation::Conversation()
{
    //Parses the image of the virtual world object, Conversation.
    //If debugOption = ON, then it calls ExtractEvent().
}

void Conversation::ExtractEvent()
{
    //Private method of Conversation.
    //Has visibility to any data the Conversation constructor can see.
    //Parses the desired image of an Event for debugging purposes.
    //Posts the constructed Event to theEventMgr.
}

47 Testing Agent Behaviors

class Message {                          //concrete base class
public:
    Message(int Handler, string Description);
    virtual ~Message() { }               //destructor
    int getHandler() const { return handler; }        //inspector
    friend ostream& operator<<(ostream &fout, Message &anyMsg)
        throw(IOError);
    friend ifstream& operator>>(ifstream &fin, Message &anyMsg)
        throw(IOError, TokenError);      //new
protected:
    virtual void Insert();
    virtual void Put();
    virtual void Extract() throw(TokenError);          //new
    virtual void Get() throw(TokenError);              //new
private:
    int handler;
    string description;
}; //Message

class SpeakMsg : public Message {        //inherits from Message
public:
    SpeakMsg(int Handler, string Description, string Speak);
    ~SpeakMsg() { }
    string getContent() const { return content; }
protected:
    virtual void Insert();
    virtual void Put();
    virtual void Extract() throw(TokenError);  //reads default program input stream
    virtual void Get() throw(TokenError);      //reads default program input stream
private:
    string content;
}; //SpeakMsg

48 Testing Agent Behaviors
Input file without debugging (debugopt: OFF):

Conversation{
  debugopt: OFF
  #students: 2
  Student{ name: Betty questdelay: 5 ansrdelay: 2 }Student
  Student{ name: Bart questdelay: 3 ansrdelay: 4 }Student
}Conversation

Input file with a debug Event (debugopt: ON):

Conversation{
  debugopt: ON
  #students: 2
  Student{ name: Betty questdelay: 5 ansrdelay: 2 }Student
  Student{ name: Bart questdelay: 3 ansrdelay: 4 }Student
  debug{
    #events: 1
    Event{
      time: ttt
      sendr: Bart
      recvr: Betty
      msg: SpeakMsg{ handler: 3 description: Testing content: How_Are_You? }SpeakMsg
    }Event
  }debug
}Conversation

