
1 Testing, Reliability, and Interoperability Issues in CORBA Programming Paradigm

2 Outline
Motivation
Testing the CORBA Programs
Reliability Evaluation
Interoperability
Future Work
Q&A

3 Motivation
Implementing distributed applications with CORBA has become popular
Testing, reliability, and interoperability in the CORBA programming paradigm remain largely unexplored
These issues are important for developing reliable distributed systems with CORBA

Why talk about CORBA issues? Many companies use CORBA as their enterprise distributed-system solution. Although CORBA helps in developing distributed systems, mechanisms for reliability and testing are important!

4 What is CORBA?
An architecture and specification for creating, distributing, and managing distributed program objects in a network
Allows programs at different locations, developed by different vendors, to communicate in a network through an "interface broker"
Developed by a consortium of vendors through the Object Management Group (OMG), which currently includes nearly 800 member companies
Sanctioned by ISO and X/Open as the standard architecture for distributed objects (or components)
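To make the "interface broker" idea concrete, here is a minimal C++ client sketch. The Team interface, its stub header, and the IOR-on-the-command-line convention are assumptions for illustration; ORB_init, string_to_object, and _narrow are standard CORBA C++-mapping calls.

    // Minimal CORBA C++ client sketch (hypothetical Team interface).
    #include <iostream>
    #include "TeamC.h"  // assumed stub header generated from Team.idl

    int main(int argc, char* argv[]) {
      try {
        // Initialize the ORB: the client's entry point to the "broker".
        CORBA::ORB_var orb = CORBA::ORB_init(argc, argv);

        // Turn a stringified object reference (IOR) into an object reference.
        CORBA::Object_var obj = orb->string_to_object(argv[1]);

        // Narrow the generic reference to the (hypothetical) Team interface.
        Team_var team = Team::_narrow(obj.in());

        // Invoke a remote operation like a local call; the ORB marshals
        // the request to wherever the server object actually lives.
        CORBA::String_var name = team->name();
        std::cout << name.in() << std::endl;

        orb->destroy();
      } catch (const CORBA::Exception&) {
        std::cerr << "CORBA exception caught" << std::endl;
        return 1;
      }
      return 0;
    }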

5 Experiment Project Description
19 programs, same specification
Soccer Management System: 10 operations, rules about teams
Different ORBs (Visigenic / Orbix)
Different languages (C++ / Java)

Based on the same specification, each program provides the basic management operations and enforces many rules about teams.

6 Program Metrics
7 programs use Iona Orbix (C++)
12 programs use Visigenic: 9 use Java for both client and server, 2 use C++ for both, and 1 uses Java for the client and C++ for the server

7 General Software Metrics
The software metrics of these 19 programs are listed in Table 2. The metrics were collected using etags and some Perl scripts. The programs range from 500 to 5000 lines of code (LOC). The large size of program P12 is due to its elaborate user interface and on-line help commands. The ratio of client code to server code is 1.79.

8 Testing the Programs
Test preparation: generate test cases, define the test procedure
Test execution
Result analysis

In order to evaluate reliability, we first test the programs.

9 Generate Test Cases
Specification + IDL -> test cases

Based on the specification and the IDL, test cases can be generated for each operation; the IDL signature in particular tells us which exceptions each operation may raise.
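As a hedged illustration, consider one operation. The SoccerManager interface and TeamExists exception below are invented stand-ins for the project's actual IDL, which is not reproduced in the slides; the snippet assumes the generated stub header and <iostream> are included.

    // Hypothetical IDL fragment (not the project's actual interface):
    //
    //   exception TeamExists {};
    //   interface SoccerManager {
    //     void createTeam(in string name) raises (TeamExists);
    //   };
    //
    // The signature suggests two test cases directly:
    //   1. normal case: create a team with a fresh name, expect success;
    //   2. exception case: create the same team twice, expect TeamExists.
    void testCreateTeam(SoccerManager_ptr mgr) {
      mgr->createTeam("Tigers");     // normal case: should succeed
      try {
        mgr->createTeam("Tigers");   // duplicate: server should raise TeamExists
        std::cerr << "FAIL: expected TeamExists was not raised" << std::endl;
      } catch (const TeamExists&) {
        std::cout << "PASS: exception raised as declared" << std::endl;
      }
    }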

10 Test Procedure
Cover all test cases with a minimum number of operations
Apply one test case per step
Automated by scripts

To reduce the testing work for these program versions, we define a test sequence for each operation that covers all the test cases with a minimum number of operations. In each test step we apply only one test case; however, some test cases need extra setup operations to work properly (for example, a team must exist before an operation on it can be exercised). In total, the test procedure contains 76 operations covering all the test cases.

11 Pass Rate
Pj - number of "Pass" cases for program j
Mj - number of "Maybe" cases for program j
C - total number of test cases applied to each program (57)
Oi - number of "Pass" test cases for operation i
OMi - number of "Maybe" test cases for operation i
Ti - total number of test cases for operation i

Some test cases, designed to raise exceptions, cannot be applied to a program because the client side of the program deliberately forbids them. In this situation we do not know whether the server is designed properly to raise the exceptions, so we record "Maybe" as the result. The definitions of the pass rate and the reliability in this paper therefore consider two conditions, one including the "Maybe" cases and the other excluding them.
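The formulas themselves did not survive the transcript; from the definitions above they are presumably:

\mathrm{PassRate}_j = \frac{P_j}{C}, \qquad \mathrm{PassRate}_j' = \frac{P_j + M_j}{C}

\mathrm{PassRate}_i = \frac{O_i}{T_i}, \qquad \mathrm{PassRate}_i' = \frac{O_i + OM_i}{T_i}

where the unprimed rates exclude the "Maybe" cases and the primed rates count them as passes.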

12 Test Results for Each Team
From Table 4 we see that Team 16 and Team 19 have low pass rates, as they fail to process many exceptions properly. In addition, Team 13 has many "Maybe" cases due to its special user interface, which blocks many of the illegal test cases designed to test exception handling.

Maybe: some test cases are not applicable.

13 Test Results for Each Operation
Operation pass rate = number of "Pass" cases for the operation / total cases for the operation

From Table 5 we can see that the operations CreateTeam and MovePlayer have the lowest pass rates. The reason is the complexity of these operations: they require more care in handling both the normal cases and the exceptions. Furthermore, the CreateTeam operation has a large number of "Maybe" results, as many programs forbid the test cases that would raise its exceptions.

14 Defects Classification
Exceptions
  Server side: missing; extra or wrong
  Client side: missing; wrong
Memory management
  Reference count ("_duplicate()/_release()")
  Language mapping (sequence/string/array)
Other (user interface, parameter processing, etc.)

Server side, missing: the server-side program does not throw the necessary exception. This usually stems from an IDL definition problem as addressed above; the implementation may also fail to check whether an exception should be raised.
Server side, extra or wrong: the server throws an unexpected exception. An extra exception counts as a wrong one, since the client cannot recognize an exception it does not expect. This defect seldom occurs.
Client side, missing: the programmers forget to catch some kind of expected exception.
Client side, wrong: the programmers catch the exception but give an incorrect response. This defect occurs when the exception-handling code needs special processing.
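A hedged sketch of the dominant defect classes follows; the SoccerManager servant, its players_ map, and the NoSuchPlayer exception are invented for illustration, not taken from the project code.

    // Hypothetical IDL:  void deletePlayer(in string name) raises (NoSuchPlayer);

    // Server side. The "missing exception" defect is omitting this check,
    // so an illegal request silently succeeds instead of raising the
    // IDL-declared exception.
    void SoccerManager_impl::deletePlayer(const char* name) {
      if (players_.find(name) == players_.end()) {  // players_: a std::map member
        throw NoSuchPlayer();  // raise the user exception declared in the IDL
      }
      players_.erase(name);
    }

    // Client side. The "missing" defect drops the specific handler entirely;
    // the "wrong" defect catches the exception but responds incorrectly.
    void removePlayer(SoccerManager_ptr mgr, const char* name) {
      try {
        mgr->deletePlayer(name);
      } catch (const NoSuchPlayer&) {
        std::cerr << "no such player: " << name << std::endl;  // expected handling
      } catch (const CORBA::SystemException&) {
        std::cerr << "ORB-level failure" << std::endl;         // transport/ORB errors
      }
    }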

15 Distribution of Defects
Exceptions: server (missing; extra or wrong), client (missing; wrong)
Memory management: traditional vs. CORBA-specific (reference count; sequence/array/string)
Other

From Table 6 we can see that over 70 percent of the total defects come from exception handling. This indicates that exception-handling routines are the most difficult part of CORBA programming for distributed systems.

*Missing / Extra or Wrong    +Missing / Wrong
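The CORBA-specific memory-management defects mostly involve the C++ mapping's reference-counting and string-ownership rules. A minimal sketch follows; the Team interface is hypothetical, while _duplicate, release, string_dup, and the _var types are standard C++-mapping API.

    // Reference counting: every object reference you store must be
    // _duplicate()d, and every owner must release() it exactly once.
    Team_ptr cached = Team::_nil();

    void cacheTeam(Team_ptr t) {
      CORBA::release(cached);        // drop the old reference (nil-safe)
      cached = Team::_duplicate(t);  // forgetting _duplicate(): dangling reference
    }                                // forgetting release():    reference leak

    // String ownership: strings handed across the CORBA boundary must be
    // allocated with string_dup; the receiver (or a String_var) frees them.
    char* makeName() {
      return CORBA::string_dup("Tigers");  // plain new[]/strdup here is a defect
    }

    // The _var smart types automate both rules and avoid most such defects.
    void safe(Team_ptr t) {
      Team_var owned = Team::_duplicate(t);  // released automatically on scope exit
      CORBA::String_var name = makeName();   // freed automatically on scope exit
    }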

16 Distribution of Defects
[Chart: distribution of defects across the exception, memory management, and other categories.]

17 Reliability
Develop the operational profile
Define the reliability of the program
Analyze the results

Software reliability [LYU96] is the probability of failure-free software operation for a specified time in a specified environment. We use a procedure similar to the one specified in [MUSA97] to evaluate software reliability in our experiment. We note, however, that it is not easy to obtain execution time for CORBA programs, as many factors affect operation execution time: programming language, platform, ORB implementation, user implementation, and so on. Since it is difficult to get an accurate execution-time measure for each operation in these programs, we evaluate the reliability of each accepted program based on the defects found in our test and the occurrence probability of each operation. We test and evaluate the client and server of each program as a whole, and assume that each test case has the same execution time within the same program.

18 Operational Profile
Operational profile: the list of occurrence probabilities of each element in the input domain of the program.

Because the application in our experiment is a new information management system, the operational profile cannot be obtained from historical data. Consequently, we have to estimate the occurrence probability of each operation; the estimates are shown in Table 7. (For example, a routine query operation would be assigned a higher probability than a rarely used administrative one.)

19 Reliability Evaluation
Rj - reliability of program j
Rj' - reliability of program j, treating "Maybe" as pass
OPi,j - number of "Pass" test cases for operation i in program j
Mi,j - number of "Maybe" test cases for operation i in program j
Ti - total number of test cases for operation i
pi - occurrence probability of operation i
n - number of operations

As noted above, accurate execution times are hard to obtain, so we evaluate the reliability of each accepted program from the defects found in our test and the occurrence probability of each operation, treating the client and server of each program as a whole and assuming each test case takes the same execution time within a program.
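The reliability formulas were lost in the transcript; given the definitions above, the natural reconstruction is:

R_j = \sum_{i=1}^{n} p_i \, \frac{OP_{i,j}}{T_i}, \qquad
R_j' = \sum_{i=1}^{n} p_i \, \frac{OP_{i,j} + M_{i,j}}{T_i}

that is, each operation's pass rate is weighted by its occurrence probability from the operational profile, with the primed version counting "Maybe" cases as passes.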

20 Evaluation Results
[Results table not reproduced in the transcript.]

21 Evaluation Results
[Results table not reproduced in the transcript.]

22 Comparison
We also list the average reliability for Visigenic vs. Orbix programs and for Java vs. C++ programs, as shown in Table 9. From Table 9 we can see that the reliability of the Visigenic programs is higher than that of the Orbix programs. Moreover, the reliability of the teams using Java is higher than that of the teams using C++. This result does not necessarily mean that using Visigenic and Java is better than using Orbix and C++; instead, it may be due to the CORBA mapping for C++ being more complicated than that for Java, and to the programmers generally being more familiar with Java than with C++.

23 Why Interoperability?
Same specification
Try to exchange the clients and servers
Availability, higher reliability

Can combining a server from one version with a client from another lead to higher system reliability?

24 Interoperability Evaluation
The difficulty of interoperation is rated on a five-point scale:
1: very difficult to inter-operate
2: possible to inter-operate, but with considerable effort
3: interoperable with moderate effort
4: interoperable with some effort
5: readily interoperable with minimal effort

25 Evaluation Results
From Table 10 we can see that there is clearly a lack of interoperability among these 18 program versions (indicated by the low overall mark of 1.42), even though they were developed from the same specification requirements. Programs P3, P6, P7, P11, P14, P17, and P19, in particular, are extremely difficult to inter-operate with other program versions. Only a few pairs of programs achieve higher interoperability marks, and they are sparse.

Pairs using the same ORB inter-operate better; the subset P2, P10, P12 does best.

26 Difficulties in Interoperability
IDL interface: attributes/operations, exceptions, other
Between ORBs
CORBA services
Specification

27 Summary
Introduction to CORBA
Testing the CORBA programs
Evaluating the reliability
Evaluating the interoperability

28 Future Work
Testing techniques
Reliability evaluation techniques
Apply software fault tolerance techniques
Reliability model
Implement software reliability middleware

29 Q&A

30 Thank You!

