1 CSCE 548 Secure Software Development: Independence in Multiversion Programming

2 Reading
This lecture:
B. Littlewood, P. Popov, L. Strigini, "Modelling Software Design Diversity - A Review", ACM Computing Surveys, Vol. 33, No. 2, June 2001.
Software reliability: John C. Knight, Nancy G. Leveson, "An Experimental Evaluation of the Assumption of Independence in Multi-Version Programming".
Recommended:
"The Role of Software in Spacecraft Accidents" by Nancy Leveson, AIAA Journal of Spacecraft and Rockets, Vol. 41, No. 4, July. (PDF)

3 Modeling Software Design Diversity – A Review
Bev Littlewood, Peter Popov, and Lorenzo Strigini, Centre for Software Reliability, City University
All systems need to be sufficiently reliable:
Required level of reliability
Catastrophic failures
Need:
Achieving reliability
Evaluating reliability

4 Single-Version Software Reliability
The software failure process: why does software fail? What are the mechanisms that underlie the software failure process?
If software failures are "systematic," why do we still talk of reliability using probability models?
Systematic failure: if a fault of a particular class has shown itself in certain circumstances, then it can be guaranteed to show itself whenever these circumstances are exactly reproduced.

5 Systematic failure in software systems: if a program failed once on a particular input case, it would always fail on that input case until the offending fault had been successfully removed.

6 Failure Process
The system is observed in its operational environment:
Real-time systems: failures over time
Safety systems: the process of failed demands
The failure process is not deterministic.
Software failures stem from inherent design faults.

7 Demand Space
Uncertainty: which demand will be selected, and whether this demand will lie in the failure region DF (illustrated in the sketch below)
Source: Littlewood et al., ACM Computing Surveys
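To make the point concrete, here is a minimal simulation sketch (all names and numbers are invented for illustration): the failure region DF is fixed and deterministic, yet because demands are selected at random from an operational profile, the observed failure process is probabilistic.

    import random

    # Hypothetical demand space: integers 0..999; the (unknown) failure region DF
    # is a fixed set of demands on which the program always fails.
    DEMAND_SPACE = range(1000)
    DF = {17, 250, 251, 252, 903}   # assumed failure region, for illustration only

    def select_demand():
        # Operational profile: here simply uniform over the demand space.
        return random.choice(DEMAND_SPACE)

    def run_program(demand):
        # Systematic failure: the same demand always produces the same outcome.
        return "fail" if demand in DF else "ok"

    trials = 100_000
    failures = sum(run_program(select_demand()) == "fail" for _ in range(trials))
    print(f"estimated probability of failure per demand: {failures / trials:.5f}")
    # Expected to be close to |DF| / |demand space| = 5/1000 = 0.005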

8 Predicting Future Reliability
Steady-state reliability estimation:
Testing the version of the software that is to be deployed for operational use
Sample testing
Reliability growth-based prediction:
Consider the series of successive versions of the software that are created, tested, and corrected, leading to the final version
Extrapolate the trend of (usually) increasing reliability (see the sketch below)
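As a rough illustration of reliability growth-based prediction, the sketch below fits a simple trend to hypothetical inter-failure times and extrapolates one step ahead; the data and the log-linear model are assumptions made for this example, not one of the surveyed models.

    import numpy as np

    # Hypothetical inter-failure times (hours) observed across successive corrected
    # versions; they tend to grow as faults are removed.
    interfailure_times = np.array([2.0, 3.5, 3.0, 6.0, 9.0, 14.0, 22.0, 35.0])
    i = np.arange(1, len(interfailure_times) + 1)

    # Crude growth model: assume log(inter-failure time) grows roughly linearly
    # with the failure index, then extrapolate one step ahead.
    slope, intercept = np.polyfit(i, np.log(interfailure_times), 1)
    next_time = np.exp(intercept + slope * (len(interfailure_times) + 1))

    print(f"predicted next inter-failure time: {next_time:.1f} hours")
    print(f"implied current failure rate: {1.0 / next_time:.4f} per hour")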

9 Design Diversity
Design diversity has been suggested to:
Achieve a higher level of reliability
Assess the level of reliability
"Two heads are better than one."
Hardware: redundancy
Mathematical curiosity: can we build arbitrarily reliable systems from arbitrarily unreliable components?
Software: diversity and redundancy

10 Software Versions
Independent development
Forced diversity
Different types of diversity
Source: Littlewood et al., ACM Computing Surveys

11 N-Version Software
Use scenarios:
Recovery block with acceptance test
N-self-checking

12 Does Design Diversity Work?
Evaluation:
Operational experience
Controlled experimental studies
Mathematical models
Issues:
Applications with extreme reliability requirements
Cost-effectiveness

13 Multi-Version Programming
N-version programming
Goal: increase fault tolerance
Separate, independent development of multiple versions of the software
Versions executed in parallel
Identical input -> identical output?
Majority vote (see the sketch below)
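A minimal sketch of the execute-and-vote step, using three stand-in Python functions in place of independently developed versions (the seeded fault in version_c is purely illustrative):

    from collections import Counter

    # Stand-in "independently developed" versions of the same function; in a real
    # N-version system these would come from separate teams.
    def version_a(x): return x * x
    def version_b(x): return x ** 2
    def version_c(x): return x * x if x != 3 else -1   # seeded fault, illustration only

    VERSIONS = [version_a, version_b, version_c]

    def n_version_run(x):
        outputs = [v(x) for v in VERSIONS]           # conceptually, execute all versions in parallel
        value, votes = Counter(outputs).most_common(1)[0]
        if votes >= len(VERSIONS) // 2 + 1:          # simple majority
            return value
        raise RuntimeError(f"no majority among outputs {outputs}")

    print(n_version_run(4))   # all versions agree -> 16
    print(n_version_run(3))   # version_c fails, but the majority masks the fault -> 9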

14 Separate Development
At which point of software development?
Common form of system requirements document
Voting on intermediate data
Ramamoorthy et al.:
Independent specifications in a formal specification language
Mathematical techniques to compare specifications
Kelly and Avizienis:
Separate specifications written by the same person
3 different specification languages

15 Difficulties
How to isolate versions
How to design voting algorithms

16 Advantages of N-Versioning
Improve reliability
Assumption: the N different versions will fail independently
Outcome: the probability of two or more versions failing on the same input is small
If the assumption holds, the reliability of the system can be higher than the reliability of the individual components (see the sketch below)
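Under the independence assumption the gain is easy to quantify; the sketch below uses an assumed per-version failure probability p (an illustrative value, not a measured one) to compute the failure probability of a 2-out-of-3 majority voter.

    # Assumed per-version probability of failure on a given demand (illustrative).
    p = 1e-3

    # If versions fail independently, a 2-out-of-3 voter fails only when at least
    # two versions fail on the same demand.
    p_two_of_three = 3 * p**2 * (1 - p) + p**3
    print(f"single version:       {p:.3e}")
    print(f"2-of-3 voter (indep): {p_two_of_three:.3e}")   # roughly 3e-6, far below 1e-3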

17 Is the assumption TRUE?

18 False?
Solving difficult problems -> people tend to make the same mistakes
Common design faults
Common Failure Mode Analysis: mechanical systems vs. software systems
N-version-based analysis may overestimate the reliability of software systems!

19 1. How to achieve reliability? 2. How to measure reliability?

20 How to Achieve Reliability?
Need independence
Even small probabilities of coincident errors cause a substantial reduction in reliability (see the sketch below)
Otherwise we overestimate reliability
Crucial systems: aircraft, nuclear reactors, railways
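A small numeric sketch (with invented probabilities) of why even a modest common-mode failure probability erodes the predicted benefit of the 2-out-of-3 voter from the earlier example:

    p = 1e-3          # assumed per-version failure probability
    p_common = 1e-4   # assumed probability that a demand defeats all versions at once

    # Independent-failure prediction for a 2-of-3 voter.
    p_indep = 3 * p**2 * (1 - p) + p**3

    # With a common-mode term, coincident failures dominate.
    p_correlated = p_common + (1 - p_common) * p_indep

    print(f"assuming independence: {p_indep:.2e}")       # ~3e-6
    print(f"with common mode:      {p_correlated:.2e}")  # ~1e-4, about 30x worse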

21 Testing of Critical Software Systems
Dual programming (see the sketch below):
Producing two versions of the software
Executing them on a large number of test cases
Output is assumed to be correct if both versions agree
No manual or independent evaluation of the correct output, because that is expensive to do
Assumption: it is unlikely that two versions contain identical faults for a large number of test cases
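A minimal sketch of the dual-programming (back-to-back testing) idea; the two version functions and the input generator are placeholders, not the critical software itself:

    import random

    def generate_test_case():
        # Placeholder input generator; a real system would sample its operational profile.
        return random.randint(-1000, 1000)

    def version_1(x): return abs(x)
    def version_2(x): return x if x >= 0 else -x     # independently written equivalent

    disagreements = []
    for _ in range(100_000):
        case = generate_test_case()
        if version_1(case) != version_2(case):
            disagreements.append(case)

    # Output is *assumed* correct whenever both versions agree; only disagreements
    # are inspected by hand. Identical faults in both versions go undetected.
    print(f"{len(disagreements)} disagreements to inspect")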

22 Voting
Individual software versions may have low reliability
Run multiple versions and vote on the "correct" answer
Additional cost:
Development of multiple versions
The voting process

23 Common Assumption
Low probability of common mode failures (identical, incorrect output generated from the same input)

24 Independence
Assumed and not tested
Two versions were assumed to be correct if their outputs for the test cases agreed
Tests for common errors but not for independence
Kelly and Avizienis: 21 related and 1 common fault in a nuclear reactor project
Taylor: common faults in European practical systems
Need evaluation/testing of independence

25 Experiment
University of Virginia and University of California at Irvine
Graduate and senior-level computer science students
27 programs (9 UVA, 18 UCI)
1 million randomly generated test cases

26 Software System
Simple anti-missile system
Reads data that represent radar reflections
Decides whether the reflections come from an object that is a threat
Heavily parameterized conditions
Research Triangle Institute (RTI) had previously run a 3-version study on the same problem, so RTI supplied the requirement specification

27 Development Process
No overall software development methodology was imposed on the developers
They had to use Pascal and a specified compiler and operating system
Students were instructed about the experiment and N-versioning
Students were instructed not to discuss the project with each other
No restriction on reference sources

28 Requirement Specification
Answering questions was handled so as to remove the potential of transferring unintended information
General flaws in the specification were broadcast to all the programmers
Debugging: each student received 15 input and expected-output data sets
Acceptance test: 200 randomly generated test cases
Different data sets for each program, to avoid filtering out common faults

29 Acceptance Test
All 27 versions passed it
Success was evaluated against a "gold" program:
Written in FORTRAN for the RTI experiment
Has been evaluated on several million test cases
Considered to be highly accurate

30 Evaluation of Independence
1 million tests were run on the 27 versions and the gold program
15 computers were used between May and September 1984
Programmers had diverse backgrounds and expertise in software development

31 Time Spent on Development
Reading the requirement specification: 1-35 hours (avg. 5.4 hours)
Design: 4-50 hours
Debugging: 4-70 hours

32 Experimental Results
Failure: any difference between a version's output and the output of the gold program, or raising an exception
15 x 15 Boolean array; precision
High-quality versions (Table 1):
No failures: 6 versions
Successful on 99.9% of the tests: 23 versions

33 Multiple Failures
Table 2: common failures in versions from different universities

34 Independence
Two events, A and B, are independent if the conditional probability of A occurring given that B has occurred is the same as the probability of A occurring, and vice versa. That is, pr(A|B) = pr(A) and pr(B|A) = pr(B). Intuitively, A and B are independent if knowledge of the occurrence of A in no way influences the occurrence of B, and vice versa.
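A tiny numeric check of this definition against made-up joint failure counts (none of these numbers come from the experiment): compare pr(A and B) with pr(A) * pr(B).

    # Made-up counts over N test cases (illustrative only).
    N = 1_000_000
    fail_A = 900          # cases where version A failed
    fail_B = 1_100        # cases where version B failed
    fail_both = 60        # cases where both failed on the same input

    p_A, p_B, p_AB = fail_A / N, fail_B / N, fail_both / N

    print(f"pr(A)*pr(B) = {p_A * p_B:.2e}   (expected joint rate if independent)")
    print(f"pr(A and B) = {p_AB:.2e}")
    print(f"pr(A|B)     = {p_AB / p_B:.2e}  vs pr(A) = {p_A:.2e}")
    # Here pr(A and B) >> pr(A)*pr(B): the versions fail together far more often
    # than independence would predict.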

35 Evaluating Independence
Examining faults: are they correlated faults?
Examine the observed behavior of the programs (it does not matter why the programs fail; what matters is that they fail)
Use a statistical method to evaluate the distribution of failures (see the sketch below)
45 faults were detected, evaluated, and corrected in the 27 versions (Table 4)
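The statistical evaluation amounts to comparing the coincident failures actually observed with the number expected if versions failed independently; the sketch below illustrates that comparison with placeholder failure probabilities and counts (not the paper's data).

    import numpy as np

    # Placeholder per-version failure probabilities (NOT the experiment's data).
    p = np.array([0.0007, 0.0002, 0.0000, 0.0015, 0.0001, 0.0009, 0.0004, 0.0003])
    n_tests = 1_000_000
    observed_coincident = 120   # placeholder count of tests where >= 2 versions failed

    # Under independence, the chance that >= 2 versions fail on one test case:
    p_none = np.prod(1 - p)
    p_exactly_one = sum(p[i] * np.prod(np.delete(1 - p, i)) for i in range(len(p)))
    p_two_or_more = 1 - p_none - p_exactly_one

    expected_coincident = n_tests * p_two_or_more
    print(f"expected coincident failures under independence: {expected_coincident:.1f}")
    print(f"observed coincident failures:                    {observed_coincident}")
    # A large excess of observed over expected is evidence against independence.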

36 Faults
Non-correlated faults: unique to individual versions
Correlated faults: several occurred in more than one version
More obscure than non-correlated ones
Resulted from a lack of understanding of the related technology

37 Discussion
Programmers had diverse backgrounds and experience (even the most skilled programmer's code had several faults)
Program length: smaller than a real system, so the experiment does not sufficiently address inter-component communication faults
1 million test cases reflect about 20 years of operational time (at 1 execution per second)

38 Conclusion on Independent Failures
The independence assumption does NOT hold
The reliability of N-versioning may NOT be as high as predicted
Approximately half of the faults involved 2 or more programs
Either programmers make similar faults, or common faults are more likely to remain after debugging and testing
Do we need independence at the design level?

39 Next Class: Web Application Vulnerabilities

