CSCE 548 Secure Software Development
Independence in Multiversion Programming
Reading
This lecture:
- B. Littlewood, P. Popov, L. Strigini, "Modelling software design diversity - a review," ACM Computing Surveys, Vol. 33, No. 2, June 2001, pp. 177-208. http://portal.acm.org/citation.cfm?doid=384192.384195
- Software reliability: John C. Knight, Nancy G. Leveson, "An Experimental Evaluation of the Assumption of Independence in Multi-Version Programming." http://citeseer.ist.psu.edu/knight86experimental.html
Recommended:
- Nancy Leveson, "The Role of Software in Spacecraft Accidents," AIAA Journal of Spacecraft and Rockets, Vol. 41, No. 4, July 2004.
Modeling Software Design Diversity – A Review
Bev Littlewood, Peter Popov, and Lorenzo Strigini, Centre for Software Reliability, City University
All systems need to be sufficiently reliable:
- Required level of reliability
- Catastrophic failures
Need:
- Achieving reliability
- Evaluating reliability
Single-Version Software Reliability: The Software Failure Process
- Why does software fail?
- What are the mechanisms that underlie the software failure process?
- If software failures are "systematic," why do we still talk of reliability, using probability models?
Systematic failure: if a fault of a particular class has shown itself in certain circumstances, then it can be guaranteed to show itself whenever these circumstances are exactly reproduced.
Systematic failure in software systems: if a program failed once on a particular input case, it would always fail on that input case until the offending fault had been successfully removed.
Failure Process
- The system in its operational environment
- Real-time systems: failures over operating time
- Safety systems: the process of failed demands
- The failure process is not deterministic
- Software failures arise from inherent design faults
Demand Space
[Figure: the demand space, showing the subset DF of demands on which the program fails]
Uncertainty: which demand will be selected, and whether this demand will lie in DF
Source: Littlewood et al., ACM Computing Surveys
Predicting Future Reliability
- Steady-state reliability estimation: testing the version of the software that is to be deployed for operational use (sample testing)
- Reliability growth-based prediction: consider the series of successive versions of the software that are created, tested, and corrected, leading to the final version; extrapolate the trend of (usually) increasing reliability
(A minimal numeric sketch of both approaches follows.)
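As an illustration only (the numbers and function names below are assumptions, not taken from the survey), a minimal sketch of both prediction styles: a confidence bound from failure-free operational testing, and a crude extrapolation of a decreasing failure trend across successive versions.

```python
# Minimal sketch of two reliability-prediction styles; illustrative only.

def steady_state_upper_bound(num_failure_free_tests: int, confidence: float = 0.95) -> float:
    """Upper confidence bound on per-demand failure probability after observing
    num_failure_free_tests failure-free, operationally representative tests
    (for confidence=0.95 this is the classic 'rule of three', roughly 3/n)."""
    return 1.0 - (1.0 - confidence) ** (1.0 / num_failure_free_tests)

def growth_extrapolation(failures_per_version: list[int]) -> float:
    """Crude reliability-growth extrapolation: fit a geometric decay to the
    failure counts of successive corrected versions and predict the next count."""
    ratios = [b / a for a, b in zip(failures_per_version, failures_per_version[1:]) if a > 0]
    decay = sum(ratios) / len(ratios)
    return failures_per_version[-1] * decay

print(steady_state_upper_bound(1_000_000))      # about 3.0e-06
print(growth_extrapolation([40, 22, 11, 6]))    # about 3.2 failures expected in the next version
```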
Design Diversity
Design diversity has been suggested to:
- Achieve a higher level of reliability
- Assess the level of reliability
"Two heads are better than one."
- Hardware: redundancy
- Mathematical curiosity: can we make an arbitrarily reliable system from arbitrarily unreliable components?
- Software: diversity and redundancy
Software Versions
- Independent development
- Forced diversity
- Different types of diversity
Source: Littlewood et al., ACM Computing Surveys
N-Version Software
Use scenarios:
- Acceptance
- Recovery block
- N self-checking
Does Design Diversity Work?
Evaluation:
- Operational experience
- Controlled experimental studies
- Mathematical models
Issues:
- Applications with extreme reliability requirements
- Cost-effectiveness
Multi-Version Programming
N-version programming
Goal: increase fault tolerance
- Separate, independent development of multiple versions of the software
- Versions executed in parallel on identical input
- Identical output? Majority vote (see the voter sketch below)
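A minimal majority-voter sketch (the names and the exact-equality comparison are illustrative assumptions; real voters must also handle timeouts and inexact outputs):

```python
# Minimal sketch of an N-version run with majority voting; illustrative only.
from collections import Counter
from typing import Any, Callable, Sequence

def n_version_run(versions: Sequence[Callable[[Any], Any]], demand: Any) -> Any:
    """Run every version on the identical input and return the majority output.
    Raises if no single output is produced by a strict majority of versions."""
    outputs = [version(demand) for version in versions]      # identical input to all versions
    value, count = Counter(outputs).most_common(1)[0]
    if 2 * count <= len(versions):
        raise RuntimeError("no majority: the versions disagree")
    return value

# Three trivial "versions" of an absolute-value routine; the third is faulty.
versions = [abs, lambda x: x if x >= 0 else -x, lambda x: -x]
print(n_version_run(versions, 5))    # 5: the two correct versions outvote the faulty one
```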
Separate Development
At which point of software development does separation begin?
- Common form of the system requirements document; voting on intermediate data
- Ramamoorthy et al.: independent specifications in a formal specification language; mathematical techniques to compare specifications
- Kelly and Avizienis: separate specifications written by the same person in 3 different specification languages
Difficulties
- How to isolate versions
- How to design voting algorithms
Advantages of N-Versioning
Improve reliability
- Assumption: N different versions will fail independently
- Outcome: the probability of two or more versions failing on the same input is small (see the worked example below)
- If the assumption is true, the reliability of the system could be higher than the reliability of the individual components
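A worked example of that outcome (the per-version failure probability p = 10^-3 is an assumed number, not from the lecture): with 3 versions and majority voting, the system fails only when at least two versions fail on the same demand, so under independence

```latex
P(\text{system fails}) \;=\; 3\,p^{2}(1-p) + p^{3} \;\approx\; 3p^{2}
\qquad\text{e.g. } p = 10^{-3} \;\Rightarrow\; P(\text{system fails}) \approx 3\times 10^{-6}.
```

That is far better than any single version; this gain is exactly what the independence assumption buys.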
Is the assumption TRUE?
False?
- When solving difficult problems, people tend to make the same mistakes
- Common design faults
- Common failure mode analysis:
  - Mechanical systems
  - Software systems
- N-version-based analysis may overestimate the reliability of software systems!
1. How to achieve reliability?
2. How to measure reliability?
How to Achieve Reliability?
- Need independence
- Even small probabilities of coincident errors cause a substantial reduction in reliability (see the simulation sketch below)
- Risk of overestimating reliability
- Crucial systems: aircraft, nuclear reactors, railways
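A small simulation sketch of that effect (all probabilities here are invented for illustration): two versions that would each fail on one demand in a thousand, plus a small chance that a demand hits a fault common to both.

```python
# Minimal sketch: how a small coincident-failure probability swamps the
# prediction made under the independence assumption. Numbers are illustrative.
import random

P_INDEPENDENT = 1e-3   # each version's own failure probability per demand
P_COMMON = 1e-4        # probability a demand triggers a fault shared by both versions

def both_fail(rng: random.Random) -> bool:
    if rng.random() < P_COMMON:                    # common design fault: both versions fail together
        return True
    return rng.random() < P_INDEPENDENT and rng.random() < P_INDEPENDENT

rng = random.Random(0)
trials = 2_000_000
observed = sum(both_fail(rng) for _ in range(trials)) / trials
print(f"independence predicts ~{P_INDEPENDENT**2:.0e}, observed ~{observed:.0e}")
# Roughly 1e-06 predicted vs. 1e-04 observed: about 100 times less reliable than claimed.
```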
Testing of Critical Software Systems
Dual programming:
- Produce two versions of the software
- Execute them on a large number of test cases
- Output is assumed to be correct if both versions agree
- No manual or independent evaluation of correct output (expensive to do so)
- Assumption: it is unlikely that two versions contain identical faults over a large number of test cases
(A minimal test-harness sketch follows.)
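A minimal sketch of such a dual-programming (back-to-back) test harness; the two toy "versions" and the input range are illustrative assumptions:

```python
# Minimal sketch of back-to-back testing: two independently written versions are
# run on many random test cases, and any disagreement is flagged for inspection.
import random

def version_a(x: float) -> float:
    return x * x

def version_b(x: float) -> float:              # the second, independently developed version
    return x ** 2

def back_to_back(num_cases: int, seed: int = 0) -> list[float]:
    rng = random.Random(seed)
    disagreements = []
    for _ in range(num_cases):
        x = rng.uniform(-1000.0, 1000.0)        # randomly generated test case
        if version_a(x) != version_b(x):        # output assumed correct only if both agree
            disagreements.append(x)
    return disagreements

print(len(back_to_back(100_000)))               # 0 disagreements here, but agreement is not proof:
                                                # identical faults in both versions go undetected
```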
Voting
- Individual software versions may have low reliability
- Run multiple versions and vote on the "correct" answer
Additional cost:
- Development of multiple versions
- The voting process
Common assumption: low probability of common mode failures (identical, incorrect output generated from the same input)
Independence
- Assumed and not tested
- Two versions were assumed to be correct if their outputs for the test cases agreed
- This tests for common errors but not for independence
- Kelly and Avizienis: 21 related and 1 common fault (nuclear reactor project)
- Taylor: common faults in European practical systems
- Need evaluation/testing of independence
Experiment
- University of Virginia and University of California at Irvine
- Graduate and senior-level computer science students
- 27 programs (9 UVA, 18 UCI)
- 1 million randomly generated test cases
Software System
- Simple anti-missile system
- Reads data that represent radar reflections
- Decides whether the reflections come from an object that is a threat
- Heavily parameterized conditions
- Research Triangle Institute (RTI) had previously conducted a 3-version study on the same problem; RTI supplied the requirements specification
Development Process
- No overall software development methodology was imposed on the developers
- Developers had to use Pascal and a specified compiler and operating system
- Students were instructed about the experiment and N-versioning
- Students were instructed not to discuss the project with each other
- No restriction on reference sources
Requirements Specification
- Questions were answered by email to remove the potential of transferring unintended information
- General flaws in the specification were broadcast to all the programmers
- Debugging: each student received 15 input and expected-output data sets
- Acceptance test: 200 randomly generated test cases
- Different data sets were used for each program, to avoid filtering out common faults
Acceptance Test
- All 27 versions passed it
- Success was evaluated against a "gold" program:
  - Written in FORTRAN for the RTI experiment
  - Had been evaluated on several million test cases
  - Considered to be highly accurate
Evaluation of Independence
- 1 million tests were run on the 27 versions and the gold program
- 15 computers were used between May and September 1984
- Programmers had diverse backgrounds and expertise in software development
Time Spent on Development
- Reading the requirements specification: 1-35 hours (avg. 5.4 hours)
- Design: 4-50 hours (avg. 15.7 hours)
- Debugging: 4-70 hours (avg. 26.6 hours)
Experimental Results
- Failure: any difference between a version's output and the gold program's output, or raising an exception (see the comparison sketch below)
- Output includes a 15 x 15 Boolean array; real values are compared only to a finite precision
- High-quality versions (Table 1):
  - No failures: 6 versions
  - Successful on 99.9% of the tests: 23 versions
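A sketch of that failure criterion (the output layout, a 15 x 15 Boolean array plus real values, and the tolerance value are assumptions inferred from this slide, not the paper's actual code):

```python
# Minimal sketch of the per-test-case failure decision described on this slide.
EPS = 1e-6   # assumed comparison tolerance for real-valued outputs ("precision")

def version_fails(run_version, run_gold, test_case) -> bool:
    """A version fails a test case if it raises an exception or if its output
    differs from the gold program's output (reals compared to finite precision)."""
    try:
        booleans, reals = run_version(test_case)          # e.g. a 15 x 15 Boolean array + reals
    except Exception:
        return True                                       # raising an exception counts as failure
    gold_booleans, gold_reals = run_gold(test_case)
    if booleans != gold_booleans:                         # Boolean results must match exactly
        return True
    return any(abs(a - b) > EPS for a, b in zip(reals, gold_reals))
```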
Multiple Failures (Table 2)
Common failures occurred in versions from different universities
Independence
Two events, A and B, are independent if the conditional probability of A occurring given that B has occurred is the same as the probability of A occurring, and vice versa. That is, pr(A|B) = pr(A) and pr(B|A) = pr(B). Intuitively, A and B are independent if knowledge of the occurrence of A in no way influences the occurrence of B, and vice versa.
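In symbols, together with the direct consequence for coincident failures (this follows from the definition above when P(A) and P(B) are nonzero; no new assumptions):

```latex
\Pr(A \mid B) = \Pr(A) \quad\text{and}\quad \Pr(B \mid A) = \Pr(B)
\;\Longleftrightarrow\;
\Pr(A \cap B) = \Pr(A)\,\Pr(B).
```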
Evaluating Independence
- Examining faults: are faults correlated?
- Examine the observed behavior of the programs (it does not matter why the programs fail; what matters is that they fail)
- Use a statistical method to evaluate the distribution of failures (a sketch of such a comparison follows)
- 45 faults were detected, evaluated, and corrected in the 27 versions (Table 4)
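A sketch of the kind of comparison this involves (not the paper's actual statistic; the failure rates and counts below are invented): estimate each version's failure probability from the test runs, compute how many multi-version failures the independence assumption would predict, and compare with the number actually observed.

```python
# Minimal sketch of testing the independence assumption against observed behavior.
from itertools import combinations

def expected_coincident_failures(per_version_rates: list[float], num_tests: int) -> float:
    """Approximate number of test cases on which two or more versions would fail
    if failures were independent (a union bound over version pairs, accurate
    when the individual failure probabilities are small)."""
    pairwise = sum(p * q for p, q in combinations(per_version_rates, 2))
    return num_tests * pairwise

# Invented numbers: 27 versions, each failing on about 0.07% of 1 million tests.
rates = [0.0007] * 27
expected = expected_coincident_failures(rates, 1_000_000)
print(f"expected under independence: ~{expected:.0f} coincident-failure test cases")
# If the observed count is far larger than this, the independence assumption is rejected.
```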
Faults
- Non-correlated faults: unique to individual versions
- Correlated faults: several occurred in more than one version
  - More obscure than non-correlated ones
  - Resulted from a lack of understanding of the related technology
Discussion
- Programmers: diverse background and experience (the most skilled programmer's code had several faults)
- Program length: smaller than a real system; does not sufficiently address inter-component communication faults
- 1 million test cases reflect about 20 years of operational time (1 execution/second)
Conclusion on Independent Failures
- The independence assumption does NOT hold
- The reliability of N-versioning may NOT be as high as predicted
- Approximately half of the faults involved 2 or more programs
  - Either programmers make similar faults
  - Or common faults are more likely to remain after debugging and testing
- Need independence at the design level?
Next Class: Web Application Vulnerabilities