A Vision for the Testing of Election Systems in a HAVA World
Eric Lazarus
How Do We Rate a Testing Capability?
– Transparency
– Identification
– Recommendation
– Cost-effectiveness
– Broad coverage:
  – Reliability
  – Accessibility
  – Usability
  – Security
– Encourages high-value innovation
– Picks the correct structure given success
This is a tough problem
– 1983 Turing Award Lecture: Ken Thompson showed that conventional methods will fail
– A Trojan horse can live in a compiler, linker, loader, interpreter, microcode, BIOS, hardware…
– Testing is hard and limited (see the sketch below)
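The point is easiest to see in miniature. Below is a toy sketch, in Python, of the idea behind Thompson's attack; the "compiler" is just a string transformer, and every name in it is hypothetical, invented purely for illustration:

```python
# Toy sketch of the "trusting trust" idea (not Thompson's actual code).
# The "compiler" is a string transformer; the "binary" is just text.

BACKDOOR = '    if user == "attacker": return True  # injected\n'

def compromised_compile(source: str) -> str:
    """Stand-in for compilation: returns the compiled 'binary'."""
    if "def check_login" in source:
        # Silently inject a backdoor whenever the login program is compiled.
        out = []
        for line in source.splitlines(keepends=True):
            out.append(line)
            if line.lstrip().startswith("def check_login"):
                out.append(BACKDOOR)
        return "".join(out)
    # In the full attack, compiling the compiler's own clean-looking source
    # re-inserts this injection logic, so reviewing the source of the login
    # program OR the compiler reveals nothing.
    return source

login_source = (
    "def check_login(user, password):\n"
    "    return verify(user, password)\n"
)
# The login source contains no backdoor, but its "binary" does:
print(compromised_compile(login_source))
```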
Types of Testing
– Acceptance/qualification testing
– Code inspections/code walkthroughs
– Concurrency testing
– Data table testing
– Disability access testing (a variation of usability testing)
– Installation testing
– Integration testing
– Legal validation/verification (validate legal requirements, then verify they are met)
– Load/stress testing
– Performance testing (test response times)
– Recovery testing
– Regression testing
– Reliability testing
– Scalability testing (a variation of load/stress testing)
– Security testing/penetration testing
– Spike testing (a variation of load/stress testing)
– Uninstallation testing (a variation of installation testing)
– Unit testing (a minimal example follows this list)
– Upgrade/patch testing (a variation of installation testing)
– Usability testing
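To make the most basic of these concrete, here is a minimal unit-test sketch in Python; the tally function and its expected behavior are hypothetical, chosen only to illustrate the form:

```python
import unittest

def tally(ballots):
    """Hypothetical unit under test: count votes per candidate name."""
    counts = {}
    for b in ballots:
        counts[b] = counts.get(b, 0) + 1
    return counts

class TallyUnitTests(unittest.TestCase):
    def test_empty_ballot_box(self):
        self.assertEqual(tally([]), {})

    def test_counts_per_candidate(self):
        self.assertEqual(tally(["A", "B", "A"]), {"A": 2, "B": 1})

if __name__ == "__main__":
    unittest.main()
```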
Applied Common Sense
– Vision is not hard to come by
– Create a vision:
  – What are the questions?
  – What are common-sense answers?
– Bring together smart people to think about the obvious vision
Q: States Testing Independently?
– Go it alone, or…
– A voluntary consortium of states?
Q: States Testing Independently?
A voluntary consortium of states can:
– Hire more and/or better people
– Save money on duplicated effort
– Better share knowledge gained in:
  – Product evaluation
  – Use
Q: Who should pay for it?
– Not vendor-funded, as with the ITA system:
  – Conflict of interest
  – Barrier to new entrants
– Pooled state election money
– What about others, including:
  – Political parties
  – Good-government groups
  – Civil rights groups
  – Academic institutions
Q: Big-Bang or Continuous?
– Continuous testing is like getting regular checkups
– The Nevada Gaming Control Board takes machines out of service for testing
Q: White Box or Black Box?
– Why handicap our testers by withholding source code?
– We want to find bugs, and source code review is good for this
– Every branch must be run, and there are far too many combinations to realistically cover in voting system software (see the sketch below)
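The combinatorics behind that last point fit in a few lines. Assuming n independent two-way branches (a simplification), the number of distinct execution paths is 2^n:

```python
# Why exhaustively exercising every combination of branches is hopeless:
# n independent two-way branches yield 2**n execution paths.
for n in (10, 50, 100):
    print(f"{n} branches -> {2**n:,} paths")
# 10 branches  -> about a thousand paths (testable)
# 50 branches  -> about 10**15 paths (not realistically testable)
# 100 branches -> about 10**30 paths (utterly hopeless)
```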
Q: Partisans Included?
– Brennan Center for Justice projects have worked both ways
– Working with people on both sides of debates has brought out insights
– Smart and knowledgeable people are important, and such people often have opinions
Q: Team must have…
– An understanding of election processes
– An understanding of computer security techniques
– Testing experience in other domains
– Background from other industries, including gaming
– An international perspective
– A heterogeneous team is how you find problems
Q: Product Roadmap
– Can election officials impact product direction via a consortium?
Q: Consortium Services?
– What can they offer?
Q: Develop Risk Models
– Testing should be driven by a clear view of the risks it is attempting to address:
  – "We might buy a machine that is not as accessible as we are told."
  – "…not as secure."
  – "…not as reliable."
  – "…not as easy to administer."
– These models are good to develop and maintain jointly
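One way a consortium might maintain such a model jointly is as a simple risk register mapping each risk to the tests meant to address it. A minimal sketch, with invented field names and pairings:

```python
# Hypothetical risk register: each entry links a procurement risk to the
# kinds of testing (from the earlier list) that address it.
risks = [
    {"risk": "Machine is not as accessible as we are told",
     "tests": ["Disability access testing", "Usability testing"]},
    {"risk": "Machine is not as secure as we are told",
     "tests": ["Security/penetration testing", "Code inspection"]},
    {"risk": "Machine is not as reliable as we are told",
     "tests": ["Reliability testing", "Load/stress testing"]},
    {"risk": "Machine is not as easy to administer as we are told",
     "tests": ["Installation testing", "Usability testing"]},
]

for r in risks:
    print(f"{r['risk']}: covered by {', '.join(r['tests'])}")
```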
Q: Shared Repository of Knowledge?
– What was learned during testing?
– What was learned in use?
– What procedures work well with this technology?
– Model: Information Sharing and Analysis Centers (ISACs), e.g., the Financial Services Information Sharing and Analysis Center
Q: Evaluating Election Procedures?
– Could this same team evaluate procedure manuals?
– It should be able to evaluate procedures against best practices
Q: Testing When?
– Product evaluation
– Certification
– Acceptance
– Logic & accuracy
– Continuous
Q: Other services?
– Negotiate joint purchasing agreements (like the GSA Schedule) for:
  – Products
  – Services
– Transparency: arrange for systems to be purchasable by responsible organizations
– Encourage innovation by:
  – Adhering to open standards
Q: Make policy?
– Should such consortia of states do testing and provide testing information, or should they take on a policy-making role?
  – I have been assuming that the staff would make no policy but only provide the results of their tests. They would not, for example, certify or decertify machines; they would report on the results of testing.
So one vision emerges
– Multiple states group into a consortium (or two)
– It has its own staff and/or consultants, small contractors, and academics
– It performs testing for:
  – Usability
  – Security
– It evaluates:
  – Procedures
  – New technology
  – Cost
Does this make sense?
– I am very interested in collaborating on a proposal to create a consortium
– How can we improve this vision?
– Please contact me if you want to work on this
Testing is not an end in itself
[Diagram: Current State → GOAL = Improved Elections, via testing, skills, authority, resources, and commitment]
Illustration: Gaming – What's Different?
– A ladder of trust with signed firmware at the bottom
– Multiple people with different keys
– Field trials as part of certification
– Random hash comparison in the field every two years (see the sketch below)
– Auditing the auditors
– Certification done by government employees willing to share/discuss their methods
– Post-employment restrictions on working for vendors
– Penalties for messing up
– An assumption of cheating
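The field hash comparison is conceptually simple. A minimal sketch, assuming the installed firmware image is readable and its reference digest was published at certification time (the path and digest below are placeholders, not real values):

```python
import hashlib

# Placeholder values: substitute the real image path and the digest
# published by the certifying authority.
FIRMWARE_PATH = "/firmware/voting.bin"
REFERENCE_SHA256 = "<digest published at certification time>"

def firmware_digest(path: str) -> str:
    """Compute the SHA-256 digest of the installed firmware image."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

if firmware_digest(FIRMWARE_PATH) != REFERENCE_SHA256:
    print("MISMATCH: installed firmware differs from the certified image")
else:
    print("OK: firmware matches the certified image")
```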