Voting System Software Assurance: SAMATE Automated Source Code Conformance Verification Study
Michael Kass, Computer Scientist, Software and Systems Division, ITL
Software Assurance
National Information Assurance Glossary definition: “The level of confidence that software is free from vulnerabilities, either intentionally designed into the software or accidentally inserted at any time during its lifecycle, and that the software functions in the intended manner.” [1] “National Information Assurance Glossary”; CNSS Instruction No. 4009
Assurance Through Source Code Analysis
Static source code analysis can identify weaknesses and vulnerabilities in source code that could compromise a voting system’s security, availability, integrity, and privacy
Many open source and commercial tools are available
Used during both development and assessment
The Problem
Today, Voting System Test Laboratories (VSTLs) are not leveraging automated tools to verify voting source code against the Voluntary Voting System Guidelines (VVSG) “Software Design and Coding Standards.” Human analysis alone can result in:
Discrepancies in VSTL assessment repeatability and accuracy
Increased test lab assessment time and cost
A Solution
The NIST Software Assurance Metrics And Tool Evaluation (SAMATE) team can assist the VSTLs in automating source code conformance verification by:
Customizing freely available tools that verify source code conformance to VVSG coding requirements
Verifying tool effectiveness through testing
Current Assurance Work
The SAMATE project is:
Automating verification of source code conformance to 2005 VVSG software requirements
Building an assurance case for open-ended vulnerability testing of voting systems
SAMATE Background
A U.S. Department of Homeland Security (DHS) and NIST co-sponsored effort to measure the effectiveness of software assurance tools (specifically source code analyzers) through testing against a corpus of source code examples (large and small) with known software weaknesses and vulnerabilities
SAMATE Reference Dataset (SRD)
Over 70,000 online source code analysis tool effectiveness tests
Across 125 known software weaknesses
Indexed against the Common Weakness Enumeration (CWE), an online dictionary of weaknesses in software
SAMATE Tool Test Example
#include <stdio.h>
#include <dlfcn.h>

int main(int argc, char **argv)
{
    void *linuxHandle;
    linuxHandle = dlopen("libm.so", RTLD_LAZY);  /* bad: dynamically loaded code */
    if (!linuxHandle) {
        fprintf(stderr, "%s\n", dlerror());
        return 1;
    }
    return 0;
}
“Bootstrapping” an Automated Source Code Verification Capability for VSTLs
SAMATE customized two open source tools against VVSG 2005, Volume 1, Section 5.2, “Software Design and Coding Standards,” and verified tool effectiveness through testing
VVSG Tool Customization and Testing Specifics
SAMATE identified 49 software design and coding requirements in VVSG 2005 covering: Software integrity Software modularity and programming Control constructs Naming conventions Comment conventions
VVSG-Customization of Tools
SAMATE evaluated current capabilities of freely available source code analysis tools for potential use in VVSG source code conformance verification against those 49 VVSG requirements
Tool Selection Criteria
SAMATE specifically looks for tools that: Are freely available Are extensible, allowing tool customization for VVSG-specific requirements Have a relaxed licensing agreement (for re-distribution of SAMATE customization) Provide an Abstract Syntax Tree (AST) traversal mechanism
Initial Tool Selection
PMD (a Java code analysis tool) Open source tool with BSD license Runs in Windows/Linux environments Focuses on finding “bugs” in source code
Initial Tool Selection (continued)
Compass/ROSE (a C/C++ code analysis tool) A research project of Lawrence Livermore National Laboratory (LLNL) An open source compiler infrastructure to build source-to-source program transformation and analysis tools for Fortran, C/C++ and other languages
Methodology for Automation
Classify VVSG requirements by automation potential (complete, partial, or none)
Build generic tool search rules for the requirements
Create example test code to verify that the tools function correctly by “seeding” the code with non-conformant constructs
Run the tools against the example test code to verify tool correctness
Document why each requirement could (or could not) be automated
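The “seeding” step above can be sketched with a minimal Java test file. Here the seeded non-conformance is a switch statement without a default label (one of the violations reported later in this deck); the class name, method, and values are illustrative assumptions, not actual SAMATE test material:

```java
// Illustrative "seeded" test file: the switch below intentionally lacks a
// default label, the kind of non-conformance a customized tool should flag.
public class DefaultCaseTest {

    // Maps a day number to a label; day numbers outside 1-7 fall through
    // the switch silently because the default label is missing.
    public static String dayType(int day) {
        String type = "unknown";
        switch (day) {  // non-conformant: no default label
            case 6:
            case 7:
                type = "weekend";
                break;
            case 1:
            case 2:
            case 3:
            case 4:
            case 5:
                type = "weekday";
                break;
        }
        return type;
    }

    public static void main(String[] args) {
        System.out.println(dayType(3));  // weekday
        System.out.println(dayType(9));  // unknown
    }
}
```

Because the file is tiny and the violating line is known in advance, the tool’s report can be checked mechanically against the expected file name and line number.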
Requirement Classification
Requirements were grouped into six types:
1) Completely automatable
2) Completely automatable, but requires customization to the voting-system-specific coding style or API
3) Automatable, but not with the tools used in this study
4) Partially automatable: a tool could “point” to potential non-conformance, but human analysis is required for verification
5) Not automatable
6) Requirement not applicable to a particular language
Tool Effectiveness Tests
SAMATE wrote source code examples in C and Java to verify tool correctness in reporting non-conformance
Each test is a small (30 lines or fewer) source code program containing constructs that violate a VVSG code workmanship requirement
When scanned by a source code analysis tool, the test file elicits a report indicating the filename and line number where the non-conformant construct was found
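As a second illustrative sketch (not an actual SRD test file), a small Java program seeded with a “dynamically loaded code” violation, a requirement that appears in the report excerpt below, might look like this; the class and method names are hypothetical:

```java
// Hypothetical seeded test file: Class.forName loads a class by name at
// runtime, i.e. the dynamically loaded code that the VVSG prohibits.
public class DynamicallyLoadedCodeTest {

    // Attempts to load the named class dynamically and returns its name,
    // or "not found" if loading fails. The Class.forName call is the
    // seeded non-conformant construct a tool should flag.
    public static String loadAndName(String className) {
        try {
            Class<?> c = Class.forName(className);  /* bad: dynamic loading */
            return c.getName();
        } catch (ClassNotFoundException e) {
            return "not found";
        }
    }

    public static void main(String[] args) {
        System.out.println(loadAndName("java.util.ArrayList"));
    }
}
```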
Tool Reports
Tools (run in command-line mode) report the file path and name, the line number of the non-conformance, and an error message:
tool_tests\java\AssertStatements.java:26 Assert statements should be absent from a production compilation
tool_tests\java\AssertStatements.java:37 Assert statements should be absent from a production compilation
tool_tests\java\DefaultCase.java: Switch statements should have a default label
tool_tests\java\DefaultCase.java: Switch statements should have a default label
tool_tests\java\DynamicallyLoadedCode.java: Dynamically loaded code is prohibited
tool_tests\java\ExceptionAsFlowControl.java:31 Avoid using exceptions as flow control.
tool_tests\java\ExceptionAsFlowControl.java:36 Avoid using exceptions as flow control.
Reports were verified against expected results
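The report format above (“path:line message”, with the line number sometimes omitted) is simple enough to check mechanically against expected results. The following is a small illustrative parser, assumed rather than taken from any SAMATE tooling:

```java
// Illustrative parser for report lines of the form "path:line message";
// some tools omit the line number, leaving "path: message".
public class ReportLine {
    public final String file;     // e.g. tool_tests\java\DefaultCase.java
    public final int line;        // -1 when the tool omits a line number
    public final String message;  // e.g. "Switch statements should have a default label"

    public ReportLine(String raw) {
        int sp = raw.indexOf(' ');                // location ends at first space
        String location = raw.substring(0, sp);
        this.message = raw.substring(sp + 1).trim();
        int colon = location.lastIndexOf(':');    // "file:line" or bare "file:"
        this.file = location.substring(0, colon);
        String num = location.substring(colon + 1);
        this.line = num.isEmpty() ? -1 : Integer.parseInt(num);
    }
}
```

Comparing parsed (file, line, message) triples against a list of expected findings is one way to automate the “verified against expected results” step.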
“Sanity Checking”
Creating both the customized tools and the tool tests produced a feedback loop that verified our understanding of the semantics of the VVSG requirements
That feedback also exposed questions about the semantics of some requirements that require additional clarification by the EAC
Results of our Study to Date
VVSG Software Design Coding Standards Requirement Classification (counts by language: Java / C)

Completely automatable:
1) A generic search rule can be written to identify all instances of non-conformance in source code — Java: 14, C: 15
2) A custom search rule specific to the coding style of each voting system can be written to identify all instances of non-conformance — Java: 4, C: 6
3) This tool cannot, but another tool could, verify this requirement — Java: 8, C: 10

Partially automatable (requires additional human analysis to verify):
4) A generic rule can “point to” a possible conformance violation, but human analysis is required to verify it — 3

Not automatable:
5) No tool can verify conformance to the requirement (requires 100% human analysis) — 11
6) The requirement is not relevant to the programming language
Other Possible Languages and Tools
C/C++: commercial tools
Java: FindBugs, Checkstyle
C#: StyleCop, FxCop, commercial tools
VB.NET: FxCop, commercial tools
COBOL: commercial tools only
Trade names and company products are mentioned or identified in this presentation. In no case does such identification imply recommendation or endorsement by the National Institute of Standards and Technology, nor does it imply that the products are necessarily the best available for the purpose.
Summary
The majority of VVSG 1.0 coding convention requirements can be fully or partially verified through tool automation
SAMATE created a “bootstrap” automated conformance verification capability (for C and Java) that could cover the majority of VVSG 1.0 coding requirements
Other tools could “fill the gaps” for the remaining requirements
A few VVSG 1.0 requirements were not relevant to the programming languages studied
A few VVSG 1.0 requirements are simply not verifiable by automated tools
Follow-On
The NIST SAMATE team met with the Wyle and SLI VSTLs in June 2011
VSTLs were supportive of NIST providing guidance and automated tooling
Labs acknowledged that source code analysis is one of the most expensive and resource-intensive parts of their work
A beta demonstration of the customized tools raised questions regarding the semantics of VVSG coding requirements
Labs suggested a “roundtable” discussion among NIST, the labs, and voting system manufacturers regarding automated verification of VVSG coding requirements
Discussion/Questions