Vulnerability Scanning
Michael Overton, Jason Ferris, Erik Brown
Scanners Used
Nessus
◦ Covered the most CVEs, but missed some things
SARA
◦ Only gave a subset of Nessus’ results
X-Scan
◦ Also only a subset of Nessus’ results
ISS
◦ Not particularly useful (though we only ran the trial version)
Retina
◦ Gave a lot of results
◦ Little intersection with the others
Network Scanned
Small private network
Benefits:
◦ Feasible to use trial-version software
◦ Viable simulation of a larger network: several machines ran from the same hard-disk image
Issues:
◦ Hard to gather statistically significant data
Reporting Methodology
Compilation of scan results done by hand
◦ No team member was particularly skilled in a suitable scripting language
◦ The small number of reports made hand compilation feasible, but it quickly became apparent that this method would not scale well (a possible automation is sketched below)
Sorted final results both by majority vote and by severity rating
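A minimal sketch of how this compilation step could be scripted, assuming each scanner can export a plain-text report; the file names are hypothetical:

```python
# Pull CVE IDs out of each scanner's exported report with a regex.
import re
from pathlib import Path

CVE_RE = re.compile(r"CVE-\d{4}-\d{4,}")

def cves_in_report(path: Path) -> set:
    """Return the set of CVE IDs mentioned anywhere in one report."""
    return set(CVE_RE.findall(path.read_text(errors="ignore")))

# One exported report per scanner (hypothetical file names).
reports = {
    "Nessus": Path("nessus.txt"),
    "Retina": Path("retina.txt"),
    "X-Scan": Path("xscan.txt"),
    "SARA":   Path("sara.txt"),
    "ISS":    Path("iss.txt"),
}
findings = {scanner: cves_in_report(p) for scanner, p in reports.items()}
```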
Majority Voting
Compiled the list of CVEs found by each scanner
Re-ordered the report to indicate which CVEs were reported by the largest number of scanners (a tallying sketch follows)
Top five (detection matrix: CVE × Retina, Nessus, X-Scan, SARA, ISS; specific CVE IDs not recoverable from the extracted slide):
◦ Three CVEs reported by four of the five scanners
◦ Two CVEs reported by two of the five scanners
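A sketch of the majority-vote ordering, assuming findings maps each scanner to the set of CVE IDs it reported (as built in the sketch above); the IDs shown are illustrative placeholders, not the report's actual findings:

```python
from collections import Counter

# Illustrative stand-in for the parsed reports; IDs are placeholders.
findings = {
    "Nessus": {"CVE-2004-0001", "CVE-2004-0002"},
    "Retina": {"CVE-2004-0001", "CVE-2004-0003"},
    "X-Scan": {"CVE-2004-0001"},
}

# Count how many scanners reported each CVE, most-reported first.
votes = Counter(cve for cves in findings.values() for cve in cves)
for cve, count in votes.most_common(5):
    reporters = [s for s, cves in findings.items() if cve in cves]
    print(f"{cve}: {count} scanner(s) ({', '.join(reporters)})")
```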
Severity Rating
Cross-correlated CVEs with their CVSS base scores
Nessus and Retina predominantly covered the top five
Top five (table of CVE, CVSS base score, and per-scanner detections; specific IDs, scores, and marks not recoverable from the extracted slide):
◦ Most of the top five were flagged by only one or two scanners, predominantly Nessus and Retina
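A sketch of the severity ordering under the same assumptions; the vote counts and CVSS base scores below are made-up placeholders (in practice the scores would be looked up in the NVD):

```python
# Hypothetical vote counts and CVSS base scores, for illustration only.
votes = {"CVE-2004-0001": 3, "CVE-2004-0002": 2, "CVE-2004-0003": 1}
cvss_base = {
    "CVE-2004-0001": 7.5,
    "CVE-2004-0002": 10.0,
    "CVE-2004-0003": 5.0,
}

# Order the findings by severity rather than by vote count.
by_severity = sorted(votes, key=lambda c: cvss_base.get(c, 0.0), reverse=True)
for cve in by_severity[:5]:
    print(f"{cve}: CVSS {cvss_base.get(cve, 0.0):.1f}, {votes[cve]} scanner(s)")
```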
Metasploit
Because of the small size of the network, the number of possible exploits was limited
Many required user interaction or previously established host access
Set up, but did not use, a Samba exploit (a reachability check is sketched below)
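The report does not say how the Samba exploit was staged; as a minimal, non-intrusive sketch, one might first confirm SMB is even listening on the target. The address below is a hypothetical lab host, and an open port indicates reachability, not exploitability:

```python
import socket

TARGET = "192.168.1.10"  # hypothetical lab host

# NetBIOS session service and SMB over TCP.
for port in (139, 445):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(2.0)
        reachable = s.connect_ex((TARGET, port)) == 0
        print(f"{TARGET}:{port} {'open' if reachable else 'closed/filtered'}")
```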
Conclusions
Nessus and Retina seemed to be the best of the scanners we used
Many scanners seemed to focus on vulnerabilities the other scanners did not detect, so covering most vulnerabilities required running several scanners (the overlap could be quantified as sketched below)
Many frivolous “vulnerabilities” were reported, making it difficult to extract useful results
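One way to make the low-overlap claim concrete is pairwise Jaccard similarity between the scanners' CVE sets; values near zero mean largely disjoint findings. The findings structure is the same illustrative placeholder used in the earlier sketches:

```python
from itertools import combinations

# Illustrative stand-in for the parsed reports; IDs are placeholders.
findings = {
    "Nessus": {"CVE-2004-0001", "CVE-2004-0002"},
    "Retina": {"CVE-2004-0003"},
    "SARA":   {"CVE-2004-0001"},
}

# Jaccard similarity: |intersection| / |union| for every scanner pair.
for a, b in combinations(findings, 2):
    union = findings[a] | findings[b]
    jaccard = len(findings[a] & findings[b]) / len(union) if union else 0.0
    print(f"{a} vs {b}: Jaccard similarity {jaccard:.2f}")
```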