Looking at the big picture on vulnerabilities


Looking at the big picture on vulnerabilities
Eric Rescorla, Network Resonance, Inc.
11/23/2005

The big picture
- All software has bugs
  - Security-relevant software has security bugs
- Security is a problem in any multi-user system
  - Or any networked computer
  - … and almost all machines now are networked
- How do we choose appropriate behaviors? Policies? Appropriate levels of investment?
- The standard tool is cost/benefit analysis
  - But this requires data

Talk Overview
- Vulnerability life cycle
- Empirical data on discovery
- Empirical data on patching
- Big unknown questions
- Looking beyond computer security

Life cycle of a typical vulnerability
[Timeline figure. Milestones: Introduction, Discovery, Publication/Fix Release, Exploit Creation, All Units Fixed. Intervals: Latency, Fix Dev't, Attacks, Patching.]

Common Assumptions (I)
- Initial vulnerability count (V_i)
  - Roughly linear in lines of code (l)
  - Some variability due to programmer quality
- Rate of discovery (V_d)
  - Somehow related to code quality: dV_d/dV_i > 0, d²V_d/dV_i² < 0 ???
  - This relationship is not well understood: popularity? attacker tastes? closed/open source?
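To make the shape of these assumptions concrete, here is a tiny illustrative sketch. The functional forms and constants are invented purely for illustration (the slide itself stresses that the real relationship is not well understood); all that is taken from the slide is "V_i roughly linear in LOC" and "V_d increasing but concave in V_i".

```python
# Illustrative sketch only: made-up parameters, not measurements.
# V_i grows roughly linearly with lines of code; the discovery rate V_d
# increases with V_i but with diminishing returns (dV_d/dV_i > 0, d2V_d/dV_i2 < 0).

def latent_vulns(lines_of_code, defect_density=0.5e-3):
    """Initial vulnerability count V_i, assumed ~linear in LOC (density is a placeholder)."""
    return defect_density * lines_of_code

def discovery_rate(v_i, k=2.0):
    """One concave choice satisfying the assumptions: V_d proportional to sqrt(V_i)."""
    return k * v_i ** 0.5

for loc in (100_000, 1_000_000, 10_000_000):
    v_i = latent_vulns(loc)
    print(f"{loc:>10} LOC -> V_i ~ {v_i:6.0f}, V_d ~ {discovery_rate(v_i):6.1f} (arbitrary units)")
```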

Common Assumptions (II)
- Vulnerabilities with exploits (V_e)
  - Some subset of discovered vulnerabilities
  - But who knows what subset? And on what timeframe?
  - Anecdotal evidence indicates the timeframe is getting shorter
- Vulnerabilities used in attacks (V_a)
  - Somehow scales with V_e, but again how?
  - And what controls the attack rate (A)?
- Loss due to attacks (L)
  - Somehow scales with A

The intuitive model
- Running bad software places you at risk
  - More latent vulnerabilities means more discovered vulnerabilities, which means more attacks
- Vendor quality improvement reduces risk
  - Converse of the above
- Release of vulnerabilities increases risk
  - More attack sites
- But patching decreases risk
  - Fewer vulnerabilities means fewer attacks

Empirical data on discovery
- Question: is finding vulns socially useful?
- Benefits
  - Pre-emption: “Find and fix vulnerabilities before the bad guys do”
  - Incentives for vendors
  - Research
- Costs
  - New tools for attackers (most attacks are based on known problems)
  - Cost to develop fixes, and costs to install them
  - The research itself is expensive

Two discovery scenarios
- White Hat Discovery (WHD)
  - Vulnerability found by a good guy
  - Follows the normal disclosure procedure
- Black Hat Discovery (BHD)
  - Vulnerability found by a bad guy
  - Bad guy exploits it

White Hat Discovery Time Course
[Figure: timeline of events following a white hat discovery]

Black Hat Discovery Time Course
[Figure: timeline of events following a black hat discovery]

Cost/benefit analysis
- WHD is clearly better than BHD
  - Cost difference: C_BHD – C_WHD = C_priv
  - If we have a choice between them, choose WHD
- Say we’ve found a bug: should we disclose?
  - The bug may never be rediscovered
  - Or rediscovered by a white hat
  - … or discovered much later

Finding the best option
- Probability of rediscovery = p_r
- Ignore cost of patching
- Assume that all rediscoveries are BHD (conservative)

                      Disclose    Not Disclose
  Rediscovered        N/A         C_pub + C_priv
  Not Rediscovered    C_pub       0
  Expected Value      C_pub       p_r (C_pub + C_priv)

Key question: probability of rediscovery
- Disclosure pays off if p_r (C_pub + C_priv) > C_pub
  - Disclosure is good if p_r is high
  - Disclosure is bad if p_r is low
- C_pub and C_priv are hard to estimate
- But we can try to measure p_r
  - This gives us bounds on the values of C_pub and C_priv for which disclosure is good
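As a quick worked illustration of the inequality above, with placeholder cost numbers rather than real estimates:

```python
# Worked version of the slide's comparison, with placeholder costs (arbitrary units):
# disclosing costs C_pub for sure, while staying silent costs C_pub + C_priv
# only if the bug is rediscovered (probability p_r).

c_pub, c_priv = 1.0, 4.0                      # placeholders, not estimates
breakeven = c_pub / (c_pub + c_priv)          # disclosure pays off when p_r exceeds this

for p_r in (0.05, 0.25, 0.50):
    ev_silent = p_r * (c_pub + c_priv)        # expected cost of not disclosing
    verdict = "disclose" if ev_silent > c_pub else "stay silent"
    print(f"p_r = {p_r:.2f}: E[cost | no disclosure] = {ev_silent:.2f} "
          f"vs C_pub = {c_pub:.2f} -> {verdict}")
print(f"breakeven p_r = {breakeven:.2f}")
```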

A model for p_r
- Assume a program has N vulnerabilities, of which F are eventually found
  - And all bugs are equally likely to be found
  - This is a big assumption and probably not entirely true
- Each bug has an F/N probability of being found
- Say you find a bug b
  - Probability of rediscovery p_r <= F/N
- This model is easily extended to be time dependent
  - Assuming we know the rate of discovery as a function of time
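A minimal sketch of that bound, under the slide's own assumptions (N latent bugs, F eventual finds, equal likelihood per bug); the numbers are hypothetical:

```python
# p_r bound from the slide: with N latent bugs, F of which are eventually found,
# and every bug equally likely to be found, the chance that the particular bug you
# found is rediscovered is <= F/N.  The time-dependent version replaces F with the
# cumulative number of finds by time t.

def p_r_bound(found_eventually, latent_total):
    return found_eventually / latent_total

def p_r_bound_by_time(cumulative_finds, latent_total):
    """cumulative_finds: total bugs found by the end of each period."""
    return [f / latent_total for f in cumulative_finds]

# Hypothetical numbers purely for illustration:
print(p_r_bound(found_eventually=40, latent_total=400))           # 0.1
print(p_r_bound_by_time([5, 12, 20, 30, 40], latent_total=400))   # rising toward 0.1
```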

Outline of the experiment
- Collect data on the rate of bug discovery
- Use that data to model the rate of bug finding
  - Using standard reliability techniques
- Estimate p_r(t)
  - Increased reliability over time implies high p_r(t)
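The slide only says "standard reliability techniques"; one common choice for this kind of fit is an exponential reliability-growth (Goel-Okumoto) curve over cumulative discoveries. A sketch with synthetic counts, not the study's data:

```python
# Sketch: fit mu(t) = N * (1 - exp(-b t)) to cumulative discoveries.
# A clearly positive b means the pool is depleting (reliability growth), which in
# turn means later rediscovery of a given bug is likely.  Data below is synthetic.

import numpy as np
from scipy.optimize import curve_fit

def goel_okumoto(t, n_total, b):
    return n_total * (1.0 - np.exp(-b * t))

t = np.arange(1, 11)                                            # e.g. quarters since introduction
cumulative = np.array([3, 7, 10, 14, 17, 19, 22, 24, 25, 27])   # synthetic counts

popt, _ = curve_fit(goel_okumoto, t, cumulative, p0=(30.0, 0.2))
n_hat, b_hat = popt
print(f"estimated pool N ~ {n_hat:.1f}, depletion rate b ~ {b_hat:.3f}")
print(f"implied p_r bound so far: {cumulative[-1] / n_hat:.2f}")
```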

Data source
- NIST ICAT metabase
  - Collects data from multiple vulnerability databases
  - Includes CVE id, affected program/version information, and bug release time
  - Used the data through May 2003
- Need one more data point: introduction time
  - Collected version information for 35 programs
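The per-vulnerability record the analysis needs can be pictured as below. The field names are illustrative, not ICAT's schema; the introduction date is the piece that had to be added by hand from release histories, and the example dates/version range are rough placeholders.

```python
# Minimal sketch of a per-vulnerability record (field names are illustrative):
# the ICAT/CVE entry supplies the disclosure date and affected program/versions;
# the introduction date comes from looking up when the earliest affected version shipped.

from dataclasses import dataclass
from datetime import date

@dataclass
class VulnRecord:
    cve_id: str
    program: str
    affected_versions: list[str]
    disclosed: date
    introduced: date                      # release date of earliest affected version

    @property
    def age_at_disclosure_days(self) -> int:
        return (self.disclosed - self.introduced).days

rec = VulnRecord(
    cve_id="CVE-2002-0656",                       # the OpenSSL SSLv2 overflow discussed later
    program="OpenSSL",
    affected_versions=["0.9.6d and earlier"],     # rough; the exact range comes from the advisory
    disclosed=date(2002, 7, 30),
    introduced=date(1998, 1, 1),                  # placeholder; would come from release history
)
print(rec.age_at_disclosure_days)
```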

Vulnerability Disclosure by Time
[Figure: vulnerability disclosures plotted over time]

Data issues
- We only know about discovered bugs
  - And have to infer stuff about undiscovered bugs
- Data is heavily censored
  - Right censoring for bugs not yet found
  - Left censoring because not all bugs were introduced at the same time
- Lots of noise and errors
  - Some of these removed manually

Approach 1: A Program’s Eye View
- Question: do programs improve over time?
- Take all bugs in program/version X, regardless of when they were introduced
  - For a few selected program/version pairs
  - Genetically somewhat independent
- Plot discovery rate over time
  - Is there a downward trend?
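A sketch of what "plot the discovery rate and look for a trend" amounts to, on synthetic quarterly counts (the real analysis used the ICAT data described above):

```python
# Bucket a program/version's disclosures per quarter and fit a straight line;
# a significantly negative slope would suggest the pool of findable bugs is
# being depleted.  The counts below are synthetic.

import numpy as np

disclosures_per_quarter = np.array([4, 6, 5, 7, 5, 4, 6, 5, 3, 5])   # synthetic
quarters = np.arange(len(disclosures_per_quarter))

slope, intercept = np.polyfit(quarters, disclosures_per_quarter, deg=1)
print(f"fitted trend: {slope:+.2f} disclosures/quarter per quarter")
# In the talk's data the downward trend was significant only for Red Hat 6.2.
```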

Disclosures over time (selected programs)
- Linear regression
  - Significant only for RH 6.2
- Exponential regression / Laplace factor
  - Only significant depletion at the end (except RH 6.2)
  - … but there are censoring issues
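For reference, the Laplace factor used on this slide can be computed in a few lines; the disclosure times below are synthetic:

```python
# Laplace trend test: for disclosure times t_1..t_n observed over (0, T],
# u well below zero means disclosures cluster early (depletion / reliability growth);
# u near zero means a roughly constant disclosure rate.

import math

def laplace_factor(times, horizon):
    n = len(times)
    return (sum(times) / n - horizon / 2.0) / (horizon * math.sqrt(1.0 / (12.0 * n)))

times = [0.3, 0.9, 1.4, 2.2, 2.8, 3.5, 4.1, 4.9]   # years since release (synthetic)
u = laplace_factor(times, horizon=5.0)
print(f"Laplace factor u = {u:+.2f}  (|u| > 1.96 ~ significant at the 5% level)")
```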

Approach 2: A bug’s eye view
- Find bug introduction time
  - Introduction date of the first program with the bug
- Measure rate of bug discovery
  - From time of first introduction
- Look for a trend

Disclosures over time (by introduction year)
- Linear regression
  - Significant trend only for 1999
- Exponential regression / Laplace factor
  - Generally stable

How to think about this
- Medical standard of care: first, do no harm
- We’re burning a lot of energy here
  - Would be nice to know that it’s worthwhile
  - The answers aren’t confidence-inspiring
- This data isn’t definitive
  - See caveats above
  - Other work in progress [Ozment ’05]

Empirical data on patching rate
- The rate of patching controls the useful lifetime of an exploit
- So how fast do people actually patch?
- And what predicts when people will patch?

Overview of the bugs
- Announced July 30, 2002 by Ben Laurie
- Buffer overflows in OpenSSL
  - Allowed remote code execution
- Affected software
  - Any OpenSSL-based server which supports SSLv2
    - Essentially everyone leaves SSLv2 on: mod_SSL, ApacheSSL, Sendmail/TLS, …
    - Easy to identify such servers
  - Any SSL client that uses OpenSSL
Speaker notes:
* All of OpenSSL was vulnerable, but which programs?
* More of a pain to turn SSLv2 off than to leave it on
* If you were an admin you had to upgrade

OpenSSL flaws: a good case study
- A serious bug
  - Remotely exploitable buffer overflow
- Affects a security package
  - Crypto people care about security, right?
- In a server
  - Administrators are smarter, right?
- Remotely detectable
  - … easy to study
Speaker notes:
* Function pointers
* UNIX admins are smarter
* Detectable without damaging the server
* Scientists, not vandals

Questions we want to ask
- What fraction of users deploy fixes? And on what timescale?
- What kind of countermeasures are used?
  - Patches: available for all major versions, often supplied by vendors
  - Upgrades
  - Workarounds: turn off SSLv2
- What factors predict user behavior?

Methodology
- Collect a sample of servers that use OpenSSL
  - Google searches on random words
  - Filter on servers that advertise OpenSSL (this means mod_SSL)
  - n = 892 (890 after complaints)
- Periodically monitor them for vulnerability
Speaker notes:
* Sources of error: word bias, mod_SSL bias, port bias
* Monitor daily
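A minimal sketch of the "advertises OpenSSL" filter, assuming the check simply looks for mod_ssl/OpenSSL tokens in the HTTPS Server header; this is a reconstruction for illustration, not the study's actual tooling.

```python
# Keep only hosts whose HTTPS Server banner advertises mod_ssl/OpenSSL
# (e.g. "Apache/1.3.x (Unix) mod_ssl/2.8.x OpenSSL/0.9.6x").

import requests  # third-party: pip install requests

def advertises_openssl(host: str) -> bool:
    try:
        resp = requests.head(f"https://{host}/", timeout=10, verify=False)
    except requests.RequestException:
        return False
    server = resp.headers.get("Server", "")
    return "OpenSSL" in server and "mod_ssl" in server

candidates = ["www.example.com"]   # would come from the random-word Google sample
sample = [h for h in candidates if advertises_openssl(h)]
print(sample)
```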

Detecting Vulnerability
- Take advantage of the SSLv2 flaw: buffer overflow in key_arg
  - Negotiate SSLv2
  - Use an overlong key_arg
  - The overflow damages the next struct field, client_master_key_length
  - client_master_key_length is written before it is read, so this is safe
- This probe is harmless but diagnostic
  - Fixed implementations throw an error
  - Broken implementations complete the handshake
Speaker notes:
* Bottom line: attempt to exploit a hole
* key_arg is used for the IV
* A longer overflow is still dangerous

Response after bug release
[Figure: fraction of servers fixed in the days after the bug release]
Speaker notes:
* Look at the initial trend line a little more closely
* Point out three gaps (not weekends)

Kinds of fixes deployed
[Figure: breakdown of patches vs. upgrades deployed over time]
Speaker notes:
* Our second question: what people do
* Mention uncertainty
* First day mostly patches: availability, on-the-case-ness
* Conversion
* Mention uncertainty

Why not use workarounds?
- Disabling SSLv2 is complete protection
  - It’s easy
  - But essentially no administrator did it (never more than 8 machines)
- Why not? Guesses…
  - Advisories were unclear
    - Not all described SSLv2-disabling as a workaround
    - Some suggested that all OpenSSL-using applications were unsafe
      - It’s fine if you just use OpenSSL for crypto (OpenSSH, etc.)
  - Pretty easy to install patches
    - Anyone smart enough just used fixes
Speaker notes:
* Notice the previous slide didn’t show SSLv2 at all
* Better safe than sorry
* This answers the second question
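For context, the workaround in question is a one-line configuration change in mod_SSL, along the lines of `SSLProtocol all -SSLv2` (exact placement depends on the Apache/mod_SSL version in use), which is what makes the near-zero uptake striking.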

Predictors of responsiveness
- Big hosting service providers fix faster
  - The bigger the better
  - More on the ball? More money on the line?
- People who were already up to date fix faster
  - Signal of active maintenance?
  - Or just a higher willingness to upgrade

Response after Slapper release
[Figure: fraction of servers fixed in the days after the Slapper worm release]
Speaker notes:
* Now let’s turn to the response to the worm
* HSPs upgraded first, so the fit looks better
* Why another pulse?

Why so much post-worm response?
- People didn’t hear the first time?
  - Not likely… published through the same channels
- Guesses
  - People are interrupt-driven
  - People respond when they feel threatened
  - And deliberately ignore potential threats

The zombie problem
- 60% of servers were vulnerable at worm release
  - These servers can be turned into zombies
  - … and then DDoS other machines
- Independent servers are less responsive
  - So they’re harder to turn off
  - Try contacting hundreds of administrators
- Slapper wasn’t so bad
  - Since Linux/Apache isn’t a monoculture
  - And the worm was kind of clumsy
Speaker notes:
* A more stealthy worm could exploit a lot of servers and be very dangerous

Overall fix deployment by time
[Figure: cumulative fix deployment over the whole study period]
Speaker notes:
* Three things: (1) starts fast, (2) levels off, (3) starts up again after the worm

Have things changed?
- Automatic patching is more common
- Threat environment is more hostile
- Answer: not much [Eschelbeck ’05]
  - Externally visible systems: half-life = 19 days (30 days in 2003)
  - Internally visible systems: half-life = 48 days (62 days in 2003)
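Those half-lives translate directly into a decay curve; a small sketch, assuming the simple exponential decay that a half-life figure implies:

```python
# With a half-life of h days, the fraction of systems still unpatched after
# t days is roughly 0.5 ** (t / h).

def fraction_unpatched(days, half_life_days):
    return 0.5 ** (days / half_life_days)

for days in (19, 30, 60):
    print(f"day {days:2}: external ~{fraction_unpatched(days, 19):.0%}, "
          f"internal ~{fraction_unpatched(days, 48):.0%} still unpatched")
```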

The big open question
- How much do vulnerabilities cost us?
- How much would various defenses cost us? How well do they work?
- Where should we be spending our money?

Steps along the way
- What are the predictors of the discovered vulnerability rate?
  - Software quality? Popularity? Access to source code? Hacker attitudes?
- What is the marginal impact of a new vulnerability?
  - Number of total attacks? Cost of attacks?
- What is the marginal impact of faster patching?
  - How much does it reduce risk?
  - Balanced against patch quality [Beattie ’02, Rescorla ’03]
- Does diversity really work?
  - Targeted vs. untargeted attacks
  - What about bad diversity? [Shacham ’04]

Thinking outside CS: Bioweapons
- Same as software but with much worse parameters
- Vulnerabilities are long-standing
- Exploits are hard to create
  - But there are plenty of old ones available: smallpox, anthrax, ebola, etc.
  - And technology is making it easier
- Fixes are hard to create
  - Where’s my HIV vaccine?
  - And easy to counter: influenza, mousepox [Jackson ’01]
- Patching is painfully slow

Case study: 1918 Influenza
- Complete sequence has been reconstructed
  - Published in Science and Nature ’05
  - Includes diffs from ordinary flu, and explanations
- Usual controversy [Joy and Kurzweil ’05]
- But what’s the marginal cost?
  - Smallpox has already been fully sequenced
    - Current vaccination levels are low, and the vaccine has bad side effects
  - And compare the mousepox work
- Possible to de novo synthesize the virus? What’s the impact of a new virus?

Final slide
- Questions?
- Interested in working on this stuff? Reach me at ekr@rtfm.com