A Unified Framework for Measuring a Network’s Mean Time-to-Compromise


A Unified Framework for Measuring a Network’s Mean Time-to-Compromise Anoop Singhal1 William Nzoukou2, Lingyu Wang2, Sushil Jajodia3 1 National Institute of Standards and Technology 2 Concordia University 3 George Mason University SRDS 2013

Outline Introduction Motivating Example The MTTC metric models Simulation Conclusion

Outline Introduction Motivating Example The MTTC metric models Simulation Conclusion

The Need for a Security Metric Some simple questions are difficult to answer: Are we more secure than that company? Are we secure enough? How much additional security will be provided by that firewall? "You cannot improve what you cannot measure." A security metric allows a direct measurement of security before and after deploying a solution. Such a capability would make network hardening a science rather than an art.

Existing Work Efforts on standardizing security metrics: CVSS (maintained by FIRST, used in NIST's NVD), CWSS by MITRE Efforts on measuring vulnerabilities: Minimum-effort approaches (Balzarotti et al., QoP’05 and Pamula et al., QoP’06) PageRank approach (Mehta et al., RAID’06) Attack surface (Manadhata et al., TSE’11) MTTC-based approach (Leversage et al., SP’08) Our previous work (DBSec’07-08, QoP’07-08, ESORICS’10, SRDS’12) Note that the MTTC-based approach (Leversage et al., SP’08) is the closest to our work, but ours improves the model by introducing attack graphs and CVSS scores.

An Example Metric for Known Vulnerabilities [Attack graph figure: exploits such as ftp_rhosts(0,1), rsh(0,1), sshd_bof(0,1), ftp_rhosts(1,2), rsh(1,2), ftp_rhosts(0,2), rsh(0,2), local_bof(2,2) connecting conditions user(0), trust(0,1), trust(1,2), trust(0,2), user(1), user(2), root(2); annotated with probabilities 0.8, 0.9, 0.1, 0.72, 0.6, 0.54, 0.087] Attack probability (DBSec’08): e.g., the probability of exploiting ftp_rhosts is 0.8; the probability of reaching root(2) is 0.087. Here is one example of existing metrics for known vulnerabilities (based on our DBSec’08 paper). No need to explain in detail. Just say the metric assigns a probability to each individual vulnerability based on CVSS (e.g., 0.8 for ftp_rhosts(0,1)), and calculates the probability of reaching the goal based on probability theory (e.g., 0.087 for reaching root(2)).
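As a rough sketch of how such per-exploit probabilities compose over an attack graph, the AND/OR propagation rules below are a common simplification, assumed here for illustration rather than taken verbatim from the DBSec’08 paper:

```python
def and_prob(p_exploit, pre_probs):
    """An exploit succeeds only if it works AND all of its
    pre-conditions already hold (conjunctive node)."""
    prob = p_exploit
    for p in pre_probs:
        prob *= p
    return prob

def or_prob(incoming_probs):
    """A condition is reached if at least ONE incoming exploit
    succeeds (disjunctive node)."""
    miss = 1.0
    for p in incoming_probs:
        miss *= 1.0 - p
    return 1.0 - miss

# E.g., an exploit with probability 0.9 whose only pre-condition
# holds with probability 0.8 contributes 0.72 -- the same kind of
# intermediate value that appears in the figure.
```

Applying such rules across the whole graph is what yields small end-to-end values like the 0.087 for reaching root(2).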

An Example Metric for Zero-Day Attacks k-zero day safety (ESORICS’10) k: the minimum number of distinct zero-day vulnerabilities required for an attack Larger k means a safer network E.g., assuming no other known vulnerability here: k=1 if ssh has no known vulnerability; k=0 otherwise Here is one example of existing metrics for zero day attacks (based on our ESORICS’10 paper). No need to explain in detail. Suppose host 2 is the target. If no service has any known vulnerability, then the attacker will require at least one zero day vulnerability (in ssh) to attack host 2.

How to Measure Both? A natural next step is to develop metrics that are capable of handling the threats of both known vulnerabilities and zero day attacks

Outline Introduction Motivating Example The MTTC metric models Simulation Conclusion I’ll first motivate our study and summarize the limitations of related work. I’ll then introduce some basic concepts such as the attack graph. I’ll describe our model in three stages: how to assign individual values using CVSS, how to compose them in the static case, and how to compose them in the dynamic case. I’ll discuss two case studies.

How to Measure Both? A viable approach is to combine those two types of metrics, Known vulnerabilities Zero day vulnerabilities through, for example, a weighted sum E.g., we assign a score s (0 <= s < 1) to a known vulnerability, and 1 to a zero day vulnerability However, such a naïve approach may lead to misleading results One way to measure both is to combine existing metrics for known and zero day vulnerabilities. For example, a naïve solution is to use a weighted sum, assigning a score smaller than 1 to a known vulnerability and 1 to a zero day vulnerability (since a known vulnerability is considered easier to exploit than a zero day one).

Issues with Such a Naïve Solution Consider this sequence: Initially, s_ssh + s_ssh + s_bof If we patch one of the ssh services, s_ssh + 1 + s_bof If we patch both ssh services, 1 + s_bof Patching both appears less secure than patching only one, which is difficult to explain Adding the two metrics together makes little sense when they have different semantics The problem with this naïve solution (of adding the two together) is that it makes little sense, because the two metrics measure different things (one the difficulty of exploiting a known vulnerability, the other the likelihood of having a zero day vulnerability). Explaining this example is optional.
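The anomaly can be reproduced with arbitrary numbers; the scores below are hypothetical placeholders, chosen only to exhibit the effect:

```python
# Hypothetical known-vulnerability scores, each < 1 (placeholders).
s_ssh, s_bof = 0.6, 0.7
ZERO_DAY = 1.0  # zero-day vulnerabilities get the maximum score

initial    = s_ssh + s_ssh + s_bof     # both ssh services vulnerable
patch_one  = s_ssh + ZERO_DAY + s_bof  # one ssh patched -> one zero-day needed
patch_both = ZERO_DAY + s_bof          # both patched -> a single zero-day path

# Higher total = harder attack = "more secure" under the naive weighted
# sum, yet patching BOTH services scores lower than patching only one:
assert patch_both < patch_one
```

Any choice of s_ssh and s_bof below 1 produces the same paradox, which is exactly why the summed score is hard to interpret.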

Our Solution: Using Time to Combine Different Metrics Define the MTTC t of a vulnerability x: Initially, t1 = f(ssh) + f'(ssh) + f(bof) Patch one ssh, t2 = k + min(f(ssh), k') + f(bof) Patch both ssh, t3 = k + k' + f(bof) Which case is more secure will depend on how f and k are defined; what is important is that the model still applies. No need to explain the formulas in detail (say the explanations are in the paper). The key point is that we combine the two types of metrics using time, so there is a clear and coherent semantics, and no matter how you define f(x) and k, the model still applies.
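A minimal numeric sketch of why the time semantics composes coherently; f and k are left abstract in the talk, so the values below are placeholders, not the paper's estimates:

```python
# Hypothetical per-vulnerability times, in days (placeholders only).
f_ssh, f_bof = 2.0, 3.0  # mean time to exploit a known vulnerability
k, k2 = 30.0, 30.0       # mean time to find/exploit a zero-day

t1 = f_ssh + f_ssh + f_bof       # both ssh services unpatched
t2 = k + min(f_ssh, k2) + f_bof  # one ssh patched
t3 = k + k2 + f_bof              # both ssh services patched

# Measured in time, more patching now strictly increases the
# attacker's expected effort -- no paradox:
assert t1 < t2 < t3
```

Because everything is measured in the same unit (time), the ordering stays sensible for any positive choice of f and k with k larger than f.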

Contribution Among the first security metrics capable of handling both known vulnerabilities and zero day attacks under the same model with coherent semantics The proposed metric provides a more intuitive and easy-to-understand score (time) than previous work based on abstract value-based metrics We take a layered approach such that the high-level metric model remains valid regardless of the specific low-level inputs

Outline Introduction Motivating Example The MTTC metric models Simulation Conclusion

Mean Time-to-Compromise (MTTC) Given an attack graph and a goal, the MTTC of a condition c in the attack graph is defined as the average time spent by an attacker in reaching the goal MTTC(e) is the average time required for the exploit e Pr(e|c) represents the conditional probability that a successful attacker actually chooses to exploit e P(c) represents the probability of an attacker being successful (i.e., s/he can reach the goal condition c) (Note that ‘chooses to exploit’ and ‘can exploit’ are two different things) Intuitively speaking, the dividend is the total time used by all successful attackers in reaching the goal condition c, and the divisor is the number of successful attackers. By dividing the two, we obtain the average time used by each successful attacker; e.g., if among 100 attackers, 50 can reach the goal, and those 50 attackers use 36 hours in total, then 36/50 = 0.72 hours is the MTTC.
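The speaker note's arithmetic can be checked directly; this is a toy computation of the averaging intuition, not the paper's estimator:

```python
# Illustrative population of attackers (numbers from the speaker note).
n_attackers = 100
n_successful = 50             # attackers who reach the goal condition c
total_time_successful = 36.0  # hours spent by all successful attackers

p_success = n_successful / n_attackers       # intuition behind P(c)
mttc = total_time_successful / n_successful  # average time per success

# 36 hours / 50 successful attackers = 0.72 hours
```

The actual metric replaces this head count with probabilities derived from the attack graph, but the dividend/divisor structure is the same.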

An Example To determine MTTC(goal): We need to find the probabilities P(goal) and Pr(e|goal) for each e (we will do this in three steps) We need to estimate MTTC(e) for each e

Step 1: Probability of Being Able to Exploit e When Its Pre-Conditions Are Satisfied For known vulnerabilities, we assign the probability based on CVSS scores For zero day vulnerabilities, we assign a fixed nominal value of 0.08 based on the following assumptions: This is the probability of an attacker being able to exploit e. For a known vulnerability, just use its CVSS score. For a zero day, we make some assumptions about its CVSS base metrics, as stated here on the slide, and then calculate a nominal score of 0.08.
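One plausible reading of this step (an assumption for illustration; the exact normalization is in the paper) maps the CVSS base score, which ranges over 0 to 10, onto [0, 1], and fixes zero-days at the nominal 0.08:

```python
ZERO_DAY_PROB = 0.08  # nominal value stated on the slide

def exploit_probability(cvss_base_score=None):
    """Probability of being able to exploit e once its pre-conditions
    hold. Known vulnerability: normalize its CVSS base score (0..10).
    Zero-day (no score available): use the fixed nominal value."""
    if cvss_base_score is None:
        return ZERO_DAY_PROB
    return cvss_base_score / 10.0
```

Under this mapping a CVSS 8.0 vulnerability gets probability 0.8, consistent with values like the 0.8 assigned to ftp_rhosts earlier in the talk.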

An Example Apply this to our example: Here we just assign probabilities to each exploit, either based on CVSS (first two cases), or as a nominal value (last one)

Step 2: Probability of Being Able to Exploit e Construct a Bayesian network based on the attack graph Calculate the probability that an attacker can reach the goal This follows from the previous probability assignment. No need to explain; details are in the paper.
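For intuition only, the inference can be sketched by brute-force enumeration on a toy two-exploit chain, instead of the OpenBayes machinery used in the implementation; the structure and probabilities here are made up:

```python
from itertools import product

# Toy attack graph: the goal is reached iff exploit e1 succeeds
# AND then e2 succeeds. Per-exploit probabilities (illustrative):
p = {"e1": 0.8, "e2": 0.6}

def p_goal():
    """Enumerate every joint outcome of the exploits and sum the
    probability mass of outcomes that satisfy the goal condition."""
    total = 0.0
    for o1, o2 in product([True, False], repeat=2):
        pr = (p["e1"] if o1 else 1 - p["e1"]) \
           * (p["e2"] if o2 else 1 - p["e2"])
        if o1 and o2:  # goal requires e1 AND e2
            total += pr
    return total
```

Enumeration is exponential in the number of exploits, which is exactly why the paper builds a Bayesian network instead; the answer it computes is the same P(goal).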

Step 3: Probability of the Attacker Choosing Exploit e Here we can make different assumptions, e.g., An attacker may always choose the easiest exploit s/he is able to perform An attacker may still choose harder exploits, with likelihood proportional to their relative difficulties Even though an attacker can exploit e, s/he may or may not choose to do so, because s/he usually has more than one choice. Don’t explain the algorithm; the procedure calculates pr(e), the probability that an attacker chooses e, based on those two assumptions.
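The second assumption can be sketched as a simple normalization over the exploits whose pre-conditions are currently satisfied; the values and the normalization rule are illustrative, not the paper's algorithm:

```python
def choice_probabilities(success_probs):
    """Pr(attacker chooses e), taken proportional to how easy e is,
    over the exploits whose pre-conditions are satisfied."""
    total = sum(success_probs.values())
    return {e: pe / total for e, pe in success_probs.items()}

# Two available exploits: the easier one is chosen more often,
# but the harder one still gets picked sometimes.
choices = choice_probabilities({"ftp_rhosts": 0.6, "sshd_bof": 0.2})
```

Under the first assumption (always pick the easiest), the same function would instead put probability 1 on the exploit with the highest success probability.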

An Example Apply this to our example:

Estimating MTTC(e) – Known Vulnerabilities To estimate MTTC(e), we average the two complementary cases: Exploit code already exists, e.g., Exploit code does not exist, e.g., Note these represent only one (rough) way of estimating MTTC(e). No need to explain; you can say the results are based on search theory and the previous MTTC work (Leversage et al., SP’08). In particular, the ‘5.8 days’ figure comes from Leversage et al., SP’08.
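Roughly, the averaging has this shape; the existence probability and the per-case times below are placeholders (only the '5.8 days' figure traces back to Leversage et al., SP'08), and the paper's actual per-case estimates come from search theory:

```python
def mttc_exploit(p_code_exists, t_with_code, t_without_code):
    """Average the two complementary cases: exploit code for the
    vulnerability already exists, vs. it must first be developed."""
    return (p_code_exists * t_with_code
            + (1 - p_code_exists) * t_without_code)

# E.g., with placeholder inputs in days, where 5.8 days is the
# development-time figure cited from Leversage et al., SP'08:
# mttc_exploit(0.4, 1.0, 5.8)
```

The point of the layered design is that any better estimator for the two cases can be substituted without touching the rest of the model.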

An Example Apply this to our example:

An Example The final result of our example:

Outline Introduction Motivating Example The MTTC metric models Simulation Conclusion

Simulation The algorithms are implemented in Python using libraries including NetworkX, OpenBayes, PyGraphviz, and Matplotlib. To render the graphs, we use Graphviz. The experiments were performed on an Intel Core i7 computer with 8 GB of RAM, running Ubuntu 12.04 LTS. If they ask why we did not run experiments on real networks, say there do not exist any publicly available datasets of attack graphs. We generate random attack graphs by growing them from a seed graph, which is based on real-world networks.
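The seed-growth idea can be sketched with the standard library alone; the growth rule below (attach each new node under a uniformly chosen existing node) is an illustrative guess, not the exact generator used in the experiments:

```python
import random

def grow_attack_graph(seed_edges, target_nodes, seed=42):
    """Grow a random directed attack graph from a small seed graph:
    repeatedly add a new node and attach it beneath a randomly
    chosen existing node."""
    rng = random.Random(seed)  # fixed seed for reproducible graphs
    edges = list(seed_edges)
    nodes = sorted({n for e in seed_edges for n in e})
    while len(nodes) < target_nodes:
        new = max(nodes) + 1
        parent = rng.choice(nodes)
        nodes.append(new)
        edges.append((parent, new))
    return nodes, edges
```

In the actual implementation the result would be loaded into a NetworkX digraph so the MTTC algorithms can run on it unchanged.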

Simulation: MTTC vs Network Size These results show how the MTTC grows with the size of the attack graph (number of nodes) or of the network (number of hosts). In the figure, ind represents the maximum number of pre-conditions of each exploit.

Simulation: Running Time vs Network Size These results show how the running time grows with the size of the network. We can see that the running time is dominated by the generation of the attack graphs and Bayesian networks, so our own algorithms do not cost much time.

Outline Introduction Motivating Example The MTTC metric models Simulation Conclusion

Conclusion We have proposed an MTTC framework for developing metrics to measure both known and zero day vulnerabilities We have defined our MTTC model and provided examples of concrete methods for estimating the inputs to the model Future work will be directed at developing more refined estimation methods, applying the metrics to network hardening, and conducting more realistic experiments