Deception Game on Decoy Systems

Presentation transcript:

Deception Game on Decoy Systems Gihyuk Ko gko@andrew.cmu.edu Carnegie Mellon University Hi, I'm Gihyuk Ko. In this presentation, I will introduce my research on a deception game on decoy systems.

Decoy Systems and Honeypots One of the deception technologies in computer security Lure attackers into accessing fake objects (i.e., decoys) Monitor attackers' behavior, mitigate intrusion Many variations in implementations of decoy systems Honeypots, Honeywords, the Kamouflage system, etc. Honeypots [Spitzner03]: "an information system resource whose value lies in unauthorized or illicit use of that resource" A fake server/host which looks like a real server/host Appears to attackers as a valuable resource to attack No true value to outsiders I would like to begin my talk by briefly going over the concept of decoy systems. Decoy systems are deception technologies in computer security that use decoys to lure attackers. Their primary purpose is to lure attackers into accessing fake objects, namely decoys, in order to monitor the attackers' behavior and mitigate intrusion. There are currently many variations in implementations of such decoy systems, such as honeypots, honeywords, and the Kamouflage system. The Kamouflage system is a password manager that stores multiple decoy password lists in addition to the true password list; similarly, the honeywords scheme maintains additional decoy passwords (honeywords) alongside each user's true password. Among these variations, honeypots were the very first concept to come out of using decoys. According to Spitzner, a honeypot is defined as "an information system resource whose value lies in unauthorized or illicit use of that resource". In other words, a honeypot is a fake host that looks and acts like a real host. As a consequence, it appears to attackers as a valuable resource to attack, while it has no true value.

Honeypots in the network [Diagram: the attacker targets actual servers (172.17.31.10, .11, .12, .16, .17): attack successful; the attacker targets honeypots (172.17.31.13, .14, .15): attack fails and the attacker becomes monitored.] Now let's look at an example deployment of honeypots. Assume the defender is running several servers; the defender can deploy honeypots alongside the actual servers to lure attackers. Also assume the attacker wants to take down a few of the servers to limit the defender's functionality. In this situation, if the attacker accesses and attacks honeypots instead of actual servers, he will be detected and monitored by the defender. There are many other ways to use honeypots to defend against real-world attackers, and honeypots are widely deployed and actively developed, notably by the Honeynet Project.

Fake Honeypots [Rowe06] Attackers became aware that defenders use honeypots Attackers 'probe' the target to avoid attacking honeypots Attackers may collect 'clues' to discover honeypots using 'heuristics' Signature-based approach: look for particular features, e.g., a well-known honeypot tool name, "magic numbers" Anomaly-based approach: use metrics over the entire system, e.g., the number of files/subdirectories, the number of English-word file names, the standard deviation of filename lengths As honeypot technology became popular, attackers became aware of the defenders' use of honeypots in their defense. Smart attackers began to 'probe' a target before attacking it, in order to avoid honeypots. Using several heuristics, an attacker may be able to collect clues indicating that the host he is interacting with is a honeypot rather than a normal host. According to Rowe and Custy, there are two different approaches to discovering honeypots. A signature-based approach looks for particular features such as well-known honeypot tool names, or "magic numbers" such as parameter values for VMware. An anomaly-based approach, on the other hand, uses metrics computed over the entire system, such as the number of files and subdirectories, the number of English-word file names, or the standard deviation of filename lengths. For instance, if there are too few files or subdirectories, the attacker can assume with high probability that the host is a honeypot, as honeypots have only a few files compared to actual working hosts.
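
To make the anomaly-based idea concrete, here is a minimal Python sketch of the kind of heuristic an attacker might run on a compromised shell. The thresholds (MIN_FILES, MIN_NAME_STDEV) are illustrative assumptions, not values from Rowe and Custy.

```python
import os
import statistics

# Illustrative thresholds: a real attacker would calibrate these
# against measurements of ordinary production hosts.
MIN_FILES = 500
MIN_NAME_STDEV = 2.0

def looks_like_honeypot(root: str) -> bool:
    """Anomaly-based heuristic: flag a filesystem whose metrics deviate
    from a typical working host. Honeypots tend to have few files and
    artificially uniform filename lengths."""
    names = []
    for _dirpath, _dirnames, filenames in os.walk(root):
        names.extend(filenames)
    if len(names) < MIN_FILES:
        return True  # too few files for a host in real use
    stdev = statistics.stdev(len(n) for n in names)
    return stdev < MIN_NAME_STDEV  # suspiciously uniform name lengths
```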

Fake Honeypots [Rowe06] (cont'd) Ordinary hosts/servers which are made to appear like honeypots Produced by planting 'clues' into the system, e.g., plant a well-known honeypot tool's trace in memory, make the standard deviation of filename lengths close to zero Attackers who discover the 'clues' will avoid the system Fake honeypots are protected against intrusion Using decoys, the defender can deliberately 'deceive' attackers in an active way In this context, Rowe and Custy proposed the concept of 'fake honeypots', which takes advantage of these 'clues'. Fake honeypots are ordinary hosts made to look like honeypots. According to the authors, a fake honeypot can be produced by planting 'clues' into a normal system: for example, by planting a well-known honeypot tool's trace in memory, or by making the standard deviation of filename lengths close to zero. Smart attackers who discover these 'clues' using their 'heuristics' will avoid the system, believing they are in contact with a honeypot. In this way, fake honeypots are protected against attackers' intrusion. The use of fake honeypots suggests that, using decoys, a defender can deliberately 'deceive' attackers in an active way.

Attacker-defender game using honeypots Consider an attacker who wants to attack a honeypot network If the attacker attacks a honeypot node, he gets detected The attacker would first try to discover honeypots (by 'probing') and avoid attacking them The defender can build a 'fake honeypot' to deceive the attacker These interactions can happen multiple times back and forth This can be viewed as a two-player game! Until now, we have overviewed the concept of decoy systems and looked at the more detailed example of honeypots. We have also looked at the concept of fake honeypots and what it suggests. As we have seen in the diagram, these interactions between attacker and defender can be viewed as a two-player game, mainly because the attacker and the defender can each choose strategies against the other party's actions.

Key Points Many variations in deception techniques which use decoy systems Honeypots, fake honeypots Some attacker-defender interactions using decoy systems can be explained as a two-player game Developing a game-theoretic model can benefit the defender in countering the attacker with strategy! Since the 1990s, deception has been an important computer security defense mechanism, as it uses decoys to lure attackers into accessing fake objects and to monitor the attackers' behavior. By doing so, defenders have been able to detect and mitigate intrusions from outside attackers. There have been many variations in implementations of such decoy systems, such as honeypots, honeytokens, honeywords, and the Kamouflage system. We have also seen that some attacker-defender interactions using these decoy systems can be explained as a game. Thus, given that the attacker follows a certain attack strategy, developing a game-theoretic model will benefit the defender by letting him counter that strategy.

Objective Develop a game-theoretic model which reflects the interactions between a real-world attacker and defender using decoy systems, interactions that involve deceptive actions such as 'lying'.

Game Definition – Decoy Systems True objects: actual valuable resources (e.g., servers) Decoy objects: decoys without any value (e.g., honeypots) Total n objects: k true objects, n-k decoys Indicator function: I : {o_1, ..., o_n} -> {0, 1}, where I(o) = 1 if o is a true object and I(o) = 0 if o is a decoy All decoy systems use "decoys" that look like "true objects" from an outsider's view, which can be modeled as follows. We assume the decoy system consists of n objects in total, of which k are true objects and n-k are decoys. Additionally, we need an indicator function that indicates whether a given object is a decoy or a true object.
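
As a minimal sketch of this setup (the class and variable names are my own illustrative choices, not from the presentation), the model can be expressed in a few lines of Python:

```python
import random

class DecoySystem:
    """n objects in total: k true objects and n - k decoys, placed at
    random. The indicator list plays the role of the indicator function I."""
    def __init__(self, n: int, k: int):
        assert 0 < k < n
        true_indices = set(random.sample(range(n), k))
        # I(o) = 1 for true objects, 0 for decoys
        self.indicator = [1 if o in true_indices else 0 for o in range(n)]
        self.n, self.k = n, k

system = DecoySystem(n=8, k=3)   # e.g., 3 real servers among 8 hosts
print(sum(system.indicator))     # -> 3 true objects
```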

Game Definition – Category We define a two-player, non-cooperative, complete- but imperfect-information, dynamic game Two-player: attacker A, defender D Non-cooperative: independent decision-making Complete information: strategy profiles and payoffs (utilities) are known to each other Imperfect information: not all past actions are known to each party Dynamic: turn-taking manner According to Roy et al. [Roy10], games in security can be classified as static or dynamic, complete- or incomplete-information, and perfect- or imperfect-information. Also, since a security game is played between an attacker and a defender whose interests conflict, the players do not cooperate. In our model, we define a two-player non-cooperative complete- but imperfect-information dynamic game.

Game Definition – Goals / Moves A's goal: distinguish the true objects from all objects, i.e., learn the indicator function I D's goal: prevent the attacker from learning the true objects, by 'lying' to the attacker Moves, by step: Initialization is D's move; Probe-Respond (repeated) is A's move followed by D's move; Termination is A's move.

Game Definition – Moves (cont'd) Initialization: D deploys decoys and true objects In the first step, the Initialization step, the defender D deploys the decoys and true objects in the network. He deploys them by choosing an indicator function I that maps true objects and decoy objects to their indicator values.

Game Definition – Moves (cont'd) Probe-Respond: A picks an object to 'probe'; D responds to the 'probe' Tell the truth: return the indicator value of the object Lie: return the logical complement of the indicator value Repeats until A stops The Probe-Respond step is repeated until the attacker A stops. In the first phase of this step, the attacker A picks an object to 'probe'. This reflects the 'probing' of a real-world attacker, who collects 'clues' in order to avoid attacking decoys. We assume that A has many different 'heuristics' and that he uses one 'heuristic' per 'probe'.
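
A single probe-response exchange can be sketched as follows; the respond name, the lie flag, and the toy indicator list are illustrative assumptions rather than notation from the slides.

```python
def respond(indicator, o: int, lie: bool) -> int:
    """D's response to a probe on object o: the truth is I(o); a lie is
    its logical complement, 1 - I(o)."""
    return 1 - indicator[o] if lie else indicator[o]

indicator = [1, 0, 0, 1, 0]                # toy system: objects 0 and 3 are true
print(respond(indicator, o=1, lie=True))   # decoy, but D lies -> A sees 1
print(respond(indicator, o=1, lie=False))  # truthful response -> 0
```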

Game Definition – Moves (cont'd) Termination: A concludes with k objects to attack In the Termination step, the attacker A ends the game by choosing the k objects he will attack.

Game Definition – Moves (cont'd) Multiple 'probes' on the same object: applying different 'heuristics' to the same object gives a higher probability of revealing the identity of the object Assumption: two 'probes' on the same object reveal whether the object is a decoy or not One possible subtlety in the game is what happens when multiple probes land on the same object; we resolve it by assuming that a second probe fully reveals the object's identity, so the defender cannot usefully lie about the same object twice.

Game Definition – Utilities We assume both A's 'probing' and D's 'lying' have per-action costs A's per-probe cost c_A gives overall cost C_A; D's per-lie cost c_D gives overall cost C_D Incentives depend on the attacker's final move (attack): when A picks a true object, A gets the per-object incentive i_A; when A picks a decoy object, D gets the per-object incentive i_D Overall incentives work in a zero-sum manner: I_A = -I_D Overall utility: overall incentive minus overall cost As we argued, it is reasonable to assume that the attacker's probing and the defender's lying each cost some amount of resources: in a real-world Internet setting, probing requires applying 'heuristics' to collect clues, and lying requires actual infrastructural changes by the defender. Incentives, on the other hand, depend on the attacker's final move, the k indices A decides to attack. If A picks a true object to attack, he obtains a valuable resource, penalizing the defender at the same time; if A picks a decoy object, D becomes able to track A's behavior, and A is penalized. We assume these per-object values are i_A and i_D respectively, and, in the same manner as the costs, we denote the overall incentives by I_A and I_D. We also assume the incentives are zero-sum, that is, I_A equals minus I_D. Finally, each player's overall utility is his overall incentive minus his overall cost. Note that the overall utilities do not necessarily work in a zero-sum manner, because the two players' costs can differ.
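
As a hedged reconstruction of these definitions in code: the formula I_A = (true hits) * i_A - (decoy hits) * i_D is my reading of the zero-sum incentive description above, and the names mirror the transcript's notation.

```python
def utilities(indicator, attacked, num_probes, num_lies,
              c_a, c_d, i_a, i_d):
    """Overall utilities (U_A, U_D) for one play of the game.
    attacked: the k object indices A finally attacks."""
    true_hits = sum(indicator[o] for o in attacked)  # true objects A hit
    decoy_hits = len(attacked) - true_hits           # decoys A wasted attacks on
    inc_a = true_hits * i_a - decoy_hits * i_d       # incentives: I_A = -I_D
    inc_d = -inc_a
    u_a = inc_a - c_a * num_probes  # utility = incentive - cost
    u_d = inc_d - c_d * num_lies    # costs differ, so utilities need not be zero-sum
    return u_a, u_d

indicator = [1, 0, 0, 1, 0]
print(utilities(indicator, attacked=[0, 2], num_probes=4, num_lies=2,
                c_a=1.0, c_d=0.5, i_a=10.0, i_d=5.0))  # -> (1.0, -6.0)
```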

Analysis on Possible Strategies Three possible example attacker strategies All-Round-Probing Attacker No-Probing Attacker Selectively-Probing Attacker

Analysis on Possible Strategies All-Round-Probing Attacker Probes all objects twice until he finds the k true objects The attacker always identifies all k true objects The defender's lies don't help: the defender should not use any lies if he wants to maximize his utility Unlikely to be a realistic attacker strategy Until now, we defined our game using the concept of decoy systems, inspired by plausible attacker-defender interactions in a honeypot network. Now we analyze our game model against three different possible attacker strategies. The first attacker we can think of is the all-round-probing attacker, who puts every resource toward his goal: using the fact that the defender cannot lie twice about the same object, this attacker probes every object twice until he finds the k true objects. The expected number of probes can then be calculated; a sketch follows.
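
The slide's formula is not preserved in the transcript, but under the stated assumptions (random probing order, two probes per object, stopping at the k-th true object) the expected number of probes works out to 2k(n+1)/(k+1): twice the mean position of the k-th true object in a random permutation. A quick Monte Carlo check of that reconstruction:

```python
import random

def expected_probes_all_round(n: int, k: int, trials: int = 100_000) -> float:
    """Estimate E[#probes] for an attacker who probes objects twice each,
    in random order, stopping once all k true objects are revealed."""
    total = 0
    for _ in range(trials):
        order = random.sample(range(n), n)        # random probing order
        truths = set(random.sample(range(n), k))  # locations of true objects
        found = 0
        for pos, o in enumerate(order, start=1):
            found += o in truths
            if found == k:
                total += 2 * pos                  # two probes per examined object
                break
    return total / trials

n, k = 10, 3
print(expected_probes_all_round(n, k))  # ~16.5
print(2 * k * (n + 1) / (k + 1))        # closed form: 16.5
```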

Analysis on Possible Strategies No-Probing Attacker The attacker might guess k objects without any probing No cost for either A or D The efficiency of the decoy system itself is the countermeasure Intuitively, as the portion of decoys in the network increases, the expected number of correct guesses decreases Increasing the per-object penalty (i_D) will also benefit the defender The next attacker we can consider is one who refuses to play the game with the defender: he just guesses k objects and attacks them without any probes.
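
To make the intuition concrete (this is my arithmetic, not a formula from the slides): if A guesses k of the n objects uniformly at random, the number of true objects he hits is hypergeometric with mean k * (k/n) = k^2/n, which falls as decoys are added.

```python
def expected_correct_guesses(n: int, k: int) -> float:
    """Mean of the hypergeometric count: A draws k objects uniformly at
    random from n objects, exactly k of which are true."""
    return k * k / n

for n in (10, 20, 40):                        # k = 3 true objects, growing decoy pool
    print(n, expected_correct_guesses(n, 3))  # 0.9, 0.45, 0.225
```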

Analysis on Possible Strategies Selectively-Probing Attacker The attacker probes each object and selectively probes a second time only the objects for which he received a 'true object' response Possible countermeasures for the defender: 'lie' about every true object, to deter examination by the attacker as much as possible, or 'lie' about every decoy object, to make the attacker waste a probe examining each decoy. A sketch of this attacker's loop is shown below.
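
Here is a standalone sketch of that interaction under the two-probe assumption; the function name and response encoding are illustrative, and a second probe is modeled as revealing the true identity since D cannot lie twice about the same object.

```python
def selective_probe(indicator, first_response):
    """A probes each object once; only objects whose first answer is
    'true object' (1) get a second, identity-revealing probe."""
    revealed_true, probes = [], 0
    for o, answer in enumerate(first_response):
        probes += 1
        if answer == 1:  # looks like a true object: probe again
            probes += 1  # second probe reveals I(o) for certain
            if indicator[o] == 1:
                revealed_true.append(o)
    return revealed_true, probes

indicator = [1, 0, 1, 0, 0]                        # objects 0 and 2 are true
lie_on_decoys = [1] * len(indicator)               # D lies on every decoy: all answers are 'true'
print(selective_probe(indicator, lie_on_decoys))   # ([0, 2], 10): A wastes a probe per decoy
```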

Conclusion Some attacker-defender interactions can be viewed as a two-player game We proposed a game-theoretic model which reflects the interaction between attacker and defender using decoy systems Analyzing many other possible strategies within our game remains future work In conclusion, there exist various implementations of deception techniques using decoy systems, and, as we have seen in the example of honeypots and fake honeypots, some attacker-defender interactions can be viewed as a two-player game. We proposed a game-theoretic model which reflects this interaction between attacker and defender using decoy systems.

References
[Spitzner03] L. Spitzner, "Honeypots: Catching the Insider Threat," Computer Security Applications Conference, 2003.
[Rowe06] N. C. Rowe and E. J. Custy, "Fake Honeypots: A Defensive Tactic for Cyberspace," Proceedings of the 2006 IEEE Workshop on Information Assurance, pp. 223-230, 2006.
[Roy10] S. Roy et al., "A Survey of Game Theory as Applied to Network Security," Proceedings of the 43rd Hawaii International Conference on System Sciences, pp. 1-10, 2010.

Game Definition – Moves (cont'd) Step 1: Initialization The defender D deploys decoys and true objects, choosing an indicator function I that maps each object to its indicator value Step 2: Probe-Respond, repeated until A stops 2a: The attacker A picks an object to 'probe', to collect 'clues' and avoid attacking decoys; we assume A uses one 'heuristic' per 'probe'.

Game Definition – Moves (cont'd) Step 2: Probe-Respond (cont'd) 2b: The defender D responds to A's 'probe' by giving A a one-digit value. The defender can either 'tell the truth' (return the indicator value) or 'lie' (return its logical complement).

Game Definition – Moves (cont'd) Multiple 'probes' on the same object: applying different 'heuristics' to the same object gives a higher probability of revealing the identity of the object Assumption: two 'probes' on the same object reveal whether the object is a decoy or not Step 3: Termination The attacker concludes with k objects to attack. Note that the two-probe assumption also prevents the game from going on endlessly, because each object can be usefully probed at most twice, which bounds the number of informative rounds.