Research Direction Introduction
Advisor: Yeong-Sung Lin
Presented by I-Ju Shih
2011/11/29

Agenda
- Problem Description
- Problem Assumption
- Problem Formulation

Problem Description

Defender versus Attacker: the defender's information
1. Common knowledge: known to both players.
2. The defender's private information (e.g., each node's type and the network topology): the defender knows all of it; the attacker knows only part of it.
3. The defender's other information (e.g., system vulnerabilities): the defender does not know it before the game starts; the attacker knows part of it.

Defender versus Attacker: budget
1. Allocation based on node importance: the defender spends on defense, the attacker on attack.
2. Additional spending on each node: the defender on releasing messages, the attacker on updating information.
3. Reallocation or recycling: the defender can reallocate or recycle budget, at extra cost; the attacker cannot.
4. Reward: the defender gets none; the attacker does. If the attacker compromises a node, he controls that node's resources until the defender repairs it.
5. Repairing compromised nodes: the defender can; the attacker cannot.
6. Resource accumulation: the defender can accumulate resources, but the carried-over resource is discounted.

Defender versus Attacker: other properties
- Immune benefit: the defender has it (she can update her information about system vulnerabilities after attacks); the attacker does not.
- Rationality: both players may have full or bounded rationality.

Objective
The game has two players: an attacker (he, A) and a defender (she, D). Network survivability is measured by the Average DOD (ADOD).
- Defender objective: minimize the damage of the network (ADOD). Budget constraint: deploying the defense budget on nodes, repairing compromised nodes, and releasing messages on nodes.
- Attacker objective: maximize the damage of the network (ADOD). Budget constraint: deploying the attack budget on nodes and updating information.

Defender's information
The defender has private information: each node's type and the network topology. There are two node types (lower or higher valuation), and each node's prior belief in the first round is common knowledge. The attack success probability of node i is the probability that node i is of type 1 times the attack success probability of node i given type 1, plus the probability that node i is of type 2 times the attack success probability of node i given type 2.
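Written as a formula (the symbols below are assumed for illustration; the slides do not fix notation), with θ_{i,k} the prior belief that node i is of type k and p_{i,k} the attack success probability of node i given type k:

```latex
p_i = \theta_{i,1}\, p_{i,1} + \theta_{i,2}\, p_{i,2},
\qquad \theta_{i,1} + \theta_{i,2} = 1 .
```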

Defender's information (figure)

Defender's action
In each round, the defender moves first: she determines her strategy and chooses a message, which may be truth, deception, or secrecy, for each node.

Message releasing
Message releasing can be classified into two types:
- The defender can divide a node's information into different parts and release a message about each part.
- The defender can release a node's defensive state as a message to the attacker.

Message releasing: type 1
The defender chooses a part of a node's information and, according to her strategy, releases it as a truthful message, a deceptive message, or secrecy.

Message releasing: type 2
The defender releases each node's defensive state as a message, which is truth, deception, or secrecy, chosen as a mixed strategy.

Message releasing
The defender chooses:
1. A truthful message if and only if the message equals the actual information/defense.
2. Secrecy if and only if the message is kept secret.
3. A deceptive message if and only if the message differs from the actual information/defense.
Cost: deceptive message > secrecy > truthful message.
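A minimal sketch of how the three message kinds and the cost ordering might be encoded; the names and cost values below are illustrative assumptions, not part of the formulation:

```python
# Illustrative sketch (assumed names and costs): classify a released message
# and check the slide's cost ordering deception > secrecy > truth.

# Hypothetical per-message release costs satisfying the stated ordering.
RELEASE_COST = {"deception": 3.0, "secrecy": 2.0, "truth": 1.0}

def classify_message(actual_defense, message):
    """Classify a released message for one node.

    actual_defense -- the node's true defensive state
    message        -- what the defender releases; None models secrecy
    """
    if message is None:
        return "secrecy"
    return "truth" if message == actual_defense else "deception"

# Example: the node is actually defended ("protected") but the defender
# releases "unprotected", i.e. a deceptive message, which costs the most.
kind = classify_message("protected", "unprotected")
print(kind, RELEASE_COST[kind])          # -> deception 3.0
assert RELEASE_COST["deception"] > RELEASE_COST["secrecy"] > RELEASE_COST["truth"]
```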

The effect of deception/secrecy
The effect of deception or secrecy is discounted if the attacker already knows part of the defender's private information.

The effect of deception/secrecy
The effect of deception or secrecy is zero if the attacker knows something that the defender does not know.

Immune benefit
Even if the attacker knows something that the defender does not, the defender can update her information after observing the result of each round's contest. Once she has updated her information, she gains an immune benefit: the attacker can no longer succeed with the identical attack.

Defender's resources
From the defender's point of view, the budget can be reallocated or recycled, but a discount factor applies. The defender can also accumulate resources to decrease the attack success probability of network nodes in later rounds. (Figure: the defense resource on node i can be recycled or reallocated by the defender.)
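A small sketch of how discounted, recycled, and accumulated defense resources on a node might be tracked; the linear update and the discount value are assumptions, since the slides only state that a discount factor applies:

```python
# Illustrative sketch (assumed functional form): carried-over defense
# resource on a node is discounted before new budget is added each round.

def next_defense_resource(current, recycled_in, new_allocation, discount=0.8):
    """Defense resource on a node at the start of the next round.

    current        -- resource already deployed on the node
    recycled_in    -- resource reallocated to this node from elsewhere
    new_allocation -- fresh budget spent on the node this round
    discount       -- assumed discount factor in (0, 1] applied to
                      carried-over and reallocated resource
    """
    return discount * (current + recycled_in) + new_allocation

# Example: 10 units already on the node, 5 reallocated in, 3 newly spent.
print(next_defense_resource(10.0, 5.0, 3.0))   # -> 0.8 * 15 + 3 = 15.0
```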

Attacker's information
The attacker knows only part of the network topology. He can update his information after observing the result of each round's contest and the defender's messages.

Attacker's resources
The attacker can accumulate experience to increase the attack success probability of network nodes in later rounds. He can also increase his resources when he compromises network nodes, for as long as the defender has not yet repaired them.

Network topology
We consider a complex system with n nodes in a series-parallel structure. Each node consists of M components (M ≥ 1), which may be identical or different.

Network topology
A node's composition can be classified into two types (see the sketch after this list for the second type):
- A node with backup components.
- A k-out-of-m node.
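For the k-out-of-m case, a node functions only if at least k of its m components survive. Assuming independent components with identical survival probability p (an i.i.d. assumption made here purely for illustration), the node's survival probability is a binomial tail sum:

```python
from math import comb

def k_out_of_m_survival(k, m, p):
    """Probability that at least k of m independent components survive,
    each with survival probability p (i.i.d. assumption for illustration)."""
    return sum(comb(m, j) * p**j * (1 - p)**(m - j) for j in range(k, m + 1))

# Example: a 2-out-of-3 node whose components each survive with probability 0.9.
print(k_out_of_m_survival(2, 3, 0.9))   # -> 0.972
```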

Network topology
The relationship between nodes can be classified into three types (a sketch of how failures propagate along these links follows):
- Independence: a node can function on its own.
- Dependence: when a node is destroyed, the nodes that depend on it no longer operate normally.
- Interdependence: when a node is destroyed, the nodes interdependent with it no longer operate normally, and vice versa.
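One way to operationalize these relationships is to propagate failures along (inter)dependence links until no new node is disabled. The sketch below is an assumed semantics for illustration, not the thesis formulation:

```python
# Illustrative sketch: given destroyed nodes and directed dependence links
# ("a depends on b" means a stops working when b is down), compute all
# inoperative nodes. Interdependence is modeled as links in both directions;
# independent nodes simply have no links.

def inoperative_nodes(destroyed, depends_on):
    """destroyed  -- set of nodes destroyed by the attacker
    depends_on -- dict mapping each node to the set of nodes it depends on"""
    down = set(destroyed)
    changed = True
    while changed:                       # iterate to a fixed point
        changed = False
        for node, deps in depends_on.items():
            if node not in down and deps & down:
                down.add(node)
                changed = True
    return down

# Example: B depends on A; C and D are interdependent (links both ways).
deps = {"B": {"A"}, "C": {"D"}, "D": {"C"}}
print(inoperative_nodes({"A", "C"}, deps))   # -> {'A', 'B', 'C', 'D'}
```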


Problem Assumption

Problem assumption
1. The problem involves a cyber attacker and a network defender. The attacker's objective is to maximize the Average DOD; the defender's goal is to minimize it.
2. Both the attacker and the defender take actions based on the importance of each node.

Problem assumption
3. The cyber attacker has incomplete information about:
- Network topology: the attacker can only attack nodes of the network already known to him, and he keeps collecting information.
- The defender's private information: the defender does not know that the attacker knows part of it.
- The defender's system vulnerabilities: the defender does not know them.

Problem assumption
4. The attacker has private information: his budget and the defender's system vulnerabilities.
5. The defender has private information: each node's type and the network topology.
6. Both the attacker and the defender are limited by their total budgets.
7. Both the attacker and the defender may be rational or boundedly rational.

Problem assumption
8. Both the attacker and the defender know that there are two types of nodes (lower or higher valuation).
9. Both the attacker and the defender know each node's prior belief in the first round.
10. Both the attacker and the defender can update their information by Bayes' theorem after observing the result of each round's contest. The attacker can also update his information by Bayes' theorem after observing the defender's messages (a worked form of this update is given below).
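For example, the per-node type belief can be updated by Bayes' theorem after observing the outcome o_t of round t's contest; the symbols are assumed for illustration, consistent with the mixture on the "Defender's information" slide:

```latex
\theta_{i,1}^{\,t+1}
  = \frac{\theta_{i,1}^{\,t}\,\Pr(o_t \mid \text{type 1})}
         {\theta_{i,1}^{\,t}\,\Pr(o_t \mid \text{type 1})
          + \theta_{i,2}^{\,t}\,\Pr(o_t \mid \text{type 2})},
\qquad
\theta_{i,2}^{\,t+1} = 1 - \theta_{i,1}^{\,t+1} .
```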

Problem assumption
11. There are no enforceable agreements between the attacker and the defender, i.e., they cannot cooperate.
12. In each round, the defender moves first: she determines her strategy and chooses a message, which may be truth, deception, or secrecy, for each node.
13. The cost of releasing a truthful message is lower than the costs of secrecy and deception, and the cost of secrecy is lower than the cost of deception. Message-releasing costs are neither accumulated nor recycled.

Problem assumption
14. By using deceptive messages, the defender can lower the attack success probability.
15. The defender's message releasing can be classified into two types:
- A node's information can be divided into different parts, and the defender releases a message about each part.
- The defender can release a node's defensive state as a message to the attacker.
16. Only node attacks are considered (link attacks are not considered).
17. Only malicious attacks are considered (random errors are not considered).

Problem assumption
18. The cyber attacker can accumulate experience to increase the attack success probability of network nodes in later rounds.
19. The network defender can accumulate resources to decrease the attack success probability of network nodes in later rounds.
20. The attacker can increase his budget when he compromises network nodes, i.e., the compromised nodes are controlled by the attacker.
21. From the defender's point of view, the budget can be reallocated or recycled, but a discount factor applies.

Problem assumption
22. From the defender's point of view, compromised nodes can be repaired.
23. Only a static network is considered (network growth is not considered).
24. The defender uses redundant components in the system design to achieve high availability.
25. Network survivability is measured by the Average DOD value.
26. Any two nodes of the network can form an O-D pair.
27. The attack success probability is calculated by a contest success function that takes into account both parties' resource allocations on each node (one common functional form is sketched below).
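A ratio-form contest success function of the kind used in the related defense-economics literature (e.g., Hausken and Levitin) is one concrete instance of assumption 27; the exact form adopted in this formulation is not shown on the slides, so the following is an assumption. With t_i the attack resource on node i, T_i the defense resource on node i, and m ≥ 0 the contest intensity:

```latex
v_i = \frac{t_i^{\,m}}{t_i^{\,m} + T_i^{\,m}}
```

Here v_i, the attack success probability of node i, increases with the attacker's resource and decreases with the defender's resource; m controls how decisive a resource advantage is (m = 1 gives a proportional contest, and as m grows the contest approaches winner-take-all).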

Problem Formulation

Given
- The total budget of the network defender.
- The total budget of the cyber attacker.
- Both the defender and the attacker have incomplete information about each other.

Objective
Minimize the maximum damage of the network (ADOD).

Subject to
- The total budget constraint of the network defender.
- The total budget constraint of the cyber attacker.

To determine
The attacker:
- How to allocate the attack budget to each node in each round.
The defender:
- How to allocate the defense budget and which message to use for each node in each round.
- Whether to repair compromised nodes in each round.
- Whether to reallocate or recycle nodes' resources in each round.

Given parameters (parameter tables not captured in the transcript)

Decision variables (tables not captured in the transcript)

Objective function (formula not captured in the transcript)

Subject to (constraint formulas not captured in the transcript)

Thank you for listening.