Markov Game Analysis for Attack and Defense of Power Networks Chris Y. T. Ma, David K. Y. Yau, Xin Lou, and Nageswara S. V. Rao

Outline
– Motivation
– What has been done: Markov Decision Process (MDP), static game, and Stackelberg game
– Our approach: Markov game
– Experiment results
– Conclusions

Power Networks Are Important Infrastructures (and Vulnerable to Attacks)
– Growing reliance on electricity
– Aging infrastructure
– Growing number of connected digital sensing and control devices, which attract attacks in cyber space
– Hard and expensive to protect, with a limited budget
How to allocate the limited resources?
– Find the optimal deployment that maximizes the long-term payoff

Modeling the Interactions – Game-Theoretic Approaches
Static game
– Each player has a set of available actions
– The outcome and payoffs are determined by the actions of all players
– Players act simultaneously

Static Game Example
[Figure: the four joint outcomes – Defend & Attack, Defend & No attack, No defend & Attack, No defend & No attack]

Static Game Example
[Figure: 2 × 2 payoff matrix – the defender chooses Defend / No defend, the attacker chooses Attack / No attack]
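
The payoff entries on this slide did not survive extraction. As a purely hypothetical illustration of how such a 2 × 2 static game can be written down (with c_d the defense cost, c_a the attack cost, ℓ the load-shedding loss, and α the probability that an attack succeeds despite the defense), the (defender, attacker) payoffs might look like:

```latex
\begin{array}{c|cc}
 & \text{Attack} & \text{No attack} \\ \hline
\text{Defend} & (-c_d - \alpha\ell,\; \alpha\ell - c_a) & (-c_d,\; 0) \\
\text{No defend} & (-\ell,\; \ell - c_a) & (0,\; 0)
\end{array}
```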

Modeling the Interactions – Game-Theoretic Approaches
Leader–follower game (Stackelberg game)
– Defender as the leader
– Adversary as the follower
– Bi-level optimization (a minimax operation)
   Inner level: the follower maximizes its payoff given the leader's strategy
   Outer level: the leader maximizes its payoff subject to the follower's solution of the inner problem

Stackelberg Game Example
[Figure: game tree – the defender commits first (Defend / No defend), then the attacker responds (Attack / No attack)]
Static and Stackelberg games only model one-time interactions
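
A minimal sketch of the leader–follower logic on a hypothetical 2 × 2 zero-sum game (the matrix entries below are invented; they stand for the defender's expected load-shedding cost). The attacker best-responds to each defender commitment, and the defender commits to whichever action leaves it with the smallest cost:

```python
import numpy as np

# Hypothetical costs to the defender: rows = defender {Defend, No defend},
# columns = attacker {Attack, No attack}.
L = np.array([[2.0, 1.0],
              [5.0, 0.0]])

best_response = L.argmax(axis=1)             # attacker maximizes the defender's cost
leader_cost = L[np.arange(len(L)), best_response]
commit = leader_cost.argmin()                # defender anticipates the response
print("Defender commits to row", commit, "with cost", leader_cost[commit])
```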

Modeling the Interactions – Markov Decision Process
Markov Decision Process (MDP)
– The system is modeled as a set of states with Markov transitions between them
– Transitions depend on the action of a single player and on passive disruptors with known probabilistic behavior (acts of nature)

Markov Decision Process (MDP) Example (2 states, each with 2 available actions)
[Figure: state diagram with states "up" and "down"; in "up" the player chooses Defend / No defend, in "down" Recover / No recover]
An MDP only models one intelligent player
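
A minimal value-iteration sketch for this two-state MDP. All transition probabilities and rewards below are invented for illustration; only the state/action structure comes from the slide:

```python
import numpy as np

# State 0 = "up", state 1 = "down".
# Actions in "up": 0 = Defend, 1 = No defend; in "down": 0 = Recover, 1 = No recover.
P = {  # P[s][a] = [Pr(next state = 0), Pr(next state = 1)]
    0: {0: [0.95, 0.05], 1: [0.70, 0.30]},
    1: {0: [0.60, 0.40], 1: [0.00, 1.00]},
}
R = {  # immediate reward: staying up is good, being down costs load shedding
    0: {0: 1.0, 1: 1.0},
    1: {0: -2.0, 1: -3.0},
}

gamma, V = 0.9, np.zeros(2)
for _ in range(1000):  # value iteration: back up the best action in each state
    V = np.array([max(R[s][a] + gamma * np.dot(P[s][a], V) for a in (0, 1))
                  for s in (0, 1)])
print(V)  # converged long-term values under the single controlling player
```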

Weaknesses of Current Formulations
Markov Decision Process
– Only models a single rational player
Static game / Stackelberg game
– Only model one-time interactions
The security of the power grid should instead be modeled as continual interactions between two rational players

Our Approach – Markov Game
A generalization of the MDP to an adversarial setting
– Models the continual interactions between multiple players: after each transition, the players interact again in the new state with different payoffs
– Models probabilistic state transitions due to the inherent uncertainty in the underlying physical system (e.g., random acts of nature)

Problem Formulation
Defender and adversary of a power network
– A two-player zero-sum game
Game formulation:
– Adversary
   Actions: which link to attack
   Payoff: the cost of load shedding inflicted on the defender by the attack
– Defender
   Actions: which (up) link to reinforce, or which (down) link to recover
   Payoff: the negative of that load-shedding cost (zero-sum)

Problem Formulation
State of the game: the status of the system, i.e., the set of links that are currently up, e.g.,
– State 0 = all links are up
– State 1 = link 1 is down
– State 3 = links 1 & 2 are down
– …
Both players have a limited budget
– Each can only defend or attack a limited number of links at a time
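
One possible encoding of the states and budget-limited action sets (my own illustration; the paper's exact encoding may differ), for five links and a budget of one link per player per round:

```python
from itertools import combinations, product

LINKS = range(5)  # five transmission links, as in the running example

def states():
    """All link-status vectors; 'u' = up, 'd' = down (2^5 = 32 states)."""
    return list(product("ud", repeat=len(LINKS)))

def attacker_actions(state, budget=1):
    """The attacker can target up to `budget` links that are currently up."""
    up = [i for i, status in enumerate(state) if status == "u"]
    return list(combinations(up, budget))

def defender_actions(state, budget=1):
    """The defender spends its budget reinforcing up links or recovering down ones."""
    return list(combinations(LINKS, budget))

print(len(states()))                                 # 32
print(attacker_actions(("u", "u", "u", "d", "u")))   # [(0,), (1,), (2,), (4,)]
```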

Markov Game – Reward Overview
Assume five links, with link 4 both attacked and defended.
[Figure: from state (u,u,u,u,u), the attack on link 4 succeeds with probability p1, moving the system to (u,u,u,d,u); with probability 1 − p1 the defense holds and the state remains (u,u,u,u,u).]
– The immediate reward of these actions is the weighted sum of a successful attack and a successful defense
Assume that at state (u,u,u,d,u), link 4 is again both attacked and defended (recovery succeeds with probability p2 and fails with probability 1 − p2).
– The immediate reward at state (u,u,u,d,u) is then the weighted sum of a successful recovery and a failed recovery
– This immediate reward is further "propagated" back to the original state (u,u,u,u,u) with a discount factor
– Hence, actions taken in a state accrue a long-term reward
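
In symbols (my reconstruction of the backup the slide describes, using standard Markov game notation that the transcript does not spell out), the long-term value of a joint action pair (a_d, a_a) in state s is

```latex
Q(s, a_d, a_a) \;=\; R(s, a_d, a_a) \;+\; \gamma \sum_{s'} P(s' \mid s, a_d, a_a)\, V(s'),
```

so in the example, the reward earned one step later at (u,u,u,d,u) reaches the original state (u,u,u,u,u) weighted by γ · p1.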

Solving the Markov Game – Definitions
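
The equations on this slide did not survive extraction. A standard formulation for zero-sum Markov games that is consistent with the surrounding slides (the authors' exact notation may differ): the value of a state is the minimax value of its stage game,

```latex
V(s) \;=\; \max_{x \in \Delta(A_d)} \; \min_{y \in \Delta(A_a)} \; \sum_{a_d \in A_d} \sum_{a_a \in A_a} x(a_d)\, y(a_a)\, Q(s, a_d, a_a),
```

where Δ(·) denotes the set of mixed strategies over an action set and Q is the backup shown above.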

Concerning the Transition Probability
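
This slide's content was also lost in extraction. Judging from the reward-overview example, the transition probabilities presumably encode the success or failure of simultaneous attack and defense actions, e.g. (my reading, not the authors' exact statement):

```latex
P\big((u,u,u,d,u) \mid (u,u,u,u,u),\, \text{defend 4},\, \text{attack 4}\big) = p_1, \qquad
P\big((u,u,u,u,u) \mid (u,u,u,u,u),\, \text{defend 4},\, \text{attack 4}\big) = 1 - p_1.
```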

Optimal Load Shedding
Formulated as a constrained optimization problem under the physical constraints of stable power flow
– p: power (load or generation)
– z: changes in power distribution
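
The optimization itself is not in the transcript. A common DC power-flow load-shedding program of this general shape (an assumption on my part, not necessarily the authors' exact model) minimizes the total shed load z subject to nodal balance and line limits:

```latex
\begin{aligned}
\min_{z,\,\theta} \quad & \textstyle\sum_i z_i \\
\text{s.t.} \quad & B\theta = p - z \quad \text{(DC power-flow balance)}, \\
& |b_{ij}(\theta_i - \theta_j)| \le f_{ij}^{\max} \quad \text{for every up link } (i,j), \\
& 0 \le z_i \le p_i \quad \text{at load buses}.
\end{aligned}
```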

Finding the Optimal Strategy – Solving a Linear Program
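
The linear program is not reproduced in the transcript; what is presumably solved in each state is the standard LP for the maximizing player of a zero-sum matrix game. A scipy-based sketch (my illustration, not the authors' code):

```python
import numpy as np
from scipy.optimize import linprog

def solve_matrix_game(A):
    """Optimal mixed strategy and value for the row (maximizing) player
    of a zero-sum matrix game with m x n payoff matrix A."""
    m, n = A.shape
    # Decision variables: x (m row probabilities) followed by v (game value).
    c = np.zeros(m + 1)
    c[-1] = -1.0                                 # linprog minimizes, so minimize -v
    # For every attacker column j: v - sum_i A[i, j] * x[i] <= 0.
    A_ub = np.hstack([-A.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    A_eq = np.zeros((1, m + 1))
    A_eq[0, :m] = 1.0                            # probabilities sum to one
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]    # x >= 0, v unrestricted
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[-1]

# Sanity check on matching pennies: value 0, uniform strategy.
x, v = solve_matrix_game(np.array([[1.0, -1.0], [-1.0, 1.0]]))
print(x, v)  # -> [0.5 0.5] ~0.0
```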

Solving the Markov Game – Value Iteration
A dynamic program (value iteration) solves the Markov game
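
A minimal sketch of that dynamic program, in the style of Shapley's value iteration and reusing solve_matrix_game from the previous sketch; the state, payoff, and transition containers are illustrative assumptions:

```python
import numpy as np

def markov_game_value_iteration(states, n_defend, n_attack, R, P,
                                gamma=0.3, tol=1e-6):
    """Value iteration for a zero-sum Markov game.
    R[s]: n_defend x n_attack immediate-payoff matrix for state s.
    P[s][(i, j)]: dict {next_state: probability} for joint action (i, j).
    Returns converged state values and the defender's mixed strategy per state."""
    V = {s: 0.0 for s in states}
    while True:
        V_new, policy = {}, {}
        for s in states:
            # Stage-game matrix: immediate payoff plus the discounted
            # expected value of the successor states.
            Q = np.array([[R[s][i, j] + gamma * sum(pr * V[s2]
                             for s2, pr in P[s][(i, j)].items())
                           for j in range(n_attack)]
                          for i in range(n_defend)])
            # Each stage game is itself solved by the LP above.
            policy[s], V_new[s] = solve_matrix_game(Q)
        if max(abs(V_new[s] - V[s]) for s in states) < tol:
            return V_new, policy
        V = V_new
```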

Experiment Results
[Figure: link diagram of the test network, shown in state {u,u,u,u,u}]
Links 4 and 5 both connect to a generator, and the generator at bus 4 has the higher output

Experiment Results
[Tables: payoff matrices of state {u,u,u,u,u} for the static game and for the Markov game (γ = 0.3)]

Conclusions
– A Markov game models the attack and defense of a power network between two rational players
– The results show that a player's optimal action depends not only on the current state but also on later states
   – Looking ahead is needed to obtain the optimal long-term benefit