Using Markov Process in the Analysis of Intrusion Tolerant Systems Quyen L. Nguyen CS 795 – Computer Security Architectures.


References
1. Sheldon M. Ross. "Introduction to Probability Models". Academic Press.
2. Kishor Shridharbhai Trivedi. "Probability and Statistics with Reliability, Queuing, and Computer Science Applications", 2nd Edition. Wiley-Interscience.
3. Bharat B. Madan, Katerina Goseva-Popstojanova, Kalyanaraman Vaidyanathan, and Kishor S. Trivedi. "A Method for Modeling and Quantifying the Security Attributes of Intrusion Tolerant Systems". Performance Evaluation 56 (2004).
4. Khin Mi Mi Aung, Kiejin Park, and Jong Sou Park. "A Model of ITS Using Cold Standby Cluster". ICADL 2005, LNCS 3815, pp. 1-10, 2005.
5. Alex Hai Wang, Su Yan, and Peng Liu. "A Semi-Markov Survivability Evaluation Model for Intrusion Tolerant Database Systems". International Conference on Availability, Reliability and Security.
6. Quyen Nguyen and Arun Sood. "Quantitative Approach to Tuning of a Time-Based Intrusion-Tolerant System Architecture". WRAITS 2009, Lisbon, Portugal.

Note: State diagrams and matrix snapshots in subsequent slides are taken from [3], [4], and [5].
11/03/2010

Outline
- Markov Chain
  - Semi-Markov Process (SMP)
- Analysis Model of ITS
  - Mean Time to Security Failure (MTTSF)
  - Availability
- SCIT
- Cluster
- ITDB

Stochastic Process
- Given that it rains today, will it rain or shine tomorrow?
- Given that it is sunny today, will it rain or shine tomorrow?

Markov Process
- State space: {rainy, sunny}
- Parameter space: X_1, X_2, ...
- Markov property: the next state depends only on the current state
  - p_ij = P(X_{n+1} = j | X_n = i, X_{n-1} = i_{n-1}, ..., X_0 = i_0) = P(X_{n+1} = j | X_n = i)
- Transition probability matrix: P = [p_ij], with Σ_j p_ij = 1 for every i
- Markov Chain: finite state space
- Discrete-time and continuous-time variants
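The two-state weather chain above can be written down directly as a transition matrix; a minimal sketch in NumPy, where the numeric transition probabilities (0.6, 0.3, etc.) are illustrative assumptions rather than values from the slides:

```python
import numpy as np

# Two-state weather chain: index 0 = rainy, 1 = sunny.
# Transition probabilities are illustrative assumptions.
P = np.array([
    [0.6, 0.4],   # rainy -> rainy, rainy -> sunny
    [0.3, 0.7],   # sunny -> rainy, sunny -> sunny
])

# Every row of a transition probability matrix sums to 1.
assert np.allclose(P.sum(axis=1), 1.0)

# Markov property: tomorrow's distribution depends only on today's,
# via one multiplication by P.
today = np.array([1.0, 0.0])   # it rains today
tomorrow = today @ P
print(tomorrow)                # [0.6 0.4]
```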

Steady-state Probabilities
- Stationary process: transition probabilities independent of n
  - P(X_{n+1} = j | X_n = i) = P(X_n = j | X_{n-1} = i)
- Chapman-Kolmogorov for the n-step transition matrix: P^(n) = P^n
- P^n converges to steady-state values as n → ∞
- Solution of system (1) of equations:
  - x·P = x
  - Σ_i x_i = 1
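System (1) can be solved numerically by stacking x·(P − I) = 0 with the normalization constraint; a sketch using the same assumed weather matrix:

```python
import numpy as np

P = np.array([[0.6, 0.4],    # assumed weather chain, as before
              [0.3, 0.7]])

# Solve x.P = x together with sum(x) = 1: stack (P^T - I) with a row of
# ones and solve the (over-determined but consistent) linear system.
n = P.shape[0]
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.zeros(n + 1)
b[-1] = 1.0
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x)   # steady-state distribution

# Chapman-Kolmogorov check: every row of P^n approaches x as n grows.
assert np.allclose(np.linalg.matrix_power(P, 100)[0], x)
```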

Semi-Markov Process
- Time spent in state i is a random variable with mean µ_i
  - If the time spent in each state is exactly 1, the SMP reduces to a Markov chain.
- Embedded DTMC with steady-state probabilities π_i
- Proportion of time in state i:
  - P_i = (π_i · µ_i) / Σ_j (π_j · µ_j)    (2)
- Steps to solve an SMP:
  - Solve the steady-state probabilities of the embedded DTMC using system (1)
  - Apply (2)
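Formula (2) is a one-liner once π is known; π and µ below are assumed example values:

```python
import numpy as np

# Embedded-DTMC steady-state probabilities pi and mean sojourn times mu;
# both vectors are assumed example values.
pi = np.array([3/7, 4/7])
mu = np.array([2.0, 5.0])   # state 0 lasts 2 time units on average, state 1 lasts 5

# Formula (2): long-run proportion of time the SMP spends in each state.
P_time = (pi * mu) / np.sum(pi * mu)
print(P_time)
```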

Modeling ITS
- Modeling steps:
  - Identify states
  - Identify transitions
  - Assign transition probabilities

ITS State Diagram [3]

ITS: Embedded DTMC [3]

DTMC Transition Probability Matrix [3]
- p_1 = 1 - p_a
- p_2 = 1 - p_m - p_u
- p_3 = 1 - p_s - p_g

Calculating Availability [3]
- A = 1 - (P_FS + P_F + P_UC)
- The transition diagram and the formula depend on the attack scenario and on the metric to compute.
- Example: DoS attack; remove the unused states MC and FS:
  - A = 1 - (P_F + P_UC)

Availability: Numerical Examples [3]
- A is a decreasing function of P_a and an increasing function of h_G.

Absorbing and Transient States
- If p_ii = 1 (equivalently, p_ij = 0 for all j ≠ i), then i is an absorbing state.
  - Example: the complete system failure state.
- Arrange the transition probability matrix in canonical form, with Q containing only the transitions between transient states.

Example of Absorbing State

Visit Times
- k-step transition probability matrix: P^k
- Σ_k Q^k = I + Q + Q^2 + ... converges to (I - Q)^-1 = M = [m_ij]
- (I - Q)^-1 = M ↔ M(I - Q) = I ↔ M = I + MQ
- Theorem: let X_ij be the number of visits to state j, starting from state i, before reaching an absorbing state; then E[X_ij] = m_ij
- Starting from state 1, V = (V_1, V_2, ..., V_n) can be solved from the system of equations:
  - V = q + V·Q, with q = (1, 0, ..., 0)
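The fundamental-matrix relations above can be checked numerically; a small sketch with an assumed 2-state Q:

```python
import numpy as np

# Q: transitions among transient states only (assumed 2-state example);
# the missing row mass is the probability of jumping to an absorbing state.
Q = np.array([[0.0, 0.5],
              [0.2, 0.0]])

# Fundamental matrix M = I + Q + Q^2 + ... = (I - Q)^-1;
# m_ij = expected visits to transient state j starting from i.
M = np.linalg.inv(np.eye(2) - Q)
print(M)

# Expected visit counts when the chain starts in state 0:
# V = q + V.Q  <=>  V = q.(I - Q)^-1 with q = (1, 0).
q = np.array([1.0, 0.0])
V = q @ M
print(V)
```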

Calculating MTTSF
- Determine the absorbing states: {UC, FS, GD, F}.
- Transient states: {G, V, A, MC, TR}
- Form the transition matrix Q comprising only the transient states.
- Compute the visit times V_i using the equations:
  - v = q + v·Q
- MTTSF = v·µ

ITS: Transient States [3]

MTTSF Numerical Examples [3]
- MTTSF decreases as P_a increases.
- MTTSF increases as h_G increases.

Issues
- Parameter modeling
  - Probability distribution: exponential, Weibull, etc.
- Mean value estimation

SCIT Parameters
- Online window W_o: the server accepts requests from the network.
- Grace period W_g: the server stops accepting new requests and tries to fulfill outstanding requests already in its queue.
- Exposure window: W = W_o + W_g
- N_online: number of redundant online nodes.
- N_total: total number of nodes in the cluster.
- N_total, W, and the cleansing time T_cleansing are inter-related.

SCIT: State Transition Diagram with Absorbing States
- P_a: probability of a successful attack
- P_c: probability of cleansing while in state A
- F: low chance of occurrence, but still possible:
  - The virtual machine and/or the host machine no longer respond to the Controller.
  - The Controller itself fails due to a hardware fault.
- Transition probability matrix (rows: from-state; columns: to-state):

          G       V      A      F
    G     0       1      0      0
    V   1-P_a     0     P_a     0
    A    P_c      0      0    1-P_c
    F     0       0      0      1

SCIT: MTTSF Computation
- X_a and X_t are the absorbing and transient state sets: X_a = {F} and X_t = {G, V, A}
- q: initial probabilities over the states in X_t; q = (1, 0, 0), since the process starts in state G.
- V = (V_0, V_1, V_2): number of visits to each state in X_t.
- h: mean sojourn times in each state.
- Solve the system of equations: V = q + V·Q
- Using the solutions for V, compute MTTSF_scit = V·h
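Putting the steps together for the SCIT chain; the numeric values of P_a, P_c, and the sojourn times h are assumed for illustration, while the structure of Q follows the state diagram:

```python
import numpy as np

# Transient states X_t = (G, V, A). From the SCIT diagram:
# G -> V w.p. 1; V -> G w.p. 1-P_a, V -> A w.p. P_a; A -> G w.p. P_c
# (the remaining 1-P_c from A leads to the absorbing failure state F).
Pa, Pc = 0.3, 0.6               # assumed parameter values
Q = np.array([
    [0.0,    1.0, 0.0],
    [1 - Pa, 0.0, Pa ],
    [Pc,     0.0, 0.0],
])

q = np.array([1.0, 0.0, 0.0])   # the process starts in state G
h = np.array([10.0, 2.0, 1.0])  # assumed mean sojourn times (time units)

# Visit counts: V = q + V.Q  <=>  V = q.(I - Q)^-1, then MTTSF = V.h
V = q @ np.linalg.inv(np.eye(3) - Q)
MTTSF = V @ h
print(MTTSF)
```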

SCIT: MTTSF Expression
- P_a ↓ → MTTSF_scit ↑
- P_c ↑ → MTTSF_scit ↑
- How to make P_a ↓ and P_c ↑?

SCIT: Relationship between P_a and W
- Modeling malicious attack arrivals:
  - Assumption: non-staged attacks
  - Attack arrivals ~ Poisson(λ)
- Then the inter-arrival time Y between attacks is exponentially distributed:
  - P(Y ≤ W) = 1 - e^(-λW)
- P(Y ≤ W) is also the probability that an attack occurs within the exposure window.
- Then:
  - P_a ≤ P(Y ≤ W)
  - → P_a ≤ 1 - e^(-λW)

SCIT: Relationship between P_c and W
- The resident time of the attack is modeled as a "service" time Z with rate μ.
- Assume Z is exponentially distributed: P(Z > W) = e^(-μW)
- The probability that the service time exceeds W is limited by the fact that the system moves out of state A due to the cleansing mode:
  - P(Z > W) ≤ P_c ↔ P_c ≥ e^(-μW)
- The system cannot "serve" more than the arriving attacks: μ ≤ λ.
- Then: e^(-μW) ≥ e^(-λW).
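The two bounds can be tabulated for a few exposure-window lengths; λ and μ here are assumed rates:

```python
import math

# Assumed rates: attack arrivals Poisson with rate lam per hour, attack
# resident ("service") time exponential with rate mu, where mu <= lam.
lam, mu = 0.5, 0.2
Ws = (1.0, 2.0, 4.0)                              # exposure-window lengths
Pa_upper = [1 - math.exp(-lam * W) for W in Ws]   # bound: Pa <= 1 - e^(-lam W)
Pc_lower = [math.exp(-mu * W) for W in Ws]        # bound: Pc >= e^(-mu W)
for W, pa, pc in zip(Ws, Pa_upper, Pc_lower):
    print(f"W={W}: Pa <= {pa:.3f}, Pc >= {pc:.3f}")
```

Shrinking W lowers the attack-success bound and raises the cleansing bound, which is exactly the lever the next slide exploits.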

SCIT: MTTSF and W
- W ↓ → (P_a ≤ 1 - e^(-λW)) ↓
- W ↓ → (P_c ≥ e^(-μW)) ↑
- Then: W ↓ → MTTSF_scit ↑
- MTTSF_SCIT ≥ F(W), where F(W) is a decreasing function of W.
- Significance: engineer an instance of the SCIT architecture by tuning W in order to increase or decrease the value of MTTSF_SCIT.

SCIT: MTTSF Trend

SCIT Failure State
- Is state F really absorbing?
  - The chance of compromising the Controller is minimal, thanks to its one-way data interface.
  - The system automatically recovers back to the G state.
- Use a Semi-Markov Process with an embedded DTMC (Discrete-Time Markov Chain) to compute the steady-state Availability (being in a state without security faults).

SCIT: Availability
- Solve for the DTMC steady-state probability vector y = (y_0, y_1, y_2, y_3) over the states {G, V, A, F}:
  - y = y·P
  - Σ_i y_i = 1

SCIT: Availability and Exposure Window
- Compute the SMP steady-state probability π_F for state F:
  - π_F = y_3·h_3 / y·h, with h = (h_0, h_1, h_2, h_3) extended to include the mean sojourn time h_3 for state F.
- Availability = 1 - π_F
- Availability monotonically decreases with P_a but increases with P_c.
- Using the same line of reasoning and the same Poisson attack-arrival assumption as for MTTSF_SCIT above, we can also conclude that decreasing the exposure window will increase Availability.
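A sketch of the whole availability computation, assuming F returns to G with probability 1 and assumed values for P_a, P_c, and the sojourn times h:

```python
import numpy as np

# Full DTMC over (G, V, A, F), with F no longer absorbing: assume the
# system recovers from F back to G with probability 1. P_a, P_c and the
# sojourn times h are assumed values for illustration.
Pa, Pc = 0.3, 0.6
P = np.array([
    [0.0,    1.0, 0.0, 0.0   ],
    [1 - Pa, 0.0, Pa,  0.0   ],
    [Pc,     0.0, 0.0, 1 - Pc],
    [1.0,    0.0, 0.0, 0.0   ],
])

# Embedded DTMC steady state: y = y.P with sum(y) = 1.
n = 4
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.zeros(n + 1)
b[-1] = 1.0
y, *_ = np.linalg.lstsq(A, b, rcond=None)

# SMP probability of state F weights y by the mean sojourn times h.
h = np.array([10.0, 2.0, 1.0, 3.0])
pi_F = (y[3] * h[3]) / (y @ h)
availability = 1 - pi_F
print(availability)
```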

Rejuvenation: Single System [4]
- Rejuvenation: stop the software, clean the internal state, restart the service.
- Reconfiguration: patching, anti-virus, access control (IP blocking, port blocking, session drop, content filtering), traffic control by limiting bandwidth.
- Both may be needed, depending on the situation.

Rejuvenation: Transition Probability [4]
- Equation system:
  - π = π·P and Σ_i π_i = 1, for π_i, i ∈ {H, I, J, C, F}.
- A = 1 - (π_F + π_J + π_C)
- The paper uses balance equations of the probabilities leaving and entering a state.
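The balance-equation approach can be sketched on a hypothetical 5-state instance; the transition probabilities below are assumptions for illustration, not the paper's values:

```python
import numpy as np

# Hypothetical 5-state rejuvenation chain over (H, I, J, C, F);
# all transition probabilities are assumed for illustration.
states = ["H", "I", "J", "C", "F"]
P = np.array([
    [0.90, 0.10, 0.00, 0.00, 0.00],  # H: healthy
    [0.00, 0.00, 0.50, 0.40, 0.10],  # I: infected
    [1.00, 0.00, 0.00, 0.00, 0.00],  # J: rejuvenation, back to healthy
    [1.00, 0.00, 0.00, 0.00, 0.00],  # C: reconfiguration, back to healthy
    [1.00, 0.00, 0.00, 0.00, 0.00],  # F: failure, repaired to healthy
])

# Balance equations pi = pi.P plus normalization sum(pi) = 1.
n = len(states)
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.zeros(n + 1)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

# A = 1 - (pi_F + pi_J + pi_C): time spent rejuvenating, reconfiguring,
# or failed does not count as available.
availability = 1 - (pi[4] + pi[2] + pi[3])
print(availability)
```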

Rejuvenation: Cluster Analysis [4]
- Use an SMP for modeling, with state space:
  - X_s = {(1,1), (I,1), (J,1), (C,1), (F,1), (0,1), (0,I), (0,J), (0,C), (F,F)}
- d is the solution of the DTMC equations: d = d·P and Σ_i d_i = 1
- The SMP probabilities π_i then follow from the time-proportion formula (2).
- A = 1 - (π_(F,1) + π_(F,F))
- Deadline D on the mean sojourn time d_i·h_i.
- Indicator variable Y:
  - Y_i = 0 if d_i·h_i ≤ D and Y_i = 1 if d_i·h_i > D
- Survivability:
  - S = A - [Y_(J,1)·π_(J,1) + Y_(C,1)·π_(C,1) + Y_(0,J)·π_(0,J) + Y_(0,C)·π_(0,C)]

Rejuvenation: Numerical Results [4]
- As the probabilities for (Rj,1), (Rc,1) or (0,Rj), (0,Rc) increase, availability and survivability decrease.

Rejuvenation: Numerical Results [4]
- Changes in survivability vs. changes in rejuvenation when attacked.
- No significant difference between deadlines when the probability is < 0.4.

Coping Ability: Numerical Results [4]
- Survivability is maximized when the primary and secondary servers detect abnormal behavior early.

Intrusion Tolerance DB [5]

ITDB: State Transition [5]
- Integrity: fraction of time when all accessible data are clean
  - I = π_G + π_Q + π_R
- Availability: fraction of time when all clean data are accessible
  - A = π_G + π_R

ITDB: False Alarm Rate [5]
- ITDB maintains I and A even at a high false-alarm rate.
- I and A degrade as the false-alarm rate increases.

ITDB: Detection Rate [5]
- ITDB depends on the detection probability P_d.
- When P_d = 0, I and A are at a low level.
- As P_d increases, I and A go up.
- ITDB can maintain I and A at some level even at a low detection rate.

ITDB: Attack Rate [5]
- Heavy attack: h_G = 5.
- Compare "good" and "poor" systems in terms of P_d, P_fa, h_I, h_Q, h_R.
- When the attack rate increases, observe:
  - I and A
  - Q and R

Summary
- What is a Markov Process?
- How to model an ITS using a Semi-Markov Process?
- How to calculate MTTSF based on the model?
- Application to SCIT analysis
- Rejuvenation cluster analysis
- ITDB analysis

Thank You!