Performance Evaluation Lecture 2: Epidemics Giovanni Neglia INRIA – EPI Maestro 16 December 2013.


Epidemics on a graph: SI model Susceptible Infected One of the neighbours is randomly selected and infected

Any interest for Computer Networks?  Flooding  Epidemic Routing in Delay Tolerant Networks  Chunk distribution in a P2P streaming system (push algorithms): a copy of the chunk is pushed to a randomly selected neighbour (figure: nodes with / without the chunk)

Time-slotted synchronous behaviour Susceptible Infected Every T an infected node transmits the disease to a randomly selected neighbour

How do you model it?  A Markov Chain  System state at time k is a vector specifying whether each node is infected (1) or not (0), e.g. (1,1,0,0,0); state-space size: 2^5  Probability transitions among states, e.g. Prob((1,1,0,0,0) -> (1,1,1,0,0)) = 1/

Asynchronous behaviour Every infected node transmits the disease on each of its links according to a Poisson Process with rate β

How to model it?  A Continuous-time Markov process/Chain (C-MC)  System state at time t is a vector specifying if every node is infected (1) or not (0)  Rate transitions between state pairs, e.g. q((1,1,1,0,0) -> (1,1,1,1,0)) = 2β

What to study and how  P, the transition matrix (2^N x 2^N)  Transient analysis  π(k+1) = π(k) P,  π(k+1) = π(0) P^(k+1)  Stationary distribution (equilibrium)  π = π P  if the Markov chain is irreducible and aperiodic  Computational cost: O((2^N)^3) if we solve the system; O(K A) if we adopt the iterative procedure, where A is the number of non-null entries in P and K is the number of iterations; in our case A ∈ O(|E|)
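As a minimal sketch of the iterative procedure π(k+1) = π(k) P, here is a toy 2-state chain in Python (the transition probabilities below are made up for illustration, not taken from the epidemic model):

```python
# Power iteration pi(k+1) = pi(k) P on a toy 2-state chain.
# Each iteration touches only the non-null entries of P, hence the O(K A) cost.

def power_iteration(P, iters=1000):
    n = len(P)
    pi = [1.0 / n] * n  # arbitrary initial distribution
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

P = [[0.9, 0.1],
     [0.2, 0.8]]        # irreducible and aperiodic
pi = power_iteration(P)  # converges to the equilibrium (2/3, 1/3)
```

For this chain the equilibrium can also be computed by hand from π = π P, which gives π = (2/3, 1/3).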

Similar for C-MC  Stationary distribution (equilibrium)  π = π P (D-MC)  π Q = 0 (C-MC)  Transient analysis  π(k+1) = π(k) P (D-MC)  dπ(t)/dt = π(t) Q (C-MC)
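For a toy example of the C-MC equilibrium equation π Q = 0, consider a hypothetical 2-state chain with made-up rates a (0 -> 1) and b (1 -> 0); this is only an illustration, not part of the epidemic model:

```python
# Toy 2-state continuous-time chain: rate a from state 0 to 1, rate b back.
# Solving pi Q = 0 with pi_0 + pi_1 = 1 gives pi = (b/(a+b), a/(a+b)).

def ctmc_stationary(a, b):
    return (b / (a + b), a / (a + b))

def residual(pi, a, b):
    """Entries of pi Q, which should be (0, 0) at equilibrium."""
    Q = [[-a, a],
         [b, -b]]
    return tuple(pi[0] * Q[0][j] + pi[1] * Q[1][j] for j in range(2))

pi = ctmc_stationary(a=1.0, b=3.0)  # -> (0.75, 0.25)
```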

Outline  Limit of Markovian models  Mean Field (or Fluid) models exact results extensions to graphs applications

A motivating example: epidemics N Susceptible Infected

A motivating example: epidemics N Susceptible Infected k=1 At each slot there is a probability p that two given nodes meet. Assume meetings to be independent.

A motivating example: epidemics N Susceptible Infected k=2 At each slot there is a probability p that two given nodes meet. Assume meetings to be independent.

Delay Tolerant Networks (a.k.a. Intermittently Connected Networks)  mobile wireless networks  no path at a given time instant between two nodes, because of power constraints or fast mobility  dynamics maintain capacity when the number of nodes (N) diverges: fixed wireless networks C = Θ(sqrt(1/N)) [Gupta99]; mobile wireless networks C = Θ(1) [Grossglauser01]  a really challenging network scenario: no traditional protocol works

Some examples Inter-planetary backbone DakNet, Internet to rural area DieselNet, bus network ZebraNet, Mobile sensor networks Network for disaster relief team Military battle-field network …

Epidemic Routing  Message as a disease, carried around and transmitted  Store, Carry and Forward paradigm

How do you model it?  A Markov Chain  System state at time k is a vector specifying whether each node is infected (1) or not (0), e.g. (1,0,1,0,0); state-space size: 2^5  Probability transitions among states, e.g. Prob((1,0,1,0,0) -> (1,1,1,0,0)) = ?

Transition probabilities  At slot k, when there are I (= I(k)) infected nodes, the probability that node 2 gets infected is q_I = 1-(1-p)^I  Prob((1,0,1,0,0) -> (1,1,1,0,0)) = ?

Transition probabilities  Prob((1,0,1,0,0) -> (1,1,1,0,0)) = q_2 (1-q_2)^2, where q_I = 1-(1-p)^I

What to study and how  P, the transition matrix (2^N x 2^N)  Transient analysis  π(k+1) = π(k) P,  π(k+1) = π(0) P^(k+1)  Stationary distribution (equilibrium)  π = π P  if the Markov chain is irreducible and aperiodic  Computational cost: O((2^N)^3) if we solve the system; O(K A) if we adopt the iterative procedure, where A is the number of non-null entries in P and K is the number of iterations; in our case A ∈ O((2^N)^2)

Can we simplify the problem?  All the nodes in the same state (infected or susceptible) are equivalent  If we are interested only in the number of nodes in a given state, we can have a more succinct model  state of the system at slot k: I(k)  it is still a MC  Prob(I(k+1) = I+n | I(k) = I) = C(N-I, n) q_I^n (1-q_I)^(N-I-n), i.e. (I(k+1)-I(k) | I(k)=I) ~ Bin(N-I, q_I), with q_I = 1-(1-p)^I
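The succinct chain above can be simulated directly by sampling the binomial increment at each slot. A minimal Python sketch (the parameter values are illustrative):

```python
import random

def simulate_I(N, p, I0, steps, rng):
    """One trajectory of I(k), with I(k+1)-I(k) ~ Bin(N-I, q_I)."""
    I, history = I0, [I0]
    for _ in range(steps):
        qI = 1.0 - (1.0 - p) ** I          # prob. a given susceptible gets infected
        new = sum(1 for _ in range(N - I) if rng.random() < qI)
        I += new
        history.append(I)
    return history

rng = random.Random(42)
h = simulate_I(N=100, p=1e-4, I0=10, steps=2000, rng=rng)
```

The trajectory is nondecreasing (SI model: nodes never recover) and saturates at N.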

Some numerical examples  p=10^-4, N=10, I(0)=N/10, 10 runs [plot: % infected nodes (I(k)/N) vs iterations (x10^3), with average and confidence interval]

Some numerical examples  p=10^-4, N=100, I(0)=N/10, 10 runs [plot: % infected nodes (I(k)/N) vs iterations (x10^2), with confidence interval]

Some numerical examples  p=10^-4, N=10000, I(0)=N/10, 10 runs [plot: % infected nodes (I(k)/N) vs iterations]  The system is almost deterministic!

Summary  For a large system of interacting equivalent objects, the Markov model can be intractable…  but a deterministic description of the system seems feasible in terms of the empirical measure (% of objects in each state)  intuition: a kind of law of large numbers  Mean field models describe the deterministic limit of Markov models when the number of objects diverges

Spoiler  i^(N)(k): fraction of infected nodes at time k  Solve di(t)/dt = i(t)(1-i(t)), with i(0) = i_0  Solution: i(t) = 1/((1/i_0 - 1) e^(-t) + 1)  If i^(N)(0) = i_0, then i^(N)(k) ≈ i(k p_0/N) = 1/((1/i_0 - 1) exp(-k p_0/N) + 1) = 1/((1/i_0 - 1) exp(-k N p) + 1)
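As a sanity check (not on the original slides), one can verify numerically that the logistic expression solves di/dt = i(1-i), e.g. by comparing it with a forward Euler integration:

```python
import math

def closed_form(t, i0):
    # i(t) = 1/((1/i0 - 1) e^{-t} + 1)
    return 1.0 / ((1.0 / i0 - 1.0) * math.exp(-t) + 1.0)

def euler(i0, T, dt=1e-4):
    # integrate di/dt = i (1 - i) from 0 to T with step dt
    i = i0
    for _ in range(int(T / dt)):
        i += dt * i * (1.0 - i)
    return i

i0, T = 0.1, 5.0
err = abs(euler(i0, T) - closed_form(T, i0))  # small: the two curves agree
```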

Outline  Limit of Markovian models  Mean Field (or Fluid) models exact results extensions to graphs applications

References  Results here are for discrete time Markov Chains: Benaïm, Le Boudec, “A Class of Mean Field Interaction Models for Computer and Communication Systems”, LCA-Report  A survey with pointers to continuous time Markov processes and links to stochastic approximation and propagation of chaos: Ch. 2 of Nicolas Gast’s PhD thesis, “Optimization and Control of Large Systems, Fighting the Curse of Dimensionality”

Necessary hypothesis: Objects’ Equivalence  π(k+1) = π(k) P  A state σ = (v_1, v_2, ..., v_N), v_j ∈ V (|V| = V, finite), e.g. in our example V = {0,1}  P is invariant under any label permutation φ: P_{σ,σ'} = Prob((v_1, v_2, ..., v_N) -> (u_1, u_2, ..., u_N)) = Prob((v_φ(1), v_φ(2), ..., v_φ(N)) -> (u_φ(1), u_φ(2), ..., u_φ(N)))

Some notation and definitions  X_n^(N)(k): state of node n at slot k  M_v^(N)(k): occupancy measure of state v at slot k, M_v^(N)(k) = Σ_n 1(X_n^(N)(k) = v)/N  SI model: M_2^(N)(k) = I^(N)(k)/N = i^(N)(k), M_1^(N)(k) = S^(N)(k)/N = s^(N)(k) = 1-i^(N)(k)  M^(N)(k) = (M_1^(N)(k), M_2^(N)(k), ..., M_V^(N)(k))  SI model: (1-i^(N)(k), i^(N)(k))  f^(N)(m) = E[M^(N)(k+1)-M^(N)(k) | M^(N)(k) = m]: drift or intensity, it is the mean field

Other hypotheses  Intensity vanishes at a rate ε(N): lim_{N->∞} f^(N)(m)/ε(N) = f(m)  Second moment of the number of object transitions per slot is bounded: #transitions ≤ W^(N)(k), with E[W^(N)(k)^2 | M^(N)(k) = m] ≤ c N^2 ε(N)^2  Drift is a smooth function of m and 1/N: f^(N)(m)/ε(N) has continuous derivatives in m and in 1/N on [0,1]^V x [0,β], with β > 0

Convergence Result  Define M^(N)(t), with t real, such that M^(N)(k ε(N)) = M^(N)(k) for k integer, and M^(N)(t) is affine on [k ε(N), (k+1) ε(N)]  Consider the differential equation dμ(t)/dt = f(μ), with μ(0) = m_0  Theorem: for all T > 0, if M^(N)(0) -> m_0 in probability (/mean square) as N -> ∞, then sup_{0≤t≤T} ||M^(N)(t)-μ(t)|| -> 0 in probability (/mean square)

Convergence of random variables  The sequence of random variables X^(N) converges to X in probability if, for all δ > 0, lim_{N->∞} Prob(|X^(N) - X| > δ) = 0  The sequence X^(N) converges to X in mean square if lim_{N->∞} E[|X^(N) - X|^2] = 0  Convergence in mean square implies convergence in probability

Application to the SI model  Assumptions’ check  Nodes are equivalent  Intensity vanishes at a rate ε(N)?  f^(N)(m) = E[M^(N)(k+1)-M^(N)(k) | M^(N)(k) = m], with M_2^(N)(k) = I^(N)(k)/N = i^(N)(k), M_1^(N)(k) = 1-M_2^(N)(k)  (I^(N)(k+1)-I^(N)(k) | I^(N)(k) = I) ~ Bin(N-I, q_I) => E[I^(N)(k+1)-I^(N)(k) | I^(N)(k) = I] = q_I (N-I)  E[i^(N)(k+1)-i^(N)(k) | i^(N)(k) = i] = (1-i) q_I = (1-i)(1-(1-p)^(iN)) -> (1-i) when N diverges!

Application to the SI model  Out of the impasse: introduce a scaling for p  If p^(N) = p_0/N^a with a > 1, then (1-i)(1-(1-p^(N))^(iN)) -> 0  Consider a = 2: (1-i)(1-(1-p^(N))^(iN)) ~ (1-i) i p_0/N (for N large), so ε(N) = p_0/N and f_2(m) = f_2((s,i)) = s i = i(1-i)  Lesson to keep: often we need to introduce some parameter scaling
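The effect of the scaling can be checked numerically: with p^(N) = p_0/N^2 and ε(N) = p_0/N, the scaled drift (1-i)(1-(1-p^(N))^(iN))/ε(N) approaches i(1-i) as N grows. A small sketch (the values of i and p_0 are arbitrary):

```python
def scaled_drift(i, N, p0):
    p = p0 / N**2            # scaled meeting probability, a = 2
    eps = p0 / N             # intensity scale eps(N)
    return (1.0 - i) * (1.0 - (1.0 - p) ** (i * N)) / eps

i, p0 = 0.3, 1.0
values = [scaled_drift(i, N, p0) for N in (10, 100, 10000, 10**6)]
limit = i * (1.0 - i)        # the mean-field drift f_2 = i(1-i) = 0.21
```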

Application to the SI model  Assumptions’ check  Nodes are equivalent  Intensity vanishes at a rate ε(N) = p_0/N  Second moment of the number of object transitions per slot is bounded?  #transitions ≤ W^(N)(k), with E[W^(N)(k)^2 | M^(N)(k) = m] ≤ c N^2 ε(N)^2  W^(N)(k) = #transitions ~ Bin(N-I(k), q_I), so E[W^(N)(k)^2] = ((N-I(k)) q_I)^2 + (N-I(k)) q_I (1-q_I), which is in O(N^2 ε(N)^2)

Application to the SI model  Assumptions’ check  Nodes are equivalent  Intensity vanishes at a rate ε(N) = p_0/N  Second moment of the number of object transitions per slot is bounded  Drift is a smooth function of m and 1/N?  f_2^(N)(m)/ε(N) = (1-i)(1 - (1 - p_0/N^2)^(iN))/(p_0/N), which has continuous derivatives in i and in 1/N (not evident)

Practical use of the convergence result  Theorem: for all T > 0, if M^(N)(0) -> m_0 in probability (/mean square) as N -> ∞, then sup_{0≤t≤T} ||M^(N)(t)-μ(t)|| -> 0 in probability (/mean square), where μ(t) is the solution of dμ(t)/dt = f(μ) with μ(0) = m_0  In practice: if M^(N)(0) = m_0, then M^(N)(k) = M^(N)(k ε(N)) ≈ μ(k ε(N))

Application to the SI model  f_2(m) = f_2((s,i)) = i(1-i)  dμ_2(t)/dt = f_2(μ_2(t)) = μ_2(t)(1-μ_2(t)), with μ_2(0) = μ_{0,2}  Solution: μ_2(t) = 1/((1/μ_{0,2} - 1) e^(-t) + 1)  If i^(N)(0) = i_0, then i^(N)(k) ≈ μ_2(k ε(N)) = 1/((1/i_0 - 1) exp(-k p_0/N) + 1) = 1/((1/i_0 - 1) exp(-k N p) + 1)
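A rough numerical comparison (illustrative parameters, with p playing the role of p_0/N^2 for p_0 = 1): one simulated trajectory of i^(N)(k) stays close to the mean-field prediction above.

```python
import math, random

def mean_field(k, N, p, i0):
    # i(k) ≈ 1/((1/i0 - 1) exp(-k N p) + 1)
    return 1.0 / ((1.0 / i0 - 1.0) * math.exp(-k * N * p) + 1.0)

def simulate_fraction(N, p, i0, steps, rng):
    """Fraction of infected nodes after `steps` slots of the binomial chain."""
    I = int(i0 * N)
    for _ in range(steps):
        qI = 1.0 - (1.0 - p) ** I
        I += sum(1 for _ in range(N - I) if rng.random() < qI)
    return I / N

N, p, steps, i0 = 1000, 1e-6, 2000, 0.1   # p = p0/N^2 with p0 = 1
rng = random.Random(1)
i_sim = simulate_fraction(N, p, i0, steps, rng)
i_mf = mean_field(steps, N, p, i0)        # ≈ 0.45 for these parameters
```

For these parameters k N p = 2, so the mean-field value is 1/(9 e^(-2) + 1) ≈ 0.45; the simulated fraction fluctuates around it with spread of order 1/sqrt(N).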

Back to the numerical examples  p=10^-4, I(0)=N/10, 10 runs [plots: % infected nodes (I(k)/N) vs iterations for N=10 (x10^3), N=100 (x10^2) and N=10000]

Advantage of Mean Field  If i^(N)(0) = i_0, then i^(N)(k) ≈ μ_2(k ε(N)) = 1/((1/i_0 - 1) exp(-k p_0/N) + 1) = 1/((1/i_0 - 1) exp(-k N p) + 1), solved for each N with negligible computational cost  In general: numerically solve a system of ordinary differential equations (size = number of possible states), much simpler than solving the Markov chain