Performance Evaluation


Performance Evaluation, Lecture 2: Epidemics
Giovanni Neglia, INRIA – EPI Maestro, 10 January 2013

There is more: independence

Theorem 2: Under the assumptions of Theorem 1, and if the collection of objects at time 0, (X_1^N(0), X_2^N(0), ..., X_N^N(0)), is exchangeable, then for any fixed n and t:

lim_{N→∞} Prob(X_1^N(t) = i_1, X_2^N(t) = i_2, ..., X_n^N(t) = i_n) = μ_{i_1}(t) μ_{i_2}(t) ... μ_{i_n}(t)

This is the MF Independence Property, a.k.a. the Decoupling Property or Propagation of Chaos.

Remarks

"(X_1^N(0), X_2^N(0), ..., X_N^N(0)) exchangeable" means that all the states that have the same occupancy measure m_0 have the same probability.

lim_{N→∞} Prob(X_1^N(t) = i_1, X_2^N(t) = i_2, ..., X_n^N(t) = i_n) = μ_{i_1}(t) μ_{i_2}(t) ... μ_{i_n}(t)

Application: Prob(X_1^N(k) = i_1, X_2^N(k) = i_2, ..., X_n^N(k) = i_n) ≈ μ_{i_1}(kε(N)) μ_{i_2}(kε(N)) ... μ_{i_n}(kε(N))

Probabilistic interpretation of the occupancy measure (SI model with p = 10^-4, N = 100):

Prob(nodes 1, 17, 21 and 44 infected at k = 200) = μ_2(k p N)^4 = μ_2(2)^4 ≈ (1/3)^4

What if nodes 1, 17, 21 and 44 are surely infected at k = 0?
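The decoupling approximation above can be checked numerically. The sketch below simulates the SI chain with the slide's parameters (p = 10^-4, N = 100, k = 200 slots) and compares the empirical joint probability that four given nodes are infected with the fourth power of the empirical marginal; the uniformly random initial infected set with I(0) = N/10 and the number of runs are illustrative assumptions, not from the slides.

```python
import random

# Monte Carlo check of the decoupling property on the SI model.
# Slide parameters: p = 1e-4, N = 100, k = 200 slots.
# Assumption (not in the slides): I(0) = N/10 infected nodes chosen
# uniformly at random, and 200 independent runs.

def si_step(infected, N, p):
    """One slot: each susceptible meets each infected node w.p. p."""
    q = 1.0 - (1.0 - p) ** len(infected)   # P(at least one meeting)
    new = {s for s in range(N) if s not in infected and random.random() < q}
    return infected | new

def estimate(runs=200, N=100, p=1e-4, k=200, tracked=(1, 17, 21, 44)):
    joint = marginal = 0.0
    for _ in range(runs):
        infected = set(random.sample(range(N), N // 10))
        for _ in range(k):
            infected = si_step(infected, N, p)
        joint += all(v in infected for v in tracked)
        marginal += sum(v in infected for v in tracked) / len(tracked)
    return joint / runs, marginal / runs

joint, marginal = estimate()
# Decoupling predicts joint ≈ marginal**4 for large N
print("joint:", joint, " product of marginals:", marginal ** 4)
```

By exchangeability all four marginals are equal, so one averaged marginal raised to the fourth power is the decoupling prediction for the joint probability.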

On approximation quality (p = 10^-4, I(0) = N/10, 10 runs)

[Figure: % of infected nodes, I(k)/N, vs. iterations (×10^3 iterations for N = 10, ×10^2 iterations for N = 100, iterations for N = 10000).]

On approximation quality (p = 10^-4, I(0) = N/10, 10 runs): model vs. simulations. Why this difference?

[Figure: % of infected nodes, I(k)/N, vs. iterations for N = 10, N = 100 and N = 10000, with the fluid-model curve superimposed on the simulations.]

Why the difference? N should be large (the larger the better) and p should be small: p(N) = p0/N^2. For N = 10^4, p = 10^-4 is not small enough! What if we use the correct scaling?

On approximation quality (p = 10^4/N^2, I(0) = N/10, 10 runs): model vs. simulations.

[Figure: % of infected nodes, I(k)/N, vs. iterations for N = 10^4, 10^5 and 10^6.]

Lesson: you need to check (usually by simulation) in which parameter region the fluid model is a good approximation, e.g. N > N* and p < p*/N^2.

[Figure: the validity region in the (N, p) plane, below the curve p = p* N*^2 / N^2, with p* = 10^-4 and N* = 10^2.]
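Such a check can be scripted directly: simulate the SI chain at the scaling p(N) = p0/N^2 and compare I(k)/N against the fluid solution μ_2(t) = i_0 e^t/(1 - i_0 + i_0 e^t) of di/dt = i(1 - i), evaluated at the rescaled time t = k·p·N. The values of p0, N, i_0 and the horizon below are illustrative assumptions.

```python
import math
import random

# Sketch of the lesson: simulate the SI chain at the scaling
# p(N) = p0/N^2 and compare I(k)/N with the fluid solution at
# t = k*p*N.  p0, the values of N and i0 are illustrative assumptions.

def si_fraction(N, p, k, i0_frac=0.1):
    """Fraction of infected nodes after k slots of the SI chain."""
    infected = int(i0_frac * N)
    for _ in range(k):
        q = 1.0 - (1.0 - p) ** infected    # per-susceptible infection prob.
        infected += sum(random.random() < q for _ in range(N - infected))
    return infected / N

def fluid(t, i0=0.1):
    """Logistic solution of di/dt = i(1 - i), i(0) = i0."""
    return i0 * math.exp(t) / (1.0 - i0 + i0 * math.exp(t))

p0, t_target = 1.0, 2.0
for N in (100, 1000):
    p = p0 / N ** 2                        # correct scaling
    k = int(t_target * N / p0)             # so that t = k*p*N = t_target
    print(N, si_fraction(N, p, k), "fluid:", fluid(t_target))
# The simulated fraction should approach fluid(2.0) as N grows
```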

SIS model

[Diagram: five nodes, labelled 1–5, each either Susceptible or Infected.]

At each slot there is a probability p that two given nodes meet, and a probability r that an infected node recovers.

Let's practise:
- Can we propose a Markov model for SIS? (No need to calculate the transition matrix.)
- If it is possible, derive a mean field model for SIS. Do we need some scaling?

Study of the SIS model. We need p(N) = p0/N^2 and r(N) = r0/N. If we choose ε(N) = 1/N, we get

di(t)/dt = p0 i(t)(1 - i(t)) - r0 i(t)

[Figure: di/dτ vs. i for p0 > r0 and for p0 < r0.]

Epidemic threshold: p0/r0
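A minimal numerical check of this threshold is forward-Euler integration of the mean field ODE above, using the parameter values of the simulation slide that follows (p0 = 0.1 with r0 = 0.05 and r0 = 0.125); the step size and horizon are arbitrary numerical choices.

```python
# Euler integration of the mean field ODE
#   di/dt = p0*i*(1 - i) - r0*i
# to illustrate the epidemic threshold p0/r0.  The step size dt and
# horizon T are arbitrary numerical choices.

def sis_ode(i0, p0, r0, T=200.0, dt=0.01):
    """Forward-Euler solution i(T) of the SIS fluid limit."""
    i = i0
    for _ in range(int(T / dt)):
        i += dt * (p0 * i * (1.0 - i) - r0 * i)
    return i

# Parameters of the simulation slide: p0 = 0.1, r0 in {0.05, 0.125}
print(sis_ode(0.1, p0=0.1, r0=0.05))   # p0 > r0: settles near 1 - r0/p0 = 0.5
print(sis_ode(0.1, p0=0.1, r0=0.125))  # p0 < r0: the epidemic dies out (≈ 0)
```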

[Figure: simulated fraction of infected nodes for N = 80, p0 = 0.1, with r0 = 0.05 and r0 = 0.125.] Able to predict this value?

Study of the SIS model. With μ_2(t) = i(t):

di(t)/dt = p0 i(t)(1 - i(t)) - r0 i(t)

Equilibria (di(t)/dt = 0): i(∞) = 1 - r0/p0 or i(∞) = 0. If i(0) > 0 and p0 > r0, then μ_2(∞) = 1 - r0/p0.

Study of the SIS model. If i(0) > 0 and p0 > r0, then μ_2(∞) = 1 - r0/p0, and

Prob(X_1^(N)(k) = 1) ≈ i(kε(N)), so Prob(X_1^(N)(∞) = 1) ≈ μ_2(∞) = i(∞) = 1 - r0/p0.

But what is the steady-state distribution of the MC? (0,0,0,...,0) is the unique absorbing state and it is reachable from any other state. Who is lying here?
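The apparent contradiction can be seen in simulation: for moderate N the chain hovers near the mean field equilibrium 1 - r0/p0 for a very long time before absorption (the absorption time grows roughly exponentially with N), so the two answers hold on different time scales. All parameter values in the sketch below (N, p0, r0, horizon, initial fraction) are illustrative assumptions, not values from the slides.

```python
import random

# Nobody is lying: the finite-N SIS chain is eventually absorbed in the
# all-susceptible state, but starting from a positive fraction of
# infected nodes it first spends a very long time near the mean field
# equilibrium 1 - r0/p0.  All parameter values are illustrative.

def sis_fraction(N, p, r, k, i0_frac=0.2):
    """Fraction of infected nodes after k slots of the SIS chain."""
    infected = int(i0_frac * N)
    for _ in range(k):
        q = 1.0 - (1.0 - p) ** infected   # infection prob. per susceptible
        new = sum(random.random() < q for _ in range(N - infected))
        rec = sum(random.random() < r for _ in range(infected))
        infected += new - rec
    return infected / N

N, p0, r0 = 200, 1.0, 0.5
p, r = p0 / N ** 2, r0 / N                # scaling from the slides
frac = sis_fraction(N, p, r, k=50 * N)    # rescaled time t = k/N = 50
print(frac)  # typically hovers near 1 - r0/p0 = 0.5, far from absorption at 0
```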

Back to the convergence result. Define M^(N)(t), with t real, such that M^(N)(kε(N)) = M^(N)(k) for integer k, and M^(N)(t) is affine on [kε(N), (k+1)ε(N)]. Consider the differential equation dμ(t)/dt = f(μ), with μ(0) = m_0.

Theorem: for all T > 0, if M^(N)(0) → m_0 in probability (or mean square) as N → ∞, then

sup_{0 ≤ t ≤ T} ||M^(N)(t) - μ(t)|| → 0 in probability (or mean square).
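As an illustration of the statement, the sketch below builds the affine interpolation M^(N)(t) from one run of the SI chain (with ε(N) = pN, as in the earlier slides) and measures its sup-distance from the fluid solution μ(t) on [0, T]; the values of N, p0, T and the initial fraction are illustrative choices.

```python
import math
import random

# Build the affine interpolation M^(N)(t) from one run of the SI chain
# (eps(N) = p*N) and measure the sup-distance to the fluid solution
# mu(t) on [0, T].  N, p0, T and i0 are illustrative choices.

def si_path(N, p, k_max, i0_frac=0.1):
    """Occupancy path: path[k] = M^(N)(k) = I(k)/N for the SI chain."""
    infected, path = int(i0_frac * N), []
    for _ in range(k_max + 1):
        path.append(infected / N)
        q = 1.0 - (1.0 - p) ** infected
        infected += sum(random.random() < q for _ in range(N - infected))
    return path

def interp(path, t, eps):
    """Affine interpolation of the path on [k*eps, (k+1)*eps]."""
    k = min(int(t / eps), len(path) - 2)
    lam = t / eps - k
    return (1.0 - lam) * path[k] + lam * path[k + 1]

def mu(t, i0=0.1):
    """Logistic solution of d(mu)/dt = mu(1 - mu)."""
    return i0 * math.exp(t) / (1.0 - i0 + i0 * math.exp(t))

N, p0, T = 1000, 1.0, 2.0
p = p0 / N ** 2
eps = p * N                               # eps(N) = p0/N
path = si_path(N, p, int(T / eps))
grid = [T * j / 1000 for j in range(1001)]
print(max(abs(interp(path, t, eps) - mu(t)) for t in grid))
# The sup-distance shrinks (in probability) as N grows
```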

Some examples

Nothing to do with t = ∞? Theorem 3: the limits, as N diverges, of the stationary distributions of M^(N) are included in the Birkhoff center of the ODE. The Birkhoff center is the closure of all the recurrent points of the ODE (independently of the initial conditions). What is the Birkhoff center of di(t)/dt = p0 i(t)(1 - i(t)) - r0 i(t)?

Nothing to do with t = ∞? Theorem 3: the limits, as N diverges, of the stationary distributions of M^(N) are included in the Birkhoff center of the ODE. Corollary: if the ODE has a unique stationary point m*, the sequence of stationary distributions of M^(N) converges to m*.

Outline:
- Limit of Markovian models
- Mean field (or fluid) models: exact results, extensions
- Epidemics on graphs (reference: ch. 9 of Barrat, Barthélemy, Vespignani, "Dynamical Processes on Complex Networks", Cambridge University Press)
- Applications to networks