Flows and Networks (158052)
Richard Boucherie, Stochastic Operations Research -- TW
wwwhome.math.utwente.nl/~boucherierj/onderwijs/158052/158052.html
Introduction to the theory of flows in complex networks: both stochastic and deterministic aspects.
Size: 5 ECTS
16 lectures: 8 by R.J. Boucherie focusing on stochastic networks, 8 by W. Kern focusing on deterministic networks
Common problem: how to optimize resource allocation so as to maximize the flow of items through the nodes of a complex network.
Material: handouts / downloads
Exam: exercises / (take-home) exam
References: see website
Motivation and main question
Motivation:
– Production / storage system: C:\Flexsim Demo\tutorial\Tutorial 3.fsm
– Internet: Thomas Bonald's animation of TCP (www-sop.inria.fr/mistral/personnel/Thomas.Bonald/tcp_eng.html); trailer at http://www.warriorsofthe.net/
Main questions: how to allocate servers / capacity to nodes, or how to route jobs through the system, so as to maximize system performance, such as throughput, sojourn time and utilization?
Aim: optimal design of a Jackson network
Consider an open Jackson network with given transition rates. Assume that the service rates and arrival rates are given, and let the costs per time unit for a job residing at queue j and the costs for routing a job from station i to station j be given.
(i) Formulate the design problem (allocation of routing probabilities) as an optimisation problem.
(ii) Provide the solution to this problem.
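A minimal sketch of how such a design problem could be written down, assuming holding costs c_j per job per time unit at queue j, routing costs r_{ij}, external arrival rates \lambda_j, service rates \mu_j, and routing probabilities p_{ij} as decision variables (this notation is illustrative, not taken from the slide):

\[
\min_{\{p_{ij}\}} \; \sum_{j} c_j \, \frac{\rho_j}{1-\rho_j} \;+\; \sum_{i,j} r_{ij} \, \gamma_i \, p_{ij}
\quad \text{s.t.} \quad
\gamma_j = \lambda_j + \sum_{i} \gamma_i \, p_{ij}, \qquad
\rho_j = \gamma_j / \mu_j < 1, \qquad
\sum_{j} p_{ij} \le 1, \; p_{ij} \ge 0,
\]

where \rho_j/(1-\rho_j) is the equilibrium mean number of jobs at queue j in an open Jackson network, so the objective is the total cost per time unit.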
Flows and networks: stochastic networks
Contents
1. Introduction; Markov chains
2. Birth-death processes; Poisson process, simple queue; reversibility; detailed balance
3. Output of simple queue; tandem network; equilibrium distribution
4. Jackson networks; partial balance
5. Sojourn time of simple queue and tandem network
6. Performance measures for Jackson networks: throughput, mean sojourn time, blocking
7. Application: service rate allocation for throughput optimisation; application: optimal routing
Today:
– Introduction / motivation of the course
– Discrete-time Markov chain
– Continuous-time Markov chain
– Next
– Exercises
AEX index
– Continuous, per minute, per day
– Random process: what causes an increase / decrease?
– Probability of level 300 or 400 in December 2004?
– Given level 350: buy or sell?
– Markov chain: random walk
Gambler's ruin
Gambling game: on any turn
– Win €1 w.p. p = 0.4
– Lose €1 w.p. 1 - p = 0.6
– Continue to play until the fortune reaches €N
– If the fortune reaches €0 you must stop
– X_n = amount after n plays
– For all n, X_n has the Markov property: the conditional distribution of X_{n+1} given the entire history (X_0, …, X_n) depends only on X_n, i.e. P(X_{n+1} = j | X_n = i, X_{n-1} = i_{n-1}, …, X_0 = i_0) = P(X_{n+1} = j | X_n = i)
– X_n is a discrete-time Markov chain
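A minimal simulation sketch of this chain, assuming p = 0.4 and that both €0 and €N are stopping states; the function name, seed and starting fortune are chosen here only for illustration:

```python
import random

def simulate_gamblers_ruin(start, N, p=0.4, seed=42):
    """Simulate one path of the gambler's ruin chain until it hits 0 or N."""
    rng = random.Random(seed)
    x = start
    path = [x]
    while 0 < x < N:
        # win EUR 1 with probability p, lose EUR 1 with probability 1 - p
        x += 1 if rng.random() < p else -1
        path.append(x)
    return path

# Example: start with a fortune of EUR 2, play until EUR 0 or EUR 5
print(simulate_gamblers_ruin(start=2, N=5))
```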
Markov chain
– X_n is time-homogeneous: the transition probability p(i,j) = P(X_{n+1} = j | X_n = i) does not depend on n
– State space S: all possible states; for the gambler's ruin S = {0, 1, …, N}
– For N = 5: the transition matrix P = (p(i,j)) is given below
– Property: p(i,j) ≥ 0 and sum_j p(i,j) = 1 for every i
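The transition matrix referred to above, with states ordered 0, 1, …, 5 and p = 0.4; treating both 0 and 5 as absorbing is an assumption consistent with the stopping rule on the previous slide:

\[
P =
\begin{pmatrix}
1 & 0 & 0 & 0 & 0 & 0\\
0.6 & 0 & 0.4 & 0 & 0 & 0\\
0 & 0.6 & 0 & 0.4 & 0 & 0\\
0 & 0 & 0.6 & 0 & 0.4 & 0\\
0 & 0 & 0 & 0.6 & 0 & 0.4\\
0 & 0 & 0 & 0 & 0 & 1
\end{pmatrix}.
\]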
Markov chain: equilibrium distribution
– n-step transition probability: p^(n)(i,j) = P(X_n = j | X_0 = i)
– Evaluate via the Chapman-Kolmogorov equation: p^(n+m)(i,j) = sum_k p^(n)(i,k) p^(m)(k,j)
– n-step transition matrix: P^(n) = P^n
– Initial distribution pi_0; distribution at time n: pi_n(j) = sum_i pi_0(i) p^(n)(i,j)
– Matrix form: pi_n = pi_0 P^n
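As a small numerical illustration, the distribution at time n can be obtained as pi_0 P^n; a sketch using numpy for the gambler's ruin matrix above, with starting state and horizon chosen arbitrarily:

```python
import numpy as np

# Transition matrix of the gambler's ruin chain with N = 5, p = 0.4 (states 0..5)
P = np.array([
    [1.0, 0.0, 0.0, 0.0, 0.0, 0.0],
    [0.6, 0.0, 0.4, 0.0, 0.0, 0.0],
    [0.0, 0.6, 0.0, 0.4, 0.0, 0.0],
    [0.0, 0.0, 0.6, 0.0, 0.4, 0.0],
    [0.0, 0.0, 0.0, 0.6, 0.0, 0.4],
    [0.0, 0.0, 0.0, 0.0, 0.0, 1.0],
])

pi_0 = np.zeros(6)
pi_0[2] = 1.0                                   # start with a fortune of EUR 2

n = 20
pi_n = pi_0 @ np.linalg.matrix_power(P, n)      # pi_n = pi_0 P^n
print(pi_n)   # the mass concentrates on the absorbing states 0 and 5
```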
Markov chain: classification of states
– j is reachable from i if there exists a path from i to j
– i and j communicate when j is reachable from i and i is reachable from j
– State i is absorbing if p(i,i) = 1
– State i is transient if there exists j such that j is reachable from i and i is not reachable from j
– State i is recurrent if the process returns to i infinitely often (= non-transient state)
– State i is periodic with period k > 1 if k is the smallest number such that all paths from i to i have a length that is a multiple of k
– Aperiodic state: recurrent state that is not periodic
– Ergodic Markov chain: all states communicate, are recurrent and aperiodic (irreducible, aperiodic)
Markov chain: equilibrium distribution
Assume the Markov chain is ergodic. Then the equilibrium distribution pi exists and is independent of the initial state:
– stationary distribution: pi(j) = sum_i pi(i) p(i,j) for all j in S
– normalising: sum_j pi(j) = 1
– interpretation: probability flux; in equilibrium the probability flux out of each state equals the probability flux into that state
Discrete-time Markov chain: summary
– stochastic process X(t) with countable or finite state space S
– Markov property
– time-homogeneous: transition probabilities p(i,j) independent of t
– irreducible: each state in S reachable from any other state in S
– assume ergodic (irreducible, aperiodic)
– global balance equations (equilibrium equations): pi(j) = sum_i pi(i) p(i,j), j in S
– a solution that can be normalised is the equilibrium distribution
– if the equilibrium distribution exists, then it is unique and is the limiting distribution
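A small sketch of solving the global balance equations pi = pi P together with the normalisation sum_j pi(j) = 1; the 3-state transition matrix is purely illustrative, not taken from the slides:

```python
import numpy as np

# Illustrative 3-state transition matrix (each row sums to 1)
P = np.array([
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.2, 0.4, 0.4],
])

n = P.shape[0]
# Stack the balance equations (P^T - I) pi^T = 0 and the normalisation sum(pi) = 1
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.zeros(n + 1)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)        # equilibrium distribution
print(pi @ P)    # equals pi, confirming global balance
```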
Random walk (http://www.math.uah.edu/stat/)
Gambling game over an infinite time horizon: on any turn
– Win €1 w.p. p
– Lose €1 w.p. 1 - p
– Continue to play
– X_n = amount after n plays
– State space S = {…, -2, -1, 0, 1, 2, …}
– Time-homogeneous Markov chain
– For each finite time n the distribution of X_n is well defined
– But does an equilibrium distribution exist?
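A short aside on why the last question is subtle (standard reasoning, assumed here rather than taken from the slide): the global balance equations of this walk read

\[
\pi(j) = p\,\pi(j-1) + (1-p)\,\pi(j+1), \qquad j \in \mathbb{Z},
\]

with general solution \pi(j) = A + B\,(p/(1-p))^{j} for p \neq 1/2 (and A + Bj for p = 1/2); no choice of A and B gives \sum_j \pi(j) = 1, so no equilibrium distribution exists.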
Today:
– Introduction / motivation of the course
– Discrete-time Markov chain
– Continuous-time Markov chain
– Next
– Exercises
Continuous-time Markov chain
– stochastic process X(t) with countable or finite state space S
– Markov property: the conditional distribution of the future given the present state and the past depends only on the present state
– transition probability: P_t(i,j) = P(X(t+s) = j | X(s) = i)
– irreducible: each state in S reachable from any other state in S
– Chapman-Kolmogorov equation: P_{t+s}(i,j) = sum_k P_t(i,k) P_s(k,j)
– transition rates or jump rates: q(i,j) = lim_{t→0} P_t(i,j)/t for i ≠ j
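For a finite state space these notions are tied together by a standard fact (added here for orientation, not stated on the slide in this form): collecting the rates into the generator Q with Q(i,j) = q(i,j) for i ≠ j and Q(i,i) = -\sum_{j \neq i} q(i,j), the transition probabilities are the matrix exponential

\[
P_t = e^{tQ} = \sum_{n \ge 0} \frac{(tQ)^n}{n!}.
\]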
Continuous-time Markov chain
– Chapman-Kolmogorov equation: P_{t+s}(i,j) = sum_k P_t(i,k) P_s(k,j)
– transition rates or jump rates q(i,j)
– Kolmogorov forward equations (for a regular chain): d/dt P_t(i,j) = sum_{k≠j} P_t(i,k) q(k,j) - P_t(i,j) sum_{k≠j} q(j,k)
– Global balance equations: pi(j) sum_{k≠j} q(j,k) = sum_{k≠j} pi(k) q(k,j), j in S
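The global balance equations follow from the forward equations by a standard one-line argument, sketched here for completeness: writing pi_t for the distribution at time t and setting its derivative to zero,

\[
\frac{d}{dt}\pi_t(j) = \sum_{k \neq j} \pi_t(k)\,q(k,j) - \pi_t(j) \sum_{k \neq j} q(j,k) = 0
\;\Longrightarrow\;
\pi(j) \sum_{k \neq j} q(j,k) = \sum_{k \neq j} \pi(k)\,q(k,j).
\]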
Continuous-time Markov chain: summary
– stochastic process X(t) with countable or finite state space S
– Markov property
– transition rates q(i,j) independent of t
– irreducible: each state in S reachable from any other state in S
– assume ergodic and regular
– global balance equations (equilibrium equations): pi(j) sum_{k≠j} q(j,k) = sum_{k≠j} pi(k) q(k,j), j in S
– pi is the stationary distribution
– a solution that can be normalised is the equilibrium distribution
– if the equilibrium distribution exists, then it is unique and is the limiting distribution
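As in the discrete-time case, the continuous-time global balance equations can be solved numerically; a sketch with a purely illustrative 3-state generator (not from the slides):

```python
import numpy as np

# Illustrative generator matrix Q: off-diagonal entries are jump rates q(i,j),
# diagonal entries make each row sum to 0.
Q = np.array([
    [-3.0,  2.0,  1.0],
    [ 1.0, -4.0,  3.0],
    [ 2.0,  2.0, -4.0],
])

n = Q.shape[0]
# Solve pi Q = 0 together with sum(pi) = 1
A = np.vstack([Q.T, np.ones(n)])
b = np.zeros(n + 1)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)        # equilibrium distribution
print(pi @ Q)    # approximately the zero vector (global balance)
```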
Today:
– Introduction / motivation of the course
– Discrete-time Markov chain
– Continuous-time Markov chain
– Next
– Exercises
Next time: [R+SN] sections 1.1 – 1.3. Continuous-time Markov chains: birth-death processes; Poisson process, simple queue; reversibility; detailed balance.
Today:
– Introduction / motivation of the course
– Discrete-time Markov chain
– Continuous-time Markov chain
– Next
– Exercises
Exercises: [R+SN] 1.1.2, 1.1.4, 1.1.5
– Give a proof of the Chapman-Kolmogorov equation.
– For the random walk starting in X_0 = 0, determine the possible states after N = 10 plays, and compute P(X_N = j) for all feasible j.
– Consider the random walk with reflecting boundary, which has transition probabilities similar to those of the random walk, except in state 0: when the process attempts to jump to the left in state 0, it stays at 0. The transition probabilities are p(i, i+1) = p for i ≥ 0, p(i, i-1) = 1 - p for i ≥ 1, and p(0, 0) = 1 - p. Show that a solution of the global balance equations is the geometric form sketched below. For which values of p is this an equilibrium distribution?
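A standard candidate solution for the reflected walk, assumed here for concreteness (not confirmed by the slide), is

\[
\pi(j) = \left(\frac{p}{1-p}\right)^{j} \pi(0), \qquad j = 0, 1, 2, \ldots,
\]

with \pi(0) fixed by the normalisation, where possible.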