COMS 6998-06 Network Theory, Fall 2010
Week 5: October 6, 2010
Dragomir R. Radev
Wednesdays, 6:10-8 PM, 325 Pupin Terrace
(8) Random walks and electrical networks
Random walks
–Stochastic process on a graph
–Transition matrix E
–Simplest case: a regular 1-D graph on the states 0, 1, 2, 3, 4, 5
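As a concrete illustration of the transition matrix E in this simplest case, here is a minimal sketch (assuming states 0..N on a line, with 0 and N treated as absorbing, as in the gambler's-ruin setting on the next slide):

```python
import numpy as np

def path_transition_matrix(N=5, p=0.5):
    """Transition matrix of a 1-D random walk on states 0..N,
    with 0 and N treated as absorbing boundary states."""
    E = np.zeros((N + 1, N + 1))
    E[0, 0] = E[N, N] = 1.0        # absorbing boundaries
    for x in range(1, N):
        E[x, x - 1] = 1 - p        # step left
        E[x, x + 1] = p            # step right
    return E

print(path_transition_matrix())
```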
Gambler’s ruin
–A has N pennies and B has M pennies.
–At each turn, one of them wins a penny from the other with probability 0.5.
–Stop when one of them loses all of his money.
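A small simulation sketch of the gambler's-ruin game (the function name and parameters are illustrative, not from the slides); in the fair case the estimate should approach N/(N+M), the value derived via harmonic functions on the following slides:

```python
import random

def gamblers_ruin(N, M, trials=100_000, p=0.5):
    """Estimate the probability that A, starting with N pennies,
    wins all N + M pennies before going broke."""
    wins = 0
    for _ in range(trials):
        a = N
        while 0 < a < N + M:
            a += 1 if random.random() < p else -1
        wins += (a == N + M)
    return wins / trials

print(gamblers_ruin(2, 3))   # fair game: should be close to 2/5
```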
Harmonic functions
–p(0) = 0
–p(N) = 1
–p(x) = ½·p(x-1) + ½·p(x+1), for 0 < x < N
–(in general, replace ½ with the bias of the walk)
Simple electrical circuit
–[Figure: nodes 0, 1, 2, 3, 4, 5 connected in a line by resistors]
–V(0) = 0, V(N) = 1
Arbitrary resistances
The Maximum principle
–Let f(x) be a harmonic function on a sequence S.
–Theorem: a harmonic function f(x) defined on S takes on its maximum value M and its minimum value m on the boundary.
–Proof: let M be the largest value of f, and let x be an interior point with f(x) = M. Since f(x) is the average of f(x-1) and f(x+1), and neither can exceed M, both must equal M. If x-1 is still an interior point, continue with x-2, and so on; in the worst case we reach x = 0, for which f(x) = M.
The Uniqueness principle
–Let f(x) and g(x) be harmonic functions on a sequence S.
–Theorem: if f(x) = g(x) on the boundary points B, then f(x) = g(x) for all x.
–Proof: let h(x) = f(x) - g(x). If x is an interior point, then h(x) = ½·h(x+1) + ½·h(x-1), so h is harmonic. But h(x) = 0 for x in B, and therefore, by the Maximum principle, its minimum and maximum values are both 0. Thus h(x) = 0 for all x, which proves that f(x) = g(x) for all x.
How to find the unique solution?
–Try a linear function: f(x) = x/N. This function has the following properties:
 –f(0) = 0
 –f(N) = 1
 –½·(f(x-1) + f(x+1)) = x/N = f(x)
Reaching the boundary
–Theorem: the random walker will reach either 0 or N.
–Proof: let h(x) be the probability that the walker never reaches the boundary. Then h(x) = ½·h(x+1) + ½·h(x-1), so h(x) is harmonic. Also h(0) = h(N) = 0. By the Maximum principle, h(x) = 0 for all x.
Number of steps to reach the boundary
–m(0) = 0, m(N) = 0
–m(x) = ½·m(x+1) + ½·m(x-1) + 1, for interior points x
–The expected number of steps until a one-dimensional random walk goes up to b or down to -a is ab.
–Examples: (a=1, b=1) gives 1 step on average; (a=2, b=2) gives 4.
–(Also: the displacement varies as sqrt(t), where t is time.)
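A hedged simulation check of the claim that the expected exit time is ab (illustrative code, not from the slides):

```python
import random

def mean_exit_time(a, b, trials=50_000):
    """Estimate the expected number of steps until a fair walk started
    at 0 first hits -a or +b; the theory says this expectation is a*b."""
    total = 0
    for _ in range(trials):
        pos, steps = 0, 0
        while -a < pos < b:
            pos += random.choice((-1, 1))
            steps += 1
        total += steps
    return total / trials

print(mean_exit_time(1, 1))   # should be close to 1
print(mean_exit_time(2, 2))   # should be close to 4
```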
Fair games
–In the penny game, after one turn the expected fortune is ½(k-1) + ½(k+1) = k.
–Fair game = martingale.
–Now if A has x pennies out of a total of N, his expected final fortune is (1 - p(x))·0 + p(x)·N = p(x)·N.
–Is the game fair if A can stop when he wants? No – e.g., stop playing when your fortune reaches $x.
(9) Method of relaxations and other methods for computing harmonic functions
2-D harmonic functions
–[Figure: a 2-D lattice with interior points x, y, z and boundary points held at 0 and 1]
The original Dirichlet problem
–Distribution of temperature in a sheet of metal: one end of the sheet has temperature t = 0, the other end t = 1.
–Laplace’s differential equation: ∂²u/∂x² + ∂²u/∂y² = 0
–This is a special (steady-state) case of the (transient) heat equation: ∂u/∂t = k·(∂²u/∂x² + ∂²u/∂y²)
–In general, the solutions to this equation are called harmonic functions.
–[Figure: a region with boundary values U = 1 on one part of the boundary and U = 0 on the rest]
Learning harmonic functions
–The method of relaxations (sketched below):
 –Use a discrete approximation.
 –Assign fixed values to the boundary points.
 –Assign arbitrary values to all other points.
 –Adjust each point's value to be the average of its neighbors.
 –Repeat until convergence.
–Monte Carlo method:
 –Perform a random walk on the discrete representation.
 –Compute f as the probability of a random walk ending in a particular fixed point.
–Linear equation method
–Eigenvector methods:
 –Look at the stationary distribution of a random walk.
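A minimal sketch of the method of relaxations on a small square grid (the grid size, boundary values, and iteration count are illustrative assumptions):

```python
import numpy as np

def relax(u, fixed, iters=2000):
    """Method of relaxations: repeatedly replace every free grid point
    by the average of its four neighbours; `fixed` marks boundary points
    whose values never change."""
    u = u.astype(float)
    for _ in range(iters):
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                      np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u[~fixed] = avg[~fixed]
    return u

# Toy Dirichlet problem: top edge held at 1, the other edges at 0.
n = 6
u = np.zeros((n, n))
fixed = np.zeros((n, n), dtype=bool)
fixed[0, :] = fixed[-1, :] = fixed[:, 0] = fixed[:, -1] = True
u[0, :] = 1.0
print(np.round(relax(u, fixed), 3))
```

The wrap-around behavior of np.roll does not matter here because only the interior (non-fixed) points are ever updated.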
Monte Carlo solution
–The least accurate of these methods.
–Example: roughly 10,000 runs are needed for an accuracy of 0.01.
Example
–x = ¼·(y + z + 0 + 0)
–y = ½·(x + 1)
–z = ⅓·(x + 1 + 1)
–In matrix form: Af = u, so f = A⁻¹u.
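The linear equation method for this example, written out with NumPy (a sketch; the variable ordering x, y, z is an assumption):

```python
import numpy as np

# Rewrite the averaging equations as a linear system A f = u:
#   4x - y - z = 0
#  -x + 2y     = 1
#  -x     + 3z = 2
A = np.array([[ 4., -1., -1.],
              [-1.,  2.,  0.],
              [-1.,  0.,  3.]])
u = np.array([0., 1., 2.])
x, y, z = np.linalg.solve(A, u)   # solve() is preferable to forming A^-1
print(x, y, z)                    # x = 7/19, y = 13/19, z = 15/19
```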
Effective resistance
–Series: R = R1 + R2
–Parallel: conductances add, C = C1 + C2, i.e. 1/R = 1/R1 + 1/R2, so R = R1·R2/(R1 + R2)
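Two tiny helpers capturing the series and parallel rules (an illustrative sketch):

```python
def series(*rs):
    """Resistances in series add."""
    return sum(rs)

def parallel(*rs):
    """In parallel, conductances (1/R) add."""
    return 1.0 / sum(1.0 / r for r in rs)

print(series(1.0, 2.0))     # 3.0
print(parallel(1.0, 2.0))   # 0.666... = R1*R2 / (R1 + R2)
```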
Example
–[Figure: Doyle and Snell, page 45]
Electrical networks and random walks
–An ergodic (connected) Markov chain with transition matrix P and stationary vector w: w = Pw.
–[Figure: a four-node network a, b, c, d built from 1 Ω and 0.5 Ω resistors; from Doyle and Snell 2000]
Electrical networks and random walks
–[Figure: the same network of 1 Ω and 0.5 Ω resistors, with a 1 V battery connected between a and b]
–v_x is the probability that a random walk starting at x will reach a before reaching b.
–The random walk interpretation allows us to use Monte Carlo methods to solve electrical circuits.
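A hedged Monte Carlo sketch of this correspondence: the walker moves to a neighbour with probability proportional to the conductance (1/R) of the connecting resistor, and v_x is estimated as the fraction of walks from x that reach a before b. The network below is an illustrative four-node example, not the exact circuit drawn on the slide.

```python
import random

# Conductances C[x][y] = 1 / R_xy (illustrative values).
C = {
    'a': {'c': 1.0, 'd': 2.0},
    'b': {'c': 2.0, 'd': 1.0},
    'c': {'a': 1.0, 'b': 2.0, 'd': 1.0},
    'd': {'a': 2.0, 'b': 1.0, 'c': 1.0},
}

def step(x):
    """Move to a neighbour with probability proportional to conductance."""
    nbrs = list(C[x].items())
    r = random.random() * sum(c for _, c in nbrs)
    for y, c in nbrs:
        r -= c
        if r <= 0:
            return y
    return nbrs[-1][0]

def v(x, trials=50_000):
    """Estimate v_x = P(random walk from x reaches a before b)."""
    hits = 0
    for _ in range(trials):
        pos = x
        while pos not in ('a', 'b'):
            pos = step(pos)
        hits += (pos == 'a')
    return hits / trials

print(v('c'), v('d'))
```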
Energy-based interpretation
–The energy dissipated through a single resistor is i_xy² · R_xy.
–Over the entire circuit, E = ½ · Σ_xy i_xy² · R_xy (the ½ because each edge is counted twice in the sum).
–The flow from x to y is defined by Ohm's law: i_xy = (v_x - v_y) / R_xy.
–Conservation of energy: the total energy dissipated equals (v_a - v_b) · i_a, the voltage across the source terminals times the current entering at a.
Thomson’s principle
–One can show that the energy dissipated by the unit current flow (with v_b = 0 and i_a = 1) is R_eff, the effective resistance between a and b.
–This value is the smallest among all possible unit flows from a to b (Thomson’s Principle).
Eigenvectors and eigenvalues
–An eigenvector is an implicit “direction” for a matrix: Av = λv, where v (the eigenvector) is non-zero, though λ (the eigenvalue) can be any complex number in principle.
–Computing eigenvalues: solve det(A - λI) = 0.
Eigenvectors and eigenvalues
–Example (the 2×2 matrix implied by the computation): A = [[-1, 3], [2, 0]]
–det(A - λI) = (-1 - λ)·(-λ) - 3·2 = 0
–Then λ² + λ - 6 = 0, so λ₁ = 2 and λ₂ = -3.
–For λ₁ = 2, the solutions satisfy x₁ = x₂.
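A quick numerical check of this example with NumPy (the 2×2 matrix is reconstructed from the determinant computation above, since the slide image itself is not available):

```python
import numpy as np

A = np.array([[-1., 3.],
              [ 2., 0.]])
vals, vecs = np.linalg.eig(A)
for lam, v in zip(vals, vecs.T):
    # lambda = 2 comes with x1 = x2; lambda = -3 with 2*x1 = -3*x2
    print(lam, v)
```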
Stochastic matrices
–Stochastic matrices: each row (or column) adds up to 1 and no value is less than 0.
–Example: [matrix shown on the slide]
–The largest eigenvalue of a stochastic matrix E is real: λ₁ = 1.
–For λ₁, the left (principal) eigenvector is p; the right eigenvector is 1 (the all-ones vector).
–In other words, Eᵀp = p.
Markov chains
–A homogeneous Markov chain is defined by an initial distribution x₀ and a Markov kernel E.
–Path = a sequence of states (x₀, x₁, …, xₙ); the distribution after step i is xᵢ = xᵢ₋₁·E.
–The probability of a path can be computed as a product of the one-step probabilities.
–Random walk = find xⱼ given x₀, E, and j.
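A small sketch of computing a path probability as a product of one-step probabilities (the matrix and path are chosen only for illustration):

```python
import numpy as np

def path_probability(x0, path, E):
    """Probability of the path x0 -> path[0] -> path[1] -> ...
    under Markov kernel E."""
    prob, state = 1.0, x0
    for nxt in path:
        prob *= E[state, nxt]
        state = nxt
    return prob

E = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.5, 0.3],
              [0.0, 0.4, 0.6]])
print(path_probability(0, [1, 2, 2], E))   # 0.5 * 0.3 * 0.6 = 0.09
```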
Stationary solutions
–The fundamental Ergodic Theorem for Markov chains [Grimmett and Stirzaker 1989] says that the Markov chain with kernel E has a stationary distribution p under three conditions:
 –E is stochastic
 –E is irreducible
 –E is aperiodic
–To make these conditions true:
 –All rows of E add up to 1 (and no value is negative)
 –Make sure that E is strongly connected
 –Make sure that E is not bipartite
–Example: PageRank [Brin and Page 1998] uses “teleportation” (sketched below).
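A sketch of the PageRank-style “teleportation” fix: mix the transition matrix with the uniform matrix so the chain becomes irreducible and aperiodic (the damping value 0.85 is the commonly cited choice, assumed here):

```python
import numpy as np

def add_teleportation(E, alpha=0.85):
    """Mix a row-stochastic matrix E with the uniform transition matrix;
    the result is strictly positive, hence irreducible and aperiodic."""
    n = E.shape[0]
    return alpha * E + (1 - alpha) * np.ones((n, n)) / n

# A 3-node directed cycle: irreducible but periodic without teleportation.
E = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [1., 0., 0.]])
G = add_teleportation(E)
print(G.sum(axis=1))   # rows still sum to 1
```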
Example
–[Figure: an 8-node graph E, with the walk’s distribution shown at t=0 and t=1]
–This graph E has a second graph E′ (not drawn) superimposed on it: E′ is the uniform transition graph.
Eigenvectors
–An eigenvector is an implicit “direction” for a matrix: Ev = λv, where v is non-zero, though λ can be any complex number in principle.
–The largest eigenvalue of a stochastic matrix E is real: λ₁ = 1.
–For λ₁, the left (principal) eigenvector is p; the right eigenvector is 1 (the all-ones vector).
–In other words, Eᵀp = p.
Computing the stationary distribution
function PowerStatDist(E):
begin
  p(0) = u    (or p(0) = [1, 0, …, 0])
  i = 1
  repeat
    p(i) = Eᵀ p(i-1)
    L = ||p(i) - p(i-1)||₁
    i = i + 1
  until L < ε
  return p(i)
end
–Solution for the stationary distribution.
–Convergence rate is O(m).
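A runnable Python version of the pseudocode above (a sketch; the tolerance and the test matrix are illustrative):

```python
import numpy as np

def power_stat_dist(E, eps=1e-10, max_iter=10_000):
    """Power iteration for the stationary distribution of a
    row-stochastic matrix E."""
    n = E.shape[0]
    p = np.full(n, 1.0 / n)                  # p(0) = uniform vector u
    for _ in range(max_iter):
        p_next = E.T @ p
        if np.abs(p_next - p).sum() < eps:   # L1 distance
            return p_next
        p = p_next
    return p

E = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.5, 0.3],
              [0.0, 0.4, 0.6]])
p = power_stat_dist(E)
print(p, p @ E)   # p should satisfy pE = p, i.e. E^T p = p
```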
Example
–[Figure: the same 8-node graph, with the walk’s distribution shown at t=0, t=1, and t=10]
More dimensions
–Polya’s theorem says that a 1-D random walk is recurrent and that a 2-D walk is also recurrent.
–However, a 3-D walk has a non-zero escape probability (p ≈ 0.66).
–http://mathworld.wolfram.com/PolyasRandomWalkConstants.html
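A crude Monte Carlo sketch of the 3-D case (walk length is capped, so this slightly underestimates the true return probability of about 0.34, i.e. an escape probability of about 0.66):

```python
import random

def return_probability_3d(trials=5_000, max_steps=2_000):
    """Estimate the probability that a simple 3-D walk ever returns
    to the origin (capped at max_steps steps per walk)."""
    moves = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    returns = 0
    for _ in range(trials):
        x = y = z = 0
        for _ in range(max_steps):
            dx, dy, dz = random.choice(moves)
            x, y, z = x + dx, y + dy, z + dz
            if x == y == z == 0:
                returns += 1
                break
    return returns / trials

print(return_probability_3d())   # roughly 0.3, so escape probability ~ 0.66
```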