
1 Introduction to Markov chains (part 2)
Haim Kaplan and Uri Zwick. Algorithms in Action, Tel Aviv University. Last updated: May

2 Mixing time
$d(t) = \max_x \| x P^t - \pi \|_{TV}$
We can prove that $d(t)$ is monotonically decreasing in $t$.
$t_{mix}(\epsilon) = \min \{ t \mid d(t) \le \epsilon \}$, and $t_{mix} \equiv t_{mix}(1/4) = \min \{ t \mid d(t) \le 1/4 \}$.
We can prove that $t_{mix}(\epsilon) \le \lceil \log_2 (1/\epsilon) \rceil \, t_{mix}$.
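A minimal Python sketch of these definitions (our own illustration, not from the slides): it computes $d(t)$ by brute force for a small chain and finds $t_{mix}(\epsilon)$.

```python
# A brute-force illustration (ours, not from the slides) of d(t) and t_mix
# for a small chain. For a finite chain the maximum over starting
# distributions is attained at a point mass, so we maximize over rows.
import numpy as np

def tv_distance(p, q):
    """Total variation distance: half the L1 distance."""
    return 0.5 * np.abs(p - q).sum()

def d(t, P, pi):
    """d(t) = max_x || x P^t - pi ||_TV over point masses x."""
    Pt = np.linalg.matrix_power(P, t)
    return max(tv_distance(Pt[x], pi) for x in range(P.shape[0]))

def t_mix(P, pi, eps=0.25):
    """Smallest t with d(t) <= eps."""
    t = 0
    while d(t, P, pi) > eps:
        t += 1
    return t

# Example: lazy random walk on a 4-cycle; its stationary distribution is uniform.
I = np.eye(4)
P = 0.5 * I + 0.25 * (np.roll(I, 1, axis=1) + np.roll(I, -1, axis=1))
pi = np.full(4, 0.25)
print(t_mix(P, pi))              # t_mix = t_mix(1/4)
print(t_mix(P, pi, eps=1 / 16))  # at most ceil(log2(16)) * t_mix
```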

3 Back to shuffling (n cards)
- Top-in-at-Random: $\le n \ln n + \ln(4)\, n$
- Riffle Shuffle: $\le 2 \log_2 (4n/3)$
- Random Transpositions: $\le 2 n \ln(n)$
(20% is just an arbitrary constant; the precise number does not really matter; wait a few slides.)

4 Reversible Markov chain
A distribution πœ‹ is reversible for a Markov chain if βˆ€π‘–,𝑗 πœ‹ 𝑖 𝑃 𝑖𝑗 = πœ‹ 𝑗 𝑃 𝑗𝑖 (detailed balance) A Markov chain is reversible if it has a reversible distribution Lemma: A reversible distribution is a stationary distribution Proof: πœ‹ 1 , πœ‹ 2 , πœ‹ 3 , πœ‹ 4 𝑃 11 𝑃 12 𝑃 13 𝑃 14 𝑃 21 𝑃 22 𝑃 23 𝑃 24 𝑃 31 𝑃 32 𝑃 33 𝑃 34 𝑃 41 𝑃 42 𝑃 43 𝑃 44

5 Reversible Markov chain
πœ‹ 1 , πœ‹ 2 , πœ‹ 3 , πœ‹ 4 𝑃 11 𝑃 12 𝑃 13 𝑃 14 𝑃 21 𝑃 22 𝑃 23 𝑃 24 𝑃 31 𝑃 32 𝑃 33 𝑃 34 𝑃 41 𝑃 42 𝑃 43 𝑃 44 = πœ‹ 1 𝑃 11 + πœ‹ 2 𝑃 21 + πœ‹ 3 𝑃 31 + πœ‹ 4 𝑃 41 ,…,…,… = 𝑃 11 πœ‹ 1 + 𝑃 12 πœ‹ 1 + 𝑃 13 πœ‹ 1 + 𝑃 14 πœ‹ 1 ,…,…,… = πœ‹ 1 (𝑃 11 + 𝑃 12 + 𝑃 13 + 𝑃 14 ),…,…,… = (πœ‹ 1 ,…,…,…)

6 Symmetric Markov chain
A Markov chain is symmetric if $P_{ij} = P_{ji}$. What is the stationary distribution of an irreducible symmetric Markov chain? (It is uniform: the uniform distribution satisfies detailed balance when $P_{ij} = P_{ji}$.)

7 Example: Random walk on a graph
Given a connected undirected graph $G$, define a Markov chain whose states are the vertices of the graph. From a vertex $v$ we move to each of its neighbors with equal probability (e.g., probability $1/3$ from a vertex $v$ with three neighbors $v_1, v_2, v_3$).
Consider $\pi = \left( \frac{d_1}{2m}, \frac{d_2}{2m}, \ldots, \frac{d_n}{2m} \right)$, where $d_i$ is the degree of vertex $i$ and $m$ is the number of edges.

8 Example: Random walk on a graph
For $\pi = \left( \frac{d_1}{2m}, \frac{d_2}{2m}, \ldots, \frac{d_n}{2m} \right)$, detailed balance holds: for every edge $\{i, j\}$,
$\pi_i P_{ij} = \frac{d_i}{2m} \cdot \frac{1}{d_i} = \frac{1}{2m} = \frac{d_j}{2m} \cdot \frac{1}{d_j} = \pi_j P_{ji}$
Where do we use the fact that the graph is undirected?
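A small numerical check of this claim (our own sketch; the example graph is hypothetical):

```python
# A numerical check (ours; the example graph is hypothetical) that
# pi_i = d_i / (2m) is stationary and satisfies detailed balance.
import numpy as np

adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}  # undirected graph as adjacency lists
n = len(adj)
m = sum(len(nbrs) for nbrs in adj.values()) // 2    # each edge is counted twice

P = np.zeros((n, n))
for u, nbrs in adj.items():
    for v in nbrs:
        P[u, v] = 1.0 / len(nbrs)                   # equal probability to each neighbor

pi = np.array([len(adj[u]) / (2 * m) for u in range(n)])
print(np.allclose(pi @ P, pi))                      # True: pi P = pi
# Detailed balance: pi_i P_ij = 1/(2m) for every edge {i, j}
print(all(np.isclose(pi[u] * P[u, v], 1 / (2 * m)) for u in adj for v in adj[u]))
```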

9 Reversible Markov chain
If $X_0$ is drawn from $\pi$, then
$P[X_0 = s_0, X_1 = s_1, \ldots, X_j = s_j] = P[X_0 = s_j, X_1 = s_{j-1}, \ldots, X_j = s_0]$
Prove as an exercise.

10 Another major application of Markov chains

11 Sampling from large spaces
Given a distribution πœ‹ on a set 𝑆, we want to draw an object from 𝑆 with the distribution πœ‹ Say we want to estimate the average size of an independent set in a graph Suppose we could draw an independent set uniformly at random Then we can draw multiple times and use the average size of the independents sets we drew as an estimate Useful also for approximate counting

12 Markov chain Monte Carlo
Given a distribution $\pi$ on a set $S$, we want to draw an object from $S$ with distribution $\pi$.
Build a Markov chain whose stationary distribution is $\pi$. Run the chain for a sufficiently long time (until it mixes) from some starting position $x$. Your position is then a random draw from a distribution close to $\pi$: its distribution is $x P^k \approx \pi$.

13 Independent sets
Say we are given a graph $G$ and we want to sample an independent set of $G$ uniformly at random. We define a Markov chain whose states are the independent sets of $G$.

14 Independent sets
Transitions from the current independent set $I$: pick a vertex $v$ uniformly at random and flip a fair coin.
Heads → switch to $I \cup \{v\}$ if $I \cup \{v\}$ is an independent set (otherwise stay at $I$).
Tails → switch to $I \setminus \{v\}$.
Each move to a neighboring state has probability $\frac{1}{2n}$. This chain is irreducible and aperiodic (why?).
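A Python sketch of one transition of this chain (our own illustration; helper names are hypothetical):

```python
# A sketch (ours; helper names are hypothetical) of one transition of the
# independent-set chain on a graph given as adjacency lists.
import random

def can_add(I, v, adj):
    """True if I ∪ {v} is still an independent set."""
    return all(u not in I for u in adj[v])

def step(I, adj):
    """One transition from the independent set I (a frozenset)."""
    v = random.choice(list(adj))      # uniform vertex: probability 1/n
    if random.random() < 0.5:         # heads (probability 1/2)
        if can_add(I, v, adj):
            return I | {v}            # switch to I ∪ {v}
        return I                      # otherwise stay at I
    return I - {v}                    # tails: switch to I \ {v}

adj = {0: [1], 1: [0, 2], 2: [1]}     # hypothetical example: a path on 3 vertices
I = frozenset()
for _ in range(10_000):               # walk long enough to (hopefully) mix
    I = step(I, adj)
print(I)                              # an approximately uniform independent set
```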

15 Independent sets
Transitions are as on the previous slide. What is the stationary distribution? This is a symmetric chain, so the stationary distribution is uniform over the independent sets.

16 Independent sets
So if we walk sufficiently long on this chain, we obtain an independent set that is almost uniformly random… Let's generalize this.

17 Gibbs samplers
We have a distribution $\pi$ over functions $f: V \to B = \{1, 2, \ldots, 5\}$, where $V$ is the vertex set of a graph. There are $|B|^{|V|}$ such $f$'s (states). We want to sample from $\pi$.


19 Gibbs samplers
Chain: At state $f$, pick a vertex $v$ uniformly at random. There are $|B|$ states $f_{v \to 1}, \ldots, f_{v \to |B|}$ in which the assignment on $V \setminus \{v\}$ is kept fixed ($f_{v \to i}$ is $f$ with $v$ assigned to $i$). Pick $f_{v \to i}$ with probability
$\pi_v(f_{v \to i}) \equiv \frac{\pi(f_{v \to i})}{\sum_{k \in B} \pi(f_{v \to k})}$
The overall probability of moving from $f$ to $f_{v \to i}$ is therefore
$\frac{1}{n} \pi_v(f_{v \to i}) = \frac{1}{n} \cdot \frac{\pi(f_{v \to i})}{\sum_{k \in B} \pi(f_{v \to k})}$
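A Python sketch of one Gibbs transition (our own illustration; note that an unnormalized version of $\pi$ suffices, since the normalizing constant cancels in $\pi_v$):

```python
# A generic sketch (ours) of one Gibbs transition. pi_unnorm may be any
# unnormalized weight function: the normalizing constant cancels in pi_v.
import random

def gibbs_step(f, V, B, pi_unnorm):
    """Resample one uniformly chosen vertex from its conditional distribution pi_v."""
    v = random.choice(V)                     # uniform vertex: probability 1/n
    candidates, weights = [], []
    for i in B:
        g = dict(f)
        g[v] = i                             # g is f_{v -> i}
        candidates.append(g)
        weights.append(pi_unnorm(g))         # proportional to pi(f_{v -> i})
    # pick f_{v -> i} with probability pi(f_{v->i}) / sum_k pi(f_{v->k})
    return random.choices(candidates, weights=weights, k=1)[0]
```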

20 Gibbs samplers
Claim: This chain is reversible with respect to $\pi$.
Need to verify: $\forall f, f':\ \pi(f) P_{f f'} = \pi(f') P_{f' f}$.
$P_{f f'} = 0$ iff $P_{f' f} = 0$. Otherwise $f = f_{v \to i}$ and $f' = f_{v \to j}$ for some vertex $v$ and some $i, j \in B$. We need to verify that:
$\pi(f_{v \to i}) \cdot \frac{1}{n} \pi_v(f_{v \to j}) = \pi(f_{v \to j}) \cdot \frac{1}{n} \pi_v(f_{v \to i})$

21 πœ‹ 𝑓 𝑣→𝑖 1 𝑛 πœ‹ 𝑓 𝑣→𝑗 π‘˜βˆˆπ΅ πœ‹ 𝑓 π‘£β†’π‘˜ =πœ‹ 𝑓 𝑣→𝑗 1 𝑛 πœ‹ 𝑓 𝑣→𝑖 π‘˜βˆˆπ΅ πœ‹ 𝑓 π‘£β†’π‘˜
Gibbs samplers πœ‹ 𝑓 𝑣→𝑖 1 𝑛 πœ‹ 𝑓 𝑣→𝑗 π‘˜βˆˆπ΅ πœ‹ 𝑓 π‘£β†’π‘˜ =πœ‹ 𝑓 𝑣→𝑗 1 𝑛 πœ‹ 𝑓 𝑣→𝑖 π‘˜βˆˆπ΅ πœ‹ 𝑓 π‘£β†’π‘˜ Easy to check that the chain is aperiodic, so if it is also irreducible then we can use it for sampling

22 Gibbs for uniform q-coloring
Transitions: Pick a vertex $v$ uniformly at random, then pick a (new) color for $v$ uniformly at random from the set of colors not attained by a neighbor of $v$. (In the illustrated example $q = 5$ and each such transition has probability $\frac{1}{4n}$, there being 4 colors available at $v$.)
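A Python sketch of this transition (our own illustration; the graph is given as adjacency lists and a coloring as a dict):

```python
# A sketch (ours) of the Gibbs transition for uniform proper q-colorings.
# Since the coloring is proper, v's current color is always available,
# so the list of allowed colors is never empty.
import random

def coloring_step(f, adj, q):
    """One transition from the proper coloring f (a dict vertex -> color)."""
    v = random.choice(list(adj))                        # uniform vertex
    blocked = {f[u] for u in adj[v]}                    # colors attained by neighbors of v
    available = [c for c in range(q) if c not in blocked]
    g = dict(f)
    g[v] = random.choice(available)                     # uniform over available colors
    return g
```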

23 Gibbs for uniform q-coloring
Notice that πœ‹ 𝑓 is hard to compute but πœ‹ 𝑣 𝑓 𝑣→𝑖 is easy π‘ž=5 1 4𝑛

24 Gibbs samplers (summary)
Chain: At state $f$, pick a vertex $v$ uniformly at random. There are $|B|$ states $f_{v \to 1}, \ldots, f_{v \to |B|}$ consistent with $f$ on $V \setminus \{v\}$ ($f_{v \to i}$ is $f$ with $v$ assigned to $i$). Pick $f_{v \to i}$ with probability $\frac{\pi(f_{v \to i})}{\sum_{k \in B} \pi(f_{v \to k})}$; call this distribution $\pi_v$.
Notice that even if $\pi(f)$ may be hard to compute, it is typically easy to compute $\pi_v(f_{v \to i}) = \frac{\pi(f_{v \to i})}{\sum_{k \in B} \pi(f_{v \to k})}$.

25 Metropolis chain
Want to construct a chain over states $s_1, s_2, \ldots, s_n$ with a given stationary distribution $\pi$. States do not necessarily correspond to labelings of the vertices of a graph.

26 Metropolis chain
Start with some chain over $s_1, s_2, \ldots, s_n$. Say $P_{ij} = P_{ji}$ (symmetric). We need $P_{ij}$ to be easy to compute when at state $i$.

27 Metropolis chain
We now modify the chain and obtain a Metropolis chain. At $s_i$:
1) Suggest a neighbor $s_j$ with probability $P_{ij}$
2) Move to $s_j$ with probability $\min\left( \frac{\pi_j}{\pi_i}, 1 \right)$ (otherwise stay at $s_i$)

28 Metropolis chain
The resulting transition probabilities: from $i$ to a neighbor $j$ with probability $P_{ij} \min\left( \frac{\pi_j}{\pi_i}, 1 \right)$, and from $i$ to itself with probability $1 - \sum_{j} P_{ij} \min\left( \frac{\pi_j}{\pi_i}, 1 \right)$.
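A Python sketch of one Metropolis transition (our own illustration; `propose` and `pi` are hypothetical callables):

```python
# A sketch (ours) of one Metropolis transition. propose(i) suggests a
# neighbor j with probability P_ij = P_ji; pi is needed only up to a
# constant factor, since only the ratio pi(j)/pi(i) is used.
import random

def metropolis_step(i, propose, pi):
    j = propose(i)                               # 1) suggest a neighbor
    if random.random() < min(pi(j) / pi(i), 1.0):
        return j                                 # 2) accept with prob min(pi_j/pi_i, 1)
    return i                                     # otherwise stay at s_i
```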

29 A more general presentation
$P$ is not necessarily symmetric. The Metropolis chain with respect to $\pi$: At $s_i$:
1) Suggest a neighbor $s_j$ with probability $P_{ij}$
2) Move to $s_j$ with probability $\min\left( \frac{\pi_j P_{ji}}{\pi_i P_{ij}}, 1 \right)$ (otherwise stay at $s_i$)

30 A more general presentation
Transition probabilities: from $i$ to a neighbor $j$ with probability $P_{ij} \min\left( \frac{\pi_j P_{ji}}{\pi_i P_{ij}}, 1 \right)$, and from $i$ to itself with probability $1 - \sum_{j} P_{ij} \min\left( \frac{\pi_j P_{ji}}{\pi_i P_{ij}}, 1 \right)$.
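A matching sketch for this more general chain (our own illustration):

```python
# The same sketch (ours) for an asymmetric base chain: P(i, j) is the
# probability that the base chain proposes j from i.
import random

def metropolis_hastings_step(i, propose, P, pi):
    j = propose(i)
    accept = min((pi(j) * P(j, i)) / (pi(i) * P(i, j)), 1.0)
    return j if random.random() < accept else i
```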

31 Detailed balance conditions
πœ‹ 𝑖 𝑃 𝑖𝑗 min πœ‹ 𝑗 𝑃 𝑗𝑖 πœ‹ 𝑖 𝑃 𝑖𝑗 ,1 = πœ‹ 𝑗 𝑃 𝑗𝑖 min πœ‹ 𝑖 𝑃 𝑖𝑗 πœ‹ 𝑗 𝑃 𝑗𝑖 ,1 Assume πœ‹ 𝑗 𝑃 𝑗𝑖 πœ‹ 𝑖 𝑃 𝑖𝑗 ≀1 πœ‹ 𝑖 𝑃 𝑖𝑗 πœ‹ 𝑗 𝑃 𝑗𝑖 πœ‹ 𝑖 𝑃 𝑖𝑗 = πœ‹ 𝑗 𝑃 𝑗𝑖 Other case is symmetric

32 Metropolis/Gibbs
Often $\pi(s_i) = \frac{g(s_i)}{Z}$ where $Z = \sum_i g(s_i)$. Even when the normalizing constant $Z$ is unknown, it is still possible to compute the transition probabilities of the Gibbs and Metropolis chains, since $Z$ cancels in the ratios they use.

33 Metropolis chain for bisection

34 Metropolis chain for bisection
We measure the quality of a bisection $s = (S, \bar{S})$ by
$f(s) = \left| \{ (u,v) \mid u \in S,\ v \in \bar{S} \} \right| + c \left( |S| - |\bar{S}| \right)^2$
We introduce a parameter $T$ and take the exponent of this quality measure:
$g_T(s) = e^{-f(s)/T}$
Our target distribution is proportional to $g_T$.
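A Python sketch of this quality measure (our own illustration; names and representation are our choice):

```python
# A sketch (ours) of the quality measure and its exponentiation. A state
# is represented by the vertex set S of one side of the cut; c is the
# imbalance penalty constant.
import math

def f(S, vertices, edges, c):
    cut = sum(1 for (u, v) in edges if (u in S) != (v in S))  # edges crossing the cut
    imbalance = len(S) - (len(vertices) - len(S))             # |S| - |S̄|
    return cut + c * imbalance ** 2

def g(S, vertices, edges, c, T):
    return math.exp(-f(S, vertices, edges, c) / T)            # g_T(s) = e^{-f(s)/T}
```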

35 Boltzmann distribution
πœ‹ 𝑇 𝑠 = 1 𝑍 𝑇 𝑒 βˆ’ 𝑓 𝑠 𝑇 𝑍 𝑇 = 𝑠 𝑒 βˆ’ 𝑓 𝑠 𝑇

36 Boltzmann distribution
𝑒 βˆ’π‘₯ 𝑒 βˆ’ π‘₯ 0.5

37 Properties of the Boltzmann distribution
Let $O = \{ s_1, s_2, \ldots, s_k \}$ be the set of global minima, with $f(s_i) = M$. Then
$\pi_T(O) = \sum_{j=1}^{k} \frac{e^{-f(s_j)/T}}{Z_T} = \frac{k\, e^{-M/T}}{Z_T}$, where $Z_T = \sum_s e^{-f(s)/T}$, so
$\pi_T(O) = \frac{k\, e^{-M/T}}{\sum_s e^{-f(s)/T}}$

38 Properties of the Boltzmann distribution
πœ‹ 𝑇 𝑂 = π‘˜ π‘˜+ π‘ βˆ£π‘“ 𝑠 >𝑀 𝑒 π‘€βˆ’π‘“ 𝑠 𝑇 lim 𝑇→0 πœ‹ 𝑇 (𝑂) =1

39 Properties of the Boltzmann distribution
As $T$ gets smaller, $\pi_T$ gets concentrated on the global minima.

40 Metropolis chain for the Boltzmann distribution
πœ‹ 𝑇 𝑠 = 1 𝑍 𝑇 𝑒 βˆ’ 𝑓 𝑠 𝑇 𝑍 𝑇 = 𝑠 𝑒 βˆ’ 𝑓 𝑠 𝑇 We will generate a metropolis chain for πœ‹ 𝑇 𝑠

41 The base chain
Consider the chain over the cuts of the graph where the neighbors of a cut $(S, \bar S)$ are the cuts we can obtain from $(S, \bar S)$ by flipping the side of a single vertex $v$, e.g. $(S \setminus \{v\}, \bar S \cup \{v\})$. Each neighbor is suggested with probability $\frac{1}{n}$, so this base chain is symmetric: $P_{ij} = P_{ji} = \frac{1}{n}$.

42 Metropolis chain for bisection
At $s_i$:
1) Suggest a neighbor $s_j$ with probability $\frac{1}{n}$
2) Move to $s_j$ with probability $\min\left( \frac{\pi_T(s_j)}{\pi_T(s_i)}, 1 \right)$ (otherwise stay at $s_i$)
Since $\pi_T(s_j) = \frac{1}{Z_T} e^{-f(s_j)/T}$, the acceptance ratio is $\frac{\pi_T(s_j)}{\pi_T(s_i)} = e^{(f(s_i) - f(s_j))/T}$; the normalizing constant $Z_T$ cancels.
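A Python sketch of one step of this chain (our own illustration):

```python
# A sketch (ours) of one step of the Metropolis chain for pi_T; quality
# is the measure f from slide 34. Z_T never needs to be computed.
import math
import random

def bisection_step(S, vertices, quality, T):
    v = random.choice(vertices)                       # base chain: flip one vertex (prob 1/n)
    S_new = S - {v} if v in S else S | {v}
    accept = min(math.exp((quality(S) - quality(S_new)) / T), 1.0)
    return S_new if random.random() < accept else S
```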

43 Generalization of local search
This is a generalization of local search that allows non-improving moves. We take a non-improving move with probability that decreases with the amount of degradation in the quality of the bisection.

44 Generalization of local search
As $T$ decreases, it becomes harder to take non-improving moves. For very small $T$, this is like local search; for very large $T$, this is like a random walk. So which $T$ should we use?

45 Simulated annealing
Start with a relatively large $T$. Perform $L$ iterations, then decrease $T$, and repeat. A sketch of this schedule appears below.
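A Python sketch of this schedule (our own illustration, with a simplified stopping rule):

```python
# A sketch (ours) of the annealing schedule. step performs one Metropolis
# transition at temperature T; r < 1 is the cooling ratio of slide 53; the
# freezing test is simplified to a fixed threshold T_min instead of the
# acceptance-rate criterion of slide 49.
def simulated_annealing(s, step, T, L, r, T_min):
    while T > T_min:
        for _ in range(L):       # perform L iterations at the current temperature
            s = step(s, T)
        T *= r                   # decrease T and repeat
    return s
```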

46 Motivated by physics
Growing crystals: first we melt the raw material, then we start cooling it. We need to cool carefully/slowly in order to get a good crystal. We want to bring the crystal into a state with the lowest possible energy, and we don't want to get stuck in a local optimum.

47 Experiments with annealing
Average running times: Annealing 6 min; Local search 1 sec; KL (Kernighan–Lin) 3.7 sec.
Johnson, Aragon, McGeoch, Schevon, 1989. Optimization by simulated annealing: an experimental evaluation, Part I: graph partitioning.

48 Experiments with annealing
Johnson, Aragon, McGeoch, Schevon, 1989. Optimization by simulated annealing: an experimental evaluation, Part I: graph partitioning.

49 The annealing parameters
Two parameters control the range of temperatures considered:
INITPROB: pick the initial temperature so that a fraction INITPROB of the suggested moves is accepted.
MINPERCENT: you "freeze" (stop) when you accept at most a fraction MINPERCENT of the moves at 5 temperatures since the last new best solution ("winner") was found.
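A Python sketch of one plausible way to realize INITPROB (our own reading, labeled as an assumption in the comments):

```python
# A sketch (ours, one plausible reading of INITPROB, not necessarily the
# paper's exact procedure): heat up until roughly a fraction INITPROB of
# trial moves is accepted. step_with_stats is a hypothetical helper that
# performs one trial move at temperature T and returns 1 if it was accepted.
def initial_temperature(s, step_with_stats, T, initprob, trials=1000):
    while True:
        accepted = sum(step_with_stats(s, T) for _ in range(trials))
        if accepted / trials >= initprob:
            return T
        T *= 2.0                 # too cold: double the temperature and retry
```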

50 INITPROB = 0.9, MINPERCENT = 0.1
Sample once per 500 iterations, about 16 times per temperature; no change in the last 100 samples. The average random bisection has cut value 599.

51 After applying local opt to the sample

52 Tails of 2 runs
Left: INITPROB = 0.4, MINPERCENT = 0.2. Right: INITPROB = 0.9, MINPERCENT = 0.1.
Same quality in half the time!

53 Running time/quality tradeoff
Two natural parameters control this: $L$ and $r$.
$L$ was set to SIZEFACTOR × (#neighbors) = $16n$, and $r = 0.95$.
Doubling SIZEFACTOR doubles the running time. Changing $r \leftarrow \sqrt{r}$ should also double the running time, since twice as many temperature steps are needed to cover the same range (an experiment shows that it grows only by a factor of 1.85).

54

55 Simulated annealing summary
A modification of local search that allows escaping from local minima. Many applications (the original paper has many citations): VLSI design, protein folding, scheduling/assignment problems.

