1
Fast sampling for LDA (William Cohen)
2
MORE LDA SPEEDUPS
FIRST: RECAP LDA DETAILS
4
Called “collapsed Gibbs sampling” since you’ve marginalized away some variables. From: Parameter estimation for text analysis, Gregor Heinrich. The update is a product of two factors: the probability this word/term is assigned to topic k, and the probability this doc contains topic k.
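For reference, a LaTeX reconstruction of the standard collapsed Gibbs update from Heinrich's notes, matching the two annotated factors on the slide (counts exclude the token currently being resampled; V is the vocabulary size):

```latex
P(z_i = k \mid \mathbf{z}_{\neg i}, \mathbf{w}) \;\propto\;
  \underbrace{\frac{n^{\neg i}_{w|k} + \beta}{n^{\neg i}_{\cdot|k} + V\beta}}_{\text{prob.\ this word/term assigned to topic }k}
  \;\cdot\;
  \underbrace{\bigl(n^{\neg i}_{k|d} + \alpha_k\bigr)}_{\text{prob.\ this doc contains topic }k}
```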
5
More detail
7
[Figure: per-topic probability segments for z=1, z=2, z=3, … stacked to unit height; a uniform random draw along the stack selects the topic.]
8
SPEEDUP 1 - SPARSITY
9
KDD 2008
10
[Figure repeated: stacked per-topic segments with a uniform random draw.]
12
Running total of P(z=k|…) or P(z<=k)
13
Discussion… Where do you spend your time?
– sampling the z’s
– each sampling step involves a loop over all topics
– this seems wasteful: even with many topics, words are often only assigned to a few different topics
– low-frequency words appear < K times … and there are lots and lots of them!
– even frequent words are not in every topic
14
Discussion… What’s the solution?
Idea: come up with approximations Z_i to the normalizer Z at each stage; then you might be able to stop early…
– computationally like a sparser vector
– want Z_i >= Z
15
Tricks
How do you compute and maintain the bound?
– see the paper
What order do you go in?
– want to pick large P(k)’s first
– … so we want large P(k|d) and P(k|w)
– … so we maintain the k’s in sorted order, which only changes a little bit after each flip, so a bubble-sort pass will fix up the almost-sorted array (see the sketch below)
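A minimal Python sketch of that last trick, under the assumption that the per-topic counts are kept in a plain list sorted by count; `fix_almost_sorted` and its arguments are illustrative names, not code from the paper:

```python
def fix_almost_sorted(topics, counts, k):
    """Restore descending order of `topics` by `counts` after counts[k] changed by +/-1.

    topics: topic ids kept sorted by count, descending.
    counts: mapping from topic id to its current count.
    k:      the topic whose count just changed ("flipped").
    A single change can move a topic only past nearby neighbors, so one bubble
    pass in each direction is enough (real code would also keep a position
    index instead of calling list.index).
    """
    i = topics.index(k)
    while i > 0 and counts[topics[i]] > counts[topics[i - 1]]:               # moved up
        topics[i], topics[i - 1] = topics[i - 1], topics[i]
        i -= 1
    while i + 1 < len(topics) and counts[topics[i]] < counts[topics[i + 1]]:  # moved down
        topics[i], topics[i + 1] = topics[i + 1], topics[i]
        i += 1
```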
16
Results
17
SPEEDUP 2 - ANOTHER APPROACH FOR USING SPARSITY
18
KDD 2009
19
z = s + r + q
t = topic (k), w = word, d = doc
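Spelled out in LaTeX, this is the standard SparseLDA (Yao/Mimno/McCallum) decomposition the next slides walk through, written in the slide's notation (n_{t|d}: count of topic t in doc d; n_{w|t}: count of word w in topic t; n_{·|t}: total tokens in topic t):

```latex
P(z = t \mid \cdot) \;\propto\;
  \frac{(\alpha_t + n_{t|d})(\beta + n_{w|t})}{\beta V + n_{\cdot|t}}
  = \underbrace{\frac{\alpha_t \beta}{\beta V + n_{\cdot|t}}}_{s\ \text{(smoothing)}}
  + \underbrace{\frac{n_{t|d}\,\beta}{\beta V + n_{\cdot|t}}}_{r\ \text{(document)}}
  + \underbrace{\frac{(\alpha_t + n_{t|d})\,n_{w|t}}{\beta V + n_{\cdot|t}}}_{q\ \text{(topic-word)}}
```

Summing each bucket over t gives the masses s, r, and q, so z = s + r + q is the total unnormalized mass.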
20
z = s + r + q
If U < s: look up U on a line segment with tic-marks at α_1β/(βV + n_{·|1}), α_2β/(βV + n_{·|2}), …
If s < U < s + r: look up U on the line segment for r; only need to check t such that n_{t|d} > 0
t = topic (k), w = word, d = doc
21
z = s + r + q
If U < s: look up U on a line segment with tic-marks at α_1β/(βV + n_{·|1}), α_2β/(βV + n_{·|2}), …
If s < U < s + r: look up U on the line segment for r
If s + r < U: look up U on the line segment for q; only need to check t such that n_{w|t} > 0
22
z = s + r + q
q: only need to check t such that n_{w|t} > 0
r: only need to check t such that n_{t|d} > 0
s: only need to check occasionally (< 10% of the time)
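A hedged Python sketch of the three-bucket draw (the counts, per-bucket term lists, and bucket masses are assumed to be maintained elsewhere; all names are illustrative):

```python
import random

def sparse_lda_draw(s_terms, r_terms, q_terms, s, r, q):
    """Draw a topic from the SparseLDA split z = s + r + q.

    s_terms: (topic, mass) for all K topics (smoothing bucket; hit rarely).
    r_terms: (topic, mass) only for topics with n_{t|d} > 0 (document bucket).
    q_terms: (topic, mass) only for topics with n_{w|t} > 0 (topic-word bucket).
    s, r, q: the corresponding total masses.
    """
    u = random.uniform(0.0, s + r + q)
    if u < s:                 # rare: full loop over all K topics
        bucket = s_terms
    elif u < s + r:           # loop only over topics present in this doc
        u -= s
        bucket = r_terms
    else:                     # loop only over topics containing this word
        u -= s + r
        bucket = q_terms
    for t, mass in bucket:
        if u < mass:
            return t
        u -= mass
    return bucket[-1][0]      # guard against floating-point round-off
```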
23
z = s + r + q
Need to store n_{w|t} for each word-topic pair…???
Only need to store n_{t|d} for the current d
Only need to store (and maintain) the total words per topic and the α’s, β, V
Trick: count up n_{t|d} for d when you start working on d, and update it incrementally
24
z = s + r + q
Need to store n_{w|t} for each word-topic pair…???
1. Precompute, for each t, …
(Most (>90%) of the time and space is here…)
2. Quickly find the t’s such that n_{w|t} is large for w
25
Need to store n_{w|t} for each word-topic pair…???
1. Precompute, for each t, …
(Most (>90%) of the time and space is here…)
2. Quickly find the t’s such that n_{w|t} is large for w:
– map w to an int array
– no larger than the frequency of w
– no larger than #topics
– encode (t, n) as a bit vector: n in the high-order bits, t in the low-order bits
– keep the ints sorted in descending order
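A hedged Python sketch of that packed-integer trick: each (t, n) pair becomes one int with the count n in the high-order bits and the topic t in the low-order bits, so keeping the ints sorted in descending order keeps the topics sorted by count. TOPIC_BITS and the function names are assumptions for illustration:

```python
TOPIC_BITS = 10                       # assumes at most 2**10 = 1024 topics
TOPIC_MASK = (1 << TOPIC_BITS) - 1

def encode(t, n):
    """Pack (topic t, count n): n in the high-order bits, t in the low-order bits."""
    return (n << TOPIC_BITS) | t

def decode(x):
    """Unpack one int back into (topic, count)."""
    return x & TOPIC_MASK, x >> TOPIC_BITS

def bump(packed, t):
    """Increment n_{w|t} in a word's packed list, kept sorted in descending order."""
    for i, x in enumerate(packed):
        if (x & TOPIC_MASK) == t:
            packed[i] += 1 << TOPIC_BITS          # +1 on the count field only
            break
    else:
        packed.append(encode(t, 1))               # first token of this word in topic t
        i = len(packed) - 1
    while i > 0 and packed[i] > packed[i - 1]:    # one bubble pass restores the order
        packed[i], packed[i - 1] = packed[i - 1], packed[i]
        i -= 1
```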
27
Outline
LDA/Gibbs algorithm details
How to speed it up by parallelizing
How to speed it up by faster sampling
– Why sampling is key
– Some sampling ideas for LDA
The Mimno/McCallum decomposition (SparseLDA)
Alias tables (Walker 1977; Li, Ahmed, Ravi, Smola KDD 2014)
28
Alias tables http://www.keithschwarz.com/darts-dice-coins/
Basic problem: how can we sample from a biased coin (a discrete distribution over K outcomes) quickly?
If the distribution changes slowly, maybe we can do some preprocessing and then sample multiple times.
Proof of concept: generate r ~ uniform and use a binary tree over the cumulative probabilities (e.g. r in (23/40, 7/10] picks one outcome).
Naive scan: O(K); tree lookup: O(log2 K).
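A small Python sketch of that proof of concept, assuming the distribution is fixed between rebuilds: precompute the cumulative sums once (O(K)), then each draw is a binary search (O(log2 K)).

```python
import bisect
import itertools
import random

def build_cdf(probs):
    """Prefix sums of the per-outcome probabilities; rebuild when they change."""
    return list(itertools.accumulate(probs))

def draw(cdf):
    """One O(log K) draw: find the first cumulative sum that reaches r."""
    r = random.uniform(0.0, cdf[-1])
    return bisect.bisect_left(cdf, r)
```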
29
Alias tables http://www.keithschwarz.com/darts-dice-coins/
Another idea… simulate the dart with two drawn values:
rx = int(u1 * K), ry = u2 * p_max
Keep throwing until you hit a stripe (rejection sampling).
30
Alias tables http://www.keithschwarz.com/darts-dice-coins/
An even more clever idea: minimize the brown space (where the dart “misses”) by sizing the rectangle’s height to the average probability, not the maximum probability, and cutting and pasting a bit.
You can always do this using only two colors in each column of the final alias table, and the dart never misses! (mathematically speaking…)
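A hedged Python sketch of that construction (Vose's variant of Walker's alias method, as described in the linked article): O(K) setup, O(1) per draw, and each column of the table holds at most two outcomes.

```python
import random

def build_alias(probs):
    """Build an alias table: columns of average height, each with at most two 'colors'."""
    K = len(probs)
    scaled = [p * K for p in probs]              # rescale so the average height is 1
    prob, alias = [0.0] * K, [0] * K
    small = [i for i, p in enumerate(scaled) if p < 1.0]
    large = [i for i, p in enumerate(scaled) if p >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s], alias[s] = scaled[s], l         # top up column s with mass from l
        scaled[l] -= 1.0 - scaled[s]             # the "cut and paste" step
        (small if scaled[l] < 1.0 else large).append(l)
    for i in small + large:                      # leftovers are exactly-full columns
        prob[i] = 1.0
    return prob, alias

def draw_alias(prob, alias):
    """O(1) draw: pick a column uniformly, then one of its two outcomes. No misses."""
    i = random.randrange(len(prob))
    return i if random.random() < prob[i] else alias[i]
```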
31
KDD 2014: Key ideas
– use a variant of the Mimno/McCallum decomposition
– use alias tables to sample from the dense parts
– since the alias table gradually goes stale, use Metropolis-Hastings sampling instead of Gibbs
32
KDD 2014
q is the stale, easy-to-draw-from distribution; p is the updated distribution.
Computing the ratios p(i)/q(i) is cheap; usually the ratio is close to one, else the proposal is rejected (the dart missed).
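A minimal Python sketch of that Metropolis-Hastings step, assuming an independence proposal drawn from the stale alias table; p and q can be unnormalized since their normalizers cancel in the ratio. Names are illustrative:

```python
import random

def mh_step(z_old, p, q, draw_from_q):
    """One Metropolis-Hastings move with a stale proposal distribution.

    z_old:       current topic assignment.
    p(t):        unnormalized *current* probability of topic t (cheap to evaluate).
    q(t):        unnormalized *stale* probability the alias table was built from.
    draw_from_q: O(1) sampler for q (e.g. the alias table above).
    """
    z_new = draw_from_q()
    accept = (p(z_new) * q(z_old)) / (p(z_old) * q(z_new))   # usually close to 1
    if random.random() < accept:
        return z_new
    return z_old          # proposal rejected ("the dart missed")
```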
33
KDD 2014
34
SPEEDUP 3 - ONLINE LDA
35
Pilfered from… NIPS 2010: Online Learning for LDA, Matthew Hoffman, Francis Bach & David Blei
36
ASIDE: VARIATIONAL INFERENCE FOR LDA
43
[Figures: one set of updates uses γ; the other uses λ.]
51
BACK TO: SPEEDUP 3 - ONLINE LDA
58
SPEEDUP 3.5 - ONLINE SPARSE LDA
62
Compute expectations over the z’s any way you want….
64
Technical Details
Variational distribution: q(z_d) over the document’s whole assignment vector, not a factorized q(z_{d,i})!
Approximate using Gibbs: after sampling for a while, estimate the needed expectations from the samples.
Estimate using time and “coherence”: D(w) = # docs containing word w
66
[Results plot; “better” labels the favorable direction.]
67
Summary of LDA speedup tricks
Gibbs sampler:
– O(N*K*T) and K grows with N
– need to keep the corpus (and z’s) in memory
You can parallelize
– you need to keep a slice of the corpus
– but you need to synchronize K multinomials over the vocabulary
– AllReduce helps
You can sparsify the sampling and topic-counts
– Mimno’s trick greatly reduces memory
You can do the computation on-line
– only need to keep K multinomials and one document’s worth of corpus and z’s in memory
You can combine some of these methods
– online sparsified LDA
– parallel online sparsified LDA?
68
SPEEDUP FOR PARALLEL LDA - USING ALLREDUCE FOR SYNCHRONIZATION
70
What if you try to parallelize?
Split the document/term matrix randomly and distribute it to p processors… then run “Approximate Distributed LDA”.
This is a common subtask in parallel versions of: LDA, SGD, ….
71
Introduction
Common pattern:
– do some learning in parallel
– aggregate local changes from each processor to shared parameters
– distribute the new shared parameters back to each processor
– and repeat…
AllReduce is implemented in MPI and, recently, in the VW code (John Langford) in a Hadoop-compatible scheme.
[Diagram: MAP, then ALLREDUCE.]
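A hedged mpi4py sketch of that pattern, assuming each worker accumulates its local changes to the topic-word counts in a numpy array and AllReduce sums them so every worker ends up with the same totals (the local learning step is elided):

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD

def sync_counts(local_delta):
    """AllReduce: every worker contributes its local deltas and receives the global sum."""
    global_delta = np.zeros_like(local_delta)
    comm.Allreduce(local_delta, global_delta, op=MPI.SUM)
    return global_delta

K, V = 100, 50_000                          # topics x vocabulary (illustrative sizes)
n_wt = np.zeros((K, V), dtype=np.int64)     # shared topic-word counts, same on all workers
for sweep in range(10):
    local_delta = np.zeros_like(n_wt)
    # ... do some learning in parallel on this worker's slice of the corpus,
    # ... accumulating count changes into local_delta ...
    n_wt += sync_counts(local_delta)        # aggregate and redistribute, then repeat
```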
78
Gory details of VW Hadoop-AllReduce
Spanning-tree server:
– a separate process constructs a spanning tree of the compute nodes in the cluster and then acts as a server
Worker nodes (“fake” mappers):
– input for each worker is locally cached
– workers all connect to the spanning-tree server
– workers all execute the same code, which might contain AllReduce calls: workers synchronize whenever they reach an all-reduce
79
Hadoop AllReduce: don’t wait for duplicate jobs
80
Second-order method - like Newton’s method
81
2^24 features, ~100 non-zeros/example, 2.3B examples.
An example is user/page/ad and conjunctions of these; positive if there was a click-through on the ad.
82
50M examples, explicitly constructed kernel, 11.7M features, 3,300 non-zeros/example.
Old method: SVM, 3 days. (Reporting time to get to a fixed test error.)