1
The Next Generation of the GraphLab Abstraction. Joseph Gonzalez (Carnegie Mellon University), joint work with Yucheng Low, Aapo Kyrola, Danny Bickson, Carlos Guestrin, Joe Hellerstein, Alex Smola, and Jay Gu.
2
How will we design and implement parallel learning systems?
3
A popular answer: Map-Reduce / Hadoop. Build learning algorithms on top of high-level parallel abstractions.
4
Map-Reduce for Data-Parallel ML: excellent for large data-parallel tasks! Data-Parallel (Map Reduce): Cross Validation, Feature Extraction, Computing Sufficient Statistics. Graph-Parallel: Belief Propagation, Label Propagation, Kernel Methods, Deep Belief Networks, Neural Networks, Tensor Factorization, PageRank, Lasso.
5
Example of Graph Parallelism
6
PageRank Example. Iterate: R[i] = α + (1 - α) Σ_{j ∈ neighbors of i} R[j] / L[j], where α is the random reset probability and L[j] is the number of links on page j. [Figure: a small example web graph.]
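A minimal sequential sketch of this iteration in plain C++ may make the update concrete (the tiny four-page graph and all names here are illustrative, not the GraphLab API):

#include <cstdio>
#include <vector>

// One synchronous PageRank sweep repeated 30 times over a hand-built graph.
// alpha is the random reset probability; out_degree[j] is L[j].
int main() {
  const double alpha = 0.15;
  // in_links[i] lists the pages j that link to page i.
  std::vector<std::vector<int>> in_links = {{1, 2}, {2, 3}, {0}, {0, 1}};
  std::vector<int> out_degree = {2, 2, 2, 1};            // L[j]
  std::vector<double> R(in_links.size(), 1.0);           // initial ranks

  for (int iter = 0; iter < 30; ++iter) {
    std::vector<double> R_new(R.size());
    for (size_t i = 0; i < R.size(); ++i) {
      double sum = 0;
      for (int j : in_links[i]) sum += R[j] / out_degree[j];
      R_new[i] = alpha + (1 - alpha) * sum;              // R[i] = α + (1-α) Σ R[j]/L[j]
    }
    R = R_new;
  }
  for (size_t i = 0; i < R.size(); ++i) std::printf("R[%zu] = %f\n", i, R[i]);
}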
7
Properties of Graph-Parallel Algorithms: dependency graph, factored computation (my rank depends on my friends' ranks), iterative computation.
8
Map-Reduce for Data-Parallel ML: excellent for large data-parallel tasks! Data-Parallel (Map Reduce): Cross Validation, Feature Extraction, Computing Sufficient Statistics. Graph-Parallel (Map Reduce? Pregel/Giraph?): Belief Propagation, SVM, Kernel Methods, Deep Belief Networks, Neural Networks, Tensor Factorization, PageRank, Lasso.
9
Pregel (Giraph): Bulk Synchronous Parallel model. Compute, Communicate, Barrier.
10
PageRank in Giraph (Pregel):

public void compute(Iterator<DoubleWritable> msgIterator) {
  double sum = 0;
  while (msgIterator.hasNext())
    sum += msgIterator.next().get();
  DoubleWritable vertexValue = new DoubleWritable(0.15 + 0.85 * sum);
  setVertexValue(vertexValue);
  if (getSuperstep() < getConf().getInt(MAX_STEPS, -1)) {
    long edges = getOutEdgeMap().size();
    sendMsgToAllEdges(new DoubleWritable(getVertexValue().get() / edges));
  } else {
    voteToHalt();
  }
}
11
Problem: Bulk synchronous computation can be inefficient.
12
Curse of the Slow Job. [Figure: each iteration runs on CPU 1, CPU 2, and CPU 3 and ends at a barrier; every iteration must wait for the slowest CPU before the next can begin.]
13
Curse of the Slow Job Assuming runtime is drawn from an exponential distribution with mean 1.
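A small simulation sketch (not from the slides) illustrates why: if each of p workers has an i.i.d. Exponential(1) runtime per iteration, the barrier forces every iteration to wait for the slowest worker, whose expected time grows roughly like ln p even though the average work per worker stays at 1.

#include <algorithm>
#include <cstdio>
#include <random>

int main() {
  std::mt19937 gen(42);
  std::exponential_distribution<double> runtime(1.0);    // mean-1 runtime per worker
  const int worker_counts[] = {1, 2, 4, 8, 16, 32, 64};

  for (int p : worker_counts) {
    const int iters = 100000;
    double total_wait = 0;
    for (int it = 0; it < iters; ++it) {
      double slowest = 0;
      for (int w = 0; w < p; ++w)
        slowest = std::max(slowest, runtime(gen));       // barrier waits for the slowest worker
      total_wait += slowest;
    }
    std::printf("%2d workers: mean iteration time %.2f (mean work per worker = 1.00)\n",
                p, total_wait / iters);
  }
}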
14
Problems with Messaging:
- Storage overhead: requires keeping both old and new messages (2x overhead).
- Redundant messages: in PageRank, each vertex sends a copy of its own rank to all neighbors, so O(|V|) distinct values are sent as O(|E|) messages. [Figure: a vertex on CPU 1 sends the same message three times to neighbors on CPU 2.]
- Often requires complex protocols: when will my neighbors need information about me?
- Unable to constrain neighborhood state: how would you implement graph coloring?
15
Bulk synchronous execution converges more slowly. [Plot: optimized in-memory bulk synchronous BP vs. asynchronous Splash BP.]
16
Problem: Bulk synchronous computation can be wrong!
17
The Problem with Bulk Synchronous Gibbs Sampling: adjacent variables cannot be sampled simultaneously. [Figure: two variables with a strong positive correlation. Sequential execution (t=0, 1, 2, 3) preserves the strong positive correlation; parallel synchronous execution flips both variables at once, producing a strong negative correlation between heads and tails.]
18
The Need for a New Abstraction: if not Pregel, then what? Data-Parallel (Map Reduce): Cross Validation, Feature Extraction, Computing Sufficient Statistics. Graph-Parallel (Pregel/Giraph): Belief Propagation, SVM, Kernel Methods, Deep Belief Networks, Neural Networks, Tensor Factorization, PageRank, Lasso.
19
What is GraphLab?
20
The GraphLab Framework: graph-based data representation, update functions (user computation), scheduler, consistency model.
21
Data Graph: a graph with arbitrary data (C++ objects) associated with each vertex and edge. Example (social network): vertex data holds the user profile text and current interest estimates; edge data holds similarity weights.
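As a hedged illustration of what such C++ objects might look like for the social-network example (the type names below are invented for this sketch and are not GraphLab's types):

#include <string>
#include <utility>
#include <vector>

// Illustrative vertex and edge payloads for the social-network example.
struct vertex_data {
  std::string profile_text;             // user profile text
  std::vector<double> interest_est;     // current interest estimates
};

struct edge_data {
  double similarity;                    // similarity weight between two users
};

// The data graph pairs ordinary graph structure with these payloads.
struct data_graph {
  std::vector<vertex_data> vertices;                          // one record per vertex
  std::vector<std::vector<std::pair<int, edge_data>>> adj;    // (neighbor id, edge payload)
};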
22
Comparison with Pregel: in Pregel, data is associated only with vertices; in GraphLab, data is associated with both vertices and edges.
23
Update Functions: an update function is a user-defined program which, when applied to a vertex, transforms the data in the scope of the vertex.

pagerank(i, scope) {
  // Get neighborhood data (R[i], W_ij, R[j]) from scope
  // Update the vertex data
  // Reschedule neighbors if needed
  if R[i] changes then reschedule_neighbors_of(i);
}
24
PageRank in GraphLab2:

struct pagerank : public iupdate_functor {
  void operator()(icontext_type& context) {
    vertex_data& vdata = context.vertex_data();
    double sum = 0;
    foreach (edge_type edge, context.in_edges())
      sum += 1 / context.num_out_edges(edge.source()) *
             context.vertex_data(edge.source()).rank;
    double old_rank = vdata.rank;
    vdata.rank = RESET_PROB + (1 - RESET_PROB) * sum;
    double residual = abs(vdata.rank - old_rank) / context.num_out_edges();
    if (residual > EPSILON)
      context.reschedule_out_neighbors(pagerank());
  }
};
25
Comparison with Pregel: in Pregel, data must be sent to adjacent vertices, and the user code describes the movement of data as well as the computation; in GraphLab, data is read from adjacent vertices, and the user code describes only the computation.
26
The Scheduler: the scheduler determines the order in which vertices are updated. [Figure: CPU 1 and CPU 2 pull vertices (a, b, c, …) from the scheduler's queue; executing an update may schedule additional vertices.] The process repeats until the scheduler is empty.
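A single-threaded sketch of this loop, assuming a simple FIFO scheduler (illustrative only; GraphLab's engines and schedulers are more sophisticated):

#include <deque>
#include <functional>
#include <unordered_set>

// A sketch of the engine loop: pull a vertex, run the user's update, and let
// the update reschedule neighbors through the `schedule` callback.
void run_engine(std::deque<int> work_queue,
                const std::function<void(int, const std::function<void(int)>&)>& update) {
  std::unordered_set<int> queued(work_queue.begin(), work_queue.end());
  std::function<void(int)> schedule = [&](int u) {
    if (queued.insert(u).second) work_queue.push_back(u);   // avoid duplicate entries
  };
  while (!work_queue.empty()) {                             // repeat until the scheduler is empty
    int v = work_queue.front();
    work_queue.pop_front();
    queued.erase(v);
    update(v, schedule);
  }
}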
27
The GraphLab Framework: graph-based data representation, update functions (user computation), scheduler, consistency model.
28
Ensuring Race-Free Code: how much can computation overlap?
29
GraphLab Ensures Sequential Consistency: for each parallel execution, there exists a sequential execution of update functions which produces the same result. [Figure: a parallel execution on CPU 1 and CPU 2 and an equivalent sequential execution on a single CPU over time.]
30
Consistency Rules: guaranteed sequential consistency for all update functions. [Figure: the data in a vertex's scope that an update function may read and write.]
31
Full Consistency
32
Obtaining More Parallelism
33
Edge Consistency. [Figure: under edge consistency, CPU 1 and CPU 2 can update different vertices concurrently while safely reading shared neighbors.]
34
In Summary … GraphLab is pretty neat!
35
Pregel vs. GraphLab: Multicore PageRank (25M vertices, 355M edges). Pregel [simulated]: synchronous schedule, no skipping [unfair comparison in update counts], no combiner [unfair comparison in runtime].
36
Update Count Distribution: most vertices need to be updated infrequently.
37
Bayesian Tensor Factorization, Gibbs Sampling, Dynamic Block Gibbs Sampling, Matrix Factorization, Lasso, SVM, Belief Propagation, PageRank, CoEM, K-Means, SVD, LDA, … many others …
38
Startups using GraphLab, companies experimenting with GraphLab, and academic projects exploring GraphLab. 1600+ unique downloads tracked (possibly many more from direct repository checkouts).
39
Why do we need a NEW GraphLab?
40
Natural Graphs
41
Natural Graphs follow a power-law degree distribution: in the Yahoo! Web Graph, the top 1% of vertices is adjacent to 53% of the edges!
42
Problem: High-Degree Vertices. High-degree vertices limit parallelism: they touch a large amount of state, require heavy locking, and are processed sequentially.
43
High-Degree Vertices are Common: "social" people in social networks, popular movies in recommender systems, hyperparameters and common words (e.g., "Obama") in topic models. [Figure: LDA plate diagram in which the hyperparameters α, B and common words are high-degree vertices.]
44
Proposed Four Solutions:
- Decomposable update functors: expose greater parallelism by further factoring update functions.
- Commutative-associative update functors: transition from stateless to stateful update functions.
- Abelian group caching (concurrent revisions): allows for controllable races through diff operations.
- Stochastic scopes: reduce degree through sampling.
45
PageRank in GraphLab (the update functor from earlier):

struct pagerank : public iupdate_functor {
  void operator()(icontext_type& context) {
    vertex_data& vdata = context.vertex_data();
    double sum = 0;
    foreach (edge_type edge, context.in_edges())
      sum += 1 / context.num_out_edges(edge.source()) *
             context.vertex_data(edge.source()).rank;
    double old_rank = vdata.rank;
    vdata.rank = RESET_PROB + (1 - RESET_PROB) * sum;
    double residual = abs(vdata.rank - old_rank) / context.num_out_edges();
    if (residual > EPSILON)
      context.reschedule_out_neighbors(pagerank());
  }
};
46
PageRank in GraphLab, annotated with the three phases:

struct pagerank : public iupdate_functor {
  void operator()(icontext_type& context) {
    vertex_data& vdata = context.vertex_data();
    // Parallel "Sum" Gather
    double sum = 0;
    foreach (edge_type edge, context.in_edges())
      sum += 1 / context.num_out_edges(edge.source()) *
             context.vertex_data(edge.source()).rank;
    // Atomic Single-Vertex Apply
    double old_rank = vdata.rank;
    vdata.rank = RESET_PROB + (1 - RESET_PROB) * sum;
    double residual = abs(vdata.rank - old_rank) / context.num_out_edges();
    // Parallel Scatter [Reschedule]
    if (residual > EPSILON)
      context.reschedule_out_neighbors(pagerank());
  }
};
47
Decomposable Update Functors: decompose update functions into three phases.
- Gather (user-defined): a parallel sum over the scope, Δ ← Δ1 + Δ2 + Δ3 + …
- Apply (user-defined): apply the accumulated value Δ to the center vertex.
- Scatter (user-defined): update adjacent edges and vertices.
Locks are acquired only for the region within a scope (relaxed consistency).
48
Factorized PageRank:

struct pagerank : public iupdate_functor {
  double accum = 0, residual = 0;
  void gather(icontext_type& context, const edge_type& edge) {
    accum += 1 / context.num_out_edges(edge.source()) *
             context.vertex_data(edge.source()).rank;
  }
  void merge(const pagerank& other) { accum += other.accum; }
  void apply(icontext_type& context) {
    vertex_data& vdata = context.vertex_data();
    double old_value = vdata.rank;
    vdata.rank = RESET_PROB + (1 - RESET_PROB) * accum;
    residual = fabs(vdata.rank - old_value) / context.num_out_edges();
  }
  void scatter(icontext_type& context, const edge_type& edge) {
    if (residual > EPSILON)
      context.schedule(edge.target(), pagerank());
  }
};
49
Decomposable Execution Model: split computation across machines. [Figure: the gather for a single vertex is split into partial gathers F1 and F2 on different machines, whose results are combined before the apply.]
50
Weaker Consistency: neighboring vertices may be updated simultaneously. [Figure: vertices A and B gather from a shared neighbor C while applying their own updates.]
51
Other Decomposable Algorithms. Loopy Belief Propagation: gather accumulates the product (log sum) of in-messages; apply updates the central belief; scatter computes out-messages and schedules adjacent vertices. Alternating Least Squares (ALS): a bipartite graph of user factors (W) and movie factors (X). [Figure: users w1, w2 connected to movies x1, x2, x3 through ratings y1 … y4.]
52
LDA: Collapsed Gibbs Sampling. Implement LDA on a bipartite graph of documents and words: gather collects the topic counts for all words in a document; apply re-samples all of the document's words; scatter updates the word topic counts. [Figure: document vertices (topic counts) connected to word vertices (topic counts) through edges labeled with token topic assignments.]
53
Convergent Gibbs Sampling: cannot be done under this weaker consistency. [Figure: vertices A and B gather from a shared neighbor C while it changes, which is unsafe for a convergent sampler.]
54
Decomposable Functors fit many algorithms: Loopy Belief Propagation, Label Propagation, PageRank, … They address the earlier concerns: large state becomes a distributed gather and scatter, heavy locking becomes fine-grained locking, and sequential processing becomes a parallel gather and scatter. Problem: they do not exploit asynchrony at the vertex level.
55
Need for Vertex-Level Asynchrony: exploit a commutative-associative "sum". With decomposable functors, a single change at one neighbor forces a costly re-gather over the entire neighborhood. Instead, keep the old (cached) sum at the vertex and apply only the delta Δ contributed by the changed neighbor.
61
Commutative-Associative Update:

struct pagerank : public iupdate_functor {
  double delta;
  pagerank(double d) : delta(d) { }
  void operator+=(pagerank& other) { delta += other.delta; }
  void operator()(icontext_type& context) {
    vertex_data& vdata = context.vertex_data();
    vdata.rank += delta;
    if (abs(delta) > EPSILON) {
      double out_delta = delta * (1 - RESET_PROB) *
                         1 / context.num_out_edges();
      context.schedule_out_neighbors(pagerank(out_delta));
    }
  }
};
// Initial Rank: R[i] = 0;
// Initial Schedule: pagerank(RESET_PROB);
62
Scheduling Composes Updates: calling reschedule_out_neighbors forces update-functor composition. Example: a neighbor with pagerank(7) pending receives reschedule_out_neighbors(pagerank(3)); its pending functor becomes pagerank(7) + pagerank(3) = pagerank(10).
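A toy sketch of this composition (hypothetical types; the real engine composes pending functors through their operator+=, as in the functor above):

#include <cstdio>
#include <unordered_map>

struct pagerank_functor {
  double delta;
  explicit pagerank_functor(double d = 0.0) : delta(d) {}
  void operator+=(const pagerank_functor& other) { delta += other.delta; }
};

// Pending work: at most one functor per vertex. Scheduling the same vertex
// twice composes the functors instead of enqueueing a second task.
std::unordered_map<int, pagerank_functor> pending;

void schedule(int vertex, pagerank_functor f) {
  auto it = pending.find(vertex);
  if (it == pending.end()) pending.emplace(vertex, f);
  else it->second += f;                    // pagerank(7) += pagerank(3) -> pagerank(10)
}

int main() {
  schedule(5, pagerank_functor(7.0));
  schedule(5, pagerank_functor(3.0));
  std::printf("pending delta for vertex 5: %.1f\n", pending[5].delta);   // 10.0
}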
63
Experimental Comparison
64
Comparison of Abstractions: Multicore PageRank (25M Vertices, 355M Edges)
65
Comparison of Abstractions: Distributed PageRank (25M Vertices, 355M Edges)
66
Comparison: PageRank on the Web graph circa 2000 (around when PageRank was invented).
67
Ongoing work:
- Extending all of GraphLab2 to the distributed setting: push-based (chromatic) engines are implemented; the GraphLab2 distributed locking engine still needs to be built.
- Improving the storage efficiency of the distributed data graph.
- Porting a large set of Danny's applications.
68
Questions http://graphlab.org
69
Extra Material
70
Abelian Group Caching: enabling eventually consistent data races.
71
Abelian Group Caching. Issue: all earlier methods maintain a sequentially consistent view of data across all processors. Proposal: try to split the data instead of the computation. How can we split the graph without changing the update function?
72
Insight from the WSDM paper. Answer: allow eventually consistent data races. High-degree vertices admit slightly "stale" values: changes in a few elements have a negligible effect, and high-degree vertex updates are typically a form of "sum" operation which has an "inverse" (examples: counts, averages, sufficient statistics; counter-example: max). Goal: lazily synchronize duplicate data, similar to a version control system; intermediate values are only partially consistent, but the final value at termination must be consistent.
73
Example (replicated value with diffs). Every processor initially has a copy of the same central value, 10, held at a master. Each processor then makes a small local change (+1, -3, +3), so the true value is 10 + 1 - 3 + 3 = 11. Processors 1 and 2 send their delta values (diffs) to the master, which becomes 10 + 1 - 3 = 8 and is now consistent with the first two processors' changes. The master refreshes the processors with the value 8; processor 3 re-applies its outstanding +3 diff on top of the refreshed value. Finally processor 3 sends its +3 diff to the master, which reaches 11, the globally consistent value.
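A compact sketch of the same diff protocol in C++ (illustrative, not the GraphLab implementation): each replica remembers the master value it last saw, pushes only the difference, and the master converges to the true value regardless of when the diffs arrive.

#include <cstdio>

struct replica {
  double old_value;     // master value at the last synchronization
  double current;       // locally modified copy
  double diff() const { return current - old_value; }
};

int main() {
  double master = 10.0;
  replica p1{master, master}, p2{master, master}, p3{master, master};

  p1.current += 1;      // local changes: +1, -3, +3 (true value: 10 + 1 - 3 + 3 = 11)
  p2.current -= 3;
  p3.current += 3;

  // Processors 1 and 2 push their diffs.
  master += p1.diff();
  master += p2.diff();                     // master: 10 + 1 - 3 = 8
  p1.old_value = p1.current = master;      // master refreshes processors 1 and 2
  p2.old_value = p2.current = master;

  // Master refreshes processor 3, which re-applies its outstanding +3 diff.
  double outstanding = p3.diff();
  p3.old_value = master;
  p3.current = master + outstanding;

  // Processor 3 eventually pushes its diff; master reaches the true value.
  master += p3.diff();
  p3.old_value = p3.current = master;
  std::printf("master = %.0f\n", master);  // 11
}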
90
Abelian Group Caching: data must have a commutative (+) and inverse (-) operation. In GraphLab we have encountered many applications with the following bipartite form: bounded, low-degree data vertices connected to high-degree (power-law) parameter vertices, e.g., clustering, topic models, Lasso, …
92
Caching Replaces Locks. Instead of locking, cache entries are created: each processor maintains an LRU vertex-data cache; locks are acquired in parallel and only on a cache miss; the user must define (+) and (-) operations for vertex data (or, more simply, vdata.apply_diff(new_vdata, old_vdata)); the user specifies the maximum allowable "staleness"; this works with existing update functions/functors.
93
Hierarchical Caching: the caching strategy can be composed across varying-latency systems. [Figure: cache resolution proceeds from the thread cache to the system cache, to per-rack caches (Rack 1, Rack 2), to a distributed hash table of masters.]
94
Hierarchical Caching: the current implementation uses two tiers; cache resolution proceeds from the thread cache to the system cache to the distributed hash table of masters.
95
Contention-Based Caching. Idea: only use the caching strategy when a lock is frequently contended (try_lock; on failure, lock and cache and use the cached copy; otherwise use the true data). This reduces the effective cache size. Tested on LDA and PageRank: works under LDA; does not work on Y!Messenger, due to the sleep-based implementation of try_write in Pthreads.
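A minimal sketch of the try-lock fallback using std::mutex::try_lock (the cache layout is an assumption for illustration, not the GraphLab code):

#include <atomic>
#include <mutex>

struct vertex_slot {
  std::mutex lock;
  double data = 0.0;                   // true vertex data (guarded by lock)
  std::atomic<double> cached{0.0};     // possibly stale cached copy
};

// Only fall back to the cache when the lock is actually contended.
double read_vertex(vertex_slot& v) {
  if (v.lock.try_lock()) {             // uncontended: read the true data
    double value = v.data;
    v.cached.store(value);             // refresh the cache while holding the lock
    v.lock.unlock();
    return value;
  }
  return v.cached.load();              // contended: use the (possibly stale) cached copy
}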
96
Global Variables. Problem: the current global aggregation f(v1) + f(v2) + … + f(vi) + … + f(vn) is fully synchronous and contrary to the GraphLab philosophy; we don't want to repeatedly re-compute the entire sum when only a few slowly converging vertices are still changing. Solution: trivial (now), use abelian caching. Remaining problem: this does not support the Max operation (no inverse); instead, maintain the top-k items and, whenever the set becomes empty, synchronously re-compute the max.
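A sketch of the top-k workaround for max, which has no inverse (names and the choice k = 16 are illustrative): keep the k largest contributions and signal the caller to recompute synchronously if removals ever drain the set.

#include <functional>
#include <iterator>
#include <set>

struct global_max {
  std::multiset<double, std::greater<double>> top_k;   // the k largest values seen
  size_t k = 16;

  void insert(double v) {
    top_k.insert(v);
    if (top_k.size() > k) top_k.erase(std::prev(top_k.end()));   // drop the smallest
  }
  // Remove a value if we still track it. Returns false when the set is drained,
  // meaning the caller must synchronously recompute the max over all data.
  bool remove(double v) {
    auto it = top_k.find(v);
    if (it != top_k.end()) top_k.erase(it);
    return !top_k.empty();
  }
  double max() const { return *top_k.begin(); }         // only valid when non-empty
};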
97
Created a New Library! An abelian cached distributed hash table; should be running on the grid soon.

int main(int argc, char** argv) {
  // Initialize the system using Hadoop-friendly TCP connections
  dc_init_param rpc_parameters;
  init_param_from_env(rpc_parameters);
  distributed_control dc(rpc_parameters);
  // Create an abelian cached map
  delta_dht tbl(dc, 100);
  // Add entries
  tbl["hello"] += 1.0;
  tbl["world"] -= 3.0;
  tbl.synchronize("world");
  // Read values
  std::cout << tbl["hello"] << std::endl;
}
98
Stochastic Scopes: bounded degree through sampling.
99
Stochastic Scopes. Idea: "sample" the neighborhood of a vertex, i.e., randomly sample a neighborhood of fixed size and lock only the selected neighbors.

label_prop(i, scope, p) {
  // Randomly construct a sample scope
  // Lock all selected neighbors
  // Get neighborhood data
  // Update the vertex data
  // Reschedule neighbors if needed
}

Currently only uniform sampling is supported; weighted sampling will likely be needed, and a theory of stochastic scopes in learning algorithms still needs to be developed.
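A sketch of the uniform sampling step using std::sample from C++17 (illustrative; the weighted variant mentioned above would replace the uniform draw):

#include <algorithm>
#include <iterator>
#include <random>
#include <vector>

// Draw a fixed-size uniform sample of a vertex's neighbors; a high-degree
// vertex then locks only the sampled subset rather than its whole scope.
std::vector<int> sample_scope(const std::vector<int>& neighbors,
                              size_t sample_size, std::mt19937& gen) {
  std::vector<int> sampled;
  std::sample(neighbors.begin(), neighbors.end(),
              std::back_inserter(sampled), sample_size, gen);
  return sampled;
}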
100
EARLY EXPERIMENT
101
Implemented LDA in GraphLab, using collapsed Gibbs sampling for LDA as the test application. GraphLab formulation: a bipartite graph of documents (Doc 1, Doc 2, Doc 3) and words (Word A … Word D). Each edge stores {#[w,d,t], #[w,d]}; each document vertex stores #[d,t]; each word vertex stores #[w,t]; the topic totals #[t] are a global variable. The update resamples #[w,d,t] using the current counts and then updates #[d,t], #[w,d,t], and #[w,t].
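For context, a sketch of the per-token collapsed Gibbs resampling that such an update performs; the counts follow the slide's notation (#[d,t], #[w,t], #[t]), α and β are the usual Dirichlet hyperparameters, and the code is illustrative rather than the GraphLab implementation:

#include <random>
#include <vector>

// Resample the topic of one (word w, document d) token given the current counts.
// n_dt[d][t] = #[d,t], n_wt[w][t] = #[w,t], n_t[t] = #[t]; V is the vocabulary size.
int resample_topic(int w, int d, int old_topic,
                   std::vector<std::vector<int>>& n_dt,
                   std::vector<std::vector<int>>& n_wt,
                   std::vector<int>& n_t,
                   double alpha, double beta, int V, std::mt19937& gen) {
  // Remove the token's current assignment from the counts.
  --n_dt[d][old_topic]; --n_wt[w][old_topic]; --n_t[old_topic];

  const int T = static_cast<int>(n_t.size());
  std::vector<double> p(T);
  for (int t = 0; t < T; ++t)            // unnormalized conditional for each topic
    p[t] = (n_dt[d][t] + alpha) * (n_wt[w][t] + beta) / (n_t[t] + V * beta);

  std::discrete_distribution<int> dist(p.begin(), p.end());
  int new_topic = dist(gen);

  // Add the token back under its new assignment.
  ++n_dt[d][new_topic]; ++n_wt[w][new_topic]; ++n_t[new_topic];
  return new_topic;
}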
104
GraphLab LDA Scaling Curves. The factorized version is close to exact parallel Gibbs sampling! It only uses "stale" topic counts #[t]. The cached system runs with an update lag of 2; we still need to evaluate the effect of lag on convergence.
105
Other Preliminary Observations.
- PageRank on the Y!Messenger friend network: 14x speedup (on 16 cores) using the new approaches versus 12x speedup (on 16 cores) using the original GraphLab? I suspect an inefficiency in functor composition is "improving" the scaling.
- LDA over the new DHT data structures: appears to scale linearly on small 4-machine deployments and keeps the cache relatively fresh (2-3 update lag), but needs more evaluation and system optimization.
106
Summary and Future Work. We have identified several key weaknesses of GraphLab: data management [mostly engineering] and natural graphs with high-degree vertices [interesting]. After substantial engineering effort we have: update functors and decomposable update functors; abelian group caching (eventual consistency); and stochastic scopes [not evaluated but interesting]. We plan to evaluate on the following applications: LDA (both collapsed Gibbs and CVB0), probabilistic matrix factorization, loopy BP on Markov logic networks, and label propagation on social networks.
107
GraphLab LDA Scaling Curves
108
Problems with GraphLab
109
Problems with the Data Graph.
- How is the data graph constructed? Either sequentially, in physical memory, by the user (graph.add_vertex(vertex_data) returns a vertex_id; graph.add_edge(source_id, target_id, edge_data)), or in parallel using a complex binary file format of graph atoms (fragments of the graph).
- How is the data graph stored between runs? By the user, in a distributed file system, with no notion of locality and no convenient tools to read the output of GraphLab.
- There is no out-of-core storage, which limits the size of graphs.
110
Solution: Hadoop/HDFS Data Graph. Graph construction and storage using Hadoop:
- Developed a simple Avro graph file format.
- Implemented a reference Avro graph constructor in Hadoop that automatically sorts records for fast locking and simplifies computing edge-reversal maps; tested on a subset of the Twitter data set.
- Hadoop/HDFS manages launching and post-processing; Hadoop streaming assigns graph fragments; the output of GraphLab can be processed in Hadoop.
- Problem: waiting on C++.

ScopeRecord {
  ID vertexId;
  VDataRecord vdata;
  List NeighborIds;
  List EdgeData;
}
111
Out-of-Core Storage. Problem: what if the graph doesn't fit in memory? Solution: disk-based caching. Only the design specification is complete; a collaborator is writing a memcached-backed file system. [Design sketch: in physical memory, a local scope map from local vertex id to file offset, local vertex locks, and an object cache; on a cache miss, scope records (vertex data, edge data, adjacency lists) are read from out-of-core storage; a DHT-distributed map from vertex id to owning instance resolves remote storage.]