1
Carnegie Mellon University
GraphLab Tutorial
Yucheng Low
2
GraphLab Team
Yucheng Low, Aapo Kyrola, Jay Gu, Joseph Gonzalez, Danny Bickson, Carlos Guestrin
3
Development History
GraphLab 0.5 (2010): internal experimental code; insanely templatized
GraphLab 1 (2011): nearly everything is templatized; first open source release (LGPL before June 2011, APL from June 2011 on)
GraphLab 2 (2012): many things are templatized; shared memory: Jan 2012, distributed: May 2012
4
GraphLab 2 Technical Design Goals
Improved usability
Decreased compile time
Performance as good as or better than GraphLab 1
Improved distributed scalability
… other abstraction changes … (come to the talk!)
5
Development History
Ever since GraphLab 1.0, all active development has been open source (APL): code.google.com/p/graphlabapi/
(Even current experimental code, activated with a --experimental flag on ./configure)
6
Guaranteed Target Platforms
Any x86 Linux system with gcc >= 4.2
Any x86 Mac system with gcc 4.2.1 (OS X 10.5??)
Other platforms? … We welcome contributors.
7
Tutorial Outline
GraphLab in a few slides + PageRank
Checking out GraphLab v2
Implementing PageRank in GraphLab v2
Overview of different GraphLab schedulers
Preview of Distributed GraphLab v2 (may not work in your checkout!)
Ongoing work… (as much as time allows)
8
Warning
A preview of code still in intensive development! Things may or may not work for you! The interface may still change!
The GraphLab 1 → GraphLab 2 transition still has a number of performance regressions we are ironing out.
9
PageRank Example
Iterate: R[i] = α + (1 - α) Σ_{j links to i} R[j] / L[j]
Where: α is the random reset probability, L[j] is the number of links on page j
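For reference, a minimal sequential sketch of this iteration in plain C++ (independent of GraphLab; the adjacency representation and the function name are illustrative assumptions):

#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// One sweep of R[i] = alpha + (1 - alpha) * sum_j R[j] / L[j],
// where in_links[i] lists the pages j linking to page i and
// out_degree[j] = L[j]. Returns the largest rank change.
double pagerank_sweep(const std::vector<std::vector<int> >& in_links,
                      const std::vector<int>& out_degree,
                      std::vector<double>& rank, double alpha) {
  std::vector<double> next(rank.size());
  double max_change = 0.0;
  for (std::size_t i = 0; i < rank.size(); ++i) {
    double sum = 0.0;
    for (std::size_t k = 0; k < in_links[i].size(); ++k) {
      int j = in_links[i][k];
      sum += rank[j] / out_degree[j];
    }
    next[i] = alpha + (1.0 - alpha) * sum;
    max_change = std::max(max_change, std::fabs(next[i] - rank[i]));
  }
  rank.swap(next);
  return max_change;
}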
10
The GraphLab Framework
[framework diagram: Graph Based Data Representation, Update Functions (User Computation), Scheduler, Consistency Model]
11
Data Graph
A graph with arbitrary data (C++ objects) associated with each vertex and edge
Vertex Data: webpage, webpage features
Edge Data: link weight
Graph: link graph
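A minimal sketch of declaring and building such a data graph; the types follow the classic graphlab::graph<VertexData, EdgeData> style of this era, but treat the exact type and method names (vertex_id_type, add_vertex, add_edge) as assumptions:

#include <graphlab.hpp>

struct vertex_data { double rank; /* webpage features … */ };
struct edge_data   { double weight; /* link weight */ };
typedef graphlab::graph<vertex_data, edge_data> graph_type;

int main() {
  graph_type graph;
  // Two pages and one link between them.
  graphlab::vertex_id_type u = graph.add_vertex(vertex_data());
  graphlab::vertex_id_type v = graph.add_vertex(vertex_data());
  graph.add_edge(u, v, edge_data());
  return 0;
}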
12
The GraphLab Framework
[framework diagram: Graph Based Data Representation, Update Functions (User Computation), Scheduler, Consistency Model]
13
Update Functions
An update function is a user-defined program which, when applied to a vertex, transforms the data in the scope of the vertex.

pagerank(i, scope) {
  // Get neighborhood data (R[i], W_ji, R[j]) from scope
  // Update the vertex data:
  R[i] = α + (1 - α) * Σ_j W_ji * R[j]
  // Reschedule neighbors if needed:
  if R[i] changes then reschedule_neighbors_of(i);
}
14
Dynamic Schedule
[diagram: a scheduler queue of vertices (a, b, h, i, …) being consumed by CPU 1 and CPU 2]
The process repeats until the scheduler is empty.
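A sketch of how this schedule-until-empty loop is driven in code, assuming a v2-style graphlab::core; the option strings, the core.graph() accessor, and the other method names are assumptions from that era's shared-memory API:

// Fragment: graph_type and the pagerank functor come from the
// surrounding slides.
graphlab::core<graph_type, pagerank> core;
core.graph() = graph;              // hand the data graph to the engine (assumed accessor)
core.set_scheduler_type("fifo");   // dynamic scheduler: a queue of pending vertices
core.set_scope_type("edge");       // consistency model for each update
core.schedule_all(pagerank());     // seed the queue with every vertex
double runtime = core.start();     // runs until the scheduler is empty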
15
Source Code Interjection 1
Graph, update functions, and schedulers
16
--scope=vertex   --scope=edge
17
Consistency Trade-off
[plot: "throughput" (# "iterations" per second) against the strength of the consistency model]
Goal of an ML algorithm: converge
18
Ensuring Race-Free Code
How much can computation overlap?
19
The GraphLab Framework
[framework diagram: Graph Based Data Representation, Update Functions (User Computation), Scheduler, Consistency Model]
20
Importance of Consistency
Fast ML algorithm development cycle: Build → Test → Debug → Tweak Model
Necessary for the framework to behave predictably and consistently, avoiding problems caused by non-determinism. Otherwise: is the execution wrong, or is the model wrong?
21
Full Consistency
Guaranteed safety for all update functions
22
Full Consistency
Parallel update only allowed two vertices apart → reduced opportunities for parallelism
23
Obtaining More Parallelism
Not all update functions will modify the entire scope!
Belief Propagation: only uses edge data
Gibbs Sampling: only needs to read adjacent vertices
24
Edge Consistency
25
Obtaining More Parallelism
"Map" operations, e.g. feature extraction on vertex data
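For instance, a purely vertex-local update; a hypothetical sketch (the feature field and the normalization are illustrative, not from the tutorial):

struct normalize_features :
    public graphlab::iupdate_functor<graph_type, normalize_features> {
  void operator()(icontext_type& context) {
    // Reads and writes only the center vertex's data, so vertex
    // consistency suffices and all vertices can update in parallel.
    vertex_data& vdata = context.vertex_data();
    vdata.feature /= (1.0 + std::fabs(vdata.feature)); // 'feature' is a hypothetical field
  }
};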
26
Vertex Consistency
27
The GraphLab Framework
[framework diagram: Graph Based Data Representation, Update Functions (User Computation), Scheduler, Consistency Model]
28
Shared Variables
Global aggregation through the Sync operation: a global parallel reduction over the graph data. Synced variables are recomputed at defined intervals while update functions are running.
Examples: Sync: Highest PageRank; Sync: Loglikelihood
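Conceptually, a sync is an associative fold over the graph data. A plain-C++ sketch of the reduction that "Sync: Highest PageRank" computes (this is not the GraphLab sync API; the vertex store is simplified to a vector):

#include <algorithm>
#include <cstddef>
#include <vector>

// vertex_data as on the Data Graph slide (a 'rank' field).
struct vertex_data { double rank; };

double highest_pagerank(const std::vector<vertex_data>& vertices) {
  // max is associative, which is what lets the engine evaluate
  // this as a parallel reduction at defined intervals.
  double best = 0.0;
  for (std::size_t i = 0; i < vertices.size(); ++i)
    best = std::max(best, vertices[i].rank);
  return best;
}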
29
Source Code Interjection 2 Shared variables
30
What can we do with these primitives?
…many, many things…
31
Matrix Factorization
Netflix collaborative filtering: Alternating Least Squares matrix factorization
Model: 0.5 million nodes, 99 million edges
[diagram: bipartite graph of Netflix users and movies, with latent factors of dimension d]
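For reference, the standard ALS objective being fit (the ridge penalty λ is the usual formulation, assumed here rather than taken from the slide):

\min_{U,V} \sum_{(u,m)\in\text{ratings}} \bigl( r_{um} - \mathbf{u}_u^{\top} \mathbf{v}_m \bigr)^2 + \lambda \bigl( \lVert U \rVert_F^2 + \lVert V \rVert_F^2 \bigr)

Each user vector u_u and movie vector v_m has dimension d. Holding V fixed makes each u_u an independent least-squares solve over that user's rated movies (and symmetrically for v_m), which is what maps ALS onto per-vertex updates on the user-movie graph.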
32
Netflix Speedup
[plot: speedup for increasing sizes of the matrix factorization]
33
Video Co-Segmentation
Discover "coherent" segment types across a video (extends Batra et al. '10)
1. Form super-voxels from the video
2. EM & inference in a Markov random field
Large model: 23 million nodes, 390 million edges
[speedup plot: GraphLab vs. ideal]
34
Many More
Tensor Factorization
Bayesian Matrix Factorization
Graphical Model Inference/Learning
Linear SVM
EM clustering
Linear Solvers using GaBP
SVD
Etc.
35
Distributed Preview
36
GraphLab 2 Abstraction Changes
(an overview of a couple of them; come to the talk for the rest!)
37
Exploiting Update Functors (for the greater good)
38
Exploiting Update Functors (for the greater good)
1. Update functors store state.
2. The scheduler schedules update functor instances.
3. We can use update functors as a form of controlled asynchronous message passing between vertices!
39
Delta Based Update Functors

struct pagerank : public iupdate_functor<graph_type, pagerank> {
  double delta;
  pagerank(double d = 0) : delta(d) { }
  void operator+=(pagerank& other) { delta += other.delta; }
  void operator()(icontext_type& context) {
    vertex_data& vdata = context.vertex_data();
    vdata.rank += delta;
    if (std::fabs(delta) > EPSILON) {
      // Spread the damped delta evenly over this vertex's out-edges.
      double out_delta = delta * (1 - RESET_PROB) /
                         context.num_out_edges();
      context.schedule_out_neighbors(pagerank(out_delta));
    }
  }
};
// Initial Rank:     R[i] = 0;
// Initial Schedule: pagerank(RESET_PROB);
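The operator+= is the key design point: if a vertex is scheduled again while it already has a pending functor instance, the scheduler merges the two by summing their deltas, so repeated rescheduling combines "messages" instead of queueing duplicate work. Starting from R[i] = 0 and scheduling pagerank(RESET_PROB) everywhere delivers the α term of the iteration as the first delta; the damped (1 - RESET_PROB) shares then propagate the rest of the sum.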
40
Asynchronous Message Passing
Obviously not all computation can be written this way. But when it can, it can be extremely fast.
41
Factorized Updates
42
PageRank in GraphLab

struct pagerank : public iupdate_functor<graph_type, pagerank> {
  void operator()(icontext_type& context) {
    vertex_data& vdata = context.vertex_data();
    double sum = 0;
    foreach(edge_type edge, context.in_edges())
      sum += context.const_edge_data(edge).weight *
             context.const_vertex_data(edge.source()).rank;
    double old_rank = vdata.rank;
    vdata.rank = RESET_PROB + (1 - RESET_PROB) * sum;
    double residual = std::fabs(vdata.rank - old_rank) /
                      context.num_out_edges();
    if (residual > EPSILON)
      context.reschedule_out_neighbors(pagerank());
  }
};
43
PageRank in GraphLab
The same code, annotated by phase: the foreach over in_edges is a parallel "sum" gather; the rank update is an atomic single-vertex apply; the residual test with reschedule_out_neighbors is a parallel scatter [reschedule].
44
Decomposable Update Functors
Decompose update functions into 3 phases:
Gather (user defined): a parallel sum over the scope, accumulating Δ = Δ1 + Δ2 + …
Apply (user defined): apply the accumulated Δ to the center vertex
Scatter (user defined): update adjacent edges and vertices
45
Factorized PageRank

struct pagerank : public iupdate_functor<graph_type, pagerank> {
  double accum, residual;
  pagerank() : accum(0), residual(0) { }
  void gather(icontext_type& context, const edge_type& edge) {
    accum += context.const_edge_data(edge).weight *
             context.const_vertex_data(edge.source()).rank;
  }
  void merge(const pagerank& other) { accum += other.accum; }
  void apply(icontext_type& context) {
    vertex_data& vdata = context.vertex_data();
    double old_value = vdata.rank;
    vdata.rank = RESET_PROB + (1 - RESET_PROB) * accum;
    residual = std::fabs(vdata.rank - old_value) /
               context.num_out_edges();
  }
  void scatter(icontext_type& context, const edge_type& edge) {
    if (residual > EPSILON)
      context.schedule(edge.target(), pagerank());
  }
};
46
Demo of *everything*
PageRank
47
Ongoing Work
Extensions to improve performance on large graphs (see the GraphLab talk later!!)
Better distributed graph representation methods
Possibly better graph partitioning
Out-of-core graph storage
Continually changing graphs
All-new rewrite of distributed GraphLab (come back in May!)