Introduction to MapReduce. Jimmy Lin, University of Maryland. Tuesday, January 26, 2010. This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License; see the license for details. Adapted from slides by Jimmy Lin.

2 What is MapReduce? Programming model for expressing distributed computations at a massive scale Execution framework for organizing and performing such computations Open-source implementation called Hadoop

Why large data?

4 How much data? Google processes 20 PB a day (2008) Wayback Machine has 3 PB, growing by terabytes per month (3/2009) Facebook has 2.5 PB of user data + 15 TB/day (4/2009) eBay has 6.5 PB of user data + 50 TB/day (5/2009) 640K ought to be enough for anybody.

What is cloud computing?

6 The best thing since sliced bread? Before clouds… Grids Vector supercomputers … Cloud computing means many different things: Large-data processing Utility computing Everything as a service

7 Source: Wikipedia (Electricity meter)

8 Utility Computing What? Computing resources as a metered service (“pay as you go”) Ability to dynamically provision virtual machines Why? Cost: capital vs. operating expenses Scalability: “infinite” capacity Elasticity: scale up or down on demand Does it make sense? Benefits to cloud users Business case for cloud providers I think there is a world market for about five computers.

How do we scale up?

10 Source: Wikipedia (IBM Roadrunner)

11 Divide and Conquer. (Figure: the “Work” is partitioned into chunks w1, w2, w3; each is handled by a “worker” producing partial results r1, r2, r3; these are combined into the final “Result”.)

12 Parallelization Challenges How do we assign work units to workers? What if we have more work units than workers? What if workers need to share partial results? How do we aggregate partial results? How do we know all the workers have finished? What if workers die? What is the common theme of all of these problems?

13 Common Theme? Parallelization problems arise from: Communication between workers (e.g., to exchange state) Access to shared resources (e.g., data) Thus, we need a synchronization mechanism

14 Source: Ricardo Guimarães Herrmann

15 Managing Multiple Workers Difficult because We don’t know the order in which workers run We don’t know when workers interrupt each other We don’t know the order in which workers access shared data Thus, we need: Semaphores (lock, unlock) Conditional variables (wait, notify, broadcast) Still, lots of problems: Deadlock, livelock, race conditions... Dining philosophers, sleeping barbers, cigarette smokers... Moral of the story: be careful!

16 Current Tools. Programming models: shared memory, message passing. Design patterns: master-slaves, producer-consumer flows, shared work queues. (Figures: processes P1–P5 exchanging messages directly vs. communicating through a shared memory; a master coordinating slaves; producers and consumers connected by a work queue.)

17 Where the rubber meets the road Concurrency is difficult to reason about Concurrency is even more difficult to reason about At the scale of datacenters (even across datacenters) In the presence of failures In terms of multiple interacting services Not to mention debugging… The reality: Lots of one-off solutions, custom code Write your own dedicated library, then program with it Burden on the programmer to explicitly manage everything

18 Source: Wikipedia (Flat Tire)

19 Source: MIT Open Courseware

20 Source: MIT Open Courseware

21 Source: Harper’s (Feb, 2008)

22 What’s the point? It’s all about the right level of abstraction The von Neumann architecture has served us well, but is no longer appropriate for the multi-core/cluster environment Hide system-level details from the developers No more race conditions, lock contention, etc. Separating the what from how Developer specifies the computation that needs to be performed Execution framework (“runtime”) handles actual execution The datacenter is the computer!

23 “Big Ideas” Scale “out”, not “up”: limits of SMP and large shared-memory machines. Assume failures are common: even reliable machines eventually fail, and many machines mean frequent failures. Move processing to the data: clusters have limited bandwidth. Process data sequentially, avoid random access: seeks are expensive, disk throughput is reasonable. Seamless scalability along two dimensions, data size and cluster size: from the mythical man-month to the tradable machine-hour.

MapReduce

25 Typical Large-Data Problem Iterate over a large number of records Extract something of interest from each Shuffle and sort intermediate results Aggregate intermediate results Generate final output Key idea: provide a functional abstraction for these two operations Map Reduce (Dean and Ghemawat, OSDI 2004)

26 MapReduce Programmers specify two functions: map (k, v) → <k’, v’>* reduce (k’, v’*) → <k’, v’’>* All values with the same key are sent to the same reducer The execution framework handles everything else…

27 (Figure: the MapReduce data flow — mappers consume input key-value pairs (k1, v1) … (k6, v6) and emit intermediate pairs; shuffle and sort aggregates values by key; reducers produce the outputs (r1, s1), (r2, s2), (r3, s3).)

28 MapReduce Programmers specify two functions: map (k, v) → <k’, v’>* reduce (k’, v’*) → <k’, v’’>* All values with the same key are sent to the same reducer The execution framework handles everything else… What’s “everything else”?

29 MapReduce “Runtime” Handles scheduling Assigns workers to map and reduce tasks Handles “data distribution” Moves processes to data Handles synchronization Gathers, sorts, and shuffles intermediate data Handles errors and faults Detects worker failures and restarts Everything happens on top of a distributed FS Beyond the scope of this course!

30 MapReduce Programmers specify two functions: map (k, v) → <k’, v’>* reduce (k’, v’*) → <k’, v’’>* All values with the same key are reduced together The execution framework handles everything else… Not quite… usually, programmers also specify: partition (k’, number of partitions) → partition for k’ Often a simple hash of the key, e.g., hash(k’) mod n Divides up the key space for parallel reduce operations combine (k’, v’*) → <k’, v’>* Mini-reducers that run in memory after the map phase Must accept input of the same type as the reducer and emit output of the same type as the mapper Used as an optimization to reduce network traffic
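As a minimal illustration of the default hash partitioner just described (the function name and the use of CRC32 are illustrative assumptions, not Hadoop’s API):

    import zlib

    def partition(key, num_partitions):
        # Stable hash of the key modulo the number of reducers, so every
        # occurrence of a key goes to the same reducer. (zlib.crc32 is used
        # instead of Python's built-in hash(), which is randomized per process.)
        return zlib.crc32(str(key).encode("utf-8")) % num_partitions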

31 (Figure: the complete data flow with combiners and partitioners — mappers emit intermediate pairs, combiners aggregate locally, partitioners assign keys to reducers, shuffle and sort aggregates values by key, and reducers emit the final results.)

32 Two more details… Barrier between map and reduce phases Keys arrive at each reducer in sorted order No enforced ordering across reducers

33 “Hello World”: Word Count
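The word count code on this slide is not reproduced in the transcript; as a stand-in, here is a minimal Python sketch of the mapper and reducer it describes, with a toy in-memory driver simulating the shuffle (the driver and all names are illustrative, not part of any MapReduce API).

    from collections import defaultdict

    def map_fn(docid, text):
        # Emit (term, 1) for every term occurrence in the document.
        for term in text.split():
            yield term, 1

    def reduce_fn(term, counts):
        # Sum the partial counts for each term.
        yield term, sum(counts)

    def run(documents):
        # Toy driver standing in for the framework: group values by key
        # ("shuffle and sort"), then apply the reducer to each group.
        groups = defaultdict(list)
        for docid, text in documents.items():
            for key, value in map_fn(docid, text):
                groups[key].append(value)
        return dict(pair for key in sorted(groups)
                         for pair in reduce_fn(key, groups[key]))

    if __name__ == "__main__":
        docs = {1: "one fish two fish", 2: "red fish blue fish"}
        print(run(docs))  # {'blue': 1, 'fish': 4, 'one': 1, 'red': 1, 'two': 1}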

34 MapReduce Implementations Google has a proprietary implementation in C++ Bindings in Java, Python Hadoop is an open-source implementation in Java Development led by Yahoo, used in production Now an Apache project Rapidly expanding software ecosystem Lots of custom research implementations For GPUs, cell processors, etc.

MapReduce Algorithm Design

36 Algorithm Design In-mapper combining Pairs Stripes Order Inversion

37 States of Mapper and Reducer Objects. One mapper object and one reducer object per task, each carrying its own state. Lifecycle: configure (API initialization hook), then map is called once per input key-value pair (reduce once per intermediate key), then close (API cleanup hook).

38 Local Aggregation One of the main costs of a program is in its network communication Often, intermediate results are written to disk before communication, making the cost even higher Local aggregation is critical to reduce these costs

39 Word Count: Baseline

40 Word Count: Version 1 Add a combiner that does the same thing as the reducer Reduces network communication Can we do even better?

41 Word Count: Version 2 Are combiners still needed?

42 Word Count: Version 3 Are combiners still needed? Key: preserve state across input key-value pairs!

43 Design Pattern for Local Aggregation “In-mapper combining” Fold the functionality of the combiner into the mapper by preserving state across multiple map calls Advantages Speed Why is this faster than actual combiners? Disadvantages Explicit memory management required Potential for bugs
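A minimal Python sketch of in-mapper combining for word count, using hypothetical configure/map/close hooks that mirror the mapper lifecycle from the earlier “States of Mapper and Reducer Objects” slide (class and method names are illustrative):

    class WordCountMapper:
        # In-mapper combining: accumulate counts across every input pair this
        # mapper object sees, and emit each term only once, at cleanup time.

        def configure(self):            # API initialization hook
            self.counts = {}

        def map(self, docid, text):     # called once per input key-value pair
            for term in text.split():
                self.counts[term] = self.counts.get(term, 0) + 1

        def close(self):                # API cleanup hook
            for term, count in self.counts.items():
                yield term, count

The price is the explicit memory management the slide warns about: a real implementation would have to flush self.counts when it grows past some bound.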

44 Combiner Design Combiners and reducers share the same method signature Sometimes, reducers can serve as combiners Often, not… Remember: combiners are optional optimizations Should not affect algorithm correctness May be run 0, 1, or multiple times Example: find the average of all integers associated with the same key

45 Computing the Mean: Version 1 Why can’t we use reducer as combiner?

46 Computing the Mean: Version 2 Why doesn’t this work?

47 Computing the Mean: Version 3 Fixed?

48 Computing the Mean: Version 4 What design pattern was used? Are combiners still needed?
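The four “Computing the Mean” code figures are not reproduced here; the sketch below shows one version that works under the approach the slides describe: emit partial (sum, count) pairs, combine them associatively, and divide only in the reducer. Function names are illustrative.

    def mean_map(key, value):
        # Emit a partial (sum, count) pair for each observation.
        yield key, (value, 1)

    def mean_combine(key, pairs):
        # Safe as a combiner: summing (sum, count) pairs is associative and
        # commutative, so running it zero, one, or many times cannot change
        # the final answer.
        total = count = 0
        for s, c in pairs:
            total += s
            count += c
        yield key, (total, count)

    def mean_reduce(key, pairs):
        # Only the reducer performs the division.
        total = count = 0
        for s, c in pairs:
            total += s
            count += c
        yield key, total / count

A combiner that emitted means directly would violate correctness, because the mean of means differs from the overall mean when groups have different sizes.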

49 Two More Design Patterns: Running Example Term co-occurrence matrix for a text collection M = N x N matrix (N = vocabulary size) Mij: number of times i and j co-occur in some context (for concreteness, let’s say context = sentence) Why? Distributional profiles as a way of measuring semantic distance Semantic distance useful for many language processing tasks

50 MapReduce: Large Counting Problems Term co-occurrence matrix for a text collection = specific instance of a large counting problem A large event space (number of terms) A large number of observations (the collection itself) Goal: keep track of interesting statistics about the events Basic approach Mappers generate partial counts Reducers aggregate partial counts How do we aggregate partial counts efficiently?

51 First Try: “Pairs” Each mapper takes a sentence: Generate all co-occurring term pairs For all pairs, emit (a, b) → count Reducers sum up counts associated with these pairs Use combiners!

52 Pairs: Pseudo-Code
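The pairs pseudo-code figure is missing from this transcript; a hedged Python sketch of the approach from the previous slide might look like this (assuming each input value is a sentence string):

    def pairs_map(docid, sentence):
        # Emit ((a, b), 1) for every ordered pair of terms co-occurring
        # in the sentence.
        terms = sentence.split()
        for i, a in enumerate(terms):
            for j, b in enumerate(terms):
                if i != j:
                    yield (a, b), 1

    def pairs_reduce(pair, counts):
        # Sum the partial counts for each co-occurring pair.
        yield pair, sum(counts)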

53 “Pairs” Analysis Advantages Easy to implement, easy to understand Disadvantages Lots of pairs to sort and shuffle around (upper bound?) Not many opportunities for combiners to work

54 Another Try: “Stripes” Idea: group together pairs into an associative array Each mapper takes a sentence: Generate all co-occurring term pairs For each term, emit a → { b: count_b, c: count_c, d: count_d … } Reducers perform element-wise sum of associative arrays. Example: the pairs (a, b) → 1, (a, c) → 2, (a, d) → 5, (a, e) → 3, (a, f) → 2 become the stripe a → { b: 1, c: 2, d: 5, e: 3, f: 2 }; summing the stripes a → { b: 1, d: 5, e: 3 } and a → { b: 1, c: 2, d: 2, f: 2 } gives a → { b: 2, c: 2, d: 7, e: 3, f: 2 }. Key: cleverly-constructed data structure brings together partial results

55 Stripes: Pseudo-Code
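Likewise, a hedged sketch of the stripes approach, using a Counter as the associative array (names are illustrative):

    from collections import Counter

    def stripes_map(docid, sentence):
        # For each term a, emit an associative array mapping each co-occurring
        # term to its count within the sentence.
        terms = sentence.split()
        for i, a in enumerate(terms):
            yield a, Counter(b for j, b in enumerate(terms) if i != j)

    def stripes_reduce(term, stripes):
        # Element-wise sum of the associative arrays.
        total = Counter()
        for stripe in stripes:
            total.update(stripe)
        yield term, dict(total)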

56 “Stripes” Analysis Advantages Far less sorting and shuffling of key-value pairs Can make better use of combiners Disadvantages More difficult to implement Underlying object more heavyweight Fundamental limitation in terms of size of event space

57 Cluster size: 38 cores Data Source: Associated Press Worldstream (APW) of the English Gigaword Corpus (v3), which contains 2.27 million documents (1.8 GB compressed, 5.7 GB uncompressed)

58 Relative Frequencies How do we estimate relative frequencies from counts? Why do we want to do this? How do we do this with MapReduce?

59 f(B|A): “Stripes” Easy! One pass to compute (a, *) Another pass to directly compute f(B|A) a → { b1: 3, b2: 12, b3: 7, b4: 1, … }

60 f(B|A): “Pairs” For this to work: Must emit extra (a, *) for every a in the mapper Must make sure all a’s get sent to the same reducer (use partitioner) Must make sure (a, *) comes first (define sort order) Must hold state in the reducer across different key-value pairs. Example: the reducer sees (a, *) → 32 first and holds this value in memory; then (a, b1) → 3, (a, b2) → 12, (a, b3) → 7, (a, b4) → 1, … are emitted as (a, b1) → 3/32, (a, b2) → 12/32, (a, b3) → 7/32, (a, b4) → 1/32, …

61 “Order Inversion” Common design pattern Computing relative frequencies requires marginal counts But marginal cannot be computed until you see all counts Buffering is a bad idea! Trick: getting the marginal counts to arrive at the reducer before the joint counts
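A sketch of order inversion for computing f(B|A) with pairs, under the assumptions the previous slide spells out: a custom partitioner routes every (a, ·) pair to the same reducer, and the sort order delivers the special (a, "*") marginal before any (a, b). The reducer preserves the marginal across calls; all names are illustrative.

    def relfreq_map(docid, sentence):
        # Emit a count for each pair, plus a "*" pair feeding the marginal.
        terms = sentence.split()
        for i, a in enumerate(terms):
            for j, b in enumerate(terms):
                if i != j:
                    yield (a, b), 1
                    yield (a, "*"), 1

    class RelFreqReducer:
        # Assumes the framework partitions on the left term and sorts keys so
        # that (a, "*") arrives before every (a, b).

        def configure(self):
            self.marginal = 0

        def reduce(self, key, counts):
            a, b = key
            if b == "*":
                self.marginal = sum(counts)      # hold the marginal in memory
            else:
                yield key, sum(counts) / self.marginal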

62 Synchronization: Pairs vs. Stripes Approach 1: turn synchronization into an ordering problem Sort keys into correct order of computation Partition key space so that each reducer gets the appropriate set of partial results Hold state in reducer across multiple key-value pairs to perform computation Illustrated by the “pairs” approach Approach 2: construct data structures that bring partial results together Each reducer receives all the data it needs to complete the computation Illustrated by the “stripes” approach

63 Secondary Sorting MapReduce sorts input to reducers by key Values may be arbitrarily ordered What if we want to sort values too? E.g., k → (v1, r), (v3, r), (v4, r), (v8, r)…

64 Secondary Sorting: Solutions Solution 1: Buffer values in memory, then sort Why is this a bad idea? Solution 2: “Value-to-key conversion” design pattern: form composite intermediate key, (k, v1) Let execution framework do the sorting Preserve state across multiple key-value pairs to handle processing Anything else we need to do?
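A minimal sketch of value-to-key conversion: the field to sort on moves into a composite key so the framework sorts it, while a custom partitioner (illustrative name, CRC32 used only for stability) keeps every composite key with the same original key on one reducer.

    import zlib

    def secondary_map(key, value):
        sort_field, payload = value
        # Value-to-key conversion: move the field we want sorted into a
        # composite intermediate key; the framework sorts keys for us.
        yield (key, sort_field), payload

    def secondary_partition(composite_key, num_partitions):
        original_key, _ = composite_key
        # Partition on the original key only, so all composite keys sharing it
        # reach the same reducer, already ordered by sort_field.
        return zlib.crc32(str(original_key).encode("utf-8")) % num_partitions

The custom partitioner is the extra piece the slide’s final question hints at: without it, composite keys sharing the same original key could land on different reducers.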

65 Recap: Tools for Synchronization Cleverly-constructed data structures Bring data together Sort order of intermediate keys Control order in which reducers process keys Partitioner Control which reducer processes which keys Preserving state in mappers and reducers Capture dependencies across multiple keys and values

66 Issues and Tradeoffs Number of key-value pairs Object creation overhead Time for sorting and shuffling pairs across the network Size of each key-value pair De/serialization overhead Local aggregation Opportunities to perform local aggregation vary Combiners make a big difference Combiners vs. in-mapper combining

Text Retrieval with MapReduce

68 Inverted Index: Boolean Retrieval. (Figure: four documents — Doc 1 “one fish, two fish”, Doc 2 “red fish, blue fish”, Doc 3 “cat in the hat”, Doc 4 “green eggs and ham” — and the inverted index mapping each term (blue, cat, egg, fish, green, ham, hat, one, red, two) to the list of documents containing it.)

Inverted Index: TF.IDF. (Figure: the same four documents, with each term’s postings now carrying term frequency (tf) and document frequency (df).)

70 Inverted Index: Positional Information. (Figure: postings further augmented with within-document positions, e.g., fish occurs at positions [2,4] in Doc 1 and Doc 2.)

71 Retrieval in a Nutshell Look up postings lists corresponding to query terms Traverse postings for each query term Store partial query-document scores in accumulators Select top k results to return

72 MapReduce it? The indexing problem: scalability is critical; must be relatively fast, but need not be real time; fundamentally a batch operation; incremental updates may or may not be important; for the web, crawling is a challenge in itself — perfect for MapReduce! The retrieval problem: must have sub-second response time; for the web, only need relatively few results — uh… not so good…

73 Indexing: Performance Analysis Fundamentally, a large sorting problem Terms usually fit in memory Postings usually don’t How is it done on a single machine? How can it be done with MapReduce?

74 MapReduce: Index Construction Map over all documents Emit term as key, (docno, tf) as value Emit other information as necessary (e.g., term position) Sort/shuffle: group postings by term Reduce Gather and sort the postings (e.g., by docno or tf) Write postings to disk MapReduce does all the heavy lifting!

Inverted Indexing with MapReduce. (Figure: mappers process Doc 1 “one fish, two fish”, Doc 2 “red fish, blue fish”, and Doc 3 “cat in the hat”, emitting (term, docno) pairs; shuffle and sort aggregates values by key; reducers write out the postings list for each term, e.g., fish → {Doc 1, Doc 2}.)

76 Inverted Indexing: Pseudo-Code
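The pseudo-code figure is not reproduced; a hedged sketch of the baseline algorithm from the previous slide (term as key, postings buffered and sorted in the reducer) might look like this, with illustrative names:

    from collections import Counter

    def index_map(docno, text):
        # Emit (term, (docno, tf)) for each distinct term in the document.
        for term, tf in Counter(text.split()).items():
            yield term, (docno, tf)

    def index_reduce(term, postings):
        # Buffer the postings, sort them by docno, and write out the full list.
        yield term, sorted(postings)

Sorting inside the reducer means every posting for a term must be buffered in memory, which is the scalability bottleneck raised a few slides later.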

77 Positional Indexes. (Figure: the same MapReduce flow for Doc 1 “one fish, two fish”, Doc 2 “red fish, blue fish”, and Doc 3 “cat in the hat”, but each posting also carries term positions, e.g., (fish, Doc 1, [2,4]).)

78 Inverted Indexing: Pseudo-Code What’s the problem?

79 Scalability Bottleneck Initial implementation: terms as keys, postings as values Reducers must buffer all postings associated with key (to sort) What if we run out of memory to buffer postings? Uh oh!

80 Another Try… (Figure: on the left, key = fish with postings for docnos 1, 9, 21 buffered as values at the reducer; on the right, composite keys (fish, 1), (fish, 9), (fish, 21), each with its positions as the value, so the framework delivers postings in sorted order.) How is this different? Let the framework do the sorting Term frequency implicitly stored Directly write postings to disk! Where have we seen this before?

81 Getting the Document Frequency In the mapper: Emit “special” key-value pairs to keep track of df (=document frequency) In the reducer: Make sure “special” key-value pairs come first: process them to determine df Remember: proper partitioning!

82 Getting the df: Modified Mapper. (Figure: for the input document Doc 1 “one fish, two fish”, emit normal key-value pairs such as (fish, Doc 1) → [2,4], (one, Doc 1) → [1], (two, Doc 1) → [3], plus “special” key-value pairs (fish, *) → [1], (one, *) → [1], (two, *) → [1] to keep track of df.) Where have we seen this before?
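Putting the last few slides together, a hedged sketch of the scalable version: composite (term, docno) keys let the framework deliver postings in sorted order, and special (term, "*") pairs accumulate the document frequency. As with order inversion earlier, a partitioner on the term alone and a sort order that places "*" first are assumed; names are illustrative.

    def positional_index_map(docno, text):
        # Record the positions of each term in the document.
        positions = {}
        for pos, term in enumerate(text.split(), start=1):
            positions.setdefault(term, []).append(pos)
        for term, plist in positions.items():
            yield (term, docno), plist   # normal posting; tf = len(plist)
            yield (term, "*"), 1         # special pair counted toward df

    class PositionalIndexReducer:
        # Assumes partitioning on the term alone and a sort order in which
        # (term, "*") precedes every (term, docno).

        def configure(self):
            self.df = 0

        def reduce(self, key, values):
            term, docno = key
            if docno == "*":
                self.df = sum(values)    # document frequency for this term
            else:
                for plist in values:
                    # Postings arrive in docno order, so they can be written
                    # out directly without buffering.
                    yield term, docno, self.df, plist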

Graph Algorithms

84 Graphs and MapReduce Graph algorithms typically involve: Performing computations at each node: based on node features, edge features, and local link structure Propagating computations: “traversing” the graph Key questions: How do you represent graph data in MapReduce? How do you traverse a graph in MapReduce? How do you run an iterative algorithm?

85 Representing Graphs: Adjacency Lists Take adjacency matrices… and throw away all the zeros. Example: 1: 2, 4; 2: 1, 3, 4; 3: 1; 4: 1, …

86 Adjacency Lists: Critique Advantages: Much more compact representation Easy to compute over outlinks Disadvantages: Much more difficult to compute over inlinks

87 Single Source Shortest Path Problem: find shortest path from a source node to one or more target nodes Shortest might also mean lowest weight or cost First, a refresher: Dijkstra’s Algorithm

88–93 Dijkstra’s Algorithm Example. (Figures: a step-by-step run of Dijkstra’s algorithm on a small weighted graph — the source starts at distance 0 and every other node at ∞; at each step the closest unvisited node is settled and its outgoing edges are relaxed, until all shortest distances are final. Example from CLR.)

94 Single Source Shortest Path Problem: find shortest path from a source node to one or more target nodes Shortest might also mean lowest weight or cost Single processor machine: Dijkstra’s Algorithm MapReduce: parallel Breadth-First Search (BFS)

95 Finding the Shortest Path Consider simple case of equal edge weights Solution to the problem can be defined inductively Here’s the intuition: Define: b is reachable from a if b is on the adjacency list of a. DistanceTo(s) = 0. For all nodes p reachable from s, DistanceTo(p) = 1. For all nodes n reachable from some other set of nodes M, DistanceTo(n) = 1 + min(DistanceTo(m), m ∈ M). (Figure: node n is reachable from nodes m1, m2, m3, which lie at distances d1, d2, d3 along paths leading back to the source s.)

96 Source: Wikipedia (Wave)

97 Visualizing Parallel BFS. (Figure: a small graph with nodes n0 through n9, showing the search frontier expanding outward from the source one hop per MapReduce iteration.)

98 From Intuition to Algorithm Data representation: Key: node n Value: d (distance from start), adjacency list (list of nodes reachable from n) Initialization: for all nodes except for start node, d = ∞ Mapper: ∀ m ∈ adjacency list: emit (m, d + 1) Sort/Shuffle Groups distances by reachable nodes Reducer: Selects minimum distance path for each reachable node Additional bookkeeping needed to keep track of actual path

99 Multiple Iterations Needed Each MapReduce iteration advances the “known frontier” by one hop Subsequent iterations include more and more reachable nodes as frontier expands Multiple iterations are needed to explore entire graph Preserving graph structure: Problem: Where did the adjacency list go? Solution: mapper emits (n, adjacency list) as well

100 BFS Pseudo-Code
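The BFS pseudo-code figure is not reproduced; below is a hedged Python sketch of a single iteration, following the “From Intuition to Algorithm” slide: distances flow along outlinks, the reducer keeps the minimum, and the graph structure is passed through so it survives the iteration. An external driver would rerun this until no distance changes; names are illustrative.

    INF = float("inf")

    def bfs_map(node_id, node):
        d, adjacency = node
        # Pass the graph structure along so it is not lost between iterations.
        yield node_id, ("graph", adjacency)
        if d != INF:
            for m in adjacency:
                yield m, ("distance", d + 1)

    def bfs_reduce(node_id, values):
        adjacency, best = [], INF
        for kind, payload in values:
            if kind == "graph":
                adjacency = payload
            else:
                best = min(best, payload)   # keep the shortest distance seen
        yield node_id, (best, adjacency)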

101 Stopping Criterion How many iterations are needed in parallel BFS (equal edge weight case)? Convince yourself: when a node is first “discovered”, we’ve found the shortest path Now answer the question... Six degrees of separation? Practicalities of implementation in MapReduce

102 Comparison to Dijkstra Dijkstra’s algorithm is more efficient At any step it only pursues edges from the minimum-cost path inside the frontier MapReduce explores all paths in parallel Lots of “waste” Useful work is only done at the “frontier”

103 Weighted Edges Now add positive weights to the edges Why can’t edge weights be negative? Simple change: adjacency list now includes a weight w for each edge In the mapper, emit (m, d + w) instead of (m, d + 1) for each node m That’s it?
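Under the same assumptions as the BFS sketch above, the weighted variant changes only the mapper: adjacency lists carry (neighbor, weight) pairs and the emitted distance is d + w; the reducer is unchanged.

    INF = float("inf")

    def weighted_bfs_map(node_id, node):
        d, adjacency = node                # adjacency: list of (m, w) pairs
        yield node_id, ("graph", adjacency)
        if d != INF:
            for m, w in adjacency:
                yield m, ("distance", d + w)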

104 Stopping Criterion How many iterations are needed in parallel BFS (positive edge weight case)? Convince yourself: when a node is first “discovered”, we’ve found the shortest path Not true!

105 Additional Complexities. (Figure: a weighted graph with nodes n1–n9 and a search frontier around the source s; a path s → p → q → r made of several low-weight hops can be shorter than a single high-weight edge (e.g., weight 10) crossing the frontier, so a node’s first-discovered distance need not be its final shortest distance.)

106 Stopping Criterion How many iterations are needed in parallel BFS (positive edge weight case)? Practicalities of implementation in MapReduce

107 Graphs and MapReduce Graph algorithms typically involve: Performing computations at each node: based on node features, edge features, and local link structure Propagating computations: “traversing” the graph Generic recipe: Represent graphs as adjacency lists Perform local computations in mapper Pass along partial results via outlinks, keyed by destination node Perform aggregation in reducer on inlinks to a node Iterate until convergence: controlled by external “driver” Don’t forget to pass the graph structure between iterations

108 Random Walks Over the Web Random surfer model: User starts at a random Web page User randomly clicks on links, surfing from page to page PageRank Characterizes the amount of time spent on any given page Mathematically, a probability distribution over pages PageRank captures notions of page importance Correspondence to human intuition? One of thousands of features used in web search Note: query-independent Think about it: How would you implement PageRank?