MapReduce: Simplified Data Processing on Large Clusters By Dinesh Dharme.


1 MapReduce: Simplified Data Processing on Large Clusters By Dinesh Dharme

2 MapReduce

3 Motivation: Large Scale Data Processing
Want to process lots of data (> 1 TB); the size of the web alone exceeds 400 TB.
Want to parallelize across hundreds or thousands of CPUs; commodity CPUs have become cheap.
Want to make this easy: favour programmer productivity over CPU efficiency.

4 What is MapReduce?
Simply put, MapReduce is a parallel programming model and its associated implementation.
It borrows from functional programming (the map and reduce/fold primitives).
Many problems can be modeled in the MapReduce paradigm.

5 MapReduce Features
Automatic parallelization and distribution
Fault tolerance
Load balancing
Network and disk transfer optimization
Status and monitoring tools
A clean abstraction for programmers
Improvements to the core library benefit all users of the library!

6 Steps in a typical problem solved by MapReduce
Read a lot of data
Map: extract something you care about from each record
Shuffle and sort
Reduce: aggregate, summarize, filter, or transform
Write the results
The outline stays the same; only Map and Reduce change to fit the problem.

7 MapReduce Paradigm
Basic data type: the key-value pair (k, v). E.g. key = URL, value = HTML of the web page.
Users implement an interface of two functions:
Map: (k, v) ↦ list(k', v')
Reduce: (k', list(v')) ↦ list(v'') (typically the output list holds a single value)
All v' with the same k' are reduced together.

8 Example: Count word occurrences

map(String input_key, String input_value):
  // input_key: document name
  // input_value: document contents
  for each word w in input_value:
    EmitIntermediate(w, "1");

reduce(String output_key, Iterator intermediate_values):
  // output_key: a word
  // output_values: a list of counts
  int result = 0;
  for each v in intermediate_values:
    result += ParseInt(v);
  Emit(AsString(result));
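The word-count pseudocode above can be exercised end-to-end with a minimal in-process sketch in Python; the shuffle-and-sort step is simulated with a dictionary, and all names here are illustrative, not part of any real MapReduce API:

```python
from collections import defaultdict

def map_fn(doc_name, contents):
    # Map: emit an intermediate (word, 1) pair for every word occurrence.
    for word in contents.split():
        yield (word, 1)

def reduce_fn(word, counts):
    # Reduce: sum all partial counts for one word.
    return sum(counts)

def map_reduce(inputs, map_fn, reduce_fn):
    # "Shuffle and sort": group all intermediate values by key.
    groups = defaultdict(list)
    for key, value in inputs:
        for k2, v2 in map_fn(key, value):
            groups[k2].append(v2)
    return {k2: reduce_fn(k2, vs) for k2, vs in sorted(groups.items())}

docs = [("doc1", "the quick brown fox"), ("doc2", "the lazy dog")]
print(map_reduce(docs, map_fn, reduce_fn))
# -> {'brown': 1, 'dog': 1, 'fox': 1, 'lazy': 1, 'quick': 1, 'the': 2}
```

In the real system the groups dictionary is replaced by partitioned intermediate files on the map workers' local disks, fetched and merged by the reduce workers.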

10 Example: Query Frequency

map(String input_key, String input_value):
  // input_key: query log name
  // input_value: query log contents
  for each query q in input_value:
    if (contains(q, "full moon")):
      EmitIntermediate(q.issue_time, "1");

reduce(String output_key, Iterator intermediate_values):
  // output_key: an issue_time
  // output_values: a list of counts
  int result = 0;
  for each v in intermediate_values:
    result += ParseInt(v);
  Emit(output_key, AsString(result));
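A runnable Python sketch of this query-frequency example, assuming a hypothetical log format of one tab-separated "issue_time<TAB>query" pair per line (the format and function names are my own, not from the original):

```python
from collections import defaultdict

def map_queries(log_name, log_content):
    # Assumed log format (hypothetical): one "<issue_time>\t<query>" per line.
    for line in log_content.splitlines():
        issue_time, query = line.split("\t", 1)
        if "full moon" in query:
            yield (issue_time, 1)

def reduce_counts(issue_time, counts):
    # Sum the matching-query count for one issue time.
    return sum(counts)

log = ("log1", "2004-08-29\tfull moon tonight\n"
               "2004-08-29\tweather forecast\n"
               "2004-08-30\tfull moon calendar")
groups = defaultdict(list)
for t, c in map_queries(*log):
    groups[t].append(c)
print({t: reduce_counts(t, cs) for t, cs in groups.items()})
# -> {'2004-08-29': 1, '2004-08-30': 1}
```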

16 More Examples
Distributed grep
Count of URL access frequency
Suggesting terms for query expansion
Distributed sort

18 Execution
Create M splits of the input data.
The user provides R, i.e. the number of partitions (and hence the number of output files).
Master data structure: keeps track of the state of each map and reduce task.
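The master's bookkeeping described above can be sketched as follows; this is a hypothetical illustration (class and method names are my own), tracking one state per task plus the locations of each map task's intermediate output:

```python
from enum import Enum

class TaskState(Enum):
    IDLE = 1
    IN_PROGRESS = 2
    COMPLETED = 3

class MasterState:
    # Hypothetical sketch of the master data structure: one state per map
    # and reduce task, plus locations of each map task's intermediate files.
    def __init__(self, M, R):
        self.map_tasks = [TaskState.IDLE] * M
        self.reduce_tasks = [TaskState.IDLE] * R
        self.intermediate_locations = {}  # map task id -> list of R file paths

    def complete_map(self, task_id, locations):
        self.map_tasks[task_id] = TaskState.COMPLETED
        self.intermediate_locations[task_id] = locations

    def all_maps_done(self):
        # Reduce tasks may only start once every map task has completed.
        return all(s is TaskState.COMPLETED for s in self.map_tasks)

m = MasterState(M=3, R=2)
m.complete_map(0, ["worker1/part-0", "worker1/part-1"])
print(m.all_maps_done())  # -> False: two map tasks still outstanding
```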

20 Locality
The master divides up tasks based on the location of the data: it tries to schedule a map() task on the same machine as the physical file data, or at least on the same rack.
map() task inputs are divided into 64 MB blocks: the same size as Google File System chunks.

22 Parallelism
map() functions run in parallel, creating different intermediate values from different input data sets.
reduce() functions also run in parallel, each working on a different output key.
All values are processed independently.
Bottleneck: the reduce phase can't start until the map phase is completely finished.

23 Fault Tolerance
The master detects worker failures:
- Re-executes completed and in-progress map() tasks (completed map output lives on the failed machine's local disk, so it is lost).
- Re-executes in-progress reduce() tasks (completed reduce output is already in the global file system).
The master notices when particular input key/value pairs cause crashes in map(), and skips those values on re-execution.
Effect: can work around bugs in third-party libraries!

24 Fault Tolerance contd.
What if the master fails?
- Periodically checkpoint the state of the master data structure.
- Write the checkpoint to the GFS filesystem.
- A new master recovers from the last checkpoint and continues.

25 Semantics in the Presence of Failures
Deterministic map and reduce operators are assumed.
Atomic commits of map and reduce task outputs.
Relies on atomic renames provided by GFS.

26 Semantics in the Presence of Failures contd.
What if the map/reduce operators are non-deterministic?
In this case, MapReduce provides weaker but still reasonable semantics.

27 Optimizations
No reduce can start until the map phase is complete:
- A single slow disk controller can rate-limit the whole process.
The master redundantly executes "slow-moving" (straggler) map tasks and uses the result of whichever copy finishes first.
This is performed only when the job is close to completion.
Why is it safe to redundantly execute map tasks? Wouldn't this mess up the total computation? (It is safe because map is deterministic and task commits are atomic: duplicate executions produce identical output, and only one copy's output is ever used.)

28 Fine Tuning
Partitioning function
Ordering guarantees
Combiner function
Bad-record skipping
Status information
Counters

29 Partitioning Function
Default: "hash(key) mod R".
Can be customized, e.g. "hash(Hostname(urlkey)) mod R" puts all URLs from the same host into the same partition, and hence the same output file.
Knowledge of the distribution of keys can be used to choose good partitions.
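Both partitioners on this slide can be sketched in a few lines of Python; zlib.crc32 stands in for the (unspecified) hash function, since Python's built-in hash() is salted per process:

```python
from urllib.parse import urlparse
import zlib

def default_partition(key, R):
    # Default partitioner: hash(key) mod R. zlib.crc32 is used here because
    # it is stable across runs, unlike Python's salted built-in hash().
    return zlib.crc32(key.encode()) % R

def hostname_partition(url_key, R):
    # Custom partitioner: hash(Hostname(urlkey)) mod R, so all URLs from the
    # same host land in the same partition and the same output file.
    return default_partition(urlparse(url_key).hostname, R)

R = 8
a = hostname_partition("http://example.com/page1", R)
b = hostname_partition("http://example.com/page2", R)
print(a == b)  # -> True: same host, same partition
```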

30 Combiner Function
Runs on the same machine as a map task.
Causes a mini-reduce phase to occur over the local map output before the real reduce phase.
Saves network bandwidth.
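For word count, the combiner idea can be sketched as follows (names are illustrative); because addition is associative, the combiner can safely be the same operation as the reducer:

```python
from collections import Counter

def map_words(doc):
    # Map: one (word, 1) pair per word occurrence.
    for word in doc.split():
        yield (word, 1)

def combine(pairs):
    # Combiner: a local, per-map-task mini-reduce. For word count it is the
    # same operation as the reducer (summing counts), so only one
    # (word, partial_sum) pair per distinct word leaves the machine,
    # instead of one pair per occurrence.
    local = Counter()
    for word, count in pairs:
        local[word] += count
    return list(local.items())

pairs = list(map_words("to be or not to be"))
print(len(pairs))           # -> 6 intermediate pairs before combining
print(len(combine(pairs)))  # -> 4 pairs after ("to" and "be" merged)
```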

31 Performance: Grep
1800 machines
10^10 100-byte records (~1 TB)
A rare 3-character pattern to be matched (~100,000 records contain the pattern)
M = 15,000
R = 1
Input data chunk size = 64 MB

32 Performance: Grep

33 Performance: Sort
1800 machines
10^10 100-byte records (~1 TB)
M = 15,000
R = 4,000
Input data chunk size = 64 MB
2 TB of final output (GFS maintains 2 copies)

35 MapReduce Conclusions
MapReduce has proven to be a useful abstraction.
It greatly simplifies large-scale computations at Google.
Google's indexing code was rewritten using MapReduce: the result is simpler, smaller, and more readable.

