
Hadoop and MapReduce Shannon Quinn (with thanks to William Cohen of CMU)


1 Hadoop and MapReduce Shannon Quinn (with thanks to William Cohen of CMU)

2 Based on: J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets, http://www.mmds.org

3 MapReduce!
– Sequentially read a lot of data
– Map: extract something you care about
– Group by key: sort and shuffle
– Reduce: aggregate, summarize, filter, or transform
– Write the result

4 MapReduce: The Map Step
[Diagram: each input key-value pair is fed to map, which emits a set of intermediate key-value pairs]

5 MapReduce: The Reduce Step
[Diagram: intermediate key-value pairs are grouped by key into key-value groups; reduce turns each group into output key-value pairs]

6 More Specifically
Input: a set of key-value pairs
Programmer specifies two methods:
– Map(k, v) → <k’, v’>*
  Takes a key-value pair and outputs a set of intermediate key-value pairs
  E.g., key is the filename, value is a single line in the file
  There is one Map call for every (k, v) pair
– Reduce(k’, <v’>*) → <k’, v’’>*
  All values v’ with the same key k’ are reduced together and processed in v’ order
  There is one Reduce function call per unique key k’
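
A minimal sketch of these two methods in Hadoop's Java API, using word count (the class and method names are the standard org.apache.hadoop.mapreduce API; job-setup boilerplate is omitted, and both classes are shown in one listing although each would normally live in its own file):

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;

    // Map(k, v): k is the byte offset of the line, v is the line itself.
    // Emits (word, 1) for every word on the line.
    public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
      private static final IntWritable ONE = new IntWritable(1);
      private final Text word = new Text();

      @Override
      protected void map(LongWritable key, Text value, Context context)
          throws IOException, InterruptedException {
        for (String token : value.toString().split("\\s+")) {
          if (!token.isEmpty()) {
            word.set(token);
            context.write(word, ONE);
          }
        }
      }
    }

    // Reduce(k', <v'>*): called once per unique word with all of its counts.
    class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
      @Override
      protected void reduce(Text key, Iterable<IntWritable> values, Context context)
          throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable v : values) {
          sum += v.get();
        }
        context.write(key, new IntWritable(sum));
      }
    }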

7 Large-scale Computing
Large-scale computing for data mining problems on commodity hardware
Challenges:
– How do you distribute computation?
– How can we make it easy to write distributed programs?
– Machines fail:
  One server may stay up 3 years (1,000 days)
  If you have 1,000 servers, expect to lose one per day
  People estimated Google had ~1M machines in 2011 – 1,000 machines fail every day!

8 Idea and Solution
Issue: copying data over a network takes time
Idea:
– Bring computation close to the data
– Store files multiple times for reliability
Map-reduce addresses these problems
– Google’s computational/data manipulation model
– An elegant way to work with big data
– Storage infrastructure: a file system (Google: GFS; Hadoop: HDFS)
– Programming model: Map-Reduce

9 Storage Infrastructure
Problem: if nodes fail, how do we store data persistently?
Answer: a distributed file system
– Provides a global file namespace
– Google: GFS; Hadoop: HDFS
Typical usage pattern:
– Huge files (100s of GB to TB)
– Data is rarely updated in place
– Reads and appends are common

10 Distributed File System
Chunk servers
– File is split into contiguous chunks
– Typically each chunk is 16-64MB
– Each chunk is replicated (usually 2x or 3x)
– Try to keep replicas in different racks
Master node
– a.k.a. Name Node in Hadoop’s HDFS
– Stores metadata about where files are stored
– Might be replicated
Client library for file access
– Talks to master to find chunk servers
– Connects directly to chunk servers to access data

11 Distributed File System
Reliable distributed file system
Data kept in “chunks” spread across machines
Each chunk replicated on different machines
– Seamless recovery from disk or machine failure
[Diagram: chunks C0–C5 and D0–D1 replicated across chunk servers 1 through N]
Bring computation directly to the data!
Chunk servers also serve as compute servers

12 Programming Model: MapReduce
Warm-up task: we have a huge text document
– Count the number of times each distinct word appears in the file
Sample application:
– Analyze web server logs to find popular URLs

13 Task: Word Count
Case 1:
– File too large for memory, but all (word, count) pairs fit in memory
Case 2:
– Count occurrences of words: words(doc.txt) | sort | uniq -c
  where words takes a file and outputs the words in it, one per line
Case 2 captures the essence of MapReduce
– Great thing is that it is naturally parallelizable

14 Data Flow
Input and final output are stored on a distributed file system (FS):
– Scheduler tries to schedule map tasks “close” to the physical storage location of input data
Intermediate results are stored on the local FS of Map and Reduce workers
Output is often input to another MapReduce task

15 Coordination: Master
Master node takes care of coordination:
– Task status: (idle, in-progress, completed)
– Idle tasks get scheduled as workers become available
– When a map task completes, it sends the master the location and sizes of its R intermediate files, one for each reducer
– Master pushes this info to reducers
Master pings workers periodically to detect failures

16 Dealing with Failures
Map worker failure
– Map tasks completed or in-progress at the worker are reset to idle
– Reduce workers are notified when a task is rescheduled on another worker
Reduce worker failure
– Only in-progress tasks are reset to idle
– Reduce task is restarted
Master failure
– MapReduce task is aborted and the client is notified

17 Task Granularity & Pipelining
Fine-granularity tasks: map tasks >> machines
– Minimizes time for fault recovery
– Can pipeline shuffling with map execution
– Better dynamic load balancing

18 Refinement: Combiners
Often a Map task will produce many pairs of the form (k,v1), (k,v2), … for the same key k
– E.g., popular words in the word count example
Can save network time by pre-aggregating values in the mapper:
– combine(k, list(v1)) → v2
– Combiner is usually the same as the reduce function
Works only if the reduce function is commutative and associative
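
In Hadoop, wiring in a combiner is one line at job-configuration time. A sketch of a driver, assuming the word count classes from the earlier sketch (Job.setCombinerClass is the real API; reusing the reducer as the combiner is safe here because integer addition is commutative and associative):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCountDriver {
      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCountDriver.class);
        job.setMapperClass(WordCountMapper.class);
        // Combiner: pre-aggregate (word, 1) pairs on each map node before
        // the shuffle, so far fewer pairs cross the network.
        job.setCombinerClass(WordCountReducer.class);
        job.setReducerClass(WordCountReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }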

19 Refinement: Combiners
Back to our word counting example:
– Combiner combines the values of all keys of a single mapper (single machine)
– Much less data needs to be copied and shuffled!

20 Refinement: Partition Function
Want to control how keys get partitioned
– Inputs to map tasks are created by contiguous splits of the input file
– Reduce needs to ensure that records with the same intermediate key end up at the same worker
System uses a default partition function:
– hash(key) mod R
Sometimes useful to override the hash function:
– E.g., hash(hostname(URL)) mod R ensures URLs from a host end up in the same output file
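
A sketch of the hostname example as a custom Hadoop Partitioner (Partitioner and job.setPartitionerClass are the real API; it assumes the intermediate key is a URL stored in a Text, and the value type is just for illustration):

    import java.net.URI;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Partitioner;

    // Routes all URLs from the same host to the same reducer:
    // partition = hash(hostname(URL)) mod R.
    public class HostPartitioner extends Partitioner<Text, IntWritable> {
      @Override
      public int getPartition(Text key, IntWritable value, int numPartitions) {
        String host;
        try {
          host = URI.create(key.toString()).getHost();
        } catch (IllegalArgumentException e) {
          host = null; // unparsable URL
        }
        if (host == null) host = key.toString(); // fall back to the raw key
        // Mask off the sign bit so the result is non-negative.
        return (host.hashCode() & Integer.MAX_VALUE) % numPartitions;
      }
    }
    // Enabled in the driver with: job.setPartitionerClass(HostPartitioner.class);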

21 Cost Measures for Algorithms
In MapReduce we quantify the cost of an algorithm using:
1. Communication cost = total I/O of all processes
2. Elapsed communication cost = max of I/O along any path
3. (Elapsed) computation cost: analogous, but count only the running time of processes
Note that here big-O notation is not the most useful (adding more machines is always an option)

22 Example: Cost Measures
For a map-reduce algorithm:
– Communication cost = input file size + 2 × (sum of the sizes of all files passed from Map processes to Reduce processes) + the sum of the output sizes of the Reduce processes
– Elapsed communication cost is the sum of the largest input + output for any map process, plus the same for any reduce process
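
A small worked example with made-up numbers: suppose the input file is 100 GB, the mappers together emit 20 GB of intermediate files, and the reducers write 1 GB of output. The intermediate files are counted twice (written by Map, read by Reduce), so:

    communication cost = 100 + 2 × 20 + 1 = 141 GB

If the largest map task reads 1 GB and writes 0.2 GB, and the largest reduce task reads 0.5 GB and writes 0.05 GB, then:

    elapsed communication cost = (1 + 0.2) + (0.5 + 0.05) = 1.75 GB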

23 What Cost Measures Mean
Either the I/O (communication) or the processing (computation) cost dominates
– Ignore one or the other
Total cost tells what you pay in rent to your friendly neighborhood cloud
Elapsed cost is wall-clock time using parallelism

24 Performance
IMPORTANT:
– You may not have room for all reduce values in memory
– In fact, you should PLAN not to have memory for all values
– Remember, small machines are much cheaper and you have a limited budget
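
Concretely, this means iterating over a key's values exactly once and keeping only a running aggregate, never materializing them in a collection. A sketch (same standard Reducer API as above; the types are just for illustration):

    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Reducer;

    public class StreamingSumReducer extends Reducer<Text, LongWritable, Text, LongWritable> {
      @Override
      protected void reduce(Text key, Iterable<LongWritable> values, Context context)
          throws IOException, InterruptedException {
        // Stream over the values: O(1) memory no matter how many there are.
        // Do NOT copy them into an ArrayList first - a popular key may have
        // far more values than fit on the heap.
        long sum = 0;
        for (LongWritable v : values) {
          sum += v.get();
        }
        context.write(key, new LongWritable(sum));
      }
    }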

25 Implementations
Google
– Not available outside Google
Hadoop
– An open-source implementation in Java
– Uses HDFS for stable storage
– Download: http://hadoop.apache.org/
Spark
– A competitor to MapReduce
– Uses several distributed filesystems
– Download: http://spark.apache.org/
Others

26 Reading
Jeffrey Dean and Sanjay Ghemawat: MapReduce: Simplified Data Processing on Large Clusters
– http://labs.google.com/papers/mapreduce.html
Sanjay Ghemawat, Howard Gobioff, and Shun-Tak Leung: The Google File System
– http://labs.google.com/papers/gfs.html

27 Further Reading
Programming model inspired by functional language primitives
Partitioning/shuffling similar to many large-scale sorting systems
– NOW-Sort ['97]
Re-execution for fault tolerance
– BAD-FS ['04] and TACC ['97]
Locality optimization has parallels with Active Disks/Diamond work
– Active Disks ['01], Diamond ['04]
Backup tasks similar to Eager Scheduling in the Charlotte system
– Charlotte ['96]
Dynamic load balancing solves a similar problem to River’s distributed queues
– River ['99]

28 [image slide]

29 Git Basics
– http://git-scm.com/docs/gittutorial
[Advanced] Cheat sheet
– https://github.com/tiimgreen/github-cheat-sheet

30 Some useful Hadoop-isms
Testing and setup
– Excellent Hadoop 2.x setup and testing tutorial: http://www.highlyscalablesystems.com/3597/hadoop-installation-tutorial-hadoop-2-x/
You won’t need to worry about setup!
– On GACRC, I’m setting up the VMs
– On AWS, Amazon handles it
But the testing tips near the bottom are excellent
– Putting files into HDFS
– Reading existing content/results in HDFS
– Submitting Hadoop jobs on the command line
If you use the AWS GUI, you won’t need to worry about any of this

31 Some useful Hadoop-isms
Counters (see the sketch below)
– Initialize in main() / run()
– Increment in mapper / reducer
– Read in main() / run()
– Example: http://diveintodata.org/2011/03/15/an-example-of-hadoop-mapreduce-counter/
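
A minimal counter sketch using the standard enum-based API (Context.getCounter, Counter.increment, and Counters.findCounter are real Hadoop calls; the enum name and logic are made up for illustration):

    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class CountingMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
      // Hypothetical counter for illustration.
      public enum MyCounters { MALFORMED_RECORDS }

      @Override
      protected void map(LongWritable key, Text value, Context context)
          throws IOException, InterruptedException {
        if (value.toString().isEmpty()) {
          // Increment in the mapper; Hadoop aggregates across all tasks.
          context.getCounter(MyCounters.MALFORMED_RECORDS).increment(1);
          return;
        }
        context.write(new Text(value.toString()), new LongWritable(1));
      }
    }

    // Read back in the driver after the job finishes:
    // long bad = job.getCounters()
    //     .findCounter(CountingMapper.MyCounters.MALFORMED_RECORDS).getValue();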

32 Some useful Hadoop-isms
Joins
– Join values together that have the same key
– Map-side
  Faster and more efficient
  Harder to implement: requires a custom Partitioner and Comparator
  http://codingjunkie.net/mapside-joins/
– Reduce-side
  Easy to implement: the shuffle step does the work for you!
  Less efficient, as data is pushed over the network
  http://codingjunkie.net/mapreduce-reduce-joins/
MultipleInputs
– Specify a specific mapper class for a specific input path (see the sketch below)
– https://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapreduce/lib/input/MultipleInputs.html
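
A sketch of the MultipleInputs half of a reduce-side join (MultipleInputs.addInputPath is the real API; UserMapper, OrderMapper, JoinReducer, and the paths are hypothetical classes and locations defined elsewhere):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.MultipleInputs;
    import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class JoinDriver {
      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "reduce-side join");
        job.setJarByClass(JoinDriver.class);
        // Each input path gets its own mapper. Both mappers emit the join key
        // as the output key and tag their values (e.g., "U:" vs "O:") so the
        // reducer can tell the two sides apart after the shuffle.
        // UserMapper, OrderMapper, JoinReducer: hypothetical, defined elsewhere.
        MultipleInputs.addInputPath(job, new Path("/data/users"),
            TextInputFormat.class, UserMapper.class);
        MultipleInputs.addInputPath(job, new Path("/data/orders"),
            TextInputFormat.class, OrderMapper.class);
        job.setReducerClass(JoinReducer.class); // pairs up tagged values per key
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        FileOutputFormat.setOutputPath(job, new Path("/data/joined"));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }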

33 Some useful Hadoop-isms
setup()
– Optional method override in a Mapper / Reducer subclass
– Executed once per task, before any calls to map() / reduce()
– Useful for initializing variables (see the sketch below)
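
A sketch (setup(Context) is the real hook; the configuration key and filtering logic are made up for illustration):

    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class ThresholdMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
      private int threshold;

      @Override
      protected void setup(Context context) {
        // Runs once per task, before the first map() call: a good place to
        // read job configuration or open side resources.
        // "my.job.threshold" is a hypothetical key set in the driver.
        threshold = context.getConfiguration().getInt("my.job.threshold", 10);
      }

      @Override
      protected void map(LongWritable key, Text value, Context context)
          throws IOException, InterruptedException {
        String line = value.toString();
        if (line.length() >= threshold) {
          context.write(new Text(line), new LongWritable(1));
        }
      }
    }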

34 Some useful Hadoop-isms
DistributedCache
– Read-only cache of information accessible by each node in the cluster
– Very useful for broadcasting small amounts of read-only information
– Tricky to implement (see the sketch below)
– http://stackoverflow.com/questions/21239722/hadoop-distributedcache-is-deprecated-what-is-the-preferred-api
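
Since the old DistributedCache class is deprecated, the preferred calls are Job.addCacheFile on the driver side and Context.getCacheFiles in the task (both real APIs). A sketch, assuming a made-up stopword file; the "#stopwords" fragment names the symlink created in the task's working directory:

    // Driver side:
    // job.addCacheFile(new java.net.URI("/data/stopwords.txt#stopwords"));

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;
    import java.net.URI;
    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class StopwordMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
      private final Set<String> stopwords = new HashSet<>();

      @Override
      protected void setup(Context context) throws IOException {
        // Read the cached file once per task via its symlinked name.
        URI[] cached = context.getCacheFiles();
        if (cached != null && cached.length > 0) {
          try (BufferedReader in = new BufferedReader(new FileReader("stopwords"))) {
            String w;
            while ((w = in.readLine()) != null) stopwords.add(w.trim());
          }
        }
      }

      @Override
      protected void map(LongWritable key, Text value, Context context)
          throws IOException, InterruptedException {
        for (String token : value.toString().split("\\s+")) {
          if (!token.isEmpty() && !stopwords.contains(token)) {
            context.write(new Text(token), new LongWritable(1));
          }
        }
      }
    }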

35 A little MapReduce: NB Variables
1. |V| : size of vocabulary (unique words)
2. |L| : size of label space (unique labels)
3. Y=* : number of documents
4. Y=y : number of documents with label y
5. Y=y, W=* : number of words in documents with label y
6. Y=y, W=w : number of times w appears in a document with label y
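
One hedged way to compute these counts in a single pass (an illustration, not necessarily how the assignment expects it): a mapper that emits one pair per statistic, so a word-count-style sum reducer produces all of these tables at once. The event-string format and the "label<TAB>text" input layout are assumptions:

    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    // Input value assumed to be "label<TAB>document text" (an assumption
    // about the data layout, not a requirement of Hadoop).
    public class NBCountMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
      private static final LongWritable ONE = new LongWritable(1);

      private void emit(Context ctx, String event) throws IOException, InterruptedException {
        ctx.write(new Text(event), ONE);
      }

      @Override
      protected void map(LongWritable key, Text value, Context ctx)
          throws IOException, InterruptedException {
        String[] parts = value.toString().split("\t", 2);
        if (parts.length < 2) return;
        String y = parts[0];
        emit(ctx, "Y=*");            // total number of documents
        emit(ctx, "Y=" + y);         // documents with label y
        for (String w : parts[1].split("\\s+")) {
          if (w.isEmpty()) continue;
          emit(ctx, "Y=" + y + ",W=*");    // words in documents with label y
          emit(ctx, "Y=" + y + ",W=" + w); // occurrences of w under label y
        }
      }
    }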

36 Introduction to MapReduce
3 main phases
– Map (send each input record to a key)
– Sort (put all of one key in the same place)
  Handled behind the scenes in Hadoop
– Reduce (operate on each key and its set of values)
Terms come from functional programming:
– map(lambda x: x.upper(), ["shannon", "p", "quinn"]) → ['SHANNON', 'P', 'QUINN']
– reduce(lambda x, y: x + "-" + y, ["shannon", "p", "quinn"]) → "shannon-p-quinn"

37 MapReduce in slow motion
Canonical example: Word Count
Example corpus: [shown as an image in the original slides]

38 [image slide]

39 [image slide]

40 [image slide]

41 [image slide] Others: KeyValueInputFormat, SequenceFileInputFormat

42 [image slide]

43 [image slide]

44 Is any part of this wasteful?
Remember: moving data around and writing to / reading from disk are very expensive operations
No reducer can start until:
– all mappers are done
– data in its partition has been sorted

45 Common pitfalls
You have no control over the order in which reduces are performed
You have “no” control over the order in which you encounter reduce values
– Sort of (more later)
The only ordering you should assume is that Reducers always start after Mappers

46 Common pitfalls
You should assume your Maps and Reduces will be taking place on different machines with different memory spaces
Don’t make a static variable and assume that other processes can read it
– They can’t
– It may appear that they can when run locally, but they can’t
– No really, don’t do this

47 Common pitfalls
Do not communicate between mappers or between reducers
– Overhead is high
– You don’t know which mappers/reducers are actually running at any given point
– There’s no easy way to find out what machine they’re running on, because you shouldn’t be looking for them anyway

48 Assignment 2
Naïve Bayes on Hadoop
Document classification
Due Friday, Sept 18 by 11:59:59pm

