MapReduce Costin Raiciu Advanced Topics in Distributed Systems, 2011

Motivating App Web Search  12PB of Web data  Must be able to search it quickly  How can we do this?

Web Search Primer Each document is a collection of words  Different frequencies, counts, meaning Users supply a few words – the query  Task: find all the documents which contain a specified word

Solution: an inverted web index For each keyword, store a list of the documents that contain it:  Student -> {a,b,c, …}  UPB -> {x,y,z,…}  … When a query comes:  Lookup all the keywords  Intersect document lists  Order the results according to their importance
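
As a toy illustration of the lookup-and-intersect path described on this slide, here is a minimal in-memory sketch in Java. The class and data are hypothetical (not from the slides), and ranking the results by importance is omitted.

  import java.util.*;

  public class TinyInvertedIndex {
    // keyword -> set of document ids that contain it
    private final Map<String, Set<String>> index = new HashMap<>();

    public void add(String keyword, String docId) {
      index.computeIfAbsent(keyword, k -> new HashSet<>()).add(docId);
    }

    // Return the documents that contain every query word (unranked).
    public Set<String> query(List<String> queryWords) {
      Set<String> result = null;
      for (String w : queryWords) {
        Set<String> docs = index.getOrDefault(w, Collections.<String>emptySet());
        if (result == null) result = new HashSet<>(docs);   // first keyword: copy its list
        else result.retainAll(docs);                         // later keywords: intersect
      }
      return result == null ? Collections.<String>emptySet() : result;
    }
  }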

How do we build an inverted web index? Read 12PB of web pages  For each page, find its keywords  Slowly build index: 2TB of data We could run it on a single machine  100MB/s hard disk read = 1GB read in 10s  ~120,000,000 s just to read on a single machine ~ 4 years!

We need parallelism! Want to run this task in 1 day  We would need 1400 machines at least What functionality might we need?  Move data around  Run processing  Check liveness  Deal with failures (certainty!)  Get results
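
A rough check of these figures, using the numbers from the previous slide: reading 12 PB sequentially at 100 MB/s takes about 12×10^15 B / 10^8 B/s = 1.2×10^8 s (roughly 4 years) on one machine; to finish in one day (about 86,400 s) you need roughly 1.2×10^8 / 86,400 ≈ 1,400 machines reading in parallel.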

Inspiration: Functional Programming

Functional Programming Review Functional operations do not modify data structures: They always create new ones Original data still exists in unmodified form Data flows are implicit in program design Order of operations does not matter

Functional Programming Review fun foo(l: int list) = sum(l) + mul(l) + length(l) Order of sum() and mul(), etc does not matter – they do not modify l

Map map f list Creates a new list by applying f to each element of the input list; returns output in order.

Fold fold f x0 list Moves across a list, applying f to each element plus an accumulator. f returns the next accumulator value, which is combined with the next element of the list
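
A small illustration of map and fold side by side, as a sketch using Java 8 streams (not part of the original slides): map builds a new list by applying a function to each element, and reduce/fold threads an accumulator through the list, starting from the initial value x0.

  import java.util.Arrays;
  import java.util.List;
  import java.util.stream.Collectors;

  public class MapFoldDemo {
    public static void main(String[] args) {
      List<Integer> xs = Arrays.asList(1, 2, 3, 4);

      // map: creates a new list; the original list is untouched
      List<Integer> doubled = xs.stream()
                                .map(x -> x * 2)
                                .collect(Collectors.toList());   // [2, 4, 6, 8]

      // fold: f is addition, the initial accumulator x0 is 0
      int sum = xs.stream().reduce(0, (acc, x) -> acc + x);       // 10

      System.out.println(doubled + " " + sum);
    }
  }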

Implicit Parallelism In map In a purely functional setting, elements of a list being computed by map cannot see the effects of the computations on other elements If order of application of f to elements in list is commutative, we can reorder or parallelize execution This is the “secret” that MapReduce exploits

MapReduce

Main Observation A large fraction of distributed systems code has to do with:  Monitoring  Fault tolerance  Moving data around Problems  Difficult to get right even if you know what you are doing  Every app implements its own mechanisms Most of this code is app independent!

MapReduce Automatic parallelization & distribution Fault-tolerant Provides status and monitoring tools Clean abstraction for programmers

Programming Model Borrows from functional programming Users implement interface of two functions:  map (in_key, in_value) -> (out_key, intermediate_value) list  reduce (out_key, intermediate_value list) -> out_value list
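
One way to picture this interface is as a pair of user-supplied callbacks. The sketch below is hypothetical (it is not Hadoop's API, which appears later in the WordCount example, nor Google's C++ interface); it only mirrors the types listed on this slide.

  // Hypothetical Java sketch of the two user-supplied functions.
  interface Emitter<K, V> {
    void emit(K key, V value);
  }

  interface Mapper<K1, V1, K2, V2> {
    // Called once per input record; may emit zero or more intermediate (key, value) pairs.
    void map(K1 inKey, V1 inValue, Emitter<K2, V2> out);
  }

  interface Reducer<K2, V2, V3> {
    // Called once per distinct intermediate key, with all intermediate values for that key.
    void reduce(K2 outKey, Iterable<V2> intermediateValues, Emitter<K2, V3> out);
  }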

map Records from the data source (lines out of files, rows of a database, etc) are fed into the map function as key*value pairs: e.g., (filename, line). map() produces one or more intermediate values along with an output key from the input.

reduce After the map phase is over, all the intermediate values for a given output key are combined together into a list reduce() combines those intermediate values into one or more final values for that same output key (in practice, usually only one final value per key)

Parallelism map() functions run in parallel, creating different intermediate values from different input data sets reduce() functions also run in parallel, each working on a different output key All values are processed independently Bottleneck: reduce phase can’t start until map phase is completely finished.

How do we place computation? Master assigns map and reduce jobs to workers  Does this mapping matter?

Data Center Network Architecture (diagram): racks of servers connect via 1Gbps links to top-of-rack switches, which connect through aggregation switches to a core switch over 10Gbps links

Locality Master program divides up tasks based on location of data: tries to have map() tasks on same machine as physical file data, or at least same rack map() task inputs are divided into 64 MB blocks: same size as Google File System chunks

Communication Map output stored to local disk Shuffle phase:  Reducers need to read data from all mappers  Typically cross-rack and expensive  Need full bisection bandwidth in theory More about good topologies next time!

Fault Tolerance Master detects worker failures  Re-executes completed & in-progress map() tasks Why completed also?  Re-executes in-progress reduce() tasks

Fault Tolerance (2) Master notices particular input key/values cause crashes in map(), and skips those values on re-execution.  Effect: Can work around bugs in third-party libraries!

Optimizations No reduce can start until map is complete:  A single slow disk controller can rate-limit the whole process Master redundantly executes stragglers: “slow-moving” map tasks  Uses results of first copy to finish Why is it safe to redundantly execute map tasks? Wouldn’t this mess up the total computation?

Optimizations “Combiner” functions can run on same machine as a mapper Causes a mini-reduce phase to occur before the real reduce phase, to save bandwidth Under what conditions is it sound to use a combiner?
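
One sufficient condition (a note added here, not from the slides): a combiner is sound when the reduce operation is associative and commutative and produces the same type it consumes, so applying it to partial groups of values cannot change the final result. That is why the WordCount job shown later can simply reuse its reducer as the combiner:

  // From the WordCount JobConf later in these slides: summing counts is
  // associative and commutative, so the Reduce class doubles as the combiner.
  conf.setCombinerClass(Reduce.class);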

More and more mapreduce

Apache Hadoop An Implementation of MapReduce

Open source Java Scale  Thousands of nodes and petabytes of data Still pre-1.0 release  several 0.x releases during 2008 and 2009  but already used by many

Hadoop MapReduce and Distributed File System framework for large commodity clusters Master/Slave relationship  JobTracker handles all scheduling & data flow between TaskTrackers  TaskTracker handles all worker tasks on a node  Individual worker task runs map or reduce operation Integrates with HDFS for data locality

Hadoop Supported File Systems HDFS: Hadoop's own file system. Amazon S3 file system.  Targeted at clusters hosted on the Amazon Elastic Compute Cloud server-on-demand infrastructure  Not rack-aware CloudStore  previously Kosmos Distributed File System  like HDFS, this is rack-aware. FTP Filesystem  stored on remote FTP servers. Read-only HTTP and HTTPS file systems.

"Rack awareness" optimization which takes into account the geographic clustering of servers network traffic between servers in different geographic clusters is minimized.

Hadoop scheduler Runs a few map and reduce tasks in parallel on the same machine  To overlap IO and computation Whenever there is an empty slot the scheduler chooses:  A failed task, if it exists  An unassigned task, if it exists  A speculative task (also running on another node)

wordCount A Simple Hadoop Example

Word Count Example Read text files and count how often words occur.  The input is text files  The output is a text file; each line contains: word, tab, count Map: Produce pairs of (word, count) Reduce: For each word, sum up the counts.

WordCount Overview

  import ...

  public class WordCount {

    public static class Map extends MapReduceBase implements Mapper... {
      public void map ...
    }

    public static class Reduce extends MapReduceBase implements Reducer... {
      public void reduce ...
    }

    public static void main(String[] args) throws Exception {
      JobConf conf = new JobConf(WordCount.class);
      ...
      FileInputFormat.setInputPaths(conf, new Path(args[0]));
      FileOutputFormat.setOutputPath(conf, new Path(args[1]));
      ...
      JobClient.runJob(conf);
    }
  }

wordCount Mapper

  public static class Map extends MapReduceBase
      implements Mapper<LongWritable, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(LongWritable key, Text value,
                    OutputCollector<Text, IntWritable> output,
                    Reporter reporter) throws IOException {
      String line = value.toString();
      StringTokenizer tokenizer = new StringTokenizer(line);
      while (tokenizer.hasMoreTokens()) {
        word.set(tokenizer.nextToken());
        output.collect(word, one);
      }
    }
  }

wordCount Reducer

  public static class Reduce extends MapReduceBase
      implements Reducer<Text, IntWritable, Text, IntWritable> {

    public void reduce(Text key, Iterator<IntWritable> values,
                       OutputCollector<Text, IntWritable> output,
                       Reporter reporter) throws IOException {
      int sum = 0;
      while (values.hasNext()) {
        sum += values.next().get();
      }
      output.collect(key, new IntWritable(sum));
    }
  }

wordCount JobConf

  JobConf conf = new JobConf(WordCount.class);
  conf.setJobName("wordcount");

  conf.setOutputKeyClass(Text.class);
  conf.setOutputValueClass(IntWritable.class);

  conf.setMapperClass(Map.class);
  conf.setCombinerClass(Reduce.class);
  conf.setReducerClass(Reduce.class);

  conf.setInputFormat(TextInputFormat.class);
  conf.setOutputFormat(TextOutputFormat.class);

WordCount main

  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf(WordCount.class);
    conf.setJobName("wordcount");

    conf.setOutputKeyClass(Text.class);
    conf.setOutputValueClass(IntWritable.class);

    conf.setMapperClass(Map.class);
    conf.setCombinerClass(Reduce.class);
    conf.setReducerClass(Reduce.class);

    conf.setInputFormat(TextInputFormat.class);
    conf.setOutputFormat(TextOutputFormat.class);

    FileInputFormat.setInputPaths(conf, new Path(args[0]));
    FileOutputFormat.setOutputPath(conf, new Path(args[1]));

    JobClient.runJob(conf);
  }

Invocation of wordcount

  1. /usr/local/bin/hadoop dfs -mkdir <hdfs-input-dir>
  2. /usr/local/bin/hadoop dfs -copyFromLocal <local-input-dir> <hdfs-input-dir>
  3. /usr/local/bin/hadoop jar hadoop-*-examples.jar wordcount <hdfs-input-dir> <hdfs-output-dir>
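
A hypothetical end-to-end run, for illustration only (the local and HDFS paths below are made up):

  /usr/local/bin/hadoop dfs -mkdir /user/alice/wc-input
  /usr/local/bin/hadoop dfs -copyFromLocal books/*.txt /user/alice/wc-input
  /usr/local/bin/hadoop jar hadoop-*-examples.jar wordcount /user/alice/wc-input /user/alice/wc-output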

Hadoop At Work

Experimental setup 12 servers connected to a gigabit switch  Same hardware  Single hard disk per server Filesystem: HDFS with replication 2  128MB block size 3 Map and 2 Reduce tasks per machine Data  Crawl of the .uk domain (2009)  50GB (unreplicated)

Monitoring Task Progress Hadoop estimates task status  Map: % of input data read from HDFS  Reduce 33% - progress in copy (shuffle) phase 33% - sorting keys 33% - writing output in HDFS Hadoop computes average progress score for each category
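
A worked example of this scoring (derived from the breakdown above): a map task that has read 40% of its input split scores 0.4, while a reduce task that has finished copying map outputs and is halfway through sorting scores roughly 0.33 + 0.5 × 0.33 ≈ 0.5.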

5 Sep, 2011: release available

Back to Hadoop Overview

Speculative Execution Rerun a task if it is slow:  Threshold for speculative execution: 20% less than its category’s average Assumptions  All machines are homogeneous  Tasks progress at a constant rate  There is no cost in launching speculative tasks

Thought Experiment What happens if one machine is slow? What happens if there is network congestion on one link in the reduce phase?

LATE scheduler [Zaharia, OSDI 2008] Calculate progress rate: ProgressRate = ProgressScore / T, where T is the time the task has been running  Time to finish: (1-ProgressScore)/ProgressRate Only rerun on fast nodes Put a cap on the number of speculative tasks
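
A worked example of the two formulas: a task with ProgressScore = 0.5 that has been running for T = 100 s has ProgressRate = 0.5 / 100 = 0.005 per second, so its estimated time to finish is (1 - 0.5) / 0.005 = 100 s. LATE (Longest Approximate Time to End) speculatively re-executes the tasks whose estimated finish time is furthest in the future, subject to the cap and the fast-node restriction above.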

Some results on EC2

Some more results…

MapReduce Related Work Shared memory architectures: do not scale up MPI: general purpose, difficult to scale up to more than a few tens of hosts Dryad/DryadLINQ: computation is a Directed Acyclic Graph  More general computation model  Still not Turing Complete Active area of research  HotCloud 2010, NSDI 2011: Spark / Ciel

Conclusions MapReduce is a very useful abstraction: it greatly simplifies large-scale computations  Does it replace traditional databases?  What is missing? Fun to use: focus on problem, let library deal w/ messy details

Project reminder Check the EC2 public data repository web page Start browsing the datasets  Many are externally available  If not, email me and I’ll mount the dataset for you Select your group Start playing with Hadoop