Distributed Computations MapReduce/Dryad

Distributed Computations: MapReduce/Dryad. M/R slides adapted from those of Jeff Dean; Dryad slides adapted from those of Michael Isard.

What we've learned so far: basic distributed systems concepts such as consistency (sequential, eventual), concurrency, and fault tolerance (recoverability, availability). What are distributed systems good for? Better fault tolerance, better security(?), increased storage/serving capacity (storage systems, email clusters), and parallel (distributed) computation, which is today's topic.

Why distributed computations? How long does it take to sort 1 TB on one computer? One computer can read roughly 60 MB/s from disk, so a single pass over 1 TB already takes hours, and an out-of-core sort needs multiple read/write passes: the whole job takes about a day! Google indexes 100 billion+ web pages: 100 * 10^9 pages * 20 KB/page = 2 PB. The Large Hadron Collider is expected to produce 15 PB every year!

Solution: use many nodes! Cluster computing: hundreds or thousands of PCs connected by high-speed LANs. Grid computing: hundreds of supercomputers connected by a high-speed network. 1000 nodes potentially give a 1000x speedup.

Distributed computations are difficult to program: sending data to and from nodes, coordinating among nodes, recovering from node failure, optimizing for locality, and debugging. These concerns are the same for every problem.

MapReduce: a programming model for large-scale computations. Process large amounts of input, produce output; no side-effects or persistent state (unlike a file system). MapReduce is implemented as a runtime library that provides automatic parallelization, load balancing, locality optimization, and handling of machine failures.

MapReduce design: input data is partitioned into M splits. Map: extract information from each split; each Map produces R partitions. Shuffle and sort: bring the matching partitions from all M map tasks to the same reducer. Reduce: aggregate, summarize, filter, or transform. Output is in R result files.

More specifically, the programmer specifies two methods: map(k, v) -> <k', v'>* and reduce(k', <v'>*) -> <k', v'>*. All v' with the same k' are reduced together, in order. Usually the programmer also specifies partition(k', total partitions) -> partition for k', often a simple hash of the key; this allows reduce operations for different k' to be parallelized.

Example: count word frequencies in web pages. Input is files with one document per record. Map parses a document into words: key = document URL, value = document contents. For the input ("doc1", "to be or not to be"), the output of map is ("to", "1"), ("be", "1"), ("or", "1"), and so on, one pair per word occurrence.

Example: word frequencies, reduce side. Reduce computes the sum for each key, and the output of reduce is saved. For the example document: key "not" has values ("1"); key "or" has values ("1"); key "to" has values ("1", "1"); key "be" has values ("1", "1"). The saved output is ("be", "2"), ("not", "1"), ("or", "1"), ("to", "2").

Example: pseudo-code.

Map(String input_key, String input_value):
  // input_key: document name
  // input_value: document contents
  for each word w in input_value:
    EmitIntermediate(w, "1");

Reduce(String key, Iterator intermediate_values):
  // key: a word, same for input and output
  // intermediate_values: a list of counts
  int result = 0;
  for each v in intermediate_values:
    result += ParseInt(v);
  Emit(AsString(result));
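To make the data flow concrete, here is a minimal Python sketch that simulates the whole pipeline in memory for the word-count example: M input splits are mapped, the intermediate pairs are hash-partitioned into R local partitions, shuffled so that each reducer sees one partition from every map task, sorted, and reduced. This is an illustrative toy, not the MapReduce library; all names are assumptions.

from collections import defaultdict

def map_fn(doc_name, contents):          # user-supplied Map
    for word in contents.split():
        yield (word, 1)

def reduce_fn(word, counts):             # user-supplied Reduce
    yield (word, sum(counts))

def mapreduce(splits, R=2):
    # Map phase: each split produces R local partitions via hash(key) % R.
    local_partitions = []
    for doc_name, contents in splits:
        parts = defaultdict(list)
        for k, v in map_fn(doc_name, contents):
            parts[hash(k) % R].append((k, v))
        local_partitions.append(parts)
    # Shuffle and sort, then Reduce: one output "file" per reducer.
    outputs = []
    for r in range(R):
        pairs = sorted(p for parts in local_partitions for p in parts.get(r, []))
        grouped = defaultdict(list)
        for k, v in pairs:
            grouped[k].append(v)
        outputs.append([kv for k in sorted(grouped) for kv in reduce_fn(k, grouped[k])])
    return outputs

print(mapreduce([("doc1", "to be or not to be"), ("doc234", "do not be silly")]))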

MapReduce is widely applicable: distributed grep, document clustering, web link graph reversal, detecting approximate duplicate web pages, and more.

MapReduce implementation: input data is partitioned into M splits. Map: extract information from each split; each Map produces R partitions. Shuffle and sort: bring the matching partitions from all M map tasks to the same reducer. Reduce: aggregate, summarize, filter, or transform. Output is in R result files.

MapReduce scheduling: one master, many workers. Input data is split into M map tasks (e.g. 64 MB each); there are R reduce tasks. Tasks are assigned to workers dynamically. Often M = 200,000, R = 4,000, workers = 2,000.

MapReduce scheduling: the master assigns a map task to a free worker, preferring "close-by" workers when assigning a task. The worker reads the task input (often from local disk!) and produces R local files containing intermediate k/v pairs. The master then assigns a reduce task to a free worker; the worker reads the intermediate k/v pairs from the map workers, sorts them, and applies the user's Reduce op to produce the output.
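The locality preference can be pictured with a small sketch (assumed logic, not Google's scheduler): the master tracks which machines hold a replica of each map task's input split, and hands a free worker a local task when one exists.

def assign_map_task(free_worker, pending_tasks, split_locations):
    # Prefer a task whose input split is stored on the free worker's machine.
    for task in pending_tasks:
        if free_worker in split_locations[task]:
            pending_tasks.remove(task)
            return task
    # Otherwise take any remaining task; its input will be read over the network.
    return pending_tasks.pop() if pending_tasks else None

# Example: worker "w2" holds a replica of split 1, so it is given task 1 first.
pending = [0, 1, 2]
locations = {0: {"w1", "w3"}, 1: {"w2", "w3"}, 2: {"w1"}}
print(assign_map_task("w2", pending, locations))   # -> 1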

Parallel MapReduce (diagram): input data fans out to the Map tasks, a Master coordinates, the Shuffle routes intermediate data to the Reduce tasks, and the result is partitioned output.

WordCount internals: input data is split into M map jobs, and each map job generates R local partitions. For ("doc1", "to be or not to be"), the map output ("to","1"), ("be","1"), ("or","1"), ("not","1"), ("to","1") is placed into R local partitions using Hash(word) % R, so, for example, the two "to" counts end up together as ("to","1","1"). For ("doc234", "do not be silly"), the map output ("do","1"), ("not","1"), ("be","1"), ("silly","1") is placed into its own R local partitions.

WordCount internals: the shuffle brings the same partition from every map job to the same reducer, so all counts for a given word (for example the ("be","1") and ("not","1") pairs from both documents) arrive at a single reducer.

WordCount internals: Reduce aggregates the sorted key/value pairs, e.g. ("to","1","1") -> ("to","2"), ("be","1","1") -> ("be","2"), ("not","1","1") -> ("not","2"), ("or","1") -> ("or","1"), ("do","1") -> ("do","1").

The importance of the partition function: partition(k', total partitions) -> partition for k', e.g. hash(k') % R. What is the right partition function for sort?
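For sort, hashing would spread the key range across all reducers, so the R output files could not simply be concatenated in order. The usual answer is a range partitioner: sample the keys, pick R-1 split points, and send each key to the partition whose range contains it. A small sketch of that idea (an assumption about the technique, not code from the paper):

import bisect, random

def make_range_partitioner(sample_keys, R):
    # Choose R-1 split points at evenly spaced quantiles of a key sample,
    # so the R partitions receive roughly equal shares of the key range.
    sample = sorted(sample_keys)
    splits = [sample[(i * len(sample)) // R] for i in range(1, R)]
    return lambda key: bisect.bisect_right(splits, key)

keys = [random.randint(0, 10**6) for _ in range(10000)]
partition = make_range_partitioner(random.sample(keys, 200), R=4)
print(sorted({partition(k) for k in keys}))   # -> [0, 1, 2, 3]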

Load balance and pipelining: fine-granularity tasks (many more map tasks than machines) minimize the time for fault recovery, allow shuffling to be pipelined with map execution, and give better dynamic load balancing. A common configuration is 200,000 map tasks and 5,000 reduce tasks on 2,000 machines.

Fault tolerance via re-execution. On worker failure: re-execute completed and in-progress map tasks, re-execute in-progress reduce tasks; task completion is committed through the master. On master failure: state is checkpointed to GFS, so a new master recovers and continues.

Avoid stragglers using backup tasks. Slow workers significantly lengthen completion time: other jobs consuming resources on the machine, bad disks with soft errors that transfer data very slowly, weird things such as processor caches being disabled (!!), or an unusually large reduce partition. Solution: near the end of a phase, spawn backup copies of the remaining tasks; whichever copy finishes first "wins". Effect: dramatically shortens job completion time.
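A toy sketch of the backup-task idea (illustrative only, not the MapReduce scheduler): launch a second copy of a still-running task and take whichever copy finishes first.

import concurrent.futures, random, time

def run_task(task_id, copy):
    # Simulate a task whose running time varies, e.g. because of a straggler machine.
    time.sleep(random.uniform(0.1, 2.0))
    return (task_id, copy)

def run_with_backup(task_id):
    # Launch the primary and a backup copy; the first to complete wins.
    # (A real scheduler would also kill the losing copy.)
    with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
        futures = [pool.submit(run_task, task_id, c) for c in ("primary", "backup")]
        done, _ = concurrent.futures.wait(
            futures, return_when=concurrent.futures.FIRST_COMPLETED)
        return next(iter(done)).result()

print(run_with_backup(7))   # e.g. (7, 'backup')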

MapReduce sort performance: 1 TB of data (100-byte records) to be sorted on 1,700 machines, with M = 15,000 and R = 4,000.

MapReduce Sort Performance When can shuffle start? When can reduce start?

Dryad. Slides adapted from those of Yuan Yu and Michael Isard.

Dryad has similar goals as MapReduce: focus on throughput, not latency, with automatic management of scheduling, distribution, and fault tolerance. Computations are expressed as a graph: vertices are computations, edges are communication channels, and each vertex has several input and output edges.
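A minimal sketch of the abstraction (assumed, simplified): a job is a DAG of named vertices, a vertex can run once all of its input channels have data, and execution is a topological walk of the graph.

from graphlib import TopologicalSorter   # Python 3.9+

def run_job(vertices, edges, inputs):
    # vertices: name -> function(list of input values) -> output value
    # edges: (src, dst) channels; inputs: preloaded data for source vertices
    preds = {v: [] for v in vertices}
    for src, dst in edges:
        preds[dst].append(src)
    data = dict(inputs)
    for v in TopologicalSorter({v: set(p) for v, p in preds.items()}).static_order():
        if v not in data:                 # source vertices already carry their data
            data[v] = vertices[v]([data[p] for p in preds[v]])
    return data

# Toy graph: two "parse" source vertices feed one vertex that sums word counts.
verts = {"p1": None, "p2": None, "sum": lambda ins: sum(len(x.split()) for x in ins)}
out = run_job(verts, [("p1", "sum"), ("p2", "sum")],
              {"p1": "to be or not to be", "p2": "do not be silly"})
print(out["sum"])   # -> 10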

WordCount in Dryad (diagram): a graph of Count, Distribute, and MergeSort vertices connected by channels carrying (Word, n) records.

Why use a dataflow graph? Many programs can be represented as a distributed dataflow graph, and the programmer may not even have to know this: "SQL-like" queries (LINQ) can be turned into such a graph, and Dryad will run them for you.

Runtime: vertices (V) run arbitrary application code and exchange data through files, TCP pipes, etc.; vertices communicate with the JM to report status. A daemon process (D) on each machine executes vertices. The Job Manager (JM) consults the name server (NS) to discover available machines, maintains the job graph, and schedules vertices. (Speaker notes: the JM schedules the vertex processes using a topological sort of the graph. When vertices fail, parts of the subgraph may need to be re-executed, because inputs that are needed may no longer be available; the vertices that generated those inputs may have to be re-run, and the JM attempts to re-execute a minimal part of the graph to recompute the missing data. To execute each vertex, the JM must choose a machine.)

Job = directed acyclic graph (diagram: inputs flow through processing vertices to outputs, connected by channels such as files, pipes, or shared memory). A Dryad application is composed of a collection of processing vertices (processes) that communicate with each other through channels; the vertices and channels should always compose into a directed acyclic graph.

Scheduling at the JM. General scheduling rules: a vertex can run anywhere once all its inputs are ready; prefer executing a vertex near its inputs. Fault tolerance: if A fails, run it again; if A's inputs are gone, run the upstream vertices again (recursively); if A is slow, run another copy elsewhere and use the output from whichever finishes first.
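The recovery rule can be written down as a short recursive sketch (assumed names, not Dryad's code): to re-run a failed vertex, first re-run, recursively, any upstream vertex whose output channel is no longer available.

def reexecute(vertex, preds, output_available, run):
    # Re-run `vertex`, recursively re-running upstream vertices whose outputs are gone.
    for p in preds.get(vertex, []):
        if not output_available.get(p, False):
            reexecute(p, preds, output_available, run)
    run(vertex)
    output_available[vertex] = True

# Example: C failed and B's output was lost, so B is re-run before C.
preds = {"C": ["A", "B"], "B": ["A"], "A": []}
available = {"A": True, "B": False}
reexecute("C", preds, available, run=lambda v: print("re-running", v))
# prints: re-running B, then: re-running C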

Advantages of a DAG over MapReduce: big jobs are more efficient with Dryad. In MapReduce, a big job runs as one or more MR stages, and the reducers of each stage write to replicated storage (each reduce output costs 2 network copies and 3 disks). In Dryad, each job is represented as a single DAG, and intermediate vertices write to local files.

Advantages of a DAG over MapReduce: Dryad provides an explicit join. In MapReduce, a mapper (or reducer) needs to read from shared table(s) as a substitute for a join. In Dryad, an explicit join combines inputs of different types, and a Dryad "Split" produces outputs of different types (e.g., parse a document and output both text and references).

DAG optimizations: merge tree (diagrams).

Dryad optimizations: data-dependent re-partitioning (diagram): starting from randomly partitioned inputs, sample the data to estimate a histogram, then distribute records into equal-sized ranges.

Dryad example 1: SkyServer query, a 3-way join to find the gravitational lens effect. Table U: (objId, color), 11.8 GB. Table N: (objId, neighborId), 41.8 GB. Find neighboring stars with similar colors: join U and N to find T = (N.neighborID, U.color) where U.objID = N.objID; then join U and T to find U.objID where U.objID = T.neighborID and U.color ≈ T.color.

SkyServer query plan (diagram: a graph of vertex stages labeled D, M, S, Y, H, X over the inputs U and N). The first stage evaluates: select u.color, n.neighborobjid from u join n where u.objid = n.objid, with u: (objid, color) and n: (objid, neighborobjid), both [partitioned by objid].

SkyServer query plan, continued (same diagram, annotated): the intermediate result (u.color, n.neighborobjid) is [re-partitioned by n.neighborobjid], [ordered by n.neighborobjid], its [outputs merged] and made [distinct], before the second join: select u.objid from u join <temp> where u.objid = <temp>.neighborobjid and |u.color - <temp>.color| < d.

Dryad example 2: query histogram computation. Input: a log file (n partitions). Extract the queries from the log partitions, re-partition by a hash of the query (k buckets), and compute a histogram within each bucket.
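A compact in-memory sketch of the same computation (illustrative; the log line format "timestamp<TAB>query" is an assumption): extract the query from each log line, re-partition by a hash of the query into k buckets, then build a histogram inside each bucket.

from collections import Counter

def query_histogram(log_partitions, k=4):
    # log_partitions: list of lists of log lines of the form "timestamp\tquery" (assumed).
    buckets = [[] for _ in range(k)]
    for partition in log_partitions:                 # extract queries from each partition
        for line in partition:
            query = line.split("\t", 1)[1]
            buckets[hash(query) % k].append(query)   # re-partition by hash of the query
    # A given query string lands in exactly one bucket, so per-bucket counts are global counts.
    return [Counter(bucket) for bucket in buckets]

logs = [["0\tcats", "1\tdogs", "2\tcats"], ["3\tdogs", "4\tfish"]]
print(query_histogram(logs, k=2))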

Naïve histogram topology (diagram): n Q vertices feed k R vertices. Legend: P parse lines, D hash distribute, S quicksort, C count occurrences, MS merge sort. Each Q is P -> S -> D; each R is MS -> C.

Efficient histogram topology (diagram). Legend: P parse lines, D hash distribute, S quicksort, C count occurrences, MS merge sort, M non-deterministic merge. Each Q' is M -> P -> S -> C; each T is MS -> C -> D; each R is MS -> C, with n Q' vertices and k T and R vertices.

Final histogram refinement (diagram): 450 R vertices, 217 T vertices, 10,405 Q' vertices, 99,713 inputs; data volumes of 10.2 TB, 154 GB, 118 GB, and 33.4 GB appear across the stages. Overall: 1,800 computers, 43,171 vertices, 11,072 processes, 11.5 minutes.