Dryad
Some slides adapted from those of Yuan Yu and Michael Isard

What we’ve learnt so far
- Basic distributed systems concepts
  - Consistency: linearizability/strict serializability, causal, eventual
  - How to tolerate machine failure: Paxos, replication, transactions
- What are distributed systems good for?
  - Storage: fault tolerance, increased storage/serving capacity
  - Computation: large-scale distributed computation (today's topic)

Recap: MapReduce API
- Input data is partitioned into M splits
- Map: extract information from each split
  - map(k, v) → <k', v'>*
  - Each Map produces R partitions
- Shuffle and sort: bring the corresponding partitions from the M maps to the same reducer
- Reduce: aggregate, summarize
  - reduce(k', <v'>*) → <k', v'>*
- Output is in R result files

Example: Pseudo-code

Map(String input_key, String input_value):
  // input_key: document name
  // input_value: document contents
  for each word w in input_value:
    EmitIntermediate(w, "1");

Reduce(String key, Iterator intermediate_values):
  // key: a word, same for input and output
  // intermediate_values: a list of counts
  int result = 0;
  for each v in intermediate_values:
    result += ParseInt(v);
  Emit(AsString(result));
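
To make the phases concrete, here is a minimal single-machine sketch in Python that mimics the map → shuffle → reduce flow for word counting. The function names and the in-memory shuffle are illustrative assumptions, not part of any real MapReduce implementation.

from collections import defaultdict

def map_fn(doc_name, doc_contents):
    # Map: emit (word, "1") for every word in the document.
    for word in doc_contents.split():
        yield (word, "1")

def shuffle(intermediate_pairs):
    # Shuffle: group all values emitted for the same key.
    groups = defaultdict(list)
    for key, value in intermediate_pairs:
        groups[key].append(value)
    return groups

def reduce_fn(key, values):
    # Reduce: sum the counts for one word.
    return key, sum(int(v) for v in values)

docs = {"doc1": "the quick brown fox", "doc2": "the lazy dog the end"}
intermediate = [pair for name, text in docs.items() for pair in map_fn(name, text)]
results = [reduce_fn(k, vs) for k, vs in shuffle(intermediate).items()]
print(sorted(results))  # [('brown', 1), ('dog', 1), ..., ('the', 3)]

In a real deployment the map calls run over the M splits on many machines, the grouped values are divided into R partitions, and each reduce call runs on the machine that owns its partition.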

Parallel MapReduce Runtime
[Diagram: input data is split across Map workers; a Master coordinates the job; each Reduce worker fetches its shuffle partition from every Map and writes one file of the partitioned output.]

MapReduce started “big data”
- Published in 2004
- Open-source clone: Hadoop (2006)
- It became possible to analyze big data: tens, hundreds, or thousands of terabytes

Limitations of the original MapReduce
- No joins
- Many computations require many MapReduce jobs chained together (see the sketch below)
- Best suited for processing unstructured text on disk
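
To illustrate the chaining point, here is a hypothetical two-job pipeline in the same single-machine style as the earlier sketch: the second job's map phase consumes the first job's output, which is what "chaining" means in practice (finding the most frequent word needs a second pass over the word counts).

from collections import defaultdict

def run_job(records, map_fn, reduce_fn):
    # Minimal local "job": map every record, shuffle by key, then reduce.
    groups = defaultdict(list)
    for key, value in records:
        for k2, v2 in map_fn(key, value):
            groups[k2].append(v2)
    return [reduce_fn(k, vs) for k, vs in groups.items()]

# Job 1: word count.
count_map = lambda doc, text: [(w, 1) for w in text.split()]
count_reduce = lambda word, ones: (word, sum(ones))

# Job 2: invert (word, count) into (count, word) to find the most frequent words.
invert_map = lambda word, count: [(count, word)]
invert_reduce = lambda count, words: (count, sorted(words))

docs = [("doc1", "a b a c a b")]
counts = run_job(docs, count_map, count_reduce)        # [('a', 3), ('b', 2), ('c', 1)]
by_count = run_job(counts, invert_map, invert_reduce)  # [(3, ['a']), (2, ['b']), (1, ['c'])]
print(max(by_count))  # (3, ['a'])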

Dryad
- Similar goals as MapReduce: focus on throughput, not latency
- Automatic scheduling, distribution, fault tolerance
- Computations expressed as a dataflow graph
  - Vertices are computations
  - Edges are communication channels
  - Each vertex has several input and output edges

WordCount in Dryad
[Diagram: word count expressed as a pipeline of Dryad vertices (Count, MergeSort, Distribute) connected by channels carrying (word, n) records.]

Why use a dataflow graph? Many programs can be represented as a parallel dataflow graph, and Dryad will run them for you.

Job = Directed Acyclic Graph
[Diagram: inputs at the bottom, processing vertices connected by channels (file, pipe, shared memory), outputs at the top.]
A Dryad application is a collection of processing vertices (processes) that communicate with each other through channels. The vertices and channels must always compose into a directed acyclic graph.
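
A minimal sketch of how such a job could be modeled, assuming nothing about Dryad's real API: vertices are Python callables, channels are in-memory edges, and execution recurses through the DAG so that every vertex runs only after its inputs are available.

from collections import defaultdict

class ToyDag:
    # Vertices are callables taking a list of upstream outputs; edges name upstream vertices.
    def __init__(self):
        self.vertices = {}               # name -> callable(list_of_inputs) -> output
        self.inputs = defaultdict(list)  # name -> upstream vertex names (the channels)

    def add_vertex(self, name, fn, inputs=()):
        self.vertices[name] = fn
        self.inputs[name] = list(inputs)

    def run(self):
        done = {}
        def evaluate(name):              # acyclicity guarantees this recursion terminates
            if name not in done:
                upstream = [evaluate(dep) for dep in self.inputs[name]]
                done[name] = self.vertices[name](upstream)
            return done[name]
        return {name: evaluate(name) for name in self.vertices}

dag = ToyDag()
dag.add_vertex("input", lambda _: ["b", "a", "b"])
dag.add_vertex("sort", lambda ins: sorted(ins[0]), inputs=["input"])
dag.add_vertex("count", lambda ins: {w: ins[0].count(w) for w in set(ins[0])}, inputs=["sort"])
print(sorted(dag.run()["count"].items()))  # [('a', 1), ('b', 2)]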

Dryad Scheduling
- All jobs are scheduled by a central job manager
- Scheduling rules:
  - A vertex can run anywhere once all its inputs are ready
  - Prefer executing a vertex near its inputs
- Fault tolerance:
  - If A fails, run it again
  - If A’s inputs are gone, run the upstream vertices again (recursively)
  - If A is slow, run another copy elsewhere and use the output of whichever finishes first
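
A rough sketch of the first two fault-tolerance rules, as a simplification rather than the real job manager (the duplicate-execution rule for stragglers is omitted): retry a failed vertex, and if an input is missing, recursively re-run the upstream vertex that produces it.

def execute(vertex, graph, results, max_attempts=3):
    # graph maps a vertex name to (fn, upstream_vertex_names);
    # results caches channel outputs that have already been produced.
    fn, upstream = graph[vertex]
    for dep in upstream:
        if dep not in results:            # input missing or lost:
            execute(dep, graph, results)  # re-run upstream vertices recursively
    for _ in range(max_attempts):
        try:
            results[vertex] = fn([results[d] for d in upstream])
            return results[vertex]
        except Exception:
            continue                      # the vertex failed: just run it again
    raise RuntimeError("vertex %s failed %d times" % (vertex, max_attempts))

graph = {
    "input": (lambda ins: [3, 1, 2], []),
    "sort": (lambda ins: sorted(ins[0]), ["input"]),
}
print(execute("sort", graph, {}))  # [1, 2, 3]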

Advantages of DAG over MapReduce
- Big jobs are more efficient with Dryad
- MapReduce: a big job runs as one or more MR stages, and the reducers of each stage write to replicated storage
  - Output of a reduce: 2 network copies, 3 disks
- Dryad: the whole job is represented as one DAG, and intermediate vertices write to local files

Advantages of DAG over MapReduce
- Dryad provides explicit joins
- MapReduce (circa 2004-2007): the mapper/reducer reads from shared table(s) as a substitute for a join
- Dryad: an explicit join combines inputs of different types
  - E.g., the most expensive product bought by each customer; PageRank computation

Dryad example: the usefulness of join
SkyServer query: find neighboring stars with similar colors
- Table U: (objId, color), 11.8 GB
- Table N: (objId, neighborId), 41.8 GB
The query contains 2 joins:
- Join U and N to produce T = (N.neighborId, U.color) where U.objId = N.objId
- Join U and T to find U.objId where U.objId = T.neighborId and U.color ≈ T.color

SkyServer query, first join
[Diagram: Dryad plan for the first join over inputs U (objid, color) and N (objid, neighborid), both partitioned by objid.]
select u.color, n.neighborid from u join n where u.objid = n.objid

SkyServer query, second join
[Diagram: Dryad plan for the second join; the (u.color, n.neighborid) stream is re-partitioned by n.neighborid, ordered by n.neighborid, made distinct, and the outputs are merged.]
select u.objid from u join <temp> where u.objid = <temp>.neighborid and |u.color - <temp>.color| < d
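
For intuition only, here are the two joins expressed over tiny in-memory versions of U and N in plain Python; the table contents and the similarity threshold d are made up, and this is not how Dryad actually executes the query.

# Toy tables: U = (objid, color), N = (objid, neighborid)
U = [(1, 0.30), (2, 0.32), (3, 0.90)]
N = [(1, 2), (1, 3), (2, 1)]
d = 0.05  # similarity threshold, arbitrary for the example

color_of = dict(U)

# Join 1: U join N on objid  ->  T = (neighborid, color of the joined object)
T = [(nbr, color_of[obj]) for (obj, nbr) in N if obj in color_of]

# Join 2: U join T on U.objid = T.neighborid, keeping only similar colors
result = [obj for (obj, color) in U
          for (nbr, nbr_color) in T
          if obj == nbr and abs(color - nbr_color) < d]
print(sorted(set(result)))  # objects that neighbor a similarly colored object: [1, 2]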

Another example: how Dryad optimizes the DAG automatically
Example application: compute a query histogram
- Input: log file (n partitions)
- Extract queries from the log partitions
- Re-partition by query into k buckets
- Compute the histogram within each bucket
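
A single-machine sketch of the computation itself; the log-line format and the hash-based bucketing are assumptions for illustration, and nothing here is Dryad-specific.

from collections import Counter

def extract_queries(log_lines):
    # Assume (hypothetically) each log line looks like "<timestamp> <query>".
    return [line.split(" ", 1)[1] for line in log_lines if " " in line]

def query_histogram(log_partitions, k=4):
    buckets = [[] for _ in range(k)]
    for partition in log_partitions:           # n input partitions
        for q in extract_queries(partition):
            buckets[hash(q) % k].append(q)     # re-partition by query into k buckets
    return [Counter(b) for b in buckets]       # histogram within each bucket

logs = [["t1 cats", "t2 dogs"], ["t3 cats", "t4 cats"]]
print(sum(query_histogram(logs), Counter()))   # Counter({'cats': 3, 'dogs': 1})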

Naïve histogram topology
[Diagram: n Q vertices feed k R vertices. Each Q is P → S → D; each R is MS → C.]
Legend: P = parse lines, D = hash distribute, S = quicksort, C = count occurrences, MS = merge sort

Efficient histogram topology
[Diagram: n Q' vertices feed a layer of T vertices, which feed the R vertices. Each Q' is M → P → S → C; each T is MS → C → D; each R is MS → C.]
Legend: P = parse lines, D = hash distribute, S = quicksort, C = count occurrences, MS = merge sort, M = non-deterministic merge

[Animation frames: the efficient topology is built by composing pipelines into single vertices, Q' = M►P►S►C, T = MS►C►D, R = MS►C; successive frames show additional Q' and T vertices being added as the graph is refined at run time.]

Final histogram refinement
[Diagram: the final refined graph, annotated with per-stage vertex counts (450, 217, 10,405, 99,713) and data volumes (33.4 GB, 118 GB, 154 GB, 10.2 TB).]
Job statistics: 1,800 computers, 43,171 vertices, 11,072 processes, 11.5 minutes

What’s after Dryad
- Dryad DAGs are tedious for humans to build by hand
- DryadLINQ [OSDI ’08]
  - LINQ provides constructs to manipulate sets and data sequences: Select, SelectMany, GroupBy, etc.
  - DryadLINQ compiles LINQ queries into Dryad DAGs
- Unfortunately, DryadLINQ was not open-sourced ...
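
DryadLINQ itself is C#. Purely as an analogy, and not DryadLINQ's actual API, the word-count query written with LINQ-style operators might look like this in Python; a DryadLINQ-like compiler would turn such an operator chain into a Dryad DAG like the ones above.

from itertools import groupby

def select_many(seq, fn):
    # LINQ SelectMany: map each element to a sequence, then flatten.
    return [out for item in seq for out in fn(item)]

def group_by(seq, key_fn):
    # LINQ GroupBy: group elements that share a key (sorting first, as itertools.groupby requires).
    return [(k, list(g)) for k, g in groupby(sorted(seq, key=key_fn), key=key_fn)]

lines = ["the quick brown fox", "the lazy dog"]
words = select_many(lines, lambda line: line.split())
counts = [(word, len(group)) for word, group in group_by(words, lambda w: w)]
print(counts)  # [('brown', 1), ('dog', 1), ('fox', 1), ('lazy', 1), ('quick', 1), ('the', 2)]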