COS 518: Distributed Systems Lecture 10 Andrew Or, Mike Freedman

Batch Processing COS 518: Distributed Systems Lecture 10 Andrew Or, Mike Freedman

Basic architecture in “big data” systems

[Architecture diagrams: a Cluster Manager fronts a cluster of Workers (each with e.g. 64GB RAM and 32 cores). A Client submits WordCount.java to the Cluster Manager, which launches a Word Count driver on one Worker and Word Count executors on the others. A second Client then submits FindTopTweets.java, and a Tweets driver and Tweets executors are launched alongside the Word Count containers; a third application (App3) gets its own driver and executors on the same Workers.]

Basic architecture
Clients submit applications to the cluster manager
Cluster manager assigns cluster resources to applications
Each Worker launches containers for each application
  Driver containers run the main method of the user program
  Executor containers run the actual computation
Examples of cluster managers: YARN, Mesos
Examples of computing frameworks: Hadoop MapReduce, Spark
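
(A minimal sketch, not from the slides: in Spark, the driver side of an application asks the cluster manager for resources simply by creating a SparkContext. The "yarn" master URL is an assumption; a Mesos or standalone master URL plays the same role.)

import org.apache.spark.{SparkConf, SparkContext}

// A client-side driver describes its application to a cluster manager by
// creating a SparkContext; the cluster manager then launches executors for it.
val conf = new SparkConf()
  .setAppName("WordCount")
  .setMaster("yarn")            // assumption: a YARN cluster manager is available
val sc = new SparkContext(conf) // executors are now allocated for this application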

Two levels of scheduling
Cluster-level: Cluster manager assigns resources to applications
Application-level: Driver assigns tasks to run on executors
  A task is a unit of execution that operates on one partition
Some advantages:
  Applications need not be concerned with resource fairness
  Cluster manager need not be concerned with individual tasks
  Easy to implement priorities and preemption
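
(Illustration of the application level, assuming the SparkContext sc from the sketch above: each stage is split into one task per partition, and the driver schedules those tasks onto its executors.)

// The number of partitions determines how many tasks the driver schedules per stage.
val data = sc.parallelize(1 to 1000000, numSlices = 8)  // 8 partitions
println(data.getNumPartitions)   // 8 -> the driver will run 8 tasks per stage over this RDD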

Case Study: MapReduce (Data-parallel programming at scale)

Application: Word count
Input: Hello my love. I love you, my dear. Goodbye.
Output: hello: 1, my: 2, love: 2, i: 1, you: 1, dear: 1, goodbye: 1

Application: Word count
Locally: tokenize and put words in a hash map
How do you parallelize this?
  Split the document in half
  Build two hash maps, one for each half
  Merge the two hash maps (by key)
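
(A plain-Scala sketch of this local strategy; the helper names count and merge are made up for illustration.)

// Split the token list in half, build a count map for each half, then merge by key.
def count(words: Seq[String]): Map[String, Int] =
  words.groupBy(identity).map { case (w, ws) => w -> ws.size }

def merge(a: Map[String, Int], b: Map[String, Int]): Map[String, Int] =
  b.foldLeft(a) { case (acc, (w, c)) => acc.updated(w, acc.getOrElse(w, 0) + c) }

val doc   = "Hello my love. I love you, my dear. Goodbye."
val words = doc.toLowerCase.split("\\W+").filter(_.nonEmpty).toSeq
val (left, right) = words.splitAt(words.length / 2)
val counts = merge(count(left), count(right))
// counts == Map(hello -> 1, my -> 2, love -> 2, i -> 1, you -> 1, dear -> 1, goodbye -> 1)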

How do you do this in a distributed environment?

When in the Course of human events, it becomes necessary for one people to dissolve the political bands which have connected them with another, and to assume, among the Powers of the earth, the separate and equal station to which the Laws of Nature and of Nature's God entitle them, a decent respect to the opinions of mankind requires that they should declare the causes which impel them to the separation. Input document

When in the Course of human events, it becomes necessary for one people to dissolve the political bands which have connected them with another, and to assume, among the Powers of the earth, the separate and equal station to which the Laws of Nature and of Nature's God entitle them, a decent respect to the opinions of mankind requires that they should declare the causes which impel them to the separation. Partition

requires that they should declare the causes which impel them to the separation. When in the Course of human events, it becomes necessary for one people to Nature and of Nature's God entitle them, a decent respect to the opinions of mankind among the Powers of the earth, the separate and equal station to which the Laws of dissolve the political bands which have connected them with another, and to assume,

Compute word counts locally
requires: 1, that: 1, they: 1, should: 1, declare: 1, the: 1, causes: 1, which: 1 ...
when: 1, in: 1, the: 1, course: 1, of: 1, human: 1, events: 1, it: 1 ...
nature: 2, and: 1, of: 2, god: 1, entitle: 1, them: 1, decent: 1, respect: 1, mankind: 1, opinion: 1 ...
dissolve: 1, the: 2, political: 1, bands: 1, which: 1, have: 1, connected: 1, them: 1 ...
among: 1, the: 2, powers: 1, of: 2, earth: 1, separate: 1, equal: 1, and: 1 ...

Compute word counts locally
requires: 1, that: 1, they: 1, should: 1, declare: 1, the: 1, causes: 1, which: 1 ...
when: 1, in: 1, the: 1, course: 1, of: 1, human: 1, events: 1, it: 1 ...
nature: 2, and: 1, of: 2, god: 1, entitle: 1, them: 1, decent: 1, respect: 1, mankind: 1, opinion: 1 ...
dissolve: 1, the: 2, political: 1, bands: 1, which: 1, have: 1, connected: 1, them: 1 ...
among: 1, the: 2, powers: 1, of: 2, earth: 1, separate: 1, equal: 1, and: 1 ...
Now what… How to merge results?

Merging results computed locally
Several options:
  Don’t merge — requires additional computation for correct results
  Send everything to one node — what if data is too big? Too slow…
  Partition key space among nodes in cluster (e.g. [a-e], [f-j], [k-p] ...)
    Assign a key space to each node
    Partition local results by the key spaces
    Fetch and merge results that correspond to the node’s key space
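
(A sketch of the key-space option, using the slides' example ranges; splitByKeyRange is a hypothetical helper and assumes lowercase alphabetic words.)

// Assign each node a key range, split every node's local counts by those ranges,
// and let each node fetch and merge the pieces for its own range.
val ranges = Seq('a' to 'e', 'f' to 'j', 'k' to 'p', 'q' to 's', 't' to 'z')

def splitByKeyRange(localCounts: Map[String, Int]): Map[Int, Map[String, Int]] =
  localCounts.groupBy { case (word, _) =>
    ranges.indexWhere(r => r.contains(word.head))  // index of the node that owns this word
  }

// Node i then fetches splitByKeyRange(...)(i) from every other node and merges
// those maps by key, exactly like the local merge step (the all-to-all shuffle below).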

requires: 1, that: 1, they: 1, should: 1, declare: 1, the: 1, causes: 1, which: 1 ...
when: 1, in: 1, the: 1, course: 1, of: 1, human: 1, events: 1, it: 1 ...
nature: 2, and: 1, of: 2, god: 1, entitle: 1, them: 1, decent: 1, respect: 1, mankind: 1, opinion: 1 ...
dissolve: 1, the: 2, political: 1, bands: 1, which: 1, have: 1, connected: 1, them: 1 ...
among: 1, the: 2, powers: 1, of: 2, earth: 1, separate: 1, equal: 1, and: 1 ...

Split local results by key space: [a-e] [f-j] [k-p] [q-s] [t-z]
causes: 1, declare: 1, requires: 1, should: 1, that: 1, they: 1, the: 1, which: 1
when: 1, the: 1, in: 1, it: 1, human: 1, course: 1, events: 1, of: 1
nature: 2, of: 2, mankind: 1, opinion: 1, entitle: 1, and: 1, decent: 1, god: 1, them: 1, respect: 1
bands: 1, dissolve: 1, connected: 1, have: 1, political: 1, the: 1, them: 1, which: 1
among: 1, and: 1, equal: 1, earth: 1, separate: 1, the: 2, powers: 1, of: 2

All-to-all shuffle across key-space partitions [a-e] [f-j] [k-p] [q-s] [t-z]

Note the duplicates...
[t-z] when: 1, the: 1, that: 1, they: 1, the: 1, which: 1, them: 1, the: 2, the: 1, them: 1, which: 1
[q-s] requires: 1, should: 1, respect: 1, separate: 1
[f-j] god: 1, have: 1, in: 1, it: 1, human: 1
[a-e] bands: 1, dissolve: 1, connected: 1, course: 1, events: 1, among: 1, and: 1, equal: 1, earth: 1, entitle: 1, and: 1, decent: 1, causes: 1, declare: 1
[k-p] powers: 1, of: 2, nature: 2, of: 2, mankind: 1, of: 1, opinion: 1, political: 1

Merge results received from other nodes
[q-s] requires: 1, should: 1, respect: 1, separate: 1
[t-z] when: 1, the: 4, that: 1, they: 1, which: 2, them: 2
[f-j] god: 1, have: 1, in: 1, it: 1, human: 1
[a-e] bands: 1, dissolve: 1, connected: 1, course: 1, events: 1, among: 1, and: 2, equal: 1, earth: 1, entitle: 1, decent: 1, causes: 1, declare: 1
[k-p] powers: 1, of: 5, nature: 2, mankind: 1, opinion: 1, political: 1

MapReduce
Partition dataset into many chunks
Map stage: Each node processes one or more chunks locally
Reduce stage: Each node fetches and merges partial results from all other nodes
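
(A toy, single-process rendering of these stages for intuition only, not the real distributed implementation; the function names are made up.)

// Run a map function over every record of every chunk, group the intermediate
// pairs by key (the shuffle), then run a reduce function per key.
def mapReduce[K, V, K2, V2, R](
    chunks: Seq[Seq[(K, V)]],
    mapFn: (K, V) => Seq[(K2, V2)],
    reduceFn: (K2, Seq[V2]) => R): Map[K2, R] = {
  val intermediate = chunks.flatMap(_.flatMap { case (k, v) => mapFn(k, v) })  // map stage
  val grouped = intermediate.groupBy(_._1)                                     // shuffle
  grouped.map { case (k, kvs) => k -> reduceFn(k, kvs.map(_._2)) }             // reduce stage
}

// Word count in this model:
// mapReduce(chunks,
//   (doc: String, text: String) => text.split("\\W+").filter(_.nonEmpty).map(w => (w.toLowerCase, 1)).toSeq,
//   (word: String, ones: Seq[Int]) => ones.sum)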

MapReduce Interface
map(key, value) -> list(<k’, v’>)
  Apply function to (key, value) pair
  Outputs set of intermediate pairs
reduce(key, list<value>) -> <k’, v’>
  Applies aggregation function to values
  Outputs result

MapReduce: Word count

map(key, value):
  // key = document name
  // value = document contents
  for each word w in value:
    emit (w, 1)

reduce(key, values):
  // key = the word
  // values = list of counts for that word
  count = sum(values)
  emit (key, count)

[Diagram: MapReduce word count pipeline: map → combine → partition → reduce]

Synchronization Barrier

[Timeline: data-parallel frameworks, from MapReduce (2004) and Dryad (2007) through systems introduced in 2011, 2012, and 2015]

Brainstorm: Top K
Top K is the problem of finding the largest K values from a set of numbers
How would you express this as a distributed application?
In particular, what would the map and reduce phases look like?
Hint: use a heap…

Brainstorm: Top K
Assuming that a set of K integers fits in memory…
Key idea:
  Map phase: everyone maintains a heap of K elements
  Reduce phase: merge the heaps until you’re left with one

Brainstorm: Top K
Problem: What are the keys and values here?
No notion of key here, just assign the same key to all the values (e.g. key = 1)
  Map task 1: [10, 5, 3, 700, 18, 4] → (1, heap(700, 18, 10))
  Map task 2: [16, 4, 523, 100, 88] → (1, heap(523, 100, 88))
  Map task 3: [3, 3, 3, 3, 300, 3] → (1, heap(300, 3, 3))
  Map task 4: [8, 15, 20015, 89] → (1, heap(20015, 89, 15))
Then all the heaps will go to a single reducer responsible for the key 1
This works, but clearly not scalable…

Brainstorm: Top K
Idea: Use X different keys to balance load (e.g. X = 2 here)
  Map task 1: [10, 5, 3, 700, 18, 4] → (1, heap(700, 18, 10))
  Map task 2: [16, 4, 523, 100, 88] → (1, heap(523, 100, 88))
  Map task 3: [3, 3, 3, 3, 300, 3] → (2, heap(300, 3, 3))
  Map task 4: [8, 15, 20015, 89] → (2, heap(20015, 89, 15))
Then all the heaps will (hopefully) go to X different reducers
Rinse and repeat (what’s the runtime complexity?)
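
(A sketch of this map/reduce logic in Scala; topK, mapTask and reduceTask are hypothetical helpers, and a sorted take stands in for a real size-K min-heap.)

import scala.util.Random

// Keep only the K largest values of a collection (assumes K values fit in memory).
def topK(values: Seq[Int], k: Int): Seq[Int] =
  values.sorted(Ordering[Int].reverse).take(k)

// Map phase: each task keeps its K largest values and picks one of numKeys
// reducer keys so the merge load is spread out (the X keys on the slide).
def mapTask(partition: Seq[Int], k: Int, numKeys: Int): (Int, Seq[Int]) =
  (Random.nextInt(numKeys), topK(partition, k))

// Reduce phase: merge all heaps that arrived for one key back into a single
// top-K; repeat over the reducers' outputs until one heap remains.
def reduceTask(key: Int, heaps: Seq[Seq[Int]], k: Int): Seq[Int] =
  topK(heaps.flatten, k)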

Case Study: Spark (Data-parallel programming at scale)

What is Spark?
General distributed data execution engine
Key optimizations:
  General computation graphs (pipelining, lazy execution)
  In-memory data sharing (caching)
  Fine-grained fault tolerance
#1 most active big data project

What is Spark?
Libraries built on top of Spark Core:
Spark SQL (structured data), Spark Streaming (real-time), MLlib (machine learning), GraphX (graph processing), …

Spark computational model
Most computation can be expressed in terms of two phases
  Map phase defines how each machine processes its individual partition
  Reduce phase defines how to merge map outputs from previous phase
Spark expresses computation as a DAG of maps and reduces

[Diagram: map phase, then a synchronization barrier, then reduce phase]

More complicated DAG

Spark: Word count
Transformations express how to process a dataset
Actions express how to turn a transformed dataset into results

sc.textFile("declaration-of-independence.txt")
  .flatMap { line => line.split(" ") }
  .map { word => (word, 1) }
  .reduceByKey { case (counts1, counts2) => counts1 + counts2 }
  .collect()

Pipelining + Lazy execution
Transformations can be pipelined until we hit
  A synchronization barrier (e.g. reduce), or
  An action

Example:
data.map { ... }.filter { ... }.flatMap { ... }.groupByKey().count()

The three transformations (map, filter, flatMap) can all be run in the same task
This allows lazy execution; we don’t need to eagerly execute map

In-memory caching
Store intermediate results in memory to bypass disk access
Important for iterative workloads (e.g. machine learning!)

Example:
val cached = data.map { ... }.filter { ... }.cache()
(1 to 100).foreach { i =>
  cached.reduceByKey { ... }.saveAsTextFile(...)
}

Reusing map outputs
Reusing map outputs allows Spark to avoid redoing map stages
Along with caching, this makes iterative workloads much faster

Example:
val transformed = data.map { ... }.reduceByKey { ... }
transformed.collect()
transformed.collect() // does not run map phase again

Logistic Regression Performance
Hadoop: 127 s / iteration
Spark: 174 s for the first iteration, 6 s for further iterations
This is for a 29 GB dataset on 20 EC2 m1.xlarge machines (4 cores each)
[Borrowed from Matei Zaharia’s talk, “Spark: Fast, Interactive, Language-Integrated Cluster Computing”]

Monday Stream processing

Application: Word Count

SQL: SELECT count(word) FROM data GROUP BY word

Shell: cat data.txt | tr -s '[[:punct:][:space:]]' '\n' | sort | uniq -c

Using partial aggregation
Compute word counts from individual files
Then merge intermediate output
Compute word count on merged outputs

Using partial aggregation
In parallel, send to worker:
  Compute word counts from individual files
Collect result, wait until all finished
Then merge intermediate output
  Compute word count on merged intermediates
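
(A sketch of this pattern with Scala futures standing in for remote workers; countWords, mergeAll and parallelWordCount are illustrative names, not part of any framework.)

import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration.Duration

// Partial aggregation per "file", then a merge of the intermediate outputs.
def countWords(text: String): Map[String, Int] =
  text.toLowerCase.split("\\W+").filter(_.nonEmpty)
      .groupBy(identity).map { case (w, g) => w -> g.size }

def mergeAll(partials: Seq[Map[String, Int]]): Map[String, Int] =
  partials.flatten.groupBy(_._1).map { case (w, kvs) => w -> kvs.map(_._2).sum }

def parallelWordCount(files: Seq[String]): Map[String, Int] = {
  val partials  = files.map(contents => Future(countWords(contents)))    // send to "workers"
  val collected = Await.result(Future.sequence(partials), Duration.Inf)  // wait until all finished
  mergeAll(collected)                                                    // merge intermediate output
}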

MapReduce: Programming Interface
map(key, value) -> list(<k’, v’>)
  Applies function to (key, value) pair and produces set of intermediate pairs
reduce(key, list<value>) -> <k’, v’>
  Applies aggregation function to values
  Outputs result

MapReduce: Programming Interface

map(key, value):
  for each word w in value:
    EmitIntermediate(w, "1");

reduce(key, list(values)):
  int result = 0;
  for each v in values:
    result += ParseInt(v);
  Emit(AsString(result));

MapReduce: Optimizations

combine(list<key, value>) -> list<k, v>
  Perform partial aggregation on mapper node: <the, 1>, <the, 1>, <the, 1> → <the, 3>
  reduce() should be commutative and associative

partition(key, int) -> int
  Need to aggregate intermediate vals with same key
  Given n partitions, map key to partition 0 ≤ i < n
  Typically via hash(key) mod n
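
(Illustrative stand-ins for these two hooks in Scala; the names combine and partition mirror the slide, not a real library API.)

// Combiner: partially aggregate pairs on the mapper before the shuffle,
// e.g. <the, 1>, <the, 1>, <the, 1> becomes <the, 3>.
def combine(pairs: Seq[(String, Int)]): Seq[(String, Int)] =
  pairs.groupBy(_._1).map { case (k, kvs) => k -> kvs.map(_._2).sum }.toSeq

// Partitioner: map a key to one of n reduce partitions, typically hash(key) mod n.
def partition(key: String, numPartitions: Int): Int =
  Math.floorMod(key.hashCode, numPartitions)   // floorMod keeps the result in [0, n)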

Fault Tolerance in MapReduce
Map worker writes intermediate output to local disk, separated by partitioning. Once completed, tells master node.
Reduce worker is told of the location of map task outputs, pulls its partition’s data from each mapper, executes function across data
Note:
  “All-to-all” shuffle b/w mappers and reducers
  Written to disk (“materialized”) b/w each stage

Fault Tolerance in MapReduce
Master node monitors state of system
  If master fails, job aborts and client is notified
Map worker failure
  Both in-progress and completed tasks marked as idle
  Reduce workers notified when map task is re-executed on another map worker
Reduce worker failure
  In-progress tasks are reset to idle (and re-executed)
  Completed tasks had been written to the global file system
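
(A toy sketch of the master's bookkeeping on a worker failure, heavily simplified for illustration; the types and function are made up, not the actual MapReduce implementation.)

sealed trait TaskState
case object Idle extends TaskState
case object InProgress extends TaskState
case object Completed extends TaskState

case class Task(id: Int, isMap: Boolean, worker: String, state: TaskState)

def onWorkerFailure(tasks: Seq[Task], failedWorker: String): Seq[Task] =
  tasks.map { t =>
    if (t.worker != failedWorker) t
    // Map outputs lived on the failed worker's local disk: redo both
    // in-progress and completed map tasks.
    else if (t.isMap) t.copy(state = Idle)
    // In-progress reduce tasks are reset; completed reduce output is already
    // in the global file system, so it is kept.
    else if (t.state == InProgress) t.copy(state = Idle)
    else t
  }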

Straggler Mitigation in MapReduce
Tail latency means some workers finish late
For slow map tasks, execute in parallel on a second map worker as a “backup”; the two copies race to complete the task
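
(A toy sketch of backup execution with Scala futures: run the same task twice and take whichever copy finishes first; withBackup and runTask are hypothetical names, and a real system also cancels or ignores the losing copy.)

import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration.Duration

// Launch a primary and a backup copy of the same task and return the first result.
def withBackup[T](runTask: () => T): T = {
  val primary = Future(runTask())
  val backup  = Future(runTask())
  Await.result(Future.firstCompletedOf(Seq(primary, backup)), Duration.Inf)
}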