CS639: Data Management for Data Science
Lecture 9: The MapReduce Programming Model and Algorithms in MapReduce
Theodoros Rekatsinas

Logistics/Announcements Minor change to the schedule: Friday's lecture will be covered by Paris Koutris, and we will skip Monday's lecture (I am in the Bay Area).

Today's Lecture: 1. The MapReduce Abstraction 2. The MapReduce Programming Model 3. MapReduce Examples

1. The MapReduce Abstraction

The Map Reduce Abstraction for Distributed Algorithms [Diagram, built up over three slides: data in distributed storage feeds many parallel map tasks; a shuffle phase groups their intermediate output by key; many parallel reduce tasks then produce the final output.]

[Diagram: example data flow through Map, Shuffle, and Reduce. Input pairs (docID=1, v1), (docID=2, v2), (docID=3, v3), … are mapped to intermediate pairs such as (w1, 1), which the shuffle routes to reduce tasks.]

The Map Reduce Abstraction for Distributed Algorithms MapReduce is a high-level programming model and implementation for large-scale parallel data processing. Just as RDBMSs adopt the relational data model, MapReduce has a data model of its own.

MapReduce's Data Model: Files! A file is a bag of (key, value) pairs, where a bag is a multiset. A map-reduce program: Input: a bag of (input key, value) pairs. Output: a bag of (output key, value) pairs.
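To make the data model concrete, here is a hypothetical rendering of such a file in Python, as a bag (list) of (key, value) pairs; the variable name and the toy documents are illustrative only (they reuse the corpus from the inverted-index example later in the lecture):

    # A "file" in the MapReduce data model: a bag of (key, value) pairs.
    # Here the keys are document IDs and the values are document text.
    input_file = [
        ("doc1", "I love medium roast coffee"),
        ("doc2", "I do not like coffee"),
        ("doc3", "This medium well steak is great"),
        ("doc4", "I love steak"),
    ]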

2. The MapReduce Programming Model

User input: all the user needs to define are the MAP and REDUCE functions. Execution proceeds in multiple MAP-REDUCE rounds, where a MAP-REDUCE round is a MAP phase followed by a REDUCE phase.

MAP Phase Step 1: the MAP phase. The user provides a MAP function. Input: (input key, value). Output: bag of (intermediate key, value) pairs. The system applies the map function in parallel to all (input key, value) pairs in the input file.

REDUCE Phase Step 2: the REDUCE phase. The user provides a REDUCE function. Input: (intermediate key, bag of values). Output: (intermediate key, values). The system groups all pairs with the same intermediate key and passes the bag of values to the REDUCE function.

MapReduce Programming Model Input & Output: each a set of key/value pairs. The programmer specifies two functions: map(in_key, in_value) -> list(out_key, intermediate_value), which processes one input key/value pair and produces a set of intermediate pairs; and reduce(out_key, list(intermediate_value)) -> (out_key, list(out_values)), which combines all intermediate values for a particular key and produces a set of merged output values (usually just one).
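To make the model concrete, here is a minimal single-machine sketch of one MAP-REDUCE round in Python. The names run_mapreduce and shuffle are made up for illustration; a real system runs the map and reduce calls in parallel across many machines and also handles partitioning, scheduling, fault tolerance, and distributed I/O.

    from collections import defaultdict

    def shuffle(intermediate_pairs):
        # Group intermediate (key, value) pairs by key, as the system
        # does between the MAP and REDUCE phases.
        groups = defaultdict(list)
        for key, value in intermediate_pairs:
            groups[key].append(value)
        return groups

    def run_mapreduce(map_fn, reduce_fn, input_pairs):
        # Sequential simulation of one MAP-REDUCE round.
        # map_fn(key, value) and reduce_fn(key, values) each return
        # a list of (key, value) pairs.
        intermediate = []
        for key, value in input_pairs:          # MAP phase
            intermediate.extend(map_fn(key, value))
        output = []
        for key, values in shuffle(intermediate).items():  # REDUCE phase
            output.extend(reduce_fn(key, values))
        return output

The programs on the following slides can be read as map/reduce function pairs that plug into this sketch.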

Example: what does the next program do?

map(String input_key, String input_value):
    // input_key: document id
    // input_value: document bag of words
    for each word w in input_value:
        EmitIntermediate(w, 1);

reduce(String intermediate_key, Iterator intermediate_values):
    // intermediate_key: word
    // intermediate_values: ????
    result = 0;
    for each v in intermediate_values:
        result += v;
    EmitFinal(intermediate_key, result);
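This is the classic word-count program: map emits (word, 1) for every word occurrence, and reduce sums those 1s per word (so intermediate_values is a bag of 1s). A direct Python transcription, written to plug into the hypothetical run_mapreduce sketch above:

    def map_word_count(input_key, input_value):
        # input_key: document id; input_value: document text.
        # Emit (word, 1) for every occurrence of every word.
        return [(w, 1) for w in input_value.split()]

    def reduce_word_count(intermediate_key, intermediate_values):
        # intermediate_values: the bag of 1s emitted for this word.
        return [(intermediate_key, sum(intermediate_values))]

    # run_mapreduce(map_word_count, reduce_word_count, input_file)
    # yields pairs such as ("coffee", 2) and ("steak", 2).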

Example: what does the next program do?

map(String input_key, String input_value):
    // input_key: document id
    // input_value: document bag of words
    word_count = {}
    for each word w in input_value:
        increase word_count[w] by one
    for each word w in word_count:
        EmitIntermediate(w, word_count[w]);

reduce(String intermediate_key, Iterator intermediate_values):
    // intermediate_key: word
    // intermediate_values: ????
    result = 0;
    for each v in intermediate_values:
        result += v;
    EmitFinal(intermediate_key, result);
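This variant computes the same word counts, but each map call first aggregates locally, emitting one (word, count) pair per distinct word instead of one (word, 1) pair per occurrence, which shrinks the data that must cross the network during the shuffle. Only the map function changes; a sketch in the same hypothetical Python setting (reduce is unchanged, since summing partial counts gives the same totals):

    from collections import Counter

    def map_word_count_combining(input_key, input_value):
        # Pre-aggregate within the document: emit one (word, local_count)
        # pair per distinct word, not one (word, 1) per occurrence.
        return list(Counter(input_value.split()).items())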

3. MapReduce Examples

Word length histogram How many big, medium, small, and tiny words are in a document? Big = 10+ letters Medium = 5..9 letters Small = 2..4 letters Tiny = 1 letter

Word length histogram Split the document into chunks and process each chunk on a different computer. [Diagram: the document split into Chunk 1 and Chunk 2.]

Word length histogram [Diagram: the map task for Chunk 1 (204 words) outputs (Big, 17), (Medium, 77), (Small, 107), (Tiny, 3).]

Word length histogram [Diagram: map task 1 outputs (Big, 17), (Medium, 77), (Small, 107), (Tiny, 3); map task 2 outputs (Big, 20), (Medium, 71), (Small, 93), (Tiny, 6). The reduce phase sums the counts per key: (Big, 37), (Medium, 148), (Small, 200), (Tiny, 9).]
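One way to write the histogram job as a map/reduce pair for the earlier run_mapreduce sketch; the bucket boundaries follow the slide's definitions, the map pre-aggregates per chunk as in the diagram, and all names are illustrative:

    from collections import Counter

    def word_size_bucket(word):
        # Bucket a word by length: Big = 10+, Medium = 5..9,
        # Small = 2..4, Tiny = 1 letter.
        n = len(word)
        if n >= 10:
            return "Big"
        if n >= 5:
            return "Medium"
        if n >= 2:
            return "Small"
        return "Tiny"

    def map_histogram(chunk_id, chunk_text):
        # Count words per bucket within this chunk, then emit one
        # (bucket, count) pair per bucket, matching the slide's map output.
        counts = Counter(word_size_bucket(w) for w in chunk_text.split())
        return list(counts.items())

    def reduce_histogram(bucket, counts):
        # Sum the partial counts for each bucket across all chunks.
        return [(bucket, sum(counts))]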

Build an Inverted Index
Input:
doc1, ("I love medium roast coffee")
doc2, ("I do not like coffee")
doc3, ("This medium well steak is great")
doc4, ("I love steak")
Output:
"roast", (doc1)
"coffee", (doc1, doc2)
"medium", (doc1, doc3)
"steak", (doc3, doc4)

Let's design the solution!
Input:
doc1, ("I love medium roast coffee")
doc2, ("I do not like coffee")
doc3, ("This medium well steak is great")
doc4, ("I love steak")
Output:
"roast", (doc1)
"coffee", (doc1, doc2)
"medium", (doc1, doc3)
"steak", (doc3, doc4)
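A sketch of one possible solution: map emits a (word, docID) pair for every word, and reduce collects the set of documents containing each word. The slide's expected output shows only a few selected words; this minimal version (names illustrative) indexes every word rather than filtering stop words:

    def map_inverted_index(doc_id, doc_text):
        # Emit (word, doc_id) for every word in the document.
        return [(w.lower(), doc_id) for w in doc_text.split()]

    def reduce_inverted_index(word, doc_ids):
        # Collect the sorted, deduplicated posting list for this word.
        return [(word, tuple(sorted(set(doc_ids))))]

    # run_mapreduce(map_inverted_index, reduce_inverted_index, input_file)
    # yields, e.g., ("coffee", ("doc1", "doc2")) and ("steak", ("doc3", "doc4")).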