CS347: Map-Reduce & Pig
Hector Garcia-Molina, Stanford University

"Big Data" Open Source Systems Infrastructure for distributed data computations Map-Reduce, S4, Hyracks, Pregel [Storm, Mupet] Components MemCachedD, ZooKeeper, Kestrel Data services Pig, F1 Cassandra, H-Base, Big Table [Hive] CS347 Notes 09

Motivation for Map-Reduce
Recall one of our sort strategies: each fragment Ri is sorted locally into R'i, and the sorted data is then range-partitioned (at keys k0, k1) to form the result. The general shape: process data & partition, then additional processing.
[Figure: local sorts feeding a range partition]

Another example: asymmetric fragment + replicate join
R is partitioned into fragments R1, R2, R3 (from Ra, Rb); S (Sa, Sb) is replicated to every site; each site joins its R-fragment with all of S, and the union of the local joins is the result. Same shape again: process data & partition, then additional processing.
[Figure: fragment-and-replicate join dataflow]

Building Text Index - Part I (the original Map-Reduce application)
A stream of pages is loaded and tokenized into (word, page) pairs, e.g. (rat, 1), (dog, 1), (dog, 2), (cat, 2), (rat, 3), (dog, 3), which are sorted in a buffer; when the buffer fills, a sorted intermediate run, e.g. (cat, 2) (dog, 1) (dog, 2) (dog, 3) (rat, 1) (rat, 3), is flushed to disk. Stages: Loading, Tokenizing, Sorting. The stages use different resources (network, CPU, I/O), so the obvious approach is to execute them simultaneously.

Building Text Index - Part II
The sorted intermediate runs, e.g. (cat, 2) (dog, 1) (dog, 2) (dog, 3) (rat, 1) (rat, 3) and (ant, 5) (cat, 4) (dog, 4) (dog, 5) (eel, 6), are merged, and entries with the same word are collapsed into the final index: (ant: 5) (cat: 2,4) (dog: 1,2,3,4,5) (eel: 6) (rat: 1,3).

Generalizing: Map-Reduce (Part I revisited)
The Loading, Tokenizing, Sorting pipeline of Part I is the Map phase: each input page is mapped to a set of (word, page) pairs, which are sorted and flushed as intermediate runs. [Figure repeated from Part I.]

Generalizing: Map-Reduce (Part II revisited)
The merge of intermediate runs into the final index is the Reduce phase: for each word, all its page numbers are collected and reduced to one index entry. [Figure repeated from Part II, with the merge output feeding Reduce.]

Map Reduce
Input: R = {r1, r2, ..., rn}; functions M, R
  M(ri) → { [k1, v1], [k2, v2], ... }
  R(ki, valSet) → [ki, valSet']
Let S = { [k, v] | [k, v] ∈ M(r) for some r ∈ R }
Let K = { k | [k, v] ∈ S, for any v }
Let G(k) = { v | [k, v] ∈ S }
Output = { [k, T] | k ∈ K, T = R(k, G(k)) }
Note: S and each G(k) are bags.
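To make the definition concrete, here is a minimal single-machine sketch in Python (the names map_reduce, M, and R are illustrative, not from the paper):

    from collections import defaultdict

    def map_reduce(inputs, M, R):
        S = [kv for r in inputs for kv in M(r)]   # S: bag of all [k, v] pairs
        G = defaultdict(list)                     # G(k): bag of values for key k
        for k, v in S:
            G[k].append(v)
        return {k: R(k, vals) for k, vals in G.items()}   # [k, T] for each k in K

    print(map_reduce(["a b", "b"],
                     lambda r: [(w, 1) for w in r.split()],
                     lambda k, vs: sum(vs)))      # {'a': 1, 'b': 2}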

References
- MapReduce: Simplified Data Processing on Large Clusters, Jeffrey Dean and Sanjay Ghemawat. http://labs.google.com/papers/mapreduce-osdi04.pdf
- Pig Latin: A Not-So-Foreign Language for Data Processing, Christopher Olston, Benjamin Reed, Utkarsh Srivastava, Ravi Kumar, Andrew Tomkins. http://wiki.apache.org/pig/

Example: Counting Word Occurrences

    map(String doc, String value):
      // doc is document name
      // value is document content
      for each word w in value:
        EmitIntermediate(w, "1");

Example: map(doc, "cat dog cat bat dog") emits [cat 1], [dog 1], [cat 1], [bat 1], [dog 1].
Why does map have 2 parameters?

Example: Counting Word Occurrences

    reduce(String key, Iterator values):
      // key is a word
      // values is a list of counts
      int result = 0;
      for each v in values:
        result += ParseInt(v);
      Emit(AsString(result));

Example: reduce("dog", "1 1 1 1") emits "4". Should it emit ("dog", 4)?
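A runnable Python transcription of the two functions, with a toy sequential driver (illustrative; the real system distributes the map tasks and sorts the intermediate pairs):

    from collections import defaultdict

    def map_fn(doc, value):                        # doc: name, value: contents
        return [(w, "1") for w in value.split()]   # EmitIntermediate(w, "1")

    def reduce_fn(key, values):                    # key: a word
        result = 0
        for v in values:                           # values: list of counts
            result += int(v)                       # ParseInt(v)
        return result

    groups = defaultdict(list)
    for k, v in map_fn("doc", "cat dog cat bat dog"):
        groups[k].append(v)
    print({w: reduce_fn(w, vs) for w, vs in groups.items()})
    # {'cat': 2, 'dog': 2, 'bat': 1}

Keeping the key in the driver's output sidesteps the slide's question: we report (word, count) pairs rather than bare counts.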

Google MR Overview
[Figure omitted in transcript: architecture of Google's MapReduce implementation]

Implementation Issues
- Combine function
- File system
- Partition of input, keys
- Failures
- Backup tasks
- Ordering of results

Combine Function
Combine is like a local reduce applied at each map worker before distribution:
  [cat 1], [cat 1], [cat 1], ... → [cat 3], ...
  [dog 1], [dog 1], ... → [dog 2], ...
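A sketch of the combine step for word count (assuming, as in the word-count example, that counts can be partially summed locally before anything is sent over the network):

    def combine(pairs):
        # a local reduce over one map worker's output, run before any transfer
        local = {}
        for k, v in pairs:
            local[k] = local.get(k, 0) + v
        return list(local.items())

    print(combine([("cat", 1), ("cat", 1), ("cat", 1), ("dog", 1), ("dog", 1)]))
    # [('cat', 3), ('dog', 2)]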

Distributed File System
- Any map worker must be able to access any part of the input file
- Reduce workers must be able to access the local disks of map workers
- All data transfers are through the distributed file system
- Any worker must be able to write its part of the answer; the answer is left as a distributed file

Partition of Input, Keys
How many workers, and how many partitions (splits) of the input file?
- Best to have many splits per worker: improves load balance, and if a worker fails it is easier to spread its tasks among the others
- Should workers be assigned to splits "near" them?
- Similar questions arise for reduce workers and the partition of keys

Failures
The distributed implementation should produce the same output as would have been produced by a non-faulty sequential execution of the program. General strategy: the master detects worker failures (e.g., by pinging "ok?") and has the failed worker's splits re-done by another worker.

Backup Tasks
A straggler is a machine that takes unusually long to finish its work (e.g., due to a bad disk). A straggler can delay final completion. When the job is close to finishing, the master schedules backup executions of the remaining tasks. Must be able to eliminate redundant results.

Ordering of Results
The final result (at each node) is in key order, e.g. [k1, T1] [k2, T2] [k3, T3] [k4, T4]. The intermediate pairs handed to each reduce worker, e.g. [k1, v1] [k3, v3], are also in key order.

Example: Sorting Records
Map: extract the key k, output [k, record]. Reduce: do nothing! With map workers (W1, W2, W3) feeding reduce workers (W5, W6) by key range, each reduce worker's output is sorted. Question: one or two records for k=6?
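A toy sketch of this pattern (assuming range partitioning of keys across reduce workers, so that concatenating the reducers' outputs gives a globally sorted result; the boundaries are made up):

    records = [(6, "f"), (2, "b"), (9, "i"), (4, "d"), (6, "g")]

    def partition(k, boundaries=(3, 7)):
        # route key k to a reduce worker by range: k<3 -> 0, 3<=k<7 -> 1, k>=7 -> 2
        return sum(k >= b for b in boundaries)

    reducers = {0: [], 1: [], 2: []}
    for k, rec in records:                 # Map: extract k, output [k, record]
        reducers[partition(k)].append((k, rec))

    out = []
    for i in sorted(reducers):             # reducer outputs concatenated in order
        out.extend(sorted(reducers[i]))    # Reduce: do nothing (identity)
    print(out)                             # globally sorted by key

Note that both k=6 records land in the same partition, since partitioning depends only on the key, so both appear (adjacent) in the output.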

Other Issues
- Skipping bad records
- Debugging

MR Claimed Advantages
- Model is easy to use; it hides the details of parallelization and fault recovery
- Many problems are expressible in the MR framework
- Scales to thousands of machines

MR Possible Disadvantages
- The rigid 1-input, 2-stage dataflow is hard to adapt to other scenarios
- Custom code must be written even for the most common operations, e.g., projection and filtering
- The opaque nature of the map and reduce functions impedes optimization

Questions
- Can MR be made more "declarative"?
- How can we perform joins?
- How can we perform approximate grouping? Example: for all keys that are similar, reduce all values for those keys.

Additional Topics
- Hadoop: open-source Map-Reduce system
- Pig: Yahoo system that builds on MR but is more declarative

Pig & Pig Latin A layer on top of map-reduce (Hadoop) Pig is the system Pig Latin is the query language Pig Latin is a hybrid between: high-level declarative query language in the spirit of SQL low-level, procedural programming à la map-reduce. CS347 Notes 09

Example
Table urls: (url, category, pagerank). Find, for each sufficiently large category, the average pagerank of high-pagerank urls in that category. In SQL:

    SELECT category, AVG(pagerank)
    FROM urls
    WHERE pagerank > 0.2
    GROUP BY category
    HAVING COUNT(*) > 10^6

Example in Pig Latin

    good_urls = FILTER urls BY pagerank > 0.2;
    groups = GROUP good_urls BY category;
    big_groups = FILTER groups BY COUNT(good_urls) > 10^6;
    output = FOREACH big_groups GENERATE category, AVG(good_urls.pagerank);
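For intuition, the same dataflow as plain Python over a toy table (illustrative only; min_count stands in for the 10^6 of the example so the toy data produces output):

    from collections import defaultdict

    urls = [("a.com", "news", 0.9), ("b.com", "news", 0.3), ("c.com", "sports", 0.1)]
    min_count = 1                                       # 10^6 in the real example

    good_urls = [u for u in urls if u[2] > 0.2]         # FILTER BY pagerank > 0.2
    groups = defaultdict(list)
    for u in good_urls:                                 # GROUP BY category
        groups[u[1]].append(u)
    big_groups = {c: g for c, g in groups.items() if len(g) > min_count}
    output = [(c, sum(u[2] for u in g) / len(g))        # AVG(good_urls.pagerank)
              for c, g in big_groups.items()]
    print(output)                                       # [('news', 0.6...)]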

good_urls = FILTER urls BY pagerank > 0.2;
    input:  urls (url, category, pagerank)
    output: good_urls (url, category, pagerank)

groups = GROUP good_urls BY category;
    input:  good_urls (url, category, pagerank)
    output: groups (category, good_urls)

big_groups = FILTER groups BY COUNT(good_urls) > 10^6;
    input:  groups (category, good_urls)
    output: big_groups (category, good_urls)

output = FOREACH big_groups GENERATE category, AVG(good_urls.pagerank);
    input:  big_groups (category, good_urls)
    output: output (category, avg_pagerank)

Features
- Similar to specifying a query execution plan (i.e., a dataflow graph), making it easier for programmers to understand and control how their data processing task is executed
- Support for a flexible, fully nested data model
- Extensive support for user-defined functions
- Ability to operate over plain input files without any schema information
- Novel debugging environment, useful when dealing with enormous data sets

Execution Control: Good or Bad?
Example:

    spam_urls = FILTER urls BY isSpam(url);
    culprit_urls = FILTER spam_urls BY pagerank > 0.8;

Should the system re-order the filters? (Applying the cheap pagerank filter first would avoid running the isSpam UDF on most urls, but the system may not know the UDF's cost or side effects.)

User Defined Functions
Example:

    groups = GROUP urls BY category;
    output = FOREACH groups GENERATE category, top10(urls);   -- should be groups.url?

Input groups, e.g.:
  .gov → {(x.fbi.gov, .gov, 0.7) ...}
  .edu → {(y.yale.edu, .edu, 0.5) ...}
  .com → {(z.cnn.com, .com, 0.9) ...}
The UDF top10 can return a scalar or a set, e.g.:
  .gov → {(fbi.gov) (cia.gov) ...}
  .edu → {(yale.edu) ...}
  .com → {(cnn.com) (ibm.com) ...}

Data Model
- Atom, e.g., 'alice'
- Tuple, e.g., ('alice', 'lakers')
- Bag, e.g., { ('alice', 'lakers') ('alice', ('iPod', 'apple')) }
- Map, e.g., [ 'fan of' → { ('lakers') ('iPod') }, 'age' → 20 ]
Note: bags can currently only hold tuples, so {1, 2, 3} is stored as {(1) (2) (3)}.
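A rough rendering of the four kinds of values as Python literals (an analogy only; Pig bags are unordered multisets, shown here as lists):

    atom = "alice"                                    # Atom
    tup = ("alice", "lakers")                         # Tuple
    bag = [("alice", "lakers"),                       # Bag: tuples need not have
           ("alice", ("iPod", "apple"))]              # matching shapes
    mapping = {"fan of": [("lakers",), ("iPod",)],    # Map: key -> bag
               "age": 20}                             #      key -> atom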

Expressions in Pig Latin
[Table of expression forms omitted in transcript. Note on the slide: one entry should be (1) + (2); see the flatten examples ahead.]

Specifying Input Data

    queries = LOAD 'query_log.txt' USING myLoad() AS (userId, queryString, timestamp);

Here queries is a handle for future use, 'query_log.txt' is the input file, myLoad() is a custom deserializer, and AS (...) gives the output schema.

For Each

    expanded_queries = FOREACH queries GENERATE userId, expandQuery(queryString);

See the example on the next slide. Note each tuple is processed independently; good for parallelism. To remove one level of nesting:

    expanded_queries = FOREACH queries GENERATE userId, FLATTEN(expandQuery(queryString));

ForEach and Flattening
[Figure omitted in transcript: expanding the query "lakers rumors"; "lakers rumors" is a single string value, carried along with its userId.]

Flattening Example

    Y = FOREACH X GENERATE A, FLATTEN(B), C

X has fields A, B, C, where B and C are bags. Note the first output tuple is (a1, b1, b2, {(c1)(c2)}): flatten is not recursive. Attribute naming gets complicated; for example, $2 for the first tuple is b2, while for the third tuple it is {(c1)(c2)}.

Flattening Example

    Y = FOREACH X GENERATE A, FLATTEN(B), C
    Z = FOREACH Y GENERATE A, B, FLATTEN(C)

Note that Z = Z', where Z' = FOREACH X GENERATE A, FLATTEN(B), FLATTEN(C).
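A sketch of what FLATTEN does to one nested field (hypothetical tuples; flatten_field is an illustrative helper, not Pig syntax). Running it shows Y's first tuple and that Z equals Z':

    def flatten_field(rows, i):
        # one output row per inner tuple of the bag in column i (not recursive)
        out = []
        for row in rows:
            for inner in row[i]:
                out.append(row[:i] + inner + row[i + 1:])
        return out

    X = [("a1", (("b1", "b2"),), (("c1",), ("c2",)))]  # fields A, B, C; B, C bags
    Y = flatten_field(X, 1)   # [('a1', 'b1', 'b2', (('c1',), ('c2',)))]
    Z = flatten_field(Y, 3)   # [('a1','b1','b2','c1'), ('a1','b1','b2','c2')]
    print(Y, Z)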

Filter

    real_queries = FILTER queries BY userId neq 'bot';
    real_queries = FILTER queries BY NOT isBot(userId);   -- isBot is a UDF

Co-Group
Two data sets, for example:
  results: (queryString, url, position)
  revenue: (queryString, adSlot, amount)

    grouped_data = COGROUP results BY queryString, revenue BY queryString;
    url_revenues = FOREACH grouped_data GENERATE FLATTEN(distributeRevenue(results, revenue));

Co-Group is more flexible than SQL JOIN.
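A sketch of COGROUP semantics in Python (cogroup is an illustrative helper: one output tuple per key, carrying a bag of matching tuples from each input):

    def cogroup(xs, kx, ys, ky):
        # (key, bag of matching xs, bag of matching ys) for every key seen
        keys = {kx(x) for x in xs} | {ky(y) for y in ys}
        return [(k,
                 [x for x in xs if kx(x) == k],
                 [y for y in ys if ky(y) == k]) for k in sorted(keys)]

    results = [("lakers", "nba.com", 1), ("lakers", "espn.com", 2)]
    revenue = [("lakers", "top", 50), ("jazz", "side", 10)]
    for row in cogroup(results, lambda t: t[0], revenue, lambda t: t[0]):
        print(row)
    # ('jazz', [], [('jazz', 'side', 10)])
    # ('lakers', [('lakers', 'nba.com', 1), ('lakers', 'espn.com', 2)],
    #            [('lakers', 'top', 50)])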

CoGroup vs Join
[Figure omitted in transcript: COGROUP yields one tuple per key with a nested bag from each input, while JOIN flattens those bags into the per-key cross-product.]

Group (Simple CoGroup)

    grouped_revenue = GROUP revenue BY queryString;
    query_revenues = FOREACH grouped_revenue GENERATE queryString, SUM(revenue.amount) AS totalRevenue;

CoGroup Example 1
X: (A, B, C); Y: (A, B, D)

    Z1 = GROUP X BY A

Z1 has schema (A, X): one tuple per value of A, with a bag of the matching X-tuples. [Tables omitted in transcript.]

CoGroup Example 2 (syntax not in the paper but being added)
X: (A, B, C); Y: (A, B, D)

    Z2 = GROUP X BY (A, B)

Z2 has schema ((A, B), X): one tuple per (A, B) pair, with a bag of the matching X-tuples. [Tables omitted in transcript.]

CoGroup Example 3
X: (A, B, C); Y: (A, B, D)

    Z3 = COGROUP X BY A, Y BY A

Z3 has schema (A, X, Y): one tuple per value of A, with a bag of matching X-tuples and a bag of matching Y-tuples. [Tables omitted in transcript.]

CoGroup Example 4
X: (A, B, C); Y: (A, B, D)

    Z4 = COGROUP X BY A, Y BY B

Z4 has one tuple per grouping value: a bag of X-tuples whose A matches it and a bag of Y-tuples whose B matches it. [Tables omitted in transcript.]

CoGroup With Function Call?
X: (A, B)

    Y = GROUP X BY A
    Z = GROUP X BY SUM(A)   -- SUM(A) adds the integers in the tuple

Y has schema (A, X); for Z the grouping attribute is SUM(A) rather than A. [Tables omitted in transcript.]

Pig Latin Join

    join_result = JOIN results BY queryString, revenue BY queryString;

Shorthand for:

    temp_var = COGROUP results BY queryString, revenue BY queryString;
    join_result = FOREACH temp_var GENERATE FLATTEN(results), FLATTEN(revenue);
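Continuing the Python sketches, the equivalence of JOIN with COGROUP plus FLATTEN on toy data (illustrative field positions; queryString is field 0):

    results = [("lakers", "nba.com", 1), ("lakers", "espn.com", 2)]
    revenue = [("lakers", "top", 50)]

    keys = {t[0] for t in results} | {t[0] for t in revenue}   # COGROUP keys
    join_result = []
    for k in keys:
        for r in (t for t in results if t[0] == k):      # bag from results
            for s in (t for t in revenue if t[0] == k):  # bag from revenue
                join_result.append(r + s)                # FLATTEN both bags
    print(join_result)
    # [('lakers', 'nba.com', 1, 'lakers', 'top', 50),
    #  ('lakers', 'espn.com', 2, 'lakers', 'top', 50)]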

MapReduce in Pig Latin

    map_result = FOREACH input GENERATE FLATTEN(map(*));
    key_groups = GROUP map_result BY $0;
    output = FOREACH key_groups GENERATE reduce(*);

Here $0 groups on the key, which is the first attribute, and * passes all attributes.

Store
To materialize a result in a file:

    STORE query_revenues INTO 'myoutput' USING myStore();

Here 'myoutput' is the output file and myStore() is a custom serializer.

Hadoop
- HDFS: Hadoop file system
- How to use Hadoop, examples