Pig: Building High-Level Dataflows over Map-Reduce. Utkarsh Srivastava, Research & Cloud Computing.

Presentation transcript:

Pig: Building High-Level Dataflows over Map-Reduce
Utkarsh Srivastava, Research & Cloud Computing

Data Processing Renaissance
- Internet companies are swimming in data, e.g. TBs/day at Yahoo!
- Data analysis is the “inner loop” of product innovation
- Data analysts are skilled programmers

Data Warehousing…?
- Scale: often not scalable enough
- $: prohibitively expensive at web scale, up to $200K/TB
- SQL: little control over the execution method; query optimization is hard in a parallel environment with little or no statistics and lots of UDFs

New Systems For Data Analysis
- Map-Reduce (Apache Hadoop)
- Dryad
- ...

Map-Reduce
[Diagram: input records flow through map tasks, which emit (key, value) pairs; the pairs are grouped by key (k1, k2) and each group is reduced to output records.]
Just a group-by-aggregate?
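The “group-by-aggregate” reading of map-reduce can be made concrete with a small sketch (illustrative Python, not from the talk; `map_fn` and `reduce_fn` are invented names standing in for the user-supplied functions):

```python
from itertools import groupby

def map_reduce(records, map_fn, reduce_fn):
    # Map phase: each input record emits zero or more (key, value) pairs.
    pairs = [kv for rec in records for kv in map_fn(rec)]
    # Shuffle phase: bring equal keys together (the framework sorts/partitions).
    pairs.sort(key=lambda kv: kv[0])
    # Reduce phase: fold each key's bag of values into one output record.
    return [reduce_fn(k, [v for _, v in group])
            for k, group in groupby(pairs, key=lambda kv: kv[0])]

# Word count, the canonical group-by-aggregate.
docs = ["pig latin", "pig hadoop", "latin"]
counts = map_reduce(
    docs,
    map_fn=lambda doc: [(w, 1) for w in doc.split()],
    reduce_fn=lambda word, ones: (word, sum(ones)),
)
```

The whole model really is one group-by with user code on either side, which is exactly the rigidity the next slide complains about.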

The Map-Reduce Appeal
- Scale: scalable due to simpler design; only parallelizable operations; no transactions
- $: runs on cheap commodity hardware
- Procedural control, a processing “pipe” (instead of SQL)

Disadvantages
1. Extremely rigid data flow: other flows (joins, unions, splits, chains of jobs) must constantly be hacked in
[Diagram: map and reduce stages composed by hand into join, split, and chain topologies.]
2. Common operations must be coded by hand: join, filter, projection, aggregates, sorting, distinct
3. Semantics hidden inside map-reduce functions: difficult to maintain, extend, and optimize

Pros And Cons
+ Scalable, cheap, control over execution
– Inflexible, lots of hand coding, semantics hidden
Need a high-level, general data flow language.

Enter Pig Latin
Keeps what is good about map-reduce: scalable, cheap, control over execution.
Pig Latin fills the need for a high-level, general data flow language.

Outline
- Map-Reduce and the need for Pig Latin
- Pig Latin
- Compilation into Map-Reduce
- Example Generation
- Future Work

Example Data Analysis Task
Find the top 10 most visited pages in each category.

Visits:
User | Url        | Time
Amy  | cnn.com    | 8:00
Amy  | bbc.com    | 10:00
Amy  | flickr.com | 10:05
Fred | cnn.com    | 12:00

Url Info:
Url        | Category | PageRank
cnn.com    | News     | 0.9
bbc.com    | News     | 0.8
flickr.com | Photos   | 0.7
espn.com   | Sports   | 0.9

Data Flow
- Load Visits; group by url; foreach url, generate count
- Load Url Info; join on url
- Group by category; foreach category, generate top-10 urls

In Pig Latin

visits      = load '/data/visits' as (user, url, time);
gVisits     = group visits by url;
visitCounts = foreach gVisits generate url, count(visits);

urlInfo     = load '/data/urlInfo' as (url, category, pRank);
visitCounts = join visitCounts by url, urlInfo by url;

gCategories = group visitCounts by category;
topUrls     = foreach gCategories generate top(visitCounts,10);

store topUrls into '/data/topUrls';
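For readers new to Pig Latin, a rough in-memory Python analogue of this script may help (an illustrative sketch only: the function name and tuple layouts are invented here, and real Pig runs over files on a cluster, not lists):

```python
from collections import defaultdict

def top_urls_by_category(visits, url_info, k=10):
    """Mirror the Pig script: count visits per url, join with
    url_info on url, then take the top-k urls per category."""
    # group visits by url + foreach gVisits generate url, count(visits)
    visit_counts = defaultdict(int)
    for user, url, time in visits:
        visit_counts[url] += 1
    # join visitCounts by url, urlInfo by url
    joined = [(url, cat, visit_counts[url])
              for url, cat, prank in url_info if url in visit_counts]
    # group by category + foreach gCategories generate top-k urls
    by_cat = defaultdict(list)
    for url, cat, n in joined:
        by_cat[cat].append((url, n))
    return {cat: sorted(urls, key=lambda u: -u[1])[:k]
            for cat, urls in by_cat.items()}

visits = [("Amy", "cnn.com", "8:00"), ("Amy", "bbc.com", "10:00"),
          ("Amy", "flickr.com", "10:05"), ("Fred", "cnn.com", "12:00")]
url_info = [("cnn.com", "News", 0.9), ("bbc.com", "News", 0.8),
            ("flickr.com", "Photos", 0.7), ("espn.com", "Sports", 0.9)]
top = top_urls_by_category(visits, url_info, k=1)
```

Note how each Pig statement names an intermediate result, exactly like the local variables here.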

Step-by-step Procedural Control
Target users are entrenched procedural programmers.
Automatic query optimization is hard; Pig Latin does not preclude optimization.

“The step-by-step method of creating a program in Pig is much cleaner and simpler to use than the single block method of SQL. It is easier to keep track of what your variables are, and where you are in the process of analyzing your data.” – Jasmine Novak, Engineer, Yahoo!

“With the various interleaved clauses in SQL, it is difficult to know what is actually happening sequentially. With Pig, the data nesting and the temporary tables get abstracted away. Pig has fewer primitives than SQL does, but it’s more powerful.” – David Ciemiewicz, Search Excellence, Yahoo!

Quick Start and Interoperability (annotations on the script above)
- Operates directly over files
- Schemas are optional, and can be assigned dynamically

User-Code as a First-Class Citizen
- User-defined functions (UDFs) can be used in every construct: Load, Store, Group, Filter, Foreach

Nested Data Model
Pig Latin has a fully-nestable data model with:
– Atomic values, tuples, bags (lists), and maps
More natural to programmers than flat tuples; avoids expensive joins.
[Diagram: a nested value for “yahoo” containing finance and news entries.]

Nested Data Model
Decouples grouping as an independent operation.

User | Url     | Time
Amy  | cnn.com | 8:00
Amy  | bbc.com | 10:00
Amy  | bbc.com | 10:05
Fred | cnn.com | 12:00

group by url:
cnn.com → { (Amy, cnn.com, 8:00), (Fred, cnn.com, 12:00) }
bbc.com → { (Amy, bbc.com, 10:00), (Amy, bbc.com, 10:05) }

- Common case: aggregation on these nested sets
- Power users: sophisticated UDFs, e.g., sequence analysis
- Efficient implementation (see paper)

“I frankly like pig much better than SQL in some respects (group + optional flatten works better for me, I love nested data structures).” – Ted Dunning, Chief Scientist, Veoh
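The decoupling of grouping from aggregation can be sketched as follows (illustrative Python; `group_by` is a hypothetical helper, not Pig's implementation):

```python
from collections import defaultdict

def group_by(tuples, key_index):
    # GROUP keeps each group's tuples as a nested bag,
    # rather than forcing an immediate aggregation as SQL's GROUP BY does.
    groups = defaultdict(list)
    for t in tuples:
        groups[t[key_index]].append(t)
    return dict(groups)

visits = [("Amy", "cnn.com", "8:00"), ("Amy", "bbc.com", "10:00"),
          ("Amy", "bbc.com", "10:05"), ("Fred", "cnn.com", "12:00")]
grouped = group_by(visits, key_index=1)  # group visits by url
```

The caller is then free to aggregate the bags, flatten them, or hand them to a UDF, which is the flexibility the slide is describing.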

CoGroup

results:
query  | url      | rank
Lakers | nba.com  | 1
Lakers | espn.com | 2
Kings  | nhl.com  | 1
Kings  | nba.com  | 2

revenue:
query  | adSlot | amount
Lakers | top    | 50
Lakers | side   | 20
Kings  | top    | 30
Kings  | side   | 10

cogroup by query:
Lakers → { (Lakers, nba.com, 1), (Lakers, espn.com, 2) }, { (Lakers, top, 50), (Lakers, side, 20) }
Kings  → { (Kings, nhl.com, 1), (Kings, nba.com, 2) }, { (Kings, top, 30), (Kings, side, 10) }

Cross-product of the 2 bags would give the natural join.
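A sketch of cogroup, and of join as cogroup plus a per-key cross-product, over the slide's tables (illustrative Python; the helper names are invented, and keys are assumed to be the first field of each tuple):

```python
from collections import defaultdict
from itertools import product

def cogroup(left, right):
    # COGROUP: one output entry per key, holding one bag from each input.
    out = defaultdict(lambda: ([], []))
    for t in left:
        out[t[0]][0].append(t)
    for t in right:
        out[t[0]][1].append(t)
    return dict(out)

def join(left, right):
    # JOIN = COGROUP followed by a per-key cross-product of the two bags.
    return [l + r[1:] for bags in cogroup(left, right).values()
            for l, r in product(*bags)]

results = [("Lakers", "nba.com", 1), ("Lakers", "espn.com", 2),
           ("Kings", "nhl.com", 1), ("Kings", "nba.com", 2)]
revenue = [("Lakers", "top", 50), ("Lakers", "side", 20),
           ("Kings", "top", 30), ("Kings", "side", 10)]
joined = join(results, revenue)
```

Exposing cogroup separately lets a program stop at the bags and process them directly, instead of always paying for the cross-product.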

Outline
- Map-Reduce and the need for Pig Latin
- Pig Latin
- Compilation into Map-Reduce
- Example Generation
- Future Work

Implementation
SQL (automatic rewrite + optimize) or user scripts → Pig → Hadoop Map-Reduce → cluster
- Pig is open-source
- ~50% of Hadoop jobs at Yahoo! are Pig
- 1000s of jobs per day

Compilation into Map-Reduce
Load Visits → Group by url (Map 1 / Reduce 1) → Foreach url generate count → Join on url with Load Url Info (Map 2 / Reduce 2) → Group by category (Map 3 / Reduce 3) → Foreach category generate top10(urls)
- Every group or join operation forms a map-reduce boundary
- Other operations are pipelined into the map and reduce phases

Optimizations: Using the Combiner
[Diagram: the same map → group-by-key → reduce flow as before.]
- Can pre-process data on the map side to reduce the data shipped
- Algebraic aggregation functions
- Distinct processing
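Partial aggregation with a combiner can be sketched for COUNT, an algebraic function (illustrative Python, not Pig's actual combiner code; the partition layout is invented):

```python
from collections import Counter

def map_side_with_combiner(partition):
    # Combiner: pre-aggregate within one map task, so only one
    # (key, partial_count) pair per distinct key is shipped over the network.
    return list(Counter(partition).items())

def reduce_side(partials):
    # An algebraic function like COUNT/SUM can merge partial results:
    # final count = sum of the per-map partial counts.
    total = Counter()
    for part in partials:
        total.update(dict(part))
    return dict(total)

partitions = [["cnn.com", "cnn.com", "bbc.com"],   # map task 1's input
              ["cnn.com", "bbc.com", "bbc.com"]]   # map task 2's input
shipped = [map_side_with_combiner(p) for p in partitions]
counts = reduce_side(shipped)
```

Here 6 input records shrink to 4 shipped pairs; with realistic key skew the savings are far larger.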

Optimizations: Skew Join
- Default join method is symmetric hash join; the cross-product for each key is carried out on 1 reducer
- Problem if there are too many values with the same key
- Skew join samples the data to find frequent values, and further splits them among reducers
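The sampling idea can be sketched as follows (illustrative Python; the threshold heuristic is invented here, and Pig's actual skew-join planner is more involved):

```python
from collections import Counter

def plan_skew_join(sampled_keys, num_reducers, per_reducer_threshold):
    # Sample the join keys, find keys whose frequency would overload a
    # single reducer, and spread those keys over several reducers.
    freq = Counter(sampled_keys)
    plan = {}
    for key, n in freq.items():
        # key -> how many reducers its rows are split across
        plan[key] = min(num_reducers, max(1, n // per_reducer_threshold))
    return plan

sampled_keys = ["Lakers"] * 1000 + ["Kings"] * 10
plan = plan_skew_join(sampled_keys, num_reducers=4, per_reducer_threshold=100)
```

Rows for a split key go to one of its reducers; the other join input's rows for that key are replicated to all of them, so every pair still meets exactly once.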

Optimizations: Fragment-Replicate Join
- Symmetric-hash join repartitions both inputs
- If size(data set 1) >> size(data set 2):
– Just replicate data set 2 to all partitions of data set 1
- Translates to a map-only job
– Open data set 2 as a “side file”
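A sketch of the map-only strategy (illustrative Python; real Pig ships the small input as a side file to every map task rather than sharing a hash table in memory):

```python
def fragment_replicate_join(big_partitions, small):
    # Build a hash table over the small, replicated input once per task.
    small_by_key = {}
    for t in small:
        small_by_key.setdefault(t[0], []).append(t)
    out = []
    # Each partition of the big input is processed by an independent
    # map task; no shuffle or reduce phase is needed.
    for partition in big_partitions:
        for t in partition:
            for s in small_by_key.get(t[0], []):
                out.append(t + s[1:])
    return out

big = [[("cnn.com", "Amy"), ("bbc.com", "Amy")],   # partition 1
       [("cnn.com", "Fred")]]                      # partition 2
small = [("cnn.com", "News"), ("bbc.com", "News")]
joined = fragment_replicate_join(big, small)
```

The trade-off: no repartitioning of the big input, at the cost of copying the small input to every task, which only pays off when it really is small.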

Optimizations: Merge Join
- Exploits the case where the data sets are already sorted on the join key
- Again a map-only job
– Open the other data set as a “side file”
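A sort-merge join over two pre-sorted inputs can be sketched as follows (illustrative Python, not Pig's implementation; keys are assumed to be the first field):

```python
def merge_join(left, right):
    # Both inputs already sorted on the join key: advance two cursors,
    # emitting matches, with no hashing or repartitioning.
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i][0] < right[j][0]:
            i += 1
        elif left[i][0] > right[j][0]:
            j += 1
        else:
            key = left[i][0]
            # Gather the run of equal keys on each side, then cross them.
            i2 = i
            while i2 < len(left) and left[i2][0] == key:
                i2 += 1
            j2 = j
            while j2 < len(right) and right[j2][0] == key:
                j2 += 1
            for l in left[i:i2]:
                for r in right[j:j2]:
                    out.append(l + r[1:])
            i, j = i2, j2
    return out

left = [("a", 1), ("b", 2), ("b", 3), ("d", 4)]
right = [("a", "x"), ("b", "y"), ("c", "z")]
joined = merge_join(left, right)
```

Each input is read once, in order, which is why the sortedness precondition is worth exploiting.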

Optimizations: Multiple Data Flows
Load Users → Filter bots, then two branches:
– Group by state → apply UDFs → store into ‘bystate’
– Group by demographic → apply UDFs → store into ‘bydemo’
(Map 1 / Reduce 1)

Optimizations: Multiple Data Flows
Same flow with an explicit Split after “Filter bots” and a Demultiplex on the reduce side, letting both branches share a single map-reduce job (Map 1 / Reduce 1).

Other Optimizations
- Carry data as byte arrays as far as possible
- Use a binary comparator for sorting
- “Stream” data through external executables

Performance

Outline
- Map-Reduce and the need for Pig Latin
- Pig Latin
- Compilation into Map-Reduce
- Example Generation
- Future Work

Example Dataflow Program
Find users that tend to visit high-pagerank pages:
- LOAD (user, url); FOREACH user, canonicalize(url)
- LOAD (url, pagerank)
- JOIN on url
- GROUP on user
- FOREACH user, AVG(pagerank)
- FILTER avgPR > 0.5

Iterative Process
Running the dataflow produces no output. Why?
- Bug in UDF canonicalize?
- Joining on the right attribute?
- Everything being filtered out?

How to do test runs?
- Run with real data
– Too inefficient (TBs of data)
- Create smaller data sets (e.g., by sampling)
– Empty results due to joins [Chaudhuri et al. 99] and selective filters
- Biased sampling for joins
– Indexes are not always present

Examples to Illustrate Program
[Diagram: small example tuples shown flowing through each operator of the dataflow, e.g. (Amy, cnn.com) through LOAD, FOREACH, JOIN, GROUP, and AVG, ending with tuples such as (Amy, 0.6) and (Fred, 0.4) at the FILTER.]

Value Addition From Examples
Examples can be used for:
– Debugging
– Understanding a program written by someone else
– Learning a new operator, or language

Good Examples: Consistency
Consistency: output example = operator applied on input example.
[Diagram: example tuples shown at each step of the dataflow.]

Good Examples: Realism
Realism.
[Diagram: the same dataflow annotated with example tuples such as (Amy, cnn.com).]

Good Examples: Completeness
Completeness: demonstrate the salient properties of each operator. E.g., for FILTER avgPR > 0.5: input (Amy, 0.6), (Fred, 0.4) → output (Amy, 0.6).

Good Examples: Conciseness
Conciseness.
[Diagram: the same dataflow with only a handful of example tuples at each operator.]

Implementation Status
- Available as the ILLUSTRATE command in the open-source release of Pig
- Available as an Eclipse plugin (PigPen)
- See the SIGMOD ’09 paper for the algorithm and experiments

Related Work
- Sawzall
– Data processing language on top of map-reduce
– Rigid structure of filtering followed by aggregation
- Hive
– SQL-like language on top of Map-Reduce
- DryadLINQ
– SQL-like language on top of Dryad
- Nested data models
– Object-oriented databases

Future / In-Progress Tasks
- Columnar-storage layer
- Metadata repository
- Profiling and performance optimizations
- Tight integration with a scripting language
– Use loops, conditionals, functions of the host language
- Memory management
- Project suggestions at:

Credits

Summary
- Big demand for parallel data processing
– Emerging tools do not look like SQL DBMSs
– Programmers like dataflow pipes over static files
- Hence the excitement about Map-Reduce
- But Map-Reduce is too low-level and rigid
- Pig Latin: a sweet spot between map-reduce and SQL