Design of Pig B. Ramamurthy

Pig’s data model
Scalar types: int, long, float (present in early versions; float has recently been dropped), double, chararray, bytearray.
Complex types:
Map: a chararray-to-element mapping, where the value can be any Pig type. The map constant ['name'#'bob', 'age'#55] creates a map with two keys, name and age; the first value is a chararray and the second is an integer.
Tuple: a fixed-length, ordered collection of Pig data elements, equivalent to a row in SQL. Because a tuple is ordered, you can refer to its elements by field position. ('bob', 55) is a tuple with two fields.
Bag: an unordered collection of tuples; you cannot reference a tuple in a bag by position. E.g. {('bob',55), ('sally',52), ('john',25)} is a bag with 3 tuples. Bags may become large and may spill from memory to disk.
Null: unknown or missing data; any data element can be null. (This is not Java's null pointer; the meaning is different in Pig.)
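As a small illustration (the file name 'players' and its fields are assumptions, not from the slides), all of these types can be declared directly in a load statement:

players = load 'players' as (name:chararray,
                             stats:map[],                                -- e.g. ['team'#'mets', 'age'#30]
                             position:tuple(x:int, y:int),               -- fixed-length, ordered
                             games:bag{g:(gdate:chararray, score:int)}); -- unordered collection of tuples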

Pig schema
Pig is very relaxed with respect to schemas: the schema is declared at the time you load the data (see Table 4-1). Runtime declaration of schemas is really convenient; you can operate without metadata. On the other hand, metadata can be stored in a repository such as HCatalog and reused, for example for JSON-formatted input. Pig is gently typed: it sits between Java and Perl at the two extremes.

Schema Definition
divs = load 'NYSE_dividends' as (exchange:chararray, symbol:chararray, date:chararray, dividend:double);
Or, if you are lazy:
divs = load 'NYSE_dividends' as (exchange, symbol, date, dividend);
But what if the input data is really complex, e.g. JSON objects? One can keep a schema in HCatalog (an Apache incubator project), a metadata repository that facilitates reading/loading input data in other formats:
divs = load 'mydata' using HCatLoader();

Pig Latin Basics: keywords, relation names, field names. Keywords are not case sensitive, but relation and field names are! User-defined functions are also case sensitive. Comments are written /* */ (multi-line) or -- (single-line). Each processing step produces a new relation: relation name = data operation. Field names start with a letter.
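A minimal sketch of these rules in action (the file name 'data' and field f1 are hypothetical):

/* LOAD, Load and load are the same keyword ... */
a = load 'data' as (f1:int);  -- ... but relation names are case sensitive
A = filter a by f1 > 0;       -- so 'A' is a different relation from 'a'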

More examples
No Pig schema:
daily = load 'NYSE_daily';
calcs = foreach daily generate $7/100.0, SUBSTRING($0,0,1), $6-$3;
Here - (subtraction) works only on numeric types in Pig.
No-schema filter:
daily = load 'NYSE_daily';
fltrd = filter daily by $6 > $3;
Here > is allowed for numeric, bytearray, or chararray operands; Pig is going to guess the type!
Math (float cast):
daily = load 'NYSE_daily' as (exchange, symbol, date, open, high:float, low:float, close, volume:int, adj_close);
rough = foreach daily generate volume * close; -- will convert to float
Thus the free typing may have unintended consequences; be aware that Pig sometimes guesses wrong. For a more in-depth view, also look at how casts are done in Pig.
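To avoid relying on Pig's guess, you can cast explicitly. A one-line sketch, reusing the NYSE_daily fields above:

rough = foreach daily generate (double)volume * (double)close; -- explicit casts leave nothing to guesswork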

Load (input method)
Pig can easily interface to HBase, reading from it with a using clause:
divs = load 'NYSE_dividends' using HBaseStorage();
divs = load 'NYSE_dividends' using PigStorage();
divs = load 'NYSE_dividends' using PigStorage(','); -- comma-separated input
as clause:
daily = load 'NYSE_daily' as (exchange, symbol, date, open, high, low, close, volume);

Store & dump
The default is PigStorage (it writes tab-separated output):
store processed into '/data/example/processed';
For comma-separated output use:
store processed into '/data/example/processed' using PigStorage(',');
Can write into HBase using HBaseStorage():
store processed into 'processed' using HBaseStorage();
dump is for interactive debugging and prototyping.

Relational operations
These let you transform data by sorting, grouping, joining, projecting, and filtering.
foreach supports an array of expressions; the simplest are constants and field references:
rough = foreach daily generate volume * close;
calcs = foreach daily generate $7/100.0, SUBSTRING($0,0,1), $6-$3;
UDFs (User Defined Functions) can also be used in expressions.
Filter operation (matches takes a regular expression, so symbols starting with CM need 'CM.*'):
syms = filter divs by symbol matches 'CM.*';

Operations (contd.)
The group operation collects together records with the same key:
grpd = group daily by stock; -- output: one record per key, holding the group key and a bag of matching rows
counts = foreach grpd generate group, COUNT(daily);
Can also group by multiple keys:
grpd = group daily by (stock, exchange);
group forces the "reduce" phase of MapReduce. Pig offers mechanisms for addressing data skew and unbalanced use of reducers (we will not worry about this now).
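As a small follow-on sketch, a multi-key group is usually unpacked with flatten (the field names stock and exchange are taken from the slide above):

grpd   = group daily by (stock, exchange);
counts = foreach grpd generate flatten(group) as (stock, exchange), COUNT(daily) as n;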

Order by
Produces a strict total order. Example:
daily = load 'NYSE_daily' as (exchange, symbol, date, close, open, …);
bydate = order daily by date;
bydateandsymbol = order daily by date, symbol;
byclose = order daily by close desc, open;

More functions
The distinct primitive removes duplicate records.
Limit:
divs = load 'NYSE_dividends';
first10 = limit divs 10;
Sample (keeps approximately the given fraction of records):
divs = load 'NYSE_dividends';
some = sample divs 0.1;
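Since the distinct bullet has no example, a one-line sketch using divs as loaded above:

uniq = distinct divs;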

More functions
Parallel (sets the number of reducers):
daily = load 'NYSE_daily';
bysym = group daily by symbol parallel 10; -- 10 reducers
Register, e.g. piggybank.jar (a library of contributed UDFs):
register 'piggybank.jar';
divs = load 'NYSE_dividends';
backwds = foreach divs generate Reverse(symbol);
Also useful: illustrate, describe, …
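For the short name Reverse to resolve, Pig expects either the fully qualified class name or a define; a minimal sketch (the jar path is an assumption about your setup):

register 'piggybank.jar';
define Reverse org.apache.pig.piggybank.evaluation.string.Reverse(); -- alias the UDF's full class name
backwds = foreach divs generate Reverse(symbol);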

How do you use Pig?
To express the logical steps in big data analytics; for prototyping; for domain experts who don't want to learn MapReduce but want to do big data; for a one-time job that probably will not be repeated; for a quick demo of MapReduce capabilities; for discussion of an initial MapReduce design and plan (group, order, etc.); and as an excellent interface to a data warehouse.

Back to Chapter 3

Secondary Sorting
Problem: monitoring sensor activity. Grouping readings by sensor gives, e.g., m1 → [(t1, r80521), …] for each sensor m1 … mx, but within each group the readings will not be in temporal order.
Solution 1: let the reducer do the sort. Problems: in-memory buffering is a potential scalability bottleneck. What if the readings span a long period of time? What if it is a high-frequency sensor? What if we are working with large, complex objects?
Solution 2 (what we did): move the timestamp into the key, i.e. (m1, t1) → [(r80521)]. You must define the sort order for the framework, and you need a custom partitioner so that all keys for the same sensor (mx) are routed to the same reducer. Why is it all right to sort at the infrastructure level?
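For comparison, Pig expresses a per-group sort with a nested order inside foreach, hiding the key design and partitioner; a hedged sketch with assumed file and field names:

readings = load 'sensor_readings' as (sensor:chararray, t:long, reading:chararray);
bysensor = group readings by sensor;
ordered  = foreach bysensor {
    sorted = order readings by t; -- sort each sensor's bag by timestamp
    generate group, sorted;
};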

Data Warehousing
A popular application of Hadoop (remember Hive). A data warehouse is a vast repository of data and the foundation for business intelligence (BI). It stores semi-structured as well as unstructured data. Problem: how do we implement relational joins?

Relational joins
Relation S: (k1, s1, S1), (k2, s2, S2), (k3, s3, S3). Relation T: (k1, t1, T1), (k2, t2, T2), (k3, t3, T3). Here k is the join key, s/t is the tuple id, and S/T are the attributes of the tuple.
Example: S is a collection of user profiles, where k is the user id and the tuple carries demographic info (age, gender, income, etc.); T is an online log of those users' activity: page views, money spent, time spent on a page, etc. Joining S and T helps in determining, say, spending habits by demographics.

Join Solutions
Reduce-side join: simply map over both relations and emit (join key, tuple) pairs for the reducer to work with. A one-to-one join is not a lot of work for the reducer; one-to-many and many-to-many joins require more buffering.
Map-side join: read both relations in the map phase and let the Hadoop infrastructure do the sorting/joining.
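For contrast, a hedged Pig Latin sketch of the same two strategies (file and field names are assumptions): the default join compiles to a reduce-side join, while using 'replicated' requests a map-side (fragment-replicate) join that caches the smaller relation in memory:

S = load 'profiles' as (k:chararray, demographics:chararray);
T = load 'activity' as (k:chararray, pageviews:int);
jnd_reduce = join S by k, T by k;                    -- reduce-side join (default)
jnd_map    = join S by k, T by k using 'replicated'; -- map-side; T must fit in memory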