Hadoop: An Industry Perspective Amr Awadallah Founder/CTO, Cloudera, Inc. Massive Data Analytics over the Cloud (MDAC’2010) Monday, April 26 th, 2010.



Amr Awadallah, Cloudera Inc 2 Outline
▪ What is Hadoop?
▪ Overview of HDFS and MapReduce
▪ How Hadoop Augments an RDBMS
▪ Industry Business Needs:
  ▪ Data Consolidation (Structured or Not)
  ▪ Data Schema Agility (Evolve Schema Fast)
  ▪ Query Language Flexibility (Data Engineering)
  ▪ Data Economics (Store More for Longer)
▪ Conclusion

Amr Awadallah, Cloudera Inc 3 What is Hadoop?
▪ A scalable, fault-tolerant distributed system for data storage and processing
▪ Its scalability comes from the marriage of:
  ▪ HDFS: Self-Healing High-Bandwidth Clustered Storage
  ▪ MapReduce: Fault-Tolerant Distributed Processing
▪ Operates on structured and complex data
▪ A large and active ecosystem (many developers and additions like HBase, Hive, Pig, …)
▪ Open source under the Apache License

Amr Awadallah, Cloudera Inc 4 Hadoop History
▪ 2002: Doug Cutting and Mike Cafarella start working on Nutch
▪ 2003-2004: Google publishes the GFS and MapReduce papers
▪ 2004: Cutting adds DFS & MapReduce support to Nutch
▪ 2006: Yahoo! hires Cutting; Hadoop spins out of Nutch
▪ 2007: NY Times converts 4 TB of archives on 100 EC2 instances
▪ 2008: Web-scale deployments at Yahoo!, Facebook, Last.fm
▪ April 2008: Yahoo! runs the fastest sort of a TB: 3.5 minutes over 910 nodes
▪ May 2009:
  ▪ Yahoo! runs the fastest sort of a TB: 62 seconds over 1,460 nodes
  ▪ Yahoo! sorts a PB in 16.25 hours over 3,658 nodes
▪ June 2009, Oct 2009: Hadoop Summit, Hadoop World
▪ September 2009: Doug Cutting joins Cloudera

Amr Awadallah, Cloudera Inc 5 Hadoop Design Axioms
1. System Shall Manage and Heal Itself
2. Performance Shall Scale Linearly
3. Compute Shall Move to Data
4. Simple Core, Modular and Extensible

Amr Awadallah, Cloudera Inc 6 HDFS: Hadoop Distributed File System
▪ Block Size = 64 MB
▪ Replication Factor = 3
▪ Cost/GB is a few ¢/month vs $/month
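As a back-of-the-envelope illustration of those two defaults, here is a minimal Python sketch; the 64 MB block size and 3x replication come from the slide, while the 1 TB example file is our own assumption:

```python
import math

BLOCK_SIZE = 64 * 1024**2  # 64 MB, the HDFS default cited above
REPLICATION = 3            # default replication factor

def block_count(file_bytes, block_size=BLOCK_SIZE):
    # A file is split into fixed-size blocks; the last block may be
    # partial (HDFS does not pad it out to a full 64 MB).
    return math.ceil(file_bytes / block_size)

def raw_bytes_stored(file_bytes, replication=REPLICATION):
    # Every block is stored `replication` times, so raw cluster usage
    # is about 3x the logical file size.
    return file_bytes * replication

one_tb = 1024**4
print(block_count(one_tb))                  # 16384 blocks
print(raw_bytes_stored(one_tb) // 1024**4)  # 3 TB of raw disk
```

The 3x raw-usage multiplier is what makes the compression assumption in the storage-economics slide later in this deck matter so much.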

Amr Awadallah, Cloudera Inc 7 MapReduce: Distributed Processing

Amr Awadallah, Cloudera Inc 8 Apache Hadoop Ecosystem
▪ HDFS (Hadoop Distributed File System)
▪ MapReduce (Job Scheduling/Execution System), with Streaming/Pipes APIs
▪ HBase (key-value store)
▪ Pig (Data Flow)
▪ Hive (SQL)
▪ Avro (Serialization)
▪ ZooKeeper (Coordination)
▪ Sqoop (RDBMS import/export)
▪ BI Reporting and ETL Tools on top

Amr Awadallah, Cloudera Inc 9 Use The Right Tool For The Right Job
▪ Hadoop, when to use:
  ▪ Affordable Storage/Compute
  ▪ Structured or Not (Agility)
  ▪ Resilient Auto Scalability
▪ Relational Databases, when to use:
  ▪ Interactive Reporting (<1 sec)
  ▪ Multistep Transactions
  ▪ Lots of Inserts/Updates/Deletes

Amr Awadallah, Cloudera Inc 10 Typical Hadoop Architecture
Diagram: Data Collection feeds Hadoop (Storage and Batch Processing); Hadoop feeds an OLAP Data Mart serving Business Intelligence for Business Users, and an OLTP Data Store serving an Interactive Application for End Customers; Engineers work against Hadoop directly.

Amr Awadallah, Cloudera Inc 11 Complex Data is Growing Really Fast
▪ Gartner, 2009:
  ▪ Enterprise data will grow 650% in the next 5 years
  ▪ 80% of this data will be unstructured (complex) data
▪ IDC, 2008:
  ▪ 85% of all corporate information is in unstructured (complex) forms
  ▪ Growth of unstructured data (61.7% CAGR) will far outpace that of transactional data

Amr Awadallah, Cloudera Inc 12 Data Consolidation: One Place For All
A single data system to enable processing across the universe of data types.
▪ Complex Data: Documents, Web feeds, System logs, Online forums, SharePoint, Sensor data, Email archives, Images/Video, Web Profiles
▪ Structured ("relational") Data: CRM, Financials, Logistics, Data Marts, Inventory, Sales records, HR records

Amr Awadallah, Cloudera Inc 13 Data Agility: Schema on Read vs Write
▪ Schema-on-Write:
  ▪ Schema must be created before data is loaded.
  ▪ An explicit load operation transforms the data to the internal structure of the database.
  ▪ New columns must be added explicitly before data for those columns can be loaded.
  ▪ Pros: Read is fast; standards/governance.
▪ Schema-on-Read:
  ▪ Data is simply copied to the file store; no special transformation is needed.
  ▪ A SerDe (Serializer/Deserializer) is applied at read time to extract the required columns.
  ▪ New data can start flowing anytime and will appear retroactively once the SerDe is updated to parse it.
  ▪ Pros: Load is fast; evolving schemas/agility.
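The retroactive-columns point can be made concrete with a small Python sketch. The log format and parser functions below are invented for illustration; they stand in for a Hive SerDe:

```python
# Schema-on-read: store raw lines untouched, apply a parser at read time.
RAW_STORE = [
    "2010-04-26 GET /index.html 200",
    "2010-04-26 POST /login 302",
]

def read_with(parse, store=RAW_STORE):
    # The "query": every read simply runs the current parser over raw data.
    return [parse(line) for line in store]

def serde_v1(line):
    # Original parser: only three columns were deemed interesting.
    date, method, path, _status = line.split()
    return {"date": date, "method": method, "path": path}

def serde_v2(line):
    # Updated parser: the status column appears retroactively, for data
    # that was loaded long before anyone asked for that column.
    date, method, path, status = line.split()
    return {"date": date, "method": method, "path": path, "status": int(status)}
```

Nothing is reloaded between `read_with(serde_v1)` and `read_with(serde_v2)`; the new column exists for all historical rows the moment the parser changes, which is exactly the agility the slide contrasts with schema-on-write.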

Amr Awadallah, Cloudera Inc 14 Query Language Flexibility
▪ Java MapReduce: Gives the most flexibility and performance, but a potentially long development cycle (the "assembly language" of Hadoop).
▪ Streaming MapReduce: Lets you develop in any programming language of your choice, at slightly lower performance and flexibility.
▪ Pig: A relatively new language out of Yahoo!, suitable for batch data flow workloads.
▪ Hive: A SQL interpreter on top of MapReduce; also includes a metastore mapping files to their schemas and associated SerDes. Hive also supports User-Defined Functions and pluggable MapReduce streaming functions in any language.
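To show what "any programming language" means for Streaming MapReduce, here is a word-count sketch in Python. Hadoop Streaming would invoke it over stdin/stdout; the `map`/`reduce` command-line convention below is our own, not a Hadoop requirement:

```python
import sys
from itertools import groupby

def split_kv(line):
    # Streaming records are "key<TAB>value" lines.
    key, _, value = line.rstrip("\n").partition("\t")
    return key, value

def mapper(lines):
    # Emits one "word<TAB>1" record per word occurrence.
    for line in lines:
        for word in line.split():
            yield word.lower() + "\t1"

def reducer(lines):
    # Hadoop sorts mapper output by key before the reduce phase, so all
    # records for a word arrive consecutively and groupby can sum them.
    for word, group in groupby(lines, key=lambda l: split_kv(l)[0]):
        total = sum(int(split_kv(l)[1]) for l in group)
        yield word + "\t" + str(total)

if __name__ == "__main__":
    stage = mapper if sys.argv[1:2] == ["map"] else reducer
    for record in stage(sys.stdin):
        print(record)
```

Saved as, say, `wordcount.py`, it can be smoke-tested locally with the same pipe shape the deck shows later: `cat *.txt | python wordcount.py map | sort | python wordcount.py reduce`.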

Amr Awadallah, Cloudera Inc 15 Hive Extensible Data Types
▪ STRUCTs: SELECT mytable.mycolumn.myfield FROM …
▪ MAPs (Hashes): SELECT mytable.mycolumn[mykey] FROM …
▪ ARRAYs: SELECT mytable.mycolumn[5] FROM …
▪ JSON: SELECT get_json_object(mycolumn, objpath)
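For the JSON case, a toy Python stand-in clarifies what the `objpath` argument does. This sketch handles only dotted `$.field` paths; Hive's real `get_json_object` also supports array indexing, which is omitted here:

```python
import json

def get_json_object(column_value, obj_path):
    # Toy re-implementation: walk a "$.a.b" style path through a parsed
    # JSON document and return the value at the end of the path.
    obj = json.loads(column_value)
    for part in obj_path.lstrip("$").strip(".").split("."):
        if part:  # a bare "$" path returns the whole document
            obj = obj[part]
    return obj

row = '{"user": {"name": "amr", "logins": 42}}'
print(get_json_object(row, "$.user.logins"))  # 42
```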

Amr Awadallah, Cloudera Inc 16 Data Economics (Return On Byte)
▪ Return on Byte = value to be extracted from a byte / cost of storing that byte.
▪ If ROB < 1, the data gets buried in the tape wasteland; thus we need cheaper active storage.

Amr Awadallah, Cloudera Inc 17 Case Studies: Hadoop World '09
▪ VISA: Large Scale Transaction Analysis
▪ JP Morgan Chase: Data Processing for Financial Services
▪ China Mobile: Data Mining Platform for Telecom Industry
▪ Rackspace: Cross Data Center Log Processing
▪ Booz Allen Hamilton: Protein Alignment using Hadoop
▪ eHarmony: Matchmaking in the Hadoop Cloud
▪ General Sentiment: Understanding Natural Language
▪ Yahoo!: Social Graph Analysis
▪ Visible Technologies: Real-Time Business Intelligence
▪ Facebook: Rethinking the Data Warehouse with Hadoop and Hive
Slides and Videos at

Amr Awadallah, Cloudera Inc 18 Cloudera Desktop for Hadoop

Amr Awadallah, Cloudera Inc 19 Conclusion
Hadoop is a scalable distributed data processing system which enables:
1. Consolidation (Structured or Not)
2. Data Agility (Evolving Schemas)
3. Query Flexibility (Any Language)
4. Economical Storage (ROB > 1)

Amr Awadallah, Cloudera Inc 20 Amr Awadallah CTO, Cloudera Inc. Online Training Videos and Info: Contact Information

(c) 2008 Cloudera, Inc. or its licensors. "Cloudera" is a registered trademark of Cloudera, Inc. All rights reserved. 1.0

Amr Awadallah, Cloudera Inc 22 MapReduce: The Programming Model
▪ Input such as "To Be Or Not To Be?" is divided into splits (Split 1 … Split N); each Map task reads (docid, text) pairs and emits (word, count) pairs.
▪ The Shuffle sorts the (word, count) pairs and routes all counts for a given word to the same Reduce task, e.g. Be, 5; Be, 12; Be, 7; Be, 6.
▪ Each Reduce task writes an output file of (sorted word, sum of counts) records, e.g. Be, 30.
▪ SQL equivalent: SELECT word, COUNT(1) FROM docs GROUP BY word;
▪ Shell equivalent: cat *.txt | mapper.pl | sort | reducer.pl > out.txt
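The slide's dataflow can be simulated end to end in a few lines of Python. The tokenization below is a simplification (just strip the "?" and lowercase), but the three phases mirror the diagram:

```python
from collections import defaultdict

def map_phase(docs):
    # (docid, text) -> a stream of (word, 1) pairs, one per occurrence.
    for _docid, text in docs:
        for word in text.lower().replace("?", "").split():
            yield word, 1

def shuffle(pairs):
    # Groups all counts emitted for the same word and sorts by key,
    # like the sort between mapper.pl and reducer.pl in the pipeline.
    groups = defaultdict(list)
    for word, count in pairs:
        groups[word].append(count)
    return sorted(groups.items())

def reduce_phase(grouped):
    # (word, [counts]) -> (word, sum of counts): the output file records.
    return [(word, sum(counts)) for word, counts in grouped]

docs = [(1, "To Be Or Not To Be?")]
print(reduce_phase(shuffle(map_phase(docs))))
```

In real Hadoop the three phases run on different machines and the shuffle moves data over the network; the in-memory version only shows the contract between them.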

Amr Awadallah, Cloudera Inc 23 Hadoop High-Level Architecture
▪ Hadoop Client: Contacts the Name Node for data, or the Job Tracker to submit jobs.
▪ Name Node: Maintains the mapping of file blocks to Data Node slaves.
▪ Job Tracker: Schedules jobs across Task Tracker slaves.
▪ Data Node: Stores and serves blocks of data.
▪ Task Tracker: Runs tasks (work units) within a job.
▪ Data Nodes and Task Trackers share the same physical nodes.

Amr Awadallah, Cloudera Inc 24 Economics of Hadoop Storage
▪ Typical Hardware:
  ▪ Two quad-core Nehalems
  ▪ 24 GB RAM
  ▪ 12 x 1 TB SATA disks (JBOD mode, no need for RAID)
  ▪ 1 Gigabit Ethernet card
  ▪ Cost: $5K/node
▪ Effective HDFS Space:
  ▪ ¼ reserved for temp shuffle space, which leaves 9 TB/node
  ▪ 3-way replication leads to 3 TB effective HDFS space/node
  ▪ But assuming 7x compression, that becomes ~20 TB/node
▪ Effective cost per user TB: $250/TB
▪ Other solutions cost in the range of $5K to $100K per user TB
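The slide's arithmetic checks out, noting that it rounds ~21 TB down to ~20 TB and ~$238/TB up to $250/TB. A quick reproduction:

```python
# Per-node numbers from the slide.
raw_tb = 12                            # 12 x 1 TB SATA disks
after_temp = raw_tb * 3 / 4            # 1/4 reserved for shuffle -> 9 TB
effective_hdfs = after_temp / 3        # 3-way replication -> 3 TB
with_compression = effective_hdfs * 7  # assumed 7x compression -> 21 TB

cost_per_node = 5_000
cost_per_user_tb = cost_per_node / with_compression
print(with_compression, round(cost_per_user_tb))  # 21.0 238
```

The 7x compression factor is the softest assumption in the chain; halving it roughly doubles the cost per user TB.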

Amr Awadallah, Cloudera Inc 25 Data Engineering vs Business Intelligence
▪ Business Intelligence:
  ▪ The practice of extracting business numbers to monitor and evaluate the health of the business.
  ▪ Humans make decisions based on these numbers to improve revenues or reduce costs.
▪ Data Engineering:
  ▪ The science of writing algorithms that convert data into money. Alternatively, how to automatically transform data into new features that increase revenues or reduce costs.