Scaling for Large Data Processing: What is Hadoop? HDFS and MapReduce


Outline
- Scaling for Large Data Processing
- What is Hadoop?
- HDFS and MapReduce
- Hadoop Ecosystem
- Hadoop vs RDBMSes
- Conclusion

Current Storage Systems Can't Compute

[Slide diagram: instrumentation and collection (mostly append) feed an ETL grid into an RDBMS (200GB/day) serving interactive apps and ad hoc queries & data mining, plus a storage farm for unstructured data (20TB/day) that mostly goes unconsumed; the filer heads are a bottleneck.]

Why the old architecture failed us:
- Data errors and reprocessing: we encountered data errors that required reprocessing, sometimes long after the fact. "Tape data" was cost-prohibitive to reprocess, so we needed to retain raw data online for long periods.
- Conformation loss: converting data from its raw format to conformed dimensions loses some information. We needed access to the original data to recover lost information whenever needed (e.g., a new browser user agent).
- Shrinking ETL window: the storage filers for raw data became a significant bottleneck as large amounts of data had to be copied to the ETL grid for processing (e.g., 30 hours to process a day's worth of data).
- Ad hoc queries on raw data: we wanted to run ad hoc queries against the original raw event data, but the storage filers only store; they can't compute.
- Data model agility (schema-on-read vs schema-on-write): we wanted to access data even if it had no schema yet.
- Consolidated repository and ubiquitous access: we wanted to eliminate borders and have a single repository where anybody can store, join, and process any of our data.
- Beyond reporting (data-as-product): last but not least, we wanted to process the data in ways that feed directly into the product/business (e.g., email spam filtering, ad/content targeting, collaborative filtering, multimedia processing).

The Solution: A Store-Compute Grid

[Slide diagram: instrumentation and collection (mostly append) feed a storage + computation grid that handles ETL and aggregations, "batch" apps, and ad hoc queries & data mining; aggregates flow into an RDBMS serving interactive apps.]

The solution is to *augment* the current RDBMSes with a "smart" storage/processing system. The original event-level data is kept in this smart storage layer and can be mined as needed. The aggregate data is kept in the RDBMSes for interactive reporting and analytics.

What is Hadoop?

- A scalable, fault-tolerant grid operating system for data storage and processing
- Its scalability comes from the marriage of:
  - HDFS: self-healing, high-bandwidth clustered storage
  - MapReduce: fault-tolerant distributed processing
- Operates on unstructured and structured data
- A large and active ecosystem (many developers and additions like HBase, Hive, Pig, ...)
- Open source under the friendly Apache License: http://wiki.apache.org/hadoop/

Notes: The system is self-healing in the sense that it automatically routes around failure: if a node fails, its workload and data are transparently shifted somewhere else. It is intelligent in the sense that the MapReduce scheduler optimizes for processing to happen on the same node that stores the associated data (or co-located on the same leaf Ethernet switch), and it speculatively executes redundant tasks if certain nodes are detected to be slow. One of the key benefits of Hadoop is the ability to upload any unstructured files without having to "schematize" them first: you can dump any type of data into Hadoop, and the input record readers will abstract it as if it were structured (i.e., schema-on-read vs schema-on-write). Open source allows innovation by partners and customers; it also enables third-party inspection of the source code, which provides assurances on security and product quality. One HDD reads about 75 MB/sec, so 1,000 HDDs read about 75 GB/sec in aggregate; the "head of fileserver" bottleneck is eliminated.

Hadoop History

- 2002-2004: Doug Cutting and Mike Cafarella start working on Nutch
- 2003-2004: Google publishes the GFS and MapReduce papers
- 2004: Cutting adds DFS and MapReduce support to Nutch
- 2006: Yahoo! hires Cutting; Hadoop spins out of Nutch
- 2007: The NY Times converts 4TB of archives on 100 EC2 instances
- 2008: Web-scale deployments at Yahoo!, Facebook, Last.fm
- April 2008: Yahoo! does the fastest sort of a TB: 3.5 minutes over 910 nodes
- May 2009: Yahoo! does the fastest sort of a TB: 62 seconds over 1,460 nodes, and sorts a PB in 16.25 hours over 3,658 nodes (http://developer.yahoo.net/blogs/hadoop/2009/05/hadoop_sorts_a_petabyte_in_162.html)
- June 2009, Oct 2009: Hadoop Summit (750 attendees), Hadoop World (500 attendees)
- September 2009: Doug Cutting joins Cloudera
- 100s of deployments worldwide (http://wiki.apache.org/hadoop/PoweredBy)

Hadoop Design Axioms

- The system shall manage and heal itself (speculative execution, data rebalancing, background checksumming, etc.)
- Performance shall scale linearly
- Compute should move to data
- Simple core, modular and extensible

HDFS: Hadoop Distributed File System

- Pools commodity servers into a single hierarchical namespace
- Designed for large files that are written once and read many times
- Default block size = 64MB; default replication factor = 3

Notes: With a replication factor of 3, each data block is present on at least 3 separate DataNodes. A typical Hadoop node is eight cores with 16GB of RAM and four 1TB SATA disks. The default block size is 64MB, though most folks now set it to 128MB. Cost/GB is a few cents per month, versus dollars per month for traditional filer storage.
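To make the write-once model concrete, here is a minimal sketch of writing a file to HDFS from Java using the org.apache.hadoop.fs API; the NameNode address, file path, and record contents are assumptions for the example:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsWriteExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Assumed NameNode address for this sketch; normally picked up
            // from core-site.xml (fs.default.name).
            conf.set("fs.default.name", "hdfs://namenode:8020");

            FileSystem fs = FileSystem.get(conf);

            // Write once; the file is then read many times.
            Path file = new Path("/data/events/2009-10-01.log");
            FSDataOutputStream out = fs.create(file, true);
            out.writeBytes("example event record\n");
            out.close();

            // Replication can also be set per file (the cluster default is 3).
            fs.setReplication(file, (short) 3);
        }
    }

Behind the scenes, HDFS splits the file into blocks and replicates each block across DataNodes; the client code never deals with blocks directly.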

MapReduce: Distributed Processing

- Differentiate between MapReduce the platform and MapReduce the programming model. The analogy is to an RDBMS, which executes the queries, versus SQL, which is the language for the queries.
- MapReduce can run on top of HDFS or a selection of other storage systems.
- Intelligent scheduling algorithms handle locality, sharing, and resource optimization.

MapReduce Example for Word Count

SQL analogy: SELECT word, COUNT(1) FROM docs GROUP BY word;
Unix analogy: cat *.txt | mapper.pl | sort | reducer.pl > out.txt

[Slide diagram: input splits 1..N feed maps 1..M, which turn (docid, text) pairs into (word, count) pairs; e.g., "To Be Or Not To Be?" yields partial counts such as (Be, 5), (Be, 12), (Be, 7), (Be, 6). The shuffle sorts the words and routes them to reduces 1..R, which sum the partial counts into output files of (sorted word, sum of counts), e.g., (Be, 30).]

Check out ParBASH: http://cloud-dev.blogspot.com/2009/06/introduction-to-parbash.html
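For concreteness, here is what the two halves of this job look like in Java MapReduce, a sketch using the org.apache.hadoop.mapreduce API (the class names are illustrative, not from the slides):

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;

    public class WordCount {

        // Map: (byte offset, line of text) -> (word, 1) for each word in the line.
        public static class TokenizerMapper
                extends Mapper<LongWritable, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();

            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                StringTokenizer it = new StringTokenizer(value.toString());
                while (it.hasMoreTokens()) {
                    word.set(it.nextToken());
                    context.write(word, ONE);
                }
            }
        }

        // Reduce: (word, [1, 1, ...]) -> (word, total count).
        public static class IntSumReducer
                extends Reducer<Text, IntWritable, Text, IntWritable> {
            private final IntWritable result = new IntWritable();

            @Override
            protected void reduce(Text key, Iterable<IntWritable> values,
                                  Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable v : values) {
                    sum += v.get();
                }
                result.set(sum);
                context.write(key, result);
            }
        }
    }

The shuffle between the two is what the framework does for you: it sorts the mappers' (word, 1) pairs and delivers all values for a given word to the same reducer, exactly as the GROUP BY does in the SQL analogy.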

Hadoop High-Level Architecture

- Hadoop Client: contacts the NameNode for data, or the JobTracker to submit jobs
- NameNode: maintains the mapping of file blocks to DataNode slaves
- JobTracker: schedules jobs across TaskTracker slaves
- DataNode: stores and serves blocks of data
- TaskTracker: runs tasks (work units) within a job
- The DataNode slave and the TaskTracker slave can, and should, share the same physical server to leverage data locality whenever possible

Notes: The NameNode and JobTracker are currently single points of failure, which can affect the availability of the system by around 15 minutes (no data loss, though, so the system is reliable but can occasionally suffer downtime). That issue is being addressed by the Apache Hadoop community using ZooKeeper.
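To make the client's role concrete, a minimal job driver might look like the following sketch; it reuses the WordCount classes from the earlier example, and the input/output paths are assumptions:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCountDriver {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            Job job = new Job(conf, "word count");
            job.setJarByClass(WordCountDriver.class);
            job.setMapperClass(WordCount.TokenizerMapper.class);
            // The reducer doubles as a combiner to pre-sum counts on the map side.
            job.setCombinerClass(WordCount.IntSumReducer.class);
            job.setReducerClass(WordCount.IntSumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            // Illustrative HDFS paths.
            FileInputFormat.addInputPath(job, new Path("/data/docs"));
            FileOutputFormat.setOutputPath(job, new Path("/data/wordcounts"));
            // Submits the job to the JobTracker and polls until completion.
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

The driver never talks to individual TaskTrackers: it hands the job to the JobTracker, which schedules map tasks near the blocks the NameNode reports for the input paths.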

Apache Hadoop Ecosystem

[Stack diagram: HDFS (Hadoop Distributed File System) at the bottom; MapReduce (job scheduling/execution system, with Streaming/Pipes APIs) on top of it; Pig (data flow) and Hive (SQL) on top of MapReduce; Sqoop connecting out to ETL tools, BI reporting, and RDBMSes; ZooKeeper (coordination), HBase (key-value store), and Avro (serialization) alongside.]

- HBase: low-latency random access with per-row consistency for updates/inserts/deletes
- Java MapReduce: gives the most flexibility and performance, but with a potentially longer development cycle
- Streaming MapReduce: lets you develop in any language of your choice, with slightly lower performance
- Pig: a relatively new data-flow language (contributed by Yahoo!), suitable for ETL-like workloads (procedural multi-stage jobs)
- Hive: a SQL warehouse on top of MapReduce (contributed by Facebook); translates SQL into MapReduce jobs

Hive features:
- A subset of SQL covering the most common statements
- Agile data types: Array, Map, Struct, and JSON objects
- User-defined functions and aggregates
- Regular expression support
- MapReduce support
- JDBC support
- Partitions and buckets (for performance optimization)
- In the works: indices, columnar storage, views, MicroStrategy compatibility, Explode/Collect
- More details: http://wiki.apache.org/hadoop/Hive

Hive language summary:
- Query: SELECT, FROM, WHERE, JOIN, GROUP BY, SORT BY, LIMIT, DISTINCT, UNION ALL; subqueries in FROM; user-defined functions and aggregates; sampling (TABLESAMPLE)
- Join: LEFT, RIGHT, FULL, OUTER, INNER
- DDL: CREATE TABLE, ALTER TABLE, DROP TABLE, DROP PARTITION, SHOW TABLES, SHOW PARTITIONS
- DML: LOAD DATA INTO, FROM INSERT
- Types: TINYINT, INT, BIGINT, BOOLEAN, DOUBLE, STRING, ARRAY, MAP, STRUCT, JSON OBJECT
- Relational: IS NULL, IS NOT NULL, LIKE, REGEXP
- Built-in aggregates: COUNT, MAX, MIN, AVG, SUM
- Built-in functions: CAST, IF, REGEXP_REPLACE, ...
- Other: EXPLAIN, MAP, REDUCE, DISTRIBUTE BY
- List and map operators: array[i], map[k], struct.field
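Since Hive exposes JDBC, a client can issue HiveQL from Java like any other database. A minimal sketch, assuming a Hive server listening on the conventional port 10000 and an illustrative docs table (the host, table, and query are assumptions, not from the slides):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class HiveJdbcExample {
        public static void main(String[] args) throws Exception {
            // Register the Hive JDBC driver of this era.
            Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver");
            Connection con = DriverManager.getConnection(
                    "jdbc:hive://localhost:10000/default", "", "");
            Statement stmt = con.createStatement();

            // The word-count query from the earlier slide, expressed in HiveQL;
            // Hive translates it into a MapReduce job behind the scenes.
            ResultSet rs = stmt.executeQuery(
                    "SELECT word, COUNT(1) FROM docs GROUP BY word");
            while (rs.next()) {
                System.out.println(rs.getString(1) + "\t" + rs.getLong(2));
            }
            con.close();
        }
    }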

Use the Right Tool for the Right Job

Analogy: a sports car (the RDBMS) is refined, accelerates very fast, and has a lot of add-ons and features, but it is pricey on a per-bit basis and expensive to maintain. A cargo train (Hadoop) is rough, missing a lot of "luxury", and slow to accelerate, but it can carry almost anything, and once it gets going it moves a lot of stuff very economically.

Hadoop: a data grid operating system
- Stores files (unstructured)
- Stores 10s of petabytes
- Processes 10s of PB per job
- Weak consistency
- Scans all blocks in all files
- Queries and data processing
- Batch response (>1 sec)
- When to use: affordable storage/compute; structured or not (agility); resilient auto-scalability

Relational databases: ACID database systems
- Store tables (schema)
- Store 100s of terabytes
- Process 10s of TB per query
- Transactional consistency
- Look up rows using indexes
- Mostly queries
- Interactive response
- When to use: interactive reporting (<1 sec); multistep transactions; interoperability

Hadoop myths:
- "Hadoop MapReduce requires rocket scientists": Hadoop offers the best of both worlds, the simplicity of SQL and the power of Java (or any other language, for that matter).
- "Hadoop is not very efficient hardware-wise": Hadoop optimizes for scalability, stability, and flexibility rather than squeezing every tiny bit of hardware performance. It is more cost-efficient to add "pizza box" servers to gain performance than to hire more engineers to manage, configure, and optimize the system, or to pay 10x the hardware cost in software.
- "Hadoop can't do quick random lookups": HBase enables low-latency key-value pair lookups (though no fast joins).
- "Hadoop doesn't support updates/inserts/deletes": it is not for multi-row transactions, but HBase enables transactions with row-level consistency semantics.
- "Hadoop isn't highly available": though Hadoop rarely loses data, it can suffer downtime if the master NameNode goes down. This issue is being addressed, and there are HW/OS/VM solutions for it.
- "Hadoop can't be backed up/recovered quickly": HDFS, like other file systems, can copy files very quickly. It also has utilities to copy data between HDFS clusters.
- "Hadoop doesn't have security": Hadoop has Unix-style user/group permissions, and the community is working on improving its security model.
- "Hadoop can't talk to other systems": Hadoop can talk to BI tools using JDBC, to RDBMSes using Sqoop, and to other systems using FUSE, WebDAV, and FTP.

Economics of Hadoop

Typical hardware:
- Two quad-core Nehalems
- 24GB RAM
- 12 x 1TB SATA disks (JBOD mode, no need for RAID)
- 1 Gigabit Ethernet card
- Cost: ~$5K/node

Effective HDFS space:
- 1/4 of the 12TB raw is reserved for temp shuffle space, leaving 9TB/node
- 3-way replication leaves 3TB of effective HDFS space per node
- Assuming ~7x compression, that becomes ~20TB of user data per node

Effective cost per user TB: ~$250/TB ($5K per node over ~20TB per node). Other solutions cost in the range of $5K to $100K per user TB.
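The back-of-the-envelope arithmetic behind those numbers, as a small runnable sketch (the figures and the 7x compression factor come from the slide; the slide rounds ~21TB down to ~20TB to get $250/TB):

    // Back-of-the-envelope node economics, figures as given on the slide.
    public class HadoopNodeEconomics {
        public static void main(String[] args) {
            double rawTb = 12 * 1.0;               // 12 x 1TB SATA disks
            double afterShuffleTb = rawTb * 0.75;  // 1/4 reserved for shuffle -> 9TB
            double hdfsTb = afterShuffleTb / 3.0;  // 3-way replication -> 3TB
            double userTb = hdfsTb * 7.0;          // ~7x compression -> ~21TB
            double costPerNode = 5000.0;           // ~$5K/node
            System.out.printf("~%.0f user TB/node, ~$%.0f per user TB%n",
                              userTb, costPerNode / userTb);
            // Prints ~21TB and ~$238/TB; with the slide's ~20TB rounding, ~$250/TB.
        }
    }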

Sample Talks from Hadoop World '09

- VISA: Large Scale Transaction Analysis
- JP Morgan Chase: Data Processing for Financial Services
- China Mobile: Data Mining Platform for Telecom Industry
- Rackspace: Cross Data Center Log Processing
- Booz Allen Hamilton: Protein Alignment using Hadoop
- eHarmony: Matchmaking in the Hadoop Cloud
- General Sentiment: Understanding Natural Language
- Yahoo!: Social Graph Analysis
- Visible Technologies: Real-Time Business Intelligence
- Facebook: Rethinking the Data Warehouse with Hadoop and Hive

Slides and videos at http://www.cloudera.com/hadoop-world-nyc

Cloudera Desktop

Conclusion

Hadoop is a data grid operating system that provides an economically scalable solution for storing and processing large amounts of unstructured or structured data over long periods of time.

Contact Information

Amr Awadallah
CTO, Cloudera Inc.
aaa@cloudera.com
http://twitter.com/awadallah

Online Training Videos and Info:
http://cloudera.com/hadoop-training
http://cloudera.com/blog
http://twitter.com/cloudera