Hadoop implementation of MapReduce computational model Ján Vaňo.


What is MapReduce? A computational model published in a paper by Google in 2004 Based on distributed computation Complements Google's distributed file system (GFS) Works with key:value pairs
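
The model can be sketched in a few lines of Python: a mapper turns each record into key:value pairs, the framework groups values by key (the shuffle), and a reducer folds each group into a result. This is a single-process illustration of the model only, not how Hadoop actually executes it:

```python
from collections import defaultdict

def map_reduce(records, mapper, reducer):
    """Run the map -> shuffle -> reduce phases sequentially in one process."""
    # Map: each input record yields zero or more (key, value) pairs
    intermediate = defaultdict(list)
    for record in records:
        for key, value in mapper(record):
            intermediate[key].append(value)  # Shuffle: group values by key
    # Reduce: each key and its grouped values produce one output value
    return {key: reducer(key, values) for key, values in intermediate.items()}

# Word count, the canonical MapReduce example
def mapper(line):
    for word in line.split():
        yield word, 1

def reducer(word, counts):
    return sum(counts)

result = map_reduce(["big data", "big clusters"], mapper, reducer)
# result == {"big": 2, "data": 1, "clusters": 1}
```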

Why MapReduce? It is the 'answer' to the Big Data problem Runs on commodity hardware Very scalable solution

MapReduce model

MapReduce alternatives Hadoop – top-level Apache project Spark – UC Berkeley project Disco – open-source Nokia project MapReduce-MPI – US Department of Energy project MARIANE – academic project from Binghamton University Phoenix – Stanford University project BashReduce – MapReduce for standard Unix commands

MapReduce vs. RDBMS

Data Structure Structured Data – data organized into entities that have a defined format – the realm of RDBMS Semi-Structured Data – there may be a schema, but it is often ignored; the schema serves only as a guide to the structure of the data Unstructured Data – doesn't have any particular internal structure MapReduce works well with semi-structured and unstructured data.

What is Hadoop? Software platform that lets one easily write and run applications that process vast amounts of data Hadoop is the most popular implementation of MapReduce so far

Why Hadoop? It has been an Apache top-level project for a long time (6 years) Hadoop Ecosystem Hadoop-exclusive technologies

Why Hadoop? Scalable: It can reliably store and process petabytes Economical: It distributes the data and processing across clusters of commonly available computers (in thousands) Efficient: By distributing the data, it can process it in parallel on the nodes where the data is located Reliable: It automatically maintains multiple copies of data and automatically redeploys computing tasks based on failures

Brief history
2002 – Project Nutch started (open-source web search engine) by Doug Cutting
2003 – GFS (Google File System) paper published; implementation of an open-source counterpart started
2004 – Google published the MapReduce paper
2005 – Working implementations of MapReduce and GFS (NDFS) in Nutch; system applicable beyond the realm of search
2006 – Nutch code moved to the new Hadoop project, Doug Cutting joins Yahoo!
2008 – Yahoo!'s production index generated by a 10,000-core Hadoop cluster; Hadoop moved under the Apache Foundation as a top-level project
April 2008 – Hadoop broke the world record for the fastest sorting of 1 TB of data (209 seconds, previously 297)
November 2008 – Google's implementation sorted 1 TB in 68 seconds
May 2009 – a Yahoo! team sorted 1 TB in 62 seconds

1TB sort by Hadoop

Who uses Hadoop?

Assumptions Hardware will fail Processing will be run in batches. Thus there is an emphasis on high throughput as opposed to low latency Applications that run on HDFS have large data sets. A typical file in HDFS is gigabytes to terabytes in size It should provide high aggregate data bandwidth and scale to hundreds of nodes in a single cluster. It should support tens of millions of files in a single instance Applications need a write-once-read-many access model Moving Computation is Cheaper than Moving Data Portability is important

Hadoop modules Hadoop Common - contains libraries and utilities needed by other Hadoop modules Hadoop Distributed File System (HDFS) - a distributed file system that stores data on commodity machines, providing very high aggregate bandwidth across the cluster Hadoop MapReduce - a programming model for large-scale data processing Hadoop YARN - a resource-management platform responsible for managing compute resources in clusters and using them for scheduling users' applications – provides the base for MapReduce v2

Hadoop architecture

HDFS architecture

HDFS architecture (reading)

HDFS architecture (writing)

Data replication in HDFS
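
HDFS's default placement policy for a replication factor of 3 puts the first replica on the writer's node, the second on a node in a different rack, and the third on another node in that same remote rack. A simplified sketch of that policy (rack and node names are made up, and real HDFS picks candidate nodes randomly rather than taking the first):

```python
def place_replicas(writer, racks, replication=3):
    """Sketch of HDFS's default rack-aware block placement.

    writer: (rack, node) where the client runs
    racks:  dict mapping rack name -> list of node names
    """
    writer_rack, writer_node = writer
    replicas = [writer_node]  # 1st replica: the writer's own node
    other_racks = [r for r in racks if r != writer_rack]
    if replication >= 2 and other_racks:
        remote_rack = other_racks[0]
        # 2nd replica: a node on a different rack (guards against rack failure)
        replicas.append(racks[remote_rack][0])
        if replication >= 3 and len(racks[remote_rack]) > 1:
            # 3rd replica: a different node on that same remote rack
            replicas.append(racks[remote_rack][1])
    return replicas

racks = {"rack1": ["n1", "n2"], "rack2": ["n3", "n4"]}
# Writing from n1: replicas land on n1, then two nodes of rack2
assert place_replicas(("rack1", "n1"), racks) == ["n1", "n3", "n4"]
```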

How to use Hadoop MapReduce? Implement 2 basic functions: – Map – Reduce Implement a Driver class
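
Hadoop's native API is Java, but the division of labor between the two functions can also be sketched with Hadoop Streaming, which runs any executable that reads lines on stdin and writes tab-separated key/value pairs on stdout. A minimal word-count pair in Python (the script names and jar path in the trailing comment are illustrative):

```python
def streaming_mapper(lines):
    """Map phase: emit one 'word<TAB>1' pair per word."""
    for line in lines:
        for word in line.split():
            yield f"{word}\t1"

def streaming_reducer(lines):
    """Reduce phase: sum the counts for each word.
    Hadoop delivers the reducer's input sorted by key, so equal
    keys arrive in one contiguous run."""
    current, total = None, 0
    for line in lines:
        word, count = line.rsplit("\t", 1)
        if word != current:
            if current is not None:
                yield f"{current}\t{total}"
            current, total = word, 0
        total += int(count)
    if current is not None:
        yield f"{current}\t{total}"

# In a real streaming job these would be two small scripts reading sys.stdin,
# launched roughly as (paths illustrative):
#   hadoop jar hadoop-streaming.jar -mapper mapper.py -reducer reducer.py \
#       -input in/ -output out/
```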

MapReduce structure

Job submission (MapReduce v1)

Client applications submit jobs to the JobTracker The JobTracker talks to the NameNode to determine the location of the data The JobTracker locates TaskTracker nodes with available slots at or near the data The JobTracker submits the work to the chosen TaskTracker nodes The TaskTracker nodes are monitored. If they do not submit heartbeat signals often enough, they are deemed to have failed and the work is scheduled on a different TaskTracker

Job submission (MapReduce v1) A TaskTracker notifies the JobTracker when a task fails. The JobTracker decides what to do then: it may resubmit the job elsewhere, it may mark that specific record as something to avoid, and it may even blacklist the TaskTracker as unreliable When the work is completed, the JobTracker updates its status Client applications can poll the JobTracker for information
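
The heartbeat and blacklist bookkeeping described above can be modeled in a few lines. The class below is a toy sketch; the timeout and blacklist threshold are made-up values, not Hadoop's actual defaults:

```python
class JobTrackerSketch:
    """Toy model of the JobTracker's heartbeat bookkeeping (MapReduce v1)."""

    def __init__(self, timeout=10.0):
        self.timeout = timeout     # seconds of silence before a tracker is deemed dead
        self.last_heartbeat = {}   # TaskTracker name -> time of last heartbeat
        self.blacklist = set()     # trackers deemed unreliable

    def heartbeat(self, tracker, now):
        self.last_heartbeat[tracker] = now

    def failed_trackers(self, now):
        # Silent trackers are deemed failed; their work is rescheduled elsewhere
        return [t for t, ts in self.last_heartbeat.items()
                if now - ts > self.timeout and t not in self.blacklist]

    def report_task_failures(self, tracker, failures, threshold=4):
        # Repeated task failures can get a TaskTracker blacklisted
        if failures >= threshold:
            self.blacklist.add(tracker)

jt = JobTrackerSketch()
jt.heartbeat("tracker-a", now=0.0)
jt.heartbeat("tracker-b", now=8.0)
# At t = 12 s, tracker-a has been silent for 12 s (> 10 s), tracker-b for only 4 s
assert jt.failed_trackers(now=12.0) == ["tracker-a"]
```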

Job flow (MapReduce v1)

Job flow (MapReduce v2)

Hadoop Ecosystem Apache Avro (serialization system for persistent data) Apache Pig (high-level dataflow querying language) Apache Hive (data warehouse infrastructure)

Hadoop Ecosystem Apache HBase (database for real-time access) Apache Sqoop (tool for moving data between relational databases and Hadoop) Apache ZooKeeper (distributed coordination service providing high availability) – a library for building distributed systems

Hadoop exclusive technologies YARN – Yet Another Resource Negotiator HDFS federation – possibility of partitioning the namespace across several NameNodes to support a high number of files HDFS high availability – techniques for removing the NameNode as the single point of failure

Examples of production use Yahoo!: More than 100,000 CPUs in ~20,000 computers running Hadoop; biggest cluster: 2,000 nodes (2×4-CPU boxes with 4 TB of disk each); used to support research for Ad Systems and Web Search Facebook: To store copies of internal log and dimension data sources and use them as a source for reporting/analytics and machine learning; 320-machine cluster with 2,560 cores and about 1.3 PB raw storage

Size of releases

Hadoop + Framework for applications on large clusters + Built for commodity hardware + Provides reliability and data motion + Implements a computational paradigm named Map/Reduce + Its very own distributed file system (HDFS) (very high aggregate bandwidth across the cluster) + Failures handled automatically

Hadoop - Time-consuming development - Documentation is sufficient, but not the most helpful - HDFS is complicated and has plenty of issues of its own - Debugging a failure is a "nightmare" - Large clusters require a dedicated team to keep them running properly - Writing a Hadoop job becomes a software engineering task rather than a data analysis task