CMSC 491 Hadoop-Based Distributed Computing Spring 2015 Adam Shook


Hadoop Overview CMSC 491 Hadoop-Based Distributed Computing Spring 2015 Adam Shook

Agenda Why Hadoop? HDFS MapReduce

Why Hadoop?

The V’s of Big Data! Volume, Velocity, Variety, Veracity. Volume: need to manage lots of data; social enterprises and the Industrial Internet generate lots of it. Velocity: data is coming in at an extremely fast rate, and responses are needed in milliseconds. Variety: today’s agile businesses require rapid response to changing requirements; fixed schemas are too rigid, 'emerging' schemas are needed, and the data may not fit the traditional 'relational' and 'transactional' data model of an RDBMS. Veracity: lots of uncertainty in the data due to inconsistency, ambiguity, latency, and model approximations.

Data Sources Social Media Web Logs Video Networks Sensors Transactions (banking, etc.) E-mail, Text Messaging Paper Documents

Value in all of this! Fraud Detection Predictive Models Recommendations Analyzing Threats Market Analysis Others!

How do we extract value? Monolithic computing! Build bigger, faster computers! A limited solution: expensive, and it does not scale as data volume increases.

Enter Distributed Computing Processing is distributed across many computers Distribute the workload to powerful compute nodes with some separate storage

New Class of Challenges This approach does not scale gracefully. Hardware failure is common. Sorting, combining, and analyzing data spread across thousands of machines is not easy.

RDBMS is still alive! SQL is still very relevant, with many complex business rules mapping to a relational model. But there is much more that can be done by leaving relational behind and looking at all of your data. NoSQL can expand what you have, but it has its disadvantages.

NoSQL Downsides Increased middle-tier complexity Constraints on query capability No standard query semantics Complex to set up and maintain

An Ideal Cluster Linear horizontal scalability Analytics run in isolation Simple API with multiple language support Available in spite of hardware failure

Enter Hadoop Hits these major requirements Two core pieces Distributed File System (HDFS) Flexible analytic framework (MapReduce) Many ecosystem components to expand on what core Hadoop offers

Scalability Near-linear horizontal scalability Clusters are built on commodity hardware Component failure is an expectation

Data Access Moving data from storage to a processor is expensive Store data and process the data on the same machines Process data intelligently by being local

Disk Performance Disk technology has made significant advancements. Take advantage of multiple disks in parallel: 1 disk, 3 TB of data at 300 MB/s takes ~2.5 hours to read; 1,000 disks with the same data take ~10 seconds. Distribution of data and co-location of processing makes this a reality.

Complex Processing Code Hadoop framework abstracts complex distributed computing environment No synchronization code No networking code No I/O code MapReduce developer focuses on the analysis Job runs the same on one node or 4,000 nodes!

Fault Tolerance Failure is inevitable, and it is planned for Component failure is automatically detected and seamlessly handled System continues to operate as expected with minimal degradation

Hadoop History Spun off of Nutch, an open source web search engine Google Whitepapers GFS MapReduce Nutch re-architected to birth Hadoop Hadoop is very mainstream today

Hadoop Versions Hadoop 2.x recently became stable. Two Java APIs, referred to as the old and new APIs. Two task management systems: MapReduce v1 (JobTracker, TaskTracker) and MapReduce v2, a YARN application in which MR runs in containers managed by the YARN framework.

HDFS

Hadoop Distributed File System Inspired by Google File System (GFS) High performance file system for storing data Relatively simple centralized management Fault tolerance through data replication Optimized for MapReduce processing Exposing data locality Linearly scalable

Hadoop Distributed File System Use of commodity hardware Files are write once, read many Leverages large streaming reads rather than random reads Favors high throughput over low latency Modest number of huge files

HDFS Architecture Split large files into blocks Distribute and replicate blocks to nodes Two key services: a master NameNode and many DataNodes A Backup/Checkpoint NameNode for HA The user interacts with it much like a UNIX file system

NameNode Single master service for HDFS Was a single point of failure Stores file-to-block-to-location mappings in a namespace All transactions are logged to disk Can recover based on checkpoints of the namespace and transaction logs

NameNode Memory HDFS prefers fewer, larger files as a result of the metadata overhead. Consider 1 GB of data with a 64 MB block size. Stored as one file: name = 1 item, blocks = 16 * 3 = 48 items, total = 49 items. Stored as 1,024 1 MB files: names = 1,024 items, blocks = 1,024 * 3 = 3,072 items, total = 4,096 items. Each item is around 200 bytes.

Checkpoint Node (Secondary NN) Performs checkpoints of the namespace and logs Not a hot backup! HDFS 2.0 introduced NameNode HA Active and a Standby NameNode service coordinated via ZooKeeper

DataNode Stores blocks on local disk Sends frequent heartbeats to NameNode Sends block reports to NameNode Clients connect to DataNodes for I/O

How HDFS Works - Writes Client contacts the NameNode with a request to write data. The NameNode responds with the set of DataNodes to write to. The client then connects to those DataNodes and writes the blocks out sequentially.

How HDFS Works - Writes DataNodes replicate the data blocks, orchestrated by the NameNode. After the file is closed, the DataNodes copy blocks among themselves until each block has three replicas (the default replication factor), all coordinated by the NameNode. In the event of a node failure, data can be accessed on other nodes, and the NameNode will re-replicate the affected blocks to other nodes.

How HDFS Works - Reads Client contacts the NameNode with a request to read a file. The NameNode responds with the locations of the blocks. The client connects to the DataNodes and reads the blocks sequentially.

How HDFS Works - Failure If a DataNode fails during a read, the client simply connects to another node serving a replica of that block.

HDFS Blocks Default block size is 64 MB; configurable, with 128 MB and 256 MB being common choices Default replication factor is three; also configurable, but rarely changed Blocks are stored as files on the DataNode's local file system A block on its own cannot be associated with its original HDFS file
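Both settings can also be overridden per file from the Java API. Below is a minimal sketch (not from the slides; the path and values are hypothetical) that creates a file with a 128 MB block size and a replication factor of two using FileSystem.create:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class CreateWithBlockSize {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();   // picks up core-site.xml / hdfs-site.xml
            FileSystem fs = FileSystem.get(conf);

            // create(path, overwrite, bufferSize, replication, blockSize)
            FSDataOutputStream out = fs.create(
                    new Path("/tmp/example.txt"),        // hypothetical path
                    true,                                // overwrite if it already exists
                    4096,                                // write buffer size in bytes
                    (short) 2,                           // replication factor for this file only
                    128L * 1024 * 1024);                 // 128 MB block size for this file only
            out.writeUTF("hello hdfs");
            out.close();
            fs.close();
        }
    }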

Replication Strategy First copy is written to the same node as the client If the client is not part of the cluster, first block goes to a random node Second copy is written to a node on a different rack Third copy is written to a different node on the same rack as the second copy

Data Locality Key in achieving performance MapReduce tasks run as close to data as possible Some terms Local On-Rack Off-Rack

Data Corruption Use of checksums to ensure block integrity The checksum is calculated on read and compared against the checksum recorded when the block was written Fast to calculate and space-efficient If the checksums differ, the client reads the block from another DataNode The corrupted block will be deleted and re-replicated from a non-corrupt replica Each DataNode also periodically runs a background thread that performs this checksum verification, for those blocks that aren't read very often
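For intuition about why this is cheap: HDFS computes a small checksum for every chunk of data it stores (512-byte chunks by default), so the space overhead is only a few bytes per chunk. The toy sketch below (not Hadoop's actual implementation; it uses Java's built-in CRC32 purely for illustration) shows how even a single flipped bit changes the checksum and is therefore detectable:

    import java.util.Random;
    import java.util.zip.CRC32;

    public class ChecksumDemo {
        public static void main(String[] args) {
            byte[] chunk = new byte[512];              // a chunk the size HDFS checksums by default
            new Random(42).nextBytes(chunk);

            CRC32 crc = new CRC32();
            crc.update(chunk);
            System.out.printf("original chunk checksum:  %08x%n", crc.getValue());

            chunk[0] ^= 1;                             // corrupt a single bit
            crc.reset();
            crc.update(chunk);
            System.out.printf("corrupted chunk checksum: %08x%n", crc.getValue());
        }
    }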

Fault Tolerance If no heartbeat is received from DN within a configurable time window, it is considered lost 10 minute default NameNode will: Determine which blocks were on the lost node Locate other DataNodes with valid copies Instruct DataNodes with copies to replicate blocks to other DataNodes

Interacting with HDFS Primary interfaces are the CLI and Java API Web UI for read-only access HFTP provides HTTP/S read-only access WebHDFS provides RESTful read/write FUSE allows HDFS to be mounted as a standard file system Typically used so legacy apps can use HDFS data ‘hdfs dfs -help’ will show CLI usage
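As a small illustration of the Java API (a sketch only; the paths are hypothetical), the FileSystem class can be used to list a directory and read a file back, mirroring 'hdfs dfs -ls' and 'hdfs dfs -cat':

    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ListAndRead {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());

            // List the contents of a directory, like 'hdfs dfs -ls /tmp'
            for (FileStatus status : fs.listStatus(new Path("/tmp"))) {
                System.out.println(status.getPath() + "\t" + status.getLen() + " bytes");
            }

            // Read a file back, like 'hdfs dfs -cat /tmp/example.txt'
            try (BufferedReader reader = new BufferedReader(
                    new InputStreamReader(fs.open(new Path("/tmp/example.txt"))))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    System.out.println(line);
                }
            }
            fs.close();
        }
    }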

Hadoop MapReduce

MapReduce! A programming model for processing data Contains two phases! Map Perform a map function on key/value pairs Reduce Perform a reduce function on key/value groups Groups are created by sorting map output Operations on key/value pairs open the door for very parallelizable algorithms

Hadoop MapReduce Automatic parallelization and distribution of tasks Framework handles scheduling tasks and re-running failed tasks Developer can code many pieces of the puzzle Framework handles a Shuffle and Sort phase between map and reduce tasks Developer need only focus on the task at hand, rather than how to manage where data comes from and where it goes

MRv1 and MRv2 Both manage compute resources, jobs, and tasks Job API the same MRv1 is proven in production JobTracker / TaskTrackers MRv2 is a new application on YARN YARN is a generic platform for developing distributed applications

MRv1 - JobTracker Monitors job and task progress Issues task attempts to TaskTrackers Re-tries failed task attempts Four failed attempts of the same task = one failed job Default scheduler is FIFO; the CapacityScheduler or FairScheduler is preferred Single point of failure for MapReduce

MRv1 – TaskTrackers Runs on same node as DataNodes Sends heartbeats and task reports to JT Configurable number of map and reduce slots Runs map and reduce task attempts in a separate JVM

How MapReduce Works Client submits a job to the JobTracker for processing. The JobTracker uses the job's input to determine where the blocks are located (through the NameNode), and then distributes task attempts to the TaskTrackers. The TaskTrackers run the task attempts, and the output is written back to the DataNodes, distributed and replicated as in normal HDFS operations. Job statistics (not output) are reported back to the client upon job completion.

How MapReduce Works - Failure If a task attempt fails, the JobTracker assigns the task to a different node and the job continues.

Data flow: Map Input -> Map Output -> Reducer Input Groups -> Reducer Output Key/value pairs are used as the input and output of both phases Highly parallelizable paradigm – a very easy choice for data processing on a Hadoop cluster

Example -- Word Count Count the number of times each word is used in a body of text Uses TextInputFormat and TextOutputFormat

    map(byte_offset, line):
        foreach word in line:
            emit(word, 1)

    reduce(word, counts):
        sum = 0
        foreach count in counts:
            sum += count
        emit(word, sum)

Mapper Code

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class WordMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

        private final static IntWritable ONE = new IntWritable(1);
        private Text word = new Text();

        @Override
        public void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // Tokenize the line and emit (word, 1) for every token
            String line = value.toString();
            StringTokenizer tokenizer = new StringTokenizer(line);
            while (tokenizer.hasMoreTokens()) {
                word.set(tokenizer.nextToken());
                context.write(word, ONE);
            }
        }
    }

Shuffle and Sort Each mapper outputs its data to a single file that is logically partitioned by key. Each reducer copies its partition from every mapper over to its local machine (the "shuffle"), then merges and sorts those partitions into a single sorted local file for processing.
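Which reducer a given key goes to is decided by a partitioner. The default behavior hashes the key and takes it modulo the number of reducers; the sketch below (not in the slides) mirrors that default and could be registered via job.setPartitionerClass to customize how keys are spread across reducers:

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Partitioner;

    public class WordPartitioner extends Partitioner<Text, IntWritable> {
        @Override
        public int getPartition(Text key, IntWritable value, int numPartitions) {
            // Mask off the sign bit so the result is always a valid partition index
            return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
        }
    }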

Reducer Code

    import java.io.IOException;

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Reducer;

    public class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            // Sum all of the counts for this word and emit the total
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }
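The slides stop at the mapper and reducer; to actually submit the job, a small driver class is also needed. A minimal sketch (using the class names above, with input and output paths taken from the command line) might look like this:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {
        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "word count");
            job.setJarByClass(WordCount.class);

            job.setMapperClass(WordMapper.class);
            job.setCombinerClass(IntSumReducer.class);  // optional: pre-aggregate counts on the map side
            job.setReducerClass(IntSumReducer.class);

            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);

            // TextInputFormat / TextOutputFormat are the defaults, so only the paths are needed
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));

            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }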

Resources, Wrap-up, etc. http://hadoop.apache.org Supportive community: Hadoop-DC, Data Science MD, Baltimore Hadoop Users Group Plenty of resources available to learn more: books, email lists, blogs Easy to install on a VM and begin exploring