Lecture 3 – Hadoop Technical Introduction


Lecture 3 – Hadoop Technical Introduction CSE 490H

Announcements
- My office hours: M 2:30-3:30 in CSE 212
- Cluster is operational; instructions in assignment 1 heavily rewritten
- Eclipse plugin is "deprecated"
- Students who already created accounts: let me know if you have trouble

Breaking news!
- Hadoop tested on a 4,000-node cluster
- 32K cores (8 per node)
- 16 PB raw storage (4 x 1 TB disks per node), about 5 PB usable
- http://developer.yahoo.com/blogs/hadoop/2008/09/scaling_hadoop_to_4000_nodes_a.html

You Say, "tomato…"

Google calls it:    Hadoop equivalent:
MapReduce           Hadoop
GFS                 HDFS
Bigtable            HBase
Chubby              ZooKeeper

Some MapReduce Terminology
- Job – a "full program": an execution of a Mapper and Reducer across a data set
- Task – an execution of a Mapper or a Reducer on a slice of data; a.k.a. Task-In-Progress (TIP)
- Task Attempt – a particular instance of an attempt to execute a task on a machine

Terminology Example
- Running "Word Count" across 20 files is one job
- 20 files to be mapped imply 20 map tasks + some number of reduce tasks
- At least 20 map task attempts will be performed… more if a machine crashes, etc.

Task Attempts
- A particular task will be attempted at least once, possibly more times if it crashes
  - If the same input causes crashes over and over, that input will eventually be abandoned
- Multiple attempts at one task may occur in parallel with speculative execution turned on
  - Task ID from TaskInProgress is not a unique identifier; don't use it that way

MapReduce: High Level

Node-to-Node Communication
- Hadoop uses its own RPC protocol
- All communication begins in slave nodes
  - Prevents circular-wait deadlock
  - Slaves periodically poll for "status" message
- Classes must provide explicit serialization

Nodes, Trackers, Tasks
- Master node runs JobTracker instance, which accepts Job requests from clients
- TaskTracker instances run on slave nodes
  - TaskTracker forks separate Java process for task instances

Job Distribution
- MapReduce programs are contained in a Java "jar" file + an XML file containing serialized program configuration options
- Running a MapReduce job places these files into the HDFS and notifies TaskTrackers where to retrieve the relevant program code
- … Where's the data distribution?

Data Distribution
- Implicit in design of MapReduce!
- All mappers are equivalent, so map whatever data is local to a particular node in HDFS
- If lots of data does happen to pile up on the same node, nearby nodes will map instead
- Data transfer is handled implicitly by HDFS

Configuring With JobConf
- MR programs have many configurable options
- JobConf objects hold (key, value) components mapping String → 'a
  - e.g., "mapred.map.tasks" → 20
- JobConf is serialized and distributed before running the job
- Objects implementing JobConfigurable can retrieve elements from a JobConf (sketch below)
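A minimal sketch of both sides of this, using the classic org.apache.hadoop.mapred API; the "wordcount.case.sensitive" key is a made-up example, not a Hadoop built-in:

```java
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;

public class JobConfSketch {

  // JobConfigurable implementors get the deserialized JobConf back via
  // configure(); MapReduceBase provides a no-op default we override here.
  public static class CaseAwareBase extends MapReduceBase {
    protected boolean caseSensitive;

    @Override
    public void configure(JobConf job) {
      // Read our user-defined key back out on the worker side.
      caseSensitive = job.getBoolean("wordcount.case.sensitive", true);
    }
  }

  public static void main(String[] args) {
    JobConf conf = new JobConf();
    conf.setNumMapTasks(20);                        // same effect as "mapred.map.tasks" -> 20
    conf.set("wordcount.case.sensitive", "false");  // arbitrary user-defined entry
  }
}
```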

What Happens In MapReduce? Depth First

Job Launch Process: Client
- Client program creates a JobConf
- Identify classes implementing Mapper and Reducer interfaces
  - JobConf.setMapperClass(), setReducerClass()
- Specify inputs, outputs
  - FileInputFormat.addInputPath(), FileOutputFormat.setOutputPath()
- Optionally, other options too: JobConf.setNumReduceTasks(), JobConf.setOutputFormat()…
- A complete driver sketch follows below
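Putting the client steps together, a minimal word-count driver might look like this (old org.apache.hadoop.mapred API; WordCountMapper and WordCountReducer are illustrative names, sketched later in these slides):

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class WordCountDriver {
  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf(WordCountDriver.class);
    conf.setJobName("wordcount");

    // Identify the Mapper and Reducer implementations.
    conf.setMapperClass(WordCountMapper.class);    // hypothetical, sketched below
    conf.setReducerClass(WordCountReducer.class);  // hypothetical, sketched below

    // Declare the (final) output key/value types.
    conf.setOutputKeyClass(Text.class);
    conf.setOutputValueClass(IntWritable.class);

    // Specify inputs and outputs.
    FileInputFormat.addInputPath(conf, new Path(args[0]));
    FileOutputFormat.setOutputPath(conf, new Path(args[1]));

    // Optional knobs.
    conf.setNumReduceTasks(2);

    JobClient.runJob(conf);  // blocks until the job completes
  }
}
```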

Job Launch Process: JobClient
- Pass JobConf to JobClient.runJob() or submitJob()
  - runJob() blocks, submitJob() does not
- JobClient:
  - Determines proper division of input into InputSplits
  - Sends job data to master JobTracker server

Job Launch Process: JobTracker
- Inserts jar and JobConf (serialized to XML) in shared location
- Posts a JobInProgress to its run queue

Job Launch Process: TaskTracker
- TaskTrackers running on slave nodes periodically query JobTracker for work
- Retrieve job-specific jar and config
- Launch task in separate instance of Java
  - main() is provided by Hadoop

Job Launch Process: Task
- TaskTracker.Child.main():
  - Sets up the child TaskInProgress attempt
  - Reads XML configuration
  - Connects back to necessary MapReduce components via RPC
  - Uses TaskRunner to launch user process

Job Launch Process: TaskRunner
- TaskRunner, MapTaskRunner, MapRunner work in a daisy-chain to launch your Mapper
  - Task knows ahead of time which InputSplits it should be mapping
  - Calls Mapper once for each record retrieved from the InputSplit
- Running the Reducer is much the same

Creating the Mapper
- You provide the instance of Mapper
  - Should extend MapReduceBase
- One instance of your Mapper is initialized by the MapTaskRunner for a TaskInProgress
  - Exists in separate process from all other instances of Mapper – no data sharing!

Mapper

  void map(K1 key,
           V1 value,
           OutputCollector<K2, V2> output,
           Reporter reporter)

- K types implement WritableComparable
- V types implement Writable
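A concrete example of this signature: the word-count Mapper below is a sketch under the old org.apache.hadoop.mapred API (class name illustrative):

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

// K1 = byte offset into the file, V1 = one line of text;
// K2 = word, V2 = count of 1 for each occurrence.
public class WordCountMapper extends MapReduceBase
    implements Mapper<LongWritable, Text, Text, IntWritable> {

  private static final IntWritable ONE = new IntWritable(1);
  private final Text word = new Text();  // reused across calls to avoid allocation

  public void map(LongWritable key, Text value,
      OutputCollector<Text, IntWritable> output, Reporter reporter)
      throws IOException {
    StringTokenizer itr = new StringTokenizer(value.toString());
    while (itr.hasMoreTokens()) {
      word.set(itr.nextToken());
      output.collect(word, ONE);
    }
  }
}
```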

What is Writable?
- Hadoop defines its own "box" classes for strings (Text), integers (IntWritable), etc.
- All values are instances of Writable
- All keys are instances of WritableComparable
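Beyond the built-in box classes, you can define your own value types. A minimal sketch of a custom Writable (the PointWritable name and fields are invented for illustration; to serve as a key it would also need to implement WritableComparable):

```java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.Writable;

// Hypothetical value type: a 2-D point. Writable requires symmetric
// write()/readFields() methods for Hadoop's explicit serialization.
public class PointWritable implements Writable {
  private int x;
  private int y;

  public PointWritable() {}  // Hadoop needs a no-arg constructor to deserialize

  public PointWritable(int x, int y) {
    this.x = x;
    this.y = y;
  }

  public void write(DataOutput out) throws IOException {
    out.writeInt(x);
    out.writeInt(y);
  }

  public void readFields(DataInput in) throws IOException {
    x = in.readInt();
    y = in.readInt();
  }
}
```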

Getting Data To The Mapper

Reading Data
- Data sets are specified by InputFormats
  - Defines input data (e.g., a directory)
  - Identifies partitions of the data that form an InputSplit
  - Factory for RecordReader objects to extract (k, v) records from the input source

FileInputFormat and Friends
- TextInputFormat – treats each '\n'-terminated line of a file as a value
- KeyValueTextInputFormat – maps '\n'-terminated text lines of "k SEP v"
- SequenceFileInputFormat – binary file of (k, v) pairs with some add'l metadata
- SequenceFileAsTextInputFormat – same, but maps (k.toString(), v.toString())
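Selecting one of these is a one-line call on the JobConf; a sketch (for KeyValueTextInputFormat, SEP defaults to a tab character):

```java
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.KeyValueTextInputFormat;

public class InputFormatChoice {
  public static void main(String[] args) {
    JobConf conf = new JobConf(InputFormatChoice.class);
    // Parse each "key<TAB>value" line instead of the default (offset, line).
    conf.setInputFormat(KeyValueTextInputFormat.class);
  }
}
```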

Filtering File Inputs
- FileInputFormat will read all files out of a specified directory and send them to the mapper
- Delegates filtering this file list to a method subclasses may override
  - e.g., create your own "xyzFileInputFormat" to read *.xyz from directory list (sketch below)
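A hedged sketch of that idea: a TextInputFormat subclass that keeps only *.xyz files. This assumes the listStatus(JobConf) hook of later old-API releases; the exact name of the overridable method varies by Hadoop version (earlier releases used a listPaths method instead):

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.TextInputFormat;

// Hypothetical "XyzFileInputFormat": accepts only paths ending in ".xyz".
public class XyzFileInputFormat extends TextInputFormat {
  @Override
  protected FileStatus[] listStatus(JobConf job) throws IOException {
    List<FileStatus> accepted = new ArrayList<FileStatus>();
    for (FileStatus stat : super.listStatus(job)) {
      if (stat.getPath().getName().endsWith(".xyz")) {
        accepted.add(stat);
      }
    }
    return accepted.toArray(new FileStatus[accepted.size()]);
  }
}
```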

Record Readers
- Each InputFormat provides its own RecordReader implementation
  - Provides (unused?) capability multiplexing
- LineRecordReader – reads a line from a text file
- KeyValueRecordReader – used by KeyValueTextInputFormat

Input Split Size
- FileInputFormat will divide large files into chunks
  - Exact size controlled by mapred.min.split.size
- RecordReaders receive file, offset, and length of chunk
- Custom InputFormat implementations may override split size – e.g., "NeverChunkFile" (sketch below)
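A sketch of the "NeverChunkFile" idea: overriding isSplitable() in the old FileInputFormat API forces each file into exactly one InputSplit (class name illustrative):

```java
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.TextInputFormat;

// Hypothetical input format that never splits a file:
// each input file becomes exactly one InputSplit (one map task).
public class NeverChunkFileInputFormat extends TextInputFormat {
  @Override
  protected boolean isSplitable(FileSystem fs, Path file) {
    return false;
  }
}
```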

Sending Data To Reducers
- Map function receives OutputCollector object
  - OutputCollector.collect() takes (k, v) elements
- Any (WritableComparable, Writable) can be used
- By default, mapper output type assumed to be same as reducer output type
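When the mapper's output types differ from the reducer's final output types, they must be declared explicitly in the driver; a sketch:

```java
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;

public class MapOutputTypes {
  public static void main(String[] args) {
    JobConf conf = new JobConf(MapOutputTypes.class);
    // Final (reducer) output types:
    conf.setOutputKeyClass(Text.class);
    conf.setOutputValueClass(Text.class);
    // Intermediate (mapper) output types, if different from the above:
    conf.setMapOutputKeyClass(Text.class);
    conf.setMapOutputValueClass(IntWritable.class);
  }
}
```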

WritableComparator
- Compares WritableComparable data
  - Will call WritableComparable.compare()
  - Can provide fast path for serialized data
- JobConf.setOutputValueGroupingComparator()
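A hedged sketch of a grouping comparator: this invented example treats Text keys with the same first character as equal, so one reduce() call sees all of their values:

```java
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.io.WritableComparator;

// Hypothetical grouping comparator over Text keys.
public class FirstCharGroupingComparator extends WritableComparator {
  protected FirstCharGroupingComparator() {
    super(Text.class, true);  // true: instantiate keys for deserialized compares
  }

  @Override
  @SuppressWarnings("rawtypes")
  public int compare(WritableComparable a, WritableComparable b) {
    String left = a.toString();
    String right = b.toString();
    char lc = left.isEmpty() ? '\0' : left.charAt(0);
    char rc = right.isEmpty() ? '\0' : right.charAt(0);
    return lc - rc;  // group solely by first character
  }
}

// In the driver:
//   conf.setOutputValueGroupingComparator(FirstCharGroupingComparator.class);
```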

Sending Data To The Client
- Reporter object sent to Mapper allows simple asynchronous feedback
  - incrCounter(Enum key, long amount)
  - setStatus(String msg)
- Allows self-identification of input split
  - InputSplit getInputSplit()
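A sketch of Reporter use inside a mapper; the Records counter enum and the class name are invented for illustration:

```java
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class CountingMapper extends MapReduceBase
    implements Mapper<LongWritable, Text, Text, LongWritable> {

  enum Records { PROCESSED, MALFORMED }  // hypothetical counter group

  public void map(LongWritable key, Text value,
      OutputCollector<Text, LongWritable> output, Reporter reporter)
      throws IOException {
    if (value.toString().isEmpty()) {
      reporter.incrCounter(Records.MALFORMED, 1);  // async feedback to client
      return;
    }
    reporter.incrCounter(Records.PROCESSED, 1);
    reporter.setStatus("processing " + reporter.getInputSplit());  // which split am I?
    output.collect(value, new LongWritable(1));
  }
}
```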

Partition And Shuffle

Partitioner
- int getPartition(key, val, numPartitions)
  - Outputs the partition number for a given key
  - One partition == values sent to one Reduce task
- HashPartitioner used by default
  - Uses key.hashCode() to return partition num
- JobConf sets Partitioner implementation (sketch below)
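A hedged sketch of a custom Partitioner (invented example: route keys by their first character, so all words starting with the same letter reach the same reducer):

```java
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.Partitioner;

// Hypothetical partitioner over (Text, IntWritable) map outputs.
public class FirstCharPartitioner implements Partitioner<Text, IntWritable> {

  public void configure(JobConf job) {}  // no per-job setup needed

  public int getPartition(Text key, IntWritable value, int numPartitions) {
    String s = key.toString();
    int c = s.isEmpty() ? 0 : s.charAt(0);
    return (c & Integer.MAX_VALUE) % numPartitions;  // same trick as HashPartitioner
  }
}

// In the driver: conf.setPartitionerClass(FirstCharPartitioner.class);
```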

Reduction

  reduce(K2 key,
         Iterator<V2> values,
         OutputCollector<K3, V3> output,
         Reporter reporter)

- Keys & values sent to one partition all go to the same reduce task
- Calls are sorted by key – "earlier" keys are reduced and output before "later" keys
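The word-count Reducer matching this signature, again a sketch under the old org.apache.hadoop.mapred API (class name illustrative):

```java
import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

public class WordCountReducer extends MapReduceBase
    implements Reducer<Text, IntWritable, Text, IntWritable> {

  public void reduce(Text key, Iterator<IntWritable> values,
      OutputCollector<Text, IntWritable> output, Reporter reporter)
      throws IOException {
    int sum = 0;
    while (values.hasNext()) {  // all values for this key arrive together
      sum += values.next().get();
    }
    output.collect(key, new IntWritable(sum));
  }
}
```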

Finally: Writing The Output

OutputFormat
- Analogous to InputFormat
- TextOutputFormat – writes "key val\n" strings to output file
- SequenceFileOutputFormat – uses a binary format to pack (k, v) pairs
- NullOutputFormat – discards output
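As with input formats, the choice is a one-line call on the JobConf; a sketch:

```java
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.SequenceFileOutputFormat;

public class OutputFormatChoice {
  public static void main(String[] args) {
    JobConf conf = new JobConf(OutputFormatChoice.class);
    // Pack (k, v) pairs in Hadoop's binary SequenceFile format
    // instead of the default text lines.
    conf.setOutputFormat(SequenceFileOutputFormat.class);
  }
}
```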

Questions?