Large-Scale Data Processing / Cloud Computing (大规模数据处理 / 云计算)
Lecture 3 – Hadoop Environment
Peng Bo (彭波), School of Electronics Engineering and Computer Science, Peking University
4/23/2011
http://net.pku.edu.cn/~course/cs402/

This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License; see http://creativecommons.org/licenses/by-nc-sa/3.0/us/ for details. Based on slides by Jimmy Lin, University of Maryland. Course materials developed by the SEWMGroup.
Install Hadoop
Installation

Prerequisites:
- Java SDK 6
- Linux OS (the only supported production platform)

Install:
- Download hadoop-x.y.z.tar.gz (we use version 0.20.2)
- tar xzvf hadoop-x.y.z.tar.gz

Set up environment variables:
export HADOOP_HOME=/home/hadoop
export HADOOP_CONF_DIR=/home/hadoop/hadoop-config
export PATH=$HADOOP_HOME/bin:$PATH
export JAVA_HOME=/usr/lib/jvm/java-6-openjdk
Configuration

Directory: $HADOOP_CONF_DIR, containing:
- core-site.xml
- hdfs-site.xml
- mapred-site.xml

A minimal example of these files is sketched below.
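The slide names the files but not their contents. As a hedged example, a minimal pseudo-distributed (single-node) configuration for 0.20 might look like the following; the localhost addresses and ports are placeholders to adapt:

```xml
<!-- core-site.xml: URI of the default file system -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>

<!-- hdfs-site.xml: one replica per block is enough on a single node -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>

<!-- mapred-site.xml: address the JobTracker listens on -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>
```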
Test the Installation

HDFS:
hadoop fs -ls /

MapReduce (the bundled pi estimator):
hadoop jar $HADOOP_HOME/hadoop-0.20.2-examples.jar pi 10 10000

Web administration (JobTracker UI):
http://jobtracker:50030
Development Environment

- Eclipse 3.3.2 works well with the plugin distributed with hadoop-0.20.2; you had better not try a higher Eclipse version with it.
- Online knowledge base: a compilation of common errors (错误汇总); the article 《hadoop 0.20 程式開發 – eclipse plugin + Makefile》 (Hadoop 0.20 development: Eclipse plugin + Makefile).
“Hello world”
“Hello World”: Word Count

Map(String docid, String text):
  for each word w in text:
    Emit(w, 1);

Reduce(String term, Iterator<Int> values):
  int sum = 0;
  for each v in values:
    sum += v;
  Emit(term, sum);
Basic Hadoop API*

Mapper
- void map(K1 key, V1 value, OutputCollector<K2, V2> output, Reporter reporter)
- void configure(JobConf job)
- void close() throws IOException

Reducer/Combiner
- void reduce(K2 key, Iterator<V2> values, OutputCollector<K3, V3> output, Reporter reporter)
- void configure(JobConf job)
- void close() throws IOException

Partitioner
- int getPartition(K2 key, V2 value, int numPartitions)

*Note: forthcoming API changes…

A sketch of a custom Partitioner follows.
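As an illustration of the Partitioner interface (not code from the slides), here is a minimal custom partitioner against the old org.apache.hadoop.mapred API; WordPartitioner is a hypothetical name, and the body mirrors Hadoop's default HashPartitioner:

```java
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.Partitioner;

public class WordPartitioner implements Partitioner<Text, IntWritable> {

  public void configure(JobConf job) {
    // No per-job setup needed for this simple scheme.
  }

  public int getPartition(Text key, IntWritable value, int numPartitions) {
    // Mask off the sign bit so the index is always in [0, numPartitions).
    return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
  }
}
```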
Data Types in Hadoop

Writable: defines a de/serialization protocol. Every data type in Hadoop is a Writable.
WritableComparable: defines a sort order. All keys must be of this type (but not values).
IntWritable, LongWritable, Text, …: concrete classes for the common data types.
SequenceFile: a binary encoding of a sequence of key/value pairs.

A sketch of a custom key type follows.
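To show the de/serialization protocol and the sort order together, here is a hedged sketch of a custom key type; IntPair is a hypothetical class, not one from the slides:

```java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.WritableComparable;

// A pair of ints usable as a key: write/readFields give the Writable
// serialization protocol, compareTo gives the WritableComparable sort order.
public class IntPair implements WritableComparable<IntPair> {
  private int first;
  private int second;

  public void set(int first, int second) {
    this.first = first;
    this.second = second;
  }

  public void write(DataOutput out) throws IOException {
    out.writeInt(first);
    out.writeInt(second);
  }

  public void readFields(DataInput in) throws IOException {
    first = in.readInt();
    second = in.readInt();
  }

  public int compareTo(IntPair other) {
    if (first != other.first) {
      return first < other.first ? -1 : 1;
    }
    if (second != other.second) {
      return second < other.second ? -1 : 1;
    }
    return 0;
  }

  public int hashCode() {  // used by the default HashPartitioner
    return first * 163 + second;
  }

  public boolean equals(Object o) {
    if (!(o instanceof IntPair)) return false;
    IntPair p = (IntPair) o;
    return first == p.first && second == p.second;
  }
}
```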
WordCount::Mapper
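A sketch of a word-count mapper against the old org.apache.hadoop.mapred API shipped with 0.20.2; class and field names are illustrative, and the slide's exact code may have differed:

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class WordCountMapper extends MapReduceBase
    implements Mapper<LongWritable, Text, Text, IntWritable> {

  private static final IntWritable ONE = new IntWritable(1);
  private final Text word = new Text();  // reused to avoid per-record allocation

  public void map(LongWritable key, Text value,
                  OutputCollector<Text, IntWritable> output,
                  Reporter reporter) throws IOException {
    // Tokenize the line and emit (word, 1) for every token.
    StringTokenizer itr = new StringTokenizer(value.toString());
    while (itr.hasMoreTokens()) {
      word.set(itr.nextToken());
      output.collect(word, ONE);
    }
  }
}
```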
WordCount::Reducer
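Likewise, a sketch of the matching reducer in the same old API; again the names are illustrative:

```java
import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

public class WordCountReducer extends MapReduceBase
    implements Reducer<Text, IntWritable, Text, IntWritable> {

  public void reduce(Text key, Iterator<IntWritable> values,
                     OutputCollector<Text, IntWritable> output,
                     Reporter reporter) throws IOException {
    // Sum all of the counts emitted for this word.
    int sum = 0;
    while (values.hasNext()) {
      sum += values.next().get();
    }
    output.collect(key, new IntWritable(sum));
  }
}
```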
Basic Cluster Components

One of each per cluster:
- NameNode (NN)
- JobTracker (JT)

One of each per slave machine:
- TaskTracker (TT)
- DataNode (DN)
Putting everything together…

[Architecture diagram: on the master side, a namenode daemon and a separate job submission node running the jobtracker; each of the slave nodes runs a datanode daemon and a tasktracker on top of the local Linux file system.]
Anatomy of a Job

- A MapReduce program in Hadoop = a Hadoop job
- Jobs are divided into map and reduce tasks
- An instance of a running task is called a task attempt
- Multiple jobs can be composed into a workflow

Job submission process:
1. The client (i.e., the driver program) creates a job, configures it, and submits it to the JobTracker
2. JobClient computes the input splits (on the client end)
3. Job data (jar, configuration XML) are sent to the JobTracker
4. The JobTracker puts the job data in a shared location and enqueues the tasks
5. TaskTrackers poll for tasks
6. Off to the races…

A driver-side sketch follows.
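A hedged sketch of the driver end of this process against the 0.20 old API; WordCountMapper and WordCountReducer refer to the sketches shown earlier, and JobClient.runJob carries out the split computation and submission steps listed above:

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class WordCount {
  public static void main(String[] args) throws Exception {
    // 1. Create and configure the job.
    JobConf conf = new JobConf(WordCount.class);
    conf.setJobName("wordcount");

    conf.setOutputKeyClass(Text.class);
    conf.setOutputValueClass(IntWritable.class);

    conf.setMapperClass(WordCountMapper.class);
    conf.setReducerClass(WordCountReducer.class);

    FileInputFormat.setInputPaths(conf, new Path(args[0]));
    FileOutputFormat.setOutputPath(conf, new Path(args[1]));

    // 2. Submit it: JobClient computes the input splits, ships the jar
    //    and configuration XML to the JobTracker, and blocks until done.
    JobClient.runJob(conf);
  }
}
```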
[Figure, redrawn from a slide by Cloudera, cc-licensed: the InputFormat divides an input file into InputSplits; a RecordReader turns each split into records for a Mapper, and each Mapper produces its intermediates.]
[Figure, redrawn from a slide by Cloudera, cc-licensed: a Partitioner routes each Mapper's intermediates to the appropriate Reducer; combiners omitted here.]
[Figure, redrawn from a slide by Cloudera, cc-licensed: each Reducer writes its output through a RecordWriter, obtained from the OutputFormat, into its own output file.]
Input and Output

InputFormat:
- TextInputFormat
- KeyValueTextInputFormat
- SequenceFileInputFormat
- …

OutputFormat:
- TextOutputFormat
- SequenceFileOutputFormat
- …

Both are selected in the driver, as sketched below.
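An illustrative fragment (continuing the hypothetical WordCount driver above) showing how formats are chosen in the old API:

```java
import org.apache.hadoop.mapred.KeyValueTextInputFormat;
import org.apache.hadoop.mapred.SequenceFileOutputFormat;

// Inside the driver, before JobClient.runJob(conf):
conf.setInputFormat(KeyValueTextInputFormat.class);    // each input line: key <TAB> value
conf.setOutputFormat(SequenceFileOutputFormat.class);  // binary, splittable output
```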
Shuffle and Sort in Hadoop

Probably the most complex aspect of MapReduce!

Map side:
- Map outputs are buffered in memory in a circular buffer
- When the buffer reaches a threshold, its contents are “spilled” to disk
- Spills are merged into a single, partitioned file (sorted within each partition); the combiner runs here

Reduce side:
- First, map outputs are copied over to the reducer machine
- “Sort” is a multi-pass merge of map outputs (happening in memory and on disk); the combiner runs here too
- The final merge pass feeds directly into the reducer

An illustrative fragment for the combiner and the buffer knobs follows.
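The combiner runs at the spill and merge points just described only if one is registered on the job. A fragment against the 0.20 API; the numeric values are examples, not tuning advice:

```java
// Reuse the reducer as a combiner: word-count sums are associative
// and commutative, so partial sums are safe.
conf.setCombinerClass(WordCountReducer.class);

// Map-side buffer knobs in Hadoop 0.20:
conf.setInt("io.sort.mb", 100);                // circular buffer size, in MB
conf.setFloat("io.sort.spill.percent", 0.80f); // fill fraction that triggers a spill
```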
[Shuffle-and-sort diagram: each Mapper fills a circular buffer in memory and spills to disk; the spills are merged on disk, with the Combiner applied along the way; the merged, partitioned spills are fetched by this reducer and by other reducers, merged again into intermediate files on disk, and finally fed into the Reducer.]
Hadoop Workflow

1. Load data into HDFS
2. Develop code locally
3. Submit the MapReduce job to the cluster
  3a. Go back to step 2 as needed
4. Retrieve data from HDFS
Debugging Hadoop

First, take a deep breath. Start from unit tests, then run locally.

Strategies:
- Learn to use the webapp
- Know where println output goes; better yet, don't use println, use logging (see the sketch below)
- Throw RuntimeExceptions to make broken assumptions fail fast
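A hedged sketch of the logging advice: Hadoop 0.20 itself uses Apache Commons Logging, and each task attempt's log output can be browsed through the JobTracker webapp. LoggingMapper is a hypothetical name:

```java
import java.io.IOException;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class LoggingMapper extends MapReduceBase
    implements Mapper<LongWritable, Text, Text, IntWritable> {

  // Each task attempt's stdout, stderr, and syslog are collected
  // separately and are viewable by drilling down in the JobTracker UI.
  private static final Log LOG = LogFactory.getLog(LoggingMapper.class);

  public void map(LongWritable key, Text value,
                  OutputCollector<Text, IntWritable> output,
                  Reporter reporter) throws IOException {
    if (value.getLength() == 0) {
      // Goes to the task's syslog via Commons Logging, not to stdout.
      LOG.warn("empty input line at byte offset " + key.get());
      return;
    }
    // ... real map logic here ...
  }
}
```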
Recap

- Hadoop data types
- Anatomy of a Hadoop job
- Hadoop jobs, end to end
- Software development workflow
Q&A

Thank you!