MIT 802 Introduction to Data Platforms and Sources Lecture 2


1 MIT 802 Introduction to Data Platforms and Sources Lecture 2
Willem S. van Heerden

2 MapReduce
- MapReduce is a programming model
  - Different implementations exist (e.g. Hadoop MapReduce)
  - Used for processing big data sets in parallel over many nodes
- Composed of two procedures
  - Map procedure
    - Performs filtering and sorting
    - Splits the task up into components
    - Maps inputs into intermediate key-value pairs
  - Reduce procedure
    - Performs a summary operation that produces a result
    - Combines intermediate key-value pairs into final results

3 MapReduce
- A simple example
  - We have a set of documents
  - We want to count the occurrences of each word
- Strategy
  - The framework creates a series of mapper nodes
  - Documents (or parts of documents) are given to the mapper nodes
  - Each mapper node
    - Works through its document (or part of a document)
    - Produces intermediate key-value pairs
      - Key: a word occurring in the document
      - Value: a count of 1, since each pair records a single occurrence
    - Stores the intermediate results locally on the node

4 MapReduce

  Documents                                Intermediate key-value pairs

  Mapper node 1: "Hello World Bye World"   (Hello, 1), (World, 1), (Bye, 1), (World, 1)
  Mapper node 2: "Hello Test Hello Bye"    (Hello, 1), (Test, 1), (Hello, 1), (Bye, 1)

5 MapReduce
- Strategy (continued)
  - The framework creates a series of reducer nodes
  - All key-value pairs with the same key are gathered together
    - These key-value pairs are fed to the same reducer node
    - This is referred to as shuffling
  - If there are many instances of a key
    - Multiple reducer nodes may handle the same key
    - Reducers with the same key are then grouped together
    - This is referred to as sorting
  - Finally, each reducer node processes its key-value pairs
    - Totals the values for the key-value pairs
    - Outputs the key with the total count as the final result
    - This is referred to as reducing

6 MapReduce

  Intermediate key-value pairs                        Final results

  Mapper node 1: (Hello, 1), (World, 1), (Bye, 1), (World, 1)
  Mapper node 2: (Hello, 1), (Test, 1), (Hello, 1), (Bye, 1)

  Reducer node 1: (Bye, 1), (Bye, 1)                  (Bye, 2)
  Reducer node 2: (Hello, 1), (Hello, 1), (Hello, 1)  (Hello, 3)
  Reducer node 3: (Test, 1)                           (Test, 1)
  Reducer node 4: (World, 1), (World, 1)              (World, 2)
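To make the map, shuffle, and reduce phases concrete, here is a minimal single-process Java sketch that simulates all three phases for the two documents above. It illustrates the programming model only; it is not Hadoop code.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class MapReduceSimulation {
  public static void main(String[] args) {
    List<String> documents = Arrays.asList("Hello World Bye World", "Hello Test Hello Bye");

    // Map phase: each document yields intermediate (word, 1) pairs
    List<Map.Entry<String, Integer>> pairs = new ArrayList<>();
    for (String doc : documents) {
      for (String word : doc.split("\\s+")) {
        pairs.add(Map.entry(word, 1));
      }
    }

    // Shuffle phase: gather all values with the same key together
    Map<String, List<Integer>> grouped = new TreeMap<>();
    for (Map.Entry<String, Integer> pair : pairs) {
      grouped.computeIfAbsent(pair.getKey(), k -> new ArrayList<>()).add(pair.getValue());
    }

    // Reduce phase: total the values for each key and output the result
    grouped.forEach((word, counts) ->
        System.out.println(word + "\t" + counts.stream().mapToInt(Integer::intValue).sum()));
  }
}

Running this prints Bye 2, Hello 3, Test 1, and World 2, matching the reducer outputs in the diagram above.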

7 Apache Hadoop
- Open-source software framework
  - Used for distributed storage and processing of big data
  - Designed to work on commodity hardware
  - Runs on computer clusters
- Components
  - Core ecosystem
  - Query engines
  - External data storage
- A Hadoop cluster consists of
  - At least one master node
  - Multiple slave or worker nodes

8 Core Hadoop Ecosystem
- Storage: HDFS
- Resource management: YARN, MESOS
- Processing: MapReduce, TEZ, Spark, Apache Storm
- Database: HBase
- Scripting and querying: Pig, Hive
- Coordination and workflow: Zookeeper, Oozie
- Data ingestion: Sqoop, Flume, Kafka

9 HDFS
- Hadoop Distributed File System
  - Java-based file system
  - Scalable and reliable data storage
  - Breaks data into smaller pieces stored over nodes

10 HDFS
- The Hadoop Distributed File System primarily consists of
  - A NameNode
    - Resides on the master node
    - Manages file system metadata, primarily the file system index
    - May be a standalone server in large clusters
    - A secondary NameNode may also exist, snapshotting the NameNode's memory
  - DataNodes that store the actual data
    - Reside on the worker nodes
    - Contain the actual data blocks
    - Each DataNode sends a heartbeat to the NameNode
    - A DataNode is considered dead if the NameNode stops receiving its heartbeat
    - The NameNode then replicates that DataNode's blocks on another DataNode
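As an illustration of how a client program sees this architecture, here is a minimal sketch using Hadoop's Java FileSystem API. The NameNode address is taken from the cluster configuration (core-site.xml), and the path /user/demo/hello.txt is a hypothetical example.

import java.nio.charset.StandardCharsets;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWriteDemo {
  public static void main(String[] args) throws Exception {
    // Picks up fs.defaultFS (the NameNode address) from the configuration
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    // Write a file; HDFS splits it into blocks and replicates them across DataNodes
    Path file = new Path("/user/demo/hello.txt");   // hypothetical path
    try (FSDataOutputStream out = fs.create(file, true)) {
      out.write("Hello HDFS".getBytes(StandardCharsets.UTF_8));
    }

    System.out.println("Default block size: " + fs.getDefaultBlockSize(file));
  }
}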

11 YARN
- Yet Another Resource Negotiator
  - Manages computing resources on the cluster
  - Schedules user applications for resource use
  - An application is a single job or a set of jobs organised in a graph
- Consists of
  - A global ResourceManager
    - Arbitrates resources between all applications
  - A per-application ApplicationMaster
    - Negotiates resources from the ResourceManager
  - A per-machine NodeManager
    - Works with the ApplicationMaster to execute and monitor tasks
- MESOS is an alternative to YARN
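As a small illustration of talking to the global ResourceManager from Java, the sketch below lists the applications it currently knows about using the YarnClient API; the connection details are assumed to come from the cluster's yarn-site.xml.

import java.util.List;
import org.apache.hadoop.yarn.api.records.ApplicationReport;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class YarnApplicationList {
  public static void main(String[] args) throws Exception {
    // Connects to the ResourceManager configured in yarn-site.xml
    YarnClient yarnClient = YarnClient.createYarnClient();
    yarnClient.init(new YarnConfiguration());
    yarnClient.start();

    // Ask the global ResourceManager for a report on every application
    List<ApplicationReport> apps = yarnClient.getApplications();
    for (ApplicationReport app : apps) {
      System.out.println(app.getApplicationId() + "\t" + app.getName()
          + "\t" + app.getYarnApplicationState());
    }

    yarnClient.stop();
  }
}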

12 YARN

13 Hadoop MapReduce
- The Hadoop implementation of MapReduce
  - Written in Java
  - Uses mapper and reducer methods
- Resilient to failure
  - The ApplicationMaster monitors mappers and reducers
  - Will restart a task on a new node if a failure is detected

14 Hadoop MapReduce Example
- Let's look at our previous simple example
- One class (WordCount) encapsulates
  - A mapper class (TokenizerMapper)
  - A reducer class (IntSumReducer)
  - A main method
- First, the mapper class (TokenizerMapper)
  - Contains the map method
    - Receives text as input (value)
    - Tokenizes value to get the individual words
    - Also receives a context (context)
      - Used to interact with the rest of the Hadoop system
      - Provides an interface for output
    - Writes each word with a count of 1 as a key-value pair to context

15 Hadoop MapReduce Example
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    // Emits (word, 1) for every token in the input text
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  // the WordCount class continues on the following slides

16 Hadoop MapReduce Example
- Let's look at our previous simple example
- Second, the reducer class (IntSumReducer)
  - Contains the reduce method
    - Receives a textual key (key)
    - Receives an iterable structure of values (values)
    - Iterates over values, adding each to a running sum
    - Also receives a context (context)
      - Writes the key and sum as a result to the context

17 Hadoop MapReduce Example
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {

    private IntWritable result = new IntWritable();

    // Totals all the counts that were emitted for a given word
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

18 Hadoop MapReduce Example
- Let's look at our previous simple example
- Finally, the main method
  - Sets up details related to the job
    - The job's general configuration
    - Job input and output details
  - Sets the mapper class using setMapperClass
  - Sets the reducer class using setReducerClass
  - Sets a combiner class using setCombinerClass
    - Performs local aggregation of the intermediate outputs
    - Helps cut down the amount of data transferred from mapper to reducer
    - In this case, the same class as the reducer

19 Hadoop MapReduce Example
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

20 Hadoop MapReduce Streaming
- Allows interfacing of MapReduce with languages other than Java
- For example, mrjob and pydoop in Python

21 Python mrjob Example

from mrjob.job import MRJob
import re

WORD_RE = re.compile(r"[\w']+")

class MRWordFreqCount(MRJob):

    def mapper(self, _, line):
        for word in WORD_RE.findall(line):
            yield word, 1

    def combiner(self, word, counts):
        yield word, sum(counts)

    def reducer(self, word, counts):
        # total the combined counts for each word
        yield word, sum(counts)

if __name__ == '__main__':
    MRWordFreqCount.run()

22 Apache Pig and Hive
- Apache Pig
  - High-level platform used to create programs that run on Hadoop
  - High-level scripting language called Pig Latin
    - Abstracts away from MapReduce
    - Compiles to MapReduce
- Hive has a similar objective to Pig
  - Makes a data set file look and act like a relational database
  - SQL-like syntax (see the sketch after the Pig example below)

23 Apache Pig Example

input_lines = LOAD 'input.txt' AS (line:chararray);
words = FOREACH input_lines GENERATE FLATTEN(TOKENIZE(line)) AS word;
filtered_words = FILTER words BY word MATCHES '\\w+';
word_groups = GROUP filtered_words BY word;
word_count = FOREACH word_groups GENERATE COUNT(filtered_words) AS count, group AS word;
ordered_word_count = ORDER word_count BY count DESC;
STORE ordered_word_count INTO 'output.txt';
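Hive's SQL-like syntax can also be reached from Java through its JDBC driver. Below is a minimal sketch, assuming a HiveServer2 instance on localhost:10000 and a hypothetical words table; both are illustrative, not part of the lecture example.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveQueryDemo {
  public static void main(String[] args) throws Exception {
    // jdbc:hive2:// is the HiveServer2 JDBC URL scheme; host and database are assumptions
    try (Connection conn = DriverManager.getConnection(
             "jdbc:hive2://localhost:10000/default", "user", "");
         Statement stmt = conn.createStatement();
         // The same word count, expressed as a SQL-like query
         ResultSet rs = stmt.executeQuery(
             "SELECT word, COUNT(*) AS n FROM words GROUP BY word")) {
      while (rs.next()) {
        System.out.println(rs.getString("word") + "\t" + rs.getLong("n"));
      }
    }
  }
}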

24 Apache HBase
- Open-source, non-relational, distributed database
  - Runs on HDFS
  - A fault-tolerant way of storing large quantities of data
  - Supports database compression
- Tables in HBase can serve as
  - Input to MapReduce programs
  - Output from MapReduce programs
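To show what HBase access looks like in practice, here is a minimal sketch that writes and reads one cell with the HBase Java client API. The wordcounts table and its stats column family are assumptions for illustration.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseDemo {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table table = conn.getTable(TableName.valueOf("wordcounts"))) {  // hypothetical table

      // Store the count for the word "hello" under the "stats" column family
      Put put = new Put(Bytes.toBytes("hello"));
      put.addColumn(Bytes.toBytes("stats"), Bytes.toBytes("count"), Bytes.toBytes(3L));
      table.put(put);

      // Read the cell back by row key
      Result result = table.get(new Get(Bytes.toBytes("hello")));
      long count = Bytes.toLong(result.getValue(Bytes.toBytes("stats"), Bytes.toBytes("count")));
      System.out.println("hello -> " + count);
    }
  }
}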

25 Apache Storm
- Distributed stream processing framework
  - Processes streaming data
  - Implemented in Clojure
- General framework structure is similar to MapReduce
  - Main difference is that data is processed in real time, as opposed to batch processing
  - Thus Storm topologies run indefinitely, not just for a set of jobs
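As an illustration of the streaming contrast with MapReduce, here is a minimal sketch of a Storm bolt that keeps a running word count; WordCountBolt and the upstream component emitting a "word" field are assumptions, not part of the lecture material.

import java.util.HashMap;
import java.util.Map;
import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;

// Unlike a MapReduce reducer, a bolt never sees "all" values for a key;
// it updates a running total every time a new tuple arrives
public class WordCountBolt extends BaseBasicBolt {
  private final Map<String, Integer> counts = new HashMap<>();

  @Override
  public void execute(Tuple input, BasicOutputCollector collector) {
    String word = input.getStringByField("word");  // assumes an upstream spout emits a "word" field
    int count = counts.merge(word, 1, Integer::sum);
    collector.emit(new Values(word, count));       // emit the updated running total downstream
  }

  @Override
  public void declareOutputFields(OutputFieldsDeclarer declarer) {
    declarer.declare(new Fields("word", "count"));
  }
}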

26 Data Ingestion
- Sqoop
  - Tool for efficiently transferring bulk data between Hadoop and structured datastores (e.g. relational databases)
  - Command-line interface
- Flume
  - Tool for efficiently collecting, aggregating, and moving large amounts of log data
  - Based on streaming data flows
- Kafka
  - Distributed streaming platform
  - Transfers data between systems or applications
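As a small example of the Kafka side of ingestion, the sketch below publishes one log line to a topic with Kafka's Java producer API; the broker address and the "logs" topic are assumptions for illustration.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class LogProducerDemo {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");  // assumed broker address
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

    // Publish one record to the hypothetical "logs" topic, keyed by host name
    try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
      producer.send(new ProducerRecord<>("logs", "host1", "GET /index.html 200"));
    }
  }
}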

27 External Data Storage
- Cassandra
  - Open-source, distributed NoSQL database management system
  - Designed to handle large amounts of data
- MySQL
  - Open-source relational database management system
- mongoDB
  - Open-source and cross-platform
  - Document-oriented NoSQL database

