1
CMSC 491 Hadoop-Based Distributed Computing Spring 2016 Adam Shook
2
Objectives Explain why and how Hadoop has become the foundation for virtually all modern data architectures Explain the architecture and design of HDFS and MapReduce
3
Agenda! What is it? Why should I care? Example Usage Hadoop Core – HDFS – MapReduce Ecosystem, maybe
4
WHAT IS HADOOP?
5
You tell me
6
Distributed Systems Collection of independent computers that appears to its users as a single coherent system* Take a bunch of Ethernet cable, wire together machines with a hefty amount of CPU, RAM, and storage Figure out how to build software to parallelize storage/computation/etc. across our cluster – While tolerating failures and all kinds of other garbage “The hard part” *Andrew S. Tanenbaum
7
Properties of Distributed Systems Reliability Availability Scalability Efficiency
8
Reliability Can the system deliver services in the face of several component failures?
9
Availability How much downtime does the system incur when failures occur?

# of Nines  Availability %   Downtime per year  Per month      Per week
1           90%              36.5 days          72 hours       16.8 hours
2           99%              3.65 days          7.20 hours     1.68 hours
3           99.9%            8.76 hours         43.8 minutes   10.1 minutes
4           99.99%           52.56 minutes      4.38 minutes   1.01 minutes
5           99.999%          5.26 minutes       25.9 seconds   6.05 seconds
6           99.9999%         31.5 seconds       2.59 seconds   604.8 ms
7           99.99999%        3.15 seconds       262.97 ms      60.48 ms
8           99.999999%       315.569 ms         26.297 ms      6.048 ms
9           99.9999999%      31.5569 ms         2.6297 ms      0.6048 ms
10
Scalability Can the system scale to support a growing number of tasks?
11
Efficiency How efficient is the system, in terms of latency and throughput?
12
WHY SHOULD I CARE?
13
The Short Answer is
14
The V’s of Big Data! Volume Velocity Variety Veracity Value – New and Improved!
15
Data Sources Social Media Web Logs Video Networks Sensors Transactions (banking, etc.) E-mail, Text Messaging Paper Documents
16
Value in all of this! Fraud Detection Predictive Models Recommendations Analyzing Threats Market Analysis Others!
17
How do we extract value? Monolithic Computing! Build bigger faster computers! Limited solution – Expensive and does not scale as data volume increases
18
Enter Distributed Computing Processing is distributed across many computers Distribute the workload to powerful compute nodes with some separate storage
19
New Class of Challenges Does not scale gracefully Hardware failure is common Sorting, combining, and analyzing data spread across thousands of machines is not easy
20
RDBMS is still alive! SQL is still very relevant, with many complex business rules mapping to a relational model But there is much more that can be done by going beyond the relational model and looking at all of your data NoSQL can expand what you have, but it has its disadvantages
21
NoSQL Downsides Increased middle-tier complexity Constraints on query capability No standard query semantics Complex to set up and maintain
22
An Ideal Cluster Linear horizontal scalability Analytics run in isolation Simple API with multiple language support Available in spite of hardware failure
23
Enter Hadoop Hits these major requirements Two core pieces – Distributed File System (HDFS) – Flexible analytic framework (MapReduce) Many ecosystem components to expand on what core Hadoop offers
24
Scalability Near-linear horizontal scalability Clusters are built on commodity hardware Component failure is an expectation
25
Data Access Moving data from storage to a processor is expensive Store data and process the data on the same machines Process data intelligently by being local
26
Disk Performance Disk technology has made significant advancements Take advantage of multiple disks in parallel – 1 disk, 3TB of data, 300MB/s, ~2.5 hours to read – 1,000 disks, same data, ~10 seconds to read Distribution of data and co-location of processing makes this a reality
27
Complex Processing Code Hadoop framework abstracts complex distributed computing environment – No synchronization code – No networking code – No I/O code MapReduce developer focuses on the analysis – Job runs the same on one node or 4,000 nodes!
28
Fault Tolerance Failure is inevitable, and it is planned for Component failure is automatically detected and seamlessly handled System continues to operate as expected with minimal degradation
29
Hadoop History Spun off of Nutch, an open source web search engine Google Whitepapers – GFS – MapReduce Nutch re-architected to birth Hadoop Hadoop is very mainstream today
30
Hadoop Versions Hadoop 2.7.1 is the latest release Two Java APIs, referred to as the old and new APIs Two task management systems – MapReduce v1: JobTracker and TaskTrackers – MapReduce v2: a YARN application; MR tasks run in containers managed by the YARN framework
31
Common Use Cases Log Processing Image Identification Extract Transform Load (ETL) Recommendation Engines Time-Series Storage and Processing Building Search Indexes Long-Term Archive Audit Logging
32
Non-Use Cases Data processing handled by one large server ACID Transactions
33
LinkedIn Architecture
34
LinkedIn Applications
37
HDFS
38
Hadoop Distributed File System Inspired by Google File System (GFS) High performance file system for storing data Relatively simple centralized management Fault tolerance through data replication Optimized for MapReduce processing – Exposing data locality Linearly scalable Written in Java APIs in all the useful languages
39
Hadoop Distributed File System Use of commodity hardware Files are write once, read many Leverages large streaming reads vs random Favors high throughput vs low latency Modest number of huge files
40
HDFS Architecture Split large files into blocks Distribute and replicate blocks to nodes Two key services – Master NameNode – Many DataNodes Backup/Checkpoint NameNode for HA Users interact with it like a UNIX file system
41
NameNode Single master service for HDFS Was a single point of failure Stores file to block to location mappings in a namespace All transactions are logged to disk Can recover based on checkpoints of the namespace and transaction logs
42
NameNode Memory HDFS prefers fewer, larger files because every file, block, and replica consumes namespace metadata. Consider 1 GB of data with a 64 MB block size:
– Stored as one file: 1 name item + 16 blocks * 3 replicas = 48 block items, 49 items total
– Stored as 1,024 files of 1 MB each: 1,024 name items + 1,024 blocks * 3 replicas = 3,072 block items, 4,096 items total
Each item costs around 200 bytes of NameNode memory, so roughly 10 KB versus roughly 800 KB for the same 1 GB of data.
43
Checkpoint Node (Secondary NN) Performs checkpoints of the namespace and logs Not a hot backup! HDFS 2.0 introduced NameNode HA – Active and a Standby NameNode service coordinated via ZooKeeper
44
DataNode Stores blocks on local disk Sends frequent heartbeats to NameNode Sends block reports to NameNode Clients connect to DataNodes for I/O
45
How HDFS Works - Writes Client contacts NameNode to write data NameNode says write it to these nodes Client sequentially writes blocks to DataNode
46
How HDFS Works - Writes DataNodes replicate data blocks, orchestrated by the NameNode
47
How HDFS Works - Reads Client contacts NameNode to read data NameNode says you can find it here Client sequentially reads blocks from DataNode
48
How HDFS Works - Failure If a DataNode fails, the client connects to another node serving that block
49
HDFS Blocks Default block size was 64 MB, now 128 MB – Configurable – Larger sizes are common, say 256 MB or 512 MB Default replication is three – Also configurable, but rarely changed Blocks are stored as files on the DataNode’s local file system – The DataNode cannot associate a block with the HDFS file it belongs to
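Both settings can also be overridden per client rather than cluster-wide. A minimal sketch, assuming a Hadoop 2.x client with the cluster's core-site.xml on the classpath; the 256 MB block size, 2-replica value, and the /data/example.txt path are purely illustrative:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockSettingsExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Override the cluster defaults for files created by this client
        conf.setLong("dfs.blocksize", 256L * 1024 * 1024); // 256 MB blocks
        conf.set("dfs.replication", "2");                  // 2 replicas
        FileSystem fs = FileSystem.get(conf);

        // The replication factor of an existing file can be changed after the fact
        fs.setReplication(new Path("/data/example.txt"), (short) 2);
    }
}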
50
Replication Strategy First copy is written to the same node as the client – If the client is not part of the cluster, first block goes to a random node Second copy is written to a node on a different rack Third copy is written to a different node on the same rack as the second copy
51
Data Locality Key in achieving performance MapReduce tasks run as close to data as possible Some terms – Local – On-Rack – Off-Rack
52
Data Corruption Checksums are used to ensure block integrity A checksum is calculated on read and compared against the checksum stored when the block was written – Fast to calculate and space-efficient If the checksums differ, the client reads the block from another DataNode – The corrupted block is then deleted and re-replicated from a non-corrupt copy Each DataNode also periodically runs a background thread that verifies checksums – Catches corruption in blocks that are rarely read
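The checksums HDFS maintains are also visible to clients. A small sketch, not from the slides, assuming a configured Hadoop 2.x client; the path is only an example:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileChecksum;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ChecksumExample {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        // Ask the file system for the checksum it maintains for a file
        FileChecksum checksum = fs.getFileChecksum(new Path("/data/example.txt"));
        // For HDFS this is derived from the per-block CRCs; prints algorithm and value
        System.out.println(checksum);
    }
}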
53
Fault Tolerance If no heartbeat is received from DN within a configurable time window, it is considered lost – 10 minute default NameNode will: – Determine which blocks were on the lost node – Locate other DataNodes with valid copies – Instruct DataNodes with copies to replicate blocks to other DataNodes
54
Interacting with HDFS Primary interfaces are the CLI and the Java API Web UI for read-only access WebHDFS provides RESTful read/write HttpFS also exists snakebite is a Python library from Spotify A FUSE driver allows HDFS to be mounted as a standard file system – Typically used so legacy apps can use HDFS data ‘hdfs dfs -help’ will show CLI usage
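For the Java API, the central classes are Configuration, FileSystem, and Path. A minimal write-then-read sketch, assuming the cluster's core-site.xml is on the classpath so fs.defaultFS points at HDFS; the /tmp/hello.txt path is just an example:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsApiExample {
    public static void main(String[] args) throws Exception {
        // Picks up fs.defaultFS and other settings from the classpath configuration
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Write a small file (true = overwrite if it already exists)
        Path path = new Path("/tmp/hello.txt");
        try (FSDataOutputStream out = fs.create(path, true)) {
            out.writeBytes("Hello, HDFS!\n");
        }

        // Read it back line by line
        try (BufferedReader reader =
                new BufferedReader(new InputStreamReader(fs.open(path)))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}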
55
HADOOP MAPREDUCE
56
MapReduce! A programming model for processing data Contains two phases! Map – Perform a map function on key/value pairs Reduce – Perform a reduce function on key/value groups Groups are created by sorting map output Operations on key/value pairs open the door for very parallelizable algorithms
57
Hadoop MapReduce Automatic parallelization and distribution of tasks Framework handles scheduling tasks and re-running failed tasks Developer can customize many pieces of the puzzle Framework handles a Shuffle and Sort phase between map and reduce tasks Developer need only focus on the task at hand, rather than how to manage where data comes from and where it goes
58
MRv1 and MRv2 Both manage compute resources, jobs, and tasks Job API the same MRv1 is proven in production – JobTracker / TaskTrackers MRv2 is a new application on YARN – YARN is a generic platform for developing distributed applications
59
MRv1 - JobTracker Monitors job and task progress Issues task attempts to TaskTrackers Re-tries failed task attempts Four failed attempts of same task = one failed job Default scheduler is FIFO order – CapacityScheduler or FairScheduler is preferred Single point of failure for MapReduce
60
MRv1 – TaskTrackers Runs on same node as DataNodes Sends heartbeats and task reports to JT Configurable number of map and reduce slots Runs map and reduce task attempts in a separate JVM
61
How MapReduce Works Client submits job to JobTracker JobTracker submits tasks to TaskTrackers Job output is written to DataNodes w/replication JobTracker reports metrics
62
How MapReduce Works - Failure JobTracker assigns task to different node
63
(Diagram: data flow from Map Input → Map Output → Reducer Input Groups → Reducer Output)
64
Example -- Word Count Count the number of times each word is used in a body of text Uses TextInputFormat and TextOutputFormat

map(byte_offset, line):
    foreach word in line:
        emit(word, 1)

reduce(word, counts):
    sum = 0
    foreach count in counts:
        sum += count
    emit(word, sum)
65
Mapper Code

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class WordMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private final static IntWritable ONE = new IntWritable(1);
    private Text word = new Text();

    @Override
    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String line = value.toString();
        StringTokenizer tokenizer = new StringTokenizer(line);
        while (tokenizer.hasMoreTokens()) {
            word.set(tokenizer.nextToken());
            context.write(word, ONE);
        }
    }
}
66
Shuffle and Sort Mapper outputs to a single partitioned file Reducers copy their parts Reducer merges partitions
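The "partitioned file" is divided by key: every map output key is assigned to exactly one reduce partition, so all values for a key meet at the same reducer. This is not code from the deck, but a sketch that mirrors the logic of Hadoop's default HashPartitioner:

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

// Same idea as the default HashPartitioner: hash the key into one of the
// reduce partitions, masking the sign bit so the result is non-negative.
public class WordPartitioner extends Partitioner<Text, IntWritable> {
    @Override
    public int getPartition(Text key, IntWritable value, int numPartitions) {
        return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
    }
}

A custom partitioner like this would be registered with job.setPartitionerClass(WordPartitioner.class); if none is set, Hadoop uses its built-in hash partitioning.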
67
Reducer Code

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable val : values) {
            sum += val.get();
        }
        context.write(key, new IntWritable(sum));
    }
}
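The slides show only the Mapper and Reducer; a complete job also needs a driver that configures and submits it. A minimal sketch against the new (org.apache.hadoop.mapreduce) API, assuming the WordMapper and IntSumReducer classes above and input/output paths passed on the command line:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCountDriver.class);
        job.setMapperClass(WordMapper.class);
        job.setCombinerClass(IntSumReducer.class); // optional local aggregation on the map side
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));    // input directory
        FileOutputFormat.setOutputPath(job, new Path(args[1]));  // output directory (must not exist)
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}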
68
Resources, Wrap-up, etc. http://hadoop.apache.org Supportive community – Hadoop-DC – Data Science MD – Baltimore Hadoop Users Group Plenty of resources available to learn more – Books – Email lists – Blogs
69
That Ecosystem I Mentioned Accumulo Avro Geode Flink Twill DataFu Falcon HCatalog Samza Tajo HAWQ Parquet Mahout Oozie Storm ZooKeeper Spark Cassandra Kafka Crunch Azkaban Knox HDFS MapReduce YARN Sqoop Flume NiFi Pig Hive Streaming HBase Greenplum Ambari Mesos Drill Giraph Nutch Sentry Chukwa MRUnit Phoenix Tez Ranger
70
References
Hadoop: The Definitive Guide, Chapter 16.2
http://www.slideshare.net/s_shah/the-big-data-ecosystem-at-linkedin-23512853
http://datameer2.datameer.com/blog/wp-content/uploads/2013/01/hadoop_ecosystem_full2.png
Google Images