Cloud Computing and MapReduce

Cloud Computing and MapReduce. Uses slides from the RAD Lab at UC Berkeley about the cloud (http://abovetheclouds.cs.berkeley.edu/) and from Jimmy Lin's slides (http://www.umiacs.umd.edu/~jimmylin/cloud-2010-Spring/index.html), licensed under the Creative Commons Attribution 3.0 License.

Cloud computing. What is the "cloud"? Many answers; it is easier to explain with examples: Gmail is in the cloud; Amazon (AWS) EC2 and S3 are the cloud; Google AppEngine is the cloud; Windows Azure is the cloud; SimpleDB is in the cloud. The "network" (cloud) is the computer.

Cloud Computing. What about Wikipedia? "Cloud computing is the delivery of computing as a service rather than a product, whereby shared resources, software, and information are provided to computers and other devices as a utility (like the electricity grid) over a network (typically the Internet)."

Cloud Computing. Computing as a "service" rather than a "product". Everything happens in the "cloud": both storage and computing; personal devices (laptops/tablets) simply interact with the cloud. Advantages: Device agnostic – can seamlessly move from one device to another. Efficiency/scalability: programming frameworks allow easy scalability (relatively speaking), and there is an increasing need to handle "Big Data". Reliability. Multi-tenancy (better for the cloud provider). Cost: "pay as you go" allows renting computing resources as needed – much cheaper than building your own systems.

More. Scalability means that you (in effect) have infinite resources and can handle an unlimited number of users. Multi-tenancy enables sharing of resources and costs across a large pool of users: lower cost, higher utilization… but other issues, e.g. security. Elasticity: you can add or remove compute nodes, and the end user will not be affected / will see the improvement quickly. Utility computing (similar to the electrical grid).

X-as-a-Service

X-as-a-Service

Cloud Types

Data Centers. The key infrastructure piece that enables cloud computing. Everyone is building them, and there is a huge amount of work on deciding how to build/design them.

Data Centers

Data Centers

Data Centers. Amazon data centers (some old data): an 8 MW data center can include about 46,000 servers and costs about $88 million to build (just the facility). Power is a pretty large portion of the cost, but server costs still dominate. (Source: James Hamilton presentation)

Data Centers: Power distribution (slides from 4-5 years ago). Almost 11% of power is lost in distribution – this starts mattering when the total power consumption is in the millions. Modular and pre-fab designs: fast and economical deployments, built in a factory. (Source: James Hamilton presentation)

Data Centers. Networking equipment: very, very expensive. Server/storage prices are dropping fast, but networking is frozen in time – a vertically integrated ecosystem. It is a bottleneck and forces workload placement restrictions. Cooling/temperature/energy issues: appropriate placement of vents, inlets, etc. is a key issue; thermal hotspots often appear and need to be worked around. The overall cost of cooling is quite high, and so is the cost of running the computing equipment – both have led to work on energy-efficient computing. PUE (Power Usage Effectiveness) is hard to optimize in small data centers, which may lead to very large data centers in the near future. Ideally PUE should be 1; current numbers are around 1.07-1.22 (1.07 is a Facebook data center that does not have A/C). (Source: James Hamilton presentation)

MGHPCC: Massachusetts Green High Performance Computing Center, in Holyoke, MA. Cost: $95M. 8.6 acres, 10,000 high-end computers with hundreds of thousands of processor cores. 10 MW power + 5 MW for cooling/lighting. Close to electricity sources (hydroelectric plant) + solar. More: http://www.mghpcc.org/

Amazon Web Services 11 AWS regions worldwide

Virtualization. Virtual machines (e.g., running Windows inside a Mac) have been around for a long time, but used to be very slow; only recently did virtualization become efficient enough to make it a key enabler of cloud computing. Basic idea: run virtual machines on your servers and sell time on them – that's how Amazon EC2 runs. Many advantages: "Security": a virtual machine serves as an "almost" impenetrable boundary. Multi-tenancy: can have multiple VMs on the same server. Efficiency: replace many underpowered machines with a few high-powered machines.

Virtualization. Consumer VM products include VMware, Parallels (for Mac), VirtualBox, etc. Some tricky things to keep in mind: it is harder to reason about performance (if you care), and identical VMs may deliver somewhat different performance. There is much continuing work on the virtualization technology itself.

Docker. The hottest thing right now… avoids the overheads of full virtualization altogether.

CLOUD COMPUTING ECONOMICS AND ELASTICITY

Cloud Application Demand. Many cloud applications have cyclical demand curves: daily, weekly, monthly, … (figure: resource demand over time)

Economics of Cloud Users. How do you pick a capacity level? Pay by use instead of provisioning for peak. Recall: a data center costs >$150M and takes 24+ months to design and build. (figures: resources vs. time for a static data center, with unused resources between the fixed capacity line and the demand curve, and for a data center in the cloud, where capacity tracks demand)

Economics of Cloud Users. Risk of over-provisioning: underutilization, and a huge sunk cost in infrastructure. (figure: a static data center's capacity vs. demand over several days, with unused resources whenever demand falls below capacity)

Utility Computing Arrives. Amazon Elastic Compute Cloud (EC2). "Compute unit" rental: $0.085-0.68/hour (reduced from the original $0.10-0.80). 1 CU ≈ 1.0-1.2 GHz 2007 AMD Opteron/Intel Xeon core. No up-front cost, no contract, no minimum. Billing rounded to the nearest hour (also regional and spot pricing).

Instance            Price/hour           Platform  Units  Memory   Disk
Small               $0.085 (was $0.10)   32-bit    1      1.7GB    160GB
Large               $0.35  (was $0.40)   64-bit    4      7.5GB    850GB (2 spindles)
X Large             $0.68  (was $0.80)             8      15GB     1690GB (4 spindles)
High CPU Med        $0.17  (was $0.20)             5               350GB
High CPU Large      $0.68  (was $0.80)             20     7GB      1690GB
High Mem X Large    $0.50                          6.5    17.1GB
High Mem XXL        $1.20                          13     34.2GB
High Mem XXXL       $2.40                          26     68.4GB
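
To make the pay-as-you-go arithmetic concrete, here is a small back-of-the-envelope sketch in Python (an illustrative addition; the demand curve is made up, and only the $0.085/hour Small-instance price comes from the table above). It compares provisioning for peak demand around the clock with renting only the instances needed each hour.

# Back-of-the-envelope comparison of static (peak-provisioned) vs. elastic
# (pay-as-you-go) provisioning. The demand numbers are illustrative, not real data.
HOURLY_RATE = 0.085  # $/hour for an EC2 Small instance (from the table above)

# Hypothetical demand curve: instances needed in each of the 24 hours of a day.
demand = [2, 2, 2, 2, 3, 5, 8, 12, 15, 18, 20, 20,
          19, 18, 16, 14, 12, 10, 8, 6, 5, 4, 3, 2]

peak = max(demand)
static_cost = peak * 24 * HOURLY_RATE      # static data center: pay for peak capacity all day
elastic_cost = sum(demand) * HOURLY_RATE   # cloud: pay only for what is actually used

print(f"Static (provision for peak): ${static_cost:.2f}/day")
print(f"Elastic (pay as you go):     ${elastic_cost:.2f}/day")
print(f"Utilization of static capacity: {sum(demand) / (peak * 24):.0%}")

The gap between the two numbers is exactly the "unused resources" area in the figures above.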

Utility Storage Arrives. Amazon S3 and Elastic Block Storage offer low-cost, contract-less storage.

Programming Frameworks. The third key piece emerged from efforts to "scale out", i.e., distribute work over large numbers of machines (1000s of machines). Parallelism has been around for a long time, both within a single machine and across a cluster of computers, but it has always been considered very hard to program, especially the distributed kind: too many things to keep track of – how to parallelize, how to distribute the data, how to handle failures, and so on. Google developed the MapReduce and BigTable frameworks and ushered in a new era.

Programming Frameworks. Note the difference between "scale up" and "scale out": scale up usually refers to using a larger machine (easier to do); scale out refers to distributing over a large number of machines. Even with VMs, I still need to know how to distribute work across multiple VMs – Amazon's largest single instance may not be enough.

Cloud Computing Infrastructure. Computation model: MapReduce*. Storage model: HDFS*. Other computation models: HPC/Grid Computing. Network structure. (*Some material adapted from slides by Jimmy Lin, Christophe Bisciglia, Aaron Kimball, & Sierra Michels-Slettvet, Google Distributed Computing Seminar, 2007, licensed under the Creative Commons Attribution 3.0 License)

Cloud Computing Computation Models. Finding the right level of abstraction: von Neumann architecture vs. cloud environment. Hide system-level details from the developers – no more race conditions, lock contention, etc. Separate the what from the how: the developer specifies the computation that needs to be performed; the execution framework ("runtime") handles the actual execution. Similar to SQL!

Typical Large-Data Problem. Iterate over a large number of records and extract something of interest from each (Map); shuffle and sort intermediate results; aggregate intermediate results and generate final output (Reduce). Key idea: provide a functional abstraction for these two operations – MapReduce (Dean and Ghemawat, OSDI 2004).

MapReduce. Programmers specify two functions: map (k, v) → <k', v'>* and reduce (k', v') → <k', v''>*. All values with the same key are sent to the same reducer. The execution framework handles everything else…
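
As a concrete illustration of this contract (an addition to these notes, not part of the original slides), here is a minimal single-process sketch in Python: the user supplies map and reduce functions, and a tiny driver performs the grouping-by-key that the real framework does across machines.

from collections import defaultdict

def run_mapreduce(records, map_fn, reduce_fn):
    # map(k, v) -> (k', v')* ; reduce(k', [v', ...]) -> (k', v'')*
    intermediate = defaultdict(list)
    for key, value in records:
        for out_key, out_value in map_fn(key, value):
            intermediate[out_key].append(out_value)   # group values by key
    output = []
    for out_key in sorted(intermediate):              # keys reach reducers in sorted order
        output.extend(reduce_fn(out_key, intermediate[out_key]))
    return output

In a real cluster the grouping step is the distributed shuffle and sort; here it is just an in-memory dictionary.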

MapReduce. (figure: mappers emit key-value pairs; "Shuffle and Sort" aggregates values by key, e.g. a → [1, 5], b → [2, 7], c → [2, 3, 6, 8]; reducers then produce the final outputs r1/s1, r2/s2, r3/s3)

MapReduce. Programmers specify two functions: map (k, v) → <k', v'>* and reduce (k', v') → <k', v'>*. All values with the same key are sent to the same reducer. The execution framework handles everything else… What's "everything else"?

MapReduce "Runtime". Handles scheduling: assigns workers to map and reduce tasks. Handles "data distribution": moves processes to data. Handles synchronization: gathers, sorts, and shuffles intermediate data. Handles errors and faults: detects worker failures and automatically restarts them. Handles speculative execution: detects "slow" workers and re-executes their work. Everything happens on top of a distributed FS (later). Sounds simple, but many challenges!

MapReduce. Programmers specify two functions: map (k, v) → <k', v'>* and reduce (k', v') → <k', v'>*. All values with the same key are reduced together. The execution framework handles everything else… Not quite – usually, programmers also specify: partition (k', number of partitions) → partition for k': often a simple hash of the key, e.g., hash(k') mod R; divides up the key space for parallel reduce operations. combine (k', v') → <k', v'>*: mini-reducers that run in memory after the map phase; used as an optimization to reduce network traffic.
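
Continuing the toy sketch from above (again an illustrative addition, not from the slides), the partitioner and combiner slot in on the map side; the hash(k') mod R partitioner mirrors the default mentioned on the slide.

from collections import defaultdict

def default_partition(key, num_partitions):
    # The common default partitioner: a simple hash of the key, hash(k') mod R.
    return hash(key) % num_partitions

def run_map_side(records, map_fn, combine_fn, num_reducers):
    # Map phase only: map, combine in memory, then partition the output for the reducers.
    buffer = defaultdict(list)
    for key, value in records:
        for k, v in map_fn(key, value):
            buffer[k].append(v)
    per_partition = [defaultdict(list) for _ in range(num_reducers)]
    for k, values in buffer.items():
        # The combiner is a "mini-reduce" over this one mapper's output,
        # shrinking the data before it crosses the network.
        for ck, cv in combine_fn(k, values):
            p = default_partition(ck, num_reducers)
            per_partition[p][ck].append(cv)
    return per_partition   # one bucket of key -> values per reducer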

(figure: the same dataflow with combiners and partitioners inserted after the mappers – e.g. one mapper's output c → 3, c → 6 is combined locally into c → 9 before the shuffle and sort, so the reducer for c receives [2, 9, 8] instead of [2, 3, 6, 8]; reducers then produce the final outputs)

Two more details… There is a barrier between the map and reduce phases, but we can begin copying intermediate data earlier. Keys arrive at each reducer in sorted order; there is no enforced ordering across reducers.

MapReduce Overall Architecture. (figure, adapted from Dean and Ghemawat, OSDI 2004: (1) the user program submits the job to the master; (2) the master schedules map and reduce tasks on workers; (3) map workers read the input splits; (4) they write intermediate files to local disk; (5) reduce workers remotely read the intermediate data; (6) they write the output files)

"Hello World" Example: Word Count

Map(String docid, String text):
  for each word w in text:
    Emit(w, 1);

Reduce(String term, Iterator<Int> values):
  int sum = 0;
  for each v in values:
    sum += v;
  Emit(term, sum);
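
For comparison (an illustrative addition, not part of the slides), here is the same word count as a small self-contained Python script; with Hadoop one would instead write the Mapper and Reducer as Java classes or use Hadoop Streaming.

import re
from collections import defaultdict

def wc_map(docid, text):
    # Emit (word, 1) for every word in the document.
    for word in re.findall(r"\w+", text.lower()):
        yield (word, 1)

def wc_reduce(term, values):
    # Sum the partial counts for one word.
    yield (term, sum(values))

# Tiny local "framework": group the map output by key, then reduce each group.
docs = [("d1", "to be or not to be"), ("d2", "to map is to reduce")]
groups = defaultdict(list)
for docid, text in docs:
    for word, one in wc_map(docid, text):
        groups[word].append(one)
counts = dict(pair for key in sorted(groups) for pair in wc_reduce(key, groups[key]))
print(counts)   # {'be': 2, 'is': 1, 'map': 1, 'not': 1, 'or': 1, 'reduce': 1, 'to': 4}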

MapReduce can refer to… the programming model, the execution framework (aka "runtime"), or a specific implementation. Usage is usually clear from context!

MapReduce Implementations. Google has a proprietary implementation in C++, with bindings in Java and Python. Hadoop is an open-source implementation in Java: development was led by Yahoo!, it is used in production, and it is now an Apache project with a rapidly expanding software ecosystem (but still lots of room for improvement). There are also lots of custom research implementations, for GPUs, Cell processors, etc.

Cloud Computing Storage, or how do we get data to the workers? (figure: compute nodes connected to shared NAS/SAN storage) What's the problem here?

Distributed File System. Don't move data to workers… move workers to the data! Store data on the local disks of nodes in the cluster, and start the workers on the node that has the data local. Why? Network bisection bandwidth is limited, there is not enough RAM to hold all the data in memory, and disk access is slow but disk throughput is reasonable. A distributed file system is the answer: GFS (Google File System) for Google's MapReduce, HDFS (Hadoop Distributed File System) for Hadoop.

GFS: Assumptions. Choose commodity hardware over "exotic" hardware – scale "out", not "up". High component failure rates: inexpensive commodity components fail all the time. A "modest" number of huge files: multi-gigabyte files are common, if not encouraged. Files are write-once and mostly appended to, perhaps concurrently. Large streaming reads over random access; high sustained throughput over low latency. (GFS slides adapted from material by Ghemawat et al., SOSP 2003)

GFS: Design Decisions. Files are stored as chunks of fixed size (64MB). Reliability through replication: each chunk is replicated across 3+ chunkservers. A single master coordinates access and keeps the metadata – simple centralized management. No data caching: little benefit due to large datasets and streaming reads. Simplify the API and push some of the issues onto the client (e.g., data layout). HDFS = GFS clone (same basic ideas implemented in Java).
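
As a rough illustration of the chunking and replication decisions (not actual GFS/HDFS code; the placement policy here is deliberately simplified), a file is cut into fixed 64MB chunks and each chunk is handed to three chunkservers:

import math
import random

CHUNK_SIZE = 64 * 1024 * 1024   # fixed 64MB chunks, as in GFS
REPLICATION = 3                  # each chunk replicated across 3+ chunkservers

def place_chunks(file_size_bytes, chunkservers):
    # Split a file into 64MB chunks and pick 3 chunkservers for each one.
    # (Illustrative placement only: a real master also considers rack topology,
    # disk usage, and recent creation load on each server.)
    num_chunks = math.ceil(file_size_bytes / CHUNK_SIZE)
    return {f"chunk_{i}": random.sample(chunkservers, REPLICATION)
            for i in range(num_chunks)}

servers = [f"chunkserver{i}" for i in range(1, 9)]
print(place_chunks(1_000_000_000, servers))   # a ~1GB file becomes 15 chunks, 3 replicas each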

From GFS to HDFS. Terminology differences: GFS master = Hadoop namenode; GFS chunkservers = Hadoop datanodes. Functional differences: no file appends in HDFS (they were planned); HDFS performance is (likely) slower.

HDFS Architecture. (figure, adapted from Ghemawat et al., SOSP 2003: an application asks the HDFS client for a file such as /foo/bar; the client sends the (file name, block id) to the namenode, which holds the file namespace, e.g. /foo/bar → block 3df2, and replies with the (block id, block location); the client then requests a (block id, byte range) from the appropriate datanode, which stores blocks on its local Linux file system and returns the block data; the namenode also sends instructions to the datanodes and receives datanode state)

Namenode Responsibilities. Managing the file system namespace: holds the file/directory structure, metadata, file-to-block mapping, access permissions, etc. Coordinating file operations: directs clients to datanodes for reads and writes; no data is moved through the namenode. Maintaining overall health: periodic communication with the datanodes, block re-replication and rebalancing, garbage collection.
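
To make the namenode's bookkeeping concrete (an illustrative sketch with invented names and block ids, not actual HDFS code), its core metadata amounts to two maps: the file namespace and the block-to-datanode mapping.

# Toy model of the namenode's metadata; real HDFS keeps a much richer in-memory
# tree plus an edit log for durability.
namespace = {
    "/foo/bar": ["blk_3df2", "blk_91a0"],                  # file path -> ordered block ids
}
block_locations = {
    "blk_3df2": ["datanode1", "datanode4", "datanode7"],   # block id -> replica locations
    "blk_91a0": ["datanode2", "datanode4", "datanode9"],
}

def open_for_read(path):
    # A read: the client gets block ids and locations from the namenode, then
    # fetches the block data directly from the datanodes (never via the namenode).
    return [(blk, block_locations[blk]) for blk in namespace[path]]

print(open_for_read("/foo/bar"))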

Putting everything together… (figure: a namenode running the namenode daemon and a job submission node running the jobtracker, above a set of slave nodes, each running a tasktracker and a datanode daemon on top of its local Linux file system)

MapReduce/GFS Summary. A simple but powerful programming model that scales to handle petabyte+ workloads: Google sorted 1PB (10 trillion 100-byte records) in six hours and two minutes on 4,000 computers; Yahoo! sorted 1PB in 16.25 hours on 3,800 computers. Performance improves incrementally with more nodes, and failures are handled seamlessly, though possibly with performance penalties.