Analysis of massive data sets Prof. dr. sc. Siniša Srbljić Doc. dr. sc. Dejan Škvorc Doc. dr. sc. Ante Đerek Faculty of Electrical Engineering and Computing Consumer Computing Laboratory

Analysis of Massive Data Sets: MapReduce Programming Model Marin Šilić, PhD

Overview □Motivation □Storage Infrastructure □Programming Model: MapReduce □MapReduce: Implementation □MapReduce: Refinements □Problems Suited for MapReduce □MapReduce Other Implementations

Motivation □Modern Data Mining Applications o Examples  Ranking Web pages by importance  Querying friend networks on a social networking site o What do they have in common?  They require processing of large amounts of data  The data is often regular  The idea is to exploit parallelism

Motivation □Single Node Architecture o Most of the computing is done on a single node (CPU, RAM, disk) o Machine learning, statistics, “classical” data mining o All the data fits in the RAM of a single node

Motivation □Parallelism in the past o Scientific applications  Large amounts of calculations  Done on special-purpose computers  Multiple processors, specialized hardware, etc. □Parallelism today o Prevalence of the Web  Computing is done on installations of large numbers of ordinary computing nodes  The costs are greatly reduced compared with using a special-purpose parallel machine

Motivation □Cluster Architecture  Racks of 16 – 64 commodity nodes (CPU, RAM, disk) connected by a switch  1 Gbps bandwidth between any two nodes within a rack  10 Gbps between racks

Motivation □Google Example o 20+ billion web pages x 20KB = 400+ TB  It takes ~1000 hard drives just to store the Web! o 1 computer reads ~30-35 MB/sec from disk  It takes ~4 months to read the Web! o The challenge is to do something useful with the data o Nowadays, a standard architecture for such computation is used  Cluster of commodity Linux nodes  Ethernet network to connect them
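A quick back-of-the-envelope check of these figures (the ~35 MB/sec read speed below is an assumption used for illustration):

web_size_bytes = 20e9 * 20e3            # 20+ billion pages x 20 KB = ~400 TB
read_speed = 35e6                       # assumed sequential read speed, ~35 MB/sec
days = web_size_bytes / read_speed / 86400
print(days)                             # ~132 days, i.e. roughly 4 months on one machine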

Motivation □Large-scale computing for data mining on commodity hardware □Challenges i. How to distribute the computation? ii. How to make it easy to write distributed programs? iii. How to incorporate fault tolerance?  A machine may operate for about 1000 days before failing  If you have 1000 machines, the expected loss is 1 machine per day  However, Google is estimated to possess more than 1M machines, so 1000 machines fail every day!

Motivation □Key Ideas o Bring the computation to the data o Store files multiple times for reliability □MapReduce addresses the challenges o Google’s computational and data manipulation model o Very convenient for handling Big Data o Storage infrastructure – file system  GFS, HDFS o Programming model  MapReduce

Storage Infrastructure □Distributed File System o Google: GFS, Hadoop: HDFS o Provides a global namespace o Specifically designed for Google’s needs □Usage pattern o Huge files (100s of GB, up to TBs) o Data is rarely updated in place o The dominant operations are  Appends  Reads

Storage Infrastructure □GFS Architecture o A GFS cluster consists of:  A single master, multiple chunkservers, and multiple clients  [Figure: the GFS client library sends (file name, chunk index) to the GFS master, which keeps the file namespace (e.g., /foo/bar → chunk 2ef0) and replies with (chunk handle, chunk locations); the master exchanges instructions and state with the chunkservers, which store chunks in their local Linux file systems; the client then requests (chunk handle, byte range) directly from a chunkserver and receives the chunk data. Control flow goes through the master; data flow goes directly between client and chunkservers.]

Storage Infrastructure □GFS Architecture o Chunk Servers  A file is split into contiguous chunks  Chunk size is 64MB  Each chunk is replicated (2x or 3x)  Replicas are kept in different racks o Master Node  Stores metadata about where files are stored  Replicated – shadow master o Client Library  Contacts the master to locate chunk servers  Accesses data directly from chunk servers

Storage Infrastructure □Reliable Distributed System o Data is kept in chunks replicated on different machines  Easy to recover from disk or machine failure o Brings the computation to the data o Chunk servers are also used as computing nodes  [Figure: chunks C0, C1, C2, C3 replicated across Chunk Server 1, Chunk Server 2, and Chunk Server 3]

Programming Model: MapReduce □Motivating example o Imagine a huge document o Count the number of times each distinct word occurs o For example  Mining web server logs, counting URLs  The file is too large to fit in memory, but all (word, count) pairs fit in memory  Solution on Linux: words(doc.txt) | sort | uniq -c, where words takes a file and outputs each word on its own line

Programming Model: MapReduce □The High-level Overview of MapReduce o Read the data sequentially o Map:  Identify the entities you care about o Group entities by key:  Sort and Shuffle o Reduce:  Aggregate, count, filter, transform o Write the results

Programming Model: MapReduce □Map Phase  [Figure: input key-value pairs are transformed by Map into intermediate key-value pairs]

Programming Model: MapReduce □Group Phase o Performed by the framework itself  [Figure: intermediate key-value pairs are grouped into key-value groups]

Programming Model: MapReduce □Reduce Phase  [Figure: key-value groups are reduced to output key-value pairs]

Programming Model: MapReduce □Programmer implements two methods: o Map(k, v) → (k’, v’)*  Input is a key-value pair and output is a set of intermediate key-value pairs  The key might be the filename and the value a line in the file  There is one Map call for each input key-value pair (k, v) o Reduce(k’, (v’)*) → (k’, v’’)*  All values v’ with the same key k’ are processed together  There is one Reduce call for each unique key k’

Programming Model: MapReduce □MapReduce Letter Counting Example  A file contains 4 string lines; count the number of occurrences of each vowel in the file using MapReduce
Input lines: BARCELONA\n REAL MADRID\n INTER MILAN\n CHELSEA\n
Map 1: (A, 1) (E, 1) (O, 1) (A, 1)
Map 2: (E, 1) (A, 1) (A, 1) (I, 1)
Map 3: (I, 1) (E, 1) (I, 1) (A, 1)
Map 4: (E, 1) (E, 1) (A, 1)
Group by key: (A, [1, 1, 1, 1, 1, 1]) (E, [1, 1, 1, 1, 1]) (I, [1, 1, 1]) (O, [1])
Reduce 1: (A, 6) Reduce 2: (E, 5) Reduce 3: (I, 3) Reduce 4: (O, 1)
1.) MAP is provided by the programmer 2.) Grouping by key is done by the framework 3.) REDUCE is provided by the programmer

Programming Model: MapReduce □The programmer needs to provide MAP and REDUCE implementation

map(key, value):
    // key: document name, value: text of the document
    for each char c in value:
        if c is vowel:
            emit(c, 1)

reduce(key, values):
    // key: a vowel, values: an iterator over counts
    result = 0
    for each count v in values:
        result += v
    emit(key, result)
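The pseudocode above can be checked with a minimal, single-machine Python sketch of what the framework does; the run_mapreduce helper and the function names below are illustrative only, not part of any real MapReduce API:

from collections import defaultdict

def map_vowels(key, value):
    # key: document name (unused), value: text of the document
    return [(c, 1) for c in value if c in "AEIOU"]

def reduce_vowels(key, values):
    # key: a vowel, values: list of counts
    return (key, sum(values))

def run_mapreduce(inputs, mapper, reducer):
    groups = defaultdict(list)                 # group phase (done by the framework)
    for k, v in inputs:
        for ik, iv in mapper(k, v):            # map phase
            groups[ik].append(iv)
    return [reducer(k, vs) for k, vs in sorted(groups.items())]   # reduce phase

lines = ["BARCELONA", "REAL MADRID", "INTER MILAN", "CHELSEA"]
print(run_mapreduce(enumerate(lines), map_vowels, reduce_vowels))
# [('A', 6), ('E', 5), ('I', 3), ('O', 1)]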

Programming Model: MapReduce □MapReduce framework deals with o Partitioning the input data across multiple mappers o Scheduling the program execution on a set of distributed nodes o Grouping and aggregating values by key o Ensuring fault tolerance o Handling inter-machine communication

MapReduce: Implementation □Execution Overview  [Figure: (1) the user program forks a master and worker processes; (2) the master assigns map and reduce tasks to workers; (3) map workers read splits of the input files; (4) map output is written locally to intermediate files; (5) reduce workers remotely read the intermediate files; (6) reduce workers write the output files]

MapReduce: Implementation □Execution Overview 1) Split & Fork  The MapReduce library splits the input files into M pieces, typically 16 – 64 MB each  Then it starts many copies of the program on a cluster of machines 2) Task assignment  One copy is special – the master; the rest are workers  The master assigns the M map tasks and R reduce tasks to idle workers

MapReduce: Implementation □Execution Overview 3) Map  Each map worker reads one of the input splits  Parses key-value pairs from the input  Passes each pair to the user-defined Map function  Buffers the Map function output in local memory 4) Local write  The buffered map output is written to local disk  Partitioned into R regions via the partitioning function  The locations of the partitions are passed to the master

MapReduce: Implementation □Execution Overview 5) Grouping and sorting  The master notifies reduce workers of the partition locations  A reduce worker reads the intermediate data via RPC  It sorts the intermediate data by the intermediate key so that all occurrences of the same key are grouped together  Typically, many different keys map to the same reduce worker  If the amount of intermediate data is too large to fit in memory, an external sort is used

MapReduce: Implementation □Execution Overview 6) Reduce  The reduce worker iterates over the sorted intermediate data  For each unique intermediate key, it passes the key and the corresponding set of intermediate values to the user-defined Reduce function  The output of the Reduce function is appended to a final output file for this reduce partition 7) Finalizing  The master wakes up the user program  At this point, the MapReduce call returns back to the user code

MapReduce: Implementation □Master data structures o Task status (idle, in-progress, completed) o Identity of the worker machines o The locations and sizes of the R intermediate file regions produced by each map task  The master pushes these locations to reduce tasks  The master also pings every worker periodically

MapReduce: Implementation □Fault Tolerance o Worker failure  Master pings worker  If no response is received, the worker is marked as failed  Any completed map task completed by worker is reset to idle The task is rescheduled and assigned to some other worker  Any map or reduce task in progress on a failed worker is also reset to idle and it gets rescheduled on some other worker  Map tasks that are completed are re-executed since the output is stored on a local disk of the failed worker  Completed reduce tasks do not need to be re-executed since their output is stored in a global file system Copyright © 2016 CCL: Analysis of massive data sets30 od 60

MapReduce: Implementation □Fault Tolerance o Worker failure  A(Map) → B(Map) All reduce workers need to get notified Any reduce task that has not already read the data from worker A will read the data from worker B o Master failure  If the master dies, MapReduce task gets aborted  The client gets notified and can retry MapReduce operation Copyright © 2016 CCL: Analysis of massive data sets31 od 60

MapReduce: Implementation □Locality o Input data is stored on the local disks of the machines that make up the cluster o The master takes the locations of the input files into account when scheduling map tasks  The idea is to schedule a map task on a node where the input data is stored  Failing that, it tries to schedule the map task near the data (on a machine attached to the same network switch)  For large MapReduce operations on a significant fraction of the workers in a cluster Most input data is read locally It consumes no network bandwidth

MapReduce: Implementation □How many Map and Reduce tasks (granularity)? o M map tasks and R reduce tasks o Ideally, M and R should be much larger than the number of worker machines  This improves dynamic load balancing and speeds up recovery when a worker fails o In practice…  M is chosen so that each task deals with 16 – 64MB of input data (locality!!!) *M ~ 200k*  R is often constrained by the user because each reduce task produces a separate output file *R ~ 5k*  R is a small multiple of the number of workers *NoW ~ 2k* o Physical bounds  The master makes O(M + R) scheduling decisions and keeps O(M*R) states

MapReduce: Implementation □Backup Tasks o A common cause that lengthens the total execution time is a “straggler” – a machine that takes unusually long to complete one of its tasks  A bad disk may slow reads from 30MB/s to 1MB/s  Lack of CPU, IO, or bandwidth due to some other scheduled task o General mechanism  When the MapReduce operation is close to completion, the master schedules backup executions of the remaining in-progress tasks  The task is marked completed whenever either the primary or the backup execution completes  This significantly reduces the completion time

MapReduce: Implementation □Backup Tasks o Sort example  10^10 100-byte records  1800 machines  M = 15000  R = 4000 o The sort takes 44% longer with the backup-task mechanism disabled!

MapReduce: Refinements □Partitioning Function o The user specifies the number of reduce tasks R o Data gets partitioned using a partitioning function on the intermediate key o Default partitioning function: hash(key) mod R o However, it is sometimes not appropriate  For example, the output keys are URLs and we want all entries for the same host to end up in the same output file  To support such needs, the user can provide a special partitioning function For example: hash(hostname(urlkey)) mod R
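A minimal sketch of such a custom partitioner (Python, standard library only; the function names are illustrative, not part of any MapReduce API):

from urllib.parse import urlparse
from zlib import crc32

R = 4   # number of reduce tasks / output partitions

def default_partition(key):
    # default: hash(key) mod R (crc32 used as a stable hash)
    return crc32(key.encode()) % R

def host_partition(url_key):
    # custom: hash(hostname(urlkey)) mod R, so all URLs on the same
    # host end up in the same reduce task and hence the same output file
    host = urlparse(url_key).netloc
    return crc32(host.encode()) % R

print(host_partition("http://example.com/a") == host_partition("http://example.com/b"))   # True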

MapReduce: Refinements □Ordering Guarantees o Within a given partition, the intermediate key/value pairs are processed in increasing key order o This makes it easy to generate sorted output per partition □Skipping Bad Records o Map or Reduce may crash deterministically on certain records  Usually a bug in the user code  It may be acceptable to ignore some records (e.g., in statistical analysis) o The framework can detect records that cause crashes and skip them

MapReduce: Refinements □Combiners o There is often significant repetition in the intermediate keys produced by each map: e.g., many (the, 1), (the, 1), … pairs for the same key  This works when the Reduce function is commutative and associative  Word counting is a good example o The MapReduce framework allows the user to specify an optional Combiner function  It performs partial merging before the data is sent over the network  The Combiner function is executed on the machines that perform the map task  Typically, the same code is used for both the combiner and the reduce functions
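As a sketch of a word-count combiner (illustrative, single-machine; in word counting the combiner can simply reuse the reduce logic):

from collections import defaultdict

def combine(map_output):
    # runs on the map worker: partially merges (word, 1) pairs
    # before they are sent over the network to the reduce workers
    partial = defaultdict(int)
    for word, count in map_output:
        partial[word] += count
    return list(partial.items())

map_output = [("the", 1), ("quick", 1), ("the", 1), ("the", 1)]
print(combine(map_output))   # [('the', 3), ('quick', 1)]: 4 pairs shrink to 2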

MapReduce: Refinements □IO Format Types o The MapReduce library provides support for reading input data in several formats  For example, “text” mode treats each line as a key-value pair  Another commonly supported format stores a sequence of key-value pairs sorted by key  Each input format knows how to split the input data meaningfully  Users can add new input types by implementing a simple reader interface o In a similar way, a set of output formats is supported  It is easy for the user to add support for new output types

Problems Suited for MapReduce □Count of URL Access Frequency o The Map function processes logs of web page requests  Outputs (URL, 1) o Reduce adds together all values for the same URL  Emits (URL, total count) □Reverse Web-Link Graph o Map: emits (target, source) for each link to a target URL found in a page named source o Reduce: concatenates all source URLs associated with a given target URL and emits (target, list(source))
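A sketch of the URL-counting job, reusing the illustrative run_mapreduce helper from the letter-counting example (the log format below is assumed for illustration):

def map_url_count(key, log_line):
    # assumed log format: "<timestamp> <method> <url> <status>"
    url = log_line.split()[2]
    return [(url, 1)]

def reduce_url_count(url, counts):
    return (url, sum(counts))

logs = ["t1 GET /index.html 200", "t2 GET /about.html 200", "t3 GET /index.html 200"]
print(run_mapreduce(enumerate(logs), map_url_count, reduce_url_count))
# [('/about.html', 1), ('/index.html', 2)]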

Problems Suited for MapReduce □Distributed Grep o Map: emits a line if it matches the pattern o Reduce: identity function that copies the supplied intermediate data to the output □Distributed Sort o Map: extracts the key from each record  Emits (key, record) o Reduce: emits all pairs unchanged  Sorting relies on the ordering guarantee and the partitioning function
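A sketch of the grep job (illustrative; the pattern and names are assumptions):

import re

PATTERN = re.compile(r"ERROR")            # example pattern

def map_grep(offset, line):
    # emit the line (keyed by its offset) only if it matches the pattern
    return [(offset, line)] if PATTERN.search(line) else []

def reduce_grep(offset, lines):
    # identity reduce: copy the matched line to the output
    return (offset, lines[0])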

Problems Suited for MapReduce □Term-Vector per Host o A term vector summarizes the most important words that occur in a document  A list of (word, frequency) pairs o The Map function emits a (hostname, term vector) pair for each input document  The hostname is extracted from the document URL o The Reduce function is passed all per-document term vectors for a given host  It adds these term vectors together  It throws away infrequent terms  Emits a final (hostname, term vector) pair

Problems Suited for MapReduce □Inverted Index o The Map function parses each document  It emits a sequence of (word, document ID) pairs o The Reduce function accepts all pairs for a given word  It sorts the corresponding document IDs  It emits a (word, list(document ID)) pair o The set of all output pairs forms a simple inverted index  The solution can easily be enhanced to keep track of word positions
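A sketch of the inverted-index job, again reusing the illustrative run_mapreduce helper (the tokenization is deliberately naive):

def map_inverted_index(doc_id, text):
    # emit (word, doc_id) for every word in the document
    return [(word.lower(), doc_id) for word in text.split()]

def reduce_inverted_index(word, doc_ids):
    # sort and deduplicate the document IDs for this word
    return (word, sorted(set(doc_ids)))

docs = [(1, "map reduce"), (2, "reduce phase"), (3, "map phase reduce")]
print(run_mapreduce(docs, map_inverted_index, reduce_inverted_index))
# [('map', [1, 3]), ('phase', [2, 3]), ('reduce', [1, 2, 3])]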


Problems Suited for MapReduce □Iterative Message Passing (Graph Processing) o General pattern  There is a network of entities and relationships between them We need to calculate a state for each entity The state of each entity is influenced by the other entities in its neighborhood  The state can be… A distance to other nodes An indication that there is a node with a certain property nearby  MapReduce jobs are performed iteratively At each iteration, each node sends messages to its neighbors Each neighbor updates its state based on the received messages  Iterations are terminated by some condition E.g., a maximum number of iterations, a negligible state change, etc.

Problems Suited for MapReduce □Iterative Message Passing (Graph Processing)

map(key, value):
    // key: node id, value: node object
    emit(key, value)
    for each neighbor m in value.neighbors:
        emit(m.id, get_message(value))

reduce(key, values):
    // key: node id, values: received messages
    M = null
    messages = []
    for each message v in values:
        if v is Object:
            M = v
        else:
            messages.append(v)
    M.state = calculate_state(messages)
    emit(key, M)

Problems Suited for MapReduce □Iterative Message Passing (Graph Processing) o One iteration: 1) a node passes its state to its neighbors 2) based on the received states of its neighbors, the node updates its own state 3) the node passes its new state to its neighbors

Problems Suited for MapReduce □Iterative Message Passing (Graph Processing) o Example: availability propagation through a tree of categories  [Figure: root categories (Men, Women, Kids) → subcategories (Men Jeans, Women Dress, Toys, …) → leaf categories (e.g., Lewis 501)]  The rule: a category is available if at least one of its subcategories is in stock

Problems Suited for MapReduce □Iterative Message Passing (Graph Processing) o Example: availability propagation through the tree of categories o Implementation:

get_message(node):
    // node = {id, availability (initialized to true or false)}
    return node.availability

calculate_state(values):
    // values = list of the availabilities of its subcategories
    state = false
    for each availability in values:
        state = state or availability
    return state
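A single-machine sketch of this pattern (hypothetical names; each node sends its availability to its parent, the only neighbor that matters here, and the illustrative run_mapreduce helper from earlier drives the iterations):

class Node:
    def __init__(self, id, parent, availability=False):
        self.id, self.parent, self.availability = id, parent, availability

def map_avail(key, node):
    out = [(key, node)]                                   # pass the node object through
    if node.parent is not None:
        out.append((node.parent, node.availability))      # message to the parent
    return out

def reduce_avail(key, values):
    node, messages = None, []
    for v in values:
        if isinstance(v, Node):
            node = v
        else:
            messages.append(v)
    if messages:                                          # leaves receive no messages
        node.availability = any(messages)
    return (key, node)

nodes = {"men": Node("men", None),
         "men_jeans": Node("men_jeans", "men"),
         "lewis_501": Node("lewis_501", "men_jeans", availability=True)}
for _ in range(2):                                        # leaf -> subcategory -> root
    nodes = dict(run_mapreduce(nodes.items(), map_avail, reduce_avail))
print(nodes["men"].availability)                          # True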


Problems Suited for MapReduce □Natural Join Example: R(A, B) ⋈ S(B, C)
R(A, B): (a1, b1), (a2, b1), (a3, b2), (a4, b3)
S(B, C): (b2, c1), (b2, c2), (b3, c3)
R ⋈ S (A, B, C): (a3, b2, c1), (a3, b2, c2), (a4, b3, c3)
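The slide only shows the relations; a minimal sketch of how such a natural join can be expressed in MapReduce (a reduce-side join, with illustrative names, reusing the run_mapreduce helper from earlier):

def map_join(key, record):
    # record: ("R", a, b) or ("S", b, c); emit the join attribute B as the key
    if record[0] == "R":
        _, a, b = record
        return [(b, ("R", a))]
    else:
        _, b, c = record
        return [(b, ("S", c))]

def reduce_join(b, values):
    r_side = [a for rel, a in values if rel == "R"]
    s_side = [c for rel, c in values if rel == "S"]
    # cross product of the two sides for this value of B
    return (b, [(a, b, c) for a in r_side for c in s_side])

records = [("R", "a1", "b1"), ("R", "a2", "b1"), ("R", "a3", "b2"), ("R", "a4", "b3"),
           ("S", "b2", "c1"), ("S", "b2", "c2"), ("S", "b3", "c3")]
print(run_mapreduce(enumerate(records), map_join, reduce_join))
# [('b1', []), ('b2', [('a3','b2','c1'), ('a3','b2','c2')]), ('b3', [('a4','b3','c3')])]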

Problems Suited for MapReduce □Limitations of MapReduce o Restricted programming framework  Tasks are written as acyclic, stateless dataflow programs Repeated querying of datasets is difficult It is difficult to implement iterative algorithms that revisit a single working set multiple times o Cases where the computation depends on previously computed values  E.g., the Fibonacci series, where each value is the sum of the previous two  If the dataset is small enough, compute it on a single machine o Algorithms that depend on a shared global state  If task synchronization is required, MapReduce is not a good choice  E.g., online learning, Monte Carlo simulation

MapReduce Other Implementations □Google o The original implementation, not available outside of Google □Hadoop o An open-source implementation in Java o Uses HDFS for storage □Aster Data o A cluster-optimized SQL database that implements MapReduce

Literature – Further Reading □Dean, Jeffrey and Ghemawat, Sanjay; “MapReduce: Simplified Data Processing on Large Clusters”, Proceedings of the 6th Symposium on Operating Systems Design & Implementation (OSDI ’04), 2004. □Rajaraman, Anand and Ullman, Jeffrey; “Mining of Massive Datasets”, New York, NY, USA: Cambridge University Press, 2011. □Leskovec, Jure, Rajaraman, Anand, and Ullman, Jeffrey; “Mining of Massive Datasets – Online Course”.

Literature – Further Reading □Ilya Katsov, “MapReduce Patterns, Algorithms, and Use Cases”, Highly Scalable Blog, February 1st (…educe-patterns/) □Jon Weissman, “Limitations of Mapreduce – where not to use Mapreduce”, CSci 8980 Big Data and the Cloud, October 10th (…mapreduce-where-not-to.html)