Cloud Computing Lecture #3: More MapReduce
Jimmy Lin
The iSchool, University of Maryland
Wednesday, September 10, 2008

This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States license; see the license for details.

Material adapted from: Jimmy Lin. Scalable Language Processing Algorithms for the Masses: A Case Study in Computing Word Co-occurrence Matrices with MapReduce. Proceedings of EMNLP 2008.

Today's Topics
- Synchronization and coordinating partial results
- Use of complex keys and values

Managing Dependencies
Remember: mappers run in isolation.
- You have no idea in what order the mappers run
- You have no idea on what node the mappers run
- You have no idea when each mapper finishes
Tools for synchronization:
- Ability to hold state in the reducer across multiple key-value pairs
- Sorting function for keys
- Partitioner
- Cleverly-constructed data structures

Motivating Example
Term co-occurrence matrix for a text collection:
- M = N x N matrix (N = vocabulary size)
- M_ij: number of times terms i and j co-occur in some context (for concreteness, let's say context = sentence)
Why? Distributional profiles are a way of measuring semantic distance, and semantic distance is useful for many language processing tasks.
"You shall know a word by the company it keeps" (Firth, 1957); see, e.g., Mohammad and Hirst (EMNLP, 2006).

MapReduce: Large Counting Problems
A term co-occurrence matrix for a text collection is a specific instance of a large counting problem:
- A large event space (number of terms)
- A large number of events (the collection itself)
- Goal: keep track of interesting statistics about the events
Basic approach:
- Mappers generate partial counts
- Reducers aggregate partial counts
How do we aggregate partial counts efficiently?

First Try: "Pairs"
Each mapper takes a sentence:
- Generate all co-occurring term pairs
- For all pairs, emit (a, b) → count
Reducers sum up the counts associated with these pairs (a sketch follows below). Use combiners!
Note: in all my slides, I denote a key-value pair as k → v
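As a concrete illustration, here is a minimal sketch of the "pairs" approach in Hadoop (Java). This is not the lecture's implementation: for brevity the pair is encoded as a single Text key of the form "a:b" rather than a custom WritableComparable (see the data-types slides below), and all class names are illustrative.

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;

    public class PairsCooccurrence {
      public static class PairsMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text pair = new Text();

        @Override
        public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
          String[] terms = value.toString().trim().split("\\s+");  // assume one sentence per line
          for (String a : terms) {
            for (String b : terms) {
              if (a.equals(b)) continue;
              pair.set(a + ":" + b);
              context.write(pair, ONE);                            // emit (a, b) -> 1
            }
          }
        }
      }

      // Also usable as a combiner, since summing is associative and commutative.
      public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
          int sum = 0;
          for (IntWritable v : values) sum += v.get();             // aggregate partial counts
          context.write(key, new IntWritable(sum));
        }
      }
    }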

"Pairs" Analysis
Advantages:
- Easy to implement, easy to understand
Disadvantages:
- Lots of pairs to sort and shuffle around (upper bound?)

Another Try: "Stripes"
Idea: group together pairs into an associative array.
Each mapper takes a sentence:
- Generate all co-occurring term pairs
- For each term a, emit a → { b: count_b, c: count_c, d: count_d, … }
Reducers perform an element-wise sum of associative arrays.
For example, the pairs
(a, b) → 1, (a, c) → 2, (a, d) → 5, (a, e) → 3, (a, f) → 2
collapse into the single stripe
a → { b: 1, c: 2, d: 5, e: 3, f: 2 }
and the reducer sums stripes element-wise:
a → { b: 1, d: 5, e: 3 } + a → { b: 1, c: 2, d: 2, f: 2 } = a → { b: 2, c: 2, d: 7, e: 3, f: 2 }
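A sketch of the same job in the "stripes" style, using Hadoop's MapWritable as the associative array (again illustrative names, not the lecture's code):

    import java.io.IOException;
    import java.util.Map;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.MapWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.io.Writable;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;

    public class StripesCooccurrence {
      public static class StripesMapper extends Mapper<Object, Text, Text, MapWritable> {
        @Override
        public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
          String[] terms = value.toString().trim().split("\\s+");  // assume one sentence per line
          for (String a : terms) {
            MapWritable stripe = new MapWritable();                // a's associative array
            for (String b : terms) {
              if (a.equals(b)) continue;
              Text neighbor = new Text(b);
              IntWritable count = (IntWritable) stripe.get(neighbor);
              stripe.put(neighbor, new IntWritable(count == null ? 1 : count.get() + 1));
            }
            context.write(new Text(a), stripe);                    // emit a -> { b: count_b, ... }
          }
        }
      }

      // Element-wise sum of stripes; also usable as a combiner.
      public static class StripesReducer extends Reducer<Text, MapWritable, Text, MapWritable> {
        @Override
        public void reduce(Text key, Iterable<MapWritable> stripes, Context context)
            throws IOException, InterruptedException {
          MapWritable sum = new MapWritable();
          for (MapWritable stripe : stripes) {
            for (Map.Entry<Writable, Writable> e : stripe.entrySet()) {
              IntWritable prev = (IntWritable) sum.get(e.getKey());
              int add = ((IntWritable) e.getValue()).get();
              sum.put(e.getKey(), new IntWritable(prev == null ? add : prev.get() + add));
            }
          }
          context.write(key, sum);
        }
      }
    }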

"Stripes" Analysis
Advantages:
- Far less sorting and shuffling of key-value pairs
- Can make better use of combiners
Disadvantages:
- More difficult to implement
- Underlying object is more heavyweight
- Fundamental limitation in terms of the size of the event space

(Slide shows experimental results comparing the two approaches.)
Cluster size: 38 cores. Data source: the Associated Press Worldstream (APW) portion of the English Gigaword Corpus (v3), which contains 2.27 million documents (1.8 GB compressed, 5.7 GB uncompressed).

Conditional Probabilities
- How do we compute conditional probabilities from counts?
- Why do we want to do this?
- How do we do this with MapReduce?
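The computation itself is the standard maximum-likelihood estimate from the co-occurrence counts (spelled out here since the slide leaves it implicit):

    P(B | A) = count(A, B) / count(A) = count(A, B) / Σ_B' count(A, B')

That is, the count for the pair (A, B) is divided by the marginal count for A, which is the sum of the counts over all pairs with A on the left.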

P(B|A): "Pairs"
For this to work:
- Must emit an extra (a, *) for every b_n in the mapper
- Must make sure all pairs with the same left element a get sent to the same reducer (use a Partitioner)
- Must make sure (a, *) comes first (define the sort order)
Example (the reducer holds the (a, *) value in memory):
(a, *) → 32
(a, b_1) → 3 ⇒ (a, b_1) → 3 / 32
(a, b_2) → 12 ⇒ (a, b_2) → 12 / 32
(a, b_3) → 7 ⇒ (a, b_3) → 7 / 32
(a, b_4) → 1 ⇒ (a, b_4) → 1 / 32
…
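A sketch of the two moving parts this requires, keeping the "a:b" Text encoding from the earlier pairs sketch and using "*" as the special right element (all names illustrative). One convenient property of this encoding, assuming plain ASCII terms: '*' precedes letters and digits in the byte ordering that Text keys are sorted by, so (a, *) already arrives before every (a, b) without a custom sort comparator.

    import java.io.IOException;
    import org.apache.hadoop.io.DoubleWritable;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Partitioner;
    import org.apache.hadoop.mapreduce.Reducer;

    // Partition on the left element only, so (a, *) and every (a, b)
    // go to the same reducer.
    public class LeftElementPartitioner extends Partitioner<Text, IntWritable> {
      @Override
      public int getPartition(Text key, IntWritable value, int numPartitions) {
        String left = key.toString().split(":")[0];                // "a" from "a:b"
        return (left.hashCode() & Integer.MAX_VALUE) % numPartitions;
      }
    }

    class CondProbReducer extends Reducer<Text, IntWritable, Text, DoubleWritable> {
      private double marginal = 0.0;                               // state held across keys

      @Override
      public void reduce(Text key, Iterable<IntWritable> values, Context context)
          throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable v : values) sum += v.get();               // total count for this key
        if (key.toString().endsWith(":*")) {
          marginal = sum;                                          // (a, *) arrives first: remember it
        } else {
          context.write(key, new DoubleWritable(sum / marginal));  // emit (a, b) -> count / marginal
        }
      }
    }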

P(B|A): "Stripes"
Easy! One pass to compute (a, *), and another pass to directly compute P(B|A):
a → { b_1: 3, b_2: 12, b_3: 7, b_4: 1, … }
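Both passes can run over the completed stripe inside one reducer call: one pass over the associative array to sum the values (giving the marginal (a, *)), and a second to normalize each entry. A self-contained sketch (the class name is illustrative):

    import java.util.Map;
    import org.apache.hadoop.io.DoubleWritable;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.MapWritable;
    import org.apache.hadoop.io.Writable;

    public class StripeNormalizer {
      // Sum the stripe to get the marginal count (a, *), then divide each
      // entry by it to turn raw counts into conditional probabilities P(b|a).
      public static MapWritable normalize(MapWritable stripe) {
        double marginal = 0.0;
        for (Writable v : stripe.values()) {
          marginal += ((IntWritable) v).get();
        }
        MapWritable probs = new MapWritable();
        for (Map.Entry<Writable, Writable> e : stripe.entrySet()) {
          probs.put(e.getKey(), new DoubleWritable(((IntWritable) e.getValue()).get() / marginal));
        }
        return probs;
      }
    }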

Synchronization in Hadoop
Approach 1: turn synchronization into an ordering problem.
- Sort keys into the correct order of computation
- Partition the key space so that each reducer gets the appropriate set of partial results
- Hold state in the reducer across multiple key-value pairs to perform the computation
- Illustrated by the "pairs" approach
Approach 2: construct data structures that "bring the pieces together."
- Each reducer receives all the data it needs to complete the computation
- Illustrated by the "stripes" approach

Issues and Tradeoffs
Number of key-value pairs:
- Object creation overhead
- Time for sorting and shuffling pairs across the network
Size of each key-value pair:
- De/serialization overhead
Combiners make a big difference!
- RAM vs. disk and network
- Arrange data to maximize opportunities to aggregate partial results

Data Types in Hadoop
Writable — defines a de/serialization protocol. Every data type in Hadoop is a Writable.
WritableComparable — defines a sort order. All keys must be of this type (but not values).
IntWritable, LongWritable, Text, … — concrete classes for different data types.

Complex Data Types in Hadoop
How do you implement complex data types?
The easiest way:
- Encode it as Text, e.g., (a, b) = "a:b"
- Use regular expressions to parse and extract data
- Works, but pretty hack-ish
The hard way (sketched below):
- Define a custom implementation of WritableComparable
- Must implement: readFields, write, compareTo
- Computationally efficient, but slow for rapid prototyping
Alternatives:
- Cloud 9 offers two other choices: Tuple and JSON
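A minimal sketch of the hard way: a custom WritableComparable for a pair of strings, with the three required methods (the class name is illustrative):

    import java.io.DataInput;
    import java.io.DataOutput;
    import java.io.IOException;
    import org.apache.hadoop.io.WritableComparable;

    public class PairOfStrings implements WritableComparable<PairOfStrings> {
      private String left;
      private String right;

      public PairOfStrings() {}  // Hadoop needs a no-arg constructor for deserialization

      public PairOfStrings(String left, String right) {
        this.left = left;
        this.right = right;
      }

      @Override
      public void write(DataOutput out) throws IOException {     // serialization
        out.writeUTF(left);
        out.writeUTF(right);
      }

      @Override
      public void readFields(DataInput in) throws IOException {  // deserialization
        left = in.readUTF();
        right = in.readUTF();
      }

      @Override
      public int compareTo(PairOfStrings other) {                // sort order: left, then right
        int cmp = left.compareTo(other.left);
        return cmp != 0 ? cmp : right.compareTo(other.right);
      }

      @Override
      public int hashCode() {                                    // used by the default partitioner
        return left.hashCode() * 31 + right.hashCode();
      }

      @Override
      public boolean equals(Object o) {
        if (!(o instanceof PairOfStrings)) return false;
        PairOfStrings p = (PairOfStrings) o;
        return left.equals(p.left) && right.equals(p.right);
      }
    }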

Questions?