CS 540 Database Management Systems


CS 540 Database Management Systems
Lecture 11: Parallel DB & Map/Reduce
Some slides due to Kevin Chang

Parallel vs. Distributed DB
- Fully integrated system, logically a single machine
- No notion of site autonomy
- Centralized schema
- All queries start at a well-defined "host"

Natural parallelism: relations in and out
- Pipeline: piping the output of one operator into the next
- Partition: N operator clones, each processing 1/N of the input
- Observation: this is essentially sequential programming; each clone runs any sequential program
- Intuition for both pipeline and partition: horizontally decompose the input (into single tuples or small chunks) and process the pieces in sequence or concurrently

Parallel data processing: performance metrics
- Speedup: constant problem, growing system
  speedup = small-system elapsed time / big-system elapsed time
  linear speedup: an N-times larger system yields a speedup of N
- Scaleup: grow both the system and the problem
  scaleup = elapsed time of 1 system on 1 problem / elapsed time of N systems on N problems
  linear scaleup if scaleup = 1
- Some problems have super-linear increase in cost, e.g., the n log(n) cost of sorting
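As a worked illustration of the two ratios (a minimal sketch; the timing numbers below are hypothetical, not from the lecture):

```python
# Hypothetical numbers, for illustration only.
def speedup(t_small_system, t_big_system):
    # Same problem on a bigger system: linear if an N-times
    # larger system gives a ratio of N.
    return t_small_system / t_big_system

def scaleup(t_1_system_1_problem, t_N_system_N_problem):
    # N-times system on an N-times problem: linear if the ratio is 1.
    return t_1_system_1_problem / t_N_system_N_problem

print(speedup(100.0, 12.5))   # 8.0 on 10 nodes: sub-linear (< 10)
print(scaleup(100.0, 110.0))  # ~0.91: slightly sub-linear
```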

Speedup & scaleup barriers
- Startup: time to start a parallel operation, e.g., creating processes, opening files, …
- Interference: slowdown from access to shared resources, e.g., hotspots, logs, communication cost, more I/O accesses
- Skew: if tuples are not uniformly distributed, some processors have to do far more work; service time equals the slowest parallel step of the job. Mitigate by optimizing partitioning, the number of workers, …

Speedup

Parallel architectures
- Shared memory
- Shared disk
- Shared nothing
Pros and cons of each? Consider both software development (programming) and hardware development (system scalability).

Architecture comparison

  Shared Memory: easy to program; difficult to build; difficult to scale up
  Shared Disk: in between
  Shared Nothing: hard to program; easy to build; easy to scale up

- Shared nothing is a good fit for DB applications: data centers and web servers run relatively independent tasks (queries)
- Example systems: Oracle RAC (shared disk); Teradata, Tandem, Greenplum (shared nothing)
- Will the winner be a hybrid of shared memory & shared nothing? e.g., distributed shared memory (Encore, Spark)

(Horizontal) data partitioning
Relation R is split into P chunks R_0, …, R_{P-1}, stored at the P nodes.
- Round robin: tuple t_i goes to chunk (i mod P)
- Hash on attribute A: tuple t goes to chunk h(t.A) mod P
- Range on attribute A: tuple t goes to chunk i if v_{i-1} < t.A ≤ v_i
Why not vertical partitioning? Consider load balancing and directed queries.

Horizontal data partitioning: trade-offs (see the sketch below)
- Round robin — query: cannot be directed; load: uniform distribution
- Hash on attribute A — query: can direct equality predicates; load: roughly randomized
- Range on attribute A — query: can direct range queries, equijoins, and group-by; load: depends on the query's range of interest
- An index is created at all sites; the primary index records where each tuple resides
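A minimal Python sketch of the three partitioning rules (the tuple layout, attribute name "A", and P = 4 are illustrative assumptions):

```python
# Hypothetical sketch: P nodes; tuples are dicts with partitioning attribute "A".
P = 4

def round_robin(i):
    # Tuple t_i goes to chunk (i mod P).
    return i % P

def hash_partition(t):
    # Tuple t goes to chunk h(t.A) mod P.
    # (A real system uses a deterministic hash; Python salts str hashes per run.)
    return hash(t["A"]) % P

def range_partition(t, bounds):
    # bounds = [v_1, ..., v_{P-1}], ascending; tuple t goes to the first
    # chunk i with t.A <= v_i; the last chunk catches everything larger.
    for i, v in enumerate(bounds):
        if t["A"] <= v:
            return i
    return len(bounds)

print(round_robin(7))                            # 3
print(hash_partition({"A": 42}))                 # some chunk in 0..3
print(range_partition({"A": 42}, [10, 50, 90]))  # 1
```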

Query execution
- Query manager: parses and optimizes the query, generates the operator tree; sends it to a site (if a single-site query) or to the dispatcher
- Dispatcher: gives the query to a scheduler (simple load balancing)
- Scheduler: passes pieces to operator processes at the sites
- Site query processor: runs the query processes; results are sent back through the scheduler to the query manager

Control messages: three times as many as operators in the query tree
- Scheduler → Processor: initiate
- Processor → Scheduler: ID or port to talk to, e.g., for later data movement
- Operator → Scheduler: done

Selection
Selection(R) = Union(Selection(R_1), …, Selection(R_n))
- Initiate a selection operator at each relevant site
- If the predicate is on the partitioning attribute (range or hash partitioning), send the operator only to the overlapping sites; otherwise send it to all sites (see the sketch below)
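A minimal sketch of the site-pruning idea for a range-partitioned relation (the ranges and half-open interval convention are illustrative assumptions):

```python
# Hypothetical: site i holds tuples with lo_i <= A < hi_i.
ranges = [(0, 100), (100, 200), (200, 300), (300, 400)]

def relevant_sites(pred_lo, pred_hi):
    # Send the selection only to sites whose range overlaps [pred_lo, pred_hi).
    return [i for i, (lo, hi) in enumerate(ranges)
            if lo < pred_hi and pred_lo < hi]

print(relevant_sites(150, 250))  # [1, 2] -- the other sites are skipped
```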

Hash join: centralized
- Partition phase: using M main-memory buffers (one input buffer, a hash function, and M-1 output buffers), partition relations R and S on disk; R tuples in bucket i can only match S tuples in bucket i
- Probe phase: read in a partition of R (the blocks of bucket R_i must fit in fewer than M-1 pages), then scan the matching partition of S through the input buffer, searching for matches and writing the join result through the output buffer
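A minimal in-memory sketch of the two phases (illustrative; a real implementation streams partitions to and from disk rather than holding them in Python lists):

```python
from collections import defaultdict

def hash_join(R, S, num_partitions=4):
    # Phase 1: partition both relations with the same hash function,
    # so R tuples in bucket i can only match S tuples in bucket i.
    r_parts, s_parts = defaultdict(list), defaultdict(list)
    for key, payload in R:
        r_parts[hash(key) % num_partitions].append((key, payload))
    for key, payload in S:
        s_parts[hash(key) % num_partitions].append((key, payload))
    # Phase 2: per bucket, build a hash table on R_i (assumed to fit in
    # memory) and probe it with the matching partition S_i.
    for i in range(num_partitions):
        table = defaultdict(list)
        for key, payload in r_parts[i]:
            table[key].append(payload)
        for key, s_payload in s_parts[i]:
            for r_payload in table.get(key, []):
                yield (key, r_payload, s_payload)

print(list(hash_join([(1, "r1"), (2, "r2")], [(1, "s1"), (3, "s3")])))
# -> [(1, 'r1', 's1')]
```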

Parallel hybrid hash join
- Partition relation R into logical buckets using a partitioning split table, spread across the disk sites (site j holds pieces R_{j,1}, …, R_{j,k})
- Each logical bucket fits in the aggregated main memory of the M joining processors
- A joining split table then routes each bucket's tuples to the M joining processors

Aggregate operations
- Aggregate functions: Count, Sum, Avg, Max, Min, MaxN, MinN, Median
  e.g., select Sum(sales) from Sales group by timeID
- Each site computes its piece in parallel; the final result is combined at a single site
- Example: Average(R) — what should each R_i return, and how are the pieces combined? (see the sketch below)
- Can we always compute "piecewise"? No: e.g., Median(), MaxN, MinN
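A sketch of the piecewise idea for Average (data values are illustrative): each site returns a partial (sum, count) rather than a local average, because averaging the averages would be wrong; the combining site just adds the pieces.

```python
# Each site i computes a partial aggregate over its chunk R_i.
def local_avg_piece(chunk):
    return (sum(chunk), len(chunk))   # (partial sum, partial count)

def combine_avg(pieces):
    total = sum(s for s, _ in pieces)
    count = sum(c for _, c in pieces)
    return total / count

chunks = [[1, 2, 3], [10, 20], [100]]     # three sites' data
pieces = [local_avg_piece(c) for c in chunks]
print(combine_avg(pieces))  # 22.666... == average of all six values
```

Median has no such decomposition: the combining site would need the full data, which is why it cannot be computed piecewise.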

Performance results
Almost linear speedup and constant scaleup — close to perfect, and almost expected:
- little startup overhead
- no interference among disjoint data/operations
- the major operations (equijoin) are insensitive to data skew: only the "amount" of data matters for an equijoin, since it can be partitioned well; with arbitrary predicates the picture would be different

Other issues
- Concurrency control: 2PL with a centralized deadlock detector
- Recovery management: ARIES, with a log manager at each site
- Failure management: chained declustering — copy each relation and keep parts of the copy at other sites. (What is the benefit compared to the finer-grained partitioning of interleaved declustering?) Each read goes to the primary version; each update goes to both versions. If a node fails, redirect its requests to the backup node and redistribute the load on the backup node.

Missing
- query optimization for parallel execution
- load balancing
- skew handling

Map/Reduce Framework

Motivation
Parallel databases leverage parallelism to process large data sets efficiently, but:
- the data must be in relational format
- the data must live inside a database system
- they bring unwanted functionality: logging, …
- one must buy and maintain a complex RDBMS
The majority of data sets do not meet these conditions, e.g., when one wants to scan millions of text files and compute some statistics.

Cluster
- A large number (100 – 100,000) of servers (nodes) connected by a high-speed network; many racks, each holding a small number of servers
- If a node crashes once a year, how many crashes does a cluster of 9,000 nodes see every day? Every hour? Assuming uniformity: 9,000 crashes/year ÷ 365 ≈ 25 per day, i.e., about one every hour
- Crashes happen frequently, so the system must handle them

Distributed File System (DFS)
- Manages very large files: TBs, PBs, …
- A file is partitioned into chunks, e.g., 64 MB each
- Each chunk is replicated multiple times across different racks
- Implementations: Google's DFS (GFS), Hadoop's DFS (HDFS), …
- How an application connects to the chunk nodes depends on the implementation

Parallel data processing in a cluster
- Data partitioning: partition (or repartition) the file across nodes, compute the output on each node, then aggregate the results
- Other types of parallelism? Another type is pipelining.
- Map/Reduce: a programming model and framework that supports parallel data processing; proposed by Google researchers; a natural model for many problems
- Simple data model: a bag of (key, value) tuples — input is a bag of (input_key, value), output is a bag of (output_key, value); the input and output of an M/R program may use different keys

Map/Reduce
An M/R program has two stages:
- map: input = (input_key, value); extracts the relevant information from each input tuple; output = bag of (intermediate_key, value). Similar to GROUP BY in SQL.
- reduce: input = (intermediate_key, bag of values); aggregates the information over the bag of tuples — summarize, filter, transform, …; output = bag of (output_key, value). Similar to an aggregation function in SQL.
The system applies the map function in parallel to all (input_key, value) pairs in the input file, then groups all pairs with the same intermediate key and passes each bag of values to the reduce function (see the sketch below).
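A minimal single-process simulation of this execution model (illustrative only; a real framework distributes each of the three steps across workers):

```python
from collections import defaultdict

def run_mapreduce(inputs, map_fn, reduce_fn):
    # 1. Apply map to every (input_key, value) pair
    #    (in parallel in a real system; sequentially here).
    intermediate = []
    for k, v in inputs:
        intermediate.extend(map_fn(k, v))
    # 2. Group all pairs by intermediate key (the "shuffle").
    groups = defaultdict(list)
    for k, v in intermediate:
        groups[k].append(v)
    # 3. Apply reduce to each (intermediate_key, bag of values).
    output = []
    for k, vs in groups.items():
        output.extend(reduce_fn(k, vs))
    return output

# Word count with this skeleton (compare the pseudocode on the next slide):
docs = [("d1", "a b a"), ("d2", "b")]
wc = run_mapreduce(
    docs,
    lambda k, v: [(w, 1) for w in v.split()],
    lambda k, vs: [(k, sum(vs))])
print(sorted(wc))  # [('a', 2), ('b', 2)]
```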

Example
Counting the number of occurrences of each word in a large collection of documents:

  map(String key, String value) {
    // key: document id
    // value: document content
    for each word w in value
      Output-interim(w, "1");
  }

  reduce(String key, Iterator values) {
    // key: a word
    // values: a bag of counts
    int result = 0;
    for each v in values
      result += parseInt(v);
    Output(key, String.valueOf(result));
  }

Hadoop implementation

Example: word count
[Figure: input read from the DFS by map tasks, intermediate results written to local storage, final output written by reduce tasks back to the DFS]

Inside the M/R framework
- The master node partitions the input file into M splits by key, assigns workers (nodes) to the M map tasks (usually #workers < #map tasks), and keeps track of their progress
- Map workers write their output to local disk, partitioned into R regions
- The master assigns workers to the R reduce tasks (usually #workers < #reduce tasks)
- Reduce workers read their regions from the map workers' local disks
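The split of map output into R regions is typically driven by a hash partition function; a minimal sketch of that common default (an assumption about the framework, not lecture material):

```python
def partition(intermediate_key, R):
    # Map workers write each output pair into local region
    # hash(key) mod R; reduce task r later fetches region r from
    # every map worker's local disk, so all values for a given
    # key end up at the same reducer.
    return hash(intermediate_key) % R
```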

Fault tolerance
- The master pings workers periodically; if one is down, it reassigns that worker's tasks to another worker
- Straggler: a node that takes an unusually long time to complete one of the last tasks, e.g., because the cluster scheduler has assigned other tasks to the node, or a bad disk forces frequent correctable errors, …
- Stragglers are a main cause of slowdown; the M/R solution is backup execution of the last few remaining in-progress tasks

Optimizing M/R jobs is hard!
- Choice of M and R: larger is better for load balancing, but the master's control and fault-tolerance overhead needs O(M×R) memory
- Typical choice: M = number of chunks; R much smaller, rule of thumb R = 1.5 × number of nodes
- Over 100 other parameters (partition function, sort factor, …); around 50 of them affect running time
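Working through the rule of thumb with hypothetical numbers (a sketch; the input size, chunk size, and node count below are made up):

```python
# Illustrative sizing following the slide's rule of thumb.
input_size_gb = 640
chunk_size_gb = 0.064          # 64 MB chunks
nodes = 100

M = int(input_size_gb / chunk_size_gb)  # one map task per chunk: 10000
R = int(1.5 * nodes)                    # 150 reduce tasks
print(M, R)  # master bookkeeping is O(M*R) entries in memory
```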

Discussion
Advantages of M/R:
- manages scheduling and fault tolerance
- can be used over non-relational data, in particular for Extraction-Transformation-Loading (ETL) applications
Disadvantages of M/R:
- limited data model and queries
- difficult to write complex programs: testing & debugging, chaining multiple map/reduce jobs, …
- optimization is hard
Remind you of a similar problem? Reapply the principles of RDBMS implementation: declarative language, query processing and optimization, … This repeats with every technological shift — sensor data => stream DBMSs, spreadsheets => spreadsheet DBMSs, … — so it is important to learn the principles!

Parallel RDBMSs / declarative languages over M/R
- Hive (by Facebook): HiveQL, an SQL-like language; open source
- Pig Latin (by Yahoo!): a new language, similar to relational algebra
- BigQuery (by Google): SQL over Map/Reduce; proprietary
- …

What you should know
- Performance metrics for parallel data processing
- Parallel data processing architectures
- Parallelization methods
- Query processing in parallel DBs
- Cluster computing & DFS
- The Map/Reduce programming model and framework
- Advantages and disadvantages of using Map/Reduce

Carry-away messages
- Usability matters: Map/Reduce was easier to use on new platforms
- Sometimes we have to rebuild a framework: parallel databases => M/R => parallel DBs over M/R