SALSASALSA Twister: A Runtime for Iterative MapReduce Jaliya Ekanayake Community Grids Laboratory, Digital Science Center Pervasive Technology Institute Indiana University HPDC 2010, MAPREDUCE'10 Workshop, Chicago, 06/22/2010

SALSASALSA Acknowledgements Co-authors: Hui Li, Bingjing Zhang, Thilina Gunarathne, Seung-Hee Bae, Judy Qiu, Geoffrey Fox, School of Informatics and Computing, Indiana University Bloomington. Team at IU.

SALSASALSA Motivation
Data Deluge – experienced in many domains
MapReduce – data centered, QoS
Classic Parallel Runtimes (MPI) – efficient and proven techniques
Goal: expand the applicability of MapReduce to more classes of applications
[Figure: spectrum of programming models – Map-Only (input -> map -> output), MapReduce (input -> map -> reduce), Iterative MapReduce (input -> map -> reduce with iterations), and more extensions toward tightly coupled computations such as Pij]

SALSASALSA Features of Existing Architectures (1)
Google MapReduce, Apache Hadoop, Sector/Sphere, Dryad/DryadLINQ (DAG based)
Programming Model
– MapReduce (optionally map-only)
– Focus on single-step MapReduce computations (DryadLINQ supports more than one stage)
Input and Output Handling
– Distributed data access (HDFS in Hadoop, Sector in Sphere, and shared directories in Dryad)
– Outputs normally go to the distributed file systems
Intermediate data
– Transferred via file systems (local disk -> HTTP -> local disk in Hadoop)
– Easy to support fault tolerance
– Considerably high latencies

SALSASALSA Features of Existing Architectures (2)
Scheduling
– A master schedules tasks to slaves depending on availability
– Dynamic scheduling in Hadoop, static scheduling in Dryad/DryadLINQ
– Natural load balancing
Fault Tolerance
– Data flows through disks -> channels -> disks
– A master keeps track of the data products
– Re-execution of failed or slow tasks
– Overheads are justifiable for large single-step MapReduce computations, but not for iterative MapReduce

SALSASALSA A Programming Model for Iterative MapReduce
Distributed data access
In-memory MapReduce
Distinction between static data and variable data (data flow vs. δ flow)
Cacheable map/reduce tasks (long-running tasks)
Combine operation
Support for fast intermediate data transfers
[Figure: the user program runs Configure(), then iterates Map(Key, Value) -> Reduce(Key, List<Value>) -> Combine(Map<Key, Value>), then Close(); static data is configured once, while variable data flows (δ flow) through each iteration]
Constraints for side-effect-free map/reduce tasks: computation complexity >> complexity of the size of the mutated data (state)

SALSASALSA Twister Programming Model
Typical structure of a user program (running in the user program's process space):

    configureMaps(..)      // two configuration options: 1. using local disks (only for maps)
                           //                            2. using the pub-sub bus
    configureReduce(..)
    while(condition){
        runMapReduce(..)   // Map()/Reduce() run in the worker nodes; the Combine() operation
                           // collects the results back to the user program
        updateCondition()
    } //end while
    close()

Communications/data transfers go via the pub-sub broker network across iterations (workers may also send <Key, Value> pairs directly). Map/reduce tasks are cacheable, and maps can read their static data from the local disks of the worker nodes.

SALSASALSA Twister API
1. configureMaps(PartitionFile partitionFile)
2. configureMaps(Value[] values)
3. configureReduce(Value[] values)
4. runMapReduce()
5. runMapReduce(KeyValue[] keyValues)
6. runMapReduceBCast(Value value)
7. map(MapOutputCollector collector, Key key, Value val)
8. reduce(ReduceOutputCollector collector, Key key, List<Value> values)
9. combine(Map<Key, Value> keyValues)
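To make the driver-side flow concrete, the following is a minimal sketch of how a user program might drive an iterative computation with the calls listed above. The IterativeDriver interface, the Value type, the return type of runMapReduceBCast, and the convergence test are hypothetical stand-ins defined here only for illustration; they are not Twister's actual classes.

    // Hypothetical stand-ins for illustration only; Twister's real types differ.
    interface Value {}

    // A hypothetical driver exposing the calls named on the API slide.
    interface IterativeDriver {
        void configureMaps(Value[] values);      // cache static data in long-running map tasks
        void configureReduce(Value[] values);    // cache static data in reduce tasks
        Value runMapReduceBCast(Value value);    // broadcast variable data, run one iteration,
                                                 // and return the combined result to the driver
        void close();                            // release cached tasks and broker connections
    }

    public class IterativeJobSketch {
        public static void run(IterativeDriver driver, Value[] staticData, Value initial) {
            driver.configureMaps(staticData);    // static data is configured once, outside the loop
            driver.configureReduce(new Value[0]);

            Value current = initial;
            boolean converged = false;
            int iteration = 0;
            while (!converged && iteration < 100) {
                // Only the small variable data (δ flow) moves in each iteration.
                current = driver.runMapReduceBCast(current);
                converged = hasConverged(current); // user-defined convergence test
                iteration++;
            }
            driver.close();
        }

        private static boolean hasConverged(Value v) {
            return false; // placeholder; a real program would compare successive results
        }
    }

The key design point the sketch tries to show is that configuration happens once, while the loop only moves the variable data and reuses the cached tasks.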

SALSASALSA Twister Architecture
The master node runs the Twister Driver and the user's main program.
Each worker node runs a Twister daemon with a worker pool, cacheable map/reduce tasks, and a local disk.
All nodes communicate through a pub/sub broker network; one broker serves several Twister daemons.
Scripts perform data distribution, data collection, and partition file creation.

SALSASALSA Input/Output Handling
Data Manipulation Tool:
– Provides basic functionality to manipulate data across the local disks of the compute nodes
– Data partitions are assumed to be files (in contrast to the fixed-size blocks in Hadoop)
– Supported commands: mkdir, rmdir, put, putall, get, ls, copy resources, create partition file
Data resides in a common directory in the local disks of individual nodes (e.g. /tmp/twister_data), and the tool produces a partition file describing the distribution across Node 0 .. Node n.

SALSASALSA Partition File
The partition file allows duplicates: one data partition may reside in multiple nodes, and in the event of a failure the duplicates are used to re-schedule the tasks.
Each entry records a file number, node IP, daemon number, and file partition path, for example:
/home/jaliya/data/mds/GD-4D-23.bin
/home/jaliya/data/mds/GD-4D-0.bin
/home/jaliya/data/mds/GD-4D-27.bin
/home/jaliya/data/mds/GD-4D-20.bin
/home/jaliya/data/mds/GD-4D-23.bin
/home/jaliya/data/mds/GD-4D-25.bin
/home/jaliya/data/mds/GD-4D-18.bin
/home/jaliya/data/mds/GD-4D-15.bin

SALSASALSA The Use of Pub/Sub Messaging
Intermediate data is transferred via the broker network
A network of brokers is used for load balancing
– Different broker topologies
Interspersed computation and data transfer minimizes the large-message load at the brokers
Currently supports
– NaradaBrokering
– ActiveMQ
[Figure: 100 map tasks, 10 workers in 10 nodes; map workers pull from map task queues and send outputs through the broker network to Reduce(); e.g. only ~10 tasks are producing outputs at once]
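As a rough illustration of the pub/sub transfer idea only (not Twister's actual wire protocol; the topic name and payload layout here are assumptions), a map worker could publish an intermediate <key, value> pair to a broker topic using the standard JMS API that ActiveMQ implements:

    import javax.jms.BytesMessage;
    import javax.jms.Connection;
    import javax.jms.MessageProducer;
    import javax.jms.Session;
    import javax.jms.Topic;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class MapOutputPublisher {
        public static void publish(String brokerUrl, String key, byte[] value) throws Exception {
            // Connect to one broker in the broker network (the URL is an assumption of this sketch).
            Connection connection = new ActiveMQConnectionFactory(brokerUrl).createConnection();
            connection.start();
            try {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                // Hypothetical topic name; reducers would subscribe to the same topic.
                Topic topic = session.createTopic("map.to.reduce");
                MessageProducer producer = session.createProducer(topic);

                // Serialize the intermediate <key, value> pair into a single message.
                BytesMessage message = session.createBytesMessage();
                message.writeUTF(key);
                message.writeInt(value.length);
                message.writeBytes(value);
                producer.send(message);
            } finally {
                connection.close();
            }
        }
    }

A reducer-side consumer would subscribe to the same topic and read the key and payload back in the same order.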

SALSASALSA Scheduling
Twister supports long-running tasks, which avoids unnecessary initializations in each iteration
Tasks are scheduled statically
– Supports task reuse
– May lead to inefficient resource utilization
Users are expected to randomize the data distribution to minimize processing skew caused by any skewness in the data (see the sketch below)
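For instance, "randomizing the data distribution" before creating the partition file could look like the following minimal sketch; the file names and node count are hypothetical.

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    public class RandomizedDistribution {
        public static void main(String[] args) {
            // Hypothetical partition files; in practice these are the user's data partitions.
            List<String> partitions = new ArrayList<>();
            for (int i = 0; i < 32; i++) {
                partitions.add("GD-4D-" + i + ".bin");
            }

            // Shuffle so that "heavy" partitions are not clustered on the same nodes
            // purely because of the original file ordering.
            Collections.shuffle(partitions);

            // Assign shuffled partitions round-robin to a hypothetical set of 8 nodes.
            int nodes = 8;
            for (int i = 0; i < partitions.size(); i++) {
                System.out.println("node" + (i % nodes) + " <- " + partitions.get(i));
            }
        }
    }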

SALSASALSA Fault Tolerance
Recovers at iteration boundaries; does not handle individual task failures
Assumptions:
– The broker network is reliable
– The main program and the Twister Driver have no failures
Any failure (hardware/daemons) results in the following fault-handling sequence (sketched below):
– Terminate currently running tasks (remove from memory)
– Poll for currently available worker nodes (and daemons)
– Configure map/reduce using static data (re-assign data partitions to tasks depending on data locality)
– Re-execute the failed iteration
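A minimal sketch of that recovery loop, with hypothetical hooks standing in for the driver-side operations named above (not Twister's actual interfaces):

    public class IterationFaultHandlingSketch {
        // Hypothetical hooks; each corresponds to one step of the fault-handling sequence.
        interface RecoveryHooks {
            void terminateRunningTasks();                                  // remove running tasks from memory
            java.util.List<String> pollAvailableWorkers();                 // currently available nodes/daemons
            void configureWithStaticData(java.util.List<String> workers);  // re-assign partitions by locality
            Object runIteration(Object variableData) throws Exception;     // one MapReduce iteration
        }

        public static Object runWithRecovery(RecoveryHooks hooks, Object variableData, int maxRetries)
                throws Exception {
            for (int attempt = 0; attempt <= maxRetries; attempt++) {
                try {
                    // Normal path: recovery points are iteration boundaries.
                    return hooks.runIteration(variableData);
                } catch (Exception failure) {
                    // Fault-handling sequence from the slide.
                    hooks.terminateRunningTasks();
                    java.util.List<String> workers = hooks.pollAvailableWorkers();
                    hooks.configureWithStaticData(workers);
                    // The loop then re-executes the failed iteration with the same variable data.
                }
            }
            throw new Exception("Iteration failed after " + maxRetries + " retries");
        }
    }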

SALSASALSA Performance Evaluation
Hardware Configurations
We use the academic release of DryadLINQ, Apache Hadoop, and Twister for our performance comparisons. Both Twister and Hadoop use JDK (64-bit) version 1.6.0_18, while DryadLINQ and MPI use Microsoft .NET version 3.5.

SALSASALSA Pairwise Sequence Comparison using Smith-Waterman-Gotoh
A typical MapReduce computation
Comparable efficiencies across the runtimes; Twister performs the best

SALSASALSA PageRank – An Iterative MapReduce Algorithm
The well-known PageRank algorithm [1]
Used the ClueWeb09 data set [2] (1 TB in size) from CMU
Reuse of map tasks and faster communication pays off
[Figure: map (M) tasks combine the current (compressed) page ranks with a partial adjacency matrix to produce partial updates; reduce (R) and combine (C) steps partially merge the updates, which feed the next iteration]
[1] PageRank Algorithm  [2] ClueWeb09 Data Set
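For reference, the per-iteration work that the map and reduce tasks carry out is essentially the standard PageRank update. Below is a minimal, single-machine sketch of one such iteration; the graph representation and damping handling are simplified assumptions, not the exact kernel used in the Twister experiments.

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class PageRankIterationSketch {
        // One PageRank iteration: rank'(v) = (1 - d)/N + d * sum over in-links u of rank(u)/outdegree(u).
        // In the MapReduce formulation, map tasks emit the per-link contributions and
        // reduce tasks sum the contributions arriving at the same target page.
        public static Map<String, Double> iterate(Map<String, List<String>> adjacency,
                                                  Map<String, Double> ranks,
                                                  double damping) {
            int n = ranks.size();
            Map<String, Double> next = new HashMap<>();
            for (String page : ranks.keySet()) {
                next.put(page, (1.0 - damping) / n);   // teleport term
            }
            // "Map" phase: each page distributes its current rank to its out-links.
            for (Map.Entry<String, List<String>> entry : adjacency.entrySet()) {
                List<String> outLinks = entry.getValue();
                if (outLinks.isEmpty()) continue;
                double share = damping * ranks.get(entry.getKey()) / outLinks.size();
                // "Reduce" phase: contributions to the same target page are summed.
                for (String target : outLinks) {
                    next.merge(target, share, Double::sum);
                }
            }
            return next;
        }
    }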

SALSASALSA Multi-Dimensional Scaling (MDS)
Maps high-dimensional data to lower dimensions (typically 2D or 3D)
SMACOF (Scaling by MAjorizing a COmplicated Function) [1]
Serial form of the iteration (X is the current low-dimensional mapping):

    While(condition) {
        X = [A] [B] X
        C = CalcStress(X)
    }

Iterative MapReduce form:

    While(condition) {
        T = MapReduce1([B], X)
        X = MapReduce2([A], T)
        C = MapReduce3(X)
    }

[1] J. de Leeuw, "Applications of convex analysis to multidimensional scaling," Recent Developments in Statistics, 1977.
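The stress value C computed in each iteration is the usual MDS stress between the original dissimilarities and the distances of the mapped points. A minimal sketch of that calculation, assuming dense arrays and unit weights (a simplification, not the exact Twister kernel):

    public class StressSketch {
        // MDS stress: sum over i < j of (delta[i][j] - d[i][j])^2, where delta holds the
        // original pairwise dissimilarities and d the distances between the mapped points.
        public static double calcStress(double[][] delta, double[][] mappedPoints) {
            int n = delta.length;
            double stress = 0.0;
            for (int i = 0; i < n; i++) {
                for (int j = i + 1; j < n; j++) {
                    double d = euclidean(mappedPoints[i], mappedPoints[j]);
                    double diff = delta[i][j] - d;
                    stress += diff * diff;
                }
            }
            return stress;
        }

        private static double euclidean(double[] a, double[] b) {
            double sum = 0.0;
            for (int k = 0; k < a.length; k++) {
                double diff = a[k] - b[k];
                sum += diff * diff;
            }
            return Math.sqrt(sum);
        }
    }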

SALSASALSA Conclusions & Future Work Twister extends the MapReduce to iterative algorithms Several iterative algorithms we have implemented – K-Means Clustering – Pagerank – Matrix Multiplication – Multi dimensional scaling (MDS) – Breadth First Search Integrating a distributed file system Programming with side effects yet support fault tolerance

SALSASALSA Related Work
General MapReduce references:
– Google MapReduce
– Apache Hadoop
– Microsoft DryadLINQ
– Pregel: large-scale graph computing at Google
– Sector/Sphere
– All-Pairs
– SAGA: MapReduce
– Disco

SALSASALSASALSASALSA Questions? Thank you!

SALSASALSA Extra Slides

SALSASALSA Hadoop (Google) Architecture
HDFS stores blocks, manages replication, and handles failures
Map/reduce tasks are Java processes, not long running
Failed maps are re-executed; failed reducers collect data from the maps again
Data flow:
– A map task reads its input data from HDFS
– Map output goes to local disk first
– The task tracker notifies the job tracker
– The job tracker assigns map outputs to a reducer
– The reducer downloads the map outputs using HTTP
– Reduce output goes to HDFS

SALSASALSA Twister Architecture
Scripts for file manipulations
The Twister daemon is a process, but map/reduce tasks are Java threads (hybrid approach)
Map output goes directly to the reducer; reduce output goes to local disk or to a Combiner
Static data is read from the local disk (1), or static data / variable (key, value) data is received via the brokers (2)
The broker network connects the Twister Driver (main program) to the daemons
Twister API:
1. configureMaps(PartitionFile partitionFile)
2. configureMaps(Value[] values)
3. configureReduce(Value[] values)
4. String key = addToMemCache(Value value)
5. removeFromMemCache(String key)
6. runMapReduce()
7. runMapReduce(KeyValue[] keyValues)
8. runMapReduceBCast(Value value)

SALSASALSA Twister
In-memory MapReduce
Distinction between static data and variable data (data flow vs. δ flow)
Cacheable map/reduce tasks (long-running tasks)
Combine operation
Support for fast intermediate data transfers
Different synchronization and intercommunication mechanisms than those used by other parallel runtimes
[Figure: the user program runs Configure(), then iterates Map(Key, Value) -> Reduce(Key, List<Value>) -> Combine(Key, List<Value>), then Close(); static data is configured once, while variable data flows (δ flow) through each iteration]

SALSASALSA Publications
1. Jaliya Ekanayake (Advisor: Geoffrey Fox), Architecture and Performance of Runtime Environments for Data Intensive Scalable Computing, accepted for the Doctoral Showcase, SuperComputing 2009.
2. Xiaohong Qiu, Jaliya Ekanayake, Scott Beason, Thilina Gunarathne, Geoffrey Fox, Roger Barga, Dennis Gannon, Cloud Technologies for Bioinformatics Applications, accepted for publication in the 2nd ACM Workshop on Many-Task Computing on Grids and Supercomputers, SuperComputing 2009.
3. Jaliya Ekanayake, Atilla Soner Balkir, Thilina Gunarathne, Geoffrey Fox, Christophe Poulain, Nelson Araujo, Roger Barga, DryadLINQ for Scientific Analyses, accepted for publication in the Fifth IEEE International Conference on e-Science (eScience2009), Oxford, UK.
4. Jaliya Ekanayake and Geoffrey Fox, High Performance Parallel Computing with Clouds and Cloud Technologies, First International Conference on Cloud Computing (CloudComp2009), Munich, Germany. An extended version of this paper goes to a book chapter.
5. Geoffrey Fox, Seung-Hee Bae, Jaliya Ekanayake, Xiaohong Qiu, and Huapeng Yuan, Parallel Data Mining from Multicore to Cloudy Grids, High Performance Computing and Grids workshop. An extended version of this paper goes to a book chapter.
6. Jaliya Ekanayake, Shrideep Pallickara, Geoffrey Fox, MapReduce for Data Intensive Scientific Analyses, Fourth IEEE International Conference on eScience, 2008.
7. Jaliya Ekanayake, Shrideep Pallickara, and Geoffrey Fox, A Collaborative Framework for Scientific Data Analysis and Visualization, Collaborative Technologies and Systems (CTS08), 2008.
8. Shrideep Pallickara, Jaliya Ekanayake and Geoffrey Fox, A Scalable Approach for the Secure and Authorized Tracking of the Availability of Entities in Distributed Systems, 21st IEEE International Parallel & Distributed Processing Symposium (IPDPS 2007).