Parallel Applications And Tools For Cloud Computing Environments SC 10 New Orleans, USA Nov 17, 2010

Azure MapReduce

- A MapReduce runtime for Microsoft Azure built on Azure cloud services
  - Azure Compute
  - Azure BLOB storage for input/output/intermediate data storage
  - Azure Queues for task scheduling
  - Azure Tables for management/monitoring data storage
- Advantages of the cloud services
  - Distributed, highly scalable and available
  - Backed by industrial-strength data centers and technologies
  - Decentralized control
  - Dynamic scale up/down
  - No single point of failure

AzureMapReduce Features
- Familiar MapReduce programming model with a combiner step (a Hadoop-style sketch of the same model follows below)
- Fault tolerance: rerunning of failed and straggling tasks
- Web-based monitoring console
- Easy testing and deployment
- Customizable
  - Custom input and output formats
  - Custom key and value implementations
- Load-balanced, global-queue-based scheduling
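AzureMapReduce itself is a .NET runtime, so its real API is not shown here; the following is a minimal Hadoop-style Java word count, included only to illustrate the familiar map/combine/reduce programming model that the runtime mirrors.

```java
// Minimal Hadoop-style word count, for illustration of the map/combine/reduce
// model only; AzureMapReduce exposes an analogous (but .NET-based) API.
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCount {
  public static class TokenMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();
    @Override
    protected void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer it = new StringTokenizer(value.toString());
      while (it.hasMoreTokens()) {
        word.set(it.nextToken());
        context.write(word, ONE);          // emit (word, 1)
      }
    }
  }

  // Used both as the combiner (local pre-aggregation) and as the reducer.
  public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable v : values) sum += v.get();
      context.write(key, new IntWritable(sum));
    }
  }
}
```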

Advantages
- Fills the void of parallel programming frameworks on Microsoft Azure
- Well-known, easy-to-use programming model
- Overcomes the possible unreliability of cloud compute nodes
- Designed to co-exist with the eventual consistency of cloud services
- Lets the user hide the large latencies of cloud services by using coarser-grained tasks
- Minimal management/maintenance overhead

AzureMapReduce Architecture

Performance (charts): normalized performance of the Smith-Waterman all-pairs distance computation, and parallel efficiency of CAP3 sequence assembly.

Large-scale PageRank with Twister

PageRank with MapReduce
- Efficient processing of large-scale PageRank challenges current MapReduce runtimes.
- Implementations: Twister, DryadLINQ, Hadoop, MPI
- Optimization strategies
  - Load static data in memory
  - Fit partition size to memory
  - Local merge in the Reduce stage
- Results visualization with PlotViz3
  - 1K 3D vertices processed with MDS
  - The red vertex represents "wikipedia.org"

PageRank Optimization Strategies
Caching static data (Twister vs. Hadoop):
1. Implemented with Twister and Hadoop on 50 million web pages.
2. Twister caches the partitions of the web graph in memory across multiple iterations, while Hadoop has to reload each partition from disk into memory in every iteration (see the iteration sketch below).
Task granularity (DryadLINQ):
1. Implemented with DryadLINQ on 50 million web pages on a 32-node Windows HPC cluster.
2. The coarse-granularity strategy outperforms fine granularity because it saves scheduling cost and network traffic.
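The following is a framework-agnostic Java sketch (not Twister's actual API) of one PageRank iteration; the point relevant to the comparison above is that the adjacency lists, i.e. the static web-graph partition, can stay cached in memory across iterations while only the rank values change.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// One PageRank iteration over a cached graph partition. The adjacency lists
// (static data) are loaded once and reused; only `ranks` changes per iteration.
public class PageRankIteration {
  private final Map<Long, List<Long>> outLinks;   // static: cached in memory across iterations
  private final double damping = 0.85;
  private final long numPages;

  public PageRankIteration(Map<Long, List<Long>> outLinks, long numPages) {
    this.outLinks = outLinks;
    this.numPages = numPages;
  }

  // "Map" phase: each page sends rank/outDegree to its link targets.
  // "Reduce" phase: sum the contributions and apply the damping factor.
  public Map<Long, Double> iterate(Map<Long, Double> ranks) {
    Map<Long, Double> contrib = new HashMap<>();
    for (Map.Entry<Long, List<Long>> e : outLinks.entrySet()) {
      double share = ranks.getOrDefault(e.getKey(), 1.0 / numPages) / e.getValue().size();
      for (Long target : e.getValue()) {
        contrib.merge(target, share, Double::sum);   // local merge, as in the Reduce stage
      }
    }
    Map<Long, Double> next = new HashMap<>();
    for (Long page : outLinks.keySet()) {
      next.put(page, (1 - damping) / numPages + damping * contrib.getOrDefault(page, 0.0));
    }
    return next;   // fed back in as `ranks` for the next iteration
  }
}
```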

Twister BLAST

Twister-BLAST
- A simple parallel BLAST application based on the Twister MapReduce framework
- Runs on a single machine, a cluster, or the Amazon EC2 cloud platform
- Adaptable to the latest BLAST tool (BLAST+)

Twister-BLAST Architecture

Database Management
- The BLAST database is replicated to all the nodes in order to support BLAST binary execution; each map task then runs the BLAST binary against its local copy (a sketch follows below)
- Compressed before replication
- Transported through the file share script tool in Twister
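Conceptually, each Twister-BLAST map task runs the standalone BLAST+ binary over one partition of the query sequences against the locally replicated database. The Java sketch below is illustrative rather than Twister's real API; only the blastn command-line options (-query, -db, -out, -outfmt) are standard BLAST+ flags.

```java
import java.io.File;
import java.io.IOException;

// Illustrative map-task body: each task BLASTs one FASTA partition against the
// database copy that was replicated to this node beforehand.
public class BlastMapTask {
  // queryPartition: the FASTA chunk assigned to this map task
  // localDbPath:    path to the locally replicated (and decompressed) BLAST database
  public static File runBlast(File queryPartition, String localDbPath)
      throws IOException, InterruptedException {
    File output = new File(queryPartition.getAbsolutePath() + ".blast.out");
    ProcessBuilder pb = new ProcessBuilder(
        "blastn",
        "-query", queryPartition.getAbsolutePath(),
        "-db", localDbPath,
        "-out", output.getAbsolutePath(),
        "-outfmt", "6");                 // tabular output, easy to merge downstream
    pb.inheritIO();
    int exit = pb.start().waitFor();
    if (exit != 0) {
      throw new IOException("blastn exited with code " + exit);
    }
    return output;                        // collected/merged by the framework afterwards
  }
}
```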

Twister-BLAST Performance Chart on IU PolarGrid

SALSA Portal and Biosequence Analysis Workflow

Biosequence Analysis Conceptual Workflow (flowchart): Alu Sequences → Pairwise Alignment & Distance Calculation → Distance Matrix → Pairwise Clustering (producing Cluster Indices) and Multi-Dimensional Scaling (producing Coordinates) → Visualization (3D Plot)

DNA Sequencing Pipeline (diagram): modern commercial gene sequencers (Illumina/Solexa, Roche/454 Life Sciences, Applied Biosystems/SOLiD) deliver data over the Internet for read alignment; a FASTA file of N sequences is split into blocks, pairwise sequence alignment (MapReduce) produces a dissimilarity matrix of N(N-1)/2 values, and MDS and pairwise clustering (MPI) feed the PlotViz visualization. This chart illustrates our research on a pipeline mode for providing services on demand (Software as a Service, SaaS): users submit their jobs to the pipeline, and the components are services, as is the whole pipeline.

Alu and Metagenomics Workflow: the "all pairs" problem
- Data is a collection of N sequences, and we need to calculate the N² dissimilarities (distances) between sequences (all pairs).
- These cannot be treated as vectors because there are missing characters; "Multiple Sequence Alignment" (creating vectors of characters) does not seem to work when N is larger than O(100) and the sequences are hundreds of characters long.
Step 1: Calculate the N² dissimilarities (distances) between sequences (a blocked sketch follows below).
Step 2: Find families by clustering (using much better methods than K-means); since there are no vectors, use vector-free O(N²) methods.
Step 3: Map to 3D for visualization using Multidimensional Scaling (MDS), which is also O(N²).
Results: N = 50,000 runs in 10 hours (the complete pipeline above) on 768 cores.
Discussion: we need to address millions of sequences. We currently use a mix of MapReduce and MPI; Twister will do all the steps, since MDS and clustering just need MPI Broadcast/Reduce.
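A minimal Java sketch of the Step 1 "all pairs" pattern, assuming a placeholder distance(a, b) for an alignment-based dissimilarity: the N x N matrix is decomposed into blocks and only upper-triangular blocks are computed, which is exactly the structure that maps onto independent MapReduce tasks.

```java
// Blocked all-pairs dissimilarity computation (upper triangle only).
// distance(a, b) stands in for an alignment-based dissimilarity such as a
// Smith-Waterman score converted to a distance; it is a placeholder here.
public class AllPairs {
  public static double[][] computeAllPairs(String[] seqs, int blockSize) {
    int n = seqs.length;
    double[][] d = new double[n][n];
    for (int bi = 0; bi < n; bi += blockSize) {
      for (int bj = bi; bj < n; bj += blockSize) {        // each (bi, bj) block could be one task
        for (int i = bi; i < Math.min(bi + blockSize, n); i++) {
          for (int j = Math.max(bj, i + 1); j < Math.min(bj + blockSize, n); j++) {
            double dij = distance(seqs[i], seqs[j]);
            d[i][j] = dij;
            d[j][i] = dij;                                // symmetry: fill the lower triangle too
          }
        }
      }
    }
    return d;
  }

  // Placeholder: a real implementation would run Smith-Waterman or
  // Needleman-Wunsch and turn the alignment score into a dissimilarity.
  private static double distance(String a, String b) {
    throw new UnsupportedOperationException("plug in an alignment-based distance");
  }
}
```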

Alu Families: this visualizes Alu repeats from the chimpanzee and human genomes. Young families (green, yellow) are seen as tight clusters. It is a projection by MDS dimension reduction to 3D of the repeats, each about 400 base pairs long.

Metagenomics: this visualizes the dimension reduction to 3D of gene sequences from an environmental sample. The many different genes are classified by a clustering algorithm and visualized by MDS dimension reduction.

Biosequence Analysis Workflow Implementation (diagram): a job configuration and submission tool submits the job to a Microsoft HPC cluster; the cluster head node distributes the job to the compute nodes, which perform sequence aligning, pairwise clustering, and dimension scaling and write the results; the results are then retrieved and viewed in PlotViz, the 3D visualization tool.

SALSA Portal Use Cases: Create Biosequence Analysis Job

SALSA Portal Architecture

PlotViz Visualization with parallel MDS/GTM

PlotViz and Dimension Reduction
- Currently available as a DirectX Windows binary; an open-source VTK/OpenGL version is expected in 3-6 months
- A tool for visualizing data points
  - Dimension reduction by GTM and MDS
  - Browse large and high-dimensional data
  - Use many open (value-added) data sources
- Parallel dimension reduction algorithms
  - GTM (Generative Topographic Mapping)
  - MDS (Multi-dimensional Scaling)
  - Interpolation extensions to GTM and MDS

PlotViz System Overview (diagram): a lightweight PlotViz client loads 3-D map files and metadata produced by the visualization algorithms (parallel dimension reduction) and issues SPARQL queries against Chem2Bio2RDF, an aggregation of public databases (PubChem, CTD, DrugBank, QSAR).

Parallel Data Analysis Algorithms on Multicore
Developing a suite of parallel data-analysis capabilities:
- Clustering, both for vectors and for points where only dissimilarities are defined
- Dimension reduction for visualization and analysis (MDS, GTM)
- Matrix algebra as needed: matrix multiplication, equation solving, eigenvector/eigenvalue calculation
- Extending to global optimization algorithms such as Latent Dirichlet Allocation (LDA)
- Use deterministic annealing for clustering, MDS, GTM, LDA, Gaussian mixtures, ...
- Extending O(N²) MDS/dissimilarity clustering to O(N log N)

General Deterministic Annealing Formula
Given N data points E(x) in D-dimensional space, minimize F by EM. The Deterministic Annealing Clustering (DAC) free energy is

F = -T \sum_{x=1}^{N} p(x) \ln \Big[ \sum_{k=1}^{K} \exp\big( -(E(x) - Y(k))^2 / T \big) \Big]

- F is the free energy; EM is the well-known expectation maximization method
- p(x) are point weights with \sum_x p(x) = 1
- T is the annealing temperature, varied down from \infty to a final value of 1
- The cluster centers Y(k) are determined by the EM method
- K (the number of clusters) starts at 1 and is incremented by the algorithm
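A minimal Java sketch (assumed code, not the SALSA group's implementation) of one EM sweep of deterministic annealing clustering at a fixed temperature T, using the free energy above with uniform weights p(x) = 1/N: the E step computes the soft assignments P(k|x), and the M step recomputes the centers Y(k).

```java
// One EM sweep of deterministic annealing clustering at fixed temperature T.
// points[x][d] plays the role of E(x); centers[k][d] the role of Y(k).
// Uniform weights p(x) = 1/N are assumed for simplicity.
public class DAClusteringStep {
  public static void emSweep(double[][] points, double[][] centers, double temperature) {
    int n = points.length, k = centers.length, dims = points[0].length;
    double[][] resp = new double[n][k];

    // E step: soft assignments P(k|x) = exp(-|E(x) - Y(k)|^2 / T) / normalization
    for (int x = 0; x < n; x++) {
      double norm = 0.0;
      for (int c = 0; c < k; c++) {
        resp[x][c] = Math.exp(-squaredDistance(points[x], centers[c]) / temperature);
        norm += resp[x][c];
      }
      for (int c = 0; c < k; c++) resp[x][c] /= norm;   // a production code would guard underflow
    }

    // M step: Y(k) = sum_x P(k|x) E(x) / sum_x P(k|x)
    for (int c = 0; c < k; c++) {
      double weight = 0.0;
      double[] sum = new double[dims];
      for (int x = 0; x < n; x++) {
        weight += resp[x][c];
        for (int d = 0; d < dims; d++) sum[d] += resp[x][c] * points[x][d];
      }
      for (int d = 0; d < dims; d++) centers[c][d] = sum[d] / weight;
    }
  }

  private static double squaredDistance(double[] a, double[] b) {
    double s = 0.0;
    for (int d = 0; d < a.length; d++) { double diff = a[d] - b[d]; s += diff * diff; }
    return s;
  }
}
```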

Deterministic Annealing I
Gibbs distribution at temperature T:

P(\varepsilon) = \exp(-H(\varepsilon)/T) \Big/ \int d\varepsilon \, \exp(-H(\varepsilon)/T), or equivalently P(\varepsilon) = \exp(-H(\varepsilon)/T + F/T).

Minimize the free energy

F = \langle H - T S(P) \rangle = \int d\varepsilon \, \{ P(\varepsilon) H(\varepsilon) + T P(\varepsilon) \ln P(\varepsilon) \},

where \varepsilon denotes (a subset of) the parameters to be minimized.
- Simulated annealing corresponds to doing these integrals by Monte Carlo.
- Deterministic annealing corresponds to doing the integrals analytically and is naturally much faster.
- In each case the temperature is lowered slowly, say by a factor of 0.99 at each iteration.

Deterministic Annealing (figure: free energy F({y}, T) versus configuration {y})
- The minimum evolves as the temperature decreases.
- Movement at a fixed temperature goes to a local minimum if not initialized "correctly".
- Solve linear equations for each temperature.
- Nonlinearity effects are mitigated by initializing with the solution at the previous, higher temperature.

Deterministic Annealing II
For some cases, such as vector clustering and Gaussian mixture models, one can do the integrals by hand, but usually this will be impossible. So introduce a Hamiltonian H_0(\varepsilon, \theta) which, by choice of \theta, can be made similar to H(\varepsilon) and which has tractable integrals:

P_0(\varepsilon) = \exp(-H_0(\varepsilon)/T + F_0/T) \quad \text{(approximate Gibbs distribution)}

F_R(P_0) = \langle H - T S_0(P_0) \rangle|_0 = \langle H - H_0 \rangle|_0 + F_0(P_0),

where \langle \cdot \rangle|_0 denotes \int d\varepsilon \, P_0(\varepsilon) \, (\cdot).
- It is easy to show that the real free energy satisfies F_A(P_A) \le F_R(P_0).
- In many problems, decreasing the temperature is classic multiscale refinement to finer resolution (T is "just" a distance scale).
- The same idea, called variational (Bayes) inference, is used for Latent Dirichlet Allocation.

Deterministic Annealing Clustering of Indiana Census Data (figure)
- Decrease the temperature (distance scale) to discover more clusters; the distance scale is Temperature^0.5.
- Red is the coarse resolution with 10 clusters; blue is the finer resolution with 30 clusters.
- The clusters find the cities in Indiana.

Implementation of DA I
- The expectation step E finds \theta minimizing F_R(P_0); one follows with an M step setting \varepsilon = \langle \varepsilon \rangle|_0 = \int d\varepsilon \, \varepsilon \, P_0(\varepsilon). If one does not anneal over all parameters, one follows with a traditional minimization of the remaining parameters.
- In clustering, one then looks at the second derivative matrix of F_R(P_0) with respect to \theta; as the temperature is lowered, this develops a negative eigenvalue corresponding to an instability.
- This is a phase transition: one splits the cluster into two and continues the EM iteration.
- One starts with just one cluster.

Rose, K., Gurewitz, E., and Fox, G. C., "Statistical mechanics and phase transitions in clustering," Physical Review Letters, 65(8):945-948, August 1990. My #5 most cited article (311 citations).

High Performance Dimension Reduction and Visualization
- The need is pervasive
  - Large and high-dimensional data are everywhere: biology, physics, the Internet, ...
  - Visualization can help data analysis
- Visualization of large datasets with high performance
  - Map high-dimensional data into low dimensions (2D or 3D)
  - Parallel programming is needed to process large data sets
  - Developing high-performance dimension reduction algorithms:
    - MDS (Multi-dimensional Scaling), used earlier in the DNA sequencing application
    - GTM (Generative Topographic Mapping)
    - DA-MDS (Deterministic Annealing MDS)
    - DA-GTM (Deterministic Annealing GTM)
  - Interactive visualization tool: PlotViz
- We are supporting drug discovery by browsing the 60 million compounds in the PubChem database, with 166 features each

Dimension Reduction Algorithms
Multidimensional Scaling (MDS) [1]
- Given the proximity information among points.
- An optimization problem: find a mapping of the given data in the target dimension, based on the pairwise proximity information, while minimizing the objective function.
- Objective functions: STRESS (1) or SSTRESS (2); see the definitions below.
- Only needs the pairwise distances δ_ij between the original points (typically not Euclidean).
- d_ij(X) is the Euclidean distance between the mapped (3D) points.
Generative Topographic Mapping (GTM) [2]
- Find optimal K representations for the given data (in 3D), known as the K-cluster problem (NP-hard).
- The original algorithm uses the EM method for optimization.
- A deterministic annealing algorithm can be used to find a global solution.
- The objective function is to maximize the log-likelihood.

[1] I. Borg and P. J. Groenen. Modern Multidimensional Scaling: Theory and Applications. Springer, New York, NY, U.S.A.
[2] C. Bishop, M. Svensén, and C. Williams. GTM: The generative topographic mapping. Neural Computation, 10(1):215-234, 1998.
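For reference, the standard weighted definitions of the two MDS objective functions named above; the equation images labeled (1) and (2) did not survive the transcript, so these are quoted from the MDS literature with weights w_ij, original dissimilarities δ_ij, and mapped Euclidean distances d_ij(X):

```latex
% (1) STRESS: squared error between mapped distances and original dissimilarities
\sigma(X) = \sum_{i<j \le N} w_{ij}\,\bigl(d_{ij}(X) - \delta_{ij}\bigr)^2

% (2) SSTRESS: the same form, applied to squared distances
\sigma^{2}(X) = \sum_{i<j \le N} w_{ij}\,\bigl(d_{ij}^{2}(X) - \delta_{ij}^{2}\bigr)^2
```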

GTM vs. MDS
MDS is also soluble by viewing it as a nonlinear χ² problem with an iterative linear equation solver.

                       GTM                              MDS (SMACOF)
Purpose                Non-linear dimension reduction: find an optimal configuration in a lower dimension by an iterative optimization method (common to both)
Objective function     Maximize log-likelihood          Minimize STRESS or SSTRESS
Complexity             O(KN) (K << N)                   O(N²)
Optimization method    EM                               Iterative majorization (EM-like)

MDS and GTM Map (1): PubChem data with CTD annotations, visualized using MDS (left) and GTM (right). About 930,000 chemical compounds are shown as points in 3D space, annotated by the related genes in the Comparative Toxicogenomics Database (CTD).

CTD data for gene-disease: PubChem data with CTD annotations, visualized using MDS (left) and GTM (right). About 930,000 chemical compounds are shown as points in 3D space, annotated by the related genes in the Comparative Toxicogenomics Database (CTD).

Chem2Bio2RDF: chemical compounds reported in the literature, visualized by MDS (left) and GTM (right). 234,000 chemical compounds that may be related to a set of 5 genes of interest (ABCB1, CHRNB2, DRD2, ESR1, and F2) are visualized, based on a dataset collected from major journal literature and stored in the Chem2Bio2RDF system.

Activity Cliffs: GTM visualization of bioassay activities.

Solvent Screening: visualizing 215 solvents. The 215 solvents (colored and labeled) are embedded with 100,000 chemical compounds (colored in grey) from the PubChem database.

Interpolation Method
- MDS and GTM are highly memory- and time-consuming processes for large datasets such as millions of data points.
- MDS requires O(N²) and GTM O(KN) (N is the number of data points and K is the number of latent variables).
- Training only on sampled data and interpolating the out-of-sample set can improve performance (a minimal sketch follows below).
- Interpolation is a pleasingly parallel application, suitable for MapReduce and clouds.
(Diagram: of the total N data points, n are in-sample and N-n are out-of-sample; training on the in-sample data gives the trained map, and interpolation of the out-of-sample points gives the interpolated MDS/GTM map.)
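A minimal Java sketch, under assumptions, of the MDS half of this idea: each out-of-sample point is placed by a SMACOF-like iteration against its k nearest in-sample neighbors, whose mapped 3D coordinates are already fixed by the training run. The neighbor search and the original dissimilarities are taken as given; this is not the SALSA group's actual interpolation code, only an illustration of why the step is pleasingly parallel (each new point is handled independently).

```java
// Out-of-sample MDS interpolation sketch: place one new point x in 3D so that
// its distances to k fixed, already-mapped neighbors match the original
// dissimilarities delta[i]. Uses a SMACOF-style update restricted to x.
public class MdsInterpolation {
  // neighborCoords[i] = 3D coordinates of the i-th nearest in-sample neighbor (fixed)
  // delta[i]          = original dissimilarity between the new point and that neighbor
  public static double[] interpolate(double[][] neighborCoords, double[] delta, int iterations) {
    int k = neighborCoords.length;
    double[] x = neighborCoords[0].clone();          // start at the nearest neighbor
    for (int it = 0; it < iterations; it++) {
      double[] next = new double[3];
      for (int i = 0; i < k; i++) {
        double d = euclidean(x, neighborCoords[i]);
        double ratio = (d > 1e-12) ? delta[i] / d : 0.0;
        for (int c = 0; c < 3; c++) {
          // Guttman-transform-like update with all weights equal to 1
          next[c] += neighborCoords[i][c] + ratio * (x[c] - neighborCoords[i][c]);
        }
      }
      for (int c = 0; c < 3; c++) x[c] = next[c] / k;
    }
    return x;  // each out-of-sample point is computed independently, hence pleasingly parallel
  }

  private static double euclidean(double[] a, double[] b) {
    double s = 0.0;
    for (int c = 0; c < a.length; c++) { double diff = a[c] - b[c]; s += diff * diff; }
    return Math.sqrt(s);
  }
}
```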

Quality Comparison (O(N²) Full vs. Interpolation)
- MDS: quality comparison between the interpolated results up to 100k, based on sample data of 12.5k, 25k, and 50k points, and the original MDS result with 100k points. STRESS weights: w_ij = 1 / Σ δ_ij².
- GTM: the interpolation result (blue) gets closer to the original (red) result as the sample size increases.
- On 16 nodes, Time = C(250 n² + n N_I), where n is the sample size and N_I is the number of interpolated points.