Our Objectives
- Explore the applicability of Microsoft technologies to real-world scientific domains, with a focus on data-intensive applications
- Expect the data deluge will demand multicore-enabled data analysis/mining
- Detailed objectives modified based on input from Microsoft, such as interest in CCR, Dryad, and TPL
- Evaluate and apply these technologies in demonstration systems
  - Threading: CCR, TPL
  - Service model and workflow: DSS and Robotics toolkit
  - MapReduce: Dryad/DryadLINQ compared to Hadoop and Azure
  - Classical parallelism: Windows HPCS and MPI.NET
  - XNA graphics-based visualization
  - Work performed using C#
- Provide feedback to Microsoft
- Broader impact
  - Papers, presentations, tutorials, classes, workshops, and conferences
  - Provide our research work as services to collaborators and the general science community

Approach
- Use interesting applications (working with domain experts) as benchmarks, including emerging areas like life sciences and classical applications such as particle physics
  - Bioinformatics: CAP3, Alu, Metagenomics, PhyloD
  - Cheminformatics: PubChem
  - Particle physics: LHC Monte Carlo
  - Data mining kernels: K-means, Deterministic Annealing Clustering, MDS, GTM, Smith-Waterman-Gotoh
- Evaluation criteria for usability and developer productivity
  - Initial learning curve
  - Effectiveness of continuing development
  - Comparison with other technologies
  - Performance on both single systems and clusters

Major Achievements
- Analysis of CCR and DSS within the SALSA paradigm, with very detailed performance work on CCR
- Detailed analysis of Dryad and comparison with Hadoop and MPI; initial comparison with Azure
- Comparison of TPL and CCR approaches to parallel threading
- Applications to several areas, including particle physics and especially life sciences
- Demonstration that Windows HPC clusters can efficiently run large-scale data-intensive applications
- Development of high-performance Windows 3D visualization of points produced by dimension reduction of high-dimensional datasets to 3D; these are used as cheminformatics and bioinformatics dataset browsers
- Proposed extensions of MapReduce to perform data mining efficiently
- Identification of data mining as an important application area, with new parallel algorithms for Multi-Dimensional Scaling (MDS), Generative Topographic Mapping (GTM), and clustering, for cases where vectors are defined or where only pairwise dissimilarities between dataset points are known
- Extension of robust, fast deterministic annealing to clustering (vector and pairwise), MDS, and GTM

Typical CCR Comparison with TPL
- Efficiency = 1 / (1 + Overhead)
- Hybrid internal threading/MPI as the intra-node model works well on a Windows HPC cluster
- Within a single node, TPL or CCR outperforms MPI for computation-intensive applications such as clustering of Alu sequences (an "all pairs" problem)
- TPL outperforms CCR in major applications
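As a concrete illustration of the threading side of this comparison, here is a minimal TPL sketch of an "all pairs"-style compute-intensive loop. The work function, problem size, and timing are placeholders, not the actual Alu clustering kernel; CCR would express the same decomposition with ports and message handlers.

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;

class TplAllPairsSketch
{
    static readonly object Gate = new object();

    // Stand-in for the real per-pair computation (e.g. a distance kernel).
    static double Work(int i, int j) => Math.Sqrt(i * 31.0 + j);

    static void Main()
    {
        const int n = 2000;
        double total = 0;
        var sw = Stopwatch.StartNew();

        // TPL: parallelize the outer loop over rows with thread-local partial sums.
        Parallel.For(0, n,
            () => 0.0,
            (i, state, local) =>
            {
                for (int j = i + 1; j < n; j++) local += Work(i, j);
                return local;
            },
            local => { lock (Gate) total += local; });

        sw.Stop();
        Console.WriteLine($"total = {total:F1}, elapsed = {sw.ElapsedMilliseconds} ms");
    }
}
```

Timing the same loop sequentially and in parallel yields the numbers behind the overhead and efficiency figures quoted on these slides.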

Threading versus MPI
- Threading versus MPI on a node; always MPI between nodes
- Clustering by Deterministic Annealing. Parallel Overhead = [P × T(P) − T(1)] / T(1), where T(P) is the run time on P parallel units
- [Chart: parallel overhead versus parallel pattern (threads × processes × nodes) for thread-based and MPI-based configurations]
- Note: MPI is best at low levels of parallelism; threading is best at the highest levels of parallelism (64-way break-even)
- Uses MPI.NET as a wrapper over MS-MPI
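To make the overhead definition concrete, a small helper that evaluates it; the timings in the example are made up for illustration, not measurements from the charts.

```csharp
using System;

class OverheadSketch
{
    // Parallel overhead as defined above: [P * T(P) - T(1)] / T(1).
    static double ParallelOverhead(double t1, double tp, int p) => (p * tp - t1) / t1;

    // Efficiency from the previous slide: 1 / (1 + overhead).
    static double Efficiency(double overhead) => 1.0 / (1.0 + overhead);

    static void Main()
    {
        // Hypothetical timings: 1000 s sequentially, 40 s on 32 parallel units.
        double overhead = ParallelOverhead(t1: 1000.0, tp: 40.0, p: 32);  // (32*40 - 1000)/1000 = 0.28
        Console.WriteLine($"overhead = {overhead:F2}, efficiency = {Efficiency(overhead):F2}");
    }
}
```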

Biology MDS and Clustering Results
Alu Families: This visualizes results for Alu repeats from the chimpanzee and human genomes. Young families (green, yellow) appear as tight clusters. This is a projection by MDS dimension reduction to 3D of 35,399 repeats, each with about 400 base pairs.
Metagenomics: This visualizes results of dimension reduction to 3D of 30,000 gene sequences from an environmental sample. The many different genes are classified by a clustering algorithm and visualized by MDS dimension reduction.

High Performance Data Visualization
- Developed parallel MDS and GTM algorithms to visualize large, high-dimensional data
- Processed 0.1 million PubChem data points having 166 dimensions
- Parallel interpolation can process up to 2M PubChem points
GTM with interpolation for 2M PubChem data: 2M PubChem data points are plotted in 3D with the GTM interpolation approach. Red points are 100k sampled data and blue points are 4M interpolated points.
MDS for 100k PubChem data: 100k PubChem data points having 166 dimensions are visualized in 3D space. Colors represent 2 clusters separated by their structural proximity.
GTM for 930k genes and diseases: Genes (green) and diseases (other colors) are plotted in 3D space, aiming at finding cause-and-effect relationships.
[3] PubChem project, http://pubchem.ncbi.nlm.nih.gov/

Applications Using Dryad & DryadLINQ
CAP3 [1]: Expressed Sequence Tag assembly to reconstruct full-length mRNA
- [Dataflow: input files (FASTA) → CAP3 via DryadLINQ → output files]
- Implemented using DryadLINQ and Apache Hadoop
- A single "Select" operation in DryadLINQ; a "map only" operation in Hadoop
[4] X. Huang, A. Madan, "CAP3: A DNA Sequence Assembly Program," Genome Research, vol. 9, no. 9, pp. 868-877, 1999.
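A minimal sketch of the "single Select" pattern described above. The RunCap3 helper and the local input list are stand-ins: real DryadLINQ code runs the same query shape over a partitioned table distributed across the cluster, and the Hadoop version is the equivalent map-only job.

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;

class Cap3SelectSketch
{
    // Run the CAP3 assembler on one FASTA file and return the contig output file.
    // Assumes a cap3 executable is available on the PATH.
    static string RunCap3(string fastaFile)
    {
        using var p = Process.Start(new ProcessStartInfo
        {
            FileName = "cap3",
            Arguments = fastaFile,
            UseShellExecute = false
        });
        p.WaitForExit();
        return fastaFile + ".cap.contigs";   // CAP3 writes its results as <input>.cap.* files
    }

    static void Main()
    {
        // Stand-in for the distributed collection of input files.
        IEnumerable<string> inputFiles = new[] { "sample1.fsa", "sample2.fsa" };

        // The whole computation is one Select: one independent CAP3 run per input file.
        string[] outputs = inputFiles.AsParallel()
                                     .Select(RunCap3)
                                     .ToArray();

        foreach (var output in outputs) Console.WriteLine(output);
    }
}
```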

All-Pairs Using DryadLINQ
Calculate Pairwise Distances (Smith-Waterman-Gotoh)
- 125 million distances computed in 4 hours and 46 minutes
- Calculate pairwise distances for a collection of genes (used for clustering and MDS)
- Fine-grained tasks in MPI
- Coarse-grained tasks in DryadLINQ
- Performed on 768 cores (Tempest cluster)
[5] Moretti, C., Bui, H., Hollingsworth, K., Rich, B., Flynn, P., & Thain, D. (2009). All-Pairs: An Abstraction for Data Intensive Computing on Campus Grids. IEEE Transactions on Parallel and Distributed Systems, 21, 21-36.
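A sketch of the coarse-grained decomposition, with placeholder names and a toy distance function rather than the SALSA implementation: the symmetric distance matrix is tiled into blocks, only the upper-triangular blocks are computed, and each block is one task that a runtime such as DryadLINQ or MPI can schedule.

```csharp
using System;
using System.Linq;

class AllPairsSketch
{
    // Placeholder for the real Smith-Waterman-Gotoh distance between two sequences.
    static double SmithWatermanGotoh(string a, string b) => Math.Abs(a.Length - b.Length);

    static void Main()
    {
        string[] seqs = { "ACGT", "ACGGT", "TTGA", "ACGTA" };   // toy input
        int n = seqs.Length;
        int blockSize = 2;                                      // coarse-grain block size
        int blocks = (n + blockSize - 1) / blockSize;
        var dist = new double[n, n];

        // Enumerate the upper-triangular blocks; each (bi, bj) pair is one coarse task.
        var tasks = from bi in Enumerable.Range(0, blocks)
                    from bj in Enumerable.Range(bi, blocks - bi)
                    select (bi, bj);

        foreach (var (bi, bj) in tasks)          // these blocks run as parallel tasks on a cluster
        {
            for (int i = bi * blockSize; i < Math.Min((bi + 1) * blockSize, n); i++)
                for (int j = bj * blockSize; j < Math.Min((bj + 1) * blockSize, n); j++)
                    if (j > i)
                        dist[i, j] = dist[j, i] = SmithWatermanGotoh(seqs[i], seqs[j]);
        }

        Console.WriteLine($"Computed {n * (n - 1) / 2} pairwise distances.");
    }
}
```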

Hadoop/Dryad Comparison: Inhomogeneous Data I
- Inhomogeneity of data does not have a significant effect when the sequence lengths are randomly distributed
- Dryad with Windows HPCS compared to Hadoop with Linux RHEL on iDataPlex (32 nodes)

Hadoop/Dryad Comparison: Inhomogeneous Data II
- This shows the natural load balancing of Hadoop MapReduce's dynamic task assignment, using a global pipeline, in contrast to DryadLINQ's static assignment
- Dryad with Windows HPCS compared to Hadoop with Linux RHEL on iDataPlex (32 nodes)
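A self-contained toy model of that contrast, not the Hadoop or DryadLINQ schedulers themselves: with skewed task costs, handing each worker a fixed slice up front leaves some workers with far more work than others, while handing the next task to whichever worker is free keeps completion times close.

```csharp
using System;
using System.Linq;

class SchedulingSketch
{
    static void Main()
    {
        var rnd = new Random(1);
        // Inhomogeneous task costs, standing in for skewed sequence lengths.
        int[] costs = Enumerable.Range(0, 64).Select(_ => rnd.Next(1, 100)).ToArray();
        const int workers = 4;

        // Static assignment: each worker gets a fixed contiguous slice up front.
        int slice = costs.Length / workers;
        var staticLoad = Enumerable.Range(0, workers)
            .Select(w => costs.Skip(w * slice).Take(slice).Sum())
            .ToArray();

        // Dynamic assignment: each task goes to whichever worker is free first,
        // which is what pulling work from a shared queue amounts to over time.
        var dynamicLoad = new int[workers];
        foreach (int cost in costs)
        {
            int free = Array.IndexOf(dynamicLoad, dynamicLoad.Min());
            dynamicLoad[free] += cost;
        }

        // The job finishes when the most heavily loaded worker finishes.
        Console.WriteLine($"static  makespan: {staticLoad.Max()}");
        Console.WriteLine($"dynamic makespan: {dynamicLoad.Max()}");
    }
}
```

The gap between the two makespans is the effect the comparison above attributes to dynamic versus static task assignment.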

Usability and Performance of Different Cloud Approaches
[Charts: CAP3 performance and CAP3 efficiency]
- Efficiency = absolute sequential run time / (number of cores × parallel run time)
- Hadoop, DryadLINQ: 32 nodes (256 cores, iDataPlex)
- EC2: 16 High-CPU Extra Large instances (128 cores)
- Azure: 128 small instances (128 cores)
Ease of Use
- Dryad/Hadoop are easier than EC2/Azure, as they are higher-level models
- Lines of code, including file copy: Azure ~300, Hadoop ~400, Dryad ~450, EC2 ~700
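As a worked example of the efficiency metric (with made-up numbers, not results from the charts): a job that takes 1,000 seconds sequentially and 5 seconds on 256 cores has efficiency 1000 / (256 × 5) ≈ 0.78.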

Table 1: Selected EC2 Instance Types

Instance type                  Memory    EC2 compute units   Actual CPU cores    Cost per hour   Cost per core per hour
Large (L)                      7.5 GB    4                   2 x (~2 GHz)        $0.34           $0.17
Extra Large (XL)               15 GB     8                   4 x (~2 GHz)        $0.68           $0.17
High CPU Extra Large (HCXL)    7 GB      20                  8 x (~2.5 GHz)      -               $0.09
High Memory 4XL (HM4XL)        68.4 GB   26                  8 x (~3.25 GHz)     $2.40           $0.30
Tempest@IU                     48 GB     n/a                 24                  $1.62           $0.07
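For reference, the cost per core per hour column is the hourly cost divided by the number of actual CPU cores; for example, $0.34 / 2 = $0.17 for the Large instance and $1.62 / 24 ≈ $0.07 for Tempest@IU.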

Twister (MapReduce++)
- Streaming-based communication: intermediate results are transferred directly from the map tasks to the reduce tasks, eliminating local files
- Cacheable map/reduce tasks: static data remains in memory
- Combine phase to combine reductions
- The user program is the composer of MapReduce computations
- Extends the MapReduce model to iterative computations
[Architecture diagram: the user program and MR driver communicate with worker nodes (map workers M, reduce workers R, MR daemon D) through a pub/sub broker network and the file system for data read/write. Programming interface: configure(), map(key, value), reduce(key, list<value>), combine(key, list<value>), close(), iterating over cached static data with a small dynamic δ flow. A companion figure contrasts the synchronization and intercommunication mechanisms used by the different parallel runtimes.]
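A compact sketch of the programming model described above, using 1-D K-means with two centres as the iterative computation. It is written in C# to match the rest of these examples and only mimics the shape of the model (configure static data once, then map → reduce → combine → iterate); Twister's real interface is Java and differs in detail.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative only: the shape of an iterative MapReduce computation, not Twister's actual API.
class IterativeMapReduceSketch
{
    static void Main()
    {
        // Static data: loaded once per map task and kept in memory across iterations.
        double[][] partitions = { new[] { 1.0, 1.5, 2.0 }, new[] { 8.0, 9.0, 10.0 } };
        double[] centres = { 0.0, 5.0 };                       // the small dynamic "δ" data

        for (int iter = 0; iter < 50; iter++)
        {
            // Map: each cached partition emits per-centre partial sums and counts.
            var mapOutputs = partitions.Select(p => MapTask(p, centres));

            // Reduce: merge values that share a key.
            var reduced = mapOutputs.SelectMany(d => d)
                                    .GroupBy(kv => kv.Key)
                                    .ToDictionary(g => g.Key, g => g.Sum(kv => kv.Value));

            // Combine: turn the reduced values back into the next set of centres.
            double[] next =
            {
                reduced["sum0"] / reduced["cnt0"],
                reduced["sum1"] / reduced["cnt1"],
            };

            bool converged = Math.Abs(next[0] - centres[0]) + Math.Abs(next[1] - centres[1]) < 1e-9;
            centres = next;                                    // iterate with the updated centres
            if (converged) break;
        }

        Console.WriteLine($"centres: {centres[0]:F2}, {centres[1]:F2}");
    }

    static Dictionary<string, double> MapTask(double[] partition, double[] centres)
    {
        var output = new Dictionary<string, double>
        {
            ["sum0"] = 0, ["cnt0"] = 0, ["sum1"] = 0, ["cnt1"] = 0
        };
        foreach (double x in partition)
        {
            int c = Math.Abs(x - centres[0]) <= Math.Abs(x - centres[1]) ? 0 : 1;
            output["sum" + c] += x;
            output["cnt" + c] += 1;
        }
        return output;
    }
}
```

Each iteration reuses the cached partitions, and only the small centre vector (the δ flow in the diagram) moves between driver and workers, which is what distinguishes this model from re-reading the input in every iteration of plain MapReduce.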

Iterative Computations: K-means and Matrix Multiplication
[Charts: performance of K-means; parallel overhead of matrix multiplication]