Cloud Technologies for Data Intensive Biomedical Computing
OGF27 Workshop, October 13, 2009, Banff
Judy Qiu
Community Grids Laboratory, Pervasive Technology Institute, Indiana University

Collaborators in the SALSA Project
Indiana University, SALSA Technology Team: Geoffrey Fox, Judy Qiu, Scott Beason, Jaliya Ekanayake, Thilina Gunarathne, Jong Youl Choi, Yang Ruan, Seung-Hee Bae, Hui Li, Saliya Ekanayake
Microsoft Research, Technology Collaboration: Azure (Clouds) – Dennis Gannon; Dryad (Parallel Runtime) – Roger Barga, Christophe Poulain; CCR (Threading) – George Chrysanthakopoulos; DSS (Services) – Henrik Frystyk Nielsen
Applications: Bioinformatics, CGB – Haixu Tang, Mina Rho, Peter Cherbas, Qunfeng Dong; IU Medical School – Gilbert Liu; Demographics (Polis Center) – Neil Devadasan; Cheminformatics – David Wild, Qian Zhu; Physics – CMS group at Caltech (Julian Bunn)
Community Grids Lab and UITS RT – PTI

Data Intensive (Science) Applications
Technology stack (top to bottom): Applications; Cloud Technologies (MapReduce, Dryad, Hadoop) and Classic HPC (MPI); FutureGrid / VM; bare metal (computer, network, storage).
Applications:
– Biology: Expressed Sequence Tag (EST) sequence assembly (CAP3)
– Biology: pairwise Alu sequence alignment (Smith-Waterman)
– Health: correlating childhood obesity with environmental factors
– Cheminformatics: mapping PubChem data into low dimensions to aid drug discovery
Data mining algorithms: clustering (pairwise, vector), MDS, GTM, PCA, CCA
Visualization: PlotViz

Data Intensive Architecture
Pipeline: Instruments produce user data (files); initial processing; higher-level processing such as R, PCA, clustering, correlations (maybe MPI); prepare for visualization (MDS); visualization through a user portal, leading to knowledge discovery. Users interact at each stage.

MapReduce "File/Data Repository" Parallelism
Instruments and disks feed data to computers/disks running Map 1, Map 2, Map 3, ..., followed by Reduce; results are delivered to portals/users. Communication is via messages/files.
Map = (data parallel) computation reading and writing data.
Reduce = collective/consolidation phase, e.g. forming multiple global sums as in a histogram.
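As a concrete example of the Reduce phase forming global sums, here is a minimal, self-contained C# sketch (illustrative only, not the SALSA code): each map builds a partial histogram over its own data split, and the reduce merges the partial histograms into one global histogram.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class HistogramMapReduce
{
    // Map: each task reads one data partition and builds a partial histogram.
    static Dictionary<int, long> Map(IEnumerable<double> partition, double binWidth)
    {
        var partial = new Dictionary<int, long>();
        foreach (var value in partition)
        {
            int bin = (int)(value / binWidth);          // assign the value to a bin
            partial[bin] = partial.TryGetValue(bin, out var c) ? c + 1 : 1;
        }
        return partial;
    }

    // Reduce: merge partial histograms into a single global histogram (global sums).
    static Dictionary<int, long> Reduce(IEnumerable<Dictionary<int, long>> partials)
    {
        var global = new Dictionary<int, long>();
        foreach (var partial in partials)
            foreach (var kv in partial)
                global[kv.Key] = global.TryGetValue(kv.Key, out var c) ? c + kv.Value : kv.Value;
        return global;
    }

    static void Main()
    {
        // Three "file splits" standing in for data read from disks/instruments.
        var splits = new List<double[]>
        {
            new[] { 0.1, 0.4, 1.2 },
            new[] { 0.3, 2.5, 2.6 },
            new[] { 1.1, 1.9, 0.2 }
        };

        // Maps run independently (data parallel); Reduce consolidates their output.
        var partials = splits.Select(s => Map(s, 1.0));
        var histogram = Reduce(partials);

        foreach (var kv in histogram.OrderBy(kv => kv.Key))
            Console.WriteLine($"bin {kv.Key}: {kv.Value}");
    }
}
```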

Cloud Computing: Infrastructure and Runtimes
Cloud infrastructure: outsourcing of servers, computing, data, file space, etc.
– Handled through Web services that control virtual machine lifecycles.
Cloud runtimes: tools (for using clouds) to do data-parallel computations.
– Apache Hadoop, Google MapReduce, Microsoft Dryad, and others.
– Designed for information retrieval, but excellent for a wide range of science data analysis applications.
– Can also do much traditional parallel computing for data mining if extended to support iterative operations.
– Not usually run on virtual machines.

Application Classes
Applications can be divided into five "application architecture" structures in terms of their parallel software/hardware:
– 1) Synchronous: lockstep operation as in SIMD architectures.
– 2) Loosely Synchronous: iterative compute-communication stages with independent compute (map) operations for each CPU; the heart of most MPI jobs.
– 3) Asynchronous: computer chess and combinatorial search, often supported by dynamic threads.
– 4) Pleasingly Parallel: each component independent; in 1988 I estimated this at 20% of the total at the hypercube conference.
– 5) Metaproblems: coarse-grain (asynchronous) combinations of classes 1)-4); the preserve of workflow. Grids greatly increased work in classes 4) and 5).
The above largely described simulations rather than data processing. We should now admit the class that crosses classes 2), 4), and 5):
– 6) MapReduce++, which describes file(database)-to-file(database) operations:
– 6a) Pleasingly parallel, map only
– 6b) Map followed by reductions
– 6c) Iterative "map followed by reductions", an extension of current technologies that supports much linear algebra and data mining (sketched below).
Note that overheads in classes 1), 2), and 6c) scale like Communication Time / Calculation Time; basic MapReduce pays file read/write costs, while MPI messaging costs are microseconds.
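To make classes 6a)-6c) concrete, the sketch below expresses the three MapReduce++ patterns as LINQ-style skeletons; the delegate-based Map/Reduce signatures are illustrative and are not the API of any of the runtimes discussed here.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class MapReducePatterns
{
    // 6a) Map Only: every record is processed independently (pleasingly parallel).
    public static IEnumerable<TOut> MapOnly<TIn, TOut>(
        IEnumerable<TIn> input, Func<TIn, TOut> map) =>
        input.Select(map);

    // 6b) Map followed by a single reduction (classic MapReduce).
    public static TOut MapReduce<TIn, TMid, TOut>(
        IEnumerable<TIn> input, Func<TIn, TMid> map,
        Func<IEnumerable<TMid>, TOut> reduce) =>
        reduce(input.Select(map));

    // 6c) Iterative "map followed by reduction": the reduced result feeds the next
    // iteration's maps, as in K-means, MDS, and other linear-algebra kernels.
    public static TState Iterate<TIn, TMid, TState>(
        IEnumerable<TIn> input, TState state,
        Func<TIn, TState, TMid> map,
        Func<IEnumerable<TMid>, TState> reduce,
        Func<TState, TState, bool> converged)
    {
        while (true)
        {
            TState next = reduce(input.Select(x => map(x, state)));
            if (converged(state, next)) return next;
            state = next;
        }
    }
}
```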

Applications & Different Interconnection Patterns
Map Only (input, map, output): CAP3 analysis, document conversion (PDF to HTML), brute force searches in cryptography, parametric sweeps. Examples: CAP3 gene assembly, PolarGrid Matlab data analysis.
Classic MapReduce (input, map, reduce): High Energy Physics (HEP) histograms, distributed search, distributed sorting, information retrieval. Examples: information retrieval, HEP data analysis, calculation of pairwise distances for Alu sequences.
Iterative Reductions (input, map, reduce, iterated): expectation maximization algorithms, clustering, linear algebra. Examples: K-means, deterministic annealing clustering, multidimensional scaling (MDS).
Loosely Synchronous (iterative point-to-point exchanges Pij): many MPI scientific applications utilizing a wide variety of communication constructs including local interactions. Examples: solving differential equations, particle dynamics with short-range forces.
The first three columns are the domain of MapReduce and its iterative extensions; the last is the domain of MPI.

Cluster Configurations (three IU clusters, 32 nodes used in each)
Feature | Cluster 1 | Cluster 2 | Cluster 3
CPU | Intel Xeon L-series | Intel Xeon L-series | Intel Xeon E-series
# CPUs / # cores per node | 2 / 8 | 2 / 8 | 4 / 24
Memory | 16 GB | 32 GB | 48 GB
# Disks | 2 | 1 | 2
Network | Gigabit Ethernet | Gigabit Ethernet | Gigabit Ethernet / 20 Gbps Infiniband
Operating system | Windows Server Enterprise 64-bit | Red Hat Enterprise Linux Server 64-bit | Windows Server Enterprise 64-bit
Total CPU cores used | 256 | 256 | 768
Runtimes used | DryadLINQ | Hadoop / Dryad / MPI | DryadLINQ / MPI

Pairwise Distances - Alu Sequencing
Calculate pairwise distances for a collection of genes (used for clustering and MDS); an O(N²) problem.
"Doubly data parallel" at the Dryad stage; performance close to MPI.
Performed on 768 cores (Tempest cluster): 125 million distances in 4 hours 46 minutes.
Processes work better than threads when used inside vertices: 100% utilization vs. 70%.

Alu and Sequencing Workflow
Data is a collection of N sequences, each hundreds of characters long.
– These cannot be treated as vectors because there are missing characters.
– "Multiple sequence alignment" (creating vectors of characters) does not seem to work when N is larger than O(100).
We can calculate the N² dissimilarities (distances) between all pairs of sequences.
Find families by clustering (with much better methods than K-means); since there are no vectors, use vector-free O(N²) methods.
Map to 3D for visualization using multidimensional scaling (MDS), also O(N²).
N = 50,000 runs in 10 hours (all of the above) on 768 cores.
Our collaborators just gave us 170,000 sequences and want to look at 1.5 million; we will develop new algorithms!
MapReduce++ will do all steps, as MDS and clustering only need MPI broadcast/reduce.
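The "doubly data parallel" O(N²) structure can be sketched as follows: tile the N×N distance matrix into blocks, treat each block on or above the diagonal as an independent task, and mirror results across the diagonal. This is an illustrative C# sketch only; the Distance function is a stand-in (the real computation uses Smith-Waterman alignment), and the block tasks run as Dryad/Hadoop map tasks rather than Parallel.For.

```csharp
using System;
using System.Threading.Tasks;

class BlockedPairwiseDistances
{
    // Placeholder metric for illustration; a real run would call a Smith-Waterman
    // (or other alignment) routine to obtain the dissimilarity of two sequences.
    static double Distance(string a, string b) => Math.Abs(a.Length - b.Length);

    static double[,] ComputeAll(string[] seqs, int blockSize)
    {
        int n = seqs.Length;
        var d = new double[n, n];
        int nBlocks = (n + blockSize - 1) / blockSize;

        // Each (bi, bj) block with bi <= bj is an independent task ("doubly data
        // parallel"); Parallel.For here stands in for distributed map tasks.
        Parallel.For(0, nBlocks, bi =>
        {
            for (int bj = bi; bj < nBlocks; bj++)
            {
                for (int i = bi * blockSize; i < Math.Min((bi + 1) * blockSize, n); i++)
                    for (int j = bj * blockSize; j < Math.Min((bj + 1) * blockSize, n); j++)
                    {
                        if (j < i) continue;      // symmetric: compute upper triangle only
                        double dist = Distance(seqs[i], seqs[j]);
                        d[i, j] = dist;
                        d[j, i] = dist;           // mirror into the lower triangle
                    }
            }
        });
        return d;
    }

    static void Main()
    {
        var seqs = new[] { "ACGT", "ACGGT", "TTGCA", "ACG" };
        var d = ComputeAll(seqs, blockSize: 2);
        Console.WriteLine(d[0, 1]);
    }
}
```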


[Figure: Parallel overhead of MPI versus thread parallelism for pairwise clustering by deterministic annealing, 30,000 points on Tempest.]

[Figure: Dryad versus MPI for Smith-Waterman; a flat curve is perfect scaling.]

[Figure: Dryad scaling on Smith-Waterman; a flat curve is perfect scaling.]

[Figure: Dryad for inhomogeneous data, measured on Tempest; a flat curve is perfect scaling. Time versus sequence-length standard deviation, mean length 400.]

[Figure: Hadoop/Dryad comparison on inhomogeneous data; Dryad on Windows HPCS compared to Hadoop on Linux RHEL, on iDataPlex.]

[Figure: Hadoop/Dryad comparison on "homogeneous" data; Dryad on Windows HPCS compared to Hadoop on Linux RHEL, on iDataPlex, using real data with standard deviation/length = 0.1. Time per alignment (ms) for Dryad and Hadoop.]

[Figure: Block arrangement and execution model in Dryad and Hadoop. The Hadoop/Dryad model needs to generate a single file with the full N×N distance matrix.]

High Energy Physics Data Analysis
Histogramming of events from a large (up to 1 TB) data set.
Data analysis requires the ROOT framework (ROOT interpreted scripts); performance depends on disk access speeds.
The Hadoop implementation uses a shared parallel file system (Lustre):
– ROOT scripts cannot access data from HDFS.
– On-demand data movement has significant overhead.
Dryad stores data on local disks, giving better performance.

Block Dependence of Dryad SW-G Processing on a 32-node iDataPlex
[Table: block size D = 128×128, 64×64, 32×32; rows give time to partition data, time to process data, time to merge files (60.0), and total time.]
A smaller number of blocks D increases the data size per block and makes cache use less efficient.
Other plots use 64×64 blocking.

Reduce Phase of Particle Physics "Find the Higgs" using Dryad
Combine the histograms produced by separate ROOT "maps" (of event data to partial histograms) into a single histogram delivered to the client.

CAP3 - DNA Sequence Assembly Program [1]
An EST (Expressed Sequence Tag) corresponds to messenger RNAs (mRNAs) transcribed from genes residing on chromosomes. Each individual EST sequence represents a fragment of mRNA, and EST assembly aims to reconstruct the full-length mRNA sequence for each expressed gene.
The DryadLINQ driver is essentially a partitioned-table read followed by a Select that runs CAP3 on each record (type parameters reconstructed; LineRecord is DryadLINQ's standard line-record type):
IQueryable<LineRecord> inputFiles = PartitionedTable.Get<LineRecord>(uri);
var outputFiles = inputFiles.Select(x => ExecuteCAP3(x.line));
Input files (FASTA): \\GCB-K18-N01\DryadData\cap3\cluster34442.fsa, \\GCB-K18-N01\DryadData\cap3\cluster34443.fsa, ..., \\GCB-K18-N01\DryadData\cap3\cluster34467.fsa
Partition file Cap3data.pf (\DryadData\cap3\cap3data, 10 partitions): 0,344,CGB-K18-N01; 1,344,CGB-K18-N01; ...; 9,344,CGB-K18-N01
Each Dryad vertex V takes a set of input FASTA files and produces the corresponding assembled output files.
[1] X. Huang and A. Madan, "CAP3: A DNA Sequence Assembly Program," Genome Research, vol. 9, no. 9, pp. 868-877, 1999.
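For illustration, ExecuteCAP3 in a deployment like this typically shells out to the stand-alone cap3 executable once per FASTA file. The helper below is hypothetical (executable path, argument format, and output naming are assumptions, not the SALSA code), but shows the shape of such a per-record worker in the map-only pattern.

```csharp
using System.Diagnostics;

static class Cap3Runner
{
    // Hypothetical per-record worker: runs the CAP3 binary on one FASTA file and
    // returns the path of the assembled output. Paths and arguments are illustrative.
    public static string ExecuteCAP3(string fastaPath)
    {
        var psi = new ProcessStartInfo
        {
            FileName = @"C:\tools\cap3.exe",        // assumed install location
            Arguments = "\"" + fastaPath + "\"",
            UseShellExecute = false,
            CreateNoWindow = true
        };
        using (var proc = Process.Start(psi))
        {
            proc.WaitForExit();                     // CAP3 writes its results next to the input
        }
        return fastaPath + ".cap.contigs";          // CAP3's conventional contig output name
    }
}
```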

[Figure: CAP3 performance.]

DryadLINQ on Cloud
The HPC release of DryadLINQ requires Windows Server 2008, and Amazon does not yet provide such a VM, so we used the GoGrid cloud provider.
Before running applications:
– Create a VM image with the necessary software, e.g. the .NET framework
– Deploy a collection of images (one by one, a feature of GoGrid)
– Configure IP addresses (requires logging in to individual nodes)
– Configure an HPC cluster
– Install DryadLINQ
– Copy data from "cloud storage"
We configured a 32-node virtual cluster in GoGrid.

DryadLINQ on Cloud (contd.)
CloudBurst and K-means did not run on the cloud: VMs were crashing/freezing even during data partitioning.
– Communication and data access simply freeze the VMs, which then become unreachable.
– We expect some communication overhead, but these observations are more GoGrid-related than inherent to clouds.
CAP3 works on the cloud:
– Used 32 CPU cores with 100% utilization of the virtual CPU cores.
– About 3 times more time on the cloud than the bare-metal runs (on different hardware).

K-means Clustering
An iteratively refining operation: new maps/reducers/vertices in every iteration, with file-system-based communication.
Loop unrolling in DryadLINQ provides better performance, but the overheads are still extremely large compared to MPI.
CGL-MapReduce is an example of MapReduce++: it supports the MapReduce model with iteration (data stays in memory and communication is via streams, not files).
[Figure: time for 20 iterations; large overheads.]
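To make the iterative structure concrete, here is a minimal single-process C# sketch of K-means written in map/reduce style: the "map" assigns points to the nearest current centers and accumulates partial sums, the "reduce" forms the new centers, and the loop repeats. It is illustrative only and not the CGL-MapReduce/Twister API, which keeps these structures in memory across iterations and communicates via streams.

```csharp
using System;
using System.Linq;

class KMeansIterative
{
    static int Nearest(double[] p, double[][] centers) =>
        Enumerable.Range(0, centers.Length)
                  .OrderBy(c => p.Zip(centers[c], (a, b) => (a - b) * (a - b)).Sum())
                  .First();

    static double[][] Cluster(double[][] points, double[][] centers, int iterations)
    {
        int dim = points[0].Length;
        for (int iter = 0; iter < iterations; iter++)
        {
            // "Map": assign each point to its nearest center, accumulating partial sums.
            var sums = new double[centers.Length][];
            var counts = new int[centers.Length];
            for (int c = 0; c < centers.Length; c++) sums[c] = new double[dim];
            foreach (var p in points)
            {
                int c = Nearest(p, centers);
                counts[c]++;
                for (int d = 0; d < dim; d++) sums[c][d] += p[d];
            }

            // "Reduce": combine partial sums into the new centers for the next iteration.
            for (int c = 0; c < centers.Length; c++)
                if (counts[c] > 0)
                    centers[c] = sums[c].Select(s => s / counts[c]).ToArray();
        }
        return centers;
    }

    static void Main()
    {
        var points = new[] { new[] { 0.0, 0.0 }, new[] { 0.1, 0.2 }, new[] { 5.0, 5.1 }, new[] { 4.9, 5.0 } };
        var centers = new[] { new[] { 0.0, 0.0 }, new[] { 5.0, 5.0 } };
        var result = Cluster(points, centers, iterations: 20);
        Console.WriteLine($"({result[0][0]:F2}, {result[0][1]:F2})  ({result[1][0]:F2}, {result[1][1]:F2})");
    }
}
```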

MPI on Clouds: Matrix Multiplication
Implements Cannon's algorithm [1], which exchanges large messages and is therefore more susceptible to bandwidth than to latency.
At 81 MPI processes, at least a 14% reduction in speedup is noticeable.
[Figures: performance on 64 CPU cores; speedup for a fixed matrix size (5184×5184).]
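For reference, Cannon's algorithm arranges the processes in a p × p grid and rotates blocks of A and B so that every process multiplies and accumulates a new pair of blocks at each of the p steps; in the standard formulation (assuming this is the variant used), process (i, j) accumulates

$$ C_{ij} \;=\; \sum_{s=0}^{p-1} A_{i,\;(i+j+s) \bmod p}\; B_{(i+j+s) \bmod p,\;j}. $$

Each step therefore exchanges whole sub-matrix blocks, which is why the benchmark is bandwidth-bound rather than latency-bound.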

MPI on Clouds: K-means Clustering
Performs K-means clustering for up to 40 million 3D data points.
The amount of communication depends only on the number of cluster centers, and is much smaller than the computation and the amount of data processed.
At the highest granularity, VMs show at least 3.5 times the overhead of bare metal; overheads become extremely large for smaller grain sizes.
[Figures: performance on 128 CPU cores; overhead.]
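Assuming the overhead plotted here is the standard parallel-overhead metric relative to sequential time T(1) and parallel time T(P) on P cores, it is

$$ f(P) \;=\; \frac{P\,T(P) - T(1)}{T(1)}, \qquad S(P) \;=\; \frac{T(1)}{T(P)} \;=\; \frac{P}{1 + f(P)}, $$

so an overhead of 3.5 on VMs corresponds to a large loss of speedup compared with the bare-metal runs.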

MPI on Clouds: Parallel Wave Equation Solver
There is a clear difference in performance and speedup between VMs and bare metal.
Very small messages (the message size in each MPI_Sendrecv() call is only 8 bytes), so the solver is more susceptible to latency.
For the measured problem sizes, at least a 40% decrease in performance is observed on VMs.
[Figures: performance on 64 CPU cores; total speedup versus number of data points.]
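As context for the 8-byte messages: in a 1-D domain decomposition of the wave equation, each process only needs its neighbours' single boundary values (one 8-byte double per neighbour) each time step to apply the explicit update, assuming the usual second-order finite-difference scheme

$$ u_i^{\,t+\Delta t} \;=\; 2u_i^{\,t} - u_i^{\,t-\Delta t} + \left(\frac{c\,\Delta t}{\Delta x}\right)^{2}\left(u_{i+1}^{\,t} - 2u_i^{\,t} + u_{i-1}^{\,t}\right), $$

which is why the benchmark stresses latency rather than bandwidth.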

Summary: Key Features of our Approach
We intend to implement a range of biology applications with Dryad/Hadoop.
FutureGrid allows easy Windows versus Linux comparison, with and without VMs.
Initially we will make key capabilities available as services, which we will eventually implement on virtual clusters (clouds) to address very large problems:
– Basic pairwise dissimilarity calculations
– R (done already by us and others)
– MDS in various forms
– Vector and pairwise deterministic annealing clustering
– Point viewer (PlotViz), either as a download (to Windows!) or as a Web service
Note that much of our code is written in C# (high-performance managed code) and runs on Microsoft HPCS 2008 (with Dryad extensions); the Hadoop code is written in Java.

Project website