
Slide 1: Scientific Data Management Center (ISIC)
http://sdmcenter.lbl.gov (contains an extensive publication list)

Slide 2: Scientific Data Management Center - Participating Institutions
Center PI: Arie Shoshani, LBNL
DOE Laboratories co-PIs:
- Bill Gropp, Rob Ross (ANL)
- Arie Shoshani, Doron Rotem (LBNL)
- Terence Critchlow, Chandrika Kamath (LLNL)
- Nagiza Samatova, Andy White (ORNL)
University co-PIs:
- Mladen Vouk (North Carolina State)
- Alok Choudhary (Northwestern)
- Reagan Moore, Bertram Ludaescher (UC San Diego, SDSC)
- Calton Pu (Georgia Tech)
- Steve Parker (U of Utah) (future)

Slide 3: Phases of Scientific Exploration - Data Generation
- From large-scale simulations or experiments
- Fast data growth with computational power
- Examples:
  - HENP: 100 teraops and 10 petabytes by 2006
  - Climate: spatial resolution T42 (280 km) -> T85 (140 km) -> T170 (70 km); T42 is about 1 TB per 100-year run, and the higher resolutions increase that by a factor of ~10-20
- Problems:
  - Can't dump the data to storage fast enough: waste of compute resources
  - Can't move terabytes of data over a WAN robustly: waste of the scientist's time
  - Can't steer the simulation: waste of time and resources
  - Need to reorganize and transform data: large data-intensive tasks slow progress

Slide 4: Phases of Scientific Exploration - Data Analysis
- Analysis of large data volumes
- Can't fit all data in memory
- Problems:
  - Find the relevant data: need efficient indexing
  - Cluster analysis: need linear scaling
  - Feature selection: need efficient high-dimensional analysis
  - Data heterogeneity: combine data from diverse sources
  - Streamline analysis steps: the output of one step needs to match the input of the next

Slide 5: Example Data Flow in TSI (Terascale Supernova Initiative)
[Figure: data flow over the Logistical Network; courtesy of John Blondin]

Slide 6: Goal: Reduce the Data Management Overhead
- Efficiency. Example: parallel I/O, indexing, matching storage structures to the application
- Effectiveness. Example: access data by attributes, not files; facilitate massive data movement
- New algorithms. Example: specialized PCA techniques to separate signals or to achieve better spatial data compression
- Enabling ad-hoc exploration of data. Example: an exploratory "run and render" capability to analyze and visualize simulation output while the code is running

Slide 7: Approach
- Use an integrated framework that:
  - Provides a scientific workflow capability
  - Supports data mining and analysis tools
  - Accelerates storage and access to data
- Simplify data management tasks for the scientist:
  - Hide details of the underlying parallel and indexing technology
  - Permit assembly of modules using a simple graphical workflow description tool
[Figure: SDM framework diagram; the Scientific Process Automation layer, Data Mining & Analysis layer, and Storage Efficient Access layer sit between the scientific application and scientific understanding]

Slide 8: Technology Details by Layer

Slide 9: Accomplishments: Storage Efficient Access (SEA)
- Developed Parallel netCDF
  - Enables high-performance parallel I/O to netCDF datasets (sketched below)
  - Achieves up to a 10-fold performance improvement over HDF5
- Enhanced ROMIO
  - Provides MPI-IO access to PVFS
  - Advanced parallel file system interfaces for more efficient access
- Developed PVFS2
  - Adds Myrinet GM and InfiniBand support
  - Improved fault tolerance and asynchronous I/O
  - Offered by Dell and HP for clusters
- Deployed an HPSS Storage Resource Manager (SRM) with PVFS
  - Automatic access to HPSS files from PVFS through the MPI-IO library
  - SRM is a middleware component
[Figures: "before" shows P0-P3 writing through serial netCDF to the parallel file system, "after" shows P0-P3 writing through Parallel netCDF to the parallel file system; Parallel Virtual File System enhancements and deployment (shared-memory communication); FLASH I/O benchmark performance (8x8x8 block sizes)]
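
As a rough illustration of the "after" picture on this slide, where every process collectively writes its own slab of one netCDF file, here is a minimal sketch. It assumes mpi4py and a netCDF4-python build with parallel I/O enabled rather than the center's PnetCDF C library, and the file and variable names are made up.

```python
# Minimal sketch of collective parallel output to a single netCDF file.
# Assumes mpi4py and a parallel-enabled netCDF4-python build; the SDM
# center's Parallel netCDF is a C/Fortran library with a similar model.
from mpi4py import MPI
import numpy as np
from netCDF4 import Dataset

comm = MPI.COMM_WORLD
rank, nprocs = comm.Get_rank(), comm.Get_size()

nx_global = 1024
nx_local = nx_global // nprocs            # assume nprocs divides nx_global

ds = Dataset("checkpoint.nc", "w", parallel=True, comm=comm)
ds.createDimension("x", nx_global)
temp = ds.createVariable("temperature", "f4", ("x",))
temp.set_collective(True)                 # collective I/O, as in PnetCDF

start = rank * nx_local                   # each rank writes its own slab
temp[start:start + nx_local] = np.full(nx_local, rank, dtype=np.float32)

ds.close()
```

Run with something like `mpiexec -n 4 python write_checkpoint.py`; the point is the structural change from funneling all output through one writer to collective writes by every rank.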

Slide 10: Robust Multi-file Replication
- Problem: move thousands of files robustly
  - Takes many hours
  - Need error recovery: mass storage system failures, network failures
  - Use Storage Resource Managers (SRMs)
- Problem: too slow
  - Use parallel streams
  - Use concurrent transfers (with per-file error recovery; see the sketch below)
  - Use large FTP windows
  - Pre-stage files from the MSS
[Figure: DataMover replication between NCAR and LBNL (or anywhere): an SRM-COPY request for thousands of files becomes SRM-GET requests one file at a time; the SRMs at each end perform the reads and writes against disk caches, get the list of files, stage files from and archive files to the MSS, and move the data over the network with GridFTP GET in pull mode]
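
The combination of concurrent transfers and per-file error recovery described above can be sketched as follows. This is not the DataMover or SRM code; `fetch_one` is a hypothetical placeholder for an actual SRM-GET/GridFTP pull, and the worker, retry, and backoff settings are illustrative.

```python
# Sketch of robust multi-file replication: concurrent transfers with
# per-file retry, in the spirit of SRM-COPY/DataMover. `fetch_one` is a
# hypothetical wrapper around a real SRM/GridFTP transfer call.
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def fetch_one(source_url, dest_path):
    """Placeholder for a single SRM-GET/GridFTP pull transfer."""
    raise NotImplementedError("wrap an actual transfer client here")

def replicate(files, max_workers=4, retries=3, backoff_s=30):
    """files: iterable of (source_url, dest_path) pairs; returns failures."""
    failed = []

    def robust_fetch(src, dst):
        for attempt in range(1, retries + 1):
            try:
                fetch_one(src, dst)
                return src
            except Exception:
                if attempt == retries:
                    raise
                time.sleep(backoff_s * attempt)   # wait out transient failures

    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(robust_fetch, s, d): s for s, d in files}
        for fut in as_completed(futures):
            if fut.exception() is not None:
                failed.append(futures[fut])
    return failed   # caller can re-queue or report these
```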

Slide 11: File tracking helps to identify bottlenecks
- Shows that archiving is the bottleneck

Slide 12: File tracking shows recovery from transient failures
- Total: 45 GB

Slide 13: Accomplishments: Data Mining and Analysis (DMA)
- Developed Parallel-VTK
  - Efficient 2D/3D parallel scientific visualization for netCDF and HDF files
  - Built on top of PnetCDF
- Developed a "region tracking" tool
  - For exploring 2D/3D scientific databases
  - Uses bitmap technology to identify regions based on multi-attribute conditions
- Implemented an Independent Component Analysis (ICA) module
  - Used for accurate signal separation
  - Used for discovering key parameters that correlate with observed data
- Developed highly effective data reduction
  - Achieves a 15-fold reduction with a high level of accuracy
  - Uses parallel Principal Component Analysis (PCA) technology (see the sketch below)
- Developed ASPECT
  - A framework that supports a rich set of pluggable data analysis tools, including all the tools above
  - A rich suite of statistical tools based on the R package
[Figures: the El Niño signal (red) and its estimation (blue) closely match; combustion region tracking]
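
A minimal serial sketch of the PCA-style reduction idea mentioned above, using NumPy: keep only the leading principal components and store their coefficients. The center's tool is parallel and the 15-fold figure comes from the slide, not from this code; the array shapes here are invented.

```python
# Serial sketch of PCA-style data reduction: project the data onto the
# leading principal components and keep only those coefficients.
import numpy as np

def pca_reduce(data, n_components):
    """data: (n_samples, n_features); returns (mean, basis, coefficients)."""
    mean = data.mean(axis=0)
    centered = data - mean
    # SVD of the centered data; rows of vt are the principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]                  # (n_components, n_features)
    coeffs = centered @ basis.T                # reduced representation
    return mean, basis, coeffs

def pca_restore(mean, basis, coeffs):
    """Approximate reconstruction from the reduced representation."""
    return coeffs @ basis + mean

# Invented low-rank test data: 150 features driven by 10 latent factors,
# so keeping 10 components (a 15-fold reduction) loses almost nothing.
rng = np.random.default_rng(0)
data = rng.normal(size=(1000, 10)) @ rng.normal(size=(10, 150))
mean, basis, coeffs = pca_reduce(data, n_components=10)
error = np.abs(pca_restore(mean, basis, coeffs) - data).max()
print(f"max reconstruction error: {error:.2e}")
```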

Slide 14: ASPECT Analysis Environment
[Figure: an example pipeline of Data Select -> Data Access -> Correlate -> Render -> Display for the query "Select (temp, pressure) From astro-data Where (step=101) (entropy>1000)": take a sample, run a pVTK filter and an R analysis, and visualize a scatter plot in Qt. The Data Mining & Analysis layer (Select Data, Take Sample, pVTK tool, R analysis tool) reads and writes named buffers through the Storage Efficient Access layer (Use Bitmap (condition), bitmap index selection, Parallel NetCDF, PVFS), which sits on the hardware, OS, and MSS (HPSS)]
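
The Where-clause in the figure's query is exactly the kind of multi-attribute condition the bitmap index answers. As a plain illustration of the selection step only, here is a NumPy sketch in which a boolean mask stands in for the bitmap-index lookup; the synthetic arrays are invented stand-ins for the netCDF variables.

```python
# Sketch of the attribute-based selection in the ASPECT example:
#   Select (temp, pressure) From astro-data Where (step=101) (entropy>1000)
# A bitmap index answers the Where-clause without scanning the data; here
# a NumPy boolean mask stands in for that lookup. Arrays are illustrative.
import numpy as np

def select_temp_pressure(step, temp, pressure, entropy,
                         step_value=101, entropy_min=1000.0):
    mask = (step == step_value) & (entropy > entropy_min)
    return temp[mask], pressure[mask]

# Tiny synthetic "astro-data" standing in for the netCDF variables.
n = 10_000
rng = np.random.default_rng(1)
step = rng.integers(100, 103, size=n)
temp = rng.uniform(1e3, 1e7, size=n)
pressure = rng.uniform(1.0, 1e5, size=n)
entropy = rng.uniform(0.0, 2000.0, size=n)

t_sel, p_sel = select_temp_pressure(step, temp, pressure, entropy)
# t_sel and p_sel could now be sampled and handed to a scatter-plot step.
```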

Slide 15: Accomplishments: Scientific Process Automation (SPA)
Unique requirements of scientific workflows:
- Moving large volumes between modules: tightly-coupled, efficient data movement
- Specification of granularity-based iteration: e.g., in spatio-temporal simulations a time step is a "granule" (sketched below)
- Support for data transformation of complex data types (including file formats, e.g. netCDF, HDF)
- Dynamic steering of the workflow by the user; dynamic user examination of results
Developed a working scientific workflow system:
- Automatic microarray analysis
- Uses web-wrapping tools developed by the center
- Uses the Kepler workflow engine (Kepler is an adaptation of the UC Berkeley tool, Ptolemy)
[Figures: workflow steps defined graphically; workflow results presented to the user]
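
A toy sketch of the granule-based iteration described above: each time step is one granule that flows through transform and analysis stages of a pipeline. The stage names are invented, and the real system is the Kepler engine, not this code.

```python
# Sketch of granule-based iteration through a small workflow: each time
# step is one "granule" that flows through transform and analysis stages.
# Stage names are illustrative; the center's actual engine is Kepler.
def simulation_steps(n_steps):
    for step in range(n_steps):
        yield {"step": step, "field": [step * 0.1] * 8}    # one granule

def transform(granules):
    for g in granules:
        g["field"] = [v * 2.0 for v in g["field"]]         # e.g. unit conversion
        yield g

def analyze(granules):
    for g in granules:
        yield g["step"], sum(g["field"]) / len(g["field"])  # per-step summary

if __name__ == "__main__":
    pipeline = analyze(transform(simulation_steps(n_steps=5)))
    for step, mean_value in pipeline:
        print(f"step {step}: mean = {mean_value:.3f}")
```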

Slide 16: GUI for setting up and running workflows

Slide 17: Re-applying Technology
SDM technology, developed for one application, can be effectively targeted at many other applications.

Technology                   Initial Application   New Applications
Parallel NetCDF              Astrophysics          Climate
Parallel VTK                 Astrophysics          Climate
Compressed bitmaps           HENP                  Combustion, Astrophysics
Storage Resource Managers    HENP                  Astrophysics
Feature Selection            Climate               Fusion
Scientific Workflow          Biology               Astrophysics (planned)

Slide 18: Broad Impact of the SDM Center
- Astrophysics: high-speed storage technology, parallel NetCDF, parallel VTK, and ASPECT integration software used for Terascale Supernova Initiative (TSI) and FLASH simulations
  (Tony Mezzacappa - ORNL, John Blondin - NCSU, Mike Zingale - U of Chicago, Mike Papka - ANL)
- Climate: high-speed storage technology, Parallel NetCDF, and ICA technology used for climate modeling projects
  (Ben Santer - LLNL, John Drake - ORNL, John Michalakes - NCAR)
- Combustion: compressed bitmap indexing used for fast generation of flame regions and tracking their progress over time
  (Wendy Koegler, Jacqueline Chen - Sandia Lab)
[Figures: ASCI FLASH with parallel NetCDF; dimensionality reduction; region growing]

Slide 19: Broad Impact (cont.)
- Biology: the Kepler workflow system and web-wrapping technology used for executing complex, highly repetitive workflow tasks for processing microarray data
  (Matt Coleman - LLNL)
- High Energy Physics: compressed bitmap indexing and Storage Resource Managers used for locating desired subsets of data (events) and automatically retrieving data from HPSS
  (Doug Olson - LBNL, Eric Hjort - LBNL, Jerome Lauret - BNL)
- Fusion: a combination of PCA and ICA technology used to identify the key parameters that are relevant to the presence of edge harmonic oscillations in a Tokamak
  (Keith Burrell - General Atomics)
[Figures: building a scientific workflow; dynamic monitoring of HPSS file transfers; identifying key parameters for the DIII-D Tokamak]

Slide 20: Goals for Years 4-5
- Fully develop the integrated SDM framework
  - Implement the three-layer framework on the SDM center facility
  - Provide a way to select only the components needed
- Develop self-guiding web pages on the use of SDM components
  - Use existing successful examples as guides
- Generalize components for reuse
  - Develop general interfaces between components in the layers
  - Support loosely-coupled WSDL interfaces
  - Support tightly-coupled components for efficient dataflow
- Integrate operation of components in the framework
  - Hide details from the user: automate parallel access and indexing
  - Develop a reusable library of components that can be selected for use in the workflow system

