Presentation on theme: "A. Shoshani Feb. 2007 SCIENTIFIC DATA MANAGEMENT Computational Research Division Lawrence Berkeley National Laboratory February, 2007 Arie Shoshani."— Presentation transcript:

1 A. Shoshani Feb. 2007 SCIENTIFIC DATA MANAGEMENT Computational Research Division Lawrence Berkeley National Laboratory February, 2007 Arie Shoshani

2 A. Shoshani Feb. 2007 Outline
- Problem areas in managing scientific data: motivating examples, requirements
- The DOE Scientific Data Management Center: a three-layer architectural approach; some results of technologies (details in the mini-symposium)
- Specific technologies from LBNL: FastBit, an innovative bitmap indexing method for very large datasets; Storage Resource Managers, providing uniform access to storage systems

3 A. Shoshani Feb. 2007 Motivating Example - 1: Optimizing Storage Management and Data Access for High Energy and Nuclear Physics Applications
STAR: Solenoidal Tracker At RHIC; RHIC: Relativistic Heavy Ion Collider. LHC: Large Hadron Collider; includes ATLAS, STAR, …
Experiment   # members / institutions   Date of first data   # events/year   Volume/year (TB)
STAR         350/35                     2001                 10^8-10^9       500
PHENIX       350/35                     2001                 10^9            600
BABAR        300/30                     1999                 10^9            80
CLAS         200/40                     1997                 10              300
ATLAS        1200/140                   2007                 10              5000
(Figure: a mockup of an "event".)

4 A. Shoshani Feb. 2007 Typical Scientific Exploration Process
- Generate large amounts of raw data: large simulations; collect from experiments
- Post-processing of data: analyze data (find particles produced, tracks); generate summary data, e.g. momentum, no. of pions, transverse energy; the number of properties is large (50-100)
- Analyze data: use summary data as a guide; extract subsets from the large dataset
- Need to access events based on partial properties specification (range queries), e.g. ((0.1 < AVpT < 0.2) AND (10 < Np < 20)) OR (N > 6000), then apply analysis code
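To make the range-query step concrete, here is a minimal sketch of selecting events by conditions on their summary properties; the property names, cut values, and array sizes are illustrative, not taken from STAR.

```python
import numpy as np

# Hypothetical summary table: one row per event, one column per property.
rng = np.random.default_rng(0)
events = {
    "AVpT": rng.uniform(0.0, 1.0, 1_000_000),   # average transverse momentum
    "Np":   rng.integers(0, 300, 1_000_000),    # number of pions
    "N":    rng.integers(0, 10_000, 1_000_000), # total multiplicity
}

# Range query over a few of the ~50-100 properties:
# ((0.1 < AVpT < 0.2) AND (10 < Np < 20)) OR (N > 6000)
mask = (((events["AVpT"] > 0.1) & (events["AVpT"] < 0.2) &
         (events["Np"] > 10) & (events["Np"] < 20)) |
        (events["N"] > 6000))

selected = np.flatnonzero(mask)   # event IDs to fetch from the large dataset
print(f"{selected.size} of {mask.size} events selected")
```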

5 A. Shoshani Feb. 2007 Motivating example - 2
- Combustion simulation: 1000x1000x1000 mesh with 100s of chemical species over 1000s of time steps - 10^14 data values
- Astrophysics simulation: 1000x1000x1000 mesh with 10s of variables per cell over 1000s of time steps - 10^13 data values
- (Figure: an image of a single variable.) What's needed is search over multiple variables, such as: Temperature > 1000 AND pressure > 10^6 AND HO2 > 10^-7 AND HO2 > 10^-6
- Combining multiple single-variable indexes efficiently is a challenge
- Solution: specialized bitmap indexes

6 A. Shoshani Feb. 2007 Motivating Example - 3: Earth System Grid
- Accessing large distributed stores by 100s of scientists
- Problems: different storage systems, security procedures, file streaming, lifetime of requests, garbage collection
- Solution: Storage Resource Managers (SRMs)

7 A. Shoshani Feb. 2007 Motivating Example 4: Fusion Simulation - Coordination between Running Codes
(Workflow diagram: coupled codes XGC-ET, M3D-L (linear stability), and M3D exchange data through mesh/interpolation steps, with decision points such as "Stable?", "B healed?", and "Need more flights?". Analysis components include noise detection, blob detection, computing puncture plots, island detection, out-of-core isosurface methods, and feature detection, with results delivered through the ElVis web portal. Data moves over a wide-area network between distributed stores of MBs, GBs, and TBs.)

8 A. Shoshani Feb. 2007 Motivating example - 5: Data Entry and Browsing tool for entering and linking metadata from multiple data sources
- Metadata problem for microarray analysis: microarray schemas are quite complex, with many objects (experiments, samples, arrays, hybridization, measurements, …) and many associations between them
- Data is generated and processed in multiple locations that participate in the data pipeline. In this project (Synechococcus sp. WH8102 whole genome): microbes are cultured at Scripps Institution of Oceanography (SIO), then the sample pool is sent to The Institute for Genomic Research (TIGR), then images are sent to Sandia Lab for hyperspectral imaging and analysis
- Metadata needs to be captured and LINKED
- Generating specialized user interfaces is expensive and time-consuming to build and change
- Data is collected on various systems, spreadsheets, notebooks, etc.

9 A. Shoshani Feb. 2007 The kind of technology needed - DEB: Data Entry and Browsing Tool
1) The microbes are cultured at SIO 2) Microarray hybridization at TIGR 3) Hyperspectral imaging at Sandia
Features: interface based on a lab-notebook look and feel; tools built on top of a commercial DBMS; schema-driven automatic screen generation
(Schema diagram: HS_Experiment, HS_Slide, HS-Scan, MCR-Analysis, and LCS-Analysis objects linked to MA-Scan; Nucleotide Pool, Probe Source, Probe, Hybridization, MA-Scan, Study, Slide, and probe1/probe2 objects, with links from TIGR to SIO.)

10 A. Shoshani Feb. 2007 Storage Growth is Exponential - the challenges are in managing the data
- Unlike compute and network resources, storage resources are not reusable unless data is explicitly removed
- Need to use storage wisely: checkpointing and removing replicated data are time-consuming, tedious tasks
- Data growth scales with compute scaling: storage will grow even with good practices (such as eliminating unnecessary replicas), not necessarily on supercomputers but on user/group machines and archival storage
- Storage cost is a consideration: it has to be part of the cost of science growth, but storage costs are going down at a rate similar to data growth; need continued investment in new storage technologies
(Charts: storage growth 1998-2006 at ORNL, rate ~2X/year; storage growth 1998-2006 at NERSC-LBNL, rate ~1.7X/year.)

11 A. Shoshani Feb. 2007 Data and Storage Challenges End-to-End: 3 Phases of Scientific Investigation
- Data production phase: data movement (I/O to the parallel file system, moving data out of supercomputer storage, sustaining data rates of GB/sec); observing data during production; automatic generation of metadata
- Post-processing phase: large-scale (entire-dataset) data processing; summarization / statistical properties; reorganization / transposition; generating data at different granularities; on-the-fly data processing; computations for visualization / monitoring
- Data extraction / analysis phase: automated data distribution / replication; synchronizing replicated data; data lifetime management to unclog storage; extracting subsets efficiently (avoid reading unnecessary data; efficient indexes for "fixed content" data); automated use of metadata; parallel analysis tools; statistical analysis tools; data mining tools

12 A. Shoshani Feb. 2007 The Scientific Data Management Center (Center for Enabling Technologies - CET)
- PI: Arie Shoshani, LBNL; annual budget: 3.3 million
- Established 5 years ago (SciDAC-1); successfully re-competed for the next 5 years (SciDAC-2)
- Featured in the second issue of the SciDAC Review magazine: http://www.scidacreview.org/0602/pdf/data.pdf
- Laboratories: ANL, ORNL, LBNL, LLNL, PNNL
- Universities: NCSU, NWU, SDSC, UCD, U of Utah

13 A. Shoshani Feb. 2007 Scientific Data Management Center
(Diagram: scientific simulations & experiments produce terabytes to petabytes of data on disks and tapes, feeding scientific analysis & discovery in climate modeling, astrophysics, genomics and proteomics, high energy physics, fusion, … . Data manipulation - getting files from the tape archive, extracting subsets of data from files, reformatting data, getting data from heterogeneous distributed systems, moving data over the network - currently consumes ~80% of the scientists' time, with ~20% left for analysis; the goal, using SDM-Center technology, is to reverse that ratio to ~20% data manipulation and ~80% analysis. SDM-ISIC technologies include optimizing shared access from mass storage systems, parallel I/O for various file formats, feature extraction techniques, high-dimensional cluster analysis, high-dimensional indexing, and parallel statistics.)

14 A. Shoshani Feb. 2007 A Typical SDM Scenario
(Diagram: a control-flow layer coordinates tasks A-D - generate time steps, move time step, analyze time step, visualize time step - across a work tier and a flow tier. The applications & software tools layer includes the simulation program, a data mover, Parallel R post-processing, and a terascale browser; the I/O system layer includes Parallel netCDF, HDF5 libraries, PVFS, SRM, and subset extraction; the storage & network resources layer provides the file system and network.)

15 A. Shoshani Feb. 2007 Approach Use an integrated framework that:
- Provides a scientific workflow capability
- Supports data mining and analysis tools
- Accelerates storage and access to data
- Simplifies data management tasks for the scientist
- Hides details of the underlying parallel and indexing technology
- Permits assembly of modules using a simple graphical workflow description tool
(SDM framework layers: Scientific Process Automation layer, Data Mining & Analysis layer, and Storage Efficient Access layer, connecting the scientific application to scientific understanding.)

16 A. Shoshani Feb. 2007 Technology Details by Layer
- Scientific Process Automation (SPA) layer: workflow management engine and scientific workflow components (originally workflow management tools and web-wrapping tools)
- Data Mining & Analysis (DMA) layer: efficient parallel visualization (pVTK), efficient indexing (Bitmap Index), data analysis and feature identification (PCA, ICA), Parallel R statistical analysis (originally the ASPECT integration framework)
- Storage Efficient Access (SEA) layer: Parallel NetCDF, Parallel Virtual File System, Storage Resource Manager (SRM, to HPSS), (ROMIO) parallel I/O / MPI-IO
- All layers sit on top of the hardware, OS, and mass storage systems (HPSS)

17 A. Shoshani Feb. 2007 Technology Details by Layer (layer diagram repeated; see slide 16)

18 A. Shoshani Feb. 2007 Data Generation - Workflow Design and Execution
(Diagram: the Scientific Process Automation layer drives the simulation run; the Storage Efficient Access layer provides Parallel netCDF, MPI-IO, and PVFS2 over the OS and hardware (disks, mass store); the Data Mining and Analysis layer sits alongside.)

19 A. Shoshani Feb. 2007 Parallel NetCDF vs. HDF5 (ANL+NWU)
- Developed Parallel netCDF: enables high-performance parallel I/O to netCDF datasets; achieves up to 10-fold performance improvement over HDF5
- Enhanced ROMIO: provides MPI access to PVFS2; advanced parallel file system interfaces for more efficient access
- Developed PVFS2 (Parallel Virtual File System enhancements and deployment): production use at ANL, the Ohio Supercomputer Center, and the Univ. of Utah HPC center; offered on Dell clusters; being ported to the IBM BG/L system
(Diagrams: before - P0-P3 funnel data through interprocess communication to a single netCDF writer on the parallel file system; after - P0-P3 write concurrently through Parallel netCDF. Chart: FLASH I/O benchmark performance, 8x8x8 block sizes.)
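The before/after difference is mostly about who writes which part of the shared array. The sketch below shows only the decomposition each process would compute before issuing one collective write of its slab; it is illustrative Python, not the actual PnetCDF or MPI-IO API.

```python
# Conceptual sketch of the data decomposition behind a collective parallel
# netCDF write: each MPI rank owns a contiguous slab of the global array and
# writes it with a single collective call. The I/O call itself is only hinted
# at in a comment; see the PnetCDF documentation for the real API.

def slab_for_rank(rank, nprocs, global_shape):
    """Return (start, count) of the hyperslab owned by `rank`,
    partitioning the first dimension as evenly as possible."""
    n = global_shape[0]
    base, extra = divmod(n, nprocs)
    start0 = rank * base + min(rank, extra)
    count0 = base + (1 if rank < extra else 0)
    start = (start0,) + (0,) * (len(global_shape) - 1)
    count = (count0,) + tuple(global_shape[1:])
    return start, count

global_shape = (1000, 1000, 1000)        # e.g. a simulation mesh
for rank in range(4):                    # pretend we have 4 MPI processes
    start, count = slab_for_rank(rank, 4, global_shape)
    print(f"rank {rank}: start={start} count={count}")
    # In real code each rank would now issue one collective write of its slab
    # (a put_vara-style call with this start/count), instead of funneling all
    # data through process 0 as in the serial-netCDF "before" picture.
```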

20 A. Shoshani Feb. 2007 Technology Details by Layer (layer diagram repeated; see slide 16)

21 A. Shoshani Feb. 2007 Statistical Computing with R
- About R (http://www.r-project.org/): R is an open-source (GPL) and widely used programming environment for statistical analysis and graphics, similar to S; provides good support for both users and developers; highly extensible via dynamically loadable add-on packages; originally developed by Robert Gentleman and Ross Ihaka.
- Example snippets:
> library(mva)
> pca <- prcomp(data)
> summary(pca)
> library(rpvm)
> .PVM.start.pvmd()
> .PVM.addhosts(...)
> .PVM.config()
> dyn.load("foo.so")
> .C("foobar")
> dyn.unload("foo.so")

22 A. Shoshani Feb. 2007 Providing Task and Data Parallelism in pR
- Task-parallel analyses: likelihood maximization; re-sampling schemes (bootstrap, jackknife, etc.); animations; Markov Chain Monte Carlo (MCMC) with multiple chains and simulated tempering (running parallel chains at different "temperatures" to improve mixing)
- Data-parallel analyses: k-means clustering; Principal Component Analysis (PCA); hierarchical (model-based) clustering; distance matrix, histogram, etc. computations

23 A. Shoshani Feb. 2007 Parallel R (pR) Distribution http://www.ASPECT-SDM.org/Parallel-R
Release history:
- pR enables both data and task parallelism; includes task-pR and RScaLAPACK (version 1.8.1)
- RScaLAPACK provides an R interface to ScaLAPACK, with its scalability in terms of problem size and number of processors, using data parallelism (release 0.5.1)
- task-pR achieves parallelism by performing out-of-order execution of tasks; with its intelligent scheduling mechanism it attains significant gains in execution times (release 0.2.7)
- pMatrix provides a parallel platform to perform major matrix operations in parallel using ScaLAPACK and PBLAS Level II & III routines
Also available for download from R's CRAN web site (www.R-Project.org), with 37 mirror sites in 20 countries

24 A. Shoshani Feb. 2007 Technology Details by Layer (layer diagram repeated; see slide 16)

25 A. Shoshani Feb. 2007 Piecewise Polynomial Models for Classification of Puncture (Poincaré) Plots (National Compact Stellarator Experiment)
- Classify each of the nodes: quasiperiodic, islands, separatrix
- Connections between the nodes
- Want accurate and robust classification, valid even when there are few points in each node
(Figure panels: quasiperiodic, islands, and separatrix examples.)

26 A. Shoshani Feb. 2007 Polar Coordinates Transform the (x, y) data to polar coordinates (r, θ). Advantages of polar coordinates: radial exaggeration reveals some features that are hard to see otherwise; automatically restricts the analysis to the radial band containing data, ignoring inside and outside; easy to handle rotational invariance.
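A minimal sketch of the coordinate transform on synthetic puncture-plot points (no plotting):

```python
import numpy as np

def to_polar(x, y):
    """Convert puncture-plot points from Cartesian (x, y) to polar (r, theta)."""
    r = np.hypot(x, y)
    theta = np.arctan2(y, x)   # in (-pi, pi]; rotational shifts are easy to apply here
    return r, theta

# Illustrative points on a slightly perturbed circle of radius ~1
t = np.linspace(0, 2 * np.pi, 500)
x = np.cos(t) + 0.01 * np.sin(7 * t)
y = np.sin(t) + 0.01 * np.cos(7 * t)
r, theta = to_polar(x, y)
print(r.min(), r.max())        # the radial band that contains the data
```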

27 A. Shoshani Feb. 2007 Piecewise Polynomial Fitting: Computing polynomials In each interval, compute the polynomial coefficients to fit 1 polynomial to the data. If the error is high, split the data into an upper and lower group. Fit 2 polynomials to the data, one to each group. Blue: data. Red: polynomials. Black: interval boundaries.
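A simplified sketch of the split-and-refit step described above; the polynomial degree and error threshold are illustrative, and a real implementation would also guard against groups with too few points.

```python
import numpy as np

def fit_interval(theta, r, degree=3, max_err=0.05):
    """Fit one polynomial r(theta) on an interval; if the residual is large,
    split the points into an upper and a lower group (relative to the first
    fit) and fit one polynomial to each group. Returns a list of coefficient
    arrays (one or two polynomials)."""
    coeffs = np.polyfit(theta, r, degree)
    resid = r - np.polyval(coeffs, theta)
    if np.max(np.abs(resid)) <= max_err:
        return [coeffs]                 # one curve passes through the data
    upper = resid >= 0                  # e.g. the two branches of an island
    return [np.polyfit(theta[upper], r[upper], degree),
            np.polyfit(theta[~upper], r[~upper], degree)]
```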

28 A. Shoshani Feb. 2007 Classification The number of polynomials needed to fit the data and the number of gaps give the information needed to classify the node:
- zero gaps, one polynomial: quasiperiodic
- zero gaps, two polynomials: separatrix
- more than zero gaps: islands
Examples: 2 polynomials, 2 gaps → islands; 2 polynomials, 0 gaps → separatrix
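The classification rule itself is then a small lookup, sketched here:

```python
def classify_node(num_polynomials, num_gaps):
    """Classify a puncture-plot node from the piecewise-polynomial fit,
    following the rule above."""
    if num_gaps > 0:
        return "islands"          # gaps in theta indicate an island chain
    if num_polynomials == 1:
        return "quasiperiodic"    # one closed curve, no gaps
    return "separatrix"           # two curves, no gaps

print(classify_node(2, 2))  # islands
print(classify_node(2, 0))  # separatrix
print(classify_node(1, 0))  # quasiperiodic
```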

29 A. Shoshani Feb. 2007 Technology Details by Layer (layer diagram repeated; see slide 16)

30 A. Shoshani Feb. 2007 Example Data Flow in Terascale Supernova Initiative Logistical Network Courtesy: John Blondin

31 A. Shoshani Feb. 2007 Automate data generation, transfer and visualization of a large-scale simulation at ORNL Original TSI Workflow Example with John Blondin, NCSU

32 A. Shoshani Feb. 2007 Top-level TSI Workflow: automate data generation, transfer and visualization of a large-scale simulation at ORNL
1) Submit job to the Cray at ORNL
2) Check whether a time slice is finished
3) Aggregate all output into one large file and save it to HPSS
4) Split it into 22 files and store them in the XRaid
5) Notify the head node at NC State
6) The head node submits scheduling to SGE
7) SGE schedules the transfer for 22 nodes (Node-0 … Node-21 each retrieve a file from the XRaid)
8) Start EnSight to generate video files at the head node

33 A. Shoshani Feb. 2007 Using the Scientific Workflow Tool (Kepler) Emphasizing Dataflow (SDSC, NCSU, LLNL) Automate data generation, transfer and visualization of a large-scale simulation at ORNL

34 A. Shoshani Feb. 2007 New actors in the Fusion workflow to support automated data movement
(Diagram: the Kepler workflow engine starts two independent processes. (1) The simulation program (MPI) runs on Seaborg at NERSC and writes to a local disk cache; a File Watcher actor detects when files are generated, a Tar'ing actor tars them, an OTP Login actor logs in at ORNL (one-time password), and an Scp File copier actor moves the files to a disk cache on Ewok at ORNL. (2) A Local archiving actor archives the files into HPSS at ORNL. Software components are layered over the hardware + OS.)

35 A. Shoshani Feb. 2007 Re-applying Technology SDM technology, developed for one application, can be effectively targeted at many other applications …
(Table columns: Technology - Parallel NetCDF, Parallel VTK, Compressed bitmaps, Storage Resource Managers, Feature Selection, Scientific Workflow. Initial Application - Astrophysics, HENP, Climate, Biology. New Applications - Climate; Combustion, Astrophysics; Astrophysics; Fusion (exp. & simulation); Astrophysics.)

36 A. Shoshani Feb. 2007 Broad Impact of the SDM Center…
- Astrophysics: high-speed storage technology, parallel NetCDF, and integration software used for the Terascale Supernova Initiative (TSI) and FLASH simulations (ASCI FLASH - parallel NetCDF); scientific workflow. Tony Mezzacappa - ORNL, Mike Zingale - U of Chicago, Mike Papka - ANL, John Blondin - NCSU, Doug Swesty and Eric Myra - Stony Brook
- Climate: high-speed storage technology, Parallel NetCDF, and ICA technology (dimensionality reduction) used for climate modeling projects. Ben Santer - LLNL, John Drake - ORNL, John Michalakes - NCAR
- Combustion: compressed bitmap indexing used for fast generation of flame regions (region growing) and tracking their progress over time. Wendy Koegler, Jacqueline Chen - Sandia Lab

37 A. Shoshani Feb. 2007 Broad Impact (cont.)
- Biology: Kepler workflow system and web-wrapping technology used for executing complex, highly repetitive workflow tasks for processing microarray data (building a scientific workflow). Matt Coleman - LLNL
- High Energy Physics: compressed bitmap indexing and Storage Resource Managers used for locating desired subsets of data (events) and automatically retrieving data from HPSS (dynamic monitoring of HPSS file transfers). Doug Olson - LBNL, Eric Hjort - LBNL, Jerome Lauret - BNL
- Fusion: a combination of PCA and ICA technology used to identify the key parameters that are relevant to the presence of edge harmonic oscillations in a tokamak (identifying key parameters for the DIII-D Tokamak). Keith Burrell - General Atomics, Scott Klasky - PPPL

38 A. Shoshani Feb. 2007 Technology Details by Layer (layer diagram repeated; see slide 16)

39 A. Shoshani Feb. 2007 FastBit: An Efficient Indexing Technology For Accelerating Data Intensive Science Outline  Overview  Searching Technology  Applications http://sdm.lbl.gov/fastbit

40 A. Shoshani Feb. 2007 Searching Problems in Data Intensive Sciences
- Find the collision events with the most distinct signature of Quark Gluon Plasma
- Find the ignition kernels in a combustion simulation
- Track a layer of an exploding supernova
These are not typical database searches:
- Large high-dimensional data sets (1000 time steps X 1000 X 1000 X 1000 cells X 100 variables)
- No modification of individual records during queries, i.e., append-only data
- Complex questions: combinations of range conditions, e.g. (variable1 > 500) && (variable2 < 10^-4) && …
- Large answers (hit thousands or millions of records)
- Seek collective features such as regions of interest, beyond the typical average or sum

41 A. Shoshani Feb. 2007 Common Indexing Strategies Not Efficient
Task: searching high-dimensional append-only data with ad hoc range queries
- Most tree-based indices are designed to be updated quickly, e.g. the family of B-Trees; they sacrifice search efficiency to permit dynamic updates
- Hash-based indices are efficient for finding a small number of records, but not efficient for ad hoc multi-dimensional queries
- Most multi-dimensional indices suffer from the curse of dimensionality, e.g. R-trees, quad-trees, KD-trees, …; they don't scale to high dimensions (useful only below ~20 dimensions) and are inefficient if some dimensions are not queried

42 A. Shoshani Feb. 2007 Our Approach: An Efficient Bitmap Index
- Bitmap indices sacrifice update efficiency to gain more search efficiency; they are efficient for multi-dimensional queries and scale linearly in the number of dimensions actually used in a query
- Bitmap indices may demand too much space; we solve the space problem with an efficient compression method that reduces the index size (typically 30% of the raw data, vs. 300% for some B+-tree indices) and improves operational efficiency (10X speedup)
- We have applied FastBit to speed up a number of DOE-funded applications

43 A. Shoshani Feb. 2007 FastBit In a Nutshell
- FastBit is designed to search multi-dimensional append-only data, conceptually in table format: rows → objects, columns → attributes
- FastBit uses a vertical (column-oriented) organization for the data, which is efficient for searching
- FastBit uses bitmap indices with a specialized compression method, proven in analysis to be optimal for single-attribute queries and superior to others because they are also efficient for multi-dimensional queries

44 A. Shoshani Feb. 2007 Bit-Sliced Index
- Take advantage of the fact that the index only needs to support append-only data
- Partition each property into bins (e.g. for 0 < Np < 300, have 300 equal-size bins)
- For each bin, generate a bit vector
- Compress each bit vector (some version of run-length encoding)
(Figure: the resulting bit vectors for column 1, column 2, …, column n.)

45 A. Shoshani Feb. 2007 A Basic Bitmap Index
- First commercial version: Model 204, P. O'Neil, 1987
- Easy to build: faster than building B-trees
- Efficient for querying: only bitwise logical operations, e.g. A < 2 → b0 OR b1; A > 2 → b3 OR b4 OR b5
- Efficient for multi-dimensional queries: use bitwise operations to combine the partial results
- Size: one bit per distinct value per object. Definition: cardinality == number of distinct values. Compact for low-cardinality attributes only (say, < 100); need to control the size for high-cardinality attributes
(Figure: data values 0, 1, 5, 3, 1, 2, 0, 4, 1 and the corresponding bitmaps b0…b5 for =0, =1, =2, =3, =4, =5, with the bitmaps used for A < 2 highlighted.)
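A toy sketch of the same idea, using Python integers as uncompressed bit vectors; the data values match the example above, while real bitmap indices store compressed bitmaps.

```python
def build_bitmap_index(values):
    """One bitmap per distinct value; bit i is set if object i has that value.
    Python integers serve as (uncompressed) bit vectors."""
    bitmaps = {}
    for i, v in enumerate(values):
        bitmaps[v] = bitmaps.get(v, 0) | (1 << i)
    return bitmaps

A = [0, 1, 5, 3, 1, 2, 0, 4, 1]          # attribute values of 9 objects
index = build_bitmap_index(A)

# Evaluate "A < 2" as the bitwise OR of the bitmaps for values 0 and 1
result = 0
for v, bm in index.items():
    if v < 2:
        result |= bm

hits = [i for i in range(len(A)) if result >> i & 1]
print(hits)   # positions where A < 2  ->  [0, 1, 4, 6, 8]
```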

46 A. Shoshani Feb. 2007 Run-Length Encoding
Example - uncompressed: 000000000000 1111 000…000 1 00000000 11111 000…000 (12 zeros, 4 ones, ~1000 zeros, 1 one, 8 zeros, 5 ones, 492 zeros); compressed: 12, 4, 1000, 1, 8, 5, 492
Practical considerations:
- Store very short sequences as-is (literal words)
- Count bytes/words rather than bits (for long sequences)
- Use the first bit to indicate the type of word: literal or count
- Use the second bit of a count word to indicate a 0-sequence or 1-sequence
Example words: [literal] [31 0-words] [literal] [31 0-words] = [00 0F 00 00] [80 00 00 1F] [02 01 F0 00] [80 00 00 0F]
Other ideas: repeated byte patterns with counts; a well-known method used in Oracle is the Byte-aligned Bitmap Code (BBC)
Advantage: logical operations such as AND, OR, NOT, XOR, … and COUNT operations can be performed directly on compressed data

47 A. Shoshani Feb. 2007 FastBit Compression Method is Compute-Efficient
Example (2015 bits): 10000000000000000000011100000…0000001111111111111111111111111
Main idea: use run-length encoding, but
- Partition the bits into 31-bit groups (on 32-bit machines) and encode each group using one word
- Merge neighboring groups with identical bits, e.g. [31-bit literal] [fill word, count=63] [31-bit literal]
Name: Word-Aligned Hybrid (WAH) code (US patent 6,831,575)
Key features: WAH is compute-efficient because it uses run-length encoding (simple), allows operations directly on compressed bitmaps, and never breaks any words into smaller pieces during operations
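A much-simplified sketch of the word-aligned grouping idea (not the patented FastBit/WAH format): chop the bitmap into 31-bit groups, keep mixed groups as literal words, and merge runs of identical groups into counted fill words. The example reproduces the 2015-bit layout above: a literal word, a fill of 63 all-zero groups, and another literal word.

```python
def wah_like_encode(bits):
    """Simplified word-aligned encoding of a bit string of '0'/'1' characters.
    Each 31-bit group becomes either a literal word ('L', 31-bit pattern) or is
    merged into a fill word ('F', bit, run_length_in_groups). This is only the
    grouping-and-merging idea, not the real FastBit/WAH word format."""
    words = []
    for g in range(0, len(bits), 31):
        group = bits[g:g + 31].ljust(31, "0")
        if group == "0" * 31 or group == "1" * 31:            # all-identical group
            bit = group[0]
            if words and words[-1][0] == "F" and words[-1][1] == bit:
                words[-1] = ("F", bit, words[-1][2] + 1)       # extend the fill
            else:
                words.append(("F", bit, 1))
        else:
            words.append(("L", group))                         # mixed group: literal
    return words

example = ("1" + "0" * 20 + "111" + "0" * 7      # mixed group -> literal
           + "0" * (63 * 31)                     # 63 all-zero groups -> one fill
           + "0" * 6 + "1" * 25)                 # mixed group -> literal
print(len(example), wah_like_encode(example))    # 2015 bits -> 3 words
```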

48 A. Shoshani Feb. 2007 Compute-Efficient Compression Method: 10 times faster than the best-known method
(Chart: operation time vs. selectivity, showing WAH roughly 10X faster than the best-known compression method.)

49 A. Shoshani Feb. 2007 Time to Evaluate a Single-Attribute Range Condition in FastBit is Optimal
- Evaluating a single-attribute range condition may require OR'ing multiple bitmaps of the index
- Both analysis and timing measurements confirm that the query processing time is at worst proportional to the number of hits
(Charts: worst case with uniform random data and a realistic case with Zipf data; BBC = Byte-aligned Bitmap Code, the best-known bitmap compression, shown for comparison.)

50 A. Shoshani Feb. 2007 Processing Multi-Dimensional Queries
- Merging results from tree-based indices is slow, because sorting and merging are slow
- Merging results from bitmap indices is fast, because bitwise operations on bitmaps are efficient
(Figure: OR-ing the bitmaps for one attribute yields hits {1, 2, 5, 7, 9}, OR-ing for another yields {2, 4, 5, 8}; AND-ing the two result bitmaps yields the combined answer {2, 5}.)
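Continuing the integer-as-bitmap toy sketch, a two-attribute query is just an OR within each attribute followed by an AND across attributes (the values and ranges here are made up):

```python
def build_bitmap_index(values):
    """Same per-value bitmaps as in the earlier sketch."""
    bitmaps = {}
    for i, v in enumerate(values):
        bitmaps[v] = bitmaps.get(v, 0) | (1 << i)
    return bitmaps

def range_bitmap(index, lo, hi):
    """OR together the bitmaps of all values v with lo <= v <= hi."""
    result = 0
    for v, bm in index.items():
        if lo <= v <= hi:
            result |= bm
    return result

A = [1, 2, 7, 4, 2, 9, 5, 2, 6]
B = [5, 3, 2, 5, 8, 2, 5, 9, 1]
idx_A = build_bitmap_index(A)
idx_B = build_bitmap_index(B)

# "A in [2,5] AND B in [2,5]": OR within each attribute, AND across attributes
combined = range_bitmap(idx_A, 2, 5) & range_bitmap(idx_B, 2, 5)
print([i for i in range(len(A)) if combined >> i & 1])   # -> [1, 3, 6]
```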

51 A. Shoshani Feb. 2007 Multi-Attribute Range Queries Results are based on 12 most queried attributes (2.2 million records) from STAR High-Energy Physics Experiment with average attribute cardinality equal to 222,000 WAH compressed indices are 10X faster than bitmap indices from a DBMS, 5X faster than our own implementation of BBC Size of WAH compressed indices is only 30% of raw data size (a popular DBMS system uses 3-4X for B+-tree indices) 2-D queries 5-D queries

52 A. Shoshani Feb. 2007 The Case for Query-Driven Visualization
- Support compound range queries, e.g. get all cells where Temperature > 300K AND Pressure < 200 millibars
- Subsetting: only load data that corresponds to the query; get rid of visual "clutter"; reduce the load on the data analysis pipeline
- Quickly find and label connected regions
- Do it really fast!

53 A. Shoshani Feb. 2007 Architecture Overview: Query-Driven Vis Pipeline
(Diagram: a query is evaluated by FastBit over the indexed data, and the selected subset flows through the vis/analysis stage to the display.) [Stockinger, Shalf, Bethel, Wu 2005]

54 A. Shoshani Feb. 2007 DEX Visualization Pipeline
(Diagram: data and query feed the Visualization Toolkit (VTK); shown is a 3D visualization of a supernova explosion.) [Stockinger, Shalf, Bethel, Wu 2005]

55 A. Shoshani Feb. 2007 Extending FastBit to Find Regions of Interest
- Comparison to what VTK is good at: single-attribute iso-contouring
- But FastBit also does well on: multi-attribute search; region finding (produces the whole volume rather than a contour); region tracking
- Proved to have the same theoretical efficiency as the best iso-contouring algorithms; measured to be 3X faster than the best iso-contouring algorithms
- Implemented in the Dexterous Data Explorer (DEX), jointly with the Vis group [Stockinger, Shalf, Bethel, Wu 2005]

56 A. Shoshani Feb. 2007 Combustion: Flame Front Tracking Need to perform:
- Cell identification: identify all cells that satisfy user-specified conditions, such as "600 < Temperature < 700 AND HO2 concentration > 10^-7"
- Region growing: connect neighboring cells into regions
- Region tracking: track the evolution of the regions (i.e., features) through time
All steps are performed with bitmap structures. (Figure: tracked regions at times T1-T4.)
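A small 2D sketch of the first two steps - cell identification from range conditions, then region growing by connecting neighboring cells. The mesh, values, and thresholds are synthetic, and real FastBit region finding operates on compressed bitmaps rather than dense masks.

```python
import numpy as np
from collections import deque

def grow_regions(mask):
    """Label connected regions of True cells (4-connectivity in 2D;
    the 3D, 6-connectivity version is analogous)."""
    labels = np.zeros(mask.shape, dtype=int)
    next_label = 0
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue
        next_label += 1
        labels[start] = next_label
        queue = deque([start])
        while queue:
            i, j = queue.popleft()
            for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                if (0 <= ni < mask.shape[0] and 0 <= nj < mask.shape[1]
                        and mask[ni, nj] and not labels[ni, nj]):
                    labels[ni, nj] = next_label
                    queue.append((ni, nj))
    return labels, next_label

# Step 1 (cell identification): cells satisfying the range conditions
rng = np.random.default_rng(1)
temp = rng.uniform(300, 900, (64, 64))
ho2 = rng.uniform(0, 2e-7, (64, 64))
cells = (temp > 600) & (temp < 700) & (ho2 > 1e-7)

# Step 2 (region growing): connect neighboring cells into labelled regions
labels, n = grow_regions(cells)
print(n, "regions")
# Step 3 (tracking) would match region labels between consecutive time steps,
# e.g. by the overlap of their cell sets.
```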

57 A. Shoshani Feb. 2007 Time required to identify regions in 3D Supernova simulation (LBNL) On 3D data with over 110 million records, region finding takes less than 2 seconds Linear cost with number of segments [Wu, Koegler, Chen, Shoshani 2003]

58 A. Shoshani Feb. 2007 Extending FastBit to Compute Conditional Histograms
- Conditional histograms are common in data analysis, e.g. finding the number of malicious network connections in a particular time window
- Top left: a histogram of the number of connections to port 5554 of machines in the LBNL IP address space (two horizontal axes), with time on the vertical axis; two sets of scans are visible as two sheets
- Bottom left: FastBit computes conditional histograms much faster than common data analysis tools - 10X faster than ROOT, and 2X faster than ROOT with FastBit indices
[Stockinger, Bethel, Campbell, Dart, Wu 2006]
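A minimal sketch of a conditional histogram on synthetic connection records; the ports, subnets, and bin counts are made up, and FastBit computes such histograms from its indices rather than by scanning arrays as done here.

```python
import numpy as np

# Synthetic connection records: (timestamp, destination port, /24 subnet id)
rng = np.random.default_rng(2)
n = 100_000
ts = rng.uniform(0, 3600, n)             # seconds within one hour
port = rng.choice([22, 80, 443, 5554], size=n, p=[0.2, 0.5, 0.29, 0.01])
subnet = rng.integers(0, 256, n)

# Conditional histogram: count connections per (time bin, subnet),
# subject to the condition "port == 5554"
cond = port == 5554
hist, t_edges, s_edges = np.histogram2d(
    ts[cond], subnet[cond], bins=[60, 256],
    range=[[0, 3600], [0, 256]])
print(hist.sum(), "connections to port 5554, binned by minute and subnet")
```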

59 A. Shoshani Feb. 2007 A Nuclear Physics Example: STAR STAR: Solenoidal Tracker At RHIC; RHIC: Relativistic Heavy Ion Collider 600 participants / 50 institutions / 12 countries / in production since 2000 ~100 million collision events a year, ~5 MB raw data per event, several levels of summary data Generated 3 petabytes and 5 million files Append-only data, aka write-once read-many (WORM) data

60 A. Shoshani Feb. 2007 Grid Collector Benefits of the Grid Collector: transparent object access Selection of objects based on their attribute values Improvement of analysis system’s throughput Interactive analysis of data distributed on the Grid

61 A. Shoshani Feb. 2007 Finding Needles in STAR Data One of the primary goals of STAR is to search for Quark Gluon Plasma (QGP) A small number (~hundreds) of collision events may contain the clearest evidence of QGP Using high-level summary data, researchers found 80 special events Have track distributions that are indicative of QGP Further analysis needs to access more detailed data Detailed data are large (terabytes) and reside on HPSS May take many weeks to manually migrate to disk We located and retrieved the 80 events in 15 minutes

62 A. Shoshani Feb. 2007 Grid Collector Speeds up Analyses
- Test machine: 2.8 GHz Xeon, 27 MB/s read speed
- When searching for rare events, say selecting one event out of 1000, using GC is 20 to 50 times faster
- Using GC to read 1/2 of the events, the speedup is > 1.5; for 1/10 of the events, the speedup is > 2

63 A. Shoshani Feb. 2007 Summary - Applications Involving FastBit
FastBit implements an efficient patented compression technique to speed up searches in data-intensive scientific applications:
- STAR: search for rare events with special significance - BNL (STAR collaboration)
- Combustion data analysis: finding and tracking ignition kernels - Sandia (Combustion Research Facility)
- Dexterous Data Explorer (DEX): interactive exploration of large scientific data (visualize regions of interest) - LBNL Vis group
- Network traffic analysis: enable interactive analysis of network traffic data for forensic and live stream data - LBNL Vis group, NERSC/ESnet security
- DNA sequencing anomaly detection: finding anomalies in raw DNA sequencing data to diagnose sequencing machine operations and DNA sample preparations - JGI

64 A. Shoshani Feb. 2007 Technology Details by Layer (layer diagram repeated; see slide 16)

65 A. Shoshani Feb. 2007 What is SRM? Storage Resource Managers (SRM) are middleware components whose function is to provide Dynamic space allocation Dynamic file management in space For shared storage components on the WAN

66 A. Shoshani Feb. 2007 Motivation Suppose you want to run a job on your local machine Need to allocate space Need to bring all input files Need to ensure correctness of files transferred Need to monitor and recover from errors What if files don’t fit space? Need to manage file streaming Need to remove files to make space for more files Now, suppose that the machine and storage space is a shared resource Need to do the above for many users Need to enforce quotas Need to ensure fairness of scheduling users

67 A. Shoshani Feb. 2007 Motivation Now, suppose you want to do that on a WAN Need to access a variety of storage systems mostly remote systems, need to have access permission Need to have special software to access mass storage systems Now, suppose you want to run distributed jobs on the WAN Need to allocate remote spaces Need to move (stream) files to remote sites Need to manage file outputs and their movement to destination site(s)

68 A. Shoshani Feb. 2007 Ubiquitous and Transparent Data Access and Sharing
(Diagram: data analysis at multiple sites draws transparently on disks (terabytes) and tape archives (petabytes, e.g. HPSS).)

69 A. Shoshani Feb. 2007 Interoperability of SRMs
(Diagram: users/applications and Grid middleware clients talk to a uniform SRM interface in front of heterogeneous storage systems - Enstore, JASMine, dCache, Castor, Unix-based disks, and the SE at CCLRC RAL.)

70 A. Shoshani Feb. 2007 SDSC Storage Resource Broker - Grid Middleware
(Diagram: applications use the SRB client library to reach local and remote SRB servers, which use the MCAT metadata catalog (Dublin Core resource, user, and application metadata) to access files and database objects stored in UniTree, HPSS, DB2, Oracle, or Unix file systems. This figure was taken from one of the talks by Reagan Moore.)

71 A. Shoshani Feb. 2007 SRM vs. SRB
- Storage Resource Broker (SRB): a very successful product from SDSC with a long history; a centralized solution where all requests go to a central server that includes a metadata catalog (MCAT); developed by a single institution
- Storage Resource Management (SRM): based on an open standard; developed by multiple institutions for their storage systems; designed for interoperation of heterogeneous storage systems
- Features of SRM that SRB does not deal with: managing storage space dynamically based on the client's request; managing the content of a space based on a lifetime controlled by the client; support for file streaming by pinning and releasing files
- Several institutions now ask for an SRM interface to SRB; in GGF there is an activity to bridge these technologies

72 A. Shoshani Feb. 2007 SRM inter-op testing (GGF: Global Grid Forum, GIN: Grid Interoperability Now)
(Diagram: an SRM-TESTER client is initiated (1), tests storage sites according to the SRM spec v1.1 and v2.2 (2), and publishes the test results to the web (3). Tested SRM sites include CERN (LCG), IC.UK (EGEE), UIO (ARC), SDSC (OSG), LBNL (STAR), APAC, Grid.IT, FNAL (CMS), and VU; transfers use GridFTP, HTTP(s), and FTP services. Part of the GGF GIN-Data activity.)

73 A. Shoshani Feb. 2007 Testing Operations Results
(Table: for each site - ARC (UIO.NO), EGEE (IC.UK), CMS (FNAL.GOV), LCG/EGEE (CERN), OSG (SDSC), STAR (LBNL) - the tester reports pass/fail for ping, put, get, advisory delete, copy between SRMs, and copy via gsiftp. Most operations pass; ARC shows failures on several operations, CERN reports N.A. for one copy mode, and OSG shows one copy failure.)

74 A. Shoshani Feb. 2007 Peer-to-Peer Uniform Interface
(Diagram: a client program or command-line client at the client's site has its own disk cache and Storage Resource Manager; sites 1, 2, …, N each run a Storage Resource Manager in front of their disk caches or mass storage system (MSS). All communicate over the network through the uniform SRM interface.)

75 A. Shoshani Feb. 2007 Earth Science Grid Analysis Environment
(Diagram: components deployed across LBNL, LLNL, ISI, NCAR, ORNL, and ANL, communicating via SOAP and RMI - a Tomcat servlet engine with MCS (Metadata Cataloguing Services), RLS (Replica Location Services), and MyProxy clients; a MyProxy server; a GRAM gatekeeper; CAS (Community Authorization Services); DRM and HRM Storage Resource Managers in front of disks, HPSS (High Performance Storage System), and the NCAR MSS; gridFTP servers, including a striped gridFTP server; and an openDAPg server.)

76 A. Shoshani Feb. 2007 History and Partners in the SRM Collaboration
- 5 years of Storage Resource Management (SRM) activity
- Experience with SRM system implementations:
- Mass storage systems: HRM-HPSS (LBNL, ORNL, BNL), Enstore (Fermi), JasMINE (Jlab), Castor (CERN), MSS (NCAR), Castor (RAL), …
- Disk systems: DRM (LBNL), jSRM (Jlab), DPM (CERN), universities, …
- Combination systems: dCache (Fermi), a sophisticated multi-storage system; L-Store (Vanderbilt U), based on Logistical Networking; StoRM (ICTP, Trieste, Italy), an interface to parallel file systems

77 A. Shoshani Feb. 2007 Standards for Storage Resource Management Main concepts Allocate spaces Get/put files from/into spaces Pin files for a lifetime Release files and spaces Get files into spaces from remote sites Manage directory structures in spaces SRMs communicate as peer-to-peer Negotiate transfer protocols
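To illustrate how these concepts fit together, here is a toy model of a managed space with pinning, release, and garbage collection. It is not a real SRM client, protocol, or API; the class, method names, file names, and sizes are all invented for illustration.

```python
import time

class ToySpace:
    """Toy model of SRM-style space management: files are brought into a
    bounded space, pinned for a lifetime, and become eligible for garbage
    collection once released or expired. Not a real SRM implementation."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.files = {}          # name -> {"size": int, "pinned_until": float}

    def used(self):
        return sum(f["size"] for f in self.files.values())

    def get(self, name, size, pin_lifetime=600.0):
        """Bring a file into the space, evicting unpinned files if needed."""
        while self.used() + size > self.capacity:
            victim = next((n for n, f in self.files.items()
                           if f["pinned_until"] < time.time()), None)
            if victim is None:
                raise RuntimeError("space full and everything is pinned")
            del self.files[victim]                      # garbage collection
        # (a real SRM would now transfer the file, e.g. via GridFTP)
        self.files[name] = {"size": size,
                            "pinned_until": time.time() + pin_lifetime}

    def release(self, name):
        """Release the pin so the file may be evicted to make room."""
        self.files[name]["pinned_until"] = 0.0

space = ToySpace(capacity_bytes=10 * 2**30)             # a 10 GB space
space.get("run01/file_0001.root", 2 * 2**30)            # pinned for 10 minutes
space.release("run01/file_0001.root")                   # now evictable
```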

78 A. Shoshani Feb. 2007 DataMover Perform “rcp –r directory” on the WAN

79 A. Shoshani Feb. 2007 SRMs support data movement between storage systems
(Diagram, based on the Grid Architecture paper by the Globus team: the fabric layer contains compute systems, networks, the mass storage system (HPSS), and other storage systems, with the Storage Resource Manager and Compute Resource Management providing resource-level sharing, plus resource monitoring/auditing. The connectivity layer provides communication protocols (e.g. the TCP/IP stack) and authentication and authorization protocols (e.g. GSI). General collective services for coordinating multiple resources include the file transfer service (GridFTP), data transport services, data movement and storage, database management services, request interpretation and planning services, workflow or request management services, monitoring/auditing services, consistency services (e.g. update subscription, versioning, master copies), data federation services, general data discovery services, and data filtering or transformation services. Collective services specific to an application domain or virtual organization include community authorization services, application-specific data discovery services, data filtering or transformation services, and compute scheduling (brokering).)

80 A. Shoshani Feb. 2007 Massive Robust File Replication Multi-File Replication – why is it a problem? Tedious task – many files, repetitious Lengthy task – long time, can take hours, even days Error prone – need to monitor transfers Error recovery – need to restart file transfers Stage and archive from MSS – limited concurrency, down time, transient failures Commercial MSS – HPSS at NERSC, ORNL, …, Legacy MSS – MSS at NCAR Independent MSS – Castor (CERN), Enstore (Fermilab), JasMINE (Jlab)
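The core pattern DataMover automates is a monitored copy loop that retries individual failures instead of restarting the whole replication. A conceptual sketch follows; copy_one is a hypothetical stand-in for staging, transferring, and archiving one file, and the retry/backoff parameters are invented.

```python
import random
import time

def replicate(files, copy_one, max_retries=5, backoff_s=30):
    """Copy a list of files, retrying individual failures so that a long
    multi-file transfer survives transient staging, network, or archiving
    errors. `copy_one(f)` stands in for stage + transfer + archive of one file."""
    pending = list(files)
    for attempt in range(max_retries):
        failed = []
        for f in pending:
            try:
                copy_one(f)
                print("done:", f)
            except Exception as err:          # transient failure: try again later
                print("failed:", f, err)
                failed.append(f)
        if not failed:
            return []
        pending = failed
        time.sleep(backoff_s)                 # wait before the next pass
    return pending                            # files that never made it

def flaky_copy(f):
    """Deliberately unreliable copy function for the demonstration."""
    if random.random() < 0.3:
        raise IOError("simulated transfer error")

leftover = replicate([f"ts_{i:04d}.nc" for i in range(5)], flaky_copy, backoff_s=0)
print("unrecovered:", leftover)
```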

81 A. Shoshani Feb. 2007 DataMover: SRMs used in ESG for robust multi-file replication
(Diagram: the DataMover gets the list of files from the source directory, creates an identical directory at the target, and issues SRM-COPY for thousands of files; the writing SRM issues SRM-GET one file at a time and pulls each file with GridFTP GET over the network. One SRM runs at LBNL/ORNL and the other at NCAR in front of NCAR-MSS, each with its own disk cache, staging and archiving files on their mass storage systems. The system recovers from staging failures, file transfer failures, and archiving failures, and a web-based file monitoring tool tracks progress.)

82 A. Shoshani Feb. 2007 Web-Based File Monitoring Tool Shows: files already transferred, files during transfer, and files to be transferred. Also shows for each file: source URL, target URL, and transfer rate.

83 A. Shoshani Feb. 2007 Shows that archiving is the bottleneck File tracking helps to identify bottlenecks

84 A. Shoshani Feb. 2007 File tracking shows recovery from transient failures Total: 45 GBs

85 A. Shoshani Feb. 2007 Robust Multi-File Replication Main results DataMover is being used in production for over three years Moves about 10 TBs a month currently Averages 8 MB/s (64 Mb/sec) over WAN Eliminated person-time to monitor transfer and recover from failures Reduced error rates from about 1% to 0.02% (50 fold reduction)* * http://www.ppdg.net/docs/oct04/ppdg-star-oct04.doc

86 A. Shoshani Feb. 2007 Summary: lessons learned
- Scientific workflow is an important paradigm: coordination of tasks AND management of data flow; managing repetitive steps; tracking, estimation
- Efficient I/O is often the bottleneck: technology essential for efficient computation; mass storage needs to be seamlessly managed
- Opportunities to interact with math packages: general analysis tools are useful; parallelization is key to scaling; visualization is an integral part of analysis
- Data movement is complex: network infrastructure is not enough (it can be unreliable); need robust software to manage failures; need to manage space allocation; managing format mismatch is part of data flow
- Metadata is emerging as an important need: description of experiments/simulations; provenance; use of hints / access patterns

87 A. Shoshani Feb. 2007 Data and Storage Challenges: Still to be Overcome
General open problems: multiple parallel file systems; a common data model; coordinated scheduling of resources; reservations and workflow management; multiple data formats; running coupled codes; coordinated data movement (not just files); data reliability / monitoring / recovery; tracking data for long-running jobs; security (authentication + authorization)
Fundamental technology areas, from the report of the DOE Office of Science Data-Management Workshops (March-May 2004): efficient access and queries, data integration; distributed data management, data movement, networks; storage and caching; data analysis, visualization, and integrated environments; metadata, data description, logical organization; workflow, data flow, data transformation
URL: http://www-user.slac.stanford.edu/rmount/dm-workshop-04/Final-report.pdf

88 A. Shoshani Feb. 2007 SDM Mini-Symposium
- High-Performance Parallel Data and Storage Management - Alok Choudhary (NWU), Rob Ross (ANL), Robert Latham (ANL)
- Mining Science Data - Chandrika Kamath (LLNL)
- Accelerating Scientific Exploration with Workflow Automation Systems - Ilkay Altintas (SDSC), Terence Critchlow (LLNL), Scott Klasky (ORNL), Bertram Ludaescher (UC Davis), Steve Parker (U of Utah), Mladen Vouk (NCSU)
- High Performance Statistical Computing with Parallel R and Star-P - Nagiza Samatova (ORNL), Alan Edelman (MIT)

