1
Using Bitmap Index to Speed up Analyses of High-Energy Physics Data
John Wu, Arie Shoshani, Alex Sim, Junmin Gu, Art Poskanzer (Lawrence Berkeley National Laboratory)
Wei-Ming Zhang (Kent State University)
Jerome Lauret (Brookhaven National Laboratory)
2
Outline
- Overview of bitmap indexes
- Introduction to FastBit
- Overview of the Grid Collector
- Two use cases: "common" jobs and "exotic" jobs
3
Basic Bitmap Index
- Compact: one bit per distinct value per object
- Easy to build: faster to construct than common B-trees
- Efficient to query: answers require only bitwise logical operations
- Efficient for multi-dimensional queries: bitwise operations combine the partial results

Example with data values A = 0, 1, 5, 3, 1, 2, 0, 4, 1 (one bitmap per distinct value):

  b0 (=0): 1 0 0 0 0 0 1 0 0
  b1 (=1): 0 1 0 0 1 0 0 0 1
  b2 (=2): 0 0 0 0 0 1 0 0 0
  b3 (=3): 0 0 0 1 0 0 0 0 0
  b4 (=4): 0 0 0 0 0 0 0 1 0
  b5 (=5): 0 0 1 0 0 0 0 0 0

Range conditions reduce to bitwise ORs: A < 2 is b0 OR b1; 2 < A < 5 is b3 OR b4.
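The equality-encoded bitmap index above can be sketched in a few lines. This is an illustration using Python integers as bit vectors, not FastBit's actual API; the class and method names are mine.

```python
# Sketch of a basic (equality-encoded) bitmap index. Each distinct value
# gets one bitmap; bit i of that bitmap is set if row i holds the value.

class BitmapIndex:
    def __init__(self, values):
        self.n = len(values)
        self.bitmaps = {}                      # one bitmap per distinct value
        for i, v in enumerate(values):
            self.bitmaps[v] = self.bitmaps.get(v, 0) | (1 << i)

    def query_range(self, lo, hi):
        """Bitmap of rows with lo <= value <= hi, using only bitwise ORs."""
        result = 0
        for v, bm in self.bitmaps.items():
            if lo <= v <= hi:
                result |= bm
        return result

    def rows(self, bitmap):
        """Row indices whose bit is set in the result bitmap."""
        return [i for i in range(self.n) if bitmap >> i & 1]

# The slide's example: A = 0, 1, 5, 3, 1, 2, 0, 4, 1
idx = BitmapIndex([0, 1, 5, 3, 1, 2, 0, 4, 1])
print(idx.rows(idx.query_range(0, 1)))   # A < 2  (b0 OR b1) -> [0, 1, 4, 6, 8]
print(idx.rows(idx.query_range(3, 4)))   # 2 < A < 5 (b3 OR b4) -> [3, 7]
```

A multi-dimensional condition would simply AND the result bitmaps of the per-attribute range queries.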
4
An Efficient Compression Scheme: Word-Aligned Hybrid (WAH) Code
Example: a 2015-bit bitmap consisting of a short literal prefix, a long run of 0s, and a trailing run of 1s.
- Group the bits into 65 groups of 31 bits each
- Merge neighboring groups with identical bits
- Encode each group (or merged run) using one word
The example compresses into three WAH words:
- Literal word: the first 31 bits (01000...)
- Fill word (100...): a run of 63 all-zero groups, i.e., 63*31 bits, stored as a run length
- Literal word: the last 31 bits (001...111)
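The grouping-and-merging step above can be sketched as a simplified WAH encoder for 32-bit words. This is an illustration of the scheme, not FastBit's production encoder; it assumes the input length is a multiple of 31 bits.

```python
# Simplified Word-Aligned Hybrid (WAH) encoding with 32-bit words.
# Bits are grouped into 31-bit chunks; a run of identical chunks (all 0s or
# all 1s) becomes one "fill" word (MSB = 1, then the fill bit, then a run
# length counted in groups); any other chunk becomes a "literal" word (MSB = 0).

def wah_encode(bits):
    """bits: list of 0/1 whose length is a multiple of 31. Returns 32-bit words."""
    groups = [bits[i:i + 31] for i in range(0, len(bits), 31)]
    words = []
    i = 0
    while i < len(groups):
        g = groups[i]
        if all(b == g[0] for b in g):                # uniform group: try a fill
            run = 1
            while i + run < len(groups) and groups[i + run] == g:
                run += 1
            if run > 1:
                # fill word: 1 | fill-bit | 30-bit run length (in groups)
                words.append(1 << 31 | g[0] << 30 | run)
                i += run
                continue
        value = 0                                    # literal word: MSB 0,
        for b in g:                                  # then the 31 group bits
            value = value << 1 | b
        words.append(value)
        i += 1
    return words

# A 2015-bit bitmap shaped like the slide's example (65 groups of 31 bits):
# one literal group, a 63-group run of zeros, and a final literal group.
bits = [0] * 2015
bits[0] = bits[21] = bits[22] = bits[23] = 1     # first 31-bit group
for j in range(2015 - 25, 2015):                 # trailing run of ones
    bits[j] = 1
print(len(wah_encode(bits)))                     # -> 3 words
```

Because every code word is word-aligned, bitwise AND/OR of two compressed bitmaps can be done directly on the words without decompressing, which is what makes query evaluation fast.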
5
Compressed Bitmap Index Is Compact
The expected index size for a uniformly random attribute (in number of words) is smaller than that of typical B-trees (3N to 4N words), where N is the number of rows, w is the number of bits per word, and c is the number of distinct values, i.e., the attribute cardinality.
(Plots: measured index sizes for 100 million rows of synthetic data and 25 million rows of combustion data.)
6
Compressed Bitmap Index Is Optimal for One-Dimensional Queries
- Compressed bitmap indices are optimal for one-attribute range conditions: query processing time is at worst proportional to the number of hits
- Only a small number of the most efficient indexing schemes, such as the B-tree, have this property
- Bitmap indices are also efficient for multi-dimensional queries
7
Compressed Bitmap Index Is Efficient for Multi-Dimensional Queries
(Plot: log-log comparison of query processing time for queries of different sizes.)
The compressed bitmap index is at least 10X faster than the B-tree and 3X faster than the projection index.
8
Data Analysis Process in STAR
- Users want to analyze "some" (not all) events
- Events are stored in millions of files
- Files are distributed over many storage systems
To perform an analysis, a user needs to:
- Prepare the analysis
  - Write the analysis code
  - Specify the events of interest
- Run the analysis
  1. Locate the files containing the events of interest
  2. Prepare disk space for the files
  3. Transfer the files to the disks
  4. Recover from any errors
  5. Read the events of interest from the files
  6. Remove the files
9
Components of the Grid Collector
(Legend in the original diagram: red marks new components, purple existing components.)
1. Locate the files containing the events of interest
   - Event Catalog, file and replica catalogs
2. Prepare disk space and transfer
   a. Prepare disk space for the files: Disk Resource Manager (DRM)
   b. Transfer the files to the disks: Hierarchical Resource Manager (HRM) to access HPSS; on-demand transfers from HRM to DRM
   c. Recover from any errors: HRM recovers from HPSS failures; DRM recovers from network transfer failures
3. Read the events of interest from the files
   - Event Iterator with fast-forward capability
4. Remove the files
   - DRM performs garbage collection using pinning and lifetime
Consistent with other SRM-based strategies and tools.
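The "Event Iterator with fast-forward capability" in step 3 can be sketched as follows. This is a minimal illustration of the idea, assuming the Event Catalog has already returned the matching event IDs; the names (`iterate_selected`, `events`) are hypothetical, and in STAR the lookup would be a seek-and-read inside a MuDST file rather than a list access.

```python
# Sketch of an event iterator with fast-forward: instead of scanning every
# event in a file, jump directly to the events the index query selected.

def iterate_selected(events, selected_ids):
    """Yield only the selected events, skipping (fast-forwarding past) the rest."""
    for event_id in sorted(set(selected_ids)):
        yield events[event_id]       # stand-in for a seek + read in the real file

# Usage: the Event Catalog says events 2 and 7 satisfy the query conditions.
events = [f"event-{i}" for i in range(10)]
print(list(iterate_selected(events, [7, 2])))   # -> ['event-2', 'event-7']
```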
10
Grid Collector: Architecture
(Diagram: on the client side, the analysis code issues a new query and consumes events through an Event Iterator. On the server side, the Grid Collector comprises an Event Catalog (in: conditions; out: logical files and event IDs), a File Locator (in: logical name; out: physical location), a File Scheduler (in: physical file) that drives the DRM, an Administrator (fetch tag file, load subset, rollback, commit), and an Index Builder (in: STAR tag file; out: bitmap index). Storage resources include NFS and local disk, Replica Catalogs, and HRMs.)
11
FastBit Index for the Event Catalog
For 13 million events in a 62 GeV production (STAR 2004):
- Event Catalog size (including base data and bitmap indices): 27 GB
  - tags: 6.0 GB (part of the base data of the Event Catalog)
- MuDST: 4.1 TB
- event: 8.6 TB
- raw: 14.6 TB
- Time to produce tags, MuDST, and event files from raw data: 3.5 months on 300+ CPUs
- Time to build the catalog: 5 days on one CPU
12
Grid Collector Speeds Up Reading
- Test machine: 2.8 GHz Xeon, 27 MB/s read speed
- Without the Grid Collector, an analysis job reads all events
- Speedup = (time to read all events) / (time to read only the selected events with the Grid Collector)
- The observed speedup is always ≥ 1
- When searching for rare events, say one event in 1000, using the Grid Collector is 20 to 50 times faster
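A toy cost model (my assumption, not from the slides) helps explain why a 1-in-1000 selection yields a 20-50X speedup rather than the naive 1000X: skipping an event is cheap but not free, since seeks, index lookups, and file staging still cost something.

```python
# Toy model of selective-read speedup. t_read is the cost of reading one
# event; t_skip is the (much smaller, but nonzero) cost of skipping one.
# The per-event skip cost of 2% of a read is an illustrative assumption.

def speedup(n_events, n_selected, t_read, t_skip):
    """Full-scan time divided by selective-read time under this cost model."""
    full = n_events * t_read
    selective = n_selected * t_read + (n_events - n_selected) * t_skip
    return full / selective

# Selecting 1,000 events out of 1,000,000 with skips costing 2% of a read:
print(round(speedup(1_000_000, 1_000, 1.0, 0.02)))   # -> 48
```

With t_skip = 0 the model recovers the ideal 1000X; the measured 20-50X range corresponds to small but nonzero skip overheads.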
13
Grid Collector Speeds Up Actual Analysis
- Speedup = (time used with the existing filtering mechanism) / (time used with the Grid Collector selecting the same events)
- Tested on flow analysis jobs
- Test data set: 51 MuDST files, 8 GB, 25,000 events (P04ij)
- The test data uses an efficient organization that favors the existing filtering mechanism: only part of the event data is read for filtering
- Real analysis jobs typically include their own filtering mechanisms, and may also spend a significant amount of time performing computation
- On a set of "real" analysis jobs that typically select about 10% of the events, the Grid Collector achieves a speedup of 2 in CPU time and 1.4 in elapsed time
- Speeding up all jobs by 1.4X means the same computer center can accommodate 40% more analysis jobs
14
Grid Collector Enables Hard Analysis Jobs
- Searching for anti-3He (Lee Barnby, Birmingham): an initial study identified collision events that possibly contain anti-3He and need further analysis (2000)
- Searching for strangelets (Aihong Tang, BNL): an initial study identified events that may indicate the existence of strangelets and need further investigation (2000)
- Without the Grid Collector, one has to retrieve every file from HPSS and scan it for the wanted events, which may take weeks or months; no one wants to do that
- With the Grid Collector, both analyses completed in a day
15
Summary
The Grid Collector makes use of two distinct technologies, FastBit and SRM (Storage Resource Manager), to speed up common analysis jobs where the files are already on disk, and to enable difficult analysis jobs where some files may not be on disk.
Contact information:
- John Wu, John.Wu@nersc.gov
- Jerome Lauret, JLauret@bnl.gov