slide 1 PS1 PSPS Object Data Manager Design. PSPS Critical Design Review, November 5-6, 2007, IfA
slide 2 Detail Design Outline: General Concepts, Distributed Database Architecture, Ingest Workflow, Prototype
slide 3 Zones (spatial partitioning and indexing algorithm)
- Partition and bin the data into declination zones: ZoneID = floor((dec + 90.0) / zoneHeight)
- A few tricks are required to handle spherical geometry
- Place the data close on disk: clustered index on ZoneID and RA
- Fully implemented in SQL
- Efficient nearby searches and cross-match (especially)
- Fundamental role in addressing the critical requirements: data volume management, association speed, spatial capabilities
(Diagram: zones laid out in Declination (Dec) versus Right Ascension (RA).)
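As a rough illustration of the zone idea (not the production PS1 schema; the table objZone and the index name are hypothetical), the ZoneID computation and the clustering choice can be expressed directly in T-SQL:

DECLARE @zoneHeight float;
SET @zoneHeight = 8.0 / 3600.0;   -- 8 arcsec zones, expressed in degrees

SELECT objID,
       CAST(FLOOR((dec + 90.0) / @zoneHeight) AS int) AS zoneID,
       ra, dec
FROM   dbo.objZone;               -- hypothetical table (objID, ra, dec, cx, cy, cz)

-- Clustering on (zoneID, ra) places spatially nearby objects close together on disk:
-- CREATE CLUSTERED INDEX idx_zoneID_ra ON dbo.objZone (zoneID, ra);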
slide 4 Zoned Table
ObjID   ZoneID*   RA      Dec     CX   CY   CZ   …
1             0     0.0   -90.0
2         20250   180.0     0.0
3         20250   181.0     0.0
4         40500   360.0   +90.0
* ZoneHeight = 8 arcsec in this example
ZoneID = floor((dec + 90.0) / zoneHeight)
slide 5 SQL CrossNeighbors
-- @r is the search radius (degrees); zoneZone lists, for each pair of zones
-- within @r of each other, the RA tolerance (alpha) at that declination.
SELECT *
FROM prObj1 z1
JOIN zoneZone ZZ ON ZZ.zoneID1 = z1.zoneID
JOIN prObj2 z2   ON ZZ.zoneID2 = z2.zoneID
WHERE z2.ra BETWEEN z1.ra - ZZ.alpha AND z1.ra + ZZ.alpha      -- coarse RA filter per zone pair
  AND z2.dec BETWEEN z1.dec - @r AND z1.dec + @r               -- coarse Dec filter
  AND (z1.cx*z2.cx + z1.cy*z2.cy + z1.cz*z2.cz) > cos(radians(@r))  -- exact angular-distance test
slide 6 Good CPU Usage
slide 7 Partitions
- SQL Server 2005 introduces technology to handle tables that are partitioned across different disk volumes and managed by a single server
- Partitioning makes management and access of large tables and indexes more efficient
  - Enables parallel I/O
  - Reduces the amount of data that needs to be accessed
  - Related tables can be aligned and collocated in the same place, speeding up JOINs
slide 8 Partitions
Two key elements:
- Partitioning function: specifies how the table or index is partitioned
- Partitioning scheme: using a partitioning function, the scheme specifies the placement of the partitions on filegroups
Data can be managed very efficiently using partition switching:
- Add a table as a partition to an existing table
- Switch a partition from one partitioned table to another
- Reassign a partition to form a single table
Main requirement: the table must be constrained on the partitioning column
(A minimal T-SQL sketch of these elements follows.)
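The sketch below uses placeholder names, boundary values, and filegroups rather than the real PS1 configuration; it only illustrates the function / scheme / constrained-table trio described above:

CREATE PARTITION FUNCTION pf_ObjectID (bigint)
AS RANGE RIGHT FOR VALUES (100000000000000000, 200000000000000000);

CREATE PARTITION SCHEME ps_ObjectID
AS PARTITION pf_ObjectID TO (fg1, fg2, fg3);

CREATE TABLE Objects_p (
    objID bigint NOT NULL,
    ra    float  NOT NULL,
    dec   float  NOT NULL,
    CONSTRAINT pk_Objects_p PRIMARY KEY CLUSTERED (objID)
) ON ps_ObjectID (objID);

-- A staging table check-constrained to one ObjectID range can then be switched
-- in as a partition, which is a metadata-only operation:
-- ALTER TABLE Objects_stage SWITCH TO Objects_p PARTITION 2;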
slide 9 Partitions
For the PS1 design, partitions mean filegroup partitions. Tables are partitioned into ranges of ObjectID, which correspond to declination ranges. ObjectID boundaries are selected so that each partition has a similar number of objects.
slide 10 Distributed Partitioned Views
Tables participating in a Distributed Partitioned View (DPV) reside in different databases, which in turn reside on different instances or different (linked) servers, as sketched below.
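A hedged sketch of such a view; the linked-server, database, and table names (Slice01, PS1_p1, Detections_p1, and so on) are placeholders, not the actual PS1 names:

CREATE VIEW Detections
AS
SELECT * FROM Slice01.PS1_p1.dbo.Detections_p1
UNION ALL
SELECT * FROM Slice02.PS1_p2.dbo.Detections_p2;
-- CHECK constraints on the partitioning column (the objID range of each member
-- table) let the optimizer skip remote members that cannot satisfy a query.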
slide 11 Concept: Slices
- In the PS1 design, the bigger tables will be partitioned across servers
- To avoid confusion with the filegroup partitioning, we call them "slices"
- Data is glued together using Distributed Partitioned Views
- The ODM will manage slices; using slices improves system scalability
- For the PS1 design, tables are sliced into ranges of ObjectID, which correspond to broad declination ranges; each slice is subdivided into partitions that correspond to narrower declination ranges
- ObjectID boundaries are selected so that each slice has a similar number of objects
slide 12 Detail Design Outline: General Concepts, Distributed Database Architecture, Ingest Workflow, Prototype
slide 13 PS1 Distributed DB System (architecture diagram)
The main PS1 database holds the full tables PartitionsMap, Objects, LnkToObj, and Meta plus the Detections partitioned view. Slices P1 … Pm hold the partitioned tables [Objects_p], [LnkToObj_p], [Detections_p] and Meta. Load servers (LoadAdmin, Load Support 1 … n) hold objZoneIndx, Orphans_l, Detections_l, and LnkToObj_l staging tables and the detections output. All servers are connected as linked servers, and users reach the system through the Query Manager (QM) and the Web Based Interface (WBI). Legend: database, full table, [partitioned table], output table, partitioned view.
slide 14 Design Decisions: ObjID
- Objects have their positional information encoded in their objID: fGetPanObjID (ra, dec, zoneH)
- ZoneID is the most significant part of the ID
- This gives scalability, performance, and spatial functionality
- Object tables are range partitioned according to their object ID
slide 15 ObjectID Clusters Data Spatially
ObjectID = 087941012871550661
Dec = -16.71611583
ZH = 0.008333
ZID = (Dec + 90) / ZH = 08794.0661
RA = 101.287155
ObjectID is unique when objects are separated by > 0.0043 arcsec
(An illustrative encoding sketch follows.)
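The exact fGetPanObjID encoding is not spelled out on the slide, so the function below is only an assumed, illustrative layout (zoneID, then RA in microdegrees, then a 4-digit sub-zone offset). With zoneH = 1.0/120.0 it reproduces the worked example above up to floating-point rounding, but the real PS1 function may differ in detail:

CREATE FUNCTION dbo.fGetPanObjID_sketch (@ra float, @dec float, @zoneH float)
RETURNS bigint
AS
BEGIN
    DECLARE @zone float, @zoneID bigint, @raPart bigint, @fracPart bigint;
    SET @zone     = (@dec + 90.0) / @zoneH;              -- e.g. 8794.0661
    SET @zoneID   = FLOOR(@zone);                        -- most significant digits
    SET @raPart   = FLOOR(@ra * 1000000.0);              -- RA in microdegrees (assumed 9 digits)
    SET @fracPart = FLOOR((@zone - @zoneID) * 10000.0);  -- sub-zone Dec offset (assumed 4 digits)
    RETURN (@zoneID * 1000000000 + @raPart) * 10000 + @fracPart;
END;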
slide 16 Design Decisions: DetectID
- Detections have their positional information encoded in the detection identifier: fGetDetectID (dec, observationID, runningID, zoneH)
- Primary key (objID, detectionID) aligns detections with objects within partitions
- Provides efficient access to all detections associated with one object
- Provides efficient access to all detections of nearby objects
slide 17 DetectionID Clusters Data in Zones
DetectID = 0879410500001234567
Dec = -16.71611583
ZH = 0.008333
ZID = (Dec + 90) / ZH = 08794.0661
ObservationID = 1050000
Running ID = 1234567
(A matching encoding sketch follows.)
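Again as an assumption rather than the real fGetDetectID: packing zoneID, observationID, and the running number as decimal digit groups (5 + 7 + 7 digits) reproduces the example above, although the actual encoding may differ:

CREATE FUNCTION dbo.fGetDetectID_sketch
    (@dec float, @observationID bigint, @runningID bigint, @zoneH float)
RETURNS bigint
AS
BEGIN
    DECLARE @zoneID bigint;
    SET @zoneID = FLOOR((@dec + 90.0) / @zoneH);
    -- assumed widths: 7 digits for observationID, 7 digits for the running number
    RETURN (@zoneID * 10000000 + @observationID) * 10000000 + @runningID;
END;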
slide 18 ODM Capacity
5.3.1.3 The PS1 ODM shall be able to ingest into the ODM a total of
- 1.5 x 10^11 P2 detections
- 8.3 x 10^10 cumulative sky (stack) detections
- 5.5 x 10^9 celestial objects
together with their linkages.
slide 19 PS1 Table Sizes - Monolithic (sizes in TB)
Table            Year 1   Year 2   Year 3   Year 3.5
Objects            2.31     2.31     2.31      2.31
StackPsfFits       5.07    10.16    15.20     17.74
StackToObj         0.92     1.84     2.76      3.22
StackModelFits     1.15     2.29     3.44      4.01
P2PsfFits          7.87    15.74    23.61     27.54
P2ToObj            1.33     2.67     4.00      4.67
Other Tables       3.19     6.03     8.87     10.29
Indexes +20%       4.37     8.21    12.04     13.96
Total             26.21    49.24    72.23     83.74
slide 20 What Goes into the Main Server (diagram)
The main PS1 database keeps the full tables Objects, LnkToObj, Meta, and PartitionsMap; slices P1 … Pm are reached through linked servers. Legend: database, full table, [partitioned table], output table, distributed partitioned view.
slide 21 What Goes into Slices (diagram)
Each slice P1 … Pm holds its partitioned tables [Objects_p], [LnkToObj_p], and [Detections_p] together with copies of PartitionsMap and Meta; the main PS1 database keeps PartitionsMap, Objects, LnkToObj, and Meta and reaches the slices through linked servers. Legend: database, full table, [partitioned table], output table, distributed partitioned view.
slide 23 Duplication of Objects & LnkToObj
- Objects are distributed across slices
- Objects, P2ToObj, and StackToObj are duplicated in the slices to parallelize "inserts" & "updates"
- Detections belong in their object's slice
- Orphans belong to the slice where their position would allocate them; orphans near slice boundaries will need special treatment
- Objects keep their original object identifier, even though positional refinement might change their zoneID and therefore the most significant part of their identifier
slide 24 Glue = Distributed Views (diagram)
The Detections distributed partitioned view in the main PS1 database unions the [Detections_p] tables of slices P1 … Pm. The main database keeps PartitionsMap, Objects, LnkToObj, and Meta; each slice keeps [Objects_p], [LnkToObj_p], [Detections_p], PartitionsMap, and Meta, connected through linked servers. Legend: database, full table, [partitioned table], output table, distributed partitioned view.
slide 25 Partitioning in Main Server (diagram)
The main server is partitioned (Objects) and collocated (LnkToObj) by objID; the slices are likewise partitioned (Objects) and collocated (LnkToObj) by objID. Queries arrive through the Web Based Interface (WBI) and the Query Manager (QM) over linked servers. (A collocation sketch follows.)
slide 26 PS1 Table Sizes - Main Server (sizes in TB)
Table            Year 1   Year 2   Year 3   Year 3.5
Objects            2.31     2.31     2.31      2.31
StackPsfFits          -        -        -         -
StackToObj         0.92     1.84     2.76      3.22
StackModelFits        -        -        -         -
P2PsfFits             -        -        -         -
P2ToObj            1.33     2.67     4.00      4.67
Other Tables       0.41     0.46     0.52      0.55
Indexes +20%       0.99     1.46     1.92      2.15
Total              5.96     8.74    11.51     12.90
slide 27 PS1 Table Sizes - Each Slice (sizes in TB)
                   m=4      m=8     m=10      m=12
Table            Year 1   Year 2   Year 3   Year 3.5
Objects            0.58     0.29     0.23      0.19
StackPsfFits       1.27     1.27     1.52      1.48
StackToObj         0.23     0.23     0.28      0.27
StackModelFits     0.29     0.29     0.34      0.33
P2PsfFits          1.97     1.97     2.36      2.30
P2ToObj            0.33     0.33     0.40      0.39
Other Tables       0.75     0.81     1.00      1.01
Indexes +20%       1.08     1.04     1.23      1.19
Total              6.50     6.23     7.36      7.16
slide 28 PS1 Table Sizes - All Servers (sizes in TB)
Table            Year 1   Year 2   Year 3   Year 3.5
Objects            4.63     4.63     4.61      4.59
StackPsfFits       5.08    10.16    15.20     17.76
StackToObj         1.84     3.68     5.56      6.46
StackModelFits     1.16     2.32     3.40      3.96
P2PsfFits          7.88    15.76    23.60     27.60
P2ToObj            2.65     5.31     8.00      9.35
Other Tables       3.41     6.94    10.52     12.67
Indexes +20%       5.33     9.76    14.18     16.48
Total             31.98    58.56    85.07     98.87
slide 29 Detail Design Outline: General Concepts, Distributed Database Architecture, Ingest Workflow, Prototype
slide 30 PS1 Distributed DB System (architecture diagram)
Main PS1 database with PartitionsMap, Objects, LnkToObj, Meta and the Detections view; slices P1 … Pm with [Objects_p], [LnkToObj_p], [Detections_p], PartitionsMap, and Meta; LoadAdmin and Load Support 1 … n servers with objZoneIndx, Orphans, Detections, and LnkToObj staging tables; all connected as linked servers and queried through the Query Manager (QM) and Web Based Interface (WBI). Legend: database, full table, [partitioned table], output table, partitioned view.
slide 31 "Insert" & "Update"
- SQL INSERT and UPDATE are expensive operations due to logging and re-indexing
- In the PS1 design, insert and update have been re-factored into sequences of: Merge + Constrain + Switch Partition (sketched below)
- Frequencies: f1 = daily; f2 = at least monthly; f3 = TBD (likely every 6 months)
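A hedged sketch of the Merge + Constrain + Switch pattern for one partition; all table names and objID boundaries are placeholders, and the staging tables are assumed to have the same structure and filegroup as the destination partition:

-- 0. Switch the current content of partition 2 out into an empty table of identical structure.
ALTER TABLE P2PsfFits SWITCH PARTITION 2 TO P2PsfFits_cur;

-- 1. Merge: combine the old rows with the newly loaded detections for the same objID range.
SELECT *
INTO   P2PsfFits_merged
FROM   P2PsfFits_cur
UNION ALL
SELECT * FROM P2PsfFits_daily_p2;

-- 2. Constrain: primary key plus a CHECK constraint on the partitioning column.
ALTER TABLE P2PsfFits_merged
    ADD CONSTRAINT pk_P2PsfFits_merged PRIMARY KEY CLUSTERED (objID, detectID);
ALTER TABLE P2PsfFits_merged WITH CHECK
    ADD CONSTRAINT ck_P2PsfFits_merged CHECK (objID >= 100000000000000000
                                          AND objID <  200000000000000000);

-- 3. Switch: put the merged table back in as partition 2 (metadata-only operation).
ALTER TABLE P2PsfFits_merged SWITCH TO P2PsfFits PARTITION 2;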
slide 32 Ingest Workflow (dataflow diagram)
CSV detections are zoned (DZone) and cross-matched X(1") against ObjectsZ, producing DXO_1a; unmatched detections (NoMatch) are re-matched X(2"), producing DXO_2a; the Resolve step then produces P2PsfFits, P2ToObj, and Orphans.
slide 33 Ingest @ frequency = f1 (dataflow diagram)
The LOADER (ObjectsZ, P2PsfFits, P2ToObj, Orphans) feeds each slice's P2ToPsfFits_1, P2ToObj_1, and Orphans_1 tables alongside Objects_1 and Stack*_1, each split into partitions; MAIN keeps Metadata+, Objects, P2ToObj, and StackToObj.
slide 34 Updates @ frequency = f2 (dataflow diagram)
Objects updates flow between the LOADER, the slice (SLICE_1: Objects_1, Orphans_1, P2ToPsfFits_1, P2ToObj_1, Stack*_1), and MAIN (Metadata+, Objects, P2ToObj, StackToObj).
slide 35 Updates @ frequency = f2 (dataflow diagram)
The refreshed Objects table is propagated from the LOADER to Objects_1 in each slice and to Objects in MAIN, alongside Metadata+, P2ToObj, StackToObj, Orphans_1, P2ToPsfFits_1, and Stack*_1.
slide 36 Snapshots @ frequency = f3 (diagram)
MAIN (Metadata+, Objects, P2ToObj, StackToObj) produces a Snapshot of the Objects table.
slide 37 Batch Update of a Partition (diagram)
The existing partitions A1, A2, A3 and the new batch are merged; each new partition B1, B2, B3 is built with SELECT INTO … WHERE over its range, receives its primary-key index, and is then SWITCHed in to replace the corresponding old partition.
slide 38 Scaling-out
Apply a Ping-Pong strategy to satisfy query performance during ingest: 2 x (1 main + m slices).
(Diagram: two copies of the main PS1 database (PartitionsMap, Objects, LnkToObj, Meta, Detections) and duplicated slices P1 … Pm with their [Objects_p], [LnkToObj_p], [Detections_p] tables, reached by the Query Manager (QM) over linked servers. Legend: database, duplicate, full table, [partitioned table], partitioned view, duplicate partitioned view.)
slide 39 Scaling-out
More robustness, fault tolerance, and reliability call for 3 x (1 main + m slices).
(Same layout as the previous slide, with a third copy of the main tables PartitionsMap, Objects, LnkToObj, Meta, and the Detections view.)
slide 40 Adding New Slices
SQL Server range partitioning capabilities make it easy:
- Recalculate partitioning limits
- Transfer data to new slices
- Remove data from the old slices
- Define and apply the new partitioning schema
- Add new partitions to the main server
- Apply the new partitioning schema to the main server
(A minimal T-SQL sketch of the boundary change follows.)
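A minimal sketch of the partition-boundary change on the main server, assuming the placeholder function, scheme, and filegroup names used in the earlier sketches:

-- 1. Tell the scheme which filegroup receives the next partition created by a split.
ALTER PARTITION SCHEME ps_ObjectID NEXT USED fg_new;

-- 2. Split the partition function at the new ObjectID boundary.
ALTER PARTITION FUNCTION pf_ObjectID() SPLIT RANGE (300000000000000000);

-- (MERGE RANGE removes a boundary when ranges are consolidated.)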
slide 41 Adding New Slices
slide 42 Detail Design Outline: General Concepts, Distributed Database Architecture, Ingest Workflow, Prototype
slide 43 ODM Assoc/Update Requirement
5.3.6.6 The PS1 ODM shall update the derived attributes for objects when new P2, P4 (stack), and cumulative sky detections are being correlated with existing objects.
slide 44 ODM Ingest Performance
5.3.1.6 The PS1 ODM shall be able to ingest the data from the IPP at two times the nominal daily arrival rate*
* The nominal daily data rate from the IPP is defined as the total data volume to be ingested annually by the ODM divided by 365.
Nominal daily data rate:
1.5 x 10^11 / 3.5 / 365 = 1.2 x 10^8 P2 detections / day
8.3 x 10^10 / 3.5 / 365 = 6.5 x 10^7 stack detections / day
slide 45 Number of Objects
                   miniProto    myProto      Prototype    PS1
SDSS* Stars        5.7 x 10^4   1.3 x 10^7   1.1 x 10^8
SDSS* Galaxies     9.1 x 10^4   1.1 x 10^7   1.7 x 10^8
Galactic Plane     1.5 x 10^6   3 x 10^6     1.0 x 10^9
TOTAL              1.6 x 10^6   2.6 x 10^7   1.3 x 10^9   5.5 x 10^9
* "SDSS" includes a mirror of the 11.3 < Dec < 30 objects to Dec < 0
Total of 300 GB of CSV data loaded
CSV bulk insert load: 8 MB/s; binary bulk insert: 18-20 MB/s
Creation started October 15th 2007, finished October 29th 2007 (??)
Includes 10 epochs of P2PsfFits detections and 1 epoch of Stack detections
slide 46 Time to Bulk Insert from CSV
File                 Rows          RowSize   GB      Minutes   Minutes/GB
stars_plus_xai.csv   5,383,971     56        0.30    1.23      4.09
galaxy0_xal.csv      10,000,000    436       4.36    15.68     3.60
galaxy0_xam.csv      10,000,000    436       4.36    15.75     3.61
galaxy0_xan.csv      1,721,366     436       0.75    2.75      3.66
gp_6.csv             106,446,858   264       28.10   41.45     1.47
gp_10.csv            92,019,350    264       24.29   31.40     1.29
gp_11.csv            73,728,448    264       19.46   26.05     1.34
P2PsfFits / Day      120,000,000   183       21.96   59        2.7
CSV bulk insert speed ~ 8 MB/s
BIN bulk insert speed ~ 18-20 MB/s
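For reference, one of these CSV loads would be driven by a plain BULK INSERT; the staging database, file path, and options below are illustrative rather than the actual loader configuration:

BULK INSERT LoadDB.dbo.P2PsfFits_stage
FROM 'D:\ingest\galaxy0_xal.csv'
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR   = '\n',
    TABLOCK          -- helps enable minimally logged bulk loading
);
-- Native-format (binary) bcp files load at roughly 18-20 MB/s on this hardware,
-- versus ~8 MB/s for CSV.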
slide 47 Prototype in Context
Survey                Objects      Detections
SDSS DR6              3.8 x 10^8
2MASS                 4.7 x 10^8
USNO-B                1.0 x 10^9
Prototype             1.3 x 10^9   1.4 x 10^10
PS1 (end of survey)   5.5 x 10^9   2.3 x 10^11
slide 48 Size of Prototype Database (table sizes in billions of rows)
Table            Main    Slice1   Slice2   Slice3   Loader   Total
Objects          1.30    0.43     0.43     0.43     1.30     3.89
StackPsfFits     6.49    -        -        -        -        6.49
StackToObj       6.49    -        -        -        -        6.49
StackModelFits   0.87    -        -        -        -        0.87
P2PsfFits        -       4.02     3.90     3.35     0.37     11.64
P2ToObj          -       4.02     3.90     3.35     0.12     11.39
Total            15.15   8.47     8.23     7.13     1.79     40.77
Extra Tables     0.87    4.89     4.77     4.22     6.86     21.61
Grand Total      16.02   13.36    13.00    11.35    8.65     62.38
slide 49 Size of Prototype Database (table sizes in GB)
Table              Main     Slice1   Slice2   Slice3   Loader   Total
Objects            547.6    165.4    165.3    165.3    137.1    1180.6
StackPsfFits       841.5    -        -        -        -        841.6
StackToObj         300.9    -        -        -        -        300.9
StackModelFits     476.7    -        -        -        -        476.7
P2PsfFits          -        879.9    853.0    733.5    74.7     2541.1
P2ToObj            -        125.7    121.9    104.8    3.8      356.2
Total              2166.7   1171.0   1140.2   1003.6   215.6    5697.1
Extra Tables       207.9    987.1    960.2    840.7    957.3    3953.2
Allocated / Free   1878.0   1223.0   1300.0   1121.0   666.0    6188.0
Grand Total        4252.6   3381.1   3400.4   2965.3   1838.9   15838.3
9.6 TB of data in a distributed database
slide 50 Well-Balanced Partitions
Server    Partition   Rows          Fraction   Dec Range
Main      1           432,590,598   33.34%     32.59
Slice 1   1           144,199,105   11.11%     14.29
Slice 1   2           144,229,343   11.11%      9.39
Slice 1   3           144,162,150   11.12%      8.91
Main      2           432,456,511   33.33%     23.44
Slice 2   1           144,261,098   11.12%      8.46
Slice 2   2           144,073,972   11.10%      7.21
Slice 2   3           144,121,441   11.11%      7.77
Main      3           432,496,648   33.33%     81.98
Slice 3   1           144,270,093   11.12%     11.15
Slice 3   2           144,090,071   11.10%     14.72
Slice 3   3           144,136,484   11.11%     56.10
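Balance like this can be checked with a per-partition row count; the sketch below assumes the placeholder partition function and table names from the earlier examples, not the prototype's actual ones:

SELECT $PARTITION.pf_ObjectID(objID) AS partitionNumber,
       COUNT_BIG(*)                  AS rowsInPartition
FROM   dbo.Objects_p
GROUP BY $PARTITION.pf_ObjectID(objID)
ORDER BY partitionNumber;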
slide 51 Ingest and Association Times
Task                               Measured Minutes
Create Detections Zone Table       39.62
X(0.2") 121M X 1.3B                65.25
Build #noMatches Table              1.50
X(1") 12k X 1.3B                    0.65
Build #allMatches Table (121M)      6.58
Build Orphans Table                 0.17
Create P2PsfFits Table             11.63
Create P2ToObj Table               14.00
Total of Measured Times           140.40
slide 52 Ingest and Association Times
Task                               Estimated Minutes
Compute DetectionID, HTMID         30
Remove NULLs                       15
Index P2PsfFits on ObjID           15
Slices Pulling Data from Loader     5
Resolve 1 Detection - N Objects    10
Total of Estimated Times           75
(Estimates are marked in the original as either an educated guess or a wild guess.)
slide 53 Total Time to I/A Daily Data
Task                              Binary (hours)   CSV (hours)
Ingest 121M Detections            0.32             0.98
Total of Measured Times           2.34             2.34
Total of Estimated Times          1.25             1.25
Total Time to I/A Daily Data      3.91             4.57
Requirement: less than 12 hours (more than 2800 detections / s)
Detection processing rate: 8600 to 7400 detections / s
Margin on requirement: 3.1 to 2.6
Using multiple loaders would improve performance
slide 54 Insert Time @ Slices
Task                               Estimated Minutes
Import P2PsfFits (binary out/in)    20.45
Import P2PsfFits (binary out/in)     2.68
Import Orphans                       0.00
Merge P2PsfFits                     58
Add constraint P2PsfFits           193
Merge P2ToObj                       13
Add constraint P2ToObj              54
Total of Measured Times            362
About 6 h with 8 partitions/slice (~1.3 x 10^9 detections/partition)
(Some values are marked in the original as educated guesses.)
slide 55 Detections Per Partition
Years   Total Detections   Slices   Partitions per Slice   Total Partitions   Detections per Partition
0.0     0                  4        8                      32                 0
1.0     4.29 x 10^10       4        8                      32                 1.34 x 10^9
1.0     4.29 x 10^10       8        8                      64                 6.7 x 10^8
2.0     8.57 x 10^10       8        8                      64                 1.34 x 10^9
2.0     8.57 x 10^10       10       8                      80                 1.07 x 10^9
3.0     1.29 x 10^11       10       8                      80                 1.61 x 10^9
3.0     1.29 x 10^11       12       8                      96                 1.34 x 10^9
3.5     1.50 x 10^11       12       8                      96                 1.56 x 10^9
slide 56 Total Time for Insert @ Slice
Task                           Time (hours)
Total of Measured Times        0.25
Total of Estimated Times       5.3
Total Time for daily insert    6
Daily insert may operate in parallel with daily ingest and association.
Requirement: less than 12 hours
Margin on requirement: 2.0
Using more slices will improve insert performance.
slide 57 Summary
Ingest + Association: < 4 h using 1 loader (@ f1 = daily)
- Scales with the number of servers
- Current margin on requirement: 3.1
- Room for improvement
Detection insert @ slices (@ f1 = daily)
- 6 h with 8 partitions/slice
- May happen in parallel with loading
Detection links insert @ main (@ f2 < monthly)
- Unknown; 6 h available
Objects insert & update @ slices (@ f2 < monthly)
- Unknown; 6 h available
Objects update @ main server (@ f2 < monthly)
- Unknown; 12 h available; transfer can be pipelined as soon as objects have been processed
slide 58 Risks
- Insert and update times at the slices may be underestimated; more empirical evaluation exercising parallel I/O is needed
- Disk storage estimates and layout may be underestimated; merges and index builds require 2x the data size