1
Data Management at Gaia Data Processing Centers
GREAT Workshop on Astrostatistics and Data Mining in Astronomical Databases
La Palma, Spain, May 30 - June 3, 2011
Pilar de Teodoro Idiago, Gaia Database Administrator
European Space Astronomy Centre (ESAC), Madrid, Spain
http://www.rssd.esa.int/Gaia
2
Data Processing Centres
* DPCE (ESAC)
* DPCB (Barcelona)
* DPCC (CNES)
* DPCG (Obs. Geneva / ISDC)
* DPCI (IoA, Cambridge)
* DPCT (Torino)
All contributed to this talk.
3
Processing Overview (simplified)
* Initial Data Treatment (CU3/SOC): turn CCD transits into source observations on the sky; should be a linear transform.
* Astrometric Treatment (CU3): fix the geometrical calibration, adjust the attitude, fix source positions.
* Photometry Treatment (CU5): calibrate the flux scale, give magnitudes.
* Spectral Treatment (CU6): calibrate and disentangle, provide spectra.
* Further treatments: Variability (CU7), Astrophysical Parameters (CU8), Non-Single Systems (CU4), Solar System (CU4).
Many iterations lead to the Catalogue.
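The "many iterations" above refer to block-iterative solutions in which source parameters and instrument calibration are solved alternately until they converge. A minimal one-dimensional sketch of that pattern, with invented magnitudes and a hypothetical two-device calibration (not the actual Gaia solution), looks like this:

```python
# Toy block iteration: alternately solve for source magnitudes and a
# per-device calibration offset. The first device is the fixed reference.
# All numbers and the data layout are invented for illustration.

def block_iterate(obs, n_iter=50):
    """obs[source][device] -> measured magnitude."""
    sources = sorted(obs)
    devices = sorted(obs[sources[0]])
    cal = {d: 0.0 for d in devices}          # start with no calibration
    mag = {s: 0.0 for s in sources}
    for _ in range(n_iter):
        # Source update: average the calibrated measurements per source.
        for s in sources:
            mag[s] = sum(obs[s][d] - cal[d] for d in devices) / len(devices)
        # Calibration update: average residual per device (reference fixed).
        for d in devices[1:]:
            cal[d] = sum(obs[s][d] - mag[s] for s in sources) / len(sources)
    return mag, cal

# Truth: magnitudes 1.0 and 2.0; device "B" has a +0.5 offset.
obs = {"s1": {"A": 1.0, "B": 1.5}, "s2": {"A": 2.0, "B": 2.5}}
mag, cal = block_iterate(obs)
```

The errors shrink geometrically with each pass; the real pipelines do the same kind of alternation, but as large least-squares problems over billions of observations.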
4
DPCE
5
DPCB
6
DPCC (CNES)
Hosts CU4 (Object Processing), CU6 (Spectroscopic Processing), and CU8 (Astrophysical Parameters). Solutions are evaluated on:
* performance and scalability of the solution
* data safety
* impact on the existing software
* impact on the hardware architecture
* cost of the solution over the whole mission
* durability of the solution
* administration and monitoring tools
7
DPCG
Detection and characterization of variable sources observed by Gaia (CU7).
* Analytical queries must run over sources or processing results (attributes) to support research requirements that are not known in advance.
* Time-series reconstruction while importing MDB data.
* Parameter analysis for simulations and configuration changes against a historical database.
* ETL-like support is needed for external data.
At present Apache OpenJPA is used, with PostgreSQL as the store. Other alternatives: Hadoop, SciDB, VoltDB, and extensions to PostgreSQL.
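The analytical-query use case above is attribute-driven selection over a sources table. A minimal sketch of such a query follows, with an invented schema and invented values; `sqlite3` stands in for PostgreSQL so the example is self-contained:

```python
import sqlite3

# Hypothetical "sources" table holding per-source variability attributes.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE sources (
    source_id   INTEGER PRIMARY KEY,
    mean_mag    REAL,
    amplitude   REAL,    -- peak-to-peak magnitude variation
    period_days REAL)""")
conn.executemany(
    "INSERT INTO sources VALUES (?, ?, ?, ?)",
    [(1, 12.3, 0.02, None),     # effectively constant star
     (2, 14.1, 0.80, 0.52),     # short-period variable candidate
     (3, 11.7, 1.10, 310.0)])   # long-period variable candidate

# Analytical query over attributes: pick out large-amplitude variables.
rows = conn.execute(
    "SELECT source_id, amplitude FROM sources "
    "WHERE amplitude > 0.5 ORDER BY amplitude DESC").fetchall()
```

Because the research questions are not known in advance, the store must run arbitrary predicates like this over attributes efficiently, which is what motivates weighing alternatives such as SciDB or PostgreSQL extensions.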
8
DPCI
Given the use case:
* bulk processing of a large data set
* data volume increasing with time (DPAC-wide iterations)
we can state that:
* random data access is expensive and less efficient than sequential access;
* a hub-and-spoke architecture is prone to bottlenecks and therefore does not scale well with the number of clients.
Hadoop was adopted in 2009:
* HDFS: distributed filesystem
* Map/Reduce jobs minimize synchronization
* the data access layer (DAL) is much simpler
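The Map/Reduce pattern mentioned above can be shown in miniature. This is an in-memory sketch of the programming model Hadoop executes in a distributed fashion; the transit record layout and the aggregation are invented for illustration:

```python
from collections import defaultdict

# Map CCD transits to (source_id, flux) pairs, shuffle by key, then
# reduce each group independently: no synchronization across groups.

def map_phase(transits):
    for t in transits:
        yield t["source_id"], t["flux"]

def reduce_phase(key, values):
    values = list(values)
    return key, {"n_transits": len(values),
                 "mean_flux": sum(values) / len(values)}

transits = [
    {"source_id": 42, "flux": 10.0},
    {"source_id": 42, "flux": 12.0},
    {"source_id": 7,  "flux": 5.0},
]

# Shuffle step: group mapped pairs by key, as the framework would
# between the map and reduce phases.
groups = defaultdict(list)
for key, value in map_phase(transits):
    groups[key].append(value)

results = dict(reduce_phase(k, v) for k, v in groups.items())
```

Sequential scans in the map phase and per-key independence in the reduce phase are exactly what make the approach scale where random access and hub-and-spoke designs do not.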
9
DPCT
CU3 AVU (Astrometric Verification Unit) and IGSL (Initial Gaia Source List) support; persistent data management.