Exa-Scale Data Preservation in HEP


1 Exa-Scale Data Preservation in HEP
APA/C-DAC Conference, February 2014. International Collaboration for Data Preservation and Long Term Analysis in High Energy Physics (DPHEP)

2 Background
Whilst this talk concerns data from High Energy Physics (HEP) experiments at CERN and elsewhere, many points are generic.
The scale: 100 PB today, reaching ~5 EB by 2030 (the growth rate this implies is sketched below).
"Trusted" repositories of this size – and with a lifetime of at least decades – are a sine qua non of our work.
I will also talk about costs, business cases, problems and opportunities…
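As a rough illustration of what that scale implies, here is a minimal sketch (in Python) of the constant yearly growth rate linking the two figures quoted above; the two endpoints come from the slide, while the assumption of smooth exponential growth is mine.

```python
# Rough sketch: the constant yearly growth rate implied by going from
# ~100 PB (2014) to ~5 EB (2030).  Endpoints from the slide; the
# smooth exponential-growth assumption is mine.

def implied_cagr(start_pb: float, end_pb: float, years: int) -> float:
    """Constant annual growth rate that links start_pb to end_pb."""
    return (end_pb / start_pb) ** (1.0 / years) - 1.0

rate = implied_cagr(start_pb=100.0, end_pb=5_000.0, years=2030 - 2014)
print(f"Implied archive growth: ~{rate:.0%} per year")  # roughly 28% per year
```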

3 BEFORE!

4 Data flow to permanent storage
[Figure: data flow to permanent storage, with rates of 4-6 GB/sec, 1.25 GB/sec and 1-2 GB/sec at different stages; axis in MB/sec]
(CERN-JRC meeting, Bob Jones)

5 Tier 0 – Tier 1 – Tier 2
Tier-0 (CERN): data recording, initial data reconstruction, data distribution.
Tier-1 (11 centres): permanent storage, re-processing, analysis.
Tier-2 (~130 centres): simulation, end-user analysis.
Tier-2 centres in India: Kolkata (ALICE), Mumbai (CMS).
The Tier-0 centre at CERN stores the primary copy of all the data. A second copy is distributed between the 11 so-called Tier-1 centres: large computer centres in different geographical regions of the world that also have a responsibility for long-term guardianship of the data. The data is sent from CERN to the Tier-1s in real time over dedicated network connections. To keep up with the data coming from the experiments, this transfer must be capable of running at around 1.3 GB/s continuously – equivalent to a full DVD every 3 seconds (see the quick check below). The Tier-1 sites also provide the second level of data processing and produce data sets which can be used to perform the physics analysis. These data sets are sent from the Tier-1 sites to the roughly 130 Tier-2 sites. A Tier-2 is typically a university department or physics laboratory; Tier-2s are located all over the world in most of the countries that participate in the LHC experiments, and are often associated with a Tier-1 site in their region. It is at the Tier-2s that the real physics analysis is performed. (Slide adapted from Frédéric Hemmer, "The LHC Computing Grid", February 2010.)
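A quick, hedged sanity check of the transfer-rate claim above: the ~1.3 GB/s sustained rate is taken from the notes, while the 4.7 GB single-layer DVD capacity is my assumption for "a full DVD".

```python
# Sanity check of the sustained Tier-0 -> Tier-1 export rate (~1.3 GB/s)
# against the "full DVD every ~3 seconds" comparison.
# The 4.7 GB single-layer DVD capacity is an assumption.

RATE_GB_PER_S = 1.3          # sustained Tier-0 -> Tier-1 rate (from the notes)
DVD_GB = 4.7                 # single-layer DVD capacity (assumed)

seconds_per_dvd = DVD_GB / RATE_GB_PER_S
tb_per_day = RATE_GB_PER_S * 86_400 / 1_000

print(f"One DVD-worth of data every ~{seconds_per_dvd:.1f} s")  # ~3.6 s
print(f"~{tb_per_day:.0f} TB exported per day")                 # ~112 TB/day
```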

6 Managing 100 PBytes of data
(CERN-JRC meeting, 27 January 2014, Bob Jones)

7 LHC Schedule
[Timeline, 2009 to 2030+: First run – LS1 – Second run – LS2 – Third run – LS3 – HL-LHC.
First run: LHC startup, 900 GeV then 7 TeV, L = 6x10^33 cm^-2 s^-1, bunch spacing 50 ns.
LS1: Phase-0 upgrade (design energy, nominal luminosity); Second run: 14 TeV, L = 1x10^34 cm^-2 s^-1, bunch spacing 25 ns.
LS2: Phase-1 upgrade (design energy, design luminosity); Third run: 14 TeV, L = 2x10^34 cm^-2 s^-1, bunch spacing 25 ns.
LS3: Phase-2 upgrade (High Luminosity); HL-LHC: 14 TeV, L = 1x10^35 cm^-2 s^-1, bunch spacing 12.5 ns.]
(CERN-JRC meeting, Bob Jones)

8 ATLAS Higgs Candidates

9 AFTER!

10 CERN has a ~100 PB archive

11 But it's still early days for the LHC!
Only an EYETS (Extended Year-End Technical Stop, 19 weeks); no Linac4 connection during Run 2.
LS2 starting in July 2018: 18 months + 3 months BC (Beam Commissioning).
LS3: LHC starting in 2023 => 30 months + 3 months BC; injectors => 13 months + 3 months BC.
Run 2 – LS2 – Run 3 – LS3 – Run 4 – LS4 – Run 5 – LS5.
LHC schedule approved by CERN management and the LHC experiments' spokespersons and technical coordinators on Monday 2nd December 2013.

12 High Luminosity LHC (HL-LHC)
Update of the European Strategy for Particle Physics, adopted 30 May 2013 in a special session of the CERN Council in Brussels. Statement c:
"The discovery of the Higgs boson is the start of a major programme of work to measure this particle's properties with the highest possible precision for testing the validity of the Standard Model and to search for further new physics at the energy frontier. The LHC is in a unique position to pursue this programme. Europe's top priority should be the exploitation of the full potential of the LHC, including the high-luminosity upgrade of the machine and detectors with a view to collecting ten times more data than in the initial design, by around 2030. This upgrade programme will also provide further exciting opportunities for the study of flavour physics and the quark-gluon plasma."
(HL-LHC Workshop, October 1, 2013)

13 Data: Outlook for HL-LHC
[Chart, volumes in PB] Very rough estimate of new RAW data per year of running, using a simple extrapolation of the current data volume scaled by the output rates. To be added: derived data (ESD, AOD), simulation, user data…
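A minimal sketch of what such an extrapolation could look like. The scaling-by-output-rate form follows the slide, but every number below is an illustrative placeholder, not a value from the talk.

```python
# Illustrative sketch of the extrapolation described above: scale the
# current RAW volume per year of running by the ratio of future to
# current output (trigger) rates.  All numbers are hypothetical.

def extrapolated_raw_pb(current_raw_pb: float,
                        current_rate_hz: float,
                        future_rate_hz: float) -> float:
    """RAW data per year, assuming it scales linearly with the output rate."""
    return current_raw_pb * (future_rate_hz / current_rate_hz)

# Hypothetical example: 15 PB/year of RAW today at a 1 kHz output rate,
# rising to 5 kHz at the HL-LHC -> ~75 PB/year of RAW alone (before
# derived data, simulation and user data are added).
print(extrapolated_raw_pb(current_raw_pb=15.0,
                          current_rate_hz=1_000.0,
                          future_rate_hz=5_000.0))
```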

14 Volume: 100PB + ~50PB/year (+400PB/year from 2020)

15 1. DPHEP Portal
2. Digital library tools (Invenio) & services (CDS, INSPIRE, ZENODO) + related tools (HepData, RIVET, …)
3. Sustainable software, coupled with advanced virtualization techniques, "snap-shotting" and validation frameworks
4. Proven bit preservation at the 100 PB scale, together with a sustainable funding model with an outlook to 2040/50
5. Open Data ("Open everything")

16 Case B) increasing archive growth
Start with 10PB, then +50PB/year, then +50% every 3y (or +15% / year)
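A hedged sketch of this "Case B" growth model follows; the reading that it is the yearly increment (not the total) that compounds at ~15% per year is my interpretation of the slide.

```python
# Sketch of the "Case B" model as read from the slide: start at 10 PB,
# add 50 PB in the first year, and let the yearly increment itself grow
# by ~15% per year (roughly +50% every 3 years, since 1.15**3 ~= 1.52).

def case_b_volume_pb(years: int,
                     start_pb: float = 10.0,
                     first_increment_pb: float = 50.0,
                     increment_growth: float = 0.15) -> float:
    """Total archive volume (PB) after `years` years of Case B growth."""
    total, increment = start_pb, first_increment_pb
    for _ in range(years):
        total += increment                     # add this year's new data
        increment *= 1.0 + increment_growth    # next year's increment is ~15% larger
    return total

for horizon in (10, 20, 30):
    print(f"after {horizon} years: ~{case_b_volume_pb(horizon):,.0f} PB")
# -> roughly 1 EB after 10 years, ~5 EB after 20, ~22 EB after 30
```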

17 Case B) increasing archive growth

18 Case B) increasing archive growth
Total cost: ~59.9M$ (~2M$ / year)

19 Case B) increasing archive growth

20 Summary
DPHEP portal: build in collaboration with other disciplines, including the RDA IG and the APA…
Digital libraries: continue existing collaborations.
Sustainable "bit preservation": certified repositories as part of EINFRA.
"Knowledge capture & preservation": a BIG CHALLENGE, not yet addressed in a multi-disciplinary way – next funding round?
Open "Big Data": a Big Opportunity (for RDA?)

21 Portal Example # 1

22 Portal Platform – Zenodo?

23 Documentation projects with INSPIRE
Internal notes from all HERA experiments are now available on INSPIRE.
Experiments no longer need to provide dedicated hardware for such things.
Password-protected for now; simple to make publicly available in the future.
The ingestion of other documents is under discussion, including theses, preliminary results, conference talks and proceedings, paper drafts, …
More experiments are working with INSPIRE, including CDF and D0 as well as BaBar.

24 LEP: what the cost would be "now" …
Completely different, of course…
Direct resource cost is already compatible with zero for the LEP experiments.
Total ALEPH data + MC (analysis format) = 30 TB.
ALEPH: Shift50 = 320 CERN units; a single pizza-box server of today largely exceeds this.
CDF data: O(10 PB), could be bought today for <400 kEUR.
CDF CPU: ~1 MSi2k = 4 kHS06 ≈ 40 kEUR (see the sketch below).
Here the main problem is clearly knowledge/support: can you trust a "NP peak" 10 years later, when the experts are gone?
ALEPH reproducibility test (M. Maggi; by no means a DP solution): ~0.5 FTE for 3 months.
Zero! (≠ 0, but decreasing fast.)
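A back-of-the-envelope check of the unit costs implied by the CDF figures above (O(10 PB) of data for <400 kEUR, ~4 kHS06 of CPU for ~40 kEUR); the per-TB and per-HS06 framing and the rounding are mine.

```python
# Unit costs implied by the CDF figures on this slide, treating the
# quoted prices as rough upper bounds.

data_pb, data_cost_keur = 10.0, 400.0     # CDF data volume and price (slide)
cpu_khs06, cpu_cost_keur = 4.0, 40.0      # CDF CPU capacity and price (slide)

eur_per_tb = (data_cost_keur * 1_000) / (data_pb * 1_000)      # ~40 EUR/TB
eur_per_hs06 = (cpu_cost_keur * 1_000) / (cpu_khs06 * 1_000)   # ~10 EUR/HS06

print(f"Implied storage cost: <~{eur_per_tb:.0f} EUR/TB")
print(f"Implied CPU cost:     ~{eur_per_hs06:.0f} EUR/HS06")
```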

25

26 Open Data?

27 Costs and Scale
There are 4 (main) collaborations + detectors at the LHC; the largest has 3000 members.
The annual cost of the WLCG (infrastructure, operations, services) is ~EUR 100M.
The CERN database services cost around 2 MCHF per year for materials (licenses, maintenance, hardware) and 2 MCHF for personnel.
The central grid Experiment Integration Support team varied between 4 and 10 people, plus significant effort at sites and within the experiments.
The DPHEP "Full Costs of Curation" workshop concluded that a team of ~4 people, with access to experts, could "make significant progress" (be careful with this number!).

28 Conclusions
Long-term data preservation is a journey, not a destination.
As such, it is best not to venture out alone.
A clear understanding of costs & benefits is necessary to secure funding.
We are eager to share our knowledge and experience (exa-scale "bit preservation").
We have learned a lot through collaboration within the APA – and are keen to learn more in the future.

