1
Status: ATLAS Grid Computing
Santiago González de la Hoz, ATOPE meeting, 18 May 2010 S. González
2
Tier1ES: Storage and Tier-1 ATLAS capacities
The software used to manage the disk is dCache, running as a single instance for all the experiments hosted (ATLAS, CMS, LHCb, MAGIC, PAUS). The total disk capacity available is around 2.3 PB. Tier-1 ATLAS capacities: the 2010 disk and tape capacities are already covered, and the full CPU capacity will be deployed in June with the new datacenter expansion. S. González
3
Tier1ES: April reprocessing campaign and scheduled downtime S. González
4
Tier1ES: Transfer of reprocessed real data and MC reconstruction output
PIC => Tier-2s S. González
5
ES Cloud production: data processing S. González
6
TIER2ES S. González
7
ES Cloud User Analysis (only ANALY_PANDA) S. González
8
TIER2ES S. González
9
TIER2ES S. González
10
TIER2-IFIC S. González
11
TIER2-IFIC: CPU resources 2009 S. González
12
TIER2-IFIC: CPU resources 2010 S. González
13
TIER2-IFIC: Services and irregular CPU usage S. González
14
Tier2-IFIC: Plans S. González
15
ATLAS Tier3? Working definition and goals/constraints
Working definition: “non-pledged resources”, “analysis facilities” at your University/Institute/... Goals/constraints: emphasis on user analysis, which is I/O intensive, iterative batch (Athena) and interactive (ROOT/PROOF); do not increase the overall complexity; do not require more central-operation effort; emphasis on simplicity and “low cost”; do not use physicists to do sysadmin work. Solution? Privilege client-based solutions and make extensive use of caches. Positive side effects? Learn more about distributed computing for the long-term evolution of ATLAS computing. S. González
16
ATLAS Tier3 activities: 6 working groups set up in Jan/Feb; key points
Quickly set up, with a lot of quality work! Credit to the working-group chairs. Key points: ATLAS-wide collaboration; established/reinforced links with “external” experts; converge on an ATLAS Tier3 model that is uniform across the Tier3 sites; build a Tier3 community. S. González
17
ATLAS Tier3 working groups
DDM-Tier3 link: how to “download” the data... S. Campana (CERN)
Data access (Lustre/Xrootd): main data access via a file system or a file-system-like layer. S. González de la Hoz (Valencia) and R. Gardner (Chicago and OSG); also creating an inventory and a knowledge base!
Software / Conditions data: data distribution and caching of “auxiliary” data. A. de Salvo (INFN Roma) and A. da Silva (TRIUMF)
Tier3 Support: tools/infrastructure: HammerCloud, DAST, docs... D. van der Ster (CERN)
PROOF working group: parallel ntuple scan. Neng Xu (Wisconsin) and W. Ehrenfeld (DESY)
Virtualization: Yushu Wu (LBNL)
S. González
18
Transformative technologies
By their operational requirements, non-grid Tier3 sites will require transformative ideas and solutions. Short-term examples: CVMFS (the CernVM file system) minimizes the effort needed for ATLAS software releases and the conditions DB; Xrootd/Lustre allows straightforward data access (no SRM), and wide-area data clustering will help groups during analysis (an interesting option, more in the long term) at native speed, with no additional administrative/API layers. S. González
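As an illustration of the kind of direct access Xrootd gives, here is a minimal PyROOT sketch (not part of the original slides) that opens a file straight over the Xrootd protocol, with no SRM layer in between. The redirector host, file path and tree name are hypothetical placeholders.

    import ROOT  # PyROOT, the ROOT Python bindings

    # Hypothetical Xrootd URL: redirector host and path are placeholders.
    url = "root://xrootd-redirector.example.org//atlas/localgroupdisk/user/sample.NTUP.root"
    f = ROOT.TFile.Open(url)            # plain file open over Xrootd, no SRM call involved
    if f and not f.IsZombie():
        tree = f.Get("physics")         # tree name assumed for illustration
        print("entries:", tree.GetEntries() if tree else 0)
        f.Close()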
19
Transformative technologies (2)
Other examples: dq2-get with FTS data transfer, a robust client tool to fetch data for Tier3 sites (no SRM required and the site does not need to be in the ToA, which is a simplification); and a dramatic simplification in cataloguing (local data management), where the storage itself provides the cataloguing information (as in a file system), making more local management (e.g. consistency checks) possible and allowing sites to share local tools. Medium/longer-term examples: PROOF for efficient data analysis, and virtualization on top of service aggregation (short term), which includes cloud computing… Again, I/O performance and storage is an interesting issue ($). S. González
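To make the PROOF point concrete, here is a minimal sketch (not from the original slides) of a parallel ntuple scan with PROOF-Lite through PyROOT; the ntuple path, tree name and variable are assumptions for illustration.

    import ROOT

    proof = ROOT.TProof.Open("lite://")             # PROOF-Lite: one worker per local core
    chain = ROOT.TChain("physics")                  # assumed tree name
    chain.Add("/lustre/t3/user/ntuples/*.root")     # assumed local Lustre path
    chain.SetProof()                                # route the scan through the PROOF session
    chain.Draw("el_pt >> h_el_pt(100, 0, 200000)")  # fill a histogram in parallel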
20
Tier3-IFIC: Storage in our Tier3 (Lustre): LOCALGROUPDISK and T3
LOCALGROUPDISK: 60% (around 60 TB), under DDM, no quotas. T3: 40% (around 40 TB), 1-2 TB per user, with quotas, write-enabled from the UIs (seen as local disk). S. González
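As a small illustration of the per-user quota on the T3 area (a sketch added here, not part of the slides), the Python snippet below sums a user's disk usage and compares it to the 1-2 TB quota; the mount point is a hypothetical path.

    import os

    T3_USER_DIR = "/lustre/t3/username"   # hypothetical T3 path; adjust to the real mount
    QUOTA_TB = 2.0                        # upper end of the 1-2 TB per-user quota

    total_bytes = 0
    for root, _, files in os.walk(T3_USER_DIR):
        for name in files:
            try:
                total_bytes += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # skip files removed while walking

    print("used %.2f TB of %.1f TB quota" % (total_bytes / 1e12, QUOTA_TB))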
21
Tier3-IFIC (PROOF test)
S. González