Status: ATLAS Grid Computing

Status: ATLAS Grid Computing. Santiago González de la Hoz, ATOPE meeting, 18 May 2010.

Tier1ES Storage
Tier-1 ATLAS capacities:
- Disk is managed with dCache, running version 1.9.5-17.
- A single instance serves all the experiments hosted at the site: ATLAS, CMS, LHCb, MAGIC, PAUS.
- Total available disk capacity is around 2.3 PB.
Tier-1 ATLAS capacities for 2010:
- Disk and tape capacities are already covered.
- The full CPU capacity will be deployed in June with the new datacenter expansion.

Tier1ES: April reprocessing campaign and scheduled downtime.

Tier1ES: transfer of reprocessed real data and MC reconstruction output from PIC to the Tier-2s.

ES Cloud production: data processing.

TIER2ES

ES Cloud user analysis (ANALY_PANDA only).

TIER2ES

TIER2ES

TIER2-IFIC

TIER2-IFIC: CPU resources in 2009.

TIER2-IFIC: CPU resources in 2010.

TIER2-IFIC: services and irregular CPU usage.

Tier2-IFIC: plans.

ATLAS Tier3? Working definition:
- "Non-pledged resources": "analysis facilities" at your University/Institute/...
Goals/constraints:
- Emphasis on user analysis: I/O intensive, iterative batch (Athena) and interactive (ROOT/PROOF) work (see the sketch below).
- Do not increase the overall complexity and do not require more central-operation effort.
- Emphasis on simplicity and "low cost": do not use physicists to do sysadmin work.
Solution?
- Favour client-based solutions and make extensive use of caches.
Positive side effects?
- Learn more about distributed computing for the long-term evolution of ATLAS computing.
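To make the I/O-intensive, iterative analysis pattern above concrete, the following is a minimal PyROOT sketch of the kind of interactive ntuple loop a Tier3 user would rerun many times over locally cached data; the file path, tree name ("physics") and branch name (mu_pt) are placeholders, not taken from the slides.

    import ROOT

    # Open a locally cached analysis ntuple (path and names are placeholders)
    f = ROOT.TFile.Open("/lustre/t3/user/ntuples/data10_muons.root")
    tree = f.Get("physics")

    h_pt = ROOT.TH1F("h_pt", "Muon p_{T};p_{T} [GeV];Entries", 100, 0.0, 200.0)
    for event in tree:              # entry-by-entry loop: this is the I/O-bound part
        for pt in event.mu_pt:      # hypothetical branch with muon pT in MeV
            h_pt.Fill(pt / 1000.0)  # convert MeV to GeV
    h_pt.Draw()

The same loop runs in a batch job or an interactive ROOT session, which is why the Tier3 emphasis is on fast local I/O rather than on additional services.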

ATLAS Tier3 activities: six working groups were set up in January/February. They were set up quickly and have produced a lot of quality work; credit goes to the working-group chairs.
Key points:
- ATLAS-wide collaboration; established/reinforced links with "external" experts.
- Converge on an ATLAS Tier3 model that is uniform across the Tier3 sites.
- Build a Tier3 community.

ATLAS Tier3 working groups:
- DDM-Tier3 link: how to "download" the data... S. Campana (CERN).
- Data access (Lustre/Xrootd): main data access via a file system or a file-system-like layer (see the example below). S. González de la Hoz (Valencia) and R. Gardner (Chicago and OSG). Also creating an inventory and a knowledge base!
- Software / conditions data: distribution and caching of "auxiliary" data. A. de Salvo (INFN Roma) and A. da Silva (TRIUMF).
- Tier3 support: tools/infrastructure: HammerCloud, DAST, docs... D. Van der Ster (CERN).
- PROOF working group: parallel ntuple scan. Neng Xu (Wisconsin) and W. Ehrenfeld (DESY).
- Virtualization: Yushu Wu (LBNL).
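As an illustration of "data access via a file system or a file-system-like layer", the same ROOT code can read a file either through the POSIX mount (Lustre) or through an xrootd door; the paths and the redirector host below are invented placeholders.

    import ROOT

    # POSIX access through the Lustre mount point (placeholder path)
    f_posix = ROOT.TFile.Open("/lustre/t3/atlas/ntuples/sample.root")

    # The same file served through an xrootd redirector (placeholder host and path)
    f_xrd = ROOT.TFile.Open("root://xrootd.example.org//atlas/t3/ntuples/sample.root")

    for f in (f_posix, f_xrd):
        if f and not f.IsZombie():
            print(f.GetName(), "opened, size", f.GetSize(), "bytes")

From the analysis code's point of view only the URL changes, which is what keeps the Tier3 setup simple.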

Transformative technologies: by their operational requirements, non-grid Tier3 sites will require transformative ideas and solutions.
Short-term examples:
- CVMFS (CernVM File System): minimizes the effort of distributing ATLAS software releases and conditions DB data (a configuration sketch follows below).
- Xrootd/Lustre: allows straightforward data access with no SRM, at native speed and with no additional administrative/API layers. Wide-area data clustering will help groups during analysis (an interesting option, more so in the long term).
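As a concrete sketch of the CVMFS point above: on a Tier3 worker node the client is normally configured through /etc/cvmfs/default.local. The repository names below are the standard ATLAS ones, while the squid proxy URL and the cache size are placeholders to be adapted to the site.

    # /etc/cvmfs/default.local (sketch; adapt proxy and cache size to the site)
    CVMFS_REPOSITORIES=atlas.cern.ch,atlas-condb.cern.ch
    CVMFS_HTTP_PROXY="http://squid.example.org:3128"   # local squid cache (placeholder)
    CVMFS_QUOTA_LIMIT=20000                            # local disk cache limit in MB

With this in place, software releases and conditions data appear under /cvmfs/ and are fetched and cached on demand, so no local installation effort is needed.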

Transformative technologies (2). Other examples:
- dq2-get with FTS data transfer: a robust client tool to fetch data for a Tier3 (no SRM required and the site does not need to be in the ToA: a simplification).
- Dramatic simplification of cataloguing (local data management): the storage itself provides the cataloguing information (as in a file system), which makes more local management possible (e.g. consistency checks, see the sketch below) and allows local tools to be shared.
Medium/longer-term examples:
- PROOF: efficient data analysis.
- Virtualization: on top of service aggregation (short term); includes cloud computing. Again, I/O performance and storage are an interesting issue ($).
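The consistency checks mentioned above become simple when the file system itself is the catalogue. The sketch below is a hypothetical helper (directory, dataset and file names are invented) that compares what is on disk for a dataset against the file list expected from DDM (e.g. as listed by dq2-ls).

    import os

    def check_dataset(local_dir, expected_files):
        """Compare files on disk with the expected dataset content.
        Returns (missing, orphan) lists of file names."""
        on_disk = set(os.listdir(local_dir))
        expected = set(expected_files)
        missing = sorted(expected - on_disk)   # in the catalogue but not on disk
        orphans = sorted(on_disk - expected)   # on disk but not in the catalogue
        return missing, orphans

    # Usage with placeholder names
    missing, orphans = check_dataset(
        "/lustre/t3/localgroupdisk/user.jdoe.mydataset",
        ["NTUP.0001.root", "NTUP.0002.root"],
    )
    print("missing:", missing)
    print("orphans:", orphans)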

Tier3-IFIC: storage in our Tier3 (Lustre):
- LOCALGROUPDISK: 60% of the T3 space (around 60 TB), managed under DDM, without quotas.
- User area: the remaining 40% of the T3 space (around 40 TB), 1-2 TB per user with quotas, write-enabled from the UIs (seen as a local disk).

Tier3-IFIC (PROOF test).
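For reference, a minimal PyROOT sketch of the kind of PROOF test shown on this slide; the master host, tree name, input path and selector are all placeholders rather than the actual IFIC configuration.

    import ROOT

    # Connect to the Tier3 PROOF master (placeholder host name)
    proof = ROOT.TProof.Open("proof-master.example.org")

    chain = ROOT.TChain("physics")               # tree name assumed for illustration
    chain.Add("/lustre/t3/user/ntuples/*.root")  # placeholder input files
    chain.SetProof()                             # route the processing through the PROOF workers
    chain.Process("MySelector.C+")               # user-provided TSelector, compiled with ACLiC

The same TChain code runs locally if SetProof() is omitted, so scaling an analysis out to the PROOF farm requires essentially no change to the user's code.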