The ATLAS Computing Model and USATLAS Tier-2/Tier-3 Meeting
Shawn McKee, University of Michigan
Joint Techs, FNAL, July 16th, 2007


Slide 2: Overview
- The ATLAS collaboration has only about a year before it must manage large amounts of "real" data for its globally distributed collaboration.
- ATLAS physicists need the software and physical infrastructure required to:
  - Calibrate and align detector subsystems to produce well-understood data
  - Realistically simulate the ATLAS detector and its underlying physics
  - Provide access to ATLAS data globally
  - Define, manage, search and analyze datasets of interest
- I will give a quick view of the ATLAS plans and highlight the processing workflow we envision. This will be brief; most of the information is available from the presentations at our recent USATLAS Tier-2/3 meeting.

Slide 3: The ATLAS Computing Model
- The Computing Model is well evolved and documented in the Computing TDR (C-TDR).
- There are still many areas with significant questions/issues to be resolved:
  - The calibration and alignment strategy is still evolving
  - Physics data access patterns have been only partially exercised
    - We are unlikely to know the real patterns until 2008!
  - There are still uncertainties in the event sizes and reconstruction time
  - How best to integrate ongoing "infrastructure" improvements from research efforts into our operating model?
- Lesson from the previous round of experiments at CERN (LEP): reviews in 1988 underestimated the computing requirements by an order of magnitude!

Slide 4: ATLAS Computing Model Overview
- We have a hierarchical model (EF-T0-T1-T2) with specific roles and responsibilities:
  - Data will be processed in stages: RAW -> ESD -> AOD -> TAG (see the sketch below)
  - Data "production" is well defined and scheduled
  - Roles and responsibilities are assigned within the hierarchy
- Users will send jobs to the data and extract the relevant data, typically DPDs (Derived Physics Data) or similar.
- The goal is a production and analysis system with seamless access to all ATLAS grid resources.
- All resources need to be managed effectively to ensure that ATLAS goals are met and that resource providers' policies are enforced; grid middleware must provide this.
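To make the staged reduction concrete, here is a minimal back-of-the-envelope sketch in Python. The per-event sizes and the event count are illustrative assumptions, not official ATLAS numbers; the point is only that each successive format (RAW to ESD to AOD to TAG) is dramatically smaller, which is what makes distributed analysis on the later formats practical.

```python
# Illustrative sketch (assumed values, not ATLAS software): approximate per-event
# sizes for the successive formats in the processing chain RAW -> ESD -> AOD -> TAG.
EVENT_SIZE_MB = {
    "RAW": 1.6,    # raw detector data (assumed)
    "ESD": 0.5,    # Event Summary Data (assumed)
    "AOD": 0.1,    # Analysis Object Data (assumed)
    "TAG": 0.001,  # event-level metadata used for selection (assumed)
}

def dataset_volume_tb(n_events: int, fmt: str) -> float:
    """Total volume in TB for n_events stored in a given format."""
    return n_events * EVENT_SIZE_MB[fmt] / 1e6

if __name__ == "__main__":
    n = 2_000_000_000  # assumed nominal number of events per year, for illustration
    for fmt in ("RAW", "ESD", "AOD", "TAG"):
        print(f"{fmt}: {dataset_volume_tb(n, fmt):10.1f} TB")
```

Running the sketch with these assumed values shows the TAG sample is several orders of magnitude smaller than RAW, which is why event selection and most user analysis are done on the derived formats rather than on RAW.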

Slide 5: ATLAS Facilities and Roles
- Event Filter Farm at CERN
  - Assembles data (at CERN) into a stream to the Tier-0 Center
- Tier-0 Center at CERN
  - Data archiving: raw data to mass storage at CERN and to Tier-1 centers
  - Production: fast production of Event Summary Data (ESD) and Analysis Object Data (AOD)
  - Distribution: ESD and AOD to Tier-1 centers and to mass storage at CERN
- Tier-1 Centers distributed worldwide (10 centers)
  - Data stewardship: re-reconstruction of the raw data they archive, producing new ESD and AOD
  - Coordinated access to full ESD and AOD (all of the AOD, a fraction of the ESD depending upon the site)
- Tier-2 Centers distributed worldwide (approximately 30 centers)
  - Monte Carlo simulation, producing ESD and AOD, which are sent to Tier-1 centers
  - On-demand user physics analysis of shared datasets
- Tier-3 Centers distributed worldwide
  - Physics analysis
- A CERN Analysis Facility
  - Analysis
  - Enhanced access to ESD and RAW/calibration data on demand (these roles are summarized in the sketch after this slide)
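As a compact summary, the assignment of data products to facility tiers listed above can be encoded as a small lookup table. This is a minimal sketch under assumptions drawn from this slide and the previous one; it is not an ATLAS service, and the product lists are simplified for illustration.

```python
# Minimal sketch (assumptions taken from the slides above, not an ATLAS tool):
# which data products each facility tier produces and hosts.
TIER_ROLES = {
    "Tier-0": {"produces": ["ESD", "AOD"], "hosts": ["RAW", "ESD", "AOD"]},
    "Tier-1": {"produces": ["ESD", "AOD"], "hosts": ["RAW", "ESD", "AOD"]},  # re-reconstruction of archived RAW
    "Tier-2": {"produces": ["ESD", "AOD"], "hosts": ["AOD"]},                # Monte Carlo simulation + user analysis
    "Tier-3": {"produces": ["DPD"], "hosts": ["DPD"]},                       # end-user physics analysis on derived data
}

def tiers_hosting(product: str) -> list[str]:
    """Return the tiers where a given data product is expected to be available."""
    return [tier for tier, roles in TIER_ROLES.items() if product in roles["hosts"]]

print(tiers_hosting("AOD"))  # -> ['Tier-0', 'Tier-1', 'Tier-2']
```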

Slide 6: USATLAS Tier-2/Tier-3 Meeting
- In mid-June 2007 we held our first joint USATLAS Tier-2/Tier-3 meeting.
  - Hosted at Indiana University (Bloomington), June 2007
  - Indico has the agenda and talks available
- The first half of the meeting focused on Tier-3 concerns; the second half concentrated on Tier-2 issues and planning.
- See the slides from Amir Farbin (Indico confId 15523), which provide a very good overview of the analysis needs from the point of view of a physicist.

Slide 7: Slide From Amir Farbin (image slide; content not transcribed)

Slide 8: ATLAS Resource Requirements for 2008
- Table of resource requirements from the Computing TDR (not transcribed here).
- Recent (July 2006) updates have reduced the expected contributions.

Slide 9: Slide From Amir Farbin (image slide; content not transcribed)

Slide 10: Slide From Amir Farbin (image slide; content not transcribed)

Slide 11: Network and Resource Implications
- The ATLAS computing model assumes 12 Tier-2 "cores" per physicist.
  - This will not provide a timely turn-around for most analysis work.
- The assumption is that a Tier-3 should additionally provide about 25 more cores and around 50 TB/year.
- Networks for "Tier-3"-scale analysis should provide ~10 MBytes/sec per core.
  - A typical 8-core machine therefore requires gigabit "end-to-end" connectivity, but in bursts (see the sketch below).
- Will Tier-2s and Tier-3s have sufficient usable bandwidth (end-to-end issues)?
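A quick back-of-the-envelope check of the per-machine number quoted above, written as a short Python sketch. The 10 MBytes/sec per core and the 8-core machine are the figures from this slide; everything else is just unit conversion.

```python
# Back-of-the-envelope check of the Tier-3 analysis bandwidth numbers (illustrative only).
MB_PER_SEC_PER_CORE = 10   # ~10 MBytes/sec per analysis core (from the slide)
CORES_PER_MACHINE = 8      # typical analysis box assumed in the slide

burst_mbytes_per_sec = MB_PER_SEC_PER_CORE * CORES_PER_MACHINE  # 80 MB/s
burst_gbps = burst_mbytes_per_sec * 8 / 1000                    # ~0.64 Gbps

print(f"Per-machine burst: {burst_mbytes_per_sec} MB/s ~ {burst_gbps:.2f} Gbps")
# -> roughly 0.64 Gbps, i.e. a single busy 8-core node can fill most of a
#    gigabit end-to-end path during analysis bursts.
```

So one busy analysis node already accounts for roughly two-thirds of a gigabit link, which is why the slide argues that gigabit connectivity is needed per machine and that burst behaviour, rather than sustained averages, drives the requirement.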

Slide 12: Planning for 2008
- To date, most of the network requirements envisioned for LHC-scale physics have yet to be realized.
  - Once real data is flowing, this will change quickly.
- End sites (Tier-2 or Tier-3) must be ready to accommodate these needs.
  - Physicists will need very high network performance in "bursts".
  - Ideally, a multiplexed form of network access/usage could provide sufficient capability.
- End-to-end issues will need to be addressed.

Slide 13: Conclusions
- Within a year, real LHC data will begin flowing.
  - Physicists globally will be working intently to access and process the data; there will be implications for networks, storage systems and computing resources.
- Planning should provide for a reasonable network infrastructure:
  - Typical Tier-2: 10+ Gbps
  - Typical Tier-3: 1 (to 10) Gbps, depending on the number of physicists and the size of the resources
- Network services incorporated from research areas may be needed to ensure end-to-end capabilities and effective resource management.
- Shortly we will be living in "Interesting Times"...