The ATLAS Computing Model and USATLAS Tier-2/Tier-3 Meeting
Shawn McKee, University of Michigan
Joint Techs, FNAL, July 16th, 2007
Overview

The ATLAS collaboration has only about a year before it must manage large amounts of "real" data for its globally distributed collaboration. ATLAS physicists need the software and physical infrastructure required to:
- Calibrate and align detector subsystems to produce well-understood data
- Realistically simulate the ATLAS detector and its underlying physics
- Provide access to ATLAS data globally
- Define, manage, search, and analyze datasets of interest

I will give a quick view of ATLAS plans and highlight the processing workflow we envision. This will be brief; most of the information is available in the presentations from our recent USATLAS Tier-2/3 meeting.
The ATLAS Computing Model

The Computing Model is well evolved and documented in the Computing TDR:
http://doc.cern.ch//archive/electronic/cern/preprints/lhcc/public/lhcc-2005-022.pdf

There are many areas with significant questions and issues still to be resolved:
- The calibration and alignment strategy is still evolving
- Physics data access patterns have been only partially exercised; we are unlikely to know the real patterns until 2008!
- There are still uncertainties in event sizes and reconstruction times
- How best to integrate ongoing "infrastructure" improvements from research efforts into our operating model?

A lesson from the previous round of experiments at CERN (LEP, 1989-2000): reviews in 1988 underestimated the computing requirements by an order of magnitude!
ATLAS Computing Model Overview

- We have a hierarchical model (EF-T0-T1-T2) with specific roles and responsibilities
- Data will be processed in stages: RAW -> ESD -> AOD -> TAG (a rough sizing sketch of these formats follows this list)
- Data "production" is well defined and scheduled; roles and responsibilities are assigned within the hierarchy
- Users will send jobs to the data and extract relevant results, typically DPDs (Derived Physics Data) or similar
- The goal is a production and analysis system with seamless access to all ATLAS grid resources
- All resources must be managed effectively to ensure ATLAS goals are met and resource providers' policies are enforced; grid middleware must provide this
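To give a feel for why the staged reduction matters, here is a minimal sizing sketch. The per-event sizes are approximate Computing-TDR-era planning figures (RAW ~1.6 MB, ESD ~0.5 MB, AOD ~0.1 MB, TAG ~1 kB), and the 200 Hz event rate and 10^7 live seconds per year are standard planning assumptions rather than numbers taken from this talk.

```python
# Rough annual storage implied by each ATLAS data format.
# Per-event sizes are approximate Computing-TDR-era figures;
# the event rate and live time are standard planning assumptions.
EVENT_SIZE_MB = {"RAW": 1.6, "ESD": 0.5, "AOD": 0.1, "TAG": 0.001}

EVENT_RATE_HZ = 200      # nominal trigger output rate (assumption)
LIVE_SECONDS = 1.0e7     # canonical "accelerator year" (assumption)

events_per_year = EVENT_RATE_HZ * LIVE_SECONDS  # ~2e9 events

for fmt, size_mb in EVENT_SIZE_MB.items():
    total_tb = events_per_year * size_mb / 1.0e6  # MB -> TB
    print(f"{fmt:>4}: {size_mb:6.3f} MB/event -> ~{total_tb:,.0f} TB/year")
```

Each reduction step shrinks the data by roughly a factor of 3-5 (and far more for TAG), which is what makes it feasible to push analysis-level formats out toward the Tier-2s while RAW and full ESD stay at the Tier-0 and Tier-1s.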
ATLAS Facilities and Roles

Event Filter Farm at CERN:
- Assembles data (at CERN) into a stream to the Tier-0 center

Tier-0 Center at CERN:
- Data archiving: raw data to mass storage at CERN and to Tier-1 centers
- Production: fast production of Event Summary Data (ESD) and Analysis Object Data (AOD)
- Distribution: ESD and AOD to Tier-1 centers and to mass storage at CERN

Tier-1 Centers, distributed worldwide (10 centers):
- Data stewardship: re-reconstruction of the raw data they archive, producing new ESD and AOD
- Coordinated access to full ESD and AOD (all AOD, 20-100% of the ESD depending upon site; see the replica sketch after this slide)

Tier-2 Centers, distributed worldwide (approximately 30 centers):
- Monte Carlo simulation, producing ESD and AOD, which are sent to Tier-1 centers
- On-demand user physics analysis of shared datasets

Tier-3 Centers, distributed worldwide:
- Physics analysis

CERN Analysis Facility:
- Analysis
- Enhanced access to ESD and RAW/calibration data on demand
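One consequence of the per-site ESD fractions above: summing the fractions held by the 10 Tier-1s gives the number of complete ESD replicas available worldwide. A minimal sketch follows; the fractions below are illustrative placeholders within the slide's 20-100% range, not actual site commitments.

```python
# Complete worldwide ESD replicas implied by per-site fractions.
# These fractions are illustrative placeholders within the 20-100%
# range quoted on the slide, not actual Tier-1 commitments.
tier1_esd_fractions = [1.0, 0.5, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2]

replicas = sum(tier1_esd_fractions)
print(f"~{replicas:.1f} complete ESD copies across "
      f"{len(tier1_esd_fractions)} Tier-1s")
```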
USATLAS Tier-2/Tier-3 Meeting

In mid-June 2007 we held our first joint USATLAS Tier-2/Tier-3 meeting, hosted at Indiana University (Bloomington), June 20-22, 2007. Indico has the agenda and talks available:
http://indico.cern.ch/conferenceDisplay.py?confId=15523

The first half of the meeting focused on Tier-3 concerns; the second half concentrated on Tier-2 issues and planning.

See the slides from Amir Farbin, which provide a very good overview of the analysis needs from the point of view of a physicist:
http://indico.cern.ch/getFile.py/access?contribId=30&sessionId=4&resId=0&materialId=slides&confId=15523
http://indico.cern.ch/getFile.py/access?contribId=22&sessionId=8&resId=0&materialId=slides&confId=15523
[Slide from Amir Farbin; image content not captured in this transcript]
ATLAS Resource Requirements for 2008

Recent (July 2006) updates have reduced the expected contributions relative to the Computing TDR estimates. [Resource-requirement table not captured in this transcript]
[Slide from Amir Farbin; image content not captured in this transcript]
[Slide from Amir Farbin; image content not captured in this transcript]
Network and Resource Implications

- The ATLAS computing model assumes 12 Tier-2 "cores" per physicist; this will not provide timely turn-around for most analysis work
- The assumption is that a Tier-3 should additionally provide 25 more cores and around 50 TB/year of storage
- Networks for "Tier-3"-scale analysis should provide ~10 MB/s per core
- A typical 8-core machine therefore requires gigabit "end-to-end" connectivity, though only in bursts (see the arithmetic sketch below)
- Will Tier-2s and Tier-3s have sufficient usable bandwidth (end-to-end issues)?
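The gigabit figure follows directly from the per-core number. A minimal check of that arithmetic; the 10 MB/s per core and the 8-core machine are the slide's own figures, and the conversions are standard.

```python
# Check: 8 cores at ~10 MB/s each consumes most of a 1 Gb/s link.
MB_PER_SEC_PER_CORE = 10   # slide's per-core analysis I/O figure
CORES_PER_MACHINE = 8      # slide's "typical" machine

mbytes_per_sec = MB_PER_SEC_PER_CORE * CORES_PER_MACHINE  # 80 MB/s
mbits_per_sec = mbytes_per_sec * 8                        # 640 Mb/s

print(f"{mbytes_per_sec} MB/s = {mbits_per_sec} Mb/s "
      f"= {mbits_per_sec / 1000:.0%} of a 1 Gb/s link")
```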
Planning for 2008

- To date, most of the demands envisioned for LHC-scale physics have yet to be placed on the network; once real data is flowing, this will change quickly
- End sites (Tier-2 or Tier-3) must be ready to accommodate these needs
- Physicists will need very high network performance in "bursts"; ideally, a multiplexed form of network access/usage could provide sufficient capability
- End-to-end issues will need to be addressed
Conclusions

- Within a year, real LHC data will begin flowing; physicists globally will be intently working to access and process it, with implications for networks, storage systems, and computing resources
- Planning should provide for reasonable network infrastructure:
  - Typical Tier-2: 10+ Gbps
  - Typical Tier-3: 1 (to 10) Gbps, depending on the number of physicists and the size of local resources
- Network services incorporated from research areas may be needed to ensure end-to-end capabilities and effective resource management
- Shortly we will be living in "Interesting Times"...
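As a sanity check on those link speeds, consider moving a Tier-3-scale dataset (the ~50 TB annual figure from the earlier slide) across them. A minimal sketch, assuming an idealized, fully utilized link with no protocol overhead.

```python
# Time to move a ~50 TB Tier-3-scale dataset over the proposed links.
# Assumes an idealized, fully utilized link (no protocol overhead).
DATASET_TB = 50  # annual Tier-3 figure from the earlier slide

for gbps in (1, 10):
    seconds = DATASET_TB * 8e12 / (gbps * 1e9)  # TB -> bits, / bit rate
    print(f"{DATASET_TB} TB over {gbps:>2} Gb/s: ~{seconds / 86400:.1f} days")
```

Even at an ideal 1 Gb/s, a year's Tier-3 data takes several days of continuous transfer, which is why the burst behavior and end-to-end issues above matter.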