1
US Tier 2 Issues
Jim Shank, Boston University
U.S. ATLAS Tier 2 Meeting, Harvard University, 17-18 August 2006
2
Overview
LHC schedule news and implications
News from the DOE/NSF review of 15 August
Goals for this workshop:
  T2 Planning Wiki
  Action items from the last T2 meeting in Chicago
Welcome to our 2 new Tier 2 sites!
3
LHC Schedule
Magnet installation going well: 1,250 total; passed the halfway mark
Interconnection is a challenge: install does not mean interconnected
Very aggressive schedule for 2007
[Photo: LHC tunnel]
4
LHC Schedule
LHC schedule delay announced: 380 days to go!
Now - Spring 2007: LHC installation end game; last magnet delivered (10/06), tested (12/06), installed (3/07)
August 2007: beam pipe closes (a nominal two-month delay from the previous schedule)
September 2007: a few weeks of controlled access to the cavern
November 2007: two-month LHC commissioning run at injection energy (450 GeV on 450 GeV, no ramp, no squeeze) until end 2007; sectors 7-8 and 8-1 will be commissioned at 7 TeV, the others not
Early 2008: few-month shutdown during which the remaining LHC sectors are commissioned without beam at the full 7 TeV energy
Mid 2008: 14 TeV running; the goal is a substantial integrated luminosity (1-4 fb^-1?) by end 2008
In September, CERN should clarify the goals for 2008-09, since this will affect computing resources
5
LHC Schedule
My simplified graphical view, based on the 7/7/06 detailed update in http://sylvainw.home.cern.ch/sylvainw/planning-follow-up/Schedule.pdf
6
Tier 1 Evolution
C-TDR values. Very preliminary!
7
Tier 2 Evolution
C-TDR values. Very preliminary!
8
ATLAS Computing Timeline, 2003-2007
POOL/SEAL release (done)
ATLAS release 7 (with POOL persistency) (done)
LCG-1 deployment (done)
ATLAS completes Geant4 validation (done)
ATLAS release 8 (done)
DC2 Phase 1: simulation production (done)
DC2 Phase 2: intensive reconstruction (only partially done)
Combined test beams (barrel wedge) (done)
Computing Model paper (done)
Computing Memoranda of Understanding (signatures being collected)
ATLAS Computing TDR and LCG TDR (done)
Computing System Commissioning (CSC)
Start cosmic ray run (started for some subsystems)
Physics Group Notes (formerly Physics Readiness Report)
GO! (now end of 2007)
9
Resource Allocation Committee
Newly formed committee to coordinate U.S. resources (CPU/disk)
Prioritize: physics simulation, analysis, calibration
Committee members:
  Physics Adviser (I. Hinchliffe)
  Chair of the ASG (S. Willocq)
  Facilities Manager (B. Gibbard)
  Tier 2 (R. Gardner)
  U.S. Production Manager (K. De)
  EPM (J. Shank)
  Software Manager (S. Rajagopalan)
  Physics users (B. Mellado)
  Calibration users (B. Zhou)
First meeting: 25 January 2006
http://www.usatlas.bnl.gov/twiki/bin/view/AtlasSoftware/ResourceAllocationCommittee
10
Resource allocation policy enforcement…
11
Projected T2 Hardware Growth (dedicated to ATLAS)

Tier 2 Center                   2005   2006   2007   2008   2009
Boston/Harvard   CPU (kSi2k)     210    350    730  1,090  1,600
                 Disk (TB)        40    170    370    480    630
Southwest        CPU (kSi2k)     500    900  1,500  1,700  2,100
                 Disk (TB)        60    200    380    540    700
Midwest          CPU (kSi2k)     360    510    900  1,100  1,300
                 Disk (TB)        50    130    260    465    790

Assumes Moore's law doubling of CPU (every 3 years) and disk capacity (every 1.5 years) at constant cost.
Assumes replacement of hardware every 3 years.
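To make the growth assumption above concrete, here is a minimal Python sketch (not from the slides) of how installed capacity evolves under constant annual spending when CPU price/performance doubles every 3 years, disk capacity per dollar doubles every 1.5 years, and each purchase is retired after 3 years. The year-0 purchase sizes used in the example (100 kSi2k, 20 TB) are illustrative placeholders, not the actual Tier 2 procurement figures.

# Sketch of the stated assumption: constant annual spend, Moore's-law doubling
# of CPU (3 yr) and disk (1.5 yr) per dollar, hardware retired after 3 years.
CPU_DOUBLING_YEARS = 3.0    # kSi2k per dollar doubles every 3 years
DISK_DOUBLING_YEARS = 1.5   # TB per dollar doubles every 1.5 years
LIFETIME_YEARS = 3          # hardware is replaced every 3 years

def installed_capacity(first_purchase, doubling_years, n_years):
    """Installed capacity for each year, summing the purchases still in service.

    A purchase made n years after the first buys first_purchase * 2**(n / doubling_years)
    for the same money; only the last LIFETIME_YEARS purchases are still racked.
    """
    totals = []
    for year in range(n_years):
        in_service = [first_purchase * 2 ** (n / doubling_years)
                      for n in range(max(0, year - LIFETIME_YEARS + 1), year + 1)]
        totals.append(sum(in_service))
    return totals

# Example with made-up starting purchases of 100 kSi2k CPU and 20 TB disk:
print("CPU (kSi2k):", [round(c) for c in installed_capacity(100, CPU_DOUBLING_YEARS, 5)])
print("Disk (TB):  ", [round(d) for d in installed_capacity(20, DISK_DOUBLING_YEARS, 5)])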
12
Tier 3 Centers
U.S. ATLAS formed a T3 task force in April 2006
Composition: Gustaaf Brooijmans, Rob Gardner, Bruce Gibbard, Tom LeCompte, Shawn McKee, Razvan Popescu, Jim Shank
Produced a whitepaper on the role of T3 centers in U.S. ATLAS
Summary from the whitepaper:
  Some local compute resources, beyond Tier 1 and Tier 2, are required to do physics analysis in ATLAS. These resources are termed Tier 3 and could be as small as a modern desktop computer on each physicist's desk, or as large as a Linux farm, perhaps operated as part of a shared facility from an institution's own resources.
  Resources outside of the U.S. ATLAS Research Program are sometimes available for Tier 3 centers. A small amount of HEP Core Program money can sometimes leverage a large amount of other funding for Tier 3 centers. Decisions on when it is useful to spend Core money in this way will have to be considered on a case-by-case basis.
  Support for Tier 3 centers can be accommodated in the U.S. Research Program provided the Tier 3 centers are part of the Open Science Grid and that they provide access to those resources, with appropriate priority settings, to U.S. ATLAS via the VO authentication, authorization and accounting infrastructure.
13
CSC11 Production Summary
Finished jobs:
  LCG         155,199
  LCG-CG       64,710
  LCG-CG-DQ    82,202
  LCG-DQ        1,903
  NORDUGRID    79,010
  OSG         140,728
  SACLAY       52,158
Note: each job has 50 events for physics samples, 100 events for single particles.
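As a rough cross-check of what these job counts mean in events, the short Python snippet below (not part of the original slides) sums the finished jobs and applies the two per-job event counts quoted above; since the slide does not give the split between physics samples and single particles, it prints both bounds rather than a single total.

# Job counts from the CSC11 summary above; per-job event counts (50 for physics
# samples, 100 for single particles) are quoted on the slide. The split between
# the two sample types is not given, so only lower/upper bounds are computed.
finished_jobs = {
    "LCG": 155_199,
    "LCG-CG": 64_710,
    "LCG-CG-DQ": 82_202,
    "LCG-DQ": 1_903,
    "NORDUGRID": 79_010,
    "OSG": 140_728,
    "SACLAY": 52_158,
}

total_jobs = sum(finished_jobs.values())
print(f"Total finished jobs: {total_jobs:,}")
print(f"If all were physics samples (50 events/job):   {total_jobs * 50:,} events")
print(f"If all were single particles (100 events/job): {total_jobs * 100:,} events")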
14
Current CSC Production
[Plot label: USA]
15
U.S. Production, 1st Half 2006
U.S. provided 24% of managed CSC production through PanDA
Half of U.S. production was done at the Tier 1, the remainder at Tier 2 sites
PanDA is also used for user analysis (through pathena)
Currently, PanDA has 54 analysis users who have submitted jobs through the grid in the past 6 months
Analysis jobs primarily run at BNL; we are also testing at UTA, and soon all T2 sites will be enabled
16
Funding Targets
Bottom line not changed since last review
17
Reconciling Requests with Target
As shown in February, we still have about a $3M difference between requests and our targets in 2008 and beyond
  As shown then, we put about $1M for Software and $2M for Facilities as requests to the Management Reserve (MR)
We are still working out what our actual 2006 spending will be
  Late hires at the T1, late purchases at the T2s
  Should allow us to get where we want in 2007 with a minimal call on MR (< $500k?)
2008 and beyond is still a problem
  We are working closely with the ATLAS Computing Model Group to understand our T1/T2 needs out to 2012
  New LHC running assumptions COULD lead to some savings through a later ramp of hardware
  Software: emphasis on user support; if there is not enough MR, we will have to cut some Core effort
18
Need to Ramp Up the T2 Hardware
The Research Program has a large FY06 rollover
We are begging for more money for FY07, but the agencies are saying to use our rollover
Use it or lose it
19
The T2 Meeting Last May
Tier 2 Planning Wiki:
http://www.usatlas.bnl.gov/twiki/bin/view/Admins/TierTwoPlanning
21
Applies to ALL T2s!
Goal: fill in/update these wikis
22
Progress on T2 Services?
DQ2 status?
23
Tier 2 Documentation
Uniform web pages
Up-to-date snapshots of hardware configuration
Kill/erase/destroy old web pages!