US Tier 2 issues
Jim Shank, Boston University
U.S. ATLAS Tier 2 Meeting, Harvard University, 17-18 August, 2006

J. Shank Tier 2 meeting Aug., 2006 Harvard 2 Overview
- LHC Schedule news and implications
- News from the DOE/NSF Review of 15 August
- Goals for this workshop
  - T2 Planning Wiki
  - Action items from the last T2 meeting in Chicago
  - Welcome to our 2 new Tier 2 sites!

J. Shank Tier 2 meeting Aug., 2006 Harvard 3 LHC Schedule
- Magnet installation going well – 1,250 total
  - Passed the halfway mark
- Interconnection is a challenge
  - Install does not mean interconnected
- Very aggressive schedule for 2007
(Photo: LHC tunnel)

J. Shank Tier 2 meeting Aug., 2006 Harvard 4 LHC Schedule
- LHC schedule delay announced – 380 days to go!
- Now – Spring 2007: LHC installation end game
  - Last magnet: delivered (10/06), tested (12/06), installed (3/07)
- August 2007: beam pipe closes
  - Nominal two-month delay from previous schedule
- Sept 2007: few weeks of controlled access to cavern
- Nov 2007: two-month LHC commissioning run at injection energy (450 GeV on 450 GeV, no ramp, no squeeze) till end 2007
  - Sectors 7-8, 8-1 will be commissioned at 7 TeV, others not
- Early 2008: few-month shutdown during which the remaining LHC sectors are commissioned w/o beam at full 7 TeV energy
- Mid 2008: 14 TeV running – goal is substantial integrated luminosity (1-4 fb⁻¹??) by end 2008
- In September, CERN should clarify the goals for 2008, since this will affect computing resources

J. Shank Tier 2 meeting Aug., 2006 Harvard 5 LHC Schedule
- My simplified graphical view, based on the 7/7/06 detailed update in follow-up/Schedule.pdf

J. Shank Tier 2 meeting Aug., 2006 Harvard 6 Tier 1 Evolution
- C-TDR values – Very Preliminary!

J. Shank Tier 2 meeting Aug., 2006 Harvard 7 Tier 2 Evolution
- C-TDR values – Very Preliminary!

J. Shank Tier 2 meeting Aug., 2006 Harvard 8 ATLAS Computing Timeline
- 2003: POOL/SEAL release (done)
- ATLAS release 7 (with POOL persistency) (done)
- LCG-1 deployment (done)
- ATLAS complete Geant4 validation (done)
- ATLAS release 8 (done)
- DC2 Phase 1: simulation production (done)
- DC2 Phase 2: intensive reconstruction (only partially done…)
- Combined test beams (barrel wedge) (done)
- Computing Model paper (done)
- Computing Memoranda of Understanding (signatures being collected)
- ATLAS Computing TDR and LCG TDR (done)
- Computing System Commissioning (CSC)
- Start cosmic ray run (started for some subsystems)
- Physics Group Notes (former Physics Readiness Report)
- GO! (Now end of 2007)

J. Shank Tier 2 meeting Aug., 2006 Harvard 9 Resource Allocation Committee
- Newly formed committee to coordinate U.S. resources (CPU/disk)
  - Prioritize: physics simulation, analysis, calibration
- Committee members:
  - Physics Adviser (I. Hinchliffe)
  - Chair of the ASG (S. Willocq)
  - Facilities Manager (B. Gibbard)
  - Tier 2 (R. Gardner)
  - The US Production Manager (K. De)
  - The EPM (J. Shank)
  - The Software Manager (S. Rajagopalan)
  - Physics users (B. Mellado)
  - Calibration users (B. Zhou)
- First meeting 25 Jan, 2006

J. Shank Tier 2 meeting Aug., 2006 Harvard 10 Resource allocation policy enforcement…

J. Shank Tier 2 meeting Aug., 2006 Harvard 11 Projected T2 Hardware Growth (dedicated to ATLAS)
(Table: projected CPU (kSi2k) and Disk (TB) growth for the Boston/Harvard, Southwest, and Midwest Tier 2 centers)
- Assumes Moore's law doubling of CPU (3 yrs) and disk capacity (1.5 yrs) at constant cost
- Assumes replacement of hardware every 3 years
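The projected capacities follow mechanically from the two assumptions above. Below is a minimal sketch (not the actual U.S. ATLAS planning spreadsheet) of how such a projection can be computed, assuming a constant annual spend split evenly between CPU and disk; the budget and year-0 unit costs are hypothetical placeholders, not the real Tier 2 figures.

```python
# Rough Tier 2 capacity projection under the slide's assumptions:
#  - constant annual spend,
#  - CPU price/performance doubles every 3 years,
#  - disk capacity per dollar doubles every 1.5 years,
#  - hardware is retired (its capacity removed) after 3 years.
# The budget and year-0 unit costs below are hypothetical placeholders.

CPU_DOUBLING_YEARS = 3.0
DISK_DOUBLING_YEARS = 1.5
LIFETIME_YEARS = 3            # replacement cycle from the slide
ANNUAL_BUDGET = 250_000.0     # hypothetical $/year, split 50/50 CPU vs. disk
CPU_PER_DOLLAR_Y0 = 0.002     # hypothetical kSI2k per dollar in year 0
DISK_PER_DOLLAR_Y0 = 0.001    # hypothetical TB per dollar in year 0


def projected_capacity(n_years: int = 6):
    """Return a list of (cpu_kSI2k, disk_TB) totals for years 0..n_years-1."""
    purchases = []   # (cpu bought, disk bought) for each year
    totals = []
    for y in range(n_years):
        cpu_bought = 0.5 * ANNUAL_BUDGET * CPU_PER_DOLLAR_Y0 * 2 ** (y / CPU_DOUBLING_YEARS)
        disk_bought = 0.5 * ANNUAL_BUDGET * DISK_PER_DOLLAR_Y0 * 2 ** (y / DISK_DOUBLING_YEARS)
        purchases.append((cpu_bought, disk_bought))
        # Only hardware bought within the last LIFETIME_YEARS is still in service.
        live = purchases[max(0, y - LIFETIME_YEARS + 1): y + 1]
        totals.append((sum(c for c, _ in live), sum(d for _, d in live)))
    return totals


if __name__ == "__main__":
    for year, (cpu, disk) in enumerate(projected_capacity(), start=2006):
        print(f"{year}: {cpu:8.0f} kSI2k   {disk:7.1f} TB")
```

With these placeholder inputs the disk total grows faster than the CPU total, since its price/performance doubling time is half as long, which is the qualitative shape the real table would show.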

J. Shank Tier 2 meeting Aug., 2006 Harvard 12 Tier 3 Centers
- U.S. ATLAS formed a T3 task force in April 2006
  - Composition: Gustaaf Brooijmans, Rob Gardner, Bruce Gibbard, Tom LeCompte, Shawn McKee, Razvan Popescu, Jim Shank
- Produced a whitepaper on the role of T3 in U.S. ATLAS
- Summary from the whitepaper:
  - Some local compute resources, beyond Tier-1 and Tier-2, are required to do physics analysis in ATLAS. These resources are termed Tier-3 and could be as small as a modern desktop computer on each physicist's desk, or as large as a Linux farm, perhaps operated as part of a shared facility from an institution's own resources.
  - Resources outside of the U.S. ATLAS Research Program are sometimes available for Tier-3 centers. A small amount of HEP Core Program money can sometimes leverage a large amount of other funding for Tier-3 centers. Decisions on when it is useful to spend Core money in this way will have to be considered on a case-by-case basis.
  - Support for Tier-3 centers can be accommodated in the U.S. Research Program provided the Tier-3 centers are part of the Open Science Grid and that they provide access to those resources, with appropriate priority settings, to U.S. ATLAS via the VO authentication, authorization and accounting infrastructure.

J. Shank Tier 2 meeting Aug., 2006 Harvard 13 CSC11 Production Summary
- Finished jobs by grid: LCG | LCG-CG | LCG-CG-DQ 1903 | LCG-DQ | NORDUGRID | OSG | SACLAY
- Note: each job has 50 events for physics samples, 100 events for single particles
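As a quick worked example of the events-per-job note, the one job count that survived above (1903, shown next to LCG-CG-DQ) converts to event totals as follows; treating the sample type as a parameter is my own illustration, not part of the production system.

```python
# Convert finished-job counts to event counts using the note on the slide:
# 50 events per job for physics samples, 100 events per job for single particles.

EVENTS_PER_JOB = {"physics": 50, "single_particle": 100}

def events_produced(finished_jobs: int, sample_type: str = "physics") -> int:
    """Number of events corresponding to a count of finished production jobs."""
    return finished_jobs * EVENTS_PER_JOB[sample_type]

# Example with the surviving LCG-CG-DQ count from the summary above:
print(events_produced(1903, "physics"))          # 95150 events if these were physics jobs
print(events_produced(1903, "single_particle"))  # 190300 events if single-particle jobs
```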

J. Shank Tier 2 meeting Aug., 2006 Harvard 14 Current CSC Production USA

J. Shank Tier 2 meeting Aug., 2006 Harvard 15 U.S. Production 1st Half 2006
- U.S. provided 24% of managed CSC production through PanDA
- Half of U.S. production was done at the Tier 1, the remainder at Tier 2 sites
- PanDA is also used for user analysis (through pathena)
- Currently, PanDA has 54 analysis users who have submitted jobs through the grid in the past 6 months
- Analysis jobs primarily run at BNL. We are also testing at UTA. Soon all T2 sites will be enabled.

J. Shank Tier 2 meeting Aug., 2006 Harvard 16 Funding Targets
- Bottom line not changed since last review

J. Shank Tier 2 meeting Aug., 2006 Harvard 17 Reconciling Requests with Target
- As shown in Feb, we still have about a $3M difference between requests and our targets in 2008 and beyond
  - As shown then, we put about $1M for Software and $2M for Facilities as requests to the Management Reserve (MR)
- We are still working on what our actual 2006 spending will be
  - Late hires at T1, late purchases at T2
  - Should allow us to get where we want in 2007 with minimal call on MR (< $500k?)
- 2008 and beyond still a problem
  - We are working closely with the ATLAS Computing Model Group to understand our T1/T2 needs out to 2012
  - New LHC running assumptions COULD lead to some savings via a later ramp of hardware
  - Software: emphasis on user support; if there is not enough MR, we will have to cut some Core effort

J. Shank Tier 2 meeting Aug., 2006 Harvard 18 Need to Ramp up the T2 hardware
- The Research Program has a large FY06 rollover
- We are begging for more money for FY07, but the agencies are saying: use our rollover
- Use it or lose it

J. Shank Tier 2 meeting Aug., 2006 Harvard 19 The T2 Meeting Last May
- Tier 2 Planning Wiki:

J. Shank Tier 2 meeting Aug., 2006 Harvard 20

J. Shank Tier 2 meeting Aug., 2006 Harvard 21 Applies to ALL T2!
- Goal: fill in/update these Wikis

J. Shank Tier 2 meeting Aug., 2006 Harvard 22 Progress on T2 Services?
- DQ2 status?

J. Shank Tier 2 meeting Aug., 2006 Harvard 23 Tier 2 Documentation
- Uniform web pages
- Up-to-date snapshots of hardware configuration
- Kill/erase/destroy old web pages!