U.S. ATLAS Computing Facilities (Overview)
Bruce G. Gibbard, Brookhaven National Laboratory
US ATLAS Computing Advisory Panel Meeting, Argonne National Laboratory, October 30-31, 2001

US ATLAS Computing Facilities Mission
- … to enable effective participation by US physicists in the ATLAS physics program!
  - Direct access to and analysis of physics data sets
  - Simulation, re-reconstruction, and reorganization of data as required to complete such analyses
  - Facilities procured, installed, and operated
- … to meet U.S. "MOU" obligations to ATLAS
  - Direct IT support (Monte Carlo generation, for example)
  - Support for detector construction, testing, and calibration
  - Support for software development and testing

US ATLAS Computing Facilities Overview
- A hierarchy of Grid-connected distributed resources including:
  - Tier 1 Facility located at Brookhaven – Rich Baker / Bruce Gibbard
    - Operational at ~0.5% level
  - 5 permanent Tier 2 facilities (to be selected in April '03)
    - 2 prototype Tier 2's selected earlier this year and now active
      - Indiana University – Rob Gardner
      - Boston University – Jim Shank
  - Tier 3 / institutional facilities
    - Several currently active; most are candidates to become Tier 2's
    - Univ. of California at Berkeley, Univ. of Michigan, Univ. of Oklahoma, Univ. of Texas at Arlington, Argonne Nat. Lab.
  - Distributed IT Infrastructure – Rob Gardner
  - US ATLAS Grid Testbed – Ed May
  - HEP Networking – Shawn McKee
- Coupled to Grid projects with designated liaisons
  - PPDG – Torre Wenaus
  - GriPhyN – Rob Gardner
  - iVDGL – Rob Gardner
  - EU Data Grid – Craig Tull

Evolution of US ATLAS Facilities Plan
- In response to changes or potential changes in:
  - Schedule
  - Requirements/computing model
  - Technology
  - Budgetary guidance
- Changes in schedule
  - LHC start-up projected to be a year later, 2005/2006 → 2006/2007
  - ATLAS Data Challenges (DC's) have, so far, stayed fixed (scale arithmetic sketched below)
    - DC0 – Nov/Dec 2001 – 10^5 events – continuity test
    - DC1 – Feb/Jul 2002 – 10^7 events – ~1%
    - DC2 – Jan/Sep 2003 – 10^8 events – ~10% – a serious functionality/capacity exercise
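[Editor's note] A minimal sketch of the Data Challenge scale arithmetic implied by the percentages above. The ~10^9 events/year reference sample is an assumption for illustration; the slide quotes only the resulting ~1% and ~10% fractions.

```python
# Illustrative Data Challenge scale arithmetic.
# NOMINAL_EVENTS_PER_YEAR is an assumed reference sample, not a quoted number.

NOMINAL_EVENTS_PER_YEAR = 1e9  # assumed size of a full year's ATLAS event sample

data_challenges = {
    "DC0 (Nov/Dec 2001)": 1e5,
    "DC1 (Feb/Jul 2002)": 1e7,
    "DC2 (Jan/Sep 2003)": 1e8,
}

for name, events in data_challenges.items():
    fraction = events / NOMINAL_EVENTS_PER_YEAR
    print(f"{name}: {events:.0e} events ~ {fraction:.2%} of a nominal year")
```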

Changes in Computing Model and Requirements
- Requirements defined by International ATLAS Computing Model
- Nominal model and requirements for a Tier 1 (expect there to be ~6)
  - Raw → ESD/AOD/TAG pass done at CERN, result shipped to Tier 1's
  - TAG/AOD/~25% of ESD on disk; tertiary storage for remainder of ESD
  - Selection passes through ESD monthly
  - Analysis of TAG/AOD/selected ESD/etc. (n-tuples) on disk within 4 hours by ~200 users requires …

Changes in Computing Model and Requirements (2)
- Revised model and requirements for a Tier 1 (under consideration)
  - Raw → ESD/AOD/TAG pass done at CERN, result shipped to Tier 1's
  - TAG/AOD/33% of ESD on disk at each Tier 1 (3 sites in aggregate contain 100% of ESD on disk)
  - Selection passes through ESD daily, using data resident on disk locally and at 2 complementary Tier 1's
  - Analysis of TAG/AOD/all ESD/etc. (n-tuples) on disk within 4 hours by ~200 users requires …

Comparing Models
- All ESD, AOD, and TAG data on disk greatly speeds up and improves analyses
  - Enables one-day selection passes (rather than one month) and reduces the tape requirement imposed by selection processing – better/faster selection
  - Allows navigation of individual events (for all processed, but not Raw, data) without recourse to tape and the associated delay – more detailed/faster analysis
  - Avoids contention between analyses over ESD disk space and the need for complex algorithms to optimize use of that space – less effort for a better result
- But there are potentially significant cost and operational drawbacks
  - Additional disk is required to hold 1/3 of the ESD
  - Additional CPU is required to support more frequent selection passes
  - It introduces major dependencies between Tier 1's
  - It increases sensitivity to the performance of the network and associated Grid middleware (particularly when separated by a "thin" pipe across an ocean)
- What is optimal for US ATLAS computing? (A rough disk-footprint comparison is sketched below.)
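[Editor's note] A hedged back-of-the-envelope sketch of the ESD disk footprint per Tier 1 under each model. The annual event count and per-event ESD size are illustrative assumptions, not numbers from the talk; the real capacity and cost tables were presented separately by Rich Baker.

```python
# Back-of-the-envelope ESD disk residency per Tier 1 under each model.
# EVENTS_PER_YEAR and ESD_KB_PER_EVENT are illustrative assumptions only.

EVENTS_PER_YEAR = 1e9      # assumed annual event sample
ESD_KB_PER_EVENT = 500.0   # assumed ESD size per event, in kB

total_esd_tb = EVENTS_PER_YEAR * ESD_KB_PER_EVENT / 1e9  # kB -> TB

models = {
    "Nominal model (25% of ESD on disk per Tier 1)": 0.25,
    "Revised model (33% of ESD on disk per Tier 1)": 1.0 / 3.0,
    "Revised US Tier 1 plan (100% of ESD on disk)": 1.00,
}

print(f"Assumed total ESD per year: {total_esd_tb:.0f} TB")
for label, disk_fraction in models.items():
    print(f"{label}: ~{total_esd_tb * disk_fraction:.0f} TB of ESD disk")
```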

Changes in Technology
- No dramatic new technologies
  - Previously assumed technologies are tracking Moore's Law well
- Recent price/performance points from the RHIC Computing Facility
  - CPU: IBM procurement – $33/SPECint95
    - 310 dual 1 GHz Pentium III nodes, 97.2 SPECint95/node
    - Delivered Aug 2001, now fully operational
    - $1M fully racked, including cluster management hardware & software
  - Disk: OSSI/LSI procurement – $27k/TByte
    - 33 usable TB of high-availability Fibre Channel RAID, 1400 MBytes/sec
    - Delivered Sept 2001, first production use this week
    - $887k including SAN switch
- Strategy is to project, somewhat conservatively, from these points for facilities design and costing
  - Using a halving time somewhat longer than the observed <18 month price/performance halving time (see the projection sketch below) – detailed capacity & costing will be presented by Rich Baker
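[Editor's note] A minimal sketch of the projection strategy described above. The 24-month halving time is an assumed "somewhat conservative" choice (the observed halving time was quoted as under 18 months), and the projected figures are illustrative, not the detailed costing.

```python
# Conservative price/performance projection from the 2001 RHIC Computing
# Facility price points quoted above. Halving time is an assumed value.

def projected_cost(cost_2001, years_after_2001, halving_time_years=2.0):
    """Cost per unit capacity, assuming cost halves every `halving_time_years`."""
    return cost_2001 * 0.5 ** (years_after_2001 / halving_time_years)

cpu_cost_2001 = 33.0       # $/SPECint95 (IBM CPU procurement, Aug 2001)
disk_cost_2001 = 27_000.0  # $/TByte    (OSSI/LSI disk procurement, Sept 2001)

for year in (2003, 2005, 2007):
    dt = year - 2001
    print(f"{year}: ~${projected_cost(cpu_cost_2001, dt):.1f}/SPECint95, "
          f"~${projected_cost(disk_cost_2001, dt):,.0f}/TByte")
```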

Changes in Budgetary Assumptions
- Assumed funding profiles ($K)
- For the revised LHC startup schedule, the new profile is better
  - In the new profile, funding for each year generally matches or exceeds that for one year earlier in the old profile
  - Funds are more effective when spent 1 year later (Moore's Law; see the sketch below)
- For ATLAS DC 2, which stayed fixed in '03, the new profile is worse
  - Hardware capacity goals of DC 2 cannot be met
  - Personnel-intensive facility development may be up to 1 year behind
  - Again, Rich Baker will discuss details
- Hope/expectation is that another DC will be added, allowing validation of a more nearly fully developed Tier 1 and US ATLAS facilities Grid
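[Editor's note] The "funds are more effective when spent a year later" point follows directly from the same assumed halving time used in the projection sketch above; the factor shown is illustrative, not a quoted project number.

```python
# Relative capacity per dollar gained by deferring a purchase one year,
# under the same assumed 24-month price/performance halving time.

HALVING_TIME_YEARS = 2.0  # assumed, as in the projection sketch above

gain = 2.0 ** (1.0 / HALVING_TIME_YEARS)
print(f"Spending the same funds one year later buys ~{gain:.2f}x the capacity")
```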

Capacities of US ATLAS Facilities for Nominal Model

Revised US ATLAS Tier 1 Model
- Stretch-out of the LHC startup schedule, combined with the late DOE funding ramp-up, allows for a significantly improved US ATLAS Tier 1 facility in '07 (rather than '06) while staying within budget (unfortunately it does not help for DC 2)
- It is based on the revised International ATLAS model, with augmentation to address the operational drawbacks
  - Increase disk to hold 100% of the ESD
    - Removing dependency on other Tier 1's
    - Reducing dependency on the trans-Atlantic network
  - Add sufficient CPU to exploit the highly improved data access
  - Retain the tape storage volume of one STK silo; reduce tape I/O bandwidth to only that required in the new model (selection from disk, not tape)

Revised US ATLAS Tier 1 Model (2)
- Impact on the overall US ATLAS computing model
  - The high availability of the complete ESD set at the Tier 1 makes possible more prompt and detailed analyses by users at coupled Tier 2 and Tier 3 sites, as well as by those running directly at the Tier 1
  - Increased CPU capacity to exploit this possibility at these sites is desirable and may be feasible given the 1-year delay in delivery date, but such an expansion remains to be studied
  - Exploitation of this capability would increase the network load between Tier 2/3 sites and the Tier 1, and thus the network requirement, but again the added year should help and further study is required
- Conclusions
  - It is currently our intent to make this revised plan the default US ATLAS Tier 1 plan, and to determine what changes in the overall US ATLAS facilities plan should, and can efficiently, follow from this

Capacities of US ATLAS Facilities for Revised Model

Tier 1 Ramp-up Profile (* DC 2)

STATUS of Tier 1 Facility Evolution
- Goal of planned technical evolution in FY '01 was to establish US ATLAS scalability & independence (from RCF)
  - User Services – 100 registered users
    - Accounts, passwords, CTS, etc.
    - Documentation
  - Infrastructure Services
    - NIS, DNS, etc. servers
    - SSH gateways
  - SUN/Solaris Services
    - Server
    - NFS disk
  - AFS Service
    - AFS servers
    - AFS disk
    - Network
  - HPSS Service
    - Server
    - Tape/cache disk

STATUS of Tier 1 Facility Evolution
- Goal of planned technical evolution in FY '01 was to establish US ATLAS scalability & independence (from RCF)
  - User Services – 100 registered users
    - Accounts, passwords, CTS, etc. ✓
    - Documentation ✓
  - Infrastructure Services
    - NIS, DNS, etc. servers ✓
    - SSH gateways ✓
  - SUN/Solaris Services
    - Server ✓ *
    - NFS disk ✓ *
  - AFS Service
    - AFS servers ✓ *
    - AFS disk ✓ *
    - Network ✓ *
  - HPSS Service
    - Server ✓ *
    - Tape/cache disk ✓ *