US ATLAS Distributed IT Infrastructure
Rob Gardner, Indiana University
October 26, 2000

Distributed IT Infrastructure
- Software
  - "Grid" toolkits: PPDG, GriPhyN, DataGrid
  - ATLAS extensions and adaptors
- Tier 2 regional centers
- Networks

GriPhyN
- 4 physics experiments + leading computer scientists in distributed computing
- ITR R&D project: $11.9M / 5 years
- ATLAS resources:
  - 1 postdoc (physicist-computation)
  - 2 graduate students
- Significant matches from IU and BU
  - ITP2 (both); 0.5 FTE of several IU IT personnel (IU)

Typical Tier 2 Regional Center (1 of 5)
- CPU: 50K SpecInt95 (Tier 1: 209K)
  - Commodity Pentium/Linux
  - Estimated 144 dual-processor nodes (Tier 1: 640)
- Online storage: 70 TB disk (Tier 1: 365)
  - High-performance storage area network
  - Baseline: Fibre Channel RAID array
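Using only the numbers quoted on the slide, a quick back-of-envelope check of the per-node capacity and of the aggregate scale of the five planned Tier 2 centers relative to the Tier 1:

```python
# Back-of-envelope check of the Tier 2 figures quoted above, and of the
# aggregate scale of the five planned Tier 2 centers relative to the Tier 1.
# Only numbers from the slide are used.

TIER2_CPU_SI95 = 50_000   # SpecInt95 per Tier 2 center
TIER1_CPU_SI95 = 209_000  # SpecInt95 at the Tier 1
TIER2_NODES    = 144      # dual-processor Pentium/Linux nodes per Tier 2
TIER2_DISK_TB  = 70       # online disk per Tier 2 (TB)
TIER1_DISK_TB  = 365      # online disk at the Tier 1 (TB)
N_TIER2        = 5        # planned number of Tier 2 centers

per_node   = TIER2_CPU_SI95 / TIER2_NODES               # ~350 SI95 per dual node
cpu_ratio  = N_TIER2 * TIER2_CPU_SI95 / TIER1_CPU_SI95  # all Tier 2s vs Tier 1 CPU
disk_ratio = N_TIER2 * TIER2_DISK_TB / TIER1_DISK_TB    # all Tier 2s vs Tier 1 disk

print(f"CPU per node:              {per_node:.0f} SpecInt95")
print(f"5 x Tier 2 CPU  / Tier 1:  {cpu_ratio:.2f}")
print(f"5 x Tier 2 disk / Tier 1:  {disk_ratio:.2f}")
```

In round numbers, the five Tier 2 centers together roughly match the Tier 1 in CPU and disk, which is consistent with the distributed model argued for in these slides.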

Tertiary Storage Capability
- Exploit existing mass storage infrastructure at 2 of the 5 Tier 2 centers
  - Assume an existing HPSS (or equivalent) license, tape silo, and robot
  - Augment with drives, media, mover nodes, and disk cache
  - Each site contributes a PB-scale store
- Reprocessed ESDs, user AODs, ...

Timeline (follows Tier 1)
- R&D Tier 2's: FY '01 & FY '02
  - Initial development & test, 1% to 2% scale
  - Start grid testbeds: ATLAS-GriPhyN, PPDG, DataGrid
- Data Challenges: FY '03 & FY '04
- Production Tier 2's: FY '04 & FY '05
- Operation: FY '05, FY '06 & beyond
  - Full-scale system operation, 20% ('05) to 100% ('06)

Tier 2 Costs: facilities and labor ($K)

ATLAS Grid-Related Activities

ATLAS Grid Workshops
- June, at Indiana University
  - ATLAS-GriPhyN testbed
  - ATLAS requirements for grid software
  - Identify APIs between ATLAS and grid services
  - Specify grid-related milestones
  - Identify deliverables for MOU, WBS documents
- July, at CERN
  - First ATLAS-wide grid workshop
  - Talks from most major grid efforts in HEP
- September, at CERN
  - Focus attention on ATLAS requirements

Participants at June Testbed Workshop
Laboratories:
- Larry Price (ANL), Ed May (ANL), Rich Baker (BNL), Stu Loken (LBL), David Malon (ANL), Craig Tull (LBL), Bruce Gibbard (BNL), Torre Wenaus (BNL), David Quarrie (LBL), Bill Allcock (ANL)
Universities:
- John Huth (HU), Rob Gardner (IU), Fred Luehring (IU), Shawn McKee (UM), Jim Shank (BU), Steve Wallace (IU), Leigh Grundhoefer (IU), Thom Sulanke (IU), Mary Papakian (IU), Jane Liu (IU)

ATLAS GriPhyN Goals
- Provide linkage between Athena, the database, the simulation framework, and the grid toolkits
  - Feedback to software developers in both communities (ATLAS core developers and grid toolkit developers)
- Develop an ATLAS-GriPhyN testbed
  - Validate the distributed computing model for LHC computing
  - Provide input to new models by testing tools and the distributed functionality of ATLAS software
  - Provide input to planning for facilities development (at each tier) and networks

10/26/00US ATLAS Distributed IT Who’s doing what in the US? Participants and projects –Argonne: PPDG activities, Database-grid GriPhyN-DataGrid Integration –Berkeley Lab Athena grid interfaces, DOE sciences grid –Brookhaven Lab Tier 1 development, file replication, grid requirements document –Boston U Globus evaluations, applns –Harvard U US ATLAS computing management, planning –U of Michigan Globus interfaces, QoS authentication –Indiana U Athena grid interfaces, Testbed coordination, GIS UTA: D0 cluster, SAM SUNY Albany - test bed List is growing

Gaudi (Athena) Control Framework (diagram, from LHCb's Gaudi)
- Application Manager driving the algorithms and services
- Algorithms operating on the Transient Event Store
- Event Data Service and Persistency Service, with Converters, connecting the Transient Event Store to data files
- Detector Data Service and Persistency Service connecting the Transient Detector Store to data files
- Histogram Service and Persistency Service connecting the Transient Histogram Store to data files
- Message Service, JobOptions Service, Particle Properties Service, other services
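As a purely illustrative mock-up of the pattern in the diagram (the real framework is C++; the class and method names below are simplified stand-ins, not the Gaudi/Athena interfaces): algorithms read and write a transient event store, a persistency service uses a converter to populate that store from persistent data, and an application manager owns the pieces and drives the event loop.

```python
# Illustrative Python mock-up of the control-framework pattern shown above.
# Not the real Gaudi/Athena API; names are simplified stand-ins.

class TransientEventStore:
    """In-memory store shared by all algorithms while one event is processed."""
    def __init__(self):
        self._objects = {}
    def record(self, key, obj):
        self._objects[key] = obj
    def retrieve(self, key):
        return self._objects[key]

class PersistencyService:
    """Moves data from persistent storage into the transient store via a converter."""
    def __init__(self, converter):
        self._converter = converter
    def load(self, store, key, source):
        store.record(key, self._converter(source))

class Algorithm:
    """Base class: the framework calls initialize / execute / finalize."""
    def initialize(self):
        pass
    def execute(self, store):
        raise NotImplementedError
    def finalize(self):
        pass

class HitCounter(Algorithm):
    """Toy algorithm: counts the raw hits it finds in the transient store."""
    def execute(self, store):
        hits = store.retrieve("RawHits")
        print(f"event has {len(hits)} hits")

class ApplicationManager:
    """Owns the services and the algorithms and runs the event loop."""
    def __init__(self, algorithms, persistency):
        self.algorithms = algorithms
        self.persistency = persistency
    def run(self, events):
        for alg in self.algorithms:
            alg.initialize()
        for source in events:                        # one 'persistent' source per event
            store = TransientEventStore()
            self.persistency.load(store, "RawHits", source)
            for alg in self.algorithms:
                alg.execute(store)
        for alg in self.algorithms:
            alg.finalize()

# The 'events' here are just Python ranges standing in for data files.
app = ApplicationManager([HitCounter()], PersistencyService(converter=list))
app.run(events=[range(3), range(5)])
```

The point of the pattern for grid work is that algorithms never open files themselves, so the data-access services are the natural place to attach grid functionality.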

Grid vs. Gaudi Services
Craig Tull, NERSC/LBL: "Athena and Grids"
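The comparison itself is not reproduced in this transcript; the sketch below only illustrates the general idea behind it: grid data access (replica lookup plus file staging) can sit behind a Gaudi-style service facade, so an algorithm asking for a logical file never learns whether the file was local or fetched from a remote replica. ReplicaCatalog, GridDataService, and stage() are invented names, not Globus, PPDG, or Gaudi APIs; shutil.copy stands in for a grid transfer.

```python
# Hypothetical illustration: grid data access hidden behind a service facade.
import os
import shutil
import tempfile

class ReplicaCatalog:
    """Maps a logical file name to the physical copies known to the grid."""
    def __init__(self, entries):
        self._entries = entries          # {lfn: [physical paths]}
    def replicas(self, lfn):
        return self._entries[lfn]

class GridDataService:
    """Service facade: stage a logical file into a local cache before it is read."""
    def __init__(self, catalog, cache_dir):
        self.catalog = catalog
        self.cache_dir = cache_dir
    def stage(self, lfn):
        local = os.path.join(self.cache_dir, lfn.replace(":", "_").replace("/", "_"))
        if not os.path.exists(local):                 # fetch at most once
            source = self.catalog.replicas(lfn)[0]    # trivial replica choice
            shutil.copy(source, local)                # stand-in for a grid transfer
        return local

# Tiny demo: one 'remote' replica staged into a local cache directory.
remote_dir, cache_dir = tempfile.mkdtemp(), tempfile.mkdtemp()
remote_path = os.path.join(remote_dir, "run0042.dat")
with open(remote_path, "w") as f:
    f.write("tile cal test beam data")

svc = GridDataService(ReplicaCatalog({"lfn:tilecal/run0042": [remote_path]}), cache_dir)
print(svc.stage("lfn:tilecal/run0042"))   # path of the locally staged copy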

ATLAS-GriPhyN Testbed
- Platform for testing grid concepts and computing models
- Provide input back to grid developers
- Expose weaknesses to better plan infrastructure upgrades
- Identify and specify application-grid services interfaces
- Developers, administrators, and users need grid experience
- Perform realistic test cases and make them available as a test suite
- Prepare infrastructure for ATLAS data challenges
- Distributed Monte Carlo production for TDRs

Initial Testbed Participants (site map)
- Sites: UC Berkeley / LBNL-NERSC, Argonne National Laboratory, Brookhaven National Laboratory, Indiana University, Boston University, U Michigan, University of Texas at Arlington, SUNY Albany
- Connectivity shown: Calren, ESnet, Abilene, NTON, MREN, NPACI
- HPSS sites indicated

ATLAS Applications
- Tile Cal test beam data (expanding on Ed May's PPDG work)
- TRT system test data (module production, Fred Luehring)
- Monte Carlo production
- Evaluation of data distribution models
- Data cataloging, metadata, bookkeeping

ATLAS Grid Schedules

ATLAS Data Challenges ("DC")
- DC 1: first half of 2002
  - First project releases: DataGrid (PM12) in January 2002; GriPhyN VDT-1 in October 2001
- ATLAS DC milestones
  - Were written on the safe side, but we will try to be more ambitious: involve more sites and more complexity
  - Further define DC1 in the next months, getting grid people and sites involved
- Computing TDR ready by end of 2002: all experimentation needed before the TDR is to be planned for DC1

ATLAS Schedule Summary: significant milestones
- 11/01 – 12/01: Data Challenge 0
- 01/02 – 07/02: Data Challenge 1
  - Provide input to the computing model
- 05/02 – 11/02: Computing TDR
- 01/03 – 09/03: Data Challenge 2
  - Major tests of grid-enabled software
- 01/04 – 06/06: Physics Readiness Report

ATLAS GriPhyN Milestones (1): Year 1 ('00-'01)
- Perform tests using the Globus and Particle Physics Data Grid toolkits, providing basic file replication, transport, and security infrastructure. Data samples of order 1 TB from test beam data recorded at CERN will be used in these tests.
- Participate in the development of information models for data types and elements. Develop metadata file catalogs, and tools to manipulate them, to organize existing test beam and Monte Carlo data sets (a minimal sketch follows below).
- Begin implementing an ATLAS-GriPhyN testbed involving CERN and several US ATLAS institutions, including Brookhaven, Argonne, and Berkeley Labs; Boston University, Indiana University, University of Michigan, UT Arlington, and SUNY Albany.
- Begin identification and specification of APIs for the ATLAS analysis framework (Athena) and Monte Carlo programs. Evaluate and adopt basic grid services from the VDT-1 toolkit as they become available.
- Provide feedback to CS researchers designing and planning VDT-2 services.
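A minimal sketch of what the metadata file catalog called for above might look like: each logical file carries attributes (detector, run type, size) plus a list of physical replicas, and a query by attributes returns the matching entries. The entries, the logical-file-name scheme, and the select() helper are assumptions for illustration, not the PPDG or Globus catalog interfaces.

```python
# Illustrative metadata file catalogue: attribute query -> logical files -> replicas.
catalogue = [
    {"lfn": "lfn:tilecal/testbeam/run0042",
     "detector": "TileCal", "type": "testbeam", "size_gb": 1.2,
     "replicas": ["cern:/castor/tb/run0042", "bnl:/hpss/tb/run0042"]},
    {"lfn": "lfn:trt/testbeam/run0108",
     "detector": "TRT", "type": "testbeam", "size_gb": 0.8,
     "replicas": ["cern:/castor/tb/run0108"]},
    {"lfn": "lfn:mc/dc0/higgs_0001",
     "detector": "full", "type": "montecarlo", "size_gb": 2.0,
     "replicas": ["bnl:/hpss/mc/higgs_0001", "iu:/disk/mc/higgs_0001"]},
]

def select(**criteria):
    """Return catalogue entries whose metadata match all of the given attributes."""
    return [entry for entry in catalogue
            if all(entry.get(key) == value for key, value in criteria.items())]

# e.g. find all TileCal test beam files and where their replicas live
for entry in select(type="testbeam", detector="TileCal"):
    print(entry["lfn"], "->", ", ".join(entry["replicas"]))
```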

ATLAS GriPhyN Milestones (2): Year 2 ('01-'02; DC0, DC1)
- Deploy and evaluate VDT-2 (Centralized Virtual Data Services) on the existing ATLAS-GriPhyN testbed, focusing on support of the Athena analysis framework and Monte Carlo simulation programs.
- Provide support to physicists located at many US sites and at CERN who require access to distributed data sets, using metadata file catalogs, high-speed multi-site file replication, and network caching services. Test beam and Monte Carlo data will be used, ranging in size from 2 to 10 TB.
- Incorporate request planning and execution services into the Athena framework. Gather policy and resource information for participating nodes on the grid (a toy planner is sketched below).
- Continue to expand the US ATLAS testbed.
- Tests of distributed object databases across several sites in the ATLAS-GriPhyN testbed.
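An illustrative toy version of the request-planning step mentioned above: rank sites using published resource information (free CPUs, whether the data set is already resident) and decide whether a transfer is needed. The site names, numbers, and scoring rule are invented for illustration and are not part of the GriPhyN VDT.

```python
# Toy request planner: prefer sites that already hold the data, break ties
# on free CPU, and report whether a data transfer would be needed.
sites = {
    "bnl": {"free_cpus": 40, "has_data": True},
    "anl": {"free_cpus": 10, "has_data": False},
    "iu":  {"free_cpus": 60, "has_data": False},
}

def plan(request_cpus):
    """Pick a site for the request and say whether the data must be moved."""
    ranked = sorted(sites.items(),
                    key=lambda kv: (not kv[1]["has_data"], -kv[1]["free_cpus"]))
    site, info = ranked[0]
    return {"site": site,
            "transfer": not info["has_data"],
            "ok": info["free_cpus"] >= request_cpus}

print(plan(request_cpus=20))   # -> {'site': 'bnl', 'transfer': False, 'ok': True}
```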

ATLAS GriPhyN Milestones (3): Year 3 ('02-'03; DC2, Computing TDR)
- The Athena framework becomes fully distributed: incorporate the VDT-3 toolkit (Distributed Virtual Data Services).
- Perform tests of the tier hierarchy in a production mode. This will involve CERN (as the Tier 0 site), Brookhaven Lab (as the US ATLAS Tier 1 site), and the Tier 2 regional data centers.
- Begin multi-site distributed simulations to generate O(100) TB Monte Carlo data samples for physics studies and Data Challenges.
- Continue to provide feedback, especially with regard to fault tolerance in practical, realistic planning and request execution.

ATLAS GriPhyN Milestones (4): Year 4 ('03-'04; DC3)
- Scaled-up versions of the previous Mock Data Challenges using VDT-4 (Scalable Virtual Data Services) on the existing testbed between CERN, the Tier 1 center at BNL, the Tier 2 regional centers, and of order 20 Tier 4 university groups with physicists doing Monte Carlo studies and analysis of test beam data.
- The goal is to approach realistic data samples, >O(500 TB), and to involve thousands of computers.
- Full integration of ATLAS analysis tools, such as Athena and the Monte Carlo production control framework, with virtual data services.

ATLAS GriPhyN Milestones (5): Year 5 ('04-'05; Physics Readiness Report)
- Continue to build on the experience from the data challenges of previous years.
- Build a production-quality distributed offline data analysis system for the ATLAS grid using GriPhyN tools.

ATLAS Grid Requirements
- EU DataGrid: to produce a grid requirements document by June 1, 2001
- ATLAS: a group led by Rich Baker (BNL) and Larry Price (ANL) has been formed to produce an ATLAS grid requirements document by November 1, 2000 (draft)

ATLAS – CS Collaborative Projects

Possible CS-Physics Projects
- Athena grid interface
- Toolkit installation & distribution kit (a la INFN)
- Collaboration with DataGrid WP8 through the grid forum, etc.
- Grid information services: user registration interface (a sketch follows below)
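A sketch of the user-registration idea in the last item: a registration request carries the user's certificate subject, and the service records a mapping from that subject to a local account, in the spirit of a Globus grid-mapfile. The function names and the one-line-per-user output format are illustrative assumptions; the web-form and approval workflow are omitted.

```python
# Illustrative user-registration interface for a grid information service.
registered = {}   # certificate subject -> local account

def register(subject_dn, local_account):
    """Register a grid user; refuse duplicates so mappings stay unique."""
    if subject_dn in registered:
        raise ValueError(f"{subject_dn} is already registered")
    registered[subject_dn] = local_account

def gridmap_lines():
    """Render the registrations as grid-mapfile-style lines."""
    return [f'"{dn}" {acct}' for dn, acct in registered.items()]

register("/O=Grid/O=ATLAS/OU=iu.edu/CN=Jane Physicist", "usatlas01")
print("\n".join(gridmap_lines()))
```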

Summary of Grid Activities
- ATLAS fully supports grid efforts and is redesigning its computing model in this new context
- Significant involvement of ATLAS personnel in PPDG, DataGrid, and GriPhyN
  - This involvement will grow as more US ATLAS institutions become involved
  - GriPhyN needs to develop a clear and open policy for their contributions