USATLAS Tier 1 & 2 Networking Meeting
Scott Bradley, Manager, Network Services
Brookhaven Science Associates / U.S. Department of Energy
14 December 2005


Slide 2: ATLAS Distributed Computing Model

[Diagram: the hierarchical ATLAS computing model]
- ATLAS experiment / online system feeds Tier 0 at ~PByte/sec
- Tier 0+1 (CERN): ~5M SI2K, >1 PB disk, tape robot; CERN:outside resource ratio ~1:2
- Tier 1 centers (BNL: ~2M SI2K, 2 PB tape robot; IN2P3; INFN; RAL), connected at ~10 Gbits/sec
- Tier 2 centers, connected at ~2.5 Gbps
- Tier 3 (institutes) and Tier 4 (workstations): Mbits/sec links, physics data cache, < GBytes/sec
- Resource split: Tier0 : (Σ Tier1) : (Σ Tier2) ~ 1:1:1

Tier roles:
- Tier 0: DAQ, reconstruction, archive
- Tier 1: reconstruction, simulation, archive, mining, and (large-scale) analysis
- Tier 2+: analysis, simulation
- Tier 3+: interactive analysis
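As a rough sanity check on these figures, the sketch below (not part of the original slides) estimates how long a petabyte-scale transfer takes over the Tier 0 to Tier 1 and Tier 1 to Tier 2 links named in the diagram; the 80% link-efficiency factor is an illustrative assumption, not a number from the presentation.

    # Back-of-the-envelope transfer-time estimates for the link speeds
    # on this slide. The 0.8 link-efficiency factor is an illustrative
    # assumption, not a figure from the presentation.

    PETABYTE_BYTES = 1e15
    SECONDS_PER_DAY = 86_400

    def transfer_days(data_bytes, link_bits_per_sec, efficiency=0.8):
        """Days to move data_bytes over a link running at link_bits_per_sec."""
        seconds = data_bytes * 8 / (link_bits_per_sec * efficiency)
        return seconds / SECONDS_PER_DAY

    # Tier 0 -> Tier 1: ~1 PB over the ~10 Gbit/s CERN-to-Tier-1 path
    print(f"1 PB at 10 Gb/s:  {transfer_days(PETABYTE_BYTES, 10e9):.1f} days")

    # Tier 1 -> Tier 2: the same volume over a ~2.5 Gbit/s link
    print(f"1 PB at 2.5 Gb/s: {transfer_days(PETABYTE_BYTES, 2.5e9):.1f} days")

Under these assumptions a petabyte takes roughly 12 days at 10 Gb/s and roughly 46 days at 2.5 Gb/s of sustained throughput, which is why the Tier 2 link provisioning discussed at this meeting matters.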

Slide 3: HEP/NP WAN Requirements at BNL

Slide 4: Questions/Comments?

Slide 5: Purpose of the Meeting…
To provide USATLAS Tier 2 network planners with as much information as possible to prepare for:
- SC-4, scheduled for ~April 2006
- ATLAS production, scheduled to begin ~April 2007

Slide 6: …With the Desired Outcome of:
- Answering Tier 2 questions
- Providing guidance
- Establishing relationships
- Determining next steps

Slide 7: …The Punchline: Tier 2s, This is Your Meeting!

Slide 8: Agenda

08:00-08:30  Continental Breakfast
08:30-08:40  Welcome, Administrative Remarks      Scott Bradley    BNL
08:40-09:00  USATLAS Computing Model Overview     Razvan Popescu   BNL
09:00-09:40  USATLAS Network Model Overview       John Bigrow      BNL
09:40-10:00  ESnet Peering Arrangements           Mike O'Connor    ESnet
10:00-10:20  Abilene Peering Arrangements         Chris Heermann   Internet2
10:20-10:40  Break
10:40-11:00  UltraLight Update                    Shawn McKee      U Mich
11:00-11:20  QoS/MPLS Capabilities                Dantong Yu       BNL
11:20-11:40  Network Performance Measurement      Joe Metzger      ESnet
11:40-12:00  Summary of Morning Work, Q & A       Scott Bradley    BNL
12:00-13:00  Working Lunch
13:00-14:30  Chicago Tier 2 Update                Robert Gardner   U Chicago
             BU/HU Tier 2 Update                  Saul Youssef     BU
             Southwest Tier 2 Update              TBD              TBD
14:30-15:30  Capture of Open Issues, Action Items, Next Steps
15:30-16:00  Working Break, Creation of Slides
16:00-17:00  Summary of Next Steps, Assignment of Responsibility for Action Items
18:00-       Dinner for all interested (?)

Slide 9: Action Items/Next Steps