LHC Tier 2 Networking BOF Joe Metzger Joint Techs Vancouver 2005

LHC: the Large Hadron Collider, an accelerator at CERN used by physicists from around the world.

North American LHC Site Hierarchy
- Tier 0: CERN
- Tier 1: Regional Archive & Analysis
  - US: Brookhaven (ATLAS), FERMI (CMS)
  - Canada: TRIUMF (ATLAS)
- Tier 2: Analysis Centers (universities)
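A minimal sketch of the hierarchy above as a plain data structure, useful when scripting against the site list; only the groupings shown on this slide are encoded, nothing more.

```python
# Sketch only: the tier/site groupings are taken directly from the slide.
LHC_HIERARCHY = {
    "Tier 0": ["CERN"],
    "Tier 1": {  # regional archive & analysis centers
        "US": {"Brookhaven": "ATLAS", "FERMI": "CMS"},
        "Canada": {"TRIUMF": "ATLAS"},
    },
    "Tier 2": "University analysis centers (listed on the following slides)",
}

if __name__ == "__main__":
    for tier, sites in LHC_HIERARCHY.items():
        print(tier, "->", sites)
```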

Data Challenges
1. SC1: initial tests between some Tier-1 sites and CERN. Data files transported from disk to disk using globus_url_copy. Date: December. Result: tests were done with BNL, CCIN2P3, Fermilab, FZK and SARA.
2. SC2: sustained file transfers at an aggregate rate of 500 MByte/sec out of CERN, disk to disk, using software components from the radiant service at CERN. Participating sites: CCIN2P3, CNAF, Fermilab, FZK, RAL, SARA. Date: last two weeks of March.
3. SC3: sustained file transfers simultaneously to 6 Tier-1 sites, disk to tape (at 60 MByte/sec) using SRM. Participating sites: CCIN2P3, CNAF, Fermilab, FZK, RAL, SARA. Date: July.
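SC1 and SC2 moved files disk to disk with globus_url_copy, the GridFTP command-line client. Below is a minimal sketch of how such a transfer might be driven from a script; the endpoint hostname, file paths, and stream count are hypothetical, and option names should be checked against the local Globus installation.

```python
import subprocess

# Hypothetical source and destination; in a Service Challenge style run these
# would be a local disk file and a GridFTP endpoint at a Tier-1 site.
SRC = "file:///data/sc/testfile.dat"
DST = "gsiftp://gridftp.example-tier1.org/data/sc/testfile.dat"

# globus-url-copy performs the disk-to-disk copy; -p sets the number of
# parallel TCP streams and -vb prints transfer performance as it runs.
cmd = ["globus-url-copy", "-vb", "-p", "4", SRC, DST]

result = subprocess.run(cmd, capture_output=True, text=True)
print(result.stdout)
if result.returncode != 0:
    print("transfer failed:", result.stderr)
```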

Data Challenges (continued)
4. SC4: sustained file transfers simultaneously to all 10 Tier-1 sites and some Tier-2 sites; includes data processing on farms at CERN and at the Tier-1 sites. Date: Q
5. SC5: the whole infrastructure of Tier-1 and Tier-2 sites in place; simulated data movements between all partners. Date: Q
6. SC6: test of the whole infrastructure at twice the nominal rate. Date: Q
7. SC7: test of the whole infrastructure with real (cosmic) data from the experiments.
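The throughput targets quoted on these slides translate directly into network capacity. A small illustrative calculation, using only the 500 MByte/sec and 60 MByte/sec figures given above (the SC6 line simply doubles the SC2 aggregate for illustration, since the slide does not define "nominal" numerically):

```python
def mbyte_per_sec_to_gbps(rate_mbyte_s: float) -> float:
    """Convert a file-transfer rate in MByte/s to a line rate in Gbit/s (decimal units)."""
    return rate_mbyte_s * 8 / 1000  # 8 bits per byte, 1000 Mbit per Gbit

# SC2 target: 500 MByte/sec aggregate out of CERN
print(mbyte_per_sec_to_gbps(500))      # 4.0 Gbit/s
# SC3 target: 60 MByte/sec to tape at each participating Tier-1
print(mbyte_per_sec_to_gbps(60))       # 0.48 Gbit/s per site
# SC6: "twice the nominal rate" (assuming the SC2 aggregate as nominal)
print(mbyte_per_sec_to_gbps(500) * 2)  # 8.0 Gbit/s
```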

ATLAS Tier 2 Sites (Tier 1: BNL)
- University of Texas at Arlington
- University of Oklahoma
- University of New Mexico
- Langston University
- University of Chicago
- Indiana University
- Boston University
- Harvard University
- University of Michigan

CMS Tier 2 Sites (Tier 1: FERMI)
- MIT
- University of Florida, Gainesville
- University of Nebraska, Lincoln
- University of Wisconsin, Madison
- Caltech
- Purdue University
- University of California, San Diego

ATLAS Tier 2 Sites (Canada, Tier 1: TRIUMF)
- University of Victoria
- WestGrid
- University of Alberta
- University of Toronto
- Carleton University
- Université de Montréal

Data Challenge Survey
- Sent to network contacts at universities
- Questions:
  - Expected '05 data rates for long flows
  - Expected '06 data rates for long flows
  - Upstream provider
  - Light paths
  - Contact info

ATLAS Tier 2 Sites (Tier 1: BNL): expected data rates
- Texas: Mbps ('05) / Mbps ('06)
- Oklahoma
- New Mexico
- Langston
- Chicago: Mbps ('05) / Mbps ('06)
- Indiana: Mbps ('05) / Mbps ('06)
- Boston
- Michigan: 10 Gbps
- Harvard

CMS Tier 2 Sites (Tier 1: FERMI): expected data rates
- MIT
- Florida
- Nebraska
- Wisconsin
- Caltech: 1+ Gbps
- Purdue
- UCSD
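For context, the link rates reported in the survey slides above determine how quickly a Tier-2 site can import data. A rough illustration follows; the 1 TByte dataset size is hypothetical and the link is assumed to run at full utilization.

```python
def transfer_hours(dataset_tbyte: float, link_gbps: float) -> float:
    """Hours to move a dataset of the given size over a link at 100% utilization."""
    bits = dataset_tbyte * 1e12 * 8       # TByte -> bits (decimal units)
    seconds = bits / (link_gbps * 1e9)    # Gbit/s -> bit/s
    return seconds / 3600

# Hypothetical 1 TByte dataset over link speeds quoted in the survey slides.
for label, gbps in [("1 Gbps (e.g., Caltech's '1+ Gbps')", 1.0),
                    ("10 Gbps (e.g., Michigan)", 10.0)]:
    print(f"{label}: {transfer_hours(1.0, gbps):.1f} hours")
```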

Tier 2 Light Path Plans
- U Texas: No.
- U Chicago: Initially not, though we do have the capability to do so. At UC, we connect to StarLight via optical links provided by I-WIRE.
- Indiana U.: Initially not, though we do have the capability to do so.
- Caltech: We'll experiment with dynamic, on-demand network path setup using MonALISA service agents and GMPLS.
- U Michigan: We are an UltraLight collaborating site and will use dedicated light paths, QoS/MPLS, and high-performance protocols (such as FAST) for our Service Challenge work.

Summary
- Synopsis of the LHC Tier 2 Networking web page
- Mailing list to discuss US LHC networking: to subscribe, send a message with the subject "subscribe lhc-us-net".