LHC Tier 2 Networking BOF


LHC Tier 2 Networking BOF
Joe Metzger, metzger@es.net
Joint Techs, Vancouver, 2005

LHC: the Large Hadron Collider, a particle accelerator at CERN used by physicists from around the world. http://www.ilrasoiodioccam.it/articoli/lhc.html

http://hepweb.rl.ac.uk/ppUKpics/images/POW/1999/990303.jpg

North American LHC Site Hierarchy
Tier 0: CERN
Tier 1 (regional archive & analysis):
  US: Brookhaven (ATLAS), FERMI (CMS)
  Canada: TRIUMF (ATLAS)
Tier 2 (analysis centers): Universities
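For readers who prefer a programmatic view, the tier structure above can be captured as a simple nested mapping. This is only an illustrative sketch; the tier names and sites come from the slide, and nothing beyond that is implied.

```python
# Illustrative sketch of the North American LHC tier hierarchy from the slide above:
# Tier 0 (CERN) feeds the regional Tier 1 archive/analysis centers, which in turn
# feed the Tier 2 analysis centers at universities.
north_american_lhc_tiers = {
    "Tier 0": ["CERN"],
    "Tier 1 (regional archive & analysis)": {
        "US": {"Brookhaven": "ATLAS", "FERMI": "CMS"},
        "Canada": {"TRIUMF": "ATLAS"},
    },
    "Tier 2 (analysis centers)": ["Universities"],
}

for tier, members in north_american_lhc_tiers.items():
    print(tier, "->", members)
```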

Data Challenges
SC1: initial tests between some Tier-1 sites and CERN. Data files transported from disk to disk using globus_url_copy. Date: December 2004. Result: tests were done with BNL, CCIN2P3, Fermilab, FZK and SARA.
SC2: sustained file transfers with an aggregate rate of 500 MByte/sec out of CERN, disk to disk, using software components from the radiant service at CERN. Participating sites: CCIN2P3, CNAF, Fermilab, FZK, RAL, SARA. Date: last two weeks of March 2005.
SC3: sustained file transfers simultaneously to 6 Tier-1 sites, disk to tape (at 60 MByte/sec) using SRM. Participating sites: CCIN2P3, CNAF, Fermilab, FZK, RAL, SARA. Date: July 2005.
http://lcg.web.cern.ch/LCG/PEB/gdb/ServiceChallenges.htm
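As a rough sense of scale, the sustained rates quoted above translate into WAN bandwidth roughly as follows. This is a back-of-the-envelope sketch: the MByte/s figures come from the slide, while the 25% protocol/retry headroom is an assumption added for illustration.

```python
# Back-of-the-envelope conversion of the Service Challenge rates quoted above
# (MByte/s, decimal megabytes) into the approximate WAN capacity they imply (Gbit/s).
# The 25% headroom factor is an assumption for illustration, not an LCG figure.

def required_gbps(mbyte_per_sec: float, headroom: float = 1.25) -> float:
    """Convert a sustained payload rate in MByte/s to link capacity in Gbit/s."""
    bits_per_sec = mbyte_per_sec * 8 * 1e6      # MByte/s -> bit/s
    return bits_per_sec * headroom / 1e9        # allow for protocol/retry overhead

print(f"SC2: 500 MByte/s out of CERN -> ~{required_gbps(500):.1f} Gbit/s")
print(f"SC3: 60 MByte/s disk-to-tape -> ~{required_gbps(60):.2f} Gbit/s per site")
```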

Data Challenges
SC4: sustained file transfers simultaneously to all 10 Tier-1 sites and some Tier-2 sites, including data processing on farms at CERN and at the Tier-1 sites. Date: Q4 2005.
SC5: the whole infrastructure of Tier-1 and Tier-2 sites in place; simulated data movements between all partners. Date: Q1 2006.
SC6: test of the whole infrastructure at twice the nominal rate. Date: Q2 2006.
SC7: test of the whole infrastructure with real (cosmic) data from the experiments.
http://lcg.web.cern.ch/LCG/PEB/gdb/ServiceChallenges.htm

ATLAS Tier 2 Sites (Tier 1: BNL)
University of Texas at Arlington
University of Oklahoma
University of New Mexico
Langston University
University of Chicago
Indiana University
Boston University
Harvard University
University of Michigan
http://lcg.web.cern.ch/LCG/PEB/gdb/LCG-Tiers.htm

CMS Tier 2 Sites (Tier 1: FERMI)
MIT
University of Florida, Gainesville
University of Nebraska, Lincoln
University of Wisconsin, Madison
Caltech
Purdue University
University of California, San Diego
http://lcg.web.cern.ch/LCG/PEB/gdb/LCG-Tiers.htm

ATLAS Tier 2 Sites (Tier 1: TRIUMF)
University of Victoria
WestGrid
University of Alberta
University of Toronto
Carleton University
Université de Montréal
http://lcg.web.cern.ch/LCG/PEB/gdb/LCG-Tiers.htm

Data Challenge Survey
Sent to network contacts at the universities. Questions:
  Expected 2005 data rates for long flows
  Expected 2006 data rates for long flows
  Upstream provider
  Light paths
  Contact info

ATLAS Tier 2 Sites (Tier 1: BNL) - reported data-rate estimates
Texas: 200-500 Mbps (2005), 500-1000 Mbps (2006)
Chicago: 100-500 Mbps
Indiana: 100-1000 Mbps
Michigan: 10 Gbps
Oklahoma, New Mexico, Langston, Boston, Harvard: no figures given

CMS Tier 2 Sites (Tier 1: FERMI) - reported data-rate estimates
Caltech: 1+ Gbps
MIT, Florida, Nebraska, Wisconsin, Purdue, UCSD: no figures given
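Taken together, the survey responses above give a crude sense of the aggregate Tier-2 demand. The sketch below sums only the sites that reported figures and takes each site's highest quoted rate; that treatment is an interpretation for illustration, not something stated on the slides.

```python
# Crude illustration of the aggregate demand implied by the survey responses above.
# Only sites that reported a figure appear here; year attribution is ambiguous for
# the single-valued answers, so each site's highest quoted rate is used.
peak_reported_mbps = {
    "Texas":    1000,    # 500-1000 Mbps (2006)
    "Chicago":   500,    # 100-500 Mbps
    "Indiana":  1000,    # 100-1000 Mbps
    "Michigan": 10000,   # 10 Gbps
    "Caltech":  1000,    # 1+ Gbps (CMS table)
}

total_gbps = sum(peak_reported_mbps.values()) / 1000
print(f"Upper end of reported Tier-2 long-flow estimates: ~{total_gbps:.1f} Gbit/s")
```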

Tier 2 Light Path Plans
U Texas: No.
U Chicago: Initially not, though we do have capabilities to do so. At UC, we connect to StarLight via optical links provided by I-WIRE.
Indiana U.: Initially not, though we do have capabilities to do so.
Caltech: We'll experiment with on-demand network path setup dynamically using MonALISA service agents and GMPLS.
U Michigan: We are an UltraLight collaborating site and will utilize dedicated light paths, QoS/MPLS, and high-performance protocols (like FAST) to do our Service Challenge work.

Summary
Synopsis of LHC Tier 2 Networking web page
Mailing list to discuss US LHC networking: lhc-us-net@es.net
To subscribe, send a message to listserver@listmin.es.net with the subject "subscribe lhc-us-net".
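Subscription is by plain email, as described above. A minimal sketch using Python's standard smtplib is shown below; the localhost SMTP relay and the sender address are placeholders, and only the list-server address and subject line come from the slide.

```python
# Minimal sketch of the subscription step described above: an email to
# listserver@listmin.es.net with the subject "subscribe lhc-us-net".
# The localhost SMTP relay and the sender address are placeholder assumptions.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "you@example.edu"              # placeholder sender address
msg["To"] = "listserver@listmin.es.net"      # list server from the slide
msg["Subject"] = "subscribe lhc-us-net"      # subscription command goes in the subject

with smtplib.SMTP("localhost") as smtp:      # assumes a local mail relay
    smtp.send_message(msg)
```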