SLAC Site Report by Les Cottrell, for the UltraLight meeting, Caltech, October 2005.


1 SLAC Site Report
By Les Cottrell for the UltraLight meeting, Caltech, October 2005

2 SLAC Evolving
Increasingly multi-program:
- Increasing focus on photon sources: SPEAR3, the Linac Coherent Light Source (LCLS), and the Ultrafast Science Center
- Increased funding from BES; the linac is increasingly funded by BES (entirely so in 2009)
- HEP roughly stable; BaBar stops taking data in 2008; proposal for a West Coast ATLAS/LHC Tier 2 site
- Accelerator design
- Astronomy: NASA (GLAST) and the Large Synoptic Survey Telescope (LSST)
- Joint (Stanford / DoE / NSF) funded projects: KIPAC, the Ultrafast center, Guest House

3 Requires
New business practices:
- More project oriented: with multiple projects comes a need for more accountability
- No longer dominated by HEP: harder to "hide" projects like UltraLight/PingER that have no funding source for operations
SLAC/IEPM is part of the BNL TeraPaths project:
- Recently hired post-doc Yee-Ting Li to work on this

4 Site Connectivity
- Connected to the ESnet Bay Area MAN (BAMAN)
  - ESnet (production) + SDN, both 10 Gbits/s
  - 4 Gbits/s disk to disk SLAC/NERSC for the BAMAN christening (see the unit check below)
- Have the equipment to connect the production LAN to ESnet at 10 Gbps; awaiting power installation
- For SC2005, built a ten-node xrootd cluster of v20z dual Opterons with Chelsio and Neterion NICs, plus four v20z file servers and disk trays
- Attached to ESnet for SC2005 via a temporary Cisco 6509
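For scale, a quick unit check on the quoted 4 Gbits/s disk-to-disk figure (my arithmetic, not from the slides):

```python
# Convert the quoted 4 Gbit/s disk-to-disk rate into more familiar units.
rate_bps = 4e9            # 4 Gbit/s
rate_Bps = rate_bps / 8   # 500 MB/s
one_tb = 1e12             # 1 TB in bytes
print(f"{rate_Bps / 1e6:.0f} MB/s; 1 TB in ~{one_tb / rate_Bps / 60:.0f} min")
# -> 500 MB/s; 1 TB in ~33 min
```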

5 UL Testbed
- 10 Gbits/s Sunnyvale space loaned from CENIC; SLAC paid $1K/month for power but stopped being invoiced July '05
  - Status of permanent residency unresolved
- Cisco 6509 from the UltraLight proposal
- Two Sun v20z dual 1.8 GHz Opterons loaned from SLAC
  - 10 GE Neterion NICs, purchased by SLAC
  - Remote management: purchased/installed a terminal server to provide console access, and remote power management
- Will get a file server from Caltech?
- Connect hosts to the Cisco 6509, and on to 10 Gbps UltraLight
- Tie-ins/interconnects between ESnet, USN & UL unclear:
  - USN has 2 circuits to the UL router
  - ESnet (SDN) has a circuit to USN
  - ESnet BAMAN has connectivity to SLAC
- Main costs to SLAC are people costs, for which there is no directed funding

6 Sunnyvale set up
[Network diagram: compute servers snv1 and snv2 (Sun v20z dual 1.8 GHz Opterons, Linux 2.6.9, Neterion 10 GE NICs; NO uldemo account allowed), plus servers not yet installed, connect to a Cisco 6509 (ports Te1/2, Te2/1, Te2/2, Te2/4, Te11/3) with uplinks to 10 Gbits/s UltraLight, USN, and ESnet BAMAN; a terminal server (console access) and CENIC remote power management hang off a 10 Mbps UltraLight management network via a hub. IP addresses elided in the original.]

7 Tools installed
- Ping, traceroute (of course), pingroute
- TCP achievable throughput: iperf, thrulay (see the sketch below)
- Packet dispersion: pathchirp, pathneck
- File transfer: bbcp, GridFTP
- Future: IEPM-BW, OWAMP?
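iperf and thrulay both measure achievable TCP throughput the same basic way: one host streams data from memory over a TCP connection for a fixed time while the other counts the bytes it receives. Below is a minimal Python sketch of that pattern, assuming two hosts you control; the port matches iperf2's classic default, while the block size and duration are arbitrary illustrative choices.

```python
# Minimal iperf-style memory-to-memory TCP throughput test (sketch).
import socket
import time

PORT = 5001        # iperf2's classic default port
DURATION = 10      # seconds the client transmits
BLOCK = 64 * 1024  # bytes per send() call

def server():
    """Sink: accept one connection and count bytes until the peer closes."""
    with socket.create_server(("", PORT)) as srv:
        conn, _addr = srv.accept()
        total, start = 0, time.time()
        while True:
            data = conn.recv(BLOCK)
            if not data:
                break
            total += len(data)
        elapsed = time.time() - start
        print(f"received {total * 8 / elapsed / 1e9:.2f} Gbit/s")

def client(host):
    """Source: stream zero-filled blocks for DURATION seconds, then close."""
    buf = b"\0" * BLOCK
    with socket.create_connection((host, PORT)) as s:
        end = time.time() + DURATION
        while time.time() < end:
            s.sendall(buf)
    # The receiver's count is the honest number; the sender's own rate can
    # overstate throughput by whatever is still queued in socket buffers.
```

The real tools layer parallel streams, socket-buffer tuning, and interval reporting on top of this pattern, all of which matter at the multi-Gbit/s rates discussed in these slides.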