OpenLab Enterasys Meeting

OpenLab Enterasys Meeting, January 23rd 2003

Goals for Openlab Networking
- Understand, implement and manage a large, high-bandwidth infrastructure
- Promote the use of 10 Gigabit Ethernet, both as a switch interconnect and as a host attachment
- Demonstrate that Ethernet technology is well suited for LHC data acquisition and offline processing
- ... and demonstrate that Enterasys can do the job better than others
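The "host attachment" goal boils down to showing that a single server can fill a 10 Gigabit Ethernet link from memory. The sketch below only illustrates that kind of measurement, not the tooling actually used in openlab; the port number, buffer sizes and function names are invented for the example.

    # Minimal memory-to-memory TCP throughput probe (illustrative sketch;
    # port, buffer sizes and function names are hypothetical).
    import socket
    import time

    PORT = 5001          # hypothetical test port
    CHUNK = 1 << 20      # 1 MiB per send/recv call
    TOTAL = 1 << 32      # 4 GiB pushed per run

    def serve() -> None:
        """Sink incoming bytes and report the achieved rate."""
        with socket.create_server(("", PORT)) as srv:
            conn, _ = srv.accept()
            with conn:
                received, start = 0, time.time()
                while True:
                    data = conn.recv(CHUNK)
                    if not data:
                        break
                    received += len(data)
                elapsed = max(time.time() - start, 1e-9)
        print(f"received {received / 1e9:.2f} GB at "
              f"{8 * received / elapsed / 1e9:.2f} Gbit/s")

    def send(host: str) -> None:
        """Push TOTAL bytes of zeros to the sink as fast as TCP allows."""
        buf = b"\x00" * CHUNK
        with socket.create_connection((host, PORT)) as conn:
            sent = 0
            while sent < TOTAL:
                conn.sendall(buf)
                sent += CHUNK

Running serve() on the receiving node and send() with the receiver's host name on the sending node gives a rough upper bound on what the NIC, driver and TCP stack can sustain before disks or tapes enter the chain.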

Topology as proposed mid 2002
[Topology diagram: disk servers and Fast Ethernet nodes 1-96 attached to E1 OAS edge switches, interconnected with Gigabit copper, Gigabit fiber and 10 Gigabit links]

Status end 2002
- Problems with the 10 Gbps interfaces in the ER16
  - Very reduced setup used to isolate the problem
  - Took some time to diagnose and fix
  - 10 Gbps cards are not back at CERN yet
- E1 OAS not fully tested yet
- Therefore, the proposed setup has not been installed
- New request for more connections (HP): 32 nodes with 1 and 10 Gbps NICs announced
- New request for higher-speed data challenges: 1 GByte/s through the complete chain (from CPU to tapes, via disks)
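For a sense of scale, the 1 GByte/s request converts to roughly 8 Gbit/s sustained. The back-of-the-envelope below is only a sketch; the ~95% usable-payload factor is an assumption, not a figure from the slides.

    # Rough sizing for the 1 GByte/s data-challenge target (sketch;
    # the 95% usable-payload factor is an assumption).
    TARGET_GBYTE_S = 1.0                    # end-to-end goal: CPU -> disks -> tapes
    GIGE_PAYLOAD_GBIT_S = 1.0 * 0.95        # assumed usable payload on a GigE link
    TENGIGE_PAYLOAD_GBIT_S = 10.0 * 0.95    # assumed usable payload on a 10 GigE link

    target_gbit_s = TARGET_GBYTE_S * 8
    print(f"{target_gbit_s:.1f} Gbit/s sustained is needed, i.e.")
    print(f"~{target_gbit_s / GIGE_PAYLOAD_GBIT_S:.0f} fully loaded GigE streams, or")
    print(f"~{target_gbit_s / TENGIGE_PAYLOAD_GBIT_S:.1f} fully loaded 10 GigE links")

In other words, the target fits on a single 10 Gbps path in principle, but needs eight or nine Gigabit-attached servers working in parallel if only 1 Gbps NICs are available.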

Proposed extension 1Q2003
[Topology diagram: the mid-2002 layout extended with a 32-node Itanium cluster and a 200+ node Pentium cluster, alongside the disk servers and Fast Ethernet nodes 1-96, still built around the E1 OAS switches with Gigabit copper, Gigabit fiber and 10 Gigabit links]
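When extending the cluster, what matters is how the aggregate edge bandwidth of the new nodes compares with the uplink capacity towards the core. The sketch below only illustrates the calculation; the example port counts and uplink sizes are assumptions, not values read off the diagram.

    # Edge-to-uplink oversubscription estimate (illustrative; the example
    # port counts and uplink sizes are assumptions).
    def oversubscription(edge_ports: int, edge_gbit: float, uplink_gbit: float) -> float:
        """Ratio of aggregate edge bandwidth to total uplink bandwidth."""
        return (edge_ports * edge_gbit) / uplink_gbit

    # e.g. 48 GigE-attached nodes sharing one 10 GigE uplink
    print(f"48 x 1 GigE over 1 x 10 GigE: {oversubscription(48, 1.0, 10.0):.1f}:1")
    # e.g. 96 Fast Ethernet ports sharing 4 x 1 GigE of uplink
    print(f"96 x 100 Mbit over 4 x GigE:  {oversubscription(96, 0.1, 4.0):.1f}:1")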

What next for 2003? Proposals
- Improve/adapt the current setup to accommodate higher data-challenge speeds
- Work toward a solution for connecting more HPs at 10 Gbps
- Interconnect this heterogeneous cluster with WAN equipment to validate the "gridification"