Connectivity layout of the N7 switch (17.01.2005)

Presentation transcript:

Connectivity layout of the N7 switch (17.01.2005)
4 * 1 Gb uplinks to backbone
2 * 10 * 1 Gb interconnections between the N7 switches
10 Gbit limits:
1. 3 Gbit/s per 10 * 1 Gb ports
2. 9 Gbit/s per slot
3. non-blocking backplane
4. ~6.5 Gbit/s per 10 Gbit port
Diagram labels: Foundry switch (24 * 1 Gb ports), 24 disk servers (lxfsrk), 24 tape servers, uplinks, tbed CPU nodes, lxshare disk servers
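The per-group and per-slot figures on this slide allow a quick oversubscription estimate. The sketch below (Python, not part of the original slides) applies those caps to the 24 disk servers and 24 tape servers; how the 1 Gb ports are distributed over port groups and slots is an assumed layout for illustration.

```python
# Rough oversubscription estimate for the N7 layout above (a sketch, not a
# measurement).  The caps come from the slide; the grouping of ports onto
# slots is an assumption made here for illustration.

GROUP_SIZE = 10        # 1 Gb ports per group
GROUP_LIMIT = 3.0      # Gbit/s per 10 * 1 Gb port group (limit 1)
SLOT_LIMIT = 9.0       # Gbit/s per slot (limit 2)
GROUPS_PER_SLOT = 3    # assumed: three port groups per slot

def max_throughput(n_ports):
    """Aggregate Gbit/s deliverable to n_ports 1 Gb ports under the caps above."""
    full, rest = divmod(n_ports, GROUP_SIZE)
    per_group = [GROUP_LIMIT] * full
    if rest:
        per_group.append(min(float(rest), GROUP_LIMIT))  # partial group
    total = 0.0
    for i in range(0, len(per_group), GROUPS_PER_SLOT):
        total += min(sum(per_group[i:i + GROUPS_PER_SLOT]), SLOT_LIMIT)
    return total

servers = 24 + 24  # 24 disk servers + 24 tape servers, 1 Gb each (from the slide)
print(f"nominal {servers} Gbit/s, deliverable ~{max_throughput(servers):.0f} Gbit/s")
# -> nominal 48 Gbit/s, deliverable ~15 Gbit/s under these assumptions
```

Under this assumed grouping, the 48 nominal 1 Gb server ports can only be served at roughly 15 Gbit/s aggregate, which is why the per-group and per-slot limits dominate the layout.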

Connectivity layout of the N7 switch (> )
4 * 1 Gb uplinks to backbone
2 * 10 * 1 Gb interconnections between the N7 switches
10 Gbit limits:
1. 3 Gbit/s per 10 * 1 Gb ports
2. 9 Gbit/s per slot
3. non-blocking backplane
4. ~6.5 Gbit/s per 10 Gbit port
5. 2 * 10 Gbit ports are shared
Diagram labels: Foundry switch (24 * 1 Gb ports), 24 disk servers (lxfsrk), 12 tape servers (bld 613), 12 tape servers (bld 512 basement, 10 Gbit link), uplinks, tbed CPU nodes, lxshare disk servers
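A minimal sketch of what limit 5 ("2 * 10 Gbit ports are shared") implies for this revised layout, assuming the two shared ports (each usable up to ~6.5 Gbit/s per limit 4) carry both the inter-switch traffic and the link to the basement tape servers; the demand figures below are hypothetical.

```python
# Effect of sharing the 2 * 10 Gbit ports between inter-switch traffic and the
# bld 512 basement tape-server link.  Capacity figures are from the slide;
# the traffic split is hypothetical.

USABLE_PER_10G_PORT = 6.5   # Gbit/s usable per 10 Gbit port (limit 4)
SHARED_PORTS = 2            # "2 * 10 Gbit ports are shared" (limit 5)

def share(demands):
    """Scale each demand down proportionally when the shared ports are oversubscribed."""
    capacity = SHARED_PORTS * USABLE_PER_10G_PORT
    total = sum(demands.values())
    factor = min(1.0, capacity / total) if total else 1.0
    return {name: round(d * factor, 2) for name, d in demands.items()}

# Hypothetical demands on the shared ports:
print(share({"inter-switch": 10.0, "bld 512 basement tape": 5.0}))
# capacity 13 Gbit/s vs 15 Gbit/s demand -> both flows scaled by ~0.87
```

Proportional scaling is just one way to model the sharing; the slide does not say how the two ports arbitrate traffic.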

Data flow within the N7 switch (> )
2 * 10 * 1 Gb interconnections between the N7 switches
Flow figures from the diagram: 2 Gbit, 4 Gbit in + 2 Gbit out, 2 Gbit, 4 Gbit
Limits:
1. 3 Gbit/s per 10 * 1 Gb ports
2. 9 Gbit/s per slot
3. non-blocking backplane
4. ~6.5 Gbit/s per 10 Gbit port
5. 2 * 10 Gbit ports are shared
Diagram labels: Foundry switch (24 * 1 Gb ports), 24 disk servers (lxfsrk), 12 tape servers (bld 613), 12 tape servers (bld 512 basement), uplinks, tbed CPU nodes (LDC), tbed CPU nodes (GDC), lxshare disk servers
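The quoted flow figures can be checked against the limits listed above. In the sketch below, the mapping of each figure onto a particular switch resource is an assumption, since the diagram associations are not fully recoverable from the transcript.

```python
# Check the flow figures quoted on the slide against the switch limits.
# The assignment of each flow to a resource is assumed for illustration.

LIMITS = {
    "per 10 * 1 Gb port group": 3.0,   # Gbit/s, limit 1
    "per slot": 9.0,                   # Gbit/s, limit 2
    "per 10 Gbit port (usable)": 6.5,  # Gbit/s, limit 4
}

# Assumed mapping of the quoted flow figures (Gbit/s) to switch resources:
flows = {
    "per 10 * 1 Gb port group": 2.0,   # "2 Gbit" leg
    "per slot": 4.0 + 2.0,             # "4 Gbit in + 2 Gbit out"
    "per 10 Gbit port (usable)": 4.0,  # "4 Gbit" leg towards lxshare disk servers
}

for resource, used in flows.items():
    cap = LIMITS[resource]
    status = "ok" if used <= cap else "OVER"
    print(f"{resource}: {used:.1f} / {cap:.1f} Gbit/s ({status})")
```

With this assumed mapping, none of the quoted flows exceeds its cap, though the 6 Gbit/s through a single slot leaves only modest headroom against the 9 Gbit/s limit.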