Server and Storage Connectivity Solutions
April 2009
Efficient Solutions for Efficient Computing
- Enterprise Data Center
- High-Performance Computing
- Cloud Computing
Leading connectivity solution provider for servers and storage
Leading End-to-End Data Center Products
End-to-end product lines connecting blade and rack servers to storage: adapter ICs and cards, switch ICs and systems, gateway ICs and systems, and cables.
- Adapter ICs & Cards: dual-port 10/20/40Gb/s InfiniBand and 10GigE with FCoE and Data Center Ethernet
- Switch ICs & Systems: InfiniScale® IV 36-port 40Gb/s switch silicon device; 36 to 324-port 40Gb/s InfiniBand switches
- Gateway ICs & Systems: 10/20/40G InfiniBand or 10GigE to 10GigE and/or 2/4/8G Fibre Channel
- Cables: robust active and passive cables supporting data rates up to 40Gb/s
InfiniBand Leadership and Market Expansion
- InfiniBand market and performance leader
  - First to market with 20Gb/s and 40Gb/s adapters and switches
  - Mature, 4th-generation silicon and software
- Strong industry adoption of 40Gb/s InfiniBand
  - ~34% of 4Q 2008 revenue
  - Roadmap to 80Gb/s in 2010
- Expansion into high transaction processing and virtualization
  - Cloud computing, Oracle Database, VMware I/O consolidation
  - Data distribution and algorithmic trading for financial services
- Expansion into commercial HPC
  - Automotive, digital media, EDA, oil & gas, and simulation
10 Gigabit Ethernet Solutions Leadership
- Ethernet leadership
  - First to market with a dual-port PCIe Gen2 10GigE adapter
  - First to market with 10GigE with FCoE hardware offload
  - Awarded “Best of Interop” 2008
- Industry-wide acceptance and certification
  - Multiple design wins and deployments: servers, LAN on Motherboard (LOM), and storage systems
  - VMware Virtual Infrastructure 3.5
  - Citrix XenServer 4.1 in-the-box support
  - Windows Server 2003 & 2008, Red Hat 5, SLES 11
Maximizing Productivity Since 2001
Shanghai Supercomputer Center
- China 863 Grid program: the biggest government project in China's IT industry
- Dawning 5000A supercomputer
  - 1,920-node Dawning blade system, 180.6 TFlops, ~80% efficiency
  - Highest-ranked system based on Windows HPC Server 2008
- Built on Mellanox ConnectX adapters and switch systems
  - Delivering the highest scalability for Windows-based clusters
Roadrunner – The First Petaflop System
- Largest supercomputer in the world
  - Los Alamos National Lab, #1 on the June 2008 Top500 list
  - Nearly 3x faster than the leading contenders on the Nov 2007 list
  - Usage: national nuclear weapons, astronomy, human genome science, and climate change
- Breaking through the “petaflop barrier”: more than 1,000 trillion operations per second
  - 12,960 CPUs, 3,456 tri-blade units
  - Mellanox ConnectX 20Gb/s InfiniBand adapters
  - Mellanox InfiniScale III 20Gb/s switches
- Mellanox InfiniBand is the only scalable high-performance solution for petascale computing: scalability, efficiency, performance
Virginia Tech – 40Gb/s InfiniBand QDR System
- Center for High-End Computing Systems (CHECS)
  - CHECS research activities are the foundation for the development of next-generation, power-aware high-end computing resources
- Mellanox end-to-end 40Gb/s solution: the only 40Gb/s technology on the Top500 list
  - 324 Apple Mac Pro servers, for a total of 2,592 Intel quad-core CPU cores
  - Energy-efficient 22.3 TFlops system
- “Unlike most of the clusters I have ever used, we have never had a Linpack run failure with this cluster, not one.”
  Dr. Srinidhi Varadarajan
Mellanox InfiniBand-Accelerated HP Oracle Database Machine
- Mellanox 20Gb/s InfiniBand-accelerated rack servers and native InfiniBand Exadata Storage Servers with Oracle 11g
- Solves the I/O bottleneck between database servers and storage servers
  - At least 10X Oracle data warehousing query performance
  - Faster access to critical business information
- “Oracle Exadata outperforms anything we’ve tested to date by 10 to 15 times. This product flat-out screams.”
  Walt Litzenberger, Director of Enterprise Database Systems, CME Group, the world’s largest futures exchange
InfiniBand HCA Silicon and Cards
- Performance-driven architecture
  - MPI latency <1us; 6.6GB/s with 40Gb/s InfiniBand (bi-directional)
  - MPI message rate of >40 million/sec
  - Superior real application performance: scalability, efficiency, productivity (see the latency sketch after this list)
- Drive all new InfiniBand designs to ConnectX IB
  - Superior performance and scalability
  - Congestion control maximizes effective bandwidth
  - QoS for guaranteed service levels
  - Service-oriented I/O using dedicated channels
  - HW-based virtualization for native OS performance
  - Advanced storage functions for network consolidation
- Consider the InfiniHost families when power, size, cost, or legacy PCI-X are key design considerations
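The sub-microsecond latency figure above is the kind of number a two-node MPI ping-pong test produces. Below is a minimal sketch of such a test, assuming any standard MPI installation; the file name and iteration count are illustrative, and this is not the vendor's benchmark suite.

```c
/* Minimal MPI ping-pong latency sketch (illustrative).
 * Half the round-trip time of a small message approximates
 * the one-way MPI latency. Build with: mpicc -o pingpong pingpong.c */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    const int iters = 10000;
    char buf[8] = {0};               /* small message: latency-bound */
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)   /* divide round trip by 2 for one-way latency */
        printf("one-way latency: %.2f us\n",
               (t1 - t0) / iters / 2.0 * 1e6);
    MPI_Finalize();
    return 0;
}
```

Run with two ranks on two hosts (e.g. `mpirun -np 2 ./pingpong`); results depend on the interconnect, MPI library, and CPU binding.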
ConnectX Ethernet Benefits
- Optimized for cost, power, and board space: single chip with integrated PHYs
- Highest bandwidth: two line-rate 10GigE ports over PCIe 2.0
- HW-based virtualization for native OS performance and better resource utilization
- Network convergence: Converged Enhanced Ethernet (CEE), Fibre Channel over Ethernet (FCoE), Low Latency Ethernet (LLE)
- Efficient RDMA
- iSCSI acceleration through OS-compatible stateless offloads; line rate from 128B onwards
- Drivers for the Red Hat and SUSE distributions are available on the Mellanox web site
ConnectX Virtual Protocol Interconnect
Applications (App1 … AppX) run over a consolidated application programming interface; any protocol runs over any convergence fabric:
- Networking: TCP/IP/UDP, Sockets
- Storage: NFS, CIFS, iSCSI; NFS-RDMA, SRP, iSER; Fibre Channel; clustered storage
- Clustering: MPI, DAPL, RDS, Sockets
- Management: SNMP, SMI-S (OpenView, Tivoli, BMC, Computer Associates)
- Acceleration engines: virtualization, RDMA
- Fabrics: 10/20/40Gb/s InfiniBand; 10GigE; Data Center Ethernet; LLE
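A practical consequence of VPI is that software written against the RDMA verbs interface is the same regardless of which wire protocol a port runs. As a minimal sketch, assuming a Linux host with libibverbs installed, the following just enumerates RDMA-capable devices and a few of their capabilities; it is an illustration, not a full VPI configuration example.

```c
/* Enumerate RDMA devices via libibverbs and print basic capabilities.
 * The same verbs API serves InfiniBand and Ethernet ports under VPI.
 * Build with: gcc -o rdma_list rdma_list.c -libverbs */
#include <infiniband/verbs.h>
#include <stdio.h>

int main(void)
{
    int num;
    struct ibv_device **list = ibv_get_device_list(&num);
    if (!list) { perror("ibv_get_device_list"); return 1; }

    for (int i = 0; i < num; i++) {
        struct ibv_context *ctx = ibv_open_device(list[i]);
        if (!ctx) continue;

        struct ibv_device_attr attr;
        if (!ibv_query_device(ctx, &attr))   /* 0 on success */
            printf("%s: %d port(s), max_qp=%d\n",
                   ibv_get_device_name(list[i]),
                   attr.phys_port_cnt, attr.max_qp);
        ibv_close_device(ctx);
    }
    ibv_free_device_list(list);
    return 0;
}
```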
40Gb/s InfiniBand Switch Systems
- Scalable switch architecture
  - DDR (20Gb/s) and QDR (40Gb/s)
  - Latency as low as 100ns
  - Adaptive routing, congestion management, QoS
  - Multiple subnets, mirroring
- MTS3600: 1U, 36 QSFP ports, up to 2.88Tb/s switching capacity
- MTS3610: 19U, 18-slot chassis, 18 QSFP ports per switch blade, up to 25.9Tb/s switching capacity
- MTS3630: 648-port chassis, up to 51.8Tb/s switching capacity
- Accelerating QDR deployment (capacity arithmetic below)
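The quoted switching capacities are consistent with a simple full-duplex calculation, port count times line rate times two:

```latex
C_{\text{switch}} = N_{\text{ports}} \times R_{\text{line}} \times 2 \quad (\text{full duplex})
\begin{aligned}
\text{MTS3600}: &\; 36 \times 40\,\text{Gb/s} \times 2 = 2.88\,\text{Tb/s}\\
\text{MTS3610}: &\; (18\ \text{slots} \times 18\ \text{ports}) \times 40\,\text{Gb/s} \times 2 = 25.92\,\text{Tb/s}\\
\text{MTS3630}: &\; 648 \times 40\,\text{Gb/s} \times 2 = 51.84\,\text{Tb/s}
\end{aligned}
```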
BridgeX Enables True IO Unification
- Cost-effective bridging: InfiniBand → Ethernet, InfiniBand → Fibre Channel, Ethernet → Fibre Channel
- Protocol encapsulation, no termination (see the frame-layout sketch below)
- Full wire speed, low power
- Simplicity, scalability, and flexibility
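To make "encapsulation, no termination" concrete: the bridge wraps the original frame in an outer header rather than terminating the protocol and re-originating it. The FCoE frame layout illustrates the idea (field sizes follow the FCoE specification, EtherType 0x8906); this is a teaching sketch, not BridgeX's internal data path.

```c
/* Sketch of FCoE-style encapsulation: the Fibre Channel frame travels
 * intact inside an Ethernet frame, so the FC protocol is carried, not
 * terminated, by the bridge. */
#include <stdint.h>

#define ETH_P_FCOE 0x8906          /* FCoE EtherType */

struct eth_hdr {
    uint8_t  dst[6];               /* destination MAC */
    uint8_t  src[6];               /* source MAC */
    uint16_t ethertype;            /* 0x8906 for FCoE */
} __attribute__((packed));

struct fcoe_hdr {
    uint8_t  ver;                  /* version in the upper 4 bits */
    uint8_t  reserved[12];
    uint8_t  sof;                  /* start-of-frame delimiter */
} __attribute__((packed));

/* ... the original FC frame (header + payload) goes here, unchanged ... */

struct fcoe_trailer {
    uint32_t fc_crc;               /* CRC of the encapsulated FC frame */
    uint8_t  eof;                  /* end-of-frame delimiter */
    uint8_t  reserved[3];
} __attribute__((packed));
```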
Efficient High Performance Solutions
Diagram: servers with adapters attach to a 40Gb/s network carrying InfiniBand and FCoIB alongside 10G Ethernet and FCoE; switches and bridges (IB to Eth, IB to FC, Eth to FC) carry Eth over IB, FC over IB, and FC over Eth* traffic to InfiniBand, Ethernet, and FC storage at 10G Ethernet, 40G InfiniBand, and 8G Fibre Channel rates. (* via ecosystem products)
Coming to the Theaters…
- Adaptive routing and static routing
- Congestion control
- Virtual secured subnets
- 80Gb/s InfiniBand
- MPI offloads
- HS 2:10 means 10 links with 2-to-1 oversubscription (worked example below)
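One way to read the oversubscription shorthand (my interpretation of the slide's note, not a vendor definition): with 2-to-1 oversubscription, twice as many host-facing ports share the uplink bandwidth, so when every host transmits at once each sees at most half its link rate:

```latex
B_{\text{host}} = \frac{N_{\text{uplinks}} \times R}{N_{\text{hosts}}}
               = \frac{10 \times 40\,\text{Gb/s}}{20} = 20\,\text{Gb/s}
```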
Enabling Energy Efficiency and Cost Savings
- 10G Ethernet, 40G InfiniBand, and 8G Fibre Channel across switches, adapters, bridges, storage, and servers
- Virtualization, one wire (VPI), high-performance, complete, scalable I/O consolidation solutions
- Bottom-line benefits for IT*: 50% TCO reduction, 67% energy cost reduction, 62% infrastructure savings
  * Based on end-user testimonials
Energy Efficiency and Increased Productivity
- 10G Ethernet, 40G InfiniBand, and 8G Fibre Channel across switches, adapters, bridges, storage, and servers
- Virtualization, one wire (VPI), high-performance, complete, scalable I/O consolidation solutions
- Bottom-line benefits for IT*: 50% TCO reduction, 62% infrastructure savings, 100% performance increase
  * Based on end-user testimonials
Thank You