Server and Storage Connectivity Solutions

Presentation transcript:

Server and Storage Connectivity Solutions April 2009

Efficient Solutions for Efficient Computing
- Enterprise Data Center
- High-Performance Computing
- Cloud Computing
Leading Connectivity Solution Provider for Servers and Storage

Leading End-to-End Data Center Products
- Adapter ICs & cards: dual-port 10/20/40Gb/s InfiniBand and 10GigE with FCoE & Data Center Ethernet
- Switch ICs & systems: InfiniScale® IV 36-port 40Gb/s switch silicon device; 36- to 324-port 40Gb/s InfiniBand switches
- Gateway ICs & systems: 10/20/40G InfiniBand or 10GigE to 10GigE and/or 2/4/8G Fibre Channel gateway
- Cables: robust active and passive cables supporting data rates up to 40Gb/s
Connecting blade & rack servers, switches, gateways, and storage end to end

InfiniBand Leadership and Market Expansion
- InfiniBand market and performance leader
  - First to market with 20Gb/s and 40Gb/s adapters and switches
  - Mature, 4th-generation silicon and software
- Strong industry adoption of 40Gb/s InfiniBand: ~34% of 4Q 2008 revenue
- Roadmap to 80Gb/s in 2010
- Expansion into high transaction processing and virtualization
  - Cloud computing, Oracle database, VMware I/O consolidation
  - Data distribution and algorithmic trading for financial services
- Expansion into commercial HPC: automotive, digital media, EDA, oil & gas, and simulation

10 Gigabit Ethernet Solutions Leadership
- Ethernet leadership
  - First to market with a dual-port PCIe Gen2 10GigE adapter
  - First to market with 10GigE FCoE hardware offload
  - Awarded “Best of Interop” 2008
- Industry-wide acceptance and certification
  - Multiple design wins & deployments: servers, LAN on Motherboard (LOM), and storage systems
  - In-the-box support in VMware Virtual Infrastructure 3.5 and Citrix XenServer 4.1
  - Windows Server 2003 & 2008, Red Hat 5, SLES 11

Maximizing Productivity Since 2001

Shanghai Supercomputer Center
- China 863 Grid program: the biggest government project in China’s IT industry
- Dawning 5000A supercomputer: 1,920-node Dawning blade system, 180.6 TFlops, ~80% efficiency
- Highest-ranked Windows HPC Server 2008 based system
- Mellanox ConnectX adapter and switch based systems, delivering the highest scalability for Windows-based clusters

Roadrunner – The First Petaflop System
- Largest supercomputer in the world: Los Alamos National Lab, #1 on the June 2008 Top500 list
- Nearly 3x faster than the leading contenders on the Nov 2007 list
- Usage: national nuclear weapons research, astronomy, human genome science, and climate change
- Breaking through the “petaflop barrier”: more than 1,000 trillion operations per second
- 12,960 CPUs in 3,456 tri-blade units
- Mellanox ConnectX 20Gb/s InfiniBand adapters and Mellanox InfiniScale III 20Gb/s switches
- Mellanox InfiniBand is the only scalable, high-performance solution for petascale computing: scalability, efficiency, performance

Virginia Tech – 40Gb/s InfiniBand QDR System
- Center for High-End Computing Systems (CHECS): research activities form the foundation for the development of next-generation, power-aware high-end computing resources
- Mellanox end-to-end 40Gb/s solution – the only 40Gb/s technology on the Top500 list
- 324 Apple Mac Pro servers, 2,592 Intel quad-core CPU cores in total
- Energy-efficient 22.3 TF system
- “Unlike most of the clusters I have ever used, we have never had a Linpack run failure with this cluster, not one.” – Dr. Srinidhi Varadarajan

Mellanox InfiniBand-Accelerated HP Oracle Database Machine
- Mellanox 20Gb/s InfiniBand-accelerated rack servers and native InfiniBand Exadata Storage Servers with Oracle 11g
- Solves the I/O bottleneck between database servers and storage servers
- At least 10X Oracle data warehousing query performance; faster access to critical business information
- “Oracle Exadata outperforms anything we’ve tested to date by 10 to 15 times. This product flat-out screams.” – Walt Litzenberger, Director of Enterprise Database Systems, CME Group (world’s largest futures exchange)

InfiniBand HCA Silicon and Cards
- Performance-driven architecture
  - MPI latency <1us; 6.6GB/s bi-directional bandwidth with 40Gb/s InfiniBand
  - MPI message rate of >40 million messages/sec
  - Superior real application performance: scalability, efficiency, productivity
- Drive all new InfiniBand designs to ConnectX IB
  - Superior performance and scalability
  - Congestion control maximizes effective bandwidth
  - QoS for guaranteed service levels
  - Service-oriented I/O using dedicated channels
  - HW-based virtualization for native OS performance
  - Advanced storage functions for network consolidation
- Consider the InfiniHost families when power, size, cost, or legacy PCI-X are key design considerations
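The latency and message-rate figures above are normally produced with MPI-level microbenchmarks (for example the OSU suite). As a rough illustration only, here is a minimal, generic ping-pong sketch in C/MPI that reports average one-way latency between two ranks; the iteration count, message size, and lack of warm-up are simplifying assumptions, and this is not a Mellanox benchmark.

/* Minimal MPI ping-pong latency sketch (illustrative only).
 * Rank 0 and rank 1 bounce a small message back and forth and
 * report the average one-way latency. Build with: mpicc -O2 pingpong.c */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    const int iters = 10000;
    char buf[8] = {0};                      /* small message: latency-bound */
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)                          /* round trip / 2 = one-way latency */
        printf("avg one-way latency: %.2f us\n",
               (t1 - t0) * 1e6 / (2.0 * iters));

    MPI_Finalize();
    return 0;
}

Run with two ranks placed on different nodes (e.g. mpirun -np 2 ...) so the exchange crosses the interconnect rather than shared memory.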

ConnectX Ethernet Benefits
- Optimized for cost, power, and board space: single chip with integrated PHYs
- Highest bandwidth: two line-rate 10GigE ports over PCIe 2.0
- HW-based virtualization for native OS performance and better resource utilization
- Network convergence: Converged Enhanced Ethernet (CEE), Fibre Channel over Ethernet (FCoE), Low Latency Ethernet (LLE), efficient RDMA
- iSCSI acceleration through OS-compatible stateless offloads; line rate from 128B messages onwards
- Drivers for the Red Hat and SUSE distributions are available on the Mellanox web site

ConnectX Virtual Protocol Interconnect
- Applications (App1 … AppX) run over a consolidated application programming interface
- Networking: TCP/IP/UDP, Sockets
- Storage: NFS, CIFS, iSCSI, NFS-RDMA, SRP, iSER, Fibre Channel, clustered storage
- Clustering: MPI, DAPL, RDS, Sockets
- Management: SNMP, SMI-S; OpenView, Tivoli, BMC, Computer Associates
- Acceleration engines for networking, clustering, storage, virtualization, and RDMA
- Fabrics: 10/20/40Gb/s InfiniBand, 10GigE, Data Center Ethernet, LLE
- Any protocol over any convergence fabric
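Each of the protocol families above ultimately drives the same RDMA-capable device through the verbs interface. As a rough, generic illustration (standard libibverbs calls rather than anything Mellanox-specific; the device names shown and the use of port 1 are assumptions about the local system), this sketch enumerates RDMA devices and queries the first port of the first one.

/* Minimal libibverbs sketch: list RDMA devices and query one port.
 * Build with: gcc rdma_list.c -libverbs */
#include <infiniband/verbs.h>
#include <stdio.h>

int main(void)
{
    int num = 0;
    struct ibv_device **list = ibv_get_device_list(&num);
    if (!list || num == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }

    for (int i = 0; i < num; i++)               /* e.g. mlx4_0, mlx4_1, ... */
        printf("device %d: %s\n", i, ibv_get_device_name(list[i]));

    struct ibv_context *ctx = ibv_open_device(list[0]);
    if (ctx) {
        struct ibv_port_attr attr;
        if (ibv_query_port(ctx, 1, &attr) == 0)  /* port numbering starts at 1 */
            printf("port 1: state=%d active_width=%d\n",
                   (int)attr.state, (int)attr.active_width);
        ibv_close_device(ctx);
    }

    ibv_free_device_list(list);
    return 0;
}

Upper layers such as MPI, SRP, or iSER build their queue pairs and memory registrations on top of the same device handles, which is what lets one adapter carry networking, storage, and clustering traffic at once.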

40Gb/s InfiniBand Switch Systems
- Scalable switch architecture: DDR (20Gb/s) and QDR (40Gb/s)
- Latency as low as 100ns
- Adaptive routing, congestion management, QoS
- Multiple subnets, mirroring
- MTS3600: 1U, 36 QSFP ports, up to 2.88Tb/s switching capacity
- MTS3610: 19U, 18-slot chassis, 18 QSFP ports per switch blade, up to 25.9Tb/s switching capacity
- MTS3630: 648-port chassis, up to 51.8Tb/s switching capacity
- Accelerating QDR deployment
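The quoted switching capacities are simply port count × 40Gb/s per direction × 2 directions. The short C sketch below (my own arithmetic check, using the port counts from this slide: 36, then 18 blades × 18 QSFP = 324, then 648) reproduces the 2.88, 25.9, and 51.8 Tb/s figures to rounding.

/* Aggregate switching capacity = ports * 40 Gb/s * 2 (full duplex). */
#include <stdio.h>

int main(void)
{
    const char *names[] = {"MTS3600", "MTS3610", "MTS3630"};
    const int   ports[] = {36, 18 * 18, 648};       /* per-system QSFP port counts */

    for (int i = 0; i < 3; i++) {
        double tbps = ports[i] * 40.0 * 2.0 / 1000.0;   /* Gb/s -> Tb/s */
        printf("%s: %3d ports -> %.2f Tb/s\n", names[i], ports[i], tbps);
    }
    return 0;
}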

BridgeX Enables True I/O Unification
- Cost-effective bridging: InfiniBand → Ethernet, InfiniBand → Fibre Channel, Ethernet → Fibre Channel
- Protocol encapsulation, no termination
- Full wire speed, low power
- Simplicity, scalability, and flexibility

Efficient High Performance Solutions
- End-to-end fabric: adapters, switches, and bridges (IB to Eth, IB to FC, Eth to FC) connecting servers with InfiniBand, Ethernet, and FC storage
- Link speeds: 40G InfiniBand (with FCoIB), 10G Ethernet (with FCoE), 8G Fibre Channel
- Over the 40Gb/s network: InfiniBand, Ethernet over IB, FC over IB, and FC over Ethernet*
  * via ecosystem products

Coming to the Theaters…
- Adaptive routing and static routing
- Congestion control
- Virtual secured subnets
- 80Gb/s InfiniBand
- MPI offloads
(Note: 2:10 denotes 10 links with 2-to-1 oversubscription)

Enabling Energy Efficiency and Cost Savings
- Complete, scalable I/O consolidation solutions: 10G Ethernet, 40G InfiniBand, and 8G Fibre Channel across switches, adapters, bridges, storage, and servers
- Virtualization over one wire with high-performance VPI
- Bottom-line benefits for IT*: 50% TCO reduction, 67% energy cost reduction, 62% infrastructure saving
  * Based on end-user testimonials

Energy Efficiency and Increased Productivity
- Complete, scalable I/O consolidation solutions: 10G Ethernet, 40G InfiniBand, and 8G Fibre Channel across switches, adapters, bridges, storage, and servers
- Virtualization over one wire with high-performance VPI
- Bottom-line benefits for IT*: 50% TCO reduction, 62% infrastructure saving, 100% performance increase
  * Based on end-user testimonials

Thank You
www.mellanox.com