Joint Techs Workshop: InfiniBand Now and Tomorrow

Joint Techs Workshop: InfiniBand Now and Tomorrow
Minneapolis, MN, February 12, 2007
Steve Lyness, Sr. Director, HPC Systems Engineering, QLogic

A LARGE InfiniBand ecosystem … and growing
- Silicon providers
- IB systems providers
- IB solutions providers

InfiniBand Today

Adapters in multiple form factors
- PCIe SDR: message-rate-sensitive apps
- HTX SDR: message-rate-sensitive apps
- PCIe DDR: bandwidth-sensitive apps
- Mezzanine cards for HP, IBM, Dell, and Sun

Switches and Gateways
- 24-port edge switches, SDR and DDR
- Director-class switches, 24 to 288 ports
- Virtual I/O gateways and routers: 10GbE and 4 Gb Fibre Channel VIO gateways; InfiniBand, GigE, and 2 Gb FC routers
[Diagram: 19-inch-rack chassis sizes, from 2U (24 ports) through 4U (48 ports) and 7U (96 and 144 ports) up to 14U (288 ports)]

InfiniBand-Attached Storage using SRP

End-to-End InfiniBand Solutions for HPC … and more
[Diagram: servers with IB adapters connect over one wire at 10-20 Gb/s to an IB edge fabric switch, then at 10-60 Gb/s to an IB multi-protocol director; Fibre Channel and Ethernet gateways in the director reach a Fibre Channel SAN and an Ethernet network alongside native InfiniBand storage]

OpenFabrics SCinet at SC06
[Diagram: the SCinet NOC (booth 1545) fabric links Internet2, a DDN S2A9500, a Cisco 4948-10GE, a Fujitsu XG1200, an LSI/Engenio 6498, a QLogic 9120, a Cisco SFS 7000D, Voltaire ISR 9096 and ISR 9024 switches, several QLogic 9024 edge switches, a PANTAmatrix system with storage, and an XNet 1848; link types are IB 4X DDR (20 Gb/s, MPO fiber), IB 4X SDR (10 Gb/s, MPO fiber), 10 Gig Ethernet (SM fiber), and 1 Gig Ethernet]
IB exhibitors: Ames Lab (217), Appro (1634), ASUS (612), Cisco (1035), HLRS (239), Intel (1523), Mellanox (1535), NNSA-ASC (1217), Pittsburgh SC (1049), QLogic (1024), SilverStorm (216), Sun Microsystems (605), Tyan (240), Voltaire (942)

Advances to the InfiniBand Ecosystem … Coming Soon!

Innovation at the host …
- Increased number of "IB down" motherboards and blades
- DDR cards for the HTX bus
- Increased message rates and decreased latency for DDR
- Cards that can be configured as InfiniBand or 10GigE
- Quad Data Rate (QDR): 40 Gb/s
- New cabling solutions, including fiber that is cost-competitive with copper, active copper, and QSFP connectors (vs. CX4 today)

Merged Linux Standards+ Stack
[Diagram: the QuickSilver stack (QuickSilver MPI, subnet manager, and FastFabric tools; VAPI; SRP+, IPoIB, VNIC+) and the standard OpenFabrics/OFED stack converge into a single accelerated standards-plus stack. MPI support spans QLogic MPICH, MPICH2, Open MPI, HP MPI, Intel MPI, Scali MPI, MVAPICH, and MVAPICH2; upper layers include GPFS, Lustre, NFS, Oracle, standard TCP/IP and sockets (SDP, RDS, IPoIB), standard SCSI (SRP), VNIC, uDAPL, and the PSM API; underneath sit the ipath driver for InfiniPath adapters and mthca with the Tavor and Arbel VPDs for QuickSilver adapters]
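Whichever side of the merge an application came from, it lands on the same OFED verbs layer. As a minimal sketch (not from the slides, assuming OFED's libibverbs C API), here is how a program enumerates the HCAs the merged stack exposes:

    /* Minimal sketch: list InfiniBand devices via OFED's libibverbs.
     * Build with: gcc list_hcas.c -libverbs */
    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num_devices;
        struct ibv_device **list = ibv_get_device_list(&num_devices);
        if (!list) { perror("ibv_get_device_list"); return 1; }

        for (int i = 0; i < num_devices; i++) {
            struct ibv_context *ctx = ibv_open_device(list[i]);
            if (!ctx) continue;
            struct ibv_device_attr attr;
            if (!ibv_query_device(ctx, &attr))
                printf("%s: %d port(s), max_qp=%d\n",
                       ibv_get_device_name(list[i]),
                       attr.phys_port_cnt, attr.max_qp);
            ibv_close_device(ctx);
        }
        ibv_free_device_list(list);
        return 0;
    }

The same binary runs over the ipath or mthca driver, which is the point of the merged stack: one verbs API above, vendor-specific providers below.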

… and fully certified support for Windows
[Diagram: the OpenFabrics Windows stack. User mode: Windows applications and MPI2* sit over the Winsock socket switch with a WinSock provider, WSD SAN provider, and SDP SPI**, plus the access layer library and verbs provider library for kernel bypass. Kernel mode: an IPoIB NDIS miniport under TCP/UDP/ICMP and IP, SDP**, an SRP miniport under StorPort, VNIC**, management tools, the access layer, the verbs provider driver, and the HCA hardware access layer above the HCA hardware.
* Windows Compute Cluster Server 2003  ** Will be available in the future]
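The point of the socket-switch design is that ordinary Winsock code needs no changes to be accelerated. As a hedged sketch (not from the slides; the host name and port are hypothetical placeholders), a plain Winsock TCP client that the WSD SAN provider could transparently carry over InfiniBand:

    /* Minimal sketch: an unmodified Winsock TCP client. Under Windows
     * OpenFabrics, the socket switch / WSD SAN provider can accelerate
     * this code over IB without source changes. Link with ws2_32.lib. */
    #include <winsock2.h>
    #include <ws2tcpip.h>
    #include <stdio.h>

    int main(void)
    {
        WSADATA wsa;
        if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0) return 1;

        struct addrinfo hints = {0}, *res;
        hints.ai_family = AF_INET;
        hints.ai_socktype = SOCK_STREAM;
        /* "compute-node-01" and port 5000 are illustrative only */
        if (getaddrinfo("compute-node-01", "5000", &hints, &res) != 0) {
            WSACleanup(); return 1;
        }

        SOCKET s = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        if (s != INVALID_SOCKET &&
            connect(s, res->ai_addr, (int)res->ai_addrlen) == 0) {
            const char msg[] = "hello over IB (or plain TCP)";
            send(s, msg, (int)sizeof msg, 0);
            closesocket(s);
        }
        freeaddrinfo(res);
        WSACleanup();
        return 0;
    }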

Hardware Innovation: staying steps ahead of other interconnects
- QDR switch chips: enabling QDR to the host (40 Gb/s) and up to 120 Gb/s (12X QDR) switch-to-switch
- Higher-port-count switches: with n-port switch chips, the maximum port count is realistically limited to n²/2; for example, a 32-port chip yields 32 × 16 = 512 ports (see the sketch below)
- Mainstream InfiniBand routers
- Mainstream long-haul InfiniBand
- Refresh of Ethernet and Fibre Channel gateways
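To make the port-count arithmetic concrete, a small sketch (assuming the slide's n²/2 rule describes a two-tier fabric of n-port chips; note that n = 24 gives 24²/2 = 288, matching the 288-port director in the product line above):

    /* Worked example of the slide's rule: a two-tier fabric built
     * from n-port switch chips tops out at roughly n*n/2 ports. */
    #include <stdio.h>

    int main(void)
    {
        int radixes[] = { 24, 32 };  /* chip radixes cited in the talk */
        for (int i = 0; i < 2; i++) {
            int n = radixes[i];
            printf("%2d-port chip -> max ~%3d ports (%d*%d/2)\n",
                   n, n * n / 2, n, n);
        }
        return 0;
    }

Running it prints 288 ports for the 24-port chip and 512 for the 32-port chip, the two figures the slide deck uses.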

Software Innovation: Enterprise Ready!
New enterprise features and functions:
- QoS
- Multi-path and adaptive routing
- Partitioning
- Server boot options
- Enhanced security
- Congestion avoidance
And then … even more robust management and diagnostics:
- GUI- and log-based event notification
- Fabric health monitoring
- Automated problem resolution
- Congestion management

Thank you!
Steve Lyness, QLogic Corp.
steve.lyness@qlogic.com
(610) 233-4881 (office), (610) 733-7449 (cell)