Interconnect Your Future. Gilad Shainer, VP of Marketing, Mellanox Technologies, December 2013.

Presentation transcript:

Interconnect Your Future. Gilad Shainer, VP of Marketing, December 2013.

Leading Supplier of End-to-End Interconnect Solutions. Comprehensive end-to-end InfiniBand and Ethernet portfolio: ICs, adapter cards, switches/gateways, cables/modules, Metro/WAN, and host/fabric software. Comprehensive end-to-end software accelerators and management: MXM (Mellanox Messaging Acceleration), FCA (Fabric Collectives Acceleration), UFM (Unified Fabric Management), VSA (Storage Accelerator, iSCSI), and UDA (Unstructured Data Accelerator).

Mellanox InfiniBand Paves the Road to Exascale Computing. Accelerating half of the world's Petascale systems; Mellanox Connected Petascale system examples.

TOP500: InfiniBand-Accelerated Petascale-Capable Machines. Mellanox FDR InfiniBand systems tripled from Nov 2012 to Nov 2013, accelerating 63% of the Petaflop-capable systems (12 systems out of 19).

FDR InfiniBand Delivers the Highest Return on Investment. Application performance comparison (higher is better); source: HPC Advisory Council.

Connect-IB: Architectural Foundation for Exascale Computing.

Mellanox Connect-IB, the World's Fastest Adapter:
- The 7th generation of Mellanox interconnect adapters
- World's first 100Gb/s interconnect adapter (dual-port FDR 56Gb/s InfiniBand)
- Delivers 137 million messages per second, 4X higher than the competition
- Supports the new InfiniBand scalable transport, Dynamically Connected

Connect-IB Provides the Highest Interconnect Throughput. Throughput comparison (higher is better) of Connect-IB FDR (dual port), ConnectX-3 FDR, ConnectX-2 QDR, and competing InfiniBand adapters; source: Prof. DK Panda. Gain your performance leadership with Connect-IB adapters.

Connect-IB Delivers the Highest Application Performance: 200% higher performance versus the competition with only 32 nodes; the performance gap increases with cluster size.

Dynamically Connected Transport Advantages
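
The slide carries only the title, but the scaling argument behind Dynamically Connected (DC) transport is straightforward to illustrate: with the classic Reliably Connected transport each process holds a queue pair per remote peer, so per-node connection state grows with the total number of processes in the job, while DC keeps a small pool of connection objects that attach to destinations on demand. A minimal sketch of that arithmetic in C; the processes-per-node, per-QP memory, and DC pool-size constants are illustrative assumptions, not Mellanox figures:

```c
#include <stdio.h>

/* Illustrative comparison of per-node connection state for fully connected
 * RC transport versus a fixed DC object pool. All constants are assumptions
 * made for this sketch, not measured Mellanox numbers. */
int main(void)
{
    const long long ppn      = 16;    /* processes per node (assumed)          */
    const long long qp_bytes = 4096;  /* approximate state per RC QP (assumed) */
    const long long dc_pool  = 64;    /* DC objects per process (assumed)      */

    printf("%8s %14s %12s %14s\n", "nodes", "RC QPs/node", "RC MB/node", "DC objs/node");
    for (long long nodes = 64; nodes <= 16384; nodes *= 4) {
        long long peers   = nodes * ppn - 1;   /* remote processes seen by each process */
        long long rc_qps  = ppn * peers;       /* grows linearly with system size       */
        long long dc_objs = ppn * dc_pool;     /* constant, independent of system size  */
        printf("%8lld %14lld %12lld %14lld\n",
               nodes, rc_qps, rc_qps * qp_bytes / (1024 * 1024), dc_objs);
    }
    return 0;
}
```

This is the advantage the slide title refers to: with DC, connection state per process stays roughly constant as the machine grows instead of scaling with the total process count.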

Accelerations for Parallel Programs: Mellanox ScalableHPC

Mellanox ScalableHPC Accelerates Parallel Applications. Built on the InfiniBand Verbs API: MXM provides reliable messaging optimized for Mellanox HCAs, a hybrid transport mechanism, efficient memory registration, and receive-side tag matching; FCA provides topology-aware collective optimization, hardware multicast, a separate virtual fabric for collectives, and CORE-Direct hardware offload. Both serve the MPI (private memory per process), SHMEM, and PGAS (logically shared memory) programming models.
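
MXM and FCA plug in underneath the standard programming models shown on the slide, so application code does not change: a plain MPI collective such as the MPI_Allreduce below is exactly the kind of operation FCA's topology-aware trees, hardware multicast, and CORE-Direct offload can accelerate, with MXM carrying the point-to-point traffic. A minimal, standard MPI example in C; nothing in the code is Mellanox-specific, since the acceleration libraries are selected by the MPI installation and runtime:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank contributes its rank number; the global sum is the collective
     * that an accelerated/offloaded library can optimize underneath. */
    double local = (double)rank, global = 0.0;
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("Allreduce over %d ranks: sum of ranks = %.0f\n", size, global);

    MPI_Finalize();
    return 0;
}
```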

Mellanox FCA Collective Scalability

Nonblocking Alltoall (Overlap-Wait) Benchmark. CORE-Direct offload allows the nonblocking Alltoall benchmark to run with almost 100% of the time spent in application compute.
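
The overlap-wait pattern behind this benchmark maps directly onto MPI-3 nonblocking collectives: post the Alltoall, compute, then wait. When the collective's progression is offloaded to the HCA (CORE-Direct), the compute phase keeps nearly the whole CPU. A sketch of the structure, assuming an MPI-3 library; the compute_for_a_while() helper, buffer sizes, and iteration count are placeholders rather than the actual benchmark code:

```c
#include <mpi.h>
#include <stdlib.h>

/* Placeholder for the application's real work between post and wait. */
static void compute_for_a_while(double *acc, long iters)
{
    for (long i = 0; i < iters; i++)
        *acc += 1.0 / (double)(i + 1);
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int size;
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int count = 1024;   /* elements sent to each destination (arbitrary) */
    int *sendbuf = malloc((size_t)size * count * sizeof(int));
    int *recvbuf = malloc((size_t)size * count * sizeof(int));
    for (int i = 0; i < size * count; i++)
        sendbuf[i] = i;

    MPI_Request req;
    double acc = 0.0;

    /* Post the nonblocking Alltoall, overlap it with computation, then wait.
     * With collective offload the exchange progresses on the adapter while
     * the loop below keeps the CPU busy with application work. */
    MPI_Ialltoall(sendbuf, count, MPI_INT, recvbuf, count, MPI_INT,
                  MPI_COMM_WORLD, &req);
    compute_for_a_while(&acc, 10 * 1000 * 1000);
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return (acc > 0.0) ? 0 : 1;   /* use acc so the compute loop is not trivially removable */
}
```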

Accelerator and GPU Offloads

GPUDirect 1.0. Diagrams of the transmit and receive data paths between GPU memory, system memory, the CPU/chipset, and the InfiniBand adapter, comparing the non-GPUDirect path with GPUDirect 1.0.
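
Some context the diagrams assume: before GPUDirect, the CUDA driver and the InfiniBand driver each used their own pinned region in system memory, so GPU data crossed host memory twice; GPUDirect 1.0 lets the two drivers share one pinned buffer and removes that extra copy. Either way, the application-visible pattern is still explicit host staging, sketched below with an illustrative pairwise MPI exchange (buffer size and rank pairing are arbitrary choices for the example):

```c
#include <mpi.h>
#include <cuda_runtime.h>

/* Host-staged GPU-to-GPU exchange: copy device -> host, send/receive the
 * host buffer over MPI, copy host -> device. This is the application-level
 * pattern that predates CUDA-aware MPI; GPUDirect 1.0 optimizes the
 * driver-level copies underneath it. Run with an even number of ranks. */
int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const size_t n = 1 << 20;                     /* 1M floats, illustrative size */
    float *d_buf, *h_buf;
    cudaMalloc((void **)&d_buf, n * sizeof(float));
    cudaMemset(d_buf, 0, n * sizeof(float));
    cudaMallocHost((void **)&h_buf, n * sizeof(float));  /* pinned staging buffer */

    int peer = rank ^ 1;                          /* pair ranks 0<->1, 2<->3, ... */
    if (rank % 2 == 0) {
        cudaMemcpy(h_buf, d_buf, n * sizeof(float), cudaMemcpyDeviceToHost);
        MPI_Send(h_buf, (int)n, MPI_FLOAT, peer, 0, MPI_COMM_WORLD);
    } else {
        MPI_Recv(h_buf, (int)n, MPI_FLOAT, peer, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        cudaMemcpy(d_buf, h_buf, n * sizeof(float), cudaMemcpyHostToDevice);
    }

    cudaFreeHost(h_buf);
    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}
```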

GPUDirect RDMA. Diagrams of the same transmit and receive data paths, comparing GPUDirect 1.0 with GPUDirect RDMA, where the InfiniBand adapter accesses GPU memory directly.
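
With a CUDA-aware MPI library that supports GPUDirect RDMA (MVAPICH2, benchmarked on the next slide, is one example), the staging disappears from the application as well: the device pointer is handed straight to MPI, and the adapter reads or writes GPU memory directly. A hedged sketch of the same exchange as above, assuming such a CUDA-aware MPI build:

```c
#include <mpi.h>
#include <cuda_runtime.h>

/* The same pairwise exchange, but the device pointer is passed directly
 * to MPI. Requires a CUDA-aware MPI library; with GPUDirect RDMA the
 * adapter moves the data to/from GPU memory without host staging.
 * Run with an even number of ranks. */
int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const size_t n = 1 << 20;                 /* 1M floats, illustrative size */
    float *d_buf;
    cudaMalloc((void **)&d_buf, n * sizeof(float));
    cudaMemset(d_buf, 0, n * sizeof(float));

    int peer = rank ^ 1;                      /* pair ranks 0<->1, 2<->3, ... */
    if (rank % 2 == 0)
        MPI_Send(d_buf, (int)n, MPI_FLOAT, peer, 0, MPI_COMM_WORLD);
    else
        MPI_Recv(d_buf, (int)n, MPI_FLOAT, peer, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}
```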

Performance of MVAPICH2 with GPUDirect RDMA (source: Prof. DK Panda). GPU-GPU internode MPI latency (lower is better): 67% lower latency, 5.49 usec. GPU-GPU internode MPI bandwidth (higher is better): 5X increase in throughput.

Performance of MVAPICH2 with GPUDirect RDMA: execution time of the HSG (Heisenberg Spin Glass) application with 2 GPU nodes, plotted against problem size. Source: Prof. DK Panda.

Roadmap

Technology Roadmap: One-Generation Lead over the Competition. Timeline of Mellanox interconnect generations (through 40Gb/s, 56Gb/s, and 100Gb/s) across the Terascale, Petascale, and Exascale ("Mega Supercomputers") eras, with Mellanox Connected milestones including the Virginia Tech (Apple) TOP500 system and "Roadrunner", the first Petaflop system.

Paving the Road for 100Gb/s and Beyond. Recent acquisitions are part of Mellanox's strategy to make 100Gb/s deployments as easy as 10Gb/s: copper cables (passive, active), optical cables (VCSEL), and silicon photonics.

The Only Provider of End-to-End 40/56Gb/s Solutions: from data center to Metro and WAN, on x86, ARM, and Power based compute and storage platforms. Comprehensive end-to-end InfiniBand and Ethernet portfolio: ICs, adapter cards, switches/gateways, cables/modules, Metro/WAN, and host/fabric software. The interconnect provider for 10Gb/s and beyond.

Thank You