Voltaire and the CERN openlab collaborate on a Grid technology project using InfiniBand
May 27, 2004
Patrick Chevaux, EMEA Business Development

Slide 2: Scaling Out Using Clusters (a Paradigm Shift)
- From supercomputers and mainframes to a bunch of interconnected Linux machines
- Much lower cost
- But lower reliability/MTBF, underutilization, higher complexity, and a storage bottleneck
- Our challenge: minimize the scale-out overhead

Slide 3: InfiniBand Value Proposition
- An open interconnect standard, designed from the ground up for high-performance compute and I/O clustering
- Significantly lower cost/performance than any other technology due to its architecture
- Serial, switched interconnect that scales with demand
- High throughput, low latency, low CPU utilization: 850 MBytes/sec MPI throughput, 140 ns per switch, 5.8 µs MPI end-to-end (see the benchmark sketch below)
- Enables high-speed file and block I/O, network, and IPC traffic over a single technology with RDMA
- Built-in traffic classes, QoS, flow control, partitioning, etc.
- Already supports 30 Gbit/sec links today; a new standard for 10, 20, 30, 60, and 120 Gbit/sec has been defined
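As a rough sanity check on the figures above (this note is not part of the original slides): a 4X SDR InfiniBand link signals at 10 Gbit/sec, 8b/10b encoding leaves 8 Gbit/sec (about 1 GByte/sec) of usable data rate, and protocol and host-bus overhead bring measured MPI throughput down to the ~850 MBytes/sec range. Numbers like these are typically obtained with a two-node ping-pong microbenchmark; the following is a minimal sketch under those assumptions (the 1 MiB buffer and iteration count are arbitrary choices, and the latency figure would be measured with very small messages):

```c
/* Minimal MPI ping-pong sketch (illustrative, not from the original slides).
 * Run with at least two ranks, e.g.:
 *   mpicc pingpong.c -o pingpong && mpirun -np 2 ./pingpong
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    const int iters = 1000;
    const int msg_size = 1 << 20;          /* 1 MiB payload for a bandwidth test */
    char *buf = malloc(msg_size);
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();

    for (int i = 0; i < iters; i++) {
        if (rank == 0) {                   /* rank 0 sends, then waits for the echo */
            MPI_Send(buf, msg_size, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, msg_size, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {            /* rank 1 echoes every message back */
            MPI_Recv(buf, msg_size, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, msg_size, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    if (rank == 0) {
        double one_way = (MPI_Wtime() - t0) / (2.0 * iters);  /* seconds per one-way transfer */
        printf("one-way time: %.2f us, bandwidth: %.1f MBytes/sec\n",
               one_way * 1e6, msg_size / one_way / 1e6);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}
```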

Slide 4: More on InfiniBand
- Intel has been, and still is, THE main driving force behind InfiniBand
- The InfiniBand design was the result of the combined efforts of Compaq, Dell, HP, IBM, Intel, Microsoft, and Sun
- The V1 spec was issued in 2000
- The InfiniBand Trade Association (IBTA) was created to promote the technology and provide the IB "ecosystem"
- OpenIB was created recently (early 2004) to promote open source software for InfiniBand (www.openib.org)

Slide 5: InfiniBand vs. Proprietary Interconnects

                               InfiniBand                      Myrinet
  Data rate                    10 Gbps                         2 Gbps
  Multiple protocols           Yes                             No
  Managed fabric               Yes                             No
  Dynamic reconfiguration      Yes                             No
  Proprietary                  No                              Yes
  MPI latency (µs)
  Bandwidth (MBytes/sec)       879 (2,500 over PCI Express)    248

Much better performance, and multipurpose, at lower prices.

Slide 6: Voltaire: Fast Facts
- Locations: business HQ in Boston, USA; sales reps in Japan and Europe; R&D in Herzliya, Israel
- Headcount: 70 (May 2004)
- Financing: strategic investors Hitachi and Quantum; top US and Israeli VCs; recently raised $15M, plus cash in the bank
- Partnerships and ongoing developments: Hitachi; HP, IBM, Sun, Apple, SGI; CERN

Slide 7: Voltaire InfiniBand Product Family
- 6-, 24-, 96-, and 288-port non-blocking, multi-protocol connectivity
- Integrated wire-speed network and storage virtualization
- No single point of failure: hot-swappable FRUs, non-disruptive software update and fail-over
- Modular elements for investment protection
- Advanced integrated management
- Voltaire InfiniBand Switch Router family: ISR9600, ISR9288, ISR9024, ISR6000, plus TCP/IP router, FC and iSCSI gateways, and HCAs

Slide 8: Complete Grid Infrastructure Management
- Voltaire Fabric Manager (VFM)
- Load balancing / NAT, filtering, VLANs, QoS
- TCP/IP provisioning
- Storage provisioning: logical volume management, LUN masking/mapping
- High availability, security
- Voltaire Device Manager (VDM)

Slide 9: Recent Voltaire Successes in HPC
Voltaire's competitive edge:
- Largest InfiniBand installed base, most field-tested
- Most scalable switch family, largest IBTA-certified switches
- Scalable, HPC-focused software: stacks and fabric management
- Highly scalable, with integrated GbE and storage connectivity
- Genuine open source strategy

Slide 10: Mississippi State University Installation
"… with the power, performance and scalability of Voltaire's InfiniBand switches and the robust functionality of the VoltaireVision management software, we are well enabled to achieve significant throughput and performance gains…"
Trey Breckenridge, HPC Resources and Operations Administrator, Mississippi State University ERC
- 192 dual-Xeon servers, 1.4 TFlops
- Voltaire provided switches, adapters, software, and advanced fabric management

Slide 11: Voltaire and the CERN openlab
- CERN openlab is working on architectures for high-performance compute clusters and I/O subsystems (file I/O, block I/O)
- Voltaire develops hardware and software solutions based on the InfiniBand standard
- The interconnect is ideal for HPC: it scales well in large compute clusters, with low latency, high throughput, low CPU utilization, and atomic and collective operations (see the sketch below)
- … and it is ideal for high-performance I/O
- We look forward to a fruitful cooperation
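To make the "collective operations" bullet concrete (this example is not from the original slides): many cluster and grid codes finish each computation step with a reduction across all nodes, and on most MPI implementations that reduction scales roughly logarithmically with node count, so its cost is dominated by per-hop interconnect latency. A minimal MPI sketch of such a collective, under those assumptions:

```c
/* Hypothetical example of a collective operation: every rank contributes a
 * partial result and all ranks receive the combined sum (MPI_Allreduce). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank contributes a local partial sum (here simply its rank number). */
    double local = (double)rank;
    double global = 0.0;

    /* Combine the partial sums across the whole cluster; every rank gets the result. */
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum over %d ranks = %.0f\n", size, global);

    MPI_Finalize();
    return 0;
}
```

Built with mpicc and launched with mpirun across the cluster, every rank ends up with the same combined result; the lower the fabric latency, the cheaper this synchronization point becomes as the cluster grows.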

Thank You! Have you seen our Web site today?