Interconnect Trends in High Productivity Computing
Actionable Market Intelligence for High Productivity Computing
Addison Snell, VP/GM, April 2008

Tabor: “High Productivity Computing”
– Productivity analysis studies: How do users define and measure productivity?
– Site census studies: How do users configure their systems over time?
– Budget maps: What are the components of TCO?
– New online Productivity Analysis Tool
Tabor Research is studying how HPC transcends price/performance and server sales.

Expanding Technology Contexts in HPC
Technology differentiation has pushed outside the server:
– HPC systems are no longer “self-contained”
– Interconnects, processors, OS, and storage may all come from different vendors
Only a third of the HPC budget goes to the server:
– A third goes to other products and services
– The remaining third goes to facilities and staffing; much of facilities spending is “NIB”
Network spending is about 10% of hardware spending (not including the bundled system interconnect).

Expanding Usage Contexts in HPC
Tabor Research is conducting a study on “Edge HPC”: HPC technology or application profiles outside the traditional areas of engineering, science, and analytics.
– Complex event processing: wide-scale, sensor-based, event-driven, real-time or near-real-time
– Organizational optimization: BI, data mining, logistics, inventory / supply chain management
– Virtual environments: online games, Second Life, “augmented reality,” virtual economies
– Ultra-scale: other usage of supercomputers

System (Node-to-Node) Interconnects
Data from the Tabor Research HPC Site Census:
– Almost an even thirds distribution between Ethernet, InfiniBand, and other high-end cluster interconnects (Myrinet, Quadrics)
– Almost no 10GbE as a system fabric
– InfiniBand seems to have had more impact on Ethernet than on the other high-end interconnects
– Average of 200 nodes per system (an SMP counts as 1 node)
– Data is skewed somewhat toward academic sites

LAN (Compute Room) Interconnects
Data from the Tabor Research HPC Site Census:
Ethernet dominates as the LAN fabric:
– Almost a third of Ethernet LANs have at least some 10GbE
– 10GbE outnumbers InfiniBand by two-to-one
Very little usage of anything other than Ethernet or InfiniBand:
– Some mentions of wireless, FC, and Myrinet
– Here InfiniBand seems to have taken over from the fast non-Ethernet technologies

Storage Interconnects
Data from the Tabor Comprehensive Research Study:
Over one-third of HPC users implement InfiniBand as a storage interconnect:
– More common as the storage infrastructure grows
– Native IB protocol is most common, followed closely by FC, with SATA a distant third
– IB is also the most common implementation of RDMA
On average, users place a 25%-30% premium value on doubling storage bandwidth.
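As a rough illustration of what that 25%-30% premium implies in purchasing terms (the $1.0M subsystem price below is hypothetical, not a figure from the study):

```python
def premium_for_double_bandwidth(base_price, premium_fraction):
    """Price a buyer would accept for an otherwise-identical storage
    subsystem with double the bandwidth, given the surveyed premium."""
    return base_price * (1.0 + premium_fraction)

# Hypothetical $1.0M storage subsystem; 25%-30% is the surveyed range.
base = 1_000_000
low = premium_for_double_bandwidth(base, 0.25)
high = premium_for_double_bandwidth(base, 0.30)
print(f"Acceptable price for 2x bandwidth: ${low:,.0f} - ${high:,.0f}")
# -> Acceptable price for 2x bandwidth: $1,250,000 - $1,300,000
```

In other words, buyers in the study valued a bandwidth doubling at well under the cost of a second system, which favors faster interconnects over simply adding spindles.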

Other Thoughts
Converged fabric strategies:
– About a third of users are “likely” to implement one
– Despite Ethernet’s position in the LAN, users are much more likely to consider a converged fabric on InfiniBand
Impact of multi-core at the low end:
– In the near term, it could create interest in SMP
– Is MPI equipped to handle the new level of parallelism at the socket?
– Will changes in workload management or job scheduling create new interconnect requirements?
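The MPI-at-the-socket question above is essentially a decomposition choice: run one MPI rank per core, or fewer ranks per node with threads inside each rank. A minimal sketch of the option space (the 8-core node is hypothetical, roughly a two-socket quad-core machine of that era; no MPI library is used here):

```python
def decompositions(cores_per_node):
    """Enumerate (ranks_per_node, threads_per_rank) pairs that exactly
    fill a node's cores -- the choice hybrid MPI+threads codes face
    as per-socket core counts grow."""
    return [(r, cores_per_node // r)
            for r in range(1, cores_per_node + 1)
            if cores_per_node % r == 0]

# A hypothetical 8-core node:
for ranks, threads in decompositions(8):
    print(f"{ranks} MPI rank(s) x {threads} thread(s) per rank")
```

Each extra rank per node adds interconnect endpoints and message traffic, while each extra thread per rank leans on shared memory instead, which is why per-socket core counts feed directly into interconnect requirements.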

Interested in Tabor Research?
Please help us in our research efforts! Join the HPC User Views Advisory Council:
– Access to research
– Rewards for participation
