Slide 1: It's the Interface, Stupid!
Shubu Mukherjee, VSSAD, Intel Corporation
2002 CAC Panel
Disclaimer: All opinions expressed in this presentation are mine and only mine. Intel or any other corporation is not liable for any of the material presented in these slides.
Slide 2: Interface = Software on CPU + Network Interface
[Diagram: an MPP node and a cluster node, each with CPU, memory, memory bus, and network interface; in the cluster, the network interface sits behind an I/O bus.]
The interface is the key performance bottleneck.
Slide 3: Interconnect not a bottleneck
MPP and cluster interconnects have similar properties:
+ Link bandwidth ~= tens of gigabits/second
+ Link latency ~= nanoseconds to microseconds
Slide 4: MPP Interconnect
Bandwidth is not a bottleneck for 1-to-1 communication.
[Chart comparing bus and MPP interconnect bandwidth over time; labels include: 8085 bus, Pentium system bus, Pentium 4 system bus, TMC CM-2, Cray T3D, Alpha 21364.]
Slide 5: Cluster Interconnect
Bandwidth is not a bottleneck for 1-to-1 communication.
[Chart comparing I/O bus and cluster interconnect bandwidth over time; labels include: IBM PC, 32 MHz / 20 bit, Sun SBus, 133 MHz / 64 bit, PCI-X, Myricom Myrinet, Quadrics QsNet, Mellanox InfiniBand.]
Slide 6: Interconnect latency not a bottleneck
Sample breakdown:
+ Interconnect latency ~= tens of nanoseconds
+ Network interface ~= a few microseconds
+ Software ~= tens of microseconds to milliseconds
Example (Ed Felten's thesis, 1993, on the Intel Delta; an MPP, but clusters would be similar):
+ Hardware = 1 microsecond
+ Software = 67 microseconds
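As a quick sanity check on the breakdown above, here is a minimal C sketch that simply plugs in the two figures quoted from Felten's measurement; the numbers are the slide's, and the only point is that software dominates the end-to-end cost (roughly 98.5% of it).

```c
#include <stdio.h>

int main(void)
{
    /* Rough figures quoted on this slide (Felten's measurements on the
     * Intel Delta, 1993); only the ratio matters here. */
    double hw_us = 1.0;    /* interconnect + network interface hardware */
    double sw_us = 67.0;   /* OS and protocol software on the host CPU  */
    double total = hw_us + sw_us;

    printf("hardware: %5.1f us  (%4.1f%% of total)\n", hw_us, 100.0 * hw_us / total);
    printf("software: %5.1f us  (%4.1f%% of total)\n", sw_us, 100.0 * sw_us / total);
    /* => software is roughly 98.5% of the end-to-end message latency. */
    return 0;
}
```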
Slide 7: Where are the bottlenecks?
Software interface to the interconnect:
– operating system intervention
– protocol stacks (reliable delivery, congestion control)
Hardware interface to the interconnect:
– extra hop via the I/O bus (MPPs usually don't have this problem)
– side-effect-prone hardware (e.g., uncached loads), with network interface hardware designed accordingly
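To make the software side concrete, below is a minimal sketch of the conventional path the slide is criticizing, using the standard POSIX sockets API. The send_message wrapper, the port number, and the error handling are invented for illustration; what matters is that every message traps into the operating system and runs the kernel's TCP/IP stack before the network interface on the I/O bus ever sees it.

```c
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

/* Hypothetical wrapper: send one message over a conventional kernel socket. */
int send_message(const char *host_ip, const void *buf, size_t len)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);      /* kernel-owned object */
    if (fd < 0)
        return -1;

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(5000);                 /* arbitrary example port */
    if (inet_pton(AF_INET, host_ip, &addr.sin_addr) != 1) {
        close(fd);
        return -1;
    }

    /* connect() and send() each trap into the operating system; the kernel's
     * TCP/IP stack handles reliable delivery and congestion control, adding
     * tens of microseconds of software per message before the NIC is touched. */
    if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
        close(fd);
        return -1;
    }
    ssize_t n = send(fd, buf, len, 0);
    close(fd);
    return (n == (ssize_t)len) ? 0 : -1;
}
```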
Slide 8: Winner's Properties? The Right Thing To Do
User-level access to the network interface:
+ Myricom Myrinet or Quadrics QsNet (from the Meiko CS2)
+ InfiniBand or cLAN (with VIA)
Streamlined network interface:
+ integrated I/O bus and cluster interconnect
+ direct memory access
+ treat the network interface like cacheable memory
  + most I/O bridges already do this
  - most network interfaces don't support this yet
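By contrast, a user-level send path with the properties this slide lists could look roughly like the sketch below. The descriptor format, queue structure, and doorbell register are hypothetical and do not correspond to any real Myrinet, QsNet, or InfiniBand device: the idea is simply that the driver maps the ring and doorbell into the application's address space once, after which posting a message is ordinary stores plus NIC DMA, with no system call or protocol stack on the critical path.

```c
#include <stdint.h>

/* One entry in a hypothetical send-descriptor ring kept in ordinary,
 * cacheable host memory that the NIC reads via DMA. */
struct send_desc {
    uint64_t dma_addr;   /* bus address of the message payload   */
    uint32_t length;     /* payload length in bytes               */
    uint32_t dest_node;  /* destination node in the cluster       */
};

/* Per-process queue state; the driver maps the ring and the NIC's
 * doorbell register into the application's address space once, at setup. */
struct nic_queue {
    struct send_desc  *ring;       /* cacheable descriptor ring          */
    volatile uint32_t *doorbell;   /* uncached doorbell register (MMIO)  */
    uint32_t           head;
    uint32_t           size;       /* number of ring entries             */
};

/* Post one message entirely from user space: write a descriptor with
 * ordinary stores, then ring the doorbell.  No system call, no protocol
 * stack; the only uncached access is the single doorbell store. */
void user_level_send(struct nic_queue *q,
                     uint64_t dma_addr, uint32_t len, uint32_t dest)
{
    struct send_desc *d = &q->ring[q->head % q->size];
    d->dma_addr  = dma_addr;
    d->length    = len;
    d->dest_node = dest;
    q->head++;

    __sync_synchronize();     /* make the descriptor visible before the doorbell   */
    *q->doorbell = q->head;   /* NIC now DMAs the payload and injects the message  */
}
```

The descriptor ring living in ordinary cacheable memory is the "treat the network interface like cacheable memory" point; only the doorbell store goes out uncached.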
Slide 9: Tug-of-war: inertia vs. performance
Inertia pulls toward existing software:
+ Gigabit Ethernet with TCP/IP
Performance pulls toward cluster interconnects:
+ user-level access and a streamlined network interface
IDC forecast, May 2001 (from D. K. Panda's InfiniBand tutorial): share of InfiniBand-enabled servers
+ 2003: 20% of all servers
+ 2004: 60% of all servers
+ 2005: 80% of all servers
And the winner is... ULTINET (NOT)
+ User-Level Thin Interface NETwork
Slide 10: Don't Discount Inertia
Software exists and works for Gigabit Ethernet and TCP/IP.
Hardware is cheap and widely available.
It is a price/performance/inertia issue:
– not performance alone
InfiniBand will probably be a temporary I/O bus/switch/backplane:
– 3GIO is coming up (backward compatible with PCI)
Mellanox, Quadrics, and Myricom:
– are in a niche market, which can be dangerous because the volume may not be high enough
– generalizing and adapting to other interfaces (e.g., Ethernet) may help their business model