Ethernet Unified Wire
November 15, 2006
Kianoosh Naghshineh, CEO, Chelsio Communications

Slide 2: Pioneering Unified Wire

One unified 10Gb Ethernet network for LAN, NAS, SAN and HPC traffic.

Convergence benefits: simplified network architecture and reduced operating costs.

Lower Total Cost of Ownership:
- Improves CPU efficiency
- Minimizes software licenses
- Simplifies data center wiring
- Leverages staffing skills & tools

Higher Performance and New Apps:
- Improves cluster performance
- Lowers application latency
- Faster backup and recovery
- Enables storage applications

Slide 3: Key Market Drivers

Unit growth:
- 3x volume growth in the 10GbE NIC market in 2006 (Synergy report)
- 3x volume growth in the 10GbE switch market in 2006 (Dell'Oro)
- iSCSI market growing at 50% per quarter

Infrastructure:
- High-density 10GbE switches available now
- OpenFabrics released → can run IB apps unmodified; channel product
- Chimney OS released → all NICs become TNICs in the channel
- iSCSI target & initiator ready → can begin to replace FC
- Middleware partnerships in place → can run FC apps unmodified

Prices:
- XFP prices halved over the past 12 months
- CX4 switch ports at $700/port list price now
- 10G CX4 RNIC at $995, expected to be halved by 4Q07

Standards:
- 10G CX4 (copper media) introduced and shipping
- 10GBASE-T switches and cards expected by 1Q07

Slide 4: Current Shipping Products

N210 10GbE Server Adapter:
- 10Gb Ethernet NIC
- Linux, Windows, Solaris
- Turn-key software drivers
- Easy-to-use server adapter for 10GbE connectivity

T210 10GbE Protocol Engine:
- Full TCP/IP Offload (TOE)
- Full iSCSI target solution
- CX4 copper or XFP fiber
- Highest performance and CPU efficiency

T204 4GbE Protocol Engine:
- 4-port Gigabit Ethernet
- Full TCP/IP Offload (TOE)
- Full iSCSI target solution
- Best price/performance for servers & IP SANs

CX4 copper or XFP SR/LR optics; PCI-Express or PCI-X bus interface.

- 2nd generation products shipping in volume since Feb 2005
- Over 140 customers in 20 countries using Chelsio adapters
- Strong foundation for the 3rd generation 'unified wire' solution
- Holder of all but one of the current Land Speed Records

Slide 5: Introducing Terminator 3 (T3)

T310 10GbE adapter (PCI-Express or PCI-X 2.0):
- PCIe x8 or PCI-X 2.0 host interface
- 10GbE CX4 or XFP
- NIC+TOE+iSCSI+RDMA
- 32K connections
- iSCSI target + initiator
- Full-duplex 10Gbps line-rate performance
- Half-length form factor

T302 2x1GbE adapter (PCI-Express):
- PCIe x4 host interface
- 2-port Gigabit Ethernet
- NIC+TOE+iSCSI+RDMA
- 32K connections
- iSCSI target + initiator
- 2 x 1000BASE-T ports
- Low-profile form factor

T3 ASIC:
- PCIe x8 or PCI-X 2.0 host interface
- 2x10G or 2x1G Ethernet
- NIC+TOE+iSCSI+RDMA
- Up to 1M connections
- ACPI, IPMI, SMBus, I2C
- 0.13um TSMC LV process
- ~11W typical power
- 37mm FCBGA package

Slide 6: Superior Performance

Chelsio T110 TOE versus 10GbE NIC network performance.

Source: independently verified by VeriTest, Inc., the world's leading independent lab; testing performed May 2003.
Test tool: netperf.
Test configuration: two systems connected through a 10GbE switch, running a single TCP channel with 1500-byte Ethernet frames.
System configuration: AMD Opteron uniprocessor running the Linux kernel.

- TOE delivers ~2x network throughput vs. 10GbE NICs
- TOE shows ~1/2x CPU utilization vs. 10GbE NICs
- Net-net: TOE shows ~4x network efficiency vs. NICs
- No jumbo frames; results hold from one connection up to thousands of connections
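A note on the arithmetic behind the "~4x network efficiency" claim: efficiency here is throughput normalized by CPU utilization, so doubling throughput at half the CPU load quadruples the ratio. A minimal sketch of that calculation; the absolute Gbps and CPU figures below are illustrative assumptions chosen only to match the slide's approximate ratios:

```python
# Illustrative efficiency calculation: efficiency = throughput / CPU utilization.
# The absolute numbers below are placeholders chosen to match the slide's
# approximate ratios (~2x throughput, ~1/2x CPU for TOE vs. plain NIC).

def efficiency(throughput_gbps, cpu_utilization):
    """Gbps delivered per unit of CPU consumed."""
    return throughput_gbps / cpu_utilization

nic_eff = efficiency(throughput_gbps=3.5, cpu_utilization=1.0)   # assumed NIC figures
toe_eff = efficiency(throughput_gbps=7.0, cpu_utilization=0.5)   # assumed TOE figures

print(f"NIC efficiency: {nic_eff:.1f} Gbps per unit CPU")
print(f"TOE efficiency: {toe_eff:.1f} Gbps per unit CPU")
print(f"TOE advantage:  {toe_eff / nic_eff:.1f}x")                # ~4x, as on the slide
```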

Slide 7: Independent HPC Benchmarks Validate Performance

- 10GbE has higher bandwidth than InfiniBand and Myrinet
- 10GbE has lower latency than InfiniBand
- Testing performed by Los Alamos National Laboratory and The Ohio State University
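Latency comparisons like these are normally taken with a ping-pong micro-benchmark: a small message is bounced between two hosts and the one-way latency is reported as half the average round-trip time. The sketch below illustrates the method over plain TCP sockets; it is not the LANL/OSU harness, and the host, port, message size and iteration count are assumptions:

```python
# Minimal ping-pong latency sketch over TCP sockets (not the LANL/OSU benchmark
# harness). One side runs as server, the other as client; reported latency is
# half the average round-trip time for a small message.
import socket, sys, time

HOST, PORT, MSG_SIZE, ITERS = "127.0.0.1", 5001, 64, 10000   # assumed parameters

def server():
    with socket.create_server((HOST, PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            for _ in range(ITERS):
                data = conn.recv(MSG_SIZE)    # sketch: assumes the 64B message arrives whole
                conn.sendall(data)            # echo it back

def client():
    with socket.create_connection((HOST, PORT)) as sock:
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # send small messages immediately
        msg = b"x" * MSG_SIZE
        start = time.perf_counter()
        for _ in range(ITERS):
            sock.sendall(msg)
            sock.recv(MSG_SIZE)
        elapsed = time.perf_counter() - start
        print(f"one-way latency: {elapsed / ITERS / 2 * 1e6:.1f} us")

if __name__ == "__main__":
    server() if sys.argv[1:] == ["server"] else client()
```

To use it across two machines, run one copy with the argument `server`, change HOST on the other machine to the server's address, and run it as the client.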

Slide 8: RDMA & iSCSI

RDMA test setup:
- PCI-X 133MHz
- 1.8GHz dual-CPU Opteron, 1GB memory
- Mellanox 4X SDR InfiniBand PCI-X HCA
- Chelsio T320X 10GbE PCI-X RNIC
- Point-to-point networks
- OpenIB-derivative benchmarks

iSCSI test setup:
- 20 initiators: MS iSCSI 2.0 over 1GbE
- Target: Dell 2950, 2 x Intel 3GHz EM64T dual-core Woodcrest, 8GB DDR-II 667MHz memory
- Linux SMP x86_64
- T320e (T3b), iSCSI Target 2.0 release
- Ramdisk backing store; no CRC; TOE mode
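The iSCSI side of this test streams bulk I/O from the initiators against a ramdisk-backed target. A rough sketch of the kind of measurement an initiator can run is below; the device path, block size and transfer size are assumptions, and on Linux the attached iSCSI LUN simply appears as an ordinary block device:

```python
# Rough sequential-read throughput measurement against a block device or file.
# The path below is an assumption; point it at the attached iSCSI LUN (or any
# large file) to test.
import time

DEVICE = "/dev/sdb"          # hypothetical iSCSI LUN; replace with the real device
BLOCK_SIZE = 1 << 20         # 1 MiB reads
TOTAL_BYTES = 1 << 30        # read 1 GiB in total

def sequential_read_throughput(path, block_size, total_bytes):
    done = 0
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as dev:   # unbuffered, so the device is measured
        while done < total_bytes:
            chunk = dev.read(block_size)
            if not chunk:                        # end of device/file
                break
            done += len(chunk)
    elapsed = time.perf_counter() - start
    return done / elapsed / 1e6                  # MB/s

if __name__ == "__main__":
    print(f"{sequential_read_throughput(DEVICE, BLOCK_SIZE, TOTAL_BYTES):.0f} MB/s")
```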

Slide 9: T3 Feature Summary

Hardware:
- Integrated native PCIe x8 and PCI-X interface
- Integrated 2 GigE or 2 10GbE ports with CX4/KX4-compliant XAUI

Architecture:
- Cut-through processing architecture delivers 10Gbps wire rate and ultra-low latency
- Scalable to 100Gbps and extendable to additional protocols (IPsec, SSL, XML, NFS)
- Flexible in cost, power, performance, programmability

Management:
- PXE, EFI, power management

QoS:
- Multiple queues for bandwidth resource provisioning and allocation
- Multiple channels for simultaneous high-bandwidth/low-latency operation

Virtualization:
- Extensive receive and transmit features to support virtualization

Filtering:
- Tens of thousands of filter rules in hardware, configurable in software, used for firewalls and virtualization

Traffic Manager:
- Simultaneous per-connection and per-class rate control and bandwidth allocation

TOE:
- 4x performance vs. NIC
- Implements both full and partial TCP offload
- High performance even in demanding network environments (large RTT, packet loss)

DDP:
- Zero-copy steering of payload data directly into application space for extremely low CPU utilization
- No modification to applications
- Line-rate receive on a single connection

iSCSI:
- 16x performance vs. NIC
- Chelsio offers a complete commercial-grade iSCSI software stack for Linux for free

iWARP RDMA:
- RDMAC and IETF compliant
- RNIC-PI, kDAPL and OpenIB software interfaces
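The TOE and DDP rows carry a software claim worth spelling out: the offload sits below the standard sockets API, so an unmodified receive loop like the sketch below behaves the same whether the adapter is a plain NIC or a TOE with zero-copy DDP. The port and buffer size are assumptions:

```python
# Ordinary sockets receive loop. Nothing here is Chelsio-specific: with a TOE/DDP
# adapter the kernel and hardware take over TCP processing and payload placement,
# and this same application code runs unchanged, which is the point of the
# "no modification to applications" claim on the slide.
import socket

PORT, BUF_SIZE = 5001, 256 * 1024     # assumed values

def receive_stream():
    with socket.create_server(("", PORT)) as srv:
        conn, peer = srv.accept()
        total = 0
        with conn:
            while True:
                data = conn.recv(BUF_SIZE)
                if not data:          # sender closed the connection
                    break
                total += len(data)
        print(f"received {total / 1e9:.2f} GB from {peer[0]}")

if __name__ == "__main__":
    receive_stream()
```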

Slide 10: Competitive Analysis

- Chelsio competes primarily with companies that develop Ethernet adapter cards and/or integrated circuits
- Indirect competition from other fabric technologies such as InfiniBand and Fibre Channel
- Chelsio delivers a best-in-class, full-featured and seasoned solution

Feature matrix (NIC, TOE, iSCSI, RDMA, 10GbE, GbE, 2-port, PCI-X 2.0, PCIe, VLIW): Chelsio checks all ten columns; Broadcom and NetXen check six; Intel and Neterion check three; Emulex/Silverback and QLogic check two.

Slide 11: Why Chelsio

- Very high performance: very low latency, high transaction rate, high bandwidth
- Correct architecture: the data-flow processor produces a scalable, ideal performance profile
- Value-added features: integrated traffic manager, filtering, virtualization
- 3rd generation, robust and seasoned: maximum guarantee of no late-arriving QA bugs
- Integrated high-performance ULP solutions: run IB or FC apps unmodified
  - Commercial-grade iSCSI stack integrated with hardware DDP facilities
  - Fully integrated OpenFabrics RDMA software stack
- Broad product offering for future protection: 2x1Gb or 2x10Gb; iSCSI, RDMA, TOE, Chimney, NIC; PCI-Express, PCI-X 2.0, PCI-X 1.0
- Reliable, predictable supplier (ISO 9001 certified): the only vendor with experience shipping offload products to enterprise OEMs; availability of software-only products

Slide 12: Thank You