Current State of Affairs For Data Center Convergence (DCB)


Current State of Affairs For Data Center Convergence (DCB) Ethernet Alliance Datacenter Subcommittee Hosts: Henry He, Ixia Corp.; Chauncey Schwartz, QLogic Corp.

Agenda Ethernet Alliance Overview Data Center Bridging (DCB) Building Blocks State of Industry Standards Need for Testing

The views we are expressing in this presentation are our own personal views and should not be considered the views or positions of the Ethernet Alliance.

The Ethernet Alliance A global community of end users, system vendors, component suppliers and academia Representing the spectrum of the Ethernet industry Activities Technology and standards incubation Industry consensus building Education Interoperability testing and demonstration Certification 86 member companies

CONVERGED DATA CENTER What are the building blocks?

Today’s Environment Separate networks for each traffic type LAN, SAN, IPC Unique infrastructure Server adapters Fabric switches Cables Separate management schemes Inherently costly and complicated LAN SAN IPC In today’s datacenter, you’ll find separate network infrastructure installed to serve different networking needs. For instance, you’ll commonly find an Ethernet network to support LAN, a Fibre Channel network to support SAN, and an InfiniBand network for IPC, or inter-processor communication, to support clustering. This naturally increases data center capital expenses by requiring separate server adapters, fabric switches, and cabling, as well as increasing operating expenses in terms of increased power, cooling, and management costs. This has driven a movement to reduce data center costs through network convergence.

What is Data Center Bridging? Terminology Enhanced Ethernet Datacenter Ethernet Converged Enhanced Ethernet Lossless vs. lossy Ethernet = Lossy (expected to “drop” packets when busy) Fibre Channel = Lossless (expected to not lose information) SCSI does not recover quickly from lost packets Enhanced Ethernet is lossless New features have been added to prevent dropped packets Better suited for transporting SCSI traffic Enhanced Ethernet (aka Datacenter Ethernet or Converged Enhanced Ethernet) is an umbrella name referring to a set of feature enhancements that have been added to the Ethernet specification. The main goal of these Enhanced Ethernet features is to make Ethernet more suitable to carry SAN traffic. Standard Ethernet is a “lossy” networking medium. This means that when a network is very busy and a switch becomes congested (i.e., switch buffers become full), the Ethernet spec allows the switch to drop Ethernet frames. These dropped frames are detected by upper-layer networking protocols (such as TCP in most LAN networks) and are re-transmitted. This is perfectly acceptable behavior and occurs all the time in today’s Ethernet LAN networks without an actual user ever noticing. However, while this is acceptable for most LAN traffic, this behavior is not acceptable for SAN traffic. The SCSI protocol was developed in the early 1980s and was originally intended for direct connection between a SCSI controller and a SCSI device. It wasn’t designed to support a networked architecture and didn’t expect to experience dropped packets. Therefore, SCSI doesn’t recover quickly from lost packets, meaning there could be significant performance degradation in a busy Ethernet network. Fibre Channel has become the standard for SAN networks because it was designed to be “lossless”. That means it has implemented a mechanism to prevent packets from being dropped, even in a busy environment. This lossless nature is perfectly suited to transporting SCSI traffic. Enhanced Ethernet, therefore, has added features to the standard Ethernet spec regarding lossless behavior. While it is already possible to make Ethernet a lossless network through use of the “PAUSE” function made available in the 802.3x specification, it is rarely implemented. The features enabled by Enhanced Ethernet add sophistication and additional control to Ethernet’s pause behavior, which makes Ethernet better suited for transporting SCSI traffic.
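The lossy/lossless distinction above can be illustrated with a toy model rather than any standard's actual algorithm: a bounded switch queue that either drops frames when full, or asserts PAUSE so the sender holds frames until space frees up. The class name, capacity, and threshold below are invented for illustration only.

```python
from collections import deque

class ToyEgressQueue:
    """Toy model of a switch egress queue (illustration only, not a real switch)."""

    def __init__(self, capacity, lossless=False, pause_threshold=None):
        self.q = deque()
        self.capacity = capacity
        self.lossless = lossless
        # Assert PAUSE a little before the queue is full so in-flight frames still fit.
        self.pause_threshold = pause_threshold or capacity - 2
        self.paused = False          # whether we've told the sender to stop
        self.dropped = 0

    def offer(self, frame):
        """Sender tries to enqueue a frame; returns True if accepted."""
        if self.lossless and self.paused:
            return False             # lossless sender must hold the frame
        if len(self.q) >= self.capacity:
            if self.lossless:
                return False         # should not happen if PAUSE was honored
            self.dropped += 1        # classic "lossy" Ethernet: congested switch drops
            return False
        self.q.append(frame)
        if self.lossless and len(self.q) >= self.pause_threshold:
            self.paused = True       # send PAUSE (802.3x) / per-priority PFC upstream
        return True

    def drain(self, n=1):
        """Egress link transmits n frames, possibly releasing the PAUSE."""
        for _ in range(min(n, len(self.q))):
            self.q.popleft()
        if self.paused and len(self.q) < self.pause_threshold:
            self.paused = False      # resume the sender (PAUSE with quanta = 0)

# A lossy queue drops under a burst; a lossless queue pushes back instead.
for lossless in (False, True):
    q = ToyEgressQueue(capacity=8, lossless=lossless)
    rejected = sum(0 if q.offer(i) else 1 for i in range(20))   # 20-frame burst
    print("lossless" if lossless else "lossy",
          "- dropped:", q.dropped, "- pushed back to sender:", rejected - q.dropped)
```

Upper-layer protocols such as TCP recover from the drops in the lossy case; SCSI traffic is the case that benefits from the push-back behavior instead.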

Enabling the Converged Network Today’s Topic: IEEE 802.1 Data Center Bridging (Enhanced Ethernet) IEEE 802.1Qaz Enhanced Transmission Selection and Capability Exchange IEEE 802.1Qau Congestion Notification IEEE 802.1Qbb Priority Flow Control Future Topics: Fibre Channel over Ethernet (FCoE) FCoE Initialization Protocol (FIP) RDMA over Converged Ethernet (RoCE) iSCSI over DCB iWARP Use Cases

Data Center Bridging Pause IEEE 802.3x – defines link-level flow control and specifies protocols, procedures and managed objects that enable flow control on full-duplex Ethernet links to prevent lost packets Priority Flow Control (PFC) IEEE 802.1Qbb – enhances the pause mechanism to achieve flow control of 8 traffic classes by adding priority information in the flow control packets (http://www.ieee802.org/1/files/public/docs2008/bb-pelissier-pfc-proposal-0508.pdf) Enhanced Transmission Selection (ETS) IEEE 802.1Qaz – assigns traffic classes into priority groups and manages bandwidth allocation and sharing across priority groups (http://www.ieee802.org/1/files/public/docs2008/az-wadekar-ets-proposal-0608-v1.01.pdf) Data Center Bridging Exchange Protocol (DCBX) IEEE 802.1Qaz – “advertisement/configuration” to allow devices to automatically exchange DCB link capabilities (http://www.ieee802.org/1/files/public/docs2008/az-wadekar-dcbx-capability-exchange-discoveryprotocol-1108-v1.01.pdf) Congestion Notification (CN) IEEE 802.1Qau – allows bridges to send congestion signals to end-systems to regulate the amount of network traffic
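As a concrete illustration of how PFC “adds priority information in the flow control packets”, here is a minimal sketch of an 802.1Qbb per-priority pause frame laid out with Python’s struct module. The field layout (MAC Control EtherType 0x8808, PFC opcode 0x0101, a priority-enable vector, then eight per-class pause quanta) follows our reading of the spec; treat it as illustrative, not a conformant implementation.

```python
import struct

def build_pfc_frame(src_mac: bytes, quanta: list) -> bytes:
    """Sketch of an 802.1Qbb Priority-based Flow Control frame.

    quanta: eight 16-bit pause times, one per priority (0 = don't pause that class).
    """
    assert len(src_mac) == 6 and len(quanta) == 8
    dst_mac = bytes.fromhex("0180c2000001")         # MAC Control multicast address
    ethertype = struct.pack("!H", 0x8808)           # MAC Control
    opcode = struct.pack("!H", 0x0101)              # PFC (classic 802.3x PAUSE uses 0x0001)
    # Priority-enable vector: bit n set => quanta[n] applies to priority n.
    enable_vector = sum(1 << i for i, q in enumerate(quanta) if q > 0)
    payload = struct.pack("!H8H", enable_vector, *quanta)
    frame = dst_mac + src_mac + ethertype + opcode + payload
    return frame.ljust(60, b"\x00")                 # pad to minimum Ethernet size (pre-FCS)

# Pause only priority 3 (e.g., the storage/FCoE class) for the maximum quanta,
# leaving the other seven traffic classes flowing -- the key difference from 802.3x PAUSE.
frame = build_pfc_frame(bytes.fromhex("020000000001"), [0, 0, 0, 0xFFFF, 0, 0, 0, 0])
print(frame.hex())
```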

DCB Environment LAN, SAN, and IPC traffic converged onto a single Ethernet network! Reduced hardware, power, cooling, and management costs Requires a new class of I/O adapter – a DCB-enabled adapter DCB (FCoE, iSCSI, iWARP, RoCE) In a converged network, LAN, SAN, and IPC traffic can travel over a single networking infrastructure. This saves costs in terms of adapter, switch, cabling, power, cooling, and management expenses. Ethernet has become the most widely deployed networking infrastructure in data centers around the world, making it a natural choice as a converged network medium. Now that Ethernet has achieved a new data rate milestone, increasing from 1GbE to 10GbE, it has sufficient bandwidth to support multiple networking types with suitable performance. This has led to the creation of a new class of server I/O adapter, known as the Converged Network Adapter, or CNA.

Deployment Process Begin: Not Converged Separate NIC & HBA SAN/NAS LAN FC / iSCSI One of the strengths of FCoE is that it protects existing investment in LAN and FC SAN infrastructure. End users who migrate to an FCoE network can do so in stages, rather than requiring a datacenter “forklift”. The typical datacenter today has installed separate LAN and SAN infrastructure that uses separate, unique server adapters and fabric switches. Most initial FCoE deployments will start at the network’s edge by installing CNAs which connect to a “top of the rack” FCoE switch. The “top of the rack” switch is so named because it is meant to be installed in a server rack and provides fabric access to the servers within that and adjacent racks. The Cisco Nexus 5020 and Brocade 8000 are examples of “top of the rack” FCoE switches. One advantage to this approach is that it allows the end user to connect their CNAs to their FCoE switches using inexpensive copper cabling, as opposed to a more expensive optical solution that enables longer distance reach. In this configuration, the FCoE switch is responsible for translating between FCoE signaling and native FC signaling. FCoE packets from the CNA have their Ethernet header/footer removed by the FCoE switch before passing the original FC frame to the native FC fabric. Going in the other direction, the FCoE switch adds the appropriate Ethernet header/footer to native FC packets sent from the Fibre Channel array and intended for the CNA (a simplified encapsulation sketch follows the deployment slides below). One nice thing about this approach is that the end user can migrate to an FCoE network in stages by adding CNAs and FCoE switches at the network edge and maintaining their existing LAN and FC SAN infrastructure. As FCoE adoption increases, FCoE networks will start expanding into the core of the data center network. Over time, native FCoE storage will become more popular, meaning that FCoE frames will travel unaltered from the CNA at the network’s edge all the way to the storage array. NetApp has already announced native FCoE storage (utilizing QLogic FCoE technology) and other vendors are expected to announce similar native FCoE storage products throughout 2010 & 2011.

Deployment Process Today: Not Converged Separate NIC & HBA Converged edge DCB adapters DCB “top of rack” switch LAN/NAS SAN/NAS LAN FC / iSCSI DCB Adapter DCB / FCoE Switch

Deployment Process Today: Not Converged Separate NIC & HBA Converged edge DCB adapters DCB “top of rack” switch Converged core Expanded converged network Native attach storage Converged Network DCB Adapter Edge DCB/FCoE Switch Core DCB/FCoE Switch FCoE iSCSI DCB FC / iSCSI LAN

Deployment Process Today: Not Converged Separate NIC & HBA Converged edge DCB adapters DCB “top of rack” switch Converged core Expanded converged network Native attach storage Goal: Converged network Multiple storage technologies over Ethernet Converged Network FCoE / iSCSI DCB / iWARP / RoCE DCB Adapter LAN

Deployment Process Today: Not Converged Separate NIC & HBA Step 1: Converged edge DCB adapters DCB “top of rack” switch Step 2: Converged core Expanded converged network Native attach storage End Goal: Converged network Multiple storage technologies over Ethernet Process Benefit: Provides the building blocks to upgrade some or all data center network assets into a converged infrastructure Converged Network FCoE / iSCSI DCB / iWARP / RoCE DCB Adapter LAN
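The speaker notes above describe the FCoE switch adding and stripping an Ethernet wrapper around an unmodified FC frame. The sketch below shows that encapsulation in simplified form; EtherType 0x8906 is the FCoE EtherType, while the 14-byte FCoE header layout and the SOF/EOF code points used here are our assumptions from FC-BB-5 and should be read as hypothetical placeholders rather than production constants.

```python
import struct

FCOE_ETHERTYPE = 0x8906        # FCoE
SOF_I3, EOF_T = 0x2E, 0x42     # assumed SOF/EOF code points, for illustration only

def fcoe_encapsulate(dst_mac, src_mac, fc_frame, sof=SOF_I3, eof=EOF_T):
    """Wrap an (opaque) FC frame for transport over lossless Ethernet.

    Simplified FC-BB-5 layout: Ethernet header | 14-byte FCoE header (version
    nibble + reserved + SOF) | FC frame | EOF + reserved padding. FCS omitted.
    """
    eth_hdr = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    fcoe_hdr = bytes([0x00]) + b"\x00" * 12 + bytes([sof])   # version=0, reserved, SOF
    trailer = bytes([eof]) + b"\x00" * 3
    return eth_hdr + fcoe_hdr + fc_frame + trailer

def fcoe_decapsulate(frame):
    """What an FCoE-to-FC gateway does: strip the Ethernet/FCoE wrapper and
    hand the original FC frame to the native FC fabric."""
    assert struct.unpack("!H", frame[12:14])[0] == FCOE_ETHERTYPE
    return frame[14 + 14:-4]   # drop Ethernet header, FCoE header, and trailer

fc_frame = b"\x00" * 36        # stand-in for a real FC frame (header + payload + CRC)
wire = fcoe_encapsulate(b"\x0e\xfc\x00\x00\x00\x01", b"\x02\x00\x00\x00\x00\x02", fc_frame)
assert fcoe_decapsulate(wire) == fc_frame
```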

Quick reminder on Data Center Bridging Unifying I/Os and networks over Ethernet Enhanced switches to support lossless Ethernet Essentially, improved Ethernet that is suitable for data center applications Use cases include multiple storage protocols, LAN, and high-performance computing

State of the Standards What Is Going On Now?

A few clarifications… Pre-standard does NOT imply proprietary or lack of interoperability Not ALL DCB protocols must be deployed together to enable a DCB environment

Priority-based Flow Control (PFC) Standards: IEEE 802.1Qbb, approved as an IEEE standard in June 2011. Industry: many switch, adapter and chipset vendors; switch support includes FCFs and DCB switches. Interoperability: highly interoperable.

Enhanced Transmission Selection (ETS) Standards: IEEE 802.1Qaz, approved as an IEEE standard in June 2011. Industry: many switch, adapter and chipset vendors; switch vendor support. Interoperability: not applicable (see DCBX interoperability); ETS is a local forwarding decision and works well end-to-end across different vendors.
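Because ETS is a local forwarding decision, its behavior is easiest to see as a bandwidth-sharing computation. The sketch below is a simplified model, not the 802.1Qaz scheduler itself: each priority group gets at least its configured share of the link, and bandwidth left unused by one group is redistributed to groups that still have demand. The group names and percentages are illustrative.

```python
def ets_allocate(link_gbps, groups):
    """Toy ETS-style allocation.

    groups: {name: (configured_share_pct, offered_load_gbps)}
    Returns {name: allocated_gbps}. Simplified model for illustration only.
    """
    alloc = {}
    # First pass: each group gets min(offered load, its guaranteed share of the link).
    for name, (share, offered) in groups.items():
        alloc[name] = min(offered, link_gbps * share / 100.0)
    # Redistribute leftover capacity to groups with unmet demand, weighted by share.
    for _ in range(8):                       # a few passes converge for this toy model
        leftover = link_gbps - sum(alloc.values())
        unmet = {n: g for n, g in groups.items() if g[1] > alloc[n] + 1e-9}
        if leftover <= 1e-9 or not unmet:
            break
        weight = sum(share for share, _ in unmet.values())
        for name, (share, offered) in unmet.items():
            alloc[name] = min(offered, alloc[name] + leftover * share / weight)
    return alloc

# 10GbE link: storage is guaranteed 50%, but LAN may borrow when storage is idle.
print(ets_allocate(10, {"LAN": (30, 9.0), "SAN/FCoE": (50, 2.0), "IPC": (20, 1.0)}))
# -> roughly {'LAN': 7.0, 'SAN/FCoE': 2.0, 'IPC': 1.0}
```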

Data Center Bridging eXchange (DCBX) Standards: pre-standard Converged Enhanced Ethernet (CEE) and IEEE 802.1Qaz, approved as an IEEE standard in June 2011. Industry: many switch, adapter and chipset vendors. Interoperability: most vendors support CEE 1.01 and it is highly interoperable; few vendors support IEEE 802.1Qaz today, but most have a roadmap to support it in the near future; a plugfest interoperability test is planned for fall 2011.
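IEEE DCBX (802.1Qaz) carries its configuration in LLDP organizationally specific TLVs under the IEEE 802.1 OUI. The sketch below frames one such TLV using the PFC Configuration layout as we understand it from 802.1Qaz Annex D; the subtype value and bit positions should be treated as assumptions, and a real deployment would rely on an LLDP agent rather than hand-built frames.

```python
import struct

IEEE_8021_OUI = bytes.fromhex("0080c2")   # IEEE 802.1 organizationally unique identifier
PFC_CFG_SUBTYPE = 0x0B                    # assumed 802.1Qaz PFC Configuration subtype

def lldp_org_tlv(oui, subtype, info):
    """Generic LLDP organizationally specific TLV: type=127, 9-bit length field."""
    length = len(oui) + 1 + len(info)
    header = struct.pack("!H", (127 << 9) | length)
    return header + oui + bytes([subtype]) + info

def dcbx_pfc_tlv(willing, pfc_cap, enabled_priorities):
    """Sketch of the DCBX PFC Configuration TLV payload (assumed layout):
    octet 1 = Willing | MBC | reserved | PFC capability, octet 2 = per-priority enable bitmap."""
    flags = (0x80 if willing else 0x00) | (pfc_cap & 0x0F)
    enable = 0
    for p in enabled_priorities:
        enable |= 1 << p
    return lldp_org_tlv(IEEE_8021_OUI, PFC_CFG_SUBTYPE, bytes([flags, enable]))

# Advertise: willing to accept peer config, 8 lossless classes supported, PFC on priority 3.
print(dcbx_pfc_tlv(willing=True, pfc_cap=8, enabled_priorities=[3]).hex())
```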

Congestion Notification Standards: IEEE 802.1Qau, approved as an IEEE standard in 2010. Industry: very few vendors support it; few have it on the roadmap for the near future. Interoperability: limited early interoperability testing in fall 2010; more testing planned for fall 2011.
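For context on what 802.1Qau congestion notification computes: a congested switch (the congestion point) samples frames and derives a feedback value from how far its queue is above an equilibrium set-point and how fast it is growing; end stations (reaction points) reduce their sending rate in proportion to that feedback. The sketch below is a simplified model of that loop; the weights, scaling, and 50% cap are illustrative assumptions, not the standard's exact parameters.

```python
def qcn_feedback(qlen, qlen_old, q_eq, w=2.0):
    """Congestion point: Fb = -(Qoff + w * Qdelta); negative means congestion.

    Qoff   = qlen - q_eq      (how far above the equilibrium set-point we are)
    Qdelta = qlen - qlen_old  (how fast the queue is growing)
    Simplified sketch; the sampling and quantization rules of 802.1Qau are omitted.
    """
    q_off = qlen - q_eq
    q_delta = qlen - qlen_old
    return -(q_off + w * q_delta)

def reaction_point_rate(current_rate, fb, gd=1.0 / 128):
    """Reaction point: on a congestion notification message (Fb < 0),
    cut the current rate multiplicatively, bounded here to at most a 50% decrease."""
    if fb >= 0:
        return current_rate            # no congestion signal, keep the rate
    decrease = min(gd * abs(fb), 0.5)
    return current_rate * (1.0 - decrease)

# Queue grew from 20 to 36 KB against a 26 KB set-point: negative feedback, rate cut.
fb = qcn_feedback(qlen=36, qlen_old=20, q_eq=26)
print(fb, reaction_point_rate(10.0, fb))   # -> -42.0, ~6.7 Gb/s
```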

What is coming IEEE P802.1Qbg Edge Virtual Bridging (application: virtual bridging), still in process in the working group. IEEE P802.1Qbh Bridge Port Extension (application: virtual bridging; ultra high-density, low-latency switches). DCB on 40GbE. To be covered in future EA webinars.

Road to Success Testing, testing, and more testing

Importance of testing Many new and recently developed protocols Many new product implementations Some very early adopters of relatively new protocols Interoperability with new products and new vendors

A lot to test in the Converged Data Center DCB protocols, FCoE/iSCSI/RoCE/iWARP applications, converged switches, DCB adapters, bridging protocols, routing protocols, 40/100GbE uplinks, virtualization performance

Ethernet Alliance Testing end-to-end Storage testing I/O performance Server performance Fibre Channel EA Plugfest converged network testing FCoE/iSCSI/iWARP/RoCE Data Center Bridging Storage + TCP/IP CNA Virtualization Network testing TCP/IP performance Switch performance Ethernet The Ethernet Alliance facilitates multi-vendor plugfests at the University of New Hampshire InterOperability Laboratory (UNH-IOL) to validate end-to-end converged network functionality.

Thank you

Ways to Get Involved In EA Become A Member Attend A Plugfest Join A Subcommittee Participate In An EA Booth At Trade Shows Participate In EA Sponsored Webinars Data Center Bridging Higher Speed Modular IO High Speed Ethernet Energy Efficient Ethernet Carrier Ethernet Congress Interop European Conference on Optical Communication (ECOC) Supercomputing

Discussion and Q&A