1
Passport 8600 Routing Switch
Release 3.3
2
Notes to Presenter This presentation is intended as a dual-level presentation for both a business-level (CIO) and a semi-technical (Network Manager) audience. It sets out the requirements of a core switch in today's converged campus environment and explains why the Passport 8600, as at Release 3.3, is the ideal product for that role. Please read the speaker notes, as these contain the key points to make throughout the presentation. Use the main notes for the business level, adding the Technical Description for the technical audience.
3
CIO’s Priorities Do more with less Drive employee productivity with IT
Use IT to grow revenues. Use IT to anticipate customer requirements. We created our One Network vision to address not only the business needs but also the CIO priorities. We've talked to lots of CIOs and they are all facing the dilemma of supporting an increasing number of applications. These applications are resulting in increased costs while the CIO budget stays nearly flat or grows only slightly. The CIO is being asked to radically cut costs and do more with less. We aren't talking about a 10 percent decrease but a more radical 30-40 percent. This means the CIO needs to look at his or her network differently: it's not just squeezing out the bottom performers but looking at the end-to-end process. Not only does the CIO have to do more with less, but his role is expanding much more with regard to the business. It used to be that the CIO only had to worry that there was network connectivity. Today's CIO is now responsible for driving employee productivity with IT, and has a new mandate to grow revenues with IT as well. IT infrastructure is now a key means to communicate with the end customer, be it to gain information or as a new sales channel for online ordering. Extending beyond this, the CIO is also challenged to leverage the IT infrastructure to anticipate customer requirements and provide additional intelligence on customer purchasing habits and trends. Essentially the role of the CIO has evolved from purely tactical day-to-day network operations to being an integral strategic part of the entire business. Things you can mention: Shrinking or low-growth budgets. Increasing demands for new applications: ERP, SCM, CRM. Increasing costs for scarce skills, for increased bandwidth, for new technologies and security requirements.
Need for new applications and services to increase employee productivity: conferencing, collaboration, mobility, unified messaging. New methods for dealing with customer service, pre-sale and post-sale. Tactical to Strategic.
4
Needs of the future enterprise network
Consistent customer experience everywhere Business connectivity via the internet Internet Security for all applications and services Storage and networking at light speed The Nortel Networks model of the new enterprise network recreates it as one unified, efficient, cost-effective, adaptable infrastructure with these features: A consistent customer experience, everywhere. "Engaged applications" flow seamlessly across customer contact channels (in-person, telephone, Web, chat, fax) and provide a positive experience, where the business is not just reacting to but engaging with the customer. By delivering time-sensitive, critical information in the user's choice of context and access device, the new enterprise network makes service and convenience a tangible competitive differentiator in an increasingly commoditized world. By moving telephony into the Web paradigm, the new enterprise network offers multimedia capabilities that give users control over where, when, how, and in what form they can be reached. Consistent information is delivered through increased centralization of IT infrastructure enabled through Optical Ethernet. Business connectivity over the Internet. With innovations in privacy and quality of service, the Internet takes on an expanded role as the backbone of enterprise applications. The Internet and IP-based intranets bring new agility and economies to the tasks of connecting data centers, delivering content to users, and supporting the flow of private information across the extended supply chain. Storage and networking at light speed. The new enterprise network takes advantage of IP, Ethernet and DWDM for optical storage networking, in all cases leveraging optics for performance and reliability. Optical Ethernet eliminates bandwidth bottlenecks between LAN and MAN/WAN, and extends this scalable, reliable LAN standard across cities and continents. IP Telephony to succeed traditional telephony.
Voice over IP solutions now scale from 1 to 200,000 users to serve telecommuters, remote offices, contact centers, and campuses. Voice over IP has matured to offer the fundamental requirements of full-scale enterprise deployment: centralized or distributed control, enterprise-wide access to applications such as unified messaging, uncompromised voice quality, choice of features and functions, multiple migration paths, and coexistence with legacy systems. Converging voice and data onto one infrastructure integrates critical communications capabilities into a single platform to lower total cost of ownership and build productivity through new applications. Security inherent in all applications and services. High-performance, multi-layer security protects data integrity and privacy across all environments, including mobility, without compromising the performance of the network and applications. Routing is transformed by building IP VPN and firewall security into routing devices that naturally understand security protocols. IP telephony succeeds traditional telephony.
5
Key Requirements High Availability Operational Simplicity
Five nines (99.999%) reliability means uptime all the time. Bandwidth, Security and Quality of Service ensuring application delivery with fail-over schemes that preserve application integrity. Operational Simplicity Simple to install, simple to maintain, simple to manage. Reduced complexity through a 'leaner', more integrated, intelligent infrastructure design. Low Cost of Ownership Reduced purchase, installation and maintenance costs through reduced complexity, the ability to consolidate resources and a lower box count. The key requirement for networks today is to support user applications in a reliable, secure, and efficient way, and to make sure that this is all achieved as cost effectively as possible. High Availability High availability is a baseline requirement for One Network. High availability means several things. Firstly, it is the performance or bandwidth to support all the applications and deliver all the necessary information on a single infrastructure. Secondly, it is having the security to ensure that information goes only to whom it is supposed to, and also to protect the networking environment from attack. Lastly, it means having the reliability, redundancy and fail-over schemes so that the network is inherently reliable; but more than that, the network must be able to absorb any kind of failure or outage without impact to even the most sensitive real-time applications such as VOIP. Operational Simplicity Operational simplicity is how we deliver One Network to you. Our mission is to make the networking environment as simple as possible, because simplicity brings reliability and because less complexity also means less cost. A 'leaner', more intelligent infrastructure will not only have fewer components to buy, but will also have less to configure and maintain. This not only brings a more reliable, lower cost infrastructure today, it also delivers a more agile networking platform ready to react to the needs of tomorrow.
It is far easier and less costly to 'open up' and make changes to a simple design than a complex one, and it can almost always be done with less impact to the user base. Low Cost of Ownership The benefit of our approach and the way we deliver One Network to you is that operational simplicity brings with it a lower cost of ownership, not only in terms of initial purchase and installation costs, but continuing on to reduced maintenance costs, consequential costs (the cost of an outage to the business) and the ability to consolidate resources such as WAN links, servers and even IT staff to reduce the overall IT spend and to deliver on the promise of doing more with less, or a lot more with the same.
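The five-nines figure above translates into a concrete downtime budget. A quick sketch (standard availability arithmetic, not taken from the deck) showing what each level of availability allows per year:

```python
# Convert availability percentages into allowable downtime per year,
# to show what "five nines" (99.999%) actually implies.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def downtime_minutes_per_year(availability_pct: float) -> float:
    """Annual downtime allowed at a given availability percentage."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for nines in (99.9, 99.99, 99.999):
    print(f"{nines}% availability -> "
          f"{downtime_minutes_per_year(nines):.1f} min/year downtime")
```

At 99.999% the budget is a little over five minutes of downtime per year, which is why every component in the path has to be redundant and hot-swappable.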
6
Applications Drive Infrastructure
Streaming Video Constant stream of data – no pauses or interruptions Unidirectional – server to client Low bandwidth – 2-4Mbps VOIP (IP Telephony) Bi-directional – client to client or peer-to-peer Very low bandwidth – 8kbps-64kbps Email Sporadic bursts of traffic – varying size (attachments) Bi-directional – client to server to client Varying bandwidth – greedy, will take all available To understand what is needed by our One Network we first have to understand the needs of the different applications it has to support and their specific requirements. We focus on the applications, because applications drive infrastructure. Streaming video Streaming video is just starting to be deployed en masse across enterprise networks and it brings with it a different set of requirements. Firstly, video is a constant stream of information or data that cannot tolerate pauses or interruptions. Typically video is buffered slightly and can absorb very minor interruptions in the flow, but not much. The effect of a couple of very small blips in the data flow will be very obvious to the user. A more lengthy outage, say 10 seconds, typically results in application failure, requiring the frustrated user to reconnect. The type of video we see customers deploying today is the broadcast or multicast type of video rather than video conferencing. Broadcast or multicast video is unidirectional; that is to say, the flow comes from the server out to the client or clients, fanning out from a central point that is either a video server, camera or gateway. Video is a low bandwidth application, reaching a maximum of 2-4Mbps today and potentially 15Mbps in future when high definition video comes of age. VOIP (IP telephony) Voice over IP is our second application type and it is similar to video in that it is a constant stream of data, but it is much more sensitive. Voice traffic cannot be buffered, so even the slightest interruption in the data stream is very obvious.
An interruption of a couple of seconds will result in application failure. This translates to the user being 'cut off' and having to redial. Reconnecting a video application is tiresome enough, but having to redial and re-catch a customer or contact may prove costly. As the IP phone replaces the traditional phone it also has to take care of the emergency or 911 role. Interruptions to a 911 call could prove far more than just expensive! VOIP is also bi-directional, with data needing to flow in both directions if both parties are to hear each other. The flow is also peer-to-peer or client-to-client, so the traffic pattern is unpredictable. We now have a network that has to cope with a client-server model as well as a peer-to-peer model. VOIP is very low bandwidth: 8Kbps (G.729A) or 64Kbps (G.711). VOIP traffic is only 60Kbps or 100Kbps in IP traffic, so it is not the bandwidth that concerns us here; it is the need for consistently low latency, jitter and packet loss throughout the call or stream that is of paramount importance. Email, the most common application, is also similar in behavior to almost all traditional or legacy data applications. Email, like most data applications, communicates through sporadic bursts of traffic that vary in size. Typically a request from the user will result in an unpredictable amount of traffic coming back in the other direction. Data applications are also bi-directional. For me to receive an email, someone has to have sent it, although the amount of traffic coming in is usually more than that going out. Data applications do not tend to be peer-to-peer, even if the communication is between two users on the same floor. The communication tends to be from client to server and then back to client again. Data applications also use varying bandwidth: typically if a server or client has some data to send and a 100Mbps connection, it will use all 100Mbps to send it, making data applications bandwidth hogs.
As for most of these data applications, it is the completeness of the transmission rather than the timeliness of it that is important, so interruptions in the data flow are usually absorbed by a retransmission of the data that goes undetected by the user. It is this ability to absorb congestion and outages that became part of the design of data networks and networking equipment in the past, and it is one of the reasons why Nortel Networks, with our heritage and understanding of voice grade networks, is doing things a little differently to enable our customers to deploy One Network.
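The gap between the codec rate and the on-wire rate mentioned above can be made concrete. A sketch (header sizes and 20 ms packetization are common defaults, not figures from the deck; the deck's 60/100 Kbps numbers presumably include additional framing overhead):

```python
# Estimate on-wire VoIP bandwidth from the codec rate, showing why an
# 8 kbps codec consumes far more than 8 kbps on the LAN once RTP, UDP,
# IP and Ethernet headers are added to each small voice packet.

def voip_on_wire_kbps(codec_kbps: float, packet_ms: float = 20.0) -> float:
    payload_bytes = codec_kbps * 1000 / 8 * (packet_ms / 1000)
    overhead_bytes = 12 + 8 + 20 + 18      # RTP + UDP + IP + Ethernet
    packets_per_sec = 1000 / packet_ms
    return (payload_bytes + overhead_bytes) * 8 * packets_per_sec / 1000

print(voip_on_wire_kbps(8))    # G.729A codec
print(voip_on_wire_kbps(64))   # G.711 codec
```

The point stands either way: the bandwidth is tiny; it is the per-packet latency, jitter and loss that matter.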
7
Two Tier Infrastructure design
Edge High density 10/100 Ethernet ports for user connections and Gigabit Ethernet for riser connections. Access security controls and QoS mapping. Core High density Gigabit Ethernet for riser connections. ATM, SONET and Optical connections for MAN/WAN access and L4-7 application switching for Data Center integration. Our first task was to simplify the network design, and to do this we moved to a two tier model of an Edge and a Core: the edge, where we connect all the user devices like PCs, printers, gateways and access points, and the core, where we consolidate all the connections from our edge devices and perform the cross connect switching and routing. This two tier design helps keep complexity to a minimum. In very large network designs, or where resources such as fiber runs or port density dictate an intermediate layer between edge and core for aggregation purposes, we stick to the two tier principle, but expand this using a layered or onion approach where intermediate switches are both the core of the outer layer and the edge of the inner layer. We will discuss this again later in the presentation. Edge Requirements The requirements of an edge switch are simple: low cost, high density 10/100 ports for the user connections; multiple gigabit uplinks to connect to the rest of the network; and the ability to feed enough bandwidth between the two to satisfy the needs of the users. Local switching capability plays a part here, but this should be considered secondary to being able to feed information through to the rest of the network, where 95 percent plus of the information either resides or needs to go. Because of the number of edge ports and devices in medium to large networks, we need to keep it simple here to increase reliability and to keep the cost of purchase, installation and maintenance to a minimum. Layer 2 switching devices feature here.
Reliability is also extremely important here, because the edge of the network is a difficult place to provide resiliency: typically user devices are not dual connected. User devices almost always rely on a single network connection. This single network connection must therefore be ultra reliable. Edge switches have to be easily scalable. If additional devices need network access they need a port to plug into. The extra bandwidth they may need can always be added later, but connectivity comes first. Security is higher on everybody's list of requirements these days. If unauthorized users gain access to the network, even if they cannot authenticate and gain access to the data or network resources, they could snoop on what is going on or maybe cause damage while they are there. Embedded security in the edge device that prevents any kind of unauthorized access is preferable. Quality of Service is a function that needs to be addressed at the edge first. QoS is there to make sure the applications perform as required. QoS should be used as a guarantee mechanism to make sure that in the event of unusual usage, congestion or network outages, all the applications continue to perform as required. QoS should not be used as a crutch to rest the network on, or as a substitute for insufficient bandwidth. Identifying the applications and their appropriate level of priority is a function that needs to take place at the edge, because the edge is where the applications enter the network. Also, applications should have the same level of priority or importance wherever they are in the network, so an end-to-end QoS mechanism such as DiffServ, which operates at Layer 3, is the best way to achieve this. Core Requirements If we turn our attention to the core then our needs change.
Core switches need higher bandwidth connections so they can consolidate all the uplinks from the edge, and a variety of different connections so they can forward this traffic onto the rest of the network, either locally or across the Metro or WAN. Core switches also need the ability to cross connect the data as fast and efficiently as possible and to make more granular decisions as to where the information should be routed. Reliability requirements also turn up a notch or two here. With so many more users' information traversing the core switch, we need to be able to scale more easily and to seamlessly absorb or route around outages. We also need to be able to balance and load share the traffic across multiple redundant paths. The further away from the edge we get, the greater the need to make more granular decisions about the routing of information. For this reason higher level routing, multicast and application switching technologies come into play.
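The edge classification and DiffServ marking described above can be sketched in a few lines. This is an illustrative toy, not a Passport 8600 configuration: the class-to-DSCP mapping (EF for voice, AF41 for video) is a common convention, and the port-based classifier is an assumption for the example.

```python
# Classify traffic at the network edge and mark it with a DiffServ code
# point (DSCP), so the priority decision made at the edge is honored
# end to end by every device along the path.

DSCP = {"voice": 46, "video": 34, "best_effort": 0}   # EF, AF41, default

def classify(dst_port: int) -> str:
    """Toy classifier: map a destination port to a traffic class."""
    if 16384 <= dst_port < 32768:   # typical RTP voice port range (assumed)
        return "voice"
    if dst_port == 554:             # RTSP streaming video
        return "video"
    return "best_effort"

def mark(dst_port: int) -> int:
    """Return the DSCP value to stamp into the packet's IP header."""
    return DSCP[classify(dst_port)]

print(mark(16500), mark(554), mark(80))   # 46 34 0
```

Because the DSCP travels in the IP header, the core only has to enforce the marking rather than re-identify the application.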
8
What is required in a core switch ?
Connectivity Campus – Gig/10 Gig LAN Metro – XD Gig, WDM, Optical WAN – ATM, SONET, 10 Gig WAN Features QoS enforcement and Queuing Redundancy with Hot Swap Application Switching (L2-7) Performance Cross connect bandwidth Low Latency & Jitter Consistent throughput Let's take a more detailed look at the requirements of a core switch to understand why we designed the Passport 8600 the way we did and why it is such a good choice for building One Network. Connectivity For connections to the rest of the network we need high density Gigabit and 10 Gigabit LAN Ethernet for the riser connections and inter-switch core connections. We also need the ability to bond multiple physical links together for additional bandwidth and redundancy purposes. When we start to cross the Metro we need to move to longer distance technologies like extended distance Gigabit Ethernet and Wave Division Multiplexing technologies like CWDM and DWDM. To cross the WAN we need to think about ATM, SONET and 10 Gigabit WAN Ethernet to cross service provider networks. Features If we look at the software features required in the core, QoS moves to an enforcement and re-prioritization function, so that the policies set at the edge are carried out, or maybe adjusted as needs dictate. We also need more granular queuing for the egress traffic, as information going out of the core is usually destined for lower bandwidth devices or connections where congestion is more likely to occur. In this event it is important that we queue traffic accordingly. From a redundancy perspective we need the ability to make repairs or changes with minimal impact to the user base. Hot swap of all components in our core switch is a baseline requirement here. Network level redundancy schemes such as Split-MLT, VRRP and ECMP are also required to deliver network resiliency. Because the core switch is performing this cross-connect function, we need to be able to make more granular decisions on where traffic is destined to go.
Routing technologies for IP, IPMC and IPX are baseline. The ability to look further into the packet and make forwarding decisions on a user or application basis via application switching is desirable, as it gives us the ability to consolidate the function of the data center switch into the core device to further reduce the box count. Performance It seems today as though the quality of a core switch is measured by the headline number of packets per second the switch can forward. While a headline number may grab attention on the data sheet or in a laboratory test, this has to be put in context. The job of a core switch is to cross connect sources on one side to destinations on the other. Local forwarding capability between two adjacent ports on a line card may be beneficial if that is where the traffic flows. The problem is that in a core switch this is not where information usually flows. In an edge switch this local forwarding capability is actually beneficial, especially for applications like VOIP, where peer-to-peer traffic between two handsets could well be between two ports on the same unit within a stack or between two ports on the same line card in a modular switch. The traffic pattern in a core switch is very different. What we need here is the ability to forward traffic with consistent performance, low latency and jitter regardless of which slot and port the traffic needs to flow between. To take advantage of or rely on the local forwarding capability in a core switch, complex traffic engineering has to be done to ensure that traffic actually flows between ports 1 and 2 on card one rather than between port 1 on card one and port 2 on card three. Local forwarding capability should be seen as a bonus and not the base.
9
Passport 8600 Routing Switch
Modular Platform Passport 8000 family Layer 2 Switching Layer 3 IP, IPMC and IPX Routing Layer 4-7 Application Switching Ethernet 10/100TX, 100FX Gigabit SX, LX, ZX, XD & CWDM 10 Gigabit LR & LW ATM and SONET DS3, OC-3 and OC-12 Gateway functions Switching/Routing done in Ethernet The Passport 8600 is Nortel Networks' flagship core routing switch for the Enterprise market. The Passport 8600 is part of the Passport 8000 family and shares the same chassis and power supplies, providing common sparing and investment protection for customers utilizing other Passport 8000 family switches. The Passport 8600 provides Layer 2 switching and Layer 3 routing for IP, IP Multicast and IPX traffic at wire speed. That is to say, the performance of the switch is wire speed irrespective of the function being performed. The Passport 8600 forwards traffic with the same high performance and consistent low latency whether it is forwarding at Layer 2, routing IP, IPMC or IPX traffic at Layer 3, or performing quality of service functions or security filtering at Layer 4. The Passport 8600 also benefits from having Alteon technology inside, enabling it to perform complex L4-7 application switching functions such as load balancing or content switching on every port. The Passport 8600 is primarily an Ethernet switch with high density 10/100, 100FX and Gigabit Ethernet modules, and also offers 10 Gigabit Ethernet in both LAN and WAN interfaces. The Passport also offers ATM and SONET capability for connection to MAN/WAN and service provider networks. These ATM and SONET capabilities are considered gateway functions, as all the main switching and routing is done in the Ethernet domain.
10
Flexible Platform It fits in the wiring closet delivering high density (384) 10/100 Ethernet ports for user connections It fits in the network center delivering high density (128) Gigabit Ethernet ports for aggregation, riser and MAN connections It fits in the data center delivering high density L4-7 application switching for server selection & load balancing The Passport 8600, while designed originally as a core enterprise switch, has proven popular in a variety of roles. Edge In the wiring closet the resilience and Quality of Service capability made the 8600 so popular that we developed an additional module specifically for that purpose. Adding the 8632TX, a module with 32 x 10/100 ports and 2 x GBICs, allows our customers to deploy an edge solution with 10/100 ports and 4 x Gigabit uplink ports (2 x 8632TX + 6 x 8648TX). A function called multimedia filters was also added to simplify the process of setting QoS filters for VOIP and video traffic. Core In the core or network center, the real target for the Passport 8600, we extended capacity to 128 Gigabit ports and added 10 Gigabit Ethernet. We also added new software features to improve network resiliency, increase scalability and simplify the process of installation and management. Data Center In the data center the addition of Alteon application switching technology makes the Passport 8600 the benchmark for high performance application switching. MAN/WAN In the MAN and WAN, 10 Gigabit WAN, ATM and SONET as well as CWDM optical networking capability and new software features such as BGP-4 allow the Passport to replace existing software based routers to improve performance. The ability to use the Passport 8600 in multiple roles simultaneously reduces the complexity of the network by reducing the number of network elements or boxes, which in turn improves reliability. Both simplicity and reliability help to reduce the total cost of ownership of the IT environment.
It fits in the MAN/WAN delivering Gigabit Ethernet, 10 Gig E, CWDM, ATM and SONET connections
11
Resilient Platform Connections are made and packets are processed in hardware here by up to 8 I/O modules. Heat is removed here by 2 hot swappable cooling modules. Packets are transported to the egress port here through 2 load sharing CPU/Switch Fabric modules. Power is supplied here by up to 3 hot swappable AC or DC load sharing P.S.U.s. The Passport 8600 is an extremely resilient platform offering redundancy options at every level. Power Supplies Firstly there is redundant power fed via up to three power supplies. The Passport 8600 automatically balances the load draw from the available power supplies to feed the chassis. Both auto voltage 110/220 AC and 48v DC supplies are available and can be mixed in the same chassis. Power supplies can be added or removed while the chassis is in operation. Cooling Fans Power produces heat, and this heat is removed from the chassis through redundant cooling modules that can be removed for maintenance purposes, such as cleaning, while the chassis is in operation. The chassis has been designed to run for some time on a single cooling module. I/O Modules Connectivity to the Passport 8600 is achieved through up to 8 (in the 10 slot chassis) I/O modules. All packet processing takes place on the I/O modules, further enhancing performance and resilience. Multiple ports from different modules can be bonded or 'trunked' together to form a single logical pipe, with the traffic balanced across available links for greater bandwidth and superior resiliency. CPU/Switching Fabrics The CPU/Switching Fabric module is a combination module that handles the cross-connect switching between ports in the switch fabric, and handles updating of forwarding/routing tables and maintenance functions in the CPU. Up to two switching fabric modules can be installed in the Passport 8600, with forwarding evenly distributed across the available fabrics and the CPUs mirrored to ensure zero impact fail-over.
12
Scalable Platform
[Table: sparing options (power supplies, CPU/switch fabric, cooling, NEBS compliance) and 10/100 and Gigabit port density for the 3 Slot, 6 Slot, 10 Slot and 10 Slot CO chassis]
The Passport 8600 can utilize any of the available Passport 8000 family chassis options, from the small form factor 3-slot chassis through to the NEBS compliant 10-slot CO chassis for challenging environments or service providers. Each chassis offers different numbers of I/O modules and different levels of resiliency. The table shows in detail the resiliency options offered by each chassis as well as the port density that can be achieved from each with a full complement of either 10/100 or gigabit ports. 3 Slot The 3 slot chassis is ideal for small core environments or as an intermediate layer switch. Dual power supplies, a single switching fabric and two I/O modules keep rack space to a minimum. 6 Slot The 6 slot chassis is ideal for applications such as the data center, where additional I/O modules are required or where the additional resiliency offered by dual switching fabrics is required. The 6 slot chassis also shares the same power supplies as the 10 slot versions. 10 Slot The 10 slot chassis offers the highest density from the 8 I/O modules and the greatest resiliency from multiple power supplies, dual switching fabrics and dual cooling modules. CO Chassis The NEBS compliant 8010CO chassis has been designed for service providers or enterprise customers in more rugged environments. The 8010CO chassis supports 3 AC or DC power supplies, 2 switching fabrics and 8 I/O modules.
13
Passport Architecture
CPU/Switching Fabric Modules: CPU (forwarding table processing) and FABRIC (I/O module cross connect). All packets take the same path through shared memory switching fabrics to the egress port, ensuring consistent low latency and jitter and unmatched multicast scaling. I/O Modules: ASIC (lookup and packet processing) and MEMORY (forwarding and filtering tables). All packet processing occurs on the I/O modules with lookup from local memory, ensuring scalability and wire rate performance. Custom ASICs (RAPTARU) per port perform packet filtering, forwarding, routing, security and QoS functions. The power of the Passport 8600 comes from the architecture, which uses distributed ASIC based forwarding and shared memory switching fabrics. This architecture is not new; in fact it is remarkably similar to that seen in the original routing switch, the Passport 1000 range of products, launched in 1998. This proven architecture will also be seen in our next generation platform due out in 2003. What has changed since 1998 is that performance has increased by an order of magnitude and the product has got a lot smarter. This ability to continually evolve our routing switch products around a single proven architecture brings huge advantages in terms of reliability. We are also able to bring rich new innovative features to market quickly as our customers' needs change. The Passport architecture is the reason the Passport 8600 is such a stable product and the reason behind our multicast performance used in some of the world's largest stock exchanges. Technical Notes The CPU manages the chassis. It is also constantly monitoring for forwarding/routing table updates, which it will process and then send the updated forwarding/routing tables to the local memory attached to each port.
The CPU also monitors the health of the system and interacts with the network management applications. Apart from this the CPU has very little to do. Most of the work is handled by the I/O modules. As a packet enters the Passport 8600, a custom ASIC on every port inspects the packet and does a lookup in a copy of the forwarding table stored locally in memory attached to the port. This has advantages over a central CPU or supervisor based system in that the CPU is no longer the bottleneck or limit on performance. This local lookup is also much faster. Based on the results of the lookup, one or more of four things are going to happen. We are going to modify and forward the packet (route/switch), we are going to prioritize the packet (QoS), we are going to copy the packet (mirroring) or we are going to drop the packet (security). In every case, apart from drop, we are going to send the packet to one of the switching fabrics to be passed on to the egress port(s). As the packet enters the switching fabric it is written to very fast shared memory to be read by the appropriate egress module(s). We use a shared memory architecture because it has distinct advantages over the alternative cross-bar architecture. Shared memory is better for multicast applications, which we will explain later. Shared memory is also more cost effective, helping us to reduce the cost of ownership without sacrificing performance. The I/O modules are constantly monitoring the switch fabric(s) to see if there are any packets to forward. A unicast packet will be read from memory by a single module, whereas multicast/broadcast packets may be read by multiple modules for forwarding. As the packet is read from the fabric by the I/O module, it is queued in one of eight hardware queues on the egress port based on QoS priority. The packet is then read from the queue by the egress port and transmitted.
This whole forwarding process takes around 10us regardless of the source or destination, and regardless of the function being performed (switching, routing or QoS). This consistent forwarding path and low latency is ideal, especially when deploying latency- or jitter-sensitive applications like VOIP.
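The four lookup outcomes above (forward, prioritize, mirror, drop) can be sketched as a simple local table lookup. This toy Python model, with all class names and addresses invented, illustrates only the decision made by each port, not the ASIC itself:

```python
# Minimal sketch (names and table contents invented) of the per-port
# lookup described above: each ingress ASIC consults its own local
# copy of the forwarding table, never the central CPU.

FORWARD, PRIORITIZE, MIRROR, DROP = "forward", "prioritize", "mirror", "drop"

class PortASIC:
    def __init__(self, local_table):
        # Each port holds its own copy of the forwarding/filtering
        # tables in attached memory, so lookups stay local and fast.
        self.table = dict(local_table)

    def classify(self, dest):
        # One of four outcomes: forward, prioritize, mirror or drop.
        # An unknown destination is simply dropped here for security.
        return self.table.get(dest, DROP)

asic = PortASIC({"10.0.0.5": FORWARD, "10.0.0.9": PRIORITIZE, "10.0.0.7": MIRROR})
print(asic.classify("10.0.0.5"))   # forward
print(asic.classify("192.0.2.1"))  # drop
```

Because every port carries the same table, the decision cost is identical wherever the packet enters, which is the basis of the consistent 10us path above.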
14
Passport L2 Switching HA Mode Distributed MLT Split-MLT
Outer switches are dual-homed using standard link aggregation protocols. HA Mode: CPU mirroring ensures zero-impact failure. Distributed MLT: link aggregation over multiple modules removes a single point of failure. Split-MLT: link aggregation used for network resiliency. Two switches act as one. All links are active and passing traffic. No need for the additional complexity of multiple VLANs. Two Passport 8600s share forwarding tables and act as one through the IST. If we look at the Layer 2 switching functions for a moment we see some unique differentiators for the product. Apart from the usual switching functions and protocol support, there are three key features that make the Passport stand out. HA Mode, Distributed MLT and Split-MLT deliver increased resilience at the link, switch and network level that is an order of magnitude faster than alternative competitive solutions. HA mode provides CPU mirroring for zero-impact fail-over, while Distributed MLT provides extra switch-level resiliency by allowing link aggregation across modules. Split-MLT is our simpler, faster alternative to spanning tree for providing Layer 2 network resiliency. Split-MLT allows link aggregation across multiple switches, providing sub-second fail-over and application protection at the network level. The recent addition of 9K jumbo frames further improves performance by increasing the actual data throughput of links: the larger packet size means that fewer packets and fewer packet headers are needed for the same amount of data. These key features also contribute to a less complex configuration and network, which further increases reliability and performance while at the same time reducing the cost of ownership. Technical Description: HA Mode. In the past, almost everything (power, fans and ports) in a switch or router could be duplicated for redundancy, and in the event of failure the backup systems would take over with very little impact to forwarding. 
The exception to this was the CPU, where a second CPU could be installed but was not 'active'; in the event of failure of the primary CPU, the switch would need to reboot for the backup CPU to take over. This reboot cycle would impact traffic as the new CPU went through its self-test cycle, booted the switch and then re-learned the forwarding tables. With HA mode, the second CPU is still in backup mode, but it now has a copy of the forwarding table and knows the state of the chassis. Basically the backup CPU knows everything the primary CPU does, it just doesn't act on it. The one additional function the backup CPU performs is to monitor the health of the primary CPU. If the primary CPU fails, the backup immediately takes over. Because the backup knows everything the primary one did, there is no need to reboot the chassis or to go through a learning process. With HA mode a CPU failure has zero impact on forwarding. Distributed MLT. Multi-Link Trunking (MLT), or link aggregation, is the ability to take multiple physical links between two switches and 'bond' them together to form a single logical higher-bandwidth link or 'trunk'. Traffic is spread, or balanced, across all the available links, and in the event of a link failure the traffic is redistributed across the remaining links. This process typically occurs in under a second. Distributed MLT takes link aggregation a step further by allowing the 'trunk' to be made up from physical links that are spread across different modules within a chassis, or in the case of our BayStack stackable line, across different units within a stack. Distributed MLT gives excellent resilience between any two supporting switches. Split MLT. Split MLT takes the concept of Distributed MLT a step further by extending resilience to the network level. The one drawback to Distributed MLT was that all the links had to start on the same switch and end on the same switch. 
You were protected against a link, port or module failure, but not complete switch failure. Switch failure, or network-level resiliency, used to be taken care of with duplicate links and by other protocols. At Layer 2 the protocol for this network resiliency was spanning tree. Spanning tree is a protocol that runs on all the switches, where one switch is elected as the master, or root bridge. The master switch goes through a discovery process to collect information from all the other switches in the network about who is connected to whom, and then draws a virtual map. The master then instructs all the other switches to block certain ports so that there is only one active path between any two points. The additional paths have to be blocked to prevent loops in the network that would bring the network down. Blocking paths is wasteful, as all the additional switches, links and ports you have paid for and installed in the network sit there doing nothing until a failure occurs. When a failure occurs, this discovery process starts again to find the currently available best paths. Apart from the waste, this process would not be that bad except that it is very slow, with larger, more complicated networks taking longer to converge, typically between 30 and 90 seconds during which no traffic is going anywhere. Even a best-case 30 second outage is enough to impact most applications with crashes and error messages. In the case of streaming applications like video and VOIP this impact is very obvious: for video you have to reconnect to the stream, and for VOIP you have to re-dial and connect to the person you were talking to again. If you have been waiting for 20 minutes in a call center queue this is positively frustrating, but if you were talking to a customer or the emergency services then this is at best costly and at worst dangerous. Split-MLT takes the concept of link aggregation, or MLT, a stage further to make it a network resiliency protocol. 
Split-MLT allows a trunk that starts on a single switch to be terminated on two separate Passport 8600s. Split-MLT does this by allowing the two Passport 8600s to share their Layer 2 forwarding tables and act as a single switch. Now we can use the fast, sub-second fail-over characteristics of MLT as a network resiliency protocol. Further than that, we can now take advantage of the load sharing capabilities of MLT to provide automatic network-level load sharing. Split-MLT is also completely backwards compatible, so this trunk can begin on any switch that supports link aggregation. Split-MLT provides faster fail-over, better network utilization, more bandwidth and a simpler network design than alternative Layer 2 redundancy protocols, while providing investment protection to customers with existing 3rd-party switches on their networks. “Split-MLT is the only mechanism that will protect sensitive applications like VOIP from network outages.”
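The load-spreading and redistribution behaviour described for MLT can be modelled with simple flow hashing. The sketch below uses CRC32 purely for illustration (it is not Nortel's actual hash): packets of one flow always pick the same member link, and when a member fails the flows re-spread over the survivors.

```python
# Illustrative flow-hashing model for MLT link aggregation.
import zlib

def pick_link(src, dst, links):
    # Hash the flow identifiers so per-flow packet ordering is kept
    # while different flows balance across the trunk members.
    return links[zlib.crc32(f"{src}-{dst}".encode()) % len(links)]

links = ["port1", "port2", "port3", "port4"]
first = pick_link("10.0.0.1", "10.0.1.1", links)

links.remove(first)  # simulate a member link failing
second = pick_link("10.0.0.1", "10.0.1.1", links)
print(second in links)  # True: the flow lands on a surviving link
```

With Split-MLT the member links simply terminate on two chassis instead of one; the same rehash-on-failure behaviour is what delivers the sub-second fail-over.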
15
Passport L3 Routing Routing Protocol Support VRRP Backup-Master
A single gateway address is now balanced across both Passport 8600s. Routing Protocol Support: RIP1/2, OSPF and BGP4. VRRP Backup-Master: simplifies network configuration, better network utilization. VRRP fast interval timers: faster VRRP fail-over, sub-second to match Split-MLT. IP and IPX routing policies: improved control of routes, increased security and control. Backup-Master allows an 8600 that is in backup mode to route traffic. At Layer 3 the Passport 8600 is a fully featured router supporting wire-speed routing for IP, IP Multicast and IPX traffic. Apart from the standard routing protocols, the Passport 8600 has several key differentiators. VRRP with fast interval timers and backup-master capability provides for simpler network design, faster fail-over and improved network utilization at Layer 3. IP and IPX routing policies allow greater control of the routing domain for resilience or security purposes. Technical Description: VRRP Backup-Master. We have enhanced the standard gateway protection protocol, VRRP, by adding a couple of extensions that improve Layer 3 network resiliency and simplify network configuration. Backup-master is an extension to VRRP that simplifies network configuration by reducing the number of gateway IP addresses needed, thereby simplifying IP addressing and DHCP. Normally VRRP protects the gateway IP address by having two routers support that IP address. The two routers exchange a hello protocol and decide who is to be the master. The master then routes packets destined for that IP address while the other, or backup, switch monitors the master and takes over in the event of failure. This fail-over takes about 3 seconds, or 3 missed hellos, to complete. With routing switches and Split-MLT deployed in networks, there is a possibility that a Passport 8600 will receive a packet at Layer 2 that it has a route for at Layer 3, but has to forward to the other Passport 8600 holding the master role to be routed. 
Backup-Master capability allows the backup router to route the packet as well. This reduces the number of hops a packet has to take in the network, reducing latency, and provides for better network utilization. VRRP Fast Interval Timers. VRRP fast interval timers (FAI) improve VRRP fail-over times by allowing the VRRP interval timers to be tuned according to the network requirements. Standard VRRP fail-over requires that three consecutive hello packets are missed. The standard hello interval is one second, so the best case for VRRP fail-over in standard form is 3 seconds. This was fine when VRRP was dependent on slower Layer 2 protocols like spanning tree to converge first. Now, with Split-MLT reducing Layer 2 fail-over to under a second, the standard fail-over interval of 3 seconds does not make sense. FAI allows the hello interval to be tuned down to 200ms, reducing VRRP fail-over to under a second and matching Split-MLT. IP and IPX Routing Policies. Usually routers exchange all the routes they know about with their neighbors. Most of the time this is fine, as it improves resiliency by increasing the amount of information a router has to make its determination on the next hop in the network. Sometimes the routing tables need to be manipulated to provide improved network utilization, tighter security or increased resilience. Routing policies allow control of who will advertise routes to whom, which routes will be advertised, what routes will be accepted from others, and also whether routes, hop counts or costs will be adjusted when sent in any particular direction. “Backup-Master simplifies network design by balancing traffic and reducing the number of subnets/DHCP scopes.”
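The timer arithmetic above is simple enough to state explicitly: per the notes, VRRP declares the master dead after three consecutive missed hellos, so fail-over time is the hello interval multiplied by three.

```python
# VRRP fail-over time as described in the notes: three consecutive
# missed hellos mark the master as down.
def vrrp_failover_seconds(hello_interval_s, missed_hellos=3):
    return hello_interval_s * missed_hellos

print(vrrp_failover_seconds(1.0))            # 3.0 (standard 1s hello)
print(round(vrrp_failover_seconds(0.2), 1))  # 0.6 (200ms fast timers)
```

This is why the 200ms fast interval matters: it brings the Layer 3 fail-over under one second, in line with Split-MLT at Layer 2.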
16
Passport L4-7 Application Switching
Improved Network Utilization: load balancing of IP applications, server selection with health-checking, metering and controlling bandwidth usage. Improved Performance: appliance (cache, SSL) redirection, streaming media (language splicing), providing fault tolerance. Tighter Security: Network Address Translation, DoS attack buffer, processing traffic filters. The addition of Alteon technology brings the full Alteon feature set to the Passport. By adding Layer 4-7 switching capability, the Passport 8600 is able to look even deeper into the packets and make more granular forwarding decisions at an application level. Using this technology we are able to use the network to extend resilience up into the application layer. Having this integrated into the core switch, right in the middle of the data streams, is of huge value. Improved Network Utilization: The majority of network traffic is requests from users that need to be fulfilled by servers. Originally the network would simply pass these requests on to the destination based on the information it had, such as MAC address or IP address. The network had no concept of whether the user got the information they wanted, or even whether the server was able to handle the request. By extending the network's capabilities to look at Layer 4-7 information we can improve this situation. Firstly, we can check to see if the server is healthy and able to respond to the request. For redundancy and performance purposes, there may need to be several servers handling the same type of requests from the users. Using L4-7 technology, the Passport 8600 can act as a proxy to these servers, so that the user has only one place to contact, and then distribute the users' requests based on various criteria, such as the server able to give the best performance, a server specified for priority users, or the specific content type. 
This load balancing, or content switching, can be utilized for virtually any IP device or application and improves overall performance by getting the most out of the infrastructure. Improved Performance: Another function of L4-7 technology is the ability to determine if there is a better way of handling a request. Typically web pages contain a large amount of static information that doesn't change all that often. Web caches can serve about 3 times the content at 1/3 of the cost of a web server, so the ability to redirect requests for static information, such as images, that is better served by the cache is of great value. With so much more focus on security, and the move towards more and more web-based applications, the SSL-based extranet is becoming a reality. SSL using strong encryption is an extremely processor-intensive task and takes its toll on the servers running it. A server that could typically handle 10,000 sessions can be reduced to handling only 60 when using strong encryption and handling the key exchange. By identifying this type of traffic we can redirect it to a device specifically designed to handle encryption and key exchange in hardware. By off-loading this task from the server, our 10,000-session server can still serve 10,000 sessions and the data is still protected. Tighter Security: By intercepting traffic within the network we can do a lot to improve security. Network address translation is the ability to proxy, or masquerade, one IP address with another. This is usually used when it is desirable to hide the true identity (IP address) of a service or device. By doing this in the core switch we remove the need for additional devices and take advantage of the built-in performance and resiliency of the network. Intercepting requests also gives us the ability to thwart Denial of Service (DoS) attacks. 
With the core switch intercepting the request, any attempt to bog the server down with useless information hits the core switch first, and because of the specific hardware involved, this useless information, or these requests, can be identified and discarded without ever touching the server. We can also filter the data and requests and either prioritize or discard them as our rules dictate. “The Alteon Web Switching module brings Alteon's market-leading L4-7 capabilities to the Passport 8600.”
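The proxy-plus-health-check behaviour described above can be sketched in a few lines. This hypothetical model (server names and the round-robin policy are invented; real content switches support many selection policies) shows one virtual address fronting several real servers, with unhealthy servers skipped:

```python
# Toy model of L4-7 server load balancing with health checking.
from itertools import cycle

class VirtualServer:
    def __init__(self, health):
        self.health = health                  # server name -> healthy?
        self._rr = cycle(sorted(health))      # round-robin for simplicity

    def pick(self):
        # Skip any server that failed its last health check.
        for _ in range(len(self.health)):
            server = next(self._rr)
            if self.health[server]:
                return server
        return None  # every real server is down

vip = VirtualServer({"web1": True, "web2": False, "web3": True})
chosen = [vip.pick() for _ in range(4)]
print(chosen)  # ['web1', 'web3', 'web1', 'web3']
```

The user only ever sees the one virtual address; the failed server (`web2` here) simply stops receiving requests until its health check passes again.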
17
Passport Multicast. PIM-SSM acts like a static route for multicast. The shared memory architecture delivers superior multicast performance. The Passport 8600 shared memory architecture is the basis of unequalled multicast scaling and performance. PIM-SSM allows source-specific multicast trees to be created, essential in mass multimedia (TV) applications. Fast join and leave capability improves stream setup time and reduces bandwidth. I mentioned before how the architecture of the Passport 8600 is particularly suited to multicast applications. With shared memory, multicast packets take no more resources to process to multiple destinations than a single unicast packet does to one. We use this to advantage in the Passport 8600 to increase the scalability of multicast applications to new heights. The Passport 8600 supports all the standard multicast protocols such as IGMP, DVMRP and PIM-SM. A key feature added recently to the Passport 8600 is PIM-SSM. Protocol Independent Multicast (PIM) reduces bandwidth utilization in larger networks by requesting a specific join to a multicast tree prior to traffic flow. Source Specific Multicast (PIM-SSM) is an extension to PIM that allows a multicast tree to be created specific to a single source. This static configuration improves security and setup times and is ideal for applications where permanent sources of information exist, such as stock tickers or TV channels. Fast join and leave is a Nortel extension that speeds up the time to join a multicast stream and to leave it. This is ideal for applications where a large number of multicast streams exist and the user wants to switch quickly between them. The application that benefits most from this is TV viewing, where users channel-hop. The fast join ensures they see the new channel quickly, whereas the fast leave ensures that the multicast stream is pruned back where needed to avoid unnecessary bandwidth utilization. 
Technical Description: PIM-SSM, Fast Join and Leave. Fast join and leave allows selection of a multicast stream just like TV channel hopping.
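The shared-memory multicast claim above reduces to a simple back-of-envelope: the packet is written to the fabric once and each egress module reads the same buffer, so fabric writes do not grow with the receiver count, whereas a per-port replication model would copy the packet once per receiver. A minimal sketch:

```python
# Why shared memory scales for multicast: one write, many reads.
def fabric_writes(receivers, shared_memory=True):
    # With shared memory the packet is written once regardless of the
    # number of egress modules; a replication model writes N copies.
    return 1 if shared_memory else receivers

print(fabric_writes(100, shared_memory=True))   # 1
print(fabric_writes(100, shared_memory=False))  # 100
```

This constant per-packet cost is why a 100-receiver TV stream loads the fabric no more heavily than a single unicast flow.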
18
Passport QoS ASIC Based Flow Filters Multi-level capabilities
“Passport Xpress Classification performs wire-speed lookup and packet classification on a per-port basis.” ASIC Based Flow Filters: Security, Wire Speed QoS. Multi-level capabilities: Layer 2, 3, 4 and 7; 802.1p (L2); XC. Packet fields: DA, SA, IP-DA, IP-SA, TCP-Port, DATA, FCS. Granular Queuing: 8 hardware queues ensure application delivery. Multi-media filters: pre-set VOIP and multi-media filters simplify QoS deployment. The benefits of having ASIC-based forwarding on every port extend beyond simple switching and routing to other key functions, or decisions, that can be made as part of the forwarding process without impact to performance or throughput. These ASIC-based flow filters are implemented using Passport Xpress Classification (XC). The filters can be used for security purposes, to simply discard unwanted traffic, or to mark and enforce a level of QoS. Depending on your exact needs, you can implement filters at Layers 2, 3, 4 and, with the addition of the WSM, Layer 7. When traffic matches a particular flow filter, one of 4 actions is going to happen. One action is that we discard the packet completely for security purposes. Another action is that we forward the packet as normal, but make a note that the packet matched the filter. The second action is to … IP telephony filters. Forward to next hop. “With 8 hardware queues per port the Passport 8600 has QoS granularity for the most demanding environment.”
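A flow filter is essentially a match on packet fields followed by an action. This toy classifier (field names, values and rules are all invented for illustration) shows the first-match pattern: drop for security, or remap the packet into one of the eight hardware queues for QoS.

```python
# Toy multi-level flow-filter classifier: first matching rule wins.
RULES = [
    {"match": {"tcp_port": 23}, "action": ("drop", None)},  # block telnet
    {"match": {"dscp": 46},     "action": ("queue", 7)},    # EF to top queue
]

def classify(packet):
    for rule in RULES:
        # A rule matches only if every one of its fields matches.
        if all(packet.get(f) == v for f, v in rule["match"].items()):
            return rule["action"]
    return ("queue", 0)  # default best-effort queue

print(classify({"tcp_port": 23}))              # ('drop', None)
print(classify({"dscp": 46, "tcp_port": 80}))  # ('queue', 7)
print(classify({"tcp_port": 80}))              # ('queue', 0)
```

On the Passport 8600 this matching happens in the XC ASIC at wire speed; the Python above only illustrates the classification logic, not the hardware.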
19
Passport Configuration
1. Pick any starter pack: 3, 6 or 10 slot chassis.
2. Add redundancy options: additional power supplies, a second switching fabric.
3. Choose the I/O modules: 'E' or 'M' modules; Ethernet 10/100, Gigabit and 10 Gigabit; ATM/SONET DS3, OC3 and OC12; Application Switching.
20
Passport Advantage High Availability Operational Simplicity
Industry-leading reliability features deliver the only networking solution capable of protecting sensitive applications like VOIP from network outages. Operational Simplicity: the simple approach to network design and deployment, with embedded intelligence, further enhances reliability and at the same time reduces costs. Low Cost of Ownership: High Availability and Operational Simplicity combine to deliver the best platform for One Network and increased ROI for the business.
22
Campus LAN Solution
23
Campus Architecture Access Layer Floor 1…………..Floor x
PCs, printers, etc. High-density 10/100 L2 Ethernet switching. Floor 1…………..Floor x. Aggregation Layer: consolidation point; mixture of 10/100 & Gigabit L2/L3 Ethernet switching. Design Issues: at Layer 2 these extra links need to be blocked to prevent network loops. This is usually implemented using a protocol called spanning tree (802.1d). Spanning tree protocol prevents these loops by deciding the best links to use and blocking all the rest. Basically you're paying for stuff you can't use! Building Core: Nucleus, Servers, Metro; high-density Gigabit L3 routing. Campus
24
Spanning Tree Features
Spanning Tree Protocol (STP): provides redundant paths and detects loops in L2 networks. Redundant links are activated only after failure; redundant links are not utilized for data traffic. Slow network convergence: a minimum of 30 seconds. Fast L3 redundancy protocols like VRRP and OSPF depend on slow STP convergence. Spanning Tree Protocol proprietary hacks and fixes (Uplink Fast, Port Fast, Fast Start): improve convergence time by seconds, but bandwidth is still wasted by blocked ports. 802.1w Rapid Spanning Tree Protocol: faster convergence, 5 seconds on failure; same re-convergence, 30 seconds plus, on repair; same restriction on redundant links. Spanning tree will not protect applications.
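The "paying for stuff you can't use" point lends itself to a back-of-envelope calculation: spanning tree blocks every redundant uplink, so a dual-homed wiring closet runs at a fraction of its installed uplink capacity until a failure occurs.

```python
# Usable uplink bandwidth under STP blocking vs. all-links-active.
def usable_uplink_gbps(links, gbps_per_link, blocking=True):
    active = 1 if blocking else links  # STP leaves one active path
    return active * gbps_per_link

print(usable_uplink_gbps(2, 1, blocking=True))   # 1 (STP blocks one uplink)
print(usable_uplink_gbps(2, 1, blocking=False))  # 2 (all links active)
```

With two Gigabit uplinks per closet, half the bandwidth you paid for sits idle under spanning tree; an all-links-active scheme such as Split-MLT uses it all.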
25
S-MLT Link Aggregation
Our fresh approach to the spanning tree problem. “Extends reliability benefits to attached 3rd-party switches through 802.3ad link aggregation.” Description: Split MLT makes the two core switches act as one at Layer 2. Standard link aggregation protocols are used for network resiliency as well as bandwidth. Both links are active and appear as one, with traffic balanced across all available links. Advantages: less complex than spanning tree; better bandwidth utilization; faster fail-over and recovery; protects applications from outages; in-service hitless upgrades. “Maintains the state of voice and video sessions through fail-over.”
26
Passport 8600 Campus 3 Slot chassis with SX Gig blades, configured as an L2 aggregator device with QoS enforced through Diffserv interrogation and hardware queuing 10 Slot CO chassis with mixture of SX, LX and XD Gig, configured as on ramp to Service provider OE network. 10 Slot chassis with mixture of SX and LX Gig blades, configured as an L3 core routing device with IP routing and QoS enforced through Diffserv interrogation and hardware queuing 10 Slot chassis with 10/100 blades, configured as an L2 edge device with QoS enforced through Diffserv marking and hardware queuing 6 Slot chassis with mixture of SX Gig and 10/100 Gig blades, configured as an L2 edge device in the server farm. Intelligent content switching through WSM blade.
27
Low Cost Optical Metro Solution
28
Metro Bandwidth Challenge
New multimedia applications require more bandwidth Multi channel Gigabit metro solution is the answer, but… Normally this would require Multiple expensive leased fiber runs for resilience or Expensive and complex DWDM equipment to reduce fibers Challenge is to provide High bandwidth services, while…. Keeping leased fiber costs to a minimum Without wasting fibers (dead sparing) Maintaining reliability (Application state)
29
3 Part Metro Optical Solution
16 Gigs on a single fiber. 3-Part Metro Optical Solution:
1. Colored GBICs in switches: standard interface, 8 'flavors', long reach (90km).
2. Optical MUX: fiber saver, 8 Gigs in, one fiber out.
3. Optical Add/Drop MUX: distributed 10 Gig solution, splits a wavelength in two, doubles the bandwidth; breakout one, pass the rest.
30
CWDM Metro Design Switch Switch Switch OMUX OMUX 8600 8600 OADM OADM
Gigabit channels bonded together with MLT for high bandwidth and faster fail-over. 1 Gigabit East and 1 Gigabit West deliver resiliency. Simple plug-and-play operation reduces deployment costs. The 'RED' channel is used for an additional IST link to increase bandwidth and redundancy in a distributed POP environment.
32
Backup Information
33
Ethernet Modules Hot swappable Wire speed routing
Gigabit connectivity with copper and fiber.

Module    Ports  Type                          Density
8648TXE   48     10/100BaseTX (RJ-45)          384
8624FXE   24     100BaseFX (MT-RJ)             192
8608SXE   8      1000BaseSX (SC)               64
8608GBE          1000Base GBIC (GBIC)
8608GTE          1000BaseTX (RJ-45)
8616SXE   16     1000BaseSX (MT-RJ)            128
8632TXE   32+2   10/100BaseTX (RJ-45) + GBIC   256+16
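The density column is consistent with ports-per-module multiplied by 8 I/O slots; the assumption here (mine, implied by the figures rather than stated on the slide) is a 10-slot chassis with two slots reserved for CPU/switching fabric modules.

```python
# Sanity check of the density column: ports per module x 8 I/O slots.
IO_SLOTS = 8  # assumed: 10-slot chassis minus 2 CPU/fabric slots

def chassis_density(ports_per_module):
    return ports_per_module * IO_SLOTS

print(chassis_density(48))  # 384 (matches the 8648TXE row)
print(chassis_density(24))  # 192 (matches the 8624FXE row)
print(chassis_density(16))  # 128 (matches the 8616SXE row)
```

The 8608SXE row also fits (8 x 8 = 64), as does the 8632TXE (32 x 8 = 256, plus 2 x 8 = 16 GBICs).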
34
ATM and SONET Modules Hot Swappable RFC 1483 routed and bridged PVCs
512 PVCs per module.

Module     Ports   Type                          Density
8672 ATME  2 MDAs  4-port OC-3 or 1-port OC-12   16 OC-3s or 4 OC-12s
8683 PoSE  3 MDAs  2-port OC-3 or 1-port OC-12   24 OC-3s or 12 OC-12s
35
Forwarding (XC). This entire process always takes less than 10uS:
1. Packet arrives at the I/O interface.
2. The Queue Manager sends the packet header to the XC.
3. The XC implements the packet policy and sends the packet to the Queue Manager.
4. The Queue Manager sends the packet to the switch fabric.
5. The switch fabric schedules packet forwarding into one of eight queues based on priority.
6. The packet is sent to the outbound I/O card and buffered if necessary.
7. The packet is transmitted on the outbound interface.
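The seven steps can be viewed as a fixed pipeline; every packet traverses the same stages, which is why the latency stays consistent regardless of source, destination or function. A minimal sketch:

```python
# The seven forwarding stages as a fixed pipeline.
PIPELINE = [
    "I/O interface: packet arrives",
    "Queue Manager: header to XC",
    "XC: apply packet policy",
    "Queue Manager: packet to switch fabric",
    "switch fabric: schedule into 1 of 8 priority queues",
    "outbound I/O card: receive, buffer if necessary",
    "outbound interface: transmit",
]

def forward(packet):
    # Every packet passes through every stage in order.
    for stage in PIPELINE:
        packet["trace"].append(stage)
    return packet

pkt = forward({"trace": []})
print(len(pkt["trace"]))  # 7: same path length for every packet
```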
36
Learning XC All updates performed out of band
The CPU resides on the Switch Fabric/CPU Module (a PowerPC CPU); the XC, Queue Manager and memory sit on each I/O module.
1. Policy is downloaded by the CPU to all XCs at startup.
2. Route/SPT updates and unknown addresses are passed to the CPU.
3. The CPU copies the new information to all XCs simultaneously.
All updates are performed out of band.
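The out-of-band update path described above can be sketched in a few lines: one control-plane event lets the CPU refresh every XC's local table, and the CPU never sits in the data path (the class names below are invented for illustration).

```python
# Sketch of out-of-band table updates from the CPU to all XCs.
class XC:
    def __init__(self):
        self.table = {}  # local forwarding table in port-attached memory

class ControlCPU:
    def __init__(self, xcs):
        self.xcs = xcs

    def push_update(self, prefix, next_hop):
        # Copy the new route to all port ASICs simultaneously; data
        # packets keep flowing through the XCs while this happens.
        for xc in self.xcs:
            xc.table[prefix] = next_hop

xcs = [XC() for _ in range(4)]
ControlCPU(xcs).push_update("10.1.0.0/16", "port3")
print(all(xc.table["10.1.0.0/16"] == "port3" for xc in xcs))  # True
```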