1
Fibre Channel Routing: Introduction
Fibre Channel routes frames using the Fabric Shortest Path First (FSPF) protocol, a derivative of the Open Shortest Path First (OSPF) protocol used on IP networks. This lesson explains the basic requirements for routing frames on a Storage Area Network (SAN) and how FSPF meets these requirements. It also describes how FSPF's limitations affect fabric design. Importance It is important to understand how Fibre Channel (FC) routes frames across a fabric because the characteristics of the routing protocol help determine the best configuration for the SAN. For example, you need to know how FSPF works in order to determine the required and effective number of interswitch links (ISLs) for a customer configuration.
2
Objectives Performance Objectives Upon completion of this lesson, you will be able to describe how frames are routed through an FC SAN using Cisco MDS 9000 Family switches. Enabling Objectives Discuss some of the issues that must be addressed in order to correctly route data through a SAN Describe the FSPF protocol and its components Describe the operation of the FSPF protocol Describe the limitations of FSPF Describe the impact of FSPF on fabric design Define a port-channel Describe how port-channels are configured Explain how Cisco MDS 9000 Family switches support QoS
3
Outline Routing Path Issues What is FSPF? FSPF Protocol Operations
Limitations of FSPF The Impact of FSPF on Fabric Design What is a Port-Channel? Configuring Port-Channels Quality of Service Prerequisites All previous lessons in Curriculum Unit 2, Module 2.
4
Routing Path Issues Without a routing protocol:
Each frame is routed based only on the destination ID Potential for looping of frames Potential for out-of-order delivery (Diagram: frames looping in the fabric, taking multiple paths, and arriving out of order.) Routing Path Issues Objectives Discuss some of the issues that must be addressed in order to correctly route data through a SAN Introduction This section describes some of the issues that must be addressed in order to correctly route data through a SAN. Facts In any complex network, the potential exists for routing errors: Frames can inadvertently be routed in a loop Frames can take different paths through the network and can be delivered out of order In addition, it is desirable to identify the most efficient path between the source and the destination to reduce delivery latency and minimize overall network utilization.
5
What is FSPF? Fabric Shortest Path First (FSPF):
Computes the least-cost path through the fabric, based on: Link speed Number of hops Avoids looping of frames All frames follow the same path Ensures in-order delivery in a stable SAN (Diagram: Frames 1 through 3 follow a single path through the fabric and are delivered in order.) What is FSPF? Objectives Describe the FSPF protocol and its components Introduction This section provides an overview of the FSPF protocol. Definition The FSPF protocol is the routing protocol used on FC SAN fabrics. Example The preceding diagram shows that FSPF selects a single path for a given I/O transaction, avoiding looping and ensuring in-order delivery. Facts The FSPF algorithm is a cost-based routing algorithm that computes the most efficient path between two connected nodes. The cost of a given path is based on two factors: The speed of each of the ISLs along the path The number of hops on the path Routing using a single fixed path prevents looping of frames and, in a stable SAN, ensures in-order delivery. In other words, if routes are stable, frames always follow the same path. However, if the least-cost route changes while a session is in progress, frames sent after the route change might take the new route.
6
What is FSPF? (cont.) (Diagram: the logical components inside a switch: F_Ports connected through the switching matrix, plus the LSD, Path Selector, and Router functions.)
Facts The preceding diagram illustrates the logical components of the FSPF protocol: The Link State Database (LSD) contains information about the fabric topology, including all ISLs in the fabric. The LSD is synchronized between all switches in the fabric so that all switches share a common view of the fabric topology. The Path Selector is a logical function that determines the most efficient path from a source node to a destination node. The Path Selector uses the information in the LSD to select paths using a cost-based weighting algorithm that takes into account the number of links along each path and the speed of each link. The Router routes frames to their destination ports. It uses the information from the Path Selector to build its routing table. When an F_Port receives a frame, the Router function reads the frame's destination address, looks up the least-cost path, and forwards the frame to the next switch on the selected path—before the switch has even finished receiving the frame.
7
What is FSPF? (cont.) Link State Database (LSD)
Link State Records (LSRs) and Link Descriptors (LDs): The LSD is a container for information about the fabric topology. The Link State Database contains entries for all active ISLs in the fabric. Switches automatically notify each other of changes to the fabric topology using a protocol defined by FSPF. Switches exchange information about the fabric by exchanging Link State Records (LSRs): An LSR fully describes the connectivity of an FC switch to other directly attached FC switches (ISLs between E_Ports and to B_Ports). An LSR is a set of information that describes the connectivity of all ISLs associated with one specific switch. In other words, there is one LSR for each Domain_ID in the fabric. If a fabric consists of five switches, there will be five LSRs in the LSD. (Diagram: the LSD contains one LSR per switch; each LSR contains one Link Descriptor per ISL.)
8
What is FSPF? (cont.) Link State Records and Link Descriptors
Example topology: switch Domain_ID=01 connects to switch Domain_ID=3F (port 5 on switch 01 to port 2 on switch 3F) and to switch Domain_ID=05 (port 1 on switch 01 to port 3 on switch 05). The resulting Link Descriptors are:
Link Descriptor: Owning Domain_ID: 01, Out Port Index: 05, Domain_ID of Neighbor: 3F, Neighbor Port Index: 02, Link Cost: nnnn
Link Descriptor: Owning Domain_ID: 3F, Out Port Index: 02, Domain_ID of Neighbor: 01, Neighbor Port Index: 05, Link Cost: nnnn
Link Descriptor: Owning Domain_ID: 01, Out Port Index: 01, Domain_ID of Neighbor: 05, Neighbor Port Index: 03, Link Cost: nnnn
Link Descriptor: Owning Domain_ID: 05, Out Port Index: 03, Domain_ID of Neighbor: 01, Neighbor Port Index: 01, Link Cost: nnnn
Link State Records: Each LSR contains a record header and one or more Link Descriptors (LDs): Each LD describes a single FC ISL. Each ISL is identified by the Domain_ID and output port index of the owning switch, and by the Domain_ID and port index of the neighbor switch. An ISL provides two paths (one in each direction); each path is described by a unique LD.
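To make the data model concrete, the following minimal sketch (in Python, with illustrative field names rather than the exact FC-SW record encoding, and an assumed cost of 1000 in place of the nnnn placeholders) shows the LSD as one LSR per Domain_ID, each holding one Link Descriptor per ISL from the example above.

```python
# Illustrative model of the FSPF topology database described above:
# one LSR per Domain_ID, one Link Descriptor per directional ISL.
from dataclasses import dataclass, field

@dataclass
class LinkDescriptor:
    owning_domain: int      # Domain_ID of the switch that owns this ISL end
    out_port: int           # output port index on the owning switch
    neighbor_domain: int    # Domain_ID of the directly attached switch
    neighbor_port: int      # port index on the neighbor switch
    cost: int               # advertised link cost (assumed 1000 here)

@dataclass
class LinkStateRecord:
    domain_id: int                                   # one LSR per Domain_ID
    descriptors: list = field(default_factory=list)  # one LD per ISL

# The LSD for the three-switch example: Domain 0x01 connects to 0x3F and 0x05.
lsd = {
    0x01: LinkStateRecord(0x01, [LinkDescriptor(0x01, 5, 0x3F, 2, 1000),
                                 LinkDescriptor(0x01, 1, 0x05, 3, 1000)]),
    0x3F: LinkStateRecord(0x3F, [LinkDescriptor(0x3F, 2, 0x01, 5, 1000)]),
    0x05: LinkStateRecord(0x05, [LinkDescriptor(0x05, 3, 0x01, 1, 1000)]),
}
print(len(lsd), "LSRs in the LSD")   # one per switch in the fabric
```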
9
FSPF Protocol Operations
Five stages of the FSPF protocol: Hello protocol Initial topology database synchronization Topology database maintenance Path discovery Path computation FSPF Protocol Operations Objective Describe the operation of the FSPF protocol Introduction This section describes the five stages of operation of the FSPF protocol. Facts There are five stages associated with the FSPF protocol: 1. Hello protocol 2. Initial topology database synchronization 3. Topology database maintenance 4. Path discovery 5. Path computation Each of these FSPF stages uses switch internal link services (SW_ILS) and Class F service. However, unlike other SW_ILS operations, there are no expected reply sequences. The SW_ILS request is both the first and last sequence of the exchange. Responses are communicated in a separate SW_ILS request sequence using a new exchange. Operation of the FSPF protocol stages can be represented using a finite state machine. A separate instance of this state machine operates in every E_Port in the fabric.
10
FSPF Protocol Operations Stage 1—The Hello Protocol
After a switch acquires a Domain_ID, it begins the process of building a routing table: Does not know if neighbor switch has acquired a Domain_ID Begins transmitting Hello messages to its neighbors on all initialized ISLs Exchanges Domain_IDs with all neighbors After two switches have exchanged Domain_IDs: The ISL is active FSPF topology database synchronization can begin Stage 1—The Hello Protocol: The first stage of the FSPF protocol is called the Hello protocol. The Hello protocol is used to determine the status of the link connected to the switch’s immediate neighbor. The switches use the Hello protocol to exchange Domain IDs with each other. After two switches have exchanged Domain_IDs, the ISL is active and the switches can proceed to the next stage of the FSPF protocol.
11
FSPF Protocol Operations Stage 1—The Hello Protocol (cont.)
Hello messages act as a 'heartbeat' Default Hello Interval = 20s Default Hello Dead Interval = 80s Hello protocol messages are transmitted on a periodic basis on each interswitch link, even after two-way communication is established. Periodic Hello messages provide a mechanism to detect a switch that has failed. In effect, the Hello messages act as a heartbeat between the switches. If a switch fails to receive a Hello in the expected time, it assumes the neighbor switch is no longer operational: The Hello Interval is the time in seconds between Hello messages sent by this port. Its default value is 20 seconds. The Dead Interval is the time in seconds this port will wait for a Hello message from the attached port before removing the route to that port from the LSD. Its default value is 80 seconds. Note that the default values of these intervals mean that FSPF can take up to 100s to become aware of a link failure. You can lower these values to promote faster recovery when a link fails, but keep in mind that Hello messages are sent periodically on every ISL, so smaller Hello Interval values increase control-traffic overhead. The Hello Dead Interval should generally be set to 4 times the Hello Interval to avoid triggering unnecessary FSPF route computation if Hello messages are lost due to congestion.
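As a conceptual illustration only (not Cisco's implementation), the sketch below shows the heartbeat bookkeeping on a single ISL using the default 20-second Hello Interval and 80-second Dead Interval.

```python
# Conceptual sketch of Hello/Dead interval handling on one ISL. A neighbor is
# declared down when no Hello has arrived within the Dead Interval, which by
# default is four times the Hello Interval (20 s and 80 s).
import time

HELLO_INTERVAL = 20.0   # seconds between transmitted Hellos (default)
DEAD_INTERVAL = 80.0    # seconds without a received Hello before the
                        # neighbor's routes are removed from the LSD (default)

class NeighborState:
    def __init__(self):
        self.last_hello_rx = time.monotonic()

    def on_hello_received(self):
        self.last_hello_rx = time.monotonic()   # heartbeat refreshes the timer

    def neighbor_is_dead(self):
        return time.monotonic() - self.last_hello_rx > DEAD_INTERVAL
```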
12
FSPF Protocol Operations Stage 2—Database Synchronization
After communication has been established, the switches synchronize their topology databases by exchanging LSRs Each switch exchanges its entire LSD with its neighbors in a Link State Update (LSU) When the recipient of the LSU has processed the database, it sends back a Link State Acknowledgement (LSA) (Diagram: Switch A and Switch B exchange LSU(DB-A) and LSU(DB-B), then acknowledge with LSA(DB-B) and LSA(DB-A).) Stage 2—Initial Database Synchronization: After two-way communication has been established between two switches using the Hello protocol, the switches begin to synchronize their topology databases. This is accomplished by exchanging LSRs between the switches. During topology database synchronization, each switch sends its entire LSD topology database to its neighbor. Switches synchronize databases by sending LSRs in a Link State Update (LSU) SW_ILS command. An LSU can contain one or more LSRs. An LSU with zero LSRs signals the end of the database transmission. When a switch receives an LSU, it compares each LSR in the LSU with its current topology database. If the new LSR is not present in the switch's LSD, or if the new LSR is newer than the existing LSR, the LSR is added to the database. Each LSR is acknowledged with a Link State Acknowledgement (LSA) SW_ILS command or with a newer instance of the LSR.
13
FSPF Protocol Operations Stage 3—Database Maintenance
After the two databases are in sync, each switch sends LSUs only when the fabric topology changes LSUs are flooded throughout the entire fabric Each switch retransmits the LSU by a mechanism called "Reliable Flooding" (Diagram: Switch A floods LSU(LSR-A) and Switch B floods LSU(LSR-B); each is acknowledged with an LSA.) Stage 3—Database Maintenance: After the initial database synchronization is complete, the topology database must be maintained to ensure that all switches in the fabric contain identical information in their databases. Events that cause an LSR to be transmitted include: An ISL fails (or the switch associated with that ISL fails). A new LSR is transmitted to remove the failed link(s) from the topology database. An ISL reverts to the 'one-way' communication state. A new LSR is transmitted to remove the one-way link from the topology database. A new ISL completes link initialization (stage 1) and initial database synchronization (stage 2). One or more LSRs are transmitted to notify other switches to add the new information to their databases. The process by which LSRs are propagated through the fabric is known as "reliable flooding": when a switch receives a new LSR, it retransmits the LSR on its other links. Once the LSR is acknowledged on a link, the switch stops retransmitting that LSR on that link; it continues to retransmit the LSR on the remaining links until an acknowledgement is received on each of them.
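The per-link acknowledgement tracking behind reliable flooding can be sketched as follows; this is a simplification (no retransmission timers, LSR aging, or sequence numbers), and the fc1/x link names are purely illustrative.

```python
# Simplified sketch of reliable flooding: an updated LSR is queued on every
# other ISL and kept on a link's retransmission list until that link
# acknowledges it with an LSA.

class FloodingState:
    def __init__(self, isl_ids):
        # per-ISL set of LSR keys still awaiting acknowledgement
        self.pending = {isl: set() for isl in isl_ids}

    def flood_lsr(self, lsr_key, received_on=None):
        """Queue an LSR for retransmission on every ISL except the one it came from."""
        for isl, outstanding in self.pending.items():
            if isl != received_on:
                outstanding.add(lsr_key)        # retransmit until acknowledged

    def on_lsa(self, isl, lsr_key):
        """An LSA stops retransmission of that LSR on that link only."""
        self.pending[isl].discard(lsr_key)

state = FloodingState(["fc1/1", "fc1/2", "fc1/3"])
state.flood_lsr(("domain 0x05", 7), received_on="fc1/1")
state.on_lsa("fc1/2", ("domain 0x05", 7))
print(state.pending)   # fc1/3 still owes an acknowledgement
```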
14
FSPF Protocol Operations Stage 4—Path Discovery
As frames arrive at a switch: The frame's destination Domain_ID is compared to the Domain_IDs in the switch's LSD If a least-cost path to that Domain_ID has not yet been computed, it is calculated The switch always forwards frames on the least-cost path Stage 4—Path Discovery: As frames arrive at a switch, the frame's destination Domain_ID is compared to the relevant LSR in the LSD. If a least-cost path to the destination Domain_ID has not yet been computed, the Path Selector computes the cost of each path to the destination Domain_ID and selects the least-cost path. Switches must forward frames on the least-cost path.
15
FSPF Protocol Operations Stage 5—Path Computation
Link Cost = S * (1.0625e12 / R) S represents an administratively defined factor (default value = 1) R is the bit rate of the link Examples: Default link cost for a 1Gb/s link: 1 * (1.0625e12 / 1.0625e9) = 1000 Default link cost for a 2Gb/s link: 1 * (1.0625e12 / 2.125e9) = 500 FSPF considers ISLs only Stage 5—Path Computation: The switch runs the path selection algorithm when it is notified of a physical change to the fabric. It is notified through the process of receiving a new or updated LSU. The link cost for each individual link is calculated from the bit rate of the link and an administratively defined weighting factor. The weighting factor allows an administrator to adjust link cost for particular circumstances. By default, the weighting factor is 1. The cost calculation is normalized to the bit rate of a 1Gb/s FC link and is given by the formula S * (1.0625e12 / R), where S is the administrative weight and R is the bit rate of the link in bits per second. Assuming the default weighting factor of 1, the calculation yields the following costs: Cost of a 1Gb/s link: 1000 Cost of a 2Gb/s link: 500 Cost of a 10Gb/s link: 100 The calculation is performed on a link-by-link basis, so each link in a data path can be advertised with a different cost. These costs are used by the path selection algorithm to determine the most efficient paths. When a path contains multiple links, the costs of the individual links are added to determine the total cost of the path. In the case of two or more paths of equal cost, the choice of path is not specified and is determined by the switch vendor. Note that FSPF considers only the ISLs along the data path—it does not consider the node-to-switch link at either end of the path. FSPF routes frames between domains only.
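The cost formula is easy to verify numerically. The short sketch below assumes the nominal FC line rates (1.0625 Gb/s of signaling per 1Gb/s of throughput) and the default administrative factor of 1.

```python
# Illustrative check of the FSPF link-cost formula: Link Cost = S * (1.0625e12 / R).
# R is the link's bit rate in bits per second; S is the administrative factor.

def fspf_link_cost(bit_rate_bps, admin_factor=1):
    return round(admin_factor * (1.0625e12 / bit_rate_bps))

# Nominal FC line rates assumed here: 1Gb/s -> 1.0625e9, 2Gb/s -> 2.125e9, 4Gb/s -> 4.25e9.
for label, rate in [("1Gb/s", 1.0625e9), ("2Gb/s", 2.125e9), ("4Gb/s", 4.25e9)]:
    print(label, fspf_link_cost(rate))      # -> 1000, 500, 250

# Raising the administrative factor makes a link less attractive to FSPF.
print("2Gb/s link with admin factor 3:", fspf_link_cost(2.125e9, admin_factor=3))   # -> 1500
```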
16
FSPF Protocol Operations Backbone Protocol
Not yet implemented (Diagram: an FSPF backbone connecting four autonomous regions, AR1 through AR4, through border switches.) Facts The FSPF Backbone protocol allows FSPF to be implemented as a hierarchical routing protocol. An FSPF backbone consists of a physically contiguous network topology of one or more border switches, each of which connects an autonomous region to the backbone. An autonomous region is essentially a "SAN island". The FSPF Backbone protocol is used on the backbone, while each autonomous region runs a separate instance of the FSPF protocol, or even another protocol entirely. The FSPF Backbone protocol works in the same way as the FSPF protocol, but all paths terminate at one of the border switches. Cisco MDS 9000 Family switches do not implement the FSPF Backbone protocol. However, Cisco MDS switches can be configured to belong to an autonomous region when attached to another vendor's switch that does implement the FSPF Backbone protocol. The preceding graphic illustrates the architecture of an FSPF Backbone fabric; the autonomous regions are represented by AR1 through AR4.
17
Limitations of FSPF FSPF algorithm does not account for traffic load
All frames in an exchange follow the same path Path changes only in response to changes in the fabric topology Limitations of FSPF Objectives Describe the limitations of FSPF Introduction This section describes the limitations of FSPF. Facts The FSPF protocol supports load sharing, but it does not support load balancing. Load sharing is significantly different from load balancing, and the distinction can have significant effects on fabric design, especially when tuning performance: Load sharing simply means that multiple paths can be used Load balancing means that traffic load is balanced across multiple paths FSPF does not account for actual path utilization. In other words, an unused path with a cost of 1000 will be disregarded in favor of an overutilized path with a cost of 500. All frames in an exchange must follow the same path, and paths are recomputed only when the physical ISL configuration changes. The preceding diagram shows a simple SAN with two data paths: Path A→C has a total cost of 500 Path A→B→C has a total cost of 1000 FSPF will never use path A→B→C, even if path A→C is congested. (Diagram: Hosts 1 and 2 on Switch A, Storage 1 and 2 on Switch C; all three ISLs between Switches A, B, and C are 2Gb/s with cost 500.)
18
Limitations of FSPF (cont.)
The least-cost path is not always the best path: Path A→B→C: cost=1000, bandwidth=2Gb/s Path A→C: cost=1000, bandwidth=1Gb/s Load-sharing occurs but cannot be optimized Facts The FSPF least-cost path algorithm does not necessarily select the best path. For example, in the preceding diagram, links A→B and B→C are 2Gb/s links, with a default cost of 500 per link. Link A→C is a 1Gb/s link with a default cost of 1000. (Note that this diagram differs from the previous diagram only in that link A→C is a 1Gb/s link in this diagram.) There are two paths available from Switch A to Switch C: Path A→B→C has a total cost of 1000 and supports 2Gb/s along the entire path Path A→C also has a total cost of 1000 but supports only 1Gb/s FSPF will weight both paths identically. When a single pair of devices (Host 1 and Storage 1) is attached to the SAN, FSPF might select path A→C even though that path supports only half the bandwidth of path A→B→C. (Path A→B→C does have greater latency than path A→C, but latency is a far less significant performance factor than bandwidth.) When a second pair of devices (Host 2 and Storage 2) is attached to the same switches, the switch will use the second equal-cost data path in an attempt to distribute the load evenly. In other words, Host 1→Storage 1 will be assigned one path, and Host 2→Storage 2 will be assigned the other path. Both data paths will be used. In both situations, the administrator can force path selection by adjusting the administrative weighting factor. (Diagram: Hosts 1 and 2 on Switch A, Storage 1 and 2 on Switch C; links A→B and B→C are 2Gb/s with cost 500 each, link A→C is 1Gb/s with cost 1000.)
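The path-cost arithmetic in this example can be reproduced with a standard shortest-path computation. The sketch below is not the MDS implementation; it simply totals per-link costs with Dijkstra's algorithm over the example topology, where both A→C paths tie at a cost of 1000 and the tie-break is left unspecified.

```python
# Minimal sketch of an FSPF-style path selector: per-link costs are summed and
# the least-cost path wins. Topology mirrors the example above: A-B and B-C are
# 2Gb/s (cost 500), A-C is 1Gb/s (cost 1000), so both A->C paths total 1000.
import heapq

ISL_COSTS = {("A", "B"): 500, ("B", "C"): 500, ("A", "C"): 1000}   # undirected

def neighbors(node):
    for (a, b), cost in ISL_COSTS.items():
        if a == node:
            yield b, cost
        elif b == node:
            yield a, cost

def least_cost_paths(src):
    """Return {switch: (total_cost, path)} for every reachable switch."""
    best = {src: (0, [src])}
    queue = [(0, src, [src])]
    while queue:
        cost, node, path = heapq.heappop(queue)
        if cost > best[node][0]:
            continue
        for nxt, link_cost in neighbors(node):
            new_cost = cost + link_cost
            if nxt not in best or new_cost < best[nxt][0]:
                best[nxt] = (new_cost, path + [nxt])
                heapq.heappush(queue, (new_cost, nxt, path + [nxt]))
    return best

# This sketch keeps the first equal-cost path it finds; FSPF leaves the
# tie-break to the switch vendor.
print(least_cost_paths("A")["C"])   # (1000, ['A', 'C'])
```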
19
Limitations of FSPF (cont.)
FSPF does not support per-packet load balancing Software-based load balancing is available: Implemented in OS driver Supported by array firmware Not fabric-wide Vendor-specific Must verify compatibility with HBA and array Facts Per-packet load balancing is not supported by fabric switches at present because the computation would add considerably to switch latency. However, some storage array vendors and storage management software vendors do offer software-based load balancing capability. Some host bus adapter (HBA) manufacturers are also beginning to offer load balancing that operates at the HBA driver level. The load-balancing protocol must be supported by both the HBA driver and the array controller firmware. Software-based load balancing is performed only with data traffic between servers and storage devices that both have load balancing capability; it is not a fabric-wide capability. In addition, there is no accepted standard for load balancing, so the SAN designer must verify that the selected load-balancing software is compatible with both the HBA and the array controller.
20
The Impact of FSPF on Fabric Design
Lack of fabric-wide per-packet load balancing complicates SAN design Load-sharing is supported, but: Only over equal-cost paths Depends on topology Heavy traffic will congest the least-cost links VSANs can be used to balance loads: Configure trunking over EISLs Can further complicate fabric design The Impact of FSPF on Fabric Design Objectives Describe the impact of FSPF on fabric design Introduction This section describes how the features and limitations of FSPF affect fabric design. Facts FSPF is a relatively simple protocol that meets the basic requirements for correctly routing frames, but FSPF does not offer a lot of flexibility in fabric design: Because FSPF does not provide automatic load balancing, designers must take anticipated traffic into account in configuring the links between hosts and storage arrays. This complicates the SAN design process. High traffic loads will tend to result in the least-cost links becoming congested. Available higher-cost links will not be used, even though they are underutilized. To some extent, Virtual SANs (VSANs) can be used to manually balance traffic loads by controlling which VSANs are permitted over which Extended ISLs (EISLs). This is a perfectly valid use of VSANs, but it can lead to complex fabric designs that are difficult to implement and manage. FSPF does support load-sharing, but only over equal-cost paths. This means that a need for load-sharing will partly determine the choice of topology.
21
The Impact of FSPF on Fabric Design (cont.)
(Diagram: a full mesh fabric; Host 1 and Host 2 attach to one switch, Storage 1 and Storage 2 attach to another.) The preceding diagram shows a full mesh configuration with two hosts on one switch and two storage devices on another switch. FSPF will always route all traffic through the shortest data path (assuming that all links are the same speed). This means that traffic between Host 1 and Storage 1 and between Host 2 and Storage 2 will flow through a single link, even though alternate paths are available. The full mesh design shown in this example does not take advantage of FSPF load-sharing.
22
The Impact of FSPF on Fabric Design (cont.)
(Diagram: a redundant core-edge fabric; Host 1 and Host 2 attach to one edge switch, Storage 1 and Storage 2 attach to another edge switch, with two equal-cost paths between them through the core.) The preceding diagram shows a redundant core-edge configuration with two hosts on one switch and two storage devices on another switch. Because this design provides two equal-cost paths between every pair of edge switches, it can take advantage of FSPF load-sharing: When Host 1 initiates communication with Storage 1, the fabric will assign one of the two lowest-cost data paths. (The choice between two equal-cost data paths can depend on any number of factors and is not specified by the FSPF protocol.) When Host 2 initiates communication with Storage 2, the fabric will assign the second lowest-cost data path. If a third host and storage device were added to the same switches, the switch would assign them either of the two data paths.
23
What is a Port-Channel? A port-channel is a logical interface containing multiple physical interfaces (Diagram: Switch 1 and Switch 2 connected by Port-channels A, B, and C, each aggregating several physical interfaces.) What is a Port-Channel? Objective Define a port-channel Introduction This section describes the Cisco port channeling feature. Definition A port-channel is a logical interface that contains multiple physical interfaces. Port channeling allows users to aggregate multiple ISLs and provides load-balancing between the links in the port-channel. Port channeling is a feature of Cisco MDS 9000 Family switches. Example The preceding graphic shows an example of port channeling. Each circle represents a physical interface on the switch. Multiple physical interfaces are aggregated into three port-channels. Facts Port channeling is not the same as trunking: Port channeling enables several links to be combined into one aggregated link Trunking enables an ISL to carry (trunk) multiple VSANs
24
What is a Port-Channel? Aggregates bandwidth of multiple ISLs
Load-balancing across multiple links Increased availability of ISLs Failover and restart of a link in a port-channel do not affect traffic flow Scalability of higher level protocols In-order frame delivery Facts A port-channel provides the following benefits: Aggregates the bandwidth of multiple physical links into a single logical point-to-point connection. The speed of a given port-channel is the sum of all the speeds of all the ports included in the port-channel. Load balances across multiple ISLs and maintains optimum bandwidth utilization. Load balancing is based on the source ID (S_ID), destination ID (D_ID), and exchange ID (OX_ID). Provides high availability for ISLs. If one link fails, traffic previously carried on this link is switched to the remaining links in the port-channel. When the link is restored, the link is reintegrated into the port-channel. Traffic flow is not affected when a link in a port-channel fails over or is restarted. If a link goes down in a port-channel, higher-level protocols are not aware of the failure. This includes both FC upper-layer protocols (such as SCSI) and fabric protocols (such as FSPF). For example, FSPF routing tables are not affected by the failure of a link in a port-channel. This makes protocols such as FSPF more scalable in a large network. Ensures in-order frame delivery when the network is in a stable state.
25
Configuring Port-Channels
General rules for port-channels: Point-to-point connection Supports E_Ports and TE_Ports Can span multiple port modules Up to 16 ports per port-channel Up to 128 port-channels per switch Configuring Port-Channels Objective Describe how port-channels are configured Introduction This section lists the basic requirements for configuring port-channels on Cisco MDS 9000 Family switches. Facts A port-channel is a point-to-point connection between two switches. Port-channels support: ISLs (E_Ports) and EISLs (TE_Ports) Spanning multiple port modules for added high availability Up to 16 physical links per port-channel Up to 128 port-channels per switch
26
Configuring Port-Channels (cont.)
The following parameters must be consistent across all ports in a port-channel: Port speed Port mode (E_Port/AUTO) Port VSAN ID Trunk mode (enabled/disabled) Trunk-allowed VSAN list You can “force” parameters to be set correctly The following parameters must be consistent across all ports in a port-channel: The speed of the port (1Gb/s or 2Gb/s) The port mode (TE_Port, E_Port, or AUTO) The trunk mode (enabled/disabled) The VSAN membership of the port The trunk-allowed VSAN list When you create a port-channel, there is an option to “force” the switch to set all of these values so that they are consistent across all ports in the port-channel.
27
Configuring Port-Channels (cont.)
Port channeling supports two mechanisms for load balancing across links: Flow-based (S_ID - D_ID) Exchange-based (S_ID - D_ID - OX_ID) Facts Port channeling supports two mechanisms for load balancing across links in a port-channel: Flow-based (S_ID-D_ID)—All exchanges between a given source port and destination port are routed across the same links within the port channel. Whichever link is selected for the first exchange between a given S_ID and D_ID is used for all subsequent exchanges between that S_ID and D_ID. Exchange-based (S_ID-D_ID-OX_ID)— All frames in an exchange are routed across the same link. However, subsequent exchanges can use a different link. This provides more granular load-balancing while ensuring in-order delivery for each exchange. This is the default mode.
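As a conceptual sketch of the two policies (not the MDS hardware hashing function), the code below maps a frame to a member link by hashing S_ID and D_ID, optionally mixing in OX_ID for exchange-based balancing.

```python
# Conceptual sketch of port-channel member selection. Flow-based hashing uses
# only S_ID and D_ID, so every exchange between the same pair of ports lands on
# the same link. Exchange-based hashing also mixes in OX_ID, so different
# exchanges can use different links while frames within one exchange still hash
# identically, preserving per-exchange in-order delivery.
from zlib import crc32

def select_link(num_links, s_id, d_id, ox_id=None):
    """Return the index of the port-channel member link for this frame."""
    key = s_id.to_bytes(3, "big") + d_id.to_bytes(3, "big")   # 24-bit FC IDs
    if ox_id is not None:                                     # exchange-based mode
        key += ox_id.to_bytes(2, "big")                       # 16-bit exchange ID
    return crc32(key) % num_links

# Flow-based: every exchange between these two ports uses the same link.
print(select_link(4, 0x010203, 0x1A0500))
# Exchange-based: different OX_IDs may land on different links.
print([select_link(4, 0x010203, 0x1A0500, ox) for ox in (0x0001, 0x0002, 0x0003)])
```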
28
Configuring Port-Channels (cont.)
In-order delivery can be jeopardized when: A route change occurs during an active sequence A link change occurs in a port-channel The in-order-guarantee command: Drops frames that cannot be delivered in order within the latency drop period (2 seconds by default) Slows traffic at the frame source to reduce the number of dropped frames Disabled by default because it can result in lost frames and degraded performance Facts In-order delivery of data can be jeopardized in the following situations: When a route change occurs during an active sequence, a new path might be selected for that sequence. The new path may be faster or less congested than the old path, causing frames sent on the new path to arrive before frames sent on the old path. When a link change occurs in a port-channel, the frames for the same sequence can switch from one path to another faster or slower path. Cisco MDS switches attempt to guarantee in-order delivery if the in-order-guarantee command is used. When the in-order-guarantee command is used, the switch drops frames that cannot be delivered in order within the latency drop period (2 seconds by default). The number of dropped frames is reduced by slowing down the traffic at the frame source. In-order-guarantee is disabled by default because it can result in lost frames and degraded performance.
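The latency-drop behavior can be pictured with the small sketch below; it is only a conceptual illustration, with the 2-second default taken as an assumed constant.

```python
# Conceptual sketch of the in-order-guarantee drop decision: a frame that has
# been held longer than the latency drop period (2 seconds by default) is
# discarded rather than delivered out of order.
import time

LATENCY_DROP_PERIOD = 2.0   # seconds (default)

def should_drop(frame_enqueue_time, now=None):
    now = time.monotonic() if now is None else now
    return now - frame_enqueue_time > LATENCY_DROP_PERIOD
```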
29
Quality of Service Quality of service (QoS):
Prioritizes control traffic (e.g. Class F frames) over data traffic Applies to both internally and externally generated control traffic (including other vendors' switches) Quality of Service Objective Explain how Cisco MDS Family switches support QoS Introduction This section explains how Cisco MDS Family switches support QoS. Facts QoS functionality prioritizes control traffic over data traffic. Cisco MDS 9000 Family switches support QoS for internally and externally generated control traffic. A high priority status assignment provides absolute priority over all other traffic. High priority status is assigned in the following cases: Internally generated time-critical control traffic (mostly Class F frames). Externally generated time-critical control traffic entering a switch in the Cisco MDS 9000 family from another vendor's switch. High priority frames originating from other vendors' switches are marked as high priority as they enter a switch in the Cisco MDS 9000 family. Within switches in the Cisco MDS 9000 family, control traffic is directed to the supervisor module and is treated as high priority. The QoS feature is enabled for control traffic by default.
30
Lesson Review What feature does FSPF provide?
What statement is true about FSPF on a Brocade switch? What statement is true about FSPF on a Cisco MDS switch? How long can FSPF take to recognize that an ISL has failed? Practice 1. What feature does FSPF provide? a. Aggregation of multiple ISLs b. In-order delivery of frames c. Load-balancing across multiple paths d. Prioritization of control traffic 2. What statement is true about FSPF on a Brocade switch? a. All exchanges between two ports always take the same path. b. All sequences in an exchange always take the same path. c. Each sequence in an exchange can take a different path. d. Each frame in a sequence can take a different path. 3. What statement is true about FSPF on a Cisco MDS switch? 4. How long will FSPF take to recognize that an ISL has failed? a. 8 seconds b. 20 seconds c. 80 seconds d. 100 seconds
31
Lesson Review (cont.) Which path will FSPF choose from Host 1 to Storage 1? (Diagram: Host 1 on Switch A, Storage 1 on Switch C, Storage 2 on Switch B; link A→C is 1Gb/s, links A→B and B→C are 2Gb/s.) 5. The preceding diagram shows part of a SAN. Which path will FSPF choose from Host 1 to Storage 1? a. A→C b. A→B→C c. The answer cannot be determined from the information provided
32
Lesson Review (cont.) Assuming all links are 2Gb/s, what weight must you assign to what path to force Host 1 and Storage 1 to communicate using path A→B→C? (Diagram: Host 1 on Switch A, Storage 1 on Switch C; Switch A connects to Switch C both directly and through Switch B.) 6. In the preceding diagram, assuming all links are 2Gb/s links, what weight would you have to assign to what path to force Host 1 and Storage 1 to communicate using path A→B→C? a. Assign a cost of 1 to path A→C b. Assign a cost of 2 to path A→C c. Assign a cost of 3 to path A→C d. Assign a cost of 1 to paths A→B and B→C e. Assign a cost of 2 to paths A→B and B→C f. Assign a cost of 3 to path A→B or path B→C
33
Lesson Review (cont.) Which fabric designs can take advantage of FSPF load sharing? What is the maximum link speed possible in a port-channel between two MDS 9216 switches? What is required of the ports in a port-channel? 7. Which fabric designs can take advantage of FSPF load sharing? a. Core-edge b. Full mesh c. Partial mesh d. Redundant fabric 8. What is the maximum link speed possible in a port-channel between two MDS 9216 switches? a. 800MB/s b. 1600MB/s c. 3200MB/s d. 6400MB/s 9. What is required of the ports in a port-channel? a. All ports must be configured as TE_Ports b. All ports must be configured with the same trunk-allowed VSAN list c. All ports must be located on the same switch module d. All ports must be members of a single VSAN
34
Summary Potential routing issues: Looping Out-of-order delivery
FSPF allows switches to determine the most efficient paths through the fabric: All frames in an Exchange follow the same path FSPF consists of the Path Selector, Link State Database, and Router functions FSPF calculates path cost based on the speed of each link in the path Summary: Fibre Channel Routing In this lesson, you learned to discuss the issues concerning routing data through a SAN, describe the purpose and features of the FSPF protocol, compare FSPF to other common routing protocols, and discuss some of the proprietary routing features of the Cisco MDS 9000 switches.
35
Summary (cont.) FSPF can use multiple data paths (load-sharing) but does not load-balance FSPF’s limitations help determine the fabric design Port-channels are aggregated ISLs Port channeling provides increased performance (load-balancing), availability, and scalability All ports in a port-channel must be configured identically Cisco MDS 9000 switches support QoS for prioritizing control traffic