
1 SAN Components Review: Introduction
One of the underlying benefits of Storage Area Networks (SANs) is that they decouple storage from servers. This decoupling enables IT organizations to choose from a wider variety of storage products that address specific information storage needs, and to easily scale that storage to meet the needs of the applications. This lesson reviews the basic hardware components that comprise a SAN—storage devices, network devices, and host bus adapters (HBAs).

Importance
Understanding the key characteristics and differentiating features of SAN hardware components is important because it allows you to make informed decisions about the products that you recommend to your customers. Understanding how these components work together to present a logical view of storage resources is a fundamental skill that you will need when you are working in storage networking environments.

2 Lesson Objective

Performance Objective
Upon completion of this lesson, you will be able to identify the components of a Fibre Channel (FC) SAN, and explain the purpose and performance characteristics of each component.

Enabling Objectives
- Identify the key characteristics of Just a Bunch of Disks (JBOD) devices
- Identify the key characteristics of Redundant Array of Inexpensive Disks (RAID) devices
- Identify the components of RAID devices
- Describe how storage devices are addressed in JBOD and RAID arrays
- Describe the role of FC-SCSI bridges in a Fibre Channel SAN
- Describe the features of FC hubs
- Describe the features of FC fabric switches
- Describe the features of FC director switches
- Identify technologies and devices that are used to connect FC SANs to other types of networks and I/O buses
- Identify typical and differentiating features of FC Host Bus Adapters
- Describe why HBAs are a critical component of the SAN
- Describe options for configuring redundant HBA ports
- Describe the functional differences between different types of HBA drivers

3 Outline
- JBOD Features
- RAID Features
- RAID Components
- Addressing Disk Storage
- FC-SCSI Bridges
- Fibre Channel Hubs
- Fibre Channel Fabric Switches
- Director-Class Switches
- Gateways, Bridges, and Routers
- What is an HBA?
- HBA Features
- HBA Drivers

Prerequisites
All lessons in Curriculum Unit 1.

4 JBOD Features (Just a Bunch of Disks)

Objective
Identify the key characteristics of JBOD devices

Introduction
This section identifies the key characteristics of JBOD devices.

Facts
In a JBOD storage device, each disk appears as an individual device on the SAN. The disks are attached to a common I/O bus, but they are not linked or grouped together in any way. JBODs are the least expensive type of disk storage, because they provide minimal functionality:
- JBODs typically do not contain “intelligence,” although some JBODs do provide basic management functions.
- Each disk in a JBOD must be individually managed.
- If RAID functionality is desired when using JBODs, software-based RAID can be used. However, software RAID can cause I/O performance degradation and high CPU utilization on the host, and is not suitable for high-performance environments.
- JBODs do not support other high-availability options, such as hot-spare disks, that many RAID arrays provide.
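To make the host-side cost of software RAID concrete, here is a minimal, purely illustrative Python sketch (not any vendor's implementation) of host-based mirroring across two JBOD disks: every logical write becomes two physical writes, and failure handling falls entirely on the host. The disk objects are hypothetical file-like block devices.

```python
# Illustrative sketch of host-based (software) RAID 1 mirroring over a JBOD.
# "disk_a" and "disk_b" are hypothetical file-like block devices; real software
# RAID (e.g., an OS volume manager) works at the block layer, not in Python.

class SoftwareMirror:
    def __init__(self, disk_a, disk_b, block_size=512):
        self.disks = [disk_a, disk_b]
        self.block_size = block_size

    def write_block(self, block_number, data):
        # The host CPU performs every extra I/O: each logical write becomes
        # two physical writes, one per JBOD disk.
        offset = block_number * self.block_size
        for disk in self.disks:
            disk.seek(offset)
            disk.write(data)

    def read_block(self, block_number):
        # Reads can be satisfied from either copy; fall back if one disk fails.
        offset = block_number * self.block_size
        for disk in self.disks:
            try:
                disk.seek(offset)
                return disk.read(self.block_size)
            except OSError:
                continue  # try the surviving mirror member
        raise IOError("both mirror members failed")
```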

5 JBOD Features (cont.)

[Figure: Inside a JBOD array: two FC-AL backplanes, a primary loop (Loop 1) and a backup loop (Loop 2)]

Facts
The preceding graphic shows that each physical disk in a JBOD appears on the SAN as a separate device. The preceding diagrams illustrate two common JBOD configurations that both use Fibre Channel Arbitrated Loop (FC-AL) backplanes:
- The example on the left illustrates how, in many JBODs, only one backplane is active, and the second backplane is used only if the primary loop fails.
- The example on the right illustrates how some JBODs allow both backplanes to be used at the same time to increase data throughput. For example, half of the disks can be configured to use one backplane and the other half can be configured to use the other backplane.
FC JBODs use FC disks attached to either an FC-AL or a switched fabric backplane. FC JBODs can also use SCSI disks—the external interface is FC, but the internal disk-to-backplane interface is SCSI.

6 RAID Features (Redundant Array of Inexpensive Disks)

Objective
Identify the key characteristics of RAID devices

Introduction
This section identifies the key characteristics of RAID devices.

Facts
In a RAID array, disks are grouped together to appear as one or more “virtual” storage devices, often referred to as logical disks or logical volumes. RAID storage is the most common type of storage found on a SAN. The preceding graphic illustrates some common RAID devices.

7 RAID Features (cont.)

The Benefits of RAID
- Increased data availability:
  - Redundancy in the data
  - Loss of a single disk does not result in loss of data
- Potential performance gain:
  - Data is striped across multiple disks
  - Can also result in performance decrease
- Scalability and manageability:
  - Logical volumes

Facts
RAID provides important benefits that are considered essential for enterprise storage environments:
- Higher availability—RAID configurations increase data availability by storing the data in a redundant fashion. In most RAID configurations, the loss of a single disk will not result in any loss of data or service interruption. RAID arrays typically include multiple ports, redundant controllers, redundant power supplies, and other redundant components to increase fault tolerance.
- Improved performance—in some RAID configurations, two identical copies of the data are stored on two disks. In other configurations, the bytes that comprise a single file or record are actually spread out, or striped, across all of the disks in the RAID set. This can improve performance in some environments—but it can also reduce performance in other environments.
- Better scalability—RAID improves scalability because it allows administrators to create storage volumes that are larger than the largest individual disk.
- Ease of management—RAID allows storage to scale without substantially increasing the work required to manage more storage. An array with 100 ordinary disks would be difficult to manage because each disk would have to be managed individually. Grouping those 100 disks into 10 logical volumes makes that capacity much easier to manage.
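The striping behaviour described above can be reduced to a simple mapping. The following Python fragment is an illustrative sketch, not any controller's actual algorithm; the disk count and stripe unit size are arbitrary example values. It shows how a logical block address on a striped volume maps to a member disk and a block on that disk.

```python
# Illustrative RAID 0 style striping: a logical block address (LBA) on the
# virtual volume is mapped to (member disk, block on that disk).
# num_disks and stripe_unit_blocks are arbitrary example values.

def stripe_map(logical_block, num_disks=4, stripe_unit_blocks=16):
    stripe_number, block_in_unit = divmod(logical_block, stripe_unit_blocks)
    disk_index = stripe_number % num_disks        # round-robin across disks
    unit_on_disk = stripe_number // num_disks     # which stripe unit on that disk
    physical_block = unit_on_disk * stripe_unit_blocks + block_in_unit
    return disk_index, physical_block

# Sequential I/O spreads across all four disks, which is where the potential
# performance gain (and, for small random I/O, the potential loss) comes from.
for lba in range(0, 128, 16):
    print(lba, stripe_map(lba))
```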

8 RAID Components

[Figure: A RAID controller groups an array of physical disks into a logical volume]

Objective
Identify the components of RAID devices

Introduction
This section identifies the components of RAID devices.

Facts
RAID arrays contain a component called a RAID controller or array controller. This component, which is usually a removable card that can be swapped out and upgraded, is responsible for implementing the RAID functions:
- The RAID controller “hides” the RAID implementation from the other devices on the SAN. The operating system and its applications do not need to understand the mechanics of the RAID configuration because the RAID controller keeps track of where each byte is stored. To the operating system and its applications, each RAID set appears to be a single disk.
- The RAID controller is typically built into the array enclosure. However, some entry-level SCSI RAID arrays are designed to attach to a controller that resides in the host.
- RAID controllers provide built-in disk and volume management services to allow administrators to configure the RAID sets.
- RAID controllers use cache memory to increase performance. Different RAID controllers have different performance specifications.
- The disks are connected to the RAID controller through an array backplane—a high-bandwidth internal device bus.
The preceding graphic illustrates how a RAID controller creates a logical volume from an array of physical disks.

9 RAID Components (cont.)
[Figure: Two RAID arrays, each presenting a logical volume through its RAID controller: one built on SCSI disks, one on FC disks. The internal Fibre Channel loop is an FC connection but is not part of the SAN fabric.]

Facts
The term “Fibre Channel RAID array” refers to the fact that the RAID controller has an FC interface. However, the backplane, which attaches the disks in the array to the controller, can use either SCSI or FC technology:
- Some storage arrays use a SCSI bus internally. This reduces the overall cost of the solution, because SCSI disks and the array’s internal SCSI components are relatively inexpensive. However, the SCSI bus can limit the performance of the array.
- Most enterprise-class storage arrays use disks with native FC interfaces. Inside the array, the disks are connected by an internal FC-AL or a switched fabric. FC disks are more expensive, but the use of an FC backplane can allow more scalable and reliable storage arrays, and can actually simplify product design of large enterprise-class devices.
- Storage arrays can also use other types of disks, such as IDE disks (the standard type of disk found in PC workstations), ATA disks, ESCON disks, FICON disks, or iSCSI disks.
The preceding graphic illustrates how disks can be attached to a RAID controller using either a SCSI bus (left) or an internal FC loop (right). Some high-end arrays also implement an FC switched fabric as the array backbone. In either case, the RAID controller “hides” the internal implementation from the SAN.

10 Addressing Disk Storage
[Figure: Addressing in a JBOD array versus a RAID array. In the JBOD, each of the five disks (SCSI IDs 0 through 4) receives its own FC address (0x000001 through 0x000005), for 5 LUNs. In the RAID array, the same five disks are presented behind three FC addresses (0x000001 through 0x000003), for 3 LUNs.]

Objective
Describe how storage devices are addressed in JBOD and RAID arrays

Introduction
This section describes the difference in addressing schemes between JBOD and RAID devices.

Facts
The preceding graphic illustrates the difference in addressing schemes between JBOD and RAID storage arrays in an FC SAN:
- The illustration on the left demonstrates that each JBOD disk must be addressed individually. Each device is both a physical unit and a separate logical unit (LUN).
- The illustration on the right demonstrates that in a RAID array, devices can either be addressed individually, or multiple RAID devices can be addressed as a single LUN.
Note that the FC addresses illustrated here are simplified in order to clarify how addresses are assigned in JBOD and RAID arrays. The addresses illustrated here are not representative of actual FC addresses.
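As a rough model of this addressing difference (using made-up placeholder addresses like the simplified ones in the graphic), the sketch below counts the LUNs a host would discover from a five-disk JBOD versus the same five disks behind a RAID controller exposing three logical volumes. The disk names and volume groupings are invented for illustration.

```python
# Simplified model: a JBOD exposes one LUN per physical disk, while a RAID
# controller exposes one LUN per configured logical volume. Addresses are
# fabricated placeholders, not real FC addresses.

physical_disks = ["disk0", "disk1", "disk2", "disk3", "disk4"]

# JBOD: every disk is individually addressable.
jbod_luns = {f"0x{i + 1:06x}": disk for i, disk in enumerate(physical_disks)}

# RAID: the controller groups the same disks into logical volumes and presents
# only those volumes to the SAN.
raid_volumes = {
    "0x000001": physical_disks[0:2],   # e.g., a mirrored pair
    "0x000002": physical_disks[2:4],   # another mirrored pair
    "0x000003": physical_disks[4:5],   # a single-disk volume
}

print(len(jbod_luns), "LUNs from the JBOD")           # 5 LUNs
print(len(raid_volumes), "LUNs from the RAID array")  # 3 LUNs
```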

11 FC-SCSI Bridges

- Connect SCSI devices to a Fibre Channel SAN
- Typically used with SCSI tape devices
- Sometimes called “gateways” or “routers”

[Figure: Crossroads 10000 FC-SCSI bridge, with a SCSI cable on one side and a Fibre Channel link on the other]

Objective
Describe the role of FC-SCSI bridges in an FC SAN

Introduction
This section describes the role of FC-SCSI bridges in an FC SAN.

Facts
FC-SCSI bridges allow SCSI devices—typically SCSI tape storage devices—to connect directly to an FC SAN:
- Fibre Channel-SCSI bridges are typically used to attach legacy SCSI tape devices to an FC SAN. Although native FC devices are available today, many IT organizations have invested a lot of money in high-performance SCSI-attached tape equipment, and they may want to retain that investment.
- The image shows a Crossroads FC-SCSI bridge. This device uses a modular architecture: the far-left bay contains an FC blade with two 2Gb/s ports; the far-right bay contains a SCSI blade with four SCSI ports; the two center bays are available for additional FC or SCSI ports. Port modules are available to allow routing of other protocols, such as InfiniBand, to Fibre Channel.
- Note that unlike data networking, where the terms “bridge,” “gateway,” and “router” have specific meanings, these terms are often used interchangeably in the SAN market.

12 FC-SCSI Bridges (cont.)
[Figure: An FC-SCSI bridge in front of a SCSI tape library, mapping FC addresses 0x000001 through 0x000003 to SCSI IDs 0 through 2 (3 LUNs)]

Facts
FC-SCSI bridges assign an FC address to each SCSI device. The preceding graphic shows an FC-SCSI bridge assigning an FC address to each SCSI tape device in a SCSI tape library. Address mapping must be manually configured on the bridge.
Many FC-SCSI bridges, such as the Crossroads 10000, provide cache memory to buffer the data stream. Cache is particularly useful for buffering an incoming Fibre Channel data stream because the SCSI bus can only support one communication session at a time. Another feature that the Crossroads supports is the SCSI Extended Copy command, which allows data to be transferred directly from an FC array to a SCSI tape library (serverless backup).
Note that the FC addresses illustrated here are simplified in order to clarify how addresses are assigned by an FC-SCSI bridge. The addresses illustrated here are not representative of actual Fibre Channel addresses.
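Conceptually, the bridge maintains a manually configured table from FC addresses to SCSI targets and forwards each command accordingly. The following is a minimal sketch of that idea only, with invented addresses and no real bridge API.

```python
# Conceptual FC-to-SCSI address mapping table inside a bridge.
# The addresses and device roles are placeholders for illustration only.

address_map = {
    "0x000001": {"scsi_bus": 0, "scsi_id": 0},  # tape drive 0
    "0x000002": {"scsi_bus": 0, "scsi_id": 1},  # tape drive 1
    "0x000003": {"scsi_bus": 0, "scsi_id": 2},  # library robot
}

def route_command(fc_destination, scsi_command):
    """Forward an incoming FC-addressed command to the mapped SCSI target."""
    target = address_map.get(fc_destination)
    if target is None:
        raise LookupError(f"no SCSI device mapped to {fc_destination}")
    # A real bridge would also buffer (cache) the data stream here, because
    # the SCSI bus can service only one communication session at a time.
    return target["scsi_bus"], target["scsi_id"], scsi_command
```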

13 Lesson Review

Practice
1. Mark each statement to show whether it is true for JBOD devices (J), RAID arrays (R), or both (B):
___ Provides data redundancy
___ Each disk must be managed individually
___ Scales to meet changing storage needs
___ Disks are attached to a common I/O bus
___ Provides storage management functionality
___ Multiple disks can be addressed as a single LUN
2. Fibre Channel RAID arrays can use what type of disks?
a. FC
b. SCSI
c. Both
3. Fibre Channel JBOD arrays can use what type of disks?
Answers (question 1): R, J, R, B, R, R

14 Fibre Channel Hubs

- Hubs are “passive” wiring concentrators that can only support a few devices (depends on applications)
- Only one pair of devices can communicate at a time—bandwidth is shared
- Hubs create an arbitrated loop topology:
  - Relatively low-cost infrastructure
  - Difficult to scale to enterprise levels
  - Provide only basic management functions

Objective
Describe the features of FC hubs

Introduction
This section describes the features of FC hubs.

Facts
FC hubs are essentially passive wiring concentrators that provide a central point of connection for multiple devices. Hubs provide shared bandwidth—only one pair of devices at a time can communicate with each other. FC hubs are roughly analogous to Ethernet hubs, in that all attached devices must share the total link bandwidth. However, they are even more similar to Token Ring hubs, because the ports on the hub are connected in a physical loop. In FC, this topology is called an arbitrated loop.
Hubs are the least expensive FC connectivity device, but they provide limited management functionality and can support relatively few devices. The number of devices that can be supported depends on the bandwidth and latency that the application requires; most vendors recommend that customers attach no more than 10 to 20 devices to a hub. Using only hubs, it is difficult to scale a SAN to enterprise levels.

15 Fibre Channel Fabric Switches
- Thousands of devices today; about 16 million addresses
- Multiple devices can communicate simultaneously—bandwidth is scalable
- Switches create a switched fabric topology:
  - More expensive than hubs
  - Enable highly scalable infrastructures
  - Provide enhanced management functions

Objective
Describe the features of FC fabric switches

Introduction
This section describes the features of FC fabric switches.

Facts
FC fabric switches are designed to enable scalable, high-performance enterprise SANs:
- Today’s switches can be used to create SANs that consist of thousands of devices. The FC specification provides almost 16 million addresses.
- They contain a switching matrix that can establish multiple concurrent routing paths to allow simultaneous communication between multiple sets of devices. Adding devices to a switch adds effective bandwidth to the SAN.
- In addition to providing scalable bandwidth, the switched fabric topology specification provides for enhanced management functionality.
Switches are more expensive than hubs, but they provide significantly improved performance capability, and they enable scalable and manageable SAN infrastructures.
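The scaling difference between a hub and a switch reduces to simple arithmetic. The sketch below uses nominal figures only (the common FC approximation of roughly 100 MB/s of payload per 1 Gb/s link, counting each link once): a hub shares one loop's worth of bandwidth among all ports, while a switch's aggregate bandwidth grows with the port count.

```python
# Back-of-the-envelope bandwidth comparison (nominal numbers): a hub shares a
# single loop's bandwidth among all ports, while a switch gives each port its
# own concurrent path through the switching matrix.

MB_PER_GBPS = 100  # conventional FC approximation: 1 Gb/s link ~ 100 MB/s

def hub_total_bandwidth(link_gbps):
    # All devices on the arbitrated loop share one link's worth of bandwidth,
    # regardless of how many ports the hub has.
    return link_gbps * MB_PER_GBPS

def switch_total_bandwidth(ports, link_gbps):
    # Each port can carry its own traffic concurrently through the fabric.
    return ports * link_gbps * MB_PER_GBPS

print(hub_total_bandwidth(1))          # 100 MB/s for a 16-port 1 Gb/s hub
print(switch_total_bandwidth(16, 2))   # 3200 MB/s for a 16-port 2 Gb/s switch
```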

16 Director-Class Switches
- Support >64 ports on hot-swappable blades
- Low fixed latency between ports
- Redundant internal switching paths and port buffers
- Fault detection and isolation with automatic failover

[Figure: Director-class switch examples: Brocade Silkworm 12000, Cisco MDS 9509, McData ED6064, InRange 9000]

Objective
Describe the features of FC director switches

Introduction
This section describes the features of FC director-class switches.

Facts
Director-class switches are essentially very large, scalable FC switches:
- Director-class switches consist of a chassis with a very fast switched backplane that provides very low latency between ports, improving the performance of the switch.
- Ports are added by inserting cards, commonly known as modules or blades. Each module typically contains between four and 32 ports. Most director-class switches support between 64 and 128 ports per chassis; the Cisco MDS 9513 supports 256 ports per chassis, with 16- and 32-port modules.
- Because the ports in a director-class switch are connected by the backplane, ports are not consumed by inter-switch links (ISLs). In a large, highly available SAN that is built using 16-port switches, ISLs can consume a large fraction of the available ports.
- Because they are typically used as the backbones of large enterprise SANs, director-class switches are designed for fault-tolerance, including redundant switching paths, redundant port buffers, multiple power supplies, and fault-detection and -isolation capabilities.
Note that not all “director-class” switches are the same. Some switches, such as the Brocade Silkworm 12000, actually use ISLs to connect the blades inside of the chassis. In addition, the Silkworm does not support online OS and firmware upgrades. The Cisco MDS 9500 series switches are true director-class switches because the port modules are connected by a fixed backplane, and because they support online upgrades.

17 Gateways, Bridges, and Routers
IP Connectivity

[Figure: Devices designed to connect IP and FC networks: SANValley SL10000 IP-SAN Gateway (FCIP), Nishan IPS4000 Multiprotocol IP Storage Switch (FC, iFCP, iSCSI), Cisco SN 5428 Storage Router (FC, iSCSI), Cisco MDS 9000 IP Storage Services Module (FCIP, iSCSI)]

Objective
Identify technologies and devices that are used to connect FC SANs to other types of networks and I/O buses

Introduction
This section identifies technologies and devices that are used to connect FC SANs to other types of networks and I/O buses.

Facts
Although IP storage is often positioned as an alternative to FC, some IP storage technologies are actually intended to integrate with FC. This section introduces components that allow connections between IP and FC devices and networks.
- Fibre Channel over Internet Protocol (FCIP) transparently bridges FC fabric switches over IP networks.
- Internet Fibre Channel Protocol (iFCP) connects both FC and IP nodes over an IP fabric.
- Internet SCSI (iSCSI) is another IP storage solution, which encapsulates SCSI data directly in IP packets. iSCSI is considered competitive to FC, but iSCSI SANs can be connected to FC SANs through routers and multiprotocol SAN switches.
Because IP storage technologies can be used to connect FC devices, SANs can actually contain both FC and Gigabit Ethernet links. As IP storage technologies mature and are more widely deployed in conjunction with FC SANs, a strong knowledge of Gigabit Ethernet networks, the IP protocol, and IP storage products is an asset for technical professionals who work with FC SANs.

18 Gateways, Bridges, and Routers (cont.)
Other WAN Connectivity Options

[Figure: Devices that provide WAN connectivity to FC SANs: Akara Optical Utility Services Platform (SONET/SDH, DWDM), Cisco Metro DWDM (DWDM), CNT UltraNet Edge Storage Router (ATM, IP), INRANGE 9801 Channel Extension (ATM, SONET/SDH, IP, T1, T3, OC-3)]

Facts
A handful of vendors offer products that directly extend FC over Asynchronous Transfer Mode (ATM), T1, T3, OC-3, Synchronous Optical Network (SONET), Dense Wavelength Division Multiplexing (DWDM), and other networks. These solutions, as well as FCIP, are built on the Fibre Channel Backbone (FC-BB) standard:
- FC-BB is a recent addition to the FC body of standards.
- FC-BB is used to transparently connect two or more FC SANs over non-FC links.
- Two FC SANs that are connected by a WAN gateway effectively function as one SAN fabric.

19 Lesson Review

Practice
1. What is the total available bandwidth on a 16-port 1Gb/s FC hub?
a. 50MB/s
b. 100MB/s
c. 800MB/s
d. 1600MB/s
2. What is the total available bandwidth on a 16-port 2Gb/s FC switch?
a. 200MB/s
b. 400MB/s
c. 1600MB/s
d. 3200MB/s
3. Mark the features below to show whether they are characteristic of FC switches (S), director-class switches (D), or both (B):
___ Redundant switching paths
___ SAN bandwidth scales as the SAN grows
___ Reduces management costs for large SANs
___ Modular architecture
___ Maximizes the number of ports usable for nodes
___ Reduces the complexity of high port count SANs
Answers (question 3): D, B, D, B, D, D

20 Lesson Review (cont.)

Practice
4. Which technologies can be used to transparently bridge two FC SANs?
a. DWDM
b. T1
c. FCIP
d. iFCP
e. iSCSI
f. SONET/SDH
5. Which technologies from the same list can be used to connect a host with a Gigabit Ethernet network interface card (NIC)—and no FC HBA—to an FC storage device?
Answers: question 4: a, b, c, f; question 5: d, e

21 What is an HBA?

[Figure: Comparison of an Ethernet NIC and a Fibre Channel HBA. On the NIC side, the OS I/O subsystem and TCP driver perform flow control, sequencing, segmentation, and error correction in software; on the HBA side, the FC driver hands those same functions off to the HBA hardware.]

Objective
Define a Host Bus Adapter

Introduction
This section defines an HBA and differentiates HBAs from network interface cards.

Definition
HBAs are I/O adapters that are designed to maximize performance by performing protocol processing functions in silicon. HBAs are roughly analogous to network interface cards, but HBAs are optimized for storage networks, and provide features that are specific to storage.

Example
The preceding graphic contrasts HBAs with NICs, illustrating that HBAs offload protocol processing functions into silicon.

Facts
With NICs, protocol processing functions such as flow control, sequencing, segmentation and reassembly, and error correction are performed by software drivers. HBAs offload these protocol processing functions onto the HBA hardware itself—usually some combination of an application-specific integrated circuit (ASIC) and firmware. Offloading these functions is necessary to provide the performance required by storage networks.
NICs can utilize over 80 percent of a server’s CPU capacity (measured with a 1GHz Intel Pentium CPU) to deliver 50-80MB/s on a Gigabit Ethernet link. I/O processing adds considerable real cost to what may appear to be an inexpensive NIC. HBAs manage I/O transactions with little or no involvement of the server CPU. FC HBAs can provide throughput at nearly 95 percent of link speed with less than 10 percent server CPU utilization.

22 HBA Features

Host Bus Adapters (HBAs):
- Connect hosts to the SAN
- Attach to host I/O bus (PCI, PCI-X, SBus)
Common features:
- 1 or 2 Gb/s
- Full-duplex operation
- Persistent binding/LUN mapping
Three major HBA vendors:
- Emulex
- Qlogic
- JNI

Objective
Identify typical and differentiating features of FC HBAs

Introduction
This section describes typical and differentiating features of HBAs.

Facts
FC HBAs are the I/O adapters that allow servers to connect to the SAN. When designing an Ethernet LAN, a designer might not pay much attention to the selection of NICs. Although different vendors’ NICs do have different features, the range of features is fairly narrow. However, FC HBAs can differ widely in terms of both features and performance. Some features are common to almost all HBAs, while vendors use other features to differentiate their products.
Typical features supported by HBAs include:
- Full-duplex operation, providing up to 200MB/s in each direction over a 2Gb/s FC link
- Persistent binding (sometimes called LUN mapping), which allows administrators to “bind” specific storage resources to specific hosts
Note that while most HBAs support both loop and fabric topologies, some older HBAs support only loop operation. Emulex, Qlogic, and JNI share most of the HBA market; several other HBA vendors have smaller market shares.
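Persistent binding amounts to a stable mapping kept by the HBA driver between a storage target's World Wide Name and the identity presented to the OS, so device names do not shuffle as devices are rediscovered. The following is a schematic sketch of that idea only, with hypothetical WWPN values and no vendor driver interface.

```python
# Schematic persistent-binding table: target WWPNs (hypothetical values) are
# pinned to fixed SCSI target IDs so the OS sees stable device identities even
# if targets are discovered in a different order after a reboot.

persistent_bindings = {
    "50:06:0e:80:00:00:00:01": 0,   # array controller port A -> target ID 0
    "50:06:0e:80:00:00:00:02": 1,   # array controller port B -> target ID 1
}

def assign_target_id(discovered_wwpn, next_free_id):
    """Return the bound target ID if one exists, otherwise record a new one."""
    if discovered_wwpn in persistent_bindings:
        return persistent_bindings[discovered_wwpn]
    persistent_bindings[discovered_wwpn] = next_free_id
    return next_free_id
```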

23 HBA Features (cont.)

Differentiating Features
- Multiple FC ports on one HBA
- GBICs/SFPs versus fixed media
- Number of port buffers
- Protocol support (SCSI, IP, VI)
- FC Tape Profile support
- SAN boot capability
- Firmware upgradeability
- Driver compatibility across versions

Facts
HBAs have different FC operating parameters, such as support for Classes of Service. Vendors also use value-added features to distinguish their products in the marketplace. Features that differentiate between HBA products include:
- Multiple ports that support redundant and/or aggregated data paths
- Use of removable transceivers (such as Gigabit Interface Converters [GBICs] or Small Form-factor Pluggables [SFPs]) that provide more flexibility than fixed optics
- More port buffers for better performance in high-throughput environments and over WAN links
- Support for protocols other than SCSI-FCP, including IP and VI
- Support for native FC tape devices (FC Tape Profile)
- The ability to boot a host from an attached SAN volume
- Use of upgradeable firmware to enable support for new services without requiring new hardware or ASIC changes
- Drivers that support backwards compatibility and multiple HBA models to simplify enterprise deployment and management

24 HBA Port Failover and Aggregation
[Figure: Two options: a dual HBA configuration (two single-port HBAs, each with its own link into the FC fabric) and a dual-ported HBA configuration (one HBA with two links into the FC fabric)]

Objective
Describe options for configuring redundant HBA ports

Introduction
Hosts can be configured with dual HBA ports for redundancy and—potentially—performance gain. This section describes the available options for configuring dual HBA ports in a host.

Facts
Two basic options are available to eliminate single points of failure in the host-to-switch data path:
- Two HBAs per host increase fault-resilience by providing alternate data paths. For example, if one HBA fails, then the host driver can shift the data flow to the other HBA.
- A single dual-ported HBA also increases fault-resilience by providing alternate paths if a link fails, but does not protect against failure of the HBA itself.
The choice of using two HBAs versus a dual-ported HBA depends on how the customer prioritizes availability versus cost. If the customer has already selected an HBA vendor and wants to stay with that vendor, the choice might also depend on the options offered by the vendor.

25 HBA Port Failover and Aggregation (cont.)
Both HBA multipathing configurations require a multipathing driver:
- Monitors the active HBA port
- Switches to the failover port when a failure is detected
- Can provide bandwidth aggregation (active-active)
Choice of drivers:
- Most HBA vendors provide multipathing drivers
- Third-party SAN software (e.g. VERITAS DMP)
- Multipathing drivers are qualified for specific arrays

Facts
Both two-HBA and dual-ported HBA configurations require a host driver that supports multipathing. The multipathing driver:
- Monitors the active HBA port for failures
- Switches to the failover port when a failure is detected
- Can also provide bandwidth aggregation
Most HBA vendors provide multipathing drivers. Some third-party SAN software vendors also provide multipathing software that sits between the vendor HBA driver and the file system. For example, the VERITAS Dynamic Multipathing (DMP) product, which is part of the VERITAS Foundation Suite, supports multipathing with most HBAs and storage devices.
Either dual-HBA or dual-ported HBA configurations can support bandwidth aggregation (also known as active-active multipathing). Third-party products such as VERITAS DMP typically do support active-active operation, but most multipathing HBA drivers do not support active-active operation. (Emulex is one vendor that does support dynamic load balancing with its MultiPulse product.)
Note that vendors qualify their multipathing software with specific arrays. Not all arrays are compatible with the multipathing protocols used by the HBA driver. It is important to verify that the multipathing software is qualified with the customers’ storage.
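The logic a multipathing driver implements can be sketched in a few lines. This is a simplified, hypothetical model, not how products such as VERITAS DMP or vendor HBA drivers are actually written (those operate inside the kernel I/O stack): it tracks path health, fails over to the surviving path, and optionally round-robins I/O across healthy paths when active-active operation is supported by the attached array.

```python
# Simplified multipathing model: failover between two HBA ports, with optional
# active-active (round-robin) load sharing. Purely illustrative.

import itertools

class MultipathDriver:
    def __init__(self, paths, active_active=False):
        self.paths = list(paths)             # e.g., ["hba0_port0", "hba1_port0"]
        self.healthy = {p: True for p in self.paths}
        self.active_active = active_active
        self._rr = itertools.cycle(self.paths)

    def report_failure(self, path):
        # Called when a link or HBA failure is detected on a path.
        self.healthy[path] = False

    def select_path(self):
        candidates = [p for p in self.paths if self.healthy[p]]
        if not candidates:
            raise IOError("no healthy path to the storage device")
        if self.active_active:
            # Spread I/O across all healthy paths for bandwidth aggregation.
            while True:
                path = next(self._rr)
                if self.healthy[path]:
                    return path
        # Active-passive: always use the first healthy path (failover order).
        return candidates[0]

# Example: fail the active path and let the driver switch to the standby.
driver = MultipathDriver(["hba0_port0", "hba1_port0"])
driver.report_failure("hba0_port0")
print(driver.select_path())   # "hba1_port0"
```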

26 HBA Drivers

[Figure: Two Windows NT driver stacks compared. Miniport model: NT I/O subsystem, filter and class drivers, the Microsoft SCSI port driver, and the vendor miniport driver above the HAL (OS version-independent, less opportunity for bugs). Monoblock model: the vendor monoblock driver replaces the SCSI port driver (more features, better performance, but version-specific).]

Objective
Describe the functional differences between different types of HBA drivers

Introduction
This section describes the differences between Windows miniport and monoblock drivers.

Facts
There are two HBA driver architectures available for Microsoft Windows operating systems (Windows 9x, NT, 2000, and XP):
- Miniport HBA drivers interface between the standard Microsoft-supplied SCSI port driver and the Windows Hardware Abstraction Layer (HAL), and are therefore OS version-independent. Miniport drivers are also less likely to contain bugs, because their architecture is relatively simple; most of the work is done by the standard Microsoft-supplied SCSI port driver.
- Monoblock drivers are OS version-specific drivers that replace the standard Microsoft SCSI port driver, and sometimes even bypass the Windows HAL. Monoblock drivers typically provide improved performance and additional management functionality that cannot be supported by the Windows miniport driver model.
HBA vendors sometimes offer monoblock drivers to take advantage of higher performance and enhance the functionality of their HBAs. For example, multipathing drivers are typically monoblock drivers. However, monoblock drivers are version-specific, and sometimes must be upgraded even when a new Windows Service Pack is applied. Many HBA vendors supply both miniport and monoblock drivers for Windows operating systems.

27 Lesson Review

Practice
1. The IT department has defined their requirements for HBAs. Which requirements might help determine their choice of vendors and products?
a. Supports 1 Gb/s FC links
b. Full duplex operation
c. Supports the switched fabric topology
d. LUN mapping
e. Supports IP
f. Supports electrical and optical cables
2. What are the requirements for implementing bandwidth aggregation across two HBA ports in a host?
a. A dual-ported HBA
b. A dual-ported HBA with a multipathing driver
c. A multipathing driver
d. A multipathing driver and a compatible array controller
e. An array controller that supports multipathing
3. What are the advantages of using vendor-supplied Windows monoblock HBA drivers?
a. Enhanced performance
b. Enhanced management functionality
c. Increased reliability
d. Fewer driver upgrades required
Answers: question 1: d, e, f; question 2: d; question 3: a, b

28 Summary

- JBOD arrays are Just a Bunch of Disks
- RAID arrays offer better availability, performance, scalability, and manageability
- RAID arrays consist of a RAID controller, cache memory, and a SCSI or Fibre Channel backplane
- Each disk in a JBOD array is individually managed (1 disk = 1 LUN)
- RAID arrays group multiple disks into logical volumes (n disks = m LUNs)

Lesson Summary: SAN Components Review
In this lesson, you reviewed the components of an FC SAN, including the role of each component in the SAN and the performance characteristics of each component.

29 Summary

- FC-SCSI bridges attach SCSI devices to a Fibre Channel SAN by providing each SCSI device with a Fibre Channel address
- Fibre Channel hubs provide shared bandwidth (arbitrated loop) and support only a limited number of devices
- Fibre Channel switches provide scalable bandwidth (switched fabric), support thousands of devices, and provide management services
- Director-class switches are large (hundreds of ports), fast, highly available enterprise switches
- IP storage protocols used with FC SANs: FCIP, iFCP, and iSCSI
- FC-BB: FC over ATM, DWDM, SONET/SDH

30 Summary

- HBAs are I/O adapters with protocol processing in silicon
- HBAs differ in terms of number of ports, removable/fixed media, port buffers, protocol support, and manageability
- Vendor-supplied HBA drivers can support features not supported by standard OS drivers (Windows miniport vs. monoblock)
