LUN Management: Introduction
This lesson provides an overview of the tasks involved in configuring SAN-attached disk storage devices. Importance Designing array configurations and making configuration changes on a customer’s array is out of scope for a Cisco SE. However, your role in a SAN deployment project will depend in part on whether the customer, the storage vendor, or a systems integrator performs array management tasks such as LUN definition. Although you do not need to know how to design or configure a customer’s array, you do need to have a basic understanding of what this process involves.
Objective
Upon completion of this lesson, you will be able to describe the requirements and tasks associated with array and volume management.
Enabling Objectives
Explain the relevance of LUN management to a SAN fabric designer
Define LUN mapping
Define LUN masking
Describe how LUN mapping and LUN masking are implemented
Describe how SCSI LUN discovery is implemented in Cisco MDS Family switches
Describe the functionality provided by logical volume managers (LVMs)
Explain how the components of a SAN work together to present a logical view of storage resources to the host
Outline
LUN Management Overview
What is LUN Mapping?
What is LUN Masking?
LUN Mapping and Masking Implementations
SCSI LUN Discovery
Logical Volume Managers
Managing Storage Resources
Prerequisites
Curriculum Unit 2, Modules 1 and 2
All previous lessons in this module
LUN Management Overview
Managing storage LUNs: Can be a complex process Can result in data loss if not done properly Is typically performed by the storage vendor The role of the Cisco SE: Provide path selection capability Ensure that only one server can write to each volume Enforce data security Understand how storage resources are mapped from subsystem to host LUN Management Overview Objective Explain the relevance of LUN management to a SAN fabric designer Introduction This section explains the relevance of LUN management to a SAN fabric designer. Facts Managing a storage subsystem requires a detailed knowledge of its internal architecture. Remember that configuring storage arrays—for example, defining LUNs on an array—can result in data loss if not done properly. Typically, the storage vendor will assist the customer in configuring the storage devices. The role of the Cisco SE in terms of storage subsystems is to: Provide path selection capability: At the storage array and at the HBA, LUNs are mapped to specific ports. The fabric configuration must be mapped to the LUN configuration. Prevent the wrong server from accessing the wrong volumes: In a SAN architecture, hosts manage the file systems that reside on the LUNs, so administrators must ensure that only the appropriate hosts have access to those LUNs. For example, two hosts might attempt to access the same storage volumes simultaneously, causing data corruption. Secure data from unauthorized use: As a fundamental security measure, hosts should not have access to LUNs that they do not need to use. Understand how storage resources are mapped from subsystem to host: The way that raw disks appear to the hosts depends on multiple devices and applications working in unison. LUNs can be logically aggregated and logically “split” at multiple points in the SAN. You need to understand how LUNs are “virtualized” at each point so that you can design the fabric to support that configuration. For example, if a number of storage LUNs are added to a virtual storage pool on the host or in a virtualization appliance, all of the storage ports that are associated with those LUNs will need to be in the same VSAN.
What is LUN Mapping? Host OS locates volumes based on SCSI IDs:
SCSI devices are hard-addressed by SCSI IDs By default, hosts assign SCSI IDs to SAN-attached LUNs based on FCID and power-up order If a SAN device is power-cycled, SAN-attached LUNs can be assigned different SCSI IDs by the OS There needs to be a way to ensure that hosts always assign the same SCSI ID to a given LUN LUN mapping ensures that hosts can always access their LUNs What is LUN Mapping? Objective Define LUN mapping Introduction This section explains how storage LUNs are mapped to hosts. Definition LUN mapping is the term used to define a set of tasks that are performed together to ensure that hosts can always locate and access their storage LUNs. Facts Devices on a SCSI bus are hard-addressed. Hard addressing ensures that the host operating system always knows where to “find” a specific disk volume, because that volume is always associated with a specific SCSI ID. Internally, hosts identify storage resources based on SCSI IDs. The fact that the I/O subsystems built into most operating systems (such as Windows and UNIX operating systems) were designed to work with hard-addressed SCSI storage devices presents a problem for FC implementations. Operating systems do not recognize Fibre Channel IDs (FCIDs) or World Wide Names (WWNs)—they recognize SCSI IDs. Therefore, there needs to be a way to ensure that hosts consistently assign the same SCSI ID to a given LUN. If the SCSI ID assigned to a LUN changes, the host cannot correctly identify the storage volumes, preventing applications from accessing their data. By default, hosts can only assign SCSI IDs to LUNs based on the FCIDs of the LUNs, and based on the order in which LUNs became available. If Array A is powered up before Array B, hosts will first assign SCSI IDs to LUNs on Array A; if Array B is powered up first, hosts will first assign SCSI IDs to LUNs on Array B. When a storage device is power-cycled, the host loses contact with its storage volumes. When that device comes back online, the host might assign different SCSI IDs to the volumes on that device, depending on the order in which all of the storage devices originally came online.
What is LUN Mapping? (cont.)
Hosts FC FC RAID Array LUNs WWN: 21:00…24:E9 WWN: 21:00…24:EA LUN mapping consists of two related sets of tasks. On the storage array: Administrators “carve” LUNs out of physical storage devices. This process might include establishing RAID sets, configuring hot spares, assigning cache resources, and setting failure recovery policies. The administrator assigns an FC World Wide Name (WWN) to each LUN. This is typically done automatically by the array controller, but the administrator is usually able to manually configure a WWN. When a SCSI tape library is attached to the SAN through an FC-SCSI bridge, the bridge is responsible for assigning a WWN to each tape drive. The administrator manually configures this mapping using the configuration interface provided by the bridge. Administrators associate the LUNs with one or more storage controller ports and configure primary and secondary data paths. FC WWN: 21:00…24:EB
What is LUN Mapping? (cont.)
Hosts FC WWN 21:00…24:E9 = SCSI ID 0 FC RAID Array SCSI ID 0 = D: LUNs On the HBA: Administrators use the HBA configuration utility to associate LUNs with one or more HBA ports, based on the WWN of the LUN. Again using the HBA configuration utility, administrators map LUN WWNs to SCSI IDs. The HBA presents LUNs to the host as SCSI LUNs (because the host I/O subsystem only “speaks” SCSI). The HBA always presents the same WWN as the same SCSI ID so that the host can reliably use the SCSI IDs to identify storage resources. Administrators then use native operating system functions to assign operating system-specific device IDs to the SCSI IDs that are presented by the HBA. The mapping of LUN WWNs to host SCSI IDs is sometimes known as persistent binding. Different HBA vendors use different terms. Example The preceding diagram shows an example of LUN mapping. The FC RAID array presents three LUNs to the fabric. Administrators use the HBA utilities on each host to associate the WWNs of the appropriate LUNs to SCSI IDs. On the host, administrators then assign operating system-specific volume IDs (such as D: on Windows hosts or /dev/dsk/c1t1d0 on UNIX hosts) to the SCSI IDs that are presented to the host by the HBA. FC SCSI ID 0 = /dev/dsk/c1t0d0 SCSI ID 1 = /dev/dsk/c1t1d0 WWN 21:00…24:EA = c1t0d0 WWN 21:00…24:EB = c1t1d0
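To make persistent binding concrete, the following sketch shows what a WWPN-to-SCSI-ID binding entry might look like in an HBA driver configuration file. The file format and directive name are hypothetical and used only for illustration; each HBA vendor (Emulex, QLogic, and others) uses its own syntax and its own configuration utility for this step.

  # Hypothetical HBA driver configuration file (directive name is illustrative only)
  # Always present the LUN with WWPN 21:00…24:EA as SCSI target ID 0 on adapter 0,
  # and the LUN with WWPN 21:00…24:EB as SCSI target ID 1, across reboots and
  # regardless of the order in which the arrays come online.
  persistent-binding = "adapter0:wwpn=21:00…24:EA:scsi-id=0";
  persistent-binding = "adapter0:wwpn=21:00…24:EB:scsi-id=1";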
What is LUN Mapping? (cont.)
Example: Emulex HBA Configuration Tool Example The preceding image shows LUN mapping using the Emulex HBA configuration utility. This example shows two WWPNs, representing two FC LUNs, mapped to SCSI IDs.
What is LUN Masking? LUN masking controls the visibility of LUNs to hosts: The SAN administrator determines which hosts are allowed to “see” which LUNs Prevents hosts from “seizing” or writing to volumes that do not belong to them Can be done either on the HBA or on the array controller What is LUN Masking? Objective Define LUN masking Introduction This section explains how administrators can ensure that each host can access only the LUNs that are owned by that host. Definition LUN masking is the process of “hiding” LUNs from specific HBA ports. Facts Zoning and VSANs are two of the techniques used to control access to volumes on a Fibre Channel SAN. HBAs can also participate in volume access control by implementing LUN masking. LUN masking helps to prevent hosts from “seizing” storage assets and making them unavailable to other hosts, and prevents two devices from attempting to write to the same storage location. LUN masking is often configured through the LUN mapping process. In other words, both LUN masking and LUN mapping can be performed using the same user interface. Both HBAs and intelligent storage controllers support LUN masking. On the HBA, the administrator can disable LUNs to prevent the host OS from seeing those LUNs. On an intelligent storage controller, the administrator can specify the HBA ports that are allowed to access each LUN.
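LUN masking itself is configured with vendor-specific HBA or array utilities, but the complementary fabric-level control mentioned above (zoning) can be sketched on a Cisco MDS switch. The example below is illustrative only: the zone name, zone set name, VSAN number, and pWWNs are assumptions, and prompts may differ slightly between SAN-OS releases. The zone permits one host HBA port to reach one array port; the HBA or array then masks individual LUNs within that path.

  switch# configure terminal
  switch(config)# zone name HostA_to_Array1 vsan 10
  switch(config-zone)# member pwwn 21:00:00:e0:8b:01:02:03
  switch(config-zone)# member pwwn 50:06:04:82:aa:bb:cc:dd
  switch(config-zone)# exit
  switch(config)# zoneset name Fabric_A vsan 10
  switch(config-zoneset)# member HostA_to_Array1
  switch(config-zoneset)# exit
  switch(config)# zoneset activate name Fabric_A vsan 10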
What is LUN Masking? (cont.)
Hosts FC FC RAID Array LUNs Example The preceding diagram shows an example of host-based LUN masking. The FC RAID array presents three LUNs to the fabric. Administrators use the HBA utilities on each host to “hide” specific WWNs from the host while permitting the host to see other WWNs. FC
LUN Mapping and Masking Implementations
Host (I/O subsystem, HBA firmware, HBA driver) [1], switch/router [3], array controller [2], LUNs, storage device. LUN Mapping and Masking Implementations Objective Describe how LUN mapping and LUN masking are implemented Introduction This section describes the different implementations of LUN mapping and LUN masking in SAN devices. Facts LUN mapping and LUN masking can be implemented in one of three ways: In the HBA (1). All modern FC HBAs support LUN mapping and masking, enforced either by the firmware or by the HBA driver. By the array controller (2). Array-based implementations achieve the same basic objective as HBA-based implementations by allowing administrators to specify the WWNs of the host ports that can access each LUN. Only intelligent storage arrays support LUN mapping and masking at the array level; JBODs and many mid-range and low-end RAID arrays do not support these functions. In a switch or router (3). This implementation is not yet common, but storage routers or switches could provide LUN masking functions.
SCSI LUN Discovery Storage devices do not register detailed LUN information with the FCNS: Device capacity, serial number, and status Initiator and target features The SCSI LUN discovery feature: Queries targets to obtain detailed LUN information Synchronizes LUN information with other MDS switches Allows management applications to query FCNS for LUN info discover scsi-target [local|remote|VSAN x] This is not needed to see LUNs—it is only needed if management software can show LUN configuration data SCSI LUN Discovery Objective Describe how SCSI LUN discovery is implemented in Cisco MDS Family switches Introduction This section explains why fabric switches discover SCSI LUNs, and how this process is implemented on Cisco MDS switches. Facts Disk arrays and tape libraries do not register SCSI LUN configuration details with the FC Name Server (FCNS). The LUNs themselves are registered, but detailed LUN configuration is not registered. These details include: Device capacity Serial number Device status information Initiator and target features To support management applications that can display detailed SCSI target information, MDS switches provide a SCSI LUN discovery feature. When this feature is activated, the local domain controller on each MDS switch issues SCSI INQUIRY, REPORT LUNS, and READ CAPACITY commands to SCSI devices to obtain LUN configuration information. This information is then synchronized with other Cisco MDS 9000 Family switches in the fabric. The SCSI LUN discovery feature is initiated on demand, through the CLI discover scsi-target command or through SNMP. Note that discovery can take several minutes to complete, especially if the fabric is large or if devices are slow to respond. This command is not required to see LUNs in the FCNS; it is only needed to add details about LUN configuration, and is therefore useful only if your management software can display LUN configuration data.
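As a rough illustration of the on-demand process (based on the discover scsi-target syntax documented for Cisco MDS SAN-OS; exact options and output vary by release), discovery might be started and the results inspected as follows. The os keyword tells the switch which operating-system conventions to assume for the targets, and the show commands report the discovered capacity, serial number, and LUN details once discovery completes.

  switch# discover scsi-target local os all
  switch# show scsi-target status
  switch# show scsi-target lun vsan 1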
Logical Volume Managers
Application Host-based LVMs: Originally designed to provide software RAID Most OSs provide some level of native LVM functionality Third-party LVMs often support features like snapshots, online volume configuration RDBMS-specific LVMs: RDBMS can bypass the host LVM and store data directly on raw disk partitions Can provide application-specific features and performance optimization Database File System Volume Manager I/O Driver OS Kernel Logical Volume Managers Objective Describe the functionality provided by logical volume managers (LVMs) Introduction This section describes the functionality provided by the logical volume management layer. Facts LVMs were originally designed to provide RAID functionality in situations where it was not possible or desirable to use hardware-based RAID controllers. Today, most operating systems provide some level of native LVM functionality. File systems can either access storage devices directly or access virtual disks presented by the LVM. While OS-native LVMs typically provide just the basic RAID functionality, third-party LVMs are growing increasingly sophisticated, and often support advanced features like expanded RAID functions, snapshots, and online volume configuration. Relational database management systems (RDBMS) can also provide LVM functionality. If the RDBMS does not provide LVM functionality, it stores data in “container” files on managed volumes that the RDBMS perceives as a physical disk. If the RDBMS does provide LVM functionality, it bypasses the host LVM and stores data directly on raw disk partitions. The advantage of an RDBMS providing built-in LVM functionality is that volume management can be optimized for a database environment, and can provide application-specific features (such as concurrent host access to shared volumes) and performance enhancements. Oracle is an example of an RDBMS that provides LVM functionality. The preceding diagram illustrates the role of the LVM. The LVM sits between the host file system and I/O driver, or between an RDBMS and the I/O driver. SAN
Logical Volume Managers (cont.)
Volume Managers—“The Other Virtualization”: Role of the LVM: Convert read/write requests into device I/O commands Implement striping, mirroring, or RAID algorithms Maintain consistent view of volume state Logical abstraction layer enables management features: Reconfigure volumes online Transparently migrate data Manage and optimize storage I/O performance online Enable concurrent data sharing File-system manager still provides user interface, journaling, access control—LVM is transparent The line between LVMs and virtualization is increasingly blurry. Of two vendors that provide the same basic features, one vendor might call their product a “volume manager,” and the other might call their product a “virtualization solution.” The main purpose of an LVM is to virtualize whatever the server perceives as physical disk drives, presenting it in a form that is more convenient for use by file systems and applications. The role of the LVM is to: Convert each read or write request to the volume into one or more storage device I/O commands, issue those commands, and manage the resulting data flow. Implement striping, mirroring, or RAID algorithms that improve performance and protect against data loss due to storage device failure. Maintain a consistent view of volume state—which disks or parts of disks make up a volume, their operational status, and how data is organized on them. LVMs implement a logical abstraction layer to improve the ability of storage administrators to manage storage. By separating the representation of the data from the physical disk, LVMs enable a wide range of advanced management capabilities, such as: The ability to expand and contract volumes without taking those volumes offline Transparent migration of data from one volume or physical disk to another volume or physical disk The ability to manage and optimize storage I/O performance while volumes are online Data sharing capabilities that allow multiple hosts to concurrently access the same volumes Note that LVMs do not replace file systems. The file system manager is still responsible for providing a user interface for applications, journaling features to ensure file system consistency and recoverability, and access control. The LVM is transparent to the file system.
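As one concrete example of the volume-management operations described above, the sketch below uses Linux LVM2 to aggregate two SAN LUNs into a single volume group and carve a striped logical volume from it. The device names, volume group name, logical volume name, and sizes are assumptions chosen for illustration; VERITAS Volume Manager, AIX LVM, and other volume managers expose equivalent operations through their own interfaces.

  # Initialize two SAN LUNs (device names assumed) as LVM physical volumes
  pvcreate /dev/sdb /dev/sdc
  # Aggregate both LUNs into a single volume group
  vgcreate san_vg /dev/sdb /dev/sdc
  # Create a logical volume striped across the two LUNs
  lvcreate --size 1500M --stripes 2 --name data_lv san_vg
  # The file system is created on the logical volume, not on the underlying LUNs
  mkfs -t ext3 /dev/san_vg/data_lv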
Managing Storage Resources
Putting It All Together FC FC Managing Storage Resources Objective Explain how the components of a SAN work together to present a logical view of storage resources to the host Introduction The process of configuring storage resources and making those resources available to hosts involves multiple layers of abstraction. Storage resources can be logically partitioned and assigned at each tier in the SAN—in the array, in the fabric, and in the host. The final presentation of storage resources to the hosts depends on multiple devices and applications working in unison. The following slides show how the components of a SAN work together to present a logical view of storage resources to the host. How is mapping defined? Where is mapping defined? Multiple layers of abstraction
Managing Storage Resources: LUN Definition
SCSI LUNs 2 x 1GB 1 x 3GB 3GB 1GB Physical Disks 10 x 1GB (Mirrored) FC FC Procedure The first step in configuring a new storage resource is to configure the raw storage resources on the array. Using array management software provided by the array vendor, the storage administrator must: Define, or “carve,” LUNs out of the physical disks in the array by selecting the desired raw disk partitions and configuring those partitions in a RAID configuration Assign primary and secondary controllers and interfaces to each LUN Assign an FC WWN to each LUN Configure “hot spare” disks and recovery policies Configure remote data replication and snapshots or BCVs, if this functionality is provided by the array firmware The preceding slide shows a simplified scenario in which an array containing ten 1GB disks is partitioned into three LUNs—two 1GB LUNs and one 3GB LUN. Each LUN consists of one or more pairs of mirrored disks, providing redundancy at the physical disk level. The example shown here contains a single controller, but in an actual production environment each disk and its mirror would typically be configured on separate controllers. The LUN definition process is the first layer of logical abstraction, or “virtualization” of storage. The view of the storage resources that the array presents to the SAN is abstracted from the physical storage resources in the array. Array Mgmt Utility
Managing Storage Resources: Fabric Configuration
SCSI LUNs 2 x 1GB 1 x 3GB Physical Disks 10 x 1GB (Mirrored) FC 3GB 1GB FC The SAN fabric must then be configured to allocate the desired logical data paths between LUNs and hosts. The administrator uses a switch management or fabric management application to: Configure and bring up the appropriate switch interfaces Assign each interface to the appropriate VSAN and fabric zones Configure automated monitoring systems, including thresholds and alerting policies If the hosts are iSCSI hosts accessing back-end FC storage through an FC-iSCSI storage router, such as the Cisco SN 5428, or through a multiprotocol switch, such as a Cisco MDS with an IPS-8 module, the FC-to-iSCSI mapping must also be configured. Fabric Manager Array Mgmt Utility
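A minimal sketch of these switch-side steps on a Cisco MDS follows; the VSAN number, VSAN name, and interface IDs are assumptions, and the zoning commands (sketched earlier in this lesson) would complete the configuration.

  switch# configure terminal
  switch(config)# vsan database
  switch(config-vsan-db)# vsan 10 name Production
  switch(config-vsan-db)# vsan 10 interface fc1/1
  switch(config-vsan-db)# vsan 10 interface fc2/5
  switch(config-vsan-db)# exit
  switch(config)# interface fc1/1
  switch(config-if)# no shutdown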
Managing Storage Resources: LUN Mapping and Masking
SCSI LUNs 2 x 1GB 1 x 3GB Physical Disks 10 x 1GB (Mirrored) FC 1GB 3GB 3GB 1GB FC On each host, the HBA configuration utility must then be used to: Enable access to the LUNs that are assigned to that host—LUN masking Bind a persistent SCSI ID to each LUN—LUN mapping If the host is multi-homed, the HBA driver or separate multipathing software must be configured to: Assign primary and secondary paths for each LUN Define failover and failback policies for each LUN Define load-balancing policies, if supported In the example shown here, the two 1GB LUNs are assigned to one host, and the 3GB LUN is assigned to the other host. HBA Utility Fabric Manager Array Mgmt Utility
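On a Linux host, for example, these path policies might be expressed in the device-mapper multipath configuration shown below; the WWID and alias are placeholder values, and HBA-vendor failover drivers or products such as EMC PowerPath capture the same settings in their own formats.

  # /etc/multipath.conf (excerpt); WWID and alias are placeholders
  multipaths {
      multipath {
          wwid   360060480000190100501533030abcdef
          alias  oradata_lun
          # failover = active/standby paths; multibus would load-balance across all paths
          path_grouping_policy  failover
      }
  }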
Managing Storage Resources: Logical Volume Managers
Logical Disks Drive D (500MB) Drive E (500MB) Drive F (1GB) Drive D (1GB) Drive E (2GB) SCSI LUNs 2 x 1GB 1 x 3GB Physical Disks 10 x 1GB (Mirrored) FC 1GB 3GB 3GB 1GB FC On the host, the logical volume manager (LVM) can either be built into the operating system, such as Windows 2000 Disk Manager, or installed as a third-party utility, such as VERITAS Volume Manager. The LVM adds a second layer of logical abstraction from the raw physical disks. The LVM can be used to partition a LUN into multiple volumes, to consolidate multiple LUNs into a single volume, and to configure data replication at the host level. In the example shown here, one of the 1GB LUNs and the 3GB LUN are partitioned into multiple volumes by the host LVM. Volume Manager HBA Utility Fabric Manager Array Mgmt Utility
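Continuing the Linux LVM2 illustration (device, volume group, and volume names are assumptions), splitting the 3GB LUN into the 1GB and 2GB volumes shown in the example might look like this:

  # The 3GB LUN appears to the host as a single SCSI disk (device name assumed)
  pvcreate /dev/sdd
  vgcreate app_vg /dev/sdd
  # Carve two logical volumes out of the one LUN
  lvcreate --size 1G --name vol_d app_vg
  lvcreate --extents 100%FREE --name vol_e app_vg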
Managing Storage Resources: Logical Volume Managers (cont.)
Physical Disks 10 x 1GB (Mirrored) SCSI LUNs 2 x 1GB 1 x 3GB Volume Manager Logical Disks Drive D (500MB) Drive E (500MB) Drive F (1GB) Drive D (1GB) Drive E (2GB) FC HBA Utility Fabric Manager 1GB 3GB Array Mgmt Utility Volume spanning multiple LUNs Volume LUNs—not physical disks The preceding image shows the Windows 2000 Disk Manager. The Disk Manager displays the following resource units: Each row in the lower table represents a physical disk from the point of view of the operating system, but in reality the SAN-attached storage has already been partitioned into LUNs. The “disks” that Disk Manager displays are actually the LUNs that are presented by the storage array. Each LUN can be divided into one or more partitions. One or more partitions can be combined to form a logical volume. The volumes are the units of storage that are presented to the operating system and the applications, and are shown in the upper table in Disk Manager. The partitions in a logical volume can be on the same LUN, or on different LUNs. However, even if the partitions are on the same LUN, they may actually reside on multiple physical disks. Partitions
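On later Windows versions, the diskpart command-line utility exposes the same objects that Disk Manager displays, which makes the layering easy to see: list disk enumerates what the operating system believes are physical disks (in reality, the LUNs presented by the array), and list volume shows the volumes that applications actually use. The disk number and partition size below are assumptions for illustration.

  C:\> diskpart
  DISKPART> list disk
  DISKPART> select disk 1
  DISKPART> create partition primary size=500
  DISKPART> list volume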
Managing Storage Resources: Virtualization
Logical Disks Drive D (500MB) Drive E (500MB) Drive F (1GB) Drive D (1GB) Drive E (2GB) Drive D (1GB) Drive E (4GB) Drive D (5GB) Virtualized LUNs 1 x 1GB 1 x 4GB 1 x 5GB 4GB 1GB 5GB SCSI LUNs 4 x 1GB 2 x 3GB Physical Disks 20 x 1GB (Mirrored) FC 3GB 1GB Virtualization Manager FC FC 3GB 1GB SAN-based virtualization solutions represent yet another layer of logical abstraction of storage resources. The preceding diagram shows a solution in which virtualization is implemented in the fabric, using an appliance-based, router-based, or switch-based virtualization engine. The virtualization engine takes the LUNs that are presented by the arrays and adds them to a virtual storage pool. Administrators can then configure virtual LUNs from the storage pool. In this example, the virtualization engine presents three LUNs to the hosts: a 1GB LUN, a 4GB LUN, and a 5GB LUN. The total storage capacity is still equal to the sum of the LUNs provided by the arrays (10GB). Volume Manager HBA Utility Virtualization Fabric Manager Array Mgmt Utility
Managing Storage Resources: Issues to Consider
Identifying the appropriate storage resources for each application: Availability Scalability Performance Multiple layers of abstraction add complexity: Each layer introduces availability, scalability, and performance characteristics How do you keep track as the SAN evolves? How do you coordinate across operational groups? Facts The storage administrator is responsible for identifying the appropriate storage resources required by each application. Different applications require different levels of availability, scalability, and performance. Service Level Agreements (SLAs) must also be considered. The fact that there are multiple layers of logical abstraction in the SAN adds complexity to the task of ensuring that each application and customer is assigned the appropriate storage resources. Each layer introduces different availability, scalability, and performance characteristics, all of which must be factored in to calculate the overall availability, scalability, and performance of the storage resources that are provided to the end user. For example, it is not likely to be immediately obvious whether a given logical volume is mirrored at the physical level, or whether there is room for additional capacity, or whether your high-end virtual LUNs actually reside on your fastest disks. In a DAS environment, a server operator can often use just one or two utilities to answer these questions, but in a SAN environment tracing a given logical volume back to its physical disks can be a complicated process. Keeping track of storage resources can become more difficult as the SAN evolves. Changes made at each level must be communicated and coordinated across each tier of the SAN. This often means coordinating between multiple operational groups, including storage administrators, network administrators, server administrators, and database administrators, as well as between multiple management, documentation, and change control systems. Storage resource management (SRM) applications are beginning to address these issues by automating the monitoring and provisioning of storage resources across multiple tiers in the SAN. Ultimately, however, an SRM application is only as robust as the organizational practices that feed into that application.
Lesson Review
Why does a Cisco SE need to have a basic understanding of enterprise storage arrays? What is LUN mapping? What is LUN masking?
Practice
Why does a Cisco SE need to have a basic understanding of enterprise storage arrays?
To communicate effectively with storage vendors and partners
To configure storage resources for the customer
To design the architecture of the internal I/O bus or fabric inside the storage array
To ensure that information about LUNs is provided by the FCNS
To monitor the performance of storage LUNs
What is LUN mapping?
The process of assigning FCIDs to LUNs
The process of assigning WWPNs to LUNs
A set of related tasks that ensure that hosts can always locate and access their LUNs
A set of related tasks that ensures that hosts cannot access LUNs that belong to other hosts
What is LUN masking?
LUN masking ensures that LUNs always have the same local FCID
LUN masking hides LUNs from specified HBA ports
LUN masking statically assigns local WWNs to storage LUNs
LUN masking ensures the volumes always have the same remote WWPN
Lesson Review (cont.)
How does the Fibre Channel Name Server (FCNS) in Cisco MDS switches gather information about SCSI LUNs? What are some of the most common uses for BCVs? What are the key considerations for mapping logical data paths from storage devices to hosts?
How does the Fibre Channel Name Server (FCNS) in Cisco MDS switches gather information about SCSI LUNs?
The MDS switch issues SES commands periodically
Storage devices automatically register their attributes, including LUN configuration, with the switch
Administrators use the discover scsi-target command to query attached storage devices
Administrators use the fcns database add lun command to query attached storage devices
What are some of the most common uses for BCVs?
What are the key considerations for mapping logical data paths from storage devices to hosts?
Summary Fabric designers and managers must have expertise in storage array technologies to communicate with storage management staff and to configure logical paths Enterprise arrays feature modular, multi-protocol architecture, with redundant components for availability and load balancing SCSI LUNs must be registered with the FCNS Logical data paths should be planned to provide: Redundant paths at all levels Traffic distribution Continuity in foreseeable failure scenarios Minimal impact of backups on production data Summary: LUN Management In this lesson, you learned about the requirements and tasks associated with array and volume management.
Summary (cont.) LUN mapping involves associating a specific SCSI ID with the WWN assigned by the storage array to a LUN LUN masking defines which hosts can see which LUNs LUN masking and mapping can be implemented in: HBAs Array controllers (higher-end RAID arrays only) Switch or router (not common) LVMs abstract stored data from physical disks Proper management of storage resources is time-consuming, complex, and very important