CS C446 Data Storage Technologies & Networks

Agenda: Storage Area Networks
- Structure and Architecture
- Addressing
- Zoning, Trunking and Multipathing
Storage Area Networks
- Storage units are on the network; the network is (typically) separate from the LAN, e.g., a Fibre-Channel SAN.
- Data is accessed raw (in disk blocks) from the storage units, as opposed to file access in NAS.
- Fibre-Channel SANs were the earliest: FC offers high bandwidth.
- Alternative SAN technologies are available today, e.g., IP SANs.
- SAN and NAS are converging, e.g., a NAS head with a SAN backend.
SAN - Purpose
- Primary purpose: aggregation of physical storage devices, permitting logical, on-the-fly division/sharing among hosts.
- Non-functional requirements:
  - High transfer rates
  - High availability
SAN Components and Structure
- Components: hosts (client/server computers), storage devices, interfaces (ports for communication), hubs, switches, and gateways.
- Structure - example:
  - Storage devices are connected through their ports to an (FC) arbitrated-loop (AL) hub; local hosts are also connected to the AL via I/O bus adapters and ports.
  - Hubs do not allow high transfer rates (due to sharing) but are cheap.
  - The hub is connected through an FC switch to remote hosts (referred to as a switched fabric).
  - Switches allow individual connections with high transfer rates but are expensive.
  - Gateways enable connection of SANs over WANs: SAN to SAN, or SAN to hosts on the Internet.
SAN Components and Structure
- Interconnects:
  - Cables: fibre optic, serial.
  - Transceivers, interface converters (optical/electrical), host-bus adapters (parallel-serial conversion).
  - Inter-switch links (ISLs, connecting E-ports):
    - Cascading: seamless extension of the fabric by adding switches.
    - Inter-switch links can also provide redundant paths.
- Devices:
  - Hubs, switched hubs, switches/directors.
  - Multiprotocol routers (FCP, FCIP, iFCP, IP, iSCSI).
SAN – Addressing
- WWN: a unique World Wide Name per N-port.
  - Devices may also have a WWN (independent of their adapters/ports).
  - Defined and maintained by the IEEE; 64 bits long.
- 24-bit port addresses may be used locally to reduce overhead.
SAN – Addressing
- 24-bit addressing in a switched fabric:
  - Assigned by the switch: at login, each WWN is assigned (mapped) to a 24-bit address by the Simple Name Service (SNS).
  - The SNS is a component of the fabric OS and acts as a registry/database.
- Address format:
  - Domain address (bits 23-16) identifies the switch; some addresses are reserved (e.g., broadcast), leaving 239 possible domain addresses.
  - Area address (bits 15-8) identifies a group of F-ports.
  - Port address (bits 7-0) identifies a specific N-port.
- Total addressable ports: 239 x 256 x 256.
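A minimal sketch, assuming only the bit layout listed above, of how a 24-bit fabric address splits into its domain, area, and port fields (the example address is made up):

```python
def split_fabric_address(addr_24bit: int) -> tuple[int, int, int]:
    """Split a 24-bit fabric address into (domain, area, port) fields."""
    domain = (addr_24bit >> 16) & 0xFF   # bits 23-16: identifies the switch
    area   = (addr_24bit >> 8)  & 0xFF   # bits 15-8:  identifies a group of F-ports
    port   = addr_24bit & 0xFF           # bits 7-0:   identifies a specific N-port
    return domain, area, port

# Hypothetical address 0x0A1B2C -> domain 0x0A, area 0x1B, port 0x2C
print(split_fabric_address(0x0A1B2C))
```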
SAN – Addressing
- 24-bit addressing in an arbitrated loop (AL):
  - Obtained at loop initialization time and re-assigned at login to the switch.
- Address format:
  - Fabric loop address (bits 23-8) identifies the loop; all 0s denotes a private loop, i.e., one not connected to any fabric.
  - Port address (bits 7-0) identifies a specific NL-port.
- Only 126 addresses are usable for NL-ports:
  - 8B/10B encoding is used for signal balancing, and of the 256 bit patterns only 134 have neutral running disparity.
  - Of these, 7 are reserved for FC protocol usage and 1 for an FL-port (so that the loop can be attached to the fabric), leaving 126.
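A companion sketch for the loop case, again assuming nothing beyond the bit layout above: the loop field occupies bits 23-8, and an all-zero loop field marks a private loop.

```python
def split_loop_address(addr_24bit: int) -> tuple[int, int]:
    """Split a 24-bit arbitrated-loop address into (loop, port) fields."""
    loop = (addr_24bit >> 8) & 0xFFFF    # bits 23-8: identifies the loop
    port = addr_24bit & 0xFF             # bits 7-0:  identifies a specific NL-port
    return loop, port

def is_private_loop(addr_24bit: int) -> bool:
    """A loop field of all zeros marks a private loop (not on any fabric)."""
    loop, _ = split_loop_address(addr_24bit)
    return loop == 0

print(split_loop_address(0x00002C), is_private_loop(0x00002C))  # (0, 44) True
```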
SAN – Routing
- Routing is analogous to switching in a LAN.
- Goal: keep a single path (between any two ports) alive, with no redundant paths or loops.
  - Additional paths are held in reserve and may be used in case of failures.
- Fabric Shortest Path First (FSPF) protocol; cost metric: hop count.
  - A link-state protocol: a link-state database (or topology database) is kept in the switches.
  - The database is initialized/updated when a switch is turned on, a new ISL comes up, or an ISL fails.
  - Switches use additional logic when hop counts are equal; round robin is often used for load sharing.
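Since the FSPF cost is a hop count, shortest paths can be illustrated with a plain breadth-first search over the topology database. The sketch below is only illustrative, not the FSPF protocol itself; the four-switch fabric and round-robin tie-break among equal-cost next hops are assumed for the example.

```python
from collections import deque
from itertools import cycle

def hop_counts(topology: dict[str, list[str]], src: str) -> dict[str, int]:
    """BFS over the topology database: hop count from src to every switch."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        for neighbour in topology[node]:
            if neighbour not in dist:
                dist[neighbour] = dist[node] + 1
                queue.append(neighbour)
    return dist

def equal_cost_next_hops(topology, src, dst):
    """Neighbours of src that lie on a minimum-hop path to dst."""
    dist_from_dst = hop_counts(topology, dst)
    best = dist_from_dst[src]
    return [n for n in topology[src] if dist_from_dst.get(n, best) == best - 1]

# Hypothetical 4-switch fabric with two equal-cost paths from A to D.
fabric = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
next_hops = cycle(equal_cost_next_hops(fabric, "A", "D"))  # round-robin load sharing
print(next(next_hops), next(next_hops))  # alternates between B and C
```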
SAN - Zoning
- Zoning allows fabric segmentation: storage (traffic) isolation.
  - Example scenario: Windows systems claim all visible storage.
- Hardware zoning (1-1, 1-*, *-*):
  - Based on the ports connected to the fabric switches (the switches' internal port numbering is used).
  - A port may belong to multiple zones.
  - Advantage: implemented in the routing engine by filtering.
  - Disadvantage: device connections are tied to (physical) ports.
- Software zoning:
  - Based on WWNs; managed by the OS in the switch.
  - Less secure due to spoofing possibilities.
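A minimal sketch of port-based (hardware) zoning, assuming a hypothetical zone table keyed by (switch, port) pairs; note that one port can appear in more than one zone:

```python
# Hypothetical hardware-zoning table: each zone is a set of (switch, port) pairs.
ZONES = {
    "zone_db":  {("sw1", 3), ("sw1", 7), ("sw2", 1)},
    "zone_web": {("sw1", 3), ("sw2", 4)},            # port ("sw1", 3) is in two zones
}

def ports_may_communicate(a: tuple[str, int], b: tuple[str, int]) -> bool:
    """Two ports may talk only if they share at least one zone."""
    return any(a in members and b in members for members in ZONES.values())

print(ports_may_communicate(("sw1", 3), ("sw2", 1)))  # True  (both in zone_db)
print(ports_may_communicate(("sw2", 1), ("sw2", 4)))  # False (no common zone)
```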
SAN - Zoning
- Software zoning:
  - Based on WWNs; managed by the OS in the switch.
  - The number of members in a zone is limited only by the memory available.
  - A node may belong to more than one zone.
  - More than one zone set can be defined in a switch, but only one set is active at a time; the active zone set can be changed without bringing the switch down.
  - Less secure, because software zoning is implemented using the SNS:
    - A device may connect directly to the switch without going through the SNS.
    - WWN spoofing is possible, since WWNs can be probed.
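A minimal sketch of WWN-based (software) zoning with multiple zone sets, only one of which is active at a time; the zone-set names and WWN values are invented for the example:

```python
# Hypothetical WWN-based zone sets; only one zone set is active at a time.
ZONE_SETS = {
    "production": {
        "zone_oracle": {"10:00:00:00:c9:2a:11:01", "21:00:00:20:37:f0:aa:01"},
        "zone_backup": {"10:00:00:00:c9:2a:11:02", "21:00:00:20:37:f0:aa:02"},
    },
    "maintenance": {
        "zone_all": {"10:00:00:00:c9:2a:11:01", "10:00:00:00:c9:2a:11:02",
                     "21:00:00:20:37:f0:aa:01", "21:00:00:20:37:f0:aa:02"},
    },
}
active_zone_set = "production"   # can be switched without bringing the switch down

def wwns_may_communicate(wwn_a: str, wwn_b: str) -> bool:
    """Two WWNs may talk only if some zone in the active set contains both."""
    zones = ZONE_SETS[active_zone_set]
    return any(wwn_a in members and wwn_b in members for members in zones.values())

print(wwns_may_communicate("10:00:00:00:c9:2a:11:01", "21:00:00:20:37:f0:aa:01"))  # True
```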
SAN – Frame Filtering
- Frame filtering: the process of inspecting each frame (its header information) at the hardware level for access-control purposes.
- Usually implemented in an ASIC; the choice and configuration of filters can be done at switch initialization/boot time.
- Allows zoning to be implemented with access control performed at wire speed.
- Port-level, WWN-level, device-level, LUN-level, and protocol-level zoning can all be implemented using frame filtering.
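The ASIC logic itself is hardware, but the decision it makes can be sketched in software. The header fields and filter rules below are an assumed, simplified subset, just to show a per-frame accept/drop check:

```python
from dataclasses import dataclass

@dataclass
class FrameHeader:
    """Illustrative subset of the header fields a filter might inspect."""
    s_id: int    # 24-bit source port address
    d_id: int    # 24-bit destination port address
    lun: int     # addressed logical unit

# Hypothetical filters configured at switch boot time: (source, destination, allowed LUNs).
FILTER_RULES = [
    (0x0A1B01, 0x0A1C02, {0, 1}),
    (0x0A1B03, 0x0A1C02, {2}),
]

def frame_permitted(hdr: FrameHeader) -> bool:
    """Pass the frame only if some rule matches its (S_ID, D_ID, LUN)."""
    return any(hdr.s_id == s and hdr.d_id == d and hdr.lun in luns
               for s, d, luns in FILTER_RULES)

print(frame_permitted(FrameHeader(0x0A1B01, 0x0A1C02, 1)))  # True
print(frame_permitted(FrameHeader(0x0A1B03, 0x0A1C02, 0)))  # False
```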
SAN – Trunking
- Trunking: grouping of ISLs into a trunk, i.e., a single logical link.
- Useful for load sharing in the presence of zoning, i.e., zoning need not restrict ISL usage.
- Supports in-order end-to-end delivery; re-ordering is done by the switch as required.
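One simple way to share load across a trunk while keeping the frames of a flow in order is to hash each (source, destination) pair onto one ISL; real trunking hardware may instead stripe frames across ISLs and re-order at the receiving switch, as noted above. The trunk below is hypothetical:

```python
# Hypothetical trunk of three ISLs treated as one logical link.
TRUNK_ISLS = ["isl_0", "isl_1", "isl_2"]

def pick_isl(s_id: int, d_id: int) -> str:
    """Hash the (source, destination) pair so all frames of one flow take the
    same ISL, preserving order while still spreading different flows."""
    return TRUNK_ISLS[hash((s_id, d_id)) % len(TRUNK_ISLS)]

# Frames of the same flow always map to the same ISL.
print(pick_isl(0x0A1B01, 0x0A1C02) == pick_isl(0x0A1B01, 0x0A1C02))  # True
```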
SAN – Multipathing
- Multipathing provides multiple paths between a host and a device (LUN): redundancy for improved reliability, and/or higher bandwidth for improved availability/performance.
- Multipathing is handled at the software level by the channel subsystem of the OS kernel.
- Usually a separate device driver is used, with the following capabilities:
  - Enhanced data availability: automatic path failover and recovery to an alternative path.
  - Dynamic load balancing: path selection policies.
- Failures handled: device bus adapters, external SCSI cables, fibre connection cables, host interface adapters.
- Additional software is needed to ensure that the host sees a single device.
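A minimal sketch of the two driver capabilities named above, round-robin path selection and automatic failover. The path names and the send() callable (which is assumed to raise PathError when a path component fails) are hypothetical:

```python
class PathError(Exception):
    """Raised by the (hypothetical) transport when a path component fails."""

class MultipathDevice:
    def __init__(self, paths):
        self.paths = list(paths)     # e.g. ["hba0:port3", "hba1:port7"]
        self.next = 0

    def submit(self, io, send):
        """Try each path in round-robin order; fail over on a path error."""
        for _ in range(len(self.paths)):
            path = self.paths[self.next]
            self.next = (self.next + 1) % len(self.paths)   # dynamic load balancing
            try:
                return send(path, io)                       # success on this path
            except PathError:
                continue                                    # automatic failover
        raise PathError("all paths to the LUN have failed")

dev = MultipathDevice(["hba0:port3", "hba1:port7"])
print(dev.submit("read block 42", lambda path, io: f"{io} via {path}"))
```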
SAN - LUN Masking
- Zoning imposes some logical traffic isolation as well as some access control over devices.
- Alternative: LUN masking.
  - A storage-device control program (part of the switch OS) maintains access lists for the storage device, one list per LUN.
  - When a host requires access, it requests access to a LUN, and the device control program verifies the list before granting access.
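A minimal sketch of the per-LUN access check described above; the WWN values and LUN numbers are invented for the example:

```python
# Hypothetical per-LUN access lists for one storage device (LUN masking).
LUN_ACCESS = {
    0: {"10:00:00:00:c9:2a:11:01"},                            # LUN 0: one host
    1: {"10:00:00:00:c9:2a:11:01", "10:00:00:00:c9:2a:11:02"}, # LUN 1: two hosts
}

def grant_access(host_wwn: str, lun: int) -> bool:
    """The device control program checks the LUN's access list before granting access."""
    return host_wwn in LUN_ACCESS.get(lun, set())

print(grant_access("10:00:00:00:c9:2a:11:02", 0))  # False: LUN 0 is masked for this host
print(grant_access("10:00:00:00:c9:2a:11:02", 1))  # True
```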
Storage Virtualization
- Integration of back-end devices and functions with front-end functionality to provide certain abstractions.
- Different levels:
  - Device level: physical devices are collected and presented as different virtual devices (e.g., partitions, RAID array controllers).
  - File-system level: block storage devices are presented as file systems.
  - Fabric level: virtual devices and collections are aggregated and presented as storage groups with high-level access control (e.g., zoning).
  - Server level: servers interpret the available storage as different units as per the requirement (logical volume management at the host OS level).
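A minimal sketch of server-level virtualization (logical volume management): a logical volume concatenating extents from several physical devices and translating logical block numbers to physical ones. The device names and extent sizes are assumptions for the example:

```python
class LogicalVolume:
    def __init__(self, extents):
        # extents: list of (device_name, start_block, length_in_blocks)
        self.extents = extents

    def map_block(self, logical_block: int):
        """Translate a logical block number into (device, physical block)."""
        offset = logical_block
        for device, start, length in self.extents:
            if offset < length:
                return device, start + offset
            offset -= length
        raise ValueError("logical block beyond the end of the volume")

# Hypothetical volume: 1000 blocks from sda followed by 2000 blocks from sdb.
lv = LogicalVolume([("sda", 0, 1000), ("sdb", 500, 2000)])
print(lv.map_block(999))   # ('sda', 999)
print(lv.map_block(1000))  # ('sdb', 500): crosses into the second extent
```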
Storage Virtualization
- In-band implementation:
  - Data and control flow through the same lines.
  - Easy to implement; presents a homogeneous environment (even with heterogeneous devices); scalable.
- Out-of-band implementation:
  - Control flows through separate lines; separate server(s) maintain the metadata (mapping tables, locking tables, access control).
  - The metadata server is known as the metadata controller; hosts must authenticate to it.
  - Add-on flexibility, e.g., adding a file server / file system in a SAN environment.
  - High bandwidth remains available for data traffic.
Emerging Protocols
- iSCSI
- iFCP
- FCIP – FC tunnelling