XenServer Storage Integration Deep Dive
Agenda
- XenServer 5.5 Storage Architecture
- Multipathing
- Vendor Integration
- StorageLink
Citrix XenServer & Essentials 5.5 Family
- Platinum Edition (Essentials): Stage Management (NEW), Lab Management, Provisioning Services (physical + virtual)
- Enterprise Edition (Essentials): Provisioning Services (virtual), Workload Balancing (NEW), StorageLink (NEW), High Availability, Performance Monitoring, Workflow Studio Orchestration, Live Migration (XenMotion), Active Directory Integration (NEW)
- Free Edition: XenServer and XenCenter management, 64-bit, Windows and Linux workloads, generic storage snapshotting, shared storage (iSCSI, FC, NFS), no socket restriction
XenServer 5.5 Storage Architecture
Expanded Backup Support
Storage Technologies XenServer 5.0 / 5.5 (diagram)
- NFS / EXT3 (XenServer 5.0 and 5.5): Storage Repository -> filesystem -> .VHD file as the VM virtual disk
- iSCSI / FC (XenServer 5.0): Storage Repository -> LUN -> LVM Volume Group -> LVM Logical Volume used directly as the VM virtual disk
- iSCSI / FC (XenServer 5.5): Storage Repository -> LUN -> LVM Volume Group -> LVM Logical Volume carrying a VHD header, i.e. a VHD-formatted VM virtual disk
LVHD in XenServer 5.5
- Replaces LVM for SRs: hosts VHD files directly on LVM volumes
- Best of both worlds: the features of VHD with the performance of LVM
- Adds advanced storage features: fast cloning, snapshots
- Fast and simple upgrade, backwards compatible
LVM does not offer features like fast cloning and thin provisioning; EXT3 does, but suffers from performance issues because of ext3's handling of AIO+DIO. LVHD addresses this by hosting VHD files directly on LVM volumes. Backwards compatible: LVHD can handle raw VDIs previously created by an LVM SR.
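A quick way to see which SR types a host uses, and therefore which virtual disks are LVHD-based, is the xe CLI. A minimal sketch, run in the control domain of a XenServer 5.5 host; the SR UUID is a placeholder:

# List all storage repositories with their driver type; "lvm", "lvmoiscsi"
# and "lvmohba" SRs host VHD-formatted VDIs on LVM volumes (LVHD) in 5.5,
# while "ext" and "nfs" SRs store .vhd files on a filesystem.
xe sr-list params=uuid,name-label,type,content-type

# Inspect the logical volumes backing one LVM-based SR.
lvs VG_XenStorage-<sr-uuid>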
Multipathing
Why use Multipathing?
- Path redundancy to storage
- Performance increase through load-sharing algorithms
- Many Fibre Channel environments have multiple paths by default
(Diagram: XenServer with FC HBA 1 and FC HBA 2, connected over redundant FC switches to storage controllers 1 and 2 of the storage subsystem, both presenting the same LUN 1.)
Enabling Multipathing
xe host-param-set other-config:multipathing=true uuid=host_uuid
xe host-param-set other-config:multipathhandle=dmp uuid=host_uuid
Note: Enable multipathing only via XenCenter or the commands above; do not enable it by other means (e.g. by configuring the Linux multipath tools in Dom-0 directly)!
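In a resource pool the two commands have to be repeated per host. A minimal sketch; the host-disable/host-enable calls stand in for putting each host into maintenance mode first and are an addition, not part of the original slide:

# Enable DMP multipathing on every host in the pool.
for uuid in $(xe host-list --minimal | tr ',' ' '); do
    xe host-disable uuid=$uuid                                   # no new VMs while reconfiguring
    xe host-param-set other-config:multipathing=true uuid=$uuid
    xe host-param-set other-config:multipathhandle=dmp uuid=$uuid
    xe host-enable uuid=$uuid
done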
XenServer supports two multipathing technologies:
                    DMP (Device Mapper Multipathing)   RDAC MPP (mppVhba)
Default             yes                                no
XenServer version   >= 5.0                             4.1 (CLI); >= 5.0 Update 2 (management by XenCenter)
Storage support     wide storage range                 only LSI controller based storage
Driver / daemon     multipathd                         mppVhba driver
CLI path check      multipath -ll                      mpputil
Configuration       /etc/multipath-enabled.conf        /etc/mpp.conf (requires execution of /opt/xensource/bin/update-initrd)
See details: http://support.citrix.com/article/ctx118791
DMP vs RDAC MPP
- Check if RDAC MPP is running: lsmod | grep mppVhba
- multipath -ll shows MD devices as output if DMP is active
- Use only one technology: when RDAC MPP is running, use it; otherwise use DMP (as sketched below)
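A minimal sketch of this check, run in the XenServer control domain:

# Determine which multipathing technology is active on this host.
if lsmod | grep -q mppVhba; then
    echo "RDAC MPP (mppVhba) is loaded: keep using MPP, check paths with mpputil"
    mpputil
else
    echo "mppVhba not loaded: DMP applies, check paths with multipath -ll"
    multipath -ll
fi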
MPP RDAC: path check with mpputil
(Sample mpputil output: for each LUN it lists the WWN, the LunObject and DevState, e.g. OPTIMAL, and the per-path device state for the Controller 'A' and Controller 'B' paths.)
DMP: path check
- Monitoring using XenCenter
- Monitoring using the CLI, command: multipath -ll
Multipathing & Software iSCSI
iSCSI with Software Initiator
- IP addressing is handled by the XenServer Dom-0
- Multipathing is also handled by the XenServer Dom-0
- The Dom-0 IP configuration is therefore essential
(Diagram: XenServer Dom-0 with its own IP addresses on the NICs, connected over switches to the storage LAN controllers of the storage subsystem, which present LUN 1.)
Best practice configuration: iSCSI storage with multipathing
- Separation of subnets also on the IP level (netmask 255.255.255.0 each)
- Subnet 1: XenServer NIC 1 (192.168.1.10) reaches storage LAN adapter 1 port 1 (192.168.1.201) and storage LAN adapter 2 port 1 (192.168.1.202)
- Subnet 2: XenServer NIC 2 (192.168.2.10) reaches storage LAN adapter 1 port 2 (192.168.2.201) and storage LAN adapter 2 port 2 (192.168.2.202)
- Both paths run over separate switches to the same LUN 1 on the storage subsystem
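The XenServer side of this layout can be configured with the xe CLI. A minimal sketch, assuming NIC 1 and NIC 2 are the PIFs dedicated to storage traffic; the PIF UUIDs are placeholders:

# Give each storage NIC a static IP in its own subnet; no bonding,
# multipathing provides the redundancy.
xe pif-reconfigure-ip uuid=<pif-uuid-of-NIC1> mode=static \
    IP=192.168.1.10 netmask=255.255.255.0
xe pif-reconfigure-ip uuid=<pif-uuid-of-NIC2> mode=static \
    IP=192.168.2.10 netmask=255.255.255.0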
Not recommended configurations for multipathing and iSCSI:
- Both server NICs in the same subnet (e.g. NIC 1 with 192.168.1.10 and NIC 2 with 192.168.1.11, both in subnet 1)
- Mixing NIC teaming/bonding with multipathing (e.g. NIC 1 and NIC 2 teamed to a single IP 192.168.1.10)
Multipathing with the software initiator on XenServer 5
XenServer 5 supports multipathing with the iSCSI software initiator. Prerequisites:
- The iSCSI target uses the same IQN on all ports
- The iSCSI target ports operate in portal mode
Multipathing reliability has been improved significantly in XenServer 5.5.
How to check if an iSCSI target operates in portal mode?
Execute: iscsiadm -m discovery --type sendtargets --portal <IP address of one target>
The output must show all IPs of the target ports with an identical IQN. Example:
192.168.0.161:3260,1 iqn.strawberry:litchie
192.168.0.204:3260,2 iqn.strawberry:litchie
When connecting to the iSCSI target using the XenCenter Storage Repository wizard, all target IPs should likewise show up after discovery.
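Building on the iscsiadm call above, a small sketch that condenses the check; the portal IP is the example address from the slide:

# Portal-mode check: list the distinct IQNs returned by discovery.
iscsiadm -m discovery --type sendtargets --portal 192.168.0.161 \
    | awk '{print $2}' | sort -u
# Exactly one IQN here, together with all target port IPs in the full
# discovery output, indicates the target operates in portal mode.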
NetApp Integration
NetApp Storage
- NetApp storage supports multipathing; for configuring NetApp storage and the required modification of multipath.conf, see the whitepaper http://support.citrix.com/article/CTX118842
- NetApp typically supports portal mode, so multipathing with the iSCSI software initiator is supported
- Especially for low-end NetApp storage (e.g. FAS2020) with a limited number of LAN adapters, special considerations apply
NetApp low-end storage (iSCSI)
- Often limited by the NIC configuration; example: 2 NICs per head
- 1 aggregate / LUN is served by 1 head at a time (the other head provides fault tolerance)
- Thus effectively 2 NICs can be used for the storage connection
- Typically the filer also serves non-block protocols (e.g. CIFS), which require redundancy just like the block protocols (e.g. iSCSI)
Example FAS2020, scenario 1: no network redundancy for iSCSI and CIFS, separation of networks
(Diagram: controller 0 (active) and controller 1 (fault tolerance) each connect NIC 0 to the CIFS network and NIC 1 to the iSCSI network; no NIC is redundant.)
Example FAS2020, scenario 2: network redundancy for iSCSI and CIFS, no separation of networks
(Diagram: controller 0 (active) and controller 1 (fault tolerance) each bond NIC 0 and NIC 1 into a vif; the server side likewise uses a NIC bond; CIFS and iSCSI share one combined network.)
Example FAS2020, scenario 3: network redundancy for iSCSI (multipathing) and CIFS, separation of networks via VLANs
(Diagram: the active controller bonds NIC 0 and NIC 1 into a vif carrying both a CIFS VLAN and an iSCSI VLAN; controller 1 (fault tolerance) uses the same configuration. On the server side a NIC bond serves the CIFS VLAN, while NIC 2 and NIC 3 connect to the iSCSI VLANs using multipathing.)
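Whichever scenario is chosen on the network side, the NetApp SR itself is created through the NetApp adapter. A minimal sketch with the xe CLI; the device-config keys and all values below (filer address, credentials, aggregate name) are placeholders/assumptions and should be checked against the whitepaper cited above:

# Create a NetApp SR against the active filer head.
xe sr-create host-uuid=<host-uuid> content-type=user \
    name-label="NetApp FAS2020 SR" type=netapp \
    device-config:target=<filer-ip> \
    device-config:username=root \
    device-config:password=<password> \
    device-config:aggregate=aggr1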
Dell / EqualLogic Integration
Dell EqualLogic Support
- XenServer 5.5 includes an EqualLogic storage adapter (minimum array firmware 4.0.1 required)
- Redundant path configuration does not depend on whether the adapter is used
- All PS Series arrays are supported, as they all run the same array OS
- StorageLink Gateway support is planned
Dell / EqualLogic
- See the whitepaper for Dell EqualLogic storage: http://support.citrix.com/article/CTX118841
- Each EqualLogic array has two controllers, of which only one is active
- The storage side uses a "Group ID" address (similar to bonding/teaming on the server side); connections are only possible via the group address, not directly to the individual iSCSI ports
- Therefore multipathing cannot be used; use NIC bonding on the XenServer side instead (see the sketch below)
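Since multipathing is not available, the XenServer-side redundancy for EqualLogic comes from a NIC bond. A minimal sketch with the xe CLI; the PIF UUIDs and addressing are placeholders:

# Create a bonded storage network from the two iSCSI NICs and give it an IP
# in the same subnet as the EqualLogic group address.
net=$(xe network-create name-label="iSCSI bond network")
xe bond-create network-uuid=$net pif-uuids=<pif-uuid-NIC1>,<pif-uuid-NIC2>
bondpif=$(xe pif-list network-uuid=$net --minimal)
xe pif-reconfigure-ip uuid=$bondpif mode=static \
    IP=<server-ip> netmask=255.255.255.0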
DataCore Integration
Multipathing architecture with DataCore
- Different IQNs for the targets, so no portal mode is possible!
(Diagram, netmask 255.255.255.0: XenServer NIC 1 (192.168.1.10, subnet 1) connects to storage controller 1 port 1 (192.168.1.201, IQN 1); NIC 2 (192.168.2.10, subnet 2) connects to storage controller 2 port 2 (192.168.2.202, IQN 2); both controllers present LUN 1 over separate switches.)
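A quick way to see this behaviour, using the addresses from the diagram above: run a sendtargets discovery against each DataCore controller separately.

# Each controller answers with its own IQN and only its own portal,
# unlike a portal-mode target (compare the portal-mode check earlier).
iscsiadm -m discovery --type sendtargets --portal 192.168.1.201
iscsiadm -m discovery --type sendtargets --portal 192.168.2.202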
DataCore hints
- Special attention is needed for software iSCSI; follow DataCore technical bulletin TB15: ftp://support.datacore.com/psp/tech_bulletins/TechBulletinsAll/TB15b_Citrix%20XenServer_config_501.pdf
- DataCore in a VM: OK when not using HA; the configuration is possible, but take care about booting the whole environment and take care when updating XenServer
StorageLink
StorageLink is the logical advancement of the XenServer-integrated storage adapters (the NetApp & EqualLogic storage adapters).
Citrix StorageLink Overview (XenServer)
(Diagram: the guest's data path runs from XenServer over iSCSI / FC directly into the SAN / storage cloud; StorageLink, delivered as a snap-in for XenServer, sits only on the control path.)
One point worth mentioning: StorageLink does not inject any agents or drivers into the control path; control is provided either through support for the native API of the storage or through SMI-S. Also, once StorageLink has created the storage and wired it to the control domain (or the parent partition for Hyper-V), StorageLink gets out of the way and lets the VM talk to that storage LUN as it always would. For XenServer, the storage allocated by StorageLink appears as a regular XenServer Storage Repository, from which guest virtual machines can create their virtual disks.
Leveraging the best of virtualization and storage
- StorageLink as the basis for Citrix Essentials
- Exposes storage vendor functionality: quick provisioning, snapshots, quick cloning, thin provisioning, deduplication, backup and restore capabilities
We've had a definite approach to the marriage of storage and server virtualization ever since the addition of the NetApp SR in XenServer 4.1. With this release, we take this philosophy to the next level. You buy intelligent storage for its powerful thin provisioning, snapshot, replication and deduplication capabilities, as well as the simplification of backup/restore and disaster recovery. We think virtualization infrastructure should let servers and hypervisors do smart computing, let arrays and switches do smart storage management, and let a single set of interfaces manage them together.
Virtual Storage Manager (VSM) / StorageLink Overview
(Diagram: on XenServer the VSM bridge runs in Dom0, on Hyper-V integration goes through VDS in the parent partition; the Virtual Storage Manager carries the control path to the SAN/NAS arrays, e.g. NetApp and EqualLogic, while the data path runs directly from XenServer or Hyper-V to the storage.)
StorageLink Gateway Overview
At the heart of the new StorageLink is the StorageLink Gateway, a platform that integrates management of the control path for multiple types of storage: from different vendors, via different connection types, and even across different hypervisors (the diagram shows XenServer with Dell storage as an example). SMI-S is the preferred method of integration as it requires no custom development work; storage with providers for the standardized SMI-S interfaces can plug right into StorageLink, while custom adapters can be built by vendors for non-SMI-S devices. With StorageLink, rather than implementing that device-specific interface on every server, it only needs to live in one place, on the StorageLink Gateway, and it can be leveraged by every server running the StorageLink Bridge SR, even in multiple resource pools. Vendor-specific VSM storage adapters run in separate processes.
Storage Technologies + XenServer 5.5 (diagram)
- LVHD (XenServer 5.5, iSCSI / FC): Storage Repository -> LUN -> LVM Volume Group -> LVM Logical Volume with VHD header as the VM virtual disk
- With StorageLink: a Storage Repository in which a VM virtual disk can be backed by its own LUN on the array (LUN-per-VDI)
Snapshot types
                   XenServer (free)            Essentials Enterprise
Snapshot type      software-based snapshot     hardware-based snapshot (software-based snapshot also possible when not using StorageLink)
LUN access model   LVM (1 LUN = x VDIs)        LUN-per-VDI (1 LUN = 1 VDI)
Performance        good                        superior
Utilization        on the XenServer host       on the storage subsystem
Overhead           low                         lowest
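Regardless of whether the snapshot ends up software- or hardware-based, it is triggered through the same XenServer interface. A minimal sketch with the xe CLI; the VM UUID and snapshot name are placeholders:

# Snapshot a VM; on a StorageLink SR the adapter can realise this as a
# hardware snapshot on the array, otherwise a VHD software snapshot is taken.
xe vm-snapshot uuid=<vm-uuid> new-name-label="before-patching"

# List the snapshots in the pool (snapshots appear as read-only VM records).
xe vm-list is-a-snapshot=true params=uuid,name-label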
StorageLink: Microsoft look-and-feel
Essentials: example usage scenario: effective creation of VMs from a template
- Manual approach (huge effort), for each VM clone: 1. copy of the LUN, 2. modification of zoning, 3. creation of the VM, 4. assignment of the LUN to the VM; cloning a template three times means repeating all four steps three times
- With Essentials/StorageLink: a few mouse clicks; fast cloning using storage snapshots; fully automated storage and SAN configuration for FC and iSCSI LUNs (see the sketch below)
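On the CLI the "few mouse clicks" correspond to one provisioning call per clone. A minimal sketch; the template and VM names are placeholders:

# Create three VMs from one template; on a StorageLink SR the clone is
# realised as a fast storage-level snapshot/clone instead of a full LUN copy.
for i in 1 2 3; do
    xe vm-install template="VM Template" new-name-label="clone-$i"
done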
StorageLink: Supported Storages StorageLink HCL http://hcl.vmd.citrix.com/SLG-HCLHome.aspx