XenServer Storage Integration Deep Dive
Agenda

XenServer 5.5 Storage Architecture
Multipathing
Vendor Integration
StorageLink
Citrix XenServer & Essentials 5.5 Family

Platinum Edition: Stage Management (NEW), Lab Management, Provisioning Services (physical + virtual)
Enterprise Edition: Provisioning Services (virtual), Workload Balancing (NEW), StorageLink (NEW), High Availability, Performance Monitoring, Workflow Studio Orchestration, Live Migration (XenMotion), Active Directory Integration
Free Edition: 64-bit; Windows and Linux workloads; generic storage snapshotting; XenServer with XenCenter management; shared storage (iSCSI, FC, NFS); no socket restriction
The Enterprise and Platinum editions make up the Essentials family.
XenServer 5.5 Storage Architecture
Expanded Backup Support
Storage Technologies

NFS / EXT3 (XenServer 5.0 / 5.5): Storage Repository → filesystem → one .VHD file per VM virtual disk
iSCSI / FC (XenServer 5.0): Storage Repository → LUN → LVM volume group → one LVM logical volume per VM virtual disk (raw)
iSCSI / FC (XenServer 5.5): Storage Repository → LUN → LVM volume group → one LVM logical volume per VM virtual disk, each with a VHD header
LVHD

XenServer 5.5 introduces LVHD, which replaces LVM for SRs and hosts VHD files directly on LVM volumes. This gives the best of both worlds: the features of VHD with the performance of LVM. It adds advanced storage features (fast cloning, snapshots) and offers a fast, simple upgrade that is backwards compatible. Plain LVM does not offer features like fast cloning and thin provisioning; EXT does, but has performance issues because of ext3's handling of AIO+DIO. LVHD addresses this by hosting the VHD files directly on LVM volumes. Backwards compatible means LVHD can handle raw VDIs previously created by an LVM SR. An SR-creation sketch follows below.
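For orientation, a minimal sketch of creating an iSCSI SR on XenServer 5.5, which is provisioned in the LVHD format described above; the UUID, IP address, IQN and SCSI ID are placeholders, not values from this deck:

xe sr-create host-uuid=<host_uuid> content-type=user shared=true \
    name-label="iSCSI SR (LVHD)" type=lvmoiscsi \
    device-config:target=<storage_ip> \
    device-config:targetIQN=<target_iqn> \
    device-config:SCSIid=<scsi_id>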
Multipathing
Why use Multipathing?

Path redundancy to storage, and a performance increase through load-sharing algorithms. Many Fibre Channel environments have multiple paths by default.

[Diagram: XenServer with FC HBA 1 and FC HBA 2, connected through FC switches to storage controller 1 and storage controller 2 of the storage subsystem, all paths leading to LUN 1.]
Enabling Multipathing

xe host-param-set other-config:multipathing=true uuid=host_uuid
xe host-param-set other-config:multipathhandle=dmp uuid=host_uuid

Note: do not enable multipathing by any means other than shown here! A verification sketch follows below.
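To verify that the settings took effect, the matching read-back commands can be used (a minimal sketch; host_uuid is a placeholder as above):

xe host-param-get uuid=host_uuid param-name=other-config param-key=multipathing
xe host-param-get uuid=host_uuid param-name=other-config param-key=multipathhandle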
XenServer supports 2 multipathing technologies

                     Device Mapper Multipathing (DMP)   RDAC MPP (mppVhba)
Default              yes                                no
XenServer version    >=                                 >= 5.0 Update 2
Management by        XenCenter                          CLI
Support              wide storage range                 only LSI-controller-based storage
Driver / daemon      multipathd                         mppVhba driver
CLI path check       multipath -ll                      mpputil
Configuration        /etc/multipath-enabled.conf        /etc/mpp.conf (requires running /opt/xensource/bin/update-initrd)
DMP vs RDAC MPP

Check whether RDAC MPP is running: lsmod | grep mppVhba. (If DMP is active, "multipath -ll" shows the MD devices in its output.) Use only one technology: when RDAC MPP is running, use it; otherwise use DMP. A combined check sketch follows below.
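A minimal sketch combining the two checks (the logic follows the slide; the echo text is my addition):

if lsmod | grep -q mppVhba; then
    echo "RDAC MPP driver loaded: use MPP and check paths with mpputil"
else
    # DMP case: multipath -ll lists the device-mapper path state
    multipath -ll
fi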
MPP RDAC: path check with mpputil

Lun #0 - WWN: 600a0b80001fdf…d9c49b0caa1
LunObject: present   DevState: OPTIMAL
Controller 'A' Path
  Path #1: LunPathDevice: present   DevState: OPTIMAL
  Path #2: LunPathDevice: present
Controller 'B' Path
  …
DMP: path check

Monitoring using XenCenter, or using the CLI command:
multipath -ll
Multipathing & Software iSCSI
iSCSI with Software Initiator

IP addressing is done by XenServer Dom-0, and multipathing is likewise done by XenServer Dom-0, so the Dom-0 IP configuration is essential.

[Diagram: XenServer Dom-0 with IP-configured NICs, connected through switches to the controllers of the storage subsystem, each path leading to LUN 1.]
Best practice configuration: iSCSI storage with multipathing

Separate the two paths into different subnets, also at the IP level: XenServer NIC 1 sits in subnet 1 together with port 1 of each storage LAN adapter, and NIC 2 sits in subnet 2 together with port 2 of each storage LAN adapter; both paths lead to LUN 1 on the storage subsystem. (A Dom-0 addressing sketch follows below.)
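Assigning the Dom-0 storage NICs to their subnets can be done with pif-reconfigure-ip; a minimal sketch (device names, UUIDs and addresses are placeholders):

# find the PIF of the first storage NIC
xe pif-list device=eth1
# give it a static address in storage subnet 1 (repeat for eth2 / subnet 2)
xe pif-reconfigure-ip uuid=<pif_uuid> mode=static IP=<address_in_subnet_1> netmask=<netmask>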
Not recommended configurations for multipathing and iSCSI:

Both server NICs in the same subnet.
Mixing NIC teaming (bonding) with multipathing.
Multipathing with Software Initiator (XenServer 5)

XenServer 5 supports multipathing with the iSCSI software initiator. Prerequisites: the iSCSI target uses the same IQN on all ports, and the target ports operate in portal mode. Multipathing reliability has been enhanced massively in XenServer 5.5.
How to check whether an iSCSI target operates in portal mode?

Execute:
iscsiadm -m discovery --type sendtargets --portal <IP address of one target port>

The output must show all IPs of the target ports with an identical IQN, for example:
<target IP 1>:3260,1 iqn.strawberry:litchie
<target IP 2>:3260,2 iqn.strawberry:litchie

When connecting to the iSCSI target using the XenCenter Storage Repository wizard, all target IPs should likewise show up after discovery. (A one-line check sketch follows below.)
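As a convenience, the discovery output can be reduced to its distinct IQNs; exactly one line of output then means every portal reports the same IQN. (The awk/sort pipeline is my addition, not from the deck.)

iscsiadm -m discovery --type sendtargets --portal <target_ip> | awk '{print $2}' | sort -u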
NetApp Integration
NetApp Storage

NetApp storage supports multipathing. For configuring NetApp storage and the required modification of multipath.conf, see the whitepaper. NetApp typically supports portal mode, so multipathing with the iSCSI software initiator is supported. For low-end NetApp storage (e.g. FAS2020) with limited LAN adapters, special considerations apply.
NetApp low-end storage (iSCSI)

Often limited by the NIC configuration. Example: 2 NICs per head, and an aggregate/LUN is presented by one head at a time (the other head provides fault tolerance), so effectively 2 NICs can be used for the storage connection. Typically the filer also delivers non-block-based protocols (e.g. CIFS), which require redundancy just as the block-based protocols (e.g. iSCSI) do.
Example FAS2020, scenario 1: no network redundancy for iSCSI and CIFS; separation of networks

[Diagram: Controller 0 (active) and Controller 1 (fault tolerance) each attach NIC 0 to the CIFS network and NIC 1 to the iSCSI network.]
Example FAS2020, scenario 2: network redundancy for iSCSI and CIFS; no separation of networks

[Diagram: on each controller, NIC 0 and NIC 1 form a vif/bond into the shared CIFS & iSCSI network; the XenServer side uses a NIC bond as well.]
Example FAS2020, scenario 3: network redundancy for iSCSI (multipathing) and CIFS; separation of networks

[Diagram: on the active controller, NIC 0 and NIC 1 carry a vif/bond with a CIFS VLAN plus separate iSCSI VLANs; Controller 1 (fault tolerance) has the same configuration. On the XenServer side, a NIC bond serves the CIFS VLAN while NIC 2 and NIC 3 provide iSCSI multipathing.]
Dell / EqualLogic Integration
Dell EqualLogic Support

XenServer 5.5 includes an adapter (a minimum firmware level is required). The redundant path configuration does not depend on whether the adapter is used. All PS-series arrays are supported, since they run the same OS. StorageLink Gateway support is planned.
Dell / EqualLogic

See the whitepaper for Dell/EqualLogic storage. Each EqualLogic array has two controllers, of which only one is active at a time. The array uses a "Group ID" address on the storage side (similar to bonding/teaming on the server side); connections are only possible via the group address, not directly to the individual iSCSI ports. Therefore multipathing cannot be used; use NIC bonding on the XenServer side instead (a sketch follows below).
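A minimal sketch of creating that bond in Dom-0 (the network label and UUIDs are placeholders; network-create and bond-create are the standard xe commands):

# create a network for the bond, then bond two storage PIFs into it
xe network-create name-label=storage-bond
xe bond-create network-uuid=<network_uuid> pif-uuids=<pif1_uuid>,<pif2_uuid>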
DataCore Integration
Multipathing architecture with DataCore

DataCore presents a different IQN per target, so no portal mode is possible! Separate the paths by subnet: XenServer NIC 1 in subnet 1 reaches storage controller 1 (IQN 1), and NIC 2 in subnet 2 reaches storage controller 2 (IQN 2); both controllers present LUN 1.
DataCore hints

Software iSCSI needs special attention; follow DataCore technical bulletin TB15: ftp://support.datacore.com/psp/tech_bulletins/TechBulletinsAll/TB15b_Citrix%20XenServer_config_501.pdf
DataCore in a VM is OK when not using HA; the configuration is possible, but take care about booting the whole environment, and take care when updating XenServer.
StorageLink
Logical advancement of XenServer's integrated storage adapters

StorageLink builds on the NetApp & EqualLogic storage adapters.
Citrix StorageLink Overview (XenServer)

One point worth mentioning: StorageLink does not inject any agents or drivers into the control path; control is provided either through support for the native API of the storage or through SMI-S. Also, once StorageLink has created the storage and wired it to the control domain (or parent partition for Hyper-V), StorageLink gets out of the way and lets the VM talk to that storage LUN as it always would. For XenServer, the storage allocated by StorageLink appears as a regular XenServer Storage Repository, from which guest virtual machines can create their virtual disks. StorageLink ships as a snap-in for XenServer.
Leveraging the best of virtualization and storage

StorageLink is the basis for Citrix Essentials and exposes storage-vendor functionality: quick provisioning, snapshots, quick cloning, thin provisioning, deduplication, and backup and restore capabilities.

We have had a definite approach to the marriage of storage and server virtualization ever since the addition of the NetApp SR in XenServer 4.1; this release takes that philosophy to the next level. You buy intelligent storage for its powerful thin provisioning, snapshot, replication and deduplication capabilities, as well as for the simplification of backup/restore and disaster recovery. We think virtualization infrastructure should let servers and hypervisors do smart computing, let arrays and switches do smart storage management, and let a single set of interfaces manage them together.
Virtual Storage Manager

StorageLink overview: the Virtual Storage Manager (VSM) handles the control path to the SAN/NAS (NetApp, EqualLogic). On XenServer, a VSM bridge runs in Dom0; on Hyper-V, VDS runs in the parent partition. The data path goes directly from each hypervisor to the storage.
StorageLink Gateway Overview

SMI-S is the preferred method of integration, as it requires no custom development work. At the heart of the new StorageLink is the StorageLink Gateway, a platform that integrates management of the control path for multiple types of storage: from different vendors, via different connection types, and even across different hypervisors. Storage with providers for the standardized SMI-S interfaces can plug right into StorageLink; custom adapters can be built by vendors for non-SMI-S devices. With StorageLink, rather than implementing a device-specific interface on every server, it only needs to live in one place, on the StorageLink Gateway, where it can be leveraged by every server running the StorageLink Bridge SR, even in multiple resource pools. Vendor-specific VSM storage adapters run in separate processes.
Storage Technologies + StorageLink (XenServer 5.5)

[Diagram: alongside the XenServer 5.5 LVHD stack (Storage Repository → LUN → LVM volume group → LVM logical volumes with VHD headers), StorageLink maps each VM virtual disk directly onto its own LUN.]
Snapshot types

                     XenServer (free)             Essentials Enterprise
Snapshot type        software-based snapshot      hardware-based snapshot (software-based also possible when not using StorageLink)
LUN access model     LVM (1 LUN = x VDIs)         LUN-per-VDI (1 LUN = 1 VDI)
Performance          good                         superior
Utilization          on the XenServer host        on the storage subsystem
Overhead             low                          lowest

A CLI example follows below.
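Whichever backend provides the snapshot, the standard CLI entry point on XenServer 5.5 is vm-snapshot; a minimal example (both names are placeholders):

xe vm-snapshot vm=<vm_name_label> new-name-label=<snapshot_name>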
StorageLink: Microsoft look-and-feel
Essentials example usage scenario: effective creation of VMs from a template

Cloning a VM from a template manually is a huge effort, repeated for every clone: 1. copy the LUN; 2. modify the zoning; 3. create the VM; 4. assign the LUN to the VM. With Essentials it takes a few mouse clicks: fast cloning using storage snapshots, with fully automated storage and SAN configuration for FC and iSCSI LUNs. A CLI sketch follows below.
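The template-based creation is also scriptable on the XenServer side; a minimal sketch (names are placeholders, and whether the clone is backed by a storage snapshot depends on the SR type):

# create a new VM from a template, then start it
xe vm-install template=<template_name_label> new-name-label=<new_vm_name>
xe vm-start vm=<new_vm_name>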
StorageLink: Supported Storage

See the StorageLink hardware compatibility list (HCL).