MODULE 9: SCALING THE ENVIRONMENT
Agenda
- CP storage in a production environment
- Understanding IO by tier
- Designing for multiple CPs
- Storage sizing (lab)
- Cluster design considerations
UNDERSTANDING DISK IO AND UNIDESK STORAGE TIERS
Unidesk Storage Architecture
Boot Volume Tier
- The VM and its associated files are created/located here (.vmx, vswap, etc.)
- The VMDK is comprised of three things:
  - The Windows page file for the desktop
  - A composited (layered) registry
  - The basic Windows boot files needed before the file system loads
- Very little IO is associated with the boot volume:
  - The first few seconds of boot
  - Windows paging
CP and Layer Tier
- Layers are stored in a directory structure underneath the CachePoint
- Layers are stored as VMDKs:
  - OS and app layers are stored as independent non-persistent disks
  - Personalization layers are stored as independent persistent disks
- Highest IO volume of the three tiers
Archive Tier
- Used to store personalization layer backups
- Backups are configured per persistent desktop:
  - The backup schedule and frequency can be unique to each desktop
  - Not every desktop on a CachePoint needs to be backed up
- Very little IO, as it is simply a storage area
  - Slower drives such as SATA or lower-speed SAS are typically used
Tested Desktop Configuration
- Windows 7 Professional 32-bit
- Office 2010 Pro Plus
- VMware vCenter Client
- Google Chrome and IE8 for browsing
- Tweetdeck
- Skype
- Adobe Reader and Flash
- Single vCPU
- 2 GB of RAM (later changed to 1 GB to show the increase in IO due to paging)
IOPS Analysis by Tier
IO Analysis by Tier
IOPS by Tier Summary
- Testing shows that the majority of IO happens at the CachePoint/Layer datastores
  - Even during a BIC, most IO happens at the layering volumes
- Use high-performance disk for the CachePoint/Layer datastores and less expensive disk for the boot volume datastores
  - If these tiers are not combined into a single datastore
- It is supported to have both CachePoint/Layers and boot volumes on the same datastore
  - Although this will consume more of the expensive disk over the long run
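Because nearly all of the IO lands on the CachePoint/Layer datastores, that tier drives the disk performance requirement. A minimal sketch of the standard front-end-to-back-end IOPS conversion; the per-desktop IOPS, read/write mix, and RAID level in the example are illustrative assumptions, not figures from the Unidesk tests:

```python
def backend_iops(frontend_iops, read_fraction, raid_write_penalty):
    """Convert front-end (guest-visible) IOPS into the back-end IOPS
    the disks must deliver, accounting for the RAID write penalty."""
    reads = frontend_iops * read_fraction
    writes = frontend_iops * (1 - read_fraction)
    return reads + writes * raid_write_penalty

# Illustrative only: 75 desktops at 10 IOPS each, a write-heavy
# steady-state mix (30% reads), on RAID 5 (write penalty of 4).
layer_tier = backend_iops(75 * 10, read_fraction=0.3, raid_write_penalty=4)
print(layer_tier)  # 750 front-end IOPS -> 2325.0 back-end IOPS
```

The same formula with a RAID 10 penalty of 2 roughly halves the write cost, which is one reason the high-IO CachePoint/Layer tier justifies the more expensive disk.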
APPLIANCE PLACEMENT & DATASTORE CONSIDERATIONS
Appliance Placement?
[Diagram: appliances in a DRS cluster]
Storage Location Is What Matters
[Diagram: MA, MCP, and CP1–CP4 appliances mapped to their datastores]
Individual CP Storage
[Diagram: one CP's storage — a boot images datastore (BOOT plus vswap for each desktop), a Layer/CP Storage datastore (OS, App, and Personalization layers), and an Archive datastore]
DRS: Putting It All Together (3 Tiers)
[Diagram: MA, MCP, and CP1–CP4 in a DRS cluster, with storage split across three tiers]
DRS: Putting It All Together (2 Tiers)
[Diagram: MA, MCP, and CP1–CP4 in a DRS cluster, with storage split across two tiers]
Sizing the Datastores
- Uses the following variables and basic assumptions to define writable storage needs:
  - Desktop memory
  - User space / personalization layer size
  - Shared layer size estimate
  - Backup settings
  - Number of desktops and desktops per CP
- The sizing tool from Unidesk is a simple spreadsheet and completely adjustable
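The spreadsheet math can be sketched in a few lines using the variables listed above. The formulas and all of the figures in the example run (including the 0.5 GB boot-image allowance) are illustrative assumptions, not Unidesk's actual sizing tool:

```python
def cp_layer_store_gb(desktops, pers_layer_gb, shared_layers_gb):
    """CachePoint/Layer datastore: one personalization layer per
    desktop plus the shared OS and app layers."""
    return desktops * pers_layer_gb + shared_layers_gb

def boot_store_gb(desktops, desktop_ram_gb, boot_image_gb=0.5):
    """Boot volume datastore: per-desktop vswap (sized to desktop
    RAM) plus a small boot image (0.5 GB is an assumed figure)."""
    return desktops * (desktop_ram_gb + boot_image_gb)

def archive_store_gb(backed_up_desktops, pers_layer_gb, backups_retained):
    """Archive datastore: retained personalization-layer backups for
    only those desktops that are actually backed up."""
    return backed_up_desktops * pers_layer_gb * backups_retained

# Illustrative run: 75 desktops on a CP, 5 GB personalization layers,
# 40 GB of shared layers, 2 GB of RAM, 50 desktops keeping 3 backups.
print(cp_layer_store_gb(75, 5, 40))   # 415 GB
print(boot_store_gb(75, 2))           # 187.5 GB
print(archive_store_gb(50, 5, 3))     # 750 GB
```

Note how the backup settings only affect the archive tier, which is why it can sit on cheap SATA while the layer datastore gets the fast disk.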
LAB: SIZING UNIDESK STORAGE
Common Limitations?
- The VMware cluster size limit for sharing disks (up to ESXi 5.0) was 8 nodes with active VMs sharing a VMDK on a specific VMFS volume
- A typical VMFS volume on rotating disk is good for a limited number of desktops (if you can handle the IO load)
- With SSD, VMFS is fine for more desktops per volume; CPs have been tested to desktop counts per CP well above typical VMFS usage
- NFS allows you to have fewer datastores (it does not have the VMFS locking issues), though you must still handle the IO and have enough CPs
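These per-volume caps translate directly into datastore counts via a ceiling division. A small sketch; the desktop totals and per-volume caps in the example are illustrative, not hard limits:

```python
import math

def datastores_needed(total_desktops, desktops_per_volume):
    """How many datastores to carve for a pool, given a
    per-volume desktop cap (ceiling division)."""
    return math.ceil(total_desktops / desktops_per_volume)

# Illustrative: a 300-desktop pool at 75 desktops per VMFS volume
# versus larger NFS datastores holding 150 desktops each.
print(datastores_needed(300, 75))   # 4 VMFS volumes
print(datastores_needed(300, 150))  # 2 NFS datastores
```

Fewer, larger datastores mean fewer objects to manage, but the aggregate IO still has to land somewhere, so the CP count and disk performance requirements do not change.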
DRS: How Could NFS Change This Design?
[Diagram: MA, MCP, and CP1–CP4 in a DRS cluster — 75 desktops per VMFS volume]
DRS: How Could NFS Change This Design?
[Diagram: the same cluster on NFS — 150 desktops per CP? Reduced number of datastores?]
OPEN Q&A