Lessons Learned While Deploying vSAN Two-Node Clusters

Presentation transcript:

Lessons Learned While Deploying vSAN Two-Node Clusters
Wes Milliron – Systems Engineer
@wesmilliron | blog.wesmilliron.com

Overview
What vSAN is and how it works
vSAN requirements and design considerations
Configuration summary with gotchas
Unexpected outcomes
Resources

What is vSAN?
Software-defined storage solution
Pools local storage from hosts into a single shared datastore
Policy-driven performance and protection control via Storage Policy-Based Management (SPBM)
No hardware RAID

Types of vSAN Deployments
vSAN 2-node cluster: supports 1 failure
vSAN 3-node cluster: supports 1 failure
vSAN 4+ node cluster: supports 2+ failures

Disk Groups
Logical grouping of physical disks on a host
Each disk group has 1 SSD for cache and 1 or more capacity disks
At least 2 disk groups recommended per host
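As an illustration (drive counts and sizes here are placeholders, not from the deck): a hybrid vSAN host might run two disk groups, each pairing one cache SSD with three or four SAS capacity drives. Because a failed cache SSD takes its whole disk group offline, the two-disk-group layout limits that failure to half the host's capacity.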

Objects and Components
vSAN is object-based distributed storage
Objects are split into components, which are distributed across the hosts in the cluster
Component count is determined by object size and policy: Failures to Tolerate (FTT), "RAID" type, and disk stripes
Maximum component size is 255 GB
More than 50% of an object's components (votes) must be available for the object to remain accessible
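To make the arithmetic concrete: a 500 GB VMDK under an FTT=1 mirror policy splits into ceil(500/255) = 2 components per replica, so two replicas plus one witness component yields five components for that object (stripe width and vote placement can push the count higher).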

What is vSAN 2-Node For?
ROBO = Remote Office/Branch Office, which is also the name of the license type
For sites where HA is required but the VM count is low
Less expensive alternative to other HA ROBO solutions
Requires a witness host

Requirements | Hardware Compatibility
vSAN leans heavily on hardware functionality
Use the vSAN Hardware Compatibility List (HCL) while planning
Other options:
- VxRail from Dell EMC is an appliance-type solution
- Certified vSAN ReadyNodes from the OEM partner of your choice
New: vSAN Hardware Compatibility Checker Fling

Requirements | Networking
Hosting site to witness: a maximum of 500 ms RTT latency is supported, but sub-200 ms is recommended
Witness bandwidth required is 2 Mbps per 1,000 components; each host has a 9,000-component maximum, so witness bandwidth won't exceed 18 Mbps
Between hosting nodes:
- Max latency: 1 ms RTT
- Bandwidth (hybrid): 1 Gbps
- Bandwidth (all-flash): 10 Gbps
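Worked example: a site hosting 1,500 components needs 2 Mbps x 1.5 = 3 Mbps toward the witness; the 18 Mbps ceiling is just the 9,000-component host maximum run through the same formula.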

vSphere and vSAN Versions
For vSAN 6.6 and later, unicast is fully supported for vSAN traffic
vSAN 6.6 requires at least ESXi 6.5.0d; try to avoid earlier versions
Online health checks use vSAN telemetry data and give the added benefit of HCL checks, improved debugging, and more

Requirements | Cluster Witness
One witness per cluster
Maintains quorum and data accessibility for the cluster
Contains witness components only; does not contribute CPU/RAM/storage to the cluster
The witness OVA is for 2-node and stretched-cluster vSAN exclusively

Licensing Considerations
Two licenses are required: one for vSphere (hosts) and one for vSAN (cluster)
ROBO licenses have a 25-VM maximum
DRS requires a vSphere Enterprise Plus license

Design | Limiting Factors
Determine your limiting factors: capacity, hardware, and networking
Budget
Site networking options:
- 10 Gb port availability
- When would you choose to configure the cluster as direct-connect?
Witness location and networking

Design | Capacity Planning
Hybrid or all-flash configuration
2-node clusters are limited to mirrored storage policies
The storage multiplier for mirrored policies is 2x, so plan raw capacity at twice the usable requirement

Design | Hardware Considerations
Cache drive should be at least 10% of the total VM consumed size for the cluster; buy the largest cache drive that makes sense
The write buffer is limited to 600 GB per disk group
In hybrid configurations, cache is split 70/30 between read cache and write buffer
Boot device options: SAS SSD vs M.2 vs SD card
Use a separate RAID controller for the OS volume
The HCL is your friend
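Worked example of the 10% rule (the numbers are illustrative): 8 TB of consumed VM storage calls for at least 800 GB of cache per host, which with the recommended two disk groups means one 400 GB-or-larger cache SSD in each.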

Design | Networking Approach
Distributed vSwitch (DVS) vs standard vSwitch: a DVS allows for easier management as well as the use of Network I/O Control (NIOC)
Link Aggregation Groups (LAG)
vCenter portability

Build Summary
1. Build ESXi hosts
2. Configure the DVS
3. Deploy the vSAN witness
4. Create the cluster
5. Configure vSAN
This is the order I found gives the smoothest deployment; the following slides are a high-level view of the build and configuration steps, touching on the gotchas
Don't forget to document the process

Host Configuration
Build ESXi hosts using standard practices
RAID controllers: passthrough mode, with write caching disabled
- If the controller can't do passthrough, a per-disk RAID 0 configuration will work
Enable the vMotion service on each host
Don't forget NTP and syslog!
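A minimal sketch of the syslog step from the ESXi shell, assuming a placeholder collector (syslog.example.com); NTP itself is easiest to configure from the host client or vCenter UI:

    # Point the host at a remote syslog collector and apply the change
    esxcli system syslog config set --loghost='udp://syslog.example.com:514'
    esxcli system syslog reload
    # Allow outbound syslog through the ESXi firewall
    esxcli network firewall ruleset set --ruleset-id=syslog --enabled=true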

Networking Configuration | Distributed vSwitch Creation
Choose the number of uplinks
Enable NIOC

Networking Configuration | Port Groups
Create a vSAN port group and a management port group

Networking Configuration | vSAN VMK Creation
Create a VMkernel (VMK) adapter for vSAN on each host, dedicated to vSAN traffic
Remember to enable the vSAN service on the VMK!
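If a VMK was created without the vSAN service ticked, it can also be tagged from the ESXi shell; vmk1 here is an assumption, so substitute the vSAN VMK on your host:

    # Tag vmk1 for vSAN traffic, then confirm the interface list
    esxcli vsan network ip add -i vmk1
    esxcli vsan network list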

Witness Deployment
Use separate subnets for the management and vSAN VMKs
Set the management IP, hostname, etc.
Add the witness host to vCenter
Modify the witness vSAN VMK and override the default gateway
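The default-gateway override on a VMK is a newer-release convenience; on builds without it, a static route toward the hosting site's vSAN network does the same job. A sketch with placeholder addresses:

    # Route the witness's vSAN VMK traffic toward the remote vSAN subnet
    esxcli network ip route ipv4 add --network 172.16.10.0/24 --gateway 172.16.20.1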

vSAN Cluster Creation
Create the cluster object in vCenter with DRS and HA disabled
Enable the vSAN service on the cluster
Enable deduplication/compression if applicable
Claim vSAN disks for cache and capacity roles, using the NAA ID to link each disk to its physical drive bay (see the sketch after this list)
Select the deployed witness appliance as the cluster witness
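A hedged sketch of the drive-bay lookup and a manual disk claim from the shell; the NAA IDs are placeholders, and the physical-location query depends on controller/driver support:

    # Show the physical location (bay) for a device, if the controller reports it
    esxcli storage core device physical get -d naa.5000c5008c0d4c3f
    # Claim one SSD for cache (-s) and one disk for capacity (-d) into a disk group
    esxcli vsan storage add -s naa.5000c500aaaaaaaa -d naa.5000c500bbbbbbbb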

Required Extra Step for Direct-Connect Clusters
By default, witness traffic from the hosts travels across the vSAN network; in a direct-connect scenario that network is not routable, so witness traffic must be re-routed out of the management VMK
Run from each VM host (not the witness), where vmk0 is the management VMK:
    esxcli vsan network ip add -i vmk0 -T=witness
See VMware's information on Witness Traffic Separation
Troubleshoot with vmkping:
    vmkping -I vmk1 <IP of vSAN VMK on target>
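Once both hosts are tagged, cluster membership and state can be confirmed from either node:

    esxcli vsan cluster get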

Finishing Touches
Patching: hosts and the witness must be at the same patch level; upgrade the vSAN on-disk format version after the entire cluster is patched
Assign the license to the cluster
Enable High Availability:
- Set Admission Control to 50% reserved for CPU and memory, reflecting the mirrored, RAID-1, FTT=1 nature of the cluster
- Datastore heartbeating is not enabled by default on vSAN datastores
Without DRS, VMs need to be manually migrated

vSAN Health Check
Drill down into potential issues
Automatically remediate many classes of problems
Online health checks use vSAN telemetry for additional checks
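On vSAN 6.6 and later the same checks are also reachable from the shell; a sketch, since check names vary by release:

    # List health checks and their current status
    esxcli vsan health cluster list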

Deployment Strategies
Build and ship:
- Stage the environment in-house
- Change IPs/DNS before shutdown
- Ship to the remote site
Ship and build:
- Deliver hardware to the remote site
- Build remotely, with remote hands following a runbook

Documentation

Unexpected Outcomes
Organizational culture shift
Aligning design and deployment strategies
Do it once – do it right

Resources
Use the vCommunity! Don't rely on a single blog post or source; pull info from multiple sources and community experts
Storage Hub - VMware Virtual Blocks Blog
Cormac Hogan's blog
Wes Milliron's blog:
- Building a 2-Node Direct Connect vSAN Cluster
- Associating NAA ID with Physical Drive Bay
vSAN 6.7 U1 Deep Dive book

Thank You
@wesmilliron | blog.wesmilliron.com