Consolidation and Optimization Best Practices: SQL Server 2008 and Hyper-V Dandy Weyn | Microsoft Corp. Antwerp, March 12 2009.

1 Consolidation and Optimization Best Practices: SQL Server 2008 and Hyper-V Dandy Weyn | Microsoft Corp. Antwerp, March 12 2009

2 Key Question Should I consider SQL Server within Hyper-V guests for production environments? YES, provided the limitations of the guest VM meet the requirements of the workload.

3 Questions asked What are the hardware limitations? What are the restrictions of each VM?

4 Hyper-V limitations (host): 16 CPU cores; 2 TB physical memory; no limit on the number of VMs (500 recommended).

5 Hyper-V limitations (per guest VM): 1-4 virtual processors; 64 GB memory; 260 disks of up to 2 TB each; 8 NICs.

6 Hyper-V Architecture The root partition hosts the I/O stack, drivers, and VSPs; each child partition hosts its own I/O stack and VSCs. Partitions communicate over the VMBus (shared memory), while the hypervisor manages devices, processors, and memory. OS kernel enlightenments are available for Windows Server 2008 and later guests.

7 Questions asked What’s best for SQL Server? What are the best practices for Hyper-V? What are considerations for SQL Server 2008? What are the recommended storage options? How to monitor performance? Do you have any metrics?

8 What’s best for SQL Server? Identify the workload. Perform thorough monitoring following best practices. Understand the benefits of using Hyper-V.

9 Hyper-V Best Practices Use synthetic devices for best performance. Synthetic devices are installed with the integration components and utilize a VSC/VSP pair to pass requests over the VMBus to the root partition; this occurs in kernel mode once the initial request is passed from the VM, with less CPU overhead than emulated devices. Emulated devices should be avoided, but may be needed during initial configuration of a guest VM.

10 Considerations for SQL Server Functional considerations: supported configurations and high availability. I/O performance overhead: proper sizing and configuration of storage; pass-through disks or fixed VHDs are recommended. Network: workloads that make heavy use of network resources see the greatest CPU overhead and performance impact, so network tuning is advisable.

11 Recommended Storage Guest VM with pass-through disks: the disk is offline at the root partition, so physical disk counters must be used to monitor I/O from the root. The physical disk numbers may show duplicate or non-existent instances of disk objects (a known Windows issue). Guest VM with VHDs: monitor logical or physical disk counters within the guest VM to determine I/O rates to a specific VHD; monitoring at the root partition provides only aggregate values for all I/O issued against all VHDs hosted on the underlying partition/volume.

12 What you shouldn’t do… Avoid dynamically expanding disks: they generate increased I/O for disk read and write activity, and increased CPU usage.

13 Performance Monitoring The Hyper-V processor counters are the best way to get a true measure of CPU utilization: Logical Processor covers total CPU time across the entire server, Root Virtual Processor covers CPU time for the root partition, and Virtual Processor covers CPU time for each guest virtual processor. Traditional % Processor Time counters, whether measured in the guest or the root, may not be accurate.
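As a minimal sketch of how these counters relate (the sample values are illustrative, not measurements from this deck, and the counter names assume the standard Hyper-V Hypervisor Logical Processor set):

```python
# Illustrative samples of the Hyper-V Logical Processor counters
# (made-up values for the sketch):
samples = {
    "% Guest Run Time": 63.9,      # time spent running guest virtual processors
    "% Hypervisor Run Time": 4.8,  # the hypervisor's own overhead
    "% Total Run Time": 68.7,      # total busy time on the host's logical processors
}

def host_idle(s):
    # The true host utilization is % Total Run Time; the remainder is idle,
    # regardless of what % Processor Time inside any one partition reports.
    return 100.0 - s["% Total Run Time"]

# Sanity check: total run time decomposes into guest time plus hypervisor time.
assert abs(samples["% Total Run Time"]
           - (samples["% Guest Run Time"] + samples["% Hypervisor Run Time"])) < 0.1

print(host_idle(samples))
```

This is why the slide warns against trusting % Processor Time alone: a guest only sees the time slices it was actually scheduled on, and the root does not see guest CPU time at all.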

14 Native Server vs. VM An OLTP workload was used as the test harness at three load levels.
Configurations compared:
- Native: 4 physical CPU cores, Hyper-V disabled (bcdedit /set hypervisorlaunchtype off)
- Native: 4 physical CPU cores, Hyper-V enabled
- Guest VM: pass-through disks, 4 physical / 4 virtual CPU cores, all synthetic devices
- Guest VM: static VHD files, 4 physical / 4 virtual CPU cores, all synthetic devices
Workload levels (target CPU): Low 20-30%, Medium 50-60%, High 80%.

15 Native vs. VM CPU Utilization vs. Throughput The same throughput is attainable, but there is more CPU overhead with Hyper-V enabled or when running within a VM. Some overhead is observed with Hyper-V merely being enabled.

16 Native vs. VM IO Performance

17 Running Concurrent VMs Performance and scalability of consolidating VMs, using the same OLTP workloads. Workload levels (target CPU): Low 20-30%, Medium 50-60%, High 80%.
Scenarios tested:
1. Concurrent VM performance
2. Concurrent VMs in a CPU overcommit scenario (total virtual CPUs across all VMs > total physical CPUs on the server)
3. CPU overhead when scaling the number of VMs from 2 to 4
4. Shared vs. dedicated storage models
Configuration: each guest VM configured with 4 logical CPUs and identical (per-VM) storage; root with 8 physical CPU cores for scenarios 1 & 2, and 16 physical CPU cores for scenarios 3 & 4.

18 Two Concurrent Guests 8 CPU Core Root – Performance Two VMs running concurrently on a root with 8 CPU cores available, each guest configured with 4 logical processors; the number of VMs is constant. The CPU overhead between the root and a VM (the delta between the lines) is relatively constant across varying workloads. Recommendation: to achieve the best VM performance, limit processes running on the root that consume CPU resources.

19 Four Concurrent VM Guests Overcommitted CPU Resources Root configured with 8 physical CPU cores. Scenario 1: two VMs (8 total virtual CPUs). Scenario 2: four VMs (16 total virtual CPUs), so CPU resources are overcommitted.
VMs | Total Batches/sec | Total Guest CPU% | Total CPU% | Relative throughput (Batches/sec per CPU%)
2 | 1080 | 40.5% | 46.0% | 26.7
4 | 1893 | 85.1% | 93.1% | 22.2
Conclusion: there is more overhead managing 4 VMs even though cumulative server CPU resources were not exhausted; approximately 16% CPU overhead.
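The overhead figure follows directly from the slide's numbers; a quick check of the arithmetic:

```python
# (batches/sec, total guest CPU%) for 2 and 4 concurrent VMs, as reported on the slide
results = {2: (1080, 40.5), 4: (1893, 85.1)}

# Relative throughput: batches/sec delivered per percentage point of guest CPU
relative = {n: batches / cpu for n, (batches, cpu) in results.items()}

# Overhead of running 4 VMs vs. 2: the fractional drop in relative throughput,
# roughly the ~16% cited on the slide.
overhead = 1 - relative[4] / relative[2]

print(round(relative[2], 1))  # 26.7
print(round(relative[4], 1))  # 22.2
```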

20 Two to Four Concurrent VMs 16 CPU Core Root – CPU Overhead (2 vs. 4) Relative throughput of concurrent VMs on a 16 CPU core root. Normalized = (baseline of a single VM on a 4 CPU root) * 4, the ideal case. Comparing normalized vs. 2 concurrent VMs vs. 4 concurrent VMs: CPU overhead relative to the normalized baseline increases as the number of VMs increases, with a range of 2-4% for 2 VMs and 13-16% for 4 VMs.

21 Distribution of Throughput Workload Throughput Distribution Across VMs Distribution of throughput across four VMs running concurrently: throughput is relatively balanced. Note that VM4 uses a VHD instead of pass-through disks for SQL data storage.

22 Storage Configuration Comparison Shared (VHDs) vs. Dedicated Spindles (Pass-through) Disk configuration per VM/root:
Dedicated per VM, using pass-through disks: SQL data on two 150 GB LUNs using RAID 1+0 (4+4) sets; SQL log on one 50 GB LUN using a RAID 1+0 (2+2) set.
Shared, using static VHDs (a single logical volume at the root level): a single pool of disks for data files and a single pool for logs. F: data files, two 150 GB VHDs per VM. G: log files, one 30 GB VHD per VM.

23 Storage Configuration Comparison Total Read I/Os vs. Latency VHDs on shared storage vs. dedicated spindles using pass-through disks, measuring average reads per second against latency (Avg. Disk sec/Read). VHDs on common disks show slight latency overhead and less throughput.

24 Comparing Consolidation Options Multi-Instance SQL vs. Hyper-V VMs
Isolation: shared Windows instance vs. dedicated Windows instance.
CPU resources: number of CPUs visible to the Windows instance vs. a per-VM maximum (Windows 2008: up to 4 virtual CPUs; Windows 2003: up to 2 virtual CPUs).
Memory: flexible (max server memory) vs. statically allocated to the VM (offline changes only, no ability to “overcommit” memory, 64 GB limit per VM, 2 TB limit per host).
Storage: SQL data files with standard storage options vs. SQL data files on pass-through disks or virtual hard disks exposed to the VM.
Resource management: WSRM (process level) vs. Hyper-V guest VM settings.
Number of instances: 50 vs. a practical limit determined by physical resources.
Support: normal rules apply vs. support for SQL Server 2008 & 2005.
High availability: normal rules apply vs. guest clustering support, database mirroring, and log shipping.

25 Resource Management Hyper-V – CPU Resources Virtual machine reserve: guarantees resources and disallows others from using them; use with caution. Virtual machine limit: an upper limit on the host CPU resources available to the VM. Relative weight: defines the ratios of processing resources available to VMs; priority is given to VMs based on these settings, and it is only enforced when total resources become heavily utilized.
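For example, relative weights translate into proportional shares only under contention; a minimal sketch (the VM names and weight values below are hypothetical):

```python
def cpu_shares(weights):
    """Fraction of contended CPU each VM receives, in proportion to its
    Hyper-V relative weight (only enforced when the host is saturated)."""
    total = sum(weights.values())
    return {vm: w / total for vm, w in weights.items()}

# A SQL Server VM weighted twice as heavily as two other guests:
shares = cpu_shares({"sql-vm": 200, "web-vm": 100, "batch-vm": 100})
print(shares["sql-vm"])  # 0.5: half of the contended CPU
```

When the host is not saturated, each VM simply uses what it needs (up to its limit); weights do not cap an idle host.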

26 Resource Management Multi-Instance SQL (vs. multiple VMs) using Windows System Resource Manager (WSRM) Resource application policy: policies are enforced once total CPU utilization thresholds are exceeded; supports scheduling of different policies; supports memory and CPU affinity (not recommended for use with the SQL Server engine). Available on all Windows 2008 editions.

27 Hyper-V Best Practices CPU “enlightenments” include optimizations related to CPU efficiency, scalability of multi-processor VMs, and reduced overhead for memory access; they are not available for all guest operating systems (not available for Windows 2003 guests). It is possible to “overcommit” CPU resources (total guest virtual CPUs > physical CPU cores), which may introduce additional CPU overhead when all workloads are busy. Note: memory is allocated to VMs statically and can only be modified when a guest is offline (no ability to “overcommit” memory).

28 Recommendations Optimizing for SQL Server USE PASS-THROUGH OR FIXED VHDs. USE SYNTHETIC DEVICES. INSTALL HYPER-V INTEGRATION COMPONENTS. OPTIMIZE NETWORK PERFORMANCE. TEST YOUR PERFORMANCE WORKLOAD.

29 Support Considerations SQL Server in VMs SQL Server 2008 and 2005 are supported on Hyper-V. Guest clustering is not supported.

30 Additional References
Running SQL Server 2008 in a Hyper-V Environment – Best Practices and Performance Considerations: http://sqlcat.com/whitepapers/archive/2008/10/03/running-sql-server-2008-in-a-hyper-v-environment-best-practices-and-performance-recommendations.aspx
Windows Server Hyper-V site: http://www.microsoft.com/windowsserver2008/en/us/virtualization-consolidation.aspx
Hyper-V TechNet center: http://technet2.microsoft.com/windowsserver2008/en/servermanager/virtualization.mspx
Performance Tuning Guidelines for Windows 2008 Server (section on Hyper-V): http://www.microsoft.com/whdc/system/sysperf/Perf_tun_srv.mspx
Hyper-V related blogs:
http://blogs.msdn.com/tvoellm/
http://blogs.technet.com/virtualization/
http://blogs.technet.com/jhoward/
http://blogs.msdn.com/virtual_pc_guy/
http://blogs.technet.com/roblarson/default.aspx
http://blogs.msdn.com/mikekol

31 Appendix

32 Hardware Configuration Server: Dell R900, 4-socket, 16-core Intel at 2.40 GHz, 1066 MHz bus, 6 MB L2 cache; 64 GB physical memory; 2x 4 Gb/s dual-port Emulex HBAs; 1x Broadcom 1 Gb/s NIC. Storage array: HDS AMS 1000 with 8x data LUNs, 4x log LUNs, and 1x backup LUN on a single 6-disk (5+1) RAID 5 set. Two storage configurations were tested, with no spindle sharing with other hosts in either configuration.

33 Monitoring Performance – Test Results CPU and disk counters were measured from the root and the guest to observe any differences. Example counters measured (Low / Medium / High OLTP workload):
From the guest VM:
Transactions/sec: 352 / 546 / 658
Batches/sec: 565 / 897 / 1075
% Processor Time: 34.2 / 65.3 / 84.2
% Privilege Time: 5.1 / 8 / 8.4
Logical - Avg. Disk sec/Read (_Total): 0.005 / 0.006 / 0.007
Logical - Disk Reads/sec (_Total): 1053 / 1597 / 1880
From the root partition:
% Processor Time: 4.9 / 7.8 / 11.2
% Privilege Time: 3.6 / 6.1 / 7.3
Hyper-V Logical Processor % Hypervisor Run Time: 4 / 4.8 / 4.3
Hyper-V Logical Processor % Total Run Time: 39.1 / 68.7 / 86.5
Hyper-V Logical Processor % Guest VM Run Time: 35.1 / 63.9 / 82.1
Physical - Avg. Disk sec/Read (_Total): 0.005 / 0.006
Physical - Disk Reads/sec (_Total): 1053 / 1597 / 1880
Batches per CPU% (Batches/sec / % Processor Time): 16.1 / 14 / 13.1
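As a cross-check, the published "Batches per CPU%" figures can be reproduced from the slide's data; note that they line up with the Hyper-V % Guest VM Run Time column rather than the guest's own % Processor Time, reinforcing the earlier point that the Hyper-V counters are the accurate ones:

```python
# Values from the test-results slide (Low / Medium / High OLTP workload)
batches_per_sec = {"low": 565, "med": 897, "high": 1075}
guest_vm_run_time = {"low": 35.1, "med": 63.9, "high": 82.1}  # Hyper-V % Guest VM Run Time

# Throughput per percentage point of actual guest CPU time
efficiency = {k: round(batches_per_sec[k] / guest_vm_run_time[k], 1)
              for k in batches_per_sec}
print(efficiency)  # {'low': 16.1, 'med': 14.0, 'high': 13.1}, matching the slide
```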

