Section 6: SQL Server on IaaS Best Practices




1 Section 6: SQL Server on IaaS Best Practices
Lesson 1: SQL Server IaaS Best Practices

2 Different Considerations for SQL Server on Azure VM
Sizing considerations
Storage and IO considerations
Network considerations
Application considerations

3 Sizing Considerations
For Standard storage (do not choose this if performance is critical): use at minimum a Standard A2 VM for SQL Server
For Premium storage: SQL Server Enterprise Edition needs DS3 or higher; SQL Server Standard or Web Edition needs DS2 or higher
Network bandwidth is also limited by VM size
Maximum data disks and IO throughput depend on VM size

VM size is a critical element when planning SQL Server on Azure. The best recommendation is to use a configuration similar to the on-premises server hardware, so the right VM size depends entirely on the workload running on the SQL Server. For performance-sensitive applications, we recommend SQL Server Enterprise Edition on DS3 or higher and SQL Server Standard and Web Editions on DS2 or higher. Review the published VM size charts to understand how sizes differ in CPU, memory, disks, IO bandwidth, and NICs. Apart from system resources such as CPU cores and memory, the VM size also determines the number of data disks that can be attached, the amount of IO throughput the VM can drive, and the available network bandwidth, so VM size plays a critical role in a successful deployment of your SQL Server workload on Azure.
Although Azure Storage introduces most of the differences between on-premises and Azure deployments, other system components, such as CPU and memory, must be considered as well when you evaluate performance. In Azure, the only configurable option for CPU is the number of CPU cores assigned to a virtual machine, currently from one shared core (A0) to sixteen dedicated cores (A9) in the Standard tier, subject to change in the future. These CPU cores may not match the SKUs found in high-end on-premises servers, which can lead to significant performance differences in CPU-bound workloads and must be taken into account during testing and baselining. For memory, the current offering spans from 768 MB (A0) to 112 GB in an A9 compute-intensive instance in the Standard tier. Again, when you compare the performance characteristics of an existing on-premises application, choose the virtual machine size and options that avoid performance impact from inadequate memory sizing for buffer pools or other internal memory structures.

4 Sizing Considerations
SQL Server Enterprise Edition: DS3 or higher
SQL Server Standard or Web Edition: DS2 or higher
Use Premium Storage
Network bandwidth is also limited by VM size
Maximum data disks and IO throughput depend on VM size

5 Storage Considerations: Storage Accounts
Keep the storage account and the SQL Server VM in the same datacenter to reduce transfer delays
Place all the data disks on the same storage account for faster recovery
Disable geo-replication on storage accounts

We recommend placing the storage account and the SQL Server VM in the same datacenter to reduce transfer delays and improve overall IO performance. Disable geo-replication on storage accounts: write order cannot be guaranteed when writing to multiple disks, so it is unnecessary overhead; use AlwaysOn Availability Groups for HA and DR instead. Storage accounts are implemented as a recovery unit in case of failures, so keeping all disks in the same storage account makes recovery simpler.
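A minimal sketch of disabling geo-replication, assuming the classic (Service Management) Azure PowerShell module that the striping script on slide 12 also uses; the storage account name is hypothetical:

# Switch the storage account from geo-redundant to locally redundant storage.
Add-AzureAccount
Set-AzureStorageAccount -StorageAccountName "mysqlstorageacct" -GeoReplicationEnabled $false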

6 Disks and Performance Considerations
An Azure VM is created with two disks:
OS Disk - the C:\ drive
Temporary Disk - the D:\ drive
You can attach multiple data disks as needed, up to the maximum the VM size allows

7 OS Disk (C:\)
The OS disk is a bootable VHD, stored as a page blob in the storage account, that you boot and mount as the OS drive
This disk is optimized for OS IO patterns; do not place any heavily used databases on it
The default caching policy on the OS disk is Read/Write
Always use data disks instead of the OS disk to store database files

The OS disk is a bootable VHD stored as a page blob in the storage account that you can boot and mount as a running version of an operating system; it is labeled as the C: drive.

8 Temporary Disk (D:\)
The scratchpad disk on the VM
This disk is re-created whenever the VM is restarted; all its contents are lost, so it is not a persistent disk
Not a good candidate for data and log files
Starting with the D-series VMs, the D:\ drive is an SSD-based disk
A good candidate for tempdb files or Buffer Pool Extensions (SQL Server 2014)
If your VM supports Premium Storage, make sure the disk hosting tempdb has read caching enabled

The D:\ drive is the temporary disk running on the host machine. For Standard storage, this is a plain disk with different sizes depending on the VM configuration; we do not recommend placing tempdb or Buffer Pool Extensions on Standard storage D:\ drives. For D-series and G-series VMs, however, the D:\ drive is an SSD that can handle more IOPS than any attached data disk, so for applications that use tempdb heavily we recommend placing tempdb there to get the IO advantages of SSDs. These disks are still not persistent, so do not put any of your user database and log files on this disk. With the Premium Storage available in DS-series and GS-series VMs, you can instead place tempdb on any data disk, which is an SSD in the background, to get the same IO advantages; make sure you enable read caching on the disk serving tempdb when leveraging the Premium Storage option.
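A hedged sketch of moving tempdb to the temporary drive. The logical file names tempdev and templog are the SQL Server defaults, while the D:\SQLTEMP path is hypothetical; the move takes effect after a service restart, and because D:\ is wiped on redeploy, the target folder must be re-created at startup (for example by a scheduled task) before SQL Server starts:

# Point tempdb at the SSD-backed temporary drive (D-series VMs); applies after restart.
Invoke-Sqlcmd -ServerInstance "." -Query @"
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'D:\SQLTEMP\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'D:\SQLTEMP\templog.ldf');
"@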

9 Data Disks (Persistent storage)
Additional disks you can add to your VM, depending on the VM series and size
Data disks can be attached from Standard Storage or Premium Storage
Premium Storage disks are SSD-based and deliver the IO performance SQL Server workloads need
We recommend placing all database files on data disks

10 Data Disks: Standard Storage Considerations
Standard disks are HDD-based
Consider the warm-up effect on data disks, normally seen after about 20 minutes of inactivity
Every data disk is limited to a maximum of 500 IOPS. To drive more IOPS, add more data disks and perform disk striping
Disable disk caching on data disks when using Standard storage (see the sketch below the notes)

Warm-up of data disks: Azure data disks exhibit a warm-up effect when not accessed for a period of time, approximately 20 minutes of inactivity, as the adaptive partitioning and load-balancing algorithms kick in. You may see slower performance while these algorithms run. This is not found on systems running continuously, but do account for the effect during performance testing.
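A minimal sketch of disabling host caching on an attached data disk, assuming the classic Azure PowerShell module; the service name, VM name, and LUN are illustrative:

# Turn off host caching on the data disk at LUN 2.
Get-AzureVM -ServiceName "srgotestsql1" -Name "srgotestsql1" |
    Set-AzureDataDisk -LUN 2 -HostCaching "None" |
    Update-AzureVM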

11 Data Disks: Premium Storage Considerations
Premium disks are SSD-based
Use a minimum of 2 P30 disks: one disk for log files, the other for data files and tempdb
If you require more IOPS or bandwidth, use disk striping to increase IO bandwidth
Enable read caching only on the data disks hosting data files and tempdb (see the sketch below)
Disable caching on data disks serving log files or write-intensive data files
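Following the same pattern as the Standard storage example, a sketch that enables read caching on the disk hosting data files and tempdb while leaving the log disk uncached; the LUN assignments are hypothetical:

# ReadOnly caching for the data/tempdb disk; no caching for the log disk.
Get-AzureVM -ServiceName "srgotestsql1" -Name "srgotestsql1" |
    Set-AzureDataDisk -LUN 0 -HostCaching "ReadOnly" |
    Set-AzureDataDisk -LUN 1 -HostCaching "None" |
    Update-AzureVM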

12 Disk Striping Considerations
For Windows 8 / Windows Server 2012 and later, use Storage Spaces
Set the stripe size (interleave) to 64 KB for OLTP workloads and 256 KB for data warehouses
Use a PowerShell script to create the storage pool
For Windows Server 2008 R2 or earlier, use dynamic disks (OS striped volumes) instead
Determine the number of disks in each storage pool based on your load
Use different storage pools for log files and data files when possible

1) Add the data disks to the VM:
Add-AzureAccount
Select-AzureSubscription -SubscriptionName "Visual Studio Ultimate with MSDN" -Current
Get-AzureVM -ServiceName "srgotestsql1" -Name "srgotestsql1" |
    Add-AzureDataDisk -CreateNew -DiskSizeInGB 1023 -DiskLabel "disk3" -LUN 3 |
    Update-AzureVM

2) Create a storage pool from the poolable data disks, then carve a two-column striped virtual disk from it:
New-StoragePool -FriendlyName Azuredatapool -StorageSubSystemFriendlyName "Storage Spaces*" -PhysicalDisks (Get-PhysicalDisk -CanPool $true)
New-VirtualDisk -FriendlyName DataVirtualDisk1 -StoragePoolFriendlyName Azuredatapool -Interleave 64KB -NumberOfColumns 2 -ResiliencySettingName Simple -UseMaximumSize

3) Create a volume for data files from the virtual disk, with a 64 KB allocation unit:
Get-VirtualDisk -FriendlyName DataVirtualDisk1 | Get-Disk | Initialize-Disk -PassThru | New-Partition -AssignDriveLetter -UseMaximumSize | Format-Volume -AllocationUnitSize 64KB

Disk striping is the only way to increase the IO capacity available to the server. Whether you use Standard or Premium storage, add the right number of disks to the storage pool and carve volumes out of it accordingly. Make sure the interleave (stripe size), number of columns, and allocation unit size are set correctly to get the best IO performance; this is a critical step before creating the databases and should be done with care. Always use a PowerShell script to create the storage spaces, as some options are not available in the GUI.

13 Disk Formatting Considerations
Perform a quick format of the data disk when formatting in Windows Disk Management
The Windows default file allocation unit (FAU) size is 4 KB
Change the FAU size to 64 KB when formatting SQL Server data disks
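A sketch of initializing and formatting a fresh data disk with a 64 KB allocation unit in one pipeline (Format-Volume performs a quick format by default; the volume label is hypothetical):

# Initialize any raw disk, create one max-size partition, and quick-format it
# as NTFS with a 64 KB allocation unit for SQL Server data files.
Get-Disk | Where-Object PartitionStyle -eq 'RAW' |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem NTFS -AllocationUnitSize 65536 -NewFileSystemLabel "SQLData" -Confirm:$false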

14 Database File Placement
Option 1: Create a single striped volume using Storage Spaces across multiple disks and place database and log files on this volume
Option 2: Create separate volumes, each composed of different data disks, to achieve specific IO performance, and place your data and log files accordingly

Depending on how you configure your storage, place the data and log files for user and system databases so that you meet your performance goals. Guidance for placing database files when using SQL Server in Azure Virtual Machines:
Option 1: Create a single striped volume using Windows Server Storage Spaces leveraging multiple data disks, and place all database and log files in this volume. All your database workloads share the aggregated IO throughput and bandwidth provided by these disks, the placement of database files is simplified, and individual database workloads are load balanced across all available disks, so you do not need to worry about single-database spikes or workload distribution.
Option 2: Create multiple striped volumes, each composed of the number of data disks required to achieve a specific IO performance level, and carefully place user and system database files on these volumes. For example, for an important production database with a high-priority, write-intensive workload, you can maximize database and log file throughput by segregating them on two separate 4-disk volumes (each volume providing around 2,000 IOPS and 100 MB/sec), with a further 4-disk volume hosting tempdb data and log files and another 4-disk volume hosting other minor databases. This option gives you precise file placement, optimizing the available IO performance.

15 Enable Page Compression
Compression reduces the size of the table
Use either row or page compression
In SQL Server 2014, leverage clustered columnstore indexes and columnstore archive compression
Compressed data minimizes IO and can improve performance
Compression may increase CPU consumption on the database server
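A sketch using Invoke-Sqlcmd (from the SQLPS module that ships with SQL Server); the MyDb database and dbo.Orders table are hypothetical. sp_estimate_data_compression_savings lets you check the expected gain before committing to the rebuild:

# Estimate the savings first, then rebuild the table with page compression.
Invoke-Sqlcmd -ServerInstance "." -Database "MyDb" -Query @"
EXEC sp_estimate_data_compression_savings 'dbo', 'Orders', NULL, NULL, 'PAGE';
ALTER TABLE dbo.Orders REBUILD WITH (DATA_COMPRESSION = PAGE);
"@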

16 Enable Lock Pages in Memory
Grant the Lock Pages in Memory Windows policy right to the SQL Server service account
It prevents the system from paging the data to virtual memory on disk
The buffer pool cannot be paged out by Windows
Reduces IO and paging activity

Establish locked pages to reduce IO and paging activity. Lock Pages in Memory is a Windows policy that determines which accounts can use a process to keep memory allocations pinned in physical memory; it prevents the system from paging the data to virtual memory on disk. When the SQL Server service account is granted this user right, buffer pool memory cannot be paged out by Windows. For more information about enabling the Lock Pages in Memory user right, see How to: Enable the Lock Pages in Memory Option (Windows).
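The right itself is granted through Local Security Policy (secpol.msc), not T-SQL. One hedged way to confirm it took effect after a restart is to search the error log for the startup message SQL Server writes when locked pages are in use:

# Look for "Using locked pages in the memory manager" in the current error log.
Invoke-Sqlcmd -ServerInstance "." -Query "EXEC xp_readerrorlog 0, 1, N'locked pages';"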

17 Instant File Initialization
Instant file initialization reduces the time required for initial file allocation
Grant the SQL Server service account the "Perform volume maintenance tasks" right in Local Security Policy
Applies only to data files, not log files
Security risk: data can still be read from a deleted file's space until it is overwritten

Consider enabling instant file initialization to reduce the time required for initial file allocation. It affects both user database and tempdb data files, and can improve the performance of operations involving database files, such as creating a database, restoring a database, adding files to a database, extending the size of an existing file, and autogrow. To take advantage of instant file initialization, grant the SQL Server (MSSQLSERVER) service account SE_MANAGE_VOLUME_NAME by adding it to the Perform Volume Maintenance Tasks security policy, then restart the SQL Server service. If you are using a SQL Server platform image for Azure, the default service account (NT Service\MSSQLSERVER) is not added to the Perform Volume Maintenance Tasks security policy; in other words, instant file initialization is not enabled in a SQL Server Azure platform image. There are security considerations for using this feature; for more information, see Database File Initialization and How and Why to Enable Instant File Initialization.
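A hedged way to verify the setting on this era of SQL Server: trace flag 3004 logs file zeroing operations and 3605 routes them to the error log, so creating a throwaway database should report zeroing only for the log file (which never uses instant file initialization) when the feature is enabled. IFICheck is a hypothetical database name:

# If IFI is on, only the log file reports zeroing in the error log.
Invoke-Sqlcmd -ServerInstance "." -Query @"
DBCC TRACEON (3004, 3605, -1);
CREATE DATABASE IFICheck;
EXEC xp_readerrorlog 0, 1, N'zeroing';
DROP DATABASE IFICheck;
DBCC TRACEOFF (3004, 3605, -1);
"@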

18 Autogrow and Autoshrink
Pre-size data and log files to prevent autogrowth
Keep autogrowth available only as a safety option
Set the autogrowth increment in MB instead of as a percentage
Do not set autogrow to 1024 MB or 4096 MB for transaction logs
Disable autoshrink; it causes unnecessary overhead

Autogrow should be merely a contingency for unexpected growth; do not manage your day-to-day data and log growth with autogrow. If autogrow is used, pre-grow the file using the SIZE option. Make sure autoshrink is disabled to avoid unnecessary overhead that can negatively affect performance.
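A sketch of pre-sizing files and setting fixed-MB growth increments; MyDb and its logical file names are hypothetical:

# Pre-grow the files so autogrow stays a safety net, and grow in MB, not percent.
Invoke-Sqlcmd -ServerInstance "." -Query @"
ALTER DATABASE MyDb MODIFY FILE (NAME = MyDb_data, SIZE = 51200MB, FILEGROWTH = 256MB);
ALTER DATABASE MyDb MODIFY FILE (NAME = MyDb_log,  SIZE = 8192MB,  FILEGROWTH = 512MB);
"@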

19 Virtual Log Files
VLFs are created inside your transaction log files
Too many VLFs can impact performance
VLFs are created according to the growth increment:

  Growth increment      VLFs created
  < 64 MB               4
  64 MB to 1 GB         8
  > 1 GB                16

To reduce the number of VLFs, shrink the log and regrow it in larger chunks
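DBCC LOGINFO returns one row per VLF, so counting its rows gives the current VLF count; a quick sketch, with MyDb again hypothetical:

# Each row returned by DBCC LOGINFO is one virtual log file.
$vlfs = Invoke-Sqlcmd -ServerInstance "." -Database "MyDb" -Query "DBCC LOGINFO;"
"VLF count: $($vlfs.Count)"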

20 Network Considerations
Host virtual machines in the same cloud service to enable direct communication between them via internal IP addresses
Use Azure Virtual Network for virtual machines that reside in different cloud services
Load balance multiple virtual machines in the same cloud service via public virtual IP addresses
Chatty applications pay a network overhead when connecting to an Azure VM; put the application and SQL Server on the same VM if possible

Network latency in Azure Infrastructure Services can be higher than in a traditional on-premises environment for several reasons, such as virtualization, security, and load balancers. This means that reducing network round trips between application layers in a cloud solution has a strong positive impact on performance compared to on-premises solutions. For "chatty" applications, where communication between application layers and components is frequent, we recommend consolidating multiple application layers on the same virtual machine; this reduces the number of tiers and the amount of communication the application needs, resulting in better performance.

21 Other SQL Server Considerations
Set the CPU power option to High Performance instead of Balanced
Enable trace flags 2371, 1118, and 4199
Enable the "optimize for ad hoc workloads" setting
Configure tempdb data files per the standard algorithm, with fixed sizes
Uninstall any unused components on the server
Enable backup compression
For data warehouses, enable trace flag 610
(A sketch applying these settings follows the notes below.)

Set the CPU power option to High Performance instead of Balanced. By default, Windows Server 2008 R2 uses the Balanced power plan, which conserves energy by scaling processor performance to current CPU utilization. On Intel X5500 and other recent CPUs, the clock is throttled down to save power (processor P-states) and only increases when CPU utilization reaches a certain point. The Minimum and Maximum Processor Performance State parameters are expressed as a percentage of maximum processor frequency, with a value in the range 0 to 100. If a server requires ultra-low latency, invariant CPU frequency, or the very highest performance levels, as a database server does, it is not helpful for the processors to keep switching to lower-performance states.

Optimize for ad hoc workloads. Every ad hoc query generates an execution plan in the procedure cache, one plan per distinct ad hoc query. Most of these plans are never reused, yet they stay in the procedure cache taking up space; this is called procedure cache pollution, and it can cost the buffer pool a significant amount of space that could otherwise hold data and index pages. With this setting enabled, SQL Server creates a small plan stub instead of the complete execution plan. The stub is very small compared to the original plan, so you save procedure cache space when you have many ad hoc queries that are not reused. If an ad hoc query is re-executed, SQL Server creates the full execution plan and removes the stub.

Trace flag 4199. Microsoft updates SQL Server continuously, which makes it hard to keep track of all fixes and improvements, and changes to the Query Optimizer can significantly affect the performance of your applications, mostly for the better but sometimes negatively. Therefore, starting with Microsoft SQL Server 2000 Service Pack 3 (SP3), Microsoft adopted a policy that any hotfix that could potentially affect the execution plan of a query must be controlled by a trace flag. Except for fixes to bugs that can cause incorrect results or corruption, these hotfixes are turned off by default, and a trace flag is required to enable them. Test this trace flag in your development environment before implementing the change in production.

Trace flag 610. For data warehouse systems, we recommend enabling trace flag 610 to get minimal logging during bulk operations. SQL Server 2008 introduced this flag, which controls minimally logged inserts into indexed tables. Not every row inserted into a clustered index under trace flag 610 is minimally logged: when the bulk load operation allocates a new page, all rows sequentially filling that new page are minimally logged, but rows inserted into pages allocated before the bulk load, and rows moved by page splits during the load, are still fully logged. Some tables may therefore still see fully logged inserts. Where trace flag 610 does produce minimal logging, you should generally see a performance improvement, but as always with trace flags, test for your specific environment and workload.

Trace flag 1118. This trace flag switches tempdb allocations from single pages at a time (for the first 8 pages) to an immediately allocated extent (8 pages). It helps alleviate allocation bitmap contention in tempdb under a heavy load of small temp table creation and deletion.

Trace flag 2371. The auto update statistics feature of SQL Server relies on the number of rows changed or updated to determine whether a statistics update is needed: a table's statistics are automatically updated only when the number of changed rows exceeds a threshold (documented in a KB article). When a table becomes very large, the old threshold (a fixed rate of 20% of rows changed) may be too high, and the autostats process may not trigger frequently enough, which can lead to performance problems. SQL Server 2008 R2 Service Pack 1 and later introduce trace flag 2371, which lowers the threshold as the number of rows in the table grows. For example, with the trace flag active, update statistics triggers on a table with 1 billion rows after 1 million changes; without it, the same table would need 200 million changes.

Enable backup compression. Consider the SQL Server backup compression feature as well, to reduce the data transferred to Azure blob storage during database backup and restore operations.
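A hedged sketch of applying these settings with Invoke-Sqlcmd. DBCC TRACEON with -1 enables the flags globally but does not survive a restart; for a permanent setting, add -T1118;-T2371;-T4199 as startup parameters in SQL Server Configuration Manager:

# Enable the recommended trace flags for the running instance, then turn on
# optimize-for-ad-hoc and default backup compression via sp_configure.
Invoke-Sqlcmd -ServerInstance "." -Query @"
DBCC TRACEON (1118, 2371, 4199, -1);
EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1; RECONFIGURE;
EXEC sp_configure 'backup compression default', 1; RECONFIGURE;
"@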

22 Lesson Knowledge Check
Question: Which technique increases the overall IO throughput on Azure data disks?
Answer: Disk striping (Storage Spaces on Windows Server 2012 and higher; striped volumes on Windows Server 2008 R2)
Question: How does compression improve IO?
Answer: Page and columnstore compression reduce the IO footprint of the data, so fewer IO operations are needed and overall IO performance improves
Question: What are some storage account considerations for SQL Server?
Answer: Keep the storage account in the same datacenter as the VM, disable geo-replication, and place all data disks in the same storage account for faster recovery

23 © 2015 Microsoft Corporation

