An Introduction to SQL 2016 Query Store

1 An Introduction to SQL 2016 Query Store
David Postlethwaite Abstract Query Store is an exciting new feature in SQL Server 2016. It can automatically capture and store a history of queries, query execution plans and execution statistics, which makes troubleshooting performance problems caused by query plan changes much easier. In this session we will examine Query Store: its configuration, architecture, benefits and limitations, and how it can be used to solve performance problems. About the Author David Postlethwaite has been a SQL Server and Oracle DBA for Liverpool Victoria in Bournemouth, England for the last 7 years. He manages both Oracle and SQL Server, including the DBMS, SSIS, SSAS and Reporting Services. Before that he was a .NET developer and, way back in history, a Windows and NetWare administrator. He is an occasional blogger on gethynellis.com. 18/03/2017 David Postlethwaite

2 An Introduction to SQL 2016 Query Store
David Postlethwaite Liverpool Victoria LV= SQL and Oracle DBA MCSE 2014 Data Platform MCITP 2008, 2005 Oracle OCA 25 years IT Experience 7 years as DBA Blog: gethynellis.com Welcome Good afternoon. Welcome to this presentation, which is entitled “An Introduction to SQL Server 2016 Query Store”, so if you are expecting to hear something else you are in the wrong room. My name is David Postlethwaite. I am a senior SQL DBA for a large financial services company on the south coast of England. I have been working as a DBA for the last 7 years and currently manage both SQL Server and Oracle instances. Prior to that I was a developer using .NET, SQL, Access, FoxPro and Oracle, and way back in time I was a Windows and NetWare administrator. I am an occasional contributor to the blog on gethynellis.com

3 Agenda Overview The Benefits What it Can do For Us How it Works
Limitations and Best Practices Agenda Query Store is a new feature in SQL Server 2016 that is designed to help us troubleshoot query performance. My aim today is to explain how Query Store can help you with performance issues on your SQL Server. We’ll have a look at what Query Store is and what it can do for us; the benefits, and how it can solve real-world problems; how it works, and what goes on inside SQL Server when it’s running; and some of the limitations and best practices that you need to know if you intend to use it. I hope that by the end you will be inspired to give Query Store a try.

4 Have You Ever… Have You Ever? Before we get into Query Store let me start by asking you a question. Have you ever… * Experienced a system outage or degradation which you are expected to fix instantly? The database is responding slowly and the boss is standing over you demanding you fix it. He’s shouting “Get it fixed, we’re losing customers and money. Why don’t you know what’s wrong?” * Been asked why a database slowed down this morning? Why did it? What changed this morning? And what changed since then to make it go back up to speed? * Upgraded an application to the latest SQL version and suffered performance issues? Microsoft claim SQL 2016 is 30% faster, so why has your database suddenly gone slow after you moved it to the shiny new server? * Released new code and suffered a slowdown? New code should run faster, but when you’ve released it to production it’s going slower and you can’t see why. * Experienced unpredictable performance? One minute it’s fast, the next minute it has gone slow. What’s happening? * Wanted a way to track queries over time to see workload, workload patterns and changes from the baseline? Wanted to be able to answer quickly questions like “Why has this query gone slow? What was the execution plan this morning compared to now? Why has it changed?” * Wanted to force SQL to use a different execution plan to the one it has now? You’ve worked out SQL was using a different execution plan this morning. How can you make SQL use it this afternoon without rewriting code, and keep it that way? Then maybe today is your lucky day! Experienced a system outage or degradation which you are expected to instantly fix? Been asked why a database slowed down this morning? Upgraded an application to the latest SQL version and suffered performance issues? Released new code and suffered a slowdown? Experienced unpredictable performance? Wanted a way to track queries over time to see workload and workload patterns and changes from the baseline?
Wanted to force SQL to use a previous (faster) execution plan? Then today may be your lucky day

5 Troubleshooting before SQL 2016
Traces Can cause high overhead Must be started and stopped Extended Events A lot more events Similar issues to traces DMVs Most DMVs are either real time only or since the last restart Not organised by time Data is flushed when SQL is restarted Troubleshooting before SQL 2016 What tools have we used up to now to resolve those sorts of issues? * Many of us will have used Profiler to capture traces of what’s happening on the server. It’s been with us since SQL 2000 and it can capture query performance metrics, among other things, and log them to a table or file. It can provide very useful information for a wide range of issues. * But a trace can impose quite an overhead on the server. Not always, but often enough that Profiler isn’t the sort of thing you can run continuously on your server. * You’ll typically start it to capture some information, then stop it after a few minutes or an hour maybe. The trouble is, by the time you’ve set it up and turned it on you’re probably too late to capture the problem you wanted to look at; it will have mysteriously cleared and you’ll be no wiser as to what happened. * Extended Events were introduced with SQL 2008. * These were a major improvement on traces because they added a much wider range of events that could be captured. There are lots of things you can do with Extended Events that you couldn’t do with Profiler and traces. * But again there is an overhead, and you have to start the Extended Events session to capture the information you want, so many of the shortcomings of traces apply to Extended Events too. * Dynamic Management Views were introduced in SQL 2005. These provide system-related and performance-related data that you can easily query. DMVs provide a wealth of information but have a couple of drawbacks. * There is no concept of time; it’s either real-time data or data accumulated since the SQL Server was last restarted. * Your server may have been running for months but you can’t say “give me the data just from yesterday”.
* And all the data is lost when the server is restarted. So if your server crashes you will have no information about what happened prior to the crash. And if you have a cluster failover, which involves a server restart, you lose the data in the DMVs, probably at the moment you want it the most. Query Store is a new tool in your armoury to give you an extra layer of help.
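To illustrate that DMV limitation, here is a typical pre-2016 query against sys.dm_exec_query_stats. It is a sketch of the common pattern: the numbers are totals accumulated since each plan entered the cache, with no time dimension at all.

```sql
-- Top queries by total CPU since the plan was cached (pre-2016 approach).
-- Numbers reset whenever the plan is evicted or the server restarts;
-- there is no way to ask "what ran slowly yesterday morning?".
SELECT TOP (10)
       qs.total_worker_time / 1000 AS total_cpu_ms,
       qs.execution_count,
       qs.creation_time,            -- when the plan entered the cache
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                 ((CASE qs.statement_end_offset
                     WHEN -1 THEN DATALENGTH(st.text)
                     ELSE qs.statement_end_offset
                   END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
```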

6 What is Query Store? Workload Data Recorder for your Database
* In a nutshell, Query Store is basically a data recorder for your database. * Once enabled, it automatically captures and retains a history of the queries that run on your system, their execution plans and the runtime execution statistics. You can then use this information for troubleshooting performance problems. * The metrics and data are held in a series of new system tables inside the database. * This means that the information is available even after server restarts, which is a great improvement over the DMVs. * The information is automatically aggregated over a defined time interval, so we can now look at what happened an hour ago, or last night, or see trends over the last month, rather than just seeing the sum or average since the server started or what is happening right now. * It works for nearly all types of queries, including in-memory OLTP queries, which I believe weren’t captured by the DMVs. * It’s new in SQL Server 2016 and not available in older versions of SQL, and it is integrated into the SQL Server engine, so it is part of the normal processing and execution process. And the great news is that it is supported on all SQL 2016 Server editions, Enterprise down to Express. * It also has quite a limited performance impact, around 3-5% on average (so Microsoft says), so you can leave it running without causing too much of a problem on your server. * It is enabled at database level, so you can choose which databases on a server you want to collect data for, which means you don’t have to collect unnecessary data and can avoid unnecessary overhead. * It is integrated into SQL Server Management Studio. * It’s really easy to set up, and there are a series of built-in reports in Management Studio that allow you to analyse your queries and their performance. * It includes several catalog views and Extended Events to access the captured information and create your own reports and alerts, and some people have used these to create their own reports in Power BI.
Workload Data Recorder for your Database Automatically captures a history of queries, plans and their statistics Data stored in system tables Survives crashes, failovers and reboots Aggregated over a defined time window Can capture in-memory as well as native queries All 2016 editions, even Express! Lightweight - performance impact of 3-5% Enabled at database level Integrated into Management Studio Simple and intuitive user interface Plenty of catalog views and Extended Events

7 What can Query Store do for us?
Easily identify and fix performance issues Identify top resource-consuming or expensive queries Compare activity over different time frames Find queries that exceed a certain duration Find frequently failing queries Find queries that need to be parameterised Best of All Find regressed (slowed) queries Identify queries that have many plans Force SQL to use a previous execution plan What can Query Store do for us? Using the built-in reports you can quickly and easily find the worst culprits that are causing performance problems on your server. For instance you can: Identify the most expensive queries in your database by CPU, execution time, memory consumption, reads and writes and so forth. Analyse the resource usage patterns for a particular database; see when the busiest times are and what queries are being run at those times. Determine the number of times a query was executed in a given period. Audit the history of query plans for a given query, and see if you have queries that are changing execution plan regularly. Identify all the non-parameterised queries, so you can fix the code to use parameterised queries. And best of all: you can find all the queries that have slowed down, what Microsoft calls regressed (in other words, where a new execution plan has been created which is not as efficient as the older one). You can then easily force the server to use a previous query plan, which will give you back the performance you had “yesterday” without having to rewrite any code.
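Forcing a previous plan, for example, comes down to a single stored-procedure call once you know the query and plan IDs from the Query Store reports or catalog views. The IDs below are placeholders for illustration:

```sql
-- Force query 42 to always use plan 7 (IDs come from the Query Store
-- reports or from sys.query_store_query / sys.query_store_plan;
-- the values here are examples only).
EXEC sys.sp_query_store_force_plan @query_id = 42, @plan_id = 7;

-- And to remove the forcing again later:
EXEC sys.sp_query_store_unforce_plan @query_id = 42, @plan_id = 7;
```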

8 Enabling Query Store SSMS
DEMO The easiest way to explain it is to show you Query Store in action. Query Store works on Azure SQL Database as well as SQL Server 2016. If you weren’t aware, all new features and bug fixes are put into Azure SQL Database first and then made available via a cumulative update or service pack for SQL 2016. For Microsoft, Azure now comes first. Query Store can be enabled using SSMS or using T-SQL. We’ll do it using Management Studio, since a GUI is slightly more interesting than staring at T-SQL code. Right-click the database, select Query Store, turn it on, click OK, and Query Store is collecting data. It’s that simple. There are a number of parameters that you can choose, which we’ll discuss in more detail in a moment. General: Operation Mode Off, Read Only or Read/Write. Monitoring: Data Flush Interval How often data is written to disk Default 900 s. Statistics Collection Interval Metrics will be aggregated on this interval Default 60 mins. Query Store Retention: Maximum Storage Size Maximum amount of space the Query Store will consume; when this is hit, it turns read only Default 100 MB. Query Capture Mode What queries to capture (ALL, AUTO, NONE). Size Based Cleanup Mode Controls whether the clean-up process will automatically activate when the total amount of data gets close to the maximum Default Off. Stale Query Threshold How long to keep old queries in the store Default 30 days. NOT in SSMS: Maximum Plans per Query Default 200. Enabling Query Store SSMS

9 Enabling Query Store – T-SQL
Naturally, you can also enable and configure Query Store using T-SQL. There is a simple ALTER DATABASE command to enable it, which will use the defaults for the parameters. ON or OFF: Off, it’s not collecting data and the data is not available for queries; On, it’s in one of read/write or read-only mode. Operation Mode: Read/Write is the normal mode; here it will collect data and update Query Store with your performance data. Read Only: in this case no more data is added to Query Store, but it can still be read and queried. You might use this to freeze the performance data for offline analysis. Maximum Storage Size: this is the maximum amount of space that Query Store will use. Query Store does not auto-grow, so when this maximum is reached Query Store becomes read only and stops collecting any more data. Default 100 MB. Interval Length (Statistics Collection Interval): the time window that the metrics will be aggregated on. Default 60 mins. Data Flush Interval: initially the data that Query Store captures is stored in memory, then after a period it is written asynchronously to disk. This option determines how often it is flushed to disk. Default 900 s (15 mins). Query Capture Mode: specifies the query capture policy for the Query Store (ALL, AUTO, NONE). ALL captures every single query that is run on the database. AUTO tries to decide which queries are worth capturing; it will try to ignore infrequently executed or insignificant queries. You have no say over what that definition is; it is known only to Microsoft.
NONE won’t capture any NEW queries. Default ALL. Clean-up Policy: Stale Query Threshold defines how long to keep old queries in the store. Default 30 days. Size Based Cleanup Mode controls whether the clean-up process will automatically activate and start deleting the oldest data when the total amount of data gets close to the maximum. Default Off. It is strongly recommended to activate size-based clean-up to make sure that Query Store always runs in read-write mode and collects the latest data. There is one parameter which is NOT in SSMS: Maximum Plans per Query. Default 200. The default parameters are good for a quick start, but you should monitor how Query Store behaves over time and adjust its configuration accordingly.
ALTER DATABASE WideWorldImporters SET QUERY_STORE = ON; -- or OFF
GO
ALTER DATABASE WideWorldImporters SET QUERY_STORE (
    OPERATION_MODE = READ_WRITE,                        -- read write, read only
    MAX_STORAGE_SIZE_MB = 100,                          -- maximum size of the query store
    INTERVAL_LENGTH_MINUTES = 60,                       -- interval to aggregate statistics
    DATA_FLUSH_INTERVAL_SECONDS = 900,                  -- frequency data is written to disk
    QUERY_CAPTURE_MODE = ALL,                           -- type of queries to capture (all, auto, none)
    CLEANUP_POLICY = (STALE_QUERY_THRESHOLD_DAYS = 30), -- number of days to retain data in the query store
    SIZE_BASED_CLEANUP_MODE = AUTO,                     -- controls the clean-up process
    MAX_PLANS_PER_QUERY = 200                           -- max number of plans kept for each query
);
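When monitoring how Query Store behaves over time, the current configuration, and the actual state, which can differ from the desired state if, say, the store has filled up and gone read only, can be checked from the sys.database_query_store_options catalog view:

```sql
-- Check the Query Store configuration and how full it is.
SELECT actual_state_desc,              -- OFF / READ_ONLY / READ_WRITE
       desired_state_desc,             -- what you asked for
       readonly_reason,                -- why it went read only, if it did
       current_storage_size_mb,
       max_storage_size_mb,
       interval_length_minutes,
       flush_interval_seconds,
       query_capture_mode_desc,
       size_based_cleanup_mode_desc,
       stale_query_threshold_days
FROM sys.database_query_store_options;
```

Run this in the context of the database you enabled Query Store on.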

10 Query Store Architecture
Query Store Architecture So what happens under the bonnet? Query Store is integrated into the SQL Server engine, so it is part of the normal processing and execution process. When a query gets compiled and executed for the first time, the query text and query plan are sent to the plan store, and the runtime statistics of the query are sent to the runtime stats store. Each further time the query is executed, the runtime statistics are updated and aggregated over the aggregation period in the runtime stats store. Initially the data is saved to in-memory objects. When the flush interval is reached, the data is placed into an asynchronous queue and an internal process then writes the data in the queue to the tables. By using an asynchronous queue, Query Store doesn’t have to wait for the data to be written to disk; the queue manages the write schedule, allowing Query Store to continue processing new queries without having to wait for the data write process to complete. The default flush interval is 15 minutes, which gives an average performance overhead of 3-5% (so Microsoft claims). If you lower the interval so it flushes more often, there will be a higher impact on performance. If you set a longer interval there will be less overhead, but there is more risk of losing data that is left in memory in the event of a crash. In practice, the query text and execution plans are sent to the async queue straight away to avoid losing this information; it’s only the runtime statistics that are kept in memory until the flush interval is reached. (A lot of discussions and blogs that I’ve read don’t always mention this, but imply both queues are only flushed at the flush interval.) Some articles that I’ve read also suggest that new execution plans are written to disk sooner than the flush interval. And when you enable Query Store for the first time, or after purging it, it is common to see flushes occur more often, since more new execution plans are being encountered.
So the flush-to-disk interval should be treated as a maximum time rather than a specific time. [Slide diagram: an executed SQL query sends its Query Text and Query Plan directly to the Plan Store, and its Runtime Stats to the Runtime Stats Store via an async queue on the data flush interval.] Query Store captures data in memory to minimise I/O overhead Data is persisted to disk asynchronously in the background
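If you ever need the in-memory portion on disk straight away, for instance before a planned failover or restart, you can force a flush rather than waiting for the interval. This is a sketch of the built-in procedure call:

```sql
-- Force Query Store's in-memory data to be persisted to disk now,
-- rather than waiting for DATA_FLUSH_INTERVAL_SECONDS to elapse.
-- Run in the context of the database whose store you want flushed.
EXEC sys.sp_query_store_flush_db;
```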

11 Query Store Internals
Let’s go a level deeper. Every individual, unique query text is captured and stored as a separate entry. If you have multiple queries in your stored procedure or function, each will be tracked as an individual query. * Each entry consists of the query text and a unique combination of context settings, i.e. SET options such as ANSI NULLS or date format, or the object it was called from. If you execute the same query text with a different set of context values, then Query Store will record it as a separate entry in the query table. So if the exact same query text is run from two different stored procedures, there will be one entry in the query text table but two entries in the query table. Importantly, the exact text is used to define a query. So if the exact same query runs in several different places but one has extra characters within it, such as comments, they will be treated as different entries in the query text table. * Every plan that is generated and executed during the lifetime of the query is stored separately, so you will have runtime statistics for each individual plan. This means that if the query was executed with different plans, you’ll be able to tell a good plan from a bad plan, and see the resource consumption of each individual query plan. * Inside the runtime statistics, Query Store aggregates the runtime statistics for a query plan. This means that every unique query plan that was executed within the time window is held as one aggregated row containing all the different metrics, such as CPU consumption, memory, duration, reads and writes. If the query is run again in the same interval, that row is updated; no new rows are created. * You can access this data using the catalog views. So rather than using the GUI, you can query these views to gain better insight into your plans.
There are plenty of examples of such queries on the Internet and I’ve listed some at the end of this presentation. When you query the Query Store, either through SSMS or using the catalog views, the data returned comes from both memory and disk, and the results are presented as one query result. [Slide diagram, catalog views under sys.: Plan Store – query_store_query_text (one row per query text), query_context_settings (context settings), query_store_query (one row per query text per plan-affecting option, 1:N from query text), query_store_plan (one row for each plan for each query, 1:N from query). Runtime Store – query_store_runtime_stats_interval (time intervals), query_store_runtime_stats (one row per plan per time interval, 1:N from plan).]
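As a sketch of the kind of catalog-view query described above, this joins the four main views to list the slowest plans by average duration. avg_duration is stored in microseconds, and averaging the per-interval averages is a rough-and-ready simplification:

```sql
-- Top 10 plans by average duration across the captured intervals.
SELECT TOP (10)
       qt.query_sql_text,
       q.query_id,
       p.plan_id,
       SUM(rs.count_executions)      AS executions,
       AVG(rs.avg_duration) / 1000.0 AS avg_duration_ms  -- rough: average of interval averages
FROM sys.query_store_query_text      AS qt
JOIN sys.query_store_query           AS q  ON q.query_text_id = qt.query_text_id
JOIN sys.query_store_plan            AS p  ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats   AS rs ON rs.plan_id = p.plan_id
GROUP BY qt.query_sql_text, q.query_id, p.plan_id
ORDER BY avg_duration_ms DESC;
```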

12 Compiling and the Plan Store
Compiling and the Plan Store Query Store populates its data as follows: * If the query text is not stored, i.e. this is a new statement seen for the first time, then Query Store creates an entry in each table. * If the query text was already in the Query Store and it is executed again, then just the query and plan entries are updated with the new information. * If the query text comes into the store under a new context (an ANSI setting change or date format, etc.), then a new context settings combination is recorded and a new query entry and new plan are generated. * If the query text is in the Query Store but a new plan is generated, then the query is updated and a new plan created.

13 Execution and Runtime Stats
Execution and Runtime Stats For the execution statistics: * When a plan that is in the store is executed in a new time interval, Query Store starts a new interval, generates the runtime statistics and records this information in the runtime stats table; the last execution time is also written to the plan store. * If the query is run again in the same interval, the statistics for that query are updated; no new rows are created. So if the query was run 10,000 times with the same plan in that time interval, there will be only one record in the runtime stats. * The data stays in memory for the period defined by the flush interval and is then written to disk using the asynchronous queue. (Stats interval: the aggregation period for a plan’s statistics. Flush interval: the time before data is written to disk.)

14 Query Store Schema Query Text Query Plan Runtime Stats
Text_ID Query_Text Statement_handle Query Query_ID Text_ID Query_Hash Context_ID Execution Time – first, last Compilation Stats Plan Plan_ID Query_ID Plan_xml Compile Time – first, last Compile Count Compile Duration – avg, last Compatibility Level Runtime Stats Plan_ID Interval_ID Execution Count Execution Time – first and last Execution Type (completed, aborted) Duration – min, max, last, total, avg, stdev CPU – min, max, last, total, avg, stdev Logical I/O – min, max, last, total, avg, stdev Physical Reads – min, max, last, total, avg, stdev CLR – min, max, last, total, avg, stdev DOP – min, max, last, total, avg, stdev Memory Used – min, max, last, total, avg, stdev Row Count – min, max, last, total, avg, stdev Query Store and Stats This is some of the information you’ll find available in the different views. I couldn’t fit every column up here, but these are the main ones. On the query plan side you have the query text, query, context settings and plan views. On the runtime stats side you can see lots of metrics aggregated over the time window. You’ll see that it captures the statistics for aborted queries as well as those that completed successfully, both client-aborted queries and those that failed due to a server error, so you can analyse failed queries as well, and maybe see why they failed. Query_context_settings Context_ID Set Options Runtime Stats Interval Interval_ID Start Time End Time

15 What is Tracked? Included: any T-SQL DML statement – Select, Insert, Update, Delete. NOT Included: DDL, Bulk Insert, DBCC, Backup, etc. What is Tracked by Query Store? Any DML statement, regardless of whether it’s cached or has a recompile query hint; ad-hoc queries; even internal queries. Everything else is ignored.

16 Use Cases Identify Execution patterns Activity at different times
Queries with Excessive Resource Consumption Frequently Failed Queries Find Regular Adhoc Queries Identify Regressed Queries Queries with lots of different plans Typical Use Cases Typically, Query Store usage divides into two types. Firstly, looking at performance and execution patterns: What time of day do we see the most activity? When is our busiest time: morning, afternoon or evening? How has performance varied over the last month? Is it getting worse or better? Which queries are causing the biggest performance hit on our database and may need looking at? Are there queries that get cancelled by the user or by the system? Maybe work out why: lack of resources, or users doing silly things. Maybe you have one of those reporting tools that allows users to design their own query without any understanding of the database, and they end up running an impossibly big query. Are there people running ad-hoc queries? Are there non-parameterised queries which need converting? Secondly, using Query Store to find bad execution plans, or too many execution plans, and fix them by forcing the query optimiser to use a previous “good” plan.

17 Demo Regressed query Forcing plan
Let’s have a demo of fixing regressed queries
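For reference, the kind of query the demo leans on, finding statements that have been executed with more than one plan so their per-plan statistics can be compared, looks something like this sketch:

```sql
-- Queries that have more than one plan in the Query Store:
-- candidates for regression analysis and, possibly, plan forcing.
SELECT q.query_id,
       qt.query_sql_text,
       COUNT(DISTINCT p.plan_id) AS plan_count
FROM sys.query_store_query          AS q
JOIN sys.query_store_query_text     AS qt ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan           AS p  ON p.query_id = q.query_id
GROUP BY q.query_id, qt.query_sql_text
HAVING COUNT(DISTINCT p.plan_id) > 1
ORDER BY plan_count DESC;
```

From here you would compare the runtime stats of each plan_id and force the better one with sp_query_store_force_plan.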

18 Use Case: Upgrading SQL
SQL Server 2016 Upgrade Path Upgrade the instance Don’t change the DB compatibility level Enable Query Store Run the workload to capture all query plans and get a baseline Change the DB compatibility level to 130 Run the workload again Use Query Store to find plan regressions Force plans Fix the regressed queries Use Case: Upgrading SQL One use case that Microsoft quote is that of moving a database from an old version of SQL to SQL 2016. They recommend leaving the compatibility level at the previous version, then allowing Query Store to run and capture a baseline of query performance. Then change the compatibility level to SQL 2016 and use Query Store to alert you to any queries which have degraded. In SQL 2014 the query optimizer’s cardinality estimator was completely rewritten, which caused a number of issues for people when they upgraded, so this is something worth considering if you are upgrading from earlier versions.
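Sketched in T-SQL, the upgrade sequence above amounts to the following. The database name and the starting compatibility level are examples:

```sql
-- After restoring/attaching on the SQL 2016 instance, keep the old
-- compatibility level and start capturing a baseline.
ALTER DATABASE MyAppDB SET COMPATIBILITY_LEVEL = 110;  -- e.g. coming from SQL 2012
ALTER DATABASE MyAppDB SET QUERY_STORE = ON;

-- ... run a representative workload to baseline the plans ...

-- Then switch to the new optimizer behaviour and compare.
ALTER DATABASE MyAppDB SET COMPATIBILITY_LEVEL = 130;  -- SQL Server 2016

-- ... run the workload again; use the Regressed Queries report
-- or the catalog views to find and force any degraded plans ...
```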

19 Use Case: Upgrading Application
Before making a major change to production Run the workload to capture all query plans and get a baseline Upgrade the application Run the workload again Use Query Store to find plan regressions Force plans Fix the regressed queries Use Case: Upgrading an Application Another example that Microsoft quote is using Query Store to monitor updates to your application. Again, use Query Store to capture a baseline for your queries before upgrading, then use it to alert you to any queries which have degraded. This may require a bit more effort, because if the query text has changed, even by adding a new comment in the middle of a DML statement, Query Store will treat it as a new query. So you’ll need to use the catalog views and procedures to make the comparison.

20 Points to Note Enabled at Database Level
No way to set an instance default Cannot be enabled for master or tempdb Queries must run in the context of the database Does not work on read-only databases Including read-only AG replicas Lack of Control Multiple DBAs could change settings Permissions db_owner to configure VIEW DATABASE STATE to view reports Limitations There are plenty of things to be aware of when you start to use Query Store; not so much limitations as things you must consider. First off, it is enabled at the database level; it is not a server-wide setting. This can be seen as a benefit, because you can limit the databases you want to manage and so reduce the overhead caused by Query Store. But if you want to manage another database you must remember to turn it on, and if you build a new instance you must remember to turn on Query Store for the databases that you wish to monitor. Also, it cannot be enabled for either master or tempdb. People have tried; it just doesn’t work. The reason this is important is that the query is captured in the context of the database. If you connect to master and run a query against your enabled database, Query Store will not capture the information. You must do a “USE database” command to change context to that database for Query Store to capture the information. So the “default database” setting for logins may become very important. You cannot capture read-only databases, and this includes replicas such as Always On Availability Group secondaries. Many people use Always On secondary databases in read-only mode for reporting purposes; you won’t be able to capture the usage and statistics of these. There is no access control over the Query Store itself: multiple DBAs could change the settings without reference to what the others may have done, so management could be difficult. db_owner permission allows you to configure Query Store and force execution plans. To just view the Query Store reports you require VIEW DATABASE STATE.

21 Points to Note Space Must be Managed Parameters to Check
Limitations 2 There is no auto-growth setting for Query Store like you find on a SQL database file. You have to set the maximum size that you want the Query Store to grow to, and you must keep an eye on its size, because if it reaches its maximum it automatically becomes read only and no more data will be added. You may find that Query Store isn’t collecting data just at the time you really need it. You will have to keep a balance between the size and how much history you really require. Check these parameters to ensure you don’t run out of space. Max Size: 100 MB may not be sufficient if your workload generates large numbers of different queries and plans, or if you want to keep query history for a longer period of time. Statistics Collection Interval: the aggregation interval of your runtime statistics directly affects the amount of data in the runtime statistics table. The default is 1 hour, which gives one row per query per plan per hour. The lower the value, the finer the granularity and the larger the Query Store will grow; if you change from 60 minutes to 1 minute you’ll have 60 times the amount of data. Stale Query Threshold defines the retention period for inactive queries. By default, Query Store is configured to keep data for inactive queries for 30 days, which may not be long enough; but keeping more history needs more space. Size Based Cleanup Mode is the parameter that can stop your Query Store going read only. If this is turned on, Query Store will purge the oldest data when the store gets to 90% of its maximum size, deleting data to get it down to 80% of the maximum. By default this is off, but Microsoft strongly recommends that it is activated, to make sure that Query Store always runs in read-write mode and collects the latest data. You’ll have to make sure that the maximum size is big enough not to lose the data you want.
It will just remove the oldest data; it won’t distinguish what is worth keeping. Query Store Capture Mode determines what queries are captured. All, which is the default, captures every single query, regardless of how infrequent or insignificant. Auto just captures what it thinks are queries worth capturing; infrequent, insignificant and ad-hoc queries are ignored. How it does this and what defines an insignificant query is not published, so you may miss some queries that you really wanted. None means Query Store stops capturing new queries; it will only continue to capture statistics for queries it already knows about but won’t add any new ones. Space Must be Managed When Query Store reaches capacity it turns read only May not have the data you require when you need it Balance space consumed against history required Parameters to Check Max Size Statistics Collection Interval Stale Query Threshold Size Based Cleanup Mode – Off, Auto Query Store Capture Mode – All, Auto, None
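A simple space check can be built on sys.database_query_store_options. This is a sketch; the 90% alerting threshold is just an example level, not anything Query Store enforces for you:

```sql
-- Rough space check: warn when Query Store is close to turning read only.
SELECT current_storage_size_mb,
       max_storage_size_mb,
       CAST(100.0 * current_storage_size_mb / max_storage_size_mb
            AS decimal(5, 1)) AS percent_used,
       CASE WHEN current_storage_size_mb >= 0.9 * max_storage_size_mb
            THEN 'Grow MAX_STORAGE_SIZE_MB or enable SIZE_BASED_CLEANUP_MODE'
            ELSE 'OK'
       END AS advice
FROM sys.database_query_store_options;

-- And if you need to start afresh, the store can be emptied entirely:
-- ALTER DATABASE CURRENT SET QUERY_STORE CLEAR;
```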

22 Points to Note Gathering Too Much or Too Little Data Who, What or Where
Queries from different sources give multiple entries
Queries with multiple literal values give multiple entries
Queries must run in the context of the database
Linked server queries are only logged in the source database
Clean-up mode too short
No information provided related to waits
One aggregation interval
Who, What or Where
Who ran a particular query
Which programs and/or servers did the queries execute from
Limitations 3 Following on from this, are we capturing the data we need correctly? Query Store reads all the text of a query, including any comments. If you have the same query in different places in your code, one with comments and one without, they will be treated as two different queries. Likewise, if the exact same query runs in two different locations it gets recorded as two different queries. The default reports treat these as separate entries, which makes comparison more difficult; you may have to design your own custom report that joins the entries based on the same query text. If you run non-parameterised queries, each query with different literal values will be treated as a separate query. In these cases, rather than gathering aggregated data about a query, Query Store will split each one into separate data. There is a query that can show you all of your non-parameterised queries: find them and get them fixed. We've already mentioned that you must run the query in the context of the database, otherwise Query Store will not recognise it. If you are running queries across multiple databases or via linked servers you have to ensure they run in the correct context to capture the information. And as mentioned on the previous slide, if your clean-up policy is too short you may not be keeping enough history to see regressed queries. There is also no information on wait stats included, which may be of use to you. And you can only have one aggregation interval, which may not work for every query.
There is no context to the data in Query Store. You know when a query ran and what resources it used, but it won't tell you who ran it, which program ran it or where it was run from. You won't be able to blame anyone for causing your system to go slow!
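The session mentions a query that shows all your non-parameterised queries without giving it; a sketch of one way to write it, using the query_parameterization_type_desc column of sys.query_store_query, is:

```sql
-- Sketch: list queries Query Store recorded as not parameterised,
-- so each literal variation is stored as a separate query.
SELECT q.query_id,
       q.query_parameterization_type_desc,
       qt.query_sql_text
FROM sys.query_store_query AS q
JOIN sys.query_store_query_text AS qt
    ON q.query_text_id = qt.query_text_id
WHERE q.query_parameterization_type_desc = 'None'
ORDER BY q.query_id;
```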

23 Points to Note Data is stored in primary filegroup
I/O contention with user data
Longer database backups
Forcing an execution plan may not be the answer
The fix may be temporary
What if the schema changed?
What if your query changes slightly?
What if the data volumes change?
What if significant time has passed?
Worry about overly "pinned" systems degrading over time
Limitations 4 The Query Store data is stored in the primary filegroup; you cannot change this. If you have your user tables in the primary filegroup as well then you may get file contention, with Query Store trying to write to the database at the same time as the users are trying to update their data. The database will also naturally be larger for backups and restores. Forcing execution plans is obviously what Microsoft has been promoting, and it does offer a quick fix, but is it a long-term solution? If you start using Query Store to force the optimizer to use a different query plan, you must go back and check that the plan is still valid. What if a significant amount of data has changed in the table, or a new index has been added? You should see forcing a plan as a quick fix that buys you time to find a long-term solution, and make sure you periodically review which forced plans you have, otherwise you could be making the problem worse. Query Store is clever enough to see when a forced query plan cannot be used (for example after dropping an index which the forced plan relies on); it will use a new query plan instead of failing the query and breaking the application.
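To support the periodic review suggested above, a sketch that lists the plans currently forced, and whether forcing has ever failed, using the is_forced_plan and last_force_failure_reason_desc columns of sys.query_store_plan:

```sql
-- Sketch: review which plans are currently forced and whether forcing failed.
SELECT q.query_id,
       p.plan_id,
       p.last_force_failure_reason_desc,
       p.force_failure_count,
       qt.query_sql_text
FROM sys.query_store_plan AS p
JOIN sys.query_store_query AS q ON p.query_id = q.query_id
JOIN sys.query_store_query_text AS qt ON q.query_text_id = qt.query_text_id
WHERE p.is_forced_plan = 1;
```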

24 One More Point Alter. Don’t Drop Don’t Rename the Database
Query Store records the containing object id
When the containing object is recreated, a new query entry will be generated for the same text
Limits the ability to track query performance over time
Uses more space
Use ALTER instead (or the new SP1 command CREATE OR ALTER)
Don't Rename the Database
Breaks forced execution plans
Alter, don't Drop If you release new code it's really important that you alter the procedure rather than drop it. If you recreate a procedure then you'll end up with a new object ID for your query, even if the text is exactly the same, so you won't be able to compare new and old performance so easily. Don't Rename the Database If you do then any forced execution plans will break, and Query Store will use whatever the query optimiser comes up with.
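A sketch of the ALTER-not-DROP advice, using a hypothetical procedure and table: CREATE OR ALTER (available from SQL Server 2016 SP1) alters the procedure in place when it already exists, so its object id, and therefore its query history, is preserved.

```sql
-- Hypothetical procedure name and table: CREATE OR ALTER keeps the same
-- object_id on redeployment, so Query Store keeps tracking the same queries.
CREATE OR ALTER PROCEDURE dbo.usp_GetOrdersByCustomer
    @CustomerID int
AS
BEGIN
    SET NOCOUNT ON;
    SELECT OrderID, OrderDate
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID;
END;
```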

25 Summary Benefits of Query Store Easy to use
Embedded within the database engine so nothing is missed
Database specific, not server specific
Data not lost on restart
Data aggregated over time
Data is captured at query level, not batch level
Automatic storage management
Supports natively compiled procedures and in-memory OLTP workloads
Highly customizable
Rich set of statistics to look at many types of problems
Benefits So to summarise: Query Store is really easy to turn on and, with its built-in reports, makes it very easy to see the performance of your database. You can identify the top queries (by execution time, memory consumption, etc.) over a certain period, determine the number of times a query was executed in a given time window, audit the history of query plans for a given query, and analyse resource usage patterns. And you can fix queries that have recently regressed in performance due to execution plan changes. Even though it is possible to perform many of the same actions in earlier SQL Server versions, it's not that easy. Query Store makes these actions far easier and available to everyone, without needing to rewrite any queries.

26 Any Questions Conclusion Q & A Watch again on YouTube
Blog: gethynellis.com Watch again on YouTube Any Questions Hopefully this has given you a head start when you start looking at SQL 2016

27 Catalog Views and Procedures
sys.query_store_query_text
sys.query_store_query
sys.query_context_settings
sys.query_store_plan
sys.query_store_runtime_stats
sys.query_store_runtime_stats_interval
sys.database_query_store_options
sp_query_store_consistency_check
sp_query_store_flush_db
sp_query_store_force_plan
sp_query_store_remove_plan
sp_query_store_remove_query
sp_query_store_reset_exec_stats
sp_query_store_unforce_plan

SELECT name, type_desc
FROM sys.all_objects
WHERE name LIKE '%query_store%' OR name = 'query_context_settings'

You can find descriptions of the stored procedures and of the catalog views in the Microsoft documentation. SQL Server requires VIEW SERVER STATE permission on the server.

28 Extended Events There are 19 extended events
query_store_background_task_persist_started - Fired if the background task for Query Store data persistence started execution
query_store_background_task_persist_finished - Fired if the background task for Query Store data persistence is completed successfully
query_store_load_started - Fired when query store load is started
query_store_db_data_structs_not_released - Fired if Query Store data structures are not released when feature is turned OFF.
query_store_db_diagnostics - Periodically fired with Query Store diagnostics on database level.
query_store_db_settings_changed - Fired when Query Store settings are changed.
query_store_db_whitelisting_changed - Fired when Query Store database whitelisting state is changed.
query_store_global_mem_obj_size_kb - Periodically fired with Query Store global memory object size.
query_store_size_retention_cleanup_started - Fired when size retention policy clean-up task is started.
query_store_size_retention_cleanup_finished - Fired when size retention policy clean-up task is finished.
query_store_size_retention_cleanup_skipped - Fired when starting of size retention policy clean-up task is skipped because its minimum repeating period did not pass yet.
query_store_size_retention_query_deleted - Fired when size based retention policy deletes a query from Query Store.
query_store_size_retention_plan_cost - Fired when eviction cost is calculated for the plan.
query_store_size_retention_query_cost - Fired when query eviction cost is calculated for the query.
query_store_generate_showplan_failure - Fired when Query Store failed to store a query plan because the showplan generation failed.
query_store_capture_policy_evaluate - Fired when the capture policy is evaluated for a query.
query_store_capture_policy_start_capture - Fired when an UNDECIDED query is transitioning to CAPTURED.
query_store_capture_policy_abort_capture - Fired when an UNDECIDED query failed to transition to CAPTURED.
query_store_schema_consistency_check_failure - Fired when the Query Store schema consistency check failed.
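As a sketch, a few of the events listed above can be collected with an Extended Events session (the session name and file target are illustrative):

```sql
-- Sketch: an Extended Events session watching a few Query Store events.
CREATE EVENT SESSION [QueryStoreMonitor] ON SERVER
ADD EVENT sqlserver.query_store_db_settings_changed,
ADD EVENT sqlserver.query_store_size_retention_cleanup_started,
ADD EVENT sqlserver.query_store_size_retention_cleanup_finished
ADD TARGET package0.event_file (SET filename = N'QueryStoreMonitor.xel');
GO
ALTER EVENT SESSION [QueryStoreMonitor] ON SERVER STATE = START;
```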

29 Useful Queries 1 Is Query Store active List Query Store Options
Increase Maximum Database Size
Delete all the data in the Query Store
Force a Query
Delete ad hoc query information from the Query Store

Useful Queries

Is Query Store Active
SELECT actual_state, actual_state_desc, readonly_reason, current_storage_size_mb, max_storage_size_mb
FROM sys.database_query_store_options;

List Query Store Options
SELECT * FROM sys.database_query_store_options;

Query Store Space Used
SELECT current_storage_size_mb, max_storage_size_mb,
current_storage_size_mb*100/max_storage_size_mb AS query_store_utilization
FROM sys.database_query_store_options;

Increase Maximum Database Size
ALTER DATABASE <database_name> SET QUERY_STORE (MAX_STORAGE_SIZE_MB = <new_size>);

Delete all the data in the Query Store
ALTER DATABASE <db_name> SET QUERY_STORE CLEAR;

Force a Query
EXEC sp_query_store_force_plan @query_id = x, @plan_id = y;

Delete ad hoc query information from the Query Store
DECLARE @id int;
DECLARE adhoc_queries_cursor CURSOR FOR
SELECT q.query_id
FROM sys.query_store_query_text AS qt
JOIN sys.query_store_query AS q ON q.query_text_id = qt.query_text_id
JOIN sys.query_store_plan AS p ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats AS rs ON rs.plan_id = p.plan_id
GROUP BY q.query_id
HAVING SUM(rs.count_executions) < 2
AND MAX(rs.last_execution_time) < DATEADD (hour, -24, GETUTCDATE())
ORDER BY q.query_id;
OPEN adhoc_queries_cursor;
FETCH NEXT FROM adhoc_queries_cursor INTO @id;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC sp_query_store_remove_query @id;
    FETCH NEXT FROM adhoc_queries_cursor INTO @id;
END
CLOSE adhoc_queries_cursor;
DEALLOCATE adhoc_queries_cursor;

30 Useful Queries 2 Last n queries run
Number of executions for each query
The number of queries with the longest average execution time within the last hour

Last n queries run
SELECT TOP 10 qt.query_sql_text, q.query_id, qt.query_text_id, p.plan_id, rs.last_execution_time
FROM sys.query_store_query_text AS qt
JOIN sys.query_store_query AS q ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan AS p ON q.query_id = p.query_id
JOIN sys.query_store_runtime_stats AS rs ON p.plan_id = rs.plan_id
ORDER BY rs.last_execution_time DESC;

Number of executions for each query
SELECT q.query_id, qt.query_text_id, qt.query_sql_text, SUM(rs.count_executions) AS total_execution_count
FROM sys.query_store_query_text AS qt
JOIN sys.query_store_query AS q ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan AS p ON q.query_id = p.query_id
JOIN sys.query_store_runtime_stats AS rs ON p.plan_id = rs.plan_id
GROUP BY q.query_id, qt.query_text_id, qt.query_sql_text
ORDER BY total_execution_count DESC;

The number of queries with the longest average execution time within the last hour
SELECT TOP 10 rs.avg_duration, qt.query_sql_text, q.query_id, qt.query_text_id, p.plan_id, GETUTCDATE() AS CurrentUTCTime, rs.last_execution_time
FROM sys.query_store_query_text AS qt
JOIN sys.query_store_query AS q ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan AS p ON q.query_id = p.query_id
JOIN sys.query_store_runtime_stats AS rs ON p.plan_id = rs.plan_id
WHERE rs.last_execution_time > DATEADD(hour, -1, GETUTCDATE())
ORDER BY rs.avg_duration DESC;

31 Useful Queries 3
The number of queries that had the biggest average physical IO reads in the last 24 hours
Queries with Multiple Plans
Queries that recently regressed in performance

The number of queries that had the biggest average physical IO reads in the last 24 hours, with corresponding average row count and execution count
SELECT TOP 10 rs.avg_physical_io_reads, qt.query_sql_text, q.query_id, qt.query_text_id, p.plan_id, rs.runtime_stats_id, rsi.start_time, rsi.end_time, rs.avg_rowcount, rs.count_executions
FROM sys.query_store_query_text AS qt
JOIN sys.query_store_query AS q ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan AS p ON q.query_id = p.query_id
JOIN sys.query_store_runtime_stats AS rs ON p.plan_id = rs.plan_id
JOIN sys.query_store_runtime_stats_interval AS rsi ON rsi.runtime_stats_interval_id = rs.runtime_stats_interval_id
WHERE rsi.start_time >= DATEADD(hour, -24, GETUTCDATE())
ORDER BY rs.avg_physical_io_reads DESC;

Queries with Multiple Plans
WITH Query_MultPlans AS
(
SELECT COUNT(*) AS cnt, q.query_id
FROM sys.query_store_query_text AS qt
JOIN sys.query_store_query AS q ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan AS p ON p.query_id = q.query_id
GROUP BY q.query_id
HAVING COUNT(DISTINCT plan_id) > 1
)
SELECT q.query_id, object_name(object_id) AS ContainingObject, query_sql_text, plan_id, p.query_plan AS plan_xml, p.last_compile_start_time, p.last_execution_time
FROM Query_MultPlans AS qm
JOIN sys.query_store_query AS q ON qm.query_id = q.query_id
JOIN sys.query_store_plan AS p ON q.query_id = p.query_id
JOIN sys.query_store_query_text qt ON qt.query_text_id = q.query_text_id
ORDER BY query_id, plan_id;

Queries that recently regressed in performance (comparing different points in time)
The following query returns all queries whose execution time doubled in the last 48 hours due to a plan choice change. It compares all runtime stat intervals side by side.
SELECT qt.query_sql_text, q.query_id, qt.query_text_id, rs1.runtime_stats_id AS runtime_stats_id_1, rsi1.start_time AS interval_1, p1.plan_id AS plan_1, rs1.avg_duration AS avg_duration_1, rs2.avg_duration AS avg_duration_2, p2.plan_id AS plan_2, rsi2.start_time AS interval_2, rs2.runtime_stats_id AS runtime_stats_id_2
FROM sys.query_store_query_text AS qt
JOIN sys.query_store_query AS q ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan AS p1 ON q.query_id = p1.query_id
JOIN sys.query_store_runtime_stats AS rs1 ON p1.plan_id = rs1.plan_id
JOIN sys.query_store_runtime_stats_interval AS rsi1 ON rsi1.runtime_stats_interval_id = rs1.runtime_stats_interval_id
JOIN sys.query_store_plan AS p2 ON q.query_id = p2.query_id
JOIN sys.query_store_runtime_stats AS rs2 ON p2.plan_id = rs2.plan_id
JOIN sys.query_store_runtime_stats_interval AS rsi2 ON rsi2.runtime_stats_interval_id = rs2.runtime_stats_interval_id
WHERE rsi1.start_time > DATEADD(hour, -48, GETUTCDATE())
AND rsi2.start_time > rsi1.start_time
AND p1.plan_id <> p2.plan_id
AND rs2.avg_duration > 2*rs1.avg_duration
ORDER BY q.query_id, rsi1.start_time, rsi2.start_time;


Download ppt "An Introduction to SQL 2016 Query Store"

Similar presentations


Ads by Google