1
Automagic Tuning - SQL Server 2019 and Beyond
Joey D’Antoni, Principal Consultant, Denny Cherry and Associates Consulting
2
Joey D’Antoni
Joey has over 20 years of experience with a wide variety of data platforms, in Fortune 50 companies as well as smaller organizations.
He is a frequent speaker on database administration, big data, and career management.
MVP; MCSE: BI and Data Platform; VMware vExpert.
He is the co-president of the Philadelphia SQL Server User’s Group.
He wants you to make sure you can restore your data.
3
SQL Server: Key New Functionality (Windows and Linux)
Adaptive Query Processing: learn from each run’s memory grant and query times to improve the next run.
Automatic Tuning: monitor plans (Plan 1, Plan 2, Plan 3) and revert to a previously effective plan.
Platform of Choice: Windows and Linux; high availability with a primary plus sync/async replicas and cross-operating-system “clusterless” Availability Groups.
Graph Data: model relationships between entities (the slide’s example: a person, the degree earned, position, and skill).
Built-in Machine Learning: R and Python, in-memory at massive scale, with native T-SQL scoring.
Speaker notes: This slide is an overview of the major new features in SQL Server. You can group these items into four areas: Platform of Choice (Linux and Docker), Performance (AQP and Automatic Tuning), HADR, and Modern Data Capabilities (Graph and ML). The rest of the deck drills into these areas and finishes with migration and vNext.
4
Adaptive Query Processing
Optimized query processing: improved efficiency with Adaptive Query Processing.
Optimize memory grants for repeatable queries to avoid over- or under-allocating (before: spill to disk; after: all in memory).
Adjust the data join strategy for small or large tables to speed joins.
Batch mode for memory grant feedback and adaptive joins.
Speaker notes: The key point here is that the query processor can adapt, or build a plan that is adaptive, without recompilation. Today it is just for “batch mode”, which means columnstore, but more is being built for the future. See the slide on Intelligent QP.
Industry-leading performance.
5
Query Processing and Cardinality Estimation
During optimization, the cardinality estimation (CE) process is responsible for estimating the number of rows processed at each step in an execution plan. CE uses a combination of statistical techniques and assumptions. When estimates are accurate (enough), we make informed decisions around order of operations and physical algorithm selection.
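To see how close CE’s guesses are for a specific query, a common approach is to capture the actual execution plan and compare estimated versus actual row counts. A minimal sketch; the dbo.Orders table is a hypothetical stand-in for your own workload:

```sql
-- Capture the actual execution plan as XML alongside the results
SET STATISTICS XML ON;

-- Illustrative query against a hypothetical dbo.Orders table; in the plan,
-- compare each operator's estimated rows (CE's guess) with its actual rows
SELECT CustomerID, COUNT(*) AS OrderCount
FROM dbo.Orders
WHERE OrderDate >= '20190101'
GROUP BY CustomerID;

SET STATISTICS XML OFF;
```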
6
Common reasons for incorrect estimates
Missing statistics
Stale statistics
Inadequate statistics sample rate
Bad parameter sniffing scenarios
Out-of-model query constructs, e.g. multi-statement TVFs, table variables, XQuery
Assumptions not aligned with the data being queried, e.g. independence vs. correlation
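Several of these causes can be checked directly from the catalog. A hedged sketch, assuming a hypothetical dbo.Orders table; sys.dm_db_stats_properties and UPDATE STATISTICS are standard, but the thresholds you act on are your call:

```sql
-- How old are the statistics, and how many rows changed since the last update?
SELECT s.name AS stats_name,
       sp.last_updated,
       sp.rows,
       sp.rows_sampled,
       sp.modification_counter
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE s.object_id = OBJECT_ID(N'dbo.Orders');

-- Refresh with a full scan if the sample rate looks inadequate
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;
```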
7
Cost of incorrect estimates
Slow query response time due to inefficient plans
Excessive resource utilization (CPU, memory, IO)
Spills to disk
Reduced throughput and concurrency
T-SQL refactoring to work around off-model statements
8
Adaptability in SQL Server
Adaptive Query Processing:
Interleaved Execution for MSTVFs
Batch Mode Memory Grant Feedback
Batch Mode Adaptive Joins
Scalar Function Optimization
Requires database compatibility level 140 (150 for the SQL Server 2019 additions such as scalar function optimization) in SQL Server and Azure SQL Database
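A minimal sketch of opting a database in (the database name is illustrative): level 140 lights up the Adaptive QP features above, and 150 adds the SQL Server 2019 items such as scalar UDF inlining.

```sql
-- Move the database to the SQL Server 2019 compatibility level
ALTER DATABASE [WideWorldImporters] SET COMPATIBILITY_LEVEL = 150;

-- Confirm the current level
SELECT name, compatibility_level
FROM sys.databases
WHERE name = N'WideWorldImporters';
```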
9
Mission-critical performance: the intelligent database
The Intelligent Query Processing feature family (Intelligent QP) builds on Adaptive QP:
Adaptive QP: Batch Mode Adaptive Joins, Interleaved Execution, Memory Grant Feedback (Batch Mode and Row Mode)
Table Variable Deferred Compilation
Batch Mode for Row Store
Approximate QP: Approximate Count Distinct
(The SQL Server 2019 additions, bolded on the original slide, are Row Mode Memory Grant Feedback, Table Variable Deferred Compilation, Batch Mode for Row Store, and Approximate Count Distinct.)
Also in SQL Server 2019: accelerating I/O performance with Persistent Memory, and performance insights anytime and anywhere with Lightweight Query Profiling.
10
Interleaved Execution for MSTVFs
11
Interleaved Execution for MSTVFs
Problem: multi-statement table-valued functions (MSTVFs) are treated as a black box by the query processor, and a fixed optimization guess is used for their cardinality.
Interleaved Execution will materialize MSTVFs and use their actual row counts.
Downstream operations will benefit from the corrected MSTVF cardinality estimate.
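As a rough illustration (the function, table, and column names are hypothetical), an MSTVF like the one below used to get a fixed row-count guess; under compatibility level 140 or higher it is executed and materialized first, and its real row count drives the rest of the plan:

```sql
-- Hypothetical MSTVF: the optimizer historically assumed a fixed guess
-- (100 rows at compat levels 120/130) regardless of the data
CREATE OR ALTER FUNCTION dbo.ufn_OrdersForCustomer (@CustomerID int)
RETURNS @Result TABLE (OrderID int, OrderDate date, Amount money)
AS
BEGIN
    INSERT INTO @Result (OrderID, OrderDate, Amount)
    SELECT OrderID, OrderDate, Amount
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID;
    RETURN;
END;
GO

-- With interleaved execution, the join below is costed using the
-- actual number of rows the function returned
SELECT f.OrderID, f.Amount, li.ProductID
FROM dbo.ufn_OrdersForCustomer(42) AS f
JOIN dbo.OrderLines AS li ON li.OrderID = f.OrderID;
```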
12
About Interleaved Execution
Expected overhead? Minimal, since we’re already materializing MSTVFs.
Cached plan considerations: the plan from the first execution is cached and used by consecutive executions.
Plan attributes: Contains Interleaved Execution Candidates, Is Interleaved Executed.
XEvents: execution status, CE update, disabled reason.
13
Demo Interleaved Execution
14
Batch Mode Memory Grant Feedback
15
Batch Mode Memory Grant Feedback (MGF)
Problem: queries may spill to disk or take too much memory based on poor cardinality estimates.
MGF will adjust memory grants based on execution feedback.
MGF will remove spills and improve concurrency for repeating queries.
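A hedged sketch of the knobs involved: the database-scoped configuration below is SQL Server 2019 syntax, and the documented USE HINT lets you opt a single query out while testing (the query itself is illustrative):

```sql
-- Batch mode memory grant feedback is on by default at compat level 140+;
-- it can be toggled per database (SQL Server 2019 syntax)
ALTER DATABASE SCOPED CONFIGURATION SET BATCH_MODE_MEMORY_GRANT_FEEDBACK = ON;

-- Or opt a single query out without touching the database setting
SELECT CustomerID, SUM(Amount) AS Total
FROM dbo.Orders
GROUP BY CustomerID
OPTION (USE HINT('DISABLE_BATCH_MODE_MEMORY_GRANT_FEEDBACK'));
```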
16
About Batch Mode Memory Grant Feedback
Expected overhead? If there is oscillation, we will disable the feedback loop after multiple executions.
Expected decrease and increase size? For spills: the spill size plus a buffer. For overages: reduce based on waste, and add a buffer.
RECOMPILE or eviction scenarios: the memory grant size will go back to the original.
XEvents: spill report, and updates by feedback.
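To watch grants being adjusted across executions, one option is the memory grant DMV; a minimal sketch, run while the workload is active:

```sql
-- Requested vs. granted vs. actually used memory for in-flight queries;
-- across repeated executions of the same query, the feedback loop should
-- shrink oversized grants and grow ones that spilled
SELECT session_id,
       requested_memory_kb,
       granted_memory_kb,
       used_memory_kb,
       ideal_memory_kb
FROM sys.dm_exec_query_memory_grants;
```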
17
Demo Memory Grant Feedback
18
Batch Mode Adaptive Joins
19
Batch Mode Adaptive Joins (AJ)
Problem: if cardinality estimates are skewed, we may choose an inappropriate join algorithm.
AJ will defer the choice of hash join or nested loop until after the first join input has been scanned.
AJ uses nested loop for small inputs and hash join for large inputs.
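A hedged sketch of observing and controlling the behavior: with an actual plan captured, an eligible join shows up as an Adaptive Join operator, and a documented USE HINT disables it per query so you can compare (the query and tables are illustrative):

```sql
-- At compat level 140+ against a columnstore-backed table, this join
-- may compile as an Adaptive Join operator in the actual plan
SELECT o.OrderID, c.CustomerName
FROM dbo.Orders AS o
JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID
WHERE o.OrderDate >= '20190101';

-- Same query with adaptive joins disabled, to compare plans and runtimes
SELECT o.OrderID, c.CustomerName
FROM dbo.Orders AS o
JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID
WHERE o.OrderDate >= '20190101'
OPTION (USE HINT('DISABLE_BATCH_MODE_ADAPTIVE_JOINS'));
```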
20
About Batch Mode Adaptive Join
Expected overhead? We grant memory even for the nested loop scenario, so if nested loop is *always* optimal, you have more overhead.
Plan attributes: Adaptive Threshold Rows, Estimated and Actual Join Type.
XEvents: adaptive join skipped.
Cached plan considerations: a single compiled plan can accommodate both low and high row count scenarios.
21
Adaptive Join Threshold
22
Demo Adaptive Join
23
Revert to previously effective plan
Automatically fix problems without tuning: better performance with Automatic Plan Correction.
Continuous performance plan monitoring and analysis
Detect problematic plans
Automatically fix performance problems caused by SQL plan choice regressions
Speaker notes: Automatic Tuning requires Query Store and works like this. By default, we detect query performance regressions when we see a plan change and the query’s average CPU has dramatically regressed from its history in the Query Store; we give you recommendations and a method to resolve it. If you enable automatic tuning, we detect and automatically fix the problem by reverting back to the plan that performed well. I call this “Last Known Good”. (Plan 1, Plan 2, Plan 3: revert to Plan 2, the previously effective plan.)
Industry-leading performance.
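The moving parts in T-SQL, as a minimal sketch (the database name is illustrative): Query Store must be on, automatic plan correction is an opt-in database setting, and the engine’s findings surface in a DMV.

```sql
-- Query Store provides the history that regression detection relies on
ALTER DATABASE [WideWorldImporters] SET QUERY_STORE = ON;

-- Opt in to automatic plan correction ("last known good" plan forcing)
ALTER DATABASE [WideWorldImporters]
SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);

-- Recommendations the engine has produced (and, if enabled, applied)
SELECT reason, score, state, details
FROM sys.dm_db_tuning_recommendations;
```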
24
Demo Automatic Tuning In Action
Follow the readme.md file in demo2_autotune.
25
T-SQL Scalar UDF Inlining
Requires database compatibility level 150. Enable the benefits of UDFs without the performance penalty!
The goal of the Scalar UDF Inlining feature is to improve performance for queries that invoke scalar UDFs where UDF execution is the main bottleneck.
Using query-rewriting techniques, UDFs are transformed into equivalent relational expressions that are “inlined” into the calling query.
Source: “Froid: Optimization of Imperative Programs in a Relational Database”, Microsoft’s Gray Systems Lab, Madison, Wisconsin.
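A hedged sketch of working with the feature in SQL Server 2019: the catalog flags which scalar UDFs qualify, and an individual function can opt out if inlining ever regresses it (the function body here is illustrative):

```sql
-- Which scalar UDFs does the engine consider inlineable?
SELECT OBJECT_NAME(m.object_id) AS udf_name, m.is_inlineable
FROM sys.sql_modules AS m
JOIN sys.objects AS o ON o.object_id = m.object_id
WHERE o.type = 'FN';

-- Opt a single UDF out of inlining while keeping compat level 150
CREATE OR ALTER FUNCTION dbo.ufn_ApplyDiscount (@Amount money)
RETURNS money
WITH INLINE = OFF
AS
BEGIN
    RETURN CASE WHEN @Amount > 1000 THEN @Amount * 0.9 ELSE @Amount END;
END;
```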
26
Database Compatibility
Read the docs page. Default database compatibility level by version:
SQL Server 2017: 140
SQL Server 2016: 130
SQL Server 2014: 120
SQL Server 2012: 110
SQL Server 2008 / 2008 R2: 100
Upgrade (including restore) retains the compatibility level of the database. Use this to maintain functional compatibility (reserved words, query execution functionality) and to protect against breaking changes that are database scoped; it does not protect against discontinued features, and it does not affect server-wide functionality. The model database’s level can be changed to affect new databases.
Query performance: the compatibility level is not a guarantee of performance. Level 120 introduces the new cardinality estimator; 130 and 140 introduce new query plan and execution features and include by default the QP fixes previously gated behind trace flag 4199 (the trace flag then applies only to new fixes). Use the Database Migration Assistant (DMA) and Query Store to observe performance differences, and Microsoft will work with you if you encounter query plan changes. The default level for Azure SQL Database is kept updated.
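When moving to a newer compatibility level exposes a cardinality estimator regression, there are documented escape hatches short of dropping the level back; a minimal sketch (the query is illustrative):

```sql
-- Keep the new compatibility level but use the legacy (pre-120) cardinality
-- estimator for the whole database...
ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON;

-- ...or only for an individual problem query
SELECT CustomerID, COUNT(*) AS OrderCount
FROM dbo.Orders
GROUP BY CustomerID
OPTION (USE HINT('FORCE_LEGACY_CARDINALITY_ESTIMATION'));
```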
28
Learn more from Joey D’Antoni
@jdanton