Use Cases for In-Memory OLTP
Warner Chaves, SQL MCM / MVP
SQLTurbo.com / Pythian.com
Our Sponsors
About Me
SQL Server DBA for 10 years. Previously an L3 DBA at HP, now a Principal Consultant at Pythian in Ottawa, Ontario. SQL Server MCM and MVP.
Twitter: @warchav | Email: warner@sqlturbo.com | Blog: sqlturbo.com | Company site: pythian.com
Agenda
Goal of today: provide a practical guideline for finding the use cases for In-Memory OLTP in our systems, the types of gains we can expect, and the reasons behind them.
– The what, why, when, and compromises of In-Memory OLTP.
– Demo and use case: the dials of In-Memory and durability.
– Demo and use case: the Landing Pad.
– Demo and use case: the table variable conversion.
– Proposed workflow to apply when testing your use case.
The Fundamental Premises
– CPU speed is stagnant, but we have more cores per chip.
– The cost of RAM keeps decreasing while server memory capacity keeps increasing.
– There are aggressive optimization algorithms and data structures that work in RAM but not on disk.
A matter of speed
Technology     | Avg Latency
Spinning metal | Milliseconds (10^-3)
Solid State    | Microseconds (10^-6)
RAM            | Nanoseconds (10^-9)
Where are the improvements?
– Network Protocol: no change.
– Log IO: bypass all log IO for non-durable data; less log IO produced for durable data.
– Data File IO: optimized for streaming sequential writes in data/delta file pairs.
– Query Optimization: native compilation introduced.
Where are the improvements? (2)
– Query Execution: no latching or locking.
– Indexing: new hash indexes; all indexes are covering; no fragmentation.
– Memory Use: table variables are truly in memory; no copies of data, only multiple row versions.
– CPU: use is maximized by eliminating bottlenecks and using native compilation.
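To make the indexing point concrete, here is a minimal sketch of a memory-optimized table combining one of the new hash indexes with a range index. The table, columns, and BUCKET_COUNT are illustrative assumptions, and the database is assumed to already have a MEMORY_OPTIMIZED_DATA filegroup.

```sql
-- Illustrative memory-optimized table: hash index on the key, range index
-- on a secondary column. Both indexes are covering by definition.
CREATE TABLE dbo.SessionState
(
    SessionId INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    UserId    INT NOT NULL,
    Payload   VARBINARY(2000) NULL,
    INDEX ix_UserId NONCLUSTERED (UserId)   -- range index, also covering
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```

As a rule of thumb, BUCKET_COUNT is sized close to the expected number of distinct key values; undersizing it creates long hash chains, which is part of the careful measurement called out on the next slide.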
The compromises of V1
– No partitioning.
– Requires careful measurement (RAM, hash buckets).
– No parallelism.
– No foreign keys, unique indexes, or triggers.
– Schema is 100% static.
– Limited T-SQL surface for native compilation.
The promises of V2 (SQL 2016)
– In-Memory OLTP will support foreign keys, check and unique constraints, native functions and triggers.
– Size limit increased from 256 GB to 2 TB.
– Transparent Data Encryption (TDE) is supported.
– ALTER PROCEDURE and ALTER TABLE.
– Parallel plans are now possible on in-memory tables.
– Native compilation support for: outer joins, OR, NOT, UNION (ALL), DISTINCT, subqueries (IN, NOT IN, EXISTS, NOT EXISTS).
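For illustration, a sketch of the kind of in-place schema change SQL Server 2016 allows on a memory-optimized table. The table, column, and index names are assumptions carried over from the earlier sketch.

```sql
-- SQL Server 2016: schema changes on memory-optimized tables are allowed
-- (the table is rebuilt in memory while the operation runs).
ALTER TABLE dbo.SessionState
    ADD LastTouch DATETIME2 NOT NULL
        CONSTRAINT df_SessionState_LastTouch DEFAULT SYSUTCDATETIME();

ALTER TABLE dbo.SessionState
    ADD INDEX ix_LastTouch NONCLUSTERED (LastTouch);
```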
Demo: The Dials of In-Memory and Durability
Use Case
– You have locking issues and row versioning didn't significantly improve throughput. (Go In-Memory.)
– You don't have locking issues, but you still have latching and it's limiting your throughput. (Go In-Memory.)
– Your locking and latching are not significant, but you still need to improve throughput. (Go In-Memory and native compilation.)
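As a rough first check for the locking and latching symptoms above, a sketch against sys.dm_os_wait_stats; the filter list and TOP value are illustrative.

```sql
-- Look for lock (LCK_M_%) and page latch (PAGELATCH_%) waits that dominate
-- total wait time; heavy latch waits on hot pages are a classic candidate
-- for memory-optimized tables, WRITELOG points at the durability dials.
SELECT TOP (20)
       wait_type,
       waiting_tasks_count,
       wait_time_ms,
       wait_time_ms * 100.0 / SUM(wait_time_ms) OVER () AS pct_of_total
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE 'LCK_M_%'
   OR wait_type LIKE 'PAGELATCH_%'
   OR wait_type = 'WRITELOG'
ORDER BY wait_time_ms DESC;
```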
Use Case (2)
– You have to ingest large amounts of data with big spikes of activity followed by quiet periods. (Go In-Memory and mix with classic tables.)
– You have high-throughput data where it's OK to lose SOME of it as long as most is there. (Go In-Memory and Delayed Durability.)
– You have data that needs to be queried with the richness of T-SQL, but it doesn't need to survive restarts. (Go In-Memory and SCHEMA_ONLY.)
Use Case (3)
– You have non-durable data that you want to scale out with AlwaysOn Availability Groups. (Go In-Memory and Delayed Durability.)
– SCHEMA_ONLY will NOT work for this use case because it generates no transaction log records, so there is nothing to ship to the secondaries.
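A minimal sketch of the delayed durability dials, assuming a hypothetical SalesIngest database and the SessionState table from the earlier sketch.

```sql
-- Database-level dial: ALLOWED lets individual commits opt in,
-- FORCED makes every commit delayed durable.
ALTER DATABASE SalesIngest SET DELAYED_DURABILITY = ALLOWED;

-- Explicit transactions that touch memory-optimized tables also need
-- snapshot semantics; this setting elevates them automatically.
ALTER DATABASE SalesIngest SET MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT = ON;

-- Per-transaction opt-in: the commit returns before the log block is
-- hardened, removing the WRITELOG wait at the risk of losing this work.
BEGIN TRANSACTION;
    INSERT INTO dbo.SessionState (SessionId, UserId, Payload)
    VALUES (42, 7, 0x00);
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);
```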
In-Memory Tables: the durability dials
– Durable: removes LOCK and LATCH waits.
– Delayed Durable: + removes the WRITELOG wait, at the risk of data loss.
– Non-Durable: + removes ALL disk IO; no data is persisted.
– + Native Compilation: maximizes CPU throughput on the critical code paths.
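A minimal sketch of the table-level dials, with illustrative names and bucket counts. Durable and Delayed Durable both use DURABILITY = SCHEMA_AND_DATA, since delayed durability is a database/commit-level setting as shown earlier; SCHEMA_ONLY is the non-durable dial.

```sql
-- Durable (and, with delayed durability enabled, Delayed Durable):
CREATE TABLE dbo.Orders_InMem
(
    OrderId  BIGINT IDENTITY(1,1) NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 4000000),
    Amount   MONEY NOT NULL,
    PlacedAt DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

-- Non-Durable: the schema survives a restart, the data does not,
-- and no disk IO is generated for the data.
CREATE TABLE dbo.Orders_Scratch
(
    OrderId  BIGINT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 4000000),
    Amount   MONEY NOT NULL,
    PlacedAt DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
```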
Demo: The Landing Pad
Use Case
– An easy way to dip your toes in the In-Memory pool!
– Even higher gains are attained by going with Delayed Durability or no durability at all.
– Make sure to maximize CPU power with parallel tasks (see the sketch below).
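A sketch of the landing pad pattern under assumed names: a SCHEMA_ONLY table absorbs the insert spike, and a job periodically flushes closed time slices into the classic disk-based table (dbo.Readings, assumed to already exist).

```sql
-- 1) Spike absorber: readings land here with no disk IO.
CREATE TABLE dbo.Readings_Landing
(
    ReadingId BIGINT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 8000000),
    SensorId  INT NOT NULL,
    Value     FLOAT NOT NULL,
    ReadAt    DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);

-- 2) Periodic flush (from a job; several can run in parallel on disjoint
--    ranges): move a closed time slice into the classic table, then clear it.
DECLARE @cutoff DATETIME2 = DATEADD(MINUTE, -1, SYSUTCDATETIME());

BEGIN TRANSACTION;
    INSERT INTO dbo.Readings (ReadingId, SensorId, Value, ReadAt)
    SELECT ReadingId, SensorId, Value, ReadAt
    FROM dbo.Readings_Landing WITH (SNAPSHOT)
    WHERE ReadAt < @cutoff;

    DELETE FROM dbo.Readings_Landing WITH (SNAPSHOT)
    WHERE ReadAt < @cutoff;
COMMIT;
```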
Demo: The Table Variable Conversion
Use Case
– Another easy way to dip your toes in the In-Memory pool!
– Anywhere table variables are heavily used and hitting tempdb significantly.
– Anywhere temp tables are used but there is no advantage to them living in tempdb (no need for extra indexes, real statistics for the plan, etc.).
– The plans you currently have do NOT depend on parallelism. (A sketch of the conversion follows.)
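A sketch of the conversion itself, with assumed type and column names: a memory-optimized table type replaces the classic table variable and keeps it out of tempdb. The join target reuses the Orders_InMem sketch from the durability slide.

```sql
-- Declared once; variables of this type live entirely in memory
-- instead of tempdb.
CREATE TYPE dbo.OrderIdList AS TABLE
(
    OrderId BIGINT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000)
)
WITH (MEMORY_OPTIMIZED = ON);
GO

-- Usage is identical to a regular table variable.
DECLARE @ids dbo.OrderIdList;
INSERT INTO @ids (OrderId) VALUES (1), (2), (3);

SELECT o.OrderId, o.Amount, o.PlacedAt
FROM dbo.Orders_InMem AS o
JOIN @ids AS i ON i.OrderId = o.OrderId;
```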
Workflow for YOUR Use Case
1. Understand where in your system your bottleneck is.
2. Does In-Memory OLTP have improvements for it?
3. Are there any show-stoppers among the limitations of V1?
4. Implement, test, and compare.
5. Do a capacity planning exercise (RAM, storage, growth). Make sure the benefits are worth it.
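For step 5, a sketch of the capacity check using the In-Memory OLTP DMV sys.dm_db_xtp_table_memory_stats:

```sql
-- Per-table memory footprint of memory-optimized tables in the current
-- database; feed the results into RAM sizing and growth projections.
SELECT OBJECT_SCHEMA_NAME(t.object_id) AS schema_name,
       OBJECT_NAME(t.object_id)        AS table_name,
       t.memory_used_by_table_kb,
       t.memory_used_by_indexes_kb,
       t.memory_allocated_for_table_kb,
       t.memory_allocated_for_indexes_kb
FROM sys.dm_db_xtp_table_memory_stats AS t
ORDER BY t.memory_used_by_table_kb DESC;
```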
Q&A