Simple Partitioning
Building a simple partitioning solution with SQL Server
Stephen Fulcher
Confidential and Proprietary. Copyright © 2016 Crocodile Digital
Agenda
Introduction
The partitioning problem
Non-partitioned scenario
Partitioning steps
Partitioned solution
High volume insert techniques
Q&A
Motivation
Why do we care about this?
Database tables can become unmanageably large
Insert-heavy workloads can become too hot to support reads because of index maintenance requirements
High-volume locking from updates and deletes can bring a table, or the whole database, down
Partitioning
Partitioning can resolve database resource contention:
Reduces locking
Separates the mutually exclusive insert and read optimizations
But partitioning can also increase cost and complexity:
Queries are more complex
Inserts, updates, and deletes are slower
The skill set is less common
Partitioning Defined
Partitioning means dividing into parts, so database partitioning is dividing a database into parts:
Two complete sets of objects, or
Individual objects
In particular, we are dividing tables into parts with essentially identical schemas
This is not normalization or 1:1 refactoring
Partitioning Example

CREATE TABLE [dbo].[Request_0](
    [Request_Id] [int] IDENTITY(10000,1) NOT NULL,
    ...
    [Request_Stamp] [datetime] NOT NULL,
    CONSTRAINT [PK_Request_0] PRIMARY KEY CLUSTERED (...) WITH (...)
) ON [PRIMARY]

CREATE TABLE [dbo].[Request_1](
    ...
    CONSTRAINT [PK_Request_1] PRIMARY KEY CLUSTERED (...)
) ON [PRIMARY]
Table Partitioning Options
Data Retention Scenario
So how is partitioning relevant for data retention?
Data retention using DELETE statements can create unmanageable locking and bring a database down under high insert load
Partitioning creates the opportunity to eliminate that locking by using TRUNCATE statements instead of DELETE statements for retention
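The contrast can be sketched in T-SQL. This assumes the seven daily partitions `Request_0` through `Request_6` from the earlier example; the index arithmetic is illustrative, not the presenter's actual retention job.

```sql
-- DELETE-based retention: row-level locks and heavy logging, which can
-- block a high-volume insert workload:
-- DELETE FROM [dbo].[Request]
-- WHERE [Request_Stamp] < DATEADD(DAY, -7, GETUTCDATE());

-- TRUNCATE-based retention: deallocates the expired partition's pages
-- with minimal logging and only a brief schema lock. The partition
-- index calculation here is an assumption for illustration.
DECLARE @expired INT = DATEPART(WEEKDAY, GETUTCDATE()) % 7;
DECLARE @sql NVARCHAR(200) =
    N'TRUNCATE TABLE [dbo].[Request_' + CAST(@expired AS NVARCHAR(2)) + N'];';
EXEC sp_executesql @sql;
```

Because TRUNCATE is a metadata operation rather than a row-by-row delete, it completes in roughly constant time regardless of how many rows the partition holds.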
Demo Let’s look at a typical non-partitioned scenario with high insert load, such as request information logging
Steps To Partition
Choose your retention strategy to define the duration scope of a table partition
Define how to choose the partition based on the current time
Create the partitions
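A minimal sketch of the time-to-partition mapping, assuming one partition per weekday (`Request_0` through `Request_6`); any function of the timestamp that matches your retention scope works the same way.

```sql
-- Illustrative: map the current time to a partition suffix (0..6).
-- DATEPART(WEEKDAY, ...) returns 1..7, so subtract 1 for a zero-based index.
DECLARE @partition INT = (DATEPART(WEEKDAY, GETUTCDATE()) - 1) % 7;
SELECT @partition AS [PartitionIndex];
```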
Steps To Partition
Modify the insert proc to pick the correct partition based on the current time
Modify the update and delete procs to target one or more partitions
Decide whether you want a view for selects
Modify the select proc or view to combine the partitions with UNION ALL
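The view and insert-routing steps can be sketched as follows. The object names, the two-partition layout, and the even/odd-day routing rule are assumptions for illustration, not the demo's actual code.

```sql
-- Illustrative partitioned view: one SELECT per partition, combined
-- with UNION ALL so no duplicate-elimination work is done.
CREATE VIEW [dbo].[Request_All] AS
    SELECT * FROM [dbo].[Request_0]
    UNION ALL
    SELECT * FROM [dbo].[Request_1];
GO

-- Illustrative insert proc that routes to a partition by time.
CREATE PROCEDURE [dbo].[Request_Insert]
    @Request_Stamp DATETIME
AS
BEGIN
    -- Route to the partition for the current period (here: even/odd day).
    IF DATEPART(DAYOFYEAR, @Request_Stamp) % 2 = 0
        INSERT INTO [dbo].[Request_0] ([Request_Stamp]) VALUES (@Request_Stamp);
    ELSE
        INSERT INTO [dbo].[Request_1] ([Request_Stamp]) VALUES (@Request_Stamp);
END
GO
```

Callers select from `Request_All` and never need to know which physical table holds a given row.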
Steps To Partition Update the retention job to use TRUNCATE statements
Update any tests that do direct data manipulation outside of the procs
Demo Let’s modify the request logging scenario to use partitioned tables
High Volume Inserts
Collect the DataRow or other objects in a ConcurrentQueue
Once the queue reaches a threshold size, dequeue the items into a DataTable
Additionally, use a timer to ensure items are dequeued promptly during lighter load periods, when reaching the threshold takes longer
High Volume Inserts
Use a bulk insert technique to optimize the insert:
Table valued parameters
BulkCopyCommand
Move the critical behavior parameters into the configuration metadata
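The table-valued parameter path can be sketched in T-SQL as below. The type name `RequestRows` and proc name `Request_InsertBatch` are assumptions, and the proc targets a single partition for brevity; a real version would route by time as in the earlier steps.

```sql
-- Illustrative table type matching the columns being batched.
CREATE TYPE [dbo].[RequestRows] AS TABLE (
    [Request_Stamp] DATETIME NOT NULL
);
GO

-- Illustrative batch insert proc: one set-based INSERT per batch
-- instead of one round trip per row.
CREATE PROCEDURE [dbo].[Request_InsertBatch]
    @Rows [dbo].[RequestRows] READONLY
AS
BEGIN
    INSERT INTO [dbo].[Request_0] ([Request_Stamp])
    SELECT [Request_Stamp] FROM @Rows;
END
GO
```

From .NET, the DataTable built by the dequeue step can be passed as this parameter (SqlDbType.Structured); SqlBulkCopy is the alternative bulk path that writes to the target table directly.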
Demo Let’s look at the implementation code for the high volume insert techniques
Questions?