Table Partitioning For Maintenance and Performance
Richard Banville, OpenEdge Development, Progress Software
© 2015 Progress Software Corporation. All rights reserved.

Agenda
- Partition design considerations
- Partition definition setup, not physical layout
- Some internals of Table Partitioning in OpenEdge
- Performance impact (maintenance and runtime)
  - Your configuration matters
  - Yes, we have numbers
Data Access: Partitioning Types
- List partitioning: the Order table split by region (Northern, Western, Southern)
- Range partitioning: the Order table split at date boundaries (12/31/2011, 12/31/2013, 12/31/2015)
- Sub-partitioning (up to 15 levels!): each region further split by date, e.g. Northern Region 12/31/2011 through Northern Region 12/31/2015
Why Are You Partitioning?
- Maintenance: data re-org / rebuild, data purging, data archival
- Availability: data repair, data isolation, historic data access
- Performance: "hot" table access, "hot" index access
What to Partition: Data Organization
List partitioning
- Look for a "well known" grouping by "static data value": known at creation time, changes infrequently
- Data organized geographically or grouped by specific entities (exact match): country, region, company, division
- Why or why not Sales-Rep? Consider the number of unique data values
- 32,765 max defined partitions per table
- For best performance: spread the data out
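As a hedged illustration only (this is not OpenEdge internals, and the partition names are hypothetical), list-partition routing amounts to an exact-match lookup from the static column value to a partition:

```python
# Illustrative sketch: exact-match routing for list partitioning.
# The region values mirror the slide; the partition names are made up.
LIST_PARTITIONS = {
    "Northern": "order_north",
    "Western":  "order_west",
    "Southern": "order_south",
}

def route_list(region: str) -> str:
    """Return the partition holding rows whose region column equals `region`."""
    try:
        return LIST_PARTITIONS[region]
    except KeyError:
        # A value with no defined partition is an error (or needs a default partition)
        raise ValueError(f"no list partition defined for {region!r}")

print(route_list("Northern"))  # order_north
```

This also shows why a high-cardinality column like Sales-Rep is a questionable choice: the mapping table grows with every distinct value, up to the 32,765-partition limit.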
What to Partition: Data Organization
Range partitioning
- Data organized by ranges of values: a range, rather than a single value, identifies a group of data
- Date (by year is most typical)
  - Usage: calendar year, fiscal year, quarter?
  - Order-date vs ship-date?
  - Consider the effect on index choice
- Alphabetic or numeric range
  - Product code
  - Usage vs balance: group related products, or balance an A-Z spread
- For best performance: spread the data out
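Range routing can be pictured as a binary search over the partition upper bounds. This is an illustrative sketch using the slide's example dates (the partition names are hypothetical), not the engine's actual mechanism:

```python
import bisect
from datetime import date

# Upper bounds of each range partition, mirroring the slide's example dates.
BOUNDS = [date(2011, 12, 31), date(2013, 12, 31), date(2015, 12, 31)]
NAMES = ["orders_thru_2011", "orders_thru_2013", "orders_thru_2015"]  # hypothetical

def route_range(order_date: date) -> str:
    """Find the first partition whose upper bound covers order_date."""
    i = bisect.bisect_left(BOUNDS, order_date)
    if i == len(NAMES):
        raise ValueError(f"no range partition covers {order_date}")
    return NAMES[i]

print(route_range(date(2012, 6, 1)))    # orders_thru_2013
print(route_range(date(2011, 12, 31)))  # orders_thru_2011 (bound treated as inclusive here)
```

Note how the choice of order-date vs ship-date changes which column the lookup keys on, and therefore which queries can prune partitions.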
What to Partition: Data Organization
Sub-partitioning
- Sub-partitioning candidate? Can you include another column (or add one)?
- Example: by region, by order-date
- For best performance: sub-partition AND spread the data out
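Combining the two schemes, sub-partition routing is a two-level lookup: an exact match on the list column, then a range search on the date. A minimal sketch under the same hypothetical naming assumptions as above:

```python
import bisect
from datetime import date

REGIONS = {"Northern", "Western", "Southern"}                          # list level
BOUNDS = [date(2011, 12, 31), date(2013, 12, 31), date(2015, 12, 31)]  # range level

def route_sub(region: str, order_date: date):
    """Two-level routing: list on region, then range on order-date."""
    if region not in REGIONS:
        raise ValueError(f"unknown region {region!r}")
    i = bisect.bisect_left(BOUNDS, order_date)
    if i == len(BOUNDS):
        raise ValueError(f"no date range covers {order_date}")
    return (region, BOUNDS[i])  # identifies one of the 3x3 = 9 sub-partitions

print(route_sub("Western", date(2014, 3, 2)))  # ('Western', datetime.date(2015, 12, 31))
```

The payoff is that concurrent creates for different (region, date-range) pairs land in different sub-partitions, which is exactly the concurrency argument the next slides make.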
Increasing Concurrency With Table Partitioning
No partitioning: Order table data all in one storage area
- All users' creates target the same physical storage area:
  Create order. Assign Order-date = TODAY region = "NorthEast".
  Create order. Assign Order-date = TODAY region = "SouthEast".
Range partitioning by Order-Date
- Create order. Assign Order-date = TODAY.
- With every new order dated TODAY, the "current" data still lands in one physical storage area: one partition takes all the inserts
Range partitioning by Order-Date
- Create order. Assign Order-date = TODAY Product-Code = "A50".
- Create order. Assign Order-date = TODAY Product-Code = "D100".
- "Current" data still concentrated in one physical storage area
Range partitioning by Product-Code
- The same creates now spread table data across physical storage areas
List partitioning by Region
- Create order. Assign Order-date = TODAY region = "NorthEast".
- Create order. Assign Order-date = TODAY region = "SouthEast".
- Table data spread across physical storage areas
Sub-partitioning by Region & Order-Date
- The same creates spread table data across physical storage areas, one partition per (region, date-range) combination
Physical Characteristics
- Type II areas; data and indexes separated
  - 8 KB block size with cluster sizes of 512 (data) and 64 (index)
- All partitions in separate areas
- Areas of proportional fixed sizes with matching DB extents
- Data: average record size 257 bytes, all areas at the same RPB (32)
  - Might be interesting to show per-partition RPB tuning
- 50,000 to 10,000,000 records per run (based on # of users)
- 3 global indexes and 2 local indexes
- Recovery area: 8 KB block size with a 128 MB cluster size
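The RPB choice above is consistent with simple arithmetic: with 8 KB blocks and a 257-byte average record, roughly 32 records fit per block, and records-per-block settings are powers of two. A rough back-of-the-envelope check (ignoring block and record overhead):

```python
import math

BLOCK_SIZE = 8192   # 8 KB database blocks, as in the test setup
AVG_RECORD = 257    # average record size from the test data, in bytes

raw_fit = BLOCK_SIZE / AVG_RECORD     # ~31.9 records fit per block
rpb = 2 ** round(math.log2(raw_fit))  # round to the nearest power of two
print(rpb)  # 32
```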
Testing Performed
- Scale users: 1, 2, 5, 10, 25, 50, 100, 200
  - Avoid application-side conflicts; monitor internal resource conflicts
- Operations executed: basic create, read, delete
- Vary transaction scope: 10, 100, 500 records per transaction
- Vary partitioning scheme:
  - No partitioning
  - Range partitioning on {order-date}
  - Sub-partitioning on {region(9), order-date}
Modified Server Parameters
- Buffer pool: -B 50000 -lruskips 250
- Lock table: -L 100000 -lkwtmo 3600
- Transaction: -TXERetryLimit 1000
- BI: -bibufs 4000 -bwdelay 20
- Latching: -spin 50000 -napmax 10
- Page writers: 1 BIW, 3 APWs
Other Test Information
- Machine stats: 16 sparcv9 processors at 3600 MHz; memory size: 32768 megabytes
- Dbanalys performed before and after each activity
- Database recreated with the same .st file for each run
- Variation across runs: ±1%
- These tests were run with the best intentions; there are some additional areas Progress needs to investigate
- As always, YMMV
Sub-partitioning on Region AND Order-Date (txn size of 10)
- Writes and deletes perform significantly better with this sub-partitioning scheme
- The big jump starts at 25 and 50 users
- No improvement for "isolated" read activity
[Chart: # users vs % difference; negative indicates a loss for TP]
Sub-partitioning on Region AND Order-Date
- Deletes fall off with increased txn size
- The big jump starts at 25 users
- Reads remain flat
Sub-partitioning on Region AND Order-Date
- Deletes fall off with increased txn size
- Writes improve with increased txn size
- Reads remain flat
Data Location Mapping Overhead
- Create order. Assign region = "NorthEast" and order-date = TODAY.
- Table data now across physical storage areas (Partitions 1-4)
- Object mapping: table # + column value resolves to area # and record data
- Partition mapping via the "special" _partition-policy-detail (ppd) index
- Cost: one additional index lookup per record created
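That extra cost can be pictured as one additional key lookup against the policy-detail index for every record created. The sketch below uses hypothetical structures (not the actual _partition-policy-detail schema) and simply counts the lookups:

```python
from datetime import date

# Hypothetical stand-in for the _partition-policy-detail (ppd) index:
# (partition-key values) -> storage area number. Values are made up.
PPD_INDEX = {
    ("NorthEast", date(2015, 12, 31)): 9,
    ("SouthEast", date(2015, 12, 31)): 10,
}
ppd_lookups = 0

def resolve_area(region, range_end):
    """Resolve a partition key to its storage area, counting each lookup."""
    global ppd_lookups
    ppd_lookups += 1  # the one additional index lookup TP adds
    return PPD_INDEX[(region, range_end)]

# Creating 5 orders costs 5 extra lookups: one per record created.
for _ in range(5):
    resolve_area("NorthEast", date(2015, 12, 31))
print(ppd_lookups)  # 5
```

For reads, as the later slides note, the cost scales with partitions traversed rather than records touched, which is why the read overhead stays so small.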
Why the big difference? For the 100 user "write" example (117% runtime performance improvement):
Note: 3-level index for non-partitioned and global indexes, only 2 levels for partitioned indexes

Stat              Base         Partition    Delta        Waits Delta
DB Buf I Lock     24,965,752   24,219,272   746,480      48,485,873
DB Buf S Lock     49,845,176   43,137,912   6,707,264    3,669,065
Find index entry  100          5,000,100    -5,000,000
BUF Latch         325,197,441  230,610,801  94,586,640   2,488,178
MTX Latch         31,877,745   31,280,314   597,431      -443,931
TXQ               60,339,744   60,930,902   -591,158     -71,178
Latch timeouts    3,205,140    1,088,120    2,117,020
Resource waits    64,643,103   6,940,906    57,702,197
Extends           44,224       48,224
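In these stat tables, the Delta column is simply Base minus Partition, so a positive delta means partitioning did less of that work. A quick check against the DB Buf I Lock row:

```python
# Delta = Base - Partition, using the DB Buf I Lock row from the table above.
base, partition = 24_965_752, 24_219_272
delta = base - partition
print(f"{delta:,}")  # 746,480
```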
Data Location Mapping Overhead
- For each order where region = "NorthEast" and order-date > 01/01/2013: end.
- Partition mapping via the "special" ppd index
- Cost: one additional index lookup per partition traversed; a query spanning 3 partitions requires only 3 additional partition index lookups
Why so little difference? For the 100 user "read" example (1.8% runtime performance improvement):
- This is an isolated test case: very little conflict for read activity
- Real-world scenarios will have mixed activity introducing concurrency issues, which may be resolved with partitioning

Stat              Base        Partition   Delta       Waits Delta
Index operations  25,000,100  25,000,200  -100        0
DB Buf S Lock     26,059,284  26,044,498  14,786      6,816
BHT Latch         25,872,927  29,698,636  -3,825,709  29,532
BUF Latch         81,084,554  81,295,318  -210,764    -3,519
Latch timeouts    57,193      31,327      25,866
Resource waits    7,172       356         6,816
Data Location Mapping Overhead
- For each order where region = "NorthEast" and order-date > 01/01/2013: DELETE order.
- Partition mapping via the "special" ppd index
- Cost: one additional index lookup per partition traversed; a query spanning 3 partitions requires only 3 additional partition index lookups
Why the big difference? For the 100 user "delete" example (126% runtime performance improvement):
- Partitioning has more shared buffer activity but fewer waits
- The latch timeout and resource wait differences are significant

Stat            Base         Partition    Delta        Waits Delta
DB Buf I Lock   49,764,712   39,622,400   10,142,312   28,235,912
DB Buf S Lock   135,589,472  172,297,568  -36,708,096  45,324,198
BHT Latch       308,628,367  261,774,931  46,853,436   471,875
BUF Latch       527,202,766  516,930,247  10,272,519   2,955,964
Latch timeouts  5,200,069    1,849,659    3,350,410
Resource waits  120,187,904  46,963,709   73,224,195
Numbers From a "Bad" Configuration
Range partitioning by Order-Date
- Create order. Assign Order-date = TODAY.
- "Current" data all in one physical storage area: every insert targets the same partition
Poorly Designed Partitioning Scheme (order-date only)
- Write performance is pretty bad for 25-100 users; expect MUCH flatter write performance
- Improves with 200 users
- Reads remain flat
- Same for other txn sizes
All the Overhead Without Improved Concurrency
For the 100 user "write" example (66% runtime performance loss):
- Waits not significantly different, but extend, index, and buffer activity is MUCH higher
- Test case was broken: data in wrong area

Stat              Base         Partition    Delta       Waits Delta
DB Buf I Lock     24,965,752   24,987,122   -21,370     1,135,820
DB Buf S Lock     49,845,176   54,496,236   -4,651,060  17,760
Find index entry  100          5,000,100    -5,000,000
BUF Latch         325,197,441  332,561,860  -7,364,419  268,565
MTX Latch         31,877,745   31,931,589   -53,844     9,455
TXQ               60,339,744   60,384,584   -44,840     -346
Latch timeouts    3,205,140    2,894,719    310,421
Resource waits    64,643,103   63,340,388   1,302,715
Extends           44,224       173,760
Poorly Designed Partitioning Scheme (order-date only)
- Much flatter response
- No 200 user results
- Further investigation: there is variation in reads and writes from the previous run; stats show the same as well
- From a small gain down to ~7% loss depending on the operation and # of users
All the Overhead Without Improved Concurrency
For the 100 user "write" example (0.38% runtime performance loss):
- Now have similar stats
- Index activity and exclusive buffer locks are the only real standouts

Stat              Base         Partition    Delta        Waits Delta
DB Buf I Lock     24,942,392   24,953,548   -11,156      347,384
DB Buf S Lock     49,868,240   55,152,964   -5,284,724   -16,267
DB Buf X Lock     41,941,720   42,181,992   -240,272     -324,844
Find index entry  100          5,000,100    -5,000,000   0
BUF Latch         321,989,710  333,313,379  -11,323,669  48,054
MTX Latch         31,931,319   31,944,574   -13,255      -4,037
Latch timeouts    3,166,505    3,132,653    33,852
Resource waits    61,308,496                             6,865
Extends           3,392        10,880       -7,488
All the Overhead Without Improved Concurrency
For the 100 user "read" example (0.31% runtime performance loss):
- Very little difference in activity
- Index operations indicate use of the "global index" vs local; probably should rerun with a different query to force a comparative local index lookup

Stat              Base        Partition   Delta       Waits Delta
Index operations  25,000,100              0           0
DB Buf S Lock     26,782,308  26,503,194  279,114     372
BHT Latch         25,863,634  26,933,298  -1,069,664  1,428
BUF Latch         82,393,823  82,089,754  304,069     -790
Latch timeouts    59,747      58,588      1,159
Resource waits    3,893       3,521       372
All the Overhead Without Improved Concurrency
For the 100 user "delete" example (4.56% runtime performance loss):
- TP experiences more activity, and now more waiting (NOTE: buffer intent and share locks)
- Indexes all have identical levels

Stat            Base         Partition    Delta       Waits Delta
DB Buf I Lock   49,708,524   49,747,972   -39,448     -2,270,296
DB Buf S Lock   133,305,600  133,141,072  164,528     -2,075,716
BHT Latch       292,113,659  296,430,870  -4,317,211  -57,465
BUF Latch       508,107,788  511,003,173  -2,895,385  -132,237
Latch timeouts  4,670,032    4,857,119    -187,087
Resource waits  106,292,224  110,639,104  -4,346,880
Maintenance Operations
- 10 GB table (90 million records)
- Binary dump / load
- Index rebuild: local and global
Binary Load Performance
Important to note:
- Concurrent load for TP does NOT foul loading in dump order
  - Multiple users insert on different allocation chains, one per partition, regardless of whether partitions are in the same area or not!
- Concurrent load for non-TP fouls loading in dump order
  - Multiple users insert on the SAME allocation chains
  - "Logical" scatter of data is re-introduced during load
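A toy simulation of why per-partition allocation chains matter, using made-up record IDs: with a single shared chain, concurrently loaded partitions interleave and each partition's rows lose their dump order; with one chain per partition, the order survives.

```python
from itertools import zip_longest

# Each loader replays one partition's dump, already in logical order.
# Record IDs are invented for illustration.
dumps = {
    "North": ["n1", "n2", "n3"],
    "South": ["s1", "s2", "s3"],
}

# Non-TP: every loader inserts on the SAME allocation chain, so rows from
# different loaders interleave and logical scatter is re-introduced.
shared_chain = [r for batch in zip_longest(*dumps.values())
                for r in batch if r is not None]

# TP: one allocation chain per partition keeps each dump contiguous.
per_partition_chains = {p: list(rows) for p, rows in dumps.items()}

print(shared_chain)                   # ['n1', 's1', 'n2', 's2', 'n3', 's3']
print(per_partition_chains["North"])  # ['n1', 'n2', 'n3'] - dump order preserved
```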
Binary Load
- Single user slower than expected
- During TP load, noticed high DBSI contention due to global index support

Operation                        Non-TP Table  TP Entire Table  % Difference
Binary Load -1                   24m35.698s    30m0.474s        -22.90
Binary Load -1 -i                24m36.540s    30m14.441s       -21.95
Binary Load -n 9 **              24m59.249s    14m10.843s       76.15
Binary Load -n 9 (w/apw,biw)     17m3.913s     11m45.879s       45.04
Binary Load -n 9 -i **           17m30.530s    6m41.232s        162.09
Binary Load -n 9 -i (w/apw,biw)  16m53.992s    6m34.930s        150.74
Index Rebuild
- Modest performance loss running offline
  - All TP indexes built; only local indexes need rebuilding after a binary load
  - No real increased concurrency
- Running online in parallel for all local indexes is a huge win
  - Very small amount of BI activity when run online

Operation                      Non-TP Table  TP Entire Table  % Difference
Idxbuild off-line              16m37s        18m26s           -10.93
Idxbuild on-line (2 local)     (9m15.5)      4m49s            92.04
Idxbuild on-line -i (2 local)  (9m15.5)      4m53s            89.42
Binary Dump

Operation                                Non-TP Table  TP Entire Table  % Difference
Binary Dump -1                           5m13.499s     3m21.848s        54.95
Binary Dump -n w/1 exe                   6m10.337s     3m43.359s        65.92
Binary Dump -n w/9 exes                  2m30.658s     1m11.738s        111.27
Binary Dump threaded w/1 exe             2m37.818s     3m40.989s        -125.51
Binary Dump threaded w/9 exes            2m28.074s     1m10.527s        111.43
Binary Dump specified threaded w/9 exes  2m27.048s     1m10.050s        110.00
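Independent of the slide's own % Difference metric, the raw timings can be compared directly. A small parser for the duration strings (format assumed from the table) computes the speedup ratio for the 9-executable parallel dump:

```python
import re

def seconds(s: str) -> float:
    """Parse durations like '2m30.658s' into seconds."""
    m = re.fullmatch(r"(?:(\d+)m)?([\d.]+)s", s)
    if not m:
        raise ValueError(f"unrecognized duration: {s!r}")
    return int(m.group(1) or 0) * 60 + float(m.group(2))

# 'Binary Dump -n w/9 exes' row from the table above
non_tp, tp = seconds("2m30.658s"), seconds("1m11.738s")
print(round(non_tp / tp, 2))  # 2.1 -- the partitioned dump runs roughly twice as fast
```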
Summary
- Partition based on data grouping
- Spread data across partitions
- Create and delete performance improvements: significant for well-designed partition schemes
- Isolated read performance mostly unaffected by partitioning
- Maintenance performance improvements also significant
Want to Learn More About OpenEdge 11?
- Role-based learning paths are available for OpenEdge 11
- Each course is available as instructor-led training or eLearning
- Instructor-led training: $500 per student per day
  https://www.progress.com/support-and-services/education/instructor-led-training
- eLearning via the Progress Education Community (https://wbt.progress.com)
  - OpenEdge Developer Catalog: $1500 per user per year
  - OpenEdge Administrator Catalog: $900 per user per year
- User Assistance videos: https://www.progress.com/products/pacific/help/openedge
New Course: Implementing Progress OpenEdge Table Partitioning
Description: This course teaches the key tasks to partition tables in an OpenEdge RDBMS database. First, you will be introduced to the concepts, types, and tasks of OpenEdge table partitioning. Then, you will learn how to prepare for table partitioning and enable partitioning for a database. Next, you will learn how to create new partitioned tables and partition existing non-partitioned tables. Finally, you will learn how to manage partitions, maintain indexes, and gather statistics for partitioned tables and indexes.
Course duration: Equivalent to 2 days of instructor-led training
Audience: Database administrators who want to partition Progress OpenEdge RDBMS tables
Version compatibility: This course is compatible with OpenEdge 11.4.
After taking this course, you should be able to:
- Describe Progress OpenEdge table partitioning
- Create new partitioned tables
- Partition existing tables
- Manage partitions
- Maintain indexes
- Gather statistics for partitioned tables and indexes