Fast Track, Microsoft SQL Server 2008 Parallel Data Warehouse, and Traditional Data Warehouse Design
BI Best Practices and Tuning for Scaling SQL Server 2008
Data Warehouse
Fast Track
PDW
Diagram: a traditional multidimensional (MD) design feeding SSAS, compared with PDW feeding SSAS.
Characteristic: Data activity
  Typical BI (DWs and DMs): large reads (disjoint sequential scans); large writes (new data appends); large-scale hashing
  OLTP (operational database): indexed reads and writes; small transactions; constant small index reads, writes, and updates

Characteristic: Database sweet-spot size
  Typical BI: hundreds of gigabytes to terabytes (need medium to large storage farms)
  OLTP: gigabytes (require smaller to medium-sized storage farms)

Characteristic: Time period
  Typical BI: historical (contributes to large data volumes)
  OLTP: current

Characteristic: Queries
  Typical BI: largely unpredictable
  OLTP: predictable

Characteristic: I/O throughput requirement
  Typical BI: up to 20 GB/sec sustained throughput
  OLTP: IOPS matters more than sustained throughput
The options: Microsoft/HP Fast Track reference configurations, OR SQL Server Parallel Data Warehouse (PDW), OR SQL Server/HP traditional DW design reference configurations. Each reflects a different logical and physical DB design philosophy, with lower hardware costs as a draw. Mmm, what will my logical and physical DB design look like?
It is not uncommon to need hundreds of disk drives (typically configured as RAID 5) to support the I/O throughput requirements of a traditional DW environment.
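The "hundreds of disk drives" claim follows from simple division: target throughput over per-drive throughput. A minimal sketch, with assumed (illustrative, not from the slides) per-drive rates:

```python
# Illustrative sketch: estimate how many drives a traditional DW needs to
# sustain a target scan throughput. The per-drive figure is an assumed,
# era-typical number for fragmented/random-heavy access, not a spec.

def drives_needed(target_mb_per_sec, per_drive_mb_per_sec):
    """Round up the number of drives required for the target throughput."""
    return -(-target_mb_per_sec // per_drive_mb_per_sec)  # ceiling division

# Assume ~50 MB/s effective per drive and a 10 GB/s sustained-scan target
# (half the 20 GB/s upper bound from the comparison table above).
print(drives_needed(10_000, 50))  # 200 drives -> "hundreds of disk drives"
```

Sequential streaming raises the effective per-drive rate, which is exactly the lever Fast Track and PDW pull in the slides that follow.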
How do Fast Track and PDW get their speed? For an X-ray view at the physical disk level, let's first look at a traditional DW...
In a traditional DW, data is stored wherever it happens to land rather than sequentially: the fact table's initial load and its 2nd-, 3rd-, 5th-, and 6th-day loads end up scattered across the disk.
Diagram: each column's indexes and pre-calculated (summary) data sit alongside the base data, duplicating it on disk.
Disk throughput is slower with indexes, aggregates, and summary tables; an index-lite design is faster because there is less disk-head movement, and eliminating indexes while storing data sequentially provides the fastest disk throughput of all. A traditional DW design relies on indexes and summary tables; Fast Track and PDW are index-lite and achieve the fastest sequential scan rates.
Example: average disk seek time is typically about 4 ms (a full stroke is about 7.5 ms). At 15K RPM a drive makes 250 revolutions/sec, so a full revolution takes 4 ms and the average rotational latency is about 2 ms. Fast Track and PDW are designed to stream large blocks of data sequentially, which is even faster than "average latency" because the disk heads are already directly over the streaming data.
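The slide's latency arithmetic can be checked directly:

```python
# The rotational-latency arithmetic from the slide: a 15,000 RPM drive
# makes 250 revolutions per second, so one full revolution takes 4 ms
# and the average rotational latency (half a revolution) is ~2 ms.

rpm = 15_000
revs_per_sec = rpm / 60                    # 250 revolutions per second
full_revolution_ms = 1000 / revs_per_sec   # 4.0 ms per full revolution
avg_latency_ms = full_revolution_ms / 2    # 2.0 ms average latency

print(revs_per_sec, full_revolution_ms, avg_latency_ms)  # 250.0 4.0 2.0
```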
Why do PDW and Fast Track want data stored sequentially? Seek time is typically 2-4x longer than average rotational latency, so by eliminating seeks you can use approximately 2-4x fewer disk drives while maintaining a given throughput level. Fast Track and PDW are designed to stream large blocks of data sequentially!
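A minimal sketch of the "2-4x fewer drives" claim, using the slide's own figures (4 ms average seek, 2 ms average rotational latency):

```python
# Illustrative sketch: per-I/O service time is seek plus average rotational
# latency. Eliminating seeks (pure sequential streaming) shrinks service
# time, so each drive delivers proportionally more throughput and fewer
# drives are needed for the same total. Figures are from the slide's example.

def service_time_ms(seek_ms, latency_ms):
    """Per-I/O service time: seek plus average rotational latency."""
    return seek_ms + latency_ms

random_io = service_time_ms(4.0, 2.0)      # 6 ms with a 4 ms average seek
sequential_io = service_time_ms(0.0, 2.0)  # 2 ms once heads stay on track

print(random_io / sequential_io)  # 3.0 -> roughly 3x fewer drives needed
```

With a slower 8 ms seek the ratio rises toward 5x, which brackets the slide's 2-4x rule of thumb.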
Fast Track and PDW get their speed from fast scan rates! In addition, HP and SQL Server PDW use Massively Parallel Processing (MPP) to expand the Fast Track concepts into a BI "appliance".
Diagram: traditional DB design compared with Fast Track or PDW.
Basic: 6-12 TB, DL38x with MSA2000
Mainstream: 12-24 TB, DL585 G6 with MSA2000
Mainstream: 16-32 TB, DL580 G5 with MSA2000 G2
Premium: 24-48 TB, DL785 G6 with MSA2000 G2
HP SQL Server 2008 Parallel Data Warehouse (PDW) Control Rack Data Rack
Free your IT pressures... get more value. Without HP Factory Express vs. with HP Factory Express: faster time to solution, free up valuable IT resources, maximize your IT investment.
ProLiant Servers
Miscellaneous Techniques to Improve SQL Server BI Performance
SQL Server Analysis Services 2008
SQL Server Analysis Services 2008: Techniques to Improve Performance
SSAS has two major components:
- Formula Engine: does most of the analysis work and tries to keep cells in memory; fast clock speeds are best.
- Storage Engine: if cells are not in memory, the Storage Engine fetches the data from disk. The goal is to minimize Storage Engine use and keep data in memory for the Formula Engine; faster storage (SSD) or more disk drives give quicker Storage Engine responses.
Manage the partitions in your AS database according to the query performance required: large cubes (> 100 GB) may not fit in memory, so design partitions to get into memory as quickly as possible. Best practice: fewer than 4 million cells per partition.
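The 4-million-cell best practice implies a minimum partition count for a given cube. A sketch of that sizing arithmetic (the cube size below is an assumed example, not from the slides):

```python
# Hedged sketch of the slide's partition-sizing rule: keep SSAS partitions
# under ~4 million cells so each can be pulled into memory quickly.

import math

CELLS_PER_PARTITION = 4_000_000  # best-practice ceiling from the slide

def min_partitions(total_cells):
    """Smallest partition count that keeps every partition under the ceiling."""
    return math.ceil(total_cells / CELLS_PER_PARTITION)

# Assumed example: a 1-billion-cell cube.
print(min_partitions(1_000_000_000))  # 250 partitions
```

In practice you would also partition along query boundaries (e.g. by date) so that hot partitions are small and cold ones rarely touch the Storage Engine.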
Tune memory
Buffers are allocated via execution trees; each of these numbered steps represents a new execution tree. Spawning multiple copies of the package, each working on a horizontal partition of the data, creates more process space and more execution trees.
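The idea of running package copies over horizontal partitions can be sketched outside SSIS. A minimal Python illustration (the data and transform are made-up stand-ins, not SSIS APIs):

```python
# Illustrative sketch of horizontal partitioning: split the rows into N
# slices and run the same transform over each slice in its own worker,
# mimicking N copies of a package each owning a partition of the data.

from concurrent.futures import ThreadPoolExecutor

def transform(rows):
    """Stand-in for the package's per-row work (here: double each value)."""
    return [r * 2 for r in rows]

def run_partitioned(rows, copies):
    """Split rows into `copies` horizontal partitions and transform each
    partition in a separate worker, then reassemble the results in order."""
    size = -(-len(rows) // copies)  # ceiling division for slice size
    parts = [rows[i:i + size] for i in range(0, len(rows), size)]
    with ThreadPoolExecutor(max_workers=copies) as pool:
        results = pool.map(transform, parts)
    return [x for part in results for x in part]

print(run_partitioned(list(range(8)), 4))  # [0, 2, 4, 6, 8, 10, 12, 14]
```

In SSIS the parallelism comes from actual package instances and their execution trees; the sketch only shows the partition-then-merge pattern.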
www.microsoft.com/teched www.microsoft.com/learning http://microsoft.com/technet http://microsoft.com/msdn
Sign up for Tech·Ed 2011 and save $500, starting June 8: http://northamerica.msteched.com/registration. You can also register at the North America 2011 kiosk located at registration. Join us in Atlanta next year!