Slide 1: What will my performance be? Resource Advisor for DB admins
Dushyanth Narayanan, Paul Barham (Microsoft Research, Cambridge)
Eno Thereska, Anastassia Ailamaki (Carnegie Mellon University)
Slide 2: It's all about resources
(Dushyanth Narayanan, December 2004, Carnegie Mellon University)
- Imagine you're a DBMS admin:
  - must capacity-plan and provision resources
  - keep clients happy with performance
  - use the upgrade budget intelligently
- [Diagram: clients -> DBMS (process management, buffer management, background tasks) -> storage system]
Slide 3: Current approaches
- Over-provisioning: punts the hard problem; costs $$$
- Hiring more experts: still ad hoc and rule-of-thumb; BIG $$$
- Aggregate statistics: still need humans to interpret them; don't tell the whole story
Slide 4: What's wrong with aggregate stats?
- Example: long I/O queues
  - More buffer-cache memory? A faster disk?
  - How much to buy before the bottleneck shifts?
  - How much will performance improve?
  - What will happen to latency?
Slide 5: What we want
- Resource Advisor:
  - automated, zero-configuration
  - workload-agnostic
  - runs on a live system
- Answers "what-if" questions:
  admin> memory * 2?
  resource advisor> throughput ↑ 40%, latency ↓ 80%, bottleneck becomes CPU
Slide 6: In this talk
- Live monitoring of an OLTP workload (high concurrency)
- Memory is the resource of interest:
  - the "hardest" resource: non-linear, workload-dependent effect
  - lots of existing work on CPU/disk models
- Predict performance when memory changes:
  - throughput
  - mean latency by transaction type
Slide 7: Outline
- Introduction and motivation
- Resource monitoring (end-to-end tracing)
- Resource models (buffer cache simulator)
- Performance prediction (throughput, latency)
- Experimental results (some)
- Future work (lots)
Slide 8: Resource Advisor architecture
[Architecture diagram]
Slide 9: Monitoring resource consumption
- Instrument Yukon (SQL Server 2005) code; trace the live system:
  - transaction request start/end
  - stored-procedure calls
  - buffer fetches, prefetches, touches
  - user-level context switches
  - I/O requests and completions
  - background activity
  - NT context switches
- Events posted through ETW (Event Tracing for Windows)
- Low overhead; cycle-accurate timestamps
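The slide doesn't show the trace format; a minimal sketch of what one such cycle-timestamped event record and a per-request extraction helper might look like (all names, fields, and the event list here are hypothetical illustrations, not the actual Yukon/ETW schema):

```python
from dataclasses import dataclass
from enum import Enum, auto

class EventKind(Enum):
    # Hypothetical subset of the event types listed on the slide
    TXN_START = auto()
    TXN_END = auto()
    STORED_PROC_CALL = auto()
    BUFFER_FETCH = auto()
    BUFFER_TOUCH = auto()
    IO_REQUEST = auto()
    IO_COMPLETE = auto()
    CONTEXT_SWITCH = auto()

@dataclass(frozen=True)
class TraceEvent:
    cycles: int          # cycle-accurate timestamp (e.g. from the CPU cycle counter)
    kind: EventKind
    request_id: int      # transaction/request this event belongs to
    detail: int = 0      # e.g. buffer page id or I/O block number

def events_for_request(trace, request_id):
    """Extract one request's events, in timestamp order."""
    return sorted((e for e in trace if e.request_id == request_id),
                  key=lambda e: e.cycles)
```

Tagging every event with a request id is what makes the end-to-end, per-request view on the next slide possible.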
Slide 10: End-to-end visualization
- Detailed, per-request information
Slide 11: Demand trace extraction
- Separate demand from service:
  - the demand process is independent of hardware and scheduling
- Trace above the resource schedulers:
  - buffer references, not disk I/Os
  - per-request virtual CPU cycles
- But as low as possible: below the prefetch engine and query planner
  - avoids modeling complex components
Slide 12: Resource Advisor architecture (revisited)
[Architecture diagram]
Slide 13: Resource models
- Buffer cache simulator:
  - inputs: reference trace, memory size, "stolen" memory
  - stochastic LFU replacement (same as Yukon)
  - ignores asynchronous/background activity
- Analytic disk model:
  - inputs: disk parameters, queue length
  - service time as a function of queue length [Seltzer90]
  - queue length derived from throughput and #users
- CPU scaling:
  - measure virtual CPU time in cycles
  - assume clock speed == performance
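The core of the buffer cache simulator is replaying the reference trace through a cache of the candidate size and counting misses. A minimal sketch, using exact LFU eviction for clarity (the actual system uses a stochastic approximation of LFU, and also accounts for "stolen" memory; `simulate_lfu` and its signature are illustrative, not the prototype's API):

```python
def simulate_lfu(trace, cache_pages):
    """Replay a buffer reference trace through an LFU cache of
    `cache_pages` page frames; return the number of misses."""
    freq = {}      # resident page -> reference count
    misses = 0
    for page in trace:
        if page in freq:
            freq[page] += 1          # hit: bump the frequency
        else:
            misses += 1              # miss: fetch, evicting if full
            if len(freq) >= cache_pages:
                victim = min(freq, key=freq.get)   # least frequently used
                del freq[victim]
            freq[page] = 1
    return misses
```

Running this at each candidate memory size turns the trace into a miss-rate curve, which the disk model then converts into I/O load.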
Slide 14: Throughput prediction
- Predict bottleneck throughput: I/O-bound, CPU-bound, or client-bound
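The bottleneck idea can be sketched as taking the minimum of three per-resource throughput limits (a simplified single-CPU, single-disk sketch; the function and its parameters are illustrative assumptions, not the paper's actual model):

```python
def predict_throughput(cpu_demand_s, io_demand_s, n_users, think_time_s):
    """Predicted throughput (txns/s) is the tightest of three limits:
    CPU-bound, I/O-bound, and client-bound (closed-loop users)."""
    cpu_bound = 1.0 / cpu_demand_s         # one CPU fully busy
    io_bound = 1.0 / io_demand_s           # one disk fully busy
    client_bound = n_users / think_time_s  # each user submits ~1 txn per think time
    return min(cpu_bound, io_bound, client_bound)
```

The per-transaction CPU and I/O demands come from the demand trace and the resource models: the cache simulator gives the I/O count at the new memory size, and the disk model gives the time per I/O.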
Slide 15: Latency prediction
- Scale I/O wait time by:
  - #blocking I/Os per transaction
  - disk busyness
- Keep CPU time unchanged
- Aggregate by transaction type (inferred from stored-procedure calls)
- A more sophisticated model is possible; we have the information
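The scaling rule above can be sketched as follows (a hypothetical rendering of the slide's two scaling factors; the function name, parameters, and the multiplicative form are assumptions, not the paper's exact formula):

```python
def predict_latency(cpu_time_s, old_io_wait_s,
                    old_blocking_ios, new_blocking_ios,
                    old_disk_busy, new_disk_busy):
    """Keep CPU time fixed; scale the measured I/O wait by the change in
    blocking I/Os per transaction and in disk busyness (a busier disk
    means longer queues, hence longer waits per I/O)."""
    io_scale = (new_blocking_ios / old_blocking_ios) * (new_disk_busy / old_disk_busy)
    return cpu_time_s + old_io_wait_s * io_scale
```

For example, halving both the blocking I/Os (more memory means fewer misses) and the disk busyness shrinks the I/O wait component to a quarter of its measured value, while CPU time is unchanged.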
Slide 16: Outline (revisited)
- Introduction and motivation
- Resource monitoring (end-to-end tracing)
- Resource models (buffer cache simulator)
- Performance prediction (throughput, latency)
- Experimental results (some)
- Future work (lots)
Slide 17: Evaluation
- TPC-C workload, two variants:
  - closed loop, saturation
  - open loop, low load
- Memory is the only varying resource: 64, 128, 256, 512, 1024 MB
- One server processor and disk: 2.7 GHz Xeon, 80 GB disk
- 200 concurrent users
Slide 18: Cache simulator accuracy [graph]
Slide 19: Disk model accuracy [graph]
Slide 20: Throughput prediction accuracy [graph]
Slide 21: Changing the transaction rate [graph]
Slide 22: Latency prediction accuracy [graph]
Slide 23: Latency has high variance [graph]
Slide 24: Evaluation summary
- Works well for memory, despite:
  - an imperfect cache simulator
  - a simplistic disk model
  - high variance in observed latency
- Live tracing overheads are reasonable:
  - 1.1% CPU
  - 0.4 MB/s of trace data
  - 64 MB of buffering
Slide 25: Future work
- Online simulation/analysis
- Changing transaction mix
- Better latency model; latency distributions
- Changing CPU, disk
- Other workloads
- Automatic feedback-based tuning: allocate resources by application or by transaction type
Slide 26: Summary
- DB admins need a Resource Advisor:
  - "what-if" questions for capacity planning
  - understanding current system performance
- Must be zero-configuration:
  - live-system tracing
  - hardware/workload agnostic
- We have an architecture and prototype:
  - plug-and-play resource models
  - works for OLTP / memory changes