1
Database Performance Measurement
Spring 2002
Sang Hyup Lee
School of Computing, Soongsil Univ.
2
Benchmarks
- To rate the performance of various hardware/software platforms in the same way
- To measure performance in different application domains on different platforms
- To compare ratings of different platforms in terms of their relative price/performance ratios before deciding on an acquisition
3
Custom & Generic Benchmark
- Custom benchmark
  - Created by a particular customer based on a specific application
  - Measured on a number of platforms to aid in an acquisition decision
  - Very costly and hard to maintain
- Generic benchmark
  - Created to be generally useful for a large group of potential customers with applications that are relatively standard across an application domain
  - Also known as a domain-specific benchmark
- Hybrid benchmark
  - Generic benchmark + custom benchmark
4
How to Normalize Results
- It is difficult to compare the performance of the wide variety of database software/hardware platforms (from IBM mainframes to PCs)
- Current practice
  - Requires domain-specific qualifications that must be met (e.g., the ACID properties of transactions for OLTP benchmarks)
  - Reports two specific measures (a worked illustration follows below):
    - a measure of peak performance, say transactions per second (tps)
    - a measure of price/performance, say dollars per tps
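As a minimal, hedged illustration of how these two measures relate, the figures below are invented for the example and do not come from the slides:

```python
# Hypothetical illustration of the two standard measures: peak throughput
# and price/performance. The numbers are invented for this sketch.

peak_tps = 100.0              # measured peak throughput, transactions per second
total_system_cost = 500_000   # total priced system cost, in dollars

price_performance = total_system_cost / peak_tps  # dollars per tps
print(f"Peak performance:  {peak_tps:.0f} tps")
print(f"Price/performance: ${price_performance:,.0f} per tps")
```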
5
Scalar measures vs. Vector measures
- Scalar measures: the idea of using a single performance measure
- Vector measures
  - e.g., the Set Query benchmark reports a vector of (CPU, I/O, elapsed time)
  - Vector measures allow users to distinguish different performance effects in terms of their functional origins, i.e., separability (see the sketch below)
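A minimal sketch of the vector-of-measures idea, assuming an illustrative (CPU, I/O, elapsed) record per query; the names and numbers are hypothetical and this is not the Set Query benchmark's actual reporting format:

```python
from dataclasses import dataclass

@dataclass
class QueryMeasure:
    """One query's measurement vector, in the spirit of (CPU, I/O, elapsed)."""
    cpu_seconds: float
    io_operations: int
    elapsed_seconds: float

# Hypothetical per-query results for a small suite.
suite = [
    QueryMeasure(cpu_seconds=1.2, io_operations=850, elapsed_seconds=3.4),
    QueryMeasure(cpu_seconds=0.4, io_operations=4200, elapsed_seconds=9.1),
]

# A scalar summary collapses everything into one number ...
total_elapsed = sum(q.elapsed_seconds for q in suite)

# ... while the vector keeps the effects separable: here the second query is
# clearly I/O-bound rather than CPU-bound, which a single scalar would hide.
for i, q in enumerate(suite, start=1):
    print(f"Q{i}: cpu={q.cpu_seconds}s io={q.io_operations} elapsed={q.elapsed_seconds}s")
print(f"Scalar summary (total elapsed): {total_elapsed}s")
```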
6
Criteria for Domain-Specific Benchmarks
- Relevance: performs typical operations within the problem domain
- Portability: easy to implement on many different systems and architectures
- Scalability: applicable to small and large computer systems
- Simplicity: easily understandable; otherwise it would lack credibility
7
A Genus of Benchmarks
- Wisconsin benchmark (1983)
- TP1/DebitCredit benchmark (1985)
- Transaction Processing Performance Council (TPC) formed (Aug. 1988)
- TPC-A (Nov. 1989)
- TPC-B (Aug. 1990)
- AS3AP (1991)
- SUN benchmark (OO1, the Object Operations version benchmark) (1992)
- TPC-C (July 1992)
- OO7 benchmark (1993)
- Sequoia 2000 Storage Benchmark (1993)
- TPC-D (April 1995)
8
TPC Benchmarks
- Formed in 1988; originally eight vendors, currently 35 member vendors
- To define transaction processing and database benchmarks
- To limit the possibility that one vendor will change the rules to its own advantage
9
Benchmarks and Application Domains
- OLTP (On-Line Transaction Processing)
- DSS (Decision Support System)
- OODBS (Object-Oriented Database System)
- ECAD (Electronic Computer-Aided Design)
- DRS (Document Retrieval Service)
10
OLTP Domain (1)
- Deals with concurrent order entries by clerks or tellers on thousands of terminals in large applications
- 1985: DebitCredit benchmark
  - A stylized representation of a large bank, with a multi-user workload consisting of a single database transaction that stressed the characteristic OLTP bottlenecks (a sketch of the transaction profile follows below)
- 1985: TP1 benchmark
  - Removed the multi-thread requirement and drove the logic with a small number of batch threads
  - Easier to run, and gave better price/performance numbers
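For concreteness, here is a minimal sketch of the transaction profile commonly described for DebitCredit/TP1-style benchmarks: adjust an account, its teller, and its branch, and log the change. Table and column names are assumptions of this sketch, not the official specification; sqlite3 is used only to keep the example self-contained.

```python
import sqlite3

# Illustrative schema for a DebitCredit/TP1-style transaction profile.
# Names are assumptions for this sketch, not the official specification.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE accounts (account_id INTEGER PRIMARY KEY, branch_id INTEGER, balance INTEGER);
    CREATE TABLE tellers  (teller_id  INTEGER PRIMARY KEY, branch_id INTEGER, balance INTEGER);
    CREATE TABLE branches (branch_id  INTEGER PRIMARY KEY, balance INTEGER);
    CREATE TABLE history  (account_id INTEGER, teller_id INTEGER, branch_id INTEGER, delta INTEGER);
    INSERT INTO branches VALUES (1, 0);
    INSERT INTO tellers  VALUES (1, 1, 0);
    INSERT INTO accounts VALUES (1, 1, 0);
""")

def debit_credit(account_id, teller_id, branch_id, delta):
    """One transaction: adjust account, teller, and branch balances and log it."""
    with conn:  # commits on success, rolls back on error (atomicity)
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE account_id = ?", (delta, account_id))
        conn.execute("UPDATE tellers  SET balance = balance + ? WHERE teller_id  = ?", (delta, teller_id))
        conn.execute("UPDATE branches SET balance = balance + ? WHERE branch_id  = ?", (delta, branch_id))
        conn.execute("INSERT INTO history VALUES (?, ?, ?, ?)", (account_id, teller_id, branch_id, delta))

debit_credit(account_id=1, teller_id=1, branch_id=1, delta=100)
print(conn.execute("SELECT balance FROM accounts WHERE account_id = 1").fetchone()[0])  # 100
```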
11
OLTP Domain (2)
- 1989: TPC-A
  - Standardized DebitCredit, with some variations from the original
- 1990: TPC-B
  - Standardized TP1; no network environment
- 1992: TPC-C
  - The current OLTP benchmark
  - Quite complex and difficult to run
12
Decision Support System (DSS) Domain (1)
- Relatively small numbers of users
- Read-only queries
- Update activity may be undesirable in DSS; an extract of the operational data is often placed in a data warehouse
- Parallelism is another important capability in DSS applications
13
Decision Support System (DSS) Domain (2)
- 1983: Wisconsin benchmark
  - Measures the elapsed time for a relational database system to perform a set of queries requested one after another by a single user
- 1993: AS3AP
  - Created to improve on the Wisconsin benchmark by building scalability considerations into the fundamental design
  - Introduced the concept of equivalent database size
- 1993: Set Query
  - A single-user benchmark with several suites of queries
  - Attempted to offer guidance on how to use the vector of measures
14
Decision Support System (DSS) Domain (3)
- 1995: TPC-D benchmark
  - Can be thought of as executing multi-user queries to analyze the sort of operational business data entered by TPC-C
  - Complex decision support: multi-table joins, sorting, aggregation, ... (a small query sketch follows below)
  - Metrics: queries per hour (QPH) and dollars per query per hour ($/QPH)
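To give a feel for this query style, here is a small decision-support-flavored query over a hypothetical customers/orders schema (illustrative only, not an actual TPC-D query), again using sqlite3 so the sketch runs on its own:

```python
import sqlite3

# Hypothetical mini-schema for a decision-support-style query: a multi-table
# join with aggregation and sorting. Illustrative only, not a TPC-D query.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (cust_id INTEGER PRIMARY KEY, region TEXT);
    CREATE TABLE orders    (order_id INTEGER PRIMARY KEY, cust_id INTEGER,
                            order_date TEXT, total_price REAL);
    INSERT INTO customers VALUES (1, 'ASIA'), (2, 'EUROPE'), (3, 'ASIA');
    INSERT INTO orders VALUES
        (10, 1, '1998-01-05', 120.0),
        (11, 2, '1998-02-11',  80.0),
        (12, 3, '1998-03-20', 200.0);
""")

# Revenue per region: join, group/aggregate, then sort by revenue.
rows = conn.execute("""
    SELECT c.region, COUNT(*) AS num_orders, SUM(o.total_price) AS revenue
    FROM   orders o JOIN customers c ON o.cust_id = c.cust_id
    GROUP  BY c.region
    ORDER  BY revenue DESC
""").fetchall()

for region, num_orders, revenue in rows:
    print(f"{region:8s} orders={num_orders} revenue={revenue:.2f}")
```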
15
OODBS, Engineering Workstation
- 1993: OO1 benchmark
  - A relatively simple, single-user workstation benchmark
  - Emphasizes main-memory-resident access to data and claims to be independent of the underlying data model (a traversal sketch follows below)
- 1993: OO7 benchmark
  - Created to provide a comprehensive measure of some of the capabilities of object-oriented database systems that were not covered by OO1
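As a rough, hedged sketch of the kind of main-memory, pointer-chasing access pattern OO1 is usually described as emphasizing: a traversal over an in-memory graph of parts. The graph size, fanout, and depth below are arbitrary choices for illustration, not the benchmark's specification.

```python
import random

# Illustrative in-memory "parts" graph in the spirit of OO1's traversal
# operation: each part connects to a few other parts, and the workload
# repeatedly chases those connections. Sizes here are arbitrary.
NUM_PARTS = 10_000
FANOUT = 3
DEPTH = 7

random.seed(0)
connections = {
    part_id: [random.randrange(NUM_PARTS) for _ in range(FANOUT)]
    for part_id in range(NUM_PARTS)
}

def traverse(part_id: int, depth: int) -> int:
    """Visit a part and, recursively, its connected parts; return the visit count."""
    if depth == 0:
        return 1
    return 1 + sum(traverse(p, depth - 1) for p in connections[part_id])

visited = traverse(random.randrange(NUM_PARTS), DEPTH)
print(f"parts visited: {visited}")  # 1 + 3 + 9 + ... for the chosen fanout/depth
```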
16
Full Text Retrieval(FTR)
- Created to represent the workload generated by a generic document retrieval service
- Modeled after the TPC benchmarks, with a multi-user workload and a performance rating in search transactions per minute (tpm-S)
- Based on a good deal of experience with real document retrieval applications
17
Research issues
- What are good criteria for domain-specific benchmarks?
  - Currently: relevance, portability, scalability, simplicity
- What about scalar versus vector measures?
- How to incorporate the actual costs of system ownership
- Multi-user benchmarks
- How to achieve separability of performance effects
- How to develop an analytic framework
- New applications or models (e.g., BLOBs)