TPC Benchmarks: TPC-A and TPC-B


TPC Benchmarks: TPC-A and TPC-B
May 2002
Prof. Sang Ho Lee, Soongsil University (shlee@computing.ssu.ac.kr)

Who/What is the TPC
- Founded in 1988 by 8 vendors (led by Omri Serlin); now over 40 vendors/users
- Mission: to define transaction processing and database benchmarks and to disseminate objective and verifiable TPC performance data to the industry
- De facto industry standards body for OLTP performance
- Administered by Shanley Public Relations, 777 N. First St., Suite 600, San Jose, CA 95112-6311; Tel: 408-295-8894
- Most TPC specs, information, and results are on the web page: http://www.tpc.org/

TPC Current Benchmarks
- Benchmark C: the current OLTP benchmark. Two metrics: transactions-per-minute-C (tpmC) and price-per-tpmC ($/tpmC)
- Benchmark H: ad-hoc decision support benchmark. Two metrics: queries-per-hour (QphH) and price-per-QphH ($/QphH)
- Benchmark R: business reporting benchmark. Two metrics: queries-per-hour (QphR) and price-per-QphR ($/QphR)
- Benchmark W: web commerce benchmark. Two metrics: web-interactions-per-second (WIPS) and price-per-WIPS ($/WIPS)
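
Each benchmark pairs a throughput metric with a price/performance metric, which is just the total priced system cost divided by the throughput. A minimal sketch of that relationship (the function name and the figures are illustrative, not a published result):

```python
def price_per_tpmc(total_system_cost_usd: float, tpmc: float) -> float:
    """Price/performance metric: total priced configuration cost / throughput."""
    return total_system_cost_usd / tpmc

# Illustrative numbers only:
print(f"${price_per_tpmc(1_500_000, 100_000):.2f}/tpmC")  # -> $15.00/tpmC
```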

TPC Obsolete Benchmarks
- Benchmark A: based on the Debit/Credit benchmark; declared obsolete June 1995
- Benchmark B: the database half of the Debit/Credit benchmark; no network environment
- Benchmark D: complex decision support (multi-table joins, sorting, aggregation, ...). Two metrics: power (QppD@size) and throughput (QthD@size); declared obsolete April 1999

Aborted TPC benchmark efforts
- TPC Server benchmark (TPC-S): to create a server version of TPC-C by removing TPC-C's front-end and remote terminal emulation (RTE) requirement; abandoned Dec. 1995
- TPC-E, the Enterprise benchmark: TPC-C is significantly more complex and robust than TPC-A, but still not complex enough to stress very large, enterprise-class systems; abandoned 1996
- TPC Client/Server: would have added 3 transactions to the TPC-C benchmark; the Web changed the computing paradigm, so the effort moved to TPC-W; abandoned mid-1995

TPC-A and TPC-B History
- 1985: DebitCredit benchmark (Anon et al., "A Measure of Transaction Processing Power," Datamation); the TP1 benchmark was a vendor implementation that typically departed from the DebitCredit specification in one way or another
- Aug. 1988: Transaction Processing Performance Council formed
- Nov. 1989: TPC-A, OLTP with LAN or WAN
- Aug. 1990: TPC-B, OLTP with no network
- July 1992: TPC-C, on-line business transaction processing
- Jan. 1995: no new results on TPC-A/B (i.e., the industry moved to TPC-C)

TPC-A
- Published in November 1989; the Council's version of the Debit/Credit test
- On-line transaction processing (OLTP) benchmark, set in the context of a bank application
- Metrics: throughput (tpsA-Local, tpsA-Wide) and price-per-performance (dollars/tpsA-Local, dollars/tpsA-Wide)
- About 300 TPC-A benchmark results were published; the highest was 3,692 tpsA at a cost of $4,873 per tpsA (implying a total priced system of roughly $18 million)

Logical database design (entity-relationship sketch): BRANCH is related 1:M to TELLER and 1:M to ACCOUNT; each transaction appends a record to the HISTORY table.
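
A minimal sketch of that schema as DDL executed from Python. Column names follow the transaction profile on the next slide; the types and the use of sqlite3 are illustrative, not the 100-byte rows the spec actually mandates:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE branch  (Branch_ID  INTEGER PRIMARY KEY,
                          Branch_Balance INTEGER);
    CREATE TABLE teller  (Teller_ID  INTEGER PRIMARY KEY,
                          Branch_ID  INTEGER REFERENCES branch,  -- 1:M branch-teller
                          Teller_Balance INTEGER);
    CREATE TABLE account (Account_ID INTEGER PRIMARY KEY,
                          Branch_ID  INTEGER REFERENCES branch,  -- 1:M branch-account
                          Account_Balance INTEGER);
    CREATE TABLE history (Aid INTEGER, Tid INTEGER, Bid INTEGER,
                          Delta INTEGER, Time_Stamp TEXT);       -- one row per transaction
""")
```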

TPC-A transaction profile
- Read 100 bytes, including Aid, Tid, Bid, Delta, from terminal
- Begin transaction
- Update Account where Account_ID = Aid: read Account_Balance from Account; set Account_Balance = Account_Balance + Delta; write Account_Balance to Account
- Write to History: Aid, Tid, Bid, Delta, Time_Stamp
- Update Teller where Teller_ID = Tid: set Teller_Balance = Teller_Balance + Delta; write Teller_Balance to Teller
- Update Branch where Branch_ID = Bid: set Branch_Balance = Branch_Balance + Delta; write Branch_Balance to Branch
- Commit transaction
- Write 200 bytes (Aid, Tid, Bid, Delta, Account_Balance) to terminal
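
A sketch of that profile in Python against the schema above; sqlite3 stands in for the real DBMS, and the terminal I/O framing is elided:

```python
def debit_credit(conn, aid, tid, bid, delta):
    """One TPC-A transaction: update account, log history, update teller and branch."""
    with conn:  # BEGIN ... COMMIT; rolls back automatically on exception
        # Update Account: read balance, add Delta, write it back
        (balance,) = conn.execute(
            "SELECT Account_Balance FROM account WHERE Account_ID = ?", (aid,)
        ).fetchone()
        balance += delta
        conn.execute("UPDATE account SET Account_Balance = ? WHERE Account_ID = ?",
                     (balance, aid))
        # Write to History
        conn.execute("INSERT INTO history VALUES (?, ?, ?, ?, datetime('now'))",
                     (aid, tid, bid, delta))
        # Update Teller and Branch
        conn.execute("UPDATE teller SET Teller_Balance = Teller_Balance + ? "
                     "WHERE Teller_ID = ?", (delta, tid))
        conn.execute("UPDATE branch SET Branch_Balance = Branch_Balance + ? "
                     "WHERE Branch_ID = ?", (delta, bid))
    return balance  # goes back to the terminal in the 200-byte reply
```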

Test configuration of TPC-A
- System Under Test (SUT): one or more processing units (e.g., hosts, front-ends, workstations, etc.); the hardware and software components of all networks; the data storage media and host system(s) supporting the database
- Driver System, providing Remote Terminal Emulator (RTE) functionality: generates and sends 100-byte transactional messages to the SUT; receives the 200-byte responses; records message response times; performs conversion and/or multiplexing into the communications protocol; does the statistical accounting
- Driver/SUT communication interface
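
A minimal sketch of the RTE side of that loop. The 100-byte request and 200-byte reply sizes come from the slide; the sut.exchange interface is a hypothetical stand-in for the driver/SUT communication layer:

```python
import time, random

def emulate_terminal(sut, n_requests, response_times):
    """Drive the SUT as one emulated terminal and record response times."""
    for _ in range(n_requests):
        request = random.randbytes(100)        # 100-byte transactional message
        start = time.perf_counter()
        reply = sut.exchange(request)          # hypothetical driver/SUT interface
        response_times.append(time.perf_counter() - start)
        assert len(reply) == 200               # 200-byte response expected
```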

(Figure: SUT, Driver, and Communication of TPC-A)

Features of TPC-A
- Strong ACID requirements, with tests specified
- Response time constraint: 90% of all transactions must have a response time of less than 2 seconds (response time is measured at the driver)
- Transaction arrival distribution: random
- History file: horizontal partitioning permitted
- Pricing covers all hardware and software, plus 5 years of maintenance cost, excluding physical communications media
- Detailed reporting required: the test sponsor submits a full disclosure report (FDR)
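
The 90% constraint can be checked directly against the response times the driver records; a small sketch (function name is illustrative):

```python
def meets_response_time_constraint(response_times_s, limit_s=2.0, quantile=0.90):
    """True if at least 90% of transactions finished in under 2 seconds."""
    under_limit = sum(1 for t in response_times_s if t < limit_s)
    return under_limit / len(response_times_s) >= quantile
```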

Scaling rules
For each nominal tps configured, the test must use a minimum of:
- Account records/rows: 100,000
- Teller records/rows: 10
- Branch records/rows: 1
- History records/rows: 2,592,000 (90 eight-hour days at 1 tps: 90 * 8 * 60 * 60)
- Terminals: 10
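
The rules are linear in the nominal throughput, so the minimum configuration for a target tps rating is a straightforward multiplication; a sketch:

```python
SCALE_PER_TPS = {            # minimum sizing per nominal tps configured
    "account_rows": 100_000,
    "teller_rows":  10,
    "branch_rows":  1,
    "history_rows": 2_592_000,  # 90 eight-hour days at 1 tps: 90 * 8 * 60 * 60
    "terminals":    10,
}

def minimum_configuration(nominal_tps):
    return {item: count * nominal_tps for item, count in SCALE_PER_TPS.items()}

# e.g. a 100-tps configuration needs at least 10,000,000 account rows
print(minimum_configuration(100))
```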

Pacing of transactions by emulated terminals
Each emulated terminal, after sending a request to update the database to the SUT, must wait for a given "think time" after receiving the reply before sending the next request.
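
A sketch of that pacing loop, extending the terminal emulator above. A mean cycle of roughly 10 seconds per terminal is an inference from the scaling rule of 10 terminals per nominal tps; the exponential think-time distribution is a common modelling choice, not something this slide mandates:

```python
import time, random

def paced_terminal(sut, n_requests, response_times, mean_think_s=10.0):
    """Like emulate_terminal above, but with think time between requests."""
    for _ in range(n_requests):
        start = time.perf_counter()
        sut.exchange(random.randbytes(100))    # request/reply as before
        response_times.append(time.perf_counter() - start)
        # Think time begins only after the reply has been received.
        time.sleep(random.expovariate(1.0 / mean_think_s))
```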

TPC-B
- Officially approved in August 1990; the Council's version of the TP1 benchmark
- Not an OLTP benchmark: no terminals with think time; a batch transaction generator instead
- Metrics: tpsB, dollars/tpsB
- About 130 TPC-B results were published; the highest was 2,025 tpsB at $254 per tpsB
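
With no terminals, the TPC-B driver reduces to a tight loop submitting transactions back-to-back and measuring how long each resides in the server. A sketch reusing debit_credit from the TPC-A profile above; the key ranges follow the per-tps scaling rules, while the delta range is illustrative:

```python
import time, random

def batch_generator(conn, n_branches, duration_s):
    """TPC-B-style driver: no terminals, no think time, no network."""
    residence_times = []
    deadline = time.perf_counter() + duration_s
    while time.perf_counter() < deadline:
        bid = random.randint(1, n_branches)
        tid = random.randint(1, 10 * n_branches)        # 10 tellers per branch
        aid = random.randint(1, 100_000 * n_branches)   # 100,000 accounts per branch
        delta = random.randint(-99_999, 99_999)         # illustrative range
        start = time.perf_counter()
        debit_credit(conn, aid, tid, bid, delta)        # from the TPC-A sketch above
        residence_times.append(time.perf_counter() - start)
    return len(residence_times) / duration_s            # rough tpsB figure
```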

TPC-A vs. TPC-B
- Similarities to TPC-A: same transaction profile, ACID requirements, and costing formula
- Differences from TPC-A: use of a batch transaction generator; no terminal emulation with think time; no network configuration; the concept of a user does not exist; response time is replaced by residence time (i.e., how long the transaction resides within the database server)