Lecture 12: Storage Systems Performance Kai Bu



Lab 4: demo due; report due May 27. Quiz: June 3. PhD positions in HK PolyU.

Appendix D.4–D.5

Outline I/O Performance Queuing Theory


I/O Performance: unique measures. Diversity: which I/O devices can connect to the computer system? Capacity: how many I/O devices can connect to the computer system?

I/O Performance: producer-server model. The producer creates tasks to be performed and places them in a buffer; the server takes tasks from the FIFO buffer and performs them.

I/O Performance. Response time / latency: the time a task takes from the moment it is placed in the buffer until the server finishes it. Throughput / bandwidth: the average number of tasks completed by the server over a time period.

Throughput vs. Response Time: competing demands. The highest possible throughput requires that the server never be idle, so the buffer should never be empty. Response time counts time spent in the buffer, so an empty buffer shrinks it.

Throughput vs Response Time

Choosing Response Time. Transaction: an interaction between the user and the computer. Transaction time consists of: Entry time: the time for the user to enter the command. System response time: the time between the command being entered and the complete response being displayed. Think time: the time from receiving the response to the user entering the next command.

Choosing Response Time: what happens when response time is reduced from 1 s to 0.3 s?

Choosing Response Time: total transaction time shrinks by more than the response-time reduction alone.

Choosing Response Time People need less time to think when given a faster response

I/O Benchmarks Response time restrictions for I/O benchmarks

TPC: conducted by the Transaction Processing Performance Council. OLTP stands for online transaction processing. TPC measures I/O rate (the number of disk accesses per second) instead of data rate (bytes of data per second).

TPC

TPC-C configuration: uses a database to simulate an order-entry environment of a wholesale supplier. Transactions include entering and delivering orders, recording payments, checking the status of orders, and monitoring the level of stock at the warehouses. Runs five concurrent transaction types of varying complexity. Includes nine tables with a scalable range of records and customers.

TPC-C metrics: tpmC, transactions per minute; system price, covering hardware, software, and three years of maintenance support.

TPC: distinctive characteristics. Price is included with the benchmark results. The dataset generally must scale in size as the throughput increases. The benchmark results are audited. Throughput is the performance metric, but response times are limited. An independent organization maintains the benchmarks.

SPEC Benchmarks: best known for its characterization of processor performance; has also created benchmarks for file servers, mail servers, and Web servers: SFS, SPECMail, SPECWeb.

SPEC File Server Benchmark (SFS): a synthetic benchmark agreed on by seven companies to evaluate systems running the Sun Microsystems Network File System (NFS). SFS 3.0 (also called SPEC SFS97_R1) adds support for NFS version 3.

SFS scales the amount of data stored according to the reported throughput and also limits the average response time.

SPECMail: evaluates the performance of mail servers at an Internet service provider. SPECMail 2001 is based on the standard Internet protocols SMTP and POP3; it measures throughput and user response time while scaling the number of users from 10,000 to 1,000,000.

SPECWeb: evaluates the performance of World Wide Web servers by measuring the number of simultaneous user sessions. SPECWeb2005 simulates accesses to a Web service provider whose server supports home pages for several organizations. Three workloads: Banking (HTTPS), E-commerce (HTTP and HTTPS), and Support (HTTP).

Dependability Benchmark Examples: TPC-C. The benchmarked system must be able to handle a single disk failure, so in measured submissions, submitters run some RAID organization in their storage system.

Dependability Benchmark Examples: effectiveness of fault tolerance. Availability is measured by examining the variations in system quality-of-service metrics over time as faults are injected into the system. For a Web server, performance is requests satisfied per second, and the degree of fault tolerance is the number of faults tolerated by the storage system, network connection topology, and so forth.

Effectiveness of fault tolerance: SPECWeb99 with single-fault injection, e.g., a write error in a disk sector; compares the software RAID implementations provided by Linux, Solaris, and Windows 2000 Server.

SPECWeb99: fast reconstruction decreases application performance because reconstruction steals I/O resources from running applications.

SPECWeb99 Linux and Solaris initiate automatic reconstruction of the RAID volume onto a hot spare when an active disk is taken out of service due to a failure Windows’s RAID reconstruction must be initiated manually

SPECWeb99: managing transient faults. Linux is paranoid: it shuts down a disk in a controlled manner at the first error, rather than waiting to see whether the error is transient. Windows and Solaris are forgiving: they ignore most transient faults with the expectation that they will not recur.

Outline I/O Performance Queuing Theory

Queuing theory gives a set of simple theorems that help calculate the response time and throughput of an entire I/O system. It applies because of the probabilistic nature of I/O events and the sharing of I/O devices. It takes a little more work than best-case analysis and is much more accurate, yet requires much less work than full-scale simulation.

Black Box Model: the processor makes I/O requests that arrive at the I/O device; requests depart when the I/O device fulfills them.

Black Box Model: if the system is in steady state, then the number of tasks entering the system must equal the number of tasks leaving it. This flow-balanced state is necessary but not sufficient for steady state.

Black Box Model: the system has reached steady state if it has been observed for a sufficiently long time and mean waiting times have stabilized.

Little's Law assumptions: multiple independent I/O requests in equilibrium (input rate = output rate); a steady supply of tasks, independent of how long they wait for service.

Little’s Law Mean number of tasks in system = Arrival rate x Mean response time

Little's Law: Mean number of tasks in system = Arrival rate × Mean response time. It applies to any system in equilibrium, as long as nothing inside the black box creates new tasks or destroys them.

Little's Law: observe a system for Time_observe minutes; sum the time each task spends in the system, Time_accumulated; and count the tasks completed during Time_observe, Number_tasks. Then: Mean number of tasks in system = Time_accumulated / Time_observe; Mean response time = Time_accumulated / Number_tasks; Arrival rate = Number_tasks / Time_observe. Note that Time_accumulated ≥ Time_observe because tasks can overlap in time.

Little’s Law
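The identity can be checked numerically from an observation trace. A minimal sketch in Python; the (arrival, departure) times below are invented purely for illustration:

```python
# Hypothetical observation of a system: one (arrival, departure) pair
# per task, in seconds, over a 10-second observation window.
tasks = [(0.0, 2.0), (1.0, 4.0), (3.0, 5.0), (4.5, 8.0), (6.0, 9.0)]

time_observe = 10.0                                      # observation window
time_accumulated = sum(dep - arr for arr, dep in tasks)  # total task-seconds in system
number_tasks = len(tasks)

mean_in_system = time_accumulated / time_observe         # mean number of tasks in system
arrival_rate = number_tasks / time_observe               # tasks per second
mean_response = time_accumulated / number_tasks          # seconds per task

# Little's law: Mean number of tasks in system = Arrival rate x Mean response time
assert abs(mean_in_system - arrival_rate * mean_response) < 1e-9
print(mean_in_system, arrival_rate, mean_response)       # -> 1.35 0.5 2.7
```

The assertion holds for any trace, since the law is an algebraic identity over the observed quantities.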

Single-Server Model Queue / Waiting line the area where the tasks accumulate, waiting to be serviced Server the device performing the requested service is called the server

Single-Server Model. Time_server: average time to service a task; the average service rate is 1/Time_server. Time_queue: average time per task in the queue. Time_system: average time per task in the system, i.e., the response time; the sum of Time_queue and Time_server.

Single-Server Model. Arrival rate: average number of arriving tasks per second. Length_server: average number of tasks in service. Length_queue: average length of the queue. Length_system: average number of tasks in the system; the sum of Length_server and Length_queue.

Server utilization (traffic intensity): the mean number of tasks being serviced, i.e., Arrival rate / Service rate, where Service rate = 1/Time_server. Hence Server utilization = Arrival rate × Time_server (Little's law applied to the server).

Server Utilization Example: an I/O system with a single disk gets on average 50 I/O requests per second and takes 10 ms on average to service a request. Server utilization = Arrival rate × Time_server = 50 × 0.01 = 0.5. The disk could handle 100 tasks per second but handles only 50.

Queue discipline: how the queue delivers tasks to the server. FIFO: first in, first out. For FIFO, Time_queue = Length_queue × Time_server + mean time to complete service of the task already being served when the new task arrives (if the server is busy).

Queue with exponential (Poisson) distribution of events/requests: for such a queue, Time_queue = Time_server × Server utilization / (1 − Server utilization).

Length_queue example: an I/O system with a single disk gets on average 50 I/O requests per second and takes 10 ms on average to service a request. Length_queue = Arrival rate × Time_queue = 50 × (0.01 × 0.5/(1 − 0.5)) = 0.5 tasks.

M/M/1 queue assumptions: the system is in equilibrium; interarrival times (times between two successive request arrivals) are exponentially distributed; infinite population model (unlimited number of request sources); the server starts the next job immediately after finishing the prior one; FIFO queue with unlimited length; one server only.

M/M/1 queue: M for Markov, exponentially distributed request arrivals; M for Markov, exponentially distributed service times; 1 for a single server.
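An M/M/1 queue is also easy to simulate directly, which gives a sanity check on the closed-form results. A minimal sketch; the arrival rate and service time match the example that follows (40 I/Os per second, 20 ms service), and the result is approximate and seed-dependent:

```python
import random

def simulate_mm1(arrival_rate, time_server, num_tasks=200_000, seed=1):
    """Discrete-event simulation of an M/M/1 queue (FIFO, single server).

    Returns the mean response time (queue + service) over num_tasks tasks.
    """
    rng = random.Random(seed)
    clock = 0.0            # current arrival time
    server_free_at = 0.0   # time the server finishes its current task
    total_response = 0.0
    for _ in range(num_tasks):
        clock += rng.expovariate(arrival_rate)          # exponential interarrivals
        start = max(clock, server_free_at)              # wait if server is busy
        service = rng.expovariate(1.0 / time_server)    # exponential service time
        server_free_at = start + service
        total_response += server_free_at - clock        # time in queue + in service
    return total_response / num_tasks

# 40 I/Os per second, 20 ms average service time; theory predicts 100 ms.
print(simulate_mm1(40, 0.020))  # roughly 0.1 seconds
```

The simulated mean should hover near the 100 ms value derived analytically in the example below, with some noise from the finite sample.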

M/M/1 Queue Example: a processor sends 40 disk I/Os per second; requests are exponentially distributed; the average service time of an older disk is 20 ms. Q1: average disk utilization? Q2: average time spent in the queue? Q3: average response time (queuing + service)?

M/M/1 Queue Example, Q1: average disk utilization? Server utilization = Arrival rate × Time_server = 40 × 0.02 = 0.8.

M/M/1 Queue Example, Q2: average time spent in the queue? Time_queue = Time_server × Server utilization/(1 − Server utilization) = 20 × 0.8/0.2 = 80 ms.

M/M/1 Queue Example, Q3: average response time (queuing + service)? Time_system = Time_queue + Time_server = 80 + 20 = 100 ms.

M/M/1 Queue Example: the same processor sends 40 disk I/Os per second, but the average service time is down to 10 ms on a newer disk. Q1: average disk utilization? Server utilization = Arrival rate × Time_server = 40 × 0.01 = 0.4.

M/M/1 Queue Example (10 ms disk), Q2: average time spent in the queue? Time_queue = Time_server × Server utilization/(1 − Server utilization) = 10 × 0.4/0.6 ≈ 6.7 ms.

M/M/1 Queue Example (10 ms disk), Q3: average response time (queuing + service)? Time_system = Time_queue + Time_server = 6.7 + 10 = 16.7 ms.
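Both worked examples can be reproduced with a short helper. A sketch assuming the standard M/M/1 closed form Time_queue = Time_server × Utilization/(1 − Utilization), with all times in seconds:

```python
def mm1(arrival_rate, time_server):
    """Closed-form M/M/1 results: (utilization, time in queue, response time)."""
    util = arrival_rate * time_server                # server utilization
    assert util < 1, "queue is unstable"
    time_queue = time_server * util / (1 - util)     # mean time waiting in queue
    return util, time_queue, time_queue + time_server

# Older disk: 40 I/Os per second, 20 ms service time
print(mm1(40, 0.020))   # -> (0.8, 0.08, 0.1): 80 ms queue, 100 ms response
# Newer disk: 10 ms service time
print(mm1(40, 0.010))   # utilization 0.4, queue ~6.7 ms, response ~16.7 ms
```

Note how halving the service time drops utilization from 0.8 to 0.4 and cuts response time by a factor of six, because the queueing term shrinks nonlinearly as utilization falls.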

M/M/m Queue multiple servers

M/M/m Queue

M/M/m Queue Example: a processor sends 40 disk I/Os per second; requests are exponentially distributed; the average service time for a read is 20 ms; two disks duplicate the data; all requests are reads.

M/M/m Queue Example

M/M/m Queue Example

M/M/m Queue Example: average response time = 3.8 + 20 = 23.8 ms.
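The 23.8 ms figure can be reproduced with the Erlang C formula for M/M/m queues. A sketch assuming that standard formula (it is not spelled out on the slides), applied to the two-disk example with 40 reads per second and 20 ms per read:

```python
from math import factorial

def mmm_response_time(arrival_rate, time_server, m):
    """Average response time of an M/M/m queue, via the Erlang C formula."""
    rho = arrival_rate * time_server / m        # per-server utilization
    assert rho < 1, "queue is unstable"
    a = arrival_rate * time_server              # offered load (= m * rho)
    # Erlang C: probability that an arriving task must wait in the queue
    tail = a**m / (factorial(m) * (1 - rho))
    p_wait = tail / (sum(a**k / factorial(k) for k in range(m)) + tail)
    time_queue = p_wait * time_server / (m * (1 - rho))
    return time_queue + time_server

# Two mirrored disks, 40 reads/s, 20 ms per read
print(round(mmm_response_time(40, 0.020, 2) * 1000, 1))  # -> 23.8 (ms)
```

With two disks the per-server utilization drops to 0.4 and the queueing delay falls from 80 ms to about 3.8 ms, even though each read is still serviced in 20 ms.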

Questions?