CPE 619 Queueing Networks Aleksandar Milenković The LaCASA Laboratory Electrical and Computer Engineering Department The University of Alabama in Huntsville http://www.ece.uah.edu/~milenka http://www.ece.uah.edu/~lacasa
2 Overview Queueing Network: model in which jobs departing from one queue arrive at another queue (or possibly the same queue) Open and Closed Queueing Networks Product Form Networks Queueing Network Models of Computer Systems
3 Open Queueing Networks Open queueing network: external arrivals and departures Number of jobs in the system varies with time Throughput = arrival rate Goal: To characterize the distribution of number of jobs in the system
4 Closed Queueing Networks Closed queueing network: No external arrivals or departures Total number of jobs in the system is constant "OUT" is connected back to "IN" Throughput = flow of jobs in the OUT-to-IN link Number of jobs is given; determine the throughput
5 Mixed Queueing Networks Mixed queueing networks: Open for some workloads and closed for others Two classes of jobs; a class is a type of job All jobs of a single class have the same service demands and transition probabilities. Within each class, the jobs are indistinguishable
6 Series Networks k M/M/1 queues in series Each individual queue can be analyzed independently of the other queues Arrival rate = λ. If μ_i is the service rate of the i-th server, the utilization of the i-th server is ρ_i = λ/μ_i and the probability of finding n_i jobs at it is p(n_i) = (1 - ρ_i) ρ_i^(n_i)
7 Series Networks (cont'd) Joint probability of queue lengths: p(n_1, n_2, ..., n_k) = p(n_1) p(n_2) ... p(n_k) = Π_i (1 - ρ_i) ρ_i^(n_i). A network whose state probability factors in this way is called a product form network
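As a quick illustration (not from the original slides), here is a minimal Python sketch of this product-form result for k M/M/1 queues in series; the arrival rate and service rates below are invented values.

```python
# Product form for M/M/1 queues in series:
# P(n1, ..., nk) = prod_i (1 - rho_i) * rho_i**n_i, where rho_i = lam / mu_i.
# The numbers below are illustrative only.

lam = 2.0                 # arrival rate (jobs/s)
mu = [5.0, 4.0, 3.0]      # service rates of the k = 3 servers

def joint_prob(n, lam, mu):
    """Probability of observing queue lengths n = (n1, ..., nk)."""
    p = 1.0
    for n_i, mu_i in zip(n, mu):
        rho_i = lam / mu_i
        assert rho_i < 1, "each queue must be stable"
        p *= (1 - rho_i) * rho_i ** n_i
    return p

print(joint_prob((1, 0, 2), lam, mu))  # P(n1=1, n2=0, n3=2)
```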
8 Product-Form Network Any queueing network in which the state probability has the form P(n_1, n_2, ..., n_k) = (1/G(N)) Π_i f_i(n_i), where f_i(n_i) is some function of the number of jobs at the i-th facility and G(N) is a normalizing constant that is a function of the total number of jobs in the system
9 Example 32.1 Consider a closed system with two queues and N jobs circulating among the queues Both servers have an exponentially distributed service time. The mean service times are 2 and 3, respectively. The probability of having n_1 jobs in the first queue and n_2 = N - n_1 jobs in the second queue can be shown to be p(n_1, n_2) = (1/G(N)) 2^(n_1) 3^(n_2). In this case, the normalizing constant is G(N) = 3^(N+1) - 2^(N+1). The state probabilities are products of functions of the number of jobs in the queues. Thus, this is a product form network.
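A short Python check of Example 32.1 (a sketch; closing the geometric sum gives the G(N) quoted above):

```python
# Example 32.1: closed network of two exponential servers with mean
# service times 2 and 3 and N circulating jobs.
# Product form: p(n1, n2) = 2**n1 * 3**n2 / G(N), with n2 = N - n1.

def G(N):
    """Normalizing constant: sum of 2**n1 * 3**(N - n1) over n1 = 0..N."""
    return sum(2 ** n1 * 3 ** (N - n1) for n1 in range(N + 1))

def p(n1, N):
    """Probability of n1 jobs at queue 1 (and N - n1 jobs at queue 2)."""
    return 2 ** n1 * 3 ** (N - n1) / G(N)

N = 4
assert G(N) == 3 ** (N + 1) - 2 ** (N + 1)          # matches the slide
print([round(p(n1, N), 4) for n1 in range(N + 1)])  # probabilities sum to 1
```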
10 General Open Network of Queues Product form networks are easier to analyze Jackson (1963) showed that any arbitrary open network of m-server queues with exponentially distributed service times has a product form
11 General Open Network of Queues (cont'd) If all queues are single-server queues, the queue length distribution is p(n_1, n_2, ..., n_k) = p_1(n_1) p_2(n_2) ... p_k(n_k), where p_i(n_i) = (1 - ρ_i) ρ_i^(n_i) Note: The queues are not independent M/M/1 queues with a Poisson arrival process In general, the internal flows in such networks are not Poisson. In particular, if there is any feedback in the network, so that jobs can return to previously visited service centers, the internal flows are not Poisson
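To make Jackson's result concrete, here is a hedged sketch that solves the traffic equations lambda_j = gamma_j + sum_i lambda_i p_ij for a small open network with feedback and then evaluates each queue as if it were M/M/1; the routing matrix, external arrival rates, and service rates are invented for illustration.

```python
import numpy as np

# Hypothetical 3-queue open network with feedback (all values invented).
gamma = np.array([1.0, 0.0, 0.0])       # external arrival rates
P = np.array([[0.0, 0.6, 0.3],          # P[i][j] = prob. of moving i -> j
              [0.0, 0.0, 0.8],
              [0.2, 0.0, 0.0]])         # queue 3 feeds back to queue 1
mu = np.array([4.0, 3.0, 3.5])          # service rates

# Traffic equations: lam = gamma + P^T lam  =>  (I - P^T) lam = gamma
lam = np.linalg.solve(np.eye(3) - P.T, gamma)
rho = lam / mu                          # per-queue utilizations

# Jackson: p(n1, n2, n3) = prod_i (1 - rho_i) * rho_i**n_i
mean_jobs = rho / (1 - rho)             # mean number of jobs at each queue
print("arrival rates:", lam.round(3))
print("utilizations :", rho.round(3))
print("mean jobs    :", mean_jobs.round(3))
```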
12 Closed Product-Form Networks Gordon and Newell (1967) showed that any arbitrary closed network of m-server queues with exponentially distributed service times also has a product form solution Baskett, Chandy, Muntz, and Palacios (1975) showed that product form solutions exist for an even broader class of networks
13 BCMP Networks 1. Service Disciplines: First-come-first-served (FCFS), Processor sharing (PS), Infinite servers (IS or delay centers), and Last-come-first-served preemptive-resume (LCFS-PR) 2. Job Classes: The jobs belong to a single class while awaiting or receiving service at a service center, but may change classes and service centers according to fixed probabilities at the completion of a service request
14 BCMP Networks (cont'd) 3. Service Time Distributions: At FCFS service centers, the service time distributions must be identical and exponential for all classes of jobs. At other service centers, the service times should have probability distributions with rational Laplace transforms. Different classes of jobs may have different distributions 4. State-Dependent Service: The service time at a FCFS service center can depend only on the total queue length of the center The service time for a class at PS, LCFS-PR, and IS centers can also depend on the queue length for that class, but not on the queue lengths of other classes Moreover, the overall service rate of a subnetwork can depend on the total number of jobs in the subnetwork
15 BCMP Networks (cont'd) 5. Arrival Processes: In open networks, the time between successive arrivals of a class should be exponentially distributed No bulk arrivals are permitted The arrival rates may be state dependent A network may be open with respect to some classes of jobs and closed with respect to other classes of jobs
16 Non-Markovian Product Form Networks By Denning and Buzen (1978) 1. Job Flow Balance: For each class, the number of arrivals to a device must equal the number of departures from the device 2. One Step Behavior: A state change can result only from single jobs either entering the system, or moving between pairs of devices in the system, or exiting from the system. This assumption asserts that simultaneous job-moves will not be observed. 3. Device Homogeneity: A device's service rate for a particular class does not depend on the state of the system in any way except for the total device queue length and the designated class's queue length. This assumption implies the following:
17 Non-Markovian PFNs (cont'd) a. Single Resource Possession: A job may not be present (waiting for service or receiving service) at two or more devices at the same time b. No Blocking: A device renders service whenever jobs are present; its ability to render service is not controlled by any other device c. Independent Job Behavior: Interaction among jobs is limited to queueing for physical devices; for example, there should not be any synchronization requirements d. Local Information: A device's service rate depends only on local queue length and not on the state of the rest of the system
18 Non-Markovian PFNs (cont'd) e. Fair Service: If service rates differ by class, the service rate for a class depends only on the queue length of that class at the device and not on the queue lengths of other classes. This means that the servers do not discriminate against jobs in a class depending on the queue lengths of other classes 4. Routing Homogeneity: The job routing should be state independent. The routing homogeneity condition implies that the probability of a job going from one device to another device does not depend upon the number of jobs at various devices
19 Machine Repairman Model Originally developed for machine repair shops: a number of working machines and a repair facility with one or more servers (repairmen) Whenever a machine breaks down, it is put in the queue for repair and serviced as soon as a repairman is available Scherr (1967) used this model to represent a timesharing system with n terminals Users sitting at the terminals generate requests (jobs) that are serviced by the system, which acts as the repairman After a job is done, it waits at the user terminal for a random "think-time" interval before cycling again
20 Central Server Model Introduced by Buzen (1973) The CPU is the central "server" that schedules visits to the other devices After service at the I/O devices, the jobs return to the CPU
21 Types of Service Centers Three kinds of devices 1. Fixed-capacity service centers: Service time does not depend upon the number of jobs in the device. For example, the CPU in a system may be modeled as a fixed-capacity service center. 2. Delay centers or infinite servers: No queueing; jobs spend the same amount of time in the device regardless of the number of jobs in it. A group of dedicated terminals is usually modeled as a delay center. 3. Load-dependent service centers: Service rates may depend upon the load or the number of jobs in the device, e.g., an M/M/m queue (with m ≥ 2). A group of parallel links between two nodes in a computer network is another example
22 Summary Product form networks: Any network in which the system state probability is a product of device state probabilities Jackson: Network of M/M/m queues BCMP: More general conditions Denning and Buzen: Even more general conditions (Figure: nested classes of product form networks: Jackson ⊂ BCMP ⊂ Denning & Buzen)
Operational Laws
24 Overview What is an Operational Law? Utilization Law Forced Flow Law Little’s Law General Response Time Law Interactive Response Time Law Bottleneck Analysis
25 Operational Laws Relationships that do not require any assumptions about the distribution of service times or inter-arrival times Identified originally by Buzen (1976) and later extended by Denning and Buzen (1978) Operational = directly measured Operationally testable assumptions = assumptions that can be verified by measurements For example, whether the number of arrivals equals the number of completions. This assumption, called job flow balance, is operationally testable The statement "a set of observed service times is or is not a sequence of independent random variables" is not operationally testable
26 Operational Quantities Quantities that can be directly measured during a finite observation period: T = observation interval, A_i = number of arrivals at device i, C_i = number of completions at device i, B_i = busy time of device i (each device is viewed as a black box) From these, the derived quantities are: arrival rate λ_i = A_i/T, throughput X_i = C_i/T, utilization U_i = B_i/T, and mean service time S_i = B_i/C_i
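A tiny Python sketch (with invented measurements) of how the derived quantities follow from the directly measured ones:

```python
# Directly measured quantities over one observation interval (invented values).
T = 60.0       # observation interval, seconds
A_i = 1200     # arrivals at device i
C_i = 1200     # completions at device i
B_i = 36.0     # busy time of device i, seconds

lambda_i = A_i / T   # arrival rate
X_i = C_i / T        # throughput
U_i = B_i / T        # utilization
S_i = B_i / C_i      # mean service time per completion

print(f"arrival rate = {lambda_i}/s, throughput = {X_i}/s, "
      f"utilization = {U_i:.2f}, mean service time = {S_i * 1000:.0f} ms")
```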
27 Utilization Law The utilization law: U_i = X_i S_i, since U_i = B_i/T = (C_i/T)(B_i/C_i) = X_i S_i This is one of the operational laws Operational laws are similar to the elementary laws of motion For example, d = (1/2) a t^2 for a body accelerating from rest Notice that distance d, acceleration a, and time t are operational quantities. No need to consider them as expected values of random variables or to assume a distribution
28 Example 33.1 Consider a network gateway at which the packets arrive at a rate of 125 packets per second and the gateway takes an average of two milliseconds to forward them Throughput X_i = exit rate = arrival rate = 125 packets/second Service time S_i = 0.002 second Utilization U_i = X_i S_i = 125 × 0.002 = 0.25 = 25% This result is valid for any arrival or service process, even if the inter-arrival times and service times are not IID random variables with an exponential distribution
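The same arithmetic as Example 33.1, in Python:

```python
# Example 33.1: network gateway
X_i = 125        # packets/second (throughput = exit rate = arrival rate)
S_i = 0.002      # seconds of service per packet

U_i = X_i * S_i  # utilization law: U_i = X_i * S_i
print(f"gateway utilization = {U_i:.0%}")  # 25%
```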
29 Forced Flow Law Relates the system throughput to the individual device throughputs In an open model, system throughput = number of jobs leaving the system per unit time In a closed model, system throughput = number of jobs traversing the OUT-to-IN link per unit time If the observation period T is such that A_i = C_i, the device satisfies the assumption of job flow balance Each job makes V_i requests to the i-th device in the system If the job flow is balanced and C_0 is the number of jobs traversing the outside link, then C_i, the number of completions at the i-th device, satisfies C_i = C_0 V_i, or V_i = C_i/C_0 V_i is called the visit ratio
30 Forced Flow Law (cont'd) System throughput: X = C_0/T = number of jobs completed per unit time
31 Forced Flow Law (cont'd) Throughput of the i-th device: X_i = C_i/T = (C_0 V_i)/T = X V_i In other words: X_i = X V_i This is the forced flow law
32 Bottleneck Device Combining the forced flow law and the utilization law, we get: U_i = X_i S_i = X V_i S_i = X D_i Here D_i = V_i S_i is the total service demand on the device for all visits of a job The device with the highest D_i has the highest utilization and is the bottleneck device
33 Example 33.2 In a timesharing system, accounting log data produced the following profile for user programs Each program requires five seconds of CPU time and makes 80 I/O requests to disk A and 100 I/O requests to disk B Average think-time of the users was 18 seconds From the device specifications, it was determined that disk A takes 50 milliseconds to satisfy an I/O request and disk B takes 30 milliseconds per request With 17 active terminals, disk A throughput was observed to be 15.70 I/O requests per second We want to find the system throughput and device utilizations
34 Example 33.2 (cont'd) Since the jobs must visit the CPU before going to the disks or terminals, the CPU visit ratio is: V_CPU = V_A + V_B + 1 = 80 + 100 + 1 = 181 The total service demands are D_CPU = 5 seconds, D_A = 80 × 0.050 = 4 seconds, and D_B = 100 × 0.030 = 3 seconds
35 Example 33.2 (cont'd) Using the forced flow law, the throughputs are: X = X_A/V_A = 15.70/80 = 0.1963 jobs/second, X_CPU = X V_CPU = 0.1963 × 181 = 35.52 requests/second, and X_B = X V_B = 0.1963 × 100 = 19.63 requests/second Using the utilization law, the device utilizations are: U_CPU = X D_CPU = 0.1963 × 5 = 98.1%, U_A = X D_A = 0.1963 × 4 = 78.5%, and U_B = X D_B = 0.1963 × 3 = 58.9%
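The whole Example 33.2 calculation reproduced in a few lines of Python (a sketch; the device names are just labels):

```python
# Example 33.2: forced flow law + utilization law
V = {"cpu": 181, "diskA": 80, "diskB": 100}           # visits per job
S = {"cpu": 5 / 181, "diskA": 0.050, "diskB": 0.030}  # seconds per visit
X_diskA = 15.70                                       # observed disk A throughput

X = X_diskA / V["diskA"]                              # system throughput (jobs/s)
for dev in V:
    X_dev = X * V[dev]                                # forced flow law: X_i = X V_i
    U_dev = X_dev * S[dev]                            # utilization law: U_i = X_i S_i
    print(f"{dev:5s}: X = {X_dev:6.2f}/s  U = {U_dev:.1%}")
print(f"system throughput = {X:.4f} jobs/s")          # 0.1963 jobs/s
```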
36 Transition Probabilities p_ij = probability of a job moving to the j-th queue after service completion at the i-th queue Visit ratios and transition probabilities are equivalent in the sense that given one we can always find the other In a system with job flow balance: C_j = Σ_i C_i p_ij, where the sum runs over i = 0, 1, ..., M and i = 0 denotes visits to the outside link p_i0 = probability of a job exiting from the system after completion of service at the i-th device Dividing by C_0 we get: V_j = Σ_i V_i p_ij
37 Transition Probabilities (cont'd) Since each visit to the outside link is defined as the completion of the job, we have: V_0 = 1 These are called the visit ratio equations In central server models, after completion of service at every queue, the jobs always move back to the CPU queue: p_i1 = 1 for every device i other than the CPU (queue 1)
38 Transition Probabilities (cont'd) The above probabilities apply to exits from and entrances to the system (i = 0) as well. Therefore, the visit ratio equations become: V_j = V_1 p_1j for j ≠ 1, with V_1 = 1/p_10, so that V_j = p_1j/p_10 Thus, we can find the visit ratios by dividing the probability p_1j of moving from the CPU to the j-th queue by the exit probability p_10
39 Example 33.3 Consider the queueing network of Example 33.2 (a central server model with a CPU, disk A, disk B, and user terminals). The visit ratios are V_A = 80, V_B = 100, and V_CPU = 181. After completion of service at the CPU, the probabilities of the job moving to disk A, disk B, or the terminals are 80/181, 100/181, and 1/181, respectively. Thus, the transition probabilities are 0.4420, 0.5525, and 0.005525
40 Example 33.3 (cont'd) Given the transition probabilities, we can find the visit ratios by dividing these probabilities by the exit probability (0.005525): V_CPU = 1/0.005525 = 181, V_A = 0.4420/0.005525 = 80, and V_B = 0.5525/0.005525 = 100
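The two conversions of Example 33.3 in Python (a sketch for the central server model above):

```python
# Example 33.3: converting between visit ratios and CPU transition probabilities.
V = {"cpu": 181, "diskA": 80, "diskB": 100}

# Visit ratios -> transition probabilities out of the CPU.
p = {dev: V[dev] / V["cpu"] for dev in ("diskA", "diskB")}
p_exit = 1 / V["cpu"]                 # probability of leaving to the terminals
print(p, p_exit)                      # about 0.4420, 0.5525 and 0.005525

# Transition probabilities -> visit ratios (divide by the exit probability).
V_back = {dev: prob / p_exit for dev, prob in p.items()}
V_back["cpu"] = 1 / p_exit
print(V_back)                         # recovers 80, 100 and 181 (up to rounding)
```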
41 Little's Law Mean number of jobs in a device = arrival rate × mean time spent in the device If the job flow is balanced, the arrival rate is equal to the throughput and we can write: Q_i = X_i R_i
42 Example 33.4 The average queue length in the computer system of Example 33.2 was observed to be: 8.88, 3.19, and 1.40 jobs at the CPU, disk A, and disk B, respectively. What were the response times of these devices? In Example 33.2, the device throughputs were determined to be: X_CPU = 35.52, X_A = 15.70, and X_B = 19.63 requests/second The new information given in this example is: Q_CPU = 8.88, Q_A = 3.19, and Q_B = 1.40 jobs
43 Example 33.4 (cont'd) Using Little's law, the device response times are: R_CPU = Q_CPU/X_CPU = 8.88/35.52 = 0.250 seconds, R_A = Q_A/X_A = 3.19/15.70 = 0.203 seconds, and R_B = Q_B/X_B = 1.40/19.63 = 0.0713 seconds
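The same Little's-law arithmetic in Python:

```python
# Example 33.4: device response times via Little's law, R_i = Q_i / X_i
Q = {"cpu": 8.88, "diskA": 3.19, "diskB": 1.40}      # mean queue lengths (jobs)
X = {"cpu": 35.52, "diskA": 15.70, "diskB": 19.63}   # throughputs (requests/s)

R = {dev: Q[dev] / X[dev] for dev in Q}
print({dev: round(r, 3) for dev, r in R.items()})
# {'cpu': 0.25, 'diskA': 0.203, 'diskB': 0.071}
```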
44 General Response Time Law There is one terminal per user and the rest of the system (the central subsystem) is shared by all users Applying Little's law to the central subsystem: Q = X R Here, Q = total number of jobs in the central subsystem R = system response time X = system throughput
45 General Response Time Law (cont'd) The number of jobs in the central subsystem is the sum over the devices: Q = Q_1 + Q_2 + ... + Q_M Dividing both sides by X and using the forced flow law (X_i = X V_i): R = Q/X = Σ_i Q_i/X = Σ_i (Q_i/X_i) V_i, or R = Σ_i R_i V_i This is called the general response time law This law holds even if the job flow is not balanced
46 Example 33.5 Let us compute the response time for the timesharing system of Examples 33.2 and 33.4 For this system: V_CPU = 181, V_A = 80, V_B = 100 and R_CPU = 0.250, R_A = 0.203, R_B = 0.0713 seconds The system response time is: R = 181 × 0.250 + 80 × 0.203 + 100 × 0.0713 = 68.6 The system response time is 68.6 seconds
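Example 33.5 checked in Python with the general response time law:

```python
# Example 33.5: general response time law, R = sum_i R_i * V_i
V = {"cpu": 181, "diskA": 80, "diskB": 100}
R_dev = {"cpu": 0.250, "diskA": 0.203, "diskB": 0.0713}  # from Example 33.4

R = sum(R_dev[d] * V[d] for d in V)
print(f"system response time = {R:.1f} s")  # about 68.6 s
```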
47 Interactive Response Time Law If Z = think time and R = response time, the total cycle time of a request is R + Z Each user generates about T/(R + Z) requests during an observation interval T If there are N users, the system throughput is X = N/(R + Z), or R = (N/X) - Z This is the interactive response time law
48 Example 33.6 For the timesharing system of Example 33.2, we can compute the response time using the interactive response time law as follows: N = 17 users, X = 0.1963 jobs/second, and Z = 18 seconds Therefore: R = 17/0.1963 - 18 = 68.6 seconds This is the same as that obtained earlier in Example 33.5.
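And Example 33.6 with the interactive response time law:

```python
# Example 33.6: interactive response time law, R = N/X - Z
N, X, Z = 17, 0.1963, 18.0   # users, system throughput (jobs/s), think time (s)
R = N / X - Z
print(f"R = {R:.1f} s")      # about 68.6 s, matching Example 33.5
```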
49 Bottleneck Analysis From the forced flow law and the utilization law: U_i = X D_i The device with the highest total service demand D_i has the highest utilization and is called the bottleneck device Note: Delay centers can have "utilizations" greater than one without any stability problems; therefore, a delay center cannot be the bottleneck device Only queueing centers are used in computing D_max The bottleneck device is the key limiting factor in achieving higher throughput
50 Bottleneck Analysis (cont'd) Improving the bottleneck device will provide the highest payoff in terms of system throughput Improving other devices will have little effect on the system performance Identifying the bottleneck device should be the first step in any performance improvement project
51 Bottleneck Analysis (cont'd) Throughput and response times of the system are bounded as follows: X(N) ≤ min{1/D_max, N/(D + Z)} and R(N) ≥ max{D, N D_max - Z} Here, D = Σ_i D_i is the sum of the total service demands on all devices except the terminals These are known as asymptotic bounds
52 Bottleneck Analysis: Proof The asymptotic bounds are based on the following observations: 1. The utilization of any device cannot exceed one. This puts a limit on the maximum obtainable throughput 2. The response time of the system with N users cannot be less than that of a system with just one user. This puts a limit on the minimum response time 3. The interactive response time formula can be used to convert the bound on throughput to that on response time and vice versa
53 Proof (cont'd) For the bottleneck device b we have: U_b = X D_b = X D_max Since U_b cannot be more than one, we have: X ≤ 1/D_max
54 Proof (cont'd) With just one job in the system, there is no queueing and the system response time is simply the sum of the service demands: R(1) = D_1 + D_2 + ... + D_M = D Here, D is defined as the sum of all service demands With more than one user there may be some queueing and so the response time will be higher. That is: R(N) ≥ D
55 Proof (cont'd) Combining these bounds through the interactive response time law (X = N/(R + Z) and R = N/X - Z), we get the asymptotic bounds: X(N) ≤ min{1/D_max, N/(D + Z)} and R(N) ≥ max{D, N D_max - Z}
56 Typical Asymptotic Bounds
57 Typical Asymptotic Bounds (cont'd) The two throughput bounds, 1/D_max and N/(D + Z), intersect at a knee The number of jobs N* at the knee is given by: N* = (D + Z)/D_max If the number of jobs is more than N*, then we can say with certainty that there is queueing somewhere in the system The asymptotic bounds can be easily explained to people who do not have any background in queueing theory or performance analysis
58 Example 33.7 For the timesharing system considered in Example 33.2: D_CPU = 5, D_A = 4, and D_B = 3 seconds, so D = 12 seconds, D_max = 5 seconds, and Z = 18 seconds The asymptotic bounds are: X(N) ≤ min{1/5, N/30} and R(N) ≥ max{12, 5N - 18}
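A short Python sketch of the Example 33.7 bounds and the knee:

```python
# Example 33.7: asymptotic bounds for the system of Example 33.2
D = {"cpu": 5.0, "diskA": 4.0, "diskB": 3.0}   # total service demands (s)
Z = 18.0                                       # think time (s)

D_total = sum(D.values())                      # D = 12 s
D_max = max(D.values())                        # D_max = 5 s (CPU is the bottleneck)
N_star = (D_total + Z) / D_max                 # knee of the bounds

for N in (1, 5, 10, 20):
    X_bound = min(1 / D_max, N / (D_total + Z))
    R_bound = max(D_total, N * D_max - Z)
    print(f"N = {N:2d}: X <= {X_bound:.3f} jobs/s, R >= {R_bound:5.1f} s")
print(f"knee at N* = {N_star:.0f} users")      # 6
```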
59 Example 33.7: Asymptotic Bounds
60 Example 33.7 (cont'd) The knee occurs at: N* = (D + Z)/D_max = (12 + 18)/5, or N* = 6 Thus, if there are more than 6 users on the system, there will certainly be queueing in the system.
61 Example 33.8 How many terminals can be supported on the timesharing system of Example 33.2 if the response time has to be kept below 100 seconds? Using the asymptotic bound on the response time we get: R(N) ≥ max{12, 5N - 18} The response time will be more than 100 if: 5N - 18 > 100 That is, if: N > 23.6, the response time is bound to be more than 100. Thus, the system cannot support more than 23 users if a response time of less than 100 seconds is required.
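Example 33.8 in Python, solving the response-time bound for N:

```python
# Example 33.8: largest N for which the response-time bound stays below 100 s.
import math

D_total, D_max, Z, R_limit = 12.0, 5.0, 18.0, 100.0

# R(N) >= max(D_total, N * D_max - Z); the bound exceeds R_limit once
# N * D_max - Z > R_limit, i.e. N > (R_limit + Z) / D_max = 23.6.
N_max = math.floor((R_limit + Z) / D_max)
print(f"at most {N_max} users")  # 23
```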
62 Summary Symbols: T = observation interval; A_i = number of arrivals at device i; C_i = number of completions at device i; B_i = busy time of device i; X_i = throughput of device i; S_i = mean service time per visit; U_i = utilization of device i; V_i = visit ratio of device i; D_i = V_i S_i = total service demand; Q_i = mean number of jobs at device i; R_i = response time of device i; X = system throughput; R = system response time; N = number of users; Z = think time; D = Σ_i D_i; D_max = largest service demand; N* = (D + Z)/D_max = knee of the asymptotic bounds