An Optimal Service Ordering for a World Wide Web Server
A Presentation for the Fifth INFORMS Telecommunications Conference, March 6, 2000
Amy Csizmar Dalal, Northwestern University, Department of Electrical and Computer Engineering, email: amy@ece.nwu.edu
Scott Jordan, University of California, Irvine, Department of Electrical and Computer Engineering, email: sjordan@uci.edu
Overview
·Motivation/problem statement
·Derivation of the optimal policy
·Two extensions to the original model
·Simulation results
·Conclusions
Problems with current web server operation
·Currently, web servers share processor resources among several requests simultaneously.
·Web users are impatient: they tend to abort page requests that do not receive a response within several seconds.
·If users time out, the server wastes resources on requests that never complete, a problem especially for heavily loaded web servers.
·Objective: decrease user-perceived latency by minimizing the number of user file requests that "time out" before completion.
A queueing model approach to measuring web server performance
·Performance = P{user will remain at the server until its service completes}, a decreasing function of response time
·Service times ~ exp(μ), i.i.d., proportional to the sizes of the requested files
·Request arrivals ~ Poisson(λ)
·The server is unaware if/when requests are aborted
·The server alternates between busy cycles, when there is at least one request in the system, and idle cycles (state 0), when there are no requests waiting or in service. These cycles are i.i.d.
·Switching time between requests is negligible
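Stated compactly (this is a restatement on our part; in particular, the exponential form of the stay probability is an assumption suggested by the e^{-c(·)} factors in the lemmas below, with c the user-impatience parameter):

\[
  \text{arrivals} \sim \text{Poisson}(\lambda), \qquad
  \text{service times} \;\overset{\text{i.i.d.}}{\sim}\; \exp(\mu), \qquad
  \Pr\{\text{user remains} \mid \text{response time } T\} \;=\; e^{-cT}.
\]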
Web server performance model
·Revenue earned by the server under policy p from serving request i to completion: c_{ip}
·Potential revenue of request i at time t under policy p: g_{ip}(t)
·By the Renewal Reward Theorem, the average reward earned by the server per unit time under policy p equals the expected revenue collected during a cycle divided by the expected cycle length
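The renewal-reward step, written out (J_p is our label, not the slide's notation, for the long-run average reward under policy p):

\[
  J_p \;=\; \frac{\mathbb{E}\left[\text{revenue collected during one busy cycle under } p\right]}
                 {\mathbb{E}\left[\text{length of one idle-plus-busy cycle}\right]} .
\]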
Web server performance: Objective
Find the service policy p* that maximizes the revenue earned per unit time: this is the optimal policy.
Requirements for an optimal service ordering policy
Lemma 1: An optimal policy is non-idling.
Difference in expected revenues = g_{j*}(a_2) d (1 - e^{-cd})
Requirements for an optimal service ordering policy
Lemma 2: An optimal service ordering policy switches between jobs in service only upon an arrival to or departure from the system.
If g_{(j+1)*}(a_1) ≥ g_{j*}(a_1): use the first alternate policy. Revenue difference = g_{(j+1)*}(a_1) d (1 - e^{-cd})
If g_{(j+1)*}(a_1) < g_{j*}(a_1): use the second alternate policy. Revenue difference = d e^{-cd} (e^{-ca_1} - e^{-ca_2}) (e^{cx_{j*}} - e^{cx_{(j+1)*}})
Requirements for an optimal service ordering policy
Lemma 3: An optimal policy is non-processor-sharing.
Proof: Consider a portion of the sample path over which the server processes two jobs to completion. Compare the expected revenue under three possibilities:
·serve job 1 to completion, then job 2
·serve job 2 to completion, then job 1
·serve both simultaneously until one completes, then serve the other
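A small Monte Carlo sketch of this comparison (a rough illustration under stated assumptions, not the authors' derivation): two jobs are already in the system with different elapsed waits, remaining service is i.i.d. exp(MU), processor sharing splits capacity equally, and a completion after total time-in-system T earns exp(-C*T). All names and parameter values are illustrative.

```python
import numpy as np

MU, C = 1.0, 0.5        # service rate and impatience parameter (illustrative)
W1, W2 = 3.0, 0.5       # time each job has already spent in the system (job 2 is newer)
N = 200_000

rng = np.random.default_rng(0)
s1 = rng.exponential(1.0 / MU, N)   # remaining service requirement of job 1
s2 = rng.exponential(1.0 / MU, N)   # remaining service requirement of job 2

def revenue(t1, t2):
    """Total expected reward: a job that completes t time units from now,
    having already waited w, is assumed to earn exp(-C * (w + t))."""
    return np.mean(np.exp(-C * (W1 + t1)) + np.exp(-C * (W2 + t2)))

seq12 = revenue(s1, s1 + s2)             # serve job 1 to completion, then job 2
seq21 = revenue(s1 + s2, s2)             # serve job 2 to completion, then job 1
first = 2 * np.minimum(s1, s2)           # PS: first completion at twice the smaller requirement
ps = revenue(np.where(s1 <= s2, first, s1 + s2),
             np.where(s2 < s1,  first, s1 + s2))

print(f"serve 1 then 2:    {seq12:.4f}")
print(f"serve 2 then 1:    {seq21:.4f}")   # newer job first: the greedy / LIFO order
print(f"processor sharing: {ps:.4f}")      # lies between the two sequential orders
```

Under these assumptions the processor-sharing value should land between the two sequential orders, and serving the newer job first should do best, which is the behavior Lemma 3 and the LIFO-PR result describe.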
Requirements for an optimal service ordering policy
Lemma 4: An optimal policy is Markov (independent of both past and future arrivals).
This follows because service times and interarrival times are exponential, hence memoryless.
Definition of an optimal policy
State vector at time t under policy p: the requests in the system and their potential revenues.
Best job: the request with the highest potential revenue at time t.
The optimal policy chooses the best job to serve at any time t, regardless of the order of service chosen for the rest of the jobs in the system (greedy). Since potential revenue decreases with time spent in the system, this reduces to LIFO-PR: serve the most recent arrival, with preemptive resume.
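A minimal sketch of the greedy rule (the decay form of the potential revenue is an assumption consistent with the model above; function and variable names are ours):

```python
import math

def best_job(jobs, t, c):
    """jobs: list of (job_id, arrival_time). The potential revenue of a job is
    assumed to be exp(-c * (t - arrival_time)), so the maximizer is simply the
    most recent arrival -- LIFO with preemptive resume."""
    return max(jobs, key=lambda job: math.exp(-c * (t - job[1])))
```

Because the exponential is monotone in the arrival time, the maximizer is always the most recent arrival, which is why the greedy policy coincides with LIFO-PR when all requests share the same service rate.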
Extension 1: Initial reward
·Initial reward C_i: indicates the user's perceived probability of completion/service upon arrival at the web server
·Application: QoS levels
·Potential revenue: now a function of both the time in the system so far and the initial reward C_i
·A natural extension of the original system model
·Optimal policy: serve the request that has the highest potential revenue
Extension 2: Varying service time distributions
·Service time of request i ~ exp(μ_i), independent
·Application: different content types are best described by different distributions
·Additional initial assumptions:
›only switch between jobs in service upon an arrival to or departure from the system
›non-processor-sharing
·We show that the optimal policy is
›non-idling
›Markov
Extension 2 (continued)
State vector at time t under policy p: defined as in the original model.
Best job / optimal policy: serve the request with the highest c_i μ_i product.
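A one-line sketch of that index rule (hypothetical field names; each request i is assumed to carry its impatience parameter c_i and service rate mu_i):

```python
def next_request(requests):
    """requests: list of (request_id, c_i, mu_i); serve the largest c_i * mu_i index."""
    return max(requests, key=lambda r: r[1] * r[2])
```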
Simulation results
·Revenue (normalized by the number of requests served during a simulation run) as a function of offered load (λ/μ)
·Mean file size: 2 kB
·Extension 1: C_i ~ U[0,1)
·Extension 2: three service time distributions, with mean file sizes of 2 kB, 14 kB, and 100 kB
·Implemented in BONeS™
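The original experiments used BONeS; as a rough stand-in, here is a small discrete-event sketch in Python of the base model. It assumes a reward of exp(-c * sojourn) per completed request, and it uses the fact that with i.i.d. exp(mu) service both LIFO-PR and processor sharing complete work at total rate mu, differing only in which request departs next (the newest one under LIFO-PR, a uniformly random one under PS). Parameter values are illustrative, not those of the deck.

```python
import math, random

def simulate(policy, lam=0.8, mu=1.0, c=0.5, horizon=200_000.0, seed=1):
    """policy: 'lifo_pr' (greedy) or 'ps'. Returns revenue per served request,
    where a request with sojourn time T earns exp(-c * T) on completion (assumed form)."""
    rng = random.Random(seed)
    t, jobs = 0.0, []            # jobs holds the arrival times of requests in the system
    revenue, served = 0.0, 0
    while t < horizon:
        next_arrival = rng.expovariate(lam)
        if not jobs:                         # idle: the only possible event is an arrival
            t += next_arrival
            jobs.append(t)
            continue
        next_service = rng.expovariate(mu)   # total completion rate is mu under either policy
        if next_arrival < next_service:
            t += next_arrival
            jobs.append(t)
        else:
            t += next_service
            # LIFO-PR completes the newest request; PS completes a uniformly random one
            idx = len(jobs) - 1 if policy == 'lifo_pr' else rng.randrange(len(jobs))
            revenue += math.exp(-c * (t - jobs.pop(idx)))
            served += 1
    return revenue / served

for p in ('lifo_pr', 'ps'):
    print(p, round(simulate(p), 4))
```

Sweeping lam toward mu and comparing the two outputs gives a rough analogue of the revenue-versus-offered-load comparison summarized in the results above.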
Simulation results
Conclusions
·Using a greedy service policy rather than a fair policy at a web server can improve server performance by up to several orders of magnitude.
·Performance improvements are most pronounced at high server loads.
·Under certain conditions, a processor-sharing policy may generate a higher average revenue than comparable non-processor-sharing policies. These conditions remain to be quantified.