Competitive Queue Policies for Differentiated Services
Seminar in Packet Networks, 28.5.02

Competitive Queue Policies for Differentiated Services
William A. Aiello (AT&T Research), Yishay Mansour (Tel Aviv University), S. Rajagopolan (AT&T Research and Telcordia Technologies), Adi Rosen (University of Toronto).
Presented by Chen Chagashi.

The lecture's highlights
- Introduction: a description of our model
- Details of the five policies we consider
- Overview of the results
- Analysis of the queue policies
- The optimal offline schedule

Introduction: QoS
Future packet networks will support Quality of Service (QoS) in order to provide a full array of services. Under QoS, the user's commitments describe how his traffic will behave (average bandwidth, peak bandwidth, burst size, etc.), and in return the network guarantees the user bounds on maximum delay, jitter, etc.

What happens when the user's traffic does not conform to his commitments? There are two solutions:
1. Force the incoming traffic to conform to the committed parameters by regulating the traffic at the entrance to the network.
2. Label the traffic as "in" and "out": "in" traffic has the desired properties and "out" traffic is excess load. This leads to a setting where "in" packets have higher priority than "out" packets.

Abstract of the question
We have two types of packets: a low priority packet has a benefit of 1, and a high priority packet has a benefit of α >= 1. If α is very large, high priority packets have absolute preference over low priority packets. If α has a moderate value, there is a tradeoff between the two packet types. For α near 1, we are optimizing the total traffic, ignoring the priorities.

[Figure: the model. Low priority packets (benefit 1) and high priority packets (benefit α >= 1) arrive at a queue policy that manages an outgoing queue of B packets; one packet is sent per time unit.]

[Figure: the same model.] Once accepted, a packet cannot later be preempted from the outgoing queue.

The aim of the queue policy is to maximize the total benefit of the packets that were sent

A Queue Policy
Our queue policy is an online algorithm. There are arrival sequences for which the benefit of any queue policy will be very low, so a worst-case lower bound on the benefit itself, taken over all arrival sequences, will not differentiate between queue policies. Instead we use an approach known as competitive analysis, which is model independent: no assumptions about the arrival sequence are required to apply the bounds.

Competitive Analysis
Competitive analysis compares the performance of an online queue policy to an optimal offline policy, which is given the entire input sequence in advance. The competitive ratio is the minimum, over all input sequences, of the ratio of the online benefit to the offline benefit. The competitive ratio is always <= 1; our aim is to find queue policies with the largest competitive ratio.

The Model
The benefit of a queue policy is the sum of the benefits of the packets it accepts: k + αm, where k and m are the number of low and high priority packets accepted, respectively. We use competitive analysis: for an input sequence Λ, the benefit of an online policy π is π(Λ) and the benefit of an optimal offline policy is opt(Λ). The competitive ratio of policy π is min over Λ of π(Λ)/opt(Λ).
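For readability, the same definitions in LaTeX notation (a restatement of the slide above, not new material):

```latex
% Benefit of a policy \pi on input sequence \Lambda, with k low and
% m high priority packets accepted:
\pi(\Lambda) = k + \alpha m
% Competitive ratio of \pi (always at most 1):
\mathrm{cr}(\pi) = \min_{\Lambda} \frac{\pi(\Lambda)}{\mathrm{opt}(\Lambda)} \le 1
```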

Policy Definitions
Five queue policies:
- The Greedy Policy.
- The Fixed Partition Policy: threshold parameter x; the number of low priority packets is kept <= xB and the number of high priority packets is kept <= (1 - x)B.
- The Flexible Partition Policy: threshold parameter x; the number of low priority packets is kept <= xB, and high priority packets are always accepted (as long as the buffer is not full).
- The Dynamic Flexible Partition Policy: threshold parameter x; with k and m the number of low and high priority packets currently in the buffer, a low priority packet is accepted if k' <= xB', where k' = k + 1 and B' = B - m.
- The Round-Robin Policy.
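As a rough illustration only (not the authors' code), here is a minimal Python sketch of the acceptance rules above, assuming the buffer state is summarized by k low and m high priority packets in a buffer of size B. The function names are hypothetical, the Round-Robin Policy is omitted since its rule is not spelled out on this slide, and the high priority rule for the flexible policies follows the later analysis (high priority packets are rejected only when the buffer is full).

```python
B = 100   # buffer size (example value)
x = 0.5   # threshold parameter (example value)

def greedy_accepts(priority, k, m):
    # Greedy: accept any packet while the buffer is not full.
    return k + m < B

def fixed_partition_accepts(priority, k, m):
    # Fixed Partition: low priority packets may use at most x*B slots,
    # high priority packets at most (1 - x)*B slots.
    if priority == "low":
        return k < x * B
    return m < (1 - x) * B

def flexible_partition_accepts(priority, k, m):
    # Flexible Partition: low priority packets may use at most x*B slots;
    # high priority packets are accepted whenever the buffer is not full.
    if priority == "low":
        return k < x * B and k + m < B
    return k + m < B

def dynamic_flexible_partition_accepts(priority, k, m):
    # Dynamic Flexible Partition: accept a low priority packet if
    # k' <= x * B', with k' = k + 1 and B' = B - m (for x <= 1 this
    # also implies the buffer is not full); high priority packets are
    # accepted whenever the buffer is not full.
    if priority == "low":
        return (k + 1) <= x * (B - m)
    return k + m < B
```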

Overview of the Results

Policy                      | α = 1 | α = 2        | α → ∞
General Impossibility Results | 1   | 2/3          | 1/2
Greedy                      | 1     | 1/2          | 0
Round Robin                 | 1/2   | 1/2          | 1/2
Fixed Partition             | 1/2   | [1/4, 1/2]   | [1/4, 0.41]
Flexible Partition          | 1     | [0.41, 0.62] | 0.41
Dynamic Flexible Partition  | 1     | [0.53, 0.62] | 1/2

Each policy adjusts its parameter x to the optimal value for the given α.

Conclusions
The policies which allow a flexible use of the buffer space achieve a good competitive ratio in all regimes of α. Policies which try to preallocate the buffer space have a problem when the difference between the two benefit values is not significant (α close to 1). The best policy of the five we consider here is the Dynamic Flexible Partition Policy; by adjusting the threshold parameter we can tune its behavior.

Conclusions - Cont.
The Flexible Partition Policy, which is very similar to the Dynamic Flexible Partition Policy, achieves similar but somewhat lower performance in each of the three regimes. The Fixed Partition Policy has a consistently lower competitive ratio than the Flexible Partition Policy. The Round Robin Policy has a competitive ratio of 1/2 in all three regimes. The Greedy Policy should be viewed as a minimal performance measure.

Analysis of Queue Policies: The Greedy Policy
Theorem 1: The competitive ratio of the Greedy Policy is at least 1/α.
Proof: The Greedy Policy maximizes the total number of packets accepted, so the number of packets accepted by the optimal policy is <= the number of packets accepted by the Greedy Policy. Moreover, the ith packet accepted by the optimal policy has benefit at most α times the benefit of the ith packet accepted by the Greedy Policy.
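Spelling the proof out as an inequality chain (a restatement of the argument above; every accepted packet has benefit at least 1 and at most α):

```latex
\mathrm{opt}(\Lambda)
  \le \alpha \cdot \#\{\text{packets opt accepts}\}
  \le \alpha \cdot \#\{\text{packets Greedy accepts}\}
  \le \alpha \cdot \mathrm{Greedy}(\Lambda),
\qquad\text{so}\qquad
\frac{\mathrm{Greedy}(\Lambda)}{\mathrm{opt}(\Lambda)} \ge \frac{1}{\alpha}.
```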

Analysis of Queue Policies
Some results stated without proof:
Theorem 5: The competitive ratio of the Fixed Partition Policy with x = 1/2 is at least 1/4.
Theorem 6: The competitive ratio of the Round-Robin Policy is at least 1/2.
Theorem 18: The competitive ratio of the Flexible Partition Policy is at least √2 - 1.

Analysis of Queue Policies: The Dynamic Flexible Partition Policy
With parameter x = 1/2, the competitive ratio is at least 1/2. We define a matching as follows: when a high priority packet arrives, it is matched to the lowest unmatched low priority packet in the buffer (if such a packet exists).
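A minimal Python sketch of how this matching could be maintained for the analysis, assuming the buffer is a list ordered from head (next to be sent) to tail and interpreting "lowest" as closest to the head; the Packet class and field names are hypothetical.

```python
class Packet:
    def __init__(self, priority):
        self.priority = priority   # "low" or "high"
        self.matched = False       # bookkeeping used only by the analysis

def accept_high_priority(buffer, packet):
    """On accepting a high priority packet, match it to the lowest
    (closest to the head) unmatched low priority packet, if one exists."""
    for p in buffer:               # buffer[0] is the head of the queue
        if p.priority == "low" and not p.matched:
            p.matched = True       # p is now matched to the new packet
            break
    buffer.append(packet)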

Lemma 7: In the Dynamic Flexible Partition Policy, at any time we have l <= f, where l is the number of unmatched low priority packets in the queue and f is the number of free slots.
Proof: By the definition of the Dynamic Flexible Partition Policy with x = 1/2, we accept a low priority packet only if, after accepting it, there are at least as many free slots as low priority packets in the queue. So when we accept a low priority packet the claim holds.

Proof of Lemma 7, continued: When we accept a high priority packet we have one less free slot, but also one less unmatched low priority packet, assuming there are unmatched low priority packets in the queue. If there are no unmatched low priority packets in the queue, the lemma holds trivially.

Proof of Lemma 7, continued: When we send a packet (either high or low priority), the number of free slots increases by 1 and the number of unmatched low priority packets cannot increase (it either decreases by 1 or stays unchanged).

Corollary 8: In the Dynamic Flexible Partition Policy, when the buffer is full all the low priority packets are matched (since l <= f and f = 0).
Definitions: We call a packet good if it is (1) a high priority packet that was accepted, or (2) a matched low priority packet. The good prefix is the number of consecutive good packets from the start of the online buffer.
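Continuing the sketch above (reusing the hypothetical Packet class), the good prefix could be computed as follows; this is just a restatement of the definition in code.

```python
def good_prefix_length(buffer):
    """Count the consecutive good packets from the head of the buffer,
    where a packet is good if it is an accepted high priority packet
    or a matched low priority packet."""
    length = 0
    for p in buffer:
        if p.priority == "high" or p.matched:
            length += 1
        else:
            break
    return length
```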

The reference policy G-HIGH(B, x): a greedy policy that accepts only high priority packets, keeps at most xB packets in its queue, and sends x packets each time unit. For any input sequence, the number of high priority packets G-HIGH(B, 1) accepts is >= the number of high priority packets any other policy can accept; this is used to bound the number of high priority packets the offline policy accepts.
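A minimal sketch of G-HIGH(B, x) as just described, assuming discrete time units and a per-time-unit list of arriving priorities; the function and argument names are hypothetical, and the analysis below only uses x = 1.

```python
def g_high_accepted(arrivals, B, x=1.0):
    """G-HIGH(B, x): accept only high priority packets, keep at most
    x*B of them queued, and send x packets per time unit.
    `arrivals` is a list of lists of priorities, one list per time unit.
    Returns the number of high priority packets accepted."""
    queued = 0
    accepted = 0
    for packets_this_time_unit in arrivals:
        for priority in packets_this_time_unit:
            if priority == "high" and queued < x * B:
                queued += 1
                accepted += 1
        queued = max(0.0, queued - x)   # send x packets this time unit
    return accepted
```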

Lemma 9: In the Dynamic Flexible Partition Policy, for any input sequence Λ, at any time the number of packets in the good prefix is >= the number of packets in the buffer of G-HIGH(B, 1).
Proof: By induction on the sequence of events. Initially both buffers are empty, so the claim holds trivially.

Proof of Lemma 9, continued: Consider the arrival of a high priority packet. If G-HIGH(B, 1) accepts it, its packet count increases by 1; if G-HIGH(B, 1) rejects it, its buffer is full and its packet count is unchanged. If the Dynamic Flexible Partition Policy rejects the packet, its buffer must be full.

Proof of Lemma 9, continued: By Corollary 8, when the Dynamic Flexible Partition Policy's buffer is full, all the packets in the buffer are matched, so the good prefix is B (the entire buffer). Therefore, if the high priority packet is rejected, the good prefix equals B, which is >= the number of packets in the buffer of G-HIGH(B, 1).

Proof of Lemma 9, continued: If the high priority packet is accepted by the Dynamic Flexible Partition Policy, there are two cases. If there is an unmatched low priority packet in the buffer, we add a matching, which increases the good prefix by at least 1. If there is no unmatched low priority packet, every packet in the buffer is good, so appending the new (good) high priority packet increases the good prefix by 1. In both cases the inductive claim is maintained.

Proof of Lemma 9, continued: An arrival of a low priority packet does not change the buffer of G-HIGH(B, 1), and it does not change the good prefix of the Dynamic Flexible Partition Policy's buffer (an unmatched low priority packet joins the tail, after the good prefix).

Proof of Lemma 9, continued: During a send event, if the buffer of G-HIGH(B, 1) is empty, the claim holds trivially since the good prefix is >= 0. Otherwise, by the induction hypothesis the good prefix is also not empty; both the good prefix and the packet count of G-HIGH(B, 1) decrease by 1, so the claim is maintained.

Corollary 10: In the Dynamic Flexible Partition Policy, for any input sequence Λ, the number of packets G-HIGH(B, 1) sends is <= the number of good packets the Dynamic Flexible Partition Policy sends.
Proof: By Lemma 9, whenever G-HIGH(B, 1) sends a packet its buffer is non-empty, so the good prefix is non-empty and the Dynamic Flexible Partition Policy sends a good packet.

Claim 11: For any input sequence Λ, the number of packets (high or low) any schedule sends is <= 2 times the number of packets sent by the Dynamic Flexible Partition Policy.
Proof: The Dynamic Flexible Partition Policy rejects a low priority packet only if its buffer is at least half full.

Theorem 12: The competitive ratio of the Dynamic Flexible Partition Policy is at least 1/2.
Proof: Consider a fixed sequence of packet arrivals. Let k1 and m1 be the number of low and high priority packets an optimal offline policy accepts, respectively, and let k2 and m2 be the number of low and high priority packets the Dynamic Flexible Partition Policy accepts, respectively.

Proof of Theorem 12, continued: We want to prove that k1 + α·m1 <= 2(k2 + α·m2). According to Claim 11:
(1) k1 + m1 <= 2(k2 + m2)

Proof of Theorem 12, continued: Let g2 be the number of good packets of the Dynamic Flexible Partition Policy. Then
(2) m1 <= g2 <= 2·m2
The first inequality follows from Corollary 10 together with the fact that G-HIGH(B, 1) accepts at least as many high priority packets as any other policy; the second holds because the matching guarantees that at least half of the good packets are high priority packets.

Proof of Theorem 12, continued: Multiplying (2) by α - 1 (which is >= 0) and adding the result to (1) gives k1 + α·m1 <= 2(k2 + α·m2), as required.
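The arithmetic of this last step, written out (a restatement of the combination of (1) and (2), using α >= 1 so that multiplying by α - 1 preserves the inequality):

```latex
(\alpha - 1)\, m_1 \le 2(\alpha - 1)\, m_2
\quad\Longrightarrow\quad
k_1 + m_1 + (\alpha - 1) m_1 \le 2(k_2 + m_2) + 2(\alpha - 1) m_2
\quad\Longrightarrow\quad
k_1 + \alpha m_1 \le 2(k_2 + \alpha m_2).
```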

Theorem 13: The Dynamic Flexible Partition Policy with x = 3/4 has a competitive ratio of at least 15/28 (~0.53) for α = 2.

Optimal Offline Schedule
Given an input sequence Λ, OPTIMAL works in two phases. In the first phase it finds a schedule that includes only the high priority packets of Λ (in this phase OPTIMAL accepts a high priority packet if, when it arrives, the buffer is not full). The second phase augments the schedule by adding low priority packets.

Optimal Offline Schedule - Cont.
OPTIMAL considers the low priority packets in the order they arrive, and accepts a low priority packet if adding it does not force a later high priority packet to be rejected.
Theorem 26: For any input sequence Λ, OPTIMAL(Λ) generates the maximum benefit schedule.
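A minimal Python sketch of the two-phase construction of OPTIMAL described on these last two slides, under the simplifying assumptions that packets carry integer arrival times, the buffer holds B packets, and one packet is sent per elapsed time unit; this is an illustrative reading of the slides, not the paper's algorithm, and all names are hypothetical.

```python
def optimal_offline(arrivals, B):
    """Two-phase offline schedule (sketch).
    `arrivals` is a list of (time, priority) pairs sorted by arrival time.
    Returns the set of indices of accepted packets."""

    def feasible(indices):
        # The chosen packets never overflow a buffer of size B when one
        # packet is sent per elapsed time unit between arrivals.
        queued, last_time = 0, None
        for i in indices:
            t, _ = arrivals[i]
            if last_time is not None:
                queued = max(0, queued - (t - last_time))
            if queued >= B:          # buffer full on arrival: infeasible
                return False
            queued += 1
            last_time = t
        return True

    # Phase 1: accept a high priority packet whenever, at its arrival,
    # the buffer (holding the packets accepted so far) is not full.
    accepted = []
    for i, (_, priority) in enumerate(arrivals):
        if priority == "high" and feasible(accepted + [i]):
            accepted.append(i)

    # Phase 2: consider low priority packets in arrival order, and accept
    # one only if doing so keeps the whole schedule (including every
    # already-accepted high priority packet) feasible.
    for i, (_, priority) in enumerate(arrivals):
        if priority == "low":
            candidate = sorted(accepted + [i])
            if feasible(candidate):
                accepted = candidate

    return set(accepted)
```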