Research Unit in Networking - University of Liège - 2004

A Distributed Algorithm for Weighted Max-Min Fairness in MPLS Networks
Fabian Skivée (Fabian.Skivee@ulg.ac.be)
Guy Leduc (Guy.Leduc@ulg.ac.be)
Outline
- Introduction
- Weight Proportional Max-Min policy
- Proposed distributed WPMM algorithm
- Algorithm integration with RSVP
- Simulation results
- Conclusion
Introduction
- Our goal: share the available bandwidth among all the LSPs according to their weights.
- Consider a set of LSPs, each carrying many TCP connections. Without an explicit policy, more aggressive LSPs (those with more flows) get more than their fair share, independently of their reservations.
- The classical max-min rate allocation policy is widely accepted as an optimal criterion for sharing network bandwidth among user flows.
- Extended with a weight, it becomes WPMM (Weighted Proportional Max-Min).
Application
- The fair rate allocated to an LSP can be used at the ingress by a three-colour marker:
  - green: rate under the reserved rate
  - yellow: rate between the reserved rate and the fair rate
  - red: rate above the fair rate
- In case of congestion, core routers discard the red packets first and possibly, during transient periods, some of the yellow packets, using a WRED policy for example.
Weight Proportional Max-Min policy
- L: a set of links; S: a set of LSPs
- Each LSP s has:
  - a reserved rate RR_s
  - a fair rate FR_s
  - a maximal rate MR_s
  - a weight w_s
- Admission control: Σ_s RR_s ≤ C_l on each link l
- A fair share allocates to an LSP with a "small" demand what it wants, and distributes the unused resources to the "big" LSPs in proportion to their weights.
[Figure: bandwidth allocated to three LSPs with weights w_1, w_2, w_3, showing expected BW, shared BW, the fair rates and the maximal rates MR_1, MR_2, MR_3]
Weight-proportional allocation policy
The centralized Water-Filling algorithm computes the exact fair rate for each LSP:
- Step 1: first allocate to each LSP its reserved rate
- Step 2: increase the rate of each LSP proportionally to its weight until a link becomes fully utilized
- Step 3: freeze all the LSPs crossing this link and go back to step 2, until all the LSPs are frozen
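The three steps above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the data layout (a dict of link capacities and a dict of LSP tuples) and the function name are assumptions made for the example.

```python
def water_filling(links, lsps, eps=1e-9):
    """links: {link_id: capacity}.
    lsps: {lsp_id: (path, RR, MR, weight)} where path is a list of link ids.
    Returns {lsp_id: WPMM fair rate}."""
    # Step 1: first allocate to each LSP its reserved rate.
    rate = {s: lsps[s][1] for s in lsps}
    frozen = set()
    while len(frozen) < len(lsps):
        active = [s for s in lsps if s not in frozen]

        def residual(l):
            return links[l] - sum(rate[s] for s in lsps if l in lsps[s][0])

        # Step 2: grow every unfrozen LSP proportionally to its weight until
        # a link becomes fully utilized or an LSP reaches its maximal rate.
        incr = min(
            [residual(l) / sum(lsps[s][3] for s in active if l in lsps[s][0])
             for l in links if any(l in lsps[s][0] for s in active)]
            + [(lsps[s][2] - rate[s]) / lsps[s][3] for s in active])
        for s in active:
            rate[s] += lsps[s][3] * incr
        # Step 3: freeze the LSPs crossing a saturated link or at their MR.
        for s in active:
            path, rr, mr, w = lsps[s]
            if rate[s] > mr - eps or any(residual(l) < eps for l in path):
                frozen.add(s)
    return rate
```

For instance, on a single link of capacity 12 shared by two LSPs with reserved rate 2 and weights 1 and 2, the spare 8 units are split per unit of weight (μ = 8/3), giving fair rates 2 + 8/3 and 2 + 16/3.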
Proposed distributed WPMM algorithm
- We propose an algorithm that converges quickly to the WPMM policy through distributed and asynchronous iterations.
- We use the RSVP signaling protocol to convey information through the network.
- We add 4 fields to the PATH and RESV packets:
  - RR, W: given at the creation of the LSP
  - explicit fair rate (ER): the fair rate allocated by the network to this LSP
  - bottleneck (BN): id of the LSP's bottleneck link
Proposed distributed WPMM algorithm
- Periodically, the ingress sends a PATH packet.
- Each router computes a local fair rate for the LSP and updates the ER and BN fields if its local fair rate is less than the current ER value.
- Upon receiving a PATH packet, the egress router sends a RESV packet.
- Each router on the backward path updates its information with the RESV parameters ER and BN.
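The per-hop PATH processing can be sketched as below. The field names mirror the ER and BN fields added to the PATH packet; the packet class and the way each router obtains its local fair rate are assumptions made for the sketch.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PathPacket:
    lsp_id: str
    rr: float            # reserved rate RR
    w: float             # weight W
    er: float            # explicit fair rate, starts unbounded at the ingress
    bn: Optional[str]    # id of the bottleneck link seen so far

def process_path(packet: PathPacket, link_id: str,
                 local_fair_rate: float) -> PathPacket:
    """A router on the forward path lowers ER to its local fair rate when
    that rate is the smallest seen so far, and records itself as the
    bottleneck (BN) of the LSP."""
    if local_fair_rate < packet.er:
        packet.er = local_fair_rate
        packet.bn = link_id
    return packet

# A PATH packet crossing three links: the link offering 5.0 becomes
# the bottleneck, so the egress returns ER = 5.0 and BN = "l2" in RESV.
p = PathPacket("lsp1", rr=1.0, w=1.0, er=float("inf"), bn=None)
for link, rate in [("l1", 10.0), ("l2", 5.0), ("l3", 8.0)]:
    p = process_path(p, link, rate)
# p.er == 5.0 and p.bn == "l2"
```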
Local fair rate computation
- RR: reserved rate; W: weight; FR: fair rate; C_l: link capacity
- B_l = set of LSPs bottlenecked at link l
- U_l = set of LSPs not bottlenecked at link l
- μ_l = additional fair share per unit of weight for the LSPs bottlenecked at link l
- The local fair rate for LSP i at link l is defined by
  FR_i^l = RR_i + w_i · μ_l
- μ_l is computed by
  μ_l = (C_l − Σ_{j ∈ B_l} RR_j − Σ_{j ∈ U_l} FR_j) / Σ_{j ∈ B_l} w_j
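The computation above, in a minimal sketch: a link gives every non-bottlenecked LSP its current fair rate, every bottlenecked LSP its reserved rate, and splits the remainder per unit of weight. The per-LSP tuple layout and function name are assumptions for the example.

```python
def local_fair_rates(capacity, lsps, bottlenecked):
    """lsps: {lsp_id: (RR, w, FR)}; bottlenecked: the set B_l of LSP ids
    bottlenecked at this link. Returns {lsp_id: FR_i^l} for those LSPs."""
    # Residual capacity once non-bottlenecked LSPs (U_l) keep their fair
    # rate FR and bottlenecked LSPs (B_l) keep their reserved rate RR.
    residual = capacity
    residual -= sum(rr for s, (rr, w, fr) in lsps.items() if s in bottlenecked)
    residual -= sum(fr for s, (rr, w, fr) in lsps.items() if s not in bottlenecked)
    # mu_l: additional fair share per unit of weight for LSPs in B_l.
    mu = residual / sum(w for s, (rr, w, fr) in lsps.items() if s in bottlenecked)
    return {s: lsps[s][0] + lsps[s][1] * mu for s in bottlenecked}

# Link of capacity 10: LSP "c" is bottlenecked elsewhere at FR = 2;
# LSPs "a" and "b" are bottlenecked here with RR = 1, weights 1 and 2.
rates = local_fair_rates(
    10.0,
    {"a": (1.0, 1.0, None), "b": (1.0, 2.0, None), "c": (0.0, 1.0, 2.0)},
    bottlenecked={"a", "b"})
# residual = 10 - (1 + 1) - 2 = 6, mu_l = 6 / 3 = 2
# → rates == {"a": 3.0, "b": 5.0}
```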
Bottleneck-consistency
- U_l is bottleneck-consistent if, for every LSP i in U_l, FR_i < RR_i + w_i · μ_l, i.e. all LSPs not bottlenecked at a link must have a bottleneck elsewhere or reach their maximal rate, so their allocated fair rate is less than the one the current link would propose.
- The key concept of this algorithm is the bottleneck marking strategy.
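The consistency condition can be checked directly, as this small sketch shows (it assumes the same per-LSP tuple layout as the previous slide; the function name is hypothetical):

```python
def bottleneck_consistent(mu_l, u_l):
    """u_l: {lsp_id: (RR, w, FR)} for the LSPs NOT bottlenecked at link l.
    Consistent when each such LSP's allocated fair rate is below the
    local fair rate RR + w * mu_l this link would propose for it."""
    return all(fr < rr + w * mu_l for rr, w, fr in u_l.values())

# With mu_l = 2, an LSP with RR = 1, w = 1, FR = 2.5 is consistent
# (2.5 < 3); with FR = 3.5 it should have been marked bottlenecked here.
ok = bottleneck_consistent(2.0, {"a": (1.0, 1.0, 2.5)})
bad = bottleneck_consistent(2.0, {"a": (1.0, 1.0, 3.5)})
# ok is True, bad is False
```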
Improvements provided by our algorithm
- There is no similar work for MPLS networks, but there are interesting proposals for ATM.
- A naïve solution is to adapt Hou's work to the MPLS context.
- With the flexibility provided by MPLS, we can improve on this naïve solution by:
  - updating the routers on the backward path, so that they all have the same information as the ingress
  - adding a new parameter BN that explicitly conveys the bottleneck link of the path
- This information considerably improves the convergence time.
Algorithm integration with RSVP
- The RSVP signaling protocol is widespread in MPLS networks for LSP establishment.
- RSVP Refresh Overhead Reduction Extensions (RFC 2961): if two successive PATH (or RESV) packets are identical, the upstream node only sends a refresh PATH. The downstream node refreshes the LSP entry but does not process the whole PATH packet.
- By associating a special bit (NR_i) with each LSP i, we can determine whether the LSP's values have changed, and so keep this convenient RSVP mechanism.
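A hypothetical sketch of how such a per-LSP change bit could gate the refresh reduction: a full PATH is sent only when the LSP's advertised values changed since the last one (the class, message labels, and state layout are illustrative, not RFC 2961 or the authors' encoding):

```python
class IngressState:
    def __init__(self):
        self.last_sent = {}   # lsp_id -> (ER, BN) last advertised downstream

    def next_message(self, lsp_id, er, bn):
        """Return "PATH" when the values changed (NR bit set), else a
        summary refresh the downstream node will not fully reprocess."""
        changed = self.last_sent.get(lsp_id) != (er, bn)
        self.last_sent[lsp_id] = (er, bn)
        return "PATH" if changed else "SREFRESH"

s = IngressState()
msgs = [s.next_message("lsp1", 5.0, "l2"),   # first advertisement: full PATH
        s.next_message("lsp1", 5.0, "l2"),   # unchanged: refresh only
        s.next_message("lsp1", 4.0, "l3")]   # new fair rate: full PATH again
# msgs == ["PATH", "SREFRESH", "PATH"]
```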
Simulation results
- We created a dedicated simulator and compared our algorithm and Hou's solution adapted to MPLS networks against the WPMM allocation vector (computed by Water-Filling).
- An iteration consists of simulating the RSVP protocol for each LSP in the topology.
- We stop when the mean relative error between the last rate vector and the WPMM rate allocation is under a fixed precision.
- We ran extensive simulations on 63 topologies of 20 to 100 nodes, with between 20 and 1000 LSPs.
Simulation results
Our solution is nearly 3 times faster than Hou's solution.

Average number of iterations on 63 topologies:

Precision | Hou's algorithm | Our algorithm | Gain
1E-2      | 5.4             | 2.0           | 2.7
1E-3      | 18.4            | 6.8           | 2.7
1E-4      | 40.4            | 14.1          | 2.9
1E-5      | 66.9            | 19            | 3.5
Simulation results
- To stabilize 90% of the LSPs, our solution takes 4 iterations (16 with Hou's).
- On the worst topology, it takes 36 iterations to converge (84 with Hou's solution).
Conclusion
- This distributed algorithm provides a scalable architecture for sharing the available bandwidth among all the LSPs according to their weights.
- By using a new update scheme and explicit bottleneck link marking, our algorithm considerably improves performance (between 2 and 4 times faster).
- Compatibility with the RSVP refresh extensions (RFC 2961) reduces the overhead.
Thanks for your attention

This work was supported by:
- the European ATRIUM project
- the TOTEM project, funded by the DGTRE (Walloon region)

Contact: Fabian.Skivee@ulg.ac.be