Rethinking Traffic Management: Design Optimizable Networks
Jiayue He
May 9th, 2008
Approach: Theory Meets Practice
Using optimization theory:
- Analyze system properties
- Derive protocols and architectures
Practical solutions:
- Understand the limitations of today's protocols and architectures
- Propose new protocols and architectures implementable using existing technology
Traffic Management
- Determines the traffic rate along each path
- Supports multiple Internet applications
[Figure: traffic management shown against the protocol stack: Application, Transport, Network, Link, Physical]
Traffic Management Today
- Users (RTTs): congestion control
- Operators (hours): traffic engineering
- Routers (seconds): routing protocols
Traffic management evolved organically, without conscious design.
Goal: Redesign Traffic Management
- Part I (10 min): analysis of throughput-sensitive traffic
- Part II (22 min): design for throughput-sensitive traffic
- Part III (18 min): resource allocation between multiple traffic classes (throughput-sensitive and other traffic classes)
Scope of This Talk
- Single Internet Service Provider backbone, with control and visibility of the network
- Traffic management of aggregate flows
- No inter-network economics
- Multipath with flexible splitting
PART ONE
Can Congestion Control and Traffic Engineering Be at Odds?
Motivation
- Congestion control maximizes user utility: given the routing R_li, how to adapt the end rates x_i?
- Traffic engineering minimizes network congestion: given the traffic x_i, how to perform the routing R_li?
Goal: Understand the Interaction
[Figure: congestion control adapts the rates x_i, traffic engineering adapts the routing R_li, and each feeds the other]
Understand system properties:
- Does the system converge to a stable value?
- What is a reasonable overall objective?
Congestion Control Implicitly Maximizes Aggregate User Utility

  maximize   ∑_i U_i(x_i)               (aggregate utility)
  subject to ∑_i R_li x_i ≤ c_l          for every link l
  variables  x                           (source rates)

Source-destination pairs are indexed by i, and R is the routing matrix. The utility function U_i(x_i) represents user satisfaction and elasticity (shown as a concave curve of utility versus source rate x_i). The result is a fair rate allocation among greedy users (Kelly98, Low03, Srikant04).
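To make the formulation concrete, the short sketch below solves a tiny instance of this utility-maximization problem with a convex solver. The two-source, two-link topology, the log utilities, the capacities, and the use of cvxpy are all illustrative assumptions, not part of the talk.

```python
import cvxpy as cp
import numpy as np

# R[l, i] = 1 if source i traverses link l (assumed toy topology)
R = np.array([[1, 1],     # link 1 carries both sources
              [0, 1]])    # link 2 carries only source 2
c = np.array([1.0, 0.6])  # link capacities

x = cp.Variable(2, nonneg=True)                       # source rates x_i
problem = cp.Problem(cp.Maximize(cp.sum(cp.log(x))),  # U_i(x_i) = log(x_i)
                     [R @ x <= c])
problem.solve()
print(x.value)  # proportionally fair rates under the capacity constraints
```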
Traffic Engineering Explicitly Minimizes Network Congestion

  minimize   ∑_l f(u_l)                  (aggregate congestion cost)
  subject to u_l = ∑_i R_li x_i / c_l     (link utilization)
  variables  R

Links are indexed by l. The cost function f(u_l) represents a penalty for approaching capacity (rising steeply as u_l approaches 1) and approximates average queuing delay; the goal is to avoid bottlenecks in the network (FortzThorup04).
Model of Interaction
- Congestion control (RTTs): max ∑_i U_i(x_i), s.t. ∑_i R_li x_i ≤ c_l, adapting the rates x_i
- Traffic engineering (hours): min ∑_l f(u_l), s.t. u_l = ∑_i R_li x_i / c_l, adapting the routing R_li
Assumptions: the TCP sessions are between two customers of the same ISP, and f is controlled by the operators and can be modified.
Numerical Experiments
MATLAB experiments: different topologies and capacity distributions.
Benchmark (joint optimum over rates and routing):
  maximize ∑_i U_i(x_i)  s.t. Rx ≤ c,  variables x, R
Observations:
- The system converges
- A utility gap exists between the joint system and the benchmark
Backward-Compatible Design
- Simulation of the joint system suggests that it is stable, but suboptimal
- The gap is reduced if we change f to the red curve
[Figure: two cost curves f(u_l) versus link utilization u_l, marked at u_l = 1]
Theoretical Results
Theorem: the joint system model converges if we
- replace the capacity constraint in congestion control with a penalty function, and
- U_i''(x_i) ≤ -U_i'(x_i) / x_i, which holds for all TCP variants.
Gauss-Seidel view:
  Master problem:      min g(x,R) = -∑_i U_i(x_i) + γ ∑_l f(u_l)
  Congestion control:  argmin_x g(x,R)
  Traffic engineering: argmin_R g(x,R)
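The Gauss-Seidel structure can be illustrated with a small numerical sketch: congestion control minimizes g over the rate with the routing fixed, then traffic engineering minimizes g over the routing split with the rate fixed. The two-link topology, log utility, exponential link cost, and γ = 1 below are my assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import minimize_scalar

c = np.array([1.0, 2.0])   # capacities of the two parallel links (assumed)
gamma = 1.0                # weight on the congestion penalty

def g(x, r):
    """Master objective: -U(x) + gamma * sum_l f(u_l), with U = log and f = exp."""
    u = np.array([r * x, (1 - r) * x]) / c   # per-link utilization
    return -np.log(max(x, 1e-9)) + gamma * np.sum(np.exp(u))

x, r = 0.5, 0.5
for _ in range(50):
    # Congestion control step: argmin over the rate x with the split r fixed
    x = minimize_scalar(lambda xx: g(xx, r), bounds=(1e-6, 5.0), method="bounded").x
    # Traffic engineering step: argmin over the split r with the rate x fixed
    r = minimize_scalar(lambda rr: g(x, rr), bounds=(0.0, 1.0), method="bounded").x
print(x, r)   # fixed point of the Gauss-Seidel iteration
```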
Pros and Cons of Changing f
Pros:
- Backwards compatible
- Can maximize aggregate user utility
Cons:
- Creates bottleneck links
- Fragile to high-volume traffic bursts
This is the motivation for the redesign in Part II.
Contributions and Related Work
Related work:
- Separate analyses of congestion control and traffic engineering
- Using congestion prices as link weights (WangLiLowDoyle05, HeChiangRexford06)
Contributions:
- Modeled the interaction between congestion control and traffic engineering
- Studied the interaction
- Proposed a backward-compatible design
PART TWO
TRUMP: TRaffic-management Using Multipath Protocol
Joint work with Ma'ayan Bresler and Martin Suchara
Motivation for Redesign
Shortcomings of today's traffic management:
- Congestion control assumes routing is fixed; traffic engineering assumes traffic is inelastic
- Traffic engineering occurs on the timescale of hours, slower than traffic shifts
- Path diversity is not fully exploited
Goal: redesign traffic management from scratch using optimization tools.
Top-Down Redesign
Problem formulation → (optimization decomposition) → distributed solutions → (compare using simulations) → TRUMP algorithm → (translate into a packet version) → TRUMP protocol
A Balanced Objective
  maximize  ∑_i U_i(x_i) − w ∑_l f(u_l)
Congestion control maximizes throughput, which generates bottlenecks; traffic engineering minimizes congestion, which avoids bottlenecks. The penalty weight w balances the two.
Topologies with Different Patterns of Bottleneck Links
[Figures: Abilene (Internet2), Access-Core, and Multihoming topologies]
Effect of the Penalty Weight w
- High aggregate utility can be achieved for a range of w
- The range depends on the number of flows on each bottleneck link
[Figure: user utility and operator penalty versus w; the utility axis is normalized as (U − wf)/U]
Top-Down Redesign (roadmap)
Problem formulation → (optimization decomposition) → distributed solutions → (compare using simulations) → TRUMP algorithm → (translate into a packet version) → TRUMP protocol
Multipath Formulation
  maximize   ∑_i U_i(∑_j z_j^i) − w ∑_l f(u_l)
  subject to link load ≤ c_l
  variables  path rates z
Here i indexes source-destination pairs and j indexes paths; the path rates z (for example z_1^1, z_2^1, z_3^1 for the paths of pair 1) capture both the source rate and the routing.
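A minimal sketch of this multipath objective, assuming one source-destination pair with two paths over three links, a log utility, an exponential link cost, and cvxpy as the solver (all of these are my assumptions, not the talk's setup):

```python
import cvxpy as cp
import numpy as np

A = np.array([[1, 0, 1],       # A[j, l] = 1 if path j uses link l
              [0, 1, 1]])
c = np.array([1.0, 1.0, 1.5])  # link capacities
w = 0.5                        # penalty weight

z = cp.Variable(2, nonneg=True)        # path rates z_j for the single pair
u = cp.multiply(A.T @ z, 1.0 / c)      # link utilizations
objective = cp.log(cp.sum(z)) - w * cp.sum(cp.exp(u))
prob = cp.Problem(cp.Maximize(objective), [A.T @ z <= c])
prob.solve()
print(z.value)   # how the pair splits its traffic across the two paths
```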
Overview of Distributed Solutions
- Operator: tunes w, U, and f; parameters are tuned very rarely
- Edge nodes: update the path rates z and rate-limit incoming traffic
- Routers: set up multiple paths, measure link load, and update the link prices s
The distributed algorithm runs on the timescale of RTTs.
Evaluating Four Decompositions
The four decompositions differ in the number of tunable parameters.
Theoretical results and limitations:
- All are proven to converge to the global optimum for well-chosen parameters
- Little guidance for choosing parameters
- Only loose bounds on the rate of convergence
Therefore, sweep a large parameter space in MATLAB:
- Compare rates of convergence
- Compare sensitivity to the tunable parameters
Convergence Properties
Tunable parameters impact convergence time.
[Figure: iterations to convergence versus the tunable parameter (o: average value, x: actual values), illustrating the best rate and the parameter sensitivity]
Convergence Properties (MATLAB)
For all algorithms:
- Parameter sensitivity is correlated with the rate of convergence
- There is a trade-off between convergence and utility
Comparing across algorithms:
- Extra parameters do not improve convergence
- Allowing packet loss improves convergence
- A direct update converges faster than an iterative update (with a constant tunable parameter)
Top-Down Redesign (roadmap)
Problem formulation → (optimization decomposition) → distributed solutions → (compare using simulations) → TRUMP algorithm → (translate into a packet version) → TRUMP protocol
Next: construct TRUMP from different parts of the previous algorithms.
TRUMP Algorithm
Source i:
  path rate z_j^i(t+1) = argmax [ U_i(∑_k z_k^i) − z_j^i · (price for path j) ]
Link l:
  p_l(t+1) = [ p_l(t) − β_p (c_l − link load) ]+
  q_l(t+1) = w f'(u_l)
Price for path j = ∑_{l on path j} (p_l + q_l)
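The following is a hedged, fluid-level sketch of these updates on a toy network. It keeps the p and q price updates from the slide but replaces TRUMP's direct rate maximization with a simpler gradient-style rate step; the topology, log utility, exponential cost f, and all constants are assumptions.

```python
import numpy as np

A = np.array([[1, 0, 1],          # A[j, l] = 1 if path j uses link l
              [0, 1, 1]])
cap = np.array([1.0, 1.0, 1.5])   # link capacities
w, beta_p, step = 0.5, 0.05 / cap**2, 0.05

z = np.array([0.1, 0.1])          # path rates of one source
p = np.zeros(3)                   # effective-capacity prices

for _ in range(500):
    load = A.T @ z
    u = load / cap
    p = np.maximum(0.0, p - beta_p * (cap - load))   # p_l update from the slide
    q = w * np.exp(u)                                # q_l = w f'(u_l), with f(u) = e^u
    path_price = A @ (p + q)                         # sum of link prices along each path
    # Gradient-style rate step: move z_j toward U'(sum z) = path price (U = log)
    z = np.maximum(0.0, z + step * (1.0 / max(z.sum(), 1e-9) - path_price))
print(z, u)   # final path rates and link utilizations
```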
TRUMP Properties
Theorem: TRUMP converges if
- w is sufficiently large such that p = 0, and
- n_l < α f'(u_l) (1/α + 1) / f''(u_l), where n_l is the number of flows on link l.
Proof technique: contraction mapping.
TRUMP trumps the previous distributed algorithms (MATLAB):
- Converges to the optimum
- Converges faster
- Converges in many scenarios with β_p = 0.05 / c_l^2
Top-Down Redesign (roadmap)
Problem formulation → (optimization decomposition) → distributed solutions → (compare using simulations) → TRUMP algorithm → (translate into a packet version) → TRUMP protocol
So far we assumed a fluid model with constant feedback delay; next, translate TRUMP into a packet version.
TRUMP: Packet-Based Version
Link l:
- link load = (bytes received in period T) / T
- Update link prices every T
Source i:
- Update path rates every max_j { RTT_j^i }
Arrivals and departures of flows are implicitly conveyed through price changes.
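A minimal sketch of the per-link bookkeeping this implies, assuming byte counting over a period T and the price updates from the previous slide (the class and method names are my own):

```python
import math

class TrumpLink:
    """Per-link state for the packet-based sketch (names are my own)."""
    def __init__(self, capacity_bps, w, beta_p, period_s=0.1):
        self.c, self.w, self.beta_p, self.T = capacity_bps, w, beta_p, period_s
        self.bytes_in_period = 0
        self.p = 0.0   # effective-capacity price
        self.q = 0.0   # congestion-cost price

    def on_packet(self, size_bytes):
        self.bytes_in_period += size_bytes

    def end_of_period(self):
        load = 8 * self.bytes_in_period / self.T   # bits per second over period T
        u = load / self.c                          # utilization
        self.p = max(0.0, self.p - self.beta_p * (self.c - load))
        self.q = self.w * math.exp(u)              # assuming f(u) = e^u
        self.bytes_in_period = 0
        return self.p + self.q                     # advertised link price
```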
Packet-Level Experiments (NS-2)
Set-up:
- Topologies and delays of large ISPs (Rocketfuel data)
- Selected flows and paths
- Link failures and recoveries
- ON-OFF traffic model
Questions:
- Does TRUMP react quickly to dynamics?
- How many paths does TRUMP need?
TRUMP Link Dynamics (NS-2)
[Figure: throughput (Mbps) versus time (s) around a link failure or recovery]
TRUMP reacts quickly to link dynamics; the same observation holds for ON-OFF flows.
TRUMP: A Few Paths Suffice
[Figure: throughput (Mbps) versus time (s) for different numbers of paths]
Sources benefit the most from having a few alternative paths.
Summary of TRUMP Properties
- Tuning parameters: universal parameter setting; only needs tuning for small w
- Robustness to link dynamics: reacts quickly to link failures and recoveries
- Robustness to flow dynamics: independent of the variance of file sizes; more efficient for larger files
- General: trumps the other algorithms; two or three paths suffice
Related Work
- Multiple decompositions (PalomarChiang06)
- Designing traffic-management protocols:
  - Congestion control (FAST TCP)
  - Dynamic traffic engineering (REPLEX, TeXCP)
  - Traffic management (KeyMassoulieTowsley07, LinShroff06, Shakkottai et al. 06, Voice07)
Contributions
Design process:
- Formulated a new objective for traffic management
- Compared four distributed algorithms (derived by decomposition)
- Constructed TRUMP based on the insights
TRUMP:
- Universal parameter setting
- Packet-level protocol and simulations
PART THREE
DaVinci: Dynamically Adaptive Virtual Networks for a Customized Internet
Joint work with Rui Zhang-Shen, Ying Li, Mike Lee, Martin Suchara, and Umar Javed
The Internet Has Many Applications
Different applications have different requirements:
- Throughput-sensitive: file transfer, web
- Delay-sensitive: VoIP, IPTV, online gaming
Supporting Multiple Traffic Classes
Key research areas:
- QoS: provides separate resources to support multiple traffic classes in parallel
- Overlays: provide customized protocols for each traffic class
Network virtualization is emerging:
- Current applications: router consolidation, experimental testbeds, VPNs
- Router virtualization: separate resources
- Programmable routers: customized protocols
Virtual Networks
Each virtual node and link has isolated resources.
Motivation for Virtualization
Two traffic classes:
- Delay-sensitive traffic (DST): fixed demand
- Throughput-sensitive traffic (TST): elastic
[Figure: two nodes connected by a 5 ms, 100 Mbps link and a 10 ms, 1000 Mbps link]
With a single queue, TST can fill up both links, so DST may not be satisfied. With shared routing, DST chooses the shorter path, so capacity is wasted.
Adaptive Network Virtualization
How should resources be partitioned?
- Static partitioning is simple, but can be inefficient: one virtual network could be congested while another is idle
- Instead, dynamically allocate bandwidth shares!
Dynamically Adaptive Virtual Networks for a Customized Internet
DaVinci is an architecture for realizing adaptive network virtualization.
Virtual networks, indexed by (k):
- One per traffic class
- Run customized traffic-management protocols
Substrate network:
- Provides separate queues
- Computes per-link bandwidth shares
- Enforces bandwidth shares with traffic shapers
DaVinci: Substrate Link
[Figure: each substrate link measures the per-virtual-network link loads, computes the bandwidth shares y_l^(1), ..., y_l^(N), and computes the congestion prices s_l^(k)]
Optimization is used to determine these computations.
ISP: Maximize Aggregate Performance
  maximize   ∑_k w^(k) U^(k)(z^(k), y^(k))    (weighted aggregate performance objective)
  subject to ∑_k H^(k) z^(k) ≤ c
  variables  z^(k) (path rates), y^(k) (bandwidth shares)
Users + efficient use of resources = $$$
Primal Decomposition
The ISP problem decomposes into one subproblem per traffic class:
  maximize   U^(k)(z^(k), y^(k))
  subject to H^(k) z^(k) ≤ y^(k)
  variables  z^(k)
The master problem updates y^(k) using
- an indication of congestion, s^(k), and
- an indication of performance, d/dy^(k) U^(k)(z^(k), y^(k)).
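For concreteness, one per-class subproblem might be solved as below: given the current bandwidth shares y^(k), compute the path rates z^(k) and read the congestion prices s^(k) off the constraint duals, which then feed the master update. The toy topology, log utility, and use of cvxpy are my assumptions.

```python
import cvxpy as cp
import numpy as np

H = np.array([[1, 0, 1],            # H[j, l] = 1 if path j of this class uses link l
              [0, 1, 1]])
y = np.array([80.0, 80.0, 100.0])   # current bandwidth shares on the three links (Mbps)

z = cp.Variable(2, nonneg=True)                 # path rates z^(k) for this class
cap = [H.T @ z <= y]                            # per-link share constraint
prob = cp.Problem(cp.Maximize(cp.sum(cp.log(z))), cap)
prob.solve()

print("path rates z^(k):", z.value)
print("congestion prices s^(k):", cap[0].dual_value)   # fed back to the master problem
```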
Bandwidth Allocation for Link l
Adjust the bandwidth shares in two steps:
1. Gradient step:
     v_l^(k)(t+1) = [ y_l^(k)(t) + β_y w^(k) λ_l^(k) ]+,  where  λ_l^(k) = s_l^(k) + d/dy^(k) U^(k)(z^(k), y^(k))
2. Projection of v onto the feasible region ∑_k y_l^(k) ≤ c_l
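A sketch of this two-step update for a single link, assuming N virtual networks share capacity c_l and using a Euclidean projection onto the shared-capacity constraint (the projection routine and all numbers are my own choices):

```python
import numpy as np

def update_shares(y, lam, weights, c_l, beta_y):
    """One bandwidth-share update: gradient step, then projection onto sum(y) <= c_l."""
    v = np.maximum(0.0, y + beta_y * weights * lam)   # [ y + beta_y * w * lambda ]+
    if v.sum() > c_l:
        # Euclidean projection onto the simplex {y >= 0, sum(y) = c_l}
        u = np.sort(v)[::-1]
        css = np.cumsum(u)
        rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - c_l))[0][-1]
        theta = (css[rho] - c_l) / (rho + 1)
        v = np.maximum(v - theta, 0.0)
    return v

# Illustrative call: two virtual networks, lambda = s + dU/dy reported by each
y_next = update_shares(y=np.array([40.0, 60.0]), lam=np.array([0.2, -0.1]),
                       weights=np.array([1.0, 1.0]), c_l=100.0, beta_y=10.0)
print(y_next)   # shares after the step and the projection
```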
Theorem
The bandwidth-share computation, together with the per-traffic-class problems, maximizes aggregate performance if:
- the objective function and constraints are convex,
- the stepsize β_y is diminishing, and
- the bandwidth shares are updated when the congestion prices have converged.
Proof technique: primal decomposition.
System Properties from the Theorem
- Resources are efficiently utilized to maximize aggregate performance
- Bandwidth shares converge to a stable value, and the computation is based only on local link information
- Each virtual network runs its own protocols independently
- Bandwidth shares are updated more slowly than congestion prices
DST on a High-Capacity, High-Delay Link
[Figure: allocated bandwidth (Mbps) versus number of iterations for the two-node topology with a 5 ms, 100 Mbps link (DST: 50 Mbps) and a 10 ms, 1000 Mbps link (DST: 500 Mbps)]
DST does not use all of the allocated bandwidth.
Related Work and Contributions
Related work:
- QoS, overlays, and network virtualization
- Primal decomposition
Contributions:
- Introduced adaptive network virtualization
- Introduced DaVinci
- Proved the stability and optimality of DaVinci
Conclusions
Traffic management today:
- an organic evolution
- complex for operators
Redesign of traffic management to support multiple traffic classes:
- TRUMP: design of an individual traffic class
- DaVinci: design of the resource allocation between traffic classes
Future Research Directions
Extending DaVinci:
- Tailoring to application-specific requirements, e.g., R-factors for voice traffic
- Running sub-optimal but simpler protocols
Interdomain traffic management requires:
- Economic incentives
- Protection against malicious users
Publications Related to the Thesis
- Part One: Globecom, JSAC
- Part Two: CoNEXT, submitted to ToN
- Part Three: under preparation
Related publications:
- Multipath survey: IEEE Network Magazine
- Design Optimizable Protocols: CCR editorial, invited book chapter
The End
Thank you!
Abilene Topology: f = e^(y_l / c_l)
[Figure: aggregate utility gap versus the standard deviation of link capacities]
A gap exists.
Abilene Continued: f = n (y_l / c_l)^n
[Figure: aggregate utility gap versus n]
The gap shrinks with larger n.
Optimization Decomposition
Deriving prices and path rates:
- Prices: penalties for violating a constraint
- Path rates: updates driven by the penalties
Example: TCP congestion control (sketched below)
- Link prices: packet loss or delay
- Source rates: AIMD based on the prices
Our problem is more complicated: a more complex objective and multiple paths.
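As a toy illustration of the "prices drive the rate updates" pattern, here is TCP-style AIMD with the loss signal standing in for the link price; this simple sketch is mine, not part of the talk.

```python
def aimd_step(rate_pkts_per_rtt, loss_observed, alpha=1.0, beta=0.5):
    """One AIMD update per RTT: additive increase, multiplicative decrease."""
    if loss_observed:                    # nonzero price: back off multiplicatively
        return max(1.0, beta * rate_pkts_per_rtt)
    return rate_pkts_per_rtt + alpha     # zero price: probe for more bandwidth

rate = 10.0
for loss in [False, False, True, False]:   # hypothetical per-RTT loss signal
    rate = aimd_step(rate, loss)
print(rate)   # 7.0: two increases, one halving, one increase
```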
Effective Capacity (Links)
Rewrite the capacity constraint (link load ≤ c_l) in terms of an effective capacity y_l, with link load ≤ y_l ≤ c_l.
Subgradient feedback price update:
  s_l(t+1) = [ s_l(t) − stepsize · (y_l − link load) ]+
The stepsize controls the granularity of reaction and is a tunable parameter; the effective capacity keeps the system robust.
Key Architectural Principles
Effective capacity:
- Advance warning of impending congestion
- Simulates the link running at lower capacity and gives feedback based on that
- Dynamically updated
Consistency price:
- Allows some packet loss
- Allows some overshooting in exchange for faster convergence
Four Decompositions: Differences

  Algorithm      Features                                Parameters
  Partial-dual   Effective capacity                      1
  Primal-dual    Effective capacity                      3
  Full-dual      Effective capacity, allows packet loss  2
  Primal-driven  Direct s update                         1

The decompositions differ in how the link and source variables are updated. Iterative updates contain stepsizes, which affect the dynamics of the distributed algorithms.
TRUMP versus File Size
[Figure: achieved aggregate rates (%) versus average file size (Mbps)]
TRUMP's performance is independent of the variance of file sizes, and TRUMP is better for large files.
Delay-Sensitive Traffic Minimizes Delay

  minimize   ∑_i ∑_j ∑_l H_lj^i z_j^i (p_l + f(u_l))
  subject to u_l = ∑_i R_li x_i / c_l
             ∑_j z_j^i ≥ x_i^D            (traffic demand)
  variables  z

Links are indexed by l, and p_l is the propagation delay of link l. The cost function f(u_l) represents a penalty for long queues (shown as a curve of cost versus link utilization u_l, rising toward u_l = 1).
Voice Traffic: R-Factor Constants
  R = R_a − α_1 δ − α_2 (δ − α_3) H − β_1 − β_2 log(1 + β_3 φ)
where δ is the end-to-end delay, φ is the packet loss, H is a step function that is 1 when δ exceeds α_3 and 0 otherwise, and R_a, α_1, α_2, α_3, β_1, β_2, β_3 are constants.

  R-factor:       50-60  60-70  70-80   80-90  90-100
  Voice quality:  poor   low    medium  high   best
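A small helper evaluating this formula; every constant in the example call is a hypothetical placeholder rather than a value from the talk.

```python
import math

def r_factor(delay_ms, loss_fraction, Ra, a1, a2, a3, b1, b2, b3):
    """R = Ra - a1*d - a2*(d - a3)*H(d - a3) - b1 - b2*log(1 + b3*loss)."""
    step = 1.0 if delay_ms > a3 else 0.0          # H(delay - a3)
    delay_penalty = a1 * delay_ms + a2 * (delay_ms - a3) * step
    loss_penalty = b1 + b2 * math.log(1.0 + b3 * loss_fraction)
    return Ra - delay_penalty - loss_penalty

# Illustrative call with made-up constants:
print(r_factor(delay_ms=150, loss_fraction=0.01,
               Ra=94.0, a1=0.02, a2=0.1, a3=177.0, b1=0.0, b2=30.0, b3=15.0))
```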