1
Bilal Gonen, University of Alaska Anchorage; Murat Yuksel, University of Nevada, Reno
2
Many parameters to set in a network.
Each may significantly change the overall network performance.
Fast response to failures is necessary.
Automated configuration and management is much needed in practice.
Can be cast as an optimization problem.
3
Routers flood information to learn the topology.
Determine the "next hop" to reach other routers.
Compute shortest paths based on link weights.
Link weights are configured by the network operator.
[Figure: example topology with all link weights set to 5.]
source: http://www.cs.princeton.edu/courses/archive/spr06/cos461/
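As an illustration of the shortest-path computation above, here is a minimal sketch in Python; the topology and node names are made up for the example, and real routers run a link-state protocol such as OSPF rather than this code.

```python
import heapq

def shortest_paths(graph, source):
    """Dijkstra's algorithm over operator-configured link weights.
    graph: {node: {neighbor: weight}}; returns distance and previous-hop maps."""
    dist = {source: 0}
    prev = {}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return dist, prev

# Hypothetical four-router topology with uniform weight 5, as in the figure.
topo = {
    "A": {"B": 5, "C": 5},
    "B": {"A": 5, "D": 5},
    "C": {"A": 5, "D": 5},
    "D": {"B": 5, "C": 5},
}
print(shortest_paths(topo, "A"))
```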
4
[Figure: the same topology with sources (S) and destinations (D), a congested link, and some link weights changed from 5 to 2.]
5
How to set link weights:
Inversely proportional to link capacity
Proportional to propagation delay
Network-wide optimization based on traffic
source: http://www.cs.princeton.edu/courses/archive/spr06/cos461/
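As a concrete example of the first heuristic, a minimal sketch of a capacity-based weight; the 10^8 bps reference bandwidth matches the common OSPF default cost formula, and the link capacities below are invented for illustration.

```python
REFERENCE_BW = 1e8  # 100 Mbps reference bandwidth (common OSPF default)

def link_weight(link_bw_bps):
    """Weight inversely proportional to link capacity, floored at 1."""
    return max(1, int(REFERENCE_BW / link_bw_bps))

# Hypothetical links: 10 Mbps, 100 Mbps, 1 Gbps
for bw in (1e7, 1e8, 1e9):
    print(f"{bw:.0e} bps -> weight {link_weight(bw)}")
```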
6
Empirical approach: rely on the network administrator's experience.
Problems: error-prone, not scalable.
source: http://www.cs.princeton.edu/courses/archive/spr06/cos461/
7
Given a certain offered traffic load matrix, distribute the traffic over the network to achieve the optimal resource utilization.
source: http://www.cs.princeton.edu/courses/archive/spr06/cos461/
8
[Figure: black-box system with inputs Parameter 1 ... Parameter n and an output System Response.]
Map the network to a black-box optimization framework and let the optimization algorithm search for the best configuration.
Black-box optimization searches through the response surface to find an optimum or near-optimum sample.
Key question: how to accurately characterize the response surface with a minimum number of experiments?
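To make the framing concrete, a minimal sketch of the black-box interface; the quadratic response function below is a toy stand-in for a real network experiment (which this work runs in NS-2), and the random search is only the simplest possible baseline, not the paper's algorithm.

```python
import random

def run_experiment(params):
    """Black box: maps a parameter vector to a scalar system response.
    A toy function stands in for one real network experiment here."""
    return sum((p - 3.0) ** 2 for p in params)

def random_search(n_params, budget, lo=1.0, hi=10.0):
    """Spend the experiment budget on uniformly random samples; keep the best."""
    best_x, best_y = None, float("inf")
    for _ in range(budget):
        x = [random.uniform(lo, hi) for _ in range(n_params)]
        y = run_experiment(x)
        if y < best_y:
            best_x, best_y = x, y
    return best_x, best_y

print(random_search(n_params=5, budget=100))
```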
9
Can we try all possibilities (exhaustive search)?
Assume 1 ≤ Xi ≤ 10 for i = 1..5 with a step size of 1: 10^5 = 100,000 combinations.
If one trial takes 1 sec, then 100,000 sec ≈ 28 hours.
For 10 parameters, 10^10 sec ≈ 317 years.
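A quick sanity check of these numbers (pure arithmetic, no experiments are actually run):

```python
def exhaustive_cost(n_params, values_per_param=10, sec_per_trial=1.0):
    """Grid size and wall-clock hours for a full parameter sweep."""
    trials = values_per_param ** n_params
    return trials, trials * sec_per_trial / 3600.0

print(exhaustive_cost(5))   # (100000, ~27.8 hours)
print(exhaustive_cost(10))  # (10000000000, ~2.78 million hours, i.e. ~317 years)
```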
10
[Figure: PTAS problem overview: several search algorithms (Algorithm #1, #2, #3) propose parameter adjustments to the black-box system; a comparator tracks the current response against the best-so-far metric, and a budget allocator distributes the number of experiments among the algorithms.]
11
[Figure: PTAS architecture (repeated).]
12
An algorithm may be good at one class of problems, but its performance will suffer on other classes.
NFL theorem: a general-purpose universal algorithm is impossible.
Key question: how to design an evolutionary hybrid search algorithm? Search for the best search.
Roulette wheel: punish the bad algorithms and reward the good ones.
Trans-algorithmic: transfer the best-so-far among the algorithms (see the sketch below).
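A minimal sketch of these two ideas combined; the two candidate "algorithms", the reward/punish factors, and the toy response function are all placeholders, not the exact PTAS procedure.

```python
import random

def response(x):
    """Toy stand-in for one network experiment."""
    return sum((xi - 3.0) ** 2 for xi in x)

def random_restart(x):   # exploration: sample a fresh random point
    return [random.uniform(1, 10) for _ in x]

def local_step(x):       # exploitation: small perturbation around the current point
    return [min(10.0, max(1.0, xi + random.uniform(-0.5, 0.5))) for xi in x]

algorithms = [random_restart, local_step]
weights = [1.0, 1.0]                      # roulette-wheel weights
best_x = [random.uniform(1, 10) for _ in range(5)]
best_y = response(best_x)

for _ in range(200):                      # total experiment budget
    i = random.choices(range(len(algorithms)), weights=weights)[0]
    x = algorithms[i](best_x)             # trans-algorithmic: each algorithm starts from the best-so-far
    y = response(x)
    if y < best_y:
        best_x, best_y = x, y
        weights[i] *= 1.1                 # reward the algorithm that improved the metric
    else:
        weights[i] *= 0.95                # punish the one that did not
print(best_y)
```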
13
Exploration techniques: random sampling, random walk, genetic algorithm.
Exploitation techniques: downhill simplex, hill climbing, simulated annealing.
Hybrid: Recursive Random Search (RRS), T. Ye et al., ToN 2009.
16
[Figure: PTAS architecture (repeated), showing the budget allocator and comparator around the black-box system.]
17
Total budget = 1,500 experiments; round budget = 300, split among three algorithms. The winner of each round is rewarded with a larger share in the next round:
Round 1: Algorithm 1 = 100, Algorithm 2 = 100, Algorithm 3 = 100
Round 2: Algorithm 1 = 110, Algorithm 2 = 98, Algorithm 3 = 92
Round 3: Algorithm 1 = 106, Algorithm 2 = 90, Algorithm 3 = 104
Round 4: Algorithm 1 = 120, Algorithm 2 = 92, Algorithm 3 = 88
Round 5: Algorithm 1 = 110, Algorithm 2 = 102, Algorithm 3 = 88
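A minimal sketch of how such a per-round reallocation could work; the fixed 10% reward fraction is an assumption for illustration (the deck shows only the resulting budgets, not the rule that produced them).

```python
def reallocate(budgets, winner, reward_frac=0.10):
    """Move a fraction of each losing algorithm's budget to the round winner,
    keeping the total round budget constant."""
    new = list(budgets)
    taken = 0
    for i, b in enumerate(budgets):
        if i != winner:
            cut = int(b * reward_frac)
            new[i] -= cut
            taken += cut
    new[winner] += taken
    assert sum(new) == sum(budgets)
    return new

round1 = [100, 100, 100]              # equal split of the 300-experiment round budget
print(reallocate(round1, winner=0))   # e.g. [120, 90, 90] if Algorithm 1 wins round 1
```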
18
RRS is the winner in the 1st round; GA is second; SA is third.
Accordingly, RRS is rewarded in the 2nd round, GA is punished, and SA is punished more.
SA is punished in the 2nd round but rewarded in the 3rd round.
19
Network Simulator 2 (NS-2): we converted our PTAS code into an NS-2 agent and integrated it into NS-2.
Optimization objective: minimize the overall packet drop rate and thus maximize aggregate network throughput.
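For illustration, a minimal sketch of computing such a drop-rate objective by post-processing an NS-2 trace; this assumes the standard wired trace format whose first field is the event type ('r' = receive, 'd' = drop), whereas the actual agent evaluates the metric inside the simulator.

```python
def drop_rate(trace_path):
    """Overall packet drop rate from an NS-2 trace: drops / (drops + receives)."""
    drops = received = 0
    with open(trace_path) as trace:
        for line in trace:
            fields = line.split(maxsplit=1)
            if not fields:
                continue
            if fields[0] == "d":
                drops += 1
            elif fields[0] == "r":
                received += 1
    total = drops + received
    return drops / total if total else 0.0

# print(drop_rate("out.tr"))  # hypothetical trace file name
```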
20
The topology has 22 nodes and 37 links. We used 7 nodes as edge nodes and composed 6 × 7 = 42 TCP flows between them.
Simulation metric: number of bytes received at the sink nodes of the TCP flows.
We repeated the optimization process 30 times and report the average throughput achieved by each algorithm with 80% confidence intervals.
IEEE GLOBECOM Workshops, 2011
21
Two options: optimization using a separate model of the system, or optimization on the real-time running system.
The model-based approach assumes the system does not change frequently (e.g., backbone networks). It fails when the network is dynamic, with high failure rates or a variable demand profile, and it is not practical to model such highly variable networks with simulations.
22
A two-phase approach: the optimizer alternates between a search phase and a no-search phase.
Search interval = 5,000 sec; simulation duration = 13,000 sec.
[Figure: timeline from 0 to 13,000 sec with search phases at roughly 5,000 to 6,500 sec and 10,000 to 11,500 sec.]
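A minimal sketch of such a schedule; the 1,500 sec phase length and the exact phase boundaries are read off the timeline above and may not match the actual implementation.

```python
def in_search_phase(t, search_interval=5_000, search_len=1_500, duration=13_000):
    """True if simulation time t (sec) falls inside a search phase.
    A search phase of length search_len opens at each multiple of search_interval
    (skipping t = 0, so the first one starts at 5,000 sec)."""
    if t <= 0 or t >= duration:
        return False
    return t >= search_interval and (t % search_interval) < search_len

for t in (2_000, 5_200, 8_000, 10_500, 12_900):
    print(t, in_search_phase(t))  # only 5,200 and 10,500 fall in a search phase
```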
23
Key questions:
How frequently should we enter the search phase to achieve a reasonable improvement using in-situ trials on the real network?
How much disturbance is caused to the system while the optimizer is searching for better configuration parameters?
24
RRS (avg. throughput = 7,698.24)
PTAS (avg. throughput = 7,708.21)
GA (avg. throughput = 7,596.68)
SA (avg. throughput = 7,322.22)
25
Comparison of PTAS with RRS, SA, and GA using different search-phase lengths and different numbers of rounds for PTAS.
Although not always, PTAS outperforms the others on average.
26
There is a need for automated configuration and management of highly dynamic networks.
PTAS can run with no system model or with a separate system model.
We explore some of the key tradeoffs:
How frequently the search should be done
How long the search phase should be
How much worse the search phase can temporarily make system performance due to its trials
We apply PTAS and three other search algorithms to:
Six well-known objective functions
A network problem on realistic ISP topologies
A wireless ad hoc network