Slide 1: Energy-Efficient Congestion Control
Opportunistically reduce link capacity to save energy.
Lingwen Gan (1), Anwar Walid (2), Steven Low (1); (1) Caltech, (2) Bell Labs.
Improve network energy efficiency by 1000 times!
Slide 2: Network links consume a lot of electricity
The electricity consumed by network links (fiber optics, copper cables) exceeds the electricity consumption of the United Kingdom.
Goal: reduce the electricity consumption of network links.
Slide 3: Exploit low link utilization
Network links are typically lightly utilized. What we do: dynamically manage link capacity.
Slide 4: Technologies to change link capacity
Link bundling: a router-to-router link is a bundle of 2~20 component links, and individual component links can be put to sleep.
Sleep mode [Gupta03].
Voltage and frequency (speed) scaling [Pillai01].
Slide 5: Linear power consumption
[Figure: power consumption (units) vs. number of active component links. Power grows roughly linearly with the number of active links, so deactivating component links trades reduced capacity for energy saving; a sketch of this model follows.]
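As a rough illustration of the linear power model above, here is a minimal Python sketch; the base power and per-link power values are hypothetical placeholders, not figures from the talk.

```python
def bundle_power(active_links, base_watts=50.0, watts_per_link=10.0):
    """Power drawn by a link bundle under a linear model: a fixed base
    cost plus a cost per active component link. base_watts and
    watts_per_link are illustrative values only."""
    return base_watts + watts_per_link * active_links

# Energy saving from putting half of a 20-link bundle to sleep.
full = bundle_power(20)   # all component links active
half = bundle_power(10)   # 10 component links asleep
print(f"saving: {full - half:.0f} W ({100 * (full - half) / full:.0f}%)")
```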
Slide 6: Outline
Challenge
Goals
Algorithm
Simulations
Slide 7: Challenge: interaction with TCP
Reducing link capacity increases congestion; TCP reacts to the congestion by cutting its sending rate, which reduces traffic throughput.
Slide 8: Two approaches
Adjust capacity slowly, at the routing time scale [He06][Fisher10].
Adjust capacity at the packet time scale [Francini10].
This work: adjust capacity fast but TCP-friendly, at the flow time scale, giving fast response with small overhead.
Slide 9: Goals
Design a Dynamic Bandwidth Adjustment (DBA) algorithm such that:
1) It operates at the flow time scale.
2) It does not reduce throughput.
3) It saves as much energy as possible.
4) Throughput does not oscillate (stability).
Slide 10: Recall TCP at steady state
At steady state, a TCP source's transmission rate is a decreasing function of the packet-loss probability it sees: the higher the loss probability, the lower the rate (an illustrative formula follows).
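For concreteness, one standard steady-state model is the square-root law for TCP Reno; the talk does not specify which TCP model it uses, so this is only an illustration of the decreasing rate-vs-loss relationship.

```latex
% Square-root model for TCP Reno steady-state rate (illustrative, not
% necessarily the model used in the talk):
% x = rate, q = end-to-end loss probability, T = round-trip time,
% MSS = maximum segment size.
x \;\approx\; \frac{\mathrm{MSS}}{T}\sqrt{\frac{3}{2q}}
```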
Slide 11: Recall Random Early Discard (RED)
A link has incoming traffic, a capacity, and a buffer. RED sets the packet-drop probability as an increasing function of the buffer size (a sketch of the classic drop curve follows).
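A minimal Python sketch of the classic RED drop curve; the thresholds and maximum drop probability below are illustrative defaults, not parameters from the talk.

```python
def red_drop_probability(queue_size, min_th=20.0, max_th=80.0, max_p=0.1):
    """Classic RED: no drops below min_th, drop probability rising linearly
    from 0 to max_p between min_th and max_th, and certain drop above max_th.
    In a real implementation queue_size would be an EWMA of the queue."""
    if queue_size < min_th:
        return 0.0
    if queue_size < max_th:
        return max_p * (queue_size - min_th) / (max_th - min_th)
    return 1.0

print(red_drop_probability(10))   # 0.0  (below min_th)
print(red_drop_probability(50))   # 0.05 (halfway between thresholds)
```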
Slide 12: Recall the network solves NUM
Notation: x = source transmission rates, y = Rx = throughput on the links (R is the routing matrix), c = link capacities; the equilibrium link throughputs are the ideal throughputs, and the capacities needed to carry them are the ideal capacities.
Thm [Kelly98, Low99]: the TCP/RED network model solves the Network Utility Maximization problem
  maximize sum_i U_i(x_i) subject to Rx <= c, x >= 0,
where U_i is the utility function implied by the source's TCP algorithm (a worked toy example follows).
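As a worked toy instance of NUM (my own example, not from the talk): with logarithmic utilities and N identical flows sharing one link of capacity c, the optimum splits the capacity equally.

```latex
% NUM with logarithmic utilities on a single shared link (illustrative):
\max_{x \ge 0} \; \sum_{i=1}^{N} \log x_i
\quad \text{subject to} \quad \sum_{i=1}^{N} x_i \le c
% By symmetry and concavity the optimum is x_i^* = c/N for every flow,
% i.e. the proportionally fair (equal) allocation.
```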
Slide 13: Bottleneck & non-bottleneck links
Bottleneck link: non-empty buffer and positive packet-drop probability; do not reduce its capacity.
Non-bottleneck link: (nearly) empty buffer and zero packet-drop probability; reduce its capacity while keeping zero packet drops (an illustrative classification follows).
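A purely illustrative classification rule in Python; the talk does not state an explicit test, so the threshold below is a hypothetical placeholder.

```python
def classify_link(buffer_size, drop_probability, buffer_eps=1.0):
    """Illustrative only: treat a link with a persistent buffer backlog and
    non-zero drops as a bottleneck (keep its capacity); otherwise its
    capacity can be reduced while packet drops stay at zero."""
    if buffer_size > buffer_eps and drop_probability > 0.0:
        return "bottleneck: do not reduce capacity"
    return "non-bottleneck: reduce capacity, keep zero packet drops"

print(classify_link(buffer_size=40.0, drop_probability=0.02))
print(classify_link(buffer_size=0.0, drop_probability=0.0))
```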
Slide 14: Keep the buffer at the right place
[Figure: the packet-drop probability vs. buffer size curve, with a target buffer size marked; DBA aims to keep each link's buffer at this target.]
Slide 15: DBA Algorithm (for each link)
1. Pick a target delay satisfying a design condition [formula not reproduced here].
2. At any time, set the target buffer size from the current capacity and the target delay, and update the capacity according to the gap between the current buffer size and the target buffer size [update formula not reproduced here; a rough sketch follows].
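The slide gives the exact target-delay condition and capacity-update rule as formulas that are not reproduced here. The following is only a hypothetical Python sketch of a controller in that spirit: the target buffer is taken as capacity times target delay, and capacity is nudged in proportion to the buffer's deviation from that target. The gain, bounds, and functional form are my assumptions, not the published DBA rule.

```python
def dba_update(capacity, buffer_size, target_delay,
               gain=0.1, c_min=5.0, c_max=50.0):
    """Hypothetical DBA-style per-link update (not the exact published rule).
    target_buffer = capacity * target_delay; if the actual buffer exceeds the
    target, grow capacity, otherwise shrink it, within the hardware range
    [c_min, c_max]."""
    target_buffer = capacity * target_delay
    capacity += gain * (buffer_size - target_buffer)
    return min(max(capacity, c_min), c_max)

# Example: buffer persistently above target -> capacity ramps up.
c = 10.0
for _ in range(5):
    c = dba_update(c, buffer_size=25.0, target_delay=1.0)
print(round(c, 2))
```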
Slide 16: Zero throughput reduction & maximum energy saving
Thm: Under the current network architecture (the TCP/RED model above), the network running the DBA algorithm converges to the original target throughput (zero throughput reduction) with minimum energy consumption (maximum energy saving).
Slide 17: Model network delay
The network is a feedback loop: TCP sources turn packet loss into transmission rates, and links turn incoming traffic into packet drops. Without network delay, we obtain global stability. With network delay, the feedback signals are delayed, so stability must be re-examined.
Slide 18: Local stability under network delay
Thm: The network with DBA is locally asymptotically stable in the presence of network delay, provided some mild conditions hold.
Slide 19: Goals (recap)
Dynamic Bandwidth Adjustment (DBA) algorithm such that:
1) It operates at the flow time scale.
2) It does not reduce throughput.
3) It saves as much energy as possible.
4) Throughput does not oscillate (stability).
The analysis uses standard simplifying assumptions, so we verify with ns-2 simulations; ns-2 is a standard and accurate network simulator.
Slide 20: Simulation setup
[Topology: TCP Sources 1-20 -> Node 1 -> Node 2 -> TCP sinks 1-20, with 1 Mb/s source links.]
Compare two configurations of the Node 1 - Node 2 link: static (fixed 50 Mb/s) vs. DBA (5~50 Mb/s).
20 additional TCP flows come and go abruptly during the run (a back-of-envelope energy comparison follows).
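A back-of-envelope Python comparison for this setup, assuming DBA capacity tracks the offered load (20 flows at roughly 1 Mb/s each, then 40 flows) and the linear power model from the earlier sketch; all numbers are illustrative assumptions, not the ns-2 results from the talk.

```python
def power(capacity_mbps, base_watts=50.0, watts_per_mbps=2.0):
    # Linear power model; parameters are illustrative placeholders.
    return base_watts + watts_per_mbps * capacity_mbps

duration_low, duration_high = 60.0, 60.0   # seconds with 20 vs 40 active flows
static_energy = power(50.0) * (duration_low + duration_high)
dba_energy = power(20.0) * duration_low + power(40.0) * duration_high
print(f"energy saving: {100 * (1 - dba_energy / static_energy):.0f}%")
```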
Slide 21: Zero throughput reduction
[Figure: throughput vs. time (s) for the static and DBA configurations. When TCP flows come, static shows an instant increase while DBA shows a brief initial dip followed by fast recovery; when flows go, and at all other times, DBA preserves the static throughput. Throughput does not oscillate.]
Slide 22: Maximum energy saving
[Figure: capacity vs. time (s) for static and DBA. DBA capacity tracks the throughput: it ramps up fast when TCP flows come (only a short transient) and ramps down slowly when flows go, while static stays at full capacity.]
Slide 23: Concluding remarks
Network links are lightly utilized, so capacity can be reduced to save energy.
We propose DBA to adjust link capacity at the TCP flow time scale.
Optimality: zero throughput reduction and maximum energy saving.
Stability: locally asymptotically stable.
Verified by ns-2 simulations.