1
Energy-Aware Wireless Scheduling with Near Optimal Backlog and Convergence Time Tradeoffs
Michael J. Neely, University of Southern California
INFOCOM 2015, Hong Kong
http://www-bcf.usc.edu/~mjneely
[Figure: a queue with arrivals A(t), backlog Q(t), and service μ(t)]
2
A Single Wireless Link
[Figure: a queue with arrivals A(t), backlog Q(t), and service μ(t)]
Q(t+1) = max[Q(t) + A(t) − μ(t), 0]
Uncontrolled: A(t) = random arrivals with rate λ
Controlled: μ(t) = bits served [depends on power use and channel state]
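As a quick illustration of the queue recursion, here is a minimal simulation sketch; the Bernoulli arrival rate and the fixed service rate below are assumptions for the example, not values from the talk (the talk's model later sets μ(t) = p(t)ω(t)).

```python
import random

# Minimal sketch of Q(t+1) = max[Q(t) + A(t) - mu(t), 0].
# lam and the fixed service rate are illustrative assumptions.
T = 10_000                                   # number of slots
lam = 0.3                                    # arrival rate λ (assumed)
Q = 0.0                                      # backlog, Q(0) = 0
for t in range(T):
    A = 1 if random.random() < lam else 0    # A(t): Bernoulli(λ) arrivals
    mu = 0.5                                 # μ(t): fixed service for this sketch
    Q = max(Q + A - mu, 0.0)                 # the queue update equation
print(f"Q({T}) = {Q:.2f}")
```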
5
Random Channel States ω(t)
[Figure: a sample path of ω(t) over slots t]
Observe ω(t) on slot t.
ω(t) ∈ {0, ω_1, ω_2, …, ω_M}
ω(t) is i.i.d. over slots, with π(ω_k) = Pr[ω(t) = ω_k].
The probabilities π(ω_k) are unknown.
6
Opportunistic Power Allocation
p(t) = power decision on slot t [based on observation of ω(t)]
Assume p(t) ∈ {0, 1} (“on” or “off”), so μ(t) = p(t)ω(t).
Time average expectations: p̄(t) = (1/t) ∑_{τ=0}^{t−1} E[p(τ)], and similarly for μ̄(t).
7
Stochastic Optimization Problem
Minimize: lim_{t→∞} p̄(t)
Subject to: lim_{t→∞} μ̄(t) ≥ λ
p(t) ∈ {0, 1} for all slots t
Define p* = ergodic optimal average power.
Fix ε > 0. An algorithm gives an ε-approximation on slot t if:
p̄(t) ≤ p* + ε and μ̄(t) ≥ λ − ε
Challenge: the channel probabilities are unknown!
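For intuition about p*, note that when the probabilities π(ω_k) are known, the optimum has a simple greedy form: each transmission costs one unit of power and delivers ω(t) bits, so a genie would transmit on the largest channel states first and time-share on the marginal state. A minimal sketch; the states, probabilities, and λ below are hypothetical example values:

```python
# Greedy computation of p* when the channel distribution is known.
# All numbers are illustrative assumptions; λ is assumed feasible.
states = [3.0, 2.0, 1.0]            # ω values, sorted in decreasing order
probs  = [0.2, 0.3, 0.5]            # π(ω_k) = Pr[ω(t) = ω_k]
lam    = 1.0                        # required throughput λ

rate_needed = lam
p_star = 0.0
for omega, pi in zip(states, probs):
    if rate_needed <= 0:
        break
    full_rate = pi * omega          # rate from always transmitting on this state
    if full_rate <= rate_needed:
        p_star += pi                # always transmit when ω(t) = omega
        rate_needed -= full_rate
    else:
        frac = rate_needed / full_rate
        p_star += frac * pi         # time-share on the marginal state
        rate_needed = 0.0
print(f"p* = {p_star:.3f}")         # minimum average power meeting λ
```

This greedy construction traces out exactly the piecewise-linear power-rate curve h(μ) that appears later in the talk.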
8
Prior Algorithms and Analysis
(entries give average queue size E[Q], then convergence time T_ε)
Neely '03, '06 (DPP): O(1/ε), O(1/ε²)
Georgiadis et al. '06; Neely, Modiano, Li '05, '08: O(1/ε), O(1/ε²)
Neely '07: O(log(1/ε)), O(1/ε²)
Huang et al. '13 (DPP-LIFO): O(log²(1/ε)), O(1/ε²)
Li, Li, Eryilmaz '13, '15: O(1/ε), O(1/ε²) (with additional sample-path results)
Huang et al. '14: O(1/ε^{2/3}), O(1/ε^{1+2/3})
10
Main Results
1. Lower Bound: No algorithm can achieve convergence time better than Ω(1/ε).
2. Upper Bound: A tighter analysis shows that the Drift-Plus-Penalty (DPP) algorithm achieves:
Convergence time: T_ε = O(log(1/ε)/ε)
Average queue size: E[Q] ≤ O(log(1/ε))
11
Part 1: Ω(1/ε) Lower Bound for All Algorithms
Example system: ω(t) ∈ {1, 2, 3}, with Pr[ω(t) = 3], Pr[ω(t) = 2], Pr[ω(t) = 1] unknown.
Proof methodology:
Case 1: Pr[transmit | ω(0) = 2] > ½.
o Assume Pr[ω(t) = 3] = Pr[ω(t) = 2] = ½.
o Optimally compensate for the mistake on slot 0.
Case 2: Pr[transmit | ω(0) = 2] ≤ ½.
o Assume different probabilities.
o Optimally compensate for the mistake on slot 0.
12
Case 1: Fix λ = 1, ε > 0
[Figure: power E[p(t)] versus rate E[μ(t)], both axes from 0 to 1, showing the h(μ) curve, the optimal point X, and a region A containing (E[μ(0)], E[p(0)])]
(E[μ(0)], E[p(0)]) is in region A.
Optimal compensation requires time Ω(1/ε).
17
Part 2: Upper Bound
Channel states 0 < ω_1 < ω_2 < … < ω_M
General h(μ) curve (piecewise linear)
[Figure: power E[p(t)] versus rate E[μ(t)], showing the piecewise-linear h(μ) curve and the optimal point (λ, p*); adjacent vertices correspond to the threshold policies “transmit iff ω(t) ≥ ω_{k−1}” and “transmit iff ω(t) ≥ ω_k”]
19
Drift-Plus-Penalty Algorithm (DPP)
Δ(t) = Q(t+1)² − Q(t)²   [drift]
Observe ω(t), choose p(t) to minimize:
Δ(t) + V p(t)   [drift plus weighted penalty]
The algorithm reduces to a simple threshold rule:
p(t) = 1 if Q(t)ω(t) ≥ V
p(t) = 0 otherwise
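A minimal simulation sketch of this threshold rule; the channel distribution, λ = 1, V, and horizon below are assumptions for illustration, not the paper's experimental setup:

```python
import random

# DPP threshold rule on a single link; all parameters are illustrative.
states, probs = [1.0, 2.0, 3.0], [0.5, 0.3, 0.2]      # assumed channel distribution
V = 20.0                                              # drift-plus-penalty weight
T = 100_000
Q = sum_p = sum_mu = 0.0
for t in range(T):
    omega = random.choices(states, weights=probs)[0]  # observe ω(t)
    p = 1 if Q * omega >= V else 0                    # transmit iff Q(t)ω(t) ≥ V
    mu = p * omega
    Q = max(Q + 1 - mu, 0.0)                          # λ = 1: one arrival per slot
    sum_p += p
    sum_mu += mu
print(f"average power = {sum_p/T:.3f}, average rate = {sum_mu/T:.3f}")
```

With these assumed values the average rate should settle near λ = 1 while the average power approaches the corresponding p*; larger V trades a larger queue for average power closer to optimal, matching the tradeoff in the talk.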
21
Drift Analysis of DPP
Channel states 0 < ω_1 < ω_2 < … < ω_M
[Figure: Q(t) axis with thresholds 0 < V/ω_{k+1} < V/ω_k < V/ω_{k−1}; between consecutive thresholds the rule transmits iff ω(t) ≥ ω_k, with positive drift below the operating region and negative drift above it]
22
Useful Drift Lemma (with transients)
[Figure: a process Z(t) with negative drift −β pushing it toward 0]
Lemma: E[e^{rZ(t)}] ≤ D + (e^{rZ(0)} − D)ρ^t, where D is the “steady state” term, (e^{rZ(0)} − D)ρ^t is the “transient” term, and 0 < ρ < 1.
Apply 1: Z(t) = Q(t)
Apply 2: Z(t) = V/ω_k − Q(t)
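For reference, a standard exponential drift lemma of this shape can be stated as follows; this is paraphrased from the drift-analysis literature, and the exact conditions and constants in the paper may differ:

```latex
% Paraphrased statement; the constants r, D, \rho are generic.
% Assume bounded increments |Z(t+1) - Z(t)| <= \delta and negative drift
%   E[Z(t+1) - Z(t) | Z(t)] <= -\beta  whenever  Z(t) >= z^*.
% Then there exist r > 0, D > 0, and \rho \in (0,1) such that for all t:
\[
  \mathbb{E}\left[e^{rZ(t)}\right] \le
    \underbrace{D}_{\text{steady state}}
    + \underbrace{\bigl(e^{rZ(0)} - D\bigr)\rho^{t}}_{\text{transient}}
\]
```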
23
After a transient time of O(V):
[Figure: Q(t) axis with thresholds 0, V/ω_{k+1}, V/ω_k, V/ω_{k−1}, arrival rate λ, and drift directions; red intervals mark deviations away from the operating threshold]
Pr[Q(t) in the red intervals] = O(e^{−cV})
Choose V = log(1/ε), so Pr[red] = O(ε). (For example, ε = 0.01 gives V = log(100) ≈ 4.6.)
25
Analytical Result
[Figure: the time-average (rate, power) point converges toward (λ, p*) on the h(μ) curve]
The queue is stable, so E[μ̄] = λ + O(ε). The policy thus effectively time-shares between neighboring threshold policies, giving:
E[Q(t)] ≤ O(log(1/ε))
T_ε ≤ O(log(1/ε)/ε)
26
Simulation: E[p] versus queue size
27
Simulation: E[p] versus time
28
Non-ergodic simulation (adaptive to changes)
29
Conclusions
Fundamental lower bound on convergence time:
o Unknown probabilities
o A “Cramér-Rao”-like bound for controlled queues
Tighter drift analysis for the DPP algorithm:
o ε-approximation to optimal power
o Queue size O(log(1/ε)) [optimal]
o Convergence time O(log(1/ε)/ε) [near optimal]