The Scaling Law of SNR-Monitoring in Dynamic Wireless Networks
Soung Chang Liew, Hongyi Yao, Xiaohang Li
Channel Gain or Signal-to-Noise Ratio (SNR)
The channel gain H of a wireless channel (S, R) is defined by Y = H X, where X is the signal sent by S and Y is the signal received by R. (Channel model: S transmits to R over a channel with gain H.)
Channel Gain Monitoring
In a wireless network, knowledge of the channel gains is needed to design high-performance communication schemes. Due to fading, node mobility, and node power instability, channel gains vary with time. Tracking and estimating the channel gains of wireless channels is therefore fundamentally important. This work seeks to answer the following question: what is the minimum communication overhead such that all network channels can be tracked?
Toy Example
Three sources S_1, S_2, S_3 transmit to a receiver R over channels with gains H_1, H_2, H_3. Prior knowledge: H_1 = 1, H_2 = 1, H_3 = 1. Update: there exists i in {1, 2, 3} such that H_i varied. Monitoring objective: the receiver R wants to recover i and H_i.
Toy Example: Recovering i and H_i by Unit Probing
Here H_i is unknown and H_j = 1 for j ≠ i. In time slot s (s = 1, 2, 3), only S_s sends a unit probe, so R observes H_s directly. Three time slots are required for probing.
Toy Example (Differential Group Probing)
Here H_i is unknown and H_j = 1 for j ≠ i. In time slot 1, S_1, S_2, S_3 send probes 1, 1, 1 and R receives Y[1] = 3 + (H_i - 1). In time slot 2, they send probes 1, 2, 3 and R receives Y[2] = 6 + (H_i - 1)i. Using the a priori knowledge of the channel gains, R computes the predicted samples [Y'[1], Y'[2]] = [3, 6] and then the difference [Y[1], Y[2]] - [Y'[1], Y'[2]] = (H_i - 1)[1, i]. Since the directions [1, 1], [1, 2] and [1, 3] are pairwise linearly independent, R can identify i from the direction of the difference and then recover H_i. One time slot is saved!
Motivation Raised by the Toy Example
Unit probing vs. differential group probing. Unit probing (scheduling interference): since we do not know which channel varied, all channels must be sampled one by one. Differential group probing (embracing interference): all channels are sampled simultaneously to exploit the a priori knowledge. Question: does differential group probing suffice to achieve the minimum communication overhead? Answer: YES!
Outline of the Talk
Fundamental setting: multiple transmitters and one receiver. The scaling law of tracking all channel gains. Achieving the scaling law by ADMOT. General setting: multiple transmitters, relay nodes and receivers. The scaling law of the fundamental setting still holds. Achieving the scaling law by ADMOT-GENERAL. Simulation results.
Fundamental Setting
Multiple transmitters and one receiver: sources S_1, S_2, ..., S_n transmit to R over channels with gains H_1, H_2, ..., H_n. For S_i, the probe in the s'th time slot is X_i[s]. R receives Y[s] = Σ_{i=1..n} H_i X_i[s] + W[s], where W[s] is the receiver noise. Definition (State): the state H is a length-n vector whose i'th component equals H_i. The vector H' is the a priori knowledge of H preserved by R.
State Variation
The state variation H - H' is said to be approx-k-sparse if there are at most k "significant" nonzero components in H - H'. Practical interpretation: an approx-k-sparse state variation means that at most k channels suffer significant variations, while the variations of the other channels are negligible. Details about "approx" can be found in paper [1].
Main Theorem
Theorem: when the state variation H - H' is approx-k-sparse, we have:
Scaling law: at least Ω(k log(n/k)) time slots are required to reliably estimate all n channels.
Achievability: there exists a monitoring scheme using O(k log(n/k)) time slots, such that R can estimate all n channels in a reliable and computationally efficient manner.
Proof Idea of the Scaling Law
Estimating H is equivalent to estimating the variation difference H - H'. Due to the nature of wireless communications, each time slot's communication provides only one linear sample of H - H'. By the results of Ba, Indyk, Price, and Woodruff [2], at least Ω(k log(n/k)) linear samples are required to reliably recover an approx-k-sparse vector H - H'.
Achieving the Scaling Law by ADMOT
Systematic view of ADMOT (block diagram omitted). Core techniques in ADMOT: differential group probing + compressive sensing.
The Training Data of ADMOT
The N x n matrix Ψ constitutes the training data of ADMOT. Here, N is the maximum number of time slots allowed by ADMOT, and n is the number of transmitters. Each component of Ψ is i.i.d. chosen from {-1, +1} with equal probability. The i'th column of Ψ is the training data of transmitter S_i: in the s'th time slot, S_i sends the probe X_i[s] = Ψ[s][i].
Construction of ADMOT: ADMOT(m, H')
Variables initialization: H* is the estimate of H. The vector Y is of dimension m. The matrix Ψ_m consists of rows 1, 2, ..., m of Ψ.
Step A (Probing): for s = 1, 2, ..., m, in the s'th time slot, each S_i (i in {1, 2, ..., n}) sends Ψ_m[s][i], and receiver R sets Y[s] (i.e., the s'th component of Y) to the received sample. Thus Y[s] = Σ_{i=1..n} H_i Ψ_m[s][i] + W[s], i.e., Y = Ψ_m H + W, where W is the noise vector.
Construction of ADMOT: ADMOT(m, H'), continued from the previous slide
Step B (Computing differences): receiver R computes Y_Δ = Y - Ψ_m H'.
Step C (Norm-1 sparse recovery): receiver R finds the solution E* of the following convex program: minimize ||E||_1 subject to ||Ψ_m E - Y_Δ||_2 ≤ ε (the standard noise-tolerant constraint of [3]).
Step D (Estimating): receiver R estimates H as H* = H' + E*.
Step E: terminate ADMOT.
Comments 1 on ADMOT
If H - H' is approx-k-sparse, then by the results of compressive sensing [3], E* is a reliable estimate of H - H' provided that m = C k log(n/k) for a constant C. This tightly matches the scaling law!
Comments 2 on ADMOT
Would error be propagated? Yes case: D_i is sparse, and D_i^e is an estimate of D_i. No case: D_i^a is "almost" sparse, and D_i^e is an estimate of D_i^a.
Comments 3 for ADMOT How to deal with the case where the sparsity parameter k is not known? Interactive estimation.
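The slides do not spell out the interactive estimation. One plausible reading, stated purely as an assumption and not as the authors' procedure, is to grow the number of probing slots geometrically and stop once successive estimates agree; a sketch reusing the admot function above:

```python
import numpy as np

def admot_unknown_k(H_prior, Psi, probe, noise_bound, m0=8, tol=1e-3):
    """Heuristic stand-in for an unknown sparsity k: double the number of
    probing slots until the estimate stabilizes. In practice noise_bound
    would also be rescaled with m; this sketch keeps it fixed."""
    N = Psi.shape[0]
    H_prev = admot(m0, H_prior, Psi, probe, noise_bound)
    m = 2 * m0
    while m <= N:
        H_est = admot(m, H_prior, Psi, probe, noise_bound)
        if np.linalg.norm(H_est - H_prev) <= tol:   # successive estimates agree
            return H_est
        H_prev, m = H_est, 2 * m
    return H_prev                                   # fall back to the largest m tried
```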
Outline of the Talk
Fundamental setting: multiple transmitters and one receiver. The scaling law of tracking all channel gains. Achieving the scaling law by ADMOT. General setting: multiple transmitters, relay nodes and receivers. The scaling law of the fundamental setting still holds. Achieving the scaling law by ADMOT-GENERAL. Simulation results.
Simplified Model
The challenge of a general communication network arises from the existence of nodes that can act as both sources and receivers. For simplicity, we consider a network with nodes V = {v_1, v_2, ..., v_n}, in which each node v_i in V wants to estimate the channel (v_j, v_i) for each j = 1, 2, ..., n: a complete network! Constraint: no node in V can transmit and receive in the same time slot.
The Scaling Law of the General Setting
Assume that for each node v_i in V, the incoming channels of v_i suffer an approx-k-sparse variation. Directly applying the scaling law of the single-receiver scenario, at least Ω(k log(n/k)) time slots are required. Surprisingly, this scaling law is also tight for general communication networks.
ADMOT-GENERAL
We construct ADMOT-GENERAL to achieve an overhead of C' k log(n/k) time slots for a constant C'. The N x n matrix Ψ constitutes the training data. Each component of Ψ is i.i.d. chosen from {0, -1, +1} with probabilities {1/2, 1/4, 1/4}. The i'th column of Ψ is the training data of v_i.
ADMOT-GENERAL
ADMOT-GENERAL runs for m time slots. In the s'th time slot, node v_i receives if Ψ[s][i] = 0; otherwise, v_i sends Ψ[s][i] in that slot. In the end, with large probability (Chernoff bound), each node v_i has received at least m/3 samples. Let the vector Y_i consist of the received data of v_i, and let H_i be the vector of all incoming channel gains of v_i. Each component of Y_i is a linear sample (with noise) of H_i; that is, Y_i = Ψ_i H_i + W_i, where Ψ_i consists of at least m/3 rows of Ψ.
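A minimal sketch of this transmit/receive schedule (NumPy assumed; reading a zero training entry as "listen in this slot" is our interpretation of the slide's split, and the function names are ours):

```python
import numpy as np

def make_general_training_matrix(N, n, seed=0):
    """N x n training matrix with i.i.d. entries from {0, -1, +1}
    drawn with probabilities {1/2, 1/4, 1/4}."""
    rng = np.random.default_rng(seed)
    return rng.choice([0, -1, 1], size=(N, n), p=[0.5, 0.25, 0.25])

def schedule(Psi, i, m):
    """Split the first m slots for node v_i: it listens whenever its own
    training entry is 0 and transmits that entry otherwise. The listening
    slots provide the rows of Psi_i used to form Y_i."""
    col = Psi[:m, i]
    listen = np.flatnonzero(col == 0)     # slots used to collect Y_i
    transmit = np.flatnonzero(col != 0)   # slots in which v_i probes
    return listen, transmit
```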
ADMOT-GENERAL
Node v_i computes the difference Y_i - Ψ_i H_i' using its a priori knowledge H_i' of its incoming channel gains. Note that each component of Ψ_i is i.i.d. sampled from {0, -1/2, 1/2} with probabilities {0.5, 0.25, 0.25}, and these entries are therefore sub-Gaussian ensembles. An approx-k-sparse H_i - H_i' can be recovered provided that RowNumber(Ψ_i) ≥ C' k log(n/k) for a constant C' [4]. This tightly matches the scaling law!
Outline of the Talk
Fundamental setting: multiple transmitters and one receiver. The scaling law of tracking all channel gains. Achieving the scaling law by ADMOT. General setting: multiple transmitters, relay nodes and receivers. The scaling law of the fundamental setting still holds. Achieving the scaling law by ADMOT-GENERAL. Simulation results.
Simulations
Setting: n = 500 transmitters, one receiver, average SNR = 20 dB, approx-k-sparse state variation. Define channel stability = 1 - k/n. ADMOT is implemented in a consecutive manner (each round's estimate serves as the a priori knowledge for the next round).
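For illustration only (this is not the authors' simulation code), a hypothetical end-to-end run matching the stated setting could reuse the earlier make_training_matrix and admot sketches. The constant C = 3 in the slot count and the noise bound are arbitrary choices.

```python
import numpy as np

n, N = 500, 500
stability = 0.95                        # channel stability = 1 - k/n
k = int(round((1 - stability) * n))     # number of significantly varied channels

rng = np.random.default_rng(1)
H_prior = np.ones(n)                    # receiver's a priori gains
H = H_prior.copy()
changed = rng.choice(n, size=k, replace=False)
H[changed] += rng.normal(0, 1, size=k)  # k channels suffer significant variation

Psi = make_training_matrix(N, n)        # +/-1 training data (earlier sketch)
snr_db = 20.0
noise_std = np.sqrt(np.mean((Psi @ H) ** 2) / 10 ** (snr_db / 10))

def probe(Psi_m):                       # Step A: probing with additive noise
    return Psi_m @ H + rng.normal(0, noise_std, size=Psi_m.shape[0])

m = int(np.ceil(3 * k * np.log(n / k)))             # m = C k log(n/k), C = 3 for illustration
H_est = admot(m, H_prior, Psi, probe,
              noise_bound=2 * noise_std * np.sqrt(m))
print(np.linalg.norm(H_est - H) / np.linalg.norm(H))  # relative estimation error
```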
Simulation results (plot omitted).
Future Work
General setting: network tomography + channel gain estimation? The current ADMOT-GENERAL requires the internal nodes in V to run a sophisticated protocol (ADMOT) for channel gain estimation. Can we estimate internal channel gains by "tomography", in which relay nodes perform only normal network transmissions and only the transmitters and receivers run sophisticated protocols?
Thanks! & Questions?
[1] H. Yao, X. Li, and S. C. Liew, "Achieving the Scaling Law of SNR-Monitoring for Dynamic Wireless Networks," arXiv:1008.0053.
[2] K. D. Ba, P. Indyk, E. Price, and D. P. Woodruff, "Lower bounds for sparse recovery," in Proc. of SODA, 2010.
[3] E. Candès, J. Romberg, and T. Tao, "Stable signal recovery from incomplete and inaccurate measurements," Communications on Pure and Applied Mathematics, 2006.
[4] S. Mendelson, A. Pajor, and N. Tomczak-Jaegermann, "Uniform uncertainty principle for Bernoulli and subgaussian ensembles," Constructive Approximation, 2008.