Slide 1: Optimal Sampling Strategies for Multiscale Stochastic Processes
Vinay Ribeiro, Rolf Riedi, Rich Baraniuk (Rice University)
Slide 2: Motivation
–Probing for RTT (ping, TCP) and for available bandwidth (pathload, pathChirp)
–Packet trace collection: traffic matrix estimation, overall traffic composition
–Routing/connectivity analysis: sample a few routing tables
–Sensor networks: deploy a limited number of sensors
In each case we want a global (space/time) average but can take only a limited number of local samples. How do we optimally place N samples to estimate the global quantity?
[Figure: probe packets sent over the interval 0 to T]
Slide 3: Multiscale Stochastic Processes
–Nodes at higher scales are averages over larger regions
–Powerful structure: models LRD traffic, image data, natural phenomena
–Root = global average; leaves = local samples
–Goal: choose N leaf nodes that give the best linear estimate (in terms of mean squared error) of the root node
–Candidate leaf patterns: bunched, uniform, exponential?
[Figure: quad-tree with root at top, leaves at bottom, scale j increasing with depth]
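The root-equals-global-average structure can be sketched in a few lines. This is a minimal illustration for the binary (rather than quad-tree) case; `build_average_tree` is a hypothetical helper name, not from the talk.

```python
import numpy as np

def build_average_tree(leaves):
    """Bottom-up dyadic multiscale tree: each parent node is the average of
    its two children, so the root equals the global average of the leaf
    samples. `leaves` must have power-of-two length."""
    levels = [np.asarray(leaves, dtype=float)]
    while len(levels[-1]) > 1:
        cur = levels[-1]
        levels.append(0.5 * (cur[0::2] + cur[1::2]))  # pairwise averages
    return levels[::-1]  # levels[0] = root, levels[-1] = leaves

tree = build_average_tree([1.0, 3.0, 2.0, 6.0])
print(tree[0][0])  # root = global average of the four local samples = 3.0
```

The sampling question then becomes: which subset of `tree[-1]` best predicts `tree[0][0]`?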
Slide 4: Independent Innovations Trees
–Each node is a linear combination of its parent and an independent random innovation
–Recursive top-to-bottom algorithm; a concave optimization determines the split at each node
–Polynomial-time algorithm: O(N × depth + #tree nodes)
–Uniformly spaced leaves are optimal if the innovations are i.i.d. within each scale
[Figure: a node splitting its N leaves into n on one side and N − n on the other]
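A sample path of such a tree is easy to generate. This is a sketch for a binary tree with Gaussian innovations; the parameters `rho` and `sigma` are illustrative assumptions, not values from the talk.

```python
import numpy as np

def innovations_tree(depth, rho=0.9, sigma=0.5, seed=0):
    """Sample a binary independent-innovations tree: every node is a linear
    combination of its parent and an independent Gaussian innovation.
    Returns one array per scale; scale 0 holds the root."""
    rng = np.random.default_rng(seed)
    levels = [rng.normal(size=1)]              # root
    for _ in range(depth):
        parents = np.repeat(levels[-1], 2)     # two children per node
        levels.append(rho * parents + sigma * rng.normal(size=parents.size))
    return levels

levels = innovations_tree(depth=3)
print(len(levels[-1]))  # 2**3 = 8 leaves at the finest scale
```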
Slide 5: Covariance Trees
–Distance: two leaf nodes have distance j if their lowest common ancestor is at scale j
–Covariance tree: the covariance between leaf nodes at distance j is c_j (a function of distance only), and the covariance between the root and any leaf node is constant
–Positive correlation progression: c_j > c_{j+1}
–Negative correlation progression: c_j < c_{j+1}
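The distance of two leaves is just the scale at which their ancestor paths merge. A small sketch, assuming a binary tree with the root at scale 0 and leaves at scale `depth` (the quad-tree case is analogous):

```python
def leaf_distance(i, k, depth):
    """Scale of the lowest common ancestor of leaves i and k in a binary
    tree (root at scale 0, leaves at scale `depth`)."""
    j = depth
    while i != k:                  # climb one scale at a time until merge
        i, k, j = i // 2, k // 2, j - 1
    return j

def leaf_covariance(i, k, c, depth):
    """Covariance tree: leaf covariance c_j depends only on the distance j."""
    return c[leaf_distance(i, k, depth)]

print(leaf_distance(0, 1, 3))  # siblings: LCA at scale 2
print(leaf_distance(0, 7, 3))  # opposite ends: LCA is the root, scale 0
```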
Slide 6: Covariance Tree Result
–Optimality proof: construct an independent innovations tree with the same correlation structure
–Worst-case proof: based on eigenanalysis

                                   optimal                worst case
Positive correlation progression   uniform                bunched
Negative correlation progression   bunched (conjecture)   uniform
Slide 7: Numerical Results
–Covariance trees with fractional Gaussian noise correlation structure
–Plots of normalized MSE vs. number of leaves for different leaf patterns
[Figures: one plot for positive correlation progression, one for negative correlation progression]
Slide 8: Future Directions
–Sampling: more general tree structures; non-linear estimates; non-tree stochastic processes; leverage related work in statistics (Bellhouse et al.)
–Internet inference: how to determine correlation between traffic traces, routing tables, etc.
–Sensor networks: jointly optimize with other constraints such as transmission power
Slide 9: Water-Filling
–X: an arbitrary set of leaf nodes; N: the size of X
–Split: l leaves go to the left subtree, N − l to the right
–Objective: minimize the linear minimum mean squared error of estimating the root using X
–Repeat at the next lower scale with N replaced by l*_N (left) and N − l*_N (right)
–Result: if the innovations are identically distributed within each scale, then distribute the leaves uniformly: l*_N = ⌊N/2⌋
[Figure: f_L(l) and f_R(l) plotted over l = 0, 1, 2, 3, 4 for N = 4]
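The i.i.d.-innovations case of this recursion can be sketched directly: split ⌊N/2⌋ leaves to the left at every node and recurse, which yields uniformly spaced leaf indices. `uniform_split` is a hypothetical helper name.

```python
def uniform_split(n_leaves, depth):
    """Water-filling placement when innovations are i.i.d. within each
    scale: send floor(N/2) leaves to the left child and the rest to the
    right, then repeat one scale lower. Returns the chosen leaf indices."""
    if depth == 0:
        return [0] if n_leaves > 0 else []
    left = n_leaves // 2                 # l*_N = floor(N/2)
    half = 2 ** (depth - 1)
    return (uniform_split(left, depth - 1)
            + [half + i for i in uniform_split(n_leaves - left, depth - 1)])

print(uniform_split(3, 3))  # -> [3, 5, 7]
print(uniform_split(4, 3))  # -> [1, 3, 5, 7]: uniformly spaced
```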
Slide 10: Covariance Tree Result
–Result: for a positive correlation progression, choosing leaf nodes uniformly in the tree is optimal. However, for a negative correlation progression this same uniform choice is the worst case!
–Optimality proof: construct an independent innovations tree with the same correlation structure
–Worst-case proof: the uniform choice maximizes the sum of the elements of Σ_X; eigenanalysis shows that this implies the uniform choice minimizes the sum of the elements of Σ_X^{-1}
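To make the Σ_X^{-1} argument concrete, here is a small sketch of the linear minimum-MSE computation, MSE = var(root) − cov^T Σ_X^{-1} cov, comparing a uniform and a bunched pair of leaves. All numbers (the covariance sequence `c`, the root variance, the root-leaf covariance) are made up for illustration.

```python
import numpy as np

def lmmse_root(leaf_idx, depth, c, root_var, root_leaf_cov):
    """MSE of the best linear estimate of the root from the chosen leaves:
    root_var - cov^T Sigma_X^{-1} cov, with Sigma_X built from the
    distance-dependent leaf covariances c[j]."""
    def lca(i, k):                       # scale of lowest common ancestor
        j = depth
        while i != k:
            i, k, j = i // 2, k // 2, j - 1
        return j
    Sigma = np.array([[c[lca(i, k)] for k in leaf_idx] for i in leaf_idx])
    cov = np.full(len(leaf_idx), root_leaf_cov)  # constant root-leaf cov
    return root_var - cov @ np.linalg.solve(Sigma, cov)

# Made-up covariance sequence in which nearby leaves are more correlated.
c = {0: 0.3, 1: 0.6, 2: 1.0}   # c[j]: covariance when the LCA is at scale j
uniform = lmmse_root([0, 2], 2, c, root_var=1.0, root_leaf_cov=0.5)
bunched = lmmse_root([0, 1], 2, c, root_var=1.0, root_leaf_cov=0.5)
print(uniform < bunched)  # -> True: uniform placement wins in this example
```

Flipping which covariance sequence decays faster reverses the comparison, matching the positive/negative progression dichotomy above.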