Slide 1: Consensus-Based Distributed Least-Mean Square Algorithm Using Wireless Ad Hoc Networks

Gonzalo Mateos, Ioannis Schizas and Georgios B. Giannakis
ECE Department, University of Minnesota
Acknowledgment: ARL/CTA grant no. DAAD

Slide 2: Motivation

Estimation using ad hoc WSNs raises exciting challenges:
- Communication constraints
- Limited power budget
- Lack of hierarchy / decentralized processing (consensus via single-hop communications)

Unique features:
- Environment is constantly changing (e.g., WSN topology)
- Lack of statistical information at the sensor level

Bottom line: algorithms are required to be
- Resource efficient
- Simple and flexible
- Adaptive and robust to changes

Slide 3: Prior Works

Single-shot distributed estimation algorithms:
- Consensus averaging [Xiao-Boyd '05], [Tsitsiklis-Bertsekas '86, '97]
- Incremental strategies [Rabbat-Nowak et al. '05]
- Deterministic and random parameter estimation [Schizas et al. '06]

Consensus-based Kalman tracking using ad hoc WSNs:
- MSE optimal filtering and smoothing [Schizas et al. '07]
- Suboptimal approaches [Olfati-Saber '05], [Spanos et al. '05]

Distributed adaptive estimation and filtering:
- LMS and RLS learning rules [Lopes-Sayed '06, '07]

Slide 4: Problem Statement

Ad hoc WSN with a set of sensors:
- Single-hop communications only, within each sensor's neighborhood
- Connectivity information captured by the network graph
- Zero-mean additive (e.g., Rx, quantization) noise

Each sensor, at every time instant:
- Acquires a regressor and a scalar observation
- Both zero-mean w.l.o.g. and spatially uncorrelated

Least-mean squares (LMS) estimation problem of interest (a plausible form is sketched below).
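
The slide's formula did not survive transcription. A plausible statement of the LMS problem, assuming the notation of the authors' related papers (J sensors indexed by j, regressors h_j(t), observations x_j(t), common weight vector s) rather than anything confirmed by this transcript:

```latex
\mathbf{s}_o \;=\; \arg\min_{\mathbf{s}} \;\sum_{j=1}^{J} \mathbb{E}\!\left[\big(x_j(t) - \mathbf{h}_j^{T}(t)\,\mathbf{s}\big)^{2}\right]
```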

Slide 5: Centralized Approaches

- If the data are jointly stationary: the Wiener solution applies
- If the global (cross-)covariance matrices are available: steepest descent converges while avoiding matrix inversion
- If the (cross-)covariance information is unavailable or time-varying: low complexity suggests centralized (C-)LMS adaptation

Goal: develop a distributed (D-)LMS algorithm for ad hoc WSNs.
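
For reference, the standard forms these bullets point to, written in the (assumed) notation above:

```latex
% Wiener solution, when the global covariances are available
\mathbf{s}_o = \Big(\textstyle\sum_{j} R_{h_j}\Big)^{-1} \textstyle\sum_{j} \mathbf{r}_{h_j x_j},
\qquad R_{h_j} := \mathbb{E}[\mathbf{h}_j \mathbf{h}_j^{T}],\;\;
\mathbf{r}_{h_j x_j} := \mathbb{E}[\mathbf{h}_j x_j]

% C-LMS: stochastic-gradient adaptation when covariances are unknown
\mathbf{s}(t+1) = \mathbf{s}(t) + \mu \textstyle\sum_{j} \mathbf{h}_j(t)\big(x_j(t) - \mathbf{h}_j^{T}(t)\,\mathbf{s}(t)\big)
```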

Slide 6: A Useful Reformulation

Introduce a bridge sensor subset such that:
1) Every sensor has at least one bridge sensor in its neighborhood
2) Every bridge sensor is reachable from the rest of the bridge subset through single-hop neighborhoods

Consider the resulting convex, constrained optimization (sketched below).

Proposition [Schizas et al. '06]: If the bridge subset satisfies 1)-2) and the WSN is connected, then the constrained problem is equivalent to the original one.
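
A plausible reconstruction of the constrained problem, assuming local estimates s_j, bridge variables s̄_b, and B_j for the bridge sensors in sensor j's neighborhood (all notational assumptions, borrowed from the authors' related papers):

```latex
\min_{\{\mathbf{s}_j\},\{\bar{\mathbf{s}}_b\}} \;\sum_{j=1}^{J} \mathbb{E}\!\left[\big(x_j(t) - \mathbf{h}_j^{T}(t)\,\mathbf{s}_j\big)^{2}\right]
\quad \text{s.t.} \quad \mathbf{s}_j = \bar{\mathbf{s}}_b, \;\; \forall\, j,\; b \in B_j
```

Under 1)-2) and connectivity, the constraints force all local estimates to agree, so the minimizers coincide with the centralized solution.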

Slide 7: Algorithm Construction

Problem of interest: the constrained reformulation above.

Two key steps in deriving D-LMS:
- Resort to the alternating-direction method of multipliers, to gain the desired degree of parallelization
- Apply stochastic approximation ideas, to cope with the unavailability of statistical information

Slide 8: Derivation of Recursions

Form the associated augmented Lagrangian and apply the alternating-direction method of multipliers, yielding a three-step iterative update process:
- Step 1: Multipliers (dual iteration)
- Step 2: Local estimates (minimize the augmented Lagrangian w.r.t. the local estimates)
- Step 3: Bridge variables (minimize w.r.t. the bridge variables)
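
A sketch of the associated augmented Lagrangian under the same assumed notation, with multipliers v_j^b and penalty coefficient c (the exact form in the slides is not recoverable):

```latex
\mathcal{L}_c\big[\{\mathbf{s}_j\},\{\bar{\mathbf{s}}_b\},\{\mathbf{v}_j^b\}\big]
= \sum_{j} \mathbb{E}\!\left[\big(x_j(t) - \mathbf{h}_j^{T}(t)\,\mathbf{s}_j\big)^{2}\right]
+ \sum_{j} \sum_{b \in B_j} (\mathbf{v}_j^b)^{T} (\mathbf{s}_j - \bar{\mathbf{s}}_b)
+ \frac{c}{2} \sum_{j} \sum_{b \in B_j} \big\| \mathbf{s}_j - \bar{\mathbf{s}}_b \big\|^{2}
```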

Slide 9: Multiplier Updates

- Recall the consensus constraints
- Use a standard method-of-multipliers type of update
- Requires only the bridge variables from the bridge neighborhood
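
The method-of-multipliers (dual-ascent) update presumably takes the standard form, with the penalty coefficient c as the dual step:

```latex
\mathbf{v}_j^b(t) = \mathbf{v}_j^b(t-1) + c\,\big(\mathbf{s}_j(t) - \bar{\mathbf{s}}_b(t)\big), \qquad b \in B_j
```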

Slide 10: Local Estimate Updates

Given by the local optimization:
- First-order optimality condition
- Proposed recursion inspired by the Robbins-Monro algorithm, driven by the local prior error with a constant step-size

Requires:
- Already acquired bridge variables
- Updated local multipliers
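
A plausible form of the Robbins-Monro-type recursion; the constant factors here are illustrative, since the exact scaling in the slides is not recoverable:

```latex
e_j(t+1) = x_j(t+1) - \mathbf{h}_j^{T}(t+1)\,\mathbf{s}_j(t) \quad \text{(local prior error)}

\mathbf{s}_j(t+1) = \mathbf{s}_j(t) + \mu \Big[ \mathbf{h}_j(t+1)\, e_j(t+1)
- \tfrac{1}{2} \sum_{b \in B_j} \big( \mathbf{v}_j^b(t) + c\,(\mathbf{s}_j(t) - \bar{\mathbf{s}}_b(t)) \big) \Big]
```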

Slide 11: Bridge Variable Updates

Similarly obtained by minimizing the augmented Lagrangian. Requires:
- Updated local estimates and multipliers from the neighborhood
- Additional neighborhood information acquired in a startup phase
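
Since the augmented Lagrangian is quadratic in each bridge variable, the minimizer has a closed form; a plausible reconstruction, with N_b denoting the neighborhood of bridge sensor b (again an assumed symbol):

```latex
\bar{\mathbf{s}}_b(t+1) = \frac{1}{|N_b|} \sum_{j \in N_b} \Big( \mathbf{s}_j(t+1) + \frac{1}{c}\,\mathbf{v}_j^b(t) \Big)
```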

Slide 12: D-LMS Recap and Operation

The same three-step recursions apply in the presence of communication noise. D-LMS is simple and fully distributed; only single-hop exchanges are needed:
- Steps 1, 2: each sensor receives the bridge variables from its bridge neighborhood and transmits its updated estimate and multipliers to it
- Step 3: each bridge sensor transmits its bridge variable to its neighborhood and receives the local updates from it
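
A minimal, self-contained Python sketch of the three-step loop as reconstructed above. Everything here is an illustrative assumption, not the authors' reference implementation: the ring topology, the choice of letting every sensor act as a bridge for its closed neighborhood (which trivially satisfies the bridge conditions), the step-size and penalty values, and the 1/2 gradient scaling.

```python
import numpy as np

rng = np.random.default_rng(0)

J, p = 10, 4            # sensors, parameter dimension (assumed values)
mu, c = 0.05, 1.0       # LMS step-size and augmented-Lagrangian penalty (assumed)
s_true = rng.standard_normal(p)

# Ring topology; every sensor bridges for its closed neighborhood.
nbrs = [{(j - 1) % J, j, (j + 1) % J} for j in range(J)]
bridges = [sorted(nbrs[j]) for j in range(J)]   # B_j: bridge set of sensor j

s = np.zeros((J, p))     # local estimates s_j
sbar = np.zeros((J, p))  # bridge variables s̄_b
v = np.zeros((J, J, p))  # multipliers v_j^b (only b in B_j are used)

for t in range(2000):
    # Step 1: multiplier (dual-ascent) updates
    for j in range(J):
        for b in bridges[j]:
            v[j, b] += c * (s[j] - sbar[b])

    # Step 2: local estimate updates (stochastic-gradient / LMS step)
    h = rng.standard_normal((J, p))                  # fresh regressors
    x = h @ s_true + 0.1 * rng.standard_normal(J)    # noisy observations
    for j in range(J):
        e = x[j] - h[j] @ s[j]                       # local prior error
        penalty = sum(v[j, b] + c * (s[j] - sbar[b]) for b in bridges[j])
        s[j] += mu * (h[j] * e - 0.5 * penalty)

    # Step 3: bridge variable updates (closed-form average)
    for b in range(J):
        members = [j for j in range(J) if b in bridges[j]]
        sbar[b] = np.mean([s[j] + v[j, b] / c for j in members], axis=0)

print("mean deviation:", np.mean(np.linalg.norm(s - s_true, axis=1)))
```

Running the sketch, the mean deviation of the local estimates from s_true shrinks toward the noise floor, and the sbar averaging keeps the local estimates clustered, which is the consensus behavior the slide describes.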

Slide 13: Further Insights

Manipulating the recursions for the local estimates and multipliers, and introducing the instantaneous consensus error at each sensor, the local estimate update becomes a superposition of two learning mechanisms:
- A purely local LMS-type adaptation
- A PI (proportional-integral) consensus loop that tracks the consensus set-point
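
Schematically, with ε_j(t) := Σ_{b∈B_j} (s_j(t) − s̄_b(t)) as the instantaneous consensus error, the decomposition reads as below; the gains K_P and K_I stand in for constants tied to μ and c whose exact values are not recoverable from this transcript:

```latex
\mathbf{s}_j(t+1) = \underbrace{\mathbf{s}_j(t) + \mu\,\mathbf{h}_j(t+1)\,e_j(t+1)}_{\text{local LMS adaptation}}
\;-\; \underbrace{\Big( K_P\,\boldsymbol{\epsilon}_j(t) + K_I \sum_{\tau \le t} \boldsymbol{\epsilon}_j(\tau) \Big)}_{\text{PI consensus loop}}
```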

Slide 14: D-LMS Processor (block diagram)

The diagram shows, at sensor j, a local LMS algorithm operating in parallel with a PI regulator that feeds the consensus loop.
- Network-wide information enters through the set-point
- Expect increased performance
- Flexibility

Slide 15: Mean Analysis

Independence-setting signal assumptions:
- (As1) The regressors form a zero-mean white random vector sequence with bounded covariance spectral radius
- (As2) Observations obey a linear model in the true weight, where the additive term is zero-mean white noise
- (As3) Regressors and noise are statistically independent

Define the local errors with respect to the true weight and their network-wide concatenation.

Goal: derive sufficient conditions under which the estimates converge in the mean.
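
Under the assumed notation, (As2) presumably reads:

```latex
x_j(t) = \mathbf{h}_j^{T}(t)\,\mathbf{s}_o(t) + \epsilon_j(t), \qquad \epsilon_j(t)\ \text{zero-mean white, independent of}\ \mathbf{h}_j(t)
```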

Slide 16: Dynamics of the Mean

Lemma: Under (As1)-(As3), consider the suitably initialized D-LMS algorithm. The mean of the stacked local errors is then given by a second-order matrix recursion, which can be rewritten as an equivalent first-order system by state concatenation.
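
The concatenation step itself is standard: a second-order matrix recursion z(t) = A_1 z(t−1) + A_2 z(t−2) becomes first order by stacking consecutive states,

```latex
\mathbf{y}(t) := \begin{bmatrix} \mathbf{z}(t) \\ \mathbf{z}(t-1) \end{bmatrix}
\quad \Longrightarrow \quad
\mathbf{y}(t) = \begin{bmatrix} A_1 & A_2 \\ I & 0 \end{bmatrix} \mathbf{y}(t-1)
```

so that mean stability reduces to the spectral radius of the block companion matrix being less than one, which is what the step-size conditions of the next slide enforce.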

Slide 17: First-Order Stability Result

Proposition: Under (As1)-(As3), the D-LMS algorithm with positive step-sizes and relevant parameters chosen to satisfy a local sufficient condition achieves consensus in the mean sense.

Step-size selection is based on local information only:
- Local regressor statistics
- Bridge neighborhood size

Slide 18: Simulations

Setup: a WSN with multiple nodes; i.i.d. regressors; observations generated by the linear model; D-LMS run with the parameters above; a true time-varying weight vector.

Slide 19: Loop Tuning

Adequately selecting the loop parameters actually does make a difference. Compared figures of merit:
- MSE (learning curve)
- MSD (normalized estimation error)
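
The usual definitions of these figures of merit, which the missing formulas presumably match:

```latex
\mathrm{MSE}(t) = \frac{1}{J} \sum_{j=1}^{J} \mathbb{E}\big[e_j^{2}(t)\big],
\qquad
\mathrm{MSD}(t) = \frac{1}{J} \sum_{j=1}^{J} \mathbb{E}\big[\|\mathbf{s}_j(t) - \mathbf{s}_o(t)\|^{2}\big]
```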

Slide 20: Concluding Summary

Developed a distributed LMS algorithm for general ad hoc WSNs.

Intuitive sensor-level processing:
- Local LMS adaptation
- Tunable PI loop driving the local estimate to consensus

Mean analysis under independence assumptions; step-size selection rules based on local information only. Simulations validate convergence in the mean-square sense and tracking capabilities.

Ongoing research:
- Stability and performance analysis under general settings
- Optimality: selection of bridge sensors
- D-RLS: estimation/learning performance vs. complexity tradeoff