Blind Calibration of Sensor Networks
Laura Balzano (UCLA, www.ee.ucla.edu/~sunbeam/) and Robert Nowak (UW-Madison, www.ece.wisc.edu/~nowak)
This work was supported by the National Science Foundation.

Wireless Sensing Challenges
[Diagram: a wireless sensor node with solar cell, sensor, processor, GPS, radio, and battery. Images: Srivastava, Megerian.]
Networking, communications, resource management, and signal processing all matter, BUT if the sensors aren't calibrated, then all is for naught!

Sensing in a Box
[Plots: ideal (calibrated) vs. actual (uncalibrated) temperature readings over time from 9 temperature sensors in a styrofoam box; readings range from cool to warm.]

Outline Preliminaries and Problem Formulation Approach and Intuition Identifiability Implications Proof of Gain Identifiability Simulations and Experiments

Preliminaries: Linear Calibration Model
"Uncalibrated" sensor measurements: y_j for sensor j.
Calibrated measurements: x_j = α_j y_j + β_j, where α_j is the gain correction and β_j is the offset correction for sensor j.
Vector notation: x = α ⊙ y + β, where ⊙ denotes the Hadamard product (entrywise multiplication).
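To make the model concrete, here is a minimal numpy sketch of the calibration model as written above (a sketch of ours, not the authors' code; all names and values are illustrative):

import numpy as np

rng = np.random.default_rng(0)
n = 9                                   # number of sensors
y = rng.normal(size=n)                  # uncalibrated readings (one snapshot)
alpha = rng.uniform(0.5, 1.5, size=n)   # gain corrections, all nonzero
beta = rng.normal(scale=0.1, size=n)    # offset corrections

# Calibrated readings: Hadamard (entrywise) product plus offset.
x = alpha * y + beta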

Preliminaries Continued
No sensor gain is zero: α_j ≠ 0 for every sensor j.
The gain and offset corrections remain constant throughout the course of blind calibration. As we will see later, this is a reasonable assumption because blind calibration does not take many measurements.

Blind Calibration Problem
Given k uncalibrated "snapshots" y_1, …, y_k (e.g., taken at different times), estimate the gain and offset corrections: find α and β such that x_i = α ⊙ y_i + β is a calibrated signal for i = 1, …, k.
Without additional assumptions this is an impossible problem.

Outline Preliminaries and Problem Formulation Approach and Intuition Identifiability Implications Proof of Gain Identifiability Simulations and Experiments

Calibration in the Field
Pseudocolor map of temperature distribution. Neighboring sensors in a dense deployment take very similar readings, so we can automatically calibrate the sensor network by forcing readings to agree locally.
V. Bychkovskiy, S. Megerian, D. Estrin, and M. Potkonjak, "A collaborative approach to in-place sensor calibration," Lecture Notes in Computer Science, 2634:301-316, 2003.

Calibration by Local Agreement
Calibration via local agreement is based on two assumptions: a linear deployment of sensors along a transect, and ideal (calibrated) sensor readings that agree with their neighbors, x_1 = x_2 = … = x_n. These conditions define a signal subspace (constant functions).
Calibration can instead assume the second derivatives are approximately zero, x_{j+1} - 2 x_j + x_{j-1} ≈ 0. These conditions define a signal subspace (linear functions).

Signal Subspaces and Calibration
Calibration via signal subspace matching is based on the assumption that the ideal (calibrated) sensor readings x lie in a known subspace S. Example of a constant subspace: S = span{(1, 1, …, 1)}. Example of a linear subspace: S = span{(1, 1, …, 1), (1, 2, …, n)}.
Let P be the orthogonal projection matrix corresponding to the orthogonal complement of the signal subspace, so Px = 0 for every ideal signal x. For example, if the signals are bandlimited, P projects onto the space outside of that band; if the signals are "smooth," P projects onto the space of "rough" signals.

Assumptions
The calibrated signals are a linear function of the uncalibrated snapshots: x_i = α ⊙ y_i + β.
The calibrated signals lie in a known r-dimensional subspace of R^n. Let P denote the projection onto the orthogonal complement of the signal subspace, so P x_i = 0.
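As an illustration, P can be built from any orthonormal basis U of the signal subspace as P = I - U Uᵀ. A minimal numpy sketch (ours, with an arbitrary random subspace standing in for a real signal model):

import numpy as np

n, r = 8, 3
rng = np.random.default_rng(1)
U, _ = np.linalg.qr(rng.normal(size=(n, r)))  # orthonormal basis for the signal subspace
P = np.eye(n) - U @ U.T                       # projection onto its orthogonal complement

x = U @ rng.normal(size=r)                    # any ideal signal in the subspace
assert np.allclose(P @ x, 0)                  # P x = 0, as required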

Blind Calibration
The k "snapshots" result in the following system of equations: P(α ⊙ y_i + β) = 0, for i = 1, …, k, which we can try to solve for α and β. There is hope! We now have a set of nk linear equations in 2n unknowns. As long as there are enough linearly independent equations, we can solve for our unknowns!

Outline Preliminaries and Problem Formulation Approach and Intuition Identifiability Implications Proof of Gain Identifiability Simulations and Experiments

Identifiability Preview
Offsets: we cannot identify the component of β in the signal subspace. This component is indistinguishable from true signal.
Gains: however, since y_i "modulates" α, under certain conditions on P it is possible to exactly recover α up to a scalar constant. We cannot distinguish between α and c·α for a scalar constant c.
More explanation to come!

Matrix-Vector Formulation
Diag operation: Y_i = diag(y_i), the diagonal matrix with the entries of y_i on its diagonal, so that α ⊙ y_i = Y_i α.
System of calibration equations: P Y_i α + P β = 0, for i = 1, …, k.
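In code, the full system stacks the blocks [P Y_i, P] into one (nk) × (2n) matrix acting on the concatenated unknowns (α, β). A sketch under the model reconstructed above (function name ours):

import numpy as np

def calibration_system(P, snapshots):
    # Stack [P @ diag(y_i), P] for each snapshot y_i, so that the
    # calibration equations read A @ concatenate([alpha, beta]) = 0.
    blocks = [np.hstack([P @ np.diag(y), P]) for y in snapshots]
    return np.vstack(blocks)   # shape (n*k, 2*n)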

Offset Solutions
Say we know α and let z̄ = (1/k) Σ_i α ⊙ y_i. Then we must solve P β = -P z̄. However, the matrix P has rank n - r, so this equation gives us n - r equations in n unknowns. There are r degrees of freedom, so if we calibrate (non-blind) only r sensor offsets, we can recover all of them!

Offset Solutions
The solutions are β = -α ⊙ ȳ + s, where ȳ = (1/k) Σ_i y_i and s is any vector in the signal subspace. Note: every offset solution is a simple function of the sensor data and the gains. If the true signals are zero-mean, and if we estimate the gains correctly, then we correctly recover the offsets! One possible estimate when we don't know α is β̂ = -α̂ ⊙ ȳ, using the estimated gains α̂.
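A sketch of this offset estimator (ours); it is exact when the calibrated signals are zero-mean and the gain estimate is correct:

import numpy as np

def estimate_offsets(alpha_hat, snapshots):
    # beta_hat = -alpha_hat (x) y_bar, entrywise, where y_bar is the
    # average uncalibrated snapshot.
    y_bar = np.mean(snapshots, axis=0)
    return -alpha_hat * y_bar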

Gain Solutions
With the offsets handled as on the previous slide, the gain equations become the homogeneous system P Y_i α = 0, i = 1, …, k. A unique solution (up to scale) exists iff the stacked matrix [P Y_1; …; P Y_k] has rank n - 1 (i.e., a single vector spans its right nullspace).
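Since the gains span the right nullspace, they can be read off the right singular vector with the smallest singular value. A minimal sketch (ours; function name and scale convention are illustrative):

import numpy as np

def recover_gains(P, snapshots):
    # Stack P @ diag(y_i) over the k snapshots. If the stack has rank
    # n-1, the smallest right singular vector spans its nullspace,
    # i.e., the gain vector up to scale.
    M = np.vstack([P @ np.diag(y) for y in snapshots])
    alpha = np.linalg.svd(M)[2][-1]
    return alpha / alpha[0]   # fix the unknown scale, e.g. alpha_1 = 1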

Exact Recovery of Calibration Gains
When does P Y_i α = 0, i = 1, …, k, have a unique solution? Assumptions:
Oversampling: the ideal sensor network signals lie in a known r-dimensional signal subspace.
Randomness: the signals are random enough (for example, spread out over time so that they are changing) that they are linearly independent.
Incoherence: the signal subspace is incoherent with the canonical spatial basis (i.e., the δ basis); more on this in the proof.

Exact Recovery of Calibration Gains
Theorem 1: Under assumptions A1, A2, and A3, the gains can be perfectly recovered from any k ≥ r signal measurements by solving the linear system of equations P Y_i α = 0, i = 1, …, k.
Theorem 2: If the signal subspace is defined by a subset of the DFT vectors (i.e., a frequency-domain subspace), then the incoherence condition is automatically satisfied, and the gains can be perfectly recovered from any k ≥ 1 + (n-1)/(n-r) signal measurements.
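For a sense of scale (numbers ours, not from the slides): with n = 64 sensors and an r = 16-dimensional frequency-domain subspace, Theorem 2 requires k ≥ 1 + 63/48 ≈ 2.31, so k = 3 snapshots already suffice for perfect gain recovery.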

Outline Preliminaries and Problem Formulation Approach and Intuition Identifiability Implications Proof of Gain Identifiability Simulations and Experiments

Implications
What does it mean that we can perfectly recover the gains? These theorems state that it is possible to estimate the gain vector exactly under these assumptions.
Assumption A1 (Oversampling) is a given for slowly varying fields. This technique will not work for fields with spatial frequencies too high to be oversampled.
Assumption A2 (Randomness) is very reasonable, because measurements can be spread out enough in time that they are sufficiently different.
Assumption A3 (Incoherence) is easily checked and is met by commonly used subspaces such as frequency subspaces and polynomial subspaces.

Implications
However, this theory assumes the signals are noise-free and assumes exact knowledge of the projection matrix P.
The great news: there are computational ways to solve the system of equations in the presence of noise and model error (i.e., not knowing P exactly)! In fact, many mathematical techniques developed for system identification and blind equalization can potentially be applied here.

Implications
What about the offsets? As we said, if the signals are zero-mean we can recover the offsets. If the signals are not zero-mean, we need to collect r true offsets in order to calibrate all of the offsets.
In the case of the Cold Air Drainage temperature data, we calibrated the transect on one side of the valley and found that r = 4, so we only need to calibrate (non-blind) 4 sensors in order to recover all of the sensor offsets!

Outline Preliminaries and Problem Formulation Approach and Intuition Identifiability Implications Proof of Gain Identifiability Simulations and Experiments

Proof Sketch
Oversampling: signals lie in an r-dimensional subspace S of R^n.
Randomness: k ≥ r snapshots span the signal subspace with probability 1.
Incoherence: first, let α* be the true gains, and write the system of equations in terms of the calibrated signals x_i = α* ⊙ y_i, namely P Y_i α = P diag(x_i)(α ⊘ α*) = 0, or equivalently P(h ⊙ x_i) = 0 with h = α ⊘ α* (entrywise division).

Proof Sketch
Calibration equations: P diag(x_i) h = 0 for i = 1, …, k, where the x_i are the calibrated signals and h = α ⊘ α*.

Proof Sketch
Oversampling and Randomness imply that the following conditions are equivalent:
(i) P Y_i α = 0 for i = 1, …, k;
(ii) P diag(x_i) h = 0 for i = 1, …, k, with h = α ⊘ α*;
(iii) P diag(x) h = 0 for every signal x in the r-dimensional signal subspace S.

Proof Sketch
(iii) Let h = α ⊘ α*. Then the equation P diag(x) h = 0, for all x in S, should have only one solution, h = c·1. Invoke the incoherence assumption: the nullspace of the stacked matrix contains only the constant vectors. Thus, once the scale is fixed, the single solution to our set of equations is the true gains, α = α*.

Incoherence Condition
What does it mean that this stacked matrix has rank n - 1? What we know so far is that most frequency subspaces (low-pass, bandpass, high-pass) and polynomial subspaces (constant, linear, quadratic, etc.) meet this condition. It is an interesting question to ask: why? And what other subspaces meet the condition?
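"Easily checked" can be taken literally: since P(h ⊙ x) = 0 for all x in S is linear in h, it suffices to stack P diag(u_l) over a basis u_1, …, u_r of S and test the rank numerically. A sketch of such a check (ours; names illustrative), applied to a real lowpass Fourier basis:

import numpy as np

def check_incoherence(U):
    # U: n x r orthonormal basis of the signal subspace. The condition
    # holds iff the stacked matrix [P diag(u_1); ...; P diag(u_r)] has
    # rank n-1 (its nullspace always contains the constant vectors).
    n, r = U.shape
    P = np.eye(n) - U @ U.T
    M = np.vstack([P @ np.diag(U[:, l]) for l in range(r)])
    return np.linalg.matrix_rank(M) == n - 1

n, t = 16, np.arange(16)
cols = [np.ones(n), np.cos(2*np.pi*t/n), np.sin(2*np.pi*t/n), np.cos(4*np.pi*t/n)]
U, _ = np.linalg.qr(np.column_stack(cols))
print(check_incoherence(U))   # expected: True (lowpass subspaces meet the condition)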

Outline Preliminaries and Problem Formulation Approach and Intuition Identifiability Implications Proof of Gain Identifiability Simulations and Experiments

Robust Recovery of Calibration Gains
The gain equations may hold only approximately due to noise and model error: P Y_i α ≈ 0.
Robust solutions: least squares (fix one gain, say α_1 = 1, and solve the remaining overdetermined system in the least-squares sense), or the SVD (minimize Σ_i ||P Y_i α||² subject to ||α|| = 1; the minimizer is the right singular vector of the stacked matrix with the smallest singular value).

Simulation Experiment
[Figure: simulated temperature field with 8×8 sensor readings. The field is a smoothed Gaussian white noise (GWN) process; the approximate signal subspace is the span of lowpass DFT vectors.]
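The experiment is easy to reproduce in miniature. Below is a 1-D analogue of ours, not the authors' 8×8 setup: signals drawn from a lowpass Fourier subspace, random gains and offsets applied, gains recovered by the SVD. Differencing the snapshots to cancel the P β term is our device, consistent with the equations above:

import numpy as np

rng = np.random.default_rng(42)
n, r, k = 64, 8, 100

# Real lowpass Fourier basis, standing in for the lowpass-DFT subspace.
t = np.arange(n)
cols = [np.ones(n)]
f = 1
while len(cols) < r:
    cols += [np.cos(2*np.pi*f*t/n), np.sin(2*np.pi*f*t/n)]
    f += 1
U, _ = np.linalg.qr(np.column_stack(cols[:r]))
P = np.eye(n) - U @ U.T

# Ground truth and uncalibrated snapshots y_i = (x_i - beta) / alpha.
alpha = rng.uniform(0.8, 1.2, size=n)
beta = rng.normal(size=n)
X = U @ rng.normal(size=(r, k))          # ideal signals, zero-mean on average
Y = (X - beta[:, None]) / alpha[:, None]

# Gains: difference snapshots so the offset term drops out, then take
# the smallest right singular vector of the stacked matrix.
D = Y[:, 1:] - Y[:, :1]
M = np.vstack([P @ np.diag(D[:, i]) for i in range(k - 1)])
alpha_hat = np.linalg.svd(M)[2][-1]
alpha_hat *= alpha[0] / alpha_hat[0]     # fix scale via one known sensor

# Offsets: beta_hat = -alpha_hat (x) y_bar; the error reflects how far
# the sample mean of the signals is from zero.
beta_hat = -alpha_hat * Y.mean(axis=1)

print("gain error:  ", np.linalg.norm(alpha_hat - alpha) / np.linalg.norm(alpha))
print("offset error:", np.linalg.norm(beta_hat - beta) / np.linalg.norm(beta))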

Simulation Experiment
Robust to noise: at 1% noise, the gain error is about 0.01% and the offset error is about 2.4%.
Robust to mismodeling: at 10% model error, the gain error is about 1% and the offset error is about 4%.

Simulation Experiment: Implementation
Estimating gains: partial information is best, but without it the SVD outperforms the least-squares method.
Estimating offsets: again, partial information is always best. Interestingly, when the signals are not zero-mean, least squares does better even though it is using gains with more error!
Implementation matters!

Sensing in the Box
Ideally all sensors should read the same temperature: a 1-d signal subspace of constant functions.
[Plot: readings from the sensors over time, cool to warm.]

Sensing in the Wild
Temperature sensing at the James Reserve; a 4-dimensional signal subspace determined from calibrated sensor data.

Future Work (and possible collaboration topics)
Noise analysis: from our formulations, we hope to derive bounds on blind calibration performance given a particular subspace projection P and a noise variance σ.
Field validation: visiting the Cold Air Drainage sensors and calibrating them by hand was a cumbersome process. A hand-held device like the EcoPDA that could be carried to the sensors would be ideal; we could use it to show whether blind calibration can truly be done accurately with only r hand-calibrated sensors.
Model uncertainty and implementation: it isn't clear how to proceed in order to understand how model uncertainty affects the output of blind calibration. More understanding of the effect of model errors on the singular value decomposition and optimization techniques is needed.

Conclusions
Sensor calibration gains, and partially the offsets, can be determined from routine sensor measurements given some knowledge of the calibrated signal characteristics (e.g., frequency band, smoothness).
The key necessary condition is "incoherence" between the signal subspace and the canonical (spatial) basis; the condition is a nonlinear requirement on the signal basis but is easy to check in general.
Our experience is that solutions are robust to noise and mismodeling in some cases and sensitive in others; we do not yet have a good understanding of the robustness of the methodology.
Extensions are possible to handle more exotic calibration functions (e.g., nonlinear).
More info: Balzano and Nowak, "Blind Calibration of Sensor Networks," IPSN 2007.