Multi-target Detection in Sensor Networks. Xiaoling Wang, ECE691, Fall 2003.

Target Detection in Sensor Networks
- Single target detection
  - Energy decay model
  - Constant false-alarm rate (CFAR) detection
- Multiple target detection
  - Blind source separation (BSS) problem: the targets are treated as the sources
  - "Blind": there is no a priori information on
    - the number of sources
    - the probabilistic distribution of the source signals
    - the mixing model
  - Independent component analysis (ICA) is a common technique for solving the BSS problem (a minimal sketch follows below)
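A minimal, hypothetical sketch of the BSS idea using FastICA from scikit-learn; the source waveforms, mixing matrix, and noise level are illustrative assumptions, not values from the presentation:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)

# Two hypothetical target signatures (illustrative only).
s1 = np.sin(2 * np.pi * 1.0 * t)            # periodic source
s2 = np.sign(np.sin(2 * np.pi * 0.3 * t))   # square-wave source
S = np.c_[s1, s2]

# Linear, instantaneous mixing observed at two sensors, plus noise.
A = np.array([[1.0, 0.5],
              [0.4, 1.0]])                  # assumed mixing matrix
X = S @ A.T + 0.05 * rng.standard_normal(S.shape)

# "Blind" separation: recover the sources without knowing A or S.
ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)                # estimated source signals
A_est = ica.mixing_                         # estimated mixing matrix
```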

BSS in Sensor Networks
- The BSS problem involves
  - Source number estimation
  - Source separation
- Assumptions
  - Linear, instantaneous mixture model
  - Number of sources = number of observations
- The equality assumption does not hold in sensor networks because of the large number of sensors deployed (see the model sketch below)
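In matrix form the assumed model reads as follows (standard BSS notation; the symbols are assumptions, since the original slide equations did not survive the transcript):

```latex
x(t) = A\,s(t) + n(t), \qquad x(t) \in \mathbb{R}^{M},\; s(t) \in \mathbb{R}^{N},\; A \in \mathbb{R}^{M \times N}
```

Classical ICA assumes M = N, whereas a dense sensor deployment typically has many more observations than sources (M >> N).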

Source Number Estimation
- Available source number estimation algorithms
  - Sample-based approach: reversible-jump Markov chain Monte Carlo (RJ-MCMC)
  - Variational learning
  - Bayesian source number estimation (BSNE)

Bayesian Source Number Estimation (BSNE) Algorithm
Notation:
- X: sensor observation matrix
- H_m: hypothesis on the number of sources (m sources)
- S: source matrix
- A: mixing matrix (X = AS + noise)
- W: unmixing matrix (the estimated inverse of the mixing)
- y: latent variable
- f(.): non-linear transformation function
- n: noise, with variance sigma^2
- p(X | H_m): marginal distribution of the observations under hypothesis H_m
A detailed derivation is given at the end of the presentation.
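The BSNE formulas themselves are not preserved in the transcript, so the following is only a rough sketch of the hypothesis-scanning idea: each candidate source count m is scored with a penalized probabilistic-PCA log-likelihood as a crude stand-in for the marginal likelihood p(X | H_m). The function name, the BIC-style penalty, and the use of PCA are illustrative assumptions, not the BSNE algorithm itself.

```python
import numpy as np
from sklearn.decomposition import PCA

def estimate_source_count(X, max_sources=5):
    """Score hypotheses H_m ("there are m sources") and return the best m.

    X: observation matrix of shape (n_samples, n_sensors).
    The score is a BIC-style penalized log-likelihood, used here only as a
    simple surrogate for the Bayesian evidence p(X | H_m).
    """
    n_samples, n_sensors = X.shape
    scores = {}
    for m in range(1, min(max_sources, n_sensors) + 1):
        pca = PCA(n_components=m).fit(X)
        log_lik = pca.score(X) * n_samples          # total log-likelihood
        n_params = n_sensors * m                    # rough parameter count
        scores[m] = log_lik - 0.5 * n_params * np.log(n_samples)
    best_m = max(scores, key=scores.get)
    return best_m, scores
```

For example, calling `estimate_source_count(X)` on a matrix of mixed sensor observations returns the hypothesized source count with the highest penalized score together with the scores of all candidates.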

Centralized vs. Distributed Schemes
- Centralized scheme: long observation sequences from all sensors are available for source number estimation
  - Centralized processing is not realistic in sensor networks because of
    - the large number of sensor nodes deployed
    - the limited power supply of battery-powered sensor nodes
- Distributed scheme
  - Data are processed locally
  - Only the local decisions are transferred between sensor clusters
  - Advantages of the distributed target detection framework: it dramatically reduces long-distance network traffic and therefore conserves the energy consumed on data transmission

Distributed Source Number Estimation Scheme
- Sensor nodes are organized into clusters
- The distributed scheme includes two levels of processing:
  - An estimate of the source number is obtained within each cluster using the Bayesian method
  - The local decisions from the clusters are fused using Bayesian posterior probability fusion and Dempster's rule of combination

Distributed Hierarchy
- Unique features of the developed distributed hierarchy:
  - M-ary hypothesis testing
  - Fusion of detection probabilities
  - Distributed structure
- Figure: structure of the distributed hierarchy

Posterior Probability Fusion Based on Bayes Theorem
By Bayes theorem, the fused posterior of hypothesis $H_m$ given the cluster observations $x_1, \dots, x_K$ is
$$P(H_m \mid x_1, \dots, x_K) = \frac{p(x_1, \dots, x_K \mid H_m)\, P(H_m)}{p(x_1, \dots, x_K)}.$$
Since the cluster observations are independent, $p(x_1, \dots, x_K \mid H_m) = \prod_{k=1}^{K} p(x_k \mid H_m)$, and for each cluster $p(x_k \mid H_m) = P(H_m \mid x_k)\, p(x_k) / P(H_m)$. Therefore,
$$P(H_m \mid x_1, \dots, x_K) \propto \frac{\prod_{k=1}^{K} P(H_m \mid x_k)}{P(H_m)^{K-1}},$$
normalized so that the fused posteriors sum to one over all hypotheses.
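A minimal sketch of this fusion rule, assuming independent cluster observations and, for simplicity, a uniform prior over the hypotheses when none is supplied; the local posteriors in the example are made up for illustration:

```python
import numpy as np

def fuse_posteriors(cluster_posteriors, prior=None):
    """Fuse per-cluster posteriors P(H_m | x_k) assuming independent clusters.

    P(H_m | x_1..x_K) is proportional to prod_k P(H_m | x_k) / P(H_m)^(K-1);
    a uniform prior is assumed if none is given.
    """
    P = np.asarray(cluster_posteriors, dtype=float)  # shape (K clusters, M hypotheses)
    K, M = P.shape
    prior = np.full(M, 1.0 / M) if prior is None else np.asarray(prior, dtype=float)
    fused = np.prod(P, axis=0) / prior ** (K - 1)
    return fused / fused.sum()

# Example: three clusters, hypotheses m = 1, 2, 3 sources.
local = [[0.20, 0.70, 0.10],
         [0.30, 0.60, 0.10],
         [0.25, 0.55, 0.20]]
print(fuse_posteriors(local))   # fused posterior favors m = 2
```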

Dempster's Rule of Combination
- Uses probability intervals and uncertainty intervals to determine the likelihood of hypotheses from multiple pieces of evidence
- Can assign measures of belief to combinations of hypotheses
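A small sketch of Dempster's rule for combining two bodies of evidence over the hypothesis set. Mass functions are represented as dicts mapping frozensets of hypotheses to belief mass; the numbers are illustrative only and the mass on the full set plays the role of the uncertainty interval:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two basic mass assignments (dicts: frozenset -> mass)."""
    combined = {}
    conflict = 0.0
    for (A, a), (B, b) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            combined[inter] = combined.get(inter, 0.0) + a * b
        else:
            conflict += a * b          # mass assigned to conflicting evidence
    # Normalize by 1 - K, where K is the total conflicting mass.
    return {S: v / (1.0 - conflict) for S, v in combined.items()}

# Illustrative evidence over the hypotheses {1, 2} sources, including a
# "don't know" mass on the whole hypothesis set.
m1 = {frozenset({1}): 0.6, frozenset({1, 2}): 0.4}
m2 = {frozenset({2}): 0.5, frozenset({1, 2}): 0.5}
print(dempster_combine(m1, m2))
```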

Performance Evaluation of Multiple Target Detection
- Figures: sensor laydown and target types

Results Comparison: Log-likelihood and Histogram

Results Comparison: Kurtosis, Detection Probability, and Computation Time

Discussion
The distributed hierarchy with the Bayesian posterior probability fusion method performs best because:
- Source number estimation is performed only within each cluster, so the effect of signal variations is confined locally and contributes less to the fusion process
- The hypotheses on different source numbers form an independent, mutually exclusive, and exhaustive set, which matches the conditions of the Bayesian fusion method
- The physical characteristics of sensor networks are taken into account, such as the signal energy captured by each sensor node as a function of its geographical position

Derivation of the BSNE Algorithm
Outline of the derivation (the detailed equations are omitted here):
- Choose a prior over the mixing matrix; the marginal likelihood then follows up to constant terms.
- Suppose the noise on each component has the same variance sigma^2; this yields the marginal integral over the mixing matrix.
- Assume this integral is dominated by a sharp peak at its maximizer and apply the Laplace approximation of the marginal integral.
- Assume the density function of the latent variable is likewise sharply peaked at its mode and apply the Laplace approximation a second time.
- Finally, estimate the noise variance by maximum likelihood, which yields the final expression for the evidence used to compare the hypotheses.
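For reference, the generic Laplace approximation invoked in these steps replaces a marginal integral with its value at a sharp peak (standard form, not specific to the slide's lost notation):

```latex
\int e^{g(\theta)}\, d\theta \;\approx\; e^{g(\hat{\theta})}\,(2\pi)^{d/2}\,\bigl| -\nabla^2 g(\hat{\theta}) \bigr|^{-1/2},
\qquad \hat{\theta} = \arg\max_{\theta} g(\theta),
```

where d is the dimension of theta and the determinant is taken of the negative Hessian at the peak.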