Inference of Non-Overlapping Camera Network Topology by Measuring Statistical Dependence Date: 2009.01.21


Motivation As camera devices become cheaper, wide-area surveillance systems are becoming a trend for the future. To achieve wide-area surveillance, there is a big problem we need to deal with: non-overlapping FOVs.

Non-overlapping FOVs More practical in the real world, but we begin to bump into many problems… - Correspondence between cameras, made harder by the different viewing angles of the cameras - Calibration is difficult, so we may not have the relative positions and orientations between cameras

Correspondence btw Cameras Means we have to identify the same object in different cameras, usually by space-time and appearance features. But in wide-area surveillance, cameras may be widely separated and objects may occupy only a few pixels, so the problem is difficult to solve.

View from Another Angle To link objects across non-overlapping FOVs, we need to know the connectivity of movement between FOVs. This turns into the problem of finding the topology of the camera network.

What’s our goal now? We want to determine the network structure relating the cameras, and the typical transitions between cameras, based on noisy observations of moving objects in the cameras. Departure and arrival locations in each camera view are nodes in the network; an arc between a departure node and an arrival node denotes connectivity (i.e., a possible transition).

First, consider a simple case…

What feature can we use? Objects occupy only a few pixels → appearance may fail. The spatial relationship between cameras is unknown → spatial cues may fail. Aha! Time may be a good feature to use!

The Model Departure and arrival locations in each camera view are nodes in the network. An arc between a departure node and an arrival node denotes connectivity → characterized by a transition time distribution T.

Problem Formulation Given departure and arrival time observations X and Y, they are related by the transition time T.

Main Hypothesis If the cameras are connected, the arrival time Y can be easily predicted from the departure time X → there is regularity between X and Y → given the correspondence, the transition time distribution is highly structured → the dependence between X and Y is strong.

Problem Formulation How do we measure the dependence? By mutual information! Written in terms of entropy: I(X;Y) = h(Y) − h(Y|X).

Problem Formulation From the graphical model we have Y = T(X) → Y = X + T (assuming X is independent of T). Therefore h(Y|X) can be related to the entropy of T: h(Y|X) = h(X+T|X) = h(T|X) = h(T).

Problem Formulation We get I(X;Y) = h(Y) − h(T) → maximizing statistical dependence is the same as minimizing the entropy of the distribution of the transformation T. The distribution of T is determined by the matching π between X and Y (π is a permutation giving the correspondence between X and Y).

Problem Formulation So what we want now is to find the matching (x_i, y_π(i)) whose transition time distribution has the lowest entropy → maximum dependence. But how do we compute the entropy? → with a Parzen density estimator.
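As an illustration, here is a minimal Python sketch of a leave-one-out Parzen (Gaussian-kernel) entropy estimate for the transition times implied by a candidate matching. The function names and the kernel width sigma are illustrative choices, not taken from the paper.

```python
import numpy as np

def parzen_entropy(t, sigma=1.0):
    """Leave-one-out Parzen (Gaussian-kernel) estimate of the differential
    entropy of a 1-D sample t: h(T) ~= -(1/N) * sum_i log p_hat(t_i)."""
    t = np.asarray(t, dtype=float)
    n = len(t)
    diffs = t[:, None] - t[None, :]                               # pairwise differences
    k = np.exp(-0.5 * (diffs / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    np.fill_diagonal(k, 0.0)                                      # leave-one-out
    p_hat = k.sum(axis=1) / (n - 1)                               # density at each sample
    return -np.mean(np.log(p_hat + 1e-300))

def matching_entropy(x, y, matching, sigma=1.0):
    """Entropy of the transition times implied by a matching, where `matching`
    is a list of (i, j) pairs: departure x[i] matched to arrival y[j]."""
    t = np.array([y[j] - x[i] for i, j in matching])
    return parzen_entropy(t, sigma)
```

A low value of matching_entropy indicates a tightly clustered transition time distribution, i.e. strong dependence between departures and arrivals.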

Problem Formulation Okay, but how do we find the matching (x_i, y_π(i))? It's an NP-hard problem… so we look for approximation algorithms → Markov Chain Monte Carlo (MCMC). Briefly, MCMC is a way to draw samples from the posterior distribution of matchings given the data.

Markov Chain Monte Carlo We use the most general MCMC algorithm → the Metropolis-Hastings sampler.

Markov Chain Monte Carlo The key to the efficiency of an MCMC algorithm is the choice of proposal distribution q(.). Here we use three types of proposals for sampling matchings: 1. Add 2. Delete 3. Swap. The new sample is accepted with probability proportional to the relative likelihood of the new sample vs. the current one. The likelihood of a correspondence is proportional to the log probability of the corresponding transformations.
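Below is a hedged Python sketch of such a Metropolis-Hastings sampler over matchings with add/delete/swap proposals. It reuses matching_entropy from the previous sketch; the surrogate score exp(-|M|·h(T)), the initial matching, and the iteration count are illustrative assumptions, not the paper's exact formulation.

```python
import random
import numpy as np  # matching_entropy from the sketch above is assumed to be in scope

def propose(matching, n_x, n_y):
    """One of three moves on the current matching: add an unmatched pair,
    delete an existing pair, or swap the arrivals of two pairs."""
    matching = list(matching)
    move = random.choice(["add", "delete", "swap"])
    used_x = {i for i, _ in matching}
    used_y = {j for _, j in matching}
    free_x = [i for i in range(n_x) if i not in used_x]
    free_y = [j for j in range(n_y) if j not in used_y]
    if move == "add" and free_x and free_y:
        matching.append((random.choice(free_x), random.choice(free_y)))
    elif move == "delete" and matching:
        matching.pop(random.randrange(len(matching)))
    elif move == "swap" and len(matching) >= 2:
        a, b = random.sample(range(len(matching)), 2)
        (ia, ja), (ib, jb) = matching[a], matching[b]
        matching[a], matching[b] = (ia, jb), (ib, ja)
    return matching

def mh_matching(x, y, n_iter=10000, sigma=1.0):
    """Metropolis-Hastings over matchings, scoring a matching M by
    log exp(-|M| * h(T)) = -|M| * h(T); this score is an assumption
    standing in for the likelihood used in the paper."""
    def score(m):
        if len(m) < 2:
            return -1e9  # too few pairs to estimate an entropy
        return -len(m) * matching_entropy(x, y, m, sigma)

    current = [(i, i) for i in range(min(len(x), len(y)))]  # arbitrary initial matching
    cur_score = score(current)
    best, best_score = list(current), cur_score
    for _ in range(n_iter):
        cand = propose(current, len(x), len(y))
        cand_score = score(cand)
        # Metropolis acceptance: always accept improvements, sometimes accept worse moves.
        if np.log(random.random() + 1e-300) < cand_score - cur_score:
            current, cur_score = cand, cand_score
        if cur_score > best_score:
            best, best_score = list(current), cur_score
    return best
```

A full Metropolis-Hastings implementation would also account for the proposal ratio q(current|candidate)/q(candidate|current); this sketch omits it for brevity.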

Missing Matches In practice, not all observations in X and Y will be matched; some x_i may have no corresponding y_π(i). We treat missing data as outliers and model the distribution of transformations as a mixture of the true distribution and an outlier distribution, usually a uniform one. A small sketch of this mixture likelihood follows below.
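To illustrate the outlier model, here is a short Python sketch of the mixture likelihood with a uniform outlier component; the outlier weight eps and the time range are hypothetical values, not taken from the paper.

```python
import numpy as np

def mixture_log_likelihood(t, p_true, eps=0.1, t_range=(0.0, 60.0)):
    """Log-likelihood of transition times t under a mixture of the 'true'
    transition-time density p_true (a callable) and a uniform outlier
    component over t_range, with outlier weight eps."""
    t = np.asarray(t, dtype=float)
    uniform = 1.0 / (t_range[1] - t_range[0])
    p = (1.0 - eps) * p_true(t) + eps * uniform
    return np.sum(np.log(p + 1e-300))
```

Here p_true could, for example, be a Parzen density fit to the currently matched transition times, so that unmatched or spurious observations are absorbed by the uniform component instead of distorting the entropy estimate.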

Results