Recalibration Problem
Virtual Sensors (VSs) detect events, e.g., the coffee level in the Rescue kitchen, or the west-east traffic light next to Calit2.
VS output, coffee state: 11:43 Half-Full, 11:44 Half-Full, 11:45 Half-Full, 11:46 Half-Full.
VS output, traffic light state: 11:43 Red, 11:44 Red, 11:45 Red, 11:46 Red.

Recalibration Problem (Cont.)
Perturbations in physical sensors can cause virtual sensors to become uncalibrated; e.g., a camera can shift its field of view (FOV).
VS output, coffee state: 11:43 Empty, 11:44 Empty, 11:45 Empty, 11:46 Empty.
VS output, traffic light state: 11:43 Orange, 11:44 Orange, 11:45 Orange, 11:46 Orange.

Recalibration Problem (Cont.)
Recalibration refers to detecting such perturbations and adjusting the parameters the VSs use, so that they detect events correctly again.

The Recalibration Problem - Approach
Observation: events correspond to changes in the values of attributes.
Approach: model the state changes as an automaton, and use the automaton to capture semantics.
[Figure: traffic-light automaton with transitions O->R, G->O and R->G, each annotated with the average dwell time and an avg ± 2·std interval.]
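To make the automaton idea concrete, here is a minimal sketch (in Python, not from the original system) of how such a model could be learnt from a stream of (timestamp, state) observations; the function name and data layout are illustrative assumptions.

```python
from collections import defaultdict
from statistics import mean, stdev

def learn_automaton(observations):
    """Learn state transitions and dwell-time statistics from a
    chronologically ordered list of (timestamp_seconds, state) pairs."""
    dwell_times = defaultdict(list)   # (from_state, to_state) -> [durations]
    prev_state, entered_at = None, None
    for ts, state in observations:
        if prev_state is None:
            prev_state, entered_at = state, ts
        elif state != prev_state:
            dwell_times[(prev_state, state)].append(ts - entered_at)
            prev_state, entered_at = state, ts
    # Summarize each transition with its average dwell time and a ±2*std interval.
    model = {}
    for edge, durations in dwell_times.items():
        avg = mean(durations)
        std = stdev(durations) if len(durations) > 1 else 0.0
        model[edge] = {"avg": avg, "low": avg - 2 * std, "high": avg + 2 * std}
    return model

# Example: traffic-light states sampled once per minute.
obs = [(0, "G"), (60, "G"), (120, "O"), (180, "R"), (300, "G"), (360, "O"), (420, "R")]
print(learn_automaton(obs))
```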

Exploiting Semantics for Re-Calibration
Detection: a deviation from the learnt model indicates that the VS has become uncalibrated.
Correction & calibration: a search over the parameter space for the setting that fits the semantic model.
DEMO: finding the correct FOV based on system semantics (coffee machine & traffic light).
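A hedged sketch of how detection and correction could work on top of the learnt model above: deviation is measured as the fraction of observed transitions the model cannot explain, and recalibration is a brute-force search over candidate parameter settings (e.g., candidate FOVs). The helper `run_virtual_sensor(params)` is hypothetical, standing in for running the VS with a given parameter setting.

```python
def transition_is_plausible(model, edge, dwell):
    """A transition is plausible if its dwell time falls inside the learnt
    avg ± 2*std interval for that edge (unseen edges are implausible)."""
    stats = model.get(edge)
    return stats is not None and stats["low"] <= dwell <= stats["high"]

def deviation_score(model, observations):
    """Fraction of observed transitions that the learnt model does not explain."""
    edges = []
    prev_state, entered_at = None, None
    for ts, state in observations:
        if prev_state is None:
            prev_state, entered_at = state, ts
        elif state != prev_state:
            edges.append(((prev_state, state), ts - entered_at))
            prev_state, entered_at = state, ts
    if not edges:
        return 0.0
    bad = sum(1 for edge, dwell in edges if not transition_is_plausible(model, edge, dwell))
    return bad / len(edges)

def recalibrate(model, candidate_params, run_virtual_sensor, threshold=0.3):
    """Search the parameter space (e.g., candidate FOVs) for the setting
    whose output deviates least from the semantic model."""
    best = min(candidate_params, key=lambda p: deviation_score(model, run_virtual_sensor(p)))
    return best if deviation_score(model, run_virtual_sensor(best)) <= threshold else None
```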

Exploiting Semantics for Scheduling of Resources
Let:
C be a set of n cameras.
Plan(t) be a {0,1} vector b = (b_1, ..., b_n) indicating which cameras to probe at time t.
Benefit(Plan(t)) be the expected benefit from executing this plan.
Cost(Plan(t)) be the cost associated with that plan.
Find a plan such that Benefit(Plan(t)) is maximized and Cost(Plan(t)) is minimized, under the constraint Σ_i b_i ≤ k.
For example, probe a set of N entry/exit sensors to observe as many distinct people entering as possible, when establishing a new connection to a sensor is 5 times more expensive than continuing to probe the same sensor, and only k connections can be established at the same time.
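The following is one illustrative way to instantiate this optimization as a greedy selection under the connection budget k; the cost values (5 for a new connection vs. 1 for an existing one) follow the example above, but the greedy strategy and names are assumptions, not the paper's algorithm.

```python
def choose_plan(benefit, connected, k, new_conn_cost=5, keep_cost=1):
    """Pick at most k cameras (b_i = 1) that maximize estimated benefit
    minus probing cost.  `benefit[i]` is the expected benefit of probing
    camera i; `connected` is the set of cameras already connected."""
    n = len(benefit)

    def net(i):
        cost = keep_cost if i in connected else new_conn_cost
        return benefit[i] - cost

    # Greedily keep the k cameras with the highest positive net utility.
    ranked = sorted(range(n), key=net, reverse=True)
    chosen = [i for i in ranked[:k] if net(i) > 0]
    return [1 if i in chosen else 0 for i in range(n)]

# Example: 5 cameras, budget of 2 simultaneous connections, camera 1 already connected.
print(choose_plan(benefit=[0.9, 7.0, 2.0, 6.5, 0.1], connected={1}, k=2))
```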

Semantics Exploited
A priori: which cameras should be probed when nothing else is known?
Self correlation: if we've seen someone walking in camera A, how likely is it to see more motion in A?
Cross correlation: given the known state of the system, e.g., motion in camera B 5 seconds ago and motion in camera C 2 seconds ago, how likely is it to see motion in camera A?
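These three kinds of semantics can be estimated from historical motion logs. The sketch below is illustrative, assuming a binary motion matrix as the data layout: it computes an a priori motion probability per camera, a lag-1 self-correlation, and a cross-correlation matrix.

```python
import numpy as np

def motion_statistics(history, lag=1):
    """Estimate per-camera semantics from a binary motion history matrix of
    shape (timesteps, cameras): history[t, i] == 1 iff motion was seen in
    camera i at time t."""
    history = np.asarray(history, dtype=float)
    # A priori: marginal probability of motion in each camera.
    prior = history.mean(axis=0)
    # Self correlation: P(motion in i at t | motion in i at t-lag).
    past, future = history[:-lag], history[lag:]
    seen = past.sum(axis=0)
    self_corr = np.where(seen > 0,
                         (past * future).sum(axis=0) / np.maximum(seen, 1),
                         prior)
    # Cross correlation: cross[j, i] = P(motion in i at t | motion in j at t-lag).
    n = history.shape[1]
    cross = np.zeros((n, n))
    for j in range(n):
        mask = past[:, j] == 1
        cross[j] = future[mask].mean(axis=0) if mask.any() else prior
    return prior, self_corr, cross
```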

Motion Semantics
Semantics used here: motion.
Other types of semantics: trajectory, identity.

Scheduling Using Semantics
Naïve approach: round-robin (RR).
Exploiting sensor semantics: 1. determine the probability of motion in each camera; 2. use the motion estimates to find the best plan.
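Putting the pieces together, a hedged sketch of the semantics-based scheduler: combine the learnt statistics into a per-camera motion estimate given the most recent observations, then hand the estimates to the plan-selection step (e.g., the `choose_plan` sketch above). The max-of-evidence combination rule here is an illustrative assumption, not the paper's formula.

```python
def motion_probability(prior, self_corr, cross, recent_motion):
    """Combine learnt statistics into a per-camera motion estimate.
    `recent_motion[i]` is 1 if motion was just observed in camera i."""
    n = len(prior)
    est = list(prior)
    for i in range(n):
        for j in range(n):
            if recent_motion[j]:
                # Use self-correlation for the same camera, cross-correlation otherwise.
                evidence = self_corr[i] if i == j else cross[j][i]
                est[i] = max(est[i], evidence)
    return est

# Estimate, then schedule: probe the k most promising cameras.
# est = motion_probability(prior, self_corr, cross, recent_motion)
# plan = choose_plan(benefit=est, connected=currently_connected, k=2)
```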

Results
Almost a 90% improvement compared to RR!

Different Benefit Functions
The semantics-based algorithm performs the best, but it can do even better if the plan is adapted to the cost/benefit functions.