Digital Perception Lab, Dept. of Electrical and Computer Systems Engineering, Monash University

Research covers areas such as:
- Computational mathematics
  - Novel splines and fast approximation of splines (related to radial basis functions, support vector machines)
  - Finite element, wavelet, and multi-pole methods
- Image processing
  - Restoration of historical film
  - Biomedical image processing
- Computer vision/robotics
  - Optic flow
  - Motion segmentation
  - Tracking
  - 3-D structure modelling
A common thread is motion/displacement estimation from images. Common techniques are robust statistics, model selection, and model fitting.

Current (and New) Projects
- Robust model fitting and model selection (with Wang, Bab-Hadiashar, Staudte, Kanatani, ...)
- Subspace methods for SFM and face recognition (with Chen; soon to be postdoc with PIMCE)
- Biomedical: microcalcification in breast X-rays (with Lee, Lithgow); knee cartilage segmentation (with Cheong and Ciccutini)
- Invariant matching/background modelling (with Gobara)
- Historical film restoration and film special effects (with Boukir)
- Wavelet denoising (with Chen)
- (new) Geometric aspects of tracking (ARC); postdoc Wang
- Human motion modelling and tracking (with U)
- Visualisation (Monash SMURF vizlab)
- (new) Urban scanning (Monash NRA; soon to be postdoc Schindler)
- (new) 4-D recorder room (+ Tat-jun Chin + Tk; PhD students soon to start)

Advance on Previous Restoration Work (with Boukir)
- The previous approach can't capture distortion (e.g., rotation).
- Can try to use 3-D projective geometry instead (below).
Large Grant, IREX 2001.
S. Boukir and D. Suter. Application of rigid motion geometry to film restoration. In Proceedings of ICPR 2002, volume 6, 2002.
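As an illustration of the projective-alignment idea, one can robustly estimate a homography between two film frames and warp one onto the other. The sketch below uses OpenCV with hypothetical file names and parameters; it is a generic approach, not the rigid-motion-geometry method of the ICPR 2002 paper.

```python
# Minimal sketch: projective alignment of two film frames (assumes OpenCV >= 3).
# Generic homography-based approach; frame file names are hypothetical.
import cv2
import numpy as np

prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Detect and match ORB features between the two frames.
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(prev, None)
kp2, des2 = orb.detectAndCompute(curr, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# Robust homography estimation (handles rotation and other projective distortion).
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)

# Warp the previous frame into the current frame's coordinates.
aligned = cv2.warpPerspective(prev, H, (curr.shape[1], curr.shape[0]))
```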

Symmetry in (Robust Fitting)
The assumption that the median belongs to the "clean" data is in fact sometimes false even when outliers make up less than 50% of the data: for example, 55 inliers with 45 clustered outliers.
Large Grant.
H. Wang and D. Suter. Using symmetry in robust model fitting. Pattern Recognition Letters, 24(16), 2003.
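A hedged sketch of how such an experiment can be set up: 55 inliers on a line plus 45 clustered outliers, fitted with a plain least-median-of-squares (LMedS) estimator. This is not the paper's symmetry-based method, the data generation is hypothetical, and whether LMedS actually breaks down depends on where the outlier cluster sits.

```python
# Sketch: least-median-of-squares line fit on 55 inliers + 45 clustered outliers.
# This is the kind of configuration that stresses the "median belongs to the
# clean data" assumption; the data parameters here are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# 55 inliers on the line y = 1.0 * x with small noise.
x_in = rng.uniform(0, 10, 55)
y_in = 1.0 * x_in + rng.normal(0, 0.1, 55)

# 45 outliers tightly clustered away from the line.
x_out = rng.normal(5.0, 0.2, 45)
y_out = rng.normal(15.0, 0.2, 45)

x = np.concatenate([x_in, x_out])
y = np.concatenate([y_in, y_out])

def lmeds_line(x, y, trials=500):
    """Least median of squares: sample 2-point lines and keep the one
    with the smallest median squared residual."""
    best, best_med = None, np.inf
    for _ in range(trials):
        i, j = rng.choice(len(x), 2, replace=False)
        if x[i] == x[j]:
            continue
        a = (y[j] - y[i]) / (x[j] - x[i])
        b = y[i] - a * x[i]
        med = np.median((y - (a * x + b)) ** 2)
        if med < best_med:
            best, best_med = (a, b), med
    return best

print("LMedS estimate (a, b):", lmeds_line(x, y), "true line: (1.0, 0.0)")
```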

Symmetry in (Robust Fitting): about 45% clustered outliers. Large Grant.

Very Robust Fitting: Mean Shift
About 95% outliers! Large Grant.
H. Wang and D. Suter. MDPE: A very robust estimator for model fitting and range image segmentation. Int. J. of Computer Vision, to appear, 2004.

Very Robust Fitting: about 95% outliers! Large Grant.

Very Robust Fitting: how does it work?
Essentially, it does not depend on just a single statistic (the median, or the number of inliers) but on the probability density of residuals around the chosen estimate. It uses mean shift and maximizes a measure roughly of the form: (the sum of the inlier density, as defined by the mean-shift window) divided by (the bias, i.e., the mean residual given by the centre of the converged mean-shift window).
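A toy sketch of this idea in Python, assuming a flat kernel and a simple 1-D residual density; it is a simplification for illustration, not the published MDPE algorithm.

```python
# Toy sketch of a mean-shift-based robust score in the spirit of MDPE:
# favour fits whose residual density has high probability mass near zero
# and whose mode (mean-shift window centre) lies close to zero.
import numpy as np

def mean_shift_1d(residuals, start=0.0, bandwidth=1.0, iters=50):
    """Flat-kernel mean shift on 1-D residuals, started at `start`."""
    centre = start
    for _ in range(iters):
        window = residuals[np.abs(residuals - centre) <= bandwidth]
        if window.size == 0:
            break
        new_centre = window.mean()
        if abs(new_centre - centre) < 1e-6:
            break
        centre = new_centre
    return centre

def density_power_score(residuals, bandwidth=1.0):
    """Roughly: (probability mass inside the converged window) divided by
    (1 + |window centre|), so bias away from zero residual is penalised."""
    centre = mean_shift_1d(residuals, 0.0, bandwidth)
    mass = np.mean(np.abs(residuals - centre) <= bandwidth)
    return mass / (1.0 + abs(centre))

def fit_line_robust(x, y, trials=1000, bandwidth=0.5, rng=None):
    """Sample 2-point candidate lines and keep the highest-scoring one."""
    if rng is None:
        rng = np.random.default_rng(0)
    best, best_score = None, -np.inf
    for _ in range(trials):
        i, j = rng.choice(len(x), 2, replace=False)
        if x[i] == x[j]:
            continue
        a = (y[j] - y[i]) / (x[j] - x[i])
        b = y[i] - a * x[i]
        score = density_power_score(y - (a * x + b), bandwidth)
        if score > best_score:
            best, best_score = (a, b), score
    return best
```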

Range image segmentation results (figure): UB (distorted edges), USF (noisy points), WSU (missed surfaces). Large Grant.

Results on the Yosemite and Otte sequences (figure). Large Grant.

Imputation / Subspace Learning (Hallucination, if you prefer)
P. Chen and D. Suter. Recovering the missing components in a large noisy low-rank matrix: Application to SFM. IEEE Trans. Pattern Analysis and Machine Intelligence, to appear, 2004.

What you start with: a large, noisy, low-rank matrix with "holes". We want to fill in the missing entries and de-noise.

Why?
- Data mining: online recommender systems, DNA data, etc.
- Structure from motion: M = RS, where M holds the locations of features in the images, R is the camera motion, and S is the structure (a baseline sketch follows below).
- Face recognition, and other learning and classification tasks.
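As a baseline illustration of the fill-in-and-denoise problem, missing entries of a low-rank matrix can be imputed by alternating a truncated-SVD projection with re-insertion of the observed entries. This is a simple sketch, not the method of the PAMI paper; the rank and variable names are illustrative.

```python
# Minimal sketch: fill in and de-noise a low-rank matrix with missing entries
# by iterating a truncated-SVD projection (hard imputation). Simple baseline,
# not the Chen & Suter algorithm; `rank` and the data are illustrative.
import numpy as np

def complete_low_rank(M, mask, rank, iters=100):
    """M: measurement matrix (arbitrary values where mask is False).
    mask: True where an entry was observed. Returns a rank-`rank` estimate."""
    X = np.where(mask, M, 0.0)          # initialise missing entries with zeros
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X_low = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]   # best rank-r approximation
        X = np.where(mask, M, X_low)    # keep observed entries, impute the rest
    return X_low

# For affine SFM, the registered measurement matrix is ideally rank 3 (M = RS), e.g.:
# completed = complete_low_rank(W, observed_mask, rank=3)
```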

36 frames and 336 feature points – the most reliable by our measure

Comparison (figure): Jacobs; Jacobs + Shum (100 iterations); Jacobs + Shum (400 iterations); ours.

4983 points over 36 frames; 2683 points are those tracked for more than 2 frames.

Subspace-Based Face Recognition: Outlier Detection and a New Distance Criterion for Matching
P. Chen and D. Suter. Subspace-based face recognition: outlier detection and a new distance criterion. In Proceedings of ACCV 2004, 2004.

Yale B face database

Outlier detection (iteratively reweighted least squares, IRLS)
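A generic sketch of IRLS-style outlier down-weighting when projecting a test image onto an eigenimage subspace. The Huber weight and MAD scale below are common choices assumed here for illustration, not necessarily the exact scheme of the ACCV 2004 paper.

```python
# Sketch: robust projection of an image onto an eigenimage subspace using
# iteratively reweighted least squares (IRLS). Pixels with large reconstruction
# residuals (e.g., shadows, specularities) receive small weights.
import numpy as np

def irls_subspace_projection(x, B, iters=20, k=1.345):
    """x: image as a vector (n_pixels,); B: (n_pixels, d) eigenimage basis.
    Returns subspace coefficients c and per-pixel weights w."""
    w = np.ones(len(x))
    c = np.zeros(B.shape[1])
    for _ in range(iters):
        Bw = B * w[:, None]
        # Weighted least squares: minimise sum_i w_i * (x_i - (B c)_i)^2
        c, *_ = np.linalg.lstsq(Bw.T @ B, Bw.T @ x, rcond=None)
        r = x - B @ c
        scale = 1.4826 * np.median(np.abs(r)) + 1e-12   # robust (MAD-style) scale
        u = np.abs(r) / scale
        w = np.where(u <= k, 1.0, k / u)                # Huber-type weights
    return c, w
```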

7D eigenimages

Subsets 1-5

Comparison of the classification error rate (%) on the Yale B face database:

Method                Subsets 1-3   Subset 4    Subset 5
Linear subspace [9]   0             15          –
Cones-attached [9]    0             8.6         –
Cones-cast [9]        0             0           –
9PL [14]              0             2.8 (5.6)   –
Proposed              0             0           7.9