Combining Geometric- and View-Based Approaches for Articulated Pose Estimation
David Demirdjian, MIT Computer Science and Artificial Intelligence Laboratory

Approach
We propose an efficient real-time approach for articulated body tracking that combines:
- a geometric model-based tracker: local fit of a 3D articulated CAD model to stereo data;
- a view-based model: (global) search for the current image in a collection of views, followed by a local pose prediction.
The articulated pose is computed by fusing the estimates provided by the two techniques.

We introduce a view-based model that contains views of a person under various articulated poses. The view-based model is built and updated online. Our main contribution is to model, in each key frame, the pose changes as a linear transformation of the view changes. This linear model allows for:
- refining the estimate of the reference pose P0 associated with a key frame;
- predicting the pose in a new image.

[System diagram: stereo data feed the model-based (ICP) tracker, which outputs P(g); the intensity image I is matched against the view-based model (key frames J1, ..., JN, detected key frame Jk), which outputs P(v); the two estimates are fused, and the fused result is used to update the view-based model.]

View-based model

Key frames
The view-based model consists of a collection of key frames. Each key frame maps a view J and its local variations to a pose P. Variations of the view J are modeled by the apparent motion dx (image flow) of a set of support points x in the view, and the pose is modeled by the linear relation P = P0 + L dx. A key frame is therefore characterized by {J, P0, x, L}: the view, the reference pose, the support points and the motion-to-pose Jacobian.

Pose prediction
Given a new image I, a prediction of the corresponding pose P is estimated by:
1. searching for the key frame Jk closest to image I with respect to an image distance d(I1, I2), e.g. an L2 distance (weighted by foreground weights when applicable);
2. estimating the flow dx of the support points x between Jk and I;
3. predicting the pose P(v) with the local linear model of Jk.

Learning/updating the view-based model
The view-based model is learned online; at the beginning it is bootstrapped with the geometric model-based tracker estimates.
Key frame selection. Goal: model as much of the appearance space as possible with a fixed number N of key frames in the collection. Key frames Jk are therefore selected so that they maximize the inter-image distance within the collection.
Key frame update. The support points x are estimated as image points belonging to the subject (e.g. using foreground detection, optical flow, ...). The parameters P0 and L are estimated from a set of n observations (dx(n), P(n)) using a robust estimation technique.

Fusion
Given the estimate P(g) from the geometric model-based tracker and the prediction P(v) from the view-based model, the final pose estimate P is the one minimizing the 3D fitting error function E2:
P = arg min E2(P) over P in {P(g), P(v)}.
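The prediction and fusion steps above are simple enough to sketch in code. The Python fragment below is a minimal illustration only, not the authors' implementation; the flow estimator compute_flow, the 3D fitting error fitting_error and the foreground weights are hypothetical placeholders for components described elsewhere on the poster.

```python
# Minimal sketch of key-frame based pose prediction and fusion, assuming
# hypothetical helpers `compute_flow` (support-point flow, e.g. Lucas-Kanade)
# and `fitting_error` (the 3D fitting error E2 against the stereo data).
import numpy as np

class KeyFrame:
    """A key frame {J, P0, x, L}: view, reference pose, support points,
    motion-to-pose Jacobian."""
    def __init__(self, view, ref_pose, support_points, jacobian):
        self.J = view            # grayscale view, shape (H, W)
        self.P0 = ref_pose       # reference articulated pose, shape (d,)
        self.x = support_points  # support points, shape (m, 2)
        self.L = jacobian        # motion-to-pose Jacobian, shape (d, 2m)

def image_distance(I1, I2, weights=None):
    """Weighted L2 distance between two views (foreground weights optional)."""
    diff = (I1.astype(float) - I2.astype(float)) ** 2
    if weights is not None:
        diff = diff * weights
    return float(np.sqrt(diff.sum()))

def predict_pose(I, keyframes, compute_flow, weights=None):
    """P(v): pick the closest key frame Jk, estimate the flow dx of its support
    points between Jk and I, and apply the local linear model P = P0 + L dx."""
    k = min(keyframes, key=lambda kf: image_distance(I, kf.J, weights))
    dx = compute_flow(k.J, I, k.x)           # shape (m, 2)
    return k.P0 + k.L @ dx.ravel()

def fuse(P_g, P_v, fitting_error):
    """Fusion: keep whichever of P(g), P(v) gives the smaller 3D fitting error."""
    return min((P_g, P_v), key=fitting_error)
```

As in the system diagram above, the fused estimate is what gets fed back to update the view-based model.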
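The key-frame selection and update steps described under "Learning/updating the view-based model" can likewise be sketched. The greedy replacement rule below is an assumption (the poster only states that key frames should maximize the inter-image distance within a fixed-size collection), and the plain least-squares refit stands in for the robust estimator mentioned on the poster.

```python
# Sketch of online key-frame maintenance: a fixed budget of N key frames kept
# maximally spread in appearance space, plus a (non-robust) refit of (P0, L).
import numpy as np

def min_pairwise_distance(keyframes, dist):
    """Smallest pairwise image distance within the collection."""
    if len(keyframes) < 2:
        return float("inf")
    return min(dist(a.J, b.J)
               for i, a in enumerate(keyframes)
               for b in keyframes[i + 1:])

def maybe_insert_keyframe(candidate, keyframes, N, dist):
    """Greedy rule (an assumption): add the candidate if the budget allows;
    otherwise accept the swap that most increases the inter-image spread."""
    if len(keyframes) < N:
        return keyframes + [candidate]
    best, best_score = keyframes, min_pairwise_distance(keyframes, dist)
    for i in range(len(keyframes)):
        trial = keyframes[:i] + keyframes[i + 1:] + [candidate]
        score = min_pairwise_distance(trial, dist)
        if score > best_score:
            best, best_score = trial, score
    return best

def refit_linear_model(dxs, poses):
    """Re-estimate (P0, L) from observations (dx(n), P(n)) by least squares;
    the poster uses a robust estimator, omitted here for brevity."""
    dxs = np.asarray([d.ravel() for d in dxs])      # (n, 2m)
    A = np.hstack([np.ones((len(dxs), 1)), dxs])    # model: P = P0 + L dx
    X, *_ = np.linalg.lstsq(A, np.asarray(poses), rcond=None)
    return X[0], X[1:].T                            # P0 (d,), L (d, 2m)
```

Here dist could be the image_distance function of the previous sketch.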
Geometric model-based tracking
The geometric model-based tracker estimates the pose P that minimizes a 3D fitting error E2(P). Given an initial pose P0, this amounts to estimating a body transformation D that minimizes F2(D) = E2(D(P0)). The constrained minimization of F2(D) is performed in two steps [Demirdjian et al., ICCV'03] (a sketch of both steps is given below):
1. Unconstrained minimization: find a body transformation D (and its uncertainty) that minimizes F2(D). This is done by applying the ICP algorithm independently to each limb, without accounting for the articulated constraints between limbs. ICP finds the rigid transformation that maps the shape St (limb model) onto the shape Sr (3D data).
2. Quadratic Programming resolution: find the transformation D* closest to D in the Mahalanobis sense (with respect to the estimated uncertainty), with D* satisfying the articulated (joint) constraints.

Experiments
[Figure: comparative results (re-projection of the 3D articulated model) on a sequence of more than 1500 images.]
[Figure: average error between the estimated 3D articulated model and the 3D scene reconstruction vs. number of frames; peaks in the ICP curve actually correspond to tracking failures.]
[Table: average percentage of frames correctly tracked over 20 sequences (of about 1000 frames each), for ICP (geometric model) only vs. ICP + view-based model.]

Future work
- Modeling appearance: improving the view-based model to account for appearance (e.g. texture) variation across people.
- Adding dynamic constraints to improve robustness and reduce tracking "jumpiness".
- Probabilistic fusion to account for the uncertainty of the pose estimates.
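To close, a rough sketch of the two-step minimization used by the geometric tracker. The per-limb ICP routine rigid_icp and the linearized joint-constraint matrix A (with right-hand side b) are placeholders; the closed-form projection below assumes linear equality constraints, whereas the original formulation solves a Quadratic Programming problem [Demirdjian et al., ICCV'03].

```python
# Sketch of the two-step constrained minimization: (1) independent per-limb ICP,
# (2) projection of the stacked limb transformations onto the articulation
# constraints in the Mahalanobis sense. `rigid_icp`, `A` and `b` are assumed.
import numpy as np
from scipy.linalg import block_diag

def fit_limbs_unconstrained(limb_models, limb_scene_points, rigid_icp):
    """Step 1: fit each limb rigidly (ICP), ignoring the joints; return the
    stacked parameters D and a block-diagonal uncertainty Lambda."""
    params, covs = [], []
    for limb, pts in zip(limb_models, limb_scene_points):
        d_i, cov_i = rigid_icp(limb, pts)   # e.g. a 6-vector and its covariance
        params.append(d_i)
        covs.append(cov_i)
    return np.concatenate(params), block_diag(*covs)

def project_onto_constraints(D, Lambda, A, b):
    """Step 2: D* = argmin (D* - D)^T Lambda^{-1} (D* - D) subject to A D* = b.
    With linear equality constraints this has the closed form below; inequality
    constraints would require an actual QP solver."""
    S = A @ Lambda @ A.T
    correction = Lambda @ A.T @ np.linalg.solve(S, A @ D - b)
    return D - correction
```

A typical call sequence would be D, Lam = fit_limbs_unconstrained(...) followed by D_star = project_onto_constraints(D, Lam, A, b).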