Multi-camera Tracking of Articulated Human Motion using Motion and Shape Cues. Aravind Sundaresan and Rama Chellappa, Center for Automation Research, University of Maryland, College Park, MD, USA.

Presentation transcript:


What is motion capture?
- Motion capture (mocap) is the process of analysing and expressing human motion in mathematical terms. It comprises initialisation, pose estimation, and tracking.
- Applications: motion analysis for clinical studies, human-computer interaction, computer animation.
- Marker-based systems have shortcomings: they are cumbersome, introduce artefacts, and are time-consuming.
- A marker-less system is therefore desirable.

Calibration and human body model
- We use multiple cameras (8) in our capture: 640x480 greyscale images at 30 fps, calibrated using the algorithm of Svoboda.
- We use an articulated human body model, with super-quadrics for the body segments.
- The model is described by joint locations and super-quadric parameters; the pose is described by joint angles.
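Super-quadric body segments are commonly represented as superellipsoids. The sketch below shows the standard inside-outside function for such a surface; the function name and parameterisation are illustrative, not taken from the paper.

```python
import numpy as np

def superquadric_inside_outside(p, a, e1, e2):
    """Inside-outside function of a superellipsoid with semi-axes
    a = (a1, a2, a3) and shape exponents e1 (north-south), e2 (east-west).
    F < 1: point inside, F = 1: on the surface, F > 1: outside."""
    x, y, z = np.abs(p) / np.asarray(a)  # normalise by the semi-axes
    return ((x ** (2.0 / e2) + y ** (2.0 / e2)) ** (e2 / e1)
            + z ** (2.0 / e1))

# With e1 = e2 = 1 the superellipsoid degenerates to an ellipsoid,
# so a point on the unit sphere evaluates to 1.
print(superquadric_inside_outside(np.array([1.0, 0.0, 0.0]),
                                  (1.0, 1.0, 1.0), 1.0, 1.0))  # → 1.0
```

Varying e1 and e2 morphs a segment between rounded, box-like, and pinched shapes, which is why super-quadrics are a compact fit for limbs and the torso.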

Overview
- Use images from multiple cameras.
- Compute the 2-D pixel displacement between frames t and t+1.
- Predict the 3-D pose at t+1 using the pixel displacement.
- Compute a spatial energy function as a function of pose.
- Minimise the energy function to obtain the pose at t+1.

Tracking Framework
- Use motion and spatial cues for tracking.
- Motion cues use texture. Drawback is error accumulation: they estimate only the change in pose.
- Spatial cues are obtained from silhouettes, edges, etc. Drawback is instability: solutions are stable only "locally".
- Predictor-corrector framework:
  - Predictor: compute motion(t) from pixel displacement; predict pose(t+1) from pose(t) and motion(t).
  - Corrector: assimilate spatial cues into a single energy function; correct pose(t+1) by minimising the energy function.
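The predictor-corrector structure above can be sketched as a generic loop. All of the callables here (`estimate_motion`, `predict_pose`, `spatial_energy`, `minimise`) are hypothetical placeholders standing in for the paper's components, not the authors' implementation.

```python
def track(pose0, images, n_frames, estimate_motion, predict_pose,
          spatial_energy, minimise):
    """Predictor-corrector tracking loop (sketch).

    estimate_motion(pose, I_t, I_t1) -> pixel-displacement summary
    predict_pose(pose, motion)       -> predicted pose at t+1
    spatial_energy(pose, I_t1)       -> scalar energy from silhouettes/edges
    minimise(f, x0)                  -> pose minimising f, starting from x0
    """
    poses = [pose0]
    for t in range(n_frames - 1):
        # Predictor: motion cues give the change in pose (drift-prone on
        # their own, since errors accumulate frame to frame).
        motion = estimate_motion(poses[-1], images[t], images[t + 1])
        predicted = predict_pose(poses[-1], motion)
        # Corrector: spatial cues anchor the estimate; they are only
        # locally stable, so the prediction supplies the starting point.
        corrected = minimise(lambda p: spatial_energy(p, images[t + 1]),
                             predicted)
        poses.append(corrected)
    return poses
```

The division of labour is the point: the predictor keeps the corrector inside the basin where spatial cues are stable, and the corrector stops the predictor's drift.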

Pixel registration and displacement
- Project the model onto each image to obtain: a body-part label for each pixel, the 3-D location of each pixel, and a mask for each body part.
- Find dense pixel correspondences using a parametric optical-flow-based algorithm for each segment, minimising the MSE between corresponding pixels.
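The slide's MSE formula did not transfer, but a per-segment parametric flow objective can be illustrated as follows: an affine displacement model evaluated over one body-part mask, scored by the mean squared intensity error between the two frames. The six-parameter affine model and nearest-neighbour warp are simplifying assumptions for the sketch.

```python
import numpy as np

def affine_flow_mse(I0, I1, mask, p):
    """MSE of a parametric (affine) displacement model
        u(x, y) = p0 + p1*x + p2*y,   v(x, y) = p3 + p4*x + p5*y
    over the pixels of one body-segment mask. A parametric flow
    estimator would minimise this over p for each segment."""
    ys, xs = np.nonzero(mask)
    u = p[0] + p[1] * xs + p[2] * ys
    v = p[3] + p[4] * xs + p[5] * ys
    # Nearest-neighbour warp keeps the sketch short; real estimators
    # interpolate sub-pixel intensities.
    xt = np.clip(np.round(xs + u).astype(int), 0, I1.shape[1] - 1)
    yt = np.clip(np.round(ys + v).astype(int), 0, I1.shape[0] - 1)
    residual = I1[yt, xt] - I0[ys, xs]
    return float(np.mean(residual ** 2))
```

Fitting a handful of parameters per segment, rather than a free displacement at every pixel, is what makes the per-segment flow dense yet well-constrained.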

Pose from pixel displacement
- State-space formulation of the tracking problem.
- Linearisation: we show that the relation between pixel displacement and pose change can be linearised via a Taylor-series expansion.
- Iteratively estimate the pose.
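The slide's equations did not survive the transcript. The generic form of the linearisation the bullets name is as follows; the notation is assumed for illustration, not the authors':

```latex
% Pixel positions x depend on the pose phi; a first-order Taylor
% expansion relates the observed displacement d to the pose change:
\mathbf{x}(\boldsymbol{\phi} + \Delta\boldsymbol{\phi})
  \approx \mathbf{x}(\boldsymbol{\phi}) + J\,\Delta\boldsymbol{\phi},
\qquad
J = \frac{\partial \mathbf{x}}{\partial \boldsymbol{\phi}}

% Least-squares estimate of the pose change from the measured
% displacements d, applied iteratively (re-linearising at each
% new pose estimate until convergence):
\Delta\boldsymbol{\phi} = \bigl(J^{\top} J\bigr)^{-1} J^{\top} \mathbf{d}
```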

Combine spatial cues
- Combine multiple spatial cues into a single "spatial energy function".
- Compute the pose energy as a function of dx, dy and Φ.
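The slide's figure (two cue maps summed into one energy map) did not transfer; the combination step reduces to a weighted sum of per-cue maps. The weights here are illustrative placeholders, not values from the paper.

```python
def combine_cues(cue_maps, weights=None):
    """Combine per-cue energy maps (e.g. a silhouette distance map and
    an edge distance map) into a single spatial energy map by a
    weighted elementwise sum."""
    if weights is None:
        weights = [1.0] * len(cue_maps)
    combined = weights[0] * cue_maps[0]
    for w, m in zip(weights[1:], cue_maps[1:]):
        combined = combined + w * m
    return combined
```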

Minimise 3D pose energy
- Given multiple views and a 3-D pose:
- Compute the 2-D pose for the i-th image.
- Compute E_i for the i-th camera using the 2-D pose.
- The 3-D pose energy is E = E_1 + E_2 + ... + E_n.
- Compute the minimum-energy pose using optimisation.
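The per-camera sum can be sketched as below: project the 3-D pose into each view, sample that view's spatial energy map at the projected 2-D points, and add the results. The `project` helper is a hypothetical placeholder for the calibrated camera projection, not the authors' code.

```python
import numpy as np

def pose_energy(pose3d, cameras, energy_maps, project):
    """Total 3-D pose energy E = E_1 + ... + E_n, where E_i samples
    camera i's spatial energy map at the projected 2-D pose.
    project(pose3d, cam) -> (N, 2) array of pixel coordinates."""
    total = 0.0
    for cam, emap in zip(cameras, energy_maps):
        pts = project(pose3d, cam)                      # 2-D pose in view i
        xs = np.clip(pts[:, 0].astype(int), 0, emap.shape[1] - 1)
        ys = np.clip(pts[:, 1].astype(int), 0, emap.shape[0] - 1)
        total += float(emap[ys, xs].sum())              # E_i for camera i
    return total
```

Because E is a scalar function of the joint angles, any local optimiser can minimise it, with the motion-predicted pose supplying the starting point.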

Tracking results