3D Puppetry: A Kinect-based Interface for 3D Animation


3D Puppetry: A Kinect-based Interface for 3D Animation Robert T. Held Ankit Gupta Brian Curless Maneesh Agrawala University of California, Berkeley University of Washington UIST’12

ABSTRACT

INTRODUCTION

SYSTEM OVERVIEW The system operates in three stages: Setup, Capture, and Render.

Puppet Database The Puppet Database includes a 3D model of each physical puppet, along with roughly 30 color and depth template images per puppet and the pose of the puppet in each image.

Setup For tracking and rendering purposes, our system maintains a Puppet Database of defining characteristics for each trackable puppet. Before running our system for the first time, each puppeteer must build a color model of their skin. The Point-cloud Segmenter uses this model to isolate points that belong to the puppets rather than the puppeteer's hands, which is necessary for accurate tracking.
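One way such a skin-color model could work is a normalized-chromaticity histogram built from sampled hand pixels; the slide does not specify the representation, so this sketch is an assumption.

```python
import numpy as np

# Minimal sketch of a histogram-based skin-color model (an illustrative
# choice; the talk does not specify the exact representation).
def build_skin_histogram(skin_pixels_rgb, bins=16):
    """skin_pixels_rgb: (N, 3) uint8 samples of the puppeteer's skin."""
    px = skin_pixels_rgb.astype(np.float64) / 255.0
    # Normalized r, g chromaticity is a common illumination-robust choice.
    s = px.sum(axis=1) + 1e-8
    rg = px[:, :2] / s[:, None]
    hist, _, _ = np.histogram2d(rg[:, 0], rg[:, 1],
                                bins=bins, range=[[0, 1], [0, 1]])
    return hist / hist.sum()

def skin_probability(hist, pixel_rgb, bins=16):
    px = np.asarray(pixel_rgb, dtype=np.float64) / 255.0
    s = px.sum() + 1e-8
    r, g = px[0] / s, px[1] / s
    i = min(int(r * bins), bins - 1)
    j = min(int(g * bins), bins - 1)
    return hist[i, j]

samples = np.tile([200, 140, 120], (500, 1)).astype(np.uint8)  # skin-like samples
hist = build_skin_histogram(samples)
# A skin-like pixel scores higher than a blue (non-skin) pixel.
print(skin_probability(hist, [200, 140, 120]) > skin_probability(hist, [20, 40, 200]))
```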

CAPTURE

Render In the Render module, our system uses the captured poses to render the corresponding 3D model for each physical puppet in the virtual set. This module provides camera and lighting controls, and the option to swap 3D backgrounds.

SIFT algorithm The image is convolved with Gaussian filters at different scales, and the differences of successive Gaussian-blurred images are taken. Keypoints are then taken as the maxima/minima of the Difference of Gaussians (DoG) images that occur at multiple scales.
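This construction can be sketched for one octave as follows: blur at a geometric sequence of sigmas, then subtract neighboring blurred images. The sigma schedule is illustrative, and a plain-NumPy separable blur is used for clarity.

```python
import numpy as np

# Separable Gaussian blur: convolve rows, then columns, with a 1-D kernel.
def gaussian_blur(img, sigma):
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    kernel /= kernel.sum()
    rows = np.apply_along_axis(np.convolve, 1, img, kernel, mode="same")
    return np.apply_along_axis(np.convolve, 0, rows, kernel, mode="same")

# One octave of the DoG stack: differences of successive Gaussian levels.
def dog_stack(image, sigma0=1.6, k=2 ** 0.5, levels=5):
    blurred = [gaussian_blur(image, sigma0 * k ** i) for i in range(levels)]
    return [blurred[i + 1] - blurred[i] for i in range(levels - 1)]

img = np.random.default_rng(0).random((64, 64))
dogs = dog_stack(img)
print(len(dogs), dogs[0].shape)  # 4 (64, 64)
```

Five Gaussian levels yield four DoG images, so extrema can be tested in the three interior scale triples.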

SIFT

Keypoints This is done by comparing each pixel in the DoG images to its eight neighbors at the same scale and the nine corresponding pixels in each of the two neighboring scales. If the pixel value is the maximum or minimum among all compared pixels, it is selected as a candidate keypoint.
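The 26-neighbor test described above can be sketched directly: a pixel in the middle DoG image is a candidate if it is strictly larger (or smaller) than all 26 neighbors in the 3x3x3 cube across the three adjacent scales.

```python
import numpy as np

# Extremum test over a 3x3x3 neighborhood spanning three DoG scales.
def is_extremum(dog_below, dog_mid, dog_above, y, x):
    patch = np.stack([d[y - 1:y + 2, x - 1:x + 2]
                      for d in (dog_below, dog_mid, dog_above)])
    center = dog_mid[y, x]
    others = np.delete(patch.flatten(), 13)  # index 13 is the cube's center
    return bool(np.all(center > others) or np.all(center < others))

rng = np.random.default_rng(1)
below, mid, above = rng.random((3, 5, 5))
mid[2, 2] = 2.0  # force a maximum at the center pixel
print(is_extremum(below, mid, above, 2, 2))  # True
```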

Interpolation of nearby data for accurate position A Taylor expansion of the DoG function is computed at the candidate's offset. If the absolute value of the function at the interpolated extremum is less than 0.03, the candidate keypoint is discarded as low contrast.
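A one-dimensional version of this refinement makes the step concrete: fit a quadratic (second-order Taylor expansion) through the sampled extremum and its neighbors, solve for the sub-sample offset, and discard low-contrast candidates. The 0.03 threshold follows Lowe (2004); the 1-D reduction is for illustration.

```python
# 1-D sub-sample refinement of a DoG extremum via a quadratic fit.
def refine_and_test(d_prev, d0, d_next, threshold=0.03):
    g = (d_next - d_prev) / 2.0        # first derivative (central difference)
    h = d_next - 2.0 * d0 + d_prev     # second derivative
    offset = -g / h if h != 0 else 0.0
    value = d0 + 0.5 * g * offset      # D evaluated at the interpolated extremum
    return offset, abs(value) >= threshold

offset, keep = refine_and_test(0.10, 0.20, 0.12)
print(round(offset, 3), keep)  # 0.056 True
```

A candidate whose interpolated value falls below 0.03, e.g. `refine_and_test(0.001, 0.002, 0.001)`, is rejected.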

Orientation assignment Each keypoint is assigned one or more orientations based on local image gradient directions. This is the key step in achieving invariance to rotation.
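The standard mechanism is a 36-bin histogram of gradient orientations around the keypoint, weighted by gradient magnitude, whose peak gives the dominant orientation. A minimal sketch (omitting Lowe's Gaussian weighting and secondary peaks for brevity):

```python
import numpy as np

# Dominant gradient orientation from a magnitude-weighted 36-bin histogram.
def dominant_orientation(patch):
    dy, dx = np.gradient(patch.astype(np.float64))
    mag = np.hypot(dx, dy)
    ang = np.degrees(np.arctan2(dy, dx)) % 360.0
    hist, _ = np.histogram(ang, bins=36, range=(0.0, 360.0), weights=mag)
    return np.argmax(hist) * 10.0  # left edge of the winning 10-degree bin

# A horizontal intensity ramp has gradients pointing along +x (0 degrees).
patch = np.tile(np.arange(16, dtype=np.float64), (16, 1))
print(dominant_orientation(patch))  # 0.0
```

Rotating the patch by 90 degrees (e.g. `patch.T`) shifts the peak to 90.0, which is exactly the invariance the slide refers to: the descriptor is later computed relative to this orientation.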

Local image descriptor http://en.wikipedia.org/wiki/Scale-invariant_feature_transform

Matching Image Features

Point-cloud Segmenter The Point-cloud Segmenter first removes the background points and the hands from the cloud and then splits the remaining points into separate clouds for each physical puppet.
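The first half of that step can be sketched as simple masking: drop points beyond a background depth threshold and points classified as skin, keeping only candidate puppet points. The depth threshold and the precomputed skin mask are illustrative assumptions; the real system also splits the survivors into per-puppet clusters.

```python
import numpy as np

# Keep points that are closer than the background plane and not skin.
def segment_puppet_points(points, is_skin, background_depth=1.2):
    keep = (points[:, 2] < background_depth) & ~is_skin
    return points[keep]

pts = np.array([[0.0, 0.0, 0.5],    # puppet point
                [0.1, 0.0, 0.6],    # hand point (skin-colored)
                [0.0, 0.2, 2.0]])   # background (table/wall)
skin = np.array([False, True, False])
print(segment_puppet_points(pts, skin))  # only the puppet point survives
```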

Pose Tracker The ICP algorithm aligns two sets of 3D points by iteratively determining point-to-point correspondences between the sets and solving for the rigid transform that best aligns them.
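A single ICP iteration can be sketched as follows: match each source point to its nearest destination point, then compute the closed-form rigid alignment with the SVD-based (Kabsch) solution. This is a generic point-to-point ICP step, not the paper's exact tracker.

```python
import numpy as np

# One point-to-point ICP iteration: nearest neighbors + Kabsch alignment.
def icp_step(src, dst):
    # Brute-force nearest-neighbor correspondences (for clarity, not speed).
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    matched = dst[np.argmin(d2, axis=1)]
    # Closed-form rigid transform aligning src onto its matches.
    mu_s, mu_d = src.mean(0), matched.mean(0)
    H = (src - mu_s).T @ (matched - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return src @ R.T + t

# Cube corners displaced by a small translation recover exactly in one step.
dst = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1],
                [1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]], dtype=np.float64)
src = dst + np.array([0.1, 0.0, 0.0])
aligned = icp_step(src, dst)
print(np.allclose(aligned, dst, atol=1e-6))  # True
```

In practice the step is repeated until the correspondences stop changing; large inter-frame motion breaks the nearest-neighbor assumption, which is consistent with the tracking-loss restriction noted below.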

RESTRICTIONS Our puppet-tracking algorithm does impose a few restrictions on puppeteers. It assumes that puppets are rigid objects, so it cannot track articulated joints or easily deformable materials such as cloth. In addition, if puppeteers move the puppets very quickly over long distances, our system can lose track of them.

USER EXPERIENCE Puppeteers learned not to occlude too much of each puppet with their hands. Otherwise, the point clouds could become too small for accurate pose detection, and our system would either render a puppet in the wrong pose or stop rendering it altogether. Puppeteers also learned how quickly they could move each puppet without losing tracking.

FUTURE WORK Articulated Puppets. Deformable Puppets. Multiple Puppeteers. Remote, collaborative puppeteering sessions using our system.