3D Puppetry: A Kinect-based Interface for 3D Animation
Robert T. Held, Ankit Gupta, Brian Curless, Maneesh Agrawala
University of California, Berkeley / University of Washington
UIST '12
ABSTRACT
INTRODUCTION
SYSTEM OVERVIEW
The system runs in three stages: Setup, Capture, and Render.
Puppet Database
The Puppet Database includes a 3D model of each physical puppet, along with roughly 30 color and depth template images per puppet and the pose of the puppet in each image.
Setup
For tracking and rendering purposes, our system maintains a Puppet Database of defining characteristics for each trackable puppet. Before running the system for the first time, each puppeteer must build a color model of their skin. The Point-cloud Segmenter uses this model to isolate points that belong to the puppets rather than the puppeteer's hands, which is necessary for accurate tracking.
CAPTURE
Render In the Render module, our system uses the captured poses to render the corresponding 3D model for each physical puppet in the virtual set. This module provides camera and lighting controls, and the option to swap 3D backgrounds.
SIFT algorithm
The image is convolved with Gaussian filters at different scales, and the differences of successive Gaussian-blurred images are taken. Keypoints are then taken as the maxima/minima of the Difference of Gaussians (DoG) that occur across multiple scales.
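The scale-space construction above can be sketched in a few lines. This is an illustrative single-octave sketch, not the paper's implementation: the function name and the parameters `sigma0` and `k` are assumptions, and SciPy's `gaussian_filter` stands in for the Gaussian convolution.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_pyramid(image, num_scales=4, sigma0=1.6, k=2 ** 0.5):
    """Blur the image at successively larger sigmas and subtract
    adjacent blurred images to form a Difference-of-Gaussians stack."""
    blurred = [gaussian_filter(image.astype(float), sigma0 * k ** i)
               for i in range(num_scales + 1)]
    # each DoG image is the difference of two adjacent blur levels
    return [b2 - b1 for b1, b2 in zip(blurred, blurred[1:])]
```

Each returned array approximates the image filtered by a band-pass kernel at one scale; keypoint detection then searches these arrays jointly.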
SIFT
Keypoints
Candidates are found by comparing each pixel in the DoG images to its eight neighbors at the same scale and the nine corresponding pixels in each of the two neighboring scales. If the pixel value is the maximum or minimum among all 26 compared pixels, it is selected as a candidate keypoint.
Interpolation of nearby data for accurate position
A Taylor expansion of the DoG is computed at the candidate keypoint's offset. If the magnitude of the interpolated value is less than 0.03, the candidate keypoint is discarded as low-contrast.
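As a rough sketch of this low-contrast rejection step (the function name and argument layout are assumptions; the 0.03 threshold follows the slide):

```python
import numpy as np

def passes_contrast_test(dog_value, gradient, offset, threshold=0.03):
    """Evaluate the Taylor expansion of the DoG at the interpolated
    offset x_hat:  D(x_hat) = D + (1/2) * grad(D) . x_hat.
    Candidates whose |D(x_hat)| falls below the threshold are
    discarded as low-contrast keypoints."""
    value = dog_value + 0.5 * np.dot(gradient, offset)
    return bool(abs(value) >= threshold)
```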
Orientation assignment
Each keypoint is assigned one or more orientations based on local image gradient directions. This is the key step in achieving invariance to rotation.
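A minimal sketch of orientation assignment, assuming a square patch around the keypoint and a 36-bin histogram; the function name is hypothetical, and refinements such as Gaussian weighting and secondary-peak keypoints are omitted.

```python
import numpy as np

def dominant_orientation(patch, num_bins=36):
    """Build a histogram of gradient orientations over a patch around
    the keypoint, weighted by gradient magnitude, and return the angle
    (in degrees) of the strongest bin's centre."""
    dy, dx = np.gradient(patch.astype(float))
    magnitude = np.hypot(dx, dy)
    angle = np.degrees(np.arctan2(dy, dx)) % 360.0
    hist, _ = np.histogram(angle, bins=num_bins, range=(0, 360),
                           weights=magnitude)
    peak = np.argmax(hist)
    return (peak + 0.5) * (360.0 / num_bins)
```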
Local image descriptor
Matching Image Features
Point-cloud Segmenter
The Point-cloud Segmenter first removes the background points and the hands from the cloud and then splits the remaining points into separate clouds for each physical puppet.
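The filtering part of this pipeline could be sketched as below. This is not the paper's implementation: the function name, the `depth_limit` cutoff, and the `skin_model` callable (a stand-in for the per-puppeteer skin color model built during Setup) are all assumptions, and the final per-puppet clustering step is omitted.

```python
import numpy as np

def segment_puppet_points(points, colors, skin_model, depth_limit=1.5):
    """Drop points beyond the capture volume, then drop points whose
    color the skin model classifies as hand; the remaining points are
    assumed to belong to the puppets.  `skin_model` maps an (N, 3)
    color array to a boolean "is skin" mask."""
    in_volume = points[:, 2] < depth_limit   # z = distance from camera
    not_skin = ~skin_model(colors)
    keep = in_volume & not_skin
    return points[keep], colors[keep]
```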
Pose Tracker
The ICP (Iterative Closest Point) algorithm aligns two sets of 3D points by iteratively determining point-to-point correspondences between the sets and solving for the rigid transform that best aligns them.
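One ICP iteration, sketched with a brute-force nearest-neighbour match and the standard SVD-based least-squares rigid fit (Kabsch). This illustrates the general technique, not the paper's tracker; the function name is hypothetical.

```python
import numpy as np

def icp_step(source, target):
    """One ICP iteration: match each source point to its nearest
    target point, then compute the least-squares rotation R and
    translation t aligning the matched pairs."""
    # nearest-neighbour correspondences (brute force for clarity)
    d = np.linalg.norm(source[:, None, :] - target[None, :, :], axis=2)
    matched = target[np.argmin(d, axis=1)]
    # rigid alignment of the matched pairs via SVD (Kabsch)
    mu_s, mu_t = source.mean(0), matched.mean(0)
    H = (source - mu_s).T @ (matched - mu_t)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_t - R @ mu_s
    return R, t
```

In practice the step is repeated, re-matching correspondences after each transform, until the alignment converges.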
RESTRICTIONS
Our puppet-tracking algorithm does impose a few restrictions on puppeteers. It assumes that puppets are rigid objects; it cannot track articulated joints or easily deformable materials such as cloth. In addition, if puppeteers move the puppets very quickly over long distances, our system can lose track of them.
USER EXPERIENCE
Puppeteers learned not to occlude too much of each puppet with their hands. Otherwise, the point clouds could become too small for accurate pose detection, and our system would either render a puppet in the wrong pose or stop rendering it altogether. They also learned how quickly they could move each puppet without losing tracking.
FUTURE WORK Articulated Puppets. Deformable Puppets.
Multiple Puppeteers. Remote, collaborative puppeteering sessions using our system.