Motion Capture: History, Approaches, Applications, and Future Trends


Motion Capture: History, Approaches, Applications, and Future Trends
By: Ryan Frost
MSIM 742: Visualization II
Old Dominion University

Introduction
Motion capture is the recording of human body movement (or other movement) for immediate or delayed analysis and playback. The term also refers to the technique of recording the actions of human actors and using that information to animate digital character models in 3D animation.

History
Interest in studying motion dates back to the 1400's: Leonardo da Vinci (1400's), Galileo (1500's), Borelli (1600's), followed by Newton, Bernoulli, Euler, Poiseuille, and Young, and in the 1900's Muybridge and Marey.
----- In the late 1400's Leonardo da Vinci studied and sought to describe the mechanics of standing, walking, sitting, and jumping. In the 1500's Galileo attempted to analyze human biomechanical functions through mathematical functions and expressions. Then in the 1600's Borelli was able to identify the forces required for equilibrium at various joints of the human body. These early pioneers made great strides in the analysis of motion and human biomechanics, and a number of equally famous scientists followed over the next couple of centuries, continuing to deepen the understanding of motion and its complex mechanics. In the late 1800's and early 1900's Eadweard Muybridge was the first to use motion-picture capture to better understand, analyze, and visualize motion. Muybridge developed a photographic processing and electrical trigger apparatus that allowed for rapid-fire motion capture. His famous series of pictures, 'The Horse in Motion', was used to settle a long-standing debate over whether all four of a horse's hooves leave the ground during a gallop. An interesting note is that earlier scientists had theorized that the legs might all leave the ground when they are fully extended forward and back; using just one of the still images from the series, it was determined that the hooves leave the ground when they are all tucked under the horse, as it switches from 'pulling' with the front legs to 'pushing' with the back legs. Etienne-Jules Marey expanded on Muybridge's work by developing animated photography, which sought to record several phases of movement on one photographic surface. With this new method, Marey pioneered modern motion analysis. Marey also invented the first motion capture suit, a black tight-fitting suit with metal strips or white lines attached to it (Figure 2). With this suit and Marey's method of recording multiple frames on one photographic plate, the beginnings of motion capture and motion analysis can easily be seen.

Applications of Motion Capture
Entertainment (movies, games)
Medical field
Sports analysis and enhancement
Industrial field: increased process efficiency, product design, ergonomics evaluation
Research and defense
The entertainment industry (movies and games) is perhaps the most visible and recognized user of motion capture technology. Through motion capture, films can simulate or approximate the look of live-action cinema with nearly photorealistic digital character models. There is a bit of a dispute between computer animation studios that use motion capture and those that do not; one of the debates concerns the artistic qualities that motion capture seems to sidestep through technology, and some artists feel that creating a film purely via motion capture is cheating. In the medical field, motion capture can assist with the diagnosis and treatment of physical ailments, and the captured data can show a patient's progression: data from previous sessions can be compared in real time to see whether a rehabilitation approach is having the desired result. Motion capture is also well suited to a wide range of sports applications in research, education, training, and performance enhancement; with sports becoming ever more competitive and performance levels always rising, motion capture can be used to break down every detail of a given motion in order to make performance-enhancing modifications. The industrial field and the research and defense fields are using motion capture more and more for faster and better analysis and design, among other things.

Approaches to Motion Capture
Marker-based approaches:
Goniometers
Magnetic
Inertial
Optical
Goniometers: a goniometer is an instrument that measures angles. A sort of exoskeleton is strapped to the subject's body and moves with the subject's rigid body segments; the skeletal model can then be built from this joint-angle data. A big issue with this approach is that the exoskeleton can greatly hinder the subject's free movement.
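Since a goniometer rig reports joint angles directly, the usual way to turn those readings into a pose is forward kinematics: chain each joint's rotation down the limb and accumulate positions. The sketch below is a minimal, hypothetical 2D illustration of that idea in Python; it is not part of the presentation, and the segment lengths and angles are made-up example values.

```python
import math

def forward_kinematics_2d(joint_angles_deg, segment_lengths, base=(0.0, 0.0)):
    """Chain planar joint angles into world-space joint positions.

    joint_angles_deg: angle of each joint relative to its parent segment.
    segment_lengths:  length of the segment that follows each joint.
    Returns the list of joint positions, starting at the base.
    """
    x, y = base
    heading = 0.0                       # accumulated orientation of the chain
    positions = [(x, y)]
    for angle, length in zip(joint_angles_deg, segment_lengths):
        heading += math.radians(angle)  # each joint rotates relative to its parent
        x += length * math.cos(heading)
        y += length * math.sin(heading)
        positions.append((x, y))
    return positions

# Hypothetical hip-knee-ankle chain: angles in degrees, lengths in meters.
print(forward_kinematics_2d([-80.0, 30.0, 10.0], [0.45, 0.42, 0.20]))
```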

Magnetic
Magnetic motion capture systems use sensors placed on the body to measure the low-frequency magnetic fields generated by a transmitting source. The electronic control units are networked with a host computer, which uses a software driver to represent the measured positions and rotations in 3D space. Issues with magnetic motion capture include the fact that magnetic field strength falls off rapidly as the distance from the generating source increases, so the usable capture area is quite small. Great care also has to be taken because any magnetic material in the capture space can distort the results.
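To make the range limitation concrete: the far field of a magnetic dipole source falls off roughly with the cube of the distance, so doubling the distance from the transmitter leaves only about an eighth of the field strength. A small sketch of that falloff follows; the reference distance and the sampled distances are arbitrary example values, not tied to any particular system.

```python
def relative_dipole_field(distance_m, ref_distance_m=1.0):
    """Relative field strength of a dipole source, normalized to 1.0
    at ref_distance_m; the far field falls off as 1/r^3."""
    return (ref_distance_m / distance_m) ** 3

for d in (1.0, 2.0, 3.0, 4.0):
    print(f"{d:.0f} m from transmitter: {relative_dipole_field(d):.3f} of reference field")
```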

Inertial
Advances in miniaturized and micro-machined accelerometers and rate-sensor technology have made this approach possible. The sensors, located at predetermined anatomical points on the body, provide angular-rate and acceleration data at each point. Issues with this approach include the sophistication and cost of the sensors, and noise and bias errors are common.
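The bias problem is easy to see in a simulation: integrating an angular-rate signal that carries even a small constant bias produces an orientation error that grows steadily with time, which is why inertial systems need periodic correction. The following sketch uses made-up bias and noise figures purely for illustration; it is not drawn from the presentation.

```python
import random

def integrate_gyro(true_rate_dps, bias_dps, noise_std_dps, dt, steps, seed=0):
    """Integrate a noisy, biased angular-rate signal into an angle estimate."""
    random.seed(seed)
    true_angle = est_angle = 0.0
    for _ in range(steps):
        measured = true_rate_dps + bias_dps + random.gauss(0.0, noise_std_dps)
        true_angle += true_rate_dps * dt
        est_angle += measured * dt       # naive dead-reckoning integration
    return est_angle - true_angle        # accumulated drift in degrees

# Hypothetical 60-second capture at 100 Hz with a 0.5 deg/s bias.
print(f"drift after 60 s: {integrate_gyro(10.0, 0.5, 0.1, 0.01, 6000):.1f} degrees")
```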

Optical
Optical motion capture, probably the most common approach, uses reflective (passive) or strobing (active) markers placed at predetermined anatomical positions on the subject. Multiple cameras track the markers, which are aligned with bony landmarks, and the resulting anatomical points are used to recreate the subject as a 3D model. The biggest issues with optical markers are marker occlusion and markers falling off. (Slide images: the set of King Kong, upper left, and the set of I, Robot, upper right.) As a result of the issues associated with the markered approaches, the latest trend in motion capture is the development of markerless motion capture systems.
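Once two or more calibrated cameras see the same marker, its 3D position is commonly recovered by triangulation. As a generic illustration (not the presentation's own pipeline), here is a linear, DLT-style triangulation sketch in Python with NumPy; the camera matrices and the test point are placeholder values.

```python
import numpy as np

def triangulate(projection_matrices, pixels):
    """Linear (DLT) triangulation: recover a 3D point from its 2D
    projections in several calibrated cameras.

    projection_matrices: list of 3x4 camera matrices P.
    pixels: list of (u, v) observations, one per camera.
    """
    rows = []
    for P, (u, v) in zip(projection_matrices, pixels):
        rows.append(u * P[2] - P[0])    # each observation contributes two
        rows.append(v * P[2] - P[1])    # linear constraints on the 3D point
    A = np.stack(rows)
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                          # null-space vector, homogeneous 3D point
    return X[:3] / X[3]

# Placeholder example: two cameras, the second shifted along the x axis.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
point = np.array([0.2, 0.1, 2.0, 1.0])
pix = [(P @ point)[:2] / (P @ point)[2] for P in (P1, P2)]
print(triangulate([P1, P2], pix))       # should recover ~[0.2, 0.1, 2.0]
```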

MIT’s Pfinder Markerless Approach
2D blob model. One of the earlier approaches to markerless motion capture came from a group at MIT that developed a system called Pfinder (or Person Finder) in 1996. For its time, Pfinder provided an accurate method for person segmentation, tracking, and interpretation, creating 2D models of human bodies from single-camera video. Pfinder uses features called 'blobs' connected together to form the human model. Each blob has a spatial (x, y) and color distribution, as well as a support map that indicates which pixels of the image are members of that blob. The capturing process begins with the system learning the background (with no human in it). The scene is modeled as a texture surface; each point on the texture surface is associated with a mean color value and a distribution about that mean. In each frame the visible portions of the scene are recursively updated, which compensates for parts of the scene being covered by the human. The changed region of the image is found and analyzed in order to begin building a blob model of the person. Pfinder uses a 2D contour shape-analysis process that attempts to identify the head, hand, and foot locations; if one of these locations is found, a new blob is placed there. By today's standards Pfinder would be considered very crude. Additionally, this approach did not track quick motion well, and sudden changes in the scene were often rendered incorrectly. http://www.youtube.com/watch?v=BVwKPnIErrA
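The background-learning step in this family of systems boils down to a per-pixel statistical test: each pixel keeps a running mean and a spread about that mean, and a pixel whose current value falls too far from its model is flagged as belonging to the person, while background pixels keep updating recursively. The sketch below is a simplified single-channel version of that idea, not Pfinder's actual code; the learning rate and threshold are invented.

```python
import numpy as np

def update_background(mean, var, frame, learn_rate=0.05, k=3.0):
    """One step of a per-pixel background model.

    mean, var: per-pixel running mean and variance of the background (float arrays).
    frame:     current grayscale frame, same shape.
    Returns (foreground_mask, new_mean, new_var).
    """
    diff = frame - mean
    foreground = diff ** 2 > k ** 2 * var           # pixel far from its background model
    bg = ~foreground
    # Recursively update only the pixels currently believed to be background.
    mean = np.where(bg, (1 - learn_rate) * mean + learn_rate * frame, mean)
    var = np.where(bg, (1 - learn_rate) * var + learn_rate * diff ** 2, var)
    return foreground, mean, var

# Toy example: a flat gray background with one bright "person" pixel.
mean = np.full((4, 4), 100.0)
var = np.full((4, 4), 25.0)
frame = mean.copy()
frame[2, 2] = 200.0
fg, mean, var = update_background(mean, var, frame)
print(fg.astype(int))
```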

Stanford Markerless Motion Capture Project
3D voxel reconstruction from silhouettes. One of the best-known current projects in markerless motion capture is the Stanford Markerless Motion Capture Project. Their approach relies on images obtained from multiple synchronized cameras placed around the subject to estimate the subject's pose. The process uses 2D foreground silhouette data from the original images, as well as 3D data in the form of voxels. The foreground silhouette data is obtained from the original images by performing background subtraction. A voxel model is created by projecting the foreground silhouettes of the 2D images onto a 3D grid of points: a point in the grid is considered a voxel if it lies inside the silhouette in all of the images. Next the 3D voxel data is segmented into the different body parts, and 1D splines are fitted to the principal axis of each body part. The position of each body segment is expressed as the position of its attached coordinate frame with respect to its parent segment and is represented by a transformation matrix with a rotational component and a translational component; as shown in (c), the trunk is used as the base coordinate system and all of the segments link back to it. At this point an accurate 3D skeleton model has been created, as seen in (d). The next step is to estimate the super-quadric parameters of the body segments in order to build the 3D super-quadric model (e). The shape of the super-quadric model is determined by maximizing the overlap between the super-quadric model and the voxels, which is done by minimizing the distance d between each voxel and the super-quadric surface. Voxels on the surface of the super-quadric have a distance of zero (d = 0); voxels inside have a negative distance, with a voxel on the axis of the super-quadric taking the value d = -1; for voxels outside the super-quadric, the distance increases exponentially as the voxel moves farther from the surface. The optimal pose of the super-quadric model is the one that minimizes d. http://www.stanford.edu/~stefanoc/Markerless/Markerless.html
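The voxel-reconstruction step described above is essentially silhouette carving: every point of a 3D grid is projected into each camera and kept only if it lands inside the foreground silhouette in every view. Below is a schematic sketch of that test, assuming each view is given as a 3x4 projection matrix plus a binary silhouette mask; the interface is invented for illustration and is not taken from the Stanford code.

```python
import numpy as np

def carve_visual_hull(grid_points, views):
    """Keep the grid points whose projection lands inside every silhouette.

    grid_points: (N, 3) array of candidate 3D points.
    views: list of (P, mask) pairs, where P is a 3x4 projection matrix and
           mask is a binary silhouette image (True = foreground).
    """
    homog = np.hstack([grid_points, np.ones((len(grid_points), 1))])
    keep = np.ones(len(grid_points), dtype=bool)
    for P, mask in views:
        proj = homog @ P.T                          # project all points at once
        u = (proj[:, 0] / proj[:, 2]).round().astype(int)
        v = (proj[:, 1] / proj[:, 2]).round().astype(int)
        h, w = mask.shape
        inside_img = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        in_sil = np.zeros(len(grid_points), dtype=bool)
        in_sil[inside_img] = mask[v[inside_img], u[inside_img]]
        keep &= in_sil                              # must be foreground in all views
    return grid_points[keep]
```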

Human Shape and Pose from Images Using SCAPE
SCAPE models: template, body pose, body shape. Another approach to markerless motion capture is being developed by a team from Brown University, UC Santa Cruz, and Intel Research. A big difference of their approach from others is that they skip the voxel representation of the model; they show that a detailed parametric model can be estimated directly from the image data. The first phase, the learning phase, uses the SCAPE (Shape Completion and Animation of People) model. SCAPE builds on a large database of pre-existing human models. The data for the models were captured with a Cyberware WBX whole-body scanner, which captures range scans from four directions simultaneously; each model contains approximately 200K points. These points are used to create two types of mesh models: a pose set, which contains scans of 70 poses, and a body-shape set, which contains scans of 37 different people in a similar pose. A single pose model is designated the template model, and all other models are brought into correspondence with it through Correlated Correspondence. This produces a set of meshes with the same topology, so that a single model can easily be morphed into different shapes and poses. An algorithm automatically compares these meshes and extracts a proper skeletal configuration; the result is a tree-structured articulated skeleton with 16 parts (depicted in the far-right column of the picture). The next step is to calculate the deformations between the template and the other mesh models, which gives the formula for transforming the template into the other meshes; this is done for both the pose set and the body-shape set. Finally, an algorithm 'learns' the coefficients of a linear mapping between poses and shapes, and this linear mapping is what is used to predict a new body pose not present in the database.
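That last learning step is, at heart, a regression: given training pairs of pose parameters and the corresponding mesh deformations, fit linear coefficients so that a new, unseen pose can be mapped to a predicted deformation. The sketch below is a deliberately simplified least-squares version of that idea; the feature and deformation dimensions are arbitrary and do not reflect SCAPE's actual parameterization.

```python
import numpy as np

def learn_linear_mapping(pose_features, deformations):
    """Fit W, b so that deformation ~= pose_features @ W + b (least squares).

    pose_features: (n_examples, n_pose_params) training poses.
    deformations:  (n_examples, n_deform_params) matching deformations.
    """
    X = np.hstack([pose_features, np.ones((len(pose_features), 1))])  # bias column
    coeffs, *_ = np.linalg.lstsq(X, deformations, rcond=None)
    W, b = coeffs[:-1], coeffs[-1]
    return W, b

def predict_deformation(pose, W, b):
    """Predict the deformation for a pose not present in the training set."""
    return pose @ W + b

# Toy data with made-up dimensions: 70 training poses, 9 pose params, 12 deform params.
rng = np.random.default_rng(0)
poses = rng.normal(size=(70, 9))
defs = poses @ rng.normal(size=(9, 12)) + 0.1 * rng.normal(size=(70, 12))
W, b = learn_linear_mapping(poses, defs)
print(predict_deformation(rng.normal(size=9), W, b).shape)   # -> (12,)
```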

Human Shape and Pose from Images Using SCAPE
(Slide images: input images overlaid with the estimated body model; overlap (yellow) between the silhouette (red) and the estimated model (blue); recovered model from each camera view.) In order to translate image data to a model in the SCAPE database, silhouettes are used to create a crude model similar to what was described in the Stanford project. Unlike the Stanford project, however, a voxel model is not created. Rather, the joint angles together with the mean body-shape parameters of the silhouette model are used to initialize a stochastic search over the SCAPE parameters. A cost function measures how well a hypothesized model fits the image observations: the silhouette model is compared to a silhouette generated from the SCAPE model, and pixels in non-overlapping regions of one silhouette are penalized by the shortest distance to the other silhouette, and vice versa. This is done by producing a Chamfer distance map for each silhouette. The predicted silhouette (from the SCAPE model) should not exceed the image silhouette, while at the same time covering as much of it as possible.
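The silhouette cost described above can be sketched with distance transforms: build a Chamfer (distance) map for each silhouette, then penalize the pixels of each silhouette that fall outside the other by their distance to it. A minimal version using SciPy's Euclidean distance transform follows; the toy silhouettes are invented, and this is only an approximation of the paper's actual cost function.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def chamfer_silhouette_cost(observed_sil, predicted_sil):
    """Symmetric Chamfer-style cost between two binary silhouettes.

    Pixels of one silhouette that fall outside the other are penalized by
    their distance to the nearest pixel of the other silhouette.
    """
    # Distance from every pixel to the nearest foreground pixel of each silhouette.
    dist_to_observed = distance_transform_edt(~observed_sil)
    dist_to_predicted = distance_transform_edt(~predicted_sil)
    # Predicted pixels outside the observed silhouette, and vice versa.
    cost_pred = dist_to_observed[predicted_sil].sum()
    cost_obs = dist_to_predicted[observed_sil].sum()
    return cost_pred + cost_obs

# Toy silhouettes: two squares offset by a few pixels.
obs = np.zeros((50, 50), dtype=bool); obs[10:30, 10:30] = True
pred = np.zeros((50, 50), dtype=bool); pred[14:34, 14:34] = True
print(chamfer_silhouette_cost(obs, pred))
```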

Future Trends of Motion Capture
Markerless motion capture
In general:
Increased speed
Increased capture area
Decreased cost
Multiple-character motion capture

Resources
Markerless Motion Capture:
Organic Motion – http://www.organicmotion.com/
Mova – http://www.mova.com/
MaMoCa – http://www.mamoca.com
Optical Motion Capture:
VICON – http://www.vicon.com/
MotionAnalysis – http://www.motionanalysis.com
OptiTrack – http://www.naturalpoint.com/optitrack/
Qualisys Motion Capture Systems – http://www.qualisys.com
PhaseSpace Motion Capture – http://www.phasespace.com
Goniometer Motion Capture:
Animazoo – http://www.animazoo.com
Magnetic Motion Capture:
Ascension – http://www.ascension-tech.com/
Polhemus – http://www.polhemus.com
Inertial Motion Capture:
Animazoo – http://www.animazoo.com
InterSense – http://www.isense.com/
Moven – http://www.moven.com
Motion analysis software:
Autodesk – http://usa.autodesk.com (3D CAD and animation software)
C-Motion, Inc. – http://www.c-motion.com (3D motion analysis software)

References
[1] D. Sturman, A Brief History of Motion Capture for Computer Character Animation, http://www.siggraph.org/education/materials/HyperGraph/animation/character_animation/motion_capture/history1.htm
[2] M. Furniss, Motion Capture, http://web.mit.edu/comm-forum/papers/furniss.html
[3] Human Motion Analysis, http://www.xsens.com/index.php?mainmenu=technology&submenu=research&subsubmenu=human_motion
[4] Eadweard Muybridge, Wikipedia, http://en.wikipedia.org/wiki/Eadweard_Muybridge
[5] Adventures in Cybersound, http://www.acmi.net.au/AIC/MAREY_BIO.html
[6] Animazoo, http://www.animazoo.com/
[7] Motion capture, Wikipedia, http://en.wikipedia.org/wiki/Motion_capture
[8] A.G. Kirk, J.F. O'Brien, D.A. Forsyth, Skeletal Parameter Estimation from Optical Motion Capture Data, IEEE CVPR 2005, Vol. 2, 20-25 June 2005, pp. 782-788.
[9] Meta Motion, http://www.metamotion.com/metamotion.htm
[10] J.F. O'Brien, R.E. Bodenheimer, Jr., G.J. Brostow, J.K. Hodgins, Automatic Joint Parameter Estimation from Magnetic Motion Capture Data, Graphics Interface 2000.
[11] H. Zhou, H. Hu, Kinematic Model Aided Inertial Motion Tracking of Human Upper Limb, IEEE Conference on Information Acquisition, June 27 - July 3, 2005.
[12] Moven, http://www.moven.com
[13] Organic Motion Launches Markerless Motion Capture System, Studio Daily (www.studiodaily.com), August 9, 2007.
[14] Organic Motion, http://www.organicmotion.com
[15] L. Mundermann, S. Corazza, T.P. Andriacchi, The Evolution of Methods for the Capture of Human Movement Leading to Markerless Motion Capture for Biomechanical Applications, Department of Mechanical Engineering, Stanford University, 2006.
[16] C. Wren, A. Azarbayejani, T. Darrell, A. Pentland, Pfinder: Real-Time Tracking of the Human Body, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 19, No. 7, July 1997, pp. 780-785.
[17] A. Sundaresan, Towards Markerless Motion Capture: Model Estimation, Initialization and Tracking, University of Maryland, 2007.
[18] Stanford Markerless Motion Capture Project, http://www.stanford.edu/~stefanoc/Markerless/Markerless.html
[19] S. Corazza, L. Mundermann, A.M. Chaudhari, T. Demattio, C. Cobelli, T.P. Andriacchi, A Markerless Motion Capture System to Study Musculoskeletal Biomechanics: Visual Hull and Simulated Annealing Approach, Annals of Biomedical Engineering, Vol. 34, No. 6, June 2006, pp. 1019-1029.
[20] A.O. Balan, L. Sigal, M. Black, J.E. Davis, H.W. Haussecker, Detailed Human Shape and Pose from Images, IEEE Conference on Computer Vision and Pattern Recognition, 17-22 June 2007, pp. 1-8.
[21] D. Anguelov, P. Srinivasan, D. Koller, S. Thrun, J. Rodgers, J. Davis, SCAPE: Shape Completion and Animation of People, ACM Trans. Graphics, 24(3):408-416, 2005.
[22] Motion Analysis, http://www.motionanalysis.com/html/animation/film.html
[23] PhaseSpace, http://www.phasespace.com/applicationsMain.html
[24] Improving Quality with Virtual Technology, http://www.ford.com/innovation/automotive-technology/developing-better-ideas/virtual-ergonomics/391-virtual-ergonomics

Questions?