Centre for Vision, Speech & Signal Processing, University of Surrey
Adrian Hilton, John Collomosse, Peng Huang, Dan Casas, Ed Brookes, Chris Budd, Marco Volino (BBC iCASE)

4D Performance Modelling & Animation
Performance Capture → 4D Model → Interactive Animation

Vision, Speech & Signal Processing
CVSSP
Director: Professor Josef Kittler, FREng
‣ IET Faraday Medal 2008
‣ IAPR KS Fu Award 2006
‣ Distinguished Professor 2004
Research focus: multidimensional signal processing, interpretation and understanding
Signals: audio, speech, images, video, 3D video, 3D medical images, 3D volumetric sequences
‣ Founded in 1986
‣ 110 people (15 academics, 30 post-docs, 65 PhDs)
‣ Top ranking in RAE 2008 assessment (Electronic Eng.)

Vision, Speech & Signal Processing
CVSSP Structure
A-Lab: Machine Audition
B-Lab: Biometrics
C-Lab: Cognitive Vision
I-Lab: Multimedia Communications
M-Lab: Medical Imaging
V-Lab: Visual Media
Core Science: Signal Processing, Image & Video Processing, Pattern Recognition, Computer Vision, Machine Learning

Vision, Speech & Signal Processing
V-Lab: Visual Media
Prof. Adrian Hilton, Dr. John Collomosse, Dr. Krystian Mikolajczyk
Enabling technologies for next generation communication & entertainment
‣ Visual content production
‣ Audio-visual search and retrieval
‣ Visual interaction & communication
‣ 3D and free-viewpoint video
‣ Visual content analysis
‣ Video-based animation production
‣ Human shape & motion analysis
Applications: film, broadcast and interactive entertainment, fashion, biomechanics
Projects:
‣ Visual Media Platform Grant (EPSRC)
‣ BBC Audio-Visual Partnership: audio-visual 3D media production
‣ Body shape recognition for online fashion (EPSRC)
‣ Production and delivery of interactive 3D content (EU)
‣ SCENE: novel scene representation for 3D media (EU)
‣ SyMMM: on-set metadata in movie production (TSB)
‣ VBAP: video-based animation production (EPSRC)
Collaborators: BBC, Sony, Framestore, DNeg, The Foundry, Filmlight, Vicon, Bodymetrics
3D video, free-viewpoint video, video stylisation, visual search, object tracking

Vision, Speech & Signal Processing Visual Content Production

Performance Capture
Multiple view video → 3D video [Starck IEEE CGA’07]
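The capture pipeline itself is the one described in [Starck IEEE CGA’07]; purely to illustrate how multiple view silhouettes constrain 3D shape, the sketch below carves a voxel visual hull. The silhouette images, 3x4 projection matrices and grid bounds are placeholder inputs, not the studio configuration.

```python
import numpy as np

def visual_hull(silhouettes, projections, grid_min, grid_max, res=64):
    """Carve a voxel visual hull: a voxel is kept only if it projects inside
    every silhouette. 'projections' are 3x4 camera matrices (placeholders)."""
    xs = np.linspace(grid_min[0], grid_max[0], res)
    ys = np.linspace(grid_min[1], grid_max[1], res)
    zs = np.linspace(grid_min[2], grid_max[2], res)
    X, Y, Z = np.meshgrid(xs, ys, zs, indexing="ij")
    pts = np.stack([X, Y, Z, np.ones_like(X)], axis=-1).reshape(-1, 4)  # homogeneous voxel centres
    inside = np.ones(len(pts), dtype=bool)
    for sil, P in zip(silhouettes, projections):
        uvw = pts @ P.T                      # project voxel centres into this view
        uv = uvw[:, :2] / uvw[:, 2:3]        # perspective divide
        u = np.round(uv[:, 0]).astype(int)
        v = np.round(uv[:, 1]).astype(int)
        h, w = sil.shape
        valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        hit = np.zeros(len(pts), dtype=bool)
        hit[valid] = sil[v[valid], u[valid]] > 0
        inside &= hit                        # must fall inside all silhouettes
    return inside.reshape(res, res, res)
```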

I: Studio Capture
SurfCap 3D Video Database [Starck et al. CGA’07]

II: Global Non-rigid Alignment
3D video sequences → shape similarity tree → 4D model [Huang CVPR’11, Budd 3DIMPVT’11]
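A minimal sketch of the shape similarity tree idea, assuming each frame is reduced to a fixed-length shape descriptor (a placeholder here): build a minimum spanning tree over pairwise descriptor distances so that non-rigid alignment is only ever performed between the most similar shapes rather than sequentially. The actual similarity measure and alignment are those of [Huang CVPR’11, Budd 3DIMPVT’11].

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def shape_similarity_tree(descriptors):
    """Build a tree over frames so alignment propagates between the most
    similar shapes. 'descriptors' is an (N, D) array of per-frame shape
    descriptors (a stand-in for the real dynamic shape measure)."""
    dist = np.linalg.norm(descriptors[:, None, :] - descriptors[None, :, :], axis=-1)
    mst = minimum_spanning_tree(dist)        # sparse matrix of selected tree edges
    edges = np.transpose(mst.nonzero())      # (frame_i, frame_j) pairs to align
    return edges                             # align along edges, chaining correspondences to a root frame
```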

Interactive Animation
Interactive control of character animation:
‣ editing motion
‣ high-level parameterisation of motion
‣ transitions between motions
4D model → Interactive Animation

Skeletal Character Animation
Motion editing: Witkin’95, Bruderlin’95
Motion parameterisation: Rose’98
Motion graphs: Kovar & Gleicher’02, Arikan & Forsyth’02, Heck & Gleicher’07

3D video concatenation [Huang et al. CVPR’09]
Key start → key end
Surface motion graph representation
Key-frame animation [Starck SCA’05, CGA’07]
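Concatenation between key frames amounts to a path search through the surface motion graph. As a rough sketch (not the optimisation used in [Huang et al. CVPR’09]), a Dijkstra search over transition costs finds the cheapest frame path from a user-selected start key to an end key:

```python
import heapq

def cheapest_path(graph, start, goal):
    """Dijkstra over a surface motion graph.
    graph: dict mapping a frame id to a list of (next_frame, transition_cost)."""
    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path                 # total cost and frame sequence
        if node in visited:
            continue
        visited.add(node)
        for nxt, weight in graph.get(node, []):
            if nxt not in visited:
                heapq.heappush(frontier, (cost + weight, nxt, path + [nxt]))
    return float("inf"), []                   # no route between the keys
```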

3D video concatenation [Huang et al. CVPR’09]

4D motion editing
Goal: interactive editing of 4D models
‣ space-time key-frame editing
‣ Laplacian deformation framework
‣ learnt 4D motion space [Tejera CVMP’11]
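A minimal sketch of least-squares Laplacian editing, the kind of deformation framework referred to above: differential coordinates of the mesh are preserved while handle vertices are softly pulled to their edited positions. The uniform Laplacian, handle indices and weight are illustrative choices, not the formulation of [Tejera CVMP’11].

```python
import numpy as np

def laplacian_deform(vertices, neighbors, handles, targets, weight=10.0):
    """Least-squares Laplacian editing with soft positional constraints.
    neighbors: per-vertex list of neighbouring vertex indices (mesh connectivity)."""
    vertices = np.asarray(vertices, dtype=float)
    n = len(vertices)
    L = np.zeros((n, n))
    for i, nbrs in enumerate(neighbors):
        L[i, i] = 1.0
        for j in nbrs:
            L[i, j] = -1.0 / len(nbrs)        # uniform (umbrella) Laplacian weights
    delta = L @ vertices                      # differential coordinates to preserve
    C = np.zeros((len(handles), n))
    for row, h in enumerate(handles):
        C[row, h] = weight                    # soft constraint on each handle vertex
    A = np.vstack([L, C])
    b = np.vstack([delta, weight * np.asarray(targets, dtype=float)])
    deformed, *_ = np.linalg.lstsq(A, b, rcond=None)
    return deformed                           # (n, 3) deformed vertex positions
```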

III: 4D motion editing [Tejera et al. CVMP’11]

III: 4D motion parameterisation
High-level real-time motion control parameters: walk speed/direction, jump height, etc.
Combine multiple skeletal sequences [Rose’98]
Solution: mesh sequence blending, e.g. walk/run

4D motion parameterisation
Mesh sequence blending:
(i) temporal alignment: dynamic time warp
(ii) blend corresponding frames
‣ non-linear blending (Laplacian): ~100 ms/frame
‣ linear blending: <8 ms/frame, but unrealistic
Solution: hybrid non-linear blending, ~10 ms/frame [Casas MIG’11]
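To illustrate the two steps, assuming both sequences are summarised by per-frame descriptors and share vertex correspondence after alignment: dynamic time warping gives the temporal mapping, and corresponding frames are then blended per vertex. This shows only the cheap linear blend that the hybrid scheme of [Casas MIG’11] corrects with non-linear detail.

```python
import numpy as np

def dtw_path(cost):
    """Dynamic time warp over a pairwise frame-cost matrix; returns the
    monotonic alignment path between the two mesh sequences."""
    n, m = cost.shape
    acc = np.full((n, m), np.inf)
    acc[0, 0] = cost[0, 0]
    for i in range(n):
        for j in range(m):
            if i == 0 and j == 0:
                continue
            best = min(acc[i - 1, j] if i else np.inf,
                       acc[i, j - 1] if j else np.inf,
                       acc[i - 1, j - 1] if i and j else np.inf)
            acc[i, j] = cost[i, j] + best
    path, i, j = [(n - 1, m - 1)], n - 1, m - 1
    while (i, j) != (0, 0):                   # backtrack the cheapest alignment
        candidates = [(i - 1, j), (i, j - 1), (i - 1, j - 1)]
        i, j = min((c for c in candidates if c[0] >= 0 and c[1] >= 0),
                   key=lambda c: acc[c])
        path.append((i, j))
    return path[::-1]

def blend_frames(walk_frames, run_frames, path, w):
    """Linear per-vertex blend of time-aligned frames with blend weight w."""
    return [(1 - w) * walk_frames[i] + w * run_frames[j] for i, j in path]
```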

Source / target [Casas et al. MIG’11]

4D motion parameterisation [Casas MIG’11, I3D’12]

III: Interactive Animation
Interactive motion transitions: skeletal motion graph [Gleicher’02, Arikan’02]
Solution: 4D parametric motion graph [Casas MIG’11]
Real-time transitions using shape similarity
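A minimal sketch of how shape similarity can drive real-time transition selection, assuming each candidate frame of the two parameterised motions is reduced to a precomputed shape descriptor (placeholder inputs, not the metric used in the 4D parametric motion graph itself): the most similar frame pair within a short look-ahead window is chosen as the transition point.

```python
import numpy as np

def best_transition(src_descs, dst_descs, window=10):
    """Pick the (source frame, target frame) pair with the most similar shape
    within a limited look-ahead window, so a transition can be selected in
    real time from a precomputed similarity table."""
    dissim = np.linalg.norm(src_descs[:, None, :] - dst_descs[None, :, :], axis=-1)
    dissim = dissim[:window, :window]          # restrict to near-future frames of both motions
    i, j = np.unravel_index(np.argmin(dissim), dissim.shape)
    return i, j, dissim[i, j]                  # frame indices and their dissimilarity
```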

Summary
Performance capture:
‣ 3D video capture, indoor/outdoor
‣ joint segmentation & reconstruction
Structured representation:
‣ global non-rigid alignment
‣ shape similarity tree
‣ 4D models
Interactive animation:
‣ 4D motion parameterisation
‣ 4D parametric motion graphs
‣ space-time editing

Surrey: pipeline for production of interactive 4D models
‣ integration of skeletal control with surface motion graphs (WP3)
‣ parametric control for general motion (WP4)
‣ interactive control of motion constraints (foot slide, hand contact, etc.) (WP4)
‣ interactive motion stylisation (WP4)

WP3T5: Surface motion graph extraction & development
‣ automatic graph construction
‣ similarity measures for dynamic shape & appearance
‣ efficient representation
D3.1 (month 12): initial actor model, skeleton-based motion graph

WP4: Actor Animation Algorithms
Interactive animation API using surface motion graphs
T4.1 Specification (all)
T4.2 Parametric surface motion graph (Surrey, BBC)
T4.3 Integration of face and body (HHI, BBC, INRIA, ART)
T4.4 Artistic editing and control (Surrey)
T4.5 Adaptation & manipulation of appearance (HHI)
D4.1 (month 12): initial animation engine based on skeletal surface motion graph (WP3)

WP4T1: Specification (all)
‣ data formats for parameterised motion graph
‣ interface for face & body animation
‣ interface for authoring content
‣ representation for parameterised 4D models
‣ interfaces and representation for appearance
1. input from all partners on current/proposed representations
2. draft data/API specification (month 3, Surrey)
3. working data/API specification (month 6, Surrey)

Vision, Speech & Signal Processing 3D Sports Production

Vision, Speech & Signal Processing Outdoor 3D Production