Tracking-dependent and interactive video projection (Big Brother project)


Team: Pierre Bretéché, Matei Mancas, Jonathan Demeyer, Thierry Ravet, Donald Glowinski, Gualtiero Volpe

System Architecture

Summary
- Robust tracking
- Modeling human motion attention of a visual scene

Summary
- Robust tracking: tracking should be robust to severe illumination changes!
- Modeling human motion attention of a visual scene

Robust tracking: data acquisition
- Color tracking of red hats
- IR tracking of lights on top of the red hats
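The color-tracking step can be sketched as a simple threshold-and-centroid pass over each frame. This is a minimal NumPy illustration, not the project's actual EyesWeb pipeline; the threshold values are assumptions.

```python
import numpy as np

def track_red_blob(frame_rgb, r_min=150, g_max=100, b_max=100):
    """Return the centroid (x, y) of strongly red pixels, or None.

    A minimal colour-tracking sketch: threshold for red pixels and
    take the centroid of the resulting mask (a single blob assumed).
    """
    r, g, b = frame_rgb[..., 0], frame_rgb[..., 1], frame_rgb[..., 2]
    mask = (r >= r_min) & (g <= g_max) & (b <= b_max)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None  # no red hat visible in this frame
    return float(xs.mean()), float(ys.mean())

# Synthetic test frame: a red square centred at (12, 8) on black background.
frame = np.zeros((20, 30, 3), dtype=np.uint8)
frame[6:11, 10:15] = (255, 0, 0)   # rows 6..10, cols 10..14
print(track_red_blob(frame))       # -> (12.0, 8.0)
```

The IR modality would use the same centroid logic on a brightness-thresholded IR image.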

Robust tracking: data fusion Projective transform

Robust tracking: data fusion
Selected points (Matlab) -> transformation matrix (EyesWeb) -> image warping (EyesWeb)
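The "selected points -> transformation matrix" step amounts to estimating a 3x3 projective transform (homography) from point correspondences. A plain NumPy sketch of the standard DLT estimation follows; the corner coordinates are made-up, and the project itself did this in Matlab/EyesWeb:

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate the 3x3 projective transform mapping src -> dst (DLT).

    src, dst: (N, 2) arrays of corresponding points, N >= 4.
    Each correspondence contributes two rows to the linear system;
    the solution is the null vector found via SVD.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def warp_point(H, pt):
    """Apply homography H to a 2-D point (homogeneous divide)."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return x / w, y / w

# Hypothetical example: map the IR image corners onto the colour frame.
src = np.array([[0, 0], [640, 0], [640, 480], [0, 480]], float)
dst = np.array([[10, 20], [630, 5], [620, 470], [5, 460]], float)
H = homography_from_points(src, dst)
```

Warping the full image then means applying `H` (or its inverse, for backward mapping) to every pixel coordinate.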

Robust tracking: data fusion
RED: color tracking / GREEN: IR tracking / BLUE: final fused tracking
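The fused (BLUE) track is obtained, as summarised in the conclusion, by a weighted combination of the blob positions based on a confidence level for each modality. A minimal sketch of that rule, with hypothetical confidence values:

```python
def fuse_positions(p_color, p_ir, conf_color, conf_ir):
    """Confidence-weighted fusion of two blob-position estimates.

    Each modality reports a 2-D position (or None if it lost the blob)
    and a confidence in [0, 1]; the fused position is the
    confidence-weighted mean, so one modality takes over when the
    other fails (e.g. under severe illumination changes).
    """
    if p_color is None:
        conf_color, p_color = 0.0, (0.0, 0.0)
    if p_ir is None:
        conf_ir, p_ir = 0.0, (0.0, 0.0)
    total = conf_color + conf_ir
    if total == 0.0:
        return None  # neither modality sees the blob
    x = (conf_color * p_color[0] + conf_ir * p_ir[0]) / total
    y = (conf_color * p_color[1] + conf_ir * p_ir[1]) / total
    return (x, y)

print(fuse_positions((100.0, 50.0), (110.0, 54.0), 0.75, 0.25))  # -> (102.5, 51.0)
```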

Summary Robust tracking Modeling human motion attention of a visual scene

Summary
- Robust tracking
- Modeling human motion attention of a visual scene: regions of interest should be dynamically highlighted, and visual effects corresponding to outstanding events should be displayed

Attention: in space (blob speed)
HOT RED: most important / DARK RED: less important
Global contrast!
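The spatial attention map scores each blob by how much its speed contrasts with the scene as a whole. A sketch of that global-contrast idea (the exact formula and normalisation used in the project are not shown here, so this is an assumption):

```python
def spatial_attention(speeds):
    """Global-contrast spatial attention over blob speeds.

    A blob is salient in proportion to how far its speed deviates
    from the mean speed of all blobs in the frame. Scores are
    normalised to [0, 1] (hot red = 1, dark red = 0).
    """
    mean = sum(speeds) / len(speeds)
    contrast = [abs(s - mean) for s in speeds]
    peak = max(contrast)
    if peak == 0.0:
        return [0.0] * len(speeds)  # all blobs move alike: nothing stands out
    return [c / peak for c in contrast]

# Two slow blobs and one fast one: the fast blob dominates attention.
print(spatial_attention([1.0, 1.2, 6.0]))
```

Note this captures the slide's point that motion is not salient in itself: a fast blob among fast blobs scores low.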

Attention: in time (blob speed)
HOT RED: really interesting / BLACK: really boring
Rarity computed over a 4-second window

Attention: in time (quantity of motion)
HOT RED: really interesting / BLACK: really boring
Rarity computed over a 4-second window
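Both temporal-attention slides rest on the same mechanism: a feature value (blob speed or quantity of motion) is interesting when it is rare within a short sliding window of recent frames. A minimal sketch, in which the window length and bin width are assumptions:

```python
from collections import deque

class RarityAttention:
    """Temporal attention by rarity over a short sliding window.

    Feature values that occur rarely in the recent history (roughly
    the last 4 seconds of frames) score high; values that occur
    constantly score low, however large they are.
    """
    def __init__(self, window_frames=100, bin_width=1.0):
        self.history = deque(maxlen=window_frames)
        self.bin_width = bin_width

    def score(self, value):
        b = int(value / self.bin_width)   # quantise the feature value
        self.history.append(b)
        count = self.history.count(b)
        # Rarity = 1 - relative frequency of this bin in the window.
        return 1.0 - count / len(self.history)

att = RarityAttention(window_frames=100)
for _ in range(99):
    att.score(1.0)        # a long stretch of ordinary motion...
print(att.score(9.0))     # ...then a sudden burst scores high: 0.99
```

This is why constant motion turns black over time while a sudden change flashes hot red.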

Conclusion
Summary:
- Multi-blob real-time tracking (on IR images and color images)
- Image registration using projective warping
- Data fusion using a weighted combination of blob positions, based on confidence levels for each modality
- Demonstration of motion attention in space (instantaneous) and in time (short-time memory) -> motion is not necessarily salient: it depends on the context
- Use of blob speed and silhouette quantity-of-motion features
So we worked hard...

Conclusion
... but we will work even harder:
- Refinement of the confidence level for each modality
- Better hardware: adapted optics for the cameras, and a color camera with the same characteristics as the IR camera, for easy modality registration; a computer with several FireWire ports, for easy modality synchronization
- Use of other attention features for tracked paths (direction, acceleration, direction variation, trajectory curvature, …) and for gesture expressivity (energy, internal/external quantity of motion, …)
- Long-time memory attention (flickering lights, commonly used paths, …) to obtain higher-level motion segmentation
- More efficient instantaneous (spatial) attention that also works with moving cameras and for surprising behaviors in crowds …

Thank you!