Cognitive Computer Vision
Kingsley Sage and Hilary Buxton
Prepared under ECVision Specific Action 8-3

Lecture 15: Active Vision & Cameras. Research Issues

Active vision
In recent years there has been growing interest in the use of active control of "image formation" to simplify and accelerate scene understanding. Aspects of "image formation" that can be actively controlled include:
– gaze or focus of attention (saccadic control)
– stereo viewing geometry (vergence control)
– a head-mounted camera

Active vision
Historical roots of "active computer vision":
– 1982: term first used by Bajcsy (NATO workshop)
– 1987: paper by Aloimonos et al. (ICCV)
– 1989: entire session at ICCV
References:
– "Active Perception", R. Bajcsy, Proceedings of the IEEE, Vol. 76, No. 8, August 1988
– "Active Vision", J. Y. Aloimonos, I. Weiss and A. Bandopadhay, ICCV, 1987

Active vision
To reconstruct or not to reconstruct? "Classical" stereo correspondence reconstructs the scene in a reference frame based on the stereo geometry. Active vision changes vergence angles, focus, etc., making reconstruction by traditional means intractable, so active systems avoid reconstruction wherever possible. Many visual control tasks, such as driving a car or grasping an object, can instead be performed by servoing directly on measurements made in the image, as in the sketch below:
– "A New Approach to Visual Servoing in Robotics", B. Espiau, F. Chaumette and P. Rives, IEEE Trans. on Robotics and Automation 8(3), June 1992
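To make "servoing directly on image measurements" concrete, here is a minimal sketch of the classical proportional image-based visual servoing law in the spirit of the Espiau, Chaumette and Rives paper. It is our illustrative Python/NumPy code, not their implementation; in practice the point depth Z must itself be estimated.

    import numpy as np

    def point_interaction_matrix(x, y, Z):
        # Standard 2x6 interaction matrix for a normalised image point
        # (x, y) at depth Z; its rows give (x_dot, y_dot) as a function
        # of camera velocity (vx, vy, vz, wx, wy, wz).
        return np.array([
            [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
            [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
        ])

    def ibvs_velocity(s, s_star, L, gain=0.5):
        # Proportional law v = -gain * pinv(L) @ (s - s*): drives the
        # image features s toward the goal configuration s* with no
        # 3D scene reconstruction at all.
        return -gain * np.linalg.pinv(L) @ (np.asarray(s) - np.asarray(s_star))

Stacking one 2x6 block per tracked point gives L, and iterating the law moves the camera until the image error vanishes.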

Active vision: application areas
– task-based visual control (example: the ActIPret project)
– navigation
– telepresence
– wearable computing
– panoramic cameras
– saccadic control

Task-based visual control: the ActIPret project
In ActIPret, information about the current task (which objects we are likely to interact with, which types of behaviour to expect) is used to determine, in real time, an optimal viewing geometry (gaze vector, focus, zoom).

Task-based visual control (video source unknown, for now)
The vision system uses an appearance-based model to determine how and when it is appropriate to pick up the part, as sketched below.
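The slide does not identify the appearance model; one minimal illustrative possibility is nearest-neighbour matching of the current (cropped, grayscale) view against stored appearance templates by normalised cross-correlation. The function names and the thresholding idea below are our assumptions.

    import numpy as np

    def best_appearance_match(view, templates):
        # Compare a grayscale view against stored templates of the same
        # size; return the index and score of the best-correlated one.
        v = (view - view.mean()) / (view.std() + 1e-9)
        best_i, best_score = -1, -1.0
        for i, t in enumerate(templates):
            tn = (t - t.mean()) / (t.std() + 1e-9)
            score = float(np.mean(v * tn))  # normalised cross-correlation
            if score > best_score:
                best_i, best_score = i, score
        return best_i, best_score

A grasp would then be triggered only when the best score for a "graspable pose" template exceeds a confidence threshold.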

Active vision in navigation. Example: the GTI project
One approach to visual navigation in cluttered environments is to recover the boundaries of free space and then move conservatively along the middle of it. Humans, by contrast, tend to cut corners, "swinging" from protruding corner to protruding corner. Using a stereo head to recover the range to a fixated point, the system can take the vehicle into "orbit" around that point at a chosen safe clearance radius |R|; the sense of rotation is chosen by using R > 0 or R < 0. A sketch of such a controller follows.
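A hedged sketch of how such an orbiting behaviour might be written as a simple steering law: hold the fixated point abeam at +/-90 degrees and nudge the aim point in or out to regulate the radius. The gains, sign conventions and function name are our assumptions, not the GTI implementation.

    import math

    def orbit_control(rng, bearing, R, v=0.3, k_b=1.5, k_r=0.5):
        # rng: stereo range to the fixated point; bearing: its angle
        # relative to the vehicle heading (radians, positive to the left);
        # R: signed orbit radius, whose sign selects the rotation sense.
        side = 1.0 if R > 0 else -1.0
        # Desired bearing is +/-90 deg, shifted in proportion to the
        # radial error so the vehicle spirals onto the orbit of radius |R|.
        desired = side * (math.pi / 2 - k_r * (rng - abs(R)))
        turn_rate = v / R + k_b * (bearing - desired)  # feedforward + P term
        return v, turn_rate  # (forward speed, turn rate)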

Telepresence. Example: the VFR project
Telepresence can be defined as sensing sufficient information about the operator and the task environment, and communicating this information to the human operator in a sufficiently natural way, that the operator feels physically present at the remote site. The top movie shows an early version of a tracker that uses infra-red light to control two degrees of freedom of the head at 50 Hz; the bottom movie shows a more sophisticated version controlling the head at the end of a robot arm. A sketch of such a servo loop follows.
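The tracker's actual control law is not given in the slide; this sketch only shows the general shape of a 50 Hz proportional pan/tilt servo. The device callbacks get_marker_offset and send_pan_tilt are hypothetical.

    import time

    def track_head(get_marker_offset, send_pan_tilt, gain=0.4, hz=50):
        # Re-centre the tracked operator marker by integrating small
        # proportional corrections into the pan/tilt angles each cycle.
        period = 1.0 / hz
        pan = tilt = 0.0
        while True:
            dx, dy = get_marker_offset()   # marker offset from image centre
            pan += gain * dx
            tilt += gain * dy
            send_pan_tilt(pan, tilt)
            time.sleep(period)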

Wearable computing Example: DyPERS from MIT

Panoramic vision
360° images are usually achieved using a 2D imaging array looking into a rotating mirror or a hemispherical reflector. The rotating-mirror approach allows variable resolution at different angular ranges. Lots of good web links at:
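For the mirror-based designs, the raw image is a circular ring that must be unwrapped into a rectangular panorama. Below is a minimal sketch of that polar-to-rectangular resampling, using nearest-neighbour sampling and no correction for the mirror profile, which a real system would apply.

    import numpy as np

    def unwrap_panorama(img, cx, cy, r_in, r_out, out_w=1024):
        # Sample the ring between radii r_in and r_out around the mirror
        # centre (cx, cy) over 360 degrees of rays; each output column is
        # one viewing direction, each row one elevation.
        out_h = int(r_out - r_in)
        theta = np.linspace(0.0, 2.0 * np.pi, out_w, endpoint=False)
        radius = np.linspace(r_out, r_in, out_h)   # top row = outer ring
        tt, rr = np.meshgrid(theta, radius)
        xs = np.clip((cx + rr * np.cos(tt)).astype(int), 0, img.shape[1] - 1)
        ys = np.clip((cy + rr * np.sin(tt)).astype(int), 0, img.shape[0] - 1)
        return img[ys, xs]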

Panoramic vision: example panorama pictures.

Panoramic vision application: homing robot (ICS, Greece)
Perceptual processes are addressed in the context of the goals, environment and behaviour of a system. This is a novel, vision-based method for robot homing: the problem of computing a route so that a robot can return to its initial "home" position after the execution of an arbitrary "prior" path. The robot tracks visual features in panoramic views of the environment that it acquires as it moves.

Panoramic vision application: homing robot (ICS, Greece)
When homing is initiated, the robot selects Milestone Positions (MPs) on the "prior" path by exploiting information in its visual memory. The MP selection process aims to pick positions that guarantee the success of the local control strategy between consecutive MPs; a sketch of one such bearing-only strategy follows. See the project website for a panoramic view.
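The published ICS method differs in its details, but the flavour of a bearing-only local control step between two MPs can be shown with the classic Average Landmark Vector (ALV) model: with compass-aligned bearings to features matched between the live panoramic view and the stored MP view, the difference of the two average landmark vectors points roughly toward the MP.

    import math

    def homing_heading(current_bearings, stored_bearings):
        # ALV homing: ALV(current) - ALV(stored) approximates the
        # direction from the current position toward the stored one.
        def alv(bearings):
            n = max(len(bearings), 1)
            return (sum(math.cos(b) for b in bearings) / n,
                    sum(math.sin(b) for b in bearings) / n)
        cx, cy = alv(current_bearings)
        sx, sy = alv(stored_bearings)
        return math.atan2(cy - sy, cx - sx)  # heading to drive along (rad)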

Saccadic control: attention-recognition loop (KTH, Sweden)
The scene is observed using a stereo head. The disparity between the two images can be used to localise objects in 3D. The head saccades to an object, and the localised object is then recognised, closing the attention-recognition loop.
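The depth cue behind the localisation step is the standard stereo relation Z = f B / d for a rectified camera pair; a one-line sketch, assuming the focal length is expressed in pixels and the disparity is non-zero.

    def depth_from_disparity(d_pixels, focal_px, baseline_m):
        # Depth of the fixated point from stereo disparity: Z = f * B / d,
        # with f in pixels, baseline B in metres and disparity d in pixels.
        return focal_px * baseline_m / d_pixels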

Robots that interact with humans: the SONY QRIO robot

The end. Please feed comments back to Kingsley Sage or Hilary Buxton at the University of Sussex, UK.