Jeremiah D. Still, Veronica J. Dark & Derrick J. Parkhurst
Department of Psychology & The Human Computer Interaction Program
Vision Sciences Society's Annual Meeting, Sarasota, FL, May 13, 2007
Viewpoint Invariant Object Features Attract Overt Visual Attention

Overview
Visual saliency currently provides the leading account of stimulus-driven overt visual attention. However, given that object recognition is necessary for most natural visual tasks, a plausible alternative default strategy is to attend to information likely to be important for object recognition.

A Saliency Model
Parkhurst, Law, & Niebur (2002) showed that participants fixate salient (or unique) image regions when freely viewing complex artificial and natural scenes.
Figure 1: A Saliency Model

References
Biederman, I. (1987). Recognition-by-components: A theory of human image understanding. Psychological Review, 94.
Geusebroek, J., Burghouts, G. J., & Smeulders, A. W. M. (2005). The Amsterdam library of object images. International Journal of Computer Vision, 61(1).
Itti, L., Koch, C., & Niebur, E. (1998). A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(11).
Lowe, D. G. (1999, September). Object recognition from local scale-invariant features. Paper presented at the International Conference on Computer Vision, Corfu, Greece.
Parkhurst, D., Law, K., & Niebur, E. (2002). Modeling the role of salience in the allocation of overt visual attention. Vision Research, 42.
Wolff, T., Still, J. D., Parkhurst, D. J., & Dark, V. J. (2007, May). Invariant features detected with computer vision allow better human object recognition in photographs. Poster presented at the meeting of the Midwestern Psychological Association, Chicago, IL.
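The saliency account above can be illustrated with a minimal, single-scale center-surround contrast sketch. This is a drastic simplification of the Itti, Koch, & Niebur (1998) model, which combines color, intensity, and orientation channels across multiple scales; the function names and window radii here are illustrative assumptions, not the authors' implementation:

```python
def box_mean(img, r, c, radius):
    """Mean intensity in a square window around (r, c), clipped to the image."""
    rows, cols = len(img), len(img[0])
    total, count = 0.0, 0
    for i in range(max(0, r - radius), min(rows, r + radius + 1)):
        for j in range(max(0, c - radius), min(cols, c + radius + 1)):
            total += img[i][j]
            count += 1
    return total / count

def saliency_map(img, center_radius=1, surround_radius=4):
    """Single-scale center-surround contrast: |mean(center) - mean(surround)|.

    Locally unique (high-contrast) regions get high values, mimicking the
    intensity channel of a saliency model at one scale.
    """
    rows, cols = len(img), len(img[0])
    return [[abs(box_mean(img, r, c, center_radius) -
                 box_mean(img, r, c, surround_radius))
             for c in range(cols)] for r in range(rows)]
```

In the full model, such contrast maps are computed at several scales per feature channel, normalized, and summed into a single saliency map whose peaks are taken as candidate fixation targets.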
Scale Invariant Feature Transform (SIFT)
Object recognition depends in part on the presence of visual features that remain invariant across viewpoints (Biederman, 1987). Lowe (1999) developed the SIFT algorithm to identify such invariant features for use in computer object recognition.
Figure 2: Schematic Presenting Our Adoption of the SIFT Algorithm
Figure 3: Transforming the SIFT Keypoints into a Pre-attentional Map

Method
The fixations of 12 participants freely viewing images of objects were recorded. Images were color photographs from the Amsterdam Library of Object Images (Geusebroek, Burghouts, & Smeulders, 2005).
Figure 4: Example of Stimulus with Fixations Overlaid
Figure 5: Comparing the Pre-attentional Maps

Results
Figure 6: Comparison of Saliency & SIFT Performance (percentiles computed from the frequency distributions of saliency and SIFT map values)

Discussion
These results suggest that viewpoint invariant object features attract attention, as reflected in eye movements. In a recent experiment we further explored whether these invariant features contribute to object recognition: objects were more easily identified when image fragments contained more invariant features (Wolff, Still, Parkhurst, & Dark, 2007).
Figure 7: Example Stimuli
Our research supports the hypothesis that the default attentional selection strategy is biased to select visual features likely to be important for object recognition.
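The transformation of SIFT keypoints into a pre-attentional map, and the percentile scoring used to compare maps against fixations, can be sketched as follows. This is a hypothetical reconstruction, not the authors' code: it assumes each keypoint contributes an isotropic Gaussian, and that a fixation is scored by the percentile of the map value it lands on relative to all map values. The sigma value and function names are illustrative, and real keypoint coordinates would come from Lowe's SIFT detector rather than being hand-supplied:

```python
import math

def preattention_map(keypoints, rows, cols, sigma=2.0):
    """Build a map by summing an isotropic Gaussian at each keypoint (row, col)."""
    m = [[0.0] * cols for _ in range(rows)]
    for (kr, kc) in keypoints:
        for r in range(rows):
            for c in range(cols):
                d2 = (r - kr) ** 2 + (c - kc) ** 2
                m[r][c] += math.exp(-d2 / (2.0 * sigma ** 2))
    return m

def fixation_percentile(m, fix_r, fix_c):
    """Percentile rank of the map value at a fixation among all map values.

    Fixations landing on keypoint-dense regions score near 100; fixations on
    empty regions score near 0.
    """
    values = [v for row in m for v in row]
    v = m[fix_r][fix_c]
    below = sum(1 for x in values if x <= v)
    return 100.0 * below / len(values)
```

Under this scheme, a fixation directly on a keypoint cluster yields a percentile near 100, while a fixation far from any keypoint yields a percentile near 0, giving a chance-corrected way to compare the SIFT-based map against the saliency map.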