REU Presentation Week 3 Nicholas Baker

Bottom-Up Visual Salience
- What features "pop out" in a scene?
- No prior information or goal
- Identify areas of large feature contrast via center-surround comparison
- Features: luminance, color, orientation, motion
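As a rough illustration of the center-surround contrast idea on this slide, the sketch below blurs the luminance channel at a fine and a coarse Gaussian scale and takes their absolute difference as a conspicuity map. It is a minimal stand-in, not the full multi-scale Itti-Koch pyramid model; the function name, sigma values, and use of OpenCV here are illustrative assumptions.

    import cv2
    import numpy as np

    def center_surround_saliency(bgr_image, center_sigma=2.0, surround_sigma=16.0):
        """Crude bottom-up conspicuity map: |center - surround| of luminance.

        Only a sketch of the center-surround contrast idea, not the full
        multi-scale Itti-Koch model; the sigma values are arbitrary choices.
        """
        gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
        center = cv2.GaussianBlur(gray, (0, 0), center_sigma)      # fine scale
        surround = cv2.GaussianBlur(gray, (0, 0), surround_sigma)  # coarse scale
        contrast = np.abs(center - surround)                       # local feature contrast
        return cv2.normalize(contrast, None, 0.0, 1.0, cv2.NORM_MINMAX)

    # usage (file name is a placeholder):
    # saliency = center_surround_saliency(cv2.imread("frame.png"))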

Bottom-Up Visual Salience in Computer Vision
- Identify areas of high intrinsic dimensionality by treating the signal as Shannon information (Vig 2012)
- Identify areas of low-level surprisal in a scene (Itti 2005)
- Weight continuity and visual clutter as well as local feature contrasts (He 2011)
- Separate the feature matrix into a low-rank non-salient matrix and a sparse salient matrix (Souly)
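Of the methods listed above, the low-rank/sparse separation lends itself to a compact sketch. The rough principal component pursuit below uses a standard inexact augmented-Lagrangian scheme, not the cited paper's exact algorithm: it splits a feature matrix M into a low-rank "background" part L and a sparse "salient" part S, with parameter defaults borrowed from common RPCA practice.

    import numpy as np

    def rpca(M, lam=None, mu=None, tol=1e-7, max_iter=500):
        """Split M into low-rank L (non-salient structure) + sparse S (salient outliers).

        A generic robust-PCA sketch via alternating shrinkage, meant only to
        illustrate the low-rank + sparse idea; defaults follow common RPCA choices.
        """
        m, n = M.shape
        lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
        mu = mu if mu is not None else 0.25 * m * n / (np.abs(M).sum() + 1e-12)
        Y = np.zeros_like(M)   # Lagrange multipliers
        S = np.zeros_like(M)

        def shrink(X, tau):    # soft thresholding
            return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

        for _ in range(max_iter):
            U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
            L = U @ np.diag(shrink(sig, 1.0 / mu)) @ Vt     # singular-value thresholding
            S = shrink(M - L + Y / mu, lam / mu)            # sparse (salient) update
            R = M - L - S
            Y += mu * R
            if np.linalg.norm(R) <= tol * np.linalg.norm(M):
                break
        return L, S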

Top-Down Visual Salience
- Goal-driven analysis of a scene
- Direct visual attention to areas/features of probable importance
- Locate objects, actions, and features of task-driven (endogenous) significance

Top-Down Visual Salience in Computer Vision
- Use CRF-modulated dictionary learning to construct a top-down saliency map (Yang 2012)
- Use online reinforcement learning with the U-Tree algorithm to interactively teach the machine how to allocate attention (Borji 2009)
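Neither of the cited approaches reduces to a few lines, so as a stand-in the sketch below shows the simplest version of the top-down idea: bottom-up feature maps are reweighted by task-learned gains before being combined. The function and the example weights are hypothetical and are not the methods of Yang 2012 or Borji 2009.

    import numpy as np

    def top_down_saliency(feature_maps, task_gains):
        """Weight bottom-up feature maps by task-relevance gains and sum them.

        feature_maps: dict of name -> 2-D conspicuity map (all the same shape).
        task_gains:   dict of name -> scalar relevance learned for the current task.
        """
        names = sorted(feature_maps)
        stack = np.stack([feature_maps[n] for n in names])          # (F, H, W)
        gains = np.array([task_gains.get(n, 0.0) for n in names])   # (F,)
        sal = np.tensordot(gains, stack, axes=1)                    # weighted sum -> (H, W)
        sal -= sal.min()
        return sal / sal.max() if sal.max() > 0 else sal

    # e.g. a "find red objects" task might weight color heavily (hypothetical gains):
    # sal = top_down_saliency({"color": c, "intensity": i, "orientation": o},
    #                         {"color": 0.8, "intensity": 0.1, "orientation": 0.1})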

My Work
- Most current top-down visual saliency work is on static images
- Choose one promising top-down method for static images
- Implement the algorithm if code is not available
- Extend it to operate on videos instead of static images
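One straightforward way to extend a static-image method to video, consistent with the plan above, is to run it per frame and smooth the result over time. The sketch below assumes an OpenCV frame loop and an exponential-smoothing blend; the smoothing scheme is an assumption, not a stated part of the plan.

    import cv2

    def video_saliency(video_path, frame_saliency, alpha=0.6):
        """Apply a static-image saliency function to every frame of a video,
        blending with an exponentially smoothed running map for temporal stability.
        """
        cap = cv2.VideoCapture(video_path)
        running, maps = None, []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            sal = frame_saliency(frame)                  # any static-image method
            running = sal if running is None else alpha * sal + (1 - alpha) * running
            maps.append(running.copy())
        cap.release()
        return maps

    # e.g. reuse the center-surround sketch above as the per-frame method:
    # maps = video_saliency("input.avi", center_surround_saliency)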