Human vision: function

Human vision: function
Nisheeth, 12th February 2019

Retina as filter

Retina can calculate derivatives
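The derivative idea can be sketched numerically. A center-surround receptive field is commonly modeled as a difference of Gaussians; the sketch below (illustrative parameters, not from the lecture) convolves such a kernel with a luminance step and shows that the response concentrates at the edge, much like a spatial derivative.

```python
import numpy as np

def dog_kernel(size=21, sigma_c=1.0, sigma_s=3.0):
    # Difference-of-Gaussians: narrow excitatory center minus broad
    # inhibitory surround (illustrative sigmas)
    x = np.arange(size) - size // 2
    center = np.exp(-x**2 / (2 * sigma_c**2)) / (sigma_c * np.sqrt(2 * np.pi))
    surround = np.exp(-x**2 / (2 * sigma_s**2)) / (sigma_s * np.sqrt(2 * np.pi))
    return center - surround

# A luminance step edge: dark on the left, bright on the right
signal = np.concatenate([np.zeros(50), np.ones(50)])
response = np.convolve(signal, dog_kernel(), mode='same')

# The response peaks near the edge (index 50) and is zero in uniform regions
print(int(np.argmax(np.abs(response))))  # a value near 50
```

Because the kernel integrates to roughly zero, uniform regions produce no response; only luminance changes do, which is the sense in which the retina "calculates derivatives".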

Evidence

The visual pathway: LGN
- Optic nerves terminate in the LGN
- Cortical connections also project back to the LGN
- LGN receptive fields seem similar to retinal receptive fields
- Considerable encoding happens at the retina-LGN junction: roughly 130 million rods and cones converge onto the roughly 1.2 million axons that reach the LGN

The visual pathway: V1
- V1 is the primary visual cortex
- V1 receptive fields are sensitive to orientation, edges, and color changes

Simple and complex cells in V1

Emergent orientation selectivity http://www.scholarpedia.org/article/Models_of_visual_cortex
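A standard way to model the orientation selectivity of a V1 simple cell is a Gabor filter (a Gaussian-windowed sinusoid). The sketch below, with purely illustrative parameters, shows that such a model cell responds most strongly to a grating at its preferred orientation.

```python
import numpy as np

def gabor(theta, size=31, sigma=4.0, wavelength=8.0):
    # Oriented Gabor patch: a Gaussian envelope times a sinusoid,
    # a common model of a V1 simple-cell receptive field
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)

def grating(theta, size=31, wavelength=8.0):
    # A full-field sinusoidal grating whose luminance varies along direction theta
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    return np.cos(2 * np.pi * (x * np.cos(theta) + y * np.sin(theta)) / wavelength)

cell = gabor(theta=0.0)  # a model cell preferring one orientation
angles = np.deg2rad(np.arange(0, 180, 15))
responses = [abs(np.sum(cell * grating(a))) for a in angles]

# The response is maximal when the grating matches the cell's preferred orientation
print(int(np.argmax(responses)))  # index 0, the matching grating
```

Sweeping the grating angle traces out an orientation-tuning curve, the signature Hubel and Wiesel found in V1.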

Complexity from simplicity http://www.brains-explained.com/how-hubel-and-wiesel-revolutionized-neuroscience/

Remember this? Convolution: sliding a filter over an image patch

Edge detection filters designed from V1 principles
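The "image patch times filter" picture can be made concrete. The sketch below (a naive sliding-window implementation, illustrative only) applies Sobel derivative kernels, simple oriented filters loosely analogous to V1 receptive fields, to an image containing a single vertical edge.

```python
import numpy as np

def conv2d(image, kernel):
    # Naive 'valid' sliding-window filtering (cross-correlation, i.e. the
    # filter is not flipped, as commonly implemented in CNNs)
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Sobel kernels: oriented derivative filters for vertical and horizontal edges
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
sobel_y = sobel_x.T

# A toy image with a single vertical edge down the middle
img = np.zeros((8, 8))
img[:, 4:] = 1.0

gx = conv2d(img, sobel_x)  # strong response at the vertical edge
gy = conv2d(img, sobel_y)  # no response: there is no horizontal edge
print(np.abs(gx).max(), np.abs(gy).max())
```

The vertical-edge filter fires only where luminance changes horizontally, while the orthogonal filter stays silent, the same division of labor as orientation-tuned V1 cells.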

Influenced models of object recognition
The most successful current models of visual function use feed-forward transfer of mixed selective information, e.g. convolutional neural networks (CNNs)

Attention in vision Visual search

Feature search

Conjunction search

Visual search
[Example displays: feature search and conjunction search] Treisman & Gelade 1980

“Serial” vs “parallel” search: reaction time (ms) as a function of set size
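The classic pattern can be simulated. Under the illustrative assumptions below (a 400 ms base RT, a 25 ms-per-item slope for serial search, Gaussian trial noise; all numbers invented for the sketch), fitting a line to mean RT against set size recovers a flat function for "parallel" search and a non-zero slope for "serial" search.

```python
import numpy as np

rng = np.random.default_rng(0)
set_sizes = np.array([4, 8, 16, 32])

def simulate_rt(set_size, base=400.0, slope=0.0, noise=20.0, trials=200):
    # RT = base time + slope * set size + Gaussian trial noise (toy model)
    return base + slope * set_size + rng.normal(0, noise, trials)

feature_rts = [simulate_rt(n, slope=0.0).mean() for n in set_sizes]       # "parallel"
conjunction_rts = [simulate_rt(n, slope=25.0).mean() for n in set_sizes]  # "serial"

# The fitted slope (ms per item) distinguishes the two search modes
feat_slope = np.polyfit(set_sizes, feature_rts, 1)[0]
conj_slope = np.polyfit(set_sizes, conjunction_rts, 1)[0]
print(round(feat_slope, 1), round(conj_slope, 1))
```

The slope of the RT-by-set-size function is exactly the statistic the visual search experiments described later use to classify a search as pre-attentive or attentive.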

Feature Integration Theory (FIT): Basics
Treisman (1988, 1993)
- Distinction between objects and features
- Attention is the “glue” used to bind features together at the attended location
- One object is coded at a time, based on location
- Pre-attentive, parallel processing of features
- Serial process of feature integration

FIT: Details
- Sensory “features” (color, size, orientation, etc.) are coded in parallel by specialized modules
- Modules form two kinds of “maps”:
  - feature maps (color maps, orientation maps, etc.)
  - a master map of locations

Feature Maps
Contain two kinds of information:
- the presence of a feature anywhere in the field (“there’s something red out there…”)
- implicit spatial information about the feature
Activity in feature maps can tell us what’s out there, but can’t tell us:
- where it is located
- what other features the red thing has

Master Map of Locations
Codes where features are located, but not which features are located where. We need some way of:
- locating features
- binding the appropriate features together
[Enter focal attention…]

Role of Attention in FIT
- Attention moves within the location map
- It selects whatever features are linked to the attended location
- Features of other objects are excluded
- Attended features are then entered into the current temporary object representation

Evidence for FIT
- Visual search tasks
- Illusory conjunctions

Feature Search: Find red dot

“Pop-Out Effect”

Conjunction: white vertical

1 Distractor

12 Distractors

29 Distractors

Feature Search
Is there a red T in the display? [Display: many Ts, one in red]
- Target defined by a single feature
- According to FIT, the target should “pop out”

Conjunction Search
Is there a red T in the display? [Display: mixed Ts and Xs in two colors]
- Target defined by a conjunction of shape and color
- Target detection involves binding features, so it demands serial search with focal attention

Visual Search Experiments
- Record the time taken to determine whether the target is present or absent
- Vary the number of distractors
FIT predicts:
- feature search should be independent of the number of distractors
- conjunction search should get slower with more distractors

Typical Findings and Interpretation
- Feature targets pop out: flat display-size function
- Conjunction targets demand serial search: non-zero slope

…not that simple…
Some conjunctions are easy: depth & shape, and movement & shape (Theeuwes & Kooi, 1994)

Guided Search
Triple conjunctions are frequently easier than double conjunctions. This led Wolfe and Cave to modify FIT into the Guided Search model (Wolfe & Cave)

Guided Search - Wolfe and Cave
Separate processes search for Xs and for white things (the target’s features); the target location receives double activation, which draws attention to it
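The double-activation idea can be sketched in a few lines. In the toy display below (positions and features invented for illustration), separate shape and color modules each activate the locations carrying one target feature; summing the maps makes the conjunction target the single most active location.

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy display of six items, each a (shape, color) pair;
# the conjunction target is the white X at position 3
display = [('O', 'black'), ('X', 'black'), ('O', 'white'),
           ('X', 'white'), ('O', 'black'), ('X', 'black')]

# Each feature module activates every location containing its target feature
shape_map = np.array([1.0 if shape == 'X' else 0.0 for shape, _ in display])
color_map = np.array([1.0 if color == 'white' else 0.0 for _, color in display])

# Summed activation plus a little noise: only the conjunction target gets
# "double activation", so attention is guided there first
activation = shape_map + color_map + rng.normal(0, 0.1, len(display))
print(int(np.argmax(activation)))  # 3, the white X
```

Guidance is the key difference from strict FIT: attention is still deployed serially, but the summed feature maps rank locations so the target is usually visited early.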

Problems for both of these theories
Both FIT and Guided Search assume that attention is directed at locations, not at objects in the scene. Goldsmith (1998) showed much more efficient search for a target combining redness and S-ness when the features belonged to a single object than when they did not.

More problems
Hayward & Burke (2000) compared search displays of lines alone, lines in circles, and lines plus circles

Results (target-present trials only): a pop-out search should be unaffected by the circles

More problems
Enns & Rensink (1991): search is very fast in this situation only when the objects look 3D. Can the direction a whole object points be a “feature”?

Duncan & Humphreys (1989): SIMILARITY
Visual search tasks are:
- easy when distractors are homogeneous and very different from the target
- hard when distractors are heterogeneous and not very different from the target

Asymmetries in visual search
The presence of a “feature” is easier to find than its absence

Kristjansson & Tse (2001) Faster detection of presence than absence - but what is the “feature”?

Familiarity and asymmetry
A search asymmetry appears for German readers but not for Cyrillic readers

Other high-level effects
Finding a tilted black line is not affected by the white lattice, so “feature” search is sensitive to occlusion (Wolfe, 1996)

Gestalt effects

Prägnanz
Perception is not just bottom-up integration of features; there is more to a whole image than the sum of its parts.

Not understood computationally
The Gestalt principles are conceptually clear, and DCNNs can learn them, but a computational translation of the principles is missing. https://arxiv.org/pdf/1709.06126.pdf

Concept-selective neurons https://www.newscientist.com/article/dn7567-why-your-brain-has-a-jennifer-aniston-cell/

Summary
- Classic accounts of visual perception have focused on bottom-up integration of features
- Consistent with data from visual search experiments
- Inconsistent with phenomenological experience in naturalistic settings
- Top-down influences affect visual perception. But how? We will see some computational approaches in the next class.