We are focusing our discussion on the luminance dimension. Note, however, that the same analyses could equally be directed towards the chromatic and/or temporal dimensions.

Luminance contrast: increase / decrease.

How do images relate to objects in visual space? Properties of the visual SYSTEM.

[Figure: the line spread function (luminance as a function of position).]

NOTE: We only resolve the two object lines if the detectors under the image register them as separate lines.

Linear Systems Analysis: we are interested in examining the relationship between object intensities and the perceptual representation of those intensities.

Optical system: intensity-difference input → intensity-difference output.

Modulation transfer functions: a lens forming an image of a periodic grating.

Optical system: intensity input → intensity output. The eye is an optical system; its intensity output is the retinal image.

Further sampling error

Intensity input → optical system → neural system → perceptual output.

[Figure: input signal V_in (x-axis) vs. output signal V_out (y-axis).]

Essential nonlinearities: examples of nonmonotonic functions.

Start with the retina…

Can we develop a relationship between the screen & retinal image?

We first need to discuss two basic principles of linearity: 1. the principle of homogeneity (intensity distribution); 2. the principle of superposition (positional distribution).

p is a vector that represents different positional intensities across the one-dimensional monitor image. Row vector: p = (0.0, 0.0, 1.0, 0.0, 0.0). r is the retinal image vector: r = (0.0, 0.05, 0.55, 0.05, 0.0).

p is a vector that represents different positional intensities across the one-dimensional monitor image:
p = (0.0, 1.0, 0.0, 0.0, 0.0) gives the retinal image vector r = (0.07, 0.65, 0.07, 0.0, 0.0).
p = (0.0, 0.0, 0.0, 1.0, 0.0) gives r = (0.00, 0.00, 0.07, 0.65, 0.07).
p = (0.0, 1.0, 0.0, 1.0, 0.0) gives r = (0.07, 0.65, 0.45, 0.65, 0.07).
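
The two principles can be checked directly in code. Below is a minimal sketch (Python/NumPy; the line spread function kernel is illustrative, not the slide's exact values) that treats the optics as a one-dimensional convolution:

```python
import numpy as np

# Hypothetical line spread function (LSF) of the eye's optics as a 1-D
# convolution kernel. The values are illustrative, not measured.
lsf = np.array([0.07, 0.65, 0.07])

def retinal_image(p):
    """Map a monitor intensity vector p to a retinal image vector r by
    convolving with the line spread function (a linear operation)."""
    return np.convolve(p, lsf, mode="same")

p1 = np.array([0.0, 1.0, 0.0, 0.0, 0.0])
p2 = np.array([0.0, 0.0, 0.0, 1.0, 0.0])

# Superposition: the response to a sum equals the sum of the responses.
assert np.allclose(retinal_image(p1 + p2),
                   retinal_image(p1) + retinal_image(p2))

# Homogeneity: scaling the input scales the output by the same factor.
assert np.allclose(retinal_image(2.0 * p1), 2.0 * retinal_image(p1))
```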

Designate the scalars

Again, define spatial frequency. First, start with sine-wave functions of time.

In the time domain: A sin(360 f t), where 360 is the constant needed when angles are measured in degrees, f is frequency (the reciprocal of the period), A is amplitude, and t is time.

Spatial frequency is defined as the number of cycles per 1° of visual angle (cpd or cyc/deg), plotted against position (or spatial extent).

Michelson contrast defines contrast for periodic patterns: C = (Lmax − Lmin) / (Lmax + Lmin).

NOTE: Contrast for aperiodic patterns is usually defined as the percentage difference between object and background luminance.
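
A short sketch of both definitions (the grating parameters and luminance values are made-up illustrations; the aperiodic case is computed as the common Weber fraction):

```python
import numpy as np

def michelson_contrast(luminance):
    """Michelson contrast for a periodic pattern:
    (Lmax - Lmin) / (Lmax + Lmin)."""
    l_max, l_min = luminance.max(), luminance.min()
    return (l_max - l_min) / (l_max + l_min)

def weber_contrast(l_object, l_background):
    """Aperiodic contrast as the difference between object and
    background luminance relative to the background."""
    return (l_object - l_background) / l_background

# A sine-wave grating: mean luminance 50, amplitude 25 (arbitrary units),
# 2 cycles per degree, sampled over 1 degree of visual angle.
x = np.linspace(0.0, 1.0, 500)                 # position in degrees
grating = 50 + 25 * np.sin(2 * np.pi * 2 * x)

print(michelson_contrast(grating))             # ~0.5
print(weber_contrast(60.0, 50.0))              # 0.2, i.e., a 20% difference
```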

Grating sensitivity, as measured in psychophysical threshold experiments. The left-hand graph shows the luminance contrast required to detect the presence of a grating, as a function of its spatial frequency and mean luminance; the subject requires the least contrast to detect gratings at medium spatial frequencies. The right-hand graph re-plots the data in terms of contrast sensitivity (1/contrast) as a function of spatial frequency. Data are taken from Campbell and Robson (1968).

Note: gratings of different spatial frequencies appear to differ in contrast.

Spatial frequency

Mouse “GO/NO GO” determination of 90° orientation sensitivity with and without optogenetically activated cholinergic basal forebrain modulation of V1.

[Figure: contrast sensitivity functions for normal vision, amblyopia, MS, and cataract.]

To determine the modulation transfer function (MTF) of an optical system, you need to compare the magnitude of image contrast with that of object contrast. This defines the spatial filtering properties of the system (amplitude energy loss through the system).

Comparing image contrast to object contrast.
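
As a sketch of that comparison (a Gaussian blur stands in for an arbitrary optical system; all widths and frequencies are illustrative):

```python
import numpy as np

def michelson(luminance):
    return (luminance.max() - luminance.min()) / (luminance.max() + luminance.min())

# Object: a sinusoidal grating at 8 c/deg. Image: the same grating after
# a Gaussian blur standing in for the optics of the system.
x = np.linspace(0.0, 1.0, 1000)                # position in degrees
object_grating = 50 + 25 * np.sin(2 * np.pi * 8 * x)

k = np.linspace(-0.05, 0.05, 101)              # kernel support, degrees
kernel = np.exp(-k**2 / (2 * 0.01**2))         # sigma = 0.01 deg (illustrative)
kernel /= kernel.sum()
image_grating = np.convolve(object_grating, kernel, mode="same")

core = slice(100, -100)                        # avoid convolution edge artifacts
mtf_at_8cpd = michelson(image_grating[core]) / michelson(object_grating[core])
# Repeating this across spatial frequencies traces out the full MTF curve.
```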

Common optical MTFs (in vision, spatial frequency is expressed in c/deg or cpd).

What is amplitude energy loss? First: the spatial energy of an image is defined by its component sine waves and their amplitudes.

If two sine-wave gratings of different frequencies are superimposed upon each other, the resulting pattern of illumination can be found by adding the two sine waves together, point by point.

This means you can combine different spatial frequencies to create a compound image…
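
For instance (the frequencies and amplitudes are chosen only for illustration), the point-by-point sum of two gratings:

```python
import numpy as np

# Point-by-point summation of two sine-wave gratings (1 and 3 c/deg;
# the frequencies and amplitudes are illustrative).
x = np.linspace(0.0, 2.0, 1000)                  # position in degrees
fundamental = np.sin(2 * np.pi * 1 * x)
third_harmonic = np.sin(2 * np.pi * 3 * x) / 3   # one third the amplitude
compound = fundamental + third_harmonic          # the compound grating
```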

[Figure: fundamental grating + lower-contrast higher-frequency gratings, summed.]

Continue the summation process with higher and higher spatial frequencies at proportionally lower and lower amplitudes…

Now is as good a time as any to note the difference between a square-wave and a sine-wave grating…

This is the square wave’s fundamental frequency

Note: By adding together sine waves of various frequencies, amplitudes & orientations, it is possible to obtain distributions of many different shapes.

Fourier’s Theorem states that any waveform or distribution can be generated by summing the appropriate sine waves and, as a corollary, that any complex distribution can be completely described by specifying the particular set of sine waves which, when added together, will reproduce the given distribution.

The procedure for finding the particular set of sine waves that must be added in order to obtain some given complex waveform is called Fourier analysis. The sine wave components are the Fourier components of a complex wave. Each Fourier component has its own contributing amplitude (or energy state).

Now, back to the original question: what is amplitude energy loss? The spatial energy of an image is defined by its component sine waves and their amplitudes. This energy can be characterized by the spatial spectrum (energy as a function of spatial frequency)…

Spatial Spectrum: Amplitude as a function of spatial frequency.
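
In code, the amplitude spectrum of a 1-D luminance profile can be read off the FFT (a minimal sketch; the two component frequencies are arbitrary choices):

```python
import numpy as np

# Amplitude spectrum of a 1-D luminance profile via the FFT. The profile
# mixes two components (4 and 12 c/deg) with amplitudes 1.0 and 0.5.
n = 1024
x = np.linspace(0.0, 1.0, n, endpoint=False)          # 1 degree of extent
profile = np.sin(2 * np.pi * 4 * x) + 0.5 * np.sin(2 * np.pi * 12 * x)

spectrum = np.fft.rfft(profile)
freqs = np.fft.rfftfreq(n, d=1.0 / n)                 # cycles per degree
amplitudes = 2.0 * np.abs(spectrum) / n               # per-component amplitude
# The amplitude array peaks at 4 c/deg (~1.0) and 12 c/deg (~0.5); no
# position information survives, only the periodic energy content.
```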

Spatial scale and spatial frequency: summation of 1-D spatial frequency components to create complex spatial waveforms. As a series of components is added progressively to the waveform, the resultant image approximates a square wave more closely.
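
This progressive summation is the classic Fourier series of a square wave; the sketch below (the fundamental frequency and harmonic count are arbitrary) sums odd harmonics with 1/n amplitudes:

```python
import numpy as np

def square_wave_approx(x, f0, n_harmonics):
    """Partial Fourier series of a square wave: (4/pi) * sum over odd n
    of sin(2*pi*n*f0*x) / n. Higher harmonics enter with proportionally
    lower amplitudes."""
    y = np.zeros_like(x)
    for n in range(1, 2 * n_harmonics, 2):   # n = 1, 3, 5, ...
        y += np.sin(2 * np.pi * n * f0 * x) / n
    return (4.0 / np.pi) * y

x = np.linspace(0.0, 2.0, 2000)
rough = square_wave_approx(x, f0=1.0, n_harmonics=3)     # visibly wavy
sharper = square_wave_approx(x, f0=1.0, n_harmonics=50)  # near square wave
```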

Do a Fourier analysis to get the spatial spectrum. [Figure: amplitude of modulation of the object; the MTF, i.e., modulation of the image relative to the object as a function of spatial frequency (cycles/inch); and the filtered result, the amplitude of modulation of the image.]

F / F⁻¹: parts of the spectrum are continuous because all spatial frequencies are included in the object.

Note: Spatial spectra DO NOT represent position. They only represent the periodic energy content of the complex image across one dimension. [Axes: energy E vs. spatial frequency f.]

Again, these analyses are based on one-dimensional evaluations. Fourier analysis can be carried out across multiple dimensions to best represent the components of an image. The multi-dimensional Fourier transform is not always intuitive, however (e.g., for a checkerboard pattern): the energy of the spectrum conforms to a 45° distribution, with the fundamental at the shorter √2 distance from edge to edge.
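
A quick 2-D FFT check of the checkerboard case (image and check sizes are arbitrary): the strongest non-DC component falls on the diagonal, not on the horizontal or vertical axis:

```python
import numpy as np

# 2-D amplitude spectrum of a checkerboard. The board is a product of
# square waves in x and y, so its spectrum has no energy on the pure
# horizontal/vertical axes; the fundamental sits on the 45-degree diagonal.
n, check = 256, 16                       # image size, check size (pixels)
yy, xx = np.mgrid[0:n, 0:n]
board = ((xx // check + yy // check) % 2).astype(float)

spec = np.abs(np.fft.fftshift(np.fft.fft2(board - board.mean())))
peak = np.unravel_index(np.argmax(spec), spec.shape)
# peak lies at (128 +/- 8, 128 +/- 8): a diagonal position, sqrt(2) times
# farther from the origin than an on-axis component at the same frequency.
```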

The human MTF is also the CSF; it includes the optical degradation of the eye and neural spatial errors. [Figure: Mach-band intensity distribution as a function of position (minutes of arc), and its spectrum as a function of spatial frequency (cycles/degree), related by F and F⁻¹.]

The human MTF (i.e., the CSF) reinforces our perception of edges! This is, in fact, what the brain accomplishes.

F / F⁻¹: theoretically, based on MTF analysis, there should be no difference in the perceived image.

Problem: the visual system also operates on image position as well as on amplitude energy, and the latter is position-independent.

In fact, we already know there are spatially linear RFs that make up our visual substrate. So position is relevant. Our cells are not mere Fourier analyzers! Having said this, however, our concept of scale certainly has bearing on spatial frequency.

Images contain detail at many spatial scales. Fine-scale detail tells us about surface properties and texture; coarse-scale detail tells us about general shape and structure.

Spatial scale. Left: a photograph of a face. Middle: a low-pass filtered version of the face, retaining only coarse-scale (low spatial-frequency) information. Right: a high-pass filtered version of the face, retaining only fine-scale (high spatial-frequency) information.
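
Such scale splits can be sketched with a hard radial cutoff in the 2-D frequency domain (a minimal version under stated assumptions; real filters would use a smooth transition to avoid ringing):

```python
import numpy as np

def split_spatial_scales(image, cutoff):
    """Split an image into coarse (low-pass) and fine (high-pass) scales
    using a hard radial cutoff in the 2-D frequency domain."""
    spec = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot(yy - h / 2, xx - w / 2)
    low = np.fft.ifft2(np.fft.ifftshift(spec * (radius <= cutoff))).real
    return low, image - low

# Example: split a random "image" at a cutoff of 8 cycles per image.
img = np.random.rand(128, 128)
coarse, fine = split_spatial_scales(img, cutoff=8.0)
```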

Interestingly, the band-pass properties of the human CSF can actually be characterized by our multiple (spatially linear) RF distributions…

Why does our sensitivity cut off at high frequencies? Answer: the PSF; optical degradation limits spatial resolution.

Why does our sensitivity cut off at low frequencies? Answer: lateral antagonism; the strength of inhibition is inversely proportional to distance from the center.

Output

Finally, there are multiple parallel processing streams in our spatial system. Each stream is made up of cells tuned to a different frequency range (i.e., a different RF size). Evidence for this comes from adaptation and masking studies…

The size of an RF can encode the scale of the image.

Explaining the spatial contrast sensitivity function: the shape of the spatial contrast sensitivity function reflects the properties of the receptive fields that mediate our perception of spatial detail. Neurons with small receptive fields respond to high spatial frequencies; neurons with large receptive fields respond to low spatial frequencies. The shape of the contrast sensitivity function indicates that there are fewer, less responsive receptive fields at extreme sizes.
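
The standard center–surround model behind this account is the difference-of-Gaussians (DoG), mentioned again below for LGN cells; here is a small sketch (all sigmas are illustrative values):

```python
import numpy as np

def dog_receptive_field(x, sigma_center, sigma_surround, k=1.0):
    """Difference-of-Gaussians (DoG) center-surround receptive field:
    a narrow excitatory center minus a broader inhibitory surround."""
    center = np.exp(-x**2 / (2 * sigma_center**2)) / sigma_center
    surround = np.exp(-x**2 / (2 * sigma_surround**2)) / sigma_surround
    return (center - k * surround) / np.sqrt(2 * np.pi)

x = np.linspace(-2.0, 2.0, 1001)                    # position in degrees
small_rf = dog_receptive_field(x, 0.05, 0.15)       # favors high frequencies
large_rf = dog_receptive_field(x, 0.20, 0.60)       # favors low frequencies
# Each DoG is band-pass in the frequency domain; pooling over many RF
# sizes yields an envelope shaped like the CSF.
```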

[Figure: contrast sensitivity after adapting to a low vs. a high spatial frequency.]

[Figure: sensitivity S vs. c/deg after adaptation; a test grating appears “lower” in frequency in one case and “higher” in the other.]

[Figure: adapt to a spatial frequency; after prolonged adaptation, sensitivity around the adapted frequency shifts.]

Spatial filters in the visual system: contrast sensitivity functions of cells in the visual system. Top: parvo (open symbols) and magno (filled symbols) LGN cells, re-plotted from Derrington and Lennie (1984, Figures 3A and 10A; curves are based on best-fitting difference-of-Gaussian functions). Bottom: striate cortical cells (re-plotted from De Valois, Albrecht, & Thorell, 1982). Cortical cells have much narrower spatial frequency tuning than LGN cells. LGN: two broadband operators (PC and MC). V1 simple cells: more selective, narrowband operators, suggesting intracortical operations involving flanking antagonistic responses (i.e., more than simply a linear aggregate of LGN cells).

NOTE: Behavioral evidence from adaptation and masking studies (superimposing gratings of slightly different spatial frequencies and measuring resultant changes in sensitivity) correlates well with cortical cell measurements!

Spatiotemporal contrast sensitivity. We can examine spatial and temporal sensitivity together by measuring the visibility of flickering gratings (see the sketch below).
- Spatial and temporal parameters interact, so there is a trade-off between spatial acuity and temporal acuity.
- At low temporal frequencies, sensitivity is highest at medium spatial frequencies; at high temporal frequencies, sensitivity is highest at low spatial frequencies.
- Spatiotemporal contrast sensitivity probably reflects the contribution of two parallel pathways or channels of processing.
- At low temporal frequencies, contrast sensitivity reflects the activity of cells in the parvo division; at high temporal frequencies, it reflects the activity of cells in the magno division.
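
The counterphase (contrast-reversing) grating used for such measurements can be written down directly (all parameter values below are illustrative):

```python
import numpy as np

# Space-time plot of a counterphase (contrast-reversing) grating:
# L(x, t) = L0 * (1 + C * sin(2*pi*fs*x) * cos(2*pi*ft*t)).
L0, C = 50.0, 0.5            # mean luminance, contrast (illustrative)
fs, ft = 2.0, 8.0            # spatial (c/deg) and temporal (Hz) frequency
x = np.linspace(0.0, 1.0, 256)            # degrees
t = np.linspace(0.0, 1.0, 256)            # seconds
X, T = np.meshgrid(x, t)
grating = L0 * (1 + C * np.sin(2 * np.pi * fs * X) * np.cos(2 * np.pi * ft * T))
# Horizontal slices (fixed t) show the spatial frequency; vertical slices
# (fixed x) show the flicker temporal frequency.
```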

Steady sinusoidal gratings: low spatial frequency (cpd) gratings with varying temporal frequencies (c/sec or Hz). [Figure legend: lesioned parvocellular stream (low pass); lesioned parvocellular stream (band pass); lesioned magnocellular stream (low pass); lesioned magnocellular stream (band pass).]

Spatiotemporal contrast sensitivity. Top: space–time plot of a spatial grating that repetitively reverses in contrast over time. The grating’s spatial frequency is defined by the rate of contrast alternation across space (horizontal slices through the panel); its flicker temporal frequency is defined by the rate of contrast alternation across time (vertical slices through the panel). Bottom: contrast sensitivity for flickering gratings as a function of their spatial frequency (horizontal axis) and temporal frequency (different curves), re-drawn from Robson (1966). Spatial sensitivity is band-pass at low temporal frequencies (filled squares), but low-pass at high temporal frequencies (filled circles).

Functional significance of multiple spatial filters. The visual system uses spatial filters in several important tasks:
- Edge localization
- Texture analysis
- Stereo and motion analysis

Edge localization 1: Vernier acuity. We can assign locations to (localise) luminance edges with very high precision. Vernier acuity is typically 1/6th of the distance between adjacent photoreceptors.

Edge localization 2: explaining Vernier acuity. The retinal image of an edge is spread optically over a distance covered by 5 or 6 receptors (top graph; i.e., in the fovea). A very small (fine-scale) change in edge position causes the response of each receptor to change by up to 25% (middle and bottom graphs). A small center–surround receptive field computing the difference between adjacent receptor responses would detect the shift in position; large receptive fields detect larger shifts in position. A number of computational theories have been proposed for how information about edge position is extracted from receptive field responses.
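
The logic can be sketched numerically (the receptor spacing, blur width, and resulting response changes are all illustrative, not the figure's exact numbers):

```python
import numpy as np
from math import erf

def receptor_responses(edge_position, spacing=1.0, blur_sigma=1.5, n=8):
    """Responses of a row of photoreceptors to an optically blurred edge,
    modeled as a cumulative Gaussian. Spacing and blur are in
    receptor-width units; all values are illustrative."""
    positions = np.arange(n) * spacing
    z = (positions - edge_position) / (blur_sigma * np.sqrt(2.0))
    return np.array([0.5 * (1.0 + erf(v)) for v in z])

r0 = receptor_responses(edge_position=3.5)
r1 = receptor_responses(edge_position=3.5 + 1.0 / 6.0)  # sub-receptor shift
# Even a 1/6-receptor shift measurably changes individual responses,
# which a small center-surround difference operator could detect.
print(np.max(np.abs(r1 - r0)))
```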

David Marr (1980): spatial scale & position (edge detection). The Raw Primal Sketch is derived from a computational approach to images: dy/dx gives the slope of y = f(x), and the zero crossings of d²y/dx² localize the edges of f(x).
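
A minimal sketch of the zero-crossing idea (1-D, with Gaussian blur before differentiation; all parameters are illustrative):

```python
import numpy as np

# Blur a 1-D luminance step with a Gaussian, take the second derivative,
# and mark the edge where d2y/dx2 crosses zero.
n, half = 200, 15
signal = np.where(np.arange(n) < n // 2, 10.0, 20.0)   # a step edge

t = np.arange(-half, half + 1)
gaussian = np.exp(-t**2 / (2 * 3.0**2))
gaussian /= gaussian.sum()

# Pad with edge values so the borders do not create spurious inflections.
padded = np.concatenate([np.full(half, signal[0]), signal,
                         np.full(half, signal[-1])])
blurred = np.convolve(padded, gaussian, mode="same")[half:-half]

d2 = np.diff(blurred, n=2)                              # discrete d2y/dx2
edges = np.where(np.sign(d2[:-1]) * np.sign(d2[1:]) < 0)[0] + 1
print(edges)   # the zero crossing sits at the luminance step (~n // 2)
```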