CSC321: Neural Networks Lecture 18: Distributed Representations

CSC321: Neural Networks Lecture 18: Distributed Representations Geoffrey Hinton

Localist representations
- The simplest way to represent things with neural networks is to dedicate one neuron to each thing.
  - Easy to understand.
  - Easy to code by hand. Often used to represent inputs to a net.
  - Easy to learn. This is what mixture models do: each cluster corresponds to one neuron.
  - Easy to associate with other representations or responses.
- But localist models are very inefficient whenever the data has componential structure.
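As a concrete picture of "one neuron per thing", here is a minimal sketch in Python/NumPy; the list of things is illustrative, not from the lecture.

    import numpy as np

    # Localist code: dedicate one neuron (one vector component) to each thing.
    things = ["dog", "cat", "car"]          # illustrative list of "things"

    def localist(thing):
        """One-hot vector: a single dedicated neuron is active for each thing."""
        v = np.zeros(len(things))
        v[things.index(thing)] = 1.0
        return v

    print(localist("cat"))   # [0. 1. 0.] -- easy to read and to hand-code,
                             # but needs a separate neuron for every distinct thing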

Examples of componential structure
- Big, yellow, Volkswagen.
  - Do we have a neuron for this combination?
  - Is the BYV neuron set aside in advance, or is it created on the fly?
  - How is it related to the neurons for big and yellow and Volkswagen?
- Consider a visual scene.
  - It contains many different objects.
  - Each object has many properties like shape, color, size, and motion.
  - Objects have spatial relationships to each other.
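A hedged sketch of the counting argument: a localist code needs one neuron per conjunction, while a componential code only needs one small bank of neurons per attribute. The extra attribute values ("small", "Ford", etc.) are made-up examples.

    import itertools
    import numpy as np

    sizes  = ["big", "small"]
    colors = ["yellow", "blue", "red"]
    makes  = ["Volkswagen", "Ford", "Toyota"]

    # Localist over conjunctions: one neuron for every (size, color, make) triple.
    conjunctions = list(itertools.product(sizes, colors, makes))
    print(len(conjunctions))        # 2 * 3 * 3 = 18 neurons, growing multiplicatively

    # Componential code: one small one-hot block per attribute, concatenated.
    def componential(size, color, make):
        blocks = [np.eye(len(group))[group.index(value)]
                  for group, value in ((sizes, size), (colors, color), (makes, make))]
        return np.concatenate(blocks)

    print(componential("big", "yellow", "Volkswagen"))   # only 2 + 3 + 3 = 8 units needed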

Using simultaneity to bind things together
[Diagram: a pool of shape neurons and a pool of color neurons.]
- Represent conjunctions by activating all the constituents at the same time.
  - This doesn't require connections between the constituents.
- But what if we want to represent "yellow triangle" and "blue circle" at the same time?
  - Maybe this explains the serial nature of consciousness. And maybe it doesn't!
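To see why simultaneity alone is not enough, here is a small sketch (the unit pools and the scenes are illustrative): superimposing the constituents of "yellow triangle plus blue circle" gives exactly the same activity pattern as "blue triangle plus yellow circle".

    import numpy as np

    shapes = ["triangle", "circle"]
    colors = ["yellow", "blue"]

    def one_hot(group, value):
        v = np.zeros(len(group))
        v[group.index(value)] = 1.0
        return v

    def scene(*objects):
        """Bind by simultaneity: just superimpose the active shape and color units."""
        act = np.zeros(len(shapes) + len(colors))
        for shape, color in objects:
            act += np.concatenate([one_hot(shapes, shape), one_hot(colors, color)])
        return act

    a = scene(("triangle", "yellow"), ("circle", "blue"))
    b = scene(("triangle", "blue"), ("circle", "yellow"))
    print(np.array_equal(a, b))   # True: the two scenes are indistinguishable (the binding problem)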

Using space to bind things together
- Conventional computers can bind things together by putting them into neighboring memory locations.
- This works nicely in vision: surfaces are generally opaque, so we only get to see one thing at each location in the visual field.
- If we use topographic maps for different properties, we can assume that properties at the same location belong to the same thing.
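A toy sketch of binding by location (a data-structure illustration, not a model of cortex): each property gets its own topographic map, and properties that land at the same coordinates are read out together. The grid size and the objects are arbitrary choices.

    import numpy as np

    H, W = 4, 4                                       # a tiny "retina"
    color_map = np.full((H, W), None, dtype=object)   # one topographic map per property
    shape_map = np.full((H, W), None, dtype=object)

    # Two objects at different locations; binding comes for free from the shared coordinates.
    color_map[1, 1], shape_map[1, 1] = "yellow", "triangle"
    color_map[3, 2], shape_map[3, 2] = "blue", "circle"

    def what_is_at(r, c):
        """Properties stored at the same location are assumed to belong to the same thing."""
        return color_map[r, c], shape_map[r, c]

    print(what_is_at(1, 1))   # ('yellow', 'triangle')
    print(what_is_at(3, 2))   # ('blue', 'circle')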

The definition of “distributed representation”
- Each neuron must represent something, so at that level it is a local representation of something.
- “Distributed representation” means a many-to-many relationship between two types of representation (such as concepts and neurons):
  - Each concept is represented by many neurons.
  - Each neuron participates in the representation of many concepts.
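The many-to-many relationship can be shown with a tiny binary code table; the particular 0/1 patterns are made up for illustration.

    import numpy as np

    # Rows are concepts, columns are neurons.
    codes = np.array([[1, 1, 0, 1, 0],    # concept A
                      [0, 1, 1, 0, 1],    # concept B
                      [1, 0, 1, 1, 0]])   # concept C

    print(codes.sum(axis=1))   # neurons per concept: [3 3 3] (each concept uses many neurons)
    print(codes.sum(axis=0))   # concepts per neuron: [2 2 2 2 1] (most neurons serve several concepts)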

Coarse coding
- Using one neuron per entity is inefficient.
  - An efficient code would have each neuron active half the time.
  - This might be inefficient for other purposes (like associating responses with representations).
- Can we get accurate representations by using lots of inaccurate neurons?
  - If we can, it would be very robust against hardware failure.

Coarse coding
- Use three overlapping arrays of large cells to get an array of fine cells.
- If a point falls in a fine cell, code it by activating 3 coarse cells.
- This is more efficient than using a neuron for each fine cell:
  - it loses by needing 3 arrays,
  - it wins by a factor of 3x3 per array,
  - so overall it wins by a factor of 3.
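Here is a minimal working sketch of this scheme in Python. The specific diagonal shift of the three coarse arrays is my assumption about how the overlap is arranged; it is chosen so that the three active coarse cells always intersect in exactly one fine cell.

    COARSE = 3            # each coarse cell covers a 3x3 block of fine cells
    OFFSETS = (0, 1, 2)   # three overlapping coarse arrays, shifted diagonally by one fine cell

    def encode(x, y):
        """Activate one coarse cell per array: three active units code one fine cell."""
        return [((x + o) // COARSE, (y + o) // COARSE, o) for o in OFFSETS]

    def decode(active):
        """Intersect the three active coarse cells to recover the fine cell."""
        xs = [set(range(cx * COARSE - o, cx * COARSE - o + COARSE)) for cx, _, o in active]
        ys = [set(range(cy * COARSE - o, cy * COARSE - o + COARSE)) for _, cy, o in active]
        (fx,) = set.intersection(*xs)
        (fy,) = set.intersection(*ys)
        return fx, fy

    assert decode(encode(7, 2)) == (7, 2)   # lossless for integer fine-cell coordinates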

How efficient is coarse coding?
- The efficiency depends on the dimensionality.
  - In one dimension, coarse coding does not help.
  - In 2-D, the saving in neurons is proportional to the ratio of the coarse radius to the fine radius.
- In k dimensions, by increasing the radius by a factor of r we can keep the same accuracy as with fine fields and get a saving of

      s = c * r^(k-1)

  where s is the saving in neurons without loss of accuracy, r is the ratio of the radii of the coarse and fine fields, and c is a constant.
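A quick numerical check of this scaling, assuming the same shifted-array construction sketched above (r shifted arrays, each with (n/r)^k big cells, covering an n^k grid of fine cells):

    def saving(k, r, n=30):
        """Neurons for an n^k fine grid vs. r shifted arrays of (n/r)^k coarse cells."""
        fine = n ** k
        coarse = r * (n // r) ** k
        return fine / coarse

    print(saving(1, 3))   # 1.0 -- no help in one dimension
    print(saving(2, 3))   # 3.0 -- the 3x3 example from the previous slide
    print(saving(3, 3))   # 9.0 -- grows as r**(k-1)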

Coarse regions and fine regions use the same surface
[Diagram: the same surface divided into many small fine regions and a few large coarse regions.]
- Each binary neuron defines a boundary between the k-dimensional points that activate it and the points that don't.
- To get lots of small regions we need a lot of boundary.

Limitations of coarse coding
- It achieves accuracy at the cost of resolution.
  - Accuracy is defined by how much a point must be moved before the representation changes.
  - Resolution is defined by how close points can be and still be distinguished in the representation.
  - Representations can overlap and still be decoded if we allow integer activities of more than 1.
- It makes it difficult to associate very different responses with similar points, because their representations overlap.
  - This is useful for generalization.
- The boundary effects dominate when the fields are very big.

Coarse coding in the visual system
- As we get further from the retina, the receptive fields of neurons get bigger and bigger and require more complicated patterns.
  - Most neuroscientists interpret this as neurons exhibiting invariance.
  - But it's also just what would be needed if neurons wanted to achieve high accuracy for properties like position, orientation and size.
- High accuracy is needed to decide if the parts of an object are in the right spatial relationship to each other.

Representing relational structure
- “George loves Peace.”
  - How can a proposition be represented as a distributed pattern of activity?
  - How are the neurons representing different propositions related to each other and to the terms in the proposition?
  - We need to represent the role of each term in the proposition.

A way to represent structures
[Diagram: one group of neurons for each role (agent, action, object, beneficiary), with filler concepts such as George, Tony, Peace, War, Worms, Chips, Fish, Love, Hate, Eat and Give that can occupy the roles.]
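One way to read the diagram as code, a hedged sketch rather than the lecture's own mechanism: each role gets its own bank of units, and a proposition is the pattern formed by dropping each filler's distributed pattern into the bank for its role. The pattern size and the random filler patterns are arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(0)
    fillers = ["George", "Tony", "Peace", "War", "Worms", "Chips", "Fish",
               "Love", "Hate", "Eat", "Give"]
    roles = ["agent", "action", "object", "beneficiary"]
    D = 8                                                    # units per role bank (illustrative)
    filler_vec = {f: rng.standard_normal(D) for f in fillers}

    def proposition(**bindings):
        """Place each filler's pattern in the bank for its role; empty roles stay at zero."""
        banks = {r: np.zeros(D) for r in roles}
        for role, filler in bindings.items():
            banks[role] = filler_vec[filler]
        return np.concatenate([banks[r] for r in roles])

    p = proposition(agent="George", action="Love", object="Peace")
    q = proposition(agent="Peace", action="Love", object="George")
    print(np.array_equal(p, q))   # False: the role banks keep "who does what to whom" distinct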

The recursion problem
- “Jacques was annoyed that Tony helped George.”
  - One proposition can be part of another proposition. How can we do this with neurons?
- One possibility is to use “reduced descriptions”.
  - In addition to having a full representation as a pattern distributed over a large number of neurons, an entity may have a much more compact representation that can be part of a larger entity.
  - It's a bit like pointers: we have the full representation for the object of attention and reduced representations for its constituents.
  - This theory requires mechanisms for compressing full representations into reduced ones and expanding reduced descriptions into full ones.
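A minimal sketch of the compress/expand plumbing. Everything here is an assumption for illustration: the dimensions are arbitrary, and the linear maps are random stand-ins for maps that would actually have to be learned (for example by an autoencoder), so no faithful reconstruction is claimed.

    import numpy as np

    rng = np.random.default_rng(1)
    FULL, REDUCED = 32, 8   # illustrative sizes for a full pattern and its compact stand-in

    # Random matrices only show the flow of information, not a working memory.
    compress = rng.standard_normal((REDUCED, FULL))
    expand = rng.standard_normal((FULL, REDUCED))

    full_tony_helped_george = rng.standard_normal(FULL)   # stand-in for a full proposition
    reduced = compress @ full_tony_helped_george          # pointer-like reduced description

    # The reduced pattern is small enough to occupy one role slot of a larger proposition,
    # e.g. the object of "Jacques was annoyed that ...".
    outer = np.concatenate([rng.standard_normal(REDUCED),   # agent slot: Jacques
                            rng.standard_normal(REDUCED),   # action slot: was-annoyed-that
                            reduced])                       # object slot: the embedded proposition

    recovered = expand @ reduced            # expanding brings the constituent back to full size
    print(outer.shape, recovered.shape)     # (24,) (32,)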