Connectionism

ASSOCIATIONISM

Associationism David Hume (1711-1776) was one of the first philosophers to develop a detailed theory of mental processes.

Associationism “There is a secret tie or union among particular ideas, which causes the mind to conjoin them more frequently together, and makes the one, upon its appearance, introduce the other.”

Three Principles 1. Resemblance 2. Contiguity in space and time 3. Cause and effect

Constant Conjunction

Causal Association

Vivacity Hume thought the different ideas you have differ in their levels of “vivacity”: how clear or lively they are. (Compare seeing an elephant to remembering an elephant.)

Belief To believe an idea was for that idea to be very vivacious. Importantly, causal association is vivacity-preserving. If you believe the cause, then you believe its effect.

Constant Conjunction

Hume’s Non-Rational Mind Hume thus had a model of mental processes that was non-rational. Associative principles aren’t truth-preserving; they are vivacity-preserving. (Hume thought this was a positive feature, because he thought that you could not rationally justify causal reasoning.)

Classical Conditioning And as we saw before, the associationist paradigm continued into psychology after it became a science.

Connectionism Connectionism is the “new” associationism.

CONNECTIONISM

Names Connectionist Network Artificial Neural Network Parallel Distributed Processors

[Diagram: a small network. Input nodes labeled High, Middle, and Low are joined by weighted connections to two output nodes, Mine and Rock; in the worked example the input activations are 3, 1, and 9.]

Weights Each connection has its own weight between -1 and 1. The weights correspond to how much of each node’s “message” is passed on. In this example, if the weight is +0.5, then the Low node passes on 3 x 0.5 = 1.5.
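The slide’s arithmetic is small enough to check directly; a one-line Python sketch, using the slide’s own example numbers:

```python
# Each connection multiplies the sending node's activation by its weight.
low_activation = 3   # the Low node's activation, from the slide
weight = 0.5         # the example connection weight

message = low_activation * weight
print(message)  # 1.5, as in the slide's example
```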

[Diagram: the same network with input activations 3, 1, and 9; an output node applies its activation function to the sum of its weighted inputs, here f(-2).]

Activation Function Each non-input node has an activation function. This tells it how active to be, given the sum of its inputs. Often the activation functions are just on/off: f(x) = 1, if x > 0; otherwise f(x) = 0
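Putting the weighted connections and the on/off activation function together, a non-input node can be sketched in a few lines of Python. The inputs 3, 1, 9 come from the diagram; the three weights are invented illustration values, since the slides don’t give them:

```python
def step(x):
    """On/off activation: fire (1) if the summed input is positive, else stay off (0)."""
    return 1 if x > 0 else 0

def node_output(inputs, weights):
    """A non-input node: weight each incoming message, sum them, apply f."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return step(total)

# Hypothetical weights; the input activations 3, 1, 9 are the slide's.
print(node_output([3, 1, 9], [0.5, -0.2, -0.4]))  # total = 1.5 - 0.2 - 3.6 = -2.3, so 0
```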

[Diagram: the same network with input activations 3, 1, and 9, and the values 0, 1, and 2 at later nodes.]

Training a Connectionist Network STEP 1: Assign weights to the connections at random.

Training a Connectionist Network STEP 2: Gather a very large number of categorization tasks to which you know the answer. For example, a large number of echoes where you know whether they are from rocks or from mines. This is the “training set.”

Training a Connectionist Network STEP 3: Randomly select one echo from the training set. Give it to the network.

Back Propagation STEP 4: If the network gets the answer right, do nothing. If it gets the answer wrong, find all the connections that supported the wrong answer and adjust them down slightly. Find all the ones that supported the right answer and adjust them up slightly.

Repeat! STEP 5: Repeat the testing-and-adjusting thousands of times. Now you have a trained network.
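The five steps can be collected into a minimal Python sketch. To keep it short, this uses a single layer of weights and a perceptron-style nudge rather than full back-propagation through hidden nodes, and the four “echoes” are invented toy data:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def step(x):
    """On/off activation: 1 if the summed input is positive, else 0."""
    return 1 if x > 0 else 0

def train(training_set, n_inputs, rounds=50000, rate=0.1):
    # STEP 1: assign weights at random, between -1 and 1.
    weights = [random.uniform(-1, 1) for _ in range(n_inputs)]
    for _ in range(rounds):
        # STEP 3: randomly select one known example and give it to the network.
        inputs, correct = random.choice(training_set)
        answer = step(sum(i * w for i, w in zip(inputs, weights)))
        # STEP 4: if wrong, nudge every weight slightly toward the right answer.
        if answer != correct:
            for j, i in enumerate(inputs):
                weights[j] += rate * (correct - answer) * i
        # STEP 5 is the loop itself: repeat thousands of times.
    return weights

# STEP 2: a toy, invented "training set" of echoes with known answers; 1 = mine.
data = [([3, 1, 9], 1), ([9, 1, 3], 0), ([4, 2, 8], 1), ([8, 2, 4], 0)]
w = train(data, n_inputs=3)
print([step(sum(i * wj for i, wj in zip(x, w))) for x, _ in data])  # [1, 0, 1, 0]
```

After training, the network classifies all four echoes correctly; the answer is carried by the adjusted weights, not by any explicit rule.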

Important Properties of Connectionist Networks 1. Connectionist networks can learn. (If they have access to thousands of right answers, and someone is around to adjust the weights of their connections. As soon as they stop being “trained” they never learn a new thing again.)

Learning If we suppose that networks train themselves (and no one knows how this could happen), learning is still a problem: The system, though it can learn, can’t remember. In altering its connections, it alters the traces of its former experiences.

Parallel Processing 2. Connectionist networks process in parallel. Serial computation works through a problem one step at a time.

Parallel Processing A parallel computation might work like this: I want to solve a really complicated math problem, so I assign small parts of it to each student in class. They work “in parallel” and together we solve the problem faster than one processor working serially.
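The divide-the-work idea from the classroom analogy can be sketched in Python, with threads standing in for the students:

```python
from concurrent.futures import ThreadPoolExecutor

# Split one big sum into chunks, let several workers add their chunk
# "in parallel", then combine the partial answers.
numbers = list(range(1, 1001))
chunks = [numbers[i:i + 250] for i in range(0, 1000, 250)]

with ThreadPoolExecutor(max_workers=4) as pool:
    partial_sums = list(pool.map(sum, chunks))

print(sum(partial_sums))  # 500500, the same answer as summing serially
```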

Distributed Representations 3. Representations in connectionist networks are distributed. Information about the ‘shape’ of the object (in sonar echoes) is encoded not in any one node or connection, but across all the nodes and connections.

Local Processing 4. Processing in a connectionist network is local. There is no central processor controlling what happens in a connectionist network. The only thing that determines whether a node activates is its activation function and its inputs. There’s no program telling it what to do.

Graceful Degradation 5-6. Connectionist networks tolerate low-quality inputs, and can still work even as some of their parts begin to fail. Since computing and representation are distributed throughout the network, even if part of it is destroyed or isn’t receiving input, the whole will still work pretty well.

CONNECTIONISM AND THE BRAIN

Brain = Neural Network? One of the main points of interest of connectionism is the idea that the human brain might be a connectionist network.

Neurons A neuron receives inputs from a large number of other neurons, some of which “inhibit” it and others of which “excite” it. At a certain threshold, it fires.

Neurons Neurons are hooked up ‘in parallel’: different chains of activation and inhibition can operate independently of one another.

Neurons But is the brain really a neural network?

Spike Trains Neurons fire in ‘spikes’ and many brain researchers think they communicate in the frequency of spikes over time. That’s not a part of connectionism.

Spike Trains (Another hypothesis is that they communicate information by firing in the same patterns as other neurons.)

Back Propagation There’s also no evidence of connectionist-style training. The brain has no (known) means of changing the connections between neurons that “contribute to the wrong answer.”

Close Enough An alternate view might be that while brains aren’t neural networks, they are like neural networks. Furthermore, they are more like neural networks than they are like universal computers because (so the argument goes) neural networks are good at what we’re good at and bad at what we’re bad at.

PROBLEM CASES

Logic, Math Universal computers can solve logic problems or math problems with very high accuracy, and with very few steps. Neural networks need extensive training and a large number of nodes to achieve even moderate accuracy on such tasks.

Bechtel & Abrahamsen Ravenscroft describes a case where Bechtel & Abrahamsen built a connectionist network that was supposed to tell whether an argument was valid or invalid (out of 12 possible argument forms). For instance, it might be given: (P → Q), Q ⊢ P (an invalid form: affirming the consequent).

The Logic Network After ½ million training sessions, it was 76% accurate. After 2.5 million training sessions, it was 84% accurate. I’ve known students who were 99% accurate, for a larger range of problems, after a couple dozen examples.

Language For the same reason, language poses a problem. Human spoken languages have a similar structure to computer programming languages (that’s intentional). So it’s very hard to get a connectionist network that can speak grammatically.

IMPLEMENTATION

Simulation Every universal computer can simulate every connectionist network. In fact, almost no connectionist networks physically exist. When you read about researchers “designing” networks, they are designing them virtually, inside a universal computer. And when they “train” the networks, the computer does the training.

Simulation So the mind could be a universal computer that simulates a neural network. But… that would be strange and wasteful. Why throw out all your computational power to simulate something weaker?

A more interesting idea is that maybe the mind is a universal computer implemented by a connectionist network of neurons. Most connectionist networks are not universal computers. But some are.

[Diagram: two-input threshold units with every connection weight fixed at 1. One unit’s activation function is f(x) = 1 if x ≥ 1, 0 otherwise; the other’s is f(x) = 1 if x = 2, 0 otherwise.]
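The final diagram is the key point in miniature: with all weights fixed at 1, those two activation functions turn a threshold unit into the logic gates OR and AND (the observation behind McCulloch and Pitts’ networks). A minimal Python sketch:

```python
def unit(inputs, threshold):
    """A two-input threshold unit with both connection weights fixed at 1."""
    total = sum(inputs)  # all weights are 1, so the weighted sum is just the sum
    return 1 if total >= threshold else 0

def OR(a, b):   # fires if at least one input fires: f(x) = 1 if x >= 1
    return unit([a, b], threshold=1)

def AND(a, b):  # fires only if both fire: f(x) = 1 if x = 2 (same as x >= 2 for binary inputs)
    return unit([a, b], threshold=2)

print([OR(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])   # [0, 1, 1, 1]
print([AND(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 0, 0, 1]
```

Chaining units like these yields the building blocks of a universal computer, which is what gives the implementation idea its force.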