Neural Network Models of Intelligence: Can they help us understand the human mind?

Why try to build a mind?
"The ultimate test of understanding something is being able to recreate it." -- Demis Hassabis
"What I cannot create, I do not understand." -- Richard Feynman

To what purpose?
- To build better artificial systems to complement or support human intelligence. This is the primary goal at Google, Amazon, and Facebook.
- To provide new insights into how our own minds work. This is the goal we focus on in this class. We will learn about work done for other goals to help us in this effort, and we will also think about the aspects of human cognition these models can help us understand.

What Kind of an Artificial Mind?
The point of this slide is to emphasize that for at least the last 65 years, approaches to understanding intelligence have been inspired both by the nature of the human brain and by the capabilities of computers – these tools have inspired contrasting approaches. You can hint that the approaches are beginning to merge, though you may want to save that point for near the end.

Motivations from the brain
- The brain looks like a neural network. (The slide shows just a tiny subset of the neurons – and fails to show all the synapses onto them.)
- Neurons respond to interesting features in the environment in a graded and continuous way.
- Excitation and inhibition shape neural responses.
- Such influences can explain contrast effects in human perception.
I like this classic figure because you can see the inhibitory connections in the lateral plexus. Light shone on just one of the ommatidia results in strong excitation; if you shine a light on both at the same time, neither fires as strongly, due to inhibition through the lateral plexus. Grey looks darker against a bright surround than against a dark one.
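
To make the excitation-and-inhibition point concrete, here is a toy sketch of two mutually inhibiting units, loosely in the spirit of the lateral-plexus findings. The coupling constant and the linear update are illustrative assumptions, not Hartline's actual equations:

    import numpy as np

    def steady_rates(light, k=0.3, n_iter=50):
        # Each unit's firing is its own excitation minus inhibition
        # from its neighbor; k = 0.3 and the linear form are assumptions.
        e = np.array(light, dtype=float)
        r = e.copy()
        for _ in range(n_iter):
            r = np.maximum(0.0, e - k * r[::-1])   # neighbor inhibits
        return r

    print(steady_rates([10, 0]))    # one unit lit: fires at full strength
    print(steady_rates([10, 10]))   # both lit: each suppresses the other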

Motivations from the effort to understand human abilities
- Contextual effects in perception
- Multiple graded constraint satisfaction in many domains
All aspects of an input influence how we see and interpret all other aspects – note "went" and "event" in the sentences.

Neural network models as constraint-satisfaction systems
The interactive activation model:
- Perception emerges from excitatory and inhibitory interactions among simple neuron-like processing units.
- Captured context effects in perception that had been observed in human experiments.
- The same principles captured related effects in speech and other domains.
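
A minimal sketch of the kind of settling process the interactive activation model uses. The two-unit setup and all parameter values here are illustrative assumptions, greatly simplified from the published model:

    import numpy as np

    def ia_step(a, W, ext, rate=0.1, decay=0.1, rest=0.0):
        # One settling step: positive net input pushes a unit toward its
        # max (1.0), negative net input toward its min (-0.2), with
        # decay back toward the resting level.
        net = W @ a + ext
        da = np.where(net > 0, net * (1.0 - a), net * (a + 0.2))
        return np.clip(a + rate * (da - decay * (a - rest)), -0.2, 1.0)

    # Two mutually inhibitory "word" units receiving unequal support:
    W = np.array([[0.0, -0.5], [-0.5, 0.0]])
    a = np.zeros(2)
    for _ in range(200):
        a = ia_step(a, W, ext=np.array([0.6, 0.4]))
    print(a)   # the better-supported unit wins the competition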

Can neural networks help us understand how we learn?
Nearly all applications of neural networks involve things we learn to do. Can these models help us understand the nature and time course of human learning?

The perceptron
- Sums the weights from active inputs: Sum = W1 + W3 (here, inputs 1 and 3 are the active ones).
- If Sum exceeds Threshold: output = 1; else output = 0.
- If output = 0 and target = 1: increase the W's from active inputs.
- If output = 1 and target = 0: decrease the W's from active inputs.
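
The rule above is short enough to state directly in code. A minimal NumPy sketch; the OR training set, learning rate, and threshold value are illustrative assumptions:

    import numpy as np

    def perceptron_train(inputs, targets, n_epochs=10, lr=1.0, threshold=0.5):
        # Classic perceptron rule: raise weights from active inputs on a
        # miss (output 0, target 1); lower them on a false alarm.
        w = np.zeros(inputs.shape[1])
        for _ in range(n_epochs):
            for x, t in zip(inputs, targets):
                output = 1 if w @ x > threshold else 0
                w += lr * (t - output) * x   # only active inputs (x = 1) change
        return w

    # Example: learning logical OR over two binary inputs
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    y = np.array([0, 1, 1, 1])
    print(perceptron_train(X, y))   # e.g. [1. 1.]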

Strengths and weaknesses of perceptrons
- Inspired by the brain and by statistics
- Designed to learn from experience
- Used an error-correcting learning rule
- Learning was limited to one layer of connections, limiting the functions that could be learned
- An explicit teaching signal was required
- Units were binary, impeding propagation of learning signals
- Initial progress was slow because computers were slow and expensive
If you want to, you can note that the binary nature of the units was one limitation that had to be overcome before we could figure out how to train multi-layer neural networks.

The second wave: Overcoming the limitations of the perceptron
- Continuous but non-linear units
- Multiple layers, able to compute any computable function
- Continuity allows computing derivatives
- Computers are 1,000 times faster and cheaper
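
A minimal sketch of what these two advances buy us: continuous (sigmoid) units have derivatives, so error can be propagated through a hidden layer, and the resulting two-layer network can learn XOR, which no single-layer perceptron can compute. The layer sizes, learning rate, and iteration count are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # XOR: unlearnable in one layer, learnable with a hidden layer.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0.0], [1.0], [1.0], [0.0]])
    W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
    W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)

    for _ in range(5000):
        h = sigmoid(X @ W1 + b1)        # hidden layer
        out = sigmoid(h @ W2 + b2)      # output layer
        # Continuous units have derivatives, so error can flow backward:
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
        W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(0)

    print(out.round(2))   # typically approaches [0, 1, 1, 0]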

Early models of language and reading
When we learn language or when we learn to read, we gradually acquire implicit encoding and responding abilities that can be described by rules … guided by an environment that encourages these tendencies. Fairly early models with just one hidden layer were able to capture many of these characteristics.

What aspects of intelligence could a neural network actually capture?
Basic skills, such as:
- Perception
- Skilled action
- Basic aspects of language and reading
- Numerical intuition
But what about:
- Comprehension of sentences with complex structure and negation
- Hard games like chess and Go
Or even:
- Inferring causality
- Social cognition
- Mathematical and scientific reasoning
- Explanation and justification
- …
The point here is that most people are OK with the idea that there are basic aspects of cognition that neural networks can help us understand, but we now begin to see more clearly how they may help us with other aspects.

The third wave: The emergence of deep learning
- Breakthroughs in machine speech perception and object classification
- Exciting new applications in machine translation
- Human-level action-game performance and world-class performance in Go

One variety of neural network: the convolutional neural network (CNN)
These models are not only effective for machine vision; as we will see, they also provide detailed accounts of the response properties of neurons in the visual cortex.
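
For concreteness, here is what a small CNN might look like in tf.keras, the toolkit this course uses; the layer sizes and input shape are illustrative assumptions, not a model discussed in the lecture:

    import tensorflow as tf

    # A minimal convolutional classifier for 28x28 grayscale images.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(28, 28, 1)),
        tf.keras.layers.Conv2D(16, 3, activation='relu'),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation='relu'),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(10, activation='softmax'),
    ])
    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    model.summary()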

Other varieties of deep neural networks
- Auto-encoder networks: support unsupervised learning, providing a good start for supervised learning.
- Recurrent neural networks: allow computations to unfold over time.
- Networks trained with reinforcement learning: can discover new ways to solve a problem; rewards can be internal (e.g., novelty or mastery), allowing autonomous learning.
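
As a concrete instance of the first item in this list, a bare-bones auto-encoder in tf.keras: it learns to reproduce its input through a narrow hidden layer, so no labels are required. The sizes are illustrative assumptions:

    import tensorflow as tf

    # Encoder squeezes 784 inputs into a 32-unit code; decoder
    # reconstructs the input from that code.
    inputs = tf.keras.Input(shape=(784,))
    code = tf.keras.layers.Dense(32, activation='relu')(inputs)      # encoder
    recon = tf.keras.layers.Dense(784, activation='sigmoid')(code)   # decoder
    autoencoder = tf.keras.Model(inputs, recon)
    autoencoder.compile(optimizer='adam', loss='mse')   # target = the input itself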

Understanding and producing complex language?
As a follow-up to the recurrent-network slide, this shows how combining feed-forward (FF) and recurrent networks can begin to address a complex scene-interpretation task. You could also discuss Google Translate if you like.

A new departure: The Neural 'Turing Machine' and the Differentiable Neural Computer (Graves, Wayne, et al., 2016)

An exciting opportunity
- Most of the developments in the third wave of neural network research are not intended as models of human abilities.
- It could be argued that in many ways some of them exploit super-human resources, such as more computing power and larger training sets than humans have access to.
- But they provide us with a tremendous opportunity: new tools that we can use to model human abilities, and new tasks that can finally be tackled using neural network models.
- These models are also still quite limited in some ways.
As students in this course, you have the opportunity to pioneer the application of these new developments to understanding human intelligence, and even to contribute to overcoming some of these existing models' limitations.

Psychology 209 – Goals and Approach
To familiarize students with:
- the principles that govern learning and processing in neural networks
- the implications and applications of neural networks for human cognition and its neural basis
To provide students with an entry point for exploring neural network models, including:
- software tools and programming practices
- approaches to testing and analyzing deep neural networks for cognitive and neural modeling

Readings, homeworks, final projects
- Up to 20 pages of reading; completing the reading before class is expected.
- Preparation statements are required before class for most sessions.
- Two main homeworks and a short starter homework. The main homeworks require demonstrating conceptual understanding and learning and manipulating the simulation tools in Python & TensorFlow; they are due at the end of weeks 3 and 5.
- Project proposal due at the end of week 7.
- Brief presentations during week 10.
- 10-page paper due Wednesday of finals week.

Reading for next time
The reading for next time is the first 14 pages of a longer paper; the second half is assigned for the following session. The paper covers a set of basic neural network concepts and conventions that we will be using throughout the course, and also links neural networks to optimal Bayesian inference. The first homework will closely assess your understanding of the concepts in both halves of the paper, and the ideas in the second half build on the ones in the first half. So read the paper carefully, and try to do the exercises suggested at various points. Start in on the short homework due Friday.