COSC 460 – Neural Networks Gregory Caza 17 August 2007

Elman (1993) Elman, J. L. (1993). Learning and development in neural networks: the importance of starting small. Cognition 48: 71-99. Modelling first language acquisition using a progressive training strategy.

Elman (1993) Simple Recurrent Network (SRN): context units remember the state of the hidden units at the previous time step.

Elman (1993) input was a binary-encoded word; words are presented one at a time; output was an encoded prediction of the next word in a sentence; predictions are expected to depend on the network learning a grammatical structure.
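A minimal sketch of such an SRN step, assuming one-hot word vectors, a single tanh hidden layer, and a softmax over the vocabulary; the sizes, names, and encoding here are illustrative assumptions rather than the paper's exact setup:

```python
import numpy as np

class SimpleRecurrentNetwork:
    """Minimal Elman-style SRN: context units keep a copy of the previous
    hidden state and are fed back alongside the current input."""

    def __init__(self, vocab_size, hidden_size, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0.0, 0.1, (hidden_size, vocab_size))    # input -> hidden
        self.W_ctx = rng.normal(0.0, 0.1, (hidden_size, hidden_size))  # context -> hidden
        self.W_out = rng.normal(0.0, 0.1, (vocab_size, hidden_size))   # hidden -> output
        self.context = np.zeros(hidden_size)                           # context units

    def reset_context(self):
        self.context = np.zeros_like(self.context)

    def step(self, word_index):
        """Consume one word (as an index) and return a probability
        distribution over the next word."""
        x = np.zeros(self.W_in.shape[1])
        x[word_index] = 1.0                                  # one word at a time, one-hot coded
        hidden = np.tanh(self.W_in @ x + self.W_ctx @ self.context)
        self.context = hidden.copy()                         # remember hidden state for next step
        logits = self.W_out @ hidden
        exp = np.exp(logits - logits.max())
        return exp / exp.sum()
```

Feeding a sentence one word at a time and comparing each returned distribution against the word that actually follows gives the prediction error that training would minimise (the weight updates themselves are omitted here).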

Elman (1993) developmental constraints may facilitate learning: a limited view provides a buffer from a complex, potentially overwhelming domain (simple network = child; complex domain = language).

Elman (1993) Training was performed using three different schemata:
1. using all training data and a fully-developed network
2. with the training data organized and presented with increasing complexity
3. beginning with a limited memory that increased throughout training

Elman (1993) developmental simulation #1: incremental input. Training sentences were classified as simple or complex; the ratio of complex : simple sentences increased over time.
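A sketch of what such an incremental-input curriculum could look like; the number of phases, batch size, and complex:simple fractions are placeholder values, not the ones used in the paper:

```python
import random

def incremental_input_batches(simple, complex_, phases=None, batch_size=100, seed=0):
    """Yield training batches whose complex:simple mix increases by phase.

    `simple` and `complex_` are lists of training sentences; `phases` gives
    the fraction of complex sentences in each phase (placeholder schedule)."""
    rng = random.Random(seed)
    if phases is None:
        phases = [0.0, 0.25, 0.5, 0.75]          # illustrative, increasingly complex mix
    for complex_fraction in phases:
        n_complex = int(batch_size * complex_fraction)
        batch = rng.sample(complex_, n_complex) + \
                rng.sample(simple, batch_size - n_complex)
        rng.shuffle(batch)
        yield complex_fraction, batch
```

Each yielded batch would be used for ordinary training before moving on to the next, more complex phase.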

Elman (1993) developmental simulation #2: incremental memory. The context would be reset whenever the memory limit was reached; the memory limit (in words) grew over successive epochs: 3 or 4, then 4 or 5, then 5 or 6, then 6 or 7, and finally no limit.
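A sketch of that regime, assuming a network object exposing step(word) and reset_context() (a hypothetical interface); the number of epochs per phase is a placeholder:

```python
def train_with_memory_limit(network, words, memory_limits, epochs_per_phase=1):
    """Sketch of incremental-memory training.

    `memory_limits` has one entry per phase, e.g. [4, 5, 6, 7, None];
    None means the context is never forcibly cleared."""
    for limit in memory_limits:
        for _ in range(epochs_per_phase):
            words_since_reset = 0
            for word in words:
                network.step(word)                   # normal next-word prediction step
                words_since_reset += 1
                if limit is not None and words_since_reset >= limit:
                    network.reset_context()          # wipe context: simulated limited memory
                    words_since_reset = 0
```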

Elman (1993) full set: learning did not complete successfully. Incremental input: low final error; good generalization. Incremental memory: low final error; good generalization.

Elman (1993) can training with a subset construct a “foundation for future success”? It filters out “stimuli which may either be irrelevant or require prior learning to be interpreted”; the solution space is constrained.

Elman (1993) Questions:
– how many sentences/epochs were used in the failed case?
– what were the quantitative differences between the incremental memory/input results?
– were the results reproducible with different training corpora?

Assad et al. (2002) Assad, C., Hartmann, M. J., Paulin, M. G. (2002). Control of a simulated arm using a novel combination of cerebellar learning mechanisms. Neurocomputing 44-46: Control of a robot arm using dynamic state estimation.

Assad et al. (2002) explore the cerebellum’s role in dynamic state estimation during movement: a single-link robot arm, capable of single-plane movement and of releasing a ball; an ANN is used to control the release time of the throw, with the goal of hitting a target at a certain height.
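For intuition about the task, a rough ballistic sketch assuming an arm of length L rotating in a vertical plane about a fixed pivot: the ball leaves the hand tangentially at release, and a throw counts as a hit if its trajectory passes within a window around the target height at the target's horizontal position. The geometry, parameters, and hit criterion are illustrative assumptions, not the paper's simulation:

```python
import math

G = 9.81  # gravitational acceleration (m/s^2)

def throw_hits_target(release_angle, omega, arm_length,
                      target_x, target_height, window):
    """Return True if a ball released at `release_angle` (rad, from the +x
    axis) with angular velocity `omega` (rad/s) passes within `window`
    metres of `target_height` when it reaches `target_x`."""
    # Release point on the arm and tangential release velocity.
    x0 = arm_length * math.cos(release_angle)
    y0 = arm_length * math.sin(release_angle)
    vx = -omega * arm_length * math.sin(release_angle)
    vy = omega * arm_length * math.cos(release_angle)
    if vx <= 0 or target_x <= x0:
        return False                      # ball never moves toward the target
    t = (target_x - x0) / vx              # time to reach the target's x position
    y_at_target = y0 + vy * t - 0.5 * G * t * t
    return abs(y_at_target - target_height) <= window
```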

Assad et al. (2002) 6 Purkinje cells (PC), 6 climbing fibres (CF), 6 ascending branches (AB), 4280 parallel fibres (PF): 600 inhibitory, 3680 excitatory.

Assad et al. (2002) each excitatory PF received a radial basis function (RBF) of 2 state variables. After each trial, a binary error signal was generated based on throw accuracy; if the ball hit the target window, PF-PC connections were strengthened through ‘Hebbian-like’ learning.
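A minimal sketch of such a rule, assuming Gaussian RBF tuning of each parallel fibre to a 2-dimensional state (e.g. arm angle and angular velocity) and a weight increase on PF-PC synapses only after successful trials; the centres, widths, and learning rate are placeholder values:

```python
import numpy as np

def pf_activity(state, centres, widths):
    """Gaussian radial basis response of each parallel fibre to a
    2-dimensional state vector; `centres` has shape (n_pf, 2)."""
    diffs = centres - state
    return np.exp(-np.sum(diffs ** 2, axis=1) / (2.0 * widths ** 2))

def update_pf_pc_weights(weights, pf_trace, hit, learning_rate=0.01):
    """'Hebbian-like' update gated by the binary error signal:
    strengthen PF-PC connections only when the throw hit the window."""
    if hit:
        weights += learning_rate * pf_trace   # potentiate the active synapses
    return weights
```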

Assad et al. (2002) the target window was initialized to be “quite large”; if a hit was recorded, the window was shrunk; if there was an error, the window was expanded.
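A sketch of that shaping schedule; the shrink/expand factors and bounds are illustrative assumptions:

```python
def update_target_window(window, hit, shrink=0.9, expand=1.1,
                         min_window=0.05, max_window=1.0):
    """Shrink the acceptance window after a hit, expand it after a miss,
    keeping it within illustrative bounds (all values are placeholders)."""
    window = window * shrink if hit else window * expand
    return min(max(window, min_window), max_window)
```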

Assad et al. (2002) physiological experiments demonstrate LTD at PF-PC synapses driven by coincident PF and CF activity; most cerebellar models ignore the AB input; the network suggests a possible role for LTP in cerebellar learning through the AB.

Assad et al. (2002) details, details! Too complicated => laying the groundwork for experiments. Why does no learning take place when the target is missed? What about negative reinforcement?