Learning – School of Computing, University of Leeds, UK – AI23 – 2004/05 – demo 2

Part 1: what is learning?
What would you say learning is?

Part 1: what is learning?
The meaning of "learning" is subject to discussion; to recap some ideas:
- high-level: experience alters behaviour
- low-level: weights (on neuron connections) change

Example 1: Yamauchi/Beer's alternate worlds
- one agent, one goal, one landmark
- two kinds of world, landmark-far and landmark-near:
  - a/b: landmark opposite to the goal
  - c/d: landmark between agent and goal
- agent's task: reach the goal (how? what if it knows the type of world it is in?)

Example 1 [cont.]
- so, if the world is known, a fixed strategy can be applied
- now, suppose a coin is tossed every 10 trials, and the kind of world is changed accordingly; how can the problem be solved? The agent must learn to detect the kind of world it is in
- Yamauchi/Beer's solution:
  - separately obtained (through artificial evolution) 3 distinct networks that solve the subtasks: world detection, landmark-far (LF) goal-finding and landmark-near (LN) goal-finding
  - integrated networks: the agent uses the first trial in the 10-trial sequence to learn what world it is in; with that knowledge, it then switches to the right strategy for that world for the next 9 trials
  - on average, 95% success

Example 1 [cont.]: is that learning?
- can it be seen as experience altering behaviour?
- no weights change; rather, the internal state of the agent is changed (by setting a world-type flag) – does it matter?
- the network is only learning one thing (which world the agent is in); can that still be called learning?

Example 2: C. elegans, a 1-mm worm

Example 2: C. elegans
- no evidence of synaptic plasticity in C. elegans, i.e. no mechanism for changing the weights between neurons
- however, C. elegans exhibits various kinds of learning capability (behavioural plasticity): habituation / sensitisation, associative learning
- this would mean that changing the weights on neuron connections is not the only way in which learning occurs in nature
- lots to discover and understand yet!

Part 2: different forms of learning
Activity: recall different forms of learning

Forms of learning: neural networks
- gradient-descent algorithms for the McCulloch and Pitts neuron and for feed-forward neural networks:
  - delta rule
  - backpropagation
- feed-forward nets are used in some demos in BEAST
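The slides do not give code; as a rough illustration only, here is a minimal Python sketch of the delta rule for a single sigmoid neuron (the function name, learning rate and toy AND task are illustrative choices, not part of the module or of BEAST):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def delta_rule_epoch(weights, inputs, targets, lr=0.1):
    """One pass of the delta rule over (input, target) pairs.
    weights: 1-D array whose last element is the bias weight."""
    for x, t in zip(inputs, targets):
        x = np.append(x, 1.0)             # fixed bias input
        y = sigmoid(weights @ x)          # neuron output
        error = t - y                     # desired minus actual output
        # gradient-descent step: dw = lr * error * f'(activation) * x
        weights += lr * error * y * (1.0 - y) * x
    return weights

# toy usage: learn logical AND
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([0, 0, 0, 1], dtype=float)
w = np.zeros(3)
for _ in range(2000):
    w = delta_rule_epoch(w, X, T)
```

Backpropagation extends the same gradient-descent idea to the hidden layers of a feed-forward network.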

Forms of learning: reinforcement learning
- the agent interacts with the environment and receives rewards (positive reinforcement) and punishments (negative reinforcement)
- different from the delta rule / backprop: the agent is not given the correct answer, only a good/bad signal; quantitative vs. qualitative
- only the desired results are needed to specify the problem, rather than the intermediate actions; think of riding a bike, mazes, tic-tac-toe, backgammon
[see demo, pendulum]
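As an illustration only (the slides describe reinforcement learning in terms of reward signals, not a particular algorithm), here is a minimal tabular Q-learning sketch on a made-up corridor task; the environment, parameters and names are all hypothetical:

```python
import random

# Toy environment: a 1-D corridor of 5 cells; reaching the right end pays reward +1.
N_STATES, ACTIONS = 5, (-1, +1)          # actions: move left / move right

def step(state, action):
    next_state = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    done = next_state == N_STATES - 1
    return next_state, reward, done

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1    # learning rate, discount, exploration

for episode in range(200):
    s, done = 0, False
    while not done:
        # epsilon-greedy choice: the agent only ever sees a reward, never the "correct" action
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r, done = step(s, a)
        best_next = max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2
```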

Forms of learning: conditioning
- Pavlov's experiments
- repeated pairing of two stimuli, so that a previously neutral (conditioned) stimulus eventually elicits a response (the conditioned response) similar to that originally elicited by the non-neutral (unconditioned) stimulus
- notion of reward for artificial purposes

Forms of learning: Hebbian learning
- a form of learning in natural and artificial neural networks
- potentiation of effective synaptic connections and decay / depression of ineffective ones
- concept of simultaneous / concurrent / correlated activation
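For illustration, a minimal sketch of a Hebbian-style update with weight decay; the function name, learning rate and decay constant are illustrative, not a rule prescribed by the module:

```python
import numpy as np

def hebbian_update(weights, pre, post, lr=0.01, decay=0.001):
    """Strengthen connections whose pre- and post-synaptic activities are
    correlated; the decay term depresses connections that are rarely used."""
    weights += lr * np.outer(post, pre)   # correlated activation -> potentiation
    weights -= decay * weights            # decay of ineffective connections
    return weights

# toy usage: 3 input (pre-synaptic) units driving 2 output (post-synaptic) units
w = np.zeros((2, 3))
pre = np.array([1.0, 0.0, 1.0])
post = np.array([1.0, 0.0])
w = hebbian_update(w, pre, post)
```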

Forms of learning: winner takes all
- a form of competitive learning in natural and artificial neural networks
- neurons compete on activation over an input; the winner neuron gets reinforced (a Hebbian-like rule)
- will be seen in this module
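A minimal sketch of a winner-takes-all competitive step, assuming a simple dot-product activation and weight normalisation; the details are illustrative and not necessarily the rule used later in the module:

```python
import numpy as np

def wta_step(weights, x, lr=0.1):
    """One competitive-learning step: the unit most activated by x wins,
    and only the winner's weight vector is moved towards x."""
    activations = weights @ x                           # compete on activation
    winner = int(np.argmax(activations))
    weights[winner] += lr * (x - weights[winner])       # Hebbian-like update of the winner
    weights[winner] /= np.linalg.norm(weights[winner])  # keep the winner's weights bounded
    return winner, weights

# toy usage: 2 competing units clustering random 2-D inputs
rng = np.random.default_rng(0)
W = rng.random((2, 2))
for x in rng.random((100, 2)):
    _, W = wta_step(W, x)
```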

Forms of learning: evolutionary algorithms
- search algorithms inspired by natural evolution: a population evolves, improving its fitness
- concepts of assessment (of an individual), selection, and variation (of a population's individuals over time)
- can be used as optimisation tools, even to "train" neural networks (Yamauchi/Beer); this is also the way we use them in BEAST
- will be seen in this module
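For illustration, a minimal generational evolutionary algorithm showing assessment, selection and variation; the fitness function, operators and parameters are toy choices, not the BEAST framework's API:

```python
import random

# Individuals are real-valued vectors; fitness is -sum(x_i^2), so evolution
# pushes the population towards the all-zero vector (a maximisation problem).

def fitness(individual):
    return -sum(x * x for x in individual)                      # assessment

def tournament_select(population, k=3):
    return max(random.sample(population, k), key=fitness)       # selection

def mutate(individual, sigma=0.1):
    return [x + random.gauss(0.0, sigma) for x in individual]   # variation

population = [[random.uniform(-5, 5) for _ in range(4)] for _ in range(30)]
for generation in range(100):
    population = [mutate(tournament_select(population)) for _ in population]

best = max(population, key=fitness)
```

The same loop, with a network's weights as the genome and task performance as the fitness, is how an evolutionary algorithm can "train" a neural network.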

Forms of learning: imitation
- a form of learning in nature and (recently) in robotics
- individuals learn by replication and repetition of behaviour observed in others [see demo, CogVis]
- work by CogVis, SOC
- the behaviour is adapted to the imitator's own particulars [see demo, tennis]

Forms of learning: mimicry and social learning
- mimicry: a form of evolutionary learning; species / groups learn by mimicking desirable genetic traits from other species / groups (e.g. wasp-like insects); work by J. Noble / D. Franks, SOC
- social learning: learning is achieved via the communication of information within a social structure (schools, books; birds, mammals)

Learning activity: where are the above used in nature, and where in bio-inspired algorithms?

Thank you!