CSC 550: Introduction to Artificial Intelligence, Spring 2004. More neural nets: neural net (backpropagation) examples; associative memory and Hopfield networks.

Presentation transcript:

1 CSC 550: Introduction to Artificial Intelligence, Spring 2004
More neural nets
- neural net (backpropagation) examples
- associative memory: Hopfield networks, parallel relaxation, relaxation as search

2 Neural net example
consider the following political poll, taken by six potential voters
- each voter ranked various topics as to their importance, on a scale of 0 to 10
- voters 1-3 identified themselves as Republicans, voters 4-6 as Democrats
[table of survey responses for voters 1-6, with columns Budget, Defense, Crime, Environment, Social Security]
based on the survey responses, can we train a neural net to recognize Republicans and Democrats?

3 Neural net example (cont.)
utilize the neural net (backpropagation) simulator at:
note: inputs to the network can be real values between -1.0 and 1.0
- in this example, we can use fractions to indicate the range of survey responses, e.g., a response of 8 → an input value of 0.8
make sure the network recognizes the training set accurately
- how many training cycles are needed?
- how many hidden nodes?
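A minimal sketch (not from the original slides) of the 0-10 to 0.0-1.0 scaling described above, written in the same Scheme used later in this deck; the function name is invented for illustration:

;; map a 0-10 survey response onto the simulator's 0.0-1.0 input range
(define (scale-response r)
  (/ r 10.0))

;; e.g., (scale-response 8)  =>  0.8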

4 Neural net example (cont.)
using the neural net, try to classify the following new respondents
[table of survey responses for nine new voters, with columns Budget, Defense, Crime, Environment, Social Security]

5 More examples
neural nets for pattern recognition
- train a network to recognize pictures with crosses and those without
- how big does the training set need to be?
- how many hidden nodes?
for HW 6, you will design a neural net for student advising
- ask (at least 5) questions about interests, aptitudes, and lifestyle goals
- differentiate between (at least 3) different majors
- train on a set of peers (at least 2 people per major; the more the better)
- test on at least 1 person per major

6 Interesting variation: Hopfield nets
in addition to their use as acceptors/classifiers, neural nets can be used as associative memory (Hopfield, 1982)
- can store multiple patterns in the network, retrieve interesting features
- distributed representation: info is stored as a pattern of activations/weights; multiple pieces of info are imprinted on the same network
- content-addressable memory: store patterns in the network by adjusting weights; to retrieve a pattern, specify a portion (the net will find a near match)
- distributed, asynchronous control: individual processing elements behave independently
- fault tolerance: a few processors can fail, and the network will still work

7 Hopfield net examples
processing units are in one of two states: active or inactive
units are connected with weighted, symmetric connections
- positive weight → excitatory relation
- negative weight → inhibitory relation
to imprint a pattern: adjust the weights appropriately (no general algorithm is known, basically ad hoc)
to retrieve a pattern:
- specify a partial pattern in the net
- perform parallel relaxation to achieve a steady state representing a near match
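The slide above treats weight selection as ad hoc; one commonly cited heuristic (not from these slides) is the Hebbian outer-product rule: treat active as +1 and inactive as -1, and set each weight to the sum, over the stored patterns, of the product of the two units' values. A minimal Scheme sketch, with the function name invented for illustration and the patterns borrowed from the exercise on slide 17:

;; Hebbian outer-product weight between two units, given a list of
;; stored patterns (each pattern = a list of the active unit names)
(define (hebbian-weight unit1 unit2 patterns)
  (define (bipolar unit pattern)   ; +1 if the unit is active in the pattern, -1 otherwise
    (if (member unit pattern) 1 -1))
  (if (null? patterns)
      0
      (+ (* (bipolar unit1 (car patterns))
            (bipolar unit2 (car patterns)))
         (hebbian-weight unit1 unit2 (cdr patterns)))))

;; e.g., (hebbian-weight 'a 'd '((a d) (b e) (a c e)))  =>  1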

8 Parallel relaxation
parallel relaxation algorithm:
1. pick a random unit
2. sum the weights on connections to active neighbors
3. if the sum is positive → make the unit active; if the sum is negative → make the unit inactive
4. repeat until a stable state is achieved
[figure: example Hopfield network]
this Hopfield net has 4 stable states
- what are they?
- parallel relaxation will start with an initial state and converge to one of these stable states

9 Why does it converge?
parallel relaxation is guaranteed to converge on a stable state in a finite number of steps (i.e., node state flips)
WHY?
Define H(net) = Σ (weights connecting active nodes)
Theorem: Every step in parallel relaxation increases H(net).
- If the step involves making a node active, it is because the sum of the weights to active neighbors > 0. Therefore, making this node active increases H(net).
- If the step involves making a node inactive, it is because the sum of the weights to active neighbors < 0. Therefore, making this node inactive increases H(net).
Since H(net) is bounded, relaxation must eventually stop → stable state
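In symbols (a hedged restatement of the definition above, with $s_i \in \{0,1\}$ marking whether unit $i$ is active and $w_{ij}$ the symmetric weight between units $i$ and $j$):

$$H(\text{net}) = \sum_{i<j} w_{ij}\, s_i\, s_j$$

Only pairs of active units contribute, so flipping an unstable unit (in either direction) adds a positive amount to this sum, which is what the argument above relies on.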

10 Hopfield nets in Scheme
need to store the Hopfield network in a Scheme structure
- could be unstructured: graph = a collection of edges
- could structure it to make access easier

(define HOPFIELD-NET
  '((A (B -1) (C 1) (D -1))
    (B (A -1) (D 3))
    (C (A 1) (D -1) (E 2) (F 1))
    (D (A -1) (B 3) (C -1) (F -2) (G 3))
    (E (C 2) (F 1))
    (F (C 1) (D -2) (E 1) (G -1))
    (G (D 3) (F -1))))
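Each entry pairs a node with its (neighbor weight) list; since the connections are symmetric, every edge appears under both of its endpoints. A small lookup example (added here for illustration, not from the original slides):

;; look up node D's neighbors and connection weights
(cdr (assoc 'D HOPFIELD-NET))   ; =>  ((A -1) (B 3) (C -1) (F -2) (G 3))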

11 Parallel relaxation in Scheme

(define (relax active)
  ;; neighbor-sum: total weight on connections from a unit's neighbor list
  ;; to the currently active units
  (define (neighbor-sum neighbors active)
    (cond ((null? neighbors) 0)
          ((member (caar neighbors) active)
           (+ (cadar neighbors) (neighbor-sum (cdr neighbors) active)))
          (else (neighbor-sum (cdr neighbors) active))))
  ;; get-unstables: units whose state disagrees with the sign of that sum
  (define (get-unstables net active)
    (cond ((null? net) '())
          ((and (member (caar net) active)
                (< (neighbor-sum (cdar net) active) 0))
           (cons (caar net) (get-unstables (cdr net) active)))
          ((and (not (member (caar net) active))
                (> (neighbor-sum (cdar net) active) 0))
           (cons (caar net) (get-unstables (cdr net) active)))
          (else (get-unstables (cdr net) active))))
  ;; repeatedly flip a randomly chosen unstable unit until none remain
  (let ((unstables (get-unstables HOPFIELD-NET active)))
    (if (null? unstables)
        active
        (let ((selected (list-ref unstables (random (length unstables)))))
          (if (member selected active)
              (relax (remove selected active))
              (relax (cons selected active)))))))

12 Relaxation examples
> (relax '())
()
> (relax '(b d g))
(b d g)
> (relax '(a c e f))
(a c e f)
> (relax '(b c d e g))
(b c d e g)
parallel relaxation will identify stored patterns (since they are stable)

> (relax '(a b))
(g d b)
> (relax '(a b c e f))
(a c e f)
> (relax '(a b c d e f g))
(b c d e g)
> (relax '(a b c d))
(e g b c d)
> (relax '(d c b a))
(g d b)
if you input a partial pattern, parallel relaxation will converge on a stored pattern
- what can you say about the stored pattern that is reached?
- is it in some sense the "closest" match?

13 Associative memory
a Hopfield net is an associative memory
- patterns are stored in the network via the weights
- if presented with a stored pattern, relaxation will verify its presence in the net
- if presented with a new pattern, relaxation will find a match in the net
if unstable nodes are selected at random, we can't make any claims of closeness
- ideally, we would like to find the "closest" or "best" match (one possible measure is sketched below)
  - fewest differences in active nodes?
  - fewest flips between states?
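A minimal sketch (not from the original slides) of the first measure, counting the nodes that are active in one pattern but not in the other; the function name is invented for illustration:

;; number of nodes on which two patterns (lists of active nodes) disagree
(define (pattern-distance pat1 pat2)
  (define (count-missing from in)
    (cond ((null? from) 0)
          ((member (car from) in) (count-missing (cdr from) in))
          (else (+ 1 (count-missing (cdr from) in)))))
  (+ (count-missing pat1 pat2) (count-missing pat2 pat1)))

;; e.g., (pattern-distance '(a b) '(g d b))  =>  3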

14 Parallel relaxation as search
can view the parallel relaxation algorithm as search
- a state is a list of active nodes
- moves are obtained by flipping the state of an unstable node

15 Parallel relaxation using BFS
could use breadth-first search (BFS) to find the pattern that is the fewest number of flips away from the input pattern

;; relax: search outward from the input pattern; bfs-nocycles is the
;; cycle-avoiding breadth-first search assumed from earlier in the course,
;; and get-unstables (from slide 11) is assumed to be defined at top level
(define (relax active)
  (car (bfs-nocycles active)))

;; GET-MOVES: one successor state per unstable unit, obtained by flipping it
(define (GET-MOVES active)
  (define (get-moves-help unstables)
    (cond ((null? unstables) '())
          ((member (car unstables) active)
           (cons (remove (car unstables) active)
                 (get-moves-help (cdr unstables))))
          (else (cons (cons (car unstables) active)
                      (get-moves-help (cdr unstables))))))
  (get-moves-help (get-unstables HOPFIELD-NET active)))

;; GOAL?: a state is a goal when no unit is unstable
(define (GOAL? active)
  (null? (get-unstables HOPFIELD-NET active)))

16 Relaxation examples
> (relax '())
()
> (relax '(b d g))
(b d g)
> (relax '(a c e f))
(a c e f)
> (relax '(b c d e g))
(b c d e g)
parallel relaxation will identify stored patterns (since they are stable)

> (relax '(a b))
(g d b)
> (relax '(a b c e f))
(a c e f)
> (relax '(a b c d e f g))
(b c d e g)
> (relax '(a b c d))
(g b d)
> (relax '(d c b a))
(g d b)
if you input a partial pattern, parallel relaxation will converge on the "closest" pattern

17 Another example
consider the following Hopfield network
- specify weights that would store the following patterns: AD, BE, ACE

18 Next week…
Emergent models of machine learning
- genetic algorithms
- cellular automata
- artificial life
Read Chapter 11
Be prepared for a quiz on
- this week’s lecture (moderately thorough)
- the reading (superficial)