CS 621 Artificial Intelligence Lecture 29 – 22/10/05

Prof. Pushpak Bhattacharyya. Topics: SOM; Theoretical Aspects of Machine Learning; Probably Approximately Correct (PAC) Learning.

[Figure: SOM architecture — an output layer of P neurons connected by weights (Wp to neuron P) to an input layer of n neurons; after training the output neurons group into clusters A, B and C.]

Competitive learning: only the weights to the winner neuron P are changed:
Wp(n+1) = Wp(n) + η (I(n) - Wp(n))
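As an illustration, a minimal NumPy sketch of this winner-only update (the array shapes and the learning-rate value are assumptions for the example):

```python
import numpy as np

eta = 0.1                        # learning rate (illustrative value)
W = np.random.rand(5, 4)         # one weight vector per output neuron
I = np.random.rand(4)            # current input vector I(n)

p = np.argmin(np.linalg.norm(W - I, axis=1))  # winner: nearest weight vector
W[p] += eta * (I - W[p])         # Wp(n+1) = Wp(n) + eta * (I(n) - Wp(n))
```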

Generating functions applied to solving recurrence relations. Example: the maximum number of regions of the 2-D plane produced by n lines.

[Figure: lines L1, L2, L3, L4 drawn in the x-y plane, each new line cutting the existing regions.]
R1 = 2
R2 = R1 + 2 = 2 + 2 = 4
R3 = R2 + 3 = 4 + 3 = 7
R4 = R3 + 4 = 7 + 4 = 11
In general: Rn = Rn-1 + n, with R1 = 2.

Rn   = Rn-1 + n
Rn-1 = Rn-2 + (n-1)
Rn-2 = Rn-3 + (n-2)
…
R2   = R1 + 2

Adding these equations (the intermediate Ri telescope away):
Rn = n + (n-1) + (n-2) + … + 2 + R1
   = n + (n-1) + (n-2) + … + 2 + 2
   = n(n+1)/2 + 1
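A quick sanity check in Python (a minimal sketch; the loop bound is arbitrary) confirms that the recurrence and the closed form agree:

```python
# Check R_n = R_{n-1} + n (with R_1 = 2) against the closed form n(n+1)/2 + 1.
R = 2                                    # R_1
for n in range(2, 11):
    R = R + n                            # recurrence step
    assert R == n * (n + 1) // 2 + 1     # closed form
print("recurrence matches n(n+1)/2 + 1 for n up to 10")
```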

Generating Function Method. Key idea: the Rn are generated by an infinite power series, i.e.
F(x) = R1 + R2 x + R3 x^2 + … + Rn x^(n-1) + …
so that Rn = coefficient of x^(n-1).

(Continued) From Rn = Rn-1 + n:
Rn - Rn-1 = n, i.e.
R2 - R1 = 2
R3 - R2 = 3
…

(Continued)
F(x)   = R1 + R2 x + R3 x^2 + … + Rn x^(n-1) + …   (1)
x F(x) = R1 x + R2 x^2 + R3 x^3 + … + Rn x^n + …   (2)
(1) - (2) gives
(1 - x) F(x) = R1 + (R2 - R1) x + (R3 - R2) x^2 + … + (Rn - Rn-1) x^(n-1) + …

(Continued) Substituting the differences:
(1 - x) F(x) = 2 + 2x + 3x^2 + … + n x^(n-1) + …
F(x) = (1 - x)^(-1) (2 + 2x + 3x^2 + … + n x^(n-1) + …)
     = (1 + x + x^2 + x^3 + … + x^(n-1) + …)(2 + 2x + 3x^2 + 4x^3 + … + n x^(n-1) + …)

(Continued) The coefficient of x^(n-1) in this expression for F(x) is
(n + (n-1) + (n-2) + … + 2 + 2) = n(n+1)/2 + 1,
matching the result obtained earlier by telescoping.
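The coefficient extraction can also be checked numerically by multiplying the two truncated series; a minimal sketch (the truncation length N is arbitrary):

```python
# Coefficient of x^(n-1) in (1 + x + x^2 + ...) * (2 + 2x + 3x^2 + 4x^3 + ...)
N = 10
a = [1] * N                         # coefficients of 1/(1 - x)
b = [2] + list(range(2, N + 1))     # coefficients 2, 2, 3, 4, ..., N
for n in range(1, N + 1):
    Rn = sum(a[i] * b[n - 1 - i] for i in range(n))  # Cauchy product term
    assert Rn == n * (n + 1) // 2 + 1
    print(f"R_{n} = {Rn}")
```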

Clustering Algorithms
1. Competitive learning
2. K-means clustering
3. Self-organization / Kohonen net
4. Counterpropagation

Packages
1. SNNS – Stuttgart Neural Network Simulator
2. WEKA
3. MATLAB

Data to train on: IRIS data (a 3-class dataset); XOR, parity and majority functions.

IRIS Data: classification of flowers into 3 classes C1, C2 and C3, based on four attributes:
A1: petal length
A2: petal width
A3: sepal length
A4: sepal width

Search the web for "machine learning data" to obtain the IRIS dataset: 150 labelled data points of the form
<A1, A2, A3, A4>1 → C1
<A1, A2, A3, A4>2 → C3
……….

C1, C2 and C3 each contain 50 data points. Applying the 80-20 rule of machine learning: 40 points per class for training and 10 for testing.
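A minimal sketch of this per-class 40/10 split (assuming scikit-learn is available to load the IRIS data; the random seed is arbitrary):

```python
import numpy as np
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)        # 150 points, 4 attributes, 3 classes
rng = np.random.default_rng(0)
train_idx, test_idx = [], []
for c in np.unique(y):                   # classes C1, C2, C3
    idx = rng.permutation(np.where(y == c)[0])
    train_idx.extend(idx[:40])           # 40 training points per class
    test_idx.extend(idx[40:])            # remaining 10 for testing
X_train, y_train = X[train_idx], y[train_idx]
X_test, y_test = X[test_idx], y[test_idx]
```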

What you have to do for IRIS: choose 40 points randomly from each of C1, C2 and C3, then
1. Run backpropagation (BP) on a feedforward network.
2. Run SOM treating the data as unlabelled; see whether you can discover the 3 clusters.

A two-part assignment. Part 1 is supervised: the hidden layer (its size) is to be found experimentally; the input layer has 4 neurons, one per attribute.
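One hedged way to realize the supervised part, using scikit-learn's MLPClassifier rather than a hand-written BP (the hidden layer size of 8 is only a starting guess to be tuned):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
# stratified 80-20 split: 40 train / 10 test per class
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=30, stratify=y, random_state=0)

# 4 input neurons -> one hidden layer (size to be found) -> 3 classes
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)                # trained by backpropagation
print("test accuracy:", clf.score(X_test, y_test))
```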

[Figure: BP on IRIS — error vs. training iterations (epochs), falling to about 1%.] Very good performance on the test data should be achieved.

Part 2: cluster discovery by SOM / Kohonen net. [Figure: 4 input neurons, one each for A1, A2, A3, A4.]

K-means clustering: K output neurons are required, given the knowledge that k clusters are present. [Figure: a layer of K output neurons (26 in the diagram) fully connected to n input neurons.]

Steps
1. Initialize the weights randomly.
2. Let Ik be the input vector presented at the kth iteration.
3. Find the winner W* such that |W* - Ik| ≤ |Wj - Ik| for all j.

4. Update the winner: W*(new) = W*(old) + η (Ik - W*).
5. k ← k + 1; go to step 3.
6. Go to step 2 until the error falls below a threshold.
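Putting steps 1-6 together, a minimal sketch of this online competitive-learning loop (the data shapes, learning rate, and stopping criterion are assumptions for illustration; total weight movement per epoch stands in for the "error"):

```python
import numpy as np

def competitive_learning(X, n_out, eta=0.1, tol=1e-3, max_epochs=100):
    rng = np.random.default_rng(0)
    W = rng.random((n_out, X.shape[1]))     # step 1: random weights
    for _ in range(max_epochs):
        shift = 0.0
        for Ik in X:                        # step 2: present Ik
            p = np.argmin(np.linalg.norm(W - Ik, axis=1))  # step 3: winner W*
            delta = eta * (Ik - W[p])       # step 4: update the winner only
            W[p] += delta
            shift += np.linalg.norm(delta)  # step 5: next k
        if shift < tol:                     # step 6: stop when change is small
            break
    return W

# usage: W = competitive_learning(np.random.rand(100, 4), n_out=3)
```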

K-means
1. Initialize the weights randomly.
2. Let Ik be the input vector presented at the kth step of one pass over all the training data (an EPOCH).
3. Find the W* such that |W* - Ik| is minimum.
4. Then find W* for Ik+1, and so on through the epoch.

5. Examine the partitions at the end of the epoch.
6. Find the centroid of each partition.
7. Set each weight vector equal to the centroid of its partition.
8. Take the next epoch.

9. Keep iterating until the error is below a threshold.
Key difference from competitive learning: the weight change is incorporated only after every epoch (batch update), not after every input.
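A minimal sketch of this epoch-wise (batch) K-means following steps 1-9 (the initialization choice and stopping threshold are assumptions):

```python
import numpy as np

def kmeans(X, K, tol=1e-4, max_epochs=100):
    rng = np.random.default_rng(0)
    W = X[rng.choice(len(X), K, replace=False)].copy()  # step 1: init weights
    labels = np.zeros(len(X), dtype=int)
    for _ in range(max_epochs):
        # steps 2-5: assign every Ik to its nearest weight vector (one epoch)
        labels = np.argmin(np.linalg.norm(X[:, None] - W[None], axis=2), axis=1)
        # steps 6-7: set each weight to the centroid of its partition
        W_new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                          else W[j] for j in range(K)])
        converged = np.linalg.norm(W_new - W) < tol
        W = W_new                            # step 8: next epoch
        if converged:                        # step 9: error below threshold
            break
    return W, labels

# usage: centers, labels = kmeans(np.random.rand(150, 4), K=3)
```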