
Perceptron Homework
Assume a 3-input perceptron plus bias (it outputs 1 if net > 0, else 0).
Assume a learning rate c of 1 and initial weights all 1: Δwi = c(t – z)xi
Show the weights after each pattern for just one epoch.

Training set:
1 0 1 -> 0
1 1 0 -> 0
1 0 1 -> 1
0 1 1 -> 1

Pattern    Target   Weight Vector   Net   Output   ΔW
1 0 1 1    0         1  1  1  1      3     1       -1  0 -1 -1
1 1 0 1    0         0  1  0  0      1     1       -1 -1  0 -1
1 0 1 1    1        -1  0  0 -1     -2     0        1  0  1  1
0 1 1 1    1         0  0  1  0      1     1        0  0  0  0
                     0  0  1  0

CS 478 - Perceptrons
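The following Python sketch (not part of the original slides) replays the update rule above and reproduces the table row by row; the threshold, learning rate, and pattern order are exactly those given in the problem.

```python
# Minimal perceptron-update trace for the homework above.
# Output rule: z = 1 if net > 0 else 0; update: w_i += c * (t - z) * x_i.

def train_one_epoch(patterns, weights, c=1.0):
    for x, t in patterns:
        x = x + [1]                      # append the bias input
        net = sum(w * xi for w, xi in zip(weights, x))
        z = 1 if net > 0 else 0
        dw = [c * (t - z) * xi for xi in x]
        print(f"x={x} t={t} w={weights} net={net} z={z} dw={dw}")
        weights = [w + d for w, d in zip(weights, dw)]
    return weights

training_set = [([1, 0, 1], 0), ([1, 1, 0], 0), ([1, 0, 1], 1), ([0, 1, 1], 1)]
final = train_one_epoch(training_set, [1.0, 1.0, 1.0, 1.0])
print("weights after one epoch:", final)   # expected [0, 0, 1, 0]
```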

Quadric Machine Homework
Assume a 2-input perceptron expanded to be a quadric perceptron (it outputs 1 if net > 0, else 0). Note that with binary inputs of -1, 1, x² and y² would always be 1 and thus add no information and are not needed (they would just act like two more bias weights).
Assume a learning rate c of .4 and initial weights all 0: Δwi = c(t – z)xi
Show the weights after each pattern for one epoch with the following non-linearly separable training set. Has it learned to solve the problem after just one epoch? Which of the quadric features are actually needed to solve this training set?

x   y   Target
-1  1

CS 478 - Regression

Quadric Machine Homework
wx represents the weight for feature x.
Assume a 2-input perceptron expanded to be a quadric perceptron (it outputs 1 if net > 0, else 0).
Assume a learning rate c of .4 and initial weights all 0: Δwi = c(t – z)xi
Show the weights after each pattern for one epoch with the following non-linearly separable training set.
Has it learned to solve the problem after just one epoch? – Yes.
Which of the quadric features are actually needed to solve this training set? – Really only the feature xy is needed.

x  y  xy  wx  wy  wxy  bias  Δwx  Δwxy  Δbias  net  Target  Output
-1  1  -.4  .4  -.8  .8

CS 478 - Regression
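A hedged sketch of the quadric perceptron trace. The training-set table was lost in the transcript, so the code assumes the non-linearly separable set is XOR over {-1, 1} inputs presented in the order (-1,-1), (-1,1), (1,-1), (1,1); that assumption is consistent with the surviving weight values and with the answer above that only the xy feature is really needed.

```python
# Quadric (second-order) perceptron sketch: features are x, y, x*y, plus bias.
# Assumed training set: XOR over {-1, 1} inputs (the table was lost in the transcript).

def epoch(patterns, w, c=0.4):
    for (x, y), t in patterns:
        feats = [x, y, x * y, 1]          # x^2 and y^2 are always 1, so omitted
        net = sum(wi * fi for wi, fi in zip(w, feats))
        z = 1 if net > 0 else 0
        w = [wi + c * (t - z) * fi for wi, fi in zip(w, feats)]
        print(f"x={x} y={y} t={t} z={z} w={w}")
    return w

xor = [((-1, -1), 0), ((-1, 1), 1), ((1, -1), 1), ((1, 1), 0)]
w = epoch(xor, [0.0, 0.0, 0.0, 0.0])
# After one epoch the set is classified correctly, and wx = wy = 0:
for (x, y), t in xor:
    net = w[0] * x + w[1] * y + w[2] * x * y + w[3]
    print((x, y), "target", t, "output", 1 if net > 0 else 0)
```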

Linear Regression Homework
Assume we start with all weights at 0 (don't forget the bias).
What are the new weights after one iteration through the following training set, using the delta rule with a learning rate of .2?

x     y     Target
.3    .8    .7
-.3   1.6   -.1
.9    0     1.3

CS 478 - Homework

Linear Regression Homework
Assume we start with all weights at 0 (don't forget the bias).
What are the new weights after one iteration through the following training set, using the delta rule with a learning rate c = .2?

x     y     Target   Net    w1     w2     Bias
                            0      0      0
.3    .8    .7       0      .042   .112   .140
-.3   1.6   -.1      .307   .066   -.018  .059
.9    0     1.3      .118   .279   -.018  .295

Example first update: w1 = 0 + .2(.7 – 0)(.3) = .042

CS 478 - Homework
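A small sketch that replays the delta-rule pass above; the y value of 0 in the last training instance is inferred from the worked Net and weight values.

```python
# One pass of the delta rule for linear regression (weights start at 0, c = .2).
# Update: w_i += c * (t - net) * x_i, applied after each instance.

def delta_rule_pass(data, c=0.2):
    w1 = w2 = bias = 0.0
    for x, y, t in data:
        net = w1 * x + w2 * y + bias      # net computed before any update
        w1 += c * (t - net) * x
        w2 += c * (t - net) * y
        bias += c * (t - net)
        print(f"net={net:.3f}  w1={w1:.3f}  w2={w2:.3f}  bias={bias:.3f}")
    return w1, w2, bias

# Training set from the slide (y = 0 in the last row is inferred from the worked solution).
data = [(.3, .8, .7), (-.3, 1.6, -.1), (.9, 0.0, 1.3)]
delta_rule_pass(data)   # expected final weights roughly .279, -.018, .295
```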

Logistic Regression Homework
You don't actually have to come up with the weights for this one, though you could do it quickly using the closed-form linear regression approach.
Sketch each step you would need to learn the weights for the following data set using logistic regression.
Sketch how you would generalize to give the probability of a heart attack for a new input heart rate of 60.

Heart Rate   Heart Attack
50           1 Y, 3 N
70           2 Y, 2 N
90           4 Y, 1 N

CS 478 - Homework

Logistic Regression Homework
Calculate the probability (or odds) of a heart attack for each input value.
Calculate the log odds.
Do linear regression on these 3 points to get the logit line.
To find the probability of a heart attack given a new input heart rate of 60, just calculate p = e^logit(60) / (1 + e^logit(60)), where logit(60) is the value of the logit line at 60.

Heart Rate   # Attacks   Total Patients   Probability (# attacks / total)   Odds p/(1-p) = # attacks / # not   Log Odds ln(odds)
50           1           4                .25                               .33                                 -1.11
70           2           4                .5                                1                                   0
90           4           5                .8                                4                                   1.39

CS 478 - Homework
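A sketch of the three steps just described, under the assumption that "linear regression on these 3 points" means an ordinary least-squares fit of log odds against heart rate; the counts come from the table above.

```python
import math

# Per-rate probabilities -> log odds -> least-squares line through the three
# (rate, log-odds) points -> invert the logit for a new rate of 60.

rates   = [50, 70, 90]
attacks = [1, 2, 4]
totals  = [4, 4, 5]

log_odds = []
for a, n in zip(attacks, totals):
    p = a / n
    log_odds.append(math.log(p / (1 - p)))     # ln(odds)

# Ordinary least-squares fit of logit(rate) = b0 + b1 * rate (closed form).
m = len(rates)
mean_x = sum(rates) / m
mean_y = sum(log_odds) / m
b1 = sum((x - mean_x) * (y - mean_y) for x, y in zip(rates, log_odds)) / \
     sum((x - mean_x) ** 2 for x in rates)
b0 = mean_y - b1 * mean_x

logit60 = b0 + b1 * 60
p60 = math.exp(logit60) / (1 + math.exp(logit60))
print("log odds:", [round(v, 2) for v in log_odds])   # about [-1.1, 0.0, 1.39]
print(f"P(heart attack | rate=60) = {p60:.2f}")
```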

CS 478 – Backpropagation

PCA Homework
Use PCA on the given data set to get a transformed data set with just one feature (the first principal component (PC)). Show your work along the way. Show what % of the total information is contained in the 1st PC. Do not use a PCA package to do it. You need to go through the steps yourself, or program it yourself. You may use a spreadsheet, Matlab, etc. to do the arithmetic for you. You may use any web tool or Matlab to calculate the eigenvectors from the covariance matrix.

Terms:
m = 5   Number of instances in the data set
n = 2   Number of input features
p = 1   Final number of principal components chosen

Original Data:
        x      y
p1      .2    -.3
p2     -1.1    2
p3      1     -2.2
p4      .5    -1
p5     -.6     1
mean    0     -.1

CS 478 - Homework

PCA Homework
Terms: m = 5 instances, n = 2 input features, p = 1 principal component chosen.

Original Data:
        x      y
p1      .2    -.3
p2     -1.1    2
p3      1     -2.2
p4      .5    -1
p5     -.6     1
mean    0     -.1

Zero-Centered Data:
        x      y
p1      .2    -.2
p2     -1.1    2.1
p3      1     -2.1
p4      .5    -.9
p5     -.6     1.1
mean    0      0

Covariance Matrix:
      x       y
x     .715   -1.39
y    -1.39    2.72

Eigenvectors:
          x       y      Eigenvalue
1st PC   -.456   -.890   3.431
2nd PC                    .0037

% of total information in the 1st principal component = 3.431 / (3.431 + .0037) = 99.89%

Matrix A (p × n): the 1st PC as a row vector
          x       y
1st PC   -.456   -.890

Matrix B = transposed zero-centered training set (n × m):
     p1    p2     p3     p4    p5
x    .2   -1.1    1      .5   -.6
y   -.2    2.1   -2.1   -.9    1.1

New Data Set = A × B:
      1st PC
p1     .0870
p2   -1.368
p3    1.414
p4     .573
p5    -.710

CS 478 - Homework
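The same PCA computation done with numpy, as a check on the hand-worked numbers (the eigenvector sign, and therefore the sign of the projected values, may be flipped relative to the slide).

```python
import numpy as np

# Eigen-decomposition of the sample covariance matrix of the zero-centered data.
X = np.array([[ .2, -.3],
              [-1.1, 2.0],
              [ 1.0, -2.2],
              [ .5, -1.0],
              [-.6,  1.0]])

B = X - X.mean(axis=0)                 # zero-center (mean is (0, -.1))
C = np.cov(B, rowvar=False)            # 2x2 covariance matrix, divisor m-1
vals, vecs = np.linalg.eigh(C)         # eigh returns eigenvalues in ascending order

first_pc = vecs[:, -1]                 # eigenvector of the largest eigenvalue
print("covariance:\n", C.round(3))     # ~[[.715, -1.39], [-1.39, 2.72]]
print("1st PC:", first_pc.round(3), " eigenvalue:", vals[-1].round(3))
print("% info in 1st PC:", round(100 * vals[-1] / vals.sum(), 2))
print("projected data:", (B @ first_pc).round(3))   # sign may be flipped vs. the slide
```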

Decision Tree Homework
Training set: 9 instances with attributes Meat {N, Y}, Crust {Deep, Stuffed, Thin}, Veg {N, Y}, and output class Quality {Bad, Good, Great}.

Info(S) = -2/9·log2(2/9) - 4/9·log2(4/9) - 3/9·log2(3/9) = 1.53
(Not necessary unless you want to calculate information gain.)

Starting with all instances, calculate the gain for each attribute. Let's do Meat:
InfoMeat(S) = 4/9·(-2/4·log2(2/4) - 2/4·log2(2/4) - 0·log2(0/4)) + 5/9·(-0/5·log2(0/5) - 2/5·log2(2/5) - 3/5·log2(3/5)) = .98
Information Gain = 1.53 - .98 = .55

Finish this level, find the best attribute and split, and then find the best attribute for at least the left-most node at the next level. Assume sub-nodes are sorted alphabetically left to right by attribute, and order alphabetically by attribute value as in the table.

CS 478 - Decision Trees

Decision Tree Homework
InfoMeat(S) = 4/9·(-2/4·log2(2/4) - 2/4·log2(2/4) - 0·log2(0/4)) + 5/9·(-0/5·log2(0/5) - 2/5·log2(2/5) - 3/5·log2(3/5)) = .98
InfoCrust(S) = 4/9·(-1/4·log2(1/4) - 2/4·log2(2/4) - 1/4·log2(1/4)) + 2/9·(-0/2·log2(0/2) - 1/2·log2(1/2) - 1/2·log2(1/2)) + 3/9·(-1/3·log2(1/3) - 1/3·log2(1/3) - 1/3·log2(1/3)) = 1.41
InfoVeg(S) = 5/9·(-2/5·log2(2/5) - 2/5·log2(2/5) - 1/5·log2(1/5)) + 4/9·(-0/4·log2(0/4) - 2/4·log2(2/4) - 2/4·log2(2/4)) = 1.29
The attribute with the least remaining info is Meat.

CS 478 - Decision Trees

Decision Tree Homework
The left-most node will be Meat = N, containing 4 instances.
InfoCrust(S) = 1/4·(-1/1·log2(1/1) - 0/1·log2(0/1) - 0/1·log2(0/1)) + 1/4·(-0/1·log2(0/1) - 1/1·log2(1/1) - 0/1·log2(0/1)) + 2/4·(-1/2·log2(1/2) - 1/2·log2(1/2) - 0/2·log2(0/2)) = .5
InfoVeg(S) = 2/4·(-2/2·log2(2/2) - 0/2·log2(0/2) - 0/2·log2(0/2)) + 2/4·(-0/2·log2(0/2) - 2/2·log2(2/2) - 0/2·log2(0/2)) = 0
The attribute with the least remaining info is Veg, which will split into two pure nodes.

CS 478 - Decision Trees
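A short entropy calculator that reproduces the remaining-information numbers above; since the raw 9-instance table did not survive the transcript, the per-attribute class counts are read directly off the Info formulas on these slides.

```python
import math

def entropy(counts):
    total = sum(counts)
    return -sum(c / total * math.log2(c / total) for c in counts if c > 0)

def remaining_info(partition):
    # partition: one class-count list per attribute value
    total = sum(sum(p) for p in partition)
    return sum(sum(p) / total * entropy(p) for p in partition)

print("Info(S)       =", round(entropy([2, 4, 3]), 2))                      # 1.53
print("Info_Meat(S)  =", round(remaining_info([[2, 2, 0], [0, 2, 3]]), 2))  # .98
print("Info_Crust(S) =", round(remaining_info([[1, 2, 1], [0, 1, 1], [1, 1, 1]]), 2))  # 1.42 (the slide rounds to 1.41)
print("Info_Veg(S)   =", round(remaining_info([[2, 2, 1], [0, 2, 2]]), 2))  # 1.29

# Second level, Meat = N branch (4 instances):
print("Info_Crust(Meat=N) =", round(remaining_info([[1, 0, 0], [0, 1, 0], [1, 1, 0]]), 2))  # .5
print("Info_Veg(Meat=N)   =", round(remaining_info([[2, 0, 0], [0, 2, 0]]), 2))             # 0
```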

k-Nearest Neighbor Homework
Assume the following training set. Assume a new point (.5, .2).
For the nearest neighbor distance, use Manhattan distance.
What would the output be for 3-NN with no distance weighting? Show work and the vote.
What would the output be for 3-NN with distance weighting? Show work and the vote.

x     y     Target
.3    .8    A
-.3   1.6   B
.9    0     B
0     1     A

CS 478 - Homework

k-Nearest Neighbor Homework
Assume the following training set. Assume a new point (.5, .2).
For the nearest neighbor distance, use Manhattan distance.
What would the output be for 3-NN with no distance weighting? – A wins with a vote of 2/3.
What would the output be for 3-NN with distance weighting? – A has vote 1/.8² + 1/1.3² = 2.15; B wins with vote 1/.6² = 2.78.

x     y     Distance          Target
.3    .8    .2 + .6 = .8      A
-.3   1.6   .8 + 1.4 = 2.2    B
.9    0     .4 + .2 = .6      B
0     1     .5 + .8 = 1.3     A

CS 478 - Homework
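A sketch of the 3-NN vote; the coordinates and labels of the last two training points are the ones reconstructed from the distance column of the solution table above.

```python
# 3-NN vote for the query point (.5, .2) with Manhattan distance,
# unweighted and with 1/d^2 distance weighting.

train = [((.3, .8), 'A'), ((-.3, 1.6), 'B'), ((.9, 0.0), 'B'), ((0.0, 1.0), 'A')]
query = (.5, .2)

def manhattan(p, q):
    return sum(abs(a - b) for a, b in zip(p, q))

neighbors = sorted(train, key=lambda item: manhattan(item[0], query))[:3]

votes_plain, votes_weighted = {}, {}
for point, label in neighbors:
    d = manhattan(point, query)
    votes_plain[label] = votes_plain.get(label, 0) + 1
    votes_weighted[label] = votes_weighted.get(label, 0) + 1 / d**2

print("3 nearest:", [(p, l, round(manhattan(p, query), 2)) for p, l in neighbors])
print("unweighted vote:", votes_plain)     # A: 2, B: 1  -> A wins
print("1/d^2 weighted :", votes_weighted)  # A ~2.15, B ~2.78 -> B wins
```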

RBF Homework
Assume you have an RBF with:
  Two inputs
  Three output classes A, B, and C (linear units)
  Three prototype nodes at (0,0), (.5,1), and (1,.5)
The radial basis function of the prototype nodes is max(0, 1 – Manhattan distance between the prototype node and the instance).
Assume no bias and initial weights of .6 into output node A, -.4 into output node B, and 0 into output node C.
Assume top-layer training is the delta rule with LR = .1.
Assume we input the single instance (.6, .8).
Which class would be the winner?
What would the weights be updated to if it were a training instance (.6, .8) with target class B?

CS 478 - Homework

RBF Homework: Instance (.6, .8) -> B

Node          Distance         Activation
Prototype 1   .6 + .8 = 1.4    max(0, 1 – 1.4) = 0
Prototype 2   .1 + .2 = .3     .7
Prototype 3   .4 + .3 = .7     .3

Node       Net Value                             Output
Output A   0·.6 + .7·.6 + .3·.6 = .6             Winner
Output B   0·(-.4) + .7·(-.4) + .3·(-.4) = -.4
Output C   0·0 + .7·0 + .3·0 = 0

Weight delta values:
Δwp1,A = .1(0 – .6)·0 = 0       Δwp2,A = .1(0 – .6)·.7 = -.042      Δwp3,A = .1(0 – .6)·.3 = -.018
Δwp1,B = .1(1 – (-.4))·0 = 0    Δwp2,B = .1(1 – (-.4))·.7 = .098    Δwp3,B = .1(1 – (-.4))·.3 = .042
Δwp1,C = .1(0 – 0)·0 = 0        Δwp2,C = .1(0 – 0)·.7 = 0           Δwp3,C = .1(0 – 0)·.3 = 0

CS 478 - Homework
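A sketch of the forward pass and the single delta-rule update worked above; prototype positions, initial weights, and the learning rate are those given in the problem.

```python
# Forward pass and one delta-rule update for the RBF homework instance (.6, .8) -> B.

prototypes = [(0.0, 0.0), (.5, 1.0), (1.0, .5)]
# weights[k][j]: weight from prototype node k to output node j (A, B, C); no bias.
weights = [[.6, -.4, 0.0], [.6, -.4, 0.0], [.6, -.4, 0.0]]
lr = .1

x = (.6, .8)
targets = {'A': 0, 'B': 1, 'C': 0}          # training target: class B

# Prototype activations: max(0, 1 - Manhattan distance).
act = [max(0.0, 1 - (abs(x[0] - p[0]) + abs(x[1] - p[1]))) for p in prototypes]
print("activations:", act)                  # approximately [0, .7, .3]

# Linear output units.
outs = [sum(act[k] * weights[k][j] for k in range(3)) for j in range(3)]
print("outputs A, B, C:", outs)             # approximately [.6, -.4, 0] -> A is the winner

# Delta rule on the top layer: dw = lr * (target - output) * activation.
for j, cls in enumerate('ABC'):
    for k in range(3):
        weights[k][j] += lr * (targets[cls] - outs[j]) * act[k]
print("updated weights (rows = prototypes, cols = A, B, C):")
for row in weights:
    print([round(w, 3) for w in row])
```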

Naïve Bayes Homework
For the given training set (7 instances; attributes Size {B, S} and Color {R, G, B}, output class {P, N}):
  Create a table of the statistics needed to do Naïve Bayes.
  What would be the output for a new instance which is Small and Blue?
  What is the Naïve Bayes value and the normalized probability for each output class (P or N) for this case of Small and Blue?
(Do the first couple on the board.)

CS 478 - Bayesian Learning

What do we need?
What is our output for a new instance which is Small and Blue?

vP = P(P)·P(S|P)·P(B|P) = 4/7 · 2/4 · 3/4 = .214
vN = P(N)·P(S|N)·P(B|N) = 3/7 · 1/3 · 1/3 = .048

Normalized probabilities:
P = .214 / (.214 + .048) = .817
N = .048 / (.214 + .048) = .183

P(P) = 4/7            P(N) = 3/7
P(Size=B|P) = 2/4     P(Size=S|P) = 2/4
P(Size=B|N) = 2/3     P(Size=S|N) = 1/3
P(Color=R|P) = 1/4    P(Color=G|P) = 0/4    P(Color=B|P) = 3/4
P(Color=R|N) = 1/3    P(Color=G|N) = 1/3    P(Color=B|N) = 1/3

Note that Small and Blue occur 3 times in the data set. Normally a novel instance would not occur at all in the data set, but since the example is so small…

CS 478 - Bayesian Learning
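A sketch that recomputes the Naïve Bayes values from the statistics table; note that P(Color=R|N) and P(Color=G|N) are inferred values (the transcript lost them), but they are not used in the Small-and-Blue calculation.

```python
from fractions import Fraction as F

# Naive Bayes from the statistics table above (counts, not the raw 7-instance table).
priors = {'P': F(4, 7), 'N': F(3, 7)}
p_size  = {'P': {'B': F(2, 4), 'S': F(2, 4)}, 'N': {'B': F(2, 3), 'S': F(1, 3)}}
p_color = {'P': {'R': F(1, 4), 'G': F(0, 4), 'B': F(3, 4)},
           'N': {'R': F(1, 3), 'G': F(1, 3), 'B': F(1, 3)}}  # R|N, G|N inferred

def nb_value(cls, size, color):
    return priors[cls] * p_size[cls][size] * p_color[cls][color]

v = {cls: nb_value(cls, 'S', 'B') for cls in ('P', 'N')}
total = sum(v.values())
for cls, val in v.items():
    print(f"v_{cls} = {float(val):.3f}   normalized = {float(val / total):.3f}")
# expected: v_P ~ .214, v_N ~ .048; normalized ~ .82 and .18
```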

HAC Homework
For the data set below, show all iterations (from 5 clusters until 1 cluster remains) for HAC single link. Show work. Use Manhattan distance. In case of ties, go with the cluster containing the least alphabetical instance.
Show the dendrogram for the HAC case, including properly labeled distances on the vertical axis of the dendrogram.

Pattern   x     y
a         .8    .7
b        -.1    .2
c         .9    .8
d         0     .2
e         .2    .1

CS 478 - Clustering

HAC Homework

Distance matrix (Manhattan):
     a     b     c     d     e
a          1.4   .2    1.3   1.2
b                1.6   .1    .4
c                      1.5   1.4
d                            .3

Iteration   Closest clusters        Distance
1           5 separate clusters
2           b and d                 .1
3           a and c                 .2
4           {b,d} and e             .3
5           {b,d,e} and {a,c}       1.2

CS 478 - Clustering
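A single-link HAC sketch over the five points; the coordinates of c, d, and e are the ones reconstructed from the distance matrix above.

```python
# Single-link HAC with Manhattan distance, reporting which clusters merge
# at each step and at what distance.

points = {'a': (.8, .7), 'b': (-.1, .2), 'c': (.9, .8), 'd': (0.0, .2), 'e': (.2, .1)}

def manhattan(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def single_link(c1, c2):
    return min(manhattan(points[i], points[j]) for i in c1 for j in c2)

clusters = [{k} for k in sorted(points)]
while len(clusters) > 1:
    # find the closest pair of clusters (ties resolved by index order)
    pairs = [(single_link(c1, c2), i, j)
             for i, c1 in enumerate(clusters) for j, c2 in enumerate(clusters) if i < j]
    dist, i, j = min(pairs)
    print(f"merge {sorted(clusters[i])} and {sorted(clusters[j])} at distance {dist:.1f}")
    clusters[i] |= clusters[j]
    del clusters[j]
```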

k-means Homework
For the data below, show the centroid values and which instances are closest to each centroid after centroid calculation, for two iterations of k-means using Manhattan distance.
By 2 iterations I mean 2 centroid changes after the initial centroids.
Assume k = 2 and that the first two instances are the initial centroids.

Pattern   x     y
a         .9    .8
b         .2    .2
c         .7    .6
d        -.1   -.6
e         .5    .5

CS 478 - Clustering

k-means Homework
For the data below, show the centroid values and which instances are closest to each centroid after centroid calculation, for two iterations of k-means using Manhattan distance.
By 2 iterations I mean 2 centroid changes after the initial centroids.
Assume k = 2 and that the first two instances are the initial centroids.

Pattern   x     y
a         .9    .8
b         .2    .2
c         .7    .6
d        -.1   -.6
e         .5    .5

Iteration   Centroid 1 and instances     Centroid 2 and instances
Initial     (.9, .8)    {a, c}           (.2, .2)     {b, d, e}
1           (.8, .7)    {a, c, e}        (.2, .033)   {b, d}
2           (.7, .633)  {a, c, e}        (.05, -.2)   {b, d}

CS 478 - Clustering
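A sketch of the two k-means iterations; b's y value and e's coordinates are the ones reconstructed from the centroid means in the solution table.

```python
# Two k-means iterations (after the initial centroids) with Manhattan distance.

points = {'a': (.9, .8), 'b': (.2, .2), 'c': (.7, .6), 'd': (-.1, -.6), 'e': (.5, .5)}
centroids = [points['a'], points['b']]          # first two instances

def manhattan(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

for it in range(3):                             # initial assignment + 2 centroid updates
    assign = {0: [], 1: []}
    for name, p in points.items():
        k = min((0, 1), key=lambda i: manhattan(p, centroids[i]))
        assign[k].append(name)
    print(f"iteration {it}: centroids {[tuple(round(v, 3) for v in c) for c in centroids]}, "
          f"clusters {assign}")
    centroids = [
        (sum(points[n][0] for n in assign[k]) / len(assign[k]),
         sum(points[n][1] for n in assign[k]) / len(assign[k]))
        for k in (0, 1)
    ]
```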

Q-Learning Homework
Assume the deterministic 4-state world below (each cell is a state), where the immediate reward is 0 for entering all states except the rightmost state, for which the reward is 10 and which is an absorbing state. The only actions are move right and move left (only one of which is available from the border cells). Assuming a discount factor of .8, give the final optimal Q values for each action in each state and describe an optimal policy.

State:            1      2      3      4 (absorbing)
Q(move right):    6.4    8      10     –
Q(move left):     –      5.12   6.4    –

Optimal policy: "Choose the Right!!" (always move right).

CS 478 - Reinforcement Learning
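A small Q-value-iteration sketch for the corridor world described above (states numbered left to right, rightmost absorbing); it converges to the Q values and right-moving policy given on the slide.

```python
# Q-value iteration for the 4-state corridor: reward 10 for entering the rightmost
# (absorbing) state, 0 otherwise, discount 0.8, actions left/right where available.

GAMMA = 0.8
N = 4                       # states 0..3, state 3 is absorbing
Q = {(s, a): 0.0 for s in range(N - 1) for a in ('left', 'right')
     if not (s == 0 and a == 'left')}

def step(s, a):
    s2 = s + 1 if a == 'right' else s - 1
    reward = 10 if s2 == N - 1 else 0
    return s2, reward

for _ in range(20):         # enough sweeps for this deterministic world to converge
    for (s, a) in Q:
        s2, r = step(s, a)
        future = 0.0 if s2 == N - 1 else max(Q[(s2, a2)] for a2 in ('left', 'right')
                                             if (s2, a2) in Q)
        Q[(s, a)] = r + GAMMA * future

for (s, a), v in sorted(Q.items()):
    print(f"Q(state {s + 1}, {a}) = {v:g}")
# expected: right -> 6.4, 8, 10 ; left (states 2, 3) -> 5.12, 6.4
```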