Single Neuron Hung-yi Lee
Single Neuron [diagram: inputs, each multiplied by a weight, summed with a bias, then passed through an activation function]
Learning to say “yes/no” Binary Classification
Learning to say yes/no: Binary Classification
Spam filtering: is an e-mail spam or not?
Recommendation systems: recommend a product to the customer or not?
Malware detection: is the software malicious or not?
Stock prediction: will the future value of a stock increase or not with respect to its current value?
Example Application: Spam filtering [diagram: a function f takes an e-mail x as input and outputs “Spam” or “Not spam”]
Example Application: Spam filtering. How to estimate P(yes|x)? What does the function f look like?
Example Application: Spam filtering. To estimate P(yes|x), collect examples first: labeled e-mails x^1, x^2, x^3, … where, e.g., “Win … free……” and “Earn … free …… free” are Yes (Spam) and “Talk … Meeting …” is No (Not Spam). Some words frequently appear in spam, e.g., “free”. Use the frequency of “free” to decide if an e-mail is spam: estimate P(yes | x_free = k), where x_free is the number of “free” in x.
Regression [plot: x-axis is the frequency of “free” (x_free) in an e-mail, y-axis is p(yes|x_free)]. From the examples we can estimate, e.g., p(yes | x_free = 0) = 0.1 and p(yes | x_free = 1) = 0.4. Problem: what if one day you receive an e-mail with 3 “free”? In the training data, there is no e-mail containing 3 “free”.
Regression: fit f(x_free) = w x_free + b, where f(x_free) is an estimate of p(yes|x_free). Store w and b. [plot: the fitted line over the same axes]
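As a minimal sketch of this fit (only p(yes|x_free=0)=0.1 and p(yes|x_free=1)=0.4 come from the slides; the remaining counts and probabilities below are made up for illustration), a least-squares fit in Python could look like:

import numpy as np

# Estimated p(yes | x_free) for each observed "free"-count. Only the
# first two values appear on the slides; the rest are made up.
x_free = np.array([0.0, 1.0, 2.0, 4.0, 5.0])
p_yes = np.array([0.1, 0.4, 0.6, 0.9, 0.95])

# Least-squares fit of f(x_free) = w * x_free + b.
A = np.stack([x_free, np.ones_like(x_free)], axis=1)
(w, b), *_ = np.linalg.lstsq(A, p_yes, rcond=None)

# Interpolate to an unseen count, e.g. an e-mail with 3 "free".
print(w * 3 + b)  # estimate of p(yes | x_free = 3)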
Regression. Problem: what if one day you receive an e-mail with 6 “free”? The output of f = w x_free + b is not between 0 and 1. [plot: the fitted line leaves the [0, 1] range for large x_free]
Logit. The probability to be spam, p = p(yes|x_free), is always between 0 and 1, so a straight line fits it poorly. Instead, regress on logit(p) = ln(p / (1 - p)), which can take any real value. [plots: p(yes|x_free) against x_free, bounded in [0, 1]; logit(p) against x_free, unbounded] Fit f'(x_free) = w' x_free + b', where f'(x_free) is an estimate of logit(p).
Logit. Store w' and b'. For a new e-mail, compute f'(x_free) = w' x_free + b'; if the estimated p is greater than 0.5 (equivalently, the estimated logit is greater than 0), say “yes”. A linear function followed by a threshold like this is a perceptron.
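Spelling out the threshold step (implicit on the slide):

logit(p) = ln(p / (1 - p)) > 0  <=>  p / (1 - p) > 1  <=>  p > 0.5

so predicting “yes” when f'(x_free) = w' x_free + b' > 0 is the same as the p > 0.5 rule.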
Multiple Variables. Consider two words, “free” and “hello”, and compute p(yes | x_free, x_hello). [plot: training e-mails as points in the (x_free, x_hello) plane]
Multiple Variables: Regression. Fit a linear function of both counts, w_1 x_free + w_2 x_hello + b', to approximate logit(p). [plot: the fitted boundary in the (x_free, x_hello) plane]
Multiple Variables. Of course, we can consider all words {t_1, t_2, …, t_N} in a dictionary: z = b + w_1 x_t1 + w_2 x_t2 + … + w_N x_tN, where x_ti counts how often t_i appears in the e-mail; z is to approximate logit(p).
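A minimal sketch of computing z for one e-mail; the dictionary, weights, and bias below are hypothetical, not values from the slides:

import numpy as np

dictionary = ["free", "hello", "win", "meeting"]  # {t_1, ..., t_N}, toy example
w = np.array([1.5, -0.3, 1.1, -0.8])              # one weight per word (made up)
b = -2.0                                          # bias (made up)

def word_counts(email, dictionary):
    """Count how often each dictionary word appears in the e-mail."""
    tokens = email.lower().split()
    return np.array([tokens.count(t) for t in dictionary], dtype=float)

x = word_counts("win free free talk", dictionary)  # feature vector (x_t1, ..., x_tN)
z = b + w @ x                                      # z approximates logit(p)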
Approximating logit(p): Logistic Regression. For a training e-mail x (say t_1 appears 3 times, t_2 appears 0 times, …, t_N appears 1 time), the label is hard: the probability to be spam p is always exactly 1 or 0. If p = 1 or 0, then ln(p / (1 - p)) = +infinity or -infinity, so we cannot do regression on the logit directly.
Logistic Regression. Instead, pass z through the sigmoid function, sigma(z) = 1 / (1 + e^(-z)), which squashes the output between 0 and 1 so it can be read as p(yes|x).
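A quick sketch (not on the slides) checking that the sigmoid inverts the logit, which is why its output can be read as p:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logit(p):
    return np.log(p / (1.0 - p))

p = 0.4
print(sigmoid(logit(p)))  # ~0.4: sigmoid undoes the logit transform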
Logistic Regression. For a training e-mail x^1 labeled Yes (Spam), the output should be close to 1; for x^2 labeled No (Not Spam), the output should be close to 0.
Logistic Regression [diagram: feature values x_1 … x_N, each multiplied by a weight, plus a bias, then a sigmoid; the target output is 1 for Yes (Spam) and 0 for No (Not Spam)]
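The slides do not spell out the training procedure; one standard choice, assumed here, is gradient descent on the cross-entropy loss:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic(X, y, lr=0.1, steps=1000):
    """X: (num_emails, num_words) count matrix; y: 0/1 labels."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(steps):
        p = sigmoid(X @ w + b)   # predicted p(yes|x) for every e-mail
        grad_z = p - y           # gradient of cross-entropy w.r.t. z
        w -= lr * (X.T @ grad_z) / n
        b -= lr * grad_z.mean()
    return w, b

# Toy data: columns are counts of ("free", "meeting"); labels are made up.
X = np.array([[2., 0.], [1., 0.], [0., 2.], [0., 1.]])
y = np.array([1., 1., 0., 0.])
w, b = train_logistic(X, y)
print(sigmoid(X @ w + b))  # should lean toward [1, 1, 0, 0]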
More than saying “yes/no” Multiclass Classification
More than saying “yes/no”. Handwritten digit classification is multiclass classification: the answer is one of ten digits, not just yes or no.
More than saying “yes/no”. Handwritten digit classification. Simplify the question: decide whether an image is “2” or not. The feature of an image describes the characteristics of the input object; each pixel corresponds to one dimension of the feature vector. [diagram: pixel features x_1 … x_N feed a single neuron whose output answers “2” or not]
More than saying “yes/no”. Handwritten digit classification via binary classifiers for 1, 2, 3, …: one neuron answers “1” or not (output y_1), another answers “2” or not (output y_2), another “3” or not (output y_3), and so on. If y_2 is the max, then the image is “2”.
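A sketch of this one-neuron-per-digit scheme; the image size, random (untrained) weights, and digit set 0-9 are assumptions for illustration, since the slides only give the argmax rule:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

num_pixels, num_digits = 28 * 28, 10           # e.g. 28x28 images, digits 0-9
rng = np.random.default_rng(0)
W = rng.normal(size=(num_digits, num_pixels))  # one weight row per digit (untrained)
b = np.zeros(num_digits)                       # one bias per digit

x = rng.random(num_pixels)                     # flattened image: one dim per pixel
y = sigmoid(W @ x + b)                         # y[k] answers: digit k or not?
print(y.argmax())                              # if y_2 is the max, the image is "2"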
Limitation of a single neuron: motivation for cascading neurons
Limitation of Logistic Regression. Consider this input/output relation:

x_1  x_2  | Output
0    0    | No
0    1    | Yes
1    0    | Yes
1    1    | No

No single linear boundary can separate the Yes points from the No points, so one neuron cannot learn it.
Limitation of Logistic Regression. The table above is the XOR relation, and XOR can be decomposed into simpler gates: x_1 XOR x_2 = (x_1 OR x_2) AND (NOT (x_1 AND x_2)).
Neural Network. Implement XOR by cascading neurons: two hidden neurons compute the intermediate quantities (one acting like OR, one like NOT AND), and an output neuron combines them like AND. Cascaded neurons like this form a neural network.
[figure: the two hidden neurons map each input (x_1, x_2) to activations (a_1, a_2): (0, 0) and (1, 1) both land near (0.27, 0.27), while (0, 1) and (1, 0) land at (0.73, 0.05) and (0.05, 0.73); in the (a_1, a_2) plane the Yes and No points are now linearly separable, so the output neuron (with its bias b) can classify them]
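A sketch reproducing these activations: the hidden weights below are chosen so the sigmoid yields the slide's values (sigmoid(-1) = 0.27, sigmoid(1) = 0.73, sigmoid(-3) = 0.05); the output-neuron weights are an illustrative choice, not from the slides:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hidden layer: two neurons, weights chosen to reproduce the slide's
# activation values 0.27, 0.73, and 0.05.
W_hidden = np.array([[ 2., -2.],    # a1 = sigmoid( 2*x1 - 2*x2 - 1)
                     [-2.,  2.]])   # a2 = sigmoid(-2*x1 + 2*x2 - 1)
b_hidden = np.array([-1., -1.])

# Output neuron: fires when a1 or a2 is large (illustrative values).
w_out, b_out = np.array([10., 10.]), -6.0

for x in [(0., 0.), (0., 1.), (1., 0.), (1., 1.)]:
    a = sigmoid(W_hidden @ np.array(x) + b_hidden)  # hidden activations (a1, a2)
    y = sigmoid(w_out @ a + b_out)                  # final output
    print(x, np.round(a, 2), "yes" if y > 0.5 else "no")  # XOR pattern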
Thank you for listening!
Appendix
Logistic Regression [appendix figure]