Machine Learning
Linear regression with one variable: Cost function
Training set of housing prices:

Size in feet² (x)    Price ($) in 1000's (y)
2104                 460
1416                 232
1534                 315
852                  178
…                    …

Hypothesis: $h_\theta(x) = \theta_0 + \theta_1 x$
$\theta_0, \theta_1$: parameters
How to choose $\theta_0, \theta_1$?
Idea: choose $\theta_0, \theta_1$ so that $h_\theta(x)$ is close to $y$ for our training examples $(x, y)$. That is, minimize the average squared error over the $m$ training examples:

$$\min_{\theta_0,\,\theta_1}\; \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2$$

The quantity being minimized is the cost function $J(\theta_0, \theta_1)$.
Linear regression with one variable: Cost function intuition I
Simplified hypothesis: $h_\theta(x) = \theta_1 x$ (set $\theta_0 = 0$, so the line passes through the origin)
Parameters: $\theta_1$
Cost function: $J(\theta_1) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2$
Goal: $\min_{\theta_1} J(\theta_1)$
[Figure sequence: left, $h_\theta(x)$ for a fixed $\theta_1$ (a function of $x$) plotted against the training points; right, $J(\theta_1)$ (a function of the parameter $\theta_1$). Each slide fixes a different $\theta_1$, tracing out the bowl-shaped curve of $J(\theta_1)$ point by point.]
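The sweep the figures perform is easy to reproduce. A minimal Octave sketch, assuming the toy training set (1, 1), (2, 2), (3, 3) used in the lecture, for which $J(\theta_1)$ is minimized at $\theta_1 = 1$:

```octave
x = [1; 2; 3];  y = [1; 2; 3];  m = length(y);   % assumed toy data
theta1_range = -0.5:0.1:2.5;
J = zeros(size(theta1_range));
for k = 1:length(theta1_range)
  h = theta1_range(k) * x;                 % h_theta(x) = theta_1 * x
  J(k) = (1/(2*m)) * sum((h - y).^2);      % squared-error cost
end
plot(theta1_range, J);                     % bowl-shaped, minimum at theta_1 = 1
xlabel('theta_1'); ylabel('J(theta_1)');
```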
Linear regression with one variable: Cost function intuition II
Hypothesis: $h_\theta(x) = \theta_0 + \theta_1 x$
Parameters: $\theta_0, \theta_1$
Cost function: $J(\theta_0, \theta_1) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2$
Goal: $\min_{\theta_0, \theta_1} J(\theta_0, \theta_1)$
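In code, this cost is one line of vectorized Octave. A minimal sketch (function and variable names are mine, not the course's), where X is an m × 2 matrix whose first column is all ones:

```octave
function J = computeCost(X, y, theta)
  m = length(y);
  h = X * theta;                      % h_theta(x) = theta_0 + theta_1*x, all m examples at once
  J = (1/(2*m)) * sum((h - y).^2);    % squared-error cost
end
```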
[Figure sequence: left, $h_\theta(x)$ for fixed $(\theta_0, \theta_1)$ plotted over the training data (Size in feet² (x) vs. Price ($) in 1000's); right, $J(\theta_0, \theta_1)$ as a function of the parameters, drawn as a contour plot. Each slide marks a different $(\theta_0, \theta_1)$ pair and the corresponding point on the contours; fits closer to the data sit nearer the center of the contour ellipses.]
Linear regression with one variable: Gradient descent
Have some function $J(\theta_0, \theta_1)$. Want $\min_{\theta_0, \theta_1} J(\theta_0, \theta_1)$.

Outline:
- Start with some $\theta_0, \theta_1$ (say, $\theta_0 = 0$, $\theta_1 = 0$).
- Keep changing $\theta_0, \theta_1$ to reduce $J(\theta_0, \theta_1)$ until we hopefully end up at a minimum.
[Figure: 3D surface plot of $J(\theta_0, \theta_1)$ over the $(\theta_0, \theta_1)$ plane. Starting gradient descent from two nearby points leads downhill to two different local minima.]
Gradient descent algorithm:

repeat until convergence {
  $\theta_j := \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta_0, \theta_1)$   (for $j = 0$ and $j = 1$)
}

Correct: simultaneous update
  temp0 $:= \theta_0 - \alpha \frac{\partial}{\partial \theta_0} J(\theta_0, \theta_1)$
  temp1 $:= \theta_1 - \alpha \frac{\partial}{\partial \theta_1} J(\theta_0, \theta_1)$
  $\theta_0$ := temp0
  $\theta_1$ := temp1

Incorrect:
  temp0 $:= \theta_0 - \alpha \frac{\partial}{\partial \theta_0} J(\theta_0, \theta_1)$
  $\theta_0$ := temp0
  temp1 $:= \theta_1 - \alpha \frac{\partial}{\partial \theta_1} J(\theta_0, \theta_1)$   (uses the already-updated $\theta_0$)
  $\theta_1$ := temp1
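The difference matters whenever the partial derivatives couple the parameters. A toy Octave sketch (cost and numbers are mine, for illustration only) using $J(\theta_0, \theta_1) = (\theta_0 + \theta_1 - 1)^2$, whose partials both equal $2(\theta_0 + \theta_1 - 1)$:

```octave
alpha = 0.1;

% Correct: both temps are computed from the OLD parameter values.
theta0 = 0; theta1 = 0;
temp0 = theta0 - alpha * 2*(theta0 + theta1 - 1);
temp1 = theta1 - alpha * 2*(theta0 + theta1 - 1);
printf('simultaneous: (%.2f, %.2f)\n', temp0, temp1);   % (0.20, 0.20)

% Incorrect: theta1's step sees the already-updated theta0.
theta0 = 0; theta1 = 0;
theta0 = theta0 - alpha * 2*(theta0 + theta1 - 1);       % 0.20
theta1 = theta1 - alpha * 2*(theta0 + theta1 - 1);       % 0.16, not 0.20
printf('sequential:   (%.2f, %.2f)\n', theta0, theta1);
```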
Linear regression with one variable: Gradient descent intuition
Gradient descent algorithm (one-parameter case, $J(\theta_1)$):

$$\theta_1 := \theta_1 - \alpha \frac{d}{d\theta_1} J(\theta_1)$$

If the slope $\frac{d}{d\theta_1} J(\theta_1)$ is positive, $\theta_1$ decreases; if the slope is negative, $\theta_1$ increases. Either way, the update moves $\theta_1$ toward the minimum.
If α is too small, gradient descent can be slow.
If α is too large, gradient descent can overshoot the minimum. It may fail to converge, or even diverge.
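Both failure modes are easy to see numerically. An illustrative Octave sketch (example of my own, not from the slides) running ten steps on $J(\theta) = \theta^2$, whose derivative is $2\theta$:

```octave
theta_small = 1;  theta_large = 1;
for iter = 1:10
  theta_small = theta_small - 0.01 * 2*theta_small;  % alpha = 0.01: slow, steady shrink
  theta_large = theta_large - 1.1  * 2*theta_large;  % alpha = 1.1: each step overshoots 0
end
printf('alpha=0.01: %.3f   alpha=1.1: %.1f\n', theta_small, theta_large);
% alpha=0.01 creeps toward the minimum (0.98^10 ~ 0.82);
% alpha=1.1 multiplies theta by -1.2 each step, so it diverges ((-1.2)^10 ~ 6.2).
```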
At a local optimum the derivative is zero, so the update

$$\theta_1 := \theta_1 - \alpha \cdot 0$$

leaves the current value of $\theta_1$ unchanged: gradient descent stays put once it reaches a local optimum.
Gradient descent can converge to a local minimum, even with the learning rate α fixed.
As we approach a local minimum, gradient descent will automatically take smaller steps. So, no need to decrease α over time.
Linear regression with one variable: Gradient descent for linear regression
Gradient descent algorithm:

repeat until convergence {
  $\theta_j := \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta_0, \theta_1)$   (for $j = 0$ and $j = 1$)
}

Linear regression model:

$$h_\theta(x) = \theta_0 + \theta_1 x, \qquad J(\theta_0, \theta_1) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2$$

Plugging the model into the derivatives:

$$\frac{\partial}{\partial \theta_0} J = \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right), \qquad \frac{\partial}{\partial \theta_1} J = \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x^{(i)}$$
Gradient descent algorithm for linear regression:

repeat until convergence {
  $\theta_0 := \theta_0 - \alpha \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)$
  $\theta_1 := \theta_1 - \alpha \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x^{(i)}$
}   (update $\theta_0$ and $\theta_1$ simultaneously)
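As a sketch in Octave (names are mine): with X an m × 2 matrix whose first column is ones, both updates come out of one matrix product, which also makes the update simultaneous by construction:

```octave
function theta = gradientDescent(X, y, theta, alpha, num_iters)
  m = length(y);
  for iter = 1:num_iters
    h = X * theta;                               % h_theta(x) for all m examples
    theta = theta - (alpha/m) * (X' * (h - y));  % updates theta_0 and theta_1 together
  end
end
```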
Gradient descent example

$\theta_1 = 2$, $\theta_0 = -1$, $\alpha = 0.01$ (so $h = -1 + 2x$; the computed columns follow from these values):

x    y     h    h − y    (h − y)·x
1    2     1    −1       −1
3    6     5    −1       −3
5    10    9    −1       −5

With $m = 3$: $\sum (h - y) = -3$ and $\sum (h - y)\,x = -9$, so one step gives
$\theta_0 := -1 - 0.01 \cdot \frac{1}{3} \cdot (-3) = -0.99$ and $\theta_1 := 2 - 0.01 \cdot \frac{1}{3} \cdot (-9) = 2.03$.
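A sketch reproducing this step in Octave, assuming the data rows are (1, 2), (3, 6), (5, 10) as in the table:

```octave
X = [1 1; 1 3; 1 5];          % leading column of ones multiplies theta_0
y = [2; 6; 10];
theta = [-1; 2];              % theta_0 = -1, theta_1 = 2
alpha = 0.01;  m = length(y);

h = X * theta;                % [1; 5; 9]
err = h - y;                  % [-1; -1; -1]
theta = theta - (alpha/m) * (X' * err);
disp(theta')                  % -0.9900  2.0300
```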
[Figure sequence: left, the current hypothesis $h_\theta(x)$ drawn over the housing data (a function of $x$); right, the contour plot of $J(\theta_0, \theta_1)$ (a function of the parameters). Each successive slide shows one more gradient descent step: the point on the contours moves toward the minimum while the line fits the data better and better.]
Logistic Regression: Classification
Classification examples:
- Email: Spam / Not Spam?
- Online transactions: Fraudulent (Yes / No)?
- Tumor: Malignant / Benign?

$y \in \{0, 1\}$:
  0: "Negative class" (e.g., benign tumor)
  1: "Positive class" (e.g., malignant tumor)
Classification: $y$ = 0 or 1, but the linear regression hypothesis $h_\theta(x)$ can be > 1 or < 0. Logistic regression instead guarantees $0 \le h_\theta(x) \le 1$. (Despite the name, logistic regression is a classification algorithm.)
Logistic Regression: Hypothesis representation
Logistic regression model. Want $0 \le h_\theta(x) \le 1$, so pass the linear score through the sigmoid (logistic) function $g$:

$$h_\theta(x) = g(\theta^T x), \qquad g(z) = \frac{1}{1 + e^{-z}}$$

$g(z)$ is 0.5 at $z = 0$, and asymptotes to 1 as $z \to \infty$ and to 0 as $z \to -\infty$.
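A one-line Octave sketch of $g$, written element-wise so it accepts scalars, vectors, or matrices:

```octave
function g = sigmoid(z)
  g = 1 ./ (1 + exp(-z));   % element-wise logistic function
end
% sigmoid(0) returns 0.5; sigmoid(10) is ~1; sigmoid(-10) is ~0.
```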
Logistic regression decision rule. Suppose we predict "$y = 1$" if $h_\theta(x) \ge 0.5$, and predict "$y = 0$" if $h_\theta(x) < 0.5$. Since $g(z) \ge 0.5$ exactly when $z \ge 0$, this is the same as predicting $y = 1$ whenever $\theta^T x \ge 0$.
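The rule as code, in a sketch (function name is mine) that uses the $\theta^T x \ge 0$ form to skip computing the sigmoid:

```octave
function p = predict(theta, X)
  p = (X * theta) >= 0;   % 1 where h_theta(x) >= 0.5, else 0
end
```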
Logistic Regression: Cost function
Training set of $m$ examples: $\{(x^{(1)}, y^{(1)}), (x^{(2)}, y^{(2)}), \ldots, (x^{(m)}, y^{(m)})\}$, where $x \in \mathbb{R}^{n+1}$ with $x_0 = 1$ and $y \in \{0, 1\}$, and

$$h_\theta(x) = \frac{1}{1 + e^{-\theta^T x}}$$

How to choose parameters $\theta$?
Cost function. Linear regression used

$$J(\theta) = \frac{1}{m} \sum_{i=1}^{m} \tfrac{1}{2} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2$$

With the sigmoid hypothesis plugged in, this squared-error cost is "non-convex": it has many local optima, so gradient descent is not guaranteed to find the global minimum. We want a "convex" cost, with a single bowl shape.
Logistic regression cost function:

$$\text{Cost}(h_\theta(x), y) = -\log(h_\theta(x)) \quad \text{if } y = 1$$

If $y = 1$: the cost is 0 when $h_\theta(x) = 1$, and grows without bound as $h_\theta(x) \to 0$. This captures the intuition that if $h_\theta(x) = 0$ (predict $P(y = 1 \mid x; \theta) = 0$) but actually $y = 1$, the learning algorithm should pay a very large cost.
Logistic regression cost function:

$$\text{Cost}(h_\theta(x), y) = -\log(1 - h_\theta(x)) \quad \text{if } y = 0$$

Symmetrically, if $y = 0$: the cost is 0 when $h_\theta(x) = 0$, and grows without bound as $h_\theta(x) \to 1$.
Logistic Regression: Simplified cost function and gradient descent
Logistic regression cost function:

$$J(\theta) = \frac{1}{m} \sum_{i=1}^{m} \text{Cost}(h_\theta(x^{(i)}), y^{(i)})$$

Because $y$ is always 0 or 1, the two cases can be written as one expression:

$$\text{Cost}(h_\theta(x), y) = -y \log(h_\theta(x)) - (1 - y) \log(1 - h_\theta(x))$$

($y = 1$ leaves only the first term; $y = 0$ leaves only the second.)
Putting these together:

$$J(\theta) = -\frac{1}{m} \left[ \sum_{i=1}^{m} y^{(i)} \log h_\theta(x^{(i)}) + (1 - y^{(i)}) \log(1 - h_\theta(x^{(i)})) \right]$$

To fit parameters $\theta$: $\min_\theta J(\theta)$.
To make a prediction given new $x$: output $h_\theta(x) = \frac{1}{1 + e^{-\theta^T x}}$, interpreted as $P(y = 1 \mid x; \theta)$.
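A vectorized Octave sketch of this cost and its gradient (function and variable names are mine); X is m × (n+1) with $x_0 = 1$, y is a 0/1 column vector:

```octave
function [J, grad] = logisticCost(theta, X, y)
  m = length(y);
  h = 1 ./ (1 + exp(-(X * theta)));                     % h_theta(x) for all examples
  J = -(1/m) * (y' * log(h) + (1 - y)' * log(1 - h));   % the cost above
  grad = (1/m) * (X' * (h - y));                        % dJ/dtheta_j for every j
end
```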
Gradient descent. Want $\min_\theta J(\theta)$:

Repeat {
  $\theta_j := \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta)$
}   (simultaneously update all $\theta_j$)

The derivative works out to $\frac{\partial}{\partial \theta_j} J(\theta) = \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_j^{(i)}$, so:

Repeat {
  $\theta_j := \theta_j - \alpha \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_j^{(i)}$
}   (simultaneously update all $\theta_j$)

Algorithm looks identical to linear regression! The difference is hidden in the hypothesis: here $h_\theta(x) = \frac{1}{1 + e^{-\theta^T x}}$ rather than $\theta_0 + \theta_1 x$.
Chain rule. To differentiate $J(\theta)$, use $\frac{\partial J}{\partial \theta_j} = \frac{\partial J}{\partial h} \cdot \frac{\partial h}{\partial z} \cdot \frac{\partial z}{\partial \theta_j}$ with $z = \theta^T x$ and $h = g(z)$.

Derivation of the logistic regression gradient. A useful fact about the sigmoid: $g'(z) = g(z)\,(1 - g(z))$, so $\frac{\partial h}{\partial \theta_j} = h (1 - h)\, x_j$.

Now derive $\frac{\partial}{\partial \theta_j} J(\theta)$ from $J(\theta) = -\frac{1}{m} \sum_i \left[ y^{(i)} \log h^{(i)} + (1 - y^{(i)}) \log(1 - h^{(i)}) \right]$, writing $h^{(i)} = h_\theta(x^{(i)})$:

$$\frac{\partial J}{\partial \theta_j} = -\frac{1}{m} \sum_i \left( \frac{y^{(i)}}{h^{(i)}} - \frac{1 - y^{(i)}}{1 - h^{(i)}} \right) h^{(i)} (1 - h^{(i)})\, x_j^{(i)} = \frac{1}{m} \sum_i \left( h^{(i)} - y^{(i)} \right) x_j^{(i)}$$

which is exactly the derivative used in the update above.
To use an advanced optimizer, supply a function that returns both the cost and the gradient:

$\theta = [\theta_0; \theta_1; \ldots; \theta_n]$

function [jVal, gradient] = costFunction(theta)
  jVal = [code to compute $J(\theta)$];
  gradient(1) = [code to compute $\frac{\partial}{\partial \theta_0} J(\theta)$];
  gradient(2) = [code to compute $\frac{\partial}{\partial \theta_1} J(\theta)$];
  ⋮
  gradient(n+1) = [code to compute $\frac{\partial}{\partial \theta_n} J(\theta)$];

(Octave indexes from 1, so gradient(1) holds the derivative with respect to $\theta_0$.)
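One way to fill in the skeleton, as a sketch: reuse the logisticCost function above and hand it to Octave's fminunc (a built-in unconstrained minimizer); X and y are assumed to already be in scope:

```octave
options = optimset('GradObj', 'on', 'MaxIter', 100);   % we supply the gradient ourselves
initialTheta = zeros(size(X, 2), 1);
[optTheta, functionVal, exitFlag] = ...
    fminunc(@(t) logisticCost(t, X, y), initialTheta, options);
```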