1
Machine Learning: Linear regression with one variable. Model representation. (Andrew Ng)
2
Housing Prices (Portland, OR)
[Scatter plot: Size (feet²) on the horizontal axis, Price (in 1000s of dollars) on the vertical axis; a fitted straight line is used to read off a price of roughly 220 for a 1250 ft² house.]
Supervised Learning: we are given the “right answer” for each example in the data.
Regression problem: predict real-valued output. (Classification: discrete-valued output.)
3
Training set of housing prices (Portland, OR):

  Size in feet² (x)    Price ($) in 1000's (y)
  2104                 460
  1416                 232
  1534                 315
  852                  178
  ...                  ...

Notation:
m = number of training examples (the number of rows in the table)
x's = “input” variable / features
y's = “output” variable / “target” variable
(x, y) = one training example
(x^(i), y^(i)) = the i-th training example
For example, x^(1) = 2104, x^(2) = 1416, y^(1) = 460.
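To make the notation concrete, here is the excerpt of the training set above written as plain Python lists (an illustrative sketch; the variable names are my own):

# Training set from the table above: x = size in square feet, y = price in $1000s.
x = [2104, 1416, 1534, 852]
y = [460, 232, 315, 178]

m = len(x)         # m = number of training examples (4 in this excerpt)
print(x[0], y[0])  # x^(1) = 2104 and y^(1) = 460 (Python indexing starts at 0)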
4
Training Set → Learning Algorithm → h (the hypothesis). h maps from x's to y's: given the size of a house x, h outputs the estimated price (the estimated value of y).
How do we represent h? As a linear function of x: h_θ(x) = θ₀ + θ₁·x.
This model is linear regression with one variable, also called univariate linear regression (“univariate” = one variable).
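A minimal sketch of that hypothesis as code, assuming the linear form above (the parameter values in the example call are made up):

def h(x, theta0, theta1):
    # Univariate linear hypothesis: h_theta(x) = theta0 + theta1 * x
    return theta0 + theta1 * x

print(h(1250, 50.0, 0.15))  # 50 + 0.15 * 1250 = 237.5, i.e. an estimated price of $237.5k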
5
Machine Learning: Linear regression with one variable. Cost function.
6
Training Set: the housing prices table above (Size in feet² (x) / Price ($) in 1000's (y): 2104/460, 1416/232, 1534/315, 852/178, ...).
Hypothesis: h_θ(x) = θ₀ + θ₁·x
θ₀, θ₁: parameters
How do we choose the θ's?
7
Examples of h(x) for different parameter choices:
h(x) = 1.5 + 0·x (θ₀ = 1.5, θ₁ = 0)
h(x) = 0.5·x (θ₀ = 0, θ₁ = 0.5)
h(x) = 1 + 0.5·x (θ₀ = 1, θ₁ = 0.5)
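A quick worked check of how the three choices differ, evaluating each hypothesis at the illustrative input x = 2 (a value of my own, not from the slides):

h(2) = 1.5 + 0 \cdot 2 = 1.5, \qquad h(2) = 0.5 \cdot 2 = 1, \qquad h(2) = 1 + 0.5 \cdot 2 = 2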
8
[Plot: training examples (x^(i), y^(i)) in the x-y plane, with a candidate straight line h_θ(x) through them.]
Idea: choose θ₀, θ₁ so that h_θ(x) is close to y for our training examples (x^(i), y^(i)). Measuring “close” by squared differences gives the squared error function.
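Written out in the course's usual notation, the squared error cost function this idea leads to, together with the goal of minimizing it over the parameters:

J(\theta_0, \theta_1) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2,
\qquad \min_{\theta_0,\,\theta_1} \; J(\theta_0, \theta_1)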
9
Machine Learning: Linear regression with one variable. Cost function intuition I.
10
Hypothesis: h_θ(x) = θ₀ + θ₁·x
Parameters: θ₀, θ₁
Cost function: J(θ₀, θ₁) = (1/2m) · Σᵢ (h_θ(x^(i)) − y^(i))²
Goal: minimize J(θ₀, θ₁) over θ₀, θ₁
Simplified version: fix θ₀ = 0, so that h(x) = θ₁·x and the goal becomes minimizing J(θ₁) over the single parameter θ₁.
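To ground the next few slides, a small sketch that evaluates the simplified cost J(θ₁) for several values of θ₁ on a toy dataset (both the dataset and the names are illustrative, not taken from the slides):

def J(theta1, xs, ys):
    # Simplified squared-error cost with theta0 fixed at 0, i.e. h(x) = theta1 * x.
    m = len(xs)
    return sum((theta1 * xi - yi) ** 2 for xi, yi in zip(xs, ys)) / (2 * m)

xs, ys = [1, 2, 3], [1, 2, 3]          # toy training set lying exactly on the line y = x
for t in [0.0, 0.5, 1.0, 1.5, 2.0]:
    print(t, J(t, xs, ys))             # J is smallest (zero) at theta1 = 1 and grows on either side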
11
[Left: h_θ(x) plotted against x with the training examples; for a fixed θ₁ this is a function of x. Right: J(θ₁) plotted against θ₁; this is a function of the parameter θ₁.]
12
[The same pair of plots for a different value of θ₁: the line h_θ(x) = θ₁·x on the left, and the corresponding value of J(θ₁) on the right.]
13
[The same pair of plots for yet another value of θ₁.]
14
Machine Learning: Linear regression with one variable. Cost function intuition II.
15
Hypothesis: h_θ(x) = θ₀ + θ₁·x
Parameters: θ₀, θ₁
Cost function: J(θ₀, θ₁) = (1/2m) · Σᵢ (h_θ(x^(i)) − y^(i))²
Goal: minimize J(θ₀, θ₁) over θ₀, θ₁ (now keeping both parameters).
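For the two-parameter case, the same cost written as a function of (θ₀, θ₁); evaluating it over a grid of (θ₀, θ₁) values is what produces the surface and contour plots on the slides that follow (an illustrative sketch, names my own):

def compute_cost(theta0, theta1, xs, ys):
    # J(theta0, theta1) = (1 / 2m) * sum over i of (h(x_i) - y_i)^2,
    # with the hypothesis h(x) = theta0 + theta1 * x.
    m = len(xs)
    return sum((theta0 + theta1 * xi - yi) ** 2 for xi, yi in zip(xs, ys)) / (2 * m)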
16
[Left: h_θ(x) over the training data, Price ($) in 1000's vs. Size in feet² (x); for fixed θ₀, θ₁ this is a function of x. Right: J(θ₀, θ₁), a function of the parameters θ₀, θ₁.]
17
Contour plots: each contour (ellipse) in a contour plot of J(θ₀, θ₁) is a set of (θ₀, θ₁) values at which J takes the same value.
18
[Left: h_θ(x) over the housing data; for fixed θ₀, θ₁ this is a function of x. Right: contour plot of J(θ₀, θ₁), a function of the parameters θ₀, θ₁.]
19
[The same pair of plots for h(x) = 360 + 0·x: a horizontal line on the left, and the corresponding point on the contour plot of J(θ₀, θ₁) on the right.]
20
[The same pair of plots for another choice of (θ₀, θ₁).]
21
[The same pair of plots for yet another choice of (θ₀, θ₁).]
22
Machine Learning: Linear regression with one variable. Gradient descent.
23
Have some function J(θ₀, θ₁). Want: min over θ₀, θ₁ of J(θ₀, θ₁).
Outline: start with some θ₀, θ₁; keep changing θ₀, θ₁ to reduce J(θ₀, θ₁), until we hopefully end up at a minimum.
24
J(θ₀, θ₁) [3-D surface plot of the cost function]
25
J(θ₀, θ₁) [the same kind of surface plot; descending from a different starting point can end at a different local minimum]
26
Gradient descent algorithm: repeat until convergence { θ_j := θ_j − α · (∂/∂θ_j) J(θ₀, θ₁), for j = 0 and j = 1 }.
Here α is the learning rate, and “:=” denotes assignment, as in a := b.
Correct: simultaneous update (compute the new values of both θ₀ and θ₁ from the current parameters using temporary variables, then assign both).
Incorrect: updating θ₀ first and then using the already-updated θ₀ when computing the new θ₁.
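In code, the “correct” simultaneous update boils down to evaluating both derivative terms at the old parameter values before overwriting either parameter. A runnable sketch with stand-in derivative functions (the real derivatives for linear regression appear later in this section):

# Stand-in partial derivatives, only so the update pattern can run;
# they are NOT the real gradients of the linear regression cost.
def dJ_dtheta0(t0, t1):
    return t0 + 2 * t1 - 1.0

def dJ_dtheta1(t0, t1):
    return 2 * t0 + 5 * t1 - 2.0

alpha, theta0, theta1 = 0.1, 0.0, 0.0

# Correct: compute both updates from the current (old) values, then assign.
temp0 = theta0 - alpha * dJ_dtheta0(theta0, theta1)
temp1 = theta1 - alpha * dJ_dtheta1(theta0, theta1)
theta0, theta1 = temp0, temp1

# Incorrect: updating theta0 first means dJ_dtheta1 would be evaluated
# at the new theta0 instead of the old one:
# theta0 = theta0 - alpha * dJ_dtheta0(theta0, theta1)
# theta1 = theta1 - alpha * dJ_dtheta1(theta0, theta1)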
27
Machine Learning: Linear regression with one variable. Gradient descent intuition.
28
Gradient descent algorithm (the same update rule): θ_j := θ_j − α · (∂/∂θ_j) J(θ₀, θ₁).
The update has two ingredients: the learning rate α, which controls how big each step is, and the derivative term, whose sign determines the direction in which θ_j moves.
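A one-step numerical illustration of “learning rate times derivative” on a toy one-parameter cost J(θ) = θ² (my example, not from the slides), with α = 0.1 and current value θ = 3:

\theta := \theta - \alpha \, \frac{d}{d\theta} J(\theta) = 3 - 0.1 \cdot (2 \cdot 3) = 2.4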
29
30
If α is too small, gradient descent can be slow. If α is too large, gradient descent can overshoot the minimum. It may fail to converge, or even diverge.
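Continuing the toy example J(θ) = θ² from above: each step multiplies θ by (1 − 2α), so a small α shrinks θ toward the minimum while a too-large α overshoots and diverges:

\alpha = 0.1:\quad 3 \to 2.4 \to 1.92 \to \cdots \to 0
\qquad
\alpha = 1.1:\quad 3 \to -3.6 \to 4.32 \to -5.18 \to \cdots \ (\text{diverges})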
31
At a local optimum the derivative term is zero, so the update θ₁ := θ₁ − α · 0 leaves the current value of θ₁ unchanged: gradient descent stays put once it reaches a local optimum.
32
Gradient descent can converge to a local minimum even with the learning rate α held fixed: as we approach a local minimum, the derivative becomes smaller in magnitude, so gradient descent automatically takes smaller steps. There is therefore no need to decrease α over time.
33
Machine Learning: Linear regression with one variable. Gradient descent for linear regression.
34
Putting it together: apply the gradient descent algorithm (the update rule above) to the linear regression model, i.e. the hypothesis h_θ(x) = θ₀ + θ₁·x with the squared error cost J(θ₀, θ₁).
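Substituting h_θ(x) = θ₀ + θ₁x into the squared error cost and differentiating gives the two derivative terms the update needs (the standard result for this cost):

\frac{\partial}{\partial \theta_0} J(\theta_0, \theta_1) = \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)
\qquad
\frac{\partial}{\partial \theta_1} J(\theta_0, \theta_1) = \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x^{(i)}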
35
36
Gradient descent algorithm for linear regression: repeat the two updates, updating θ₀ and θ₁ simultaneously using the derivatives above.
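Putting the pieces together, a minimal sketch of the whole algorithm with the simultaneous update and the derivatives above (the function name, α, and the iteration count are illustrative choices of my own):

def gradient_descent(xs, ys, alpha=0.01, iters=1000):
    """Batch gradient descent for the univariate model h(x) = theta0 + theta1 * x."""
    m = len(xs)
    theta0, theta1 = 0.0, 0.0
    for _ in range(iters):
        # Each step uses all m training examples ("batch" gradient descent).
        errors = [theta0 + theta1 * xi - yi for xi, yi in zip(xs, ys)]
        grad0 = sum(errors) / m
        grad1 = sum(e * xi for e, xi in zip(errors, xs)) / m
        # Simultaneous update of both parameters.
        theta0, theta1 = theta0 - alpha * grad0, theta1 - alpha * grad1
    return theta0, theta1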
37
J(θ₀, θ₁) [surface plot of a cost function with several local optima]
38
J(θ₀, θ₁) [bowl-shaped surface plot of the linear regression cost]
39
Convex function (“bowl-shaped”): the squared error cost J(θ₀, θ₁) for linear regression is convex, so it has a single global minimum and no other local optima for gradient descent to get stuck in.
40
(For fixed θ₀, θ₁, h_θ(x) is a function of x; J(θ₀, θ₁) is a function of the parameters θ₀, θ₁.)
[Slides 40 through 48 repeat this pair of plots, tracing successive gradient descent steps on the contour plot of J(θ₀, θ₁) together with the corresponding hypothesis line over the housing data.]
49
“Batch” Gradient Descent. “Batch”: each step of gradient descent uses all the training examples.
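For completeness, a quick run of the gradient_descent sketch defined above on a toy dataset (illustrative; it relies on that earlier function):

xs, ys = [1, 2, 3], [1, 2, 3]              # toy data lying on the line y = x
theta0, theta1 = gradient_descent(xs, ys, alpha=0.1, iters=1000)
print(round(theta0, 3), round(theta1, 3))  # close to 0.0 and 1.0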