
1 Dan Simon, Cleveland State University. Jang, Sun, and Mizutani, Neuro-Fuzzy and Soft Computing, Chapter 6: Derivative-Based Optimization

2 Outline
1. Gradient Descent
2. The Newton-Raphson Method
3. The Levenberg–Marquardt Algorithm
4. Trust Region Methods

3 Gradient descent: head downhill (http://en.wikipedia.org/wiki/Gradient_descent). [Figure: contour plot of the error surface with descent steps]

4 Fuzzy controller optimization: find the MF parameters that minimize tracking error:
min E(θ) with respect to θ
θ = n-element vector of MF parameters
E(θ) = controller tracking error

5 Contour plot of x² + 10y² in MATLAB:
x = -3 : 0.1 : 3;  y = -2 : 0.1 : 2;
for i = 1:length(x), for j = 1:length(y), z(i,j) = x(i)^2 + 10*y(j)^2; end, end
contour(x, y, z')   % transpose z so its rows index y and its columns index x
Step size η too small: convergence takes a long time. η too large: overshoot the minimum.
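The update rule behind these remarks is gradient descent, θ_{k+1} = θ_k − η g_k with g_k = ∇E(θ_k). Below is a minimal MATLAB sketch of that iteration on the same quadratic; it is not from the slides, and the step size eta = 0.05, the starting point, and the iteration count are assumed illustrative values.

% Gradient descent on E(theta) = theta(1)^2 + 10*theta(2)^2 (illustrative sketch)
gradE = @(theta) [2*theta(1); 20*theta(2)];   % gradient of E
theta = [2; 1.5];                             % assumed starting point
eta   = 0.05;                                 % assumed fixed step size
for k = 1:100
    theta = theta - eta*gradE(theta);         % theta_{k+1} = theta_k - eta*g_k
end
disp(theta)                                   % approaches the minimum at [0; 0]

Because the curvature is 10 times larger in y than in x, a step size small enough to be stable in the y direction makes progress along x slow; this is exactly the trade-off noted above.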

6 Gradient descent is a local optimization method. [Figure: the Rastrigin function, which has many local minima]

7 Step Size Selection. How should we select the step size η_k in the gradient descent update θ_{k+1} = θ_k − η_k g_k, where g_k = ∇E(θ_k)?
η_k too small: convergence takes a long time.
η_k too large: overshoot the minimum.
Line minimization: choose η_k = argmin over η of E(θ_k − η g_k).
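A minimal MATLAB sketch of one line-minimization step on the same example (not from the slides); the search interval [0, 1] for eta is an assumed bound, and fminbnd is used simply as a convenient one-dimensional minimizer.

% One gradient descent step with line minimization (illustrative sketch)
E     = @(theta) theta(1)^2 + 10*theta(2)^2;
gradE = @(theta) [2*theta(1); 20*theta(2)];
theta = [2; 1.5];                       % assumed starting point
g     = gradE(theta);
phi   = @(eta) E(theta - eta*g);        % E along the descent direction
eta_k = fminbnd(phi, 0, 1);             % eta_k = argmin over eta of phi(eta)
theta = theta - eta_k*g;                % take the line-minimizing step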

8 Step Size Selection. Recall the general Taylor series expansion:
f(x) = f(x0) + f'(x0)(x − x0) + …
Therefore, expanding E to second order about θ_k (with gradient g_k and Hessian H_k):
E(θ_k + Δθ_k) ≈ E(θ_k) + g_k^T Δθ_k + (1/2) Δθ_k^T H_k Δθ_k
The minimum of E(θ_{k+1}) occurs when its derivative is 0, which means that:
g_k + H_k Δθ_k = 0, i.e., Δθ_k = −H_k^(−1) g_k

9 Step Size Selection. Compare the two equations:
Gradient descent: Δθ_k = −η_k g_k
Quadratic minimization: Δθ_k = −H_k^(−1) g_k
We see that replacing the scalar step size η_k with the matrix H_k^(−1) gives
θ_{k+1} = θ_k − H_k^(−1) g_k,
which is Newton's method, also called the Newton-Raphson method.
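A minimal MATLAB sketch of the Newton-Raphson iteration on the same quadratic (not from the slides; the starting point is an assumed value). For a quadratic E the Hessian is constant, so the method reaches the minimum in a single step.

% Newton-Raphson on E(theta) = theta(1)^2 + 10*theta(2)^2 (illustrative sketch)
gradE = @(theta) [2*theta(1); 20*theta(2)];
H     = [2 0; 0 20];                    % Hessian of E (constant here)
theta = [2; 1.5];                       % assumed starting point
for k = 1:5
    theta = theta - H \ gradE(theta);   % theta_{k+1} = theta_k - inv(H_k)*g_k
end
disp(theta)                             % [0; 0] after the first iteration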

10 Jang, Fig. 6.3 – The Newton-Raphson method approximates E(θ) as a quadratic function.

11 Step Size Selection. The Newton-Raphson method: θ_{k+1} = θ_k − H_k^(−1) g_k
The matrix of second derivatives H_k is called the Hessian (after Otto Hesse, 1800s).
How easy or difficult is it to calculate the Hessian? What if the Hessian is not invertible?
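One answer to the first question: if only the gradient is available, the Hessian can be approximated column by column with finite differences, at the cost of 2n extra gradient evaluations per iteration. A minimal MATLAB sketch, assuming a gradient function gradE, a point theta0, and a step h (all assumed names, not from the slides):

% Central-difference approximation of the Hessian from the gradient (sketch)
n = length(theta0);
H = zeros(n, n);
h = 1e-5;                                % assumed perturbation size
for i = 1:n
    e = zeros(n, 1);  e(i) = h;
    H(:, i) = (gradE(theta0 + e) - gradE(theta0 - e)) / (2*h);   % i-th column
end
H = (H + H') / 2;                        % symmetrize against rounding error

If this H is singular or poorly conditioned, the Newton step H \ g is unreliable; that is one motivation for the Levenberg-Marquardt modification on the next slide.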

12 Step Size Selection. The Levenberg–Marquardt algorithm:
θ_{k+1} = θ_k − (H_k + λI)^(−1) g_k
λ is a parameter selected to balance between steepest descent (λ = ∞) and Newton-Raphson (λ = 0). We can also control the step size with another parameter η:
θ_{k+1} = θ_k − η (H_k + λI)^(−1) g_k
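A minimal MATLAB sketch of one Levenberg-Marquardt step on the same quadratic (not from the slides); lambda = 0.1 and eta = 1 are assumed illustrative values.

% One Levenberg-Marquardt step (illustrative sketch)
gradE  = @(theta) [2*theta(1); 20*theta(2)];
H      = [2 0; 0 20];                        % Hessian of E
theta  = [2; 1.5];                           % assumed starting point
lambda = 0.1;                                % large -> steepest-descent-like, 0 -> Newton-Raphson
eta    = 1;                                  % optional extra step-size control
theta  = theta - eta * ((H + lambda*eye(2)) \ gradE(theta));

Adding lambda*I also makes the matrix being inverted positive definite for large enough lambda, which addresses the invertibility question raised on the previous slide.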

13 Jang, Fig. 6.5 – Illustration of Levenberg-Marquardt gradient descent.

14 Step Size Selection. Trust region methods: these are used in conjunction with the Newton-Raphson method, which approximates E as quadratic in Δθ_k:
E(θ_k + Δθ_k) ≈ q_k(Δθ_k) = E(θ_k) + g_k^T Δθ_k + (1/2) Δθ_k^T H_k Δθ_k
If we use Newton-Raphson to minimize E with a step of Δθ_k, then we are implicitly assuming that E(θ_k + Δθ_k) will be equal to the above expression.

15 Step Size Selection. In practice we expect the actual improvement in E to be somewhat smaller than the improvement predicted by the quadratic approximation:
ρ_k = [E(θ_k) − E(θ_k + Δθ_k)] / [E(θ_k) − q_k(Δθ_k)]   (actual improvement over predicted improvement)
We limit the step size Δθ_k so that |Δθ_k| < R_k.
Our "trust" in the quadratic approximation is proportional to ρ_k.

16 Step Size Selection.
ρ_k = ratio of actual to expected improvement
R_k = trust region: maximum allowable size of Δθ_k
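The slides do not show the rule for updating R_k, but a common heuristic (an assumption here, not taken from the slides) shrinks the trust region when ρ_k is small and grows it when ρ_k is close to 1; the step is accepted only if it actually reduced E. A minimal MATLAB sketch, assuming rho_k, R_k, R_max, theta, and dtheta are already defined:

% Trust region radius update (common heuristic; thresholds and factors are assumed)
if rho_k < 0.25
    R_k = 0.5 * R_k;              % quadratic model was poor: shrink the region
elseif rho_k > 0.75
    R_k = min(2 * R_k, R_max);    % model was good: grow the region, up to R_max
end
if rho_k > 0
    theta = theta + dtheta;       % accept the step only if E actually decreased
end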


