Performance Optimization


1 Performance Optimization
Steepest Descent

2 Objective To learn algorithms for optimizing a performance index F(x), i.e., to find the value of x that minimizes F(x).

3 Basic Optimization Algorithm
xk+1 = xk + αk pk, where pk is the search direction and αk is the learning rate.

4 Steepest Descent
Choose the next step so that the function decreases: F(xk+1) < F(xk).

5 Steepest Descent
For small changes in x we can approximate F(x) with a first-order expansion: F(xk+1) = F(xk + Δxk) ≈ F(xk) + gkᵀΔxk, where gk ≡ ∇F(x) evaluated at x = xk.

6 Steepest Descent
If we want the function to decrease, the first-order term must be negative: gkᵀΔxk = αk gkᵀpk < 0.

7 Steepest Descent
If we want the function to decrease, we can maximize the decrease by choosing the search direction opposite to the gradient: pk = −gk.

8 Steepest Descent
We can maximize the decrease by choosing pk = −gk, giving the update xk+1 = xk − αk gk. Two general methods to select αk: minimize F(x) with respect to αk along the search direction, or use a predetermined value (e.g., αk = 0.2 or αk = 1/k).
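For illustration (not part of the slides), here is a minimal Python sketch of steepest descent with a predetermined constant learning rate. The quadratic function F(x) = x1² + 25x2², the starting point, and the learning rate are all assumptions chosen for the demo.

```python
import numpy as np

# Hypothetical quadratic performance index F(x) = x1^2 + 25*x2^2
# (the function, starting point, and learning rate are assumptions for the demo).
def F(x):
    return x[0] ** 2 + 25.0 * x[1] ** 2

def grad_F(x):
    return np.array([2.0 * x[0], 50.0 * x[1]])

x = np.array([0.5, 0.5])   # assumed starting point
alpha = 0.01               # predetermined constant learning rate

for k in range(200):
    g = grad_F(x)          # g_k = gradient of F at x_k
    x = x - alpha * g      # x_{k+1} = x_k - alpha * g_k   (p_k = -g_k)

print(x, F(x))             # x approaches the minimum at [0, 0]
```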

9 Example

10 Plot

11 Stable Learning Rates
Suppose that the performance index is a quadratic function: F(x) = ½ xᵀAx + dᵀx + c, with gradient ∇F(x) = Ax + d. The steepest descent algorithm with constant learning rate α is then xk+1 = xk − αgk = xk − α(Axk + d) = [I − αA]xk − αd. This linear dynamic system will be stable if the eigenvalues of the matrix [I − αA] are less than one in magnitude.

12 Stable Learning Rates (Quadratic)
Stability is determined by the eigenvalues of this matrix. If λi is an eigenvalue of A with eigenvector zi, then [I − αA]zi = zi − αλi zi = (1 − αλi)zi, so the eigenvalues of [I − αA] are (1 − αλi).

13 Stable Learning Rates
Let {λ1, λ2, …, λn} and {z1, z2, …, zn} be the eigenvalues and eigenvectors of the Hessian matrix. The condition for the stability of the steepest descent algorithm is then |1 − αλi| < 1. Assume that the quadratic function has a strong minimum point; then its eigenvalues must be positive numbers, hence α < 2/λi. This must be true for all eigenvalues, so α < 2/λmax.

14 Example

15 CHAPTER 10 Widrow-Hoff Learning

16 Objectives Widrow-Hoff learning is an approximate steepest descent algorithm, in which the performance index is mean square error. It is widely used today in many signal processing applications. It is a precursor to the backpropagation algorithm for multilayer networks.

17 ADALINE Network The ADALINE (Adaptive Linear Neuron) network and its learning rule, the LMS (Least Mean Square) algorithm, were proposed by Widrow and Marcian Hoff in 1960. Both the ADALINE network and the perceptron suffer from the same inherent limitation: they can only solve linearly separable problems. The LMS algorithm minimizes the mean square error (MSE), and therefore tries to move the decision boundaries as far from the training patterns as possible.

18 ADALINE Network
n = Wp + b, a = purelin(Wp + b). (Network diagram: R-element input p, S-neuron linear layer with S×R weight matrix W and bias b; compare with the single-layer perceptron.)

19 Single ADALINE Set n = 0, then Wp + b = 0 specifies a decision boundary. The ADALINE can be used to classify objects into two categories if they are linearly separable.
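A minimal sketch of how a single ADALINE classifies with the boundary Wp + b = 0; the weights, bias, and test points below are assumptions, not values from the slides.

```python
import numpy as np

# Assumed weights and bias for a two-input single ADALINE (illustrative values only)
W = np.array([1.0, 1.0])
b = -1.0

def adaline(p):
    # Linear (purelin) output: a = n = W p + b
    return W @ p + b

def classify(p):
    # The decision boundary is the line W p + b = 0
    return 1 if adaline(p) >= 0 else -1

print(classify(np.array([1.0, 1.0])))   #  1 (on the positive side of x1 + x2 = 1)
print(classify(np.array([0.0, 0.0])))   # -1 (on the negative side)
```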

20 Mean Square Error The LMS algorithm is an example of supervised training. The LMS algorithm will adjust the weights and biases of the ADALINE in order to minimize the mean square error, where the error is the difference between the target output (tq) and the network output (aq). MSE: F(x) = E[e²] = E[(t − a)²], where E[·] denotes the expected value.

21 Mean Square Error

22 Example 1

23 Solved Problem P10.3 The contours of the performance surface will be circular, centered at the minimum point.

24 Approximate Steepest Descent

25 Approximate Gradient

26 Approximate Gradient (cont.)

27 Approximate Gradient (cont.)

28 LMS Algorithm The steepest descent algorithm with constant learning rate α is xk+1 = xk − α∇F(x) evaluated at x = xk. Using the approximate (single-sample) gradient, the matrix notation of the LMS algorithm is W(k+1) = W(k) + 2α e(k) pᵀ(k), b(k+1) = b(k) + 2α e(k). The LMS algorithm is also referred to as the delta rule or the Widrow-Hoff learning algorithm.
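A minimal sketch of one LMS update in this matrix form, assuming a single-output ADALINE; the pattern, target, and learning rate in the example call are made up.

```python
import numpy as np

def lms_step(W, b, p, t, alpha):
    """One LMS (Widrow-Hoff) update for a single-output linear (purelin) network."""
    a = W @ p + b                  # network output
    e = t - a                      # error e(k) = t(k) - a(k)
    W = W + 2 * alpha * e * p      # W(k+1) = W(k) + 2*alpha*e(k)*p(k)^T
    b = b + 2 * alpha * e          # b(k+1) = b(k) + 2*alpha*e(k)
    return W, b

# Illustrative call (the pattern, target, and learning rate are assumptions)
W, b = lms_step(np.zeros(3), 0.0, np.array([1.0, -1.0, -1.0]), -1.0, alpha=0.2)
print(W, b)                        # [-0.4  0.4  0.4] -0.4
```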

29 Quadratic Functions General form of a quadratic function:
F(x) = c + dᵀx + ½ xᵀAx (A: Hessian matrix). If the eigenvalues of the Hessian matrix are all positive, then the quadratic function will have one unique global minimum. ADALINE network mean square error: F(x) = c − 2xᵀh + xᵀRx, where h = E[t z] and R = E[z zᵀ], so the Hessian is 2R.
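As a hedged illustration of this quadratic form, the sketch below estimates R = E[zzᵀ] and h = E[tz] from a made-up data set and solves for the minimizing weights x* = R⁻¹h (setting the gradient −2h + 2Rx to zero).

```python
import numpy as np

# Made-up training data: rows of Z are input vectors z_q, t holds the targets t_q.
Z = np.array([[ 1.0, -1.0],
              [ 1.0,  1.0],
              [-1.0,  1.0],
              [-1.0, -1.0]])
t = np.array([1.0, -1.0, -1.0, 1.0])

R = (Z.T @ Z) / len(t)           # sample estimate of R = E[z z^T]
h = (Z.T @ t) / len(t)           # sample estimate of h = E[t z]

x_star = np.linalg.solve(R, h)   # gradient of F is -2h + 2Rx, so the minimum is x* = R^-1 h
print(x_star)                    # [ 0. -1.]
```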

30 Orange/Apple Example
In practical applications it might not be practical to calculate R (and hence the maximum stable learning rate α), and α could instead be selected by trial and error.

31 Orange/Apple Example
Start, arbitrarily, with all the weights set to zero, and then apply the inputs p1, p2, p1, p2, etc., in that order, calculating the new weights after each input is presented.
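A sketch of this training sequence, assuming orange/apple prototype patterns p1 = [1, −1, −1]ᵀ (target −1) and p2 = [1, 1, −1]ᵀ (target +1), no bias, and a learning rate of 0.2; these specific numbers are assumptions, not taken from the slide.

```python
import numpy as np

# Assumed prototype patterns and targets (orange -> -1, apple -> +1);
# the specific vectors and the learning rate are assumptions for the demo.
patterns = [np.array([1.0, -1.0, -1.0]), np.array([1.0, 1.0, -1.0])]
targets  = [-1.0, 1.0]

W = np.zeros(3)          # start with all the weights set to zero (no bias used here)
alpha = 0.2

# Apply p1, p2, p1, p2, ..., updating the weights after each presentation.
for k in range(20):
    p, t = patterns[k % 2], targets[k % 2]
    e = t - W @ p                # error for the current pattern
    W = W + 2 * alpha * e * p    # LMS weight update

print(W)                         # converges toward the minimum-MSE weights
```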

32 Orange/Apple Example
This decision boundary falls halfway between the two reference patterns. The perceptron rule did NOT produce such a boundary: the perceptron rule stops as soon as the patterns are correctly classified, even though some patterns may lie close to the boundary. The LMS algorithm minimizes the mean square error.

33 Perceptron rule vs. LMS algorithm

34 Perceptron rule vs. LMS algorithm (cont.)

35 Perceptron rule vs. LMS algorithm (cont.)

36 Perceptron rule vs. LMS algorithm (cont.)

37 Solved Problem P10.4 Train the network using the LMS algorithm, with the initial guess set to zero and a learning rate α = 0.25.

38 Solved Problem P10.8 Train the network using the LMS algorithm, with the initial guess set to zero and a learning rate α = 0.04.

39 Tapped Delay Line At the output of the tapped delay line we have an R-dimensional vector, consisting of the input signal at the current time and at delays from 1 to R−1 time steps.
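A small sketch of building that R-dimensional delayed-input vector from a scalar signal; the helper name and the sample sequence are made up for illustration.

```python
import numpy as np

def tapped_delay_line(y, k, R):
    """R-dimensional vector [y(k), y(k-1), ..., y(k-R+1)]; zeros before the signal starts."""
    return np.array([y[k - i] if k - i >= 0 else 0.0 for i in range(R)])

# Assumed input sequence (illustrative values only)
y = [1.0, 2.0, 0.0, -1.0, 3.0]
print(tapped_delay_line(y, k=3, R=3))   # [-1.  0.  2.]
```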

40 Adaptive Filter

41 Solved Problem P10.1 Just prior to k = 0 (k < 0):
Three zeros have entered the filter, i.e., y(−1) = y(−2) = y(−3) = 0, so the output just prior to k = 0 is zero. k = 0:

42 Solved Problem P10.1 k = 1: k = 2: k = 3: k = 4:

43 Solved Problem P10.1 The effect of y(0) lasts from k = 0 through k = 2, so it will have an influence for three time intervals. This corresponds to the length of the impulse response of this filter.

44 Solved Problem P10.6 Application of ADALINE: adaptive predictor
The purpose of this filter is to predict the next value of the input signal from the two previous values. Suppose that the input signal is a stationary random process with a given autocorrelation function.
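A hedged sketch of such an adaptive predictor: a two-tap delay line feeding an ADALINE trained with LMS to predict y(k) from y(k−1) and y(k−2). The generated signal and the learning rate are assumptions, since the problem specifies the input only through its autocorrelation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed stationary input: a simple second-order autoregressive signal
# (the coefficients and noise are made up for the demo).
y = np.zeros(500)
for k in range(2, 500):
    y[k] = 0.5 * y[k - 1] - 0.3 * y[k - 2] + rng.standard_normal()

w = np.zeros(2)          # predictor weights for [y(k-1), y(k-2)]
alpha = 0.01             # assumed small learning rate

for k in range(2, 500):
    z = np.array([y[k - 1], y[k - 2]])   # two-tap delay line output
    a = w @ z                            # predicted value of y(k)
    e = y[k] - a                         # prediction error
    w = w + 2 * alpha * e * z            # LMS update

print(w)                 # weights adapt toward the best linear predictor
```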

45 Solved Problem P10.6 i. Sketch the contour plot of the performance index (MSE).

46 Solved Problem P10.6 Performance index (MSE): the optimal weights are x* = R⁻¹h.
The Hessian matrix is 2R, with eigenvalues λ1 = 4 and λ2 = 8. The contours of F(x) will be elliptical, with the long axis of each ellipse along the 1st eigenvector, since the 1st eigenvalue has the smallest magnitude. The ellipses will be centered at x*.

47 Solved Problem P10.6 ii. The maximum stable value of the learning rate for the LMS algorithm: α < 2/λmax = 2/8 = 0.25. iii. The LMS algorithm is approximate steepest descent, so the trajectory for small learning rates will move perpendicular to the contour lines.

48 Applications Noise cancellation system to remove 60-Hz noise from an EEG signal (Fig. 10.6); echo cancellation system in long-distance telephone lines (Fig ); filtering engine noise from a pilot's voice signal (Fig. P10.8).

