Nonlinear Conjugate Gradient Method for Supervised Training of MLP

Presentation transcript:
1 Nonlinear Conjugate Gradient Method for Supervised Training of MLP
Alexandra Ratering ECE/CS/ME 539 December 14, 2001

2 Introduction
Back-Propagation Algorithm
- Can oscillate and get caught in local minima
- Slow convergence rate (zigzag path to the minimum)
- Many parameters have to be adjusted by the user: learning rate, momentum constant, …
Nonlinear Conjugate Gradient Method
- Second-order optimization approach (exploits curvature information without computing the Hessian)
- Faster convergence
- Fewer parameters to adjust
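The back-propagation update being contrasted here can be sketched as gradient descent with momentum. This is an illustrative toy, not the presentation's code: the learning rate and momentum constant below are exactly the user-chosen parameters the slide lists as a drawback, and the elongated quadratic valley stands in for the error surface on which plain BP zigzags.

```python
# Sketch of the plain back-propagation update (gradient descent with
# momentum). The quadratic toy loss is an assumption for illustration.
import numpy as np

def bp_step(w, grad_fn, velocity, lr=0.1, momentum=0.9):
    """One gradient-descent-with-momentum update on weight vector w."""
    g = grad_fn(w)
    velocity = momentum * velocity - lr * g   # blend in past step directions
    return w + velocity, velocity

# Toy quadratic loss 0.5*(w0^2 + 25*w1^2): an elongated valley where
# plain gradient descent takes a zigzag path to the minimum at the origin.
grad_fn = lambda w: np.array([w[0], 25.0 * w[1]])
w, v = np.array([5.0, 1.0]), np.zeros(2)
for _ in range(100):
    w, v = bp_step(w, grad_fn, v, lr=0.02, momentum=0.9)
print(w)   # approaches the minimum at the origin
```

Both `lr` and `momentum` must be tuned by hand; too large a learning rate diverges on the steep axis, too small crawls along the shallow one, which is the sensitivity CG's per-iteration line search removes.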

3 The Algorithm
Direction vector = conjugate gradient vector
- Linear combination of past direction vectors and the current negative gradient vector
- Reduces oscillatory behavior in the minimum search
- Reinforces weight adjustment in accordance with previous successful path directions
Learning rate
- Optimal rate determined for every iteration via line search
- Robustness of the line search is critical for the performance of the CG algorithm
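The steps above can be sketched end-to-end: each search direction combines the current negative gradient with the previous direction, and the learning rate comes from a line search rather than a fixed constant. This is a minimal illustrative sketch, not the author's Matlab code: the 1-3-1 network, the toy sine regression task, the finite-difference gradient (standing in for backprop), the Polak-Ribière beta, and the Armijo backtracking line search are all assumptions.

```python
# Sketch of nonlinear conjugate gradient (Polak-Ribiere variant) for MLP
# weight training. Network, task, gradient method, and line search are
# illustrative assumptions, not the presentation's implementation.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (40, 1))
y = np.sin(np.pi * X)                      # toy regression target

def unpack(w):
    # 1-3-1 MLP: 10 weights packed into a flat vector
    W1 = w[0:3].reshape(3, 1); b1 = w[3:6]
    W2 = w[6:9].reshape(1, 3); b2 = w[9:10]
    return W1, b1, W2, b2

def loss(w):
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1.T + b1)             # hidden layer
    out = h @ W2.T + b2                    # linear output
    return np.mean((out - y) ** 2)

def grad(w, eps=1e-6):
    # central finite differences (a stand-in for backprop gradients)
    g = np.zeros_like(w)
    for i in range(w.size):
        e = np.zeros_like(w); e[i] = eps
        g[i] = (loss(w + e) - loss(w - e)) / (2 * eps)
    return g

def line_search(w, d, g, alpha=1.0, c=1e-4, tau=0.5):
    # backtracking line search: shrink the step until the Armijo
    # sufficient-decrease condition holds (the per-iteration learning rate)
    f0, slope = loss(w), g @ d
    while loss(w + alpha * d) > f0 + c * alpha * slope:
        alpha *= tau
        if alpha < 1e-12:
            break
    return alpha

w = rng.normal(0, 0.5, 10)
g = grad(w); d = -g                        # first step: steepest descent
for k in range(200):
    alpha = line_search(w, d, g)
    w = w + alpha * d
    g_new = grad(w)
    # Polak-Ribiere beta, clipped at zero so a poor direction restarts
    # the search along the plain negative gradient
    beta = max(0.0, g_new @ (g_new - g) / max(g @ g, 1e-12))
    d = -g_new + beta * d                  # new direction: linear combination
    g = g_new

print(f"final training MSE: {loss(w):.4f}")
```

The `beta` term is what makes successive directions (approximately) conjugate, suppressing the zigzag of steepest descent; the clipping at zero is a common robustness safeguard, consistent with the slide's point that a fragile line search or direction update degrades the whole algorithm.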

4 Implementation and Results
Implemented in Matlab with an interface similar to bp
Results for the approximation problem of homework #4:

                 BP       CG
Training error   0.0021   e-4
Testing error    e-4      e-4

5 Results (II)
Results for a pattern classification problem
- Two equally sized 2D Gaussian distributions (30 samples)
- Final training result for both CG and BP: classification rate = 88.3% after 500 iterations

