Slide 1: Chapter 9. Perceptrons and Their Generalizations
Slide 2: Outline
- Rosenblatt's perceptron
- Proofs of the theorem
- Method of stochastic approximation and sigmoid approximation of indicator functions
- Method of potential functions and radial basis functions
- Three theorems of optimization theory
- Neural networks
Slide 3: Perceptrons (Rosenblatt, 1950s)
Slide 4: Recurrent Procedure
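The update formulas on this slide did not survive transcription. As a reminder of the recurrent procedure, here is a minimal sketch of the mistake-driven correction w <- w + y_i x_i; the function name, epoch limit, and stopping rule are illustrative, not taken from the slides.

```python
import numpy as np

def perceptron(X, y, n_epochs=100):
    """Rosenblatt's recurrent procedure: correct the weight vector only when
    an example is misclassified, w <- w + y_i * x_i. Labels must be +1 or -1."""
    w = np.zeros(X.shape[1])
    for _ in range(n_epochs):
        mistakes = 0
        for x_i, y_i in zip(X, y):
            if y_i * np.dot(w, x_i) <= 0:   # on the wrong side of the hyperplane
                w += y_i * x_i              # correction step
                mistakes += 1
        if mistakes == 0:                   # no corrections: separable data learned
            break
    return w
```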
Slide 21: Proofs of the theorems
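The proofs themselves are on slides that did not transcribe. Presumably the result being proved is the Novikoff-type bound on the number of corrections made by the recurrent procedure on separable data; it is stated below as an assumption about the missing material.

```latex
% Assume |x_i| <= D for all training vectors and that some unit vector w*
% separates the data with margin rho > 0, i.e. y_i (w* . x_i) >= rho.
% Then the number of corrections M made by the recurrent procedure satisfies
\[
  M \le \left( \frac{D}{\rho} \right)^{2} .
\]
```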
Slide 22: Method of stochastic approximation and sigmoid approximation of indicator functions
Slide 24: Method of Stochastic Approximation
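The slide's formulas are missing; the standard form of the stochastic approximation procedure, with the Robbins-Monro conditions on the step sizes, is given below as a reconstruction rather than a transcription.

```latex
% Stochastic approximation update of the parameter vector w, using a single
% observation z_t per step, with step sizes gamma_t:
\[
  w_{t+1} = w_t - \gamma_t \,\nabla_w Q(z_t, w_t),
  \qquad
  \sum_{t=1}^{\infty} \gamma_t = \infty,
  \qquad
  \sum_{t=1}^{\infty} \gamma_t^{2} < \infty .
\]
```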
Slide 28: Sigmoid Approximation of Indicator Functions
Slide 29: Basic Frame for the Learning Process
- Use the sigmoid approximation at the stage of estimating the coefficients.
- Use the indicator functions at the stage of recognition.
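A minimal sketch of this two-stage scheme, assuming a logistic sigmoid and full-batch gradient steps (neither choice is specified on the slides; all names are illustrative):

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def fit_sigmoid(X, y, lr=0.1, n_epochs=200):
    """Stage 1: estimate the coefficients with the smooth sigmoid approximation
    of the indicator (labels y in {0, 1}), one full-batch gradient step on the
    logistic loss per epoch."""
    w = np.zeros(X.shape[1])
    for _ in range(n_epochs):
        p = sigmoid(X @ w)
        w -= lr * X.T @ (p - y) / len(y)
    return w

def recognize(X, w):
    """Stage 2: at recognition time, switch back to the indicator function."""
    return (X @ w > 0).astype(int)
```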
Slide 30: Method of Potential Functions and Radial Basis Functions
Slide 31:
- Potential function method: on-line; each step uses only one element of the training data.
- RBFs (mid-1980s): off-line.
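A sketch of the on-line method of potential functions in the kernel-perceptron style; the Gaussian potential, the mistake-driven update, and all names here are illustrative assumptions, not taken from the slides.

```python
import numpy as np

def potential(x, x_i, width=1.0):
    # One common choice of potential function: a Gaussian of the distance.
    return np.exp(-np.sum((x - x_i) ** 2) / (2 * width ** 2))

def online_potential_method(X, y, n_epochs=10):
    """On-line procedure: each correction is driven by a single training
    element. The decision function is f(x) = sum_i c_i * K(x, x_i)."""
    c = np.zeros(len(X))
    for _ in range(n_epochs):
        for t, (x_t, y_t) in enumerate(zip(X, y)):
            f = sum(c_i * potential(x_t, x_i)
                    for c_i, x_i in zip(c, X) if c_i != 0.0)
            if y_t * f <= 0:      # mistake on the current element: adjust its weight
                c[t] += y_t
    return c
```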
Slide 32: Method of potential functions in asymptotic learning theory
- Separable condition: deterministic setting of the pattern recognition (PR) problem.
- Non-separable condition: stochastic setting of the PR problem.
Slide 33: Deterministic Setting
Slide 34: Stochastic Setting
Slide 35: RBF Method
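The RBF formulas are on slides that did not transcribe. Below is a minimal off-line sketch that places one Gaussian basis function on every training point and fits the coefficients of f(x) = sum_i c_i * phi(|x - x_i|) by least squares; both choices, and all names, are assumptions.

```python
import numpy as np

def rbf_fit(X, y, width=1.0):
    """Off-line RBF fit: Gaussian basis function on each training point,
    coefficients obtained by least squares."""
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    Phi = np.exp(-sq_dists / (2 * width ** 2))        # n x n design matrix
    coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return coef

def rbf_predict(X_new, X_train, coef, width=1.0):
    """Evaluate f(x) = sum_i coef_i * exp(-|x - x_i|^2 / (2 width^2))."""
    sq_dists = np.sum((X_new[:, None, :] - X_train[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq_dists / (2 * width ** 2)) @ coef
```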
Slide 37: Three Theorems of Optimization Theory
- Fermat's theorem (1629): optimization over the entire space, without constraints.
- Lagrange multipliers rule (1788): the conditional optimization problem.
- Kuhn-Tucker theorem (1951): convex optimization.
Slide 42: To find the stationary points of a function, it is necessary to solve a system of n equations with n unknown values.
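In symbols, for a differentiable function f of n variables, Fermat's stationarity condition reads:

```latex
\[
  \frac{\partial f(x_1, \dots, x_n)}{\partial x_k} = 0,
  \qquad k = 1, \dots, n ,
\]
```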
Slide 43: Lagrange Multiplier Rule (1788)
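The statement of the rule did not transcribe. Its standard form for minimizing f(x) subject to equality constraints g_j(x) = 0, j = 1, ..., m (a reconstruction, not a transcription): at a conditional extremum there exist multipliers, not all zero, making the Lagrangian stationary.

```latex
\[
  L(x, \lambda_0, \lambda_1, \dots, \lambda_m)
    = \lambda_0 f(x) + \sum_{j=1}^{m} \lambda_j g_j(x),
  \qquad
  \frac{\partial L}{\partial x_k} = 0, \quad k = 1, \dots, n .
\]
```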
Slide 51: Kuhn-Tucker Theorem (1951)
Convex optimization: minimize a certain type of (convex) objective function under certain (convex) constraints of inequality type.
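For the differentiable convex problem of minimizing f(x) subject to g_j(x) <= 0, the Kuhn-Tucker conditions characterizing a solution x* (under a suitable regularity condition) are, with multipliers alpha_j:

```latex
\[
  \nabla f(x^{*}) + \sum_{j=1}^{m} \alpha_j \nabla g_j(x^{*}) = 0,
  \qquad
  \alpha_j \, g_j(x^{*}) = 0, \quad
  \alpha_j \ge 0, \quad
  g_j(x^{*}) \le 0, \qquad j = 1, \dots, m .
\]
```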
Slide 59: Remark
Slide 60: Neural Networks
A learning machine that nonlinearly maps the input vector x into a feature space U and constructs a linear function in this space.
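Written out, such a machine implements a function of the form below; the notation is a reconstruction of the missing formula, not a transcription.

```latex
\[
  f(x) = \operatorname{sign}\!\Big( \sum_{i=1}^{N} w_i \, u_i(x) \Big),
\]
% where u(x) = (u_1(x), ..., u_N(x)) is the nonlinear map of the input x
% into the feature space U, and the w_i are the coefficients of the linear
% function constructed in that space.
```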
Slide 61: Neural Networks
- The back-propagation method
- The BP algorithm
- Neural networks for the regression estimation problem
- Remarks on the BP method
Slide 62: The Back-Propagation Method
Slide 63: The BP Algorithm
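A compact sketch of back-propagation for a network with one sigmoidal hidden layer and a linear output unit; the architecture, squared-error loss, learning rate, and all names are illustrative assumptions rather than the slides' own formulation.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def backprop_train(X, y, n_hidden=5, lr=0.1, n_epochs=1000, seed=0):
    """One-hidden-layer network trained by gradient descent on the empirical
    squared error; the gradients are obtained by propagating the output error
    backwards through the layers (the chain rule)."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.1, size=(X.shape[1], n_hidden))  # input -> hidden
    w2 = rng.normal(scale=0.1, size=n_hidden)                # hidden -> output
    for _ in range(n_epochs):
        # Forward pass.
        H = sigmoid(X @ W1)          # hidden activations
        out = H @ w2                 # linear output unit
        err = out - y                # residual of the squared loss
        # Backward pass: propagate the error through the layers.
        grad_w2 = H.T @ err / len(y)
        delta_hidden = np.outer(err, w2) * H * (1 - H)
        grad_W1 = X.T @ delta_hidden / len(y)
        w2 -= lr * grad_w2
        W1 -= lr * grad_W1
    return W1, w2
```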
Slide 66: For the Regression Estimation Problem
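For the regression estimation problem the network is trained to minimize the empirical risk functional below (written from the standard setup, since the slide's formula is missing); f(x, w) denotes the function implemented by the network with weights w.

```latex
\[
  R_{\mathrm{emp}}(w) = \frac{1}{\ell} \sum_{i=1}^{\ell}
      \bigl( y_i - f(x_i, w) \bigr)^{2} .
\]
```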
Slide 67: Remark
- The empirical risk functional has many local minima.
- The convergence of the gradient-based method is rather slow.
- The sigmoid function has a scaling factor that affects the quality of the approximation.
Slide 68: Neural networks are not well-controlled learning machines; in many practical applications, however, they demonstrate good results.