Supervised Learning Networks
Topics:
- Linear perceptron networks
- Multi-layer perceptrons
- Mixture of experts
- Decision-based neural networks
- Hierarchical neural networks
Hierarchical Neural Network Structures
One-level: (a) multi-layer perceptrons.
Two-level: (b) linear perceptron networks; (c) decision-based neural networks; (d) mixture-of-experts networks.
Three-level: (e) experts-in-class networks; (f) classes-in-expert networks.
Hierarchical Structure of NN
1-level hierarchy: BP
2-level hierarchy: MOE, DBNN
3-level hierarchy: PDBNN
Reference: S.Y. Kung et al., "Synergistic Modeling and Applications of Hierarchical Fuzzy Neural Networks," Proceedings of the IEEE, Special Issue on Computational Intelligence, Sept. 1999.
All Classes in One Net
(Figure: multi-layer perceptron.)
Modular Structures (Two-Level)
Divide-and-conquer principle: divide the task into modules and then integrate the individual results into a collective decision.
Two typical modular networks:
(1) the mixture-of-experts (MOE), which utilizes expert-level modules;
(2) decision-based neural networks (DBNN), based on class-level modules.
Expert-Level (Rule-Level) Modules
Each expert serves two functions: (1) extracting local features and (2) making local recommendations. The rules in the gating network decide how to combine the recommendations from several local experts, weighted by their corresponding degrees of confidence.
(Figure: mixture-of-experts network.)
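As a minimal sketch of this idea (an illustrative assumption, not the exact network in the figure): two hypothetical linear experts produce local recommendations, and a softmax gating network assigns each a degree of confidence before combining them.

```python
import numpy as np

# Minimal mixture-of-experts forward pass (illustrative sketch; all sizes
# and parameter values are assumptions). Each expert is a linear model
# producing a local recommendation; a softmax gating network weights them.
rng = np.random.default_rng(0)
x = rng.normal(size=3)                        # one input pattern

n_experts = 2
W_experts = rng.normal(size=(n_experts, 3))   # one linear expert per row
W_gate = rng.normal(size=(n_experts, 3))      # gating-network parameters

expert_out = W_experts @ x                    # local recommendations

gate_logits = W_gate @ x
gate = np.exp(gate_logits - gate_logits.max())
gate /= gate.sum()                            # softmax degrees of confidence

y = gate @ expert_out                         # confidence-weighted decision
print(gate, y)
```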
Class-Level Modules
Class-level modules are natural basic partitioning units: each module specializes in distinguishing its own class from the others. In contrast to expert-level partitioning, this OCON (one class in one network) structure facilitates a global (or mutual) supervised training scheme. In global inter-class supervised learning, any dispute over a pattern region between two or more competing classes can be effectively resolved by resorting to the teacher's guidance.
Decision-Based Neural Network
Three-Level Hierarchical Structures
Apply the divide-and-conquer principle twice: once on the expert level and once on the class level. Depending on the order used, there are two kinds of hierarchical networks: one has an experts-in-class construct and the other a classes-in-expert construct.
Classes-in-Expert Network
Experts-in-Class Network
Multilayer Back-Propagation Networks
BP Multi-Layer Perceptron (MLP)
A BP multi-layer perceptron (MLP) possesses adaptive learning abilities: it can estimate sampled functions, represent these samples, encode structural knowledge, and infer input-output mappings via association. Its main strength lies in its hidden units: with a sufficiently large number of hidden units, and hence a large number of interconnections, an MLP can learn and generalize from training data. Namely, an MLP can approximate almost any function.
A 3-Layer Network
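As a concrete illustration of such a three-layer network, here is a minimal numpy forward pass; the layer sizes and random weights are assumptions for the sketch, not values from the slides.

```python
import numpy as np

def sigmoid(a):
    # Standard logistic activation used at each neuron unit
    return 1.0 / (1.0 + np.exp(-a))

# Three-layer (input - hidden - output) perceptron forward pass.
# Assumed sizes: 4 inputs, 5 hidden units, 2 outputs.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(5, 4)), np.zeros(5)
W2, b2 = rng.normal(size=(2, 5)), np.zeros(2)

x = rng.normal(size=4)
h = sigmoid(W1 @ x + b1)   # hidden-layer activations
y = sigmoid(W2 @ h + b2)   # output-layer activations
print(y)
```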
Neuron Units: Activation Function
Linear Basis Function (LBF)
RBF NN Is More Suitable for Probabilistic Pattern Classification
An MLP separates classes with hyperplanes, whereas an RBF network uses kernel functions.
The probability density function (also called the conditional density function or likelihood) of the k-th class is defined as $p(\mathbf{x} \mid C_k)$.
RBF BP Neural Network
The centers and widths of the RBF Gaussian kernels are deterministic functions of the training data.
RBF Output as Probability Function
According to Bayes' theorem, the posterior probability is
$$P(C_k \mid \mathbf{x}) = \frac{p(\mathbf{x} \mid C_k)\, P(C_k)}{p(\mathbf{x})},$$
where $P(C_k)$ is the prior probability and $p(\mathbf{x}) = \sum_k p(\mathbf{x} \mid C_k)\, P(C_k)$.
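A small numpy sketch of this computation; the Gaussian class likelihoods and the priors below are made-up stand-ins for the RBF network's class outputs.

```python
import numpy as np

# Posterior probabilities via Bayes' theorem (illustrative values).
x = np.array([0.5, 1.0])
centers = np.array([[0.0, 0.0], [1.0, 1.0]])   # one Gaussian per class
sigma = 0.8
prior = np.array([0.5, 0.5])                   # P(C_k)

# Isotropic 2-D Gaussian likelihoods p(x | C_k)
d2 = ((x - centers) ** 2).sum(axis=1)
likelihood = np.exp(-d2 / (2 * sigma**2)) / (2 * np.pi * sigma**2)

posterior = likelihood * prior
posterior /= posterior.sum()                   # divide by p(x)
print(posterior)                               # P(C_k | x), sums to one
```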
MLPs are highly non-linear in the parameter space, so gradient descent can become trapped in local minima. RBF networks avoid this problem by dividing the learning into two independent processes:
1. Use the K-means algorithm to find the centers $\mathbf{c}_i$, then determine the weights $\mathbf{w}$ using the least-squares method.
2. Fine-tune the RBF parameters by gradient descent.
Comparison of RBF and MLP
(Figure: RBF learning process — training patterns $\mathbf{x}_p$ → K-means → centers $\mathbf{c}_i$ → K-nearest-neighbor → widths $\sigma_i$ → basis functions → matrix $A$ → linear regression → weights $\mathbf{w}$.)
RBF networks implement the function
$$y(\mathbf{x}) = \sum_{i=1}^{M} w_i\, \phi\!\left(\|\mathbf{x} - \mathbf{c}_i\|;\, \sigma_i\right).$$
Since $w_i$, $\sigma_i$ and $\mathbf{c}_i$ can be determined separately, a fast learning algorithm results.
Basis function types: e.g., the Gaussian kernel, which is used throughout these slides.
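A minimal sketch of evaluating this function with Gaussian basis functions; the centers, widths, and weights below are made-up numbers for illustration.

```python
import numpy as np

# Evaluate y(x) = sum_i w_i * phi(||x - c_i||; sigma_i), Gaussian phi.
centers = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])   # c_i
widths = np.array([0.5, 0.5, 0.7])                         # sigma_i
weights = np.array([1.0, -0.5, 0.8])                       # w_i

def rbf_output(x):
    d2 = ((x - centers) ** 2).sum(axis=1)       # squared distances to c_i
    phi = np.exp(-d2 / (2 * widths**2))         # Gaussian basis functions
    return weights @ phi

print(rbf_output(np.array([0.5, 0.5])))
```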
Finding the RBF Parameters
(1) Use the K-means algorithm to find the centers $\mathbf{c}_i$.
(Figure: centers and widths found by K-means and K-NN.)
Use the K-nearest-neighbor rule to find the function width:
$$\sigma_i^2 = \frac{1}{K} \sum_{k=1}^{K} \|\mathbf{c}_k - \mathbf{c}_i\|^2,$$
where $\mathbf{c}_k$ is the $k$-th nearest neighbor of $\mathbf{c}_i$. The objective is to cover the training points so that a smooth fit of the training samples can be achieved.
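A sketch of this first stage on synthetic data: a few Lloyd's iterations of K-means for the centers, then the K-nearest-neighbor heuristic above for the widths. The data, $M$, and $K$ are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                  # training patterns (synthetic)
M, K = 4, 2

# K-means (Lloyd's algorithm) for the centers c_i
centers = X[rng.choice(len(X), M, replace=False)].copy()
for _ in range(20):
    assign = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
    for i in range(M):
        if np.any(assign == i):
            centers[i] = X[assign == i].mean(0)

# sigma_i^2 = (1/K) sum_k ||c_k - c_i||^2 over the K nearest other centers
dists = np.linalg.norm(centers[:, None] - centers[None], axis=-1)
np.fill_diagonal(dists, np.inf)                # exclude each center itself
sigmas = np.sqrt((np.sort(dists, axis=1)[:, :K] ** 2).mean(axis=1))
print(centers, sigmas)
```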
For Gaussian basis functions,
$$\phi_i(\mathbf{x}_p) = \exp\!\left(-\frac{\|\mathbf{x}_p - \mathbf{c}_i\|^2}{2\sigma_i^2}\right),$$
assuming the variances across each dimension are equal.
To write this in matrix form, let $(A)_{pi} = \phi_i(\mathbf{x}_p)$, $\mathbf{w} = (w_1, \ldots, w_M)^T$ and $\mathbf{d} = (d_1, \ldots, d_P)^T$, so that the network outputs are $\mathbf{y} = A\mathbf{w}$.
Determine the weights $\mathbf{w}$ using the least-squares method: solve $A\mathbf{w} \approx \mathbf{d}$, where $d_p$ is the desired output for pattern $p$.
Let $E$ be the total squared error between the actual output and the target output:
$$E = \sum_{p=1}^{P} \left(d_p - \sum_{i} w_i\, \phi_i(\mathbf{x}_p)\right)^2 = \|\mathbf{d} - A\mathbf{w}\|^2.$$
Note that minimizing $E$ gives $\mathbf{w} = (A^T A)^{-1} A^T \mathbf{d}$.
Problems:
(1) Susceptible to round-off error.
(2) No solution if $A^T A$ is singular.
(3) If $A^T A$ is close to singular, we get very large components in $\mathbf{w}$.
Reasons:
(1) Inaccuracy in forming $A^T A$.
(2) If $A$ is ill-conditioned, a small change in $A$ introduces a large change in $(A^T A)^{-1}$.
(3) If $A^T A$ is close to singular, dependent columns exist in $A^T A$ (e.g., two parallel straight lines in the x-y plane).
Singular matrix: two exactly parallel lines never intersect, so the system has no solution. If the lines are nearly parallel, they intersect at a point far from the origin, i.e., $|x| \to \infty$ or $|y| \to \infty$. So the magnitude of the solution becomes very large, and overflow will occur. The effect of the large components could be cancelled out only if the machine precision were infinite.
If the machine precision is finite, we get a large error: finite machine precision $\Rightarrow$ large error in $\mathbf{w}$. Solution: singular value decomposition (SVD).
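A short numpy sketch of why the SVD route helps: with nearly dependent columns in $A$, the normal equations collapse to a (numerically) singular system in finite precision, while np.linalg.lstsq solves the same problem stably via an SVD. The matrix and targets are made-up values.

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0 + 1e-10],
              [1.0, 1.0 - 1e-10]])     # columns are nearly parallel
d = np.array([1.0, 2.0, 3.0])

w_svd, *_ = np.linalg.lstsq(A, d, rcond=None)  # SVD-based least squares
print(w_svd)                           # stable minimum-norm solution

try:
    w_ne = np.linalg.solve(A.T @ A, A.T @ d)   # normal equations
    print(w_ne)                        # huge components if it "succeeds"
except np.linalg.LinAlgError:
    print("A^T A is singular to machine precision")
```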
(2) RBF learning by gradient descent. Starting from the error
$$E = \frac{1}{2} \sum_{p=1}^{P} \left(d_p - y(\mathbf{x}_p)\right)^2$$
and applying gradient descent with respect to $w_i$, $\mathbf{c}_i$ and $\sigma_i$, we have the following update equations:
$$w_i \leftarrow w_i - \eta\, \frac{\partial E}{\partial w_i}, \qquad \mathbf{c}_i \leftarrow \mathbf{c}_i - \eta\, \frac{\partial E}{\partial \mathbf{c}_i}, \qquad \sigma_i \leftarrow \sigma_i - \eta\, \frac{\partial E}{\partial \sigma_i}.$$
Elliptical Basis Function (EBF) networks:
$$\phi_i(\mathbf{x}) = \exp\!\left(-\tfrac{1}{2}\,(\mathbf{x} - \boldsymbol{\mu}_i)^T \Sigma_i^{-1} (\mathbf{x} - \boldsymbol{\mu}_i)\right),$$
where $\boldsymbol{\mu}_i$: function centers, $\Sigma_i$: covariance matrices.
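A sketch of one such kernel; $\boldsymbol{\mu}$ and $\Sigma$ below are made-up values. The full covariance matrix replaces the scalar width, so the kernel's level sets are ellipses rather than circles.

```python
import numpy as np

mu = np.array([1.0, 0.0])                      # function center
Sigma = np.array([[1.0, 0.6],
                  [0.6, 2.0]])                 # covariance matrix
Sigma_inv = np.linalg.inv(Sigma)

def ebf(x):
    r = x - mu
    return np.exp(-0.5 * r @ Sigma_inv @ r)    # Mahalanobis-distance Gaussian

print(ebf(np.array([1.5, 0.5])))
```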
EBF vs. RBF Networks
(Figure: RBFN with 4 centers; EBFN with 4 centers.)
MATLAB Assignment #3: RBF BP Network to Separate 2 Classes
(Figure: RBF BP with 4 hidden units; EBF BP with 4 hidden units; ratio = 2:1.)