1
Supervised Learning: Mixture of Experts (MOE) Network
2
MOE module (diagram): several local experts, each producing P(y | x, Θ_j), and a gating network producing the weights a_j(x), all driven by the same input x.
3
For a given input x, the posterior probability of generating class y given x using K experts can be computed as
P(y | x, Φ) = Σ_{j=1}^{K} P(y | x, Θ_j) a_j(x).
The objective is to estimate the model parameters so as to attain the highest probability of the training set given the estimated parameters.
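One way to see why the mixture takes this form (a short derivation, under the common reading that a_j(x) is the gating posterior P(E_j | x) that expert j is responsible for input x):

P(y | x, Φ) = Σ_j P(y, E_j | x, Φ) = Σ_j P(E_j | x) P(y | x, Θ_j) = Σ_j a_j(x) P(y | x, Θ_j),

i.e., the law of total probability over which local expert generated the output.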
4
Each RBF Gaussian kernel can be viewed as a local expert. (Diagram: RBF network with a gating network and a MAXNET output stage.)
5
MOE classifier: each expert E_k outputs the class posterior P(ω_c | x, E_k), the gating network supplies P(E_k | x), and the combined posterior Σ_k P(E_k | x) P(ω_c | x, E_k) is passed to a MAXNET, which selects the winning class ω_winner.
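A minimal numerical sketch of this classifier stage (the posteriors and gating weights below are made-up values, just to show the combination and the winner-take-all step):

import numpy as np

# Sketch (assumed numbers): combine expert class posteriors with gating
# weights, then let a MAXNET (winner-take-all) pick the class.
expert_post = np.array([            # P(omega_c | x, E_k): one row per expert k
    [0.7, 0.2, 0.1],
    [0.1, 0.6, 0.3],
])
gate = np.array([0.8, 0.2])         # P(E_k | x) from the gating network

combined = gate @ expert_post       # sum_k P(E_k|x) P(omega_c|x, E_k)
winner = int(np.argmax(combined))   # MAXNET: winner-take-all over classes
print(combined, winner)             # [0.58 0.28 0.14], class 0 wins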
6
Mixture of Experts
The MOE [Jacobs91] exhibits an explicit relationship with statistical pattern classification methods as well as a close resemblance to fuzzy inference systems. Given a pattern, each expert network estimates the pattern's conditional a posteriori probability on the (adaptively tuned or pre-assigned) feature space. Each local expert network performs multi-way classification over K classes by using either K independent binomial models, each modeling only one class, or one multinomial model for all classes.
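A small sketch of the two output choices (the scores and architecture here are assumed for illustration, not taken from the slides):

import numpy as np

# Two ways a local expert can turn its raw class scores z into posteriors.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z = np.array([2.0, -1.0, 0.5])          # expert's raw scores for K = 3 classes

# (a) K independent binomial models: one sigmoid per class, each modeling
#     "class k vs. not class k"; the outputs need not sum to 1.
binomial_post = sigmoid(z)

# (b) One multinomial model for all classes: a softmax over the K scores,
#     giving a single distribution that sums to 1.
multinomial_post = np.exp(z - z.max())
multinomial_post /= multinomial_post.sum()

print(binomial_post, multinomial_post)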
7
Two Components of MOE
- local experts
- gating network
8
Local Experts
The design of modular neural networks hinges on the choice of local experts. Usually, a local expert is adaptively trained to extract a certain local feature particularly relevant to its local decision. Sometimes, a local expert can be assigned a predetermined feature space. Based on the local feature, a local expert gives its local recommendation.
9
LBF vs. RBF Local Experts
- MLP: hyperplane (linear basis function)
- RBF: kernel function (radial basis function)
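A sketch of the two kinds of hidden units behind these experts (the weights, center, and width below are illustrative values, not from the slides):

import numpy as np

x = np.array([1.0, 2.0])

# LBF / MLP unit: response depends on a projection onto a hyperplane (w.x + b).
w, b = np.array([0.5, -0.3]), 0.1
lbf_response = 1.0 / (1.0 + np.exp(-(w @ x + b)))      # sigmoid of the projection

# RBF unit: response depends on the distance to a center (a Gaussian kernel).
mu, sigma = np.array([0.0, 1.0]), 1.5
rbf_response = np.exp(-np.sum((x - mu) ** 2) / (2.0 * sigma ** 2))

print(lbf_response, rbf_response)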
10
Mixture of Experts (figure: two-class sample distribution, Class 1 vs. Class 2)
11
Mixture of Experts (figure: regions of the input space handled by Expert #1 and Expert #2)
12
Gating Network
The gating network serves the function of computing the proper weights to be used for the final weighted decision. A probabilistic rule is used to integrate recommendations from several local experts, taking into account the experts' confidence levels.
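A common choice for such a gating network is a softmax over per-expert scores; the sketch below assumes a simple linear-softmax gate with made-up parameters V:

import numpy as np

# Sketch (assumed parameterization): a softmax gating network that maps the
# input x to expert weights a_j(x) that are non-negative and sum to 1.
def gating_weights(x, V):
    """V has one row of gating parameters per expert; returns P(E_j | x)."""
    scores = V @ x                       # one score per expert
    scores -= scores.max()               # numerical stability
    w = np.exp(scores)
    return w / w.sum()

x = np.array([1.0, -0.5, 2.0])
V = np.array([[0.2, 0.1, -0.3],
              [-0.5, 0.4, 0.2],
              [0.3, -0.2, 0.1]])         # 3 experts, 3 input features
print(gating_weights(x, V))              # weights for the 3 local experts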
13
The training of the local experts as well as (the confidence levels in) the gating network of the MOE network is based on the expectation-maximization (EM) algorithm.
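A rough sketch of one way EM can be run for an MOE (this simplified version assumes Gaussian-output linear experts and an input-independent gate, which is not the exact formulation in the slides):

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1))
y = np.where(X[:, 0] > 0, 2.0 * X[:, 0], -X[:, 0]) + 0.1 * rng.normal(size=200)

K = 2
w = rng.normal(size=(K, 1))          # per-expert linear weights
gate = np.full(K, 1.0 / K)           # gating priors P(E_k) (input-independent here)
sigma2 = 1.0

for _ in range(20):
    # E-step: responsibilities h_nk proportional to P(E_k) * N(y_n | w_k.x_n, sigma2)
    mean = X @ w.T                                        # (N, K) expert predictions
    lik = np.exp(-(y[:, None] - mean) ** 2 / (2 * sigma2))
    h = gate * lik + 1e-12
    h /= h.sum(axis=1, keepdims=True)

    # M-step: responsibility-weighted least squares per expert, then update gates
    for k in range(K):
        Hk = h[:, k]
        w[k] = np.linalg.solve((X * Hk[:, None]).T @ X, (X * Hk[:, None]).T @ y)
    gate = h.mean(axis=0)

print(w.ravel(), gate)               # the two experts specialize on the two regimes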