Forward & Backward selection in hybrid network

Presentation on theme: "Forward & Backward selection in hybrid network"— Presentation transcript:

1 Forward & Backward selection in hybrid network

2 Introduction A training algorithm for a hybrid neural network for regression. The hybrid network's hidden layer contains RBF or projection units (perceptrons).
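
As a sketch (the slides' own notation is not preserved), the regression model is a weighted sum of m hidden units, each of which may be either an RBF or a projection unit:

```latex
% Sketch of the hybrid regression model; notation is mine, not the slides'.
f(\mathbf{x}) = w_0 + \sum_{j=1}^{m} w_j\, h_j(\mathbf{x}),
\qquad h_j \in \{\text{RBF unit},\ \text{projection unit}\}
```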

3 When is it good?

4 Hidden units RBF and projection (MLP) units.
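
The slide's own equations are images and were not preserved; the usual definitions of the two unit types are:

```latex
% Standard forms of the two hidden-unit types (assumed; the slide's exact
% equations are not preserved): c and sigma are the RBF center and width,
% w and b the projection weights and bias, g a sigmoid.
h_{\mathrm{RBF}}(\mathbf{x}) = \exp\!\left(-\frac{\lVert \mathbf{x}-\mathbf{c}\rVert^{2}}{2\sigma^{2}}\right),
\qquad
h_{\mathrm{proj}}(\mathbf{x}) = g\!\left(\mathbf{w}^{\top}\mathbf{x}+b\right),
\quad g(t) = \frac{1}{1+e^{-t}}
```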

5 Overall algorithm Divide the input space and assign units to each sub-region. Optimize the parameters. Prune unnecessary weights using the Bayesian Information Criterion (BIC).
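
A minimal runnable sketch of this three-phase procedure on a toy 1-D problem. All helper names and simplifications are mine, not the paper's: only RBF units are grown, new units are centered at the worst-residual point rather than by the paper's CART-style division, and pruning is a greedy BIC-driven removal of whole units.

```python
import numpy as np

def rbf(X, c, s):
    # Gaussian RBF activation of every row of X for a unit with center c, width s.
    return np.exp(-np.sum((X - c) ** 2, axis=1) / (2 * s ** 2))

def design(X, units):
    # Design matrix: bias column plus one column per RBF unit.
    cols = [np.ones(len(X))] + [rbf(X, c, s) for c, s in units]
    return np.column_stack(cols)

def bic_score(X, y, units):
    # Gaussian BIC of the least-squares fit with the given units.
    H = design(X, units)
    w, *_ = np.linalg.lstsq(H, y, rcond=None)
    rss = np.sum((y - H @ w) ** 2)
    n, k = len(y), H.shape[1]
    return n * np.log(rss / n) + k * np.log(n), w

def fit(X, y, max_units=15, error_goal=1e-3):
    units = []
    # Forward leg: grow units until the error goal or the unit budget is reached.
    while len(units) < max_units:
        H = design(X, units)
        w, *_ = np.linalg.lstsq(H, y, rcond=None)
        resid = y - H @ w
        if np.mean(resid ** 2) <= error_goal:
            break
        i = int(np.argmax(np.abs(resid)))        # put a new unit where the fit is worst
        units.append((X[i].copy(), 1.0))
    # Backward leg: drop any unit whose removal lowers the BIC.
    best, w = bic_score(X, y, units)
    improved = True
    while improved and units:
        improved = False
        for j in range(len(units)):
            trial = units[:j] + units[j + 1:]
            score, w_trial = bic_score(X, y, trial)
            if score < best:
                best, w, units, improved = score, w_trial, trial, True
                break
    return units, w

# Toy usage: fit a noisy sine and report how many units survive pruning.
X = np.linspace(-3, 3, 200)[:, None]
y = np.sin(2 * X[:, 0]) + 0.1 * np.random.randn(200)
units, w = fit(X, y)
print(len(units), "units kept after pruning")
```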

6 Forward leg Divide the input space into sub-regions
Select the type of hidden unit for each sub-region. Stop when the error goal or the maximum number of units is reached.

7 Input space division Like CART, choosing the split that gives the maximum reduction in the error criterion.
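
A sketch of the split search, assuming (the criterion on the slide is truncated) that, as in regression CART, the chosen split is the one giving the maximum reduction in the sum of squared errors:

```python
import numpy as np

def best_split(X, y):
    """Return (feature, threshold) whose split yields the largest SSE reduction."""
    sse = lambda t: np.sum((t - t.mean()) ** 2) if len(t) else 0.0
    parent = sse(y)
    best_j, best_thr, best_gain = None, None, 0.0
    for j in range(X.shape[1]):                      # every input dimension
        for thr in np.unique(X[:, j])[:-1]:          # every candidate threshold
            gain = parent - sse(y[X[:, j] <= thr]) - sse(y[X[:, j] > thr])
            if gain > best_gain:
                best_j, best_thr, best_gain = j, thr, gain
    return best_j, best_thr
```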

8 Unit type selection (RBF)

9 Unit type selection (projection)

10 Unit parameters RBF unit: center placed at the maximum point.
Projection unit: weight set to the normalized maximum point.
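
A small sketch of this initialization, assuming "maximum point" means the sample with the largest absolute target (or residual) value inside the sub-region; the paper's exact definition is not preserved on the slide.

```python
import numpy as np

def init_unit(X_region, y_region, unit_type):
    # Hypothetical reading of "maximum point": the sample with the largest |y|
    # (or residual) inside the sub-region.
    x_max = X_region[np.argmax(np.abs(y_region))]
    if unit_type == "rbf":
        return {"center": x_max}                      # RBF: center at the maximum point
    return {"w": x_max / np.linalg.norm(x_max)}       # projection: normalized maximum point
```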

11 ML estimate for unit type
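
One way to read this slide (its own equation is an image and is not preserved): fit each candidate unit type to the sub-region's data by maximum likelihood and keep the type with the higher likelihood, where D_r is the data in sub-region r and theta-hat_k the ML parameters of type k.

```latex
% Sketch of the ML unit-type choice for sub-region r (assumed form).
k^{*} \;=\; \arg\max_{k \,\in\, \{\mathrm{RBF},\ \mathrm{projection}\}} \;
p\!\left(D_r \,\middle|\, \hat{\theta}_k,\, k\right)
```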

12 Pruning Target function values corrupted with Gaussian noise
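
In the usual notation, the noise model stated here is:

```latex
% Gaussian noise model for the targets; 1/beta is the conventional name for
% the noise variance in the evidence framework used later.
y_i = f(\mathbf{x}_i) + \varepsilon_i, \qquad
\varepsilon_i \sim \mathcal{N}\!\left(0, \sigma^2\right) = \mathcal{N}\!\left(0, 1/\beta\right)
```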

13 BIC approximation Schwarz; Kass and Raftery.
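
The BIC approximation to the log marginal likelihood (Schwarz 1978; Kass & Raftery 1995), with k free parameters and N data points:

```latex
% BIC approximation of the model evidence.
\log p(D \mid M) \;\approx\;
\log p\!\left(D \mid \hat{\theta}_{\mathrm{ML}}, M\right) - \frac{k}{2}\log N
```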

14 Evidence for the model
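
The evidence (marginal likelihood) of a model M is its likelihood integrated over the parameters:

```latex
% Definition of the model evidence.
p(D \mid M) = \int p(D \mid \mathbf{w}, M)\, p(\mathbf{w} \mid M)\, d\mathbf{w}
```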

15 Evidence for unit type (1)

16 Evidence for unit type, cont. (2)

17 Evidence for unit type, cont. (3)
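
For reference, the standard Gaussian-approximation form of the log evidence in MacKay's framework (the slides' own intermediate steps are images and are not preserved), with E_D = (1/2) sum of squared errors, E_W = (1/2)||w||^2, A the Hessian of beta*E_D + alpha*E_W at the most probable weights, W the number of weights, and N the number of samples:

```latex
% Standard log-evidence under a Gaussian approximation (MacKay); assumed, not
% taken verbatim from the slides.
\log p(D \mid \alpha, \beta)
 \;=\; -\,\alpha E_W(\mathbf{w}_{\mathrm{MP}}) \;-\; \beta E_D(\mathbf{w}_{\mathrm{MP}})
 \;-\; \tfrac{1}{2}\log\lvert \mathbf{A}\rvert
 \;+\; \tfrac{W}{2}\log\alpha \;+\; \tfrac{N}{2}\log\beta \;-\; \tfrac{N}{2}\log(2\pi)
```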

18 Evidence for unit type: algorithm (4) Initialize alpha and beta. Loop: compute w, w0.
Recompute alpha and beta. Repeat until the change in the evidence is small.
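
A runnable sketch of this iteration, specialized to a model that is linear in its output weights (design matrix H of hidden-unit activations, with the bias absorbed into H so the slide's w0 does not appear separately). The update rules are MacKay's standard re-estimation formulas, which may differ in detail from the slides.

```python
import numpy as np

def evidence_iteration(H, y, tol=1e-6, max_iter=100):
    """Alpha/beta re-estimation for the linear-in-weights model y ~ N(H w, 1/beta)."""
    N, W = H.shape
    alpha, beta = 1.0, 1.0                       # initialize alpha and beta
    HtH, Hty = H.T @ H, H.T @ y
    log_ev_old = -np.inf
    for _ in range(max_iter):
        A = alpha * np.eye(W) + beta * HtH       # posterior precision of the weights
        w = beta * np.linalg.solve(A, Hty)       # most probable weights
        err = np.sum((y - H @ w) ** 2)
        log_ev = (W / 2) * np.log(alpha) + (N / 2) * np.log(beta) \
                 - beta / 2 * err - alpha / 2 * (w @ w) \
                 - 0.5 * np.linalg.slogdet(A)[1] - (N / 2) * np.log(2 * np.pi)
        if abs(log_ev - log_ev_old) < tol:       # stop when the evidence change is small
            break
        log_ev_old = log_ev
        lam = beta * np.linalg.eigvalsh(HtH)     # eigenvalues of beta * H^T H
        gamma = np.sum(lam / (lam + alpha))      # effective number of parameters
        alpha = gamma / (w @ w)                  # recompute alpha
        beta = (N - gamma) / err                 # recompute beta
    return w, alpha, beta, log_ev

# Toy usage with random features.
rng = np.random.default_rng(0)
H = rng.normal(size=(100, 5))
y = H @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.normal(size=100)
w, alpha, beta, log_ev = evidence_iteration(H, y)
print(alpha, beta)
```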

19 Pumadyn data set DELVE archive
Dynamics of a Puma robot arm. Target: angular acceleration of one of the links. Inputs: various joint angles, velocities, and torques. Large Gaussian noise; the data set is nonlinear. Input dimensions: 8 and 32.

20 Results pumadyn-32nh

21 Results pumadyn-8nh

22 Related work Hassibi et al. with the Optimal Brain Surgeon.
MacKay with Bayesian inference of weights and regularization parameters. HME (Jordan and Jacobs): division of the input space. Schwarz, and Kass & Raftery, with BIC.

23 Discussion Pruning removes 90% of the parameters.
Pruning reduces the variance of the estimator. The pruning algorithm is slow. PRBFN is better than an MLP or an RBF network alone. Disadvantage of the Bayesian techniques: the prior distribution parameters. The Bayesian techniques perform better than the likelihood ratio test (LRT). Unit type selection is a crucial element of PRBFN. The curse of dimensionality is clearly visible on the pumadyn data sets.

