1 Ensembles of Low Variance Architecture. Nathan Intrator, Tel-Aviv & Brown University. www.physics.brown.edu/users/faculty/Intrator. Joint work with Shimon Cohen.

6 A hybrid of projection-based and radial basis function models

7 Data decomposition. A function can be decomposed into mutually exclusive parts: one radial and one projection-based. Sequential methods, which first find the radial part and then the projection part, get stuck in non-optimal local minima.
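
To make the decomposition concrete, here is a minimal NumPy sketch of a model whose output is the sum of a radial (RBF) part and a projection-based (ridge) part. The Gaussian and tanh unit shapes, and all parameter names (rbf_centers, ridge_dirs, etc.), are illustrative assumptions, not the slides' exact formulation.

```python
import numpy as np

def hybrid_predict(x, rbf_centers, rbf_radii, rbf_w,
                   ridge_dirs, ridge_b, ridge_w, bias):
    """Evaluate a hybrid model: a sum of radial (RBF) and projection (ridge) parts."""
    # Radial part: Gaussian bumps around local centers.
    d2 = ((x - rbf_centers) ** 2).sum(axis=1)        # squared distances to centers
    radial = rbf_w @ np.exp(-d2 / (2.0 * rbf_radii ** 2))
    # Projection part: sigmoidal ridge functions of one-dimensional projections.
    proj = ridge_dirs @ x + ridge_b                  # a_j . x + b_j for each unit
    ridge = ridge_w @ np.tanh(proj)
    return bias + radial + ridge

# Tiny usage example with random parameters (illustration only).
rng = np.random.default_rng(0)
x = rng.normal(size=5)
y = hybrid_predict(x,
                   rbf_centers=rng.normal(size=(3, 5)), rbf_radii=np.ones(3),
                   rbf_w=rng.normal(size=3),
                   ridge_dirs=rng.normal(size=(2, 5)), ridge_b=np.zeros(2),
                   ridge_w=rng.normal(size=2), bias=0.0)
```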

8 Local models. RBFs are local models: fast learners, but not so good in high-dimensional spaces.

9 Global models. MLPs are global models that respond to all of the data: slow learners, but insensitive to the curse of dimensionality.

10 Architecture (diagram): inputs x1…xd and a bias feed a hidden layer of ridge units Ridge1…RidgeM1 and RBF units RBF1…RBFM2; with a second bias, these drive the outputs Y1…Yc.

11 PRBFN algorithm stages (diagram): clustering; ridge/RBF unit selection; solving the weights; conjugate-gradient optimization of the TSS.

12 Clustering methods: k-means, tree-based clustering, EM.
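
A brief sketch of two of the named clustering options, assuming scikit-learn (the slides do not name a library); the cluster centers found here would seed the RBF units.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))          # placeholder training inputs

# k-means: hard assignments; cluster centers become candidate RBF centers.
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
centers_kmeans = km.cluster_centers_

# EM on a Gaussian mixture: soft assignments, with means and covariances per cluster.
gm = GaussianMixture(n_components=4, covariance_type='full', random_state=0).fit(X)
centers_em, covariances = gm.means_, gm.covariances_
```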

13 Unit selection criteria: the eigenvalue spectrum of each cluster's covariance, the density of the clusters, and the per-cluster error.
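
One plausible reading of the covariance-spectrum criterion, as a hedged sketch: compare the largest and smallest eigenvalues of a cluster's covariance, assigning strongly anisotropic clusters to ridge units and near-isotropic ones to RBF units. The threshold and decision rule are assumptions, not taken from the slides.

```python
import numpy as np

def choose_unit_type(cluster_points, ratio_threshold=10.0):
    """Heuristic: inspect the eigenvalue spectrum of a cluster's covariance.
    A roughly isotropic cluster suggests a radial (RBF) unit; a cluster spread
    mainly along one direction suggests a projection (ridge) unit."""
    cov = np.cov(cluster_points, rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]   # descending eigenvalues
    if eigvals[0] > ratio_threshold * max(eigvals[-1], 1e-12):
        return 'ridge'    # strongly anisotropic: data varies mainly along one axis
    return 'rbf'          # near-isotropic: a local radial bump fits well

rng = np.random.default_rng(0)
round_cluster = rng.normal(size=(100, 3))
flat_cluster = rng.normal(size=(100, 3)) * np.array([10.0, 0.5, 0.5])
print(choose_unit_type(round_cluster), choose_unit_type(flat_cluster))  # rbf ridge
```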

14 Weight estimation: use the cluster centers and radii for the RBF units, the centers for the projection units, and the pseudo-inverse for the second-layer weights.
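
A minimal sketch of the pseudo-inverse step, assuming Gaussian RBF activations and NumPy; the centers and radii here are placeholders. The hidden-layer design matrix is fixed, so the second-layer weights reduce to a linear least-squares problem.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                     # inputs
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)  # targets

# Hidden-layer design matrix: RBF activations from assumed centers/radii,
# plus a bias column (ridge units would add further columns the same way).
centers = X[rng.choice(200, size=10, replace=False)]
radii = np.full(10, 1.0)
d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
Phi = np.hstack([np.exp(-d2 / (2.0 * radii ** 2)), np.ones((200, 1))])

# Second-layer weights by pseudo-inverse (linear least squares).
w = np.linalg.pinv(Phi) @ y
y_hat = Phi @ w
```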

15 Optimization: conjugate gradient, while ensuring the radii do not become too small.
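
A hedged sketch of this step using SciPy's conjugate-gradient minimizer. The radii are kept above an assumed floor r_min via the reparameterization radius = r_min + exp(rho), which is one way to keep them from collapsing, not necessarily the authors' mechanism; the objective, centers, and parameter layout are all placeholders.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = np.exp(-(X ** 2).sum(axis=1))   # target: a radial bump at the origin

centers = X[rng.choice(100, size=5, replace=False)]
r_min = 0.05                        # assumed floor keeping radii from collapsing

def tss(params):
    """Total sum of squares for an RBF fit; radii are r_min + exp(rho) > r_min."""
    rho, w = params[:5], params[5:]
    radii = r_min + np.exp(rho)
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    y_hat = np.exp(-d2 / (2.0 * radii ** 2)) @ w
    return ((y - y_hat) ** 2).sum()

res = minimize(tss, x0=np.zeros(10), method='CG')  # gradients by finite differences
radii_opt = r_min + np.exp(res.x[:5])
```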

16 EM Algorithm. An algorithm for finding maximum likelihood estimates of parameters in probabilistic models where the model depends on unobserved (latent) variables. EM alternates between an expectation (E) step, which computes the expected complete-data log-likelihood under the current posterior over the latent variables, and a maximization (M) step, which re-estimates the parameters by maximizing this expectation.

17 EM Algorithm. Let the observed variables be known as y and the latent variables as z; together, y and z form the complete data. Assume that p is a joint model of the complete data with parameters θ. An EM algorithm then iteratively improves an initial estimate θ0, constructing new estimates θ1 through θN.
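
In this notation, the two steps at iteration t are the standard EM updates:

```latex
% E-step: expected complete-data log-likelihood under the current estimate \theta_t
Q(\theta \mid \theta_t) = \mathbb{E}_{z \mid y,\, \theta_t}\left[ \log p(y, z \mid \theta) \right]

% M-step: maximize over the parameters to obtain the next estimate
\theta_{t+1} = \operatorname*{arg\,max}_{\theta} \; Q(\theta \mid \theta_t)
```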
