Figure 1: Parameter Space (labels in the figure: A, B, C; Small Model, Middle Model, Large Model).
The set of parameters of a small model is an analytic set with singularities. The rank of the Fisher information matrix depends on the point in parameter space.
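As a minimal illustration of a parameter-dependent Fisher information rank, the NumPy sketch below uses an assumed toy model y = a·tanh(b·x) + Gaussian noise (this model and the helper name fisher_information are illustrative choices, not taken from the slides). It approximates the Fisher information matrix by averaging the outer product of the gradient over sample inputs and prints its rank: at singular parameters such as a = 0, b = 0, or (a, b) = (0, 0) the rank drops below the number of parameters.

import numpy as np

def fisher_information(a, b, xs, sigma=1.0):
    # Gaussian regression model y = f(x; a, b) + N(0, sigma^2) with
    # f(x; a, b) = a * tanh(b * x).  The Fisher information matrix is
    # (1/sigma^2) * E_x[ grad f(x; w) grad f(x; w)^T ], approximated here
    # by an average over the sample points xs.
    grads = np.stack([
        np.tanh(b * xs),                 # df/da
        a * xs / np.cosh(b * xs) ** 2,   # df/db
    ])
    return grads @ grads.T / (len(xs) * sigma ** 2)

xs = np.linspace(-3.0, 3.0, 1001)
for a, b in [(1.0, 1.0), (0.0, 1.0), (1.0, 0.0), (0.0, 0.0)]:
    fim = fisher_information(a, b, xs)
    print(f"(a, b) = ({a}, {b})  rank = {np.linalg.matrix_rank(fim, tol=1e-8)}")

Running this prints rank 2 at the regular point (1, 1) and rank 1 or 0 at the singular points, which is the phenomenon described in Figure 1.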
Figure 2: Resolution of Singularities (labels in the figure: the resolution map g(u) from a real manifold U to the parameter space W; H(w) denotes the Kullback information).
Hironaka's theorem ensures that we can algorithmically find a resolution map which turns the Kullback information into a direct product of local coordinates:
H(g(u)) = a(u) u_1^{k_1} u_2^{k_2} ... u_d^{k_d}.
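As a worked example (a toy Kullback-type function chosen for illustration; it does not come from the slides), take H(a, b) = a^2 (a^2 + b^2), which vanishes on the line a = 0 and is degenerate at the origin. A single blow-up of the origin already produces the normal-crossing form above. In the chart g(u_1, u_2) = (u_1, u_1 u_2),
\[
H(g(u)) = u_1^2\bigl(u_1^2 + u_1^2 u_2^2\bigr) = (1 + u_2^2)\, u_1^4,
\]
which is of the form a(u)\, u_1^{k_1} u_2^{k_2} with a(u) = 1 + u_2^2 > 0, k_1 = 4, k_2 = 0. In the other chart, g(u_1, u_2) = (u_1 u_2, u_2), one obtains
\[
H(g(u)) = (1 + u_1^2)\, u_1^2 u_2^4 .
\]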
Figure 3: Bias and Variance (labels in the figure: parameter points A, B, C and the true distribution).
The variance of a singular point is smaller than that of a regular point. If the number of training samples is not very large, then the singular points A or B are selected by Bayesian estimation.
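The following NumPy sketch illustrates this selection effect under assumed choices (the toy model y = a·b·x + Gaussian noise, the uniform prior on [-2, 2]^2, the threshold eps, and the helper name are all hypothetical, not from the slides). It computes the posterior on a grid and reports how much posterior mass lies near the singular set {a = 0} ∪ {b = 0}: for small n most of the mass sits near the singularity, while for large n it moves to the regular region around the true parameter.

import numpy as np

rng = np.random.default_rng(0)

def posterior_mass_near_singularity(n, true_ab=0.3, sigma=1.0, eps=0.2):
    # Toy singular model: y = a*b*x + N(0, sigma^2), uniform prior on [-2, 2]^2.
    # Only the product a*b is identified, and {a = 0} U {b = 0} is the singular set.
    x = rng.normal(size=n)
    y = true_ab * x + sigma * rng.normal(size=n)
    grid = np.linspace(-2.0, 2.0, 201)
    A, B = np.meshgrid(grid, grid)
    ab = A * B
    # Sum of squared residuals on the grid, via sufficient statistics.
    sq_err = np.sum(y**2) - 2.0 * ab * np.sum(x * y) + ab**2 * np.sum(x**2)
    log_post = -0.5 * sq_err / sigma**2      # flat prior: log posterior up to a constant
    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    near_singular = (np.abs(A) < eps) | (np.abs(B) < eps)
    return post[near_singular].sum()

for n in [10, 100, 1000]:
    print(f"n = {n:5d}  posterior mass near the singular set = {posterior_mass_near_singularity(n):.3f}")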
Figure 4: Learning Curve.
The learning curve of a hierarchical learning machine is bounded by those of several smaller machines.
n: the number of training samples; G(n): the generalization error.
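To make the vertical axis concrete, the standard asymptotic form from singular learning theory is quoted here for context (the symbols λ and d do not appear in the slides). In Bayesian estimation the expected generalization error behaves as
\[
\mathbb{E}\bigl[G(n)\bigr] \;=\; \frac{\lambda}{n} + o\!\left(\frac{1}{n}\right),
\]
where λ is the real log canonical threshold determined by the exponents k_1, …, k_d and the Jacobian of the resolution map in Figure 2, and d is the number of parameters. For a regular model λ = d/2, while for a singular model λ ≤ d/2. Roughly speaking, when the true distribution is realizable by a smaller submodel, the constant λ of the large machine does not exceed the corresponding constant of that submodel, which is the sense in which its learning curve is bounded by those of several smaller machines.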