Adjoint Orbits, Principal Components, and Neural Nets
1. Some facts about Lie groups and examples
2. Examples of adjoint orbits and a distance measure
3. Descent equations on adjoint orbits
4. Properties of the double bracket equation
5. Smoothed versions of the double bracket equation
6. The principal component extractor
7. The performance of subspace filters
8. Variations on a theme
Where We Are
9:30 - 10:45   Part 1. Examples and Mathematical Background
10:45 - 11:15  Coffee break
11:15 - 12:30  Part 2. Principal Components, Neural Nets, and Automata
12:30 - 14:30  Lunch
14:30 - 15:45  Part 3. Precise and Approximate Representation of Numbers
15:45 - 16:15  Coffee break
16:15 - 17:30  Part 4. Quantum Computation
The Adjoint Orbit Theory and Some Applications
1. Some facts about Lie groups and examples
2. Examples of adjoint orbits and a distance measure
3. Descent equations on adjoint orbits
4. Properties of the double bracket equation
5. Smoothed versions of the double bracket equation
6. Loops and deck transformations
Some Background
More Mathematics Background
A Little More Mathematics Background
Still More Mathematics Background
The Last of the Mathematics Background (for Now)
Getting a Feel for the Normal Metric
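For orientation, the standard construction behind this title, in notation chosen here (an adjoint orbit of a symmetric matrix under orthogonal conjugation), is roughly the following sketch; the symbols H, Omega, Theta are the editor's.

```latex
\mathcal{O}(H_0) = \{\,\Theta H_0 \Theta^{T} : \Theta \in SO(n)\,\},
\qquad
T_H\,\mathcal{O}(H_0) = \{\,[\Omega, H] : \Omega^{T} = -\Omega\,\}.
% Normal metric: represent each tangent vector by the unique skew-symmetric
% \Omega orthogonal to the stabilizer \{\Omega : [\Omega, H] = 0\}, and set
\big\langle\, [\Omega_1, H],\ [\Omega_2, H] \,\big\rangle_{N}
  \;=\; \operatorname{tr}\!\big(\Omega_1^{T}\Omega_2\big).
```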
Steepest Descent on an Adjoint Orbit
A Descent Equation on an Adjoint Orbit
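A likely form of the descent equation in question is Brockett's double bracket flow: for H on the orbit and a fixed symmetric matrix N, steepest descent of the distance to N (equivalently, steepest ascent of tr(NH)) with respect to the normal metric gives the equation below; the notation follows the sketch above.

```latex
\dot{H} = [\,H, [H, N]\,], \qquad [A, B] = AB - BA,
% and along solutions the cost decreases monotonically:
\frac{d}{dt}\,\|H - N\|_F^{2} \;=\; -2\,\big\|[H, N]\big\|_F^{2} \;\le\; 0 .
```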
A Descent Equation with Multiple Equilibria
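Presumably the multiple equilibria referred to are those of the double bracket flow above; the standard picture, stated here by the editor for a diagonal N with distinct entries, is summarized below.

```latex
[\,H, [H, N]\,] = 0 \;\Longleftrightarrow\; [H, N] = 0
\quad \text{(for symmetric $H$ and $N$),}
% so the equilibria are the matrices commuting with N.  On the orbit of a
% symmetric H(0) with distinct eigenvalues these are the n! diagonal matrices
% carrying the eigenvalues of H(0) in every possible order; only the ordering
% matched to that of N is a stable attractor, the rest are saddle points.
```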
A Descent Equation with Smoothing Added
The Double Bracket Flow for Analog Computation
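The title suggests using the flow itself as an analog computer. As a purely illustrative sketch (the function, step size, and test data are choices made here, not code from the talk), numerically integrating the double bracket equation diagonalizes a symmetric matrix and, in doing so, sorts its eigenvalues into the order fixed by N:

```python
import numpy as np

def double_bracket_flow(H0, N, dt=1e-4, steps=200_000):
    """Integrate dH/dt = [H, [H, N]] by forward Euler (crude but adequate here)."""
    H = H0.copy()
    for _ in range(steps):
        K = H @ N - N @ H              # K = [H, N]
        H = H + dt * (H @ K - K @ H)   # H += dt * [H, K] = dt * [H, [H, N]]
    return H

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 5))
    H0 = (A + A.T) / 2                       # symmetric initial condition
    N = np.diag(np.arange(5.0, 0.0, -1.0))   # diag(5,4,3,2,1): distinct entries fix the ordering
    H_inf = double_bracket_flow(H0, N)
    print(np.round(H_inf, 3))                                   # approximately diagonal
    print(np.round(np.sort(np.linalg.eigvalsh(H0))[::-1], 3))   # eigenvalues of H0, descending
```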
Adaptive Subspace Filtering
Some Equations
Let u be a vector of inputs, and let a diagonal “editing” matrix be given that selects the energy levels that are desirable. An adaptive subspace filter with input u and output y can be realized by implementing the equations.
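As an illustration only, and not necessarily the equations presented in the talk, here is a minimal sketch of one standard realization: an Oja-type subspace learning rule with a diagonal editing matrix D applied to the subspace coordinates. The symbol D, the learning rate, and the specific update rule are assumptions made here.

```python
import numpy as np

def subspace_filter_step(W, u, D, eta=0.002):
    """One update of an Oja-type adaptive subspace filter (editor's sketch).

    W   : n x k matrix whose columns span the current subspace estimate.
    u   : length-n input vector.
    D   : k x k diagonal "editing" matrix; 1 keeps a component, 0 suppresses it.
    Returns the edited output y and the updated W.
    """
    v = W.T @ u                      # coordinates of u in the current subspace
    y = W @ (D @ v)                  # output: only the "desirable" components survive
    W = W + eta * (np.outer(u, v) - W @ np.outer(v, v))   # Oja subspace learning rule
    return y, W

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n, k = 8, 2
    W = np.linalg.qr(rng.standard_normal((n, k)))[0]       # orthonormal initial frame
    D = np.diag([1.0, 0.0])                                 # keep one component, edit out the other
    basis = np.linalg.qr(rng.standard_normal((n, 2)))[0]    # true (hidden) signal subspace
    for _ in range(20_000):
        u = basis @ (np.array([3.0, 1.0]) * rng.standard_normal(2))  # signal with unequal energies
        y, W = subspace_filter_step(W, u, D)
    # If learning has succeeded, the columns of W span the signal subspace:
    print(np.linalg.norm(basis - W @ (W.T @ basis)))        # close to 0
```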
Neural Nets as Flows on Grassmann Manifolds
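One reading of this title, consistent with the summary that follows: the averaged Oja flow depends only on the column span of W, so it defines a flow on the Grassmann manifold, and in terms of the projection P = WW^T it is again a double bracket equation. The identification below is standard but the notation is supplied by the editor.

```latex
\dot{W} = (I - W W^{T})\,\Sigma\,W, \qquad \Sigma = \mathbb{E}\,[\,u u^{T}\,],
\qquad P = W W^{T}
\;\;\Longrightarrow\;\;
\dot{P} = [\,P, [P, \Sigma]\,],
% a double bracket flow on the Grassmann manifold (an adjoint orbit of rank-k
% orthogonal projections), whose stable equilibrium is the projection onto the
% principal k-dimensional eigenspace of \Sigma.
```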
Summary of Part 2
1. We have given some mathematical background necessary to work with flows on adjoint orbits and indicated some applications.
2. We have defined flows that will stabilize at invariant subspaces corresponding to the principal components of a vector process. These flows can be interpreted as flows that learn without a teacher.
3. We have argued that, in spite of its limitations, steepest descent is usually the first choice in algorithm design.
4. We have interpreted a basic neural network algorithm as a flow on a Grassmann manifold generated by a steepest descent tracking algorithm.
A Few References
M. W. Berry et al., “Matrices, Vector Spaces, and Information Retrieval,” SIAM Review, vol. 41, no. 2, 1999.
R. W. Brockett, “Dynamical Systems That Learn Subspaces,” in Mathematical System Theory: The Influence of R. E. Kalman (A. C. Antoulas, ed.), Springer-Verlag, Berlin, 1991, pp. 579-592.
R. W. Brockett, “An Estimation Theoretic Basis for the Design of Sorting and Classification Networks,” in Neural Networks (R. Mammone and Y. Zeevi, eds.), Academic Press, 1991, pp. 23-41.