On the Basis Learning Rule of Adaptive-Subspace SOM (ASSOM)
Huicheng Zheng, Christophe Laurent and Grégoire Lefebvre
ICANN'06, 13th September 2006
Thanks to the MUSCLE Internal Fellowship (http://www.muscle-noe.org).
Outline
- Introduction
- Minimization of the ASSOM objective function
- Fast-learning methods
  – Insight on the basis vector rotation
  – Batch-mode basis vector updating
- Experiments
- Conclusions
Motivation of ASSOM
- Learning "invariance classes" with subspace learning and SOM [Kohonen, T., et al., 1997]
  – For example: spatial-translation invariance (rectangles, circles, triangles, …)
Applications of ASSOM
- Invariant feature formation [Kohonen, T., et al., 1997]
- Speech processing [Hase, H., et al., 1996]
- Texture segmentation [Ruiz del Solar, J., 1998]
- Image retrieval [De Ridder, D., et al., 2000]
- Image classification [Zhang, B., et al., 1999]
ASSOM Modules Representing Subspaces
- The module arrays in ASSOM have a rectangular or hexagonal topology.
- Each module $j$ represents a subspace $\mathcal{L}^{(j)}$.
Competition and Adaptation
Repeatedly:
- Competition: the winner is the module whose subspace best represents the input, $c = \arg\max_i \|\hat{\mathbf{x}}^{(i)}\|$, where $\hat{\mathbf{x}}^{(i)}$ is the orthogonal projection of the input $\mathbf{x}$ onto $\mathcal{L}^{(i)}$.
- Adaptation: for the winner $c$ and the modules $i$ in its neighborhood, rotate each basis vector, $\mathbf{b}_m^{(i)}(t+1) = P_c^{(i)}(\mathbf{x}, t)\,\mathbf{b}_m^{(i)}(t)$, where the $N \times N$ matrix is

$$P_c^{(i)}(\mathbf{x}, t) = \mathbf{I} + \lambda(t)\, h_c^{(i)}(t)\, \frac{\mathbf{x}\mathbf{x}^T}{\|\hat{\mathbf{x}}^{(i)}\|\,\|\mathbf{x}\|}$$

- Orthonormalize the basis vectors.
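To make the two steps concrete, here is a minimal NumPy sketch of the competition and the traditional rotation update. It assumes a flat list of modules, each an N×M matrix with orthonormal columns, and a precomputed scalar `lam_h` standing for $\lambda(t)\,h_c^{(i)}(t)$; the function names are illustrative, not from the paper.

```python
import numpy as np

def project(B, x):
    """Orthogonal projection of x onto the subspace spanned by the
    orthonormal columns of B (N x M)."""
    return B @ (B.T @ x)

def winner(modules, x):
    """Competition: the winner maximizes the projection norm ||x_hat||."""
    return max(range(len(modules)),
               key=lambda i: np.linalg.norm(project(modules[i], x)))

def rotate_traditional(B, x, lam_h):
    """Traditional rule: build the full N x N rotation operator
    P = I + lam_h * (x x^T) / (||x_hat|| ||x||); cost is O(M N^2)."""
    N = x.size
    x_hat = project(B, x)
    P = (np.eye(N)
         + lam_h * np.outer(x, x) / (np.linalg.norm(x_hat) * np.linalg.norm(x)))
    B_new = P @ B               # rotate every basis vector
    Q, _ = np.linalg.qr(B_new)  # re-orthonormalize the basis
    return Q
```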
Transformation Invariance
- Episodes correspond to signal subspaces.
- Example: one episode, $S$, consists of 8 vectors; each vector is translated in time with respect to the others.
Episode Learning
- Episode winner: $c = \arg\max_i \sum_{s \in S} \|\hat{\mathbf{x}}^{(i)}(s)\|^2$.
- Adaptation: for each sample $\mathbf{x}(s)$ in the episode $X = \{\mathbf{x}(s),\ s \in S\}$:
  – Rotate the basis vectors: $\mathbf{b}_m^{(i)}(t+1) = P_c^{(i)}(\mathbf{x}(s), t)\,\mathbf{b}_m^{(i)}(t)$.
  – Orthonormalize the basis vectors.
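A short sketch of the episode winner, reusing `project` from the previous snippet; that projection energies are summed over the whole episode follows the formula above.

```python
def episode_winner(modules, episode):
    """Competition over an episode (a list of N-vectors x(s)):
    the winner maximizes the summed projection energy."""
    def energy(B):
        return sum(np.linalg.norm(project(B, x)) ** 2 for x in episode)
    return max(range(len(modules)), key=lambda i: energy(modules[i]))
```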
Deficiency of the Traditional Learning Rule
- The rotation operator $P_c^{(i)}(\mathbf{x}(s), t)$ is an $N \times N$ matrix ($N$: input vector dimension).
- Approximately: NOP (number of operations) $\propto MN^2$ ($M$: subspace dimension).
Efforts in the Literature
- Adaptive Subspace Map (ASM) [De Ridder, D., et al., 2000]:
  – Drops topological ordering.
  – Performs batch-mode updating with PCA.
  – Essentially not ASSOM.
- Replacing the basis updating rule [McGlinchey, S.J., Fyfe, C., 1998]:
  – NOP $\propto M^2N$.
Minimization of the ASSOM Objective Function
The ASSOM objective function is

$$E = \int \sum_i h_c^{(i)} \sum_{s \in S} \|\tilde{\mathbf{x}}^{(i)}(s)\|^2 \, P(X)\, dX$$

where $\tilde{\mathbf{x}}^{(i)}(s) = \mathbf{x}(s) - \hat{\mathbf{x}}^{(i)}(s)$ is the projection error and $P(X)$ is the probability density function of $X$.
Solution: stochastic gradient descent,

$$\mathbf{b}_m^{(i)}(t+1) = \mathbf{b}_m^{(i)}(t) + \lambda(t)\, h_c^{(i)}(t) \sum_{s \in S} \mathbf{x}(s)\mathbf{x}^T(s)\, \mathbf{b}_m^{(i)}(t)$$

where $\lambda(t)$ is the learning rate function.
Minimization of the ASSOM Objective Function
When $\lambda(t)$ is small, the gradient-descent step can equivalently be applied sample by sample:

$$\mathbf{b}_m^{(i)}(t+1) = \prod_{s \in S} \left[\mathbf{I} + \lambda(t)\, h_c^{(i)}(t)\, \mathbf{x}(s)\mathbf{x}^T(s)\right] \mathbf{b}_m^{(i)}(t)$$

In practice, better stability has been observed with the modified form proposed in [Kohonen, T., et al., 1997]:

$$P_c^{(i)}(\mathbf{x}(s), t) = \mathbf{I} + \lambda(t)\, h_c^{(i)}(t)\, \frac{\mathbf{x}(s)\mathbf{x}^T(s)}{\|\hat{\mathbf{x}}^{(i)}(s)\|\,\|\mathbf{x}(s)\|}$$
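As a one-line check of the small-$\lambda$ claim (my own expansion, not taken from the slides), multiplying out the per-sample rotation operators and keeping only first-order terms in $\lambda$ gives

```latex
% First-order expansion: for small \lambda, the product of per-sample
% rotation operators reduces to the additive (gradient-descent) update.
\prod_{s \in S}\Bigl(\mathbf{I} + \lambda h_c^{(i)}\,
      \mathbf{x}(s)\mathbf{x}^T(s)\Bigr)
  = \mathbf{I} + \lambda h_c^{(i)} \sum_{s \in S}
      \mathbf{x}(s)\mathbf{x}^T(s) \;+\; O(\lambda^2)
```

so rotating sample by sample and taking one additive gradient step differ only by $O(\lambda^2)$ terms; the same expansion applies with the normalized operators.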
Minimization of the ASSOM Objective Function
The modified form corresponds to a modified objective function:

$$E_m = \int \sum_i h_c^{(i)} \sum_{s \in S} \frac{\|\tilde{\mathbf{x}}^{(i)}(s)\|^2}{\|\hat{\mathbf{x}}^{(i)}(s)\|\,\|\mathbf{x}(s)\|} \, P(X)\, dX$$

Solution to $E_m$:

$$\mathbf{b}_m^{(i)}(t+1) = \mathbf{b}_m^{(i)}(t) + \lambda(t)\, h_c^{(i)}(t) \sum_{s \in S} \frac{\mathbf{x}(s)\mathbf{x}^T(s)}{\|\hat{\mathbf{x}}^{(i)}(s)\|\,\|\mathbf{x}(s)\|}\, \mathbf{b}_m^{(i)}(t)$$

When $\lambda(t)$ is small, this coincides with the basic ASSOM episode update, the product of the rotation operators $P_c^{(i)}(\mathbf{x}(s), t)$.
Insight on the Basis Vector Rotation
Recall the traditional learning rule:

$$\mathbf{b}_m^{(i)}(t+1) = P_c^{(i)}(\mathbf{x}(s), t)\, \mathbf{b}_m^{(i)}(t) = \left[\mathbf{I} + \lambda(t)\, h_c^{(i)}(t)\, \frac{\mathbf{x}(s)\mathbf{x}^T(s)}{\|\hat{\mathbf{x}}^{(i)}(s)\|\,\|\mathbf{x}(s)\|}\right] \mathbf{b}_m^{(i)}(t)$$
Insight on the Basis Vector Rotation
Since $\mathbf{x}(s)\mathbf{x}^T(s)\,\mathbf{b}_m^{(i)}(t) = \left[\mathbf{x}^T(s)\,\mathbf{b}_m^{(i)}(t)\right]\mathbf{x}(s)$, the correction only involves the scalar projection $\mathbf{x}^T(s)\,\mathbf{b}_m^{(i)}(t)$.
For fast computing, calculate the scalar projection first, then scale $\mathbf{x}(s)$ with the scalar $\lambda(t)\, h_c^{(i)}(t)\, \dfrac{\mathbf{x}^T(s)\,\mathbf{b}_m^{(i)}(t)}{\|\hat{\mathbf{x}}^{(i)}(s)\|\,\|\mathbf{x}(s)\|}$ to get the correction to $\mathbf{b}_m^{(i)}(t)$.
NOP $\propto MN$.
Referred to as FL-ASSOM (Fast-Learning ASSOM).
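A sketch of the same update in the fast form, under the same assumptions as the earlier snippet: since $(\mathbf{x}\mathbf{x}^T)\mathbf{b} = (\mathbf{x}^T\mathbf{b})\mathbf{x}$, each basis vector needs one dot product and one scaled addition instead of a matrix product.

```python
def rotate_fast(B, x, lam_h):
    """FL-ASSOM-style update, O(M N): scalar projections first, then scale x."""
    x_hat = project(B, x)
    scale = lam_h / (np.linalg.norm(x_hat) * np.linalg.norm(x))
    # x.T @ B holds the M scalar projections x^T b_m; column m of the outer
    # product is (x^T b_m) * x, the correction to basis vector b_m.
    B_new = B + scale * np.outer(x, x.T @ B)
    Q, _ = np.linalg.qr(B_new)  # orthonormalize
    return Q
```

Up to floating-point error, `rotate_fast(B, x, lam_h)` should agree with `rotate_traditional(B, x, lam_h)` from the earlier sketch while never forming the $N \times N$ operator.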
Insight on the Basis Vector Rotation
(Figure illustrating the basis vector rotation.)
Batch-mode Fast Learning (BFL-ASSOM)
Motivation: re-use the projections $\hat{\mathbf{x}}^{(i)}(s)$ previously calculated during module competition.
In the basic ASSOM, $\mathcal{L}^{(i)}$ keeps changing as each component vector $\mathbf{x}(s)$ is received, so $\hat{\mathbf{x}}^{(i)}(s)$ has to be re-calculated for each $\mathbf{x}(s)$.
Batch-mode Rotation
Use the solution to the modified objective function $E_m$:

$$\mathbf{b}_m^{(i)}(t+1) = \mathbf{b}_m^{(i)}(t) + \lambda(t)\, h_c^{(i)}(t) \sum_{s \in S} \frac{\mathbf{x}(s)\mathbf{x}^T(s)}{\|\hat{\mathbf{x}}^{(i)}(s)\|\,\|\mathbf{x}(s)\|}\, \mathbf{b}_m^{(i)}(t)$$

The subspace remains the same for all the component vectors in the episode, so we can now use the projections $\hat{\mathbf{x}}^{(i)}(s)$ calculated during module competition.
Batch-mode Fast Learning
The update can be written as

$$\mathbf{b}_m^{(i)}(t+1) = \mathbf{b}_m^{(i)}(t) + \sum_{s \in S} \alpha_m^{(i)}(s)\, \mathbf{x}(s)$$

where $\alpha_m^{(i)}(s)$ is a scalar defined by

$$\alpha_m^{(i)}(s) = \lambda(t)\, h_c^{(i)}(t)\, \frac{\mathbf{x}^T(s)\, \mathbf{b}_m^{(i)}(t)}{\|\hat{\mathbf{x}}^{(i)}(s)\|\,\|\mathbf{x}(s)\|}$$

The correction is a linear combination of the component vectors $\mathbf{x}(s)$ in the episode. For each episode, one orthonormalization of the basis vectors is enough.
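A sketch of the batch-mode update under the same assumptions as the earlier snippets: all projections are taken with respect to the basis as it stood at competition time, and a single orthonormalization closes the episode. The vector `alpha` mirrors the scalars $\alpha_m^{(i)}(s)$ defined above.

```python
def rotate_batch(B, episode, lam_h):
    """BFL-ASSOM-style update: one pass over the episode, one QR at the end."""
    B_new = B.copy()
    for x in episode:
        x_hat = project(B, x)  # fixed subspace: reusable from competition
        alpha = lam_h * (x.T @ B) / (np.linalg.norm(x_hat) * np.linalg.norm(x))
        B_new += np.outer(x, alpha)  # correction: linear combination of the x(s)
    Q, _ = np.linalg.qr(B_new)       # one orthonormalization per episode
    return Q
```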
Experimental Demonstration
Emergence of translation-invariant filters:
– Episodes are drawn from a colored noise image.
– Vectors in episodes are subject to translation.
(Figures: white noise image, colored noise image, and an example episode, magnified.)
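A rough sketch of how such episodes might be generated; the window size, episode length, and shift range are my assumptions, not the paper's exact settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def smooth(img, k=5):
    """Cheap separable box filter to turn white noise into colored noise
    (assumption: any low-pass filter suffices for the demonstration)."""
    kern = np.ones(k) / k
    img = np.apply_along_axis(lambda r: np.convolve(r, kern, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kern, mode="same"), 0, img)

def sample_episode(img, win=16, n_vectors=8, max_shift=4):
    """Draw n_vectors flattened windows translated relative to one another."""
    h, w = img.shape
    r0 = rng.integers(max_shift, h - win - max_shift)
    c0 = rng.integers(max_shift, w - win - max_shift)
    episode = []
    for _ in range(n_vectors):
        dr, dc = rng.integers(-max_shift, max_shift + 1, size=2)
        episode.append(img[r0+dr : r0+dr+win, c0+dc : c0+dc+win].ravel())
    return episode

colored = smooth(rng.normal(size=(256, 256)))
episode = sample_episode(colored)
```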
Resulting Filters
(Figures: filters learned by FL-ASSOM and BFL-ASSOM; decrease of the average projection error $e$ with learning step $t$.)
Timing Results
Times given in seconds for 1,000 training steps.
– M: subspace dimension
– N: input vector dimension
– VU: vector updating time
– WL: whole learning time
(Timing table not reproduced.)
Timing Results
(Figures: change of vector updating time (VU) with input dimension N, and with subspace dimension M. Vertical scales of FL-ASSOM and BFL-ASSOM have been magnified 10 times for clarity.)
Conclusions
- The basic ASSOM algorithm corresponds to a modified objective function.
- Updating of basis vectors in the basic ASSOM corresponds to a scaling of the component vectors in the input episode.
- In batch-mode updating, the correction to the basis vectors is a linear combination of the component vectors in the input episode.
- Basis learning can be dramatically accelerated with these insights.
References
- De Ridder, D., et al., 2000: The adaptive subspace map for image description and image database retrieval. SSPR&SPR 2000.
- Hase, H., et al., 1996: Speech signal processing using Adaptive Subspace SOM (ASSOM). Technical Report NC95-140, The Institute of Electronics, Information and Communication Engineers, Tottori University, Koyama, Japan.
- Kohonen, T., et al., 1997: Self-organized formation of various invariant-feature filters in the adaptive-subspace SOM. Neural Computation 9(6).
- McGlinchey, S.J., Fyfe, C., 1998: Fast formation of invariant feature maps. EUSIPCO'98.
- Ruiz del Solar, J., 1998: Texsom: texture segmentation using Self-Organizing Maps. Neurocomputing 21(1–3).
- Zhang, B., et al., 1999: Handwritten digit recognition by adaptive-subspace self-organizing map (ASSOM). IEEE Trans. on Neural Networks 10(4).
Thanks and questions?