Slide 1
Energy function (Lyapunov function): E(S_1, \dots, S_N) = -\tfrac{1}{2} \sum_{i \neq j} W_{ij} S_i S_j + C, with W_{ii} = P/N.
Attractors = local minima of the energy function.
Spurious minima: inverse states, mixture states, spin glass states.
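A minimal NumPy sketch of this slide; the function names (hebb_weights, hopfield_energy) are mine, and the Hebbian rule W_ij = (1/N) \sum_\mu \xi_i^\mu \xi_j^\mu is assumed as the source of the diagonal value P/N.

```python
import numpy as np

def hebb_weights(patterns):
    """Hebbian weights W_ij = (1/N) * sum_mu xi_i^mu xi_j^mu for P patterns of
    N units; the diagonal then comes out as W_ii = P/N, as noted on the slide."""
    P, N = patterns.shape                      # patterns: P x N array of +/-1
    return patterns.T @ patterns / N

def hopfield_energy(S, W, C=0.0):
    """E(S) = -1/2 * sum_{i != j} W_ij S_i S_j + C; the diagonal is excluded
    and folded into the constant C."""
    off_diag = W - np.diag(np.diag(W))
    return -0.5 * S @ off_diag @ S + C
```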
Slide 2
Magnetic systems, Ising model: spins S_i.
Field acting on spin i: h_i = \sum_j w_{ij} S_j + h^{ext}, where w_{ij} is the exchange interaction strength and w_{ij} = w_{ji}.
At low temperature, S_i = \mathrm{sgn}(h_i).
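A short sketch of the local field and the low-temperature (deterministic) update; the helper names are mine, and the field is computed for all spins at once under the assumption that W is the symmetric coupling matrix.

```python
import numpy as np

def local_field(W, S, h_ext=0.0):
    """h_i = sum_j w_ij S_j + h_ext, computed for every spin at once."""
    return W @ S + h_ext

def low_temperature_update(W, S, h_ext=0.0):
    """At low temperature each spin simply aligns with its field: S_i = sgn(h_i)."""
    h = local_field(W, S, h_ext)
    return np.where(h >= 0, 1, -1)
```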
Slide 3
Effect of temperature (Glauber dynamics):
S_i = +1 with probability g(h_i), and -1 with probability 1 - g(h_i),
where g(h) = \frac{1}{1 + e^{-2\beta h}}, \beta = 1/(k_B T), k_B = Boltzmann's constant, T = temperature.
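A hedged sketch of the Glauber rule exactly as stated above; glauber_update is a name I chose, and rng is assumed to be a NumPy Generator (e.g. np.random.default_rng()).

```python
import numpy as np

def g(h, beta):
    """g(h) = 1 / (1 + exp(-2*beta*h)), with beta = 1/(k_B * T)."""
    return 1.0 / (1.0 + np.exp(-2.0 * beta * h))

def glauber_update(h_i, beta, rng):
    """S_i = +1 with probability g(h_i), and -1 with probability 1 - g(h_i)."""
    return 1 if rng.random() < g(h_i, beta) else -1
```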
Slide 4
Stochastic Hopfield nets:
\mathrm{Prob}(S_i = \pm 1) = \frac{1}{1 + e^{\mp 2\beta h_i}}, \qquad \beta = 1/T.
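A possible way to run one asynchronous sweep of such a stochastic net, assuming asynchronous updates in random order (the slide does not fix an update schedule); the function name is mine.

```python
import numpy as np

def stochastic_sweep(W, S, T, rng):
    """One asynchronous sweep of a stochastic Hopfield net:
    Prob(S_i = +1) = 1 / (1 + exp(-2*beta*h_i)), with beta = 1/T."""
    beta = 1.0 / T
    S = S.copy()
    for i in rng.permutation(len(S)):
        h_i = W[i] @ S
        p_plus = 1.0 / (1.0 + np.exp(-2.0 * beta * h_i))
        S[i] = 1 if rng.random() < p_plus else -1
    return S
```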
Slide 5
Optimization with HNN: the Weighted Matching Problem.
N points, d_ij = distance between i and j. Link the points in pairs: each point is linked to exactly one other point, and the total length is MINIMUM.
1. Encoding: N x N neurons (n_ij), 1 <= i <= N, 1 <= j <= N.
Activation of neuron ij: n_ij = 1 if there is a link from i to j, and 0 otherwise.
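A small helper (my own, for illustration) that builds this encoding from an explicit list of links, using only the i < j entries of the unit array.

```python
import numpy as np

def encode_matching(pairs, N):
    """Turn an explicit pairing such as [(0, 3), (1, 2)] into the n_ij units:
    n[i, j] = 1 iff points i and j are linked (only entries with i < j are used)."""
    n = np.zeros((N, N), dtype=int)
    for i, j in pairs:
        i, j = min(i, j), max(i, j)
        n[i, j] = 1
    return n
```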
Slide 6
2. Quantity to minimize: total length L = \sum_{i<j} d_{ij} n_{ij}.
3. Constraints: \sum_j n_{ij} = 1, \forall i.
4. Energy function = quantity to minimize + constraint penalty:
E([n_{ij}]) = \sum_{i<j} d_{ij} n_{ij} + (\gamma/2) \sum_i \left(1 - \sum_j n_{ij}\right)^2.
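A sketch of this energy under one assumption I am making explicit: the constraint sum over j is read as counting every link that touches point i, regardless of whether it is stored as n_ij or n_ji in the upper triangle.

```python
import numpy as np

def matching_energy(n, d, gamma):
    """E([n_ij]) = sum_{i<j} d_ij n_ij + (gamma/2) * sum_i (1 - sum_j n_ij)^2."""
    N = n.shape[0]
    iu = np.triu_indices(N, k=1)
    length = np.sum(d[iu] * n[iu])
    links_at_i = (n + n.T).sum(axis=1)       # links touching each point i
    penalty = 0.5 * gamma * np.sum((1.0 - links_at_i) ** 2)
    return length + penalty
```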
Slide 7
5. Reduce the energy function to a sum of quadratic and linear terms: the coefficients of the linear terms are the thresholds of the units, and the coefficients of the quadratic terms are the weights between neurons.
E(n) = \dots = \frac{N\gamma}{2} + \sum_{i<j} (d_{ij} - \gamma) n_{ij} + \gamma \sum_{i,j,k} n_{ij} n_{ik}.
So weight_{ij,kl} = -\gamma if (i,j) and (k,l) have an index in common, and 0 otherwise.
Threshold of node n_{ij}: d_{ij} - \gamma.
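A sketch that transcribes these weights and thresholds literally, with signs and factors left exactly as the slide states them; the function name and the pair-indexing of units are my own choices.

```python
import numpy as np

def matching_weights_thresholds(d, gamma):
    """Connection strengths read off the expanded energy, as on the slide:
    weight((i,j),(k,l)) = -gamma if the two pairs share an index, 0 otherwise;
    threshold(i,j) = d_ij - gamma.  Units are indexed by the pairs with i < j."""
    N = d.shape[0]
    pairs = [(i, j) for i in range(N) for j in range(i + 1, N)]
    M = len(pairs)
    W = np.zeros((M, M))
    for a, (i, j) in enumerate(pairs):
        for b, (k, l) in enumerate(pairs):
            if a != b and ({i, j} & {k, l}):
                W[a, b] = -gamma
    theta = np.array([d[i, j] - gamma for i, j in pairs])
    return pairs, W, theta
```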
Slide 8
Traveling Salesman Problem (TSP): NP-complete.
N x N nodes: n_{ia} = 1 iff city i is visited at the a-th stop of the tour.
Minimize the tour length: L = \frac{1}{2} \sum_{i,j,a} d_{ij} n_{ia} (n_{j,a+1} + n_{j,a-1}).
Constraints: \sum_a n_{ia} = 1, \forall city i; \quad \sum_i n_{ia} = 1, \forall stop a.
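A minimal sketch of this encoding (helper names are mine); it assumes the stop index wraps around modulo N so the tour closes, which is the usual reading of a+1 and a-1 here.

```python
import numpy as np

def is_valid_tour(n):
    """A valid tour has exactly one 1 in every row (each city visited once)
    and in every column (each stop used once)."""
    return bool((n.sum(axis=1) == 1).all() and (n.sum(axis=0) == 1).all())

def tour_length(n, d):
    """L = 1/2 * sum_{i,j,a} d_ij n_ia (n_{j,a+1} + n_{j,a-1}), with the stop
    index taken modulo N."""
    nxt = np.roll(n, -1, axis=1)             # n_{j, a+1}
    prv = np.roll(n, 1, axis=1)              # n_{j, a-1}
    return 0.5 * np.einsum('ij,ia,ja->', d, n, nxt + prv)
```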
Slide 9
Energy:
E = \frac{1}{2} \sum_{i,j,a} d_{ij} n_{ia} (n_{j,a+1} + n_{j,a-1}) + (\gamma/2) \left[ \sum_i \left(1 - \sum_a n_{ia}\right)^2 + \sum_a \left(1 - \sum_i n_{ia}\right)^2 \right]
= \dots = \frac{1}{2} \sum d_{ij} n_{ia} n_{j,a+1} + \frac{1}{2} \sum d_{ij} n_{ia} n_{j,a-1} + \gamma \sum_{i \neq j, a} n_{ia} n_{ja} + \gamma \sum_{a \neq b, i} n_{ia} n_{ib} - \gamma \sum_{i,a} n_{ia} + \gamma N.
So, threshold: -\gamma for each unit.
Weights: \gamma between units in the same row or the same column; d_{ij} between units in adjacent columns (stops a and a \pm 1).
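A sketch that transcribes the slide's reading of these coefficients literally, leaving signs and overall factors exactly as stated above; the 4-index weight array, the modulo-N column adjacency, and the function name are my own choices for illustration.

```python
import numpy as np

def tsp_weights_thresholds(d, gamma):
    """Per the slide: weight gamma between units in the same row or column,
    weight d_ij between units in adjacent columns (stops a and a+/-1, mod N),
    and a threshold of -gamma for every unit (i, a)."""
    N = d.shape[0]
    W = np.zeros((N, N, N, N))               # W[i, a, j, b]
    for i in range(N):
        for a in range(N):
            for j in range(N):
                for b in range(N):
                    if (i, a) == (j, b):
                        continue
                    if i == j or a == b:
                        W[i, a, j, b] = gamma
                    elif b in ((a + 1) % N, (a - 1) % N):
                        W[i, a, j, b] = d[i, j]
    theta = -gamma * np.ones((N, N))
    return W, theta
```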
Slide 10
Reinforcement Learning (learn with a critic, not a teacher): the Associative Reward-Penalty algorithm (A_RP).
Stochastic units: \mathrm{Prob}(S_i = \pm 1) = \frac{1}{1 + e^{\mp 2\beta h_i}}, with h_i = \sum_j w_{ij} v_j, where v_j is the activation of a hidden unit or the net input \xi_j itself.
Target: \xi_i^\mu = S_i^\mu if \gamma^\mu = +1 (reward), and \xi_i^\mu = -S_i^\mu if \gamma^\mu = -1 (penalty).
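A sketch of the stochastic forward pass and the target construction given on this slide; arp_forward is a hypothetical name, and the A_RP weight update that would consume these targets is not included because the slide stops before it.

```python
import numpy as np

def arp_forward(W, v, reinforcement, beta, rng):
    """Stochastic output pass plus the A_RP target defined on the slide:
    xi_i = S_i when the critic signals reward (+1), -S_i when it signals
    penalty (-1)."""
    h = W @ v                                 # h_i = sum_j w_ij v_j
    p_plus = 1.0 / (1.0 + np.exp(-2.0 * beta * h))
    S = np.where(rng.random(h.shape) < p_plus, 1, -1)
    xi = S if reinforcement == +1 else -S
    return S, xi
```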