Optimization with Neural Networks
Presented by: Mahmood Khademi, Babak Bashiri
Instructor: Dr. Bagheri
Sharif University of Technology, April 2007
Introduction
An optimization problem consists of two parts: a cost function and constraints.
- Constrained: the constraints are built into the cost function, so minimizing the cost function also satisfies the constraints.
- Unconstrained: there are no constraints on the problem.
- Combinatorial: the constraints and the cost function are kept separate; each is expressed as a term to minimize, and the terms are added together.
Applications
Applications in many fields, such as:
- Routing in computer networks
- VLSI circuit design
- Planning in operational and logistic systems
- Power distribution systems
- Wireless and satellite communication systems
Basic idea
Let $x = (x_1, \dots, x_n)$ be the decision variables and $F(x)$ the objective function. The constraints can be expressed as nonnegative penalty functions $P_k(x) \ge 0$ that are zero only when $x$ represents a feasible solution. By combining the penalty functions with $F$, the original constrained problem may be reformulated as an unconstrained problem in which the goal is to minimize the quantity
$$E(x) = F(x) + \sum_k c_k P_k(x), \qquad c_k > 0.$$
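As a minimal sketch of this penalty reformulation (the quadratic objective, the single equality constraint, and the penalty weight c below are illustrative assumptions, not from the slides):

```python
# A toy penalty reformulation: minimize F(x) subject to x1 + x2 = 1.
import numpy as np

def F(x):                       # illustrative objective: a quadratic bowl
    return float(np.sum(x ** 2))

def P(x):                       # nonnegative penalty, zero only on the constraint
    return (x[0] + x[1] - 1.0) ** 2

def E(x, c=100.0):              # unconstrained surrogate: objective + weighted penalty
    return F(x) + c * P(x)

# Crude random search, just to show that minimizing E also satisfies the constraint.
rng = np.random.default_rng(0)
best = min((rng.uniform(-2, 2, size=2) for _ in range(20000)), key=E)
print(best, F(best), P(best))   # expect best close to (0.5, 0.5) with P ~ 0
```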
TSP
The TSP is simple to state but very difficult to solve. The problem is to find the shortest possible tour through a set of N vertices so that each vertex is visited exactly once. This problem is known to be NP-complete.
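To make the difficulty concrete: there are (N-1)!/2 distinct tours, so exhaustive search works only for tiny instances. A small illustrative sketch (random cities, N = 7):

```python
# Exhaustive TSP: only feasible for tiny N, since there are (N-1)!/2 distinct tours.
import itertools
import numpy as np

rng = np.random.default_rng(1)
cities = rng.uniform(0, 1, size=(7, 2))           # 7 random cities in the unit square

def tour_length(order):
    pts = cities[list(order)]
    return float(np.sum(np.linalg.norm(pts - np.roll(pts, -1, axis=0), axis=1)))

# Fix city 0 as the start so rotations of the same tour are not counted twice.
best = min(itertools.permutations(range(1, len(cities))),
           key=lambda rest: tour_length((0,) + rest))
print("optimal tour:", (0,) + best, "length:", round(tour_length((0,) + best), 3))
```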
Why neural networks?
Drawbacks of conventional computing systems:
- Perform poorly on complex problems
- Lack the computational power
- Do not exploit the inherent parallelism of problems
Advantages of artificial neural networks:
- Perform well even on complex problems
- Very fast computational cycles if implemented in hardware
- Can take advantage of the inherent parallelism of problems
Some efforts to solve optimization problems
Many ANN algorithms with different architectures have been used to solve different optimization problems. We have selected:
- Hopfield NN
- Elastic Net
- Self-Organizing Map NN
Hopfield-Tank model
The TSP must be mapped, in some way, onto the neural network structure: each row of the unit array corresponds to a particular city and each column to a particular position in the tour.
Mapping the TSP to a Hopfield neural net
- There is a connection between each pair of units.
- The signal sent along the connection from unit i to unit j equals the weight $T_{ij}$ if unit i is activated, and 0 otherwise.
- A negative weight defines an inhibitory connection between the two units: it is unlikely that two units joined by a negative weight will be active ("on") at the same time.
Discrete Hopfield model
- The connection weights are not learned.
- The network evolves by updating the activation of each unit in turn; units are updated at random, one unit at a time.
- In the final state, all units are stable according to the update rule.
Notation: $\{V_i\}_{i=1,\dots,L}$, where $L$ is the number of units, $V_i$ is the activation level of unit $i$, $T_{ij}$ is the connection weight between units $i$ and $j$, and $\theta_i$ is the threshold of unit $i$. The update rule sets $V_i = 1$ if $\sum_j T_{ij} V_j > \theta_i$, and $V_i = 0$ otherwise.
Discrete Hopfield model (cont.)
Energy function:
$$E = -\frac{1}{2}\sum_i \sum_j T_{ij} V_i V_j + \sum_i \theta_i V_i$$
A unit changes its activation level if and only if the energy of the network decreases by doing so. Since the energy can only decrease over time and the number of configurations is finite, the network must converge (but not necessarily to the minimum-energy state).
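A runnable sketch of these dynamics, assuming random symmetric weights with zero diagonal (all sizes and values are illustrative):

```python
# Discrete Hopfield network: random asynchronous updates; energy never increases.
import numpy as np

rng = np.random.default_rng(2)
L = 20
T = rng.normal(size=(L, L)); T = (T + T.T) / 2    # symmetric weights
np.fill_diagonal(T, 0.0)                          # no self-connections
theta = rng.normal(size=L)                        # thresholds
V = rng.integers(0, 2, size=L).astype(float)      # random 0/1 start state

def energy(V):
    return -0.5 * V @ T @ V + theta @ V

prev = energy(V)
for step in range(2000):
    i = rng.integers(L)                           # pick one unit at random
    V[i] = 1.0 if T[i] @ V > theta[i] else 0.0    # threshold update rule
    e = energy(V)
    assert e <= prev + 1e-9                       # energy is non-increasing
    prev = e
print("final energy:", round(prev, 3))
```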
Continuous Hopfield-Tank model
The neuron activation function is continuous (a sigmoid). The evolution of the units over time is now characterized by the following differential equation:
$$\frac{dU_i}{dt} = -\frac{U_i}{\tau} + \sum_j T_{ij} V_j + I_i, \qquad V_i = g(U_i) = \frac{1}{2}\left(1 + \tanh\frac{U_i}{u_0}\right)$$
where $U_i$, $I_i$, and $V_i$ are the input, input bias, and activation level of unit $i$, respectively.
Continuous Hopfield-Tank model (cont.)
Energy function:
$$E = -\frac{1}{2}\sum_i \sum_j T_{ij} V_i V_j - \sum_i I_i V_i$$
For simulation, a discrete-time approximation is applied to the equations of motion:
$$U_i(t + \Delta t) = U_i(t) + \Delta t \left(-\frac{U_i(t)}{\tau} + \sum_j T_{ij} V_j(t) + I_i\right)$$
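A sketch of that discrete-time (Euler) approximation for generic weights; τ, Δt, and the gain u0 are illustrative choices:

```python
# Euler discretization of the Hopfield-Tank equations of motion.
import numpy as np

rng = np.random.default_rng(3)
L, tau, u0, dt = 10, 1.0, 0.5, 0.01               # illustrative constants
T = rng.normal(size=(L, L)); T = (T + T.T) / 2    # symmetric weights
np.fill_diagonal(T, 0.0)
I = rng.normal(size=L)                            # input biases
U = rng.normal(scale=0.1, size=L)                 # internal unit states

def g(U):                                         # sigmoid activation
    return 0.5 * (1.0 + np.tanh(U / u0))

for _ in range(5000):
    V = g(U)
    U = U + dt * (-U / tau + T @ V + I)           # U(t+dt) = U(t) + dt * dU/dt
print("final activations:", np.round(g(U), 3))
```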
Application of the Hopfield-Tank model to the TSP
1. The TSP is represented as an N×N matrix of units: $V_{xi} = 1$ means city x occupies position i in the tour.
2. An energy function is constructed whose minima correspond to short, feasible tours:
$$E = \frac{A}{2}\sum_x \sum_i \sum_{j \ne i} V_{xi} V_{xj} + \frac{B}{2}\sum_i \sum_x \sum_{y \ne x} V_{xi} V_{yi} + \frac{C}{2}\Big(\sum_x \sum_i V_{xi} - N\Big)^2 + \frac{D}{2}\sum_x \sum_{y \ne x} \sum_i d_{xy} V_{xi} (V_{y,i+1} + V_{y,i-1})$$
3. The bias and connection weights are derived from this energy function.
Application of the Hopfield-Tank model to the TSP (cont.)
From this energy function, the connection weights and biases follow:
$$T_{xi,yj} = -A\,\delta_{xy}(1 - \delta_{ij}) - B\,\delta_{ij}(1 - \delta_{xy}) - C - D\,d_{xy}(\delta_{j,i+1} + \delta_{j,i-1}), \qquad I_{xi} = C N$$
where $\delta$ is the Kronecker delta and $d_{xy}$ is the distance between cities x and y.
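A sketch of how these weights and biases can be assembled, indexing unit (x, i) as x·N + i; the parameter values follow the ones reported on the next slide, but the city coordinates and instance size are illustrative:

```python
# Building the Hopfield-Tank TSP weights T and biases I from the energy terms.
import numpy as np

N = 5
rng = np.random.default_rng(4)
cities = rng.uniform(0, 1, size=(N, 2))
d = np.linalg.norm(cities[:, None] - cities[None, :], axis=-1)  # distance matrix
A = B = D = 500.0
C = 200.0

T = np.zeros((N * N, N * N))
I = np.full(N * N, C * N)                          # uniform bias from the C term
for x in range(N):
    for i in range(N):
        for y in range(N):
            for j in range(N):
                w = -C                              # global inhibition (C term)
                if x == y and i != j:
                    w -= A                          # one position per city (row)
                if i == j and x != y:
                    w -= B                          # one city per position (column)
                if j == (i + 1) % N or j == (i - 1) % N:
                    w -= D * d[x, y]                # tour-length term
                T[x * N + i, y * N + j] = w

# Sanity check: evaluate the energy of a feasible tour (a permutation matrix).
perm = rng.permutation(N)
V = np.zeros((N, N)); V[np.arange(N), perm] = 1.0
v = V.ravel()
print("energy of a feasible state:", round(-0.5 * v @ T @ v - I @ v, 2))
```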
Results of Hopfield-Tank
Hopfield and Tank solved a randomly generated 10-city instance with parameter values A = B = D = 500, C = 200, N = 15. They reported that over 20 trials the network converged 16 times to feasible tours; half of those tours were one of the two optimal tours.
[Figure: the size of each black square indicates the output value of the corresponding neuron.]
The main weaknesses of the original Hopfield-Tank model
(d) The model is plagued by the limitation of "hill-climbing" approaches: it converges to the nearest local minimum of the energy.
(e) The model does not guarantee feasibility: the final state may not encode a valid tour.
The positive points of the Hopfield-Tank model:
- It can easily be implemented in hardware.
- It can be applied to non-Euclidean TSPs.
Elastic net (Durbin & Willshaw, building on the Willshaw-von der Malsburg model)
Elastic net (cont.)
A ring of points ("beads") is placed in the plane and gradually deformed: each point is pulled toward nearby cities while neighboring points on the ring are kept close together, so the ring settles into a short tour passing near every city.
Energy function for the elastic net
$$E = -\alpha K \sum_i \ln \sum_j \exp\!\left(-\frac{|x_i - y_j|^2}{2K^2}\right) + \beta \sum_j |y_{j+1} - y_j|^2$$
where $x_i$ are the city positions, $y_j$ are the ring points, and $K$ is a scale parameter that is gradually reduced ("annealed") over time.
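A sketch of the standard gradient-style updates derived from this energy, Δy_j = α Σ_i w_ij (x_i − y_j) + βK (y_{j+1} − 2y_j + y_{j−1}); the parameter values, ring size, and annealing schedule are illustrative choices:

```python
# Elastic net for the TSP: a ring of points is pulled toward the cities
# (first term) while staying short and smooth (second term).
import numpy as np

rng = np.random.default_rng(5)
N, M = 10, 25                                      # cities, ring points (M ~ 2.5N)
cities = rng.uniform(0, 1, size=(N, 2))
center = cities.mean(axis=0)
angles = np.linspace(0, 2 * np.pi, M, endpoint=False)
y = center + 0.1 * np.stack([np.cos(angles), np.sin(angles)], axis=1)  # small ring

alpha, beta, K = 0.2, 2.0, 0.2
for it in range(1000):
    diff = cities[:, None, :] - y[None, :, :]      # (N, M, 2): x_i - y_j
    sq = np.sum(diff ** 2, axis=-1)                # squared distances
    w = np.exp(-sq / (2 * K ** 2))
    w /= w.sum(axis=1, keepdims=True)              # normalize over ring points
    pull = alpha * np.einsum('ij,ijk->jk', w, diff)           # city attraction
    smooth = beta * K * (np.roll(y, -1, 0) - 2 * y + np.roll(y, 1, 0))
    y += pull + smooth
    K = max(0.01, K * 0.997)                       # anneal the scale parameter
print("ring points:\n", np.round(y, 2))
```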
The self-organizing map
- SOMs are instances of "competitive" neural networks, used as unsupervised learning systems to classify data.
- Learning proceeds by adjusting the weights of the winning unit and of its neighbors.
- The SOM is related to the elastic net, but differs from it: updates are driven by one input at a time rather than by descending an explicit global energy function.
Competitive network
A competitive network groups a set of M I-dimensional input patterns into K clusters (K ≤ M).
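A minimal sketch of winner-take-all competitive learning (the three Gaussian clusters, learning rate, and epoch count are illustrative assumptions):

```python
# Competitive (winner-take-all) learning: K weight vectors compete for each
# input pattern; only the closest one moves toward the input.
import numpy as np

rng = np.random.default_rng(6)
I, K = 2, 3                                        # input dimension, clusters
data = np.concatenate([rng.normal(c, 0.1, size=(66, I))
                       for c in ((0, 0), (1, 0), (0, 1))])
W = rng.uniform(0, 1, size=(K, I))                 # K prototype weight vectors

eta = 0.1
for epoch in range(20):
    for x in rng.permutation(data):
        k = np.argmin(np.linalg.norm(W - x, axis=1))  # winning unit
        W[k] += eta * (x - W[k])                      # move winner toward input
print("cluster prototypes:\n", np.round(W, 2))
```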
SOM in the TSP context
A set of 2-dimensional city coordinates must be mapped onto a set of 1-dimensional positions in the tour.
SOM in the TSP context (cont.)
The units are arranged in a ring. Cities are presented one at a time; the unit closest to the city (the winner) and its neighbors on the ring move toward it, so the ring gradually conforms to a short tour.
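A sketch of these ring-SOM updates, using a Gaussian neighborhood on the ring and the shrinking neighborhood and learning rate mentioned on the next slide; all schedules and sizes are illustrative assumptions:

```python
# SOM for the TSP: units live on a 1-D ring; each city pulls its winning unit
# and that unit's ring neighbors, with the neighborhood shrinking over time.
import numpy as np

rng = np.random.default_rng(7)
N, M = 10, 30                                      # cities, ring units
cities = rng.uniform(0, 1, size=(N, 2))
W = rng.uniform(0, 1, size=(M, 2))                 # unit positions in the plane

eta, sigma = 0.8, M / 4.0
for it in range(2000):
    x = cities[rng.integers(N)]                    # present one random city
    k = np.argmin(np.linalg.norm(W - x, axis=1))   # winning unit
    ring = np.arange(M)
    dist = np.minimum(np.abs(ring - k), M - np.abs(ring - k))  # ring distance
    h = np.exp(-(dist ** 2) / (2 * sigma ** 2))    # neighborhood function
    W += eta * h[:, None] * (x - W)                # pull winner and neighbors
    eta *= 0.999; sigma = max(0.5, sigma * 0.998)  # shrink over time (cf. Fort)
# Read off the tour: order cities by their winning unit's ring index.
tour = np.argsort([np.argmin(np.linalg.norm(W - c, axis=1)) for c in cities])
print("tour order:", tour)
```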
SOM variants for the TSP
Fort increased the speed of convergence by shrinking the neighborhood and reducing the modification of neighboring units' weights over time. Angéniol et al. instead let the number of units on the ring vary dynamically, creating and deleting units as the tour evolves.
Questions?