Neural Networks for Optimization William J. Wolfe California State University Channel Islands
Neural Models
– Simple processing units; lots of them; highly interconnected
– Exchange excitatory and inhibitory signals
– Variety of connection architectures/strengths
– "Learning": changes in connection strengths
– "Knowledge": connection architecture
– No central processor: distributed processing
Simple Neural Model
– a_i : activation
– e_i : external input
– w_ij : connection strength
Assume w_ij = w_ji ("symmetric" network), so W = (w_ij) is a symmetric matrix.
Net Input: net_i = Σ_j w_ij a_j + e_i
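The net input above can be written as a single matrix-vector product. A minimal sketch (the weight, activation, and input values below are illustrative, not from the slides):

```python
import numpy as np

# net_i = sum_j w_ij * a_j + e_i, for all units at once: net = W a + e
def net_input(W, a, e):
    return W @ a + e

W = np.array([[0.0, -1.0], [-1.0, 0.0]])  # symmetric weights (w_ij = w_ji)
a = np.array([0.5, 0.25])                  # activations
e = np.array([0.5, 0.5])                   # external inputs
print(net_input(W, a, e))                  # -> [0.25  0.  ]
```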
Dynamics. Basic idea: da_i/dt = net_i (each unit moves in the direction of its net input).
Energy: E(a) = -(1/2) Σ_i Σ_j w_ij a_i a_j - Σ_i e_i a_i
Lower Energy: da/dt = net = -grad(E), so the dynamics seek lower energy.
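A quick numerical check of the gradient-descent claim, using Euler steps on da/dt = Wa + e (the specific weights and step size are illustrative assumptions):

```python
import numpy as np

# E(a) = -1/2 a^T W a - e^T a, and da/dt = W a + e = -grad E(a)
def energy(W, a, e):
    return -0.5 * a @ W @ a - e @ a

W = np.array([[0.0, -1.0], [-1.0, 0.0]])  # symmetric weights
e = np.array([0.5, 0.5])
a = np.array([0.9, 0.1])
E_before = energy(W, a, e)
for _ in range(100):
    a = a + 0.01 * (W @ a + e)   # Euler step down the energy gradient
E_after = energy(W, a, e)
print(E_before, E_after)          # energy decreases along the trajectory
```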
Problem: Divergence. With purely linear dynamics, the activations can grow without bound.
A Fix: Saturation. Clamp each activation to [0, 1]:
– Keeps the activation vector inside the hypercube boundaries
– Encourages convergence to corners
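A sketch of saturated dynamics: Euler steps clipped to the unit hypercube. With mutual inhibition and positive external input, the state is driven toward a corner (the weights, input e = 1/2, and step size are illustrative assumptions):

```python
import numpy as np

def saturated_step(W, a, e, dt=0.1):
    # Euler step on da/dt = W a + e, then clamp to the hypercube [0,1]^n
    a = a + dt * (W @ a + e)
    return np.clip(a, 0.0, 1.0)

W = -np.ones((3, 3)) + np.eye(3)   # w_ij = -1 for i != j, w_ii = 0
e = np.full(3, 0.5)                # external input e = 1/2
a = np.array([0.6, 0.3, 0.1])
for _ in range(200):
    a = saturated_step(W, a, e)
print(a)  # -> [1. 0. 0.]: the largest initial activation wins the race
```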
Summary: The Neural Model
– a_i : activation
– e_i : external input
– w_ij : connection strength
– W = (w_ij), with w_ij = w_ji (symmetric)
Example: Inhibitory Networks
– Completely inhibitory: w_ij = -1 for all i ≠ j (k-winner)
– Inhibitory grid: neighborhood inhibition
Traveling Salesman Problem
– Classic combinatorial optimization problem
– Find the shortest "tour" through n cities
– n!/(2n) = (n-1)!/2 distinct tours
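The tour count n!/(2n) follows from fixing a starting city and discounting the two traversal directions; a one-line check:

```python
import math

# Distinct undirected tours through n cities: n!/(2n) = (n-1)!/2
def num_tours(n):
    return math.factorial(n) // (2 * n)

print(num_tours(10))  # -> 181440
```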
TSP 50-City Example (figures): tours produced by the Random, Nearest-City, 2-OPT, Centroid, and Monotonic heuristics.
Neural Network Approach: an n × n grid of neurons, one for each (city, time-stop) pair.
Tours – Permutation Matrices. Example tour: C→D→B→A. Permutation matrices correspond to the "feasible" states.
Not Allowed: states that violate the tour constraints.
– Only one city per time stop
– Only one time stop per city
– Rows and columns are inhibitory.
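The row/column constraints can be encoded directly as inhibitory weights. A sketch (the function name and -1 inhibition strength are assumptions, following the completely inhibitory example earlier):

```python
import numpy as np

# Neuron (i, s) means "city i at time stop s". Inhibit every other neuron
# in the same row (same city) and same column (same stop), so the stable
# corners of the hypercube are permutation matrices.
def constraint_weights(n):
    W = np.zeros((n, n, n, n))
    for i in range(n):
        for s in range(n):
            W[i, s, i, :] = -1.0   # same city, all other stops
            W[i, s, :, s] = -1.0   # same stop, all other cities
            W[i, s, i, s] = 0.0    # no self-connection
    return W.reshape(n * n, n * n)

W = constraint_weights(4)
print(W.shape)  # -> (16, 16)
```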
Distance Connections: Inhibit the neighboring cities in proportion to their distances.
Putting it all together: combine the row/column constraint inhibition with the distance connections.
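A hedged sketch of the combined weight matrix: constraint inhibition plus distance-scaled inhibition between adjacent tour stops. The penalty weights `alpha` and `beta` are illustrative assumptions, not values from the slides:

```python
import numpy as np

def tsp_weights(D, alpha=1.0, beta=0.5):
    # D[i, j]: symmetric city-to-city distance matrix
    n = D.shape[0]
    W = np.zeros((n, n, n, n))
    for i in range(n):
        for s in range(n):
            W[i, s, i, :] -= alpha      # one stop per city
            W[i, s, :, s] -= alpha      # one city per stop
            W[i, s, i, s] = 0.0         # no self-connection
            for j in range(n):
                if j != i:
                    # inhibit city j at the neighboring stops, in
                    # proportion to its distance from city i
                    W[i, s, j, (s - 1) % n] -= beta * D[i, j]
                    W[i, s, j, (s + 1) % n] -= beta * D[i, j]
    return W.reshape(n * n, n * n)

D = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 3.0],
              [2.0, 3.0, 0.0]])
W = tsp_weights(D)
print(W.shape)  # -> (9, 9)
```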
Research Questions
– Which architecture is best?
– Does the network produce feasible solutions? High-quality solutions? Optimal solutions?
– How do the initial activations affect network performance?
– Is the network similar to "nearest city" or any other traditional heuristic?
– How does the particular city configuration affect network performance?
– Is there any way to understand the nonlinear dynamics?
Typical state of the network before convergence.
“Fuzzy Readout”
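One simple way to read a tour out of non-binary activations is to take, at each time stop, the city with the largest activation. This is an assumed readout rule for illustration, not necessarily the slides' exact "fuzzy" definition:

```python
import numpy as np

# A[i, s]: activation of city i at time stop s (values are illustrative)
def readout(A):
    # at each stop, pick the most active city
    return [int(np.argmax(A[:, s])) for s in range(A.shape[1])]

A = np.array([[0.1, 0.7, 0.2],
              [0.8, 0.1, 0.3],
              [0.2, 0.3, 0.9]])
print(readout(A))  # -> [1, 0, 2]
```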
Neural Activations / Fuzzy Tour: Initial Phase
Neural Activations / Fuzzy Tour: Monotonic Phase
Neural Activations / Fuzzy Tour: Nearest-City Phase
Fuzzy Tour Lengths (figure: tour length vs. iteration).
Average Results for n = 10 to n = 70 cities, 50 random runs per n (figure: results vs. # cities).
DEMO 2 Applet by Darrell Long
Conclusions
– Neurons inspire intriguing computational models.
– The models are complex, nonlinear, and difficult to analyze.
– The interaction of many simple processing units is difficult to visualize.
– The neural model for the TSP mimics some properties of the nearest-city heuristic.
– Much work remains to be done to understand these models.
EXTRA SLIDES
Brain
– On the order of 10^11 neurons; each neuron is relatively simple
– Approximately 10^4 fan-out
– No central processor
– Neurons communicate via excitatory and inhibitory signals
– Learning is associated with modifications of connection strengths between neurons
Fuzzy Tour Lengths (figure: tour length vs. iteration).
Average Results for n = 10 to n = 70 cities, 50 random runs per n (figure: tour length vs. # cities).
Completely inhibitory network with external input e = 1/2 (the k = 1 case).
Perfect k-winner performance: e = k - 1/2.