Neural Networks for Optimization
Bill Wolfe, California State University Channel Islands

Neural Models
- Simple processing units, lots of them, highly interconnected
- Units exchange excitatory and inhibitory signals
- Variety of connection architectures and strengths
- "Learning": changes in connection strengths
- "Knowledge": the connection architecture
- No central processor: distributed processing

Simple Neural Model
- a_i: activation
- e_i: external input
- w_ij: connection strength
Assume w_ij = w_ji (a "symmetric" network), so W = (w_ij) is a symmetric matrix.

Net Input
net_i = ∑_j w_ij a_j + e_i
Vector format: net = W a + e

Dynamics
Basic idea: da/dt = net, i.e., each activation moves in the direction of its net input.

Energy
E = -1/2 ∑_i ∑_j w_ij a_i a_j - ∑_i e_i a_i, so that -grad(E) = W a + e = net

Lower Energy
da/dt = net = -grad(E) ⇒ the dynamics seek lower energy
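The gradient dynamics above can be sketched numerically. A minimal simulation, assuming an illustrative random symmetric W (shifted to be negative definite so E is bounded below) and a small Euler step; none of these particular values come from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
W = (A + A.T) / 2                      # symmetric connections: w_ij = w_ji
# shift the spectrum so W is negative definite and E is bounded below
W -= (np.abs(np.linalg.eigvalsh(W)).max() + 1.0) * np.eye(n)
e = rng.standard_normal(n)             # external input

def energy(a):
    # E = -1/2 a^T W a - e^T a, so that -grad(E) = W a + e = net
    return -0.5 * a @ W @ a - e @ a

a = rng.standard_normal(n)
dt = 0.01
energies = [energy(a)]
for _ in range(500):
    net = W @ a + e                    # net input
    a = a + dt * net                   # Euler step along -grad(E)
    energies.append(energy(a))

print(energies[0], "->", energies[-1])  # energy decreases along the trajectory
```

With a small enough step, each Euler update moves downhill on E, which is the "seeks lower energy" behavior the slide describes.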

Problem: Divergence
With no bounds on the activations, the linear dynamics can grow without limit.

A Fix: Saturation

- Keeps the activation vector inside the hypercube boundaries
- Encourages convergence to corners
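A sketch of the saturated dynamics: the same Euler update, but with activations clamped to the unit hypercube [0, 1]^n after each step. The network here is a small, completely inhibitory net with constant external input e = 1/2 (the setting of the k-winner slides); the initial activations are an illustrative choice:

```python
import numpy as np

n = 4
W = -(np.ones((n, n)) - np.eye(n))    # w_ij = -1 for i != j, no self-connection
e = np.full(n, 0.5)                   # external input e = 1/2

a = np.array([0.6, 0.3, 0.2, 0.1])    # initial activations inside the hypercube
dt = 0.05
for _ in range(2000):
    a = a + dt * (W @ a + e)          # gradient step
    a = np.clip(a, 0.0, 1.0)          # saturation: stay inside the hypercube

print(np.round(a, 3))
```

The inhibition amplifies differences between units while the clamp keeps the state bounded, so the trajectory converges to a corner: the unit with the largest initial activation saturates at 1 and the rest at 0.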

Summary: The Neural Model
- a_i: activation
- e_i: external input
- w_ij: connection strength
- W = (w_ij) symmetric (w_ij = w_ji)

Example: Inhibitory Networks
- Completely inhibitory: w_ij = -1 for all i, j (k-winner networks)
- Inhibitory grid: neighborhood inhibition

Traveling Salesman Problem
- Classic combinatorial optimization problem
- Find the shortest "tour" through n cities
- There are n!/(2n) = (n-1)!/2 distinct tours
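The tour count is easy to check: fixing the starting city and the direction of travel removes a factor of 2n from the n! orderings. A small sketch:

```python
from math import factorial

def num_tours(n: int) -> int:
    # n!/(2n) = (n-1)!/2 distinct tours of n cities
    return factorial(n - 1) // 2

for n in (4, 5, 10):
    print(n, num_tours(n))
# n=4 -> 3, n=5 -> 12, n=10 -> 181440
```

Even at n = 10 there are 181,440 tours, which is why exhaustive search is hopeless for the 15,000-city instances shown later.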

TSP solution for 15,000 cities in Germany

TSP 50 City Example

Random

Nearest-City

2-OPT
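The two classical heuristics named on these slides can be sketched briefly: nearest-city construction (always hop to the closest unvisited city) and 2-opt improvement (reverse a tour segment whenever that shortens the tour). The random city layout below is an illustrative assumption, not the slides' 50-city instance:

```python
import math
import random

def tour_length(cities, tour):
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def nearest_city_tour(cities, start=0):
    # Greedy construction: repeatedly visit the closest unvisited city.
    unvisited = set(range(len(cities))) - {start}
    tour = [start]
    while unvisited:
        nxt = min(unvisited, key=lambda c: math.dist(cities[tour[-1]], cities[c]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def two_opt(cities, tour):
    # Local improvement: reverse segments while doing so shortens the tour.
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                candidate = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(cities, candidate) < tour_length(cities, tour) - 1e-12:
                    tour, improved = candidate, True
    return tour

random.seed(1)
cities = [(random.random(), random.random()) for _ in range(20)]
nc_tour = nearest_city_tour(cities)
opt_tour = two_opt(cities, nc_tour)
print(round(tour_length(cities, nc_tour), 3), round(tour_length(cities, opt_tour), 3))
```

By construction the 2-opt result is never longer than the nearest-city tour it starts from; the Lin-Kernighan heuristic cited next generalizes these segment-reversal moves.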

An Effective Heuristic for the Traveling Salesman Problem, S. Lin and B. W. Kernighan, Operations Research, 1973.

Centroid

Monotonic

Neural Network Approach
One neuron for each (city, time stop) pair, arranged in an n × n grid.

Tours – Permutation Matrices
Example tour: C → D → B → A. Permutation matrices correspond to the "feasible" states.

Not Allowed

Only one city per time stop, and only one time stop per city ⇒ inhibitory rows and columns.
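The feasibility constraint can be made concrete: encode a tour as a 0/1 matrix with a[i, x] = 1 when city x is visited at time stop i, and check that it is a permutation matrix. The tour C → D → B → A is from the slides; labeling the cities A, B, C, D as indices 0–3 is an assumed convention:

```python
import numpy as np

cities = ["A", "B", "C", "D"]
tour = ["C", "D", "B", "A"]

# a[i, x] = 1 means city x is visited at time stop i
a = np.zeros((4, 4), dtype=int)
for i, city in enumerate(tour):
    a[i, cities.index(city)] = 1

# Feasible state = permutation matrix: exactly one 1 per row
# (one city per time stop) and one 1 per column (one time stop per city).
feasible = (a.sum(axis=0) == 1).all() and (a.sum(axis=1) == 1).all()
print(a)
print(feasible)  # True
```

The inhibitory rows and columns of the network are exactly what pushes the activation matrix toward states that pass this check.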

Distance Connections: Inhibit the neighboring cities in proportion to their distances.

Putting it all together:

Research Questions
- Which architecture is best?
- Does the network produce feasible solutions? High-quality solutions? Optimal solutions?
- How do the initial activations affect network performance?
- Is the network similar to "nearest city" or any other traditional heuristic?
- How does the particular city configuration affect network performance?
- Is there a better way to understand the nonlinear dynamics?

Typical state of the network before convergence.

“Fuzzy Readout”

Neural Activations and Fuzzy Tour: Initial Phase

Neural Activations and Fuzzy Tour: Monotonic Phase

Neural Activations and Fuzzy Tour: Nearest-City Phase

Fuzzy Tour Lengths (tour length vs. iteration)

Average Results for n = 10 to n = 70 cities (50 random runs per n)

DEMO 2 Applet by Darrell Long

Conclusions
- Neurons stimulate intriguing computational models.
- The models are complex, nonlinear, and difficult to analyze.
- The interaction of many simple processing units is difficult to visualize.
- The neural model for the TSP mimics some of the properties of the nearest-city heuristic.
- Much work remains to be done to understand these models.

EXTRA SLIDES

E = -1/2 ∑_i ∑_x ∑_j ∑_y a_ix a_jy w_ix,jy
  = -1/2 { ∑_i ∑_x ∑_y (-d(x,y)) a_ix (a_(i+1)y + a_(i-1)y)
         + ∑_i ∑_x ∑_j (-1/n) a_ix a_jx
         + ∑_i ∑_x ∑_y (-1/n) a_ix a_iy
         + ∑_i ∑_x ∑_j ∑_y (1/n^2) a_ix a_jy }

w_ix,jy =
  1/n^2 - 1/n       y = x, j ≠ i  (row)
  1/n^2 - 1/n       y ≠ x, j = i  (column)
  1/n^2 - 2/n       y = x, j = i  (self)
  1/n^2 - d(x,y)    y ≠ x, j = i+1 or j = i-1  (distance)
  1/n^2             j ≠ i-1, i, i+1 and y ≠ x  (global)
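The weight formula can be assembled directly and checked for the symmetry (w_ix,jy = w_jy,ix) that the energy function requires. The random city coordinates are illustrative, and treating j = i ± 1 cyclically (mod n, for a closed tour) is an assumption consistent with the a_(i+1)y + a_(i-1)y terms in the energy:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
pts = rng.random((n, 2))
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)  # d(x, y)

w = np.full((n, n, n, n), 1.0 / n**2)  # global term: j != i-1, i, i+1 and y != x
for i in range(n):
    for x in range(n):
        for j in range(n):
            for y in range(n):
                if y == x and j != i:
                    w[i, x, j, y] = 1/n**2 - 1/n        # row term
                elif y != x and j == i:
                    w[i, x, j, y] = 1/n**2 - 1/n        # column term
                elif y == x and j == i:
                    w[i, x, j, y] = 1/n**2 - 2/n        # self term
                elif j in ((i + 1) % n, (i - 1) % n):
                    w[i, x, j, y] = 1/n**2 - d[x, y]    # distance term, adjacent stops

# Flatten (i, x) and (j, y) into single indices and check symmetry.
W = w.reshape(n * n, n * n)
print(np.allclose(W, W.T))  # True
```

Symmetry holds because each case is invariant under swapping (i, x) with (j, y) and d(x, y) = d(y, x), so the network has a well-defined energy.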

Brain
- Approximately 10^11 neurons
- Neurons are relatively simple
- Approximately 10^4 fan-out
- No central processor
- Neurons communicate via excitatory and inhibitory signals
- Learning is associated with modifications of connection strengths between neurons

Fuzzy Tour Lengths (tour length vs. iteration)

Average Results for n = 10 to n = 70 cities (50 random runs per n); tour length vs. number of cities

with external input e = 1/2

Perfect k-Winner Performance: e = k - 1/2
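The e = k - 1/2 rule can be sketched for k = 2: a completely inhibitory network (w_ij = -1 for i ≠ j) with constant external input e = 1.5 and saturated dynamics. The initial activations are an illustrative choice; the two units that start largest should saturate at 1 and the rest at 0:

```python
import numpy as np

n, k = 4, 2
W = -(np.ones((n, n)) - np.eye(n))   # w_ij = -1 for i != j, no self-connection
e = np.full(n, k - 0.5)              # e = k - 1/2 = 1.5

a = np.array([0.6, 0.5, 0.3, 0.1])   # illustrative initial activations
dt = 0.05
for _ in range(3000):
    a = np.clip(a + dt * (W @ a + e), 0.0, 1.0)  # saturated gradient step

print(np.round(a, 3))
```

At a corner with exactly k winners, each winner receives net input e - (k - 1) = 1/2 > 0 and each loser receives e - k = -1/2 < 0, so only the k-winner corners are stable, which is the sense in which this input level gives perfect k-winner performance.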