Polynomial Discrete Time Cellular Neural Networks
Eduardo Gomez-Ramirez† and Giovanni Egidio Pazienza‡
† LIDETEA, Posgrado e Investigación, Universidad La Salle – México, D.F.
‡ Department d'Electronica, EALS, Universitat Ramon Llull – Barcelona, Spain

Outline
- Cellular Neural Networks (CNN): Introduction and Objective
- Genetic Algorithms (GA)
- Polynomial Discrete Time CNNs (PDTCNNs)
- XOR Problem
- Game of Life
- Learning vs. Design
- Conclusions and Future Work

CNN: Introduction
To handle complex tasks (linearly non-separable data), a CNN can be extended in several ways:
- Multilayer CNNs
- More degrees of freedom for the output state of each layer
- Search in a finite set of templates
- Single layer: Polynomial CNNs

Objective
Improve the representational power of a single-layer CNN by including a simple nonlinear term, in order to solve problems with linearly non-separable data (e.g., the XOR problem).

CNN: mathematical model
The simplified mathematical model is given below, where x_c is the state of the cell, u_c its input, and y_c its output.
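The state equation itself appeared only as an image in the original slide; a standard form of the simplified (space-invariant) CNN model, presumably what was shown, is:

\[
\dot{x}_c(t) = -x_c(t) + \sum_{d \in N_r(c)} a_d\, y_d(t) + \sum_{d \in N_r(c)} b_d\, u_d + i, \qquad y_c = f(x_c),
\]

where N_r(c) is the r-neighborhood of cell c, the a_d and b_d are the feedback and control template entries, and i is the bias.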

CNN: Activation Function
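The plot on this slide is not in the transcript; the conventional CNN output nonlinearity, presumably what was shown, is the piecewise-linear saturation

\[
y_c = f(x_c) = \tfrac{1}{2}\big(|x_c + 1| - |x_c - 1|\big),
\]

which is the identity for |x_c| ≤ 1 and saturates at ±1 elsewhere.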

CNN: Block Diagram

CNN: Discrete Model
Computing x(∞), the model can be represented in discrete time with a hard-threshold activation function, as sketched below.
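The discrete-time equations were images in the original slide; a standard DTCNN formulation consistent with the surrounding text is

\[
x_c(n+1) = \sum_{d \in N_r(c)} a_d\, y_d(n) + \sum_{d \in N_r(c)} b_d\, u_d + i, \qquad
y_c(n) = \operatorname{sgn}\big(x_c(n)\big) = \begin{cases} +1 & \text{if } x_c(n) \ge 0 \\ -1 & \text{otherwise.} \end{cases}
\]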

GA: main steps proposed
Steps:
- Crossover C(F_g)
- Mutation M(*)
- Adding random parent A_g( )
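As a rough illustration of how the three steps fit together, here is a minimal sketch in Python; the helper functions (fitness, random_individual, crossover, mutate) and the control flow are my own stand-ins, not the authors' implementation, with defaults taken from the learning-parameters slide below:

    import random

    def evolve(fitness, random_individual, crossover, mutate,
               pop_size=20000, n_parents=7, n_random_parents=3,
               p_mutation=0.15, generations=100):
        """Minimal GA skeleton: select parents, cross them over,
        mutate some offspring, and add a few random parents."""
        population = [random_individual() for _ in range(pop_size)]
        for _ in range(generations):
            # Keep the fittest individuals as parents.
            parents = sorted(population, key=fitness, reverse=True)[:n_parents]
            # Add random parents to preserve diversity (see the GA steps above).
            parents += [random_individual() for _ in range(n_random_parents)]
            # Rebuild the population by crossover plus occasional mutation.
            population = []
            while len(population) < pop_size:
                p1, p2 = random.sample(parents, 2)
                child = crossover(p1, p2)
                if random.random() < p_mutation:
                    child = mutate(child)
                population.append(child)
        return max(population, key=fitness)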

GA: Crossover

GA: Crossover
Example at iteration I = 0 with two individuals, parameterized as (a_1, b_1, c_1) and (a_2, b_2, c_2).
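The crossover operator is shown only graphically; a plausible reading, with individuals encoded as parameter tuples such as (a_1, b_1, c_1) and (a_2, b_2, c_2), is a one-point exchange like the sketch below (the encoding and the random cut point are assumptions):

    import random

    def crossover(parent1, parent2):
        """One-point crossover of two equal-length parameter tuples."""
        cut = random.randint(1, len(parent1) - 1)
        # The child takes the head of parent1 and the tail of parent2.
        return parent1[:cut] + parent2[cut:]

    # Example with symbolic parameters (a1, b1, c1) and (a2, b2, c2):
    child = crossover(("a1", "b1", "c1"), ("a2", "b2", "c2"))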

GA: Mutation
The mutation operator uses r ~ U(0,1), a random variable with uniform distribution defined on a probability space (Ω, F, P).
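The mutation formula itself is not legible in the transcript; a minimal sketch consistent with the description, where a draw r ~ U(0,1) perturbs one parameter at a given resolution, might look as follows (the perturbation form and the resolution parameter are assumptions):

    import random

    def mutate(individual, resolution=1.0):
        """Perturb one randomly chosen parameter; r ~ U(0,1) sets the step."""
        i = random.randrange(len(individual))
        r = random.random()  # r ~ U(0,1)
        mutated = list(individual)
        mutated[i] += (2.0 * r - 1.0) * resolution  # step in [-resolution, resolution]
        return tuple(mutated)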

GA: Mutation (resolution)
Example at iteration I = 0 with two individuals, (a_1, b_1, c_1) and (a_2, b_2, c_2).

GA: Selecting Parents

GA: Adding Random Parent

Polynomial Discrete Time Cellular Neural Network
THEOREM 1 (Weierstrass's Approximation Theorem): Let g be a continuous real-valued function defined on a closed interval [a,b]. Then, given any ε > 0, there exists a polynomial y (which may depend on ε) with real coefficients such that |g(x) − y(x)| < ε for every x ∈ [a,b].

Polynomial Discrete Time Cellular Neural Network
THEOREM 2*: Any Boolean function of n variables can be realized using a polynomial threshold gate of order s ≤ n. The quadratic threshold gate can be defined as shown below, where s is the number of inputs and T is the threshold constant.

* N. J. Nilsson, The Mathematical Foundations of Learning Machines. McGraw Hill, New York, 1990.
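The gate's defining formula was an image in the original slide; the standard form of a quadratic (order-2) threshold gate, following Nilsson, tests a second-degree polynomial of the inputs against the threshold:

\[
y = \begin{cases} 1 & \text{if } \sum_{i=1}^{s}\sum_{j=i}^{s} w_{ij}\, x_i x_j + \sum_{i=1}^{s} w_i\, x_i \ \ge\ T \\ 0 & \text{otherwise.} \end{cases}
\]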

PDTCNN: the model (I)

PDTCNN: the model (II)

PDTCNN: Solving the XOR problem
Some papers:
- Z. Yang, Y. Nishio, A. Ushida, "Templates and algorithms for two-layer cellular neural networks," IJCNN'02.
- F. Chen, G. He, G. Chen, X. Xu, "Implementation of Arbitrary Boolean Functions via CNN," CNNA'06.

PDTCNN: Solving the XOR problem
- M. Balsi, "Generalized CNN: Potentials of a CNN with Non-Uniform Weights," CNNA-92.
- E. Bilgili, I. C. Göknar, O. N. Ucan, "Cellular neural network with trapezoidal activation function," Int. J. Circ. Theor. Appl., 2005.

Learning parameters
- Initial population = 20000
- Number of fathers = 7
- Maximum number of random parents to be added = 3
- Kpro = 0.8
- Increment = 1
- Mutation probability = 0.15

PDTCNN: First Scheme
U: u_{ij} = x_{ij} … x_{i,j+1}

PDTCNN: Second Scheme
U: u_{ij} = x_{ij} … y_{ij}

The Game of Life (I)
The Game of Life (GoL) is a totalistic cellular automaton consisting of a two-dimensional grid of cells, each of which may be either alive (black) or dead (white).

The Game of Life (II)
The state of each cell varies according to the following rules:
- Birth: a cell that is dead at time t becomes alive at time t + 1 only if exactly 3 of its neighbors were alive at time t;
- Survival: a cell that was alive at time t will remain alive at t + 1 if and only if it had exactly 2 or 3 alive neighbors at time t.
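These two rules translate directly into code; a minimal Python sketch (the grid representation and names are mine, not from the slides):

    def gol_step(grid):
        """One Game of Life update on a 2D list of 0/1 cells (dead/alive).
        Cells outside the grid are treated as dead."""
        rows, cols = len(grid), len(grid[0])

        def alive_neighbors(r, c):
            return sum(grid[i][j]
                       for i in range(max(0, r - 1), min(rows, r + 2))
                       for j in range(max(0, c - 1), min(cols, c + 2))
                       if (i, j) != (r, c))

        new = [[0] * cols for _ in range(rows)]
        for r in range(rows):
            for c in range(cols):
                n = alive_neighbors(r, c)
                if grid[r][c] == 0:
                    new[r][c] = 1 if n == 3 else 0       # birth
                else:
                    new[r][c] = 1 if n in (2, 3) else 0  # survival
        return new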

The Game of Life (III)
- Every sufficiently well-stated mathematical problem can be reduced to a question about Life;
- It is possible to make a Life computer (logic gates, storage, etc.);
- Life is universal: it can be programmed to perform any desired calculation;
- Given a large enough Life space and enough time, self-reproducing animals will emerge...
- The whole universe is a CA! (E. Fredkin, MIT)

The Game of Life – NOT gate

CNN & GoL
- Multilayer CNN (Chua, Roska) – 1990
- Activation function (Chua, Roska) – 1990
- CNN-UM (Roska, Chua)
- CNN Universal Cells (Dogaru, Chua) – 1999
Simplicity vs. computational power

Polynomial CNN (I)
What is g(u_d, y_d)?

Polynomial CNN (II)
In the simplest case, g(u_d, y_d) is a second-degree polynomial, whose general form is given below.
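The formula was an image in the original slide; the general form of a second-degree polynomial in the two variables u_d and y_d is (the coefficient names are mine):

\[
g(u_d, y_d) = k_1\, y_d^{2} + k_2\, u_d\, y_d + k_3\, u_d^{2} + k_4\, y_d + k_5\, u_d + k_6 .
\]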

Polynomial CNN (III)
After some simplifying considerations we find that…

Polynomial CNN (IV)
u_c and … appear in the state equation: a direct link with totalistic Cellular Automata.

GoL: Rules (I)

GoL: Rules (II)
Rule 1: a cell will be alive if at least 3 of the 9 cells in its 3×3 neighborhood are alive.
Encoding: black pixel = +1, white pixel = −1.
Example: center pixel = +1 (black), Σ over neighbors = −2 (5 white, 2 black), next state = −1 (white).

GoL: Rules (III)
Rule 2: a cell will be alive if at most 3 of its 8 neighbors are alive.
Encoding: black pixel = +1, white pixel = −1.
Example: center pixel = +1 (black), Σ over neighbors = −2 (5 white, 2 black), next state = +1 (black).
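Under this ±1 encoding both rules become threshold tests on the neighborhood sum (k alive neighbors give Σ = k − (8 − k) = 2k − 8), and a cell is alive at t + 1 exactly when both rules hold. A small sketch combining the two slides (the combination is my own reading):

    def next_state(center, neigh_sum):
        """center is +1 (black/alive) or -1 (white/dead);
        neigh_sum is the sum over the 8 neighbors, i.e. 2k - 8."""
        k = (neigh_sum + 8) // 2                 # number of alive neighbors
        alive_in_3x3 = k + (1 if center == 1 else 0)
        rule1 = alive_in_3x3 >= 3                # at least 3 alive in the 3x3 block
        rule2 = k <= 3                           # at most 3 alive neighbors
        return 1 if (rule1 and rule2) else -1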

Design algorithm (I)
First iteration: we try to enforce the first rule (a cell will be alive if at least 3 of the 9 cells in its 3×3 neighborhood are alive).
If Y(0) = 0: b_c = 1, b_p = 1, i = 3.

Design algorithm (II)
Second iteration: we try to enforce the second rule (a cell will be alive if at most 3 of its 8 neighbors are alive).

Design algorithm (III)
Hypothesis: p_c = 0.

Templates found using learning
Coming soon...

Conclusions (I)
In general:
- In some cases it is possible to reduce a multilayer DTCNN to a single-layer PDTCNN.
- Thanks to the GoL, we can explore the capacity of PDTCNNs as a universal machine.

Conclusions (II)
About learning:
- The resolution used reduces the search space.
- The "add random parent" step helps the algorithm avoid local minima.
About design:
- We give a simple algorithm to design templates for the Polynomial CNN.

Future Work
Implementation of mathematical morphology functions with PDTCNNs.

Polynomial Discrete Time Cellular Neural Networks
Eduardo Gomez-Ramirez, Giovanni Egidio Pazienza