Chaos in Neural Networks. Theme presentation, Cui, Shuoyang, 03/08/2005.


Artificial Neural Networks (ANNs) are a different paradigm for computing: von Neumann machines are based on the processing/memory abstraction of human information processing, while neural networks are based on the parallel architecture of animal brains.

Neural networks versus conventional computers. Conventional computers use an algorithmic approach: the problems they solve are ones we already understand and know how to solve. A neural network, by contrast, is composed of a large number of highly interconnected processing elements (neurones) working in parallel to solve a specific problem; it cannot be programmed to perform a specific task. The disadvantage is that because the network finds out how to solve the problem by itself, its operation can be unpredictable. Neural networks and conventional algorithmic computers therefore complement each other. An important application of neural networks is pattern recognition.

Neural networks are a form of multiprocessor computer system, with: simple processing elements; a high degree of interconnection; simple scalar messages; and adaptive interaction between elements.
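A single "simple processing element" of this kind can be sketched in a few lines; the weights, bias, and logistic squashing function below are illustrative choices, not values from the slides:

```python
import math

def neuron(inputs, weights, bias):
    # One simple processing element: a weighted sum of the scalar
    # messages arriving from other units, squashed by a logistic function.
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Two units reading the same scalar messages through different weights
# (weight and bias values here are made up for illustration).
msgs = [0.5, -1.0, 0.25]
out_a = neuron(msgs, [1.0, 0.5, 2.0], 0.0)
out_b = neuron(msgs, [-1.0, 1.0, 0.0], 0.1)
```

A network is then nothing more than many such units, each receiving the outputs of others as its scalar messages.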

Architecture of neural networks: feed-forward networks allow signals to travel one way only; feedback networks have signals travelling in both directions by introducing loops in the network.

Chaos and random systems. Chaos is statistically indistinguishable from randomness, and yet it is deterministic and not random at all. A chaotic system will produce the same results if given the same inputs, but it is unpredictable in the sense that you cannot predict how the system's behaviour will change for any change in the input. A random system, by contrast, will produce different results when given the same inputs.
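This contrast can be demonstrated with the logistic map, a standard one-dimensional chaotic system used here purely for illustration: identical inputs reproduce the trajectory exactly, while a perturbation of one part in a billion eventually yields a completely different trajectory.

```python
def logistic(x, r=3.9):
    # Logistic map in its chaotic regime (r = 3.9).
    return r * x * (1.0 - x)

def trajectory(x0, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1]))
    return xs

a = trajectory(0.4, 50)          # deterministic: same input, same output
b = trajectory(0.4, 50)
c = trajectory(0.4 + 1e-9, 50)   # tiny input change: trajectories diverge
```

Runs a and b agree to the last bit, while c starts indistinguishably close to a and ends up somewhere entirely different.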

Background of Chaos. Chaos lies between periodic, predictable behaviour and totally random behaviour: it is random-appearing, and yet has a large degree of underlying order. Skarda & Freeman (1987) found chaotic activity in the brain, and Freeman (1991) concluded that chaos "may be the chief property that makes the brain different from an artificial-intelligence machine". For the human brain, researchers believe chaotic background behaviour is necessary for the brain to engage in continual learning.

Chaotic neural networks. Chaotic neural networks offer greatly increased memory capacity: each memory is encoded by an Unstable Periodic Orbit (UPO) on the chaotic attractor. A chaotic attractor is a set of states in a system's state space with very special properties. First, the set is attracting: a system starting from an initial condition in the appropriate basin eventually ends up in the set. Second, and most important, once the system is on the attractor, nearby states diverge from each other exponentially fast, so small amounts of noise are amplified.
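The rate of that exponential divergence is quantified by the largest Lyapunov exponent; a positive value is the signature of chaos. A minimal estimate for the logistic map (an illustrative stand-in for a chaotic attractor, with parameter values chosen here rather than taken from the paper):

```python
import math

def lyapunov(r=3.9, x0=0.3, burn=1000, steps=20000):
    # Estimate the largest Lyapunov exponent of the logistic map by
    # averaging log|f'(x)| along a trajectory on the attractor.
    x = x0
    for _ in range(burn):               # discard the transient
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(steps):
        acc += math.log(abs(r * (1.0 - 2.0 * x)))  # |f'(x)| for f = rx(1-x)
        x = r * x * (1.0 - x)
    return acc / steps

lam = lyapunov()   # positive: nearby states separate exponentially fast
```

A perturbation of size d grows roughly like d * exp(lam * n) after n steps, which is why small amounts of noise are amplified.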

A Novel Chaotic Neural Network Architecture Nigel Crook and Tjeerd olde Scheper

The delayed feedback method is considered to be best suited to the control of chaos in neural networks: it does not rely on a priori knowledge of the system; it does not specify which UPO is to be stabilized; delays in signal transmission are inherent in all biological neuronal networks; and the feedback control amounts to delayed inhibition.

Pyragas’s delayed feedback method. The controlled system is

dy/dt = P(y, x) + F(t)
dx/dt = Q(y, x)
F(t) = K[y(t − τ) − y(t)]

where P(y, x) and Q(y, x) govern the chaotic dynamics of the system, y(t) is the output variable, F(t) is the feedback (input) signal, τ is the delay time, and K is the strength of the feedback.

On a periodic orbit whose period matches the delay, the variable y repeats the value it took at the earlier time t − τ, so the feedback vanishes there. F(t) perturbs the system back toward such a periodic state, and once the system becomes periodic, F(t) becomes very small. With this method, UPOs of different periodicities can be controlled by choosing different delay times.
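A minimal sketch of this behaviour, using a discrete-time analogue of Pyragas's feedback on the logistic map rather than the network from the paper (the values of r, K, and the activation cap are illustrative assumptions): the feedback term is built from the difference between the current and delayed state, so it is zero on the target period-1 orbit and only acts off it.

```python
def dfc_logistic(r=3.9, K=-0.6, cap=0.02, steps=100000):
    # Discrete-time delayed feedback control of the logistic map:
    #   x[n+1] = f(x[n]) + u[n],  u[n] = K * (x[n-1] - x[n]).
    # u vanishes on the period-1 orbit, so the UPO itself is unchanged.
    x_prev, x = 0.3, 0.6
    for _ in range(steps):
        u = K * (x_prev - x)
        if abs(u) > cap:           # engage only when already near the orbit
            u = 0.0
        x_prev, x = x, r * x * (1.0 - x) + u
        x = min(max(x, 0.0), 1.0)  # keep the state inside the map's domain
    return x

x_star = 1.0 - 1.0 / 3.9   # the map's unstable fixed point (period-1 UPO)
x_final = dfc_logistic()
```

Without the feedback the orbit wanders chaotically forever; with it, once the trajectory passes near the unstable fixed point the small delayed-inhibition nudges capture it, and the control signal then dies away, just as F(t) does in the continuous case.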

Chaotic Neural Networks (the chaotic units follow the model of K. Aihara, T. Takabe, and I. Tsuda, modified here). Each unit in the Chaotic Layer is governed by discrete-time equations involving the following quantities:
yi(t): the internal state of unit i at time t
a and w: parameters of the Aihara model (with a decay parameter between 0 and 1)
M: the number of units in the Chaotic Layer
N: the number of units in each inhibitory cluster
wij: the weight of the connection between units i and j in the chaotic layer, with an associated time delay
xi(t): the output activation of chaotic unit i at time t
kij: the weights of the connections from the inhibitory units to the chaotic units
zj(t): the activation of inhibitory unit j at time t
f(y): the output function, with xi(t) = f(yi(t))

Each inhibitory unit has one output and two inputs: xi(t), the activation of the associated chaotic unit i at time t, and xi(t − Dji), the activation of chaotic unit i at time t − Dji, where Dji is a randomly selected time delay. Each inhibitory unit has a different randomised time-delay connection with its associated chaotic unit.

Competition among the inhibitory units within a cluster. L is the number of input units, and Ik(t) is the activation of the kth input unit at time t. The winner is the inhibitory unit with the smallest value of h(t); it takes the active output value, and the activations of the other units in that inhibitory cluster are set to 0.
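The competition rule can be sketched as a winner-take-all function; setting the winner's activation to 1 is an illustrative assumption, since the slide does not state the winning value:

```python
def compete(h_values):
    # Winner-take-all within one inhibitory cluster: the unit with the
    # smallest h(t) wins; every other unit's activation is set to 0.
    # The winner's output value of 1.0 is assumed for illustration.
    winner = min(range(len(h_values)), key=lambda j: h_values[j])
    return [1.0 if j == winner else 0.0 for j in range(len(h_values))]

acts = compete([0.8, 0.2, 0.5])   # unit 1 has the smallest h(t)
```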

Experiment. The network had 1 input unit, 4 chaotic units, and 3 units in each inhibitory cluster. The activations of the input units at each time step are governed entirely by the input sequences, which consist of discrete values in the range 0 to 1. Input sequence (a) was 1.0, 0.5 (alternating). The network was then iterated for a further 200 time steps.

For sequence (a): I(1) = 1.0, I(2) = 0.5, I(3) = 1.0, I(4) = 0.5, etc. The activations of 2 units from the Chaotic Layer stabilised into periodic behaviour after 200 time steps; each of the chaotic units is stabilised to an orbit with a different period.

The activations of the units in 2 of the inhibitory clusters

References
Nigel Crook and Tjeerd olde Scheper, "A Novel Chaotic Neural Network Architecture".
W.J. Freeman and J.M. Barrie, "Chaotic oscillations and the genesis of meaning in cerebral cortex", in Temporal Coding in the Brain.
T. Shinbrot, C. Grebogi, E. Ott, and J.A. Yorke, "Using Small Perturbations to Control Chaos", Nature.
M.R. Guevara, L. Glass, M.C. Mackey, and A. Shrier, "Chaos in Neurobiology".
A. Babloyantz and C. Lourenco, "Brain Chaos and Computation".
C. Lourenco and A. Babloyantz, "Control of spatiotemporal chaos in neural networks".