Bio-Computing By: Reza Ebrahimpour

1 Bio-Computing By: Reza Ebrahimpour ebrahimpour@ipm.ir
2009. Part of the pleasure of doing research on the brain is that you can get inspiration from observing seemingly everyday events. I want to show you one such event that has been fascinating for me. (Speaker note: do the book-on-the-left-hand task.)

2 About the course
Bio-Computing
– Neuro-Computing: Supervised Learning, Unsupervised Learning
– Evolutionary Computing: Genetic Algorithm
– Swarm Intelligence: Ant Colony Optimization

3 Content covered
Neuro-Computing (NC)
- Supervised neural learning algorithms
  – Biological neural networks
  – The Perceptron
  – Linear networks
  – Multi-layer feedforward neural networks
  – The Back-propagation learning algorithm
  – The Radial Basis Function neural network
  – Modular neural networks
- Unsupervised neural learning algorithms
  – Competitive learning and competitive networks
  – Self-organizing feature maps
  – Hopfield network
Evolutionary Computing (EC)
- Genetic Algorithm (GA)
Swarm Intelligence
- Ant Colony Optimization (ACO)

4 Assessment
Homework: 25%
Presentation: 10%
Midterm exam: 15%
Final exam: %
Final Project: %

5 Neuro-Computing Lecture 1 Introduction to Artificial Neural Networks

6 References
Haykin, S., "Neural Networks: A Comprehensive Foundation", 2nd edition, Prentice-Hall, 1999.
Bishop, C., "Neural Networks for Pattern Recognition", Oxford University Press, 1995.
Fausett, L., "Fundamentals of Neural Networks: Architectures, Algorithms, and Applications", 1991.
Veelenturf, L. P. J., "Analysis and Application of Artificial Neural Networks", Prentice-Hall, 1995.
Konar, A., "Computational Intelligence: Principles, Techniques and Applications", Springer, 2005.

7 Suggested Reading
Biological neurons and their relationship to artificial neuron models
- Haykin: Sections , 1.6

8 Basic concepts
Definition of an Artificial Neural Network (ANN)
- Most people in the field agree that:
  – an NN is a network of many simple processors (units);
  – each processor has a small amount of memory;
  – units are connected by communication channels (connections).
- Some ANNs are models of biological neural networks and some are not.

9 Basic concepts
Training
- a rule by which the weights of connections are adjusted on the basis of data.
Generalization
- the network learns from training examples and exhibits some capability to generalize beyond them.

10 Basic characteristics of biological neurons
- Neurons are about six orders of magnitude slower than silicon logic gates:
  – neurons operate in the millisecond range;
  – silicon gates operate in the nanosecond range.
- The function of a biological neuron seems to be much more complex than that of a logic gate.
- How the brain copes with the slow rate of operation of its neurons:
  – a huge number of neurons (on the order of 10^10);
  – complex operations done by neurons.

11 Basic characteristics of biological neurons (cont.)
The brain is an information-processing system that is:
– highly complex,
– non-linear,
– parallel.
The brain performs tasks such as pattern recognition, perception, and motor control many times faster than the fastest digital computers.
Example:
– Brain: a complex task of perceptual recognition takes on the order of 100-200 ms;
– Computer: a task of much lesser complexity can take hours.

12 Different areas in the Cortex

13 Visual processing centers in the Cortex

14 What can you do with an NN?
NNs can compute any computable function, i.e., they can do everything a normal digital computer can do.
NNs are especially useful for problems:
– which have lots of training data available,
– which are tolerant of some imprecision,
– but to which hard and fast rules (such as those that might be used in an expert system) cannot easily be applied.
Examples:
– classification,
– function approximation / mapping problems.

15 Biological neuron

16 How does a biological neuron work?
- Signals are transmitted between neurons by electrical pulses (action potentials or 'spike' trains) travelling along the axon.
- These pulses impinge on the afferent neuron at terminals called synapses.
- These are found principally on a set of branching processes emerging from the cell body (soma), known as dendrites.

17 How does a biological neuron work?
- Each pulse occurring at a synapse initiates the release of a small amount of a chemical substance, or neurotransmitter, which travels across the synaptic cleft and is then received at post-synaptic receptor sites on the dendritic side of the synapse.
- The neurotransmitter binds to molecular sites there, which in turn initiates a change in the dendritic membrane potential.

18 How does a biological neuron work?
- This post-synaptic potential (PSP) change may serve to increase (hyperpolarize) or decrease (depolarize) the polarization of the post-synaptic membrane. In the former case the PSP tends to inhibit the generation of pulses in the afferent neuron, while in the latter it tends to excite the generation of pulses.
- The size and type of PSP produced will depend on factors such as the geometry of the synapse and the type of neurotransmitter.

19 How does a biological neuron work?
- Each PSP travels along its dendrite and spreads over the soma, eventually reaching the base of the axon (the axon hillock). The afferent neuron sums, or integrates, the effects of thousands of such PSPs over its dendritic tree and over time.
- If the integrated potential at the axon hillock exceeds a threshold, the cell 'fires' and generates an action potential, or spike, which starts to travel along its axon.
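This summation-and-threshold behaviour can be caricatured in a few lines of Python. The following is a minimal sketch only, not part of the original lecture; the function name, the leak factor, and the threshold value are illustrative assumptions.

```python
import numpy as np

def integrate_and_fire(psp_stream, threshold=1.0, leak=0.9):
    """Caricature of a neuron integrating PSPs over time.

    psp_stream: net post-synaptic potential per time step
                (positive = depolarizing, negative = hyperpolarizing).
    Returns the time steps at which the neuron 'fires'.
    """
    potential = 0.0
    spikes = []
    for t, psp in enumerate(psp_stream):
        potential = leak * potential + psp   # sum PSPs, with decay over time
        if potential >= threshold:           # integrated potential at the
            spikes.append(t)                 # axon hillock exceeds threshold:
            potential = 0.0                  # fire a spike, then reset
    return spikes

# Example: 50 time steps of mildly excitatory, noisy input
rng = np.random.default_rng(0)
print(integrate_and_fire(rng.normal(0.2, 0.3, size=50)))
```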

20 How does a biological neuron work?
- If the integrated potential at the axon hillock exceeds a threshold, the cell 'fires' and generates an action potential, or spike, which starts to travel along its axon.
(Figure: a neuron with its synapses, axon, and dendrites labelled.)

21 Network of biological neurons

22 Taxonomy of neural networks
ANNs operate in two phases:
– a learning or encoding phase (training phase);
– an active or decoding phase (testing phase).
From the point of view of their learning (encoding) phase, artificial neural networks can be classified into:
– supervised and
– unsupervised systems.
From the point of view of their active (decoding) phase, artificial neural networks can be classified into:
– feed-forward (static) and
– feedback (dynamic, recurrent) systems.

23 Artificial Neural Networks
Feedforward
– Supervised: MLP, RBF
– Unsupervised: Kohonen, Hebbian
Recurrent
– Supervised: Elman, Jordan, Hopfield
– Unsupervised: ART

24 Learning in Neural Nets
Supervised learning
– Data: labeled examples (input, desired output)
– Tasks: classification, pattern recognition, regression
– NN models: perceptron, adaline, feed-forward NN, radial basis function, support vector machines
Unsupervised learning
– Data: unlabeled examples (different realizations of the input)
– Tasks: clustering, content-addressable memory
– NN models: self-organizing maps (SOM), Hopfield networks
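To make the distinction concrete, here is a minimal sketch (not from the slides; the array contents are invented for illustration) of what labeled versus unlabeled data look like in code:

```python
import numpy as np

# Supervised: each input vector is paired with a desired output (label).
X_train = np.array([[0.1, 0.9],
                    [0.8, 0.2],
                    [0.9, 0.1]])   # inputs
y_train = np.array([1, 0, 0])      # desired outputs (e.g. class labels)

# Unsupervised: only inputs are given; structure such as clusters
# must be discovered by the network itself.
X_unlabeled = np.array([[0.1, 0.9],
                        [0.8, 0.2],
                        [0.12, 0.88]])
```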

25 Feed-forward supervised networks
Feed-forward supervised networks are typically used for "function approximation" tasks. Specific examples include:
– linear recursive least-mean-square (LMS) networks;
– Back-propagation networks;
– Radial Basis networks.
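As an illustration of the first example, here is a sketch of the LMS (Widrow-Hoff) rule for a single linear unit; the learning rate, epoch count, and synthetic data are assumptions made for demonstration.

```python
import numpy as np

def lms_train(X, d, eta=0.05, epochs=100):
    """LMS (Widrow-Hoff) rule for a single linear neuron y = w . x.

    X: (n_samples, n_inputs) inputs (append a constant 1 column for a bias).
    d: (n_samples,) desired outputs.
    """
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, target in zip(X, d):
            error = target - w @ x     # e = d - y
            w += eta * error * x       # w <- w + eta * e * x
    return w

# Recover y = 2*x1 - x2 + 0.5 from noisy samples
rng = np.random.default_rng(1)
X = np.column_stack([rng.uniform(-1, 1, (200, 2)), np.ones(200)])
d = 2 * X[:, 0] - X[:, 1] + 0.5 + rng.normal(0, 0.01, 200)
print(lms_train(X, d))   # close to [2, -1, 0.5]
```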

26 Feed-forward unsupervised networks
Feed-forward unsupervised networks are used:
– to "extract important properties" of the input data, or
– to map input data into a "representation" domain.
Two basic groups of methods belong to this category:
– Hebbian networks performing the "Principal Component Analysis" of the input data, also known as the Karhunen-Loève Transform;
– Competitive networks used to perform "Learning Vector Quantization", or tessellation of the input data set. Self-Organizing Kohonen Feature Maps also belong to this group.
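A minimal sketch of the first group, assuming Oja's stabilized Hebbian rule (one standard choice; the slides do not fix a particular rule): for small learning rates it converges to the first principal component of the input data.

```python
import numpy as np

def oja_first_pc(X, eta=0.01, epochs=50):
    """Oja's rule: w <- w + eta * y * (x - y * w), with y = w . x.

    For suitably small eta, w converges to the leading eigenvector of
    the input covariance, i.e. the first principal component.
    """
    rng = np.random.default_rng(0)
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in X:
            y = w @ x
            w += eta * y * (x - y * w)   # Hebbian growth + implicit decay
    return w / np.linalg.norm(w)

# Correlated 2-D data whose principal direction is roughly (1, 1)/sqrt(2)
rng = np.random.default_rng(2)
t = rng.normal(size=(500, 1))
X = np.hstack([t, t]) + rng.normal(0, 0.1, (500, 2))
print(oja_first_pc(X))
```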

27 Feed-back networks
These networks are used to learn or process the "temporal features of the input data", and their internal state evolves with time. Specific examples include:
– Recurrent Back-propagation networks;
– Associative Memories;
– Adaptive Resonance networks.

28 A Brief History of the Field
1943: McCulloch and Pitts proposed the McCulloch-Pitts neuron model.
1949: Hebb published his book The Organization of Behavior, in which the Hebbian learning rule was proposed.
1958: Rosenblatt introduced the simple single-layer networks now called Perceptrons.
1969: Minsky and Papert's book Perceptrons demonstrated the limitations of single-layer perceptrons, and almost the whole field went into hibernation.
1982: Hopfield published a series of papers on Hopfield networks.
1982: Kohonen developed the Self-Organising Maps that now bear his name.
1986: The Back-Propagation learning algorithm for Multi-Layer Perceptrons was rediscovered, and the whole field took off again.
1990s: The sub-field of Radial Basis Function Networks was developed.
2000s: The power of Ensembles of Neural Networks and Support Vector Machines became apparent.

29 Some Current Artificial Neural Network Applications
Brain modeling
– Models of human development: help children with developmental problems
– Simulations of adult performance: aid our understanding of how the brain works
– Neuropsychological models: suggest remedial actions for brain-damaged patients
Real-world applications
– Financial modeling: predicting stocks, shares, currency exchange rates
– Other time-series prediction: climate, weather, airline marketing tactician
– Computer games: intelligent agents, backgammon, first-person shooters
– Control systems: autonomous adaptable robots, microwave controllers
– Pattern recognition: speech recognition, hand-writing recognition, sonar signals
– Data analysis: data compression, data mining, PCA, GTM
– Noise reduction: function approximation, ECG noise reduction
– Bioinformatics: protein secondary structure, DNA sequencing

30 A simplistic model of a biological neuron

31 Artificial Neuron
(Figure: an artificial neuron compared with a physical neuron.)

32 Three basic graphical representations of a single p-input (p-synapse) neuron

33 Anatomy of an Artificial Neuron
(Figure: the inputs xi and a bias input fixed at 1 feed the neuron; h combines the weights wi with the inputs xi, and f, the activation function, produces the output.)
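In code, this anatomy reduces to two steps: a combination function h followed by an activation f. The sketch below assumes the usual weighted sum for h and tanh for f; both are illustrative choices, not prescribed by the slide.

```python
import numpy as np

def neuron(x, w, b, f=np.tanh):
    """Single artificial neuron: y = f(h(w, x)).

    h: combine weights and inputs -> v = sum_i w_i * x_i + b
    f: activation function        -> y = f(v)
    """
    v = np.dot(w, x) + b   # h: net activation, including the bias input
    return f(v)            # f: squash through the activation function

print(neuron(x=np.array([0.5, -1.0]), w=np.array([2.0, 1.0]), b=0.1))
```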

34 Bias
It is sometimes convenient to add an additional parameter, called the threshold θ, or bias b = θ. This can be done by fixing one input signal at a constant value. In the usual notation we then have
v = Σ_{i=1..p} w_i x_i + b = Σ_{i=0..p} w_i x_i, with x_0 = 1 and w_0 = b.
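A small sketch of the trick just described: fixing one input at 1 turns the bias into an ordinary weight (all numbers are illustrative).

```python
import numpy as np

w = np.array([2.0, 1.0])
b = 0.1
x = np.array([0.5, -1.0])

# Bias kept separate:
v1 = w @ x + b

# Bias absorbed as the weight w0 on a constant input x0 = 1:
w_aug = np.concatenate([[b], w])    # [w0, w1, ..., wp] with w0 = b
x_aug = np.concatenate([[1.0], x])  # [x0, x1, ..., xp] with x0 = 1
v2 = w_aug @ x_aug

print(np.isclose(v1, v2))   # True: the two formulations agree
```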

35 Types of activation functions

36 Types of activation functions (cont.)

37 Types of activation functions: concluding remarks
- The smooth activation functions, like the sigmoidal or Gaussian, for which a continuous derivative exists, are typically used in networks performing a function approximation task.
- The step functions are used as parts of pattern classification networks.
- Many learning algorithms, like Back-propagation, require calculation of the derivative of the activation function (see the sketch below).
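Here is a sketch of the function types named above, together with the sigmoid derivative that gradient-based algorithms such as Back-propagation need; the parameter choices are illustrative.

```python
import numpy as np

def step(v):
    """Hard limiter: used as part of pattern classification networks."""
    return np.where(v >= 0.0, 1.0, 0.0)

def sigmoid(v):
    """Smooth, continuously differentiable; common in function approximation."""
    return 1.0 / (1.0 + np.exp(-v))

def sigmoid_deriv(v):
    """Derivative required by gradient-based algorithms like Back-propagation."""
    s = sigmoid(v)
    return s * (1.0 - s)

def gaussian(v, sigma=1.0):
    """Smooth, localized; used e.g. in Radial Basis Function networks."""
    return np.exp(-v**2 / (2.0 * sigma**2))

v = np.linspace(-3.0, 3.0, 7)
print(step(v), sigmoid(v), sigmoid_deriv(v), gaussian(v), sep="\n")
```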

38 Introduction to learning
Objective of neural network learning: given a set of examples, find parameter settings that minimize the error.
The programmer specifies:
– the number of units in each layer,
– the connectivity between units.
Unknowns:
– the connection weights.

39 Introduction to learning
In the decoding (testing) phase of a neural network, one assumes that the weight matrix W is given. If the weight matrix is satisfactory, during the decoding process the network performs the useful task it has been designed to do.
Learning is a dynamic process which modifies the weights of the network in some desirable way.

40 Introduction to learning (cont.)
Learning can be described either by differential equations (continuous time) or by difference equations (discrete time), for some learning law L.
Continuous-time case: dW(t)/dt = L(W(t), x(t), d(t))
Discrete-time case: W(n+1) = W(n) + ΔW(n), with ΔW(n) = L(W(n), x(n), d(n))
- where d is an external teaching/supervising signal used in supervised learning. This signal is not present in networks employing unsupervised learning.
The discrete-time learning law is often used in the form of a weight update equation, as in the sketch below.
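A minimal sketch of the discrete-time weight update, with L instantiated as the supervised delta rule; this particular choice of L, and all numbers, are assumptions made for illustration (an unsupervised law would simply omit the teaching signal d).

```python
import numpy as np

def delta_rule_step(W, x, d, eta=0.1):
    """One discrete-time update W(n+1) = W(n) + dW(n).

    Here L(W, x, d) is taken to be the delta rule
    dW = eta * (d - W x) x^T, where d is the external teaching signal
    (absent in unsupervised learning laws).
    """
    y = W @ x                       # current network output
    dW = eta * np.outer(d - y, x)   # dW(n) = L(W(n), x(n), d(n))
    return W + dW                   # the weight update equation

W = np.zeros((1, 3))                # one output unit, three inputs
x = np.array([1.0, 0.5, -0.2])
d = np.array([0.7])                 # teaching signal
for _ in range(100):
    W = delta_rule_step(W, x, d)
print(W @ x)   # approaches the teaching signal 0.7
```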

