Bio-Computing By: Reza Ebrahimpour


Bio-Computing By: Reza Ebrahimpour ebrahimpour@ipm.ir rebrahimpour@srttu.edu 2009 Part of the pleasure of doing research on the brain is that you can get inspiration from observing seemingly everyday events. I want to show you one such event that has been fascinating for me. (Demonstration: the book-on-the-left-hand task.)

About the course: Bio-Computing comprises Neuro-Computing (supervised learning and unsupervised learning), Evolutionary Computing (genetic algorithm), and Swarm Intelligence (ant colony optimization).

Content covered
Neuro-Computing (NC)
- Supervised neural learning algorithms: biological neural networks; the Perceptron; linear networks; multi-layer feedforward neural networks; the back-propagation learning algorithm; the radial basis function neural network; modular neural networks
- Unsupervised neural learning algorithms: competitive learning and competitive networks; self-organizing feature maps; the Hopfield network
Evolutionary Computing (EC)
- Genetic Algorithm (GA)
Swarm Intelligence
- Ant Colony Optimization (ACO)

Assessment Homework: 25% Presentation: 10% Midterm exam: 15% Final exam: 25% Final project: 25%

Neuro-Computing Lecture 1 Introduction to Artificial Neural Networks

References
Haykin, S., Neural Networks: A Comprehensive Foundation, 2nd edition, Prentice-Hall, 1999.
Bishop, C., Neural Networks for Pattern Recognition, Oxford University Press, 1995.
Fausett, L., Fundamentals of Neural Networks: Architectures, Algorithms, and Applications, Prentice-Hall, 1994.
Veelenturf, L. P. J., Analysis and Applications of Artificial Neural Networks, Prentice-Hall, 1995.
Konar, A., Computational Intelligence: Principles, Techniques and Applications, Springer, 2005.

Suggested Reading Biological neurons and their relationship to artificial neuron models - Haykin: Sections 1.1 - 1.4, 1.6

Basic concepts Definition of an Artificial Neural Network (ANN) – Most people in the field agree that: an NN is a network of many simple processors (units); each processor has a small amount of memory; units are connected by communication channels (connections). - Some ANNs are models of biological neural networks and some are not.
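
To make the definition concrete, here is a minimal illustrative sketch in Python; the class names and the toy three-unit network are our own invention, not any standard library:

    # Minimal sketch of the ANN definition above: simple processing units,
    # each with a small amount of local memory (its state), connected by
    # weighted communication channels.

    class Unit:
        def __init__(self):
            self.state = 0.0          # the unit's small amount of memory

    class Connection:
        def __init__(self, source, target, weight):
            self.source = source      # sending unit
            self.target = target      # receiving unit
            self.weight = weight      # strength of the channel

    # a tiny network: two input units feeding one output unit
    u1, u2, out = Unit(), Unit(), Unit()
    net = [Connection(u1, out, 0.5), Connection(u2, out, 0.25)]

    u1.state, u2.state = 1.0, 1.0
    out.state = sum(c.weight * c.source.state for c in net)  # simple processing
    print(out.state)                  # 0.75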

Basic concepts Training – the weights of the connections are adjusted, according to a learning rule, on the basis of data. Generalization – the network learns from training examples and exhibits some capability for generalization beyond the training examples.

Basic characteristics of biological neurons - About six orders of magnitude slower than silicon logic gates: neurons operate in the millisecond range; silicon gates operate in the nanosecond range. The function of a biological neuron, however, seems to be much more complex than that of a logic gate. - How the brain makes up for its slow rate of operation: a huge number of neurons, and complex operations done by neurons.

Basic characteristics of biological neurons (cont.) The brain is an information-processing system: highly complex, non-linear, and parallel. The brain performs some tasks many times faster than the fastest digital computers: pattern recognition, perception, motor control. Example: the brain completes a complex task of perceptual recognition in 200-300 ms, while a task of much lower complexity can take a computer hours.

Different areas in the Cortex

Visual processing centers in the Cortex

What can you do with an NN? Compute any computable function, i.e., NNs can do everything a normal digital computer can do. NNs are especially useful for problems which have lots of training data available, which are tolerant of some imprecision, and to which hard and fast rules (such as those that might be used in an expert system) cannot easily be applied. Examples: classification; function approximation/mapping problems.

Biological neuron

How a biological neuron works Signals are transmitted between neurons by electrical pulses (action potentials or ‘spike’ trains) traveling along the axon. These pulses impinge on the afferent neuron at terminals called synapses. - These are found principally on a set of branching processes emerging from the cell body (soma) known as dendrites.

How a biological neuron works - Each pulse occurring at a synapse initiates the release of a small amount of a chemical substance, or neurotransmitter, which travels across the synaptic cleft and is then received at post-synaptic receptor sites on the dendritic side of the synapse. - The neurotransmitter becomes bound to molecular sites there which, in turn, initiates a change in the dendritic membrane potential.

How a biological neuron works This post-synaptic potential (PSP) change may serve to increase (hyperpolarize) or decrease (depolarize) the polarization of the post-synaptic membrane. In the former case, the PSP tends to inhibit the generation of pulses in the afferent neuron, while in the latter, it tends to excite the generation of pulses. - The size and type of the PSP produced will depend on factors such as the geometry of the synapse and the type of neurotransmitter.

How a biological neuron works Each PSP will travel along its dendrite and spread over the soma, eventually reaching the base of the axon (the axon hillock). The afferent neuron sums or integrates the effects of thousands of such PSPs over its dendritic tree and over time. - If the integrated potential at the axon hillock exceeds a threshold, the cell ‘fires’ and generates an action potential, or spike, which starts to travel along its axon.

How a biological neuron works (figure: a biological neuron, with the synapses, axon, and dendrites labeled)
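
The summation-and-threshold behaviour just described can be caricatured in a few lines of code. Below is a minimal leaky integrate-and-fire sketch; the constants and the PSP sequence are illustrative assumptions, not physiological values:

    # Minimal leaky integrate-and-fire caricature of the process described
    # above: PSPs are integrated over time, and the cell fires once the
    # potential at the axon hillock exceeds a threshold.

    def simulate(psps, threshold=1.0, leak=0.9):
        """psps: PSP contribution per time step (positive = depolarizing,
        i.e. excitatory; negative = hyperpolarizing, i.e. inhibitory)."""
        potential, spikes = 0.0, []
        for t, psp in enumerate(psps):
            potential = leak * potential + psp   # integrate with passive decay
            if potential > threshold:            # axon-hillock threshold reached
                spikes.append(t)                 # the cell 'fires' a spike
                potential = 0.0                  # reset after the action potential
        return spikes

    print(simulate([0.4, 0.5, 0.4, -0.2, 0.3, 0.9]))  # [2, 5]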

Network of biological neurons

Taxonomy of neural networks Two phases of ANN operation: – learning or encoding phase (training phase); – active or decoding phase (testing phase). From the point of view of their learning or encoding phase, artificial neural networks can be classified into: – supervised and – unsupervised systems. From the point of view of their active or decoding phase, artificial neural networks can be classified into: – feed-forward (static) and – feedback (dynamic, recurrent) systems.

Artificial Neural Networks (taxonomy)
- Feedforward: supervised (MLP, RBF); unsupervised (Kohonen, Hebbian)
- Recurrent: supervised (Elman, Jordan, Hopfield); unsupervised (ART)

Learning in Neural Nets: learning tasks
- Supervised. Data: labeled examples (input, desired output). Tasks: classification, pattern recognition, regression. NN models: perceptron, adaline, feed-forward NN, radial basis function, support vector machines.
- Unsupervised. Data: unlabeled examples (different realizations of the input). Tasks: clustering, content-addressable memory. NN models: self-organizing maps (SOM), Hopfield networks.

Feed-forward supervised networks Feed-forward supervised networks are typically used for “function approximation” tasks. Specific examples include: – linear recursive least-mean-square (LMS) networks; – back-propagation networks; – radial basis networks.

Feed-forward unsupervised networks Feed-forward unsupervised networks are used: – to “extract important properties” of the input data or – to map input data into a “representation” domain. Two basic groups of methods belong to this category: – Hebbian networks performing the “Principal Component Analysis” of the input data, also known as the Karhunen-Loeve Transform; – Competitive networks used to perform “Learning Vector Quantization”, or tessellation of the input data set. Self-Organizing Kohonen Feature Maps also belong to this group.
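
As a concrete taste of the first group, here is a minimal sketch of a Hebbian unit trained with Oja's stabilized rule, whose weight vector converges toward the first principal component of the input data; the data, seed, and learning rate are illustrative assumptions:

    # Sketch of Hebbian learning in Oja's stabilized form. The weight vector
    # converges toward the first principal component of the (zero-mean) data,
    # i.e. the leading direction of the Karhunen-Loeve transform.
    import numpy as np

    rng = np.random.default_rng(0)
    # zero-mean 2-D data, stretched along the direction (1, 1)
    X = rng.normal(size=(1000, 2)) @ np.array([[1.0, 0.75], [0.75, 1.0]])

    w = rng.normal(size=2)
    eta = 0.01                          # learning rate
    for x in X:
        y = w @ x                       # Hebbian: output follows correlated input
        w += eta * y * (x - y * w)      # Oja's rule: Hebb term plus weight decay

    print(w / np.linalg.norm(w))        # close to +/-[0.707, 0.707], the leading PC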

Feed-back networks These networks are used to learn or process the “temporal features of the input data” and their internal state evolves with time. Specific examples include: – Recurrent Back-propagation networks; – Associative Memories; – Adaptive Resonance networks.
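
For a flavour of these feedback dynamics, here is a minimal sketch of a Hopfield-style associative memory whose internal state evolves over time until it settles; the stored pattern and probe are illustrative:

    # Sketch of a tiny Hopfield-style associative memory: the internal state
    # evolves over time until it settles into the stored pattern.
    import numpy as np

    pattern = np.array([1, -1, 1, -1, 1, -1])        # stored +/-1 pattern
    W = np.outer(pattern, pattern).astype(float)     # Hebbian weight matrix
    np.fill_diagonal(W, 0.0)                         # no self-connections

    state = np.array([1, -1, 1, 1, 1, -1])           # probe with one bit corrupted
    for _ in range(5):                               # let the state evolve
        state = np.sign(W @ state)                   # synchronous update

    print(state)                                     # the stored pattern is recovered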

A Brief History of the Field
1943 McCulloch and Pitts proposed the McCulloch-Pitts neuron model.
1949 Hebb published his book The Organization of Behavior, in which the Hebbian learning rule was proposed.
1958 Rosenblatt introduced the simple single-layer networks now called perceptrons.
1969 Minsky and Papert's book Perceptrons demonstrated the limitations of single-layer perceptrons, and almost the whole field went into hibernation.
1982 Hopfield published a series of papers on Hopfield networks.
1982 Kohonen developed the Self-Organising Maps that now bear his name.
1986 The back-propagation learning algorithm for multi-layer perceptrons was rediscovered, and the whole field took off again.
1990s The sub-field of radial basis function networks was developed.
2000s The power of ensembles of neural networks and support vector machines became apparent.

Some Current Artificial Neural Network Applications
Brain modeling
- Models of human development – help children with developmental problems
- Simulations of adult performance – aid our understanding of how the brain works
- Neuropsychological models – suggest remedial actions for brain-damaged patients
Real-world applications
- Financial modeling – predicting stocks, shares, currency exchange rates
- Other time-series prediction – climate, weather, airline marketing tactician
- Computer games – intelligent agents, backgammon, first-person shooters
- Control systems – autonomous adaptable robots, microwave controllers
- Pattern recognition – speech recognition, hand-writing recognition, sonar signals
- Data analysis – data compression, data mining, PCA, GTM
- Noise reduction – function approximation, ECG noise reduction
- Bioinformatics – protein secondary structure, DNA sequencing

A simplistic model of a biological neuron

Artificial Neuron (figure: an artificial neuron alongside a physical neuron)

Three basic graphical representations of a single p-input (p-synapse) neuron

Anatomy of an Artificial Neuron (figure): inputs x_i with weights w_i, a bias input fixed at 1, a combiner h that combines the w_i and x_i, an activation function f, and the output.
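
Taking the combiner h to be the usual weighted sum (the standard choice for this kind of diagram), the neuron computes y = f(w_1·x_1 + … + w_p·x_p + bias). A minimal sketch, with a sigmoid as one common choice of f:

    # Minimal sketch of the artificial neuron above: combine the inputs x_i
    # with the weights w_i plus a bias, then pass the result through an
    # activation function f (here a sigmoid, one common choice).
    import math

    def neuron(x, w, b):
        h = sum(wi * xi for wi, xi in zip(w, x)) + b   # combiner: weighted sum
        return 1.0 / (1.0 + math.exp(-h))              # activation function f

    # a 3-input (3-synapse) neuron
    print(neuron(x=[1.0, 0.5, -1.0], w=[0.4, 0.6, 0.2], b=-0.1))  # ~0.599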

Bias It is sometimes convenient to add an additional parameter called the threshold, θ, or bias, b = θ. It can be done by fixing one input signal to be constant, x_0 = 1, with weight w_0 = b. Then we have: v = w_0·x_0 + w_1·x_1 + … + w_p·x_p = Σ_{i=1..p} w_i·x_i + b.
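
A short sketch of this trick, absorbing the bias into the weight vector by appending the constant input (the numbers are illustrative):

    # Sketch of the bias trick described above: fix one extra input at the
    # constant 1, so the bias b becomes just another weight w_0.
    x = [1.0, 0.5, -1.0]
    w = [0.4, 0.6, 0.2]
    b = -0.1

    v_bias = sum(wi * xi for wi, xi in zip(w, x)) + b   # explicit bias term

    x_aug = [1.0] + x        # constant input x_0 = 1
    w_aug = [b] + w          # bias absorbed as weight w_0 = b
    v_aug = sum(wi * xi for wi, xi in zip(w_aug, x_aug))

    print(abs(v_bias - v_aug) < 1e-12)   # True: the two forms agree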

Types of activation functions

Types of activation functions (cont.)

Types of activation functions: concluding remarks - The smooth activation functions, such as the sigmoid or the Gaussian, for which a continuous derivative exists, are typically used in networks performing a function approximation task. - The step functions are used as parts of pattern classification networks. - Many learning algorithms, like back-propagation, require calculation of the derivative of the activation function.
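
To make these remarks concrete, here is a sketch of the activation-function types mentioned (not tied to any particular toolbox); note that only the smooth ones have a usable derivative:

    # Sketch of the activation-function types discussed above. The smooth
    # functions (sigmoid, Gaussian) have continuous derivatives, which
    # algorithms like back-propagation need; the step function does not.
    import math

    def step(v, theta=0.0):
        return 1.0 if v >= theta else 0.0       # hard threshold, for classification

    def sigmoid(v):
        return 1.0 / (1.0 + math.exp(-v))       # smooth, output in (0, 1)

    def sigmoid_deriv(v):
        s = sigmoid(v)
        return s * (1.0 - s)                    # the derivative back-prop uses

    def gaussian(v, sigma=1.0):
        return math.exp(-v * v / (2 * sigma * sigma))   # smooth, used in RBF nets

    for v in (-2.0, 0.0, 2.0):
        print(v, step(v), round(sigmoid(v), 3),
              round(sigmoid_deriv(v), 3), round(gaussian(v), 3))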

Introduction to learning Objective of neural network learning: given a set of examples, find parameter settings that minimize the error. The programmer specifies: the number of units in each layer, and the connectivity between units. The unknowns are the connection weights.

Introduction to learning In the decoding (testing) phase of a neural network, one assumes that the weight matrix, W, is given. If the weight matrix is satisfactory, during the decoding process the network performs the useful task it has been designed to do. Learning is a dynamic process which modifies the weights of the network in some desirable way.

Introduction to learning (cont.) Learning can be described either by differential equations (continuous-time) or by difference equations (discrete-time). In general form, the continuous-time case is dW/dt = L(W(t), x(t), d(t)), and the discrete-time case is W(n+1) = L(W(n), x(n), d(n)), where L denotes the learning law, x the input, and d an external teaching/supervising signal used in supervised learning. This signal is not present in networks employing unsupervised learning. The discrete-time learning law is often used in the form of a weight update equation: W(n+1) = W(n) + ΔW(n).
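
As one concrete instance of such a discrete-time weight update, here is a sketch of the LMS (delta) rule for a single linear neuron, with ΔW(n) = η·(d - y)·x; the data, seed, and learning rate are illustrative:

    # Sketch of a discrete-time learning law W(n+1) = W(n) + dW(n),
    # instantiated as the LMS (delta) rule for a single linear neuron:
    # dW(n) = eta * (d(n) - y(n)) * x(n), with teaching signal d.
    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 2))         # inputs x(n)
    d = X @ np.array([1.5, -0.8])         # teaching signal from a target mapping

    w = np.zeros(2)                       # W(0)
    eta = 0.05                            # learning rate
    for x, dn in zip(X, d):
        y = w @ x                         # network output y(n)
        w = w + eta * (dn - y) * x        # W(n+1) = W(n) + dW(n)

    print(w)                              # approaches [1.5, -0.8]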