Neural Networks 2nd Edition Simon Haykin


Neural Networks 2nd Edition Simon Haykin 柯博昌 Chap 1. Introduction

What is a Neural Network A neural network is a massively parallel distributed processor made up of simple processing units, which has a natural propensity for storing experiential knowledge and making it available for use. Knowledge is acquired by the network from its environment through a learning process. The procedure used to perform the learning process is called a learning algorithm. Interneuron connection strengths, known as synaptic weights, are used to store the acquired knowledge.

Benefits of Neural Networks The computing power of neural networks derives from: its massively parallel distributed structure, and its ability to learn and therefore generalize. Using neural networks offers the following properties: Nonlinearity, Input-Output Mapping, Adaptivity, Evidential Response, Contextual Information, Fault Tolerance, VLSI Implementability, Uniformity of Analysis and Design, Neurobiological Analogy. Supervised Learning: modifying the synaptic weights by applying a set of training samples, each consisting of an input signal and a corresponding desired response.

Human Brain - Function Block Block diagram representation of the human nervous system: Stimulus → Receptors → Neural Net (Brain) → Effectors → Response, with forward paths between the stages and feedback paths in the reverse direction. Receptors: convert stimuli from the human body or the external environment into electrical impulses that convey information to the brain. Effectors: convert electrical impulses generated by the brain into discernible responses as system outputs.

Comparisons: Neural Net. vs. Brain Neuron: the structural constituent of the brain. Neurons are five to six orders of magnitude slower than silicon logic gates (e.g. silicon chips: 10^-9 s per event, neural events: 10^-3 s). About 10 billion neurons and 60 trillion synapses or connections are in the human cortex. Energetic efficiency: brain: 10^-16 joules per operation per second; best computer today: 10^-6 joules per operation per second.

Synapses Synapses are elementary structural and functional units that mediate the interactions between neurons. The most common kind of synapse is the chemical synapse. The operation of a synapse: a pre-synaptic process liberates a transmitter substance that diffuses across the synaptic junction between neurons and then acts on a post-synaptic process. The synapse thus converts a pre-synaptic electrical signal into a chemical signal and then back into a post-synaptic electrical signal (a nonreciprocal two-port device).

Pyramidal Cell

Cytoarchitectural map of the cerebral cortex

Nonlinear model of a neuron A neuron computes a weighted sum of its inputs plus a bias, followed by an activation function: v_k = sum_{j=1..m} w_kj x_j + b_k, and y_k = phi(v_k). Let b_k = w_k0 and x_0 = +1; then the bias is absorbed into the sum: v_k = sum_{j=0..m} w_kj x_j and y_k = phi(v_k).

Nonlinear model of a neuron (Cont.) The bias b_k applies an affine transformation to the output u_k of the linear combiner: v_k = u_k + b_k, where u_k = sum_{j=1..m} w_kj x_j. Absorbing the bias with x_0 = +1 and w_k0 = b_k yields another, equivalent nonlinear model of the neuron.
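The neuron model above can be sketched in a few lines of NumPy; the weight values, bias, and sigmoid activation here are illustrative choices, not values from the slides:

```python
import numpy as np

def neuron_output(x, w, b, phi=lambda v: 1.0 / (1.0 + np.exp(-v))):
    """Nonlinear neuron model: v_k = sum_j w_kj * x_j + b_k, y_k = phi(v_k)."""
    v = np.dot(w, x) + b
    return phi(v)

# Absorbing the bias: let w_k0 = b_k and x_0 = +1
x = np.array([0.5, -1.0, 2.0])   # illustrative inputs
w = np.array([0.4, 0.3, -0.2])   # illustrative synaptic weights
b = 0.1                          # illustrative bias
x_aug = np.concatenate(([1.0], x))   # prepend x_0 = +1
w_aug = np.concatenate(([b], w))     # prepend w_k0 = b_k
# Both formulations give the same output
assert np.isclose(neuron_output(x, w, b), neuron_output(x_aug, w_aug, 0.0))
```

The assertion checks the equivalence of the two formulations from the slide: folding the bias into an extra weight leaves the output unchanged.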

Types of Activation Function Threshold function: phi(v) = 1 if v >= 0, 0 if v < 0. Piecewise-linear function: phi(v) = 1 if v >= +1/2, v + 1/2 if -1/2 < v < +1/2, 0 if v <= -1/2. Sigmoid function: phi(v) = 1 / (1 + exp(-av)), where a is the slope parameter.

Types of Activation Function (Cont.) The activation functions defined above range from 0 to +1. Sometimes an activation function ranging from -1 to +1 is needed instead. (How to do this?) Denote the activation function ranging from 0 to +1 by phi(.) and the one ranging from -1 to +1 by phi'(.); then phi'(v) = 2*phi(v) - 1. Note: if phi(v) is the sigmoid function 1/(1 + exp(-av)), then phi'(v) = tanh(av/2).
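The three activation functions and the 0-to-+1 → -1-to-+1 rescaling can be sketched directly; the slope parameter and test inputs are illustrative:

```python
import numpy as np

def threshold(v):
    """Threshold (Heaviside) function: 1 if v >= 0, else 0."""
    return np.where(v >= 0, 1.0, 0.0)

def piecewise_linear(v):
    """Linear in (-1/2, +1/2), saturating at 0 and 1 outside it."""
    return np.clip(v + 0.5, 0.0, 1.0)

def sigmoid(v, a=1.0):
    """Logistic sigmoid with slope parameter a."""
    return 1.0 / (1.0 + np.exp(-a * v))

def rescale(phi_v):
    """Map an activation in [0, 1] to [-1, +1]: phi'(v) = 2*phi(v) - 1."""
    return 2.0 * phi_v - 1.0

v = np.linspace(-3, 3, 7)
# For the sigmoid, the rescaled version equals tanh(a*v/2)
assert np.allclose(rescale(sigmoid(v, a=1.0)), np.tanh(v / 2.0))
```

The final assertion verifies the note on the slide: 2/(1 + e^{-av}) - 1 simplifies to tanh(av/2).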

Stochastic Model of a Neuron The above model is deterministic in that its input-output behavior is precisely defined. Some applications of neural networks base the analysis on a stochastic neuronal model. Let x denote the state of the neuron (+1 or -1) and P(v) denote the probability of firing, where v is the induced local field of the neuron: x = +1 with probability P(v), x = -1 with probability 1 - P(v). A standard choice for P(v) is the sigmoid-shaped function P(v) = 1 / (1 + exp(-v/T)), where T is a pseudo-temperature used to control the noise level and therefore the uncertainty in firing.
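A sketch of the stochastic neuron, assuming the sigmoid-shaped P(v) above; the field value, temperature, and sample count are illustrative:

```python
import numpy as np

def firing_probability(v, T=1.0):
    """P(v) = 1 / (1 + exp(-v / T)); T controls the noise level."""
    return 1.0 / (1.0 + np.exp(-v / T))

def stochastic_neuron(v, T=1.0, rng=None):
    """State x = +1 with probability P(v), x = -1 with probability 1 - P(v)."""
    rng = rng if rng is not None else np.random.default_rng()
    return 1 if rng.random() < firing_probability(v, T) else -1

# As T -> 0 the neuron becomes deterministic; larger T increases the
# uncertainty in firing. Sample the state many times for a fixed field:
rng = np.random.default_rng(0)
states = [stochastic_neuron(2.0, T=0.5, rng=rng) for _ in range(1000)]
```

With v = 2.0 and T = 0.5, P(v) ≈ 0.98, so the empirical mean of the sampled states is close to 2·P(v) - 1 ≈ 0.96.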

Neural Network as a Directed Graph Synaptic links: linear input-output relation, y_k = w_kj x_j, with synaptic weight w_kj. Activation links: nonlinear input-output relation, y_k = phi(x_j). Synaptic convergence (fan-in): incoming signals are summed, y_k = y_i + y_j. Synaptic divergence (fan-out): a node signal x_j is transmitted unchanged along each outgoing link.

Signal-flow Graph of a Neuron

Feedback Feedback plays a major role in recurrent networks. y_k(n) = A[x_j'(n)], x_j'(n) = x_j(n) + B[y_k(n)], where A and B act as operators. Eliminating x_j'(n) gives y_k(n) = (A / (1 - AB))[x_j(n)]; A/(1-AB) is referred to as the closed-loop operator and AB as the open-loop operator. In general, AB ≠ BA.

Feedback (Cont.) Let A be a fixed weight, w, and B a unit-delay operator, z^-1. Then the closed-loop operator becomes w / (1 - w z^-1) = w * sum_{l=0..inf} w^l z^-l, so that y_k(n) = sum_{l=0..inf} w^(l+1) x_j(n - l). Use Taylor's expansion or the binomial expansion to prove it.

Time responses for different weights, w Conclusions: |w| < 1: y_k(n) is exponentially convergent; the system is stable. |w| >= 1: y_k(n) is divergent; the system is unstable. Think about: how does the time response change if -1 < w < 0? How does it change if w <= -1?
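The single-loop feedback system can be simulated directly to see both regimes; the impulse input and the two weight values are illustrative:

```python
import numpy as np

def feedback_response(w, x0=1.0, n_steps=20):
    """Single-loop feedback: y_k(n) = w * [x_j(n) + y_k(n-1)], where the
    unit-delay operator z^-1 feeds the previous output back to the input.
    For an impulse input x_j(0) = x0, this gives y_k(n) = w^(n+1) * x0."""
    y = np.zeros(n_steps)
    delayed = 0.0                  # output of the unit-delay operator z^-1
    x = x0
    for n in range(n_steps):
        y[n] = w * (x + delayed)
        delayed = y[n]
        x = 0.0                    # impulse input: nonzero only at n = 0
    return y

y_stable = feedback_response(w=0.5)    # |w| < 1: exponentially convergent
y_unstable = feedback_response(w=1.5)  # |w| >= 1: divergent
```

Printing the two sequences shows y_stable decaying geometrically toward 0 while y_unstable grows without bound, matching the stability conclusions above.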

Network Architectures Single-Layer Feedforward Networks MultiLayer Feedforward Networks Fully connected: every node in each layer is connected to every node in the adjacent forward layer. Otherwise, the network is partially connected.

Network Architectures (Cont.) Recurrent Networks with no self-feedback loops and no hidden neurons Recurrent Networks with hidden neurons

Knowledge Representation Primary characteristics of knowledge representation What information is actually made explicit How the information is physically encoded for subsequent use Knowledge is goal directed. A good solution depends on a good representation of knowledge. A set of input-output pairs, with each pair consisting of an input signal and the corresponding desired response, is referred to as a set of training data or training sample.

Rules for Knowledge Representation Rule 1: Similar inputs from similar classes should usually produce similar representations inside the network. Similarity measures, with x_i = [x_i1, x_i2, ..., x_im]^T: (1) Euclidean distance, d(x_i, x_j) = ||x_i - x_j||. (2) Inner product, (x_i, x_j) = x_i^T x_j. If ||x_i|| = 1 and ||x_j|| = 1, then d^2(x_i, x_j) = (x_i - x_j)^T (x_i - x_j) = 2 - 2 x_i^T x_j, so minimizing the Euclidean distance corresponds to maximizing the inner product.
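The identity relating the two similarity measures for unit-length vectors can be checked numerically; the vectors below are arbitrary examples, normalized to unit length:

```python
import numpy as np

def euclidean_distance(xi, xj):
    """Similarity measure (1): d(xi, xj) = ||xi - xj||."""
    return np.linalg.norm(xi - xj)

def inner_product(xi, xj):
    """Similarity measure (2): xi^T xj."""
    return float(np.dot(xi, xj))

# Arbitrary example vectors, normalized so ||xi|| = ||xj|| = 1
xi = np.array([1.0, 2.0, 2.0]); xi /= np.linalg.norm(xi)
xj = np.array([2.0, 1.0, -2.0]); xj /= np.linalg.norm(xj)

# For unit-length vectors: d^2(xi, xj) = 2 - 2 * xi^T xj
assert np.isclose(euclidean_distance(xi, xj) ** 2,
                  2.0 - 2.0 * inner_product(xi, xj))
```

The assertion mirrors the derivation on the slide: expanding (x_i - x_j)^T (x_i - x_j) with unit norms leaves 2 - 2 x_i^T x_j.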

Rules for Knowledge Representation (Cont.) Rule 2: Items to be categorized as separate classes should be given widely different representations in the network. (This is the exact opposite of Rule 1.) Rule 3: If a particular feature is important, then there should be a large number of neurons involved in the representation of that item. Rule 4: Prior information and invariance should be built into the design of a neural network, thereby simplifying the network design by not having to learn them.

How to Build Prior Information into Neural Network Design Restricting the network architecture through the use of local connections known as receptive fields. Constraining the choice of synaptic weights through the use of weight-sharing. Ex: convolution sum / convolutional network: x1, ..., x6 constitute the receptive field for hidden neuron 1, and so on for the other hidden neurons.
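A minimal sketch of a hidden layer combining both ideas, local receptive fields and weight-sharing; the input size, receptive-field width, and tanh activation are illustrative, not the exact dimensions of the slide's figure:

```python
import numpy as np

def shared_weight_layer(x, w, phi=np.tanh):
    """Hidden layer with local receptive fields and weight sharing.
    Hidden neuron i sees only inputs x[i : i+len(w)] (its receptive field),
    and all hidden neurons share the same weight vector w, so the induced
    local fields form a convolution sum: v_i = sum_k w[k] * x[i + k]."""
    m = len(w)
    v = np.array([np.dot(w, x[i:i + m]) for i in range(len(x) - m + 1)])
    return phi(v)

x = np.arange(10, dtype=float)       # 10 inputs (illustrative)
w = np.array([0.25, 0.5, 0.25])      # one shared 3-tap weight vector
h = shared_weight_layer(x, w)        # 8 hidden neurons, one weight vector

# The induced local fields equal a convolution of x with the shared weights
assert np.allclose(np.arctanh(h), np.convolve(x, w[::-1], mode='valid'))
```

With 10 inputs and a 3-wide receptive field there are only 3 free weights instead of 8 × 10 = 80 for a fully connected layer, which is the simplification weight-sharing buys.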

Artificial Intelligence (AI) Goal: developing paradigms or algorithms that require machines to perform cognitive tasks. An AI system must be capable of: storing knowledge, applying the knowledge stored to solve problems, and acquiring new knowledge through experience. Key components: representation, reasoning, and learning.