Network Architectures

Neuron Model and Network Architectures

Single-Input Neuron. A general single-input neuron multiplies the scalar input p by an adjustable weight w, adds an adjustable bias b, and passes the net input n = wp + b through a transfer function f, giving a = f(wp + b). Example: Suppose w = 3, p = 2, and b = -1.5; then a = f(3(2) - 1.5) = f(4.5).
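To make the arithmetic concrete, here is a minimal sketch of this neuron in Python (not from the slides); the identity transfer function used as the default is an assumption standing in for whichever f is chosen.

```python
def single_input_neuron(p, w, b, f=lambda n: n):
    """Single-input neuron: a = f(w*p + b). The identity f is a placeholder."""
    n = w * p + b          # net input
    return f(n)            # neuron output

# Values from the slide: w = 3, p = 2, b = -1.5  =>  n = 4.5
print(single_input_neuron(p=2, w=3, b=-1.5))   # 4.5 under the identity f
```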

Transfer Functions. The hard limit transfer function a = hardlim(n) outputs 1 if n >= 0 and 0 otherwise; a = hardlim(wp + b) is a single-neuron hardlim network. The linear transfer function a = purelin(n) passes its net input straight through; a = purelin(wp + b) is a single-neuron purelin network.
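A small Python sketch of these two transfer functions (the names follow the slide's MATLAB-style naming; the code itself is illustrative, not from the slides):

```python
import numpy as np

def hardlim(n):
    """Hard limit: 1.0 where n >= 0, else 0.0."""
    return np.where(n >= 0, 1.0, 0.0)

def purelin(n):
    """Pure linear: a = n."""
    return n

# Single-neuron networks with the earlier values w = 3, p = 2, b = -1.5:
n = 3 * 2 - 1.5            # net input = 4.5
print(hardlim(n))          # 1.0, since 4.5 >= 0
print(purelin(n))          # 4.5
```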

Transfer Functions. The log-sigmoid transfer function a = logsig(n) squashes the net input into the interval (0, 1); a = logsig(wp + b) is a single-neuron logsig network. Demo: nnd2n1.
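And a matching sketch of logsig, again illustrative rather than taken from the demo:

```python
import numpy as np

def logsig(n):
    """Log-sigmoid: squashes any real n into the interval (0, 1)."""
    return 1.0 / (1.0 + np.exp(-n))

# Single-neuron logsig network with the earlier net input n = 4.5:
print(logsig(4.5))         # ~0.989
```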

Multiple-Input Neuron. With R inputs, the net input is n = w1,1 p1 + w1,2 p2 + … + w1,R pR + b. We will show this in matrix form as n = Wp + b, where W is a 1×R weight matrix and p is an R×1 input vector; then a can be written as a = f(Wp + b). This is the abbreviated notation. Demo: nnd2n2.
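A minimal sketch of the matrix form with NumPy; the particular weight, input, and bias values (and the choice of logsig for f) are made up for illustration:

```python
import numpy as np

logsig = lambda n: 1.0 / (1.0 + np.exp(-n))    # assumed choice of f

def multi_input_neuron(p, W, b, f=logsig):
    """Single neuron, R inputs: a = f(Wp + b), with W 1xR, p Rx1, scalar b."""
    n = W @ p + b                              # net input (1x1)
    return f(n)

W = np.array([[3.0, 1.0, -2.0]])               # 1x3 weight matrix (example)
p = np.array([[2.0], [0.5], [1.0]])            # 3x1 input vector (example)
b = -1.5
print(multi_input_neuron(p, W, b))             # logsig(4.5 - 1.5) = logsig(3.0)
```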

Layer of Neurons. A layer of S neurons computes a = f(Wp + b). The number of inputs to a layer can differ from the number of neurons in it, and neurons within a layer may have different transfer functions.
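The same computation for a whole layer, sketched with the shapes used below (W is SxR, b and a are Sx1); the random parameter values are placeholders, and a single transfer function is used for all S neurons for simplicity:

```python
import numpy as np

logsig = lambda n: 1.0 / (1.0 + np.exp(-n))   # one f for all S neurons (assumed)

def layer(p, W, b, f=logsig):
    """Layer of S neurons: a = f(Wp + b), W is SxR, p is Rx1, b and a are Sx1."""
    return f(W @ p + b)

R, S = 3, 2                                   # 3 inputs, 2 neurons: R != S is fine
rng = np.random.default_rng(0)
W = rng.standard_normal((S, R))               # SxR weight matrix
b = rng.standard_normal((S, 1))               # Sx1 bias vector
p = np.ones((R, 1))                           # Rx1 input vector
print(layer(p, W, b))                         # Sx1 output vector
```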

Abbreviated Notation W p b a w w ¼ w w w ¼ w = w w ¼ w p b a p b a = = 1 , 1 1 , 2 1 , R w w ¼ w W = 2 , 1 2 , 2 2 , R w w ¼ w S , 1 S , 2 S , R p b a 1 1 1 p b a p = 2 b = 2 a = 2 p b a R S S

Multilayer Network

Multilayer Network, Abbreviated Notation. Layers 1 and 2 are hidden layers; layer 3 is the output layer. In the weight notation, the superscript gives the layer number and the subscripts give the neuron number and input number. The layer outputs are

a1 = f1(W1p + b1)
a2 = f2(W2a1 + b2)
a3 = f3(W3a2 + b3)

so the whole network computes a3 = f3(W3 f2(W2 f1(W1p + b1) + b2) + b3).
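Chaining three such layers gives exactly this forward pass; a compact sketch, with the layer sizes and random parameters chosen only for illustration:

```python
import numpy as np

logsig = lambda n: 1.0 / (1.0 + np.exp(-n))
purelin = lambda n: n

def forward(p, params, fs):
    """Compute a_k = f_k(W_k a_{k-1} + b_k) for each layer, with a_0 = p."""
    a = p
    for (W, b), f in zip(params, fs):
        a = f(W @ a + b)
    return a

sizes = [4, 3, 3, 1]                 # R inputs, then S1, S2, S3 neurons (assumed)
rng = np.random.default_rng(1)
params = [(rng.standard_normal((s_out, s_in)),   # W_k is S_k x S_{k-1}
           rng.standard_normal((s_out, 1)))      # b_k is S_k x 1
          for s_in, s_out in zip(sizes, sizes[1:])]
p = np.ones((sizes[0], 1))
print(forward(p, params, [logsig, logsig, purelin]))   # a3
```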

Delays and Integrators
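The transcript keeps only this slide's title. As a hedged reconstruction: the delay block outputs its input from the previous time step, and the integrator accumulates its input over time; the discrete-time forms below are an assumption about what the slide shows.

```python
def delay(inputs, a0=0.0):
    """Unit delay: a(t) = u(t-1), with a(0) given by the initial condition a0."""
    outputs, prev = [], a0
    for u in inputs:
        outputs.append(prev)
        prev = u
    return outputs

def integrator(inputs, a0=0.0):
    """Discrete integrator (running sum): a(t) = a(t-1) + u(t)."""
    outputs, total = [], a0
    for u in inputs:
        total += u
        outputs.append(total)
    return outputs

print(delay([1, 2, 3]))        # [0.0, 1, 2]
print(integrator([1, 2, 3]))   # [1.0, 3.0, 6.0]
```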

Recurrent Network
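Only the title survives here as well. A common form of recurrent network in this material feeds the layer output back through a delay, iterating a(t+1) = f(W a(t) + b) from the initial condition a(0) = p; a minimal sketch under that assumption, using the symmetric saturating linear function as a stand-in for f and made-up parameter values:

```python
import numpy as np

def satlins(n):
    """Symmetric saturating linear function: clip n to the interval [-1, 1]."""
    return np.clip(n, -1.0, 1.0)

def recurrent_network(p, W, b, steps, f=satlins):
    """Iterate a(t+1) = f(W a(t) + b) from the initial condition a(0) = p."""
    a, trajectory = p, [p]
    for _ in range(steps):
        a = f(W @ a + b)
        trajectory.append(a)
    return trajectory

W = np.array([[0.5, 0.0], [0.0, 0.5]])   # example weights (assumed values)
b = np.array([[0.1], [0.1]])             # example biases (assumed values)
p = np.array([[1.0], [-1.0]])            # a(0) = p
for t, a in enumerate(recurrent_network(p, W, b, steps=3)):
    print(f"a({t}) =", a.ravel())
```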