PYTHON Deep Learning Prof. Muhammad Saeed.



Deep Learning: Introduction. Dept. of Computer Science & IT, FUUAST (11/23/2018)


Biological Neuron: the human brain contains roughly 100 billion neurons (on average) and 100-500 trillion synaptic connections.

Synapse vs. Weight

McCulloch-Pitts Neuron
In 1943, Warren McCulloch and Walter Pitts published the first mathematical model of a 'neural network'. Their "neurons" operate under the following assumptions:
1. They are binary devices (Vi ∈ {0, 1}).
2. Each neuron has a fixed threshold, theta.
3. The neuron receives inputs from excitatory synapses, all having identical weights.
4. Inhibitory inputs have absolute veto power over any excitatory inputs.
5. At each time step the neurons are updated simultaneously (synchronously) by summing the weighted excitatory inputs and setting the output Vi to 1 if and only if the sum is greater than or equal to the threshold AND the neuron receives no inhibitory input.
(FUUAST Computer Science, 9/6/2019)
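The assumptions above can be sketched directly in Python. This is an illustrative helper, not code from the slides: a unit that fires if and only if no inhibitory input is active and the excitatory sum reaches the threshold.

```python
def mp_unit(excitatory, inhibitory, theta):
    """McCulloch-Pitts unit: binary inputs, fixed threshold theta.
    Any active inhibitory input is an absolute veto (assumption 4);
    otherwise the unit fires iff the excitatory sum >= theta."""
    if any(inhibitory):
        return 0
    return 1 if sum(excitatory) >= theta else 0

# With identical unit weights, AND is a threshold of 2 over two
# excitatory inputs, and OR is a threshold of 1.
print(mp_unit([1, 1], [], theta=2))  # AND(1, 1) -> 1
print(mp_unit([1, 0], [], theta=1))  # OR(1, 0)  -> 1
```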

McCulloch-Pitts Neuron for the AND Function. McCulloch-Pitts Neuron for the OR Function.

McCulloch-Pitts Neuron for the XOR Function. XOR is not linearly separable, so no single McCulloch-Pitts unit can compute it; it requires a network of at least two layers.
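One way to realize XOR with McCulloch-Pitts units (a sketch, not necessarily the exact construction on the slide): an OR unit excites the output, an AND unit inhibits it, so the output fires iff exactly one input is 1.

```python
def mp_unit(excitatory, inhibitory, theta):
    # Fires iff no inhibitory input is active and excitatory sum >= theta
    if any(inhibitory):
        return 0
    return 1 if sum(excitatory) >= theta else 0

def mp_xor(x1, x2):
    h_or = mp_unit([x1, x2], [], theta=1)    # 1 if either input fires
    h_and = mp_unit([x1, x2], [], theta=2)   # 1 only if both fire
    # The AND unit acts as an inhibitory veto on the OR signal
    return mp_unit([h_or], [h_and], theta=1)
```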

McCulloch-Pitts Neuron: a NOR gate outputs 1 only when all inputs are 0.

McCulloch-Pitts Neuron: a NAND gate outputs 0 only when all inputs are 1.

Perceptron
Perceptrons (Rosenblatt 1958; Minsky/Papert 1969) are generalized variants of the earlier, simpler McCulloch-Pitts model (1943):
• Inputs are weighted
• Weights are real numbers (positive and negative)
• No special inhibitory inputs
A perceptron is a single-layer neural network; a multilayer perceptron (several layers of such units) is what is usually called a neural network.
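A minimal sketch of a Rosenblatt-style unit with real-valued weights and a bias (the function names and example weights are illustrative, not from the slides):

```python
def perceptron(x, w, b):
    """Weighted sum of inputs plus bias, thresholded at zero."""
    s = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if s >= 0 else 0

# AND with real-valued weights: the sum 1*x1 + 1*x2 - 1.5 is
# non-negative only when both inputs are 1
def and_gate(x1, x2):
    return perceptron([x1, x2], [1.0, 1.0], -1.5)
```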

Rosenblatt Perceptron

Structure of a node. Squashing (transfer) functions limit node output. [Figure: transfer function plots 1 and 2]

Squashing (Transfer) Functions. [Figure: transfer function plots 3 and 4]
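The four functions plotted on these two slides are not recoverable from the transcript; as an assumption, here are four common squashing/transfer choices:

```python
import math

def step(x):
    """Hard threshold: output in {0, 1}."""
    return 1.0 if x >= 0 else 0.0

def sigmoid(x):
    """Logistic sigmoid: output in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    """Hyperbolic tangent: output in (-1, 1)."""
    return math.tanh(x)

def relu(x):
    """Rectified linear unit: unbounded above, but widely used."""
    return max(0.0, x)
```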

Feeding data through the net: (1 × 0.25) + (0.5 × (−1.5)) = 0.25 + (−0.75) = −0.5. Squashing with the sigmoid: 1/(1 + e^0.5) ≈ 0.3775.
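The same forward pass in code, assuming the sigmoid as the squashing function:

```python
import math

inputs = [1.0, 0.5]
weights = [0.25, -1.5]

# Weighted sum: (1)(0.25) + (0.5)(-1.5) = 0.25 - 0.75 = -0.5
s = sum(w * x for w, x in zip(weights, inputs))

# Sigmoid squashing: 1 / (1 + e^0.5)
out = 1.0 / (1.0 + math.exp(-s))
print(round(s, 2), round(out, 4))  # -0.5 0.3775
```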

Problems with Perceptrons
Frank Rosenblatt proved mathematically that the perceptron learning rule converges if the two classes can be separated by a linear hyperplane, but problems arise if the classes cannot be separated perfectly by a linear classifier.
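The convergence claim can be checked with the perceptron learning rule on a linearly separable problem. This is a sketch, with AND chosen as the example dataset:

```python
def train_perceptron(data, lr=1.0, epochs=10):
    """Perceptron learning rule: w <- w + lr * (target - prediction) * x.
    Guaranteed to converge when the classes are linearly separable."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in data:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else 0
            err = target - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# AND is linearly separable, so the rule converges; XOR is not, and would not
and_data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(and_data)
```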

Artificial Neural Networks …
Neural networks are a set of algorithms, modeled loosely after the human brain, that are designed to recognize patterns. They interpret sensory data through a kind of machine perception, labeling or clustering raw input. The patterns they recognize are numerical, contained in vectors, into which all real-world data, be it images, sound, text or time series, must be translated.

… Artificial Neural Networks
Neural networks help us cluster and classify. You can think of them as a clustering and classification layer on top of the data you store and manage. They help to group unlabeled data according to similarities among the example inputs, and they classify data when they have a labeled dataset to train on. (Neural networks can also extract features that are fed to other algorithms for clustering and classification.)

ANNs – The Basics
ANNs incorporate the two fundamental components of biological neural nets: neurons (nodes) and synapses (weights). [Figure: a network with Layer 1, Layer 2, Layer 3]

One-Layer Neural Network

Multiple Layers of Neurons
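A multi-layer forward pass can be sketched as repeated application of a single-layer step. The weights and network shape below are arbitrary illustrative values, not taken from the slides:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One fully connected layer: each node squashes its weighted sum."""
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def forward(x, net):
    """Feed the activations through each layer in turn."""
    for weights, biases in net:
        x = layer(x, weights, biases)
    return x

# Two inputs -> hidden layer of 3 nodes -> single output node
net = [
    ([[0.2, -0.5], [0.7, 0.1], [-0.3, 0.4]], [0.0, 0.1, -0.1]),
    ([[0.5, -0.2, 0.3]], [0.0]),
]
output = forward([1.0, 0.5], net)
```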

Deep Neural Networks
A deep neural network is a neural network with a certain level of complexity: a neural network with more than two layers. Deep neural networks use sophisticated mathematical modeling to process data in complex ways.

Multiple-Layer Neural Networks

ANN Architectures
• Multilayer Perceptron (MLP)
• Radial Basis Function Networks (RBF)
• Self-Organizing Feature Maps (SOFM)
• Recurrent (feedback) networks
Learning paradigms: supervised learning and unsupervised learning.

Feed-forward Nets
• Information flow is unidirectional: data is presented to the input layer, passed on to the hidden layer(s), and passed on to the output layer
• Information is distributed
• Information processing is parallel
• Internal representation (interpretation) of data

Data is presented to the network in the form of activations in the input layer. Examples:
• Pixel intensities (for pictures)
• Molecule concentrations (for an artificial nose)
• Share prices (for stock-market prediction)
Data usually requires preprocessing; this is analogous to the senses in biology. How to represent more abstract data, e.g. a name? Choose a pattern, e.g. 0-0-1 for "Zeeshan" and 0-1-0 for "AleRaza".

Applications of Feed-forward Nets
• Pattern recognition: character recognition, face recognition
• Sonar mine/rock recognition (Gorman & Sejnowski, 1988)
• Navigation of a car (Pomerleau, 1989)
• Stock-market prediction
• Pronunciation: NETtalk (Sejnowski & Rosenberg, 1987)

Feedback Neural Networks
Feedback (or recurrent, or interactive) networks can have signals traveling in both directions by introducing loops in the network. Feedback networks are powerful and can get extremely complicated. Computations derived from earlier input are fed back into the network, which gives them a kind of memory. Feedback networks are dynamic; their 'state' changes continuously until they reach an equilibrium point. They remain at the equilibrium point until the input changes and a new equilibrium needs to be found. The learning rate is introduced as a constant (usually very small) in order to force the weights to be updated smoothly and slowly (avoiding big steps and chaotic behaviour).

Recurrent Networks
Feed-forward networks:
• Information only flows one way
• One input pattern produces one output
• No sense of time (or memory of previous state)
Recurrency:
• Nodes connect back to other nodes or themselves
• Information flow is multidirectional
• Sense of time and memory of previous state(s)
Biological nervous systems show high levels of recurrency (but feed-forward structures exist too).

Training the Network – Learning
Backpropagation:
• Requires a training set (input/output pairs)
• Starts with small random weights
• Error is used to adjust weights (supervised learning): gradient descent on the error landscape
Weight settings determine the behaviour of a network.

new weight = old weight − learning rate × ∂E/∂w
That is, each weight is moved against the derivative of the error with respect to that weight, scaled by the learning rate.
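The update rule in code, as a minimal sketch: a single sigmoid neuron trained by gradient descent on squared error. The toy data and hyperparameters here are illustrative assumptions:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

data = [([0.0], 0.0), ([1.0], 1.0)]  # toy input/target pairs
w, b = 0.0, 0.0
lr = 0.5  # learning rate: kept small so weights change smoothly

for _ in range(2000):
    for x, target in data:
        out = sigmoid(w * x[0] + b)
        # dE/dw for E = 0.5 * (out - target)^2 with a sigmoid output
        grad = (out - target) * out * (1.0 - out)
        w -= lr * grad * x[0]  # new weight = old weight - lr * dE/dw
        b -= lr * grad         # bias updated by the same rule
```

After training, the neuron's output should sit on the correct side of 0.5 for both toy inputs.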

Pros & Cons
Advantages:
• It works!
• Relatively fast
Downsides:
• Requires a training set
• Can be slow
• Probably not biologically realistic
Alternatives to backpropagation:
• Hebbian learning: not successful in feed-forward nets
• Reinforcement learning: only limited success
• Artificial evolution: more general, but can be even slower than backprop

The End