Neural Networks
MSE 2400 EaLiCaRA, Spring 2015, Dr. Tom Way

Background
Neural networks can be:
- Biological models
- Artificial models
The goal is to produce artificial systems capable of sophisticated computations similar to those performed by the human brain.

Biological analogy and some main ideas
- The brain is composed of a mass of interconnected neurons; each neuron is connected to many other neurons.
- Neurons transmit signals to each other.
- Whether a signal is transmitted is an all-or-nothing event (the electrical potential in the cell body of the neuron is thresholded).
- Whether a signal is sent depends on the strength of the bond (synapse) between the two neurons.

How Does the Brain Work?
NEURON
- The cell that performs information processing in the brain.
- The fundamental functional unit of all nervous system tissue.

Brain vs. Digital Computers
- Computers require hundreds of cycles to simulate the firing of a single neuron; the brain can fire all of its neurons in a single step (parallelism).
- Serial computers require billions of cycles to perform some tasks that the brain completes in less than a second, e.g. face recognition.

Comparison of Brain and Computer

Biological neuron
A neuron has:
- A branching input structure (the dendrites)
- A branching output structure (the axon)
Information flows from the dendrites to the axon via the cell body.
- The axon connects to the dendrites of other neurons via synapses.
- Synapses vary in strength.
- Synapses may be excitatory or inhibitory.

What is an artificial neuron?
Definition: a non-linear, parameterized function with a restricted output range.
[Diagram: inputs x1, x2, x3 and a bias weight w0 are combined to produce the output y]
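A minimal sketch of such a neuron in Python: a weighted sum of the inputs plus the bias w0, passed through a bounded non-linearity. The sigmoid and the example numbers are assumptions for illustration, not the slide's own choice.

```python
import math

def neuron(x, w, w0):
    # Weighted sum of the inputs plus the bias weight w0.
    s = w0 + sum(wi * xi for wi, xi in zip(w, x))
    # Sigmoid non-linearity keeps the output restricted to (0, 1).
    return 1.0 / (1.0 + math.exp(-s))

y = neuron(x=[0.5, -1.0, 2.0], w=[0.8, 0.2, -0.5], w0=0.1)
```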

Learning in Neural Networks
Learning is the procedure of estimating the parameters of the neurons so that the whole network can perform a specific task.
Two types of learning:
- Supervised learning
- Unsupervised learning
The (supervised) learning process:
- Present the network with a number of inputs and their corresponding outputs.
- See how closely the actual outputs match the desired ones.
- Modify the parameters to better approximate the desired outputs.
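A sketch of that supervised loop, assuming a generic error-driven update; `predict` and `update` are hypothetical stand-ins for whatever network and learning rule are used.

```python
def train(network, examples, epochs=100):
    for _ in range(epochs):
        for inputs, desired in examples:
            actual = network.predict(inputs)   # see how close the actual output is
            error = desired - actual           # compare with the desired output
            network.update(inputs, error)      # modify parameters to reduce the error
    return network
```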

Supervised learning
- The desired response of the neural network for particular inputs is known in advance.
- A "professor" provides examples and teaches the neural network how to perform a certain task.

Unsupervised learning
- Idea: group typical input data according to resemblance criteria that are unknown a priori (data clustering).
- No professor is needed; the network finds the correlations in the data by itself.
- Example of such networks: Kohonen feature maps.
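For concreteness, a minimal sketch of unsupervised learning in the style of a 1-D Kohonen feature map; the map size, learning rate, and random data are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, n_features = 10, 2
weights = rng.random((n_units, n_features))   # one weight vector per map unit

def train_step(x, lr=0.1, radius=1):
    # Find the best-matching unit (weight vector closest to the input).
    bmu = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
    # Move the best-matching unit and its map neighbours toward the input.
    for i in range(max(0, bmu - radius), min(n_units, bmu + radius + 1)):
        weights[i] += lr * (x - weights[i])

data = rng.random((200, n_features))          # unlabelled data to cluster
for x in data:
    train_step(x)
```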

Zip Code Example
Examples of handwritten postal codes drawn from a database made available by the US Postal Service.

Character recognition example
- Image: 256 x 256 pixels
- 8-bit pixel values (grey level)
- It is necessary to extract features before feeding the image to a network.
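One simple, hypothetical way to extract features from such an image (not the slide's specific method): average 16 x 16 pixel blocks to reduce the 256 x 256 grey-level image to a 256-dimensional input vector.

```python
import numpy as np

# A stand-in 256x256 image with 8-bit grey-level pixel values.
image = np.random.randint(0, 256, (256, 256), dtype=np.uint8)

# Split into 16x16 blocks and average each block, scaling to [0, 1].
blocks = image.reshape(16, 16, 16, 16)        # (block_row, row_in_block, block_col, col_in_block)
features = blocks.mean(axis=(1, 3)) / 255.0   # 16x16 block averages
x = features.flatten()                        # 256-dimensional feature vector
```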

Neural Networks
- A mathematical model for solving engineering problems.
- A group of highly connected neurons realizing compositions of non-linear functions.
Tasks:
- Classification
- Discrimination
- Estimation
Two types of networks:
- Feed-forward neural networks
- Recurrent neural networks

Feed-forward Networks
- Units are arranged in layers.
- Each unit is linked only to units in the next layer; no units are linked within the same layer, back to a previous layer, or skipping a layer.
- Computations proceed uniformly from input units to output units.
- No internal state exists.

Feed-Forward Neural Networks
[Diagram: inputs x1, x2, ..., xn feed a 1st hidden layer, a 2nd hidden layer, and then the output layer]
- Information is propagated from the inputs to the outputs.
- It can pass through one or more hidden layers.
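A sketch of that forward propagation through two hidden layers; the layer sizes, sigmoid activation, and random weights are assumptions for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W1, b1, W2, b2, W3, b3):
    h1 = sigmoid(W1 @ x + b1)      # 1st hidden layer
    h2 = sigmoid(W2 @ h1 + b2)     # 2nd hidden layer
    return sigmoid(W3 @ h2 + b3)   # output layer

rng = np.random.default_rng(0)
x = rng.random(3)                                     # inputs x1..xn
W1, b1 = rng.standard_normal((4, 3)), np.zeros(4)
W2, b2 = rng.standard_normal((4, 4)), np.zeros(4)
W3, b3 = rng.standard_normal((2, 4)), np.zeros(2)
y = forward(x, W1, b1, W2, b2, W3, b3)                # outputs
```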

Feed-Forward Example
[Diagram: a small feed-forward network with inputs I1 and I2, hidden units H3-H6 with thresholds t = -0.5, 1.5, and 0.5, and output unit O7; weights W13 = -1, W24 = -1, W16 = 1, W25 = 1, W35 = 1, W46 = 1, W57 = 1, W67 = 1]

Recurrent Network (1)
- The brain is not, and cannot be, a feed-forward network.
- A recurrent network allows activation to be fed back to previous units.
- Internal state is stored in the activation levels.
- Can become unstable or oscillate.

Recurrent Network (2)
- May take a long time to compute a stable output.
- The learning process is much more difficult.
- Can implement more complex designs.
- Can model certain systems with internal states.

Recurrent Neural Networks
- Can have arbitrary topologies.
- Can model systems with internal states (dynamic systems).
- Delays are associated with specific weights.
- Training is more difficult.
- Performance may be problematic:
  - Stable outputs may be more difficult to evaluate.
  - Unexpected behavior (oscillation, chaos, ...).
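A minimal sketch of the recurrence: the previous activation (the internal state) is fed back as an extra input at the next time step. The shapes, the tanh non-linearity, and the random sequence are assumptions for illustration.

```python
import numpy as np

def rnn_step(x, h_prev, W_in, W_rec, b):
    # New internal state from the current input and the fed-back previous state.
    return np.tanh(W_in @ x + W_rec @ h_prev + b)

rng = np.random.default_rng(0)
W_in = rng.standard_normal((3, 2))
W_rec = rng.standard_normal((3, 3))
b = np.zeros(3)

h = np.zeros(3)                      # initial internal state
for x in rng.random((5, 2)):         # a short input sequence
    h = rnn_step(x, h, W_in, W_rec, b)
```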

Multi-layer Networks and Perceptrons
- Networks without a hidden layer are called perceptrons; perceptrons are very limited in what they can represent, but this makes their learning problem much simpler.
- Multi-layer networks have one or more layers of hidden units.
- With two (possibly very large) hidden layers, it is possible to implement any function.

Perceptrons
- First studied in the late 1950s.
- Also known as layered feed-forward networks.
- The only efficient learning element at that time was for single-layer networks.
- Today, "perceptron" is used as a synonym for a single-layer, feed-forward network.

Perceptrons

What can Perceptrons Represent?
- Some complex Boolean functions can be represented, for example the majority function (covered in this lecture; see the sketch below).
- However, perceptrons are limited in the Boolean functions they can represent.
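A sketch of a perceptron computing the 3-input majority function: equal weights and a threshold of 1.5 make the unit fire exactly when at least two inputs are 1. The specific weight and threshold values are one valid choice, not the slide's own numbers.

```python
def majority_perceptron(x1, x2, x3):
    s = 1.0 * x1 + 1.0 * x2 + 1.0 * x3   # weighted sum, all weights equal to 1
    return 1 if s > 1.5 else 0           # step activation with threshold 1.5

assert majority_perceptron(1, 1, 0) == 1
assert majority_perceptron(1, 0, 0) == 0
```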

Need for hidden units
- With one layer of enough hidden units, the input can be recoded (perhaps just memorized).
- This recoding allows any mapping to be represented.
- Problem: how can the weights of the hidden units be trained?

Backpropagation
- In 1969, a method for learning in multi-layer networks, backpropagation, was invented by Bryson and Ho.
- The backpropagation algorithm is a sensible approach for apportioning the error contribution of each weight.
- Works basically the same way as a perceptron.

Backpropagation flow

Backpropagation: Network training
1. Initialize the network with random weights.
2. For all training cases (called examples):
   a. Present the training inputs to the network and calculate the output.
   b. For all layers (starting with the output layer, back to the input layer):
      i. Compare the network output with the correct output, i.e. what you want the network to produce (error function).
      ii. Adapt the weights in the current layer.
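A sketch of this procedure for a network with one hidden layer, using sigmoid units and squared error. The layer sizes, learning rate, and randomly generated training data are assumptions for illustration only.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 2))            # 1. initialize with random weights
W2 = rng.standard_normal((1, 4))
X = rng.random((20, 2))                     # training inputs
T = (X.sum(axis=1) > 1.0).astype(float)     # correct outputs for each example
lr = 0.5

for epoch in range(1000):
    for x, t in zip(X, T):                  # 2. for all training examples
        h = sigmoid(W1 @ x)                 # a. present inputs, calculate output
        y = sigmoid(W2 @ h)[0]
        delta_out = (y - t) * y * (1 - y)               # i. error term at the output layer
        delta_hid = (W2[0] * delta_out) * h * (1 - h)   # error propagated back to the hidden layer
        W2 -= lr * delta_out * h[np.newaxis, :]         # ii. adapt weights, layer by layer
        W1 -= lr * np.outer(delta_hid, x)
```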

Backpropagation Algorithm - Main Idea - Error in Hidden Layers
The ideas of the algorithm can be summarized as follows:
1. Compute the error term for the output units using the observed error.
2. Starting from the output layer, repeatedly propagate the error term back to the previous layer and update the weights between the two layers, until the earliest hidden layer is reached.

How many hidden layers?
- Usually just one (i.e., a 2-layer net).
How many hidden units in the layer?
- Too few ==> can't learn.
- Too many ==> poor generalization.

How big a training set?
- Determine your target error rate, e; the success rate is 1 - e.
- A typical training set has approximately n/e examples, where n is the number of weights in the net.
- Example: e = 0.1 and n = 80 weights give a training set size of 800; a network trained until 95% of the training set is classified correctly should typically produce about 90% correct classification on the testing set.
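The rule of thumb as a quick calculation (variable names are illustrative):

```python
n_weights = 80                        # number of weights in the net
e = 0.1                               # target error rate
training_set_size = n_weights / e     # approx. 800 examples
```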

Summary
- A neural network is a computational model that simulates some properties of the human brain.
- The connections and the nature of the units determine the behavior of a neural network.
- Perceptrons are feed-forward networks that can only represent linearly separable (very simple) functions.

Summary (cont'd)
- Given enough units, any function can be represented by multi-layer feed-forward networks.
- Backpropagation learning works on multi-layer feed-forward networks.
- Neural networks are widely used in developing artificial learning systems.