CISC 4631 Data Mining Lecture 11: Neural Networks

Biological Motivation
Can we simulate the human learning process? There are two schools of thought:
- model the biological learning process itself, or
- obtain highly effective algorithms, independent of whether these algorithms mirror biological processes (the focus of this course).
The biological learning system (the brain) is a complex network of neurons. ANNs are loosely motivated by biological neural systems; however, many features of ANNs are inconsistent with biological systems.
Why the analogy to biological neural systems, the most robust learning systems we know?
- It is an attempt to understand natural biological systems through computational modeling.
- Massive parallelism allows for computational efficiency.
- It helps us understand the "distributed" nature of neural representations (rather than a "localist" representation), which allows robustness and graceful degradation.
- Intelligent behavior appears as an "emergent" property of a large number of simple units, rather than from explicitly encoded symbolic rules and algorithms.

Neural Speed Constraints Neurons have a "switching time" on the order of a few milliseconds, compared to nanoseconds for current computing hardware. However, neural systems can perform complex cognitive tasks (vision, speech understanding) in tenths of a second. There is only time for about 100 serial steps in that time frame, compared to orders of magnitude more for current computers, so the brain must be exploiting "massive parallelism." The human brain has about 10^11 neurons with an average of 10^4 connections each.

Artificial Neural Networks (ANN)
- A network of simple units with real-valued inputs and outputs
- Many neuron-like threshold switching units
- Many weighted interconnections among units
- Highly parallel, distributed processing
- Emphasis on tuning weights automatically

Neural Network Learning Learning approach based on modeling adaptation in biological neural systems. Perceptron: Initial algorithm for learning simple neural networks (single layer) developed in the 1950’s. Backpropagation: More complex algorithm for learning multi-layer neural networks developed in the 1980’s.

Real Neurons

How Does our Brain Work? A neuron is connected to other neurons via its input and output links. Each incoming neuron has an activation value, and each connection has a weight associated with it. The neuron sums the incoming weighted values, and this value is input to an activation function. The output of the activation function is the output from the neuron.

Neural Communication Electrical potential across the cell membrane exhibits spikes called action potentials. A spike originates in the cell body, travels down the axon, and causes the synaptic terminals to release neurotransmitters. The chemical diffuses across the synapse to the dendrites of other neurons. Neurotransmitters can be excitatory or inhibitory. If the net input of neurotransmitters to a neuron from other neurons is excitatory and exceeds some threshold, it fires an action potential.

Real Neural Learning To model the brain we need to model a neuron. Each neuron performs a simple computation: it receives signals from its input links and uses these values to compute the activation level (or output) for the neuron. This value is passed to other neurons via its output links.

Prototypical ANN
- Units interconnected in layers, forming a directed acyclic graph (DAG)
- Network structure is fixed; learning = weight adjustment (backpropagation algorithm)

Appropriate Problems
- Instances are represented by vectors of attributes.
- The target function may be discrete-valued, real-valued, or vector-valued; ANNs can handle both classification and regression.
- The training data may be noisy.
- Long training times are acceptable.
- Fast evaluation of the learned function is needed.
- The learned function does not need to be readable: it is almost impossible to interpret neural networks except for the simplest target functions.

Perceptrons
The perceptron is a type of artificial neural network that can be seen as the simplest kind of feedforward neural network: a linear classifier. It was introduced in the late 1950s.
- Perceptron convergence theorem (Rosenblatt, 1962): a perceptron will learn to classify any linearly separable set of inputs.
- A perceptron is a single-layer, feed-forward network: data travels in only one direction.
- It cannot represent the XOR function (no linear separation exists).

ALVINN drives 70 mph on highways (see the ALVINN video).

Artificial Neuron Model Model the network as a graph with cells as nodes and synaptic connections as weighted edges w_ji from node i to node j. Model the net input to cell j as net_j = Σ_i w_ji o_i. The cell output is o_j = 1 if net_j > T_j, and 0 otherwise (T_j is the threshold for unit j).

Perceptron: Artificial Neuron Model Model the network as a graph with cells as nodes and synaptic connections as weighted edges w_ji from node i to node j. The input value received by a neuron is calculated by summing the weighted input values from its input links; the result is then passed through a threshold function. In vector notation: output +1 if w · x exceeds the threshold, −1 otherwise.
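
A minimal sketch of this weighted-sum-and-threshold computation in Python (not from the slides; the weight values and threshold below are illustrative assumptions):

```python
import numpy as np

def perceptron_output(w, x, t=0.0):
    """Weighted sum of inputs compared against a threshold:
    output +1 if w . x > t, else -1."""
    net = np.dot(w, x)
    return 1 if net > t else -1

# Illustrative weights/threshold: behaves like AND on {0,1} inputs.
w = np.array([0.6, 0.6])
print(perceptron_output(w, np.array([1, 1]), t=1.0))  # 1
print(perceptron_output(w, np.array([1, 0]), t=1.0))  # -1
```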

Different Threshold Functions [Figure: alternative threshold/activation functions.] We need to learn the weights w_1, …, w_n.

Examples (step activation function) [Truth tables for two-input and one-input Boolean functions (e.g., AND, OR, NOT) realized by a step-activation unit with appropriate weights w_i and threshold t.]

Neural Computation
McCulloch and Pitts (1943) showed how such model neurons could compute logical functions and be used to construct finite-state machines. They can be used to simulate logic gates:
- AND: let all w_ji be T_j / n, where n is the number of inputs.
- OR: let all w_ji be T_j.
- NOT: let the threshold be 0, with a single input having a negative weight.
Arbitrary logic circuits, sequential machines, and computers can be built with such gates (see the sketch below).
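
The gate recipe above can be checked with a small sketch (an assumption here: the unit fires when the net input reaches or exceeds the threshold):

```python
def threshold_unit(weights, T, inputs):
    """McCulloch-Pitts style unit: output 1 if the weighted sum of
    inputs reaches the threshold T, else 0 (>= assumed)."""
    net = sum(w * x for w, x in zip(weights, inputs))
    return 1 if net >= T else 0

T = 1.0  # an arbitrary positive threshold

def AND(x1, x2):           # all weights = T/n with n = 2 inputs
    return threshold_unit([T / 2, T / 2], T, [x1, x2])

def OR(x1, x2):            # all weights = T
    return threshold_unit([T, T], T, [x1, x2])

def NOT(x):                # threshold 0, single negative weight
    return threshold_unit([-T], 0, [x])

# Truth-table check
for a in (0, 1):
    for b in (0, 1):
        print(a, b, AND(a, b), OR(a, b), NOT(a))
```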

Perceptron Training
- Assume supervised training examples giving the desired output for a unit, given a set of known input activations.
- Goal: learn the weight vector (synaptic weights) that causes the perceptron to produce the correct +/−1 values.
- The perceptron uses an iterative update algorithm to learn a correct set of weights; two such algorithms are the perceptron training rule and the delta rule.
- Both algorithms are guaranteed to converge to somewhat different acceptable hypotheses, under somewhat different conditions.

Perceptron Training Rule
Update each weight by w_i ← w_i + Δw_i, where Δw_i = η (t − o) x_i, and:
- η is the learning rate, a small value (e.g., 0.1) that is sometimes made to decay as the number of weight-tuning operations increases,
- t is the target output for the current training example,
- o is the unit's output for the current training example,
- x_i is the i-th input value.
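
As a sketch, the update is a one-liner in Python (variable names are illustrative, not from the slides):

```python
import numpy as np

def perceptron_update(w, x, t, o, eta=0.1):
    """One application of the perceptron training rule:
    w_i <- w_i + eta * (t - o) * x_i, vectorized over all weights."""
    return w + eta * (t - o) * x
```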

Perceptron Training Rule
Equivalent to the rules:
- If the output is correct, do nothing.
- If the output is too high, lower the weights on active inputs.
- If the output is too low, increase the weights on active inputs.
It can be proven that the rule converges if the training data is linearly separable and η is sufficiently small.

Perceptron Learning Algorithm
Iteratively update the weights until convergence; each execution of the outer loop is typically called an epoch.
- Initialize weights to random values.
- Until the outputs for all training examples are correct, for each training pair E:
  - Compute the current output o_j for E given its inputs.
  - Compare the current output to the target value t_j for E.
  - Update the synaptic weights and threshold using the learning rule.
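
A hedged sketch of this loop in Python (assuming ±1 targets and a bias weight in place of an explicit threshold; the OR data at the end is just an illustration):

```python
import numpy as np

def train_perceptron(X, targets, eta=0.1, max_epochs=100):
    """Train a thresholded perceptron until every training example is
    classified correctly (or max_epochs is reached)."""
    X = np.hstack([X, np.ones((len(X), 1))])          # constant 1 input acts as the bias
    w = np.random.uniform(-0.05, 0.05, X.shape[1])    # small random initial weights
    for epoch in range(max_epochs):
        errors = 0
        for x, t in zip(X, targets):
            o = 1 if np.dot(w, x) > 0 else -1         # current output
            if o != t:
                w += eta * (t - o) * x                # perceptron training rule
                errors += 1
        if errors == 0:                               # all outputs correct: converged
            break
    return w

# Example: the (linearly separable) OR function with +/-1 targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
t = np.array([-1, 1, 1, 1])
print(train_perceptron(X, t))
```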

Delta Rule
- Works reasonably well with data that is not linearly separable; it minimizes error.
- A gradient descent method: the basis of the backpropagation method and of methods that work in multidimensional continuous spaces.
- A detailed discussion of this rule is beyond the scope of this course.

Perceptron as a Linear Separator Since the perceptron uses a linear threshold function, it searches for a linear separator (a hyperplane in n-dimensional space) that discriminates the classes.

Concept Perceptron Cannot Learn A perceptron cannot learn exclusive-or (XOR), or the parity function in general, because the positive and negative examples cannot be separated by a single line.

General Structure of an ANN Perceptrons have no hidden layers; multilayer perceptrons may have many.

Learning Power of an ANN
- A perceptron is guaranteed to converge if the data is linearly separable: it will learn a hyperplane that separates the classes.
- The XOR function (page 250 of TSK) is not linearly separable.
- A multilayer ANN has no such guarantee of convergence, but it can learn functions that are not linearly separable.
- An ANN with a hidden layer can learn the XOR function by constructing two hyperplanes (see page 253).

Multilayer Network Example The decision surface is highly nonlinear

Sigmoid Threshold Unit The sigmoid unit's output is a nonlinear function of its inputs, but one that is also differentiable, so we can derive gradient descent rules to train it. Multilayer networks of sigmoid units are trained with backpropagation.
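
A short sketch of the sigmoid and its derivative in Python (the weights and inputs are illustrative; the derivative form below is the one that appears when deriving gradient descent rules):

```python
import numpy as np

def sigmoid(net):
    """Logistic sigmoid: squashes the net input into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-net))

def sigmoid_derivative(out):
    """Derivative of the sigmoid expressed in terms of its own output: out * (1 - out)."""
    return out * (1.0 - out)

net = np.dot(np.array([0.3, -0.8]), np.array([1.0, 0.5]))  # example weights and inputs
out = sigmoid(net)
print(out, sigmoid_derivative(out))
```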

Multi-Layer Networks
- Multi-layer networks can represent arbitrary functions, but an effective learning algorithm for such networks was long thought to be difficult to find.
- A typical multi-layer network consists of an input, a hidden, and an output layer, each fully connected to the next, with activation feeding forward. The weights determine the function computed.
- Given an arbitrary number of hidden units, any Boolean function can be computed with a single hidden layer.
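
A minimal sketch of activation feeding forward through one hidden layer of sigmoid units (not from the slides; the layer sizes and random weights are illustrative, and bias terms are omitted for brevity):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W_hidden, W_output):
    """Feed activation forward: input -> hidden layer -> output layer."""
    hidden = sigmoid(W_hidden @ x)       # hidden-layer activations
    output = sigmoid(W_output @ hidden)  # output-layer activations
    return output

# Illustrative shapes: 3 inputs, 2 hidden units, 1 output unit.
rng = np.random.default_rng(0)
W_hidden = rng.normal(size=(2, 3))
W_output = rng.normal(size=(1, 2))
print(forward(np.array([0.5, 1.0, -0.2]), W_hidden, W_output))
```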

Logistic Function
[Figure: a single logistic unit combines the inputs Age, Gender, and Stage, weighted by learned coefficients, to produce a prediction: the "probability of being alive" (0.6 in the example).]

Neural Network Model
[Figure: the same inputs feed a hidden layer of logistic units whose outputs are combined by an output unit; the independent variables, weights, hidden layer, and prediction are labeled.]

Getting an Answer from a NN
[Figures, three slides: activation is fed forward through the first-layer weights, the hidden layer, and the output weights to produce the prediction, the "probability of being alive" (0.6).]

Comments on Training Algorithm
- Not guaranteed to converge to zero training error; it may converge to local optima or oscillate indefinitely. However, in practice it does converge to low error for many large networks on real data.
- Many epochs (thousands) may be required, meaning hours or days of training for large networks.
- To avoid local-minima problems, run several trials starting with different random weights (random restarts). Either take the result of the trial with the lowest training-set error, or build a committee of results from multiple trials (possibly weighting votes by training-set accuracy).

Hidden Unit Representations
- Trained hidden units can be seen as newly constructed features that make the target concept linearly separable in the transformed space: a key feature of ANNs.
- On many real domains, hidden units can be interpreted as representing meaningful features such as vowel detectors or edge detectors.
- However, the hidden layer can also become a distributed representation of the input in which each individual unit is not easily interpretable as a meaningful feature.

Expressive Power of ANNs
- Boolean functions: any Boolean function can be represented by a two-layer network with sufficient hidden units.
- Continuous functions: any bounded continuous function can be approximated with arbitrarily small error by a two-layer network.
- Arbitrary functions: any function can be approximated to arbitrary accuracy by a three-layer network.

Sample Learned XOR Network
[Figure: a trained two-input network with hidden units A and B and output unit O; the learned weights are shown on the connections.]
- Hidden unit A represents ¬(X ∧ Y).
- Hidden unit B represents (X ∨ Y).
- Output O represents A ∧ B = ¬(X ∧ Y) ∧ (X ∨ Y) = X ⊕ Y.
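
A quick check (not from the slides) that the logical decomposition above does compute XOR:

```python
# A = NOT(X AND Y), B = X OR Y, output O = A AND B should equal X XOR Y.
for X in (0, 1):
    for Y in (0, 1):
        A = int(not (X and Y))   # hidden unit A
        B = int(X or Y)          # hidden unit B
        O = int(A and B)         # output unit
        assert O == (X ^ Y)
        print(X, Y, "->", O)
```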

Expressive Power of ANNs
- Universal function approximator: given enough hidden units, a network can approximate any continuous function f.
- Why not use millions of hidden units? Efficiency (neural network training is slow) and overfitting.

Combating Overfitting in Neural Nets
Many techniques exist; two popular ones are:
- Early stopping: use "a lot" of hidden units, but don't over-train.
- Cross-validation: choose the "right" number of hidden units.

Determining the Best Number of Hidden Units
- Too few hidden units prevent the network from adequately fitting the data; too many can result in over-fitting.
- Use internal cross-validation to empirically determine an optimal number of hidden units (see the sketch below).
[Figure: error vs. number of hidden units; error on the training data keeps decreasing while error on test data eventually rises.]
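
One way to carry out this internal cross-validation, sketched here with scikit-learn's MLPClassifier standing in for the network (an assumption; the slides do not name a library, and the candidate sizes are illustrative):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

def choose_hidden_units(X, y, candidates=(2, 4, 8, 16, 32), k=10):
    """Pick the hidden-layer size with the best average k-fold CV accuracy."""
    best_h, best_score = None, -np.inf
    for h in candidates:
        net = MLPClassifier(hidden_layer_sizes=(h,), max_iter=2000)
        score = cross_val_score(net, X, y, cv=k).mean()  # mean accuracy over k folds
        if score > best_score:
            best_h, best_score = h, score
    return best_h
```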

Over-Training Prevention
- Running too many epochs can result in over-fitting.
- Keep a hold-out validation set and test accuracy on it after every epoch; stop training when additional epochs actually increase validation error (a sketch follows below).
- To avoid losing training data to validation: use internal 10-fold CV on the training set to compute the average number of epochs that maximizes generalization accuracy, then train the final network on the complete training set for that many epochs.
[Figure: error vs. number of training epochs; error on the training data keeps falling while error on test data begins to rise once over-training sets in.]
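
A minimal early-stopping sketch (not the slides' code): train one epoch at a time and keep the weights from the epoch with the lowest validation error. The callables `train_one_epoch` and `validation_error` are hypothetical hooks for whatever network implementation is in use.

```python
import copy

def train_with_early_stopping(net, train_one_epoch, validation_error,
                              max_epochs=1000, patience=10):
    """Generic early-stopping driver.
    train_one_epoch(net): performs one pass of weight updates in place.
    validation_error(net): returns error on the hold-out validation set."""
    best_net, best_err, bad_epochs = copy.deepcopy(net), float("inf"), 0
    for _ in range(max_epochs):
        train_one_epoch(net)
        err = validation_error(net)
        if err < best_err:                     # new best: remember these weights
            best_net, best_err, bad_epochs = copy.deepcopy(net), err, 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:         # validation error stopped improving
                break
    return best_net
```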

Summary of Neural Networks
When are neural networks useful?
- Instances are represented by attribute-value pairs, particularly when attributes are real-valued.
- The target function is discrete-valued, real-valued, or vector-valued.
- Training examples may contain errors.
- Fast evaluation times are necessary.
When are they not?
- Fast training times are necessary.
- Understandability of the learned function is required.

Successful Applications
- Text to speech (NetTalk)
- Fraud detection
- Financial applications (HNC, eventually bought by Fair Isaac)
- Chemical plant control (Pavilion Technologies)
- Automated vehicles
- Game playing (Neurogammon)
- Handwriting recognition

Issues in Neural Nets
Learning the proper network architecture:
- Grow the network until it is able to fit the data (Cascade Correlation, Upstart).
- Shrink a large network until it is unable to fit the data (Optimal Brain Damage).
Recurrent networks use feedback and can learn finite-state machines with "backpropagation through time."