1
Data Mining: Neural Networks (last modified 1/9/19)
2
Goals for this Topic
- Basic understanding of neural networks and how they work
- Gain an understanding of:
  - How to use neural networks to solve real problems
  - When neural networks are appropriate
  - Their strengths and weaknesses
- A detailed understanding of how they work, and of the different types of neural networks, is deferred until "Machine Learning"
3
Biological Motivation
Can we simulate the human learning process? Two schools of thought:
- Model the biological learning process itself
- Obtain highly effective algorithms, independent of whether these algorithms mirror biological processes (this course)
The biological learning system (the brain) is a complex network of neurons. ANNs are loosely motivated by biological neural systems; however, many features of ANNs are inconsistent with biological systems.
- Analogy to biological neural systems, the most robust learning systems we know
- Attempt to understand natural biological systems through computational modeling
- Massive parallelism allows for computational efficiency
- Helps us understand the "distributed" nature of neural representations (rather than "localist" representations), which allows robustness and graceful degradation
- Intelligent behavior as an "emergent" property of a large number of simple units, rather than of explicitly encoded symbolic rules and algorithms
4
Neural Speed Constraints
What do you think the differences are in speed between the elements in a computer and in the brain? How fast are neurons?
- Neurons have a "switching time" on the order of a few milliseconds, compared to nanoseconds for current computing hardware.
- However, neural systems can perform complex cognitive tasks (vision, speech understanding) in tenths of a second. How?
- There is only time for about 100 serial steps in this time frame, compared to orders of magnitude more for current computers.
- The brain must be exploiting "massive parallelism."
- The human brain has about 30 billion neurons with an average of 10,000 connections each.
5
Artificial Neural Networks (ANN)
- A network of simple units with real-valued inputs and outputs
- Many neuron-like threshold switching units
- Many weighted interconnections among units
- Highly parallel, distributed processing
- Emphasis on tuning weights automatically
6
Real Neurons: Do you know which way the signal flows?
Answer on next slide
7
How Does our Brain Work?
- A neuron is connected to other neurons via its input and output links
- Dendrites receive the signal (input)
- Axons transmit the signal (output)
- An axon comes close to the dendrites of other neurons (about 10,000 of them); the small gap between them is called a synapse
- Chemical signals are used to cross synapses
- There are about 50 different types of neurotransmitters
- Dendrites have receptors keyed to each neurotransmitter
- Signals can be excitatory or inhibitory
8
How Does our Brain Work?
- Each incoming neuron has an activation value, and each connection has an associated weight
- The neuron sums the incoming weighted values, and this sum is the input to an activation function
- The output of the activation function is the output of the neuron
9
Neural Communication
- The electrical potential across the cell membrane exhibits spikes called action potentials.
- A spike originates in the cell body, travels down the axon, and causes synaptic terminals to release neurotransmitters.
- The chemical diffuses across the synapse to the dendrites of other neurons.
- Neurotransmitters can be excitatory or inhibitory.
- If the net neurotransmitter input to a neuron from other neurons is excitatory and exceeds some threshold, the neuron fires an action potential.
10
Real Neural Learning
- To model the brain we need to model a neuron
- Each neuron performs a simple computation: it receives signals from its input links and uses them to compute its activation level (or output). This value is passed to other neurons via its output links.
11
Neural Network Learning
- A learning approach based on modeling adaptation in biological neural systems
- Learning involves modifying the weights between neurons (note: in a biological system one can also add new connections)
- Perceptron: the initial algorithm for learning simple (single-layer) neural networks, developed in the 1950s
- Backpropagation: a more complex algorithm for learning multi-layer neural networks, developed in the 1980s
12
Perceptrons
- The perceptron is a type of artificial neural network that can be seen as the simplest kind of feedforward neural network: a linear classifier
- Introduced in the late 1950s
- Perceptron convergence theorem (Rosenblatt, 1962): the perceptron will learn to classify any linearly separable set of inputs
- The perceptron is a single-layer, feed-forward network: data travels in only one direction
- It cannot learn the XOR function (no linear separation exists)
13
ALVINN drives 70 mph on highways
See the ALVINN video (Alvinn Video link)
14
Perceptron: Artificial Neuron Model
- Model the network as a graph with cells as nodes and synaptic connections as weighted edges; w_ji is the weight on the edge from node i to node j
- The net input received by a neuron is calculated by summing the weighted input values from its input links: net_j = Σ_i w_ji x_i
- The output is a threshold function of the net input: o_j = 1 if net_j ≥ T_j, and 0 otherwise (in vector notation, the unit fires when w · x ≥ T); a small sketch of this computation follows below
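As an illustrative sketch only (the function name, weights, and threshold below are made up for this example), the computation of a single threshold unit can be written in a few lines of Python:

import numpy as np

def perceptron_output(weights, threshold, inputs):
    """Weighted sum of the inputs followed by a hard threshold."""
    net = np.dot(weights, inputs)        # net_j = sum_i w_ji * x_i
    return 1 if net >= threshold else 0  # unit "fires" only if net input reaches T_j

# Example: two inputs, arbitrary weights and threshold chosen for illustration
print(perceptron_output(np.array([0.4, 0.9]), 0.5, np.array([1, 0])))  # -> 0
print(perceptron_output(np.array([0.4, 0.9]), 0.5, np.array([1, 1])))  # -> 1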
15
Different Threshold Functions
16
Examples
[Slide shows truth tables for several two-input and one-input functions.]
What logic functions do these tables represent?
17
Neural Computation
- McCulloch and Pitts (1943) showed how such model neurons could compute logical functions and be used to construct finite-state machines
- They can be used to simulate logic gates (see the sketch below):
  - AND: let all w_ji be T_j/n, where n is the number of inputs
  - OR: let all w_ji be T_j
  - NOT: let the threshold be 0, with a single input having a negative weight
- Arbitrary logic circuits, sequential machines, and computers can be built with such gates
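A hedged sketch of these gate recipes, assuming a unit that fires when its net input reaches its threshold (the helper name and T value are illustrative):

def unit(weights, threshold, inputs):
    """Threshold unit: outputs 1 when the weighted sum reaches the threshold."""
    net = sum(w * x for w, x in zip(weights, inputs))
    return 1 if net >= threshold else 0

T = 1.0
AND = lambda a, b: unit([T / 2, T / 2], T, [a, b])  # all weights T/n, with n = 2 inputs
OR  = lambda a, b: unit([T, T], T, [a, b])          # all weights equal to T
NOT = lambda a: unit([-1.0], 0.0, [a])              # threshold 0, one negative weight

for a in (0, 1):
    for b in (0, 1):
        print(a, b, AND(a, b), OR(a, b))            # truth tables for AND and OR
print(NOT(0), NOT(1))                               # -> 1 0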
18
Perceptron Training
- Assume supervised training examples giving the desired output for a unit, given a set of known input activations
- Goal: learn the weight vector (synaptic weights) that causes the perceptron to produce the correct +/−1 values
- The perceptron uses an iterative update algorithm to learn a correct set of weights:
  - Perceptron training rule
  - Delta rule
- Both algorithms are guaranteed to converge to somewhat different acceptable hypotheses, under somewhat different conditions
19
Perceptron Training Rule
Update weights by: w_i ← w_i + η (t − o) x_i
- η is the learning rate, a small value (e.g., 0.1); it is sometimes made to decay as the number of weight-tuning iterations increases
- t is the target output for the current training example
- o is the unit's output for the current training example
20
Perceptron Training Rule
Equivalent to the rules:
- If the output is correct, do nothing
- If the output is too high, lower the weights on active inputs
- If the output is too low, increase the weights on active inputs
It can be proven to converge if the training data is linearly separable and η is sufficiently small. It also works reasonably well when the training data is not linearly separable.
21
Perceptron Learning Algorithm
Iteratively update weights until convergence; each execution of the outer loop is typically called an epoch (a sketch follows below).
- Initialize weights to random values
- Until the outputs of all training examples are correct:
  - For each training pair E:
    - Compute the current output o_j for E given its inputs
    - Compare the current output to the target value t_j for E
    - Update the synaptic weights and threshold using the learning rule
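A minimal sketch of this loop in Python, assuming 0/1 targets and a bias term standing in for the (negated) threshold; the function name and the data at the end are only illustrations:

import numpy as np

def train_perceptron(X, t, eta=0.1, max_epochs=100):
    """Iterative perceptron learning: X is (n_examples x n_features), t has 0/1 targets."""
    rng = np.random.default_rng(0)
    w = rng.uniform(-0.05, 0.05, X.shape[1])         # initialize weights to small random values
    b = 0.0                                          # bias plays the role of the (negated) threshold
    for epoch in range(max_epochs):                  # one pass over the data = one epoch
        errors = 0
        for x, target in zip(X, t):
            o = 1 if np.dot(w, x) + b > 0 else 0     # compute current output
            if o != target:                          # compare with the target value
                w += eta * (target - o) * x          # perceptron training rule
                b += eta * (target - o)
                errors += 1
        if errors == 0:                              # all training examples correct
            break
    return w, b

# Example: learn logical AND (linearly separable, so training converges)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
t = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, t)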
22
Perceptron as a Linear Separator
Since the perceptron uses a linear threshold function, it searches for a linear separator that discriminates the classes: a line in two dimensions, or a hyperplane in n-dimensional space.
23
Concepts Perceptron Cannot Learn
Cannot learn exclusive-or (XOR), or the parity function in general. [Figure: the four XOR points (+/−) in the plane cannot be separated by any single line.]
24
General Structure of an ANN
- Perceptrons have no hidden layers
- Multilayer perceptrons may have many
25
Learning Power of an ANN
- The perceptron is guaranteed to converge if the data is linearly separable; it will learn a hyperplane that separates the classes
- A multilayer ANN has no such convergence guarantee, but it can learn functions that are not linearly separable
26
Multilayer Network Example
The decision surface is highly nonlinear
27
Sigmoid Threshold Unit
- A sigmoid unit's output is a nonlinear function of its inputs, but one that is also differentiable with respect to its inputs (and weights)
- Because it is differentiable, we can derive gradient descent rules to train a single sigmoid unit
- Multilayer networks of sigmoid units are trained with backpropagation (the sigmoid itself is sketched below)
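For reference, here is the logistic sigmoid and its derivative, whose simple closed form is what makes gradient-based training practical (a plain sketch, not tied to any particular library):

import numpy as np

def sigmoid(x):
    """Logistic sigmoid: a smooth, differentiable squashing function with outputs in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_deriv(x):
    """sigma'(x) = sigma(x) * (1 - sigma(x)); used when deriving gradient descent rules."""
    s = sigmoid(x)
    return s * (1.0 - s)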
28
Multi-Layer Networks
- Multi-layer networks can represent arbitrary functions, but an effective learning algorithm for such networks was thought to be difficult
- A typical multi-layer network consists of an input, a hidden, and an output layer, each fully connected to the next, with activation feeding forward (sketched below)
- The weights determine the function computed
- Given an arbitrary number of hidden units, any boolean function can be computed with a single hidden layer
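A hedged sketch of one feed-forward pass through such a network; the layer sizes and random weights below are arbitrary placeholders:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, W_hidden, W_output):
    """Input -> hidden layer -> output layer, with activation feeding forward.
    W_hidden: (n_hidden x n_inputs), W_output: (n_outputs x n_hidden)."""
    h = sigmoid(W_hidden @ x)      # hidden-layer activations
    return sigmoid(W_output @ h)   # output-layer activations

# Illustrative shapes only: 3 inputs, 2 hidden units, 1 output, random weights
rng = np.random.default_rng(0)
y = forward(rng.normal(size=3), rng.normal(size=(2, 3)), rng.normal(size=(1, 2)))
print(y)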
29
Logistic Function
[Figure: a single sigmoid unit with independent variables Age = 34, Gender = 1, Stage = 4, a coefficient on each input, and output 0.6, labeled "Probability of being Alive".]
30
Neural Network Model
[Figure: a network with independent variables Age = 34, Gender = 2, Stage = 4 feeding a hidden layer of sigmoid units through weighted edges; the hidden units feed an output unit whose prediction, 0.6, is the dependent variable "Probability of being Alive".]
31
Getting an answer from a NN
[Figure: the Age/Gender/Stage network, tracing one step of the forward computation through the weights toward the output 0.6, "Probability of being Alive".]
32
Getting an answer from a NN
[Figure: the same network, tracing a further step of the forward computation through the hidden-layer weights.]
33
Getting an answer from a NN
[Figure: the complete forward pass through the network, producing the prediction 0.6, "Probability of being Alive".]
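To make the walkthrough concrete, here is a hedged numeric sketch of such a forward pass for the Age/Gender/Stage example; the weights below are hypothetical placeholders chosen for illustration, not the values from the slide figure:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical weights (NOT the ones from the slide figure), chosen only to
# illustrate the mechanics of the forward computation.
x = np.array([34.0, 1.0, 4.0])                 # Age, Gender, Stage
W_hidden = np.array([[0.01, 0.5, 0.1],         # weights into hidden unit 1
                     [-0.02, 0.3, 0.2]])       # weights into hidden unit 2
w_output = np.array([0.8, -0.5])               # weights from hidden units to the output

h = sigmoid(W_hidden @ x)                      # hidden-layer activations
p_alive = sigmoid(w_output @ h)                # predicted "probability of being alive"
print(round(float(p_alive), 2))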
34
Comments on Training Algorithm
- Not guaranteed to converge to zero training error; may converge to a local optimum or oscillate indefinitely
- In practice, it does converge to low error for many large networks on real data
- Many epochs (thousands) may be required; training large networks can take hours or days
- To avoid local-minima problems, run multiple trials starting from different random weights (random restarts) and take the result of the trial with the lowest training-set error, as sketched below
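A sketch of the random-restart strategy from the last point; train_network and training_error are hypothetical placeholders for whatever trainer and error measure are in use:

def best_of_restarts(train_network, training_error, n_trials=5):
    """Train several times from different random initial weights and keep the best run."""
    best_net, best_err = None, float("inf")
    for trial in range(n_trials):
        net = train_network(seed=trial)      # different random initial weights each trial
        err = training_error(net)
        if err < best_err:                   # keep the trial with the lowest training error
            best_net, best_err = net, err
    return best_net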
35
Hidden Unit Representations
- Trained hidden units can be seen as newly constructed features that make the target concept linearly separable in the transformed space; this is a key feature of ANNs
- On many real domains, hidden units can be interpreted as representing meaningful features such as vowel detectors or edge detectors
- However, the hidden layer can also become a distributed representation of the input in which no individual unit is easily interpretable as a meaningful feature
36
Expressive Power of ANNs
- Boolean functions: any boolean function can be represented by a two-layer network with a sufficient number of hidden units (see the XOR sketch below)
- Continuous functions: any bounded continuous function can be approximated with arbitrarily small error by a two-layer network
- Arbitrary functions: any function can be approximated to arbitrary accuracy by a three-layer network
- Why not just use millions of hidden units?
  - Efficiency (neural network training is slow)
  - Overfitting
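As a small sketch of the boolean-function claim, a two-layer network with hand-set weights computes XOR, which no single threshold unit can; the weights below are chosen only for illustration:

def step(x):
    return 1 if x > 0 else 0

def xor_net(x1, x2):
    h1 = step(x1 + x2 - 0.5)      # hidden unit acting as OR
    h2 = step(x1 + x2 - 1.5)      # hidden unit acting as AND
    return step(h1 - h2 - 0.5)    # output: OR and not AND = XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_net(a, b))   # -> 0, 1, 1, 0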
37
Determining Best Number of Hidden Units
- Too few hidden units prevent the network from adequately fitting the data
- Too many hidden units can result in over-fitting
- Use internal cross-validation to empirically determine an optimal number of hidden units (a sketch follows below)
[Figure: error versus number of hidden units; error on training data keeps falling, while error on test data eventually rises.]
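A hedged sketch of that cross-validation search using scikit-learn, assuming the library is available; X_train and y_train stand for your training data:

from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

# Candidate numbers of hidden units, evaluated by 5-fold internal cross-validation
param_grid = {"hidden_layer_sizes": [(2,), (5,), (10,), (20,), (50,)]}
search = GridSearchCV(MLPClassifier(max_iter=2000), param_grid, cv=5)
# search.fit(X_train, y_train)     # X_train, y_train: your training data
# print(search.best_params_)       # number of hidden units with the best CV score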
38
Over-Training Prevention
- Running too many epochs can result in over-fitting
- Keep a hold-out validation set and test accuracy on it after every epoch; stop training when additional epochs actually increase validation error (see the sketch below)
- To avoid losing training data to validation:
  - Use internal 10-fold CV on the training set to compute the average number of epochs that maximizes generalization accuracy
  - Train the final network on the complete training set for this many epochs
[Figure: error versus number of training epochs; error on training data keeps falling, while error on test data eventually rises.]
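A sketch of that early-stopping loop; train_one_epoch, validation_error, and the get_weights/set_weights methods are hypothetical placeholders for the actual training code:

def train_with_early_stopping(net, train_one_epoch, validation_error,
                              max_epochs=1000, patience=5):
    """Stop training once validation error has not improved for `patience` epochs."""
    best_err, best_weights, bad_epochs = float("inf"), None, 0
    for epoch in range(max_epochs):
        train_one_epoch(net)
        err = validation_error(net)
        if err < best_err:                           # validation error still improving
            best_err, best_weights, bad_epochs = err, net.get_weights(), 0
        else:                                        # additional epochs increase validation error
            bad_epochs += 1
            if bad_epochs >= patience:
                break
    net.set_weights(best_weights)                    # roll back to the best epoch seen
    return net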
39
Summary: Strengths and Weaknesses
When are neural networks useful?
- Instances are represented by attribute-value pairs, particularly when attributes are real-valued
- The target function is discrete-valued, real-valued, or vector-valued
- Training examples may contain errors (handles noisy data)
- Fast evaluation times are necessary, but slow training is acceptable
- Complex decision boundaries are needed
- Automatic learning of higher-level features is useful
When not?
- Fast training times are necessary
- Understandability of the learned function is required
40
Successful Applications
- Text to speech (NETtalk)
- Fraud detection
- Financial applications (HNC, eventually bought by Fair Isaac)
- Chemical plant control (Pavilion Technologies)
- Automated vehicles
- Game playing (Neurogammon)
- Handwriting recognition
41
Issues in Neural Nets
- Learning the proper network architecture: many choices and options
- There are alternative architectures we did not discuss, e.g., recurrent networks that use feedback
42
Deep Learning
- Deep learning is currently a very hot topic
- Deep learning attempts to model high-level abstractions in data by using multiple processing layers, with complex structures or otherwise, composed of multiple non-linear transformations
- "Deep" probably requires more than 2 hidden layers, and more than 10 for very deep learning
- It replaces the need for handcrafted features with automatic feature learning at multiple levels of abstraction: higher-level features are built on lower-level features
- Autoencoder: a network with one hidden layer that has fewer nodes than the input and is required to regenerate the input as its output; this forces the hidden layer to learn more general features that can reconstruct the input (see the sketch below)
  - Does not require labeled training data
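A hedged sketch of such an autoencoder using Keras, assuming TensorFlow/Keras is available; the layer sizes and the random data are placeholders:

import numpy as np
from tensorflow import keras

input_dim, bottleneck = 64, 8                             # illustrative sizes only
autoencoder = keras.Sequential([
    keras.Input(shape=(input_dim,)),
    keras.layers.Dense(bottleneck, activation="relu"),    # hidden layer with fewer nodes than the input
    keras.layers.Dense(input_dim, activation="sigmoid"),  # regenerate the input at the output
])
autoencoder.compile(optimizer="adam", loss="mse")

X = np.random.rand(1000, input_dim)                       # unlabeled data: the input is also the target
autoencoder.fit(X, X, epochs=10, batch_size=32, verbose=0)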
43
Some Examples
- More about ANNs, re-representation, and Deep Learning (5 min, Andrew Ng, interesting)
- Deep Learning example: quadcopter navigation (5 min, somewhat interesting)