Chapter 5: Introduction to Predictive Modeling: Neural Networks and Other Modeling Tools 5.2 Input Selection 5.3 Stopped Training 5.4 Other Modeling Tools (Self-Study)
Model Essentials – Neural Networks
Prediction formula – Predict new cases. None – Select useful inputs. Stopped training – Optimize complexity.
Neural Network Prediction Formula
Each hidden unit applies the tanh activation function to a bias estimate plus weight estimates times the inputs; the prediction estimate is a bias plus a weighted combination of the hidden units:
Hi = tanh(w0i + w1i·x1 + w2i·x2)
ŷ = b0 + b1·H1 + b2·H2 + b3·H3
[Plot: the tanh activation function, bounded between -1 and 1]
Neural Network Binary Prediction Formula
For a binary target, a logit link function maps the weighted combination of hidden units to a probability:
logit(p̂) = b0 + b1·H1 + b2·H2 + b3·H3
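The two formulas above can be traced in code. This is a minimal sketch of scoring one case with two inputs, three hidden units, and a logit link; all weight values are hypothetical, chosen only for illustration:

```python
import math

def neural_net_predict(x1, x2):
    """Score one case with a 2-input, 3-hidden-unit network and a
    logit link, mirroring the slide's prediction formula.
    All weight values are made-up illustrations, not fitted estimates."""
    # Hidden units: tanh of a bias estimate plus weighted inputs
    h1 = math.tanh(-1.5 - 0.03 * x1 - 0.07 * x2)
    h2 = math.tanh(0.79 - 0.17 * x1 - 0.16 * x2)
    h3 = math.tanh(0.57 + 0.05 * x1 + 0.35 * x2)
    # Output on the logit scale: bias plus weighted hidden units
    logit_p = -0.8 + 1.2 * h1 - 0.9 * h2 + 0.6 * h3
    # Invert the logit link to obtain a probability estimate
    return 1.0 / (1.0 + math.exp(-logit_p))
```

Because the logit link is inverted at the end, the output is always a valid probability, whatever the weight values.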
Neural Network Diagram
Input layer (x1, x2) → hidden layer (H1, H2, H3) → target layer (y).
Prediction Illustration – Neural Networks
The logit equation defines the prediction surface over the (x1, x2) unit square, but weight estimates are needed. Weight estimates are found by maximizing the log-likelihood function. Probability estimates are then obtained by solving the logit equation for p̂ for each (x1, x2).
[Plot: the (x1, x2) unit square annotated with probability estimates ranging from 0.30 to 0.70]
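The log-likelihood being maximized can be made concrete. A sketch under the assumption of a binary target coded 0/1, where p_hat holds the model's probability estimates for each case:

```python
import math

def bernoulli_log_likelihood(p_hat, y):
    """Binary-target log-likelihood that the weight search maximizes:
    log(p̂) is added for primary-outcome cases (y == 1) and
    log(1 - p̂) for the rest."""
    return sum(math.log(p) if t == 1 else math.log(1.0 - p)
               for p, t in zip(p_hat, y))
```

Weight estimates that push p̂ toward 1 for primary-outcome cases and toward 0 otherwise increase this sum.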
Neural Nets: Beyond the Prediction Formula Manage missing values. Handle extreme or unusual values. Use non-numeric inputs. Account for nonlinearities. Interpret the model.
Training a Neural Network This demonstration illustrates using the Neural Network tool.
5.2 Input Selection
Model Essentials – Neural Networks
Prediction formula – Predict new cases. Sequential selection – Select useful inputs. Best model from sequence – Optimize complexity.
5.01 Multiple Answer Poll
Which of the following are true about neural networks in SAS Enterprise Miner?
a. Neural networks are universal approximators.
b. Neural networks have no internal, automated process for selecting useful inputs.
c. Neural networks are easy to interpret and thus are very useful in highly regulated industries.
d. Neural networks cannot model nonlinear relationships.
5.01 Multiple Answer Poll – Correct Answers
Which of the following are true about neural networks in SAS Enterprise Miner?
a. Neural networks are universal approximators. (correct)
b. Neural networks have no internal, automated process for selecting useful inputs. (correct)
c. Neural networks are easy to interpret and thus are very useful in highly regulated industries.
d. Neural networks cannot model nonlinear relationships.
Selecting Neural Network Inputs This demonstration illustrates how to use a logistic regression to select inputs for a neural network.
5.3 Stopped Training
Model Essentials – Neural Networks
Prediction formula – Predict new cases. Sequential selection – Select useful inputs. Stopped training – Optimize complexity.
Fit Statistic versus Optimization Iteration
Training begins with initial hidden unit weights set to zero, so the hidden units contribute nothing to the prediction: logit(p̂) = logit(ρ̂1) + 0·H1 + 0·H2 + 0·H3, where ρ̂1 is the overall primary-outcome proportion (for example, logit(0.5)). The input weights and biases are random, for example:
H1 = tanh(-1.5 - .03x1 - .07x2)
H2 = tanh( .79 - .17x1 - .16x2)
H3 = tanh( .57 + .05x1 + .35x2)
Fit Statistic versus Optimization Iteration
Each optimization iteration updates the weight estimates and computes a fit statistic, the average squared error (ASE), on both the training and validation data.
[Plot: ASE versus iteration, for iterations 1 through 23. Training ASE decreases steadily; validation ASE reaches its minimum near iteration 12 and then drifts upward. Stopped training selects the weights from the iteration with the minimum validation ASE, and the selected model's probability estimates (ranging from 0.30 to 0.70) are shown over the (x1, x2) unit square.]
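Stopped training amounts to picking the iteration with the best validation fit rather than the final iteration. A minimal sketch of that selection rule (the ASE values below are hypothetical):

```python
def stopped_training_selection(valid_ase):
    """Return the iteration whose weights the final model keeps:
    the iteration with minimum validation ASE, not the last one."""
    best_index = min(range(len(valid_ase)), key=lambda i: valid_ase[i])
    return best_index + 1  # iterations are numbered from 1

# Hypothetical validation ASE: falls, bottoms out, then rises again
ase = [0.30, 0.26, 0.24, 0.23, 0.235, 0.25, 0.27]
print(stopped_training_selection(ase))  # → 4
```

Training ASE keeps falling with more iterations, so using it for selection would always favor the most overfit weights; validation ASE is what flags the turn.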
Increasing Network Flexibility This demonstration illustrates how to further improve neural network performance.
Using the AutoNeural Tool (Self-Study) This demonstration illustrates how to use the AutoNeural tool.
5.4 Other Modeling Tools (Self-Study)
Model Essentials – Rule Induction Prediction rules / prediction formula Predict new cases. Split search / none Select useful inputs. Ripping / stopped training Optimize complexity.
Rule Induction Predictions
Rips create prediction rules: a binary model sequentially classifies and removes correctly classified cases. A neural network predicts the remaining cases.
[Plot: the (x1, x2) unit square with prediction estimates such as 0.74 and 0.39]
Model Essentials – Dmine Regression Prediction formula Predict new cases. Forward selection Select useful inputs. Stop R-square Optimize complexity.
Dmine Regression Predictions
Interval inputs are binned and categorical inputs are grouped. Forward selection picks from the binned and original inputs.
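The binning step can be illustrated with a simple equal-width scheme. This is only a sketch of the idea; Dmine Regression's own binning (its AOV16 variables) follows its documented rules:

```python
def bin_interval_input(values, n_bins=16):
    """Equal-width binning sketch: cut an interval input's observed
    range into n_bins bins and return each case's bin index (0-based).
    Illustrative only; not the AOV16 implementation."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins or 1.0  # guard against a constant input
    return [min(int((v - lo) / width), n_bins - 1) for v in values]
```

The binned copy of an input lets a forward-selection step capture a nonlinear input-target relationship that the raw interval input, entered linearly, would miss.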
Model Essentials – DMNeural Stagewise prediction formula Predict new cases. Principal component Select useful inputs. Max stage Optimize complexity.
DMNeural Predictions
Up to three principal components with the highest target R square are selected. One of eight continuous transformations is selected and applied to the selected principal components. The process is repeated three times, with each stage fit to the residuals of the previous stage.
[Plot: prediction surface over the (x1, x2) unit square]
Model Essentials – Least Angle Regression Prediction formula Predict new cases. Generalized sequential selection Select useful inputs. Penalized best fit Optimize complexity.
Least Angle Regression Predictions
Inputs are selected using a generalization of forward selection. An input combination in the sequence with optimal, penalized validation assessment is selected by default.
[Plot: prediction surface over the (x1, x2) unit square]
Model Essentials – MBR
Training data nearest neighbors – Predict new cases. None – Select useful inputs. Number of neighbors – Optimize complexity.
MBR Prediction Estimates
The sixteen nearest training data cases predict the target for each point in the input space. Scoring requires the training data and the PMBR procedure.
[Plot: prediction surface over the (x1, x2) unit square]
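The nearest-neighbor scoring idea can be sketched as plain k-nearest-neighbor averaging. This is a toy illustration of the technique, not the PMBR procedure:

```python
import math

def mbr_predict(train_x, train_y, new_point, k=16):
    """Memory-based reasoning sketch: average the target over the k
    nearest training cases by Euclidean distance (the slide uses 16
    neighbors). train_x holds input tuples, train_y the 0/1 targets."""
    # Pair each training case with its distance to the new point
    ranked = sorted(
        (math.dist(x, new_point), y) for x, y in zip(train_x, train_y)
    )
    neighbors = ranked[:k]
    # The prediction estimate is the neighbors' mean target value
    return sum(y for _, y in neighbors) / len(neighbors)
```

Because scoring must search the stored cases, the training data itself travels with the model, which is why MBR deployment is heavier than a prediction-formula model.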
Model Essentials – Partial Least Squares Prediction formula Predict new cases. VIP Select useful inputs. Sequential factor extraction Optimize complexity.
Partial Least Squares Predictions
Input combinations (factors) that optimally account for both predictor and response variation are successively selected. The factor count with the minimum validation PRESS statistic is selected. Inputs with small VIP are rejected for subsequent diagram nodes.
[Plot: prediction surface over the (x1, x2) unit square]
Exercises This exercise reinforces the concepts discussed previously.
Neural Network Tool Review Create a multi-layer perceptron on selected inputs. Control complexity with stopped training and hidden unit count.