DeltaV Neural - Expert
In Expert mode, the user can select the training parameters; the defaults are recommended for most applications.
Number of Hidden Neurons
In DeltaV, the number of hidden neurons is selected by training the network with an increasing number of neurons, up to a maximum of sqrt(number of inputs).
Training stops when adding a node gives less than a 10% reduction in error.
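As a minimal sketch of that stopping rule (not the DeltaV implementation), assuming a hypothetical train_and_score helper that trains a network with a given hidden-layer size and returns its error:

```python
import math

def select_hidden_neurons(num_inputs, train_and_score, min_improvement=0.10):
    """Grow the hidden layer until the error improves by less than 10%.

    train_and_score(n_hidden) is a hypothetical helper that trains the
    network with n_hidden neurons and returns its error.
    """
    max_neurons = max(1, int(math.sqrt(num_inputs)))   # upper bound: sqrt(number of inputs)
    best_n = 1
    best_error = train_and_score(best_n)
    for n in range(2, max_neurons + 1):
        error = train_and_score(n)
        if error > best_error * (1.0 - min_improvement):
            break                                      # less than 10% reduction: stop adding
        best_n, best_error = n, error
    return best_n
```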
Using External Historical Data
Lab data may not be collected in DeltaV; it may be collected in a legacy system or recorded manually.
Historical data may be saved in another system that is not compatible with DeltaV.
DeltaV Neural is designed to use external data in the development of a virtual sensor.
Using Data Saved in a File
Sensitivity analysis and autogeneration may be done using the File selection.
Specifying the Data File
Select a file with a .dat file extension.
Data File Format
Line 1: the phrase DeltaV_NN_Data <eol>
Line 2: number of input references <tab> number of outputs <eol>
Line 3: number of samples in the file <tab> sample rate in seconds <eol>
Line 4: an empty line <eol>
Line 5: identifiers separated by tabs <eol>
Line 6: an empty line <eol>
Line 7 onward: the data, in the order sample index <tab> output <tab> first input <tab> … <eol>
Last line: an empty line <eol>
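The layout can be illustrated with a short sketch that writes a file in this format; the values below are abbreviated placeholders loosely based on the example file shown next, and the file name is arbitrary:

```python
# Minimal sketch of writing a DeltaV_NN_Data file; the tags and values below
# are placeholders chosen only to show the layout.
inputs = ["TT2-1", "TT2-2", "FC2-1", "FC2-2"]
samples = [
    # (sample index, output value, [input values])
    (1, 100.288, [144.0, 85.0, 0.55, 0.489]),
    (2, 100.288, [144.0, 85.0, 0.55, 0.490]),
]

with open("virtual_sensor.dat", "w") as f:
    f.write("DeltaV_NN_Data\n")                        # line 1: file identifier
    f.write(f"{len(inputs)}\t1\n")                     # line 2: number of inputs, outputs
    f.write(f"{len(samples)}\t6.000000\n")             # line 3: number of samples, sample rate (s)
    f.write("\n")                                      # line 4: empty line
    f.write("\t".join(inputs) + "\n")                  # line 5: tab-separated identifiers
    f.write("\n")                                      # line 6: empty line
    for index, output, values in samples:              # line 7 onward: index, output, inputs
        row = [str(index), f"{output:.8f}"] + [f"{v:.8f}" for v in values]
        f.write("\t".join(row) + "\n")
    f.write("\n")                                      # last line: empty line
```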
Start of Example File
DeltaV_NN_Data
4 1
897 6.000000
TT2-1 TT2-2 FC2-1 FC2-2
1 100.28817749 144.00000000 85.00000000 0.55000007 0.48933348 60.00000000
2 100.28817749 144.00000000 85.00000000 0.55000007 0.48949844 60.00000000
3 100.28817749 144.00000000 85.00000000 0.55000007 0.48989388 60.00000000
4 100.28817749 144.00000000 85.00000000 0.55000007 0.48989388 60.00000000
5 100.28817749 144.00000000 85.00000000 0.55000007 0.49091327 60.00000000
6 100.28817749 144.00000000 85.00000000 0.55000007 0.49092382 60.00000000
7 101.00279999 144.00000000 85.00000000 0.55000007 0.49185991 60.00000000
101.00279999 144.00000000 85.00000000 0.55000007 0.71004164 60.00000000
Example File
DeltaV_NN_Data
4 1
897 6.000000
TT2-1 TT2-2 FC2-1 FC2-2
1 100.28817749 144.00000000 85.00000000 0.55000007 0.48933348 60.00000000
2 100.28817749 144.00000000 85.00000000 0.55000007 0.48949844 60.00000000
3 100.28817749 144.00000000 85.00000000 ## 0.48989388 60.00000000
4 100.28817749 144.00000000 85.00000000 0.55000007 0.48989388 60.00000000
5 100.28817749 144.00000000 85.00000000 0.55000007 0.49091327 60.00000000
6 100.28817749 144.00000000 85.00000000 0.55000007 ## 60.00000000
7 101.00279999 144.00000000 85.00000000 0.55000007 0.49185991 60.00000000
101.00279999 144.00000000 85.00000000 0.55000007 0.71004164 60.00000000
Use ## to indicate a point not used. Do not remove the point!
Time shift is required to make the network converge properly!
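A minimal sketch of reading a file in this layout, treating ## markers as excluded points (mapped to NaN) while keeping the samples in place; read_nn_data and parse_value are illustrative names, not DeltaV functions:

```python
import math

def parse_value(token):
    """Map the ## marker to NaN so the point is kept but flagged as unused."""
    return math.nan if token == "##" else float(token)

def read_nn_data(path):
    """Read a DeltaV_NN_Data style file into (identifiers, sample rate, data rows)."""
    with open(path) as f:
        lines = [line.strip() for line in f]
    if lines[0] != "DeltaV_NN_Data":
        raise ValueError("not a DeltaV_NN_Data file")
    num_inputs, num_outputs = (int(x) for x in lines[1].split())    # line 2
    num_samples, sample_rate = lines[2].split()                     # line 3
    body = [line for line in lines[3:] if line]                     # skip the empty lines
    identifiers = body[0].split()                                   # line 5: tag identifiers
    rows = [[parse_value(tok) for tok in line.split()] for line in body[1:]]
    return identifiers, float(sample_rate), rows
```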
Neural Network Applications: Rules of Thumb
A NN needs data with accurate time stamps.
A NN needs the correct input variables to get good correlation.
Be careful that input variables are not a function of output variables.
Real, ground-truth data is needed to build a model (guesses don't count).
Watch out for even "minor" changes to the process.
A NN is excellent at determining the key input variables and their delays.
But even if not all input values have a delay, a prediction model can still be built if we assume that inputs with zero delay will remain constant or have a minor effect on the estimate.
A NN can do fault diagnostics by modeling "normal" and watching for variation from "normal".
Neural Network Applications: Rules of Thumb (Cont'd)
NNs can be used with first-principle models to build more accurate hybrid models when the first-principle theory has limitations.
If the control problem is non-linear, a NN is probably a good choice.
If you model a poorly controlled process, you will likely end up with a controller that controls poorly.
You need to ensure that the full operating range is represented.
Uncontrolled variability in the process can cause model inaccuracies.
Additional sensors are often needed for an inferential measurement.
No matter how good the NN model (or the technology), it is useless if there is not enough justification to implement it.
Although it is not a substitute for DOE (or structured data in general), significant results for knowledge discovery may be realized using historical data.
Seven Deadly Sins of Neural Networks
1 - Bad Outlier Data
2 - Too Limited a Range
3 - Noisy Data
4 - Little to No Variation
5 - Too Few Hidden Neurons
6 - Overfitting
7 - Lack of Convergence
Bad Outlier Data
Bad points pull the training algorithm toward them and therefore cause the whole model to be in error.
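A hedged illustration of screening for such points before training (a generic z-score rule, not a DeltaV feature); flagged samples can then be marked with ## in the data file rather than deleted:

```python
import statistics

def flag_outliers(values, z_limit=3.0):
    """Return indices of points more than z_limit standard deviations from the mean."""
    mean = statistics.fmean(values)
    stdev = statistics.stdev(values)
    return [i for i, v in enumerate(values) if abs(v - mean) > z_limit * stdev]
```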
Too Limited a Range
If the data is tightly gathered around a single value, the network will learn this gathering place rather than the whole process structure.
Noisy Data
If the data is too noisy, the network will learn the noise and not the process.
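One common mitigation, not specific to DeltaV, is to filter the raw signal before training; a minimal centered moving-average sketch (the window size is only an example, and too large a window hides real process dynamics):

```python
def moving_average(values, window=5):
    """Smooth a noisy signal with a simple centered moving average."""
    half = window // 2
    smoothed = []
    for i in range(len(values)):
        lo, hi = max(0, i - half), min(len(values), i + half + 1)
        segment = values[lo:hi]
        smoothed.append(sum(segment) / len(segment))
    return smoothed
```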
Little to No Variation
The same issue as a limited range. DeltaV Neural will exclude this data for you. If you add it back and the process does vary, the network will show a variation, but how much? How does it know?
Too Few Hidden Neurons
DeltaV Neural will select the number for you; this can become a problem with other programs. The opposite can also be a problem: too many neurons can cause overfitting.
Overfitting
Extra hidden neurons can change the weights of other neurons.
It is possible to have an "offset neuron" with a value that is always +1 or -1.
A meaningless neuron with zero output weight.
A "trim neuron" with a constant value on all but a few samples.
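The usual guard against overfitting is to hold back part of the data and compare training error with validation error; a generic sketch (fit_model and error are hypothetical helpers, not DeltaV functions):

```python
def check_overfit(samples, fit_model, error, holdout_fraction=0.25):
    """Train on most of the data, then evaluate on held-out samples.

    fit_model(train) and error(model, data) are hypothetical helpers; a
    validation error much larger than the training error suggests the
    network has memorized the training data rather than the process.
    """
    split = int(len(samples) * (1.0 - holdout_fraction))
    train, validate = samples[:split], samples[split:]
    model = fit_model(train)
    return error(model, train), error(model, validate)
```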
Lack of Convergence
Convergence problems occur when handling outputs that have no correlation with the inputs.