Neural Nets
How the brain achieves intelligence: roughly 10^11 processors, each running at about 1 MHz
Concerns
Representation
–What is it
–What can it do
Learnability
–How can it be trained
Efficiency
–Of learning
–Of the learned concept
Weka’s Neural Net Output on Iris
[Figure: Weka MultilayerPerceptron model for the Iris data, showing output nodes 0–2 (Iris-setosa, Iris-versicolor, Iris-virginica) fed by hidden nodes 3–5, each hidden node listing a threshold and weights on sepallength, sepalwidth, petallength, and petalwidth]
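The listing in the figure comes from Weka's MultilayerPerceptron classifier. The following is a minimal sketch of how that kind of model output can be produced, assuming Weka is on the classpath; the iris.arff path and the choice of 3 hidden nodes (to match nodes 3–5 above) are assumptions.

import weka.classifiers.functions.MultilayerPerceptron;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class IrisNet {
    public static void main(String[] args) throws Exception {
        // Load the Iris data; the path is an assumption, point it at your copy of iris.arff.
        Instances data = new DataSource("data/iris.arff").getDataSet();
        data.setClassIndex(data.numAttributes() - 1);   // the class is the last attribute

        MultilayerPerceptron net = new MultilayerPerceptron();
        net.setHiddenLayers("3");    // one hidden layer of 3 nodes, matching the figure (assumed)
        net.buildClassifier(data);

        // Printing the trained model lists each node's threshold and per-attribute weights,
        // the same kind of listing summarized in the figure above.
        System.out.println(net);
    }
}

Weka trains these weights with backpropagation, the kind of "hill-climbing" weight training described on the next slide.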
Representation: Feed-Forward Neural Net
A DAG of perceptrons
Leaf nodes take the inputs
Output nodes yield the decisions
Architecture: no one knows how to design it.
Weights: trained by “hill-climbing”; slow, and guarantees only a local optimum (see the sketch below).
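Reading the slide's "hill-climbing" as gradient descent, here is a minimal sketch of both the representation and the training: a 2-2-1 feed-forward net of sigmoid perceptrons trained on XOR. The layer sizes, learning rate, epoch count, and seed are illustrative choices, not from the slides; with an unlucky initialization the net can stall in a poor local optimum, which is exactly the caveat above.

import java.util.Random;

public class TinyFeedForward {
    static double sigmoid(double z) { return 1.0 / (1.0 + Math.exp(-z)); }

    public static void main(String[] args) {
        double[][] x = {{0,0},{0,1},{1,0},{1,1}};
        double[]   y = {0, 1, 1, 0};                 // XOR targets

        Random rng = new Random(0);
        double[][] wHidden = new double[2][3];       // 2 hidden units: 2 input weights + bias each
        double[]   wOut    = new double[3];          // 1 output unit: 2 hidden weights + bias
        for (int k = 0; k < 2; k++)
            for (int j = 0; j < 3; j++) wHidden[k][j] = rng.nextGaussian();
        for (int j = 0; j < 3; j++) wOut[j] = rng.nextGaussian();

        double rate = 0.5;
        for (int epoch = 0; epoch < 10000; epoch++) {     // slow: many passes over the data
            for (int i = 0; i < x.length; i++) {
                // Forward pass: leaf nodes take the inputs, the output node yields the decision.
                double[] h = new double[2];
                for (int k = 0; k < 2; k++)
                    h[k] = sigmoid(wHidden[k][0]*x[i][0] + wHidden[k][1]*x[i][1] + wHidden[k][2]);
                double out = sigmoid(wOut[0]*h[0] + wOut[1]*h[1] + wOut[2]);

                // Backward pass: gradient descent on squared error ("hill-climbing"),
                // so only a local optimum is guaranteed.
                double dOut = (out - y[i]) * out * (1 - out);
                for (int k = 0; k < 2; k++) {
                    double dHid = dOut * wOut[k] * h[k] * (1 - h[k]);
                    wHidden[k][0] -= rate * dHid * x[i][0];
                    wHidden[k][1] -= rate * dHid * x[i][1];
                    wHidden[k][2] -= rate * dHid;
                }
                wOut[0] -= rate * dOut * h[0];
                wOut[1] -= rate * dOut * h[1];
                wOut[2] -= rate * dOut;
            }
        }

        // Report the learned decisions for all four inputs.
        for (int i = 0; i < x.length; i++) {
            double[] h = new double[2];
            for (int k = 0; k < 2; k++)
                h[k] = sigmoid(wHidden[k][0]*x[i][0] + wHidden[k][1]*x[i][1] + wHidden[k][2]);
            double out = sigmoid(wOut[0]*h[0] + wOut[1]*h[1] + wOut[2]);
            System.out.printf("%.0f XOR %.0f -> %.2f%n", x[i][0], x[i][1], out);
        }
    }
}

Even this four-example problem takes thousands of passes over the data, which is the sense in which training is slow.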
Representational Power
Any Boolean function can be represented in disjunctive or conjunctive normal form.
Disjunctive normal form = an “or” of “and”ed features.
Since a perceptron can learn “or” and “and”, a 2-layer network can represent any Boolean function (see the sketch below).
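As a concrete instance of the argument, the sketch below hand-sets the weights of a two-layer network of hard-threshold perceptrons so that it computes the DNF formula (x1 AND x2) OR (NOT x1 AND x3). The formula, weights, and thresholds are illustrative choices, not taken from the slides: each hidden perceptron computes one conjunct, and the output perceptron computes their “or”.

public class DnfAsTwoLayerNet {
    // A perceptron with a hard threshold: fires iff the weighted sum exceeds the threshold.
    static int unit(double[] w, double threshold, int... in) {
        double sum = 0;
        for (int j = 0; j < in.length; j++) sum += w[j] * in[j];
        return sum > threshold ? 1 : 0;
    }

    static int f(int x1, int x2, int x3) {
        // Hidden layer: one perceptron per conjunct of the DNF.
        int and1 = unit(new double[]{1, 1},  1.5, x1, x2);   // x1 AND x2
        int and2 = unit(new double[]{-1, 1}, 0.5, x1, x3);   // (NOT x1) AND x3
        // Output layer: a single perceptron computing the OR of the conjuncts.
        return unit(new double[]{1, 1}, 0.5, and1, and2);
    }

    public static void main(String[] args) {
        // Print the full truth table computed by the network.
        for (int x1 = 0; x1 <= 1; x1++)
            for (int x2 = 0; x2 <= 1; x2++)
                for (int x3 = 0; x3 <= 1; x3++)
                    System.out.printf("%d %d %d -> %d%n", x1, x2, x3, f(x1, x2, x3));
    }
}

The same construction works for any DNF formula: one hidden unit per conjunct plus an “or” unit at the output, which is why two layers suffice.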
Neural Nets Work
Disease diagnosis: 90% accurate at predicting prostate cancer
Handwritten character recognition (5-layer net): 99% accurate
NetTalk: 28 inputs, 80 hidden units; 78% accuracy. Sounds like a child learning to talk.
Summary
Neural nets can do multiple classes and regression
Training is slow
Prediction is fast and high quality
No one knows how to create the architecture
Neural nets tend to be incomprehensible