Biological inspiration. Animals are able to react adaptively to changes in their external and internal environment, and they use their nervous system to produce these behaviours. An appropriate model or simulation of the nervous system should be able to produce similar responses and behaviours in artificial systems. The nervous system is built from relatively simple units, the neurons, so copying their behaviour and functionality is a natural starting point.
Biological inspiration (diagram of a neuron, labelling the dendrites, the soma (cell body), and the axon).
Biological inspiration (diagram of connected neurons, labelling the axon, dendrites, and synapses). The information transmission happens at the synapses.
Artificial neurons. Neurons work by processing information; they receive and provide information in the form of spikes. The McCulloch-Pitts model: the inputs x_1, x_2, ..., x_n are weighted by w_1, w_2, ..., w_n, summed, and passed through an activation f to produce the output y = f(w_1 x_1 + w_2 x_2 + ... + w_n x_n).
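A minimal sketch of the McCulloch-Pitts neuron described above, in Python; the step activation, threshold value, and example weights are illustrative assumptions, not taken from the slides:

```python
def mcculloch_pitts(inputs, weights, threshold=0.5):
    """Weighted sum of the inputs, passed through a step activation."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s >= threshold else 0

# Example: three binary inputs with illustrative weights.
print(mcculloch_pitts([1, 0, 1], [0.2, 0.8, 0.4]))  # 0.2 + 0.4 = 0.6 >= 0.5 -> 1
```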
Artificial neural networks (diagram: inputs feeding into a network of connected neurons, producing an output). An artificial neural network is composed of many artificial neurons that are linked together according to a specific network architecture. The objective of the neural network is to transform the inputs into meaningful outputs.
Learning in biological systems. Learning = learning by adaptation. The young animal learns that the green fruits are sour, while the yellowish/reddish ones are sweet; the learning happens by adapting the fruit-picking behaviour. At the neural level, learning happens by changing the synaptic strengths, eliminating some synapses, and building new ones.
Neural network mathematics (diagram: the inputs propagate through successive layers of neurons to produce the output).
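The slide's formulas did not survive extraction, so here is one standard way to write down what the diagram shows, with the notation assumed rather than taken from the original: each layer applies a weighted sum followed by the activation f, and the whole network is the composition of p such layers.

```latex
y^{(k)} = f\left(W^{(k)} y^{(k-1)}\right), \qquad y^{(0)} = x, \qquad k = 1, \dots, p
```

```latex
F(x, W) = f\left(W^{(p)} f\left(W^{(p-1)} \cdots f\left(W^{(1)} x\right) \cdots \right)\right)
```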
Neural network approximation. Task specification: Data: a set of value pairs (x_t, y_t), where y_t = g(x_t) + z_t and z_t is random measurement noise. Objective: find a neural network that represents the input/output transformation (a function) F(x, W) such that F(x, W) approximates g(x) for every x.
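A sketch of this task setup in code, assuming a concrete target g(x) = sin(x) and Gaussian noise purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def g(x):                       # the unknown target function (assumed here)
    return np.sin(x)

x_t = rng.uniform(-3, 3, 50)    # sampled inputs
z_t = rng.normal(0, 0.1, 50)    # random measurement noise
y_t = g(x_t) + z_t              # the observed data pairs (x_t, y_t)

# A trained network F(x, W) should make this error small for every x:
def approximation_error(F, xs):
    return np.max(np.abs(F(xs) - g(xs)))
```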
Learning with MLP neural networks. An MLP neural network with p layers (diagram: the input x flows through layers 1, 2, ..., p-1, p to the output y_out). Data: the training pairs (x_t, y_t). Error: the mismatch between targets and network outputs, typically the sum of squared errors E(W) = Σ_t (y_t - F(x_t, W))². It is very complicated to calculate the weight changes directly.
Learning with backpropagation. Solution to the complicated learning problem: first calculate the changes for the synaptic weights of the output neuron; then calculate the changes backward, starting from layer p-1, propagating the local error terms backward. The method is still relatively complicated, but it is much simpler than the original optimisation problem.
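A minimal sketch of backpropagation for a one-hidden-layer network with sigmoid activations and squared error; the architecture, activation, and learning rate are illustrative assumptions, not the slides' exact setup:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop_step(x, y, W1, W2, lr=0.1):
    """One gradient-descent step on E = 0.5 * (y - out)**2."""
    # Forward pass.
    h = sigmoid(W1 @ x)                      # hidden-layer activations
    out = sigmoid(W2 @ h)                    # network output
    # Backward pass: local error term of the output neuron first...
    delta_out = (out - y) * out * (1 - out)
    # ...then propagated backward to the hidden layer.
    delta_h = (W2.T @ delta_out) * h * (1 - h)
    # Weight updates.
    W2 -= lr * np.outer(delta_out, h)
    W1 -= lr * np.outer(delta_h, x)
    return out

# Example: two inputs, two hidden units, one output.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 2))
W2 = rng.normal(size=(1, 2))
for _ in range(1000):
    backprop_step(np.array([1.0, 0.0]), np.array([1.0]), W1, W2)
```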
Worked example 1 (inputs x = (1, 0), target = 1):
hidden neuron 1: .2*1 + .8*0 = .2
hidden neuron 2: .3*1 + .4*0 = .3
output neuron: .1*.3 + .6*.2 = .15
Error = 1 - .15 = .85

Worked example 2 (inputs x = (0, 1), target = 1):
hidden neuron 1: .6*0 + .8*1 = .8
hidden neuron 2: .4*0 + .4*1 = .4
output neuron: .2*.4 + .8*.8 = .72
Error = 1 - .72 = .28
Worked example 3 (inputs x = (1, 1), target = 0):
hidden neuron 1: .6*1 + .9*1 = 1.5
hidden neuron 2: .4*1 + .45*1 = .85
output neuron: .25*.85 + .9*1.5 = 1.56
Error = 0 - 1.56 = -1.56
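The same forward-pass arithmetic, checked in code; the slides' numbers imply a 2-2-1 linear network with no activation function, and the first output weight pairs with hidden neuron 2, matching the order of the slides' arithmetic:

```python
def forward(x, w_h1, w_h2, w_out):
    """Forward pass of a 2-2-1 linear network (no activation)."""
    h1 = w_h1[0] * x[0] + w_h1[1] * x[1]
    h2 = w_h2[0] * x[0] + w_h2[1] * x[1]
    return w_out[0] * h2 + w_out[1] * h1   # w_out[0] pairs with h2

# Worked example 1: inputs (1, 0), target 1.
out = forward((1, 0), (.2, .8), (.3, .4), (.1, .6))
print(out, 1 - out)             # 0.15  0.85
# Worked example 3: inputs (1, 1), target 0.
out = forward((1, 1), (.6, .9), (.4, .45), (.25, .9))
print(out, 0 - out)             # 1.5625  -1.5625
```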
Artificial neural network predicts structure (diagram: the network takes a window of the sequence and predicts the structure at this point, the central residue).
Danger: you may train the network on your training set, but it may not generalize to other data. Perhaps we should train several ANNs and then let them vote on the structure.
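A sketch of the voting idea, assuming each trained network emits one of the three usual secondary-structure classes (H = helix, E = strand, L = loop):

```python
from collections import Counter

def vote(predictions):
    """Majority vote over the per-network structure predictions."""
    return Counter(predictions).most_common(1)[0][0]

print(vote(["H", "E", "H"]))   # three ANNs vote; "H" (helix) wins
```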
Profile network from HeiDelberg (PHD): a family alignment is used as input instead of just the new sequence.
On the first level, a window of length 13 around the residue is used; the window slides along the sequence, making a prediction for each residue (a sketch of this windowing follows below).
The input includes the frequency of amino acids occurring at each position in the multiple alignment (in the example, there are 5 sequences in the multiple alignment).
The second level takes these predictions from neural networks centered on neighboring residues.
The third level does a jury selection.
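A minimal sketch of the first-level windowing, with a hypothetical helper name and a toy two-column profile; the real PHD input encoding is richer than this:

```python
def window_profiles(profile, width=13):
    """Slide a window of `width` positions along the per-position
    amino-acid frequency profile, padding the ends with None."""
    half = width // 2
    padded = [None] * half + list(profile) + [None] * half
    for i in range(len(profile)):
        yield padded[i:i + width]    # input for the residue at position i

# Toy profile: amino-acid frequencies per alignment column
# (as if computed from a 5-sequence alignment; 2 columns shown).
profile = [{"A": 0.6, "G": 0.4}, {"L": 0.8, "V": 0.2}]
for w in window_profiles(profile, width=13):
    pass  # each window w would be fed to the first-level network
```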
PHD (diagram: one network predicts 4, another predicts 6, another predicts 5).