Artificial Neural Networks 1 Morten Nielsen Department of Systems Biology, DTU.

Objectives
Input -> Neural network -> Output
– A neural network is a black box that no one can understand
– It over-predicts performance
– Overfitting: many thousand parameters fitted on few data

HUNKAT

Biological Neural network

Biological neuron structure

Transfer of biological principles to artificial neural network algorithms
– Non-linear relation between input and output
– Massively parallel information processing
– Data-driven construction of algorithms
– Ability to generalize to new data items

Similar to SMM (the stabilized matrix method), except for the delta function!

How to predict
The effect on binding affinity of having a given amino acid at one position can be influenced by the amino acids at other positions in the peptide (sequence correlations).
– Two adjacent amino acids may, for example, compete for the space in a pocket of the MHC molecule.
Artificial neural networks (ANNs) are ideally suited to take such correlations into account.

SLLPAIVEL YLLPAIVHI TLWVDPYEV GLVPFLVSV KLLEPVLLL LLDVPTAAV LLDVPTAAV LLDVPTAAV LLDVPTAAV VLFRGGPRG MVDGTLLLL YMNGTMSQV MLLSVPLLL SLLGLLVEV ALLPPINIL TLIKIQHTL HLIDYLVTS ILAPPVVKL ALFPQLVIL GILGFVFTL STNRQSGRQ GLDVLTAKV RILGAVAKV QVCERIPTI ILFGHENRV ILMEHIHKL ILDQKINEV SLAGGIIGV LLIENVASL FLLWATAEA SLPDFGISY KKREEAPSL LERPGGNEI ALSNLEVKL ALNELLQHV DLERKVESL FLGENISNF ALSDHHIYL GLSEFTEYL STAPPAHGV PLDGEYFTL GVLVGVALI RTLDKVLEV HLSTAFARV RLDSYVRSL YMNGTMSQV GILGFVFTL ILKEPVHGV ILGFVFTLT LLFGYPVYV GLSPTVWLS WLSLLVPFV FLPSDFFPS CLGGLLTMV FIAGNSAYE KLGEFYNQM KLVALGINA DLMGYIPLV RLVTLKDIV MLLAVLYCL AAGIGILTV YLEPGPVTA LLDGTATLR ITDQVPFSV KTWGQYWQV TITDQVPFS AFHHVAREL YLNKIQNSL MMRKLAILS AIMDKNIIL IMDKNIILK SMVGNWAKV SLLAPGAKQ KIFGSLAFL ELVSEFSRM KLTPLCVTL VLYRYGSFS YIGEVLVSV CINGVCWTV VMNILLQYV ILTVILGVL KVLEYVIKV FLWGPRALV GLSRYVARL FLLTRILTI HLGNVKYLV GIAGGLALL GLQDCTMLV TGAPVTYST VIYQYMDDL VLPDVFIRC VLPDVFIRC AVGIGIAVV LVVLGLLAV ALGLGLLPV GIGIGVLAA GAGIGVAVL IAGIGILAI LIVIGILIL LAGIGLIAA VDGIGILTI GAGIGVLTA AAGIGIIQI QAGIGILLA KARDPHSGH KACDPHSGH ACDPHSGHF SLYNTVATL RGPGRAFVT NLVPMVATV GLHCYEQLV PLKQHFQIV AVFDRKSDA LLDFVRFMG VLVKSPNHV GLAPPQHLI LLGRNSFEV PLTFGWCYK VLEWRFDSR TLNAWVKVV GLCTLVAML FIDSYICQV IISAVVGIL VMAGVGSPY LLWTLVVLL SVRDRLARL LLMDCSGSI CLTSTVQLV VLHDDLLEA LMWITQCFL SLLMWITQC QLSLLMWIT LLGATCMFV RLTRFLSRV YMDGTMSQV FLTPKKLQC ISNDVCAQV VKTDGNPPE SVYDFFVWL FLYGALLLA VLFSSDFRI LMWAKIGPV SLLLELEEV SLSRFSWGA YTAFTIPSI RLMKQDFSV RLPRIFCSC FLWGPRAYA RLLQETELV SLFEGIDFY SLDQSVVEL RLNMFTPYI NMFTPYIGV LMIIPLINV TLFIGSHVV SLVIVTTFV VLQWASLAV ILAKFLHWL STAPPHVNV LLLLTVLTV VVLGVVFGI ILHNGAYSL MIMVKCWMI MLGTHTMEV MLGTHTMEV SLADTNSLA LLWAARPRL GVALQTMKQ GLYDGMEHL KMVELVHFL YLQLVFGIE MLMAQEALA LMAQEALAF VYDGREHTV YLSGANLNL RMFPNAPYL EAAGIGILT TLDSQVMSL STPPPGTRV KVAELVHFL IMIGVLVGV ALCRWGLLL LLFAGVQCQ VLLCESTAV YLSTAFARV YLLEMLWRL SLDDYNHLV RTLDKVLEV GLPVEYLQV KLIANNTRV FIYAGSLSA KLVANNTRL FLDEFMEGV ALQPGTALL VLDGLDVLL SLYSFPEPE ALYVDSLFF SLLQHLIGL ELTLGEFLK MINAYLDKL AAGIGILTV FLPSDFFPS SVRDRLARL SLREWLLRI LLSAWILTA AAGIGILTV AVPDEIPPL FAYDGKDYI AAGIGILTV FLPSDFFPS AAGIGILTV FLPSDFFPS AAGIGILTV FLWGPRALV ETVSEQSNV ITLWQRPLV MHC peptide binding

Mutual information
How is mutual information calculated? The information content in a single position was calculated as
$I = \sum_a p_a \log_2 (p_a / q_a)$
where $p_a$ is the observed frequency of amino acid $a$ in that position and $q_a$ its background frequency. A similar relation gives the mutual information between two positions:
$MI = \sum_{a,b} p_{ab} \log_2 \frac{p_{ab}}{p_a \, p_b}$

Mutual information. Example
ALWGFFPVA ILKEPVHGV ILGFVFTLT LLFGYPVYV GLSPTVWLS YMNGTMSQV GILGFVFTL WLSLLVPFV FLPSDFFPS
Consider positions P1 and P6:
$P(G_1) = 2/9 = 0.22$, …  $P(V_6) = 4/9 = 0.44$, …
$P(G_1, V_6) = 2/9 = 0.22$, while $P(G_1) \cdot P(V_6) = 8/81 = 0.10$
$\log(0.22 / 0.10) > 0$
Knowing that you have G at P1 allows you to make an educated guess about what you will find at P6: $P(V_6) = 4/9$, but $P(V_6 | G_1) = 1.0$!
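To make the calculation concrete, here is a minimal Python sketch (not part of the original slides) that computes the mutual information between two positions from the nine example peptides above:

from collections import Counter
from math import log2

peptides = ["ALWGFFPVA", "ILKEPVHGV", "ILGFVFTLT", "LLFGYPVYV", "GLSPTVWLS",
            "YMNGTMSQV", "GILGFVFTL", "WLSLLVPFV", "FLPSDFFPS"]

def mutual_information(peps, i, j):
    n = len(peps)
    pi = Counter(p[i] for p in peps)            # marginal counts at position i
    pj = Counter(p[j] for p in peps)            # marginal counts at position j
    pij = Counter((p[i], p[j]) for p in peps)   # joint counts for (i, j)
    return sum((c / n) * log2((c / n) / ((pi[a] / n) * (pj[b] / n)))
               for (a, b), c in pij.items())

print(mutual_information(peptides, 0, 5))       # MI between P1 and P6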

Mutual information
[Figure: mutual information between peptide positions, 313 binding peptides vs. 313 random peptides]

Higher order sequence correlations
Neural networks can learn higher order correlations!
– What does this mean?
Say that a peptide needs one and only one large amino acid in positions P3 and P4 to fill the binding cleft. How would you formulate a test of whether a peptide can bind? With S = small and L = large:
S S => 0
L S => 1
S L => 1
L L => 0
=> the XOR function

Neural networks
Neural networks can learn higher order correlations
XOR function:
0 0 => 0
0 1 => 1
1 0 => 1
1 1 => 0
No linear function can separate the points
[Figure: OR, AND, and XOR plotted on the points (0,0), (0,1), (1,0), (1,1); OR and AND are linearly separable, XOR is not]

Error estimates
XOR:
0 0 => 0
0 1 => 1
1 0 => 1
1 1 => 0
The best linear function predicts one of the four points incorrectly
Mean error: 1/4

Neural networks
Linear function: $o = v_1 x_1 + v_2 x_2$
[Figure: a network without a hidden layer; the inputs connect directly to the output through weights v1 and v2]

Neural networks with a hidden layer
[Figure: two inputs plus a bias input (fixed to 1) connect to two hidden neurons through weights w11, w12, w21, w22 and bias weights wt1, wt2; the hidden neurons plus a bias connect to the output through weights v1, v2 and bias weight vt]

Neural networks

How does it work?
Example: the input is (0, 0), plus the bias input fixed to 1.
Hidden neuron 1: summed input o1 = -6, output O1 = 0
Hidden neuron 2: summed input o2 = -2, output O2 = 0
Output neuron: summed input y1 = -4.5, output Y1 = 0
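A small runnable sketch of this forward pass. The bias weights (-6, -2, -4.5) are the values shown on the slide; the remaining weights (4, 4, -9, 9) are one assumed set of values that makes a step-activation network compute XOR:

def step(x):                        # threshold activation: O = 1 if o >= 0 else 0
    return 1 if x >= 0 else 0

def forward(x1, x2):
    o1 = 4 * x1 + 4 * x2 - 6        # hidden neuron 1: fires only when both inputs are 1 (AND)
    o2 = 4 * x1 + 4 * x2 - 2        # hidden neuron 2: fires when at least one input is 1 (OR)
    O1, O2 = step(o1), step(o2)
    y1 = -9 * O1 + 9 * O2 - 4.5     # output: OR and not AND, i.e. XOR
    return step(y1)

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print((x1, x2), "->", forward(x1, x2))   # prints 0, 1, 1, 0

For the input (0, 0) this reproduces the slide's numbers: o1 = -6, o2 = -2, y1 = -4.5, giving output 0.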

Neural networks. How does it work? (Handout)

What is going on?
XOR function:
0 0 => 0
0 1 => 1
1 0 => 1
1 1 => 0
[Figure: the trained network with bias input 1; y1 and y2 denote the two hidden-neuron outputs]

What is going on?
[Figure: the hidden layer maps the four inputs (0,0), (0,1), (1,0), (1,1) in the (x1, x2) plane onto the points (0,0), (1,0), (2,2) in the (y1, y2) plane, where the two classes become linearly separable]

Training and error reduction

Size matters

Neural network training
A network contains a very large set of parameters
– A network with 5 hidden neurons predicting binding for 9-meric peptides has 9x20x5 = 900 weights
– 5 times as many weights as a matrix-based method
Overfitting is a problem
Stop training when test performance is optimal (use early stopping)
[Figure: temperature measurements over the years]
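To illustrate early stopping, here is a toy, runnable sketch (toy data and a one-weight model, not the slide's network): weights are saved at the epoch with the best test performance and restored at the end.

train = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]    # (x, y) pairs, y roughly 2x
test  = [(4.0, 8.1), (5.0, 9.8)]

w, lr = 0.0, 0.01
best_err, best_w = float("inf"), w
for epoch in range(200):
    for x, y in train:                           # one epoch of gradient descent
        w -= lr * 2 * (w * x - y) * x
    test_err = sum((w * x - y) ** 2 for x, y in test) / len(test)
    if test_err < best_err:                      # test performance still improving
        best_err, best_w = test_err, w
w = best_w                                       # early stopping: keep the best epoch's weights
print("selected weight:", round(w, 3))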

Neural network training. Cross validation
Cross validation:
– Train on 4/5 of the data
– Test on 1/5
=> Produces 5 different neural networks, each with a different prediction focus
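A minimal sketch of the 5-fold scheme; the trainer and the dataset below are stand-ins (in practice the trainer would fit a network on the training folds and use the held-out fold for testing and early stopping):

def five_fold_splits(data, k=5):
    folds = [data[i::k] for i in range(k)]       # interleaved 5-way split
    for i in range(k):
        test = folds[i]
        train = [x for j, fold in enumerate(folds) if j != i for x in fold]
        yield train, test

train_network = lambda train, stop_set: {"trained_on": len(train)}  # stand-in trainer
peptide_data = list(range(100))                  # stand-in for real peptide data

networks = [train_network(tr, te) for tr, te in five_fold_splits(peptide_data)]
print(len(networks))                             # 5 networks, each with its own focus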

Neural network training curve
Maximum test set performance = most capable of generalizing

Demo

Network training
Encoding of sequence data:
– Sparse encoding
– Blosum encoding
– Sequence profile encoding

Sparse encoding
Each amino acid is encoded by 20 input neurons, all set to 0 except the neuron for that amino acid (amino acid order A R N D C Q E G H I L K M F P S T W Y V):
AAcid  Input neurons
A      1 0 0 0 0 0 0 …
R      0 1 0 0 0 0 0 …
N      0 0 1 0 0 0 0 …
D      0 0 0 1 0 0 0 …
C      0 0 0 0 1 0 0 …
Q      0 0 0 0 0 1 0 …
E      0 0 0 0 0 0 1 …

BLOSUM encoding (Blosum50 matrix)
[Table: the 20x20 Blosum50 substitution matrix, rows and columns ordered A R N D C Q E G H I L K M F P S T W Y V; each amino acid is encoded as its row of substitution scores]

Sequence encoding (continued)
Sparse encoding:
– V and L are encoded as different one-hot vectors
– V · L = 0 (unrelated)
Blosum encoding:
– V and L are encoded as their Blosum50 rows
– V · L = 0.88 (highly related)
– V · R = (close to unrelated)
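A small runnable sketch (not from the original slides) of sparse encoding and the dot products above, using the amino acid order of the Blosum matrix; the Blosum variant is described in comments only:

ALPHABET = "ARNDCQEGHILKMFPSTWYV"

def sparse_encode(peptide):
    """Encode a peptide as len(peptide) x 20 binary inputs (one-hot per residue)."""
    vec = []
    for aa in peptide:
        v = [0.0] * 20
        v[ALPHABET.index(aa)] = 1.0
        vec.extend(v)
    return vec

V, L = sparse_encode("V"), sparse_encode("L")
print(sum(a * b for a, b in zip(V, L)))    # 0.0 -> sparse encoding treats V and L as unrelated

# Blosum encoding instead uses the 20 substitution scores of the amino acid's
# Blosum50 row (suitably scaled), so V and L get a positive dot product
# (0.88 on the slide) while unrelated pairs score close to zero.

peptide = "SLLPAIVEL"
print(len(sparse_encode(peptide)))         # 9 x 20 = 180 inputs, matching 9x20x5 = 900 weights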

The Wisdom of the Crowds
The Wisdom of Crowds: Why the Many are Smarter than the Few. James Surowiecki
One day in the fall of 1906, the British scientist Francis Galton left his home and headed for a country fair… He believed that only a very few people had the characteristics necessary to keep societies healthy. He had devoted much of his career to measuring those characteristics, in fact, in order to prove that the vast majority of people did not have them. … Galton came across a weight-judging competition… Eight hundred people tried their luck. They were a diverse lot: butchers, farmers, clerks and many other non-experts… The crowd had guessed … pounds; the ox weighed 1,198.

Network ensembles
No single network, with one particular architecture and sequence encoding scheme, will consistently perform the best
Enlightened despotism fails for neural network predictions too
– For some peptides, BLOSUM encoding with a four-neuron hidden layer best predicts peptide:MHC binding; for other peptides, a sparse-encoded network with zero hidden neurons performs best
– Wisdom of the Crowd
Never use just one neural network; use network ensembles
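A minimal ensemble sketch: average the outputs of several trained networks instead of trusting any single one. The list of prediction functions is a hypothetical stand-in for networks trained with different encodings and architectures:

def ensemble_predict(predict_fns, peptide):
    scores = [f(peptide) for f in predict_fns]
    return sum(scores) / len(scores)           # simple unweighted average

# Example with stand-in "networks":
nets = [lambda p: 0.70, lambda p: 0.80, lambda p: 0.75]
print(ensemble_predict(nets, "SLLPAIVEL"))     # 0.75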

Evaluation of prediction accuracy
ENS: ensemble of neural networks trained using sparse, Blosum, and weight matrix sequence encoding

Applications of artificial neural networks
– Speech recognition
– Prediction of protein secondary structure
– Prediction of signal peptides
– Post-translational modifications: glycosylation, phosphorylation
– Proteasomal cleavage
– MHC:peptide binding

Prediction of protein secondary structure
Benefits:
– Generally applicable
– Can capture higher order correlations
– Can use inputs other than sequence information
Drawbacks:
– Needs a lot of data (different solved structures). However, such data do exist today (more than 2500 solved structures with low sequence identity and high resolution)
– Complex method with several pitfalls

Secondary structure elements
β-strand, helix, turn, bend

Sparse encoding of amino acid sequence windows

How is it done?
One network (SEQ2STR) takes sequence (profiles) as input and predicts secondary structure
– It cannot deal with the extent of SS elements, e.g. helices are normally formed by at least 5 consecutive amino acids
A second network (STR2STR) takes the predictions of the first network as input and predicts secondary structure
– It can correct errors in SS elements, e.g. remove isolated single-residue helix predictions or mixtures of strand and helix predictions
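A minimal sketch of this two-step scheme; the window size, the padding, and the two predict_* functions are assumptions, with the latter standing in for the trained SEQ2STR and STR2STR networks:

def windows(items, size=13, pad=None):
    """Slide a centered window over a sequence, padding at both ends."""
    half = size // 2
    padded = [pad] * half + list(items) + [pad] * half
    return [padded[i:i + size] for i in range(len(items))]

def predict_ss(sequence, predict_seq2str, predict_str2str):
    # Step 1: window of encoded sequence -> H/E/C prediction per residue
    first = [predict_seq2str(w) for w in windows(sequence)]
    # Step 2: window of first-step predictions -> corrected H/E/C prediction
    return [predict_str2str(w) for w in windows(first)]

# Dummy usage with stand-in predictors:
seq = "IKEEHVIIQAEFYLNPDQSGEF"
dummy1 = lambda w: "C"        # stand-in SEQ2STR
dummy2 = lambda w: "C"        # stand-in STR2STR
print("".join(predict_ss(seq, dummy1, dummy2)))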

Architecture
[Figure: a sliding window (e.g. I K E E H V I I Q A E) over the sequence IKEEHVIIQAEFYLNPDQSGEF… is encoded and passed through the input layer, weights, and hidden layer to an output layer with three outputs: H, E, C]

Secondary networks (structure-to-structure)
[Figure: a sliding window of the first network's H/E/C predictions along IKEEHVIIQAEFYLNPDQSGEF… is passed through the input layer, weights, and hidden layer to an output layer giving the corrected H, E, C prediction]

Example
PITKEVEVEYLLRRLEE (Sequence)
HHHHHHHHHHHHTGGG. (DSSP)
ECCCHEEHHHHHHHCCC (SEQ2STR)
CCCCHHHHHHHHHHCCC (STR2STR)

Why so many networks? Q3 is the overall accuracy

Why not select the best?

What have we learned?
Neural networks are not as bad as their reputation
Neural networks can deal with higher order correlations
Be careful when training a neural network
– Always use cross-validated training