Prediction of T cell epitopes using artificial neural networks Morten Nielsen, CBS, BioCentrum, DTU.

1 Prediction of T cell epitopes using artificial neural networks Morten Nielsen, CBS, BioCentrum, DTU

2 Objectives
How to train a neural network to predict peptide MHC class I binding
Understand why NNs perform the best
– Higher order sequence information
The wisdom of the crowd!
– Why enlightened despotism does not work, even for neural networks

3 Outline
MHC class I epitopes
– Why MHC binding?
How to predict MHC binding?
– Information content
– Weight matrices
– Neural networks
Neural network theory
– Sequence encoding
Examples

4 MHC binding. Processing of intracellular proteins (http://www.nki.nl/nkidep/h4/neefjes/neefjes.htm)

5 From proteins to immunogens (Lauemøller et al., 2000)
20% processed, 0.5% bind MHC, 50% give a CTL response => roughly 1/2000 peptides are immunogenic

6 Anchor positions MHC class I with peptide

7 What makes a peptide a potential and effective epitope?
Part of a pathogen protein
Successful processing
– Proteasome cleavage
– TAP binding
Binds to the MHC molecule
Protein function
– Early in replication
Sequence conservation in evolution

8 Prediction of HLA binding specificity
Simple motifs
– Allowed/non-allowed amino acids
Extended motifs (SYFPEITHI)
– Amino acid preferences
– Anchor/preferred/other amino acids
Hidden Markov models
– Peptide statistics from sequence alignment (previous talk)
Neural networks
– Can take sequence correlations into account

9 SYFPEITHI predictions
Extended motifs based on peptides from the literature and peptides eluted from cells expressing specific HLAs (i.e., binding peptides). The scoring scheme is not readily accessible. Positions defined as anchor or auxiliary anchor positions are weighted more heavily. The final score is the sum of the scores at each position. Predictions can be made for several HLA-A, -B and -DRB1 alleles, as well as some mouse K, D and L alleles.

10 BIMAS
Matrix made from peptides with a measured T1/2 for the MHC-peptide complex. The matrices are available on the website. The final score is the product of the scores at each position in the matrix, multiplied by a constant, different for each MHC, to give a prediction of the T1/2. Predictions can be obtained for several HLA-A, -B and -C alleles, mouse K, D and L alleles, and a single cattle MHC.

11 How to predict The effect on the binding affinity of having a given amino acid at one position can be influenced by the amino acids at other positions in the peptide (sequence correlations). –Two adjacent amino acids may for example compete for the space in a pocket in the MHC molecule. Artificial neural networks (ANN) are ideally suited to take such correlations into account

12 Higher order sequence correlations
Neural networks can learn higher order correlations!
– What does this mean?
Say that the peptide needs one and only one large amino acid in positions P3 and P4 to fill the binding cleft. How would you formulate this to test if a peptide can bind?
S S => 0
L S => 1
S L => 1
L L => 0
No linear function can learn this (XOR) pattern

13 Learning higher order correlations
0 0 => 0; 1 0 => 1
1 1 => 0; 0 1 => 1
Step function with threshold t. Check it out. It does actually work!
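The XOR network above can be written out explicitly. A minimal sketch with step units and thresholds (the specific weights and thresholds below are one possible choice, not necessarily those on the slide), encoding small/large residues as 0/1:

```python
def step(x, t):
    """Step (threshold) activation: fires (1) when the input exceeds threshold t."""
    return 1 if x > t else 0

def xor_net(x1, x2):
    """Two-input network with one hidden layer of step units computing XOR."""
    h1 = step(x1 + x2, 0.5)    # fires if at least one input is 1
    h2 = step(x1 + x2, 1.5)    # fires only if both inputs are 1
    return step(h1 - h2, 0.5)  # output: h1 AND NOT h2

# S = small (0), L = large (1): exactly one large residue => binder (1)
for a, b in [(0, 0), (1, 0), (0, 1), (1, 1)]:
    print(a, b, "=>", xor_net(a, b))
```

No single linear unit can separate these four cases; the hidden layer is what makes the pattern learnable.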

14 Neural network learning higher order correlations

15 Mutual information
How is mutual information calculated? The information content of a single position was calculated as I = Σ_a p_a log(p_a / q_a), where p_a is the observed amino acid frequency at that position and q_a the background frequency. A similar relation, MI = Σ_ab p_ab log(p_ab / (p_a p_b)), gives the mutual information between two positions.

16 Mutual information. Example
ALWGFFPVA ILKEPVHGV ILGFVFTLT LLFGYPVYV GLSPTVWLS YMNGTMSQV GILGFVFTL WLSLLVPFV FLPSDFFPS
Consider positions P1 and P6:
P(G1) = 2/9 = 0.22, ... P(V6) = 4/9 = 0.44, ...
P(G1,V6) = 2/9 = 0.22, P(G1)*P(V6) = 8/81 = 0.10
log(0.22/0.10) > 0
Knowing that you have G at P1 allows you to make an educated guess on what you will find at P6: P(V6) = 4/9, but P(V6|G1) = 1.0!
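The numbers above can be reproduced directly from the nine peptides. A sketch (the `mutual_information` helper is an illustration of the formula, not code from the talk):

```python
from collections import Counter
from math import log2

peptides = ["ALWGFFPVA", "ILKEPVHGV", "ILGFVFTLT", "LLFGYPVYV", "GLSPTVWLS",
            "YMNGTMSQV", "GILGFVFTL", "WLSLLVPFV", "FLPSDFFPS"]
n = len(peptides)

def mutual_information(peps, i, j):
    """MI between positions i and j (0-based): sum over amino acid
    pairs (a, b) of P(a,b) * log2( P(a,b) / (P(a) * P(b)) )."""
    m = len(peps)
    pi = Counter(p[i] for p in peps)
    pj = Counter(p[j] for p in peps)
    pij = Counter((p[i], p[j]) for p in peps)
    return sum((c / m) * log2((c / m) / ((pi[a] / m) * (pj[b] / m)))
               for (a, b), c in pij.items())

# The single G1/V6 term from the slide:
pG1 = sum(p[0] == "G" for p in peptides) / n                     # 2/9 ~ 0.22
pV6 = sum(p[5] == "V" for p in peptides) / n                     # 4/9 ~ 0.44
pG1V6 = sum(p[0] == "G" and p[5] == "V" for p in peptides) / n   # 2/9 ~ 0.22
print(log2(pG1V6 / (pG1 * pV6)) > 0)       # positive contribution to the MI sum
print(mutual_information(peptides, 0, 5))  # > 0: P1 and P6 share information
```

The MI is zero only if the two positions are statistically independent; here P(V6|G1) = 1.0 differs from P(V6) = 4/9, so the MI is positive.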

17 Mutual information: 313 binding peptides vs. 313 random peptides

18 Neural network training
[Slide shows a large list of 9mer binding peptides used as training data]
Sequence encoding
– Sparse
– Blosum
– Hidden Markov model
Network ensembles
– Cross validated training
– Benefit from ensembles

19 Sequence encoding
How to represent a peptide amino acid sequence to the neural network?
Sparse encoding (all amino acids are equally dissimilar)
Blosum encoding (encodes similarities between the different amino acids)
Hidden Markov model (encodes the position-specific amino acid preference of the HLA binding motif)

20 Sequence encoding (continued)
Sparse encoding
V: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
L: 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0
V · L = 0 (unrelated)
Blosum encoding
V:  0 -3 -3 -3 -1 -2 -2 -3 -3  3  1 -2  1 -1 -2 -2  0 -3 -1  4
L: -1 -2 -3 -4 -1 -2 -3 -4 -3  2  4 -2  2  0 -3 -2 -1 -2 -1  1
V · L = 0.88 (highly related)
V · R = -0.08 (close to unrelated)
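The two comparisons above can be checked in a few lines. A sketch in pure Python (the rows are the Blosum V and L rows from the slide; the 0.88 on the slide is consistent with a normalized, cosine-style dot product of the rows, which is an assumption here):

```python
import math

AA = "ARNDCQEGHILKMFPSTWYV"  # amino acid column order used throughout the slides

BLOSUM_ROW = {  # transcribed from the slide above
    "V": [0, -3, -3, -3, -1, -2, -2, -3, -3, 3, 1, -2, 1, -1, -2, -2, 0, -3, -1, 4],
    "L": [-1, -2, -3, -4, -1, -2, -3, -4, -3, 2, 4, -2, 2, 0, -3, -2, -1, -2, -1, 1],
}

def sparse(aa):
    """Sparse encoding: a 20-dim indicator vector, so any two distinct
    amino acids have dot product 0 (equally dissimilar)."""
    return [1.0 if a == aa else 0.0 for a in AA]

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def cosine(u, v):
    return dot(u, v) / math.sqrt(dot(u, u) * dot(v, v))

print(dot(sparse("V"), sparse("L")))                       # 0.0 - unrelated
print(round(cosine(BLOSUM_ROW["V"], BLOSUM_ROW["L"]), 2))  # ~0.87 - highly related
```

The point of Blosum encoding is exactly this: the network sees V and L as nearly the same input, so what it learns about one generalizes to the other.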

21 Weight matrices
     A    R    N    D    C    Q    E    G    H    I    L    K    M    F    P    S    T    W    Y    V
1  0.6  0.4 -3.5 -2.4 -0.4 -1.9 -2.7  0.3 -1.1  1.0  0.3  0.0  1.4  1.2 -2.7  1.4 -1.2 -2.0  1.1  0.7
2 -1.6 -6.6 -6.5 -5.4 -2.5 -4.0 -4.7 -3.7 -6.3  1.0  5.1 -3.7  3.1 -4.2 -4.3 -4.2 -0.2 -5.9 -3.8  0.4
3  0.2 -1.3  0.1  1.5  0.0 -1.8 -3.3  0.4  0.5 -1.0  0.3 -2.5  1.2  1.0 -0.1 -0.3 -0.5  3.4  1.6  0.0
4 -0.1 -0.1 -2.0  2.0 -1.6  0.5  0.8  2.0 -3.3  0.1 -1.7 -1.0 -2.2 -1.6  1.7 -0.6 -0.2  1.3 -6.8 -0.7
5 -1.6 -0.1  0.1 -2.2 -1.2  0.4 -0.5  1.9  1.2 -2.2 -0.5 -1.3 -2.2  1.7  1.2 -2.5 -0.1  1.7  1.5  1.0
6 -0.7 -1.4 -1.0 -2.3  1.1 -1.3 -1.4 -0.2 -1.0  1.8  0.8 -1.9  0.2  1.0 -0.4 -0.6  0.4 -0.5 -0.0  2.1
7  1.1 -3.8 -0.2 -1.3  1.3 -0.3 -1.3 -1.4  2.1  0.6  0.7 -5.0  1.1  0.9  1.3 -0.5 -0.9  2.9 -0.4  0.5
8 -2.2  1.0 -0.8 -2.9 -1.4  0.4  0.1 -0.4  0.2 -0.0  1.1 -0.5 -0.5  0.7 -0.3  0.8  0.8 -0.7  1.3 -1.1
9 -0.2 -3.5 -6.1 -4.5  0.7 -0.8 -2.5 -4.0 -2.6  0.9  2.8 -3.0 -1.8 -1.4 -6.2 -1.9 -1.6 -4.9 -1.6  4.5
Scoring the peptide NLTISDVSV: -3.5 + 5.1 + (-0.5) + ….. + 4.5
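Scoring with a weight matrix is just a per-position lookup and sum. A sketch transcribing the nine rows above; the per-position values for NLTISDVSV match the -3.5, 5.1, -0.5, ..., 4.5 shown on the slide:

```python
AA = "ARNDCQEGHILKMFPSTWYV"  # column order of the matrix above

# Weight matrix from the slide: one row of 20 scores per peptide position
W = [
    [0.6, 0.4, -3.5, -2.4, -0.4, -1.9, -2.7, 0.3, -1.1, 1.0, 0.3, 0.0, 1.4, 1.2, -2.7, 1.4, -1.2, -2.0, 1.1, 0.7],
    [-1.6, -6.6, -6.5, -5.4, -2.5, -4.0, -4.7, -3.7, -6.3, 1.0, 5.1, -3.7, 3.1, -4.2, -4.3, -4.2, -0.2, -5.9, -3.8, 0.4],
    [0.2, -1.3, 0.1, 1.5, 0.0, -1.8, -3.3, 0.4, 0.5, -1.0, 0.3, -2.5, 1.2, 1.0, -0.1, -0.3, -0.5, 3.4, 1.6, 0.0],
    [-0.1, -0.1, -2.0, 2.0, -1.6, 0.5, 0.8, 2.0, -3.3, 0.1, -1.7, -1.0, -2.2, -1.6, 1.7, -0.6, -0.2, 1.3, -6.8, -0.7],
    [-1.6, -0.1, 0.1, -2.2, -1.2, 0.4, -0.5, 1.9, 1.2, -2.2, -0.5, -1.3, -2.2, 1.7, 1.2, -2.5, -0.1, 1.7, 1.5, 1.0],
    [-0.7, -1.4, -1.0, -2.3, 1.1, -1.3, -1.4, -0.2, -1.0, 1.8, 0.8, -1.9, 0.2, 1.0, -0.4, -0.6, 0.4, -0.5, -0.0, 2.1],
    [1.1, -3.8, -0.2, -1.3, 1.3, -0.3, -1.3, -1.4, 2.1, 0.6, 0.7, -5.0, 1.1, 0.9, 1.3, -0.5, -0.9, 2.9, -0.4, 0.5],
    [-2.2, 1.0, -0.8, -2.9, -1.4, 0.4, 0.1, -0.4, 0.2, -0.0, 1.1, -0.5, -0.5, 0.7, -0.3, 0.8, 0.8, -0.7, 1.3, -1.1],
    [-0.2, -3.5, -6.1, -4.5, 0.7, -0.8, -2.5, -4.0, -2.6, 0.9, 2.8, -3.0, -1.8, -1.4, -6.2, -1.9, -1.6, -4.9, -1.6, 4.5],
]

def score(peptide):
    """Sum the matrix score of each residue at its position."""
    return sum(W[pos][AA.index(aa)] for pos, aa in enumerate(peptide))

per_pos = [W[i][AA.index(a)] for i, a in enumerate("NLTISDVSV")]
print(per_pos[0], per_pos[1], per_pos[2], "...", per_pos[8])  # -3.5 5.1 -0.5 ... 4.5
print(round(score("NLTISDVSV"), 1))
```

Note this matrix-style score is additive across positions, which is exactly the independence assumption that the neural networks in the following slides relax.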

22 Evaluation of prediction accuracy

23 Neural network training
A network contains a very large set of parameters
– A network with 5 hidden neurons predicting binding for 9mer peptides has 9x20x5 = 900 weights
Overfitting is a problem
Stop training when the test performance is optimal

24 Neural network training. Cross validation
Train on 4/5 of the data, test on 1/5 => produces 5 different neural networks, each with a different prediction focus
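The 5-fold setup can be sketched as follows (a generic illustration of the data split, not the original training code):

```python
import random

def five_fold(data, seed=1):
    """Split data into 5 folds and yield (train, test) pairs: each of the
    5 networks trains on 4/5 of the data and is tested on the held-out 1/5."""
    shuffled = data[:]
    random.Random(seed).shuffle(shuffled)
    folds = [shuffled[k::5] for k in range(5)]
    for k in range(5):
        test = folds[k]
        train = [x for i, fold in enumerate(folds) if i != k for x in fold]
        yield train, test

data = list(range(25))  # placeholder for 25 labeled peptides
for train, test in five_fold(data):
    print(len(train), len(test))  # 20 5 on each of the five rounds
```

Each held-out fold serves as the test set used to decide when to stop training that particular network, and the five resulting networks later form the ensemble.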

25 Neural network training curve
Maximum test set performance: most capable of generalizing

26 Network ensembles

27 The Wisdom of the Crowds
The Wisdom of Crowds. Why the Many are Smarter than the Few. James Surowiecki
One day in the fall of 1906, the British scientist Francis Galton left his home and headed for a country fair… He believed that only a very few people had the characteristics necessary to keep societies healthy. He had devoted much of his career to measuring those characteristics, in fact, in order to prove that the vast majority of people did not have them. … Galton came across a weight-judging competition… Eight hundred people tried their luck. They were a diverse lot: butchers, farmers, clerks and many other non-experts… The crowd had guessed 1,197 pounds; the ox weighed 1,198.

28 Network ensembles
No single network, with a particular architecture and sequence encoding scheme, will consistently perform the best
Enlightened despotism also fails for neural network predictions
– For some peptides, Blosum encoding with a four-neuron hidden layer best predicts the peptide/MHC binding; for other peptides a sparse-encoded network with zero hidden neurons performs the best
– Wisdom of the crowd
Never use just one neural network
Use network ensembles
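In its simplest form the ensemble prediction is just the average over all trained networks. A sketch (the lambda predictors below are hypothetical stand-ins for real trained networks):

```python
def ensemble_predict(networks, peptide):
    """Average the predictions of all networks rather than
    trusting any single 'best' network."""
    scores = [net(peptide) for net in networks]
    return sum(scores) / len(scores)

# Hypothetical predictors standing in for networks that differ in
# sequence encoding (sparse/Blosum/HMM) and number of hidden neurons
nets = [lambda p: 0.70, lambda p: 0.50, lambda p: 0.66]
print(round(ensemble_predict(nets, "GILGFVFTL"), 2))  # 0.62
```

Because the individual networks make partly uncorrelated errors, the averaged prediction is typically more reliable than any single member.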

29 Evaluation of prediction accuracy
ENS: ensemble of neural networks trained using sparse, Blosum, and hidden Markov model sequence encoding

30 T cell epitope identification (Lauemøller et al., Reviews in Immunogenetics, 2001)

31

32

33

34 NetMHC output [screenshot of prediction output; the values 53, 49, 94, 289, 529 remain from the table]

35 Examples. Hepatitis C virus. Epitope predictions Hotspots

36 SARS T cell epitope identification
Peptides tested: 15/15 (100%). Binders (KD < 500 nM): 14/15 (93%)

37 More SARS CTL epitopes
Alleles: A0301, A1101, B0702, A0201, B5801, B1501 (A2 supertype)
Binders per allele: 11/15, 14/15, 10/15, 13/15, 12/14, 12/15
Molecule used: rA0201/human β2m

38 Vaccine design. Polytope optimization
Successful immunization can be obtained only if the epitopes encoded by the polytope are correctly processed and presented. Cleavage by the proteasome in the cytosol, translocation into the ER by the TAP complex, and binding to MHC class I should be taken into account in an integrative manner. A polytope can be designed effectively by modifying the sequential order of the different epitopes, and by inserting specific amino acids, as linkers between the epitopes, that favor optimal cleavage and transport by the TAP complex.

39 Vaccine design. Polytope construction
[Diagram: polytope from NH2 to COOH built from epitopes and linkers (with an initial M), showing desired C-terminal cleavage, unwanted cleavage within epitopes, and unwanted new-epitope cleavage at junctions]

40 Polytope starting configuration. Immunological Bioinformatics, The MIT Press.

41 Polytope optimization algorithm
Optimization of four measures:
1. The number of poor C-terminal cleavage sites of epitopes (predicted cleavage < 0.9)
2. The number of internal cleavage sites (within-epitope cleavages with a prediction larger than the predicted C-terminal cleavage)
3. The number of new epitopes (number of processed and presented epitopes in the fusing regions spanning the epitopes)
4. The length of the linker regions inserted between epitopes
The optimization seeks to minimize the above four terms by use of Monte Carlo Metropolis simulations [Metropolis et al., 1953]
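A Metropolis search over epitope orderings can be sketched like this. The cost function below is a deliberately simplified, hypothetical stand-in for the four measures above (it only penalizes identical residues flanking a junction), not the predictors used in the talk:

```python
import math
import random

def metropolis_optimize(epitopes, cost, steps=5000, T=1.0, seed=0):
    """Monte Carlo Metropolis search over epitope orderings: propose a swap,
    always accept improvements, accept worsenings with probability exp(-dE/T)."""
    rng = random.Random(seed)
    order = epitopes[:]
    c = cost(order)
    best, best_c = order[:], c
    for _ in range(steps):
        i, j = rng.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]   # propose: swap two epitopes
        new_c = cost(order)
        dE = new_c - c
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            c = new_c                             # accept the move
            if c < best_c:
                best, best_c = order[:], c
        else:
            order[i], order[j] = order[j], order[i]  # reject: undo the swap
    return best, best_c

def toy_cost(order):
    """Toy junction penalty: count junctions where the last residue of one
    epitope equals the first residue of the next."""
    return sum(a[-1] == b[0] for a, b in zip(order, order[1:]))

epitopes = ["GILGFVFTL", "LLFGYPVYV", "VLQWASLAV", "YMNGTMSQV"]
best, best_cost = metropolis_optimize(epitopes, toy_cost)
print(best_cost, best)
```

In the real algorithm the cost would combine the predicted C-terminal cleavage, internal cleavage, new-epitope and linker-length terms; the Metropolis machinery is unchanged.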

42 Polytope optimal configuration. Immunological Bioinformatics, The MIT Press.

