Characterization of Secondary Structure of Proteins using Different Vocabularies Madhavi K. Ganapathiraju, Language Technologies Institute Advisors: Raj Reddy, Judith Klein-Seetharaman, Roni Rosenfeld 2nd Biological Language Modeling Workshop, Carnegie Mellon University, May
2 Presentation overview Classification of Protein Segments by their Secondary Structure types Document Processing Techniques Choice of Vocabulary in Protein Sequences Application of Latent Semantic Analysis Results Discussion
3 Sample Protein: MEPAPSAGAELQPPLFANASDAYPSACPSAGANASGPPGARSASSLALAIAITALYSAVCAVGLLGNVLVMFGIVRYTKMKTATNIYIFNLALADALATSTLPFQSA… Secondary Structure of Protein
4 Application of Text Processing Letters, Words, Sentences ↔ Residues, Secondary Structure elements, Proteins/Genomes. Letter counts in languages; word counts in documents. Can unigrams distinguish Secondary Structure Elements from one another?
5 Unigrams for Document Classification Word-Document matrix –represents documents in terms of their word unigrams “Bag-of-words” model since the position of words in the document is not taken into account
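The bag-of-words construction above can be sketched in a few lines of Python (a toy illustration; the function name, the three-word vocabulary and the example documents are invented here, not taken from the talk):

```python
from collections import Counter

def word_document_matrix(documents, vocabulary):
    """Build a |vocabulary| x |documents| matrix of raw unigram counts.
    Entry (i, j) is the number of times word i occurs in document j;
    word positions are ignored, hence "bag of words"."""
    return [[Counter(doc)[word] for doc in documents] for word in vocabulary]

# Two toy "documents" over a three-letter vocabulary
docs = [list("ACAB"), list("BBCA")]
vocab = ["A", "B", "C"]
M = word_document_matrix(docs, vocab)
# Each column of M is a document vector; each row, a word vector.
```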
6 Word Document Matrix
7 Document Vectors
8 Doc-1 Document Vectors
9 Doc-2 Document Vectors
10 Doc-3 Document Vectors
11 Doc-N Document Vectors
12 Document Comparison Documents can be compared to one another in terms of the dot product of their document vectors
14 Document Comparison Documents can be compared to one another in terms of the dot product of their document vectors. Formal modeling of documents is presented in the next few slides…
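The comparison above is the dot product of two document vectors, usually normalized to unit length (cosine similarity, the standard form of this comparison in vector space models; the helper below is illustrative, not from the talk):

```python
import math

def cosine_similarity(u, v):
    """Dot product of two document vectors divided by their lengths,
    so identical directions score 1.0 and orthogonal ones 0.0."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)
```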
15 Vector Space Model construction Document vectors in word-document matrix are normalized –By word counts in entire document collection –By document lengths This gives a Vector Space Model (VSM) of the set of documents Equations for Normalization…Equations
16 Word count normalization The raw count c_ij of word i in document j is divided by the document length n_j and weighted by a factor that depends on the word's count in the corpus (following the entropy weighting of Bellegarda [3]): w_ij = (1 − ε_i) · c_ij / n_j, where ε_i = −(1/log N) Σ_j (c_ij / t_i) log(c_ij / t_i). Here t_i is the total number of times word i occurs in the corpus and N is the number of documents.
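This normalization can be sketched as follows (assuming the entropy-based weighting of Bellegarda's LSA formulation; the talk's exact weighting may differ):

```python
import math

def normalize(counts):
    """Entropy-weighted normalization of a word-document count matrix.
    counts[i][j] is the raw count of word i in document j; each entry is
    divided by the document length and scaled by (1 - entropy of the word),
    so words spread evenly over the corpus are down-weighted."""
    n_docs = len(counts[0])
    doc_len = [sum(row[j] for row in counts) for j in range(n_docs)]
    normalized = []
    for row in counts:
        t_i = sum(row)  # total count of word i in the corpus
        entropy = -sum((c / t_i) * math.log(c / t_i)
                       for c in row if c) / math.log(n_docs)
        normalized.append([(1 - entropy) * c / doc_len[j]
                           for j, c in enumerate(row)])
    return normalized

W = normalize([[1, 1], [2, 0]])
# Word 0 occurs evenly in both documents, so its weight drops to zero;
# word 1 is concentrated in document 0 and keeps a non-zero weight there.
```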
17 Word-Document Matrix Normalized Word-Document Matrix
18 Document vectors after normalization...
19 Use of Vector Space Model A query document is also represented as a vector It is normalized by corpus word counts Documents related to the query-doc are identified –by measuring similarity of document vectors to the query document vector
20 Application to Protein Secondary Structure Prediction
21 Protein Secondary Structure DSSP (Dictionary of Protein Secondary Structure): annotation of each residue with its structure, based on hydrogen bonding patterns and geometrical constraints. 7 DSSP labels for PSS: H, G (helix types); E, B (strand types); S, I, T (coil types)
22 Example Residues: PKPPVKFNRRIFLLNTQNVINGYVKWAINDVSLALPPTPYLGAMKYNLLH DSSP: ____SS_SEEEEEEEEEEEETTEEEEEETTEEE___SS_HHHHHHTT_TT Key to DSSP labels: T, S, I, _: Coil; E, B: Strand; H, G: Helix
23 Reference Model Proteins are segmented into structural Segments Normalized word-document matrix –constructed from structural segments
24 Example Residues: PKPPVKFNRRIFLLNTQNVINGYVKWAINDVSLALPPTPYLGAMKYNLLH DSSP: ____SS_SEEEEEEEEEEEETTEEEEEETTEEE___SS_HHHHHHTT_TT Structural segments obtained from the given sequence: PKPPVKFN RRIFLLNTQNVI NG YVKWAI ND VSL ALPPTP YLGAMKY NLLH
25 Example Unigrams are counted within each of the structural segments obtained from the given sequence: PKPPVKFN RRIFLLNTQNVI NG YVKWAI ND VSL ALPPTP YLGAMKY NLLH
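Segmenting a sequence by its DSSP annotation, as in the example above, can be sketched like this (a hypothetical helper; the three-way label grouping follows the DSSP key on the earlier slide):

```python
from itertools import groupby

def structural_segments(residues, dssp):
    """Split a residue string into maximal runs sharing one structure
    class, mapping DSSP labels to Helix (H, G), Strand (E, B) and
    Coil (T, S, I, _)."""
    def structure_class(label):
        if label in "HG":
            return "Helix"
        if label in "EB":
            return "Strand"
        return "Coil"
    segments = []
    start = 0
    for cls, run in groupby(dssp, key=structure_class):
        length = len(list(run))
        segments.append((residues[start:start + length], cls))
        start += length
    return segments
```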
26 Amino Acids Structural Segments Amino-acid Structural-Segment Matrix
27 Amino Acids Structural Segments Amino-acid Structural-Segment Matrix Similar to Word-Document Matrix
28 Document Vectors Word Vectors …
29 Document Vectors Word Vectors … Query Vector
30 Data Set used for PSSP JPred data: 513 protein sequences in all, <25% homology between sequences; residues & corresponding DSSP annotations are given. We used 50 sequences for model construction (training) and 30 sequences for testing
31 Proteins from test set –segmented into structural elements –Called “query segments” –Segment vectors are constructed For each query segment –‘n’ most similar reference segment vectors are retrieved –Query segment is assigned same structure as that of the majority of the retrieved segments* Classification *k-nearest neighbour classification
32 Structure type assignment to Query Vector The query vector is compared for similarity against the reference model and the 3 most similar reference vectors are retrieved. Majority voting over these 3 vectors gives Coil, hence the structure type assigned to the query vector is Coil. Key: Helix, Strand, Coil
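The k-nearest-neighbour assignment described above can be sketched as follows (illustrative; the vectors and labels are invented, and similarity is taken as the dot product of normalized document vectors):

```python
from collections import Counter

def classify_segment(query_vec, reference, k=3):
    """Return the majority structure label among the k reference vectors
    most similar to the query vector. `reference` is a list of
    (vector, label) pairs; similarity is the dot product."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    ranked = sorted(reference, key=lambda pair: dot(query_vec, pair[0]),
                    reverse=True)
    top_labels = [label for _, label in ranked[:k]]
    return Counter(top_labels).most_common(1)[0][0]

reference = [([1.0, 0.0], "Helix"), ([0.9, 0.1], "Coil"),
             ([0.8, 0.2], "Coil"), ([0.0, 1.0], "Strand")]
label = classify_segment([1.0, 0.0], reference, k=3)
# Two of the three nearest reference vectors are Coil, so the query
# segment is assigned Coil, as in the slide's example.
```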
33 Choice of Vocabulary in Protein Sequences Amino acids are not all distinct; similarity among them is primarily due to chemical composition. So, represent protein segments in terms of "types" of amino acids, or in terms of "chemical composition"
34 Representation in terms of "types" of AA Classify based on electronic properties: e⁻ donors: D, E, A, P; weak e⁻ donors: I, L, V; ambivalent: G, H, S, W; weak e⁻ acceptors: T, M, F, Q, Y; e⁻ acceptors: K, R, N; C (by itself, another group). Use chemical groups
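Translating a sequence into this reduced vocabulary is a simple table lookup (a sketch; the group names are invented labels for the six classes listed above):

```python
# Amino acid -> electronic-property group, per the six classes on the slide.
ELECTRONIC_GROUP = {}
for group, residues in {
    "donor": "DEAP",
    "weak_donor": "ILV",
    "ambivalent": "GHSW",
    "weak_acceptor": "TMFQY",
    "acceptor": "KRN",
    "cysteine": "C",
}.items():
    for aa in residues:
        ELECTRONIC_GROUP[aa] = group

def to_groups(sequence):
    """Rewrite an amino-acid sequence as a sequence of group symbols,
    shrinking the vocabulary from 20 amino acids to 6 types."""
    return [ELECTRONIC_GROUP[aa] for aa in sequence]
```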
35 Representation using Chemical Groups
36 Results of Classification with "AA" as words Leave-one-out testing of reference vectors; unseen query segments
37 Results with “chemical groups” as words Build VSM using both reference segments and test segments –Structure labels of reference segments are known –Structure labels of query segments are unknown
38 Modification to Word-Document matrix Latent Semantic Analysis Word document matrix is transformed – by “Singular Value Decomposition”
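The SVD step can be sketched with NumPy (a minimal illustration of the rank-r truncation used in LSA; `r`, the number of retained singular values, is a tuning choice not specified here):

```python
import numpy as np

def lsa_transform(W, r):
    """Rank-r approximation of the normalized word-document matrix W:
    keep the r largest singular values and the corresponding singular
    vectors, discarding the rest as noise."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]
```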
40 Results with “AA” as words, using LSA
41 Results with “types of AA” as words using LSA
42 Results with “chemical groups” as words using LSA
43 LSA results for Different Vocabularies Amino acids LSA Types of Amino acid LSA Chemical Groups LSA
44 Model construction using all data Matrix models constructed using both reference and query documents together. This gives better models, both for normalization and for construction of the latent semantic model. (Results shown for amino acids, amino acid types, and chemical groups)
45 Applications Complement other methods for protein structure prediction (segmentation approaches). Protein classification as all-alpha, all-beta, alpha+beta or alpha/beta types. Automatically assigning new proteins into SCOP families
46 References
1. Kabsch, W. and Sander, C., "Dictionary of protein secondary structure: pattern recognition of hydrogen-bonded and geometrical features", Biopolymers, 1983.
2. Dwyer, D.S., "Electronic properties of the amino acid side chains contribute to the structural preferences in protein folding", J Biomol Struct Dyn, (6).
3. Bellegarda, J., "Exploiting Latent Semantic Information in Statistical Language Modeling", Proceedings of the IEEE, Vol 88:8, 2000.
Thank you!
48 Use of SVD Representation of training and test segments is very similar to that in the VSM. Structure type assignment goes through the same process, except that it is done with the LSA matrices
49 Classification of Query Document A query document is also represented as a vector and normalized by corpus word counts. Documents related to the query are identified by measuring similarity of document vectors to the query document vector. The query document is assigned the same structure as the majority of the retrieved documents (majority voting; k-nearest-neighbour classification)
50 Notes… Results described are per-segment. The normalized word-document matrix does not preserve document lengths, hence "per residue" accuracies of structure assignments cannot be computed