Secondary Structure Prediction Lecture 7 Structural Bioinformatics Dr. Avraham Samson 81-871.

Secondary structure prediction from amino acid sequence. Avraham Samson, Faculty of Medicine, Bar-Ilan University, 2012.

Secondary Structure Prediction Given a protein sequence a1a2…aN, secondary structure prediction aims at defining the state of each amino acid ai as H (helix), E (extended = strand), or O (other). (Some methods use 4 states: H, E, T for turn, and O for other.) The quality of secondary structure prediction is measured with a "3-state accuracy" score, Q3: the percentage of residues whose predicted state matches "reality" (the X-ray structure).

Quality of Secondary Structure Prediction Determine secondary structure positions in known protein structures using DSSP or STRIDE: 1. Kabsch and Sander. Dictionary of protein secondary structure: pattern recognition of hydrogen-bonded and geometrical features. Biopolymers 22 (1983). (DSSP) 2. Frishman and Argos. Knowledge-based protein secondary structure assignment. Proteins 23 (1995). (STRIDE)

Limitations of Q3

Amino acid sequence:    ALHEASGPSVILFGSDVTVPPASNAEQAK
Actual structure:       hhhhhooooeeeeoooeeeooooohhhhh
Useful prediction:      ohhhooooeeeeoooooeeeooohhhhhh   Q3 = 22/29 = 76%
Terrible prediction:    hhhhhoooohhhhooohhhooooohhhhh   Q3 = 22/29 = 76%

Both predictions score the same Q3, yet only the first recovers the strand topology. Q3 for a random prediction is 33%. Secondary structure assignment in real proteins is itself uncertain by about 10%; therefore, a "perfect" prediction would have Q3 = 90%.
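The Q3 score is simple enough to compute directly; the following sketch (the helper name q3 is ours) reproduces the example above, showing that the useful and the terrible prediction score identically:

```python
def q3(predicted, actual):
    """3-state accuracy: fraction of residues whose predicted
    state (h/e/o) matches the reference assignment."""
    assert len(predicted) == len(actual)
    matches = sum(p == a for p, a in zip(predicted, actual))
    return matches / len(actual)

actual   = "hhhhhooooeeeeoooeeeooooohhhhh"   # from the X-ray structure
useful   = "ohhhooooeeeeoooooeeeooohhhhhh"   # useful prediction
terrible = "hhhhhoooohhhhooohhhooooohhhhh"   # terrible prediction
print(round(100 * q3(useful, actual)), round(100 * q3(terrible, actual)))
```

Both calls print 76, which is exactly why Q3 alone cannot distinguish a useful prediction from a useless one.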

Early methods for Secondary Structure Prediction Chou and Fasman (Chou and Fasman. Prediction of protein conformation. Biochemistry 13, 1974). GOR (Garnier, Osguthorpe and Robson. Analysis of the accuracy and implications of simple methods for predicting the secondary structure of globular proteins. J. Mol. Biol. 120, 1978).

Chou and Fasman Start by computing the propensity of each amino acid to belong to a given type of secondary structure. Propensities > 1 mean that residue type i is likely to be found in the corresponding secondary structure type.

Chou and Fasman propensity table (numeric values lost in transcription; the groupings survive):
- Favors α-helix: Ala, Cys, Leu, Met, Glu, Gln, His, Lys
- Favors β-strand: Val, Ile, Phe, Tyr, Trp, Thr
- Favors turn: Gly, Ser, Asp, Asn, Pro, Arg

Chou and Fasman
Predicting helices:
- find a nucleation site: 4 out of 6 contiguous residues with P(α) > 1
- extension: extend the helix in both directions until a set of 4 contiguous residues has an average P(α) < 1 (breaker)
- if the average P(α) over the whole region is > 1, it is predicted to be helical
Predicting strands:
- find a nucleation site: 3 out of 5 contiguous residues with P(β) > 1
- extension: extend the strand in both directions until a set of 4 contiguous residues has an average P(β) < 1 (breaker)
- if the average P(β) over the whole region is > 1, it is predicted to be a strand
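The helix nucleation rule can be sketched in a few lines; the function name and the propensity values below are illustrative, not the published Chou-Fasman table:

```python
def find_helix_nucleation(p_alpha, win=6, need=4):
    """Return indices i where at least `need` of the `win` residues
    starting at i have P(alpha) > 1 (candidate nucleation sites)."""
    sites = []
    for i in range(len(p_alpha) - win + 1):
        if sum(1 for p in p_alpha[i:i + win] if p > 1.0) >= need:
            sites.append(i)
    return sites

# Toy P(alpha) values for a 10-residue sequence (illustration only):
p_alpha = [1.4, 1.5, 0.6, 1.2, 1.1, 1.3, 0.5, 0.6, 0.7, 0.8]
print(find_helix_nucleation(p_alpha))
```

Strand nucleation works the same way with win=5 and need=3 over the P(β) values; extension and the final average test are applied afterwards.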

Chou and Fasman Position-specific parameters for turns: each of the four turn positions has distinct amino acid preferences, captured by the frequencies f(i), f(i+1), f(i+2), f(i+3). Examples: at position 2, Pro is highly preferred and Trp is disfavored; at position 3, Asp, Asn and Gly are preferred; at position 4, Trp, Gly and Cys are preferred.

Chou and Fasman Predicting turns: for each tetrapeptide starting at residue i, compute P_Turn (the average turn propensity over all 4 residues) and F = f(i)·f(i+1)·f(i+2)·f(i+3). If P_Turn > P_α, P_Turn > P_β, P_Turn > 1, and F exceeds a cutoff, the tetrapeptide is predicted to be a turn.

The GOR method Position-dependent propensities for helix, sheet and turn are calculated for each amino acid. For each position j in the sequence, the eight residues on either side are also considered. A helix propensity table contains the propensities of each residue type at the 17 window positions when the conformation of residue j is helical, so the table has 20 × 17 entries. Similar tables are built for strands and turns. GOR simplification: the predicted state of residue j is calculated as the sum of the position-dependent propensities of all residues in the window around j. GOR is available online (the current version is GOR IV).
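The GOR window sum can be sketched as follows, assuming a hypothetical propensity lookup table[aa][offset]; the real GOR tables are derived from a database of known structures, so the flat values here are purely illustrative:

```python
def gor_score(seq, j, table, half=8):
    """GOR-style score for residue j: sum the position-dependent
    propensities of all residues in the window [j-half, j+half].
    `table[aa][offset]` is a hypothetical propensity lookup."""
    score = 0.0
    for off in range(-half, half + 1):
        k = j + off
        if 0 <= k < len(seq):            # the window may overhang chain ends
            score += table[seq[k]][off]
    return score

# Toy table: flat 0.1 propensity for every residue/offset (illustration only).
table = {aa: {off: 0.1 for off in range(-8, 9)}
         for aa in "ACDEFGHIKLMNPQRSTVWY"}
print(round(gor_score("ACDEFGHIKLMNPQRSTVWY", 10, table), 1))
```

In the full method this sum is computed from the helix, strand and turn tables separately, and the state with the highest score is predicted.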

Accuracy Both Chou-Fasman and GOR have been assessed; their accuracy is estimated to be Q3 = 60-65%. (Initially, higher scores were reported, but the experiments set up to measure Q3 were flawed: the test cases included proteins that had been used to derive the propensities!)

Neural networks The most successful methods for predicting secondary structure are based on neural networks. The overall idea is that neural networks can be trained to recognize amino acid patterns in known secondary structure units, and to use these patterns to distinguish between the different types of secondary structure. Neural networks classify “input vectors” or “examples” into categories (2 or more). They are loosely based on biological neurons.

The perceptron Inputs x1, x2, …, xN with weights w1, w2, …, wN feed a threshold unit T that produces the output. The perceptron classifies the input vector X into two categories. If the weights and threshold T are not known in advance, the perceptron must be trained. Ideally, the perceptron should return the correct answer on all training examples and perform well on examples it has never seen. The training set must contain both types of data (i.e. with "1" and "0" output).
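A perceptron's forward pass is just a thresholded weighted sum; here is a minimal sketch, where the weights and threshold implementing an AND gate are chosen by hand for illustration:

```python
def perceptron(x, w, t):
    """Threshold unit: output 1 if the weighted sum exceeds T, else 0."""
    s = sum(xi * wi for xi, wi in zip(x, w))
    return 1 if s > t else 0

# Hand-picked weights/threshold implementing a 2-input AND gate:
w, t = [1.0, 1.0], 1.5
print([perceptron(x, w, t) for x in ([0, 0], [0, 1], [1, 0], [1, 1])])
```

This prints [0, 0, 0, 1]: only the input (1, 1) pushes the weighted sum above the threshold.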

Applications of Artificial Neural Networks:
- speech recognition
- medical diagnosis
- image compression
- financial prediction

Existing Neural Network Systems for Secondary Structure Prediction The first systems were about 62% accurate. Newer ones are about 70% accurate when they take advantage of information from multiple sequence alignments. Examples: PHD, NNPREDICT.

Applications in Bioinformatics:
- translational initiation sites and promoter sites in E. coli
- splice junctions
- specific structural features in proteins, such as α-helical transmembrane domains

Neural Networks Applied to Secondary Structure Prediction Create a neural network (a computer program). "Train" it using proteins with known secondary structure. Then give it new proteins of unknown structure and let it predict their secondary structure. Check whether the prediction for a series of residues makes sense from a biological point of view (e.g., an α-helix requires at least 4 amino acids in a row).

Example Neural Network (figure from Bioinformatics by David W. Mount, p. 453): a training pattern consisting of n inputs, each with 21 bits.

Inputs to the Network Both the residues and the target classes are encoded in unary format: each residue is a 21-bit vector with a single 1 in the position for that amino acid, and each target class (e.g. helix) is likewise a vector with a single bit on. (The 21st residue bit is used when the window overlaps the end of the chain.) Each pattern presented to the network therefore requires n 21-bit inputs for a window of size n. The advantage of this sparse encoding scheme is that it imposes no artificial ordering or similarity on the amino acids. The main disadvantage is that it requires many input units.

Weights Input values at each layer are multiplied by weights. Weights are initially random. Weights are adjusted after the output is computed based on how close the output is to the “right” answer. When the full training session is completed, the weights have settled on certain values. These weights are then used to compute output for new problems that weren’t part of the training set.

Neural Network Training Set A typical training set contains over 100 non-homologous protein chains comprising more than 15,000 training patterns. The number of training patterns equals the total number of residues in the proteins: for example, 100 proteins with 150 residues each yield 15,000 training patterns.

Neural Network Architecture A typical architecture has a window size of 17 and 5 hidden-layer nodes: a fully-connected 17(21)-5-3 network, i.e. a net with an input window of 17 residues (21 bits each), five hidden nodes in a single hidden layer, and three outputs. Such a network has 357 input nodes and 1,808 weights, a count consistent with one bias per hidden and output node: (17 × 21) × 5 + 5 + (5 × 3) + 3 = 1,808.* *This information is adapted from "Protein Secondary Structure Prediction with Neural Networks: A Tutorial" by Adrian Shepherd (UCL).
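The weight count is easy to verify; this sketch assumes the 1,808 figure includes one bias term per hidden and output node:

```python
# Parameter count for a fully-connected 17(21)-5-3 network,
# assuming one bias per hidden node and per output node.
window, bits, hidden, outputs = 17, 21, 5, 3
inputs = window * bits                                   # 357 input nodes
n_weights = inputs * hidden + hidden + hidden * outputs + outputs
print(inputs, n_weights)
```

This prints "357 1808": 1,785 input-to-hidden weights, 5 hidden biases, 15 hidden-to-output weights, and 3 output biases.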

Window The n-residue window is moved across the protein, one residue at a time. Each time the window is moved, the center residue becomes the focus. The neural network “learns” what secondary structure that residue is a part of. It keeps adjusting weights until it gets the right answer within a certain tolerance. Then the window is moved to the right.

Predictions Based on Output Predictions are made on a winner-takes-all basis. That is, the prediction is determined by the strongest of the three outputs. For example, the output (0.3, 0.1, 0.1) is interpreted as a helix prediction.

Disadvantages to Neural Networks They are black boxes. They cannot explain why a given pattern has been classified as x rather than y. Unless we associate other methods with them, they don’t tell us anything about underlying principles.

Summary Perceptrons (single-layer neural networks) can be used to predict protein secondary structure, but feed-forward multi-layer networks are used more often. Two frequently used web servers for neural-network-based secondary structure prediction are PHD and NNPREDICT.

The perceptron Notes: the input is a vector X, and the weights can be stored in another vector W. The perceptron computes the dot product S = X·W. The output F is a function of S: it is often discrete (i.e. 1 or 0), in which case F is the step function. For continuous output, a sigmoid is often used: F(S) = 1/(1 + e^(-S)). Not every problem can be solved by a perceptron (famous example: XOR, which is not linearly separable).

The perceptron Training a perceptron: find the weights W that minimize the error function
E(W) = sum over i = 1..P of (F(W·X_i) - t(X_i))^2
where P is the number of training data, X_i are the training vectors, F(W·X_i) is the output of the perceptron, and t(X_i) is the target value for X_i.
Use steepest descent:
- compute the gradient ∇E(W)
- update the weight vector: W ← W - e·∇E(W)
- iterate (e: learning rate)
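The steepest-descent procedure can be sketched for a sigmoid output unit; this toy trainer and the OR-gate data are illustrative (XOR, being non-separable, would not converge to a correct classifier):

```python
import math

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def train(data, n_epochs=10000, rate=0.5):
    """Batch steepest descent on E(W) = sum_i (F(W.Xi) - t(Xi))^2
    with a sigmoid unit; the last weight acts as a bias (a trailing
    1 is appended to each input). A toy sketch, not a real trainer."""
    dim = len(data[0][0]) + 1
    w = [0.0] * dim
    for _ in range(n_epochs):
        grad = [0.0] * dim
        for x, t in data:
            xb = list(x) + [1.0]                       # bias input
            f = sigmoid(sum(wi * xi for wi, xi in zip(w, xb)))
            g = 2.0 * (f - t) * f * (1.0 - f)          # dE/dS for this example
            for k in range(dim):
                grad[k] += g * xb[k]
        w = [wi - rate * gi for wi, gi in zip(w, grad)]
    return w

# Learn the OR function (linearly separable, unlike XOR):
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w = train(data)
preds = [round(sigmoid(sum(wi * xi for wi, xi in zip(w, x + [1.0]))))
         for x, _ in data]
print(preds)
```

After training, the rounded outputs reproduce the OR truth table; swapping in the XOR targets would leave at least one example misclassified no matter how long the loop runs.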

Neural Network A complete neural network is a set of perceptrons interconnected such that the outputs of some units become the inputs of other units. Many topologies are possible! Neural networks are trained just like perceptrons, by minimizing an error function over the training set.

Neural networks and Secondary Structure prediction Experience from Chou-Fasman and GOR has shown that:
- in predicting the conformation of a residue, it is important to consider a window around it
- helices and strands occur in stretches
- it is important to consider multiple sequences

PHD: Secondary structure prediction using NN

PHD: Input For each residue, consider a window of size 13: 13 × 20 = 260 values (one 20-value profile column per window position).

PHD: Network 1 (sequence-to-structure): 13 × 20 values in, 3 values out: Pα(i), Pβ(i), Pc(i).

PHD: Network 2 (structure-to-structure): for each residue, consider a window of size 17 over the first network's outputs: 17 × 3 = 51 values in, 3 values out: Pα(i), Pβ(i), Pc(i).

PHD Sequence-to-structure network: for each amino acid aj, a window of 13 residues aj-6…aj…aj+6 is considered. The corresponding rows of the sequence profile are fed into the neural network, and the output is 3 probabilities for aj: P(aj, alpha), P(aj, beta) and P(aj, other). Structure-to-structure network: for each aj, PHD now considers a window of 17 residues; the probabilities P(ak, alpha), P(ak, beta) and P(ak, other) for k in [j-8, j+8] are fed into the second network, which again produces probabilities that residue aj is in each of the 3 possible conformations. Jury system: PHD trains several neural networks with different training sets; all networks are applied to the test sequence, and the results are averaged. Prediction: for each position, the secondary structure with the highest average score is output as the prediction.
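The two-level pipeline can be sketched with stub functions standing in for the trained networks; apart from the window sizes (13 and 17), everything below is an illustrative assumption, not PHD's actual networks or profile inputs:

```python
def window(values, j, half, pad):
    """Values at positions j-half..j+half, padded beyond chain ends."""
    n = len(values)
    return [values[k] if 0 <= k < n else pad
            for k in range(j - half, j + half + 1)]

def net1(residue_window):
    """Stub sequence-to-structure net: 13 residues in, 3 probs out.
    The toy rule (count alanines) stands in for a trained network."""
    frac_h = residue_window.count("A") / len(residue_window)
    return (frac_h, (1 - frac_h) / 2, (1 - frac_h) / 2)

def net2(prob_window):
    """Stub structure-to-structure net: average the 17 input triples."""
    n = len(prob_window)
    return tuple(sum(p[s] for p in prob_window) / n for s in range(3))

def predict(seq):
    level1 = [net1(window(list(seq), j, 6, "X")) for j in range(len(seq))]
    level2 = [net2(window(level1, j, 8, (0, 0, 1))) for j in range(len(seq))]
    return ["HEC"[max(range(3), key=lambda s: p[s])] for p in level2]

print("".join(predict("AAAAAAAAGGGGGGGG")))
```

The structure of the computation (a 13-residue window into the first net, a 17-triple window of its outputs into the second) mirrors PHD; real PHD replaces the stubs with trained networks fed by multiple-sequence-alignment profiles and averages over a jury of such networks.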

PSIPRED Uses a PSI-BLAST position-specific scoring matrix as input; the profile scores are converted to the [0-1] range, and one extra value per row indicates whether the window spans the N- or C-terminus. Jones. Protein secondary structure prediction based on position-specific scoring matrices. J. Mol. Biol. 292 (1999).

Performances (monitored at CASP; target counts lost in transcription)
CASP1  1994  Rost and Sander
CASP2  1996  Rost
CASP3  1998  Jones
CASP4  2000  Jones

Secondary Structure Prediction
Available servers (URLs omitted): JPRED, PHD, PSIPRED, NNPREDICT, Chou and Fasman.
Interesting paper: Rost and Eyrich. EVA: large-scale analysis of secondary structure prediction. Proteins Suppl. 5 (2001).

Next topics: solvent accessibility prediction; membrane region prediction.