Bioinformatics 3 V 4 – Weak Indicators and Communities Fri, Oct 26, 2012.



Bioinformatics 3 – WS 12/13, V4 – 2: Noisy Data — Clear Statements?
For yeast: ~6000 proteins → ~18 million potential interactions.
Rough estimates: only a small fraction of these actually occur → about 1 true positive per 200 potential candidates (0.5%) → a decisive experiment needs a false-positive rate well below 0.5%.
Different experiments detect different interactions. For yeast, of the known interactions, only 2400 were found in more than one experiment (TAP, HMS-PCI, Y2H; annotated: septin complex; see von Mering (2002)).
Y2H → many false positives (up to 50% errors)
Co-expression → gives indications at best
Combine weak indicators = ???

V4 – 3: Conditional Probabilities
Joint probability for "A and B": P(A ⋂ B) = P(A | B) · P(B) = P(B | A) · P(A)
Solve for the conditional probability for "A when B is true" → Bayes' Theorem:
P(A | B) = P(B | A) · P(A) / P(B)
P(A) = prior probability (marginal prob.) for "A" → no prior knowledge about A
P(B) = prior probability for "B" → normalizing constant
P(B | A) = conditional probability for "B given A"
P(A | B) = posterior probability for "A given B"
→ Use information about B to improve our knowledge about A

V4 – 4: What are the Odds?
Express Bayes' theorem in terms of odds. Consider also the case "A does not apply":
P(¬A | B) = P(B | ¬A) · P(¬A) / P(B)
Dividing the two posteriors gives the odds for A when we know about B:
O(A | B) = P(A | B) / P(¬A | B) = [P(B | A) / P(B | ¬A)] · [P(A) / P(¬A)] = Λ(A | B) · O(A)
posterior odds for A = likelihood ratio Λ(A | B) × prior odds for A
→ by how much does our knowledge about A improve?
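As a sanity check, the odds form can be verified numerically against the probability form of Bayes' theorem. The probability values below are arbitrary toy numbers, not taken from the lecture:

```python
# Odds form of Bayes' theorem: O(A|B) = Lambda(A|B) * O(A)
# with Lambda(A|B) = P(B|A) / P(B|not A). All numbers are toy values.

def posterior_odds(p_a, p_b_given_a, p_b_given_not_a):
    prior_odds = p_a / (1.0 - p_a)
    likelihood_ratio = p_b_given_a / p_b_given_not_a
    return likelihood_ratio * prior_odds

# Cross-check against the probability form P(A|B) = P(B|A) P(A) / P(B):
p_a, p_ba, p_bna = 0.01, 0.9, 0.1
p_b = p_ba * p_a + p_bna * (1 - p_a)        # law of total probability
p_a_given_b = p_ba * p_a / p_b
odds_direct = p_a_given_b / (1 - p_a_given_b)
print(abs(posterior_odds(p_a, p_ba, p_bna) - odds_direct) < 1e-12)  # True
```

Both routes give the same posterior odds (exactly 1/11 for these toy values), which is the point of the odds form: the prior odds are simply multiplied by the likelihood ratio.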

V4 – 5: Two Types of Bayesian Networks
Encode conditional dependencies between evidences: "A depends on B" with the conditional probability P(A | B).
Naive Bayesian network → the evidences are treated as independent, the odds simply multiply.
Fully connected Bayesian network → table of joint odds over all evidence combinations (B/¬B × C/¬C, …).
Evidence nodes can have a variety of types: numbers, categories, …

V4 – 6: Bayesian Analysis of Complexes — Jansen et al, Science 302 (2003) 449

V4 – 7: Improving the Odds
Is a given protein pair AB a complex (from all that we know)?
O(complex | f1, f2, …) = Λ(f1) · Λ(f2) · … · O_prior
O_prior: prior odds for a random pair AB to be a complex → estimate (somehow)
Λ(f1), Λ(f2), …: likelihood ratios — improvement of the odds when we know about features f1, f2, … → determine from known complexes and use for the prediction of new complexes
Features used by Jansen et al (2003): 4 experimental data sets of complexes, mRNA co-expression profiles, biological functions annotated to the proteins (GO, MIPS), essentiality for the cell.
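A minimal sketch of the naive combination of likelihood ratios. Only O_prior = 1/600 is taken from the lecture; the feature ratios for the candidate pair are made-up illustration values:

```python
# Naive Bayes combination of independent evidence:
#   O_post = O_prior * Lambda(f1) * Lambda(f2) * ...
from math import prod

def posterior_odds(prior_odds, likelihood_ratios):
    """Combine independent likelihood ratios with the prior odds."""
    return prior_odds * prod(likelihood_ratios)

O_PRIOR = 1.0 / 600.0   # Jansen et al: ~1 true complex per 600 random pairs

# Hypothetical feature ratios for one candidate pair
# (e.g. essentiality, co-expression, shared function):
ratios = [3.6, 5.0, 40.0]
o_post = posterior_odds(O_PRIOR, ratios)
print(o_post > 1.0)     # combined evidence overcomes the prior -> True
```

No single ratio here beats the 600:1 prior on its own, but their product (720) does, which is exactly the "combine weak indicators" idea of the lecture.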

V4 – 8: Gold Standard Sets
To determine the likelihood ratios, the training data must be:
i) independent of the data serving as evidence
ii) large enough for good statistics
iii) free of systematic bias
Gold Negative Set (GN): (non-)complexes formed by proteins from different cellular compartments
Gold Positive Set (GP): 8250 complexes from the hand-curated MIPS catalog of protein complexes (MIPS = Munich Information Center for Protein Sequences)
→ use the two data sets with known features f1, f2, … for training

V4 – 9: Prior Odds
Jansen et al: an estimated ≥ 30,000 existing complexes vs. 18 million possible pairs → P(Complex) ≈ 1/600
→ the odds are 600 : 1 against picking a complex at random → O_prior = 1/600
Note: O_prior is mostly an educated guess.
→ expect 50% good hits (TP > FP) once the combined likelihood ratio Λ ≈ 600.

V4 – 10: Essentiality
Test whether both proteins are essential (E) for the cell or not (N) → EE or NN should occur more often in true complexes.

Essentiality   pos   neg   P(Ess|pos)   P(Ess|neg)   L(Ess)
EE             …     …     5.18E-01     1.43E-01     3.6
NE             …     …     2.90E-01     4.98E-01     0.6
NN             …     …     1.92E-01     3.60E-01     0.5
sum            …     …     1.00         1.00

Columns: possible values of the feature; overlap of the gold-standard sets with the feature values; probabilities for each feature value; likelihood ratio L(Ess) = P(Ess|pos) / P(Ess|neg).
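The likelihood-ratio column follows directly from the two probability columns; a short sketch reproducing it (the probabilities are those read from the table):

```python
# Likelihood ratio for each essentiality pattern:
#   L(Ess) = P(Ess | pos) / P(Ess | neg)
# computed from the gold-standard overlap probabilities.
p_pos = {"EE": 0.518, "NE": 0.290, "NN": 0.192}
p_neg = {"EE": 0.143, "NE": 0.498, "NN": 0.360}

L = {pattern: p_pos[pattern] / p_neg[pattern] for pattern in p_pos}
for pattern, ratio in L.items():
    print(f"L({pattern}) = {ratio:.1f}")
# EE pairs are ~3.6x more likely among true complexes than among non-complexes.
```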

V4 – 11: mRNA Co-Expression
Publicly available expression data: the Rosetta compendium and the yeast cell cycle.
The two data sets are correlated → use their principal component.
Jansen et al, Science 302 (2003) 449

V4 – 12: Biological Function
Use the MIPS function catalog and Gene Ontology function annotations:
determine the functional class shared by the two proteins;
count how many of the 18 million potential pairs share this classification.
Jansen et al, Science 302 (2003) 449

V4 – 13: Experimental Data Sets
In vivo pull-down: Gavin et al, Nature 415 (2002) 141; Ho et al, Nature 415 (2002) 180.
HT-Y2H: Uetz et al, Nature 403 (2000) 623 — 981 pairs; Ito et al, PNAS 98 (2001) — 4393 pairs.
4 experiments → 2^4 = 16 categories — fully connected Bayes network.
Jansen et al, Science 302 (2003) 449

V4 – 14: Statistical Uncertainties
1) L(1111) < L(1001): the overlap of category 1111 with all experiments is smaller → larger statistical uncertainty.
2) L(1110) = NaN? The category has no overlap with GN → conservative lower bound: assume 1 overlap with GN → L(1110) ≥ 1970.
Jansen et al, Science 302 (2003) 449

V4 – 15: Overview — Jansen et al, Science 302 (2003) 449

V4 – 16: Performance of Complex Prediction
Re-classify the gold-standard complexes: ratio of true positives to false positives.
→ None of the evidences alone was enough.
Jansen et al, Science 302 (2003) 449

V4 – 17: Experiments vs. Predictions
Sensitivity: how many of the GP are recovered.
At a cutoff of TP/FP = 0.3:
predicted: 6179 pairs from GP → TP/P = 75%
measured: 1758 pairs from GP → TP/P = 21%
Limitations of experimental confirmation:
time — someone must have done the experiment;
selectivity — experiments are more sensitive to certain compartments, complex types, …
Jansen et al, Science 302 (2003) 449

V4 – 18: Coverage
The predicted set covers 27% of the GP.
Jansen et al, Science 302 (2003) 449

V4 – 19: Verification of Predicted Complexes
Compare the predicted complexes with available experimental evidence and with directed new TAP-tag experiments.
→ use directed experiments to verify new predictions (more efficient).
Jansen et al, Science 302 (2003) 449

V4 – 20: Consider Connectivity
Only take proteins with, e.g., ≥ 20 links.
→ This preserves the links inside a complex and filters out false-positive links to heterogeneous groups outside the complex.
Jansen et al, Science 302 (2003) 449

V4 – 21: Summary — Bayesian Analysis
The combination of weak features yields powerful predictions: the odds are boosted via Bayes' theorem, and gold-standard sets are used for training the likelihood ratios.
Bayes vs. other machine-learning techniques (voting, unions, SVM, neural networks, decision trees, …):
→ arbitrary types of data can be combined
→ data are weighted according to their reliability
→ conditional relations between evidences can be included
→ missing data are easily accommodated (e.g., zero overlap with GN)
→ transparent procedure → predictions easy to interpret

V4 – 22: Connected Regions
Observation: there are more interactions inside a complex than to the outside.
→ identify highly connected regions in a network?
1) Fully connected region: clique := G' = (V', E' = V'^(2)), i.e. every pair of vertices in V' is connected.
Problems with cliques:
finding cliques is NP-hard (though it "works" in O(N²) for the sparsely connected biological networks);
biological protein complexes are not always fully connected.

V4 – 23: Communities
Community := a subset of vertices whose internal connectivity is denser than its connectivity to the outside.
Aim: map the network onto a tree that reflects the community structure.
Radicchi et al, PNAS 101 (2004) 2658

V4 – 24: Hierarchical Clustering
1) Assign a weight W_ij to each pair of vertices i, j that measures how "closely related" these two vertices are.
2) Iteratively add edges between pairs of nodes in order of decreasing W_ij.
Measures for W_ij:
1) Number of vertex-independent paths between vertices i and j (vertex-independent paths share no vertex except i and j). Menger (1927): the number of vertex-independent paths equals the number of vertices that have to be removed to cut all paths between i and j → a measure of network robustness.
2) Number of edge-independent paths between i and j.
3) Total number of paths between i and j. This is either 0 or ∞ → weight each path of length L by α^L with α < 1.
Problem: vertices with a single link are separated from the communities.

V4 – 25: Vertex Betweenness
Freeman (1977): count on how many shortest paths a vertex is visited.
For a graph G = (V, E) with |V| = n, the betweenness of vertex v is
c_B(v) = Σ_{s ≠ v ≠ t} σ_st(v) / σ_st
where σ_st is the number of shortest paths between s and t, and σ_st(v) is the number of those paths that pass through v.
Alternatively: edge betweenness → to how many shortest paths does this edge belong?
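A brute-force sketch of this count (not the algorithm used in the lecture; production implementations use faster methods such as Brandes' algorithm):

```python
# Brute-force vertex betweenness: for every pair (s, t), find all shortest
# s-t paths and credit each interior vertex with its fraction of them.
from collections import defaultdict
from itertools import combinations

def all_shortest_paths(adj, s, t):
    """All shortest paths from s to t via breadth-first path expansion."""
    paths, frontier, found = [], [[s]], False
    while frontier and not found:
        nxt = []
        for path in frontier:
            if path[-1] == t:
                paths.append(path)
                found = True
            else:
                nxt.extend(path + [nb] for nb in adj[path[-1]] if nb not in path)
        if not found:
            frontier = nxt
    return paths

def betweenness(adj):
    score = defaultdict(float)
    for s, t in combinations(adj, 2):
        paths = all_shortest_paths(adj, s, t)
        for p in paths:
            for v in p[1:-1]:                 # interior vertices only
                score[v] += 1.0 / len(paths)
    return dict(score)

# Path graph a-b-c: every shortest path between a and c runs through b.
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
print(betweenness(adj).get("b"))   # 1.0
```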

V4 – 26: Girvan-Newman Algorithm
Girvan, Newman, PNAS 99 (2002) 7821. For a graph G = (V, E) with |V| = n, |E| = m:
1) Calculate the betweenness for all m edges (takes O(mn) time).
2) Remove the edge with the highest betweenness.
3) Recalculate the betweenness for all affected nodes.
4) Repeat from 2) until no edge is left (at most m iterations).
5) Build up the tree from V by reinserting vertices in reverse order.
Works well, but slow: O(mn²) ≈ O(n³) for scale-free networks (|E| = 2|V|) — shortest paths between n² pairs, recomputed for each of the m edge removals.
→ recalculating a global property is expensive for larger networks.
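Steps 1) and 2) of the loop can be sketched as follows. This toy version recomputes the edge betweenness from scratch each time, which is exactly the expensive part noted above:

```python
# Sketch of one Girvan-Newman step: remove the edge with the highest
# betweenness. Repeating this splits the graph into communities.
from itertools import combinations

def shortest_paths(adj, s, t):
    paths, frontier, found = [], [[s]], False
    while frontier and not found:
        nxt = []
        for path in frontier:
            if path[-1] == t:
                paths.append(path)
                found = True
            else:
                nxt.extend(path + [nb] for nb in adj[path[-1]] if nb not in path)
        if not found:
            frontier = nxt
    return paths

def edge_betweenness(adj):
    score = {}
    for s, t in combinations(adj, 2):
        paths = shortest_paths(adj, s, t)
        for p in paths:
            for e in zip(p, p[1:]):
                e = tuple(sorted(e))
                score[e] = score.get(e, 0.0) + 1.0 / len(paths)
    return score

def girvan_newman_step(adj):
    """Remove the single edge with the highest betweenness (in place)."""
    scores = edge_betweenness(adj)
    u, v = max(scores, key=scores.get)
    adj[u].remove(v)
    adj[v].remove(u)
    return (u, v)

# Two triangles joined by a bridge c-d: the bridge carries all 9 cross
# shortest paths, so it is removed first, splitting off two communities.
adj = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"],
       "d": ["c", "e", "f"], "e": ["d", "f"], "f": ["d", "e"]}
print(girvan_newman_step(adj))   # ('c', 'd')
```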

V4 – 27: Zachary's Karate Club
Observed friendship relations between 34 members over two years.
Correlate the factions at the club's break-up (administrator's faction vs. instructor's faction) with the communities calculated (i) with edge betweenness and (ii) with the number of edge-independent paths.
Girvan, Newman, PNAS 99 (2002) 7821

V4 – 28: Collaboration Network
The largest component of the Santa Fe Institute collaboration network, with the primary divisions detected by the GN algorithm indicated by different vertex shapes. Edge: two authors have co-authored a joint paper.
Girvan, Newman, PNAS 99 (2002) 7821

V4 – 29: Determining Communities Faster
Radicchi et al, PNAS 101 (2004) 2658: determine edge weights via the edge-clustering coefficient → a local measure → much faster, especially for large networks.
Modified edge-clustering coefficient:
C(3)_ij = (z_ij + 1) / min(k_i − 1, k_j − 1)
where z_ij is the number of triangles the edge (i, j) belongs to
→ the fraction of potential triangles that contain the edge between i and j.
Example: k_i = 5, k_j = 4, z_ij = 2 → C(3) = (2 + 1) / 3 = 1.
Note: the "+ 1" removes the degeneracy for z_ij = 0.
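The coefficient is cheap to compute because it only looks at the two endpoints and their neighbours. A sketch reproducing the slide's example (it assumes both endpoints have degree ≥ 2, so the denominator is nonzero; the toy graph is a hypothetical one built to match k_i = 5, k_j = 4, z_ij = 2):

```python
# Edge-clustering coefficient of Radicchi et al:
#   C3(i, j) = (z_ij + 1) / min(k_i - 1, k_j - 1)
# where z_ij is the number of triangles containing the edge (i, j).

def edge_clustering(adj, i, j):
    z = len(set(adj[i]) & set(adj[j]))            # common neighbours = triangles
    denom = min(len(adj[i]) - 1, len(adj[j]) - 1)  # assumes degrees >= 2
    return (z + 1) / denom

# Slide example: k_i = 5, k_j = 4, z_ij = 2 -> C3 = (2 + 1) / 3 = 1.0
adj = {
    "i": ["j", "x", "y", "p", "q"],   # degree 5
    "j": ["i", "x", "y", "r"],        # degree 4, shares neighbours x and y with i
    "x": ["i", "j"], "y": ["i", "j"],
    "p": ["i"], "q": ["i"], "r": ["j"],
}
print(edge_clustering(adj, "i", "j"))   # 1.0
```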

V4 – 30: Performance
Instead of triangles: cycles of higher order g → a continuous transition to a global measure.
The Radicchi et al algorithm runs in O(N²) for large networks.
Radicchi et al, PNAS 101 (2004) 2658

V4 – 31: Comparison of Algorithms
Girvan-Newman algorithm vs. Radicchi with g = 4 → very similar communities.
Data set: football teams from US colleges; different symbols = different conferences. Teams played ca. 7 intra-conference games and 4 inter-conference games in the 2000 season.

V4 – 32: Strong Communities
"Community := subgraph with more interactions inside than to the outside"
A subgraph V is a community in a…
…strong sense when every single node has more links inside than outside: k_in(i) > k_out(i) for all i ∈ V → check every node individually.
…weak sense when Σ_i k_in(i) > Σ_i k_out(i) → allows for borderline nodes.
Example 1: Σ k_in = 2, Σ k_out = 1; per node {k_in, k_out} = {1,1}, {1,0} → community in a weak sense only.
Example 2: Σ k_in = 10, Σ k_out = 2; per node {k_in, k_out} = {2,1}, {2,0}, {3,1}, {2,0}, {1,0} → community in a strong and a weak sense.
Radicchi et al, PNAS 101 (2004) 2658
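Both criteria can be checked mechanically. A sketch; the toy graph reproduces the first example's degree pattern {1,1}, {1,0}:

```python
# Strong and weak community criteria of Radicchi et al:
#   strong: k_in(i) > k_out(i) for every node i of the subgraph
#   weak:   sum_i k_in(i) > sum_i k_out(i)

def community_type(adj, members):
    members = set(members)
    k_in = {i: sum(1 for nb in adj[i] if nb in members) for i in members}
    k_out = {i: len(adj[i]) - k_in[i] for i in members}
    strong = all(k_in[i] > k_out[i] for i in members)
    weak = sum(k_in.values()) > sum(k_out.values())
    return strong, weak

# Example 1 from the slide: node a has {k_in, k_out} = {1,1}, node b has {1,0}
adj = {"a": ["b", "x"], "b": ["a"], "x": ["a"]}
print(community_type(adj, ["a", "b"]))   # (False, True): weak sense only
```

Node a ties its internal and external degree, so the strong criterion fails, but the sums (2 > 1) still satisfy the weak criterion, matching the slide.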

V4 – 33: Summary
What you learned today:
how to combine a set of noisy evidences into a powerful prediction tool → Bayesian analysis;
how to find communities in a network efficiently → betweenness, edge-clustering coefficient.
Next lecture: Mon, Oct 29, 2012 — modular decomposition, robustness.
Short Test #1: Fri, Nov 2.