1
Computational methods for inferring cellular networks Stat 877, Apr 15th 2014 Sushmita Roy
2
Goals for today
Introduction
– Different types of cellular networks
Methods for network reconstruction from expression
– Per-gene vs. per-module methods
– Sparse Candidate Bayesian networks
– Regression-based methods: GENIE3, L1-DAG learn
Assessing confidence in network structure
3
Why networks? “A system is an entity that maintains its function through the interaction of its parts” – Kohl & Noble
4
To understand cells as systems: measure, model, predict, refine. (Uwe Sauer, Matthias Heinemann, Nicola Zamboni, Science 2007)
5
Different types of networks
Physical networks
– Transcriptional regulatory networks: interactions between regulatory proteins (transcription factors) and genes
– Protein-protein: interactions among proteins
– Signaling networks: protein-protein and protein-small molecule interactions that relay signals from outside the cell to the nucleus
Functional networks
– Metabolic: reactions through which enzymes convert substrates to products
– Genetic: interactions among genes that, when perturbed together, produce a more significant phenotype than when perturbed individually
6
Transcriptional regulatory networks
[Figure: regulatory network of E. coli, 153 TFs, 1319 targets; Vargas and Santillan, 2008]
Directed, signed, weighted graph
Nodes: TFs and target genes
Edges: A regulates B's expression level
7
Metabolic networks
[Figure: reactions associated with galactose metabolism; source: KEGG]
Unweighted graph
Nodes: metabolic enzymes
Edges: enzymes M and N share a compound
8
Protein-protein interaction networks
[Figure: yeast protein interaction network; Barabasi et al. 2003]
Un/weighted graph
Nodes: proteins
Edges: protein X physically interacts with protein Y
9
Challenges in network biology
1. Network structure analysis: hubs, degree distributions, network motifs, node attributes
2. Network reconstruction/inference (today): identifying edges and their logic, i.e., the structure and the parameters, e.g., X=f(A,B), Y=g(B)
3. Network applications: predicting the function and activity of genes from the network
10
Goals for today
Introduction
– Different types of cellular networks
Methods for network reconstruction from expression
– Per-gene vs. per-module methods
– Sparse Candidate Bayesian networks
– Regression-based methods: GENIE3, L1-DAG learn
Assessing confidence in network structure
11
Computational methods to infer networks
We will focus on transcriptional regulatory networks
– These networks control which genes get activated when
– Precise gene activation or inactivation is crucial for many biological processes
– Microarrays and RNA-seq allow us to systematically measure gene activity levels
These networks are primarily inferred from gene expression data
12
What do we want a model to capture?
Structure: who are the regulators? (e.g., Hot1 and Sko1 regulate HSP12; HSP12 is a target of Hot1)
Function: how do the regulators determine expression levels? X3 = ψ(X1, X2), where ψ may be Boolean, linear, differential-equation-based, probabilistic, ...
Input: transcription factor levels (trans); output: mRNA expression levels
13
Mathematical representations of regulatory networks
Input: expression/activity of regulators X1, X2; output: expression of the target gene, X3 = f(X1, X2). Models differ in the function f that maps regulator input levels to target levels:
– Boolean networks: truth tables
– Differential equations: rate equations
– Probabilistic graphical models: probability distributions

Example Boolean truth table:
X1 X2 | X3
 0  0 |  0
 0  1 |  1
 1  0 |  1
 1  1 |  1
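As a concrete illustration, the truth table above is exactly the Boolean function X3 = X1 OR X2. A minimal Python sketch (the function name is ours, not from the slides):

```python
# Boolean-network view of the truth table above: X3 = X1 OR X2
# reproduces the rows 00->0, 01->1, 10->1, 11->1.

def boolean_update(x1: bool, x2: bool) -> bool:
    """Boolean regulatory function f mapping regulator states to the target state."""
    return x1 or x2

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", int(boolean_update(bool(x1), bool(x2))))
```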
14
Regulatory network inference from expression
Input: an expression matrix (rows: genes, columns: experiments), where entry (i, j) is the expression level of gene i in experiment j
Output: the structure (which regulators connect to which targets) and the function, e.g., X3 = f(X1, X2)
15
Two classes of expression-based methods
– Per-gene/direct methods (today): find regulators for each gene individually
– Module-based methods (Thursday): find regulators for groups of genes (modules)
16
Per-gene methods
Key idea: find the regulators that "best explain" the expression level of a gene
Probabilistic graphical methods
– Bayesian networks: Sparse Candidate
– Dependency networks: GENIE3, TIGRESS
Information-theoretic methods
– Context Likelihood of Relatedness (CLR)
– ARACNE
17
Module-based methods
Find regulators for an entire module
– Assume genes in the same module have the same regulators
Module Networks (Segal et al. 2005)
Stochastic LeMoNe (Joshi et al. 2008)
18
Goals for today
Introduction
– Different types of cellular networks
Methods for network reconstruction from expression
– Per-gene vs. per-module methods
– Sparse Candidate Bayesian networks
– Regression-based methods: GENIE3, L1-DAG learn
Assessing confidence in network structure
19
Notation
V: a set of p network components (p genes)
E: edge set connecting V
G = (V, E): the graph we wish to infer
Xv: random variable for v ∈ V; X = {X1, ..., Xp}
D: dataset of N measurements of X, D = {x^1, ..., x^N}
Θ: set of parameters associated with the network
Spang and Markowetz, BMC Bioinformatics 2005
20
Bayesian networks (BN)
Denoted by B = {G, Θ}
– G: directed acyclic graph (DAG); its vertices correspond to the random variables X1, ..., Xp, and its edges encode directed influences among them
– Pa(Xi): parents of Xi
– Θ = {θ1, ..., θp}: parameters of the p conditional probability distributions (CPDs) P(Xi | Pa(Xi))
21
A simple Bayesian network of four variables
Random variables: Cloudy ∈ {T, F}, Sprinkler ∈ {T, F}, Rain ∈ {T, F}, WetGrass ∈ {T, F}
Adapted from "Introduction to graphical models", Kevin Murphy, 2001
22
A simple Bayesian network of four variables, with conditional probability distributions (CPDs)
Random variables: Cloudy ∈ {T, F}, Sprinkler ∈ {T, F}, Rain ∈ {T, F}, WetGrass ∈ {T, F}
Adapted from "Introduction to graphical models", Kevin Murphy, 2001
23
Bayesian network representation of a regulatory network
Inside the cell: regulators Hot1 and Sko1 control the target HSP12
Bayesian network: regulators (parents) X1 (Hot1) and X2 (Sko1) with distributions P(X1) and P(X2); target (child) X3 (HSP12) with CPD P(X3 | X1, X2)
24
Bayesian networks compactly represent joint distributions:
P(X1, ..., Xp) = ∏_i P(Xi | Pa(Xi))
25
Example Bayesian network of 5 variables
Assume each Xi is binary.
– With no independence assertions, representing the full joint distribution needs on the order of 2^5 parameters.
– With the network's independence assertions (each child has few parents), the largest CPD needs only on the order of 2^3 parameters.
26
CPDs in Bayesian networks
The CPD P(Xi | Pa(Xi)) specifies a distribution over values of Xi for each combination of values of Pa(Xi), and can be parameterized in different ways:
– Discrete Xi: conditional probability table or tree
– Continuous Xi: Gaussians or regression trees
27
Representing CPDs as tables
Consider four binary variables X1, X2, X3, X4, with Pa(X4) = {X1, X2, X3}.
P(X4 | X1, X2, X3) as a table:

X1 X2 X3 | P(X4=t) P(X4=f)
 t  t  t |   0.9     0.1
 t  t  f |   0.9     0.1
 t  f  t |   0.9     0.1
 t  f  f |   0.9     0.1
 f  t  t |   0.8     0.2
 f  t  f |   0.5     0.5
 f  f  t |    ·       ·
 f  f  f |    ·       ·
28
Estimating a CPD table from data
Assume we observe the following N=7 joint assignments of (X1, X2, X3, X4):

X1 X2 X3 X4
 T  F  T  T
 T  T  F  T
 T  T  F  T
 T  F  T  T
 T  F  T  F
 T  F  T  F
 F  F  T  F

For each joint assignment of X1, X2, X3, estimate the probabilities of each value of X4. For example, for X1=T, X2=F, X3=T:
P(X4=T | X1=T, X2=F, X3=T) = 2/4
P(X4=F | X1=T, X2=F, X3=T) = 2/4
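A small Python sketch of this maximum-likelihood counting procedure, using the seven samples above (the helper names are ours, for illustration):

```python
from collections import Counter

# The seven joint assignments (X1, X2, X3, X4) from the slide.
samples = [
    ("T", "F", "T", "T"),
    ("T", "T", "F", "T"),
    ("T", "T", "F", "T"),
    ("T", "F", "T", "T"),
    ("T", "F", "T", "F"),
    ("T", "F", "T", "F"),
    ("F", "F", "T", "F"),
]

# Count parent assignments and (parents, X4) co-occurrences.
parent_counts = Counter(s[:3] for s in samples)
joint_counts = Counter(samples)

def cpt_entry(x1, x2, x3, x4):
    """Maximum-likelihood estimate of P(X4=x4 | X1=x1, X2=x2, X3=x3)."""
    return joint_counts[(x1, x2, x3, x4)] / parent_counts[(x1, x2, x3)]

print(cpt_entry("T", "F", "T", "T"))  # 2/4 = 0.5, matching the slide
print(cpt_entry("T", "F", "T", "F"))  # 2/4 = 0.5
```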
29
A tree representation of a CPD
P(X4 | X1, X2, X3) as a tree: the root splits on X1 (one branch gives P(X4=t) = 0.9 directly); the other branch splits on X2, and one of its branches splits further on X3, with leaves giving P(X4=t) = 0.5 and 0.8.
Trees allow a more compact representation of CPDs by ignoring some unlikely relationships.
30
The learning problems in Bayesian networks
– Parameter learning on a known graph structure: given data D and G, learn Θ
– Structure learning: given data D, learn both G and Θ
31
Structure learning using score-based search
Find the network B that maximizes Score(B; D), a function of how well B describes the data D:
B_best = argmax_B Score(B; D)
32
Scores for Bayesian networks
– Maximum likelihood
– Regularized maximum likelihood
– Bayesian score
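The equations on this slide were images; below is a sketch of the standard forms of these scores, where $\hat{\theta}$ denotes the maximum-likelihood parameters (the lecture's exact penalty term is not shown on the slide):

```latex
\begin{align*}
\text{Maximum likelihood: } & \mathrm{Score}_{ML}(B; D) = \log P(D \mid G, \hat{\theta}) \\
\text{Regularized ML: } & \mathrm{Score}_{Reg}(B; D) = \log P(D \mid G, \hat{\theta}) - \mathrm{penalty}(G) \\
\text{Bayesian: } & \mathrm{Score}_{Bayes}(B; D) = \log P(D \mid G) + \log P(G)
\end{align*}
```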
33
Decomposability of scores
The score of a Bayesian network B decomposes over individual variables:
Score(B; D) = Σ_i Score(Xi, Pa(Xi); D)
where each term depends only on Xi and the joint assignments of Pa(Xi) in each sample d. This enables efficient computation of the score change due to local changes.
34
The search space of graphs is huge
For n variables there are 2^(n(n-1)/2) possible graphs: the set of possible networks grows super-exponentially.

n | Number of networks
3 | 8
4 | 64
5 | 1024
6 | 32768

We need approximate methods to search the space of networks.
35
Greedy hill climbing to search Bayesian network space
Input: D = {x^1, ..., x^N}, an initial network B0 = {G0, Θ0}
Output: B_best
Loop until convergence:
– {B_i^1, ..., B_i^m} = Neighbors(B_i), generated by making local changes to B_i
– B_{i+1} = argmax_j Score(B_i^j)
Termination: B_best = B_i
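A minimal Python sketch of this loop; `score` and `neighbors` (single-edge changes that keep the graph acyclic) are assumed callables for illustration, not part of the slides:

```python
# Greedy hill climbing over network structures: repeatedly move to the
# best-scoring neighbor and stop at a local optimum.

def greedy_hill_climb(data, initial_graph, score, neighbors, max_iters=1000):
    current = initial_graph
    current_score = score(current, data)
    for _ in range(max_iters):
        scored = [(score(g, data), g) for g in neighbors(current)]
        if not scored:
            break
        best_score, best = max(scored, key=lambda t: t[0])
        if best_score <= current_score:  # no neighbor improves: local optimum
            break
        current, current_score = best, best_score
    return current
```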
36
Local changes to B_i
From the current network B_i: add an edge or delete an edge, checking that the result contains no cycles.
37
Challenges in applying Bayesian networks to genome-scale data
– The number of variables p is in the thousands
– The number of samples N is in the hundreds
38
Extensions to Bayesian networks to handle genome-scale networks
– Sparse Candidate algorithm (Friedman, Nachman, Pe'er, 1999)
– Bootstrap to identify high-scoring graph features (Friedman, Linial, Nachman, Pe'er, 2000)
– Module networks (subsequent lecture; Segal, Pe'er, Regev, Koller, Friedman, 2005)
– Adding graph priors (subsequent lecture, hopefully)
39
The Sparse Candidate algorithm for structure learning in Bayesian networks
Key idea: identify k "promising" candidate parents for each Xi, with k << p (p: number of random variables)
– The candidates define a "skeleton graph" H
– Restrict the graph structure to select parents from H
– Early choices in H might exclude other good parents; resolve this with an iterative algorithm
40
Sparse Candidate algorithm
Input:
– a dataset D
– an initial Bayes net B0
– a parameter k: the maximum number of candidate parents per variable
Output: final network B
Loop until convergence:
– Restrict: based on D and B^{n-1}, select candidate parents C_i^{n-1} for each Xi; this defines a skeleton directed network H^n
– Maximize: find the network B^n that maximizes Score(B^n; D) among networks whose parent sets satisfy Pa(Xi) ⊆ C_i^{n-1}
Termination: return B^n
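A minimal Python sketch of the Restrict/Maximize loop under stated assumptions: `select_candidates` (e.g., top-k dependence scores given the current network) and `constrained_search` (score-based search restricted to the skeleton) are hypothetical callables:

```python
# Sparse Candidate loop: alternate candidate selection and constrained search.

def sparse_candidate(data, initial_net, k, select_candidates, constrained_search,
                     max_rounds=10):
    p = data.shape[1]                 # number of variables (genes)
    net, prev_score = initial_net, float("-inf")
    for _ in range(max_rounds):
        # Restrict: candidate parents per variable, given the current network.
        candidates = {i: select_candidates(i, net, data, k) for i in range(p)}
        # Maximize: best network with Pa(X_i) restricted to candidates[i].
        net, score = constrained_search(data, candidates)
        if score <= prev_score:       # converged: no score improvement
            break
        prev_score = score
    return net
```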
41
Selecting candidate parents in the Restrict step
– A good parent for Xi has a strong statistical dependence with Xi; mutual information I(Xi; Xj) provides a good measure of statistical dependence
– Mutual information should be used only as a first approximation; candidate parents need to be iteratively refined to avoid missing important dependences
– A good parent for Xi gives the highest score improvement when added to Pa(Xi)
42
Mutual information
A measure of statistical dependence between two random variables Xi and Xj:
I(Xi; Xj) = Σ_{x_i, x_j} P(x_i, x_j) log [ P(x_i, x_j) / ( P(x_i) P(x_j) ) ]
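A Python sketch computing I(X;Y) from a joint probability table (assumed already estimated from data, e.g., by counting):

```python
import numpy as np

def mutual_information(joint):
    """I(X;Y) in nats from a joint probability table joint[x, y]."""
    joint = np.asarray(joint, dtype=float)
    px = joint.sum(axis=1, keepdims=True)   # marginal P(X)
    py = joint.sum(axis=0, keepdims=True)   # marginal P(Y)
    nz = joint > 0                          # skip zero-probability terms
    return float((joint[nz] * np.log(joint[nz] / (px @ py)[nz])).sum())

# Example: perfectly dependent binary variables -> I = log 2
print(mutual_information([[0.5, 0.0], [0.0, 0.5]]))  # ~0.693
```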
43
Mutual information can miss some parents
Consider a true network in which B is a parent of A. If I(A;C) > I(A;D) > I(A;B) and we select k <= 2 candidate parents, B will never be selected as a parent: using mutual information alone to select candidates, we might be stuck with C and D. How do we get B as a candidate parent? This is what the iterative refinement of candidates addresses.
44
Computational savings in Sparse Candidate
Ordinary hill climbing:
– O(2^n) possible parent sets
– O(n^2) initial score-change calculations
– O(n) for subsequent iterations
Learning constrained to a skeleton directed graph:
– O(2^k) possible parent sets
– O(nk) initial score-change calculations
– O(k) for subsequent iterations
45
Sparse Candidate learns good networks faster than hill climbing
[Figure: score (higher is better) versus runtime on two datasets, with 100 and 200 variables]
Greedy hill climbing takes much longer to reach a high-scoring Bayesian network.
46
Some comments about choosing candidates
– How do we select k in the Sparse Candidate algorithm? Should k be the same for all Xi?
– If the data are Gaussian, could we do something better? Regularized regression approaches can be used to estimate the structure of an undirected graph
L1-DAG learn provides an alternative (Schmidt, Niculescu-Mizil, Murphy 2007):
– Estimate an undirected dependency network G_undir
– Learn a Bayesian network constrained on G_undir
47
Dependency networks
A type of probabilistic graphical model. As in Bayesian networks, they have a graph component and a probability component; unlike Bayesian networks, they can have cyclic dependencies.
Dependency Networks for Inference, Collaborative Filtering and Data Visualization. Heckerman, Chickering, Meek, Rounthwaite, Kadie 2000
48
Selecting candidate regulators for the i-th gene using regularized linear regression
Regress Xi (an N-vector of expression values) on all other genes X_{-i} (an N × (p−1) matrix of candidates) with coefficient vector b_i:
min_{b_i} || Xi − X_{-i} b_i ||² + λ || b_i ||₁
The L1-norm regularization term imposes sparsity, setting many regression coefficients to 0. This is also called Lasso regression.
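A sketch of this candidate selection with scikit-learn's Lasso; the expression matrix `expr` here is random stand-in data, and the choice alpha=0.1 (the λ above) is purely illustrative:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
N, p = 100, 20
expr = rng.normal(size=(N, p))            # stand-in for a real expression matrix
i = 5                                      # index of the target gene
X = np.delete(expr, i, axis=1)             # all genes except gene i (candidates)
y = expr[:, i]                             # expression of gene i

model = Lasso(alpha=0.1).fit(X, y)         # L1 penalty drives coefficients to 0
candidates = np.flatnonzero(model.coef_)   # nonzero coefficients = candidates
print("candidate regulator columns:", candidates)
```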
49
Learning dependency networks
Learning: estimate a set of conditional probability distributions, one per variable. P(Xj | X_{-j}) can be estimated by solving:
– a set of linear regression problems (Meinshausen & Buhlmann, 2006; TIGRESS, Haury et al. 2010)
– a set of non-linear regression problems, with the non-linearity captured by regression trees (Heckerman et al. 2000) or by random forests (GENIE3, Huynh-Thu et al. 2010)
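A GENIE3-flavored sketch using scikit-learn random forests: regress each gene on all others and use feature importances as edge weights. The published GENIE3 implementation differs in details (e.g., tree parameters and importance normalization):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def genie3_like_scores(expr, n_trees=100, seed=0):
    """Return a p-by-p matrix W where W[j, i] scores regulator j -> target i."""
    N, p = expr.shape
    W = np.zeros((p, p))
    for i in range(p):
        X = np.delete(expr, i, axis=1)     # all genes except the target
        y = expr[:, i]                     # expression of the target gene
        rf = RandomForestRegressor(n_estimators=n_trees, random_state=seed)
        rf.fit(X, y)
        others = [j for j in range(p) if j != i]
        W[others, i] = rf.feature_importances_  # importance = edge weight
    return W
```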
50
Where do different methods rank?
[Figure: performance comparison of network inference methods from Marbach et al. 2012, with the community ensemble and a random baseline marked]
51
Goals for today
Introduction
– Why should we care?
– Different types of cellular networks
Methods for network reconstruction from expression
– Per-gene methods: Sparse Candidate Bayesian networks; regression-based methods (GENIE3, L1-DAG learn)
– Per-module methods
Assessing confidence in network structure
52
Assessing confidence in the learned network
Typically the number of training samples is not sufficient to reliably determine the "right" network. One can, however, estimate the confidence of specific features of the network, i.e., graph features f(G). Examples of f(G):
– An edge between two random variables
– Order relations: is X an ancestor of Y?
53
How do we assess confidence in graph features?
What we want is P(f(G) | D), which is
P(f(G) | D) = Σ_G f(G) P(G | D)
But it is not feasible to compute this sum over all graphs. Instead we use a "bootstrap" procedure.
54
Bootstrap to assess graph feature confidence
For i = 1 to m:
– Construct dataset D_i by sampling N samples with replacement from dataset D, where N is the size of the original D
– Learn a network B_i
For each feature of interest f, calculate its confidence as the fraction of bootstrap networks in which it appears:
conf(f) = (1/m) Σ_i f(G_i)
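A Python sketch of this bootstrap confidence computation; `learn_network` is a hypothetical structure learner that returns the learned network's edge set:

```python
import numpy as np

def bootstrap_edge_confidence(data, learn_network, m=200, seed=0):
    """Estimate edge confidence as the fraction of bootstrap networks containing it."""
    rng = np.random.default_rng(seed)
    N = data.shape[0]
    counts = {}
    for _ in range(m):
        # Resample N rows (samples) with replacement, then relearn the network.
        resampled = data[rng.integers(0, N, size=N)]
        for edge in learn_network(resampled):
            counts[edge] = counts.get(edge, 0) + 1
    return {edge: c / m for edge, c in counts.items()}
```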
55
Does the bootstrap confidence represent real relationships?
Compare the confidence distribution to that obtained from randomized data: shuffle the columns (experimental conditions) of each row (gene) of the expression matrix independently, then repeat the bootstrap procedure.
56
Application of Bayesian networks to yeast expression data
– 76 experiments/microarrays, 800 genes
– Bootstrap procedure on 200 subsampled datasets
– Sparse Candidate as the Bayesian network learning algorithm
57
Bootstrap-based confidence differs between real and randomized data
[Figure: distributions of feature confidence f for real versus randomized data]
58
Example of a high-confidence sub-network
Comparing one learned Bayesian network with the bootstrapped-confidence Bayesian network highlights a subnetwork associated with yeast mating.
59
Summary
– Network inference from expression provides a promising approach to identify cellular networks
– Bayesian networks are one representation of networks with both a probabilistic and a graphical component; network inference naturally translates to learning problems in Bayesian networks
– Successful application of Bayesian networks to expression data requires additional considerations: reducing potential parents statistically or using biological knowledge, and bootstrap-based confidence estimation
61
Linear regression with N inputs
Y: output; β0: intercept; β1, ..., βN: parameters/coefficients
Y = β0 + β1 X1 + ... + βN XN + ε
Given: data D = {(x^1, y^1), ..., (x^M, y^M)}
Estimate: the coefficients β minimizing the squared error Σ_d ( y^d − β0 − Σ_j βj x_j^d )²
62
Information-theoretic concepts
– Kullback-Leibler (KL) divergence: a distance between two distributions
– Mutual information: measures statistical dependence between X and Y; equal to the KL divergence between P(X,Y) and P(X)P(Y)
– Conditional mutual information: measures the information between two variables given a third
63
KL divergence
P(X) and Q(X) are two distributions over X:
D_KL(P || Q) = Σ_x P(x) log [ P(x) / Q(x) ]
64
Measuring the relevance of Y to X
– M_Disc(X, Y) = D_KL( P(X,Y) || P_B(X,Y) )
– M_Shield(X, Y) = I(X; Y | Pa(X))
– M_Score(X, Y) = Score(X; Y, Pa(X), D)
65
Conditional mutual information
Measures the mutual information between X and Y given Z:
I(X; Y | Z) = Σ_{x,y,z} P(x, y, z) log [ P(x, y | z) / ( P(x | z) P(y | z) ) ]
If Z captures everything about X, knowing Y gives no more information about X, so the conditional mutual information is zero.
66
What do the Bayesian network edges represent?
Is it just correlation? No. A high correlation between X and Y could be due to any of the three possible regulatory mechanisms: X regulates Y, Y regulates X, or both are regulated by a common factor.
Spang and Markowetz, BMC Bioinformatics 2005