1
Microarray Normalization Issues in High-Throughput Data Analysis BIOS 691-803 Spring 2010 Dr Mark Reimers
2
Normalization of Expression Arrays
Historical approaches, 1995-2005
Current standard approaches:
–Q.75 for Agilent
–Quantile for Affymetrix and NimbleGen
–VSN for Illumina
Modeling correlation (later)
Technical variable regression (later)
3
Historic Normalization Approaches
One-parameter:
–Single reference standard
–Total or median brightness
Two-parameter or non-parametric:
–Invariant set
–Lowess for two-color arrays
–Matching variance
–Distribution matching, by variance or by quantiles
Local approaches:
–Separate normalization per print tip
4
Median Normalization
Subtract each chip's median from its values to align the centers of the chip intensity distributions.
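The subtraction above can be sketched in a few lines of NumPy (the function name and the probes-by-chips layout are my assumptions):

```python
import numpy as np

def median_normalize(X):
    """Subtract each chip's median so the centers of the chip
    intensity distributions line up.

    X : (n_probes, n_chips) matrix of log2 intensities.
    """
    return X - np.median(X, axis=0, keepdims=True)
```

After this step every column (chip) has median zero, which is the whole content of the method.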
5
Two-color Intensity-Dependent Bias
Non-normalized data {(M, A)}, n = 1..5184: M = log2(R/G), A = (log2 R + log2 G)/2
Saturation occurs at different densities for the Cy3 (green) and Cy5 (red) dyes, because different densities of label get attached to the same amount of cDNA target.
Model the bias by an intensity-dependent function c(A).
6
Global (lowess) Normalization
Globally normalized data {(M, A)}, n = 1..5184: M_norm = M - c(A)
c(A) could be determined by any local averaging method; Terry Speed suggested lowess (locally weighted regression).
Subtract c(A) to obtain the 'corrected' data.
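A minimal sketch of M_norm = M - c(A). To stay self-contained, c(A) here is a binned running median, a crude stand-in for the lowess smoother the slides describe; the function name and bin count are my assumptions:

```python
import numpy as np

def intensity_normalize(M, A, n_bins=20):
    """Subtract an estimate of the intensity-dependent bias c(A):
    M_norm = M - c(A). c(A) is estimated by a binned running median
    (lowess would be used in practice)."""
    c = np.empty_like(M)
    for idx in np.array_split(np.argsort(A), n_bins):
        if idx.size:
            c[idx] = np.median(M[idx])
    return M - c
```

With more bins the estimate tracks c(A) more closely but becomes noisier, the same bandwidth trade-off lowess faces.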
7
Print-tip Normalization
Print-tip normalized data {(M, A)}, n = 1..5184: M_p,norm = M_p - c_p(A), p = print tip (1-16),
where c_p(A) is an intensity-dependent function for print tip p.
Print-tip layout:
 1  2  3  4
 5  6  7  8
 9 10 11 12
13 14 15 16
8
Scaled Print-tip Normalization
Scaled print-tip normalized data {(M, A)}, n = 1..5184: M_p,norm = s_p · (M_p - c_p(A)), p = print tip (1-16),
where s_p is a scale factor for print tip p, estimated from the median absolute deviation (MAD).
[Figures: after print-tip normalization; after scaled print-tip normalization]
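A rough sketch of M_p,norm = s_p · (M_p - c_p(A)). The function names, the binned-median smoother standing in for lowess, and the choice of the median of per-tip MADs as the common scale are all my assumptions:

```python
import numpy as np

def _bias(M, A, n_bins=10):
    """Binned running-median estimate of the intensity bias c(A)."""
    c = np.empty_like(M)
    for idx in np.array_split(np.argsort(A), n_bins):
        if idx.size:
            c[idx] = np.median(M[idx])
    return c

def scaled_printtip_normalize(M, A, tip):
    """M_norm = s_p * (M_p - c_p(A)) per print tip p, where the scale
    factor s_p brings every tip group to a common median absolute
    deviation (here the median of the per-tip MADs)."""
    Mn = np.empty_like(M)
    mads = {}
    for p in np.unique(tip):
        sel = tip == p
        Mn[sel] = M[sel] - _bias(M[sel], A[sel])
        mads[p] = np.median(np.abs(Mn[sel] - np.median(Mn[sel])))
    target = np.median(list(mads.values()))
    for p, mad in mads.items():
        if mad > 0:
            Mn[tip == p] *= target / mad
    return Mn
```

After this step every print-tip group shares the same MAD, so no tip contributes systematically wider log-ratios than the others.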
9
Effect on Spatial Artifacts
[Figures: median normalization; global lowess normalization; print-tip lowess normalization; scaled print-tip normalization]
10
Quantile Normalization
Determine a reference distribution (any good chip can be used, or the average of a set of chips)
For each chip, for each probe, determine its quantile within that chip
Shift the value to the corresponding quantile of the reference distribution
----------------------------------
Easy to implement
Resolves intensity-dependent bias as well as loess does
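The steps above can be sketched as follows; one common choice of reference, assumed here, is the row-wise mean of the sorted columns:

```python
import numpy as np

def quantile_normalize(X):
    """Classic quantile normalization: replace each value by the value
    at the same rank in a reference distribution, here the row-wise
    mean of the sorted columns (ties are broken by column order)."""
    ranks = np.argsort(np.argsort(X, axis=0), axis=0)  # rank within each chip
    ref = np.sort(X, axis=0).mean(axis=1)              # reference distribution
    return ref[ranks]
```

Every chip ends up with exactly the same set of values, differing only in which probe carries which value, which is what makes the method so easy to implement and also the source of the critiques below.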
11
Quantile Normalization (Irizarry et al., 2002)
[Figure: the mapping performed by quantile normalization]
12
Things to maybe add:
–Examples of how to do the mapping
–How to assign the reference; linear extension for high-intensity genes
–Critiques: induced correlations, with an example and details about variable cross-hybridization
13
Ratio-Intensity: Before
14
Ratio-Intensity: After
15
Key Assumption of Quantile Normalization
The processes that distort the distribution act on all probes of a given intensity more or less equally.
This is probably true to within differences of 30% or 40%; smaller differences depend quite strongly on the technical characteristics of the probes.
16
Critiques of Quantile Normalization
–Artificially compresses the variation of highly expressed genes
–Confounds systematic changes due to cross-hybridization with changes in abundance for genes of low expression
–Induces artificial correlations in gene expression across samples
17
How to Assess Normalization?
We want to minimize technical variation in relation to biological variation
–Most tests, such as the t-test or ANOVA, compare technical (within-group) and between-group variance
Compare the distributions of biological and technical variation after normalization
Caveat: most small variance estimates are under-estimates
18
Other Issues in Normalization
Transformation of variables
–Variance stabilization
Background compensation
19
Why Variance Stabilization?
[Figure: x-y and mean-variance plots for ideal raw x, log2(x), and log2(x + offset); from Du, CHI 2007]
20
Variance Stabilization
Simple power transforms (Box-Cox) often nearly stabilize variance
Durbin and Huber derived a variance-stabilizing transform from a theoretical model:
–y = a (background) + m·exp(η) (multiplicative error) + ε (additive error)
–m is the true signal; η and ε have N(0, σ_η²) and N(0, σ_ε²) distributions
–The transform is a generalized log (arsinh-type) function of (y - a)
Could estimate a (background) and the error variances empirically
In practice the best effect on variance often comes from parameters different from the empirical estimates
–Huber's parameters are harder to estimate
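A minimal sketch of the generalized-log (arsinh-type) transform that stabilizes this two-component error model; the function name and default parameters are my assumptions, and in practice a and c would be tuned or estimated:

```python
import numpy as np

def glog2(y, a=0.0, c=1.0):
    """Generalized log: with background a and tuning constant c, this
    behaves like log2(y - a) for large y but stays finite and roughly
    linear near (and below) zero, so variance under the additive +
    multiplicative error model is approximately stabilized."""
    z = y - a
    return np.log2((z + np.sqrt(z**2 + c)) / 2.0)
```

Unlike log2(y), the transform is defined for background-subtracted values at or below zero, which is exactly where the plain log transform inflates variance.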
21
Effect of Box-Cox Transforms
22
Model Solution – arcsinh
Fit the relation between the mean and the standard deviation
[Figure: relation between log2 and the VST (arcsinh) transform]
(Lin, Pan, Huber, and Warren, 2007)
23
Illumina Bead Arrays
–Oligonucleotides (50-mers) immobilized on glass beads
–Identifier tag on each oligo
–Usually ~30 beads per probe
24
Comparing VSN results with the log scale
VST improves the cross-site concordance (MAQC data)
From Du, CHI 2007
25
Applicability to Other Array Types
The crucial assumption of most current methods for expression-array normalization is that the differences between arrays reflect changes in only a small proportion of the genome, and that the overall distribution of expression levels is unchanged.
26
Recent Approaches
1. Use of standard or control variables to infer covariates (Mike West)
2. PCA of residuals to infer covariates or patterns of systematic error
3. Regression on technical covariates of probes
27
Inferred (Surrogate) Covariates
Surrogate variable analysis (SVA) – Leek and Storey, PLoS Genetics, 2007
Motivation: many unmodeled (and unknown) factors affect the measurements
Even if they were known, most experiments don't have sufficient degrees of freedom to estimate their effects
Idea: the effects of all factors are often somewhat correlated
Can you infer a manageable set of ersatz (surrogate) covariates that do the same thing?
28
Underlying Model
There are factors f_1, ..., f_L, which affect genes via linear combinations of functions g_i1(f_1), ..., g_iL(f_L). The distorting effect on gene i in array j is Σ_l g_il(f_l,j).
Claim: this is a sufficiently general representation, because additive models can represent most data sets (dubious)
Fact: an additive representation can be represented as a linear function (of transformed variables)
29
Inferring Covariates
Given observations Y and predictors X (L × N)
–(e.g., X might record diagnosis and age in its columns)
Fit the linear model Y = B·X + R
The residual matrix R is approximated by R ≈ U·Σ·V^T using singular value decomposition with K non-trivial components
The k-th column of V (k-th row of V^T) records the k-th inferred covariate across the N samples
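The regress-then-SVD step can be sketched as below. This is only the covariate-extraction step (the permutation test for choosing K on the next slide is omitted), and the function name and the samples-by-predictors layout of X are my assumptions:

```python
import numpy as np

def surrogate_covariates(Y, X, k):
    """Sketch of the residual-SVD step of SVA: regress Y (genes x samples)
    on the known design X (samples x predictors) by least squares, then
    take the top k right singular vectors of the residual matrix as the
    inferred (surrogate) covariates across samples."""
    beta, *_ = np.linalg.lstsq(X, Y.T, rcond=None)  # per-gene coefficients
    R = Y - (X @ beta).T                            # residual matrix
    _, _, Vt = np.linalg.svd(R, full_matrices=False)
    return Vt[:k]                                   # k covariates, one row each
```

Each returned row is a candidate covariate: a pattern across the N samples that many genes' residuals share.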
30
SV Decomposition of Residuals
How many singular values to keep?
Test whether the fraction of variance explained is higher than expected by 'chance':
–Compute the test statistic
–Assess significance by applying the SVD after many permutations, acting on rows independently to disrupt the correlation structure
The retained components are the surrogate variables
31
Using the Surrogate Covariates
For each gene i separately, fit the β's and γ's in the model y_ij = Σ_k β_ik x_kj + Σ_k γ_ik v_kj + e_ij
Issue: how to limit the degrees of freedom used up by the covariates?
–For each k, many genes show little correlation with inferred covariate k
Compute the variance explained by each covariate across all genes
Select which genes i are significantly associated with predictor v_k (i.e., γ_ik ≠ 0), using Storey's FDR approach on the variance explained
Include only the significant associations
32
Critiques and Issues for SVA
–SVA does not distinguish between technical variation and biological variation in most designs; biological variation within treatment groups is often important
–SVA assumes covariate effects are additive (linear in practice); this may be plausible for a single outcome, but the model assumes that the same functions of the unknown factors contribute linearly to the distortion of ALL genes
–Fairly complex procedure with several tuning parameters
–Does not address confounding between systematic errors and the design; a general fault of covariate methods, but not (I think) a necessary one
–SVA as published is not robust; this is easily fixed
33
A Simpler Approach Using SVD
The left singular vectors represent a basis for the subspace of technical variation
Hypothesis: technical errors are reproducible
Implication: one can 'learn' the typical patterns of technical variation for each technology from one set of replicates
34
Algorithm
Consider sets of technical replicates of the same samples, with only technical differences within sets; PCA of the replicates identifies the major components
Algorithm:
–Construct technical differences from the mean of each set
–Robust PCA of the differences (outliers can be handled by simple winsorization)
–Find the difference of each array from the common mean of all arrays in the experiment
–Project each array's difference onto K PCs (K small)
–Subtract the projection (typically 50% of variance)
Leverage points in the regression are also winsorized
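The steps above can be sketched as follows, using plain (non-robust) PCA and omitting the winsorization; the function name and matrix layouts are my assumptions:

```python
import numpy as np

def replicate_pca_normalize(reps, sets, X, k):
    """Learn the technical-variation subspace from technical replicates,
    then remove it from a new experiment.

    reps : probes x arrays matrix of technical replicates
    sets : replicate-set label for each column of reps
    X    : probes x arrays experiment to normalize
    k    : number of technical components to remove
    """
    # technical differences: each replicate minus its own set mean
    D = np.column_stack(
        [reps[:, sets == s] - reps[:, sets == s].mean(axis=1, keepdims=True)
         for s in np.unique(sets)])
    U, _, _ = np.linalg.svd(D, full_matrices=False)
    B = U[:, :k]                              # basis of technical subspace
    dev = X - X.mean(axis=1, keepdims=True)   # deviation from common mean
    return X - B @ (B.T @ dev)                # subtract the projection
```

Because the basis is learned from replicates (pure technical variation), projecting it out of a separate experiment's deviations does not require knowing that experiment's design.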
35
Principal Components of MAQC
Four samples: A, brain; B, mixed tissue; C, 3:1 mixture of A and B; D, 1:3 mixture of A and B
Each sample hybridized five times in each of three labs
[Scree plot of replicate PCA for the Agilent 44K one-color MAQC data set (3 sets of 4×5 replicates)]
36
Results on MAQC Data
Using each lab's PCs to normalize the other two labs
Five PCs (left singular vectors) used
Proportion of variance explained > 50%
–5/40,000 expected if taking a 'random' subspace
[Figure: number of F-scores greater than 7]
37
Technical Variable Regression
Hypotheses:
1. Most technical variation between chips is caused by a few (unknown) systematic factors
2. Probes with similar technical characteristics (Tm, position in gene, location on chip, typical intensity, ...) will be distorted by similar amounts
Therefore we can use technical variables as an index of technical similarity (the predictor) and (usually) treat real biological differences as 'noise':
–Construct deviations from an average or standard chip
–Identify which technical variables have the most effect
–Regress the deviations from the average on the technical variables
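The deviation-regression idea can be sketched as below. A plain linear fit stands in for the flexible (loess-style) regression the later slides discuss, and the function name and covariate choices are my assumptions:

```python
import numpy as np

def technical_regression_normalize(X, covs):
    """For each array, regress its deviation from the average chip on
    per-probe technical covariates and subtract the fitted systematic
    part, treating real biological differences as noise.

    X    : probes x arrays log intensities
    covs : probes x q technical covariates (e.g. Tm, C+T content)
    """
    avg = X.mean(axis=1, keepdims=True)              # average (standard) chip
    Z = np.column_stack([np.ones(len(covs)), covs])  # design with intercept
    dev = X - avg                                    # deviation of each array
    beta, *_ = np.linalg.lstsq(Z, dev, rcond=None)   # one fit per array
    return avg + (dev - Z @ beta)                    # keep only the unexplained part
```

The biological signal survives because, for any one covariate bin, it behaves like noise around the fitted technical trend rather than following it.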
38
Covariates to Index Similar Probes
Analogous to the 'loess' normalization of Yang et al.: index similar probes by technical covariates
Known covariates of array probes:
–Location (X, Y) on the chip
–Reference (or average) intensity
–Tm ('melting' or annealing temperature)
–Location relative to the priming site, for expression arrays
–Pyrimidine content (C + T); cross-hybridization is easier than with purines
–Deviation of the reference from the average reference (two-color arrays)
–Deviations of log intensity from the average, as a function of the average
39
Many Covariates Predict Deviations
A moderate number (5-9) of technical predictors have significant effects on many chips
Non-linear, non-additive interactions are usual
[Figure: deviations of chip GSM 25410 from the average of all chips in the experiment, with LOESS curves tracking four probe groups (low/high C+T content, near/far from the 3' end); overall downward trend (apparent loss of expression) at higher values of average intensity]
40
Regression in Moderate Dimensions
Local regression (LOESS) works reasonably well up to three or four dimensions, but there is too much flexibility in five or more dimensions
–Curse of dimensionality: if 7 variables are truly independent at all levels, with 4 bins for each variable, there are 4^7 = 16,384 bins
–But there is plenty of data!
How to reconcile flexibility with restricting degrees of freedom?
Representation: how to represent the high-dimensional surface effectively for 10^5 points?
41
Addressing the Curse of Dimensionality
Local regression is unwieldy in 7 dimensions
There don't seem to be dimension-reduction subspaces within the predictors
–The condition number of the data matrices selected by 5% 'slices' is 1.5-2
Approaches such as MARS don't seem to work, because interactions dominate most main effects
–Manufacturers tune the probes to remove main effects
42
Issues in Building a Representation
Spend degrees of freedom wisely
Borrow an idea from MARS: limit the number of effective dimensions in the local regression
Construct neighborhoods in B^7 that are wide in most directions but narrow in the directions of high variation
–Directions determined adaptively, with a high threshold