Vladimir Makarenkov Université du Québec à Montréal (UQAM)


Correction of experimental results in the high-throughput screening process
Vladimir Makarenkov, Université du Québec à Montréal (UQAM), Montréal, Canada

Talk outline
What is HTS?
Hit selection
Random and systematic error
Statistical methods to correct HTS measurements: removal of the evaluated background; well correction
Comparison of methods for HTS data correction and hit selection
Conclusions

What is HTS? High-throughput screening (HTS) is a large-scale process to screen thousands of chemical compounds to identify potential drug candidates (i.e. hits). Malo, N. et al., Nature Biotechnol. 24, no. 2, 167-175 (2006).

An HTS procedure consists of running chemical compounds arranged in 2-dimensional plates through an automated screening system that makes experimental measurements. What is HTS?

What is HTS?
Samples are located in wells; the plates are processed in sequence.
Screened samples can be divided into active (i.e. hits) and inactive ones.
Most of the samples are inactive; the measured values of the active samples differ substantially from those of the inactive ones.

HTS technology
Automated plate handling system and plates of SSI Robotics

Classical hit selection
Compute the mean value m and standard deviation s of the observed plate (or of the whole assay).
Identify samples whose values differ from the mean m by at least cs (where c is a preliminarily chosen constant).
For example, in the case of an inhibition assay, choosing c = 3 selects samples with values lower than m - 3s.
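As an illustration, the classical rule above can be sketched in Python with NumPy (a hypothetical example, not the authors' code); one well is set far below the plate mean so that it falls under the m - 3s threshold:

```python
import numpy as np

def select_hits(plate, c=3.0):
    """Classical hit selection for an inhibition assay: flag wells
    whose values fall below m - c*s, where m and s are the plate's
    mean and standard deviation."""
    m, s = plate.mean(), plate.std()
    return plate < m - c * s

# A simulated 16x22 plate with one strong inhibitor (a clear hit)
rng = np.random.default_rng(0)
plate = rng.normal(loc=100.0, scale=5.0, size=(16, 22))
plate[4, 7] = 40.0
hits = select_hits(plate, c=3.0)
```

With c = 3 and roughly normal inactive measurements, only about 0.1% of inactive wells are expected below the threshold, so nearly all flagged wells are genuine hits.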

Random and systematic error
Random error: random variability that deviates the measured values from their "true" levels. It results from random influences in each particular well of a particular plate (~ random noise). It usually cannot be detected or compensated for.
Systematic error: a systematic variability of the measured values across all plates of the assay. It can be detected, and its effect can be removed from the data.

Sources of systematic error
Various sources of systematic error can affect experimental HTS data and thus bias the hit selection process (Heuer et al. 2003), including:
Ageing, reagent evaporation, or cell decay, which can be recognized as smooth trends in the plate means/medians.
Errors in liquid handling and pipette malfunction, which can generate localized deviations from expected values.
Variation in incubation time, time drift in measuring different wells or plates, and reader effects.

Random and systematic error
Examples of systematic error (computed and plotted by HTS Corrector). HTS assay, Chemistry Department, Princeton University: inhibition of the glycosyltransferase MurG function of E. coli; 164 plates, 22 rows and 16 columns. Hit distribution by rows and columns of Assay 1 computed for 164 plates; hits were selected with the threshold m - 1s.

Hit distribution surface of Assay 1 (a) raw data (b) approximated data

Raw and evaluated background: an example
Computed and plotted by HTS Corrector

Systematic error detection tests comparison
Simulation 1. Plate size: 96 wells; Cohen's Kappa vs error size (from Dragiev et al., 2011). Systematic error size: 10% (at most 2 columns and 2 rows affected). First column, (a)-(c): a = 0.01; second column, (d)-(f): a = 0.1. Systematic error detection tests: t-test and K-S test.

Systematic error detection tests comparison
Simulation 2. Plate size: 96 wells; Cohen's Kappa vs hit percentage (from Dragiev et al., 2011). Systematic error size: 10% (at most 2 columns and 2 rows affected). First column, (a)-(b): a = 0.01; second column, (c)-(d): a = 0.1. Systematic error detection tests: t-test, K-S test, and goodness-of-fit test.

Hit distribution surface of Assay 1 (a) raw data (b) approximated data

Data normalization (z-score normalization)
Applying the following formula, we can normalize the elements of the input data:
x'i = (xi - m) / s,
where xi is an input element, x'i the normalized output element, m the mean value, s the standard deviation, and n the total number of elements in the plate. The output data satisfy m(x') = 0 and s(x') = 1.
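A minimal sketch of this plate normalization in Python with NumPy (illustrative only):

```python
import numpy as np

def zscore_plate(plate):
    """Z-score normalization: x'i = (xi - m) / s, where m and s are
    the mean and standard deviation of the plate's elements."""
    return (plate - plate.mean()) / plate.std()

# Example: an 8x10 plate of simulated measurements
plate = np.random.default_rng(1).normal(50.0, 8.0, size=(8, 10))
normed = zscore_plate(plate)
# After normalization, the plate has mean 0 and standard deviation 1.
```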

Evaluated background
An assay background can be defined as the mean of the normalized plate measurements, i.e.
zi = (1/N) * Sum over j of x'i,j,
where x'i,j is the normalized value in well i of plate j, zi the background value in well i, and N the total number of plates.

Removing the evaluated background: main steps
1. Normalize the HTS data by plate
2. Eliminate the outliers
3. Compute the evaluated background
4. Eliminate systematic errors by subtracting the evaluated background from the normalized data
5. Select hits in the corrected data
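The steps above can be sketched in Python with NumPy (a simplified illustration; masking outliers with a fixed |z| > 3 cut-off is an assumption, not necessarily the authors' exact rule):

```python
import numpy as np

def correct_by_background(plates, cutoff=3.0):
    """Background subtraction, step by step:
    1. z-score-normalize each plate;
    2. mask outliers (|z| > cutoff) so hits do not distort the background;
    3. average the masked plates well-by-well -> evaluated background zi;
    4. subtract the background from the normalized data."""
    normed = np.stack([(p - p.mean()) / p.std() for p in plates])
    masked = np.where(np.abs(normed) > cutoff, np.nan, normed)
    background = np.nanmean(masked, axis=0)   # one value per well
    return normed - background

# 50 simulated 8x10 plates sharing a constant positional bias
rng = np.random.default_rng(2)
bias = rng.normal(0.0, 0.5, size=(8, 10))
plates = [rng.normal(0.0, 1.0, size=(8, 10)) + bias for _ in range(50)]
corrected = correct_by_background(plates)
```

After correction, the well-wise means across plates are close to zero, i.e. the constant positional bias has been removed.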

Removing the evaluated background: hit distribution surface before and after the correction

Hit distribution after the correction
HTS assay, Chemistry Department, Princeton University: inhibition of the glycosyltransferase MurG function of E. coli; 164 plates, 22 rows and 16 columns. Hit distribution by rows in Assay 1 (164 plates): (a) hits selected with the threshold m - 1s; (b) hits selected with the threshold m - 2s.

Well correction method: main ideas
Once the data are plate-normalized, their values can be analyzed in each particular well along the entire assay.
The distribution of inactive measurements along a fixed well should be zero-mean centered, provided there is no systematic error and the compounds are randomly distributed along this well.
Values along each well may show ascending or descending trends that can be detected via a polynomial approximation.

Example of systematic error in the McMaster dataset
Hit distribution surface for a McMaster dataset (1250 plates; a screen of compounds that inhibit the Escherichia coli dihydrofolate reductase). Values deviating from the plate means by more than 1 standard deviation (SD) were taken into account during the computation. Well, row and column positional effects are shown.

Well correction method: an example McMaster University HTS laboratory screen of compounds to inhibit the Escherichia coli dihydrofolate reductase. 1250 plates with 8 rows and 10 columns.

Well correction method: main steps
1. Normalize the data within each plate (plate normalization)
2. Compute the trend along each well using polynomial approximation
3. Subtract the obtained trend from the plate-normalized values
4. Normalize the data along each well (well normalization)
5. Reexamine the hit distribution surface
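A sketch of these steps in Python with NumPy (the polynomial degree is an assumption; the method described in the slides may use a different degree):

```python
import numpy as np

def well_correct(plates, degree=2):
    """Well correction: z-score each plate, then for every well fit a
    polynomial trend across the plate sequence, subtract it, and
    z-score the residuals along the well."""
    normed = np.stack([(p - p.mean()) / p.std() for p in plates])
    n_plates, n_rows, n_cols = normed.shape
    t = np.arange(n_plates)
    corrected = np.empty_like(normed)
    for i in range(n_rows):
        for j in range(n_cols):
            series = normed[:, i, j]
            trend = np.polyval(np.polyfit(t, series, degree), t)
            resid = series - trend
            corrected[:, i, j] = (resid - resid.mean()) / resid.std()
    return corrected

# 40 simulated 8x10 plates with an ascending trend in well (3, 4)
rng = np.random.default_rng(5)
plates = []
for k in range(40):
    p = rng.normal(0.0, 1.0, size=(8, 10))
    p[3, 4] += 0.08 * k
    plates.append(p)
corrected = well_correct(plates)
```

After the final well normalization, every well's series has mean 0 and standard deviation 1 across the plates, and the within-well trend has been removed.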

Hit distribution for different thresholds for the raw and corrected McMaster data Hit distributions for the raw (a, c, and e) and well-corrected (b, d, and f) McMaster datasets (a screen of compounds that inhibit the Escherichia coli dihydrofolate reductase; 1250 plates of size 8x10) obtained for the thresholds: (a and b) : m - 1s (c and d) : m – 1.5s (e and f) : m - 2s

Simulations with random data: constant noise
Comparison of the hit selection methods (hit detection rate and false positives): classical hit selection, background subtraction method, and well correction method.

Simulations with random data: variable noise
Comparison of the hit selection methods (hit detection rate and false positives): classical hit selection, background subtraction method, and well correction method.

Comparison of methods for data correction in HTS (1)
Method 1. Classical hit selection using the value m - cs as a hit selection threshold, where the mean value m and standard deviation s are computed separately for each plate and c is a preliminarily chosen constant (usually c equals 3).
Method 2. Classical hit selection using the value m - cs as a hit selection threshold, where the mean value m and standard deviation s are computed over all the assay values, and c is a preliminarily chosen constant. This method can be chosen when we are certain that all plates of the given assay were processed under the same testing conditions.

Comparison of methods for data correction in HTS (2)
Method 3. The median polish procedure (Tukey 1977) can be used to remove the impact of systematic error. Median polish works by alternately removing the row and column medians. In our study, Method 2 was applied to the matrix of the resulting residuals in order to select hits.
Method 4 (designed at Merck Frosst). The B score normalization procedure (Brideau et al. 2003) is designed to remove row/column biases in HTS (Malo et al. 2006). The residual r(ijp) of the measurement for row i and column j on the p-th plate is obtained by median polish. The B score is calculated as follows:
B score = r(ijp) / MADp, where MADp = median{|r(ijp) - median(r(ijp))|}.
To select hits, this computation was followed by Method 2, applied to the B score matrix.
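Median polish and the B score can be sketched in Python with NumPy (a simplified illustration; the fixed number of polish iterations is an assumption):

```python
import numpy as np

def median_polish(x, n_iter=10):
    """Tukey's median polish: alternately sweep out row and column
    medians, returning the matrix of residuals."""
    r = x.astype(float).copy()
    for _ in range(n_iter):
        r -= np.median(r, axis=1, keepdims=True)  # remove row medians
        r -= np.median(r, axis=0, keepdims=True)  # remove column medians
    return r

def b_score(plate):
    """B score: median-polish residuals divided by the plate's MAD."""
    r = median_polish(plate)
    mad = np.median(np.abs(r - np.median(r)))
    return r / mad

# A simulated 8x10 plate with a strong bias added to row 2
plate = np.random.default_rng(3).normal(0.0, 1.0, size=(8, 10))
plate[2, :] += 5.0
scores = b_score(plate)
```

Because median polish sweeps the row effect out of the residuals, the biased row no longer stands out in the B score matrix.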

Comparison of methods for data correction in HTS (3)
Method 5. The well correction procedure followed by Method 2 applied to the well-corrected data. The well correction method consists of two main steps:
1. Least-squares approximation of the data, carried out separately for each well of the assay.
2. Z score normalization of the data within each well across all plates of the assay.

Simulation design
Random standard normal datasets N(0, 1) were generated, and different percentages of hits (1% to 5%) were added to them. Systematic error following a normal distribution N(0, c), where c equals 0, 0.6s, 1.2s, 1.8s, 2.4s, or 3s, was then added. Two types of systematic error were considered:
A. Systematic error stemming from constant row x column interactions: different constant values were applied to each row and each column of the first plate, and the same constants were added to the corresponding rows and columns of all other plates.
B. Systematic error stemming from changing row x column interactions: as in (A), but with the values of the row and column constants varying across plates.

Systematic error stemming from constant row x column interactions A. Systematic errors stemming from row x column interactions: Different constant values were applied to each row and each column of the first plate. The same constants were added to the corresponding rows and columns of all other plates.
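A sketch of how this type A error can be generated in Python with NumPy (illustrative; the function name and parameters are hypothetical):

```python
import numpy as np

def add_constant_rc_error(plates, c, rng):
    """Type A systematic error: draw one constant per row and per
    column from N(0, c) once, and add the same constants to the
    corresponding rows and columns of every plate of the assay."""
    n_rows, n_cols = plates[0].shape
    row_err = rng.normal(0.0, c, size=(n_rows, 1))
    col_err = rng.normal(0.0, c, size=(1, n_cols))
    return [p + row_err + col_err for p in plates]

rng = np.random.default_rng(4)
plates = [rng.normal(0.0, 1.0, size=(8, 10)) for _ in range(20)]
noisy = add_constant_rc_error(plates, c=1.2, rng=rng)
```

For type B error, new row and column constants would instead be drawn for each plate.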

Results for systematic error stemming from row x column interactions which are constant across plates (SD = s)
The results were obtained with the methods using the plates' parameters (Method 1, ◊), the assay parameters (Method 2, □), median polish (x), B score (○), and well correction. The abscissa axis indicates the noise factor (a and c, with a fixed hit percentage of 1%) and the percentage of added hits (b and d, with a fixed error rate of 1.2SD).

Systematic error stemming from varying row x column interactions B. Systematic error stemming from changing row x column interactions: As in (A) but with the values of the row and column constants varying across plates.

Results for systematic error stemming from row x column interactions which vary across plates (SD = s)
The results were obtained with the methods using the plates' parameters (Method 1, ◊), the assay parameters (Method 2, □), median polish (x), B score (○), and well correction. The abscissa axis indicates the noise factor (a and c, with a fixed hit percentage of 1%) and the percentage of added hits (b and d, with a fixed error rate of 1.2SD).

HTS Corrector Software Main features Visualization of HTS assays Data partitioning using k-means Evaluation of the background surface Correction of experimental datasets Hit selection using different methods Chi-square analysis of hit distribution Contact: makarenkov.vladimir@uqam.ca Download from: http://www.info2.uqam.ca/~makarenv/HTS/home.php

Conclusions
The proposed well correction method (Makarenkov et al. 2007) rectifies the distribution of assay measurements by normalizing the data within each considered well across all assay plates.
Simulations suggest that well correction is a robust method that should be applied prior to the hit selection process.
When neither hits nor systematic error were present in the data, the well correction method showed performance similar to that of the traditional hit selection methods.
Well correction generally outperformed the median polish and B score methods, as well as the classical hit selection procedure.
The simulation study confirmed that Method 2, based on the assay parameters, was more accurate than Method 1, based on the plates' parameters. Therefore, when testing conditions are identical for all plates of a given assay, all assay measurements should be treated as a single batch.

Future research
Application of clustering methods for hit selection in HTS
Incorporation of information about the chemical properties of the tested compounds into the method
Possible combinations with virtual HTS screening (vHTS) information and combinatorial chemistry models

References
Brideau C., Gunter B., Pikounis W., Pajni N. and Liaw A. (2003) Improved statistical methods for hit selection in HTS. J. Biomol. Screen., 8, 634-647.
Dragiev P., Nadon R. and Makarenkov V. (2011) Systematic error detection in experimental high-throughput screening. BMC Bioinformatics, 12:25.
Heyse S. (2002) Comprehensive analysis of high-throughput screening data. In Proc. of SPIE 2002, 4626, 535-547.
Kevorkov D. and Makarenkov V. (2005) Statistical analysis of systematic errors in HTS. J. Biomol. Screen., 10, 557-567.
Makarenkov V., Kevorkov D., Zentilli P., Gagarin A., Malo N. and Nadon R. (2006) HTS-Corrector: new application for statistical analysis and correction of experimental data. Bioinformatics, 22, 1408-1409.
Makarenkov V., Zentilli P., Kevorkov D., Gagarin A., Malo N. and Nadon R. (2007) An efficient method for the detection and elimination of systematic error in HTS. Bioinformatics, 23, 1648-1657.
Malo N., Hanley J.A., Cerquozzi S., Pelletier J. and Nadon R. (2006) Statistical practice in HTS data analysis. Nature Biotechnol., 24, 167-175.
Tukey J.W. (1977) Exploratory Data Analysis. Cambridge, MA: Addison-Wesley.