Likelihood & Hierarchical Models


Likelihood & Hierarchical Models
T_ij ~ N(θ_ij, σ_ij²); θ_ij ~ N(θ_j, σ_j²); θ_j ~ N(μ, τ²)

This is an approximation…
– Biased if n is small in many studies
– Biased if the distribution of effect sizes isn't normal
– Biased for some estimators of w_i

Solution: estimate τ² from the data using likelihood

Likelihood: how well the data support a given hypothesis. Note: each and every parameter choice IS a hypothesis

L(θ|D) = p(D|θ), where D is the data and θ is some choice of parameter values

Calculating a Likelihood: μ = 5, σ = 1
Given data points y1, y2, y3, evaluate the N(5, 1) density at each point:
L(μ=5|D) = p(D|μ=5) = p(y1) × p(y2) × p(y3)

The Maximum Likelihood Estimate is the value of θ at which p(D|θ) is highest.
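The calculation above can be sketched in a few lines. A minimal illustration (in Python rather than the slides' R, with made-up data points y1, y2, y3) of evaluating L(μ|D) for a normal model and locating the maximum by grid search:

```python
import math

def normal_pdf(y, mu, sigma):
    """Density of N(mu, sigma^2) at y."""
    return math.exp(-(y - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def likelihood(data, mu, sigma=1.0):
    """L(mu|D) = p(D|mu): the product of densities over the data."""
    L = 1.0
    for y in data:
        L *= normal_pdf(y, mu, sigma)
    return L

data = [4.2, 5.1, 5.8]   # hypothetical y1, y2, y3

# Evaluate the likelihood at one hypothesis, mu = 5
L5 = likelihood(data, mu=5.0)

# Grid search: the MLE is the mu with the highest likelihood
grid = [m / 100 for m in range(300, 701)]   # mu from 3.00 to 7.00
mle = max(grid, key=lambda m: likelihood(data, m))
```

For a normal model with known σ, the MLE of μ is the sample mean, so the grid search should land near mean(data) ≈ 5.03.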

Calculating Two Likelihoods: μ = 3, σ = 1 versus μ = 7, σ = 1
The hypothesis with the higher likelihood is better supported by the data.

Maximum Likelihood

Log-Likelihood

-2 × Log-Likelihood = Deviance
The difference in deviances of nested models (−2 × the log of the likelihood ratio) is approximately χ² distributed, so we can use it for tests!
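A hedged sketch of that test (Python, with invented log-likelihoods): for one extra parameter (1 df), the χ² tail probability can be written with the error function, so no stats library is needed:

```python
import math

def chi2_sf_1df(x):
    """Upper-tail probability (p-value) of a chi-square with 1 df."""
    return 1.0 - math.erf(math.sqrt(x / 2.0))

# Hypothetical log-likelihoods from two nested fits
logLik_simple  = -110.2   # e.g., model without the extra variance parameter
logLik_complex = -104.7   # e.g., model with it

deviance_simple  = -2.0 * logLik_simple
deviance_complex = -2.0 * logLik_complex

# Likelihood-ratio statistic = difference in deviances
lr_stat = deviance_simple - deviance_complex   # 11.0 here
p_value = chi2_sf_1df(lr_stat)

# Reject the simpler model at alpha = 0.05 when p_value < 0.05
```

The familiar 5% cutoff for 1 df is 3.84; any deviance drop larger than that is significant at α = 0.05.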

Last Bits of Likelihood Review
– θ can contain as many parameters, for as complex a model, as we would like
– The underlying likelihood equation also encompasses distributional information
– There are multiple algorithms for searching parameter space

rma(Hedges.D, Var.D, data=lep, method="ML")

Random-Effects Model (k = 25; tau^2 estimator: DL)

tau^2 (estimated amount of total heterogeneity): (SE = )
tau (square root of estimated tau^2 value):
I^2 (total heterogeneity / total variability): 47.47%
H^2 (total variability / sampling variability): 1.90

Test for Heterogeneity:
Q(df = 24) = , p-val =

Model Results:
estimate      se    zval    pval   ci.lb   ci.ub

profile(gm_mod_ML)

REML v. ML
– In practice we use Restricted Maximum Likelihood (REML)
– ML can produce biased estimates if multiple variance parameters are unknown
– In practice, σ_i² from our studies is an observation; we must estimate σ_i² as well as τ²

The REML Algorithm in a Nutshell
1. Set all parameters as constant, except one
2. Get the ML estimate for that parameter
3. Plug that in
4. Free up and estimate the next parameter
5. Plug that in, etc.
6. Revisit the first parameter with the new values, and rinse and repeat
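The loop above can be sketched on a toy random-effects problem. This is a Python illustration with hypothetical data, using a simple ML-style fixed-point update rather than the actual REML equations: alternate between updating the mean μ (holding τ² fixed) and updating the between-study variance τ² (holding μ fixed), and repeat:

```python
# Toy alternating estimation for a random-effects mean mu and
# between-study variance tau2, given effect sizes y with known
# within-study variances v.  A sketch of the "estimate one
# parameter, plug it in, revisit" loop -- not the real REML math.

y = [0.2, 0.5, 0.8, 0.3, 0.6]       # hypothetical effect sizes
v = [0.04, 0.05, 0.04, 0.06, 0.05]  # hypothetical within-study variances

mu, tau2 = 0.0, 0.0
for _ in range(200):                        # "rinse and repeat"
    w = [1.0 / (vi + tau2) for vi in v]     # inverse-variance weights
    # Step 1: hold tau2 fixed, update mu
    mu = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    # Step 2: hold mu fixed, update tau2 (ML-style fixed point,
    # truncated at zero so the variance stays non-negative)
    num = sum(wi ** 2 * ((yi - mu) ** 2 - vi) for wi, yi, vi in zip(w, y, v))
    tau2 = max(0.0, num / sum(wi ** 2 for wi in w))
```

Because the weights are positive, μ always stays inside the range of the observed effects, and the truncation keeps τ² ≥ 0.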

OK, so, we should use ML for random/mixed models – is that it?

Meta-Analytic Models so far
– Fixed Effects: one grand mean
  – Can be modified by grouping, covariates
– Random Effects: distribution around the grand mean
  – Can be modified by grouping, covariates
– Mixed Models: both predictors and random intercepts

What About This Data?

Study            Experiment in Study   Hedges D   V Hedges D
Ramos & Pinto
Ramos & Pinto
Ramos & Pinto
Ellner & Vadas
Ellner & Vadas
Moria & Melian

Hierarchical Models
– Study-level random effect
– Study-level variation in coefficients
– Covariates at experiment and study level

Hierarchical Models
Random variation within study (j) and between studies (i):
T_ij ~ N(θ_ij, σ_ij²)
θ_ij ~ N(θ_j, σ_j²)
θ_j ~ N(μ, τ²)
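A hedged simulation sketch of that hierarchy (Python, with invented parameter values): draw each study mean θ_j around the grand mean μ, each experiment effect θ_ij around its study's θ_j, and each observed effect T_ij around θ_ij:

```python
import random

random.seed(1)

mu, tau = 0.5, 0.3             # grand mean and between-study SD (hypothetical)
sigma_j, sigma_ij = 0.2, 0.1   # within-study and sampling SDs (hypothetical)
n_studies, n_experiments = 25, 4

T = []   # observed effect sizes T_ij
for j in range(n_studies):
    theta_j = random.gauss(mu, tau)                 # theta_j ~ N(mu, tau^2)
    for i in range(n_experiments):
        theta_ij = random.gauss(theta_j, sigma_j)   # theta_ij ~ N(theta_j, sigma_j^2)
        T.append(random.gauss(theta_ij, sigma_ij))  # T_ij ~ N(theta_ij, sigma_ij^2)

grand_mean = sum(T) / len(T)   # should land near mu
```

With 25 studies the simple average of all T_ij should fall close to μ, though effects within the same study are correlated, which is exactly the clustering the hierarchical model accounts for.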

Study Level Clustering

Hierarchical Partitioning of One Study
Grand mean, study mean, variation due to τ (between studies), and variation due to σ (within study)

Hierarchical Models in Practice
– Different methods of estimation
– We can begin to account for complex non-independence
– But for now, we'll start simple…

Linear Mixed Effects Model
> marine_lme <- lme(LR ~ 1, random = ~1|Study/Entry,
                    data = marine, weights = varFunc(~VLR))

summary(marine_lme)
...
Random effects:
 Formula: ~1 | Study
        (Intercept)
StdDev:

 Formula: ~1 | Entry %in% Study
        (Intercept)  Residual
StdDev:

summary(marine_lme)
...
Variance function:
 Structure: fixed weights
 Formula: ~VLR

Fixed effects: LR ~ 1
             Value  Std.Error  DF  t-value  p-value
(Intercept)

Meta-Analytic Version
> marine_mv <- rma.mv(LR ~ 1, VLR, data = marine,
                      random = list(~1|Study, ~1|Entry))
Allows greater flexibility in variance structure and correlation, but has some under-the-hood differences

summary(marine_mv)

Multivariate Meta-Analysis Model (k = 142; method: REML)

 logLik  Deviance  AIC  BIC  AICc

Variance Components:
           estim  sqrt  nlvls  fixed  factor
sigma^2.1                        no    Entry
sigma^2.2                        no    Study

summary(marine_mv)

Test for Heterogeneity:
Q(df = 141) = , p-val < .0001

Model Results:
estimate      se    zval    pval   ci.lb   ci.ub
                                                  **
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Comparison of lme and rma.mv
– lme and rma.mv use different weighting
– rma.mv assumes the sampling variances are known
– lme assumes you know only the proportional differences in variance