Bayesian Approach for Mixed Models. Bioep740 Final Paper Presentation by Qiang Ling.

Presentation transcript:

Bayesian Approach for Mixed Models. Bioep740 Final Paper Presentation by Qiang Ling

Introduction to the Bayesian Approach
- Latent variables
- Prior distribution
- Posterior distribution

What Is Meant by Latent Variables?
Population parameters can be conceived as varying over time or over other variables: for example, human body temperature varies, and the number of deer ticks in the grass may differ across the days of the year [1].

What Is a Prior Distribution?
A prior distribution represents our belief about the distribution of the model parameters before the experiment is carried out. For example, we know that human body temperature is positive; that constraint is a prior. We might even specify that human body temperature is uniformly distributed on the interval [35, 40].

Two Kinds of Prior Distribution
- Noninformative (contains no prior information about the parameters): 1. Jeffreys prior; 2. Flat prior
- Informative (based on prior knowledge of the parameters, such as human body temperature): for example, a normal distribution

What Is a Posterior Distribution?
- The updated parameter distribution
- Used to estimate the parameters of interest
- Generated by combining the prior distribution with the likelihood function
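As a concrete illustration of this combination (not from the slides): with a normal likelihood of known variance and a normal prior on the mean, the posterior is available in closed form. A minimal Python sketch with made-up body-temperature numbers:

```python
# Conjugate normal-normal update for a mean (illustrative numbers).
prior_mean, prior_var = 37.0, 1.0      # hypothetical prior belief (deg C)

data = [36.8, 37.1, 36.9, 37.0]        # observations
obs_var = 0.25                         # known measurement variance
n = len(data)
xbar = sum(data) / n

# Posterior precision = prior precision + data precision.
post_var = 1.0 / (1.0 / prior_var + n / obs_var)
# Posterior mean = precision-weighted average of prior mean and sample mean.
post_mean = post_var * (prior_mean / prior_var + n * xbar / obs_var)

print(round(post_mean, 3), round(post_var, 4))  # -> 36.953 0.0588
```

The posterior mean is pulled from the prior mean toward the sample mean, and the posterior variance is smaller than both the prior variance and the sampling variance of the mean, which is the behavior noted later in Discussion (2).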

What Is the Difference between the Classical and Bayesian Methods?
- Classical: the distribution of the data, not of the parameters, is used to estimate the parameters; based on ML and REML, only the parameter values that maximize the likelihood, together with their standard errors, are obtained
- Bayesian: the posterior density is fully evaluated, based on the likelihood (ML) and the prior distribution

Model Building
PSU (primary sampling units) and SSU (secondary sampling units); both N and M are assumed infinite.

Let y_ij = μ + c_i + e_ij, where μ is the overall mean, c_i ~ N(0, σ_c²) is the random effect of center i, and e_ij ~ N(0, σ_e²) is the residual error.


The first DBP measurement in the data is used to estimate the overall mean of DBP, and those estimates are then used to generate a data set of 3 trials, each containing 30 clinical centers with 10 repeated measures.
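This simulation design (3 trials, each with 30 centers and 10 repeated measures, sharing an overall mean plus a random center effect and residual error) can be sketched in Python; the variance values mirror the SAS macro variables that appear later in the slides, and the additive model is an assumption read off that code:

```python
import random

random.seed(1234)

TMEAN, NCENTER, NMEASURE, NREP = 90.0, 30, 10, 3
WITHIN_V = 3.49    # between-center variance component
ERR_V = 81.08      # residual error variance at a time point

rows = []
for trial in range(1, NREP + 1):
    for center in range(1, NCENTER + 1):
        ec = random.gauss(0.0, WITHIN_V ** 0.5)    # random center effect
        for rep in range(1, NMEASURE + 1):
            err = random.gauss(0.0, ERR_V ** 0.5)  # residual error
            rows.append((trial, center, rep, TMEAN + ec + err))

print(len(rows))  # 3 trials * 30 centers * 10 measures = 900 rows
```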

Mathematical Aspects of the Bayesian Approach

Prior Distribution
Let f(y | θ) be the joint density function of the observed data conditional on the model parameters θ, and let p(θ) be the prior density over all possible values of θ.

By Bayes' theorem, the posterior density is

  p(θ | y) = f(y | θ) p(θ) / m(y),  where  m(y) = ∫ f(y | θ) p(θ) dθ,

so the posterior distribution integrates to one.
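The normalizing constant can be checked numerically: evaluating likelihood times prior on a grid and dividing by a grid approximation of m(y) yields a posterior that integrates to one. A small sketch with toy data (not from the slides), using a flat prior:

```python
import math

data = [1.2, 0.8, 1.1]   # toy observations
sigma = 1.0              # known standard deviation

def likelihood(theta):
    # Joint density f(y | theta) of the data, up to a constant factor
    return math.prod(
        math.exp(-0.5 * ((y - theta) / sigma) ** 2) for y in data
    )

dx = 0.01
grid = [i * dx for i in range(-500, 501)]
unnorm = [likelihood(t) * 1.0 for t in grid]   # flat prior: p(theta) = 1
m_y = sum(unnorm) * dx                         # approximates the integral
posterior = [u / m_y for u in unnorm]

print(round(sum(p * dx for p in posterior), 6))  # -> 1.0
```

With a flat prior the posterior mode coincides with the maximum-likelihood estimate, here the sample mean.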

Flat Prior
Since p(θ) ∝ 1, the posterior is proportional to the likelihood: p(θ | y) ∝ f(y | θ).

Jeffreys Prior
Define the Fisher information I(θ) = −E[∂² log f(y | θ) / ∂θ²]; then the Jeffreys prior is p(θ) ∝ |I(θ)|^(1/2).
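For a concrete case not covered on the slides: for a Bernoulli likelihood the Fisher information is I(p) = 1/(p(1-p)), so the Jeffreys prior is proportional to p^(-1/2)(1-p)^(-1/2), a Beta(1/2, 1/2) density. A sketch that checks the closed form against the defining expectation of the squared score:

```python
def fisher_info_exact(p):
    # Closed form for Bernoulli(p)
    return 1.0 / (p * (1.0 - p))

def fisher_info_by_expectation(p):
    # I(p) = E[(d/dp log f(y|p))^2], where the score
    # y/p - (1-y)/(1-p) takes two values over y in {0, 1}
    score1 = 1.0 / p            # score at y = 1
    score0 = -1.0 / (1.0 - p)   # score at y = 0
    return p * score1 ** 2 + (1.0 - p) * score0 ** 2

p = 0.3
print(fisher_info_exact(p))            # 1 / 0.21
print(fisher_info_by_expectation(p))   # matches the closed form
jeffreys_unnorm = fisher_info_exact(p) ** 0.5   # Jeffreys prior, unnormalized
```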

Normal Prior
Prior joint density function: each parameter is assigned a normal density with a specified prior mean and variance.

Posterior Density
Define the posterior density as proportional to the product of the likelihood and the prior; then p(θ | y) ∝ f(y | θ) p(θ).

RESULTS

Estimation of the mean and variance of the DBP data

PROC MIXED DATA=Dbp2;
  CLASS center;
  MODEL dbp = / s;
  RANDOM center / s;
RUN;

Estimation of the DBP data

Covariance Parameter Estimates
  Cov Parm    Estimate
  center
  Residual

Solution for Fixed Effects
  Effect      Estimate   Std Error   DF   t Value   Pr > |t|
  Intercept                                         <.0001

DATA d1;
  FILE PRINT;            * Route output to the Log window;
  %LET tmean    = 90;    * Overall mean of DBP;
  %LET ncenter  = 30;    * Number of centers;
  %LET nmeasure = 10;    * Number of repeated time measures;
  %LET err_v    = 81.08; * Residual error variance at a time point;
  %LET within_v = 3.49;  * Between-center variance;
  %LET nrep     = 3;     * Number of replications of the simulation;
  * Loops begin here;
  DO trial = 1 TO &nrep;
    DO center = 1 TO &ncenter;
      m  = &tmean;
      ec = SQRT(&within_v) * RANNOR(1234);
      DO rep = 1 TO &nmeasure;
        err = SQRT(&err_v) * RANNOR(3456);
        SBP = m + ec + err;
        OUTPUT;
      END;
    END;
  END;
  DROP m ec err;
RUN;

SAS program for the Bayesian analysis

PROC MIXED DATA=d1;
  BY trial;
  CLASS center;
  MODEL SBP = / s;
  RANDOM center / s;
  PRIOR FLAT / NSAMPLE=5500 SEED=52201 OUT=bayes4;
RUN;

This uses a FLAT prior. A DATA= option can be supplied on the PRIOR statement to input a custom prior density, or the keyword can be omitted to use the default Jeffreys prior.

Posterior Sampling Information
  Prior         Flat
  Algorithm     Independence Chain
  Sample Size   5500
  Seed

Base Densities
  Type   Parm1   Parm2
  ig
  ig
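The "Independence Chain" algorithm reported here is the Metropolis-Hastings variant in which candidates are drawn from a fixed proposal density rather than from around the current state. A minimal sketch for a toy target (a normal posterior for a mean under a flat prior); this illustrates the algorithm, not the PROC MIXED implementation:

```python
import math
import random

random.seed(52201)   # seed borrowed from the PRIOR statement above

xbar, se = 90.0, 0.5   # toy posterior: theta | y ~ N(xbar, se^2)

def log_target(theta):
    return -0.5 * ((theta - xbar) / se) ** 2

prop_mean, prop_sd = 90.0, 2.0   # fixed (independence) proposal

def log_proposal(theta):
    return -0.5 * ((theta - prop_mean) / prop_sd) ** 2

theta = prop_mean
samples = []
for _ in range(5500):            # NSAMPLE=5500, as in the slides
    cand = random.gauss(prop_mean, prop_sd)
    # Independence-chain acceptance ratio:
    # target(cand) * proposal(theta) / (target(theta) * proposal(cand))
    log_alpha = (log_target(cand) - log_target(theta)
                 + log_proposal(theta) - log_proposal(cand))
    if math.log(random.random()) < log_alpha:
        theta = cand
    samples.append(theta)

post_mean = sum(samples) / len(samples)
print(len(samples))
```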

Input data for the normal prior

DATA nor_dist;
  INPUT TYPE $ parm1 parm2;
  CARDS;
n
n
;


Discussion (1)
- The Bayesian approach works well with mixed-model data
- The Bayesian estimate of the variance is biased, but has smaller variance than the estimate obtained using ML/REML

Discussion (2)
- If a normal prior is used properly, the posterior variance should be smaller than both the prior variance and the variance of the likelihood
- The Bayesian approach for a mixed model with two random effects requires much longer computing time, depending on the speed of the computer