Short Subject Presentation: Unbounded Likelihoods with NM6

Presentation transcript:

Short Subject Presentation: Unbounded Likelihoods with NM6. B. Frame, 9/15/2009, Wolverine Pharmacometrics Corporation

Background
Mentioned in Gelman's text "Bayesian Data Analysis". Also known as "variance escape" to some frequentists. Dealt with in at least two NONMEM-based papers.

Conditioning on Certain Random Events Associated with Statistical Variability in PK/PD. Stuart L. Beal. Volume 32, Number 2, April 2005. This paper discusses several interesting topics.
Mixed effects modeling of weight change associated with placebo and pregabalin administration. Bill Frame, Stuart L. Beal, Raymond Miller, Jeannette Barrett and Paula Burger. Volume 34, Number 6, December 2007. This paper discusses a particular method of dealing with unbounded likelihoods.

So what is an unbounded likelihood?
Consider the Gaussian kernel…
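The formula itself was an image on the slide and is not reproduced in the transcript; the standard Gaussian kernel it presumably refers to, for an observation y with mean μ and variance σ², is

f(y; \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left( -\frac{(y-\mu)^2}{2\sigma^2} \right).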

Consider the limit as σ² → 0
There are two cases here… one when y ≠ μ, and one when y = μ. See the homework!
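As a sketch of the two cases (standard calculus, not specific to the presentation): when y ≠ μ the exponential term forces the density to zero, while at y = μ only the normalizing constant remains and it grows without bound:

\lim_{\sigma^2 \to 0} f(y; \mu, \sigma^2) = 0 \quad (y \neq \mu), \qquad f(\mu; \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} \to \infty \ \text{ as } \sigma^2 \to 0.

In a mixed-effects or mixture model this means a residual (or component) variance can collapse onto observations that the model fits exactly, driving the likelihood to infinity.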

NONMEM Symptomatology
-2LL rapidly decreases, then NONMEM crashes without any output or error messages. Diagnosed by Professor Stuart L. Beal as being caused by a group of subjects with all their observations equal to baseline.

Specifics of the Problem
The pregabalin weight change data set. Baseline weight not modeled, but treated as a covariate. The model giving rise to the problem is not the one that was published.

But…
A very similar model is described in my chapter on Finite Mixtures in Ene Ette's Pharmacometrics textbook. A model of this type was stable with the pregabalin data; then at one point variance escape occurred.

Knowing what causes the problem…
It should be easy to cook up an example so you can watch NONMEM crash and burn. After nearly a day of simulating and estimating, I could not re-create the problem with data that I can share.

So, to get the technique out…
I simulated some data that does not crash NONMEM but does drive a sigma to zero. I will show the technique developed by Stuart.


On the surface
There are two types of subjects: those who gain and those that do not. But there are really two types of "stay the samers": ones that do not move at all, and ones that bounce around baseline.
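For concreteness, here is a minimal simulation sketch of data with these three subject types, written in Python rather than NM-TRAN. It is not the presentation's nmdata100.csv: the visit times, sample sizes, random-effect and residual standard deviations, and the output file name are all hypothetical; only the asymptote and rate values echo the $THETA initial estimates shown on the next slide.

import numpy as np
import pandas as pd

rng = np.random.default_rng(2009)
# Post-baseline visit weeks (baseline itself enters only as the BSLN covariate)
times = np.array([2.0, 4.0, 8.0, 12.0])

def simulate_subject(sid, kind, bsln, asym=0.2, rate=0.1, sd=1.0):
    """One subject's records; kind is 'gainer', 'exact_stayer', or 'noisy_stayer'."""
    if kind == "gainer":
        # Exponential-asymptote weight gain, mirroring the c1.txt $PRED block:
        # Y = BSLN*EXP(AS*(1-EXP(-K*TIME))) + EPS(1), with AS = THETA(1)*EXP(ETA(1))
        eta = rng.normal(0.0, 0.3)
        pred = bsln * np.exp(asym * np.exp(eta) * (1.0 - np.exp(-rate * times)))
        wt = pred + rng.normal(0.0, sd, size=times.size)
    elif kind == "exact_stayer":
        # Every observation equals baseline exactly -- the subjects that let a
        # residual variance collapse and make the likelihood unbounded
        wt = np.full(times.size, bsln)
    else:
        # Noisy stayers bounce around baseline without any systematic gain
        wt = bsln + rng.normal(0.0, sd, size=times.size)
    wt = np.round(wt, 1)  # weights recorded to the nearest 0.1 kg
    return pd.DataFrame({"ID": sid, "TIME": times, "BSLN": np.round(bsln, 1), "DV": wt})

frames = []
sid = 0
for kind, n in [("gainer", 40), ("exact_stayer", 30), ("noisy_stayer", 30)]:
    for _ in range(n):
        sid += 1
        frames.append(simulate_subject(sid, kind, bsln=rng.normal(75.0, 10.0)))
pd.concat(frames).to_csv("sim_weights.csv", index=False)  # hypothetical file name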

Model/Data: c1.txt / nmdata100.csv

$PRED
AS=THETA(1)*EXP(ETA(1))
K=THETA(2)
Y=BSLN*EXP(AS*(1-EXP(-K*TIME)))+EPS(1)
$THETA
(0,0.2) ;1 ASYMPTOTE
(0,0.1) ;2 RATE

Subjects are modeled as gainers, or possibly as stayers if ETA(1) << 0.
-2LL = 638.753
$COV = YES
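Written out as an equation, this is just a transcription of the $PRED block above (subject i, observation j; the symbols are mine, not the slide's):

y_{ij} = b_i \exp\!\left( A_i \left( 1 - e^{-k t_{ij}} \right) \right) + \varepsilon_{ij}, \qquad A_i = \theta_1 e^{\eta_i}, \quad k = \theta_2,

where b_i is the baseline weight BSLN, η_i is the subject-level random effect on the log asymptote, and ε_ij is the residual EPS(1).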

Model/Data: c2.txt / nmdata100.csv

$PRED
AS=THETA(1)*EXP(ETA(1))
K=THETA(2)
IF (MIXNUM.EQ.1) THEN
Y=BSLN*EXP(AS*(1-EXP(-K*TIME)))+EPS(1) ;GAINERS GO HERE
ELSE
Y=BSLN+EPS(2) ;BOTH TYPES OF STAYERS GO HERE
ENDIF

0PARAMETER ESTIMATE IS NEAR ITS BOUNDARY
THIS MUST BE ADDRESSED BEFORE THE COVARIANCE STEP CAN BE IMPLEMENTED

-2LL = -1144.884
$COV = NO
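The $MIXTURE record that defines the mixing proportion is not part of the excerpt shown on the slide. Under the usual two-component NONMEM mixture formulation, the marginal contribution of subject i would be (my notation, stated here only to make the mechanism explicit):

L_i = p \, L_i^{\text{gainer}} + (1 - p) \, L_i^{\text{stayer}},

where p is the estimated probability of membership in the gainer subpopulation, the gainer term uses the exponential-asymptote model with EPS(1), and the stayer term uses Y = BSLN + EPS(2). Because the exact stayers are fitted perfectly by the second component, nothing keeps the variance of EPS(2) away from zero, which is the behavior reported on the next slide.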

SIGMA - COV MATRIX FOR RANDOM EFFECTS - EPSILONS

           EPS1       EPS2
EPS1 +   9.68E-01
EPS2 +   0.00E+00   1.00E-05

Here is the problem: a sigma has gone to zero.

Stu's Solution
Weights were recorded to the nearest 0.1 kg. Initial observations equal to baseline are discarded. The likelihood for the first non-baseline observation is adjusted to reflect that it cannot lie in [baseline - 0.05, baseline + 0.05).

That is…
For an arbitrary subject with baseline b, random-effects vector η, and initial observation x, let L0(x) be the un-adjusted likelihood of the observation under the model. Then the adjusted likelihood for the first non-baseline observation is L0(x)/(1 - p0(b)), where p0(b) is the probability that x is in [b - 0.05, b + 0.05).
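As a sketch of what that adjustment looks like under a Gaussian residual model (the explicit form is not given on the slide; m and σ denote the model's mean and residual standard deviation for that observation, conditional on η):

p_0(b) = \Phi\!\left( \frac{b + 0.05 - m}{\sigma} \right) - \Phi\!\left( \frac{b - 0.05 - m}{\sigma} \right), \qquad L_{\text{adj}}(x) = \frac{L_0(x)}{1 - p_0(b)},

where Φ is the standard normal CDF. In NM-TRAN such a truncation adjustment would typically be coded with the PHI function and a user-supplied likelihood (F_FLAG=1), but the control stream actually used in the pregabalin paper is not shown here.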

Do your homework!