Stat 305 2009 Final Lab.


Q1a: Write down the log likelihood function. First find the density f of xi by taking the derivative of the cdf F, then take the natural log of f.

Density f of xi (from differentiating the cdf F(x) = 1 - exp(-(5x)^alpha), x > 0):
  f(x) = 5*alpha*(5x)^(alpha-1) * exp(-(5x)^alpha)
Natural log f of xi:
  log f(x) = log(alpha) + log(5) + (alpha-1)*log(5x) - (5x)^alpha
Log likelihood of all observed data:
  l(alpha) = n*log(alpha) + n*log(5) + (alpha-1)*sum(log(5*xi)) - 5^alpha*sum(xi^alpha)
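As a quick numerical sanity check (an addition, not part of the original lab), the density above should agree with R's built-in dweibull when the shape is alpha and the scale is 1/5:

```r
# Check (added): the assignment's density
#   f(x) = 5*alpha*(5x)^(alpha-1) * exp(-(5x)^alpha), x > 0
# should match dweibull(x, shape = alpha, scale = 1/5).
alpha <- 2.3
x <- seq(0.01, 1, by = 0.01)
f_ours <- 5 * alpha * (5 * x)^(alpha - 1) * exp(-(5 * x)^alpha)
stopifnot(isTRUE(all.equal(f_ours, dweibull(x, shape = alpha, scale = 1/5))))
```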

Q1b: For n = 200, write R code to generate data from the Weibull distribution with alpha = 2.3, then compute the MLE of alpha using 'optim' in R. Use rweibull(n, shape = a, scale = b) to generate data from the Weibull distribution. Check R's definition of the Weibull distribution with ?rweibull, because the density of the Weibull defined in R may be parameterized differently from the Weibull density in our assignment.

In R, the Weibull distribution with shape parameter a and scale parameter b has density

  f(x) = (a/b) (x/b)^(a-1) exp(-(x/b)^a), x >= 0.

The cdf is F(x) = 1 - exp(-(x/b)^a) for x >= 0, with E(X) = b Gamma(1 + 1/a) and Var(X) = b^2 * (Gamma(1 + 2/a) - (Gamma(1 + 1/a))^2). Comparing with the cdf of the Weibull distribution in our assignment, F(x) = 1 - exp(-(5x)^a) for x > 0, we see that alpha = a = shape and 1/5 = b = scale.
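To confirm this matching of parameters (an added check, not in the original lab), we can compare pweibull against the assignment's cdf and verify the mean formula by simulation:

```r
# Check (added): with a = shape and b = 1/5 = scale,
# pweibull matches F(x) = 1 - exp(-(5x)^a), and E(X) = b*gamma(1 + 1/a).
a <- 2.3
b <- 1/5
x <- seq(0.01, 1, by = 0.01)
stopifnot(isTRUE(all.equal(pweibull(x, shape = a, scale = b),
                           1 - exp(-(5 * x)^a))))
set.seed(1)  # seed added for reproducibility
m <- mean(rweibull(1e5, shape = a, scale = b))
stopifnot(abs(m - b * gamma(1 + 1/a)) < 0.01)
```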

alpha = a = shape, 1/5 = b = scale. Generate the data:

n = 200
alpha = 2.3
beta = 1/5
x = rweibull(n, shape = alpha, scale = beta)

Log likelihood of all observed data (dropping the constant n*log(5), which does not affect the maximizer):
  l(alpha) = n*log(alpha) + (alpha-1)*sum(log(5*x)) - 5^alpha*sum(x^alpha)
Use 'optim' to find the MLE of alpha. Since optim minimizes, the function should return the negative log likelihood:

f = function(y) {
  h = n*log(y) + (y-1)*sum(log(5*x)) - 5^y*sum(x^y)
  -h
}

Then use optim(par, f), where par is an initial value for the parameter to be optimized over.

optim(par, fn, ...)
  par: initial values for the parameters to be optimized over.
  fn: a function to be minimized. Here fn is the negative log likelihood:

f = function(y) {
  h = n*log(y) + (y-1)*sum(log(5*x)) - 5^y*sum(x^y)
  -h
}

Get the MLE from 'optim':

a1 = optim(2, f, method = "BFGS")$par
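Putting the pieces together, here is a self-contained version of the Q1b script (the seed is an addition for reproducibility; it is not in the original lab):

```r
set.seed(305)  # seed added for reproducibility; not in the original lab
n <- 200
alpha <- 2.3
x <- rweibull(n, shape = alpha, scale = 1/5)

# Negative log likelihood of alpha (constant n*log(5) dropped)
f <- function(y) {
  if (y <= 0) return(1e10)  # guard: log likelihood undefined for y <= 0
  -(n * log(y) + (y - 1) * sum(log(5 * x)) - 5^y * sum(x^y))
}

a1 <- optim(2, f, method = "BFGS")$par
a1  # should be close to the true shape 2.3
```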

Q1c and d: Find an approximate 95% C.I. for alpha. By asymptotic normality of the MLE, the approximate 95% C.I. for alpha is
  alphahat +/- z0.975 / sqrt(n * I(alphahat)),
where I(alpha) is the Fisher information. Problem!! We need to evaluate the Fisher information at the MLE, and that is not straightforward here.

Natural log f of X:
  log f(x) = log(alpha) + log(5) + (alpha-1)*log(5x) - (5x)^alpha
The 1st derivative of log f:
  d/dalpha log f(x) = 1/alpha + log(5x) - (5x)^alpha * log(5x)
The 2nd derivative of log f:
  d^2/dalpha^2 log f(x) = -1/alpha^2 - (5x)^alpha * (log(5x))^2
The Fisher information:
  I(alpha) = E[-d^2/dalpha^2 log f(X)] = 1/alpha^2 + E[(5X)^alpha * (log(5X))^2]
The trouble: E[(5X)^alpha * (log(5X))^2] has no closed form, and even if it did, we would still have to plug in the unknown alpha. Instead, use the observed Fisher information at the MLE:
  Ihat(alphahat) = (1/n) * sum_i [ 1/alphahat^2 + (5*xi)^alphahat * (log(5*xi))^2 ]
                 = alphahat^(-2) + 5^alphahat * mean(xi^alphahat * (log(5*xi))^2)

The approximate 95% C.I. for alpha is alphahat +/- z0.975 / sqrt(n * Ihat(alphahat)), where z0.975 is the 97.5th quantile of the standard normal distribution and Ihat(alphahat) is the observed Fisher information.

# MLE of alpha
a1 = optim(2, f, method = "BFGS")$par
# Observed Fisher information
I1 = (a1)^(-2) + 5^(a1)*mean(x^(a1)*(log(5*x))^2)
# 1.96 is (approximately) the 97.5th quantile of the standard normal
sd1 = 1.96*sqrt(1/(n*I1))
# The approximate 95% C.I. for alpha
CIa = c(a1 - sd1, a1 + sd1)
CIa
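As an optional check (an addition, not part of the lab), we can verify by simulation that this interval has roughly 95% coverage: repeat the whole experiment many times and count how often the interval contains the true alpha = 2.3.

```r
# Coverage check (added): repeat Q1b-d many times and record whether
# the approximate 95% C.I. contains the true alpha.
set.seed(1)  # seed added for reproducibility
n <- 200
alpha0 <- 2.3
nrep <- 300
covered <- logical(nrep)
for (r in 1:nrep) {
  x <- rweibull(n, shape = alpha0, scale = 1/5)
  f <- function(y) {
    if (y <= 0) return(1e10)  # guard against invalid parameter values
    -(n * log(y) + (y - 1) * sum(log(5 * x)) - 5^y * sum(x^y))
  }
  a1 <- optim(2, f, method = "BFGS")$par
  I1 <- a1^(-2) + 5^a1 * mean(x^a1 * (log(5 * x))^2)
  sd1 <- 1.96 * sqrt(1 / (n * I1))
  covered[r] <- (a1 - sd1 <= alpha0) && (alpha0 <= a1 + sd1)
}
mean(covered)  # should be close to 0.95
```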

# Ques 2a:
n = 200
x = rweibull(n, 2.3, 1/5)
f2 = function(lambda) {
  h = n*log(2.3) + n*2.3*log(lambda) + (1.3)*sum(log(x)) - lambda^(2.3)*sum(x^(2.3))
  -h
}
# MLE of lambda
lambda1 = optim(6, f2, method = "BFGS")$par

# Ques 2d:
# Observed Fisher information
I2 = 2.3/(lambda1)^2 + 2.3*(2.3-1)*(lambda1)^(2.3-2)*mean(x^2.3)
sd2 = 1.96*((n*I2)^(-1/2))
# The approximate 95% C.I. for lambda
CIlambda = c(lambda1 - sd2, lambda1 + sd2)
CIlambda
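For Q2 the MLE actually has a closed form, which gives a handy check on the optim answer (this check is an addition, not part of the original lab): setting the derivative of the log likelihood to zero, 2.3*n/lambda - 2.3*lambda^1.3*sum(x^2.3) = 0, gives lambdahat = (n / sum(x^2.3))^(1/2.3).

```r
# Closed-form check (added): compare optim's MLE of lambda with the
# analytic solution lambdahat = (n / sum(x^2.3))^(1/2.3).
set.seed(2)  # seed added for reproducibility
n <- 200
x <- rweibull(n, 2.3, 1/5)
f2 <- function(lambda) {
  if (lambda <= 0) return(1e10)  # guard against invalid parameter values
  -(n * log(2.3) + n * 2.3 * log(lambda) + 1.3 * sum(log(x)) -
      lambda^2.3 * sum(x^2.3))
}
lambda_optim  <- optim(6, f2, method = "BFGS")$par
lambda_closed <- (n / sum(x^2.3))^(1/2.3)  # closed-form MLE
c(lambda_optim, lambda_closed)  # the two should agree (both near 5)
```

Since scale = 1/5 here, the true rate is lambda = 5, so both estimates should land near 5.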