Chapter 4. Supplementary Questions

Question 1. Consider the systematic sample estimator based on the trapezoidal rule, $\hat\theta_{SS} = \frac{1}{n}\sum_{i=1}^{n} f\!\left(\frac{i-U}{n}\right)$ with $U \sim U[0,1]$, for the integral $\theta = \int_0^1 f(x)\,dx$. Discuss the bias and variance of this estimator. In the case studied numerically below, how does it compare with other estimators, such as crude Monte Carlo and antithetic random numbers, requiring $n$ function evaluations? Are there any disadvantages to its use?

- Bias: the estimator is unbiased, since each evaluation point $(i-U)/n$ is uniformly distributed on its stratum $[(i-1)/n,\, i/n]$, so $E[\hat\theta_{SS}] = \theta$.
- Variance: $\mathrm{Var}(\hat\theta_{SS})$ cannot be calculated by the sample variance of the summands, because the $f((i-U)/n)$'s all share the same $U$ and are therefore not independent.

Results. The actual value is $1/3 \approx 0.3333$. The difference (SS) is 0.0050; the difference (MC) is 0.0391; the difference (ARN) is 2.6667e-004, almost 0.

    N     Crude MC   S.S.     Antithetic
    50    0.3217     0.3283   0.3336
    100   0.3865     0.3300   —

Disadvantage: one must be careful when calculating the variance of the estimator, since the summands are not independent.
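A minimal MATLAB sketch of the comparison, assuming for illustration the integrand $f(x) = x^2$ (whose integral over $[0,1]$ is $1/3$, consistent with the tabulated estimates near 0.333) and the systematic estimator defined above:

    % Compare three estimators of int_0^1 f(x) dx, n evaluations each.
    f = @(x) x.^2;                    % assumed integrand, true value 1/3
    n = 100;
    truth = 1/3;

    % Crude Monte Carlo: n independent uniforms
    mc = mean(f(rand(n, 1)));

    % Systematic sampling: a single U shifts an equally spaced grid
    U  = rand;
    ss = mean(f(((1:n)' - U) / n));

    % Antithetic random numbers: n/2 pairs (v, 1-v)
    v   = rand(n/2, 1);
    arn = mean([f(v); f(1 - v)]);

    fprintf('MC %.4f  SS %.4f  ARN %.4f  (true %.4f)\n', mc, ss, arn, truth);

Note that estimating $\mathrm{Var}(\hat\theta_{SS})$ requires repeating the whole procedure with independent draws of $U$; within one replication the summands are dependent.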

Question 2. For any random variables $X$, $Y$ with joint distribution function $F_{X,Y}$ and marginal distribution functions $F_X$, $F_Y$, prove that $\max(F_X(x) + F_Y(y) - 1,\, 0) \le F_{X,Y}(x,y) \le \min(F_X(x),\, F_Y(y))$ for all $x$, $y$.

Proof. Case 1: $X$ and $Y$ are independent. Then $F_{X,Y}(x,y) = F_X(x)F_Y(y)$, and both bounds are immediate: $F_X F_Y \le \min(F_X, F_Y)$ since each factor is at most 1, and $F_X F_Y \ge F_X + F_Y - 1$ since $(1 - F_X)(1 - F_Y) \ge 0$. Case 2: $X$ and $Y$ are not independent. Since $\{X \le x,\, Y \le y\}$ is contained in both $\{X \le x\}$ and $\{Y \le y\}$, we have $F_{X,Y}(x,y) \le \min(F_X(x), F_Y(y))$. For the lower bound, inclusion-exclusion gives $F_{X,Y}(x,y) = F_X(x) + F_Y(y) - P(X \le x \text{ or } Y \le y) \ge F_X(x) + F_Y(y) - 1$, and combining this with $F_{X,Y}(x,y) \ge 0$ completes the proof. (The Case 2 argument in fact covers Case 1 as well.)
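As a quick numerical sanity check of both bounds (an illustrative sketch; the correlated normal pair and the test point (x0, y0) are arbitrary choices for demonstration, not part of the exercise):

    % Empirically check the Frechet-Hoeffding bounds at one point (x0, y0)
    % using a correlated bivariate normal pair built from independent normals.
    m   = 1e6;  rho = 0.7;
    z1  = randn(m, 1);  z2 = randn(m, 1);
    x   = z1;  y = rho*z1 + sqrt(1 - rho^2)*z2;   % Corr(x, y) = rho
    x0  = 0.5;  y0 = -0.3;
    Fxy = mean(x <= x0 & y <= y0);                % joint CDF estimate
    Fx  = mean(x <= x0);  Fy = mean(y <= y0);     % marginal CDF estimates
    fprintf('%.4f <= %.4f <= %.4f\n', max(Fx + Fy - 1, 0), Fxy, min(Fx, Fy));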

Question 4. Suppose we wish to generate the partial sum $S_n = X_1 + \cdots + X_n$ of independent identically distributed summands, where (a) $S_n^{(a)}$ is generated with $X_i$ having a $N(0, \sigma^2)$ distribution, and (b) $S_n^{(b)}$ is generated with $X_i$ having a Student $t$ distribution with 5 degrees of freedom. What is the maximum possible correlation we can achieve between $S_n^{(a)}$ and $S_n^{(b)}$? What is the minimum correlation?

Theorem 40 (maximum/minimum covariance). Suppose $f$ and $g$ are both non-decreasing (or both non-increasing) functions. Subject to the constraint that $X$, $Y$ have cumulative distribution functions $F_X$, $F_Y$ respectively, the covariance $\mathrm{Cov}(f(X), g(Y))$ is maximized when $X = F_X^{-1}(U)$ and $Y = F_Y^{-1}(U)$, and is minimized when $X = F_X^{-1}(U)$ and $Y = F_Y^{-1}(1 - U)$, where $U \sim U[0,1]$.
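One standard way to see why (sketched here as background; the slide does not spell it out): Hoeffding's covariance identity, $\mathrm{Cov}(X, Y) = \int\!\!\int \left[ F_{X,Y}(x,y) - F_X(x)\,F_Y(y) \right] dx\, dy$, shows that the covariance is largest when the joint distribution function is as large as possible pointwise, and smallest when it is as small as possible. By the Fréchet bounds of Question 2, the pointwise maximum $\min(F_X, F_Y)$ is attained by the common-uniform coupling $X = F_X^{-1}(U)$, $Y = F_Y^{-1}(U)$, and the pointwise minimum $\max(F_X + F_Y - 1, 0)$ by the antithetic coupling $Y = F_Y^{-1}(1 - U)$.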

Solution. By Theorem 40, the maximum correlation is obtained using common random numbers (CRN), and the minimum correlation is obtained using antithetic random numbers (ARN). Let $F_X = \Phi(\cdot/\sigma)$ be the $N(0, \sigma^2)$ distribution function and $F_Y$ the Student $t_5$ distribution function. So, for CRN generate $S_n^{(a)} = \sum_i F_X^{-1}(U_i)$ and $S_n^{(b)} = \sum_i F_Y^{-1}(U_i)$ from the same uniforms $U_i$; for ARN replace $F_X^{-1}(U_i)$ by $F_X^{-1}(1 - U_i)$.

Algorithm (sigma = 1, n = 10, m = 100000):
1. Generate an n-by-m matrix of uniforms, u = rand(n, m).
2. For CRN, x = norminv(u, 0, 1); for ARN, x = norminv(1-u, 0, 1). In both cases, y = tinv(u, 5).
3. Compute the column sums of x and y.
4. Compute the covariance matrix of the two sums.
5. Compute Corr(sum(x), sum(y)).
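A runnable MATLAB sketch of these steps (norminv, tinv, and corr require the Statistics and Machine Learning Toolbox); the results follow below:

    % Estimate the max (CRN) and min (ARN) correlation between
    % S_a = sum of N(0,1) summands and S_b = sum of Student-t(5) summands.
    sigma = 1;  n = 10;  m = 100000;  dof = 5;

    u = rand(n, m);                     % one column of uniforms per replication

    x_crn = norminv(u, 0, sigma);       % CRN: same uniforms as y
    x_arn = norminv(1 - u, 0, sigma);   % ARN: antithetic uniforms
    y     = tinv(u, dof);               % Student-t(5) summands

    Sa_crn = sum(x_crn, 1);  Sa_arn = sum(x_arn, 1);  Sb = sum(y, 1);

    fprintf('max corr = %.4f, min corr = %.4f\n', ...
            corr(Sa_crn', Sb'), corr(Sa_arn', Sb'));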

Results. The maximum covariance is 3.8376e+004 and the minimum covariance is -1.5634e+005. Thus, the maximum correlation is 0.9271 and the minimum correlation is -0.9928. When the Student $t$ has dof = 2, the maximum is 0.7880 and the minimum is -0.5521.