More Monte Carlo Methods


More Monte Carlo Methods and Variance Reduction Techniques (for random quadrature)

Importance Sampling. Write $I = \int_a^b g(x)\,dx$ as

$$I = \int_a^b \frac{g(x)}{f(x)}\,f(x)\,dx = E\!\left[\frac{g(X)}{f(X)}\right]$$

for some pdf $f$ with support $[a,b]$, where $X \sim f$. Then the integral is an expectation, which we can estimate by a sample mean.

Isn't this accept-reject sampling? (Answer: No.) Importance sampling uses all of the draws from the distribution with pdf f and weights them in an appropriate way. Also, we do not need to find a function that is always "above" the integrand. (Not to mention that accept-reject sampling was not even used, so far, to evaluate an integral; it was used to draw random variables.)

We draw n values X1, X2, …, Xn from the distribution with pdf f and estimate I with

$$\hat{I} = \frac{1}{n}\sum_{i=1}^{n}\frac{g(X_i)}{f(X_i)}.$$

Clearly $\hat{I}$ is an unbiased estimator of I:

$$E[\hat{I}] = \frac{1}{n}\sum_{i=1}^{n} E\!\left[\frac{g(X_i)}{f(X_i)}\right] = E\!\left[\frac{g(X)}{f(X)}\right] = \int_a^b \frac{g(x)}{f(x)}\,f(x)\,dx = I.$$

The variance of $\hat{I}$:

$$\mathrm{Var}(\hat{I}) = \frac{1}{n}\,\mathrm{Var}\!\left(\frac{g(X)}{f(X)}\right) = \frac{1}{n}\left[\int_a^b \frac{g(x)^2}{f(x)}\,dx - I^2\right].$$

The best that we could do is to have $f(x) = g(x)/I$ (perfect information!). Then $g(X)/f(X) = I$ with probability one, so $\mathrm{Var}(\hat{I}) = 0$.

We could estimate this variance by rewriting it:

$$\mathrm{Var}(\hat{I}) = \frac{1}{n}\left(E\!\left[\left(\frac{g(X)}{f(X)}\right)^2\right] - I^2\right).$$

Then

$$\widehat{\mathrm{Var}}(\hat{I}) = \frac{1}{n}\left(\frac{1}{n}\sum_{i=1}^{n}\left[\frac{g(X_i)}{f(X_i)}\right]^2 - \hat{I}^2\right).$$

(Note: This estimator is biased, but okay for large samples!)

Recall that for a random sample Y1, Y2, …, Yn from a distribution with variance $\sigma^2$, an estimate for $\sigma^2$ is given by

$$\hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n}(Y_i - \bar{Y})^2,$$

but most people instead learn about the estimator

$$S^2 = \frac{1}{n-1}\sum_{i=1}^{n}(Y_i - \bar{Y})^2$$

because it is unbiased for $\sigma^2$.
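The relationship between the divide-by-n and divide-by-(n-1) estimators can be checked numerically; a minimal NumPy sketch (seed and sample size are arbitrary choices for illustration):

```python
import numpy as np

# Compare the biased (divide by n) and unbiased (divide by n-1)
# variance estimators on a small sample.
rng = np.random.default_rng(0)   # hypothetical seed
y = rng.exponential(scale=1.0, size=10)

n = len(y)
ybar = y.mean()

var_biased = np.sum((y - ybar) ** 2) / n          # sigma-hat^2
var_unbiased = np.sum((y - ybar) ** 2) / (n - 1)  # S^2

# NumPy exposes both through the ddof ("delta degrees of freedom") argument.
assert np.isclose(var_biased, np.var(y, ddof=0))
assert np.isclose(var_unbiased, np.var(y, ddof=1))

# The two agree up to the factor n/(n-1), which goes to 1 as n grows.
assert np.isclose(var_unbiased, var_biased * n / (n - 1))
```

For large n the factor n/(n-1) is negligible, which is why the biased estimator is "okay for large samples."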

The estimator we came up with was

$$\widehat{\mathrm{Var}}(\hat{I}) = \frac{1}{n}\left(\frac{1}{n}\sum_{i=1}^{n} Y_i^2 - \bar{Y}^2\right), \qquad \text{where } Y_i = \frac{g(X_i)}{f(X_i)}.$$

This is simply the first (biased) estimator from the last slide. To show this, let $\hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n}(Y_i - \bar{Y})^2$ be the estimated variance for a single $Y_i$. Then, expanding the square, note that $\hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n} Y_i^2 - \bar{Y}^2$, so $\widehat{\mathrm{Var}}(\hat{I}) = \hat{\sigma}^2/n$.

On the other hand, using $S^2$ computed from the $Y_i$, we can come up with the unbiased estimator $S^2/n$ for $\mathrm{Var}(\hat{I})$.

Example: Importance sampling with the exp(rate=1) density $f(x) = e^{-x}$, $x > 0$, where X ~ exp(rate=1).

Simulation results (n = 100,000; true value I = 0.8862269255):

Sim   Estimate    Est. variance     95% CI
1     0.8855383   1.97115×10^-6     (0.8827865, 0.8882901)
2     0.8841449   1.974939×10^-6    (0.8813905, 0.8868993)
3     0.8856674   1.969067×10^-6    (0.8829171, 0.8884177)
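The integrand itself is not reproduced above, but the quoted true value 0.8862269255 equals $\sqrt{\pi}/2 = \int_0^\infty e^{-x^2}\,dx$, so the sketch below assumes that integrand; the seed and variable names are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)   # hypothetical seed
n = 100_000

def g(x):
    # Assumed integrand: exp(-x^2); the quoted target 0.8862269255
    # equals sqrt(pi)/2 = integral of exp(-x^2) over [0, infinity).
    return np.exp(-x ** 2)

def f(x):
    # Importance density: the exp(rate=1) pdf on [0, infinity).
    return np.exp(-x)

x = rng.exponential(scale=1.0, size=n)   # draws from f
w = g(x) / f(x)                          # weights g(X_i)/f(X_i)

i_hat = w.mean()                         # estimate of I
var_hat = w.var(ddof=0) / n              # the biased variance estimate
half = 1.96 * np.sqrt(var_hat)           # 95% CI half-width

print(f"{i_hat:.7f}  ({i_hat - half:.7f}, {i_hat + half:.7f})")
```

A run of this sketch produces estimates and interval widths on the same scale as the table above.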

Example: Importance sampling with density where X~ .

Simulation results (n = 100,000; true value I = 0.8862269255):

Sim   Estimate    Est. variance     95% CI
1     0.8773822   4.227073×10^-5    (0.8646391, 0.8901253)
2     0.8938431   6.225086×10^-5    (0.8783789, 0.9093073)
3     0.8846356   5.925374×10^-5    (0.8695482, 0.8997230)

Note: The inaccuracy associated with a poor choice of pdf f will become much more pronounced with a smaller sample size!

Antithetic Monte Carlo (a variance reduction technique). Example: Consider the integral $I = \int_0^\infty x\,e^{-x}\,dx = E[X]$ for X ~ exp(rate=1). (Of course we know that the answer is 1.) The simple Monte Carlo approach is to estimate the expectation by a sample mean.

So, we simulate X1, X2, …, Xn iid exp(rate=1) rv's and we compute $\hat{I} = \bar{X}$.

Sim   Estimate    Est. variance    95% CI
1     1.040535    0.001054932      (0.9768748, 1.104195)
2     0.9711500   0.0009132939     (0.9119173, 1.030383)
3     0.9959556   0.001031307      (0.9330122, 1.058899)

(n = 1,000)
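Assuming the integral is $\int_0^\infty x e^{-x}\,dx = E[X]$ for X ~ exp(rate=1), which is consistent with the quoted answer of 1 and the estimated variances near 1/n in the table, a simulation run can be sketched as follows (seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)   # hypothetical seed
n = 1_000

# Assumed integral: I = E[X] = 1 for X ~ exp(rate=1).
x = rng.exponential(scale=1.0, size=n)

i_hat = x.mean()                 # sample-mean estimator of I
var_hat = x.var(ddof=1) / n      # estimated Var(I-hat); about 1/n here
half = 1.96 * np.sqrt(var_hat)   # 95% CI half-width

print(f"{i_hat:.6f}  ({i_hat - half:.6f}, {i_hat + half:.6f})")
```

Since Var(X) = 1 for exp(rate=1), the estimated variance of the mean comes out near 0.001 with n = 1,000, matching the table.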

The antithetic approach is based on the fact that

$$\mathrm{Var}\!\left(\frac{X_1 + X_2}{2}\right) = \frac{1}{4}\left[\mathrm{Var}(X_1) + \mathrm{Var}(X_2) + 2\,\mathrm{Cov}(X_1, X_2)\right].$$

For X1 and X2 independent, the covariance term vanishes. So, if we can simulate X1 and X2 so that they are negatively correlated, we can get Var(X1 + X2) lower than it would be if they were independent!

In our example, $\hat{I}$ can be re-written as

$$\hat{I} = \frac{1}{n}\sum_{i=1}^{n/2}\left(X_i + X_i'\right),$$

where Xi and Xi' are negatively correlated exp(rate=1) rv's.

How do we draw two negatively correlated exponentials? We can draw an exponential rate-$\lambda$ rv X by the inverse CDF: $X = F^{-1}(U) = -\ln(1-U)/\lambda$, where U ~ unif(0,1). But U ~ unif(0,1) implies that (1-U) ~ unif(0,1), so $X' = F^{-1}(1-U) = -\ln(U)/\lambda$ is also exponential with rate $\lambda$. These X's are negatively correlated by monotonicity of F (and hence of $F^{-1}$).
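The negative correlation of the antithetic pair is easy to verify empirically; a minimal sketch (seed and sample size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)   # hypothetical seed
u = rng.uniform(size=100_000)

rate = 1.0
# Inverse-CDF draws for exp(rate): F^{-1}(u) = -log(1 - u) / rate.
x = -np.log(1.0 - u) / rate      # X  = F^{-1}(U)
xp = -np.log(u) / rate           # X' = F^{-1}(1 - U), also exp(rate)

# Both margins are exp(rate=1), but the pair is negatively correlated;
# the theoretical correlation is 1 - pi^2/6, about -0.645.
corr = np.corrcoef(x, xp)[0, 1]
print(round(x.mean(), 3), round(xp.mean(), 3), round(corr, 3))
```

Both sample means sit near 1 (the exp(rate=1) mean) while the sample correlation is strongly negative, which is exactly what the antithetic estimator exploits.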

Simulation: Antithetic with 500 uniforms (so n = 1,000 exponentials):

Sim   Estimate    Est. variance    95% CI
1     0.9998595   0.0003355855     (0.9639543, 1.035765)
2     1.0014030   0.0003887170     (0.9627598, 1.040046)
3     1.0208540   0.0003841881     (0.9824366, 1.059271)

(We have cut the CI lengths almost in half!)

In general, for $I = \int_a^b g(x)\,dx$ estimated by $\hat{I} = \frac{1}{n}\sum_{i=1}^{n} g(X_i)/f(X_i)$, we need g(x)/f(x) monotone so that negatively correlated X's produce negatively correlated g(X)/f(X)'s.

Control Variate Sampling (another variance reduction technique). Sometimes we can find a function h that approximates the integrand g and that we can integrate analytically. (Try a Taylor series!) Then we can write

$$I = \int_a^b g(x)\,dx = \int_a^b h(x)\,dx + \int_a^b \left[g(x) - h(x)\right]dx,$$

solve the first integral exactly, and then Monte Carlo the remaining integral in the second term.
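The slide's own integral is not shown, so the sketch below uses a hypothetical example: $I = \int_0^1 e^x\,dx = e - 1$, with the second-order Taylor piece $h(x) = 1 + x + x^2/2$ integrated analytically and only the remainder estimated by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(4)   # hypothetical seed
n = 10_000

# Hypothetical illustration: I = integral of e^x over [0, 1] = e - 1,
# with Taylor control piece h(x) = 1 + x + x^2/2.
def g(x):
    return np.exp(x)

def h(x):
    return 1.0 + x + x ** 2 / 2.0

int_h = 1.0 + 1.0 / 2.0 + 1.0 / 6.0   # integral of h over [0, 1], analytic

u = rng.uniform(size=n)
naive = g(u).mean()                    # plain Monte Carlo for I
cv = int_h + (g(u) - h(u)).mean()      # Monte Carlo only the remainder

# The residual g - h has a much smaller variance than g itself.
print(round(g(u).var(), 5), round((g(u) - h(u)).var(), 8))
```

Because most of the integral is handled exactly, the Monte Carlo error applies only to the small residual g - h, which is where the variance reduction comes from.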