Introduction to Monte Carlo Method


Introduction to Monte Carlo Method
Kadi Bouatouch, IRISA
Email: kadi@irisa.fr

Why Monte Carlo Integration? To generate realistic-looking images, we need to solve integrals of dimension 2 or higher. Pixel filtering and lens simulation each involve solving a 2-dimensional integral, and combining pixel filtering with lens simulation requires solving a 4-dimensional integral. Normal quadrature algorithms don't extend well beyond one dimension.

Continuous Probability A continuous random variable x takes on values randomly from its domain. The behavior of x is completely described by the distribution of values it takes.

Continuous Probability The distribution of values that x takes on is described by a probability distribution function p. We say that x is distributed according to p, or x ~ p, where p(x) ≥ 0. The probability that x takes on a value in the interval [a, b] is:
P(x ∈ [a, b]) = ∫_a^b p(x) dx
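
As a quick numerical check (a Python sketch, not from the slides; the pdf p(x) = 2x on [0, 1] is a hypothetical example), the interval probability is just the integral of the pdf, which we can approximate with a midpoint Riemann sum:

```python
def prob_interval(p, a, b, n=100_000):
    # approximate the integral of pdf p over [a, b] with a midpoint Riemann sum
    h = (b - a) / n
    return sum(p(a + (i + 0.5) * h) for i in range(n)) * h

# Hypothetical pdf p(x) = 2x on [0, 1]: analytically, P(a <= x <= b) = b^2 - a^2.
p = lambda x: 2.0 * x
estimate = prob_interval(p, 0.25, 0.75)   # exact answer: 0.75^2 - 0.25^2 = 0.5
```

Integrating over the whole domain returns 1, which is the normalization condition every valid pdf must satisfy.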

Expected Value The expected value of x ~ p is defined as
E(x) = ∫ x p(x) dx
Since a function of a random variable is itself a random variable, the expected value of f(x) is
E(f(x)) = ∫ f(x) p(x) dx
The expected value of a sum of random variables is the sum of the expected values: E(x + y) = E(x) + E(y)
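
The linearity property can be checked empirically with sample means (a Python sketch, not part of the slides; x and y uniform on [0, 1] are illustrative choices):

```python
import random

rng = random.Random(0)
N = 200_000
xs = [rng.random() for _ in range(N)]   # x uniform on [0, 1], so E(x) = 1/2
ys = [rng.random() for _ in range(N)]   # y uniform on [0, 1], so E(y) = 1/2

E_x = sum(xs) / N
E_y = sum(ys) / N
E_sum = sum(x + y for x, y in zip(xs, ys)) / N   # sample estimate of E(x + y)
```

The sample mean of x + y agrees with E_x + E_y up to floating-point rounding, and both approach the true value 1.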

Multi-Dimensional Random Variables For some space S, we can define a pdf p: S → ℝ. If x is a random variable and x ~ p, the probability that x takes on a value in Si ⊂ S is:
P(x ∈ Si) = ∫_{Si} p(x) dμ(x)
The expected value of a real-valued function f: S → ℝ extends naturally to the multidimensional case:
E(f(x)) = ∫_S f(x) p(x) dμ(x)

Monte Carlo Integration Suppose we have a function f(x) defined over the domain x ∈ [a, b]. We would like to evaluate the integral
I = ∫_a^b f(x) dx
The Monte Carlo approach is to consider N samples xi, selected randomly according to a pdf p(x), to estimate the integral. We get the following estimator:
⟨I⟩ = (1/N) Σ_{i=1}^{N} f(xi) / p(xi)
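
The estimator translates directly into code (a Python sketch, not from the slides; the integrand x^2 on [0, 1] is an illustrative choice):

```python
import random

def mc_integrate(f, sample, pdf, n=100_000, seed=1):
    # Monte Carlo estimator <I> = (1/N) * sum_i f(x_i) / p(x_i), with x_i ~ p
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = sample(rng)
        total += f(x) / pdf(x)
    return total / n

# Estimate the integral of f(x) = x^2 over [0, 1] (true value 1/3)
# using the uniform pdf p(x) = 1 on [0, 1].
est = mc_integrate(lambda x: x * x, lambda rng: rng.random(), lambda x: 1.0)
```

Note that nothing here depends on the dimension of the domain, which is exactly why Monte Carlo scales to the high-dimensional integrals of rendering.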

Monte Carlo Integration Taking the expected value of the estimator, we get:
E[⟨I⟩] = (1/N) Σ_{i=1}^{N} E[f(xi)/p(xi)] = (1/N) · N · ∫_a^b (f(x)/p(x)) p(x) dx = ∫_a^b f(x) dx = I
In other words, the estimator is unbiased: on average it returns the true value of the integral.

Variance The variance of the estimator is
σ² = (1/N) Var(f(x)/p(x)) = (1/N) ( ∫_a^b (f(x)²/p(x)) dx − I² )
As the error in the estimator is proportional to σ, the error is proportional to 1/√N. So, to halve the error we need to use four times as many samples.
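
The 1/√N behaviour can be observed directly (a Python sketch, not from the slides; it repeats the uniform-pdf estimate of ∫₀¹ x² dx many times and compares the spread at N and 4N samples):

```python
import random
import statistics

def estimate(n, rng):
    # estimate the integral of x^2 over [0, 1] with n uniform samples (p(x) = 1)
    return sum(rng.random() ** 2 for _ in range(n)) / n

rng = random.Random(42)
runs = 300
sd_n  = statistics.pstdev([estimate(100, rng) for _ in range(runs)])
sd_4n = statistics.pstdev([estimate(400, rng) for _ in range(runs)])
ratio = sd_n / sd_4n   # roughly 2: quadrupling N halves the standard error
```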

Variance Reduction Increase the number of samples. Choose p such that f/p has low variance (f and p should have a similar shape); this is called importance sampling, because if p is large where f is large and small where f is small, there will be more samples in the important regions. Alternatively, partition the domain of the integral into several smaller regions and evaluate the integral as the sum of the integrals over the smaller regions; this is called stratified sampling.
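
Importance sampling in its most extreme form (a Python sketch, not from the slides; the integrand and pdfs are illustrative): estimating ∫₀¹ 3x² dx = 1, either with the uniform pdf or with p(x) = 3x², which matches f exactly, so f/p is constant and the estimator has zero variance.

```python
import random
import statistics

f = lambda x: 3.0 * x * x            # integral of f over [0, 1] is exactly 1

def uniform_est(n, rng):
    # p(x) = 1 on [0, 1]
    return sum(f(rng.random()) for _ in range(n)) / n

def importance_est(n, rng):
    # p(x) = 3x^2, sampled by inversion: P(x) = x^3, so x = u ** (1/3)
    total = 0.0
    for _ in range(n):
        x = (1.0 - rng.random()) ** (1.0 / 3.0)   # u in (0, 1] avoids x == 0
        total += f(x) / (3.0 * x * x)             # f/p == 1 for every sample
    return total / n

rng = random.Random(7)
u_runs = [uniform_est(200, rng) for _ in range(200)]
i_runs = [importance_est(200, rng) for _ in range(200)]
```

Every importance-sampled run returns exactly 1, while the uniform runs scatter around 1; stratified sampling could be sketched similarly by splitting [0, 1] into sub-intervals and estimating each separately.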

Sampling Random Variables Given a pdf p(x) defined over the interval [xmin, xmax], we can sample a random variable x ~ p from a set of uniform random numbers ξi ∈ [0, 1). To do this, we need the cumulative probability distribution function:
P(x) = ∫_{xmin}^{x} p(x') dx'
To get xi we transform ξi: xi = P⁻¹(ξi). P⁻¹ is guaranteed to exist for all valid pdfs.
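
A standard illustration of this inversion technique (a Python sketch, not from the slides) is the exponential pdf p(x) = λe^(−λx) on x ≥ 0: its CDF is P(x) = 1 − e^(−λx), so P⁻¹(ξ) = −ln(1 − ξ)/λ.

```python
import math
import random

lam = 2.0
rng = random.Random(3)
# warp uniform xi in [0, 1) through P^-1(xi) = -ln(1 - xi) / lam;
# using 1 - xi keeps the argument of log in (0, 1]
samples = [-math.log(1.0 - rng.random()) / lam for _ in range(100_000)]
mean = sum(samples) / len(samples)   # should approach the true mean 1/lam = 0.5
```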

Example Sample the pdf p(x) = 3x²/2, x ∈ [−1, 1]. First we need to find P(x):
P(x) = ∫ (3x'²/2) dx' = x³/2 + C
Choosing C such that P(−1) = 0 and P(1) = 1, we get P(x) = (x³ + 1)/2 and P⁻¹(ξ) = (2ξ − 1)^(1/3). Thus, given a random number ξi ∈ [0, 1), we can 'warp' it according to p(x) to get xi:
xi = (2ξi − 1)^(1/3)
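
This example runs directly (a Python sketch, not from the slides; the signed cube root sidesteps Python's complex result for negative bases raised to fractional powers):

```python
import math
import random

def sample_p(rng):
    # warp a uniform xi in [0, 1) through P^-1(xi) = (2*xi - 1)**(1/3)
    u = 2.0 * rng.random() - 1.0
    return math.copysign(abs(u) ** (1.0 / 3.0), u)   # signed cube root of 2*xi - 1

rng = random.Random(5)
xs = [sample_p(rng) for _ in range(100_000)]
mean = sum(xs) / len(xs)                    # E(x) = 0 by symmetry of p
second = sum(x * x for x in xs) / len(xs)   # E(x^2) = 3/5 for p(x) = 3x^2/2
```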

Sampling 2D Random Variables If we have a 2D random variable α = (X, Y) with pdf p(x, y), we need the two-dimensional cumulative pdf:
P(x, y) = ∫_{ymin}^{y} ∫_{xmin}^{x} p(x', y') dx' dy'
We can choose xi using the marginal distribution pG(x) and then choose yi according to p(y|x), where
pG(x) = ∫ p(x, y) dy   and   p(y|x) = p(x, y) / pG(x)
If p is separable, that is p(x, y) = q(x) r(y), the one-dimensional technique can be used on each dimension instead.
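
The separable case can be sketched as follows (Python, not from the slides; the pdfs q(x) = 2x and r(y) = 3y² on [0, 1]² are hypothetical): each dimension is inverted independently, Q(x) = x² giving x = √ξ1 and R(y) = y³ giving y = ξ2^(1/3).

```python
import random

rng = random.Random(11)
N = 100_000
# sample p(x, y) = q(x) * r(y) = (2x)(3y^2) dimension by dimension
pts = [(rng.random() ** 0.5, rng.random() ** (1.0 / 3.0)) for _ in range(N)]
Ex = sum(x for x, _ in pts) / N   # true value E(x) = 2/3 under q(x) = 2x
Ey = sum(y for _, y in pts) / N   # true value E(y) = 3/4 under r(y) = 3y^2
```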

2D example: Uniform Sampling of Spheres
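The body of this slide (figures and formulas) did not survive extraction; the standard inversion-based recipe it refers to can be sketched in Python (an illustrative addition, not the slide's own code): z is sampled uniformly in [−1, 1] and the azimuth φ uniformly in [0, 2π), which together give a uniformly distributed direction on the unit sphere.

```python
import math
import random

def uniform_sphere(rng):
    # z uniform in [-1, 1] and azimuth phi uniform in [0, 2*pi)
    # give a uniformly distributed direction on the unit sphere
    z = 1.0 - 2.0 * rng.random()
    phi = 2.0 * math.pi * rng.random()
    r = math.sqrt(max(0.0, 1.0 - z * z))
    return (r * math.cos(phi), r * math.sin(phi), z)

rng = random.Random(13)
dirs = [uniform_sphere(rng) for _ in range(50_000)]
mean_z = sum(z for _, _, z in dirs) / len(dirs)   # near 0 by symmetry
```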

Distribution Ray Tracing Uniform distribution

Distribution Ray Tracing Cosine distribution
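The slide's figure is lost; a common way to draw cosine-distributed directions over the hemisphere (a Python sketch, an illustrative addition rather than the slide's own method) is Malley's method: sample a unit disk uniformly and project the point up onto the hemisphere.

```python
import math
import random

def cosine_hemisphere(rng):
    # Malley's method: uniform point on the unit disk, projected up to the
    # hemisphere; the resulting directions have pdf proportional to cos(theta)
    r = math.sqrt(rng.random())
    phi = 2.0 * math.pi * rng.random()
    return (r * math.cos(phi), r * math.sin(phi), math.sqrt(max(0.0, 1.0 - r * r)))

rng = random.Random(17)
zs = [cosine_hemisphere(rng)[2] for _ in range(50_000)]
mean_cos = sum(zs) / len(zs)   # E[cos(theta)] = 2/3 when p is proportional to cos(theta)
```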

Distribution Ray Tracing BRDF distribution (frame change)

Distribution Ray Tracing BRDF*cosine distribution

Summary Given a function f(μ) defined over an n-dimensional domain S, we can estimate the integral of f over S by a sum:
∫_S f(μ) dμ ≈ (1/N) Σ_{i=1}^{N} f(xi) / p(xi)
where x ~ p is a random variable and the xi are samples of x selected according to p. To reduce the variance and get faster convergence we can: use importance sampling (p should have a similar shape to f), or use stratified sampling (subdivide S into smaller regions, evaluate the integral for each region, and sum the results together).