Lecture 10 Outline
- Monte Carlo methods
- History of methods
- Sequential random number generators
- Parallel random number generators
- Generating non-uniform random numbers
- Monte Carlo case studies

Monte Carlo Methods
Monte Carlo is another name for statistical sampling methods of great importance to physics and computer science.
Applications of the Monte Carlo method:
- Evaluating integrals of arbitrary functions of 6+ dimensions
- Predicting future values of stocks
- Solving partial differential equations
- Sharpening satellite images
- Modeling cell populations
- Finding approximate solutions to NP-hard problems

An Interesting History
In 1738, the Swiss physicist and mathematician Daniel Bernoulli published Hydrodynamica, which laid the basis for the kinetic theory of gases: great numbers of molecules move in all directions, their impact on a surface causes the gas pressure that we feel, and what we experience as heat is simply the kinetic energy of their motion.
In 1859, the Scottish physicist James Clerk Maxwell formulated the distribution of molecular velocities, which gave the proportion of molecules having a velocity in a specific range. This was the first-ever statistical law in physics. Maxwell used a simple thought experiment: particles must move independently of any chosen coordinates, hence the only possible distribution of velocities must be normal in each coordinate.
In 1864, Ludwig Boltzmann, a young student in Vienna, came across Maxwell's paper and was so inspired by it that he spent much of his long, distinguished, and tortured life developing the subject further.

History of Monte Carlo Method
Credit for inventing the Monte Carlo method is shared by Stanislaw Ulam, John von Neumann, and Nicholas Metropolis.
Ulam, a Polish-born mathematician, worked for John von Neumann on the Manhattan Project. Ulam is known for designing the hydrogen bomb with Edward Teller in 1951. He conceived of the MC method in 1946 while pondering the probabilities of winning a card game of solitaire.
Ulam, von Neumann, and Metropolis developed algorithms for computer implementations, as well as exploring means of transforming non-random problems into random forms that would facilitate their solution via statistical sampling. This work transformed statistical sampling from a mathematical curiosity to a formal methodology applicable to a wide variety of problems. It was Metropolis who named the new methodology after the casinos of Monte Carlo. Metropolis and Ulam published a paper called "The Monte Carlo Method" in Journal of the American Statistical Association, 44 (247), 335-341, in 1949.

Solving Integration Problems via Statistical Sampling: Monte Carlo Approximation
How do we evaluate the integral of f(x)?

Integration Approximation
Can approximate using another function g(x).

Integration Approximation
Can approximate by taking the average or expected value.

Integration Approximation
Estimate the average by taking N samples.

Monte Carlo Integration
I_m = (b - a) * (1/N) * sum over i = 1..N of f(x_i)
- I_m = Monte Carlo estimate
- N = number of samples
- x_1, x_2, ..., x_N are uniformly distributed random numbers between a and b
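
As a concrete illustration, here is a minimal C sketch of this estimator. The integrand x^2 and the use of the C library rand() are illustrative assumptions, not part of the lecture:

#include <stdio.h>
#include <stdlib.h>

/* Example integrand: f(x) = x*x (an illustrative choice) */
double f(double x) { return x * x; }

/* Monte Carlo estimate of the integral of f over [a,b] using n samples */
double monte_carlo_integral(double a, double b, int n) {
    double sum = 0.0;
    for (int i = 0; i < n; i++) {
        double u = rand() / (RAND_MAX + 1.0);   /* uniform in [0,1) */
        double x = a + (b - a) * u;             /* uniform in [a,b) */
        sum += f(x);
    }
    return (b - a) * sum / n;
}

int main(void) {
    srand(42);
    /* Exact value of the integral of x^2 over [0,1] is 1/3 */
    printf("Estimate: %f\n", monte_carlo_integral(0.0, 1.0, 1000000));
    return 0;
}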

Monte Carlo Integration

We have the definition of expected value and how to estimate it.
Since the expected value can be expressed as an integral, the integral is also approximated by the sum.
To simplify the integral, we can substitute g(x) = f(x)p(x).

Variance
The variance describes how much the sampled values vary from each other.
Variance is proportional to 1/N.

Variance
The standard deviation is just the square root of the variance.
Standard deviation is proportional to 1/sqrt(N).
Need 4x as many samples to halve the error.

Variance
Problem:
- Variance (noise) decreases slowly
- Using more samples only removes a small amount of noise

Variance Reduction
There are several ways to reduce the variance:
- Importance sampling
- Stratified sampling
- Quasi-random sampling
- Metropolis random mutations

Importance Sampling
Idea: use more samples in important regions of the function.
If the function is high in small areas, use more samples there.

Importance Sampling
Want g/p to have low variance.
Choose a good function p similar to g; then sample x_i from p and estimate the integral by (1/N) * sum over i = 1..N of g(x_i)/p(x_i).
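
A minimal C sketch of this idea, assuming an illustrative integrand g(x) = 3x^2 on [0,1] (exact integral 1) and the density p(x) = 2x, sampled by inverting its CDF x^2:

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

double uniform01(void) { return (rand() + 1.0) / (RAND_MAX + 2.0); }  /* in (0,1) */

/* Integrand g(x) = 3x^2 on [0,1]; its exact integral is 1 */
double g(double x) { return 3.0 * x * x; }

/* Density p(x) = 2x on [0,1], roughly shaped like g; its CDF is x^2,
   so inverse-CDF sampling returns sqrt(u) */
double p(double x) { return 2.0 * x; }
double sample_p(void) { return sqrt(uniform01()); }

int main(void) {
    srand(1);
    int n = 100000;
    double sum = 0.0;
    for (int i = 0; i < n; i++) {
        double x = sample_p();
        sum += g(x) / p(x);   /* weight each sample by g/p; here g/p = 1.5x */
    }
    printf("Importance-sampling estimate: %f\n", sum / n);
    return 0;
}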

Stratified Sampling
Partition S into smaller domains S_i.
Evaluate the integral as a sum of integrals over the S_i.
Example: jittering for pixel sampling.
Often works much better than importance sampling in practice.
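
A sketch of jittered (stratified) sampling over [0,1] in C: the domain is split into n equal strata with one random sample each. This is a fragment that assumes the includes from the earlier integration example:

/* One jittered sample per stratum; strata have equal width 1/n */
double stratified_estimate(double (*f)(double), int n) {
    double sum = 0.0;
    for (int i = 0; i < n; i++) {
        double u = rand() / (RAND_MAX + 1.0);   /* uniform in [0,1) */
        double x = (i + u) / n;                 /* random point inside stratum i */
        sum += f(x);
    }
    return sum / n;   /* average over strata estimates the integral */
}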

Parallelism in Monte Carlo Methods
Monte Carlo methods are often amenable to parallelism.
Find an estimate about p times faster, OR reduce the error of the estimate by a factor of p^(1/2).

Random versus Pseudo-random
Virtually all computers have "random number" generators.
Their operation is deterministic and their sequences are predictable, so they are more accurately called "pseudo-random number" generators.
In this chapter "random" is shorthand for "pseudo-random".
"RNG" means "random number generator".

Properties of an Ideal RNG
- Uniformly distributed
- Uncorrelated
- Never cycles
- Satisfies any statistical test for randomness
- Reproducible
- Machine-independent
- Changing the "seed" value changes the sequence
- Easily split into independent subsequences
- Fast
- Limited memory requirements

No RNG Is Ideal
Finite precision arithmetic means a finite number of states, hence cycles.
- Period = length of cycle
- If period > number of values needed, effectively acyclic
Reproducibility implies correlations.
There are often speed versus quality trade-offs.

Linear Congruential RNGs
X_{i+1} = (a * X_i + c) mod M
where a is the multiplier, c is the additive constant, and M is the modulus.
The sequence depends on the choice of seed, X_0.

Period of Linear Congruential RNG
The maximum period is M.
For 32-bit integers the maximum period is 2^32, or about 4 billion.
This is too small for modern computers.
Use a generator with at least 48 bits of precision.

Producing Floating-Point Numbers
X_i, a, c, and M are all integers.
The X_i range in value from 0 to M-1.
To produce floating-point numbers in the range [0, 1), divide X_i by M.
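
A C sketch combining the last two slides: a 48-bit linear congruential generator converted to floating point. The constants below are the ones used by the POSIX drand48 family; any (a, c, M) with good properties would do:

#include <stdio.h>
#include <stdint.h>

static uint64_t state = 1;                /* seed X_0 */
static const uint64_t A = 25214903917ULL; /* multiplier a */
static const uint64_t C = 11ULL;          /* additive constant c */
static const uint64_t M = 1ULL << 48;     /* modulus M = 2^48 */

double lcg_random(void) {
    /* X_{i+1} = (a X_i + c) mod M; unsigned overflow is harmless here
       because M divides 2^64 */
    state = (A * state + C) % M;
    return (double)state / (double)M;     /* floating point in [0, 1) */
}

int main(void) {
    for (int i = 0; i < 5; i++)
        printf("%f\n", lcg_random());
    return 0;
}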

Defects of Linear Congruential RNGs
Least significant bits are correlated.
- Especially when M is a power of 2
k-tuples of random numbers form a lattice.
- Points tend to lie on hyperplanes
- Especially pronounced when k is large

Lagged Fibonacci RNGs
X_i = X_{i-p} * X_{i-q}
p and q are lags, p > q.
* is any binary arithmetic operation:
- Addition modulo M
- Subtraction modulo M
- Multiplication modulo M
- Bitwise exclusive or

Properties of Lagged Fibonacci RNGs
Require p seed values.
Careful selection of the seed values, p, and q can result in very long periods and good randomness.
For example, suppose M has b bits. The maximum period for an additive lagged Fibonacci RNG is (2^p - 1) * 2^(b-1).
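
A sketch of an additive lagged Fibonacci generator in C with the classic lags p = 17, q = 5; seeding the p initial values with a small LCG is an illustrative choice:

#include <stdio.h>
#include <stdint.h>

#define P 17
#define Q 5

static uint32_t ring[P];   /* circular buffer of the last P values */
static int pos = 0;

void lfg_seed(uint32_t seed) {
    for (int i = 0; i < P; i++) {               /* fill the p seed values */
        seed = seed * 1664525u + 1013904223u;   /* LCG scrambler */
        ring[i] = seed;
    }
}

uint32_t lfg_next(void) {
    /* ring[pos] holds X_{i-p}; (pos + P - Q) % P holds X_{i-q} */
    uint32_t x = ring[pos] + ring[(pos + P - Q) % P];  /* addition mod 2^32 */
    ring[pos] = x;
    pos = (pos + 1) % P;
    return x;
}

int main(void) {
    lfg_seed(12345u);
    for (int i = 0; i < 5; i++)
        printf("%u\n", lfg_next());
    return 0;
}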

Ideal Parallel RNGs
- All properties of sequential RNGs
- No correlations among numbers in different sequences
- Scalability
- Locality

Parallel RNG Designs
- Manager-worker
- Leapfrog
- Sequence splitting
- Independent sequences

Manager-Worker Parallel RNG
The manager process generates random numbers; worker processes consume them.
If the algorithm is synchronous, this may achieve the goal of consistency.
Not scalable, and does not exhibit locality.

Leapfrog Method Process with rank 1 of 4 processes

Properties of Leapfrog Method
It is easy to modify a linear congruential RNG to support jumping ahead by p.
Can allow a parallel program to generate the same tuples as the sequential program.
Does not support dynamic creation of new random number streams.
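
A sketch of the leapfrog idea for an LCG: since X_{i+p} = (a^p X_i + c(a^{p-1} + ... + a + 1)) mod M, a process can precompute the jump constants once and then step p values at a time. The drand48-style constants are an illustrative assumption:

#include <stdio.h>
#include <stdint.h>

static const uint64_t A = 25214903917ULL;
static const uint64_t C = 11ULL;
static const uint64_t MASK = (1ULL << 48) - 1;   /* mod 2^48 via masking */

int main(void) {
    int p = 4, k = 1;   /* 4 processes; this is the rank-1 process */
    /* Precompute leapfrog multiplier a^p and constant c(a^{p-1}+...+1) */
    uint64_t ap = 1, cp = 0;
    for (int i = 0; i < p; i++) {
        cp = (cp * A + C) & MASK;   /* accumulates c(a^i + ... + a + 1) */
        ap = (ap * A) & MASK;
    }
    /* Advance the global seed k steps so this process starts at X_k */
    uint64_t x = 1;   /* global seed X_0 */
    for (int i = 0; i < k; i++)
        x = (A * x + C) & MASK;
    /* Generate this process's subsequence: X_k, X_{k+p}, X_{k+2p}, ... */
    for (int i = 0; i < 5; i++) {
        printf("%f\n", (double)x / (double)(MASK + 1));
        x = (ap * x + cp) & MASK;
    }
    return 0;
}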

Sequence Splitting Process with rank 1 of 4 processes

Properties of Sequence Splitting
Forces each process to move ahead to its starting point.
Does not support the goal of reproducibility.
May run into long-range correlation problems.
Can be modified to support dynamic creation of new sequences.

Independent Sequences
Run a sequential RNG on each process.
Start each with different seed(s) or other parameters. Example: linear congruential RNGs with different additive constants.
Works well with lagged Fibonacci RNGs.
Supports the goals of locality and scalability.

Statistical Simulation: Metropolis Algorithm
Metropolis algorithm [Metropolis, Rosenbluth, Rosenbluth, Teller, Teller 1953]:
- Simulate the behavior of a physical system according to the principles of statistical mechanics.
- Globally biased toward "downhill" lower-energy steps, but occasionally makes "uphill" steps to break out of local minima.
Gibbs-Boltzmann function: the probability of finding a physical system in a state with energy E is proportional to e^(-E/(kT)), where T > 0 is temperature and k is a constant.
- For any temperature T > 0, this is a monotone decreasing function of energy E.
- The system is more likely to be in a lower energy state than a higher one.
- T large: high and low energy states have roughly the same probability.
- T small: low energy states are much more probable.

Metropolis algorithm:
- Given a fixed temperature T, maintain a current state S.
- Randomly perturb the current state S to a new state S' in N(S).
- If E(S') <= E(S), update the current state to S'. Otherwise, update the current state to S' with probability e^(-dE/(kT)), where dE = E(S') - E(S) > 0.
Convergence Theorem. Let f_S(t) be the fraction of the first t steps in which the simulation is in state S. Then, assuming some technical conditions, with probability 1, f_S(t) converges to e^(-E(S)/(kT)) / Z, where Z is the sum of e^(-E(S')/(kT)) over all states S'.
Intuition: the simulation spends roughly the right amount of time in each state, according to the Gibbs-Boltzmann equation.
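
The acceptance rule translates to a few lines of C; a sketch in which the caller supplies the two energies and kT:

#include <stdlib.h>
#include <math.h>

double uniform01(void) { return rand() / (RAND_MAX + 1.0); }

/* One Metropolis acceptance decision, following the rule above: always
   accept downhill moves, accept uphill moves with probability e^{-dE/(kT)} */
int metropolis_accept(double e_old, double e_new, double kT) {
    if (e_new <= e_old)
        return 1;                                       /* downhill: accept */
    return uniform01() < exp(-(e_new - e_old) / kT);    /* uphill: sometimes */
}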

Simulated Annealing
- T large: the probability of accepting an uphill move is large.
- T small: uphill moves are almost never accepted.
- Idea: turn a knob to control T.
- Cooling schedule: T = T(i) at iteration i.
Physical analog:
- If we take a solid and raise it to a high temperature, we do not expect it to maintain a nice crystal structure.
- If we take a molten solid and freeze it very abruptly, we do not expect to get a perfect crystal either.
- Annealing: cool the material gradually from a high temperature, allowing it to reach equilibrium at a succession of intermediate lower temperatures.

Other Distributions
- Analytical transformations
- Box-Muller transformation
- Rejection method

Analytical Transformation
Given a probability density function f(x) with cumulative distribution F(x). In probability theory, the quantile function of a distribution is the inverse of its cumulative distribution function: draw u uniformly from [0, 1) and return x = F^(-1)(u).

Exponential Distribution
An exponential distribution arises naturally when modeling the time between independent events that happen at a constant average rate and are memoryless. This is one of the few cases where the quantile function is known analytically: F(x) = 1 - e^(-x/m) for mean m, so x = -m ln(1 - u).

Example 1:
Produce four samples from an exponential distribution with mean 3.
Uniform sample: 0.540, 0.619, 0.452, 0.095
Take the natural log of each value and multiply by -3.
Exponential sample: 1.850, 1.440, 2.317, 7.072
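
A C sketch of this inverse-transform recipe; sampling u from (0,1) to avoid log(0) is a small implementation assumption:

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* Inverse-transform sampling from an exponential distribution with the
   given mean, exactly as in Example 1: x = -mean * ln(u) */
double exponential(double mean) {
    double u = (rand() + 1.0) / (RAND_MAX + 2.0);   /* uniform in (0,1) */
    return -mean * log(u);
}

int main(void) {
    srand(7);
    for (int i = 0; i < 4; i++)
        printf("%f\n", exponential(3.0));   /* four samples, mean 3 */
    return 0;
}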

Example 2:
The simulation advances in time steps of 1 second.
The time until the next event follows an exponential distribution with mean 5 seconds.
What is the probability that the event will happen in the next second?
F(1) = 1 - exp(-1/5) = 0.181
Use a uniform random number to test for the occurrence of the event (if u < 0.181 then 'event' else 'no event').

Normal Distributions: Box-Muller Transformation
We cannot invert the cumulative distribution function to produce a formula yielding random numbers from a normal (Gaussian) distribution.
The Box-Muller transformation produces a pair of standard normal deviates g_1 and g_2 from a pair of uniform deviates u_1 and u_2.

Box-Muller Transformation
repeat
  v_1 <- 2u_1 - 1
  v_2 <- 2u_2 - 1
  r <- v_1^2 + v_2^2
until r > 0 and r < 1
f <- sqrt(-2 ln r / r)
g_1 <- f * v_1
g_2 <- f * v_2
This is a consequence of the fact that the chi-square distribution with two degrees of freedom is an easily generated exponential random variable.
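
The same algorithm as runnable C, in the polar (Marsaglia) form shown above; caching the second deviate between calls is a common implementation choice:

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

double uniform01(void) { return rand() / (RAND_MAX + 1.0); }

double gaussian(void) {
    static int have_spare = 0;
    static double spare;
    if (have_spare) { have_spare = 0; return spare; }

    double v1, v2, r;
    do {                                   /* rejection loop from the slide */
        v1 = 2.0 * uniform01() - 1.0;
        v2 = 2.0 * uniform01() - 1.0;
        r = v1 * v1 + v2 * v2;
    } while (r >= 1.0 || r == 0.0);
    double f = sqrt(-2.0 * log(r) / r);
    spare = f * v2;                        /* g_2, saved for the next call */
    have_spare = 1;
    return f * v1;                         /* g_1 */
}

int main(void) {
    srand(3);
    for (int i = 0; i < 4; i++)
        printf("%f\n", gaussian());
    return 0;
}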

Example
Produce four samples from a normal distribution with mean 0 and standard deviation 1.
(Table of intermediate values u_1, u_2, v_1, v_2, r, f, g_1, g_2 for each sample.)

Rejection Method

Example
Generate random variables from this probability density function.

Example (cont.)
So h(x) >= f(x) for all x.

Example (cont.)
For each candidate, draw x_i and u_i, compute u_i * h(x_i), and accept x_i when u_i * h(x_i) <= f(x_i).
Outcomes for the four candidates: Reject, Accept, Reject, Accept.
Two samples from f(x) are ... and 1.306.
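
A C sketch of the rejection method. The target density f(x) = (3/8)x^2 on [0,2] and the constant envelope h(x) = 1.5 are illustrative assumptions, not the slide's f:

#include <stdio.h>
#include <stdlib.h>

double uniform01(void) { return rand() / (RAND_MAX + 1.0); }

double f(double x) { return 0.375 * x * x; }   /* integrates to 1 on [0,2] */
#define H 1.5                                  /* h(x) = 1.5 >= max f = 1.5 */

double rejection_sample(void) {
    for (;;) {
        double x = 2.0 * uniform01();          /* candidate, uniform on [0,2) */
        double u = uniform01();
        if (u * H <= f(x))                     /* accept with prob f(x)/h(x) */
            return x;
    }
}

int main(void) {
    srand(11);
    for (int i = 0; i < 4; i++)
        printf("%f\n", rejection_sample());
    return 0;
}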

Case Studies (Topics Introduced)
- Temperature inside a 2-D plate (random walk)
- Two-dimensional Ising model (Metropolis algorithm)
- Room assignment problem (simulated annealing)
- Parking garage (Monte Carlo time)
- Traffic circle (simulating queues)

Temperature Inside a 2-D Plate Random walk

Example of Random Walk
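
A C sketch of the random-walk approach: the steady-state temperature at an interior point is estimated as the average boundary temperature reached by many random walks. The plate size and boundary temperatures below are illustrative assumptions:

#include <stdio.h>
#include <stdlib.h>

#define N 16   /* plate is N x N, indices 0..N-1; the edges are the boundary */

double boundary_temp(int i, int j) {
    if (i == 0) return 100.0;   /* hot top edge (illustrative) */
    return 0.0;                 /* other three edges held at 0 */
}

int main(void) {
    srand(5);
    int walks = 100000, i0 = N / 2, j0 = N / 2;
    double sum = 0.0;
    for (int w = 0; w < walks; w++) {
        int i = i0, j = j0;
        while (i > 0 && i < N - 1 && j > 0 && j < N - 1) {
            switch (rand() % 4) {   /* step to a random neighbor */
                case 0: i--; break;
                case 1: i++; break;
                case 2: j--; break;
                case 3: j++; break;
            }
        }
        sum += boundary_temp(i, j);   /* walk ended on the boundary */
    }
    printf("Estimated temperature at center: %f\n", sum / walks);
    return 0;
}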

NP-Hard Assignment Problems
TSP: Find a tour of US cities that minimizes distance.

Physical Annealing
Heat a solid until it melts.
Cool slowly to allow the material to reach a state of minimum energy.
This produces a strong, defect-free crystal with regular structure.

Simulated Annealing
Makes an analogy between physical annealing and solving a combinatorial optimization problem:
- Solution to problem = state of material
- Value of objective function = energy associated with state
- Optimal solution = minimum energy state

How Simulated Annealing Works
An iterative algorithm that slowly lowers T:
- Randomly change the solution to create an alternate solution.
- Compute dE, the change in the value of the objective function.
- If dE < 0, jump to the alternate solution.
- Otherwise, jump to the alternate solution with probability e^(-dE/T).
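
A skeleton of this loop in C. Minimizing a 1-D function with local minima and the particular cooling constant are illustrative assumptions; real uses substitute a combinatorial objective such as the room assignment problem:

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

double uniform01(void) { return rand() / (RAND_MAX + 1.0); }

/* An objective with many local minima (illustrative) */
double objective(double x) { return x * x + 10.0 * sin(3.0 * x); }

int main(void) {
    srand(9);
    double x = 10.0, e = objective(x), t = 10.0;   /* initial solution and T */
    for (int i = 0; i < 100000; i++) {
        double x_new = x + (uniform01() - 0.5);    /* random perturbation */
        double delta = objective(x_new) - e;
        /* accept downhill always, uphill with probability e^{-delta/T} */
        if (delta < 0.0 || uniform01() < exp(-delta / t)) {
            x = x_new;
            e += delta;
        }
        t *= 0.9999;                               /* geometric cooling */
    }
    printf("Final solution: x = %f, objective = %f\n", x, e);
    return 0;
}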

Performance of Simulated Annealing
The rate of convergence depends on the initial value of T and the temperature change function.
Geometric temperature change functions are typical, e.g., T_{i+1} = a * T_i with a just below 1.
Not guaranteed to find an optimal solution.
The same algorithm using different random number streams can converge on different solutions.
This is an opportunity for parallelism.

Convergence
Starting with a higher initial temperature leads to more iterations before convergence.

Parking Garage
- The parking garage has S stalls.
- Car arrivals fit a Poisson distribution with mean A: exponentially distributed inter-arrival times.
- The stay in the garage fits a normal distribution with mean M and standard deviation M/S.

Implementation Idea
Maintain the current time, the times at which each space becomes available, a car count, and a count of cars rejected.
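
A C sketch of this bookkeeping, with illustrative parameter values: the clock jumps from one arrival to the next (Monte Carlo time) rather than ticking at fixed intervals:

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define S 10          /* stalls (illustrative) */
#define A 4.0         /* mean inter-arrival time (illustrative) */
#define M 30.0        /* mean stay (illustrative) */

double uniform01(void) { return (rand() + 1.0) / (RAND_MAX + 2.0); }
double exponential(double mean) { return -mean * log(uniform01()); }

double gaussian(void) {   /* basic form of Box-Muller */
    return sqrt(-2.0 * log(uniform01())) * cos(6.283185307179586 * uniform01());
}

int main(void) {
    srand(2);
    double now = 0.0, free_at[S] = {0.0};   /* times each stall frees up */
    int cars = 0, rejected = 0;
    while (now < 10000.0) {
        now += exponential(A);                  /* jump to next arrival */
        int stall = -1;
        for (int i = 0; i < S; i++)             /* find a free stall */
            if (free_at[i] <= now) { stall = i; break; }
        if (stall < 0) { rejected++; continue; }
        double stay = M + (M / S) * gaussian(); /* normally distributed stay */
        if (stay < 0.0) stay = 0.0;
        free_at[stall] = now + stay;
        cars++;
    }
    printf("admitted %d, rejected %d\n", cars, rejected);
    return 0;
}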

Summary (1/3)
Applications of Monte Carlo methods:
- Numerical integration
- Simulation
Random number generators:
- Linear congruential
- Lagged Fibonacci

Summary (2/3)
Parallel random number generators:
- Manager-worker
- Leapfrog
- Sequence splitting
- Independent sequences
Non-uniform distributions:
- Analytical transformations
- Box-Muller transformation
- Rejection method

Summary (3/3)
Concepts revealed in case studies:
- Monte Carlo time
- Random walk
- Metropolis algorithm
- Simulated annealing
- Modeling queues