Monte Carlo Methods and Statistical Physics

Monte Carlo Methods and Statistical Physics
Mathematical Biology, Lecture 4
James A. Glazier
(Partially based on Koonin and Meredith, Computational Physics, Chapter 8)

Monte Carlo Methods
Monte Carlo methods use statistical physics techniques to solve problems that are difficult or inconvenient to solve deterministically. Two basic applications:
1. Evaluation of complex multidimensional integrals (e.g., in statistical mechanics) [1950s].
2. Optimization of problems where the deterministic problem is algorithmically hard (NP-complete, e.g., the Traveling Salesman Problem) [1970s].
Both applications are important in biology.

Example: Thermodynamic Partition Function
For a gas of N atoms at temperature $1/\beta$, interacting pairwise through a potential $V(r)$, the partition function is the $3N$-dimensional integral
$Z = \int d^3r_1 \cdots d^3r_N \, e^{-\beta \sum_{i<j} V(|r_i - r_j|)}$.
Suppose we need to evaluate Z numerically with 10 steps per integration axis. Then we have $10^{3N}$ exponentials to evaluate. The current fastest computer runs at about $10^{12}$ operations/second, and one year is about $3 \times 10^7$ seconds, so one year buys about $3 \times 10^{19}$ operations. In one year we could evaluate Z for about 7 atoms! This result is pretty hopeless. There must be a better way.
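This back-of-the-envelope arithmetic is easy to check in a few lines of Python (the machine speed and step count are the slide's round figures, not benchmarks):

```python
import math

ops_per_second = 1e12        # the slide's "fastest computer"
seconds_per_year = 3e7       # one year, to one significant figure
ops_per_year = ops_per_second * seconds_per_year   # ~3e19 operations

# Direct quadrature needs 10**(3N) evaluations for N atoms (3 coordinates each).
# The largest N finishing within a year satisfies 10**(3N) <= ops_per_year.
n_max = math.log10(ops_per_year) / 3
print(f"about {n_max:.1f} atoms")   # ~6.5, i.e. roughly 7 atoms
```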

Normal Deterministic Integration
Consider the integral
$I = \int_0^1 f(x)\,dx$.
Subdivide $[0,1]$ into N evenly spaced intervals of width $\Delta x = 1/N$. Then
$I \approx \sum_{i=1}^{N} f(x_i)\,\Delta x$,
with one sample point $x_i$ in each interval.
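A minimal sketch of this rule in Python, taking the midpoint of each interval as the sample point (one common choice):

```python
import numpy as np

def deterministic_integral(f, N):
    """Approximate the integral of f over [0,1] with N evenly spaced intervals."""
    dx = 1.0 / N
    x = (np.arange(N) + 0.5) * dx      # midpoint of each subinterval
    return np.sum(f(x)) * dx

# Example: the integral of x^2 over [0,1] is exactly 1/3.
print(deterministic_integral(lambda x: x**2, 1000))   # ~0.333333
```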

Error Estimate—Continued
2) Convergence is slow: the Monte Carlo error falls off as $1/\sqrt{N}$, while for normal deterministic (trapezoid-rule) integration the error falls off as $1/N^2$. However, the comparison isn't fair. Suppose you fix the total number of subdomains in the integral to be N. In d dimensions each deterministic sub-integral then has only $N^{1/d}$ intervals per axis, so the net deterministic error is $\sim N^{-2/d}$, while the Monte Carlo error remains $\sim N^{-1/2}$. So, if $d > 4$ the Monte Carlo method is better!
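A quick numeric look at the crossover, taking these scalings at face value:

```python
# Fixed budget of N function evaluations:
#   deterministic error ~ N**(-2/d) in d dimensions,
#   Monte Carlo error   ~ N**(-1/2) in any dimension.
N = 10**6
for d in (1, 2, 4, 6, 8):
    print(d, N ** (-2.0 / d), N ** -0.5)
# The rates match at d = 4; beyond that, Monte Carlo converges faster.
```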

Error Estimate
How good is the estimate? Since the $f(x_i)$ are independent samples, the variance of the Monte Carlo estimate is
$\sigma_I^2 = \frac{1}{N}\left(\langle f^2 \rangle - \langle f \rangle^2\right)$,
so the error falls off as $1/\sqrt{N}$. For a constant function, the error is 0 for both deterministic and Monte Carlo integration. Two rather strange consequences:
1. In normal integration, the error is 0 for straight lines.
2. In Monte Carlo integration, the error differs for straight lines depending on slope (worse for steeper lines), because a steeper line has a larger variance $\langle f^2 \rangle - \langle f \rangle^2$.

Monte Carlo Integration
Use the idea of the integral as an average:
$I = \int_0^1 f(x)\,dx = \langle f \rangle$.
Before, we solved by subdividing $[0,1]$ into evenly spaced intervals, but we could equally well pick the positions where we evaluate $f(x)$ randomly:
$I_N = \frac{1}{N} \sum_{i=1}^{N} f(x_i)$, with the $x_i$ chosen uniform random in $[0,1]$.
Then $I_N$ approximates I. Note: you need a good random number generator for this method to work. See Press, Teukolsky, Vetterling, and Flannery, Numerical Recipes.
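A minimal sketch of this estimator, using NumPy's default generator as the "good" random number source and reporting the one-sigma error from the variance formula above:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_integral(f, N):
    """Estimate the integral of f over [0,1] from N uniform random samples."""
    fx = f(rng.uniform(0.0, 1.0, N))
    I = fx.mean()
    sigma = fx.std(ddof=1) / np.sqrt(N)   # error shrinks like 1/sqrt(N)
    return I, sigma

I, sigma = mc_integral(lambda x: x**2, 100_000)
print(f"I = {I:.5f} +/- {sigma:.5f}   (exact: 1/3)")
```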

Pathology
Like normal integration, Monte Carlo integration can have problems. Suppose you have N delta functions scattered over the unit interval. The true integral is the sum of their weights, but the probability of a random sample hitting a delta function is 0, so $I_N = 0$. For sharply peaked functions, a uniform random sample is a bad estimate (standard numerical integration doesn't work well either).
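The same failure shows up numerically for any near-delta spike; in this sketch the `spike` function (an illustrative stand-in, not from the lecture) has area 1, yet uniform sampling essentially never lands on it:

```python
import numpy as np

rng = np.random.default_rng(1)

def spike(x, center=0.5, width=1e-6):
    """A near-delta function: height 1/width on a width-wide window, area 1."""
    return np.where(np.abs(x - center) < width / 2, 1.0 / width, 0.0)

x = rng.uniform(0.0, 1.0, 100_000)
print(spike(x).mean())   # almost always 0.0, though the true integral is 1
```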

Weight Functions
You can improve estimates by picking the "random" points intelligently, to have more points where $f(x)$ is large and few where $f(x)$ is small. Let $w(x)$ be a weight function such that
$\int_0^1 w(x)\,dx = 1$, with $w(x) > 0$.
For deterministic integration, the weight function has no effect:
$I = \int_0^1 f(x)\,dx = \int_0^1 \frac{f(x)}{w(x)}\,w(x)\,dx$.
Let $y(x) = \int_0^x w(x')\,dx'$, so that $dy = w(x)\,dx$. Then
$I = \int_0^1 \frac{f(x(y))}{w(x(y))}\,dy$.
Alternatively, pick the $x_i$ with probability density $w(x)$. So all we have to do is pick x according to the distribution $w(x)$ and divide $f(x)$ by that distribution:
$I_N = \frac{1}{N} \sum_{i=1}^{N} \frac{f(x_i)}{w(x_i)}$.
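A sketch of this recipe for a concrete case; the integrand $e^x$ and the weight $w(x) = 2(1+x)/3$ are illustrative choices (the weight roughly tracks the integrand's growth), not the lecture's:

```python
import numpy as np

rng = np.random.default_rng(2)
f, N = np.exp, 100_000           # estimate the integral of e^x over [0,1]

# Plain Monte Carlo with uniform samples.
plain = f(rng.uniform(0, 1, N)).mean()

# Importance sampling: draw x from w(x) = 2(1 + x)/3 by inverse transform.
# y(x) = (2x + x**2)/3, so x(y) = sqrt(1 + 3y) - 1.
y = rng.uniform(0, 1, N)
x = np.sqrt(1.0 + 3.0 * y) - 1.0
weighted = (f(x) / (2.0 * (1.0 + x) / 3.0)).mean()

print(plain, weighted, np.e - 1)   # the weighted estimate scatters less
```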

Weight Functions—Continued
If $w(x) \propto f(x)$, then $f/w$ is constant and a single sample gives the exact answer, so why not just let $w(x) = f(x)$? Because then you would need to solve the original integral, either to invert $y(x)$ to obtain $x(y)$ or to pick x according to $w(x)$. But stripping the linear drift from f is easy and always helps. In d dimensions, $y(x)$ becomes a d-dimensional change of variables whose Jacobian is $w(x)$, which is hard to invert, so we need to pick the points directly (though, again, we can strip drift).

Example
Choose a $w(x)$ whose cumulative integral $y(x)$ you can invert analytically; then drawing y uniform in $[0,1]$ and setting $x = x(y)$ yields points distributed as $w(x)$ (a worked sketch follows below). When you can't invert $y(x)$, refer to the large literature on how to generate random numbers with the needed distribution.
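A worked sketch with two simple assumed weights ($w(x) = 2x$ and $w(x) = 3x^2$, chosen for easy checking, not necessarily the lecture's example); rejection sampling stands in for that larger literature when $y(x)$ can't be inverted:

```python
import numpy as np

rng = np.random.default_rng(3)

# Inverse transform: for w(x) = 2x on [0,1], y(x) = x**2, so x(y) = sqrt(y).
y = rng.uniform(0, 1, 100_000)
x = np.sqrt(y)                       # distributed with density w(x) = 2x
print(x.mean())                      # ~2/3, the mean of the density 2x

# When y(x) has no analytic inverse, rejection sampling is a standard fallback:
# propose uniformly and keep each point with probability w(x)/w_max.
w = lambda t: 3 * t**2               # another normalized density; max is 3
prop = rng.uniform(0, 1, 300_000)
keep = prop[rng.uniform(0, 1, prop.size) < w(prop) / 3.0]
print(keep.mean())                   # ~3/4, the mean of the density 3x^2
```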

Metropolis Algorithm
Originally a way to derive statistics for the canonical ensemble in statistical mechanics. More generally, a way to pick the $x_i$ according to the weight function $w(x)$ in a very high-dimensional space. Idea: pick any $x_0$ and do a random walk $x_0 \to x_1 \to x_2 \to \cdots$, subject to constraints such that the probability of finding a walker at x satisfies $P(x) \propto w(x)$. Problems: 1) convergence can be very slow; 2) the result can be wrong; 3) the variance is not known.

Algorithm
For any state $x_n$, generate a new trial state $x_t$. Usually (though it is not necessary) assume that $x_t$ is not too far from $x_n$, i.e., that it lies within a ball of radius $\delta > 0$ of $x_n$: $|x_t - x_n| < \delta$. Let
$r = w(x_t)/w(x_n)$.
If $r \ge 1$, accept the trial: $x_{n+1} = x_t$. If $r < 1$, accept the trial with probability r: pick a uniform random number $\eta \in [0,1]$; if $\eta < r$, accept, $x_{n+1} = x_t$; otherwise reject, $x_{n+1} = x_n$. Repeat.
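A direct transcription of this rule into Python, with a one-dimensional walk and an unnormalized Gaussian as an illustrative weight:

```python
import numpy as np

rng = np.random.default_rng(4)

def metropolis(w, x0, delta, n_steps):
    """Metropolis walk: uniform trial steps of size <= delta, accepted with
    probability min(1, w(trial)/w(current))."""
    x = x0
    samples = np.empty(n_steps)
    for n in range(n_steps):
        trial = x + delta * rng.uniform(-1.0, 1.0)
        r = w(trial) / w(x)
        if r >= 1.0 or rng.uniform() < r:
            x = trial                 # accept: x_{n+1} = x_t
        samples[n] = x                # on reject, x_{n+1} = x_n is recorded again
    return samples

w = lambda x: np.exp(-0.5 * x * x)    # unnormalized standard Gaussian
s = metropolis(w, x0=0.0, delta=1.0, n_steps=50_000)
print(s.mean(), s.var())              # ~0 and ~1 once the walk equilibrates
```

Note that a rejected trial still records the current state; dropping rejected steps instead is a classic bug that biases the sampled distribution.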

Problems
The walk may not sample the entire space. If $\delta$ is too small, you explore only a small region around $x_0$. If $\delta$ is too big, the probability of acceptance is near 0, which is inefficient. If regions of high w are linked only by regions of very low w, you never see the other regions. If w is sharply peaked, the walk tends to get stuck near the maximum. The sequence of $x_n$ is not statistically independent, so you cannot estimate the error. Fixes:
1. Use multiple replicas: many different $x_0$, which together sample the whole space.
2. Pick $\delta$ so that the acceptance probability is ~1/2 (optimal; see the tuning sketch below).
3. Run many steps before starting to sample.
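A sketch of fix 2: measure the acceptance rate over a warm-up pass and nudge $\delta$ toward ~1/2 acceptance (the thresholds and adjustment factors here are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(5)
w = lambda x: np.exp(-0.5 * x * x)    # same Gaussian weight as above

def tune_delta(delta, x0=0.0, n_trials=2000):
    """Measure the acceptance rate at this delta and adjust toward ~1/2."""
    x, accepted = x0, 0
    for _ in range(n_trials):
        trial = x + delta * rng.uniform(-1.0, 1.0)
        if w(trial) / w(x) > rng.uniform():
            x, accepted = trial, accepted + 1
    rate = accepted / n_trials
    if rate > 0.6:       # accepting too often: steps too timid, widen them
        delta *= 1.5
    elif rate < 0.4:     # accepting too rarely: steps too bold, shrink them
        delta *= 0.75
    return delta, rate

print(tune_delta(0.1))   # a tiny delta gives a rate near 1, so delta grows
```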

Convergence Theorem
Theorem: the density of Metropolis walkers converges to the equilibrium density $N_{eq}(x) \propto w(x)$.
Proof: consider many independent walkers starting from every possible point, and let them run for a long time. Let $N(x)$ be the density of walkers at point x. Consider two points x and y, and let $P(x \to y)$ be the probability for a single walker to jump from x to y. The rate of jumps from x to y is
$R(x \to y) = N(x)\,P(x \to y)$,
so the net change in $N(x)$ is
$\Delta N(x) = \sum_y \left[ N(y)\,P(y \to x) - N(x)\,P(x \to y) \right]$.

Convergence Theorem—Continued
At equilibrium, $\Delta N(x) = 0$, which holds when
$\frac{N_{eq}(x)}{N_{eq}(y)} = \frac{P(y \to x)}{P(x \to y)}$.
So if $N(x)/N(y) > N_{eq}(x)/N_{eq}(y)$, the net flow decreases $N(x)$, and if $N(x)/N(y) < N_{eq}(x)/N_{eq}(y)$, the net flow increases $N(x)$. So $N(x)$ always tends to its equilibrium value monotonically, and the rate of convergence is linear in the deviation, which implies that the system is perfectly damped. This result is the fundamental justification for using the Metropolis algorithm to calculate nonequilibrium and kinetic phenomena.

Convergence Theorem—Conclusion
We still need to evaluate $P(x \to y)/P(y \to x)$. If the trial move $x \to y$ is allowed, so is $y \to x$, with equal trial probability (detailed balance), i.e., $T(x \to y) = T(y \to x)$. So the ratio is set by the acceptance probabilities alone. If $w(y) \ge w(x)$, then $x \to y$ is always accepted while $y \to x$ is accepted with probability $w(x)/w(y)$; while if $w(y) < w(x)$, then $x \to y$ is accepted with probability $w(y)/w(x)$ while $y \to x$ is always accepted. So in either case
$\frac{P(x \to y)}{P(y \to x)} = \frac{w(y)}{w(x)}$,
and the equilibrium condition gives $N_{eq}(x) \propto w(x)$ after normalizing by the total number of walkers. □ Note that this result is independent of how you choose the trial state given $x_n$, as long as your algorithm has nonzero transition probabilities for all initial conditions and obeys detailed balance (the second line above).
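The theorem is easy to check empirically: run the walk well past equilibration and compare a histogram of the samples with $w(x)$. A minimal sketch with the same Gaussian weight (the burn-in length and bin count are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(6)
w = lambda x: np.exp(-0.5 * x * x)     # target weight (unnormalized Gaussian)

x, samples = 0.0, []
for i in range(200_000):
    trial = x + rng.uniform(-1.0, 1.0)
    if w(trial) / w(x) > rng.uniform():
        x = trial
    if i > 10_000:                      # discard the pre-equilibrium walk
        samples.append(x)

hist, edges = np.histogram(samples, bins=20, range=(-3, 3), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
target = w(centers) / np.sqrt(2 * np.pi)   # normalized w(x) for comparison
print(np.max(np.abs(hist - target)))       # small: the density tracks w(x)
```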