
1 A typical experiment in a real (not virtual) space (figure: a container of material held at fixed P and T):
1. Some material is put in a container at fixed T & P.
2. The material undergoes thermal fluctuation, producing lots of different configurations (a set of microscopic states) over a given amount of time. It is Mother Nature who generates all the microstates.
3. An apparatus is plugged in to measure an observable (a macroscopic quantity) as an average over all the microstates produced by the thermal fluctuation.
How do we mimic Mother Nature in a virtual space to realize lots of microstates, all of which correspond to a given macroscopic state? How do we mimic the apparatus in a virtual space to obtain a macroscopic quantity (or property, or observable) as an average over all the microstates?

2 In a real-space experiment (~10^23 particles), microscopic states (microstates), i.e. microscopic configurations under external constraints (N or μ, V or P, T or E, etc.), are generated naturally by thermal fluctuation; together they form an ensemble (microcanonical, canonical, grand canonical, etc.). Averaging over a collection of microstates gives the macroscopic quantities (properties, observables) that are measured in true experiments:
- thermodynamic: μ or N, E or T, P or V, Cv, Cp, H, ΔS, ΔG, etc.
- structural: pair correlation function g(r), etc.
- dynamical: diffusion, etc.
In a virtual-space simulation, how do we mimic Mother Nature to realize lots of microstates, all of which correspond to a given macroscopic state? It is we who need to generate them, by the MC or MD method!

3 Molecular Dynamics (MD) vs. Monte Carlo (MC)
Molecular Dynamics Simulation (deterministic):
- Starts from an initial microstate (a collection of positions & velocities).
- Solves Newton's equations of motion under the inter-particle potential V (and force F).
- Microstates are generated by integration over time.
- Gives the time evolution (trajectory), i.e. dynamic behavior over time, and hence a direct connection with true experiments.
- Gives both equilibrium properties and dynamic properties.
Monte Carlo Simulation (stochastic = random, or based on random numbers):
- Microstates are generated by stochastic sampling involving a random number generator.
- Gives equilibrium properties only (no time dependence, no dynamics!).
- Solves mathematical problems using stochastic sampling (like rolling dice).
- Simulates any process whose development is influenced by random factors, and also enables the artificial construction of a probabilistic model.
- Randomly selects values following a probability distribution (e.g. bell curve, linear distribution, etc.) to create scenarios of a problem.
- Apt to consume large computing resources ("method of last resort"); historically executed on the fastest computers available at the time.

4 History of Monte Carlo Simulation
John von Neumann, Stan Ulam, and Nick Metropolis are considered to have founded the method through their collaboration at Los Alamos on the Manhattan Project during World War II. The name "Monte Carlo" comes from the Monte Carlo Casino (gambling house) in Monaco and first appeared in the article "The Monte Carlo Method" by Metropolis and Ulam (1949).
Well before 1949, certain problems in statistics were solved by means of random sampling. Buffon experimentally determined a value of π by casting a needle on a ruled grid (1768), and Fredericks & Levy showed how random sampling can be used to solve boundary value problems (1928). Kelvin used random sampling techniques to initialize trajectories of particles undergoing elastic collisions with container walls (1901); this pointed to the failure of the equipartition law and contributed to the foundation of statistical mechanics. Fermi used the method in calculations of neutron diffusion in nuclear reactors (1930s). A formal foundation for the method was developed by von Neumann (for PDEs) in the 1940s.
However, simulation of random variables by hand was a laborious process. Stan Ulam realized the importance of the computer in the implementation of the approach. Using MC as a universal numerical technique became practical only with the advent of computers (ENIAC, MANIAC, etc.) and high-quality pseudorandom number generators.

5 “Monte Carlo” Casino

6

7 Various Applications of Monte Carlo Techniques
- Integration (especially of high-dimensional functions)
- System simulation
- Physical phenomena: nuclear power, radiation, thermodynamics, etc. (The use of MC in the area of nuclear power has undergone an important evolution.)
- Quantum Monte Carlo: wave functions and expectation values (QMC gives the most accurate method for general quantum many-body systems.)
- Simulation of games (bingo, solitaire, etc.)
- Weather, equipment productivity, risk analysis and management
- VLSI design: tolerance analysis
- Computer graphics: rendering
Projects are often associated with a high degree of uncertainty and complexity resulting from the unpredictable nature of events and their multi-dimensionality. MC generates multiple scenarios depending on the assumptions fed into the model: it repeatedly inserts different values, sampled from the probability distributions of the uncertain variables, into the computerized spreadsheet.

8

9

10 Short History of Molecular Dynamics Simulation
1957, 1959: Alder & Wainwright, introduction of basic MD of hard-sphere particles
1964: Rahman, liquid Ar (LJ 6-12, NVE) – first quantitative study with a realistic potential
1967: Verlet, the Verlet numerical integration algorithm & the Verlet neighbor list
1974: Stillinger & Rahman, water – first study on a realistic system
1980, 1981: Andersen; Parrinello & Rahman, constant-pressure (NPT) MD
1984, 1986: Nosé; Hoover, constant-temperature (NVT) MD
1985: Car & Parrinello, ab initio MD (AIMD) based on Density Functional Theory
2013: Karplus, Levitt & Warshel, Nobel Prize in Chemistry
References:
- Frenkel & Smit, Understanding Molecular Simulations, 2nd ed., Ch. 4 & 6
- Allen & Tildesley, Computer Simulation of Liquids, 1991, Ch. 3
- Leach, Molecular Modeling, 2nd ed., pp. 353-406

11 MC Application No. 1: How to evaluate integrals A = ∫_a^b f(x) dx
1. Analytical integration, if possible – not available for many functions (very limited).
2. Numerical integration (summation) – rectangular/trapezoidal/parabolic rules.
3. Numerical integration (Monte Carlo) – random sampling of the area enclosed by a < x < b and 0 < y < fmax.
(figure: curve f(x) over [a, b] enclosing the area A)

12 MC Application No. 1: How to evaluate integrals
3. Numerical integration (hit-and-miss Monte Carlo): random sampling of the area enclosed by a < x < b and 0 < y < fmax.
Choose at random M points in the rectangle a ≤ x ≤ b, 0 ≤ y ≤ fmax. Designate the number of points lying under the curve y = f(x) by M'. It is geometrically obvious that the area A is approximately equal to the fraction M'/M of the rectangle's area, i.e. A ≈ (M'/M)(b − a) fmax. The greater the number of drawings or trials M, the greater the accuracy of this estimate.
(figure: curve f(x) on [a, b] inside the bounding rectangle of height fmax)
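As a concrete sketch (added for illustration, not code from the slides; the integrand f(x) = x^2 on [0, 2] and the bound fmax = 4 are arbitrary example choices), a minimal hit-and-miss integrator in C:

    /* Hit-and-miss MC integration: estimate A = integral of f over [a,b] by
       throwing M random points into the rectangle [a,b] x [0,fmax] and counting
       the fraction M'/M that falls under the curve: A ~ (M'/M)*(b-a)*fmax. */
    #include <stdio.h>
    #include <stdlib.h>

    static double f(double x) { return x * x; }     /* example integrand */

    int main(void)
    {
        const double a = 0.0, b = 2.0, fmax = 4.0;  /* fmax >= max of f on [a,b] */
        const long M = 1000000;
        long hits = 0;                              /* M' = points under the curve */
        srand(42);
        for (long i = 0; i < M; i++) {
            double x = a + (b - a) * (rand() / (RAND_MAX + 1.0));
            double y = fmax * (rand() / (RAND_MAX + 1.0));
            if (y < f(x))
                hits++;
        }
        printf("MC estimate = %f (exact = %f)\n",
               (double)hits / M * (b - a) * fmax, 8.0 / 3.0);
        return 0;
    }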

13 1st example of MC: let's calculate π! Hit-and-miss (or rejection) MC method.
π = 4A, where A = area of the first quadrant of a circle of radius r = 1. Equivalent to integrating the equation of the circle.
(figure: quarter circle of radius 1 inside the unit square, with random points (xi, yi))

14  = 3.14159265359… 1st example of MC: Let’s calculate  ! Hit-and-Miss (or Rejection) MC Method

15 1st example of MC: let's calculate π! Hit-and-miss (or rejection) MC method.
N = 10,000        Pi = 3.104385
N = 100,000       Pi = 3.139545
N = 1,000,000     Pi = 3.139668
N = 10,000,000    Pi = 3.141774
…
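A short C sketch of this quarter-circle estimate (an illustration, not the original code; the printed values depend on the random number generator and seed, so they will not match the table digit for digit, only its trend):

    /* Hit-and-miss estimate of pi: throw N points uniformly into the unit
       square; the fraction inside the quarter circle x^2 + y^2 <= 1
       approximates A = pi/4, so pi ~ 4*hits/N. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        srand(1);
        for (long N = 10000; N <= 10000000; N *= 10) {
            long hits = 0;
            for (long i = 0; i < N; i++) {
                double x = rand() / (RAND_MAX + 1.0);
                double y = rand() / (RAND_MAX + 1.0);
                if (x * x + y * y <= 1.0)
                    hits++;
            }
            printf("N = %10ld   Pi = %f\n", N, 4.0 * hits / N);
        }
        return 0;
    }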

16 Example of the importance of a small region: measuring the depth of the Nile.
(figure: systematic quadrature or uniform sampling vs. importance sampling, an importance-weighted random walk; from Frenkel and Smit, Understanding Molecular Simulations)

17 Importance sampling for MC simulation ("importance-weighted random walk")
Sampling points from a uniform distribution may not be the best way to do MC. When most of the weight of the integral comes from a small range of x where f(x) is large, sampling more often in this region increases the accuracy of the MC estimate.
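A minimal 1D illustration in C (an example added here, with an integrand and weight chosen for clarity, not taken from the slides): the integral of f(x) = exp(-x)/sqrt(x) over [0, 1] has most of its weight near x = 0, where uniform sampling performs poorly; drawing x from p(x) = 1/(2 sqrt(x)) (i.e. x = u^2 with u uniform) and averaging the smooth weight f(x)/p(x) = 2 exp(-x) converges much faster.

    /* Uniform vs. importance sampling for I = integral_0^1 exp(-x)/sqrt(x) dx
       (exact value ~1.49365). The integrand diverges at x = 0, so most of the
       weight comes from a small region near the origin. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    int main(void)
    {
        const long M = 1000000;
        double sum_uni = 0.0, sum_imp = 0.0;
        srand(12345);
        for (long i = 0; i < M; i++) {
            double u = (rand() + 1.0) / (RAND_MAX + 2.0);  /* u in (0,1), never 0 */
            sum_uni += exp(-u) / sqrt(u);   /* (a) average f(x) with x uniform      */
            double x = u * u;               /* (b) x drawn from p(x) = 1/(2 sqrt(x)) */
            sum_imp += 2.0 * exp(-x);       /*     average the weight f(x)/p(x)      */
        }
        printf("uniform sampling    : %f\n", sum_uni / M);
        printf("importance sampling : %f\n", sum_imp / M);
        return 0;
    }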

18 Beyond 1D integrals: a system of N particles in a container of volume V in contact with a thermostat at temperature T (constant NVT, the "canonical ensemble"; β = 1/kT).
The particles interact with each other through a potential energy U(r^N) (e.g. a sum of pair potentials). U(r^N) is the potential energy of a microstate {r^N} = {x1, y1, z1, …, xN, yN, zN}.
ρ(r^N) is the probability of finding the microstate {r^N} under the constant-NVT constraint (the external constraint).
The partition function Z (required for normalization) is the weighted sum over all microstates compatible with the constant-NVT condition: a 3N-dimensional integral, or a sum for discrete microstates.
The average of an observable O, ⟨O⟩, over all microstates compatible with constant NVT (the ensemble average) is likewise a 3N-dimensional integral, or a sum for discrete microstates.
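The formulas the slide refers to (the equation images are not preserved in this transcript; these are the standard canonical-ensemble expressions consistent with the definitions above):

    \rho(\mathbf{r}^N) = \frac{e^{-\beta U(\mathbf{r}^N)}}{Z},
    \qquad
    Z = \int d\mathbf{r}^N \, e^{-\beta U(\mathbf{r}^N)},
    \qquad
    \langle O \rangle = \frac{1}{Z}\int d\mathbf{r}^N \, O(\mathbf{r}^N)\, e^{-\beta U(\mathbf{r}^N)}

and, for discrete microstates,

    \rho_i = \frac{e^{-\beta E_i}}{Z},
    \qquad
    Z = \sum_i e^{-\beta E_i},
    \qquad
    \langle O \rangle = \sum_i O_i\, \rho_i ,
    \qquad
    \beta = \frac{1}{k_B T}.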

19 1885 – Johann Balmer – Line spectrum of hydrogen
1886 – Heinrich Hertz – Photoelectric effect experiment
1897 – J. J. Thomson – Discovery of electrons from cathode-ray experiments
1900 – Max Planck – Quantum theory of blackbody radiation
1905 – Albert Einstein – Quantum theory of the photoelectric effect
1910 – Ernest Rutherford – Scattering experiment with α-particles
1913 – Niels Bohr – Quantum theory of hydrogen spectra
1923 – Arthur Compton – Scattering experiment of photons off electrons
1924 – Wolfgang Pauli – Exclusion principle – Ch. 10
1924 – Louis de Broglie – Matter waves
1925 – Davisson and Germer – Diffraction experiment on wave properties of electrons
1926 – Erwin Schrödinger – Wave equation – Ch. 2
1927 – Werner Heisenberg – Uncertainty principle – Ch. 6
1927 – Max Born – Interpretation of the wave function – Ch. 3
Ludwig Boltzmann, a pioneer in atomic theory: the Boltzmann factor in the Maxwell-Boltzmann distribution is what was used in the derivation on the previous slide.

20 Example of the importance of a small region: the energy funnel in protein folding.
Cyrus Levinthal formulated the "Levinthal paradox" (late 1960s):
- Consider a protein molecule composed of (only) 100 residues,
- each of which can assume (only) 3 different conformations.
- The number of possible structures of this protein is 3^100 ≈ 5×10^47.
- Assume that it takes (only) 100 fs to convert from one structure to another.
- It would require 5×10^34 s = 1.6×10^27 years to "systematically" explore all possibilities.
- This long time disagrees with the actual folding time (μs–ms). ⇒ Levinthal's paradox
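A quick check of this arithmetic in C (added here purely as an illustration of the estimate):

    /* Levinthal estimate: 3^100 conformations at 100 fs per conversion. */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double n_conf  = pow(3.0, 100.0);       /* ~5.2e47 structures         */
        double t_total = n_conf * 100e-15;      /* 100 fs per step, in seconds */
        printf("conformations: %.2e\n", n_conf);
        printf("total time   : %.2e s = %.2e years\n",
               t_total, t_total / (365.25 * 24.0 * 3600.0));
        return 0;
    }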

21 Can we use Monte Carlo to compute Z? (even for a very simple, minimum-size, discrete case)
Consider a model of spins on a 2D lattice (an idealized magnetic model) ⇒ a discrete system.
- Each of N spins can take 2 states, up (↑) and down (↓) ⇒ 2^N microstates.
- Each spin interacts only with its nearest neighbors (nn) ⇒ 4 neighbors on a 2D square lattice.
- Suppose that it takes 10^-6 (or 10^-15) s to compute the interaction of a spin with its neighbors ⇒ time to calculate the energy E_i of a microstate i = N × 10^-6 (or N × 10^-15) s.
For N = 100 spins:
- 2^100 ~ 10^30 microstates
- 10^-4 (or 10^-13) s to calculate the energy of one microstate
⇒ ~10^26 (or 10^17) s to estimate Z! (~ age of the universe ~ 13.8 billion years ~ 4 × 10^17 s)
The situation gets worse for a real system (continuous, larger, with beyond-nn interactions)!
⇒ We cannot compute Z (and hence the absolute free energy −kT ln Z) for real systems! (However, we can compute a relative free energy between two states.)
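To make the counting argument concrete, here is a brute-force sketch in C (an illustration added here, not from the slides) that enumerates all 2^N microstates of a tiny L×L nearest-neighbor spin lattice and sums the Boltzmann weights; the coupling J and the temperature 1/β are arbitrary. Already a 5×5 lattice (2^25 ≈ 3×10^7 states) is noticeably slow, and N = 100 is hopeless, which is the point of the slide.

    /* Brute-force partition function of a tiny L x L Ising-like lattice:
       E here accumulates -(sum over nearest-neighbor pairs of s_a*s_b);
       the physical energy of a microstate is J*E, and Z = sum_i exp(-beta*J*E_i). */
    #include <stdio.h>
    #include <math.h>

    #define L 4                 /* lattice edge; N = L*L spins */
    #define N (L * L)

    int main(void)
    {
        const double J = 1.0, beta = 0.5;        /* arbitrary coupling and 1/kT */
        double Z = 0.0;

        for (unsigned long config = 0; config < (1UL << N); config++) {
            int E = 0;
            for (int x = 0; x < L; x++) {
                for (int y = 0; y < L; y++) {
                    int s  = (config >> (x * L + y)) & 1 ? 1 : -1;
                    /* right and down neighbors with periodic boundaries:
                       each nearest-neighbor pair is counted exactly once */
                    int sr = (config >> (((x + 1) % L) * L + y)) & 1 ? 1 : -1;
                    int sd = (config >> (x * L + (y + 1) % L)) & 1 ? 1 : -1;
                    E -= s * (sr + sd);
                }
            }
            Z += exp(-beta * J * (double)E);
        }
        printf("N = %d spins, Z = %g, free energy -kT ln Z = %g\n",
               N, Z, -log(Z) / beta);
        return 0;
    }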

22 Can we use Monte Carlo to compute ⟨O⟩?
What contributes the most to ⟨O⟩? The instantaneous values {O_i} of the microstates with high probability (= with large Boltzmann weight = with low energy).
We just showed that we cannot compute all the microstates. Let's take a subset, for example 10^6 states for the spin system in the previous example. How do we choose these states?
a) Brute-force Monte Carlo (hit & miss; sample mean by uniform sampling): randomly pick a microstate i (i.e. the orientation of each spin).
→ Too high a probability that the subset doesn't contribute to the average.
→ The situation is worst for a continuous dense system (e.g. for hard spheres, only about 1 in 10^260 randomly generated configurations is acceptable)!
b) Importance sampling: pick a microstate i with high probability (i.e. large ρ_i); sample according to ρ_i.
→ We need to use a normalized distribution {ρ_i}.
→ The normalization requires Z, and we showed that we cannot compute Z!
c) Metropolis importance sampling: pick microstates i with large ρ_i without calculating the normalization.
→ A biased random walk in phase space.

23 Biased random walk in configuration space: the Metropolis Monte Carlo method (flow chart)
1. Initialize the positions.
2. Select a particle at random and calculate the energy.
3. Give the particle a random displacement and calculate the new energy.
4. Accept the move with the Metropolis acceptance probability (next slide).
5. Finished? If no, go back to step 2; if yes, calculate the ensemble average.
Kristen A. Fichthorn, Penn State U.

24 Choices of Metropolis for canonical ensembles — N. Metropolis et al., J. Chem. Phys. 21, 1087 (1953)
1. Detailed balance condition at equilibrium: the transition (trial) probability α and the acceptance probability acc must satisfy it.
2. Symmetric trial probability: α(o→n) = α(n→o).
3. The limiting probability distribution for the canonical ensemble is the Boltzmann distribution.
4. Define the acceptance probability so that it naturally satisfies this condition:
→ Accept all downhill moves.
→ Accept uphill moves only when they are not too uphill.
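Written out explicitly (the slide's equation images are not included in the transcript; these are the standard forms the items above describe, with ρ_i ∝ exp(−βU_i) the Boltzmann weight):

    \rho_o \,\alpha(o \to n)\,\mathrm{acc}(o \to n) \;=\; \rho_n \,\alpha(n \to o)\,\mathrm{acc}(n \to o)
    \quad\Rightarrow\quad
    \frac{\mathrm{acc}(o \to n)}{\mathrm{acc}(n \to o)} \;=\; \frac{\rho_n}{\rho_o} \;=\; e^{-\beta\,[\,U(n)-U(o)\,]}
    \quad (\text{for symmetric } \alpha),

    \mathrm{acc}(o \to n) \;=\; \min\!\left(1,\; e^{-\beta\,[\,U(n)-U(o)\,]}\right).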

25 Metropolis Monte Carlo Molecular Simulation (David A. Kofke, SUNY Buffalo)
It almost always involves a Markov process: move to a new configuration from the existing one according to a well-defined transition probability.
Simulation procedure:
1. Generate a new "trial" configuration by making a perturbation to the present configuration (state k → state k+1).
2. Accept the new configuration based on the ratio of the probabilities for the new and old configurations, according to the Metropolis algorithm.
3. If the trial is rejected, the present configuration is taken as the next one in the Markov chain.
4. Repeat this many times, accumulating sums for averages.
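A self-contained sketch of this procedure in C (an illustration added here, not code from the slides): a single degree of freedom x in a harmonic well U(x) = x^2/2 is sampled from the Boltzmann distribution and the average of x^2 is accumulated; for this potential the exact canonical average is ⟨x^2⟩ = kT = 1/β, which the run should reproduce.

    /* Minimal Metropolis Monte Carlo: sample x from exp(-beta*U(x)). */
    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    static double U(double x)        { return 0.5 * x * x; }
    static double uniform01(void)    { return rand() / (RAND_MAX + 1.0); }

    int main(void)
    {
        const double beta = 2.0, dmax = 1.0;     /* 1/kT and max trial displacement */
        const long nstep = 1000000, nequil = 10000;
        double x = 0.0, sum_x2 = 0.0;
        long naccept = 0, nsample = 0;
        srand(2024);

        for (long step = 0; step < nstep; step++) {
            double xnew = x + dmax * (2.0 * uniform01() - 1.0);   /* trial move */
            double dU = U(xnew) - U(x);
            /* Metropolis criterion: accept downhill moves always,
               uphill moves with probability exp(-beta*dU) */
            if (dU <= 0.0 || uniform01() < exp(-beta * dU)) {
                x = xnew;
                naccept++;
            }
            /* if rejected, the old configuration is counted again (step 3 above) */
            if (step >= nequil) {
                sum_x2 += x * x;
                nsample++;
            }
        }
        printf("<x^2> = %f (exact 1/beta = %f), acceptance = %.2f\n",
               sum_x2 / nsample, 1.0 / beta, (double)naccept / nstep);
        return 0;
    }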

26 Michael B. Hall, Texas A&M U.

27

28

29

30

31

32

33

34

35

36

37

38

39

40

41

42 Thermodynamic limit: V → ∞
In a simulation, particles (atoms, molecules, macromolecules, spins, etc.) are confined in a finite-size cell. The particles interact:
- bonded interactions (bonds, angles, torsions) connect atoms into molecules;
- non-bonded interactions act between distant atoms.
Particles on the surface of the cell experience different interactions than those in the bulk! The total number of particles is always "small" (with respect to N_A): the fraction of surface particles will significantly alter the average of any observable with respect to the value expected in the thermodynamic limit V → ∞.
Example: a simple atomic system of N particles in a simple-cubic crystal state:
- N = 10 × 10 × 10 = 10^3: ~60% surface atoms
- N = 10^4: ~30% surface atoms
- N = 10^5: ~13% surface atoms
- N = 10^6: ~6% surface atoms (but a big computational system!)
N_s/N ~ 6 × N^(2/3) / N = 6 / N^(1/3)
(exact calculation: N_s = 6 × (N^(1/3) − 2)^2 + 12 × (N^(1/3) − 2) + 8; for N = 10^3, 49% surface atoms)
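A short numerical check of these percentages (added for illustration; the cell edges n = 10, 22, 46, 100 approximate N = 10^3 … 10^6):

    /* Surface-atom fraction of an n x n x n simple-cubic cell:
       exact count N_s = 6(n-2)^2 + 12(n-2) + 8 vs. the estimate 6/N^(1/3). */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        const int n_list[] = {10, 22, 46, 100};
        for (int i = 0; i < 4; i++) {
            int n = n_list[i];
            long N  = (long)n * n * n;
            long Ns = 6L * (n - 2) * (n - 2) + 12L * (n - 2) + 8;   /* exact */
            printf("N = %8ld : exact %.1f%%, estimate 6/N^(1/3) = %.1f%%\n",
                   N, 100.0 * Ns / N, 100.0 * 6.0 / cbrt((double)N));
        }
        return 0;
    }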

43

44

45 Periodic boundary conditions (PBC) (from Allen & Tildesley)
When a particle leaves the cell, one of its images comes in. The images are not kept in memory: each particle position is checked after a move and folded back into the cell.
Surface effects are removed, but the largest fluctuations that can be represented are ~L (the cell size). If the system exhibits large fluctuations (e.g., near a 2nd-order phase transition), PBC will still lead to artefacts (finite-size effects). Finite-size effects can be studied by considering cells of different sizes.
A … H: images of the cell. Remark: the cell does not have to be cubic.

46 Periodic Boundary Conditions and Non-Bonded Interactions
The non-bonded pair interaction is usually truncated at a cutoff radius r_c.
(figure: cutoff sphere of radius r_c inside the periodic cell of edge L)
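A minimal sketch of how such a cutoff is typically combined with PBC via the minimum-image convention (an assumption about what the figure illustrates, not code from the slides; the helper name pair_distance_2d is made up here):

    /* Minimum-image pair distance in 2D: the separation between particles i and j
       is folded to the nearest periodic image; the pair interaction is then
       evaluated only if r < r_c (with r_c <= L/2). */
    #include <math.h>

    double pair_distance_2d(double xi, double yi, double xj, double yj,
                            double Lx, double Ly)
    {
        double dx = xi - xj;
        double dy = yi - yj;
        dx -= Lx * rint(dx / Lx);   /* fold into [-Lx/2, Lx/2]: nearest image */
        dy -= Ly * rint(dy / Ly);
        return sqrt(dx * dx + dy * dy);
    }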

47

48

49

50 Periodic Boundary Condition – Implementation (2D Case)

1. Real coordinates (cell of size Lx × Ly centered at the origin):

    /* (xi, yi): particle i coordinates */
    if      (xi >  Lx/2) xi = xi - Lx;
    else if (xi < -Lx/2) xi = xi + Lx;
    if      (yi >  Ly/2) yi = yi - Ly;
    else if (yi < -Ly/2) yi = yi + Ly;

2. Scaled (between [-0.5, 0.5]) coordinates (better to handle any cell shape); orthorhombic cell case:

    #define NINT(x) ((x) < 0.0 ? (int) ((x) - 0.5) : (int) ((x) + 0.5))

    sxi = xi / Lx;              /* (sxi, syi): particle i scaled coordinates */
    syi = yi / Ly;
    sxi = sxi - NINT(sxi);      /* apply PBC: subtract the nearest integer */
    syi = syi - NINT(syi);
    xi  = sxi * Lx;             /* (xi, yi): particle i folded real coordinates */
    yi  = syi * Ly;

51

52

53

54 Plotting Molecular Dynamics Properties
Equilibration step: allows atoms and molecules to find more natural positions with respect to one another.
MD phase: molecular properties (structures, energies, etc.) are accumulated for future analysis.

55 Michael B. Hall, Texas A&M U.

56 Molecular Dynamics: What Are Current Simulation Capabilities?
Time scales of biological processes:
- femtosecond (fs) = 10^-15 second
- picosecond (ps) = 10^-12 second
- nanosecond (ns) = 10^-9 second
- microsecond (μs) = 10^-6 second

57 Michael B. Hall, Texas A&M U.

58

59

60

61

62

63 Statistical Ensembles Supported in Amber Microcanonical ensemble (NVE) : The thermodynamic state characterized by a fixed number of atoms, N, a fixed volume, V, and a fixed energy, E. This corresponds to an isolated system. Canonical Ensemble (NVT): This is a collection of all systems whose thermodynamic state is characterized by a fixed number of atoms, N, a fixed volume, V, and a fixed temperature, T. Isobaric-Isothermal Ensemble (NPT): This ensemble is characterized by a fixed number of atoms, N, a fixed pressure, P, and a fixed temperature, T.

64 Michael B. Hall, Texas A&M U.

65

66

67 Exploring Conformational Space: Simulated Annealing
(figure: temperature and energy profile vs. time, with alternating heating and cooling phases used to escape local minima)

