
Computational Physics (Lecture 18) PHY4061

Molecular dynamics simulations
Most physical systems are collections of interacting objects.
– A drop of water contains an enormous number of water molecules.
– A galaxy is a collection of millions and millions of stars.
No analytical solution can be found for an interacting system with more than two objects.
– We can solve the problem of a two-body system, such as the Earth–Sun system, analytically, but not a three-body system, such as the Moon–Earth–Sun system.

The situation is similar in quantum mechanics:
– One can obtain the energy levels of the hydrogen atom (one electron and one proton) analytically, but not those of the helium atom (two electrons and a nucleus).
Numerical techniques are needed to study a system of a large number of interacting objects, the so-called many-body system.

There is a distinction between three-body systems, such as the Moon–Earth–Sun system, and much more complicated systems, such as a drop of water: statistical mechanics has to be applied to the latter.

The methods for solving Newton's equation discussed in earlier chapters can be used to solve the equations of motion of such a system.
– However, those methods are not as practical as MD in terms of the speed and accuracy of the computation, given the statistical nature of large systems.

General behavior of a classical system
MD solves the dynamics of a classical many-body system described by the Hamiltonian
H = E_K + E_P = Σ_i p_i²/2m_i + Σ_{i<j} V(r_ij) + Σ_i U(r_i).
Here E_K and E_P are the kinetic energy and potential energy of the system; m_i, r_i, and p_i are the mass, position vector, and momentum of the ith particle; and V(r_ij) and U(r_i) are the corresponding interaction energy and external potential energy.
From Hamilton's principle, the position vector and momentum satisfy
dr_i/dt = ∂H/∂p_i = p_i/m_i,
dp_i/dt = −∂H/∂r_i = f_i.
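
As a concrete illustration of these definitions (added here; not part of the original slides), the sketch below evaluates E_K, a pairwise potential energy, and the forces f_i for a small set of particles. It assumes a Lennard-Jones form for V(r_ij), no external potential U, free boundaries, identical masses, and arbitrary values for the parameters eps and sigma.

```python
import numpy as np

def lj_energy_forces(pos, eps=1.0, sigma=1.0):
    """Potential energy and forces f_i = -grad_i sum_j V(r_ij) for a
    Lennard-Jones pair interaction; pos is an (N, 3) array of positions
    (free boundaries, no cutoff)."""
    n = len(pos)
    energy = 0.0
    forces = np.zeros_like(pos)
    for i in range(n - 1):
        for j in range(i + 1, n):
            rij = pos[i] - pos[j]
            r2 = np.dot(rij, rij)
            sr6 = (sigma**2 / r2) ** 3
            energy += 4.0 * eps * (sr6**2 - sr6)
            fmag = 24.0 * eps * (2.0 * sr6**2 - sr6) / r2   # -(dV/dr)/r
            forces[i] += fmag * rij      # force on i from j
            forces[j] -= fmag * rij      # Newton's third law
    return energy, forces

def hamiltonian(pos, vel, mass=1.0):
    """H = E_K + E_P for identical particles of the given mass (U = 0 assumed)."""
    e_pot, _ = lj_energy_forces(pos)
    e_kin = 0.5 * mass * np.sum(vel**2)
    return e_kin + e_pot
```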

There are several ways to simulate a many-body system.
– Most simulations are done either through a stochastic process, such as a Monte Carlo simulation, or through a deterministic process, such as a molecular dynamics simulation.
Some numerical simulations are performed in a hybridized form of both,
– for example, Langevin dynamics and Brownian dynamics.
Here the force f_i is given by
f_i = −∇_i [ Σ_{j≠i} V(r_ij) + U(r_i) ].

Another issue is the distribution function of the system. In statistical mechanics, each specific physical environment is dealt with by way of a corresponding ensemble.

For an isolated system, we use the microcanonical ensemble,
– which assumes a constant total energy, number of particles, and volume.
A system in good contact with a thermal bath is dealt with using the canonical ensemble,
– which assumes a constant temperature, number of particles, and volume (or pressure).
For any given ensemble, the system is described by a probability function W(R, P),
– which is in general a function of the phase space, consisting of all coordinates and momenta of the particles, R = (r_1, r_2, ..., r_N) and P = (p_1, p_2, ..., p_N), and of other quantities, such as the temperature, the total particle number of the system, and so forth.

For the canonical ensemble, we have
W(R, P) = N exp[ −H(R, P) / k_B T ],
– where T is the temperature of the system, k_B is the Boltzmann constant, and N is a normalization constant.

We can separate the position dependence and the momentum dependence in W(R, P) if they are not coupled in H. Any average of a momentum-dependent quantity becomes quite simple because of the quadratic behavior of the momentum in H, so we concentrate on the position dependence here. The statistical average of a physical quantity A(R, P) is then given by
⟨A⟩ = ∫ A(R, P) W(R, P) dR dP / ∫ W(R, P) dR dP.

This ensemble average is equivalent to the time average of A along the trajectory if the system is ergodic, that is, if every possible state is accessed with equal probability. Because molecular dynamics simulations are deterministic in nature, almost all physical quantities are obtained through time averages. Sometimes the average over all the particles is also needed to characterize the system.
– For example, the average kinetic energy of the system can be obtained from any ensemble average, and the result is given by the equipartition theorem,
⟨E_K⟩ = G k_B T / 2,

where G is the total number of degrees of freedom. For a very large system, G ≈ 3N, because each particle has three degrees of freedom. In molecular dynamics simulations, the average kinetic energy of the system can be obtained through the time average
⟨E_K⟩ = (1/M) Σ_{j=1}^{M} E_K(t_j),
where M is the total number of data points taken at different time steps and E_K(t_j) is the kinetic energy of the system at time t_j. If the system is ergodic, the time average is equivalent to the ensemble average. The temperature T of the simulated system is given by the average kinetic energy with the application of the equipartition theorem, T = 2⟨E_K⟩ / (G k_B).
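
A minimal sketch of this temperature measurement (added for illustration; not from the slides). It assumes identical particles of mass `mass`, a Boltzmann constant passed in as a parameter, and G = 3(N − 1) when the centre-of-mass motion is removed (see the discussion of periodic boundary conditions later in the lecture).

```python
import numpy as np

def instantaneous_temperature(vel, mass=1.0, kB=1.0, remove_com=True):
    """Temperature from the equipartition theorem, T = 2 E_K / (G kB).
    vel: (N, 3) velocities; G = 3(N - 1) if the centre-of-mass motion
    is removed, otherwise 3N."""
    n = len(vel)
    if remove_com:
        vel = vel - vel.mean(axis=0)   # subtract the centre-of-mass velocity
        dof = 3 * (n - 1)
    else:
        dof = 3 * n
    e_kin = 0.5 * mass * np.sum(vel**2)
    return 2.0 * e_kin / (dof * kB)
```

Averaging this quantity over the M stored time steps gives the simulated temperature T = 2⟨E_K⟩/(G k_B).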

Basic methods for many-body systems
In general, we can define an n-body density function
ρ_n(r_1, r_2, ..., r_n) = [N!/(N − n)!] ∫ W(R) dR_n / ∫ W(R) dR,
where dR_n = dr_{n+1} dr_{n+2} ··· dr_N. Note that the particle density ρ(r) = ρ_1(r) is the special case of n = 1. The two-body density function is related to the pair-distribution function g(r, r′) through
ρ_2(r, r′) = ρ² g(r, r′),

where ρ is the average density and r and r′ are the two points considered. Equivalently,
g(r, r′) = ⟨ρ̂(r) ρ̂(r′)⟩/ρ² − δ(r − r′)/ρ,
where the first term is the so-called density–density correlation function. Here ρ̂(r) is the density operator, defined as
ρ̂(r) = Σ_{i=1}^{N} δ(r − r_i).
The density of the system is given by the average of the density operator,
ρ(r) = ⟨ρ̂(r)⟩.
If the density of the system is nearly a constant, the expression for g(r, r′) can be reduced to a much simpler form that depends only on the separation r = r − r′,
g(r) = (1/ρN) ⟨ Σ_{i≠j} δ(r − r_ij) ⟩.

If the angular distribution is not the information needed, we can take the angular average to obtain the radial distribution function
g(r) = (1/4π) ∫ g(r) sin θ dθ dφ,
where r = |r| and θ and φ are the polar and azimuthal angles of the spherical coordinate system. The pair-distribution (or radial distribution) function is related to the static structure factor S(k) through the Fourier transform
S(k) = 1 + ρ ∫ [g(r) − 1] e^{ik·r} dr.
The angular average of S(k) is given by
S(k) = 1 + 4πρ ∫_0^∞ [g(r) − 1] (sin kr / kr) r² dr.
The structure factor of a system can be measured with a light- or neutron-scattering experiment.
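
The following sketch (added for illustration; not from the slides) evaluates the angular-averaged S(k) from a tabulated g(r) by a simple rectangle-rule quadrature; the uniform radial grid and the array names are assumptions.

```python
import numpy as np

def structure_factor(r, g, rho, k_values):
    """Angular-averaged structure factor from a tabulated g(r):
    S(k) = 1 + 4*pi*rho * integral (g(r) - 1) * sin(k r)/(k r) * r^2 dr.
    r and g are 1D arrays on a uniform radial grid; rho is the number density."""
    dr = r[1] - r[0]
    s = np.empty(len(k_values))
    for idx, k in enumerate(k_values):
        # np.sinc(x) = sin(pi x)/(pi x), so sinc(k r / pi) = sin(k r)/(k r)
        integrand = (g - 1.0) * np.sinc(k * r / np.pi) * r**2
        s[idx] = 1.0 + 4.0 * np.pi * rho * np.sum(integrand) * dr
    return s
```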

The behavior of the pair-distribution function can provide a lot of information about the translational order of the particles in the system. For example, a solid structure has a pair-distribution function with sharp peaks at the distances of nearest neighbors, next-nearest neighbors, and so forth. If the system is a liquid, the pair-distribution function still has some broad peaks at the average distances of nearest neighbors, next-nearest neighbors, and so forth, but the features fade away after several peaks.

If the bond-orientational order is important, one can also define an orientational correlation function in terms of a quantity q_n(r) associated with the orientation of a specific bond. Detailed discussions on orientational order can be found in Strandburg (1992). Here we discuss how one can calculate ρ(r) and g(r) in a numerical simulation. The density at a specific point is given by
ρ(r) = ⟨ N(r, Δr) ⟩ / Ω(r, Δr),
where Ω(r, Δr) is the volume of a sphere centered at r with radius Δr and N(r, Δr) is the number of particles in that volume. Note that we may need to adjust the radius Δr to obtain a smooth and realistic density distribution ρ(r) for a specific system. The average is taken over the time steps.

Similarly, we can obtain the radial distribution function numerically. We measure the radius r from the position of a specific particle, r_i, and then the radial distribution function g(r) is the probability of another particle showing up at a distance r.

Numerically, we have
g(r) = ⟨ ΔN(r, Δr) ⟩ / [ ρ ΔΩ(r, Δr) ],
where ΔΩ(r, Δr) ≈ 4πr²Δr is the volume element of a spherical shell with radius r and thickness Δr, and ΔN(r, Δr) is the number of particles in the shell with the ith particle at the center of the sphere. The average is taken over the time steps as well as over the particles, if necessary.
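
A minimal numerical sketch of this histogram estimate (added here; not from the slides). It assumes a cubic box of side `box` with periodic boundary conditions and identical particles; the bin count and the cutoff at box/2 are arbitrary choices.

```python
import numpy as np

def radial_distribution(pos, box, n_bins=100):
    """Histogram estimate of g(r) for one configuration of N identical
    particles in a cubic box of side `box` with periodic boundaries."""
    n = len(pos)
    r_max = box / 2.0                         # largest unambiguous separation
    dr = r_max / n_bins
    hist = np.zeros(n_bins)
    for i in range(n - 1):
        for j in range(i + 1, n):
            rij = pos[i] - pos[j]
            rij -= box * np.round(rij / box)  # minimum-image convention
            r = np.sqrt(np.dot(rij, rij))
            if r < r_max:
                hist[int(r / dr)] += 2.0      # each pair seen from both particles
    rho = n / box**3
    r_centers = (np.arange(n_bins) + 0.5) * dr
    shell_vol = 4.0 * np.pi * r_centers**2 * dr   # Delta Omega ~ 4 pi r^2 dr
    g = hist / (n * rho * shell_vol)
    return r_centers, g
```

In a production run the histogram would be accumulated over many time steps before normalizing, as the slide notes.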

The dynamics of the system can be measured from the displacement of the particles in the system. We can evaluate the time dependence of the mean-square displacement of all the particles,
Δ²(t) = (1/N) Σ_i ⟨ |r_i(t) − r_i(0)|² ⟩,
where r_i(t) is the position vector of the ith particle at time t. For a solid system, Δ²(t) is relatively small and does not grow with time, and the particles are in nondiffusive, or oscillatory, states. For a liquid system, Δ²(t) grows linearly with time,
Δ²(t) = 6Dt + Δ²(0),
where D is the self-diffusion coefficient (a measure of the motion of a particle in a medium of identical particles) and Δ²(0) is a time-independent constant. The particles are then in diffusive, or propagating, states.
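
A short sketch of this analysis (added for illustration; not from the slides). It assumes an unwrapped trajectory stored as an (M, N, 3) array and estimates D from the long-time slope of Δ²(t); the `skip` argument for discarding the initial ballistic part is a hypothetical convenience.

```python
import numpy as np

def mean_square_displacement(traj):
    """traj: (M, N, 3) array of unwrapped particle positions at M time steps.
    Returns Delta^2(t), the squared displacement from t = 0 averaged over particles."""
    disp = traj - traj[0]                              # r_i(t) - r_i(0)
    return np.mean(np.sum(disp**2, axis=2), axis=1)

def diffusion_coefficient(msd, dt, skip=0):
    """Estimate D from the long-time slope of Delta^2(t) = 6 D t + Delta^2(0)."""
    t = np.arange(len(msd)) * dt
    slope, _intercept = np.polyfit(t[skip:], msd[skip:], 1)
    return slope / 6.0
```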

The very first issue in numerical simulations of a bulk system is how to extend a finite simulation box to model a nearly infinite system. A common practice is to use a periodic boundary condition, that is, to approximate an infinite system by piling up identical simulation boxes periodically.
– A periodic boundary condition removes the conservation of the angular momentum of the simulated system (the particles in one simulation box), but still preserves the translational motion of the center of mass.
– So the temperature is related to the average kinetic energy by ⟨E_K⟩ = (3/2)(N − 1) k_B T, where the factor (N − 1) is due to the removal of the motion of the center of mass.

The next issue is how to include the interactions among the particles in different simulation boxes. If the interaction is a short-range interaction,
– one can truncate it at a cut-off length r_c.
– The interaction V(r_c) has to be small enough that the truncation does not affect the simulation results significantly.
– A typical simulation box usually has dimensions much larger than r_c.
For a three-dimensional cubic box with sides of length L, the total interaction potential can be evaluated with many fewer summations than N(N − 1)/2, the number of possible pairs in the system. For example, if we have L/2 > r_c, and if |x_ij|, |y_ij|, and |z_ij| are all smaller than L/2, we can use V_ij = V(r_ij); otherwise, we use the corresponding point in the neighboring box (the minimum-image convention). In order to avoid a finite jump at the truncation, one can always shift the interaction to V(r) − V(r_c) to make sure that it is zero at the truncation.
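
To make the cut-off, shift, and nearest-image prescription concrete, here is a brief sketch (added here; not part of the lecture). It assumes a Lennard-Jones pair potential, a cubic box of side `box`, and arbitrary parameter values for eps, sigma, and r_cut.

```python
import numpy as np

def truncated_shifted_lj(pos, box, r_cut, eps=1.0, sigma=1.0):
    """Total potential energy with the minimum-image convention and the pair
    interaction truncated at r_cut and shifted so that V(r_cut) = 0."""
    def lj(r2):
        sr6 = (sigma**2 / r2) ** 3
        return 4.0 * eps * (sr6**2 - sr6)

    v_cut = lj(r_cut**2)                       # value subtracted to remove the jump
    n = len(pos)
    energy = 0.0
    for i in range(n - 1):
        for j in range(i + 1, n):
            rij = pos[i] - pos[j]
            rij -= box * np.round(rij / box)   # use the nearest periodic image
            r2 = np.dot(rij, rij)
            if r2 < r_cut**2:
                energy += lj(r2) - v_cut       # V(r) - V(r_c)
    return energy
```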

The pressure of a bulk system can be evaluated from the pair-distribution function through
P = ρ k_B T − (2πρ²/3) ∫_0^∞ V′(r) g(r) r³ dr,
which is useful for estimating the influence on the pressure of the truncation of the interaction potential. Numerically, one can also evaluate the pressure from the time average
PV = N k_B T + (1/3) ⟨ Σ_{i<j} r_ij · f_ij ⟩,
which is the result of the virial theorem relating the average kinetic energy to the average of the virial of the system. The pair sum can be evaluated quite easily, because at every time step the force f_ij = −∇V(r_ij) is calculated for each particle pair. The correction to the pressure due to the truncation of the potential is then given by
ΔP ≈ −(2πρ²/3) ∫_{r_c}^∞ V′(r) r³ dr,
because g(r) can be interpreted as the probability of seeing another particle at a distance r, and g(r) ≈ 1 beyond r_c for a nearly uniform system.
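
A sketch of the virial estimator for the instantaneous pressure (added for illustration; not from the slides), assuming Lennard-Jones pair forces, a cubic periodic box, and 2E_K/3 standing in for Nk_BT; the parameter values are arbitrary.

```python
import numpy as np

def virial_pressure(pos, vel, box, mass=1.0, eps=1.0, sigma=1.0, r_cut=2.5):
    """Instantaneous pressure from P V = N kB T + (1/3) sum_{i<j} r_ij . f_ij,
    using 2 E_K / 3 for N kB T and Lennard-Jones pair forces with a cutoff."""
    n = len(pos)
    volume = box**3
    virial = 0.0
    for i in range(n - 1):
        for j in range(i + 1, n):
            rij = pos[i] - pos[j]
            rij -= box * np.round(rij / box)          # minimum image
            r2 = np.dot(rij, rij)
            if r2 < r_cut**2:
                sr6 = (sigma**2 / r2) ** 3
                fmag = 24.0 * eps * (2.0 * sr6**2 - sr6) / r2
                virial += fmag * r2                   # r_ij . f_ij
    e_kin = 0.5 * mass * np.sum(vel**2)
    return (2.0 * e_kin + virial) / (3.0 * volume)
```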

The Verlet algorithm
Hamilton's equations are equivalent to Newton's equation,
m_i d²r_i/dt² = f_i.
To simplify the notation, we can rewrite this as
d²R/dt² = G(R),
where R = (r_1, r_2, ..., r_N) and G is the corresponding acceleration vector with components g_i = f_i/m_i. If we apply the three-point formula to the second-order derivative d²R/dt², we have, with t = kτ and time step τ,
d²R_k/dt² = (R_{k+1} − 2R_k + R_{k−1}) / τ² + O(τ²).
We can also apply the three-point formula to the velocity,
V_k = (R_{k+1} − R_{k−1}) / (2τ) + O(τ²).
After we put all the above together, we obtain the simplest algorithm, the Verlet algorithm, for a classical many-body system,
R_{k+1} = 2R_k − R_{k−1} + τ² G_k + O(τ⁴),
with
V_k = (R_{k+1} − R_{k−1}) / (2τ) + O(τ²).

The Verlet algorithm can be started if the first two positions, R_0 and R_1, of the particles are given. However, in practice, only the initial position R_0 and the initial velocity V_0 are given. Therefore, we need to figure out R_1 before we can start the recursion. A common practice is to treat the force during the first time interval [0, τ] as a constant and to apply the kinematic equation to obtain
R_1 ≈ R_0 + τ V_0 + τ² G_0 / 2,
where G_0 is the acceleration vector evaluated at the initial configuration R_0.

Of course, the position R_1 can be improved by carrying out the Taylor expansion to higher-order terms if the accuracy of the first two points is critical. We can also replace G_0 with the average (G_0 + G_1)/2, with G_1 evaluated at R_1. This procedure can be iterated several times before starting the algorithm for the velocity V_1 and the next position R_2.

The Verlet algorithm has advantages and disadvantages. It preserves time reversibility, which is one of the important properties of Newton's equation, though rounding errors may eventually destroy this time symmetry. The error in the velocity is two orders of magnitude larger than the error in the position. In many applications we may only need information about the positions of the particles, and the Verlet algorithm yields very high accuracy for the position. If the velocity is not needed, we can ignore its evaluation entirely, since the evaluation of the position does not depend on the velocity at each time step. The biggest disadvantage of the Verlet algorithm is that the velocity is evaluated one time step behind the position. However, this lag can be removed if the velocity is evaluated directly from the force. A two-point formula then yields
V_{k+1} = V_k + τ G_k + O(τ²).

Note that the evaluation of the position still has the same accuracy, because the velocity is now updated at every time step in a way that provides the cancellation of the third-order term in the new position. We would get much better accuracy if we replaced G_k with the average (G_k + G_{k+1})/2. The new position can be obtained by treating the motion within t ∈ [kτ, (k + 1)τ] as motion with a constant acceleration G_k; that is,
R_{k+1} = R_k + τ V_k + τ² G_k / 2.
Then a variation of the Verlet algorithm (the velocity Verlet algorithm), with the velocity calculated at the same time step as the position, is
R_{k+1} = R_k + τ V_k + τ² G_k / 2,
V_{k+1} = V_k + τ (G_k + G_{k+1}) / 2.
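
A compact sketch of this velocity-Verlet update (added for illustration; the function and argument names are placeholders): accel_fn stands for whatever routine returns the acceleration vector G = f/m at a given configuration.

```python
import numpy as np

def velocity_verlet(pos, vel, accel_fn, tau, n_steps):
    """Velocity-Verlet integration:
    R_{k+1} = R_k + tau V_k + tau^2/2 G_k,
    V_{k+1} = V_k + tau/2 (G_k + G_{k+1})."""
    g = accel_fn(pos)
    for _ in range(n_steps):
        pos = pos + tau * vel + 0.5 * tau**2 * g
        g_new = accel_fn(pos)                   # acceleration at the new position
        vel = vel + 0.5 * tau * (g + g_new)
        g = g_new
    return pos, vel
```

For the Lennard-Jones system sketched earlier, accel_fn could be, for example, lambda r: lj_energy_forces(r)[1] / mass.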