Computational Physics (Lecture 18)



Molecular dynamics simulations
Most physical systems are collections of interacting objects: a drop of water contains more than 10^22 water molecules, and a galaxy is a collection of millions and millions of stars. No analytical solution can be found for an interacting system of more than two objects. We can solve the problem of a two-body system, such as the Earth–Sun system, analytically, but not a three-body system, such as the Moon–Earth–Sun system.

The situation is similar in quantum mechanics: one can obtain the energy levels of the hydrogen atom (one electron and one proton) analytically, but not those of the helium atom (two electrons and a nucleus). Numerical techniques are needed to study a system of a large number of interacting objects, the so-called many-body system.

There is still a distinction between a three-body system, such as the Moon–Earth–Sun system, and a much more complicated system, such as a drop of water. Statistical mechanics has to be applied to the latter.

The methods for solving Newton's equation discussed in earlier chapters can, in principle, be used to solve the equations of motion of such a system. However, those methods are not as practical as MD in terms of the speed and accuracy of the computation, given the statistical nature of large systems.

General behavior of a classical system
MD solves the dynamics of a classical many-body system described by a Hamiltonian of the form sketched below. Here E_K and E_P are the kinetic energy and potential energy of the system; m_i, r_i, and p_i are the mass, position vector, and momentum of the ith particle; and V(r_ij) and U(r_i) are the corresponding interaction energy and external potential energy. From Hamilton's principle, the position vector and momentum satisfy Hamilton's equations, also sketched below.
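The equations themselves are not reproduced in this transcript. A standard form consistent with the definitions above (the notation on the original slides may differ slightly) is

H = E_K + E_P = \sum_{i=1}^{N} \frac{p_i^2}{2 m_i} + \sum_{i > j} V(r_{ij}) + \sum_{i=1}^{N} U(r_i),

and Hamilton's equations then read

\frac{d r_i}{dt} = \frac{\partial H}{\partial p_i} = \frac{p_i}{m_i}, \qquad \frac{d p_i}{dt} = -\frac{\partial H}{\partial r_i} = f_i.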

Here the force f_i on the ith particle is given by the gradient of the potential-energy terms, sketched below. There are several ways to simulate a many-body system. Most simulations are done either through a stochastic process, such as a Monte Carlo simulation, or through a deterministic process, such as a molecular dynamics simulation. Some numerical simulations are performed in a hybrid of the two, for example, Langevin dynamics and Brownian dynamics.
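A hedged reconstruction of the force expression, consistent with the Hamiltonian above:

f_i = -\nabla_{r_i} \Big[ \sum_{j \ne i} V(r_{ij}) + U(r_i) \Big].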

Another issue is the distribution function of the system. In statistical mechanics, each physical environment is described by a corresponding ensemble.

For an isolated system, we use the microcanonical ensemble, which assumes a constant total energy, number of particles, and volume. A system in good contact with a thermal bath is dealt with using the canonical ensemble, which assumes a constant temperature, number of particles, and volume (or pressure). For any given ensemble, the system is described by a probability function W(R, P), which is in general a function of the phase space consisting of all the coordinates and momenta of the particles, R = (r_1, r_2, . . . , r_N) and P = (p_1, p_2, . . . , p_N), and of other quantities, such as the temperature, the total particle number of the system, and so forth.

For the canonical ensemble, the distribution is the Boltzmann factor of the Hamiltonian, sketched below, where T is the temperature of the system, k_B is the Boltzmann constant, and N is a normalization constant.
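The distribution itself is not shown in the transcript; the standard canonical form consistent with this description is

W(R, P) = N \exp\!\big[ -H(R, P) / k_B T \big],

with N fixed by the normalization of W.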

We can separate the position dependence and the momentum dependence in W(R, P) if they are not coupled in H. Any average of a momentum-dependent quantity then becomes quite simple, because the momentum enters H only quadratically.

So we concentrate on the position dependence here. The statistical average of a physical quantity A(R, P) is then given by the ensemble average, which equals the long-time average along the simulated trajectory (see the sketch below),
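Hedged reconstruction of the missing expression (standard form, with W normalized; the slides' notation may differ):

\langle A \rangle = \int A(R, P)\, W(R, P)\, dR\, dP = \lim_{T \to \infty} \frac{1}{T} \int_0^T A\big(R(t), P(t)\big)\, dt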

The second equality holds if the system is ergodic, that is, if every possible state is accessed with equal probability. Because molecular dynamics simulations are deterministic in nature, almost all physical quantities are obtained through time averages. Sometimes the average over all the particles is also needed to characterize the system. For example, the average kinetic energy of the system can be obtained from any ensemble average, and the result is given by the equipartition theorem,
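A standard statement of this result, consistent with the definitions in the next paragraph:

\langle E_K \rangle = \frac{G}{2}\, k_B T,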

where G is the total number of degrees of freedom. For a very large system, G ≈ 3N, because each particle has three degrees of freedom. In molecular dynamics simulations, the average kinetic energy can be obtained from the time average ⟨E_K⟩ ≈ (1/M) Σ_{j=1}^{M} E_K(t_j), where M is the total number of data points taken at different time steps and E_K(t_j) is the kinetic energy of the system at time t_j. If the system is ergodic, the time average is equivalent to the ensemble average. The temperature T of the simulated system is then obtained from the average kinetic energy through the equipartition theorem, T = 2⟨E_K⟩/(G k_B).
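As a concrete illustration, here is a minimal Python sketch of this temperature estimate. The array layout, the function name, and the reduced units (k_B = 1 by default) are illustrative assumptions, not taken from the original lecture.

import numpy as np

def instantaneous_temperature(vel, mass, k_B=1.0):
    """Estimate T = 2*E_K / (G*k_B) from the particle velocities.

    vel  : (N, 3) array of velocities
    mass : (N,)   array of masses
    G is taken as 3*(N - 1), which removes the center-of-mass degrees of
    freedom (see the discussion of periodic boundary conditions below).
    """
    e_kin = 0.5 * np.sum(mass[:, None] * vel**2)   # total kinetic energy E_K
    dof = 3 * (vel.shape[0] - 1)                   # G = 3(N - 1)
    return 2.0 * e_kin / (dof * k_B)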

Basic methods for many-body systems
In general, we can define an n-body density function (see the sketch below), where dR_n = dr_{n+1} dr_{n+2} · · · dr_N. Note that the particle density ρ(r) = ρ_1(r) is the special case n = 1. The two-body density function is related to the pair-distribution function g(r, r′) through the relation given below,
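The defining equations are not reproduced in the transcript; standard forms consistent with the text (a hedged reconstruction, not necessarily the slides' exact notation) are

\rho_n(r_1, \dots, r_n) = \frac{N!}{(N - n)!}\, \frac{\int W(R)\, dR_n}{\int W(R)\, dR},

and

\rho(r)\, \rho(r')\, g(r, r') = \langle \hat\rho(r)\, \hat\rho(r') \rangle - \delta(r - r')\, \rho(r),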

where the first term on the right-hand side is the so-called density–density correlation function. Here ρ̂(r) is the density operator, defined as a sum of delta functions at the particle positions (see below). The density of the system is given by the average of the density operator. If the density of the system is nearly constant, the expression for g(r, r′) reduces to a much simpler form that depends only on the separation r between the two points, where ρ is the average density.
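Hedged reconstruction of these definitions (standard forms):

\hat\rho(r) = \sum_{i=1}^{N} \delta(r - r_i), \qquad \rho(r) = \langle \hat\rho(r) \rangle,

and, for a nearly uniform system,

g(r) = \frac{1}{\rho N} \Big\langle \sum_{i} \sum_{j \ne i} \delta\big(r - (r_j - r_i)\big) \Big\rangle.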

If the angular distribution is not the information needed, we can take the angular average to obtain the radial distribution function, where θ and φ are the polar and azimuthal angles of the spherical coordinate system. The pair-distribution (radial distribution) function is related to the static structure factor S(k) through a Fourier transform, and we can likewise take the angular average of S(k); both relations are sketched below. The structure factor of a system can be measured in light- or neutron-scattering experiments.
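Standard forms consistent with the text (a hedged reconstruction; the slides' conventions for factors of ρ and 2π may differ):

g(r) = \frac{1}{4\pi} \int g(\mathbf{r})\, \sin\theta\, d\theta\, d\phi,

S(\mathbf{k}) = 1 + \rho \int \big[ g(\mathbf{r}) - 1 \big]\, e^{-i \mathbf{k} \cdot \mathbf{r}}\, d\mathbf{r},

and after the angular average

S(k) = 1 + 4\pi\rho \int_0^\infty r^2 \big[ g(r) - 1 \big]\, \frac{\sin kr}{kr}\, dr.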

Question: What is the general shape of the pair-distribution function for a crystal and for a liquid? Can you tell the difference between SC and FCC structures based on the pair-distribution function?

The behavior of the pair-distribution function provides a lot of information about the translational order of the particles in the system. For example, a solid structure has a pair-distribution function with sharp peaks at the distances of nearest neighbors, next-nearest neighbors, and so forth. If the system is a liquid, the pair-distribution function still has broad peaks at the average distances of nearest neighbors, next-nearest neighbors, and so forth, but the features fade away after several peaks.

If the bond-orientational order is important, one can also define an orientational correlation function in terms of a quantity q_n(r) associated with the orientation of a specific bond. Detailed discussions on orientational order can be found in Strandburg (1992). Here we discuss how one can calculate ρ(r) and g(r) in a numerical simulation. The density at a specific point is given by the number of particles in a small sampling sphere divided by its volume (sketched below), where Ω(r, Δr) is the volume of a sphere centered at r with radius Δr and N(r, Δr) is the number of particles in that volume. Note that we may need to adjust the radius Δr to obtain a smooth and realistic density distribution ρ(r) for a specific system. The average is taken over the time steps.
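A hedged sketch of the estimator described above:

\rho(r) = \frac{\langle N(r, \Delta r) \rangle}{\Omega(r, \Delta r)}, \qquad \Omega(r, \Delta r) = \frac{4\pi}{3}\, (\Delta r)^3.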

Similarly, we can obtain the radial distribution function numerically. We measure the distance r from the position of a specific particle r_i; the radial distribution function g(r) is then the probability of finding another particle at a distance r.

Numerically, we count particles in spherical shells (sketched below), where ΔΩ(r, Δr) ≈ 4πr²Δr is the volume of a spherical shell with radius r and thickness Δr, and ΔN(r, Δr) is the number of particles in the shell with the ith particle at the center of the sphere. The average is taken over the time steps as well as over the particles, if necessary.
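A hedged reconstruction of the numerical estimator:

g(r) \approx \frac{\langle \Delta N(r, \Delta r) \rangle}{\rho\, \Delta\Omega(r, \Delta r)}, \qquad \Delta\Omega(r, \Delta r) \approx 4\pi r^2 \Delta r.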

The dynamics of the system can be measured from the displacement of the particles. We can evaluate the time dependence of the mean-square displacement Δ²(t) of all the particles (sketched below), where r_i(t) is the position vector of the ith particle at time t. For a solid system, Δ²(t) is relatively small and does not grow with time; the particles are in nondiffusive, or oscillatory, states. For a liquid system, Δ²(t) grows linearly with time, where D is the self-diffusion coefficient (a measure of the motion of a particle in a medium of identical particles) and Δ²(0) is a time-independent constant. The particles are then in diffusive, or propagating, states.
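Hedged reconstruction of the two expressions (standard forms; the numerical prefactor assumes three dimensions):

\Delta^2(t) = \frac{1}{N} \sum_{i=1}^{N} \big\langle\, |r_i(t) - r_i(0)|^2 \,\big\rangle,

and, for a liquid in three dimensions,

\Delta^2(t) \simeq 6 D t + \Delta^2(0),

so D can be estimated from one sixth of the long-time slope of Δ²(t).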

The very first issue in a numerical simulation of a bulk system is how to extend a finite simulation box to model a nearly infinite system. A common practice is to use a periodic boundary condition, that is, to approximate an infinite system by piling up identical simulation boxes periodically. A periodic boundary condition removes the conservation of the angular momentum of the simulated system (the particles in one simulation box) but still preserves the translational symmetry of the center of mass. The temperature is then related to the average kinetic energy by E_K = (3/2)(N − 1) k_B T, where the factor (N − 1) accounts for the removal of the motion of the center of mass.

The next issue is how to include the interactions among particles in different simulation boxes. If the interaction is short-ranged, one can truncate it at a cut-off length r_c. The interaction V(r_c) has to be small enough that the truncation does not affect the simulation results significantly. A typical simulation box usually has dimensions much larger than r_c. For a three-dimensional cubic box with sides of length L, the total interaction potential can then be evaluated with far fewer terms than the N(N − 1)/2 possible pairs in the system. For example, if L/2 > r_c and |x_ij|, |y_ij|, and |z_ij| are all smaller than L/2, we use V_ij = V(r_ij); otherwise, we use the corresponding image of the particle in the neighboring box (the minimum-image convention). In order to avoid a finite jump at the truncation, one can also shift the interaction to V(r) − V(r_c) to make sure that it is zero at the cut-off.
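A minimal Python sketch of this bookkeeping, assuming a cubic box of side L with L/2 > r_c. The Lennard-Jones form of the pair potential and all function names are illustrative assumptions; the lecture does not specify a particular potential.

import numpy as np

def minimum_image(dr, L):
    """Map each component of a separation vector into (-L/2, L/2]."""
    return dr - L * np.rint(dr / L)

def pair_potential(r):
    """Illustrative pair potential V(r): Lennard-Jones in reduced units."""
    return 4.0 * (r**-12 - r**-6)

def total_potential(pos, L, rc):
    """Total interaction energy with truncation and shift, V(r) - V(rc)."""
    shift = pair_potential(rc)          # makes the pair energy vanish at r = rc
    energy = 0.0
    for i in range(len(pos) - 1):
        dr = minimum_image(pos[i + 1:] - pos[i], L)   # nearest periodic images
        r = np.sqrt(np.sum(dr**2, axis=1))
        close = r < rc                                # keep only pairs inside the cutoff
        energy += np.sum(pair_potential(r[close]) - shift)
    return energy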

The pressure of a bulk system can be evaluated from the pair-distribution function through the virial expression sketched below, which follows from the virial theorem relating the average kinetic energy to the average potential energy of the system. The correction due to the truncation of the potential is then given by the tail of the same integral beyond r_c, which is useful for estimating the influence of the truncation of the interaction potential on the pressure. Numerically, one can also evaluate the pressure from a time average over the pairwise forces, because g(r) can be interpreted as the probability of seeing another particle at a distance r. This form can be evaluated quite easily, because at every time step the force f_ij = −∇V(r_ij) is calculated for each particle pair.
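Hedged reconstructions of the three expressions mentioned here (standard virial forms; the prefactors may be written differently on the original slides):

P = \rho k_B T - \frac{2\pi \rho^2}{3} \int_0^\infty r^3\, \frac{dV(r)}{dr}\, g(r)\, dr,

the tail (truncation) correction, obtained by extending the same integral beyond the cut-off with g(r) \approx 1,

\Delta P \approx - \frac{2\pi \rho^2}{3} \int_{r_c}^\infty r^3\, \frac{dV(r)}{dr}\, dr,

and the time-average, pairwise-force form used directly in a simulation,

P = \frac{N k_B T}{V} + \frac{1}{3V} \Big\langle \sum_{i<j} \mathbf{r}_{ij} \cdot \mathbf{f}_{ij} \Big\rangle.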

The Verlet algorithm
Hamilton's equations are equivalent to Newton's equation. To simplify the notation, we can rewrite Newton's equation as d²R/dt² = G(R), where G is the acceleration. If we apply the three-point formula to the second-order derivative d²R/dt², with t = kτ, and also apply the three-point formula to the velocity, and then put everything together, we obtain the simplest algorithm for a classical many-body system, the Verlet algorithm, sketched below.
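The update equations are not reproduced in the transcript; the standard Verlet form consistent with this description is

R_{k+1} = 2 R_k - R_{k-1} + \tau^2 G_k + O(\tau^4),

V_k = \frac{R_{k+1} - R_{k-1}}{2\tau} + O(\tau^2),

where G_k is the acceleration (force divided by mass) evaluated at the configuration R_k.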

The Verlet algorithm can be started if the first two positions, R_0 and R_1, of the particles are given. However, in practice only the initial position R_0 and initial velocity V_0 are given; therefore, we need to figure out R_1 before we can start the recursion. A common practice is to treat the force during the first time interval [0, τ] as a constant and apply the kinematic equation to obtain R_1 ≈ R_0 + τ V_0 + (τ²/2) G_0, where G_0 is the acceleration vector evaluated at the initial configuration R_0.

Of course, the position R_1 can be improved by carrying the Taylor expansion to higher-order terms if the accuracy of the first two points is critical. We can also replace G_0 with the average (G_0 + G_1)/2, with G_1 evaluated at R_1. This procedure can be iterated several times before starting the algorithm for the velocity V_1 and the next position R_2.

The Verlet algorithm has advantages and disadvantages. It preserves time reversibility, which is one of the important properties of Newton's equation, although rounding error may eventually destroy this time symmetry. The error in the velocity is two orders of magnitude higher than the error in the position.

In many applications we may only need information about the positions of the particles, and the Verlet algorithm yields very high accuracy for the position. If the velocity is not needed, we can skip its evaluation entirely, since the update of the position does not depend on the velocity at each time step. But how do we solve the problem that the velocity is not evaluated at the same time step as the position?

The biggest disadvantage of the Verlet algorithm is that the velocity is evaluated one time step behind the position. However, this lag can be removed if the velocity is evaluated directly from the force. A two-point formula would yield V_{k+1} = V_k + τ G_k + O(τ²).

We would get much better accuracy if we replaced G_k with the average (G_k + G_{k+1})/2. The new position can be obtained by treating the motion within t ∈ [kτ, (k + 1)τ] as motion with a constant acceleration G_k. The result is a variation of the Verlet algorithm with the velocity calculated at the same time step as the position, sketched below. Note that the evaluation of the position still has the same accuracy, because the updated velocity provides the cancelation of the third-order term in the new position.
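A hedged reconstruction of this velocity-form scheme (often called the velocity Verlet algorithm), together with a minimal Python sketch. The function and variable names in the code are illustrative assumptions, not taken from the original lecture.

R_{k+1} = R_k + \tau V_k + \frac{\tau^2}{2} G_k,

V_{k+1} = V_k + \frac{\tau}{2}\big( G_k + G_{k+1} \big).

import numpy as np

def velocity_verlet(pos, vel, accel_of, tau, n_steps):
    """Advance positions and velocities with both updated at the same time step.

    accel_of(pos) must return the acceleration array G for a configuration.
    """
    acc = accel_of(pos)
    for _ in range(n_steps):
        pos = pos + tau * vel + 0.5 * tau**2 * acc   # constant-acceleration step for R
        new_acc = accel_of(pos)                      # G_{k+1} at the new configuration
        vel = vel + 0.5 * tau * (acc + new_acc)      # average of G_k and G_{k+1}
        acc = new_acc
    return pos, vel

# Example usage: a single particle in a harmonic well, where accel_of = lambda x: -x
# pos, vel = velocity_verlet(np.array([1.0]), np.array([0.0]), lambda x: -x, 0.01, 1000)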