Introduction to Quantum Monte Carlo Methods


1 Introduction to Quantum Monte Carlo Methods
Claudio Attaccalite

2 Outline
A bit of probability theory
Variational Monte Carlo
Wave-Function and Optimization

3 Definition of probability
$P(E_i) = p_i = \dfrac{\text{number of successful events}}{\text{total number of experiments}}$, in the limit of a large number of experiments, with normalization $\sum_{i=1}^{N} p_i = 1$.
Joint probability $p_{i,j}$: probability of the composite event (both $i$ and $j$ occur).
Marginal probability $p_i = \sum_k p_{i,k}$: probability of $i$, whatever the second event may be.
Conditional probability $p(j \mid i)$: probability that $j$ occurs, given that the event $i$ occurred.

4 More Definitions
Mean value: $\langle x \rangle = \sum_i x_i p_i$. The mean value $\langle x \rangle$ is the expected average value after repeating the same experiment many times.
Variance: $\mathrm{var}(x) = \langle x^2 \rangle - \langle x \rangle^2 = \sum_i (x_i - \langle x \rangle)^2 p_i$. The variance is a non-negative quantity that vanishes only if all the events with non-vanishing probability give the same value of the variable $x_i$.
Standard deviation: $\sigma = \sqrt{\mathrm{var}(x)}$. The standard deviation is taken as a measure of the dispersion of the variable $x$.

5 Chebyshev's inequality
$P\!\left( (x - \langle x \rangle)^2 \ge \dfrac{\mathrm{var}(x)}{\delta} \right) \le \delta \quad$ for $\delta \le 1$.
If the variance is small, the random variable $x$ becomes "more" predictable, in the sense that its value $x_i$ at each event is close to $\langle x \rangle$ with probability at least $1 - \delta$.

6 Extension to Continuous Variables
Cumulative probability: $F(y) = P(x \le y)$. Clearly $F(y)$ is a monotonically increasing function and $F(\infty) = 1$.
Probability density: $\rho(y) = \dfrac{dF(y)}{dy}$. Obviously $\rho(y) \ge 0$.
For discrete distributions: $\rho(y) = \sum_i p_i\, \delta\big(y - x(E_i)\big)$.

7 The law of large numbers and the Central Limit Theorem
The average of $x$ is obtained by averaging over a large number $N$ of independent realizations of the same experiment:
$\bar{x} = \dfrac{1}{N} \sum_i x_i, \qquad \langle \bar{x} \rangle = \langle x \rangle, \qquad \mathrm{var}(\bar{x}) = \langle \bar{x}^2 \rangle - \langle \bar{x} \rangle^2 = \dfrac{1}{N}\,\mathrm{var}(x)$
$\rho(\bar{x}) \simeq \sqrt{\dfrac{N}{2\pi\sigma^2}}\; e^{-N(\bar{x} - \langle x \rangle)^2 / (2\sigma^2)}$
The average of $x$ is Gaussian distributed for large $N$, and its standard deviation decreases as $1/\sqrt{N}$ (Central Limit Theorem).
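A minimal numerical check of the $1/\sqrt{N}$ scaling, not from the original slides; the choice of a uniform random variable and the sample sizes are arbitrary:

```python
import numpy as np

# Standard deviation of the sample mean versus N for x ~ Uniform(0, 1).
rng = np.random.default_rng(0)
for N in (10, 100, 1000, 10000):
    means = rng.random((1000, N)).mean(axis=1)   # 1000 independent averages of N samples
    # Expected: sigma/sqrt(N), with sigma = sqrt(1/12) for Uniform(0, 1).
    print(N, means.std(), np.sqrt(1 / 12 / N))
```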

8 Monte Carlo Example: Estimating π
If you are a very poor dart player, it is easy to imagine throwing darts randomly at the figure (a circle inscribed in a square). Of the total number of darts that hit within the square, the number of darts that hit the shaded part is proportional to the area of that part.

9 In other words: $P(\text{inside}) = \dfrac{\pi r^2}{4 r^2} = \dfrac{\pi}{4}$, and therefore $\pi \approx 4 \times \dfrac{N_{\text{inside}}}{N_{\text{total}}}$.
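An illustrative sketch of the dart-throwing estimate (not from the slides; the number of throws is arbitrary):

```python
import numpy as np

# Throw N random "darts" in the square [-1, 1] x [-1, 1] and count the hits
# inside the unit circle; the ratio of areas gives pi/4.
rng = np.random.default_rng(0)
N = 1_000_000
x, y = rng.uniform(-1.0, 1.0, size=(2, N))
inside = (x**2 + y**2 <= 1.0).sum()
print("pi estimate:", 4.0 * inside / N)
```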

10 A Simple Integral
Consider the simple integral $I = \int_a^b f(x)\, dx$. This can be evaluated in the same way as the π example: by randomly tossing darts (sampling points) in the interval $[a, b]$ and evaluating the function $f(x)$ at these points, so that $I \approx (b - a)\,\dfrac{1}{N}\sum_{i=1}^{N} f(x_i)$.
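A hedged sketch of this sample-mean estimate, using an arbitrary integrand f(x) = x² on [0, 1] purely as an example (the exact value is 1/3):

```python
import numpy as np

# Sample-mean Monte Carlo estimate of the integral of f over [a, b].
rng = np.random.default_rng(0)

def f(x):
    return x**2          # illustrative integrand; exact integral on [0, 1] is 1/3

a, b, N = 0.0, 1.0, 100_000
x = rng.uniform(a, b, N)
estimate = (b - a) * f(x).mean()
error = (b - a) * f(x).std() / np.sqrt(N)   # statistical error bar ~ 1/sqrt(N)
print(estimate, "+/-", error)
```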

11 The electronic structure problem
P. A. M. Dirac: "The fundamental laws necessary for the mathematical treatment of a large part of physics and the whole of chemistry are thus completely known, and the difficulty lies only in the fact that application of these laws leads to equations that are too complex to be solved."

12 Variational Monte Carlo
Monte Carlo integration is necessary because the wave-function contains explicit particle correlations that lead to non-factorizable multi-dimensional integrals.

13 How to sample a given probability distribution?

14 Solution: Markov chain, a random walk in configuration space
A Markov chain is a stochastic dynamics in which a random variable $x_n$ evolves according to $x_{n+1} = F(x_n, \xi_n)$, where $\xi_n$ is a random variable. Since $x_n$ and $x_{n+1}$ are not independent, we can define a joint probability to go from the first to the second:
$f_n(x_{n+1}, x_n) = K(x_{n+1} \mid x_n)\, \rho_n(x_n)$
where $\rho_n(x_n)$ is the marginal probability of being in $x_n$ and $K(x_{n+1} \mid x_n)$ is the conditional probability to go from $x_n$ to $x_{n+1}$.
Master equation: $\rho_{n+1}(x_{n+1}) = \sum_{x_n} K(x_{n+1} \mid x_n)\, \rho_n(x_n)$

15 Limit distribution of the Master equation
$\rho_{n+1}(x_{n+1}) = \sum_{x_n} K(x_{n+1} \mid x_n)\, \rho_n(x_n)$
1) Does there exist a limiting distribution $\bar{\rho}(x)$?
2) Starting from a given arbitrary configuration, under which conditions do we converge to it?
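As an illustration that is not in the original slides, iterating the master equation for a simple two-state chain with a made-up kernel shows rapid convergence to a limiting distribution, independent of the starting point:

```python
import numpy as np

# Master equation rho_{n+1} = K rho_n for a two-state Markov chain.
# K[x_new, x_old] is the conditional probability to go from x_old to x_new;
# the numbers are an arbitrary illustrative choice (columns sum to 1).
K = np.array([[0.9, 0.5],
              [0.1, 0.5]])
rho = np.array([1.0, 0.0])      # arbitrary initial distribution
for n in range(50):
    rho = K @ rho               # one step of the master equation
print("limit distribution:", rho)       # ~ [0.833, 0.167]
print("stationarity check :", K @ rho)  # equals rho, i.e. K rho_bar = rho_bar
```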

16 Sufficient and necessary conditions for the convergence
The answer to the first question requires that $\bar{\rho}(x_{n+1}) = \sum_{x_n} K(x_{n+1} \mid x_n)\, \bar{\rho}(x_n)$. In order to satisfy this requirement it is sufficient, but not necessary, that the so-called detailed balance holds:
$K(x' \mid x)\, \bar{\rho}(x) = K(x \mid x')\, \bar{\rho}(x')$
The answer to the second question requires ergodicity, namely that every configuration $x'$ can be reached in a sufficiently large number of Markov iterations, starting from any initial configuration $x$.

17 Nicholas Metropolis (1915-1999)
The algorithm by Metropolis (and A. Rosenbluth, M. Rosenbluth, A. Teller and E. Teller, 1953) has been cited as among the top 10 algorithms having the "greatest influence on the development and practice of science and engineering in the 20th century."
"The code that was to become the famous Monte Carlo method of calculation originated from a synthesis of insights that Metropolis brought to more general applications in collaboration with Stanislaw Ulam. A team headed by Metropolis, which included Anthony Turkevich from Chicago, carried out the first actual Monte Carlo calculations on the ENIAC. Metropolis attributes the germ of this statistical method to Enrico Fermi, who had used such ideas some 15 years earlier. The Monte Carlo method, with its seemingly limitless potential for development to all areas of science and economic activities, continues to find new applications." From Physics Today, Oct. 2000, Vol. 53, No. 10.

18 Metropolis Algorithm
We want: 1) a Markov chain that, for large $n$, converges to $\bar{\rho}(x)$; 2) a conditional probability $K(x' \mid x)$ that satisfies detailed balance with this probability distribution.
Solution! (by Metropolis and friends):
$K(x' \mid x) = A(x' \mid x)\, T(x' \mid x), \qquad A(x' \mid x) = \min\!\left(1,\; \dfrac{\bar{\rho}(x')\, T(x \mid x')}{\bar{\rho}(x)\, T(x' \mid x)}\right)$
where $T(x' \mid x)$ is a general and reasonable transition probability from $x$ to $x'$.

19 The Algorithm
1) Start from a random configuration $x$.
2) Generate a new configuration $x'$ according to $T(x' \mid x)$.
3) Accept or reject according to the Metropolis rule.
4) Evaluate our function, and iterate.
Important: it is not necessary to have a normalized probability distribution (or wave-function!).
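A minimal sketch of these steps (not from the original slides), sampling the unnormalized density ρ(x) ∝ exp(−x⁴) with a symmetric Gaussian trial move; the target, the step size, and the number of steps are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def rho(x):
    # Unnormalized target density: the normalization constant is never needed.
    return np.exp(-x**4)

x = rng.normal()                      # 1) start from a random configuration
samples = []
for step in range(50_000):
    x_new = x + rng.normal(scale=0.5) # 2) symmetric trial move T(x'|x) = T(x|x')
    if rng.random() < min(1.0, rho(x_new) / rho(x)):
        x = x_new                     # 3) accept with the Metropolis rule
    samples.append(x)                 # 4) evaluate/accumulate our function of x

samples = np.array(samples)
print("<x>   =", samples.mean())      # ~ 0 by symmetry
print("<x^2> =", (samples**2).mean())
```

Because the trial move is symmetric, T(x'|x) = T(x|x') and the acceptance ratio reduces to ρ(x')/ρ(x).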

20 More or less we have arrived: we can evaluate this integral
$\langle A \rangle = \dfrac{\int \Psi(R)\, \hat{A}\, \Psi(R)\, dR}{\int \Psi(R)^2\, dR} = \dfrac{\int A_L(R)\, \Psi^2(R)\, dR}{\int \Psi(R)^2\, dR}$, with the local estimator $A_L(R) = \dfrac{\hat{A}\Psi(R)}{\Psi(R)}$,
and its variance
$\mathrm{var}(A) = \dfrac{\int A_L^2(R)\, \Psi^2(R)\, dR}{\int \Psi(R)^2\, dR} - \langle A \rangle^2$
But we just need a wave-function!
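To make this concrete, here is a hedged sketch (not from the slides) of variational Monte Carlo for a single particle in a 1D harmonic oscillator with the trial wave-function Ψ_α(x) = exp(−αx²); the local energy E_L(x) = α + (1/2 − 2α²)x² follows from H = −½ d²/dx² + ½ x², and the exact ground-state energy 1/2 is recovered at α = 1/2:

```python
import numpy as np

rng = np.random.default_rng(0)

def psi(x, alpha):
    return np.exp(-alpha * x**2)          # trial wave-function (unnormalized)

def local_energy(x, alpha):
    # E_L = (H psi)/psi for H = -1/2 d^2/dx^2 + 1/2 x^2
    return alpha + (0.5 - 2.0 * alpha**2) * x**2

def vmc(alpha, n_steps=100_000, step=1.0):
    x, energies = 0.0, []
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        # Metropolis rule on |psi|^2 (normalization never needed)
        if rng.random() < (psi(x_new, alpha) / psi(x, alpha))**2:
            x = x_new
        energies.append(local_energy(x, alpha))
    e = np.array(energies)
    return e.mean(), e.var()

print(vmc(0.4))   # E_V > 0.5, finite variance
print(vmc(0.5))   # E_V = 0.5 exactly, zero variance (exact ground state)
```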

21 The trial wave-function
The trial wave-function completely determines the quality of the approximation for the physical observables. The simplest WF is the Slater-Jastrow form:
$\Psi(r_1, r_2, \ldots, r_n) = \mathrm{Det}|\varphi|\; \exp(U_{\mathrm{corr}})$
Det|φ|: from DFT, CI, HF, from scratch, etc.
Other functional forms: BCS pairing, multi-determinant, Pfaffian.

22 Optimization strategies
In order to obtain a good variational wave-function, it is possible to optimize the WF by minimizing one of the following functionals, or a linear combination of both.
The variational energy:
$E_V(a, b, c, \ldots) = \dfrac{\int \Psi(a, b, c, \ldots)\, H\, \Psi(a, b, c, \ldots)\, dR}{\langle \Psi \mid \Psi \rangle}$
The variance of the energy (always positive, and zero for the exact ground state!):
$\sigma^2(a, b, c, \ldots) = \left\langle \left( \dfrac{H\Psi}{\Psi} \right)^2 \right\rangle_{\Psi^2} - E_V^2$
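As an illustration that is not part of the original slides, for the 1D harmonic oscillator with the Gaussian trial wave-function $\Psi_\alpha(x) = e^{-\alpha x^2}$ (the same toy problem as in the VMC sketch above) both functionals can be worked out analytically:
\[
E_V(\alpha) = \frac{\alpha}{2} + \frac{1}{8\alpha},
\qquad
\sigma^2(\alpha) = \frac{\left(\tfrac{1}{2} - 2\alpha^2\right)^2}{8\alpha^2},
\]
so both are minimized at the exact ground-state parameter $\alpha = 1/2$, where $E_V = 1/2$ and $\sigma^2 = 0$.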

23 And finally an application!!!

24 2D electron gas
The Hamiltonian (with lengths in units of $r_s a_B$, where $r_s = \dfrac{1}{\sqrt{\pi n}\, a_B}$):
$H = -\dfrac{1}{2 r_s^2} \sum_i^N \nabla_i^2 + \dfrac{1}{r_s} \sum_{i<j}^N \dfrac{1}{|r_i - r_j|}$
Unpolarized phase / Wigner crystal.

25 2D electron gas: the phase diagram
We found a new phase of the 2D electron gas at low density: a stable spin-polarized phase before the Wigner crystallization.

26 Difficulties with VMC
The many-electron wavefunction is unknown and has to be approximated. It may seem hopeless to have to actually guess the wavefunction, but the method is surprisingly accurate when it works.

27 The Limitation of VMC
Nothing can really be done if the trial wavefunction isn't accurate enough. Moreover, it favours simple states over more complicated ones. Therefore, there are other methods, for example diffusion QMC.

28 Next Monday: Diffusion Monte Carlo and the sign problem; applications.
Then: finite-temperature path-integral Monte Carlo, the one-dimensional electron gas, excited states, the one-body density matrix, diagrammatic Monte Carlo.

29 References
S. Sorella, G. E. Santoro and F. Becca, SISSA Lectures on Numerical Methods for Strongly Correlated Electrons, 4th draft (2008).
I. Kosztin, B. Faber and K. Schulten, Introduction to the Diffusion Monte Carlo Method, physics/ v1 (1995).
FreeScience.info -> Quantum Monte Carlo

30 Exact conditions: electron-nucleus cusp condition
When one electron approaches a nucleus, the wave-function reduces to a simple hydrogen-like form. The same condition holds when two electrons meet (the electron-electron cusp condition) and can be satisfied with a two-body Jastrow factor.
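The slide does not write out the formulas; for reference (an addition, not from the original), the standard Kato cusp conditions for a Coulomb potential read:
\[
\left.\frac{1}{\Psi}\,\frac{\partial \Psi}{\partial r_{iA}}\right|_{r_{iA}\to 0} = -Z_A
\quad\text{(electron-nucleus)},
\qquad
\left.\frac{1}{\Psi}\,\frac{\partial \Psi}{\partial r_{ij}}\right|_{r_{ij}\to 0} = \frac{1}{2}
\quad\text{(electron-electron, antiparallel spins)},
\]
with the derivatives understood as spherical averages around the coalescence point; for parallel spins the electron-electron value is $1/4$.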

