Slide 1: Introduction to Quantum Monte Carlo
Dario Bressanini, Università dell'Insubria, Como, Italy
UNAM, Mexico City, 2007
http://scienze-como.uninsubria.it/bressanini
Slide 2: Why do simulations?
- Simulations are a general method for "solving" many-body problems; other methods usually involve approximations.
- Experiment is limited and expensive; simulations can complement the experiment.
- Simulations are feasible even for complex systems.
- They scale up with the available computer power.
Slide 3: Buffon's needle experiment, AD 1777
(Figure: a needle of length L dropped onto a floor ruled with parallel lines a distance d apart.)
Slide 4: Simulations
"The general theory of quantum mechanics is now almost complete. The underlying physical laws necessary for the mathematical theory of a large part of physics and the whole of chemistry are thus completely known, and the difficulty is only that the exact application of these laws leads to equations much too complicated to be soluble." Dirac, 1929
Slide 5: General strategy
- How do we solve a deterministic problem using a Monte Carlo method?
- Rephrase the problem using a probability distribution.
- "Measure" the quantity A by sampling that probability distribution.
Slide 6: Monte Carlo Methods
- The points R_i are generated using random numbers: we introduce noise into the problem!
- Our results have error bars... nevertheless, it can be a good way to proceed.
- This is why the methods are called Monte Carlo methods.
- Metropolis, Ulam, Fermi, von Neumann (ca. 1945)
Slide 7: Stanislaw Ulam (1909-1984)
S. Ulam is credited as the inventor of the Monte Carlo method in the 1940s; it solves mathematical problems using statistical sampling.
Slide 8: Why Monte Carlo?
We can approximate the numerical value of a definite integral from its Riemann-sum definition:

  ∫_a^b f(x) dx ≈ [(b - a)/L] Σ_{i=1}^{L} f(x_i)

where the L points x_i are uniformly spaced.
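As an illustration (not from the slides), here is a minimal sketch of that uniform-grid rule in Python; the function name `quadrature` and the midpoint sampling are my choices:

```python
import math

def quadrature(f, a, b, L):
    # Approximate the integral of f over [a, b] using L uniformly spaced
    # midpoints: the simplest Riemann-sum quadrature, sum of f(x_i) * h.
    h = (b - a) / L
    return sum(f(a + (i + 0.5) * h) for i in range(L)) * h

# Example: the integral of sin(x) over [0, pi] is exactly 2.
approx = quadrature(math.sin, 0.0, math.pi, 1000)
```

With 1000 points in one dimension the answer is already accurate to several digits; the point of the next slides is that this efficiency collapses in high dimension.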
Slide 9: Error in quadrature
- Consider an integral in D dimensions: a uniform grid needs N = L^D points, proportional to the CPU time.
- The error with N sampling points then scales as N^(-1/D).
Slide 10: Monte Carlo estimates of integrals
If we sample the points not on a regular grid but randomly (uniformly distributed), then

  ∫ f(R) dR ≈ (V/N) Σ_{i=1}^{N} f(R_i)

where we assume the integration domain is a regular box of volume V = L^D.
Slide 11: Monte Carlo error
- From probability theory one can show that the Monte Carlo error decreases with sample size N as 1/√N.
- Independent of the dimension D (good).
- To gain another decimal place takes 100 times longer! (bad)
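A minimal sketch of uniform-sampling Monte Carlo integration with its 1/√N error bar (illustrative code, not from the slides; the name `mc_integrate` is mine):

```python
import random

def mc_integrate(f, a, b, N, seed=0):
    # Uniform-sampling Monte Carlo estimate of the integral of f on [a, b],
    # together with the statistical error (b - a) * sigma / sqrt(N).
    rng = random.Random(seed)
    vals = [f(a + (b - a) * rng.random()) for _ in range(N)]
    mean = sum(vals) / N
    var = sum((v - mean) ** 2 for v in vals) / (N - 1)
    return (b - a) * mean, (b - a) * (var / N) ** 0.5

# Example: integral of x^2 on [0, 1]; the exact value is 1/3.
est, err = mc_integrate(lambda x: x * x, 0.0, 1.0, 100_000)
```

Quadrupling N halves `err`, independently of dimension, which is the trade-off the slide describes.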
Slide 12: MC is advantageous for large dimensions
- Error of simple quadrature: N^(-1/D).
- Error of smarter quadrature: N^(-A/D).
- Error of Monte Carlo: always N^(-1/2).
- Monte Carlo is always more efficient for large D (usually D > 4-6).
Slide 13: Monte Carlo estimate of π
We can estimate π using Monte Carlo. (Figure: a quarter circle of unit radius inscribed in the unit square, with the corner (1,0) marked.)
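The quarter-circle picture translates directly into a few lines of code; a sketch (the name `estimate_pi` is mine):

```python
import random

def estimate_pi(N, seed=1):
    # Throw N uniform points into the unit square; the fraction landing
    # inside the quarter circle x^2 + y^2 <= 1 approximates pi / 4.
    rng = random.Random(seed)
    hits = sum(1 for _ in range(N)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / N

pi_est = estimate_pi(1_000_000)
```

With a million points the estimate is typically good to two or three decimals, consistent with the N^(-1/2) error law.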
Slide 14: Monte Carlo integration
- Note that we can automatically estimate the error by computing the standard deviation of the sampled function values.
- All points generated are independent.
- All points generated are used.
Slide 15: Inefficient?
- If the function is strongly peaked, the process is inefficient.
- We should generate more points where the function is large.
- Use a non-uniform distribution!
Slide 16: General Monte Carlo
If the samples are drawn not uniformly but with some probability distribution p(R), we can compute the integral by Monte Carlo as

  I = ∫ f(R) dR = ∫ [f(R)/p(R)] p(R) dR ≈ (1/N) Σ_i f(R_i)/p(R_i)

where p(R) is normalized: ∫ p(R) dR = 1.
Slide 17: Monte Carlo
- Convergence is guaranteed by the Central Limit Theorem.
- The statistical error tends to 0 as p(R) approaches being proportional to f(R): the closer p is to f, the faster the convergence.
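A small sketch of this general recipe (illustrative, not from the slides; the name `importance_sample` and the test integral are my choices): estimate I = ∫_0^∞ √x e^(-x) dx = Γ(3/2) = √π/2 by drawing x from p(x) = e^(-x) and averaging f(x)/p(x) = √x.

```python
import math, random

def importance_sample(N, seed=2):
    # Estimate I = integral of sqrt(x) * exp(-x) over [0, inf)
    # by sampling x from p(x) = exp(-x) via the inverse transform
    # x = -ln(u), and averaging the ratio f(x)/p(x) = sqrt(x).
    rng = random.Random(seed)
    total = 0.0
    for _ in range(N):
        x = -math.log(1.0 - rng.random())   # sample from p(x) = exp(-x)
        total += math.sqrt(x)               # f(x)/p(x)
    return total / N

estimate = importance_sample(100_000)
```

Because p(x) follows the exponential decay of f, the variance of the ratio is small and the estimate converges quickly.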
Slide 18: Warning!
- Beware of Monte Carlo integration routines in libraries: they must be general, so they usually cannot assume anything about your function and can be quite inefficient.
- Also beware of the standard random number generators supplied with compilers (some are known to be bad!).
Slide 19: Equation of state of a fluid
- The problem: compute the equation of state (the pressure p as a function of the particle density N/V) of a fluid in a box, given some interaction potential between the particles.
- Assume that for every configuration R of the particles we can compute the potential energy V(R).
Slide 20: The statistical mechanics problem
For equilibrium properties we can just compute the Boltzmann multi-dimensional integrals

  ⟨A⟩ = ∫ A(R) exp(-E(R)/k_B T) dR / ∫ exp(-E(R)/k_B T) dR

where the energy usually is a sum of pairwise terms, E(R) = Σ_{i<j} v(r_ij).
Slide 21: An inefficient recipe
- For 100 particles (not really the thermodynamic limit), the integrals are in 300 dimensions.
- The naive MC procedure would be to distribute the particles uniformly in the box, throwing them in at random.
- If the density is high, throwing particles at random will put some of them too close to each other.
- Almost all points generated this way give a negligible contribution, due to the Boltzmann factor.
Slide 22: An inefficient recipe
- E(R) becomes very large and positive.
- We should try to generate more points where E(R) is close to its minima.
Slide 23: The Metropolis Algorithm
- How do we do it?
- "Anyone who considers arithmetical methods of producing random digits is, of course, in a state of sin." John von Neumann
- Use the Metropolis algorithm (M(RT)², 1953)... and a powerful computer.
- The algorithm is a random walk (Markov chain) in configuration space; the points are not independent.
Slide 26: Importance Sampling
- The idea is to use importance sampling, that is, to sample more where the function is large.
- "..., instead of choosing configurations randomly, ..., we choose configurations with a probability exp(-E/k_B T) and weight them evenly." - from the M(RT)² paper
Slide 27: The key ideas
- Points are no longer independent!
- We consider a point (a walker) that moves in configuration space according to some rules.
Slide 28: A Markov chain
- A Markov chain is a random walk through configuration space: R_1 → R_2 → R_3 → R_4 → ...
- Given R_n, there is a transition probability to go to the next point R_{n+1}: p(R_n → R_{n+1}), a stochastic matrix.
- In a Markov chain the distribution of R_{n+1} depends only on R_n: there is no memory.
- We must use an ergodic Markov chain.
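A tiny numerical illustration of the stochastic-matrix idea (my own example, not from the slides): iterating any starting distribution through an ergodic transition matrix converges to a unique stationary distribution.

```python
def stationary(P, steps=200):
    # Push a starting distribution repeatedly through the stochastic
    # matrix P; for an ergodic chain it converges to the stationary
    # distribution, independently of the starting point.
    dist = [1.0, 0.0, 0.0]
    for _ in range(steps):
        dist = [sum(dist[i] * P[i][j] for i in range(3)) for j in range(3)]
    return dist

# A 3-state ergodic chain; each row sums to 1.
P = [[0.5, 0.3, 0.2],
     [0.2, 0.6, 0.2],
     [0.3, 0.3, 0.4]]
pi = stationary(P)
```

The limit `pi` satisfies pi = pi P, which is exactly the equilibrium condition the next slides impose on the walk.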
Slide 29: The key ideas
- Choose an appropriate p(R_n → R_{n+1}) so that at equilibrium we sample a desired distribution π(R) (for this problem π = exp(-E/k_B T)).
- A sufficient condition is detailed balance.
- Consider an infinite number of walkers and two positions R and R'.
- At equilibrium, the number of walkers that go from R → R' equals the number that go from R' → R (note that in general p(R → R') ≠ p(R' → R)).
Slide 30: Detailed balance
- π(R) is the distribution we want to sample; we have the freedom to choose p(R → R').
- Detailed balance: π(R) p(R → R') = π(R') p(R' → R).
Slide 31: Rejecting points
- The third key idea is to use rejection to enforce detailed balance.
- p(R → R') is split into a transition step and an acceptance/rejection step: p(R → R') = T(R → R') A(R → R').
- T(R → R') generates the next "candidate" point.
- A(R → R') decides whether to accept or reject this point.
Slide 32: The acceptance probability
Given some T, a possible choice for A is

  A(R → R') = min(1, [π(R') T(R' → R)] / [π(R) T(R → R')])

For a symmetric T this reduces to A(R → R') = min(1, π(R')/π(R)).
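A quick numerical check (my own illustration) that this choice of A satisfies detailed balance for a symmetric T: the probability flux π(R) A(R → R') equals the reverse flux π(R') A(R' → R).

```python
def acceptance(pi_from, pi_to):
    # Metropolis acceptance for a symmetric proposal: A = min(1, pi'/pi).
    return min(1.0, pi_to / pi_from)

# Detailed balance check for two arbitrary densities pi(R) and pi(R').
pi_R, pi_Rp = 0.3, 1.2
flux_fwd = pi_R * acceptance(pi_R, pi_Rp)    # pi(R)  A(R  -> R')
flux_bwd = pi_Rp * acceptance(pi_Rp, pi_R)   # pi(R') A(R' -> R)
```

Whichever direction is "uphill" is always accepted, and the downhill direction is throttled by exactly the ratio needed to balance the flux.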
Slide 33: What it does
- Suppose π(R') ≥ π(R): the move is always accepted.
- Suppose π(R') < π(R): the move is accepted with probability π(R')/π(R) (flip a coin).
- The algorithm samples regions of large π(R).
- Convergence is guaranteed, but the rate is not!
Slide 34: IMPORTANT!
- Accepted and rejected states count the same! When a point is rejected, you add the previous one to the averages again.
- Measure the acceptance ratio; set it to roughly 1/2 by varying the "step size".
- Exact: no time-step error and, in principle, no ergodicity problems (but no dynamics).
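The rules above fit in a few lines. A minimal sketch of a 1D Metropolis sampler (illustrative code, not from the slides; the name `metropolis` and the log-density interface are my choices). Note the rejected moves append the current point again, exactly as the slide prescribes:

```python
import math, random

def metropolis(log_pi, x0, step, n_samples, seed=3):
    # Metropolis random walk sampling a density proportional to exp(log_pi),
    # with a symmetric uniform proposal of half-width `step`.
    rng = random.Random(seed)
    x, lp = x0, log_pi(x0)
    samples, accepted = [], 0
    for _ in range(n_samples):
        x_try = x + step * (2.0 * rng.random() - 1.0)
        lp_try = log_pi(x_try)
        # Accept with probability min(1, pi(x_try)/pi(x)).
        if rng.random() < math.exp(min(0.0, lp_try - lp)):
            x, lp = x_try, lp_try
            accepted += 1
        samples.append(x)   # rejected states count the same: re-add x
    return samples, accepted / n_samples

# Sample a standard normal: pi(x) proportional to exp(-x^2/2).
samples, acc = metropolis(lambda x: -0.5 * x * x, 0.0, 2.4, 200_000)
```

Tuning `step` moves the acceptance ratio: larger steps are rejected more often, smaller steps are accepted but diffuse slowly, and a ratio near 1/2 is a common compromise.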
Slide 35: Quantum mechanics
- We wish to solve HΨ = EΨ to high accuracy.
- The solution usually involves computing integrals in high dimensions: 3 to 30000.
- The "classic" approach (from 1929): find an approximate (but good...) Ψ whose integrals are analytically computable (Gaussians), then compute the approximate energy.
- Chemical accuracy: ~0.001 hartree ~ 0.027 eV.
Slide 36: VMC, Variational Monte Carlo
Start from the variational principle

  E_V = ∫ Ψ_T(R) H Ψ_T(R) dR / ∫ Ψ_T(R)² dR ≥ E_0

and translate it into Monte Carlo language.
Slide 37: VMC, Variational Monte Carlo
E is a statistical average of the local energy E_L(R) = HΨ_T(R)/Ψ_T(R) over the distribution P(R) = Ψ_T(R)² / ∫ Ψ_T(R)² dR.
Recipe:
- take an appropriate trial wave function Ψ_T;
- distribute N points according to P(R);
- compute the average of the local energy.
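The recipe above can be sketched end-to-end for the hydrogen atom (a textbook example I am adding for illustration, not from the slides). With the trial function ψ(r) = exp(-αr), the local energy in atomic units is E_L = -α²/2 + (α - 1)/r, so at α = 1 (the exact ground state) E_L is constant at -1/2 and the variance vanishes:

```python
import math, random

def vmc_hydrogen(alpha, n_steps=50_000, step=1.0, seed=5):
    # VMC for the hydrogen atom with trial function psi(r) = exp(-alpha r).
    # Metropolis samples P(R) ~ psi^2; the local energy (atomic units) is
    # E_L = -alpha^2/2 + (alpha - 1)/r.
    rng = random.Random(seed)
    r = [0.5, 0.5, 0.5]
    radius = lambda p: math.sqrt(sum(c * c for c in p))
    energy, count = 0.0, 0
    for i in range(n_steps):
        trial = [c + step * (2.0 * rng.random() - 1.0) for c in r]
        # Metropolis ratio psi(trial)^2 / psi(r)^2 for a symmetric move.
        if rng.random() < math.exp(-2.0 * alpha * (radius(trial) - radius(r))):
            r = trial
        if i >= 5_000:   # discard equilibration steps
            energy += -0.5 * alpha * alpha + (alpha - 1.0) / radius(r)
            count += 1
    return energy / count

e = vmc_hydrogen(1.0)   # exact trial function: E_L = -1/2 at every point
```

Running with alpha < 1 gives an energy above -1/2 with a nonzero error bar, a direct numerical view of the variational principle.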
Slide 38: Error bar estimation
In Monte Carlo it is easy to estimate the statistical error: if E is the average of the local energy over N independent samples, the associated statistical error is

  σ = sqrt( (⟨E_L²⟩ - ⟨E_L⟩²) / N )
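That formula is two lines of code; a sketch (my own helper name), demonstrated on uniform random numbers whose mean (1/2) and variance (1/12) are known:

```python
import random

def mean_and_error(values):
    # <f> with its one-sigma statistical error sqrt((<f^2> - <f>^2) / N),
    # valid for independent samples.
    n = len(values)
    mean = sum(values) / n
    var = sum(v * v for v in values) / n - mean * mean
    return mean, (var / n) ** 0.5

rng = random.Random(4)
vals = [rng.random() for _ in range(40_000)]   # uniform: mean 1/2, var 1/12
mean, err = mean_and_error(vals)
```

For correlated Metropolis data this simple estimate is too optimistic, and block averaging is used instead; the formula above applies to independent points.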
Slide 39: The Metropolis Algorithm
(Flowchart: from the current point R_i, propose R_try; call the oracle; if the move is accepted, R_{i+1} = R_try; if rejected, R_{i+1} = R_i; in either case compute the averages and loop.)
Slide 40: The Metropolis Algorithm, the oracle

  p = π(R_try) / π(R_i)
  if p >= 1          /* accept always */
      accept move
  else               /* 0 <= p < 1: accept with probability p */
      if p > rnd() accept move
      else reject move
Slide 41: VMC, Variational Monte Carlo
- No need to compute integrals analytically: complete freedom in the choice of the trial wave function.
- (Figure: He atom with electron coordinates r_1, r_2 and electron-electron distance r_12.)
- Can use explicitly correlated wave functions.
- Can satisfy the cusp conditions.
Slide 42: VMC advantages
- Can go beyond the Born-Oppenheimer approximation, with any potential, in any number of dimensions.
- Examples: the Ps_2 molecule (e+ e+ e- e-) in 2D and 3D; M+ m+ M- m- as a function of M/m.
- Can compute lower bounds.
Slide 43: Properties of the local energy
- For an exact eigenstate, E_L is a constant.
- At particle coalescence points, the divergence of V must be cancelled by the divergence of the kinetic term.
- For an approximate trial function, E_L is not constant.
Slide 44: Reducing errors
- For a trial function whose E_L can diverge, the statistical error will be large.
- To eliminate the divergences we impose Kato's cusp conditions.
Slide 45: Kato's cusp conditions on Ψ
We can build the correct analytical structure into Ψ_T:
- electron-nucleus cusp: (1/Ψ) ∂Ψ/∂r_iN at r_iN = 0 equals -Z;
- electron-electron cusp: (1/Ψ) ∂Ψ/∂r_ij at r_ij = 0 equals 1/2 (antiparallel spins; 1/4 for parallel spins).
Slide 46: Optimization of Ψ_T
- Suppose the trial wave function has variational parameters that we want to optimize.
- The straightforward minimization of E is numerically unstable, because E_L can diverge; for finite N the estimate can be unbounded.
- Also, our energies have error bars, which can make them difficult to compare.
Slide 47: Optimization of Ψ_T
- It is better to optimize the variance of the local energy, σ²(E_L), which is numerically stable even for finite N.
- The Ψ_T with the lowest σ² will not have the lowest E, but it is usually close.
- σ² is a measure of the quality of the trial function.
Slide 48: Optimization of Ψ_T
- Meaning of the optimization of σ²: for which potential V' is Ψ_T an eigenfunction? We want V' to be "close" to the real V.
- We are trying to reduce the distance between the upper and lower bounds.
Slide 49: VMC drawbacks
- The error bar goes down only as N^(-1/2).
- It is computationally demanding.
- The optimization of Ψ_T becomes difficult as the number of nonlinear parameters increases.
- It depends critically on our skill in inventing a good Ψ_T.
- There exist exact, automatic ways to get better wave functions: let the computer do the work... To be continued...
Slide 50: In the last episode: VMC. Today: DMC.
Slide 51: First major VMC calculation
- W. McMillan's thesis, 1964: VMC calculation of the ground state of liquid helium-4.
- Applied MC techniques from classical liquid theory.
Slide 52: VMC advantages and drawbacks
- Simple, easy to implement; intrinsic error bars.
- Usually obtains 60-90% of the correlation energy.
- The error bar goes down as N^(-1/2); it is computationally demanding.
- The optimization of Ψ_T becomes difficult as the number of nonlinear parameters increases.
- It depends critically on our skill in inventing a good Ψ_T.
Slide 53: Diffusion Monte Carlo
- VMC is a "classical" simulation method.
- DMC was suggested by Fermi in 1945, but implemented only in the 1970s.
- "Nature is not classical, dammit, and if you want to make a simulation of nature, you'd better make it quantum mechanical, and by golly it's a wonderful problem, because it doesn't look so easy." Richard P. Feynman
Slide 54: Diffusion equation analogy
- The time-dependent Schrödinger equation is similar to a diffusion equation: the kinetic term plays the role of diffusion, the potential term that of branching (growth/decay).
- The diffusion equation can be "solved" by directly simulating the system.
- Can we simulate the Schrödinger equation?
Slide 55: Imaginary-time Schrödinger equation
- The analogy is only formal: Ψ is a complex quantity, while a concentration C is real and positive.
- If we let the time t be imaginary (τ = it), then Ψ can be real: this gives the imaginary-time Schrödinger equation.
Slide 56: Ψ as a concentration
- Ψ is interpreted as a concentration of fictitious particles, called walkers.
- The Schrödinger equation is simulated by a process of diffusion, growth, and disappearance of the walkers; the long imaginary-time evolution projects out the ground state.
Slide 57: Diffusion Monte Carlo
Simulation: discretize time. Each time step combines
- a diffusion process (from the kinetic term), and
- a branching process (from the potential term).
Slide 58: First QMC calculation in chemistry
77 lines of Fortran code!
Slide 59: Formal development
Formally, in imaginary time the evolution is

  Ψ(τ) = e^{-(H - E_ref) τ} Ψ(0)

In the coordinate representation this becomes an integral equation involving the Green's function G(R → R', τ) = ⟨R'| e^{-(H - E_ref) τ} |R⟩.
Slide 60: Schrödinger equation in integral form
Monte Carlo is good at integrals...

  Ψ(R', τ + Δτ) = ∫ G(R → R', Δτ) Ψ(R, τ) dR

We interpret G as a probability to move from R to R' in a time step Δτ, and we iterate this equation.
Slide 61: Iteration of the Schrödinger equation
We can iterate this equation: each application of G propagates Ψ by a further time step Δτ.
Slide 62: Zassenhaus formula
- We must use a small time step Δτ, but at the same time we must let τ → ∞.
- In general we do not have the exact G: the kinetic and potential parts of H do not commute, so e^{-(T+V)Δτ} ≠ e^{-TΔτ} e^{-VΔτ}; the correction terms, given by the Zassenhaus formula, are of order Δτ².
Slide 63: Trotter theorem
- A and B do not commute; use the Trotter theorem: e^{A+B} = lim_{n→∞} (e^{A/n} e^{B/n})^n.
- Figure out what each operator does independently, then alternate their effects. This is rigorous in the limit n → ∞.
- In DMC, A is the diffusion operator and B is the branching operator.
Slide 64: Short-time approximation
- Each time step is diffusion + branching.
- At equilibrium the algorithm samples Φ_0, the ground state.
- The energy can be estimated from the value of E_ref that keeps the walker population stationary.
Slide 65: The DMC algorithm (figure)
Slide 66: A picture for H2+ (figure)
Slide 67: Short-time approximation (figure)
Slide 68: Importance sampling
- V can diverge, so the branching can be very inefficient.
- We can transform the Schrödinger equation by multiplying it by a trial function Ψ_T, working with f(R, τ) = Ψ(R, τ) Ψ_T(R).
Slide 69: Importance sampling
- The transformed equation is similar to a Fokker-Planck equation, simulated by diffusion + drift + branching.
- To the pure diffusion algorithm we have added a drift step that pushes the random walk in directions of increasing trial function.
Slide 70: Importance sampling
- The branching term now involves the local energy, exp(-(E_L(R) - E_ref) Δτ), so its fluctuations are controlled.
- At equilibrium the algorithm samples f = Φ_0 Ψ_T.
Slide 71: DMC algorithm
- Initialize a population of walkers {R_i}.
- For each walker: apply drift + diffusion, R → R'.
Slide 72: DMC algorithm (continued)
- Compute the branching weight w.
- Duplicate R' into M copies: M = int(ξ + w), with ξ a uniform random number.
- Compute the statistics.
- Adjust E_ref to keep the average population constant.
- Iterate...
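The loop above can be sketched for the simplest possible case, a 1D harmonic oscillator without importance sampling (a toy example I am adding for illustration; the function name, the population-control rule, and all parameter values are my choices). Without a guiding function the walkers sample Φ_0 itself, so the population average of V estimates E_0, which is exactly 1/2 in these units:

```python
import math, random

def dmc_harmonic(n_target=500, n_steps=2000, dt=0.01, seed=6):
    # Bare-bones DMC for V(x) = x^2/2 (hbar = m = 1). Each step: diffuse
    # every walker, then branch it with weight w = exp(-(V - E_ref) dt),
    # making M = int(w + xi) copies. E_ref is steered to hold the
    # population near n_target; <V> over the walkers estimates E_0 = 1/2.
    rng = random.Random(seed)
    walkers = [rng.gauss(0.0, 1.0) for _ in range(n_target)]
    e_ref, e_sum, e_count = 0.0, 0.0, 0
    for step in range(n_steps):
        new_walkers = []
        for x in walkers:
            x += math.sqrt(dt) * rng.gauss(0.0, 1.0)      # diffusion
            w = math.exp(-(0.5 * x * x - e_ref) * dt)     # branching weight
            for _ in range(int(w + rng.random())):        # M = int(xi + w)
                new_walkers.append(x)
        walkers = new_walkers
        v_avg = sum(0.5 * x * x for x in walkers) / len(walkers)
        # Population control: pull E_ref toward the energy estimate,
        # with a feedback term that keeps the population near n_target.
        e_ref = v_avg + 0.1 * math.log(n_target / len(walkers))
        if step >= 500:                                   # skip equilibration
            e_sum += v_avg
            e_count += 1
    return e_sum / e_count

e0 = dmc_harmonic()
```

With importance sampling the diffusion step would acquire a drift along ∇ln Ψ_T and the branching weight would use E_L instead of V, exactly as slides 68-70 describe.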
Slide 73: Good for helium studies
Thousands of theoretical and experimental papers have been published on helium, in its various forms: atom, small clusters, droplets, bulk.
Slide 74: ³He_m ⁴He_n stability chart
(Figure: stability chart of mixed ³He_m ⁴He_n clusters versus m and n, marking bound, unbound, and unknown species with their L and S quantum numbers, e.g. ³He₃⁴He₈ (L=0, S=1/2), ³He₂⁴He₄ (L=1, S=1), ³He₂⁴He₂ (L=0, S=0), ³He₃⁴He₄ (L=1, S=1/2); the unexplored region is labelled "Terra Incognita".)
Slide 75: Good for vibrational problems
Slide 76: For electronic structure?
Slide 77: The Fermion Problem
- Wave functions for fermions have nodes, so the diffusion-equation analogy is lost: one would need to introduce positive and negative walkers. This is the (in)famous sign problem.
- If we knew the exact nodes of Ψ, we could simulate the system exactly by QMC methods, restricting the random walk to a positive region bounded by the nodes.
- Unfortunately, the exact nodes are unknown. Use approximate nodes from a trial Ψ, and kill the walkers that cross a node.
Slide 78: Common misconceptions on nodes
Nodes are not fixed by antisymmetry alone: antisymmetry pins down only a (3N-3)-dimensional subset of the nodal surface.
Slide 79: Common misconceptions on nodes
- They have (almost) nothing to do with orbital nodes.
- It is (sometimes) possible to use nodeless orbitals.
Slide 80: Common misconceptions on nodes
A common misconception is that on a node two like-spin electrons are always close to each other. This is not true.
Slide 81: Common misconceptions on nodes
- The nodal theorem is NOT valid in N dimensions.
- A higher-energy state does not necessarily have more nodes (Courant and Hilbert): the node count is only an upper bound.
Slide 82: Common misconceptions on nodes
Not even for states of the same symmetry species (Courant's counterexample).
Slide 83: Tiling Theorem (Ceperley)
- All nodal domains of the fermionic ground state must have the same shape; a configuration violating this is impossible for a ground state.
- The tiling theorem does NOT say how many nodal domains we should expect!
Slide 84: Nodes are relevant
- Levinson theorem: the number of nodes of the zero-energy scattering wave function gives the number of bound states.
- Fractional quantum Hall effect.
- Quantum chaos (billiards): integrable vs. chaotic systems.
Slide 85: The helium triplet
- The first ³S state of He is one of very few systems where we know the exact node.
- For S states we can write Ψ = Ψ(r_1, r_2, r_12).
- The Pauli principle requires Ψ(r_1, r_2, r_12) = -Ψ(r_2, r_1, r_12), which means that the node is r_1 = r_2.
Slide 86: The helium triplet node
- The node r_1 = r_2 is independent of r_12.
- The node is "more symmetric" than the wave function itself: it is a polynomial in r_1 and r_2.
- It is present in all ³S states of two-electron atoms.
Slide 87: The Fixed-Node approximation
- Since in general we do not know the exact nodes, we resort to approximate nodes: we use the nodes of some trial function Ψ_T.
- The resulting energy is an upper bound to E_0.
- The energy depends only on the nodes; the rest of Ψ_T affects only the statistical error.
- Usually very good results! Even a poor Ψ_T usually has good nodes.
Slide 88: Trial wave functions
- For small systems (N < 7): specialized forms (linear expansions, Hylleraas, ...).
- For larger systems (up to ~200): the Slater-Jastrow form, a sum of Slater determinants multiplied by a Jastrow factor, a polynomial parametrized in the interparticle distances.
Slide 89: Asymptotic behavior of Ψ
Example with two-electron atoms: as one electron goes to infinity, the remaining factor of Ψ is the solution of the one-electron problem.
Slide 90: Asymptotic behavior of Ψ
The usual closed-shell determinant form does not satisfy the asymptotic conditions: it has the wrong structure.
Slide 91: Asymptotic behavior of Ψ
- In general, fixing the cusps recursively and setting the right symmetry, each electron gets its own orbital: a multideterminant (GVB) structure!
- Take 2N coupled electrons: 2^N determinants. Again an exponential wall.
Slide 92: Basis
- In order to build compact wave functions, we used basis functions in which the cusp and the asymptotic behavior are decoupled.
- Use one function per electron plus a simple Jastrow factor.
Slide 93: A little intermezzo: Be atom nodal structure
Slide 94: Be nodal structure
(Figure: cuts of the Be nodal surface plotted in the coordinates r_1 - r_2, r_1 + r_2 and r_3 - r_4.)
Slide 95: Be nodal structure
- Now there are only two nodal domains.
- It can be proved that the exact Be wave function has exactly two nodal regions.
- At the Hartree-Fock level the node is (approximately) (r_1 - r_2)(r_3 - r_4) = 0.
- See Bressanini, Ceperley and Reynolds, http://scienze-como.uninsubria.it/bressanini/
Slide 96: Be nodal structure
A physicist's proof (David Ceperley): 4 electrons, with 1 and 2 spin up and 3 and 4 spin down. The tiling theorem applies, so there are at most 4 nodal domains.
Slide 97: Be nodal structure
- We need to find a point R and a path R(t) that connects R to P_12 P_34 R with Ψ(R(t)) ≠ 0 along the way.
- Consider the point R = (r_1, -r_1, r_3, -r_3): Ψ is invariant with respect to rotations of this configuration.
- Path: rotating by π about the axis r_1 x r_3, Ψ stays constant, and the rotation carries R into P_12 P_34 R.
- But Ψ(R) ≠ 0: Ψ_exact = Ψ_HF + higher terms, and while Ψ_HF(R) = 0, the higher terms are ≠ 0.
Slide 98: Nodal topology conjecture
WARNING, conjecture ahead: the HF ground state of atomic and molecular systems has 4 nodal regions, while the exact ground state has only 2.
Slide 99: An example
- High-precision total-energy calculations of molecules. An example: what is the most stable fullerene isomer of C_24?
- QMC could make consistent predictions of the lowest structure; other methods are not capable of making consistent predictions about the stability of fullerenes.
Slide 100: DMC advantages and drawbacks
- Correlation between particles is automatically taken into account.
- Exact for bosonic systems.
- Fixed-node DMC for electrons obtains 85-95% of the correlation energy; very good results in many different fields.
- Works at T = 0; for T > 0 one must use path-integral MC.
- Not a "black box"; computationally demanding for large systems.
- Derivatives of Ψ are very hard, and not yet good enough.
Slide 101: Current research
Current research focuses on:
- applications: nanoscience, solid state, condensed matter, nuclear physics, molecular geometries, ...
- estimating derivatives of the wave function;
- solving the sign problem (very hard!);
- making it an O(N) method (it is currently O(N³)) to treat bigger systems (currently about 200 particles);
- better wave functions;
- better optimization methods.
Slide 102: A reflection...
A new method for calculating properties in nuclei, atoms, molecules, or solids automatically provokes three sorts of negative reactions:
- a new method is initially not as well formulated or understood as existing methods;
- it can seldom offer results of a comparable quality before a considerable amount of development has taken place;
- only rarely do new methods differ in major ways from previous approaches.
Nonetheless, new methods need to be developed to handle problems that are vexing to or beyond the scope of the current approaches. (Slightly modified from Steven R. White, John W. Wilkins and Kenneth G. Wilson)
Slide 103: THE END