
1 Computational approaches for quantum many-body systems
HGSFP Graduate Days SS2019, Martin Gärttner

2 Course overview
Lecture 1: Introduction to many-body spin systems
Quantum Ising model, Bloch sphere, tensor structure, exact diagonalization
Lecture 2: Collective spin models
LMG model, symmetry, semi-classical methods, Monte Carlo
Lecture 3: Entanglement
Mixed states, partial trace, Schmidt decomposition
Lecture 4: Tensor network states
Area laws, matrix product states, tensor contraction, AKLT model
Lecture 5: DMRG and other variational approaches
Energy minimization, PEPS and MERA, neural quantum states

3 Learning goals
After today you will be able to …
… explain how to find ground states using the MPS ansatz (DMRG),
… dig deeper into tensor network states (PEPS and MERA),
… explain alternative variational ansätze inspired by neural networks.

4 Tensor network states beyond MPS: Extensions and applications
Projected entangled pair states (PEPS): the extension of MPS to 2D lattices.
Problem: there is no efficient exact contraction. Contracting a PEPS exactly scales exponentially with system size, so approximate contraction schemes are needed.

5 Tensor network states beyond MPS: Extensions and applications
Multiscale entanglement renormalization ansatz (MERA): a layered tensor network designed to treat quantum critical ground states.
Its hierarchical structure connects to holography: AdS(2+1) / CFT(1+1).

6 Libraries for tensor network states
iTensor: C++ library for tensor network state calculations.
ALPS (Algorithms and Libraries for Physics Simulations): contains many different numerical methods for quantum many-body systems, not only spins; in particular quantum Monte Carlo methods.
Open Source MPS (OSMPS): MPS library with a Python frontend!

7 Other variational approaches
Variational Monte Carlo (VMC):
$$E(a) = \frac{\langle\psi(a)|H|\psi(a)\rangle}{\langle\psi(a)|\psi(a)\rangle}
       = \frac{\sum_{\sigma,\sigma'} \psi_{\sigma'}^*(a)\, H_{\sigma'\sigma}\, \psi_\sigma(a)}{\sum_\sigma |\psi_\sigma(a)|^2}
       = \sum_\sigma \frac{|\psi_\sigma(a)|^2}{\sum_{\sigma''} |\psi_{\sigma''}(a)|^2} \, \frac{\sum_{\sigma'} \psi_{\sigma'}^*(a)\, H_{\sigma'\sigma}}{\psi_\sigma^*(a)}$$
Sample states according to
$$P_\sigma = \frac{|\psi_\sigma(a)|^2}{\sum_{\sigma'} |\psi_{\sigma'}(a)|^2}$$
and average the local energies
$$E_{\mathrm{loc}}(\sigma) = \frac{\sum_{\sigma'} \psi_{\sigma'}^*(a)\, H_{\sigma'\sigma}}{\psi_\sigma^*(a)},$$
as shown in the sketch below.
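A minimal sketch of this procedure for the 1D transverse-field Ising model, $H = -J\sum_i \sigma^z_i \sigma^z_{i+1} - h\sum_i \sigma^x_i$ (not from the slides; the toy ansatz `log_psi` and all function names are hypothetical illustrations):

```python
import numpy as np

def log_psi(sigma, a):
    """Toy variational ansatz: log amplitude of configuration sigma (entries +/-1)."""
    return a * np.sum(sigma[:-1] * sigma[1:])

def local_energy(sigma, a, J=1.0, h=1.0):
    """E_loc(sigma) for the 1D TFIM with open boundaries; psi is real here,
    so this matches the conjugate form on the slide."""
    e = -J * np.sum(sigma[:-1] * sigma[1:])          # diagonal (zz) part
    for i in range(len(sigma)):                      # off-diagonal (x) part
        flipped = sigma.copy()
        flipped[i] *= -1
        e += -h * np.exp(log_psi(flipped, a) - log_psi(sigma, a))
    return e

def vmc_energy(a, n_sites=10, n_samples=5000, rng=np.random.default_rng(0)):
    """Metropolis sampling of P_sigma ~ |psi_sigma|^2, averaging E_loc."""
    sigma = rng.choice([-1, 1], size=n_sites)
    energies = []
    for step in range(n_samples):
        i = rng.integers(n_sites)                    # propose a single spin flip
        proposal = sigma.copy()
        proposal[i] *= -1
        # acceptance probability min(1, |psi(proposal)/psi(sigma)|^2)
        if rng.random() < np.exp(2 * (log_psi(proposal, a) - log_psi(sigma, a))):
            sigma = proposal
        if step > n_samples // 10:                   # discard burn-in
            energies.append(local_energy(sigma, a))
    return np.mean(energies)

print(vmc_energy(a=0.3))  # Monte Carlo estimate of E(a) for this toy ansatz
```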

8 Neural-network quantum states
[Carleo and Troyer, Science 2017]
See also: Deng, Li, Das Sarma, PRX 2017, PRB 2017; Gao, Duan, Nat. Commun. 2017; Cirac et al., PRX 2018; Clark, J. Phys. A 2018; Moore, arXiv 2017; Carleo, Nomura, Imada, arXiv 2018; Freitas, Morigi, Dunjko, arXiv 2018; …
Restricted Boltzmann machine. This ansatz was proposed by Carleo and Troyer and is summarized in the following. The starting point is again a general quantum state,
$$|\psi\rangle = \sum_{i_1 \ldots i_N} c_{i_1 \ldots i_N}\, |i_1 \ldots i_N\rangle = \sum_{\mathbf{v}} c_{\mathbf{v}}\, |\mathbf{v}\rangle,$$
written with the multi-index $\mathbf{v} = (i_1, \ldots, i_N)$. The coefficients are expressed using a parameterization motivated by a network architecture called a restricted Boltzmann machine (RBM): visible spins $v_i$ and hidden spins $h_j$ (all $-1$ or $1$), with connections $W_{ij}$ only between the two layers (interactions) and biases $a_i$, $b_j$ (energy penalties). These parameters give the energy of the network,
$$E[\mathbf{v},\mathbf{h}] = -\sum_{i,j} W_{ij}\, v_i h_j - \sum_i a_i v_i - \sum_j b_j h_j, \qquad v_i, h_j \in \{-1,1\},$$
and the ansatz is to sum over the hidden spins,
$$c_{\mathbf{v}} = \sum_{\{\mathbf{h}\}} e^{-E[\mathbf{v},\mathbf{h}]}.$$
In a classical RBM, $e^{-E}$ defines a probability distribution. The difference here is that the coefficients, and thus also the weights, are complex, so $c_{\mathbf{v}}$ is not a probability distribution.
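To make the definition concrete, here is a brute-force evaluation of $c_{\mathbf{v}}$ by explicit summation over all $2^M$ hidden configurations (a hypothetical sketch, feasible only for tiny networks; the next slide shows why this exponential sum is never needed in practice):

```python
import itertools
import numpy as np

def rbm_coeff_bruteforce(v, W, a, b):
    """c_v = sum over h in {-1,1}^M of exp(-E[v,h]), with
    E[v,h] = -sum_ij W_ij v_i h_j - sum_i a_i v_i - sum_j b_j h_j.
    Exponential in the number of hidden spins M; for illustration only."""
    M = len(b)
    c = 0.0 + 0.0j                       # coefficients may be complex
    for h in itertools.product([-1, 1], repeat=M):
        h = np.array(h)
        E = -(v @ W @ h) - a @ v - b @ h
        c += np.exp(-E)
    return c
```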

9 Neural-network quantum states
[Figure: RBM network diagram with visible spins v_1, …, v_5 (biases a_i) and hidden spins h_1, …, h_6 (biases b_j), connected by weights W_ij.]
Efficient evaluation. The sum over the hidden spins still has exponentially many terms, but an important feature of RBM states is that it can be carried out explicitly. Because the hidden spins do not couple to each other, a simple rewriting turns the sum into a product:
$$c_{\mathbf{v}} = \sum_{\{\mathbf{h}\}} e^{\sum_{i,j} W_{ij} v_i h_j + \sum_i a_i v_i + \sum_j b_j h_j} = e^{\sum_i a_i v_i} \prod_j 2\cosh\Big(b_j + \sum_i W_{ij} v_i\Big).$$
The hidden spins no longer appear explicitly; that is why they are called hidden. The result is simply a variational ansatz for the state, which can be treated with the machinery of variational Monte Carlo. What is nice about this ansatz is that more parameters can be added naturally by simply adding hidden neurons, so one can hope for convergence with respect to the number of hidden neurons. This addresses the basic problem of any representation: it must be efficiently evaluable. Tensor network states, by contrast, are efficiently contractable only in 1D; this fails for higher-dimensional tensor networks.
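A direct transcription of the closed form (a sketch; `rbm_coeff_bruteforce` is the hypothetical brute-force function from the previous slide's sketch):

```python
import numpy as np

def rbm_coeff(v, W, a, b):
    """Closed-form RBM amplitude:
    c_v = exp(sum_i a_i v_i) * prod_j 2 cosh(b_j + sum_i W_ij v_i).
    Cost O(N*M) per configuration instead of O(2^M)."""
    theta = b + v @ W                    # effective field seen by each hidden unit
    return np.exp(a @ v) * np.prod(2 * np.cosh(theta))

# Cross-check against the exponential sum for a tiny complex-weight network:
rng = np.random.default_rng(1)
N, M = 4, 3
W = rng.normal(size=(N, M)) + 1j * rng.normal(size=(N, M))
a = rng.normal(size=N) + 1j * rng.normal(size=N)
b = rng.normal(size=M) + 1j * rng.normal(size=M)
v = rng.choice([-1, 1], size=N)
print(np.allclose(rbm_coeff(v, W, a, b), rbm_coeff_bruteforce(v, W, a, b)))
```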

10 Neural-network quantum states
Finding ground states: stochastic reconfiguration method.
Minimize the energy functional
$$E(\mathbf{W}) = \langle\psi|H|\psi\rangle = \sum_{\mathbf{v},\mathbf{v}'} c_{\mathbf{v}'}^*\, H_{\mathbf{v}'\mathbf{v}}\, c_{\mathbf{v}}$$
by gradient descent,
$$W_k^{(p+1)} = W_k^{(p)} - \gamma\, \frac{\partial E}{\partial W_k},$$
with learning rate $\gamma$.
What we really want to use the NQS representation for is efficiently calculating ground states or unitary time evolution. For the former, the weight parameters are chosen such that the energy functional is minimized, using a gradient descent method: in each update step, every network parameter $W_k$ has a term proportional to the gradient of the energy with respect to that parameter subtracted from it. At first sight this seems problematic, because the gradients involve sums over all visible configurations. Variational Monte Carlo, however, tells us that these sums can be sampled efficiently by importance sampling according to the squared moduli of the coefficients, via a simple Markov chain Monte Carlo procedure. The rate $\gamma$ can be read as a learning rate or as an imaginary time step: this gradient descent corresponds to imaginary time evolution. A sketch of the update loop follows below.
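A schematic update loop (a sketch, not the slides' implementation; the gradient formula is the standard VMC estimator with $O_k(\sigma) = \partial \log\psi_\sigma / \partial W_k$, and all function names are hypothetical):

```python
import numpy as np

def energy_gradient_estimate(samples, e_loc, o_k):
    """Standard VMC gradient estimator for one parameter W_k:
    dE/dW_k = 2 Re( <E_loc O_k*> - <E_loc><O_k*> ),
    averaged over configurations sampled from P_v ~ |c_v|^2."""
    e = np.array([e_loc(s) for s in samples])
    o = np.array([o_k(s) for s in samples])
    return 2 * np.real(np.mean(e * np.conj(o)) - np.mean(e) * np.mean(np.conj(o)))

def find_ground_state(W, sample_configs, grad_estimate, gamma=0.01, n_iter=500):
    """W_k^(p+1) = W_k^(p) - gamma * dE/dW_k, with MC-estimated gradients."""
    for p in range(n_iter):
        samples = sample_configs(W)          # Markov chain MC, P_v ~ |c_v|^2
        W = W - gamma * grad_estimate(W, samples)
    return W
```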

11 Neural-network quantum states
Time evolution: time-dependent variational Monte Carlo.
Time-dependent variational principle: in each step, minimize the deviation from the solution of the Schrödinger equation,
$$R\big(t, \dot{W}(t)\big) = \mathrm{dist}\big[\partial_t \psi_W,\, -iH\psi\big].$$
This yields the update
$$W_k(t+\Delta t) = W_k(t) - i\,\Delta t\, \frac{\partial E}{\partial W_k},$$
i.e. the ground-state update with the learning rate replaced by the imaginary time step $i\,\Delta t$: real time evolution instead of imaginary time evolution, with the gradients again determined by Monte Carlo sampling.
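The two update rules differ only in the factor multiplying the gradient, which a short sketch makes explicit (hypothetical names; same assumed Monte Carlo gradient estimator as above):

```python
import numpy as np

def evolve(W, grad_estimate, sample_configs, dt=1e-3, n_steps=1000, real_time=True):
    """Unified update from the last two slides:
    real time (TDVP):  W -> W - 1j*dt * dE/dW
    imaginary time:    W -> W -    dt * dE/dW   (= gradient descent)."""
    W = W.astype(complex)
    step = 1j * dt if real_time else dt
    for _ in range(n_steps):
        samples = sample_configs(W)          # Markov chain MC, P_v ~ |c_v|^2
        W = W - step * grad_estimate(W, samples)
    return W
```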

12 References
Ulrich Schollwoeck: The density-matrix renormalization group in the age of matrix product states, Annals of Physics 326, 96 (2011)
Time-dependent variational principle: Phys. Rev. Lett. 107 (2011)
MERA and AdS/CFT: e.g. Phys. Rev. D 86 (2012)
Neural-network quantum state ansatz: Giuseppe Carleo, Matthias Troyer: Solving the Quantum Many-Body Problem with Artificial Neural Networks, Science 355, 602 (2017)

