Doing Very Big Calculations on Modest Size Computers: Reducing the Cost of Exact Diagonalization Using Singular Value Decomposition. Marvin Weinstein, Assa Auerbach and V. Ravi Chandra.


Some Uses For Exact Diagonalization
CORE: use ED to carry out RG transformations for clusters. Mean-field theory computations and DMRG (density-matrix renormalization group) are all tested on small clusters against ED. Also: ED is used to compute short-wavelength dynamical response functions, and to compute Chern numbers for quantum Hall systems.

Counting Memory Requirements
Consider a lattice with 2N sites and one spin-1/2 degree of freedom per site: the Hilbert space has dimension $2^{2N}$. This, of course, means memory needs grow exponentially with the number of sites. For 36 sites a single vector (at 8 bytes per amplitude) is 1/2 TB of RAM; for 40 sites it is 8 TB. This means simple Lanczos becomes very memory-unfriendly! The problem is not computational speed, it is reducing the memory footprint.
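A quick back-of-the-envelope check of these numbers (a minimal sketch, assuming one 8-byte double-precision amplitude per basis state):

```python
# Memory needed to store one dense state vector for a spin-1/2 lattice,
# assuming one 8-byte (double precision) amplitude per basis state.
for sites in (36, 40):
    dim = 2 ** sites                       # Hilbert-space dimension 2^(2N)
    tb = 8 * dim / 2 ** 40                 # bytes -> TB (binary)
    print(f"{sites} sites: dim = {dim:.2e}, vector = {tb:.2f} TB")
# 36 sites: vector = 0.50 TB;  40 sites: vector = 8.00 TB
```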

Singular Value Decomposition
Key message: for problems where the space is a tensor product space, the SVD allows us to reduce the memory needed to store big vectors. The SVD says any matrix can be written as $M = U\,\mathrm{diag}(S)\,V^\dagger$, where M is an n x m matrix, U is n x n, S is a vector with at most min(n, m) non-zero elements, V is m x m, and the entries of S are arranged in decreasing order.
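A minimal numpy illustration of this statement (a sketch, not the talk's code; numpy returns the singular values already sorted in decreasing order):

```python
import numpy as np

n, m = 6, 4
M = np.random.randn(n, m)

# Full SVD: U is n x n, Vt is m x m, S holds min(n, m) singular values.
U, S, Vt = np.linalg.svd(M)
assert U.shape == (n, n) and Vt.shape == (m, m) and S.shape == (min(n, m),)
assert np.all(np.diff(S) <= 0)             # decreasing order

# Reconstruct M = U diag(S) V^dagger (pad S to a rectangular n x m diagonal).
Sigma = np.zeros((n, m))
Sigma[: len(S), : len(S)] = np.diag(S)
assert np.allclose(M, U @ Sigma @ Vt)
```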

Rewriting Tensor Product States As Matrices
Suppose we have two blocks of N sites each. A generic vector is a sum of tensor products of states in block A and block B, i.e., $|\psi\rangle = \sum_{ij} \psi_{ij}\, |i\rangle_A \otimes |j\rangle_B$. Generically, then, given a vector we can read the coefficients $\psi_{ij}$ either as a $2^N \times 2^N$ matrix or as a single vector of length $2^{2N}$.

So SVD Says…
Every vector in the product space can be written as a sum of products of the form $|\psi\rangle = \sum_k \lambda_k\, |u_k\rangle_A \otimes |v_k\rangle_B$, where the $\lambda_k$'s are the entries of $S$ and the vectors $|u_k\rangle$, $|v_k\rangle$ are the corresponding columns of $U$ and $V$. We can therefore choose to represent a vector by a sum of simple tensor products, ignoring the small enough $\lambda_k$'s.
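Putting the last two slides together in a toy sketch (sizes are small stand-ins for the real problem; note that a random state compresses poorly, whereas the physical ground states below have rapidly decaying singular values):

```python
import numpy as np

N = 6                                      # sites per block; 2N = 12 in total
dim = 2 ** N

# A random normalized state on the full 2N-site lattice...
psi = np.random.randn(dim * dim)
psi /= np.linalg.norm(psi)

# ...viewed as a matrix: rows index block-A states, columns block-B states.
Psi = psi.reshape(dim, dim)
U, S, Vt = np.linalg.svd(Psi, full_matrices=False)

# Truncate: keep only the k largest singular values.
k = 20
Psi_k = U[:, :k] @ np.diag(S[:k]) @ Vt[:k, :]

# The error in the state is exactly the norm of the discarded values.
print(np.linalg.norm(Psi - Psi_k), np.sqrt(np.sum(S[k:] ** 2)))
```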

Another Look At Memory Requirements
Once again, consider a 36-site lattice and ask how much memory we need to store the ground state of the Hamiltonian (before, it was 1/2 TB). Assume we keep the 100 largest values in S. An 18-site block vector takes 2 MB, so the SVD form of the ground state takes about 100 x 2 x 2 MB = 400 MB, or roughly 1/2 GB. We have gone from undoable to easily done; it just takes some time. How do we do it?
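The same bookkeeping in code (a sketch, using the slide's assumptions of 100 retained singular values and 8-byte amplitudes):

```python
block_dim = 2 ** 18                        # one 18-site block
vec_mb = 8 * block_dim / 2 ** 20           # one block vector, in MB
kept = 100                                 # retained singular values
total_mb = kept * 2 * vec_mb               # 100 (u, v) pairs of block vectors
print(vec_mb, total_mb)                    # 2.0 MB and 400.0 MB
```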

Key Idea For Manipulating SVD States
We start with simple tensor product states. The Hamiltonian splits across the two blocks as $H = \sum_a A_a \otimes B_a$ (the block Hamiltonians, the identity, and the spin operators on the bonds linking the blocks). For this blocking there are 11 operators that act on each side, so applying H takes an SVD state of rank m to one of rank at most 11m: the effective Hilbert space for the SVD is only 11 times the size of the SVD space we started with. This is important because having to do an SVD on the full 30-, 36- or 40-site form of a vector is prohibitive; a sketch of the manipulation follows.
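A hedged sketch of this manipulation (function and variable names are mine, not the talk's): applying $H = \sum_a A_a \otimes B_a$ with p operator pairs to a rank-m SVD state produces a rank-(p·m) state, which is re-truncated via a QR on each block plus an SVD of a small core matrix, never an SVD of the full vector.

```python
import numpy as np

def apply_H_and_truncate(lams, us, vs, ops, k):
    """Apply H = sum_a A_a (x) B_a to the state sum_i lams[i] us[i] (x) vs[i],
    then re-truncate back to rank k.  us, vs are (m, dim) arrays of block
    vectors; ops is a list of (A_a, B_a) operator pairs."""
    # 1. Expand: every (operator pair, term) combination is a new term.
    new_u = np.vstack([u @ A.T for A, _ in ops for u in us])
    new_v = np.vstack([lam * (v @ B.T) for _, B in ops
                       for lam, v in zip(lams, vs)])
    # 2. Orthonormalize each block's vectors; the state becomes a small core.
    Qu, Ru = np.linalg.qr(new_u.T)
    Qv, Rv = np.linalg.qr(new_v.T)
    core = Ru @ Rv.T                       # tiny compared to the full vector
    # 3. SVD the core and keep the k largest singular values.
    U, S, Vt = np.linalg.svd(core)
    return S[:k], (Qu @ U[:, :k]).T, (Qv @ Vt[:k, :].T).T
```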

To Be Specific: The Kagome Lattice
Why is this interesting?

CORE & Magen-David (MD) Blocking
Each MD has 12 sites (a 4096-dimensional Hilbert space). Diagonalizing the single MD block yields two degenerate spin-0 states. These are RVB states: i.e., they are pairwise spin-0 singlets, as shown in the figure. A possible CORE computation: truncate the space to the two singlets and then compute the new renormalized Hamiltonian between two blocks (a 24-site computation). Then do the same for three MDs arranged in a triangle (36 sites).

Magen-David (MD) Blocking: Two Computations That Have Been Done
We did the 2-MD (24-site) blocking because it can also be done exactly by brute force (and of course we also needed it for CORE). The results of SVD Lanczos and ordinary Lanczos were compared, and the convergence is very good for ~100 states. The 3-MD (36-site) blocking is under way; with ~100 states we get comparably good convergence.

Magen-David (MD) Blocking: The 24-Site Computation
There are 8 operators acting on the first block and 8 on the second. Thus, if we start from a single tensor product state, applying H gives 8 states on the left and 8 on the right. These aren't orthonormal on the left and right, so there may effectively be fewer independent states on each side. Orthonormalize them and expand the 8 states in the new basis; in this basis the state is a 64-component object (an 8 x 8 coefficient matrix). Do the SVD of that small matrix and truncate back to an SVD state with the desired number of terms. After a few multiplications by H this grows to the desired ~100 states.
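In terms of the sketch above (same hypothetical function), this start-up step is a single call on a rank-1 state; the sizes here are illustrative, not the real 2^12-dimensional blocks:

```python
import numpy as np

dim = 64                                   # illustrative; really 2**12 per block
rng = np.random.default_rng(0)
ops = [(rng.standard_normal((dim, dim)),   # stand-ins for the 8 operator pairs
        rng.standard_normal((dim, dim))) for _ in range(8)]

lams = np.array([1.0])                     # a single tensor product state
us = rng.standard_normal((1, dim)); us /= np.linalg.norm(us)
vs = rng.standard_normal((1, dim)); vs /= np.linalg.norm(vs)

# One application: 8 terms per side, an 8 x 8 core matrix to SVD, truncate.
lams, us, vs = apply_H_and_truncate(lams, us, vs, ops, k=8)
# Repeated applications grow the rank toward the ~100 states actually kept.
```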

Three MD Blocking: The 36-Site Blocking
Now there are 6 bonds, i.e., 18 spin operators, linking the two blocks. Together with the single-block Hamiltonian and the unit operator, this means each term in the SVD state is multiplied by 20. Thus, for 100 states, we have at most a 2000 x 2000 matrix to do the SVD on. This is a "piece of cake".

Some Results for 24- and 36-Site MDs
[Figure: error in the energy as a function of the number of SVD states for the 24-site problem, where we have the exact answer to compare to.]
[Figure: rate of convergence of the energy as a function of the number of Lanczos iterations for the 36-site problem.]

CORE Hexagon-Triangle Blocking
This lattice can also be covered with hexagons and triangles. The ground state on a hexagon is a spin-0 singlet, and there are two degenerate spin-1/2 states for each triangle. Problem: truncate to four states per triangle and compute the triangle-triangle CORE renormalized Hamiltonian.

These Triangles Then Form A New Hexagonal Lattice
Now each vertex has four spin-1/2 states. The coupling between the vertices is a spin-pseudospin coupling, and the coefficients rotate with direction; this rotation is the remnant of the original geometric frustration. NOTE: the hexagonal lattice then blocks (in a second CORE transformation) to a triangular lattice, but with additional frustration.

SVD Entropy – Some Analytics
Assume we have an SVD decomposition of the form $|\psi\rangle = \sum_k \lambda_k\, |u_k\rangle \otimes |v_k\rangle$. Introduce the parameter $s$ such that $\lambda = e^{-s}$, and then introduce the density of states $\rho(s) = \sum_k \delta(s - s_k)$. How good is a power-law assumption for $\rho(s)$? NOTE: normalization of the state means the integral $\int ds\, \rho(s)\, e^{-2s} = \sum_k \lambda_k^2$ must be unity.
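A sketch of this bookkeeping (assuming, as reconstructed above, $\lambda_k = e^{-s_k}$): $\rho(s)$ is just a histogram of $s_k = -\ln \lambda_k$, and the normalization check is $\sum_k \lambda_k^2 = 1$.

```python
import numpy as np

def svd_density_of_states(lams, bins=30):
    """Histogram estimate of rho(s) from singular values with sum lams^2 = 1."""
    s = -np.log(lams)                      # lambda_k = exp(-s_k)
    counts, edges = np.histogram(s, bins=bins)
    widths = np.diff(edges)
    centers = 0.5 * (edges[:-1] + edges[1:])
    rho = counts / widths                  # states per unit s
    # Normalization check: integral of rho(s) e^{-2s} ds ~ sum lams^2 = 1.
    print(np.sum(rho * np.exp(-2 * centers) * widths))
    return centers, rho
```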

SVD Density of States for the 30-Site Blocking
[Figure: the measured density of states $\rho(s)$.] NOTE: this is a power law.

SVD Entropy – More Analytics
Given that the number of states as a function of the parameter $s$ is a power law, it is convenient to introduce the SVD entropy $S_{SVD} = -\sum_k \lambda_k^2 \ln \lambda_k^2$. Then if we choose a cutoff $s_c$ on the integral such that the discarded weight is $\int_{s_c}^\infty ds\, \rho(s)\, e^{-2s} = \epsilon^2$, it follows that the error in the truncated state is of order $\epsilon$, and so the number of states we must keep for a given accuracy is controlled by the SVD entropy. This is consistent with our results!
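The corresponding quantities computed directly from the singular values (a sketch using the standard definitions, since the slide's exact formulas were lost in transcription):

```python
import numpy as np

def svd_entropy(lams):
    """SVD (entanglement) entropy: S = -sum_k p_k ln p_k with p_k = lams_k^2."""
    p = lams[lams > 0] ** 2
    return -np.sum(p * np.log(p))

def truncation_error(lams, k):
    """Norm error in the state after keeping the k largest singular values."""
    tail = np.sort(lams)[::-1][k:]
    return np.sqrt(np.sum(tail ** 2))
```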

More Than Bi-Partite SVD Lanczos
Consider a disk of radius R, so that N is proportional to the area of the disk. Partition the disk first in half, then divide each half again, etc.; in this way obtain $P = 2^p$ clusters. Assume the SVD entropy of each partition goes like the "area" of its boundary, i.e., ~R. The total number of SVD vectors needed to reach a fixed error is then known, and from this we can estimate the optimal partitioning. The cost still grows with N, but slower than $2^N$.

Recap
I have shown how one can, using the singular value decomposition of a vector, carry out Lanczos (or contractor) computations of eigenstates to high accuracy. The method lets you check convergence and, by plotting the density of states and fitting it to a power law, estimate the error of the computation. This allows those of us who are not running on machines with 2000 cores and 1.5 TB of RAM to do big computations. Imagine what people with those resources can do!