Computational Physics (Lecture 7) PHY4061

Eigenvalue Problems

An eigenvector v of a matrix B is a nonzero vector that does not rotate when B is applied to it. The eigenvector v may change length or reverse direction, – but it won't turn sideways. Iterative methods often depend on applying a matrix to a vector over and over again: since B^i v = λ^i v, the iterates vanish if |λ| < 1 and diverge if |λ| > 1.
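
As a quick illustration, here is a minimal power-iteration sketch in Python; the test matrix, seed, and tolerance are arbitrary choices, not from the lecture:

import numpy as np

def power_iteration(B, num_iters=1000, tol=1e-12):
    """Repeatedly apply B to a vector; the iterate aligns with the
    eigenvector whose eigenvalue has the largest magnitude."""
    n = B.shape[0]
    v = np.random.default_rng(0).standard_normal(n)
    v /= np.linalg.norm(v)
    lam = 0.0
    for _ in range(num_iters):
        w = B @ v
        w_norm = np.linalg.norm(w)
        if w_norm == 0.0:          # v happens to lie in the null space of B
            return 0.0, v
        w /= w_norm                # renormalize so the iterate neither
        lam_new = w @ B @ w        # vanishes (|lambda|<1) nor blows up (|lambda|>1)
        if abs(lam_new - lam) < tol:
            return lam_new, w
        lam, v = lam_new, w
    return lam, v

B = np.array([[2.0, 1.0], [1.0, 3.0]])
lam, v = power_iteration(B)
print(lam, v)   # dominant eigenvalue ~3.618 and its eigenvector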

If the function involves the inverse of the matrix and an eigenvalue happens to be zero, we can always add a term ηI to the original matrix to remove the singularity. The modified matrix A + ηI has the eigenvalue λ_i + η for the corresponding eigenvector x_i. The eigenvalue of the original matrix is recovered by taking η → 0 after the problem is solved. Based on this property of nondefective matrices, we can construct a recursion (an iterative method)
x^(k+1) = N_k (A − μI)^(−1) x^(k)
to extract the eigenvalue that is closest to the parameter μ, where N_k is a normalization constant that ensures |x^(k+1)| = 1.
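
A sketch of this shifted inverse iteration in Python, assuming the recursion written above; the test matrix and shift are illustrative, and μ must not be exactly an eigenvalue (or the linear solve becomes singular):

import numpy as np

def shifted_inverse_iteration(A, mu, num_iters=100, tol=1e-12):
    """Extract the eigenvalue of A closest to the shift mu.
    Each step solves (A - mu*I) y = x and renormalizes, which
    amplifies the eigenvector whose eigenvalue is nearest mu."""
    n = A.shape[0]
    x = np.ones(n) / np.sqrt(n)
    lam = mu
    for _ in range(num_iters):
        y = np.linalg.solve(A - mu * np.eye(n), x)
        x_new = y / np.linalg.norm(y)     # N_k, the normalization constant
        lam_new = x_new @ A @ x_new       # Rayleigh-quotient estimate
        if abs(lam_new - lam) < tol:
            break
        lam, x = lam_new, x_new
    return lam, x

A = np.array([[4.0, 1.0], [1.0, 2.0]])
print(shifted_inverse_iteration(A, mu=1.5))  # finds the eigenvalue ~1.586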

Eigenvalues of a Hermitian matrix – In many physical problems the matrix is Hermitian: the complex conjugate of its transpose equals the matrix itself. – Three important properties: the eigenvalues of a Hermitian matrix are all real; the eigenvectors of a Hermitian matrix can be made orthonormal; a Hermitian matrix can be transformed into a diagonal matrix, with the same set of eigenvalues, under a similarity transformation by a unitary matrix whose columns are its eigenvectors.

The eigenvalue problem of an n × n complex Hermitian matrix is equivalent to that of a 2n × 2n real symmetric matrix. Write A = B + iC, – where B and C are the real and imaginary parts of A. If A is Hermitian, – B_ij = B_ji – C_ij = −C_ji. Splitting the eigenvector in a similar fashion, z = x + iy, the equation (B + iC)(x + iy) = λ(x + iy) separates into Bx − Cy = λx and Cx + By = λy, i.e. the real symmetric block matrix [B −C; C B] acting on the stacked vector (x, y) has the same eigenvalues λ (each appearing twice). Therefore we only need to solve a real symmetric eigenvalue problem if the matrix is Hermitian.
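
A short sketch of this embedding in Python; the Hermitian test matrix is an arbitrary illustration:

import numpy as np

def hermitian_to_real_symmetric(A):
    """Embed an n x n Hermitian A = B + iC into the 2n x 2n real
    symmetric matrix [[B, -C], [C, B]]; every eigenvalue of A
    appears twice in the embedding."""
    B, C = A.real, A.imag
    return np.block([[B, -C], [C, B]])

A = np.array([[2.0, 1.0 - 1.0j], [1.0 + 1.0j, 3.0]])
M = hermitian_to_real_symmetric(A)
print(np.linalg.eigvalsh(A))   # [1. 4.]
print(np.linalg.eigvalsh(M))   # the same values, each doubled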

The standard scheme has two steps: (1) Use an orthogonal matrix to perform a similarity transformation of the real symmetric matrix into a real symmetric tridiagonal matrix – a matrix that has nonzero elements only on the main diagonal and on the first diagonals below and above it. (2) Solve the eigenvalue problem of the resulting tridiagonal matrix.

The similarity transformation preserves the eigenvalues of the original matrix, – and the eigenvectors are the columns (or rows) of the orthogonal matrix used in the transformation. Householder method: – The most commonly used scheme for tridiagonalization (the Givens method is an alternative). – Achieved with a total of n − 2 consecutive transformations, each operating on a row and a column of the matrix.

The transformations can be cast into a recursion: A^(k) = O_k A^(k−1) O_k, – for k = 1, 2, ..., n − 2, where O_k is an orthogonal (and symmetric) matrix that works on the row elements with i = k + 2, ..., n of the kth column and the column elements with j = k + 2, ..., n of the kth row. The recursion begins with A^(0) = A. – It is an O(n^3) algorithm overall, each step costing O(n^2) when applied as a rank-two update. – Storage advantages: the vectors that define the O_k can be stored in the zeroed parts of the matrix.

Here
O_k = I − 2 w_k w_k^T,   with |w_k| = 1,
where the ith component of the vector w_k is given by (in a standard sign convention)
(w_k)_i = 0 for i ≤ k,
(w_k)_(k+1) = [ (1 + A^(k−1)_(k+1,k)/α) / 2 ]^(1/2),
(w_k)_i = A^(k−1)_(i,k) / (2 α (w_k)_(k+1)) for i = k + 2, ..., n,
with
α = sign(A^(k−1)_(k+1,k)) [ Σ_(i=k+1..n) (A^(k−1)_(i,k))^2 ]^(1/2),
the sign chosen to avoid cancellation; the transformation replaces A^(k−1)_(k+1,k) by −α and zeroes the elements below it.
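
A sketch of the whole tridiagonalization in Python; for clarity each reflector O_k is formed explicitly, rather than applied as the cheaper rank-two update a production code would use:

import numpy as np

def householder_tridiagonalize(A):
    """Reduce a real symmetric matrix to tridiagonal form with
    n - 2 Householder reflections O_k = I - 2 w w^T, |w| = 1.
    Step k zeroes the elements of column k below the first subdiagonal."""
    A = A.astype(float).copy()
    n = A.shape[0]
    for k in range(n - 2):
        x = A[k+1:, k]
        alpha = np.linalg.norm(x)
        if x[0] < 0.0:
            alpha = -alpha            # sign(x_0)*|x|, avoids cancellation
        if alpha == 0.0:
            continue                  # column already in tridiagonal form
        v = x.copy()
        v[0] += alpha                 # v = x + alpha * e_1
        w = np.zeros(n)
        w[k+1:] = v / np.linalg.norm(v)
        O = np.eye(n) - 2.0 * np.outer(w, w)   # orthogonal and symmetric
        A = O @ A @ O                 # similarity transform, eigenvalues kept
    return A

A = np.array([[4.0, 1.0, 2.0],
              [1.0, 2.0, 0.0],
              [2.0, 0.0, 3.0]])
T = householder_tridiagonalize(A)
print(np.round(T, 12))                               # tridiagonal
print(np.linalg.eigvalsh(A), np.linalg.eigvalsh(T))  # same spectrum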

These routines are provided in standard math libraries, so we don't have to reinvent the wheel here; just take the code from a standard math library. After we obtain the tridiagonalized matrix, the eigenvalues can be found using one of the root-search routines available. – Note that the secular equation |A − λI| = 0 is equivalent to a polynomial equation p_n(λ) = 0. – Because of the simplicity of the symmetric tridiagonal matrix, the polynomial p_n(λ) can be generated recursively:

p_i(λ) = (a_i − λ) p_(i−1)(λ) − b_(i−1)^2 p_(i−2)(λ),
where a_i = A_ii and b_i = A_(i,i+1) = A_(i+1,i). The polynomial p_i(λ) is the secular polynomial of the submatrix A_jk with j, k = 1, 2, ..., i, and the recursion starts from p_0(λ) = 1 and p_1(λ) = a_1 − λ.
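
A sketch of the recursion in Python; the tridiagonal test matrix is an illustrative choice with known eigenvalues:

import numpy as np

def char_poly_sequence(a, b, lam):
    """Evaluate p_0, p_1, ..., p_n at lam for a symmetric tridiagonal
    matrix with diagonal a[0..n-1] and off-diagonal b[0..n-2], using
    p_i = (a_i - lam) p_{i-1} - b_{i-1}^2 p_{i-2}."""
    p = [1.0, a[0] - lam]
    for i in range(1, len(a)):
        p.append((a[i] - lam) * p[i] - b[i-1]**2 * p[i-1])
    return p

# Diagonal (2, 2, 2) and off-diagonals (1, 1): the eigenvalues
# are 2 - sqrt(2), 2, and 2 + sqrt(2).
a, b = [2.0, 2.0, 2.0], [1.0, 1.0]
print(char_poly_sequence(a, b, 2.0)[-1])               # p_3(2) = 0: an eigenvalue
print(char_poly_sequence(a, b, 2.0 - np.sqrt(2))[-1])  # ~0 as well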

In principle, we can use any of the root-search routines to find the eigenvalues from the secular equation – once the polynomial is generated. However, two properties associated with the zeros of p_n(λ) are useful in developing a fast and accurate routine to obtain the eigenvalues of a symmetric tridiagonal matrix. They will not be covered in this lecture; if interested, see the book by T. Pang.

The Faddeev–Leverrier method – A very interesting method developed for matrix inversion and the eigenvalue problem. The scheme was first discovered by Leverrier in the middle of the nineteenth century and later modified by Faddeev (Faddeev and Faddeeva, 1963).

The characteristic polynomial of the matrix is given by
p_n(λ) = |A − λI| = c_n λ^n + c_(n−1) λ^(n−1) + ... + c_1 λ + c_0,
– where c_n = (−1)^n. (In what follows it is convenient to drop the overall factor (−1)^n and work with the monic polynomial |λI − A|; the roots, i.e. the eigenvalues, are unchanged.) We can introduce a set of supplementary matrices S_k with
(λI − A)^(−1) = (S_0 λ^(n−1) + S_1 λ^(n−2) + ... + S_(n−1)) / |λI − A|.

If we multiply the above equation by (λI − A), we have
(λI − A)(S_0 λ^(n−1) + S_1 λ^(n−2) + ... + S_(n−1)) = |λI − A| I.
Comparing the coefficients of the same order λ^l for l = 0, 1, ..., n on both sides of the equation, we obtain the recursion
c_(n−k) = −tr(A S_(k−1))/k,   S_k = A S_(k−1) + c_(n−k) I,
– for k = 1, 2, ..., n. The recursion is started with S_0 = I and is ended at c_0 (as a check, S_n = A S_(n−1) + c_0 I = 0 by the Cayley–Hamilton theorem). – With λ = 0, we can show that
A^(−1) = −S_(n−1)/c_0,   |A| = (−1)^n c_0.

Mistake in the code! The correct line should be c[0] = -tr(mm(a,s[n-1]))/n;
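
A sketch of the corrected recursion in Python (the array index is the power of λ, so c[0] is the constant term; the test matrix is illustrative):

import numpy as np

def faddeev_leverrier(A):
    """Coefficients c[0..n] of the monic characteristic polynomial
    and the supplementary matrices S_k, via
    c[n-k] = -tr(A S_{k-1})/k,  S_k = A S_{k-1} + c[n-k] I."""
    n = A.shape[0]
    c = np.zeros(n + 1)
    c[n] = 1.0
    S = [np.eye(n)]
    for k in range(1, n + 1):
        AS = A @ S[-1]
        c[n - k] = -np.trace(AS) / k    # at k = n this is the corrected c[0] line
        if k < n:
            S.append(AS + c[n - k] * np.eye(n))
    return c, S

A = np.array([[2.0, 1.0], [1.0, 3.0]])
c, S = faddeev_leverrier(A)
print(np.roots(c[::-1]))   # eigenvalues from p_n(lambda) = 0: ~3.618, ~1.382
print(-S[-1] / c[0])       # A^{-1} = -S_{n-1}/c_0
print(np.linalg.inv(A))    # agrees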

Because the Faddeev–Leverrier recursion also generates all the coefficients c_k of the characteristic polynomial p_n(λ), we can use a root-search method to obtain all the eigenvalues from p_n(λ) = 0.

After we have found all the eigenvalues λ_k, we can also obtain the corresponding eigenvectors from the supplementary matrices S_k: since (λ_k I − A)(S_0 λ_k^(n−1) + S_1 λ_k^(n−2) + ... + S_(n−1)) = |λ_k I − A| I = 0, any nonzero column of the matrix S_0 λ_k^(n−1) + S_1 λ_k^(n−2) + ... + S_(n−1) is an eigenvector of A with eigenvalue λ_k.
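
A sketch of this eigenvector extraction in Python; the 2 × 2 example reuses the S_k produced by the Faddeev–Leverrier sketch above, hard-coded here so the snippet stands alone:

import numpy as np

def eigenvector_from_supplementary(A, S, lam):
    """Any nonzero column of T = sum_k S_k lam^(n-1-k) satisfies
    (lam*I - A) T = 0 when lam is an eigenvalue, so it is an
    eigenvector of A for that eigenvalue."""
    n = A.shape[0]
    T = sum(Sk * lam**(n - 1 - k) for k, Sk in enumerate(S))
    j = np.argmax(np.linalg.norm(T, axis=0))   # pick the largest column
    v = T[:, j]
    return v / np.linalg.norm(v)

A = np.array([[2.0, 1.0], [1.0, 3.0]])
S = [np.eye(2), A - 5.0 * np.eye(2)]           # S_0, S_1 for this A
lam = (5.0 + np.sqrt(5.0)) / 2.0               # an eigenvalue of A
v = eigenvector_from_supplementary(A, S, lam)
print(A @ v - lam * v)                          # ~0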

Electronic structure of atoms – The Schrödinger equation for a multi-electron atom is given by
H Ψ(r_1, ..., r_N) = E Ψ(r_1, ..., r_N),
with
H = Σ_(i=1..N) [ −(ħ^2/2m) ∇_i^2 − Ze^2/(4πε_0 r_i) ] + Σ_(i<j) e^2/(4πε_0 |r_i − r_j|).

Hartree–Fock approximations – The Born–Oppenheimer approximation is inherently assumed. – Typically, relativistic effects are completely neglected. – The variational solution is assumed to be a linear combination of a finite number of basis functions. – Each energy eigenfunction is assumed to be describable by a single Slater determinant. – The mean-field approximation is implied. Effects arising from deviations from this assumption, known as electron correlation, are completely neglected for electrons of opposite spin, but are taken into account for electrons of parallel spin.

The ground state is approximated by the Hartree–Fock ansatz, which can be cast into a (Slater) determinant
Ψ(x_1, ..., x_N) = (1/√(N!)) det[ φ_i(x_j) ],   i, j = 1, ..., N,
built from orthonormal single-particle spin orbitals φ_i.

To optimize (minimize) E_HF, we can perform the functional variation with respect to the orbitals ψ_i^*. – This yields the Hartree–Fock equation
[ −(ħ^2/2m) ∇^2 − Ze^2/(4πε_0 r) + V_H(r) ] ψ_i(r) − (e^2/4πε_0) Σ_j δ_(s_i s_j) ∫ [ψ_j^*(r′) ψ_i(r′)/|r − r′|] d^3r′ ψ_j(r) = ε_i ψ_i(r).
– V_H(r) (the HF, or Hartree, potential) is given by
V_H(r) = (e^2/4πε_0) ∫ ρ(r′)/|r − r′| d^3r′,
– where ρ(r) = Σ_j |ψ_j(r)|^2 is the total density of the electrons at r. – The last term on the left-hand side is the exchange term; the factor δ_(s_i s_j) means it acts only between electrons of parallel spin.

The Hartree potential can also be obtained from the solution of the Poisson equation
∇^2 V_H(r) = −e^2 ρ(r)/ε_0.
The single-particle wavefunctions in atomic systems can be assumed to have the form
ψ_nlm(r) = [u_nl(r)/r] Y_lm(θ, φ),
with Y_lm the spherical harmonics. The HF equation for a given l can then be further converted into a radial equation for u_nl(r),
[ −(ħ^2/2m) d^2/dr^2 + ħ^2 l(l+1)/(2m r^2) − Ze^2/(4πε_0 r) + V_H(r) ] u_nl(r) + (exchange term) = ε_nl u_nl(r),
which becomes a matrix eigenvalue problem once discretized on a radial grid.

We can easily apply the numerical schemes introduced in this lecture to solve this matrix eigenvalue problem.
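
A sketch of that final step in Python: for illustration the hydrogen-like radial equation (in atomic units) stands in for the full Hartree–Fock radial equation; a central-difference discretization turns it into a symmetric tridiagonal eigenvalue problem, solved here with a library routine, though the Householder/Sturm-sequence machinery above does the same job:

import numpy as np
from scipy.linalg import eigh_tridiagonal

# -u''/2 + [l(l+1)/(2 r^2) - Z/r] u = E u  (atomic units),
# with u(0) = u(r_max) = 0, discretized on a uniform radial grid.
Z, l = 1, 0
n, r_max = 2000, 60.0
h = r_max / (n + 1)
r = h * np.arange(1, n + 1)

diag = 1.0 / h**2 + l * (l + 1) / (2.0 * r**2) - Z / r   # main diagonal
off = -0.5 / h**2 * np.ones(n - 1)                       # sub/superdiagonal

# Lowest three eigenvalues of the symmetric tridiagonal matrix:
E, U = eigh_tridiagonal(diag, off, select='i', select_range=(0, 2))
print(E)   # approximately -0.5, -0.125, -0.0556, the -Z^2/(2 n^2) levels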