Chapter 3 Formalism

3.1 Hilbert space. Let us recall the Cartesian 3D space: a vector is a set of 3 numbers, called components; it can be expanded in terms of three unit vectors (a basis). The basis spans the vector space. The inner (dot, scalar) product of two vectors and the length (norm) of a vector are defined as shown below.
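A reconstruction of the standard 3D formulas that appeared on the slide (component notation a = (a_x, a_y, a_z) is assumed):
\[ \mathbf{a}\cdot\mathbf{b} = a_x b_x + a_y b_y + a_z b_z, \qquad |\mathbf{a}| = \sqrt{\mathbf{a}\cdot\mathbf{a}} = \sqrt{a_x^2 + a_y^2 + a_z^2}. \]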

3.1 Hilbert space

3.1 Hilbert space. Hilbert space: its elements are functions (the vectors of the Hilbert space). The space is linear: if φ and ψ belong to the space, then φ + ψ, as well as aφ (where a is a constant), also belong to the space. David Hilbert (1862 – 1943)

3.1 Hilbert space. Hilbert space: the inner (dot, scalar) product of two vectors is defined as below, and the length (norm) of a vector is related to the inner product as shown. David Hilbert (1862 – 1943)
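A reconstruction of the standard definitions for functions (the integral runs over the domain on which the functions are defined):
\[ \langle \varphi | \psi \rangle = \int \varphi^*(x)\, \psi(x)\, dx, \qquad \lVert \psi \rVert = \sqrt{\langle \psi | \psi \rangle}. \]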

3.1 Hilbert space. Hilbert space: the space is complete, i.e. it contains all its limit points (we will see what this means later). Example of a Hilbert space: L², the set of square-integrable functions defined on the whole interval. David Hilbert (1862 – 1943)

3.1 Wave function space. Recall that the wave function must be square-integrable (normalizable). Thus we should retain only such functions Ψ that are well-defined everywhere, continuous, and infinitely differentiable. Let us call this set of functions F; F is a subspace of L². For two complex numbers λ1 and λ2 it can be shown that if ψ1 and ψ2 belong to F, then λ1ψ1 + λ2ψ2 also belongs to F.

3.1 Scalar product. In F the scalar product is defined as shown below, along with its basic properties: φ and ψ are orthogonal if their scalar product vanishes, and the norm is defined through the scalar product.
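A reconstruction of the standard definition and properties (the convention that the product is linear in the second argument is assumed):
\[ (\varphi,\psi) = \int \varphi^*(\mathbf{r})\, \psi(\mathbf{r})\, d^3r, \qquad (\varphi,\psi) = (\psi,\varphi)^*, \]
\[ (\varphi,\ \lambda_1\psi_1+\lambda_2\psi_2) = \lambda_1(\varphi,\psi_1)+\lambda_2(\varphi,\psi_2), \qquad (\lambda_1\varphi_1+\lambda_2\varphi_2,\ \psi) = \lambda_1^*(\varphi_1,\psi)+\lambda_2^*(\varphi_2,\psi). \]
Orthogonality means \( (\varphi,\psi)=0 \); the norm is \( \lVert\psi\rVert = \sqrt{(\psi,\psi)} \).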

3.1 Scalar product. Schwarz inequality (below). Karl Hermann Amandus Schwarz (1843 – 1921)
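The Schwarz inequality in this notation:
\[ |(\varphi,\psi)|^2 \ \le\ (\varphi,\varphi)\,(\psi,\psi), \]
with equality only when φ and ψ are proportional.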

Orthonormal bases. A countable set of functions is called orthonormal if the condition below holds. It constitutes a basis if every function in F can be expanded in one and only one way. Recall the analogous expansion for 3D vectors.
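A reconstruction of the conditions, using u_i for the basis functions and c_i for the expansion coefficients (this notation is an assumption):
\[ (u_i, u_j) = \delta_{ij}, \qquad \psi(\mathbf{r}) = \sum_i c_i\, u_i(\mathbf{r}), \qquad c_i = (u_i, \psi). \]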

Orthonormal bases For two functions a scalar product is: Recall for 3D vectors:

Orthonormal bases. This means that the basis functions satisfy the closure relation (below).
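The closure relation in the same notation:
\[ \sum_i u_i(\mathbf{r})\, u_i^*(\mathbf{r}') = \delta(\mathbf{r} - \mathbf{r}'). \]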

Orthonormal bases. A set of functions labelled by a continuous index α is called orthonormal if the condition below holds. It constitutes a basis if every function in F can be expanded in one and only one way.
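The continuous-index analogues, with w_α for the basis functions and c(α) for the coefficients (notation assumed):
\[ (w_\alpha, w_{\alpha'}) = \delta(\alpha - \alpha'), \qquad \psi(\mathbf{r}) = \int d\alpha\; c(\alpha)\, w_\alpha(\mathbf{r}), \qquad c(\alpha) = (w_\alpha, \psi). \]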

Orthonormal bases For two functions a scalar product is:

Orthonormal bases This means that Closure relation

Example of an orthonormal basis Let us consider a set of functions: The set is orthonormal: Functions in F can be expanded:

Example of an orthonormal basis For two functions a scalar product is:

Example of an orthonormal basis This means that Closure relation

State vectors and state space. The same function ψ can be represented by a multiplicity of different sets of components, corresponding to the choice of a basis. These sets characterize the state of the system just as well as the wave function itself. Moreover, the function ψ itself then appears on the same footing as these other sets of components.

State vectors and state space. Each state of the system is thus characterized by a state vector, belonging to the state space of the system, E_r. As F is a subspace of L², E_r is a subspace of the Hilbert space.

3.6 Dirac notation. Bracket = "bra" × "ket": ⟨ ⟩ = ⟨ | ⟩ = "⟨ |" × "| ⟩". Paul Adrien Maurice Dirac (1902 – 1984)

3.6 Dirac notation. We will be working in the E_r space. Any vector element of this space we will call a ket vector; notation: |ψ⟩. We associate kets with wave functions: F and E_r are isomorphic (r is an index labelling components). Paul Adrien Maurice Dirac (1902 – 1984)

3.6 Dirac notation. With each pair of kets we associate their scalar product, a complex number. We define a linear functional (not the same as a linear operator!) on kets as a linear operation associating a complex number with a ket. Such functionals form a vector space, which we will call the dual space E_r*. Paul Adrien Maurice Dirac (1902 – 1984)

3.6 Dirac notation. Any element of the dual space we will call a bra vector. The ket |φ⟩ enables us to define a linear functional, written ⟨φ|, that associates (linearly) with each ket |ψ⟩ a complex number equal to the scalar product ⟨φ|ψ⟩. For every ket in E_r there is a bra in E_r*. Paul Adrien Maurice Dirac (1902 – 1984)

3.6 Dirac notation. Some properties (below). Paul Adrien Maurice Dirac (1902 – 1984)
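The properties shown on this slide are not recoverable from the transcript; the standard properties of the scalar product in Dirac notation, which most likely appeared here, are:
\[ \langle\varphi|\psi\rangle = \langle\psi|\varphi\rangle^*, \qquad \langle\varphi|\big(\lambda_1|\psi_1\rangle + \lambda_2|\psi_2\rangle\big) = \lambda_1\langle\varphi|\psi_1\rangle + \lambda_2\langle\varphi|\psi_2\rangle, \qquad \langle\psi|\psi\rangle \ge 0. \]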

Linear operators. A linear operator A is defined as shown below, together with the product of operators (which in general do not commute), the commutator, and the matrix element of the operator A.
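A reconstruction of the standard definitions referenced on the slide:
\[ A\big(\lambda_1|\psi_1\rangle + \lambda_2|\psi_2\rangle\big) = \lambda_1 A|\psi_1\rangle + \lambda_2 A|\psi_2\rangle, \qquad (AB)|\psi\rangle = A\big(B|\psi\rangle\big), \]
\[ AB \ne BA \ \text{in general}, \qquad [A,B] = AB - BA, \qquad \text{matrix element:}\ \ \langle\varphi|A|\psi\rangle. \]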

Linear operators Example: What is ? It is an operator – it converts one ket into another

Linear operators. Example: let us assume the ket is normalized. The projector operator (below) projects one ket onto another.
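A reconstruction of the projector, assuming ⟨ψ|ψ⟩ = 1:
\[ P_\psi = |\psi\rangle\langle\psi|, \qquad P_\psi|\varphi\rangle = |\psi\rangle\,\langle\psi|\varphi\rangle, \qquad P_\psi^2 = P_\psi. \]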

Linear operators. Example: let us assume we have a set of orthonormal kets; these kets span the space E_q, a subspace of E. The subspace projector operator (below) projects a ket onto this subspace of kets.
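A sketch, assuming q orthonormal kets |φ_1⟩, …, |φ_q⟩ spanning E_q:
\[ P_q = \sum_{i=1}^{q} |\varphi_i\rangle\langle\varphi_i|, \qquad P_q^2 = P_q. \]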

Linear operators. Recall the matrix element of a linear operator A, ⟨φ|A|ψ⟩. Since a scalar product depends linearly on the ket, the matrix element depends linearly on the ket. Thus for a given bra and a given operator we can associate with each ket a number that depends linearly on that ket. So there is a new linear functional on the kets in space E, i.e., a bra in the dual space E*, which we will denote ⟨φ|A. Therefore (⟨φ|A)|ψ⟩ = ⟨φ|(A|ψ⟩) = ⟨φ|A|ψ⟩.

Linear operators Operator A associates with a given bra a new bra Let’s show that this correspondence is linear

Linear operators. For each ket there is a bra associated with it; the Hermitian conjugate (adjoint) operator A† is defined through this correspondence (below). This operator is linear (as can be shown). Charles Hermite (1822 – 1901)
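A reconstruction of the standard definition of the adjoint:
\[ A|\psi\rangle \ \longleftrightarrow\ \langle\psi|A^\dagger, \qquad \langle\psi|A^\dagger|\varphi\rangle = \langle\varphi|A|\psi\rangle^*. \]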

Linear operators Some properties: Charles Hermite (1822 – 1901)
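A reconstruction of the standard properties of the Hermitian conjugate:
\[ (A^\dagger)^\dagger = A, \qquad (\lambda A)^\dagger = \lambda^* A^\dagger, \qquad (A+B)^\dagger = A^\dagger + B^\dagger, \qquad (AB)^\dagger = B^\dagger A^\dagger. \]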

Hermitian conjugation. To obtain the Hermitian conjugate of an expression: replace constants with their complex conjugates; replace operators with their Hermitian conjugates; replace kets with bras; replace bras with kets; reverse the order of the factors. Charles Hermite (1822 – 1901)
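A worked example of these rules, added for illustration (|u⟩, |v⟩, |w⟩, |ψ⟩ are arbitrary kets and λ a constant; numerical factors may be moved freely afterwards):
\[ \big( \lambda\, \langle u|A|v\rangle\, |w\rangle\langle\psi| \big)^\dagger = |\psi\rangle\langle w|\; \langle v|A^\dagger|u\rangle\; \lambda^*. \]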

3.2 Hermitian operators. For a Hermitian operator the defining condition is shown below. Hermitian operators play a fundamental role in quantum mechanics (we'll see why later). E.g., the projector operator is Hermitian. Charles Hermite (1822 – 1901)
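A reconstruction of the definition and of the projector example:
\[ A^\dagger = A, \qquad \text{equivalently}\ \ \langle\varphi|A|\psi\rangle = \langle\psi|A|\varphi\rangle^*, \]
\[ P_\psi^\dagger = \big(|\psi\rangle\langle\psi|\big)^\dagger = |\psi\rangle\langle\psi| = P_\psi. \]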

Representations in state space. In a given basis, vectors and operators are represented by numbers (components and matrix elements); thus vector calculus becomes matrix calculus. The choice of a specific representation is dictated by the simplicity of calculations. We will now rewrite the expressions obtained above for orthonormal bases using Dirac notation.

Orthonormal bases. A countable set of kets is called orthonormal if the condition below holds. It constitutes a basis if every vector in E can be expanded in one and only one way.
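In Dirac notation (basis kets |u_i⟩ and components c_i assumed as the notation):
\[ \langle u_i|u_j\rangle = \delta_{ij}, \qquad |\psi\rangle = \sum_i c_i\, |u_i\rangle, \qquad c_i = \langle u_i|\psi\rangle. \]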

Orthonormal bases. Closure relation: the sum of the projectors onto the basis kets equals 1, the identity operator.
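The closure relation in Dirac notation:
\[ \sum_i |u_i\rangle\langle u_i| = 1. \]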

Orthonormal bases For two kets a scalar product is:

Orthonormal bases. A set of kets labelled by a continuous index α is called orthonormal if the condition below holds. It constitutes a basis if every vector in E can be expanded in one and only one way.
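The continuous-index analogues (kets |w_α⟩ and components c(α), notation assumed):
\[ \langle w_\alpha|w_{\alpha'}\rangle = \delta(\alpha-\alpha'), \qquad |\psi\rangle = \int d\alpha\; c(\alpha)\, |w_\alpha\rangle, \qquad c(\alpha) = \langle w_\alpha|\psi\rangle, \qquad \int d\alpha\; |w_\alpha\rangle\langle w_\alpha| = 1. \]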

Orthonormal bases Closure relation 1 – identity operator

Orthonormal bases For two kets a scalar product is:

Representation of kets and bras. In a given basis, a ket is represented by its components; these components can be arranged as a column vector:
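A sketch of the column-vector arrangement (components c_i = ⟨u_i|ψ⟩):
\[ |\psi\rangle \ \to\ \begin{pmatrix} \langle u_1|\psi\rangle \\ \langle u_2|\psi\rangle \\ \vdots \end{pmatrix} = \begin{pmatrix} c_1 \\ c_2 \\ \vdots \end{pmatrix}. \]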

Representation of kets and bras In a certain basis, a bra is also represented by its components These components could be arranged as a row-vector:

Representation of operators. In a given basis, an operator is represented by its matrix elements:
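The matrix elements in the basis {|u_i⟩}:
\[ A_{ij} = \langle u_i|A|u_j\rangle. \]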

Representation of operators

Representation of operators

Representation of operators

Representation of operators

Representation of operators. For Hermitian operators the matrix elements satisfy the relation below; in particular, the diagonal elements of a Hermitian operator are always real.
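A reconstruction of the Hermiticity condition on the matrix elements:
\[ A_{ij} = \langle u_i|A|u_j\rangle = \langle u_j|A|u_i\rangle^* = A_{ji}^*, \qquad \text{so}\ \ A_{ii} = A_{ii}^*. \]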

Change of representations. How do representations change when we go from one basis to another? Let us denote the transformation matrix between the two bases as below; some of its properties follow.
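A sketch under an assumed labelling (old basis {|u_i⟩}, new basis {|t_k⟩}; the slide's own symbols are not recoverable):
\[ S_{ik} = \langle u_i|t_k\rangle, \qquad (S^\dagger)_{ki} = \langle t_k|u_i\rangle, \qquad S^\dagger S = S S^\dagger = 1 \ \ (\text{S is unitary}). \]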

Change of representations

Change of representations

Eigenvalue equations. A ket is called an eigenvector of a linear operator if the relation below holds; this is called an eigenvalue equation for the operator. This equation has solutions only when λ takes certain values, the eigenvalues. Note that if a ket satisfies the equation, so does any multiple of it (below).
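A reconstruction of the eigenvalue equation and of the scaling property:
\[ A|\psi\rangle = \lambda|\psi\rangle, \qquad A\big(\alpha|\psi\rangle\big) = \lambda\big(\alpha|\psi\rangle\big)\ \ \text{for any complex number}\ \alpha. \]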

Eigenvalue equations. The eigenvalue is called nondegenerate (simple) if the corresponding eigenvector is unique to within a constant factor. The eigenvalue is called degenerate if there are at least two linearly independent kets corresponding to it. The number of linearly independent eigenvectors corresponding to a given eigenvalue is called its degree of degeneracy.

Eigenvalue equations. If for a certain eigenvalue λ the degree of degeneracy is g, then every vector of the form below is an eigenvector of the operator A corresponding to the eigenvalue λ, for any coefficients c_i. The set of linearly independent eigenvectors corresponding to this eigenvalue spans a g-dimensional vector space called an eigensubspace.
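The linear combination referred to on the slide (with |ψ^i⟩, i = 1, …, g, the linearly independent eigenvectors):
\[ |\psi\rangle = \sum_{i=1}^{g} c_i\, |\psi^i\rangle, \qquad A|\psi\rangle = \lambda|\psi\rangle. \]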

Eigenvalue equations. Let us assume that the basis is finite-dimensional, with dimensionality N. Projecting the eigenvalue equation onto the basis gives a system of N linear homogeneous equations for the N coefficients c_j. The condition for a non-trivial solution is given below.
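A reconstruction of the system and of the non-trivial-solution condition:
\[ \sum_{j=1}^{N} \big( A_{ij} - \lambda\,\delta_{ij} \big)\, c_j = 0, \qquad \det\big( A - \lambda\,1 \big) = 0. \]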

Eigenvalue equations. This condition is called the characteristic equation. It is an Nth-order equation in λ, and it has N roots (counted with multiplicity): the eigenvalues of the operator.

Eigenvalue equations. Let us select λ0 as one of the eigenvalues. If λ0 is a simple root of the characteristic equation, then we have a system of N – 1 independent equations for the coefficients c_j. From linear algebra, the solution of this system (with one of the coefficients fixed) is unique, so the eigenvector is determined up to an overall factor.

Eigenvalue equations. Let us select λ0 as one of the eigenvalues. If λ0 is a multiple (degenerate) root of the characteristic equation, then we have fewer than N – 1 independent equations for the coefficients c_j. E.g., if we have N – 2 independent equations, then (from linear algebra) the solution of this system is a two-parameter family of eigenvectors.

3.2 Eigenproblems for Hermitian operators. For a Hermitian operator, any eigenvalue λ is a real number; also, eigenvectors corresponding to different eigenvalues are orthogonal (see the sketch below).
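A sketch of the standard arguments, assuming A† = A:
\[ A|\psi\rangle = \lambda|\psi\rangle \ \Rightarrow\ \langle\psi|A|\psi\rangle = \lambda\,\langle\psi|\psi\rangle; \ \ \text{since}\ \langle\psi|A|\psi\rangle = \langle\psi|A|\psi\rangle^*, \ \lambda = \lambda^*. \]
\[ A|\psi_1\rangle = \lambda_1|\psi_1\rangle,\ \ A|\psi_2\rangle = \lambda_2|\psi_2\rangle,\ \ \lambda_1 \ne \lambda_2 \ \Rightarrow\ (\lambda_1 - \lambda_2)\,\langle\psi_2|\psi_1\rangle = 0 \ \Rightarrow\ \langle\psi_2|\psi_1\rangle = 0. \]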

3.2 Observables. Consider a Hermitian operator A whose eigenvalues form a discrete spectrum. The degree of degeneracy of a given eigenvalue a_n will be labelled g_n. In the eigensubspace E_n we consider g_n linearly independent kets; they can always be chosen orthonormal, and eigenvectors belonging to different eigenvalues are orthogonal automatically.

3.2 Observables. Inside each eigensubspace we choose an orthonormal set; therefore the eigenkets of A, taken all together, form an orthonormal system. If all these eigenkets form a basis in the state space, then the operator A is called an observable.
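A reconstruction of the orthonormalization and closure conditions for an observable:
\[ \langle\psi_n^i|\psi_{n'}^{j}\rangle = \delta_{nn'}\,\delta_{ij}, \qquad \sum_n \sum_{i=1}^{g_n} |\psi_n^i\rangle\langle\psi_n^i| = 1. \]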

3.2 Observables. For an eigensubspace projector the analogous relations hold (below). These relations can be generalized to the case of continuous bases. E.g., a projector is itself an observable.
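A sketch, using the projector onto the eigensubspace E_n:
\[ P_n = \sum_{i=1}^{g_n} |\psi_n^i\rangle\langle\psi_n^i|, \qquad P_n^\dagger = P_n, \qquad P_n^2 = P_n; \]
its eigenvalues are 1 (on the subspace) and 0 (on the orthogonal complement), and its eigenvectors span the whole state space, so a projector is an observable.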

3.2 Observables. If A and B commute and A|ψ⟩ = a|ψ⟩, then A(B|ψ⟩) = B(A|ψ⟩) = a(B|ψ⟩), so B|ψ⟩ is also an eigenvector of A with the eigenvalue a. If a is non-degenerate, then B|ψ⟩ is proportional to |ψ⟩, so this ket is also an eigenvector of B. If a is degenerate, then B|ψ⟩ merely remains within the eigensubspace of a. Thereby, if A and B commute, each eigensubspace of A is globally invariant (stable) under the action of B.

3.2 Observables. If two observables A and B commute, [A, B] = 0, then there is an orthonormal basis of eigenvectors common to both operators.

3.4 Questions that QM answers: 1) How is the state of a system described mathematically? (In CM: via generalized coordinates and momenta.) 2) For a given state, how can one predict the results of measurements of various physical quantities? (In CM: unambiguously, via the calculated trajectory in phase space.) 3) For a given state of the system known at time t0, how can one find the state of this system at an arbitrary time t? (In CM: using Hamilton's equations.) The answers to these questions are given by the postulates of QM.

State of a system. 1st postulate: at a given time t0, the state of the system is defined by a ket |ψ(t0)⟩ belonging to the state space E.

Physical quantities 2nd postulate: Every measurable physical quantity is described by an observable operator acting in E

Measurement 3rd postulate: Measurements of a physical quantity result only in (real) eigenvalues of a corresponding observable

Measurement. 3rd postulate: measurements of a physical quantity can result only in the (real) eigenvalues of the corresponding observable. It is not obvious a priori whether the spectrum of the measured quantity is continuous or discrete (e.g., for a system consisting of a proton and an electron, the energy spectrum has both a discrete and a continuous part).

3.4 Spectral decomposition. If the (normalized) state of the system is expanded over the eigenkets of A, then the 4th postulate states: the probability of measuring an eigenvalue a_n of the observable A in this state is given below.
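A reconstruction of the standard statement (non-degenerate case, then the degenerate generalization with eigenvectors |u_n^i⟩, i = 1, …, g_n):
\[ |\psi\rangle = \sum_n c_n\, |u_n\rangle, \qquad P(a_n) = |c_n|^2 = |\langle u_n|\psi\rangle|^2, \qquad \text{or}\ \ P(a_n) = \sum_{i=1}^{g_n} |\langle u_n^i|\psi\rangle|^2. \]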

3.4 Spectral decomposition

3.4 Spectral decomposition. The mean value of an observable:
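For a normalized state, the mean value referred to on the slide is:
\[ \langle A\rangle = \langle\psi|A|\psi\rangle = \sum_n a_n\, P(a_n). \]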

3.4 Spectral decomposition. If the observable has a continuous spectrum and the state of the system is expanded over the continuous basis, then the 4th postulate states: the probability of measuring an eigenvalue of the observable A between α and α + dα in this state is given below, where ρ is the probability density.
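A reconstruction of the continuous-spectrum statement:
\[ |\psi\rangle = \int d\alpha\; c(\alpha)\, |w_\alpha\rangle, \qquad dP(\alpha) = \rho(\alpha)\, d\alpha = |\langle w_\alpha|\psi\rangle|^2\, d\alpha. \]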

3.4 Spectral decomposition. The mean value of an observable:
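The continuous-spectrum analogue of the mean value:
\[ \langle A\rangle = \int \alpha\, \rho(\alpha)\, d\alpha. \]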

3.2 RMS deviation. How can one quantify the dispersion of the measurements around the mean value? Averaging the deviation from the average is not adequate (it vanishes identically); instead, the RMS deviation is used:
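A reconstruction of the two formulas:
\[ \big\langle A - \langle A\rangle \big\rangle = \langle A\rangle - \langle A\rangle = 0, \qquad \Delta A = \sqrt{\big\langle (A - \langle A\rangle)^2 \big\rangle} = \sqrt{\langle A^2\rangle - \langle A\rangle^2}. \]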

Reduction via measurement. When the measurement is performed, only one of the possible results is obtained. The state of the system after the measurement of an eigenvalue is then the normalized projection of the initial state onto the corresponding eigensubspace; we can write this as shown below.
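A reconstruction of the post-measurement state, with P_n the projector onto the eigensubspace of the measured eigenvalue a_n:
\[ |\psi\rangle \ \longrightarrow\ \frac{P_n|\psi\rangle}{\sqrt{\langle\psi|P_n|\psi\rangle}}. \]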

Reduction via measurement. 5th postulate: if measurement of a physical quantity in a given state of the system yields a certain eigenvalue, the state of the system immediately after the measurement is the normalized projection of the initial state onto the eigensubspace associated with that eigenvalue. For a non-degenerate eigenvalue, the state after the measurement is simply the eigenvector corresponding to that eigenvalue.

Reduction via measurement. We shall consider only ideal measurements; this means that the perturbations produced by the measurement devices are due only to the quantum-mechanical aspect of measurement. We will consider the studied system and the measurement device together, as a whole.

Time evolution of the system. 6th postulate: the time evolution of the state vector of the system is determined by the Schrödinger equation (below), where H is the Hamiltonian operator, the observable associated with the total energy of the system. Sir William Rowan Hamilton (1805 – 1865)
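The Schrödinger equation referred to on the slide:
\[ i\hbar\,\frac{d}{dt}\,|\psi(t)\rangle = H(t)\,|\psi(t)\rangle. \]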

3.5 Time evolution of the system. How does the mean value of an observable evolve? Recall the classical-mechanics result (Poisson bracket), and compare it with the quantum expression below.
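A reconstruction of the two expressions (classical Poisson bracket versus quantum commutator):
\[ \frac{dQ}{dt} = \{Q, H\} + \frac{\partial Q}{\partial t}, \qquad \frac{d}{dt}\langle A\rangle = \frac{1}{i\hbar}\,\big\langle [A, H] \big\rangle + \Big\langle \frac{\partial A}{\partial t} \Big\rangle. \]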

3.5 Compatibility of observables. If two observables commute, there exists a basis of eigenvectors common to both operators. There is then at least one state that simultaneously yields specific eigenvalues for these two operators; thereby these two observables can be measured simultaneously. Such operators are called compatible with each other. If, on the other hand, the operators do not commute, a state cannot in general be an eigenvector of both observables, and such operators are called incompatible.

3.5 Compatibility of observables. When two observables are compatible, the measurement of the second does not produce any loss of the information obtained from the measurement of the first. When two observables are incompatible, the measurement of the second does produce a loss of the information obtained from the measurement of the first.

3.5 The uncertainty principle. Recall the Schwarz inequality; in Dirac's notation it reads as below. It is applied to two auxiliary kets built from the state and the two observables.
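A sketch of the usual setup (the auxiliary kets |f⟩ and |g⟩ are an assumption about the slide's notation):
\[ |\langle f|g\rangle|^2 \ \le\ \langle f|f\rangle\,\langle g|g\rangle, \qquad |f\rangle = \big(A - \langle A\rangle\big)|\psi\rangle, \qquad |g\rangle = \big(B - \langle B\rangle\big)|\psi\rangle. \]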

3.5 The uncertainty principle. Let us calculate: Similarly: On the other hand:
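A sketch of the intermediate steps in the usual derivation (with the |f⟩ and |g⟩ defined above):
\[ \langle f|f\rangle = \sigma_A^2, \qquad \langle g|g\rangle = \sigma_B^2, \qquad \langle f|g\rangle - \langle g|f\rangle = \big\langle [A, B] \big\rangle, \qquad |\langle f|g\rangle|^2 \ \ge\ \Big( \frac{\langle f|g\rangle - \langle g|f\rangle}{2i} \Big)^2. \]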

3.5 The uncertainty principle. Summarizing, we obtain the generalized uncertainty principle (below). Recalling the canonical commutator of position and momentum, the familiar position–momentum uncertainty relation follows.
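A reconstruction of the result and the canonical example:
\[ \sigma_A^2\, \sigma_B^2 \ \ge\ \Big( \frac{1}{2i}\,\big\langle [A, B] \big\rangle \Big)^2, \qquad [x, p] = i\hbar \ \ \Rightarrow\ \ \sigma_x\, \sigma_p \ \ge\ \frac{\hbar}{2}. \]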

3.5 The uncertainty principle. Summarizing: if one of the observables is taken to be the Hamiltonian H and the operator Q does not depend on time explicitly, then the generalized uncertainty principle can be combined with the evolution equation for ⟨Q⟩ (below).
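A sketch of the combination, under the stated assumption that Q has no explicit time dependence:
\[ \sigma_H^2\, \sigma_Q^2 \ \ge\ \Big( \frac{1}{2i}\,\big\langle [H, Q] \big\rangle \Big)^2, \qquad \frac{d\langle Q\rangle}{dt} = \frac{1}{i\hbar}\,\big\langle [Q, H] \big\rangle \ \ \Rightarrow\ \ \sigma_H\, \sigma_Q \ \ge\ \frac{\hbar}{2}\, \Big| \frac{d\langle Q\rangle}{dt} \Big|. \]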

3.5 The uncertainty principle. Recall: Hence: Then:

3.5 The uncertainty principle. Introducing Δt as the time it takes the expectation value of Q to change by one standard deviation, we then obtain the energy–time uncertainty relation:
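A reconstruction of the final step:
\[ \Delta t = \frac{\sigma_Q}{\,\big| d\langle Q\rangle/dt \big|\,}, \qquad \Delta E\,\Delta t \ \ge\ \frac{\hbar}{2}, \qquad \text{with}\ \Delta E = \sigma_H. \]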