Quantum One: Lecture 8. Continuously Indexed Basis Sets.


In the last lecture, we began to describe a more general formulation of quantum mechanics, applicable to arbitrary quantum systems, which develops the basic postulates in a form designed to be representation independent. We began by stating the first postulate, which associates the dynamical state of a quantum system with a state vector |ψ〉 that is an element of a complex linear vector space S. We then gave a definition of the term linear vector space, and saw that it defines a set of objects that we can multiply by scalars and add together to obtain other elements of the set. That is, they obey a superposition principle. We then introduced a series of additional definitions, including the ideas of spanning sets, linearly independent sets, and basis sets, and we defined what we mean by the dimension of a linear vector space.

In this lecture we continue our exploration of the mathematical properties of the state spaces of quantum mechanical systems. To this end we note that our previous definitions were expressed using a notation that is strictly applicable only to countable sets of states labeled by a discrete index. But we often encounter sets of vectors {|φ_α〉} labeled by a continuous index α. Examples from functional linear vector spaces include the plane waves and the delta functions. We therefore need to extend the definitions presented for discrete sets so that we can apply the same concepts to sets of vectors labeled by a continuous index. This gives rise to the following set of definitions.

Span - A continuously indexed set of vectors {|φ_α〉} is said to span a vector space S if every vector |ψ〉 in S can be written as a continuous linear combination of the elements of the set, |ψ〉 = ∫ dα c(α)|φ_α〉. In this expression the function c(α) gives the complex value of the expansion coefficient multiplying the state |φ_α〉 of the spanning set.

Linear Independence - A continuously indexed set of vectors {|φ_α〉} is linearly independent if the only solution to the equation ∫ dα c(α)|φ_α〉 = 0 is c(α) = 0 for all α.

Basis - A linearly independent set of continuously indexed vectors that spans S forms a basis for the space. We note in passing that any space containing a continuously indexed basis is necessarily infinite dimensional, since it must contain an infinite number of linearly independent vectors in any domain in which the index α takes on a continuous range of values.
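The expansion of a state in a continuously indexed basis can be illustrated numerically. The sketch below (my own illustration, not part of the lecture) discretizes a Gaussian wave function on a finite grid, so that the continuous plane-wave expansion |ψ〉 = ∫ dk c(k)|φ_k〉 is approximated by a discrete Fourier sum; resumming the expansion coefficients recovers the original function. The grid size and the choice of a Gaussian are arbitrary.

```python
import numpy as np

# A function psi(x) "expanded" in the continuously indexed plane-wave set,
# approximated on a finite grid: the integral over k becomes an FFT sum.
x = np.linspace(-10.0, 10.0, 1024, endpoint=False)
psi = np.exp(-x**2 / 2) / np.pi**0.25      # a Gaussian wave function

c_k = np.fft.fft(psi)                      # expansion coefficients c(k)
psi_rebuilt = np.fft.ifft(c_k).real        # resum the superposition

# Reconstruction error: the discrete expansion reproduces psi(x)
max_err = np.max(np.abs(psi - psi_rebuilt))
```

On a finite grid the "continuous" index k is itself discretized, which is exactly why truly continuous bases require the integral notation introduced above.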

Inner Products

Towards a notion of length and direction

Another important property of the linear vector spaces of quantum mechanics is that they are inner product spaces.

Definition: A linear vector space S is an inner product space if there exists an assignment, to each pair of vectors |φ〉 and |ψ〉 in S, of a scalar (an element of the field), denoted by the symbol 〈φ|ψ〉 and referred to as the inner product of |φ〉 and |ψ〉, obeying the following properties:

1. 〈φ|φ〉 is real and non-negative, i.e., 〈φ|φ〉 ≥ 0. Moreover, 〈φ|φ〉 = 0 if and only if |φ〉 is the null vector.
2. 〈φ|[|ψ₁〉 + |ψ₂〉] = 〈φ|ψ₁〉 + 〈φ|ψ₂〉. Thus, the inner product distributes over vector addition.
3. 〈φ|[λ|ψ〉] = λ〈φ|ψ〉.
4. 〈φ|ψ〉 = (〈ψ|φ〉)*. Thus, the order of the inner product is important for complex vector spaces.
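The four properties above can be checked numerically for discretized wave functions, where 〈φ|ψ〉 = ∫ φ*(x)ψ(x) dx is approximated by a simple Riemann sum. The particular functions and the scalar λ below are arbitrary choices of mine, not from the lecture.

```python
import numpy as np

x = np.linspace(-6.0, 6.0, 4001)
dx = x[1] - x[0]

def inner(phi, psi):
    # <phi|psi> = integral of phi*(x) psi(x) dx, via a Riemann sum
    return np.sum(np.conj(phi) * psi) * dx

phi  = np.exp(-x**2) * np.exp(1j * x)
psi1 = np.exp(-(x - 1)**2 / 2)
psi2 = np.exp(-(x + 1)**2 / 2) * np.exp(2j * x)
lam  = 0.3 - 1.7j

norm_sq   = inner(phi, phi)                       # property 1: real, >= 0
distrib_l = inner(phi, psi1 + psi2)               # property 2, left side
distrib_r = inner(phi, psi1) + inner(phi, psi2)   # property 2, right side
lin_l     = inner(phi, lam * psi1)                # property 3, left side
lin_r     = lam * inner(phi, psi1)                # property 3, right side
conj_l    = inner(phi, psi1)                      # property 4, left side
conj_r    = np.conj(inner(psi1, phi))             # property 4, right side
```

Note that property 4 is what forces the conjugation in `inner`: swapping the arguments conjugates the result.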

In complex vector spaces, the inner product 〈φ|ψ〉 is linear in |ψ〉, but antilinear in |φ〉. The first half of this statement follows from the observation that 〈φ|[λ₁|ψ₁〉 + λ₂|ψ₂〉] = λ₁〈φ|ψ₁〉 + λ₂〈φ|ψ₂〉, which follows from (2) and (3), while the second stems from the fact that if |φ〉 = λ₁|φ₁〉 + λ₂|φ₂〉, then 〈φ|ψ〉 = λ₁*〈φ₁|ψ〉 + λ₂*〈φ₂|ψ〉, which defines the condition of antilinearity with respect to |φ〉.

In functional spaces, the inner product involves the continuous analog of a summation over components, namely an integral. Thus, e.g., in the space of Fourier transformable functions on R³ we "associate" with each function ψ(r) a vector |ψ〉. The inner product of two functions then takes the form 〈φ|ψ〉 = ∫ d³r φ*(r)ψ(r), where the integral is over all space.

The concept of an inner product allows us to make several new definitions:

Norm - The positive real quantity ‖ψ‖ = √〈ψ|ψ〉 is referred to as the norm, or the length, of the vector |ψ〉. A vector is said to be square-normalized, to have unit norm, or to be a unit vector if ‖ψ‖ = 1. Any vector having a finite norm can be square-normalized. That is, if 〈ψ|ψ〉 is not infinite, then the vector |ψ′〉 = |ψ〉/‖ψ‖ is a unit vector along the same direction in the space as |ψ〉.
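Square-normalization is a one-line computation once the norm is in hand. The sketch below (an illustration of mine, with an arbitrary un-normalized Gaussian) divides a state by its norm and confirms the result has unit norm under the same discrete inner product.

```python
import numpy as np

x = np.linspace(-8.0, 8.0, 4001)
dx = x[1] - x[0]
psi = (1 + 2j) * np.exp(-x**2 / 3)             # an un-normalized state

# ||psi|| = sqrt(<psi|psi>), with the inner product as a Riemann sum
norm = np.sqrt(np.sum(np.abs(psi)**2) * dx)
psi_unit = psi / norm                          # unit vector along |psi>

unit_check = np.sum(np.abs(psi_unit)**2) * dx  # should equal 1
```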

Orthogonality - Two vectors |ψ〉 and |φ〉 are orthogonal if 〈ψ|φ〉 = 〈φ|ψ〉 = 0, i.e., if their inner product vanishes. We loosely say that the vectors have no overlap, or that |ψ〉 has no component along |φ〉, and vice versa.

Orthonormal Set of Vectors

1. A discrete set of vectors {|φ_i〉} forms an orthonormal set if 〈φ_i|φ_j〉 = δ_ij, that is, if they are a set of unit-normalized vectors which are mutually orthogonal.

2. A continuously indexed set of vectors {|φ_α〉} forms an orthonormal set if 〈φ_α|φ_α′〉 = δ(α − α′). The members of such a set have infinite norm, and are not square-normalizable.
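A standard concrete example of a discrete orthonormal set (my choice of illustration, not one made in the lecture) is the particle-in-a-box eigenfunctions φ_n(x) = √(2/L) sin(nπx/L) on [0, L]. The sketch below builds their Gram matrix of inner products numerically and checks that it is the identity, i.e., 〈φ_m|φ_n〉 = δ_mn.

```python
import numpy as np

L = 1.0
x = np.linspace(0.0, L, 5001)
dx = x[1] - x[0]

def phi(n):
    # Particle-in-a-box eigenfunctions: unit-normalized, mutually orthogonal
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

# Gram matrix of inner products <phi_m|phi_n> for n, m = 1..4
gram = np.array([[np.sum(phi(m) * phi(n)) * dx for n in range(1, 5)]
                 for m in range(1, 5)])

# Deviation from delta_mn: should be essentially zero
max_dev = np.max(np.abs(gram - np.eye(4)))
```

The continuous case (item 2) cannot be checked the same way: a delta-normalized state like a plane wave has infinite norm, so its "Gram matrix" entries diverge on the diagonal.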

It is straightforward to show that any set of mutually orthogonal vectors not containing the null vector is linearly independent. Thus, any orthonormal set of vectors that spans S forms an orthonormal basis for the space. But what if we have a basis for the space that is not an orthonormal basis? Does an orthonormal basis always exist? Yes, it does. We show in the next lecture that from any set of N linearly independent vectors of finite length, it is always possible to construct a set of N orthonormal vectors. The explicit algorithm for doing so is referred to as the Gram-Schmidt orthogonalization procedure, and it leads to the conclusion that from any basis, it is always possible to construct an orthonormal basis.
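As a preview of the next lecture, a minimal numerical sketch of the classical Gram-Schmidt procedure is given below (my own implementation on an arbitrary set of three linearly independent vectors): each vector has its components along the previously accepted vectors subtracted off, and is then normalized.

```python
import numpy as np

def gram_schmidt(vectors):
    """Classical Gram-Schmidt: turn a linearly independent set into an
    orthonormal set spanning the same subspace (complex inner product)."""
    basis = []
    for v in vectors:
        w = v.astype(complex)
        for e in basis:
            w = w - np.vdot(e, w) * e     # remove the component along e
        nrm = np.linalg.norm(w)
        if nrm > 1e-12:                   # skip (numerically) dependent vectors
            basis.append(w / nrm)
    return np.array(basis)

vecs = np.array([[1.0, 1.0, 0.0],
                 [1.0, 0.0, 1.0],
                 [0.0, 1.0, 1.0]])        # linearly independent, not orthogonal
e = gram_schmidt(vecs)

# Gram matrix of the output: should be the 3x3 identity
gram = e @ e.conj().T
max_dev = np.max(np.abs(gram - np.eye(3)))
```

Note `np.vdot` conjugates its first argument, matching the antilinearity of the inner product in its first slot discussed above.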