
Quantum Two

Angular Momentum and Rotations: The Wigner-Eckart Theorem

In this last segment on angular momentum we prove an important result regarding the matrix elements of tensor operators between basis states of any standard representation. This result, known as the Wigner-Eckart theorem, encodes the geometrical constraints placed on the components of tensor operators by the transformation laws that they satisfy. The Wigner-Eckart theorem: the matrix elements of the components of a spherical tensor operator of rank J between basis states of a standard representation are given by the product of a Clebsch-Gordan coefficient and a quantity that is independent of m, M, and m′, referred to as the reduced matrix element.

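In a common convention (with any remaining numerical factors, such as √(2j+1), absorbed into the reduced matrix element), the statement takes the form

\[
\langle j, m, \tau \,|\, T^{J}_{M} \,|\, j', m', \tau' \rangle
\;=\; \langle j\, m \,|\, J\, M ;\, j'\, m' \rangle\,
\langle j, \tau \,\|\, T^{J} \,\|\, j', \tau' \rangle ,
\]

where ⟨j m | J M; j′ m′⟩ is a Clebsch-Gordan coefficient and the reduced matrix element ⟨j, τ‖T^J‖j′, τ′⟩ is independent of m, M, and m′.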

This result is not so surprising, given that the product of the two quantities on the right-hand side of the matrix element, i.e., T^J_M acting on |j′, m′, τ′⟩, transforms under rotations like a direct product ket of the form |J, M⟩ ⊗ |j′, m′⟩, while the quantity on the left transforms as the bra of angular momentum (j, m) that it is. The reduced matrix element characterizes the extent to which the given tensor operator mixes the two subspaces S(j, τ) and S(j′, τ′), and is generally a different number for each tensor operator.

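For reference, the transformation law being invoked is the defining property of a rank-J spherical tensor operator,

\[
U(R)\, T^{J}_{M}\, U^{\dagger}(R) \;=\; \sum_{M'=-J}^{J} T^{J}_{M'}\, D^{(J)}_{M' M}(R),
\]

so that T^J_M |j′, m′, τ′⟩ transforms under rotations in the same way as the direct product ket |J, M⟩ ⊗ |j′, m′⟩.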

In a certain sense the two sets of reduced matrix elements are the only way in which any two tensor operators of the same rank differ from one another. To prove the Wigner-Eckart theorem we will simply show that the matrix elements of interest obey the same recursion relations as the CG coefficients, and thus differ from them by at most an overall multiplicative constant (which is proportional to the reduced matrix element).


To this end, we use the simplifying notation ⟨j m | J M; j′ m′⟩ for the CG coefficients and denote the matrix elements of interest in a similar fashion, i.e., as ⟨j m | T^J_M | j′ m′⟩, where we have temporarily suppressed the labels τ and τ′ from the basis states. This matrix element is implicitly a function of the labels τ and τ′, but we will suppress this dependence until it is needed. We then recall that the CG coefficients obey a recursion relation that is generated by the matrix elements of the raising and lowering operators J₊ and J₋. In this more compact notation the recursion relation can be written . . .

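One common form of this recursion relation (a sketch in the generic coupling notation j₁ ⊗ j₂ → j, with the usual Condon-Shortley phases; the precise arrangement on the original slide may differ) is

\[
\sqrt{(j \mp m)(j \pm m + 1)}\;\langle j,\, m \,|\, j_1\, m_1 ;\, j_2\, m_2 \rangle
\;=\;
\sqrt{(j_1 \mp m_1)(j_1 \pm m_1 + 1)}\;\langle j,\, m \pm 1 \,|\, j_1,\, m_1 \pm 1 ;\, j_2\, m_2 \rangle
\;+\;
\sqrt{(j_2 \mp m_2)(j_2 \pm m_2 + 1)}\;\langle j,\, m \pm 1 \,|\, j_1\, m_1 ;\, j_2,\, m_2 \pm 1 \rangle ,
\]

obtained by evaluating ⟨j, m ± 1 | J_± | j₁ m₁; j₂ m₂⟩ once with J_± acting to the left and once with J_± = J_{1±} + J_{2±} acting to the right.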

To obtain a similar relation for the matrix elements of the components T^J_M of the tensor operator, we consider an "analogous" matrix element involving J_±. Using our more compact notation, the term on the right-hand side can be re-expressed as follows . . .


We can evaluate this in a second way by using the commutation relations satisfied by J_± and the tensor components T^J_M, i.e., we can write . . . This allows us to express the matrix element above in the form . . .

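The commutation relations in question follow from the transformation law quoted earlier and are often taken as the definition of a spherical tensor operator:

\[
[J_z, T^{J}_{M}] = \hbar M\, T^{J}_{M},
\qquad
[J_{\pm}, T^{J}_{M}] = \hbar \sqrt{J(J+1) - M(M \pm 1)}\;\, T^{J}_{M \pm 1}
= \hbar \sqrt{(J \mp M)(J \pm M + 1)}\;\, T^{J}_{M \pm 1} .
\]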

which reduces to . . .


Comparing this with the recursion relation for the CG coefficients given earlier, . . .


we deduce the recursion relation . . . which is precisely the same as that obeyed by the Clebsch-Gordan coefficients. The two sets of numbers, for given values of j, j₁, and j₂, must be proportional to one another.


Introducing the reduced matrix element as the constant of proportionality, we deduce that . . . which becomes, after a re-introduction of the labels τ and τ′ and our regular notation, and a little rearrangement, . . . thus proving the theorem. The Wigner-Eckart theorem is very useful because it leads automatically to certain selection rules. Most of the matrix elements of the components of a tensor operator between basis states of a standard representation are zero.


Indeed, because of the CG coefficient on the right-hand side, we see that the matrix element of T^J_M between two states of this type vanishes unless m = M + m′, which means that the change in the z component of angular momentum satisfies Δm = m − m′ = M, and unless the triangle inequality |J − j′| ≤ j ≤ J + j′ is satisfied. Thus, for example, we see that the matrix elements of a scalar operator vanish unless Δm = 0 and Δj = j − j′ = 0.

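As a quick numerical illustration of these CG selection rules (an added sketch, not part of the original slides), sympy's Clebsch-Gordan routines can be used; in sympy's convention CG(j1, m1, j2, m2, j3, m3) stands for ⟨j1 m1; j2 m2 | j3 m3⟩:

    # Illustrative check of the CG selection rules with sympy.
    from sympy import S
    from sympy.physics.quantum.cg import CG

    # Vanishes because m3 != m1 + m2 (the m = M + m' rule):
    print(CG(1, 0, S(1)/2, S(1)/2, S(3)/2, S(3)/2).doit())    # -> 0

    # Vanishes because j3 = 5/2 violates |j1 - j2| <= j3 <= j1 + j2:
    print(CG(1, 0, S(1)/2, S(1)/2, S(5)/2, S(1)/2).doit())    # -> 0

    # Both rules satisfied, so the coefficient is nonzero:
    print(CG(1, 1, S(1)/2, -S(1)/2, S(3)/2, S(1)/2).doit())   # -> sqrt(3)/3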

Thus, scalar operators cannot change the angular momentum of any states that they act upon. They are often said to carry no angular momentum, in contrast to tensor operators of higher rank, which can and do change the angular momentum of the states that they act upon. Thus the matrix elements for scalar operators take the form . . . In particular, it follows that within any irreducible subspace S(j, τ) the matrix representing any scalar is just a constant times the identity matrix for that subspace (confirming the rotational invariance of scalar observables within any such subspace), i.e., . . .

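As a sketch of the form being referred to: a scalar operator Q is a tensor of rank zero, the relevant CG coefficient is ⟨j m | 0 0; j′ m′⟩ = δ_{jj′} δ_{mm′}, and so

\[
\langle j, m, \tau \,|\, Q \,|\, j', m', \tau' \rangle
\;=\; Q^{\tau \tau'}_{j}\, \delta_{j j'}\, \delta_{m m'},
\qquad
Q^{\tau \tau'}_{j} \equiv \langle j, \tau \,\|\, Q \,\|\, j, \tau' \rangle ,
\]

so that within a single subspace S(j, τ) the operator Q is represented by Q^{ττ}_{j} times the identity matrix. (The symbol Q^{ττ′}_{j} is just a compact label, introduced here, for the reduced matrix element.)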

Application of the Wigner-Eckart theorem to vector operators leads to consideration of the spherical components of such an operator. The corresponding matrix elements satisfy the Wigner-Eckart relation with J = 1, and vanish unless Δm = m − m′ = 0, ±1. Similarly, application of the triangle inequality to vector operators leads to the selection rule Δj = j − j′ = 0, ±1. Thus, vector operators act as though they impart or take away angular momentum j = 1.

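For reference, the spherical components of a vector operator V are constructed from its Cartesian components in the usual way,

\[
V_{\pm 1} \;=\; \mp\,\frac{V_x \pm i V_y}{\sqrt{2}},
\qquad
V_{0} \;=\; V_z ,
\]

and together they form a spherical tensor of rank 1, so the theorem applies with J = 1 and M = 0, ±1.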

The matrix elements of a vector operator within any given irreducible subspace are proportional to those of any other vector operator, such as the angular momentum operator J, whose spherical components satisfy the same Wigner-Eckart form, in which the reduced matrix element of J doesn't connect states in different irreducible subspaces and is independent of τ. It follows that within S(j, τ) the matrix representing any component of V differs from that representing the corresponding component of J by a multiplicative constant.

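As a sketch of the relations alluded to, the spherical components of J itself have the matrix elements

\[
\langle j, m, \tau \,|\, J_{0} \,|\, j, m', \tau \rangle \;=\; \hbar\, m'\, \delta_{m m'},
\qquad
\langle j, m, \tau \,|\, J_{\pm 1} \,|\, j, m', \tau \rangle
\;=\; \mp\,\frac{\hbar}{\sqrt{2}}\,\sqrt{(j \mp m')(j \pm m' + 1)}\;\delta_{m,\, m' \pm 1},
\]

with no matrix elements connecting different values of j or τ, and no dependence on τ.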

Thus, within any subspace S(j, τ) all vector operators are proportional, and we can write V = αJ within S(j, τ). It is a straightforward exercise to compute the constant of proportionality α in terms of the scalar observable J⋅V, the result being what is referred to as the projection theorem, i.e., within S(j, τ), . . . where the mean value 〈J⋅V〉, being a scalar with respect to rotations, can be taken with respect to any state in the subspace S(j, τ).

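In its usual form the projection theorem states that, within S(j, τ) (for j ≠ 0),

\[
\mathbf{V} \;=\; \frac{\langle \mathbf{J} \cdot \mathbf{V} \rangle}{\hbar^{2}\, j(j+1)}\;\mathbf{J}
\;=\; \frac{\langle \mathbf{J} \cdot \mathbf{V} \rangle}{\langle \mathbf{J}^{2} \rangle}\;\mathbf{J},
\]

so that the constant of proportionality is α = 〈J⋅V〉 / (ħ² j(j+1)).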

More generally, one finds that the matrix elements of any two tensor operators of the same rank between states in any two irreducible invariant subspaces S(j, τ) and S(j′, τ′) are proportional, i.e., provided that the corresponding reduced matrix elements do not vanish, there exists a constant β such that the matrix elements of one operator are β times those of the other, where β is the ratio of the two reduced matrix elements. Thus, the orientational dependence of the matrix elements is completely determined by the transformation properties of the states and the tensors involved. In this sense, all tensors of the same rank look alike. That's the essence of the Wigner-Eckart theorem.

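Spelled out as a sketch (with A^J and B^J as generic labels, introduced here just for illustration, for two rank-J tensor operators), the statement is

\[
\langle j, m, \tau \,|\, A^{J}_{M} \,|\, j', m', \tau' \rangle
\;=\; \beta\; \langle j, m, \tau \,|\, B^{J}_{M} \,|\, j', m', \tau' \rangle ,
\qquad
\beta \;=\; \frac{\langle j, \tau \,\|\, A^{J} \,\|\, j', \tau' \rangle}{\langle j, \tau \,\|\, B^{J} \,\|\, j', \tau' \rangle},
\]

which follows immediately by applying the theorem to each operator and taking the ratio of the resulting expressions.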
