
Signal Processing and Representation Theory Lecture 2.

1 Signal Processing and Representation Theory Lecture 2

2 Outline: Review, Invariance, Schur's Lemma, Fourier Decomposition

3 Representation Theory Review An orthogonal / unitary representation of a group G onto an inner product space V is a map ρ that sends every element of G to an orthogonal / unitary transformation of V, subject to the conditions: 1. ρ(0)v = v for all v ∈ V, where 0 is the identity element. 2. ρ(gh)v = ρ(g)ρ(h)v for all g, h ∈ G and all v ∈ V.
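
For concreteness, here is a minimal numpy sketch (my own illustration, not from the slides) of these two conditions for the group of 2D rotations represented by 2x2 orthogonal matrices; the helper name rho is assumed, not taken from the lecture.

```python
import numpy as np

def rho(theta):
    """Represent the rotation by angle theta as a 2x2 orthogonal matrix."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s,  c]])

g, h = 0.7, -1.3            # two group elements (rotation angles)
v = np.array([2.0, 1.0])    # an arbitrary vector in V = R^2
w = np.array([-0.5, 3.0])

# Condition 1: the identity element acts trivially.
assert np.allclose(rho(0.0) @ v, v)

# Condition 2: rho is a homomorphism, rho(g + h) = rho(g) rho(h).
assert np.allclose(rho(g + h), rho(g) @ rho(h))

# Orthogonality: rho(g) preserves the inner product.
assert np.isclose((rho(g) @ v) @ (rho(g) @ w), v @ w)
```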

4 Representation Theory Review If we are given a representation ρ of a group G onto a vector space V, then W ⊆ V is a sub-representation if ρ(g)w ∈ W for every g ∈ G and every w ∈ W. A representation of a group G onto V is irreducible if its only sub-representations W ⊆ V are W = V and W = {0}.

5 Representation Theory Review Example: If G is the group of 2x2 rotation matrices and V is the vector space of 4-dimensional real / complex arrays, then the representation that applies the rotation independently to the first two and the last two coordinates (a block-diagonal action) is not irreducible, since it maps the subspace W = {(x_1, x_2, 0, 0)} back into itself.

6 Representation Theory Review Given a representation ρ of a group G onto a vector space V, for any two elements v, w ∈ V, we can define the correlation function: Corr(g, v, w) = ⟨v, ρ(g)w⟩, giving the dot-product of v with the transformations of w.
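
As a small illustration (mine, not the slides'), the correlation function for the 2D rotation representation can be sampled over a grid of angles and used to find the best-aligning rotation; rho and corr are assumed helper names.

```python
import numpy as np

def rho(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def corr(theta, v, w):
    """Corr(g, v, w) = <v, rho(g) w> for the rotation g by angle theta."""
    return v @ (rho(theta) @ w)

v = np.array([1.0, 0.0])
w = np.array([0.0, 1.0])
angles = np.linspace(0, 2 * np.pi, 360, endpoint=False)
values = [corr(t, v, w) for t in angles]

# The maximizing angle is the rotation that best aligns w with v.
best = angles[int(np.argmax(values))]
print(f"best alignment at {np.degrees(best):.1f} degrees")   # ~270 degrees here
```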

7 Representation Theory Review (Why We Care) Given a representation ρ of a group G onto a vector space V, if we can express V as the direct sum of irreducible representations: V = V_1 ⊕ … ⊕ V_n, then: 1. Alignment can be solved more efficiently by reducing the number of multiplications in the computation of the correlation. 2. We can obtain (robust) transformation-invariant representations.

8 Representation Theory Review (Why We Care) Correlation: [diagram: v and w are each decomposed as v = v_1 + v_2 + … + v_n and w = w_1 + w_2 + … + w_n, and the transformation T is applied to each component separately, so the correlation can be computed summand by summand.]

9 Outline: Review, Invariance, Schur's Lemma, Fourier Decomposition

10 Representation Theory Motivation If v_M is a spherical function representing model M and v_N is a spherical function representing model N, we want to define a map Ψ that takes a spherical function and returns a rotation-invariant array of values: – Ψ(v_M) = Ψ(T(v_M)) for all rotations T and all shape descriptors v_M. – ||Ψ(v_M) − Ψ(v_N)|| ≤ ||v_M − v_N|| for all shape descriptors v_M and v_N.

11 Representation Theory More Generally Given a representation ρ of a group G onto a vector space V, we want to define a map Ψ that takes a vector v ∈ V and returns a G-invariant array of values: – Ψ(v) = Ψ(ρ(g)v) for all v ∈ V and all g ∈ G. – ||Ψ(v) − Ψ(w)|| ≤ ||v − w|| for all v, w ∈ V.

12 Representation Theory Invariance Approach: Given a representation ρ of a group G onto a vector space V, map each vector v ∈ V to its norm: Ψ(v) = ||v||. 1. Since the representation is unitary, ||ρ(g)v|| = ||v|| for all v ∈ V and all g ∈ G; thus Ψ(v) = Ψ(ρ(g)v) and the map Ψ is invariant to the action of G. 2. Since the difference between the norms of two vectors is never bigger than the distance between the vectors, we have ||Ψ(v) − Ψ(w)|| ≤ ||v − w|| for all v, w ∈ V.

13 Representation Theory Invariance If V is an inner product space and v, w ∈ V, we know that ║ ||v|| − ||w|| ║ ≤ ||v − w|| (the reverse triangle inequality). [diagram: the triangle with sides v, w, and v − w.]

14 Representation Theory Invariance Example: Consider the block-diagonal representation of the group of 2x2 rotation matrices onto the vector space of 4-dimensional arrays from the earlier example. Then the map Ψ(v) = ||v|| = ||(v_1, v_2, v_3, v_4)|| is a rotation-invariant map…

15 Representation Theory Invariance Example: … but so is the map Ψ(v) = ( ||(v_1, v_2)||, ||(v_3, v_4)|| ), which records the norm of the projection onto each invariant subspace separately. The new map is better because it gives more rotation-invariant information about the initial vector.

16 Representation Theory Invariance Generally: Given a representation ρ of a group G onto a vector space V, if we can express V as the direct sum of sub-representations: V = V_1 ⊕ … ⊕ V_n, then, expressing a vector v as the sum v = v_1 + … + v_n with v_i ∈ V_i, we can define the rotation-invariant mapping: Ψ(v) = ( ||v_1||, …, ||v_n|| ).
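
A minimal sketch of this subspace-norm descriptor (my own code, assuming the block-diagonal 4D rotation representation from the earlier example; the function names are mine):

```python
import numpy as np

def rho4(theta):
    """Block-diagonal action of a 2D rotation on R^4: rotate (v_1, v_2) and (v_3, v_4) independently."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    out = np.zeros((4, 4))
    out[:2, :2] = R
    out[2:, 2:] = R
    return out

def psi(v):
    """Invariant descriptor: the norm of the projection onto each sub-representation."""
    return np.array([np.linalg.norm(v[:2]), np.linalg.norm(v[2:])])

v = np.array([1.0, 2.0, 3.0, 4.0])
for theta in (0.3, 1.1, 2.9):
    assert np.allclose(psi(rho4(theta) @ v), psi(v))
print(psi(v))   # -> [2.236..., 5.0]
```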

17 Representation Theory Invariance Generally: The finer the decomposition (i.e. the bigger n is), the more rotation-invariant information is captured by the mapping Ψ(v) = ( ||v_1||, …, ||v_n|| ). Thus, the best case is when each of the V_i is an irreducible representation.

18 Representation Theory Invariance Why is the mapping Ψ invariant? If v = v_1 + … + v_n is any vector in V, with v_i ∈ V_i, and g ∈ G, then we write out: ρ(g)v = w_1 + … + w_n where w_i ∈ V_i, and we get: Ψ(ρ(g)v) = ( ||w_1||, …, ||w_n|| ).

19 Representation Theory Invariance Why is the mapping Ψ invariant? We can also write out: ρ(g)v = ρ(g)v_1 + … + ρ(g)v_n. Since the V_i are sub-representations, we know that ρ(g)v_i ∈ V_i, giving two different expressions for ρ(g)v as the sum of vectors in the V_i: ρ(g)v = w_1 + … + w_n and ρ(g)v = ρ(g)v_1 + … + ρ(g)v_n.

20 Representation Theory Invariance Why is the mapping Ψ invariant? However, since V is the direct sum of the V_i: V = V_1 ⊕ … ⊕ V_n, we know that any such decomposition is unique, and hence we must have w_i = ρ(g)v_i, and consequently: Ψ(ρ(g)v) = ( ||ρ(g)v_1||, …, ||ρ(g)v_n|| ) = ( ||v_1||, …, ||v_n|| ) = Ψ(v).

21 Outline: Review, Invariance, Schur's Lemma, Fourier Decomposition

22 Representation Theory Schur's Lemma Preliminaries: – If A is a linear map A: V → V, then the kernel of A is the subspace W = {w ∈ V : A(w) = 0} ⊆ V.

23 Representation Theory Schur's Lemma Preliminaries: – If A is a linear map, the characteristic polynomial of A is the polynomial P_A(λ) = det(A − λ·Id). – The roots of the characteristic polynomial, the values of λ for which P_A(λ) = 0, are the eigenvalues of A. – If V is a complex vector space and A: V → V is a linear transformation, then A always has at least one eigenvalue. (Because ℂ is algebraically closed.)
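
A quick numerical illustration (mine, not part of the lecture): a 2D rotation has no real eigenvectors, but over ℂ its characteristic polynomial always has roots, here e^{±iθ}.

```python
import numpy as np

theta = 0.9
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

coeffs = np.poly(A)          # coefficients of the characteristic polynomial of A
eigvals = np.roots(coeffs)   # its (complex) roots, the eigenvalues
print(eigvals)               # -> approximately exp(+0.9i) and exp(-0.9i)

# The same values come from a direct eigen-decomposition over the complex numbers.
vals, vecs = np.linalg.eig(A.astype(complex))
assert np.allclose(np.sort(vals), np.sort(eigvals))
```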

24 Representation Theory Schur's Lemma Lemma: If G is a commutative group and ρ is a representation of G onto a complex inner product space V, then if V has more than one complex dimension, it is not irreducible. So we can break V up into a direct sum of smaller, one-dimensional representations.

25 Representation Theory Schur's Lemma Proof: Suppose that V is an irreducible representation of more than one complex dimension… Let h ∈ G be any element of the group. Then, since G is commutative, for every g ∈ G and every v ∈ V we know that: ρ(g)ρ(h)(v) = ρ(h)ρ(g)(v).

26 Representation Theory Schur's Lemma Proof: Since ρ(h) is a linear operator on a complex vector space, we know that it has a complex eigenvalue λ. Set A: V → V to be the linear operator A = ρ(h) − λI. Because G is commutative and scalar multiples of the identity commute with every matrix, we have: ρ(g)A = Aρ(g) for all g ∈ G.

27 Representation Theory Schur's Lemma Proof: A = ρ(h) − λI. Set W ⊆ V to be the kernel of A. Since λ is an eigenvalue of ρ(h), we know that W ≠ {0}.

28 Representation Theory Schur's Lemma Proof: Then, since we know that ρ(g)A = Aρ(g), for any w ∈ W = Kernel(A) we have: A(ρ(g)w) = ρ(g)(Aw) = 0. Thus ρ(g)w ∈ W for all g ∈ G, and therefore we get a sub-representation of G on W.

29 Representation Theory Schur's Lemma Proof: Two cases: 1. Either W ≠ V, in which case W is a proper, nonzero sub-representation and we did not start with an irreducible representation. 2. Or W = V, in which case the kernel of A is all of V, which implies that A = 0 and hence ρ(h) = λI. Since this must be true for all h ∈ G, every h ∈ G acts on V by multiplication by a complex scalar. Then any one-dimensional subspace of V is a sub-representation, contradicting irreducibility when V has more than one dimension. ∎
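
To make this concrete (my own numerical sketch, not part of the proof): for the commutative group of 2D rotations, every rotation matrix shares the same two complex eigenvectors, so the complex lines they span are one-dimensional sub-representations on which each group element acts by a scalar.

```python
import numpy as np

def rho(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]], dtype=complex)

# The same two complex lines are preserved by every rotation.
u_plus  = np.array([1.0, -1.0j])   # eigenvector with eigenvalue exp(+i*theta)
u_minus = np.array([1.0,  1.0j])   # eigenvector with eigenvalue exp(-i*theta)

for theta in (0.4, 1.7, 3.0):
    assert np.allclose(rho(theta) @ u_plus,  np.exp(1j * theta) * u_plus)
    assert np.allclose(rho(theta) @ u_minus, np.exp(-1j * theta) * u_minus)
```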

30 Outline: Review, Invariance, Schur's Lemma, Fourier Decomposition

31 Algebra Review Fourier Decomposition If V is the space of functions defined on a circle and G is the group of rotations about the origin, then we have a representation of G onto V: if g is the rotation by θ_0 degrees, then ρ(g) sends the function f(θ) to the function f(θ − θ_0). [figure: the graph of f(θ) and of its rotated copy f(θ − θ_0).]

32 Algebra Review Fourier Decomposition Since the group of 2D rotations is commutative, by Schur's lemma we know that there exist one-dimensional sub-representations V_i ⊆ V such that V = V_1 ⊕ … ⊕ V_n ⊕ …

33 Algebra Review Fourier Decomposition Or, in other words, there exist orthogonal, complex-valued functions {w_1(θ), …, w_n(θ), …} such that for any rotation g ∈ G we have: ρ(g)w_i(θ) = λ_i(g)·w_i(θ) with λ_i(g) ∈ ℂ.

34 Representation Theory Fourier Decomposition The w_k are precisely the functions w_k(θ) = e^{ikθ}, and a rotation by θ_0 degrees acts on w_k(θ) by sending it to: w_k(θ − θ_0) = e^{ik(θ − θ_0)} = e^{−ikθ_0}·w_k(θ).
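
A short numerical check (mine) that rotating w_k(θ) = e^{ikθ} by θ_0 only multiplies it by the scalar e^{−ikθ_0}:

```python
import numpy as np

theta = np.linspace(0, 2 * np.pi, 256, endpoint=False)
theta0 = 0.8   # rotation angle

for k in range(-3, 4):
    w_k = np.exp(1j * k * theta)
    rotated = np.exp(1j * k * (theta - theta0))   # the rotated function w_k(theta - theta0)
    assert np.allclose(rotated, np.exp(-1j * k * theta0) * w_k)
```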

35 Representation Theory Fourier Decomposition If f(θ) is a function defined on a circle, we can express the function f in terms of its Fourier decomposition: f(θ) = Σ_k a_k·e^{ikθ}, with a_k ∈ ℂ.

36 Representation Theory Fourier Decomposition Invariance / Power Spectrum / Fourier Descriptors: If f(θ) is a function defined on a circle, expressed in terms of its Fourier decomposition f(θ) = Σ_k a_k·e^{ikθ}, then the collection of norms {|a_k|} is rotation invariant.
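
A minimal numpy sketch (my own) of this power-spectrum descriptor: circularly shifting the samples of f, a discrete rotation, leaves the Fourier amplitudes unchanged.

```python
import numpy as np

n = 128
theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
f = 1.0 + 2.0 * np.cos(theta) + 0.5 * np.sin(3 * theta)   # a function on the circle

def power_spectrum(samples):
    """Rotation-invariant descriptor: the magnitudes |a_k| of the Fourier coefficients."""
    return np.abs(np.fft.fft(samples)) / len(samples)

f_rotated = np.roll(f, 17)   # rotate by 17 sample steps
assert np.allclose(power_spectrum(f), power_spectrum(f_rotated))
```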

37–43 Fourier Descriptors [figure sequence: a circular function is decomposed into its cosine/sine terms and regrouped as a frequency decomposition, constant + 1st-order + 2nd-order + 3rd-order + … components; the amplitudes of these frequency components are invariant to rotation.]

44 Representation Theory Fourier Decomposition Correlation: If f(θ) and h(θ) are functions defined on a circle, expressed in terms of their Fourier decompositions f(θ) = Σ_k a_k·e^{ikθ} and h(θ) = Σ_k b_k·e^{ikθ},

45 Representation Theory Fourier Decomposition Correlation: then the correlation of f with h at a rotation θ is: Corr(θ, f, h) = ⟨f, ρ(θ)h⟩ = Σ_k a_k·b̄_k·e^{ikθ} (up to normalization). Correlation / convolution in the spatial domain is equivalent to pointwise multiplication in the frequency domain.

46 Representation Theory Fourier Decomposition Two (circular) n-dimensional arrays can be correlated by computing their Fourier decompositions, multiplying the frequency terms, and computing the inverse Fourier decomposition. – Computing the forward transforms: O(n log n) – Multiplying Fourier coefficients: O(n) – Computing the inverse transform: O(n log n) Total running time for correlation: O(n log n)
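
A sketch (my own, not from the slides) of this O(n log n) correlation for discrete circular signals; the peak of the correlation recovers the rotation relating two shifted copies.

```python
import numpy as np

def circular_correlation(f, h):
    """corr[m] = sum_x f[x] * h[(x + m) % n] for real signals, computed via the FFT in O(n log n)."""
    F = np.fft.fft(f)
    H = np.fft.fft(h)
    return np.real(np.fft.ifft(np.conj(F) * H))

n = 256
theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
f = np.cos(theta) + 0.3 * np.sin(4 * theta)
h = np.roll(f, 37)                # h is f rotated by 37 samples

corr = circular_correlation(f, h)
print(int(np.argmax(corr)))       # -> 37, the shift relating f and h
```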

47 Representation Theory How do we get the Fourier decomposition?

48 Representation Theory Fourier Decomposition Preliminaries: If f is a function defined in 2D, we can get a function on the unit circle by looking at the restriction of f to points with norm 1.

49 Representation Theory Fourier Decomposition Preliminaries: A polynomial p(x, y) is homogeneous of degree d if it is a sum of monomials of degree d: p(x, y) = a_d·x^d + a_{d−1}·x^{d−1}y + … + a_1·xy^{d−1} + a_0·y^d.

50 Representation Theory Fourier Decomposition Preliminaries: If we let P_d(x, y) be the set of homogeneous polynomials of degree d, then P_d(x, y) is a vector space of dimension d + 1, spanned by the monomials x^d, x^{d−1}y, …, xy^{d−1}, y^d.

51 Representation Theory Fourier Decomposition Observation: If M is any 2x2 matrix and p(x, y) is a homogeneous polynomial of degree d, then p(M(x, y)) is also a homogeneous polynomial of degree d.

52 Representation Theory Fourier Decomposition If V is the space of functions on a circle, we can set V_d ⊆ V to be the space of functions on the circle that are restrictions of homogeneous polynomials of degree d. Since a rotation maps a homogeneous polynomial of degree d back to a homogeneous polynomial of degree d, the spaces V_d are sub-representations.

53 Representation Theory Fourier Decomposition In general, the space of homogeneous polynomials of degree d has dimension d + 1. But we know that the irreducible representations are one-(complex)-dimensional!

54 Representation Theory Fourier Decomposition If (x, y) is a point on the circle, we know that this point satisfies x² + y² = 1. Thus, if q(x, y) ∈ P_d(x, y), then even though the polynomial (x² + y²)·q(x, y) is, in general, a homogeneous polynomial of degree d + 2, its restriction to the circle agrees with q, a homogeneous polynomial of degree d.
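
A tiny numerical check (mine) of this fact: multiplying a degree-2 polynomial by (x² + y²) changes nothing on the unit circle.

```python
import numpy as np

theta = np.linspace(0, 2 * np.pi, 32, endpoint=False)
x, y = np.cos(theta), np.sin(theta)       # points on the unit circle

q = 3 * x**2 - x * y + 2 * y**2           # a homogeneous polynomial of degree 2, sampled on the circle
lifted = (x**2 + y**2) * q                # degree 4 as a polynomial, but equal to q on the circle
assert np.allclose(lifted, q)
```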

55 Representation Theory Fourier Decomposition Thus, the restrictions of homogeneous polynomials to the unit circle are nested, V_{d−2} ⊆ V_d, so the new part contributed at degree d is actually only two-dimensional.

56 Representation Theory Fourier Decomposition Using the fact that any point (x, y) on the circle can be expressed as (x, y) = (cos θ, sin θ) for some angle θ, we can write out a basis for each of the V_d: V_0 = span{1}, V_1 = span{cos θ, sin θ} = span{e^{iθ}, e^{−iθ}}, and in general V_d is spanned by the functions e^{ikθ} with |k| ≤ d and k ≡ d (mod 2).
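
A small numerical check (my own sketch) that restricting degree-d monomials to the circle produces only frequencies k with |k| ≤ d and k ≡ d (mod 2), so V_d is spanned by those exponentials:

```python
import numpy as np

n = 64
theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
x, y = np.cos(theta), np.sin(theta)

deg = 5
freqs = np.fft.fftfreq(n, d=1.0 / n).astype(int)      # integer frequencies 0..n/2-1, -n/2..-1
for a in range(deg + 1):                               # every monomial x^a * y^(deg-a) of degree deg
    coeffs = np.fft.fft(x**a * y**(deg - a)) / n
    active = freqs[np.abs(coeffs) > 1e-9]              # frequencies actually present
    assert all(abs(k) <= deg and (k - deg) % 2 == 0 for k in active)
```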

