1
Introduction to Transforms
Rutgers University Discrete Mathematics for ECE 14:332:202
2
The Basic Idea
We have a signal, which could be a function or a vector (a sampled function). Given a set of basis functions/vectors, we wish to find the coefficients that weight those functions so that, when summed, they add up to our original signal. The signal has now been transformed into this set of coefficients.
3
Fourier Series In the Fourier Series, we transform a continuous (has a value for every possible t) periodic signal into a weighted sum of sines and cosines. The more terms that are added to the sum, the better our approximation to the original signal. In a sense, we are “discretizing” the continuous function by representing it as a finite set of coefficients (an infinite set is really needed for an exact representation).
4
Square Wave Example Consider the periodic square wave that oscillates between +1 and -1 with period T.
5
Fourier Series We can make an approximation to the square wave by a sine wave of period T. fa(t) = b1 sin(2πt/T)
6
Fourier Series Adding another term gives us a better approximation, that term being a sine with period T/3. fa(t) = b1 sin(2πt/T) + b3 sin(3*2πt/T)
7
Fourier Series Continuing to add terms, our representation becomes better. Here we have all terms up to the 21st harmonic. fa(t) = b1 sin(2πt/T) + b3 sin(3*2πt/T) + … + b21 sin(21*2πt/T)
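To make this concrete, here is a small Python/NumPy sketch (my own addition, not from the slides) that builds these partial sums, assuming the standard square-wave coefficients bn = 4/(nπ) for odd n:

```python
import numpy as np

def square_wave_partial_sum(t, T=1.0, n_max=21):
    """Sum the odd sine harmonics of a +/-1 square wave of period T,
    up to the n_max-th harmonic, using the standard coefficients
    b_n = 4 / (n * pi) for odd n (assumption, not quoted from the slides)."""
    f = np.zeros_like(t, dtype=float)
    for n in range(1, n_max + 1, 2):           # odd harmonics only
        b_n = 4.0 / (n * np.pi)
        f += b_n * np.sin(n * 2.0 * np.pi * t / T)
    return f

t = np.linspace(0.0, 2.0, 1000)                # two periods of the wave
f21 = square_wave_partial_sum(t, n_max=21)     # approximation up to the 21st harmonic
```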
8
Fourier Series With enough terms calculated to adequately represent our signal, we then have a “discrete representation” of our periodic function. The original signal can now be represented by the set of coefficients (the bn’s) that multiply our basis functions to give us the approximation. Finding these coefficients is known as the Forward Transform, and recovering our signal from the coefficients and our known set of basis functions is known as the Inverse Transform.
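As an illustration of the forward transform for the Fourier series (a sketch, not part of the slides), the coefficient bn can be estimated numerically as the inner product of the signal with the corresponding sine over one period, using the standard formula bn = (2/T) ∫ f(t) sin(2πnt/T) dt:

```python
import numpy as np

def sine_coefficient(f, n, T=1.0, num_samples=100000):
    """Estimate b_n = (2/T) * integral over one period of f(t)*sin(2*pi*n*t/T) dt
    with a simple Riemann sum (the forward transform for the sine terms)."""
    t = np.linspace(0.0, T, num_samples, endpoint=False)
    dt = T / num_samples
    return (2.0 / T) * np.sum(f(t) * np.sin(2.0 * np.pi * n * t / T)) * dt

# Square wave of period 1: the estimate should be close to 4/(n*pi) for odd n.
square = lambda t: np.sign(np.sin(2.0 * np.pi * t))
for n in (1, 3, 5):
    print(n, sine_coefficient(square, n), 4.0 / (n * np.pi))
```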
9
Sampled Signals (Vectors)
We will learn the concept of transforms by working with sampled signals, with the N samples placed in a row vector: x = [x0 x1 x2 … xN-1]. We must also have a set of basis vectors, b0, …, bN-1, each of length N, which form the rows of a matrix B.
10
Vector Transforms Consider the trivial example where we have 4 basis vectors, the unit vectors: e0 = [1 0 0 0], e1 = [0 1 0 0], e2 = [0 0 1 0], e3 = [0 0 0 1]. Obviously, we can form any 4-element vector by a weighted sum of the unit vectors, which is easy to do: x = [x0 x1 x2 x3] = x0e0 + x1e1 + x2e2 + x3e3
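A quick NumPy check of this trivial case (illustrative only, not from the slides):

```python
import numpy as np

x = np.array([3.0, -1.0, 4.0, 2.0])   # an arbitrary 4-element signal
E = np.eye(4)                          # rows of E are the unit vectors e0..e3
u = x                                  # with this basis the coefficients are the samples themselves
assert np.allclose(u @ E, x)           # the weighted sum x0*e0 + ... + x3*e3 rebuilds x
```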
11
Vector Transforms The set of unit vectors is a trivial basis, so we may be interested in some other set of basis vectors. We can represent a signal by a weighted sum of a (well-chosen) set of basis vectors of the same size: x = [x0 x1 x2 … xN-1] = u0b0 + u1b1 + u2b2 + … + uN-1bN-1. The vector x has now been transformed into the vector of coefficients, u. This is also known as “changing the basis”.
12
Vector Transforms The equation:
x = [x0 x1 x2 … xN-1] = u0b0 + u1b1 + u2b2 + … + uN-1bN-1 can be compactly written in matrix notation as x = uB, where x and u are row vectors and B is an NxN matrix whose rows are the basis vectors b0 … bN-1.
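In NumPy, the change of basis and its inverse look like the sketch below; the 4x4 matrix B here is just a hypothetical invertible basis chosen for illustration:

```python
import numpy as np

# Hypothetical invertible basis: its rows b0..b3 are the basis vectors.
B = np.array([[1.0, 1.0, 1.0, 1.0],
              [0.0, 1.0, 1.0, 1.0],
              [0.0, 0.0, 1.0, 1.0],
              [0.0, 0.0, 0.0, 1.0]])

x = np.array([3.0, -1.0, 4.0, 2.0])    # signal as a row vector

u = x @ np.linalg.inv(B)                # coefficients: u = x B^-1
assert np.allclose(u @ B, x)            # x = u B, i.e. the weighted sum of the rows of B
```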
13
The Forward Transform We wish to find the vector of coefficients u = xB-1. Computing the inverse of the matrix B is not necessary if all of the bk’s forming the rows of B are orthogonal, that is, for all k≠j, bjbkT = dot(bj, bk) = 0. Also, |bk| = bkbkT is the squared length of bk, and in the case of the Hadamard and FFT matrices, |bk| = N. Because of the orthogonality property, we can compute the elements of u by the equation: uk = (x·bkT) / |bk|
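This projection formula can be written directly in NumPy; the sketch below assumes, as on this slide, that the rows of B are mutually orthogonal:

```python
import numpy as np

def forward_transform(x, B):
    """u_k = (x . b_k^T) / |b_k|, where |b_k| = b_k b_k^T is the squared
    length of row k.  Valid when the rows of B are mutually orthogonal,
    so no matrix inverse is needed."""
    norms = np.sum(B * B, axis=1)        # |b_k| for each row
    return (x @ B.T) / norms

def inverse_transform(u, B):
    """Reconstruct the signal as the weighted sum x = u B."""
    return u @ B

# Check with an orthogonal +/-1 basis of order 4 (a small Hadamard-type matrix).
B = np.array([[1.0,  1.0,  1.0,  1.0],
              [1.0, -1.0,  1.0, -1.0],
              [1.0,  1.0, -1.0, -1.0],
              [1.0, -1.0, -1.0,  1.0]])
x = np.array([3.0, -1.0, 4.0, 2.0])
u = forward_transform(x, B)
assert np.allclose(inverse_transform(u, B), x)
```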
14
Orthogonality That two vectors are orthogonal can be interpreted geometrically as being perpendicular in N-dimensional space, such that their dot product is 0.
15
The Hadamard Basis Vectors
The rows of the Hadamard matrix form N orthogonal vectors of +1’s and -1’s.
(Figure: Hadamard basis vectors of order 8.)
We can represent any 8-element vector by a weighted sum of these 8 orthogonal vectors.
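One way to build the order-8 Hadamard matrix is the Sylvester (doubling) construction, sketched below; the construction itself is not spelled out on the slide:

```python
import numpy as np

def hadamard(n):
    """Order-n Hadamard matrix via the Sylvester doubling construction
    (n must be a power of 2); rows are mutually orthogonal +/-1 vectors."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

H8 = hadamard(8)
assert np.allclose(H8 @ H8.T, 8.0 * np.eye(8))   # orthogonal rows, |b_k| = N = 8

x = np.arange(8, dtype=float)        # any 8-element signal
u = (x @ H8.T) / 8.0                 # forward transform: u_k = (x . b_k^T) / N
assert np.allclose(u @ H8, x)        # inverse transform: weighted sum of the rows
```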
16
The DCT Basis Vectors Another possible basis – these vectors represent sampled cosines of different frequencies
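The slide does not pin down a specific DCT variant; the sketch below uses the common (unnormalized) DCT-II sampling of cosines and checks that the resulting rows are orthogonal, so the same projection formula applies:

```python
import numpy as np

def dct_basis(N):
    """Rows b_k[n] = cos(pi * k * (2n + 1) / (2N)): sampled cosines of
    increasing frequency (unnormalized DCT-II convention; an assumption,
    since the slide does not specify a variant)."""
    n = np.arange(N)
    k = np.arange(N).reshape(-1, 1)
    return np.cos(np.pi * k * (2 * n + 1) / (2 * N))

B = dct_basis(8)
norms = np.sum(B * B, axis=1)
assert np.allclose(B @ B.T, np.diag(norms))   # rows are mutually orthogonal

x = np.arange(8, dtype=float)
u = (x @ B.T) / norms                 # forward transform
assert np.allclose(u @ B, x)          # reconstruction from the DCT coefficients
```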