CT-321 Digital Signal Processing


CT-321 Digital Signal Processing
Yash Vasavada
Autumn 2016, DA-IICT
Lecture 6: LSI Systems and DTFT
16th August 2016

Review and Preview
Review of past lecture: different ways of understanding the convolution operation.
Preview of this lecture: the convolution operation, its relation to the DTFT, and properties of the DTFT and of LSI systems.
Reading assignment: OS, 3rd Edition, Sections 1.2 to 1.4 and Sections 1.6 to 1.9 (note: available in the library); PM, Sections 2.1 to 2.3.

Convolution Operation
The output of an LSI system can be written as a function of its input $x(n)$ and impulse response $h(n)$:
$y(n) = \sum_{k=-\infty}^{\infty} x(k)\, h(n-k)$
This formula represents the convolution operation between the two D-T sequences $x(n)$ and $h(n)$. There are several ways to understand the convolution operation:
- Use the principles of superposition and homogeneity (prior lecture)
- Flip the impulse response and slide it over the input $x(n)$: demo on the next several slides, from http://users.ece.gatech.edu/mcclella/matlabGUIs/ (prior lecture)
- Analogy with the multiplication of two polynomials (today's lecture)
- Linear algebra and the vector dot product (today's lecture)
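As a quick aside (not part of the original slide), the convolution sum can be checked numerically. The sketch below evaluates $y(n) = \sum_k x(k)\,h(n-k)$ directly for finite-length sequences and compares it against NumPy's `np.convolve`; the course demos use MATLAB, so NumPy and the example sequences here are only assumed for illustration.

```python
import numpy as np

def convolve_direct(x, h):
    """Directly evaluate y(n) = sum_k x(k) h(n-k) for finite-length sequences."""
    N, M = len(x), len(h)
    y = np.zeros(N + M - 1)
    for n in range(N + M - 1):
        for k in range(N):
            if 0 <= n - k < M:          # h(n-k) is zero outside its support
                y[n] += x[k] * h[n - k]
    return y

x = np.array([1.0, 2.0, 3.0, 4.0])      # arbitrary example input x(n)
h = np.array([1.0, -1.0, 0.5])          # arbitrary example impulse response h(n)

print(convolve_direct(x, h))
print(np.convolve(x, h))                # should match the direct sum
```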

Polynomial Multiplication
Consider two polynomials:
$p(x) = p_0 + p_1 x + p_2 x^2 + \dots + p_N x^N$
$q(x) = q_0 + q_1 x + q_2 x^2 + \dots + q_M x^M$
The product of these two polynomials is given as
$r(x) = p(x)\, q(x) = r_0 + r_1 x + r_2 x^2 + \dots + r_{N+M} x^{N+M}$
The coefficients of this product polynomial can be calculated by convolution of the coefficients of $p(x)$ and $q(x)$. Define the sequences $p(n) = p_n$, $q(n) = q_n$ and $r(n) = r_n$; then
$r(n) = p(n) * q(n)$
This is an indication of a proof of why convolution in one domain (say, the time domain) is equivalent to multiplication in the other (frequency) domain:
- Evaluate $p(x)$ and $q(x)$ at $x = u$ (equivalent to taking the DTFT at frequency $f$) and take the product to get $r(u)$, or
- Convolve $p(n)$ and $q(n)$ (equivalent to passing an input $p(n)$ through an LSI system with impulse response $q(n)$) to get $r(n)$ and use that to evaluate $r(x = u)$
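A minimal numerical illustration of this equivalence (added here, with arbitrary example coefficients): convolving the coefficient sequences of $p(x)$ and $q(x)$ gives the coefficients of $r(x)$, and evaluating at an arbitrary point $u$ confirms that $r(u) = p(u)\,q(u)$.

```python
import numpy as np

# Coefficients in ascending powers: p(x) = 1 + 2x + 3x^2, q(x) = 4 + 5x (arbitrary examples)
p = np.array([1.0, 2.0, 3.0])
q = np.array([4.0, 5.0])

# Coefficients of r(x) = p(x) q(x) are the convolution of the coefficient sequences
r = np.convolve(p, q)
print(r)                                  # [ 4. 13. 22. 15.]  ->  4 + 13x + 22x^2 + 15x^3

# Cross-check: at an arbitrary point u, r(u) should equal p(u) * q(u)
u = 1.7
def poly_eval(c, x):
    """Evaluate a polynomial given its coefficients in ascending powers."""
    return sum(ck * x**k for k, ck in enumerate(c))

print(poly_eval(r, u), poly_eval(p, u) * poly_eval(q, u))
```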

A Brief Detour of Linear Algebra and Dot Products
Consider a point $A$ on a unit circle in the 2D plane. This can be represented by a vector $\mathbf{A} = [A_1, A_2]^T$.
The convention is to highlight (bold-face) the notation of a vector, which is usually a column of numbers. The notation $T$ stands for transpose; it is used to convert a row vector into a column vector and vice versa.
Suppose this vector $\mathbf{A}$ makes an angle of $\theta_A$ with respect to the $X$ axis. Therefore $A_1 = \cos\theta_A$ and $A_2 = \cos(90^\circ - \theta_A) = \sin\theta_A$.
We consider another point $B$ and the corresponding vector $\mathbf{B} = [\cos\theta_B, \sin\theta_B]^T$.
What is the value of $A_1 B_1 + A_2 B_2$? Answer: $\cos(\theta_A - \theta_B)$, the cosine of the angle between the two vectors.
(Figure: the unit-length circle in the 2D plane, showing vectors $\mathbf{A}$ and $\mathbf{B}$ at angles $\theta_A$ and $\theta_B$ from the $x$ axis.)
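A small numerical check of this result (added here, not part of the slide; the angles are arbitrary example values):

```python
import numpy as np

theta_A, theta_B = np.deg2rad(70.0), np.deg2rad(25.0)    # arbitrary example angles

A = np.array([np.cos(theta_A), np.sin(theta_A)])          # unit vector at angle theta_A
B = np.array([np.cos(theta_B), np.sin(theta_B)])          # unit vector at angle theta_B

print(A @ B)                           # A1*B1 + A2*B2
print(np.cos(theta_A - theta_B))       # equals cos(theta_A - theta_B)
```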

A Brief Detour of Linear Algebra and Dot Products
The dot product of two $n \times 1$ (real-valued) vectors $\mathbf{A} = [A_1, A_2, \dots, A_n]^T$ and $\mathbf{B} = [B_1, B_2, \dots, B_n]^T$ is defined as
$\mathbf{A} \cdot \mathbf{B} = \sum_{i=1}^{n} A_i B_i = \|\mathbf{A}\|\, \|\mathbf{B}\| \cos\theta$
Here, $\|\mathbf{A}\|$ denotes the norm of vector $\mathbf{A}$, which is effectively its length. This is a generalization of the prior slide, for which $n = 2$.
Suppose vectors $\mathbf{A}$ and $\mathbf{B}$ are both unit-length vectors. In this case, the dot product $\mathbf{A} \cdot \mathbf{B}$ is simply $\cos\theta$, i.e., the cosine of the angle between the two vectors. This has a nice visualization: when the unit-norm vectors $\mathbf{A}$ and $\mathbf{B}$ are in good alignment, the angle $\theta$ between them is near zero and the dot product approaches its maximum value of unity.
When does the dot product between vectors $\mathbf{A}$ and $\mathbf{B}$ become zero?
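The following sketch (with arbitrary example vectors, added for illustration) computes the dot product and the cosine of the angle for a pair of vectors in $\mathbb{R}^n$, and shows the answer to the question above: orthogonal vectors have a dot product of zero.

```python
import numpy as np

# Two arbitrary real-valued vectors in R^n (n = 4 here)
A = np.array([1.0, 2.0, 0.5, -1.0])
B = np.array([0.5, 1.0, -2.0, 3.0])

dot = A @ B                                          # sum_i A_i B_i
cos_theta = dot / (np.linalg.norm(A) * np.linalg.norm(B))
print(dot, cos_theta)                                # dot product and cosine of the angle

# For orthogonal vectors the dot product is zero
print(np.array([1.0, 0.0]) @ np.array([0.0, 1.0]))   # -> 0.0
```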

Convolution Operation as Sequential Dot Products
Output $\mathbf{y}$ = (system $\mathbb{T}$, or matrix $\mathbf{H}$) $\times$ input $\mathbf{x}$
Define $\mathbf{h} = [h_M, h_{M-1}, \dots, h_1, h_0]^T$ as the system vector (the flipped impulse response), and $\mathbf{x}_n = [x_{n-M}, x_{n-M+1}, \dots, x_{n-1}, x_n]^T$ as the vector of input signal samples from sample $n-M$ to sample $n$.
With this, the $n$th output sample is $y_n = \mathbf{h}^T \mathbf{x}_n$, i.e., it is the dot product of the system vector $\mathbf{h}$ with $\mathbf{x}_n$.
$y_n$ will have a high value if $\mathbf{x}_n$ is well aligned with $\mathbf{h}$, and a low value otherwise.
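A brief sketch of this sequential dot-product view (added here; the input and impulse response are arbitrary examples): each output sample is the dot product of the flipped impulse response with the current window of input samples, and the sequence of these dot products matches `np.convolve`.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])    # example input samples x(n)
h = np.array([0.5, 1.0, -1.0])             # example impulse response h(0), h(1), h(2)
M = len(h) - 1

h_flipped = h[::-1]                                     # system vector [h(M), ..., h(0)]
x_padded = np.concatenate([np.zeros(M), x, np.zeros(M)])  # zero-pad so every window exists

# Each output sample y(n) is the dot product of the flipped impulse response
# with the window of input samples from n-M to n
y = np.array([h_flipped @ x_padded[n:n + M + 1] for n in range(len(x) + M)])

print(y)
print(np.convolve(x, h))                   # matches the sliding dot products
```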

An Aside: Why the Word “Linear”?
Any system $\mathbb{T}$ that satisfies the properties of superposition and scaling by a constant is called a linear system. The reason for the word “linear” is that such systems allow a matrix viewpoint: the output $y(n)$ of such a system can be represented as a vector $\mathbf{y} = \mathbf{H}\mathbf{x}$.
As a course in elementary linear algebra shows, it is only lines and their generalizations (planes, etc.) that can also be described in such a way. A typical equation of a line is $y = mx + c$, where $m$ is the slope and $c$ is the intercept on the $y$ axis. This can be written as a matrix product $\mathbf{y} = \mathbf{C}\mathbf{b}$, where $\mathbf{y} = [y_1, y_2, \dots, y_M]^T$ is the vector of the Y-coordinates of $M$ points along this line,
$\mathbf{C} = \begin{bmatrix} x_1 & 1 \\ \vdots & \vdots \\ x_M & 1 \end{bmatrix}$, and $\mathbf{b} = \begin{bmatrix} m \\ c \end{bmatrix}$
Nonlinear functions such as powers, logarithms, exponentials, etc. have a non-constant slope, and they do not have such a matrix product representation.
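To make the matrix viewpoint concrete, the sketch below (an added illustration with arbitrary example values) builds the convolution matrix $\mathbf{H}$ of an LSI system from its impulse response and verifies that $\mathbf{y} = \mathbf{H}\mathbf{x}$ reproduces `np.convolve`.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])         # input vector (arbitrary example)
h = np.array([1.0, -0.5, 0.25])            # impulse response (arbitrary example)
N, M = len(x), len(h)

# Build the (N+M-1) x N convolution matrix H: column k holds h shifted down by k rows,
# so that (H x)[n] = sum_k h(n-k) x(k)
H = np.zeros((N + M - 1, N))
for k in range(N):
    H[k:k + M, k] = h

y = H @ x                                  # the LSI system as a matrix-vector product
print(y)
print(np.convolve(x, h))                   # same result
```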