Mathematical Descriptions of Systems


Chapter 1 Mathematical Descriptions of Systems

Control System Design Steps. The design of a controller that can modify the behavior and response of a plant to meet certain performance requirements can be a tedious and challenging problem in many control applications. [Figure: a plant block mapping inputs u to outputs y, with a step response annotated with its settling time.] The control design task is to choose the input u(t) so that the output response y(t) satisfies the given performance requirements.

Most control engineers follow the design steps below. Step 1 (Modeling): The task of the control engineer in this step is to understand the processing mechanism of the plant, which takes a given input signal u(t) and produces the output response y(t), well enough to describe it in the form of mathematical equations. [Figure: example input and output signals plotted against time t.]

Though the controlled physical system may be very complex, the response above shows that the system can be described approximately by a first-order model. In many cases, however, even the model obtained may still be too complicated from the control design viewpoint. Two approaches often used to obtain a simplified model are (1) linearization around operating points and (2) model-order reduction techniques.

Step 2 (System analysis): Once we obtain a mathematical model, system analysis can be carried out, including controllability and observability analysis, stability analysis, and other performance analysis. For example, given a mathematical description of the system, such analysis tells us the system performance.

Step 3 (Controller design): If the analysis shows that the system performance does not meet the requirements, a controller is needed. For example, for the following system, [block diagram: controller and plant model in a loop, with reference R(s) = 1/s and output C(s)] a proportional controller can be designed to improve the steady-state performance.

A PD controller can be used to make the system stable: [block diagram: plant model with reference R(s) = 1/s, output C(s), and control u = up + ud].

Step 4 (Implementation): In this step, the controller designed in Step 3, which has been shown to meet the performance requirements for the plant model, is applied to the plant. The implementation can be done using a digital computer.

What does linear system theory study? Linear system theory studies physical systems that can be expressed by linear mathematical models. For example, consider the following R-C network, whose mathematical model is a linear differential equation:
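As a sketch of what such a linear model buys us, a first-order R-C equation can be simulated directly. The component values, the step input, and the step size below are illustrative assumptions, not taken from the slides:

```python
import numpy as np

# Hypothetical R-C network: RC * dy/dt + y = u, a first-order linear ODE.
R, C = 1.0e3, 1.0e-3          # 1 kOhm, 1 mF -> time constant RC = 1 s
tau = R * C

# Simulate the unit-step response with forward Euler.
dt = 1e-4
t = np.arange(0.0, 5.0, dt)
u = np.ones_like(t)           # unit-step input
y = np.zeros_like(t)          # relaxed: zero initial condition
for k in range(len(t) - 1):
    # dy/dt = (u - y) / tau
    y[k + 1] = y[k] + dt * (u[k] - y[k]) / tau

# Compare against the closed-form solution of the linear ODE.
y_exact = 1.0 - np.exp(-t / tau)
print(np.max(np.abs(y - y_exact)))   # small discretization error
```

The Euler trace tracks the closed-form solution 1 − e^{−t/RC}, confirming that the network's behavior is fully captured by one linear differential equation.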

How many descriptions are there for linear systems? There are two descriptions of linear control systems: the input-output description and the state-space description. The input-output description gives a mathematical relation between the input and output of the system, for example, a transfer function. The state-space description describes both the input-output relation and the behavior of the internal states of a system. Therefore, with the state-space description,

one can obtain full information about a system, which makes it possible to design a high-performance system. However, in many cases it is difficult to obtain the state-space description. The two descriptions both have advantages and disadvantages; which one should be used depends on the problem, on the data available, and on the question asked.

§1-1 The Input-Output Description 1. Why is the I/O description necessary? For the input-output description, knowledge of the internal structure of a system is assumed to be unavailable; the only access to the system is through its input and output. Under this assumption, a system may be considered as a "black box." [Figure: a black box with input u and output y.] Clearly, what we can do to a black box is to apply all

kinds of inputs and measure the corresponding outputs, and then try to abstract key properties of the system from these input-output pairs. A system may have more than one input terminal or more than one output terminal. Definition: A system is said to be a single-variable system if and only if it has only one input and one output.

Correspondingly, a system is said to be a multivariable system if it has more than one input terminal or more than one output terminal. [Figure: an example multivariable feedback network with two inputs, two outputs, and gains K1 and K2.]

2. Relaxedness. Question: What conditions should be satisfied so that the output is excited solely and uniquely by the input? This question is important because if, for a system described in input-output terms, the same input corresponds to more than one output, then the input-output description is of no use for determining the key properties of the system.

Example. Consider the following second-order system, whose initial conditions are given. Assume that only the input and output signals are available. If the initial conditions are not available, the output cannot be determined solely and uniquely by the input u, since the output is excited by both the input signal and the initial conditions; hence, it is impossible to determine which part of the output is excited by the input.

In classical control theory, we always assume that all the initial conditions of a given system are zero, so that the output is excited by the input solely and uniquely. If the concept of energy is applicable to a system, the system is said to be relaxed at time t1 if no energy is stored in the system at that instant; the initial conditions therefore represent the energy the system has stored from time −∞ up to time 0.

As in the engineering literature, we shall assume that every system is relaxed at −∞. Consequently, if an input u(−∞, +∞) is applied at t = −∞, the corresponding output will be excited solely and uniquely by u. Hence, under the relaxedness assumption, it is legitimate to write y = Hu. This assumption is, in fact, taken as an axiom.

Definition: A system is said to be initially relaxed if it is relaxed at time −∞. For a relaxed system, we have y = Hu (1-1), where H is some operator that specifies uniquely the output y in terms of the input u of the system. Equation (1-1) can also be expressed in the form (1-2). Generally, u(t1, t2) represents the segment of u defined on the time interval (t1, t2).

The operator H: Note that a transfer function is a typical linear operator: it maps the input signal into the output signal. In the time domain, the operator H is a convolution:

where h(t) is the impulse response of the system.
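As a numerical sketch of applying H, the convolution integral can be approximated by a Riemann sum. The impulse response h(t) = e^{−t} and the step input below are assumptions chosen for illustration:

```python
import numpy as np

# Discretize y(t) = integral of h(t - tau) u(tau) dtau as a Riemann sum.
dt = 1e-3
t = np.arange(0.0, 10.0, dt)
h = np.exp(-t)                        # assumed causal impulse response
u = np.ones_like(t)                   # unit-step input

# The discrete convolution, scaled by dt, approximates the integral.
y = np.convolve(h, u)[: len(t)] * dt

# For h(t) = e^{-t} and a unit step, the exact output is 1 - e^{-t}.
y_exact = 1.0 - np.exp(-t)
print(np.max(np.abs(y - y_exact)))    # discretization error of order dt
```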

3. Linearity. Definition: A relaxed system is said to be linear if and only if H(α1 u1 + α2 u2) = α1 H u1 + α2 H u2 (1-3) for any inputs u1 and u2 and any real (or complex) numbers α1 and α2. Otherwise, the relaxed system is said to be nonlinear. In the engineering literature, condition (1-3) is often written as two parts — additivity: H(u1 + u2) = H u1 + H u2, and homogeneity: H(α u) = α H u — together called the principle of superposition.
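The superposition condition (1-3) can be checked numerically. Here H is a hypothetical relaxed LTI operator realized as a discrete convolution, and the squaring map N is a stand-in nonlinear system; none of this is taken from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)

def H(u, h=np.exp(-np.arange(100) * 0.1)):
    """A relaxed linear operator: convolution with a fixed impulse response."""
    return np.convolve(h, u)[: len(u)]

u1, u2 = rng.standard_normal(100), rng.standard_normal(100)
a1, a2 = 2.0, -0.5

# Superposition: H(a1*u1 + a2*u2) == a1*H(u1) + a2*H(u2)
lhs = H(a1 * u1 + a2 * u2)
rhs = a1 * H(u1) + a2 * H(u2)
print(np.allclose(lhs, rhs))                # True: convolution is linear

# A squaring system violates homogeneity, hence it is nonlinear:
N = lambda u: u ** 2
print(np.allclose(N(a1 * u1), a1 * N(u1)))  # False
```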

Example. Consider a system described by the following differential equation: [equation]. This is a linear system; indeed, it is easy to verify that condition (1-3) holds.

Example. Consider the following system, whose input and output are related as shown. It is easy to verify that the input-output pair satisfies the property of homogeneity but not the property of additivity; therefore, the system is not a linear system.

Impulse response function of a relaxed system. We need the concept of the δ function, or impulse function, which can be derived by introducing a pulse function δΔ(t − t1) of width Δ and height 1/Δ starting at t1. [Figure: a rectangular pulse of height 1/Δ on the interval [t1, t1+Δ).] As Δ approaches zero, the limiting "function" is called the δ-function:

-function: -function has the properties that for any positive  and that

for any function f that is continuous at t1. Every piecewise continuous input can be approximated by a series of pulse functions, since every pulse of width Δ starting at tn can be written as u(tn) Δ · δΔ(t − tn). [Figure: an input u(t) approximated by rectangular pulses of width Δ and heights u(tn).]

Impulse response function. (1-7): y = Hu ≈ Σn H[δΔ(t − tn)] u(tn) Δ. Let tn = nΔ. Letting Δ → 0, the summation becomes an integration and the pulse function δΔ(t − tn) tends to a δ-function. Consequently, as Δ → 0, (1-7) becomes (1-8): y(t) = ∫ H[δ(t − τ)] u(τ) dτ.

The physical meaning of Hδ(t − τ) is that it is the output of a relaxed system due to an impulse function applied at time τ. Define g(·, τ) = Hδ(t − τ), where the first variable denotes the time at which the output is observed. For convenience, we write g(t, τ) = Hδ(t − τ). Hence, (1-10): y(t) = ∫ g(t, τ) u(τ) dτ.

Impulse-response matrix: If a system has p input terminals and q output terminals, and if the system is initially relaxed, the input-output description (1-10) can be extended to y(t) = ∫ G(t, τ) u(τ) dτ, where G(t, τ) is the q×p matrix [gij(t, τ)],

and gij(t, ) is the response at time t at the ith output terminal due to an impulse function applied at time  at the jth input terminal, the inputs at other terminals being identically zero.

4. Causality. Definition: A system is said to be causal if the output of the system at time t does not depend on the input applied after time t; it depends only on the input applied before and at time t. In short, the past affects the future, but not conversely. If a relaxed system is causal, the output is identically zero before any input is applied. Hence, a linear system is causal if and only if g(t, τ) = 0 for all t < τ. A question: do non-causal systems exist?
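A discrete sketch of the causality condition g(t, τ) = 0 for τ > t: with a causal impulse response, changing the input after some time cannot change the output before that time. The impulse response and test signals below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
h = 0.9 ** np.arange(n)           # causal impulse response

def H(u):
    # y[t] = sum over tau <= t of h[t - tau] * u[tau]
    return np.convolve(h, u)[:n]

u = rng.standard_normal(n)
u_mod = u.copy()
u_mod[100:] = rng.standard_normal(n - 100)   # change the input from t = 100 on

y, y_mod = H(u), H(u_mod)
# Causality: the outputs agree strictly before t = 100, where the inputs
# first differ, and (generically) disagree afterward.
print(np.allclose(y[:100], y_mod[:100]))   # True
print(np.allclose(y[100:], y_mod[100:]))   # False
```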

Consequently, the input-output description of a linear, causal, relaxed system becomes y(t) = ∫ from −∞ to t of g(t, τ) u(τ) dτ (1-14). Example: We often use the truncation operator to define causality. The truncation operator is defined by (uα)(t) = u(t) for t ≤ α and (uα)(t) = 0 for t > α,

and can be illustrated by the following figure, which chops off the input after time α. [Figure: a timeline marking the past, the present t = α, and the future.] A system is said to be causal if the following equation holds: (Hu)α = (H uα)α. (*) Equation (*) means that the input on (α, +∞) does not affect the output on (−∞, α].

[Figure: two inputs that coincide up to time α produce outputs that coincide up to α.] The future input does not affect the past and present output.

5. Relaxedness at time t0. Definition: A system is said to be relaxed at time t0 if and only if the output y[t0, +∞) is solely and uniquely excited by u[t0, +∞). If a system is known to be relaxed at t0, then its input-output relation can be written as Definition: A linear system is said to be relaxed at t0 if and only if

In particular, if the system is causal, then Example: Consider the system We have It is clear that the system is relaxed and causal at t0, because y[t0, +∞) can be determined solely and uniquely by u[t0, +∞).

Example: Consider the system We have Though the output can be determined uniquely, the system is not relaxed at t0, because y[t0, +∞) cannot be determined solely and uniquely by u[t0, +∞).

Example: If a linear system satisfies u(−∞, t0) ≡ 0, then the system is relaxed at t0. As a matter of fact, this is exactly the definition of relaxedness at t0.

However, u(, t0)0 is only a sufficient condition of relaxedness at t0. Example: Consider a unit-time-delay system It is easy to verify that H is a linear operator. Let and then the system is relaxed at t0. In fact, to determine y[t0, +) by u[t0, +) solely and uniquely, one only needs to know that u[t01, t0) is zero.

t01 t0 Initially relaxed t01 t0 u(t) t01 t0 Initially relaxed t0 t01 y(t) It is clear that if u[t01, t0) is zero, then y[t0, +) can be determined by u[t0, +) solely and uniquely regardless of u(t), t(, t01).

Criterion Theorem: The system described by y(t) = ∫ from −∞ to +∞ of G(t, τ) u(τ) dτ is relaxed at t0 if and only if u[t0, +∞) ≡ 0 implies y[t0, +∞) ≡ 0. Proof: Necessity. If the system is relaxed at t0, the output y(t) for t ≥ t0 can be expressed by an integral over [t0, +∞), which is identically zero when u[t0, +∞) ≡ 0.

Sufficiency: We shall show that if u[t0, +∞) ≡ 0 implies y[t0, +∞) ≡ 0, then the system is relaxed at t0. Since the output for t ≥ t0 is the sum of the responses to u(−∞, t0) and to u[t0, +∞), the assumption that u[t0, +∞) ≡ 0 yields y[t0, +∞) ≡ 0 implies that

In words, the net effect of u(−∞, t0) on the output y(t) for t ≥ t0 is zero. Hence the system is relaxed at t0. Q.E.D.

The relaxedness of the system can be determined from the behavior of the system after t0 without knowing the previous history of the system. Certainly, it is impractical or impossible to observe the output from time t0 to infinity; fortunately, for a large class of systems, it is not necessary to do so.

The following corollary gives a more applicable criterion for deciding whether a system is relaxed at t0. Corollary: If the impulse-response matrix can be decomposed as G(t, τ) = M(t) N(τ), and if every element of M(t) is analytic on (−∞, +∞), then the system is relaxed at t0 if, for some fixed positive α, u[t0, t0 + α) ≡ 0 implies y[t0, t0 + α) ≡ 0. Remark: This is an important result: because α is a finite positive number, the corollary can be applied in engineering practice.

Example. Consider the following system. Using the property of the matrix exponential, G(t, τ) can be decomposed as above, where M(t) = e^{At} is an analytic function.

Appendix: Analytic Functions. Let D be an open interval of the real line R and let f(·) be a function defined on D; that is, to each point t in D a unique number f(t) is assigned. A function of a real variable, f(·), is said to be of class Cn on D if its nth derivative f(n)(t) exists and is continuous for all t in D. C∞ is the class of functions having derivatives of all orders.

Definition: A function of a real variable, f(t), is said to be analytic on D if f is of class C∞ and if for each t0 in D there exists a positive real number ε0 such that, for all t in (t0 − ε0, t0 + ε0), f(t) is representable by a Taylor series about the point t0. For instance, polynomials, exponential functions, and sinusoidal functions are analytic on the entire real line.

Theorem: If a function f is analytic on D and if f is known to be identically zero on an arbitrarily small nonzero interval in D, then the function f is identically zero on D. Proof. If the function is identically zero on an arbitrarily small nonzero interval, say, (t0, t1), then the function and its derivatives are all equal to zero on (t0, t1). By analytic continuation, the function can be shown to be identically zero. Q.E.D.

[Figure: the process of analytic continuation — f(t) is analytic on D, and D1 ⊂ D is an interval on which f(t) ≡ 0; the Taylor expansions propagate the zero values to all of D.]

Corollary 1-1. If the impulse-response matrix can be decomposed as G(t, τ) = M(t) N(τ), and each element of M(t) is analytic over (−∞, +∞), then the system is relaxed at t0 if, for a constant α, u[t0, t0 + α) ≡ 0 implies y[t0, t0 + α) ≡ 0.

Proof. We only need to prove that u[t0, +∞) ≡ 0 implies y[t0, +∞) ≡ 0. Let u[t0, +∞) ≡ 0. Then,

Because the integral over (−∞, t0) is a constant, the assumption that M(t) is an analytic function implies that y(t) is analytic over [t0, +∞). The equation u[t0, t0 + α) ≡ 0 implies that y[t0, t0 + α) ≡ 0; that is, y vanishes on a nonzero interval. By analytic continuation,

This completes the proof. This is an important result: whether a system satisfying Corollary 1-1 is relaxed can be determined easily by observing the output over any nonzero interval of time. If the output is zero on this interval, then the system is relaxed at that moment.

Example: Consider the system, where A and B are constant matrices of appropriate dimensions. We have the solution shown. If x(t0) = 0, the energy of the system at time t0 is zero, and the system is relaxed at t0. Hence
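A quick numerical illustration (the matrices A, B, C below are hypothetical, chosen only for the demonstration): with zero input, a nonzero state x(t0) still excites the output through the matrix exponential, while x(t0) = 0 gives a zero output, i.e. the system is relaxed at t0.

```python
import numpy as np
from scipy.linalg import expm

# Assumed illustrative system x' = A x + B u, y = C x.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

# With u == 0, the solution is x(t) = expm(A * (t - t0)) @ x(t0).
x0 = np.array([[1.0], [-1.0]])
t = 2.0
x_t = expm(A * t) @ x0
print(C @ x_t)    # nonzero: energy stored in x(t0) still excites the output

# If x(t0) = 0 and u == 0, then x(t) = 0 for all t: relaxed at t0.
print(C @ (expm(A * t) @ np.zeros((2, 1))))   # zero output
```

Here x0 happens to be an eigenvector of A for the eigenvalue −1, so the free output decays as e^{−t}.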

6. Time Invariance. If the characteristics of a system do not change with time, the system is said to be time invariant. To define this precisely, we need the concept of a shifting operator Qα. [Figure: no matter at what time the input is applied to the system, the output waveform remains the same, only shifted in time.]

Response to u=1(t1): By tacking the Laplace transform Example: Consider the following linear time-invariant system that is relaxed at time t=0: Response to u=1(t): t Response to u=1(t1): By tacking the Laplace transform we have t

Definition of the shifting operator and of time-invariant systems. The effect of the shifting operator Qα is illustrated in Figure 1-5: the output Qα u is equal to the input delayed by α seconds. [Figure 1-5: the effect of a shifting operator on a signal.]

Definition: A relaxed system is said to be time invariant if and only if H Qα u = Qα H u for any input u and any real number α. Otherwise the relaxed system is said to be time varying. In other words, no matter at what time an input is applied to a relaxed time-invariant system, the waveform of the output remains the same, only shifted. [Figure: an input and its shifted version produce the same output waveform, shifted by α.]
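The condition H Qα = Qα H can be checked numerically for an LTI operator realized as a discrete convolution. The impulse response and the shift amount below are assumptions chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n, shift = 300, 40
h = np.exp(-0.05 * np.arange(n))     # fixed impulse response -> time-invariant H

def H(u):
    return np.convolve(h, u)[:n]

def Q(u, a=shift):
    """Shifting operator: delay the signal by a samples (zero-padded front)."""
    return np.concatenate([np.zeros(a), u[: n - a]])

u = rng.standard_normal(n)

# Time invariance: H(Q u) == Q(H u) -- shifting then filtering equals
# filtering then shifting.
print(np.allclose(H(Q(u)), Q(H(u))))   # True
```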

Example: Prove that, for a constant α, the shifting operator Qα is a linear time-invariant system, and compute its impulse response function and transfer function. Proof: The linearity of Qα is obvious. From the definition of time invariance, we only need to show that Qα Qβ u = Qβ Qα u for any β ∈ R. In fact,

Hence, the system is a linear time-invariant system, whose impulse response function is δ(t − α) and whose transfer function is e^{−αs}.

Impulse response function of a time-invariant system. If a system is linear and time invariant, the impulse response function reduces to a function of t − τ alone. In fact, from the property of time invariance, we have g(t + T, τ + T) = g(t, τ) for any T,

which implies that g(t, τ) depends only on the difference t − τ, for any t and τ.

In particular, by choosing T = −τ, we have g(t, τ) = g(t − τ, 0). For simplicity, we write g(t − τ). Hence, the impulse response g(t, τ) of a relaxed linear time-invariant system depends only on the difference of t and τ. Example. Consider the following relaxed, linear, causal, and time-invariant system

Multivariable systems. For all t and τ, we have G(t, τ) = G(t − τ). Hence, the input-output pair of a linear, causal, time-invariant system that is relaxed at t0 satisfies (1-19). For time-invariant systems, we can choose t0 = 0 without loss of generality. Then, (1-20)

7. Transfer-function matrix. Taking the Laplace transform of y(t) and using the Laplace transform of a convolution, we have ŷ(s) = Ĝ(s) û(s), where Ĝ(s) is the Laplace transform of the impulse-response matrix and is called the transfer-function matrix.

Here, the elements of a transfer-function matrix are assumed to be rational functions of s. Example: Given the impulse-response matrix G(t) of a system, find its transfer-function matrix. Solution: Taking the Laplace transform of each element of G(t) yields the result.
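The entrywise Laplace transform can be carried out with a computer algebra system. The matrix G(t) below is a hypothetical stand-in (the one on the slide is not recoverable from the transcript):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# Assumed illustrative impulse-response matrix G(t).
G_t = sp.Matrix([[sp.exp(-t),   sp.exp(-2*t)],
                 [t*sp.exp(-t), sp.sin(t)]])

# Transfer-function matrix: entrywise Laplace transform of G(t).
G_s = G_t.applyfunc(lambda g: sp.laplace_transform(g, t, s, noconds=True))
print(G_s)
# Entries: 1/(s+1), 1/(s+2), 1/(s+1)**2, 1/(s**2+1) -- all rational in s.
```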

Proper and strictly proper. We assume that the numerator polynomial and denominator polynomial of every element of G(s) have no common factor. Definition: A rational transfer function G(s) is said to be proper if G(∞) is a finite nonzero constant; G(s) is said to be strictly proper if G(∞) = 0. Zeros and poles of a transfer-function matrix: we can define the zeros and poles of rational transfer-function matrices by extending the definitions used in classical control theory.
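For a scalar rational function, the behavior at s = ∞ is decided by comparing numerator and denominator degrees — a small sketch (the classifier and its labels are our own, not from the slides):

```python
import sympy as sp

s = sp.symbols('s')

def properness(g):
    """Classify a rational transfer function by its behavior at s = infinity."""
    num, den = sp.fraction(sp.cancel(g))   # cancel common factors first
    dn, dd = sp.degree(num, s), sp.degree(den, s)
    if dn < dd:
        return 'strictly proper'   # g(inf) = 0
    if dn == dd:
        return 'proper'            # g(inf) is a finite nonzero constant
    return 'improper'              # g blows up at infinity

print(properness(1 / (s + 1)))            # strictly proper
print(properness((s + 2) / (s + 1)))      # proper
print(properness((s**2 + 1) / (s + 1)))   # improper
```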

Assumption: Let G(s) be a q×p rational transfer-function matrix with rank r. Here, the rank of a transfer-function matrix is defined as the highest order of its non-vanishing minors. Example: Consider the following transfer-function matrices

Definition 1-5: The characteristic polynomial of a proper rational matrix G(s) is defined to be the least common denominator of all minors of G(s). Note that in computing the characteristic polynomial of a rational matrix, every minor must first be reduced to an irreducible form; otherwise an erroneous result will be obtained.

Example. Consider the following transfer-function matrix. From Definition 1-5, the common denominator of the minors of order 1 is (s + 1)(s − 1)(s + 2). The denominators of the minors of order 2 are then computed; therefore, the common denominator of the minors of order 2 is

Hence, the characteristic polynomial of G(s) is as shown, which has four poles: −1, −2, −2, and +1. Definition 1-6: Replace the denominators of all the minors of order r of G(s) by the characteristic polynomial; the greatest common divisor of the resulting numerators is defined as the zero polynomial of G(s).
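Definition 1-5 can be mechanized: enumerate all minors, reduce each to irreducible form, and take the least common multiple of their denominators. The matrix below is a hypothetical example, chosen so that the order-2 minor contributes a repeated factor (s + 1)^2 that no order-1 minor shows:

```python
import sympy as sp
from functools import reduce
from itertools import combinations

s = sp.symbols('s')

# Hypothetical 2x2 transfer-function matrix (not the one on the slide).
G = sp.Matrix([[1/(s + 1), 1/((s + 1)*(s + 2))],
               [1/(s + 1), 1/(s + 2)]])

def characteristic_polynomial(G):
    """Least common denominator of all irreducible minors of G(s) (Definition 1-5)."""
    q, p = G.shape
    dens = []
    for k in range(1, min(q, p) + 1):
        for rows in combinations(range(q), k):
            for cols in combinations(range(p), k):
                # Reduce the minor to an irreducible rational function first.
                minor = sp.cancel(G[list(rows), list(cols)].det())
                if minor != 0:
                    dens.append(sp.fraction(minor)[1])
    return sp.factor(reduce(sp.lcm, dens))

cp = characteristic_polynomial(G)
print(cp)   # the order-2 minor forces the repeated factor: (s + 1)**2 * (s + 2)
```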

Example: Consider the following transfer-function matrix. The characteristic polynomial of G(s) is as shown. Replacing the denominators of the minors of order 2 by the characteristic polynomial, the greatest common divisor of the numerators is (s − 1); hence the zero polynomial of G(s) is (s − 1), and G(s) has one zero at s = 1.

Example Consider the following transfer function matrices Find their characteristic polynomials. The four minors of order 1 of G1(s) are The minor of order 2 is zero; hence, the characteristic polynomial is s+1.

The four minors of order 1 of G2(s) are and the minor of order 2 is Hence, its characteristic polynomial is (s + 1)^2.

Example. Consider the following transfer function matrix The minors of order 1 are the elements of G(s), and the minors of order 2 are

The least common denominator of all minors of G(s), i.e., the characteristic polynomial of G(s), is s(s + 1)^2(s + 2)(s + 3).

Summary. Relaxedness at −∞; linearity; causality.

Relaxedness at t0; time invariance (with t0 = 0).

Laplace transform, SISO: ŷ(s) = ĝ(s) û(s), where ĝ(s) is the rational transfer function studied in classical control theory.

MIMO: poles and zeros are extensions of those in classical control theory.