OPTIMAL CONTROL SYSTEMS

AIM
To provide an understanding of the principles of optimization techniques in the static and dynamic contexts.

LEARNING OBJECTIVES
On completion of the module the student should be able to demonstrate:
- an understanding of the basic principles of optimization and the ability to apply them to linear and non-linear unconstrained and constrained static problems;
- an understanding of the fundamentals of optimal control.

LABORATORY WORK
The module will be illustrated by laboratory exercises and demonstrations on the use of MATLAB and the associated Optimization and Control Tool Boxes for solving unconstrained and constrained static optimization problems and for solving linear quadratic regulator problems.

ASSESSMENT
Via written examination. MSc only: also a laboratory session and report.

PRINCIPLES OF OPTIMIZATION
Typical engineering problem: you have a process that can be represented by a mathematical model, and you have a performance criterion such as minimum cost. The goal of optimization is to find the values of the process variables that yield the best value of the performance criterion.

The two ingredients of an optimization problem are:
(i) a process or model;
(ii) a performance criterion.

Some typical performance criteria:
- maximum profit
- minimum cost
- minimum effort
- minimum error
- minimum waste
- maximum throughput
- best product quality

Note the need to express the performance criterion in mathematical form.

Static optimization: variables have numerical values, fixed with respect to time. Dynamic optimization: variables are functions of time.

Essential Features
Every optimization problem contains three essential categories:
1. at least one objective function to be optimized;
2. equality constraints;
3. inequality constraints.

By a feasible solution we mean a set of variables which satisfy categories 2 and 3. The region of feasible solutions is called the feasible region.

[Figure: a feasible region in the (x1, x2) plane, bounded by linear inequality constraints and nonlinear equality constraints.]
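For example, a candidate point can be tested for feasibility directly. A minimal MATLAB sketch, with an assumed illustrative constraint set (the constraints below are not from the module):

g = @(x) [4 - x(1) - x(2); x(1)];    % inequality constraints, written as g(x) >= 0 (assumed)
h = @(x) x(1)^2 + x(2)^2 - 2;        % equality constraint h(x) = 0 (assumed)
x = [1; 1];                          % candidate point
feasible = all(g(x) >= 0) && abs(h(x)) < 1e-8   % true: x is a feasible solution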

An optimal solution is a set of values of the variables that lies in the feasible region and also provides the best value of the objective function in category 1. For a meaningful optimization problem the model needs to be underdetermined, i.e. it must have more variables than independent equations, leaving at least one degree of freedom over which to search.

Mathematical Description
In general terms the problem is:

minimize f(x), x = (x1, x2, ..., xn)
subject to h(x) = 0 (equality constraints)
and g(x) >= 0 (inequality constraints)
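In MATLAB this standard form maps onto the Optimization Toolbox routine fmincon. A minimal sketch, with an assumed illustrative objective and constraints (not taken from the module):

f   = @(x) (x(1) - 1)^2 + (x(2) - 2)^2;   % objective f(x) (assumed)
Aeq = [1 1]; beq = 1;                     % equality constraint h(x): x1 + x2 - 1 = 0
lb  = [0; 0];                             % inequality constraints as simple bounds: x >= 0
x = fmincon(f, [0.5; 0.5], [], [], Aeq, beq, lb, [])   % returns x approx. [0; 1]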

Steps Used To Solve Optimization Problems
1. Analyse the process in order to make a list of all the variables.
2. Determine the optimization criterion and specify the objective function.
3. Develop the mathematical model of the process to define the equality and inequality constraints.
4. Identify the independent and dependent variables to obtain the number of degrees of freedom.
5. If the problem formulation is too large or complex, simplify it if possible.
6. Apply a suitable optimization technique.
7. Check the result and examine its sensitivity to changes in model parameters and assumptions.

Classification of Optimization Problems
Properties of f(x):
- single variable or multivariable
- linear or nonlinear
- sum of squares
- quadratic
- smooth or non-smooth
- sparsity

Properties of h(x) and g(x):
- simple bounds
- smooth or non-smooth
- sparsity
- linear or nonlinear
- no constraints

Properties of the variables x:
- time variant or invariant
- continuous or discrete
- take only integer values
- mixed

Obstacles and Difficulties
- The objective function and/or the constraint functions may have finite discontinuities in the continuous parameter values.
- The objective function and/or the constraint functions may be non-linear functions of the variables.
- The objective function and/or the constraint functions may be defined in terms of complicated interactions of the variables. This may prevent calculation of unique values of the variables at the optimum.

- The objective function and/or the constraint functions may exhibit nearly "flat" behaviour for some ranges of variables, or exponential behaviour for other ranges. This causes the problem to be insensitive, or too sensitive, respectively.
- The problem may exhibit many local optima whereas the global optimum is sought. A solution may be obtained that is less satisfactory than another solution elsewhere.
- Absence of a feasible region.
- Model-reality differences.

Typical Examples of Application

Static optimization:
- Plant design (sizing and layout).
- Operation (best steady-state operating condition).
- Parameter estimation (model fitting).
- Allocation of resources.
- Choice of controller parameters (e.g. gains, time constants) to minimise a given performance index (e.g. overshoot, settling time, integral of error squared).

Dynamic optimization:
- Determination of a control signal u(t) to transfer a dynamic system from an initial state to a desired final state so as to satisfy a given performance index.
- Optimal plant start-up and/or shut-down.
- Minimum time problems.

BASIC PRINCIPLES OF STATIC OPTIMIZATION THEORY

Continuity of Functions
Functions containing discontinuities can cause difficulty in solving optimization problems.

Definition: a function of a single variable x is continuous at a point x0 if:
(a) f(x0) exists;
(b) lim f(x) as x -> x0 exists;
(c) lim f(x) as x -> x0 equals f(x0).

If f(x) is continuous at every point in a region R, then f(x) is said to be continuous throughout R.

[Figure: left, a function f(x) with a jump discontinuity, so f(x) is discontinuous; right, a function f(x) that is continuous, but whose derivative df/dx is not.]

Unimodal and Multimodal Functions
A unimodal function f(x) (in the range specified for x) has a single extremum (minimum or maximum). A multimodal function f(x) has two or more extrema. If df/dx = 0 at the extremum, the point is called a stationary point. There is a distinction between the global extremum (the biggest or smallest among a set of extrema) and local extrema (any extremum). Note: many numerical procedures terminate at a local extremum.

[Figure: a multimodal function f(x) showing a global maximum that is not stationary (at the boundary), a stationary local maximum, a stationary local minimum, a stationary saddle point, and a stationary global minimum.]
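The note about termination at a local extremum is easy to demonstrate with a bounded one-dimensional search. A sketch with an assumed multimodal function (fminbnd only searches the given interval, so it returns whichever local minimum that interval contains):

f = @(x) sin(3*x) + 0.1*x.^2;   % assumed multimodal function
xa = fminbnd(f, -2, 0)          % local minimum near x = -0.5 (also the global one)
xb = fminbnd(f, 0, 3)           % a different, poorer local minimum near x = 1.6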

Multivariate Functions - Surface and Contour Plots
We shall be concerned with basic properties of a scalar function f(x) of n variables x = (x1, ..., xn). If n = 1, f(x) is a univariate function; if n > 1, f(x) is a multivariate function. For any multivariate function, the equation z = f(x) defines a surface in (n+1)-dimensional space.

In the case n = 2, the points z = f(x1,x2) represent a three dimensional surface. Let c be a particular value of f(x1,x2). Then f(x1,x2) = c defines a curve in x1 and x2 on the plane z = c. If we consider a selection of different values of c, we obtain a family of curves which provide a contour map of the function z = f(x1,x2).

[Figure: contour map of z = f(x1, x2) with contour levels between 0.2 and z = 20, showing two local minima and a saddle point.]

Example: Surface and Contour Plots of “Peaks” Function

[Figure: the "peaks" function is multimodal, with a global maximum, a global minimum, a local maximum, a local minimum and a saddle point in the (x1, x2) plane.]
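Such plots can be reproduced with MATLAB's built-in peaks demonstration function (assuming the slide's "Peaks" function is that function; the grid range below is also an assumption):

[X, Y] = meshgrid(-3:0.1:3);   % assumed plotting range
Z = peaks(X, Y);               % evaluate the built-in peaks function
figure; surf(X, Y, Z); xlabel('x_1'); ylabel('x_2'); zlabel('f(x)')   % surface plot
figure; contour(X, Y, Z, 30); xlabel('x_1'); ylabel('x_2')            % contour map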

Gradient Vector
The slope of f(x) at a point in the direction of the i-th co-ordinate axis is the partial derivative ∂f/∂xi. Collecting the n slopes gives the gradient vector:

∇f(x) = [∂f/∂x1, ∂f/∂x2, ..., ∂f/∂xn]^T

The gradient vector at a point is normal to the contour through that point, in the direction of increasing f. At a stationary point:

∇f(x) = 0 (a null vector)
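For example, with the Symbolic Math Toolbox (assuming it is available) the gradient of an assumed illustrative function can be formed and checked at a stationary point:

syms x1 x2
f = x1^2 + 3*x2^2;            % assumed illustrative function
g = gradient(f, [x1 x2])      % returns [2*x1; 6*x2]
subs(g, [x1 x2], [0 0])       % the null vector: (0, 0) is a stationary point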

Note: if ∇f(x) is a constant vector b, then f(x) is linear, e.g. f(x) = b^T x + c = b1x1 + ... + bnxn + c.

Convex and Concave Functions
A function is called concave over a given region R if, for all pairs of points xa, xb in R and all 0 <= θ <= 1:

f(θxa + (1 - θ)xb) >= θf(xa) + (1 - θ)f(xb)

The function is strictly concave if >= is replaced by >. A function is called convex (strictly convex) if >= is replaced by <= (<).

[Figure: left, a concave function f(x) lying above the chord joining xa and xb; right, a convex function f(x) lying below the chord.]

For a multivariate function f(x) the corresponding conditions are expressed in terms of the Hessian matrix H(x), the matrix of second partial derivatives of f(x):

H(x) = ∇²f(x), with elements Hij = ∂²f/∂xi∂xj

f(x) is convex if H(x) is positive semi-definite (strictly convex if positive definite), and concave if H(x) is negative semi-definite (strictly concave if negative definite).

Tests for Convexity and Concavity
H is +ve def (+ve semi def) iff x^T H x > 0 (>= 0) for all x ≠ 0. H is -ve def (-ve semi def) iff x^T H x < 0 (<= 0) for all x ≠ 0.

Convenient tests: f(x) is strictly convex (H +ve def) (convex (H +ve semi def)) if:
1. all eigenvalues of H(x) are > 0 (>= 0), or
2. all principal determinants of H(x) are > 0 (>= 0).

f(x) is strictly concave (H -ve def) (concave (H -ve semi def)) if:
1. all eigenvalues of H(x) are < 0 (<= 0), or
2. the principal determinants of H(x) alternate in sign: Δ1 < 0, Δ2 > 0, Δ3 < 0, ... (with <=, >=, <=, ... for the semi-definite case).

Example
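A minimal MATLAB sketch of the eigenvalue test, using the assumed quadratic f(x) = 2x1² + x1x2 + 3x2² (an illustration, not taken from the module):

H = [4 1; 1 6];   % Hessian of f: [d2f/dx1^2, d2f/dx1dx2; d2f/dx2dx1, d2f/dx2^2]
lambda = eig(H)   % eigenvalues approx. 3.59 and 6.41, both > 0
% All eigenvalues are positive, so H is +ve def and f is strictly convex.
% Equivalently, the principal determinants 4 and det(H) = 23 are both > 0.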

Convex Region
A convex set of points exists if, for any two points xa and xb in a region, all points on the straight line joining xa and xb are also in the set. If a region is completely bounded by concave functions then the functions form a convex region.

[Figure: left, a convex region, where the whole segment joining xa and xb lies inside; right, a non-convex region, where part of the segment lies outside.]

Necessary and Sufficient Conditions for an Extremum of an Unconstrained Function
A condition N is necessary for a result R if R can be true only if N is true. A condition S is sufficient for a result R if R is true whenever S is true. A condition T is necessary and sufficient for a result R if R is true if and only if T is true.

There are two necessary conditions and a single sufficient condition to guarantee that x* is an extremum of a function f(x) at x = x*:
1. f(x) is twice continuously differentiable at x*.
2. ∇f(x*) = 0, i.e. a stationary point exists at x*.
3. H(x*) = ∇²f(x*) is +ve def for a minimum to exist at x*, or -ve def for a maximum to exist at x*.
1 and 2 are necessary conditions; 3 is a sufficient condition. Note: an extremum may exist at x* even though it is not possible to demonstrate the fact using the three conditions.
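A minimal numerical sketch with fminunc from the Optimization Toolbox, assuming the illustrative function f(x) = (x1 - 2)² + 2(x2 + 1)²:

f = @(x) (x(1) - 2)^2 + 2*(x(2) + 1)^2;   % assumed illustrative function
[xstar, fval] = fminunc(f, [0; 0])        % condition 2: finds the stationary point [2; -1]
H = [2 0; 0 4];                           % Hessian of f (constant for a quadratic)
eig(H)                                    % all eigenvalues > 0: +ve def, so a minimum (condition 3)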

OPTIMIZATION WITH EQUALITY CONSTRAINTS
Elimination of variables. Example:

minimize f(x) = 4x1² + 5x2²    (a)
subject to 2x1 + 3x2 = 6       (b)

Using (b) to eliminate x1 gives:

x1 = (6 - 3x2)/2 = 3 - 1.5x2   (c)

and substituting into (a):

f = 4(3 - 1.5x2)² + 5x2² = 14x2² - 36x2 + 36

At a stationary point df/dx2 = 0, i.e. 28x2 - 36 = 0, giving x2 = 9/7 ≈ 1.286. Then using (c): x1 = 3 - 1.5(9/7) = 15/14 ≈ 1.071. Hence the stationary point (a minimum, since d²f/dx2² = 28 > 0) is (1.071, 1.286).
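This result can be cross-checked numerically with fmincon; a minimal sketch:

f = @(x) 4*x(1)^2 + 5*x(2)^2;   % objective (a)
Aeq = [2 3]; beq = 6;           % equality constraint (b)
x = fmincon(f, [0; 0], [], [], Aeq, beq)   % returns x approx. [1.071; 1.286]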