RANDOM MATRIX APPROACH TO LINEAR CONTROL SYSTEMS CHARLOTTE KIANG MAY 16, 2012

ABOUT ME My name is Charlotte Kiang, and I am a junior at Wellesley College, majoring in math and computer science with a focus on engineering applications.

WHAT I HOPE TO ACCOMPLISH TODAY
-Show the equivalence between a single jump linear control system and a random walk on a self-similar graph whose vertices are the visible points of the plane. (Result first published by Tamás Kalmár-Nagy in Proceedings of the 46th IEEE Conference on Decision and Control.)
-Talk a bit about the implications of this result.

A (VERY BRIEF) INTRODUCTION TO CONTROL THEORY
-Mathematical control theory is the area of applied mathematics that deals with the principles underlying the design and analysis of control systems.
-Control systems themselves are broadly defined as systems whose function is to influence an object's behavior in order to achieve a desired goal.
-Two main "branches" of control theory research concern optimization and uncertainty. The problem I will be discussing today mainly concerns optimization.

OUR GENERAL PROBLEM
-The problem we are looking into today concerns a networked control system, meaning its inputs come from an external source in its environment.
-Problems that concern networked control systems occur in real time, so our system must receive input and process it as quickly as possible.
-Some examples of these control systems that we see in everyday life:
 -Robotics
 -Aircraft and spacecraft (avionics and autopilots)
 -Cars
-Vehicular control systems are extremely sensitive to delays, so we must find the most efficient way to communicate with our network. Since our signals occur at random time intervals, networks often experience traffic that delays reception. We can use random matrix theory to help build a framework for analysis of these systems.

ON TO THE MATH…
-To deal with the problem of traffic, we must consider each signal's arrival time. Since arrival times are uncertain, the best we can do is to characterize them according to a probability distribution.
-The underlying probability distributions for control systems are often very complicated, but we can gain insight by focusing on a simple but characteristic problem of its class. One such model is the so-called random Fibonacci sequence.
-By exploring patterns that arise in linear control systems with random time delays, we can find an equivalence with the random Fibonacci recurrence. We can exploit this symmetry to propose a modified recurrence rule for the system.

GRAPH OF A CONTROL SYSTEM WITH RANDOM ACTUATION TIME DELAYS

OUR CLAIM We claim that linear control systems with random time delays follow a jump linear pattern. Thus, we must show why this pattern arises naturally in these systems.

DEFINITION OF A LINEAR JUMP SYSTEM A linear jump system is a discrete-time linear system with randomly jumping parameters that can be described using a finite-state Markov chain. (For the purposes of this presentation, that's all you need to understand right now, but a formal definition is below.)
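As a formal statement of that definition (standard notation from the jump-linear-systems literature, not reproduced from the original slide), the dynamics take the form

```latex
x_{k+1} = A_{r_k}\, x_k + B_{r_k}\, u_k, \qquad r_k \in \{1, \dots, N\},
\qquad \Pr\{r_{k+1} = j \mid r_k = i\} = p_{ij},
```

where the mode r_k is a finite-state Markov chain that selects which pair of system matrices is active at step k.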

LINEAR JUMPS, PT. 1 We consider a basic model of a linear control system with random actuator delays due to communication over a network. The structure of the control system takes the usual form in which the state evolves continuously. We assume that our control signal is proportional to our output. Our sampling time ds (as seen on our earlier graph) specifies the time instants s_i = i·ds. Our terms of interest here are x(t) (the state at a given time) and u (our control input).
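The slide's own equations are images and are not reproduced in this transcript; a standard form consistent with the description above (continuously evolving state, sampling period ds) is

```latex
\dot{x}(t) = A\,x(t) + B\,u(t), \qquad s_i = i\,ds, \quad i = 0, 1, 2, \dots
```

with x(t) the state and u(t) the control input.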

LINEAR JUMPS, PT. 2 We want our actuation times to be the same as our sampling times, but this only occurs in an ideal model. Due to network delays, each actuation time is instead the corresponding sampling time plus a random delay. Note that we are assuming here that the delay in actuation is less than our sampling time interval. (This is a highly simplified model, but the observed patterns are still useful.)
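Written out (a reconstruction based on the remarks above, with τ_i denoting the random network delay at the ith step):

```latex
t_i = s_i + \tau_i, \qquad 0 \le \tau_i < ds .
```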

LINEAR JUMPS, PT. 3 Now we derive a mapping between values of the state and control input for consecutive actuation times. On [t_i, t_{i+1}], we can solve ẋ = Ax + Bu:
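The solution over one actuation interval takes the standard variation-of-constants form, which is presumably what the missing formula shows:

```latex
x(t) = e^{A(t - t_i)}\, x(t_i) + \int_{t_i}^{t} e^{A(t - s)}\, B\, u(s)\, ds,
\qquad t \in [t_i, t_{i+1}].
```

If the control input is held constant between actuations (the usual zero-order-hold assumption), the integral can be evaluated explicitly, which is what turns the one-step relationship below into a matrix recursion.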

LINEAR JUMPS, PT. 4 Also, And,

LINEAR JUMPS, PT. 5 We have assumed state-feedback control, so:
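A typical state-feedback law matching this description (notation assumed here: gain matrix K applied to the most recently sampled state) is

```latex
u(t) = K\, x(s_i), \qquad t \in [t_i,\, t_{i+1}),
```

so the input applied after the ith actuation is computed from the state measured at the ith sampling instant.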

LINEAR JUMPS, PT. 6 (from previous slide) From our earlier definitions, we also have So if we substitute this in, we get:

LINEAR JUMPS, PT. 7 With these two results in hand, we can build our mapping.

LINEAR JUMPS, PT. 8 Our mapping relates the state and the control input at consecutive actuation times:

LINEAR JUMPS, PT. 9 As we can see, this model gives a recursive relationship between our state and control input at any given time. We can generalize this further for any time n by saying that the nth element of the sequence can be written as a product of the random one-step matrices applied to the initial condition.
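In generic notation (Φ_i and z_i are placeholders here for the slide's delay-dependent one-step matrix and the combined state/input vector; the transcript does not reproduce the actual symbols):

```latex
z_n = \Phi_{n-1}\,\Phi_{n-2} \cdots \Phi_1\,\Phi_0\, z_0 ,
```

where each Φ_i depends on the random delay at the ith step.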

LINEAR JUMPS, PT. 10 Therefore, the stability of our system (our rate of convergence/divergence) will be determined by the growth/decay of the associated infinite random matrix product. (Note that each term depends on the previous one.) To account for this dependence, we can describe the distribution of these matrices over an associated Markov chain. A system of this kind is known as a linear jump system.

WHAT TO DO WITH THIS Using these results alone, we can run some Monte Carlo simulations to obtain information about the stability of a system, but our goal today is to understand the underlying principles that govern this class of problems. As such, we now return to one of the simplest linear jump systems: the Fibonacci sequence.
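To illustrate the kind of Monte Carlo experiment meant here (this is not code from the talk, and the two matrices below are the random Fibonacci pair used as a stand-in for the delay-dependent one-step matrices), the top Lyapunov exponent of a random matrix product can be estimated by tracking the log-growth of a vector's norm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in one-step matrices: the random Fibonacci pair.  In the control
# setting these would be the delay-dependent transition matrices.
A = np.array([[1.0,  1.0],
              [1.0,  0.0]])
B = np.array([[1.0, -1.0],
              [1.0,  0.0]])

def lyapunov_mc(mats, n_steps=200_000):
    """Monte Carlo estimate of the top Lyapunov exponent of an i.i.d. random
    matrix product: average log-growth of a renormalized vector."""
    v = np.array([1.0, 1.0])
    log_growth = 0.0
    for _ in range(n_steps):
        M = mats[rng.integers(len(mats))]   # pick A or B with probability 1/2
        v = M @ v
        norm = np.linalg.norm(v)
        log_growth += np.log(norm)          # log of this step's growth factor
        v /= norm                           # renormalize to avoid overflow
    return log_growth / n_steps

print(lyapunov_mc([A, B]))
```

A positive estimate means the product (and hence the corresponding jump linear system) diverges exponentially; for this particular pair the estimate converges to ln(1.13198824…) ≈ 0.1240, the logarithm of Viswanath's constant for the random Fibonacci sequence.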

FIBONACCI: INTRO One of the simplest linear jump systems is a variant of the well-known Fibonacci series. The one we will look at is a second-order random recurrence relation, also known as the random Fibonacci recurrence. That is, the recurrence can be written as a two-dimensional map in which the coefficient matrix D_n at time n is chosen randomly as either A or B, each with probability ½.
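Under one common convention (an assumption on my part; the slide's own matrices are images and are not reproduced in this transcript), the random Fibonacci recurrence x_{n+1} = x_n ± x_{n-1} takes exactly this form:

```latex
\begin{pmatrix} x_{n+1} \\ x_n \end{pmatrix}
= D_n \begin{pmatrix} x_n \\ x_{n-1} \end{pmatrix},
\qquad
D_n \in \left\{ A = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix},\;
                B = \begin{pmatrix} 1 & -1 \\ 1 & 0 \end{pmatrix} \right\},
\qquad \Pr(D_n = A) = \Pr(D_n = B) = \tfrac{1}{2}.
```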

RESULTS FROM FIBONACCI Recall that our original goal was to exploit the symmetry between our linear control system equation and this Fibonacci one to build a new recurrence relation. To do so, we must uncover the geometric structure induced by the Fibonacci series.

LYAPUNOV EXPONENTS To relate our Fibonacci series to control systems, we must look at their Lyapunov exponents. A Lyapunov exponent of a dynamical system (such as the networked control system we are considering) is a quantity that characterizes the rate of separation of two infinitesimally close trajectories. Quantitatively, two trajectories in phase space (i.e., the space of possible states) with a small initial separation diverge at an exponential rate governed by the Lyapunov exponent, as written below. For us, this is useful in characterizing state vs. control – similar to what we did earlier.
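In standard notation (δZ_0 for the initial separation and λ for the Lyapunov exponent; the symbols on the slide itself are not reproduced in the transcript):

```latex
|\delta Z(t)| \approx e^{\lambda t}\, |\delta Z_0|,
\qquad
\lambda = \lim_{t \to \infty} \frac{1}{t}\, \ln \frac{|\delta Z(t)|}{|\delta Z_0|}.
```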

LYAPUNOV EXPONENTS AND FIBONACCI The Lyapunov exponent for the Fibonacci series can also be expressed as a path average, by considering all possible values of |x_n|. We can see that it is the asymptotic growth rate of the geometric mean of |x_n|, where the geometric mean is taken over all the m possible paths of length n.
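A small brute-force illustration of this path average (illustrative only, and it assumes the random Fibonacci recurrence x_{k+1} = x_k ± x_{k-1} sketched above): enumerate all 2^n sign paths, take the geometric mean of |x_n|, and read off its growth rate.

```python
import itertools
import math

def path_average_exponent(n):
    """(1/n) * log of the geometric mean of |x_n| over all 2**n sign paths
    of the random Fibonacci recurrence x_{k+1} = x_k +/- x_{k-1}."""
    log_sum, count = 0.0, 0
    for signs in itertools.product((+1, -1), repeat=n):
        a, b = 1, 1                     # x_0 = x_1 = 1
        for s in signs:
            a, b = b, b + s * a         # one random Fibonacci step
        if b != 0:                      # ignore the few paths that land on 0
            log_sum += math.log(abs(b))
            count += 1
    return log_sum / (n * count)

for n in (8, 12, 16, 18):
    print(n, round(path_average_exponent(n), 4))
```

The estimates drift slowly toward the Lyapunov exponent of the random matrix product; exact enumeration is only feasible for small n, which is why the geometric structure of the paths (next slides) is worth exploiting.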

FIBONACCI PATHS, PT. 1 To characterize all possible paths, we must investigate their geometrical structure. Consider the following two transformations, which both map lattice points to lattice points (i.e., points with integer coordinates). We will now modify this recurrence slightly to act only on the first quadrant of the integer lattice. Our modified maps are:
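The transformations themselves appear only as images in the original slides. A reconstruction consistent with the random Fibonacci recurrence sketched earlier (treat this as my assumption rather than the talk's exact formulas) is

```latex
A(i, j) = (j,\; j + i), \qquad B(i, j) = (j,\; j - i),
```

with the first-quadrant modification replacing the difference by its absolute value:

```latex
\tilde{A}(i, j) = (j,\; j + i), \qquad \tilde{B}(i, j) = (j,\; |\,j - i\,|).
```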

FIBONACCI PATHS, PT. 2 THE RANDOM FIBONACCI MAPS ON THE FIRST QUADRANT The figure to the left depicts the result of repeatedly applying our modified A and B beginning with (1,1), just as the Fibonacci sequence does. An interesting consequence of this is that the coordinates of the points that can be reached from (1,1) are relatively prime (though I have not had a chance to prove this). More importantly, this figure is a directed graph called the Fibonacci graph.

TOPOLOGY OF THE FIBONACCI GRAPH FIBONACCI GRAPH (COORDINATES ARE ALL SAME AS LAST SLIDE’S GRAPH) The graph to the left can be obtained by “unfolding” our previous graph (all values are the same).

TOPOLOGY OF THE FIBONACCI GRAPH, PT. 2 FIBONACCI GRAPH A lattice point (i, j) is called visible if gcd(i, j) = 1. We say that a point (k, l) is reachable from a lattice point (i, j) if there exists a sequence of transformations (using the modified A and B that we used to construct our graph) that takes (i, j) to (k, l). Then (i, j) is reachable from (1, 1) iff (i, j) is visible. (This is equivalent to saying that the coordinates of every node of this graph are relatively prime, as we mentioned before.) Consequently, every visible point is reachable from every visible point.
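Here is a quick numerical check of the "reachable iff visible" statement inside a finite window (it uses the assumed first-quadrant maps from the earlier sketch, so it validates that reconstruction rather than the talk's exact graph):

```python
from math import gcd
from collections import deque

def reachable_points(limit=30):
    """BFS from (1, 1) under the assumed maps (i, j) -> (j, j+i) and (j, |j-i|),
    restricted to points with both coordinates in 1..limit."""
    seen = {(1, 1)}
    queue = deque([(1, 1)])
    while queue:
        i, j = queue.popleft()
        for nxt in ((j, j + i), (j, abs(j - i))):
            if nxt not in seen and 0 < nxt[0] <= limit and 0 < nxt[1] <= limit:
                seen.add(nxt)
                queue.append(nxt)
    return seen

limit = 30
visible = {(i, j) for i in range(1, limit + 1)
                  for j in range(1, limit + 1) if gcd(i, j) == 1}
print(reachable_points(limit) == visible)   # expected: True
```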

TOPOLOGY OF THE FIBONACCI GRAPH, PT. 3 FIBONACCI GRAPH Given the previous statements, it follows that every point of this graph has an "address" consisting of the series of transformations that takes (i, j) to (1, 1). In fact, this is equivalent to the mathematical statement that we proved earlier:

CONSEQUENCES OF THIS SYMMETRY Now that we have drawn this parallel with the Fibonacci graph, we can say a lot more about linear control systems. For instance, a random walk on the Fibonacci graph is a diffusion process, so we can calculate the probability of arriving at a point (i, j) after n applications of A or B. Let w_n(i, j) denote the probability of arriving at (i, j) at the nth step. The following recursion gives those probabilities, and the result can be expressed in terms of m, the distance of the point (i, j) from (1, 1).
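One direct way to compute the arrival probabilities numerically (again using the assumed maps from the earlier sketch, with each step choosing a map with probability ½; the talk's own recursion may be organized differently, e.g., by the distance m):

```python
from collections import defaultdict

def arrival_probabilities(n_steps):
    """w_n(i, j): probability of being at (i, j) after n random applications of
    the assumed maps (i, j) -> (j, j+i) and (j, |j-i|), starting from (1, 1)."""
    w = {(1, 1): 1.0}
    for _ in range(n_steps):
        nxt = defaultdict(float)
        for (i, j), p in w.items():
            nxt[(j, j + i)] += 0.5 * p          # step via A
            nxt[(j, abs(j - i))] += 0.5 * p     # step via B
        w = dict(nxt)
    return w

w5 = arrival_probabilities(5)
print(w5.get((2, 3), 0.0))   # probability of sitting at (2, 3) after 5 steps
```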

CONSEQUENCES OF THIS SYMMETRY, PT. 2 Using some hypergeometric functions, we can obtain a closed-form expression for w_n(i, j). By obtaining expressions for the diffusion coefficient, the rate of divergence of the random walk can be characterized.

WHAT ELSE? The previous result is just one among many mathematical possibilities that open up now that we have established a symmetry between a random walk on the Fibonacci graph and linear control systems. The fact that we can model our problem using this graph strongly suggests that graph theory, statistical physics, combinatorics, and even number theory have applications in control theory.

CONTROL THEORY DEMONSTRATION

REFERENCES
Czornik, Adam, and Andrzej Swierniak. "On direct controllability of discrete time jump linear systems." Accessed April 15.
Gupta, Vijay, and Richard M. Murray. "Lecture Summary: Markov Linear Jump Systems." European Embedded Control Institute. Accessed May 5.
Kalmár-Nagy, Tamás. "Random Matrix Theory Approach to Linear Control Systems with Network-Induced Delays." Proceedings of the 46th IEEE Conference on Decision and Control, Dec. 2007. Accessed online.
Sontag, Eduardo. Mathematical Control Theory: Deterministic Finite Dimensional Systems. Second Edition, Springer, New York, 1998.