April 29, 2008, UC Berkeley EECS, Berkeley, CA
Anytime Control Algorithms for Embedded Real-Time Systems
L. Greco, D. Fontanelli, A. Bicchi
Interdepartmental Research Center "E. Piaggio", University of Pisa



Introduction
- General tendency in embedded systems: implement many concurrent real-time tasks on the same platform, reducing overall HW cost and development time.
- Highly time-critical control tasks are traditionally scheduled with very conservative approaches, yielding a rigid, hardly reconfigurable, underperforming architecture.
- Modern multitasking RTOSs (e.g., in automotive ECUs) schedule their tasks dynamically, adapting to varying load conditions and QoS requirements.

Introduction (cont.)
- Real-time preemptive algorithms (e.g., RM or EDF) can suspend task execution on higher-priority interrupts.
- Schedulability guarantees, based on estimates of Worst-Case Execution Time (WCET), come at the cost of HW underexploitation: e.g., RM can only guarantee schedulability if less than about 70% of the CPU is utilized.
- In other terms: in most CPU cycles, more time is available than the worst-case guarantee assumes.
- The problem of Anytime Control is to make good use of that extra time.
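The roughly-70% figure mentioned above is the Liu-Layland sufficient bound n(2^(1/n) − 1) for RM. A minimal sketch with hypothetical (WCET, period) pairs, not a task set from the talk:

```python
import math

def rm_bound(n):
    """Liu-Layland sufficient schedulability bound for n periodic tasks under RM."""
    return n * (2 ** (1.0 / n) - 1)

# Utilization of a task set: sum of WCET_i / period_i (illustrative values)
tasks = [(1, 4), (2, 8), (1, 10)]
u = sum(c / t for c, t in tasks)
print(f"U = {u:.3f}, RM bound for n=3: {rm_bound(3):.3f}")
```

For n = 3 the bound is about 0.780, and it decreases toward ln 2 ≈ 0.693 as n grows, which is where the "less than 70%" rule of thumb comes from.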

Anytime Paradigm
- Anytime algorithms and filters:
  - execution can be interrupted at any time, always producing a valid output;
  - increasing the computational time increases the accuracy of the output (imprecise computation).
- Can we apply this to controllers?
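The interruptible-refinement idea can be sketched as the following pattern (an illustrative sketch, not code from the talk; the three refinement stages are hypothetical):

```python
import time

def anytime(refinements, x, deadline):
    """Run refinement stages in order; if the deadline hits, return the
    last completed result, which is by construction a valid output."""
    result = None
    for f in refinements:
        if time.monotonic() >= deadline:
            break
        result = f(x)  # each stage produces a valid, more accurate output
    return result

# Three hypothetical "control laws" of increasing accuracy for u = -x
stages = [lambda x: -0.5 * x, lambda x: -0.9 * x, lambda x: -x]
u = anytime(stages, 2.0, time.monotonic() + 0.01)
print(u)
```

With a generous deadline all stages complete; with a tight one, the call still returns the best result computed so far instead of failing.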

Example (I)

Example (II): Regulation Problem, RMS comparison (figure). Not feasible. Conservative: stable but poor performance.

Example (III): Regulation Problem, RMS comparison (figure). Greedy (maximum allowed controller at each step): unstable!

Issues in Anytime Control
- Hierarchical design: controllers must be ordered in a hierarchy of increasing performance.
- Switched-system performance: stability and performance of the switched system must be addressed.
- Practicality: implementation of both control and scheduling algorithms must be simple (limited resources).
- Composability: computation of higher controllers should exploit computations of lower controllers (recommended).

Problem Formulation
Consider a linear, discrete-time, time-invariant plant x(k+1) = A x(k) + B u(k) and a family of stabilizing feedback controllers u(k) = K_i x(k), i = 1, …, n. The closed-loop system is x(k+1) = (A + B K_i) x(k) = A_i x(k). Controller i provides better performance than controller j if i > j (but WCET_i > WCET_j).
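As a concrete sketch of this setup (the plant and the gains are illustrative, not the models from the talk), the closed-loop family {A_i} can be built and checked as follows:

```python
import numpy as np

# Hypothetical 2-state plant (double-integrator-like, sampled)
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])

# Family of stabilizing state-feedback gains, ordered by performance:
# a higher index means better performance but a larger WCET.
K = [np.array([[-1.0, -1.5]]),
     np.array([[-4.0, -3.0]])]

# Closed-loop matrices A_i = A + B K_i
A_cl = [A + B @ Ki for Ki in K]
for i, Ai in enumerate(A_cl, 1):
    rho = max(abs(np.linalg.eigvals(Ai)))
    print(f"controller {i}: spectral radius = {rho:.3f}")
```

Each A_i has spectral radius below one, i.e., every controller in the family is individually stabilizing; the difficulty addressed in the talk is what happens when the scheduler switches among them.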

Scheduler Description
- Sampling instants: t_k = k T.
- Time allotted to the control task in the k-th period: τ_k.
- Worst-Case Execution Times: δ_1 ≤ δ_2 ≤ … ≤ δ_n (one per controller).
- Time map: τ_k ↦ the largest index i such that δ_i ≤ τ_k, i.e., the most complex controller that fits in the slot.

Scheduler Description: Stochastic Scheduler as an I.I.D. Process
A simple stochastic description of the random sequence {τ_k} is an i.i.d. process: p_i = Pr{ δ_i ≤ τ_k < δ_{i+1} }, i.e., at time t the time slot is such that all controllers up to the i-th, but no more complex controller, can be executed.
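A sketch of such an i.i.d. scheduler model (the WCETs and the slot distribution are made-up values):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sorted WCETs and an i.i.d. slot distribution:
# p[i] = Pr{ delta_{i+1} <= tau_k < delta_{i+2} }, i.e. exactly
# controllers 1..i+1 fit in the slot.
wcet = np.array([0.2, 0.5, 0.9])   # delta_1 <= delta_2 <= delta_3
p = np.array([0.5, 0.3, 0.2])

def sample_max_controller(n_steps):
    """Sample, for each period, the largest controller index the slot allows."""
    return rng.choice(len(wcet), size=n_steps, p=p) + 1

draws = sample_max_controller(10000)
freq = np.bincount(draws, minlength=4)[1:] / len(draws)
print("empirical slot distribution:", freq)
```

The empirical frequencies approach p, which is exactly the stationary information the 1-step stability condition below consumes.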

Scheduler Description: Stochastic Scheduler as a Markov Chain
A more general description models the slot sequence as a finite-state, discrete-time, homogeneous, irreducible, aperiodic Markov chain, with transition probability matrix P and steady-state probability vector π (π P = π).
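The steady-state vector can be computed as the left eigenvector of P for eigenvalue 1. A sketch with a hypothetical 3-state scheduler chain:

```python
import numpy as np

# Hypothetical 3-state scheduler chain; rows sum to 1.
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])

def stationary(P):
    """Steady-state vector pi with pi P = pi and sum(pi) = 1,
    via the left eigenvector of P for eigenvalue 1."""
    vals, vecs = np.linalg.eig(P.T)
    k = np.argmin(abs(vals - 1.0))
    pi = np.real(vecs[:, k])
    return pi / pi.sum()

pi = stationary(P)
print("steady state:", pi)
```

For an irreducible aperiodic chain this eigenvector is unique and strictly positive, so the normalization is well defined.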

Almost Sure Stability
Definition: the Markov Jump Linear System (MJLS) x(k+1) = A_{σ(k)} x(k) is exponentially AS-stable if there exists α > 0 such that, for every initial state x(0) and any initial distribution π_0, lim sup_{k→∞} (1/k) log ||x(k)|| ≤ −α almost surely.
Sufficient conditions [P. Bolzern, P. Colaneri, G. De Nicolao, CDC '04]:
- 1-step (average contractivity): Σ_i π_i log ||A_i|| < 0;
- m-step (lifted system). Theorem: the MJLS is exponentially AS-stable if and only if there exists m such that the m-step condition (average contractivity of the m-lifted system) holds.
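The 1-step condition is a one-line check once the closed-loop norms and the stationary distribution are known. A sketch with illustrative matrices (not the talk's models); note the condition can hold even when one mode is expanding:

```python
import numpy as np

# Hypothetical closed-loop matrices of a 2-controller family and a
# stationary scheduler distribution.
A1 = np.array([[1.05, 0.0], [0.0, 0.5]])   # cheap controller: slightly expanding
A2 = np.array([[0.6, 0.0], [0.0, 0.4]])    # expensive controller: contracting
pi = np.array([0.4, 0.6])

def one_step_contractive(mats, pi):
    """1-step average-contractivity sufficient condition:
    sum_i pi_i * log ||A_i||_2 < 0 (spectral norm)."""
    return sum(p * np.log(np.linalg.norm(A, 2)) for p, A in zip(pi, mats)) < 0

print(one_step_contractive([A1, A2], pi))
```

Here the expanding mode A1 is used 40% of the time, yet the average log-norm is negative, so the sufficient condition certifies AS-stability.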

Switching Policy: Preliminaries and Analysis
- The time map gives an upper bound on the index of the executable controller.
- The controller selected by the policy is computed, unless a preemption event forces a lower index.
- Switching policy map: chooses which controller index to execute, given the allowed bound.
- Examples:
  - Conservative policy (non-switching): always execute the minimal controller, which is always available;
  - Greedy policy: always execute the maximum allowed index (adequate only if the switched system is already AS-stable).
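The two baseline policies can be stated in a few lines (an illustrative sketch; the function names are mine, not the talk's):

```python
def conservative(max_allowed: int, n: int) -> int:
    """Always run the simplest controller: never switches, always fits."""
    return 1

def greedy(max_allowed: int, n: int) -> int:
    """Always run the best controller the current slot allows."""
    return min(max_allowed, n)

print([greedy(m, 3) for m in (1, 2, 5)])
```

The examples in the talk show both extremes failing: conservative is stable but underperforms, while greedy can destabilize the loop, which motivates the stochastic policy below.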

Switching Policy Synthesis: Problem Formulation
Problem: given the closed-loop family {A_i} and the invariant scheduler distribution π, find a switching policy such that the resulting system is an MJLS with a desired invariant probability distribution.
Constraints:
- the computational time allotted by the scheduler cannot be increased;
- the probability of the i-th controller can be increased only by reducing the probabilities of more complex controllers.
How can we build a switching policy ensuring the desired distribution?

Stochastic Policy
- Use an independent, conditioning Markov chain ξ with the same structure (number of states) as the scheduler chain.
- State ξ(k) = i means: in the next sampling interval at most the i-th controller is computed (if no preemption occurs).
- How does the conditioning chain interact with the scheduler's one?

Merging Markov Chains: Mixing
Theorem: consider two independent finite-state, homogeneous, irreducible, aperiodic Markov chains σ and ξ. The joint stochastic process (σ(k), ξ(k)) is a finite-state, homogeneous, irreducible, aperiodic Markov chain whose transition matrix is the Kronecker product of the two transition matrices.
Note: the extended chain has n² states.

Merging Markov Chains: Aggregating
- The goal is to produce a process with a desired stationary probability distribution of cardinality n.
- After mixing, use an aggregation function derived from the schedulability constraints: the aggregated process is s(k) = min(σ(k), ξ(k)).
- The i-th controller is executed if and only if:
  - ξ(k) = i and σ(k) ≥ i (ξ is the limiting controller), or
  - σ(k) = i and ξ(k) > i (i.e., preemption).
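Mixing and aggregation can be sketched numerically: for independent chains, the joint transition matrix is the Kronecker product, and the executed index on each pair state is the elementwise minimum (the chains here are hypothetical 2-state examples):

```python
import numpy as np

# Scheduler chain P_sigma and independent conditioning chain P_xi
# (illustrative 2-state values); the product chain on pairs (sigma, xi)
# has transition matrix kron(P_sigma, P_xi) and n^2 = 4 states.
Ps = np.array([[0.7, 0.3], [0.4, 0.6]])
Px = np.array([[0.5, 0.5], [0.2, 0.8]])
P_mix = np.kron(Ps, Px)

# Aggregation: the executed controller index is min(sigma, xi) --
# xi limits the controller, a smaller sigma means preemption cut it further.
pairs = [(s, x) for s in (1, 2) for x in (1, 2)]
executed = [min(s, x) for s, x in pairs]
print("rows sum to 1:", np.allclose(P_mix.sum(axis=1), 1))
print("aggregated indices per pair state:", executed)
```

The aggregated index is a deterministic function of the product chain's state, which is what allows the AS-stability tests to be applied on the extended chain.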

Merging Markov Chains: Aggregating (II)
Remark: the aggregated process is a function of the two chains and need not be a Markov chain itself. However, the state evolution of the jump linear system driven by s(k) is the same as that of an equivalent MJLS driven by the extended Markov chain (σ(k), ξ(k)), constructed by associating to each pair state the index min(σ, ξ) and hence the corresponding controlled system. Therefore the AS-stability conditions can be applied to the equivalent MJLS.

Markov Policy: 1-step contractive formulation
Anytime Problem (Linear Programming): find a probability vector π̄ such that
- Σ_i π̄_i log ||A_i|| < 0 (1-step average contractivity);
- Σ_i π̄_i = 1, π̄_i ≥ 0;
- the schedulability constraints hold: more complex controllers cannot be executed more often than the scheduler allows.
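A sketch of this LP with `scipy.optimize.linprog` (the log-norms and scheduler distribution are illustrative; encoding the schedulability constraint as tail-sum inequalities, sum_{j≥i} π̄_j ≤ sum_{j≥i} π_j, is my assumption formalizing the slide's "only by reducing the probabilities of more complex controllers"):

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative data: log spectral norms of three closed-loop matrices
# (cheap to expensive) and the scheduler's stationary distribution.
log_norms = np.array([0.05, -0.3, -0.8])   # only controller 1 is expanding
pi_sched = np.array([0.5, 0.3, 0.2])
n = 3

# Minimize the average log-norm subject to:
#   sum(pi_bar) = 1, 0 <= pi_bar <= 1,
#   tail constraints (assumed form of the schedulability constraints):
#   pi_bar_2 + pi_bar_3 <= pi_2 + pi_3,  pi_bar_3 <= pi_3.
A_ub = np.array([[0, 1, 1], [0, 0, 1]], dtype=float)
b_ub = np.array([pi_sched[1:].sum(), pi_sched[2:].sum()])
res = linprog(c=log_norms, A_ub=A_ub, b_ub=b_ub,
              A_eq=np.ones((1, n)), b_eq=[1.0], bounds=[(0, 1)] * n)
print("target distribution:", res.x, "avg log-norm:", log_norms @ res.x)
```

Any feasible point with a negative objective certifies 1-step average contractivity; minimizing the objective simply picks the most contractive achievable distribution.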

Example (Reprise)

Example: Furuta Pendulum. Regulation Problem, RMS comparison (figure). Markov policy; improvement: > 55%.

Markov Policy: m-step contractive formulation (I)
- A 1-step contractive solution may not exist, but an m-step solution always exists for some m, since the minimal controller is always executable.
- Look for a solution to the Anytime Problem for increasing m.
- Key idea: the switching policy supervises the controller choice so that some control patterns are preferred over others.

Markov Policy: m-step contractive formulation (II)
- Lifted scheduler chain (n^m states): states are strings of m symbols.
- Conditioning chain (n^m states, not lifted): bets on m-strings.
- Mixing and aggregating: same as the 1-step problem, with the elementwise minimum of the two strings.
- Switching policy: every m steps, a bet is placed in advance on an m-string of controllers.
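Why lifting helps can be seen numerically: with non-commuting (here, "swapped") modes, the 1-step average log-norm may be positive while the m-step one is negative. A sketch for an i.i.d. scheduler with illustrative matrices:

```python
import numpy as np
from itertools import product

# m-step average-contractivity check for an i.i.d. switching signal:
# E[log || A_{s_m} ... A_{s_1} ||] summed over all m-strings.
A = [np.array([[1.4, 0.0], [0.0, 0.2]]),   # mode 1
     np.array([[0.2, 0.0], [0.0, 1.4]])]   # mode 2 (axes swapped)
p = np.array([0.5, 0.5])

def m_step_avg_log_norm(A, p, m):
    total = 0.0
    for s in product(range(len(A)), repeat=m):
        M = np.eye(2)
        for i in s:
            M = A[i] @ M                    # product along the string
        prob = np.prod([p[i] for i in s])   # i.i.d. string probability
        total += prob * np.log(np.linalg.norm(M, 2))
    return total

print([round(m_step_avg_log_norm(A, p, m), 3) for m in (1, 2, 3)])
```

Each mode has norm 1.4, so the 1-step average is positive; but mixed strings like A1·A2 contract strongly, so the 2-step average is already negative. The enumeration over n^m strings also shows why the numerical aspects of the m-step solution matter (see Conclusions).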

Example (TORA) (I)

Example (TORA) (II): Regulation Problem, RMS comparison (figure). Not feasible. Conservative: stable but poor performance. Greedy.

Example (TORA) (III): Regulation Problem, RMS comparison (figure). Markov policy, 4-step solution; the most likely control pattern is highlighted.

Tracking and Bumpless Transfer
- In tracking tasks, performance can be severely impaired by switching between different controllers.
- Activating a higher-level controller abruptly reintroduces the dynamics of its re-activated (sleeping) states (low-to-high level switching).
- Bumpless-like techniques can help make transitions smoother.
- Practicality considerations must be taken into account when developing a bumpless transfer method.
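A minimal bumpless-transfer sketch, under my own assumptions (a PI controller and an integrator-reset scheme; the talk does not specify its method): when switching to a controller whose internal state slept, re-initialize that state so the new controller's first output matches the last applied input, avoiding a jump in u.

```python
class PIController:
    def __init__(self, kp, ki):
        self.kp, self.ki, self.integ = kp, ki, 0.0

    def output(self, err):
        return self.kp * err + self.ki * self.integ

    def step(self, err, dt):
        self.integ += err * dt
        return self.output(err)

    def bumpless_init(self, err, u_last):
        """Choose the integrator state so that output(err) == u_last."""
        self.integ = (u_last - self.kp * err) / self.ki

c_low = PIController(kp=1.0, ki=0.5)
c_high = PIController(kp=3.0, ki=1.5)
err, u_last = 0.2, 1.0          # values at the switching instant
c_high.bumpless_init(err, u_last)
print(round(c_high.output(err), 6))
```

The re-initialized controller reproduces the last control value exactly at the switching instant, then evolves with its own (faster) dynamics, which is the smoothing effect visible in the tracking plots.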

Example (F.P.) (V): Tracking Problem, RMS comparison (figure). Not feasible. Conservative: stable but poor performance.

Example (F.P.) (VI): Tracking Problem, reference and output comparison (figure). Markov policy; Markov bumpless policy.

Example (F.P.) (VII): Tracking Problem, greedy policy (maximum allowed controller at each step): unstable!

Example (F.P.) (VIII): Tracking Problem, RMS comparison (figure). Markov policy; Markov bumpless policy.

Example (TORA) (IV): Tracking Problem, RMS comparison (figure). Not feasible. Conservative: stable but poor performance. Greedy.

Example (TORA) (V): Tracking Problem, reference and output comparison (figure). Markov policy; Markov bumpless policy; greedy.

Example (TORA) (VI): Tracking Problem, RMS comparison (figure). Markov policy; Markov bumpless policy; greedy.

Conclusions
- Performance (not just stability) under switching must be considered for tracking.
- Ongoing work is addressing:
  - hierarchical design of (composable) controllers for anytime control;
  - numerical aspects of the m-step solution;
  - implementation on real systems.
