1
Overview of PTIDES Project
Jia Zou, Slobodan Matic, Edward Lee, Thomas Huining Feng, Patricia Derler
University of California, Berkeley
2
Cyber Physical Systems:
Reliable and Evolvable Networked Time-Sensitive Systems, Integrated with Physical Processes
3
CPS Requirements – Printing Press
- Application aspects: local (control), distributed (coordination), global (modes)
- Open standards (Ethernet); synchronous, time-triggered; IEEE time-synchronization protocol
- High speed, high precision: speed 1 inch/ms, precision 0.01 inch -> time accuracy 10 µs (see the worked conversion below)
- Example: Bosch-Rexroth printing press
- Orchestrated networked resources built with sound design principles on suitable abstractions: DETERMINISM, TIMED SEMANTICS
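The 10 µs figure follows from dividing the required positional precision by the web speed; the slide states the result, and the arithmetic (not shown on the slide) is:

```latex
\Delta t \;=\; \frac{\text{precision}}{\text{speed}}
        \;=\; \frac{0.01\ \text{inch}}{1\ \text{inch/ms}}
        \;=\; 0.01\ \text{ms} \;=\; 10\ \mu\text{s}
```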
4
PTIDES:
[Tool-chain diagram: a Ptides Model, built from a Software Component Library, feeds (i) a Code Generator that, together with the HW Platform, produces PtidyOS Code, (ii) a Mixed Simulator and HW-in-the-Loop Simulator that use Plant and Network Models, and (iii) Analysis: Causality Analysis, Program Analysis, and Schedulability Analysis.]
5
PTIDES Model
Programming Temporally Integrated Distributed Embedded Systems
- Based on the discrete-event model of computation
- Event processing is in timestamp order
- Deterministic under simple causality conditions: fixed-point semantics, super-dense time
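A minimal sketch (not the Ptolemy II or PtidyOS implementation) of what timestamp-order event processing looks like; `Event` and its `actor` reaction callback are illustrative names:

```python
import heapq
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass(order=True)
class Event:
    timestamp: float                        # model-time tag; orders the queue
    value: Any = field(compare=False)
    actor: Callable = field(compare=False)  # reaction to invoke on this event

def run(events: list) -> None:
    """Process events strictly in timestamp order (discrete-event semantics)."""
    heapq.heapify(events)
    while events:
        e = heapq.heappop(events)           # smallest timestamp first
        for out in e.actor(e):              # a reaction may emit new events
            heapq.heappush(events, out)
```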
6
Causality Interface
- Software components are actor-oriented
- All actors are reactive: they produce output event(s) by consuming input event(s), or, like sensors, they react to the physical environment
- The interface is represented by δ, the minimum model-time delay from an input to an output: an output event with timestamp τ' produced in reaction to an input event with timestamp τ satisfies τ' ≥ τ + δ
- Compositionality properties: min-plus algebra (see the sketch below)
[Figure: an input event with timestamp τ enters Actor A, which has minimum delay δ; the output event's timestamp τ' satisfies τ' ≥ τ + δ]
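The min-plus compositionality can be illustrated with a small sketch: delays add along a chain of actors, and the guaranteed minimum delay into a port fed by several paths is the minimum over those paths. The two-path topology and the numbers are made up for illustration:

```python
INF = float("inf")

def series(*deltas: float) -> float:
    """Serial composition of causality interfaces: minimum delays add."""
    return sum(deltas)

def merge(*deltas: float) -> float:
    """Several paths reaching the same port: the guaranteed minimum
    model-time delay is the smallest of the path delays."""
    return min(deltas, default=INF)

# Example: Sensor --d1--> Merge --d--> Actuator and Network --d2--> Merge --d--> Actuator
d1, d2, d = 2.0, 5.0, 1.0                                  # illustrative delays
delta_to_actuator = merge(series(d1, d), series(d2, d))    # = min(d1, d2) + d
assert delta_to_actuator == 3.0
```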
7
Model vs. Physical Time
At sensors and actuators, model time (τ) is related to physical time (t):
- At a sensor, the event enters the system at a physical time no earlier than its timestamp: t ≥ τ
- At an actuator, the event must be delivered no later than its timestamp: t ≤ τ
[Figure: model-time axis (τ1 … τ4) aligned with a physical-time axis (t1 … t4) for an event at input i4 with sensor latency bound d0]
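A hedged sketch of the two rules, assuming a host-side clock stands in for the platform clock and that `sample` and `write_output` are hypothetical I/O callbacks:

```python
import time

def sensor_read(sample) -> tuple:
    """Tag a measurement with the physical time at which it was taken, so the
    resulting event enters the software only at physical times t >= tau."""
    tau = time.monotonic()        # stand-in for the platform clock
    return sample(), tau

def actuate(value, tau: float, write_output) -> bool:
    """Hold the output until physical time reaches the timestamp; the event
    must reach the actuator while t <= tau, otherwise the deadline is missed."""
    if time.monotonic() > tau:
        return False              # too late: t <= tau already violated
    while time.monotonic() < tau:
        pass                      # busy-wait stand-in for a hardware timer
    write_output(value)
    return True
```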
8
Single Processor PTIDES Example
- Bounded sensor latency d0: an event produced by the sensor with timestamp τ enters the software at a physical time t with τ ≤ t ≤ τ + d0; the actuator still requires t ≤ τ
[Figure: event e2 with timestamp τ2 arrives at input i2 at physical time t2; input i4 is fed by a sensor with latency bound d0]
9
Single Processor PTIDES Example
- The sensor input satisfies t ≥ τ and t ≤ τ + d0; the actuator requires t ≤ τ
- Event e2 with timestamp τ2 is safe to process once physical time t > τ2 + d0: after that instant the sensor can no longer deliver an event with a timestamp smaller than τ2
[Figure: the same timeline with τ2 + d0 marked on the physical-time axis]
10
Single Processor PTIDES Example
- At physical time t2, the clock has only passed τ1 + d0, not τ2 + d0, so an event with a timestamp smaller than τ2 could still arrive; e2 remains unsafe until t > τ2 + d0
[Figure: the same timeline with τ1 + d0 marked on the physical-time axis]
(A one-line version of this test appears in the sketch below.)
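The safe-to-process rule from these three slides reduces to a one-line check; `d0` is the sensor latency bound from the slides:

```python
def safe_to_process(tau: float, now: float, d0: float) -> bool:
    """With sensor latency bounded by d0, any event the sensor can still inject
    at physical time `now` carries a timestamp >= now - d0, so an event with
    timestamp tau can no longer be preempted by an earlier one once
    now > tau + d0."""
    return now > tau + d0

# e2 = (v2, tau2) from the slides becomes safe once the platform clock passes tau2 + d0
```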
11
Distributed PTIDES Example
- Local event-processing decisions rely on: bounded communication latency (d0); distributed platforms time-synchronized with bounded error (e)
- The event with timestamp τ at the Merge actor cannot be rendered unsafe by events from outside the platform once t > τ + do2 + e - d2
[Figure: Sensor (latency bound do1, model-time delay d1) holding events τ1, τ2 and Network Interface (latency bound do2, model-time delay d2) holding events τ3, τ4 both feed the Merge actor, where event τ awaits processing; Merge drives the Actuator through output o3]
12
Distributed PTIDES Example
- Local event-processing decisions rely on: bounded communication latency (d0); distributed platforms time-synchronized with bounded error (e)
- An event with timestamp τ1 at the sensor may result in a future event with timestamp τ1' ≥ τ1 + d1 at the Merge actor
[Figure: Sensor (latency bound do1, model-time delay d1) holding event τ1 and Network Interface (latency bound do2, model-time delay d2) holding events τ3, τ4 both feed the Merge actor, where event τ awaits processing; Merge drives the Actuator through output o3]

Speaker notes: Motivated by the examples above, we developed a general execution strategy that defines when events are safe to process. The event of interest, here the event with timestamp τ at the Merge actor, is safe to process if Merge will not later receive an event with a smaller timestamp at any of its input ports. The strategy has two parts. The first, as previously explained, concerns actors that communicate with the outside of the platform, such as sensors and network interfaces: to guarantee that they cannot produce events that render τ unsafe, τ plus an offset is compared against the current physical time of the system. The second part checks whether events that have already arrived on the same platform as τ can make it unsafe to process τ, which is easy given the minimum model-time delays between actors: if an event with timestamp τ1 resides on the same platform as τ, and the minimum model-time delay from where τ1 resides to where τ resides is δ1, then any future event arriving at the Merge actor as a result of τ1 has a timestamp of at least τ1 + δ1; thus if τ1 + δ1 ≥ τ, τ1 cannot render τ unsafe. This general strategy is rather expensive, especially the second part, which must examine every upstream event on the same platform as τ, but under certain assumptions it can be greatly simplified.
13
General Execution Strategy
An event is safe to process if no other event may render it unsafe:
- Events from outside of the platform -> clock test: an event with timestamp τ cannot be rendered unsafe by outside events once t > τ + do2 + e - d2 (here e is the clock-synchronization error bound)
- Events within the same platform -> model-delay test: for all events already on the platform, τi + di ≥ τ
[Figure: same Sensor / Network Interface / Merge / Actuator topology as the previous slide]
(See the sketch below for both tests.)
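A sketch of the two tests as stated on the slide; parameter names (`d_o`, `e`, `d`, `pending`) follow the slide's notation and are not an actual PtidyOS API:

```python
def clock_test(tau, now, d_o, e, d):
    """Outside the platform: with network latency bounded by d_o, clocks
    synchronized within e, and minimum model-time delay d from the network
    interface to the port of interest, external traffic can no longer render
    an event with timestamp tau unsafe once now > tau + d_o + e - d."""
    return now > tau + d_o + e - d

def model_delay_test(tau, pending):
    """Inside the platform: an already-present event (tau_i, d_i), with d_i the
    minimum model-time delay from where it sits to the port of interest, is
    harmless if tau_i + d_i >= tau."""
    return all(tau_i + d_i >= tau for tau_i, d_i in pending)

def safe_to_process(tau, now, d_o, e, d, pending):
    """An event is safe to process when both tests pass."""
    return clock_test(tau, now, d_o, e, d) and model_delay_test(tau, pending)
```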
14
What Did We Gain?
- First point: deterministic data outputs. The safe-to-process analysis guarantees that the Merge actor processes the event streams e1 = (v1, τ1), e2 = (v2, τ2), … in timestamp order, so the merged output does not depend on arrival times.
- Second point: deterministic timing delay from sensor to actuator (sensor: t ≥ τ with latency bound d0; actuator: t ≤ τ).
[Figure: two input streams, each passing a safe-to-process analysis, enter a Merge actor with delay δ; below, the single-processor sensor-to-actuator timeline with latency bound d0 and timestamps τ1, τ2]
15
What’s More… Schedulability analysis
Third Point: Decoupling of design from hardware platform Schedulability analysis
16
PTIDES:
[Tool-chain diagram: a Ptides Model, built from a Software Component Library, feeds (i) a Code Generator that, together with the HW Platform, produces PtidyOS Code, (ii) a Mixed Simulator and HW-in-the-Loop Simulator that use Plant and Network Models, and (iii) Analysis: Causality Analysis, Program Analysis, and Schedulability Analysis.]
17
Schedulability Analysis
- Requires WCETs of the software components plus event models
Three cases (a rough sketch of the first case appears below):
- Zero event-processing-time assumption (feasibility test): if program P fails, P will not satisfy its constraints on any hardware
- No resource sharing (an event is processed as soon as it is safe): if P fails, P may still satisfy its constraints on other hardware
- Resource sharing (a safe event is processed according to a scheduling algorithm): if P fails, P does not satisfy its constraints under this implementation (hardware and scheduling algorithm)
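A rough, illustrative reading of the first case only (this is an interpretation under stated assumptions, not the analysis used in the project):

```python
def feasible_with_zero_processing_time(d_sensor, d_network, sync_error, model_delay):
    """Set every execution time to zero and ask whether an event sampled with
    latency <= d_sensor and forwarded with latency <= d_network (clocks
    synchronized within sync_error) can still reach the actuator by its
    timestamp deadline. That requires the model-time delay along the path to
    cover the platform latencies; if it does not, no hardware choice can help."""
    return model_delay >= d_sensor + d_network + sync_error
```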
18
PTIDES Scheduler Implementations
- Two-layer execution engine: event coordination (safe-to-process) and event scheduling (prioritizing among safe events)
- Earliest Deadline First (EDF) foundation: EDF is optimal with respect to feasibility
- Deadline is based on the path from the input port to the actuator: for an event e1 = (v1, τ1) entering Actor A with minimum model-time delay δ to the Actuator, Deadline(e1) = τ1 + δ
[Figure: e1 = (v1, τ1) -> Actor A (δ) -> Actuator]
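A small sketch of the two pieces named on this slide: the deadline computation Deadline(e1) = τ1 + δ and an EDF-ordered ready queue. The function names are illustrative, not the PtidyOS API:

```python
import heapq, itertools

_seq = itertools.count()   # tie-breaker so equal deadlines never compare events
ready = []                 # EDF priority queue of (deadline, seq, event) tuples

def deadline(tau: float, delta_to_actuator: float) -> float:
    """Deadline(e1) = tau1 + delta: timestamp plus the minimum model-time delay
    along the path from the input port to the actuator."""
    return tau + delta_to_actuator

def release(event, tau: float, delta_to_actuator: float) -> None:
    """Layer 1 has decided the event is safe; enqueue it by deadline."""
    heapq.heappush(ready, (deadline(tau, delta_to_actuator), next(_seq), event))

def next_to_run():
    """Layer 2: among safe events, pick the earliest deadline first."""
    return heapq.heappop(ready)[2] if ready else None
```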
19
PTIDES:
[Tool-chain diagram: a Ptides Model, built from a Software Component Library, feeds (i) a Code Generator that, together with the HW Platform, produces PtidyOS Code, (ii) a Mixed Simulator and HW-in-the-Loop Simulator that use Plant and Network Models, and (iii) Analysis: Causality Analysis, Program Analysis, and Schedulability Analysis.]
20
PtidyOS
- Lightweight real-time operating system (RTOS)
- Software components (actors) are "glued together" by a code generator into a single executable
- Scheduler combines EDF with PTIDES: events are processed in deadline order
- Interrupt-driven: all execution is done within interrupt service routines (ISRs), and interrupts are reentrant
- Experimenting with a Luminary board with IEEE 1588 support

Speaker notes: We are currently implementing the execution strategies described above as the scheduler of a real-time operating system we call PtidyOS. Unlike conventional off-the-shelf RTOSes such as VxWorks, PtidyOS is part of a tool chain that includes a code generator, which glues software components together with the scheduler into one executable. We are implementing strategies A, B, and C as schedulers in PtidyOS, and we are also experimenting with a scheduler that combines PTIDES semantics with an earliest-deadline-first scheduling scheme; the goal is to leverage EDF's optimality with respect to feasibility together with the timed semantics of PTIDES. EDF, as the name suggests, requires the event with the smallest deadline to be processed first. PtidyOS provides this behavior by making extensive use of interrupts: all event processing is done within interrupt service routines, and all interrupts are assumed able to interrupt one another; although this is usually not provided by the underlying platform, it can be achieved through stack manipulation. For lack of time, we will not go into how the scheduling algorithm interacts with this interrupt mechanism, but you are welcome to attend our poster session tonight. We currently plan to implement the scheduler on a set of Luminary boards with hardware-assisted IEEE 1588 support, which provides time synchronization across distributed platforms, an underlying assumption of our distributed system.
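A host-side sketch of one dispatch step combining the two layers (the real PtidyOS scheduler runs inside reentrant ISRs, which this Python sketch does not model); `is_safe`, `deadline_of`, and `process` are placeholder callbacks standing in for the coordination layer, the deadline computation, and the generated actor code:

```python
def dispatch(pending, now, is_safe, deadline_of, process):
    """Keep only the events that pass the safe-to-process test, fire the one
    with the earliest deadline, and append whatever events its reaction
    produces. Returns the fired event, or None if nothing is safe yet."""
    safe = [ev for ev in pending if is_safe(ev, now)]
    if not safe:
        return None
    ev = min(safe, key=deadline_of)
    pending.remove(ev)
    pending.extend(process(ev))      # a reaction may emit downstream events
    return ev
```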
21
PTIDES Program Design Workflow
[Workflow diagram spanning the HW Platform and PtidyOS]