1
Enhancing the PCI Bus to Support Real-Time Streams Scottis, M.G.; Krunz, M.; Liu, M.M.-K. Dept. of Electr. & Comput. Eng., Arizona Univ., Tucson, AZ, USA 1999 IEEE International Performance, Computing and Communications Conference Yuan Ze University, Systems Laboratory, 楊登傑, 2000.03.13
2
Outline Introduction PCI Overview Real-Time Scheduling Theory The EPCI Local Bus Simulation Study Conclusions
3
Introduction The paper presents an access scheduling scheme for Real-Time Streams (RTS) over the PCI bus. It uses the Rate Monotonic Scheduling (RMS) algorithm to guarantee the timing QoS for RTS over the PCI bus. The authors define the effective bus utilization (EBU) as the worst-case bus utilization. PCI Overview: gives a brief overview of the PCI architecture.
4
Introduction (cont.) Real-Time Scheduling Theory: presents the real-time scheduling theory from which the EPCI bus model is derived. The EPCI Local Bus: introduces the EPCI bus architecture. Simulation Study: gives some simulation results for the proposed architecture. Conclusions: gives the concluding remarks.
5
PCI Overview To allow the PCI bus to work concurrently with the CPU bus, a new bus (the CPU local bus) is inserted between the CPU and the high-speed PCI bus. The CPU can access the level-two (L2) cache or main memory while the PCI bus is busy transferring data between its devices. There are two types of devices: bus masters and target devices. A bus master must arbitrate for each access it performs on the bus, and a target device can only respond to a bus master's request.
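The master/target split above can be sketched as a toy model (the class and device names here are hypothetical, not from the paper): bus masters assert a request line, the central arbiter grants the bus to one of them at a time, and targets never request.

```python
class CentralArbiter:
    """Toy model of PCI central arbitration: masters raise REQ#,
    the arbiter asserts GNT# to one of them; target devices never
    request -- they only respond to the granted master."""

    def __init__(self):
        self.requests = []          # masters currently asserting REQ#

    def request(self, master):
        if master not in self.requests:
            self.requests.append(master)

    def grant(self):
        # plain FIFO for illustration; real arbiters use rotating
        # or priority schemes (as the EPCI design does later)
        if not self.requests:
            return None
        return self.requests.pop(0)
```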
6
Real-Time Scheduling Theory There are two kinds of real-time scheduling on a shared medium: preemptive and non-preemptive. In preemptive scheduling a higher-priority link can preempt a lower-priority link, whereas in non-preemptive scheduling there is no preemption. Preemptive scheduling is further divided into dynamic-priority and static-priority scheduling. Dynamic priority can change a link's priority at run time and achieves higher schedulability than static priority, but dynamic-priority schemes are more complex, harder to implement, and require additional implementation overhead. For this reason the authors consider only static priority in this paper.
7
Static Priority Scheduling The RMS algorithm schedules a set of periodic links by assigning higher priorities to links with shorter periods. Given a set of n periodic links l1, l2, …, ln ordered by increasing period (T1 ≤ T2 ≤ … ≤ Tn), the RMS algorithm assigns priorities in decreasing order (P1 ≥ P2 ≥ … ≥ Pn), where Pi > Pj implies that li has a higher priority than lj (i ≠ j). A set of n independent RTS links l1, l2, …, ln is schedulable using the RMS algorithm if the following inequality is met for 1 ≤ i ≤ n:
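The inequality itself was lost in extraction; it is presumably the classic Liu and Layland sufficient condition, C1/T1 + … + Ci/Ti ≤ i(2^(1/i) − 1) for each i, where Ci is the bus time link li needs per period Ti. A minimal sketch of that check, under that assumption (the function name is ours):

```python
def rms_schedulable(links):
    """links: list of (C, T) pairs -- bus time C needed per period T --
    sorted by increasing period, i.e. in RMS priority order.
    Returns True if the Liu & Layland sufficient condition
    sum_{j<=i} C_j/T_j <= i * (2**(1/i) - 1) holds for every i."""
    util = 0.0
    for i, (c, t) in enumerate(links, start=1):
        util += c / t
        if util > i * (2 ** (1 / i) - 1):
            return False
    return True
```

Note this is only sufficient: a link set failing the bound may still be schedulable under an exact test.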
8
The EPCI Local Bus The hardware includes the Central Arbiter (CA), as used in the current PCI bus system, and the application-specific EPCI devices connected to the EPCI bus. The software includes the device drivers for the corresponding EPCI devices, the user application programs on top of the OS, and the Scheduling Manager (SM), which is part of the OS and schedules real-time traffic on the bus. The EPCI CA has programmable priorities assigned to each request-grant pair, and these can be changed by the SM at any time. Each EPCI device is required to have a buffer to match the rate at which the device produces or consumes data with the rate at which it can move data across the bus.
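How the SM programs the CA's request-grant priorities is not detailed on this slide; a minimal sketch, assuming the SM simply applies the RMS rule (shorter period → higher priority) to a table of device periods (names and function are hypothetical):

```python
def assign_rms_priorities(devices):
    """devices: dict mapping device name -> stream period.
    Returns name -> priority level (0 = highest), with shorter
    periods getting higher priority, as the SM would program
    into the EPCI central arbiter's request-grant pairs."""
    ordered = sorted(devices, key=lambda name: devices[name])
    return {name: prio for prio, name in enumerate(ordered)}
```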
9
The EPCI Local Bus (cont.)
10
Simulation Study The authors assume four RTS links l1, l2, l3, l4, labeled such that T1 ≤ T2 ≤ T3 ≤ T4, and the RMS algorithm is used to assign the priorities. This means that link l1 has the highest priority and link l4 the lowest. With four links there are 4! = 24 possible ways to assign priorities to them. In Figure 4, only four out of the twenty-four possible assignments are schedulable.
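The enumeration in Figure 4 can be reproduced in spirit with standard response-time analysis; the (C, T) values below are made up, so the number of feasible assignments will generally differ from the paper's four.

```python
import itertools
import math

def schedulable(order):
    """order: links as (C, T) tuples, highest priority first.
    Exact test via standard fixed-point response-time analysis
    (an assumed stand-in for the paper's Figure 4 computation)."""
    for i, (c, t) in enumerate(order):
        r = c
        while True:
            # worst-case response: own bus time plus preemption
            # by every higher-priority link
            nxt = c + sum(math.ceil(r / tj) * cj for cj, tj in order[:i])
            if nxt > t:
                return False          # misses its deadline
            if nxt == r:
                break                 # fixed point reached
            r = nxt
    return True

links = [(1, 4), (1, 5), (2, 6), (1, 10)]   # hypothetical (C, T) values
feasible = [p for p in itertools.permutations(links) if schedulable(p)]
```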
11
Simulation Study (cont.)
12
Link overhead and blocking are the key parameters in the EPCI scheduling model that might degrade schedulability. Large link overhead or blocking severely degrades schedulability and can drive a link to miss its deadline. Both values depend on the value of the internal latency timer (ILT).
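A toy model (entirely our assumption, not the paper's formulas) makes the trade-off concrete: a link needing C units of bus time per period T must split it into ceil(C/ILT) transactions, each paying a fixed per-transaction overhead, while a pending request can be blocked for up to one ILT window (capped by the largest burst).

```python
import math

def effective_bus_utilization(links, ilt, overhead):
    """links: (C, T) pairs, C units of bus time per period T.
    Toy EBU: per-transaction overhead grows as ILT shrinks,
    worst-case blocking grows as ILT grows."""
    util = sum((c + math.ceil(c / ilt) * overhead) / t for c, t in links)
    blocking = min(ilt, max(c for c, _ in links))  # one held-bus window
    return util + blocking / min(t for _, t in links)
```

With made-up numbers, a small ILT inflates overhead and a large ILT inflates blocking, reproducing the three-region (U-shaped) behavior the slides describe.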
13
Simulation Study (cont.) Fig. 5: In the first region (ILT < 55), the ILT value is small and the bus master has to give up the bus early in the transaction, transferring only a small amount of data. This causes the link overhead to dominate, resulting in a high EBU. Fig. 6: For small ILT values, the link overhead is very high. As the ILT value increases, the link overhead decreases and the link set becomes schedulable.
14
Simulation Study (cont.) In the second region (55 < ILT < 175), the ILT has a moderate value and the EBU is at its lowest. This is the optimum region, where neither the link overhead nor the blocking dominates. In the third region (ILT > 175), the ILT value is high and a bus master is allowed to keep the bus for long periods even though other, higher-priority links might request it, causing high blocking.
15
Simulation Study (cont.) Fig. 7: Link l4 has zero blocking, since no lower-priority link can block it. Link l3 can be blocked only by link l4, but l4 has a small bus-time requirement, which is exhausted before its ILT expires. Links l1 and l2 can be blocked by l3 and l4; because l3 has a large bus-time requirement, l1 and l2 have to wait for l3's ILT to expire. This results in high blocking for l1 and l2 at high ILT values.
16
Conclusions In this paper, the authors present a bus management scheme that determines the schedulability of a set of real-time links over the PCI bus. They also describe a bus scheduling model, based on the rate monotonic scheduling (RMS) algorithm, that a priori guarantees the schedulability of a given link set.