1 3/3/04 Update
Virtual Prototyping of Advanced Space System Architectures based on RapidIO
Principal Investigator: Dr. Alan D. George
OPS Graduate Assistants: David Bueno, Ian Troxel
RA Graduate Assistants: Chris Conger, Adam Leko
MS Group, HCS Research Laboratory
Department of Electrical and Computer Engineering, University of Florida
2 Topics
Review modeling progress
– RapidIO (RIO) library overview
– Traffic model overview
Discuss project output options
– Have several options in mind
– Need to clarify what is needed
Conclusions & questions
3 RIO Library Overview
RIO packets (data structures)
– Message Passing Logical Layer
– Parallel Physical Layer
RapidIO endpoint model
– Message Passing Logical Layer
– Common Transport Layer
– Parallel Physical Layer
RapidIO central memory switch model
– Parallel Physical Layer
– Transport Layer
Traffic models
Statistics gathering (see the sketch below)
– RIO request stats (latency and BW)
– RIO response stats (latency and BW)
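The statistics blocks are simple accumulators over observed packets. The sketch below illustrates the latency and bandwidth bookkeeping they perform, assuming each packet record carries a send timestamp, a receive timestamp, and a payload size; the class and field names here are illustrative, not the actual MLD library structures.

```cpp
// Illustrative sketch of the latency/bandwidth bookkeeping done by the
// RIO request/response statistics blocks; names and units are assumptions.
#include <cstddef>

struct PacketRecord {
    double sendTime;    // seconds, when the packet entered the fabric
    double recvTime;    // seconds, when the packet reached its destination
    std::size_t bytes;  // payload size in bytes
};

class RioStats {
public:
    void record(const PacketRecord& p) {
        totalLatency_ += p.recvTime - p.sendTime;
        totalBytes_   += static_cast<double>(p.bytes);
        if (count_ == 0 || p.sendTime < firstSend_) firstSend_ = p.sendTime;
        if (count_ == 0 || p.recvTime > lastRecv_)  lastRecv_  = p.recvTime;
        ++count_;
    }
    // Average end-to-end latency over all recorded packets (seconds).
    double averageLatency() const { return count_ ? totalLatency_ / count_ : 0.0; }
    // Effective bandwidth: total payload delivered over the observation window (bytes/s).
    double averageBandwidth() const {
        double window = lastRecv_ - firstSend_;
        return window > 0.0 ? totalBytes_ / window : 0.0;
    }
private:
    double totalLatency_ = 0.0;
    double firstSend_ = 0.0, lastRecv_ = 0.0;
    double totalBytes_ = 0.0;
    std::size_t count_ = 0;
};
```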
4 RIO Packets/Data Structures
Restart-from-retry packet control symbol
– Used to restart after a packet retry
RapidIO flow control symbol
– Positive or negative ACK of a packet
RapidIO Message Passing Logical Layer over Common Transport Layer over Parallel Physical Layer packet
– Implemented as a hierarchy of data structures within the MLD data structure editor, each layer adding the appropriate fields (see the layering sketch below)
RapidIO Message Passing Response packet
– End-to-end responses used in the Message Passing Logical Layer
Statistics data structures
– Not actual RIO structures, but used to measure average latency and bandwidth for RIO requests and responses
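To picture the layering idea (each data structure wrapping the one above it and adding its own fields), a rough C++ sketch is shown below. The field names are drawn loosely from the RapidIO message-passing, transport, and parallel physical layer definitions; they are approximations for illustration, not the exact MLD data structure editor entries.

```cpp
// Rough sketch of a layered RapidIO message-passing packet; field names are
// approximations of the spec, not the actual MLD data structures.
#include <cstdint>
#include <vector>

struct MessagePassingFields {          // Message Passing Logical Layer
    uint8_t  mbox;                     // target mailbox
    uint8_t  msgseg;                   // segment number within the message
    uint16_t ssize;                    // segment size code
    std::vector<uint8_t> payload;      // message payload
};

struct TransportFields {               // Common Transport Layer wraps the logical layer
    uint16_t destId;                   // destination device ID
    uint16_t srcId;                    // source device ID
    MessagePassingFields logical;
};

struct PhysicalFields {                // Parallel Physical Layer wraps the transport layer
    uint8_t  ackId;                    // used by link-level ACK/retry flow control
    uint8_t  priority;                 // one of the four RIO priority levels
    TransportFields transport;
    uint16_t crc;                      // link-level error check
};
```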
5 RapidIO Endpoint Model
Key Features
– Message Passing Logical Layer
– Common Transport Layer
– Parallel Physical Layer
– Receiver-controlled flow control
– Error detection and recovery (currently under development)
– Priority scheme for buffer management (see the acceptance-check sketch below)
Key Adjustable Parameters
– Packet assembly delay
– Packet disassembly delay
– Clock frequency
– Link width
– Input queue length
– Output queue length
– Four priority thresholds
  Each threshold determines the maximum number of packets that may already be in a buffer for a packet of the given priority to still be accepted
  Example: if the threshold for priority 0 packets is 4, an incoming priority 0 packet is rejected when 5 or more packets are currently in the input buffer
– Number of device ID bytes in the packet (affects packet size and the maximum number of devices in the system)
– Buffer memory copy delay per byte
Note: as there is no separate "link model", parameters such as clock frequency and link width are incorporated into the endpoint model.
[Figure: High-level Endpoint Model]
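The priority-threshold scheme amounts to a simple acceptance check on packet arrival. A minimal sketch is below, assuming four priority levels and a threshold array supplied as a model parameter (names are illustrative).

```cpp
// Minimal sketch of the endpoint's priority-threshold acceptance check.
// threshold[p] is the maximum number of packets that may already be queued
// for a priority-p packet to still be accepted (an adjustable model parameter).
#include <array>
#include <cstddef>

constexpr int kNumPriorities = 4;

bool acceptPacket(int priority,
                  std::size_t packetsInBuffer,
                  const std::array<std::size_t, kNumPriorities>& threshold) {
    // Example from the slide: threshold[0] == 4 means a priority-0 packet is
    // accepted with up to 4 packets queued and rejected with 5 or more.
    return packetsInBuffer <= threshold[priority];
}
```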
6 RapidIO Central Memory Switch Model
Key Features
– Selectable cut-through or store-and-forward routing
– Non-blocking architecture
– Routes packets based solely on destination ID (read from a routing table file) as per the RIO spec (see the lookup sketch below)
– RIO Common Transport Layer
– RIO Parallel Physical Layer
Key Adjustable Parameters
– Cut-through/store-and-forward behavior
– Average central memory read latency
– Average central memory write latency
– Additional switching latency
– Queue length per port
  To be changed to reflect a single pool of dynamically allocated memory; the parameter will become "central memory size"
– Link width, clock frequency, and other physical-layer parameters as detailed on the previous slide
– Priority threshold scheme may need to be changed depending on how the central memory switch handles traffic of different priorities
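Routing on destination ID alone reduces to a table lookup. The sketch below shows loading a routing table from a file and selecting the output port; the file format shown (one "destinationID outputPort" pair per line) is an assumption, not necessarily the model's actual format.

```cpp
// Sketch of destination-ID routing for the central memory switch.
// Assumed file format: one "destinationID outputPort" pair per line.
#include <cstdint>
#include <fstream>
#include <string>
#include <unordered_map>

using RoutingTable = std::unordered_map<uint16_t, int>;

RoutingTable loadRoutingTable(const std::string& path) {
    RoutingTable table;
    std::ifstream in(path);
    uint16_t destId;
    int outPort;
    while (in >> destId >> outPort) {
        table[destId] = outPort;   // later entries overwrite earlier duplicates
    }
    return table;
}

// Returns the output port for a packet, or -1 if the destination ID is unknown.
int routePacket(const RoutingTable& table, uint16_t destId) {
    auto it = table.find(destId);
    return it != table.end() ? it->second : -1;
}
```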
7 RIO Central Memory Switch Snapshot
Key
– Read routing table from file
– Insert source routing information
– Determine correct output port
– Physical layer components
– Re-route control symbols back to the input link partner
8 Traffic Models - Sources
Key Features
– Generic data source (data cube generator)
– Creates N RapidIO packets, uniformly distributed throughout the CPI (see the generator sketch below)
Key Adjustable Parameters
– Pulses
– Ranges
– Beams
– CPI
– Max payload size
– Size per element of data cube
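For the data cube source, the packet count N follows from the cube dimensions, element size, and maximum payload, and the send times are spread uniformly over the CPI. A sketch under those assumptions is below; the parameter names are illustrative.

```cpp
// Sketch of the data cube generator: computes the number of RapidIO packets
// needed for one data cube and spreads their send times uniformly over a CPI.
// Parameter names are illustrative; units are bytes and seconds.
#include <cstddef>
#include <random>
#include <vector>

struct CubeParams {
    std::size_t pulses, ranges, beams;
    std::size_t bytesPerElement;   // size per element of the data cube
    std::size_t maxPayload;        // max RapidIO payload per packet
    double cpi;                    // coherent processing interval, seconds
};

std::vector<double> generateSendTimes(const CubeParams& p, std::mt19937& rng) {
    std::size_t cubeBytes = p.pulses * p.ranges * p.beams * p.bytesPerElement;
    std::size_t numPackets = (cubeBytes + p.maxPayload - 1) / p.maxPayload;  // ceiling divide

    std::uniform_real_distribution<double> uniform(0.0, p.cpi);
    std::vector<double> sendTimes(numPackets);
    for (auto& t : sendTimes) t = uniform(rng);   // uniformly distributed over the CPI
    return sendTimes;
}
```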
9 Traffic Models – Processors
Key Features
– Generic vector processor
  Buffers N packets
  Adds processing delay
  Resends with a configurable distribution
– Can be used to change traffic distributions at various places in the model
– Can also be used to model generic processors (set group size = 1)
Key Adjustable Parameters
– Number of packets to buffer
– Processing group delay
– % of packets to retransmit
– Output traffic distribution: uniform, Poisson, or normal (see the sketch below)
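One way to picture the "resend with configurable distribution" behavior: after a group of packets is buffered and the processing delay elapses, inter-departure gaps are drawn from the chosen distribution. The sketch below follows that reading; the parameter names, the use of exponential gaps to approximate Poisson output, and the normal distribution's standard deviation are all assumptions.

```cpp
// Sketch of the generic processor traffic model: after buffering a group of
// packets and applying a processing delay, a configurable fraction is resent
// with inter-departure times drawn from a chosen distribution.
#include <algorithm>
#include <cstddef>
#include <random>
#include <vector>

enum class Dist { Uniform, Poisson, Normal };

struct ProcParams {
    std::size_t groupSize;        // number of packets to buffer (1 = generic processor)
    double processingDelay;       // seconds added once the group is buffered
    double retransmitFraction;    // fraction of packets resent (0.0 - 1.0)
    Dist   outputDist;            // output traffic distribution
    double meanGap;               // mean inter-departure gap, seconds (assumed parameter)
};

// Returns departure times, relative to the arrival of the last packet in the group,
// for the packets that are retransmitted.
std::vector<double> scheduleDepartures(const ProcParams& p, std::mt19937& rng) {
    std::size_t numOut = static_cast<std::size_t>(p.groupSize * p.retransmitFraction);
    std::vector<double> departures;
    departures.reserve(numOut);

    std::uniform_real_distribution<double> uni(0.0, 2.0 * p.meanGap);
    std::exponential_distribution<double>  expo(1.0 / p.meanGap);       // Poisson process => exponential gaps
    std::normal_distribution<double>       norm(p.meanGap, 0.25 * p.meanGap);  // assumed std. dev.

    double t = p.processingDelay;
    for (std::size_t i = 0; i < numOut; ++i) {
        double gap = 0.0;
        switch (p.outputDist) {
            case Dist::Uniform: gap = uni(rng);  break;
            case Dist::Poisson: gap = expo(rng); break;
            case Dist::Normal:  gap = std::max(0.0, norm(rng)); break;
        }
        t += gap;
        departures.push_back(t);
    }
    return departures;
}
```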
10 Preliminary System-Level Test Model
11 Project Output Options
Excel spreadsheet
– Create analytical models from analysis of MLD models and simulations
– Advantages: Excel spreadsheets are easy to use
– Disadvantages: least accurate option available
  Difficult to make analytical models accurate
  Cannot model transient conditions
  Should be reserved for worst-case analysis
Database-backed program
– Run trade studies on specific ranges of parameters
– Store the results of each run in a database
– Advantages: should be easy to use; results can be obtained quickly
– Disadvantages: can be very time-consuming for the lab to run the trade studies; results are limited by the ranges of parameters chosen
12 Project Output Options (Recommended)
MLD models + documentation
– Deliver both the MLD models and extensive documentation on how to use them
– Advantages: no limitations on parameters; turn-key solution; can change underlying assumptions or data flow easily
– Disadvantages: MLD learning curve (although mitigated by documentation); MLD license needed (Honeywell evaluation of the tool is in progress); simulation time as compared to pre-run simulations or an Excel spreadsheet
MLD models + front-end tool
– Possible to "script" MLD runs through a front-end tool we create, accomplished via the ptclsh tool that comes with MLD
– The front-end tool would take care of setting up MLD parameters and coordinating runs
– Advantages: same as MLD models + documentation, but the MLD learning curve does not apply; MLD models can still be modified later if necessary
– Disadvantages: same as MLD models + documentation, and an MLD license is still needed
We can make a site visit to walk everyone through the models and the chosen output system prior to the conclusion of the output-system deliverable
13 Conclusions
Building blocks for the RIO library are almost complete
Still to do:
– Alter the switch model to support a single dynamically allocated central memory
– Project output definition
  Need guidance on the options listed in this slide set
Next phases:
– System architecture specifications
– Complete the traffic model
  The generic traffic model components already created will support several different parameters
  Need to determine traffic periodicity, distribution, and data partitioning (see questions on the next slide)
– System architecture modeling
14 Questions
Switch modeling
– 1) Are we interested in transmitter-controlled flow control?
– 2) Is all traffic in the system of equal priority? Are the 4 RIO physical-layer packet priority levels going to be important?
Traffic modeling
– 1) Is the data cube sent directly to memory first, or is it sent to the processing nodes first?
  See diagrams on the HCS Honeywell website, "Initial GMTI traffic model options"
– 2) Data partitioning scheme: Is the incoming data cube split across a single dimension before being sent out to each processing element? If not, how is data partitioned among processors? Is any data repartitioning needed between phases of the GMTI algorithm (such as the corner turn)?
– 3) Processor partitioning scheme(s): Do processors handle all phases of the GMTI algorithm (pulse compression, Doppler processing, etc.)? How many processors per board? If > 1, what is the onboard inter-processor communication method (RapidIO, …)? Do all processors on the same board share one RapidIO endpoint? What different schemes is Honeywell interested in exploring?