1
A Special-Purpose Processor System with Software-Defined Connectivity
Benjamin Miller, Sara Siegal, James Haupt, Huy Nguyen and Michael Vai
MIT Lincoln Laboratory
22 September 2009
This work is sponsored by the Navy under Air Force Contract FA8721-05-C-0002. Opinions, interpretations, conclusions and recommendations are those of the authors and are not necessarily endorsed by the United States Government.
2
Outline
Introduction
System Architecture
Software Architecture
Initial Results and Demonstration
Ongoing Work/Summary
3
Why Software-Defined Connectivity?
Modern ISR, COMM, and EW systems need to be flexible
– Change hardware and software in theatre as conditions change
– Technological upgrade
– Various form factors
Want the system to be open
– Underlying architecture specific enough to reduce redundant software development
– General enough to be applied to a wide range of system components, e.g., from different vendors
Example: reactive electronic warfare (EW) system
– Re-task components as environmental conditions change
– Easily add and replace components as needed before and during a mission
4
Special Purpose Processor (SPP) System
System representative of advanced EW architectures
– RF and programmable hardware and processors, all connected through a switch fabric
Enabling technology: bare-bones, low-latency pub/sub middleware
[Figure: antenna interface with LNA and high-power amplifier (HPA) feeding RF distribution to receivers (Rx 1…Rx R) and transmitters (Tx 1…Tx T); FPGAs (FPGA 1…FPGA M) and processors (Proc. 1…Proc. N) connected through a switch fabric; the Thin Communications Layer (TCL) middleware, with configuration and resource managers, configures backplane connections and manages inter-process connections among the processes (1…P) running the data processing algorithms]
5
Mode 1: Hardwired
Hardware components physically connected
Connections through backplane are fixed (no configuration management)
No added latency, but inflexible
[Figure: connections from processors and FPGA to Rx/Tx hardware are fixed through the switch fabric; the TCL middleware, with configuration and resource managers, handles only inter-process connections]
6
Mode 2: Pub-Sub
Everything communicates through the middleware
– Hardware components have on-board processors running proxy processes for data transfer
Most flexible, but there will be overhead due to the middleware
[Figure: Rx, Tx, and FPGA boards each carry an on-board processor running a proxy process; all traffic flows through the TCL middleware and switch fabric]
7
Mode 3: Circuit Switching
Configuration manager sets up all connections across the switch fabric
May still be some co-located hardware, or some hardware that communicates via a processor through the middleware
Overhead only incurred during configuration
[Figure: the configuration manager in the TCL middleware establishes direct switch-fabric circuits among processors, FPGAs, receivers, and transmitters]
8
Today's Presentation
TCL middleware developed to support the SPP system
– Essential foundation
Resource Manager sets up (virtual) connections between processes
[Figure: SPP system diagram with the TCL middleware and its resource manager highlighted]
9
Outline
Introduction
System Architecture
Software Architecture
Initial Results and Demonstration
Ongoing Work/Summary
10
System Configuration
3 COTS boards connected through a VPX backplane
– 1 single-board computer, dual-core PowerPC 8641
– 1 board with 2 Xilinx Virtex-5 FPGAs and a dual-core 8641
– 1 board with 4 dual-core 8641s
– Processors run VxWorks
Boards come from the same vendor, but have different board support packages (BSPs)
Data transfer technology of choice: Serial RapidIO (sRIO)
– Low latency important for our application
Middleware implemented in C++
11
System Model
[Figure: layered stack, bottom to top — hardware (Rx/Tx, ADC, etc.); VPX + Serial RapidIO physical interface; operating system (real-time: VxWorks; standard: Linux) with vendor-specific BSPs (BSP1, BSP2); application components, signal processing library, and system control components at the top]
12
System Model
[Figure: same layered stack with the TCL middleware inserted between the vendor-specific OS/BSP layer and the application components, signal processing library, and system control components]
13
System Model
New components can easily be added by complying with the middleware API
– New SW components
– New HW components
[Figure: layered stack showing new software components plugging in above the TCL middleware and new hardware components below it]
14
Outline
Introduction
System Architecture
Software Architecture
Initial Results and Demonstration
Ongoing Work/Summary
15
Publish/Subscribe Middleware
Publishing application doesn't need to know where the data is going... and the subscribers are unconcerned about where their data comes from
Middleware acts as interface to both application and hardware/OS
[Figure: Process k publishes on Topic T; the middleware consults the subscriber list (processes l1, l2), sends the data across the switch fabric, the OS notifies the middleware on arrival, and the middleware delivers the data to each subscribing application]
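To make the flow concrete, here is a minimal, self-contained C++ sketch of topic-based publish/subscribe; the class and method names (Middleware, subscribe, publish) are illustrative assumptions, not the actual TCL API:

    // Minimal pub/sub sketch; names are hypothetical, not the TCL API.
    #include <cstdint>
    #include <cstdio>
    #include <functional>
    #include <map>
    #include <string>
    #include <utility>
    #include <vector>

    struct Message { std::vector<std::uint8_t> payload; };
    using Handler = std::function<void(const Message&)>;

    class Middleware {
    public:
        // A subscriber registers interest in a topic.
        void subscribe(const std::string& topic, Handler h) {
            subscribers_[topic].push_back(std::move(h));
        }
        // A publisher pushes data without knowing the recipients;
        // the middleware fans the message out to every subscriber.
        void publish(const std::string& topic, const Message& m) {
            for (auto& h : subscribers_[topic]) h(m);
        }
    private:
        std::map<std::string, std::vector<Handler>> subscribers_;
    };

    int main() {
        Middleware mw;
        mw.subscribe("TopicT", [](const Message& m) {
            std::printf("subscriber got %zu bytes\n", m.payload.size());
        });
        mw.publish("TopicT", Message{{1, 2, 3}});  // publisher unaware of receivers
    }

In the real system the fan-out crosses the switch fabric rather than a single address space, but the decoupling between publisher and subscribers is the same.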
16
Abstract Interfaces to Middleware
Middleware must be abstract to be effective
– Middleware developers are unaware of hardware-specific libraries
– Users have to implement functions that are specific to BSPs
[Figure: the TCL middleware handles publisher/subscriber management; on the application side it accepts data from publishers, sends data to subscribers, and provides arrival notification; on the hardware/OS side it must answer "What data-transfer technology am I using?" and "How (exactly) do I execute a data transfer?"]
17
XML Parser
Resource manager is currently in the form of an XML parser
– XML file defines topics, publishers, and subscribers
– Parser sets up the middleware and defines the virtual network topology
[Figure: setup.xml feeds the parser, which configures the resource manager inside the TCL middleware]
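The slides do not show the file format itself; as a hedged sketch, a setup.xml under this scheme might look like the following, where every element and attribute name is a hypothetical illustration (the topic and process names are borrowed from the demos later in the deck):

    <!-- Hypothetical setup.xml; element and attribute names are
         illustrative guesses, not the actual TCL schema. -->
    <tcl-setup>
      <topic name="DetectedSignals">
        <publisher process="Detect" node="proc1"/>
        <subscriber process="Process" node="proc2"/>
      </topic>
      <topic name="TransmitWaveform">
        <publisher process="Process" node="proc2"/>
        <subscriber process="Transmit" node="proc3"/>
      </topic>
    </tcl-setup>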
18
Middleware Interfaces
Base classes
– DataReader, DataReaderListener and DataWriter interface with the application
– Builder interfaces with BSPs
Derive board- and communication-specific classes
[Figure: class diagram — DataReader, DataReaderListener, and DataWriter (interface to application) each has a Builder (interface to BSP); SrioDataReader, SrioDataReaderListener, and SrioDataWriter derive from them and change with the communication technology; CustomBSPBuilder derives from Builder and changes with the hardware]
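A minimal C++ sketch of this split, assuming only the relationships in the class diagram (types and signatures are illustrative, not the actual TCL headers):

    // Application-facing base owns ("has a") a BSP-facing Builder.
    // All signatures here are assumptions for illustration.
    #include <cstddef>

    using STATUS = int;  // VxWorks-style status code

    class Builder {  // interface to BSP
    public:
        virtual ~Builder() = default;
        // Empty in the base class; a derived class implements it
        // with BSP-specific libraries.
        virtual STATUS performDmaTransfer(const void*, std::size_t) {
            return 0;
        }
    };

    class DataWriter {  // interface to application (publishing side)
    public:
        explicit DataWriter(Builder* b) : myBuilder(b) {}
        virtual ~DataWriter() = default;
        virtual STATUS write(const void* msg, std::size_t len) = 0;
    protected:
        Builder* myBuilder;  // "has a" Builder
    };

    // Changes with the communication technology (sRIO), not the board.
    class SrioDataWriter : public DataWriter {
    public:
        using DataWriter::DataWriter;
        STATUS write(const void* msg, std::size_t len) override {
            return myBuilder->performDmaTransfer(msg, len);  // delegate the "how"
        }
    };

The DataReader/DataReaderListener side would mirror this structure for the subscribing direction.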
19
Builder
Follows the Builder pattern in Design Patterns*
Provides interface for sRIO-specific tasks
– e.g., establish sRIO connections, execute data transfer
Certain functions are empty (not necessarily virtual) in the base class, then implemented in the derived class with BSP-specific libraries

Base class:
    #include ...
    //member functions
    STATUS Builder::performDmaTransfer(...) {}
    ...

Derived class:
    #include ...
    #include "vendorPath/bspDma.h"
    ...
    //member functions
    STATUS CustomBSPBuilder::performDmaTransfer(...) {
        return bspDmaTransfer(...);
    }
    ...

[Figure: CustomBSPBuilder derives from Builder, the interface to the BSP]

*E. Gamma et al., Design Patterns: Elements of Reusable Object-Oriented Software. Reading, Mass.: Addison-Wesley, 1995.
20
Publishers and Subscribers
DataReaders, DataWriters and DataReaderListeners act as "Directors" of the Builder
– Tell the Builder what to do; the Builder determines how to do it
DataWriter used for publishing; DataReader and DataReaderListener used by subscribers
Derived classes implement communication-specific (sRIO), but not BSP-specific, functionality
– e.g., ring a recipient's doorbell after transferring data
Derived Builder type determined dynamically

    //member functions
    virtual STATUS DataWriter::write(message) = 0;

    virtual STATUS SrioDataWriter::write(message) {
        ...
        myBuilder->performDmaTransfer(...);
        ...
    }

[Figure: SrioDataReader, SrioDataReaderListener, and SrioDataWriter derive from DataReader, DataReaderListener, and DataWriter, the interfaces to the application; each has a Builder]
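Continuing the sketch from the Middleware Interfaces slide, dynamic selection of the Builder type might look like this (the board-detection flag, the CustomBSPBuilder body, and the wiring are all hypothetical):

    // Hypothetical run-time wiring of a BSP-specific Builder.
    #include <memory>

    class CustomBSPBuilder : public Builder {
    public:
        STATUS performDmaTransfer(const void*, std::size_t) override {
            // would call the vendor's bspDmaTransfer(...) here
            return 0;
        }
    };

    std::unique_ptr<Builder> makeBuilder(bool onCustomBoard) {
        return onCustomBoard
            ? std::make_unique<CustomBSPBuilder>()  // BSP-specific DMA
            : std::make_unique<Builder>();          // generic fallback
    }

    int main() {
        auto builder = makeBuilder(/*onCustomBoard=*/true);
        SrioDataWriter writer(builder.get());
        unsigned char buf[64] = {0};
        return writer.write(buf, sizeof buf);  // Director: what; Builder: how
    }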
21
Outline
Introduction
System Architecture
Software Architecture
Initial Results and Demonstration
Ongoing Work/Summary
22
Software-Defined Connectivity: Initial Implementation
Experiment: process-to-process data transfer latency
– Set up two topics, Send and SendBack
– Processes use TCL to send data back and forth
– Measure round-trip time with and without middleware in place
[Figure: Process 1 and Process 2, on separate processors, exchange packets through the TCL middleware and switch fabric via the Send and SendBack topics]
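A hedged sketch of what the round-trip measurement loop might look like; the TCL publish/wait calls are commented stand-ins, not real API calls:

    // Illustrative ping-pong timing harness (the TCL calls are assumptions,
    // shown as comments; as written this measures only loop overhead).
    #include <chrono>
    #include <cstdio>

    int main() {
        using clock = std::chrono::steady_clock;
        const int trials = 1000;

        auto start = clock::now();
        for (int i = 0; i < trials; ++i) {
            // tcl.publish("Send", packet);   // local process sends
            // tcl.waitFor("SendBack");       // remote process echoes
        }
        std::chrono::duration<double, std::micro> elapsed = clock::now() - start;

        // one-way latency is roughly half the average round trip
        std::printf("avg one-way latency: %.1f us\n",
                    elapsed.count() / trials / 2.0);
    }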
23
Software-Defined Connectivity: Communication Latency
One-way latency ~23 µs for small packet sizes
Latency grows proportionally to packet size for large packets
Reach 95% efficiency at 64 KB
Overhead is negligible for large packets, despite increasing size
[Figure: plot of one-way latency from P1 to P2 versus packet size]
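These observations fit a simple fixed-overhead model, sketched here as an assumption rather than anything stated on the slide: with constant per-transfer overhead $t_0$ and link bandwidth $B$,

    % Assumed latency model: fixed overhead plus serialization time.
    \[
      t(n) \approx t_0 + \frac{n}{B}, \qquad
      \mathrm{efficiency}(n) = \frac{n/B}{t_0 + n/B}
    \]

Under this model, efficiency reaches 95% when $n/B = 19\,t_0$ (since $0.05\,(n/B) = 0.95\,t_0$), consistent with a small fixed overhead becoming negligible by 64 KB packets.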
24
Demo 1: System Reconfiguration
Objective: demonstrate connectivity reconfiguration by simply replacing the configuration XML file
Configuration 1 (XML1): Detect, Process, Transmit assigned to Processors #1, #2, #3
Configuration 2 (XML2): Detect, Transmit, Process assigned to Processors #1, #2, #3
[Figure: the three processes communicate through the TCL middleware; swapping the XML file swaps which processors host Process and Transmit]
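Under the hypothetical schema sketched earlier, the two files might differ only in their node assignments; these fragments are illustrative, not from the actual demo files:

    <!-- XML1 (hypothetical fragment) -->
    <publisher process="Process" node="proc2"/>
    <subscriber process="Transmit" node="proc3"/>

    <!-- XML2 (hypothetical fragment): Process and Transmit swap nodes -->
    <publisher process="Process" node="proc3"/>
    <subscriber process="Transmit" node="proc2"/>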
25
Demo 2: Resource Management
Low-latency predefined connections allow quick response
[Figure: Receive, Proc #1, Proc #2, Transmit, and Control connected through the TCL middleware; callouts: "I've detected signals!" and "I will process this new information!"]
26
Demo 2: Resource Management
Resource manager sets up new connections on demand to efficiently utilize available computing power
[Figure: callouts: "I need more help to analyze the signals!", "I will determine what to transmit in response!", and "Working…"]
27
Demo 2: Resource Management
Proc #1/Transmit are publisher/subscriber on topic TransmitWaveform
[Figure: callouts: "I have determined an appropriate response!", "I will transmit the response!", and "Working…"]
28
Demo 2: Resource Management
After finishing, components may be re-assigned
[Figure: callout: "Done!" — the transaction completes and components become available for re-assignment]
29
Outline
Introduction
System Architecture
Software Architecture
Initial Results and Demonstration
Ongoing Work/Summary
30
Ongoing Work
Develop the middleware (configuration manager) to set up fixed connections
– Mode 3: objective system
Automate resource management
– Dynamically reconfigure system as needs change
– Enable more efficient use of resources (load balancing)
[Figure: SPP system diagram with the configuration manager in the TCL middleware highlighted]
31
Summary
Developing software-defined connectivity of hardware and software components
Enabling technology: low-latency pub/sub middleware
– Abstract base classes manage connections between nodes
– Application developer implements only system-specific send and receive code
Encouraging initial results
– At full sRIO data rate, overhead is negligible
Working toward automated resource management for efficient allocation of processing capability, as well as automated setup of low-latency hardware connections