
Intro to Inter-Processor Communications (IPC)

Agenda  Basic Concepts  IPC Services  Setup and Examples  IPC Transports  For More Information

Basic Concepts  Basic Concepts  IPC Services  Setup and Examples  IPC Transports  For More Information

IPC – Definition  IPC = Inter-Processor Communication  While this definition is rather generic, it really means: “Transporting data and/or signals between threads of execution”  These threads could be located anywhere: two threads on the same CorePac, threads on different CorePacs of the same device, or threads on two different devices. (Diagram: Thread 1 and Thread 2 exchanging data in each of these three placements.) How would YOU solve this problem?

IPC – Possible Solutions What solutions exist in TI’s RTOS to perform IPC?  How do you transport the data and signal?  Manual: Develop your own protocol using device resources  Auto: Use existing RTOS/framework utilities (i.e., IPC)

IPC – RTOS/Framework Solutions  SAME CorePac: TI’s RTOS (SYS/BIOS) supports several services for inter-thread communication (e.g., semaphores, queues, mailboxes); solution: SYS/BIOS (or IPC).  DIFFERENT CorePac: The IPC framework supports communication between CorePacs via several transports; solution: IPC + transport.  DIFFERENT DEVICE: IPC transports can also be implemented between devices; solution: IPC + transport.  KEY: The same IPC APIs can be used for local or remote communications.

IPC – Transports  The current IPC implementation supports several transports:  CorePac ↔ CorePac: Shared Memory Model, Multicore Navigator  Device ↔ Device: Serial RapidIO (SRIO)  The transport is chosen at configuration time; the code is the same regardless of thread location. (Diagram: threads on CorePacs 1 and 2 of Device 1 communicating via shared memory or Multicore Navigator; threads on Device 1 and Device 2 communicating via SRIO.)

IPC Services  Basic Concepts  IPC Services Message Queue Notify Data Passing Support Utilities  Setup and Examples  IPC Transports  For More Information

IPC Services  The IPC package is a set of APIs  MessageQ uses the modules below …  But each module can also be used independently. Application

IPC Services – Message Queue  Basic Concepts  IPC Services Message Queue Notify Data Passing Support Utilities  Setup and Examples  IPC Transports  For More Information

MessageQ – Highest Layer API  SINGLE reader, multiple WRITERS model (READER owns queue/mailbox)  Supports structured sending/receiving of variable-length messages, which can include (pointers to) data  Uses all of the IPC services layers along with IPC Configuration & Initialization  APIs do not change if the message is between two threads:  On the same core  On two different cores  On two different devices  APIs do NOT change based on transport – only the CFG (init) code  Shared memory  Multicore Navigator  SRIO

MessageQ and Messages Q How does the writer connect with the reader queue? A MultiProc and name server keep track of queue names and core IDs. Q What do we mean when we refer to structured messages with variable size? A Each message has a standard header and data. The header specifies the size of payload. Q How and where are messages allocated? A List utility provides a double-link list mechanism. The actual allocation of the memory is done by HeapMP, SharedRegion, and ListMP. Q If there are multiple writers, how does the system prevent race conditions (e.g., two writers attempting to allocate the same memory)? A GateMP provides hardware semaphore API to prevent race conditions. Q What facilitates the moving of a message to the receiver queue? A This is done by Notify API using the transport layer. Q Does the application need to configure all these modules? A No. Most of the configuration is done by the system. More details later.

Using MessageQ (1/3) CorePac 2 – READER: MessageQ_create(“myQ”, *synchronizer); MessageQ_get(“myQ”, &msg, timeout);  MessageQ transactions begin with the READER creating a MessageQ (“myQ”).  The READER’s attempt to get a message blocks (unless a timeout was specified), since no messages are in the queue yet. What happens next?

Using MessageQ (2/3) CorePac 1 – WRITER: MessageQ_open(“myQ”, …); msg = MessageQ_alloc(heap, size, …); MessageQ_put(“myQ”, msg, …);  The WRITER begins by opening the MessageQ created by the READER.  The WRITER gets a message block from a heap and fills it, as desired.  The WRITER puts the message into the MessageQ. How does the READER get unblocked?

Using MessageQ (3/3) CorePac 2 – READER: MessageQ_get(“myQ”, &msg…); *** PROCESS MSG *** MessageQ_free(“myQ”, …); MessageQ_delete(“myQ”, …); CorePac 1 – WRITER: MessageQ_close(“myQ”, …);  Once the WRITER puts a msg in the MessageQ, the READER is unblocked.  The READER can now read/process the received message.  The READER frees the message back to the Heap.  The READER can optionally delete the created MessageQ, if desired.

MessageQ – Configuration  All API calls use the MessageQ module in IPC.  The user must also configure the MultiProc and SharedRegion modules.  All other configuration/setup is performed automatically by MessageQ. (Diagram: MessageQ built on Notify, ListMP, SharedRegion, GateMP, NameServer, HeapMemMP, and MultiProc.)

MessageQ – Miscellaneous Notes  O/S independent:  If one CorePac is running Linux and using SysLink, the API calls do not change.  SysLink is runtime software that provides connectivity between processors (running Linux, SYS/BIOS, etc.).  Messages can be allocated statically or dynamically.  Timeouts are allowed when a task receives a message.  The user can specify three priority levels:  Normal  High  Urgent

More Information About MessageQ  All structures and function descriptions can be found within the release: \ipc_U_ZZ_YY_XX\docs\doxygen\html\_message_q_8h.html

IPC Services - Notify  Basic Concepts  IPC Services Message Queue Notify Data Passing Support Utilities  Setup and Examples  IPC Transports  For More Information

Using Notify – Concepts  In addition to moving MessageQ messages, Notify:  Can be used independently of MessageQ  Is a simpler form of IPC communication  Is used ONLY with the shared memory transport (Diagram: two CorePacs on Device 1 signaling each other through shared memory.)

Notify Model  Comprised of a SENDER and a RECEIVER.  The SENDER API requires the following information:  Destination (the SENDER ID is implicit)  16-bit Line ID  32-bit Event ID  32-bit payload (for example, a pointer to a message handle)  The SENDER API generates an interrupt (an event) in the destination.  Based on the Line ID and Event ID, the RECEIVER schedules a predefined callback function.

Notify Model

Notify Implementation Q How are interrupts generated for the shared memory transport? A The IPC hardware registers are a set of 32-bit registers that generate interrupts. There is one register for each core. Q How are the Notify parameters stored? A The List utility provides a double-link list mechanism. The actual allocation of the memory is done by HeapMP, SharedRegion, and ListMP. Q How does Notify know to send the message to the correct destination? A MultiProc and the name server keep track of the core ID. Q Does the application need to configure all these modules? A No. Most of the configuration is done by the system.

Example Callback Function

/*
 *  ======== cbFxn ========
 *  This fxn was registered with Notify. It is called when
 *  any event is sent to this CPU.
 */
Uint32 recvProcId;
Uint32 seq;

void cbFxn(UInt16 procId, UInt16 lineId, UInt32 eventId,
           UArg arg, UInt32 payload)
{
    /* The payload is a sequence number. */
    recvProcId = procId;
    seq = payload;
    Semaphore_post(semHandle);
}

More Information About Notify  All structures and function descriptions can be found within the release: \ipc_U_ZZ_YY_XX\docs\doxygen\html\_notify_8h.html

IPC Services - Data Passing  Basic Concepts  IPC Services Message Queue Notify Data Passing Support Utilities  Setup and Examples  IPC Transports  For More Information

Data Passing Using Shared Memory (1/2)  When there is a need to allocate memory that is accessible by multiple cores, shared memory is used.  However, the MPAX register for each core might assign a different logical address to the same physical shared memory address.  The Shared Region module provides a translation look-up table that resolves the logical/physical address issue without user intervention.

Data Passing Using Shared Memory (2/2)  Messages are created and freed, but not necessarily in consecutive order:  HeapMP provides a dynamic heap utility that supports create and free based on a double linked list architecture.  ListMP provides a double linked list utility that makes it easy to create and free messages in static memory. It is used by HeapMP for dynamic cases.  To protect the above utilities from race conditions (e.g., multiple cores trying to create messages at the same time):  GateMP provides hardware semaphore protection.  GateMP can also be used by non-IPC applications to assign hardware semaphores.

Data Passing – Static  Data Passing uses a double linked list that can be shared between CorePacs; the linked list is defined STATICALLY.  ListMP handles address translation and cache coherency.  GateMP protects read/write accesses.  ListMP is typically used by MessageQ, not by itself. (Diagram: ListMP built on SharedRegion, GateMP, NameServer, Notify, and MultiProc.) Now, the dynamic version...

Data Passing – Dynamic  Data Passing uses a double linked list that can be shared between CPUs; the linked list is defined DYNAMICALLY (via heap).  Same as the static case, except the linked lists are allocated from a heap (HeapMemMP).  Typically not used alone, but as a building block for MessageQ. (Diagram: same module stack as the static case, plus HeapMemMP.)

Data Passing: Additional Transports  Multicore Navigator  Messages are assigned to descriptors.  Monolithic descriptors provide a pointer to the message.  SRIO Type 11 has a TI-developed message protocol that moves data and interrupts between devices.  Implementation of these transports will be discussed later.

IPC Services – Support Utilities  Basic Concepts  IPC Services Message Queue Notify Data Passing Support Utilities  Setup and Examples  IPC Transports  For More Information

IPC Support Utilities (1/2)  IPC contains several utilities, most of which do NOT need to be configured by the user.  Here is a short description of each IPC utility: IPC: Initializes the IPC subsystems. Applications using IPC must call Ipc_start(). In the configuration file, setupNotify and setupMessageQ specify whether to set up these IPC modules. MultiProc: Manages processor IDs and core names. SharedRegion: Manages shared memory using the HeapMemMP allocator; handles address translation for shared memory regions. More utilities follow…

IPC Support Utilities (2/2)  IPC contains several utilities, most of which do NOT need to be configured by the user.  Here is a short description of each IPC utility: ListMP: Provides a linked list in shared memory; uses the multi-processor gate (GateMP) to prevent collisions on lists. GateMP: A multi-processor gate that provides local (against other threads on the local core) and remote context protection. HeapMemMP: A traditional heap; supports variable-sized alloc/free. All allocations are aligned on cache line boundaries.

Setup and Examples  Basic Concepts  IPC Services  Setup and Examples  IPC Transports  For More Information

IPC – Tools/Setup Required  IPC is a package (library) that is installed with the MCSDK. Note: Installing IPC independent of MCSDK is strongly discouraged.  IPC also requires SYS/BIOS (threading) and XDC tools (packaging) – installed with the MCSDK (supported by SYS/BIOS ROV and RTA).  IPC is supported on the latest KeyStone multicore devices (C667x, C665x), as well as C647x and ARM+DSP devices. What IPC examples exist in the MCSDK?

IPC – Examples  IPC examples are installed along with the MCSDK in the PDK folder.  Simply import these projects into CCS and analyze them.  Some users start with this example code and modify as necessary. QMSS – Multicore Navigator (Queue Manager Subsystem) SHM – Shared Memory SRIO – SRIO (loopback) SRIO – Chip to Chip

IPC Transports  Basic Concepts  IPC Services  Setup and Examples  IPC Transports  For More Information

IPC Transports – Intro  An IPC TRANSPORT is a “combination of physical H/W and driver code that allows two threads to communicate on the same device or across devices.”  IPC supports three different transports: Shared Memory: The default transport. Uses on-chip shared memory resources and interrupt lines to signal the availability of data. Multicore Navigator: Available on all KeyStone SoC devices. Uses queues and descriptors plus built-in signaling to transmit/receive data and messages between threads. SRIO: A hardware serial peripheral that connects two or more DEVICES together.  For MessageQ, only the configuration (init) code changes. All other code (e.g., put/get) remains unchanged. Let's look briefly at the Multicore Navigator first...

Data Passing Using Multicore Navigator The Multicore Navigator:  Is an innovative packet-based infrastructure that facilitates data movement and multicore control  Provides a highly efficient inter-core communication mechanism  Provides H/W queues and packet DMA that are the basic building blocks for IPC Examples of required init/CFG code are provided with the MCSDK. Now consider the Multicore Navigator implementation …

IPC Transports – Multicore Navigator (1/3) (Flow: TX core: msg = MessageQ_alloc → MessageQ_put → NotifyQmss_sendEvent → get a descriptor → attach ptr/msg to descriptor → Qmss_queuePush, which pushes the descriptor directly to the RxQueue of the remote core. RX core: NotifyDriverQmss_isr → get the pointer to the msg → MessageQ_get.)  MessageQ_alloc allocates a HeapMP buffer for the MessageQ_msg from MSMC.  The buffer contains the MessageQ_Header plus the data.  When MessageQ_put is called with the MessageQ_msg, it also calls the TransportQmss function NotifyQmss_sendEvent.

IPC Transports – Multicore Navigator (2/3)  The TransportQmss function NotifyQmss_sendEvent:  Gets a descriptor.  Adds the pointer to the MessageQ_msg (located in MSMC and containing the data) to the descriptor.  Qmss_queuePush then pushes the descriptor into a queue.

IPC Transports – Multicore Navigator (3/3)  A push to the interrupt-generating queue triggers NotifyDriverQmss_isr on the remote core.  The ISR pops the received descriptor and delivers the message to the receiver's queue, where MessageQ_get retrieves it.

IPC Transports – SRIO (1/3)  The SRIO (Type 11) transport enables MessageQ to send data between tasks, cores, and devices via the SRIO IP block.  Refer to the MCSDK examples for the setup code required to use MessageQ over this transport. (Flow: Chip V, CorePac W: msg = MessageQ_alloc → MessageQ_put(queueId, msg) → TransportSrio_put → Srio_sockSend(pkt, dstAddr) → SRIO x4 link → Chip X, CorePac Y: TransportSrio_isr → get msg from queue → MessageQ_put(queueId, rxMsg) → MessageQ_get(queueHndl, rxMsg).)

IPC Transports – SRIO (2/3)  From a MessageQ standpoint, the SRIO transport works the same as the QMSS transport, and it is broadly similar at the transport level as well.  The SRIO transport copies the MessageQ message into an SRIO data buffer.  It then pops an SRIO descriptor and puts a pointer to the SRIO data buffer into the descriptor.

IPC Transports – SRIO (3/3)  The transport then passes the descriptor to the SRIO LLD via the Srio_sockSend API.  SRIO sends and receives the buffer via the SRIO PKTDMA.  The message is then queued on the receiver side.

Configure the Transport Layer  Most of the transport changes are in the CFG file.  The XDC tools build the configuration and initialization code; user involvement is limited to changing the CFG file.  Some additional includes and initialization calls are needed in the application code. Now consider the differences between shared memory and Navigator transport from the application perspective …

Configuration: Shared Memory CFG File

var MessageQ = xdc.useModule('ti.sdo.ipc.MessageQ');
var Notify   = xdc.module('ti.sdo.ipc.Notify');
var Ipc      = xdc.useModule('ti.sdo.ipc.Ipc');

Notify.SetupProxy = xdc.module(Settings.getNotifySetupDelegate());
MessageQ.SetupTransportProxy = xdc.module(Settings.getMessageQSetupDelegate());

/* Use shared memory IPC */
MessageQ.SetupTransportProxy =
    xdc.module('ti.sdo.ipc.transports.TransportShmSetup');
Program.global.TRANSPORTSETUP = MessageQ.SetupTransportProxy.delegate$.$name;

Configuration: Navigator CFG File

var MessageQ = xdc.useModule('ti.sdo.ipc.MessageQ');
var Ipc      = xdc.useModule('ti.sdo.ipc.Ipc');
var TransportQmss =
    xdc.useModule('ti.transport.ipc.qmss.transports.TransportQmss');

/* Use IPC over QMSS */
MessageQ.SetupTransportProxy = xdc.useModule(Settings.getMessageQSetupDelegate());
var TransportQmssSetup =
    xdc.useModule('ti.transport.ipc.qmss.transports.TransportQmssSetup');
MessageQ.SetupTransportProxy = TransportQmssSetup;

TransportQmssSetup.descMemRegion = 0;
Program.global.descriptorMemRegion = TransportQmssSetup.descMemRegion;
Program.global.numDescriptors = 8192;
Program.global.descriptorSize = cacheLineSize; // multiple of cache line size
TransportQmss.numDescriptors = Program.global.numDescriptors;
TransportQmss.descriptorIsInSharedMem = true;
TransportQmss.descriptorSize = Program.global.descriptorSize;

Configuration: SRIO CFG File

var MessageQ = xdc.useModule('ti.sdo.ipc.MessageQ');
var Ipc      = xdc.useModule('ti.sdo.ipc.Ipc');
var TransportSrio =
    xdc.useModule('ti.transport.ipc.srio.transports.TransportSrio');

/* Use IPC over SRIO */
MessageQ.SetupTransportProxy = xdc.useModule(Settings.getMessageQSetupDelegate());
var TransportSrioSetup =
    xdc.useModule('ti.transport.ipc.srio.transports.TransportSrioSetup');
MessageQ.SetupTransportProxy = TransportSrioSetup;

/* Sized specifically to handle receive-side packets. The heap should
 * not be used by any other application or module. */
TransportSrioSetup.messageQHeapId = 1;
TransportSrioSetup.descMemRegion = 0;
/* Should be sized large enough so that multiple packets can be queued
 * on the receive side and still have buffs available for incoming packets. */
TransportSrioSetup.numRxDescBuffs = 256;

Configuration: Navigator Initialization (1/2)

Additional includes:

/* QMSS LLD */
#include
/* CPPI LLD */
#include
#include

Add an initialization routine:

Int main(Int argc, Char* argv[])
{
    if (selfId == 0) {
        /* QMSS and CPPI system-wide initializations are run on this core */
        result = systemInit();  /* Note: systemInit is provided by TI */
        if (result != 0) {
            System_printf("Error (%d) while initializing QMSS\n", result);
        }
    }
    ...
}

IPC Transport Details Benchmark Details:  IPC benchmark examples from the MCSDK  CPU clock: 1 GHz  Header size: 32 bytes  SRIO in loopback mode  Messages allocated up front (Chart: throughput in Mb/second versus message size in bytes for the Shared Memory, Multicore Navigator, and SRIO transports.)

IPC Transport Pros/Cons Shared Memory  Pros: Simplest to implement; moderate throughput.  Cons: Single device only; requires the Notify module and an API call if a doorbell is required; possible contention with other tasks using the same shared memory. Multicore Navigator  Pros: Highest throughput; dedicated resources; consumes the fewest CPU cycles; an interrupt is generated when data is available.  Cons: Single device only. SRIO  Pros: Can be used across devices.  Cons: Lowest throughput.

For More Information  For more information, refer to the BIOS MCSDK User Guide on the Embedded Processors Wiki.  For questions regarding topics covered in this training, visit the support forums at the TI E2E Community website.