1
Workload Distribution in Satellites
Part A Final Presentation
Performed by: Grossman Vadim, Maslovsky Eugene
Instructor: Rivkin Inna
Spring 2004
2
An introduction
– Several I/O peripherals require lots of "attention", degrading the overall performance
– Some systems should prioritize computational power over I/O latency
– We were appointed to design a system that would enable that
3
Concept
[Diagram: Main CPU connected directly to the I/O peripherals]
Before: the Main CPU deals with ALL the I/O, ALL the time!
[Diagram: the EV04S (I/O CPU) placed between the Main CPU and the I/O peripherals]
After: the Main CPU deals ONLY with the device, ONLY when needed!
4
What can we improve?
– Implement the I/O protocols elsewhere, using an I/O CPU
– Reduce the number of distractions:
  – Fewer interrupts
  – I/O CPU polling
  – Using buffers
What are the possible implementations?
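The "using buffers" point above can be sketched in software: a ring buffer that the I/O CPU fills as bytes arrive, while the main CPU drains it in bursts instead of being interrupted per byte. This is a minimal illustrative model; the buffer size and function names are ours, not the project's.

```c
#include <stdint.h>

/* Single-producer/single-consumer ring buffer sketch: the I/O CPU
 * (producer) absorbs peripheral bytes, the main CPU (consumer) drains
 * them in bursts. BUF_SIZE is a power of two so indices wrap by mask. */
#define BUF_SIZE 16

typedef struct {
    uint8_t data[BUF_SIZE];
    unsigned head;  /* next write position (I/O CPU side) */
    unsigned tail;  /* next read position (main CPU side) */
} ring_t;

static int ring_put(ring_t *r, uint8_t byte) {
    if (r->head - r->tail == BUF_SIZE)
        return 0;                             /* full: drop or retry */
    r->data[r->head++ & (BUF_SIZE - 1)] = byte;
    return 1;
}

static int ring_get(ring_t *r, uint8_t *byte) {
    if (r->head == r->tail)
        return 0;                             /* empty: nothing buffered */
    *byte = r->data[r->tail++ & (BUF_SIZE - 1)];
    return 1;
}
```

The main CPU only needs to run `ring_get` when a batch is ready, which is exactly the "fewer distractions" goal.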
5
Possible solution #1: a device connected to the PLB bus
Advantages:
– Faster access from the PPC
– Only one IPIF
Disadvantages:
– Additional load on the PLB
– An additional bridge is required
[Diagram: Main CPU and I/O CPU; the EV04S and the DDR sit on the PLB, with PLB2OPB and OPB2PLB bridges between the busses]
6
Possible solution #2: a device connected to the OPB bus
Advantages:
– "The" place for peripherals
– Only one IPIF
– Less load on the PLB
Disadvantages:
– Additional load on the OPB
– Longer I/O data transfer times for the PPC
[Diagram: Main CPU and I/O CPU; the EV04S sits on the OPB, behind the PLB2OPB bridge]
7
Possible solution #3: a hybrid of the previous two
Advantages:
– Faster access from the PPC
– Less load on the busses
Disadvantages:
– More complex design
– More hardware needed
[Diagram: Main CPU and I/O CPU; the EV04S connects to both the PLB and the OPB]
8
Possible solution #4: a "filter" device after the bridge
Advantages:
– Fewer issues with bus differences
– The device is more transparent to the PPC
Disadvantages:
– More complex design
– Increased latency for all PLB2OPB transactions
[Diagram: Main CPU and I/O CPU; the EV04S sits inline after the PLB2OPB bridge]
9
So?
[Diagram: the four candidate designs from the previous slides, side by side]
– The hybrid (3rd) solution was chosen
– Slave or master? Slave, using interrupts!
10
Interrupts…
– The PPC has two interrupt inputs: critical and non-critical
– The MB has one interrupt input
– For more signals, we'll use an interrupt controller:
  – The number of signals is limited by the bus data width
  – The controller is accessed via an OPB interface
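The controller idea can be modeled in a few lines: several sources are OR-ed into the CPU's single interrupt input, and software reads a pending register to find who fired. This is a hedged sketch of the concept only; the bit numbering, priority rule, and register layout here are illustrative assumptions, not the actual Xilinx intc interface.

```c
#include <stdint.h>

/* Model of an interrupt controller funneling up to 32 sources (one per
 * bit of the bus data width) into one CPU interrupt line. Lower bit
 * number = higher priority here, an assumption for illustration. */
static int highest_pending(uint32_t pending) {
    for (int bit = 0; bit < 32; bit++)
        if (pending & (1u << bit))
            return bit;       /* first pending source to service */
    return -1;                /* nothing pending */
}

/* The single interrupt line asserts when any enabled source is pending. */
static int irq_line(uint32_t pending, uint32_t enable_mask) {
    return (pending & enable_mask) != 0;
}
```

This shows why the slide notes the signal count is bounded by the bus data width: one 32-bit register can report at most 32 sources.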
11
Architecture diagram
[Diagram: the PPC (with PPC Intc and a BRAM + controller) and the MB (with MB Intc, DLMB/ILMB local busses, and its own BRAM + controller), connected over the PLB and the OPB through the PLB2OPB bridge; the EV04S sits between the busses, with peripherals #1–#3 on the OPB]
12
Deeper and deeper
[Diagram: inside the EV04S: a 64-bit PLB IPIF and a 32-bit OPB IPIF around a Control Unit and a Buffers Unit, with PPC int. and MB int. interrupt outputs]
13
Buffers
– Implemented using 2 FIFO sets (per device)
– Using the Xilinx FIFO core:
  – Synchronous FIFO, 1 clock
  – Asynchronous FIFO, 2 clocks
  – Asynchronous fits better
– A new problem: different busses have different data widths!
14
Possible Solutions
– Use only 32 bits on the PLB: slow!
– Logic for double writes & double reads: complicated!
– Double FIFO sets + a mux/switch
[Diagram: a 64-bit input split across FIFOs, with a MUX under a control input selecting the 32-bit output]
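The width mismatch the slides wrestle with reduces to splitting each 64-bit PLB word into two 32-bit halves for the OPB side, and joining them again in the other direction. A minimal sketch of that "double writes & double reads" logic; the helper names and hi-half-first ordering are our assumptions.

```c
#include <stdint.h>

/* Split one 64-bit PLB word into two 32-bit halves: in the double-FIFO
 * scheme the high half would go to FIFO 1 and the low half to FIFO 2. */
static void split64(uint64_t word, uint32_t *hi, uint32_t *lo) {
    *hi = (uint32_t)(word >> 32);
    *lo = (uint32_t)word;
}

/* Rejoin two 32-bit halves into the original 64-bit word. */
static uint64_t join64(uint32_t hi, uint32_t lo) {
    return ((uint64_t)hi << 32) | lo;
}
```

With two FIFOs in parallel, both halves are enqueued in a single 64-bit-side access, avoiding the "slow" 32-bit-only option.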
15
Deeper into the buffers
– One unit for each device
– A register could be used for behavioral settings
[Diagram: per-device unit with a PLB2OPB FIFO set and an OPB2PLB FIFO set between the 64-bit and 32-bit sides, controls in/out, and an optional settings register]
16
Deeper into the controls
[Diagram: a PLB Controller and an OPB Controller sit between the PLB/OPB IPIFs and the two FIFO sets. Each IPIF side carries R_req, W_req, Ack, Err and Busy in its own clock domain (PLB clk / OPB clk); each FIFO side carries W_en/R_en, Ack/Err, Full/Empty and Count for reads and writes. The controllers also drive the PPC int. and MB int. outputs.]
17
Deeper into the FIFO set
[Diagram: the PLB2OPB set: a 64-bit write (W_en) fills FIFO 1 and FIFO 2 together; on the 32-bit side a MUX alternates reads between them (R_en 1 / R_en 2). Counters and the per-FIFO R_ack/W_ack/Full/Empty signals feed a controller that bridges the upper-level signals across the PLB clk / OPB clk domains.]
The controller:
– Creates the multiple control signals
– Manages the read switching
18
Deeper into the FIFO set
[Diagram: the OPB2PLB set: on the 32-bit side a DEC alternates writes between FIFO 1 and FIFO 2 (W_en 1 / W_en 2); a 64-bit read (R_en) drains both together toward the PLB. Counters and the per-FIFO signals feed a controller that bridges the upper-level signals across the OPB clk / PLB clk domains.]
The controller:
– Creates the multiple control signals
– Manages the write switching
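The switching both set controllers manage can be captured in a small software model: the 64-bit side moves whole words into (or out of) both FIFOs at once, while the 32-bit side alternates between FIFO 1 and FIFO 2, hi half first. The depth, field names, and half ordering are illustrative assumptions, not the real core's interface.

```c
#include <stdint.h>

/* Software model of one PLB2OPB FIFO set with the mux switching the
 * controller performs on the 32-bit side. Depth is illustrative. */
#define DEPTH 8

typedef struct {
    uint32_t fifo1[DEPTH], fifo2[DEPTH];  /* hi and lo halves */
    unsigned wr, rd;                      /* word-granular pointers */
    int sel;                              /* 32-bit-side mux: 0 or 1 */
} fifo_set_t;

/* 64-bit side (PLB): one access writes both FIFOs at once. */
static int write64(fifo_set_t *s, uint64_t w) {
    if (s->wr - s->rd == DEPTH) return 0;     /* Full */
    s->fifo1[s->wr % DEPTH] = (uint32_t)(w >> 32);
    s->fifo2[s->wr % DEPTH] = (uint32_t)w;
    s->wr++;
    return 1;
}

/* 32-bit side (OPB): the mux alternates, hi half first. */
static int read32(fifo_set_t *s, uint32_t *out) {
    if (s->wr == s->rd) return 0;             /* Empty */
    if (s->sel == 0) {
        *out = s->fifo1[s->rd % DEPTH];       /* hi half */
    } else {
        *out = s->fifo2[s->rd % DEPTH];       /* lo half */
        s->rd++;                              /* word fully drained */
    }
    s->sel ^= 1;                              /* the read switching */
    return 1;
}
```

The OPB2PLB set is the mirror image: the DEC toggles `sel` on writes, and the 64-bit read drains both FIFOs together.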
19
Multiple devices
How about several I/O peripherals?
– A unified device:
  1) Use one "communication line" and protocol
– Several devices:
  2) A device for each peripheral
  3) Groups of 3 on a single IPIF
We currently aim for the 2nd solution; the 1st will be considered as well.
20
Tasks, scheduled (and completed?)
– Making 2 CPUs work simultaneously
– Implementing the buffer: FIFO, IPIF
– Making the 2 CPUs exchange data via a shared BRAM
– Debugging
– Unscheduled work: interrupts
21
Current project status
– PPC and MB system
– Buffer connected to the PLB and the OPB
– Two FIFOs for each direction, 8-bit data
– Full signal connected to the interrupts
– No controller yet; the PLB FIFO performs double writes and reads (next semester's goal)
22
Demonstration
We pass data between the CPUs via our system. There are 3 phases:
1. The CPUs pass a pair of numbers to each other
2. One CPU writes to the FIFO until it's full; then an interrupt occurs and the data is read
3. Repeat the 2nd phase the other way around
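Phase 2 above can be modeled end-to-end in software: a writer fills the FIFO until the Full signal asserts, Full raises the interrupt, and the "handler" drains everything and clears it. The depth and names are illustrative assumptions; the real demo uses the hardware FIFO's Full line wired to the CPU interrupt.

```c
#include <stdint.h>

/* Model of demo phase 2: write until Full asserts the interrupt,
 * then the handler drains the data. Depth is illustrative. */
#define FIFO_DEPTH 4

typedef struct {
    uint8_t data[FIFO_DEPTH];
    unsigned count;
    int irq;            /* Full signal wired to the interrupt line */
} demo_fifo_t;

static int demo_write(demo_fifo_t *f, uint8_t b) {
    if (f->count == FIFO_DEPTH) return 0;     /* already full */
    f->data[f->count++] = b;
    if (f->count == FIFO_DEPTH) f->irq = 1;   /* Full -> interrupt */
    return 1;
}

/* "Interrupt handler": read everything out, clearing Full/irq. */
static unsigned demo_drain(demo_fifo_t *f, uint8_t *out) {
    unsigned n = f->count;
    for (unsigned i = 0; i < n; i++) out[i] = f->data[i];
    f->count = 0;
    f->irq = 0;
    return n;
}
```

Phase 3 runs the same sequence with the writer and reader roles swapped between the PPC and the MB.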
23
Demonstration (cont.)
– Step 1: exchange data
– Step 2: test the MB interrupt
– Step 3: test the PPC interrupt
– Double writes & reads
24
Schedule
– HDL flow and connection to EDK: 2 weeks
– Writing a controller for IPIF2FIFO transactions: 2 weeks
– Writing a controller for the FIFO set: 2 weeks
– Unexpected bugs, issues, debugging: 2 weeks
– Implementing the protocols:
  – Ethernet: 3 weeks
  – 2 RS232 ports: 1 week
– Simulation C programs: 1 week
– Final tunings: 1 last week
25
Note: the trademarks and registered trademarks of the products appearing in this presentation are hereby recognized as the property of their respective owners.
Thank you for your attention!