Netgames Plugin Issues
John DeHart
Slide 2 - JDD - 6/13/2016
[Block diagram: ONL NP Router. Data path: Rx (2 MEs) -> Mux (1 ME) -> Parse, Lookup, Copy (3 MEs, with TCAM and ZBT-SRAM associated-data lookups) -> QM (1 ME) -> HdrFmt (1 ME) -> Tx (1 ME), supported by FreeList Mgr (1 ME) and Stats (1 ME). Plugins 0-4 sit between PLC and Mux; the XScale (3 rings?) handles Plugin-to-XScale Ctrl, Update & RLI messages, and LD except errors. Ring types: NN rings, 512W Scratch rings, 512W small SRAM rings, 64KW large SRAM rings. Legend: New / Needs A Lot Of Mod. / Needs Some Mod. / Mostly Unchanged.]
Slide 3
[Block diagram: same NP Router pipeline (Rx 2 MEs, Mux 1 ME, Parse/Lookup/Copy 3 MEs, QM 1 ME, HdrFmt 1 ME, Tx 1 ME, FreeList Mgr 1 ME, Stats 1 ME, Plugins 0-4, XScale) with ring sizes annotated: three 64KW SRAM rings, four 256W Scratch rings, and one 512W Scratch ring.]
Slide 4
Just Under Overload:
» Rcv Rate: Mpkts/sec
  Measured by MUX counting packets it reads from the Rx input ring
» Rcv Drop Rate: 0 pkts/sec
  Counted by Rx when it finds the ring to MUX full
» Total Pkt Rate arriving at Rx: = Mpkts/sec
» Plugin to Mux Rate: Mpkts/sec
  Counted by the Plugin when it puts a packet into the Plugin-to-Mux ring
» PLC to Plugin Drop Rate: 0 pkts/sec
  Counted by PLC when it finds a Plugin ring full
» QM Drop Rate: 2.3 Mpkts/sec
  Queues default to 32KB
» Tx Rate: 1 Mpkts/sec (0.500 Mpkts/sec per port, 840 Mb/s per port)
» Tx Drop Rate: 0 pkts/sec

Just Into Overload:
» Rcv Rate: 1.07 Mpkts/sec
  Measured by MUX counting packets it reads from the Rx input ring
» Rcv Drop Rate: Mpkts/sec
  Counted by Rx when it finds the ring to MUX full
» Total Pkt Rate arriving at Rx: = Mpkts/sec
» Plugin to Mux Rate: Mpkts/sec
  Counted by the Plugin when it puts a packet into the Plugin-to-Mux ring
» PLC to Plugin Drop Rate: Mpkts/sec
  Counted by PLC when it finds a Plugin ring full
» QM Drop Rate: 0 pkts/sec
  Queues default to 32KB
» Tx Rate: Mpkts/sec
» Tx Drop Rate: Mpkts/sec
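The per-port figures above can be cross-checked with a little arithmetic. A minimal sketch, assuming (per the notes on the next slide) that rates are measured on IP packet size, i.e. the 248 B Ethernet frame minus the 14 B header, 4 B CRC, 12 B IFS, and 8 B preamble:

```python
# Sanity check: per-port Tx bit rate implied by the per-port packet rate.
# IP packet size = Ethernet frame minus link-layer overheads (values from
# the frame-size breakdown on the Notes slide).
ip_pkt_bytes = 248 - 14 - 4 - 12 - 8      # 210 B IP packet
per_port_pkts_per_sec = 0.5e6             # 0.500 Mpkts/sec per port
per_port_mbps = per_port_pkts_per_sec * ip_pkt_bytes * 8 / 1e6
print(per_port_mbps)                      # 840.0, matching the 840 Mb/s above
```

So the reported 0.500 Mpkts/sec per port and 840 Mb/s per port are consistent with a 210 B IP packet.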
Slide 5
Notes:
» Using 5 MEs for the plugin
» Each Plugin ME is reading from Plugin Input Ring 0
» Making copies going to just ports 0 and 1 (fanout = 2) for this experiment
  A packet from port x will not have a copy going back to port x, so we are multiplying Rx packets by 8/5.
» Each port is using just 1 queue
» Ethernet frame size: 248 B
  UDP payload: 182 B
    Application payload: 150 B
    Application header: 32 B
  UDP/IP header: 28 B
  Ethernet header: 14 B
  Ethernet CRC: 4 B
  Ethernet IFS: 12 B
  Ethernet preamble: 8 B
» Link rate is set to 846 Mb/s (measured on IP pkt size)
  846 Mb/s * (248 B / 210 B) = Mb/s
» With the Workbench attached in an overload condition, the Mux-to-PLC ring seems to fill up first
  This tends to agree with our results from the ONL SIGCOMM paper, in which PLC was the bottleneck under heavy load
» Once we are in overload and rates have "collapsed", I have to drop the input rate below the Rx rate that MUX reports for things to recover. But it does recover.
» Occasionally there is another type of failure mode: PLC starts dropping lots of packets because the XScale ring is full. PLC should NOT be sending anything to the XScale once the test has been running for a while. This failure mode is not one we can recover from; we have to reboot.

Good News:
» The plugin and the rest of the system keep up with an input rate that is enough to fill the output links
  Also true for a fanout of 4

Question(s):
» Why does the rate processed drop so drastically in overload?
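The 8/5 multiplier and the wire-rate scaling above can be sketched with a small calculation. This assumes traffic arrives evenly across the 5 input ports (an assumption not stated on the slide):

```python
# Average copies per Rx packet: copies go only to ports 0 and 1,
# and a packet never gets a copy back to its arrival port.
in_ports = range(5)
copy_ports = {0, 1}
copies = [len(copy_ports - {p}) for p in in_ports]   # [1, 1, 2, 2, 2]
avg_copies = sum(copies) / len(copies)
print(avg_copies)        # 1.6 = 8/5, the multiplier from the slide

# Wire rate implied by the 846 Mb/s link rate measured on IP packet size:
# each 210 B IP packet occupies 248 B on the wire, so scale by 248/210.
wire_mbps = 846 * 248 / 210
print(round(wire_mbps, 1))
```

The scaling shows why 846 Mb/s measured on IP packet size is effectively a full gigabit link once link-layer overheads are counted.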