[Xorp-hackers] XORP IPC mechanism is stressed by data plane traffic

edrt edrt@citiz.net
Tue, 22 Mar 2005 22:02:53 +0800


Hi XORP developers,


I have a problem that might be related to the XORP architecture.

I ported the multicast part of the XORP platform to my target board.
Recently I did some stress testing of the ported system and found
that when the board is fed multicast packets that overload its
processing power, the XORP components' XRL communication is
affected. This results in at least the following problems, which I
have observed so far:

 * Configuring XORP components on the fly through call_xrl always
   fails (XrlPFSTCPSender::die with errno EWOULDBLOCK/ENOBUFS...);
   see the sketch after this list.

 * Because I configure XORP components through call_xrl, when the
   XORP system is started in a high-volume multicast traffic
   environment, all components started after the MFEA fail to come
   up (because the MFEA puts the vif into multicast promiscuous
   mode).
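
To make the first point concrete, here is a minimal sketch of the
handling I would have expected instead of dying on a transient
error. This is my own illustration, not XORP's actual
XrlPFSTCPSender code: on a non-blocking socket, EWOULDBLOCK and
ENOBUFS only mean "try again later", so the sender could wait for
writability and retry rather than tear the connection down:

    #include <cerrno>
    #include <sys/types.h>
    #include <sys/select.h>
    #include <sys/socket.h>

    /* Retry transient send errors instead of treating them as fatal. */
    ssize_t
    send_with_retry(int fd, const char* buf, size_t len)
    {
        size_t off = 0;
        while (off < len) {
            ssize_t n = send(fd, buf + off, len - off, 0);
            if (n >= 0) {
                off += static_cast<size_t>(n);
                continue;
            }
            if (errno == EWOULDBLOCK || errno == EAGAIN
                || errno == ENOBUFS) {
                /* Socket buffer (or kernel mbufs) temporarily full:
                 * wait for writability with a bounded back-off,
                 * then retry. */
                fd_set wfds;
                FD_ZERO(&wfds);
                FD_SET(fd, &wfds);
                struct timeval tv = { 1, 0 };   /* wait up to 1s */
                if (select(fd + 1, NULL, &wfds, NULL, &tv) < 0
                    && errno != EINTR)
                    return -1;                  /* select() failed */
                continue;
            }
            if (errno == EINTR)
                continue;                       /* interrupted: retry */
            return -1;                          /* genuinely fatal */
        }
        return static_cast<ssize_t>(off);
    }

Of course the real sender is asynchronous, so the wait would have to
go through the event loop rather than a blocking select(); the sketch
is only meant to show which errno values I think are transient.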

I have not yet tried to stress the system with unicast traffic,
which might give a different picture, but I think the underlying
cause is that the control-plane IPC traffic is affected by the
large volume of data-plane traffic.

So, if I only want to run all the XORP components on a single
physical node, how can I avoid the above problems (besides
optimizing the device driver)? In particular:

  * Can the network stack (FreeBSD-like) differentiate internal IPC
    traffic from external data traffic and implement internal
    QoS-like resource reservation and processing? (See the tuning
    sketch after this list.)

  * Or, has anybody ever successfully integrated the XORP
    components using XrlPFInProcListener+XrlPFInProcSender?

  * Any other suggestions to solve/alleviate the problem?
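
For the first question, this is the kind of per-socket tuning I have
in mind, using only standard setsockopt() calls (the descriptor name
xrl_fd is hypothetical): enlarge the control socket's buffers and
mark its packets low-delay, so the stack can at least tell the IPC
traffic apart from bulk data traffic:

    #include <netinet/in.h>
    #include <netinet/ip.h>
    #include <sys/socket.h>

    /* Tune an XRL control socket: larger send/receive buffers, and
     * a low-delay TOS so the stack can distinguish control traffic
     * from data-plane traffic. */
    int
    tune_control_socket(int xrl_fd)
    {
        int bufsz = 256 * 1024;
        if (setsockopt(xrl_fd, SOL_SOCKET, SO_SNDBUF,
                       &bufsz, sizeof(bufsz)) < 0)
            return -1;
        if (setsockopt(xrl_fd, SOL_SOCKET, SO_RCVBUF,
                       &bufsz, sizeof(bufsz)) < 0)
            return -1;

        int tos = IPTOS_LOWDELAY;
        if (setsockopt(xrl_fd, IPPROTO_IP, IP_TOS,
                       &tos, sizeof(tos)) < 0)
            return -1;
        return 0;
    }

I realize the TOS marking only helps if the stack's internal queues
actually honor it, which is part of what I am asking about.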


I would be very grateful if anyone could shed some light on this issue.


Thanks
Eddy