[Xorp-hackers] XORP IPC mechanism is stressed by data plane traffic

Pavlin Radoslavov pavlin@icir.org
Tue, 22 Mar 2005 20:01:34 -0800


> I have a problem that might be related to the XORP architecture.
> 
> I ported the multicast part of the XORP platform to my target board.
> Recently I did some stress testing of the ported system and found
> that when the board is fed multicast packets beyond its processing
> power, the XORP components' XRL communication is affected, which
> results in at least the following problems observed so far:
> 
>  * Configuring XORP components on the fly through call_xrl always
>    fails (XrlPFSTCPSender::die with errno EWOULDBLOCK/ENOBUFS...).

Have you tried using the rtrmgr for configuration purposes?
It uses in-process XRL generation instead of the call_xrl binary
(e.g., starting the router with something like
"rtrmgr -b your_config.boot"; check the exact flags in your tree).
I don't know whether the in-process XRL generation will make any
difference, but if it does, this may give you some clues about how
to solve the problem.

>  * Because I configure the XORP components through call_xrl, when
>    starting the XORP system in a high-volume multicast traffic
>    environment, all components started after the MFEA fail
>    (because the MFEA puts the vifs into multicast promisc mode).

By looking into the source code, it seems that the multicast
interfaces are put in promisc mode during startup of the MFEA.
Strictly speaking, this should happen only after a multicast routing
protocol registers with the MFEA.
Hence, one possible solution could be to refactor the MFEA so that
its multicast routing related operations are enabled only after the
first multicast routing protocol registers with the MFEA.
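
As a minimal C++ sketch of that idea (the class and method names
below are hypothetical placeholders, not the real MFEA code; only
the reference-counting pattern is the point):

    #include <set>
    #include <string>

    // Hypothetical sketch: enter multicast promisc mode only when the
    // first multicast routing protocol registers with the MFEA, and
    // leave it when the last one unregisters.
    class MfeaProtocolRegistry {
    public:
        void register_protocol(const std::string& name) {
            if (_protocols.empty())
                enable_multicast_vifs();   // first registrant
            _protocols.insert(name);
        }

        void unregister_protocol(const std::string& name) {
            _protocols.erase(name);
            if (_protocols.empty())
                disable_multicast_vifs();  // last registrant gone
        }

    private:
        void enable_multicast_vifs() {
            // Placeholder: the real MFEA would put the vifs into
            // multicast promisc mode here.
        }
        void disable_multicast_vifs() {
            // Placeholder: the real MFEA would restore the vifs here.
        }

        std::set<std::string> _protocols;  // registered protocols
    };

With this, components started after the MFEA would not compete with
promisc-mode traffic until a multicast routing protocol actually
comes up.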

> I have not yet tried to stress the system with unicast traffic,
> which might show a different picture, but I think the general cause
> is that the control plane IPC traffic is affected by the large
> volume of data plane traffic.

Can you test whether the unicast traffic also triggers the problem?
If yes, then the above MFEA refactoring won't help.

> So, if I only want to run all the XORP components on a single
> physical node, how can I avoid the above problem (besides optimizing
> the device driver):
> 
>   * Can the network stack (FreeBSD-like) differentiate internal IPC
>     traffic from external data traffic and implement internal
>     QoS-like resource reservation and processing?
> 
>   * Or, has anybody ever successfully tried to integrate the XORP
>     components using XrlPFInProcListener+XrlPFInProcSender?

I think that to switch on the InProc XRLs, you need to set the
following variable in your environment (csh syntax; the Bourne-shell
equivalent is "export XORP_PF=i"):
setenv XORP_PF i

However, you then need a mechanism to initiate the in-process XRLs
from within your router binary (obviously, you cannot use call_xrl
for in-process communication :)
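
To illustrate that last point, here is a minimal sketch of a combined
router binary that selects the InProc transport before any XRL
sender/listener is created; only the standard setenv(3) call is used,
and the comment marks where the actual XRL setup (not written out
here) would go:

    #include <cstdlib>

    int main()
    {
        // Select the InProc XRL protocol family before any XRL
        // sender/listener is constructed; this is the in-binary
        // equivalent of "setenv XORP_PF i" in the shell.
        setenv("XORP_PF", "i", 1);

        // ... construct the event loop and the XRL routers/targets
        // for all the XORP processes here, inside this one binary,
        // so the XRLs never leave the process ...

        return 0;
    }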

Regards,
Pavlin

> 
>   * Any other suggestions to solve/alleviate the problem?
> 
> 
> I'll be very grateful if anyone could throw light on this issue. 
> 
> 
> Thanks
> Eddy
> 
> 
> _______________________________________________
> Xorp-hackers mailing list
> Xorp-hackers@icir.org
> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/xorp-hackers