[Bro] Update on using PF_RING/TNAPI with Bro
sstattla at gmail.com
Thu Dec 16 17:30:40 PST 2010
Okay, even if this works, I don't think you'll see a gain in performance.
To get any real performance gain in this case, let's say you have 4 cores:
Core 0 running Bro Manager,
Core 1 running Bro Proxy,
Core 2 running Bro Worker1,
Core 3 running Bro Worker2.
For max performance (and really any performance gain) through cache
localization, you'd want all traffic input to Bro to go to either Core 2
or Core 3, with each of these cores coupled to its own RX queue. You will
somehow need to split the traffic coming off the wire at the driver
layer so that it goes to one of these queues (intelligently, so that
packets sharing state land in the same RX queue). This has to be done
with RSS, and I have no idea how to do this on my network card, an Intel
82598EB. (You couldn't use Click to do this, because you want to do it
at the driver layer.)
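To illustrate the "packets sharing state go to the same RX queue" requirement, here is a hypothetical sketch (not from the thread) of a symmetric flow hash. Real RSS computes a Toeplitz hash in NIC hardware; the function name and logic below are purely illustrative of the symmetry property the steering needs.

```python
# Illustrative only: a symmetric per-flow hash, so that both directions
# of a connection (A->B and B->A) map to the same RX queue. Hardware RSS
# uses a Toeplitz hash; this just demonstrates the required property.

def queue_for(src_ip, src_port, dst_ip, dst_port, n_queues=2):
    # Sort the two endpoints so the hash is direction-independent.
    a, b = sorted([(src_ip, src_port), (dst_ip, dst_port)])
    return hash((a, b)) % n_queues

# Both directions of the same connection land on the same queue:
q1 = queue_for("10.0.0.1", 1234, "10.0.0.2", 80)
q2 = queue_for("10.0.0.2", 80, "10.0.0.1", 1234)
assert q1 == q2
```

Without this symmetry, the two halves of a TCP connection could end up on different workers, which defeats the per-flow state sharing the post is after.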
What do you think?
On 10-12-15 7:14 PM, Justin Azoff wrote:
> On Wed, Dec 15, 2010 at 05:11:18PM -0500, Sunjeet Singh wrote:
>> Yes, that's a great idea. But I'm not sure how Bro would handle
>> manager-proxy-worker communication between different RX queues instead
>> of different interfaces. It can't be as simple as writing eth0@1, etc.,
>> in the cluster's node.cfg file. Maybe some changes to Bro code?
> Putting eth0@1,2,3,4 etc. in node.cfg should work just fine.
> No changes to Bro are needed, but you may have to rebuild Bro with
> ./configure --enable-cluster...
> the config I use with click just has: