[Bro] Myricom and Bro... show of hands for successful deployments on 10G links (with > 5Gbps)

Michal Purzynski michalpurzynski1 at gmail.com
Sat Aug 23 07:38:59 PDT 2014

On 8/22/14, 3:27 PM, Harry Hoffman wrote:
> Hi Vlad,
> Absolutely. Sorry if that was vague or cryptic.
> What I meant was that using the Myricom test utilities I can capture 
> everything on the wire. These utilities don't write to disk, so they 
> only show that there isn't an issue with NIC-to-memory transfers.
> Once I fire up Bro, one worker consistently pegs a core at 100% and I 
> drop more than half of the packets. The drop rate isn't as severe 
> with tools like tcpdump, but I assume that's due to the additional 
> per-packet processing that Bro does.
That most likely means you are not using the Myricom API to capture 
packets. I've seen the symptoms you're describing.

Please send the output of

ldd `which bro` | egrep -i '(myri|snf)'
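For context, here is a simulated run of that check (the library paths below are illustrative, not from a real sensor; on the actual box you would pipe real `ldd` output). A Bro binary built against the Myricom SNF API should print a libsnf line; no match means Bro is almost certainly capturing through plain libpcap instead:

```shell
# Simulated ldd output; paths are hypothetical examples.
# On the sensor itself, run:  ldd "$(which bro)" | egrep -i '(myri|snf)'
printf 'libpcap.so.1 => /usr/lib/libpcap.so.1\nlibsnf.so.0 => /opt/snf/lib/libsnf.so.0\n' \
  | egrep -i '(myri|snf)'
# A libsnf.so line in the output indicates SNF-aware capture.
```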
> All of this is running on a Dell R710 with two Xeon CPUs at 2.8GHz 
> with 6 cores each (HT disabled), 96GB of RAM, and two 700GB SSD drives 
> for data. We moved to the Dell specifically to test whether or not 
> using SSD drives gave a performance boost in writing to disk.
> We're using the Myricom tools (/opt/snf/bin/myri_counters) to 
> determine dropped packets, via the "SNF drop ring full" counter, due 
> to the application (tcpdump, bro, etc.) being too slow to grab packets 
> from the ring buffer.
> As an initial, memory-only test, we've run 
> /opt/snf/bin/tests/snf_simple_recv and 
> /opt/snf/bin/tests/snf_multi_recv. Both run without any drops, and the 
> output shows an average of 7Gbps on the wire. Running either test for 
> extended periods of time does not cause the "SNF drop ring full" 
> counter to increment.
> /usr/local/bro/etc/node.cfg looks like the following (as you can see, 
> we're attempting to tweak performance via the various SNF env 
> variables; we noticed no difference using pin_cpus):
> [manager]
> type=manager
> host=localhost
> #
> [proxy-1]
> type=proxy
> host=localhost
> #
> [worker-1]
> type=worker
> host=localhost
> interface=p1p1
> lb_method=myricom
> lb_procs=10
> #pin_cpus=2,3,4,5,6,7,8,9,10,11
Oh man, that's way too small. I'll check in more detail later, but I'm 
running with a data ring a few GB in size.

Fortunately the ring is shared across all processes, so a 16GB ring 
times 16 processes does not use 256GB of RAM ;)

There's some info in the Myricom docs on how the dataring size and 
descring size should relate to each other; I believe the ratio was 4:1.
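As a rough sketch of that guidance: the variable names SNF_DATARING_SIZE and SNF_DESCRING_SIZE come from the SNF documentation, but the 4GB/1GB values below are illustrative, not tuned recommendations, and in a broctl deployment they would be passed to the workers rather than exported in an interactive shell:

```shell
# Illustrative sizes only: a 4GB data ring with a 1GB descriptor ring,
# keeping the 4:1 dataring:descring ratio mentioned above (values in bytes).
export SNF_DATARING_SIZE=$((4 * 1024 * 1024 * 1024))   # 4294967296
export SNF_DESCRING_SIZE=$((1 * 1024 * 1024 * 1024))   # 1073741824
```

And since the ring is shared, a 4GB data ring costs 4GB of RAM total, not 4GB per lb_proc.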

