[Bro] Hardware recommends

Slagell, Adam J slagell at illinois.edu
Wed Jan 27 04:57:58 PST 2016


Which document? We should update that. 

> On Jan 26, 2016, at 9:18 PM, Gary Faulkner <gfaulkner.nsm at gmail.com> wrote:
> 
> The Bro architecture documents still seem to suggest you can only
> process 80Mb/s or so of traffic per core, but even at 2.6 - 2.7 GHz you
> end up getting closer to 250-300Mb/s+. 3.1 GHz will boost this a bit
> and allow you to handle slightly larger flows per core, but you may be
> able to get many more cores on a single host at 2.6 GHz for similar or
> less money. I'd just be wary of optimizing too heavily for core count
> and ending up with 1.8GHz clocks. If you are doing good flow shunting
> up front, I think you are likely to stray into the territory of more,
> smaller flows, which probably lends itself better to having more
> moderately clocked cores than fewer, slightly higher-clocked cores.
> 
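Gary's numbers make for an easy back-of-the-envelope check. A minimal
sketch in Python, assuming the ~250Mb/s-per-worker figure above (an
estimate, not a benchmark of any particular box):

    import math

    # Back-of-the-envelope worker count from the per-core figures above.
    # Both numbers are assumptions, not measurements of your hardware.
    link_mbps = 10000        # monitored link: 10Gbps
    per_worker_mbps = 250    # assumed per-worker throughput at ~2.6GHz

    workers = math.ceil(link_mbps / per_worker_mbps)
    print(workers)           # -> 40 workers to keep up with line rate

Forty workers to keep up with a full 10Gbps link is exactly why a
single host tends to run out of cores long before a faster NIC pays
off, as Gary notes below.
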
> Also, unless you are considering ludicrous core counts per machine you
> might find you are oversubscribing your server long before you can take
> advantage of 40Gbps or 100Gbps NICs over a 10Gbps NIC. I've been fairly
> happy with Intel 10Gbps NICs and PF_RING DNA/ZC, but some prefer to use
> Myricom to avoid dealing with third-party drivers for Intel NICs. Be
> wary of artificial worker limits when using RSS or vendor-provided
> host-based load balancing (Myricom comes to mind). There are cases
> where folks have not been able to take full advantage of their server
> hardware without running additional NICs or other workarounds, because
> the cards had fewer queues/rings than the host had cores.
> 
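For reference, spreading workers across PF_RING rings is a node.cfg
setting in BroControl. A minimal sketch, with the hostname, interface,
and counts as placeholders to adapt:

    [manager]
    type=manager
    host=sensor1

    [proxy-1]
    type=proxy
    host=sensor1

    [worker-1]
    type=worker
    host=sensor1
    interface=eth1
    lb_method=pf_ring
    lb_procs=10
    pin_cpus=2,3,4,5,6,7,8,9,10,11

lb_procs is where the queue/ring limit Gary mentions bites: the card
and driver have to supply at least that many rings, or the extra
workers buy you nothing.
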
> On a side note: I found out a lot of interesting things about how my
> sensors were performing, as well as my upstream load-balancer, by using
> Justin's statsd plugin (assuming your upstream shunting doesn't throw
> off the output) to send the capture-loss script output to a time-series
> DB and graph it. For example, I discovered that a port going to an
> unrelated tool was becoming oversubscribed over the lunch hour, causing
> back-pressure on the load-balancer that translated to every worker on
> my Bro cluster reporting 25-50% loss, even though Bro should have been
> seeing relatively little traffic and was itself not oversubscribed. In
> that case I found it desirable to have an extra 10G NIC in each server
> so that the tool, not the load-balancer, gets oversubscribed until I
> can add more capacity to the tool and better spread the load.
> 
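If you don't want to run the plugin, the same effect is easy to
approximate by hand, since statsd's wire format is plain text over
UDP. A rough Python sketch; the log path, statsd address, and field
positions are assumptions based on the default capture_loss.log
layout:

    import socket

    STATSD = ("127.0.0.1", 8125)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    # Default capture_loss.log fields:
    #   ts, ts_delta, peer, gaps, acks, percent_lost
    with open("/usr/local/bro/logs/current/capture_loss.log") as f:
        for line in f:
            if line.startswith("#"):
                continue  # skip Bro's log-format header lines
            fields = line.rstrip("\n").split("\t")
            peer, percent_lost = fields[2], fields[5]
            # statsd gauge: "<name>:<value>|g" sent over UDP
            msg = "bro.capture_loss.%s:%s|g" % (peer, percent_lost)
            sock.sendto(msg.encode(), STATSD)

Graphed per worker, a correlated spike across every worker points at
the load-balancer or tap rather than at Bro itself, which is exactly
the failure mode Gary describes.
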
>> On 1/26/2016 2:34 PM, James Lay wrote:
>>> On 2016-01-26 10:44, James Lay wrote:
>>> And on the heels of the NIC question, how about hardware experiences?
>>> I'm looking at the PCIe2 NICs from both Myricom and Netronome... any
>>> recommendations for the server hardware to wrap around these cards?
>>> The plan is to have this machine monitor a corporate LAN... lots of
>>> traffic. Guessing the team will want to go Dell if that helps. Thanks
>>> for the advice all.
>>> 
>>> James
>> Thanks for the great information, all... it really does help.
>> 
>> James
> 


