[Bro] Bro with 10Gb NIC's or higher

Kristoffer Björk kristoffer.bjork at gmail.com
Mon Jan 26 23:01:30 PST 2015


It can be done with BPF; take a look at this:
https://www.bro.org/sphinx-git/scripts/policy/frameworks/packet-filter/shunt.bro.html
For high traffic volumes, though, it is usually better to do this in a
hardware device in front of your Bro machines.
I know some people are using Arista switches for load balancing and shunting,
but there are probably other devices that also work well for this.
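To make the idea concrete, here is a small Python sketch (editor's addition; the addresses, ports, and function name are invented for illustration) of what a shunt-style BPF expression looks like: it excludes the payload packets of known large flows while still letting TCP control packets through, so Bro can keep tracking connection state.

```python
# Build a BPF expression that drops the payload packets of known-large
# ("elephant") flows while still capturing TCP control packets
# (SYN/FIN/RST). The flow tuples passed in are made-up examples.

def shunt_bpf(flows):
    """flows: list of (src_ip, src_port, dst_ip, dst_port) tuples."""
    clauses = []
    for src, sport, dst, dport in flows:
        clauses.append(
            "(host {} and port {} and host {} and port {})".format(
                src, sport, dst, dport
            )
        )
    big = " or ".join(clauses)
    # tcp[13] is the TCP flags byte in BPF; 0x07 = FIN|SYN|RST.
    # Drop a packet only if it belongs to a big flow AND carries
    # none of those control flags.
    return "not (({}) and (tcp[13] & 0x7 == 0))".format(big)

print(shunt_bpf([("10.0.0.1", 50000, "10.0.0.2", 443)]))
```

The resulting string can be handed to any libpcap-based capture as its filter expression.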

//Kristoffer


On Tue, Jan 27, 2015 at 7:17 AM, Clement Chen <plutochen2010 at gmail.com>
wrote:

> Hi Aashish,
>
> Could you please elaborate a little bit more on the "shunting capability"?
> Do you mean using a BPF filter for bro? And what are some good filters for
> large/encrypted flows?
>
> Thanks.
>
> -Clement
>
> On Fri, Jan 9, 2015 at 12:26 PM, Aashish Sharma <asharma at lbl.gov> wrote:
>
>> > Do you really see and can handle 1Gbit/sec of traffic per core? I'm
>> curious.
>>
>> Haven't measured whether a core can handle 1 Gbit/sec, but I highly
>> doubt it.
>>
>> What saves us is the shunting capability: Bro identifies big flows and
>> cuts off the rest of each one by placing a (src, src-port, dst, dst-port)
>> ACL on the Arista, while continuing to allow control packets (and it
>> dynamically removes the ACL once the connection ends).
>>
>> So each core doesn't really see anything more than 20-40 Mbps
>> (approximately).
>>
>> (Note to self: it would be good to get these numbers in a plot.)
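The ACL lifecycle described above can be sketched in a few lines of Python (editor's addition; the function names and the Arista-style rule syntax are illustrative only, not taken from the actual react framework):

```python
# Sketch of the shunting ACL lifecycle: when Bro flags a big flow,
# install a deny rule for its 4-tuple on the switch; when the
# connection ends, remove the rule again. Pushing rules to a real
# switch (e.g. via its management API) is left out of this sketch.

def acl_rule(src, sport, dst, dport):
    # Arista/Cisco-style ACL entry syntax, shown for illustration.
    return "deny tcp host {} eq {} host {} eq {}".format(src, sport, dst, dport)

active = {}  # conn_id -> installed rule string

def on_big_flow(conn_id, src, sport, dst, dport):
    rule = acl_rule(src, sport, dst, dport)
    active[conn_id] = rule
    # ... push `rule` to the switch here ...

def on_connection_end(conn_id):
    # Dynamically remove the ACL entry once the connection ends.
    active.pop(conn_id, None)
    # ... remove the rule from the switch here ...
```

In the real deployment this role is played by Bro plus the react framework talking to the Arista, not a standalone script.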
>>
>> Thanks,
>> Aashish
>>
>> On Fri, Jan 9, 2015 at 12:01 PM, Michał Purzyński <
>> michalpurzynski1 at gmail.com> wrote:
>>
>>> Do you really see and can handle 1Gbit/sec of traffic per core? I'm
>>> curious.
>>>
>>> I would say, with a 2.6GHz CPU, my educated guess would be somewhere
>>> around 250 Mbit/sec per core with Bro. Of course, configuration is
>>> everything here; I'm just looking at "given you do it right, that's
>>> what's possible".
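A quick back-of-envelope check on these numbers (editor's addition, assuming the ~250 Mbit/sec per worker estimate above):

```python
# How many workers are needed to sustain a given link rate, assuming
# roughly 250 Mbit/sec per worker as estimated in the thread?
import math

def workers_needed(link_mbps, per_worker_mbps=250):
    return math.ceil(link_mbps / per_worker_mbps)

print(workers_needed(10_000))  # a full 10 Gbit/sec -> 40 workers
print(workers_needed(1_000))   # 1 Gbit/sec -> 4 workers
```

This is why shunting matters: if each worker only ever sees tens of Mbit/sec of post-shunt traffic, far fewer cores suffice.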
>>>
>>> On Fri, Jan 9, 2015 at 8:00 PM, Aashish Sharma <asharma at lbl.gov> wrote:
>>> > While we at LBNL continue to work towards formal documentation, I
>>> thought I'd reply now rather than cause further delays:
>>> >
>>> > Here is the 100G cluster setup we've done:
>>> >
>>> > - 5 nodes, each running 10 workers + 1 proxy
>>> > - 100G split by an Arista into 5x10G
>>> > - 10G on each node further split by Myricom into 10x1G per worker, with
>>> shunting enabled!
>>> >
>>> > Note: Scott Campbell did some very early work on the concept of
>>> shunting
>>> >             (http://dl.acm.org/citation.cfm?id=2195223.2195788)
>>> >
>>> > We are using the react framework, written by Justin Azoff, to talk to
>>> the Arista.
>>> >
>>> > With shunting enabled, the cluster isn't even truly seeing 10G anymore.
>>> >
>>> > Oh, by the way: Capture_loss is definitely a good policy to run. With
>>> the above setup we get ~0.xx% packet drops.
>>> >
>>> > (Depending on the kind of traffic you are monitoring, you may need
>>> slightly different shunting logic.)
>>> >
>>> >
>>> > Here are the hardware specs per node:
>>> >
>>> > - Motherboard: SM X9DRi-F
>>> > - Intel E5-2643V2 3.5GHz Ivy Bridge (2x6 = 12 cores)
>>> > - 128GB DDR3 1600MHz ECC/REG (8x16GB modules installed)
>>> > - 10G-PCIE2-8C2-2S+: Myricom 10G "Gen2" (5 GT/s) PCI Express NIC with
>>> two SFP+
>>> > - Myricom 10G-SR modules
>>> >
>>> > On the tapping side we have:
>>> > - Arista 7504 (gets fed 100G TX/RX + backup and other 10Gb links)
>>> > - Arista 7150 (symmetric hashing via DANZ, splitting TCP sessions
>>> 1/link, 5 links to nodes)
>>> >
>>> > On the Bro side:
>>> > - 5 nodes accepting 5 links from the 7150
>>> > - each node running 10 workers + 1 proxy
>>> > - Myricom splitting/load balancing to each worker on the node
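For readers unfamiliar with BroControl, a node.cfg for one node in a setup like this might look roughly as follows (editor's addition; hostnames and the interface name are invented, and `lb_method`/`lb_procs` are the standard BroControl load-balancing options):

```ini
# Hypothetical node.cfg fragment for one of the five nodes.
[manager]
type=manager
host=manager.example.org

[proxy-1]
type=proxy
host=node-1.example.org

[worker-1]
type=worker
host=node-1.example.org
interface=eth2
lb_method=myricom
lb_procs=10
```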
>>> >
>>> >
>>> > Hope this helps,
>>> >
>>> > let us know if you have any further questions.
>>> >
>>> > Thanks,
>>> > Aashish
>>> >
>>> > On Fri, Jan 09, 2015 at 06:20:17PM +0000, Mike Patterson wrote:
>>> >> You're right, it's 32 on mine.
>>> >>
>>> >> I posted some specs for my system a couple of years ago now, I think.
>>> >>
>>> >> 6-8GB per worker should give some headroom (my workers usually use
>>> about 5GB apiece, I think).
>>> >>
>>> >> Mike
>>> >>
>>> >> --
>>> >> Simple, clear purpose and principles give rise to complex and
>>> >> intelligent behavior. Complex rules and regulations give rise
>>> >> to simple and stupid behavior. - Dee Hock
>>> >>
>>> >> > On Jan 9, 2015, at 1:03 PM, Donaldson, John <donaldson8 at llnl.gov>
>>> wrote:
>>> >> >
>>> >> > I'd agree with all of this. We're monitoring a few 10Gbps network
>>> segments with DAG 9.2X2s, too. I'll add in that, when processing that much
>>> traffic on a single device, you'll definitely not want to skimp on memory.
>>> >> >
>>> >> > I'm not sure which configurations you're using that might be
>>> limiting you to 16 streams -- we run with at least 24 streams, and (at
>>> least with the 9.2X2s) you should be able to work with up to 32 receive
>>> streams.
>>> >> >
>>> >> > v/r
>>> >> >
>>> >> > John Donaldson
>>> >> >
>>> >> >> -----Original Message-----
>>> >> >> From: bro-bounces at bro.org [mailto:bro-bounces at bro.org] On Behalf
>>> Of
>>> >> >> Mike Patterson
>>> >> >> Sent: Thursday, January 08, 2015 7:29 AM
>>> >> >> To: coen bakkers
>>> >> >> Cc: bro at bro.org
>>> >> >> Subject: Re: [Bro] Bro with 10Gb NIC's or higher
>>> >> >>
>>> >> >> Succinctly, yes, although that proviso is a big one.
>>> >> >>
>>> >> >> I'm running Bro on two 10 gig interfaces, an Intel X520 and an
>>> Endace DAG
>>> >> >> 9.2X2. Both perform reasonably well. Although my hardware is
>>> somewhat
>>> >> >> underspecced (Dell R710s of differing vintages), I still get tons
>>> of useful data.
>>> >> >>
>>> >> >> If your next question would be "how should I spec my hardware",
>>> that's
>>> >> >> quite difficult to answer because it depends on a lot. Get the
>>> hottest CPUs
>>> >> >> you can afford, with as many cores as possible. If you're actually sustaining
>>> 10+Gb you'll
>>> >> >> probably want at least 20-30 cores. I'm sustaining 4.5Gb or so on
>>> eight 3.7GHz
>>> >> >> cores, but Bro reports 10% or so loss. Note that some hardware
>>> >> >> configurations will limit the number of streams you can feed to
>>> Bro, eg my
>>> >> >> DAG can only produce 16 streams so even if I had it in a 24 core
>>> box, I'd only
>>> >> >> be making use of 2/3 of my CPU.
>>> >> >>
>>> >> >> Mike
>>> >> >>
>>> >> >>> On Jan 7, 2015, at 5:04 AM, coen bakkers <cbakkers at yahoo.de>
>>> wrote:
>>> >> >>>
>>> >> >>> Does anyone have experience with higher-speed NICs and Bro? Will
>>> it
>>> >> >> sustain 10Gb speeds or more, provided the hardware is spec'd
>>> appropriately?
>>> >> >>>
>>> >> >>> regards,
>>> >> >>>
>>> >> >>> Coen
>>> >> >>> _______________________________________________
>>> >> >>> Bro mailing list
>>> >> >>> bro at bro-ids.org
>>> >> >>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro
>>> >
>>> > --
>>> > Aashish Sharma  (asharma at lbl.gov)
>>> > Cyber Security,
>>> > Lawrence Berkeley National Laboratory
>>> > http://go.lbl.gov/pgp-aashish
>>> > Office: (510)-495-2680  Cell: (510)-612-7971
>>> >
>>>
>>
>>
>>
>
>
>

