[Bro] Fwd: Packet loss

John Edwards jedwards2728 at gmail.com
Tue Oct 25 12:40:21 PDT 2016


---------- Forwarded message ----------
From: *John Edwards* <jedwards2728 at gmail.com>
Date: Wednesday, 26 October 2016
Subject: Packet loss
To: Johanna Amann <johanna at icir.org>


I am using the Ubuntu bridge utils ("brctl") to bond the interfaces. I'm using
Dell R610 servers with a 16-core CPU and 24 GB of RAM and plenty of disk,
feeding all of the Bro logs into Splunk.
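
For reference, a minimal sketch of the kind of brctl setup described above,
assuming the tap's Rx and Tx legs arrive on eth2 and eth3 (the interface names
are assumptions, not taken from this thread):

    # Bridge the tap's Rx and Tx legs into a single monitor interface (assumed names).
    sudo brctl addbr br0
    sudo brctl addif br0 eth2
    sudo brctl addif br0 eth3
    # Bring everything up in promiscuous mode so no frames are filtered.
    sudo ip link set dev eth2 up promisc on
    sudo ip link set dev eth3 up promisc on
    sudo ip link set dev br0 up promisc on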

So if I had a manager configured with the bonded interface and pf_ring, it
would distribute the load over two workers directly connected to the
manager?

Thanks
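
For what it's worth, a minimal node.cfg sketch of one way to read that layout,
assuming PF_RING is installed and Bro is built against it (the host address,
interface name, and worker count below are assumptions):

    # node.cfg: manager, proxy, and two PF_RING-balanced worker processes
    # on the same box, all sniffing the bridged interface br0.
    [manager]
    type=manager
    host=10.0.0.1

    [proxy-1]
    type=proxy
    host=10.0.0.1

    [worker-1]
    type=worker
    host=10.0.0.1
    interface=br0
    lb_method=pf_ring
    lb_procs=2

With lb_procs=2, BroControl starts two worker processes and PF_RING splits the
traffic between them; the manager itself does not process packets.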

On Wednesday, 26 October 2016, Johanna Amann <johanna at icir.org> wrote:

> Workers are different from proxies. If there really is a peak around 1 Gb/s,
> you will need a number of workers (depending on your hardware and traffic,
> I would guess more than one). Furthermore, you will probably need a method to
> split that traffic between workers; usually people use either pf_ring,
> af_packet, or specialized NICs. I don't know how that will work together
> with whatever software you use to create your bonded interface.
>
> Johanna
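
As an aside, once a cluster along those lines is running, BroControl can report
per-worker receive and drop counters, which makes it easy to check whether the
traffic is actually being split. A minimal sketch using standard broctl commands:

    broctl check       # validate the configuration
    broctl install     # push the configuration out
    broctl restart     # restart the nodes
    broctl netstats    # per-node received/dropped packet counters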
>
> On 25 Oct 2016, at 12:16, John Edwards wrote:
>
>> I was running it in cluster mode, with one worker, a proxy, and a manager
>> configured, when I first noticed this in the logs. I then changed it back
>> to a standalone configuration.
>>
>> So if I had the manager connected to the tap, could I then have two workers
>> directly connected to process the throughput, to see if I can get a better
>> dropped packet rate?
>>
>> Thanks for the information
>>
>> On Wednesday, 26 October 2016, Johanna Amann <johanna at icir.org> wrote:
>>
>>> Just to check - are you running Bro in cluster mode? A 1 Gb tap is
>>> probably too much for a single process to handle.
>>>
>>> Apart from that, at first glance, it really just looks like Bro cannot
>>> keep up with processing packets. If packets come in bursts, that might
>>> be one reason why the CPU load looks OK while there is huge packet loss.
>>>
>>> Johanna
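
One cheap way to track this over time is the capture-loss policy script that
ships with Bro 2.x; it estimates loss from gaps in TCP streams and logs it per
peer. A minimal sketch, added to local.bro:

    # local.bro
    # Estimate and log capture loss (writes capture_loss.log and can raise
    # a CaptureLoss::Too_Much_Loss notice).
    @load misc/capture-loss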
>>>
>>> On Mon, Oct 24, 2016 at 04:39:07PM +1000, John Edwards wrote:
>>>
>>>> Hi all
>>>>
>>>> I have just deployed Bro onto two systems on my border gateway. They sit
>>>> off a tap, and each system has individual Rx and Tx interfaces bridged
>>>> using brctl. I am not seeing any dropped packets or interface errors on
>>>> the Ubuntu host via ifconfig.
>>>>
>>>> When looking at my data within Bro, which monitors br0 in a standalone
>>>> configuration, I see the line below repeated a few times throughout
>>>> notice.log:
>>>>
>>>> 1477283201.681213   -   -   -   -   -   -   -   -   -
>>>>     PacketFilter::Dropped_Packets
>>>>     2739608 packets dropped after filtering, 12351460 received, 12351686 on link
>>>>     -   -   -   -   -   bro   Notice::ACTION_LOG   3600.000000   F
>>>>     -   -   -   -   -
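
(For scale, that notice reports roughly 2.7 million dropped packets against
about 12.35 million seen on the link, i.e. on the order of 20% loss.)
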
>>>> We seem to be getting lots of data, and as far as CPU and memory
>>>> consumption goes, the system is not under strenuous load. I haven't
>>>> changed much of the configuration of the 2.4.1 build.
>>>>
>>>> Sorry if this has been discussed or asked before, but what can I look at
>>>> optimising or tuning to reduce the packet loss?
>>>>
>>>> In one thread I found, the problem wasn't Bro's but the tap's, and a
>>>> software upgrade fixed it. I cannot do that here, as this tap has no
>>>> software to tune. It's a VSS active 1 Gb tap; it doesn't seem to be the
>>>> tap at this stage, but it quite possibly could be :)
>>>>
>>>> Thanks
>>>> John
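
On the earlier question about what to tune: one commonly recommended first
step for dedicated capture interfaces is disabling NIC offloads, since GRO/LRO
coalescing and checksum offload can interfere with packet capture. This is
general practice rather than something from this thread, and the interface
names below are assumptions:

    # Disable offloads on both tap legs feeding the bridge (assumed names).
    sudo ethtool -K eth2 gro off lro off tso off gso off rx off tx off
    sudo ethtool -K eth3 gro off lro off tso off gso off rx off tx off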