[Bro] Multiple masters to ease the workload

Mark Buchanan mabuchan at gmail.com
Fri Jun 5 10:26:34 PDT 2015


I'm using all stock bro scripts in the test I have going - but adding some
intel indicators.

The most frequent message is "NB-DNS error in DNS_Mgr::Process
(recvfrom(): Connection refused)".  This machine does not have DNS access,
and the build we used put an out-of-service DNS server into
/etc/resolv.conf.  This error accounts for about 90% of what is in my
reporter.log.  I tried commenting out the /etc/resolv.conf entry and
restarting bro through broctl, but I am still seeing the errors.
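
For reference, here is roughly what I tried (a sketch only; the 10.0.0.53
address below is just a placeholder for the dead resolver that was in the
file):

    # Show which resolver(s) the host is configured to use
    cat /etc/resolv.conf

    # Test whether that resolver actually answers (placeholder IP)
    dig @10.0.0.53 example.com +time=2 +tries=1

    # Fully stop and start the cluster so every node re-reads the resolver config
    broctl stop
    broctl start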

The other significant share consists of miscellaneous base64 messages:

"incomplete base64 group, padding with <n> bits of 0" - ~5%
"extra base64 groups after '=' padding are ignored" - ~4%
"character <n> ignored by Base64 decoding" - less than ~1%

Mark






On Fri, Jun 5, 2015 at 11:40 AM, Hosom, Stephen M <hosom at battelle.org>
wrote:

>  You’ll both want to check reporter.log. In most cases, memory leaks are
> introduced by scripts (either built-in or custom) that error out. Errors
> in bro script-land can result in memory leaks, so you want to do your best
> to avoid them. If you’re willing to share your reporter.log, I could
> possibly help you fix some of the errors that you’re running into.
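>
> As a starting point, something like this (a rough sketch; it assumes
> bro-cut is on your PATH and that you run it in the directory holding the
> current reporter.log) will show which messages dominate:
>
>     # Count the most frequent reporter messages, most common first
>     bro-cut level message < reporter.log | sort | uniq -c | sort -rn | head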
>
>
>
> *From:* bro-bounces at bro.org [mailto:bro-bounces at bro.org] *On Behalf Of *Mark
> Buchanan
> *Sent:* Friday, June 05, 2015 12:09 PM
> *To:* Close, Jason M.
> *Cc:* bro at bro.org; Dave Crawford
>
> *Subject:* Re: [Bro] Multiple masters to ease the workload
>
>
>
> I too have noticed complete memory exhaustion in Bro 2.3.2 (not sure
> what version Jason is running).  If the workers are not restarted every
> few days, or at least once a week, I run out of usable memory on a few
> sensors I'm testing.
>
>
>
> I have found that just doing a restart within broctl will free up the
> consumed memory, but I regularly have to perform restarts to keep the
> sensors I am testing running smoothly.
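>
> A stopgap would be to schedule the restart from cron; a minimal sketch
> (the schedule is arbitrary and the path assumes the default /usr/local/bro
> install prefix) would be:
>
>     # Restart the whole Bro cluster every Sunday at 03:00
>     0 3 * * 0 /usr/local/bro/bin/broctl restart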
>
>
>
> Mark
>
>
>
> On Fri, Jun 5, 2015 at 7:08 AM, Close, Jason M. <close at ou.edu> wrote:
>
>   Thanks.
>
>
>
> I went ahead and rebooted the cluster, and that cleared things up (as well
> as sent out a LOT of emails…).
>
>
>
> Has anyone else noticed a memory leak in the sensors?  We slowly see
> memory usage grow, maybe by about 10GB a month, even when our total traffic
> has gone down.  I attached an image from our Zabbix monitor.  You can see
> that once we reboot the box, memory drops down, and then slowly creeps up.
> And traffic isn’t increasing (in fact, it decreases by half over the
> summer).
>
>
>
> *Jason Close*
>
> *Information Security Analyst*
>
> OU Information Technology
>
> Office: 405.325.8661  Cell: 405.421.1096
>
>
>
>
>
> *From: *Dave Crawford <dave at pingtrip.com>
> *Date: *Thursday, June 4, 2015 at 8:35 PM
> *To: *Jason Close <close at ou.edu>
> *Cc: *"bro at bro.org" <bro at bro.org>
> *Subject: *Re: [Bro] Multiple masters to ease the workload
>
>
>
> Is it actually 100% RAM usage by applications? Since the manager can be
> performing a significant amount of disk writes, the kernel will allocate
> ‘free’ memory as ‘cached’ to improve file I/O performance. The cached
> memory is released when applications demand more memory.
>
>
>
> Below is the current memory usage on one of my managers that is handling
> 25 workers and 2 proxies. At first glance it appears that all the memory
> has been consumed, but notice that 122G of it is cached.
>
>
>
>              total       used       free     shared    buffers     cached
>
> Mem:          126G       125G       384M         0B       329M       122G
>
> -/+ buffers/cache:       2.6G       123G
>
> Swap:          33G         0B        33G
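>
> If you want to see what the Bro processes themselves are holding, rather
> than the kernel cache, something along these lines helps (a sketch; broctl
> top reports per-node CPU and memory, and the "-/+ buffers/cache" used
> column from free is the application-only view):
>
>     # Per-node CPU and memory for the Bro processes, run from the manager
>     broctl top
>
>     # Human-readable free; the "-/+ buffers/cache" row excludes page cache
>     free -h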
>
>
>
>
>
> -Dave
>
>
>
>  On Jun 2, 2015, at 10:29 AM, Close, Jason M. <close at ou.edu> wrote:
>
>
>
> Our current configuration is showing heavy load on the master node.  We
> currently run around 6 worker nodes that feed data to the master, and
> while the master is keeping up in terms of CPU, it is consistently
> teetering on using all the RAM we can throw at it (128GB at the moment).
> There are plans in place to increase our available bandwidth 10-fold, so
> the traffic coming to Bro will ramp up as well.
>
>
>
> We could split the subnets apart and create multiple Bro clusters, but it
> would be nicer to keep a single cluster and be able to continue throwing
> more workers and managers at it.  However, I have not seen any
> documentation about configurations using multiple managers.  If that does
> exist, can someone point me in the right direction?
>
>
>
> And if that doesn’t exist, can I get some suggestions for mitigating this
> problem?  I know there are a lot of cool things being done with Bro,
> especially using scripts and APIs to reduce the traffic being thrown at
> it.  But given the taps we have in place and our current manpower,
> spinning up a little more hardware would be a much easier and more
> economical investment of our time right now.
>
>
>
> Thanks.
>
>
>
> Jason
>
> _______________________________________________
> Bro mailing list
> bro at bro-ids.org
> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro
>
>
>
>
>
>
>
>
> --
>
> Mark Buchanan
>



-- 
Mark Buchanan