[Bro] Bro Log ingestion

Hosom, Stephen M hosom at battelle.org
Wed Sep 17 04:54:38 PDT 2014


Jonathon,

As a nit-pick, just because the files are owned by root doesn’t mean they aren’t world-readable. ☺ The absolute simplest solution for making the logs viewable by non-root users is to scp them to a centralized server, but I’m guessing you want something a little fancier than that.
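For the simple case, a nightly cron entry on the Bro box is about all it takes (a rough sketch only; the paths, the destination user/host, and the assumption that logs rotate into dated directories are all placeholders you’d adjust for your environment):

```
# Sketch: push yesterday's rotated Bro logs to a central host at 00:15.
# GNU date syntax assumed; % must be escaped in crontab entries.
15 0 * * * scp /usr/local/bro/logs/$(date -d yesterday +\%Y-\%m-\%d)/*.log logs@central.example.com:/data/bro/
```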

If you can do it, go with free Splunk. If you can afford it, go with paid Splunk.

Otherwise:

For log viewing with Elasticsearch, Kibana works great, but you could also check out Brownian: https://github.com/grigorescu/Brownian.

For log storage, if you want to consider something other than Elasticsearch, VAST is an option: https://github.com/mavam/vast. There’s no GUI, so that might be a downer for you.

As far as Elasticsearch architecture goes, using Bro to write directly into Elasticsearch is definitely the easiest option. The only concern with this setup is that if Elasticsearch gets busy, nobody is happy. Elasticsearch has a tendency to drop writes when it is too occupied. Combined with the fact that (to the best of my knowledge) the Elasticsearch writer is ‘send it and forget it’, this could result in some hardship if you under-build your Elasticsearch cluster or undergo a period of unusually high utilization.
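For what it’s worth, enabling the direct writer is just a few lines of Bro script (a minimal sketch per the logging-elasticsearch docs; the host, port, and index prefix values here are placeholders for your own setup):

```
# Sketch: route Bro's logs to Elasticsearch via the built-in writer.
@load tuning/logs-to-elasticsearch

redef LogElasticSearch::server_host = "127.0.0.1";
redef LogElasticSearch::server_port = 9200;
redef LogElasticSearch::index_prefix = "bro";
```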

Seth has some interesting stuff using NSQ that he has written, but I’m not sure that it is technically ‘supported’. His NSQ stuff allows you to send the events to Elasticsearch at a rate that Elasticsearch is comfortable with.

Lastly, you could use the Logstash agent to send logs to a Redis server, which buffers the logs for additional Logstash agents to pull from, parse, and insert into Elasticsearch. At the moment, I think this is the most redundant setup. If you want as many logs as possible to make it into Elasticsearch while keeping the Bro side of things as simple as possible, this is likely the way to go. The downside is that it can require quite a large amount of infrastructure… and the only way to find out exactly how much your environment will need is to build it and see. It also requires that you stay current on three pieces of software and how they interact…
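As a rough sketch of that buffered pipeline (Logstash 1.x config syntax; the hostnames, log path, and Redis list key are all placeholders), you’d run a shipper on the Bro box and an indexer next to Elasticsearch:

```
# shipper.conf -- runs on the Bro host, pushes log lines into Redis
input {
  file {
    path => "/usr/local/bro/logs/current/*.log"
  }
}
output {
  redis {
    host => "redis.example.com"
    data_type => "list"
    key => "bro-logs"
  }
}

# indexer.conf -- runs near Elasticsearch, drains Redis and indexes
input {
  redis {
    host => "redis.example.com"
    data_type => "list"
    key => "bro-logs"
  }
}
output {
  elasticsearch {
    host => "es.example.com"
  }
}
```

Redis acts purely as a queue here, so you can add more indexer agents later without touching the Bro side.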

Hopefully that helps at least a little!

-Stephen

From: bro-bounces at bro.org [mailto:bro-bounces at bro.org] On Behalf Of Jonathon Wright
Sent: Tuesday, September 16, 2014 11:04 PM
To: Stephen Reese
Cc: bro at bro.org
Subject: Re: [Bro] Bro Log ingestion

Thanks Steven, I'll take a look at those.
I'm assuming my central point server would then need Apache with ElasticSearch and Kibana installed. I'm sure more questions will come up as I start looking into this. Thanks again for the info!


On Tue, Sep 16, 2014 at 4:28 PM, Stephen Reese <rsreese at gmail.com<mailto:rsreese at gmail.com>> wrote:
On Tue, Sep 16, 2014 at 9:54 PM, Jonathon Wright <jonathon.s.wright at gmail.com<mailto:jonathon.s.wright at gmail.com>> wrote:
Research
Looking around and doing some reading, I've found two possible solutions, ELSA and Logstash, although I don't know them very well or what their capabilities are. I'd like to know if they are viable, especially given my scenario, or if there is something better. Also, a how-to so I can set it up.

You might want to skip the Logstash piece and push the data directly to ElasticSearch per [1] unless you have a specific requirement. From there you could use Kibana [2] or whatever to interface with the data stored in ElasticSearch.

[1] https://www.bro.org/sphinx/frameworks/logging-elasticsearch.html
[2] http://www.elasticsearch.org/overview/kibana/
