[Bro] Bro Log ingestion

John Landers jlanders at paymetric.com
Wed Sep 17 13:08:34 PDT 2014


As it relates to Splunk, you can consume the data in a number of ways. I use a universal forwarder (an agent on the box) and configure it to monitor the logs I want to consume (conn.log, dns.log, files.log, etc.) in the Bro "current" working directory. 

 

So, as Bro writes a log to file, the agent replicates it to the Splunk indexer. Once the file rolls, I don't care anymore. Though if you wanted to ingest old logs, that would be pretty easy to accomplish as well. (Just reference the Splunk documentation on the inputs.conf config file.)
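
For reference, a minimal inputs.conf for that kind of monitoring might look something like this (the Bro path and index name are assumptions; adjust for your install):

    # $SPLUNK_HOME/etc/system/local/inputs.conf on the sensor
    [monitor:///usr/local/bro/logs/current/conn.log]
    index = bro
    sourcetype = bro_conn
    disabled = false

    [monitor:///usr/local/bro/logs/current/dns.log]
    index = bro
    sourcetype = bro_dns
    disabled = false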

 

 

 

 

John Landers

 

From: bro-bounces at bro.org On Behalf Of Jonathon Wright
Sent: Wednesday, September 17, 2014 2:26 PM
To: Stephen Reese
Cc: bro at bro.org
Subject: Re: [Bro] Bro Log ingestion

 

Quite the responses, thanks!

 

Here are my thoughts. 

 

I saw your post, Doug, and on some of our projects we can use Security Onion with Bro and ELSA, but in this case it must be a RHEL-based solution. The solution Stephen R. demo'd with the Kibana approach [1] is pretty nice, but it brought an issue to my attention. It appears that Logstash needs to start up listening on a different port, 9292. I'm wondering if I missed something, or why Kibana wouldn't simply run as a plugin or additional module under Apache on port 443. We are in a highly regulated network, and if I stand up an Apache server (where all the Bro logs are going to be placed) and it is listening on a non-standard port such as 9292 instead of 443, that throws up flags everywhere and always kills my project. Additional thoughts on that?

 

Stephen H., not a nit-pick at all, great post! =) My method for moving the logs from all the sensors to a central collector is still a work in progress. My best route is probably to use rsync. The problem I have right now is that Bro logs and extracted files have 600 permissions when they are created; the cause is simply root's umask on the servers, which is set to 077. Since the servers are (correctly) configured not to allow SSH by root, my rsync proposal also died, because all the files are readable by root only. I'm also unable to change root's umask (regulations, not know-how), so short of creating an every-minute chmod 644 cron job, I'm scratching my head on how to get the logs over to the collector/Apache server. 
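
Concretely, that stopgap would look something like the following (paths and hostname are placeholders, and the chmod is purely a workaround for the umask I can't change):

    # root crontab on each sensor: make the logs and extracted files world-readable
    * * * * * /bin/chmod -R a+rX /usr/local/bro/logs/current /usr/local/bro/extract_files 2>/dev/null

    # on the collector, pull as an unprivileged user once the permissions allow it
    rsync -az sensor1:/usr/local/bro/logs/current/ /data/bro/sensor1/current/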

 

You make an excellent point, though: "The downside is that this can require quite a large amount of infrastructure… and the only way to find out exactly how much your environment will need is to build it and see. It also requires that you keep up to date on three pieces of software and how they interact…"

The increase in required knowledge and infrastructure is a large red flag that will likely rule out that endeavor (but it's great to know about).

 

Both of you, John L. and Will H., point to Splunk as your solution, which gives me another option. But I have the same "question about ingestion" =) How did you get the logs from the multiple sensors to the "ingestion / collector server"? Rsync, SCP, owner/permission issues? I'm interested for sure. But the cost is a big no-no as well; as Will H. indicated, the cost can go up based on usage, and I need a truly open-source, free solution, so I am now leaning back toward ElasticSearch / Logstash unless I missed something. 

 

Paul H., you get to use FreeBSD... <drool>... Man, do I miss FreeBSD! Give me packages or give me death, haha. Ever since we were forced to use RHEL I miss it more and more! But to your comments, this sentence really caught my attention: "...the logs are sent to a collector via syslog-ng..." Then you said, "There, they are written to disk where they are read by logstash and sent to elasticsearch." Since I'm leaning toward the Logstash / ElasticSearch method, based on the thoughts above, can you share a bit more on how you set up syslog-ng, Logstash, and ElasticSearch? That seems to be really close to meeting my requirement. I'm assuming you installed them from source and enabled them in rc.conf to start on boot. I'm more interested in the details of the conf files, the arguments the daemons start with, and especially how you got the syslog-ng piece working between the sensor and the collector. 
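
For anyone sketching this out in the meantime, a bare-bones version of such a pipeline might look like the following (paths, hostnames, and the port are invented, and the exact option names should be checked against the syslog-ng and Logstash versions in use):

    # sensor: syslog-ng.conf (excerpt) -- tail a Bro log and ship it to the collector
    source s_bro { file("/usr/local/bro/logs/current/conn.log" follow-freq(1) flags(no-parse)); };
    destination d_collector { tcp("collector.example.com" port(5140)); };
    log { source(s_bro); destination(d_collector); };

    # collector: logstash.conf (excerpt) -- read the files syslog-ng writes to disk and index them
    input  { file { path => "/var/log/bro/*.log" type => "bro" } }
    output { elasticsearch { host => "127.0.0.1" } }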

 

 

[1] http://www.appliednsm.com/parsing-bro-logs-with-logstash/

 

 

Thanks again to all, this is great stuff.

 

JW

 

 

 

 

On Wed, Sep 17, 2014 at 4:42 AM, Stephen Reese <rsreese at gmail.com> wrote:

Jonathon,

 

As pointed out, a Redis solution may be an ideal open-source route, e.g. http://michael.bouvy.net/blog/en/2013/11/19/collect-visualize-your-logs-logstash-elasticsearch-redis-kibana/



On Wednesday, September 17, 2014, Hosom, Stephen M <hosom at battelle.org> wrote:

Jonathon, 

 

As a nit-pick, just because the files are owned by root, doesn’t mean they aren’t world-readable. :) The absolute simplest solution to allow the logs to be viewable by non-root users is to scp them to a centralized server, but I’m guessing you want something a little fancier than that. 
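
As a crude illustration (hostname, account, and paths below are made up), a nightly root cron job on each sensor could push the rotated logs to an unprivileged account on the central box, which sidesteps the root-only file permissions entirely:

    # root crontab on the sensor: push yesterday's rotated logs shortly after midnight
    30 0 * * * /usr/bin/scp -q /usr/local/bro/logs/$(date -d yesterday +\%Y-\%m-\%d)/*.gz broarchive@collector:/data/bro/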

 

If you can do it, go with free Splunk. If you can afford it, go with paid Splunk. 

 

Otherwise:

 

For log viewing with Elasticsearch, Kibana works great, but you could also check out Brownian: https://github.com/grigorescu/Brownian. 

 

For log storage, if you want to consider something other than Elasticsearch, VAST is an option (https://github.com/mavam/vast). There's no GUI, so that might be a downer for you. 

 

As far as Elasticsearch architecture goes, using Bro to write directly into Elasticsearch is definitely the easiest option. The only concern with this setup is that if Elasticsearch gets busy, nobody is happy: Elasticsearch has a tendency to drop writes when it is too occupied. That, combined with the fact that (to the best of my knowledge) the Elasticsearch writer is 'send it and forget it', could result in some hardship if you underbuild your Elasticsearch cluster or go through a period of unusually high utilization.
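
For context, enabling that writer is only a couple of lines in local.bro, assuming Bro was built with ElasticSearch support (the script and option names below are from the 2.x docs and worth double-checking there):

    # local.bro -- send logs to ElasticSearch in addition to the normal log files
    @load tuning/logs-to-elasticsearch
    redef LogElasticSearch::server_host = "127.0.0.1";
    redef LogElasticSearch::server_port = 9200;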

 

Seth has written some interesting stuff using NSQ, but I'm not sure that it is technically 'supported'. His NSQ work allows you to send events to Elasticsearch at a rate that Elasticsearch is comfortable with. 

 

Lastly, you could use the Logstash agent to send logs to a Redis server, which buffers the logs for additional Logstash agents to pull from, parse, and insert into Elasticsearch. At the moment, I think this is the most redundant setup. If you want as many logs as possible to make it into Elasticsearch while keeping the Bro side of things as simple as possible, this is likely the way to go. The downside is that this can require quite a large amount of infrastructure… and the only way to find out exactly how much your environment will need is to build it and see. It also requires that you keep up to date on three pieces of software and how they interact… 
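
A skeleton of that shipper/broker/indexer layout (hostnames, paths, and the Redis key are invented for illustration, and option names should be verified against the Logstash version in use) might look like:

    # shipper.conf -- Logstash agent on or near each sensor
    input  { file { path => "/usr/local/bro/logs/current/*.log" type => "bro" } }
    output { redis { host => "broker.example.com" data_type => "list" key => "bro" } }

    # indexer.conf -- one or more Logstash agents pulling from the broker
    input  { redis { host => "broker.example.com" data_type => "list" key => "bro" } }
    output { elasticsearch { host => "127.0.0.1" } }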

 

Hopefully that helps at least a little!

 

-Stephen

 

From: bro-bounces at bro.org On Behalf Of Jonathon Wright
Sent: Tuesday, September 16, 2014 11:04 PM
To: Stephen Reese
Cc: bro at bro.org
Subject: Re: [Bro] Bro Log ingestion

 

Thanks Stephen, I'll take a look at those. 

I'm assuming my central point server would then need Apache with ElasticSearch and Kibana installed. I'm sure more questions will come as I start looking into this. Thanks again for the info!

 

 

On Tue, Sep 16, 2014 at 4:28 PM, Stephen Reese <rsreese at gmail.com> wrote:

On Tue, Sep 16, 2014 at 9:54 PM, Jonathon Wright <jonathon.s.wright at gmail.com> wrote:

Research

Looking around and doing some reading, I've found two possible solutions, ELSA and Logstash, although I don't know them very well or what their capabilities are. I'd like to know whether they are viable, especially given my scenario, or whether there is something better. A how-to so I can set them up would also help. 

 

You might want to skip the Logstash piece and push the data directly to ElasticSearch per [1], unless you have a specific requirement. From there you could use Kibana [2], or whatever you like, to interface with the data stored in ElasticSearch.

[1] https://www.bro.org/sphinx/frameworks/logging-elasticsearch.html
[2] http://www.elasticsearch.org/overview/kibana/

 

 
