[Bro] Bro Log ingestion
jlay at slave-tothe-box.net
Wed Sep 17 13:06:38 PDT 2014
On 2014-09-17 13:25, Jonathon Wright wrote:
> Quite the responses, thanks!
> Here are my thoughts.
> I saw your post Doug, and on some of our projects we can use Security
> Onion w/Bro and ELSA, but in this case it must be a RHEL based
> solution. The solution Stephen R. demoed with the Kibana approach 
> is pretty nice. But it brought an issue to my attention. It appears
> that Logstash needs to start up listening on a different port, 9292.
> I'm wondering if I missed something, or why Kibana wouldn't simply run
> as a plugin or additional module under Apache on port 443. We are in
> a highly regulated network, and if I stand up an Apache server
> (where all the Bro logs are going to be placed), and the Apache
> is listening on a non-standard (non-443) port such as 9292, then it
> causes flags to be thrown up everywhere and always kills my project.
> Additional thoughts on that?
To set this straight: Logstash itself doesn't listen on any port
unless configured to do so. The Elasticsearch engine behind it does;
you'd need the backend Elasticsearch engine able to listen on
port 9200, and your client workstation will need to be able to connect
to it on that port. As for Kibana, it works just fine under any current
web server; it's only static files, so it can be served by Apache on 443.
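A minimal sketch of that layout, with hypothetical paths (assumes Kibana 3's static files and an Elasticsearch node on localhost:9200): serve Kibana from Apache on 443 and reverse-proxy the Elasticsearch API, so the browser never touches a non-443 port.

```
# /etc/httpd/conf.d/kibana.conf -- illustrative only
<VirtualHost *:443>
    SSLEngine on
    SSLCertificateFile    /etc/pki/tls/certs/server.crt
    SSLCertificateKeyFile /etc/pki/tls/private/server.key

    # Kibana 3 is just static HTML/JS:
    DocumentRoot /var/www/kibana

    # Proxy Elasticsearch so clients only ever talk to 443:
    ProxyPass        /es/ http://127.0.0.1:9200/
    ProxyPassReverse /es/ http://127.0.0.1:9200/
</VirtualHost>
```

Kibana's config.js would then point its elasticsearch setting at the proxied `/es/` path rather than at port 9200 directly.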
> Stephen H, not a nit-pick at all, great post! =) My method for moving
> the logs from all the sensors to a central collector at this point
> is still in the works. My best route is probably to use rsync. The
> problem I have right now is that Bro logs and extracted files get
> root-only permissions when they are created. The cause is simply the
> umask for root on the servers, which is set to 077. Since the servers
> are configured (correctly) to not allow SSH by root, my rsync
> proposal also died, since all the files are accessible by root only.
> Also, I'm unable to change the umask of root (due to regulations, not
> lack of know-how), so short of creating an every-minute chmod 644 cron
> job, I'm scratching my head on how to get the logs over to the
> collector/apache server.
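One possible workaround for the 077-umask problem, sketched under assumed paths: have a root cron job stage the logs with an explicit 644 mode into a spool directory that a dedicated non-root rsync account can read. The temporary directories below are stand-ins for the real Bro log directory and spool, so the demo is self-contained:

```shell
# Sketch only: normalize root's 077-umask log files to 644 while staging
# them for a non-root rsync pull. Real paths would be something like
# /usr/local/bro/logs/current and a spool such as /var/spool/brologs.
src=$(mktemp -d)   # stand-in for the Bro log directory
dst=$(mktemp -d)   # stand-in for the rsync staging spool
( umask 077; echo "test" > "$src/conn.log" )   # created mode 600, like Bro's
# install copies with an explicit mode, sidestepping the umask entirely:
find "$src" -name '*.log' -exec install -m 644 -t "$dst" {} +
stat -c %a "$dst/conn.log"
```

The non-root account can then rsync the spool to the collector without ever needing root SSH.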
Rsyslog on my sensors has been excellent for piping to a listening
Logstash instance (high ports mean I can run it as a standard user).
Alternatively, you can use a cheesy hack of "sudo /usr/bin/tail -f conn.log
| logger -d -n remote.syslog.ip -P <logstash port> -u /tmp/ignored".
This worked while I was getting my rsyslog instance able to read the
conn.log file. Since rsyslog is running as root, it's able to read the
logs regardless of their permissions.
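A hedged sketch of the rsyslog side of that: the imfile module tails conn.log and forwards it to a listening Logstash syslog/tcp input. The file path, tag, facility, hostname, and port here are illustrative assumptions, not the actual values from this setup:

```
# /etc/rsyslog.d/30-bro-conn.conf -- illustrative only
module(load="imfile")

input(type="imfile"
      File="/usr/local/bro/logs/current/conn.log"
      Tag="bro_conn:"
      Severity="info"
      Facility="local5")

# Forward everything on that facility to Logstash (@@ = TCP, @ = UDP):
local5.* @@logstash.example.com:5514
```

On the Logstash side, a tcp or syslog input listening on the matching high port (runnable as a standard user) receives the stream.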
> You make an excellent point though " The downside is that this can
> require quite the large amount of infrastructure… and the only way
> to find out exactly how much your environment will need is to build
> and see. It also requires that you keep up to date in knowledge on 3
> pieces of software and how they interact…"
> The knowledge and infrastructure increase is a large red
> flag that will prohibit that endeavor (but it's great to know about).
> You, John L., and Will H. all point to Splunk as your
> solution, which gives me another option. But I have the same
> "question about ingestion" =) How did you get the logs from the
> multiple sensors to the "ingestion / collector server"? Rsync, SCP,
> owner/permission issues? I'm interested for sure. But... the cost is
> a big no-no as well. As Will H. indicated, the cost can go up based on
> usage, and I need a truly open-source, free solution, so I am now
> leaning back toward Elasticsearch / Logstash unless I missed something.
> Paul H., you get to use FreeBSD... <drool>... Man, do I miss FreeBSD!
> Give me packages or give me death, haha. Ever since we were forced to
> use RHEL I miss it more and more! But to your comments, this sentence
> really caught my attention: "...the logs are sent to a collector via
> syslog-ng..." Then you said, "There, they are written to disk where
> they are read by logstash and sent to elasticsearch". Since I'm leaning
> toward the Logstash / Elasticsearch method, based on the above
> thoughts, can you share a bit more on how you set up syslog-ng,
> Logstash, and Elasticsearch? That seems to be really close to meeting
> my requirement. I'm assuming you installed them from source and set
> them in rc.conf to YES so they start on boot. I'm more interested in
> the details of the conf files, with what arguments the daemons
> start up, and especially how you were able to get the syslog-ng piece
> working between the sensor and the collector.
>  http://www.appliednsm.com/parsing-bro-logs-with-logstash/ 
orig_bytes, orig_ip_bytes, resp_bytes, and resp_ip_bytes from the
logstash entries at the above link are not treated as integers, so
you'll need a mutate block like this inside the filter section of your
logstash.conf:

  mutate {
    convert => [ "resp_bytes", "integer" ]
    convert => [ "resp_ip_bytes", "integer" ]
    convert => [ "orig_bytes", "integer" ]
    convert => [ "orig_ip_bytes", "integer" ]
  }
Let me know if you need any assistance... I have a complete working
setup: a single backend host running Logstash/Elasticsearch/Kibana,
with a syslog server piping firewall hits to it, and an IDS sensor
piping Bro's conn log and Snort IDS logs to it.
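For context, a minimal logstash.conf of the shape being discussed here: a tcp input receiving the forwarded logs, the parsing stanza from the appliednsm.com link (elided), the integer conversions, and an Elasticsearch output. The port and host are illustrative assumptions:

```
input {
  tcp {
    port => 5514
    type => "bro_conn"
  }
}

filter {
  # ... grok/parsing stanza from the appliednsm.com link goes here ...
  mutate {
    convert => [ "resp_bytes", "integer" ]
    convert => [ "resp_ip_bytes", "integer" ]
    convert => [ "orig_bytes", "integer" ]
    convert => [ "orig_ip_bytes", "integer" ]
  }
}

output {
  elasticsearch {
    host => "127.0.0.1"
  }
}
```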
> Thanks again to all, this is great stuff.
> On Wed, Sep 17, 2014 at 4:42 AM, Stephen Reese <rsreese at gmail.com> wrote:
>> As pointed out, a Redis solution may be an ideal open-source route,
> e.g. http://michael.bouvy.net/blog/en/2013/11/19/collect-visualize-your-logs-logstash-elasticsearch-redis-kibana/
>> On Wednesday, September 17, 2014, Hosom, Stephen M
>> <hosom at battelle.org > wrote:
>>> As a nit-pick, just because the files are owned by root, doesn’t
>>> mean they aren’t world-readable. :) The absolute simplest
>>> solution to allow the logs to be viewable by non-root users is to
>>> scp them to a centralized server, but I’m guessing you want
>>> something a little fancier than that.
>>> If you can do it, go with free Splunk. If you can afford it, go
>>> with paid Splunk.
>>> For log viewing with Elasticsearch Kibana works great, but, you
>>> could also check out Brownian:
>>> https://github.com/grigorescu/Brownian .
>>> For log storage, if you want to consider something other than
>>> Elasticsearch, VAST is an option! https://github.com/mavam/vast
>>>  There’s no GUI, so that might be a downer for you.
>>> As far as Elasticsearch architecture goes, using Bro to write
>>> directly into Elasticsearch is definitely the easiest option. The
>>> only concern with this setup is that if Elasticsearch gets busy,
>>> nobody is happy. Elasticsearch has a tendency to drop writes when
>>> it is too occupied. This combined with the fact that (to the best
>>> of my knowledge) the Elasticsearch writer is a ‘send it and
>>> forget it’ could result in some hardship if you under build your
>>> Elasticsearch cluster or you undergo a period of unusually high
>>> load. Seth has some interesting stuff using NSQ that he has written,
>>> but
>>> I’m not sure that it is technically ‘supported’. His NSQ
>>> stuff allows you to send the events to Elasticsearch at a rate
>>> that Elasticsearch is comfortable with.
>>> Lastly, you could use the Logstash agent to send logs to a Redis
>>> server, which buffers the logs for additional Logstash agents to
>>> pull from and parse to insert into Elasticsearch. At the moment, I
>>> think that this is the most redundant setup. If you want as many
>>> logs to make it into Elasticsearch as possible while keeping the
>>> Bro side of things as simple as possible, this is likely the way
>>> to go. The downside is that this can require quite the large
>>> amount of infrastructure… and the only way to find out exactly
>>> how much your environment will need is to build it and see. It
>>> also requires that you keep up to date in knowledge on 3 pieces of
>>> software and how they interact…
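The Redis-buffered pipeline described above can be sketched as two Logstash configs, a shipper on each sensor and an indexer on the central side. The hostnames, paths, ports, and list key are illustrative assumptions:

```
# shipper.conf -- runs on each sensor; pushes raw events into Redis
input {
  file { path => "/usr/local/bro/logs/current/conn.log" }
}
output {
  redis { host => "redis.example.com" data_type => "list" key => "logstash" }
}

# indexer.conf -- runs centrally; pops from Redis, parses, indexes
input {
  redis { host => "redis.example.com" data_type => "list" key => "logstash" }
}
filter {
  # ... Bro parsing / mutate convert stanzas ...
}
output {
  elasticsearch { host => "127.0.0.1" }
}
```

Redis absorbs bursts, so a busy Elasticsearch cluster slows the indexers down rather than dropping writes at the Bro side.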
>>> Hopefully that helps at least a little!
>>> FROM: bro-bounces at bro.org [mailto:bro-bounces at bro.org] ON BEHALF
>>> OF Jonathon Wright
>>> SENT: Tuesday, September 16, 2014 11:04 PM
>>> TO: Stephen Reese
>>> CC: bro at bro.org
>>> SUBJECT: Re: [Bro] Bro Log ingestion
>>> Thanks Stephen, I'll take a look at those.
>>> I'm assuming my central point server would then need Apache with
>>> Elasticsearch and Kibana installed. I'm sure more questions will
>>> come as I start looking into this. Thanks again for the info!
>>> On Tue, Sep 16, 2014 at 4:28 PM, Stephen Reese <rsreese at gmail.com> wrote:
>>>> On Tue, Sep 16, 2014 at 9:54 PM, Jonathon Wright
>>>> <jonathon.s.wright at gmail.com> wrote:
>>>>> Looking around and doing some reading, I've found two possible
>>>>> solutions, ELSA and Logstash, although I don't know them very
>>>>> well and/or what their capabilities are either. But I'd like
>>>>> to know if they are viable, especially given my scenario, or
>>>>> if there is something better. Also, a how-to so I can set it up.
>>>> You might want to skip the Logstash piece and push the data
>>>> directly to ElasticSearch per the documentation linked below,
>>>> unless you have a specific requirement. From there you could use
>>>> Kibana or whatever to interface with the data stored in
>>>> ElasticSearch.
>>>>  http://www.elasticsearch.org/overview/kibana/ 
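A hedged sketch of that direct-to-Elasticsearch route (this assumes a Bro 2.x build with the ElasticSearch log writer compiled in; the host and port redefs are illustrative):

```
# local.bro -- send Bro's logs straight to Elasticsearch.
# Requires Bro built with the ElasticSearch writer enabled.
@load tuning/logs-to-elasticsearch

redef LogElasticSearch::server_host = "127.0.0.1";
redef LogElasticSearch::server_port = 9200;
```

Note the caveat raised elsewhere in the thread: the writer is fire-and-forget, so an overloaded Elasticsearch cluster can silently drop events.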
>  https://www.bro.org/sphinx/frameworks/logging-elasticsearch.html