[Bro] Log an Arbitrarily Long Collection

Christian Buia christianbuia at gmail.com
Tue Sep 10 16:04:57 PDT 2013


Greetings Brofolk,

I have become increasingly interested in Bro lately, and I am starting to
explore how my organization can use it as a general network processor to
generate some verbose logs that we can then export for indexing and
analytics.

The first use case I would like to explore involves generating verbose HTTP
logs so that we can identify suspicious characteristics such as
direct-to-IP host headers, missing headers, unexpected ordering of headers,
RFC compliance issues, etc.

I spent some time auditing the main.bro script for the HTTP protocol, and
then made some edits to add additional record fields. Specifically, I
created a new record type:

type HeaderValue: record {
    h: string &optional;
    v: string &optional;
};


and within the Info type, I added the following fields:

client_headers:        vector of HeaderValue &log &optional;
client_headers_count:  count &default=0;

server_headers:        vector of HeaderValue &log &optional;
server_headers_count:  count &default=0;

I decided to use a vector because I want to keep track of header order.
Further down, in the http_header event handler, I add each header to the
appropriate vector, indexed by the count field, which I increment after
each insertion.
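
In rough outline, the handler I added looks something like this (a sketch
rather than my exact code; it assumes the fields above and that the handler
sits in main.bro inside the HTTP module):

event http_header(c: connection, is_orig: bool, name: string, value: string)
    {
    # The base HTTP scripts set up c$http before headers arrive, but check anyway.
    if ( ! c?$http )
        return;

    local hv: HeaderValue = [$h=name, $v=value];

    if ( is_orig )
        {
        if ( ! c$http?$client_headers )
            c$http$client_headers = vector();

        # Preserve request header order by indexing with the running count.
        c$http$client_headers[c$http$client_headers_count] = hv;
        ++c$http$client_headers_count;
        }
    else
        {
        if ( ! c$http?$server_headers )
            c$http$server_headers = vector();

        c$http$server_headers[c$http$server_headers_count] = hv;
        ++c$http$server_headers_count;
        }
    }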

When I fire up bro, I get the following error message:

error in /usr/local/bro/share/bro/base/protocols/http/./main.bro, line 83:
&log applied to a type that cannot be logged (&log)

So presumably Bro doesn't like the idea of generating a log entry that
includes a vector type (much less a vector of records; I'm not even sure
what that would have looked like in the log, but I was hoping to find out).
The next best thing I can think of is recording this information with a
custom delimiter in a single string field, such as:

Accept:*/*|||Accept-Language:en-US|||User-Agent:Mozilla 4.0|||Host:somehost.com|||Connection:Keep-Alive
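
Concretely, I picture something like this in the http_header handler (again
just a sketch, and only the client side; client_headers_str is a made-up
name for a plain string field added to Info with &log &default=""):

event http_header(c: connection, is_orig: bool, name: string, value: string)
    {
    if ( ! c?$http || ! is_orig )
        return;

    # Append this header as "name:value", separated from the previous
    # headers by the "|||" delimiter.
    local pair = fmt("%s:%s", name, value);

    if ( c$http$client_headers_str == "" )
        c$http$client_headers_str = pair;
    else
        c$http$client_headers_str = string_cat(c$http$client_headers_str, "|||", pair);
    }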

Further downstream I plan to convert the tab-delimited log content to JSON
anyway, prior to indexing.

Is this a good solution for including an arbitrarily long collection field
in my HTTP logs?  Is there a better way to accomplish this?

Also, I have a feeling that directly editing http/main.bro is bad practice.
Should I instead put this in my own script under the policy tree,
redefining the HTTP::Info record and handling the http_header event there?
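
I imagine such a script would look roughly like this (a sketch only; the
filename and the client_headers_str field are placeholders, with the same
collection logic as sketched above going inside the handler):

# e.g. http-verbose-headers.bro, loaded from local.bro
@load base/protocols/http

module HTTP;

export {
    redef record Info += {
        # Made-up field name; one possible home for the delimited header string.
        client_headers_str: string &log &default="";
    };
}

event http_header(c: connection, is_orig: bool, name: string, value: string)
    {
    if ( ! c?$http )
        return;

    # ... the same header-collection logic as sketched above ...
    }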

Thanks for your attention!
Christian