[Bro] local.bro causing memory leak

Azoff, Justin S jazoff at illinois.edu
Wed Mar 21 13:52:28 PDT 2018

> On Mar 21, 2018, at 3:57 PM, Benjamin Wood <ben.bt.wood at gmail.com> wrote:
> I'm wondering that now too.
> At some point we realized that events weren't making it all the way to splunk. But I couldn't tell you why. I'm going to be taking a step back and re-evaluate why the forwarder didn't work.
> I also figured out that using path_func in this way is a very BAD IDEA. I've spent a couple days crawling through the source code for the logging framework and testing some things.
> For every unique filename, a new thread will be started and a new writer will be created on a per file basis. The bad news is, there is no way to reap these threads when they are no longer needed.
> The only "in framework" process that will close file descriptors and reap process threads is rotation. Even if you enable rotation, files can still slip through, because rotation seems to only be effective against the current writer.

Well, they should all be current, and rotation will work.  path_func is not normally used for splitting a file based on time; it's used for doing things like this:

    Log::add_filter(HTTP::LOG, [
        $name = "http-directions",
        $path_func(id: Log::ID, path: string, rec: HTTP::Info) = {
            local l = Site::is_local_addr(rec$id$orig_h);
            local r = Site::is_local_addr(rec$id$resp_h);

            if ( l && r )
                return "http_internal";
            if ( l )
                return "http_outbound";
            return "http_inbound";
        }
    ]);

In this case, http_internal, http_outbound, and http_inbound are all current writers, and they will all get rotated.

> I don't know if this breaks a contract outlined in the docs, but it seems like if this is an intended use of path_func then this is a bug that should be fixed.

It's not a crazy idea, you're just the first person to ever do that.

> The only way that I could resolve the problem in bro alone, would be to author a custom log writer that would name the files in the way I wanted, and close these dangling file descriptors.

I can think of two solutions:

1) Just turn rotation back on, set the rotation interval to 5 minutes, and disable compression.  Point splunk at the rotated files.


The end result will be the same, the only downside is all data in splunk will have a 5 minute lag.
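For reference, a minimal sketch of option 1. Log::default_rotation_interval is a standard logging-framework option; clearing Log::default_rotation_postprocessor_cmd to skip compression is an assumption about how your deployment compresses rotated logs:

    # Rotate every 5 minutes.
    redef Log::default_rotation_interval = 5 min;
    # The postprocessor command (e.g. broctl's archive-log) is what usually
    # compresses rotated logs; clearing it leaves them uncompressed.
    redef Log::default_rotation_postprocessor_cmd = "";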

2) Enable rotation, but override the default rotation postprocessor.

The default runs this:

redef Log::default_rotation_postprocessors += { [Log::WRITER_ASCII] = default_rotation_postprocessor_func };

# Default function to postprocess a rotated ASCII log file. It moves the rotated
# file to a new name that includes a timestamp with the opening time, and then
# runs the writer's default postprocessor command on it.
function default_rotation_postprocessor_func(info: Log::RotationInfo) : bool
    {
    # If the filename has a ".gz" extension, then keep it.
    local gz = info$fname[-3:] == ".gz" ? ".gz" : "";

    # Move file to name including both opening and closing time.
    local dst = fmt("%s.%s.log%s", info$path,
            strftime(Log::default_rotation_date_format, info$open), gz);

    system(fmt("/bin/mv %s %s", info$fname, dst));

    # Run default postprocessor.
    return Log::run_rotation_postprocessor_cmd(info, dst);
    }

If you redef Log::default_rotation_postprocessors to be empty, bro should simply close the old files when it "rotates" them, leaving them under their original names.
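Something like this (untested sketch; assumes the &redef table can simply be reassigned empty):

    # Remove all default rotation postprocessors, so rotation just closes
    # the current file and opens a new one, with no rename or gzip.
    redef Log::default_rotation_postprocessors = table();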

Justin Azoff
