[Bro] Version: 2.0-907 -- Bro manager memory exhaustion
Siwek, Jonathan Luke
jsiwek at illinois.edu
Mon Aug 13 09:54:33 PDT 2012
> I was under the impression that I needed to configure node.cfg from the
> default to something that makes more sense for my environment.
> For some reason, when I do this, it causes broctl to take a very long
> time to return from the status command and the number of peers
> reported is ??? and not the expected 0. Configuring my host to an IP
> address also causes CPU to spike to about 100%.
For a standalone config, localhost is probably preferable -- node.cfg just specifies how nodes contact/communicate with each other, so going over loopback, or at least a private IP space, is what most people probably aim for.
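For example, a standalone node.cfg would look something like this (the interface name is just an example; adjust it to your system):

```ini
# etc/node.cfg -- standalone setup: one Bro instance on this box.
# host=localhost keeps the manager's control connection on loopback.
[bro]
type=standalone
host=localhost
interface=eth0
```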
The 'peers' column of `broctl status` is obtained by the manager sending a status request event to each Bro instance and expecting a status response event back. The failure cases in which "???" is displayed are:
(1) The manager's broccoli connection to the Bro peer fails
(2) The manager times out sending a status request event
(3) The manager times out receiving a status response event
To see which is the case, you could exit all BroControl shells, add "Debug=1" to your etc/broctl.cfg and then try `broctl status` again. There will be a spool/debug.log in which you should find one of these messages:
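Concretely, the addition to etc/broctl.cfg is just one line (the rest of the file stays as-is):

```ini
# etc/broctl.cfg -- turn on debug logging; output goes to spool/debug.log.
Debug = 1
```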
broccoli: cannot connect
broccoli: timeout during send
broccoli: timeout during receive
I'm guessing you're just running into (1) because a firewall is now blocking the connection. In that case, the timeout for connect(2) can be pretty long (~2 minutes), though I'm not sure whether that also explains the high CPU usage.
For (2) or (3), event status is polled once per second for at most "CommTimeout" seconds (default 10), so it's probably not CPU intensive, but it can end up taking a long time on cluster setups, especially since the manager queries node status serially.
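To illustrate why that serial polling adds up, here's a rough sketch of the logic described above -- not the actual BroControl code; `get_response` is a hypothetical callback standing in for the Broccoli event exchange:

```python
import time

def poll_status(peer, get_response, comm_timeout=10):
    """Poll once per second for up to comm_timeout seconds.

    get_response(peer) is assumed to return the peer's status string,
    or None if no status response event has arrived yet.
    """
    for _ in range(comm_timeout):
        status = get_response(peer)
        if status is not None:
            return status
        time.sleep(1)  # poll interval: once per second
    return "???"  # timed out -- what broctl shows in the peers column

def poll_cluster(peers, get_response, comm_timeout=10):
    # The manager queries each node serially, so the worst case is
    # len(peers) * comm_timeout seconds for the whole cluster.
    return {p: poll_status(p, get_response, comm_timeout) for p in peers}
```

With, say, 10 workers all timing out, `broctl status` would block for roughly 10 * 10 = 100 seconds, which matches the "very long time to return" symptom.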