<div>I've come to the conclusion that Bro currently cannot scale to 100 workers under a single manager, threaded or not.</div><div><br></div><div>For now I've reverted to the 2.0-stable build and segmented the cluster into five parts, one per server with 20 workers each. That has been stable, apart from the occasional process stuck at 100% CPU. I'm going to try switching back to the dev track with this segmented approach and see how it works.</div>
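<div><br></div><div>For reference, each segment's BroControl node.cfg looks roughly like the sketch below (hostnames and the interface are placeholders, and lb_method depends on your load-balancing setup):</div><div><pre>
# Hypothetical node.cfg for one of the five segments.
# Each server runs its own manager/proxy plus 20 workers.
[manager]
type=manager
host=localhost

[proxy-1]
type=proxy
host=localhost

[worker-1]
type=worker
host=localhost
interface=eth0      # placeholder capture interface
lb_method=pf_ring   # assumes PF_RING is available
lb_procs=20         # 20 worker processes on this box
</pre></div>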
<div><br></div><div>--TC</div><div><br></div><div><br></div><div class="gmail_quote">On Tue, Jul 31, 2012 at 4:59 PM, Vlad Grigorescu <span dir="ltr"><<a href="mailto:vladg@cmu.edu" target="_blank">vladg@cmu.edu</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">I've been running 2.0-905 for ~25-26 hours. The manager's memory usage has<br>
slowly crept up to 13 GB.<br>
<br>
One thing of note - I'm using the ElasticSearch log writer. I see three<br>
possible scenarios for this memory leak:<br>
<br>
1) There is indeed a leak in master, potentially only triggered by<br>
specific traffic,<br>
2) There is a leak in the ElasticSearch log writer,<br>
3) My ElasticSearch server can't keep up with the load, and the manager is<br>
receiving logs faster than it can send them to the writer, so they just<br>
queue up.<br>
<br>
Has anyone else tried the current code over an extended period on live<br>
traffic? Also, if anyone has any ideas to try to figure out where this<br>
leak is occurring, please let me know. I'm going to switch back to ASCII<br>
logs for a bit, and see what that's like.<br>
<br>
--Vlad</blockquote><div> </div></div>