[Bro] bro cluster in containers

Maerz, Stefan A. maerzsa at ornl.gov
Mon Jun 4 08:02:39 PDT 2018


On Jun 4, 2018, at 9:57 AM, Azoff, Justin S <jazoff at illinois.edu> wrote:


On Jun 4, 2018, at 9:25 AM, Maerz, Stefan A. <maerzsa at ornl.gov> wrote:

My question is whether broctl can be made to gracefully handle the rapid, elastic nature of container orchestration; for example, can I add/remove bro worker nodes/containers without doing a “broctl deploy” or “broctl restart”? I’m sure I could just restart the service, but that seems like a disruptive and inelegant solution.

It wouldn't need to; broctl would not be used at all in this environment.

—
Justin Azoff



Okay — that makes sense. I’d have to think the details through, but I bet I could figure it out.



On Jun 4, 2018, at 10:12 AM, Poore, Jeffrey S <jeffrey.s.poore at bankofamerica.com> wrote:

A tap aggregator is a network switch which provides a layer of indirection between your bro cluster and your network taps. It allows you to “route around” or duplicate data (maybe you also want the same data sent to Snort and to full PCAP capture) and slice it up into smaller, more manageable network flows. In a bro cluster, it is what load balances traffic across the bro worker nodes.

Automating the tapagg layer means using your tap aggregator's API to turn on and off the switch ports that connect to the containers. The orchestration component would spin up/tear down containers/pods and simultaneously enable/disable the corresponding switchport on the tap aggregator. The way I’m thinking of it, you wouldn’t use Kubernetes/Mesos/etc. to route network traffic around; rather, your orchestration platform would control the dataplane of your tap aggregator. My experience is that handling high-throughput traffic in software is a recipe for traffic loss and frustration. Do as much as you can in hardware.

Either way, I am very interested in this conversation. We run Red Hat's OpenShift, which is a Kubernetes distribution. Running on bare metal works really well for us, but if I could get onto OpenShift, I could get rid of most of our remaining physical machines. My question is whether broctl can be made to gracefully handle the rapid, elastic nature of container orchestration; for example, can I add/remove bro worker nodes/containers without doing a “broctl deploy” or “broctl restart”? I’m sure I could just restart the service, but that seems like a disruptive and inelegant solution.

I happen to be somewhat of a subject matter expert on OpenShift, and I helped a great deal with our deployment in my company. My team doesn't use it, but that's mostly because they came up with this architecture before I joined the team. I'm definitely pushing them towards Kubernetes, but we already have promised deliverables and time is tight, so I can't really re-architect it right now.

I'll ask the guy who designed our current architecture about the tap aggregator. If I understand you correctly, this is really about configuring the aggregator to deal with containers coming up and going down, not so much bro itself. Does the bro cluster manager need to be updated when instances come and go, then? I've seen that it acts as a log aggregator (although you can configure a different node type to do that). I am of course going to do my homework and start building this out, but I'm still kind of new, so any guardrails are appreciated.


I think Justin just answered this question: the orchestration layer would take over the management work that broctl does today. So no, there is no need to update the cluster manager, and no need for broctl at all. If I’m understanding Justin correctly, you would need to figure out all the command-line options to pass to the bro binaries yourself. I don’t think that would be too hard if you have an example bro cluster to look at.
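
As a very rough starting point, I would expect a container entrypoint to end up looking something like the untested sketch below. The flags and script list are my guess at what broctl passes, and BRO_NODE_NAME/CAPTURE_IFACE are made-up environment variables; before trusting any of it, I would deploy a normal broctl cluster once and copy the exact command line and environment that broctl records in each node's spool directory (the .cmdline and .env_vars files, if I remember right).

    # Untested sketch of a container entrypoint that starts a single bro worker
    # directly, without broctl. The exact flags and script list are a guess;
    # verify against the spool files of a real broctl-managed cluster.
    import os
    import subprocess

    def run_worker(node_name, interface, layout_script="cluster-layout.bro"):
        env = dict(os.environ)
        # The cluster framework reads this to figure out which node it is.
        env["CLUSTER_NODE"] = node_name
        cmd = [
            "bro",
            "-i", interface,             # capture interface handed to this worker
            layout_script,               # script that redefs Cluster::nodes
            "base/frameworks/cluster",
            "local.bro",                 # the usual site policy
        ]
        subprocess.check_call(cmd, env=env)

    if __name__ == "__main__":
        # e.g. the orchestrator injects these per pod/container (hypothetical names)
        run_worker(os.environ.get("BRO_NODE_NAME", "worker-1"),
                   os.environ.get("CAPTURE_IFACE", "eth0"))

The harder part is probably generating a cluster-layout script that matches whatever pods currently exist, which is exactly the kind of thing the orchestrator, rather than broctl, would template out.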

-Stefan



--
Stefan Maerz
HPC Cyber Security Engineer
Oak Ridge National Laboratory
linkedin.com/in/stefanmaerz

