From asavran at layerxtech.com Tue Apr 2 08:23:01 2019
From: asavran at layerxtech.com (Arda Savran)
Date: Tue, 2 Apr 2019 11:23:01 -0400
Subject: [Zeek] Cannot send logs to their individual Kafka topics
Message-ID:

Hello folks:

I have successfully been able to send everything to a single remote Kafka
topic from a local Bro machine, and the following is the local.bro that makes
that happen:

##! Local site policy. Customize as appropriate.
##!
##! This file will not be overwritten when upgrading or reinstalling!

#@load packages

@load /usr/local/bro/lib/bro/plugins/packages/metron-bro-plugin-kafka/scripts/Apache/Kafka
redef Kafka::send_all_active_logs = T;
redef Kafka::tag_json = T;
redef Kafka::kafka_conf = table(["metadata.broker.list"] = "XX.XX.XX.XX:9092");

However, when I change that to write logs to their individual Kafka topics,
I get an error message in stderr.log. The following is my updated local.bro:

##! Local site policy. Customize as appropriate.
##!
##! This file will not be overwritten when upgrading or reinstalling!

#@load packages

#@load /usr/local/bro/lib/bro/plugins/packages/metron-bro-plugin-kafka/scripts/Apache/Kafka
#redef Kafka::send_all_active_logs = T;
#redef Kafka::tag_json = T;
#redef Kafka::kafka_conf = table(["metadata.broker.list"] = "XX.XX.XX.XX:9092");

###########
###########

@load /usr/local/bro/lib/bro/plugins/packages/metron-bro-plugin-kafka/scripts/Apache/Kafka
redef Kafka::topic_name = "";
redef Kafka::tag_json = T;
redef Kafka::debug = "all";

event bro_init() &priority=-10
{
# handles DNS
local dns_filter: Log::Filter = [
$name = "kafka-dns",
$writer = Log::WRITER_KAFKAWRITER,
$config = table(["metadata.broker.list"] = "XX.XX.XX.XX:9092"),
$path = "dns"
];
Log::add_filter(DNS::LOG, dns_filter);
}

###########
###########

I run "broctl check" and "broctl deploy" after that, but get the following:

[root at localhost current]# tail -f stderr.log
%7|1554218121.957|STATE|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Broker changed state DOWN -> CONNECT
%7|1554218121.957|BROADCAST|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: Broadcasting state change
%7|1554218121.957|BROKERFAIL|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: failed: err: Local: Broker transport failure: (errno: Connection refused)
%7|1554218121.957|FAIL|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Connect to ipv4#127.0.0.1:9092 failed: Connection refused
%7|1554218121.957|STATE|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Broker changed state CONNECT -> DOWN
%7|1554218121.957|BROADCAST|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: Broadcasting state change
%7|1554218121.957|BUFQ|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Purging bufq with 0 buffers
%7|1554218121.957|BUFQ|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Updating 0 buffers on connection reset
%7|1554218122.309|NOINFO|rdkafka#producer-1| [thrd:main]: Topic partition count is zero: should refresh metadata
%7|1554218122.309|METADATA|rdkafka#producer-1| [thrd:main]: Skipping metadata refresh of 1 topic(s): no usable brokers
%7|1554218122.957|CONNECT|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: broker in state DOWN connecting
%7|1554218122.958|CONNECT|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Connecting to ipv4#127.0.0.1:9092 (plaintext) with socket 29
%7|1554218122.958|STATE|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Broker changed state DOWN -> CONNECT
%7|1554218122.958|BROADCAST|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: Broadcasting state change
%7|1554218122.958|BROKERFAIL|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: failed: err: Local: Broker transport failure: (errno: Connection refused)
%7|1554218122.958|FAIL|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Connect to ipv4#127.0.0.1:9092 failed: Connection refused
%7|1554218122.958|STATE|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Broker changed state CONNECT -> DOWN
%7|1554218122.958|BROADCAST|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: Broadcasting state change
%7|1554218122.958|BUFQ|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Purging bufq with 0 buffers
%7|1554218122.958|BUFQ|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Updating 0 buffers on connection reset
%7|1554218122.958|RECONNECT|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Delaying next reconnect by 301ms
%7|1554218123.259|RECONNECT|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Delaying next reconnect by 53ms
%7|1554218123.309|NOINFO|rdkafka#producer-1| [thrd:main]: Topic partition count is zero: should refresh metadata

Yes, I have iptables enabled on the local Bro machine, but it works with the
first configuration file. Why does Bro think that the Kafka broker is local?
It is supposed to send the messages to XX.XX.XX.XX.

Thanks in advance.

From asavran at layerxtech.com Tue Apr 2 10:10:04 2019
From: asavran at layerxtech.com (Arda Savran)
Date: Tue, 2 Apr 2019 13:10:04 -0400
Subject: [Zeek] Timestamps in log files without any msec
Message-ID:

Hello all:

Is there a way to use the Unix timestamp without any msec in log files? At
the moment, msec is included in the timestamp.

Thanks

From seth at corelight.com Wed Apr 3 05:26:11 2019
From: seth at corelight.com (Seth Hall)
Date: Wed, 03 Apr 2019 08:26:11 -0400
Subject: [Zeek] Timestamps in log files without any msec
In-Reply-To:
References:
Message-ID:

On 2 Apr 2019, at 13:10, Arda Savran wrote:

> Is there a way to use the Unix timestamp without any msec in log files?
> At the moment, msec is included in the timestamp.

Hm, good question. I don't think there is going to be a "good" way to do it.
If you *really* want to do it, you could modify the ascii formatter to trim
the decimal places off the end of the value. There may be side effects if
you do this, though; I haven't really thought it through much. Here's the
line that renders that field:

https://github.com/zeek/zeek/blob/master/src/threading/formatters/Ascii.cc#L118

  .Seth

--
Seth Hall * Corelight, Inc * www.corelight.com
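A script-level alternative to patching the formatter is to log an extra
whole-second copy of the timestamp next to ts. This is only a rough sketch,
specific to conn.log: the ts_sec field name is invented for illustration,
while time_to_double(), floor(), and double_to_count() are standard BiFs.

```
# Rough sketch (illustrative only): add a whole-second copy of ts to conn.log.
redef record Conn::Info += {
    ts_sec: count &log &optional;
};

# Runs after the conn record is populated (priority 5 in the base scripts)
# and before Conn::LOG is written (priority -5).
event connection_state_remove(c: connection)
    {
    if ( c?$conn )
        c$conn$ts_sec = double_to_count(floor(time_to_double(c$conn$ts)));
    }
```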
From seth at corelight.com Wed Apr 3 05:31:42 2019
From: seth at corelight.com (Seth Hall)
Date: Wed, 03 Apr 2019 08:31:42 -0400
Subject: [Zeek] Cannot send logs to their individual Kafka topics
In-Reply-To:
References:
Message-ID: <7546E4FD-1D6B-48AD-B312-B93763E06811@corelight.com>

On 2 Apr 2019, at 11:23, Arda Savran wrote:

> $config = table(["metadata.broker.list"] = "XX.XX.XX.XX:9092"),

Just looking through the kafka writer, it looks like most options can't be
passed through that way: the writer only pays attention to those kafka
config options through the Kafka::kafka_conf variable. It was just a quick
skim, but that's how it looked to me.

  .Seth

--
Seth Hall * Corelight, Inc * www.corelight.com

From zeolla at gmail.com Wed Apr 3 05:38:43 2019
From: zeolla at gmail.com (Zeolla@GMail.com)
Date: Wed, 3 Apr 2019 08:38:43 -0400
Subject: [Zeek] Cannot send logs to their individual Kafka topics
In-Reply-To:
References:
Message-ID:

Are you using master? The easiest way to fix this is likely to add a key of
"topic_name" and a value of "dns" to your $config table, similar to what is
shown here. Please let me know if that works for you.

There is a known issue in master where the plugin is not falling back to
using $path as the destination topic name. I have a PR open for it, but
unfortunately I haven't had a lot of time to finish it (it is just pending
some btests - functionally it is done) and get it merged.

- Jon Zeolla
Zeolla at GMail.Com

On Tue, Apr 2, 2019 at 11:37 AM Arda Savran wrote:

> [...]
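Putting Seth's and Jon's replies together, the working shape is roughly the
following sketch (XX.XX.XX.XX is the placeholder broker address from this
thread, and the topic names are illustrative): broker options go in
Kafka::kafka_conf, while the per-log topic is selected through the filter's
$config table.

```
@load /usr/local/bro/lib/bro/plugins/packages/metron-bro-plugin-kafka/scripts/Apache/Kafka

# Per Seth's skim, broker options are only read from Kafka::kafka_conf,
# not from a filter's $config table.
redef Kafka::kafka_conf = table(["metadata.broker.list"] = "XX.XX.XX.XX:9092");
redef Kafka::topic_name = "";
redef Kafka::tag_json = T;

event bro_init() &priority=-10
    {
    local dns_filter: Log::Filter = [
        $name = "kafka-dns",
        $writer = Log::WRITER_KAFKAWRITER,
        # Until the $path fallback fix is merged, name the topic here.
        $config = table(["topic_name"] = "dns"),
        $path = "dns"
    ];
    Log::add_filter(DNS::LOG, dns_filter);
    }
```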
From asavran at layerxtech.com Wed Apr 3 06:52:36 2019
From: asavran at layerxtech.com (Arda Savran)
Date: Wed, 3 Apr 2019 09:52:36 -0400
Subject: [Zeek] Cannot send logs to their individual Kafka topics
In-Reply-To:
References:
Message-ID:

I used the master.

I changed the beginning of my local.bro as follows and did a "broctl check"
and "broctl deploy":

#@load packages

#@load /usr/local/bro/lib/bro/plugins/packages/metron-bro-plugin-kafka/scripts/Apache/Kafka
#redef Kafka::send_all_active_logs = T;
#redef Kafka::tag_json = T;
#redef Kafka::kafka_conf = table(["metadata.broker.list"] = "XX.XX.XX.XX:9092");

###########
###########

@load /usr/local/bro/lib/bro/plugins/packages/metron-bro-plugin-kafka/scripts/Apache/Kafka
redef Kafka::topic_name = "";
redef Kafka::tag_json = T;
redef Kafka::debug = "all";

event bro_init() &priority=-10
{
# handles DNS
local dns_filter: Log::Filter = [
$name = "kafka-dns",
$writer = Log::WRITER_KAFKAWRITER,
$config = table(["metadata.broker.list"] = "XX.XX.XX.XX:9092"),
$config = table(["topic_name"] = "bro_dns"),
$path = "dns"
];
Log::add_filter(DNS::LOG, dns_filter);
}

Still having no luck:

[root at localhost current]# tail -f stderr.log
%7|1554299460.116|CONNECT|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Connecting to ipv4#127.0.0.1:9092 (plaintext) with socket 34
%7|1554299460.116|STATE|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Broker changed state DOWN -> CONNECT
%7|1554299460.116|BROADCAST|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: Broadcasting state change
%7|1554299460.116|BROKERFAIL|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: failed: err: Local: Broker transport failure: (errno: Connection refused)
%7|1554299460.116|FAIL|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Connect to ipv4#127.0.0.1:9092 failed: Connection refused
%7|1554299460.116|STATE|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Broker changed state CONNECT -> DOWN
%7|1554299460.116|BROADCAST|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: Broadcasting state change
%7|1554299460.116|BUFQ|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Purging bufq with 0 buffers
%7|1554299460.116|BUFQ|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Updating 0 buffers on connection reset
%7|1554299460.116|RECONNECT|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Delaying next reconnect by 435ms
%7|1554299460.394|NOINFO|rdkafka#producer-1| [thrd:main]: Topic bro_dns partition count is zero: should refresh metadata
%7|1554299460.394|METADATA|rdkafka#producer-1| [thrd:main]: Skipping metadata refresh of 1 topic(s): no usable brokers
%7|1554299460.552|RECONNECT|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Delaying next reconnect by 276ms
%7|1554299460.827|CONNECT|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: broker in state DOWN connecting
%7|1554299460.827|CONNECT|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Connecting to ipv4#127.0.0.1:9092 (plaintext) with socket 34
%7|1554299460.827|STATE|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Broker changed state DOWN -> CONNECT
%7|1554299460.827|BROADCAST|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: Broadcasting state change
%7|1554299460.827|BROKERFAIL|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: failed: err: Local: Broker transport failure: (errno: Connection refused)
%7|1554299460.827|FAIL|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Connect to ipv4#127.0.0.1:9092 failed: Connection refused
%7|1554299460.827|STATE|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Broker changed state CONNECT -> DOWN
%7|1554299460.827|BROADCAST|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: Broadcasting state change
%7|1554299460.827|BUFQ|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Purging bufq with 0 buffers
%7|1554299460.827|BUFQ|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Updating 0 buffers on connection reset
%7|1554299461.394|NOINFO|rdkafka#producer-1| [thrd:main]: Topic bro_dns partition count is zero: should refresh metadata
%7|1554299461.394|METADATA|rdkafka#producer-1| [thrd:main]: Skipping metadata refresh of 1 topic(s): no usable brokers
%7|1554299461.827|CONNECT|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: broker in state DOWN connecting
%7|1554299461.828|CONNECT|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Connecting to ipv4#127.0.0.1:9092 (plaintext) with socket 34
%7|1554299461.828|STATE|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Broker changed state DOWN -> CONNECT
%7|1554299461.828|BROADCAST|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: Broadcasting state change
%7|1554299461.828|BROKERFAIL|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: failed: err: Local: Broker transport failure: (errno: Connection refused)
%7|1554299461.828|FAIL|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Connect to ipv4#127.0.0.1:9092 failed: Connection refused
%7|1554299461.828|STATE|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Broker changed state CONNECT -> DOWN
%7|1554299461.828|BROADCAST|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: Broadcasting state change
%7|1554299461.828|BUFQ|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Purging bufq with 0 buffers
%7|1554299461.829|BUFQ|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Updating 0 buffers on connection reset
%7|1554299461.829|RECONNECT|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Delaying next reconnect by 715ms

Do you have any other suggestions for me?

Thanks

On Wed, Apr 3, 2019 at 8:38 AM Zeolla at GMail.com wrote:

> [...]
From asavran at layerxtech.com Wed Apr 3 08:41:26 2019
From: asavran at layerxtech.com (Arda Savran)
Date: Wed, 3 Apr 2019 11:41:26 -0400
Subject: [Zeek] Cannot send logs to their individual Kafka topics
In-Reply-To:
References:
Message-ID:

Hello again:

I tried the script on the web site and it still fails the check:

##! Local site policy. Customize as appropriate.
##!
##! This file will not be overwritten when upgrading or reinstalling!

#@load packages

#@load /usr/local/bro/lib/bro/plugins/packages/metron-bro-plugin-kafka/scripts/Apache/Kafka
#redef Kafka::send_all_active_logs = T;
#redef Kafka::tag_json = T;
#redef Kafka::kafka_conf = table(["metadata.broker.list"] = "13.88.224.129:9092");

###########
###########

@load /usr/local/bro/lib/bro/plugins/packages/metron-bro-plugin-kafka/scripts/Apache/Kafka
redef Kafka::logs_to_send = set(DHCP::LOG);
redef Kafka::topic_name = "bro";
redef Kafka::kafka_conf = table(
    ["metadata.broker.list"] = "XX.XX.XX.XX:9092"
);
redef Kafka::tag_json = T;

event bro_init() &priority=-10
{
# Send DHCP to the shew_bro_dhcp topic
local shew_dhcp_filter: Log::Filter = [
$name = "kafka-dhcp-shew",
$writer = Log::WRITER_KAFKAWRITER,
$path = "shew_bro_dhcp"
$config = table(["topic_name"] = "shew_bro_dhcp")
];
Log::add_filter(DHCP::LOG, shew_dhcp_filter);
}

###########
###########

[root at localhost site]# broctl check
bro scripts failed.
error in /usr/local/bro/share/bro/site/local.bro, lines 29-30: not a record (shew_bro_dhcp$config)
error in /usr/local/bro/share/bro/site/local.bro, lines 26-31 and error: type clash for field "path" ((coerce [$name=kafka-dhcp-shew, $writer=Log::WRITER_KAFKAWRITER, $path=shew_bro_dhcp$ = table(topic_name = shew_bro_dhcp)] to Log::Filter) and error)

Am I doing something wrong?

Thanks,

On Wed, Apr 3, 2019 at 9:52 AM Arda Savran wrote:

> [...]
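The "not a record (shew_bro_dhcp$config)" error above is what Bro reports
when a comma is missing between the fields of a record constructor: with
`$path = "shew_bro_dhcp" $config = ...`, the parser reads a $config field
access on the string "shew_bro_dhcp". The filter itself appears fine once
the comma is restored; a corrected sketch of just that block:

```
local shew_dhcp_filter: Log::Filter = [
    $name = "kafka-dhcp-shew",
    $writer = Log::WRITER_KAFKAWRITER,
    $path = "shew_bro_dhcp",   # this comma is what the config above is missing
    $config = table(["topic_name"] = "shew_bro_dhcp")
];
Log::add_filter(DHCP::LOG, shew_dhcp_filter);
```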
From zeolla at gmail.com Thu Apr 4 03:00:05 2019
From: zeolla at gmail.com (Zeolla@GMail.com)
Date: Thu, 4 Apr 2019 06:00:05 -0400
Subject: [Zeek] Cannot send logs to their individual Kafka topics
In-Reply-To:
References:
Message-ID:

To run a local proof of concept and see a working config, apply the patch
below to master and then run `./run_end_to_end.sh --kafka-topic=dns` (it
just requires docker and bash > 4) from the docker/ folder. The issue is,
like Seth said earlier, that you need to configure metadata.broker.list in
Kafka::kafka_conf, not in the logging filter's $config table (although we
could likely add that option pretty easily - feel free to open a ticket at
https://issues.apache.org/jira/browse/METRON-2060?filter=-4&jql=project%20%3D%20METRON%20order%20by%20created%20DESC).

If you're going to run up the PoC and have already built the plugin's bro
docker container on your computer in the recent past, you can add
`--skip-docker-build` to speed things up, but it will need to be built the
first time around at least. If you want to poke around in the container
running bro after things are up, you can run
`./scripts/docker_execute_shell.sh` from the docker/ folder for convenience
and it will drop you into a shell. Also, don't forget to run
`./finish_end_to_end.sh` from docker/ when you're done to clean everything
up. Our docker testing environment is currently limited to testing one
kafka topic at a time, but this same approach should work if you configure
multiple filters with different topics specified. I'm doing exactly this in
one of my bro clusters using master of the plugin.

```
diff --git a/docker/in_docker_scripts/configure_bro_plugin.sh b/docker/in_docker_scripts/configure_bro_plugin.sh
index c292504..afdd0ad 100755
--- a/docker/in_docker_scripts/configure_bro_plugin.sh
+++ b/docker/in_docker_scripts/configure_bro_plugin.sh
@@ -28,13 +28,22 @@ shopt -s nocasematch
 echo "Configuring kafka plugin"
 {
   echo "@load packages"
-  echo "redef Kafka::logs_to_send = set(HTTP::LOG, DNS::LOG, Conn::LOG, DPD::LOG, FTP::LOG, Files::LOG, Known::CERTS_LOG, SMTP::LOG, SSL::LOG, Weird::LOG, Notice::LOG, DHCP::LOG, SSH::LOG, Software::LOG, RADIUS::LOG, X509::LOG, Known::DEVICES_LOG, RFB::LOG, Stats::LOG, CaptureLoss::LOG, SIP::LOG);"
-  echo "redef Kafka::topic_name = \"bro\";"
+  echo "redef Kafka::topic_name = \"\";"
   echo "redef Kafka::tag_json = T;"
   echo "redef Kafka::kafka_conf = table([\"metadata.broker.list\"] = \"kafka:9092\");"
-  echo "redef Kafka::logs_to_exclude = set(Conn::LOG, DHCP::LOG);"
   echo "redef Known::cert_tracking = ALL_HOSTS;"
   echo "redef Software::asset_tracking = ALL_HOSTS;"
+  echo 'event bro_init() &priority=-10
+{
+# handles DNS
+local dns_filter: Log::Filter = [
+$name = "kafka-dns",
+$writer = Log::WRITER_KAFKAWRITER,
+$config = table(["topic_name"] = "dns"),
+$path = "dns"
+];
+Log::add_filter(DNS::LOG, dns_filter);
+}'
 } >> /usr/local/bro/share/bro/site/local.bro

 # Load "known-devices-and-hostnames.bro" which is necessary in bro 2.5.5 to
```

Let me know if that works for you or if you have any other questions

- Jon Zeolla
Zeolla at GMail.Com

On Wed, Apr 3, 2019 at 11:41 AM Arda Savran wrote:

> [...]
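For reference, once that patch is applied, the configure script appends
roughly the following to local.bro. This is just the echo statements above
with the shell escaping removed; the bro_init block comes out verbatim from
the multi-line single-quoted echo.

```
@load packages
redef Kafka::topic_name = "";
redef Kafka::tag_json = T;
redef Kafka::kafka_conf = table(["metadata.broker.list"] = "kafka:9092");
redef Known::cert_tracking = ALL_HOSTS;
redef Software::asset_tracking = ALL_HOSTS;
event bro_init() &priority=-10
{
# handles DNS
local dns_filter: Log::Filter = [
$name = "kafka-dns",
$writer = Log::WRITER_KAFKAWRITER,
$config = table(["topic_name"] = "dns"),
$path = "dns"
];
Log::add_filter(DNS::LOG, dns_filter);
}
```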
From zeolla at gmail.com Thu Apr 4 03:13:08 2019
From: zeolla at gmail.com (Zeolla@GMail.com)
Date: Thu, 4 Apr 2019 06:13:08 -0400
Subject: [Zeek] Cannot send logs to their individual Kafka topics
In-Reply-To:
References:
Message-ID:

Sorry, I was in a rush to send that prior email out. I should have mentioned
that there are actually two issues with your original config, and the example
I show above fixes both of them: one is the bug that I mentioned earlier, and
the other is the issue that Seth mentioned.

Jon Zeolla

On Thu, Apr 4, 2019, 6:00 AM Zeolla at GMail.com wrote:

> [...]
>> error in /usr/local/bro/share/bro/site/local.bro, lines 29-30: not a >> record (shew_bro_dhcp$config) >> error in /usr/local/bro/share/bro/site/local.bro, lines 26-31 and error: >> type clash for field "path" ((coerce [$name=kafka-dhcp-shew, >> $writer=Log::WRITER_KAFKAWRITER, $path=shew_bro_dhcp$ = >> table(topic_name = shew_bro_dhcp)] to Log::Filter) and error) >> >> Am I doing something wrong? >> >> Thanks, >> >> >> >> On Wed, Apr 3, 2019 at 9:52 AM Arda Savran >> wrote: >> >>> I used the master. >>> >>> I changed the beginning of my local.bro as follows and did a "broctl >>> check" and "broctl deploy": >>> >>> #@load packages >>> >>> #@load >>> /usr/local/bro/lib/bro/plugins/packages/metron-bro-plugin-kafka/scripts/Apache/Kafka >>> #redef Kafka::send_all_active_logs = T; >>> #redef Kafka::tag_json = T; >>> #redef Kafka::kafka_conf = table(["metadata.broker.list"] = >>> "XX.XX.XX.XX:9092"); >>> >>> ########### >>> ########### >>> >>> @load >>> /usr/local/bro/lib/bro/plugins/packages/metron-bro-plugin-kafka/scripts/Apache/Kafka >>> redef Kafka::topic_name = ""; >>> redef Kafka::tag_json = T; >>> redef Kafka::debug = "all"; >>> >>> event bro_init() &priority=-10 >>> { >>> # handles DNS >>> local dns_filter: Log::Filter = [ >>> $name = "kafka-dns", >>> $writer = Log::WRITER_KAFKAWRITER, >>> $config = table(["metadata.broker.list"] = " XX.XX.XX.XX:9092"), >>> *$config = table(["topic_name"] = "bro_dns"),* >>> $path = "dns" >>> ]; >>> Log::add_filter(DNS::LOG, dns_filter); >>> } >>> >>> Still having no luck: >>> >>> [root at localhost current]# tail -f stderr.log >>> %7|1554299460.116|CONNECT|rdkafka#producer-1| >>> [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Connecting to >>> ipv4#127.0.0.1:9092 (plaintext) with socket 34 >>> %7|1554299460.116|STATE|rdkafka#producer-1| >>> [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Broker changed >>> state DOWN -> CONNECT >>> %7|1554299460.116|BROADCAST|rdkafka#producer-1| >>> [thrd:localhost:9092/bootstrap]: Broadcasting state change >>> %7|1554299460.116|BROKERFAIL|rdkafka#producer-1| >>> [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: failed: err: >>> Local: Broker transport failure: (errno: Connection refused) >>> %7|1554299460.116|FAIL|rdkafka#producer-1| >>> [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Connect to ipv4# >>> 127.0.0.1:9092 failed: Connection refused >>> %7|1554299460.116|STATE|rdkafka#producer-1| >>> [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Broker changed >>> state CONNECT -> DOWN >>> %7|1554299460.116|BROADCAST|rdkafka#producer-1| >>> [thrd:localhost:9092/bootstrap]: Broadcasting state change >>> %7|1554299460.116|BUFQ|rdkafka#producer-1| >>> [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Purging bufq >>> with 0 buffers >>> %7|1554299460.116|BUFQ|rdkafka#producer-1| >>> [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Updating 0 >>> buffers on connection reset >>> %7|1554299460.116|RECONNECT|rdkafka#producer-1| >>> [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Delaying next >>> reconnect by 435ms >>> %7|1554299460.394|NOINFO|rdkafka#producer-1| [thrd:main]: Topic bro_dns >>> partition count is zero: should refresh metadata >>> %7|1554299460.394|METADATA|rdkafka#producer-1| [thrd:main]: Skipping >>> metadata refresh of 1 topic(s): no usable brokers >>> %7|1554299460.552|RECONNECT|rdkafka#producer-1| >>> [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Delaying next >>> reconnect by 276ms >>> 
%7|1554299460.827|CONNECT|rdkafka#producer-1| >>> [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: broker in state >>> DOWN connecting >>> %7|1554299460.827|CONNECT|rdkafka#producer-1| >>> [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Connecting to >>> ipv4#127.0.0.1:9092 (plaintext) with socket 34 >>> %7|1554299460.827|STATE|rdkafka#producer-1| >>> [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Broker changed >>> state DOWN -> CONNECT >>> %7|1554299460.827|BROADCAST|rdkafka#producer-1| >>> [thrd:localhost:9092/bootstrap]: Broadcasting state change >>> %7|1554299460.827|BROKERFAIL|rdkafka#producer-1| >>> [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: failed: err: >>> Local: Broker transport failure: (errno: Connection refused) >>> %7|1554299460.827|FAIL|rdkafka#producer-1| >>> [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Connect to ipv4# >>> 127.0.0.1:9092 failed: Connection refused >>> %7|1554299460.827|STATE|rdkafka#producer-1| >>> [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Broker changed >>> state CONNECT -> DOWN >>> %7|1554299460.827|BROADCAST|rdkafka#producer-1| >>> [thrd:localhost:9092/bootstrap]: Broadcasting state change >>> %7|1554299460.827|BUFQ|rdkafka#producer-1| >>> [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Purging bufq >>> with 0 buffers >>> %7|1554299460.827|BUFQ|rdkafka#producer-1| >>> [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Updating 0 >>> buffers on connection reset >>> %7|1554299461.394|NOINFO|rdkafka#producer-1| [thrd:main]: Topic bro_dns >>> partition count is zero: should refresh metadata >>> %7|1554299461.394|METADATA|rdkafka#producer-1| [thrd:main]: Skipping >>> metadata refresh of 1 topic(s): no usable brokers >>> %7|1554299461.827|CONNECT|rdkafka#producer-1| >>> [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: broker in state >>> DOWN connecting >>> %7|1554299461.828|CONNECT|rdkafka#producer-1| >>> [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Connecting to >>> ipv4#127.0.0.1:9092 (plaintext) with socket 34 >>> %7|1554299461.828|STATE|rdkafka#producer-1| >>> [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Broker changed >>> state DOWN -> CONNECT >>> %7|1554299461.828|BROADCAST|rdkafka#producer-1| >>> [thrd:localhost:9092/bootstrap]: Broadcasting state change >>> %7|1554299461.828|BROKERFAIL|rdkafka#producer-1| >>> [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: failed: err: >>> Local: Broker transport failure: (errno: Connection refused) >>> %7|1554299461.828|FAIL|rdkafka#producer-1| >>> [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Connect to ipv4# >>> 127.0.0.1:9092 failed: Connection refused >>> %7|1554299461.828|STATE|rdkafka#producer-1| >>> [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Broker changed >>> state CONNECT -> DOWN >>> %7|1554299461.828|BROADCAST|rdkafka#producer-1| >>> [thrd:localhost:9092/bootstrap]: Broadcasting state change >>> %7|1554299461.828|BUFQ|rdkafka#producer-1| >>> [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Purging bufq >>> with 0 buffers >>> %7|1554299461.829|BUFQ|rdkafka#producer-1| >>> [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Updating 0 >>> buffers on connection reset >>> %7|1554299461.829|RECONNECT|rdkafka#producer-1| >>> [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Delaying next >>> reconnect by 715ms >>> >>> >>> Do you have any other suggestions for me? 
>>> Thanks
>>>
>>> On Wed, Apr 3, 2019 at 8:38 AM Zeolla at GMail.com wrote:
>>>
>>>> Are you using master? The easiest way to fix this is likely to add a key of "topic_name" and a value of "dns" to your $config table, similar to as shown here . Please let me know if that works for you.
>>>>
>>>> There is a known issue in master where the plugin is not falling back to use $path as the destination topic name, and I have a PR open for it but unfortunately haven't had a lot of time to finish (it is just pending some btests - functionally it is done) and get that merged.
>>>>
>>>> - Jon Zeolla
>>>> Zeolla at GMail.Com
>>>>
>>>> [quoted original message and rdkafka debug output trimmed]
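Putting the pieces from the patch above together, the shape of a working per-topic setup looks like the following local.bro fragment - a hedged sketch, untested as written, with the broker address a placeholder as elsewhere in the thread:

```
@load /usr/local/bro/lib/bro/plugins/packages/metron-bro-plugin-kafka/scripts/Apache/Kafka

redef Kafka::topic_name = "";
redef Kafka::tag_json = T;
# The broker list belongs in Kafka::kafka_conf, not in the per-filter $config table.
redef Kafka::kafka_conf = table(["metadata.broker.list"] = "XX.XX.XX.XX:9092");

event bro_init() &priority=-10
    {
    local dns_filter: Log::Filter = [
        $name = "kafka-dns",
        $writer = Log::WRITER_KAFKAWRITER,
        # The per-log destination topic goes in $config.
        $config = table(["topic_name"] = "dns"),
        $path = "dns"
    ];
    Log::add_filter(DNS::LOG, dns_filter);
    }
```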
From tscheponik at gmail.com Thu Apr 4 08:50:44 2019 From: tscheponik at gmail.com (Woot4moo) Date: Thu, 4 Apr 2019 11:50:44 -0400 Subject: [Zeek] Proper way to reference potentially missing key Message-ID: How can one reference a potentially missing key such that the script will not terminate? For example, in a file_new event, if I reference the mime_type attribute and it is uninitialized I receive "no such field in record".

Example code below:

if( f?$mime_type) #error here

From jsiwek at corelight.com Thu Apr 4 09:19:09 2019 From: jsiwek at corelight.com (Jon Siwek) Date: Thu, 4 Apr 2019 09:19:09 -0700 Subject: [Zeek] Proper way to reference potentially missing key In-Reply-To: References: Message-ID: On Thu, Apr 4, 2019 at 8:59 AM Woot4moo wrote:
> How can one reference a potentially missing key such that the script will not terminate? For example in a file_new event, if I reference the mime_type attribute and it is uninitialized I receive "no such field in record"
>
> Example code below:
>
> if( f?$mime_type) #error here

That's the correct way to check for uninitialized &optional values, but the error here is saying there's "no such field", not that the field is uninitialized. i.e. there is no "mime_type" field in the "fa_file" record type. You're probably meaning to access f$info$mime_type, which gets populated via the "file_sniff" event's "fa_metadata" record's "mime_type" field.

(You can check if a record contains a field name by using the "record_fields" function to introspect, but that's not a typical thing people do and likely not what you really want).

- Jon
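Jon's distinction is easy to see in a short sketch (untested, but using only the events and fields he names): file_sniff delivers the fa_metadata record directly, while f$info and its mime_type field are both &optional and need ?$ guards.

```
event file_sniff(f: fa_file, meta: fa_metadata)
    {
    # fa_metadata's mime_type is &optional, so guard it with ?$ first.
    if ( meta?$mime_type )
        print fmt("file %s sniffed as %s", f$id, meta$mime_type);
    }

event file_new(f: fa_file)
    {
    # f$info and f$info$mime_type are both &optional; check each level.
    if ( f?$info && f$info?$mime_type )
        print f$info$mime_type;
    }
```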
From shadowx787 at gmail.com Thu Apr 4 13:45:05 2019 From: shadowx787 at gmail.com (Justin Mullins) Date: Thu, 4 Apr 2019 16:45:05 -0400 Subject: [Zeek] Extract IP Header Options Message-ID: Hi,

I was wondering is there an existing way in Zeek to log IP Header Options? The conn log has a lot of the IP Header fields but not the IP Header "Options" field data. Specifically looking at logging data related to CIPSO packet labeling (reference: https://tools.ietf.org/html/draft-ietf-cipso-ipsecurity-01).

If not, can anyone point me to a decent example of a bro script logging similar data from the IP Header? (it's been quite a few years since I've looked at bro scripts and I haven't found any examples doing something similar to what I want)

Thanks guys, any information you can provide would be helpful!

From tscheponik at gmail.com Thu Apr 4 14:18:45 2019 From: tscheponik at gmail.com (Woot4moo) Date: Thu, 4 Apr 2019 17:18:45 -0400 Subject: [Zeek] Proper way to reference potentially missing key In-Reply-To: References: Message-ID: That worked. Thanks

On Thu, Apr 4, 2019 at 12:19 PM Jon Siwek wrote:
> [quoted reply trimmed]

From jsiwek at corelight.com Thu Apr 4 16:19:02 2019 From: jsiwek at corelight.com (Jon Siwek) Date: Thu, 4 Apr 2019 16:19:02 -0700 Subject: [Zeek] Extract IP Header Options In-Reply-To: References: Message-ID: On Thu, Apr 4, 2019 at 1:48 PM Justin Mullins wrote:
> I was wondering is there an existing way in Zeek to log IP Header Options?

Doesn't look like it, but you can try hacking it in. For example, add the Options data as a field to the ip4_hdr record: https://github.com/zeek/zeek/blob/3f7bbf2784d094787e6c7cb32adb0fc658fb8a86/scripts/base/init-bare.bro#L1515-L1524

Add code to populate it here: https://github.com/zeek/zeek/blob/3f7bbf2784d094787e6c7cb32adb0fc658fb8a86/src/IP.cc#L311-L322

Then consume the data via a new_packet event handler: https://docs.zeek.org/en/latest/scripts/base/bif/event.bif.bro.html#id-new_packet

- Jon

From johanna at icir.org Fri Apr 5 20:24:50 2019 From: johanna at icir.org (Johanna Amann) Date: Fri, 5 Apr 2019 20:24:50 -0700 Subject: [Zeek] Projected Throughput In-Reply-To: References: Message-ID: <20190406032450.gcsxxraurrovtjnw@Tranquility.local> Hi,

I know this is a bit late, but still...

> I've built a 1U box (Xeon Bronze-3104 / 16 GB RAM / 10GBase-T ports with Intel X557) and I'm wondering if it's able to manage a certain level of traffic; in this case, a sustained daily rate of 10MBps, spiking at 15MBps (please note, MBps, not Mbps - I know I could easily handle a sustained 15 Mbps). I'll be analyzing traffic on a large corporate network. What do you think? Is it underpowered? Way overboard? Any best guesses about the max level of throughput it could handle?

15 megabytes per second is still only around 120 megabit per second. While it always depends on the traffic, for the things that I have typically seen you should not have any problems; I am not even sure if you will need a cluster setup for that, a standalone Bro process might be enough.

Johanna

From tscheponik at gmail.com Tue Apr 9 05:05:06 2019 From: tscheponik at gmail.com (Woot4moo) Date: Tue, 9 Apr 2019 08:05:06 -0400 Subject: [Zeek] Unit test extensions Message-ID: Has anyone in the community extended btest to support better test metrics? Currently btest will give me a pass or fail per file as opposed to having multiple scenarios in a file. The structure I am looking for is below:

Example in one file

@Scenario(First)
#test code here
@Scenario(Second)
#test code here

Success 2 out of 2 passed

-------------- next part -------------- An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190409/f3036304/attachment.html

From jsiwek at corelight.com Tue Apr 9 09:19:54 2019 From: jsiwek at corelight.com (Jon Siwek) Date: Tue, 9 Apr 2019 09:19:54 -0700 Subject: [Zeek] Unit test extensions In-Reply-To: References: Message-ID: On Tue, Apr 9, 2019 at 5:20 AM Woot4moo wrote:
> Has anyone in the community extended btest to support better test metrics? Currently btest will give me a pass or fail per file as opposed to having multiple scenarios in a file.

If each scenario can share the same @TEST-EXEC commands and just need different %INPUT contents, then @TEST-START-NEXT may work for you.

- Jon

From kevross33 at googlemail.com Wed Apr 10 07:18:45 2019 From: kevross33 at googlemail.com (Kevin Ross) Date: Wed, 10 Apr 2019 15:18:45 +0100 Subject: [Zeek] Interface Removed From Config but Keeps Monitoring Traffic Message-ID: Hi,

I configured an afpacket interface in addition to one I was already using and it monitored fine, but I want to stop monitoring this link for now and just leave it to Suricata at the moment.

I have removed the configuration for it and redeployed, cleaned, and everything else I can think of, including many config installs, and when started, while only the workers configured on the original interface show in running jobs, I am still getting traffic events from the other interface (I know this because of the IPs being monitored).

Is there anything I can check or clean up to try and force bro to completely "forget" it ever knew about this interface? Thanks.

Kind Regards, Kevin Ross

From mfernandez at mitre.org Wed Apr 10 09:38:34 2019 From: mfernandez at mitre.org (Fernandez, Mark I) Date: Wed, 10 Apr 2019 16:38:34 +0000 Subject: [Zeek] UPDATE: Bro/Zeek ATT&CK-based Analytics and Reporting (BZAR), by MITRE Message-ID: Gary, All -

We updated the BZAR scripts to be forward-compatible with Zeek v2.6.x and backward-compatible with v2.5.x and below, using '@if' directives to check the version number. Affected files include: main.bro, bzar_dce-rpc.bro, and bzar_smb.bro. Please visit the GitHub repo to find the updated files.

* https://github.com/mitre-attack/car/tree/master/implementations/bzar

Cheers, Mark

Mark I. Fernandez The MITRE Corporation mfernandez at mitre.org

P.S. The Bro/Zeek Package Manager for BZAR is coming soon.

From gary.w.weasel2.civ at mail.mil Wed Apr 10 10:03:39 2019 From: gary.w.weasel2.civ at mail.mil (Weasel, Gary W CIV DISA RE (US)) Date: Wed, 10 Apr 2019 17:03:39 +0000 Subject: [Zeek] UPDATE: Bro/Zeek ATT&CK-based Analytics and Reporting (BZAR), by MITRE Message-ID: <0C34D9CA9B9DBB45B1C51871C177B4B291C30B18@UMECHPA68.easf.csd.disa.mil> Mark,

Thank you for the update! Confirming on my end that we're able to get it running and producing notices.

v/r Gary

-----Original Message----- From: Fernandez, Mark I Sent: Wednesday, April 10, 2019 12:39 PM To: zeek at zeek.org; Weasel, Gary W CIV DISA RE (US) Subject: [Non-DoD Source] UPDATE: Bro/Zeek ATT&CK-based Analytics and Reporting (BZAR), by MITRE

All active links contained in this email were disabled.
Please verify the identity of the sender, and confirm the authenticity of all links contained within the message prior to copying and pasting the address to a Web browser.

________________________________

[quoted original BZAR announcement trimmed]

From justin at corelight.com Wed Apr 10 16:31:50 2019 From: justin at corelight.com (Justin Azoff) Date: Wed, 10 Apr 2019 19:31:50 -0400 Subject: [Zeek] Interface Removed From Config but Keeps Monitoring Traffic In-Reply-To: References: Message-ID: On Wed, Apr 10, 2019 at 10:22 AM Kevin Ross wrote:
> [quoted message trimmed]

Ah, you needed to stop those extra workers before removing them from the configuration. I thought we added something to warn people when they did that, but that may only detect if you reduce lb_procs and not remove an interface entirely.

> Is there anything I can check or clean up to try and force bro to completely "forget" it ever knew about this interface? Thanks.

the easiest thing to do would be to do:

broctl stop
broctl ps.bro

that should show any remaining orphaned bro processes. Kill those, then start things back up and you should be good to go.

-- Justin

From ambros.novak.89 at gmail.com Wed Apr 10 18:57:56 2019 From: ambros.novak.89 at gmail.com (Ambros Novak) Date: Wed, 10 Apr 2019 21:57:56 -0400 Subject: [Zeek] threat intel questions Message-ID: Hello! I have several questions about the threat intel:

Is there a way to add meta.url and meta.desc to intel.log?

For Intel::FILE_NAME to work, does base/frameworks/intel/files.bro go in local.bro?

Will Intel::FILE_HASH detect MD5, SHA1, SHA256, imphash, and authentihash?

Will Intel::CERT_HASH detect MD5 or SHA256?

Will the intel framework detect part of a URL, or only the full URL?

Will "@domain.com" work in the Intel::EMAIL, or is it best to just remove the "@" and add it to Intel::DOMAIN?

Does meta.do_notice have to be set to T for an event to get logged into intel.log?

Thank you for the help.

-------------- next part -------------- An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190410/cfc9a3c4/attachment.html From jan.grashoefer at gmail.com Thu Apr 11 04:22:44 2019 From: jan.grashoefer at gmail.com (=?UTF-8?Q?Jan_Grash=c3=b6fer?=) Date: Thu, 11 Apr 2019 13:22:44 +0200 Subject: [Zeek] threat intel questions In-Reply-To: References: Message-ID: On 11/04/2019 03:57, Ambros Novak wrote: > Is there a way to add meta.url and meta.desc to intel.log? In theory there is but you have to keep in mind that multiple meta data records might be associated with a single indicator that matched. This is also why the sources field in intel.log is a set. See the following blog post for more details: https://blog.zeek.org/2016/12/the-intelligence-framework-update.html > For Intel::FILE_NAME to work, does base/frameworks/intel/files.bro go in > local.bro? Scripts in base/ should be loaded by default. If you don't see hits on file names try to spot them in files.log first. > Will Intel::FILE_HASH detect MD5, SHA1, SHA256, SHA256, imphash, and > authentihash? > > Will Intel::CERT_HASH detect MD5 or SHA256? > > Will the intel frame detect part of part a URL or does only the full URL? > > Will "@domain.com" work in the Intel::EMAIL, or is it best to just remove > the "@" and add it to Intel::Domain? To understand how the different indicators work just have a look at the corresponding seen scripts: https://github.com/zeek/zeek/tree/master/scripts/policy/frameworks/intel/seen For example in case of Intel::FILE_HASH the file_hash event is used, which is triggered "each time file analysis generates a digest". > Does meta.do_notice have to be set to T for an event to get logged into > intel.log? No. Setting do_notice to T will cause a notice to be generated. More info on notices can be found here: https://docs.zeek.org/en/stable/frameworks/notice.html Jan From kevross33 at googlemail.com Thu Apr 11 05:52:53 2019 From: kevross33 at googlemail.com (Kevin Ross) Date: Thu, 11 Apr 2019 13:52:53 +0100 Subject: [Zeek] Interface Removed From Config but Keeps Monitoring Traffic In-Reply-To: References: Message-ID: > > Ok it seems to be fine now. The ps.bro showed nothing but I added the > interface back in and ran it again (it didn't show those threads either > before with status). Then I stopped bro and commented it out again and > redeployed and seems ok now. Before when I changed I commented them out > while bro was running and redeployed. Thanks for your time, Kevin -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190411/d4b1d7cb/attachment.html From robin at corelight.com Thu Apr 11 23:42:30 2019 From: robin at corelight.com (Robin Sommer) Date: Fri, 12 Apr 2019 08:42:30 +0200 Subject: [Zeek] Unit test extensions In-Reply-To: References: Message-ID: <20190412064230.GK48331@corelight.com> There are also two more mechanisms that might be helpful: - Split a test across parts: https://github.com/zeek/btest#splitting-tests-into-parts - Display where you are inside a test: https://github.com/zeek/btest#displaying-progress May all not be quite what you're looking for, though. Robin On Tue, Apr 09, 2019 at 09:19 -0700, Jonathan Siwek wrote: > On Tue, Apr 9, 2019 at 5:20 AM Woot4moo wrote: > > > > Has anyone in the community extended btest to support better test metrics? Currently btest will give me a pass or fail per file as opposed to having multiple scenarios in a file. 
> > If each scenario can share the same @TEST-EXEC commands and just need
> > different %INPUT contents, then @TEST-START-NEXT may work for you.
>
> - Jon

-- Robin Sommer * Corelight, Inc. * robin at corelight.com * www.corelight.com

From tscheponik at gmail.com Fri Apr 12 04:27:16 2019 From: tscheponik at gmail.com (Woot4moo) Date: Fri, 12 Apr 2019 07:27:16 -0400 Subject: [Zeek] NIC benchmarks and selection criteria Message-ID: Are there any available benchmarks by which the community measures NIC selection? I.e., how do others know which hardware baseline to choose for a given traffic volume while using Zeek?

From michalpurzynski1 at gmail.com Fri Apr 12 04:49:55 2019 From: michalpurzynski1 at gmail.com (Michał Purzyński) Date: Fri, 12 Apr 2019 13:49:55 +0200 Subject: [Zeek] NIC benchmarks and selection criteria In-Reply-To: References: Message-ID: What's your expected throughput? For anything from 1Gbit up just use Intel X710 based cards.

The Suricata running guide we wrote a while ago applies to Zeek as well.

https://github.com/pevma/SEPTun

https://github.com/pevma/SEPTun-Mark-II

Gone are the days of Myricoms, etc. It's rather unlikely you will need dedicated capture cards either.

> On Apr 12, 2019, at 1:27 PM, Woot4moo wrote:
> [quoted message trimmed]

From tscheponik at gmail.com Fri Apr 12 04:58:21 2019 From: tscheponik at gmail.com (Woot4moo) Date: Fri, 12 Apr 2019 07:58:21 -0400 Subject: [Zeek] NIC benchmarks and selection criteria In-Reply-To: References: Message-ID: Thanks, I was unaware of these two guides. Appreciate the extra detail about probably not needing a dedicated card. I'll dig into these and drop more questions as they come up.

On Fri, Apr 12, 2019 at 7:49 AM Michał Purzyński wrote:
> [quoted reply trimmed]
From tscheponik at gmail.com Fri Apr 12 10:52:01 2019 From: tscheponik at gmail.com (Woot4moo) Date: Fri, 12 Apr 2019 13:52:01 -0400 Subject: [Zeek] Unit test extensions In-Reply-To: References: Message-ID: Yes, this is similar.

On Tue, Apr 9, 2019 at 12:20 PM Jon Siwek wrote:
> [quoted reply trimmed]

From andrew at aklaus.ca Sat Apr 13 00:14:36 2019 From: andrew at aklaus.ca (Andrew Klaus) Date: Sat, 13 Apr 2019 09:14:36 +0200 Subject: [Zeek] VRRP/CARP Packet Analyser In-Reply-To: References: Message-ID: Hello,

In my weird.log, I've noticed unknown_protocol_112 showing up regularly for me. I believe this to be the Virtual Router Redundancy Protocol (VRRP), which does match up with CARP that's enabled on our OpenBSD firewalls.

Before I start looking further, has anyone built a parser for Zeek already? If not, I'll start reading the protocol spec and seeing if I'm able to write one. I believe it to be useful to have the protocol analyzed for noticing any anomalies, etc.

Thanks! Andrew

From dherakovic at hotmail.com Sat Apr 13 12:50:09 2019 From: dherakovic at hotmail.com (Daniel Herakovic) Date: Sat, 13 Apr 2019 19:50:09 +0000 Subject: [Zeek] (no subject) Message-ID: Hello,

I've been trying to get Zeek installed on a Clear linux distribution machine for a while. I know my way around linux enough to get this done from the github source, but what caused me so much trouble was a missing prerequisite - the C++ Actor Framework.

I'm not a linux beginner, and I installed all of the prerequisites, but if this was added to the part of the installation documentation under "To build Bro from source, the following additional dependencies are required:", installing from source would have been much smoother for me. If for some reason this being left out is intentional, sorry to bring this up.

After setting up all of the .cfg files and running install and start in broctl, I got the following error:

cl at clr-31868b162a544d5290cfe54c3dd15df1 /usr/local/bro/logs/current $ cat stderr.log
*** failed to set config parameter work-stealing.moderate-sleep-duration-us: invalid name
*** failed to set config parameter work-stealing.relaxed-sleep-duration-us: invalid name
/usr/local/bro/share/broctl/scripts/run-bro: line 110: 1211 Segmentation fault (core dumped) nohup "$mybro" "$@"

The process did not start. Any suggestions how to solve this, or any links to possible hints for a solution, would be appreciated.

I enjoyed the conference at Cern very much.

Thanks. Dan.
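On Andrew's VRRP/CARP question above, a hedged and untested sketch of a script-land stopgap that merely surfaces protocol-112 packets; note that raw_packet fires for every packet Zeek sees, so this is suitable for experimentation rather than production:

```
event raw_packet(p: raw_pkt_hdr)
    {
    # IP protocol 112 is VRRP; OpenBSD's CARP reuses the same number.
    if ( p?$ip && p$ip$p == 112 )
        print fmt("possible VRRP/CARP advertisement: %s -> %s", p$ip$src, p$ip$dst);
    }
```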
From dherakovic at hotmail.com Sun Apr 14 10:43:17 2019 From: dherakovic at hotmail.com (Daniel Herakovic) Date: Sun, 14 Apr 2019 17:43:17 +0000 Subject: [Zeek] Fw: CAF and running zeek help In-Reply-To: References: Message-ID: Hello,

Last night I wrote about adding the C++ Actor Framework as a prerequisite in the installation manual. Today I tried to install zeek on my osx and noticed it gets recursively cloned from the zeek github repo. Sorry to bring this up.

I haven't resolved the second question about getting a segmentation fault when running zeek. Any help will be appreciated. Thanks. Dan.

________________________________________
From: Daniel Herakovic Sent: Saturday, April 13, 2019 9:50 PM To: zeek at zeek.org Subject:
[forwarded message trimmed]

From michalpurzynski1 at gmail.com Mon Apr 15 02:13:31 2019 From: michalpurzynski1 at gmail.com (Michał Purzyński) Date: Mon, 15 Apr 2019 11:13:31 +0200 Subject: [Zeek] (no subject) In-Reply-To: References: Message-ID: There is no need to manually compile Zeek on ClearLinux, as it is included in the distribution.

swupd bundle-add network-security-monitoring

And Zeek is installed. You want to work around the Zeek/ClearLinux incompatibility next:

/usr/bin/rsync -aP /usr/share/bro /tmp
rm -rf /usr/share/bro
/usr/bin/rsync -aP /tmp/bro /usr/share/
ln -s /etc/bro/config/broctl.cfg /etc/broctl.cfg
ln -s /etc/bro/config/networks.cfg /etc/networks.cfg
ln -s /etc/bro/config/node.cfg /etc/node.cfg

Create the service user:

useradd bro
chown -Rv bro:bro /var/lib/bro
chown -Rv bro:bro /usr/share/broctl/scripts
su - bro
broctl deploy

On Sat, Apr 13, 2019 at 9:58 PM Daniel Herakovic wrote:
> [quoted message trimmed]

From jsiwek at corelight.com Mon Apr 15 10:10:47 2019 From: jsiwek at corelight.com (Jon Siwek) Date: Mon, 15 Apr 2019 10:10:47 -0700 Subject: [Zeek] Fw: CAF and running zeek help In-Reply-To: References: Message-ID: On Sun, Apr 14, 2019 at 10:52 AM Daniel Herakovic wrote:
> I haven't resolved the second question about getting a segmentation fault when running zeek. Any help will be appreciated.

Getting a stack trace in a debugger (gdb / lldb) and sending that would give us more hints.

- Jon

From justin at corelight.com Mon Apr 15 11:19:41 2019 From: justin at corelight.com (Justin Azoff) Date: Mon, 15 Apr 2019 14:19:41 -0400 Subject: [Zeek] (no subject) In-Reply-To: References: Message-ID: On Sat, Apr 13, 2019 at 3:58 PM Daniel Herakovic wrote:
> cl at clr-31868b162a544d5290cfe54c3dd15df1 /usr/local/bro/logs/current $ cat stderr.log
> *** failed to set config parameter work-stealing.moderate-sleep-duration-us: invalid name
> *** failed to set config parameter work-stealing.relaxed-sleep-duration-us: invalid name

You installed CAF from somewhere? You likely just need to remove that and let zeek install the specific version that it includes in src/3rdparty/caf

-- Justin

From ambros.novak.89 at gmail.com Mon Apr 15 18:33:09 2019 From: ambros.novak.89 at gmail.com (Ambros Novak) Date: Mon, 15 Apr 2019 21:33:09 -0400 Subject: [Zeek] threat intel questions In-Reply-To: References: Message-ID: Thank you, Jan.

I'm unable to get any threat intel events. The specific feed file was added in local.bro and the policy was redeployed. The intel.log is not being generated.

Is there a verbose debugging or warning when the policy is deployed to check for errors?

What is the best way to test the threat intel framework and events?

If the syntax of the feed.txt is bad, will it cause no events in intel.log?

Will unicode characters (non-ASCII) in the feed.txt cause an error or break the threat intel framework?

Will multi-line values in the source, desc, or url cause the threat intel framework to not work?

Thank you in advance for the help!!!
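For a quick way to exercise the framework, a minimal hedged sketch of a local.bro fragment that loads one feed; the file path and source name here are made-up examples, and the columns must be separated by literal tabs:

```
# Hedged sketch: the feed path and source name are hypothetical examples.
@load frameworks/intel/seen

redef Intel::read_files += {
    "/usr/local/bro/share/bro/site/feed.txt"
};

# feed.txt must be TAB-separated (spaces will not work) and start with a
# header line, e.g.:
# #fields<TAB>indicator<TAB>indicator_type<TAB>meta.source<TAB>meta.do_notice
# example.com<TAB>Intel::DOMAIN<TAB>my_feed<TAB>T
```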
On Thu, Apr 11, 2019 at 7:25 AM Jan Grashöfer wrote:
> [quoted reply trimmed]

From shirkdog.bsd at gmail.com Mon Apr 15 20:59:53 2019 From: shirkdog.bsd at gmail.com (Michael Shirk) Date: Mon, 15 Apr 2019 23:59:53 -0400 Subject: [Zeek] threat intel questions In-Reply-To: References: Message-ID: Format of the Intel files is critical; there should be errors in the reporter.log on startup if there are any issues with the formatting of the file. The most important issue is tab-separated fields in your Intel files, next being that you have all of the necessary fields.

-- Michael Shirk Daemon Security, Inc. https://www.daemon-security.com

On Mon, Apr 15, 2019, 22:15 Ambros Novak wrote:
> [quoted message trimmed]

From dherakovic at hotmail.com Tue Apr 16 08:29:42 2019 From: dherakovic at hotmail.com (Daniel Herakovic) Date: Tue, 16 Apr 2019 15:29:42 +0000 Subject: [Zeek] Fw: CAF and running zeek help In-Reply-To: References: Message-ID: Hey,

I tried a fresh install from the github repo on duplicate hardware, with the included CAF, and didn't get any segmentation fault errors when running it. Now it's running and logging properly. I assume the stack trace won't be necessary. If for some reason you want the gdb/lldb stack trace, I have that install available.

Thanks for your responses and suggestions. Dan.

From: Jon Siwek Sent: Monday, April 15, 2019 7:10 PM To: Daniel Herakovic Cc: zeek at zeek.org Subject: Re: [Zeek] Fw: CAF and running zeek help
[quoted reply trimmed]

From ambros.novak.89 at gmail.com Wed Apr 17 11:00:00 2019 From: ambros.novak.89 at gmail.com (Ambros Novak) Date: Wed, 17 Apr 2019 14:00:00 -0400 Subject: [Zeek] threat intel questions In-Reply-To: References: Message-ID: Thank you Michael. One last weird question: is there a way to have threat intel events with a different source (or a different threat intel file altogether) write out to another log - like hits from feed2.txt would write to intel2.log?
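One plausible way to get that with the logging framework is an extra filter on Intel::LOG keyed off the sources field - an untested sketch, where "feed2" stands in for whatever meta.source value the second feed carries:

```
event bro_init()
    {
    local f: Log::Filter = [
        $name = "intel-feed2",
        $path = "intel2",
        # Only entries whose sources set contains "feed2" go to intel2.log.
        $pred = function(rec: Intel::Info): bool
            { return "feed2" in rec$sources; }
    ];
    Log::add_filter(Intel::LOG, f);
    }
```

The default intel.log filter is untouched here, so it would still receive everything; splitting exclusively would also need the default filter removed or given a complementary predicate.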
> On Apr 15, 2019, at 11:59 PM, Michael Shirk wrote:
>
> Format of the intel files is critical; there should be errors in
> reporter.log on startup if there are any issues with the formatting of
> the file. The most important issue is tab-separated fields in your intel
> files, next being that you have all of the necessary fields.
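For what it's worth, the split Ambros asks about can be expressed with the logging framework rather than the intel framework itself: a log filter's path_func can route records by their sources field. A rough sketch (untested; "feed2" stands in for whatever meta.source the second feed uses):

event bro_init()
	{
	# Rewrite the default intel.log filter so that hits whose sources
	# include "feed2" are written to intel2.log instead of intel.log.
	local f = Log::get_filter(Intel::LOG, "default");
	f$path_func = function(id: Log::ID, path: string, rec: Intel::Info): string
		{
		return "feed2" in rec$sources ? "intel2" : "intel";
		};
	Log::add_filter(Intel::LOG, f);
	}

Note that a single hit whose indicator appears in both feeds carries both labels in sources, so it gets routed to one log or the other by whatever rule the path_func implements.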
From ejmartin2 at wpi.edu Wed Apr 17 11:31:09 2019
From: ejmartin2 at wpi.edu (Martin, Eric J)
Date: Wed, 17 Apr 2019 18:31:09 +0000
Subject: [Zeek] binpac error
Message-ID:

I just installed from source (master) on a fresh pull, and I am unable to run bro deploy. When I do, I receive the following error:

sudo /usr/local/bro/bin/broctl deploy
checking configurations ...
bro scripts failed.
/usr/local/bro/bin/bro: error while loading shared libraries: libbinpac.so.0: cannot open shared object file: No such file or directory

ldd confirms that libbinpac.so.0 isn't linked, though the library is installed, and the library is linked in ~/sandbox/zeek/build/src:

ldd /usr/local/bro/bin/bro
    linux-vdso.so.1 => (0x00007fff093de000)
    libbinpac.so.0 => not found
    libpcap.so.1 => /opt/pfring/lib/libpcap.so.1 (0x00007f13d7196000)

ldd ~/sandbox/zeek/build/src/bro
    linux-vdso.so.1 => (0x00007ffc18dcb000)
    libbinpac.so.0 => /home/ejmartin2/sandbox/zeek/build/aux/binpac/lib/libbinpac.so.0 (0x00007f41ba9de000)

lrwxrwxrwx. 1 root root 14 Apr 17 12:47 /usr/local/bro/lib64/libbinpac.so -> libbinpac.so.0
lrwxrwxrwx. 1 root root 20 Apr 17 12:47 /usr/local/bro/lib64/libbinpac.so.0 -> libbinpac.so.0.51-11
-rwxr-xr-x. 1 root root 96072 Apr 17 12:53 /usr/local/bro/lib64/libbinpac.so.0.51-11

To install, I:
- cloned the repository and made sure the submodules were recursively up to date
- cloned PF_RING, compiled, and installed it
- ./configure --with-pcap=/opt/pcap
- make
- sudo make install

Can somebody please help me with what I'm doing incorrectly?

Thank you,

Eric Martin, CISSP
Information Security Engineer
Worcester Polytechnic Institute
ejmartin2 at wpi.edu
Key fingerprint = C74F 1EBF 2E80 7984 8CB5 064E BF17 D34C C704 B30F
For security purposes, this message has been double ROT13 encoded

From jsiwek at corelight.com Wed Apr 17 16:07:08 2019
From: jsiwek at corelight.com (Jon Siwek)
Date: Wed, 17 Apr 2019 16:07:08 -0700
Subject: Re: [Zeek] binpac error
In-Reply-To:
References:
Message-ID:

On Wed, Apr 17, 2019 at 11:33 AM Martin, Eric J wrote:
>
> I just installed from source (master) on a fresh pull, and I am unable to
> run bro deploy. When I do, I receive the following error:
>
> sudo /usr/local/bro/bin/broctl deploy
> checking configurations ...
> bro scripts failed.
> /usr/local/bro/bin/bro: error while loading shared libraries: libbinpac.so.0: cannot open shared object file: No such file or directory

Thanks for the report, I just pushed a change that should help you out, grab it like:

git pull && git submodule update --recursive --init

Then rebuild/reinstall and let me know if there's still problems.

- Jon

From seth at corelight.com Wed Apr 17 19:30:46 2019
From: seth at corelight.com (Seth Hall)
Date: Wed, 17 Apr 2019 22:30:46 -0400
Subject: Re: [Zeek] VRRP/CARP Packet Analyser
In-Reply-To:
References:
Message-ID:

On 13 Apr 2019, at 3:14, Andrew Klaus wrote:

> In my weird.log, I've noticed unknown_protocol_112 showing up regularly
> for me.
> I believe this to be the Virtual Router Redundancy Protocol (VRRP),
> which does match up with CARP that's enabled on our OpenBSD firewalls.
>
> Before I start looking further, has anyone built a parser for Zeek
> already?

I haven't heard of anyone working on this fwiw. Feel free to reach out again if you need help with anything!

  .Seth

--
Seth Hall * Corelight, Inc * www.corelight.com

From mauro.palumbo at aizoon.it Thu Apr 18 00:44:23 2019
From: mauro.palumbo at aizoon.it (Palumbo Mauro)
Date: Thu, 18 Apr 2019 07:44:23 +0000
Subject: [Zeek] zeek performance with some events activated
Message-ID: <3b8b05185f2c4817b6b96cc887bcbfa0@SRVEX03.aizoon.local>

Hi Zeek-devs,

I need to do some analysis on TCP flags, and the event "tcp_packet" perfectly fits my needs. However, as stated in Zeek's documentation, using this event may significantly affect Zeek's performance, given the high number of TCP packets to look into.

Is there any other way to look into TCP flags? Would bypassing scriptland and modifying the C++ code directly be more efficient (though not the "proper" way to do it)?

Thanks in advance,
Mauro

From jsiwek at corelight.com Thu Apr 18 09:30:21 2019
From: jsiwek at corelight.com (Jon Siwek)
Date: Thu, 18 Apr 2019 09:30:21 -0700
Subject: Re: [Zeek] zeek performance with some events activated
In-Reply-To: <3b8b05185f2c4817b6b96cc887bcbfa0@SRVEX03.aizoon.local>
References: <3b8b05185f2c4817b6b96cc887bcbfa0@SRVEX03.aizoon.local>

On Thu, Apr 18, 2019 at 12:46 AM Palumbo Mauro wrote:

> I need to do some analysis on TCP flags, and the event "tcp_packet"
> perfectly fits my needs. However, as stated in Zeek's documentation,
> using this event may significantly affect Zeek's performance, given the
> high number of TCP packets to look into.
>
> Is there any other way to look into TCP flags?

No other script-only method comes to mind.

> Would bypassing scriptland and modifying the C++ code directly be more
> efficient (though not the "proper" way to do it)?

Generally, yes.

You could always do a quick measurement of whether handling just an empty "tcp_packet" event is prohibitive for your use-case. If it's not, then some other factors to help decide whether to proceed further with a script-only vs. C++ implementation might be:

(1) Length of time it would take to fully implement and test the script-only solution. If it's a lot of effort, might be worth just starting from a C++ implementation.

(2) Whether you plan to share this work w/ the wider community or it just needs to work for your particular case (for the latter a less performant, script-only solution is more acceptable).

- Jon

From jmellander at lbl.gov Thu Apr 18 09:57:19 2019
From: jmellander at lbl.gov (Jim Mellander)
Date: Thu, 18 Apr 2019 09:57:19 -0700
Subject: Re: [Zeek] zeek performance with some events activated

Another consideration to think about is whether you can run against a pcap offline, or if you need realtime analysis. For offline analysis you can turn off all policies except the one you're particularly interested in.
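For scale-testing purposes, the handler in question can be as small as the following. This is a minimal sketch: the event signature is Zeek's standard tcp_packet API, and the bare-SYN count is just an arbitrary example of a flag check.

global syn_count = 0;

# tcp_packet fires for every TCP packet once the event is handled,
# which is exactly why it is costly on busy links.
event tcp_packet(c: connection, is_orig: bool, flags: string, seq: count,
                 ack: count, len: count, payload: string)
	{
	# flags is a string such as "S", "SA", or "PA"; substring tests
	# are enough to pick out flag combinations of interest.
	if ( is_orig && "S" in flags && "A" !in flags )
		++syn_count;
	}

Running this (or an empty-bodied version of it) against representative traffic gives a quick read on whether the script-land cost is tolerable before committing to a C++ implementation.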
From Brett.Warrick at sensato.co Thu Apr 18 12:49:29 2019
From: Brett.Warrick at sensato.co (Brett Warrick)
Date: Thu, 18 Apr 2019 19:49:29 +0000
Subject: [Zeek] Zeek Consultant Needed

Salutations!

I work for a NJ-based cybersecurity firm that currently uses Zeek. We are in need of a Zeek expert to serve on a consultative basis. The ideal person would possess expert knowledge of Zeek in regards to:

- clustering
- general network topology
- deployment of Zeek to enterprise environments

Don't worry - you don't need to quit your day job! We're talking an hour or two, here and there, that we can tap your extensive knowledge. Interested? Let's talk more! Please email me at brett.warrick at sensato.co.

Thanks!

Brett
(844) 736-7286 x111
m: (732) 939-1290
www.sensato.co
e: brett.warrick at sensato.co

From mauro.palumbo at aizoon.it Fri Apr 19 00:29:21 2019
From: mauro.palumbo at aizoon.it (Palumbo Mauro)
Date: Fri, 19 Apr 2019 07:29:21 +0000
Subject: [Zeek] R: zeek performance with some events activated
In-Reply-To:
References: <3b8b05185f2c4817b6b96cc887bcbfa0@SRVEX03.aizoon.local>
Message-ID:

Hi Jon,

thanks. This is what I thought. We need to evaluate realtime traffic, not offline traffic. I'll think about which way is better for us.

Mauro
From anthony.kasza at gmail.com Fri Apr 19 08:51:26 2019
From: anthony.kasza at gmail.com (anthony kasza)
Date: Fri, 19 Apr 2019 09:51:26 -0600
Subject: [Zeek] Zeek Week at CERN Materials

For those who could not attend last week's Zeek Week at CERN, most of the presentation materials have been posted online.

https://indico.cern.ch/event/762505/contributions/

-AK

From akgraner at corelight.com Fri Apr 19 09:08:54 2019
From: akgraner at corelight.com (Amber Graner)
Date: Fri, 19 Apr 2019 11:08:54 -0500
Subject: Re: [Zeek] Zeek Week at CERN Materials

Thanks for the link. I'll get this added to a blog post.

~Amber

From akgraner at corelight.com Fri Apr 19 09:12:56 2019
From: akgraner at corelight.com (Amber Graner)
Date: Fri, 19 Apr 2019 11:12:56 -0500
Subject: [Zeek] Save the Date - ZeekWeek 2019 - Dates Announced

Save the Date: October 8th - 10th
ZeekWeek 2019 (formerly BroCon)
King Street Ballroom & Perch, Hilton Embassy Suites
255 South King Street, Seattle WA 98104

https://blog.zeek.org/2019/04/save-date-october-8th-10th-zeekweek.html

From akgraner at corelight.com Fri Apr 19 10:41:24 2019
From: akgraner at corelight.com (Amber Graner)
Date: Fri, 19 Apr 2019 12:41:24 -0500
Subject: [Zeek] Some Pics from the Open Source Zeek European Workshop at CERN

Hi all,

Here's a link to the pics that I took during the workshop - https://www.flickr.com/photos/37895468 at N06/albums/72157677797276187

These are licensed CC-BY-NC-ND (https://creativecommons.org/share-your-work/licensing-types-examples/licensing-examples/#by-nc-nd).

Enjoy!
~Amber

--
*Amber Graner*
Director of Community
Corelight, Inc
828.582.9469

* Ask me about how you can participate in the Zeek (formerly Bro) community.
* Remember - ZEEK AND YOU SHALL FIND!!

From tscheponik at gmail.com Fri Apr 19 10:48:54 2019
From: tscheponik at gmail.com (Woot4moo)
Date: Fri, 19 Apr 2019 13:48:54 -0400
Subject: [Zeek] Packet drop rates

Are there statistics (anecdotal is OK) for packet drop percentages? How are teams scaling up, and where does the drop curve spike?

From nothinrandom at gmail.com Fri Apr 19 15:49:25 2019
From: nothinrandom at gmail.com (TQ)
Date: Fri, 19 Apr 2019 15:49:25 -0700
Subject: [Zeek] Running Zeek & Suricata on Same Network Interface

Hello All,

Has anyone run Zeek and Suricata (or something similar) off the same network interface, especially via Docker? If yes, did you see any issues at all? I briefly ran both off the same interface, but wasn't very sure due to minimal traffic. Is it better to get a fancy Intel NIC with the SR-IOV feature and spawn off virtual interfaces? Have a great weekend all.

Thanks,

From michalpurzynski1 at gmail.com Fri Apr 19 16:01:27 2019
From: michalpurzynski1 at gmail.com (Michał Purzyński)
Date: Sat, 20 Apr 2019 01:01:27 +0200
Subject: Re: [Zeek] Running Zeek & Suricata on Same Network Interface

There is no need to use SR-IOV and other fancy features; everything just works. Not sure about Docker - I don't use that for any production-worthy workload (for performance reasons, it corrupts data randomly, etc).

Just use AF_Packet with a different cluster_id for each and you will be fine. You can even use a different number of threads (for Suri) and processes (for Zeek).

The first part of SEPTun I wrote with the Suricata devs might be useful for Zeek as well. And keep asking questions.

https://github.com/pevma/SEPTun
https://github.com/pevma/SEPTun-Mark-II/blob/master/README.md

Sharing a host between Suricata and Zeek is how we run our office sensors.
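For anyone wiring this up, the fanout-group separation Michał describes is a per-worker setting in node.cfg when using the af_packet plugin. A sketch (the interface name, process count, and id value are made up; the option names follow the plugin's documentation):

[worker-1]
type=worker
host=localhost
interface=af_packet::eth0
lb_method=custom
lb_procs=4
# Pick a fanout id different from the cluster-id Suricata uses on the
# same interface, so the two load-balancing groups stay separate.
af_packet_fanout_id=23

Suricata's af-packet cluster-id lives in suricata.yaml; as long as the two ids differ, both tools can hash the same NIC's traffic independently.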
From patrick.kelley at criticalpathsecurity.com Fri Apr 19 16:15:04 2019
From: patrick.kelley at criticalpathsecurity.com (Patrick Kelley)
Date: Fri, 19 Apr 2019 19:15:04 -0400
Subject: Re: [Zeek] Running Zeek & Suricata on Same Network Interface

Works fine.

I've used a docker container once, for this purpose. It did fine, but like Michał, I don't recommend it.

--
*Patrick Kelley, CISSP, C|EH, ITIL*
*CTO*
patrick.kelley at criticalpathsecurity.com
(o) 770-224-6482

*The limit to which you have accepted being comfortable is the limit to which you have grown. Accept new challenges as an opportunity to enrich yourself and not as a point of potential failure.*

From nothinrandom at gmail.com Fri Apr 19 16:20:14 2019
From: nothinrandom at gmail.com (TQ)
Date: Fri, 19 Apr 2019 16:20:14 -0700
Subject: Re: [Zeek] Running Zeek & Suricata on Same Network Interface

Thank you Michal and Patrick! I learned something new today and will take a look at your git repo to learn more. I currently have them both on Docker for easy maintenance (reload if something goes wrong). Have a great weekend!

From patrick.kelley at criticalpathsecurity.com Fri Apr 19 16:23:17 2019
From: patrick.kelley at criticalpathsecurity.com (Patrick Kelley)
Date: Fri, 19 Apr 2019 19:23:17 -0400
Subject: Re: [Zeek] Running Zeek & Suricata on Same Network Interface

You are most welcome.

As always, reach out if you have any questions.

From liburdi.joshua at gmail.com Fri Apr 19 16:33:18 2019
From: liburdi.joshua at gmail.com (Josh Liburdi)
Date: Fri, 19 Apr 2019 16:33:18 -0700
Subject: Re: [Zeek] Running Zeek & Suricata on Same Network Interface

Not much to add to the conversation except to say that where I work we have a large Docker-based deployment and have observed no issues compared to our previous bare metal install (in some locations performance increased).

From blackhole.em at gmail.com Fri Apr 19 18:05:59 2019
From: blackhole.em at gmail.com (Joe Blow)
Date: Fri, 19 Apr 2019 21:05:59 -0400
Subject: Re: [Zeek] Running Zeek & Suricata on Same Network Interface

An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190419/b88eb542/attachment-0001.html

From dhoelzer at enclaveforensics.com Sat Apr 20 09:56:20 2019
From: dhoelzer at enclaveforensics.com (David Hoelzer)
Date: Sat, 20 Apr 2019 16:56:20 +0000
Subject: [Zeek] Deprecation of &persistent
References: <4cb31782-7a2c-5f5d-759a-d6f9f9011c07@enclaveforensics.com>
Message-ID: <0100016a3bad66c3-1c041790-cada-455f-afcc-dc4c501c76fa-000000@email.amazonses.com>

Hello all!

TLDR:
I'd like to ask that there be some thought given to the deprecation and eventual removal of the &persistent option in favor of Broker data stores. IMHO, there are use cases where the &persistent attribute is much more attractive and lower overhead than the data store approach.

Longer:
As you are likely aware, &persistent is now marked deprecated and we expect it to disappear in the next version or two. The recommendation for replacement is the much more robust, SQLite-backed Broker data store.

The data store solution is very elegant, though it does seem to require more fiddling than it ought to to get a data store set up.
In the long term, and when dealing with large amounts of data that must be persistent and synchronized across nodes, this really is a wonderful solution.

That said, there seem to me to be some use cases where that is a massive hammer to swing at some very small problems. For example, we have one analysis script that is tracking successful external DNS resolutions. Specifically, it is keeping track of all IPv4 and IPv6 addresses resolved in the last 7 days (&read_expire 7 days) in a set. For all outbound connection attempts, this script generates a notice when the connection involves an external host that never appeared in a DNS answer record. This is quite handy when it comes to locating unauthorized outbound scanning, some C2 behaviors that do not rely on DNS/fast-flux sorts of things, fragile configurations of enterprise services, etc. This has been performing quite well for several years now in more than one relatively decent-sized network (100,000+ hosts).

For this problem (and others that I can imagine that would take a similar tack - i.e., only storing a set, vector, or other single primitive, rather than a massive record in a table or a table of tables), &persistent is perfectly "sized."

Am I alone in thinking that this feature should be retained *alongside* Broker data stores and potentially documented as recommended for simple primitive data persistence?

Thanks!

--
----
David Hoelzer
Chief of Operations
Enclave Forensics, Inc.
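The script pattern David describes takes only a few lines. A rough sketch of it (hypothetical names; notice generation is reduced to a print, and only A/AAAA answers are tracked):

global resolved_hosts: set[addr] &persistent &read_expire 7 days;

event dns_A_reply(c: connection, msg: dns_msg, ans: dns_answer, a: addr)
	{
	add resolved_hosts[a];
	}

event dns_AAAA_reply(c: connection, msg: dns_msg, ans: dns_answer, a: addr)
	{
	add resolved_hosts[a];
	}

event connection_established(c: connection)
	{
	# An outbound connection to an external address that never showed
	# up in a DNS answer is the case worth flagging.
	if ( c$id$orig_h in Site::local_nets &&
	     c$id$resp_h !in Site::local_nets &&
	     c$id$resp_h !in resolved_hosts )
		print fmt("no prior DNS resolution for %s", c$id$resp_h);
	}

The whole state is one set of addresses, which is the point: &persistent plus &read_expire handles both persistence and aging in two attributes.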
From liburdi.joshua at gmail.com Sat Apr 20 11:32:26 2019
From: liburdi.joshua at gmail.com (Josh Liburdi)
Date: Sat, 20 Apr 2019 11:32:26 -0700
Subject: Re: [Zeek] Running Zeek & Suricata on Same Network Interface

I'd prefer to not speak too publicly about it without permission, but there's very little config magic involved. Performance increases were the result of process isolation.

On Fri, Apr 19, 2019 at 6:06 PM Joe Blow wrote:

> Have you done any config magic? Docker compose? What circumstances
> surrounded the performance increase? I know a bunch of folks swear by
> pcap in containers, but I've never done 10gb+ in docker.
>
> Cheers,
>
> JB

From blackhole.em at gmail.com Sat Apr 20 13:36:45 2019
From: blackhole.em at gmail.com (Joe Blow)
Date: Sat, 20 Apr 2019 16:36:45 -0400
Subject: Re: [Zeek] Running Zeek & Suricata on Same Network Interface

You should get permission then, especially if there is very little (proprietary) magic involved. You brought this up publicly, not me. We're all just trying to better the community as a whole. If you learned something useful about optimizing open source network capture software via Docker, I'm sure I'm not the only person who is interested in exactly how.

Cheers,

JB

From michalpurzynski1 at gmail.com Sat Apr 20 14:32:13 2019
From: michalpurzynski1 at gmail.com (Michał Purzyński)
Date: Sat, 20 Apr 2019 23:32:13 +0200
Subject: Re: [Zeek] Deprecation of &persistent
In-Reply-To: <0100016a3bad66c3-1c041790-cada-455f-afcc-dc4c501c76fa-000000@email.amazonses.com>
References: <4cb31782-7a2c-5f5d-759a-d6f9f9011c07@enclaveforensics.com> <0100016a3bad66c3-1c041790-cada-455f-afcc-dc4c501c76fa-000000@email.amazonses.com>

I second David's opinion that some form of quick-path stores or a new implementation of &persistent should be implemented. The solution we offer right now makes people write tens of lines of cluster code, learn all the details of cluster communication, and deal with a product that's pretty much impossible to debug (broker). Rinse, repeat, guess why it does not work.

I communicated internally that we are not upgrading to 2.6 for now. I scoped the upgrade project to take me half a year (or we would have to take a detection impact).

While it is understood that the old &synchronized attribute was an enormous hack, broker gives us the ability to do it right. An easy-to-use, transparent attribute or some kind of wrapper is something we should consider offering.
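For comparison, the Broker-backed replacement that the deprecation points people toward looks roughly like this. This is a sketch against the 2.6 API: the store name and database path are made up, clustering and error handling are omitted, and the expiry argument to Broker::put is only the closest analogue to &read_expire, not an exact equivalent.

global resolved_store: opaque of Broker::Store;

event bro_init()
	{
	# An SQLite-backed master store survives restarts.
	resolved_store = Broker::create_master("resolved-hosts", Broker::SQLITE,
	    [$sqlite = [$path = "/var/db/resolved-hosts.sqlite"]]);

	# Insert a key with a 7-day expiry.
	Broker::put(resolved_store, 192.0.2.1, T, 7 days);
	}

Lookups are asynchronous (when ( local r = Broker::get(...) )), which is part of the extra fiddling both David and Michał are pointing at.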
From nothinrandom at gmail.com Sun Apr 21 14:48:43 2019
From: nothinrandom at gmail.com (TQ)
Date: Sun, 21 Apr 2019 14:48:43 -0700
Subject: [Zeek] dpd.sig rejection syntax

Hello All,

There are two protocols, A and B, which use \x02 and \x03 to encapsulate their data. Both protocols operate over 20+ ports, and the only difference is that protocol B starts with a lowercase 's' after \x02. I've looked over the dpd.sig files on the Zeek GitHub but didn't find anything for rejection. I've tried adding (!s), [!s] after \x02, but protocol A stops logging... so I know there's a syntax issue.

##! Match for \x02...\x03
signature dpd_02_03_client {
    ip-proto == tcp
    payload /\x02.{0,1500}\x03/
    tcp-state originator
    enable "A"
}

##! Match for \x02...\x03
signature dpd_02_03_server {
    ip-proto == tcp
    payload /\x02.{0,1500}\x03/
    tcp-state responder
    enable "A"
}

Thanks,

From ipninichuck at gmail.com Sun Apr 21 21:11:16 2019
From: ipninichuck at gmail.com (ivan ninichuck)
Date: Sun, 21 Apr 2019 21:11:16 -0700
Subject: Re: [Zeek] Running Zeek & Suricata on Same Network Interface

For learning more about using these tools at scale in a container environment, take a look at this video from last year's convention.

https://www.youtube.com/watch?v=jFT5QV6pft0

--
Ivan Paul Ninichuck
714-388-9614
ipninichuck at gmail.com
From gary.w.weasel2.civ at mail.mil Mon Apr 22 08:10:26 2019
From: gary.w.weasel2.civ at mail.mil (Weasel, Gary W CIV DISA RE (US))
Date: Mon, 22 Apr 2019 15:10:26 +0000
Subject: [Zeek] Kafka plugin causes logger to segfault
Message-ID: <0C34D9CA9B9DBB45B1C51871C177B4B291C361F2@UMECHPA68.easf.csd.disa.mil>

All,

I'm currently at my wits' end dealing with the Kafka plugin; I'm having great difficulty stopping it from crashing.

When I use the librdkafka version prescribed by https://packages.zeek.org/packages/view/7388aa77-4fb7-11e8-88be-0a645a3f3086 (librdkafka-0.11.5), my logger crashes immediately after startup. When using an alternative version of librdkafka (librdkafka1-0.11.4_confluent4.1.3), the logger doesn't immediately crash, but within a minute of starting it usually does. The stderr.log says the same thing every time:

/run-bro: line 110: Segmentation fault nohup "$mybro" "$@"

I have downloaded the most recent version of https://github.com/apache/metron-bro-plugin-kafka and still experience this. I am building an RPM (running CentOS) for the Kafka plugin and installing that way, since the box is offline and unable to reach bro-packages. When I tried to use librdkafka-0.11.5 I also built an RPM for that.

The following is my only added configuration:

@load Apache/Kafka/logs-to-kafka.bro
redef Kafka::logs_to_send = set(Conn::LOG);
redef Kafka::kafka_conf = table(
    ["metadata.broker.list"] = "172.16.0.40.9092"
);
redef Kafka::topic_name = "bro";
redef Kafka::tag_json = T;

The interesting thing to note: the logger does not crash if no logs are being sent (i.e. I comment out the logs_to_send line). The only other plugins I'm running are Bro::AF_Packet and Corelight::CommunityID.

Anyone have any insight or doing something different?
v/r
Gary

From jsiwek at corelight.com  Mon Apr 22 11:22:32 2019
From: jsiwek at corelight.com (Jon Siwek)
Date: Mon, 22 Apr 2019 11:22:32 -0700
Subject: [Zeek] dpd.sig rejection syntax
In-Reply-To:
References:
Message-ID:

On Sun, Apr 21, 2019 at 2:58 PM TQ wrote:

> There are two protocols, A and B which use \x02 and \x03 to encapsulate
> their data. Both protocols operate over 20+ ports, and the only difference
> is that protocol B starts with lowercase 's' after \x02. I've looked over
> the dpd.sig files on Zeek GitHub but didn't find anything for rejection.

Here's more extensive documentation on signatures:

https://docs.zeek.org/en/latest/frameworks/signatures.html

The negated "requires-signature" condition may be relevant to you.

> I've tried adding (!s), [!s] after \x02, but protocol A stops logging...
> so I know there's a syntax issue.
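(For concreteness, a sketch of the signature pair being discussed -- names and actions are hypothetical; only the \x02 / lowercase-'s' distinction comes from the thread. Jon's pointer to the underlying pattern syntax follows:

signature proto_a {
    ip-proto == tcp
    payload /^\x02[^s]/    # STX not followed by lowercase 's' => protocol A
    event "protocol A"
}

signature proto_b {
    ip-proto == tcp
    payload /^\x02s/       # STX followed by lowercase 's' => protocol B
    event "protocol B"
}
)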
> > The syntax generally follows these rules: > > http://westes.github.io/flex/manual/Patterns.html > > So [^s] means "anything except an 's' character" > > - Jon > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190422/e026d55b/attachment.html From jsiwek at corelight.com Mon Apr 22 17:01:00 2019 From: jsiwek at corelight.com (Jon Siwek) Date: Mon, 22 Apr 2019 17:01:00 -0700 Subject: [Zeek] Traceback in summary email In-Reply-To: References: Message-ID: On Mon, Apr 22, 2019 at 1:24 PM Mark Gardner wrote: > > I am getting a traceback in the connection summary emails rather than useful information. I didn't have the Python SubnetTree package installed when I built, installed, and first started Zeek but have since installed it on the management/logger and all sensors. That usually should get built/installed as part of the default Zeek installation and you don't have to independently install it. > I restarted Zeek but am still seeing the traceback. Just double-checking: the message didn't change after independently installing pysubnettree ? That would make sense since I expect there's some explicit PYTHONPATH that's always picking up the version installed with Bro/Zeek rather than the independently installed version. You could try comparing: python -c "import SubnetTree" versus: PYTHONPATH=/usr/local/bro/lib/broctl python -c "import SubnetTree" as a test of whether either version successfully gets imported. > Suggestions on where to look next? Probably would help to get more details/info that could help reproduce the error. What Zeek/Bro version ? What operating system ? What Python version and what `swig -version` ? The full `./configure` command you used when building Zeek/Bro and its output may be most helpful. A guess is that the configuration failed to detect a valid/consistent Python and somehow that botched the build/install of pysubnettree. - Jon From pcain at coopercain.com Tue Apr 23 12:07:02 2019 From: pcain at coopercain.com (Patrick Cain) Date: Tue, 23 Apr 2019 15:07:02 -0400 Subject: [Zeek] Kafka plugin causes logger to segfault In-Reply-To: <0C34D9CA9B9DBB45B1C51871C177B4B291C361F2@UMECHPA68.easf.csd.disa.mil> References: <0C34D9CA9B9DBB45B1C51871C177B4B291C361F2@UMECHPA68.easf.csd.disa.mil> Message-ID: <286601d4fa07$bcc8ac50$365a04f0$@coopercain.com> Hi, You don't say what version you're running, but with 2.5 and 2.6 I use these lines along with the kafka config: ### JSON LOGGING @load tuning/json-logs # Set the log separator redef Log::default_scope_sep = "_"; # Set the time in iso format redef LogAscii::json_timestamps = JSON::TS_ISO8601; Your kafka config looks close to mine (I leave the topic_name field blank.) My kafka emitter has been running on Centos 6, Centos 7 and RHEL7 systems for about a year. Can you manually connect to your broker from the zeek box? I have had issues in the past when the logger was happy but other things in the pipe to zookeeper and kafka were unhappy. Pat -----Original Message----- From: zeek-bounces at zeek.org On Behalf Of Weasel, Gary W CIV DISA RE (US) Sent: Monday, April 22, 2019 11:10 AM To: 'zeek at zeek.org' Subject: [Zeek] Kafka plugin causes logger to segfault All, I'm currently at my wits end dealing with the Kafka plugin, I'm having great difficulty stopping it from crashing. 
difficulty stopping it from crashing.
When I use the library of librdkafka as prescribed from https://packages.zeek.org/packages/view/7388aa77-4fb7-11e8-88be-0a645a3f3086 (librdkafka-0.11.5), my logger crashes immediately after startup. When using an alternative version of librdkafka (librdkakfa1-0.11.4_confluent4.1.3) the logger doesn't immediately crash but within a minute of starting it usually does. The stderr.log says the same every time, /run-bro: line 110: Segmentation fault nohup "$mybro" "$@" I have downloaded the most recent version of https://github.com/apache/metron-bro-plugin-kafka and still experience this. I am building an RPM (running CentOS) for the Kafka plugin and installing that way, since the box is offline and unable to reach bro-packages. When I tried to use librdkafka-0.11.5 I've also built an RPM for that. The following is my only added configuration @load Apache/Kafka/logs-to-kafka.bro redef Kafka::logs_to_send = set(Conn::LOG); redef Kafka::kafka_conf = table( ["metadata.broker.list"] = "172.16.0.40.9092" ); redef Kafka::topic_name = "bro"; redef Kafka::tag_json = T; The interesting thing to note: the logger does not crash if no logs are being sent (i.e. I comment out the logs_to_send line). The only other plugins I'm running are Bro::AF_Packet and Corelight::CommunityID. Anyone have any insight or doing something different? v/r Gary _______________________________________________ Zeek mailing list zeek at zeek.org http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/zeek From zeolla at gmail.com Tue Apr 23 12:28:28 2019 From: zeolla at gmail.com (Zeolla@GMail.com) Date: Tue, 23 Apr 2019 15:28:28 -0400 Subject: [Zeek] Kafka plugin causes logger to segfault In-Reply-To: <286601d4fa07$bcc8ac50$365a04f0$@coopercain.com> References: <0C34D9CA9B9DBB45B1C51871C177B4B291C361F2@UMECHPA68.easf.csd.disa.mil> <286601d4fa07$bcc8ac50$365a04f0$@coopercain.com> Message-ID: 172.16.0.40.9092 doesn't appear to be an IP address to me. Did you mean 172.16.0.40:9092? - Jon Zeolla Zeolla at GMail.Com On Tue, Apr 23, 2019 at 3:16 PM Patrick Cain wrote: > Hi, > > You don't say what version you're running, but with 2.5 and 2.6 I use these > lines along with the kafka config: > > ### JSON LOGGING > @load tuning/json-logs > # Set the log separator > redef Log::default_scope_sep = "_"; > # Set the time in iso format > redef LogAscii::json_timestamps = JSON::TS_ISO8601; > > Your kafka config looks close to mine (I leave the topic_name field blank.) > My kafka emitter has been running on Centos 6, Centos 7 and RHEL7 systems > for about a year. > Can you manually connect to your broker from the zeek box? I have had > issues in the past when the logger was happy but other things in the pipe > to > zookeeper and kafka were unhappy. > > Pat > -----Original Message----- > From: zeek-bounces at zeek.org On Behalf Of Weasel, > Gary W CIV DISA RE (US) > Sent: Monday, April 22, 2019 11:10 AM > To: 'zeek at zeek.org' > Subject: [Zeek] Kafka plugin causes logger to segfault > > All, > > I'm currently at my wits end dealing with the Kafka plugin, I'm having > great > difficulty stopping it from crashing. > > When I use the library of librdkafka as prescribed from > > https://packages.zeek.org/packages/view/7388aa77-4fb7-11e8-88be-0a645a3f3086 > (librdkafka-0.11.5 > ), > my logger crashes immediately after startup. When > using an alternative version of librdkafka > (librdkakfa1-0.11.4_confluent4.1.3) the logger doesn't immediately crash > but > within a minute of starting it usually does. 
> The stderr.log says the same every time, /run-bro: line 110:
> Segmentation fault nohup "$mybro" "$@"
>
> I have downloaded the most recent version of
> https://github.com/apache/metron-bro-plugin-kafka and still experience
> this.
>
> I am building an RPM (running CentOS) for the Kafka plugin and installing
> that way, since the box is offline and unable to reach bro-packages. When I
> tried to use librdkafka-0.11.5 I've also built an RPM for that.
>
> The following is my only added configuration
>
> @load Apache/Kafka/logs-to-kafka.bro
> redef Kafka::logs_to_send = set(Conn::LOG);
> redef Kafka::kafka_conf = table(
>     ["metadata.broker.list"] = "172.16.0.40.9092"
> );
> redef Kafka::topic_name = "bro";
> redef Kafka::tag_json = T;
>
> The interesting thing to note: the logger does not crash if no logs are
> being sent (i.e. I comment out the logs_to_send line).
>
> The only other plugins I'm running are Bro::AF_Packet and
> Corelight::CommunityID.
>
> Anyone have any insight or doing something different?
>
> v/r
> Gary
>
> _______________________________________________
> Zeek mailing list
> zeek at zeek.org
> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/zeek
>
> _______________________________________________
> Zeek mailing list
> zeek at zeek.org
> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/zeek

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190423/e3d08845/attachment.html

From dopheide at gmail.com  Tue Apr 23 13:04:11 2019
From: dopheide at gmail.com (Mike Dopheide)
Date: Tue, 23 Apr 2019 15:04:11 -0500
Subject: [Zeek] Deprecation of &persistent
In-Reply-To:
References: <4cb31782-7a2c-5f5d-759a-d6f9f9011c07 at enclaveforensics.com> <0100016a3bad66c3-1c041790-cada-455f-afcc-dc4c501c76fa-000000 at email.amazonses.com>
Message-ID:

I'm not a core developer so I could be totally incorrect here, but I've always been under the impression that &persistent was nearly as broken as &synchronized was, at least in a cluster environment. If I'm correct in my assessment, it's not really about retaining an existing feature as much as correctly implementing a new feature, which I imagine is quite complicated.

I agree the learning curve with Broker and persistent stores is a little steep, but there are actually a fair number of good methods for debugging to make sure it's doing what you think it should be. You also have full control over when it updates the store, so your persistence doesn't rely on a clean shutdown of the nodes.

I imagine you could write some boilerplate code that just fires off a scheduled task via bro_init to update the store; all you'd need to do is paste it in and update the variable name for each policy.

-Dop

On Sat, Apr 20, 2019 at 4:47 PM Michał Purzyński wrote:

> I second David's opinion that some form of quick-path store or a new
> implementation of &persistent should be implemented.
>
> The solution we offer right now makes people write tens of lines of
> cluster code, learning all details of cluster communication and dealing
> with a product that's pretty much impossible to debug (broker). Rinse,
> repeat, guess why it does not work.
>
> I communicated internally we are not upgrading to 2.6 for now. I scoped
> the upgrade project to take me half a year (or we would have to take a
> detection impact).
>
> While it is understood the old &synchronized attribute was an enormous
> hack, broker gives us the ability to do it right.
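(Sketching the boilerplate Dop describes above -- periodically pushing a script-level table into a SQLite-backed Broker master. The module, store name, path, and interval are all invented for illustration, and the Broker calls are the 2.6-era API, worth double-checking against the current docs:

module PersistSketch;

export {
    ## How often to snapshot in-memory state into the store.
    const write_interval = 5 min &redef;
}

global seen_hosts: set[addr] &read_expire 7 days;
global host_store: opaque of Broker::Store;
global snapshot: event();

event snapshot()
    {
    # Re-write current contents; existing keys are overwritten in place.
    for ( h in seen_hosts )
        Broker::put(host_store, h, T);

    schedule write_interval { snapshot() };
    }

event bro_init() &priority=5
    {
    host_store = Broker::create_master("seen-hosts", Broker::SQLITE,
        [$sqlite=[$path="/var/db/seen-hosts.sqlite"]]);
    schedule write_interval { snapshot() };
    }

On restart the set would be reloaded from the store (Broker::keys/Broker::get) before use; because writes happen on a timer, persistence does not depend on a clean shutdown.)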
> An easy to use, transparent attribute or some kind of wrapper is something > we should consider to offer. > > > > On Sat, Apr 20, 2019 at 7:05 PM David Hoelzer < > dhoelzer at enclaveforensics.com> wrote: > >> Hello all! >> >> TLDR: >> I'd like to ask that there be some thought given to the deprecation and >> eventual removal of the &persistent option in favor of Broker data >> stores. IMHO, there are uses cases where the &persistent attribute is >> much more attractive and lower overhead than the data store approach. >> >> Longer: >> As you are likely aware, &persistent is now marked deprecated and we >> expect it to disappear in the next version or two. The recommendation >> for replacement is the much more robust, SQLite backed, Broker data store. >> >> The data store solution is very elegant, though it does seem to require >> more fiddling than it ought to to get a data store set up. In the long >> term and when dealing with large amounts of data that must be persistent >> >> and synchronized across nodes, this really is a wonderful solution. >> >> That said, there seem to me to be some use cases where that is a massive >> >> hammer to swing at some very small problems. For example, we have one >> analysis script that is tracking successful external DNS resolutions. >> Specifically, it is keeping track of all IPv4 and IPv6 addresses >> resolved in the last 7 days (&read_expire 7 days) in a set. For all >> outbound connection attempts, this script generates a notice when the >> connection involves an external host that never appeared in a DNS answer >> >> record. This is quite handy when it comes to locating unauthorized >> >> outbound scanning, some C2 behaviors that do not rely on DNS/fast flux >> sorts of things, fragile configurations of enterprise services, etc. >> This has been performing quite well for several years now in more than >> one relatively decent sized networks (100,000+ hosts). >> >> For this problem (and others that I can imagine that would take a >> similar tack - i.e., only storing a set, vector, or other single >> primitive, rather than a massive record in a table or a table of >> tables), the &persistent is perfectly "sized." >> >> Am I alone in thinking that this feature should be retained *along side >> of* Broker data stores and potentially documented as recommended for >> simple primitive data persistence? >> >> Thanks! >> >> -- >> ---- >> David Hoelzer >> Chief of Operations >> Enclave Forensics, Inc. >> >> _______________________________________________ >> Zeek mailing list >> zeek at zeek.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/zeek > > _______________________________________________ > Zeek mailing list > zeek at zeek.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/zeek -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190423/1fc9ca4e/attachment-0001.html From mkg at vt.edu Tue Apr 23 13:10:57 2019 From: mkg at vt.edu (Mark Gardner) Date: Tue, 23 Apr 2019 16:10:57 -0400 Subject: [Zeek] Traceback in summary email In-Reply-To: References: Message-ID: On Mon, Apr 22, 2019 at 8:01 PM Jon Siwek wrote: > You could try comparing: > > python -c "import SubnetTree" > > versus: > > PYTHONPATH=/usr/local/bro/lib/broctl python -c "import SubnetTree" > > as a test of whether either version successfully gets imported. 
> This could be the difference: $ python -c "import SubnetTree" $ PYTHONPATH=/usr/local/bro/lib/broctl python -c "import SubnetTree" Traceback (most recent call last): File "", line 1, in File "/usr/local/bro/lib/broctl/SubnetTree.py", line 21, in _SubnetTree = swig_import_helper() File "/usr/local/bro/lib/broctl/SubnetTree.py", line 20, in swig_import_helper return importlib.import_module('_SubnetTree') File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module __import__(name) ImportError: dynamic module does not define init function (init_SubnetTree) > > Suggestions on where to look next? > > Probably would help to get more details/info that could help reproduce > the error. > > What Zeek/Bro version ? > 2.6-167 built from source > What operating system ? > Debian 9 "Stretch" > What Python version and what `swig -version` ? > $ python --version Python 2.7.13 $ python3 --version Python 3.5.3 $ swig -version SWIG Version 3.0.10 Compiled with g++ [x86_64-pc-linux-gnu] Configured options: +pcre The full `./configure` command you used when building Zeek/Bro and its > output may be most helpful. > $ CC=clang CXX=clang++ ./configure The version of clang I am using: $ clang --version clang version 3.8.1-24 (tags/RELEASE_381/final) Target: x86_64-pc-linux-gnu Thread model: posix InstalledDir: /usr/bin A guess is that the configuration failed to detect a valid/consistent > Python and somehow that botched the build/install of pysubnettree. > It looks like the two versions of python are installed. That could be the problem as Python 2.7 is found for the interpreter but 3.5 is found for libraries. The following lines are taken from the configuration output: ... -- Found PythonInterp: /usr/bin/python (found version "2.7.13") ... -- Found PythonLibs: /usr/lib/x86_64-linux-gnu/libpython3.5m.so (found version "3.5.3") -- Found PythonDev: /usr/include/python3.5m To test the theory, I will rebuild explicitly specifying the version of python: $ CC=clang CXX=clang++ ./configure --with-python=/usr/bin/python --with-python-lib=/usr/lib/x86_64-linux-gnu/libpython2.7.so --with-python-inc=/usr/include/python2.7 I'll let you know how it turns out once the build finishes and I am able to test it. Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190423/4986660c/attachment.html -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 6312 bytes Desc: S/MIME Cryptographic Signature Url : http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190423/4986660c/attachment.bin From gary.w.weasel2.civ at mail.mil Tue Apr 23 13:16:43 2019 From: gary.w.weasel2.civ at mail.mil (Weasel, Gary W CIV DISA RE (US)) Date: Tue, 23 Apr 2019 20:16:43 +0000 Subject: [Zeek] [Non-DoD Source] Re: Kafka plugin causes logger to segfault In-Reply-To: References: <0C34D9CA9B9DBB45B1C51871C177B4B291C361F2@UMECHPA68.easf.csd.disa.mil> <286601d4fa07$bcc8ac50$365a04f0$@coopercain.com> Message-ID: <0C34D9CA9B9DBB45B1C51871C177B4B291C362D8@UMECHPA68.easf.csd.disa.mil> That was a typo when copying over into the email. It's a colon in the actual config. I'm running bro 2.6.1. It turns out there was something wrong with the Kafka pipeline, and after we resolved those issues, the logger stopped crashing with the confluent version of librdkafka, but still crashes immediately with the regular version (the version prescribed by zeek packages). 
v/r Gary -----Original Message----- From: Zeolla at GMail.com Sent: Tuesday, April 23, 2019 3:28 PM To: Patrick Cain Cc: Weasel, Gary W CIV DISA RE (US) ; zeek at zeek.org Subject: [Non-DoD Source] Re: [Zeek] Kafka plugin causes logger to segfault All active links contained in this email were disabled. Please verify the identity of the sender, and confirm the authenticity of all links contained within the message prior to copying and pasting the address to a Web browser. ________________________________ 172.16.0.40.9092 doesn't appear to be an IP address to me. Did you mean 172.16.0.40:9092 < Caution-http://172.16.0.40:9092 > ? - Jon Zeolla Zeolla at GMail.Com On Tue, Apr 23, 2019 at 3:16 PM Patrick Cain > wrote: Hi, You don't say what version you're running, but with 2.5 and 2.6 I use these lines along with the kafka config: ### JSON LOGGING @load tuning/json-logs # Set the log separator redef Log::default_scope_sep = "_"; # Set the time in iso format redef LogAscii::json_timestamps = JSON::TS_ISO8601; Your kafka config looks close to mine (I leave the topic_name field blank.) My kafka emitter has been running on Centos 6, Centos 7 and RHEL7 systems for about a year. Can you manually connect to your broker from the zeek box? I have had issues in the past when the logger was happy but other things in the pipe to zookeeper and kafka were unhappy. Pat -----Original Message----- From: zeek-bounces at zeek.org < Caution-mailto:zeek-bounces at zeek.org > > On Behalf Of Weasel, Gary W CIV DISA RE (US) Sent: Monday, April 22, 2019 11:10 AM To: 'zeek at zeek.org < Caution-mailto:zeek at zeek.org > ' > Subject: [Zeek] Kafka plugin causes logger to segfault All, I'm currently at my wits end dealing with the Kafka plugin, I'm having great difficulty stopping it from crashing. When I use the library of librdkafka as prescribed from Caution-https://packages.zeek.org/packages/view/7388aa77-4fb7-11e8-88be-0a645a3f3086 (librdkafka-0.11.5 < Caution-https://packages.zeek.org/packages/view/7388aa77-4fb7-11e8-88be-0a645a3f3086(librdkafka-0.11.5 > ), my logger crashes immediately after startup. When using an alternative version of librdkafka (librdkakfa1-0.11.4_confluent4.1.3) the logger doesn't immediately crash but within a minute of starting it usually does. The stderr.log says the same every time, /run-bro: line 110: Segmentation fault nohup "$mybro" "$@" I have downloaded the most recent version of Caution-https://github.com/apache/metron-bro-plugin-kafka < Caution-https://github.com/apache/metron-bro-plugin-kafka > and still experience this. I am building an RPM (running CentOS) for the Kafka plugin and installing that way, since the box is offline and unable to reach bro-packages. When I tried to use librdkafka-0.11.5 I've also built an RPM for that. The following is my only added configuration @load Apache/Kafka/logs-to-kafka.bro redef Kafka::logs_to_send = set(Conn::LOG); redef Kafka::kafka_conf = table( ["metadata.broker.list"] = "172.16.0.40.9092" ); redef Kafka::topic_name = "bro"; redef Kafka::tag_json = T; The interesting thing to note: the logger does not crash if no logs are being sent (i.e. I comment out the logs_to_send line). The only other plugins I'm running are Bro::AF_Packet and Corelight::CommunityID. Anyone have any insight or doing something different? 
v/r Gary _______________________________________________ Zeek mailing list zeek at zeek.org < Caution-mailto:zeek at zeek.org > Caution-http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/zeek < Caution-http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/zeek > _______________________________________________ Zeek mailing list zeek at zeek.org < Caution-mailto:zeek at zeek.org > Caution-http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/zeek < Caution-http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/zeek > From mkg at vt.edu Tue Apr 23 13:41:32 2019 From: mkg at vt.edu (Mark Gardner) Date: Tue, 23 Apr 2019 16:41:32 -0400 Subject: [Zeek] High capture loss for some workers Message-ID: We are setting up a Zeek cluster consisting of a manager/logger and five sensors. Each node uses the same hardware: - 2.4 GHz AMD Epyc 7351P (16-core, 32-threads) - 256 GB DDR3 ECC RAM - Intel X520-T2 10 Gbps to Arista with 0.5m DAC Configuration: - Arista 7150S hashing on 5-tuple - Gigamon sends to Arista via 4x10 Gbps - Zeek v2.6-167 with AF_Packet - 16 workers per sensor (total: 5x16=80 workers) The capture loss was 50-70% until I remembered to turn off offloading. Now it averages about 0.8%. Except that often 0-4 cores in a 1 hour summary spike at 60-70% capture loss. There doesn't appear to be a pattern on which core suffers the high loss. Searches for how to identify and fix the reason for such large losses have failed to yield any suggestions for debugging the problem. Suggestions? Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190423/c71777d5/attachment-0001.html -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 6312 bytes Desc: S/MIME Cryptographic Signature Url : http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190423/c71777d5/attachment-0001.bin From zeolla at gmail.com Tue Apr 23 14:46:30 2019 From: zeolla at gmail.com (Zeolla@GMail.com) Date: Tue, 23 Apr 2019 17:46:30 -0400 Subject: [Zeek] [Non-DoD Source] Re: Kafka plugin causes logger to segfault In-Reply-To: <0C34D9CA9B9DBB45B1C51871C177B4B291C362D8@UMECHPA68.easf.csd.disa.mil> References: <0C34D9CA9B9DBB45B1C51871C177B4B291C361F2@UMECHPA68.easf.csd.disa.mil> <286601d4fa07$bcc8ac50$365a04f0$@coopercain.com> <0C34D9CA9B9DBB45B1C51871C177B4B291C362D8@UMECHPA68.easf.csd.disa.mil> Message-ID: Are you able to turn debug on[1] and share the details? If you need to bring this off list for sensitivity reasons feel free to contact me directly. 1: https://github.com/apache/metron-bro-plugin-kafka/blob/master/README.md#debug Jon Zeolla On Tue, Apr 23, 2019, 4:18 PM Weasel, Gary W CIV DISA RE (US) < gary.w.weasel2.civ at mail.mil> wrote: > That was a typo when copying over into the email. It's a colon in the > actual config. > > I'm running bro 2.6.1. > > It turns out there was something wrong with the Kafka pipeline, and after > we resolved those issues, the logger stopped crashing with the confluent > version of librdkafka, but still crashes immediately with the regular > version (the version prescribed by zeek packages). > > v/r > Gary > > -----Original Message----- > From: Zeolla at GMail.com > Sent: Tuesday, April 23, 2019 3:28 PM > To: Patrick Cain > Cc: Weasel, Gary W CIV DISA RE (US) ; > zeek at zeek.org > Subject: [Non-DoD Source] Re: [Zeek] Kafka plugin causes logger to segfault > > All active links contained in this email were disabled. 
Please verify the > identity of the sender, and confirm the authenticity of all links contained > within the message prior to copying and pasting the address to a Web > browser. > > > ________________________________ > > > > 172.16.0.40.9092 doesn't appear to be an IP address to me. Did you mean > 172.16.0.40:9092 < Caution-http://172.16.0.40:9092 > ? > > > - Jon Zeolla > Zeolla at GMail.Com > > > On Tue, Apr 23, 2019 at 3:16 PM Patrick Cain Caution-mailto:pcain at coopercain.com > > wrote: > > > Hi, > > You don't say what version you're running, but with 2.5 and 2.6 I > use these > lines along with the kafka config: > > ### JSON LOGGING > @load tuning/json-logs > # Set the log separator > redef Log::default_scope_sep = "_"; > # Set the time in iso format > redef LogAscii::json_timestamps = JSON::TS_ISO8601; > > Your kafka config looks close to mine (I leave the topic_name > field blank.) > My kafka emitter has been running on Centos 6, Centos 7 and RHEL7 > systems > for about a year. > Can you manually connect to your broker from the zeek box? I have > had > issues in the past when the logger was happy but other things in > the pipe to > zookeeper and kafka were unhappy. > > Pat > -----Original Message----- > From: zeek-bounces at zeek.org < Caution-mailto:zeek-bounces at zeek.org > > > On > Behalf Of Weasel, > Gary W CIV DISA RE (US) > Sent: Monday, April 22, 2019 11:10 AM > To: 'zeek at zeek.org < Caution-mailto:zeek at zeek.org > ' < > zeek at zeek.org < Caution-mailto:zeek at zeek.org > > > Subject: [Zeek] Kafka plugin causes logger to segfault > > All, > > I'm currently at my wits end dealing with the Kafka plugin, I'm > having great > difficulty stopping it from crashing. > > When I use the library of librdkafka as prescribed from > Caution- > https://packages.zeek.org/packages/view/7388aa77-4fb7-11e8-88be-0a645a3f3086 > (librdkafka-0.11.5 < Caution- > https://packages.zeek.org/packages/view/7388aa77-4fb7-11e8-88be-0a645a3f3086(librdkafka-0.11.5 > > ), my logger crashes immediately after startup. When > using an alternative version of librdkafka > (librdkakfa1-0.11.4_confluent4.1.3) the logger doesn't immediately > crash but > within a minute of starting it usually does. > > The stderr.log says the same every time, /run-bro: line 110: > Segmentation fault nohup "$mybro" "$@" > > I have downloaded the most recent version of > Caution-https://github.com/apache/metron-bro-plugin-kafka < > Caution-https://github.com/apache/metron-bro-plugin-kafka > and still > experience this. > > I am building an RPM (running CentOS) for the Kafka plugin and > installing > that way, since the box is offline and unable to reach > bro-packages. When I > tried to use librdkafka-0.11.5 I've also built an RPM for that. > > The following is my only added configuration > > @load Apache/Kafka/logs-to-kafka.bro > redef Kafka::logs_to_send = set(Conn::LOG); redef > Kafka::kafka_conf = table( > ["metadata.broker.list"] = "172.16.0.40.9092" > ); > redef Kafka::topic_name = "bro"; > redef Kafka::tag_json = T; > > The interesting thing to note: the logger does not crash if no > logs are > being sent (i.e. I comment out the logs_to_send line). > > The only other plugins I'm running are Bro::AF_Packet and > Corelight::CommunityID. > > Anyone have any insight or doing something different? 
> > v/r > Gary > > > _______________________________________________ > Zeek mailing list > zeek at zeek.org < Caution-mailto:zeek at zeek.org > > Caution-http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/zeek < > Caution-http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/zeek > > > _______________________________________________ > Zeek mailing list > zeek at zeek.org < Caution-mailto:zeek at zeek.org > > Caution-http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/zeek < > Caution-http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/zeek > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190423/45891ad3/attachment.html From justin at corelight.com Tue Apr 23 15:47:30 2019 From: justin at corelight.com (Justin Azoff) Date: Tue, 23 Apr 2019 18:47:30 -0400 Subject: [Zeek] Traceback in summary email In-Reply-To: References: Message-ID: On Mon, Apr 22, 2019 at 4:24 PM Mark Gardner wrote: > > I am getting a traceback in the connection summary emails rather than useful information. I didn't have the Python SubnetTree package installed when I built, installed, and first started Zeek but have since installed it on the management/logger and all sensors. I restarted Zeek but am still seeing the traceback. Suggestions on where to look next? Was it the same traceback before? The broctl bundles subnettree, so you should have already had it in the form of these 2 files: /usr/local/bro/lib/broctl/_SubnetTree.so /usr/local/bro/lib/broctl/SubnetTree.py now it looks like you have 2 incompatible versions of it installed that are conflicting with each other. -- Justin From justin at corelight.com Tue Apr 23 15:55:57 2019 From: justin at corelight.com (Justin Azoff) Date: Tue, 23 Apr 2019 18:55:57 -0400 Subject: [Zeek] High capture loss for some workers In-Reply-To: References: Message-ID: Once you have a high capture loss value you need to switch from focusing on that and look at the missed_bytes column in the conn.log. The capture loss value is like a check engine light. It only tells you that something is wrong, but the conn.log tells you what is wrong. Look for entries in the conn.log where missed_bytes is non zero, or even start with looking for any records where it is > 100000. You may find that you simply have a few connections that are completely broken causing the capture loss to be skewed towards that 60% value. A much better metric that I like to use is 'percent of connections with loss'. It's a completely different problem if you have 40% overall capture loss but only .01% of connections with loss, compared to 40% overall capture loss with loss on 20% of connections. If you install bro-doctor from bro-pkg that will do a lot of analysis like this for you. I'd also run 1 less worker on each of those boxes. With 16 workers and 16 cores, you're not leaving any spare cores to dedicate to cron jobs and other background tasks. On Tue, Apr 23, 2019 at 4:44 PM Mark Gardner wrote: > > We are setting up a Zeek cluster consisting of a manager/logger and five sensors. Each node uses the same hardware: > - 2.4 GHz AMD Epyc 7351P (16-core, 32-threads) > - 256 GB DDR3 ECC RAM > - Intel X520-T2 10 Gbps to Arista with 0.5m DAC > Configuration: > - Arista 7150S hashing on 5-tuple > - Gigamon sends to Arista via 4x10 Gbps > - Zeek v2.6-167 with AF_Packet > - 16 workers per sensor (total: 5x16=80 workers) > > The capture loss was 50-70% until I remembered to turn off offloading. Now it averages about 0.8%. 
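To make Justin's "percent of connections with loss" metric concrete, a rough runtime sketch (the counter names are invented; missed_bytes is the standard Conn::Info field):

global conns_total: count = 0;
global conns_with_loss: count = 0;

event connection_state_remove(c: connection) &priority=-10
    {
    ++conns_total;
    if ( c?$conn && c$conn$missed_bytes > 0 )
        ++conns_with_loss;
    }

event bro_done()
    {
    print fmt("connections with loss: %d of %d", conns_with_loss, conns_total);
    }

The same ratio can of course be computed after the fact from conn.log. Mark's report continues: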
> Except that often 0-4 cores in a 1 hour summary spike at 60-70% capture
> loss. There doesn't appear to be a pattern on which core suffers the high
> loss. Searches for how to identify and fix the reason for such large
> losses have failed to yield any suggestions for debugging the problem.
> Suggestions?
>
> Mark
>
> _______________________________________________
> Zeek mailing list
> zeek at zeek.org
> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/zeek

--
Justin

From Ananditha.Raghunath at ll.mit.edu  Wed Apr 24 06:48:04 2019
From: Ananditha.Raghunath at ll.mit.edu (Raghunath, Ananditha - 0557 - MITLL)
Date: Wed, 24 Apr 2019 13:48:04 +0000
Subject: [Zeek] Extracting packets from a particular connection
Message-ID: <7C26AEE4-FA23-4310-8925-1C2FBBB31C41 at contoso.com>

Hi,

I was hoping to understand how Zeek aggregates packets by connection. Is there any documentation that summarizes the approach? Is there a way to extract all the packets that correspond to a particular connection?

Thank you,

Ananditha Raghunath - 0557
Assistant Staff
Cyber Operations and Analysis Technology
MIT Lincoln Laboratory
ananditha.raghunath at ll.mit.edu | 781-981-9035

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190424/8d0b536a/attachment-0001.html

From jmellander at lbl.gov  Wed Apr 24 09:33:10 2019
From: jmellander at lbl.gov (Jim Mellander)
Date: Wed, 24 Apr 2019 09:33:10 -0700
Subject: [Zeek] Deprecation of &persistent
In-Reply-To:
References: <4cb31782-7a2c-5f5d-759a-d6f9f9011c07 at enclaveforensics.com> <0100016a3bad66c3-1c041790-cada-455f-afcc-dc4c501c76fa-000000 at email.amazonses.com>
Message-ID:

On Tue, Apr 23, 2019 at 1:13 PM Mike Dopheide wrote:
>
> I imagine you could write some boilerplate code that just fires off a
> scheduled task via bro_init to update the store, all you'd need to do is
> paste it in and update the variable name for each policy.
>

Some sort of macro system for zeek policies could be useful, to ease use of generic code as suggested by Dop's comment. Anyone else ever had that thought?

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190424/56e1a2c1/attachment.html

From manju.atri87 at gmail.com  Sat Apr 27 03:47:50 2019
From: manju.atri87 at gmail.com (Manju Lalwani)
Date: Sat, 27 Apr 2019 16:17:50 +0530
Subject: [Zeek] Help with zeek script
Message-ID:

Hi Team,

I am working on a Zeek script and would like to understand how I can make Zeek look only at the first ten packets in a TCP session. The first ten packets are enough to fingerprint the traffic I am trying to identify, so I want to ensure my script also looks at only the first 10 packets to save processing time.

The communication is as follows: there is the initial 3-way handshake, and then there are 7 packets with variable lengths on a non-default destination port/service. So I had to use the tcp_packet event in my script. Is there a better way of doing it? Using tcp_packet makes my script check every TCP packet, increasing the load on my Zeek system.

Please do let me know if you have any suggestions for me on this. Looking forward to your response.

Thanks,
Manju Lalwani
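Two of the questions above lend themselves to short sketches. For Ananditha's last question -- pulling out all packets of one connection -- one option is to key on the connection UID and dump matching packets to a pcap. The UID and filename are placeholders, and new_packet/dump_current_packet are built-ins worth confirming against your Zeek version; note that new_packet fires for every packet once a handler exists, so this is best done offline against a trace:

const target_uid = "CHhAvVGS1DHFjwGM9" &redef;  # placeholder: a UID from conn.log

event new_packet(c: connection, p: pkt_hdr)
    {
    if ( c$uid == target_uid )
        dump_current_packet("one-connection.pcap");
    }

And for Manju's first-ten-packets fingerprinting, a per-connection counter lets the tcp_packet handler bail out cheaply after packet ten (the table name and expiry are invented):

global npkts: table[string] of count &default=0 &read_expire 5 min;

event tcp_packet(c: connection, is_orig: bool, flags: string, seq: count,
                 ack: count, len: count, payload: string)
    {
    if ( ++npkts[c$uid] > 10 )
        return;

    # ... fingerprint logic over the first ten packets only ...
    }

-------------- next part --------------
An HTML attachment was scrubbed...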
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190427/006126c5/attachment.html From bill.de.ping at gmail.com Sun Apr 28 00:04:07 2019 From: bill.de.ping at gmail.com (william de ping) Date: Sun, 28 Apr 2019 10:04:07 +0300 Subject: [Zeek] - extracted filename with md5 Message-ID: Hi everyone, I want to extract files and have their names include their md5 hash. The problem is that the md5 hashing happens on file_hash event while file extraction occurs on former events such as file_new or file_over_new_connection. Any ideas on how to accomplish this? Thanks B -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190428/9b8599f7/attachment.html From tscheponik at gmail.com Sun Apr 28 11:32:32 2019 From: tscheponik at gmail.com (Woot4moo) Date: Sun, 28 Apr 2019 14:32:32 -0400 Subject: [Zeek] Number of CPU cores for 100Gbps Message-ID: My understanding is that 4,000+ CPU cores would be necessary to support this throughput. In the recent meeting from CERN I recall seeing someone describe 200Gbps, which would imply 8,000+ CPU cores. Is this accurate, or am I doing a conversion incorrectly? I am basing this purely on this quote, from https://docs.zeek.org/en/stable/cluster/ ?The rule of thumb we have followed recently is to allocate approximately 1 core for every 250Mbps of traffic that is being analyzed. However, this estimate could be extremely traffic mix-specific. It has generally worked for mixed traffic with many users and servers. For example, if your traffic peaks around 2Gbps (combined) and you want to handle traffic at peak load, you may want to have 8 cores available (2048 / 250 == 8.2). ? -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190428/44fcd01c/attachment.html From michalpurzynski1 at gmail.com Sun Apr 28 15:29:16 2019 From: michalpurzynski1 at gmail.com (=?UTF-8?B?TWljaGHFgiBQdXJ6ecWEc2tp?=) Date: Sun, 28 Apr 2019 15:29:16 -0700 Subject: [Zeek] Number of CPU cores for 100Gbps In-Reply-To: References: Message-ID: These rules aren't current anymore and frankly, have never been accurate. Your Zeek speed depends on the traffic you have, if you have some elephant flows (and how you deal with them), scripts you run, etc. I remember pushing between 5-10Gbit/sec through a server with 24 cores (not threads), with room to spare. You will also need memory, and depending on scripts you intend to write, that might be quite a lot. We run with 192GB / server. Do you have 100Gbit of traffic or 100Gbit interfaces? Either way, you're gonna build yourself a cluster with a packet broker in front of it. Arista works well, other people use different brands, depending on your needs and your budget. Give those tuning guides I wrote with Suricata developers a read, while on it, they apply to Zeek as well. Of course Suricata can process way more traffic per core, than Zeek, because the processing it does is way simpler. https://github.com/pevma/SEPTun https://github.com/pevma/SEPTun-Mark-II On Sun, Apr 28, 2019 at 11:35 AM Woot4moo wrote: > My understanding is that 4,000+ CPU cores would be necessary to support > this throughput. In the recent meeting from CERN I recall seeing someone > describe 200Gbps, which would imply 8,000+ CPU cores. Is this accurate, or > am I doing a conversion incorrectly? 
> > I am basing this purely on this quote, from > > https://docs.zeek.org/en/stable/cluster/ > > ?The rule of thumb we have followed recently is to allocate approximately > 1 core for every 250Mbps of traffic that is being analyzed. However, this > estimate could be extremely traffic mix-specific. It has generally worked > for mixed traffic with many users and servers. For example, if your traffic > peaks around 2Gbps (combined) and you want to handle traffic at peak load, > you may want to have 8 cores available (2048 / 250 == 8.2). ? > _______________________________________________ > Zeek mailing list > zeek at zeek.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/zeek -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190428/9f489d7e/attachment.html From tscheponik at gmail.com Sun Apr 28 15:41:58 2019 From: tscheponik at gmail.com (Woot4moo) Date: Sun, 28 Apr 2019 18:41:58 -0400 Subject: [Zeek] Number of CPU cores for 100Gbps In-Reply-To: References: Message-ID: Thanks for the details. I am aware of MarkII and am reading through it. How as a community can we update that clustering documentation? If it?s not accurate it could very easily turn people away On Sun, Apr 28, 2019 at 6:29 PM Micha? Purzy?ski wrote: > These rules aren't current anymore and frankly, have never been accurate. > > Your Zeek speed depends on the traffic you have, if you have some elephant > flows (and how you deal with them), scripts you run, etc. I remember > pushing between 5-10Gbit/sec through a server with 24 cores (not threads), > with room to spare. > > You will also need memory, and depending on scripts you intend to write, > that might be quite a lot. We run with 192GB / server. > > Do you have 100Gbit of traffic or 100Gbit interfaces? > > Either way, you're gonna build yourself a cluster with a packet broker in > front of it. Arista works well, other people use different brands, > depending on your needs and your budget. > > Give those tuning guides I wrote with Suricata developers a read, while on > it, they apply to Zeek as well. Of course Suricata can process way more > traffic per core, than Zeek, because the processing it does is way simpler. > > https://github.com/pevma/SEPTun > https://github.com/pevma/SEPTun-Mark-II > > > On Sun, Apr 28, 2019 at 11:35 AM Woot4moo wrote: > >> My understanding is that 4,000+ CPU cores would be necessary to support >> this throughput. In the recent meeting from CERN I recall seeing someone >> describe 200Gbps, which would imply 8,000+ CPU cores. Is this accurate, or >> am I doing a conversion incorrectly? >> >> I am basing this purely on this quote, from >> >> https://docs.zeek.org/en/stable/cluster/ >> >> ?The rule of thumb we have followed recently is to allocate >> approximately 1 core for every 250Mbps of traffic that is being analyzed. >> However, this estimate could be extremely traffic mix-specific. It has >> generally worked for mixed traffic with many users and servers. For >> example, if your traffic peaks around 2Gbps (combined) and you want to >> handle traffic at peak load, you may want to have 8 cores available (2048 / >> 250 == 8.2). ? >> > _______________________________________________ >> Zeek mailing list >> zeek at zeek.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/zeek > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190428/7cc25178/attachment.html From anthony.kasza at gmail.com Mon Apr 29 05:55:23 2019 From: anthony.kasza at gmail.com (anthony kasza) Date: Mon, 29 Apr 2019 06:55:23 -0600 Subject: [Zeek] Number of CPU cores for 100Gbps In-Reply-To: References: Message-ID: I agree. If you keep notes as you build your cluster please share them. Updating cluster docs may be another thing to add here. https://blog.zeek.org/2019/04/google-season-of-docs.html -AK On Sun, Apr 28, 2019, 16:51 Woot4moo wrote: > Thanks for the details. I am aware of MarkII and am reading through it. > > How as a community can we update that clustering documentation? If it?s > not accurate it could very easily turn people away > > On Sun, Apr 28, 2019 at 6:29 PM Micha? Purzy?ski < > michalpurzynski1 at gmail.com> wrote: > >> These rules aren't current anymore and frankly, have never been accurate. >> >> Your Zeek speed depends on the traffic you have, if you have some >> elephant flows (and how you deal with them), scripts you run, etc. I >> remember pushing between 5-10Gbit/sec through a server with 24 cores (not >> threads), with room to spare. >> >> You will also need memory, and depending on scripts you intend to write, >> that might be quite a lot. We run with 192GB / server. >> >> Do you have 100Gbit of traffic or 100Gbit interfaces? >> >> Either way, you're gonna build yourself a cluster with a packet broker in >> front of it. Arista works well, other people use different brands, >> depending on your needs and your budget. >> >> Give those tuning guides I wrote with Suricata developers a read, while >> on it, they apply to Zeek as well. Of course Suricata can process way more >> traffic per core, than Zeek, because the processing it does is way simpler. >> >> https://github.com/pevma/SEPTun >> https://github.com/pevma/SEPTun-Mark-II >> >> >> On Sun, Apr 28, 2019 at 11:35 AM Woot4moo wrote: >> >>> My understanding is that 4,000+ CPU cores would be necessary to support >>> this throughput. In the recent meeting from CERN I recall seeing someone >>> describe 200Gbps, which would imply 8,000+ CPU cores. Is this accurate, or >>> am I doing a conversion incorrectly? >>> >>> I am basing this purely on this quote, from >>> >>> https://docs.zeek.org/en/stable/cluster/ >>> >>> ?The rule of thumb we have followed recently is to allocate >>> approximately 1 core for every 250Mbps of traffic that is being analyzed. >>> However, this estimate could be extremely traffic mix-specific. It has >>> generally worked for mixed traffic with many users and servers. For >>> example, if your traffic peaks around 2Gbps (combined) and you want to >>> handle traffic at peak load, you may want to have 8 cores available (2048 / >>> 250 == 8.2). ? >>> >> _______________________________________________ >>> Zeek mailing list >>> zeek at zeek.org >>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/zeek >> >> _______________________________________________ > Zeek mailing list > zeek at zeek.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/zeek -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190429/445231f3/attachment-0001.html

From liviu.valsan at cern.ch  Mon Apr 29 06:41:22 2019
From: liviu.valsan at cern.ch (Liviu Valsan)
Date: Mon, 29 Apr 2019 13:41:22 +0000
Subject: [Zeek] - extracted filename with md5
In-Reply-To:
References:
Message-ID: <1e508739f79552cf1d88b676739e306f01299956.camel at cern.ch>

Hi,

Below you can find a script that does file extraction and renames files to include the MD5 hash of the file.

I'm using the file_sniff event to extract files and at this point I save them using the timestamp and the file ID. Extracted files are saved in a top level directory.

Later on, in the file_state_remove event (at which point the file's MD5 should be available) I rename the file using the MD5 hash, and retaining the file's extension. I'm saying that in the file_state_remove event the file's MD5 should be available, but it's not always the case. One possible situation in which the MD5 is missing is when Zeek is missing some bytes. Renamed files are being moved in a sub-directory using the date when the file was seen.

The script below allows you to customise the MIME types of the files that you want to extract and to restrict it to files downloaded by one given IP address. Feel free to customise it to fit your needs. The location where files are extracted can be customised as well.

Cheers,
Liviu

# MIME-types to be extracted
const extracted_mime_types = set(
    # Images:
    "image/jpeg",
    "image/png"
);

# Client for which to extract files
const target_client = 10.0.0.1 &redef;

redef FileExtract::prefix = "/data/zeek/extracted_files/";

export {
    ## Path where extracted files are saved
    const file_extract_path: string = "/data/zeek/extracted_files/" &redef;
}

# File extraction
event file_sniff(f: fa_file, meta: fa_metadata)
{
    # Check the right mime-type to extract.
    if ( ! meta?$mime_type || meta$mime_type !in extracted_mime_types )
        return;

    if ( target_client !in f$info$rx_hosts )
        return;

    for ( i in meta$mime_types )
    {
        if ( meta$mime_types[i]$mime in extracted_mime_types )
        {
            local fext = split_string(meta$mime_types[i]$mime, /\//)[1];
            local ntime = fmt("%D", network_time());
            local fname = fmt("%s_%s.%s", ntime, f$id, fext);
            Files::add_analyzer(f, Files::ANALYZER_EXTRACT, [$extract_filename=fname]);
            break;
        }
    }
}

event file_state_remove(f: fa_file)
{
    if ( !f$info?$extracted || !f$info?$md5 || FileExtract::prefix == "" )
        return;

    local orig = f$info$extracted;
    local split_orig = split_string(f$info$extracted, /\./);
    local extension = split_orig[|split_orig|-1];
    local ntime = fmt("%D", network_time());
    local ndate = sub_bytes(ntime, 1, 10);
    local dest_dir = fmt("%s%s", FileExtract::prefix, ndate);
    mkdir(dest_dir);
    local dest = fmt("%s/%s.%s", dest_dir, f$info$md5, extension);
    local cmd = fmt("mv %s/%s %s", FileExtract::prefix, orig, dest);
    when ( local result = Exec::run([$cmd=cmd]) )
    {
    }
    f$info$extracted = dest;
}

On Sun, 2019-04-28 at 10:04 +0300, william de ping wrote:

Hi everyone,

I want to extract files and have their names include their md5 hash. The problem is that the md5 hashing happens on file_hash event while file extraction occurs on former events such as file_new or file_over_new_connection.

Any ideas on how to accomplish this?

Thanks
B

_______________________________________________
Zeek mailing list
zeek at zeek.org
http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/zeek

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190429/22b51522/attachment.html From justin at corelight.com Mon Apr 29 07:20:02 2019 From: justin at corelight.com (Justin Azoff) Date: Mon, 29 Apr 2019 10:20:02 -0400 Subject: [Zeek] Number of CPU cores for 100Gbps In-Reply-To: References: Message-ID: The guidelines are bit off these days, a single core does more work than it used to. However, your math is off by a factor of 10. Sticking with the rule that one core can do 250mbps, then you need 4 cores to handle 1gbps, 40 to handle 10 gbps, and 400 to handle 100gbps. Not 4000. On Sun, Apr 28, 2019 at 2:35 PM Woot4moo wrote: > > My understanding is that 4,000+ CPU cores would be necessary to support this throughput. In the recent meeting from CERN I recall seeing someone describe 200Gbps, which would imply 8,000+ CPU cores. Is this accurate, or am I doing a conversion incorrectly? > > I am basing this purely on this quote, from > > https://docs.zeek.org/en/stable/cluster/ > > ?The rule of thumb we have followed recently is to allocate approximately 1 core for every 250Mbps of traffic that is being analyzed. However, this estimate could be extremely traffic mix-specific. It has generally worked for mixed traffic with many users and servers. For example, if your traffic peaks around 2Gbps (combined) and you want to handle traffic at peak load, you may want to have 8 cores available (2048 / 250 == 8.2). ? > _______________________________________________ > Zeek mailing list > zeek at zeek.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/zeek -- Justin From tscheponik at gmail.com Mon Apr 29 07:21:53 2019 From: tscheponik at gmail.com (Woot4moo) Date: Mon, 29 Apr 2019 10:21:53 -0400 Subject: [Zeek] Number of CPU cores for 100Gbps In-Reply-To: References: Message-ID: Whoops, thanks for fixing the arithmetic :) . Do we have to details on to how modern CPUs benchmark? On Mon, Apr 29, 2019 at 10:20 AM Justin Azoff wrote: > The guidelines are bit off these days, a single core does more work > than it used to. However, your math is off by a factor of 10. > > Sticking with the rule that one core can do 250mbps, then you need 4 > cores to handle 1gbps, 40 to handle 10 gbps, and 400 to handle > 100gbps. Not 4000. > > On Sun, Apr 28, 2019 at 2:35 PM Woot4moo wrote: > > > > My understanding is that 4,000+ CPU cores would be necessary to support > this throughput. In the recent meeting from CERN I recall seeing someone > describe 200Gbps, which would imply 8,000+ CPU cores. Is this accurate, or > am I doing a conversion incorrectly? > > > > I am basing this purely on this quote, from > > > > https://docs.zeek.org/en/stable/cluster/ > > > > ?The rule of thumb we have followed recently is to allocate > approximately 1 core for every 250Mbps of traffic that is being analyzed. > However, this estimate could be extremely traffic mix-specific. It has > generally worked for mixed traffic with many users and servers. For > example, if your traffic peaks around 2Gbps (combined) and you want to > handle traffic at peak load, you may want to have 8 cores available (2048 / > 250 == 8.2). ? > > _______________________________________________ > > Zeek mailing list > > zeek at zeek.org > > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/zeek > > > > -- > Justin > -------------- next part -------------- An HTML attachment was scrubbed... 
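Spelling out Justin's correction with the rule of thumb from the quoted docs:

  100 Gbps = 100,000 Mbps
  100,000 Mbps / 250 Mbps per core = 400 cores

The 4,000+ figure only appears if 100 Gbps is read as 1,000,000 Mbps -- the factor-of-10 slip Justin points out.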
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190429/9d2c03fb/attachment-0001.html From akgraner at corelight.com Mon Apr 29 08:42:43 2019 From: akgraner at corelight.com (Amber Graner) Date: Mon, 29 Apr 2019 10:42:43 -0500 Subject: [Zeek] Newsletter Message-ID: Hi all, We're going to be rolling out a newsletter. Do you have any zeek related news you'd like me to considering adding in? Do you know of any Zeek related jobs? If you have any topics you'd like to suggest please let me know by sending to news at zeek.org. I look forward to hearing from you! With gratitude, ~Amber -- *Amber Graner* Director of Community Corelight, Inc 828.582.9469 * Ask me about how you can participate in the Zeek (formerly Bro) community. * Remember - ZEEK AND YOU SHALL FIND!! -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190429/04889140/attachment.html From jlay at slave-tothe-box.net Mon Apr 29 08:48:41 2019 From: jlay at slave-tothe-box.net (James Lay) Date: Mon, 29 Apr 2019 09:48:41 -0600 Subject: [Zeek] Newsletter In-Reply-To: References: Message-ID: Major changes upcoming (particularly ones that may break compatibility) Updated plugins (new ones as well) ZeekWeek info On 2019-04-29 09:42, Amber Graner wrote: > Hi all, > > We're going to be rolling out a newsletter. > > Do you have any zeek related news you'd like me to considering adding > in? > > Do you know of any Zeek related jobs? > > If you have any topics you'd like to suggest please let me know by > sending to news at zeek.org. > > I look forward to hearing from you! > > With gratitude, > ~Amber > > -- > > AMBER GRANER > Director of Community > Corelight, Inc > > 828.582.9469 > > * Ask me about how you can participate in the Zeek (formerly Bro) > community. > * Remember - ZEEK AND YOU SHALL FIND!! > _______________________________________________ > Zeek mailing list > zeek at zeek.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/zeek From pkelley at hyperionavenue.com Mon Apr 29 09:07:51 2019 From: pkelley at hyperionavenue.com (Patrick Kelley) Date: Mon, 29 Apr 2019 12:07:51 -0400 Subject: [Zeek] Newsletter In-Reply-To: References: Message-ID: <05E7B5DB-845C-4ABB-BC13-138869353D06@hyperionavenue.com> Agree with James. The change to local.zeek when inoperability was mentioned in April 1st was disastrous as it broke a ton of things for us. Notification of major changes to protocol analyzers that require full detection rewrites would be nice, as well. Patrick Kelley, CISSP, C|EH, ITIL CTO patrick.kelley at criticalpathsecurity.com > On Apr 29, 2019, at 11:48 AM, James Lay wrote: > > Major changes upcoming (particularly ones that may break compatibility) > Updated plugins (new ones as well) > ZeekWeek info > > >> On 2019-04-29 09:42, Amber Graner wrote: >> Hi all, >> >> We're going to be rolling out a newsletter. >> >> Do you have any zeek related news you'd like me to considering adding >> in? >> >> Do you know of any Zeek related jobs? >> >> If you have any topics you'd like to suggest please let me know by >> sending to news at zeek.org. >> >> I look forward to hearing from you! >> >> With gratitude, >> ~Amber >> >> -- >> >> AMBER GRANER >> Director of Community >> Corelight, Inc >> >> 828.582.9469 >> >> * Ask me about how you can participate in the Zeek (formerly Bro) >> community. >> * Remember - ZEEK AND YOU SHALL FIND!! 
>> _______________________________________________
>> Zeek mailing list
>> zeek at zeek.org
>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/zeek
> _______________________________________________
> Zeek mailing list
> zeek at zeek.org
> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/zeek
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190429/77a88899/attachment.html

From x.faith at gmail.com Mon Apr 29 13:57:58 2019
From: x.faith at gmail.com (David Decker)
Date: Mon, 29 Apr 2019 16:57:58 -0400
Subject: [Zeek] Bro -r using multiple PCAP
Message-ID:

Looking to see if anyone has created a script, or if there is an
argument, to process multiple PCAPs with bro -r.

I have it set up to output JSON currently and changed from epoch time to
normal date/time output, but that handles one PCAP at a time, and I will
have multiple.

Looking at either a batch script or maybe Python, but wanted to see if
anyone has done this before. (Re-ingest multiple old PCAP files.)

Dave
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190429/a0ed15f8/attachment.html

From justin at corelight.com Mon Apr 29 15:15:05 2019
From: justin at corelight.com (Justin Azoff)
Date: Mon, 29 Apr 2019 18:15:05 -0400
Subject: [Zeek] Bro -r using multiple PCAP
In-Reply-To:
References:
Message-ID:

You can specify -r multiple times. Something like

import subprocess
import glob

cmd = ["bro"]

for f in glob.glob("*.pcap"):
    cmd.extend(["-r", f])

subprocess.call(cmd)

will work to a point. Eventually you will hit ARG_MAX with enough files,
but for a few dozen this works fine. For more, something like
https://github.com/assafmo/joincap could be better.

I outlined a good way to do this as an input plugin a while back as well:
http://mailman.icsi.berkeley.edu/pipermail/zeek/2017-July/012355.html

On Mon, Apr 29, 2019 at 5:06 PM David Decker wrote:
>
> Looking to see if anyone has created a script, or if there is an
> argument, to process multiple PCAPs with bro -r.
>
> I have it set up to output JSON currently and changed from epoch time
> to normal date/time output, but that handles one PCAP at a time, and I
> will have multiple.
>
> Looking at either a batch script or maybe Python, but wanted to see if
> anyone has done this before.
> (Re-ingest multiple old PCAP files.)
>
> Dave
> _______________________________________________
> Zeek mailing list
> zeek at zeek.org
> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/zeek

--
Justin
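(If the file list is long enough to approach ARG_MAX, one workaround is
to chunk the invocations. A sketch building on the snippet above, with
two assumptions worth flagging: the chunk size of 50 is arbitrary, and
each bro run is independent, so connections spanning two chunks will be
split; merging the captures with joincap avoids that.)

import glob
import subprocess

CHUNK = 50  # files per bro invocation; keeps argv well below ARG_MAX

pcaps = sorted(glob.glob("*.pcap"))
for i in range(0, len(pcaps), CHUNK):
    cmd = ["bro"]
    for f in pcaps[i:i + CHUNK]:
        cmd.extend(["-r", f])
    # each run rewrites its logs in the current directory, so move or
    # redirect them between chunks if you need to keep all of them
    subprocess.call(cmd)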
From robin at corelight.com Mon Apr 29 15:37:44 2019
From: robin at corelight.com (Robin Sommer)
Date: Mon, 29 Apr 2019 15:37:44 -0700
Subject: [Zeek] Newsletter
In-Reply-To: <05E7B5DB-845C-4ABB-BC13-138869353D06@hyperionavenue.com>
References: <05E7B5DB-845C-4ABB-BC13-138869353D06@hyperionavenue.com>
Message-ID: <20190429223744.GA91771@corelight.com>

On Mon, Apr 29, 2019 at 12:07 -0400, Patrick Kelley wrote:

> The change to local.zeek that was mentioned on April 1st was
> disastrous, as it broke a ton of things for us.

I suppose you are referring to the current development version from
git. Mind elaborating what issues you encountered?

Robin

--
Robin Sommer * Corelight, Inc. * robin at corelight.com * www.corelight.com

From pkelley at hyperionavenue.com Mon Apr 29 16:20:54 2019
From: pkelley at hyperionavenue.com (Patrick Kelley)
Date: Mon, 29 Apr 2019 19:20:54 -0400
Subject: [Zeek] Newsletter
In-Reply-To: <20190429223744.GA91771@corelight.com>
References: <05E7B5DB-845C-4ABB-BC13-138869353D06@hyperionavenue.com> <20190429223744.GA91771@corelight.com>
Message-ID:

Negative. Pulled from master. Something along the lines of "Can't find
local.zeek". Copied local.bro, but couldn't load any scripts or folders
with __load__.bro. Rolled back to an earlier version and moved on.

A bug was reported by another party and I believe was recently merged.
If I have time, I'll get more info.

Patrick Kelley, CISSP, C|EH, ITIL
CTO
patrick.kelley at criticalpathsecurity.com

> On Apr 29, 2019, at 6:37 PM, Robin Sommer wrote:
>
> On Mon, Apr 29, 2019 at 12:07 -0400, Patrick Kelley wrote:
>
>> The change to local.zeek that was mentioned on April 1st was
>> disastrous, as it broke a ton of things for us.
>
> I suppose you are referring to the current development version from
> git. Mind elaborating what issues you encountered?
>
> Robin
>
> --
> Robin Sommer * Corelight, Inc. * robin at corelight.com * www.corelight.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190429/b5fb929f/attachment.html

From pkelley at hyperionavenue.com Tue Apr 30 19:31:57 2019
From: pkelley at hyperionavenue.com (Patrick Kelley)
Date: Tue, 30 Apr 2019 22:31:57 -0400
Subject: [Zeek] Bro -r using multiple PCAP
In-Reply-To:
References:
Message-ID:

I run the following in a local folder for several ingest types (PREDICT,
malware-traffic-analysis, etc.). Logstash, etc. does the rest. Hope it
helps.

Additionally, I have a watcher process written in Python that watches
for pcaps dropped into a directory.

## Replay all pcaps in bro
## Patrick Kelley
for i in `ls | sort`; do
    bro -r "$i"
done

On Mon, Apr 29, 2019 at 6:18 PM Justin Azoff wrote:

> You can specify -r multiple times. Something like
>
> import subprocess
> import glob
>
> cmd = ["bro"]
>
> for f in glob.glob("*.pcap"):
>     cmd.extend(["-r", f])
>
> subprocess.call(cmd)
>
> will work to a point. Eventually you will hit ARG_MAX with enough
> files, but for a few dozen this works fine. For more, something like
> https://github.com/assafmo/joincap could be better.
>
> I outlined a good way to do this as an input plugin a while back as
> well:
> http://mailman.icsi.berkeley.edu/pipermail/zeek/2017-July/012355.html
>
> On Mon, Apr 29, 2019 at 5:06 PM David Decker wrote:
> >
> > Looking to see if anyone has created a script, or if there is an
> > argument, to process multiple PCAPs with bro -r.
> >
> > I have it set up to output JSON currently and changed from epoch time
> > to normal date/time output, but that handles one PCAP at a time, and
> > I will have multiple.
> >
> > Looking at either a batch script or maybe Python, but wanted to see
> > if anyone has done this before.
> > (Re-ingest multiple old PCAP files.)
> >
> > Dave
> > _______________________________________________
> > Zeek mailing list
> > zeek at zeek.org
> > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/zeek
>
> --
> Justin
> _______________________________________________
> Zeek mailing list
> zeek at zeek.org
> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/zeek

--
Patrick Kelley
Hyperion Avenue Labs
http://www.hyperionavenue.com
951.291.8310
*The limit to which you have accepted being comfortable is the limit to
which you have grown. Accept new challenges as an opportunity to enrich
yourself and not as a point of potential failure.*
[image: hal_logo]
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190430/89df7a20/attachment-0001.html
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image001.png
Type: image/png
Size: 12155 bytes
Desc: not available
Url : http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190430/89df7a20/attachment-0001.bin
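(If the goal is to keep each capture's logs separate, which the
follow-up below asks about, one approach in the same spirit as the loop
above is to run each pcap from its own working directory, since bro
writes its logs into the current directory. A minimal Python sketch; the
logs-<name> directory layout is just an assumption, adjust to taste.)

import glob
import os
import subprocess

for pcap in sorted(glob.glob("*.pcap")):
    outdir = "logs-" + os.path.splitext(pcap)[0]
    os.makedirs(outdir, exist_ok=True)
    # run each capture from its own directory so the conn.log,
    # dns.log, etc. for each pcap land in separate places
    subprocess.call(["bro", "-r", os.path.abspath(pcap)], cwd=outdir)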
From x.faith at gmail.com Tue Apr 30 21:30:21 2019
From: x.faith at gmail.com (David Decker)
Date: Tue, 30 Apr 2019 21:30:21 -0700
Subject: [Zeek] Bro -r using multiple PCAP
In-Reply-To:
References:
Message-ID:

An update on the bro -r with multiple PCAPs question. I guess I should
add that I need to break out the logs (either by PCAP or, say, by day);
not sure which is easiest.

Thanks everyone so far. Still working out the kinks, I guess. New to this.

On Mon, Apr 29, 2019 at 1:57 PM David Decker wrote:

> Looking to see if anyone has created a script, or if there is an
> argument, to process multiple PCAPs with bro -r.
>
> I have it set up to output JSON currently and changed from epoch time
> to normal date/time output, but that handles one PCAP at a time, and I
> will have multiple.
>
> Looking at either a batch script or maybe Python, but wanted to see if
> anyone has done this before.
> (Re-ingest multiple old PCAP files.)
>
> Dave
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190430/db8f1e91/attachment.html
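(For the by-day case, one option is to split the JSON output after the
fact. A sketch with loud assumptions: it expects JSON-lines logs with an
ISO-formatted ts field, as in the date/time setup described in the
original post, and a conn.log in the current directory; adjust the file
name, field name, and slicing if your output differs.)

import json
from collections import defaultdict

# bucket JSON-lines records by the calendar day of their "ts" field
buckets = defaultdict(list)
with open("conn.log") as f:
    for line in f:
        rec = json.loads(line)
        buckets[rec["ts"][:10]].append(line)  # "YYYY-MM-DD..." prefix

for day, lines in buckets.items():
    with open("conn-%s.log" % day, "w") as out:
        out.writelines(lines)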