From ottobackwards at gmail.com Thu May 2 06:34:34 2019 From: ottobackwards at gmail.com (Otto Fowler) Date: Thu, 2 May 2019 06:34:34 -0700 Subject: [Zeek] Bro -r using multiple PCAP In-Reply-To: References: Message-ID: I made a quick github project with the script I had sent David. https://github.com/ottobackwards/run-bro-pcap-directory if anyone is interested. On May 1, 2019 at 00:33:06, David Decker (x.faith at gmail.com) wrote: Update on the Bro -r using multiple PCAPs. I guess I should add that I need to break out the logs (either by PCAP or, say, by day); not sure which is easiest. Thanks everyone so far. Still working out the kinks, I guess. New to this. On Mon, Apr 29, 2019 at 1:57 PM David Decker wrote: > Looking to see if anyone has created a script, or if there is an argument > to process multiple PCAPs using the bro -r argument. > > I have it set up to output to JSON currently and change from epoch time to > normal date/time output, but that is one at a time, and I will have multiple. > > Looking at either a batch script or maybe python, but wanted to see if > anyone has done this before > (re-ingest multiple old PCAP files). > > Dave > _______________________________________________ Zeek mailing list zeek at zeek.org http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/zeek -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190502/78999e15/attachment.html From mkg at vt.edu Thu May 2 10:01:36 2019 From: mkg at vt.edu (Mark Gardner) Date: Thu, 2 May 2019 13:01:36 -0400 Subject: [Zeek] capture_loss vs. pkts_dropped vs.
missed_bytes Message-ID: I am still tuning our new Zeek cluster: an Arista switch for load balancing with 4x10 Gbps links from a Gigamon and 10 Gbps links to the sensors, five sensors (16 physical cores with 128 GB RAM each) using af_packet, 15 workers per sensor, and a separate management node running the manager, logger, proxy, and storage (XFS on RAID-0 with 8 7200 RPM spindles, 256 GB RAM). Output is JSON (for feeding into an Elastic Stack later). The average capture loss was <1% early on with spikes to 50-70%. We increased the af_packet_buffer_size from the default (128MB) to 2GB and capture_loss is gone. $ zcat capture_loss.10\:00\:00-11\:00\:00.log.gz | jq .percent_lost | statgen Count Min Max Avg StdDev 300 0.0000 0.0000 0.0000 0.0000 Next, I looked at the missed bytes from the conn.log, which don't look too bad: $ zcat conn.10\:00\:00-11\:00\:00.log.gz | jq .missed_bytes | statgen Count Min Max Avg StdDev 5488 0.0000 5802.0000 1.7332 92.9547 Out of the 5488 records, only two were non-zero (5802 and 3710), and for both of those missed_bytes == resp_bytes (service: ssl). But even with the above, the pkts_dropped in stats.log is extremely high: $ zcat stats.10\:00\:00-11\:00\:00.log.gz | jq .pkts_dropped | grep -v null | statgen Count Min Max Avg StdDev 900 3564854 18216752 5762446.99 1591145.34 So even though there was no capture_loss and almost no missed_bytes, pkts_dropped is huge. Is this something to be concerned about? If so, I am not sure how to go about figuring out the problem. What should I do next? -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190502/e15a0026/attachment.html From michalpurzynski1 at gmail.com Thu May 2 12:49:33 2019 From: michalpurzynski1 at gmail.com (Michał Purzyński) Date: Thu, 2 May 2019 12:49:33 -0700 Subject: [Zeek] capture_loss vs. pkts_dropped vs.
missed_bytes In-Reply-To: References: Message-ID: Hey Mark, First of all, I really like your setup and I don't see any obvious errors there. Cool. Jan (also on this list) might know more about the way drops are calculated in stats log. It looks like they are just af_packet statistics. Can you run Justin's troubleshooting tool and send us results? https://github.com/ncsa/bro-doctor BTW, while monitoring for drops, take a look here, where we describe several other places drops might happen (and all of them should be monitored). https://github.com/pevma/SEPTun/blob/master/SEPTun.rst#packet-drops On Thu, May 2, 2019 at 10:14 AM Mark Gardner wrote: > I am still tuning our new Zeek cluster: an Arista switch for load > balancing with 4x10 Gbps links from a Gigamon and 10 Gbps links to the > sensors, five sensors (16 physical cores with 128 GB RAM each) using > af_packet, 15 workers per sensor, and a separate management node running > the manager, logger, proxy, and storage (XFS on RAID-0 with 8 7200 RPM > spindles, 256 GB RAM). Output is JSON (for feeding into an ElasticStack > later). > > The average capture loss was <1% early on with spikes to 50-70%. We > increased the af_packet_buffer_size from the default (128MB) to 2GB and > capture_loss is gone. > $ zcat capture_loss.10\:00\:00-11\:00\:00.log.gz | jq .percent_lost | > statgen > Count Min Max Avg StdDev > 300 0.0000 0.0000 0.0000 0.0000 > > Next, I looked at the missing bytes from the conn.log which doesn't look > too bad: > $ zcat conn.10\:00\:00-11\:00\:00.log.gz | jq .missed_bytes | statgen > Count Min Max Avg StdDev > 5488 0.0000 5802.0000 1.7332 92.9547 > Out of the 5488 records, only two were non-zero (5802 and 3710) and for > both of those the missed_bytes == resp_bytes (service: ssl). 
> > But even with the above, the pkts_dropped in stats.log is extremely high: > $ zcat stats.10\:00\:00-11\:00\:00.log.gz | jq .pkts_dropped | grep -v > null | statgen > Count Min Max Avg StdDev > 900 3564854 18216752 5762446.99 1591145.34 > > So even though there was no capture_loss and almost no missed_bytes, the > pkts_dropped is huge. Is this something to be concerned about? If so, I am > not sure how to go about figuring out the problem. What should I do next? > _______________________________________________ > Zeek mailing list > zeek at zeek.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/zeek -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190502/da11f488/attachment.html From mauro.palumbo at aizoon.it Fri May 3 00:52:13 2019 From: mauro.palumbo at aizoon.it (Palumbo Mauro) Date: Fri, 3 May 2019 07:52:13 +0000 Subject: [Zeek] cluster configuration hot update Message-ID: <8f28e1c6c0a842d5976d33365a5237a2@SRVEX03.aizoon.local> Hi all, I am trying to figure out if it is possible to update the number of nodes running Zeek on a cluster configuration without restarting it. This could be a possible way to cope with increasing network traffic occurring in certain periods during a day, or on certain days when traffic is expected to peak. However, restarting Zeek would cause a possible loss of data and I would rather avoid it. As far as I understand, I can update the node.cfg file, for example with new workers, and run the deploy command in broctl to update the configuration. But this will stop and restart the workers for a short time. Is there a way to avoid it? I had a look into the cluster framework and other parts of Zeek's code, but it doesn't seem so easy to me. Thanks in advance, Mauro -------------- next part -------------- An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190503/cb8c2476/attachment.html From jsiwek at corelight.com Fri May 3 08:48:41 2019 From: jsiwek at corelight.com (Jon Siwek) Date: Fri, 3 May 2019 08:48:41 -0700 Subject: [Zeek] cluster configuration hot update In-Reply-To: <8f28e1c6c0a842d5976d33365a5237a2@SRVEX03.aizoon.local> References: <8f28e1c6c0a842d5976d33365a5237a2@SRVEX03.aizoon.local> Message-ID: On Fri, May 3, 2019 at 1:01 AM Palumbo Mauro wrote: > As far as I understand, I can update the node.cfg file, for example with new workers, and run the deploy command in broctl to update the configuration. But this will stop and restart the workers for a short time. Is there a way to avoid it? I had a look into the cluster framework and other parts of Zeek's code, but it doesn't seem so easy to me. A dynamically changing cluster is theoretically possible, but not something I know any tricks to get working now -- it's likely some effort to hack that feature in, or else to roll your own cluster config that uses the underlying Broker framework to set up connections instead of the default cluster/broctl frameworks. - Jon From manju.atri87 at gmail.com Fri May 3 09:38:09 2019 From: manju.atri87 at gmail.com (Manju Lalwani) Date: Fri, 3 May 2019 22:08:09 +0530 Subject: [Zeek] Zeek script to look for first few packets Message-ID: How can I make Zeek look at only the first ten packets of a TCP session? The first ten packets are enough to fingerprint the traffic I am trying to identify, so I would like to ensure my script looks at only the first 10 packets to save processing time. Also, the communication can be identified based on the 7 packets immediately following the TCP handshake, and it uses a custom service not categorised by Zeek. The tcp_packet event has been the closest match for my script. Is there any Zeek event that would be a better match for this communication?
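[One possible script-level sketch of what is being asked: count packets per connection and bail out once the limit is reached. This is a sketch rather than a standard Zeek recipe; pkt_count and max_pkts below are illustrative names, not a built-in API.]

```zeek
# Sketch only: skip a connection's packets after the first 10.
# pkt_count and max_pkts are illustrative names, not a built-in API.
global pkt_count: table[conn_id] of count &default=0;

const max_pkts = 10;

event tcp_packet(c: connection, is_orig: bool, flags: string, seq: count,
                 ack: count, len: count, payload: string)
    {
    if ( pkt_count[c$id] >= max_pkts )
        return;

    pkt_count[c$id] = pkt_count[c$id] + 1;
    # ... fingerprinting over the early packets goes here ...
    }

event connection_state_remove(c: connection)
    {
    # Clean up so the table does not grow without bound.
    if ( c$id in pkt_count )
        delete pkt_count[c$id];
    }
```

Note that tcp_packet is generated for every TCP packet Zeek processes once a handler exists, so even with the early return the event dispatch itself has a cost; the replies in this thread discuss why an event-based approach is usually preferable.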
Thanks in advance, Manju -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190503/4317443d/attachment.html From justin at corelight.com Fri May 3 10:09:10 2019 From: justin at corelight.com (Justin Azoff) Date: Fri, 3 May 2019 13:09:10 -0400 Subject: [Zeek] cluster configuration hot update In-Reply-To: <8f28e1c6c0a842d5976d33365a5237a2@SRVEX03.aizoon.local> References: <8f28e1c6c0a842d5976d33365a5237a2@SRVEX03.aizoon.local> Message-ID: The main issue with adding new workers on demand is here: https://github.com/zeek/zeek/blob/c640dd70cc4229b07192a9739ece5f90d02151da/scripts/base/frameworks/cluster/main.zeek#L305:L311 the cluster layout expects to know about all workers beforehand. However, you could change the code so that when an unknown node connects calling itself worker-44, it just adds a node with that ID to the nodes table as a worker... basically, just trust the name. There are other issues, though. If you're using something like AF_PACKET and add a worker, the number of workers will change, causing connections to hash differently and move between different workers, which is almost as bad as restarting things. If you were talking about physically adding new nodes, then you have a similar problem if you are using a packet broker, because the hashing will change on that side as well. This is not a bad idea though; I had wanted to build a cluster on top of k8s or nomad and integrate it with Arista to be able to dynamically provision and resize clusters. On Fri, May 3, 2019 at 4:01 AM Palumbo Mauro wrote: > > Hi all, > > I am trying to figure out if it is possible to update the number of nodes running Zeek on a cluster configuration without restarting it. This could be a possible way to cope with increasing network traffic occurring in certain periods during a day, or on certain days when traffic is expected to peak.
However, restarting Zeek would cause a possible loss of data and I would rather avoid it. > > > > As far as I understand, I can update the node.cfg file, for example with new workers, and run the deploy command in broctl to update the configuration. But this will stop and restart the workers for a short time. Is there a way to avoid it? I had a look into the cluster framework and other parts of Zeek's code, but it doesn't seem so easy to me. > > > > Thanks in advance, > > Mauro > > > > _______________________________________________ > Zeek mailing list > zeek at zeek.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/zeek -- Justin From asharma at lbl.gov Fri May 3 10:38:52 2019 From: asharma at lbl.gov (Aashish Sharma) Date: Fri, 3 May 2019 10:38:52 -0700 Subject: [Zeek] Zeek script to look for first few packets In-Reply-To: References: Message-ID: <20190503173850.GD54033@MacPro-2331.local> Manju, Zeek conceptually works better at the connection and protocol level than at the packet level. In fact, that's one of its strengths: it does all the low-level TCP and protocol understanding for you and hands you events that are much easier to work with. While you can work at the packet level, it is generally not recommended. And if you operate at the packet level to save processing time, you are actually on a non-optimal path. You should consider an event-based approach.
Your message doesn't quite explain the specifics that let you identify when you are done, but here are a couple of examples which might help illustrate other approaches and ways to think. Problem: I'd like to only process if all three conditions are true: the IP is in local_nets, the dst port is in an acceptable port list, and the responding IP is not in a list of acceptable hosts.

event new_connection(c: connection)
    {
    local orig = c$id$orig_h;
    local resp = c$id$resp_h;
    local dport = c$id$resp_p;

    if (orig !in Site::local_nets)
        return;

    if (dport !in ok_ports)
        return;

    if (resp !in ok_hosts)
        return;

    # do your processing
    }

Similarly, let's say you want to only operate on Apache server stuff:

event http_header(c: connection, is_orig: bool, name: string, value: string) &priority=5
    {
    if (name != "SERVER")
        return;

    if (/Apache/ in value)
        {
        # do your processing
        }
    }

or alternatively:

if ( name == "SERVER" && /Apache/ in value )
    # do processing

The way to go is to eliminate all the uninteresting traffic you don't care about - this saves more processing than per-packet heuristics. You should probably look at the connection events: https://docs.zeek.org/en/stable/scripts/base/bif/plugins/Bro_TCP.events.bif.bro.html and definitely try to avoid working on packet events. Hope this helps, Aashish On Fri, May 03, 2019 at 10:08:09PM +0530, Manju Lalwani wrote: > How can I make Zeek look at only the first ten packets of a TCP session? > The first ten packets are enough to fingerprint the traffic I am trying to > identify, so I would like to ensure my script looks at only the first 10 > packets to save processing time. > > Also, the communication can be identified based on the 7 packets immediately > following the TCP handshake, and it uses a custom service not categorised by > Zeek. The tcp_packet event has been the closest match for my script. Is there > any Zeek event that would be a better match for this communication?
> > Thanks in advance, > Manju > _______________________________________________ > Zeek mailing list > zeek at zeek.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/zeek From jmellander at lbl.gov Fri May 3 10:57:32 2019 From: jmellander at lbl.gov (Jim Mellander) Date: Fri, 3 May 2019 10:57:32 -0700 Subject: [Zeek] Zeek script to look for first few packets In-Reply-To: <20190503173850.GD54033@MacPro-2331.local> References: <20190503173850.GD54033@MacPro-2331.local> Message-ID: If you're working on a protocol currently unknown to Zeek, you could try your hand at writing a protocol analyzer plugin. A recent thread on that subject: http://mailman.icsi.berkeley.edu/pipermail/zeek-dev/2019-March/013196.html As an enhancement to Zeek, it might be nice to trigger an event if the protocol analyzers were unable to identify the connection, with some representation of the traffic seen to allow script-level analysis. I haven't spent much time thinking about the syntax or efficiency of such an event, though it might be an interesting topic for conversation. On Fri, May 3, 2019 at 10:48 AM Aashish Sharma wrote: > Manju, > > Zeek conceptually works better at the connection and protocol level than at > the packet level. In fact, that's one of its strengths: it does all the > low-level TCP and protocol understanding for you and hands you events that > are much easier to work with. > > While you can work at the packet level, it is generally not recommended. > And if you operate at the packet level to save processing time, you are > actually on a non-optimal path. > > You should consider an event-based approach.
Your message doesn't quite > explain > what your specifics are that helps you identify when you are done but here > are > couple of examples which might help understand other approaches or way to > think: > > Problem: I'd like to only process if all three conditions are T > > - IP is in local_nets > - dst port is acceptable port list && > - response IP is not in list of acceptable hosts > > event new_connection(c: connection) > { > > local orig = c$id$orig_h ; > local resp = c$id$resp_h ; > local dport = c$id$resp_p ; > > if (orig !in Site::local_nets) > return ; > > if (dport !in ok_ports) > return ; > > if (resp !in ok_hosts ) > return ; > > # do your processing > > } > > Similarly: lets say you want to only operate on Apache Server stuff: > > > event http_header(c: connection, is_orig: bool, name: string, value: > string) &priority=5 > { > if (name != "SERVER") > return ; > > if (/Apache/ in value) > { > # do your processing > } > > } > > or alternatively: > > > if ( name == "SERVER" && /Apache/ in value) > # do processing > > The way is you eliminate all the un-interesting traffic you don't care > about - > this saves more processing than to go per packet level heuristics. > > You should probably look at connection events: > > > https://docs.zeek.org/en/stable/scripts/base/bif/plugins/Bro_TCP.events.bif.bro.html > > and definitely try avoiding working on packet events > > Hope this helps, > > Aashish > > On Fri, May 03, 2019 at 10:08:09PM +0530, Manju Lalwani wrote: > > how can I make Zeek look for the first ten packets only in a tcp > session ? > > The first ten packets are enough to fingerprint the traffic I am trying > to > > identify and so would like to ensure my script looks at only the first > 10 > > packets to save processing time. > > > > Also the communication can be identified based on 7 packets immediately > > following the tcp handshake and using a custom service not categorised by > > zeek.. 
tcp_packet event has been the closest match for my script . Is > there > > any Zeek event that can be a better match for this communication ? > > > > Thanks in advance, > > Manju > > > _______________________________________________ > > Zeek mailing list > > zeek at zeek.org > > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/zeek > > _______________________________________________ > Zeek mailing list > zeek at zeek.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/zeek > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190503/7e11a9be/attachment.html From hlin33 at illinois.edu Sun May 5 10:33:25 2019 From: hlin33 at illinois.edu (Hui Lin (Hugo)) Date: Sun, 5 May 2019 10:33:25 -0700 Subject: [Zeek] Using "dbl" instead of "num" in SumStats Message-ID: Hi By default, SumStats will apply calculation on "num" instead of "dbl". How can I make it apply calculation on dbl instead? Thanks Hui Lin -- Hui Lin Ph.D. Candidate (http://hlin33.web.engr.illinois.edu/) DEPEND (http://depend.csl.illinois.edu/) ECE, Uni. of Illinois at Urbana-Champaign -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190505/b9c71e85/attachment.html From jmellander at lbl.gov Sun May 5 10:54:09 2019 From: jmellander at lbl.gov (Jim Mellander) Date: Sun, 5 May 2019 10:54:09 -0700 Subject: [Zeek] Using "dbl" instead of "num" in SumStats In-Reply-To: References: Message-ID: Hi Hugo: The observation record is defined (share/bro/base/frameworks/sumstats/main.bro) as: ## Represents data being added for a single observation. ## Only supply a single field at a time! type Observation: record { ## Count value. num: count &optional; ## Double value. dbl: double &optional; ## String value. str: string &optional; }; so in SumStats::observe, you would supply the dbl optional value instead of num, e.g. 
SumStats::observe("mysumstat", SumStats::Key($host=foo), SumStats::Observation($dbl=bar)); (don't supply more than 1 optional value). Hope this helps. BTW: I'm interested in the uses that folks find for sumstats. Care to comment on your use case? Jim On Sun, May 5, 2019 at 10:38 AM Hui Lin (Hugo) wrote: > Hi > > By default, SumStats will apply calculation on "num" instead of "dbl". How > can I make it apply calculation on dbl instead? > > Thanks > > Hui Lin > > -- > Hui Lin > Ph.D. Candidate (http://hlin33.web.engr.illinois.edu/) > DEPEND (http://depend.csl.illinois.edu/) > ECE, Uni. of Illinois at Urbana-Champaign > > _______________________________________________ > Zeek mailing list > zeek at zeek.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/zeek -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190505/1aa11b1d/attachment.html From hlin33 at illinois.edu Sun May 5 11:15:47 2019 From: hlin33 at illinois.edu (Hui Lin (Hugo)) Date: Sun, 5 May 2019 11:15:47 -0700 Subject: [Zeek] Using "dbl" instead of "num" in SumStats In-Reply-To: References: Message-ID: Hi Jim, Thanks for the help. It seems that I made a stupid mistake. I did exactly what you suggested, replacing num with dbl in the observation. However, when I copied the print fmt function from the example into the callback function, I forgot to have it print enough decimal digits. So I always obtained 0, which made me think that the "observer" was still using "num". I hope this note can help others who want to use the double type instead of the count type in SumStats. Yes, as you may know, I contributed the DNP3 analyzer in Bro with Robin and Seth. So I still use Bro to measure network traces of DNP3 packets, related to my research work. At first, I was a little daunted by SumStats, but it turns out to be very easy.
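[For reference, a minimal end-to-end sketch of a dbl-based SumStats setup; the stream and SumStat names below are made up, and the %f format specifier is the part that avoids the truncated-to-zero printing pitfall described in this thread:]

```zeek
@load base/frameworks/sumstats

event bro_init()
    {
    # "rtt" and "rtt-stats" are illustrative names.
    local r = SumStats::Reducer($stream="rtt",
                                $apply=set(SumStats::AVERAGE, SumStats::STD_DEV));

    SumStats::create([$name="rtt-stats",
                      $epoch=1min,
                      $reducers=set(r),
                      $epoch_result(ts: time, key: SumStats::Key, result: SumStats::Result) =
                          {
                          local rs = result["rtt"];
                          # Print the doubles with %f: with %d (or too few
                          # decimal places) a small average shows up as 0,
                          # as if the dbl observations were never recorded.
                          print fmt("n=%d avg=%f sd=%f", rs$num, rs$average, rs$std_dev);
                          }]);
    }

# Elsewhere, feed it doubles ("my_rtt" would be your measured interval):
# SumStats::observe("rtt", SumStats::Key($str="all"), SumStats::Observation($dbl=my_rtt));
```

As a side note, the per-epoch ResultVal also carries rs$num, the number of observations folded into that epoch's result.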
I just use an application-layer event to calculate the round-trip time between DNP3 requests and responses and trigger the SumStats::observe event to record the latency (to calculate goodput). Then I just calculate the average and standard deviation. RTT is a very basic network measurement, so I find SumStats very useful. May I suggest a few things for SumStats? Maybe I missed something, but I don't know how to directly obtain the number of observations recorded in SumStats, so I need to declare another global variable to track it. It would be useful to be able to know directly how many observations have been recorded so far. The reason I need the number of records is to calculate the 95% or 99% confidence interval. It would be great if these could be included directly in SumStats as well. Best, Hugo On Sun, May 5, 2019 at 10:55 AM Jim Mellander wrote: > Hi Hugo: > > The observation record is defined > (share/bro/base/frameworks/sumstats/main.bro) as: > ## Represents data being added for a single observation. > ## Only supply a single field at a time! > type Observation: record { > ## Count value. > num: count &optional; > ## Double value. > dbl: double &optional; > ## String value. > str: string &optional; > }; > > so in SumStats::observe, you would supply the dbl optional value instead > of num, e.g. > > SumStats::observe("mysumstat", > SumStats::Key($host=foo), > SumStats::Observation($dbl=bar)); > > (don't supply more than 1 optional value). > > Hope this helps. BTW: I'm interested in the uses that folks find for sumstats. Care to comment on your use case? > > Jim > > > On Sun, May 5, 2019 at 10:38 AM Hui Lin (Hugo) > wrote: > >> Hi >> >> By default, SumStats will apply calculation on "num" instead of "dbl". >> How can I make it apply calculation on dbl instead? >> >> Thanks >> >> Hui Lin >> >> -- >> Hui Lin >> Ph.D. Candidate (http://hlin33.web.engr.illinois.edu/) >> DEPEND (http://depend.csl.illinois.edu/) >> ECE, Uni.
of Illinois at Urbana-Champaign >> >> _______________________________________________ >> Zeek mailing list >> zeek at zeek.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/zeek >> > > -- Hui Lin Ph.D. Candidate (http://hlin33.web.engr.illinois.edu/) DEPEND (http://depend.csl.illinois.edu/) ECE, Uni. of Illinois at Urbana-Champaign -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190505/7416eeb4/attachment-0001.html From x.faith at gmail.com Sun May 5 13:58:01 2019 From: x.faith at gmail.com (David Decker) Date: Sun, 5 May 2019 13:58:01 -0700 Subject: [Zeek] Bro Logs Ingestion Message-ID: Sorry beginner question here: But I know you can ingest logs into Splunk, and Elastic Search. So I know SecurityOnion has an ELK stack and it looks like they get sent right to Logstash - ES - Kibana RockNSM looks almost the same but it has a stop off at Kafka before forwarding to Logstash. Trying to figure out is there a benefit for Kafka. Also looking at using Splunk instead of ES. I know I can use the TA and monitor the logs from splunk, but would it be better to monitor from Kafka? I guess I need to understand more of how Kafka fits. Thanks Dave -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190505/78ae95a1/attachment.html From landy-bible at utulsa.edu Sun May 5 15:11:04 2019 From: landy-bible at utulsa.edu (Bible, Landy) Date: Sun, 5 May 2019 22:11:04 +0000 Subject: [Zeek] Bro Logs Ingestion In-Reply-To: References: Message-ID: David, Think of Kafka as a message queue between Zeek and ELK. I think it depends partly on the scale of your setup. On a small system, it's probably not needed, but as you scale up it becomes much more useful. I ran a Zeek system with ELK for a few years before moving from security to networking. My ELK system ingested about 3 billion messages a week. 
I used Redis between Zeek and ELK. Redis was just a message queue. It provided a buffer between Zeek and ELK. That helped smooth out bursts of log messages from Zeek since it could generate messages a lot faster than my ELK cluster could process them. It also meant I could take the ELK cluster down for maintenance without stopping Zeek. Messages would just queue up in Redis until I brought ELK back up. Once I found Kafka I had planned to replace Redis with it, but I was offered the networking gig before I made that happen. Kafka had the benefits of being properly clustered itself and wrote to disk instead of RAM which was the main limitation of using Redis. Kafka also had the feature of being able to support multiple independent consumers so that I could have data feeding into multiple systems if I wanted to. -- Landy Bible ?Senior Network Engineer The University of Tulsa ________________________________ From: zeek-bounces at zeek.org on behalf of David Decker Sent: Sunday, May 5, 2019 3:58 PM To: Zeek at zeek.org Subject: [Zeek] Bro Logs Ingestion Sorry beginner question here: But I know you can ingest logs into Splunk, and Elastic Search. So I know SecurityOnion has an ELK stack and it looks like they get sent right to Logstash - ES - Kibana RockNSM looks almost the same but it has a stop off at Kafka before forwarding to Logstash. Trying to figure out is there a benefit for Kafka. Also looking at using Splunk instead of ES. I know I can use the TA and monitor the logs from splunk, but would it be better to monitor from Kafka? I guess I need to understand more of how Kafka fits. Thanks Dave -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190505/15658dc5/attachment.html From alajal at gmail.com Sun May 5 19:02:06 2019 From: alajal at gmail.com (Mustafa Qasim) Date: Mon, 6 May 2019 12:02:06 +1000 Subject: [Zeek] Bro Logs Ingestion In-Reply-To: References: Message-ID: The biggest reason is absorbing back pressure from Logstash or other ingesting tools. In the past, the back pressure from Logstash would cause CPU spikes on the originating endpoint. Second, we can write programs to clean, modify, and enrich data before throwing it at the ingesting tools, making our log processing pipelines independent. That gives us the flexibility of migrating from Logstash to Humio or Splunk without worrying about wasting all the effort put into Logstash pipelines. ------ *Mustafa Qasim* PGP: C57E0A7C On Mon, May 6, 2019 at 7:08 AM David Decker wrote: > Sorry beginner question here: > > But I know you can ingest logs into Splunk, and Elastic Search. > > So I know SecurityOnion has an ELK stack and it looks like they get sent > right to Logstash - ES - Kibana > > RockNSM looks almost the same but it has a stop off at Kafka before > forwarding to Logstash. > > Trying to figure out is there a benefit for Kafka. > > Also looking at using Splunk instead of ES.
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190506/50742770/attachment.html From michalpurzynski1 at gmail.com Sun May 5 19:16:51 2019 From: michalpurzynski1 at gmail.com (=?UTF-8?B?TWljaGHFgiBQdXJ6ecWEc2tp?=) Date: Sun, 5 May 2019 19:16:51 -0700 Subject: [Zeek] Bro Logs Ingestion In-Reply-To: References: Message-ID: There are some good patterns here. We observed that it helps a lot to just ship logs from each NSM sensor as soon as data is collected, with minimal, if any, processing. That's why we ship logs (with syslog-ng) to a RabbitMQ instance, via AMQPS. No extra processing is done there, it's just buffering. We then have a set of python workers fetching messages from Rabbit and doing all of the processing. No Kafka here but just simple solutions and avoiding any processing on endpoints. On Sun, May 5, 2019 at 7:12 PM Mustafa Qasim wrote: > the biggest reason is absorbing back pressure from logstash or other > ingesting tools. In past the back pressure from logstash would cause CPU > spikes on the originating endpoint. > > second, we can write programs to clean, modify and enrich data before > throwing at the ingesting tools making our log processing pipelines > indipendedent. Giving us flexibility of migrating from logstash to Humio or > Splunk and not worry about wasting all the efforts you put into logstash > pipelines. > > ------ > *Mustafa Qasim* > PGP: C57E0A7C > > > > On Mon, May 6, 2019 at 7:08 AM David Decker wrote: > >> Sorry beginner question here: >> >> But I know you can ingest logs into Splunk, and Elastic Search. >> >> So I know SecurityOnion has an ELK stack and it looks like they get sent >> right to Logstash - ES - Kibana >> >> RockNSM looks almost the same but it has a stop off at Kafka before >> forwarding to Logstash. >> >> Trying to figure out is there a benefit for Kafka. >> >> Also looking at using Splunk instead of ES. 
>> I know I can use the TA and monitor the logs from splunk, but would it be >> better to monitor from Kafka? >> >> I guess I need to understand more of how Kafka fits. >> >> Thanks >> Dave >> _______________________________________________ >> Zeek mailing list >> zeek at zeek.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/zeek > > _______________________________________________ > Zeek mailing list > zeek at zeek.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/zeek -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190505/fd63691a/attachment-0001.html From alajal at gmail.com Sun May 5 23:52:23 2019 From: alajal at gmail.com (Mustafa Qasim) Date: Mon, 6 May 2019 16:52:23 +1000 Subject: [Zeek] Bro Logs Ingestion In-Reply-To: References: Message-ID: Also, it's good security practice to offload logs from endpoints ASAP. During an incident, endpoints can go offline or the data can be impacted, especially if the endpoints are compromised. ------ *Mustafa Qasim* PGP: C57E0A7C On Mon, May 6, 2019 at 12:17 PM Michał Purzyński wrote: > There are some good patterns here. We observed that it helps a lot to just > ship logs from each NSM sensor as soon as data is collected, with minimal, > if any, processing. That's why we ship logs (with syslog-ng) to a RabbitMQ > instance, via AMQPS. No extra processing is done there, it's just > buffering. We then have a set of python workers fetching messages from > Rabbit and doing all of the processing. > > No Kafka here but just simple solutions and avoiding any processing on > endpoints. > > > > On Sun, May 5, 2019 at 7:12 PM Mustafa Qasim wrote: > >> The biggest reason is absorbing back pressure from Logstash or other >> ingesting tools. In the past, the back pressure from Logstash would cause CPU >> spikes on the originating endpoint.
>> >> second, we can write programs to clean, modify and enrich data before >> throwing at the ingesting tools making our log processing pipelines >> independent, giving us flexibility of migrating from logstash to Humio or >> Splunk and not worrying about wasting all the efforts you put into logstash >> pipelines. >> >> ------ >> *Mustafa Qasim* >> PGP: C57E0A7C >> >> >> >> On Mon, May 6, 2019 at 7:08 AM David Decker wrote: >> >>> Sorry beginner question here: >>> >>> But I know you can ingest logs into Splunk, and Elastic Search. >>> >>> So I know SecurityOnion has an ELK stack and it looks like they get sent >>> right to Logstash - ES - Kibana >>> >>> RockNSM looks almost the same but it has a stop off at Kafka before >>> forwarding to Logstash. >>> >>> Trying to figure out is there a benefit for Kafka. >>> >>> Also looking at using Splunk instead of ES. >>> I know I can use the TA and monitor the logs from splunk, but would it >>> be better to monitor from Kafka? >>> >>> I guess I need to understand more of how Kafka fits. >>> >>> Thanks >>> Dave >>> _______________________________________________ >>> Zeek mailing list >>> zeek at zeek.org >>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/zeek >> >> _______________________________________________ >> Zeek mailing list >> zeek at zeek.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/zeek > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190506/fe60c560/attachment.html From mauro.palumbo at aizoon.it Mon May 6 02:21:33 2019 From: mauro.palumbo at aizoon.it (Palumbo Mauro) Date: Mon, 6 May 2019 09:21:33 +0000 Subject: [Zeek] R: cluster configuration hot update In-Reply-To: References: <8f28e1c6c0a842d5976d33365a5237a2@SRVEX03.aizoon.local> Message-ID: <2c00d09ed0e246e28d872ecf0bcec6fd@SRVEX03.aizoon.local> Thanks for your reply. 
My main concern is that changing the cluster config on the fly may disrupt something else in Zeek's processing of traffic, as you point out below. Not sure what else may be affected by such changes. Dynamically resizing a cluster, and adapting Zeek to it accordingly, nonetheless seems a possible way to handle temporary peaks in network traffic. Mauro -----Original Message----- From: Justin Azoff [mailto:justin at corelight.com] Sent: Friday, 3 May 2019 19:09 To: Palumbo Mauro Cc: zeek at zeek.org Subject: Re: [Zeek] cluster configuration hot update The main issue with adding new workers on demand is here: https://github.com/zeek/zeek/blob/c640dd70cc4229b07192a9739ece5f90d02151da/scripts/base/frameworks/cluster/main.zeek#L305:L311 the cluster layout expects to know about all workers beforehand. However, you could change the code so that when an unknown node connects calling itself worker-44, it just adds a worker with that ID to the nodes table... basically just trust the name. However there are other issues. If you're using something like AF_packet and add a worker, the number of workers will change, causing connections to hash differently and move between different workers, which is almost as bad as restarting things. If you were talking about physically adding new nodes then you have a similar problem if you are using a packet broker, because the hashing will change on that side as well. This is not a bad idea though; I had wanted to build a cluster on top of k8s or nomad and integrate it with arista to be able to dynamically provision and resize clusters. On Fri, May 3, 2019 at 4:01 AM Palumbo Mauro wrote: > > Hi all, > > I am trying to figure out if it is possible to update the number of nodes running Zeek on a cluster configuration without restarting it. This could be a possible way to cope with increasing network traffic occurring in certain periods during a day or certain days when traffic is expected to peak. 
However, restarting Zeek would cause a possible loss of data and I would rather avoid it. > > > > As far as I understand, I can update the node.cfg file, for example with new workers, and run the deploy command in broctl to update the configuration. But this will stop and restart the workers for a short time. Is there a way to avoid it? I had a look into the cluster framework and other parts of zeek's code, but it doesn't seem so easy to me. > > > > Thanks in advance, > > Mauro > > > > _______________________________________________ > Zeek mailing list > zeek at zeek.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/zeek -- Justin From doug.burks at gmail.com Mon May 6 03:51:27 2019 From: doug.burks at gmail.com (Doug Burks) Date: Mon, 6 May 2019 06:51:27 -0400 Subject: [Zeek] Bro Logs Ingestion In-Reply-To: References: Message-ID: Hi Dave, To clarify, Security Onion may also include redis in the pipeline, depending on what kind of architecture you are deploying. For more information, please see: https://securityonion.readthedocs.io/en/latest/architecture.html#distributed Hope that helps! On Sun, May 5, 2019 at 5:08 PM David Decker wrote: > Sorry beginner question here: > > But I know you can ingest logs into Splunk, and Elastic Search. > > So I know SecurityOnion has an ELK stack and it looks like they get sent > right to Logstash - ES - Kibana > > RockNSM looks almost the same but it has a stop off at Kafka before > forwarding to Logstash. > > Trying to figure out is there a benefit for Kafka. > > Also looking at using Splunk instead of ES. > I know I can use the TA and monitor the logs from splunk, but would it be > better to monitor from Kafka? > > I guess I need to understand more of how Kafka fits. > > Thanks > Dave > _______________________________________________ > Zeek mailing list > zeek at zeek.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/zeek -- Doug Burks -------------- next part -------------- An HTML attachment was scrubbed... 
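[Editor's note] The broker-as-buffer pattern discussed in this thread (Kafka, RabbitMQ, or redis sitting between the sensors and Logstash, absorbing back pressure) can be sketched with a standard-library queue standing in for the broker. Everything below is illustrative — the names, the queue size, and the dict "enrichment" are made up, and a real deployment would use a pika or Kafka client instead of `queue.Queue`:

```python
import queue
import threading

buf = queue.Queue(maxsize=1000)  # stands in for Kafka/RabbitMQ/redis

def sensor(n):
    # The sensor only enqueues; a slow consumer never raises its CPU load,
    # which is the back-pressure problem described above.
    for i in range(n):
        buf.put(("conn.log", i))

def worker(out):
    # Cleaning/enrichment happens here, fully decoupled from the sensor.
    while True:
        item = buf.get()
        if item is None:  # sentinel: shut down
            break
        out.append({"log": item[0], "seq": item[1], "enriched": True})
        buf.task_done()

out = []
t = threading.Thread(target=worker, args=(out,))
t.start()
sensor(100)
buf.put(None)
t.join()
print(len(out))  # -> 100
```

Swapping the ingest backend (Logstash, Humio, Splunk) then only means changing the worker, which is the flexibility argument made earlier in the thread.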
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190506/bb5a1c68/attachment.html From jmellander at lbl.gov Mon May 6 09:16:02 2019 From: jmellander at lbl.gov (Jim Mellander) Date: Mon, 6 May 2019 09:16:02 -0700 Subject: [Zeek] Using "dbl" instead of "num" in SumStats In-Reply-To: References: Message-ID: Hi Hugo: May I suggest a few things in SumStats? Maybe I missed something, I don't know how to directly obtain the number of data recorded in SumStats, so I need to declare another global variable to record that. It will be useful that we can directly know how many data are recorded so far. The reason that I need the number of records is to calculate the 95% or 99% confidence interval. It will be great that we can include them directly in SumStats as well. Each result record returned to epoch_result has a 'num' field, which is a count of the number of observations that made up that result - is that what you're looking for? If you're looking for a grand total of observations, I suppose they could be totalled up from the result records. Take care, Jim -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190506/d80e7cf5/attachment-0001.html From hlin33 at illinois.edu Mon May 6 09:55:09 2019 From: hlin33 at illinois.edu (Hui Lin (Hugo)) Date: Mon, 6 May 2019 09:55:09 -0700 Subject: [Zeek] Using "dbl" instead of "num" in SumStats In-Reply-To: References: Message-ID: Hi Jim, I think the 'num' field seems like what I am looking for. However, when I tried, it is different from the count that I manually made. Here is the code that I used to count. As you can see, what I try to do is easy: whenever an observation is received, I increase the value of a global variable. However, when I print out through the epoch callback function, the value is different from the one in 'num'. if (...) 
{ total_res = total_res + 1; SumStats::observe("dnp3 rtt", SumStats::Key(), SumStats::Observation($dbl=latency)); } Best, Hugo On Mon, May 6, 2019 at 9:16 AM Jim Mellander wrote: > Hi Hugo: > > > May I suggest a few things in SumStats? Maybe I missed something, I don't > know how to directly obtain the number of data recorded in SumStats, so I > need to declare another global variable to record that. It will be useful > that we can directly know how many data are recorded by far. The reason > that I need the number of records is to calculate the 95% or 99% confidence > interval. It will be great that we can include them directly in SumStats as > well. > > > Each result record returned to epoch_result has a 'num' field, which is a > count of the number of observations that made up that result - is that what > you're looking for? If you're looking for a grand total of observations, I > suppose they could be totalled up from the result records. > > Take care, > > JIm > > -- Hui Lin Ph.D. Candidate (http://hlin33.web.engr.illinois.edu/) DEPEND (http://depend.csl.illinois.edu/) ECE, Uni. of Illinois at Urbana-Champaign -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190506/4471647c/attachment.html From jmellander at lbl.gov Mon May 6 10:49:02 2019 From: jmellander at lbl.gov (Jim Mellander) Date: Mon, 6 May 2019 10:49:02 -0700 Subject: [Zeek] Using "dbl" instead of "num" in SumStats In-Reply-To: References: Message-ID: Hmmm, that SumStats::observe line doesn't seem quite correct. Generally, observations are in the form: SumStats::observe("foo", [$host=bar], [$dbl=val]); Assuming what you sent was just a typo, it would be interesting to know whether the same behavior is seen both in a cluster and standalone, as SumStats uses a different code path for those two cases. 
If only the cluster gives a different result (likely less than the manual count), then I would be concerned that not all cluster results are being received by the manager when it composes the results. Jim On Mon, May 6, 2019 at 9:56 AM Hui Lin (Hugo) wrote: > Hi Jim, > > I think the 'num' field seems like what I am looking for. However, when I > tried, it is different from the count that I manually made. Here is the > code that I used to count. As you can see, what I try to do is easy: > whenever an observation is received, I increase the value of a global > variable. However, when I print out through the epoch callback function, the > value is different from the one in 'num'. > > if (...) > { > total_res = total_res + 1; > SumStats::observe("dnp3 rtt", SumStats::Key(), > SumStats::Observation($dbl=latency)); > } > > > Best, > > Hugo > > On Mon, May 6, 2019 at 9:16 AM Jim Mellander wrote: > >> Hi Hugo: >> >> >> May I suggest a few things in SumStats? Maybe I missed something, I don't >> know how to directly obtain the number of data recorded in SumStats, so I >> need to declare another global variable to record that. It will be useful >> that we can directly know how many data are recorded so far. The reason >> that I need the number of records is to calculate the 95% or 99% confidence >> interval. It will be great that we can include them directly in SumStats as >> well. >> >> >> Each result record returned to epoch_result has a 'num' field, which is a >> count of the number of observations that made up that result - is that what >> you're looking for? If you're looking for a grand total of observations, I >> suppose they could be totalled up from the result records. >> >> Take care, >> >> Jim >> >> > > -- > Hui Lin > Ph.D. Candidate (http://hlin33.web.engr.illinois.edu/) > DEPEND (http://depend.csl.illinois.edu/) > ECE, Uni. of Illinois at Urbana-Champaign > > -------------- next part -------------- An HTML attachment was scrubbed... 
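[Editor's note] A toy model of why a manually kept global counter and the per-result 'num' field can diverge: SumStats results are computed per epoch window, so 'num' covers only the observations in that window, while the global counter accumulates forever. The sketch below assumes nothing about Zeek internals beyond that windowing; the timestamps and epoch length are made up:

```python
from collections import Counter

def epoch_nums(timestamps, epoch):
    # Per-epoch observation counts, keyed by the window's start time.
    # This mimics the per-window 'num' in each epoch_result.
    return Counter(int(t // epoch) * epoch for t in timestamps)

ts = [0.5, 1.2, 1.9, 2.1, 5.8]   # observation times in seconds
per_epoch = epoch_nums(ts, 2.0)  # resets each 2-second window
cumulative = len(ts)             # the manual global counter

print(sorted(per_epoch.items()))  # -> [(0.0, 3), (2.0, 1), (4.0, 1)]
print(cumulative)                 # -> 5
```

No per-window count ever reaches 5, and a later window's count can be lower than an earlier one's — matching the "num can decrease" observation later in the thread.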
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190506/60b55c22/attachment.html From hlin33 at illinois.edu Mon May 6 11:32:17 2019 From: hlin33 at illinois.edu (Hui Lin (Hugo)) Date: Mon, 6 May 2019 11:32:17 -0700 Subject: [Zeek] Using "dbl" instead of "num" in SumStats In-Reply-To: References: Message-ID: I am afraid that is not a typo. I copy-pasted from the documentation at https://docs.zeek.org/en/stable/frameworks/sumstats.html#examples. I think what I wrote is consistent with what you provided, except that I directly call the Key and Observation constructors for the second and third parameters. I am just using the standalone version to analyze a pcap. More interestingly, as the periodic epoch callback function prints out, the "num" field of the epoch result can decrease! On Mon, May 6, 2019 at 10:49 AM Jim Mellander wrote: > Hmmm, that SumStats::observe line doesn't seem quite correct. Generally, > observations are in the form: > > SumStats::observe("foo", [$host=bar], [$dbl=val]); > > Assuming what you sent was just a typo, it would be interesting to know > whether the same behavior is seen both in a cluster and standalone, as > SumStats uses a different code path for those two cases. If only the > cluster gives a different result (likely less than the manual count), then > I would be concerned that not all cluster results are being received > by the manager when it composes the results. > > Jim > > On Mon, May 6, 2019 at 9:56 AM Hui Lin (Hugo) wrote: > >> Hi Jim, >> >> I think the 'num' field seems like what I am looking for. However, when I >> tried, it is different from the count that I manually made. Here is the >> code that I used to count. As you can see, what I try to do is easy: >> whenever an observation is received, I increase the value of a global >> variable. However, when I print out through the epoch callback function, the >> value is different from the one in 'num'. >> >> if (...)
>> { >> total_res = total_res + 1; >> SumStats::observe("dnp3 rtt", SumStats::Key(), >> SumStats::Observation($dbl=latency)); >> } >> >> >> Best, >> >> Hugo >> >> On Mon, May 6, 2019 at 9:16 AM Jim Mellander wrote: >> >>> Hi Hugo: >>> >>> >>> May I suggest a few things in SumStats? Maybe I missed something, I >>> don't know how to directly obtain the number of data recorded in SumStats, >>> so I need to declare another global variable to record that. It will be >>> useful that we can directly know how many data are recorded by far. The >>> reason that I need the number of records is to calculate the 95% or 99% >>> confidence interval. It will be great that we can include them directly in >>> SumStats as well. >>> >>> >>> Each result record returned to epoch_result has a 'num' field, which is >>> a count of the number of observations that made up that result - is that >>> what you're looking for? If you're looking for a grand total of >>> observations, I suppose they could be totalled up from the result records. >>> >>> Take care, >>> >>> JIm >>> >>> >> >> -- >> Hui Lin >> Ph.D. Candidate (http://hlin33.web.engr.illinois.edu/) >> DEPEND (http://depend.csl.illinois.edu/) >> ECE, Uni. of Illinois at Urbana-Champaign >> >> -- Hui Lin Ph.D. Candidate (http://hlin33.web.engr.illinois.edu/) DEPEND (http://depend.csl.illinois.edu/) ECE, Uni. of Illinois at Urbana-Champaign -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190506/655def7b/attachment-0001.html From hlin33 at illinois.edu Mon May 6 11:53:18 2019 From: hlin33 at illinois.edu (Hui Lin (Hugo)) Date: Mon, 6 May 2019 11:53:18 -0700 Subject: [Zeek] Using "dbl" instead of "num" in SumStats In-Reply-To: References: Message-ID: Hi Jim, I think that I finally got it. The code is correct. But my interpretation is not. 
I think whatever calculation we apply on observations, e.g., average, sum, is for the data collected within that epoch only. So the 'num' field is the total number of observations within that period, while I recorded the accumulated total number of observations so far. Originally, I didn't like it, as I thought it would be convenient for me to have statistics on all data. However, it does give me some benefits. As I am using very low-end computers and switches for experiments, I can easily tell when the network becomes stable, e.g., having less packet loss, based on the RTT in each epoch. P.S. As I am now working as faculty and I have included Bro in my teaching, I think that SumStats is suitable for a class project as well. Best regards, Hui Lin On Mon, May 6, 2019 at 11:42 AM Lin, Hui wrote: > I am afraid that is not a typo. I copy-pasted from the documentation at > https://docs.zeek.org/en/stable/frameworks/sumstats.html#examples > . > I think what I wrote is consistent with what you provided, except that I > directly call the Key and Observation constructors for the second and third > parameters. I am just using the standalone version to analyze a pcap. More > interestingly, as the periodic epoch callback function prints out, the > "num" field of the epoch result can decrease! > > On Mon, May 6, 2019 at 10:49 AM Jim Mellander wrote: > >> Hmmm, that SumStats::observe line doesn't seem quite correct. Generally, >> observations are in the form: >> >> SumStats::observe("foo", [$host=bar], [$dbl=val]); >> >> Assuming what you sent was just a typo, it would be interesting to know >> whether the same behavior is seen both in a cluster and standalone, as >> SumStats uses a different code path for those two cases. If only the >> cluster gives a different result (likely less than the manual count), then >> I would be concerned that not all cluster results are being received >> by the manager when it composes the results. 
>> >> Jim >> >> On Mon, May 6, 2019 at 9:56 AM Hui Lin (Hugo) >> wrote: >> >>> Hi Jim, >>> >>> I think 'num' field seemed like what I am looking for. However, when I >>> tried, it is different from the count that I manually made. Here is the >>> codes that I used to count. As you can see, what I try to is easy, >>> whenever, an observation is received, I increase the value of a global >>> value. However, when I print out through epoch call back function, the >>> value is different from one in 'num'. >>> >>> if (...) >>> { >>> total_res = total_res + 1; >>> SumStats::observe("dnp3 rtt", SumStats::Key(), >>> SumStats::Observation($dbl=latency)); >>> } >>> >>> >>> Best, >>> >>> Hugo >>> >>> On Mon, May 6, 2019 at 9:16 AM Jim Mellander wrote: >>> >>>> Hi Hugo: >>>> >>>> >>>> May I suggest a few things in SumStats? Maybe I missed something, I >>>> don't know how to directly obtain the number of data recorded in SumStats, >>>> so I need to declare another global variable to record that. It will be >>>> useful that we can directly know how many data are recorded by far. The >>>> reason that I need the number of records is to calculate the 95% or 99% >>>> confidence interval. It will be great that we can include them directly in >>>> SumStats as well. >>>> >>>> >>>> Each result record returned to epoch_result has a 'num' field, which is >>>> a count of the number of observations that made up that result - is that >>>> what you're looking for? If you're looking for a grand total of >>>> observations, I suppose they could be totalled up from the result records. >>>> >>>> Take care, >>>> >>>> JIm >>>> >>>> >>> >>> -- >>> Hui Lin >>> Ph.D. Candidate (http://hlin33.web.engr.illinois.edu/) >>> DEPEND (http://depend.csl.illinois.edu/) >>> ECE, Uni. of Illinois at Urbana-Champaign >>> >>> > > -- > Hui Lin > Ph.D. Candidate (http://hlin33.web.engr.illinois.edu/) > DEPEND (http://depend.csl.illinois.edu/) > ECE, Uni. of Illinois at Urbana-Champaign > > -- Hui Lin Ph.D. 
Candidate (http://hlin33.web.engr.illinois.edu/) DEPEND (http://depend.csl.illinois.edu/) ECE, Uni. of Illinois at Urbana-Champaign -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190506/1f189143/attachment.html From mkg at vt.edu Tue May 7 06:24:36 2019 From: mkg at vt.edu (Mark Gardner) Date: Tue, 7 May 2019 09:24:36 -0400 Subject: [Zeek] setcap plugin failing Message-ID: I can't figure out how to debug this issue of the setcap plugin failing: zeek at zeekmgr:~$ broctl install ... setcap plugin: executing setcap on each node: 10.0.1.12 - Executing setcap: FAIL: ... Details:: OS: Debian9 Zeek: v2.6.1 installed from source into /usr/local/bro Plugins: af_packet installed from source and PingTrip/broctl-setcap setcap.py file installed by hand into /usr/local/bro/lib/broctl/plugins. The following is appended to the bottom /usr/local/bro/etc/broctl.cfg: # Configure broctl-setcap plugin setcap.enabled=1 setcap.command=sudo /sbin/setcap cap_net_raw+eip /usr/local/bro/bin/bro && sudo /sbin/setcap cap_net_raw+eip /usr/local/bro/bin/capstats And this to /etc/sudoers.d/zeek on each of the sensors: Cmnd_Alias BRO_SETCAP = /sbin/setcap cap_net_raw+eip /usr/local/bro/bin/bro Cmnd_Alias CAPSTATS_SETCAP = /sbin/setcap cap_net_raw+eip /usr/local/bro/bin/capstats bro ALL=NOPASSWD: BRO_SETCAP, CAPSTATS_SETCAP Defaults!/sbin/setcap !requiretty Any ideas what to check to see what is going wrong? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190507/73b55c3a/attachment.html From mkg at vt.edu Tue May 7 06:32:52 2019 From: mkg at vt.edu (Mark Gardner) Date: Tue, 7 May 2019 09:32:52 -0400 Subject: [Zeek] Large af_packet buffer size == missing logs Message-ID: In an effort to reduce capture loss, the af_packet buffer size was increased from the default to 2GB in node.cfg using "af_packet_buffer_size=2*1024*1024*1024". The capture loss afterwards was zero but many of the other logs also went missing, including conn.log. Going to 1GB with "af_packet_buffer_size=1*1024*1024*1024" and the missing logs started being collected again. The capture loss, while better, was still up to 10%. Choosing the middle with 1.5GB via "af_packet_buffer_size=1536*1024*1024" (seems it has to be integer calculations) and several of the logs including conn.log went missing again. The sensors all have 128 GB RAM for only 15 workers so memory should not be an issue. But it seems something goes wrong while trying to utilize the wealth of RAM. Any idea what I am doing wrong? -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190507/c8645f9d/attachment.html From justin at corelight.com Tue May 7 06:52:32 2019 From: justin at corelight.com (Justin Azoff) Date: Tue, 7 May 2019 09:52:32 -0400 Subject: [Zeek] setcap plugin failing In-Reply-To: References: Message-ID: Is the account you are using zeek or bro? your prompt says zeek but the sudoers file says bro. On Tue, May 7, 2019 at 9:35 AM Mark Gardner wrote: > > I can't figure out how to debug this issue of the setcap plugin failing: > > zeek at zeekmgr:~$ broctl install > ... > setcap plugin: executing setcap on each node: > 10.0.1.12 - Executing setcap: FAIL: > ... 
> > Details:: > > OS: Debian9 > Zeek: v2.6.1 installed from source into /usr/local/bro > Plugins: af_packet installed from source and PingTrip/broctl-setcap setcap.py file installed by hand into /usr/local/bro/lib/broctl/plugins. > > The following is appended to the bottom /usr/local/bro/etc/broctl.cfg: > # Configure broctl-setcap plugin > setcap.enabled=1 > setcap.command=sudo /sbin/setcap cap_net_raw+eip /usr/local/bro/bin/bro && sudo /sbin/setcap cap_net_raw+eip /usr/local/bro/bin/capstats > > And this to /etc/sudoers.d/zeek on each of the sensors: > Cmnd_Alias BRO_SETCAP = /sbin/setcap cap_net_raw+eip /usr/local/bro/bin/bro > Cmnd_Alias CAPSTATS_SETCAP = /sbin/setcap cap_net_raw+eip /usr/local/bro/bin/capstats > bro ALL=NOPASSWD: BRO_SETCAP, CAPSTATS_SETCAP > Defaults!/sbin/setcap !requiretty > > Any ideas what to check to see what is going wrong? > _______________________________________________ > Zeek mailing list > zeek at zeek.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/zeek -- Justin From mkg at vt.edu Tue May 7 07:07:48 2019 From: mkg at vt.edu (Mark Gardner) Date: Tue, 7 May 2019 10:07:48 -0400 Subject: [Zeek] setcap plugin failing In-Reply-To: References: Message-ID: On Tue, May 7, 2019 at 9:53 AM Justin Azoff wrote: > Is the account you are using zeek or bro? your prompt says zeek but > the sudoers file says bro. > Thanks Justin. I stared at those configs for a long time and never noticed the problem. That should fix it. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190507/1d9eebd2/attachment.html From akgraner at corelight.com Tue May 7 10:08:56 2019 From: akgraner at corelight.com (Amber Graner) Date: Tue, 7 May 2019 12:08:56 -0500 Subject: [Zeek] Open Source Zeek Leadership Team (LT) Meeting Minutes - 3 May 2019 Message-ID: Hi all, The LT meets every two weeks. Below is the link to the 3 May 2019 Meeting Minutes. 
https://blog.zeek.org/2019/05/open-source-zeek-leadership-team.html Let me know if you have any questions, comments, or feedback for the team. Thanks, ~Amber -- *Amber Graner* Director of Community Corelight, Inc 828.582.9469 * Ask me about how you can participate in the Zeek (formerly Bro) community. * Remember - ZEEK AND YOU SHALL FIND!! -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190507/a44d91dc/attachment.html From tscheponik at gmail.com Tue May 7 12:36:55 2019 From: tscheponik at gmail.com (Woot4moo) Date: Tue, 7 May 2019 15:36:55 -0400 Subject: [Zeek] Write logs from worker node directly Message-ID: Is it possible to write logs directly from the worker node instead of making a call out to a logging node? -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190507/2863e6fd/attachment.html From justin at corelight.com Wed May 8 07:12:16 2019 From: justin at corelight.com (Justin Azoff) Date: Wed, 8 May 2019 10:12:16 -0400 Subject: [Zeek] Write logs from worker node directly In-Reply-To: References: Message-ID: I believe redef Log::enable_remote_logging = F; will do that. Not sure if rotation works by default so you might have a little more work to do if you are logging to files. If you're using something like the Kafka log writer then it should just work On Tue, May 7, 2019 at 3:39 PM Woot4moo wrote: > > Is it possible to write logs directly from the worker node instead of making a call out to a logging node? 
> _______________________________________________ > Zeek mailing list > zeek at zeek.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/zeek -- Justin From asharma at lbl.gov Wed May 8 08:19:08 2019 From: asharma at lbl.gov (Aashish Sharma) Date: Wed, 8 May 2019 08:19:08 -0700 Subject: [Zeek] Write logs from worker node directly In-Reply-To: References: Message-ID: <8972AFA2-7037-4410-BAD3-E643C6B02A09@lbl.gov> I am curious, why do you want to do this. What would be your use-case ? Aashish > On May 7, 2019, at 12:36 PM, Woot4moo wrote: > > Is it possible to write logs directly from the worker node instead of making a call out to a logging node? > _______________________________________________ > Zeek mailing list > zeek at zeek.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/zeek From tscheponik at gmail.com Wed May 8 17:35:50 2019 From: tscheponik at gmail.com (Woot4moo) Date: Wed, 8 May 2019 20:35:50 -0400 Subject: [Zeek] Write logs from worker node directly In-Reply-To: References: Message-ID: Appreciated will investigate On Wed, May 8, 2019 at 10:12 AM Justin Azoff wrote: > I believe > > redef Log::enable_remote_logging = F; > > will do that. Not sure if rotation works by default so you might have > a little more work to do if you are logging to files. If you're using > something like the Kafka log writer then it should just work > > On Tue, May 7, 2019 at 3:39 PM Woot4moo wrote: > > > > Is it possible to write logs directly from the worker node instead of > making a call out to a logging node? > > _______________________________________________ > > Zeek mailing list > > zeek at zeek.org > > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/zeek > > > > -- > Justin > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190508/68f6d90c/attachment.html From tscheponik at gmail.com Wed May 8 17:36:52 2019 From: tscheponik at gmail.com (Woot4moo) Date: Wed, 8 May 2019 20:36:52 -0400 Subject: [Zeek] Write logs from worker node directly In-Reply-To: <8972AFA2-7037-4410-BAD3-E643C6B02A09@lbl.gov> References: <8972AFA2-7037-4410-BAD3-E643C6B02A09@lbl.gov> Message-ID: Exploring different tuning options. Specifically, is there any benefit / detriment to writing logs locally thereby removing the additional call to another node. On Wed, May 8, 2019 at 11:19 AM Aashish Sharma wrote: > I am curious, why do you want to do this. What would be your use-case ? > > Aashish > > > On May 7, 2019, at 12:36 PM, Woot4moo wrote: > > > > Is it possible to write logs directly from the worker node instead of > making a call out to a logging node? > > _______________________________________________ > > Zeek mailing list > > zeek at zeek.org > > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/zeek > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190508/b0a6bfa1/attachment.html From tscheponik at gmail.com Fri May 10 13:54:07 2019 From: tscheponik at gmail.com (Woot4moo) Date: Fri, 10 May 2019 16:54:07 -0400 Subject: [Zeek] Minimal packets to trigger events Message-ID: I am in the process of covering my team's feature set and we are using Behave (Python) to generate reports. Is there a collection of minimal PCAPs that the community maintains / scapy scripts to generate minimal PCAPs to trigger the events that Zeek supports? For example to trigger the "ssh_server_version(...)" event [ https://docs.zeek.org/en/stable/scripts/base/bif/plugins/Bro_SSH.events.bif.bro.html#id-ssh_server_version] it requires 4 packets (TCP handshake + 1 additional packet) -------------- next part -------------- An HTML attachment was scrubbed... 
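[Editor's note] For generating minimal trigger PCAPs without scapy, a handshake-plus-banner capture can be built with only the Python standard library. This is a sketch: the addresses, ports, MAC bytes, and banner are illustrative, IP/TCP checksums are left at zero (so replay with `bro -C` / `zeek -C` to ignore checksums), and whether this minimal exchange is enough to fire `ssh_server_version` has not been verified here:

```python
import struct

def pcap_header():
    # Classic pcap global header: magic, version 2.4, tz 0, sigfigs 0,
    # snaplen 65535, linktype 1 (Ethernet).
    return struct.pack("<IHHiIII", 0xA1B2C3D4, 2, 4, 0, 0, 65535, 1)

def record(ts_sec, frame):
    # Per-packet record header: ts_sec, ts_usec, incl_len, orig_len.
    return struct.pack("<IIII", ts_sec, 0, len(frame), len(frame)) + frame

def tcp_frame(src, dst, sport, dport, seq, ack, flags, payload=b""):
    eth = b"\xaa" * 6 + b"\xbb" * 6 + struct.pack("!H", 0x0800)
    ip = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40 + len(payload),
                     1, 0, 64, 6, 0, src, dst)   # IP checksum left at 0
    tcp = struct.pack("!HHIIBBHHH", sport, dport, seq, ack, 0x50,
                      flags, 8192, 0, 0)         # TCP checksum left at 0
    return eth + ip + tcp + payload

cli, srv = bytes([10, 0, 0, 1]), bytes([10, 0, 0, 2])
frames = [
    tcp_frame(cli, srv, 40000, 22, 0, 0, 0x02),  # SYN
    tcp_frame(srv, cli, 22, 40000, 0, 1, 0x12),  # SYN/ACK
    tcp_frame(cli, srv, 40000, 22, 1, 1, 0x10),  # ACK
    tcp_frame(srv, cli, 22, 40000, 1, 1, 0x18,   # PSH/ACK: server banner
              b"SSH-2.0-OpenSSH_7.4\r\n"),
]
blob = pcap_header() + b"".join(
    record(1557500000 + i, f) for i, f in enumerate(frames))
with open("ssh-banner.pcap", "wb") as fh:
    fh.write(blob)
```

Each event under test then gets its own small generator like this, which keeps the Behave fixtures reproducible without shipping binary pcaps.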
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190510/3a25a12b/attachment.html From huzhenming36 at gmail.com Sat May 11 05:54:06 2019 From: huzhenming36 at gmail.com (huzhenming36 at gmail.com) Date: Sat, 11 May 2019 20:54:06 +0800 Subject: [Zeek] Vpn use behavior detection Message-ID: <201905112054038342159@gmail.com> Hi zeek's friends I want to analyze the traffic to determine which person on the local area network is using the VPN. So what method should I use to achieve this? Please help, thank you very much. huzhenming36 at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190511/bb1832d0/attachment.html From justin at corelight.com Mon May 13 12:33:48 2019 From: justin at corelight.com (Justin Azoff) Date: Mon, 13 May 2019 15:33:48 -0400 Subject: [Zeek] Minimal packets to trigger events In-Reply-To: References: Message-ID: Perhaps not minimal in all cases, but the test suite is full of pcaps. Take a look at https://github.com/zeek/zeek/tree/master/testing/btest/Traces On Fri, May 10, 2019 at 5:02 PM Woot4moo wrote: > > I am in the process of covering my team's feature set and we are using Behave (Python) to generate reports. Is there a collection of minimal PCAPs that the community maintains / scapy scripts to generate minimal PCAPs to trigger the events that Zeek supports? 
> For example to trigger the "ssh_server_version(...)" event [https://docs.zeek.org/en/stable/scripts/base/bif/plugins/Bro_SSH.events.bif.bro.html#id-ssh_server_version] it requires 4 packets (TCP handshake + 1 additional packet) > > > _______________________________________________ > Zeek mailing list > zeek at zeek.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/zeek -- Justin From tscheponik at gmail.com Tue May 14 13:39:03 2019 From: tscheponik at gmail.com (Woot4moo) Date: Tue, 14 May 2019 16:39:03 -0400 Subject: [Zeek] =?utf-8?q?Compile_with_=E2=80=94enable-coverage?= Message-ID: I compiled Zeek 2.6.1 from source today with the following lines: ./configure --enable-coverage make sudo make install I did not notice any differences from running bro --help. Nor was I able to get any new outputs when running btest as described here: https://github.com/zeek/zeek/blob/master/testing/coverage/README My understanding is that this is a way to check coverage of Zeek scripts (not the C++ source). Is my understanding correct? If it is correct, what configuration / context am I missing to get a proper code coverage report? -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190514/f89ca780/attachment.html From jsiwek at corelight.com Tue May 14 17:07:16 2019 From: jsiwek at corelight.com (Jon Siwek) Date: Tue, 14 May 2019 17:07:16 -0700 Subject: [Zeek] =?utf-8?q?Compile_with_=E2=80=94enable-coverage?= In-Reply-To: References: Message-ID: On Tue, May 14, 2019 at 1:41 PM Woot4moo wrote: > https://github.com/zeek/zeek/blob/master/testing/coverage/README > > My understanding is that this is a way to check coverage of Zeek scripts (not the C++ source). Is my understanding correct? That's C++ coverage, not Zeek script coverage. 
Getting Zeek script coverage is a two-step process: 1) Run bro with the BRO_PROFILER_FILE environment variable set to a file path; adding 'X' characters to the file name causes them to be randomized (in case you are running bro multiple times and need to aggregate coverage across runs). Here's how we set this in our unit test suite: https://github.com/zeek/zeek/blob/6ad7099f7e4828cff893c13fe32855f947487258/testing/btest/btest.cfg#L24 Those files simply track how many times a given statement was executed. 2) Run a script over the results in step (1) to aggregate and calculate coverage. Here is the script we run for our test suite coverage calculation: https://github.com/zeek/zeek/blob/6ad7099f7e4828cff893c13fe32855f947487258/testing/scripts/coverage-calc And how it is invoked: https://github.com/zeek/zeek/blob/6ad7099f7e4828cff893c13fe32855f947487258/testing/btest/Makefile#L18-L19 - Jon From tscheponik at gmail.com Tue May 14 17:21:50 2019 From: tscheponik at gmail.com (Woot4moo) Date: Tue, 14 May 2019 20:21:50 -0400 Subject: [Zeek] =?utf-8?q?Compile_with_=E2=80=94enable-coverage?= In-Reply-To: References: Message-ID: Thanks for those details. I will execute these steps in the morning and follow up with additional questions. On Tue, May 14, 2019 at 8:07 PM Jon Siwek wrote: > On Tue, May 14, 2019 at 1:41 PM Woot4moo wrote: > > > https://github.com/zeek/zeek/blob/master/testing/coverage/README > > > > My understanding is that this is a way to check Zeek scripts (not the > C++ source) to check coverage. Is my understanding correct? > > That's C++ coverage, not Zeek script coverage. > > Getting Zeek script coverage is a two-step process: > > 1) Run bro with the BRO_PROFILER_FILE environment variable set to a > file path; adding 'X' characters to the file name causes them to be > randomized (in case you are running bro multiple times and need to > aggregate coverage across runs). 
Here's how we set this in our unit > test suite: > > > https://github.com/zeek/zeek/blob/6ad7099f7e4828cff893c13fe32855f947487258/testing/btest/btest.cfg#L24 > > Those files simply track how many times a given statement was executed. > > 2) Run a script over the results in step (1) to aggregate and > calculate coverage. > > Here is the script we run for our test suite coverage calculation: > > > https://github.com/zeek/zeek/blob/6ad7099f7e4828cff893c13fe32855f947487258/testing/scripts/coverage-calc > > And how it is invoked: > > > https://github.com/zeek/zeek/blob/6ad7099f7e4828cff893c13fe32855f947487258/testing/btest/Makefile#L18-L19 > > - Jon > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190514/fb15f2cf/attachment.html From gary.w.weasel2.civ at mail.mil Wed May 15 07:57:33 2019 From: gary.w.weasel2.civ at mail.mil (Weasel, Gary W CIV DISA RE (US)) Date: Wed, 15 May 2019 14:57:33 +0000 Subject: [Zeek] ESP Traffic Message-ID: <0C34D9CA9B9DBB45B1C51871C177B4B291C661A8@UMECHPA68.easf.csd.disa.mil> All, Is there anyone who has written a plugin or script to make zeek create a conn log for any ESP traffic over IPv4? As far as I can tell, zeek seems to turn a blind eye to that traffic and not report on it at all otherwise. - Gary From tscheponik at gmail.com Wed May 15 08:16:48 2019 From: tscheponik at gmail.com (Woot4moo) Date: Wed, 15 May 2019 11:16:48 -0400 Subject: [Zeek] =?utf-8?q?Compile_with_=E2=80=94enable-coverage?= In-Reply-To: References: Message-ID: Appreciate the help on this. I think I need to slightly modify the script to only evaluate my custom .bro / .zeek files. Which seem to be properly reporting out some lines as being executed. Before I go down this path, is there some parameter I should pass instead to only scan my script directory? 
For example: /home -bro -my_scripts - - .tmp/script-coverage On Tue, May 14, 2019 at 8:21 PM Woot4moo wrote: > Thanks for those details. I will execute these steps in the morning and > follow up with additional questions. > > On Tue, May 14, 2019 at 8:07 PM Jon Siwek wrote: > >> On Tue, May 14, 2019 at 1:41 PM Woot4moo wrote: >> >> > https://github.com/zeek/zeek/blob/master/testing/coverage/README >> > >> > My understanding is that this is a way to check Zeek scripts (not the >> C++ source) to check coverage. Is my understanding correct? >> >> That's C++ coverage, not Zeek script coverage. >> >> Getting Zeek script coverage is a two step process: >> >> 1) Run bro with the BRO_PROFILER_FILE environment variable set to a >> file path, adding 'X' characters to the file name cause them to be >> randomized (in case you are running bro multiple times and need to >> aggregate coverage across runs). Here's how we set this in our unit >> test suite: >> >> >> https://github.com/zeek/zeek/blob/6ad7099f7e4828cff893c13fe32855f947487258/testing/btest/btest.cfg#L24 >> >> Those files simply track how many times a given statement was executed. >> >> 2) Run a script over the results in step (1) to aggregate and >> calculate coverage. >> >> Here is the script we run for our test suite coverage calculation: >> >> >> https://github.com/zeek/zeek/blob/6ad7099f7e4828cff893c13fe32855f947487258/testing/scripts/coverage-calc >> >> And how it is invoked: >> >> >> https://github.com/zeek/zeek/blob/6ad7099f7e4828cff893c13fe32855f947487258/testing/btest/Makefile#L18-L19 >> >> - Jon >> > -------------- next part -------------- An HTML attachment was scrubbed... 
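Jon's two-step workflow (per-run profile files that record how often each statement executed, then an aggregation pass) can be sketched roughly in Python. This is only an illustrative sketch: the tab-separated "count<TAB>file:line" record format and the `aggregate_coverage` helper are invented for the example and do not reflect the actual BRO_PROFILER_FILE output format or the coverage-calc script.

```python
import glob
import os
from collections import defaultdict

def aggregate_coverage(profile_glob, script_dir):
    """Merge per-run statement counts and compute coverage for one directory.

    Hypothetical input format: one "count<TAB>file:line" record per line.
    The real profiler output differs; this only illustrates the aggregation idea.
    """
    hits = defaultdict(int)
    base = os.path.abspath(script_dir)
    for path in glob.glob(profile_glob):
        with open(path) as f:
            for record in f:
                count, location = record.rstrip("\n").split("\t")
                src = os.path.abspath(location.rsplit(":", 1)[0])
                # Only keep statements from scripts under script_dir,
                # so coverage is limited to your own scripts.
                if src.startswith(base + os.sep):
                    hits[location] += int(count)
    covered = sum(1 for c in hits.values() if c > 0)
    return covered, len(hits)
```

Randomized file names (the 'X' characters) yield one profile file per run; globbing over them and summing the counts is what makes multi-run aggregation possible.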
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190515/bf6dd5ff/attachment.html From jsiwek at corelight.com Wed May 15 10:23:50 2019 From: jsiwek at corelight.com (Jon Siwek) Date: Wed, 15 May 2019 10:23:50 -0700 Subject: [Zeek] =?utf-8?q?Compile_with_=E2=80=94enable-coverage?= In-Reply-To: References: Message-ID: On Wed, May 15, 2019 at 8:17 AM Woot4moo wrote: > > Appreciate the help on this. I think I need to slightly modify the script to only evaluate my custom .bro / .zeek files. Which seem to be properly reporting out some lines as being executed. > > Before I go down this path, is there some parameter I should pass instead to only scan my script directory? For example: Looks like the last parameter of coverage-calc is a directory containing the scripts you want the coverage calculated against, so may try pointing that to your own custom bro/zeek scripts instead of the ones shipped in the our source tree. Generally that script was meant for our own test suite usage, so may need to be modified for your use, but the data produced by running with BRO_PROFILER_FILE is what any coverage calculation could be based on. - Jon From greg.grasmehr at caltech.edu Wed May 15 11:14:27 2019 From: greg.grasmehr at caltech.edu (Greg Grasmehr) Date: Wed, 15 May 2019 11:14:27 -0700 Subject: [Zeek] Zeek Myricom port aggregation Message-ID: <20190515181427.GA15954@dakine> Hello, Hoping someone has some insight into whatever I am doing wrong as try as I might, I can't seem to get the Myricom plugin working if configured to aggregate port data. 
Zeek starts and then crashes in every case, regardless of configuration ie interface=myricom::3 interface=myricom::* and snf_aggregate = T Here is related dmesg output logged by kdump [67471.838822] BUG: unable to handle kernel paging request at 00007f0d8459607f [67471.838863] IP: [] snf_eop_ioctl+0x609/0xc60 [myri_snf] [67471.838897] PGD 8000000a93bb9067 PUD 12d142c067 PMD 12d142d067 PTE 8000001d54829025 [67471.838927] Oops: 0001 [#1] SMP [67471.838942] Modules linked in: binfmt_misc macsec tcp_diag udp_diag inet_diag unix_diag af_packet_diag netlink_diag myri_snf(OE) mpt2sas raid_class scsi_transport_sas mptctl mptbase ip6t_rpfilter ipt_REJECT nf_reject_ipv4 nf_log_ipv4 ip6t_REJECT nf_reject_ipv6 nf_log_ipv6 nf_log_common xt_LOG xt_conntrack ip_set nfnetlink ebtable_nat ebtable_broute bridge stp llc ip6table_nat nf_conntrack_ipv6 nf_defrag_ipv6 nf_nat_ipv6 ip6table_mangle ip6table_security ip6table_raw iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack iptable_mangle iptable_security iptable_raw ebtable_filter ebtables ip6table_filter ip6_tables iptable_filter dell_rbu sunrpc dcdbas iTCO_wdt iTCO_vendor_support sb_edac intel_powerclamp coretemp intel_rapl iosf_mbi kvm_intel kvm irqbypass crc32_pclmul joydev [67471.839241] ghash_clmulni_intel aesni_intel lrw gf128mul glue_helper ablk_helper cryptd mxm_wmi ext4 mbcache jbd2 pcspkr ipmi_ssif mei_me lpc_ich mei sg ipmi_si ipmi_devintf ipmi_msghandler wmi acpi_power_meter ip_tables xfs libcrc32c sd_mod crc_t10dif crct10dif_generic mgag200 i2c_algo_bit drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops ttm drm crct10dif_pclmul crct10dif_common crc32c_intel drm_panel_orientation_quirks ahci libahci dca libata tg3 megaraid_sas ptp pps_core dm_mirror dm_region_hash dm_log dm_mod [last unloaded: myri10ge] [67471.839450] CPU: 24 PID: 92952 Comm: bro Kdump: loaded Tainted: G OE ------------ 3.10.0-957.10.1.el7.x86_64 #1 [67471.839483] Hardware name: Dell Inc. 
PowerEdge R730xd/072T6D, BIOS 2.9.1 12/04/2018 [67471.839508] task: ffff95d0e7c41040 ti: ffff95e3197c0000 task.ti: ffff95e3197c0000 [67471.839531] RIP: 0010:[] [] snf_eop_ioctl+0x609/0xc60 [myri_snf] [67471.839564] RSP: 0018:ffff95e3197c3d38 EFLAGS: 00010006 [67471.839583] RAX: 0000000000000286 RBX: 0000000000000001 RCX: 0000000000000000 [67471.839605] RDX: ffff95d0526253d0 RSI: 00007f0d84596000 RDI: ffffb70f589ba7f8 [67471.839627] RBP: ffff95e3197c3df8 R08: ffffb70f599bb000 R09: 0000000000000003 [67471.839648] R10: 0000000000000000 R11: 0000000000000000 R12: ffff95d052625000 [67471.839670] R13: 00007ffeb542d710 R14: 00007ffeb542d710 R15: 0000000000000000 [67471.839693] FS: 00007f180d6a7900(0000) GS:ffff95eefe900000(0000) knlGS:0000000000000000 [67471.839717] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [67471.839735] CR2: 00007f0d8459607f CR3: 0000001ff663c000 CR4: 00000000003607e0 [67471.839757] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 [67471.839778] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 [67471.839800] Call Trace: [67471.839818] [] ? down_read+0x12/0x40 [67471.839840] [] mx_common_ioctl+0x40/0x90 [myri_snf] [67471.839865] [] mx_ioctl+0x72/0x290 [myri_snf] [67471.839888] [] do_vfs_ioctl+0x3a0/0x5a0 [67471.839908] [] ? 
__do_page_fault+0x228/0x500 [67471.839928] [] SyS_ioctl+0xa1/0xc0 [67471.839947] [] system_call_fastpath+0x22/0x27 [67471.839966] Code: d3 e6 44 85 ce 74 e1 48 83 bf b8 00 00 00 00 75 d1 4c 8b 87 c0 00 00 00 4c 63 d9 41 8b 70 04 48 c1 e6 09 4b 03 b4 dc c0 06 00 00 <0f> b6 76 7f 41 39 30 75 b4 4c 89 a7 b8 00 00 00 49 89 bc 24 60 [67471.840084] RIP [] snf_eop_ioctl+0x609/0xc60 [myri_snf] [67471.840112] RSP [67471.840125] CR2: 00007f0d8459607f From tscheponik at gmail.com Wed May 15 12:04:59 2019 From: tscheponik at gmail.com (Woot4moo) Date: Wed, 15 May 2019 15:04:59 -0400 Subject: [Zeek] =?utf-8?q?Compile_with_=E2=80=94enable-coverage?= In-Reply-To: References: Message-ID: I was able to get it working by replacing line 29 in coverage-calc: filepath = os.path.normpath(filepath) With filepath = os.path.abspath(filepath) This allows for the Boolean check to succeed and processing to continue with processing only my scripts. On Wed, May 15, 2019 at 1:24 PM Jon Siwek wrote: > On Wed, May 15, 2019 at 8:17 AM Woot4moo wrote: > > > > Appreciate the help on this. I think I need to slightly modify the > script to only evaluate my custom .bro / .zeek files. Which seem to be > properly reporting out some lines as being executed. > > > > Before I go down this path, is there some parameter I should pass > instead to only scan my script directory? For example: > > Looks like the last parameter of coverage-calc is a directory > containing the scripts you want the coverage calculated against, so > may try pointing that to your own custom bro/zeek scripts instead of > the ones shipped in the our source tree. > > Generally that script was meant for our own test suite usage, so may > need to be modified for your use, but the data produced by running > with BRO_PROFILER_FILE is what any coverage calculation could be based > on. > > - Jon > -------------- next part -------------- An HTML attachment was scrubbed... 
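The reason the one-line normpath-to-abspath change above works can be shown in isolation: `os.path.normpath` only cleans a path lexically and leaves relative paths relative, so a `startswith()` prefix check against an absolute scripts directory can never succeed, while `os.path.abspath` first anchors the path at the current working directory. A minimal sketch (`under_dir` is an illustrative helper, not code from coverage-calc):

```python
import os

def under_dir(filepath, base_dir):
    """True if filepath resolves to a location under base_dir.

    normpath would only collapse "." and ".." segments; abspath also
    anchors relative paths at the current working directory, which is
    what makes the startswith() prefix comparison reliable.
    """
    resolved = os.path.abspath(filepath)
    base = os.path.abspath(base_dir)
    return resolved.startswith(base + os.sep)

# normpath keeps a relative path relative, so it cannot match an
# absolute prefix such as "/path/to/scripts":
relative = os.path.normpath("scripts/./main.zeek")
```

With normpath, the boolean check ends up comparing a relative path against an absolute directory and silently filters everything out; abspath makes the two comparable.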
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190515/bc48386d/attachment.html From justin at corelight.com Wed May 15 14:35:56 2019 From: justin at corelight.com (Justin Azoff) Date: Wed, 15 May 2019 17:35:56 -0400 Subject: [Zeek] Zeek Myricom port aggregation In-Reply-To: <20190515181427.GA15954@dakine> References: <20190515181427.GA15954@dakine> Message-ID: That looks like a bug in the Myricom driver and not Zeek. Can you reproduce the same kernel issue using tcpdump? You configure aggregation for that using SNF_FLAGS: SNF_FLAGS=0x2 (Port aggregation (or merging)) Flag 0x2 says that the port number that is passed to an application is actually a mask of ports, not just one port. For example, when using tcpdump: export SNF_FLAGS=0x2 env SNF_FLAGS=0x2 /path/to/tcpdump -i snf3 Without SNF_FLAGS=0x2, you would actually try to open snf port 3 (which may not exist if you only have one adapter). It's possible that you don't need to use aggregation in the first place. That is generally only needed if you are connecting a fiber tap directly into a card. On Wed, May 15, 2019 at 2:17 PM Greg Grasmehr wrote: > > Hello, > > Hoping someone has some insight into whatever I am doing wrong as try as > I might, I can't seem to get the Myricom plugin working if configured to > aggregate port data. 
Zeek starts and then crashes in every case, > regardless of configuration ie > > interface=myricom::3 > interface=myricom::* > > and snf_aggregate = T > > Here is related dmesg output logged by kdump > > [67471.838822] BUG: unable to handle kernel paging request at 00007f0d8459607f > [67471.838863] IP: [] snf_eop_ioctl+0x609/0xc60 [myri_snf] > [67471.838897] PGD 8000000a93bb9067 PUD 12d142c067 PMD 12d142d067 PTE 8000001d54829025 > [67471.838927] Oops: 0001 [#1] SMP > [67471.838942] Modules linked in: binfmt_misc macsec tcp_diag udp_diag inet_diag unix_diag af_packet_diag netlink_diag myri_snf(OE) mpt2sas raid_class scsi_transport_sas mptctl mptbase ip6t_rpfilter ipt_REJECT nf_reject_ipv4 nf_log_ipv4 ip6t_REJECT nf_reject_ipv6 nf_log_ipv6 nf_log_common xt_LOG xt_conntrack ip_set nfnetlink ebtable_nat ebtable_broute bridge stp llc ip6table_nat nf_conntrack_ipv6 nf_defrag_ipv6 nf_nat_ipv6 ip6table_mangle ip6table_security ip6table_raw iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack iptable_mangle iptable_security iptable_raw ebtable_filter ebtables ip6table_filter ip6_tables iptable_filter dell_rbu sunrpc dcdbas iTCO_wdt iTCO_vendor_support sb_edac intel_powerclamp coretemp intel_rapl iosf_mbi kvm_intel kvm irqbypass crc32_pclmul joydev > [67471.839241] ghash_clmulni_intel aesni_intel lrw gf128mul glue_helper ablk_helper cryptd mxm_wmi ext4 mbcache jbd2 pcspkr ipmi_ssif mei_me lpc_ich mei sg ipmi_si ipmi_devintf ipmi_msghandler wmi acpi_power_meter ip_tables xfs libcrc32c sd_mod crc_t10dif crct10dif_generic mgag200 i2c_algo_bit drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops ttm drm crct10dif_pclmul crct10dif_common crc32c_intel drm_panel_orientation_quirks ahci libahci dca libata tg3 megaraid_sas ptp pps_core dm_mirror dm_region_hash dm_log dm_mod [last unloaded: myri10ge] > [67471.839450] CPU: 24 PID: 92952 Comm: bro Kdump: loaded Tainted: G OE ------------ 3.10.0-957.10.1.el7.x86_64 #1 > [67471.839483] Hardware name: Dell 
Inc. PowerEdge R730xd/072T6D, BIOS 2.9.1 12/04/2018 > [67471.839508] task: ffff95d0e7c41040 ti: ffff95e3197c0000 task.ti: ffff95e3197c0000 > [67471.839531] RIP: 0010:[] [] snf_eop_ioctl+0x609/0xc60 [myri_snf] > [67471.839564] RSP: 0018:ffff95e3197c3d38 EFLAGS: 00010006 > [67471.839583] RAX: 0000000000000286 RBX: 0000000000000001 RCX: 0000000000000000 > [67471.839605] RDX: ffff95d0526253d0 RSI: 00007f0d84596000 RDI: ffffb70f589ba7f8 > [67471.839627] RBP: ffff95e3197c3df8 R08: ffffb70f599bb000 R09: 0000000000000003 > [67471.839648] R10: 0000000000000000 R11: 0000000000000000 R12: ffff95d052625000 > [67471.839670] R13: 00007ffeb542d710 R14: 00007ffeb542d710 R15: 0000000000000000 > [67471.839693] FS: 00007f180d6a7900(0000) GS:ffff95eefe900000(0000) knlGS:0000000000000000 > [67471.839717] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 > [67471.839735] CR2: 00007f0d8459607f CR3: 0000001ff663c000 CR4: 00000000003607e0 > [67471.839757] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 > [67471.839778] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 > [67471.839800] Call Trace: > [67471.839818] [] ? down_read+0x12/0x40 > [67471.839840] [] mx_common_ioctl+0x40/0x90 [myri_snf] > [67471.839865] [] mx_ioctl+0x72/0x290 [myri_snf] > [67471.839888] [] do_vfs_ioctl+0x3a0/0x5a0 > [67471.839908] [] ? 
__do_page_fault+0x228/0x500 > [67471.839928] [] SyS_ioctl+0xa1/0xc0 > [67471.839947] [] system_call_fastpath+0x22/0x27 > [67471.839966] Code: d3 e6 44 85 ce 74 e1 48 83 bf b8 00 00 00 00 75 d1 4c 8b 87 c0 00 00 00 4c 63 d9 41 8b 70 04 48 c1 e6 09 4b 03 b4 dc c0 06 00 00 <0f> b6 76 7f 41 39 30 75 b4 4c 89 a7 b8 00 00 00 49 89 bc 24 60 > [67471.840084] RIP [] snf_eop_ioctl+0x609/0xc60 [myri_snf] > [67471.840112] RSP > [67471.840125] CR2: 00007f0d8459607f > > _______________________________________________ > Zeek mailing list > zeek at zeek.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/zeek -- Justin From greg.grasmehr at caltech.edu Wed May 15 15:04:40 2019 From: greg.grasmehr at caltech.edu (Greg Grasmehr) Date: Wed, 15 May 2019 15:04:40 -0700 Subject: [Zeek] Zeek Myricom port aggregation In-Reply-To: References: <20190515181427.GA15954@dakine> Message-ID: <20190515220440.GB15954@dakine> tcpdump works perfectly with aggregation, no issues On 05/15/19 17:35:56, Justin Azoff wrote: > That looks like a bug in the myricom Driver and not zeek. Can you > reproduce the same kernel issue using tcpdump? You configure > aggregation for that using SNF_FLAGS: > > SNF_FLAGS=0x2 (Port aggregation (or merging)) > Flag 0x2 says that the port number that is passed to an application is actually > a mask of port, not just one port. > For example, when using tcpdump: > export SNF_FLAGS=0x2 > env SNF_FLAGS=0x2 /path/to/tcpdump -i snf3 > > Without SNF_FLAGS=0x2, you would actually try to open snf port 3 (which > may not exist if you only have one adapter.) > > > It's possible that you don't need to use aggregation in the first > place, That is generally only needed if you are connecting a fiber > tap directly into a card. 
If flows are being load balanced across > multiple ports you can just run two different sets of workers, one for > each port > > On Wed, May 15, 2019 at 2:17 PM Greg Grasmehr wrote: > > > > Hello, > > > > Hoping someone has some insight into whatever I am doing wrong as try as > > I might, I can't seem to get the Myricom plugin working if configured to > > aggregate port data. Zeek starts and then crashes in every case, > > regardless of configuration ie > > > > interface=myricom::3 > > interface=myricom::* > > > > and snf_aggregate = T > > > > Here is related dmesg output logged by kdump > > > > [67471.838822] BUG: unable to handle kernel paging request at 00007f0d8459607f > > [67471.838863] IP: [] snf_eop_ioctl+0x609/0xc60 [myri_snf] > > [67471.838897] PGD 8000000a93bb9067 PUD 12d142c067 PMD 12d142d067 PTE 8000001d54829025 > > [67471.838927] Oops: 0001 [#1] SMP > > [67471.838942] Modules linked in: binfmt_misc macsec tcp_diag udp_diag inet_diag unix_diag af_packet_diag netlink_diag myri_snf(OE) mpt2sas raid_class scsi_transport_sas mptctl mptbase ip6t_rpfilter ipt_REJECT nf_reject_ipv4 nf_log_ipv4 ip6t_REJECT nf_reject_ipv6 nf_log_ipv6 nf_log_common xt_LOG xt_conntrack ip_set nfnetlink ebtable_nat ebtable_broute bridge stp llc ip6table_nat nf_conntrack_ipv6 nf_defrag_ipv6 nf_nat_ipv6 ip6table_mangle ip6table_security ip6table_raw iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack iptable_mangle iptable_security iptable_raw ebtable_filter ebtables ip6table_filter ip6_tables iptable_filter dell_rbu sunrpc dcdbas iTCO_wdt iTCO_vendor_support sb_edac intel_powerclamp coretemp intel_rapl iosf_mbi kvm_intel kvm irqbypass crc32_pclmul joydev > > [67471.839241] ghash_clmulni_intel aesni_intel lrw gf128mul glue_helper ablk_helper cryptd mxm_wmi ext4 mbcache jbd2 pcspkr ipmi_ssif mei_me lpc_ich mei sg ipmi_si ipmi_devintf ipmi_msghandler wmi acpi_power_meter ip_tables xfs libcrc32c sd_mod crc_t10dif crct10dif_generic mgag200 i2c_algo_bit 
drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops ttm drm crct10dif_pclmul crct10dif_common crc32c_intel drm_panel_orientation_quirks ahci libahci dca libata tg3 megaraid_sas ptp pps_core dm_mirror dm_region_hash dm_log dm_mod [last unloaded: myri10ge] > > [67471.839450] CPU: 24 PID: 92952 Comm: bro Kdump: loaded Tainted: G OE ------------ 3.10.0-957.10.1.el7.x86_64 #1 > > [67471.839483] Hardware name: Dell Inc. PowerEdge R730xd/072T6D, BIOS 2.9.1 12/04/2018 > > [67471.839508] task: ffff95d0e7c41040 ti: ffff95e3197c0000 task.ti: ffff95e3197c0000 > > [67471.839531] RIP: 0010:[] [] snf_eop_ioctl+0x609/0xc60 [myri_snf] > > [67471.839564] RSP: 0018:ffff95e3197c3d38 EFLAGS: 00010006 > > [67471.839583] RAX: 0000000000000286 RBX: 0000000000000001 RCX: 0000000000000000 > > [67471.839605] RDX: ffff95d0526253d0 RSI: 00007f0d84596000 RDI: ffffb70f589ba7f8 > > [67471.839627] RBP: ffff95e3197c3df8 R08: ffffb70f599bb000 R09: 0000000000000003 > > [67471.839648] R10: 0000000000000000 R11: 0000000000000000 R12: ffff95d052625000 > > [67471.839670] R13: 00007ffeb542d710 R14: 00007ffeb542d710 R15: 0000000000000000 > > [67471.839693] FS: 00007f180d6a7900(0000) GS:ffff95eefe900000(0000) knlGS:0000000000000000 > > [67471.839717] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 > > [67471.839735] CR2: 00007f0d8459607f CR3: 0000001ff663c000 CR4: 00000000003607e0 > > [67471.839757] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 > > [67471.839778] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 > > [67471.839800] Call Trace: > > [67471.839818] [] ? down_read+0x12/0x40 > > [67471.839840] [] mx_common_ioctl+0x40/0x90 [myri_snf] > > [67471.839865] [] mx_ioctl+0x72/0x290 [myri_snf] > > [67471.839888] [] do_vfs_ioctl+0x3a0/0x5a0 > > [67471.839908] [] ? 
__do_page_fault+0x228/0x500 > > [67471.839928] [] SyS_ioctl+0xa1/0xc0 > > [67471.839947] [] system_call_fastpath+0x22/0x27 > > [67471.839966] Code: d3 e6 44 85 ce 74 e1 48 83 bf b8 00 00 00 00 75 d1 4c 8b 87 c0 00 00 00 4c 63 d9 41 8b 70 04 48 c1 e6 09 4b 03 b4 dc c0 06 00 00 <0f> b6 76 7f 41 39 30 75 b4 4c 89 a7 b8 00 00 00 49 89 bc 24 60 > > [67471.840084] RIP [] snf_eop_ioctl+0x609/0xc60 [myri_snf] > > [67471.840112] RSP > > [67471.840125] CR2: 00007f0d8459607f > > > > _______________________________________________ > > Zeek mailing list > > zeek at zeek.org > > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/zeek > > > > -- > Justin From mauro.palumbo at aizoon.it Fri May 17 06:36:24 2019 From: mauro.palumbo at aizoon.it (Palumbo Mauro) Date: Fri, 17 May 2019 13:36:24 +0000 Subject: [Zeek] logger in a Zeek's cluster Message-ID: <5781dc38ad7d495484fb958681e90f37@SRVEX03.aizoon.local> Hi Zeek's devs, I have a beginner's question on the logger process in a Zeek cluster. As far as I understand, the manager and the proxy processes get only some events from the worker(s) using the cluster/broker frameworks. In this way, the manager can, for example, receive the events necessary for intel/notice or sumstats. The necessary info is carried by the events. The logger too does receive only a few events from the other nodes using the cluster/broker frameworks, but not those related to logging. How does it get the logging data from the workers? Could anyone point me to where in the code this is done? Thanks, Mauro -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190517/e90e089b/attachment.html From robin at corelight.com Fri May 17 07:24:34 2019 From: robin at corelight.com (Robin Sommer) Date: Fri, 17 May 2019 07:24:34 -0700 Subject: [Zeek] logger in a Zeek's cluster In-Reply-To: <5781dc38ad7d495484fb958681e90f37@SRVEX03.aizoon.local> References: <5781dc38ad7d495484fb958681e90f37@SRVEX03.aizoon.local> Message-ID: <20190517142434.GC13047@corelight.com> On Fri, May 17, 2019 at 13:36 +0000, Palumbo Mauro wrote: > The logger too does receive only a few events from the other nodes > using the cluster/broker frameworks, but not those related to logging. > How does it get the logging data from the workers? Logging doesn't go through events, it's communicated separately over Broker through dedicated log messages. You can get statistics for that through the get_broker_stats() function [1]. The returned BrokerStats record has fields num_logs_incoming and num_logs_outgoing. Robin [1] https://docs.zeek.org/en/latest/scripts/base/bif/stats.bif.zeek.html#id-get_broker_stats -- Robin Sommer * Corelight, Inc. * robin at corelight.com * www.corelight.com From Zach.Rogers at oregonstate.edu Sat May 18 16:03:20 2019 From: Zach.Rogers at oregonstate.edu (Rogers, Zach) Date: Sat, 18 May 2019 23:03:20 +0000 Subject: [Zeek] tcmalloc large alloc In-Reply-To: <94D2E9E4-F4F0-4EE2-A3EC-5A9A94E7B0C2@corelight.com> References: <4D68CE36-22B1-4B57-85A7-6A0D8095A4E0@corelight.com> <070160D7-12A1-4D7F-B28B-3D216C647DC2@getmailspring.com> <94D2E9E4-F4F0-4EE2-A3EC-5A9A94E7B0C2@corelight.com> Message-ID: Hey Seth, Did you have a chance to look into this? If anyone else has any input that would be helpful as well! 
All the best, -- Zach Rogers Lead Security Analyst Security and Network Monitoring Oregon Research & Teaching Security Operations Center (ORTSOC) Phone: 541.737.7723 GPG Fingerprint: ECC5 03A6 7E91 17C6 50C6 8FAC D6A0 8001 2869 BD52 On 3/27/19, 10:57 AM, "Seth Hall" wrote: On 27 Mar 2019, at 11:54, Zander Work wrote: > The first two showing ??:0 makes sense b/c those are memory addresses. > It looks like the PE analyzer might be the culprit but I'm not sure. Yep, I knew the first two would look like that. It's ASLR being applied to glibc functions (which is fine and not what I was interested in anyway). It did end up showing what I expected it to. I'll look around a little bit and see if anything makes sense. Thanks! .Seth -- Seth Hall * Corelight, Inc * www.corelight.com From justin at corelight.com Sat May 18 16:31:58 2019 From: justin at corelight.com (Justin Azoff) Date: Sat, 18 May 2019 19:31:58 -0400 Subject: [Zeek] tcmalloc large alloc In-Reply-To: References: <4D68CE36-22B1-4B57-85A7-6A0D8095A4E0@corelight.com> <070160D7-12A1-4D7F-B28B-3D216C647DC2@getmailspring.com> <94D2E9E4-F4F0-4EE2-A3EC-5A9A94E7B0C2@corelight.com> Message-ID: There's an issue here: https://github.com/zeek/zeek/issues/245 I believe the problem was fixed with https://github.com/zeek/zeek/commit/78dcbcc71ac09d3dd8a213f658ee8e794bb1bcd9 or https://github.com/zeek/zeek/commit/6598fe991d26bd15e483fcd96ea72bb161143d4e but it has not been confirmed yet. On Sat, May 18, 2019 at 7:05 PM Rogers, Zach wrote: > Hey Seth, > > Did you have a chance to look into this? > > If anyone else has any input that would be helpful as well! 
> > All the best, > > -- > Zach Rogers > Lead Security Analyst > Security and Network Monitoring > Oregon Research & Teaching Security Operations Center (ORTSOC) > Phone: 541.737.7723 > GPG Fingerprint: ECC5 03A6 7E91 17C6 50C6 8FAC D6A0 8001 2869 BD52 > > ?On 3/27/19, 10:57 AM, "Seth Hall" wrote: > > > > On 27 Mar 2019, at 11:54, Zander Work wrote: > > > The first two showing ??:0 makes sense b/c those are memory > addresses. > > It looks like the PE analyzer might be the culprit but I'm not sure. > > Yep, I knew the first two would look like that. It's ASLR being > applied > to glibc function (which is fine and not what I was interested in > anyway). It did end up showing what I expected it to. I'll look > around > a little bit and see if anything makes sense. > > Thanks! > .Seth > > -- > Seth Hall * Corelight, Inc * www.corelight.com > > > > _______________________________________________ > Zeek mailing list > zeek at zeek.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/zeek -- Justin -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190518/c1b087e3/attachment.html From Zach.Rogers at oregonstate.edu Sat May 18 16:34:05 2019 From: Zach.Rogers at oregonstate.edu (Rogers, Zach) Date: Sat, 18 May 2019 23:34:05 +0000 Subject: [Zeek] tcmalloc large alloc In-Reply-To: References: <4D68CE36-22B1-4B57-85A7-6A0D8095A4E0@corelight.com> <070160D7-12A1-4D7F-B28B-3D216C647DC2@getmailspring.com> <94D2E9E4-F4F0-4EE2-A3EC-5A9A94E7B0C2@corelight.com> Message-ID: <94FDEE2E-1311-4896-8A98-FCB56980F415@oregonstate.edu> Thanks Justin! I will see if we can do some testing on our end ? If so I will report back. 
-- Zach Rogers Lead Security Analyst Security and Network Monitoring Oregon Research & Teaching Security Operations Center (ORTSOC) Phone: 541.737.7723 GPG Fingerprint: ECC5 03A6 7E91 17C6 50C6 8FAC D6A0 8001 2869 BD52 From: Justin Azoff Date: Saturday, May 18, 2019 at 4:32 PM To: "Rogers, Zach" Cc: Seth Hall , "Nead-Work, Alexander" , "zeek at zeek.org" Subject: Re: [Zeek] tcmalloc large alloc There's an issue here: https://github.com/zeek/zeek/issues/245 I believe the problem was fixed with https://github.com/zeek/zeek/commit/78dcbcc71ac09d3dd8a213f658ee8e794bb1bcd9 or https://github.com/zeek/zeek/commit/6598fe991d26bd15e483fcd96ea72bb161143d4e but it has not been confirmed yet, On Sat, May 18, 2019 at 7:05 PM Rogers, Zach > wrote: Hey Seth, Did you have a chance to look into this? If anyone else has any input that would be helpful as well! All the best, -- Zach Rogers Lead Security Analyst Security and Network Monitoring Oregon Research & Teaching Security Operations Center (ORTSOC) Phone: 541.737.7723 GPG Fingerprint: ECC5 03A6 7E91 17C6 50C6 8FAC D6A0 8001 2869 BD52 On 3/27/19, 10:57 AM, "Seth Hall" > wrote: On 27 Mar 2019, at 11:54, Zander Work wrote: > The first two showing ??:0 makes sense b/c those are memory addresses. > It looks like the PE analyzer might be the culprit but I'm not sure. Yep, I knew the first two would look like that. It's ASLR being applied to glibc function (which is fine and not what I was interested in anyway). It did end up showing what I expected it to. I'll look around a little bit and see if anything makes sense. Thanks! .Seth -- Seth Hall * Corelight, Inc * www.corelight.com _______________________________________________ Zeek mailing list zeek at zeek.org http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/zeek -- Justin -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190518/de4dadaa/attachment.html From mauro.palumbo at aizoon.it Mon May 20 08:06:55 2019 From: mauro.palumbo at aizoon.it (Palumbo Mauro) Date: Mon, 20 May 2019 15:06:55 +0000 Subject: [Zeek] logger node in a cluster Message-ID: Hi Zeek-devs, I am not sure I am getting it right, but it seems to me that a Zeek logger in a cluster configuration simply sits there waiting for logs and then writes them down. Does it do any additional work? For example, checking for duplicated logs from workers? If yes, where is the code for these additional checks? Mauro -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190520/ac4f0dca/attachment.html From sg5414 at rit.edu Mon May 20 12:31:11 2019 From: sg5414 at rit.edu (Sahil Gupta) Date: Mon, 20 May 2019 15:31:11 -0400 Subject: [Zeek] Error while connecting bro with RYU controller Message-ID: Hi, I am a newbie in bro. I am trying to run a bro script which communicates messages to the RYU controller. I just followed the instructions at: https://github.com/bro/bro-netcontrol Getting this error: *sg5414 at controller:/tmp/bro-netcontrol/openflow$ bro -C -i eth0 example.bro error in ./example.bro, line 18: unknown identifier OpenFlow::broker_new, at or near "OpenFlow::broker_new"* Any help is appreciated. Regards Sahil Gupta -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190520/87319fc1/attachment.html From sg5414 at rit.edu Mon May 20 12:29:26 2019 From: sg5414 at rit.edu (Sahil Gupta) Date: Mon, 20 May 2019 15:29:26 -0400 Subject: [Zeek] Error in connecting bro with RYU Message-ID: Hi, I am a newbie in bro. I am trying to run a bro script which communicates messages to the RYU controller. 
I just followed the instructions at: https://github.com/bro/bro-netcontrol Getting this error: *sg5414 at controller:/tmp/bro-netcontrol/openflow$ bro -C -i eth0 example.bro error in ./example.bro, line 18: unknown identifier OpenFlow::broker_new, at or near "OpenFlow::broker_new"* Any help is appreciated. Regards Sahil Gupta -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190520/1faf893c/attachment.html From sg5414 at rit.edu Tue May 21 13:17:10 2019 From: sg5414 at rit.edu (Sahil Gupta) Date: Tue, 21 May 2019 16:17:10 -0400 Subject: [Zeek] Error: Wrong number of elements or type in tuple for event_flow_clear Message-ID: Hi, We're testing example.zeek and controller.py. At startup, we get this error message: "wrong number of elements or type in tuple for event_flow_clear" This error occurs even when running the original, unmodified code. Any help would be appreciated. Thanks Sahil Gupta -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190521/b65718fb/attachment.html From x.faith at gmail.com Wed May 22 09:29:38 2019 From: x.faith at gmail.com (David Decker) Date: Wed, 22 May 2019 12:29:38 -0400 Subject: [Zeek] EOL/LTS Message-ID: Zeek, Sorry, I could not find this on the site, but is there EOL information for current versions of the software, say 2.4, 2.5, 2.6, or 2.6.1? Thanks Dave -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190522/2f08367d/attachment.html From jan.grashoefer at gmail.com Wed May 22 09:39:00 2019 From: jan.grashoefer at gmail.com (Jan Grashöfer) Date: Wed, 22 May 2019 18:39:00 +0200 Subject: [Zeek] EOL/LTS In-Reply-To: References: Message-ID: <0d089fc0-1e18-1689-6965-5ec1007a0407@gmail.com> On 22/05/2019 18:29, David Decker wrote: > Sorry could not find this on the site, but is there EOL information for > current versions of the software? Recently there was a blog post on the new release schedule: https://blog.zeek.org/2019/04/new-zeek-release-schedule.html Jan From kclawson at gmail.com Wed May 22 16:12:39 2019 From: kclawson at gmail.com (Kurtis Lawson) Date: Wed, 22 May 2019 16:12:39 -0700 Subject: [Zeek] Duplicate DNS packets Message-ID: Hello fellow Zeekers, I am new to the mailing list and fairly new to Zeek. I am having an issue where DNS traffic is duplicated. It seems fairly obvious to me that the issue is that the manager is sending a single "session" to all of the workers defined in node.cfg. 
Example duplicate logs (sanitized a bit): user1 at site1bro:~$ awk -F '\t' '{ if($1 == "1558556089.463824") print $0;}' dns.date-time.log 1558556089.463824 Ce6WGH1tX7fUQCJkEb 10.1.1.1 49675 10.5.5.5 53 udp 58613 - yahoo.uservoice.com 1 C_INTERNET 1 A - - F F T F 0 SITE1BRO-4 1558556089.463824 CxhWh33b65uCcQlUR2 10.1.1.1 49675 10.5.5.5 53 udp 58613 - yahoo.uservoice.com 1 C_INTERNET 1 A - - F F T F 0 SITE1BRO-8 1558556089.463824 CNBy3ykdFSvXydiW7 10.1.1.1 49675 10.5.5.5 53 udp 58613 - yahoo.uservoice.com 1 C_INTERNET 1 A - - F F T F 0 SITE1BRO-9 1558556089.463824 CV6w2f3NKeaAwhAvJf 10.1.1.1 49675 10.5.5.5 53 udp 58613 - yahoo.uservoice.com 1 C_INTERNET 1 A - - F F T F 0 SITE1BRO-7 1558556089.463824 Cc5rcP3N92OGHYUKA2 10.1.1.1 49675 10.5.5.5 53 udp 58613 - yahoo.uservoice.com 1 C_INTERNET 1 A - - F F T F 0 SITE1BRO-6 My node.cfg file: [manager] type=manager host=10.10.10.10 [proxy-1] type=proxy host=10.10.10.10 [SITE1BRO] type=worker host=10.10.10.10 interface=eth5 lb_method=pf_ring lb_procs=10 pin_cpus=2,3,4,5,6,7,8,9,10,11 Other info: - The span feed is clean of duplicates (validated with multiple packet captures) - Other logs are generally not duplicated, and I suspect that this only happens with UDP traffic - I've tried changing the LB type in the broctl.cfg file to 2-tuple, 5-tuple, and round-robin (4-tuple is default) but none of those resolved the issue - I've tried installing the latest dev version of pf_ring to no avail - From previously archived threads, it appears that this is not a new issue, and that it also happens with af_packet ... which is what I was going to try next :( Any insights as to how I can fix, or at least filter these duplicates before they are written to file and/or sent to Kafka would be greatly appreciated. KCL -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190522/f62783d8/attachment.html From akgraner at corelight.com Thu May 23 06:34:04 2019 From: akgraner at corelight.com (Amber Graner) Date: Thu, 23 May 2019 09:34:04 -0400 Subject: [Zeek] ZeekWeek 2019 - Call for Participation and Registration Now Open Message-ID: Hi All, Registration is now open for ZeekWeek 2019. Want to attend? Sponsor? Speak? Check out the link below: ZeekWeek 2019 - Call for Participation and Registration now open! Register today!! - https://blog.zeek.org/2019/05/zeekweek-2019-call-for-participation.html Hope to see you in Seattle! ~Amber -- *Amber Graner* Director of Community Corelight, Inc 828.582.9469 * Ask me about how you can participate in the Zeek (formerly Bro) community. * Remember - ZEEK AND YOU SHALL FIND!! -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190523/aee7ecca/attachment.html From hasifsulaiman94 at gmail.com Thu May 23 19:09:59 2019 From: hasifsulaiman94 at gmail.com (Muhammad Hasif Sulaiman) Date: Fri, 24 May 2019 10:09:59 +0800 Subject: [Zeek] Generate New Log using Customized Script Message-ID: Hi, I need help with a script I customized. Basically, the script logs HTTP headers. I don't want to mess up the original http log, so I tried to create a new log file that records some fields similar to the original http log, along with the HTTP headers. I tested the script on http://try.bro.org and was able to execute it, and I also tested the script analyzing live traffic from an interface using the "*bro -i en0*" command, with success. But when I load the script in local.bro and restart the bro service, the logger crashes. I'm not sure if the script is the cause or something else is. In local.bro I have included the *@load protocols/http/httpheaders line*. 
The script is located */usr/src/bro-2.6.1/scripts/base/protocols/http/httpheaders.bro* Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190524/7a897a9f/attachment.html -------------- next part -------------- A non-text attachment was scrubbed... Name: httpheaders.bro Type: application/octet-stream Size: 6139 bytes Desc: not available Url : http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190524/7a897a9f/attachment.obj From mauro.palumbo at aizoon.it Fri May 24 02:29:04 2019 From: mauro.palumbo at aizoon.it (Palumbo Mauro) Date: Fri, 24 May 2019 09:29:04 +0000 Subject: [Zeek] ntp protocol analyzer Message-ID: <8a4bcfd9ec414b68a0410290c1633ddd@SRVEX03.aizoon.local> Hi Zeek's devs, I am interested in an analyzer for the NTP protocol. I have seen that there is one in Zeek, but it doesn't really parse all fields in details. Is anyone working on extending the present analyzer? Would it be of interest for the community to do so? Is there any reason why the present analyzer is written in C++ rather than binpac? Thanks, Mauro -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190524/e6695998/attachment.html From justin at corelight.com Fri May 24 09:00:16 2019 From: justin at corelight.com (Justin Azoff) Date: Fri, 24 May 2019 12:00:16 -0400 Subject: [Zeek] Duplicate DNS packets In-Reply-To: References: Message-ID: On Wed, May 22, 2019 at 7:21 PM Kurtis Lawson wrote: > Hello fellow Zeekers, > > I am new to the mailing list and fairly new to Zeek. > I am having an issue where DNS traffic is duplicated. It seem fairly > obvious to me that the issue is that the manager is sending a single > "session" to all of the workers defined in node.cfg. 
> not quite, the manager doesn't send any traffic, the workers read it directly, but you are correct in that all of the workers are seeing the same traffic > Other info: > > - The span feed is clean of duplicates (validated with multiple packet > captures) > > - Other logs are generally not duplicated, and I suspect that this only > happens with UDP traffic > > - I've tried changing the LB type in the broctl.cfg file to 2-tuple, > 5-tuple, and round-robin (4-tuple is default) but none of those resolved > the issue > > - I've tried installing the latest dev version of pf_ring to no avail > > - From previously archived threads, it appears that this is not a new > issue, and that it also happens with af_packet ... which is what I was > going to try next :( > > Your problem is that you are not actually using pf_ring to load balance, you're just running 10 workers all seeing 100% of the traffic. This isn't really an issue it's just a common misconfiguration. The easiest way to fix this is to install https://packages.bro.org/packages/view/1bafeed3-c141-11e8-88be-0a645a3f3086 And not try to use the PF ring libpcap which is where your problem is (It may be installed but you're not actually using it) Using af_packet https://packages.bro.org/packages/view/74610004-4fb7-11e8-88be-0a645a3f3086 It's probably easier anyway and that does not have this problem -- Justin -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190524/f3146341/attachment-0001.html From kclawson at gmail.com Fri May 24 16:27:55 2019 From: kclawson at gmail.com (Kurtis Lawson) Date: Fri, 24 May 2019 16:27:55 -0700 Subject: [Zeek] Duplicate DNS packets In-Reply-To: References: Message-ID: Justin, Thanks for taking the time to reply and thanks for the information. I'll work on this next week and reply to the list. 
Kurtis Lawson On Fri, May 24, 2019 at 9:00 AM Justin Azoff wrote: > On Wed, May 22, 2019 at 7:21 PM Kurtis Lawson wrote: > >> Hello fellow Zeekers, >> >> I am new to the mailing list and fairly new to Zeek. >> I am having an issue where DNS traffic is duplicated. It seem fairly >> obvious to me that the issue is that the manager is sending a single >> "session" to all of the workers defined in node.cfg. >> > > not quite, the manager doesn't send any traffic, the workers read it > directly, but you are correct in that all of the workers are seeing the > same traffic > > >> Other info: >> >> - The span feed is clean of duplicates (validated with multiple packet >> captures) >> >> - Other logs are generally not duplicated, and I suspect that this only >> happens with UDP traffic >> >> - I've tried changing the LB type in the broctl.cfg file to 2-tuple, >> 5-tuple, and round-robin (4-tuple is default) but none of those resolved >> the issue >> >> - I've tried installing the latest dev version of pf_ring to no avail >> >> - From previously archived threads, it appears that this is not a new >> issue, and that it also happens with af_packet ... which is what I was >> going to try next :( >> >> > Your problem is that you are not actually using pf_ring to load balance, > you're just running 10 workers all seeing 100% of the traffic. This isn't > really an issue it's just a common misconfiguration. > > The easiest way to fix this is to install > https://packages.bro.org/packages/view/1bafeed3-c141-11e8-88be-0a645a3f3086 > And not try to use the PF ring libpcap which is where your problem is (It > may be installed but you're not actually using it) > > Using af_packet > https://packages.bro.org/packages/view/74610004-4fb7-11e8-88be-0a645a3f3086 It's > probably easier anyway and that does not have this problem > > -- > Justin > -------------- next part -------------- An HTML attachment was scrubbed... 
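[Archive note] For readers hitting the same duplicate-traffic problem, the worker section of node.cfg changes once the af_packet plugin is installed, so that the kernel (rather than ten independent packet captures) fans flows out across the worker processes. The fragment below is only a sketch adapted from the node.cfg quoted earlier in this thread; the exact option names, in particular how the fanout group is configured, depend on the plugin version, so check the zeek-af_packet-plugin README for your install:

```
[SITE1BRO]
type=worker
host=10.10.10.10
# The af_packet:: prefix hands the interface to the af_packet plugin;
# the kernel then load-balances flows across the 10 worker processes.
interface=af_packet::eth5
lb_method=custom
lb_procs=10
pin_cpus=2,3,4,5,6,7,8,9,10,11
```

With a setup like this, each worker should see roughly a tenth of the traffic instead of all of it, which removes the duplicated DNS entries.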
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190524/e81175f1/attachment.html From seth at corelight.com Sat May 25 02:24:20 2019 From: seth at corelight.com (Seth Hall) Date: Sat, 25 May 2019 04:24:20 -0500 Subject: [Zeek] ntp protocol analyzer In-Reply-To: <8a4bcfd9ec414b68a0410290c1633ddd@SRVEX03.aizoon.local> References: <8a4bcfd9ec414b68a0410290c1633ddd@SRVEX03.aizoon.local> Message-ID: <18DE7AED-AF24-4460-BA09-0D89D46B5550@corelight.com> No one is working on it that I know of and it's written in C++ because it's older. I think at some point I rewrote it in binpac but I suspect that has been lost to the sands of time at this point. A couple of years ago I think some others were working on ntp related stuff but I don't know if that went anywhere. If you're up for it, feel free to take on the ntp analyzer and rehab it! .Seth -- Seth Hall * Corelight, Inc * www.corelight.com > On May 24, 2019, at 4:29 AM, Palumbo Mauro wrote: > > Hi Zeek's devs, > I am interested in an analyzer for the NTP protocol. I have seen that there is one in Zeek, but it doesn't really parse all fields in details. Is anyone working on extending the present analyzer? Would it be of interest for the community to do so? > Is there any reason why the present analyzer is written in C++ rather than binpac? > > Thanks, > Mauro > _______________________________________________ > Zeek mailing list > zeek at zeek.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/zeek -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190525/221aa6f3/attachment.html From vlad at es.net Sun May 26 05:29:44 2019 From: vlad at es.net (Vlad Grigorescu) Date: Sun, 26 May 2019 07:29:44 -0500 Subject: [Zeek] ntp protocol analyzer In-Reply-To: <18DE7AED-AF24-4460-BA09-0D89D46B5550@corelight.com> References: <8a4bcfd9ec414b68a0410290c1633ddd@SRVEX03.aizoon.local> <18DE7AED-AF24-4460-BA09-0D89D46B5550@corelight.com> Message-ID: There's some work in branch topic/vladg/ntp, but that's incomplete and ~3 years old. --Vlad On Sat, May 25, 2019 at 4:26 AM Seth Hall wrote: > No one is working on it that I know of and it's written in C++ because > it's older. I think at some point I rewrote it in binpac but I suspect > that has been lost to the sands of time at this point. A couple of years > ago I think some others were working on ntp related stuff but I don't know > if that went anywhere. > > If you're up for it, feel free to take on the ntp analyzer and rehab it! > > .Seth > > -- > Seth Hall * Corelight, Inc * www.corelight.com > > > On May 24, 2019, at 4:29 AM, Palumbo Mauro > wrote: > > Hi Zeek's devs, > > I am interested in an analyzer for the NTP protocol. I have seen that > there is one in Zeek, but it doesn't really parse all fields in details. Is > anyone working on extending the present analyzer? Would it be of interest > for the community to do so? > > Is there any reason why the present analyzer is written in C++ rather than > binpac? > > > > Thanks, > > Mauro > > _______________________________________________ > Zeek mailing list > zeek at zeek.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/zeek > > _______________________________________________ > Zeek mailing list > zeek at zeek.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/zeek -------------- next part -------------- An HTML attachment was scrubbed... 
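[Archive note] If anyone does pick up the rehab work: the fixed 48-byte NTP header is compact enough that a binpac record covers it in a dozen lines. The sketch below follows the RFC 5905 packet layout and is purely illustrative; it is not taken from the topic/vladg/ntp branch, and a real analyzer would still have to handle extension fields, the optional MAC, and the mode 6/7 control-message formats separately:

```
# Illustrative binpac sketch of the fixed NTP header (RFC 5905).
type NTP_PDU = record {
    first_byte      : uint8;   # LI (2 bits), VN (3 bits), Mode (3 bits)
    stratum         : uint8;
    poll            : int8;
    precision       : int8;
    root_delay      : uint32;  # 16.16 fixed point
    root_dispersion : uint32;  # 16.16 fixed point
    reference_id    : uint32;
    reference_ts    : uint64;  # NTP timestamps are 32.32 fixed point
    origin_ts       : uint64;
    receive_ts      : uint64;
    transmit_ts     : uint64;
} &byteorder = bigendian;
```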
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190526/a6b1814a/attachment.html From mauro.palumbo at aizoon.it Mon May 27 00:35:26 2019 From: mauro.palumbo at aizoon.it (Palumbo Mauro) Date: Mon, 27 May 2019 07:35:26 +0000 Subject: [Zeek] R: ntp protocol analyzer In-Reply-To: References: <8a4bcfd9ec414b68a0410290c1633ddd@SRVEX03.aizoon.local> <18DE7AED-AF24-4460-BA09-0D89D46B5550@corelight.com> Message-ID: Ok, thanks for the feedback. I'll have a look at that. I need to get something working soon. But I'll keep everybody posted about what I can do about this analyzer. Mauro From: Vlad Grigorescu [mailto:vlad at es.net] Sent: Sunday, 26 May 2019 14:30 To: Seth Hall Cc: Palumbo Mauro ; zeek at zeek.org Subject: Re: [Zeek] ntp protocol analyzer There's some work in branch topic/vladg/ntp, but that's incomplete and ~3 years old. --Vlad On Sat, May 25, 2019 at 4:26 AM Seth Hall > wrote: No one is working on it that I know of and it's written in C++ because it's older. I think at some point I rewrote it in binpac but I suspect that has been lost to the sands of time at this point. A couple of years ago I think some others were working on ntp related stuff but I don't know if that went anywhere. If you're up for it, feel free to take on the ntp analyzer and rehab it! .Seth -- Seth Hall * Corelight, Inc * www.corelight.com On May 24, 2019, at 4:29 AM, Palumbo Mauro > wrote: Hi Zeek's devs, I am interested in an analyzer for the NTP protocol. I have seen that there is one in Zeek, but it doesn't really parse all fields in details. Is anyone working on extending the present analyzer? Would it be of interest for the community to do so? Is there any reason why the present analyzer is written in C++ rather than binpac? 
Thanks, Mauro _______________________________________________ Zeek mailing list zeek at zeek.org http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/zeek _______________________________________________ Zeek mailing list zeek at zeek.org http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/zeek -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190527/a28d0fd7/attachment.html From sachin.giribuva at niyuj.com Mon May 27 22:07:00 2019 From: sachin.giribuva at niyuj.com (Sachinji Giri) Date: Tue, 28 May 2019 10:37:00 +0530 Subject: [Zeek] Which services are identified in conn.log by bro? In-Reply-To: References: Message-ID: Hi all, I am looking for the list of services that bro/zeek identifies in conn.log. But I am unable to find out exactly how many services bro identifies. Can someone please point out to me the correct script file, source code, or documentation where I can get the list of services that bro detects? Documentation says: > application-layer services ( - the service field is filled in as Bro > determines a specific protocol to be in use, independent of the > connection's ports) > But where are these services defined? How many are identified in the conn.log? Thanks in advance! Regards, Sachin Giri -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190528/2d664c21/attachment.html From anthony.kasza at gmail.com Tue May 28 08:14:18 2019 From: anthony.kasza at gmail.com (anthony kasza) Date: Tue, 28 May 2019 09:14:18 -0600 Subject: [Zeek] Which services are identified in conn.log by bro? In-Reply-To: References: Message-ID: You can find how this field gets set by grepping through Zeek's source. 
``` $ grep -R '\$service' ./scripts | grep 'add' ./scripts/base/frameworks/dpd/main.bro: add c$service[analyzer]; ./scripts/base/frameworks/dpd/main.bro: add c$service[fmt("-%s", analyzer)]; ./scripts/base/protocols/ftp/main.bro: add c$service["ftp-data"]; ./scripts/base/protocols/ftp/gridftp.bro: add c$service["gridftp-data"]; ./scripts/base/protocols/ftp/gridftp.bro: add c$service["gridftp"]; ./scripts/base/protocols/irc/dcc-send.bro: add c$service["irc-dcc-data"]; ``` Most services are identified via the Dynamic Protocol Detection (DPD) framework. https://www.zeek.org/development/howtos/dpd.html Looking at `scripts/base/frameworks/dpd/main.bro`, you can see that the service field is set within the protocol_confirmation() scriptland event which is generated by protocol analyzers in C++land. The ProtocolConfirmation() function from `src/analyzer/Analyzer.cc` is how the scriptland event is called. Grepping for that function in the source shows 29 different protocol analyzers. ``` $ grep -R 'ProtocolConfirmation' ./src/* | cut -f1 -d':' | grep 'protocol' | cut -d'/' -f5 | sort -u ayiya bittorrent dce-rpc dhcp dnp3 dns ftp gssapi gtpv1 http imap irc krb modbus mysql ntlm pop3 radius rdp rfb sip smb smtp snmp socks ssh ssl teredo xmpp ``` It seems that there are, in total, 33 possible connection service values. -AK On Mon, May 27, 2019, 23:10 Sachinji Giri wrote: > Hi all, > > I am looking for the list of services that bro/zeek identifies in > conn.log. But I am unable to find out exactly how many services bro > identifies. Can someone please point out to me the correct script le or > source code or documentation where I can get the list of services that bro > detects? > > Documentation says : > >> application-layer services ( - the service field is filled in as Bro >> determines a specific protocol to be in use, independent of the >> connection?s ports) >> > > > > But where are these services defined? How many are identified in the > conn.log? > > Thanks in advance! 
> > Regards, > > Sachin Giri > _______________________________________________ > Zeek mailing list > zeek at zeek.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/zeek -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190528/2d119847/attachment.html From konrad.weglowski at gmail.com Tue May 28 08:39:09 2019 From: konrad.weglowski at gmail.com (Konrad Weglowski) Date: Tue, 28 May 2019 11:39:09 -0400 Subject: [Zeek] input framework - reading data from files - set vs table data structure? Message-ID: Hey, Is there a way to read data from a file into a "set" data structure (instead of "table")? I would like to read contents of the file that has list of domain names for example, one per line and store in a "set" data structure variable. Thanks, Konrad -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190528/fd7e151d/attachment.html From justin at corelight.com Tue May 28 09:02:17 2019 From: justin at corelight.com (Justin Azoff) Date: Tue, 28 May 2019 12:02:17 -0400 Subject: [Zeek] input framework - reading data from files - set vs table data structure? In-Reply-To: References: Message-ID: Yep, you just setup a table with only an index like so: https://github.com/zeek/zeek/blob/master/testing/btest/scripts/base/frameworks/input/set.zeek On Tue, May 28, 2019 at 11:48 AM Konrad Weglowski < konrad.weglowski at gmail.com> wrote: > Hey, > > Is there a way to read data from a file into a "set" data structure > (instead of "table")? > > I would like to read contents of the file that has list of domain names > for example, one per line and store in a "set" data structure variable. 
> > Thanks, > > Konrad > _______________________________________________ > Zeek mailing list > zeek at zeek.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/zeek -- Justin -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190528/fc6d5174/attachment.html From konrad.weglowski at gmail.com Tue May 28 09:11:44 2019 From: konrad.weglowski at gmail.com (Konrad Weglowski) Date: Tue, 28 May 2019 12:11:44 -0400 Subject: [Zeek] input framework - reading data from files - set vs table data structure? In-Reply-To: References: Message-ID: Awesome, thank you very much! Konrad On Tue, May 28, 2019 at 12:02 PM Justin Azoff wrote: > Yep, you just setup a table with only an index like so: > > > https://github.com/zeek/zeek/blob/master/testing/btest/scripts/base/frameworks/input/set.zeek > > On Tue, May 28, 2019 at 11:48 AM Konrad Weglowski < > konrad.weglowski at gmail.com> wrote: > >> Hey, >> >> Is there a way to read data from a file into a "set" data structure >> (instead of "table")? >> >> I would like to read contents of the file that has list of domain names >> for example, one per line and store in a "set" data structure variable. >> >> Thanks, >> >> Konrad >> _______________________________________________ >> Zeek mailing list >> zeek at zeek.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/zeek > > > > -- > Justin > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190528/90c87a9c/attachment-0001.html From pssunu6 at gmail.com Tue May 28 23:19:52 2019 From: pssunu6 at gmail.com (ps sunu) Date: Wed, 29 May 2019 11:49:52 +0530 Subject: [Zeek] zeek 2.6.1 pfring version Message-ID: Hi , i want to install zeek 2.6.1 , which pfring version will support zeek 2.6.1 ? Regards, PS -------------- next part -------------- An HTML attachment was scrubbed... 
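[Archive note] Returning to Konrad's input-framework question a few messages up: the pattern from the test Justin linked looks roughly like the sketch below. The source file name and field name here are made up, and the input file needs a tab-separated header line naming the field (`#fields` followed by a tab and `domain`):

```zeek
module DomainList;

type Idx: record {
    domain: string;
};

# A table with only an index column is read into a set.
global domains: set[string] = set();

event zeek_init()
    {
    Input::add_table([$source="domains.txt", $name="domains",
                      $idx=Idx, $destination=domains]);
    }

event Input::end_of_data(name: string, source: string)
    {
    if ( name == "domains" )
        print fmt("loaded %d domains", |domains|);
    }
```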
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190529/ac605fc8/attachment.html From justin at corelight.com Wed May 29 17:01:31 2019 From: justin at corelight.com (Justin Azoff) Date: Wed, 29 May 2019 20:01:31 -0400 Subject: [Zeek] Generate New Log using Customized Script In-Reply-To: References: Message-ID: On Thu, May 23, 2019 at 10:12 PM Muhammad Hasif Sulaiman < hasifsulaiman94 at gmail.com> wrote: > Hi, > > I need help with some script i customized. Basically the script is to log > http header. I don't want to mess the original http log, so i tried to > create a new log file to log some field similar with the original http log > along with the http header. I tested the script on http://try.bro.org and > was able to execute the script, also I tested the script to analyze live > traffic from an interface using "*bro -i en0 *" > command with success. But when i load the script on local.bro and restart > bro service, the logger crashed. I'm not sure if the script is the cause or > something else is. > on local.bro file i have included *@load protocols/http/httpheaders line*. > The script is located > */usr/src/bro-2.6.1/scripts/base/protocols/http/httpheaders.bro* > That all sounds reasonable to me.. how exactly was the logger crashing? Were you getting script errors or was it segfaulting? -- Justin -------------- next part -------------- An HTML attachment was scrubbed... 
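[Archive note] For readers who want to try the same thing, the skeleton of a script that writes HTTP headers to its own log stream (leaving http.log untouched) looks roughly like the following. Module, path, and field names are illustrative; this is not the script attached to the original message:

```zeek
module HTTPHeaders;

export {
    redef enum Log::ID += { LOG };

    type Info: record {
        ts:    time   &log;
        uid:   string &log;
        name:  string &log;
        value: string &log;
    };
}

event zeek_init()
    {
    # Creates a separate httpheaders.log alongside the standard logs.
    Log::create_stream(HTTPHeaders::LOG, [$columns=Info, $path="httpheaders"]);
    }

event http_header(c: connection, is_orig: bool, name: string, value: string)
    {
    local rec: Info = [$ts=network_time(), $uid=c$uid, $name=name, $value=value];
    Log::write(HTTPHeaders::LOG, rec);
    }
```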
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190529/3c60e9b1/attachment.html From merril.mathew at baby2body.com Thu May 30 02:45:01 2019 From: merril.mathew at baby2body.com (Merril Mathew) Date: Thu, 30 May 2019 10:45:01 +0100 Subject: [Zeek] Send email on any SSH attempt Message-ID: Hi All, I am very new to Zeek. I was trying to send an email on any SSH attempt, regardless of success or fail. The notice framework is really confusing and I could not find much information online. :) Would be great if someone can explain to me what I need to do to solve this specific issue. Please find attached what I have tried so far. Please also note that whenever I tried to run my scripts with pcap file it generates a notice.log. However if I load my script to local.zeek then I cannot find any notice.log in $PREFIX/bro/logs/current. zeek_mail.zeek is where the Notice implementation is done and zeek_mail2.zeek is where the notice hook is applied. Kind regards, Merril. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190530/ebf47a4d/attachment.html -------------- next part -------------- A non-text attachment was scrubbed... Name: zeek_mail2.zeek Type: application/octet-stream Size: 225 bytes Desc: not available Url : http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190530/ebf47a4d/attachment.obj -------------- next part -------------- A non-text attachment was scrubbed... 
Name: zeek_mail.zeek Type: application/octet-stream Size: 353 bytes Desc: not available Url : http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190530/ebf47a4d/attachment-0001.obj From anthony.kasza at gmail.com Thu May 30 06:30:21 2019 From: anthony.kasza at gmail.com (anthony kasza) Date: Thu, 30 May 2019 07:30:21 -0600 Subject: [Zeek] Send email on any SSH attempt In-Reply-To: References: Message-ID: Hi Merril, In zeek_mail.zeek, change "$note=Notice::Login_attempted" to "$note=SSH::Login_attempted". This is because you exported the additional notice type from the SSH module namespace. I'm not completely sure, but you may need to change the second @load directive in zeek_mail2.zeek to "zeek_mail" instead of "alert_ssh_attempted.zeek". -AK On Thu, May 30, 2019, 03:48 Merril Mathew wrote: > Hi All, > > I am very new to Zeek. I was trying to send an email on any SSH attempt, > regardless of success or fail. The notice framework is really confusing and > I could not find much information online. :) Would be great if someone can > explain to me what I need to do to solve this specific issue. > > Please find attached what I have tried so far. Please also note that > whenever I tried to run my scripts with pcap file it generates a > notice.log. However if I load my script to local.zeek then I cannot find > any notice.log in $PREFIX/bro/logs/current. > > zeek_mail.zeek is where the Notice implementation is done and > zeek_mail2.zeek is where the notice hook is applied. > > Kind regards, > Merril. > _______________________________________________ > Zeek mailing list > zeek at zeek.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/zeek -------------- next part -------------- An HTML attachment was scrubbed... 
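[Archive note] Putting Anthony's suggestions together, a minimal self-contained version of such a script might look like the one below. This is a sketch, not the attached files: the module and notice names are made up, the events used are the stock ssh_auth_successful/ssh_auth_failed events, and e-mail delivery still requires MailTo to be configured for zeekctl:

```zeek
module SSHMail;

export {
    redef enum Notice::Type += { Login_Attempt };
}

# Fire on failed and successful SSH authentication alike.
event ssh_auth_failed(c: connection)
    {
    NOTICE([$note=SSHMail::Login_Attempt, $conn=c,
            $msg=fmt("SSH login attempt from %s", c$id$orig_h)]);
    }

event ssh_auth_successful(c: connection, auth_method_none: bool)
    {
    NOTICE([$note=SSHMail::Login_Attempt, $conn=c,
            $msg=fmt("successful SSH login from %s", c$id$orig_h)]);
    }

# Tell the notice framework to e-mail this notice type.
hook Notice::policy(n: Notice::Info)
    {
    if ( n$note == SSHMail::Login_Attempt )
        add n$actions[Notice::ACTION_EMAIL];
    }
```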
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190530/4edce322/attachment.html From merril.mathew at baby2body.com Thu May 30 07:10:53 2019 From: merril.mathew at baby2body.com (Merril Mathew) Date: Thu, 30 May 2019 15:10:53 +0100 Subject: [Zeek] Send email on any SSH attempt In-Reply-To: References: Message-ID: Hi Anthony, Thank you for the reply. I have changed the files as suggested by you (please find new files attached for reference). I loaded both zeek_mail.zeek and zeek_mail2.zeek to local.zeek ( eg. @load /usr/local/bro/share/bro/site/zeek_mail.zeek). I restarted the zeekctl `zeekctl deploy`. Then I logged out of my AWS ec2 server and logged back in. I can see ssh.log under $PREFIX/logs/current but no notice.log and I did not receive an email . I am not sure if there is something else I am missing. Please note that I have MailTo="email at address" set in my zeekctl.cfg and I can send an email using sendmail manually and Zeek seems to send emails on connection summary and capture loss fine. I tried most of the resources available to the best of my efforts on notice available online without success. Any help would be much appreciated. Kind regards, Merril. On Thu, 30 May 2019 at 14:30, anthony kasza wrote: > Hi Merril, > > In zeek_mail.zeek, change "$note=Notice::Login_attempted" to > "$note=SSH::Login_attempted". This is because you exported the additional > notice type from the SSH module namespace. > > I'm not completely sure, but you may need to change the second @load > directive in zeek_mail2.zeek to "zeek_mail" instead of > "alert_ssh_attempted.zeek". > > -AK > > On Thu, May 30, 2019, 03:48 Merril Mathew > wrote: > >> Hi All, >> >> I am very new to Zeek. I was trying to send an email on any SSH attempt, >> regardless of success or fail. The notice framework is really confusing and >> I could not find much information online. :) Would be great if someone can >> explain to me what I need to do to solve this specific issue. 
>> >> Please find attached what I have tried so far. Please also note that >> whenever I tried to run my scripts with pcap file it generates a >> notice.log. However if I load my script to local.zeek then I cannot find >> any notice.log in $PREFIX/bro/logs/current. >> >> zeek_mail.zeek is where the Notice implementation is done and >> zeek_mail2.zeek is where the notice hook is applied. >> >> Kind regards, >> Merril. >> _______________________________________________ >> Zeek mailing list >> zeek at zeek.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/zeek > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190530/461beaa1/attachment-0001.html -------------- next part -------------- A non-text attachment was scrubbed... Name: zeek_mail.zeek Type: application/octet-stream Size: 350 bytes Desc: not available Url : http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190530/461beaa1/attachment-0002.obj -------------- next part -------------- A non-text attachment was scrubbed... Name: zeek_mail2.zeek Type: application/octet-stream Size: 215 bytes Desc: not available Url : http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190530/461beaa1/attachment-0003.obj From salwa.alem at univ-ubs.fr Thu May 30 08:48:26 2019 From: salwa.alem at univ-ubs.fr (Salwa Alem) Date: Thu, 30 May 2019 17:48:26 +0200 (CEST) Subject: [Zeek] Ask for help: zeek/modbus Message-ID: <1521241427.1423590.1559231306951.JavaMail.zimbra@univ-ubs.fr> Hello, I'm a novice at using Zeek and interpreting its command line, but when I ran this command, I got the error below: ~/bro$ bro -r 09052019.pcap /home/salwa/bro/scripts/base/protocols/modbus/main.zeek fatal error in /home/salwa/bro/scripts/base/protocols/modbus/main.zeek, line 5: can't find ./consts My goal is to analyze my pcap file with Zeek and extract as many statistics as possible to train my neural network.
The more statistics I have, the better the neural network will learn. That's why I used the modbus script, but I don't know the reason for the above error. Thanks in advance for your help. Best regards, Salwa -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190530/45bc1382/attachment.html From jsiwek at corelight.com Thu May 30 09:29:35 2019 From: jsiwek at corelight.com (Jon Siwek) Date: Thu, 30 May 2019 09:29:35 -0700 Subject: [Zeek] Bro 2.6.2 release (security update) Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 A security patch release, Bro v2.6.2, is now available for download: https://www.zeek.org/download/index.html The following Denial of Service vulnerabilities are addressed: * Integer type mismatches in BinPAC-generated parser code and Bro analyzer code may allow for crafted packet data to cause unintentional code paths in the analysis logic to be taken due to unsafe integer conversions causing the parser and analysis logic to each expect different fields to have been parsed. One such example, reported by Maksim Shudrak, causes the Kerberos analyzer to dereference a null pointer. CVE-2019-12175 was assigned for this issue. * The Kerberos parser allows for several fields to be left uninitialized, but they were not marked with an &optional attribute and several usages lacked existence checks. Crafted packet data could potentially cause an attempt to access such uninitialized fields, generate a runtime error/exception, and leak memory. Existence checks and &optional attributes have been added to the relevant Kerberos fields.
* BinPAC-generated protocol parsers commonly contain fields whose length is derived from other packet input, and for those that allow for incremental parsing, BinPAC did not impose a limit on how large such a field could grow, allowing for remotely-controlled packet data to cause growth of BinPAC's flowbuffer bounded only by the numeric limit of an unsigned 64-bit integer, leading to memory exhaustion. There is now a generalized limit for how large flowbuffers are allowed to grow, tunable by setting "BinPAC::flowbuffer_capacity_max". -----BEGIN PGP SIGNATURE----- iQIzBAEBAgAdFiEE6WkLK32KwaGfkhxKxotJTfVqzH4FAlzvRkwACgkQxotJTfVq zH4pLA//SO5JEvq1OLU5MFUvaMD2FraqcAsE/nj7+Yt+UbyRqG3NAwdgE19ZmtCb bRTbHpdnRo+chM+JdtB+alyojgAt0sBtMQyVqMSR2UhQgCn68OJvCT9Qi7FbCI/q ZqxKYwZ9Lfrgx4EJWnbS2hNhrBsSt9kBtqm/6YsPjyIIk3zt4q5xxJwaAouQIDFy DxTQqwaIeDNvjjV9HxYkzrWJINt4CzxG512yfXBgX1sRa2rNAhiSGOubd6uFjkWu WABfzJUDQILN0RiefT8MilEf1OBCcLtUNhVAnIgqkUkmkWm48VZu2Sup6THwU3nU N3x8XFYBLLbV3+l1dt8fqWAyzBPWs2irQBY2xmPT2xBkq4gQXxlR1Le41b/hZXCJ azmmDepedm6vfSl2Q0S9wNqEVpFAx98wj7cGZuce4VLom3W0ANl67jchXrzIX2UT BZ78jc50F8+FM7/yjYsUf+kd5t6zOWGSCq2iraZBDOaNKa1bVKBirbmFySkVuCDt fKXyLw7OKSsZD18P2SVQWHKv/JdfOTm7SRixm5Sbr+yNFceNU0KTrMSu8WI+4kxE qpVSjbMqf5XpUWZYygtGZQgg5lsrgArkOWoIfxldGDLpjQM5vUdvY3uJEdOxIsZT AmdS3SFoorzHPhKywiSANRbGdMn4o8E3y1UCdyoerKrZJoy2ZZc= =XuB+ -----END PGP SIGNATURE----- From jsiwek at corelight.com Thu May 30 17:33:58 2019 From: jsiwek at corelight.com (Jon Siwek) Date: Thu, 30 May 2019 17:33:58 -0700 Subject: [Zeek] Ask for help: zeek/modbus In-Reply-To: <1521241427.1423590.1559231306951.JavaMail.zimbra@univ-ubs.fr> References: <1521241427.1423590.1559231306951.JavaMail.zimbra@univ-ubs.fr> Message-ID: On Thu, May 30, 2019 at 8:51 AM Salwa Alem wrote: > ~/bro$ bro -r 09052019.pcap /home/salwa/bro/scripts/base/protocols/modbus/main.zeek > > fatal error in /home/salwa/bro/scripts/base/protocols/modbus/main.zeek, line 5: can't find ./consts In this case, that script is loaded by default 
and you don't have to specify it. So you could just do: bro -r 09052019.pcap Not sure that would explain the error though, unless you somehow do not have a file at /home/salwa/bro/scripts/base/protocols/modbus/consts.zeek, which should have come as part of the source code. - Jon From kayavila at illinois.edu Fri May 31 06:58:08 2019 From: kayavila at illinois.edu (Avila, Kay) Date: Fri, 31 May 2019 13:58:08 +0000 Subject: [Zeek] BroCon18 slides Message-ID: Alan Commike's Bro protocol analyzer talk from BroCon 18 on YouTube is missing the slides link (https://www.youtube.com/watch?v=UtEe-VTPcDY&t=145s). Looks like the slides for other talks are hosted at https://www.zeek.org/brocon2018/slides/ but directory indexing has been turned off. Could anyone share those with me, please? Thanks, Kay Kay Avila Senior Security Engineer, Cybersecurity and Networking Division National Center for Supercomputing Applications (NCSA) University of Illinois, Urbana-Champaign P: (217) 300-1754 F: (217) 244-1987 -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190531/6167cfdb/attachment.html From akgraner at corelight.com Fri May 31 10:00:22 2019 From: akgraner at corelight.com (Amber Graner) Date: Fri, 31 May 2019 12:00:22 -0500 Subject: [Zeek] 17 May 2019 - LT Meeting Minutes Message-ID: Hi all, The minutes from the 17 May 2019 LT Meeting can be found at: https://blog.zeek.org/2019/05/open-source-leadership-team-meeting.html Please let us know if you have any questions, comments, feedback or suggestions for topics for the LT to discuss. Also a reminder that the Registration and Call for Participation for ZeekWeek 2019 is now open.
If you want to suggest a presentation topic, submit a presentation, or become a sponsor, please go to: https://www.zeekweek.com Thanks, ~Amber -- *Amber Graner* Director of Community Corelight, Inc 828.582.9469 * Ask me about how you can participate in the Zeek (formerly Bro) community. * Remember - ZEEK AND YOU SHALL FIND!! -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/zeek/attachments/20190531/a2ad8087/attachment.html