From jazoff at illinois.edu Sat Oct 1 08:18:47 2016 From: jazoff at illinois.edu (Azoff, Justin S) Date: Sat, 1 Oct 2016 15:18:47 +0000 Subject: [Bro] Bro 2.4.1 documentation In-Reply-To: References: Message-ID: <6592A6ED-4BBF-441B-9648-0BFCD1E55DF4@illinois.edu> > On Oct 1, 2016, at 7:34 AM, John Edwards wrote: > > So Bro uses Sphinx for its documentation? Sphinx is what Viper, Cuckoo sandbox and Suricata house their documentation on also that have a export as pdf option. Correct.. the pdf export is generated via the latex output. I can generate the Bro.tex file, but wasn't able to get pdflatex to work last night. Trying to run pdflatex on it first gives memory errors, but doing export extra_mem_bot=18000000 gets past that, but it has more issues: (/usr/local/texlive/2016/texmf-dist/tex/latex/psnfss/ts1pcr.fd) ! Dimension too large. \fb at put@frame ...p \ifdim \dimen@ >\ht \@tempboxa \fb at putboxa #1\fb at afterfra... l.19029 \end{Verbatim} ? Now that I'm not super tired, I figured out I can just type R at that prompt and it completes after that. There's probably some table or figure that is missing from the output, but otherwise the result looks good. I replaced http://www.ncsa.illinois.edu/People/jazoff/bro-2.4.1.pdf with this new output. The latex -> pdf output is much nicer than the man -> pdf output. In addition to the links working, the images are included. It looks like it could still use some tweaks. I think the main thing is that the ToC needs to show one more sub level to break up each of the sections further. I tried changing the index.rst to have .. toctree:: :maxdepth: 3 instead of 2 but that didn't seem to do anything. The docs has a note that "The LaTeX writer only refers the maxdepth option of first toctree directive in the document." but the one in index.rst should be the first. -- - Justin Azoff From philosnef at gmail.com Sun Oct 2 05:31:57 2016 From: philosnef at gmail.com (erik clark) Date: Sun, 2 Oct 2016 08:31:57 -0400 Subject: [Bro] Monitoring a directory and running bro on the PCAPs Message-ID: Moloch is a threaded pcap writer. You are writing multiple pcaps concurrently. Spewing that kind of content at bro probably will not have the desired effect, causing loss of session information and who knows what else. I agree that you should drop another link off your tap and feed it just to bro. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161002/44553af8/attachment.html From michalpurzynski1 at gmail.com Sun Oct 2 07:38:36 2016 From: michalpurzynski1 at gmail.com (=?utf-8?Q?Micha=C5=82_Purzy=C5=84ski?=) Date: Sun, 2 Oct 2016 16:38:36 +0200 Subject: [Bro] Monitoring a directory and running bro on the PCAPs In-Reply-To: References: Message-ID: <5E176900-F9A2-4DF7-884B-D38AF06C7480@gmail.com> So is netsniff-ng - well not technical multi threaded but multi process, yes. It does not do indexing but it is much lighter and friendly to tune. > On 2 Oct 2016, at 14:31, erik clark wrote: > > Moloch is a threaded pcap writer. You are writing multiple pcaps concurrently. Spewing that kind of content at bro probably will not have the desired effect, causing loss of session information and who knows what else. I agree that you should drop another link off your tap and feed it just to bro. 
> _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From Art.Maddalena at teamaol.com Sun Oct 2 09:14:48 2016 From: Art.Maddalena at teamaol.com (Art Maddalena) Date: Sun, 02 Oct 2016 16:14:48 +0000 Subject: [Bro] Monitoring a directory and running bro on the PCAPs In-Reply-To: <5E176900-F9A2-4DF7-884B-D38AF06C7480@gmail.com> References: <5E176900-F9A2-4DF7-884B-D38AF06C7480@gmail.com> Message-ID: Moloch is amazing and Erik makes a good point. I am likely going to continue to duplicate capture due to the amount of data being captured. Bro and Moloch are both fantastic compliments to most security stacks. I can't wait for the latest bro release to come out of beta! @Michael: if you haven't checked out Moloch recently I would recommend checking out the latest version and giving it a go as we are constantly developing! Open source ftw! Thanks again for everyone's input! This community is fantastically helpful. - Art On Sun, Oct 2, 2016 at 10:38 Micha? Purzy?ski wrote: > So is netsniff-ng - well not technical multi threaded but multi process, > yes. It does not do indexing but it is much lighter and friendly to tune. > > > On 2 Oct 2016, at 14:31, erik clark wrote: > > > > Moloch is a threaded pcap writer. You are writing multiple pcaps > concurrently. Spewing that kind of content at bro probably will not have > the desired effect, causing loss of session information and who knows what > else. I agree that you should drop another link off your tap and feed it > just to bro. > > _______________________________________________ > > Bro mailing list > > bro at bro-ids.org > > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161002/34c745e1/attachment.html From brot212 at googlemail.com Mon Oct 3 04:14:30 2016 From: brot212 at googlemail.com (Dane Wullen) Date: Mon, 3 Oct 2016 13:14:30 +0200 Subject: [Bro] New layer 2 analyzer Message-ID: <6aec189a-afcc-99dd-3af4-76853e4982c5@googlemail.com> Hi there, I want to write an analyzer to detect EtherCat traffic, which is encapsulated in layer 2 (like ARP). I wanted use the BinPAC language to create this analyzer, but I found out that BinPAC only supports protocols that areencapsulated in TCP/UDP. (correct me if I'm wrong :-) ) Now I'm thinking about writing that analyzer without BinPAC, but I'm not really sure where to start. Can anyone give me a few hints or could tell me his/her experience in writing a new protocol analyzer with C++ for Bro? Thank you and have a nice day! -Dane -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161003/dc9d4dea/attachment.html From jlay at slave-tothe-box.net Mon Oct 3 08:24:42 2016 From: jlay at slave-tothe-box.net (James Lay) Date: Mon, 03 Oct 2016 09:24:42 -0600 Subject: [Bro] Quick load balancing question Message-ID: <3d3c54a1963ce74d557bbe4c618627e3@localhost> So here's an instance of bro via command line with two nic's: USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND root 2278 0.0 0.0 66264 1468 ? S Sep20 0:00 \_ sudo /usr/local/bro/bin/bro --no-checksums -i eth0 -i ppp0 --filter not ip6 local Site::local_nets += { 192.168.1.0/24 } root 2280 22.2 4.0 1484900 247056 ? 
Sl Sep20 4208:33 \_ /usr/local/bro/bin/bro --no-checksums -i eth0 -i ppp0 --filter not ip6 local Site::local_nets += { 192.168.1.0/24 } An instance of bro using broctl standalone, just one nic: USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND root 7479 0.0 0.1 12572 2896 ? S 09:18 0:00 /bin/bash /opt/bro/share/broctl/scripts/run-bro -1 -i eth0 -U .status -p broctl -p broctl-live -p standalone -p local -p bro local.bro broctl broctl/standalone broctl/auto --no-checksums --filter not ip6 root 7485 25.9 2.8 523644 57328 ? Rl 09:18 0:11 /opt/bro/bin/bro -i eth0 -U .status -p broctl -p broctl-live -p standalone -p local -p bro local.bro broctl broctl/standalone broctl/auto --no-checksums --filter not ip6 root 7543 0.0 2.1 162880 42560 ? SN 09:18 0:00 /opt/bro/bin/bro -i eth0 -U .status -p broctl -p broctl-live -p standalone -p local -p bro local.bro broctl broctl/standalone broctl/auto --no-checksums --filter not ip6 Lastly an instance using pf_ring load balancing, just one one nic: USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND root 4072 0.0 0.0 12572 528 ? S Oct02 0:00 /bin/bash /opt/bro/share/broctl/scripts/run-bro -1 -U .status -p broctl -p broctl-live -p local -p logger local.bro broctl base/frameworks/cluster broctl/auto --no-checksums --filter not ipv6 root 4078 2.6 0.1 1365652 4040 ? Sl Oct02 41:53 \_ /opt/bro/bin/bro -U .status -p broctl -p broctl-live -p local -p logger local.bro broctl base/frameworks/cluster broctl/auto --no-checksums --filter not ipv6 root 4079 0.0 0.0 118428 832 ? SN Oct02 0:09 \_ /opt/bro/bin/bro -U .status -p broctl -p broctl-live -p local -p logger local.bro broctl base/frameworks/cluster broctl/auto --no-checksums --filter not ipv6 root 4121 0.0 0.0 12572 524 ? S Oct02 0:00 /bin/bash /opt/bro/share/broctl/scripts/run-bro -1 -U .status -p broctl -p broctl-live -p local -p manager local.bro broctl base/frameworks/cluster local-manager.bro broctl/auto --no-checksums --filter not ipv6 root 4127 3.7 0.4 119316 13592 ? S Oct02 58:03 \_ /opt/bro/bin/bro -U .status -p broctl -p broctl-live -p local -p manager local.bro broctl base/frameworks/cluster local-manager.bro broctl/auto --no-checksums --filter not ipv6 root 4128 0.0 0.0 118796 680 ? SN Oct02 0:07 \_ /opt/bro/bin/bro -U .status -p broctl -p broctl-live -p local -p manager local.bro broctl base/frameworks/cluster local-manager.bro broctl/auto --no-checksums --filter not ipv6 root 4161 0.0 0.0 12572 528 ? S Oct02 0:00 /bin/bash /opt/bro/share/broctl/scripts/run-bro -1 -U .status -p broctl -p broctl-live -p local -p proxy-1 local.bro broctl base/frameworks/cluster local-proxy broctl/auto --no-checksums --filter not ipv6 root 4167 3.7 0.2 111780 6344 ? S Oct02 58:13 \_ /opt/bro/bin/bro -U .status -p broctl -p broctl-live -p local -p proxy-1 local.bro broctl base/frameworks/cluster local-proxy broctl/auto --no-checksums --filter not ipv6 root 4186 0.0 0.0 118436 492 ? SN Oct02 0:07 \_ /opt/bro/bin/bro -U .status -p broctl -p broctl-live -p local -p proxy-1 local.bro broctl base/frameworks/cluster local-proxy broctl/auto --no-checksums --filter not ipv6 root 4225 0.0 0.0 12576 528 ? S Oct02 0:00 /bin/bash /opt/bro/share/broctl/scripts/run-bro 1 -i eth0 -U .status -p broctl -p broctl-live -p local -p worker-1-2 local.bro broctl base/frameworks/cluster local-worker.bro broctl/auto --no-checksums --filter not ipv6 root 4239 17.8 11.8 540792 361052 ? 
S Oct02 278:21 \_ /opt/bro/bin/bro -i eth0 -U .status -p broctl -p broctl-live -p local -p worker-1-2 local.bro broctl base/frameworks/cluster local-worker.bro broctl/auto --no-checksums --filter not ipv6 root 4245 0.0 8.6 380428 264796 ? SN Oct02 0:08 \_ /opt/bro/bin/bro -i eth0 -U .status -p broctl -p broctl-live -p local -p worker-1-2 local.bro broctl base/frameworks/cluster local-worker.bro broctl/auto --no-checksums --filter not ipv6 root 4230 0.0 0.0 12576 528 ? S Oct02 0:00 /bin/bash /opt/bro/share/broctl/scripts/run-bro 0 -i eth0 -U .status -p broctl -p broctl-live -p local -p worker-1-1 local.bro broctl base/frameworks/cluster local-worker.bro broctl/auto --no-checksums --filter not ipv6 root 4242 21.0 11.8 537128 362680 ? S Oct02 327:41 \_ /opt/bro/bin/bro -i eth0 -U .status -p broctl -p broctl-live -p local -p worker-1-1 local.bro broctl base/frameworks/cluster local-worker.bro broctl/auto --no-checksums --filter not ipv6 root 4248 0.0 8.6 380436 264768 ? SN Oct02 0:08 \_ /opt/bro/bin/bro -i eth0 -U .status -p broctl -p broctl-live -p local -p worker-1-1 local.bro broctl base/frameworks/cluster local-worker.bro broctl/auto --no-checksums --filter not ipv6 for the pf_ring I have the below: [worker-1] type=worker host=localhost interface=eth0 lb_method=pf_ring lb_procs=2 pin_cpus=0,1 So my question is twofold,...does each pinned cpu get a process, and, is there a way to get load balancing using just standalone, without needing the logger, worker, and proxy processes? Thank you. James From fatema.bannatwala at gmail.com Mon Oct 3 11:49:38 2016 From: fatema.bannatwala at gmail.com (fatema bannatwala) Date: Mon, 3 Oct 2016 14:49:38 -0400 Subject: [Bro] File extraction after checking hash. Message-ID: Hi, I was reading about the Files framework of Bro, and know that there are file analyzers available that can be attached to files that Bro sees on the network connection. I am currently extracting all the 'application/x-dosexec' files from http connections, and realized that there are lot of files that are just duplicates (i.e with same hashes). Hence was thinking to write some bro script that would use Files analysis FW and checkes the hash of the file first against a set of hashes already seen (extracted) by Bro and will skip the extraction of that file if it's present in the set of hashes. I tried adding the files::add_analyzer(f, Files::ANALYZER_EXTRACT,...) in file_new event, file_sniff event and file_state_removed event(except it didn't work here), but turns out that file_hash event triggers later than all these events and hashes get calculated after the file extraction analyzer has run. Hence wanted to ask is it possible to add Files::ANALYZER_EXTRACT AFTER Files::ANALYZER_MD5 analyzer so that I can get the hash first to compare against the set before making a decision to extract the file? Thanks, Fatema. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161003/ac0a334c/attachment.html From robin at icir.org Mon Oct 3 12:04:31 2016 From: robin at icir.org (Robin Sommer) Date: Mon, 3 Oct 2016 12:04:31 -0700 Subject: [Bro] New layer 2 analyzer In-Reply-To: <6aec189a-afcc-99dd-3af4-76853e4982c5@googlemail.com> References: <6aec189a-afcc-99dd-3af4-76853e4982c5@googlemail.com> Message-ID: <20161003190431.GI49820@icir.org> On Mon, Oct 03, 2016 at 13:14 +0200, Dane Wullen wrote: > Now I'm thinking about writing that analyzer without BinPAC, but I'm not > really sure where to start. 
Can anyone give me a few hints or could tell me > his/her experience in writing a new protocol analyzer with C++ for Bro? Yeah, BinPAC isn't a good tool for layer 2 protocols. Generally Bro's support for layer 2 analysis lacks behind the upper layers of the stack, it doesn't have as much abstraction / APIs in place for adding new analyzers. That said, looking at ARP is actually a good starting point. See analyzer/protocol/arp/ARP.cc, the main work happens there in ARP_Analyzer::NextPacket(). The method is called from NetSessions::NextPacket() (in Sessions.cc) after ARP has been identified in Packet::ProcessLayer2() (iosource/Packet.cc) Does that help? Robin -- Robin Sommer * ICSI/LBNL * robin at icir.org * www.icir.org/robin From seth at icir.org Mon Oct 3 19:36:30 2016 From: seth at icir.org (Seth Hall) Date: Mon, 3 Oct 2016 22:36:30 -0400 Subject: [Bro] File extraction after checking hash. In-Reply-To: References: Message-ID: > On Oct 3, 2016, at 2:49 PM, fatema bannatwala wrote: > > Hence wanted to ask is it possible to add Files::ANALYZER_EXTRACT AFTER Files::ANALYZER_MD5 analyzer so that I can get the hash first to compare against the set before making a decision to extract the file? Unfortunately not. Since we don't know the hash of the file when we see the beginning we can't yet determine that we don't want to extract the file. Sort of a chicken and egg problem. :) .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From brot212 at googlemail.com Tue Oct 4 00:28:49 2016 From: brot212 at googlemail.com (Dane Wullen) Date: Tue, 4 Oct 2016 09:28:49 +0200 Subject: [Bro] New layer 2 analyzer In-Reply-To: <20161003190431.GI49820@icir.org> References: <6aec189a-afcc-99dd-3af4-76853e4982c5@googlemail.com> <20161003190431.GI49820@icir.org> Message-ID: Yeah, I think that will help. Thank you. My first goal is to write some C++ code, so that EtherCat traffic will be detected. For someone with basic knowledge about C++, how much time will this take? Thanks -Dane Am 03.10.2016 um 21:04 schrieb Robin Sommer: > Yeah, BinPAC isn't a good tool for layer 2 protocols. Generally Bro's > support for layer 2 analysis lacks behind the upper layers of the > stack, it doesn't have as much abstraction / APIs in place for adding > new analyzers. > > That said, looking at ARP is actually a good starting point. See > analyzer/protocol/arp/ARP.cc, the main work happens there in > ARP_Analyzer::NextPacket(). The method is called from > NetSessions::NextPacket() (in Sessions.cc) after ARP has been > identified in Packet::ProcessLayer2() (iosource/Packet.cc) > > Does that help? > > Robin > From jedwards2728 at gmail.com Tue Oct 4 01:45:52 2016 From: jedwards2728 at gmail.com (John Edwards) Date: Tue, 4 Oct 2016 18:45:52 +1000 Subject: [Bro] NAT connection logs Message-ID: Hi all In my implementation of bro I am observing traffic from two different zones in the one physical box I have one physical powerful system that has two optical feeds from a passive tap that observes traffic from inside a firewall and outside the firewall. A lot of the connections are NAT leaving our gateway My question is regarding logging , with a cluster configuration (or any bro configuration for that matter) if a connection is outbound to an ip of 1.2.3.4 does bro see the connection as two separate streams with two separate log entries to follow that stream? Or one stream and the NAT conversion is within the log? 
I'm assuming the former and it sees it as two separate connections I'm just considering if it's worth having that level of visibility as my logs folder is a combination of both interfaces obviously and don't want to be potentially storing duplicate data :) all data is then ingested into a SIEM so I can search both IP's if I know what they are but if I can reduce it down to one search query and see the whole connection obviously that's better :) Cheers John -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161004/cb2a9e6b/attachment.html From jan.grashoefer at gmail.com Tue Oct 4 04:35:35 2016 From: jan.grashoefer at gmail.com (=?UTF-8?Q?Jan_Grash=c3=b6fer?=) Date: Tue, 4 Oct 2016 13:35:35 +0200 Subject: [Bro] New layer 2 analyzer In-Reply-To: <20161003190431.GI49820@icir.org> References: <6aec189a-afcc-99dd-3af4-76853e4982c5@googlemail.com> <20161003190431.GI49820@icir.org> Message-ID: Out of curiosity: Is the plugin interface for layer 2 protocols mentioned in https://github.com/bro/bro/pull/76 still on the table? Jan From fatema.bannatwala at gmail.com Tue Oct 4 05:34:15 2016 From: fatema.bannatwala at gmail.com (fatema bannatwala) Date: Tue, 4 Oct 2016 08:34:15 -0400 Subject: [Bro] File extraction after checking hash. In-Reply-To: References: Message-ID: Thanks Seth for confirming! I think we can go through the extractions afterwards and write some sort of script to delete the dups. :) And same for hashes, asked Wes Young about querying limit to cif server (REN-ISAC) for hashes. I know that we can query the cif server for a given hash, and get back the results with cif confidence rate and other respective fields. Hence will be writing some scripts to get unique hashes and malware execs from traffic :) Thanks! Fatema. On Mon, Oct 3, 2016 at 10:36 PM, Seth Hall wrote: > > > On Oct 3, 2016, at 2:49 PM, fatema bannatwala < > fatema.bannatwala at gmail.com> wrote: > > > > Hence wanted to ask is it possible to add Files::ANALYZER_EXTRACT AFTER > Files::ANALYZER_MD5 analyzer so that I can get the hash first to compare > against the set before making a decision to extract the file? > > Unfortunately not. Since we don't know the hash of the file when we see > the beginning we can't yet determine that we don't want to extract the > file. Sort of a chicken and egg problem. :) > > .Seth > > -- > Seth Hall > International Computer Science Institute > (Bro) because everyone has a network > http://www.bro.org/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161004/4453a38d/attachment.html From philosnef at gmail.com Tue Oct 4 05:47:14 2016 From: philosnef at gmail.com (erik clark) Date: Tue, 4 Oct 2016 08:47:14 -0400 Subject: [Bro] File extraction after checking hash. Message-ID: Can't you simply write a script that calls file extract at a later date? I would think to hook it into file intel which runs after the file analysis (its comparing hashes) and extract at that point, not before... -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161004/287c4eb6/attachment-0001.html From seth at icir.org Tue Oct 4 07:21:20 2016 From: seth at icir.org (Seth Hall) Date: Tue, 4 Oct 2016 10:21:20 -0400 Subject: [Bro] File extraction after checking hash. 
In-Reply-To: References: Message-ID: <494BEF05-A299-4789-9EEB-AFA3ABB4C15E@icir.org> > On Oct 4, 2016, at 8:47 AM, erik clark wrote: > > Can't you simply write a script that calls file extract at a later date? I would think to hook it into file intel which runs after the file analysis (its comparing hashes) and extract at that point, not before... I've been thinking about some potential directions we could go that might open the door to doing this in some cases for the next release, but for now imagine that your file is 10G. We can't keep that much data in memory but you don't know the file hash until you've seen every byte of that file. You can't choose to extract the file at the end because all of the content for that file is already gone. You'd have to extract it up front and make the decision to keep it or delete it after the fact. .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From philosnef at gmail.com Tue Oct 4 07:33:12 2016 From: philosnef at gmail.com (erik clark) Date: Tue, 4 Oct 2016 10:33:12 -0400 Subject: [Bro] File extraction after checking hash. In-Reply-To: <494BEF05-A299-4789-9EEB-AFA3ABB4C15E@icir.org> References: <494BEF05-A299-4789-9EEB-AFA3ABB4C15E@icir.org> Message-ID: Hm, good point. Is there somewhere in the analysis framework where you can say, if a file is above x bytes, kill the analysis process? I ask, because I see this as somewhat related to the gridftp problem at lbl. If we have large tarballs or zip files or whatever crossing the wire, killing those off at say, a 5 gig point or so, seems reasonable. As you mentioned that is quite a lot of memory being consumed by extraction. :/ On Tue, Oct 4, 2016 at 10:21 AM, Seth Hall wrote: > > > On Oct 4, 2016, at 8:47 AM, erik clark wrote: > > > > Can't you simply write a script that calls file extract at a later date? > I would think to hook it into file intel which runs after the file analysis > (its comparing hashes) and extract at that point, not before... > > I've been thinking about some potential directions we could go that might > open the door to doing this in some cases for the next release, but for now > imagine that your file is 10G. We can't keep that much data in memory but > you don't know the file hash until you've seen every byte of that file. > You can't choose to extract the file at the end because all of the content > for that file is already gone. You'd have to extract it up front and make > the decision to keep it or delete it after the fact. > > .Seth > > > -- > Seth Hall > International Computer Science Institute > (Bro) because everyone has a network > http://www.bro.org/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161004/75828785/attachment.html From seth at icir.org Tue Oct 4 07:39:35 2016 From: seth at icir.org (Seth Hall) Date: Tue, 4 Oct 2016 10:39:35 -0400 Subject: [Bro] File extraction after checking hash. In-Reply-To: References: <494BEF05-A299-4789-9EEB-AFA3ABB4C15E@icir.org> Message-ID: <7CAABB96-4515-4E49-AAEA-7D486D039B5A@icir.org> > On Oct 4, 2016, at 10:33 AM, erik clark wrote: > > Hm, good point. Is there somewhere in the analysis framework where you can say, if a file is above x bytes, kill the analysis process? I ask, because I see this as somewhat related to the gridftp problem at lbl. 
If we have large tarballs or zip files or whatever crossing the wire, Yeah, I've been thinking about this problem for a while and I might take a stab at addressing it in 2.6 (although there will be loads of caveats!). > killing those off at say, a 5 gig point or so, seems reasonable. As you mentioned that is quite a lot of memory being consumed by extraction. :/ Now what if you have 20 5gig transfers going on concurrently? :) .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From fatema.bannatwala at gmail.com Tue Oct 4 07:42:59 2016 From: fatema.bannatwala at gmail.com (fatema bannatwala) Date: Tue, 4 Oct 2016 10:42:59 -0400 Subject: [Bro] File extraction after checking hash. In-Reply-To: References: <494BEF05-A299-4789-9EEB-AFA3ABB4C15E@icir.org> Message-ID: I think following could be used to some extent for crude analyses of the file on wire (please correct me if m wrong): event: file_extraction_limit Type: event (f: fa_file, args: any, limit: count, len: count) Desc: This event is generated when a file extraction analyzer is about to exceed the maximum permitted file size allowed by the extract_limit field of Files::AnalyzerArgs. The analyzer is automatically removed from file f. Files::remove_analyzer Type:function (f: fa_file, tag: Files::Tag, args: Files::AnalyzerArgs &default =[chunk_event=, stream_event=,extract_filename=, extract_limit=104857600] &optional) :bool Files::stop Type:function (f: fa_file) : bool Stops/ignores any further analysis of a given file. On Tue, Oct 4, 2016 at 10:33 AM, erik clark wrote: > Hm, good point. Is there somewhere in the analysis framework where you can > say, if a file is above x bytes, kill the analysis process? I ask, because > I see this as somewhat related to the gridftp problem at lbl. If we have > large tarballs or zip files or whatever crossing the wire, killing those > off at say, a 5 gig point or so, seems reasonable. As you mentioned that is > quite a lot of memory being consumed by extraction. :/ > > On Tue, Oct 4, 2016 at 10:21 AM, Seth Hall wrote: > >> >> > On Oct 4, 2016, at 8:47 AM, erik clark wrote: >> > >> > Can't you simply write a script that calls file extract at a later >> date? I would think to hook it into file intel which runs after the file >> analysis (its comparing hashes) and extract at that point, not before... >> >> I've been thinking about some potential directions we could go that might >> open the door to doing this in some cases for the next release, but for now >> imagine that your file is 10G. We can't keep that much data in memory but >> you don't know the file hash until you've seen every byte of that file. >> You can't choose to extract the file at the end because all of the content >> for that file is already gone. You'd have to extract it up front and make >> the decision to keep it or delete it after the fact. >> >> .Seth >> >> >> -- >> Seth Hall >> International Computer Science Institute >> (Bro) because everyone has a network >> http://www.bro.org/ >> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161004/8b89275d/attachment.html From robin at icir.org Tue Oct 4 07:45:25 2016 From: robin at icir.org (Robin Sommer) Date: Tue, 4 Oct 2016 07:45:25 -0700 Subject: [Bro] New layer 2 analyzer In-Reply-To: References: <6aec189a-afcc-99dd-3af4-76853e4982c5@googlemail.com> <20161003190431.GI49820@icir.org> Message-ID: <20161004144524.GE66878@icir.org> On Tue, Oct 04, 2016 at 13:35 +0200, Jan Grash?fer wrote: > Out of curiosity: Is the plugin interface for layer 2 protocols > mentioned in https://github.com/bro/bro/pull/76 still on the table? Yes, still on the table, but nobody has started to work on in yet. Robin -- Robin Sommer * ICSI/LBNL * robin at icir.org * www.icir.org/robin From seth at icir.org Tue Oct 4 07:45:28 2016 From: seth at icir.org (Seth Hall) Date: Tue, 4 Oct 2016 10:45:28 -0400 Subject: [Bro] File extraction after checking hash. In-Reply-To: References: <494BEF05-A299-4789-9EEB-AFA3ABB4C15E@icir.org> Message-ID: <0E6A3EC9-A8D4-47FA-8E67-EA0871FB183E@icir.org> > On Oct 4, 2016, at 10:42 AM, fatema bannatwala wrote: > > I think following could be used to some extent for crude analyses of the file on wire (please correct me if m wrong): > > event: file_extraction_limit That event is only if the maximum file size that you set for the file when you attached the extraction analyzer is about to be crossed. You would still have to start extracting the file for this event to happen. .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From robin at icir.org Tue Oct 4 07:47:20 2016 From: robin at icir.org (Robin Sommer) Date: Tue, 4 Oct 2016 07:47:20 -0700 Subject: [Bro] New layer 2 analyzer In-Reply-To: References: <6aec189a-afcc-99dd-3af4-76853e4982c5@googlemail.com> <20161003190431.GI49820@icir.org> Message-ID: <20161004144720.GF66878@icir.org> On Tue, Oct 04, 2016 at 09:28 +0200, Dane Wullen wrote: > For someone with basic knowledge about C++, how much time > will this take? Hard to tell, your best approach is probably looking at the code and seeing if you can follow pretty readily what's going on. Robin -- Robin Sommer * ICSI/LBNL * robin at icir.org * www.icir.org/robin From fatema.bannatwala at gmail.com Tue Oct 4 07:57:41 2016 From: fatema.bannatwala at gmail.com (fatema bannatwala) Date: Tue, 4 Oct 2016 10:57:41 -0400 Subject: [Bro] File extraction after checking hash. In-Reply-To: <0E6A3EC9-A8D4-47FA-8E67-EA0871FB183E@icir.org> References: <494BEF05-A299-4789-9EEB-AFA3ABB4C15E@icir.org> <0E6A3EC9-A8D4-47FA-8E67-EA0871FB183E@icir.org> Message-ID: Hmm, got it! :) On Tue, Oct 4, 2016 at 10:45 AM, Seth Hall wrote: > > > On Oct 4, 2016, at 10:42 AM, fatema bannatwala < > fatema.bannatwala at gmail.com> wrote: > > > > I think following could be used to some extent for crude analyses of the > file on wire (please correct me if m wrong): > > > > event: file_extraction_limit > > That event is only if the maximum file size that you set for the file when > you attached the extraction analyzer is about to be crossed. You would > still have to start extracting the file for this event to happen. > > .Seth > > -- > Seth Hall > International Computer Science Institute > (Bro) because everyone has a network > http://www.bro.org/ > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161004/24f031b8/attachment.html From philosnef at gmail.com Tue Oct 4 08:13:00 2016 From: philosnef at gmail.com (erik clark) Date: Tue, 4 Oct 2016 11:13:00 -0400 Subject: [Bro] host field Message-ID: Is there a non-invasive way to rename the host field in bro log output? -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161004/393910d4/attachment.html From seth at icir.org Tue Oct 4 09:14:45 2016 From: seth at icir.org (Seth Hall) Date: Tue, 4 Oct 2016 12:14:45 -0400 Subject: [Bro] host field In-Reply-To: References: Message-ID: > On Oct 4, 2016, at 11:13 AM, erik clark wrote: > > Is there a non-invasive way to rename the host field in bro log output? In 2.5.... redef Log::default_field_name_map = { ["host"] = "something_else", }; You can do this per-filter too, but this setting is a global default for all writers and filters. .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From philosnef at gmail.com Tue Oct 4 09:15:26 2016 From: philosnef at gmail.com (erik clark) Date: Tue, 4 Oct 2016 12:15:26 -0400 Subject: [Bro] host field In-Reply-To: References: Message-ID: Ah shoot, but not in 2.4. Ok, thanks! On Tue, Oct 4, 2016 at 12:14 PM, Seth Hall wrote: > > > On Oct 4, 2016, at 11:13 AM, erik clark wrote: > > > > Is there a non-invasive way to rename the host field in bro log output? > > In 2.5.... > > redef Log::default_field_name_map = { > ["host"] = "something_else", > }; > > You can do this per-filter too, but this setting is a global default for > all writers and filters. > > .Seth > > -- > Seth Hall > International Computer Science Institute > (Bro) because everyone has a network > http://www.bro.org/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161004/0f47a35c/attachment.html From shirkdog.bsd at gmail.com Tue Oct 4 09:32:56 2016 From: shirkdog.bsd at gmail.com (Michael Shirk) Date: Tue, 4 Oct 2016 12:32:56 -0400 Subject: [Bro] host field In-Reply-To: References: Message-ID: Seth, in 2.5 is this the way to make elastic happy, so you can rename 'id.orig_h' natively to whatever you want in Bro (minus the dots)? -- Michael Shirk Daemon Security, Inc. http://www.daemon-security.com On Oct 4, 2016 12:26 PM, "erik clark" wrote: > Ah shoot, but not in 2.4. Ok, thanks! > > On Tue, Oct 4, 2016 at 12:14 PM, Seth Hall wrote: > >> >> > On Oct 4, 2016, at 11:13 AM, erik clark wrote: >> > >> > Is there a non-invasive way to rename the host field in bro log output? >> >> In 2.5.... >> >> redef Log::default_field_name_map = { >> ["host"] = "something_else", >> }; >> >> You can do this per-filter too, but this setting is a global default for >> all writers and filters. >> >> .Seth >> >> -- >> Seth Hall >> International Computer Science Institute >> (Bro) because everyone has a network >> http://www.bro.org/ >> >> > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161004/e88c5d2f/attachment.html From jlay at slave-tothe-box.net Tue Oct 4 09:39:27 2016 From: jlay at slave-tothe-box.net (James Lay) Date: Tue, 04 Oct 2016 10:39:27 -0600 Subject: [Bro] host field In-Reply-To: References: Message-ID: Dot's were fixed in 2.4.0: https://www.elastic.co/blog/elasticsearch-2-4-0-released "You can disable the check which prohibits dots in field names by starting Elasticsearch as follows: export ES_JAVA_OPTS="-Dmapper.allow_dots_in_name=true" ./bin/elasticsearch" James On 2016-10-04 10:32, Michael Shirk wrote: > Seth, in 2.5 is this the way to make elastic happy, so you can rename > 'id.orig_h' natively to whatever you want in Bro (minus the dots)? > > -- > Michael Shirk > Daemon Security, Inc. > http://www.daemon-security.com > > On Oct 4, 2016 12:26 PM, "erik clark" wrote: > >> Ah shoot, but not in 2.4. Ok, thanks! >> >> On Tue, Oct 4, 2016 at 12:14 PM, Seth Hall wrote: >> >>>> On Oct 4, 2016, at 11:13 AM, erik clark >>> wrote: >>>> >>>> Is there a non-invasive way to rename the host field in bro log >>> output? >>> >>> In 2.5.... >>> >>> redef Log::default_field_name_map = { >>> ["host"] = "something_else", >>> }; >>> >>> You can do this per-filter too, but this setting is a global >>> default for all writers and filters. >>> >>> .Seth >>> >>> -- >>> Seth Hall >>> International Computer Science Institute >>> (Bro) because everyone has a network >>> http://www.bro.org/ >> >> _______________________________________________ >> Bro mailing list >> bro at bro-ids.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro [1] > > > Links: > ------ > [1] http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From fatema.bannatwala at gmail.com Tue Oct 4 10:39:59 2016 From: fatema.bannatwala at gmail.com (fatema bannatwala) Date: Tue, 4 Oct 2016 13:39:59 -0400 Subject: [Bro] File extraction after checking hash. In-Reply-To: References: Message-ID: So here's a simple script that will add a column 'uniq_hash' to the files.log file that will show whether bro has seen that hash before (in one day duration). module Uniq_hashes; redef record Files::Info += { ## Adding a field column of host and uniq_hash to show from where ## the file got downloaded and whether seen first time or duplicate. host: string &optional &log; uniq_hash: bool &optional &log ; }; #global uniq_hashes: set[string] ; global uniq_hashes: set[string] &create_expire=1day; event file_hash(f: fa_file, kind: string, hash: string) { print "file_hash", f$id, kind, hash; if(f?$http && f$http?$host) f$info$host = f$http$host; if(hash in uniq_hashes) f$info$uniq_hash = F; else { add uniq_hashes[hash]; f$info$uniq_hash = T; } } And, then I can grep the hashes with uniq_hash=T and query the cif server for analysis. Also, can script to get the name of the extracted file from the 'extracted' field in files.log with uniq_hash=F and delete that file almost realtime, after Bro has extracted that file. Before I can test it in production, I want to ask if there is a way I can delete the contents of set uniq_hashes right at the midnight so that we can get unique files and hashes on a daily basis logged in files.log? 
I don't want that variable to grow out of bound, consuming lot of memory, hence thought 1 day should be reasonable period of time to flush the contents of the set and exact time line will give an idea of uniq hashes queried daily and no. of execs extracted on daily basis. Any help appreciated! Thanks, Fatema. On Tue, Oct 4, 2016 at 10:28 AM, Seth Hall wrote: > > > On Oct 4, 2016, at 8:34 AM, fatema bannatwala < > fatema.bannatwala at gmail.com> wrote: > > > > I know that we can query the cif server for a given hash, and get back > the results with cif confidence rate and other respective fields. > > Hence will be writing some scripts to get unique hashes and malware > execs from traffic :) > > Awesome! Anything you can do to package what you're doing well enough > that other people in higher-ed could use it too would be great. I've just > seen so many things in higher-ed that people will create but would be so > difficult to install anywhere else that they never get used anywhere else. > It's a shame that so much good work goes to waste because it's never made > generic. > > .Seth > > -- > Seth Hall > International Computer Science Institute > (Bro) because everyone has a network > http://www.bro.org/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161004/33b1833f/attachment-0001.html From philosnef at gmail.com Tue Oct 4 10:47:10 2016 From: philosnef at gmail.com (erik clark) Date: Tue, 4 Oct 2016 13:47:10 -0400 Subject: [Bro] icap plugin/analyzer Message-ID: Where is the icap plugin/analyzer located? I dont have Mark Fernandez's contact info to find out. Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161004/6927917b/attachment.html From jdopheid at illinois.edu Tue Oct 4 10:56:53 2016 From: jdopheid at illinois.edu (Dopheide, Jeannette M) Date: Tue, 4 Oct 2016 17:56:53 +0000 Subject: [Bro] icap plugin/analyzer In-Reply-To: References: Message-ID: <748F0D4F-9552-4A7E-9410-1320C9E4031D@illinois.edu> Mark posted this to the bro-dev list: From: on behalf of Mark Fernandez Date: Friday, September 30, 2016 at 1:42 PM To: "'bro-dev at bro.org'" Subject: [Bro-Dev] ICAP Analyzer: BinPAC vs Plugin :: RegEx Issues In support of submitting the ICAP Analyzer as a Bro Package, I am porting the ICAP Analyzer to build as a dynamic Plugin. Originally, I inserted the ICAP Analyzer straight into the source code tree, under /src/analyzer/protocol/icap, and compiled it as part of the Bro core. But in an effort to make it easier for others to integrate into their existing Bro instantiations, I am making the effort to make it a stand-alone Plugin instead? but the BinPAC parser is not working when I run it as a Plugin. The Plugin builds and installs without error, and I verify that the Plugin is enabled and that my ICAP main.bro script is loaded, but it is not producing any ICAP or HTTP related output: (a) It appears that the parser is not recognizing the ICAP Request messages whatsoever. (b) It starts to parse the ICAP Response messages; but it breaks mid-way thru the packet. I think the problem is within the BinPAC files where I use regular expressions to define a data element within the ICAP packet structures/records. In the ICAP Request message, the very first element is a regex pattern, so that?s why it fails to parse these packets at all. 
In the ICAP Response message, it parses the first element correctly, but then it bombs on the second element, which is a regex pattern. In the BinPAC help/reference document, it contains a section titled, ?Running Binpac-Generated Analyzer Standalone? [https://www.bro.org/sphinx/components/binpac/README.html#running-binpac-generated-analyzer-standalone], which states that to run binpac-generated code independent of Bro, the regex library must be substituted? I presume the stand-alone guidance applies to the Plugin? It must because I did not have this trouble when I built the analyzer straight into the Bro core. The regex library guidance says I need to include three header files: RE.h, bro-dummy.h, and binpac_pcre.h. You provide sample code for each file. Am I to copy-n-paste the sample code directly into my Plugin source code as three new headers files? Or do these three files exist elsewhere in the Bro source? I can find ?RE.h? in the source (/src/RE.h). And I can find ?binpac_regex.h? in the source (/aux/binpac/lib/binpac_regex.h), which seems similar, but I cannot find ?binpac_pcre.h? nor ?bro_dummy.h? anywhere. I need a little bit of advice? or a lot of advice :) Can I use RE.h and binpac_regex.h that exist in the Bro 2.4.1 distro? Or do I need to create the three header files and paste the sample code verbatim? Thanks! Mark Mark I. Fernandez MITRE Corporation Email: mfernandez at mitre.org MITRE is a not-for-profit corporation that operates several Federally Funded Research and Development Centers (FFRDCs) in the interests of the US Government. ------ Jeannette Dopheide Training and Outreach Coordinator National Center for Supercomputing Applications University of Illinois at Urbana-Champaign From: on behalf of erik clark Date: Tuesday, October 4, 2016 at 12:47 PM To: "bro at bro.org" Subject: [Bro] icap plugin/analyzer Where is the icap plugin/analyzer located? I dont have Mark Fernandez's contact info to find out. Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161004/ab6f5528/attachment.html From mfernandez at mitre.org Tue Oct 4 11:11:32 2016 From: mfernandez at mitre.org (Fernandez, Mark I) Date: Tue, 4 Oct 2016 18:11:32 +0000 Subject: [Bro] icap plugin/analyzer In-Reply-To: References: Message-ID: The ICAP Analyzer is not released yet. Originally, I compiled it straight into the Bro core, rather than as a Plugin. I support of releasing it to the Bro community, I am now translating it into a Plugin, but I am having trouble with the BinPAC-generated code using regular expressions in Plugin-land. Once I get the Plugin version working, then I will initiate the review/release process for the source code via my company?s public release office. When it is approved for release, I will send an email out to the [Bro] distro list to notify you (and others) that it is available. Cheers! Mark Mark I. Fernandez MITRE Corporation Email: mfernandez at mitre.org MITRE is a not-for-profit corporation that operates several Federally Funded Research and Development Centers (FFRDCs) in the interests of the US Government. From: bro-bounces at bro.org [mailto:bro-bounces at bro.org] On Behalf Of erik clark Sent: Tuesday, October 04, 2016 1:47 PM To: bro at bro.org Subject: [Bro] icap plugin/analyzer Where is the icap plugin/analyzer located? I dont have Mark Fernandez's contact info to find out. Thanks! -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161004/3af024dd/attachment-0001.html From jazoff at illinois.edu Tue Oct 4 11:40:30 2016 From: jazoff at illinois.edu (Azoff, Justin S) Date: Tue, 4 Oct 2016 18:40:30 +0000 Subject: [Bro] File extraction after checking hash. In-Reply-To: References: Message-ID: > On Oct 4, 2016, at 1:39 PM, fatema bannatwala wrote: > > > And, then I can grep the hashes with uniq_hash=T and query the cif server for analysis. > Also, can script to get the name of the extracted file from the 'extracted' field in files.log with uniq_hash=F > and delete that file almost realtime, after Bro has extracted that file. Do you know that the intel framework supports hashes? If you export a feed of hashes from CIF you can load that into bro and do the alerting on known hashes bad in real time. > Before I can test it in production, I want to ask if there is a way I can delete the contents of set uniq_hashes right at the midnight > so that we can get unique files and hashes on a daily basis logged in files.log? I don't want that variable to grow out of bound, > consuming lot of memory, hence thought 1 day should be reasonable period of time to flush the contents of the set and exact time line > will give an idea of uniq hashes queried daily and no. of execs extracted on daily basis. You can probably do it using something like this: global SECONDS_IN_DAY = 60*60*24; function midnight(): time { local now = network_time(); local dt = time_to_double(now); local mn = double_to_count(dt / SECONDS_IN_DAY) * SECONDS_IN_DAY; return double_to_time(mn); } function interval_to_midnight(): interval { return midnight() - network_time(); } event reset_hashes() { uniq_hashes = set(); #I think this is the proper way to clear a set? } event bro_init() { print "Time to midnight:", interval_to_midnight(); schedule interval_to_midnight() { reset_hashes()}; } I think that might work properly except for the timezone being in UTC, so it might need to be adjusted, or something different altogether Seth has this plugin: https://github.com/sethhall/bro-approxidate which would let you do local md = approxidate("midnight"); If it was packaged for bro-pkg it would be easier to install though :-) The known hosts/services/certs scripts need a framework to do things like this, so 2.6 may end up having this as a built in feature. -- - Justin Azoff From fatema.bannatwala at gmail.com Tue Oct 4 12:49:54 2016 From: fatema.bannatwala at gmail.com (fatema bannatwala) Date: Tue, 4 Oct 2016 15:49:54 -0400 Subject: [Bro] File extraction after checking hash. In-Reply-To: References: Message-ID: Hi Justin, >Do you know that the intel framework supports hashes? If you export a feed of hashes from CIF you can load that into bro and do the alerting on known hashes bad in real time. Yes. And that was the plan, but unfortunately, I couldn't get the list of the feeds (hashes) pulled down from REN-ISAC , that's interesting that they provide other feeds but hashes (will ask in REN-ISAC mailing list to confirm). But I figured out that you can query their database to get information about a particular hash. Also, tried looking for a good open source of feeds for hashes, but couldn't find it hence don't have any hash feeds currently in intel :( Thank you for the code, works perfect! :-) Made a little tweak, replaced network_time() with current_time() function at both the places. For some reason I was getting 0.0 as network_time() value when ran the code in try.bro.org with sample http pcap. 
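That 0.0 makes sense in hindsight: at least when reading a trace, network_time() stays at 0.0 until Bro has processed its first packet, so anything evaluated inside bro_init() -- which runs before any packets -- sees 0.0, while current_time() is the wall clock and is always set. A tiny untested illustration of the difference:

event bro_init()
    {
    # No packets have been processed yet, so network_time() is still 0.0 here.
    print "startup", network_time(), current_time();
    }

event connection_established(c: connection)
    {
    # Once traffic is flowing, network_time() follows the packet timestamps.
    print "in traffic", network_time();
    }

So scheduling off current_time() in bro_init() is the safer choice for this kind of once-a-day reset.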
Also, added "local mn_EST = mn + 14400.0; " in midnight() function to get local EST in quick and dirty way. :) (I know the best way to do ii to use Seth's plugin, will try that next). Hence, the complete script looks like this now: module Uniq_hashes; redef record Files::Info += { ## Adding a field column of host and uniq_hash to show from where ## the file got downloaded and whether seen first time or duplicate. host: string &optional &log; uniq_hash: bool &optional &log ; }; global SECONDS_IN_DAY = 60*60*24; global uniq_hashes: set[string] ; function midnight(): time { local now = current_time(); local dt = time_to_double(now); local mn = double_to_count(dt / SECONDS_IN_DAY) * SECONDS_IN_DAY; local mn_EST = mn + 14400.0; return double_to_time(mn_EST); } function interval_to_midnight(): interval { return midnight() - current_time(); } event reset_hashes() { uniq_hashes = set(); #I think this is the proper way to clear a set? } event file_hash(f: fa_file, kind: string, hash: string) { #print "file_hash", f$id, kind, hash; if(f?$http && f$http?$host) f$info$host = f$http$host; if(hash in uniq_hashes) f$info$uniq_hash = F; else { add uniq_hashes[hash]; f$info$uniq_hash = T; } } event bro_init() { #print "current_time", current_time(); #print "midnight", midnight(); #print "Time to midnight:", interval_to_midnight(); schedule interval_to_midnight() { reset_hashes()}; } Thanks, Fatema. On Tue, Oct 4, 2016 at 2:40 PM, Azoff, Justin S wrote: > > > On Oct 4, 2016, at 1:39 PM, fatema bannatwala < > fatema.bannatwala at gmail.com> wrote: > > > > > > And, then I can grep the hashes with uniq_hash=T and query the cif > server for analysis. > > Also, can script to get the name of the extracted file from the > 'extracted' field in files.log with uniq_hash=F > > and delete that file almost realtime, after Bro has extracted that file. > > Do you know that the intel framework supports hashes? If you export a > feed of hashes from CIF you can load that into bro and do the alerting on > known hashes bad in real time. > > > Before I can test it in production, I want to ask if there is a way I > can delete the contents of set uniq_hashes right at the midnight > > so that we can get unique files and hashes on a daily basis logged in > files.log? I don't want that variable to grow out of bound, > > consuming lot of memory, hence thought 1 day should be reasonable period > of time to flush the contents of the set and exact time line > > will give an idea of uniq hashes queried daily and no. of execs > extracted on daily basis. > > You can probably do it using something like this: > > global SECONDS_IN_DAY = 60*60*24; > > function midnight(): time > { > local now = network_time(); > local dt = time_to_double(now); > local mn = double_to_count(dt / SECONDS_IN_DAY) * SECONDS_IN_DAY; > return double_to_time(mn); > } > > function interval_to_midnight(): interval > { > return midnight() - network_time(); > } > event reset_hashes() > { > uniq_hashes = set(); #I think this is the proper way to clear a set? 
> } > > event bro_init() > { > print "Time to midnight:", interval_to_midnight(); > schedule interval_to_midnight() { reset_hashes()}; > } > > I think that might work properly except for the timezone being in UTC, so > it might need to be adjusted, or something different altogether > > Seth has this plugin: https://github.com/sethhall/bro-approxidate > > which would let you do > > local md = approxidate("midnight"); > > If it was packaged for bro-pkg it would be easier to install though :-) > > The known hosts/services/certs scripts need a framework to do things like > this, so 2.6 may end up having this as a built in feature. > > -- > - Justin Azoff > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161004/20f757b8/attachment.html From jan.grashoefer at gmail.com Tue Oct 4 13:22:06 2016 From: jan.grashoefer at gmail.com (=?UTF-8?Q?Jan_Grash=c3=b6fer?=) Date: Tue, 4 Oct 2016 22:22:06 +0200 Subject: [Bro] File extraction after checking hash. In-Reply-To: References: Message-ID: <5c765e0c-2959-8947-bed0-e43a545e4963@gmail.com> >> Do you know that the intel framework supports hashes? If you export a > feed of hashes from CIF you can load that into bro and do the alerting on > known hashes bad in real time. > > Yes. And that was the plan, but unfortunately, I couldn't get the list of > the feeds (hashes) pulled down from REN-ISAC If you come up with a feed, using the intel framework should be straight forward. We did a POC, extracting files (I think below 100MB) and just preserve them in case of an intel hit (see https://github.com/J-Gras/intel-extensions/blob/master/scripts/preserve_files.bro). The only thing to set up except extraction and this script is a cron job deleting the extracted files that aren't of interest. To avoid dups one might want to name the extracted files according to their hash or something like that. Jan From seth at icir.org Tue Oct 4 19:46:18 2016 From: seth at icir.org (Seth Hall) Date: Tue, 4 Oct 2016 22:46:18 -0400 Subject: [Bro] host field In-Reply-To: References: Message-ID: > On Oct 4, 2016, at 12:32 PM, Michael Shirk wrote: > > Seth, in 2.5 is this the way to make elastic happy, so you can rename 'id.orig_h' natively to whatever you want in Bro (minus the dots)? The way to make elasticsearch happy is probably this... redef Log::default_scope_sep = "_"; It changes all of the periods in field names to anything you want (underscore in this case). .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From seth at icir.org Tue Oct 4 19:49:09 2016 From: seth at icir.org (Seth Hall) Date: Tue, 4 Oct 2016 22:49:09 -0400 Subject: [Bro] host field In-Reply-To: References: Message-ID: > On Oct 4, 2016, at 12:39 PM, James Lay wrote: > > "You can disable the check which prohibits dots in field names by > starting Elasticsearch as follows: Haha, I replied to the other email before reading the whole thread. I recommend this method instead! 
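Roughly, combining the two knobs from this thread looks like the following (an untested sketch against 2.5; "hostname" is just an example replacement, pick whatever your Elasticsearch mappings expect):

# Flatten nested column names instead of loosening Elasticsearch's field-name check.
redef Log::default_scope_sep = "_";      # id.orig_h becomes id_orig_h, etc.

# Optionally rename individual columns as well.
redef Log::default_field_name_map = {
    ["host"] = "hostname",
};

Both are global defaults for every writer and filter; as mentioned earlier in the thread, the renaming can also be scoped to individual log filters if you only want it on certain streams.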
.Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From mpselab at gmail.com Tue Oct 4 20:58:10 2016 From: mpselab at gmail.com (M P) Date: Wed, 5 Oct 2016 06:58:10 +0300 Subject: [Bro] host field In-Reply-To: References: Message-ID: As far as I know as I understand it, going this route in 2.4 and then later upgrading to 5.x may create conflict and cause unforeseen issues, as suggested here: https://www.elastic.co/guide/en/elasticsearch/reference/current/dots-in-names.html On Tuesday, October 4, 2016, James Lay wrote: > Dot's were fixed in 2.4.0: > > https://www.elastic.co/blog/elasticsearch-2-4-0-released > > "You can disable the check which prohibits dots in field names by > starting Elasticsearch as follows: > > export ES_JAVA_OPTS="-Dmapper.allow_dots_in_name=true" > ./bin/elasticsearch" > > James > > On 2016-10-04 10:32, Michael Shirk wrote: > > Seth, in 2.5 is this the way to make elastic happy, so you can rename > > 'id.orig_h' natively to whatever you want in Bro (minus the dots)? > > > > -- > > Michael Shirk > > Daemon Security, Inc. > > http://www.daemon-security.com > > > > On Oct 4, 2016 12:26 PM, "erik clark" > wrote: > > > >> Ah shoot, but not in 2.4. Ok, thanks! > >> > >> On Tue, Oct 4, 2016 at 12:14 PM, Seth Hall > wrote: > >> > >>>> On Oct 4, 2016, at 11:13 AM, erik clark > > >>> wrote: > >>>> > >>>> Is there a non-invasive way to rename the host field in bro log > >>> output? > >>> > >>> In 2.5.... > >>> > >>> redef Log::default_field_name_map = { > >>> ["host"] = "something_else", > >>> }; > >>> > >>> You can do this per-filter too, but this setting is a global > >>> default for all writers and filters. > >>> > >>> .Seth > >>> > >>> -- > >>> Seth Hall > >>> International Computer Science Institute > >>> (Bro) because everyone has a network > >>> http://www.bro.org/ > >> > >> _______________________________________________ > >> Bro mailing list > >> bro at bro-ids.org > >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro [1] > > > > > > Links: > > ------ > > [1] http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > > > _______________________________________________ > > Bro mailing list > > bro at bro-ids.org > > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161005/52ac4e1c/attachment.html From seth at icir.org Wed Oct 5 05:12:24 2016 From: seth at icir.org (Seth Hall) Date: Wed, 5 Oct 2016 08:12:24 -0400 Subject: [Bro] host field In-Reply-To: References: Message-ID: <6256AE17-F9EA-4F8B-BE13-478018F48050@icir.org> > On Oct 4, 2016, at 11:58 PM, M P wrote: > > As far as I know as I understand it, going this route in 2.4 and then later upgrading to 5.x may create conflict and cause unforeseen issues, as suggested here: > > https://www.elastic.co/guide/en/elasticsearch/reference/current/dots-in-names.html Thanks for the link. If I understood that right, the only scenario where that's a problem would be if we had a field named "id" and another field named "id.orig_h". Due to the way the logging framework functions, Bro will never write out logs that would have that issue so I think what this link describes isn't an issue for us. 
.Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From seth at icir.org Wed Oct 5 05:17:18 2016 From: seth at icir.org (Seth Hall) Date: Wed, 5 Oct 2016 08:17:18 -0400 Subject: [Bro] Feature Request: Append In-Reply-To: References: Message-ID: <571B5F1B-189D-4019-8A1F-16FFEA63C4D1@icir.org> > On Sep 29, 2016, at 6:53 PM, James Lay wrote: > > I know I've brought this up before, but I was going to put this in on > the github but that feature isn't enabled. > > I know a lot of people just use broctl and be done with it, but I just > use it via command line most of the time. It would REALLY be nice have > a command line switch to not overwrite log files and just append to > existing files. Thank you. Yeah, this has been a bit of an unfortunate change. When we switched to the current logging format in 2.0, we changed the logging so you couldn't do append because the ascii writer in the default "bro log format" wants to put the header and footer in place. If the format of the logs changes between restarts the content wouldn't even be consistent (i.e., column offsets could change or be renamed). This request may be an early sign that we need to consider a bit of overhaul to the default writers in 2.6. The ascii writer is sort of overloaded by doing the "bro log format" and JSON logging, the JSON logging doesn't provide any indication of the structure of the logs being provided, you can't append with the ascii writer as you've indicated (although, if we had a dedicated json logger then it might make more sense to have an append mode). Definitely some issues to think about. .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From seth at icir.org Wed Oct 5 05:25:02 2016 From: seth at icir.org (Seth Hall) Date: Wed, 5 Oct 2016 08:25:02 -0400 Subject: [Bro] New Cluster configuration In-Reply-To: References: Message-ID: <6EA1B401-EBBB-4B53-AD38-4EE0D0055F9E@icir.org> > On Sep 30, 2016, at 3:56 AM, John Edwards wrote: > > So PF_RING as the front end, then a manager and proxy but each worker defined within the Cluster worker config as the same host but different interfaces. > > Or should i suggest getting additional hardware and splitting the interfaces? it seems a little silly that one worker can only monitor one interface i thought. thats why i thought id ask here first. You should be able to do what you're attempting to do on a single system. You could configure multiple workers, each sniffing a bridge interface and load balancing. Probably something like this, but with an appropriate number of processes for your system.... [worker-1] host=localhost type=worker interface=br0 lb_method=pf_ring lb_procs=4 [worker-2] host=localhost type=worker interface=br1 lb_method=pf_ring lb_procs=4 Your logs will be a bit repetitive though since it sounds like you're monitoring inside and outside of a NATing router. 
.Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From jlay at slave-tothe-box.net Wed Oct 5 05:35:49 2016 From: jlay at slave-tothe-box.net (James Lay) Date: Wed, 05 Oct 2016 06:35:49 -0600 Subject: [Bro] Feature Request: Append In-Reply-To: <571B5F1B-189D-4019-8A1F-16FFEA63C4D1@icir.org> References: <571B5F1B-189D-4019-8A1F-16FFEA63C4D1@icir.org> Message-ID: <1475670949.2380.2.camel@slave-tothe-box.net> On Wed, 2016-10-05 at 08:17 -0400, Seth Hall wrote: > > > > On Sep 29, 2016, at 6:53 PM, James Lay > > wrote: > > > > I know I've brought this up before, but I was going to put this in > > on? > > the github but that feature isn't enabled. > > > > I know a lot of people just use broctl and be done with it, but I > > just? > > use it via command line most of the time.??It would REALLY be nice > > have? > > a command line switch to not overwrite log files and just append > > to? > > existing files.??Thank you. > Yeah, this has been a bit of an unfortunate change.??When we switched > to the current logging format in 2.0, we changed the logging so you > couldn't do append because the ascii writer in the default "bro log > format" wants to put the header and footer in place.??If the format > of the logs changes between restarts the content wouldn't even be > consistent (i.e., column offsets could change or be renamed). > > This request may be an early sign that we need to consider a bit of > overhaul to the default writers in 2.6.??The ascii writer is sort of > overloaded by doing the "bro log format" and JSON logging, the JSON > logging doesn't provide any indication of the structure of the logs > being provided, you can't append with the ascii writer as you've > indicated (although, if we had a dedicated json logger then it might > make more sense to have an append mode).??Definitely some issues to > think about. > ?? > ? .Seth > > -- > Seth Hall > International Computer Science Institute > (Bro) because everyone has a network > http://www.bro.org/ Thanks Seth. ?Truth be told it wouldn't bother me one bit if the headers were written again...they're all prefaced with "#" anyways. ?Just to have it not create a new file and append to the current if it exists is all I'd really like to see at some point. ?And personally I love the ascii...makes it so easy to quickly search ? ?Anyway thanks for looking at this. James -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161005/d973c7ee/attachment.html From seth at icir.org Wed Oct 5 05:42:08 2016 From: seth at icir.org (Seth Hall) Date: Wed, 5 Oct 2016 08:42:08 -0400 Subject: [Bro] Feature Request: Append In-Reply-To: <1475670949.2380.2.camel@slave-tothe-box.net> References: <571B5F1B-189D-4019-8A1F-16FFEA63C4D1@icir.org> <1475670949.2380.2.camel@slave-tothe-box.net> Message-ID: > On Oct 5, 2016, at 8:35 AM, James Lay wrote: > > Thanks Seth. Truth be told it wouldn't bother me one bit if the headers were written again...they're all prefaced with "#" anyways. Yeah, I had that same thought. Would you mind filling a ticket in the tracker about log append in the ascii writer? Seems reasonable to me. 
.Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From jlay at slave-tothe-box.net Wed Oct 5 05:51:09 2016 From: jlay at slave-tothe-box.net (James Lay) Date: Wed, 05 Oct 2016 06:51:09 -0600 Subject: [Bro] Feature Request: Append In-Reply-To: References: <571B5F1B-189D-4019-8A1F-16FFEA63C4D1@icir.org> <1475670949.2380.2.camel@slave-tothe-box.net> Message-ID: <1475671869.2380.3.camel@slave-tothe-box.net> On Wed, 2016-10-05 at 08:42 -0400, Seth Hall wrote: > > > > On Oct 5, 2016, at 8:35 AM, James Lay > > wrote: > > > > Thanks Seth.??Truth be told it wouldn't bother me one bit if the > > headers were written again...they're all prefaced with "#" > > anyways.? > Yeah, I had that same thought.??Would you mind filling a ticket in > the tracker about log append in the ascii writer???Seems reasonable > to me. > > ? .Seth > > -- > Seth Hall > International Computer Science Institute > (Bro) because everyone has a network > http://www.bro.org/ Absolutely. ?Thanks Seth. James -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161005/4793b52d/attachment-0001.html From philosnef at gmail.com Wed Oct 5 06:00:24 2016 From: philosnef at gmail.com (erik clark) Date: Wed, 5 Oct 2016 09:00:24 -0400 Subject: [Bro] Feature Request: Append Message-ID: I agree that appending in json format mode would be nice. We are moving to json format away from tsv to save on tsidx bucket size in splunk. While I dont think we would see a major need for this, it would save analysts from having to scrounge through multiple log files for the same type if somehow the logs rotated out early because of a bro restart. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161005/2b1fb6a6/attachment.html From philosnef at gmail.com Wed Oct 5 06:35:45 2016 From: philosnef at gmail.com (erik clark) Date: Wed, 5 Oct 2016 09:35:45 -0400 Subject: [Bro] New Cluster configuration Message-ID: There is good reason to tap both inside and outside of a firewall, but only if you are tapping both sides of a firewall. Doing this on both sides of a router is a giant waste of time. That way you can see what actually got out, and not just what got to the firewall but not out. At my old job this is what we did, however we weren't natting everything (except ipv6, which did ipv4 translation, and had its own challenges). -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161005/07be3c50/attachment.html From philosnef at gmail.com Wed Oct 5 07:28:41 2016 From: philosnef at gmail.com (erik clark) Date: Wed, 5 Oct 2016 10:28:41 -0400 Subject: [Bro] host field In-Reply-To: References: Message-ID: OK, so. I absolutely must rename these fields, and I can not wait to deploy 2.5, and can not deploy beta. Does anyone know all the analyzers that have host explicitly defined in them so I can hack this manually? There is absolutely no way that Splunk can keep up with json format, because it has to run a regex against every event processed to rename the host field. (The value host in splunk is reserved....). We can not do this with the bro json app, because that just puts us right back at square one with tsidx file size issues. 
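For context, the JSON output being discussed here is a switch on the stock ASCII writer rather than a separate writer, so turning it on is a one-line redef. A minimal sketch, assuming the standard LogAscii options (field names, dots included, stay as-is unless the 2.5 renaming options above are also used):

    # Emit JSON instead of the tab-separated "bro log format".
    redef LogAscii::use_json = T;

    # Optional: ISO 8601 timestamps instead of epoch seconds.
    redef LogAscii::json_timestamps = JSON::TS_ISO8601;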
On Tue, Oct 4, 2016 at 10:46 PM, Seth Hall wrote: > > > On Oct 4, 2016, at 12:32 PM, Michael Shirk > wrote: > > > > Seth, in 2.5 is this the way to make elastic happy, so you can rename > 'id.orig_h' natively to whatever you want in Bro (minus the dots)? > > The way to make elasticsearch happy is probably this... > redef Log::default_scope_sep = "_"; > > It changes all of the periods in field names to anything you want > (underscore in this case). > > .Seth > > -- > Seth Hall > International Computer Science Institute > (Bro) because everyone has a network > http://www.bro.org/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161005/79e3691b/attachment.html From michalpurzynski1 at gmail.com Wed Oct 5 13:34:57 2016 From: michalpurzynski1 at gmail.com (=?utf-8?Q?Micha=C5=82_Purzy=C5=84ski?=) Date: Wed, 5 Oct 2016 22:34:57 +0200 Subject: [Bro] New Cluster configuration In-Reply-To: <6EA1B401-EBBB-4B53-AD38-4EE0D0055F9E@icir.org> References: <6EA1B401-EBBB-4B53-AD38-4EE0D0055F9E@icir.org> Message-ID: Also, use a modern kernel and afpacket rather then pfring. > On 5 Oct 2016, at 14:25, Seth Hall wrote: > > >> On Sep 30, 2016, at 3:56 AM, John Edwards wrote: >> >> So PF_RING as the front end, then a manager and proxy but each worker defined within the Cluster worker config as the same host but different interfaces. >> >> Or should i suggest getting additional hardware and splitting the interfaces? it seems a little silly that one worker can only monitor one interface i thought. thats why i thought id ask here first. > > You should be able to do what you're attempting to do on a single system. You could configure multiple workers, each sniffing a bridge interface and load balancing. > > Probably something like this, but with an appropriate number of processes for your system.... > > [worker-1] > host=localhost > type=worker > interface=br0 > lb_method=pf_ring > lb_procs=4 > > [worker-2] > host=localhost > type=worker > interface=br1 > lb_method=pf_ring > lb_procs=4 > > Your logs will be a bit repetitive though since it sounds like you're monitoring inside and outside of a NATing router. > > .Seth > > -- > Seth Hall > International Computer Science Institute > (Bro) because everyone has a network > http://www.bro.org/ > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From daniel.guerra69 at gmail.com Wed Oct 5 23:47:00 2016 From: daniel.guerra69 at gmail.com (Daniel Guerra) Date: Thu, 6 Oct 2016 08:47:00 +0200 Subject: [Bro] host field In-Reply-To: References: Message-ID: <438EFC25-40FE-4933-99C7-9EB996539541@gmail.com> Hi Seth, It works perfect ! I have the git version running with elastic 2.4 (2.5 gave some trouble again) without my nasty JSON.cc patch. Regards, Daniel > On 05 Oct 2016, at 04:46, Seth Hall wrote: > > >> On Oct 4, 2016, at 12:32 PM, Michael Shirk wrote: >> >> Seth, in 2.5 is this the way to make elastic happy, so you can rename 'id.orig_h' natively to whatever you want in Bro (minus the dots)? > > The way to make elasticsearch happy is probably this... > redef Log::default_scope_sep = "_"; > > It changes all of the periods in field names to anything you want (underscore in this case). 
> > .Seth > > -- > Seth Hall > International Computer Science Institute > (Bro) because everyone has a network > http://www.bro.org/ > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From mpselab at gmail.com Thu Oct 6 07:49:31 2016 From: mpselab at gmail.com (M P) Date: Thu, 6 Oct 2016 17:49:31 +0300 Subject: [Bro] New Cluster configuration In-Reply-To: References: <6EA1B401-EBBB-4B53-AD38-4EE0D0055F9E@icir.org> Message-ID: Hello Michal, Would you mind elaborating more, please? I am not trying to hijack the thread but more interested in the suggestion. Any pointers are welcome. MP. On Wednesday, October 5, 2016, Micha? Purzy?ski wrote: > Also, use a modern kernel and afpacket rather then pfring. > > > On 5 Oct 2016, at 14:25, Seth Hall > wrote: > > > > > >> On Sep 30, 2016, at 3:56 AM, John Edwards > wrote: > >> > >> So PF_RING as the front end, then a manager and proxy but each worker > defined within the Cluster worker config as the same host but different > interfaces. > >> > >> Or should i suggest getting additional hardware and splitting the > interfaces? it seems a little silly that one worker can only monitor one > interface i thought. thats why i thought id ask here first. > > > > You should be able to do what you're attempting to do on a single > system. You could configure multiple workers, each sniffing a bridge > interface and load balancing. > > > > Probably something like this, but with an appropriate number of > processes for your system.... > > > > [worker-1] > > host=localhost > > type=worker > > interface=br0 > > lb_method=pf_ring > > lb_procs=4 > > > > [worker-2] > > host=localhost > > type=worker > > interface=br1 > > lb_method=pf_ring > > lb_procs=4 > > > > Your logs will be a bit repetitive though since it sounds like you're > monitoring inside and outside of a NATing router. > > > > .Seth > > > > -- > > Seth Hall > > International Computer Science Institute > > (Bro) because everyone has a network > > http://www.bro.org/ > > > > > > _______________________________________________ > > Bro mailing list > > bro at bro-ids.org > > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161006/8646a141/attachment.html From eshelton at butler.net Thu Oct 6 10:26:01 2016 From: eshelton at butler.net (eshelton) Date: Thu, 6 Oct 2016 11:26:01 -0600 Subject: [Bro] 2.5 Beta cluster issue Message-ID: Previously, I have successfully run a 2.4.1 Bro cluster with 160 workers processes. After updating to 2.5_beta, I'm suddenly seeing an issue crop up where I'm unable to start this same number of worker processes without the manager and logger crashing either immediately, or shortly after restarting the cluster. I'm able to successfully get to 140 worker processes, but when I try to add the last two nodes (10 worker procs each) back into the mix, things go wonky quickly. There is no crash report being generated as I would have normally expected. I have checked for orphan processes within the cluster, and none exist. 
I'm wondering if this is re-manifestation of an issue Justin Azoff assisted me with in the past (Bro 2.4.1 cluster) where he noted that around 180 worker procs, this sort of issue can happen. In this previous case after finding orphaned worker processes and killing them, I was able to successfully start my cluster at full strength. Any input regarding this issue would be greatly appreciated. Respectfully, -Erin Shelton Program Manager: Incident Response and Network Security Office of Information Technology University of Colorado Boulder -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161006/10ce316d/attachment.html From jazoff at illinois.edu Thu Oct 6 10:48:30 2016 From: jazoff at illinois.edu (Azoff, Justin S) Date: Thu, 6 Oct 2016 17:48:30 +0000 Subject: [Bro] 2.5 Beta cluster issue In-Reply-To: References: Message-ID: <8027342C-EAFE-4F3B-8BFA-1C37627D4ECC@illinois.edu> > On Oct 6, 2016, at 1:26 PM, eshelton wrote: > > Previously, I have successfully run a 2.4.1 Bro cluster with 160 workers processes. After updating to 2.5_beta, I'm suddenly seeing an issue crop up where I'm unable to start this same number of worker processes without the manager and logger crashing either immediately, or shortly after restarting the cluster. I'm able to successfully get to 140 worker processes, but when I try to add the last two nodes (10 worker procs each) back into the mix, things go wonky quickly. There is no crash report being generated as I would have normally expected. I have checked for orphan processes within the cluster, and none exist. > > I'm wondering if this is re-manifestation of an issue Justin Azoff assisted me with in the past (Bro 2.4.1 cluster) where he noted that around 180 worker procs, this sort of issue can happen. In this previous case after finding orphaned worker processes and killing them, I was able to successfully start my cluster at full strength. > > Any input regarding this issue would be greatly appreciated. Yes.. This is likely the same issue. There was a change committed just after the 2.5 beta that fixed some file descriptor leaking. With the fix in place 2.5 should support more workers than 2.4.1 did. With broker in 2.6 the issue should hopefully be gone completely. To fix it you can switch to using bro from git, or apply this small change to the beta: commit b3a7d07e66b027a56e57ba010998639ff0d6da86 Author: Daniel Thayer Date: Wed Aug 31 14:07:44 2016 -0500 Added a missing fclose in scan.l On OS X, Bro was failing to startup without first using the "ulimit -n" command to increase the allowed number of open files (OS X has a much lower default limit than Linux or FreeBSD). diff --git a/src/scan.l b/src/scan.l index a6e37a6..a026b31 100644 --- a/src/scan.l +++ b/src/scan.l @@ -599,7 +599,12 @@ static int load_files(const char* orig_file) ino_t i = get_inode_num(f, file_path); if ( already_scanned(i) ) + { + if ( f != stdin ) + fclose(f); + return 0; + } ScannedFile sf(i, file_stack.length(), file_path); files_scanned.push_back(sf); -- - Justin Azoff From zeolla at gmail.com Thu Oct 6 11:55:33 2016 From: zeolla at gmail.com (Zeolla@GMail.com) Date: Thu, 06 Oct 2016 18:55:33 +0000 Subject: [Bro] Monitoring for MAC address Message-ID: I have a use case where I would like to monitor for certain MAC addresses in use. I took a look at the Intel framework and it doesn't seem to have a type that can handle this. Has anybody else encountered a similar scenario in the past? 
The list will be ever-evolving and so I would like to be able to modify it without having to restart my cluster (hence considering the Intel framework). I did find this thread , and if I have to, I will just write a script that uses known_devices. Thanks, Jon -- Jon -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161006/3c47c3a6/attachment.html From pkelley at hyperionavenue.com Thu Oct 6 12:05:54 2016 From: pkelley at hyperionavenue.com (Patrick Kelley) Date: Thu, 06 Oct 2016 12:05:54 -0700 Subject: [Bro] Monitoring for MAC address In-Reply-To: References: Message-ID: Maybe using this? Might work better than using Intel feeds. https://github.com/evernote/bro-scripts/blob/master/bolo/scripts/main.bro Patrick Kelley, CISSP The limit to which you have accepted being comfortable is the limit to which you have grown. Accept new challenges as an opportunity to enrich yourself and not as a point of potential failure. From: on behalf of "Zeolla at GMail.com" Date: Thursday, October 6, 2016 at 11:55 AM To: "bro at bro.org" Subject: [Bro] Monitoring for MAC address I have a use case where I would like to monitor for certain MAC addresses in use. I took a look at the Intel framework and it doesn't seem to have a type that can handle this. Has anybody else encountered a similar scenario in the past? The list will be ever-evolving and so I would like to be able to modify it without having to restart my cluster (hence considering the Intel framework). I did find this thread , and if I have to, I will just write a script that uses known_devices. Thanks, Jon -- Jon _______________________________________________ Bro mailing list bro at bro-ids.org http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161006/39822891/attachment.html From jan.grashoefer at gmail.com Thu Oct 6 12:50:07 2016 From: jan.grashoefer at gmail.com (=?UTF-8?Q?Jan_Grash=c3=b6fer?=) Date: Thu, 6 Oct 2016 21:50:07 +0200 Subject: [Bro] Monitoring for MAC address In-Reply-To: References: Message-ID: > I have a use case where I would like to monitor for certain MAC addresses > in use. I took a look at the Intel framework > > and > it doesn't seem to have a type that can handle this. Has anybody else > encountered a similar scenario in the past? I theory it should be possible to redef Intel::Type and add a type for MAC addresses as they are treated as strings by Bro anyway. > I did find this thread > , and > if I have to, I will just write a script that uses known_devices. Bro 2.5 will support logging of MAC addresses (see https://github.com/bro/bro/blob/master/scripts/site/local.bro#L98). Enabling this you would just have to add a seen script like the conn-established.bro script. Jan From zeolla at gmail.com Thu Oct 6 14:00:02 2016 From: zeolla at gmail.com (Zeolla@GMail.com) Date: Thu, 06 Oct 2016 21:00:02 +0000 Subject: [Bro] Monitoring for MAC address In-Reply-To: References: Message-ID: Very helpful, thank you both. Jon On Thu, Oct 6, 2016, 16:00 Jan Grash?fer wrote: > > I have a use case where I would like to monitor for certain MAC addresses > > in use. I took a look at the Intel framework > > < > https://www.bro.org/sphinx-git/scripts/base/frameworks/intel/main.bro.html#type-Intel::Type > > > > and > > it doesn't seem to have a type that can handle this. 
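To make Jan's "in theory" suggestion concrete, a seen-style script along the lines of conn-established.bro might look like the sketch below. The enum values are made up for illustration, and it assumes Bro 2.5's optional orig_l2_addr field on the connection record; whether the rest of the Intel framework copes cleanly with a new indicator type is exactly the caveat Jan raises:

    @load base/frameworks/intel

    # Hypothetical indicator type and "where" location for MAC addresses.
    redef enum Intel::Type += { MAC_ADDR };
    redef enum Intel::Where += { MAC_IN_CONN };

    event connection_established(c: connection)
        {
        # orig_l2_addr is only present when link-layer addresses were seen.
        if ( c?$orig_l2_addr )
            Intel::seen([$indicator=c$orig_l2_addr,
                         $indicator_type=MAC_ADDR,
                         $conn=c,
                         $where=MAC_IN_CONN]);
        }

Since MACs are just strings to Bro, as Jan points out, the intel file rows would carry them like any other string indicator.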
Has anybody else > > encountered a similar scenario in the past? > > I theory it should be possible to redef Intel::Type and add a type for > MAC addresses as they are treated as strings by Bro anyway. > > > I did find this thread > > , > and > > if I have to, I will just write a script that uses known_devices. > > Bro 2.5 will support logging of MAC addresses (see > https://github.com/bro/bro/blob/master/scripts/site/local.bro#L98). > Enabling this you would just have to add a seen script like the > conn-established.bro script. > > Jan > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -- Jon -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161006/079e35cd/attachment.html From drakearonhalt at gmail.com Thu Oct 6 20:01:50 2016 From: drakearonhalt at gmail.com (Drake Aronhalt) Date: Thu, 6 Oct 2016 23:01:50 -0400 Subject: [Bro] Bro crashing on start Message-ID: All, This morning I updated bro and pfring on my dev sensor to their respective git master branches and started receiving this error when I try to start bro: # broctl start starting logger ... starting manager ... starting proxy-1 ... starting worker-1-1 ... starting worker-1-2 ... starting worker-1-3 ... starting worker-1-4 ... starting worker-1-5 ... worker-1-5 terminated immediately after starting; check output with "diag" worker-1-4 terminated immediately after starting; check output with "diag" worker-1-1 terminated immediately after starting; check output with "diag" worker-1-3 terminated immediately after starting; check output with "diag" worker-1-2 terminated immediately after starting; check output with "diag" running 'broctl diag' gives me this fatal error: problem with interface eno33557248 (pcap_error: BPF program is not valid) pf_ring is loading properly as far as I can tell. My node.cfg is below: [logger] type=logger host=localhost [manager] type=manager host=localhost [proxy-1] type=proxy host=localhost [worker-1] type=worker host=localhost interface=eno33557248 lb_method=pf_ring lb_procs=5 pin_cpus=2,3,4,5,6 Any ideas on what causes this? Should I just roll back to my last config that worked, or did I miss a change in bro 2.5 config? Drake -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161006/e3c63625/attachment.html From philosnef at gmail.com Fri Oct 7 05:02:19 2016 From: philosnef at gmail.com (erik clark) Date: Fri, 7 Oct 2016 08:02:19 -0400 Subject: [Bro] cluster question Message-ID: I noticed the previous gentleman running 160 workers (I assume 16 boxes with 10 workers each??) in a cluster, and had a general question about this. If I am pumping out well above 5Gb/s, doesn't that mean running in a cluser that I am pushing 5 right back out the other side? If so, this doesn't seem to scale well beyond 5ish Gb/s. At what point, and how many pps, should we move away from a single manager host talking to cluster hosts? Even if there is no processing by bro on the manager, you still have bandwidth issues, unless you are loading up your bro manager with multiple 10 gig nics, and are loadbalancing upstream, in which case, why aren't you just load balancing to stand alone boxes each with their own manager, logger, and set of workers? 
It seems to me that running multiple physical bro hosts tied to a single manager is the poor mans solution to running proper load balancing hardware upstream. Am I mistaken? -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161007/df48867d/attachment-0001.html From hovsep.sanjay.levi at gmail.com Fri Oct 7 08:56:34 2016 From: hovsep.sanjay.levi at gmail.com (Hovsep Levi) Date: Fri, 7 Oct 2016 15:56:34 +0000 Subject: [Bro] Intel framework troubleshooting on Bro 2.5 Message-ID: Are there any tricks to use when debugging the Intel framework that would show parsing errors ? The problem we have is when combining multiple intel files one bad file seems to corrupt the entire lot. http://blog.bro.org/2014/01/intelligence-data-and-bro_4980.html Following that guide works fine, there are a number of intel hits on Tor activity within minutes of restarting Bro. When we add in the giant list of intel from CriticalStack the Tor intel hits no longer trigger which suggests an issue with that file. Commenting out the tor.intel file sort of narrows it down to the CriticalStack file but it also suggests that a bad intel file somehow corrupts previously read files. (they both use a list of Tor exit nodes from the Suricata project.) I've checked the files for correct headers, no spaces, and tab formatting all which seem to be OK. #---- from local.bro file ----# @load frameworks/intel/seen @load frameworks/intel/do_notice # Load custom intel feed @load local-intel.bro #---- local-intel.bro file ------# [bro at mgr /opt/bro]$ less /opt/bro/share/bro/site/local-intel.bro const feed_directory = "/opt/bro/feeds"; redef Intel::read_files += { # feed_directory + "/tor.intel", feed_directory + "/critical-stack/master-public.bro.dat" }; -Hovsep -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161007/04817903/attachment.html From jazoff at illinois.edu Fri Oct 7 09:03:47 2016 From: jazoff at illinois.edu (Azoff, Justin S) Date: Fri, 7 Oct 2016 16:03:47 +0000 Subject: [Bro] Intel framework troubleshooting on Bro 2.5 In-Reply-To: References: Message-ID: <47510B9D-EDF8-4AEA-9145-5D1F6F421519@illinois.edu> > On Oct 7, 2016, at 11:56 AM, Hovsep Levi wrote: > > Are there any tricks to use when debugging the Intel framework that would show parsing errors ? First step would be to check reporter.log and stderr.log on the manager. -- - Justin Azoff From hovsep.sanjay.levi at gmail.com Fri Oct 7 09:23:11 2016 From: hovsep.sanjay.levi at gmail.com (Hovsep Levi) Date: Fri, 7 Oct 2016 16:23:11 +0000 Subject: [Bro] cluster question In-Reply-To: References: Message-ID: You sound a little confused, multi-node scaling is a feature of Bro and really the only way to monitor high volume locations. See the LBNL paper on Bro at 100G for an example. When using a front-end load-balancer you are distributing the traffic directly to the worker nodes which in turn produce metadata to be sent to the manager node. The decision to use more than one box is relative to the processing requirements, the basic formula is something like one 3.0 Ghz core per 250Mbps of traffic. If you use multiple managers you break global visibility in the scripting context, proxies share state among the entire cluster which operates as a sort of giant shared memory space. Multiple managers is essentially independent Bro clusters. 
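On the intel-loading question a few messages up: besides reporter.log and stderr.log, one low-tech way to see exactly what the input framework hands to the Intel framework is to add a second handler for the event it already uses internally. A minimal sketch, assuming the Intel::read_entry signature from base/frameworks/intel/input.bro:

    @load base/frameworks/intel

    # Print every indicator as it is read from the files in Intel::read_files.
    # This runs on whichever node reads the files (the manager in a cluster).
    event Intel::read_entry(desc: Input::EventDescription, tpe: Input::Event, item: Intel::Item)
        {
        print fmt("intel: %s indicator %s from %s",
                  item$indicator_type, item$indicator, desc$source);
        }

If indicators from the CriticalStack file never show up here, the problem is in reading or parsing that file; if they do, it lies further along, in insertion or matching.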
I think a basic example would be a scanning script or SQL injection script... if the threshold is 25 and 10.1.1.1 attacks your entire network each cluster only sees 1/n of that activity and may not fire an event because of the limited context. As for the bandwidth concerns you mention I'm not sure what you mean exactly. The metadata produced by the workers and sent to the manager (logs) are a fraction of the monitored raw traffic. HTH, -Hovsep On Fri, Oct 7, 2016 at 12:02 PM, erik clark wrote: > I noticed the previous gentleman running 160 workers (I assume 16 boxes > with 10 workers each??) in a cluster, and had a general question about this. > > If I am pumping out well above 5Gb/s, doesn't that mean running in a > cluser that I am pushing 5 right back out the other side? If so, this > doesn't seem to scale well beyond 5ish Gb/s. > > At what point, and how many pps, should we move away from a single manager > host talking to cluster hosts? Even if there is no processing by bro on the > manager, you still have bandwidth issues, unless you are loading up your > bro manager with multiple 10 gig nics, and are loadbalancing upstream, in > which case, why aren't you just load balancing to stand alone boxes each > with their own manager, logger, and set of workers? > > It seems to me that running multiple physical bro hosts tied to a single > manager is the poor mans solution to running proper load balancing hardware > upstream. Am I mistaken? > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161007/4916b0d3/attachment.html From hovsep.sanjay.levi at gmail.com Fri Oct 7 09:43:46 2016 From: hovsep.sanjay.levi at gmail.com (Hovsep Levi) Date: Fri, 7 Oct 2016 16:43:46 +0000 Subject: [Bro] Intel framework troubleshooting on Bro 2.5 In-Reply-To: <47510B9D-EDF8-4AEA-9145-5D1F6F421519@illinois.edu> References: <47510B9D-EDF8-4AEA-9145-5D1F6F421519@illinois.edu> Message-ID: Nothing stands out. Looking at base/frameworks/intel/input.bro is there a way to hook Input::add_event and have those events written to a log file ? I tried moving a new intel file into place but didn't notice anything in reporter.log or stderr. ex: cp master-public.bro.dat master-public.bro.dat.new && mv master-public.bro.dat.new master-public.bro.dat On Fri, Oct 7, 2016 at 4:03 PM, Azoff, Justin S wrote: > > > On Oct 7, 2016, at 11:56 AM, Hovsep Levi > wrote: > > > > Are there any tricks to use when debugging the Intel framework that > would show parsing errors ? > > First step would be to check reporter.log and stderr.log on the manager. > > -- > - Justin Azoff > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161007/df9c7942/attachment.html From neslog at gmail.com Fri Oct 7 10:29:57 2016 From: neslog at gmail.com (Neslog) Date: Fri, 7 Oct 2016 13:29:57 -0400 Subject: [Bro] cluster question In-Reply-To: References: Message-ID: Is that formula based on Myricom NIC or using PF_Ring? What's the best way to calculate the expected increase when switching to a custom nic? On Oct 7, 2016 12:31 PM, "Hovsep Levi" wrote: > You sound a little confused, multi-node scaling is a feature of Bro and > really the only way to monitor high volume locations. See the LBNL paper > on Bro at 100G for an example. 
When using a front-end load-balancer you > are distributing the traffic directly to the worker nodes which in turn > produce metadata to be sent to the manager node. > > The decision to use more than one box is relative to the processing > requirements, the basic formula is something like one 3.0 Ghz core per > 250Mbps of traffic. > > If you use multiple managers you break global visibility in the scripting > context, proxies share state among the entire cluster which operates as a > sort of giant shared memory space. Multiple managers is essentially > independent Bro clusters. I think a basic example would be a scanning > script or SQL injection script... if the threshold is 25 and 10.1.1.1 > attacks your entire network each cluster only sees 1/n of that activity and > may not fire an event because of the limited context. > > As for the bandwidth concerns you mention I'm not sure what you mean > exactly. The metadata produced by the workers and sent to the manager > (logs) are a fraction of the monitored raw traffic. > > HTH, > > -Hovsep > > > > On Fri, Oct 7, 2016 at 12:02 PM, erik clark wrote: > >> I noticed the previous gentleman running 160 workers (I assume 16 boxes >> with 10 workers each??) in a cluster, and had a general question about this. >> >> If I am pumping out well above 5Gb/s, doesn't that mean running in a >> cluser that I am pushing 5 right back out the other side? If so, this >> doesn't seem to scale well beyond 5ish Gb/s. >> >> At what point, and how many pps, should we move away from a single >> manager host talking to cluster hosts? Even if there is no processing by >> bro on the manager, you still have bandwidth issues, unless you are loading >> up your bro manager with multiple 10 gig nics, and are loadbalancing >> upstream, in which case, why aren't you just load balancing to stand alone >> boxes each with their own manager, logger, and set of workers? >> >> It seems to me that running multiple physical bro hosts tied to a single >> manager is the poor mans solution to running proper load balancing hardware >> upstream. Am I mistaken? >> >> _______________________________________________ >> Bro mailing list >> bro at bro-ids.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >> > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161007/a8e1a6d9/attachment-0001.html From michalpurzynski1 at gmail.com Fri Oct 7 10:40:55 2016 From: michalpurzynski1 at gmail.com (=?utf-8?Q?Micha=C5=82_Purzy=C5=84ski?=) Date: Fri, 7 Oct 2016 19:40:55 +0200 Subject: [Bro] cluster question In-Reply-To: References: Message-ID: <9DBF8353-6D8A-4EA9-B0F4-D4DC926013AC@gmail.com> The correct formula, valid for every nic is between 0-1Gbit per worker. Depends on the traffic type, CPU, bios settings, kernel settings and sometimes version, OS type version and settings, NIC and running scripts. The old 250 comes from a dark past and was for unknown traffic on unknown hardware. Sometimes I process 500 Mbit/sec per worker on Myricom sometimes I have quite a packet drop. Oh well ;) Also afpacket >> pfring. And there's afpacket, netmap, Myricom, napatech and dozen others ;) > On 7 Oct 2016, at 19:29, Neslog wrote: > > Is that formula based on Myricom NIC or using PF_Ring? 
What's the best way to calculate the expected increase when switching to a custom nic? > > >> On Oct 7, 2016 12:31 PM, "Hovsep Levi" wrote: >> You sound a little confused, multi-node scaling is a feature of Bro and really the only way to monitor high volume locations. See the LBNL paper on Bro at 100G for an example. When using a front-end load-balancer you are distributing the traffic directly to the worker nodes which in turn produce metadata to be sent to the manager node. >> >> The decision to use more than one box is relative to the processing requirements, the basic formula is something like one 3.0 Ghz core per 250Mbps of traffic. >> >> If you use multiple managers you break global visibility in the scripting context, proxies share state among the entire cluster which operates as a sort of giant shared memory space. Multiple managers is essentially independent Bro clusters. I think a basic example would be a scanning script or SQL injection script... if the threshold is 25 and 10.1.1.1 attacks your entire network each cluster only sees 1/n of that activity and may not fire an event because of the limited context. >> >> As for the bandwidth concerns you mention I'm not sure what you mean exactly. The metadata produced by the workers and sent to the manager (logs) are a fraction of the monitored raw traffic. >> >> HTH, >> >> -Hovsep >> >> >> >>> On Fri, Oct 7, 2016 at 12:02 PM, erik clark wrote: >>> I noticed the previous gentleman running 160 workers (I assume 16 boxes with 10 workers each??) in a cluster, and had a general question about this. >>> >>> If I am pumping out well above 5Gb/s, doesn't that mean running in a cluser that I am pushing 5 right back out the other side? If so, this doesn't seem to scale well beyond 5ish Gb/s. >>> >>> At what point, and how many pps, should we move away from a single manager host talking to cluster hosts? Even if there is no processing by bro on the manager, you still have bandwidth issues, unless you are loading up your bro manager with multiple 10 gig nics, and are loadbalancing upstream, in which case, why aren't you just load balancing to stand alone boxes each with their own manager, logger, and set of workers? >>> >>> It seems to me that running multiple physical bro hosts tied to a single manager is the poor mans solution to running proper load balancing hardware upstream. Am I mistaken? >>> >>> _______________________________________________ >>> Bro mailing list >>> bro at bro-ids.org >>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >> >> >> _______________________________________________ >> Bro mailing list >> bro at bro-ids.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161007/5be5d35c/attachment.html From neslog at gmail.com Fri Oct 7 10:59:00 2016 From: neslog at gmail.com (Neslog) Date: Fri, 7 Oct 2016 13:59:00 -0400 Subject: [Bro] cluster question In-Reply-To: <9DBF8353-6D8A-4EA9-B0F4-D4DC926013AC@gmail.com> References: <9DBF8353-6D8A-4EA9-B0F4-D4DC926013AC@gmail.com> Message-ID: Any performance hit seen using software based load balancing versus specialized NICs? (Have actual test results?) On Oct 7, 2016 1:40 PM, "Micha? Purzy?ski" wrote: > The correct formula, valid for every nic is between 0-1Gbit per worker. 
> Depends on the traffic type, CPU, bios settings, kernel settings and > sometimes version, OS type version and settings, NIC and running scripts. > > The old 250 comes from a dark past and was for unknown traffic on unknown > hardware. > > Sometimes I process 500 Mbit/sec per worker on Myricom sometimes I have > quite a packet drop. Oh well ;) > > Also afpacket >> pfring. And there's afpacket, netmap, Myricom, napatech > and dozen others ;) > > > On 7 Oct 2016, at 19:29, Neslog wrote: > > Is that formula based on Myricom NIC or using PF_Ring? What's the best > way to calculate the expected increase when switching to a custom nic? > > On Oct 7, 2016 12:31 PM, "Hovsep Levi" > wrote: > >> You sound a little confused, multi-node scaling is a feature of Bro and >> really the only way to monitor high volume locations. See the LBNL paper >> on Bro at 100G for an example. When using a front-end load-balancer you >> are distributing the traffic directly to the worker nodes which in turn >> produce metadata to be sent to the manager node. >> >> The decision to use more than one box is relative to the processing >> requirements, the basic formula is something like one 3.0 Ghz core per >> 250Mbps of traffic. >> >> If you use multiple managers you break global visibility in the scripting >> context, proxies share state among the entire cluster which operates as a >> sort of giant shared memory space. Multiple managers is essentially >> independent Bro clusters. I think a basic example would be a scanning >> script or SQL injection script... if the threshold is 25 and 10.1.1.1 >> attacks your entire network each cluster only sees 1/n of that activity and >> may not fire an event because of the limited context. >> >> As for the bandwidth concerns you mention I'm not sure what you mean >> exactly. The metadata produced by the workers and sent to the manager >> (logs) are a fraction of the monitored raw traffic. >> >> HTH, >> >> -Hovsep >> >> >> >> On Fri, Oct 7, 2016 at 12:02 PM, erik clark wrote: >> >>> I noticed the previous gentleman running 160 workers (I assume 16 boxes >>> with 10 workers each??) in a cluster, and had a general question about this. >>> >>> If I am pumping out well above 5Gb/s, doesn't that mean running in a >>> cluser that I am pushing 5 right back out the other side? If so, this >>> doesn't seem to scale well beyond 5ish Gb/s. >>> >>> At what point, and how many pps, should we move away from a single >>> manager host talking to cluster hosts? Even if there is no processing by >>> bro on the manager, you still have bandwidth issues, unless you are loading >>> up your bro manager with multiple 10 gig nics, and are loadbalancing >>> upstream, in which case, why aren't you just load balancing to stand alone >>> boxes each with their own manager, logger, and set of workers? >>> >>> It seems to me that running multiple physical bro hosts tied to a single >>> manager is the poor mans solution to running proper load balancing hardware >>> upstream. Am I mistaken? 
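As an illustration of why the shared, single-cluster view matters: the threshold-of-25 example quoted above is exactly the kind of count the SumStats framework aggregates across all workers. A rough sketch, with a made-up stream name, epoch and threshold, assuming the stock SumStats API:

    @load base/frameworks/sumstats

    event bro_init()
        {
        local r = SumStats::Reducer($stream="conn.attempt", $apply=set(SumStats::UNIQUE));
        SumStats::create([$name="toy-scan-check",
                          $epoch=5min,
                          $reducers=set(r),
                          $threshold=25.0,
                          $threshold_val(key: SumStats::Key, result: SumStats::Result) =
                              {
                              # Distinct responders contacted by this originator, cluster-wide.
                              return result["conn.attempt"]$unique + 0.0;
                              },
                          $threshold_crossed(key: SumStats::Key, result: SumStats::Result) =
                              {
                              print fmt("%s contacted 25+ distinct hosts", key$host);
                              }]);
        }

    event connection_attempt(c: connection)
        {
        SumStats::observe("conn.attempt",
                          SumStats::Key($host=c$id$orig_h),
                          SumStats::Observation($str=cat(c$id$resp_h)));
        }

Split the same traffic across independent clusters and each one keeps its own counts, so the threshold may never be crossed anywhere, which is the 1/n problem described above.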
>>> >>> _______________________________________________ >>> Bro mailing list >>> bro at bro-ids.org >>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >>> >> >> >> _______________________________________________ >> Bro mailing list >> bro at bro-ids.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >> > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161007/355d51a0/attachment.html From jan.grashoefer at gmail.com Fri Oct 7 11:04:33 2016 From: jan.grashoefer at gmail.com (=?UTF-8?Q?Jan_Grash=c3=b6fer?=) Date: Fri, 7 Oct 2016 20:04:33 +0200 Subject: [Bro] Intel framework troubleshooting on Bro 2.5 In-Reply-To: References: <47510B9D-EDF8-4AEA-9145-5D1F6F421519@illinois.edu> Message-ID: > Nothing stands out. Looking at base/frameworks/intel/input.bro is there a > way to hook Input::add_event and have those events written to a log file ? You could use the Intel::read_entry event. For validation of the files have a look at https://github.com/packetsled/bro_intel_linter. Can you reproduce the issue running a standalone deployment or against a pcap and is that issue new in Bro 2.5? Jan From dwaters at bioteam.net Fri Oct 7 13:40:35 2016 From: dwaters at bioteam.net (Darrain Waters) Date: Fri, 7 Oct 2016 15:40:35 -0500 Subject: [Bro] 5 node cluster Message-ID: Hello The myricom cards in my cluster nodes are dropping packets, and I am not getting any log information in prefix/logs. Did I miss something during the setup process ? Please see below for initial info and please let me know what else is needed. Thank you. Darrain I compiled bro using the option below. 
--with-pcap=/opt/snf/ [bromgr at bromgr ~]$ ldd /usr/local/bro/bin/bro | grep pcap libpcap.so.1 => /opt/snf/lib/libpcap.so.1 (0x00007faf9c3d5000) I get the following when I run capstats [BroControl] > capstats Interface kpps mbps (10s average) ---------------------------------------- worker-1-1: capstats failed (error: eth2: snf_ring_open_id(ring=-1) failed: Device or resource busy) worker-2-1: capstats failed (error: eth2: snf_ring_open_id(ring=-1) failed: Device or resource busy) worker-3-1: capstats failed (error: eth2: snf_ring_open_id(ring=-1) failed: Device or resource busy) worker-4-1: capstats failed (error: eth2: snf_ring_open_id(ring=-1) failed: Device or resource busy) worker-5-1: capstats failed (error: eth2: snf_ring_open_id(ring=-1) failed: Device or resource busy) My node.cfg file [manager] type=manager host=10.0.40.19 # [proxiy-5] type=proxy host=10.0.40.19 env_vars=LD_LIBRARY_PATH=/opt/snf/lib:$PATH # [worker-5] type=worker host=10.0.40.19 interface=eth2 lb_method=myricom lb_procs=10 pin_cpus=7,8,9,10,11,18,19,20,21,22 env_vars=LD_LIBRARY_PATH=/opt/snf/lib:$PATH, SNF_FLAGS=0x1, SNF_DATARING_SIZE=0x100000000, SNF_NUM_RINGS=10 # [proxy-1] type=proxy host=10.0.40.18 env_vars=LD_LIBRARY_PATH=/opt/snf/lib:$PATH # [worker-1] type=worker host=10.0.40.18 interface=eth2 lb_method=myricom lb_procs=10 pin_cpus=7,8,9,10,11,18,19,20,21,22 env_vars=LD_LIBRARY_PATH=/opt/snf/lib:$PATH, SNF_FLAGS=0x1, SNF_DATARING_SIZE=0x100000000, SNF_NUM_RINGS=10 # [proxy-2] type=proxy host=10.0.40.17 env_vars=LD_LIBRARY_PATH=/opt/snf/lib:$PATH # [worker-2] type=worker host=10.0.40.17 interface=eth2 lb_method=myricom lb_procs=10 pin_cpus=7,8,9,10,11,18,19,20,21,22 env_vars=LD_LIBRARY_PATH=/opt/snf/lib:$PATH, SNF_FLAGS=0x1, SNF_DATARING_SIZE=0x100000000, SNF_NUM_RINGS=10 # [proxy-3] type=proxy host=10.0.40.16 env_vars=LD_LIBRARY_PATH=/opt/snf/lib:$PATH # [worker-3] type=worker host=10.0.40.16 interface=eth2 lb_method=myricom lb_procs=10 pin_cpus=7,8,9,10,11,18,19,20,21,22 env_vars=LD_LIBRARY_PATH=/opt/snf/lib:$PATH, SNF_FLAGS=0x1, SNF_DATARING_SIZE=0x100000000, SNF_NUM_RINGS=10 # [proxy-4] type=proxy host=10.0.40.15 env_vars=LD_LIBRARY_PATH=/opt/snf/lib:$PATH # [worker-4] type=worker host=10.0.40.15 interface=eth2 lb_method=myricom lb_procs=10 pin_cpus=7,8,9,10,11,18,19,20,21,22 env_vars=LD_LIBRARY_PATH=/opt/snf/lib:$PATH, SNF_FLAGS=0x1, SNF_DATARING_SIZE=0x100000000, SNF_NUM_RINGS=10 -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161007/29309e7f/attachment.html From jazoff at illinois.edu Fri Oct 7 13:58:49 2016 From: jazoff at illinois.edu (Azoff, Justin S) Date: Fri, 7 Oct 2016 20:58:49 +0000 Subject: [Bro] 5 node cluster In-Reply-To: References: Message-ID: <3B323057-C383-4984-BE68-A97448F7E6EA@illinois.edu> > On Oct 7, 2016, at 4:40 PM, Darrain Waters wrote: > > Hello > > The myricom cards in my cluster nodes are dropping packets, and I am not getting any log information in prefix/logs. Did I miss something during the setup process ? Please see below for initial info and please let me know what else is needed. Thank you. > > Darrain > > I compiled bro using the option below. 
> --with-pcap=/opt/snf/ > > [bromgr at bromgr ~]$ ldd /usr/local/bro/bin/bro | grep pcap > > libpcap.so.1 => /opt/snf/lib/libpcap.so.1 (0x00007faf9c3d5000) > Looks good > > I get the following when I run capstats > > [BroControl] > capstats > > > > Interface kpps mbps (10s average) > > ---------------------------------------- > > worker-1-1: capstats failed (error: eth2: snf_ring_open_id(ring=-1) failed: Device or resource busy) > > worker-2-1: capstats failed (error: eth2: snf_ring_open_id(ring=-1) failed: Device or resource busy) > > worker-3-1: capstats failed (error: eth2: snf_ring_open_id(ring=-1) failed: Device or resource busy) > > worker-4-1: capstats failed (error: eth2: snf_ring_open_id(ring=-1) failed: Device or resource busy) > > worker-5-1: capstats failed (error: eth2: snf_ring_open_id(ring=-1) failed: Device or resource busy) This is normal.. capstats for snf never worked right (it could never work with snfv2 and with snfv3 it needs to set a different app id as bro, otherwise it can't capture at the same time as bro. As long as bro is running and not failing with the same error you're ok. There are better ways to get data out of a myricom card using the myricom tools as well. Your node.cfg looks mostly ok. I would switch to only running 1 or 2 proxies and just run them on the manager node. Why are you using 7,8,9,10,11,18,19,20,21,22 in particular? What CPUs do you have? This is potentially not doing what you intend. Most likely 7/19 8/20 9/21 10/22 are the same cpu. Your underlying problem is probably that a firewall is enabled on your hosts and the worker processes can't reach the manager. Daniel just wrote a good section on this for the manual: This section summarizes the network communication between Bro and BroControl, which is useful to understand if you need to reconfigure your firewall. If your firewall is preventing Bro communication, then either the "deploy" command or the "peerstatus" command will fail. For a cluster setup, BroControl uses ssh to run commands on other hosts in the cluster, so the manager host needs to connect to TCP port 22 on each of the other hosts in the cluster. Note that BroControl never attempts to ssh to the localhost, so in a standalone setup BroControl does not use ssh. Each instance of Bro in a cluster needs to communicate directly with other instances of Bro regardless of whether these instances are running on the same host or not. Each proxy and worker needs to connect to the manager, and each worker needs to connect to one proxy. If a logger node is defined, then each of the other nodes needs to connect to the logger. Note that you can change the port that Bro listens on by changing the value of the "BroPort" option in your ``broctl.cfg`` file (this should be needed only if your system has another process that listens on the same port). By default, a standalone Bro listens on TCP port 47760. For a cluster setup, the logger listens on TCP port 47761, and the manager listens on TCP port 47762 (or 47761 if no logger is defined). Each proxy is assigned its own port number, starting with one number greater than the manager's port. Likewise, each worker is assigned its own port starting one number greater than the highest port number assigned to a proxy. Finally, a few BroControl commands (such as "print" and "peerstatus") rely on broccoli to communicate with Bro. This means that for those commands to function, BroControl needs to connect to each Bro instance. 
-- - Justin Azoff From dnj0496 at gmail.com Fri Oct 7 14:00:05 2016 From: dnj0496 at gmail.com (Dk Jack) Date: Fri, 7 Oct 2016 14:00:05 -0700 Subject: [Bro] bro script q. Message-ID: Hi, Can a function defined in one script be accessed from another script? Currently, I have the following in two files: File A: global myfunc: function(c: connection, msg: string): string function myfunc(c: connection, msg: string): string { ... print fmt("myfunc: called from %s", msg); ... return mystring; } event someEventA(c: connection, ...) { ... c$fileA$myfunc_result = myfunc(c, "fileA"); } File B: global myfunc: function(c: connection, msg: string): string even someEventB(c: connection, ...) { ... c$fileB$myfunc_result = myfunc(c, "fileB"); ... } This compiles and runs fine when I run against a pcap. The events 'someEventA' and 'someEventB' write to two different log files. In log fileA, I see all the columns populated include myfunc_result column. However, in log fileB, I the myfunc_result shows the default string 'NA'. In the standard out file, I only see 'myfunc: called from fileA' messages. Since the myfunc function is performing a lookup on a table (loaded from file on disk), I'd like both the events to be able to see the same info. What am I doing wrong which is preventing me from accessing myfunc function from fileB. Thanks. Dk. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161007/9c6c51bf/attachment.html From dwaters at bioteam.net Fri Oct 7 14:18:03 2016 From: dwaters at bioteam.net (Darrain Waters) Date: Fri, 7 Oct 2016 16:18:03 -0500 Subject: [Bro] 5 node cluster In-Reply-To: <3B323057-C383-4984-BE68-A97448F7E6EA@illinois.edu> References: <3B323057-C383-4984-BE68-A97448F7E6EA@illinois.edu> Message-ID: Thanks for the quick reply. I put proxy on everything because I was grabbing at straws. I did only have 1 proxy and it was on the manager with the same results. Why are you using 7,8,9,10,11,18,19,20,21,22 in particular? What CPUs do you have? This is potentially not doing what you intend. Most likely 7/19 8/20 9/21 10/22 are the same cpu. Those are the core that are with node 1 and node 1 is associated with the myricom card. [bromgr at bromgr 2016-10-07]$ lscpu Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 24 On-line CPU(s) list: 0-23 Thread(s) per core: 2 Core(s) per socket: 6 Socket(s): 2 NUMA node(s): 2 Vendor ID: GenuineIntel CPU family: 6 Model: 63 Model name: Intel(R) Xeon(R) CPU E5-2643 v3 @ 3.40GHz Stepping: 2 CPU MHz: 1200.000 BogoMIPS: 6799.00 Virtualization: VT-x L1d cache: 32K L1i cache: 32K L2 cache: 256K L3 cache: 20480K NUMA node0 CPU(s): 0-5,12-17 NUMA node1 CPU(s): 6-11,18-23 Your underlying problem is probably that a firewall is enabled on your hosts and the worker processes can't reach the manager. I have ip6 & iptables off peerstatus [BroControl] > peerstatus manager 1475875039.738664 peer=worker-2-2 host=10.0.40.17 events_in=3165 events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? 1475875039.738664 peer=worker-1-3 host=10.0.40.18 events_in=3165 events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? 1475875039.738664 peer=proxy-2 host=10.0.40.17 events_in=3165 events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? 1475875039.738664 peer=proxiy-5 host=10.0.40.19 events_in=3165 events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? 
1475875039.738664 peer=worker-3-4 host=10.0.40.16 events_in=3165 events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? 1475875039.738664 peer=worker-3-3 host=10.0.40.16 events_in=3165 events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? 1475875039.738664 peer=worker-2-4 host=10.0.40.17 events_in=3165 events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? 1475875039.738664 peer=worker-3-8 host=10.0.40.16 events_in=3165 events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? 1475875039.738664 peer=worker-2-9 host=10.0.40.17 events_in=3165 events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? 1475875039.738664 peer=worker-3-1 host=10.0.40.16 events_in=3165 events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? 1475875039.738664 peer=worker-4-1 host=10.0.40.15 events_in=3165 events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? 1475875039.738664 peer=worker-3-9 host=10.0.40.16 events_in=3165 events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? 1475875039.738664 peer=worker-5-8 host=10.0.40.19 events_in=3165 events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? 1475875039.738664 peer=worker-3-6 host=10.0.40.16 events_in=3165 events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? 1475875039.738664 peer=worker-4-9 host=10.0.40.15 events_in=3165 events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? 1475875039.738664 peer=worker-2-3 host=10.0.40.17 events_in=3165 events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? 1475875039.738664 peer=proxy-3 host=10.0.40.16 events_in=3165 events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? 1475875039.738664 peer=worker-2-7 host=10.0.40.17 events_in=3165 events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? 1475875039.738664 peer=worker-5-7 host=10.0.40.19 events_in=3165 events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? 1475875039.738664 peer=proxy-4 host=10.0.40.15 events_in=3165 events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? 1475875039.738664 peer=worker-1-8 host=10.0.40.18 events_in=3165 events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? 1475875039.738664 peer=proxy-1 host=10.0.40.18 events_in=3165 events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? 1475875039.738664 peer=worker-3-2 host=10.0.40.16 events_in=3165 events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? 1475875039.738664 peer=worker-5-4 host=10.0.40.19 events_in=3165 events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? 1475875039.738664 peer=worker-1-6 host=10.0.40.18 events_in=3165 events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? 1475875039.738664 peer=worker-5-1 host=10.0.40.19 events_in=3165 events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? 1475875039.738664 peer=worker-1-10 host=10.0.40.18 events_in=3165 events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? 1475875039.738664 peer=worker-5-9 host=10.0.40.19 events_in=3165 events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? 1475875039.738664 peer=worker-3-10 host=10.0.40.16 events_in=3165 events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? 1475875039.738664 peer=worker-5-2 host=10.0.40.19 events_in=3165 events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? 1475875039.738664 peer=worker-5-3 host=10.0.40.19 events_in=3165 events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? 1475875039.738664 peer=worker-1-1 host=10.0.40.18 events_in=3165 events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? 
1475875039.738664 peer=worker-2-8 host=10.0.40.17 events_in=3165 events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? 1475875039.738664 peer=worker-4-5 host=10.0.40.15 events_in=3165 events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? 1475875039.738664 peer=worker-4-6 host=10.0.40.15 events_in=3165 events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? 1475875039.738664 peer=worker-2-6 host=10.0.40.17 events_in=3165 events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? 1475875039.738664 peer=worker-4-8 host=10.0.40.15 events_in=3165 events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? 1475875039.738664 peer=worker-3-7 host=10.0.40.16 events_in=3165 events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? 1475875039.738664 peer=worker-4-7 host=10.0.40.15 events_in=3165 events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? 1475875039.738664 peer=worker-5-6 host=10.0.40.19 events_in=3165 events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? 1475875039.738664 peer=worker-2-1 host=10.0.40.17 events_in=3165 events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? 1475875039.738664 peer=worker-5-5 host=10.0.40.19 events_in=3165 events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? 1475875039.738664 peer=worker-4-10 host=10.0.40.15 events_in=3165 events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? 1475875039.738664 peer=worker-2-10 host=10.0.40.17 events_in=3165 events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? 1475875039.738664 peer=worker-1-7 host=10.0.40.18 events_in=3165 events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? 1475875039.738664 peer=worker-4-3 host=10.0.40.15 events_in=3165 events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? 1475875039.738664 peer=worker-1-9 host=10.0.40.18 events_in=3165 events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? 1475875039.738664 peer=worker-1-5 host=10.0.40.18 events_in=3165 events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? 1475875039.738664 peer=worker-4-2 host=10.0.40.15 events_in=3165 events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? 1475875039.738664 peer= host=10.0.40.19 events_in=3165 events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? 1475875039.738664 peer=proxy-5 host=10.0.40.19 events_in=3165 events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? 1475875039.738664 peer=worker-3-5 host=10.0.40.16 events_in=3165 events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? 1475875039.738664 peer=worker-1-4 host=10.0.40.18 events_in=3165 events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? 1475875039.738664 peer=worker-5-10 host=10.0.40.19 events_in=3165 events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? 1475875039.738664 peer=worker-2-5 host=10.0.40.17 events_in=3165 events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? 1475875039.738664 peer=worker-4-4 host=10.0.40.15 events_in=3165 events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? 1475875039.738664 peer=worker-1-2 host=10.0.40.18 events_in=3165 events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? On Fri, Oct 7, 2016 at 3:58 PM, Azoff, Justin S wrote: > > > On Oct 7, 2016, at 4:40 PM, Darrain Waters wrote: > > > > Hello > > > > The myricom cards in my cluster nodes are dropping packets, and I am not > getting any log information in prefix/logs. Did I miss something during the > setup process ? Please see below for initial info and please let me know > what else is needed. Thank you. 
> > > > Darrain > > > > I compiled bro using the option below. > > --with-pcap=/opt/snf/ > > > > [bromgr at bromgr ~]$ ldd /usr/local/bro/bin/bro | grep pcap > > > > libpcap.so.1 => /opt/snf/lib/libpcap.so.1 (0x00007faf9c3d5000) > > > > Looks good > > > > > I get the following when I run capstats > > > > [BroControl] > capstats > > > > > > > > Interface kpps mbps (10s average) > > > > ---------------------------------------- > > > > worker-1-1: capstats failed (error: eth2: snf_ring_open_id(ring=-1) > failed: Device or resource busy) > > > > worker-2-1: capstats failed (error: eth2: snf_ring_open_id(ring=-1) > failed: Device or resource busy) > > > > worker-3-1: capstats failed (error: eth2: snf_ring_open_id(ring=-1) > failed: Device or resource busy) > > > > worker-4-1: capstats failed (error: eth2: snf_ring_open_id(ring=-1) > failed: Device or resource busy) > > > > worker-5-1: capstats failed (error: eth2: snf_ring_open_id(ring=-1) > failed: Device or resource busy) > > This is normal.. capstats for snf never worked right (it could never work > with snfv2 and with snfv3 it needs to set a different app id as bro, > otherwise it can't capture at the same time as bro. As long as bro is > running and not failing with the same error you're ok. There are better > ways to get data out of a myricom card using the myricom tools as well. > > Your node.cfg looks mostly ok. I would switch to only running 1 or 2 > proxies and just run them on the manager node. > > Why are you using 7,8,9,10,11,18,19,20,21,22 in particular? What CPUs do > you have? This is potentially not doing what you intend. Most likely 7/19 > 8/20 9/21 10/22 are the same cpu. > > Your underlying problem is probably that a firewall is enabled on your > hosts and the worker processes can't reach the manager. Daniel just wrote > a good section on this for the manual: > > > This section summarizes the network communication between Bro and > BroControl, > which is useful to understand if you need to reconfigure your firewall. If > your firewall is preventing Bro communication, then either the "deploy" > command or the "peerstatus" command will fail. > > For a cluster setup, BroControl uses ssh to run commands on other hosts in > the cluster, so the manager host needs to connect to TCP port 22 on each > of the other hosts in the cluster. Note that BroControl never attempts > to ssh to the localhost, so in a standalone setup BroControl does not use > ssh. > > Each instance of Bro in a cluster needs to communicate directly with other > instances of Bro regardless of whether these instances are running on the > same > host or not. Each proxy and worker needs to connect to the manager, > and each worker needs to connect to one proxy. If a logger node is > defined, > then each of the other nodes needs to connect to the logger. > > Note that you can change the port that Bro listens on by changing the value > of the "BroPort" option in your ``broctl.cfg`` file (this should be needed > only if your system has another process that listens on the same port). By > default, a standalone Bro listens on TCP port 47760. For a cluster setup, > the logger listens on TCP port 47761, and the manager listens on TCP port > 47762 > (or 47761 if no logger is defined). Each proxy is assigned its own port > number, starting with one number greater than the manager's port. > Likewise, > each worker is assigned its own port starting one number greater than the > highest port number assigned to a proxy. 
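To make that port layout concrete, the firewall side of it usually comes down to SSH plus the Bro peering ports between the cluster hosts. A minimal iptables sketch, assuming the default port numbering described above and that all nodes sit on the 10.0.40.0/24 management network (widen the port range if you run many workers per host):

# on each cluster host: allow Bro peering traffic from the other nodes
iptables -A INPUT -p tcp -s 10.0.40.0/24 --dport 47760:47780 -j ACCEPT
# workers and proxies also need to accept ssh from the manager for broctl
iptables -A INPUT -p tcp -s 10.0.40.0/24 --dport 22 -j ACCEPT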
> > Finally, a few BroControl commands (such as "print" and "peerstatus") rely > on broccoli to communicate with Bro. This means that for those commands to > function, BroControl needs to connect to each Bro instance. > > > -- > - Justin Azoff > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161007/d2bc2a76/attachment-0001.html From anthony.kasza at gmail.com Fri Oct 7 14:24:35 2016 From: anthony.kasza at gmail.com (anthony kasza) Date: Fri, 7 Oct 2016 15:24:35 -0600 Subject: [Bro] bro script q. In-Reply-To: References: Message-ID: In your example you're defining the same function twice within the global namespace. This might be causing an issue. Try using the module and export functionality of the scripting language. -AK On Oct 7, 2016 3:20 PM, "Dk Jack" wrote: Hi, Can a function defined in one script be accessed from another script? Currently, I have the following in two files: File A: global myfunc: function(c: connection, msg: string): string function myfunc(c: connection, msg: string): string { ... print fmt("myfunc: called from %s", msg); ... return mystring; } event someEventA(c: connection, ...) { ... c$fileA$myfunc_result = myfunc(c, "fileA"); } File B: global myfunc: function(c: connection, msg: string): string even someEventB(c: connection, ...) { ... c$fileB$myfunc_result = myfunc(c, "fileB"); ... } This compiles and runs fine when I run against a pcap. The events 'someEventA' and 'someEventB' write to two different log files. In log fileA, I see all the columns populated include myfunc_result column. However, in log fileB, I the myfunc_result shows the default string 'NA'. In the standard out file, I only see 'myfunc: called from fileA' messages. Since the myfunc function is performing a lookup on a table (loaded from file on disk), I'd like both the events to be able to see the same info. What am I doing wrong which is preventing me from accessing myfunc function from fileB. Thanks. Dk. _______________________________________________ Bro mailing list bro at bro-ids.org http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161007/13bcf557/attachment.html From jazoff at illinois.edu Fri Oct 7 14:35:26 2016 From: jazoff at illinois.edu (Azoff, Justin S) Date: Fri, 7 Oct 2016 21:35:26 +0000 Subject: [Bro] 5 node cluster In-Reply-To: References: <3B323057-C383-4984-BE68-A97448F7E6EA@illinois.edu> Message-ID: <5761E17D-0809-4A4D-9E59-DCA822E0DD14@illinois.edu> > On Oct 7, 2016, at 5:18 PM, Darrain Waters wrote: > > Thanks for the quick reply. I put proxy on everything because I was grabbing at straws. I did only have 1 proxy and it was on the manager with the same results. > > > Why are you using 7,8,9,10,11,18,19,20,21,22 in particular? What CPUs do you have? This is potentially not doing what you intend. Most likely 7/19 8/20 9/21 10/22 are the same cpu. > > Those are the core that are with node 1 and node 1 is associated with the myricom card. > > [bromgr at bromgr 2016-10-07]$ lscpu > > Architecture: x86_64 > > CPU op-mode(s): 32-bit, 64-bit > > Byte Order: Little Endian > > CPU(s): 24 > > On-line CPU(s) list: 0-23 > > Thread(s) per core: 2 > > Core(s) per socket: 6 I see. You have 2 6 core cpus with hyper threading. So those are the two sets of cpus that make up each hypertheading pair. 
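A quick way to confirm which logical CPUs are hyperthread siblings before settling on pin_cpus (a sketch, assuming a Linux host; both commands read the topology straight from the kernel):

# show each logical CPU with its NUMA node, socket and physical core
lscpu -e=CPU,NODE,SOCKET,CORE
# or ask for the sibling(s) of one particular logical CPU, e.g. cpu7
cat /sys/devices/system/cpu/cpu7/topology/thread_siblings_list

Logical CPUs that report the same CORE (or that appear in each other's sibling list) share a physical core, so pinning two workers onto such a pair buys little.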
We haven't gotten to do performance testing for this yet, but you might get better performance by just using 2,3,4,5,6,7,8,9,10,11. It's the tradeoff between having to copy half of the packets across to the other numa node, but using more of the 'real' cores and less of the hyper threading ones. > > Your underlying problem is probably that a firewall is enabled on your hosts and the worker processes can't reach the manager. > I have ip6 & iptables off On all the machines? "everything is working but there are no logs" almost always turns out to be firewall rules. The last time it turned out that another admin had re-enabled the firewall.. :-) One thing to check for that are the logs written to the spool/ on each worker. There will be a local communication.log for each worker that may be complaining about something. Now that I reread your first message I see "I am not getting any log information in prefix/logs". Do you mean that there are literally no log files in there? under current/ you should at least have stderr.log and communication.log. If you literally have no log files you may have some permission issues if you are not running bro as root. You can also run tcpdump on the manager and see if the workers are even trying to send it anything. > peerstatus > > > > [BroControl] > peerstatus > > manager > > 1475875039.738664 peer=worker-2-2 host=10.0.40.17 events_in=3165 events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? > > 1475875039.738664 peer=worker-1-3 host=10.0.40.18 events_in=3165 events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? > > 1475875039.738664 peer=proxy-2 host=10.0.40.17 events_in=3165 events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? > > 1475875039.738664 peer=proxiy-5 host=10.0.40.19 events_in=3165 events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? > > 1475875039.738664 peer=worker-3-4 host=10.0.40.16 events_in=3165 events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? > > 1475875039.738664 peer=worker-3-3 host=10.0.40.16 events_in=3165 events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? > That appears normal.. I'm not sure what bytes_in and bytes_out were supposed to be.. it doesn't look like we output that anymore. What does 'broctl netstats' show? -- - Justin Azoff From hovsep.sanjay.levi at gmail.com Fri Oct 7 14:44:05 2016 From: hovsep.sanjay.levi at gmail.com (Hovsep Levi) Date: Fri, 7 Oct 2016 21:44:05 +0000 Subject: [Bro] Intel framework troubleshooting on Bro 2.5 In-Reply-To: References: <47510B9D-EDF8-4AEA-9145-5D1F6F421519@illinois.edu> Message-ID: Thanks, that linter is finding errors. I just started using CriticalStack with Bro 2.5 so I can't comment on prior issues. If the linter is working as expected then it appears the problem is with a few URIs from PhishTank with odd URL encoding, maybe they are mistakenly being interpreted as tabs during parsing or corrupting some internal state within Bro. 
bro at mgr:/opt/bro/feeds % bro_intel_linter/intel_linter.py -f master-public.bro.dat WARNING: Line 1263 - Invalid entry "bjcurio.com/js/index.htm?\xc3\x83?\xc3\x82?\xc3\x83?\xc3\x82???\xc3\x83?\xc3\x82?\xc3\x83?\xc3\x82???\xc3\x83?\xc3\x82?\xc3\x83?\xc3\x82???\xc3\x83?\xc3\x82?\xc3\x83?\xc3\x82???\xc3\x83?\xc3\x82?\xc3\x83?\xc3\x82???\xc3\x83?\xc3\x82?\xc3\x83?\xc3\x82???\xc3\x83?\xc3\x82?\xc3\x83?\xc3\x82???\xc3\x83?\xc3\x82?\xc3\x83?\xc3\x82???\xc3\x83?\xc3\x82?\xc3\x83?\xc3\x82???\xc3\x83?\xc3\x82?\xc3\x83?\xc3\x82???\xc3\x83?\xc3\x82?\xc3\x83?\xc3\x82???\xc3\x83?\xc3\x82?\xc3\x83?\xc3\x82???\xc3\x83?\xc3\x82?\xc3\x83?\xc3\x82???\xc3\x83?\xc3\x82?\xc3\x83?\xc3\x82???\xc3\x83?\xc3\x82?\xc3\x83?\xc3\x82???\xc3\x83?\xc3\x82?\xc3\x83?\xc3\x82???\xc3\x83?\xc3\x82?\xc3\x83?\xc3\x82???\xc3\x83?\xc3\x82?\xc3\x83?\xc3\x82???\xc3\x83?\xc3\x82?\xc3\x83?\xc3\x82??%" for column "indicator" WARNING: Line 4501 - Invalid entry " generalfil.es/download/gs4eb28030h17i0/windows%20live%20messenger%208.5%20%20%20patch%20anti-atualizaai\xef\xbf\xbd\xef\xbf\xbd\xef\xbf\xbd\xef\xbf\xbdi\xef\xbf\xbd\xef\xbf\xbd\xef\xbf\xbd\xef\xbf\xbdai\xef\xbf\xbd\xef\xbf\xbd\xef\xbf\xbd\xef\xbf\xbdi\xef\xbf\xbd\xef\xbf\xbd\xef\xbf\xbd\xef\xbf\xbdo%20%20%20messenger%20plus!%20liv.html" for column "indicator" WARNING: Line 12438 - Invalid entry " www.alhotocaia.com.br/Templates/11632/simplestyle_5/style/-6327-40825785664-3357953/index.html?A?A?A?%20I?A?A?A?A\xef\xbf\xbd\xef\xbf\xbd1A?A\xef\xbf\xbd\xef\xbf\xbdA?A??" for column "indicator" ERROR: Line 13902 - Indicator type "Intel::ADDR" possible issue with indicator: "2400:8901::f03c:91ff:feb0:bdb0" ERROR: Line 13902 - Details - Invalid IP address On Fri, Oct 7, 2016 at 6:04 PM, Jan Grash?fer wrote: > > Nothing stands out. Looking at base/frameworks/intel/input.bro is > there a > > way to hook Input::add_event and have those events written to a log file > ? > > You could use the Intel::read_entry event. For validation of the files > have a look at https://github.com/packetsled/bro_intel_linter. > > Can you reproduce the issue running a standalone deployment or against a > pcap and is that issue new in Bro 2.5? > > Jan > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161007/d8ed710c/attachment.html From jan.grashoefer at gmail.com Fri Oct 7 14:51:14 2016 From: jan.grashoefer at gmail.com (=?UTF-8?Q?Jan_Grash=c3=b6fer?=) Date: Fri, 7 Oct 2016 23:51:14 +0200 Subject: [Bro] Intel framework troubleshooting on Bro 2.5 In-Reply-To: References: <47510B9D-EDF8-4AEA-9145-5D1F6F421519@illinois.edu> Message-ID: <273af119-4ea4-d53b-23e3-aa299a8e606a@gmail.com> > If the linter is working as expected then it appears the problem is with a > few URIs from PhishTank with odd URL encoding, maybe they are mistakenly > being interpreted as tabs during parsing or corrupting some internal state > within Bro. Theoretically malformed items should not affect previously loaded intel files. If you can reproduce this issue and provide something like a minimal example it would be good to open a ticket. 
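For that minimal example, a standalone run against a short pcap with only the intel framework and the suspect feed loaded keeps the moving parts down. A rough sketch (the pcap name is a placeholder; point Intel::read_files at your own copy of the feed):

# repro.bro -- load just the intel machinery plus the feed under suspicion
@load frameworks/intel/seen
redef Intel::read_files += { "/opt/bro/feeds/master-public.bro.dat" };

Then replay a small capture through it with "bro -r test.pcap repro.bro" and check whether intel.log and reporter.log show the same behavior as on the cluster.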
Jan From dwaters at bioteam.net Fri Oct 7 15:27:47 2016 From: dwaters at bioteam.net (Darrain Waters) Date: Fri, 7 Oct 2016 17:27:47 -0500 Subject: [Bro] 5 node cluster In-Reply-To: <5761E17D-0809-4A4D-9E59-DCA822E0DD14@illinois.edu> References: <3B323057-C383-4984-BE68-A97448F7E6EA@illinois.edu> <5761E17D-0809-4A4D-9E59-DCA822E0DD14@illinois.edu> Message-ID: Sorry, yeah I am getting comm logs and stderr on the manager. I do have two NICS enabled on each system, one for management with IP and the other is the myricom with no IP and in sniffer mode. Each of the workers do have the spool wirker directories but they are empty. I use to be able to run this on the manager [bromgr at bromgr etc]$ sudo tcpdump -i eth2 tcpdump: snf_ring_open_id(ring=-1) failed: Device or resource busy [BroControl] > netstats worker-1-1: 1475878452.092051 recvd=1 dropped=17260812 link=17260813 worker-1-10: 1475878452.292009 recvd=1 dropped=17260812 link=17260813 worker-1-2: 1475878452.493003 recvd=1 dropped=17260812 link=17260813 worker-1-3: 1475878452.693975 recvd=1 dropped=17260812 link=17260813 worker-1-4: 1475878452.895009 recvd=1 dropped=17260812 link=17260813 worker-1-5: 1475878453.095000 recvd=1 dropped=17260812 link=17260813 worker-1-6: 1475878453.296049 recvd=1 dropped=17260812 link=17260813 worker-1-7: 1475878453.497139 recvd=1 dropped=17260812 link=17260813 worker-1-8: 1475878453.697990 recvd=1 dropped=17260812 link=17260813 worker-1-9: 1475878453.897974 recvd=1 dropped=17260812 link=17260813 worker-2-1: 1475878450.084311 recvd=1 dropped=43750502 link=43750503 worker-2-10: 1475878450.285335 recvd=1 dropped=43750502 link=43750503 worker-2-2: 1475878450.485317 recvd=1 dropped=43750502 link=43750503 worker-2-3: 1475878450.686430 recvd=1 dropped=43750502 link=43750503 worker-2-4: 1475878450.887373 recvd=1 dropped=43750502 link=43750503 worker-2-5: 1475878451.088348 recvd=1 dropped=43750502 link=43750503 worker-2-6: 1475878451.288262 recvd=1 dropped=43750502 link=43750503 worker-2-7: 1475878451.489370 recvd=1 dropped=43750502 link=43750503 worker-2-8: 1475878451.689311 recvd=1 dropped=43750502 link=43750503 worker-2-9: 1475878451.890323 recvd=1 dropped=43750502 link=43750503 worker-3-1: 1475878448.077118 recvd=1 dropped=9847880 link=9847881 worker-3-10: 1475878448.278158 recvd=1 dropped=9847880 link=9847881 worker-3-2: 1475878448.479115 recvd=1 dropped=9847880 link=9847881 worker-3-3: 1475878448.679110 recvd=1 dropped=9847880 link=9847881 worker-3-4: 1475878448.880134 recvd=1 dropped=9847880 link=9847881 worker-3-5: 1475878449.081098 recvd=1 dropped=9847880 link=9847881 worker-3-6: 1475878449.281137 recvd=1 dropped=9847880 link=9847881 worker-3-7: 1475878449.482134 recvd=1 dropped=9847880 link=9847881 worker-3-8: 1475878449.683136 recvd=1 dropped=9847880 link=9847881 worker-3-9: 1475878449.884120 recvd=1 dropped=9847880 link=9847881 worker-4-1: 1475878446.070765 recvd=1 dropped=14367380 link=14367381 worker-4-10: 1475878446.271782 recvd=1 dropped=14367380 link=14367381 worker-4-2: 1475878446.472749 recvd=1 dropped=14367380 link=14367381 worker-4-3: 1475878446.672736 recvd=1 dropped=14367380 link=14367381 worker-4-4: 1475878446.873773 recvd=1 dropped=14367380 link=14367381 worker-4-5: 1475878447.074779 recvd=1 dropped=14367380 link=14367381 worker-4-6: 1475878447.274758 recvd=1 dropped=14367380 link=14367381 worker-4-7: 1475878447.475787 recvd=1 dropped=14367380 link=14367381 worker-4-8: 1475878447.676719 recvd=1 dropped=14367380 link=14367381 worker-4-9: 1475878447.876731 recvd=1 dropped=14367380 
link=14367381 On Fri, Oct 7, 2016 at 4:35 PM, Azoff, Justin S wrote: > > > On Oct 7, 2016, at 5:18 PM, Darrain Waters wrote: > > > > Thanks for the quick reply. I put proxy on everything because I was > grabbing at straws. I did only have 1 proxy and it was on the manager with > the same results. > > > > > > Why are you using 7,8,9,10,11,18,19,20,21,22 in particular? What CPUs > do you have? This is potentially not doing what you intend. Most likely > 7/19 8/20 9/21 10/22 are the same cpu. > > > > Those are the core that are with node 1 and node 1 is associated with > the myricom card. > > > > [bromgr at bromgr 2016-10-07]$ lscpu > > > > Architecture: x86_64 > > > > CPU op-mode(s): 32-bit, 64-bit > > > > Byte Order: Little Endian > > > > CPU(s): 24 > > > > On-line CPU(s) list: 0-23 > > > > Thread(s) per core: 2 > > > > Core(s) per socket: 6 > > I see. You have 2 6 core cpus with hyper threading. So those are the two > sets of cpus that make up each hypertheading pair. We haven't gotten to do > performance testing for this yet, but you might get better performance by > just using 2,3,4,5,6,7,8,9,10,11. It's the tradeoff between having to copy > half of the packets across to the other numa node, but using more of the > 'real' cores and less of the hyper threading ones. > > > > > Your underlying problem is probably that a firewall is enabled on your > hosts and the worker processes can't reach the manager. > > I have ip6 & iptables off > > On all the machines? "everything is working but there are no logs" almost > always turns out to be firewall rules. The last time it turned out that > another admin had re-enabled the firewall.. :-) > > One thing to check for that are the logs written to the spool/ on each > worker. There will be a local communication.log for each worker that may > be complaining about something. > > Now that I reread your first message I see "I am not getting any log > information in prefix/logs". Do you mean that there are literally no log > files in there? under current/ you should at least have stderr.log and > communication.log. If you literally have no log files you may have some > permission issues if you are not running bro as root. > > You can also run tcpdump on the manager and see if the workers are even > trying to send it anything. > > > peerstatus > > > > > > > > [BroControl] > peerstatus > > > > manager > > > > 1475875039.738664 peer=worker-2-2 host=10.0.40.17 events_in=3165 > events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? > > > > 1475875039.738664 peer=worker-1-3 host=10.0.40.18 events_in=3165 > events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? > > > > 1475875039.738664 peer=proxy-2 host=10.0.40.17 events_in=3165 > events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? > > > > 1475875039.738664 peer=proxiy-5 host=10.0.40.19 events_in=3165 > events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? > > > > 1475875039.738664 peer=worker-3-4 host=10.0.40.16 events_in=3165 > events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? > > > > 1475875039.738664 peer=worker-3-3 host=10.0.40.16 events_in=3165 > events_out=3165 ops_in=0 ops_out=3472 bytes_in=? bytes_out=? > > > That appears normal.. I'm not sure what bytes_in and bytes_out were > supposed to be.. it doesn't look like we output that anymore. > > What does 'broctl netstats' show? > > > > -- > - Justin Azoff > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161007/a25a83b2/attachment-0001.html From dnj0496 at gmail.com Fri Oct 7 15:30:04 2016 From: dnj0496 at gmail.com (Dk Jack) Date: Fri, 7 Oct 2016 15:30:04 -0700 Subject: [Bro] bro script q. In-Reply-To: References: Message-ID: Could you direct me to an example on how to do that? I've only seen export being used with export info records... thanks. On Fri, Oct 7, 2016 at 2:24 PM, anthony kasza wrote: > In your example you're defining the same function twice within the global > namespace. This might be causing an issue. > Try using the module and export functionality of the scripting language. > > -AK > > On Oct 7, 2016 3:20 PM, "Dk Jack" wrote: > > Hi, > Can a function defined in one script be accessed from another script? > Currently, I have the following in two files: > > File A: > > global myfunc: function(c: connection, msg: string): string > > function myfunc(c: connection, msg: string): string > { > ... > print fmt("myfunc: called from %s", msg); > ... > return mystring; > } > > event someEventA(c: connection, ...) > { > ... > c$fileA$myfunc_result = myfunc(c, "fileA"); > } > > File B: > global myfunc: function(c: connection, msg: string): string > > even someEventB(c: connection, ...) > { > ... > c$fileB$myfunc_result = myfunc(c, "fileB"); > ... > } > > This compiles and runs fine when I run against a pcap. The events > 'someEventA' and 'someEventB' write to two different log files. In log > fileA, I see all the columns populated include myfunc_result column. > However, in log fileB, I the myfunc_result shows the default string 'NA'. > In the standard out file, I only see 'myfunc: called from fileA' messages. > > Since the myfunc function is performing a lookup on a table (loaded from > file on disk), I'd like both the events to be able to see the same info. > What am I doing wrong which is preventing me from accessing myfunc function > from fileB. Thanks. > > Dk. > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161007/ccde0add/attachment.html From anthony.kasza at gmail.com Fri Oct 7 17:08:37 2016 From: anthony.kasza at gmail.com (anthony kasza) Date: Fri, 7 Oct 2016 18:08:37 -0600 Subject: [Bro] bro script q. In-Reply-To: References: Message-ID: Look at this script. It does things with PE files. https://github.com/bro/bro/blob/master/scripts/base/files/pe/main.bro Someone may want to correct me here: Line 1 declares a new module, which I believe is analogous to C++ namespaces, named "PE". The export at line 5 declares exported things under the PE namespace. So, to reference the event log_pe from the global namespace, as your script is doing, it would need to use PE::log_pe(). Try exporting your function with a module name declared above it. -AK On Oct 7, 2016 4:30 PM, "Dk Jack" wrote: > Could you direct me to an example on how to do that? I've only seen export > being used with export info records... thanks. > > On Fri, Oct 7, 2016 at 2:24 PM, anthony kasza > wrote: > >> In your example you're defining the same function twice within the global >> namespace. This might be causing an issue. >> Try using the module and export functionality of the scripting language. 
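For reference, the usual shape of it looks roughly like this (a sketch only, reusing the myfunc example from this thread; the module name is made up):

# file_a.bro
module MyStuff;

export {
    global myfunc: function(c: connection, msg: string): string;
}

function myfunc(c: connection, msg: string): string
    {
    print fmt("myfunc: called from %s", msg);
    return msg;
    }

# file_b.bro -- other scripts call it through the module name
@load ./file_a

event connection_established(c: connection)
    {
    local res = MyStuff::myfunc(c, "fileB");
    print res;
    }

With a single exported definition, every event handler goes through the same function, so a table it loads from disk is shared as well.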
>> >> -AK >> >> On Oct 7, 2016 3:20 PM, "Dk Jack" wrote: >> >> Hi, >> Can a function defined in one script be accessed from another script? >> Currently, I have the following in two files: >> >> File A: >> >> global myfunc: function(c: connection, msg: string): string >> >> function myfunc(c: connection, msg: string): string >> { >> ... >> print fmt("myfunc: called from %s", msg); >> ... >> return mystring; >> } >> >> event someEventA(c: connection, ...) >> { >> ... >> c$fileA$myfunc_result = myfunc(c, "fileA"); >> } >> >> File B: >> global myfunc: function(c: connection, msg: string): string >> >> even someEventB(c: connection, ...) >> { >> ... >> c$fileB$myfunc_result = myfunc(c, "fileB"); >> ... >> } >> >> This compiles and runs fine when I run against a pcap. The events >> 'someEventA' and 'someEventB' write to two different log files. In log >> fileA, I see all the columns populated include myfunc_result column. >> However, in log fileB, I the myfunc_result shows the default string 'NA'. >> In the standard out file, I only see 'myfunc: called from fileA' messages. >> >> Since the myfunc function is performing a lookup on a table (loaded from >> file on disk), I'd like both the events to be able to see the same info. >> What am I doing wrong which is preventing me from accessing myfunc function >> from fileB. Thanks. >> >> Dk. >> >> _______________________________________________ >> Bro mailing list >> bro at bro-ids.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161007/19245a0f/attachment.html From dnj0496 at gmail.com Fri Oct 7 17:16:55 2016 From: dnj0496 at gmail.com (Dk Jack) Date: Fri, 7 Oct 2016 17:16:55 -0700 Subject: [Bro] bro script q. In-Reply-To: References: Message-ID: Thanks, I figured it after sending the email. Thanks. On Fri, Oct 7, 2016 at 5:08 PM, anthony kasza wrote: > Look at this script. It does things with PE files. > > https://github.com/bro/bro/blob/master/scripts/base/files/pe/main.bro > > Someone may want to correct me here: > Line 1 declares a new module, which I believe is analogous to C++ > namespaces, named "PE". The export at line 5 declares exported things under > the PE namespace. So, to reference the event log_pe from the global > namespace, as your script is doing, it would need to use PE::log_pe(). > Try exporting your function with a module name declared above it. > > -AK > > On Oct 7, 2016 4:30 PM, "Dk Jack" wrote: > >> Could you direct me to an example on how to do that? I've only seen >> export being used with export info records... thanks. >> >> On Fri, Oct 7, 2016 at 2:24 PM, anthony kasza >> wrote: >> >>> In your example you're defining the same function twice within the >>> global namespace. This might be causing an issue. >>> Try using the module and export functionality of the scripting language. >>> >>> -AK >>> >>> On Oct 7, 2016 3:20 PM, "Dk Jack" wrote: >>> >>> Hi, >>> Can a function defined in one script be accessed from another script? >>> Currently, I have the following in two files: >>> >>> File A: >>> >>> global myfunc: function(c: connection, msg: string): string >>> >>> function myfunc(c: connection, msg: string): string >>> { >>> ... >>> print fmt("myfunc: called from %s", msg); >>> ... >>> return mystring; >>> } >>> >>> event someEventA(c: connection, ...) >>> { >>> ... 
>>> c$fileA$myfunc_result = myfunc(c, "fileA"); >>> } >>> >>> File B: >>> global myfunc: function(c: connection, msg: string): string >>> >>> even someEventB(c: connection, ...) >>> { >>> ... >>> c$fileB$myfunc_result = myfunc(c, "fileB"); >>> ... >>> } >>> >>> This compiles and runs fine when I run against a pcap. The events >>> 'someEventA' and 'someEventB' write to two different log files. In log >>> fileA, I see all the columns populated include myfunc_result column. >>> However, in log fileB, I the myfunc_result shows the default string 'NA'. >>> In the standard out file, I only see 'myfunc: called from fileA' messages. >>> >>> Since the myfunc function is performing a lookup on a table (loaded from >>> file on disk), I'd like both the events to be able to see the same info. >>> What am I doing wrong which is preventing me from accessing myfunc function >>> from fileB. Thanks. >>> >>> Dk. >>> >>> _______________________________________________ >>> Bro mailing list >>> bro at bro-ids.org >>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >>> >>> >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161007/4ba105b6/attachment-0001.html From seth at icir.org Fri Oct 7 18:24:25 2016 From: seth at icir.org (Seth Hall) Date: Fri, 7 Oct 2016 21:24:25 -0400 Subject: [Bro] cluster question In-Reply-To: References: <9DBF8353-6D8A-4EA9-B0F4-D4DC926013AC@gmail.com> Message-ID: > On Oct 7, 2016, at 1:59 PM, Neslog wrote: > > Any performance hit seen using software based load balancing versus specialized NICs? (Have actual test results?) Some of the specialized NICs are actually doing software based load balancing as well, they just don't make it terribly clear in their marketing material or documentation. (just to muddy the waters more than they already are!) .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From anthony.kasza at gmail.com Fri Oct 7 19:42:51 2016 From: anthony.kasza at gmail.com (anthony kasza) Date: Fri, 7 Oct 2016 20:42:51 -0600 Subject: [Bro] bro script q. In-Reply-To: References: Message-ID: Was that your issue? If you want to PM me your scripts I can take a look. -AK On Oct 7, 2016 6:16 PM, "Dk Jack" wrote: > Thanks, > I figured it after sending the email. Thanks. > > On Fri, Oct 7, 2016 at 5:08 PM, anthony kasza > wrote: > >> Look at this script. It does things with PE files. >> >> https://github.com/bro/bro/blob/master/scripts/base/files/pe/main.bro >> >> Someone may want to correct me here: >> Line 1 declares a new module, which I believe is analogous to C++ >> namespaces, named "PE". The export at line 5 declares exported things under >> the PE namespace. So, to reference the event log_pe from the global >> namespace, as your script is doing, it would need to use PE::log_pe(). >> Try exporting your function with a module name declared above it. >> >> -AK >> >> On Oct 7, 2016 4:30 PM, "Dk Jack" wrote: >> >>> Could you direct me to an example on how to do that? I've only seen >>> export being used with export info records... thanks. >>> >>> On Fri, Oct 7, 2016 at 2:24 PM, anthony kasza >>> wrote: >>> >>>> In your example you're defining the same function twice within the >>>> global namespace. This might be causing an issue. >>>> Try using the module and export functionality of the scripting language. 
>>>> >>>> -AK >>>> >>>> On Oct 7, 2016 3:20 PM, "Dk Jack" wrote: >>>> >>>> Hi, >>>> Can a function defined in one script be accessed from another script? >>>> Currently, I have the following in two files: >>>> >>>> File A: >>>> >>>> global myfunc: function(c: connection, msg: string): string >>>> >>>> function myfunc(c: connection, msg: string): string >>>> { >>>> ... >>>> print fmt("myfunc: called from %s", msg); >>>> ... >>>> return mystring; >>>> } >>>> >>>> event someEventA(c: connection, ...) >>>> { >>>> ... >>>> c$fileA$myfunc_result = myfunc(c, "fileA"); >>>> } >>>> >>>> File B: >>>> global myfunc: function(c: connection, msg: string): string >>>> >>>> even someEventB(c: connection, ...) >>>> { >>>> ... >>>> c$fileB$myfunc_result = myfunc(c, "fileB"); >>>> ... >>>> } >>>> >>>> This compiles and runs fine when I run against a pcap. The events >>>> 'someEventA' and 'someEventB' write to two different log files. In log >>>> fileA, I see all the columns populated include myfunc_result column. >>>> However, in log fileB, I the myfunc_result shows the default string 'NA'. >>>> In the standard out file, I only see 'myfunc: called from fileA' messages. >>>> >>>> Since the myfunc function is performing a lookup on a table (loaded >>>> from file on disk), I'd like both the events to be able to see the same >>>> info. What am I doing wrong which is preventing me from accessing myfunc >>>> function from fileB. Thanks. >>>> >>>> Dk. >>>> >>>> _______________________________________________ >>>> Bro mailing list >>>> bro at bro-ids.org >>>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >>>> >>>> >>>> >>> > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161007/983893db/attachment.html From jazoff at illinois.edu Fri Oct 7 20:34:43 2016 From: jazoff at illinois.edu (Azoff, Justin S) Date: Sat, 8 Oct 2016 03:34:43 +0000 Subject: [Bro] 5 node cluster In-Reply-To: References: <3B323057-C383-4984-BE68-A97448F7E6EA@illinois.edu> <5761E17D-0809-4A4D-9E59-DCA822E0DD14@illinois.edu> Message-ID: <104E65E1-00F3-4B22-8BC7-2F15480A0C6B@illinois.edu> > On Oct 7, 2016, at 6:27 PM, Darrain Waters wrote: > > Sorry, yeah I am getting comm logs and stderr on the manager. I do have two NICS enabled on each system, one for management with IP and the other is the myricom with no IP and in sniffer mode. > > Each of the workers do have the spool wirker directories but they are empty. > > I use to be able to run this on the manager > > [bromgr at bromgr etc]$ sudo tcpdump -i eth2 > > > tcpdump: snf_ring_open_id(ring=-1) failed: Device or resource busy > > > > [BroControl] > netstats > > worker-1-1: 1475878452.092051 recvd=1 dropped=17260812 link=17260813 > > worker-1-10: 1475878452.292009 recvd=1 dropped=17260812 link=17260813 > > worker-1-2: 1475878452.493003 recvd=1 dropped=17260812 link=17260813 Ah, ok.. so this isn't the firewall issue... That's when "everything is working but there are no logs" but in your case nothing is working :-) I'd stop bro and then make sure everything is stopped. You can use 'broctl ps.bro' to ensure that there are no stray procs lying around. Then at that point with nothing else running you should be able to run things like 'tcpdump' or 'broctl capstats' and verify that you can capture packets. 
You should also be able to run tools like /opt/snf/bin/myri_nic_info /opt/snf/bin/myri_counters /opt/snf/bin/myri_bandwidth /opt/snf/sbin/myri_license to ensure that the card+drivers are working properly as well as check dmesg output and check to see if it is complaining about anything I don't recall every seeing that particular netstats output, but I bet you'll be able to reproduce the problem with regular tcpdump. Generally speaking if tcpdump -w foo.pcap writes out packets that look ok, and you can use bro -r against foo.pcap, bro it should work in realtime. The snf issues on the manager may be due to trying to use snf libs against a regular NIC, I've had to use things like LD_PRELOAD=/usr/lib64/libpcap.so.1 tcpdump ... to force it to use standard libpcap. -- - Justin Azoff From michalpurzynski1 at gmail.com Sat Oct 8 01:43:50 2016 From: michalpurzynski1 at gmail.com (=?utf-8?Q?Micha=C5=82_Purzy=C5=84ski?=) Date: Sat, 8 Oct 2016 10:43:50 +0200 Subject: [Bro] cluster question In-Reply-To: References: <9DBF8353-6D8A-4EA9-B0F4-D4DC926013AC@gmail.com> Message-ID: I second what Seth said. Correctly configured afpacket is as fast as Myricom. There - I said that. If you have great budget and no time get napatech. If smaller budget and little time-Myricom. Just works! Intel has the best future potential, right now requires very careful tuning. And by Intel I mean X710 these days. > On 8 Oct 2016, at 03:24, Seth Hall wrote: > > >> On Oct 7, 2016, at 1:59 PM, Neslog wrote: >> >> Any performance hit seen using software based load balancing versus specialized NICs? (Have actual test results?) > > Some of the specialized NICs are actually doing software based load balancing as well, they just don't make it terribly clear in their marketing material or documentation. (just to muddy the waters more than they already are!) > > .Seth > > -- > Seth Hall > International Computer Science Institute > (Bro) because everyone has a network > http://www.bro.org/ > From blackhole.em at gmail.com Sat Oct 8 06:12:21 2016 From: blackhole.em at gmail.com (Joe Blow) Date: Sat, 8 Oct 2016 09:12:21 -0400 Subject: [Bro] cluster question In-Reply-To: References: <9DBF8353-6D8A-4EA9-B0F4-D4DC926013AC@gmail.com> Message-ID: Solarflare FTW! Being able to multi thread single threaded capture programs is sweet. Very little config to do on card load balancing to worker threads. Also, all you have to do is compile against libpcap 1.5.3 and slipstream the driver in and *poof* it works. Cheers, JB On Sat, Oct 8, 2016 at 4:43 AM, Micha? Purzy?ski wrote: > I second what Seth said. Correctly configured afpacket is as fast as > Myricom. There - I said that. > > If you have great budget and no time get napatech. If smaller budget and > little time-Myricom. Just works! > > Intel has the best future potential, right now requires very careful > tuning. And by Intel I mean X710 these days. > > > > On 8 Oct 2016, at 03:24, Seth Hall wrote: > > > > > >> On Oct 7, 2016, at 1:59 PM, Neslog wrote: > >> > >> Any performance hit seen using software based load balancing versus > specialized NICs? (Have actual test results?) > > > > Some of the specialized NICs are actually doing software based load > balancing as well, they just don't make it terribly clear in their > marketing material or documentation. (just to muddy the waters more than > they already are!) 
> > > > .Seth > > > > -- > > Seth Hall > > International Computer Science Institute > > (Bro) because everyone has a network > > http://www.bro.org/ > > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161008/8a337913/attachment-0001.html From dwaters at bioteam.net Sat Oct 8 10:44:36 2016 From: dwaters at bioteam.net (Darrain Waters) Date: Sat, 8 Oct 2016 12:44:36 -0500 Subject: [Bro] 5 node cluster In-Reply-To: <104E65E1-00F3-4B22-8BC7-2F15480A0C6B@illinois.edu> References: <3B323057-C383-4984-BE68-A97448F7E6EA@illinois.edu> <5761E17D-0809-4A4D-9E59-DCA822E0DD14@illinois.edu> <104E65E1-00F3-4B22-8BC7-2F15480A0C6B@illinois.edu> Message-ID: Turns out it was a simple config issue (like most times & RTFM), and traffic is flowing to snf0. My workers were not using the snf0 interface as you must if you compiled using the myricom sniffer. Also changed the cpu pinning so thanks for that info. I also turned off the time source in the snf driver. Now I need to add Arista time source. Thanks for your time. [worker-3] type=worker host=10.0.40.16 interface=snf0 use to be eth2 lb_method=myricom lb_procs=10 pin_cpus=2,3,4,5,6,7,8,9,10,11 the cpu pinning was not right either #env_vars=LD_LIBRARY_PATH=/opt/snf/lib:$PATH, SNF_FLAGS=0x1, SNF_DATARING_SIZE=0x100000000, SNF_NUM_RINGS=10 [worker-3] type=worker host=10.0.40.16 interface=eth2 lb_method=myricom lb_procs=10 pin_cpus=7,8,9,10,11,18,19,20,21,22 env_vars=LD_LIBRARY_PATH=/opt/snf/lib:$PATH, SNF_FLAGS=0x1, SNF_DATARING_SIZE=0x100000000, SNF_NUM_RINGS=10 To: [worker-3] type=worker host=10.0.40.16 interface=snf0 lb_method=myricom lb_procs=10 pin_cpus=2,3,4,5,6,7,8,9,10,11 On Fri, Oct 7, 2016 at 10:34 PM, Azoff, Justin S wrote: > > > On Oct 7, 2016, at 6:27 PM, Darrain Waters wrote: > > > > Sorry, yeah I am getting comm logs and stderr on the manager. I do have > two NICS enabled on each system, one for management with IP and the other > is the myricom with no IP and in sniffer mode. > > > > Each of the workers do have the spool wirker directories but they are > empty. > > > > I use to be able to run this on the manager > > > > [bromgr at bromgr etc]$ sudo tcpdump -i eth2 > > > > > > tcpdump: snf_ring_open_id(ring=-1) failed: Device or resource busy > > > > > > > > [BroControl] > netstats > > > > worker-1-1: 1475878452.092051 recvd=1 dropped=17260812 link=17260813 > > > > worker-1-10: 1475878452.292009 recvd=1 dropped=17260812 link=17260813 > > > > worker-1-2: 1475878452.493003 recvd=1 dropped=17260812 link=17260813 > > Ah, ok.. so this isn't the firewall issue... That's when "everything is > working but there are no logs" but in your case nothing is working :-) > > I'd stop bro and then make sure everything is stopped. You can use > 'broctl ps.bro' to ensure that there are no stray procs lying around. Then > at that point with nothing else running you should be able to run things > like 'tcpdump' or 'broctl capstats' and verify that you can capture packets. 
> > You should also be able to run tools like > > /opt/snf/bin/myri_nic_info > /opt/snf/bin/myri_counters > /opt/snf/bin/myri_bandwidth > /opt/snf/sbin/myri_license > > to ensure that the card+drivers are working properly as well as check > dmesg output and check to see if it is complaining about anything > > I don't recall every seeing that particular netstats output, but I bet > you'll be able to reproduce the problem with regular tcpdump. Generally > speaking if tcpdump -w foo.pcap writes out packets that look ok, and you > can use bro -r against foo.pcap, bro it should work in realtime. > > The snf issues on the manager may be due to trying to use snf libs against > a regular NIC, I've had to use things like > > LD_PRELOAD=/usr/lib64/libpcap.so.1 tcpdump ... > > to force it to use standard libpcap. > > -- > - Justin Azoff > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161008/ca0e6f52/attachment.html From michalpurzynski1 at gmail.com Sat Oct 8 10:50:09 2016 From: michalpurzynski1 at gmail.com (=?utf-8?Q?Micha=C5=82_Purzy=C5=84ski?=) Date: Sat, 8 Oct 2016 19:50:09 +0200 Subject: [Bro] 5 node cluster In-Reply-To: References: <3B323057-C383-4984-BE68-A97448F7E6EA@illinois.edu> <5761E17D-0809-4A4D-9E59-DCA822E0DD14@illinois.edu> <104E65E1-00F3-4B22-8BC7-2F15480A0C6B@illinois.edu> Message-ID: <21E29593-6C3C-4E17-A02E-A075F2C9FC12@gmail.com> You don't seem to use native Myricom support, there's a plugin for that. > On 8 Oct 2016, at 19:44, Darrain Waters wrote: > > Turns out it was a simple config issue (like most times & RTFM), and traffic is flowing to snf0. My workers were not using the snf0 interface as you must if you compiled using the myricom sniffer. Also changed the cpu pinning so thanks for that info. I also turned off the time source in the snf driver. Now I need to add Arista time source. Thanks for your time. > > [worker-3] > type=worker > host=10.0.40.16 > interface=snf0 use to be eth2 > lb_method=myricom > lb_procs=10 > pin_cpus=2,3,4,5,6,7,8,9,10,11 the cpu pinning was not right either > #env_vars=LD_LIBRARY_PATH=/opt/snf/lib:$PATH, SNF_FLAGS=0x1, SNF_DATARING_SIZE=0x100000000, SNF_NUM_RINGS=10 > > [worker-3] > > type=worker > > host=10.0.40.16 > > interface=eth2 > > lb_method=myricom > > lb_procs=10 > > pin_cpus=7,8,9,10,11,18,19,20,21,22 > > env_vars=LD_LIBRARY_PATH=/opt/snf/lib:$PATH, SNF_FLAGS=0x1, SNF_DATARING_SIZE=0x100000000, SNF_NUM_RINGS=10 > > To: > > [worker-3] > type=worker > host=10.0.40.16 > interface=snf0 > lb_method=myricom > lb_procs=10 > pin_cpus=2,3,4,5,6,7,8,9,10,11 > > > > >> On Fri, Oct 7, 2016 at 10:34 PM, Azoff, Justin S wrote: >> >> > On Oct 7, 2016, at 6:27 PM, Darrain Waters wrote: >> > >> > Sorry, yeah I am getting comm logs and stderr on the manager. I do have two NICS enabled on each system, one for management with IP and the other is the myricom with no IP and in sniffer mode. >> > >> > Each of the workers do have the spool wirker directories but they are empty. >> > >> > I use to be able to run this on the manager >> > >> > [bromgr at bromgr etc]$ sudo tcpdump -i eth2 >> > >> > >> > tcpdump: snf_ring_open_id(ring=-1) failed: Device or resource busy >> > >> > >> > >> > [BroControl] > netstats >> > >> > worker-1-1: 1475878452.092051 recvd=1 dropped=17260812 link=17260813 >> > >> > worker-1-10: 1475878452.292009 recvd=1 dropped=17260812 link=17260813 >> > >> > worker-1-2: 1475878452.493003 recvd=1 dropped=17260812 link=17260813 >> >> Ah, ok.. 
so this isn't the firewall issue... That's when "everything is working but there are no logs" but in your case nothing is working :-) >> >> I'd stop bro and then make sure everything is stopped. You can use 'broctl ps.bro' to ensure that there are no stray procs lying around. Then at that point with nothing else running you should be able to run things like 'tcpdump' or 'broctl capstats' and verify that you can capture packets. >> >> You should also be able to run tools like >> >> /opt/snf/bin/myri_nic_info >> /opt/snf/bin/myri_counters >> /opt/snf/bin/myri_bandwidth >> /opt/snf/sbin/myri_license >> >> to ensure that the card+drivers are working properly as well as check dmesg output and check to see if it is complaining about anything >> >> I don't recall every seeing that particular netstats output, but I bet you'll be able to reproduce the problem with regular tcpdump. Generally speaking if tcpdump -w foo.pcap writes out packets that look ok, and you can use bro -r against foo.pcap, bro it should work in realtime. >> >> The snf issues on the manager may be due to trying to use snf libs against a regular NIC, I've had to use things like >> >> LD_PRELOAD=/usr/lib64/libpcap.so.1 tcpdump ... >> >> to force it to use standard libpcap. >> >> -- >> - Justin Azoff >> > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161008/7905e2a9/attachment.html From dwaters at bioteam.net Sat Oct 8 10:52:26 2016 From: dwaters at bioteam.net (Darrain Waters) Date: Sat, 8 Oct 2016 12:52:26 -0500 Subject: [Bro] 5 node cluster In-Reply-To: <21E29593-6C3C-4E17-A02E-A075F2C9FC12@gmail.com> References: <3B323057-C383-4984-BE68-A97448F7E6EA@illinois.edu> <5761E17D-0809-4A4D-9E59-DCA822E0DD14@illinois.edu> <104E65E1-00F3-4B22-8BC7-2F15480A0C6B@illinois.edu> <21E29593-6C3C-4E17-A02E-A075F2C9FC12@gmail.com> Message-ID: Great, would you point me in the right direction. Thank you. On Sat, Oct 8, 2016 at 12:50 PM, Micha? Purzy?ski < michalpurzynski1 at gmail.com> wrote: > You don't seem to use native Myricom support, there's a plugin for that. > > On 8 Oct 2016, at 19:44, Darrain Waters wrote: > > Turns out it was a simple config issue (like most times & RTFM), and > traffic is flowing to snf0. My workers were not using the snf0 interface as > you must if you compiled using the myricom sniffer. Also changed the cpu > pinning so thanks for that info. I also turned off the time source in the > snf driver. Now I need to add Arista time source. Thanks for your time. 
> > [worker-3] > type=worker > host=10.0.40.16 > interface=snf0 use to be eth2 > lb_method=myricom > lb_procs=10 > pin_cpus=2,3,4,5,6,7,8,9,10,11 the cpu pinning was not right either > #env_vars=LD_LIBRARY_PATH=/opt/snf/lib:$PATH, SNF_FLAGS=0x1, > SNF_DATARING_SIZE=0x100000000, SNF_NUM_RINGS=10 > > [worker-3] > > type=worker > > host=10.0.40.16 > > interface=eth2 > > lb_method=myricom > > lb_procs=10 > > pin_cpus=7,8,9,10,11,18,19,20,21,22 > > env_vars=LD_LIBRARY_PATH=/opt/snf/lib:$PATH, SNF_FLAGS=0x1, > SNF_DATARING_SIZE=0x100000000, SNF_NUM_RINGS=10 > To: > > [worker-3] > type=worker > host=10.0.40.16 > interface=snf0 > lb_method=myricom > lb_procs=10 > pin_cpus=2,3,4,5,6,7,8,9,10,11 > > > > > On Fri, Oct 7, 2016 at 10:34 PM, Azoff, Justin S > wrote: > >> >> > On Oct 7, 2016, at 6:27 PM, Darrain Waters wrote: >> > >> > Sorry, yeah I am getting comm logs and stderr on the manager. I do have >> two NICS enabled on each system, one for management with IP and the other >> is the myricom with no IP and in sniffer mode. >> > >> > Each of the workers do have the spool wirker directories but they are >> empty. >> > >> > I use to be able to run this on the manager >> > >> > [bromgr at bromgr etc]$ sudo tcpdump -i eth2 >> > >> > >> > tcpdump: snf_ring_open_id(ring=-1) failed: Device or resource busy >> > >> > >> > >> > [BroControl] > netstats >> > >> > worker-1-1: 1475878452.092051 recvd=1 dropped=17260812 link=17260813 >> > >> > worker-1-10: 1475878452.292009 recvd=1 dropped=17260812 link=17260813 >> > >> > worker-1-2: 1475878452.493003 recvd=1 dropped=17260812 link=17260813 >> >> Ah, ok.. so this isn't the firewall issue... That's when "everything is >> working but there are no logs" but in your case nothing is working :-) >> >> I'd stop bro and then make sure everything is stopped. You can use >> 'broctl ps.bro' to ensure that there are no stray procs lying around. Then >> at that point with nothing else running you should be able to run things >> like 'tcpdump' or 'broctl capstats' and verify that you can capture packets. >> >> You should also be able to run tools like >> >> /opt/snf/bin/myri_nic_info >> /opt/snf/bin/myri_counters >> /opt/snf/bin/myri_bandwidth >> /opt/snf/sbin/myri_license >> >> to ensure that the card+drivers are working properly as well as check >> dmesg output and check to see if it is complaining about anything >> >> I don't recall every seeing that particular netstats output, but I bet >> you'll be able to reproduce the problem with regular tcpdump. Generally >> speaking if tcpdump -w foo.pcap writes out packets that look ok, and you >> can use bro -r against foo.pcap, bro it should work in realtime. >> >> The snf issues on the manager may be due to trying to use snf libs >> against a regular NIC, I've had to use things like >> >> LD_PRELOAD=/usr/lib64/libpcap.so.1 tcpdump ... >> >> to force it to use standard libpcap. >> >> -- >> - Justin Azoff >> >> > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161008/50464b43/attachment-0001.html From michalpurzynski1 at gmail.com Sat Oct 8 13:29:32 2016 From: michalpurzynski1 at gmail.com (=?utf-8?Q?Micha=C5=82_Purzy=C5=84ski?=) Date: Sat, 8 Oct 2016 22:29:32 +0200 Subject: [Bro] 5 node cluster In-Reply-To: References: <3B323057-C383-4984-BE68-A97448F7E6EA@illinois.edu> <5761E17D-0809-4A4D-9E59-DCA822E0DD14@illinois.edu> <104E65E1-00F3-4B22-8BC7-2F15480A0C6B@illinois.edu> <21E29593-6C3C-4E17-A02E-A075F2C9FC12@gmail.com> Message-ID: http://lmgtfy.com/?q=Bro+Myricom Finds me https://www.bro.org/sphinx-git/components/bro-plugins/myricom/README.html You're welcome ;) Tested on 2.5 master and beta. Haven't tried on 2.4 although by the time you will build your cluster 2.5 will have been released. > On 8 Oct 2016, at 19:52, Darrain Waters wrote: > > Great, would you point me in the right direction. Thank you. > >> On Sat, Oct 8, 2016 at 12:50 PM, Micha? Purzy?ski wrote: >> You don't seem to use native Myricom support, there's a plugin for that. >> >>> On 8 Oct 2016, at 19:44, Darrain Waters wrote: >>> >>> Turns out it was a simple config issue (like most times & RTFM), and traffic is flowing to snf0. My workers were not using the snf0 interface as you must if you compiled using the myricom sniffer. Also changed the cpu pinning so thanks for that info. I also turned off the time source in the snf driver. Now I need to add Arista time source. Thanks for your time. >>> >>> [worker-3] >>> type=worker >>> host=10.0.40.16 >>> interface=snf0 use to be eth2 >>> lb_method=myricom >>> lb_procs=10 >>> pin_cpus=2,3,4,5,6,7,8,9,10,11 the cpu pinning was not right either >>> #env_vars=LD_LIBRARY_PATH=/opt/snf/lib:$PATH, SNF_FLAGS=0x1, SNF_DATARING_SIZE=0x100000000, SNF_NUM_RINGS=10 >>> >>> [worker-3] >>> >>> type=worker >>> >>> host=10.0.40.16 >>> >>> interface=eth2 >>> >>> lb_method=myricom >>> >>> lb_procs=10 >>> >>> pin_cpus=7,8,9,10,11,18,19,20,21,22 >>> >>> env_vars=LD_LIBRARY_PATH=/opt/snf/lib:$PATH, SNF_FLAGS=0x1, SNF_DATARING_SIZE=0x100000000, SNF_NUM_RINGS=10 >>> >>> To: >>> >>> [worker-3] >>> type=worker >>> host=10.0.40.16 >>> interface=snf0 >>> lb_method=myricom >>> lb_procs=10 >>> pin_cpus=2,3,4,5,6,7,8,9,10,11 >>> >>> >>> >>> >>>> On Fri, Oct 7, 2016 at 10:34 PM, Azoff, Justin S wrote: >>>> >>>> > On Oct 7, 2016, at 6:27 PM, Darrain Waters wrote: >>>> > >>>> > Sorry, yeah I am getting comm logs and stderr on the manager. I do have two NICS enabled on each system, one for management with IP and the other is the myricom with no IP and in sniffer mode. >>>> > >>>> > Each of the workers do have the spool wirker directories but they are empty. >>>> > >>>> > I use to be able to run this on the manager >>>> > >>>> > [bromgr at bromgr etc]$ sudo tcpdump -i eth2 >>>> > >>>> > >>>> > tcpdump: snf_ring_open_id(ring=-1) failed: Device or resource busy >>>> > >>>> > >>>> > >>>> > [BroControl] > netstats >>>> > >>>> > worker-1-1: 1475878452.092051 recvd=1 dropped=17260812 link=17260813 >>>> > >>>> > worker-1-10: 1475878452.292009 recvd=1 dropped=17260812 link=17260813 >>>> > >>>> > worker-1-2: 1475878452.493003 recvd=1 dropped=17260812 link=17260813 >>>> >>>> Ah, ok.. so this isn't the firewall issue... That's when "everything is working but there are no logs" but in your case nothing is working :-) >>>> >>>> I'd stop bro and then make sure everything is stopped. You can use 'broctl ps.bro' to ensure that there are no stray procs lying around. 
Then at that point with nothing else running you should be able to run things like 'tcpdump' or 'broctl capstats' and verify that you can capture packets. >>>> >>>> You should also be able to run tools like >>>> >>>> /opt/snf/bin/myri_nic_info >>>> /opt/snf/bin/myri_counters >>>> /opt/snf/bin/myri_bandwidth >>>> /opt/snf/sbin/myri_license >>>> >>>> to ensure that the card+drivers are working properly as well as check dmesg output and check to see if it is complaining about anything >>>> >>>> I don't recall every seeing that particular netstats output, but I bet you'll be able to reproduce the problem with regular tcpdump. Generally speaking if tcpdump -w foo.pcap writes out packets that look ok, and you can use bro -r against foo.pcap, bro it should work in realtime. >>>> >>>> The snf issues on the manager may be due to trying to use snf libs against a regular NIC, I've had to use things like >>>> >>>> LD_PRELOAD=/usr/lib64/libpcap.so.1 tcpdump ... >>>> >>>> to force it to use standard libpcap. >>>> >>>> -- >>>> - Justin Azoff >>>> >>> >>> _______________________________________________ >>> Bro mailing list >>> bro at bro-ids.org >>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161008/ff197c54/attachment.html From jazoff at illinois.edu Sat Oct 8 13:56:31 2016 From: jazoff at illinois.edu (Azoff, Justin S) Date: Sat, 8 Oct 2016 20:56:31 +0000 Subject: [Bro] 5 node cluster In-Reply-To: References: <3B323057-C383-4984-BE68-A97448F7E6EA@illinois.edu> <5761E17D-0809-4A4D-9E59-DCA822E0DD14@illinois.edu> <104E65E1-00F3-4B22-8BC7-2F15480A0C6B@illinois.edu> Message-ID: <214C6D5E-BC95-4AD5-8C33-FEE77BCF8AD0@illinois.edu> > On Oct 8, 2016, at 1:44 PM, Darrain Waters wrote: > > Turns out it was a simple config issue (like most times & RTFM), and traffic is flowing to snf0. My workers were not using the snf0 interface as you must if you compiled using the myricom sniffer. Interesting.. which cards and which version of the snf drivers are you using? I use interface=p1p1 on our clusters and have never had an issue. As I understood things using snf0 was just an alias for 'the first myricom card' -- - Justin Azoff From GRAY at shepherd.edu Sat Oct 8 19:21:39 2016 From: GRAY at shepherd.edu (George Ray) Date: Sun, 9 Oct 2016 02:21:39 +0000 Subject: [Bro] Make error on Mac OS 10 Message-ID: <040BB91BB8AE594F8E3FBC243FC56A1C108DBE9A@MAILBOX3.shepherd.edu> Bro Mailing List, I am attempting to install Bro 2.4.1 on Mac OS 10.6.8. The prereqs have been installed and the ./configure works. During Make, at the 89% level, the system reports: ld: library not found for -lpython2.7 collect2: ld returned 1 exit status make[3]: *** [aux/broctl/aux/pysubnettree/_SubnetTree.so] Error 1 make[2]: *** [aux/broctl/aux/pysubnettree/CMakeFiles/_SubnetTree.dir/all] Error The default python install is 2.6, so I installed version 2.7 from python.org. Python works fine from the command line but the same error recurs. The following is my path: /Library/Frameworks/Python.framework/Versions/2.7/lib:/Library/Frameworks/Python.framework/Versions/2.7/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin Does python have to be installed in a different directory, this path reflects the default? Any ideas on how to resolve this issue? Thanks, George -------------- next part -------------- An HTML attachment was scrubbed... 
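Since the failure is at link time for _SubnetTree.so, one thing worth trying (purely a sketch, and the path assumes the default python.org framework install location) is pointing the linker at the 2.7 framework before re-running the build, and checking whether the configure script in your release exposes explicit Python path options:

export LDFLAGS="-L/Library/Frameworks/Python.framework/Versions/2.7/lib"
./configure --help | grep -i python   # see which python-related options this release offers
./configure && make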
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161009/d9e0bc8b/attachment.html From dwaters at bioteam.net Sun Oct 9 08:01:21 2016 From: dwaters at bioteam.net (Darrain Waters) Date: Sun, 9 Oct 2016 10:01:21 -0500 Subject: [Bro] 5 node cluster In-Reply-To: <214C6D5E-BC95-4AD5-8C33-FEE77BCF8AD0@illinois.edu> References: <3B323057-C383-4984-BE68-A97448F7E6EA@illinois.edu> <5761E17D-0809-4A4D-9E59-DCA822E0DD14@illinois.edu> <104E65E1-00F3-4B22-8BC7-2F15480A0C6B@illinois.edu> <214C6D5E-BC95-4AD5-8C33-FEE77BCF8AD0@illinois.edu> Message-ID: # Serial MAC ProductCode Driver Version License 0 482741 00:60:dd:43:84:4a 10G-PCIE2-8C2-2S-SYNC myri_snf 3.0.9.50782 Valid 1 482741 00:60:dd:43:84:4b 10G-PCIE2-8C2-2S-SYNC myri_snf 3.0.9.50782 Valid It may not make any difference see below. I will try eth2, and that would reduce it to the time source default settings on the cards. myri_snf INFO: myriC0: my ether interface name is eth2 myri_snf INFO: eth2: Will use skbuf frags (4096 bytes, order=0) myri_snf INFO: Enabling host timestamping. On Sat, Oct 8, 2016 at 3:56 PM, Azoff, Justin S wrote: > > > On Oct 8, 2016, at 1:44 PM, Darrain Waters wrote: > > > > Turns out it was a simple config issue (like most times & RTFM), and > traffic is flowing to snf0. My workers were not using the snf0 interface as > you must if you compiled using the myricom sniffer. > > Interesting.. which cards and which version of the snf drivers are you > using? > > I use interface=p1p1 on our clusters and have never had an issue. As I > understood things using snf0 was just an alias for 'the first myricom card' > > -- > - Justin Azoff > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161009/28a9500e/attachment-0001.html From bkellogg at dresser-rand.com Mon Oct 10 09:02:15 2016 From: bkellogg at dresser-rand.com (Kellogg, Brian (GS IT PG-DR)) Date: Mon, 10 Oct 2016 16:02:15 +0000 Subject: [Bro] check rx and tx hosts for files Message-ID: What is the best/most efficient method for checking if rx_hosts is_local_addr and tx_hosts is not is_local_addr? I'm extracting files and only want to extract files coming from the Inet to an internal host. I've also seen some scripts using f$conns[cid]$id... . Thanks, Brian -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161010/681aa772/attachment.html From fatema.bannatwala at gmail.com Mon Oct 10 10:37:31 2016 From: fatema.bannatwala at gmail.com (fatema bannatwala) Date: Mon, 10 Oct 2016 13:37:31 -0400 Subject: [Bro] Understanding Connection history for ssh. Message-ID: Hi Bro team, I am trying to understand the 'history' field in conn.log for failed and successful ssh logins. Can we tell by looking into it whether the ssh connection was successful or not? For ex: We had a case today where bro-intel flagged an IP to be bad with 85% confidence rate, and when we saw the conn.log corresponding to that uid, we saw that, that IP was trying to ssh into a machine. Now the question is, can we tell by looking at the history - ShAdDa that the ssh was successful? 
intel.log entry 1476046696.592070 CXs7MT25xi6ykmT3o1 *77.242.90.96 50367 x.y.z.k 22* - - - 77.242.90.96 Intel::ADDR *SSH::SUCCESSFUL_LOGIN* worker-3-4 dataplane.org 85.0 scanner conn.log entry 1476046725.508913 CXs7MT25xi6ykmT3o1 *77.242.90.96 50367 ** x.y.z.k** 22* tcp ssh 10.623538 1383 1843 S1 F T 0 * ShAdDa* 15 2171 15 2631 (empty) ssh.log entry 1476046725.634328 CXs7MT25xi6ykmT3o1 *77.242.90.96* 50367 *x.y.z.k* 22 2 T INBOUND SSH-2.0-libssh2_1.7.0 SSH-2.0-1.82 sshlib: WinSSHD 4.27 aes256-cbc hmac-sha1 none diffie-hellman-group1-sha1 ssh-dss b9:93:6a:61:8d:29:01:ec:aa:01:1f:0e:90:0a:7b:6e CZ 84 Prerov 49.453899 17.4524 Also, what does the conn history would look like in case of failed ssh login? Thanks for the help. Thanks, Fatema. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161010/4b934caf/attachment.html From jlay at slave-tothe-box.net Mon Oct 10 11:11:49 2016 From: jlay at slave-tothe-box.net (James Lay) Date: Mon, 10 Oct 2016 12:11:49 -0600 Subject: [Bro] Understanding Connection history for ssh. In-Reply-To: References: Message-ID: <7eca0bd60dec71985f22d17b4fc754e6@localhost> On 2016-10-10 11:37, fatema bannatwala wrote: > Hi Bro team, > > I am trying to understand the 'history' field in conn.log for failed > and successful ssh logins. > Can we tell by looking into it whether the ssh connection was > successful or not? > > For ex: We had a case today where bro-intel flagged an IP to be bad > with 85% confidence rate, and when we saw the conn.log corresponding > to that uid, we saw that, that IP was trying to ssh into a machine. > Now the question is, can we tell by looking at the history - ShAdDa > that the ssh was successful? > > intel.log entry > 1476046696.592070 CXs7MT25xi6ykmT3o1 77.242.90.96 50367 > X.Y.Z.K 22 - - - 77.242.90.96 Intel::ADDR SSH::SUCCESSFUL_LOGIN > worker-3-4 dataplane.org [1] 85.0 scanner > > conn.log entry > 1476046725.508913 CXs7MT25xi6ykmT3o1 77.242.90.96 50367 > X.Y.Z.K 22 tcp ssh 10.623538 1383 1843 S1 F T 0 > SHADDA 15 2171 15 2631 (empty) > > ssh.log entry > > 1476046725.634328 CXs7MT25xi6ykmT3o1 77.242.90.96 50367 > X.Y.Z.K 22 2 T INBOUND SSH-2.0-libssh2_1.7.0 > SSH-2.0-1.82 sshlib: WinSSHD 4.27 aes256-cbc hmac-sha1 > none diffie-hellman-group1-sha1 ssh-dss > b9:93:6a:61:8d:29:01:ec:aa:01:1f:0e:90:0a:7b:6e CZ 84 Prerov > 49.453899 17.4524 > > Also, what does the conn history would look like in case of failed ssh > login? > > Thanks for the help. > > Thanks, > Fatema. Fatema, The T in your ssh.log is "auth_success", so yes...bro views this as a successful login. Also, that source IP is not so good...that IP is listed in https://lists.blocklist.de/lists/ssh.txt. James -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161010/b3147a79/attachment.html From fatema.bannatwala at gmail.com Mon Oct 10 12:22:21 2016 From: fatema.bannatwala at gmail.com (fatema bannatwala) Date: Mon, 10 Oct 2016 15:22:21 -0400 Subject: [Bro] Understanding Connection history for ssh. Message-ID: Hi James, Thank you for the answer. The problem is that, when contacted the concerned party, they say that they don't see any login attempts from that IP and asking whether we were sure that the ssh login were successful. 
Looking at what we have recorded using Bro, I just wanted to know how one could tell whether the ssh login resulted in a success or failure just by looking at the Bro conn.log and ssh.log. Hence, I wanted to know the heuristics behind setting that 'auth_success' field to T or F. Thanks, Fatema. -------------- next part -------------- An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161010/012979ad/attachment.html
From jlay at slave-tothe-box.net Mon Oct 10 12:26:05 2016 From: jlay at slave-tothe-box.net (James Lay) Date: Mon, 10 Oct 2016 13:26:05 -0600 Subject: [Bro] Understanding Connection history for ssh. In-Reply-To: References: Message-ID: On 2016-10-10 13:22, fatema bannatwala wrote: > Hi James, > > Thank you for the answer. > The problem is that, when contacted the concerned party, > they say that they don't see any login attempts from that IP and > asking whether we were sure that the ssh login were successful. > Looking at what we have recorded using Bro, I just wanted to know how > one could > tell whether the ssh login resulted in a success or failure just by looking > at the Bro conn.log and ssh.log. > Hence, I wanted to know the heuristics behind setting that > 'auth_success' field to T or F. > > Thanks, > Fatema. Understood...looking at the reputation of that IP I would stick with the theory that there was success. Also I would look into correlating the bro logs with ssh logs. James
From fatema.bannatwala at gmail.com Mon Oct 10 12:32:05 2016 From: fatema.bannatwala at gmail.com (fatema bannatwala) Date: Mon, 10 Oct 2016 15:32:05 -0400 Subject: [Bro] check rx and tx hosts for files Message-ID: Hi Brian, I had the same kind of use case, where I had to exclude file extraction for certain subnets. Hence this is what I have done in my script:

# White list of subnets to exclude file extraction for.
global subnet_map: table[subnet] of string = {
    [x.x.x.x/25] = "VIP subnet1",
    [y.y.y.y/26] = "VIP subnet2",
    [z.z.z.z/24] = "VIP subnet3",
} &default="";

event file_sniff(f: fa_file, meta: fa_metadata)
{
    # check for right source to extract.
    if(f$source != "HTTP")
        return;

    # check the right mime-type to extract.
    if ( ! meta?$mime_type || meta$mime_type !in ext_map )
        return;

    # get the receiving hosts from the record.
    local rx_addr: set[addr];
    rx_addr = f$info$rx_hosts;

    # check if the rx host is in VIP subnets
    for (i in rx_addr)
    {
        if ( i in subnet_map )
        {
            return;
        }
    }

    if ( meta?$mime_type )
    {
        local fname = fmt("%s-%s.%s", f$source, f$id, ext_map[meta$mime_type]);
        Files::add_analyzer(f, Files::ANALYZER_EXTRACT, [$extract_filename=fname]);
    }
}

You can define the rx or tx hosts which you want to exclude/include and modify accordingly. I am sure there might be some more efficient ways to do this; I will let other more experienced people answer that :) Hope this helps. Thanks, Fatema. -------------- next part -------------- An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161010/5f367100/attachment.html
From bkellogg at dresser-rand.com Mon Oct 10 12:34:37 2016 From: bkellogg at dresser-rand.com (Kellogg, Brian (GS IT PG-DR)) Date: Mon, 10 Oct 2016 19:34:37 +0000 Subject: [Bro] check rx and tx hosts for files In-Reply-To: References: Message-ID: Thanks, I did something similar. Always concerned I'm doing it the hard way.
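For the original question (only extract files sent from a non-local host to a local one), the direction test can also be done once per file via file_over_new_connection, which hands over the connection record directly instead of requiring a walk over rx_hosts/tx_hosts. The sketch below is untested, assumes Site::local_nets is populated, and leaves out the mime-type filtering from the script above for brevity:

@load base/frameworks/files
@load base/utils/site

event file_over_new_connection(f: fa_file, c: connection, is_orig: bool)
    {
    # Work out which endpoint is sending the file and which is receiving it.
    local tx_h = is_orig ? c$id$orig_h : c$id$resp_h;
    local rx_h = is_orig ? c$id$resp_h : c$id$orig_h;

    # Keep only files coming from the Internet to an internal host.
    if ( Site::is_local_addr(tx_h) || ! Site::is_local_addr(rx_h) )
        return;

    Files::add_analyzer(f, Files::ANALYZER_EXTRACT,
                        [$extract_filename=fmt("%s-%s", f$source, f$id)]);
    }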
From: fatema bannatwala [mailto:fatema.bannatwala at gmail.com] Sent: Monday, October 10, 2016 3:32 PM To: Kellogg, Brian (GS IT PG-DR) Cc: bro at bro.org Subject: Re: check rx and tx hosts for files Hi Brian, I had the kind of same use-case where I had to exclude file extraction for certain subnets. Hence this is what I have done in my script: # White list of subnets to exclude file extraction for. global subnet_map: table[subnet] of string = { [x.x.x.x/25] = "VIP subnet1", [y.y.y.y/26] = "VIP subnet2", [z.z.z.z/24] = "VIP subnet3", } &default =""; event file_sniff(f: fa_file, meta: fa_metadata) { # check for right source to extract. if(f$source != "HTTP") return; #check the right mime-type to extract. if ( ! meta?$mime_type || meta$mime_type !in ext_map ) return; # get the recieving hosts from the record. local rx_addr: set[addr]; rx_addr = f$info$rx_hosts; # check if the rx host is in VIP subnets for (i in rx_addr) { if ( i in subnet_map ) { return; } } if ( meta?$mime_type ) { local fname = fmt("%s-%s.%s", f$source, f$id, ext_map[meta$mime_type]); Files::add_analyzer(f, Files::ANALYZER_EXTRACT, [$extract_filename=fname]); } } You can define the rx or tx which you want to exclude/include and modify accordingly. I am sure there might be some more efficient ways to do this, I will let other more experience people to answer that :) Hope this helps. Thanks, Fatema. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161010/70f5c891/attachment-0001.html From jazoff at illinois.edu Mon Oct 10 12:37:46 2016 From: jazoff at illinois.edu (Azoff, Justin S) Date: Mon, 10 Oct 2016 19:37:46 +0000 Subject: [Bro] Understanding Connection history for ssh. In-Reply-To: References: Message-ID: <9C50AEB9-0E90-40BC-A881-8836CC3AFE0E@illinois.edu> > On Oct 10, 2016, at 3:22 PM, fatema bannatwala wrote: > > The problem is that, when contacted the concerned party, > they say that they don't see any login attempts from that IP and > asking whether we were sure that the ssh login were successful. If they are not seeing *any* attempts then something is screwed up with the logging on their end. It's possible that the value of auth_success is wrong[1], but it's not possible that no attempt happened. There was a tcp 3 way handshake, there was a ssh protocol negotiation, they should have something in their logs. [1] Or misleading, often from the SSH point of view it was a login, but sometimes the remote system drops you into another password prompt instead of a shell. Appliances do this a lot. -- - Justin Azoff From mus3 at lehigh.edu Mon Oct 10 12:42:51 2016 From: mus3 at lehigh.edu (Munroe Sollog) Date: Mon, 10 Oct 2016 15:42:51 -0400 Subject: [Bro] Understanding Connection history for ssh. In-Reply-To: <7eca0bd60dec71985f22d17b4fc754e6@localhost> References: <7eca0bd60dec71985f22d17b4fc754e6@localhost> Message-ID: <920301a8-0ac2-d1ca-7bd1-0f0b46f04622@lehigh.edu> On 10/10/2016 02:11 PM, James Lay wrote: > On 2016-10-10 11:37, fatema bannatwala wrote: >> Hi Bro team, >> >> I am trying to understand the 'history' field in conn.log for failed >> and successful ssh logins. >> Can we tell by looking into it whether the ssh connection was >> successful or not? >> >> For ex: We had a case today where bro-intel flagged an IP to be bad >> with 85% confidence rate, and when we saw the conn.log corresponding >> to that uid, we saw that, that IP was trying to ssh into a machine. 
>> Now the question is, can we tell by looking at the history - ShAdDa >> that the ssh was successful? >> >> intel.log entry >> 1476046696.592070 CXs7MT25xi6ykmT3o1 77.242.90.96 50367 >> X.Y.Z.K 22 - - - 77.242.90.96 Intel::ADDR SSH::SUCCESSFUL_LOGIN >> worker-3-4 dataplane.org [1] 85.0 scanner >> >> conn.log entry >> 1476046725.508913 CXs7MT25xi6ykmT3o1 77.242.90.96 50367 >> X.Y.Z.K 22 tcp ssh 10.623538 1383 1843 S1 F T 0 >> SHADDA 15 2171 15 2631 (empty) >> >> ssh.log entry >> >> 1476046725.634328 CXs7MT25xi6ykmT3o1 77.242.90.96 50367 >> X.Y.Z.K 22 2 T INBOUND SSH-2.0-libssh2_1.7.0 >> SSH-2.0-1.82 sshlib: WinSSHD 4.27 aes256-cbc hmac-sha1 >> none diffie-hellman-group1-sha1 ssh-dss >> b9:93:6a:61:8d:29:01:ec:aa:01:1f:0e:90:0a:7b:6e CZ 84 Prerov >> 49.453899 17.4524 >> >> Also, what does the conn history would look like in case of failed ssh >> login? >> >> Thanks for the help. >> >> Thanks, >> Fatema. > > Fatema, > The T in your ssh.log is "auth_success", so yes...bro views this as a successful login. Also, that > source IP is not so good...that IP is listed in https://lists.blocklist.de/lists/ssh.txt. > > James Be careful taking that column as fact. It seems like the success of an SSH connection is purely based on the size of the response. A large SSH banner can cause a false positive. > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -- Munroe Sollog LTS - Senior Network Engineer x85002 From fatema.bannatwala at gmail.com Mon Oct 10 12:58:32 2016 From: fatema.bannatwala at gmail.com (fatema bannatwala) Date: Mon, 10 Oct 2016 15:58:32 -0400 Subject: [Bro] Understanding Connection history for ssh. In-Reply-To: <9C50AEB9-0E90-40BC-A881-8836CC3AFE0E@illinois.edu> References: <9C50AEB9-0E90-40BC-A881-8836CC3AFE0E@illinois.edu> Message-ID: Thanks Justin! That makes sense, was just curious to know how bro evaluates the auth_success field :) A quick question, as the connection was seen to last almost 10 secs and was thinking that the failed login connections are not that long, hence wanted to ask could it be possible that the user might have got multiple password prompts over the same connection and Bro logged that single connection of 10secs? would it also explain why no 'R' or 'F' flag was seen in the end of conn history (*ShAdDa)?* Thanks, Fatema. On Mon, Oct 10, 2016 at 3:37 PM, Azoff, Justin S wrote: > > > On Oct 10, 2016, at 3:22 PM, fatema bannatwala < > fatema.bannatwala at gmail.com> wrote: > > > > The problem is that, when contacted the concerned party, > > they say that they don't see any login attempts from that IP and > > asking whether we were sure that the ssh login were successful. > > If they are not seeing *any* attempts then something is screwed up with > the logging on their end. > > It's possible that the value of auth_success is wrong[1], but it's not > possible that no attempt happened. There was a tcp 3 way handshake, there > was a ssh protocol negotiation, they should have something in their logs. > > > [1] Or misleading, often from the SSH point of view it was a login, but > sometimes the remote system drops you into another password prompt instead > of a shell. Appliances do this a lot. > > -- > - Justin Azoff > > -------------- next part -------------- An HTML attachment was scrubbed... 
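On the history question: uppercase letters are actions by the originator and lowercase by the responder (s = SYN, h = SYN-ACK, a = ACK, d = payload data, f = FIN, r = RST), so ShAdDa means the handshake completed and both sides sent data, and the absence of any F or R simply means Bro never saw the connection torn down, which matches the S1 conn_state. One way to line the two logs up for a flagged connection (the uid is the one from this thread):

cat conn.log | bro-cut -d ts uid history conn_state duration orig_bytes resp_bytes | grep CXs7MT25xi6ykmT3o1
cat ssh.log | bro-cut -d ts uid auth_success direction client server | grep CXs7MT25xi6ykmT3o1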
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161010/d72d9cd0/attachment.html From dwaters at bioteam.net Sun Oct 9 08:07:32 2016 From: dwaters at bioteam.net (Darrain Waters) Date: Sun, 9 Oct 2016 10:07:32 -0500 Subject: [Bro] 5 node cluster In-Reply-To: References: <3B323057-C383-4984-BE68-A97448F7E6EA@illinois.edu> <5761E17D-0809-4A4D-9E59-DCA822E0DD14@illinois.edu> <104E65E1-00F3-4B22-8BC7-2F15480A0C6B@illinois.edu> <21E29593-6C3C-4E17-A02E-A075F2C9FC12@gmail.com> Message-ID: I did compile using below. Perhaps that is not enough, as I was following the myricom instructions. See attached On Sat, Oct 8, 2016 at 3:29 PM, Micha? Purzy?ski wrote: > http://lmgtfy.com/?q=Bro+Myricom > > Finds me > > https://www.bro.org/sphinx-git/components/bro-plugins/myricom/README.html > > You're welcome ;) > > Tested on 2.5 master and beta. Haven't tried on 2.4 although by the time > you will build your cluster 2.5 will have been released. > > On 8 Oct 2016, at 19:52, Darrain Waters wrote: > > Great, would you point me in the right direction. Thank you. > > On Sat, Oct 8, 2016 at 12:50 PM, Micha? Purzy?ski < > michalpurzynski1 at gmail.com> wrote: > >> You don't seem to use native Myricom support, there's a plugin for that. >> >> On 8 Oct 2016, at 19:44, Darrain Waters wrote: >> >> Turns out it was a simple config issue (like most times & RTFM), and >> traffic is flowing to snf0. My workers were not using the snf0 interface as >> you must if you compiled using the myricom sniffer. Also changed the cpu >> pinning so thanks for that info. I also turned off the time source in the >> snf driver. Now I need to add Arista time source. Thanks for your time. >> >> [worker-3] >> type=worker >> host=10.0.40.16 >> interface=snf0 use to be eth2 >> lb_method=myricom >> lb_procs=10 >> pin_cpus=2,3,4,5,6,7,8,9,10,11 the cpu pinning was not right either >> #env_vars=LD_LIBRARY_PATH=/opt/snf/lib:$PATH, SNF_FLAGS=0x1, >> SNF_DATARING_SIZE=0x100000000, SNF_NUM_RINGS=10 >> >> [worker-3] >> >> type=worker >> >> host=10.0.40.16 >> >> interface=eth2 >> >> lb_method=myricom >> >> lb_procs=10 >> >> pin_cpus=7,8,9,10,11,18,19,20,21,22 >> >> env_vars=LD_LIBRARY_PATH=/opt/snf/lib:$PATH, SNF_FLAGS=0x1, >> SNF_DATARING_SIZE=0x100000000, SNF_NUM_RINGS=10 >> To: >> >> [worker-3] >> type=worker >> host=10.0.40.16 >> interface=snf0 >> lb_method=myricom >> lb_procs=10 >> pin_cpus=2,3,4,5,6,7,8,9,10,11 >> >> >> >> >> On Fri, Oct 7, 2016 at 10:34 PM, Azoff, Justin S >> wrote: >> >>> >>> > On Oct 7, 2016, at 6:27 PM, Darrain Waters >>> wrote: >>> > >>> > Sorry, yeah I am getting comm logs and stderr on the manager. I do >>> have two NICS enabled on each system, one for management with IP and the >>> other is the myricom with no IP and in sniffer mode. >>> > >>> > Each of the workers do have the spool wirker directories but they are >>> empty. >>> > >>> > I use to be able to run this on the manager >>> > >>> > [bromgr at bromgr etc]$ sudo tcpdump -i eth2 >>> > >>> > >>> > tcpdump: snf_ring_open_id(ring=-1) failed: Device or resource busy >>> > >>> > >>> > >>> > [BroControl] > netstats >>> > >>> > worker-1-1: 1475878452.092051 recvd=1 dropped=17260812 link=17260813 >>> > >>> > worker-1-10: 1475878452.292009 recvd=1 dropped=17260812 link=17260813 >>> > >>> > worker-1-2: 1475878452.493003 recvd=1 dropped=17260812 link=17260813 >>> >>> Ah, ok.. so this isn't the firewall issue... 
That's when "everything is >>> working but there are no logs" but in your case nothing is working :-) >>> >>> I'd stop bro and then make sure everything is stopped. You can use >>> 'broctl ps.bro' to ensure that there are no stray procs lying around. Then >>> at that point with nothing else running you should be able to run things >>> like 'tcpdump' or 'broctl capstats' and verify that you can capture packets. >>> >>> You should also be able to run tools like >>> >>> /opt/snf/bin/myri_nic_info >>> /opt/snf/bin/myri_counters >>> /opt/snf/bin/myri_bandwidth >>> /opt/snf/sbin/myri_license >>> >>> to ensure that the card+drivers are working properly as well as check >>> dmesg output and check to see if it is complaining about anything >>> >>> I don't recall every seeing that particular netstats output, but I bet >>> you'll be able to reproduce the problem with regular tcpdump. Generally >>> speaking if tcpdump -w foo.pcap writes out packets that look ok, and you >>> can use bro -r against foo.pcap, bro it should work in realtime. >>> >>> The snf issues on the manager may be due to trying to use snf libs >>> against a regular NIC, I've had to use things like >>> >>> LD_PRELOAD=/usr/lib64/libpcap.so.1 tcpdump ... >>> >>> to force it to use standard libpcap. >>> >>> -- >>> - Justin Azoff >>> >>> >> _______________________________________________ >> Bro mailing list >> bro at bro-ids.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161009/68bd9686/attachment-0001.html -------------- next part -------------- A non-text attachment was scrubbed... Name: BRO App Note.pdf Type: application/pdf Size: 280306 bytes Desc: not available Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161009/68bd9686/attachment-0001.pdf From philosnef at gmail.com Tue Oct 11 04:26:00 2016 From: philosnef at gmail.com (erik clark) Date: Tue, 11 Oct 2016 07:26:00 -0400 Subject: [Bro] bro cluster and load balancers Message-ID: I still do not understand this... If I have 1. 1 manager node 2. 1 logger node 3. 2 worker nodes and I load balance between the two worker nodes, how, if at all, does the manager know if a session is split across multiple worker nodes? The worker nodes (as mentioned before) would have to spit considerable amounts of traffic information back up to the manager node. My load balancer uses 5 tuples to determine where to send traffic for a given session. I need to limit the number of physical servers assigned to this cluster due to budgetary constraints, and ideally, 2 stand alone worker/manager/logger all in one systems would be more doable than 3 or 4 physical systems. I am under the impression that in the previous thread on this, load balancing in this way is impossible since conn tracking wouldn't work without a manager handling both worker hosts??? -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161011/174fe7d1/attachment.html From jan.grashoefer at gmail.com Tue Oct 11 04:43:46 2016 From: jan.grashoefer at gmail.com (=?UTF-8?Q?Jan_Grash=c3=b6fer?=) Date: Tue, 11 Oct 2016 13:43:46 +0200 Subject: [Bro] bro cluster and load balancers In-Reply-To: References: Message-ID: > and I load balance between the two worker nodes, how, if at all, does the > manager know if a session is split across multiple worker nodes? 
The worker > nodes (as mentioned before) would have to spit considerable amounts of > traffic information back up to the manager node. My load balancer uses 5 > tuples to determine where to send traffic for a given session. I guess by session you mean connection: If your load balancer uses 5-tuples *symmetrically* there shouldn't be any split connection. Accordingly each connection can be analyzed by a worker without interaction with other nodes. State that is shared across the cluster depends on the scripts (e.g., scan.bro), which build upon the events spit out by the analyzers. So there is no need to send traffic to other nodes of the cluster. Jan From philosnef at gmail.com Tue Oct 11 05:19:02 2016 From: philosnef at gmail.com (erik clark) Date: Tue, 11 Oct 2016 08:19:02 -0400 Subject: [Bro] bro cluster and load balancers Message-ID: Thanks Jan! This is what I had thought was true. I have a path forward (on the cheap) now. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161011/12df8663/attachment.html From philosnef at gmail.com Tue Oct 11 06:53:10 2016 From: philosnef at gmail.com (erik clark) Date: Tue, 11 Oct 2016 09:53:10 -0400 Subject: [Bro] question about a 100 gig nic and support Message-ID: Does anyone know if the anic-200Ku is supported? I don't think so, but wanted to ask. While it is totally overkill for 10Gb/s inspection, I am curious to see what would happen if you interconnected two for processing. Apparently these nics support 2 slot interconnection. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161011/46e0c72f/attachment.html From jason.carr at gmail.com Tue Oct 11 08:09:46 2016 From: jason.carr at gmail.com (Jason Carr) Date: Tue, 11 Oct 2016 15:09:46 +0000 Subject: [Bro] 5 node cluster In-Reply-To: References: <3B323057-C383-4984-BE68-A97448F7E6EA@illinois.edu> <5761E17D-0809-4A4D-9E59-DCA822E0DD14@illinois.edu> <104E65E1-00F3-4B22-8BC7-2F15480A0C6B@illinois.edu> <21E29593-6C3C-4E17-A02E-A075F2C9FC12@gmail.com> Message-ID: Do you have a time source hooked up? If you do not, you'll need to start the kernel module with myri_timesource=0. I learned this the hard way because I received a SYNC device when I ordered a regular. My startup script does this on my SYNC device, unload your module /opt/snf/sbin/myri_start_stop stop then /opt/snf/sbin/rebuild.sh /opt/snf/sbin/myri_start_stop start myri_timesource=0 On Mon, Oct 10, 2016 at 10:52 PM Darrain Waters wrote: > I did compile using below. Perhaps that is not enough, as I was following > the myricom instructions. See attached > > On Sat, Oct 8, 2016 at 3:29 PM, Micha? Purzy?ski < > michalpurzynski1 at gmail.com> wrote: > > http://lmgtfy.com/?q=Bro+Myricom > > Finds me > > https://www.bro.org/sphinx-git/components/bro-plugins/myricom/README.html > > You're welcome ;) > > Tested on 2.5 master and beta. Haven't tried on 2.4 although by the time > you will build your cluster 2.5 will have been released. > > On 8 Oct 2016, at 19:52, Darrain Waters wrote: > > Great, would you point me in the right direction. Thank you. > > On Sat, Oct 8, 2016 at 12:50 PM, Micha? Purzy?ski < > michalpurzynski1 at gmail.com> wrote: > > You don't seem to use native Myricom support, there's a plugin for that. > > On 8 Oct 2016, at 19:44, Darrain Waters wrote: > > Turns out it was a simple config issue (like most times & RTFM), and > traffic is flowing to snf0. 
My workers were not using the snf0 interface as > you must if you compiled using the myricom sniffer. Also changed the cpu > pinning so thanks for that info. I also turned off the time source in the > snf driver. Now I need to add Arista time source. Thanks for your time. > > [worker-3] > type=worker > host=10.0.40.16 > interface=snf0 use to be eth2 > lb_method=myricom > lb_procs=10 > pin_cpus=2,3,4,5,6,7,8,9,10,11 the cpu pinning was not right either > #env_vars=LD_LIBRARY_PATH=/opt/snf/lib:$PATH, SNF_FLAGS=0x1, > SNF_DATARING_SIZE=0x100000000, SNF_NUM_RINGS=10 > > [worker-3] > > type=worker > > host=10.0.40.16 > > interface=eth2 > > lb_method=myricom > > lb_procs=10 > > pin_cpus=7,8,9,10,11,18,19,20,21,22 > > env_vars=LD_LIBRARY_PATH=/opt/snf/lib:$PATH, SNF_FLAGS=0x1, > SNF_DATARING_SIZE=0x100000000, SNF_NUM_RINGS=10 > To: > > [worker-3] > type=worker > host=10.0.40.16 > interface=snf0 > lb_method=myricom > lb_procs=10 > pin_cpus=2,3,4,5,6,7,8,9,10,11 > > > > > On Fri, Oct 7, 2016 at 10:34 PM, Azoff, Justin S > wrote: > > > > On Oct 7, 2016, at 6:27 PM, Darrain Waters wrote: > > > > Sorry, yeah I am getting comm logs and stderr on the manager. I do have > two NICS enabled on each system, one for management with IP and the other > is the myricom with no IP and in sniffer mode. > > > > Each of the workers do have the spool wirker directories but they are > empty. > > > > I use to be able to run this on the manager > > > > [bromgr at bromgr etc]$ sudo tcpdump -i eth2 > > > > > > tcpdump: snf_ring_open_id(ring=-1) failed: Device or resource busy > > > > > > > > [BroControl] > netstats > > > > worker-1-1: 1475878452.092051 recvd=1 dropped=17260812 link=17260813 > > > > worker-1-10: 1475878452.292009 recvd=1 dropped=17260812 link=17260813 > > > > worker-1-2: 1475878452.493003 recvd=1 dropped=17260812 link=17260813 > > Ah, ok.. so this isn't the firewall issue... That's when "everything is > working but there are no logs" but in your case nothing is working :-) > > I'd stop bro and then make sure everything is stopped. You can use > 'broctl ps.bro' to ensure that there are no stray procs lying around. Then > at that point with nothing else running you should be able to run things > like 'tcpdump' or 'broctl capstats' and verify that you can capture packets. > > You should also be able to run tools like > > /opt/snf/bin/myri_nic_info > /opt/snf/bin/myri_counters > /opt/snf/bin/myri_bandwidth > /opt/snf/sbin/myri_license > > to ensure that the card+drivers are working properly as well as check > dmesg output and check to see if it is complaining about anything > > I don't recall every seeing that particular netstats output, but I bet > you'll be able to reproduce the problem with regular tcpdump. Generally > speaking if tcpdump -w foo.pcap writes out packets that look ok, and you > can use bro -r against foo.pcap, bro it should work in realtime. > > The snf issues on the manager may be due to trying to use snf libs against > a regular NIC, I've had to use things like > > LD_PRELOAD=/usr/lib64/libpcap.so.1 tcpdump ... > > to force it to use standard libpcap. > > -- > - Justin Azoff > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161011/81b76b86/attachment-0001.html From dwaters at bioteam.net Tue Oct 11 08:15:33 2016 From: dwaters at bioteam.net (Darrain Waters) Date: Tue, 11 Oct 2016 10:15:33 -0500 Subject: [Bro] 5 node cluster In-Reply-To: References: <3B323057-C383-4984-BE68-A97448F7E6EA@illinois.edu> <5761E17D-0809-4A4D-9E59-DCA822E0DD14@illinois.edu> <104E65E1-00F3-4B22-8BC7-2F15480A0C6B@illinois.edu> <21E29593-6C3C-4E17-A02E-A075F2C9FC12@gmail.com> Message-ID: Yeah, I already did that and determined that it was the issue. I responded as much, and the sysadmin blocked my response because I had myricom instructions attached to it. Thanks for your response, it is helpful. On Tue, Oct 11, 2016 at 10:09 AM, Jason Carr wrote: > Do you have a time source hooked up? If you do not, you'll need to start > the kernel module with myri_timesource=0. I learned this the hard way > because I received a SYNC device when I ordered a regular. > > My startup script does this on my SYNC device, unload your module > /opt/snf/sbin/myri_start_stop stop then > > /opt/snf/sbin/rebuild.sh > /opt/snf/sbin/myri_start_stop start myri_timesource=0 > > > > On Mon, Oct 10, 2016 at 10:52 PM Darrain Waters > wrote: > >> I did compile using below. Perhaps that is not enough, as I was following >> the myricom instructions. See attached >> >> On Sat, Oct 8, 2016 at 3:29 PM, Micha? Purzy?ski < >> michalpurzynski1 at gmail.com> wrote: >> >>> http://lmgtfy.com/?q=Bro+Myricom >>> >>> Finds me >>> >>> https://www.bro.org/sphinx-git/components/bro-plugins/ >>> myricom/README.html >>> >>> You're welcome ;) >>> >>> Tested on 2.5 master and beta. Haven't tried on 2.4 although by the time >>> you will build your cluster 2.5 will have been released. >>> >>> On 8 Oct 2016, at 19:52, Darrain Waters wrote: >>> >>> Great, would you point me in the right direction. Thank you. >>> >>> On Sat, Oct 8, 2016 at 12:50 PM, Micha? Purzy?ski < >>> michalpurzynski1 at gmail.com> wrote: >>> >>>> You don't seem to use native Myricom support, there's a plugin for that. >>>> >>>> On 8 Oct 2016, at 19:44, Darrain Waters wrote: >>>> >>>> Turns out it was a simple config issue (like most times & RTFM), and >>>> traffic is flowing to snf0. My workers were not using the snf0 interface as >>>> you must if you compiled using the myricom sniffer. Also changed the cpu >>>> pinning so thanks for that info. I also turned off the time source in the >>>> snf driver. Now I need to add Arista time source. Thanks for your time. 
>>>> >>>> [worker-3] >>>> type=worker >>>> host=10.0.40.16 >>>> interface=snf0 use to be eth2 >>>> lb_method=myricom >>>> lb_procs=10 >>>> pin_cpus=2,3,4,5,6,7,8,9,10,11 the cpu pinning was not right either >>>> #env_vars=LD_LIBRARY_PATH=/opt/snf/lib:$PATH, SNF_FLAGS=0x1, >>>> SNF_DATARING_SIZE=0x100000000, SNF_NUM_RINGS=10 >>>> >>>> [worker-3] >>>> >>>> type=worker >>>> >>>> host=10.0.40.16 >>>> >>>> interface=eth2 >>>> >>>> lb_method=myricom >>>> >>>> lb_procs=10 >>>> >>>> pin_cpus=7,8,9,10,11,18,19,20,21,22 >>>> >>>> env_vars=LD_LIBRARY_PATH=/opt/snf/lib:$PATH, SNF_FLAGS=0x1, >>>> SNF_DATARING_SIZE=0x100000000, SNF_NUM_RINGS=10 >>>> To: >>>> >>>> [worker-3] >>>> type=worker >>>> host=10.0.40.16 >>>> interface=snf0 >>>> lb_method=myricom >>>> lb_procs=10 >>>> pin_cpus=2,3,4,5,6,7,8,9,10,11 >>>> >>>> >>>> >>>> >>>> On Fri, Oct 7, 2016 at 10:34 PM, Azoff, Justin S >>>> wrote: >>>> >>>>> >>>>> > On Oct 7, 2016, at 6:27 PM, Darrain Waters >>>>> wrote: >>>>> > >>>>> > Sorry, yeah I am getting comm logs and stderr on the manager. I do >>>>> have two NICS enabled on each system, one for management with IP and the >>>>> other is the myricom with no IP and in sniffer mode. >>>>> > >>>>> > Each of the workers do have the spool wirker directories but they >>>>> are empty. >>>>> > >>>>> > I use to be able to run this on the manager >>>>> > >>>>> > [bromgr at bromgr etc]$ sudo tcpdump -i eth2 >>>>> > >>>>> > >>>>> > tcpdump: snf_ring_open_id(ring=-1) failed: Device or resource busy >>>>> > >>>>> > >>>>> > >>>>> > [BroControl] > netstats >>>>> > >>>>> > worker-1-1: 1475878452.092051 recvd=1 dropped=17260812 link=17260813 >>>>> > >>>>> > worker-1-10: 1475878452.292009 recvd=1 dropped=17260812 link=17260813 >>>>> > >>>>> > worker-1-2: 1475878452.493003 recvd=1 dropped=17260812 link=17260813 >>>>> >>>>> Ah, ok.. so this isn't the firewall issue... That's when "everything >>>>> is working but there are no logs" but in your case nothing is working :-) >>>>> >>>>> I'd stop bro and then make sure everything is stopped. You can use >>>>> 'broctl ps.bro' to ensure that there are no stray procs lying around. Then >>>>> at that point with nothing else running you should be able to run things >>>>> like 'tcpdump' or 'broctl capstats' and verify that you can capture packets. >>>>> >>>>> You should also be able to run tools like >>>>> >>>>> /opt/snf/bin/myri_nic_info >>>>> /opt/snf/bin/myri_counters >>>>> /opt/snf/bin/myri_bandwidth >>>>> /opt/snf/sbin/myri_license >>>>> >>>>> to ensure that the card+drivers are working properly as well as check >>>>> dmesg output and check to see if it is complaining about anything >>>>> >>>>> I don't recall every seeing that particular netstats output, but I bet >>>>> you'll be able to reproduce the problem with regular tcpdump. Generally >>>>> speaking if tcpdump -w foo.pcap writes out packets that look ok, and you >>>>> can use bro -r against foo.pcap, bro it should work in realtime. >>>>> >>>>> The snf issues on the manager may be due to trying to use snf libs >>>>> against a regular NIC, I've had to use things like >>>>> >>>>> LD_PRELOAD=/usr/lib64/libpcap.so.1 tcpdump ... >>>>> >>>>> to force it to use standard libpcap. 
>>>>> >>>>> -- >>>>> - Justin Azoff >>>>> >>>>> >>>> _______________________________________________ >>>> Bro mailing list >>>> bro at bro-ids.org >>>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >>>> >>>> >>> >> _______________________________________________ >> Bro mailing list >> bro at bro-ids.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161011/40eda1ec/attachment.html From philosnef at gmail.com Tue Oct 11 09:40:58 2016 From: philosnef at gmail.com (erik clark) Date: Tue, 11 Oct 2016 12:40:58 -0400 Subject: [Bro] possible bug with smtp analyzer/trans_depth issue Message-ID: We were researching into an issue where we have multiple smtp messages in the same uid (normal), but where every message has the same trans_depth... When the pcap is run against bro manually, we get the correct number of trans_depth values. Packet loss on the systems is very low (below .5%), so I can't exactly chalk it up to traffic issues. Anyone have any experience with this, or might have some insight as to why trans_depth isn't being incremented in these messages? -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161011/679b7d01/attachment.html From gfaulkner.nsm at gmail.com Tue Oct 11 11:08:05 2016 From: gfaulkner.nsm at gmail.com (Gary Faulkner) Date: Tue, 11 Oct 2016 13:08:05 -0500 Subject: [Bro] Bro Plugin Question Message-ID: <78e55e77-0d41-fb45-23c3-f6c673658829@gmail.com> Can Bro plugins that add BIFs take configuration files of their own? An example configuration item might be to add the IP and port that another host listens to for output from Bro, or other app specific parameters. An example plugin might take the types of data being put into known_* scripts and adding entries to a DB on another host for a passive inventory. Another example might be configuring a host and port to send event stats to. I'd rather not hard code these values into a plugin if possible and I'd like to be able to change the configuration values without recompiling; although a Broctl install/restart would be OK. Thanks, Gary From david at woist.net Tue Oct 11 23:25:18 2016 From: david at woist.net (david at woist.net) Date: Wed, 12 Oct 2016 08:25:18 +0200 Subject: [Bro] ip_bytes and pkts not set in conn.log for cropped UDP packets Message-ID: <20161012082518.Horde.UjNWLn8D7ubY5zjWodsIUd8@webmail.woist.net> Hi all, We use bro-2.4.1 to analyze pcap traces with cropped payload. This works fine for TCP and ICMP packets, however orig_pkts, orig_ip_bytes, resp_pkts and resp_ip_bytes are all set to "0" in conn.log for the connections with cropped UDP packets (such as DNS packets with a snaplen of 42). That is, we end up with a conn.log entry stating that no packets were observed for the corresponding connection. I assume this is because no application analyzer is started. How can we still get these fields updated? (-C does not help and orig_bytes as well as resp_bytes are set correctly, also I noted that the event udp_contents is triggered for the packets). Many thanks! 
David From seth at icir.org Wed Oct 12 08:20:38 2016 From: seth at icir.org (Seth Hall) Date: Wed, 12 Oct 2016 11:20:38 -0400 Subject: [Bro] possible bug with smtp analyzer/trans_depth issue In-Reply-To: References: Message-ID: <88808D30-13E8-4221-82AF-986359FEDA49@icir.org> > On Oct 11, 2016, at 12:40 PM, erik clark wrote: > > We were researching into an issue where we have multiple smtp messages in the same uid (normal), but where every message has the same trans_depth... When the pcap is run against bro manually, we get the correct number of trans_depth values. Packet loss on the systems is very low (below .5%), so I can't exactly chalk it up to traffic issues. Are these all on the same TCP connection? (the uid field). You could just be seeing the message flow over multiple connections as it's passed around from mail server to mail server. The trans_depth only refers to the depth of messages passed between hosts within a single TCP connection since many message transfers can be pipelined within a TCP connection. I agree that this is unlikely to be a side effect of packet loss. .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From philosnef at gmail.com Wed Oct 12 08:22:53 2016 From: philosnef at gmail.com (erik clark) Date: Wed, 12 Oct 2016 11:22:53 -0400 Subject: [Bro] possible bug with smtp analyzer/trans_depth issue In-Reply-To: <88808D30-13E8-4221-82AF-986359FEDA49@icir.org> References: <88808D30-13E8-4221-82AF-986359FEDA49@icir.org> Message-ID: Yep, these are all on the same connection, which is why we are interested in tracking this. :) On Wed, Oct 12, 2016 at 11:20 AM, Seth Hall wrote: > > > On Oct 11, 2016, at 12:40 PM, erik clark wrote: > > > > We were researching into an issue where we have multiple smtp messages > in the same uid (normal), but where every message has the same > trans_depth... When the pcap is run against bro manually, we get the > correct number of trans_depth values. Packet loss on the systems is very > low (below .5%), so I can't exactly chalk it up to traffic issues. > > Are these all on the same TCP connection? (the uid field). You could just > be seeing the message flow over multiple connections as it's passed around > from mail server to mail server. The trans_depth only refers to the depth > of messages passed between hosts within a single TCP connection since many > message transfers can be pipelined within a TCP connection. > > I agree that this is unlikely to be a side effect of packet loss. > > .Seth > > -- > Seth Hall > International Computer Science Institute > (Bro) because everyone has a network > http://www.bro.org/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161012/874fb4b2/attachment.html From johanna at icir.org Fri Oct 14 06:10:38 2016 From: johanna at icir.org (Johanna Amann) Date: Fri, 14 Oct 2016 06:10:38 -0700 Subject: [Bro] Bro Plugin Question In-Reply-To: <78e55e77-0d41-fb45-23c3-f6c673658829@gmail.com> References: <78e55e77-0d41-fb45-23c3-f6c673658829@gmail.com> Message-ID: <20161014131035.3gfq32a66vv4i75c@Beezling.local> Hello Gary, On Tue, Oct 11, 2016 at 01:08:05PM -0500, Gary Faulkner wrote: > Can Bro plugins that add BIFs take configuration files of their own? It depends what exactly you mean by configuration files. 
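For the host-and-port example in the original question, the script-based option pattern described below would look roughly like this; MyPlugin and the option names are invented purely for illustration:

# In the plugin's shipped script (e.g. scripts/init.bro):
module MyPlugin;

export {
    ## Host and port that the external collector listens on.
    const collector_host: addr = 127.0.0.1 &redef;
    const collector_port: port = 5555/tcp &redef;
}

# In site/local.bro, overridden without recompiling the plugin:
redef MyPlugin::collector_host = 10.1.2.3;
redef MyPlugin::collector_port = 6000/tcp;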
The way I would go to add configuration options is to use exactly the same approach that Bro itself uses: create scripts, which contain default values that can be redefined. That is something that plugins also can do - users then can redefine the values to watever they need in their local.bro or other scripts. If you want to get more fancy, you can also write plugins to broctl, which might be able to automate this task (depending on what kind of configuration options you want to set). Johanna From johanna at icir.org Fri Oct 14 06:14:43 2016 From: johanna at icir.org (Johanna Amann) Date: Fri, 14 Oct 2016 06:14:43 -0700 Subject: [Bro] Make error on Mac OS 10 In-Reply-To: <040BB91BB8AE594F8E3FBC243FC56A1C108DBE9A@MAILBOX3.shepherd.edu> References: <040BB91BB8AE594F8E3FBC243FC56A1C108DBE9A@MAILBOX3.shepherd.edu> Message-ID: <20161014131443.ksalpybjsptfgany@Beezling.local> Hello George, I sadly do no longer have a copy of snow leopard, so I cannot really try this myself. However, Bro should not hardcode any python library versions, so I assume that python 2.7 actually already was present somewhere on your system (e.g. installed by homebrew or macports) and that this is some kind of include path issue. Could you potentially include the full output of configure and a few lines of make before the error occurs, either here or in a private mail to me? Johanna On Sun, Oct 09, 2016 at 02:21:39AM +0000, George Ray wrote: > Bro Mailing List, > > I am attempting to install Bro 2.4.1 on Mac OS 10.6.8. The prereqs have been installed and the ./configure works. During Make, at the 89% level, the system reports: > ld: library not found for -lpython2.7 > collect2: ld returned 1 exit status > make[3]: *** [aux/broctl/aux/pysubnettree/_SubnetTree.so] Error 1 > make[2]: *** [aux/broctl/aux/pysubnettree/CMakeFiles/_SubnetTree.dir/all] Error > > The default python install is 2.6, so I installed version 2.7 from python.org. Python works fine from the command line but the same error recurs. The following is my path: > /Library/Frameworks/Python.framework/Versions/2.7/lib:/Library/Frameworks/Python.framework/Versions/2.7/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin > > Does python have to be installed in a different directory, this path reflects the default? Any ideas on how to resolve this issue? > > Thanks, > > George > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From johanna at icir.org Fri Oct 14 06:18:23 2016 From: johanna at icir.org (Johanna Amann) Date: Fri, 14 Oct 2016 06:18:23 -0700 Subject: [Bro] Bro crashing on start In-Reply-To: References: Message-ID: <20161014131823.n5ewvkpxzgxmmki5@Beezling.local> Hello Drake, I am not aware of any changes that we did that should cause this kind of error, so I assume that the reason for this is not the updated pfring, and not the updated Bro. Could you check if this indeed does work with Bro 2.4.1 and the new pfring, or if Bro 2.4.1 and the new pfring fails in the same way and report back? :) Thanks, Johanna On Thu, Oct 06, 2016 at 11:01:50PM -0400, Drake Aronhalt wrote: > All, > This morning I updated bro and pfring on my dev sensor to their > respective git master branches and started receiving this error when I try > to start bro: > > # broctl start > > starting logger ... > > starting manager ... > > starting proxy-1 ... > > starting worker-1-1 ... > > starting worker-1-2 ... > > starting worker-1-3 ... > > starting worker-1-4 ... 
> > starting worker-1-5 ... > > worker-1-5 terminated immediately after starting; check output with "diag" > > worker-1-4 terminated immediately after starting; check output with "diag" > > worker-1-1 terminated immediately after starting; check output with "diag" > > worker-1-3 terminated immediately after starting; check output with "diag" > > worker-1-2 terminated immediately after starting; check output with "diag" > > > running 'broctl diag' gives me this > > fatal error: problem with interface eno33557248 (pcap_error: BPF > program is not valid) > > > > pf_ring is loading properly as far as I can tell. My node.cfg is below: > > > [logger] > > type=logger > > host=localhost > > > [manager] > > type=manager > > host=localhost > > > [proxy-1] > > type=proxy > > host=localhost > > > [worker-1] > > type=worker > > host=localhost > > interface=eno33557248 > > lb_method=pf_ring > > lb_procs=5 > > pin_cpus=2,3,4,5,6 > > > Any ideas on what causes this? Should I just roll back to my last config > that worked, or did I miss a change in bro 2.5 config? > > > Drake > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From drakearonhalt at gmail.com Fri Oct 14 06:30:04 2016 From: drakearonhalt at gmail.com (Drake Aronhalt) Date: Fri, 14 Oct 2016 09:30:04 -0400 Subject: [Bro] Bro crashing on start In-Reply-To: <20161014131823.n5ewvkpxzgxmmki5@Beezling.local> References: <20161014131823.n5ewvkpxzgxmmki5@Beezling.local> Message-ID: Thanks Johanna, the issue seemed to be pf_ring, I rolled it back to 6.0.3 and it's working fine now. On Fri, Oct 14, 2016 at 9:18 AM, Johanna Amann wrote: > Hello Drake, > > I am not aware of any changes that we did that should cause this kind of > error, so I assume that the reason for this is not the updated pfring, and > not the updated Bro. > > Could you check if this indeed does work with Bro 2.4.1 and the new > pfring, or if Bro 2.4.1 and the new pfring fails in the same way and > report back? :) > > Thanks, > Johanna > > On Thu, Oct 06, 2016 at 11:01:50PM -0400, Drake Aronhalt wrote: > > All, > > This morning I updated bro and pfring on my dev sensor to their > > respective git master branches and started receiving this error when I > try > > to start bro: > > > > # broctl start > > > > starting logger ... > > > > starting manager ... > > > > starting proxy-1 ... > > > > starting worker-1-1 ... > > > > starting worker-1-2 ... > > > > starting worker-1-3 ... > > > > starting worker-1-4 ... > > > > starting worker-1-5 ... > > > > worker-1-5 terminated immediately after starting; check output with > "diag" > > > > worker-1-4 terminated immediately after starting; check output with > "diag" > > > > worker-1-1 terminated immediately after starting; check output with > "diag" > > > > worker-1-3 terminated immediately after starting; check output with > "diag" > > > > worker-1-2 terminated immediately after starting; check output with > "diag" > > > > > > running 'broctl diag' gives me this > > > > fatal error: problem with interface eno33557248 (pcap_error: BPF > > program is not valid) > > > > > > > > pf_ring is loading properly as far as I can tell. 
My node.cfg is below: > > > > > > [logger] > > > > type=logger > > > > host=localhost > > > > > > [manager] > > > > type=manager > > > > host=localhost > > > > > > [proxy-1] > > > > type=proxy > > > > host=localhost > > > > > > [worker-1] > > > > type=worker > > > > host=localhost > > > > interface=eno33557248 > > > > lb_method=pf_ring > > > > lb_procs=5 > > > > pin_cpus=2,3,4,5,6 > > > > > > Any ideas on what causes this? Should I just roll back to my last config > > that worked, or did I miss a change in bro 2.5 config? > > > > > > Drake > > > _______________________________________________ > > Bro mailing list > > bro at bro-ids.org > > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161014/6aa4da14/attachment.html From johanna at icir.org Fri Oct 14 06:31:08 2016 From: johanna at icir.org (Johanna Amann) Date: Fri, 14 Oct 2016 06:31:08 -0700 Subject: [Bro] check rx and tx hosts for files In-Reply-To: References: Message-ID: <20161014133103.seoxx26r6i5mva23@Beezling.local> Hi Brian, you should be able to just use the event file_over_new_connection, which includes the connection record. With that, you don't have to loop over complex data structures and can just use Site::is_local_addr. This would probably look similar to: event file_over_new_connection(f: fa_file, c: connection, is_orig: bool) { if ( is_orig && Site::is_local_addr(c$id$orig_h) ) Files::add_analyzer(f, Files::ANALYZER_EXTRACT); } I hope this helps, Johanna From johanna at icir.org Fri Oct 14 06:35:27 2016 From: johanna at icir.org (Johanna Amann) Date: Fri, 14 Oct 2016 06:35:27 -0700 Subject: [Bro] question about a 100 gig nic and support In-Reply-To: References: Message-ID: <20161014133527.ubnwty3pqm5k52ox@Beezling.local> Hi Eric, you should be able to use it if it works with libpcap. Apart from that, no special support exists. Johanna On Tue, Oct 11, 2016 at 09:53:10AM -0400, erik clark wrote: > Does anyone know if the anic-200Ku is supported? I don't think so, but > wanted to ask. While it is totally overkill for 10Gb/s inspection, I am > curious to see what would happen if you interconnected two for processing. > Apparently these nics support 2 slot interconnection. > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From johanna at icir.org Fri Oct 14 07:04:33 2016 From: johanna at icir.org (Johanna Amann) Date: Fri, 14 Oct 2016 07:04:33 -0700 Subject: [Bro] ip_bytes and pkts not set in conn.log for cropped UDP packets In-Reply-To: <20161012082518.Horde.UjNWLn8D7ubY5zjWodsIUd8@webmail.woist.net> References: <20161012082518.Horde.UjNWLn8D7ubY5zjWodsIUd8@webmail.woist.net> Message-ID: <20161014140433.mvf2macrbfztun22@Beezling.local> Hello David, > We use bro-2.4.1 to analyze pcap traces with cropped payload. > This works fine for TCP and ICMP packets, however orig_pkts, > orig_ip_bytes, resp_pkts and resp_ip_bytes are all set to "0" in > conn.log for the connections with cropped UDP packets (such as DNS > packets with a snaplen of 42). > That is, we end up with a conn.log entry stating that no packets were > observed for the corresponding connection. > I assume this is because no application analyzer is started. Close, but not quite. The ip_pkts, ip_bytes fields are tracked by the connection size analyzer. Roughly speaking, Bro arranges its analyzers in a tree structure. 
The connection size analyzer is started as a child analyzer of the TCP and the UDP analyzer, meaning it only gets information about packets that is passed on by these analyzers. By default, the UDP analyzer does not forward incomplete packets (where the packet content was not captured due to a small snaplength) to any child analyzer - including the connection size analyzer. Changing this behavior requires patching the c++ source - you could try just removing the if (caplen >= len) check just before ForwardPacket in UDP.cc -- however, this will probably have other side-effects, because you now are also forwarding incomplete data to other analyzers who might not deal too gracefully with them and generate binpac errors - so you only should do this if you also prevent application layer analyzers from instantiating. The current solution here is not quite satisfying - it would probably be nicer if the connection size analyzer is not a child of tcp/udp, but that is difficult due to other reasons. I hope this helps a little bit, Johanna From johanna at icir.org Fri Oct 14 07:09:35 2016 From: johanna at icir.org (Johanna Amann) Date: Fri, 14 Oct 2016 07:09:35 -0700 Subject: [Bro] NAT connection logs In-Reply-To: References: Message-ID: <20161014140935.qlc55emsynpdcgh6@Beezling.local> Hello John, > I have one physical powerful system that has two optical feeds from a > passive tap that observes traffic from inside a firewall and outside the > firewall. A lot of the connections are NAT leaving our gateway > > My question is regarding logging , with a cluster configuration (or any bro > configuration for that matter) if a connection is outbound to an ip of > 1.2.3.4 does bro see the connection as two separate streams with two > separate log entries to follow that stream? Or one stream and the NAT > conversion is within the log? I'm assuming the former and it sees it as > two separate connections >From your setup, I assume that you will see the traffic twice (once with the internal IP and once with the IP of the NAT gateway). In that case, the connections will be logged twice - Bro does not do any kind of internal deduplication. Johanna From johanna at icir.org Fri Oct 14 07:12:56 2016 From: johanna at icir.org (Johanna Amann) Date: Fri, 14 Oct 2016 07:12:56 -0700 Subject: [Bro] Quick load balancing question In-Reply-To: <3d3c54a1963ce74d557bbe4c618627e3@localhost> References: <3d3c54a1963ce74d557bbe4c618627e3@localhost> Message-ID: <20161014141256.6wakuxm53wnp5upl@Beezling.local> Hi James, > for the pf_ring I have the below: > [worker-1] > type=worker > host=localhost > interface=eth0 > lb_method=pf_ring > lb_procs=2 > pin_cpus=0,1 > > So my question is twofold,...does each pinned cpu get a process, In your case - yes. Basically, processes are pinned to the CPUs that you specify in order. If you pin 6 CPUs and have 2 processes, only the first 2 will have processes on it. If you pin 2 CPUs and have 3 processes, the first one will have 2 and the second one one. > and, is there a way to get load balancing using just standalone, without > needing the logger, worker, and proxy processes? Thank you. No, due to the current Bro architecture, you need a cluster for load balancing. 
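As a quick sanity check on the pinning itself: once the cluster is up, the affinity that broctl applied to a given worker can be read back directly (the PID below is made up):

    # prints the CPU list this worker process is allowed to run on
    taskset -cp 12345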
Johanna From jlay at slave-tothe-box.net Fri Oct 14 07:20:45 2016 From: jlay at slave-tothe-box.net (James Lay) Date: Fri, 14 Oct 2016 08:20:45 -0600 Subject: [Bro] Quick load balancing question In-Reply-To: <20161014141256.6wakuxm53wnp5upl@Beezling.local> References: <3d3c54a1963ce74d557bbe4c618627e3@localhost> <20161014141256.6wakuxm53wnp5upl@Beezling.local> Message-ID: <80913649078257cceff67598e36726f0@localhost> On 2016-10-14 08:12, Johanna Amann wrote: > Hi James, > >> for the pf_ring I have the below: >> [worker-1] >> type=worker >> host=localhost >> interface=eth0 >> lb_method=pf_ring >> lb_procs=2 >> pin_cpus=0,1 >> >> So my question is twofold,...does each pinned cpu get a process, > > In your case - yes. Basically, processes are pinned to the CPUs that > you > specify in order. If you pin 6 CPUs and have 2 processes, only the > first 2 > will have processes on it. If you pin 2 CPUs and have 3 processes, the > first one will have 2 and the second one one. > >> and, is there a way to get load balancing using just standalone, >> without >> needing the logger, worker, and proxy processes? Thank you. > > No, due to the current Bro architecture, you need a cluster for load > balancing. > > Johanna Awesome...thanks for the information as always Johanna :) James From philosnef at gmail.com Fri Oct 14 07:52:02 2016 From: philosnef at gmail.com (erik clark) Date: Fri, 14 Oct 2016 10:52:02 -0400 Subject: [Bro] logging to multiple locations in a cluster Message-ID: Is it possible to log to more than one location? I want my broctl to push a remote logger, AND log locally, for redundancy in case the remote logger dies. So, each capture node in the cluster should be instructed to log to that capture node, and copy across the wire to the logger node(s). If this is not possible, is there a way to perhaps sniff the outbound link and log that? Erik -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161014/32deaa11/attachment.html From johanna at icir.org Fri Oct 14 08:02:54 2016 From: johanna at icir.org (Johanna Amann) Date: Fri, 14 Oct 2016 08:02:54 -0700 Subject: [Bro] logging to multiple locations in a cluster In-Reply-To: References: Message-ID: Yes, it is. I think you only have to redef Log::enable_local_logging to true on the workers (it is usually set to false when enabling cluster mode). Johanna On 14 Oct 2016, at 7:52, erik clark wrote: > Is it possible to log to more than one location? I want my broctl to > push a > remote logger, AND log locally, for redundancy in case the remote > logger > dies. > > So, each capture node in the cluster should be instructed to log to > that > capture node, and copy across the wire to the logger node(s). If this > is > not possible, is there a way to perhaps sniff the outbound link and > log > that? > > Erik > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From zeolla at gmail.com Fri Oct 14 08:11:00 2016 From: zeolla at gmail.com (Zeolla@GMail.com) Date: Fri, 14 Oct 2016 15:11:00 +0000 Subject: [Bro] logging to multiple locations in a cluster In-Reply-To: References: Message-ID: I'm not positive about your exact scenario, but I am currently logging to multiple locations. For instance - to flat files, and to a kafka topic - but there is much more that I could be doing. See the logging framework . 
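To make that concrete, a minimal sketch of what the framework allows (the filter and path names here are invented): adding a second filter to an existing stream simply produces a second, independent copy of that log.

    event bro_init()
        {
        # writes conn-copy.log alongside the normal conn.log
        Log::add_filter(Conn::LOG, [$name="conn-copy", $path="conn-copy"]);
        }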
Jon On Fri, Oct 14, 2016 at 10:59 AM erik clark wrote: > Is it possible to log to more than one location? I want my broctl to push > a remote logger, AND log locally, for redundancy in case the remote logger > dies. > > So, each capture node in the cluster should be instructed to log to that > capture node, and copy across the wire to the logger node(s). If this is > not possible, is there a way to perhaps sniff the outbound link and log > that? > > Erik > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -- Jon -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161014/9db40818/attachment.html From philosnef at gmail.com Fri Oct 14 08:19:53 2016 From: philosnef at gmail.com (erik clark) Date: Fri, 14 Oct 2016 11:19:53 -0400 Subject: [Bro] logging to multiple locations in a cluster In-Reply-To: References: Message-ID: Yep, ok, can do. Thanks Johanna and Zoella! So redef in local-worker.bro? On Fri, Oct 14, 2016 at 11:11 AM, Zeolla at GMail.com wrote: > I'm not positive about your exact scenario, but I am currently logging to > multiple locations. For instance - to flat files, and to a kafka topic - > but there is much more that I could be doing. See the logging framework > . > > > Jon > > On Fri, Oct 14, 2016 at 10:59 AM erik clark wrote: > >> Is it possible to log to more than one location? I want my broctl to push >> a remote logger, AND log locally, for redundancy in case the remote logger >> dies. >> >> So, each capture node in the cluster should be instructed to log to that >> capture node, and copy across the wire to the logger node(s). If this is >> not possible, is there a way to perhaps sniff the outbound link and log >> that? >> >> Erik >> _______________________________________________ >> Bro mailing list >> bro at bro-ids.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > -- > > Jon > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161014/42cd07cd/attachment.html From theflakes at gmail.com Fri Oct 14 09:23:30 2016 From: theflakes at gmail.com (Brian Kellogg) Date: Fri, 14 Oct 2016 12:23:30 -0400 Subject: [Bro] check rx and tx hosts for files In-Reply-To: <20161014133103.seoxx26r6i5mva23@Beezling.local> References: <20161014133103.seoxx26r6i5mva23@Beezling.local> Message-ID: <9980867C-C3CC-479C-9D4F-03FF732022A7@gmail.com> Thanks, unfortunately I lose the ability to access mime type with that function. Therefore I think I'll stick with file_sniff. Get errors saying f$info$mime_type isn?t present. I?ll keep playing with it when I can. Thanks again for Bro, incredible tool to have. On Fri, Oct 14, 2016 at 9:31 AM, Johanna Amann > wrote: Hi Brian, you should be able to just use the event file_over_new_connection, which includes the connection record. With that, you don't have to loop over complex data structures and can just use Site::is_local_addr. This would probably look similar to: event file_over_new_connection(f: fa_file, c: connection, is_orig: bool) { if ( is_orig && Site::is_local_addr(c$id$orig_h) ) Files::add_analyzer(f, Files::ANALYZER_EXTRACT); } I hope this helps, Johanna _______________________________________________ Bro mailing list bro at bro-ids.org http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161014/6f14fbb5/attachment-0001.html From jlay at slave-tothe-box.net Sat Oct 15 09:48:09 2016 From: jlay at slave-tothe-box.net (James Lay) Date: Sat, 15 Oct 2016 10:48:09 -0600 Subject: [Bro] Several protosig questions Message-ID: <1476550089.2331.12.camel@slave-tothe-box.net> Wow...so here's my sig: signature protosig_ntp_apple { ? dst-ip == 17.0.0.0/8 ? ip-proto == udp ? dst-port == 123 ? payload /.*\x00/ ? payload-size == 48 ? eval ProtoSig::match } First, IP is 192.168.1.95 -> 17.253.4.253 udp port 123. Issue #1: ?CIDR doesn't appear to work..with the above dst-ip entry this fails to identify ntp_apple, commenting out the dst-ip the line matches. Issue #2: ?Payload-size; of interest, if you don't set a payload entry, then setting payload-size with ">" and "==" won't match, but ANY number with "<" fired off. Ironically I could set payload-size < 1 and this would fire. this is using latest beta. ?Thank you. James -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161015/eacc2c84/attachment.html From jlay at slave-tothe-box.net Sat Oct 15 09:52:19 2016 From: jlay at slave-tothe-box.net (James Lay) Date: Sat, 15 Oct 2016 10:52:19 -0600 Subject: [Bro] Several protosig questions In-Reply-To: <1476550089.2331.12.camel@slave-tothe-box.net> References: <1476550089.2331.12.camel@slave-tothe-box.net> Message-ID: <1476550339.2331.13.camel@slave-tothe-box.net> On Sat, 2016-10-15 at 10:48 -0600, James Lay wrote: > Wow...so here's my sig: > > signature protosig_ntp_apple { > ? dst-ip == 17.0.0.0/8 > ? ip-proto == udp > ? dst-port == 123 > ? payload /.*\x00/ > ? payload-size == 48 > ? eval ProtoSig::match > } > > First, IP is 192.168.1.95 -> 17.253.4.253 udp port 123. > > Issue #1: ?CIDR doesn't appear to work..with the above dst-ip entry > this fails to identify ntp_apple, commenting out the dst-ip the line > matches. > Issue #2: ?Payload-size; of interest, if you don't set a payload > entry, then setting payload-size with ">" and "==" won't match, but > ANY number with "<" fired off. Ironically I could set payload-size < > 1 and this would fire. > > this is using latest beta. ?Thank you. > > James > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro Also if interest,?header ip[16:4] == 17.0.0.0/8 DOES in fact work, so I believe there's an issue with the dst-ip item. James -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161015/a8b9c970/attachment.html From alex.hope at shopify.com Mon Oct 17 11:55:39 2016 From: alex.hope at shopify.com (Alex Hope) Date: Mon, 17 Oct 2016 14:55:39 -0400 Subject: [Bro] When...timeout statement not executing Message-ID: Hi Bro mailing list, I'm having an issue where the when...timeout block isn't executing. I'll post my code then explain the problem I'm experiencing. The relevant code is: when ( c$id$resp_h in valid_ipaddrs ) { whitelist_status = "to whitelisted destination "; interesting = F; } timeout 3 sec { whitelist_status = "to non-whitelisted destination "; interesting = T; } Basically, I'm checking connections against a set of whitelisted IP addresses. 
The reason I'm using a when...timeout block is to avoid a race condition so that if a whitelisted domain shows up with an IP address not yet in the IP whitelist, we allow time for the new IP to be written so that subsequent connections to the whitelisted domain don't trigger alerts by attempting to look up the IP address before it has had time to be written to the whitelist. The problem I'm having is that sometimes neither block gets executed, so when I do something like NOTICE([$note = Unauthorized, $msg = fmt("%s %s connection %s%s: ", internal_status, get_port_transport_proto(c$id$orig_p), whitelist_status, established_status), $conn = c]); I'll get notices that have messages like Outgoing tcp connection established since whitelist_status won't have been set -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161017/a4137234/attachment.html From alex.hope at shopify.com Mon Oct 17 11:58:16 2016 From: alex.hope at shopify.com (Alex Hope) Date: Mon, 17 Oct 2016 14:58:16 -0400 Subject: [Bro] When...timeout statement not executing In-Reply-To: References: Message-ID: Prematurely sent email.... It is worth mentioning that if I use an if...else block, I do not have this problem, but then I run into the race condition :/ On Mon, Oct 17, 2016 at 2:55 PM, Alex Hope wrote: > Hi Bro mailing list, > > I'm having an issue where the when...timeout block isn't executing. I'll > post my code then explain the problem I'm experiencing. The relevant code > is: > > > when ( c$id$resp_h in valid_ipaddrs ) > { > whitelist_status = "to whitelisted destination "; > interesting = F; > } > timeout 3 sec > { > whitelist_status = "to non-whitelisted destination "; > interesting = T; > } > > Basically, I'm checking connections against a set of whitelisted IP > addresses. The reason I'm using a when...timeout block is to avoid a race > condition so that if a whitelisted domain shows up with an IP address not > yet in the IP whitelist, we allow time for the new IP to be written so that > subsequent connections to the whitelisted domain don't trigger alerts by > attempting to look up the IP address before it has had time to be written > to the whitelist. > > The problem I'm having is that sometimes neither block gets executed, so > when I do something like > > NOTICE([$note = Unauthorized, > $msg = fmt("%s %s connection %s%s: ", internal_status, > get_port_transport_proto(c$id$orig_p), whitelist_status, > established_status), > $conn = c]); > > I'll get notices that have messages like > > Outgoing tcp connection established > > since whitelist_status won't have been set > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161017/57458174/attachment.html From robin at icir.org Mon Oct 17 13:31:47 2016 From: robin at icir.org (Robin Sommer) Date: Mon, 17 Oct 2016 13:31:47 -0700 Subject: [Bro] Several protosig questions In-Reply-To: <1476550339.2331.13.camel@slave-tothe-box.net> References: <1476550089.2331.12.camel@slave-tothe-box.net> <1476550339.2331.13.camel@slave-tothe-box.net> Message-ID: <20161017203147.GF60965@icir.org> Do you have a trace that you can send demonstrating the two issues? Robin On Sat, Oct 15, 2016 at 10:52 -0600, James Lay wrote: > On Sat, 2016-10-15 at 10:48 -0600, James Lay wrote: > > Wow...so here's my sig: > > > > signature protosig_ntp_apple { > > ? dst-ip == 17.0.0.0/8 > > ? ip-proto == udp > > ? 
dst-port == 123 > > ? payload /.*\x00/ > > ? payload-size == 48 > > ? eval ProtoSig::match > > } > > > > First, IP is 192.168.1.95 -> 17.253.4.253 udp port 123. > > > > Issue #1: ?CIDR doesn't appear to work..with the above dst-ip entry > > this fails to identify ntp_apple, commenting out the dst-ip the line > > matches. > > Issue #2: ?Payload-size; of interest, if you don't set a payload > > entry, then setting payload-size with ">" and "==" won't match, but > > ANY number with "<" fired off. Ironically I could set payload-size < > > 1 and this would fire. > > > > this is using latest beta. ?Thank you. > > > > James > > _______________________________________________ > > Bro mailing list > > bro at bro-ids.org > > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > Also if interest,?header ip[16:4] == 17.0.0.0/8 DOES in fact work, so I > believe there's an issue with the dst-ip item. > James > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -- Robin Sommer * ICSI/LBNL * robin at icir.org * www.icir.org/robin From jlay at slave-tothe-box.net Mon Oct 17 14:08:01 2016 From: jlay at slave-tothe-box.net (James Lay) Date: Mon, 17 Oct 2016 15:08:01 -0600 Subject: [Bro] Several protosig questions In-Reply-To: <20161017203147.GF60965@icir.org> References: <1476550089.2331.12.camel@slave-tothe-box.net> <1476550339.2331.13.camel@slave-tothe-box.net> <20161017203147.GF60965@icir.org> Message-ID: On 2016-10-17 14:31, Robin Sommer wrote: > Do you have a trace that you can send demonstrating the two issues? > > Robin Included! Sigs below (in 2.4.1 order mattered..I think last matched gets the protosig tag, but I've swapped these around with the same results)..in either case only ntp matches, not ntp_apple. Maybe it's a beta thing? signature protosig_ntp { ip-proto == udp dst-port == 123 payload /.*\x00/ payload-size == 48 eval ProtoSig::match } signature protosig_ntp_apple { #header ip[16:4] == 17.0.0.0/8 dst-ip = = 17.0.0.0/8 ip-proto == udp dst-port == 123 payload /.*\x00/ payload-size == 48 eval ProtoSig::match } Thank you. James > > On Sat, Oct 15, 2016 at 10:52 -0600, James Lay wrote: > >> On Sat, 2016-10-15 at 10:48 -0600, James Lay wrote: >> > Wow...so here's my sig: >> > >> > signature protosig_ntp_apple { >> > ? dst-ip == 17.0.0.0/8 >> > ? ip-proto == udp >> > ? dst-port == 123 >> > ? payload /.*\x00/ >> > ? payload-size == 48 >> > ? eval ProtoSig::match >> > } >> > >> > First, IP is 192.168.1.95 -> 17.253.4.253 udp port 123. >> > >> > Issue #1: ?CIDR doesn't appear to work..with the above dst-ip entry >> > this fails to identify ntp_apple, commenting out the dst-ip the line >> > matches. >> > Issue #2: ?Payload-size; of interest, if you don't set a payload >> > entry, then setting payload-size with ">" and "==" won't match, but >> > ANY number with "<" fired off. Ironically I could set payload-size < >> > 1 and this would fire. >> > >> > this is using latest beta. ?Thank you. >> > >> > James >> > _______________________________________________ >> > Bro mailing list >> > bro at bro-ids.org >> > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >> Also if interest,?header ip[16:4] == 17.0.0.0/8 DOES in fact work, so >> I >> believe there's an issue with the dst-ip item. 
>> James > >> _______________________________________________ >> Bro mailing list >> bro at bro-ids.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/vnd.tcpdump.pcap Size: 1296 bytes Desc: not available Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161017/9d5a7db6/attachment.bin From jlay at slave-tothe-box.net Mon Oct 17 14:11:27 2016 From: jlay at slave-tothe-box.net (James Lay) Date: Mon, 17 Oct 2016 15:11:27 -0600 Subject: [Bro] Several protosig questions In-Reply-To: References: <1476550089.2331.12.camel@slave-tothe-box.net> <1476550339.2331.13.camel@slave-tothe-box.net> <20161017203147.GF60965@icir.org> Message-ID: <2b9a0096745ba65a8e756951cef0328c@localhost> On 2016-10-17 15:08, James Lay wrote: > On 2016-10-17 14:31, Robin Sommer wrote: >> Do you have a trace that you can send demonstrating the two issues? >> >> Robin > > Included! Sigs below (in 2.4.1 order mattered..I think last matched > gets the protosig tag, but I've swapped these around with the same > results)..in either case only ntp matches, not ntp_apple. Maybe it's > a beta thing? > > signature protosig_ntp { > ip-proto == udp > dst-port == 123 > payload /.*\x00/ > payload-size == 48 > eval ProtoSig::match > } > > signature protosig_ntp_apple { > #header ip[16:4] == 17.0.0.0/8 -> dst-ip == 17.0.0.0/8 > ip-proto == udp > dst-port == 123 > payload /.*\x00/ > payload-size == 48 > eval ProtoSig::match > } > > Thank you. > > James > >> >> On Sat, Oct 15, 2016 at 10:52 -0600, James Lay wrote: >> >>> On Sat, 2016-10-15 at 10:48 -0600, James Lay wrote: >>> > Wow...so here's my sig: >>> > >>> > signature protosig_ntp_apple { >>> > ? dst-ip == 17.0.0.0/8 >>> > ? ip-proto == udp >>> > ? dst-port == 123 >>> > ? payload /.*\x00/ >>> > ? payload-size == 48 >>> > ? eval ProtoSig::match >>> > } >>> > >>> > First, IP is 192.168.1.95 -> 17.253.4.253 udp port 123. >>> > >>> > Issue #1: ?CIDR doesn't appear to work..with the above dst-ip entry >>> > this fails to identify ntp_apple, commenting out the dst-ip the line >>> > matches. >>> > Issue #2: ?Payload-size; of interest, if you don't set a payload >>> > entry, then setting payload-size with ">" and "==" won't match, but >>> > ANY number with "<" fired off. Ironically I could set payload-size < >>> > 1 and this would fire. >>> > >>> > this is using latest beta. ?Thank you. >>> > >>> > James >>> > _______________________________________________ >>> > Bro mailing list >>> > bro at bro-ids.org >>> > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >>> Also if interest,?header ip[16:4] == 17.0.0.0/8 DOES in fact work, so >>> I >>> believe there's an issue with the dst-ip item. >>> James >> >>> _______________________________________________ >>> Bro mailing list >>> bro at bro-ids.org >>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From philosnef at gmail.com Tue Oct 18 05:10:01 2016 From: philosnef at gmail.com (erik clark) Date: Tue, 18 Oct 2016 08:10:01 -0400 Subject: [Bro] null set question Message-ID: I can't find any test to do null set evaluation, let alone null string evaluation. For null sets, I am doing local set_count = |set|; if (set_count == 0) ... Shouldn't there be a proper is null way of doing this? I can't find it in the bro scripting documentation. 
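For reference, the size operator can be used inline, so the whole check fits in a single expression; a minimal sketch (the set name is made up):

    event bro_init()
        {
        local seen_hosts: set[addr];

        # |x| gives the number of elements, so an empty set is just |x| == 0
        if ( |seen_hosts| == 0 )
            print "no hosts recorded yet";
        }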
-------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161018/6884bdac/attachment.html From robin at icir.org Tue Oct 18 11:28:58 2016 From: robin at icir.org (Robin Sommer) Date: Tue, 18 Oct 2016 11:28:58 -0700 Subject: [Bro] Several protosig questions In-Reply-To: References: <1476550089.2331.12.camel@slave-tothe-box.net> <1476550339.2331.13.camel@slave-tothe-box.net> <20161017203147.GF60965@icir.org> Message-ID: <20161018182858.GB27597@icir.org> On Mon, Oct 17, 2016 at 15:08 -0600, you wrote: > Included! Thanks, I'll take a look, give me a little bit. Robin -- Robin Sommer * ICSI/LBNL * robin at icir.org * www.icir.org/robin From jlay at slave-tothe-box.net Tue Oct 18 11:30:46 2016 From: jlay at slave-tothe-box.net (James Lay) Date: Tue, 18 Oct 2016 12:30:46 -0600 Subject: [Bro] Several protosig questions In-Reply-To: <20161018182858.GB27597@icir.org> References: <1476550089.2331.12.camel@slave-tothe-box.net> <1476550339.2331.13.camel@slave-tothe-box.net> <20161017203147.GF60965@icir.org> <20161018182858.GB27597@icir.org> Message-ID: On 2016-10-18 12:28, Robin Sommer wrote: > On Mon, Oct 17, 2016 at 15:08 -0600, you wrote: > >> Included! > > Thanks, I'll take a look, give me a little bit. > > Robin Thanks Robin...I err on the side of "maybe I hosed something up" in general :) James From philosnef at gmail.com Tue Oct 18 12:16:47 2016 From: philosnef at gmail.com (erik clark) Date: Tue, 18 Oct 2016 15:16:47 -0400 Subject: [Bro] file identification modification Message-ID: I see that: scripts/base/protocols/http/file-ident.sig lets me create magic byte signatures for filetypes I have an interest in. This seems to be specific to http. My problem is that I want to detect files sent via smtp. Right now, files.log does NOT have filenames for things I am sending as attachments, such as mytext.ext. When I send this as attachment, there is no filename *.ext... As such, I would like to attach this to the file analyzer so that I can get notices for files that have the magic byte headers I am concerned with. Is there an easy way to do this for smtp and ftp? -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161018/456e2892/attachment.html From johanna at icir.org Tue Oct 18 12:55:59 2016 From: johanna at icir.org (Johanna Amann) Date: Tue, 18 Oct 2016 15:55:59 -0400 Subject: [Bro] null set question In-Reply-To: References: Message-ID: On 18 Oct 2016, at 8:10, erik clark wrote: > I can't find any test to do null set evaluation, let alone null string > evaluation. For null sets, I am doing > > local set_count = |set|; > if (set_count == 0) ... > > Shouldn't there be a proper is null way of doing this? I can't find it > in > the bro scripting documentation. The way that you found is the way to do it. So - just check if the length of the string/vector/set is null. You can also do that in a single line without using a separate variable. Is there a specific reason why you want a different way of doing that? Johanna From jlay at slave-tothe-box.net Tue Oct 18 13:34:21 2016 From: jlay at slave-tothe-box.net (James Lay) Date: Tue, 18 Oct 2016 14:34:21 -0600 Subject: [Bro] So uh...how do you know which pin_cpus to use? Message-ID: <6d46a6c550430e1c8837f16e40c95ace@localhost> Never really understood this: "The correct pin_cpus setting to use is dependent on your CPU architecture. 
Intel and AMD systems enumerate processors in different ways. Using the wrong pin_cpus setting can cause poor performance." Is there a magical formula? Any advice would help thanks. James From jazoff at illinois.edu Tue Oct 18 13:52:56 2016 From: jazoff at illinois.edu (Azoff, Justin S) Date: Tue, 18 Oct 2016 20:52:56 +0000 Subject: [Bro] So uh...how do you know which pin_cpus to use? In-Reply-To: <6d46a6c550430e1c8837f16e40c95ace@localhost> References: <6d46a6c550430e1c8837f16e40c95ace@localhost> Message-ID: <19D8BCBB-D9B1-4E15-A29D-EC45E629F080@illinois.edu> > On Oct 18, 2016, at 4:34 PM, James Lay wrote: > > Never really understood this: > > "The correct pin_cpus setting to use is dependent on your CPU > architecture. Intel and AMD systems enumerate processors in different > ways. Using the wrong pin_cpus setting can cause poor performance." > > Is there a magical formula? Any advice would help thanks. The best thing to do is to install the hwloc package and use the lstopo or lstopo-no-graphics tool to render a big ascii art image of the system. on centos7 this works: lstopo-no-graphics --of txt You'll get something that looks like this: https://www.open-mpi.org/projects/hwloc/lstopo/images/2XeonE5v2+2cuda+1display_v1.11.png or https://www.open-mpi.org/projects/hwloc/lstopo/images/4Opteron6200.v1.11.png The numbers towards the bottom are the cpu ids. So you can see that using something like 1,3,5,7,9,11,13,15,17,19,21,23,25 on an intel cpu would be the worst thing you could do since 21,23,25 are on the same physical cores as 1,3, and 5 -- - Justin Azoff From jazoff at illinois.edu Tue Oct 18 13:55:56 2016 From: jazoff at illinois.edu (Azoff, Justin S) Date: Tue, 18 Oct 2016 20:55:56 +0000 Subject: [Bro] So uh...how do you know which pin_cpus to use? In-Reply-To: <19D8BCBB-D9B1-4E15-A29D-EC45E629F080@illinois.edu> References: <6d46a6c550430e1c8837f16e40c95ace@localhost> <19D8BCBB-D9B1-4E15-A29D-EC45E629F080@illinois.edu> Message-ID: > On Oct 18, 2016, at 4:52 PM, Azoff, Justin S wrote: > > >> On Oct 18, 2016, at 4:34 PM, James Lay wrote: >> >> Never really understood this: >> >> "The correct pin_cpus setting to use is dependent on your CPU >> architecture. Intel and AMD systems enumerate processors in different >> ways. Using the wrong pin_cpus setting can cause poor performance." >> >> Is there a magical formula? Any advice would help thanks. > > The best thing to do is to install the hwloc package and use the lstopo or lstopo-no-graphics tool to render a big ascii art image of the system. > > > on centos7 this works: > > lstopo-no-graphics --of txt > > > You'll get something that looks like this: > > https://www.open-mpi.org/projects/hwloc/lstopo/images/2XeonE5v2+2cuda+1display_v1.11.png > > or > > https://www.open-mpi.org/projects/hwloc/lstopo/images/4Opteron6200.v1.11.png > > The numbers towards the bottom are the cpu ids. So you can see that using something like > > 1,3,5,7,9,11,13,15,17,19,21,23,25 > > on an intel cpu would be the worst thing you could do since 21,23,25 are on the same physical cores as 1,3, and 5 Oh, I should add... ".. on that particular system". On some of our numa machines the allocation is different and 1,3,5,7,9 would be the right cpus to use! -- - Justin Azoff From jlay at slave-tothe-box.net Tue Oct 18 14:12:31 2016 From: jlay at slave-tothe-box.net (James Lay) Date: Tue, 18 Oct 2016 15:12:31 -0600 Subject: [Bro] So uh...how do you know which pin_cpus to use? 
In-Reply-To: References: <6d46a6c550430e1c8837f16e40c95ace@localhost> <19D8BCBB-D9B1-4E15-A29D-EC45E629F080@illinois.edu> Message-ID: <25f29551fb5359dcb7c02cbdf8b7e837@localhost> On 2016-10-18 14:55, Azoff, Justin S wrote: >> On Oct 18, 2016, at 4:52 PM, Azoff, Justin S >> wrote: >> >> >>> On Oct 18, 2016, at 4:34 PM, James Lay >>> wrote: >>> >>> Never really understood this: >>> >>> "The correct pin_cpus setting to use is dependent on your CPU >>> architecture. Intel and AMD systems enumerate processors in different >>> ways. Using the wrong pin_cpus setting can cause poor performance." >>> >>> Is there a magical formula? Any advice would help thanks. >> >> The best thing to do is to install the hwloc package and use the >> lstopo or lstopo-no-graphics tool to render a big ascii art image of >> the system. >> >> >> on centos7 this works: >> >> lstopo-no-graphics --of txt >> >> >> You'll get something that looks like this: >> >> https://www.open-mpi.org/projects/hwloc/lstopo/images/2XeonE5v2+2cuda+1display_v1.11.png >> >> or >> >> https://www.open-mpi.org/projects/hwloc/lstopo/images/4Opteron6200.v1.11.png >> >> The numbers towards the bottom are the cpu ids. So you can see that >> using something like >> >> 1,3,5,7,9,11,13,15,17,19,21,23,25 >> >> on an intel cpu would be the worst thing you could do since 21,23,25 >> are on the same physical cores as 1,3, and 5 > > Oh, I should add... ".. on that particular system". On some of our > numa machines the allocation is different and 1,3,5,7,9 would be the > right cpus to use! Ok cool thanks Justin...so basically I wanna stagger these out so I don't have several processes on the same core ya? cat /proc/cpuinfo | egrep "processor|core id" processor : 0 core id : 0 processor : 1 core id : 0 processor : 2 core id : 1 processor : 3 core id : 1 processor : 4 core id : 2 processor : 5 core id : 2 processor : 6 core id : 3 processor : 7 core id : 3 processor : 8 core id : 4 processor : 9 core id : 4 processor : 10 core id : 5 processor : 11 core id : 5 processor : 12 core id : 0 processor : 13 core id : 0 processor : 14 core id : 1 processor : 15 core id : 1 processor : 16 core id : 2 processor : 17 core id : 2 processor : 18 core id : 3 processor : 19 core id : 3 processor : 20 core id : 4 processor : 21 core id : 4 processor : 22 core id : 5 processor : 23 core id : 5 1,3,5,7,9,11 seem to be the best ones here. Thanks...that's super helpful! James From jazoff at illinois.edu Tue Oct 18 14:19:34 2016 From: jazoff at illinois.edu (Azoff, Justin S) Date: Tue, 18 Oct 2016 21:19:34 +0000 Subject: [Bro] So uh...how do you know which pin_cpus to use? In-Reply-To: <25f29551fb5359dcb7c02cbdf8b7e837@localhost> References: <6d46a6c550430e1c8837f16e40c95ace@localhost> <19D8BCBB-D9B1-4E15-A29D-EC45E629F080@illinois.edu> <25f29551fb5359dcb7c02cbdf8b7e837@localhost> Message-ID: <495F8543-6D1F-4E2E-AF8A-CF45BEB53642@illinois.edu> > On Oct 18, 2016, at 5:12 PM, James Lay wrote: > > On 2016-10-18 14:55, Azoff, Justin S wrote: >>> On Oct 18, 2016, at 4:52 PM, Azoff, Justin S >>> wrote: >>> >>> >>>> On Oct 18, 2016, at 4:34 PM, James Lay >>>> wrote: >>>> >>>> Never really understood this: >>>> >>>> "The correct pin_cpus setting to use is dependent on your CPU >>>> architecture. Intel and AMD systems enumerate processors in different >>>> ways. Using the wrong pin_cpus setting can cause poor performance." >>>> >>>> Is there a magical formula? Any advice would help thanks. 
>>> >>> The best thing to do is to install the hwloc package and use the >>> lstopo or lstopo-no-graphics tool to render a big ascii art image of >>> the system. >>> >>> >>> on centos7 this works: >>> >>> lstopo-no-graphics --of txt >>> >>> >>> You'll get something that looks like this: >>> >>> https://www.open-mpi.org/projects/hwloc/lstopo/images/2XeonE5v2+2cuda+1display_v1.11.png >>> >>> or >>> >>> https://www.open-mpi.org/projects/hwloc/lstopo/images/4Opteron6200.v1.11.png >>> >>> The numbers towards the bottom are the cpu ids. So you can see that >>> using something like >>> >>> 1,3,5,7,9,11,13,15,17,19,21,23,25 >>> >>> on an intel cpu would be the worst thing you could do since 21,23,25 >>> are on the same physical cores as 1,3, and 5 >> >> Oh, I should add... ".. on that particular system". On some of our >> numa machines the allocation is different and 1,3,5,7,9 would be the >> right cpus to use! > > Ok cool thanks Justin...so basically I wanna stagger these out so I > don't have several processes on the same core ya? > > Possibly.. I'd check with what hwloc says. I think just turning off hyper threading makes this even easier since that completely removes the possibility of accidentally pinning 2 workers to the same core. -- - Justin Azoff From jlay at slave-tothe-box.net Tue Oct 18 14:23:08 2016 From: jlay at slave-tothe-box.net (James Lay) Date: Tue, 18 Oct 2016 15:23:08 -0600 Subject: [Bro] So uh...how do you know which pin_cpus to use? In-Reply-To: <495F8543-6D1F-4E2E-AF8A-CF45BEB53642@illinois.edu> References: <6d46a6c550430e1c8837f16e40c95ace@localhost> <19D8BCBB-D9B1-4E15-A29D-EC45E629F080@illinois.edu> <25f29551fb5359dcb7c02cbdf8b7e837@localhost> <495F8543-6D1F-4E2E-AF8A-CF45BEB53642@illinois.edu> Message-ID: <0412653142f976b8c82b3741ccbf941d@localhost> On 2016-10-18 15:19, Azoff, Justin S wrote: >> On Oct 18, 2016, at 5:12 PM, James Lay >> wrote: >> >> On 2016-10-18 14:55, Azoff, Justin S wrote: >>>> On Oct 18, 2016, at 4:52 PM, Azoff, Justin S >>>> wrote: >>>> >>>> >>>>> On Oct 18, 2016, at 4:34 PM, James Lay >>>>> wrote: >>>>> >>>>> Never really understood this: >>>>> >>>>> "The correct pin_cpus setting to use is dependent on your CPU >>>>> architecture. Intel and AMD systems enumerate processors in >>>>> different >>>>> ways. Using the wrong pin_cpus setting can cause poor performance." >>>>> >>>>> Is there a magical formula? Any advice would help thanks. >>>> >>>> The best thing to do is to install the hwloc package and use the >>>> lstopo or lstopo-no-graphics tool to render a big ascii art image of >>>> the system. >>>> >>>> >>>> on centos7 this works: >>>> >>>> lstopo-no-graphics --of txt >>>> >>>> >>>> You'll get something that looks like this: >>>> >>>> https://www.open-mpi.org/projects/hwloc/lstopo/images/2XeonE5v2+2cuda+1display_v1.11.png >>>> >>>> or >>>> >>>> https://www.open-mpi.org/projects/hwloc/lstopo/images/4Opteron6200.v1.11.png >>>> >>>> The numbers towards the bottom are the cpu ids. So you can see that >>>> using something like >>>> >>>> 1,3,5,7,9,11,13,15,17,19,21,23,25 >>>> >>>> on an intel cpu would be the worst thing you could do since 21,23,25 >>>> are on the same physical cores as 1,3, and 5 >>> >>> Oh, I should add... ".. on that particular system". On some of our >>> numa machines the allocation is different and 1,3,5,7,9 would be the >>> right cpus to use! >> >> Ok cool thanks Justin...so basically I wanna stagger these out so I >> don't have several processes on the same core ya? >> >> > > Possibly.. I'd check with what hwloc says. 
I think just turning off > hyper threading makes this even easier since that completely removes > the possibility of accidentally pinning 2 workers to the same core. Sweet...thanks Justin...hwloc is a cool app! James From jazoff at illinois.edu Tue Oct 18 14:34:34 2016 From: jazoff at illinois.edu (Azoff, Justin S) Date: Tue, 18 Oct 2016 21:34:34 +0000 Subject: [Bro] So uh...how do you know which pin_cpus to use? In-Reply-To: <0412653142f976b8c82b3741ccbf941d@localhost> References: <6d46a6c550430e1c8837f16e40c95ace@localhost> <19D8BCBB-D9B1-4E15-A29D-EC45E629F080@illinois.edu> <25f29551fb5359dcb7c02cbdf8b7e837@localhost> <495F8543-6D1F-4E2E-AF8A-CF45BEB53642@illinois.edu> <0412653142f976b8c82b3741ccbf941d@localhost> Message-ID: <20CFA330-B222-4491-A411-FFA487B2E659@illinois.edu> > On Oct 18, 2016, at 5:23 PM, James Lay wrote: >>> >> >> Possibly.. I'd check with what hwloc says. I think just turning off >> hyper threading makes this even easier since that completely removes >> the possibility of accidentally pinning 2 workers to the same core. > > Sweet...thanks Justin...hwloc is a cool app! > > James Yeah... it can be a bit confusing though since it has both a 'logical' (-l) and a 'physical' (-p) view. I _think_ that the cpu ids in the physical view match what taskset use via broctl. Fortunately you can run hwloc-ps -p and compare which pids are mapped to which cpus to verify it is working right. -- - Justin Azoff From michalpurzynski1 at gmail.com Tue Oct 18 15:18:42 2016 From: michalpurzynski1 at gmail.com (=?utf-8?Q?Micha=C5=82_Purzy=C5=84ski?=) Date: Wed, 19 Oct 2016 00:18:42 +0200 Subject: [Bro] So uh...how do you know which pin_cpus to use? In-Reply-To: <20CFA330-B222-4491-A411-FFA487B2E659@illinois.edu> References: <6d46a6c550430e1c8837f16e40c95ace@localhost> <19D8BCBB-D9B1-4E15-A29D-EC45E629F080@illinois.edu> <25f29551fb5359dcb7c02cbdf8b7e837@localhost> <495F8543-6D1F-4E2E-AF8A-CF45BEB53642@illinois.edu> <0412653142f976b8c82b3741ccbf941d@localhost> <20CFA330-B222-4491-A411-FFA487B2E659@illinois.edu> Message-ID: <5E51567F-68CC-45F8-BA64-9B1475EE04E4@gmail.com> 2.6 kernels on Linux enumerate HT in a different way 3.x and 4.x do 2.6 Core 0 thread 0 Core 0 thread 1 Etc 3.x Core 0-N on CPU 0 first half of threads Then CPU 1 Then CPU 0 second half of threads Then CPU 1 Results for HT vs cross numa are about to be published, soon ;) I don't like cache misses when CPU 1 is reaching for data on node 0 though. It is not about cross numa bandwidth it's the fact then you have in the worst case 67ns to process a smallest packet on 10Gbit. And L3 hit on ivy bridge is at least 15ns. Miss is 5x that. > On 18 Oct 2016, at 23:34, Azoff, Justin S wrote: > > >> On Oct 18, 2016, at 5:23 PM, James Lay wrote: >>>> >>> >>> Possibly.. I'd check with what hwloc says. I think just turning off >>> hyper threading makes this even easier since that completely removes >>> the possibility of accidentally pinning 2 workers to the same core. >> >> Sweet...thanks Justin...hwloc is a cool app! >> >> James > > Yeah... it can be a bit confusing though since it has both a 'logical' (-l) and a 'physical' (-p) view. > > I _think_ that the cpu ids in the physical view match what taskset use via broctl. > > Fortunately you can run hwloc-ps -p and compare which pids are mapped to which cpus to verify it is working right. 
> > -- > - Justin Azoff > > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From jazoff at illinois.edu Tue Oct 18 15:28:03 2016 From: jazoff at illinois.edu (Azoff, Justin S) Date: Tue, 18 Oct 2016 22:28:03 +0000 Subject: [Bro] So uh...how do you know which pin_cpus to use? In-Reply-To: <5E51567F-68CC-45F8-BA64-9B1475EE04E4@gmail.com> References: <6d46a6c550430e1c8837f16e40c95ace@localhost> <19D8BCBB-D9B1-4E15-A29D-EC45E629F080@illinois.edu> <25f29551fb5359dcb7c02cbdf8b7e837@localhost> <495F8543-6D1F-4E2E-AF8A-CF45BEB53642@illinois.edu> <0412653142f976b8c82b3741ccbf941d@localhost> <20CFA330-B222-4491-A411-FFA487B2E659@illinois.edu> <5E51567F-68CC-45F8-BA64-9B1475EE04E4@gmail.com> Message-ID: <7D175D98-5720-4701-B2CF-FBE251646457@illinois.edu> > On Oct 18, 2016, at 6:18 PM, Micha? Purzy?ski wrote: > > 2.6 kernels on Linux enumerate HT in a different way 3.x and 4.x do > > 2.6 > > Core 0 thread 0 > Core 0 thread 1 > > Etc > > 3.x > > Core 0-N on CPU 0 first half of threads > Then CPU 1 > Then CPU 0 second half of threads > Then CPU 1 > > Results for HT vs cross numa are about to be published, soon ;) > I don't like cache misses when CPU 1 is reaching for data on node 0 though. It is not about cross numa bandwidth it's the fact then you have in the worst case 67ns to process a smallest packet on 10Gbit. And L3 hit on ivy bridge is at least 15ns. > Miss is 5x that. Ah! That explains a lot. I wonder if numa allocation changed too. We just upgraded some machines from centos6 to 7 and I was wondering how the meticulously written node.cfg we had been using for months now appeared completely wrong. I wonder if broctl should support hwloc for cpu pinning instead of task set. I wouldn't mind having an 'auto' mode that just does the right thing. It looks like on our dual socket numa box we should be using 0,2,4,6,8,10,12,14 for one 10g card and 1,3,5,7,9,11,13,15 for the other 10g card 0-19 are the physical cores and 20-39 are the HT cores, but using 0,1,2,3 flips between numa nodes which is not what anyone wants. -- - Justin Azoff From michalpurzynski1 at gmail.com Tue Oct 18 15:37:44 2016 From: michalpurzynski1 at gmail.com (=?UTF-8?B?TWljaGHFgiBQdXJ6ecWEc2tp?=) Date: Wed, 19 Oct 2016 00:37:44 +0200 Subject: [Bro] So uh...how do you know which pin_cpus to use? In-Reply-To: <7D175D98-5720-4701-B2CF-FBE251646457@illinois.edu> References: <6d46a6c550430e1c8837f16e40c95ace@localhost> <19D8BCBB-D9B1-4E15-A29D-EC45E629F080@illinois.edu> <25f29551fb5359dcb7c02cbdf8b7e837@localhost> <495F8543-6D1F-4E2E-AF8A-CF45BEB53642@illinois.edu> <0412653142f976b8c82b3741ccbf941d@localhost> <20CFA330-B222-4491-A411-FFA487B2E659@illinois.edu> <5E51567F-68CC-45F8-BA64-9B1475EE04E4@gmail.com> <7D175D98-5720-4701-B2CF-FBE251646457@illinois.edu> Message-ID: Lesson learned for me. Never answer from a phone, esp. trying to cover numa allocation on 56 threads on 4 inches ;) Take back what I said. Here is how it looks like, I'm in front of a server with 2x NIC. I have E5-2697 v3 here, 14 physical cores per CPU, HT enabled, kernel 4.4.something. 0-13 - NUMA node 0, CPU 0, hthreads 0-13 14-27 - NUMA node 1, CPU 1, cores 14-27 28-41 - NUMA node 0, CPU 0, hthreads 28-41 42-55 - NUMA node 1, CPU 1 again 1st card should use virtual cores (AKA threads) 0-13 + 28-41 2nd card should use 14-27 + 42-55 On Wed, Oct 19, 2016 at 12:28 AM, Azoff, Justin S wrote: > > > On Oct 18, 2016, at 6:18 PM, Micha? 
Purzy?ski < > michalpurzynski1 at gmail.com> wrote: > > > > 2.6 kernels on Linux enumerate HT in a different way 3.x and 4.x do > > > > 2.6 > > > > Core 0 thread 0 > > Core 0 thread 1 > > > > Etc > > > > 3.x > > > > Core 0-N on CPU 0 first half of threads > > Then CPU 1 > > Then CPU 0 second half of threads > > Then CPU 1 > > > > Results for HT vs cross numa are about to be published, soon ;) > > I don't like cache misses when CPU 1 is reaching for data on node 0 > though. It is not about cross numa bandwidth it's the fact then you have in > the worst case 67ns to process a smallest packet on 10Gbit. And L3 hit on > ivy bridge is at least 15ns. > > Miss is 5x that. > > Ah! That explains a lot. I wonder if numa allocation changed too. We > just upgraded some machines from centos6 to 7 and I was wondering how the > meticulously written node.cfg we had been using for months now appeared > completely wrong. > > I wonder if broctl should support hwloc for cpu pinning instead of task > set. I wouldn't mind having an 'auto' mode that just does the right thing. > > It looks like on our dual socket numa box we should be using > > 0,2,4,6,8,10,12,14 for one 10g card and > 1,3,5,7,9,11,13,15 for the other 10g card > > 0-19 are the physical cores and 20-39 are the HT cores, but using 0,1,2,3 > flips between numa nodes which is not what anyone wants. > > > -- > - Justin Azoff > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161019/d1b4f9b7/attachment.html From center.mnt at gmail.com Wed Oct 19 02:03:34 2016 From: center.mnt at gmail.com (sec-x sec-x) Date: Wed, 19 Oct 2016 12:03:34 +0300 Subject: [Bro] Tuning Bro Message-ID: Hi, Recently I install bro ids instance on my network. I want to filter out all internal dns messages from dns.log. I need an explanation how i configure this and where. Thanks, CM. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161019/cbcded7b/attachment-0001.html From philosnef at gmail.com Wed Oct 19 04:22:50 2016 From: philosnef at gmail.com (erik clark) Date: Wed, 19 Oct 2016 07:22:50 -0400 Subject: [Bro] file identification modification In-Reply-To: References: Message-ID: Actually, I do not see file-ident.sig anywhere in the source tree, or my deployment tree. Where is this kept? Thanks! On Tue, Oct 18, 2016 at 3:16 PM, erik clark wrote: > I see that: > > scripts/base/protocols/http/file-ident.sig > > lets me create magic byte signatures for filetypes I have an interest in. > This seems to be specific to http. > > My problem is that I want to detect files sent via smtp. Right now, > files.log does NOT have filenames for things I am sending as attachments, > such as mytext.ext. When I send this as attachment, there is no filename > *.ext... As such, I would like to attach this to the file analyzer so that > I can get notices for files that have the magic byte headers I am concerned > with. Is there an easy way to do this for smtp and ftp? > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161019/bac2065f/attachment.html From brot212 at googlemail.com Wed Oct 19 04:45:11 2016 From: brot212 at googlemail.com (Dane Wullen) Date: Wed, 19 Oct 2016 13:45:11 +0200 Subject: [Bro] Confusing binPAC error... 
Message-ID: Hi there, I've tried to implement a little test analyzer to detect TCP payload with 2 bytes in it, just to know how binpac works. Here's my protocol.pac: type t_header = record { b1 : uint8; b2 : uint8; } type TEST_PDU(is_orig: bool) = record { data : t_header; } &byteorder = bigendian Here's my analyzer.pac refine flow TEST_Flow += { function proc_test_message(msg: TEST_PDU): bool %{ printf("Read TEST_PDU\n"); BifEvent::generate_test_event(connection()->bro_analyzer(), connection()->bro_analyzer()->Conn()); return true; %} }; refine typeattr TEST_PDU += &let { proc: bool = $context.flow.proc_test_message(this); }; Everything works fine, but when I want to print my byte-values ( printf("Val 1: %d, Val 2: %d, Val 3: %d", ${msg.b1}, ${msg.b2}, ${msg.b3}); ), I get an error while making the file which says that " 'b1' undeclared". Even if I put an if-statement to check if those values are undeclared ( if( ${msg.b1} != NULL && ${msg.b2} != NULL && ${msg.b3} != NULL)), I still get the same error. Can someone help me? :D Or tell me how to proper use C++ code in binPAC? Thanks! From philosnef at gmail.com Wed Oct 19 05:14:50 2016 From: philosnef at gmail.com (erik clark) Date: Wed, 19 Oct 2016 08:14:50 -0400 Subject: [Bro] bug in smtp analyzer? Message-ID: In 2.4.1, it seems that there is no c$smtp$cc field in the smtp analyzer, but there is in 2.5. I noticed in 2.4.1, processing cc fields is haphazard at best, and is totally unreliable. Is this really only fixed in 2.5 with the addition of the cc processor for the smtp analyzer? -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161019/6674dfe3/attachment.html From BLMILLER at comerica.com Wed Oct 19 06:11:39 2016 From: BLMILLER at comerica.com (Miller, Brad L) Date: Wed, 19 Oct 2016 13:11:39 +0000 Subject: [Bro] Tuning Bro In-Reply-To: References: Message-ID: I personally used a bro script much like example 3 in this link: http://blog.bro.org/2012/02/filtering-logs-with-bro.html You define what are ?local? zones to and then splits the dns.log into dns_localzone.log (your items) and dns_remotezone.log (anything not defined). You can then process/query the remotezone log as you would with a dns.log and discard the localzone log if you wish. I would encourage you to keep that localzone log though, it?s a great resource. Brad Miller From: bro-bounces at bro.org [mailto:bro-bounces at bro.org] On Behalf Of sec-x sec-x Sent: Wednesday, October 19, 2016 5:04 AM To: bro at bro.org Subject: [Bro] Tuning Bro Hi, Recently I install bro ids instance on my network. I want to filter out all internal dns messages from dns.log. I need an explanation how i configure this and where. Thanks, CM. Please be aware that if you reply directly to this particular message, your reply may not be secure. Do not use email to send us communications that contain unencrypted confidential information such as passwords, account numbers or Social Security numbers. If you must provide this type of information, please visit comerica.com to submit a secure form using any of the ?Contact Us? forms. In addition, you should not send via email any inquiry or request that may be time sensitive. The information in this e-mail is confidential. It is intended for the individual or entity to whom it is addressed. If you have received this email in error, please destroy or delete the message and advise the sender of the error by return email. 
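To make that concrete, here is a minimal sketch in the spirit of example 3 from that post (the function, filter, and path names are invented, and it assumes Site::local_zones has been populated):

    function dns_split_path(id: Log::ID, path: string, rec: DNS::Info): string
        {
        # queries under a local zone go to dns_localzone.log, the rest to dns_remotezone.log
        if ( rec?$query && Site::is_local_name(rec$query) )
            return "dns_localzone";

        return "dns_remotezone";
        }

    event bro_init()
        {
        Log::add_filter(DNS::LOG, [$name="dns-split", $path_func=dns_split_path]);

        # optional: drop the default filter so a combined dns.log is no longer written
        Log::remove_default_filter(DNS::LOG);
        }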
-------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161019/8f36a4bf/attachment.html From jbarber at computer.org Wed Oct 19 08:09:47 2016 From: jbarber at computer.org (Jeff Barber) Date: Wed, 19 Oct 2016 09:09:47 -0600 Subject: [Bro] Confusing binPAC error... In-Reply-To: References: Message-ID: Dane, As you've listed it, msg is of type TEST_PDU, which is a record containing another record (of type 't_header' named 'data'). You can't ignore the inner record. Looks like you should be using "${msg.data.b1}" in your printf. Also, you're not showing a "b3" anywhere so that should come up undeclared as well. HTH On Wed, Oct 19, 2016 at 5:45 AM, Dane Wullen wrote: > Hi there, > > I've tried to implement a little test analyzer to detect TCP payload > with 2 bytes in it, just to know how binpac works. > > Here's my protocol.pac: > > type t_header = record { > b1 : uint8; > b2 : uint8; > } > > type TEST_PDU(is_orig: bool) = record { > data : t_header; > } &byteorder = bigendian > > Here's my analyzer.pac > > refine flow TEST_Flow += { > function proc_test_message(msg: TEST_PDU): bool > %{ > printf("Read TEST_PDU\n"); > BifEvent::generate_test_event(connection()->bro_analyzer(), > connection()->bro_analyzer()->Conn()); > return true; > %} > }; > > refine typeattr TEST_PDU += &let { > proc: bool = $context.flow.proc_test_message(this); > }; > > Everything works fine, but when I want to print my byte-values ( > printf("Val 1: %d, Val 2: %d, Val 3: %d", ${msg.b1}, ${msg.b2}, > ${msg.b3}); ), > I get an error while making the file which says that " 'b1' undeclared". > Even if I put an if-statement to check if those values are undeclared ( > if( ${msg.b1} != NULL && ${msg.b2} != NULL && ${msg.b3} != NULL)), > I still get the same error. > Can someone help me? :D Or tell me how to proper use C++ code in binPAC? > > Thanks! > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161019/176c9368/attachment-0001.html From robin at icir.org Wed Oct 19 17:28:28 2016 From: robin at icir.org (Robin Sommer) Date: Wed, 19 Oct 2016 17:28:28 -0700 Subject: [Bro] Several protosig questions In-Reply-To: References: <1476550089.2331.12.camel@slave-tothe-box.net> <1476550339.2331.13.camel@slave-tothe-box.net> <20161017203147.GF60965@icir.org> Message-ID: <20161020002827.GE90584@icir.org> On Mon, Oct 17, 2016 at 15:08 -0600, you wrote: > Included! Sigs below (in 2.4.1 order mattered..I think last matched > gets the protosig tag, but I've swapped these around with the same > results)..in either case only ntp matches, not ntp_apple. So both the problems turn out to be bugs: dst-ip is indeed not working with IPv4 CIDR ranges, and payload-size is behaving oddly with (I believe only) UDP. I have patches for both in git branch topic/robin/sig-fixes, could you give that a try? 
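For anyone who wants to try the branch, a source build along these lines should do it (this assumes the GitHub mirror and a default build; adjust paths and prefix as needed):

    git clone --recursive https://github.com/bro/bro
    cd bro
    git checkout topic/robin/sig-fixes
    git submodule update --recursive
    ./configure && make && make install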
Robin -- Robin Sommer * ICSI/LBNL * robin at icir.org * www.icir.org/robin From jlay at slave-tothe-box.net Wed Oct 19 17:44:32 2016 From: jlay at slave-tothe-box.net (James Lay) Date: Wed, 19 Oct 2016 18:44:32 -0600 Subject: [Bro] Several protosig questions In-Reply-To: <20161020002827.GE90584@icir.org> References: <1476550089.2331.12.camel@slave-tothe-box.net> <1476550339.2331.13.camel@slave-tothe-box.net> <20161017203147.GF60965@icir.org> <20161020002827.GE90584@icir.org> Message-ID: <1476924272.2879.8.camel@slave-tothe-box.net> On Wed, 2016-10-19 at 17:28 -0700, Robin Sommer wrote: > On Mon, Oct 17, 2016 at 15:08 -0600, you wrote: > > > > > Included!??Sigs below (in 2.4.1 order mattered..I think last > > matched > > gets the protosig tag, but I've swapped these around with the same > > results)..in either case only ntp matches, not ntp_apple. > So both the problems turn out to be bugs: dst-ip is indeed not > working > with IPv4 CIDR ranges, and payload-size is behaving oddly with (I > believe only) UDP. I have patches for both in git branch > topic/robin/sig-fixes, could you give that a try? > > Robin > Can do....those patches in the bro git repository or somewhere else? ?I wasn't able to find them on github. ?Thanks for the work Robin! James -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161019/740c73a9/attachment.html From robin at icir.org Wed Oct 19 20:10:54 2016 From: robin at icir.org (Robin Sommer) Date: Wed, 19 Oct 2016 20:10:54 -0700 Subject: [Bro] Several protosig questions In-Reply-To: <1476924272.2879.8.camel@slave-tothe-box.net> References: <1476550089.2331.12.camel@slave-tothe-box.net> <1476550339.2331.13.camel@slave-tothe-box.net> <20161017203147.GF60965@icir.org> <20161020002827.GE90584@icir.org> <1476924272.2879.8.camel@slave-tothe-box.net> Message-ID: <20161020031054.GG90584@icir.org> On Wed, Oct 19, 2016 at 18:44 -0600, James Lay wrote: > Can do....those patches in the bro git repository or somewhere else? Yeah: https://github.com/bro/bro/commit/5cf2320fbc3ebb0d53e431829d2c9c56fdf6f08b Robin -- Robin Sommer * ICSI/LBNL * robin at icir.org * www.icir.org/robin From seth at icir.org Thu Oct 20 07:13:33 2016 From: seth at icir.org (Seth Hall) Date: Thu, 20 Oct 2016 10:13:33 -0400 Subject: [Bro] file identification modification In-Reply-To: References: Message-ID: > On Oct 19, 2016, at 7:22 AM, erik clark wrote: > > Actually, I do not see file-ident.sig anywhere in the source tree, or my deployment tree. Where is this kept? Thanks! This was broken out a couple of releases ago. There are a bunch of file signature files in base/frameworks/files/magic/ .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From seth at icir.org Thu Oct 20 07:14:38 2016 From: seth at icir.org (Seth Hall) Date: Thu, 20 Oct 2016 10:14:38 -0400 Subject: [Bro] bug in smtp analyzer? In-Reply-To: References: Message-ID: > On Oct 19, 2016, at 8:14 AM, erik clark wrote: > > In 2.4.1, it seems that there is no c$smtp$cc field in the smtp analyzer, but there is in 2.5. I noticed in 2.4.1, processing cc fields is haphazard at best, and is totally unreliable. Is this really only fixed in 2.5 with the addition of the cc processor for the smtp analyzer? https://github.com/bro/bro/blob/master/NEWS#L228 That was an oversight in previous version of Bro. 
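Returning to the file signatures mentioned above: the files under base/frameworks/files/magic/ use the file-magic and file-mime signature conditions, so a custom type can be sketched along these lines (the format name, MIME type, and magic bytes are all invented):

    signature file-myformat {
        # match the leading bytes of the file against a made-up magic value
        file-magic /^MYFMT\x01/
        # the MIME type to report, with a match strength of 70
        file-mime "application/x-myformat", 70
    }

Because file signatures run in the file analysis framework, a match should be reported regardless of the carrier protocol that feeds the framework (HTTP, SMTP, FTP data, and so on).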
.Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From hacecky at jlab.org Thu Oct 20 08:41:40 2016 From: hacecky at jlab.org (Eric Hacecky) Date: Thu, 20 Oct 2016 11:41:40 -0400 (EDT) Subject: [Bro] Going from standalone to Bro cluster error Message-ID: <595890484.327965.1476978100077.JavaMail.zimbra@jlab.org> I'm in the process of converting my Bro installs from standalone instances to a Bro Cluster. The previous standalone Bro instance/server I'm using for testing is not able to be updated by the manager when I list it as a worker node. I have the following 3 machines: manager - New Bro install. version 2.5-beta-79 (rhel7 python 2.7.5) worker1 - New Bro install. version 2.5-beta-79 (rhel7 python 2.7.5) worker2 - Previous standalone Bro instance. version 2.4 <---- Notice the version differs, does worker2 need to run the same version as the manager? (rhel6 python 2.6.6) Everything works between manager and worker1. Here is the output from broctl when worker2 is listed in node.cfg. // [BroControl] > check manager scripts are ok. proxy-1 scripts are ok. worker-1 scripts are ok. worker-2 scripts are ok. [BroControl] > install removing old policies in /usr/local/bro/spool/installed-scripts-do-not-touch/site ... removing old policies in /usr/local/bro/spool/installed-scripts-do-not-touch/auto ... creating policy directories ... installing site policies ... generating cluster-layout.bro ... generating local-networks.bro ... generating broctl-config.bro ... generating broctl-config.sh ... updating nodes ... sh: line 1: /bin/python: No such file or directory sh: line 2: [/bin/echo,: No such file or directory sh: line 3: syntax error near unexpected token `done' sh: line 3: `done' Error: cannot create a directory on node worker-2 Error: Failed to establish ssh connection to host // ssh keys are working as intended. Even though the error says, "failed to establish ssh connection on host" I can see manager login on worker2 as the root user via ssh in secure log. /bin/python is 2.6.6 and is in root's path. Everything in question is owned by root. Any ideas? Thanks, Eric From dnthayer at illinois.edu Thu Oct 20 09:15:05 2016 From: dnthayer at illinois.edu (Daniel Thayer) Date: Thu, 20 Oct 2016 11:15:05 -0500 Subject: [Bro] Going from standalone to Bro cluster error In-Reply-To: <595890484.327965.1476978100077.JavaMail.zimbra@jlab.org> References: <595890484.327965.1476978100077.JavaMail.zimbra@jlab.org> Message-ID: All of the machines in a Bro cluster must be running the same OS version, because when you run "broctl install" (or "broctl deploy"), broctl will copy the bro installation (the executables and scripts) from the manager to all of the other machines in your cluster. This ensures that all machines in the cluster run the same version of Bro with the same configuration. If you had previously built and installed Bro on worker-2, then it would just get overwritten by broctl. On 10/20/16 10:41 AM, Eric Hacecky wrote: > I'm in the process of converting my Bro installs from standalone instances to a Bro Cluster. > > The previous standalone Bro instance/server I'm using for testing is not able to be updated by the manager when I list it as a worker node. > > I have the following 3 machines: > > manager - New Bro install. version 2.5-beta-79 (rhel7 python 2.7.5) > worker1 - New Bro install. version 2.5-beta-79 (rhel7 python 2.7.5) > worker2 - Previous standalone Bro instance. 
version 2.4 <---- Notice the version differs, does worker2 need to run the same version as the manager? (rhel6 python 2.6.6) > > Everything works between manager and worker1. > > Here is the output from broctl when worker2 is listed in node.cfg. > > // > [BroControl] > check > manager scripts are ok. > proxy-1 scripts are ok. > worker-1 scripts are ok. > worker-2 scripts are ok. > > [BroControl] > install > removing old policies in /usr/local/bro/spool/installed-scripts-do-not-touch/site ... > removing old policies in /usr/local/bro/spool/installed-scripts-do-not-touch/auto ... > creating policy directories ... > installing site policies ... > generating cluster-layout.bro ... > generating local-networks.bro ... > generating broctl-config.bro ... > generating broctl-config.sh ... > updating nodes ... > sh: line 1: /bin/python: No such file or directory > sh: line 2: [/bin/echo,: No such file or directory > sh: line 3: syntax error near unexpected token `done' > sh: line 3: `done' > Error: cannot create a directory on node worker-2 > Error: Failed to establish ssh connection to host > // > > ssh keys are working as intended. Even though the error says, "failed to establish ssh connection on host" I can see manager login on worker2 as the root user via ssh in secure log. > > /bin/python is 2.6.6 and is in root's path. Everything in question is owned by root. > > Any ideas? > > Thanks, > Eric > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > From hacecky at jlab.org Thu Oct 20 09:39:10 2016 From: hacecky at jlab.org (Eric Hacecky) Date: Thu, 20 Oct 2016 12:39:10 -0400 (EDT) Subject: [Bro] Going from standalone to Bro cluster error In-Reply-To: <462219271.336590.1476981545443.JavaMail.zimbra@jlab.org> References: <595890484.327965.1476978100077.JavaMail.zimbra@jlab.org> Message-ID: <415774687.336608.1476981550152.JavaMail.zimbra@jlab.org> Makes sense. Thanks Daniel. ----- Original Message ----- From: "Daniel Thayer" To: "Eric Hacecky" , bro at bro.org Sent: Thursday, October 20, 2016 12:15:05 PM Subject: Re: [Bro] Going from standalone to Bro cluster error All of the machines in a Bro cluster must be running the same OS version, because when you run "broctl install" (or "broctl deploy"), broctl will copy the bro installation (the executables and scripts) from the manager to all of the other machines in your cluster. This ensures that all machines in the cluster run the same version of Bro with the same configuration. If you had previously built and installed Bro on worker-2, then it would just get overwritten by broctl. On 10/20/16 10:41 AM, Eric Hacecky wrote: > I'm in the process of converting my Bro installs from standalone instances to a Bro Cluster. > > The previous standalone Bro instance/server I'm using for testing is not able to be updated by the manager when I list it as a worker node. > > I have the following 3 machines: > > manager - New Bro install. version 2.5-beta-79 (rhel7 python 2.7.5) > worker1 - New Bro install. version 2.5-beta-79 (rhel7 python 2.7.5) > worker2 - Previous standalone Bro instance. version 2.4 <---- Notice the version differs, does worker2 need to run the same version as the manager? (rhel6 python 2.6.6) > > Everything works between manager and worker1. > > Here is the output from broctl when worker2 is listed in node.cfg. > > // > [BroControl] > check > manager scripts are ok. > proxy-1 scripts are ok. > worker-1 scripts are ok. > worker-2 scripts are ok. 
> > [BroControl] > install > removing old policies in /usr/local/bro/spool/installed-scripts-do-not-touch/site ... > removing old policies in /usr/local/bro/spool/installed-scripts-do-not-touch/auto ... > creating policy directories ... > installing site policies ... > generating cluster-layout.bro ... > generating local-networks.bro ... > generating broctl-config.bro ... > generating broctl-config.sh ... > updating nodes ... > sh: line 1: /bin/python: No such file or directory > sh: line 2: [/bin/echo,: No such file or directory > sh: line 3: syntax error near unexpected token `done' > sh: line 3: `done' > Error: cannot create a directory on node worker-2 > Error: Failed to establish ssh connection to host > // > > ssh keys are working as intended. Even though the error says, "failed to establish ssh connection on host" I can see manager login on worker2 as the root user via ssh in secure log. > > /bin/python is 2.6.6 and is in root's path. Everything in question is owned by root. > > Any ideas? > > Thanks, > Eric > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > From jdopheid at illinois.edu Thu Oct 20 11:37:01 2016 From: jdopheid at illinois.edu (Dopheide, Jeannette M) Date: Thu, 20 Oct 2016 18:37:01 +0000 Subject: [Bro] Bro Blog: Contributing to the Bro Project Message-ID: <7EFD7D614A2BB84ABEA19B2CEDD246581C386B70@CITESMBX5.ad.uillinois.edu> Link to blog post: http://blog.bro.org/2016/10/contributing-to-bro-project.html Recently we have had a number of community members ask us for suggestions for contributing back to the Bro Project. We have updated the Community page on our website to reflect the new options available. Custom Scripts and/or Plugins We encourage Bro users to make their custom scripts and/or plugins available to the community by creating a package and submitting it to the Bro Package Source. See the README file of that GitHub repo for more instructions on how to create a package and submit it. Once your package is accepted, it becomes installable via the Bro Package Manager. Patches and New Functionality For working on the Bro codebase itself, work from our official GitHub mirrors or clone the master bro.org repositories directly fromgit://git.bro.org/. See our contribution guidelines for more information. Writing Documentation We are grateful for any corrections or contributions to documentation. Send documentation to info at bro.org or submit a ticket to our issue tracker. Provide community support Respond to user questions on the Mailing List, Twitter, IRC, and Gitter. Financial Support Become a Bro Future Fund sponsor, make an individual donation, or sponsor Bro events like BroCon. ------ Jeannette M. Dopheide Bro Outreach Coordinator National Center for Supercomputing Applications University of Illinois at Urbana-Champaign -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161020/794163b9/attachment.html From daniel.manzo at bayer.com Thu Oct 20 13:41:42 2016 From: daniel.manzo at bayer.com (Daniel Manzo) Date: Thu, 20 Oct 2016 20:41:42 +0000 Subject: [Bro] Automated shutdown Message-ID: <2C7473428EFB4348960ACC47FDC529451ACD0E72@MOXCXR.na.bayer.cnb> Hi all, I am looking to create a script that will shutdown all applications on my server prior to rebooting. Does Bro automatically stop when a reboot is initiated? 
If not, is there a bash script that can stop bro so I don't have to do it manually? Best regards, Dan Manzo -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161020/e034e1e6/attachment.html From brot212 at googlemail.com Thu Oct 20 14:07:48 2016 From: brot212 at googlemail.com (Dane Wullen) Date: Thu, 20 Oct 2016 23:07:48 +0200 Subject: [Bro] Confusing binPAC error... In-Reply-To: References: Message-ID: <58093224.7030203@googlemail.com> Hey, thanks for your answer! Now I'm able to operate with the records! Am 19.10.2016 um 17:09 schrieb Jeff Barber: > Dane, > > As you've listed it, msg is of type TEST_PDU, which is a record > containing another record (of type 't_header' named 'data'). You can't > ignore the inner record. Looks like you should be using > "${msg.data.b1}" in your printf. > > Also, you're not showing a "b3" anywhere so that should come up > undeclared as well. > > HTH > > > On Wed, Oct 19, 2016 at 5:45 AM, Dane Wullen > wrote: > > Hi there, > > I've tried to implement a little test analyzer to detect TCP payload > with 2 bytes in it, just to know how binpac works. > > Here's my protocol.pac: > > type t_header = record { > b1 : uint8; > b2 : uint8; > } > > type TEST_PDU(is_orig: bool) = record { > data : t_header; > } &byteorder = bigendian > > Here's my analyzer.pac > > refine flow TEST_Flow += { > function proc_test_message(msg: TEST_PDU): bool > %{ > printf("Read TEST_PDU\n"); > BifEvent::generate_test_event(connection()->bro_analyzer(), > connection()->bro_analyzer()->Conn()); > return true; > %} > }; > > refine typeattr TEST_PDU += &let { > proc: bool = $context.flow.proc_test_message(this); > }; > > Everything works fine, but when I want to print my byte-values ( > printf("Val 1: %d, Val 2: %d, Val 3: %d", ${msg.b1}, ${msg.b2}, > ${msg.b3}); ), > I get an error while making the file which says that " 'b1' > undeclared". > Even if I put an if-statement to check if those values are > undeclared ( > if( ${msg.b1} != NULL && ${msg.b2} != NULL && ${msg.b3} != NULL)), > I still get the same error. > Can someone help me? :D Or tell me how to proper use C++ code in > binPAC? > > Thanks! > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161020/d6685786/attachment.html From pyrodie18 at gmail.com Thu Oct 20 20:23:41 2016 From: pyrodie18 at gmail.com (Troy Ward) Date: Thu, 20 Oct 2016 23:23:41 -0400 Subject: [Bro] Fwd: Simultaneous Connections In-Reply-To: References: Message-ID: I am trying to identify connections with the same source host and destination host/port occuring at the same time. My plan is to examine each connection_established event. I've created a table the pairs up those 3 items and when the event fires it looks to see if the pair exists. If it does, I want to tag a bol value that I have added to the conn record to mark it as a duplicate. When the connection closes, it takes information about both connections and records them into a new log file. I have attached my code below. My problem is that I get a "field value missing [simultanious::c$conn] on line 75 (c$conn$duplicate = T). If I move the command to the connection_closed event it works fine but that is to late. Ideas? 
Thanks, Troy local.bro # Add a field to the connection log record. redef record Conn::Info += { ## Indicate if the originator of the connection is part of the ## "private" address space defined in RFC1918. duplicate: bool &default=F ; }; type tmp : record { # Timestamp of the event ts : time &log; #source Port orig_p : count &log; #UID uid : string &log; }; # Add a field to the connection log record. redef record Conn::Info += { ## Indicate if the originator of the connection is part of the ## "private" address space defined in RFC1918. tmp_duplicate: tmp &optional; }; @load simultanious simultanious.bro module simultanious; export { redef enum Log::ID += { LOG }; #Data structure for final record to record type Info : record { # Timestamp of the event ts : time &log; # Source IP Host address orig_h : addr &log; # Destination IP Host address resp_h : addr &log; #Destination Port resp_p : count &log; #Protocol proto : transport_proto &log; #First Connection Timestamp first_ts : time &log; #First UID first_uid : string &log; #First originating port first_orig_p : count &log &optional; #Second Connection Timestamp second_ts : time &log; #Second UID second_uid : string &log; #Second Pack orig_p : string &log; second_orig_p : count &log &optional; }; type tmp : record { # Timestamp of the event ts : time &log; #source Port orig_p : count &log; #UID uid : string &log; }; #Table of hosts that are currently being tracked #Order is source IP address with a sub table of destination IP and port global current_connections : table [addr, addr, port] of tmp; #And event that can be handled to access the :bro:type: SimultaniousConnections::Info ##record as it is sent on to the logging framework global log_duplicate_connections : event(rec: Info); #List of subnets to monitor global monitor_subnets : set[subnet] = { 192.168.1.0/24, 192.68.2.0/24, 172.16.0.0/20, 172.16.16.0/20, 172.16.32.0/20, 172.16.48.0/20 }; #List of ports to monitor global monitor_ports : set [port] = { 443/tcp, 80/tcp, 8080/tcp, 22/tcp}; } event bro_init() { # Create the logging stream Log::create_stream(LOG, [$columns=Info, $path="simultanious_conn"]); } event connection_established(c: connection) { #Check to see if there is already an entry for the connection string in the table if ([c$id$orig_h, c$id$resp_h, c$id$resp_p] in current_connections) { #There is a duplicate record #duplicate_host = T; c$conn$duplicate = T; c$conn$tmp_duplicate$ts = current_connections[c$id$orig_h, c$id$resp_h, c$id$resp_p]$ts; c$conn$tmp_duplicate$orig_p = current_connections[c$id$orig_h, c$id$resp_h, c$id$resp_p]$orig_p; c$conn$tmp_duplicate$uid = current_connections[c$id$orig_h, c$id$resp_h, c$id$resp_p]$uid; print fmt("dup - %s %s %s %s", c$uid, c$id$orig_h, c$id$resp_h, c$id$resp_p); } else { local temp_record : tmp = [$ts=c$start_time, $orig_p=port_to_count(c$id$orig_p), $uid=c$uid]; current_connections[c$id$orig_h, c$id$resp_h, c$id$resp_p]=temp_record; print fmt("no dup - %s %s %s %s", c$uid, c$id$orig_h, c$id$resp_h, c$id$resp_p); } } event connection_state_remove(c: connection) { if (c$conn$duplicate && c$duration > 1min) { print fmt("end of record dup %s %s %s %s %s", c$uid, c$id$orig_h, c$id$resp_h, c$id$resp_p, c$conn$tmp_duplicate); #Log::write (simultanious::LOG, temp_working_record); } else { print fmt("end of packet no dup - %s %s %s %s", c$uid, c$id$orig_h, c$id$resp_h, c$id$resp_p); } } -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161020/bccd51b5/attachment-0001.html From philosnef at gmail.com Fri Oct 21 06:03:20 2016 From: philosnef at gmail.com (erik clark) Date: Fri, 21 Oct 2016 09:03:20 -0400 Subject: [Bro] file identification modification In-Reply-To: References: Message-ID: Hmm. So I modified the msoffice.sig with this /\x21\x42\x44\x4E/ but the sig doesnt fire. However when I do /!BDN/ it does. What gives? :) Also, whats the number after the mimetype association mean? My mimetype is application/outlook, 5 Thanks! On Thu, Oct 20, 2016 at 10:13 AM, Seth Hall wrote: > > > On Oct 19, 2016, at 7:22 AM, erik clark wrote: > > > > Actually, I do not see file-ident.sig anywhere in the source tree, or my > deployment tree. Where is this kept? Thanks! > > This was broken out a couple of releases ago. There are a bunch of file > signature files in base/frameworks/files/magic/ > > .Seth > > -- > Seth Hall > International Computer Science Institute > (Bro) because everyone has a network > http://www.bro.org/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161021/9e4ad9b0/attachment.html From philosnef at gmail.com Fri Oct 21 06:04:13 2016 From: philosnef at gmail.com (erik clark) Date: Fri, 21 Oct 2016 09:04:13 -0400 Subject: [Bro] file identification modification In-Reply-To: References: Message-ID: Sorry, thats /^... and /^!... On Fri, Oct 21, 2016 at 9:03 AM, erik clark wrote: > Hmm. So I modified the msoffice.sig with this > > /\x21\x42\x44\x4E/ > > but the sig doesnt fire. However when I do > > /!BDN/ > > it does. What gives? :) Also, whats the number after the mimetype > association mean? My mimetype is > > application/outlook, 5 > > Thanks! > > On Thu, Oct 20, 2016 at 10:13 AM, Seth Hall wrote: > >> >> > On Oct 19, 2016, at 7:22 AM, erik clark wrote: >> > >> > Actually, I do not see file-ident.sig anywhere in the source tree, or >> my deployment tree. Where is this kept? Thanks! >> >> This was broken out a couple of releases ago. There are a bunch of file >> signature files in base/frameworks/files/magic/ >> >> .Seth >> >> -- >> Seth Hall >> International Computer Science Institute >> (Bro) because everyone has a network >> http://www.bro.org/ >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161021/28712899/attachment.html From seth at icir.org Fri Oct 21 06:42:36 2016 From: seth at icir.org (Seth Hall) Date: Fri, 21 Oct 2016 09:42:36 -0400 Subject: [Bro] file identification modification In-Reply-To: References: Message-ID: <729F7FC7-1CE5-47C3-976C-336F53BF8676@icir.org> > On Oct 21, 2016, at 9:04 AM, erik clark wrote: > > /\x21\x42\x44\x4E/ > > but the sig doesnt fire. However when I do > > /!BDN/ > > it does. What gives? :) I'm not sure offhand why that wouldn't work. > Also, whats the number after the mimetype association mean? My mimetype is > > application/outlook, 5 That's a priority. Since multiple matches can happen, we've tried to make the signatures that should be more specific and reliable be higher priority. The current numbers are a bit haphazard though. 
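For reference, the general shape of an entry in those magic .sig files is below. The signature name is only a placeholder, the pattern is the literal form that fired for Erik, and the priority of 70 is arbitrary (pick something relative to the stock signatures):

signature file-bdn-placeholder {
    file-magic /^!BDN/
    file-mime "application/outlook", 70
}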
.Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From jlay at slave-tothe-box.net Fri Oct 21 16:38:01 2016 From: jlay at slave-tothe-box.net (James Lay) Date: Fri, 21 Oct 2016 17:38:01 -0600 Subject: [Bro] Several protosig questions In-Reply-To: <20161018182858.GB27597@icir.org> References: <1476550089.2331.12.camel@slave-tothe-box.net> <1476550339.2331.13.camel@slave-tothe-box.net> <20161017203147.GF60965@icir.org> <20161018182858.GB27597@icir.org> Message-ID: <1477093081.2523.5.camel@slave-tothe-box.net> So ok here's where I'm at. ?With only the ntp_apple rule this works. ?However if I have a generic ntp rule, either before or after the ntp_apple, I only get the ntp match: signature protosig_ntp { ? ip-proto == udp ? dst-port == 123 ? payload /.*\x00/ ? payload-size == 48 ? eval ProtoSig::match } signature protosig_ntp_apple { ? #header ip[16:4] == 17.0.0.0/8 ? dst-ip == 17.0.0.0/8 ? ip-proto == udp ? dst-port == 123 ? payload /.*\x00/ ? payload-size == 48 ? eval ProtoSig::match } #signature protosig_ntp { #??ip-proto == udp #??dst-port == 123 #??payload /.*\x00/ #??payload-size == 48 #??eval ProtoSig::match #} Currently with 2.4.1 protosig you put the generic one first, and the specific ones after. ?This appears to have changed? ?Anyway at least it does now match, so that's a plus. On Tue, 2016-10-18 at 11:28 -0700, Robin Sommer wrote: > On Mon, Oct 17, 2016 at 15:08 -0600, you wrote: > > > > > Included! > Thanks, I'll take a look, give me a little bit. > > Robin > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161021/af8f28ce/attachment.html From fatema.bannatwala at gmail.com Sun Oct 23 14:00:03 2016 From: fatema.bannatwala at gmail.com (fatema bannatwala) Date: Sun, 23 Oct 2016 17:00:03 -0400 Subject: [Bro] Bro crashed this morning.. Message-ID: Hi all, So, it happened again, this morning around 6:55am. Bro stopped at that time, don't really know why. I got to know about this when I wanted to analyse traffic for a particular IP around 11 and found out that we don't have any logs after 7am logged by BRO :( I quickly checked the status of bro on manager, and found that bro isn't running. I restarted bro from manager and all but one worker came up online, and bro started normally, running with remaining nodes in the cluster. This have happened before, when one of the workers will become unreachable and bro stops. I don't really know what happens first,i.e whether worker becomes offline first and then bro stops, or vise versa. 
I tried looking for some errors on the workers as well as on manager in : /usr/local/bro/logs/brolog/spool/tmp/post-terminate-2016-10-23-15-40-10-2410-crash dir but nothing useful, only some warnings in stderr.log like following: warning in /usr/local/bro/2.4.1/share/bro/site/connStats.bro, line 39: dangerous assignment of double to integral (ConnStats::out$EstinboundConns = ConnStats::result[EstinboundConns]$sum) warning in /usr/local/bro/2.4.1/share/bro/site/connStats.bro, line 40: dangerous assignment of double to integral (ConnStats::out$EstoutboundConns = ConnStats::result[EstoutboundConns]$sum) listening on em1, capture length 8192 bytes 1477133753.104159 processing suspended 1477133753.104159 processing continued 1477133759.776854 Failed to open GeoIP Cityv6 database: /usr/share/GeoIP/GeoIPCityv6.dat 1477133759.776854 Failed to open GeoIPv6 Country database: /usr/share/GeoIP/GeoIPv6.dat Is there anywhere else I can look also to diagnose the issue? Is there any reason, bro will stop entirely if one of the workers become offline for some reason? Or the issue is completely else, and I am looking in completely wrong direction. Any help appreciated :) Thanks, Fatema. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161023/b9931abc/attachment.html From hms.uet at gmail.com Sun Oct 23 21:36:08 2016 From: hms.uet at gmail.com (Hafiz Shafiq) Date: Mon, 24 Oct 2016 09:36:08 +0500 Subject: [Bro] How to detect transparent proxy by BRO IDS (2.4.1) Message-ID: Sir, Our network administrator is using proxy in transparent mode (SQUID). In this mode , there is no need for user to configure proxy option on his computer. I have captured few hours traffic via tcpdump and when I run bro, to know about http trafffic and defferent apps used (like google, youtube etc.). I am amazed to know that there is even not http.log and app_stats.log files generated. Is it some problem in bro configuration. I have searched from its manual, infomation given about proxy could not solve my problem. I have checked load_scripts.log. I shows that http analyzer is loaded. Can you please guide me about this issue ? Regards Hafiz Muhammad Shafiq -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161024/0d93a37d/attachment.html From jedwards2728 at gmail.com Sun Oct 23 23:39:07 2016 From: jedwards2728 at gmail.com (John Edwards) Date: Mon, 24 Oct 2016 16:39:07 +1000 Subject: [Bro] Packet loss Message-ID: Hi all I have just deployed bro onto two systems on my border gateway. They sit off a tap and each system has individual Rx and Tx interfaces bridged using brctl. I am not seeing any interface dropped packets or errors from the Ubuntu host via ifconfig. When looking at my data within bro that monitors a standalone configuration of br0 has the below line repeated a few times throughout the notice.log 1477283201.681213 - - - - - - - - - PacketFilter::Dropped_Packets 2739608 packets dropped after filtering, 12351460 received, 12351686 on link - - - - - bro Notice::ACTION_LOG3600.000000 F - - - - - We seem to be getting lots of data and as far as CPU and memory resource consumption goes it's not under strenuous load. I haven't changed too much of the configuration of the 2.4.1 build. Sorry if this has been discussed or asked before but what can I look at optimising or tuning to reduce the packet loss? 
One thread I found wasn't bros issue but the tap and an upgrade of the software fixed it. I cannot do this as it's without software to tune. It's a vss active 1gb tap, doesn't seem to be the tap at this stage but it quite possibly could be :) Thanks John -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161024/0f9ebc4d/attachment.html From philosnef at gmail.com Mon Oct 24 10:07:42 2016 From: philosnef at gmail.com (erik clark) Date: Mon, 24 Oct 2016 13:07:42 -0400 Subject: [Bro] Bro crashed this morning.. Message-ID: Did you check to see if it is being killed off by OOMkiller??? We see that bro gets killed off by OOM killer periodically (I think we might have similar traffic profiles), but that should just kill off a worker or two, not the entire process. Check for that in /var/log/messages. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161024/202ae480/attachment.html From fatema.bannatwala at gmail.com Mon Oct 24 11:14:06 2016 From: fatema.bannatwala at gmail.com (fatema bannatwala) Date: Mon, 24 Oct 2016 14:14:06 -0400 Subject: [Bro] Bro crashed this morning.. In-Reply-To: References: Message-ID: Bro once invoked oom killer but that was on 11th Oct, no records recently :/ Oct 11 18:33:21 bro-Sensor kernel: bro invoked oom-killer: gfp_mask=0x200da, order=0, oom_score_adj=0 Oct 11 18:33:21 bro-Sensor kernel: [] oom_kill_process+0x24e/0x3b0 Thanks for the comments! On Mon, Oct 24, 2016 at 1:07 PM, erik clark wrote: > Did you check to see if it is being killed off by OOMkiller??? We see that > bro gets killed off by OOM killer periodically (I think we might have > similar traffic profiles), but that should just kill off a worker or two, > not the entire process. Check for that in /var/log/messages. > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161024/73410aaf/attachment.html From jazoff at illinois.edu Mon Oct 24 11:24:33 2016 From: jazoff at illinois.edu (Azoff, Justin S) Date: Mon, 24 Oct 2016 18:24:33 +0000 Subject: [Bro] Bro crashed this morning.. In-Reply-To: References: Message-ID: > On Oct 23, 2016, at 5:00 PM, fatema bannatwala wrote: > > Hi all, > > So, it happened again, this morning around 6:55am. > Bro stopped at that time, don't really know why. > I got to know about this when I wanted to analyse traffic for a particular IP around 11 and found out that we don't have any logs after 7am logged by BRO :( Do you have the 'broctl cron' job installed? # /etc/cron.d/bro # bro cron tasks @reboot root timeout 10m /bro/bin/broctl start */5 * * * * root timeout 10m /bro/bin/broctl cron -- - Justin Azoff From fatema.bannatwala at gmail.com Mon Oct 24 11:48:56 2016 From: fatema.bannatwala at gmail.com (fatema bannatwala) Date: Mon, 24 Oct 2016 14:48:56 -0400 Subject: [Bro] Bro crashed this morning.. In-Reply-To: References: Message-ID: I have two crons currently in bro's crontab: $ crontab -l 0-59/5 * * * * /usr/local/bro/default/bin/broctl cron 55 6 * * * /usr/local/bro/bin/restart-bro restart-bro is a small script that looks like this: /usr/local/bro/default/bin/broctl install /usr/local/bro/default/bin/broctl restart The reason, I think, for having bro restart every morning at 6:55 is we pull down the intel feeds every morning at 6:45 that updates the files that bro monitors as input feeds for intel framework. 
And I thought that Bro would not pick up new/updated input feeds unless restarted. Is that would be something causing bro to not restart? On Mon, Oct 24, 2016 at 2:24 PM, Azoff, Justin S wrote: > > > On Oct 23, 2016, at 5:00 PM, fatema bannatwala < > fatema.bannatwala at gmail.com> wrote: > > > > Hi all, > > > > So, it happened again, this morning around 6:55am. > > Bro stopped at that time, don't really know why. > > I got to know about this when I wanted to analyse traffic for a > particular IP around 11 and found out that we don't have any logs after 7am > logged by BRO :( > > Do you have the 'broctl cron' job installed? > > # /etc/cron.d/bro > # bro cron tasks > @reboot root timeout 10m /bro/bin/broctl start > */5 * * * * root timeout 10m /bro/bin/broctl cron > > -- > - Justin Azoff > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161024/57450eb6/attachment.html From robin at icir.org Mon Oct 24 11:51:46 2016 From: robin at icir.org (Robin Sommer) Date: Mon, 24 Oct 2016 11:51:46 -0700 Subject: [Bro] Several protosig questions In-Reply-To: <1477093081.2523.5.camel@slave-tothe-box.net> References: <1476550089.2331.12.camel@slave-tothe-box.net> <1476550339.2331.13.camel@slave-tothe-box.net> <20161017203147.GF60965@icir.org> <20161018182858.GB27597@icir.org> <1477093081.2523.5.camel@slave-tothe-box.net> Message-ID: <20161024185146.GE28345@icir.org> On Fri, Oct 21, 2016 at 17:38 -0600, you wrote: > However if I have a generic ntp rule, either before or after the > ntp_apple, I only get the ntp match: Let me clarify one thing: > ? eval ProtoSig::match "eval" is not for flagging a match. It's a condition by itself that influences the matching of the signature. To learn about a match use "event" instead and then hook into the "signature_event" event. If I do that, things seem to work for me correctly with the sig-fixes branch: # cat test.sig signature protosig_ntp { ip-proto == udp dst-port == 123 payload /.*\x00/ payload-size == 48 event "match" } signature protosig_ntp_apple { dst-ip == 17.0.0.0/8 ip-proto == udp dst-port == 123 payload /.*\x00/ payload-size == 48 event "match" } # cat test.bro event signature_match(state: signature_state, msg: string, data: string) { print "signature match", state$sig_id; } # bro -s ./test.sig -r ntp-1.pcap ./test.bro signature match, protosig_ntp signature match, protosig_ntp_apple signature match, protosig_ntp signature match, protosig_ntp_apple signature match, protosig_ntp signature match, protosig_ntp_apple signature match, protosig_ntp signature match, protosig_ntp_apple signature match, protosig_ntp signature match, protosig_ntp_apple signature match, protosig_ntp signature match, protosig_ntp_apple If I add "eval", I do see it execute for both signatures, though more often for the generic one. That's probably an artefact of the order in which conditions are run internally; having the dst-ip in there may change that. Btw, the order of matches is undefined, and might have well changed since 2.4. Robin -- Robin Sommer * ICSI/LBNL * robin at icir.org * www.icir.org/robin From robin at icir.org Mon Oct 24 11:57:01 2016 From: robin at icir.org (Robin Sommer) Date: Mon, 24 Oct 2016 11:57:01 -0700 Subject: [Bro] When...timeout statement not executing In-Reply-To: References: Message-ID: <20161024185701.GF28345@icir.org> On Mon, Oct 17, 2016 at 14:55 -0400, Alex Hope wrote: > I'm having an issue where the when...timeout block isn't executing. 
I'll > post my code then explain the problem I'm experiencing. Is there any chance you could find a way to reproduce this problem on a small trace? Since you say it happens only "sometimes" I'm guessing that it may be hard to track down otherwise. Robin -- Robin Sommer * ICSI/LBNL * robin at icir.org * www.icir.org/robin From jlay at slave-tothe-box.net Mon Oct 24 11:58:08 2016 From: jlay at slave-tothe-box.net (James Lay) Date: Mon, 24 Oct 2016 12:58:08 -0600 Subject: [Bro] Several protosig questions In-Reply-To: <20161024185146.GE28345@icir.org> References: <1476550089.2331.12.camel@slave-tothe-box.net> <1476550339.2331.13.camel@slave-tothe-box.net> <20161017203147.GF60965@icir.org> <20161018182858.GB27597@icir.org> <1477093081.2523.5.camel@slave-tothe-box.net> <20161024185146.GE28345@icir.org> Message-ID: <4e1013457c0cf046f0f4972fb32a5ed3@localhost> On 2016-10-24 12:51, Robin Sommer wrote: > On Fri, Oct 21, 2016 at 17:38 -0600, you wrote: > >> However if I have a generic ntp rule, either before or after the >> ntp_apple, I only get the ntp match: > > Let me clarify one thing: > >> ? eval ProtoSig::match > > "eval" is not for flagging a match. It's a condition by itself that > influences the matching of the signature. To learn about a match use > "event" instead and then hook into the "signature_event" event. If I > do that, things seem to work for me correctly with the sig-fixes > branch: Ok cool...I've just been going per the docs here: https://github.com/broala/bro-protosigs "You must add the eval ProtoSig::match condition into your signature that does the final match. That call is what ties the signature match into the protosigs Bro scripts." I'll give event a go and report my findings. Thanks so much Robin! James > > # cat test.sig > signature protosig_ntp { > ip-proto == udp > dst-port == 123 > payload /.*\x00/ > payload-size == 48 > event "match" > > } > signature protosig_ntp_apple { > dst-ip == 17.0.0.0/8 > ip-proto == udp > dst-port == 123 > payload /.*\x00/ > payload-size == 48 > event "match" > } > > # cat test.bro > event signature_match(state: signature_state, msg: string, data: > string) > { > print "signature match", state$sig_id; > } > > # bro -s ./test.sig -r ntp-1.pcap ./test.bro > signature match, protosig_ntp > signature match, protosig_ntp_apple > signature match, protosig_ntp > signature match, protosig_ntp_apple > signature match, protosig_ntp > signature match, protosig_ntp_apple > signature match, protosig_ntp > signature match, protosig_ntp_apple > signature match, protosig_ntp > signature match, protosig_ntp_apple > signature match, protosig_ntp > signature match, protosig_ntp_apple > > > If I add "eval", I do see it execute for both signatures, though more > often for the generic one. That's probably an artefact of the order in > which conditions are run internally; having the dst-ip in there may > change that. > > Btw, the order of matches is undefined, and might have well changed > since 2.4. > > Robin > > -- > Robin Sommer * ICSI/LBNL * robin at icir.org * www.icir.org/robin From jazoff at illinois.edu Mon Oct 24 11:58:22 2016 From: jazoff at illinois.edu (Azoff, Justin S) Date: Mon, 24 Oct 2016 18:58:22 +0000 Subject: [Bro] Bro crashed this morning.. 
In-Reply-To: References: Message-ID: > On Oct 24, 2016, at 2:48 PM, fatema bannatwala wrote: > > I have two crons currently in bro's crontab: > $ crontab -l > 0-59/5 * * * * /usr/local/bro/default/bin/broctl cron > 55 6 * * * /usr/local/bro/bin/restart-bro > > restart-bro is a small script that looks like this: > > /usr/local/bro/default/bin/broctl install > /usr/local/bro/default/bin/broctl restart > > The reason, I think, for having bro restart every morning at 6:55 is we pull down the intel feeds every morning at 6:45 > that updates the files that bro monitors as input feeds for intel framework. > And I thought that Bro would not pick up new/updated input feeds unless restarted. > > Is that would be something causing bro to not restart? > You shouldn't have to restart bro for it to pull in updates from intel files. It's suspicious that you say bro crashed at 7am and that cron job runs at 6:55. It's possible that something went wrong during the restart and bro just ended up stopped. I could see 'broctl restart' leaving the cluster in an inconsistent state if it gets interrupted. I'd just remove that job (since intel files should auto update on their own) or try changing the time it runs at to 6:57, which should at least avoid it running at the same time as cron. -- - Justin Azoff From fatema.bannatwala at gmail.com Mon Oct 24 12:26:43 2016 From: fatema.bannatwala at gmail.com (fatema bannatwala) Date: Mon, 24 Oct 2016 15:26:43 -0400 Subject: [Bro] Bro crashed this morning.. In-Reply-To: References: Message-ID: Hmm, that kinda makes sense. Disabled the cron job of restart-bro, and will keep a check on bro on manager for future. Thanks Justin :) On Mon, Oct 24, 2016 at 2:58 PM, Azoff, Justin S wrote: > > > On Oct 24, 2016, at 2:48 PM, fatema bannatwala < > fatema.bannatwala at gmail.com> wrote: > > > > I have two crons currently in bro's crontab: > > $ crontab -l > > 0-59/5 * * * * /usr/local/bro/default/bin/broctl cron > > 55 6 * * * /usr/local/bro/bin/restart-bro > > > > restart-bro is a small script that looks like this: > > > > /usr/local/bro/default/bin/broctl install > > /usr/local/bro/default/bin/broctl restart > > > > The reason, I think, for having bro restart every morning at 6:55 is we > pull down the intel feeds every morning at 6:45 > > that updates the files that bro monitors as input feeds for intel > framework. > > And I thought that Bro would not pick up new/updated input feeds unless > restarted. > > > > Is that would be something causing bro to not restart? > > > > You shouldn't have to restart bro for it to pull in updates from intel > files. > > It's suspicious that you say bro crashed at 7am and that cron job runs at > 6:55. > > It's possible that something went wrong during the restart and bro just > ended up stopped. I could see 'broctl restart' leaving the cluster in an > inconsistent state if it gets interrupted. > > I'd just remove that job (since intel files should auto update on their > own) or try changing the time it runs at to 6:57, which should at least > avoid it running at the same time as cron. > > > > -- > - Justin Azoff > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161024/084117b4/attachment.html From jlay at slave-tothe-box.net Mon Oct 24 12:53:56 2016 From: jlay at slave-tothe-box.net (James Lay) Date: Mon, 24 Oct 2016 13:53:56 -0600 Subject: [Bro] Several protosig questions In-Reply-To: <20161024185146.GE28345@icir.org> References: <1476550089.2331.12.camel@slave-tothe-box.net> <1476550339.2331.13.camel@slave-tothe-box.net> <20161017203147.GF60965@icir.org> <20161018182858.GB27597@icir.org> <1477093081.2523.5.camel@slave-tothe-box.net> <20161024185146.GE28345@icir.org> Message-ID: On 2016-10-24 12:51, Robin Sommer wrote: > On Fri, Oct 21, 2016 at 17:38 -0600, you wrote: > >> However if I have a generic ntp rule, either before or after the >> ntp_apple, I only get the ntp match: > > Let me clarify one thing: > >> ? eval ProtoSig::match > > "eval" is not for flagging a match. It's a condition by itself that > influences the matching of the signature. To learn about a match use > "event" instead and then hook into the "signature_event" event. If I > do that, things seem to work for me correctly with the sig-fixes > branch: Ok first off thanks for that test setup...now I can just test a sig vs. a pcap, so that's tight. My results: [13:36:27 @tester:~/dev/bro$] bro -s ./test.sig -r pcaps/ntp-1.pcap ./test.bro signature match, protosig_ntp signature match, protosig_ntp_apple signature match, protosig_ntp signature match, protosig_ntp_apple signature match, protosig_ntp signature match, protosig_ntp_apple signature match, protosig_ntp signature match, protosig_ntp_apple signature match, protosig_ntp signature match, protosig_ntp_apple signature match, protosig_ntp signature match, protosig_ntp_apple So it does indeed match...however in the official conn.log, this is what I get: [13:36:32 @tester:~/dev/bro$] ./testhome pcaps/ntp-1.pcap [13:36:37 @tester:~/dev/bro$] cat conn.log #separator \x09 #set_separator , #empty_field (empty) #unset_field - #path conn #open 2016-10-24-13-36-37 #fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto service duration orig_bytes resp_bytes conn_state local_orig local_resp missed_bytes history orig_pkts orig_ip_bytes resp_pkts resp_ip_bytes tunnel_parents protosig #types time string addr port addr port enum string interval count count string bool bool count string count count count count set[string] string 1476535656.489094 ClixtHWwLmpBYkZRh 192.168.1.95 123 17.253.4.253 123 udp - 0.040715 48 48 SF T F 0 Dd 1 76 1 76 (empty) ntp 1476535656.533910 CJFnxQiLgFYwVcEFi 192.168.1.95 123 17.253.4.125 123 udp - 0.040804 48 48 SF T F 0 Dd 1 76 1 76 (empty) ntp 1476535657.111868 Cds9uP3GtXbb1jHh3d 192.168.1.95 123 17.253.26.253 123 udp - 0.037826 48 48 SF T F 0 Dd 1 76 1 76 (empty) ntp 1476535738.400766 CTkIhX1qHjadF6iple 192.168.1.100 123 17.253.4.253 123 udp - 0.040577 48 48 SF T F 0 Dd 1 76 1 76 (empty) ntp 1476535738.360132 Chm9Q6WalLZnpFx4g 192.168.1.100 123 17.253.26.253 123 udp - 0.037825 48 48 SF T F 0 Dd 1 76 1 76 (empty) ntp 1476535739.752622 CRWW8j41rCTK6gYZSk 192.168.1.100 123 17.253.4.125 123 udp - 0.040857 48 48 SF T F 0 Dd 1 76 1 76 (empty) ntp #close 2016-10-24-13-36-37 Swapping which sig is first gets me this: [13:46:24 @tester:~/dev/bro$] bro -s ./test.sig -r pcaps/ntp-1.pcap ./test.bro signature match, protosig_ntp_apple signature match, protosig_ntp signature match, protosig_ntp_apple signature match, protosig_ntp signature match, protosig_ntp_apple signature match, protosig_ntp signature match, protosig_ntp_apple signature match, 
protosig_ntp signature match, protosig_ntp_apple signature match, protosig_ntp signature match, protosig_ntp_apple signature match, protosig_ntp But the same results as above in conn.log. So I guess that's a feature request? To hard define either a first rule that matches gets logged, or the last rule that matches gets logged. This will allow granular flow identification..which, to be honest, is the whole reason I'm doing this in the first place :) Thanks again Robin. > > # cat test.sig > signature protosig_ntp { > ip-proto == udp > dst-port == 123 > payload /.*\x00/ > payload-size == 48 > event "match" > > } > signature protosig_ntp_apple { > dst-ip == 17.0.0.0/8 > ip-proto == udp > dst-port == 123 > payload /.*\x00/ > payload-size == 48 > event "match" > } > > # cat test.bro > event signature_match(state: signature_state, msg: string, data: > string) > { > print "signature match", state$sig_id; > } > > # bro -s ./test.sig -r ntp-1.pcap ./test.bro > signature match, protosig_ntp > signature match, protosig_ntp_apple > signature match, protosig_ntp > signature match, protosig_ntp_apple > signature match, protosig_ntp > signature match, protosig_ntp_apple > signature match, protosig_ntp > signature match, protosig_ntp_apple > signature match, protosig_ntp > signature match, protosig_ntp_apple > signature match, protosig_ntp > signature match, protosig_ntp_apple > > > If I add "eval", I do see it execute for both signatures, though more > often for the generic one. That's probably an artefact of the order in > which conditions are run internally; having the dst-ip in there may > change that. > > Btw, the order of matches is undefined, and might have well changed > since 2.4. > > Robin > > -- > Robin Sommer * ICSI/LBNL * robin at icir.org * www.icir.org/robin From dnj0496 at gmail.com Mon Oct 24 17:42:55 2016 From: dnj0496 at gmail.com (Dk Jack) Date: Mon, 24 Oct 2016 17:42:55 -0700 Subject: [Bro] bro size Message-ID: Hi, When I compile bro, the bro binary comes out to about 120Mb. Are there any options I can use to reduce by eliminating some of the features I don't need? the configure script doesn't seem to have many options. Thanks. Dk. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161024/f8bac826/attachment.html From jan.grashoefer at gmail.com Tue Oct 25 02:32:07 2016 From: jan.grashoefer at gmail.com (=?UTF-8?Q?Jan_Grash=c3=b6fer?=) Date: Tue, 25 Oct 2016 11:32:07 +0200 Subject: [Bro] Bro crashed this morning.. In-Reply-To: References: Message-ID: > Hmm, that kinda makes sense. > Disabled the cron job of restart-bro, and will keep a check on bro on > manager for future. While Bro should pick up new intel without a restart, 2.4.1 will never delete any intel that has been ingested. If you are using large volatile feeds that might become a problem. With 2.5 the intel framework allows to expire intel. I would be curious to know if you are experiencing any corresponding problems with 2.4.1. Jan From hosom at battelle.org Tue Oct 25 10:28:15 2016 From: hosom at battelle.org (Hosom, Stephen M) Date: Tue, 25 Oct 2016 17:28:15 +0000 Subject: [Bro] bro size In-Reply-To: References: Message-ID: I wouldn?t recommend changing many of the options in configure unless you truly know what you?re doing. Why is it that you need Bro to be smaller? 
From: bro-bounces at bro.org [mailto:bro-bounces at bro.org] On Behalf Of Dk Jack Sent: Monday, October 24, 2016 8:43 PM To: bro at bro.org Subject: [Bro] bro size Hi, When I compile bro, the bro binary comes out to about 120Mb. Are there any options I can use to reduce by eliminating some of the features I don't need? the configure script doesn't seem to have many options. Thanks. Dk. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161025/1d561f69/attachment-0001.html From fmonsen at ucsc.edu Tue Oct 25 10:48:38 2016 From: fmonsen at ucsc.edu (Forest Monsen) Date: Tue, 25 Oct 2016 10:48:38 -0700 Subject: [Bro] Bro crashed this morning.. In-Reply-To: References: Message-ID: <23792112-427d-2f83-3aa3-58a88b2301b7@ucsc.edu> On 10/24/2016 11:58 AM, Azoff, Justin S wrote: > It's possible that something went wrong during the restart and bro just ended up stopped. I could see 'broctl restart' leaving the cluster in an inconsistent state if it gets interrupted. Fatema, additionally, if one of your intel files is formatted incorrectly, Bro won't be happy. Best, Forest -- Forest Monsen Senior Information Security Analyst University of California, Santa Cruz https://keybase.io/forestmonsen fmonsen at ucsc.edu +1.831.502.7109 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: OpenPGP digital signature Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161025/48189b33/attachment.bin From johanna at icir.org Tue Oct 25 10:56:32 2016 From: johanna at icir.org (Johanna Amann) Date: Tue, 25 Oct 2016 10:56:32 -0700 Subject: [Bro] bro size In-Reply-To: References: Message-ID: <54EE16CB-21FC-44CC-A9E9-F60912313520@icir.org> If for some reason it really is a concern, you also can just call strip on the binary. This brings the binary size down to ~6MB for me. Johanna On 25 Oct 2016, at 10:28, Hosom, Stephen M wrote: > I wouldn?t recommend changing many of the options in configure > unless you truly know what you?re doing. Why is it that you need Bro > to be smaller? > > From: bro-bounces at bro.org [mailto:bro-bounces at bro.org] On Behalf Of Dk > Jack > Sent: Monday, October 24, 2016 8:43 PM > To: bro at bro.org > Subject: [Bro] bro size > > Hi, > When I compile bro, the bro binary comes out to about 120Mb. Are there > any options I can use to reduce by eliminating some of the features I > don't need? the configure script doesn't seem to have many options. > Thanks. > > Dk. > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From audrjon at gmail.com Tue Oct 25 11:24:24 2016 From: audrjon at gmail.com (Audrius J) Date: Tue, 25 Oct 2016 21:24:24 +0300 Subject: [Bro] Bro Digest, Vol 126, Issue 43 In-Reply-To: References: Message-ID: Hi, I think that you should restart bro and here is why... Your data are appended but the old ioc's that are not relevant anymore are not removed. One day you may have a lot ioc's to check against the traffic so you may experience problems like packets drops and etc... We had this issue so this is why we decide to restart bro once day just to avoid that problems... Sure just pick up time that works for you best! 
Regards, Audrius Sent from my iPhone > On 24 Oct 2016, at 22:00, bro-request at bro.org wrote: > > Send Bro mailing list submissions to > bro at bro.org > > To subscribe or unsubscribe via the World Wide Web, visit > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > or, via email, send a message with subject or body 'help' to > bro-request at bro.org > > You can reach the person managing the list at > bro-owner at bro.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of Bro digest..." > > > Today's Topics: > > 1. Re: Bro crashed this morning.. (fatema bannatwala) > 2. Re: Several protosig questions (Robin Sommer) > 3. Re: When...timeout statement not executing (Robin Sommer) > 4. Re: Several protosig questions (James Lay) > 5. Re: Bro crashed this morning.. (Azoff, Justin S) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Mon, 24 Oct 2016 14:48:56 -0400 > From: fatema bannatwala > Subject: Re: [Bro] Bro crashed this morning.. > To: "Azoff, Justin S" > Cc: "bro at bro.org" > Message-ID: > > Content-Type: text/plain; charset="utf-8" > > I have two crons currently in bro's crontab: > $ crontab -l > 0-59/5 * * * * /usr/local/bro/default/bin/broctl cron > 55 6 * * * /usr/local/bro/bin/restart-bro > > restart-bro is a small script that looks like this: > > /usr/local/bro/default/bin/broctl install > /usr/local/bro/default/bin/broctl restart > > The reason, I think, for having bro restart every morning at 6:55 is we > pull down the intel feeds every morning at 6:45 > that updates the files that bro monitors as input feeds for intel framework. > And I thought that Bro would not pick up new/updated input feeds unless > restarted. > > Is that would be something causing bro to not restart? > > > On Mon, Oct 24, 2016 at 2:24 PM, Azoff, Justin S > wrote: > >> >>> On Oct 23, 2016, at 5:00 PM, fatema bannatwala < >> fatema.bannatwala at gmail.com> wrote: >>> >>> Hi all, >>> >>> So, it happened again, this morning around 6:55am. >>> Bro stopped at that time, don't really know why. >>> I got to know about this when I wanted to analyse traffic for a >> particular IP around 11 and found out that we don't have any logs after 7am >> logged by BRO :( >> >> Do you have the 'broctl cron' job installed? >> >> # /etc/cron.d/bro >> # bro cron tasks >> @reboot root timeout 10m /bro/bin/broctl start >> */5 * * * * root timeout 10m /bro/bin/broctl cron >> >> -- >> - Justin Azoff >> >> >> > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161024/57450eb6/attachment-0001.html > > ------------------------------ > > Message: 2 > Date: Mon, 24 Oct 2016 11:51:46 -0700 > From: Robin Sommer > Subject: Re: [Bro] Several protosig questions > To: James Lay > Cc: Bro-IDS > Message-ID: <20161024185146.GE28345 at icir.org> > Content-Type: text/plain; charset=iso-8859-1 > > > >> On Fri, Oct 21, 2016 at 17:38 -0600, you wrote: >> >> However if I have a generic ntp rule, either before or after the >> ntp_apple, I only get the ntp match: > > Let me clarify one thing: > >> ? eval ProtoSig::match > > "eval" is not for flagging a match. It's a condition by itself that > influences the matching of the signature. To learn about a match use > "event" instead and then hook into the "signature_event" event. 
If I > do that, things seem to work for me correctly with the sig-fixes > branch: > > # cat test.sig > signature protosig_ntp { > ip-proto == udp > dst-port == 123 > payload /.*\x00/ > payload-size == 48 > event "match" > > } > signature protosig_ntp_apple { > dst-ip == 17.0.0.0/8 > ip-proto == udp > dst-port == 123 > payload /.*\x00/ > payload-size == 48 > event "match" > } > > # cat test.bro > event signature_match(state: signature_state, msg: string, data: string) > { > print "signature match", state$sig_id; > } > > # bro -s ./test.sig -r ntp-1.pcap ./test.bro > signature match, protosig_ntp > signature match, protosig_ntp_apple > signature match, protosig_ntp > signature match, protosig_ntp_apple > signature match, protosig_ntp > signature match, protosig_ntp_apple > signature match, protosig_ntp > signature match, protosig_ntp_apple > signature match, protosig_ntp > signature match, protosig_ntp_apple > signature match, protosig_ntp > signature match, protosig_ntp_apple > > > If I add "eval", I do see it execute for both signatures, though more > often for the generic one. That's probably an artefact of the order in > which conditions are run internally; having the dst-ip in there may > change that. > > Btw, the order of matches is undefined, and might have well changed > since 2.4. > > Robin > > -- > Robin Sommer * ICSI/LBNL * robin at icir.org * www.icir.org/robin > > > ------------------------------ > > Message: 3 > Date: Mon, 24 Oct 2016 11:57:01 -0700 > From: Robin Sommer > Subject: Re: [Bro] When...timeout statement not executing > To: Alex Hope > Cc: bro at bro.org > Message-ID: <20161024185701.GF28345 at icir.org> > Content-Type: text/plain; charset=us-ascii > > > >> On Mon, Oct 17, 2016 at 14:55 -0400, Alex Hope wrote: >> >> I'm having an issue where the when...timeout block isn't executing. I'll >> post my code then explain the problem I'm experiencing. > > Is there any chance you could find a way to reproduce this problem on > a small trace? Since you say it happens only "sometimes" I'm guessing > that it may be hard to track down otherwise. > > Robin > > -- > Robin Sommer * ICSI/LBNL * robin at icir.org * www.icir.org/robin > > > ------------------------------ > > Message: 4 > Date: Mon, 24 Oct 2016 12:58:08 -0600 > From: James Lay > Subject: Re: [Bro] Several protosig questions > To: Robin Sommer > Cc: Bro-IDS > Message-ID: <4e1013457c0cf046f0f4972fb32a5ed3 at localhost> > Content-Type: text/plain; charset=UTF-8; format=flowed > >> On 2016-10-24 12:51, Robin Sommer wrote: >>> On Fri, Oct 21, 2016 at 17:38 -0600, you wrote: >>> >>> However if I have a generic ntp rule, either before or after the >>> ntp_apple, I only get the ntp match: >> >> Let me clarify one thing: >> >>> ? eval ProtoSig::match >> >> "eval" is not for flagging a match. It's a condition by itself that >> influences the matching of the signature. To learn about a match use >> "event" instead and then hook into the "signature_event" event. If I >> do that, things seem to work for me correctly with the sig-fixes >> branch: > > Ok cool...I've just been going per the docs here: > > https://github.com/broala/bro-protosigs > > "You must add the eval ProtoSig::match condition into your signature > that does the final match. That call is what ties the signature match > into the protosigs Bro scripts." > > I'll give event a go and report my findings. Thanks so much Robin! 
> > James > >> >> # cat test.sig >> signature protosig_ntp { >> ip-proto == udp >> dst-port == 123 >> payload /.*\x00/ >> payload-size == 48 >> event "match" >> >> } >> signature protosig_ntp_apple { >> dst-ip == 17.0.0.0/8 >> ip-proto == udp >> dst-port == 123 >> payload /.*\x00/ >> payload-size == 48 >> event "match" >> } >> >> # cat test.bro >> event signature_match(state: signature_state, msg: string, data: >> string) >> { >> print "signature match", state$sig_id; >> } >> >> # bro -s ./test.sig -r ntp-1.pcap ./test.bro >> signature match, protosig_ntp >> signature match, protosig_ntp_apple >> signature match, protosig_ntp >> signature match, protosig_ntp_apple >> signature match, protosig_ntp >> signature match, protosig_ntp_apple >> signature match, protosig_ntp >> signature match, protosig_ntp_apple >> signature match, protosig_ntp >> signature match, protosig_ntp_apple >> signature match, protosig_ntp >> signature match, protosig_ntp_apple >> >> >> If I add "eval", I do see it execute for both signatures, though more >> often for the generic one. That's probably an artefact of the order in >> which conditions are run internally; having the dst-ip in there may >> change that. >> >> Btw, the order of matches is undefined, and might have well changed >> since 2.4. >> >> Robin >> >> -- >> Robin Sommer * ICSI/LBNL * robin at icir.org * www.icir.org/robin > > > ------------------------------ > > Message: 5 > Date: Mon, 24 Oct 2016 18:58:22 +0000 > From: "Azoff, Justin S" > Subject: Re: [Bro] Bro crashed this morning.. > To: fatema bannatwala > Cc: "bro at bro.org" > Message-ID: > Content-Type: text/plain; charset="us-ascii" > > >> On Oct 24, 2016, at 2:48 PM, fatema bannatwala wrote: >> >> I have two crons currently in bro's crontab: >> $ crontab -l >> 0-59/5 * * * * /usr/local/bro/default/bin/broctl cron >> 55 6 * * * /usr/local/bro/bin/restart-bro >> >> restart-bro is a small script that looks like this: >> >> /usr/local/bro/default/bin/broctl install >> /usr/local/bro/default/bin/broctl restart >> >> The reason, I think, for having bro restart every morning at 6:55 is we pull down the intel feeds every morning at 6:45 >> that updates the files that bro monitors as input feeds for intel framework. >> And I thought that Bro would not pick up new/updated input feeds unless restarted. >> >> Is that would be something causing bro to not restart? >> > > You shouldn't have to restart bro for it to pull in updates from intel files. > > It's suspicious that you say bro crashed at 7am and that cron job runs at 6:55. > > It's possible that something went wrong during the restart and bro just ended up stopped. I could see 'broctl restart' leaving the cluster in an inconsistent state if it gets interrupted. > > I'd just remove that job (since intel files should auto update on their own) or try changing the time it runs at to 6:57, which should at least avoid it running at the same time as cron. 
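> 
> The intel framework reads those feed files through the input framework in
> re-read mode, so edits made by the 6:45 pull get picked up on the fly. As a
> rough sketch (the feed path below is only a placeholder), wiring a feed in is
> just:
> 
>     @load frameworks/intel/seen
>     redef Intel::read_files += {
>         "/usr/local/bro/feeds/myfeed.intel"
>     };
> 
> As long as the 6:45 job rewrites that file in place, the new indicators should
> take effect without any install or restart.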
> > > > -- > - Justin Azoff > > > > > ------------------------------ > > _______________________________________________ > Bro mailing list > Bro at bro.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > > End of Bro Digest, Vol 126, Issue 43 > ************************************ From johanna at icir.org Tue Oct 25 11:53:33 2016 From: johanna at icir.org (Johanna Amann) Date: Tue, 25 Oct 2016 11:53:33 -0700 Subject: [Bro] Automated shutdown In-Reply-To: <2C7473428EFB4348960ACC47FDC529451ACD0E72@MOXCXR.na.bayer.cnb> References: <2C7473428EFB4348960ACC47FDC529451ACD0E72@MOXCXR.na.bayer.cnb> Message-ID: <20161025185330.tb3ysi5gzq5saavm@wifi155.sys.ICSI.Berkeley.EDU> Hello Dan, Bro does not automatically stop when a reboot is initiated - your operating system will probably just terminate all the processes sometime during shutdown. The proper way to shutdown Bro, assuming you use broctl, would be to call broctl stop when your server shuts down. How exactly you do that depends on your operating system. I hope that helps, Johanna On Thu, Oct 20, 2016 at 08:41:42PM +0000, Daniel Manzo wrote: > Hi all, > > I am looking to create a script that will shutdown all applications on my server prior to rebooting. Does Bro automatically stop when a reboot is initiated? If not, is there a bash script that can stop bro so I don't have to do it manually? > > Best regards, > > Dan Manzo > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From johanna at icir.org Tue Oct 25 12:00:37 2016 From: johanna at icir.org (Johanna Amann) Date: Tue, 25 Oct 2016 12:00:37 -0700 Subject: [Bro] Fwd: Simultaneous Connections In-Reply-To: References: Message-ID: <20161025190037.qvv5u2do4jbrrmcu@wifi155.sys.ICSI.Berkeley.EDU> Hi Troy, the c$conn record is only populated way after the connection_established event (usually in connection_state_remove). If that is too late, the easiest way is probably to also extend the connection record, first track that in there, and then copy it over to c$conn in connection_state_remove. I hope this helps, Johanna On Thu, Oct 20, 2016 at 11:23:41PM -0400, Troy Ward wrote: > I am trying to identify connections with the same source host and > destination host/port occuring at the same time. My plan is to examine > each connection_established event. I've created a table the pairs up those > 3 items and when the event fires it looks to see if the pair exists. If it > does, I want to tag a bol value that I have added to the conn record to > mark it as a duplicate. When the connection closes, it takes information > about both connections and records them into a new log file. I have > attached my code below. My problem is that I get a "field value missing > [simultanious::c$conn] on line 75 (c$conn$duplicate = T). If I move the > command to the connection_closed event it works fine but that is to late. > > Ideas? > > Thanks, > > Troy > > > local.bro > > > > > # Add a field to the connection log record. > redef record Conn::Info += { > ## Indicate if the originator of the connection is part of the > ## "private" address space defined in RFC1918. > duplicate: bool &default=F ; > }; > > type tmp : record > { > # Timestamp of the event > ts : time &log; > #source Port > orig_p : count &log; > #UID > uid : string &log; > }; > # Add a field to the connection log record. 
> redef record Conn::Info += { > ## Indicate if the originator of the connection is part of the > ## "private" address space defined in RFC1918. > tmp_duplicate: tmp &optional; > }; > > @load simultanious > > > > simultanious.bro > > module simultanious; > export > { > redef enum Log::ID += { LOG }; > #Data structure for final record to record > type Info : record > { > # Timestamp of the event > ts : time &log; > # Source IP Host address > orig_h : addr &log; > # Destination IP Host address > resp_h : addr &log; > #Destination Port > resp_p : count &log; > #Protocol > proto : transport_proto &log; > #First Connection Timestamp > first_ts : time &log; > #First UID > first_uid : string &log; > #First originating port > first_orig_p : count &log &optional; > #Second Connection Timestamp > second_ts : time &log; > #Second UID > second_uid : string &log; > #Second Pack orig_p : string &log; > second_orig_p : count &log &optional; > }; > type tmp : record > { > # Timestamp of the event > ts : time &log; > #source Port > orig_p : count &log; > #UID > uid : string &log; > }; > #Table of hosts that are currently being tracked > #Order is source IP address with a sub table of destination IP and port > global current_connections : table [addr, addr, port] of tmp; > > #And event that can be handled to access the :bro:type: > SimultaniousConnections::Info > ##record as it is sent on to the logging framework > global log_duplicate_connections : event(rec: Info); > #List of subnets to monitor > global monitor_subnets : set[subnet] = { 192.168.1.0/24, 192.68.2.0/24, > 172.16.0.0/20, 172.16.16.0/20, 172.16.32.0/20, 172.16.48.0/20 }; > #List of ports to monitor > global monitor_ports : set [port] = { 443/tcp, 80/tcp, 8080/tcp, 22/tcp}; > > > > } > event bro_init() > { > # Create the logging stream > Log::create_stream(LOG, [$columns=Info, $path="simultanious_conn"]); > } > event connection_established(c: connection) > { > #Check to see if there is already an entry for the connection string in the > table > if ([c$id$orig_h, c$id$resp_h, c$id$resp_p] in current_connections) > { > #There is a duplicate record > #duplicate_host = T; > c$conn$duplicate = T; > c$conn$tmp_duplicate$ts = current_connections[c$id$orig_h, c$id$resp_h, > c$id$resp_p]$ts; > c$conn$tmp_duplicate$orig_p = current_connections[c$id$orig_h, c$id$resp_h, > c$id$resp_p]$orig_p; > c$conn$tmp_duplicate$uid = current_connections[c$id$orig_h, c$id$resp_h, > c$id$resp_p]$uid; > print fmt("dup - %s %s %s %s", c$uid, c$id$orig_h, > c$id$resp_h, c$id$resp_p); > } > else > { > local temp_record : tmp = [$ts=c$start_time, > $orig_p=port_to_count(c$id$orig_p), > $uid=c$uid]; > current_connections[c$id$orig_h, c$id$resp_h, c$id$resp_p]=temp_record; > print fmt("no dup - %s %s %s %s", c$uid, c$id$orig_h, > c$id$resp_h, c$id$resp_p); > } > } > event connection_state_remove(c: connection) > { > if (c$conn$duplicate && c$duration > 1min) > { > > > print fmt("end of record dup %s %s %s %s %s", c$uid, > c$id$orig_h, c$id$resp_h, c$id$resp_p, c$conn$tmp_duplicate); > #Log::write (simultanious::LOG, temp_working_record); > } > else > { > print fmt("end of packet no dup - %s %s %s %s", c$uid, > c$id$orig_h, c$id$resp_h, c$id$resp_p); > } > } > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From johanna at icir.org Tue Oct 25 12:02:09 2016 From: johanna at icir.org (Johanna Amann) Date: Tue, 25 Oct 2016 12:02:09 -0700 Subject: [Bro] How to detect transparent proxy by 
BRO IDS (2.4.1) In-Reply-To: References: Message-ID: <20161025190209.s2ru3cuwvrxe72ad@wifi155.sys.ICSI.Berkeley.EDU> Hi Hafiz, there is no reason why Bro should not log HTTP sessions when there is a transparent proxy (which, as the name suggest, should also be transparent to Bro). Hence I assume there is something different going on. Do your conn.log entries look like Bro sees entire TCP sessions? Johanna On Mon, Oct 24, 2016 at 09:36:08AM +0500, Hafiz Shafiq wrote: > Sir, > Our network administrator is using proxy in transparent mode (SQUID). In > this mode , there is no need for user to configure proxy option on his > computer. I have captured few hours traffic via tcpdump and when I run bro, > to know about http trafffic and defferent apps used (like google, youtube > etc.). I am amazed to know that there is even not http.log and > app_stats.log files generated. Is it some problem in bro configuration. I > have searched from its manual, infomation given about proxy could not solve > my problem. I have checked load_scripts.log. I shows that http analyzer is > loaded. > Can you please guide me about this issue ? > > Regards > > Hafiz Muhammad Shafiq > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From philosnef at gmail.com Tue Oct 25 12:02:46 2016 From: philosnef at gmail.com (erik clark) Date: Tue, 25 Oct 2016 15:02:46 -0400 Subject: [Bro] Bro crashed this morning.. Message-ID: Why don't you just use deploy instead of install/start? -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161025/9ea79d1b/attachment.html From johanna at icir.org Tue Oct 25 12:05:01 2016 From: johanna at icir.org (Johanna Amann) Date: Tue, 25 Oct 2016 12:05:01 -0700 Subject: [Bro] Packet loss In-Reply-To: References: Message-ID: <20161025190501.sa6hplaczumcczwp@wifi155.sys.ICSI.Berkeley.EDU> Just to check - are you running Bro in cluster mode? An 1gb tap is probably too much for a single process to handle. Apart from that, on a first glance, that really just looks like Bro cannot keep up with processing packets. If packets come in bursts, that might be one reason why the CPU load looks ok, while there is a huge packet loss. Johanna On Mon, Oct 24, 2016 at 04:39:07PM +1000, John Edwards wrote: > Hi all > > I have just deployed bro onto two systems on my border gateway. They sit > off a tap and each system has individual Rx and Tx interfaces bridged using > brctl. I am not seeing any interface dropped packets or errors from the > Ubuntu host via ifconfig. > > When looking at my data within bro that monitors a standalone configuration > of br0 has the below line repeated a few times throughout the notice.log > > 1477283201.681213 - - - - - - - > - - PacketFilter::Dropped_Packets 2739608 packets > dropped after filtering, 12351460 received, 12351686 on link - - > - - - bro Notice::ACTION_LOG3600.000000 F > - - - - - > We seem to be getting lots of data and as far as CPU and memory resource > consumption goes it's not under strenuous load. I haven't changed too much > of the configuration of the 2.4.1 build. > > Sorry if this has been discussed or asked before but what can I look at > optimising or tuning to reduce the packet loss? > > One thread I found wasn't bros issue but the tap and an upgrade of the > software fixed it. I cannot do this as it's without software to tune. 
It's > a vss active 1gb tap, doesn't seem to be the tap at this stage but it quite > possibly could be :) > > Thanks > John > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From jlay at slave-tothe-box.net Tue Oct 25 12:37:13 2016 From: jlay at slave-tothe-box.net (James Lay) Date: Tue, 25 Oct 2016 13:37:13 -0600 Subject: [Bro] How to detect transparent proxy by BRO IDS (2.4.1) In-Reply-To: <20161025190209.s2ru3cuwvrxe72ad@wifi155.sys.ICSI.Berkeley.EDU> References: <20161025190209.s2ru3cuwvrxe72ad@wifi155.sys.ICSI.Berkeley.EDU> Message-ID: <0c73d913caad9e5ce6dcdf1fb128d248@localhost> On 2016-10-25 13:02, Johanna Amann wrote: > Hi Hafiz, > > there is no reason why Bro should not log HTTP sessions when there is a > transparent proxy (which, as the name suggest, should also be > transparent > to Bro). Hence I assume there is something different going on. > > Do your conn.log entries look like Bro sees entire TCP sessions? > > Johanna > > On Mon, Oct 24, 2016 at 09:36:08AM +0500, Hafiz Shafiq wrote: >> Sir, >> Our network administrator is using proxy in transparent mode (SQUID). >> In >> this mode , there is no need for user to configure proxy option on his >> computer. I have captured few hours traffic via tcpdump and when I run >> bro, >> to know about http trafffic and defferent apps used (like google, >> youtube >> etc.). I am amazed to know that there is even not http.log and >> app_stats.log files generated. Is it some problem in bro >> configuration. I >> have searched from its manual, infomation given about proxy could not >> solve >> my problem. I have checked load_scripts.log. I shows that http >> analyzer is >> loaded. >> Can you please guide me about this issue ? >> >> Regards >> >> Hafiz Muhammad Shafiq > >> _______________________________________________ >> Bro mailing list >> bro at bro-ids.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro FWIW I do this at home with squid. If you have bro listening on the external and internal as I do you'll see something like this on the external: 2016-10-25T13:19:54-0600 CSBm181DwlSMeM1Jkl ext.ip.address 35292 151.101.52.246 80 1 GET i.scdn.co /image/bfe99c49e55b1b0881da51b6820051673071c34e - Spotify/6.3.0 Android/22 (LG-ls990) 0 94040 200 OK - - - (empty) - - VIA -> 1.1 gateway (squid/3.5.22),X-FORWARDED-FOR -> 192.168.1.101 - - F898pvpKI8FAldYVb image/jpeg - If you're only listening internal, you may not have any evidence to show proxied information: 2016-10-25T13:19:54-0600 CKUVZB4Buv5shQEfre 192.168.1.101 45741 151.101.52.246 80 1 GET i.scdn.co /image/bfe99c49e55b1b0881da51b6820051673071c34e - Spotify/6.3.0 Android/22 (LG-ls990) 0 94040 200 OK - - - (empty) - - - -- FIcIh42olruCzFTJgl image/jpeg - Hope that helps. James From jedwards2728 at gmail.com Tue Oct 25 12:40:21 2016 From: jedwards2728 at gmail.com (John Edwards) Date: Wed, 26 Oct 2016 05:40:21 +1000 Subject: [Bro] Fwd: Packet loss In-Reply-To: References: <20161025190501.sa6hplaczumcczwp@wifi155.sys.ICSI.Berkeley.EDU> <489A893A-DAD0-4BA7-B936-6E922A0244FD@icir.org> Message-ID: ---------- Forwarded message ---------- From: *John Edwards* Date: Wednesday, 26 October 2016 Subject: Packet loss To: Johanna Amann I am using Ubuntu bridge utils "brctl" to bond the interfaces. 
I'm using dell r610 servers with 16core CPU and 24gb of ram and plenty of disk feeding all of bro logs into splunk So if I had a manager configured With the bonded interface and pf_ring it would distribute the load over two workers directly connected to the manager? Thanks On Wednesday, 26 October 2016, Johanna Amann > wrote: > Workers are different from proxies. If there really is a peak around 1Gbs, > you will need a number of workers (depending on your hardware and traffic, > I would guess more than 1). Furthermore, you will probably need a method to > split that traffic between workers; usually people either use pf_ring, > af_packet or specialized nics. I don't know how that will work together > with whatever software you use to create your bonded interface. > > Johanna > > On 25 Oct 2016, at 12:16, John Edwards wrote: > > I was running it in a cluster mode with one worker proxy and manager and >> was configured when I first noticed this in the logs. I then changed it >> back to a stand alone. >> >> So if I had the manager connected to the tap I can then have two workers >> directly connected to process the throughput to see if I can get a better >> dropped packet date? >> >> Thanks for the information >> >> On Wednesday, 26 October 2016, Johanna Amann wrote: >> >> Just to check - are you running Bro in cluster mode? An 1gb tap is >>> probably too much for a single process to handle. >>> >>> Apart from that, on a first glance, that really just looks like Bro >>> cannot >>> keep up with processing packets. If packets come in bursts, that might >>> be one reason why the CPU load looks ok, while there is a huge packet >>> loss. >>> >>> Johanna >>> >>> On Mon, Oct 24, 2016 at 04:39:07PM +1000, John Edwards wrote: >>> >>>> Hi all >>>> >>>> I have just deployed bro onto two systems on my border gateway. They sit >>>> off a tap and each system has individual Rx and Tx interfaces bridged >>>> >>> using >>> >>>> brctl. I am not seeing any interface dropped packets or errors from the >>>> Ubuntu host via ifconfig. >>>> >>>> When looking at my data within bro that monitors a standalone >>>> >>> configuration >>> >>>> of br0 has the below line repeated a few times throughout the notice.log >>>> >>>> 1477283201.681213 - - - - - - >>>> - >>>> - - PacketFilter::Dropped_Packets 2739608 packets >>>> dropped after filtering, 12351460 received, 12351686 on link - >>>> - >>>> - - - bro Notice::ACTION_LOG3600.000000 F >>>> - - - - - >>>> We seem to be getting lots of data and as far as CPU and memory resource >>>> consumption goes it's not under strenuous load. I haven't changed too >>>> >>> much >>> >>>> of the configuration of the 2.4.1 build. >>>> >>>> Sorry if this has been discussed or asked before but what can I look at >>>> optimising or tuning to reduce the packet loss? >>>> >>>> One thread I found wasn't bros issue but the tap and an upgrade of the >>>> software fixed it. I cannot do this as it's without software to tune. >>>> >>> It's >>> >>>> a vss active 1gb tap, doesn't seem to be the tap at this stage but it >>>> >>> quite >>> >>>> possibly could be :) >>>> >>>> Thanks >>>> John >>>> >>> >>> _______________________________________________ >>>> Bro mailing list >>>> bro at bro-ids.org >>>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >>>> >>> >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... 
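To make the worker setup described above concrete, a minimal broctl node.cfg for
a single box could look like the sketch below (the interface name, worker count
and the choice of pf_ring are assumptions here; af_packet or a capable NIC serve
the same purpose):

[manager]
type=manager
host=localhost

[proxy-1]
type=proxy
host=localhost

[worker-1]
type=worker
host=localhost
interface=br0
lb_method=pf_ring
lb_procs=4

With lb_method and lb_procs set, broctl starts four worker processes on br0 and
pf_ring fans the traffic out across them, so no single process has to keep up
with the full 1Gb peak on its own. Note that this assumes Bro is linked against
a PF_RING-aware libpcap.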
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161026/76a6193b/attachment-0001.html From daniel.guerra69 at gmail.com Tue Oct 25 15:47:19 2016 From: daniel.guerra69 at gmail.com (Daniel Guerra) Date: Wed, 26 Oct 2016 00:47:19 +0200 Subject: [Bro] bro size In-Reply-To: <54EE16CB-21FC-44CC-A9E9-F60912313520@icir.org> References: <54EE16CB-21FC-44CC-A9E9-F60912313520@icir.org> Message-ID: <9CB8DEE8-B306-4B19-80A6-A541BED0EAA0@gmail.com> Same here 6mb stripped. Check https://hub.docker.com/r/danielguerra/alpine-bro-build/ The broker and brocolli etc are disabled Daniel > On 25 Oct 2016, at 19:56, Johanna Amann wrote: > > If for some reason it really is a concern, you also can just call strip > on the binary. This brings the binary size down to ~6MB for me. > > Johanna > > On 25 Oct 2016, at 10:28, Hosom, Stephen M wrote: > >> I wouldn?t recommend changing many of the options in configure >> unless you truly know what you?re doing. Why is it that you need Bro >> to be smaller? >> >> From: bro-bounces at bro.org [mailto:bro-bounces at bro.org] On Behalf Of Dk >> Jack >> Sent: Monday, October 24, 2016 8:43 PM >> To: bro at bro.org >> Subject: [Bro] bro size >> >> Hi, >> When I compile bro, the bro binary comes out to about 120Mb. Are there >> any options I can use to reduce by eliminating some of the features I >> don't need? the configure script doesn't seem to have many options. >> Thanks. >> >> Dk. >> _______________________________________________ >> Bro mailing list >> bro at bro-ids.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From bill.de.ping at gmail.com Wed Oct 26 05:01:48 2016 From: bill.de.ping at gmail.com (william de ping) Date: Wed, 26 Oct 2016 15:01:48 +0300 Subject: [Bro] GSSAPI - kerberos issue Message-ID: Hello all The GSSAPI analyzer does not recognize KRB5 authentication made over SPNEGO. looking at the code (gssapi-analyzer.pac), the analyzer does compare the value of the mech_token variable with the id of krb5 and mskrb5: **else if ( ${val.mech_token}.length() == 9 && (memcmp("\x2a\x86\x48\x86\xf7\x12\x01\x02\x02", ${val.mech_token}.begin(), ${val.mech_token}.length()) == 0 || memcmp("\x2a\x86\x48\x82\xf7\x12\x01\x02\x02", ${val.mech_token}.begin(), ${val.mech_token}.length()) == 0 )) ** By looking with wireshark through pcaps containing relevant transactions, i found out that these bytes are preceded by 6 more bytes in both smb1 and smb2 (they change from session to session, possibly a part of the ASN1Meta that is wrongly parsed?), and the length of the mech_token is quite large (up to the end of the packet). by adjusting some offsets and lengths (${val.mech_token}.begin() +6 etc.), I was able to reach the code that delivers the packet to the KRB analyzer. After this fix (+6 for request, +5 for response) I was able to produce Kerberos logs from the said packets, but perhaps the problem lays in the arguments of DeliverPacket function? Hope this bug can be fixed in a more professional way W -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161026/618daea0/attachment.html From seth at icir.org Wed Oct 26 05:57:43 2016 From: seth at icir.org (Seth Hall) Date: Wed, 26 Oct 2016 08:57:43 -0400 Subject: [Bro] GSSAPI - kerberos issue In-Reply-To: References: Message-ID: > On Oct 26, 2016, at 8:01 AM, william de ping wrote: > > By looking with wireshark through pcaps containing relevant transactions, i found out that these bytes are preceded by 6 more bytes in both smb1 and smb2 (they change from session to session, possibly a part of the ASN1Meta that is wrongly parsed?) I would like to fix this for the 2.5 release. Do you have some packets I could take a look at? .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From zeolla at gmail.com Wed Oct 26 06:22:32 2016 From: zeolla at gmail.com (Zeolla@GMail.com) Date: Wed, 26 Oct 2016 13:22:32 +0000 Subject: [Bro] bro syntax checking Message-ID: So I've been looking for a cleaner way to check bro syntax via a pre-commit hook - we currently have bro installed on a server where we commit from that does a `broctl check`. I was thinking of doing something small like a docker instance that can run `broctl check` using a mounted host directory. My questions are: 1. Has anybody else already solved this issue? What are others using to validate syntax before pushing out changes? 2. Is this the official bro docker image? I pulled it down and was playing around a bit but ran into an issue but I wasn't sure if this was expected. Specifically, /bro/bin/broctl wasn't functional until I installed python, but after running `apt-get update && apt-get install -y python && /bin/bro/broctl install` things seemed to be functional. I did briefly try to peruse the mailing list archive for the past few months but didn't find what I was looking for. Thanks, Jon -- Jon -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161026/fdf4e6e5/attachment.html From dhoelzer at enclaveforensics.com Wed Oct 26 06:27:27 2016 From: dhoelzer at enclaveforensics.com (=?UTF-8?Q?David_Hoelzer?=) Date: Wed, 26 Oct 2016 13:27:27 +0000 Subject: [Bro] bro syntax checking In-Reply-To: References: Message-ID: <01000158012c8eb4-fb778192-c0fd-4ef1-8e7a-6b9c2d308966-000000@email.amazonses.com> Why not just load the edited scripts against a small pcap? ?That?s what I?ve learned to do on my end before doing a deploy. :) On Oct 26, 2016, at 9:22 AM, Zeolla at GMail.com > wrote: So I've been looking for a cleaner way to check bro syntax via a pre-commit hook - we currently have bro installed on a server where we commit from that does a `broctl check`.? I was thinking of doing something small like a docker instance that can run `broctl check` using a mounted host directory.? My questions are: 1. Has anybody else already solved this issue?? What are others using to validate syntax before pushing out changes? ? 2. Is this the official bro docker image?? I pulled it down and was playing around a bit but ran into an issue but I wasn't sure if this was expected.? Specifically, /bro/bin/broctl wasn't functional until I installed python, but after running `apt-get update && apt-get install -y python && /bin/bro/broctl install` things seemed to be functional. ? I did briefly try to peruse the mailing list archive for the past few months but didn't find what I was looking for.? 
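For concreteness, the kind of check I mean boils down to something like the
following (the bro path is a placeholder; this leans on bro's parse-only option
rather than a full broctl check, and a per-file check like this will miss errors
that only appear when the whole site config is loaded together):

#!/bin/sh
# .git/hooks/pre-commit: refuse the commit if any staged .bro script
# fails to parse. "bro -a" only parses the scripts, it does not run them.
status=0
for f in $(git diff --cached --name-only --diff-filter=ACM | grep '\.bro$'); do
    /usr/local/bro/bin/bro -a "$f" || status=1
done
exit $status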
Thanks, Jon -- Jon _______________________________________________ Bro mailing list bro at bro-ids.org http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161026/aab39cc9/attachment.html From jazoff at illinois.edu Wed Oct 26 06:36:26 2016 From: jazoff at illinois.edu (Azoff, Justin S) Date: Wed, 26 Oct 2016 13:36:26 +0000 Subject: [Bro] bro syntax checking In-Reply-To: References: Message-ID: > On Oct 26, 2016, at 9:22 AM, Zeolla at GMail.com wrote: > > So I've been looking for a cleaner way to check bro syntax via a pre-commit hook - we currently have bro installed on a server where we commit from that does a `broctl check`. I was thinking of doing something small like a docker instance that can run `broctl check` using a mounted host directory. My questions are: > > 1. Has anybody else already solved this issue? What are others using to validate syntax before pushing out changes? bro supports a '-a' option for validating syntax on scripts. I've built integration for it inside syntastic for vim and wrote an atom linter for bro, adding support for other editors is pretty easy. Aside from that we don't bother.. if a broken script ends up getting pushed out somehow, broctl deploy will complain and we can fix it without ever impacting the running bro instances. > 2. Is this the official bro docker image? I pulled it down and was playing around a bit but ran into an issue but I wasn't sure if this was expected. Specifically, /bro/bin/broctl wasn't functional until I installed python, but after running `apt-get update && apt-get install -y python && /bin/bro/broctl install` things seemed to be functional. Ah.. I build those images for try.bro.org and for script testing (there's one for each version of bro) but I've never actually used them to run bro via broctl. You're probably better off just using it to run your scripts against a pcap. -- - Justin Azoff From zeolla at gmail.com Wed Oct 26 08:09:21 2016 From: zeolla at gmail.com (Zeolla@GMail.com) Date: Wed, 26 Oct 2016 15:09:21 +0000 Subject: [Bro] bro syntax checking In-Reply-To: References: Message-ID: What I'm working on doing is making this more accessible to high turnover, fairly green SOC analysts. In that situation I don't trust process/procedure, I need an easily distributed validation mechanism. The thought would be for them to get assigned a task -> attempt a solution -> push to a test branch which requires some very basic checks -> request a Sr analyst to review and merge to master. I don't want to waste my Sr analyst's time with something that doesn't pass very basic tests. Essentially I'm looking to scale this process out. Jon On Wed, Oct 26, 2016 at 9:38 AM Azoff, Justin S wrote: > > > On Oct 26, 2016, at 9:22 AM, Zeolla at GMail.com wrote: > > > > So I've been looking for a cleaner way to check bro syntax via a > pre-commit hook - we currently have bro installed on a server where we > commit from that does a `broctl check`. I was thinking of doing something > small like a docker instance that can run `broctl check` using a mounted > host directory. My questions are: > > > > 1. Has anybody else already solved this issue? What are others using to > validate syntax before pushing out changes? > > bro supports a '-a' option for validating syntax on scripts. 
I've built > integration for it inside syntastic for vim and wrote an atom linter for > bro, adding support for other editors is pretty easy. > > Aside from that we don't bother.. if a broken script ends up getting > pushed out somehow, broctl deploy will complain and we can fix it without > ever impacting the running bro instances. > > > 2. Is this the official bro docker image? I pulled it down and was > playing around a bit but ran into an issue but I wasn't sure if this was > expected. Specifically, /bro/bin/broctl wasn't functional until I > installed python, but after running `apt-get update && apt-get install -y > python && /bin/bro/broctl install` things seemed to be functional. > > Ah.. I build those images for try.bro.org and for script testing (there's > one for each version of bro) but I've never actually used them to run bro > via broctl. You're probably better off just using it to run your scripts > against a pcap. > > > -- > - Justin Azoff > > -- Jon -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161026/c4bf1c6a/attachment.html From george.papulis at wustl.edu Wed Oct 26 12:41:44 2016 From: george.papulis at wustl.edu (Papulis, George) Date: Wed, 26 Oct 2016 19:41:44 +0000 Subject: [Bro] SQLite logging and as white/blacklist in a cluster Message-ID: Hello everyone, I have a bro script that logs events based on a blacklist, but I don't want to log the same IP - blacklisted item twice. I was thinking I could log the data using the SQLite writer, and then also read from that database checking if the event has been logged earlier. Has anyone used the SQLite logging in a cluster, and if so, is there anything I should look out for? The size of the log is very small. Will I need to manually sync the database so each node in the cluster can reference the tables? Thanks, George Papulis, GCIA, GPEN Information Security Analyst Washington University in St. Louis george.papulis at wustl.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161026/49125339/attachment.html From jazoff at illinois.edu Wed Oct 26 13:10:55 2016 From: jazoff at illinois.edu (Azoff, Justin S) Date: Wed, 26 Oct 2016 20:10:55 +0000 Subject: [Bro] SQLite logging and as white/blacklist in a cluster In-Reply-To: References: Message-ID: > On Oct 26, 2016, at 3:41 PM, Papulis, George wrote: > > I have a bro script that logs events based on a blacklist, but I don't want to log the same IP - blacklisted item twice. I was thinking I could log the data using the SQLite writer, and then also read from that database checking if the event has been logged earlier. Has anyone used the SQLite logging in a cluster, and if so, is there anything I should look out for? The size of the log is very small. > > Will I need to manually sync the database so each node in the cluster can reference the tables? > A different approach here is probably better. What is your timeframe for not logging something twice? Forever? or would once a day be ok? 
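If once a day is enough, an expiring set along the lines of what the known-hosts
policy does is usually much simpler than going through SQLite. A rough sketch
(the module, the names and the blacklist lookup itself are made up or elided
here):

module DedupLog;

export {
    # Pairs we have already logged today. Entries expire after a day, and
    # &synchronized keeps the set consistent across the cluster nodes.
    global logged: set[addr, string] &create_expire=1day &synchronized;
}

function log_once(ip: addr, item: string)
    {
    if ( [ip, item] in logged )
        return;

    add logged[ip, item];
    # Log::write(...) for the real log entry goes here.
    }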
-- - Justin Azoff From george.papulis at wustl.edu Wed Oct 26 13:15:33 2016 From: george.papulis at wustl.edu (Papulis, George) Date: Wed, 26 Oct 2016 20:15:33 +0000 Subject: [Bro] SQLite logging and as white/blacklist in a cluster In-Reply-To: References: , Message-ID: Just once a day ________________________________ From: Azoff, Justin S Sent: Wednesday, October 26, 2016 3:10:55 PM To: Papulis, George Cc: bro at bro.org Subject: Re: [Bro] SQLite logging and as white/blacklist in a cluster > On Oct 26, 2016, at 3:41 PM, Papulis, George wrote: > > I have a bro script that logs events based on a blacklist, but I don't want to log the same IP - blacklisted item twice. I was thinking I could log the data using the SQLite writer, and then also read from that database checking if the event has been logged earlier. Has anyone used the SQLite logging in a cluster, and if so, is there anything I should look out for? The size of the log is very small. > > Will I need to manually sync the database so each node in the cluster can reference the tables? > A different approach here is probably better. What is your timeframe for not logging something twice? Forever? or would once a day be ok? -- - Justin Azoff -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161026/08f8d2ee/attachment.html From jazoff at illinois.edu Wed Oct 26 13:24:10 2016 From: jazoff at illinois.edu (Azoff, Justin S) Date: Wed, 26 Oct 2016 20:24:10 +0000 Subject: [Bro] SQLite logging and as white/blacklist in a cluster In-Reply-To: References: Message-ID: <02A1EA7B-77DD-4A4E-9DA9-9CF5ABE5125F@illinois.edu> > On Oct 26, 2016, at 4:15 PM, Papulis, George wrote: > > Just once a day If you are raising a notice, you can use suppression that is built in: https://www.bro.org/sphinx-git/frameworks/notice.html#automated-suppression otherwise see how the known hosts policy does it: https://www.bro.org/sphinx/_downloads/known-hosts.bro -- - Justin Azoff From george.papulis at wustl.edu Wed Oct 26 14:24:47 2016 From: george.papulis at wustl.edu (Papulis, George) Date: Wed, 26 Oct 2016 21:24:47 +0000 Subject: [Bro] SQLite logging and as white/blacklist in a cluster In-Reply-To: <02A1EA7B-77DD-4A4E-9DA9-9CF5ABE5125F@illinois.edu> References: , <02A1EA7B-77DD-4A4E-9DA9-9CF5ABE5125F@illinois.edu> Message-ID: We do not use the notice log in this instance, but using the &synchronized and &create_expire attributes look perfect for what I'm trying to accomplish, and significantly easier to use, haha. Thanks Justin! ________________________________ From: Azoff, Justin S Sent: Wednesday, October 26, 2016 3:24:10 PM To: Papulis, George Cc: bro at bro.org Subject: Re: [Bro] SQLite logging and as white/blacklist in a cluster > On Oct 26, 2016, at 4:15 PM, Papulis, George wrote: > > Just once a day If you are raising a notice, you can use suppression that is built in: https://www.bro.org/sphinx-git/frameworks/notice.html#automated-suppression otherwise see how the known hosts policy does it: https://www.bro.org/sphinx/_downloads/known-hosts.bro -- - Justin Azoff -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161026/cc631353/attachment-0001.html From robin at icir.org Wed Oct 26 15:47:10 2016 From: robin at icir.org (Robin Sommer) Date: Wed, 26 Oct 2016 15:47:10 -0700 Subject: [Bro] Several protosig questions In-Reply-To: References: <1476550089.2331.12.camel@slave-tothe-box.net> <1476550339.2331.13.camel@slave-tothe-box.net> <20161017203147.GF60965@icir.org> <20161018182858.GB27597@icir.org> <1477093081.2523.5.camel@slave-tothe-box.net> <20161024185146.GE28345@icir.org> Message-ID: <20161026224710.GD77705@icir.org> On Mon, Oct 24, 2016 at 13:53 -0600, James Lay wrote: > But the same results as above in conn.log. So I guess that's a feature > request? To hard define either a first rule that matches gets logged, or > the last rule that matches gets logged. It's a feature, not a bug. :) The signature engine always reports all matches, actually with the intention to *not* make order matter. What you could do is add logic in scriptland that selects which match to continue working with, based on some scheme you come up with (like having a table of signature names map to priorities). Robin -- Robin Sommer * ICSI/LBNL * robin at icir.org * www.icir.org/robin From jlay at slave-tothe-box.net Thu Oct 27 04:43:08 2016 From: jlay at slave-tothe-box.net (James Lay) Date: Thu, 27 Oct 2016 05:43:08 -0600 Subject: [Bro] Several protosig questions In-Reply-To: <20161026224710.GD77705@icir.org> References: <1476550089.2331.12.camel@slave-tothe-box.net> <1476550339.2331.13.camel@slave-tothe-box.net> <20161017203147.GF60965@icir.org> <20161018182858.GB27597@icir.org> <1477093081.2523.5.camel@slave-tothe-box.net> <20161024185146.GE28345@icir.org> <20161026224710.GD77705@icir.org> Message-ID: <1477568588.2356.1.camel@slave-tothe-box.net> On Wed, 2016-10-26 at 15:47 -0700, Robin Sommer wrote: > On Mon, Oct 24, 2016 at 13:53 -0600, James Lay wrote: > > > > > But the same results as above in conn.log.??So I guess that's a > > feature > > request???To hard define either a first rule that matches gets > > logged, or > > the last rule that matches gets logged. > It's a feature, not a bug. :) The signature engine always reports all > matches, actually with the intention to *not* make order matter. What > you could do is add logic in scriptland that selects which match to > continue working with, based on some scheme you come up with (like > having a table of signature names map to priorities). > > Robin > Thanks Robin...that helps. ?Truth be told I wouldn't have a clue on where to start have a table of sigs to map priorities, so I guess I'll suck it up and just make specific sigs and leave out the generics. ?I'll keep testing and report anything else...thanks again for the work on this. James -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161027/ba4cfbab/attachment.html From fatema.bannatwala at gmail.com Thu Oct 27 12:35:30 2016 From: fatema.bannatwala at gmail.com (fatema bannatwala) Date: Thu, 27 Oct 2016 15:35:30 -0400 Subject: [Bro] Understanding Connection history for ssh. In-Reply-To: References: <9C50AEB9-0E90-40BC-A881-8836CC3AFE0E@illinois.edu> Message-ID: So, finally some closing remarks: When asked to look deeper, the client finally did see the ssh attempts on the server, the issue was with the time zone. 
It seemed the clock on the client machine was 4hrs ahead of EST, that's why the attempts were getting logged with different time stamps than the ones logged in our logs. When they searched a range of time periods they found those. Unfortunately (or fortunately) they all were failed login attempts and Bro alert for successful ssh was a false positive from our side. We had a case in past where Bro had reported a successful ssh in intel.log for a linux machine and when verified with the client it was a true positive, but for this case it came out to be a false positive, hence was just thinking that may be bro might have a high false positive rate for WinSSHD or ssh for Windows for say, might be wrong. Thanks, Fatema. On Mon, Oct 10, 2016 at 3:58 PM, fatema bannatwala < fatema.bannatwala at gmail.com> wrote: > Thanks Justin! > That makes sense, was just curious to know how bro evaluates the > auth_success field :) > A quick question, as the connection was seen to last almost 10 secs and > was thinking that > the failed login connections are not that long, hence wanted to ask could > it be possible that > the user might have got multiple password prompts over the same connection > and Bro logged that single > connection of 10secs? > would it also explain why no 'R' or 'F' flag was seen in the end of conn > history (*ShAdDa)?* > > Thanks, > Fatema. > > > > On Mon, Oct 10, 2016 at 3:37 PM, Azoff, Justin S > wrote: > >> >> > On Oct 10, 2016, at 3:22 PM, fatema bannatwala < >> fatema.bannatwala at gmail.com> wrote: >> > >> > The problem is that, when contacted the concerned party, >> > they say that they don't see any login attempts from that IP and >> > asking whether we were sure that the ssh login were successful. >> >> If they are not seeing *any* attempts then something is screwed up with >> the logging on their end. >> >> It's possible that the value of auth_success is wrong[1], but it's not >> possible that no attempt happened. There was a tcp 3 way handshake, there >> was a ssh protocol negotiation, they should have something in their logs. >> >> >> [1] Or misleading, often from the SSH point of view it was a login, but >> sometimes the remote system drops you into another password prompt instead >> of a shell. Appliances do this a lot. >> >> -- >> - Justin Azoff >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161027/7e0b4141/attachment.html From ysrivas at ncsu.edu Thu Oct 27 14:37:44 2016 From: ysrivas at ncsu.edu (Yagyesh Srivastava) Date: Thu, 27 Oct 2016 17:37:44 -0400 Subject: [Bro] Help with Bro source code Message-ID: Hi, I am trying to understand the bro events engine for HTTP. I see that the code has two places where http is handled: 1) build/src/protocol/http (files like events.bif.cc , events.bif.init.cc and functions.bif.cc) 2) src/protocol/http (files like HTTP.CC) I am guessing the first one is the event engine and the second one is for handling the incoming HTTP packets. is that correct? Does anyone know of a runtime analysis tool which would be helpful in this case? How do we generally go about to understand bro's code base, i am just a beginner at this. Would really appreciate all the help. Thanks, Yagyesh -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161027/831b0c6c/attachment.html From ysrivas at ncsu.edu Fri Oct 28 05:33:50 2016 From: ysrivas at ncsu.edu (Yagyesh Srivastava) Date: Fri, 28 Oct 2016 08:33:50 -0400 Subject: [Bro] Help with Bro source code In-Reply-To: References: Message-ID: Thanks Anthony. I now have a basic understanding having gone through anthony kasza's blog. Can someone please help me with any kind of material/slides for understanding bro source code? Any other help/source would really help me a lot! Thanks, Yagyesh On Thu, Oct 27, 2016 at 5:56 PM, anthony kasza wrote: > Hi Yagyesh, > > I wrote a blog about what I found while first exploring Bro's code base. I > hope you find it helpful. > http://supbrosup.blogspot.com/2014/10/out-of-scripts-and-into-core.html > > -AK > > On Oct 27, 2016 3:46 PM, "Yagyesh Srivastava" wrote: > >> Hi, >> >> I am trying to understand the bro events engine for HTTP. >> I see that the code has two places where http is handled: >> 1) build/src/protocol/http (files like events.bif.cc , events.bif.init.cc >> and functions.bif.cc) >> 2) src/protocol/http (files like HTTP.CC) >> >> I am guessing the first one is the event engine and the second one is for >> handling the incoming HTTP packets. is that correct? >> >> Does anyone know of a runtime analysis tool which would be helpful in >> this case? >> How do we generally go about to understand bro's code base, i am just a >> beginner at this. >> Would really appreciate all the help. >> >> Thanks, >> Yagyesh >> >> _______________________________________________ >> Bro mailing list >> bro at bro-ids.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161028/c907394d/attachment.html From jdvessey at gmail.com Fri Oct 28 05:57:23 2016 From: jdvessey at gmail.com (David Vessey) Date: Fri, 28 Oct 2016 08:57:23 -0400 Subject: [Bro] Tracking PCAP file sources? Message-ID: Hi there, I've tried to find this in the docs and even tried exploring source code. This use case is more around after the fact network forensics, when working with PCAP files. If I have a bunch of pcaps, and I run bro like: $ bro -r input1.pcap -r input2.pcap -r input3.pcap Is there some way to associate bro's connection IDs back to contributing pcap(s)? Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161028/e996961b/attachment.html From DHoelzer at sans.org Fri Oct 28 06:12:53 2016 From: DHoelzer at sans.org (Hoelzer, Dave) Date: Fri, 28 Oct 2016 13:12:53 +0000 Subject: [Bro] Tracking PCAP file sources? In-Reply-To: References: Message-ID: <4742FE06-9411-4EF7-8664-4A65908BBED1@sans.org> Not really. :) Are the pcaps all contemporaneous or are they sequential? If they?re sequential you could potentially use the timestamp. ??????????????????????? David Hoelzer Dean of Faculty, STI Fellow, SANS.org On Oct 28, 2016, at 8:57 AM, David Vessey > wrote: Hi there, I've tried to find this in the docs and even tried exploring source code. This use case is more around after the fact network forensics, when working with PCAP files. If I have a bunch of pcaps, and I run bro like: $ bro -r input1.pcap -r input2.pcap -r input3.pcap Is there some way to associate bro's connection IDs back to contributing pcap(s)? Thanks! 
_______________________________________________ Bro mailing list bro at bro-ids.org http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161028/acf99326/attachment-0001.html From philosnef at gmail.com Fri Oct 28 06:43:22 2016 From: philosnef at gmail.com (erik clark) Date: Fri, 28 Oct 2016 09:43:22 -0400 Subject: [Bro] extract smtp objects Message-ID: How can I extract an entire email, and split the attachments out into separate files in Bro? Specifically, I want the entire smtp _transaction_ (not just the body of the email, but headers as well) in a file, and then the the attachments in the smtp body extracted as well. Not sure how to go about this. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161028/301ec874/attachment.html From philosnef at gmail.com Fri Oct 28 06:59:02 2016 From: philosnef at gmail.com (erik clark) Date: Fri, 28 Oct 2016 09:59:02 -0400 Subject: [Bro] Tracking PCAP file sources? Message-ID: for i in *.pcap do mkdir ${i%%.*}; cd ${%%.*} bro here cd .. done -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161028/ad577b9d/attachment.html From philosnef at gmail.com Fri Oct 28 07:23:25 2016 From: philosnef at gmail.com (erik clark) Date: Fri, 28 Oct 2016 10:23:25 -0400 Subject: [Bro] extract smtp objects In-Reply-To: References: Message-ID: For reference, I am probably going to run an edited version of https://people.eecs.berkeley.edu/~mavam/teaching/cs161-sp11/mime-attachment.bro to extract attachments, but it doesn't seem to help me too much in getting the entire smtp transaction into a file. :) Thanks! erik On Fri, Oct 28, 2016 at 9:43 AM, erik clark wrote: > How can I extract an entire email, and split the attachments out into > separate files in Bro? > > Specifically, I want the entire smtp _transaction_ (not just the body of > the email, but headers as well) in a file, and then the the attachments in > the smtp body extracted as well. Not sure how to go about this. > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161028/c0388e53/attachment.html From philosnef at gmail.com Fri Oct 28 08:04:10 2016 From: philosnef at gmail.com (erik clark) Date: Fri, 28 Oct 2016 11:04:10 -0400 Subject: [Bro] extract smtp objects In-Reply-To: References: Message-ID: Actually, the linked script doesnt work with 2.5 at all. Is there an up to date version of this that is out in the public domain somewhere? On Fri, Oct 28, 2016 at 10:23 AM, erik clark wrote: > For reference, I am probably going to run an edited version of > > https://people.eecs.berkeley.edu/~mavam/teaching/cs161- > sp11/mime-attachment.bro > > to extract attachments, but it doesn't seem to help me too much in getting > the entire smtp transaction into a file. :) > > Thanks! > > erik > > On Fri, Oct 28, 2016 at 9:43 AM, erik clark wrote: > >> How can I extract an entire email, and split the attachments out into >> separate files in Bro? >> >> Specifically, I want the entire smtp _transaction_ (not just the body of >> the email, but headers as well) in a file, and then the the attachments in >> the smtp body extracted as well. Not sure how to go about this. 
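For the attachment half of this, the file analysis framework can do the
splitting without any MIME handling in script-land; a minimal sketch (the output
filename scheme is arbitrary) that turns on the extraction analyzer for every
file carried over SMTP:

event file_sniff(f: fa_file, meta: fa_metadata)
    {
    if ( f$source == "SMTP" )
        Files::add_analyzer(f, Files::ANALYZER_EXTRACT,
                            [$extract_filename=fmt("smtp-entity-%s", f$id)]);
    }

Extracted entities land in the node's extract_files/ directory. Capturing the
whole transaction, headers included, is a separate problem - the SMTP analyzer
events expose the individual headers, but not a byte-for-byte copy of the
session.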
>> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161028/528c0a61/attachment.html From philosnef at gmail.com Fri Oct 28 08:25:55 2016 From: philosnef at gmail.com (erik clark) Date: Fri, 28 Oct 2016 11:25:55 -0400 Subject: [Bro] extract smtp objects In-Reply-To: References: Message-ID: Sorry for the clutter. I did this a different way with extract from file analyzer. I will just script some glue with conn.log, smtp.log, and fuid. I had originally wanted to scrap the data out of the raw smtp message (and would still prefer to do that) with other tools entirely, so if someone has a way to do that, that would be fantastic. :) On Fri, Oct 28, 2016 at 11:04 AM, erik clark wrote: > Actually, the linked script doesnt work with 2.5 at all. Is there an up to > date version of this that is out in the public domain somewhere? > > On Fri, Oct 28, 2016 at 10:23 AM, erik clark wrote: > >> For reference, I am probably going to run an edited version of >> >> https://people.eecs.berkeley.edu/~mavam/teaching/cs161-sp11/ >> mime-attachment.bro >> >> to extract attachments, but it doesn't seem to help me too much in >> getting the entire smtp transaction into a file. :) >> >> Thanks! >> >> erik >> >> On Fri, Oct 28, 2016 at 9:43 AM, erik clark wrote: >> >>> How can I extract an entire email, and split the attachments out into >>> separate files in Bro? >>> >>> Specifically, I want the entire smtp _transaction_ (not just the body of >>> the email, but headers as well) in a file, and then the the attachments in >>> the smtp body extracted as well. Not sure how to go about this. >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161028/bd8c5543/attachment.html From johanna at icir.org Fri Oct 28 08:43:03 2016 From: johanna at icir.org (Johanna Amann) Date: Fri, 28 Oct 2016 08:43:03 -0700 Subject: [Bro] Help with Bro source code In-Reply-To: References: Message-ID: <20161028154303.f24iyjgujem5h5tr@wifi155.sys.ICSI.Berkeley.EDU> I am not sure if you already found it - we have https://www.bro.org/development/howtos/index.html on our webpage for a few pointers. Apart from that, I don't think there is much that exists. Johanna On Fri, Oct 28, 2016 at 08:33:50AM -0400, Yagyesh Srivastava wrote: > Thanks Anthony. > > I now have a basic understanding having gone through anthony kasza's blog. > > Can someone please help me with any kind of material/slides for > understanding bro source code? > Any other help/source would really help me a lot! > > Thanks, > Yagyesh > > On Thu, Oct 27, 2016 at 5:56 PM, anthony kasza > wrote: > > > Hi Yagyesh, > > > > I wrote a blog about what I found while first exploring Bro's code base. I > > hope you find it helpful. > > http://supbrosup.blogspot.com/2014/10/out-of-scripts-and-into-core.html > > > > -AK > > > > On Oct 27, 2016 3:46 PM, "Yagyesh Srivastava" wrote: > > > >> Hi, > >> > >> I am trying to understand the bro events engine for HTTP. > >> I see that the code has two places where http is handled: > >> 1) build/src/protocol/http (files like events.bif.cc , events.bif.init.cc > >> and functions.bif.cc) > >> 2) src/protocol/http (files like HTTP.CC) > >> > >> I am guessing the first one is the event engine and the second one is for > >> handling the incoming HTTP packets. is that correct? > >> > >> Does anyone know of a runtime analysis tool which would be helpful in > >> this case? 
> >> How do we generally go about to understand bro's code base, i am just a > >> beginner at this. > >> Would really appreciate all the help. > >> > >> Thanks, > >> Yagyesh > >> > >> _______________________________________________ > >> Bro mailing list > >> bro at bro-ids.org > >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > >> > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From bro at pingtrip.com Fri Oct 28 08:43:21 2016 From: bro at pingtrip.com (Dave Crawford) Date: Fri, 28 Oct 2016 11:43:21 -0400 Subject: [Bro] Worker OOM crashes Message-ID: Has anyone experienced this crash scenario that I?m in the process of debugging? This just started in the last couple of days on a cluster that has been in production for just shy of two years without issue. tcmalloc: large alloc 18446744072956297216 bytes == (nil) @ 0x7f08e85b5dcb 0x7f08e85b5f1b 0x7f08e85b6965 0x7f08e85e89c5 0x7f1fff 0x867016 0x86781e 0x7f1a96 0x7f1db6 0x7f121a 0x7efbbe 0x7ed334 0x866eb5 0x56441c 0x600327 0x60124e 0x5cfa14 0x837db3 0x5cfecf 0x52ecb0 0x7f08e759fb45 0x5373ad (nil) out of memory in new. 1477631586.955885 fatal error: out of memory in new. The server itself still has some ceiling height: $ free -h total used free shared buffers cached Mem: 126G 23G 102G 16M 67M 5.1G -/+ buffers/cache: 18G 107G Swap: 33G 0B 33G But I?m wondering if the processes themselves are hitting a maximum size for memory allocation, as some are nearing 4GB. PID USER PRI NI VIRT RES SHR S CPU% MEM% TIME+ Command 24925 bro 20 0 3964M 3262M 269M S 78.1 2.5 2h47:55 /opt/bro/bin/bro -i eth6 -U .status -p broctl -p broctl-live -p local -p WIN_EXT-5 loc 618 bro 20 0 1217M 1160M 269M S 77.0 0.9 1h57:12 /opt/bro/bin/bro -i eth6 -U .status -p broctl -p broctl-live -p local -p WIN_EXT-2 loc 26073 bro 20 0 3989M 3156M 269M S 76.4 2.4 2h41:38 /opt/bro/bin/bro -i eth6 -U .status -p broctl -p broctl-live -p local -p WIN_EXT-7 loc 23938 bro 20 0 2610M 2318M 269M R 75.9 1.8 2h45:00 /opt/bro/bin/bro -i eth6 -U .status -p broctl -p broctl-live -p local -p WIN_EXT-9 loc 27816 bro 20 0 1554M 1455M 269M S 74.8 1.1 2h37:40 /opt/bro/bin/bro -i eth6 -U .status -p broctl -p broctl-live -p local -p WIN_EXT-1 loc 27817 bro 20 0 1532M 1435M 269M S 72.1 1.1 2h33:42 /opt/bro/bin/bro -i eth6 -U .status -p broctl -p broctl-live -p local -p WIN_EXT-4 loc 26677 bro 20 0 1421M 1372M 269M R 70.5 1.1 2h40:42 /opt/bro/bin/bro -i eth6 -U .status -p broctl -p broctl-live -p local -p WIN_EXT-6 loc 622 bro 20 0 1165M 1108M 269M R 70.5 0.9 1h54:24 /opt/bro/bin/bro -i eth6 -U .status -p broctl -p broctl-live -p local -p WIN_EXT-8 loc 613 bro 20 0 1231M 1166M 269M R 69.9 0.9 1h55:45 /opt/bro/bin/bro -i eth6 -U .status -p broctl -p broctl-live -p local -p WIN_EXT-10 lo 621 bro 20 0 1242M 1177M 269M R 68.9 0.9 1h53:02 /opt/bro/bin/bro -i eth6 -U .status -p broctl -p broctl-live -p local -p WIN_EXT-3 loc 1175 bro 20 0 234M 178M 10776 S 2.2 0.1 2h42:07 /opt/bro/bin/bro -U .status -p broctl -p broctl-live -p local -p WIN_EXT_PXY_1 local.b -Dave -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161028/de57c224/attachment.html From jazoff at illinois.edu Fri Oct 28 09:05:30 2016 From: jazoff at illinois.edu (Azoff, Justin S) Date: Fri, 28 Oct 2016 16:05:30 +0000 Subject: [Bro] Worker OOM crashes In-Reply-To: References: Message-ID: <9E74E283-FBB4-4A68-A23B-209B832772AC@illinois.edu> Yes.. 
there seems to be an issue where something goes wrong and tries to allocate too much. I don't think it has anything to do with 4G Can you apply this patch and see if the next time it happens you get a core dump + backtrace? diff --git a/src/util.cc b/src/util.cc index acfcb19..661f8ac 100644 --- a/src/util.cc +++ b/src/util.cc @@ -1624,7 +1624,7 @@ extern "C" void out_of_memory(const char* where) if ( reporter ) // Guess that might fail here if memory is really tight ... - reporter->FatalError("out of memory in %s.\n", where); + reporter->FatalErrorWithCore("out of memory in %s.\n", where); abort(); } -- - Justin Azoff > On Oct 28, 2016, at 11:43 AM, Dave Crawford wrote: > > Has anyone experienced this crash scenario that I?m in the process of debugging? This just started in the last couple of days on a cluster that has been in production for just shy of two years without issue. > > tcmalloc: large alloc 18446744072956297216 bytes == (nil) @ 0x7f08e85b5dcb 0x7f08e85b5f1b 0x7f08e85b6965 0x7f08e85e89c5 0x7f1fff 0x867016 0x86781e 0x7f1a96 0x7f1db6 0x7f121a 0x7efbbe 0x7ed334 0x866eb5 0x56441c 0x600327 0x60124e 0x5cfa14 0x837db3 0x5cfecf 0x52ecb0 0x7f08e759fb45 0x5373ad (nil) out of memory in new. > 1477631586.955885 fatal error: out of memory in new. > > The server itself still has some ceiling height: > > $ free -h > total used free shared buffers cached > Mem: 126G 23G 102G 16M 67M 5.1G > -/+ buffers/cache: 18G 107G > Swap: 33G 0B 33G > > But I?m wondering if the processes themselves are hitting a maximum size for memory allocation, as some are nearing 4GB. > > PID USER PRI NI VIRT RES SHR S CPU% MEM% TIME+ Command > 24925 bro 20 0 3964M 3262M 269M S 78.1 2.5 2h47:55 /opt/bro/bin/bro -i eth6 -U .status -p broctl -p broctl-live -p local -p WIN_EXT-5 loc > 618 bro 20 0 1217M 1160M 269M S 77.0 0.9 1h57:12 /opt/bro/bin/bro -i eth6 -U .status -p broctl -p broctl-live -p local -p WIN_EXT-2 loc > 26073 bro 20 0 3989M 3156M 269M S 76.4 2.4 2h41:38 /opt/bro/bin/bro -i eth6 -U .status -p broctl -p broctl-live -p local -p WIN_EXT-7 loc > 23938 bro 20 0 2610M 2318M 269M R 75.9 1.8 2h45:00 /opt/bro/bin/bro -i eth6 -U .status -p broctl -p broctl-live -p local -p WIN_EXT-9 loc > 27816 bro 20 0 1554M 1455M 269M S 74.8 1.1 2h37:40 /opt/bro/bin/bro -i eth6 -U .status -p broctl -p broctl-live -p local -p WIN_EXT-1 loc > 27817 bro 20 0 1532M 1435M 269M S 72.1 1.1 2h33:42 /opt/bro/bin/bro -i eth6 -U .status -p broctl -p broctl-live -p local -p WIN_EXT-4 loc > 26677 bro 20 0 1421M 1372M 269M R 70.5 1.1 2h40:42 /opt/bro/bin/bro -i eth6 -U .status -p broctl -p broctl-live -p local -p WIN_EXT-6 loc > 622 bro 20 0 1165M 1108M 269M R 70.5 0.9 1h54:24 /opt/bro/bin/bro -i eth6 -U .status -p broctl -p broctl-live -p local -p WIN_EXT-8 loc > 613 bro 20 0 1231M 1166M 269M R 69.9 0.9 1h55:45 /opt/bro/bin/bro -i eth6 -U .status -p broctl -p broctl-live -p local -p WIN_EXT-10 lo > 621 bro 20 0 1242M 1177M 269M R 68.9 0.9 1h53:02 /opt/bro/bin/bro -i eth6 -U .status -p broctl -p broctl-live -p local -p WIN_EXT-3 loc > 1175 bro 20 0 234M 178M 10776 S 2.2 0.1 2h42:07 /opt/bro/bin/bro -U .status -p broctl -p broctl-live -p local -p WIN_EXT_PXY_1 local.b > > > -Dave > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From philosnef at gmail.com Fri Oct 28 09:49:16 2016 From: philosnef at gmail.com (erik clark) Date: Fri, 28 Oct 2016 12:49:16 -0400 Subject: [Bro] Worker OOM crashes Message-ID: Justin, any chance that this is in 2.5? 
Or do you think this has been addressed somehow. (re OOM). As mentioned in last month or the month prior to that, I was having OOM-killer issues with 2.4.1 for a while that seem to have resolved themselves. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161028/d5d571cf/attachment.html From jazoff at illinois.edu Fri Oct 28 10:21:40 2016 From: jazoff at illinois.edu (Azoff, Justin S) Date: Fri, 28 Oct 2016 17:21:40 +0000 Subject: [Bro] Worker OOM crashes In-Reply-To: References: Message-ID: <1FFC95FC-F7AC-4751-B931-49048DBDBD35@illinois.edu> The crash in new is not the OOM killer, it is something else. -- - Justin Azoff > On Oct 28, 2016, at 12:49 PM, erik clark wrote: > > Justin, any chance that this is in 2.5? Or do you think this has been addressed somehow. (re OOM). As mentioned in last month or the month prior to that, I was having OOM-killer issues with 2.4.1 for a while that seem to have resolved themselves. > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From bill.de.ping at gmail.com Sun Oct 30 01:53:24 2016 From: bill.de.ping at gmail.com (william de ping) Date: Sun, 30 Oct 2016 10:53:24 +0200 Subject: [Bro] Have a cluster infrastructure read pcaps Message-ID: Hi all, I have an issue with processing multiple pcap files in bro. Due to the fact that loading all of bro's scripts and infrastructure is a time consuming task, processing each pcap file takes longer than it should. Is there any way that a bro cluster could be up and running and have it's workers process the pcap files ? btw, it needs to be a pcap file and not live capture using tcpreplay for transmitting them because of time issues (some sessions might be very long and bro will process the pcap file faster than retransmitting the same pcap file). If anyone can think of a better way to accomplish it, I am free for offers :) Thanks, Bill -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161030/0c74184d/attachment.html From valerio.click at gmx.com Sun Oct 30 04:31:04 2016 From: valerio.click at gmx.com (Valerio) Date: Sun, 30 Oct 2016 12:31:04 +0100 Subject: [Bro] Protocol Analyzer Compilation Issue types.bif Message-ID: <58217af9-2580-8bc9-f29f-422b9178970b@gmx.com> Hi all, in writing a custom protocol analyzer for BRO, I came across a strange behaviour at compilation time. It seems that the order in which you specify events.bif and types.bif files in CMakeLists.txt matters. In fact, if I have: bro_plugin_begin(BroCustomProt) bro_plugin_cc(CustomProt.cc Plugin.cc) bro_plugin_bif(types.bif) bro_plugin_bif(events.bif) [...] bro_plugin_end() and I try to compile bro I get the following error: [...] /build/src/analyzer/protocol/customprot/customprot_pac.h fatal error: types.bif.h: File or directory do not exist If instead I modify CMakeLists.txt by swapping events.bif and types.bif as in: bro_plugin_begin(BroCustomProt) bro_plugin_cc(CustomProt.cc Plugin.cc) bro_plugin_bif(events.bif) bro_plugin_bif(types.bif) [...] bro_plugin_end() the compilation succeeds. Is there any ordering issue in writing the CMakeLists.txt for a BRO Protocol Analyzer that needs to be taken into account? 
best regards, Valerio From philosnef at gmail.com Sun Oct 30 12:26:35 2016 From: philosnef at gmail.com (erik clark) Date: Sun, 30 Oct 2016 15:26:35 -0400 Subject: [Bro] Have a cluster infrastructure read pcaps Message-ID: Run mergecap against your files and run bro against the one pcap file that way, Call it done. > > Hi all, > > I have an issue with processing multiple pcap files in bro. > Due to the fact that loading all of bro's scripts and infrastructure is a > time consuming task, > processing each pcap file takes longer than it should. > > Is there any way that a bro cluster could be up and running and have it's > workers process the pcap files ? > > btw, it needs to be a pcap file and not live capture using tcpreplay for > transmitting them because of time issues (some sessions might be very long > and bro will process the pcap file faster than retransmitting the same pcap > file). > > If anyone can think of a better way to accomplish it, I am free for offers > :) > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161030/2b628d89/attachment.html From jedwards2728 at gmail.com Sun Oct 30 21:41:51 2016 From: jedwards2728 at gmail.com (John Edwards) Date: Mon, 31 Oct 2016 15:41:51 +1100 Subject: [Bro] bro logging gzip Message-ID: I have configured /opt/bro/etc/broctl.cfg LogRotationInterval = 1800 due to the default 3600 was causing too much data to be rotated and causing CPU spikes and dropped packets. Since i have changed the interval to be 1800 i am noticing in my log directory every protocol log is logging from 00:00- 00:30 and then the next log is 00:00 - 01:00 so it seems both the 30minutes that i have defined to log and also the default 1 hour log is being logged also. This seems weird to me as it wasnt doing this when i first installed bro.. What process or sub process handles the gzip component of logs from the spool? Also on another standalone worker i converted from ASCII to JSON as its going straight into our Splunk siem. Now JSON data is being logged for multiple days at a time in the current directory and not gzipping to the directory every half an hour like i have defined. The way i have got it to gzip is to deploy a config change and it gracefully shut down and start again, this causes the post processer to write to disk but while the daemon is running configured to output as json it doesnt log rotate correctly. Has anyone else run into this issue? Cheers, John -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161031/baa23e41/attachment.html From bill.de.ping at gmail.com Mon Oct 31 01:25:30 2016 From: bill.de.ping at gmail.com (william de ping) Date: Mon, 31 Oct 2016 10:25:30 +0200 Subject: [Bro] Have a cluster infrastructure read pcaps In-Reply-To: References: Message-ID: Hi Erik, I cannot use the megecap and merge my pcaps because I need to keep them separated. The reason for that is that I want to keep track and eventually store the pcap file with its relevant log files produced from bro. Therefore I want to keep the pcap file name. Any ideas ? Thanks On Sun, Oct 30, 2016 at 9:26 PM, erik clark wrote: > Run mergecap against your files and run bro against the one pcap file that > way, Call it done. > > >> >> Hi all, >> >> I have an issue with processing multiple pcap files in bro. 
>> Due to the fact that loading all of bro's scripts and infrastructure is a >> time consuming task, >> processing each pcap file takes longer than it should. >> >> Is there any way that a bro cluster could be up and running and have it's >> workers process the pcap files ? >> >> btw, it needs to be a pcap file and not live capture using tcpreplay for >> transmitting them because of time issues (some sessions might be very long >> and bro will process the pcap file faster than retransmitting the same >> pcap >> file). >> >> If anyone can think of a better way to accomplish it, I am free for offers >> :) >> >> > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161031/bd91ae98/attachment.html From philosnef at gmail.com Mon Oct 31 03:37:48 2016 From: philosnef at gmail.com (erik clark) Date: Mon, 31 Oct 2016 06:37:48 -0400 Subject: [Bro] Have a cluster infrastructure read pcaps In-Reply-To: References: Message-ID: If you cant run mergecap, you are going to have to do it as I posted elsewhere on the mailing list (few days ago?) to walk the tree (simple shell script). You will not be able to have Bro parse a bunch of pcaps continuously. You will have to call it once for every pcap you have, and deal with it that way. Aside from which, if you need to keep the bro logs separate for each pcap, even if you could process a bunch of these at once, bro is going to comingle your logs, which you don't seem to want. On Mon, Oct 31, 2016 at 4:25 AM, william de ping wrote: > Hi Erik, > > I cannot use the megecap and merge my pcaps because I need to keep them > separated. > The reason for that is that I want to keep track and eventually store the > pcap file with its relevant log files produced from bro. > Therefore I want to keep the pcap file name. > > Any ideas ? > > Thanks > > > > On Sun, Oct 30, 2016 at 9:26 PM, erik clark wrote: > >> Run mergecap against your files and run bro against the one pcap file >> that way, Call it done. >> >> >>> >>> Hi all, >>> >>> I have an issue with processing multiple pcap files in bro. >>> Due to the fact that loading all of bro's scripts and infrastructure is a >>> time consuming task, >>> processing each pcap file takes longer than it should. >>> >>> Is there any way that a bro cluster could be up and running and have it's >>> workers process the pcap files ? >>> >>> btw, it needs to be a pcap file and not live capture using tcpreplay for >>> transmitting them because of time issues (some sessions might be very >>> long >>> and bro will process the pcap file faster than retransmitting the same >>> pcap >>> file). >>> >>> If anyone can think of a better way to accomplish it, I am free for >>> offers >>> :) >>> >>> >> _______________________________________________ >> Bro mailing list >> bro at bro-ids.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161031/81e3adad/attachment.html From bill.de.ping at gmail.com Mon Oct 31 04:34:25 2016 From: bill.de.ping at gmail.com (william de ping) Date: Mon, 31 Oct 2016 13:34:25 +0200 Subject: [Bro] Have a cluster infrastructure read pcaps In-Reply-To: References: Message-ID: Hi Erik, I was hoping for some solution that will keep bro process loaded and running and feeding it with pcaps. This way I can at least skip the reoccurring loading process. On Mon, Oct 31, 2016 at 12:37 PM, erik clark wrote: > If you cant run mergecap, you are going to have to do it as I posted > elsewhere on the mailing list (few days ago?) to walk the tree (simple > shell script). You will not be able to have Bro parse a bunch of pcaps > continuously. You will have to call it once for every pcap you have, and > deal with it that way. > > Aside from which, if you need to keep the bro logs separate for each pcap, > even if you could process a bunch of these at once, bro is going to > comingle your logs, which you don't seem to want. > > On Mon, Oct 31, 2016 at 4:25 AM, william de ping > wrote: > >> Hi Erik, >> >> I cannot use the megecap and merge my pcaps because I need to keep them >> separated. >> The reason for that is that I want to keep track and eventually store the >> pcap file with its relevant log files produced from bro. >> Therefore I want to keep the pcap file name. >> >> Any ideas ? >> >> Thanks >> >> >> >> On Sun, Oct 30, 2016 at 9:26 PM, erik clark wrote: >> >>> Run mergecap against your files and run bro against the one pcap file >>> that way, Call it done. >>> >>> >>>> >>>> Hi all, >>>> >>>> I have an issue with processing multiple pcap files in bro. >>>> Due to the fact that loading all of bro's scripts and infrastructure is >>>> a >>>> time consuming task, >>>> processing each pcap file takes longer than it should. >>>> >>>> Is there any way that a bro cluster could be up and running and have >>>> it's >>>> workers process the pcap files ? >>>> >>>> btw, it needs to be a pcap file and not live capture using tcpreplay for >>>> transmitting them because of time issues (some sessions might be very >>>> long >>>> and bro will process the pcap file faster than retransmitting the same >>>> pcap >>>> file). >>>> >>>> If anyone can think of a better way to accomplish it, I am free for >>>> offers >>>> :) >>>> >>>> >>> _______________________________________________ >>> Bro mailing list >>> bro at bro-ids.org >>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161031/e903bd54/attachment-0001.html From philosnef at gmail.com Mon Oct 31 05:02:11 2016 From: philosnef at gmail.com (erik clark) Date: Mon, 31 Oct 2016 08:02:11 -0400 Subject: [Bro] bro logging gzip Message-ID: broctl cron cronjob? Pretty sure this is what controls rollover and compression. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161031/b59e71cb/attachment.html From jedwards2728 at gmail.com Mon Oct 31 05:05:03 2016 From: jedwards2728 at gmail.com (John Edwards) Date: Mon, 31 Oct 2016 23:05:03 +1100 Subject: [Bro] bro logging gzip In-Reply-To: References: Message-ID: Oh I just found that 10 minutes ago. I overlooked it as I have built two standalone systems and just forgot about cron. 
Then you emailed :) thanks for reminding me On Monday, 31 October 2016, erik clark wrote: > broctl cron cronjob? Pretty sure this is what controls rollover and > compression. > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161031/209deb54/attachment.html From philosnef at gmail.com Mon Oct 31 09:38:22 2016 From: philosnef at gmail.com (erik clark) Date: Mon, 31 Oct 2016 12:38:22 -0400 Subject: [Bro] af_packet/pf_ring equivalency Message-ID: I am using pf_ring with pfcount to do traffic analysis (pps/throughput) since it is very reliable. Does af_packet have an equivalent for this? I dont want to use broctl capstats unless there is absolutely no other option. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161031/ee8dd1ed/attachment.html From jlay at slave-tothe-box.net Mon Oct 31 12:09:14 2016 From: jlay at slave-tothe-box.net (James Lay) Date: Mon, 31 Oct 2016 13:09:14 -0600 Subject: [Bro] Quick Signature Framework question Message-ID: <4d72503651ec2dade289e447b6b6c465@localhost> Per the docs: src-port/dst-port Source and destination port, respectively. Can use a range? Say 27015-27030? Thank you. James From robin at icir.org Mon Oct 31 13:43:41 2016 From: robin at icir.org (Robin Sommer) Date: Mon, 31 Oct 2016 13:43:41 -0700 Subject: [Bro] Quick Signature Framework question In-Reply-To: <4d72503651ec2dade289e447b6b6c465@localhost> References: <4d72503651ec2dade289e447b6b6c465@localhost> Message-ID: <20161031204341.GR21981@icir.org> On Mon, Oct 31, 2016 at 13:09 -0600, James Lay wrote: > Can use a range? Say 27015-27030? No, not supported. Robin -- Robin Sommer * ICSI/LBNL * robin at icir.org * www.icir.org/robin From jlay at slave-tothe-box.net Mon Oct 31 13:45:12 2016 From: jlay at slave-tothe-box.net (James Lay) Date: Mon, 31 Oct 2016 14:45:12 -0600 Subject: [Bro] Quick Signature Framework question In-Reply-To: <20161031204341.GR21981@icir.org> References: <4d72503651ec2dade289e447b6b6c465@localhost> <20161031204341.GR21981@icir.org> Message-ID: <5e1c90a28599f8356c2f3f4070c56a7f@localhost> On 2016-10-31 14:43, Robin Sommer wrote: > On Mon, Oct 31, 2016 at 13:09 -0600, James Lay wrote: > >> Can use a range? Say 27015-27030? > > No, not supported. > > Robin Gratze :) James From michalpurzynski1 at gmail.com Mon Oct 31 16:21:49 2016 From: michalpurzynski1 at gmail.com (=?UTF-8?B?TWljaGHFgiBQdXJ6ecWEc2tp?=) Date: Tue, 1 Nov 2016 00:21:49 +0100 Subject: [Bro] af_packet/pf_ring equivalency In-Reply-To: References: Message-ID: ifpps for generic bandwidth and pps monitoring. Never, ever, use iptraf. ifpps has been written by the netsniff-ng author and it speaks for itself. bwm-ng seems to be good, haven't compared the accuracy and the perf data acquisition. For monitoring drops ethtool -S to detect drops in card's FIFO and sometimes, reasons for them. https://github.com/netoptimizer/network-testing/blob/master/bin/softnet_stat.pl to detect drops at the softirq layer Bro's stats.log to detect drops at the af_packet layer Bro capture_loss to detect drops in all above + drops before packets reach your sensor. Monitoring drops is complex and there is no single metric that tells you all. Some of this is true for pfring as well, people just don't know. I've seen sensors with 2-3% drops (in Suricata) but 40% drops in FIFO and they were like "we're doing fine". Well, so here's a bad news... 
;-) On Mon, Oct 31, 2016 at 5:38 PM, erik clark wrote: > I am using pf_ring with pfcount to do traffic analysis (pps/throughput) > since it is very reliable. > > Does af_packet have an equivalent for this? I dont want to use broctl > capstats unless there is absolutely no other option. > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20161101/c2b81097/attachment.html