From seth at corelight.com Fri Dec 1 07:00:26 2017 From: seth at corelight.com (Seth Hall) Date: Fri, 01 Dec 2017 10:00:26 -0500 Subject: [Bro] How to convert name field in smb_files.log to "readable" string? In-Reply-To: <8A3473AB-7301-4F39-8A13-CFE4DBE918CF@illinois.edu> References: <996D872D-0A7B-49D8-BD5C-A8328A4FBD4C@corelight.com> <8A3473AB-7301-4F39-8A13-CFE4DBE918CF@illinois.edu> Message-ID: <3699E331-8443-422C-BC6C-6A23246A3193@corelight.com> On 30 Nov 2017, at 13:47, Azoff, Justin S wrote: > Does the json log writer make this simpler for users? I think bro > writes out valid json for this, > so any json parser should give you proper UTF-8 strings. It writes out valid JSON but strings aren't handled as well as they could. It's why I was saying that non-ascii bytes are escaped according to the json spec, but that has other problems. .Seth -- Seth Hall * Corelight, Inc * www.corelight.com From vikrambasu059 at gmail.com Sat Dec 2 03:11:11 2017 From: vikrambasu059 at gmail.com (Vikram Basu) Date: Sat, 2 Dec 2017 16:41:11 +0530 Subject: [Bro] Big Packet loss and PacketFilter::Dropped_Packets Message-ID: <5a228a50.416e620a.12cc5.b437@mx.google.com> So I am running Bro 2.5.2 in cluster mode using pf_ring and using it to monitor a SPAN port interface. I am running 8 workers and each of them are pinned to a CPU. When I am performance testing by sending upto 1 gbps of network traffic having a random mix of HTTP, FTP and SMTP data I find that I am getting massive packet loss notices. {"ts":1512212763.169748,"note":"PacketFilter::Dropped_Packets","msg":"4135277 packets dropped after filtering, 4371549 received, 236272 on link","peer_descr":"worker-1-5","actions":["Notice::ACTION_LOG"],"suppress_for":3600.0,"dropped":false} {"ts":1512212771.177625,"note":"PacketFilter::Dropped_Packets","msg":"4827328 packets dropped after filtering, 5073087 received, 245759 on link","peer_descr":"worker-1-7","actions":["Notice::ACTION_LOG"],"suppress_for":3600.0,"dropped":false} {"ts":1512212773.214689,"note":"PacketFilter::Dropped_Packets","msg":"4767851 packets dropped after filtering, 5028737 received, 260886 on link","peer_descr":"worker-1-6","actions":["Notice::ACTION_LOG"],"suppress_for":3600.0,"dropped":false} {"ts":1512212783.667576,"note":"PacketFilter::Dropped_Packets","msg":"5563389 packets dropped after filtering, 5818919 received, 255530 on link","peer_descr":"worker-1-3","actions":["Notice::ACTION_LOG"],"suppress_for":3600.0,"dropped":false} I am running Bro on a 8 core 8 GB machine with an SSD and not sure why I am getting such high packet loss. Here is my BroControl netstats and they are also not encouraging. [BroControl] > netstats worker-1-1: 1512212665.151426 recvd=297260 dropped=7862632 link=297260 worker-1-2: 1512212659.639980 recvd=251046 dropped=7934351 link=251046 worker-1-3: 1512212652.110004 recvd=261434 dropped=7896026 link=261434 worker-1-4: 1512212662.089539 recvd=291058 dropped=7887963 link=291058 worker-1-5: 1512212666.662180 recvd=246944 dropped=7934732 link=246944 worker-1-6: 1512212661.373981 recvd=254560 dropped=7910802 link=254560 worker-1-7: 1512212657.278461 recvd=255041 dropped=7922435 link=255041 worker-1-8: 1512212671.643251 recvd=214359 dropped=7966526 link=214359 Any help or advise would be greatly appreciated. Regards, Vikram Basu -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20171202/00769141/attachment.html From felipe.tavares at opencloudfactory.com Sat Dec 2 13:00:18 2017 From: felipe.tavares at opencloudfactory.com (Felipe Tavares) Date: Sat, 2 Dec 2017 21:00:18 +0000 Subject: [Bro] Fwd: Big Packet loss and PacketFilter::Dropped_Packets References: <5a228a50.416e620a.12cc5.b437@mx.google.com> Message-ID: Hello there Vikram! We are running the same Bro 2.5.2 with pf_ring and we also had the pinned CPUs and had a lot of packet drops. After a couple tests, we managed to get the packet drops to 0 by unpinning the CPU procs, letting the OS do the dirty job. We have being running like that for a couple days now, without drops. Hope you can get it working! Regards, Felipe Tavares OpenCloud Factory From: "Vikram Basu" > Date: 2 Dec 2017 11:26 am Subject: [Bro] Big Packet loss and PacketFilter::Dropped_Packets To: "bro at bro.org" > Cc: So I am running Bro 2.5.2 in cluster mode using pf_ring and using it to monitor a SPAN port interface. I am running 8 workers and each of them are pinned to a CPU. When I am performance testing by sending upto 1 gbps of network traffic having a random mix of HTTP, FTP and SMTP data I find that I am getting massive packet loss notices. {"ts":1512212763.169748,"note":"PacketFilter::Dropped_Packets","msg":"4135277 packets dropped after filtering, 4371549 received, 236272 on link","peer_descr":"worker-1-5","actions":["Notice::ACTION_LOG"],"suppress_for":3600.0,"dropped":false} {"ts":1512212771.177625,"note":"PacketFilter::Dropped_Packets","msg":"4827328 packets dropped after filtering, 5073087 received, 245759 on link","peer_descr":"worker-1-7","actions":["Notice::ACTION_LOG"],"suppress_for":3600.0,"dropped":false} {"ts":1512212773.214689,"note":"PacketFilter::Dropped_Packets","msg":"4767851 packets dropped after filtering, 5028737 received, 260886 on link","peer_descr":"worker-1-6","actions":["Notice::ACTION_LOG"],"suppress_for":3600.0,"dropped":false} {"ts":1512212783.667576,"note":"PacketFilter::Dropped_Packets","msg":"5563389 packets dropped after filtering, 5818919 received, 255530 on link","peer_descr":"worker-1-3","actions":["Notice::ACTION_LOG"],"suppress_for":3600.0,"dropped":false} I am running Bro on a 8 core 8 GB machine with an SSD and not sure why I am getting such high packet loss. Here is my BroControl netstats and they are also not encouraging. [BroControl] > netstats worker-1-1: 1512212665.151426 recvd=297260 dropped=7862632 link=297260 worker-1-2: 1512212659.639980 recvd=251046 dropped=7934351 link=251046 worker-1-3: 1512212652.110004 recvd=261434 dropped=7896026 link=261434 worker-1-4: 1512212662.089539 recvd=291058 dropped=7887963 link=291058 worker-1-5: 1512212666.662180 recvd=246944 dropped=7934732 link=246944 worker-1-6: 1512212661.373981 recvd=254560 dropped=7910802 link=254560 worker-1-7: 1512212657.278461 recvd=255041 dropped=7922435 link=255041 worker-1-8: 1512212671.643251 recvd=214359 dropped=7966526 link=214359 Any help or advise would be greatly appreciated. Regards, Vikram Basu _______________________________________________ Bro mailing list bro at bro-ids.org http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20171202/bed45525/attachment.html From rich-reco at hotmail.com Sat Dec 2 22:14:39 2017 From: rich-reco at hotmail.com (Rich Perry) Date: Sun, 3 Dec 2017 06:14:39 +0000 Subject: [Bro] (no subject) Message-ID: Hello and thank you for your assistance. As the subject states, I'm not getting email notifications to this email address (rich-reco at hotmail.com). I've gone to /etc/bro/broctl.cfg and uncommented and added: MailTo = rich-reco at hotmail.com sendmail = /usr/sbin/sendmail I also uncommented and added LogRotationInterval = 60 to test it. I ran into issues with sendmail so I commented it out so now it currently looks like: MailTo = rich-reco at hotmail.com #sendmail = /usr/sbin/sendmail bro is logging them in /var/log/bro/[today's date] but i'm not receiving anything. As far as the local.bro file goes, I've only added: hook Notice::policy(n: Notice::Info) { add n$actions[Notice::ACTION_EMAIL]; } which I believe is what actually emails the notices. Is this correct? If this is not correct, what is the correct code to add to receive ALL alerts. I've looked at the documentation but I did not find a function to send ALL notices or couldn't understand what I saw. Thank you! -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20171203/dfe59cc5/attachment.html From vikrambasu059 at gmail.com Sun Dec 3 23:19:24 2017 From: vikrambasu059 at gmail.com (Vikram Basu) Date: Mon, 4 Dec 2017 12:49:24 +0530 Subject: [Bro] Re : Fwd: Big Packet loss and PacketFilter::Dropped_Packets In-Reply-To: References: Message-ID: <5a24f6fd.8f326b0a.382bc.38c3@mx.google.com> I tried doing that but it did not seem to be all that helpful for me. What kind of network bandwidth are you handling and also how many CPU cores and amount of RAM have you given to Bro I wonder ? Regards, Vikram Basu Seceon Inc. Hello there Vikram! We are running the same Bro 2.5.2 with pf_ring and we also had the pinned CPUs and had a lot of packet drops. After a couple tests, we managed to get the packet drops to 0 by unpinning the CPU procs, letting the OS do the dirty job. We have being running like that for a couple days now, without drops. Hope you can get it working! Regards, Felipe Tavares OpenCloud Factory -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20171204/0c733ad4/attachment.html From vikrambasu059 at gmail.com Sun Dec 3 23:34:53 2017 From: vikrambasu059 at gmail.com (Vikram Basu) Date: Mon, 4 Dec 2017 13:04:53 +0530 Subject: [Bro] Re : Fwd: Big Packet loss and PacketFilter::Dropped_Packets Message-ID: <5a24fa9e.829c6b0a.c073b.3819@mx.google.com> I am using the basic PF_Ring edition to monitor the interfaces. Is that what is causing the issues ? What kind of throughput can the basic PF_Ring (non ZC) handle ? If I am looking at network traffic above 100 mbps will I definitely need the PF_Ring ZC edition ? Regards, Vikram Basu Seceon Inc. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20171204/1e201389/attachment.html From craigp at iup.edu Mon Dec 4 04:42:28 2017 From: craigp at iup.edu (Craig Pluchinsky) Date: Mon, 4 Dec 2017 07:42:28 -0500 (EST) Subject: [Bro] Re : Fwd: Big Packet loss and PacketFilter::Dropped_Packets In-Reply-To: <5a24fa9e.829c6b0a.c073b.3819@mx.google.com> References: <5a24fa9e.829c6b0a.c073b.3819@mx.google.com> Message-ID: Might want to take a look at this. https://www.bro.org/documentation/faq.html#how-can-i-reduce-the-amount-of-captureloss-or-dropped-packets-notices We had issues with dropped packets until I disabled offloading...etc on the monitoring nic. ------------------------------- Craig Pluchinsky IT Services Indiana University of Pennsylvania 724-357-3327 On Mon, 4 Dec 2017, Vikram Basu wrote: > > I am using the basic PF_Ring edition to monitor the interfaces. Is that what is causing the issues ? What kind of throughput can the > basic PF_Ring (non ZC) handle ? If I am looking at network traffic above 100 mbps will I definitely need the PF_Ring ZC edition ? > > ? > > Regards, > > Vikram Basu > Seceon Inc. > > ? > > > From fatema.bannatwala at gmail.com Mon Dec 4 10:44:47 2017 From: fatema.bannatwala at gmail.com (fatema bannatwala) Date: Mon, 4 Dec 2017 13:44:47 -0500 Subject: [Bro] DNS Unmatched msg/reply Message-ID: Hi All, I was looking into the bro weird log file, and finally decided to spare some time tuning down the dns_unmatched_* messages in weird.log, as we usually get *many* to them. So to begin with, first I looked at the weird.log, grep-ed the very first entry for dns_unmatched_msg, and then grep-ed everything in *.log corresponding to that uid: $ less *.log | grep "CgOnko1s28TKjoaB07" 1512410399.813927 CgOnko1s28TKjoaB07 34.228.158.69 41438 128.175.13.16 53 udp dns 0.003451 42 2638 SF F T 0 Dd 1 70 22694 (empty) worker-2-18 1512410399.813927 CgOnko1s28TKjoaB07 34.228.158.69 41438 128.175.13.16 53 udp 22592 0.003411 dns1.udel.edu 1 C_INTERNET 1 A 0 NOERROR T F F F 1 128.175.13.16 86400.000000 F 1512410399.817378 CgOnko1s28TKjoaB07 34.228.158.69 41438 128.175.13.16 53 udp 22592 - dns1.udel.edu - - - - 0 NOERROR T F F F 0 128.175.13.16 86400.000000 F 1512410399.817338 CgOnko1s28TKjoaB07 34.228.158.69 41438 128.175.13.16 53 DNS_RR_unknown_type 46 F worker-2-18 1512410399.817378 CgOnko1s28TKjoaB07 34.228.158.69 41438 128.175.13.16 53 dns_unmatched_reply - F worker-2-18 1512410409.813946 CgOnko1s28TKjoaB07 34.228.158.69 41438 128.175.13.16 53 dns_unmatched_msg - F worker-2-18 Looks like Bro seeing proper connection (SF) in conn.log enrty, and dns.log logging the query and response, the second log entry above. I am unsure of the third entry above, corresponding to dns.log. Any reason, weird.log would log a dns_unmatched* log for this connection? P.S: we have disabled the checksum offloading on the NIC. Any thoughts? Thanks! Fatema. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20171204/a61ccf41/attachment.html From seth at corelight.com Tue Dec 5 05:56:23 2017 From: seth at corelight.com (Seth Hall) Date: Tue, 05 Dec 2017 08:56:23 -0500 Subject: [Bro] DNS Unmatched msg/reply In-Reply-To: References: Message-ID: <3E77300D-1638-4F43-89D8-AA361A8A8576@corelight.com> It looks like you got two replies from a single query. This tends to happen frequently in DNS traffic unfortunately and I think it's correct to log the second reply. 
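If they are mostly noise for you, silencing them from your site policy through the weird actions table should work in the meantime; untested sketch, assuming the stock base weird.bro handling:

redef Weird::actions += {
	# Drop these two weirds entirely instead of logging them to weird.log.
	["dns_unmatched_msg"] = Weird::ACTION_IGNORE,
	["dns_unmatched_reply"] = Weird::ACTION_IGNORE
};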
The main problem that I've seen on my networks is the weirds that are being generated. I'm planning to get rid of dns_unmatched_msg and dns_unmatched_reply for the 2.6 release. They don't actually tell you much and they both indicate far too common situations to be useful. .Seth On 4 Dec 2017, at 13:44, fatema bannatwala wrote: > Hi All, > > I was looking into the bro weird log file, and finally decided to > spare > some time > tuning down the dns_unmatched_* messages in weird.log, as we usually > get > *many* > to them. > > So to begin with, first I looked at the weird.log, grep-ed the very > first > entry for dns_unmatched_msg, > and then grep-ed everything in *.log corresponding to that uid: > > $ less *.log | grep "CgOnko1s28TKjoaB07" > 1512410399.813927 CgOnko1s28TKjoaB07 34.228.158.69 41438 > 128.175.13.16 53 udp dns 0.003451 42 2638 > SF F T 0 Dd 1 70 22694 (empty) > worker-2-18 > 1512410399.813927 CgOnko1s28TKjoaB07 34.228.158.69 41438 > 128.175.13.16 53 udp 22592 0.003411 dns1.udel.edu > 1 > C_INTERNET 1 A 0 NOERROR T F > F F 1 128.175.13.16 86400.000000 F > 1512410399.817378 CgOnko1s28TKjoaB07 34.228.158.69 41438 > 128.175.13.16 53 udp 22592 - dns1.udel.edu - > - > - - 0 NOERROR T F F F 0 > 128.175.13.16 86400.000000 F > 1512410399.817338 CgOnko1s28TKjoaB07 34.228.158.69 41438 > 128.175.13.16 53 DNS_RR_unknown_type 46 F > worker-2-18 > 1512410399.817378 CgOnko1s28TKjoaB07 34.228.158.69 41438 > 128.175.13.16 53 dns_unmatched_reply - F > worker-2-18 > 1512410409.813946 CgOnko1s28TKjoaB07 34.228.158.69 41438 > 128.175.13.16 53 dns_unmatched_msg - F > worker-2-18 > > Looks like Bro seeing proper connection (SF) in conn.log enrty, > and dns.log logging the query and response, the second log entry > above. > I am unsure of the third entry above, corresponding to dns.log. > Any reason, weird.log would log a dns_unmatched* log for this > connection? > > P.S: we have disabled the checksum offloading on the NIC. > > Any thoughts? > > Thanks! > Fatema. > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -- Seth Hall * Corelight, Inc * www.corelight.com -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20171205/8c9ad832/attachment.html From fatema.bannatwala at gmail.com Tue Dec 5 09:23:10 2017 From: fatema.bannatwala at gmail.com (fatema bannatwala) Date: Tue, 5 Dec 2017 12:23:10 -0500 Subject: [Bro] DNS Unmatched msg/reply In-Reply-To: <3E77300D-1638-4F43-89D8-AA361A8A8576@corelight.com> References: <3E77300D-1638-4F43-89D8-AA361A8A8576@corelight.com> Message-ID: Ah, that makes sense. Thanks Seth! We get lot of weirds too corresponding to dns_unmatched messages everyday. Good to know that they would be going soon in next major release of Bro :) Thanks! Fatema. On Tue, Dec 5, 2017 at 8:56 AM, Seth Hall wrote: > It looks like you got two replies from a single query. This tends to > happen frequently in DNS traffic unfortunately and I think it's correct to > log the second reply. The main problem that I've seen on my networks is the > weirds that are being generated. I'm planning to get rid of > dns_unmatched_msg and dns_unmatched_reply for the 2.6 release. They don't > actually tell you much and they both indicate far too common situations to > be useful. 
> > .Seth > > On 4 Dec 2017, at 13:44, fatema bannatwala wrote: > > Hi All, > > I was looking into the bro weird log file, and finally decided to spare > some time > tuning down the dns_unmatched_* messages in weird.log, as we usually get > *many* > to them. > > So to begin with, first I looked at the weird.log, grep-ed the very first > entry for dns_unmatched_msg, > and then grep-ed everything in *.log corresponding to that uid: > > $ less *.log | grep "CgOnko1s28TKjoaB07" > 1512410399.813927 CgOnko1s28TKjoaB07 34.228.158.69 41438 > 128.175.13.16 53 udp dns 0.003451 42 2638 > SF F T 0 Dd 1 70 22694 (empty) > worker-2-18 > 1512410399.813927 CgOnko1s28TKjoaB07 34.228.158.69 41438 > 128.175.13.16 53 udp 22592 0.003411 dns1.udel.edu > 1 C_INTERNET 1 A 0 NOERROR T F > F F 1 128.175.13.16 86400.000000 F > 1512410399.817378 CgOnko1s28TKjoaB07 34.228.158.69 41438 > 128.175.13.16 53 udp 22592 - dns1.udel.edu - > - - - 0 NOERROR T F F F 0 > 128.175.13.16 86400.000000 F > 1512410399.817338 CgOnko1s28TKjoaB07 34.228.158.69 41438 > 128.175.13.16 53 DNS_RR_unknown_type 46 F worker-2-18 > 1512410399.817378 CgOnko1s28TKjoaB07 34.228.158.69 41438 > 128.175.13.16 53 dns_unmatched_reply - F worker-2-18 > 1512410409.813946 CgOnko1s28TKjoaB07 34.228.158.69 41438 > 128.175.13.16 53 dns_unmatched_msg - F worker-2-18 > > Looks like Bro seeing proper connection (SF) in conn.log enrty, > and dns.log logging the query and response, the second log entry above. > I am unsure of the third entry above, corresponding to dns.log. > Any reason, weird.log would log a dns_unmatched* log for this connection? > > P.S: we have disabled the checksum offloading on the NIC. > > Any thoughts? > > Thanks! > Fatema. > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > -- > Seth Hall * Corelight, Inc * www.corelight.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20171205/6529dddf/attachment.html From sunari1031 at gmail.com Tue Dec 5 23:36:12 2017 From: sunari1031 at gmail.com (=?UTF-8?B?6rmA7IiY66Co?=) Date: Wed, 6 Dec 2017 16:36:12 +0900 Subject: [Bro] How to convert name field in smb_files.log to "readable" string? In-Reply-To: <3699E331-8443-422C-BC6C-6A23246A3193@corelight.com> References: <996D872D-0A7B-49D8-BD5C-A8328A4FBD4C@corelight.com> <8A3473AB-7301-4F39-8A13-CFE4DBE918CF@illinois.edu> <3699E331-8443-422C-BC6C-6A23246A3193@corelight.com> Message-ID: > On Nov 30, 2017, at 12:18 PM, Seth Hall wrote: > > I've been thinking about how to handle this for a while. The data that > is being written into the log is technically already UTF-8, it's just > that non-ascii bytes are escaped. > > I think we can deal with this by making a switch for the logs to make > them "UTF-8". It would incur a bit of overhead because each string > would have to be scanned for valid UTF-8 characters before being written > and then only non-valid bytes would be escaped. > > .Seth I see.. So, I need to write non-ascii bytes that are escaped to utf-8. I want to make the logs to be readable even if it would make a bit overhead. Is there some sample bro script to do it? It's hard to do it because I'm newbie about bro script. Thanks! 2017-12-02 0:00 GMT+09:00 Seth Hall : > > > On 30 Nov 2017, at 13:47, Azoff, Justin S wrote: > > Does the json log writer make this simpler for users? 
I think bro writes >> out valid json for this, >> so any json parser should give you proper UTF-8 strings. >> > > It writes out valid JSON but strings aren't handled as well as they > could. It's why I was saying that non-ascii bytes are escaped according to > the json spec, but that has other problems. > > .Seth > > > -- > Seth Hall * Corelight, Inc * www.corelight.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20171206/7aa06ac0/attachment.html From mike at swedishmike.org Wed Dec 6 00:31:54 2017 From: mike at swedishmike.org (Mike Eriksson) Date: Wed, 06 Dec 2017 08:31:54 +0000 Subject: [Bro] Different delimiter for archived log files? Message-ID: All, I've been looking through the documentation and config files but haven't found anything relating to this - chances are still big that I've missed it so please let me know if I have. At the moment log files that gets rotated out/archived looks like this: conn.17:00:00-18:00:00.log.gz What causes trouble on certain operating systems here is the : (colon) character. For example under Windows it is an invalid character for a file name. If you try to copy the file off your Bro server, or some other off-host storage that supports the file name, onto a Windows host it fails. Sadly there's occasions when I need to get these files across to a Windows host which means that I have to manually rename the files before I copy them across. Is there any configuration setting where this could be changed or would this be a feature request for a future version? Thanks in advance, Mike -- twitter: https://twitter.com/swedishmike github: http://github.com/swedishmike -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20171206/f7163bcb/attachment.html From bill.de.ping at gmail.com Wed Dec 6 08:23:19 2017 From: bill.de.ping at gmail.com (william de ping) Date: Wed, 6 Dec 2017 18:23:19 +0200 Subject: [Bro] - MTU and defragmentation Message-ID: Hello, I wonder what happens if my mtu is set to 1500 (default) and a jumbo TCP or UDP packet is sent to Bro's monitored interface. Will Bro parse only the packet containing the IP header ? >From my tests I see that Bro does not defragment the packets by default, any flag I should use for that ? Thanks B -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20171206/e57bfb61/attachment.html From jsiwek at corelight.com Wed Dec 6 08:34:09 2017 From: jsiwek at corelight.com (Jon Siwek) Date: Wed, 6 Dec 2017 10:34:09 -0600 Subject: [Bro] Different delimiter for archived log files? In-Reply-To: References: Message-ID: On Wed, Dec 6, 2017 at 2:31 AM, Mike Eriksson wrote: > At the moment log files that gets rotated out/archived looks like this: > > conn.17:00:00-18:00:00.log.gz > > ... > > Is there any configuration setting where this could be changed or would this > be a feature request for a future version? There's a bit in the broctl faq about changing format of archived filenames that you can try out: https://www.bro.org/sphinx/components/broctl/README.html#questions-and-answers Basically says to set the MakeArchiveName option in your broctl.cfg to point at a custom script which outputs your desired format and you can use the existing make-archive-name script as an example. 
- Jon From mike at swedishmike.org Wed Dec 6 09:24:53 2017 From: mike at swedishmike.org (Mike Eriksson) Date: Wed, 06 Dec 2017 17:24:53 +0000 Subject: [Bro] Different delimiter for archived log files? In-Reply-To: References: Message-ID: Jon, Sweet - many thanks for that. I'll give that a go. Just shows how well I can read/search for info. ;-) Cheers, Mike On Wed, Dec 6, 2017 at 4:34 PM Jon Siwek wrote: > On Wed, Dec 6, 2017 at 2:31 AM, Mike Eriksson > wrote: > > > At the moment log files that gets rotated out/archived looks like this: > > > > conn.17:00:00-18:00:00.log.gz > > > > ... > > > > Is there any configuration setting where this could be changed or would > this > > be a feature request for a future version? > > There's a bit in the broctl faq about changing format of archived > filenames that you can try out: > > > https://www.bro.org/sphinx/components/broctl/README.html#questions-and-answers > > Basically says to set the MakeArchiveName option in your broctl.cfg to > point at a custom script which outputs your desired format and you can > use the existing make-archive-name script as an example. > > - Jon > -- twitter: https://twitter.com/swedishmike github: http://github.com/swedishmike -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20171206/0f99fd7a/attachment.html From jsiwek at corelight.com Wed Dec 6 10:43:46 2017 From: jsiwek at corelight.com (Jon Siwek) Date: Wed, 6 Dec 2017 12:43:46 -0600 Subject: [Bro] - MTU and defragmentation In-Reply-To: References: Message-ID: On Wed, Dec 6, 2017 at 10:23 AM, william de ping wrote: > I wonder what happens if my mtu is set to 1500 (default) and a jumbo TCP or > UDP packet is sent to Bro's monitored interface. > > Will Bro parse only the packet containing the IP header ? > From my tests I see that Bro does not defragment the packets by default, any > flag I should use for that ? I'd actually expect Bro to reassemble IPv4/IPv6 fragments by default, providing that it is actually seeing the fragments from the interface in their entirety. Anything relevant in weird.log? e.g. there could be other problems going on that prevent fully processing the fragments, like bad checksums (maybe from nic offloading), or incomplete captures (from too low a snaplen setting). - Jon From Travis.Debary at pharmerica.com Thu Dec 7 10:37:32 2017 From: Travis.Debary at pharmerica.com (Debary, Travis) Date: Thu, 7 Dec 2017 18:37:32 +0000 Subject: [Bro] bro logs stopped Message-ID: Good afternoon all, Hello all, I'm new to bro and am having to learn and manage an existing implementation, which means I have to make sense of everything as I troubleshoot. If this is not the best place to ask for help, I apologize and please feel free to correct me. I'm having an issue with a sensor that collects bro logs and then sends them to Splunk. On 11/17, it stopped sending logs and I've spent the last couple of weeks trying to figure this out. When I go to /nsm/bro/logs/ and /current, there are no log files at all in the directories. On another sensor that is working, when I go to these folders, I see log files that are named after the date (e.g. 2017-12-07). When I try to run broctl on the nonworking sensor, it gives me the below error: "Error: must run broctl on same machine as the standalone node. 
The standalone node has IP address 127.0.0.1 and this machine has IP addresses: 172.27.x.x (x are placeholders), fe80::1e98:ecff:fe15:d098" I get that same error whenever I try to do anything with broctl, even stop it. Since it's giving the loopback address, I'm not sure why it recognizes it as a different machine. When I go to the node.cfg file in /opt/bro, it displays this: [bro] type=standalone host=localhost interface=eth0 However, when I look at that file on the other sensor that is working, it displays: [manager] type=manager host=localhost [proxy] type=proxy host=localhost [nsmsen04-eth1] type=worker host=localhost interface=eth1 lb_method=pf_ring lb_procs=1 Just an FYI, the working sensor also sends logs to SecurityOnion so not sure if that has anything to do with the difference in node.cfg. The nonworking sensor only sends logs to Splunk, which I have already verified the Splunk Forwarder is working properly. Is there anything I am missing that would fix this? I'm probably not giving you everything you need to help but please let me know what else I can provide that would assist. * Travis Confidentiality Notice: This email and its attachments may contain privileged and confidential information and/or protected health information (PHI) intended solely for the recipient(s) named above. If you are not the recipient, or the employee or agent responsible for delivering this message to the intended recipient, you are hereby notified that any review, dissemination, distribution, printing or copying of this email message and/or any attachments is strictly prohibited. If you have received this transmission in error, please notify the sender immediately and permanently delete this email and any attachments. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20171207/3d8284e0/attachment.html From dnthayer at illinois.edu Thu Dec 7 11:48:30 2017 From: dnthayer at illinois.edu (Daniel Thayer) Date: Thu, 7 Dec 2017 13:48:30 -0600 Subject: [Bro] bro logs stopped In-Reply-To: References: Message-ID: Which OS are you using? Which version of Bro? On 12/7/17 12:37 PM, Debary, Travis wrote: > Good afternoon all, > > Hello all, I'm new to bro and am having to learn and manage an existing > implementation, which means I have to make sense of everything as I > troubleshoot. If this is not the best place to ask for help, I apologize > and please feel free to correct me. > > I'm having an issue with a sensor that collects bro logs and then sends > them to Splunk. On 11/17, it stopped sending logs and I've spent the > last couple of weeks trying to figure this out. > > When I go to /nsm/bro/logs/ and /current, there are no log files at all > in the directories. On another sensor that is working, when I go to > these folders, I see log files that are named after the date (e.g. > 2017-12-07). > > When I try to run broctl on the nonworking sensor, it gives me the below > error: > > "Error: must run broctl on same machine as the standalone node. The > standalone node has IP address 127.0.0.1 and this machine has IP > addresses: 172.27.x.x (x are placeholders), fe80::1e98:ecff:fe15:d098" > > I get that same error whenever I try to do anything with broctl, even > stop it. Since it's giving the loopback address, I'm not sure why it > recognizes it as a different machine.
> > When I go to the node.cfg file in /opt/bro, it displays this: > [bro] > type=standalone > host=localhost > interface=eth0 > > However, when I look at that file on the other sensor that is working, > it displays: > [manager] > type=manager > host=localhost > > [proxy] > type=proxy > host=localhost > > [nsmsen04-eth1] > type=worker > host=localhost > interface=eth1 > lb_method=pf_ring > lb_procs=1 > > Just an FYI, the working sensor also sends logs to SecurityOnion so not > sure if that has anything to do with the difference in node.cfg. The > nonworking sensor only sends logs to Splunk, which I have already > verified the Splunk Forwarder is working properly. > > Is there anything I am missing that would fix this? I'm probably not > giving you everything you need to help but please let me know what else > I can provide that would assist. > > * Travis From haoscs at gmail.com Sun Dec 10 22:45:02 2017 From: haoscs at gmail.com (Shuai Hao) Date: Mon, 11 Dec 2017 01:45:02 -0500 Subject: [Bro] Dealing with tcp-based Unknown Protocols Message-ID: Hi All, I wonder that does anyone have experience to tackle the "unknown protocol" when DPD cannot recognize the protocol and/or all existing analyzers fail. At this time we assume that all protocols are tcp-based. According to one of previous discussions, http://mailman.icsi.berkeley.edu/pipermail/bro/2014-July/007222.html we first attempt to create a signature which matches everything. Such signature will eventually capture ALL connections even when there is available analyzer can process the stream (e.g., HTTP). However, we want the analyzer for unknown protocol only be triggered when no existing analyzer can be used. (1) One possible way we are considering is that if there is a mechanism can control the process of analyzers. For example, when one of analyzers is successful, it sends a signal to the Unknown-Protocol-Analyzer to terminate it. (2) Another way is that we set a global variable which captures and indicates the failed/successful analyzers after the DPD; then if all analyzers fail, the Unknown-Protocol-Analyzer is triggered. In addition, according to this message http://mailman.icsi.berkeley.edu/pipermail/bro/2007-December/002593.html, Robin mentioned the method DPM::BuildInitialAnalyzerTree() in DPM.{h, cc} (Manage::BuildInitialAnalyzerTree() in current distribution). With the source code, https://github.com/bro/bro/blob/master/src/analyzer/Manager.cc it seems that we can initiate an analyzer here when seeing a connection which is non-TCP, non-UDP, and non-ICMP. However, if we assume all TCP-based protocols, where we should look at if we have to touch the source code? We haven't investigated the PIA implementation; is this part related and worth to explore? As such, anyone have ideas how to deal with such a case? Thanks for any comments~ -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20171211/7a7a77b2/attachment.html From jsiwek at corelight.com Mon Dec 11 08:40:39 2017 From: jsiwek at corelight.com (Jon Siwek) Date: Mon, 11 Dec 2017 10:40:39 -0600 Subject: [Bro] Dealing with tcp-based Unknown Protocols In-Reply-To: References: Message-ID: On Mon, Dec 11, 2017 at 12:45 AM, Shuai Hao wrote: > In addition, according to this message > http://mailman.icsi.berkeley.edu/pipermail/bro/2007-December/002593.html, > Robin mentioned the method DPM::BuildInitialAnalyzerTree() in DPM.{h, cc} > (Manage::BuildInitialAnalyzerTree() in current distribution). 
With the > source code, > https://github.com/bro/bro/blob/master/src/analyzer/Manager.cc > it seems that we can initiate an analyzer here when seeing a connection > which is non-TCP, non-UDP, and non-ICMP. However, if we assume all TCP-based > protocols, where we should look at if we have to touch the source code? The first thing that comes to my mind would still be trying to unconditionally add your analyzer in analyzer::Manager::BuildInitialAnalyzerTree. If it's just TCP that you need, that's fine, see the other tcp->AddChildAnalyzer() calls there for ideas. Then, the other part of your problem would be disabling that analyzer when any other protocol analyzer is confirmed. An idea would be to periodically (e.g. every DeliverPacket) walk the analyzer tree (e.g. analyzer::Parent() and analyzer::GetChildren()) and check whether analyzer::ProtocolConfirmed() is true for anything. You could also maybe just handle this in scripts via Analyzer::protocol_confirmation event and Analyzer::disable_analyzer() functions. The PIA implementation is related to protocol matching, but not sure whether you'd need to modify anything there to get what you want. Hope that helps. - Jon From jan.grashoefer at gmail.com Mon Dec 11 09:24:43 2017 From: jan.grashoefer at gmail.com (=?UTF-8?Q?Jan_Grash=c3=b6fer?=) Date: Mon, 11 Dec 2017 18:24:43 +0100 Subject: [Bro] Dealing with tcp-based Unknown Protocols In-Reply-To: References: Message-ID: <7c69e994-89f3-331d-8683-0ae2f615fb59@gmail.com> On 11/12/17 07:45, Shuai Hao wrote: > I wonder that does anyone have experience to tackle the "unknown protocol" > when DPD cannot recognize the protocol and/or all existing analyzers fail. Maybe the "Analyzers of Last Resort" Leo and Aaron talked about in their BroCon'17 Lightning-Talks is what you are looking for: https://www.bro.org/brocon2017/slides/2017_lightning_talk.pdf Jan From promero at cenic.org Mon Dec 11 15:59:26 2017 From: promero at cenic.org (Philip Romero) Date: Mon, 11 Dec 2017 15:59:26 -0800 Subject: [Bro] Elastic/Filebeat and Bro Logs Inquiry Message-ID: <6dccf964-4831-49c2-3003-b7ea52d2800f@cenic.org> All, I'm in the process of getting Bro logs fed into a new elasticsearch cluster we're building out and had what I am hoping is a quick and easy question someone could provide input on. My elasticsearch engineering team stood up a logstash server to ingest data input from our various sources, of which Bro is one. I came across the below URL at the elastic site, which give some direction on an option for getting bro log data ingested. It was my intention to have filebeat loaded on our Bro serer and have the "current" log folder monitored for new events, as suggested in the elastic write-up. My elasticsearch engineering team is a little concerned about the hourly log rotation process performed in that folder by bro and how it may impact "live" monitored files. https://www.elastic.co/blog/bro-ids-elastic-stack Is there a concern with this way of monitoring bro events? Is there a "better" way to do this to ensure we don't miss events during the hourly log rotation process? Were a bit new to this so any pointers would be appreciated. Thanks. -- Philip Romero, CISSP, CISA Sr. 
Information Security Analyst CENIC promero at cenic.org Phone: (714) 220-3430 Mobile: (562) 237-9290 From mus3 at lehigh.edu Tue Dec 12 05:13:55 2017 From: mus3 at lehigh.edu (Munroe Sollog) Date: Tue, 12 Dec 2017 08:13:55 -0500 Subject: [Bro] Elastic/Filebeat and Bro Logs Inquiry In-Reply-To: <6dccf964-4831-49c2-3003-b7ea52d2800f@cenic.org> References: <6dccf964-4831-49c2-3003-b7ea52d2800f@cenic.org> Message-ID: Take a look at NSQ. Both Bro and Logstash support using it to transport messages. On Mon, Dec 11, 2017 at 6:59 PM, Philip Romero wrote: > All, > > I'm in the process of getting Bro logs fed into a new elasticsearch > cluster we're building out and had what I am hoping is a quick and easy > question someone could provide input on. My elasticsearch engineering > team stood up a logstash server to ingest data input from our various > sources, of which Bro is one. I came across the below URL at the elastic > site, which give some direction on an option for getting bro log data > ingested. It was my intention to have filebeat loaded on our Bro serer > and have the "current" log folder monitored for new events, as suggested > in the elastic write-up. My elasticsearch engineering team is a little > concerned about the hourly log rotation process performed in that folder > by bro and how it may impact "live" monitored files. > > https://www.elastic.co/blog/bro-ids-elastic-stack > > Is there a concern with this way of monitoring bro events? Is there a > "better" way to do this to ensure we don't miss events during the hourly > log rotation process? Were a bit new to this so any pointers would be > appreciated. Thanks. > > -- > Philip Romero, CISSP, CISA > Sr. Information Security Analyst > CENIC > promero at cenic.org > Phone: (714) 220-3430 > Mobile: (562) 237-9290 > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -- Munroe Sollog Senior Network Engineer munroe at lehigh.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20171212/329059dc/attachment.html From haoscs at gmail.com Tue Dec 12 21:20:44 2017 From: haoscs at gmail.com (Shuai Hao) Date: Wed, 13 Dec 2017 00:20:44 -0500 Subject: [Bro] Dealing with tcp-based Unknown Protocols In-Reply-To: <7c69e994-89f3-331d-8683-0ae2f615fb59@gmail.com> References: <7c69e994-89f3-331d-8683-0ae2f615fb59@gmail.com> Message-ID: Thanks, Jan and Jon! I noticed that the "Analyzers of Last Resort" is very useful for our case and PacketSled would like to share this part with community. I cannot find the email addresses of speakers from PacketSled, anyone can help? Also, following Jon's suggestion with the solution of script level, we write a sample code as follows for testing. We here assume that anytime we capture a protocol_confirmation from any analyzer, there is an available analyzer responsible for the stream so we disable the Unknown_Protocol analyzer which matches anything. > > event bro_init () { > Log::create_stream(Unknown:LOG, ....) 
> } > > hook disable_unknown() { > Analyzer::disable_analyzer(Analyzer::ANALYZER_UNKNOWN) > } > > event protocol_confirmation(c: connection, atype: Analyzer:Tag, aid: count) { > hook disable_unknown(); > } > > event Unknown_event(c: connection) { > // unknown protocol process > } > If we test with uncommon protocol trace and there is no corresponding protocol analyzer, the Unknown_Protocol Analyzer successfully captures the stream. However, this Analyzer::disable_analyzer() doesn't work here. With normal protocol traces, we still see the analyzer is processing the stream and produce the logs. Any ideas how this Analyzer::disable_analyzer() should be used in such scenario? In addition, the Log::disable_stream() works here if we only terminate the log stream for the Unknown Protocol analyzer. However, we essentially would like to disable the process of analysis instead of only closing the log stream. Thanks a lot. On Mon, Dec 11, 2017 at 12:24 PM, Jan Grash?fer wrote: > On 11/12/17 07:45, Shuai Hao wrote: > > I wonder that does anyone have experience to tackle the "unknown > protocol" > > when DPD cannot recognize the protocol and/or all existing analyzers > fail. > > Maybe the "Analyzers of Last Resort" Leo and Aaron talked about in their > BroCon'17 Lightning-Talks is what you are looking for: > https://www.bro.org/brocon2017/slides/2017_lightning_talk.pdf > > Jan > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20171213/bc87988c/attachment.html From jsiwek at corelight.com Wed Dec 13 12:14:23 2017 From: jsiwek at corelight.com (Jon Siwek) Date: Wed, 13 Dec 2017 14:14:23 -0600 Subject: [Bro] Dealing with tcp-based Unknown Protocols In-Reply-To: References: <7c69e994-89f3-331d-8683-0ae2f615fb59@gmail.com> Message-ID: On Tue, Dec 12, 2017 at 11:20 PM, Shuai Hao wrote: >> event bro_init () { >> Log::create_stream(Unknown:LOG, ....) >> } >> >> hook disable_unknown() { >> Analyzer::disable_analyzer(Analyzer::ANALYZER_UNKNOWN) >> } >> >> event protocol_confirmation(c: connection, atype: Analyzer:Tag, aid: >> count) { >> hook disable_unknown(); >> } >> >> event Unknown_event(c: connection) { >> // unknown protocol process >> } >> > > If we test with uncommon protocol trace and there is no corresponding > protocol analyzer, the Unknown_Protocol Analyzer successfully captures the > stream. However, this Analyzer::disable_analyzer() doesn't work here. With > normal protocol traces, we still see the analyzer is processing the stream > and produce the logs. Yeah, Analyzer::disable_analyzer() doesn't seem like what you want (it should disable the analyzer, but the not ones associated w/ current connections, just future ones). > Any ideas how this Analyzer::disable_analyzer() should be used in such > scenario? You might actually need to write your own function (unless I misremember, it looks like Bro doesn't actually have a generic way to do that from scripts), though it shouldn't be hard. Basically I would try turning your disable_unknown() function into a BIF (there's a bunch of *.bif files you can use as reference). E.g. an idea on how the BIF code would look: function disable_unknown%(c: connection%) : bool %{ auto ua = c->GetRootAnalyzer()->FindChild(ANALYZER_UNKNOWN); if ( ! 
ua ) return new Val(false, TYPE_BOOL); ua->Parent()->RemoveChildAnalyzer(ua); return new Val(true, TYPE_BOOL); %} The RemoveChildAnalyzer() call then marks the analyzer as "removed" and shouldn't receive any further stream data. - Jon From vikrambasu059 at gmail.com Thu Dec 14 04:51:54 2017 From: vikrambasu059 at gmail.com (Vikram Basu) Date: Thu, 14 Dec 2017 18:21:54 +0530 Subject: [Bro] Bro system requirements for 1gbps traffic Message-ID: <5a3273eb.168f630a.ead72.2485@mx.google.com> I am running Bro in cluster mode using pf_ring to make it utilize the different cores of a system. Now I am wondering if the Bro devs have outlined the hardware requirements to be able to handle about 1gbps traffic. Specifically - CPU (No. Of cores) - Memory (RAM) - Disk (SSD right ?) Regards, Vikram Basu Seceon Inc. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20171214/e9d18cd9/attachment.html From danilovic.neda at gmail.com Fri Dec 15 15:03:22 2017 From: danilovic.neda at gmail.com (Neda Danilovic) Date: Sat, 16 Dec 2017 00:03:22 +0100 Subject: [Bro] Exam Message-ID: Hello, I need to prepare for my exam Resilient Networks and I have a few questions about Bro. Do you have smething like lectures? I would gladly pay for that, that someone help me to prepare Bro for exam. Regards, Neda Danilovi? *www.kaficamagazin.rs * -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20171216/e701dda1/attachment.html From justDoSports at gmx.de Fri Dec 15 23:59:30 2017 From: justDoSports at gmx.de (Zick Zack) Date: Sat, 16 Dec 2017 08:59:30 +0100 Subject: [Bro] How to get generated specific log files under DEFAULT path (e.g. notice.log) In-Reply-To: <45a7a45d-3a81-639c-600a-fdd3877cda3f@gmx.de> References: <45a7a45d-3a81-639c-600a-fdd3877cda3f@gmx.de> Message-ID: <3805979d-624b-1522-294f-88a51ed15bbf@gmx.de> Hi Bro'ers I have a problem to get generated a notice.log file with it's DEFAULT path. Short description of my problem: * whenever I start Bro to do sth., I get generated some log-files (e.g. communication, http, ...) in a folder named /var/log/bro * however (also after a "deploy" command!), when I call e.g. "NOTICE([$note=***, $msg="***"])", I get NOT generated a notice.log file ANYWHERE on my VM * I can somehow circumvent that by manipulating the share/bro/base/frameworks/notice/main.bro file, when I explicitly set the $path variables in there to my absolute path like "/var/log/bro/notice" Some background I already found out: * it is said in the Bro documentation NOT to change any files in the directories (and its sub-folders) from share/bro EXCEPT the share/bro/site-folder * I found out, all the modules for which the DEFAULT path log-file generation is working somehow load (directly or indirectly) the base/utils/paths or the base/utils/site modules What I want: * getting generated my notice.log file without specifiying an absolute path; only the file-name (just like as it works for the other log files in my /var/log/bro folder) Please help me to get my notice.log file WITHOUT manipulating files which one should not touch! Thanks alot in advance! -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20171216/8ac17bd9/attachment.html From haoscs at gmail.com Wed Dec 20 12:01:32 2017 From: haoscs at gmail.com (Shuai Hao) Date: Wed, 20 Dec 2017 15:01:32 -0500 Subject: [Bro] Calling external functions in binpac protocol parser Message-ID: Hi All, (1) I wonder that what's the rationales of removing the binpac files for some common protocols (e.g., HTTP, DNS, et al.)? Does current bro distribution only include the handwritten protocol parsers for those protocols? I can find the http-{protocol, analyzer}.pac files have been removed since bro-2.2. I checked the CHANGE log but cannot find the explanation. (2) We create a "general" analytic module that includes APIs (e.g., passing a key/value pair) can be called by multiple protocol parsers such as HTTP and DNS (essentially we only want the "parser" instead of the whole "analyzer" part; that's the reason we are looking for the http-protocol.pac). We develop such module as a plugin, say "Sample::Test" which includes a function test_func(...). We have another sample protocol parser including following code: > type myPDU() = record { > data: bytestring &restofflow; > } &byteorder = bigendian & let{ > deliver: bool = $context.flow.myFUNC(); >}; > flow myFLOW() { > flowunit = myPDU(...); > > function myFUNC() { > Sample::test_func(...); > } That is, in current sample module we want the external function being called when receiving a protocol flow PDU (in &let {...}). So how we can get the binpac (protocol parser) recognize the function Sample::test_func() written in another plugin Sample::Test? I can see in /src/analyzer/protocols, the analyzers can include the functionality from another analyzer by including the headers such as #include "analyzer/protocols/tcp/...". But when writing the plugins in /aux/bro-aux/plugin-support, how can we do that? Thanks very much! -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20171220/96920bb4/attachment.html From jackycsie at gmail.com Thu Dec 21 03:43:29 2017 From: jackycsie at gmail.com (=?UTF-8?B?6buD6aao5bmz?=) Date: Thu, 21 Dec 2017 19:43:29 +0800 Subject: [Bro] Question about http.log and conn.log. Message-ID: Hi All I have two questions about http.log and conn.log. (1)Why do some UID in http.log not correspond to conn.log UID? (2)Why may one conn.log UID correspond to many flows in HTTP.log? Thanks ~ ?????www.avast.com <#DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2> -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20171221/0c22b00b/attachment.html From jackycsie at gmail.com Thu Dec 21 18:03:39 2017 From: jackycsie at gmail.com (=?utf-8?Q?=E9=BB=83=E9=A6=A8=E5=B9=B3?=) Date: Fri, 22 Dec 2017 10:03:39 +0800 Subject: [Bro] Question about http.log and conn.log. Message-ID: <5a3c67fa.8741650a.7e1c2.9990@mx.google.com> ?Hi All ?I have two questions about http.log and conn.log. (1)Why do some UID in http.log not correspond to conn.log UID? (2)Why may one conn.log UID correspond to many flows in HTTP.log? Thanks ~ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20171222/2d5cb8f8/attachment.html From bill.de.ping at gmail.com Mon Dec 25 05:24:38 2017 From: bill.de.ping at gmail.com (william de ping) Date: Mon, 25 Dec 2017 15:24:38 +0200 Subject: [Bro] - logging postprocessor func Message-ID: Hello, Anyone every experienced with setting a costume postprocessor func to a specific filter ? here's what I want to do : function rotation_postprocessor_func(info: Log::RotationInfo) : bool { # Move file to name including both opening and closing time. local dst = fmt("/tmp/%s.%s.log", info$path, strftime(Log::default_rotation_date_format, info$open)); system(fmt("/bin/mv %s %s", info$fname, dst)); # Run default postprocessor. return Log::run_rotation_postprocessor_cmd(info, dst); } Log::add_filter(test_log::LOG,[ $name="test_log", $path_func=test_log_func, $config=table(["tsv"] = "T"), $interv=100sec, $postprocessor=rotation_postprocessor_func, $include=set("ts") ]); and when I run it in a cluster mode\single instance mode - I see that the "test_log" are rotated like all the other logs, meaning that my /tmp/ folder is empty Any ideas ? Thanks B -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20171225/aef5c9c6/attachment.html From bill.de.ping at gmail.com Tue Dec 26 04:59:28 2017 From: bill.de.ping at gmail.com (william de ping) Date: Tue, 26 Dec 2017 14:59:28 +0200 Subject: [Bro] - logging postprocessor func In-Reply-To: References: Message-ID: Anyone ? Thanks On Mon, Dec 25, 2017 at 3:24 PM, william de ping wrote: > Hello, > > Anyone every experienced with setting a costume postprocessor func to a > specific filter ? > > here's what I want to do : > > function rotation_postprocessor_func(info: Log::RotationInfo) : bool > { > # Move file to name including both opening and closing time. > local dst = fmt("/tmp/%s.%s.log", info$path, > strftime(Log::default_rotation_date_format, > info$open)); > > system(fmt("/bin/mv %s %s", info$fname, dst)); > > # Run default postprocessor. > return Log::run_rotation_postprocessor_cmd(info, dst); > } > > > Log::add_filter(test_log::LOG,[ > $name="test_log", > $path_func=test_log_func, > $config=table(["tsv"] = "T"), > $interv=100sec, > $postprocessor=rotation_postprocessor_func, > $include=set("ts") > ]); > > > and when I run it in a cluster mode\single instance mode - I see that the > "test_log" are rotated like all the other logs, meaning that my /tmp/ > folder is empty > > Any ideas ? > > Thanks > B > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20171226/db64b132/attachment.html From johanna at icir.org Thu Dec 28 07:42:12 2017 From: johanna at icir.org (Johanna Amann) Date: Thu, 28 Dec 2017 16:42:12 +0100 Subject: [Bro] Payload In-Reply-To: References: Message-ID: <20171228154212.mxm6mfmxhht3jsqe@Beezling.fritz.box> Hi, in case this is still interesting - I would try using https://www.bro.org/sphinx/scripts/base/bif/plugins/Bro_TCP.functions.bif.bro.html#id-set_contents_file in one of the new_connection events. Johanna On Tue, Oct 17, 2017 at 12:06:12PM +0200, Rober Fern?ndez wrote: > Hi, > > How can I get the payload of each connection and print each payload in a > different file? 
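Roughly something like this, per connection and per direction (untested sketch; the contents_* file names are just made up for the example):

event new_connection(c: connection)
	{
	# One output file for each direction, named after the connection UID.
	local orig_f = open(fmt("contents_%s_orig.dat", c$uid));
	set_contents_file(c$id, CONTENTS_ORIG, orig_f);

	local resp_f = open(fmt("contents_%s_resp.dat", c$uid));
	set_contents_file(c$id, CONTENTS_RESP, resp_f);
	}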
> > Regards > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From johanna at icir.org Thu Dec 28 07:43:33 2017 From: johanna at icir.org (Johanna Amann) Date: Thu, 28 Dec 2017 16:43:33 +0100 Subject: [Bro] Looking up fa_file given FUID In-Reply-To: References: Message-ID: <20171228154333.gmprlcmzakmmsv3l@Beezling.fritz.box> Hi, sorry for the slow reply. There currently is no inbuilt functionality in Bro to do this. However this feels a bit like an oversight and is probably something that should be added in the future. Johanna On Thu, Oct 12, 2017 at 07:16:02PM +0000, Lamps, Jereme wrote: > Hello, > > I was just wondering if it was possible to lookup fa_file or Files::Info records given a FUID. I have been looking through the built in functions but have not seen anything. > > Best, > > Jereme Lamps > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From johanna at icir.org Thu Dec 28 07:45:13 2017 From: johanna at icir.org (Johanna Amann) Date: Thu, 28 Dec 2017 16:45:13 +0100 Subject: [Bro] Mininet In-Reply-To: References: Message-ID: <20171228154513.4ejylx2xrkjbqfjf@Beezling.fritz.box> Hi Daniel, I am not aware of anyone doing that. I am not sure if that might be of interest - but if you ever set this up note that Bro has (limited) support to interact with OpenFlow devices through NetControl. I hope this helps, Johanna On Thu, Oct 19, 2017 at 03:39:15PM +0000, Sniper wrote: > Hello, > > Just wondering if anyone has tried to use Bro on an openflow-based > network using mininet? > > Kind regards > > Daniel > > > --- > This email has been checked for viruses by Avast antivirus software. > https://www.avast.com/antivirus > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > From johanna at icir.org Thu Dec 28 07:47:47 2017 From: johanna at icir.org (Johanna Amann) Date: Thu, 28 Dec 2017 16:47:47 +0100 Subject: [Bro] Scanned Unique Host In-Reply-To: <8655B0F9-A35E-4546-B4C2-416CB114575F@contoso.com> References: <8655B0F9-A35E-4546-B4C2-416CB114575F@contoso.com> Message-ID: <20171228154747.5ztmo6oxlgrohgxo@Beezling.fritz.box> Hi, typically the only way to do this is to look into conn.log; it might be possible to add that information using the SAMPLE or LAST SumStat reducers; however that will require modifying scans.bro. Johanna On Wed, Oct 25, 2017 at 09:40:11PM +0000, Hector Pena wrote: > Hi, > > Is there a way to view which hosts were scanned when receiving a notice for the scan.bro script? We have been receiving a lot of notices lately for 'x.x.x.x scanned at least X unique hosts on port X in Xtime'. I cannot seem to find a good way to determine which hosts were scanned by the host machine. > > Thanks, > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From johanna at icir.org Thu Dec 28 07:52:27 2017 From: johanna at icir.org (Johanna Amann) Date: Thu, 28 Dec 2017 16:52:27 +0100 Subject: [Bro] size file In-Reply-To: References: Message-ID: <20171228155227.w4r5jofcqewjauht@Beezling.fritz.box> Hi Rober, I think sadly the answer in this case is that this will require modifications to the Bro core.
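The only blunt work-around I can think of at the script level is to stop Bro from processing a connection at all once it crosses a size threshold, via the connection-size thresholds plus skip_further_processing(). That throws away all further analysis of that connection, not just the content extraction, and I have not verified that it actually stops the contents files from growing, so treat it as an untested sketch (the contents_size_cap name is just for the example):

# Untested sketch: give up on a connection after ~1 MB in either direction.
const contents_size_cap = 1048576;

event connection_established(c: connection)
	{
	set_current_conn_bytes_threshold(c$id, contents_size_cap, T);
	set_current_conn_bytes_threshold(c$id, contents_size_cap, F);
	}

event conn_bytes_threshold_crossed(c: connection, threshold: count, is_orig: bool)
	{
	# Skips all remaining per-packet work for this connection.
	skip_further_processing(c$id);
	}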
As far as I can see, set_contents_file is an irreversible operation (if Bro started extracting the content of a connection you cannot stop it anymore). Johanna On Fri, Nov 03, 2017 at 01:19:28PM +0100, Rober Fern?ndez wrote: > hi, > > I have this code: > > event connection_established(c: connection) { > > local orig_file = generate_extraction_filename(extrac_prefix, c, > "orig.dat"); > local orig_f = open(orig_file); > set_contents_file(c$id, CONTENTS_ORIG, orig_f); > > local resp_file = generate_extraction_filename(extrac_prefix, c, > "resp.dat"); > local resp_f = open(resp_file); > set_contents_file(c$id, CONTENTS_RESP, resp_f); > > } > > and I would like set a maximum size, I think that I have two options, > 1. set a maximum size file > 2. control the data so that it does not exceed the size > > How can I do this? > > thanks > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From johanna at icir.org Thu Dec 28 07:59:57 2017 From: johanna at icir.org (Johanna Amann) Date: Thu, 28 Dec 2017 16:59:57 +0100 Subject: [Bro] Multi tap architecture In-Reply-To: References: Message-ID: <20171228155957.2fn45wv2peazmx4p@Beezling.fritz.box> Hi Pierre, just to recap if I understand everything correctly: you have low-powered boxes that you just want to capture traffic on, without analyzing the payload, because they are too low powered. And then you would like to do the protocol analysis on another machine. Bro itself does not support this scheme - parsing has to happen on the same instance that does the capturing. It sounds like you might want to use some other software that can just duplicate and forward interesting traffic to a more high-powered machine, where you can perform the actual analysis. I have never built a setup like this myself - but I suspect you might even be able to do this directly in Linux using onboard tools; create a tunneled interface that sends traffic to the destination that you want to send it to and mirror traffic to that interface - or something similar to this. Johanna On Sun, Nov 12, 2017 at 02:05:19PM +0100, bro-ml at razaborg.fr wrote: > Hi everyone, > > I'm looking to build a Bro architecture with several Tap components (I > mean the tcpdump stuff), all separated from the core. > I've seen the "cluster" architecture > (https://www.bro.org/sphinx/cluster/index.html), but as I said I want to > split out the capture work, not the protocol analysis stuff. > > My situation is the following : I have several "boxes" (with not enough > power to do the protocol analysis work, that's the point) in different > networks, all connected to one single "core" component. I would like to > deploy network capture (Tap) instances on all those boxes, and let the > core component do all the hard stuff (I can potentially install a > front-end on this core component to set up many "workers" behind it). > > Is there any way to do this ? Any documentation ? Does anyone have any > clue about how to set it up that way ? 
> > Thanks a lot, > Pierre > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > From johanna at icir.org Thu Dec 28 08:27:23 2017 From: johanna at icir.org (Johanna Amann) Date: Thu, 28 Dec 2017 17:27:23 +0100 Subject: [Bro] integrate FPGA\PF_RING supported NIC with Bro - offload In-Reply-To: References: Message-ID: <20171228162723.2wgca2d67drosoyf@Beezling.fritz.box> Hi, > > It still seems like speeding up the reading of network traffic to Bro > > can get you so far, no other ways of taking some of Bro's processing > > and offload them to a network card\ FPGA card ? > > There aren't any code paths in Bro that offload work into any > specialized NICs. It's fairly hard to find the exact right abstraction > that would provide some benefit to Bro and still be technically > achievable. to add a bit to this - while there currently is no support (at all) to offload work into specialized NICs, this actually is one of our research projects. Our ideas and plans have been documented in a short paper - if you are interested you can take a look here: http://icir.org/johanna/papers/sdnfvsec17codesign.pdf If you have any ideas of things that you think might especially benefit from such acceleration please let me know :) Thanks, Johanna > > .Seth > > -- > Seth Hall * Corelight, Inc * www.corelight.com > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > From johanna at icir.org Thu Dec 28 08:38:08 2017 From: johanna at icir.org (Johanna Amann) Date: Thu, 28 Dec 2017 17:38:08 +0100 Subject: [Bro] Bro system requirements for 1gbps traffic In-Reply-To: <5a3273eb.168f630a.ead72.2485@mx.google.com> References: <5a3273eb.168f630a.ead72.2485@mx.google.com> Message-ID: <20171228163808.dksilzkwgqfvbql2@Beezling.fritz.box> Hi, no, we don't really have exact hardware requirements/specifications outlined anywhere since this is really dependent on your traffic - and available hardware changes all the time. Since 1GB of mixed traffic is not really all that much I assume that you are ok with any recent hardware with a number of CPUs and a bit of Ram (10GB or so available per Bro process should be enough). There are a couple of postings on this mailing list that talk about the configuration that some people use; however the most recent one I can find was at the beginning of 2016: http://mailman.icsi.berkeley.edu/pipermail/bro/2016-January/009481.html Johanna On Thu, Dec 14, 2017 at 06:21:54PM +0530, Vikram Basu wrote: > I am running Bro in cluster mode using pf_ring to make it utilize the different cores of a system. Now I am wondering if the Bro devs have outlined the hardware requirements to be able to handle about 1gbps traffic. Specifically > - CPU (No. Of cores) > - Memory (RAM) > - Disk (SSD right ?) > > Regards, > > Vikram Basu > Seceon Inc. 
> > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From jazoff at illinois.edu Thu Dec 28 08:45:19 2017 From: jazoff at illinois.edu (Azoff, Justin S) Date: Thu, 28 Dec 2017 16:45:19 +0000 Subject: [Bro] Scanned Unique Host In-Reply-To: <20171228154747.5ztmo6oxlgrohgxo@Beezling.fritz.box> References: <8655B0F9-A35E-4546-B4C2-416CB114575F@contoso.com> <20171228154747.5ztmo6oxlgrohgxo@Beezling.fritz.box> Message-ID: <167753A3-505A-4506-AFA3-52BD532AF746@illinois.edu> > On Dec 28, 2017, at 10:47 AM, Johanna Amann wrote: > > Hi, > > typically the only way to do this is to look into conn.log; it might be > possible to add that information using the SAMPLE or LAST SumStat > reducers; however that will require modifying scans.bro. > > Johanna This has come up a few times.. What do you think of the idea of adding a tags field to conn.log like http.log has? The sql injection script makes good use of this: if ( match_sql_injection_uri in unescaped_URI ) { add c$http$tags[URI_SQLI]; SumStats::observe("http.sqli.attacker", [$host=c$id$orig_h], [$str=original_URI]); SumStats::observe("http.sqli.victim", [$host=c$id$resp_h], [$str=original_URI]); } But there's no corresponding c$conn$tags Adding SCAN to c$conn$tags would make it easy to figure things out after the fact. ? Justin Azoff From johanna at icir.org Thu Dec 28 12:38:15 2017 From: johanna at icir.org (Johanna Amann) Date: Thu, 28 Dec 2017 21:38:15 +0100 Subject: [Bro] Calling external functions in binpac protocol parser In-Reply-To: References: Message-ID: <20171228203815.7iwbvz4z36a7xczn@Beezling.fritz.box> Hi Shuai, > (1) I wonder that what's the rationales of removing the binpac files for > some common protocols (e.g., HTTP, DNS, et al.)? Does current bro > distribution only include the handwritten protocol parsers for those > protocols? The protocol parsers that were removed were incomplete and not used by Bro. For example for HTTP, which you also mention, the binpac analyzer was never enabled AFAIK. > I can find the http-{protocol, analyzer}.pac files have been removed since > bro-2.2. I checked the CHANGE log but cannot find the explanation. I assume that since this is not a user-visible change (as I said they never were used) back in those days we decided this did not warrant a CHANGE entry :). If you are still interested into them, you can just get them from the bro 2.2 tarball, there was no further development of them. But - as I said - note that they never were actually used. > (2) We create a "general" analytic module that includes APIs (e.g., passing > a key/value pair) can be called by multiple protocol parsers such as HTTP > and DNS (essentially we only want the "parser" instead of the whole > "analyzer" part; that's the reason we are looking for the > http-protocol.pac). I assume you are aware that the -parser and -analyzer in binpac is just a naming convention and all that the -analyzer is is the part that has all the callbacks into Bro that raises events? > We develop such module as a plugin, say "Sample::Test" which includes a > function test_func(...). 
We have another sample protocol parser including > following code: > > > type myPDU() = record { > > data: bytestring &restofflow; > > } &byteorder = bigendian & let{ > > deliver: bool = $context.flow.myFUNC(); > >}; > > > flow myFLOW() { > > flowunit = myPDU(...); > > > > function myFUNC() { > > Sample::test_func(...); > > } > > That is, in current sample module we want the external function being > called when receiving a protocol flow PDU (in &let {...}). So how we can > get the binpac (protocol parser) recognize the function Sample::test_func() > written in another plugin Sample::Test? I can see in > /src/analyzer/protocols, the analyzers can include the functionality from > another analyzer by including the headers such as #include > "analyzer/protocols/tcp/...". But when writing the plugins in > /aux/bro-aux/plugin-support, how can we do that? Ok, I will give a slightly long answer to this. First - assuming that the test function is just a c++ function, chances are that you want to develop it outside of the binpac files - you can e.g. move it into a separate class that is accessible by everyone. Depending on the things that your function does, it even might be possible to make it a static function. The second answer is - since binpac files are only compiles to c++ you have to do the same thing that the other protocol analyzers do - include the headers using #include statements. I think you can just use relative paths from where the files are located, in addition to absolute paths from the bro base. So - doing something like #include "../your.h" might work fine. Also note that you probably should not put your plugins into bro-aux/plugin-support in the first case. Having them in a separate directory is probably preferable - completely outside of the Bro source tree. I hope this helps a bit, Johanna From johanna at icir.org Thu Dec 28 12:40:52 2017 From: johanna at icir.org (Johanna Amann) Date: Thu, 28 Dec 2017 21:40:52 +0100 Subject: [Bro] Scanned Unique Host In-Reply-To: <167753A3-505A-4506-AFA3-52BD532AF746@illinois.edu> References: <8655B0F9-A35E-4546-B4C2-416CB114575F@contoso.com> <20171228154747.5ztmo6oxlgrohgxo@Beezling.fritz.box> <167753A3-505A-4506-AFA3-52BD532AF746@illinois.edu> Message-ID: <20171228204052.w7dkgvayo6dhpjyy@Beezling.fritz.box> > This has come up a few times.. What do you think of the idea of adding a tags field to conn.log like http.log has? That might be a good idea - even though I am always a bit hesitant to add new fields to conn.log. One small drawback is that this approach will always only mark future connections as scan connections - all the ones that actually caused something to be recognized as scanning activity will probably already have been logged into conn.log (and we don't actually have the connection UIDs - at least at the moment). So - adding a sample of IPs might still make sense. Or even make more sense in this case. Johanna From johanna at icir.org Thu Dec 28 12:42:37 2017 From: johanna at icir.org (Johanna Amann) Date: Thu, 28 Dec 2017 21:42:37 +0100 Subject: [Bro] Question about http.log and conn.log. In-Reply-To: <5a3c67fa.8741650a.7e1c2.9990@mx.google.com> References: <5a3c67fa.8741650a.7e1c2.9990@mx.google.com> Message-ID: <20171228204237.dxnmxns6k2akagkg@Beezling.fritz.box> > (1)Why do some UID in http.log not correspond to conn.log UID? This should not be possible - all connections in http.log should (eventually) be logged in conn.log. 
Note that they do not necessarily have to be logged with the same timestamp or even in the same logfile - especially with long-loved connections. > (2)Why may one conn.log UID correspond to many flows in HTTP.log? The HTTP log does not contain flows but request. One HTTP connection can have many request/reply pairs. Johanna From johanna at icir.org Thu Dec 28 12:52:12 2017 From: johanna at icir.org (Johanna Amann) Date: Thu, 28 Dec 2017 21:52:12 +0100 Subject: [Bro] - logging postprocessor func In-Reply-To: References: Message-ID: <20171228205212.yb6wsmduvptttj2m@Beezling.fritz.box> On a first glance this does actually not look bad to me - but I have not tried this myself :). Have you tried to do a bit debugging with prints to see if your custom postprocessor function is called by the core? Johanna On Mon, Dec 25, 2017 at 03:24:38PM +0200, william de ping wrote: > Hello, > > Anyone every experienced with setting a costume postprocessor func to a > specific filter ? > > here's what I want to do : > > function rotation_postprocessor_func(info: Log::RotationInfo) : bool > { > # Move file to name including both opening and closing time. > local dst = fmt("/tmp/%s.%s.log", info$path, > strftime(Log::default_rotation_date_format, > info$open)); > > system(fmt("/bin/mv %s %s", info$fname, dst)); > > # Run default postprocessor. > return Log::run_rotation_postprocessor_cmd(info, dst); > } > > > Log::add_filter(test_log::LOG,[ > $name="test_log", > $path_func=test_log_func, > $config=table(["tsv"] = "T"), > $interv=100sec, > $postprocessor=rotation_postprocessor_func, > $include=set("ts") > ]); > > > and when I run it in a cluster mode\single instance mode - I see that the > "test_log" are rotated like all the other logs, meaning that my /tmp/ > folder is empty > > Any ideas ? > > Thanks > B > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From jazoff at illinois.edu Thu Dec 28 12:53:29 2017 From: jazoff at illinois.edu (Azoff, Justin S) Date: Thu, 28 Dec 2017 20:53:29 +0000 Subject: [Bro] Scanned Unique Host In-Reply-To: <20171228204052.w7dkgvayo6dhpjyy@Beezling.fritz.box> References: <8655B0F9-A35E-4546-B4C2-416CB114575F@contoso.com> <20171228154747.5ztmo6oxlgrohgxo@Beezling.fritz.box> <167753A3-505A-4506-AFA3-52BD532AF746@illinois.edu> <20171228204052.w7dkgvayo6dhpjyy@Beezling.fritz.box> Message-ID: > On Dec 28, 2017, at 3:40 PM, Johanna Amann wrote: > >> This has come up a few times.. What do you think of the idea of adding a tags field to conn.log like http.log has? > > That might be a good idea - even though I am always a bit hesitant to add > new fields to conn.log. One small drawback is that this approach will > always only mark future connections as scan connections - all the ones > that actually caused something to be recognized as scanning activity will > probably already have been logged into conn.log (and we don't actually > have the connection UIDs - at least at the moment). Yeah.. I don't really want to add a new field either, but I think it could be useful in a few places. Maybe I just need to come up with a handful first :-) I thought it would work fine for scans.. all my scan.bro does is this: event connection_attempt(c: connection) { if ( c$history == "S" ) add_scan(c$id); } event connection_rejected(c: connection) { if ( c$history == "Sr" ) add_scan(c$id); } So as long as I could add to c$conn$tags from those 2 events before the log is written, it would work. 
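Roughly what I had in mind - untested, and the module name, the Tag enum
and the "tags" field are all made up for the sake of the example:

    module ConnTags;

    export {
        type Tag: enum { SCAN };

        redef record Conn::Info += {
            ## Hypothetical tags field, mirroring what http.log does.
            tags: set[Tag] &log &optional;
        };
    }

    # If I'm reading base/protocols/conn/main.bro right, c$conn gets
    # filled in at priority 5 of connection_state_remove and written out
    # at priority -5, so a handler in between can still modify the
    # record before it hits conn.log.
    event connection_state_remove(c: connection) &priority=0
        {
        if ( c?$conn && ( c$history == "S" || c$history == "Sr" ) )
            c$conn$tags = set(SCAN);
        }

Re-checking the history string at teardown sidesteps the problem of
c$conn not existing yet when connection_attempt/connection_rejected
fire; if more tag sources get added later the field would want to be
initialized once and then grown with "add", the way detect-sqli does
for http.log.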
> So - adding a sample of IPs might still make sense. Or even make more > sense in this case. I was thinking about doing that, but the only good place I know of to put a lot of info is in email_body_sections, and that doesn't make it to the notice.log ? Justin Azoff From jmellander at lbl.gov Thu Dec 28 15:18:14 2017 From: jmellander at lbl.gov (Jim Mellander) Date: Thu, 28 Dec 2017 15:18:14 -0800 Subject: [Bro] Multi tap architecture In-Reply-To: <20171228155957.2fn45wv2peazmx4p@Beezling.fritz.box> References: <20171228155957.2fn45wv2peazmx4p@Beezling.fritz.box> Message-ID: Many moons ago, I deployed a number of Linksys systems running OpenWRT (one per internal subnet), all reporting back to a central Bro system. The Linksys systems captured all traffic (which would be directed traffic to their IP, as well as broadcast traffic - ARP, etc.). The traffic was captured via tcpdump, and tunneled via an ssh connection to the central collector box, which would push that traffic via a small binary onto a virtual interface monitored by Bro (the traffic from all of the collectors was pushed onto the same interface) - worked fine for the low volume of traffic that the systems saw. Does this sound similar to what you're trying to accomplish? Jim On Thu, Dec 28, 2017 at 7:59 AM, Johanna Amann wrote: > Hi Pierre, > > just to recap if I understand everything correctly: you have low-powered > boxes that you just want to capture traffic on, without analyzing the > payload, because they are too low powered. And then you would like to do > the protocol analysis on another machine. > > Bro itself does not support this scheme - parsing has to happen on the > same instance that does the capturing. It sounds like you might want to > use some other software that can just duplicate and forward interesting > traffic to a more high-powered machine, where you can perform the actual > analysis. > > I have never built a setup like this myself - but I suspect you might even > be able to do this directly in Linux using onboard tools; create a > tunneled interface that sends traffic to the destination that you want to > send it to and mirror traffic to that interface - or something similar to > this. > > Johanna > > On Sun, Nov 12, 2017 at 02:05:19PM +0100, bro-ml at razaborg.fr wrote: > > Hi everyone, > > > > I'm looking to build a Bro architecture with several Tap components (I > > mean the tcpdump stuff), all separated from the core. > > I've seen the "cluster" architecture > > (https://www.bro.org/sphinx/cluster/index.html), but as I said I want to > > split out the capture work, not the protocol analysis stuff. > > > > My situation is the following : I have several "boxes" (with not enough > > power to do the protocol analysis work, that's the point) in different > > networks, all connected to one single "core" component. I would like to > > deploy network capture (Tap) instances on all those boxes, and let the > > core component do all the hard stuff (I can potentially install a > > front-end on this core component to set up many "workers" behind it). > > > > Is there any way to do this ? Any documentation ? Does anyone have any > > clue about how to set it up that way ? 
> > > > Thanks a lot, > > Pierre > > > > _______________________________________________ > > Bro mailing list > > bro at bro-ids.org > > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20171228/bba9f389/attachment-0001.html From zeolla at gmail.com Thu Dec 28 17:02:12 2017 From: zeolla at gmail.com (Zeolla@GMail.com) Date: Fri, 29 Dec 2017 01:02:12 +0000 Subject: [Bro] Scanned Unique Host In-Reply-To: References: <8655B0F9-A35E-4546-B4C2-416CB114575F@contoso.com> <20171228154747.5ztmo6oxlgrohgxo@Beezling.fritz.box> <167753A3-505A-4506-AFA3-52BD532AF746@illinois.edu> <20171228204052.w7dkgvayo6dhpjyy@Beezling.fritz.box> Message-ID: Would https://github.com/JonZeolla/scan-sampling do what you're looking for? It's in bro-pkg as well. Jon On Thu, Dec 28, 2017, 15:55 Azoff, Justin S wrote: > > > On Dec 28, 2017, at 3:40 PM, Johanna Amann wrote: > > > >> This has come up a few times.. What do you think of the idea of adding > a tags field to conn.log like http.log has? > > > > That might be a good idea - even though I am always a bit hesitant to add > > new fields to conn.log. One small drawback is that this approach will > > always only mark future connections as scan connections - all the ones > > that actually caused something to be recognized as scanning activity will > > probably already have been logged into conn.log (and we don't actually > > have the connection UIDs - at least at the moment). > > Yeah.. I don't really want to add a new field either, but I think it could > be useful in a few places. > Maybe I just need to come up with a handful first :-) > > I thought it would work fine for scans.. all my scan.bro does is this: > > event connection_attempt(c: connection) > { > if ( c$history == "S" ) > add_scan(c$id); > } > > event connection_rejected(c: connection) > { > if ( c$history == "Sr" ) > add_scan(c$id); > } > > So as long as I could add to c$conn$tags from those 2 events before the > log is written, it would work. > > > > So - adding a sample of IPs might still make sense. Or even make more > > sense in this case. > > I was thinking about doing that, but the only good place I know of to put > a lot of info is in email_body_sections, and that doesn't make it to the > notice.log > > > > ? > Justin Azoff > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -- Jon -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20171229/cb13e0a9/attachment.html From michalpurzynski1 at gmail.com Thu Dec 28 21:25:09 2017 From: michalpurzynski1 at gmail.com (=?UTF-8?B?TWljaGHFgiBQdXJ6ecWEc2tp?=) Date: Thu, 28 Dec 2017 21:25:09 -0800 Subject: [Bro] Multi tap architecture In-Reply-To: <20171228155957.2fn45wv2peazmx4p@Beezling.fritz.box> References: <20171228155957.2fn45wv2peazmx4p@Beezling.fritz.box> Message-ID: This might work with something that's light enough to capture and forward traffic. I suggest doing it with netsniff-ng and copying traffic to some kind of tunnel interface. 
On Thu, Dec 28, 2017 at 7:59 AM, Johanna Amann wrote: > Hi Pierre, > > just to recap if I understand everything correctly: you have low-powered > boxes that you just want to capture traffic on, without analyzing the > payload, because they are too low powered. And then you would like to do > the protocol analysis on another machine. > > Bro itself does not support this scheme - parsing has to happen on the > same instance that does the capturing. It sounds like you might want to > use some other software that can just duplicate and forward interesting > traffic to a more high-powered machine, where you can perform the actual > analysis. > > I have never built a setup like this myself - but I suspect you might even > be able to do this directly in Linux using onboard tools; create a > tunneled interface that sends traffic to the destination that you want to > send it to and mirror traffic to that interface - or something similar to > this. > > Johanna > > On Sun, Nov 12, 2017 at 02:05:19PM +0100, bro-ml at razaborg.fr wrote: > > Hi everyone, > > > > I'm looking to build a Bro architecture with several Tap components (I > > mean the tcpdump stuff), all separated from the core. > > I've seen the "cluster" architecture > > (https://www.bro.org/sphinx/cluster/index.html), but as I said I want to > > split out the capture work, not the protocol analysis stuff. > > > > My situation is the following : I have several "boxes" (with not enough > > power to do the protocol analysis work, that's the point) in different > > networks, all connected to one single "core" component. I would like to > > deploy network capture (Tap) instances on all those boxes, and let the > > core component do all the hard stuff (I can potentially install a > > front-end on this core component to set up many "workers" behind it). > > > > Is there any way to do this ? Any documentation ? Does anyone have any > > clue about how to set it up that way ? > > > > Thanks a lot, > > Pierre > > > > _______________________________________________ > > Bro mailing list > > bro at bro-ids.org > > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20171228/9819a71d/attachment.html