From bill.de.ping at gmail.com Thu Mar 1 01:52:39 2018 From: bill.de.ping at gmail.com (william de ping) Date: Thu, 1 Mar 2018 11:52:39 +0200 Subject: [Bro] - try-bro package In-Reply-To: References: Message-ID: Hi Justin, Thanks for replying. I fail to start it, and I think there must be some prerequisites prior to this stage. Do I need to install bro as a Docker for that? Thank you B On Mon, Feb 26, 2018 at 4:07 PM, Azoff, Justin S wrote: > Add https://github.com/bro/try-bro/blob/master/trybro.service to > /etc/systemd/system and start it, that's it. > > -- > Justin Azoff > > > On Feb 26, 2018, at 4:10 AM, william de ping > wrote: > > > > Hi there, > > > > I was just wondering how could I have a try-bro site on an internal > network ? > > How can I install https://github.com/bro/try-bro on a local pc ? > > > > thank you very much > > B > > _______________________________________________ > > Bro mailing list > > bro at bro-ids.org > > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180301/66ff11c8/attachment-0001.html From shahpri at oregonstate.edu Thu Mar 1 03:05:25 2018 From: shahpri at oregonstate.edu (Priyal Shah) Date: Thu, 1 Mar 2018 03:05:25 -0800 Subject: [Bro] Help: Experiment Message-ID: Hello, I am new to this whole Bro IDS and would like to learn more about it. Right now, I am running a simple experiment with Bro to learn more. My experiment setup is as below. I am simply trying to get the message distribution between encrypted and unencrypted traffic from a malware sample. I have virtual nodes running and I am generating traffic using tcpreplay and .pcap files. I have pointed Bro at the virtual interface, and it successfully captures the packets and generates the default logs.
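One minimal way to surface only the malicious flows is a small site script keyed on the malware endpoints. This is only a sketch; the addresses below are placeholders that would have to be replaced with the hosts actually seen in the malware pcaps:

```bro
# Placeholder addresses -- substitute the malware endpoints from the pcaps.
const malicious_hosts: set[addr] = { 192.0.2.10, 198.51.100.7 } &redef;

event connection_state_remove(c: connection)
    {
    # Runs once per connection as its state is flushed.
    if ( c$id$orig_h in malicious_hosts || c$id$resp_h in malicious_hosts )
        print fmt("malicious flow: %s -> %s", c$id$orig_h, c$id$resp_h);
    }
```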
But I want to distinguish malware traffic from non-malware traffic, in both encrypted and unencrypted scenarios. Right now Bro sees everything and logs everything, but I want a log of just the malicious traffic. With this email, I am attaching the .pcaps I am using to generate the traffic. (Link to pcaps: https://www.dropbox.com/s/csduqscqaccdmld/Pcaps.zip?dl=0 ) If you have a simple script to do this, do you mind sharing it with me? I would greatly appreciate it if you could explain a bit how to actually run it. Moreover, if you have some other .pcaps of any kind of malicious traffic that would make the logs easy to follow, I could use them as well. Thank you so much!! Regards Priyal Shah From mike at swedishmike.org Thu Mar 1 03:08:17 2018 From: mike at swedishmike.org (Mike Eriksson) Date: Thu, 01 Mar 2018 11:08:17 +0000 Subject: [Bro] Trying to get a simple detection on certificate hashes to fire Message-ID: Hi all, I'm trying to create, what I assume should be, a simple detection and notification based on certificate hashes. Sadly I seem to have gotten something very wrong - since it doesn't fire at all. What I've done is that I've created a file named certstream.bro with the following content:
<- Cut ->
@load base/frameworks/intel
@load frameworks/intel/seen
@load frameworks/intel/do_notice
redef Intel::read_files += { "/usr/local/bro/share/site/certstream/intel.dat" };
<- Cut ->
I load this file from local.bro with no errors or complaints, it shows up in loaded_scripts.log and all that.
The file I reference as my 'Intelligence file' looks as follows:
<- Cut ->
#fields	indicator	indicator_type	meta.source	meta.do_notice
7B00009ACF21C67564F1AC3C31000000009ACF	Intel::CERT_HASH	certstream	Stolen hash from the x509 log	T
0551B592FA53CF2052B8B70F275CC159	Intel::CERT_HASH	certstream	Stolen hash from the x509 log	T
2AA9E2483E8C62DF0037D183	Intel::CERT_HASH	certstream	Stolen hash from the x509 log	T
<- Cut ->
The hashes I'm using are taken from my x509.log - just to make sure that I tested against something that comes up quite a lot in our environment. I've been using data from the field 'serial' - since there is no actual field called 'hash' in either x509.log or known_certs. Have I been using the wrong identifier or is there some 'hash all certs' setting somewhere that I've missed? As always - grateful for any tips or pointers. Thanks in advance, Mike -- website: http://swedishmike.org twitter: https://twitter.com/swedishmike github: http://github.com/swedishmike From jazoff at illinois.edu Thu Mar 1 05:34:54 2018 From: jazoff at illinois.edu (Azoff, Justin S) Date: Thu, 1 Mar 2018 13:34:54 +0000 Subject: [Bro] - try-bro package In-Reply-To: References: Message-ID: <8CC72C7C-28E1-4FCB-9788-C42B553C0E56@illinois.edu> > On Mar 1, 2018, at 4:52 AM, william de ping wrote: > > Hi Justin, > > Thanks for replying > > I fail to start it and I think that there must be some prerequisites prior to this stage. > Do I need to install bro as a Docker for that ? You need to install docker. --
Justin Azoff From jazoff at illinois.edu Thu Mar 1 05:48:11 2018 From: jazoff at illinois.edu (Azoff, Justin S) Date: Thu, 1 Mar 2018 13:48:11 +0000 Subject: [Bro] Trying to get a simple detection on certificate hashes to fire In-Reply-To: References: Message-ID: <5D2D1E49-F9F1-4F86-BF9F-DBB2E2CCF262@illinois.edu> > On Mar 1, 2018, at 6:08 AM, Mike Eriksson wrote: > > The hashes I'm using are taken from my x509.log - just to make sure that I tested against something that comes up quite a lot in our environment. I've been using data from the field 'serial' - since there is no actual field called 'hash' in either x509.log or known_certs. > > Have I been using the wrong identifier or is there some 'hash all certs' setting somewhere that I've missed? Ah.. that is where you went wrong.. The hashes for certs end up in files.log (with all other files). It could make sense for it to be in the x509 or known certs log. I know there was some talk about re-doing that log file to be more useful and less verbose. In any case, if you have a cert of interest in the x509.log, you can use the 'id' column to find the corresponding file record in the files.log The files.log has the sha1 column which is the hash you would add to the intel file. If you wanted to see how it is implemented, https://github.com/bro/bro/blob/master/scripts/policy/frameworks/intel/seen/x509.bro is what produces all the intel data from certs. ? Justin Azoff From mike at swedishmike.org Thu Mar 1 06:02:49 2018 From: mike at swedishmike.org (Mike Eriksson) Date: Thu, 01 Mar 2018 14:02:49 +0000 Subject: [Bro] Trying to get a simple detection on certificate hashes to fire In-Reply-To: <5D2D1E49-F9F1-4F86-BF9F-DBB2E2CCF262@illinois.edu> References: <5D2D1E49-F9F1-4F86-BF9F-DBB2E2CCF262@illinois.edu> Message-ID: Justin, Many thanks for that - looking in all the wrong places for the right things as usual. 
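To make Justin's pointer concrete: correlate the x509.log 'id' column with files.log, take the sha1 value there, and use that as the indicator. A sketch of what the intel file would then contain (the hash below is a made-up placeholder, not a real indicator):

```
#fields	indicator	indicator_type	meta.source	meta.do_notice
da39a3ee5e6b4b0d3255bfef95601890afd80709	Intel::CERT_HASH	certstream	T
```

With a sha1 hash as the indicator, the frameworks/intel/seen/x509.bro script linked above will match it against certificates on the wire.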
;) Cheers, Mike On Thu, Mar 1, 2018 at 1:48 PM Azoff, Justin S wrote: > > > On Mar 1, 2018, at 6:08 AM, Mike Eriksson wrote: > > > > The hashes I'm using are taken from my x509.log - just to make sure that > I tested against something that comes up quite a lot in our environment. > I've been using data from the field 'serial' - since there is no actual > field called 'hash' in either x509.log or known_certs. > > > > Have I been using the wrong identifier or is there some 'hash all certs' > setting somewhere that I've missed? > > Ah.. that is where you went wrong.. The hashes for certs end up in > files.log (with all other files). > > It could make sense for it to be in the x509 or known certs log. I know > there was some talk about re-doing that log file to be more useful and less > verbose. > > In any case, if you have a cert of interest in the x509.log, you can use > the 'id' column to find the corresponding file record in the files.log > > The files.log has the sha1 column which is the hash you would add to the > intel file. > > If you wanted to see how it is implemented, > > > https://github.com/bro/bro/blob/master/scripts/policy/frameworks/intel/seen/x509.bro > > is what produces all the intel data from certs. > > > -- > Justin Azoff > > -- website: http://swedishmike.org twitter: https://twitter.com/swedishmike github: http://github.com/swedishmike From roberixion at gmail.com Thu Mar 1 08:37:26 2018 From: roberixion at gmail.com (=?UTF-8?Q?Rober_Fern=C3=A1ndez?=) Date: Thu, 1 Mar 2018 17:37:26 +0100 Subject: [Bro] Reload script Message-ID: Hi, I have a script with a constant. This script is regenerated every time that someone modifies that constant.
In another script I have something like this: @load variable_script ... event ... { # here the constant from variable_script is used } The problem is that the constant does not change every time the script is regenerated. Is there any way to do it? Regards. From ben.bt.wood at gmail.com Fri Mar 2 11:31:56 2018 From: ben.bt.wood at gmail.com (Benjamin Wood) Date: Fri, 2 Mar 2018 14:31:56 -0500 Subject: [Bro] Renaming Every Log -- Bro Script Message-ID: I'm trying to get logs written with an initial time in the file name, and not renamed after rotation. I have several good examples of how to rename an output log; the problem is there are many logs. Is there a way I can iterate through "Log::ID" or some other structure to rename every log? I've not been able to iterate over an enum type so far. Thanks, Ben From scottc at es.net Fri Mar 2 13:37:10 2018 From: scottc at es.net (Scott Campbell) Date: Fri, 2 Mar 2018 13:37:10 -0800 Subject: [Bro] bro policy to identify memcached attacks/participation Message-ID: We have put together some sample bro policy that might be useful in identifying:
1) memcached instances with publicly available TCP ports.
2) UDP connection attempts to 11211/udp.
3) excessive outbound traffic from an IP that has previously had an inbound memcached 'get' request from outside the local address space.
This code is a little green, but can be used to keep an eye on your local network as this problem evolves. Repo can be found here: https://github.com/set-element/bro_memcached_detect If you have any questions please let me know and I will do what I can to help.
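Regarding the reload question above: a value pulled in via @load is fixed when Bro parses the script, so regenerating the file has no effect on a running process. One workaround is to keep the value in a data file and read it with the Input framework in REREAD mode, so edits are picked up at runtime. This is a sketch; the file name and field names are invented:

```bro
type Idx: record {
    key: string;
};

type Val: record {
    value: string;
};

# Refreshed automatically whenever settings.tsv changes on disk.
global settings: table[string] of Val = table();

event bro_init()
    {
    Input::add_table([$source="settings.tsv", $name="settings",
                      $idx=Idx, $val=Val, $destination=settings,
                      $mode=Input::REREAD]);
    }
```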
Any changes or improvements will be gladly integrated into the code as well. Feel free to share with anyone as this is public information. Many thanks! scott ----- Scott Campbell ESnet Security Analyst From jazoff at illinois.edu Fri Mar 2 14:34:50 2018 From: jazoff at illinois.edu (Azoff, Justin S) Date: Fri, 2 Mar 2018 22:34:50 +0000 Subject: [Bro] bro policy to identify memcached attacks/participation In-Reply-To: References: Message-ID: Neat. I kind of have a generic version of this that detects any udp reflection attack, at least the ones we have seen. I've been meaning to make a package for it, I just want to generate some tests first. From research I've done, other than a few endpoints like VPN boxes that can be whitelisted and bittorrent uTP users, any large inbound or outbound udp flows are DoS attacks, especially when orig_h is remote. -- Justin Azoff > On Mar 2, 2018, at 4:37 PM, Scott Campbell wrote: > > We have put together some sample bro policy that might be useful in identifying: > > 1) memcached instances with publicly available TCP ports. > 2) UDP connection attempts to 11211/udp. > 3) excessive outbound traffic from an IP that has previously had an inbound memcached 'get' request from outside the local address space. > > This code is a little green, but can be used to keep an eye on your local network as this problem evolves. > > Repo can be found here: > > https://github.com/set-element/bro_memcached_detect > > If you have any questions please let me know and I will do what I can to help. As well, any changes or improvements will be gladly integrated into the code as well. > > Feel free to share with anyone as this is public information. > > Many thanks!
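The generic heuristic Justin describes might look roughly like this (a sketch; the byte threshold is an invented placeholder to tune per site):

```bro
@load base/utils/site

# Invented threshold; tune for the local environment.
const flood_bytes = 10000000 &redef;

event connection_state_remove(c: connection)
    {
    if ( get_port_transport_proto(c$id$resp_p) != udp )
        return;

    # Large UDP flows with a remote originator are suspect.
    if ( ! Site::is_local_addr(c$id$orig_h) &&
         c$orig$size + c$resp$size > flood_bytes )
        print fmt("possible UDP reflection flow: %s", c$id);
    }
```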
> scott > > ----- > Scott Campbell > ESnet Security Analyst > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From vitaly.repin at gmail.com Sun Mar 4 11:02:54 2018 From: vitaly.repin at gmail.com (Vitaly Repin) Date: Sun, 4 Mar 2018 21:02:54 +0200 Subject: [Bro] User Agent parser in bro In-Reply-To: References: Message-ID: Hello, Here you go: https://github.com/vitalyrepin/uap-bro Installable through bro package manager. I hope, it will be useful not only for my case. 2018-01-29 18:11 GMT+02:00 Seth Hall : > Hi Vitaly! > > I've been wanting to port one of these type of things to Bro for a long > time. That would be a great contribution if you wanted to take that on. I'm > sure that a number of people would find it valuable. I don't know of anyone > in the community that has already done it. > > .Seth > > On 22 Jan 2018, at 1:50, Vitaly Repin wrote: > > Hello, > > I am looking for a way to parse the User Agent string in bro. > > Is anybody aware of any bro scripts which are similar in functionality to > something like ua-parser-js ( https://github.com/faisalman/ua-parser-js ) > or ES user-agent ingest plugin ( https://www.elastic.co/guide/ > en/elasticsearch/plugins/master/ingest-user-agent.html )? > > Thanks in advance! > -- > WBR & WBW, Vitaly > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > -- > Seth Hall * Corelight, Inc * www.corelight.com > -- WBR & WBW, Vitaly -------------- next part -------------- An HTML attachment was scrubbed... 
From jan.grashoefer at gmail.com Mon Mar 5 02:28:16 2018 From: jan.grashoefer at gmail.com (=?UTF-8?Q?Jan_Grash=c3=b6fer?=) Date: Mon, 5 Mar 2018 11:28:16 +0100 Subject: [Bro] Finding Golden Tickets in Kerberos Logs In-Reply-To: References: Message-ID: <258b2f59-7ee0-b0d2-2dc3-2a255e07ae93@gmail.com> On 27/02/18 20:49, brolist at vt.edu wrote: > Does anyone have a reliable method to find Active Directory Golden or > Silver Tickets in the Bro Kerberos logs? I was planning to look into doing > this (maybe based partially on expiration) but wanted to ask the list > first. I appreciate any advice. Please correct me if I am wrong: Golden Tickets are generated using some special account and won't be sent to the "user" like normal TGTs. In that case, keeping track of the issued TGTs might make it possible to detect "self-generated" Golden Tickets. The same should apply to the TGS in the case of Silver Tickets. As far as I know, expiration is usually quite high for Golden/Silver Tickets and thus can be used for detection. However, it should be easy for an attacker to adapt to default expiration times. Jan From jan.grashoefer at gmail.com Mon Mar 5 02:29:43 2018 From: jan.grashoefer at gmail.com (=?UTF-8?Q?Jan_Grash=c3=b6fer?=) Date: Mon, 5 Mar 2018 11:29:43 +0100 Subject: [Bro] Renaming Every Log -- Bro Script In-Reply-To: References: Message-ID: Hi Ben, On 02/03/18 20:31, Benjamin Wood wrote: > Is there a way I can iterate through "Log::ID" or some other structure to > rename every log? In https://github.com/J-Gras/add-json/blob/master/scripts/add-json.bro#L41, Log::active_streams is used to add a filter to all active log streams.
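The approach Jan points to can be sketched like this (illustrative only; the path function below is invented for this example, not taken from add-json):

```bro
function timestamped_path(id: Log::ID, path: string, rec: any): string
    {
    # Illustrative: prefix each log name with the time the file is opened.
    return fmt("%s-%s", strftime("%Y-%m-%d-%H-%M-%S", network_time()), path);
    }

event bro_init() &priority=-5
    {
    # Log::active_streams maps every active Log::ID to its stream,
    # so one loop reaches every log without enumerating the enum by hand.
    for ( id in Log::active_streams )
        {
        local filt = Log::get_filter(id, "default");
        filt$path_func = timestamped_path;
        Log::add_filter(id, filt);
        }
    }
```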
Jan From cgaylord at vt.edu Mon Mar 5 06:07:54 2018 From: cgaylord at vt.edu (Clark Gaylord) Date: Mon, 5 Mar 2018 09:07:54 -0500 Subject: [Bro] Finding Golden Tickets in Kerberos Logs In-Reply-To: References: <258b2f59-7ee0-b0d2-2dc3-2a255e07ae93@gmail.com> Message-ID: True but an important point about them is their lack of expiration, hence the need to redo the TGT credential after exploit. This would probably still be wise, but that is a primary motivation. I agree it would be interesting to audit tickets on the wire to ensure they appear to be consistent with policy. -- Clark Gaylord cgaylord at vt.edu ... Autocorrect may have improved this message Brevity should not be interpreted as curtness ... On Mar 5, 2018 05:36, "Jan Grash?fer" wrote: On 27/02/18 20:49, brolist at vt.edu wrote: > Does anyone have a reliable method to find Active Directory Golden or > Silver Tickets in the Bro Kerberos logs? I was planning to look into doing > this (maybe based partially on expiration) but wanted to ask the list > first. I appreciate any advice. Please correct me if I am wrong: Golden Tickts are generated using some special account and won't be sent to the "user" like normal TGTs. In that case, keeping track of the issued TGTs might allow to detect "self-generated" Golden Tickets. The same should apply for TGS in case of Silver Tickets. As far as I know, expiration is usually quite high for Golden/Silver Tickets and thus can be used for detection. However, it should be easy for an attacker to adapt to default expiration times. Jan _______________________________________________ Bro mailing list bro at bro-ids.org http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180305/4b606bc2/attachment.html From ben.bt.wood at gmail.com Mon Mar 5 06:58:58 2018 From: ben.bt.wood at gmail.com (Benjamin Wood) Date: Mon, 5 Mar 2018 09:58:58 -0500 Subject: [Bro] Renaming Every Log -- Bro Script In-Reply-To: References: Message-ID: Jan, Thank you very much. This is exactly what I needed. ~Ben On Mon, Mar 5, 2018 at 5:29 AM, Jan Grash?fer wrote: > Hi Ben, > > On 02/03/18 20:31, Benjamin Wood wrote: > >> Is there a way I can iterate through "Log::ID" or some other structure to >> rename every log? >> > > in https://github.com/J-Gras/add-json/blob/master/scripts/add-json.bro#L41 > Log::active_streams is used to add a filter to all active log streams. > > Jan > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180305/dd7fbf3d/attachment.html From pkelley at hyperionavenue.com Mon Mar 5 11:32:30 2018 From: pkelley at hyperionavenue.com (Patrick Kelley) Date: Mon, 5 Mar 2018 14:32:30 -0500 Subject: [Bro] Finding Golden Tickets in Kerberos Logs In-Reply-To: References: <258b2f59-7ee0-b0d2-2dc3-2a255e07ae93@gmail.com> Message-ID: If it helps, when I was recreating the attacks in MetaSploit, EMPIRE, and on engagements, I noticed following: /DRSGetNCChanges.*/ |/DRSCrackNames.*/ event dce_rpc_request(c: connection, fid: count, opnum: count, stub_len: count) &priority=5 When I observe that sort of traffic, not associated to a known AD controller, I use it as the IOC. I'm sure there is a far better way, but that's my initial stab. If you want a link to my detection script, I'll share it. On Mon, Mar 5, 2018 at 9:07 AM, Clark Gaylord wrote: > True but an important point about them is their lack of expiration, hence > the need to redo the TGT credential after exploit. This would probably > still be wise, but that is a primary motivation. 
I agree it would be > interesting to audit tickets on the wire to ensure they appear to be > consistent with policy. > > -- > Clark Gaylord > cgaylord at vt.edu > ... Autocorrect may have improved this message > Brevity should not be interpreted as curtness ... > > On Mar 5, 2018 05:36, "Jan Grash?fer" wrote: > > On 27/02/18 20:49, brolist at vt.edu wrote: > > Does anyone have a reliable method to find Active Directory Golden or > > Silver Tickets in the Bro Kerberos logs? I was planning to look into > doing > > this (maybe based partially on expiration) but wanted to ask the list > > first. I appreciate any advice. > > Please correct me if I am wrong: Golden Tickts are generated using some > special account and won't be sent to the "user" like normal TGTs. In > that case, keeping track of the issued TGTs might allow to detect > "self-generated" Golden Tickets. The same should apply for TGS in case > of Silver Tickets. > > As far as I know, expiration is usually quite high for Golden/Silver > Tickets and thus can be used for detection. However, it should be easy > for an attacker to adapt to default expiration times. > > Jan > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -- Patrick Kelley Hyperion Avenue Labs http://www.hyperionavenue.com 951.291.8310 *The limit to which you have accepted being comfortable is the limit to which you have grown. Accept new challenges as an opportunity to enrich yourself and not as a point of potential failure.* [image: hal_logo] -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180305/725f08fc/attachment-0001.html -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image001.png Type: image/png Size: 12155 bytes Desc: not available Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180305/725f08fc/attachment-0001.bin From krasinski at cines.fr Tue Mar 6 01:45:42 2018 From: krasinski at cines.fr (Nicolas KRASINSKI) Date: Tue, 6 Mar 2018 10:45:42 +0100 (CET) Subject: [Bro] arp_main.bro : hexa code in arp_states result In-Reply-To: References: <258b2f59-7ee0-b0d2-2dc3-2a255e07ae93@gmail.com> Message-ID: <109693414.2574290.1520329541986.JavaMail.zimbra@cines.fr> Hello, In the arp_main.bro script ( https://gist.github.com/grigorescu/a28b814a8fb626e2a7b4715d278198aa ), the global "arp_states" gives this weird result: [mac_addr=00:2c:7h:40:55:55, ip_addr=192.82.180.62, assoc_ips={ \x0a\x09 192.168.3.254, \x0a\x09 192.168.1.254, \x0a\x09 195.83.180.62, \x0a\x09 192.168.2.254 \x0a }, requests={ \x0a\x0a }] The part of the arp_main script with the global arp_states:
----------------------
type State: record {
    mac_addr: string;
    ip_addr: addr;
    assoc_ips: set[addr];
    requests: table[string, addr, addr] of Info &create_expire = 1 min &expire_func = expired_request;
};
global arp_states: table[string] of State;
--------------------
The "assoc_ips" field is a set[addr] and "requests" is a table, so I don't understand why I have hex codes (\x0a = line feed, \x09 = horizontal tab) in the result? Thank you for your help, Best regards, Nicolas KRASINSKI.
From jlay at slave-tothe-box.net Tue Mar 6 08:29:41 2018 From: jlay at slave-tothe-box.net (James Lay) Date: Tue, 06 Mar 2018 09:29:41 -0700 Subject: [Bro] Extract only certain files types In-Reply-To: References: Message-ID: <58aba46710c7a9ef897ec0fd03d2126e@localhost> This took me way too long to get to, sorry..here's what I have for my smtp extract...should help:

global ext_map: table[string] of string = {
    # MIME type -> file extension
    ["application/x-dosexec"] = "exe",
    ["application/zip"] = "zip",
    ["application/msword"] = "doc",
    ["application/vnd.openxmlformats-officedocument.wordprocessingml.document"] = "docx",
    ["application/vnd.openxmlformats-officedocument.spreadsheetml.sheet"] = "xlsx",
    ["application/vnd.openxmlformats-officedocument.presentationml.presentation"] = "pptx"
};

event file_sniff(f: fa_file, meta: fa_metadata)
    {
    if ( f$source != "SMTP" )
        return;
    if ( ! meta?$mime_type || meta$mime_type !in ext_map )
        return;
    local ext = ext_map[meta$mime_type];
    local fname = fmt("%s-%s.%s", f$source, f$id, ext);
    Files::add_analyzer(f, Files::ANALYZER_EXTRACT, [$extract_filename=fname]);
    }

James On 2018-02-16 04:47, Fernandez, Mark I wrote: > Ambros, > >>> What should the extract-all-files.bro look like in order to >>> only extract pdf, exe, doc and docx? > > The fa_metadata record contains the MIME type. Using the MIME type, > you can make a condition on whether or not to extract the file.
> > Mark > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From pssunu6 at gmail.com Tue Mar 6 21:58:28 2018 From: pssunu6 at gmail.com (ps sunu) Date: Wed, 7 Mar 2018 11:28:28 +0530 Subject: [Bro] bro policy to identify memcached attacks/participation In-Reply-To: References: Message-ID: after activating this script i am getting below warning and bro not starting warning in /opt/data/behavior/spool/tmp/check-config-worker-1-1/local-networks.bro, lines 41-42: multiple initializations for index (207.17.136.32/27) warning in /opt/data/behavior/spool/tmp/check-config-worker-1-1/local-networks.bro, lines 57-58: multiple initializations for index (207.17.136.64/26) warning in /opt/data/behavior/spool/tmp/check-config-worker-1-1/local-networks.bro, lines 70-71: multiple initializations for index (207.17.137.0/24) On Sat, Mar 3, 2018 at 4:04 AM, Azoff, Justin S wrote: > Neat. I kind of have a generic version of this that detects any udp > reflection attack, at least the ones we have seen. > > I've been meaning to make a package for it, I just want to generate some > tests first. > > From research I've done, other than a few endpoints like VPN boxes that > can be whitelisted and bittorrent > uTP users, any large inbound or outbound udp flows are DoS attacks, > especially when orig_h is remote. > > ? > Justin Azoff > > > On Mar 2, 2018, at 4:37 PM, Scott Campbell wrote: > > > > We have put together some sample bro policy that might be useful in > identifying: > > > > 1) memcached instances with publicly available TCP ports. > > 2) UDP connection attempts to 11211/udp. > > 3) excessive outbound traffic from an IP that has previously had an > inbound memcached 'get' request from outside the local address space. > > > > This code is a little green, but can be used to keep an eye on your > local network as this problem evolves. 
> > > > Repo can be found here: > > > > https://github.com/set-element/bro_memcached_detect > > > > If you have any questions please let me know and I will do what I can to > help. As well, any changes or improvements will be gladly integrated into > the code as well. > > > > Feel free to share with anyone as this is public information. > > > > Many thanks! > > scott > > > > ----- > > Scott Campbell > > ESnet Security Analyst > > _______________________________________________ > > Bro mailing list > > bro at bro-ids.org > > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From stu.h at live.com Wed Mar 7 09:00:42 2018 From: stu.h at live.com (Stuart H) Date: Wed, 7 Mar 2018 17:00:42 +0000 Subject: [Bro] Link to latest version on download archive Message-ID: <9160931E-8279-4799-8FC4-E9D4BF6F2B0C@contoso.com> Hi there, I'm starting to write a script to build and deploy Bro and was wondering if there is a link to the latest version of Bro in the archives? If not, would it be possible to create a symlink to the latest release? Perhaps something like Bro-latest-Linux-x86_64.deb Thanks, Stu From fatema.bannatwala at gmail.com Wed Mar 7 16:16:40 2018 From: fatema.bannatwala at gmail.com (fatema bannatwala) Date: Wed, 7 Mar 2018 19:16:40 -0500 Subject: [Bro] bro policy to identify memcached attacks/participation Message-ID: I don't think the errors have anything to do with the script. They indicate that those CIDRs are initialized multiple times.
Check where you define the local networks on master and see if there are any duplicates, I think it's the networks.cfg file on master. From jlay at slave-tothe-box.net Fri Mar 9 12:54:50 2018 From: jlay at slave-tothe-box.net (James Lay) Date: Fri, 09 Mar 2018 13:54:50 -0700 Subject: [Bro] Detecting remote powershell In-Reply-To: References: <1518613437.2390.1.camel@slave-tothe-box.net> Message-ID: So at the end of the day, unencrypted remote powershell goes over tcp port 5985, WinRM style:

POST /wsman?PSVersion=5.1.14393.1944 HTTP/1.1
Connection: Keep-Alive
Content-Type: application/soap+xml;charset=UTF-8
Authorization: Kerberos
User-Agent: Microsoft WinRM Client
Content-Length: 0
Host: bleh:5985

HTTP/1.1 401
Server: Microsoft-HTTPAPI/2.0
WWW-Authenticate: Negotiate
WWW-Authenticate: Kerberos
Date: Fri, 16 Feb 2018 18:17:04 GMT
Connection: close
Content-Length: 0

So any chance we can get 5985 added to the list of "http" ports to parse, thank you. James On 2018-02-16 11:32, James Dickenson wrote: > I don't believe I've seen any work in this regard for Bro, it would be great if someone invested the time to build something. I do know that there is the Attack Detection team that have been submitting a lot of powershell,empire,etc based rules to the ET ruleset for Snort/Suricata. > > -James D. > > On Wed, Feb 14, 2018 at 5:03 AM, James Lay wrote: > >> Hey All, >> >> Topic really...has anyone put some work/sigs into detecting remote powershell? Figured I'd start here first...thank you. >> >> James >> _______________________________________________ >> Bro mailing list >> bro at bro-ids.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro [1] Links: ------ [1] http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -------------- next part -------------- An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180309/dbc19bd2/attachment.html From patrick.kelley at criticalpathsecurity.com Fri Mar 9 13:27:16 2018 From: patrick.kelley at criticalpathsecurity.com (Patrick Kelley) Date: Fri, 9 Mar 2018 16:27:16 -0500 Subject: [Bro] Detecting remote powershell In-Reply-To: References: <1518613437.2390.1.camel@slave-tothe-box.net> Message-ID: I'd like to see this as well. Though most of the data we observe is encrypted, previously I've created events or pushing to a new log where observed. Such as... ##### const winrm_user_agent = /WinRM.*/; } const winrm_port: set[port] = { 5985/tcp, 5986/tcp, }; event http_header (c: connection, is_orig: bool, name: string, value: string) &priority=5 { if ( ! c?$http ) return; if ( ! c$http?$user_agent ) return; ##### On Fri, Mar 9, 2018 at 3:54 PM, James Lay wrote: > So at the end of the day, unencrypted remote powershell goes over tcp port > 5985, WinRMI style: > > > > POST /wsman?PSVersion=5.1.14393.1944 HTTP/1.1 > Connection: Keep-Alive > Content-Type: application/soap+xml;charset=UTF-8 > Authorization: Kerberos > User-Agent: Microsoft WinRM Client > Content-Length: 0 > Host: bleh:5985 > > HTTP/1.1 401 > Server: Microsoft-HTTPAPI/2.0 > WWW-Authenticate: Negotiate > WWW-Authenticate: Kerberos > Date: Fri, 16 Feb 2018 18:17:04 GMT > Connection: close > Content-Length: 0 > > > > So any chance we can get 5985 added to the list of "http" ports to parse, > thank you. > > James > > > On 2018-02-16 11:32, James Dickenson wrote: > > > > I don't believe I've seen any work in this regard for Bro, it would be > great if someone invested the time to build something. I do know that > there is the Attack Detection team that have been submitting a lot of > powershell,empire,etc based rules to the ET ruleset for Snort/Suricata. > > > -James D. 
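A sketch combining the two ideas in this thread: register 5985/tcp with the HTTP analyzer (per James's request above) and raise a notice on the WinRM user agent, roughly completing Patrick's fragment (which carries a stray `}` after the first const). The module and notice names are invented for this sketch:

```bro
@load base/frameworks/notice
@load base/protocols/http

module WinRM;

export {
    redef enum Notice::Type += {
        Client_Seen  # invented name for this sketch
    };
}

const winrm_user_agent = /WinRM.*/ &redef;
const winrm_ports: set[port] = { 5985/tcp, 5986/tcp } &redef;

# Only the unencrypted port is worth handing to the HTTP analyzer.
const winrm_http_ports: set[port] = { 5985/tcp } &redef;

event bro_init()
    {
    Analyzer::register_for_ports(Analyzer::ANALYZER_HTTP, winrm_http_ports);
    }

event http_header(c: connection, is_orig: bool, name: string, value: string) &priority=5
    {
    # Bro delivers header names uppercased.
    if ( ! is_orig || name != "USER-AGENT" )
        return;

    if ( c$id$resp_p in winrm_ports && winrm_user_agent in value )
        NOTICE([$note=Client_Seen,
                $msg=fmt("WinRM client observed: %s", value),
                $conn=c,
                $identifier=cat(c$id$orig_h, c$id$resp_h)]);
    }
```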
> > > On Wed, Feb 14, 2018 at 5:03 AM, James Lay > wrote: > >> Hey All, >> >> Topic really...has anyone put some work/sigs into detecting remote >> powershell? Figured I'd start here first...thank you. >> >> James >> >> _______________________________________________ >> Bro mailing list >> bro at bro-ids.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -- *Patrick Kelley, CISSP, C|EH, ITIL* *CTO* patrick.kelley at criticalpathsecurity.com (o) 770-224-6482 -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180309/58a35367/attachment.html From neslog at gmail.com Fri Mar 9 13:28:25 2018 From: neslog at gmail.com (Neslog) Date: Fri, 9 Mar 2018 16:28:25 -0500 Subject: [Bro] Detecting remote powershell In-Reply-To: References: <1518613437.2390.1.camel@slave-tothe-box.net> Message-ID: We have had good success combining ja 3 TLS fingerprinting with server certificate information to identify anomalous traffic. On Feb 16, 2018 1:53 PM, "James Dickenson" wrote: > > > I don't believe I've seen any work in this regard for Bro, it would be > great if someone invested the time to build something. I do know that > there is the Attack Detection team that have been submitting a lot of > powershell,empire,etc based rules to the ET ruleset for Snort/Suricata. > > > -James D. > > > On Wed, Feb 14, 2018 at 5:03 AM, James Lay > wrote: > >> Hey All, >> >> Topic really...has anyone put some work/sigs into detecting remote >> powershell? Figured I'd start here first...thank you. 
>> >> James >> >> _______________________________________________ >> Bro mailing list >> bro at bro-ids.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >> > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180309/920bda4b/attachment.html From lagoon7 at gmail.com Sun Mar 11 21:56:11 2018 From: lagoon7 at gmail.com (Ludwig Goon) Date: Mon, 12 Mar 2018 00:56:11 -0400 Subject: [Bro] bro crashes with memory error Message-ID: Running bro 2.5.3 on ubuntu 17.10. I have a 8Gb of memory on an intel based platform. This is a single mode instance and I compiled 2.5.3 from scratch. I get the following when using broctl deploy src/central_freelist.cc:333] tcmalloc: allocation failed 16384 src/central_freelist.cc:333] tcmalloc: allocation failed 8192 out of memory in new. fatal error: out of memory in new. out of memory in new. fatal error: out of memory in new. If I run with pf_ring as the load balancer the manager crashes but bro processes will run until time to rotate files. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180312/106e6498/attachment.html From carlopmart at gmail.com Mon Mar 12 00:22:37 2018 From: carlopmart at gmail.com (C. L. Martinez) Date: Mon, 12 Mar 2018 08:22:37 +0100 Subject: [Bro] Bro Time Machine is EOL? Message-ID: Hi all, Is Time Machine EOL? Is it possible accomplish packet capture with Bro or do I need an external software like tcpdump, netsniff, etc? Thanks. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180312/070ab522/attachment.html From zeolla at gmail.com Mon Mar 12 03:05:14 2018 From: zeolla at gmail.com (Zeolla@GMail.com) Date: Mon, 12 Mar 2018 10:05:14 +0000 Subject: [Bro] Bro Time Machine is EOL? In-Reply-To: References: Message-ID: As far as I'm aware, yes. Some alternatives to consider are gotm[1], Apache Metron, or stenographer. 1: https://github.com/JustinAzoff/gotm Jon On Mon, Mar 12, 2018, 03:32 C. L. Martinez wrote: > Hi all, > > Is Time Machine EOL? Is it possible accomplish packet capture with Bro or > do I need an external software like tcpdump, netsniff, etc? > > Thanks. > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -- Jon -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180312/d21e6ca3/attachment.html From philosnef at gmail.com Mon Mar 12 11:38:28 2018 From: philosnef at gmail.com (erik clark) Date: Mon, 12 Mar 2018 14:38:28 -0400 Subject: [Bro] gridftp Message-ID: I am looking for the guy at LBL that deals with gridftp. I can't find his contact info. If someone could connect me that would be great. Thanks! Erik -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180312/ba0edc39/attachment.html From slagell at illinois.edu Mon Mar 12 11:45:59 2018 From: slagell at illinois.edu (Slagell, Adam J) Date: Mon, 12 Mar 2018 18:45:59 +0000 Subject: [Bro] gridftp In-Reply-To: References: Message-ID: <4348BDE7-5F0D-4219-B7F9-C33A71832046@illinois.edu> You are probably thinking of Aashish, who is using Justin's Dumbno here at NCSA. Either one could probably help. Send me a note directly if you can't find contact info.
On Mar 12, 2018, at 1:38 PM, erik clark > wrote: I am looking for the guy at LBL that deals with gridftp. I can't find his contact info. If someone could connect me that would be great. Thanks! Erik _______________________________________________ Bro mailing list bro at bro-ids.org http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro ------ Adam J. Slagell Director, Cybersecurity & Networking Division Chief Information Security Officer National Center for Supercomputing Applications University of Illinois at Urbana-Champaign www.slagell.info "Under the Illinois Freedom of Information Act (FOIA), any written communication to or from University employees regarding University business is a public record and may be subject to public disclosure." -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180312/2bf8ac47/attachment.html From philosnef at gmail.com Mon Mar 12 11:56:35 2018 From: philosnef at gmail.com (erik clark) Date: Mon, 12 Mar 2018 14:56:35 -0400 Subject: [Bro] gridftp In-Reply-To: <4348BDE7-5F0D-4219-B7F9-C33A71832046@illinois.edu> References: <4348BDE7-5F0D-4219-B7F9-C33A71832046@illinois.edu> Message-ID: Thanks all, found him. Just needed to know his first name looks like. (Waves at Aashish). On Mon, Mar 12, 2018 at 2:45 PM, Slagell, Adam J wrote: > You are probably thinking of Aashish, who is using Justin?s Dumbno here at > NCSA. Either one could probably help. Send me a note directly if you can?t > find contact info. > > On Mar 12, 2018, at 1:38 PM, erik clark wrote: > > I am looking for the guy at LBL that deals with gridftp. I can't find his > contact info. If someone could connect me that would be great. Thanks! > > Erik > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > > ------ > > Adam J. 
Slagell > Director, Cybersecurity & Networking Division > Chief Information Security Officer > National Center for Supercomputing Applications > University of Illinois at Urbana-Champaign > www.slagell.info > > "Under the Illinois Freedom of Information Act (FOIA), any written > communication to or from University employees regarding University business > is a public record and may be subject to public disclosure." > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180312/315dee3a/attachment-0001.html From asharma at lbl.gov Mon Mar 12 13:53:02 2018 From: asharma at lbl.gov (Aashish Sharma) Date: Mon, 12 Mar 2018 13:53:02 -0700 Subject: [Bro] Bro Time Machine is EOL? In-Reply-To: References: Message-ID: <20180312205300.GH59049@MacPro-2331.local> > Is Time Machine EOL? Is it possible accomplish packet capture with Bro or Not quite. Atleast LBNL isn't letting it EOL. We had a very sharp student Naoki Eto work/upgrade/optimize it a couple years ago: Naoki's branch : topic/naokieto/ipv6 branch. I made some some minor tweaks related to VLANs and we use topic/aashish/ipv6 Naoki's or my branch has very stable code - has IPv6 support built in, also a ton of optimizations in performance. LBL uses this code for production and this branch been running easily for 3+ years with < 1G mem and < 9% CPU with 0.02% cummulative packet drops on our external-DMZ taps. We don't use indexes. Also, I have two bro scripts which if enabled help estimate what cutoffs you should setup in your network for gaining 99.999% coverage for each bucket. And a python script which does similar counts on bro's connection logs. https://github.com/initconf/timemachine-conf-scripts SO yea, timemachine is very much in production and doing well. I just couldn't get Naoki's branch merged into master. But use naoki (or my branch) and you'd have pretty stable and IPv6 support code. 
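The cutoff-estimation idea Aashish describes, counting connection sizes to pick a per-class cutoff, can be sketched roughly as follows. This is an illustration only, not the code from timemachine-conf-scripts, and real conn.log field parsing is omitted:

```python
def estimate_cutoff(conn_sizes, coverage=0.999):
    """Smallest per-connection byte cutoff that fully captures `coverage`
    of the connections. `conn_sizes` would come from summing
    orig_bytes + resp_bytes per row of Bro's conn.log."""
    if not conn_sizes:
        return 0
    ordered = sorted(conn_sizes)
    # Index of the connection sitting at the requested coverage fraction.
    idx = min(len(ordered) - 1, int(coverage * len(ordered)))
    return ordered[idx]

# e.g. with 100 connections of 1..100 bytes, a 90% target needs a 91-byte cutoff
```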
let me know if you have any related questions. Thanks, Aashish On Mon, Mar 12, 2018 at 08:22:37AM +0100, C. L. Martinez wrote: > Hi all, > > Is Time Machine EOL? Is it possible accomplish packet capture with Bro or > do I need an external software like tcpdump, netsniff, etc? > > Thanks. > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From shirkdog.bsd at gmail.com Tue Mar 13 03:42:51 2018 From: shirkdog.bsd at gmail.com (Michael Shirk) Date: Tue, 13 Mar 2018 06:42:51 -0400 Subject: [Bro] Bro Time Machine is EOL? In-Reply-To: <20180312205300.GH59049@MacPro-2331.local> References: <20180312205300.GH59049@MacPro-2331.local> Message-ID: Aashish, are you running this on FreeBSD 10? I ran into an issue with building on FreeBSD 11 and 12-CURRENT that I have not had time to debug. The code built fine on 10.3. On Mon, Mar 12, 2018 at 4:53 PM, Aashish Sharma wrote: >> Is Time Machine EOL? Is it possible accomplish packet capture with Bro or > > Not quite. Atleast LBNL isn't letting it EOL. We had a very sharp student Naoki > Eto work/upgrade/optimize it a couple years ago: > > Naoki's branch : topic/naokieto/ipv6 branch. > > I made some some minor tweaks related to VLANs and we use topic/aashish/ipv6 > > Naoki's or my branch has very stable code - has IPv6 support built in, also a > ton of optimizations in performance. LBL uses this code for production and this > branch been running easily for 3+ years with < 1G mem and < 9% CPU with 0.02% > cummulative packet drops on our external-DMZ taps. > > We don't use indexes. > > Also, I have two bro scripts which if enabled help estimate what cutoffs you > should setup in your network for gaining 99.999% coverage for each bucket. And a > python script which does similar counts on bro's connection logs. > > https://github.com/initconf/timemachine-conf-scripts > > SO yea, timemachine is very much in production and doing well. 
I just couldn't > get Naoki's branch merged into master. But use naoki (or my branch) and you'd > have pretty stable and IPv6 support code. > > let me know if you have any related questions. > > Thanks, > Aashish > > > On Mon, Mar 12, 2018 at 08:22:37AM +0100, C. L. Martinez wrote: >> Hi all, >> >> Is Time Machine EOL? Is it possible accomplish packet capture with Bro or >> do I need an external software like tcpdump, netsniff, etc? >> >> Thanks. > >> _______________________________________________ >> Bro mailing list >> bro at bro-ids.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -- Michael Shirk Daemon Security, Inc. http://www.daemon-security.com From carlopmart at gmail.com Tue Mar 13 04:06:17 2018 From: carlopmart at gmail.com (C. L. Martinez) Date: Tue, 13 Mar 2018 12:06:17 +0100 Subject: [Bro] Bro Time Machine is EOL? In-Reply-To: References: <20180312205300.GH59049@MacPro-2331.local> Message-ID: Yep, I am really interested because I will run this on FreeBSD too ... On Tue, Mar 13, 2018 at 11:42 AM, Michael Shirk wrote: > Aashish, are you running this on FreeBSD 10? I ran into an issue with > building on FreeBSD 11 and 12-CURRENT that I have not had time to > debug. The code built fine on 10.3. > > On Mon, Mar 12, 2018 at 4:53 PM, Aashish Sharma wrote: > >> Is Time Machine EOL? Is it possible accomplish packet capture with Bro > or > > > > Not quite. Atleast LBNL isn't letting it EOL. We had a very sharp > student Naoki > > Eto work/upgrade/optimize it a couple years ago: > > > > Naoki's branch : topic/naokieto/ipv6 branch. > > > > I made some some minor tweaks related to VLANs and we use > topic/aashish/ipv6 > > > > Naoki's or my branch has very stable code - has IPv6 support built in, > also a > > ton of optimizations in performance. 
LBL uses this code for production > and this > > branch been running easily for 3+ years with < 1G mem and < 9% CPU with > 0.02% > > cummulative packet drops on our external-DMZ taps. > > > > We don't use indexes. > > > > Also, I have two bro scripts which if enabled help estimate what cutoffs > you > > should setup in your network for gaining 99.999% coverage for each > bucket. And a > > python script which does similar counts on bro's connection logs. > > > > https://github.com/initconf/timemachine-conf-scripts > > > > SO yea, timemachine is very much in production and doing well. I just > couldn't > > get Naoki's branch merged into master. But use naoki (or my branch) and > you'd > > have pretty stable and IPv6 support code. > > > > let me know if you have any related questions. > > > > Thanks, > > Aashish > > > > > > On Mon, Mar 12, 2018 at 08:22:37AM +0100, C. L. Martinez wrote: > >> Hi all, > >> > >> Is Time Machine EOL? Is it possible accomplish packet capture with Bro > or > >> do I need an external software like tcpdump, netsniff, etc? > >> > >> Thanks. > > > >> _______________________________________________ > >> Bro mailing list > >> bro at bro-ids.org > >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > > > _______________________________________________ > > Bro mailing list > > bro at bro-ids.org > > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > > > -- > Michael Shirk > Daemon Security, Inc. > http://www.daemon-security.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180313/0f150186/attachment.html From asharma at lbl.gov Tue Mar 13 06:46:24 2018 From: asharma at lbl.gov (Aashish Sharma) Date: Tue, 13 Mar 2018 06:46:24 -0700 Subject: [Bro] Bro Time Machine is EOL? In-Reply-To: References: <20180312205300.GH59049@MacPro-2331.local> Message-ID: <20180313134622.GB72331@MacPro-2331.local> Michael, Yes, we run on 10.3. 
I haven't tried building on 11 or 12-CURRENT yet. I believe someone else also mentioned this to me a bit ago. On my to-do list to see what's going on wrt builds. I am guessing it's a gcc vs clang issue. Aashish On Tue, Mar 13, 2018 at 06:42:51AM -0400, Michael Shirk wrote: > Aashish, are you running this on FreeBSD 10? I ran into an issue with > building on FreeBSD 11 and 12-CURRENT that I have not had time to > debug. The code built fine on 10.3. > > On Mon, Mar 12, 2018 at 4:53 PM, Aashish Sharma wrote: > >> Is Time Machine EOL? Is it possible accomplish packet capture with Bro or > > > > Not quite. Atleast LBNL isn't letting it EOL. We had a very sharp student Naoki > > Eto work/upgrade/optimize it a couple years ago: > > > > Naoki's branch : topic/naokieto/ipv6 branch. > > > > I made some some minor tweaks related to VLANs and we use topic/aashish/ipv6 > > > > Naoki's or my branch has very stable code - has IPv6 support built in, also a > > ton of optimizations in performance. LBL uses this code for production and this > > branch been running easily for 3+ years with < 1G mem and < 9% CPU with 0.02% > > cummulative packet drops on our external-DMZ taps. > > > > We don't use indexes. > > > > Also, I have two bro scripts which if enabled help estimate what cutoffs you > > should setup in your network for gaining 99.999% coverage for each bucket. And a > > python script which does similar counts on bro's connection logs. > > > > https://github.com/initconf/timemachine-conf-scripts > > > > SO yea, timemachine is very much in production and doing well. I just couldn't > > get Naoki's branch merged into master. But use naoki (or my branch) and you'd > > have pretty stable and IPv6 support code. > > > > let me know if you have any related questions. > > > > Thanks, > > Aashish > > > > > > On Mon, Mar 12, 2018 at 08:22:37AM +0100, C. L. Martinez wrote: > >> Hi all, > >> > >> Is Time Machine EOL?
Is it possible accomplish packet capture with Bro or > >> do I need an external software like tcpdump, netsniff, etc? > >> > >> Thanks. > > > >> _______________________________________________ > >> Bro mailing list > >> bro at bro-ids.org > >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > > > _______________________________________________ > > Bro mailing list > > bro at bro-ids.org > > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > > > -- > Michael Shirk > Daemon Security, Inc. > http://www.daemon-security.com From asharma at lbl.gov Tue Mar 13 17:12:43 2018 From: asharma at lbl.gov (Aashish Sharma) Date: Tue, 13 Mar 2018 17:12:43 -0700 Subject: [Bro] gridftp In-Reply-To: References: <4348BDE7-5F0D-4219-B7F9-C33A71832046@illinois.edu> Message-ID: <20180314001241.GN72331@MacPro-2331.local> > (Waves at Aashish). Greetings Erik! On Mon, Mar 12, 2018 at 02:56:35PM -0400, erik clark wrote: > Thanks all, found him. Just needed to know his first name looks like. > (Waves at Aashish). > > On Mon, Mar 12, 2018 at 2:45 PM, Slagell, Adam J > wrote: > > > You are probably thinking of Aashish, who is using Justin?s Dumbno here at > > NCSA. Either one could probably help. Send me a note directly if you can?t > > find contact info. > > > > On Mar 12, 2018, at 1:38 PM, erik clark wrote: > > > > I am looking for the guy at LBL that deals with gridftp. I can't find his > > contact info. If someone could connect me that would be great. Thanks! > > > > Erik > > _______________________________________________ > > Bro mailing list > > bro at bro-ids.org > > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > > > > > ------ > > > > Adam J. 
Slagell > > Director, Cybersecurity & Networking Division > > Chief Information Security Officer > > National Center for Supercomputing Applications > > University of Illinois at Urbana-Champaign > > www.slagell.info > > > > "Under the Illinois Freedom of Information Act (FOIA), any written > > communication to or from University employees regarding University business > > is a public record and may be subject to public disclosure." > > > > > > > > > > > > > > > > > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From seth at corelight.com Thu Mar 15 08:41:20 2018 From: seth at corelight.com (Seth Hall) Date: Thu, 15 Mar 2018 11:41:20 -0400 Subject: [Bro] Detecting remote powershell In-Reply-To: References: <1518613437.2390.1.camel@slave-tothe-box.net> Message-ID: <349196C8-635D-4EA8-8648-13EA0952EAB7@corelight.com> On 9 Mar 2018, at 15:54, James Lay wrote: > So any chance we can get 5985 added to the list of "http" ports to > parse, thank you. No need. Bro should automatically detect HTTP and add the analyzer. If it isn't working correctly then I think we can view that as a bug. .Seth -- Seth Hall * Corelight, Inc * www.corelight.com From dwdixon at umich.edu Thu Mar 15 12:27:45 2018 From: dwdixon at umich.edu (Drew Dixon) Date: Thu, 15 Mar 2018 15:27:45 -0400 Subject: [Bro] redef LogExpireInterval with JSON log writer? Message-ID: I'd like to switch to writing both tab-delimited logs and JSON logs with my smaller bro cluster, but I would like the JSON logs to expire and get removed at a much shorter "LogExpireInterval" than my tab delimited logs. I see this is possible with the add-json package... 
I've looked at both J-Gras' add-json and Seth's json-streaming-logs (both are great) but I've been looking more at add-json since it seems like it's more along the lines of what I was thinking and I see I can set the rotation interval for the JSON writer by redefining the Log::default_rotation_interval option but I don't see a way to extend add-json with a redef-able option for the log expire interval? I also realize I could probably just script this with a shell script or python script to remove the archived JSON logs by leveraging the shorter rotation interval for JSON logs but I thought it would be nice to do right in the add-json package script. Is a redef-able option for the log expire interval something that might be added in a future version of bro? Is there a way to do this now that I'm just missing? Is LogExpireInterval only available for broctl/broctl.cfg? https://www.bro.org/sphinx-git/scripts/base/frameworks/logging/main.bro.html#id-Log::default_rotation_interval https://www.bro.org/sphinx-git/frameworks/logging.html#rotation -Drew -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180315/0aca5678/attachment.html From seth at corelight.com Thu Mar 15 14:48:33 2018 From: seth at corelight.com (Seth Hall) Date: Thu, 15 Mar 2018 17:48:33 -0400 Subject: [Bro] redef LogExpireInterval with JSON log writer? In-Reply-To: References: Message-ID: On 15 Mar 2018, at 15:27, Drew Dixon wrote: > Is a redef-able option for the log expire interval something that > might be added in a future version of bro?? Is there a way to do this > now that I'm just missing??Is?LogExpireInterval?only available for > broctl/broctl.cfg??? What you set with broctl is just the global filter. If you look at the json-streaming-logs package (link included below), you can see that I'm setting a custom rotation interval separately from the global default rotation interval. 
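A minimal sketch of that kind of per-filter setup, in the spirit of what json-streaming-logs does (the filter name, path, and 15-minute interval here are illustrative assumptions):

```bro
@load base/protocols/conn

event bro_init()
    {
    # Add a second, JSON-emitting filter to conn.log with its own
    # rotation interval, independent of Log::default_rotation_interval.
    local f: Log::Filter = [$name="json-conn",
                            $writer=Log::WRITER_ASCII,
                            $path="json_streaming_conn",
                            $interv=15min,
                            $config=table(["use_json"] = "T")];
    Log::add_filter(Conn::LOG, f);
    }
```

The `$config` table passes the `use_json` option through to the ASCII writer, which is the same mechanism the add-json package uses.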
If you are looking to duplicate logging, you're going to be doing something similar to what json-streaming-logs is doing. I'm curious if json-streaming-logs doesn't do what you need to. It's possible that if what you need conceptually fits into that package I could just add it there. From how you described your problem, it sounds like json-streaming-logs might already do what you need? Here's the link to how I'm setting a custom rotation interval for a log filter that I referenced above: https://github.com/corelight/json-streaming-logs/blob/master/scripts/main.bro#L72 .Seth -- Seth Hall * Corelight, Inc * www.corelight.com From seth at corelight.com Thu Mar 15 14:50:06 2018 From: seth at corelight.com (Seth Hall) Date: Thu, 15 Mar 2018 17:50:06 -0400 Subject: [Bro] bro crashes with memory error In-Reply-To: References: Message-ID: <3EC209D9-35F8-449E-98DD-2C3762AF5AC9@corelight.com> Hi Ludwig! I think you're going to have to give more information. A process table when the box is low on memory would be helpful. Run this... ps auwwxf | grep bro .Seth On 12 Mar 2018, at 0:56, Ludwig Goon wrote: > Running bro 2.5.3 on ubuntu 17.10. I have a 8Gb of memory on an intel > based > platform. > > This is a single mode instance and I compiled 2.5.3 from scratch. > > I get the following when using broctl deploy > > src/central_freelist.cc:333] tcmalloc: allocation failed 16384 > src/central_freelist.cc:333] tcmalloc: allocation failed 8192 > out of memory in new. > fatal error: out of memory in new. > > out of memory in new. > fatal error: out of memory in new. > > > If I run with pf_ring as the load balancer the manager crashes but bro > processes will run until time to rotate files. 
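For memory problems like Ludwig's, the `ps` listing Seth asks for can also be sorted by resident memory (RSS, column 6 of `ps aux` output) so the hungriest process tops the list. The sample input below stands in for real `ps` output:

```shell
# In production you would feed real output, e.g.:
#   ps auwwx | grep '[b]ro' | sort -rnk6 | head
ps_output="bro 1 0.0 1.0 100 2048 ? S 0:00 bro-worker-1
bro 2 0.0 9.0 900 81920 ? S 0:01 bro-manager
bro 3 0.0 2.0 200 4096 ? S 0:00 bro-worker-2"

# Sort numerically on the RSS column, largest first, and show the top entry.
printf '%s\n' "$ps_output" | sort -rnk6 | head -n 1
```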
> _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -- Seth Hall * Corelight, Inc * www.corelight.com From turbidtarantula at gmail.com Thu Mar 15 16:07:46 2018 From: turbidtarantula at gmail.com (Mike M) Date: Thu, 15 Mar 2018 19:07:46 -0400 Subject: [Bro] Bro capture loss without dropped packets Message-ID: Hi, Bro is reporting capture loss without dropped packet notices. I've read the FAQ entry and poked around, but I'm not sure why I'm seeing this behavior. I'm running Bro in a docker container on a low-end box and I want to see where it starts having performance problems. I've got the Bro box directly connected to a box where I'm running tcpreplay at various speeds using different pcaps. At 10Mbps everything works as expected. As I increase the speed (20Mbps, 30Mbps... 200Mbps) I start to see capture_loss reported in the 10-30% range, but no dropped packet notices. Running tcpdump on the box as a sanity check, it collects all the packets at all speeds. The Bro box has an Intel NIC, and I've turned off tso, gro, etc per the Bro FAQ entry. I'd think it was an artifact of the pcap, but I've seen the same results using both my own captures and publicly available ones. Getting up into the 200Mbps+ range I started to see dropped packet notices, as I'd expect. Is the capture loss at low rates just something odd about replaying pcaps at various speeds, or are there additional things I should check in my setup? thanks, Mike -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180315/bbcee5c4/attachment.html From vern at corelight.com Thu Mar 15 17:12:28 2018 From: vern at corelight.com (Vern Paxson) Date: Thu, 15 Mar 2018 17:12:28 -0700 Subject: [Bro] Bro capture loss without dropped packets In-Reply-To: (Thu, 15 Mar 2018 19:07:46 EDT). 
Message-ID: <20180316001228.AD84E2C4045@rock.ICSI.Berkeley.EDU> > At 10Mbps everything works as expected. As I increase the speed (20Mbps, > 30Mbps... 200Mbps) I start to see capture_loss reported in the 10-30% > range, but no dropped packet notices. The dropped packet notices come from statistics reported by the packet sources. In many setups, these statistics are unreliable, which is what originally led us to develop capture_loss. capture_loss is quite robust; if you are losing packets from your monitoring and you have any significant TCP traffic, it *will* flag the problem. So one possibility is that the statistics for your packet capture setup are indeed unreliable, and are under-reporting lower rates as no loss. Another possibility is that the trace you're replaying using tcprelay itself has capture loss. The capture_loss mechanism will key off of those even if the replay is perfect with no additional capture loss, i.e., your packet source doesn't having any problems until the replay speed gets quite high. You could diagnose that second possibility by seeing whether just running directly off the pcap with -r produces capture loss reports. Vern From jlay at slave-tothe-box.net Fri Mar 16 09:52:29 2018 From: jlay at slave-tothe-box.net (James Lay) Date: Fri, 16 Mar 2018 10:52:29 -0600 Subject: [Bro] Detecting remote powershell In-Reply-To: <349196C8-635D-4EA8-8648-13EA0952EAB7@corelight.com> References: <1518613437.2390.1.camel@slave-tothe-box.net> <349196C8-635D-4EA8-8648-13EA0952EAB7@corelight.com> Message-ID: <301614f440543cf29e1aa4f8373a3c1e@localhost> Ah...ok well there it is...I'll get a bug report going as I see the connection in conn.log, but nothing in http.log...thanks Seth! James On 2018-03-15 09:41, Seth Hall wrote: > On 9 Mar 2018, at 15:54, James Lay wrote: > >> So any chance we can get 5985 added to the list of "http" ports to >> parse, thank you. > > No need. Bro should automatically detect HTTP and add the analyzer. 
> If it isn't working correctly then I think we can view that as a bug. > > .Seth > > -- > Seth Hall * Corelight, Inc * www.corelight.com From jlay at slave-tothe-box.net Fri Mar 16 10:46:12 2018 From: jlay at slave-tothe-box.net (James Lay) Date: Fri, 16 Mar 2018 11:46:12 -0600 Subject: [Bro] Detecting remote powershell In-Reply-To: <301614f440543cf29e1aa4f8373a3c1e@localhost> References: <1518613437.2390.1.camel@slave-tothe-box.net> <349196C8-635D-4EA8-8648-13EA0952EAB7@corelight.com> <301614f440543cf29e1aa4f8373a3c1e@localhost> Message-ID: <26856382ef9070222a596f31afd74c68@localhost> And disregard :D Totally seeing this: 2018-02-16T11:19:09-0700 CUve5yhDRpb6vE7u3 x.x.x.x 58754 x.x.x.x 5985 tcp http 109.998204 407 616 SF T T 0 ShADadFf 7 699 4 788 (empty) - -mac mac 2018-02-16T11:19:09-0700 FMF4K53EV8nQTRfKuh x.x.x.x x.x.x.x CUve5yhDRpb6vE7u3 HTTP 0 SHA1,MD5 text/plain - 0.000000 T T 198 198 0 0 F - d34f7af5e7fd60da9b7eee0fa1f7a569 87c8ce87b9efa3f2e02f9327adc38e0fe25fcc49 - - - - 2018-02-16T11:19:09-0700 FuSlHJ2gGKtnYoE1H x.x.x.x x.x.x.x CUve5yhDRpb6vE7u3 HTTP 0 SHA1,MD5 text/plain - 0.000000 T F 460 460 0 0 F - 63dbdde9a283f4ff750c39ebb018a2a7 666e7574be2dddabf9fae349109198d2481bc3ac - - - - 2018-02-16T11:19:09-0700 CUve5yhDRpb6vE7u3 x.x.x.x 58754 x.x.x.x 5985 1 POST server /wsman - 1.1 Microsoft WinRM Client 198 460 200 (empty) - - (empty) - -- FMF4K53EV8nQTRfKuh - text/plain FuSlHJ2gGKtnYoE1H - text/plain YAY James On 2018-03-16 10:52, James Lay wrote: > Ah...ok well there it is...I'll get a bug report going as I see the > connection in conn.log, but nothing in http.log...thanks Seth! > > James > > On 2018-03-15 09:41, Seth Hall wrote: >> On 9 Mar 2018, at 15:54, James Lay wrote: >> >>> So any chance we can get 5985 added to the list of "http" ports to >>> parse, thank you. >> >> No need. Bro should automatically detect HTTP and add the analyzer. >> If it isn't working correctly then I think we can view that as a bug. 
>> >> .Seth >> >> -- >> Seth Hall * Corelight, Inc * www.corelight.com > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From dwdixon at umich.edu Fri Mar 16 11:09:37 2018 From: dwdixon at umich.edu (Drew Dixon) Date: Fri, 16 Mar 2018 14:09:37 -0400 Subject: [Bro] redef LogExpireInterval with JSON log writer? In-Reply-To: References: Message-ID: Thank you much Seth- in all honesty I probably didn't dig into either package enough and just started exploring setting up JSON logging yesterday, certainly possible that I didn't entirely understand what json-streaming-logs was doing/solving yet. I hadn't tested either package, but just installed and enabled and have been testing your json-streaming-logs package. I believe you may be correct that json-streaming-logs does most of what I'm wanting now that I look closer, however, what I'm still not sure about though is if there is a way to tell the JSON logs to auto "expire" and be removed off of disk (not just rotate) at a separate expire interval than the default tab delimited logs. So, for example- if I have a retention of say 15 days ( in broctl.cfg setting LogExpireInterval = 15) of archived logs for the default tab delimited logs. I want to be able to tell bro independently of the broctl.cfg global LogExpireInterval setting value that I want only all of my json_streaming_* logs to expire/be deleted/removed off of disk after say 1 day while the normal tab delimited logs still adhere to the 15 day archive retention. In other words I do not want the JSON logs to eat up disk space since they will be getting shipped off box and my cold log retention on on box will be the archives of tab delimited logs. 
I see you're keeping iterations of the json_streaming versions of the logs around in the event a log shipper process or some process is still attached to the inode and that the creation of the .1, .2, json logs probably keys off the custom rotation interval (15 min) from what I can tell, which makes sense to me. Aside from that, in my testing I see that json_streaming logs are in fact being archived along with the default tab delimited logs so I'm assuming that as it stands now the json_streaming .gz log archives will stick around on disk just as long as my tab delimited archives unless I scripted something external of bro to remove them on a daily basis. If this is all correct and I'm not missing anything else, I'm wondering if it would be possible for you to do something like I described above for removing the json_streaming_logs archives from disk more frequently with your package script? I think bro cron does this now? So not 100% certain how that may affect the plausibility of this, if at all. Respectfully, -Drew On Thu, Mar 15, 2018 at 5:48 PM, Seth Hall wrote: > > > On 15 Mar 2018, at 15:27, Drew Dixon wrote: > > Is a redef-able option for the log expire interval something that might be >> added in a future version of bro? Is there a way to do this now that I'm >> just missing? Is LogExpireInterval only available for >> broctl/broctl.cfg? >> > > What you set with broctl is just the global filter. If you look at the > json-streaming-logs package (link included below), you can see that I'm > setting a custom rotation interval separately from the global default > rotation interval. If you are looking to duplicate logging, you're going > to be doing something similar to what json-streaming-logs is doing. I'm > curious if json-streaming-logs doesn't do what you need to. It's possible > that if what you need conceptually fits into that package I could just add > it there. 
> > From how you described your problem, it sounds like json-streaming-logs > might already do what you need? > > Here's the link to how I'm setting a custom rotation interval for a log > filter that I referenced above: > https://github.com/corelight/json-streaming-logs/blob/master > /scripts/main.bro#L72 > > .Seth > > -- > Seth Hall * Corelight, Inc * www.corelight.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180316/394138f8/attachment.html From jan.grashoefer at gmail.com Fri Mar 16 12:13:58 2018 From: jan.grashoefer at gmail.com (=?UTF-8?Q?Jan_Grash=c3=b6fer?=) Date: Fri, 16 Mar 2018 20:13:58 +0100 Subject: [Bro] redef LogExpireInterval with JSON log writer? In-Reply-To: References: Message-ID: <51713071-0261-3792-6eb9-4028615235cd@gmail.com> On 16/03/18 19:09, Drew Dixon wrote: > So, for example- if I have a retention of > say 15 days ( in broctl.cfg setting LogExpireInterval = 15) of archived > logs for the default tab delimited logs. I want to be able to tell bro > independently of the broctl.cfg global LogExpireInterval setting value that > I want only all of my json_streaming_* logs to expire/be deleted/removed > off of disk after say 1 day while the normal tab delimited logs still > adhere to the 15 day archive retention. The point here is that expiration of archived logs isn't done by bro but by broctl. Using add-json one thing that might work for you is to redef Log::path_json and write out your JSON logs into a different directory. For this you could setup a cron job or something to expire files using a different interval than you configured for the default logs. Jan From seth at corelight.com Fri Mar 16 12:39:51 2018 From: seth at corelight.com (Seth Hall) Date: Fri, 16 Mar 2018 15:39:51 -0400 Subject: [Bro] redef LogExpireInterval with JSON log writer? 
In-Reply-To: References: Message-ID: <2F58B13F-F035-4F03-A664-C6734EF4215D@corelight.com> On 16 Mar 2018, at 14:09, Drew Dixon wrote: > I see you're keeping iterations of the json_streaming versions of the > logs around in the event a log shipper process or some process is > still attached to the inode and that the creation of the .1, .2, json > logs probably keys off the custom rotation interval (15 min) from what > I can tell, which makes sense to me. Aside from that, in my testing > I see that json_streaming logs are in fact being archived along with > the default tab delimited logs so I'm assuming that as it stands now > the json_streaming .gz Oh! That's a bug then. I was bad and never ended up running that script on a full cluster with Broctl, sorry about that. I'll do some more testing because that archiving was not the intent. :( .Seth -- Seth Hall * Corelight, Inc * www.corelight.com From seth at corelight.com Fri Mar 16 12:46:01 2018 From: seth at corelight.com (Seth Hall) Date: Fri, 16 Mar 2018 15:46:01 -0400 Subject: [Bro] Detecting remote powershell In-Reply-To: <26856382ef9070222a596f31afd74c68@localhost> References: <1518613437.2390.1.camel@slave-tothe-box.net> <349196C8-635D-4EA8-8648-13EA0952EAB7@corelight.com> <301614f440543cf29e1aa4f8373a3c1e@localhost> <26856382ef9070222a596f31afd74c68@localhost> Message-ID: On 16 Mar 2018, at 13:46, James Lay wrote: > YAY Whew. Every time I see stuff like that I start getting nervous. 
.Seth -- Seth Hall * Corelight, Inc * www.corelight.com From anthony.kasza at gmail.com Fri Mar 16 16:03:39 2018 From: anthony.kasza at gmail.com (anthony kasza) Date: Fri, 16 Mar 2018 17:03:39 -0600 Subject: [Bro] Detecting remote powershell In-Reply-To: References: <1518613437.2390.1.camel@slave-tothe-box.net> <349196C8-635D-4EA8-8648-13EA0952EAB7@corelight.com> <301614f440543cf29e1aa4f8373a3c1e@localhost> <26856382ef9070222a596f31afd74c68@localhost> Message-ID: If you do some baselining in your environment, JA3 can be very successful at detecting Powershell. -AK On Mar 16, 2018 2:13 PM, "Seth Hall" wrote: > > > On 16 Mar 2018, at 13:46, James Lay wrote: > > > YAY > > Whew. Everytime I see stuff like that I start getting nervous. > > .Seth > > -- > Seth Hall * Corelight, Inc * www.corelight.com > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180316/a37a4f3c/attachment.html From jlay at slave-tothe-box.net Fri Mar 16 16:05:49 2018 From: jlay at slave-tothe-box.net (James Lay) Date: Fri, 16 Mar 2018 17:05:49 -0600 Subject: [Bro] Detecting remote powershell In-Reply-To: References: <1518613437.2390.1.camel@slave-tothe-box.net> <349196C8-635D-4EA8-8648-13EA0952EAB7@corelight.com> <301614f440543cf29e1aa4f8373a3c1e@localhost> <26856382ef9070222a596f31afd74c68@localhost> Message-ID: <1521241549.2397.1.camel@slave-tothe-box.net> Thanks Anthony...as luck would have it I'd already installed it on all my sensors so I'll dig a little deeper into leveraging JA3 on the detection side...thanks again. James On Fri, 2018-03-16 at 17:03 -0600, anthony kasza wrote: > If you do some baselining in your environment, JA3 can be very > successful at detecting Powershell. 
> > -AK > > On Mar 16, 2018 2:13 PM, "Seth Hall" wrote: > > > > > > On 16 Mar 2018, at 13:46, James Lay wrote: > > > > > YAY > > > > Whew. Every time I see stuff like that I start getting nervous. > > > > .Seth > > > > -- > > Seth Hall * Corelight, Inc * www.corelight.com > > _______________________________________________ > > Bro mailing list > > bro at bro-ids.org > > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180316/e45bdecf/attachment.html From brot212 at googlemail.com Sat Mar 17 15:00:25 2018 From: brot212 at googlemail.com (DW) Date: Sat, 17 Mar 2018 23:00:25 +0100 Subject: [Bro] Problem installing Bro Message-ID: <28f46e97-cd04-e4d3-b19b-5837485fd384@googlemail.com> Hey there, I want to install Bro on my Raspberry Pi but I always get an error during the make process. I've got problems with OpenSSL Version 1.1, which was "fixed" by installing libssl1.0-dev (as suggested in this link: https://bro-tracker.atlassian.net/browse/BIT-1775). 
But now I get another error during the make process: file_analysis/analyzer/x509/libplugin-Bro-X509.a(X509.cc.o): In function `sk_GENERAL_NAME_num': /usr/include/openssl/x509v3.h:165: undefined reference to `OPENSSL_sk_num' file_analysis/analyzer/x509/libplugin-Bro-X509.a(X509.cc.o): In function `sk_GENERAL_NAME_value': /usr/include/openssl/x509v3.h:165: undefined reference to `OPENSSL_sk_value' file_analysis/analyzer/x509/libplugin-Bro-X509.a(X509.cc.o): In function `file_analysis::X509::KeyCurve(evp_pkey_st*)': /home/pi/bro/src/file_analysis/analyzer/x509/X509.cc:389: undefined reference to `EVP_PKEY_get0_EC_KEY' file_analysis/analyzer/x509/libplugin-Bro-X509.a(X509.cc.o): In function `file_analysis::X509::KeyLength(evp_pkey_st*)': /home/pi/bro/src/file_analysis/analyzer/x509/X509.cc:413: undefined reference to `EVP_PKEY_get0_RSA' /home/pi/bro/src/file_analysis/analyzer/x509/X509.cc:413: undefined reference to `RSA_get0_key' /home/pi/bro/src/file_analysis/analyzer/x509/X509.cc:429: undefined reference to `EVP_PKEY_get0_EC_KEY' /home/pi/bro/src/file_analysis/analyzer/x509/X509.cc:418: undefined reference to `EVP_PKEY_get0_DSA' /home/pi/bro/src/file_analysis/analyzer/x509/X509.cc:418: undefined reference to `DSA_get0_pqg' file_analysis/analyzer/x509/libplugin-Bro-X509.a(X509.cc.o): In function `file_analysis::X509::ParseCertificate(file_analysis::X509Val*, char const*)': /home/pi/bro/src/file_analysis/analyzer/x509/X509.cc:99: undefined reference to `X509_get_version' /home/pi/bro/src/file_analysis/analyzer/x509/X509.cc:136: undefined reference to `X509_getm_notBefore' /home/pi/bro/src/file_analysis/analyzer/x509/X509.cc:137: undefined reference to `X509_getm_notAfter' /home/pi/bro/src/file_analysis/analyzer/x509/X509.cc:143: undefined reference to `X509_get_X509_PUBKEY' /home/pi/bro/src/file_analysis/analyzer/x509/X509.cc:203: undefined reference to `X509_get_X509_PUBKEY' /home/pi/bro/src/file_analysis/analyzer/x509/X509.cc:183: undefined reference to 
`EVP_PKEY_get0_RSA' /home/pi/bro/src/file_analysis/analyzer/x509/X509.cc:183: undefined reference to `RSA_get0_key' /home/pi/bro/src/file_analysis/analyzer/x509/X509.cc:158: undefined reference to `X509_get_X509_PUBKEY' collect2: error: ld returned 1 exit status Does anyone know how to bypass this error? I also got problems installing Bro on Arch Linux, same error with OpenSSL 1.1, but I don't know how to compile against OpenSSL 1.0. Is there a solution for that too? Thanks Dane From sumana at polylogyx.com Sun Mar 18 23:23:50 2018 From: sumana at polylogyx.com (Sumana Tirumala) Date: Mon, 19 Mar 2018 11:53:50 +0530 Subject: [Bro] Query Regarding bro-osquery - Broker error Message-ID: Hi, I am trying to use the bro-osquery integration from https://github.com/iBigQ/osquery-plugin-bro. I have followed the steps mentioned in the link, but I am unable to start osqueryd; the following are the errors. I0319 11:50:44.819684 29705 broker_manager.cpp:274] Connecting to Bro localhost:47760 W0319 11:50:44.823487 29705 broker_manager.cpp:351] Broker error:4, error(4, 'broker', (invalid-node, *localhost:47760, "remote endpoint unavailable")) W0319 11:50:44.823573 29705 broker_manager.cpp:254] Retrying to connect to Bro... In the netstat output I am able to see the connection getting established bro-osquery at bro-osquery:~/osquery$ netstat -na | grep '47760' tcp 0 0 0.0.0.0:47760 0.0.0.0:* LISTEN tcp6 0 0 :::47760 :::* LISTEN tcp6 0 0 ::1:47760 ::1:58498 ESTABLISHED tcp6 144 0 ::1:58498 ::1:47760 ESTABLISHED In the external/osquery-plugin-bro/install/osquery.conf, I have the configuration as // The IP and port of the Bro endpoint. "custom_bro_ip": "localhost", "custom_bro_port": "47760", This setup is on Debian 9. Any kind of help would be appreciated. Regards Sumana -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180319/c691badc/attachment.html From hlin33 at illinois.edu Sun Mar 18 23:29:23 2018 From: hlin33 at illinois.edu (Hui Lin (Hugo) ) Date: Sun, 18 Mar 2018 23:29:23 -0700 Subject: [Bro] Question on BInPAC Sample analyzer Message-ID: Hi, I have tried to study the updated way to install the sample analyzer through BinPAC. I followed the instructions on https://www.bro.org/development/howtos/binpac-sample-analyzer.html. I encountered two questions: 1. It seems that the "--buffered" parameter is not working. Executing this command with this parameter generates a datagram analyzer, not a flowunit one. 2. After installing the sample analyzer through the script, what should I do to remove it? I tried to directly remove the two directories, scripts/base/protocols/sample/ and src/analyzer/protocol/sample/, but this gives me CMake configuration errors if I try to compile Bro again. Thank you and best regards, Hui Lin -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180318/c1f835dd/attachment.html From roberixion at gmail.com Mon Mar 19 09:14:45 2018 From: roberixion at gmail.com (=?UTF-8?Q?Rober_Fern=C3=A1ndez?=) Date: Mon, 19 Mar 2018 17:14:45 +0100 Subject: [Bro] read json Message-ID: Hi, can Bro read a JSON file? 
Ideally I'd like the JSON logs to never get archived though then I don't have to worry about maintaining another cleanup process/script. Seth, thank you much for doing some more testing on a full cluster with broctl, if you can sort out the bug with the JSON logs getting archived when they were not intended to be I think I'll be all set to deploy your json-streaming-logs : ) -Drew On Fri, Mar 16, 2018 at 3:39 PM, Seth Hall wrote: > > > On 16 Mar 2018, at 14:09, Drew Dixon wrote: > > I see you're keeping iterations of the json_streaming versions of the logs >> around in the event a log shipper process or some process is still attached >> to the inode and that the creation of the .1, .2, json logs probably keys >> off the custom rotation interval (15 min) from what I can tell, which makes >> sense to me. Aside from that, in my testing I see that json_streaming logs >> are in fact being archived along with the default tab delimited logs so I'm >> assuming that as it stands now the json_streaming .gz >> > > Oh! That's a bug then. I was bad an never ended up running that script > on a full cluster with Broctl, sorry about that. I'll do some more testing > because that archiving was not the intent. :( > > > .Seth > > -- > Seth Hall * Corelight, Inc * www.corelight.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180319/782d96b3/attachment.html From ben.bt.wood at gmail.com Mon Mar 19 12:31:03 2018 From: ben.bt.wood at gmail.com (Benjamin Wood) Date: Mon, 19 Mar 2018 15:31:03 -0400 Subject: [Bro] local.bro causing memory leak Message-ID: I've got some custom log names happening, and it's causing a memory leak. Bro never closes the file descriptors or releases the objects. This is causing the manager to crash over a period of time. I'm running my cluster with broctl, and rotation is turned off because I'm naming files with a timestamp to begin with. 
Any suggestions on how to perform a periodic "clean up"? function datepath(id: Log::ID, path: string, rec: any) : string { local filter = Log::get_filter(id, "default"); return string_cat(filter$path, strftime("_%F_%H", current_time())); } event bro_init() { Log::disable_stream(Syslog::LOG); for ( id in Log::active_streams ) { local filter = Log::get_filter(id, "default"); filter$path_func = datepath; Log::add_filter(id, filter); } } Thanks, -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180319/b2a04e7d/attachment.html From bill.de.ping at gmail.com Tue Mar 20 04:19:11 2018 From: bill.de.ping at gmail.com (william de ping) Date: Tue, 20 Mar 2018 13:19:11 +0200 Subject: [Bro] all broctl instances are running yet broctl status shows stopped Message-ID: Hi all, When running bro release 2.5 in cluster mode (manager,proxy and several workers) I have a strange issue : new logs are written to spool/manager and according to pgrep -c bro all instances are running, yet "broctl status" shows that all instances are stopped. For some reason about an hour after using the "broctl deploy", this issue occurs again. Any thoughts on what might cause this behavior ? I see no fix for this on newer versions. Thank you B -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180320/54ce90dd/attachment.html From gl89 at cornell.edu Tue Mar 20 06:27:18 2018 From: gl89 at cornell.edu (Glenn Forbes Fleming Larratt) Date: Tue, 20 Mar 2018 09:27:18 -0400 (EDT) Subject: [Bro] all broctl instances are running yet broctl status shows stopped In-Reply-To: References: Message-ID: I've seen this as well, with the additional behavior that if I attempt a "broctl start", the processes all immediately crash. I have had some success with doing "broctl stop", manually killing the remaining (unmanaged?) 
processes on each listener, *then* doing a "broctl start". N.B. We are woefully behind, at release 2.3 . -g -- Glenn Forbes Fleming Larratt Cornell University IT Security Office On Tue, 20 Mar 2018, william de ping wrote: > Hi all, > > When running bro release 2.5 in cluster mode (manager,proxy and several workers) I have a strange issue : > new logs are written to spool/manager and according to pgrep -c bro all instances are running, yet "broctl > status" shows that all instances are stopped. > > For some reason about an hour after using the "broctl deploy", this issue occurs again. > > Any thoughts on what might cause this behavior ? > I see no fix for this on newer versions. > > Thank you > B > > From dnthayer at illinois.edu Tue Mar 20 07:19:20 2018 From: dnthayer at illinois.edu (Daniel Thayer) Date: Tue, 20 Mar 2018 09:19:20 -0500 Subject: [Bro] all broctl instances are running yet broctl status shows stopped In-Reply-To: References: Message-ID: <11d73228-169e-7f0e-d32b-62e297748d6e@illinois.edu> On 3/20/18 6:19 AM, william de ping wrote: > Hi all, > > When running bro release 2.5 in cluster mode (manager,proxy and several > workers) I have a strange issue : > new logs are written to spool/manager and according to pgrep -c bro all > instances are running, yet "broctl status" shows that all instances are > stopped. > > For some reason about an hour after using the "broctl deploy", this > issue occurs again. > > Any thoughts on what might cause this behavior ? > I see no fix for this on newer versions. > > Thank you > B When this happens, what does the output of "broctl diag" look like? From ben.bt.wood at gmail.com Tue Mar 20 07:24:09 2018 From: ben.bt.wood at gmail.com (Benjamin Wood) Date: Tue, 20 Mar 2018 10:24:09 -0400 Subject: [Bro] local.bro causing memory leak In-Reply-To: References: Message-ID: I now have the diag output for the crash. I think I will be using a custom routine to identify and "close" files on a regular basis. 
[BroControl] > diag manager [manager] No core file found. You may need to change your system settings to allow core files. Bro 2.5.2 Linux 3.10.0-693.17.1.el7.x86_64 Bro plugins: (none found) ==== No reporter.log ==== stderr.log /usr/local/bro/share/broctl/scripts/run-bro: line 61: ulimit: core file size: cannot modify limit: Operation not permitted terminate called after throwing an instance of 'std::system_error' what(): Resource temporarily unavailable /usr/local/bro/share/broctl/scripts/run-bro: line 110: 144420 Aborted nohup "$mybro" "$@" ==== stdout.log max memory size (kbytes, -m) unlimited data seg size (kbytes, -d) unlimited virtual memory (kbytes, -v) unlimited core file size (blocks, -c) 0 ==== .cmdline -U .status -p broctl -p broctl-live -p local -p manager local.bro broctl base/frameworks/cluster local-manager.bro broctl/auto ==== .env_vars PATH=/usr/local/bro/bin:/usr/local/bro/share/broctl/scripts:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/opt/dell/srvadmin/bin:/home/bro/.local/bin:/home/bro/bin BROPATH=/usr/local/bro/spool/installed-scripts-do-not-touch/site::/usr/local/bro/spool/installed-scripts-do-not-touch/auto:/usr/local/bro/share/bro:/usr/local/bro/share/bro/policy:/usr/local/bro/share/bro/site CLUSTER_NODE=manager ==== .status RUNNING [net_run] ==== No prof.log ==== No packet_filter.log ==== No loaded_scripts.log Thanks, Ben On Mon, Mar 19, 2018 at 3:31 PM, Benjamin Wood wrote: > I've got some custom log names happening, and it's causing a memory leak. > Bro never closes the file descriptors or releases the objects. This is > causing the manager to crash over a period of time. > > I'm running my cluster with broctl, and rotation is turned off because I'm > naming files with a timestamp to begin with. > > Any suggestions on how to perform a periodic "clean up"? 
> > function datepath(id: Log::ID, path: string, rec: any) : string > { > local filter = Log::get_filter(id, "default"); > return string_cat(filter$path, strftime("_%F_%H", current_time())); > } > > event bro_init() { > Log::disable_stream(Syslog::LOG); > > for ( id in Log::active_streams ) { > local filter = Log::get_filter(id, "default"); > filter$path_func = datepath; > Log::add_filter(id, filter); > } > } > > Thanks, > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180320/29de255e/attachment.html From bill.de.ping at gmail.com Tue Mar 20 09:52:17 2018 From: bill.de.ping at gmail.com (william de ping) Date: Tue, 20 Mar 2018 18:52:17 +0200 Subject: [Bro] all broctl instances are running yet broctl status shows stopped In-Reply-To: <11d73228-169e-7f0e-d32b-62e297748d6e@illinois.edu> References: <11d73228-169e-7f0e-d32b-62e297748d6e@illinois.edu> Message-ID: Hi Thanks for your replies broctl diag returns "HINT: Run broctl deploy to get started." All the rest of the output is not populated. broctl deploy solves this issue, but I do not want to restart my cluster every hour. B On Tue, Mar 20, 2018 at 4:19 PM, Daniel Thayer wrote: > On 3/20/18 6:19 AM, william de ping wrote: > >> Hi all, >> >> When running bro release 2.5 in cluster mode (manager,proxy and several >> workers) I have a strange issue : >> new logs are written to spool/manager and according to pgrep -c bro all >> instances are running, yet "broctl status" shows that all instances are >> stopped. >> >> For some reason about an hour after using the "broctl deploy", this issue >> occurs again. >> >> Any thoughts on what might cause this behavior ? >> I see no fix for this on newer versions. >> >> Thank you >> B >> > > > When this happens, what does the output of "broctl diag" look like? > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180320/3f01ca4c/attachment-0001.html From ben.bt.wood at gmail.com Tue Mar 20 09:55:06 2018 From: ben.bt.wood at gmail.com (Benjamin Wood) Date: Tue, 20 Mar 2018 12:55:06 -0400 Subject: [Bro] local.bro causing memory leak In-Reply-To: References: Message-ID: I may have solved the problem. I don't believe this was actually a memory leak. It appears to be a problem with max user processes instead. I upped my ulimits for bro and it works now. "ulimit -u" was set to 4096. I upped it to 65536, and that seems to have resolved the problem. It was a little challenging to narrow down, because I didn't have debug on, and "Resource temporarily unavailable" wasn't telling me WHICH resource it was trying to allocate, just that it couldn't. If I have problems in the future, or upgrade, I'll definitely be enabling debug so I can get better information for problems like this. I'm still not sure if bro is leaving files open, but digging into the source it looks like it will clean up file descriptors independent of the log rotation interval being set. https://github.com/bro/bro/blob/a8c0580b45157793da22984f700f92cb3a5745d5/src/File.cc#L357 Thanks, Ben On Tue, Mar 20, 2018 at 10:24 AM, Benjamin Wood wrote: > I now have the diag output for the crash. I think I will be using a custom > routine to identify and "close" files on a regular basis. > > [BroControl] > diag manager > [manager] > > No core file found. You may need to change your system settings to > allow core files. 
> > Bro 2.5.2 > Linux 3.10.0-693.17.1.el7.x86_64 > > Bro plugins: (none found) > > ==== No reporter.log > > ==== stderr.log > /usr/local/bro/share/broctl/scripts/run-bro: line 61: ulimit: core file > size: cannot modify limit: Operation not permitted > terminate called after throwing an instance of 'std::system_error' > what(): Resource temporarily unavailable > /usr/local/bro/share/broctl/scripts/run-bro: line 110: 144420 > Aborted nohup "$mybro" "$@" > > ==== stdout.log > max memory size (kbytes, -m) unlimited > data seg size (kbytes, -d) unlimited > virtual memory (kbytes, -v) unlimited > core file size (blocks, -c) 0 > > ==== .cmdline > -U .status -p broctl -p broctl-live -p local -p manager local.bro broctl > base/frameworks/cluster local-manager.bro broctl/auto > > ==== .env_vars > PATH=/usr/local/bro/bin:/usr/local/bro/share/broctl/ > scripts:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/ > sbin:/opt/dell/srvadmin/bin:/home/bro/.local/bin:/home/bro/bin > BROPATH=/usr/local/bro/spool/installed-scripts-do-not- > touch/site::/usr/local/bro/spool/installed-scripts-do- > not-touch/auto:/usr/local/bro/share/bro:/usr/local/bro/ > share/bro/policy:/usr/local/bro/share/bro/site > CLUSTER_NODE=manager > > ==== .status > RUNNING [net_run] > > ==== No prof.log > > ==== No packet_filter.log > > ==== No loaded_scripts.log > > Thanks, > Ben > > On Mon, Mar 19, 2018 at 3:31 PM, Benjamin Wood > wrote: > >> I've got some custom log names happening, and it's causing a memory leak. >> Bro never closes the file descriptors or releases the objects. This is >> causing the manager to crash over a period of time. >> >> I'm running my cluster with broctl, and rotation is turned off because >> I'm naming files with a timestamp to begin with. >> >> Any suggestions on how to perform a periodic "clean up"? 
>> >> function datepath(id: Log::ID, path: string, rec: any) : string >> { >> local filter = Log::get_filter(id, "default"); >> return string_cat(filter$path, strftime("_%F_%H", current_time())); >> } >> >> event bro_init() { >> Log::disable_stream(Syslog::LOG); >> >> for ( id in Log::active_streams ) { >> local filter = Log::get_filter(id, "default"); >> filter$path_func = datepath; >> Log::add_filter(id, filter); >> } >> } >> >> Thanks, >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180320/a24717c1/attachment.html From ben.bt.wood at gmail.com Tue Mar 20 10:29:10 2018 From: ben.bt.wood at gmail.com (Benjamin Wood) Date: Tue, 20 Mar 2018 13:29:10 -0400 Subject: [Bro] local.bro causing memory leak In-Reply-To: References: Message-ID: It didn't solve the problem. It just removed the roadblock. After doing a full "restart" on the cluster, lsof reports 2K+ files. While before reset it reported 1M+. So I still need to figure out a way to clean up those leftover file descriptors. On Tue, Mar 20, 2018 at 12:55 PM, Benjamin Wood wrote: > I may have solved the problem. I don't believe this was actually a memory > leak. It appears to be a problem with max user processes instead. I upped > my ulimits for bro and it works now. > > "ulimit -u" was set to 4096. I upped it to 65536, and that seems to have > resolved the problem. > > It was a little challenging to narrow down, because I didn't have debug > on, and "Resource temporarily unavailable" wasn't telling me WHICH resource > it was trying to allocate, just that it couldn't. If I have problems in the > future, or upgrade, I'll definitely be enabling debug so I can get better > information for problems like this. > > I'm still not sure if bro is leaving files open, but digging into the > source it looks like it will clean up file descriptors independent of the > log rotation interval being set. 
> https://github.com/bro/bro/blob/a8c0580b45157793da22984f700f92 > cb3a5745d5/src/File.cc#L357 > > Thanks, > Ben > > On Tue, Mar 20, 2018 at 10:24 AM, Benjamin Wood > wrote: > >> I now have the diag output for the crash. I think I will be using a >> custom routine to identify and "close" files on a regular basis. >> >> [BroControl] > diag manager >> [manager] >> >> No core file found. You may need to change your system settings to >> allow core files. >> >> Bro 2.5.2 >> Linux 3.10.0-693.17.1.el7.x86_64 >> >> Bro plugins: (none found) >> >> ==== No reporter.log >> >> ==== stderr.log >> /usr/local/bro/share/broctl/scripts/run-bro: line 61: ulimit: core file >> size: cannot modify limit: Operation not permitted >> terminate called after throwing an instance of 'std::system_error' >> what(): Resource temporarily unavailable >> /usr/local/bro/share/broctl/scripts/run-bro: line 110: 144420 >> Aborted nohup "$mybro" "$@" >> >> ==== stdout.log >> max memory size (kbytes, -m) unlimited >> data seg size (kbytes, -d) unlimited >> virtual memory (kbytes, -v) unlimited >> core file size (blocks, -c) 0 >> >> ==== .cmdline >> -U .status -p broctl -p broctl-live -p local -p manager local.bro broctl >> base/frameworks/cluster local-manager.bro broctl/auto >> >> ==== .env_vars >> PATH=/usr/local/bro/bin:/usr/local/bro/share/broctl/scripts: >> /usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/opt/ >> dell/srvadmin/bin:/home/bro/.local/bin:/home/bro/bin >> BROPATH=/usr/local/bro/spool/installed-scripts-do-not-touch/ >> site::/usr/local/bro/spool/installed-scripts-do-not- >> touch/auto:/usr/local/bro/share/bro:/usr/local/bro/share/ >> bro/policy:/usr/local/bro/share/bro/site >> CLUSTER_NODE=manager >> >> ==== .status >> RUNNING [net_run] >> >> ==== No prof.log >> >> ==== No packet_filter.log >> >> ==== No loaded_scripts.log >> >> Thanks, >> Ben >> >> On Mon, Mar 19, 2018 at 3:31 PM, Benjamin Wood >> wrote: >> >>> I've got some custom log names happening, and it's causing a memory 
>>> leak. Bro never closes the file descriptors or releases the objects. This >>> is causing the manager to crash over a period of time. >>> >>> I'm running my cluster with broctl, and rotation is turned off because >>> I'm naming files with a timestamp to begin with. >>> >>> Any suggestions on how to perform a periodic "clean up"? >>> >>> function datepath(id: Log::ID, path: string, rec: any) : string >>> { >>> local filter = Log::get_filter(id, "default"); >>> return string_cat(filter$path, strftime("_%F_%H", current_time())); >>> } >>> >>> event bro_init() { >>> Log::disable_stream(Syslog::LOG); >>> >>> for ( id in Log::active_streams ) { >>> local filter = Log::get_filter(id, "default"); >>> filter$path_func = datepath; >>> Log::add_filter(id, filter); >>> } >>> } >>> >>> Thanks, >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180320/b43d7a3d/attachment.html From pssunu6 at gmail.com Tue Mar 20 10:32:09 2018 From: pssunu6 at gmail.com (ps sunu) Date: Tue, 20 Mar 2018 23:02:09 +0530 Subject: [Bro] bro installation in lxc container Message-ID: Any way to install bro in lxc container ? -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180320/a42d94d5/attachment-0001.html From jazoff at illinois.edu Tue Mar 20 10:38:22 2018 From: jazoff at illinois.edu (Azoff, Justin S) Date: Tue, 20 Mar 2018 17:38:22 +0000 Subject: [Bro] local.bro causing memory leak In-Reply-To: References: Message-ID: <1D301061-AD69-434D-B50B-2EBD2E7288E8@illinois.edu> > On Mar 20, 2018, at 12:55 PM, Benjamin Wood wrote: > > I'm still not sure if bro is leaving files open Pretty sure it is.. i don't think path_func is intended to be used the way you are using it and I don't think anything garbage collects writers that have not been used in a while. 
It's trivial to verify this: just wait a few hours and run lsof or just ls -l /proc/*/fd | grep log It's probably not that hard to fix though. -- Justin Azoff From blason16 at gmail.com Tue Mar 20 10:44:44 2018 From: blason16 at gmail.com (Blason R) Date: Tue, 20 Mar 2018 23:14:44 +0530 Subject: [Bro] Converting my own feeds to bro intel Message-ID: Hi, I do have certain OSINT feeds and wanted to convert those to intel.dat, to be consumed later by an ELK stack. Can someone guide me on how to convert those IP addresses into intel.dat? -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180320/2e1e8e9e/attachment.html From dnthayer at illinois.edu Tue Mar 20 11:47:10 2018 From: dnthayer at illinois.edu (Daniel Thayer) Date: Tue, 20 Mar 2018 13:47:10 -0500 Subject: [Bro] all broctl instances are running yet broctl status shows stopped In-Reply-To: References: <11d73228-169e-7f0e-d32b-62e297748d6e@illinois.edu> Message-ID: On 3/20/18 11:52 AM, william de ping wrote: > Hi > > Thanks for your replies > > broctl diag returns "HINT : Run broctl deploy to get started. > All the rest of the output is not populated > > broctl deploy solves this issue, but I do not want to restart my cluster > every hour > > B > OK, that is what I expected. You have two different copies of Bro installed on your system (doesn't matter if they are the same version or not), and I recommend removing one of them to avoid confusion. This problem could happen, for example, if you have two copies of Bro installed and you run "sudo broctl deploy", but then later you run "broctl status" and this actually runs the other copy (on most systems, "sudo" uses a different PATH than normal users). Each installation of Bro includes its own scripts, config files, executables, state file, etc. 
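[Archive editor's note: for Blason's feed-conversion question above, the Intel framework's input is just a tab-separated file whose first line is a `#fields` header. A minimal sketch of the conversion follows; the `ips.txt`/`intel.dat` file names and the `my-osint` source label are made-up placeholders.]

```shell
# Example feed: one IP indicator per line (assumed input format).
printf '203.0.113.10\n198.51.100.7\n' > ips.txt

# The Intel framework requires literal tabs between fields and a
# "#fields" header naming them; meta.do_notice=T asks the do_notice
# script to raise a notice on a hit.
printf '#fields\tindicator\tindicator_type\tmeta.source\tmeta.do_notice\n' > intel.dat
awk -v OFS='\t' 'NF { print $1, "Intel::ADDR", "my-osint", "T" }' ips.txt >> intel.dat
```

Then add the file to Intel::read_files (as in the certstream snippet earlier in the archive) and redeploy; for domains or file hashes, swap the indicator type to Intel::DOMAIN or Intel::FILE_HASH.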
From seth at corelight.com Tue Mar 20 11:50:20 2018 From: seth at corelight.com (Seth Hall) Date: Tue, 20 Mar 2018 14:50:20 -0400 Subject: [Bro] local.bro causing memory leak In-Reply-To: References: Message-ID: <9E94060A-D614-4178-BB3C-76C66376740A@corelight.com> On 19 Mar 2018, at 15:31, Benjamin Wood wrote: > I'm running my cluster with broctl, and rotation is turned off because > I'm > naming files with a timestamp to begin with. Justin got your problem right. If you turn off file rotation, then Bro is never closing any of these hourly logs. You have to be really careful with how you use $path_func because you can easily get yourself into hot water. Alternately you need to define a rotation interval and post processor. Something like this... ```bro function my_log_post_processor(info: Log::RotationInfo): bool { local ext = sub(info$fname, /^[^\-]+-[0-9]+-[0-9]+-[0-9]+_[0-9]+\.[0-9]+\.[0-9]+\./, ""); # Move file to name including both opening and closing time. local dst = fmt("%s_%s_%s-%s.%s", info$path, strftime("%Y%m%d", info$open), strftime("%H:%M:%S", info$open), strftime("%H:%M:%S%z", info$close), ext); local cmd = fmt("/bin/mv %s %s/%s", info$fname, "/data/logs", dst); system(cmd); return T; } event bro_init() { for ( id in Log::active_streams ) { local filter = Log::get_filter(id, "default"); filter$interv = 1hr; filter$postprocessor = my_log_post_processor; Log::add_filter(id, filter); } } ``` Something like that will enable you to turn off log rotation in broctl (but you'll lose some broctl niceties as well). .Seth -- Seth Hall * Corelight, Inc * www.corelight.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180320/c3abcf9b/attachment.html From ben.bt.wood at gmail.com Tue Mar 20 13:11:59 2018 From: ben.bt.wood at gmail.com (Benjamin Wood) Date: Tue, 20 Mar 2018 16:11:59 -0400 Subject: [Bro] local.bro causing memory leak In-Reply-To: <9E94060A-D614-4178-BB3C-76C66376740A@corelight.com> References: <9E94060A-D614-4178-BB3C-76C66376740A@corelight.com> Message-ID: Thanks Seth. The whole problem I'm trying to solve is steaming data into splunk. Splunk forwarder's don't like it when filenames change, and the artificial delay created by rotating logs adds too much latency. The solution that was proposed was "don't rotate logs", and leave them in place long enough for the forwarders to finish. At this point I've got to step back and ask, "Am I doing it wrong?" This problem has to have been solved by others. I'm certain there is a way to stream my data to splunk that is better than this. The file rotation and renaming functions give me enough to play with to solve the problem using bro-script. Thanks again for the feedback, Ben On Tue, Mar 20, 2018 at 2:50 PM, Seth Hall wrote: > On 19 Mar 2018, at 15:31, Benjamin Wood wrote: > > I'm running my cluster with broctl, and rotation is turned off because I'm > naming files with a timestamp to begin with. > > Justin got your problem right. If you turn off file rotation, then Bro is > never closing any of these hourly logs. You have to be really careful with > how you use $path_func because you can easily get yourself into hot water. > > Alternately you need to define a rotation interval and post processor. > Something like this... > > > > function my_log_post_processor(info: Log::RotationInfo): bool > { > local ext = sub(info$fname, /^[^\-]+-[0-9]+-[0-9]+-[0-9]+_[0-9]+\.[0-9]+\.[0-9]+\./, ""); > > # Move file to name including both opening and closing time. 
> local dst = fmt("%s_%s_%s-%s.%s", info$path, strftime("%Y%m%d", info$open), > strftime("%H:%M:%S", info$open), > strftime("%H:%M:%S%z", info$close), > ext); > local cmd = fmt("/bin/mv %s %s/%s", info$fname, "/data/logs", dst); > system(cmd); > > return T; > } > event bro_init() > { > for ( id in Log::active_streams ) > { > local filter = Log::get_filter(id, "default"); > filter$interv = 1hr; > filter$postprocessor = my_log_post_processor; > Log::add_filter(id, filter); > } > } > > Something like that will enable you to turn off log rotation in broctl > (but you'll lose some broctl niceties as well). > > .Seth > > -- > Seth Hall * Corelight, Inc * www.corelight.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180320/8d9edf82/attachment.html From bill.de.ping at gmail.com Wed Mar 21 03:58:09 2018 From: bill.de.ping at gmail.com (william de ping) Date: Wed, 21 Mar 2018 12:58:09 +0200 Subject: [Bro] all broctl instances are running yet broctl status shows stopped In-Reply-To: References: <11d73228-169e-7f0e-d32b-62e297748d6e@illinois.edu> Message-ID: Hi Daniel, Thanks I have deleted another bro environment on that server and doubled checked that there are no other broctl\bro executable besides the work_dir and build_dir. Yet this issue still occures. I run bro with a specific user and on "top" I see that bro is running under that user, yet "./bin/broctl status" still returns that all instances are stopped. Any suggestions ? Thanks again B On Tue, Mar 20, 2018 at 8:47 PM, Daniel Thayer wrote: > On 3/20/18 11:52 AM, william de ping wrote: > >> Hi >> >> Thanks for your replies >> >> broctl diag returns "HINT : Run broctl deploy to get started. >> All the rest of the output is not populated >> >> broctl deploy solves this issue, but I do not want to restart my cluster >> every hour >> >> B >> >> > OK, that is what I expected. 
You have two different copies of Bro > installed on your system (doesn't matter if they are the same > version or not), and I recommend removing one of them to > avoid confusion. > > This problem could happen, for example, if you have two copies > of Bro installed and you run "sudo broctl deploy", but then later > you run "broctl status" and this actually runs the other copy (on > most systems, "sudo" uses a different PATH than normal users). > Each installation of Bro includes its own scripts, config files, > executables, state file, etc. > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180321/7216ea59/attachment.html From seth at corelight.com Wed Mar 21 06:45:04 2018 From: seth at corelight.com (Seth Hall) Date: Wed, 21 Mar 2018 09:45:04 -0400 Subject: [Bro] local.bro causing memory leak In-Reply-To: References: <9E94060A-D614-4178-BB3C-76C66376740A@corelight.com> Message-ID: <2CE98E74-1961-42ED-B18C-16AD71AEBD81@corelight.com> On 20 Mar 2018, at 16:11, Benjamin Wood wrote: > The whole problem I'm trying to solve is steaming data into splunk. > Splunk forwarder's don't like it when filenames change, and the > artificial delay created by rotating logs adds too much latency. The > solution that was proposed was "don't rotate logs", and leave them in > place long enough for the forwarders to finish. Ah! I'm trying to solve a similar problem with my json-streaming-logs package. I'm planning on doing some testing and getting that fixed soon. I think it's still a little broken right now, but I can definitely sympathize with your trouble. Hopefully there'll be some guidance on this from me (or you!?) soon. 
:) .Seth -- Seth Hall * Corelight, Inc * www.corelight.com From brot212 at googlemail.com Wed Mar 21 11:18:12 2018 From: brot212 at googlemail.com (DW) Date: Wed, 21 Mar 2018 19:18:12 +0100 Subject: [Bro] Problem installing Bro In-Reply-To: <28f46e97-cd04-e4d3-b19b-5837485fd384@googlemail.com> References: <28f46e97-cd04-e4d3-b19b-5837485fd384@googlemail.com> Message-ID: <307a26dc-deee-61ea-e4e1-fc80450a7f83@googlemail.com> Hey, somehow I managed to fix the problem by reinstalling libssldev-1.0 and python-dev, after that I compiled the whole project under root privileges - it's working now. Still I don't really know how I can bypass OpenSSL1.1 on my arch linux distro (Antergos). I wanted to fork the current Bro git-repository, write a new protocol analyzer and compile the project on my notebook rather than on the Pi (or using another distro). If someone has experience with Bro+Arch-Linux I would be glad to hear from you. Dane Am 17.03.2018 um 23:00 schrieb DW: > Hey there, > > I want to install Bro on my raspberry pi but Ialways get an error > during the make process. I've got problems with OpenSSL Version 1.1, > which was "fixed" by installing libssl1.0-dev (like suggested in this > link: https://bro-tracker.atlassian.net/browse/BIT-1775). 
> > But now I get another error during the make process: > > file_analysis/analyzer/x509/libplugin-Bro-X509.a(X509.cc.o): In > function `sk_GENERAL_NAME_num': > /usr/include/openssl/x509v3.h:165: undefined reference to > `OPENSSL_sk_num' > file_analysis/analyzer/x509/libplugin-Bro-X509.a(X509.cc.o): In > function `sk_GENERAL_NAME_value': > /usr/include/openssl/x509v3.h:165: undefined reference to > `OPENSSL_sk_value' > file_analysis/analyzer/x509/libplugin-Bro-X509.a(X509.cc.o): In > function `file_analysis::X509::KeyCurve(evp_pkey_st*)': > /home/pi/bro/src/file_analysis/analyzer/x509/X509.cc:389: undefined > reference to `EVP_PKEY_get0_EC_KEY' > file_analysis/analyzer/x509/libplugin-Bro-X509.a(X509.cc.o): In > function `file_analysis::X509::KeyLength(evp_pkey_st*)': > /home/pi/bro/src/file_analysis/analyzer/x509/X509.cc:413: undefined > reference to `EVP_PKEY_get0_RSA' > /home/pi/bro/src/file_analysis/analyzer/x509/X509.cc:413: undefined > reference to `RSA_get0_key' > /home/pi/bro/src/file_analysis/analyzer/x509/X509.cc:429: undefined > reference to `EVP_PKEY_get0_EC_KEY' > /home/pi/bro/src/file_analysis/analyzer/x509/X509.cc:418: undefined > reference to `EVP_PKEY_get0_DSA' > /home/pi/bro/src/file_analysis/analyzer/x509/X509.cc:418: undefined > reference to `DSA_get0_pqg' > file_analysis/analyzer/x509/libplugin-Bro-X509.a(X509.cc.o): In > function > `file_analysis::X509::ParseCertificate(file_analysis::X509Val*, char > const*)': > /home/pi/bro/src/file_analysis/analyzer/x509/X509.cc:99: undefined > reference to `X509_get_version' > /home/pi/bro/src/file_analysis/analyzer/x509/X509.cc:136: undefined > reference to `X509_getm_notBefore' > /home/pi/bro/src/file_analysis/analyzer/x509/X509.cc:137: undefined > reference to `X509_getm_notAfter' > /home/pi/bro/src/file_analysis/analyzer/x509/X509.cc:143: undefined > reference to `X509_get_X509_PUBKEY' > /home/pi/bro/src/file_analysis/analyzer/x509/X509.cc:203: undefined > reference to `X509_get_X509_PUBKEY' > 
/home/pi/bro/src/file_analysis/analyzer/x509/X509.cc:183: undefined > reference to `EVP_PKEY_get0_RSA' > /home/pi/bro/src/file_analysis/analyzer/x509/X509.cc:183: undefined > reference to `RSA_get0_key' > /home/pi/bro/src/file_analysis/analyzer/x509/X509.cc:158: undefined > reference to `X509_get_X509_PUBKEY' > collect2: error: ld returned 1 exit status > > Does anyone knows how to bypass this error? > > I also got problems installing Bro on Arch Linux, same error with > OpenSSL 1.1, but I dont know how to compile against OpenSSL 1.0, is > there a solution too? > > Thanks > > Dane > > From fatema.bannatwala at gmail.com Wed Mar 21 12:04:17 2018 From: fatema.bannatwala at gmail.com (fatema bannatwala) Date: Wed, 21 Mar 2018 15:04:17 -0400 Subject: [Bro] local.bro causing memory leak Message-ID: Hey Ben, So, if the whole purpose of doing file renaming was just for Splunk streaming, then I wonder why it won't work for you to just have forwarder keep monitoring the log files in the current Bro log dir. We are also a Splunk shop, and index some of our Bro logs into Splunk using Splunk forwarder running on our Bro manager, and just monitoring logs/current/ dir for the types of logs we want to index. It's a very basic setup, works without any issues on our side. wonder why it would create a problem in your situation. Hmm (or I might have mis-interpreted the problem :) ). Fatema. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180321/c18ce125/attachment.html From ben.bt.wood at gmail.com Wed Mar 21 12:57:15 2018 From: ben.bt.wood at gmail.com (Benjamin Wood) Date: Wed, 21 Mar 2018 15:57:15 -0400 Subject: [Bro] local.bro causing memory leak In-Reply-To: References: Message-ID: I'm wondering that now too. At some point we realized that events weren't making it all the way to splunk. But I couldn't tell you why. 
I'm going to be taking a step back and re-evaluate why the forwarder didn't work. I also figured out that using path_func in this way is a very BAD IDEA. I've spent a couple days crawling through the source code for the logging framework and testing some things. For every unique filename, a new thread will be started and a new writer will be created on a per file basis. The bad news is, there is no way to reap these threads when they are no longer needed. The only "in framework" process that will close file descriptors and reap process threads is rotation. Even if you enable rotation, files can still slip through, because rotation seems to only be effective against the current writer. I don't know if this breaks a contract outlined in the docs, but it seems like if this is an intended use of path_func then this is a bug that should be fixed. The only way that I could resolve the problem in bro alone, would be to author a custom log writer that would name the files in the way I wanted, and close these dangling file descriptors. It's a pretty complicated problem, and I'm planning to abstract it away buy using the features in a splunk forwarder, or using Apache NiFi to manage the logs if that fails. Thanks, Ben On Wed, Mar 21, 2018 at 3:04 PM, fatema bannatwala < fatema.bannatwala at gmail.com> wrote: > Hey Ben, > > So, if the whole purpose of doing file renaming was just for Splunk > streaming, > then I wonder why it won't work for you to just have forwarder keep > monitoring the log files in the current Bro log dir. > We are also a Splunk shop, and index some of our Bro logs into Splunk > using Splunk forwarder running on our Bro manager, and just monitoring > logs/current/ dir for the types of logs we want to index. > It's a very basic setup, works without any issues on our side. wonder why > it would create a problem in your situation. Hmm (or I might have > mis-interpreted the problem :) ). > > Fatema. 
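The per-path writer growth Ben describes can be modeled in a few lines. This is a toy illustration of the bookkeeping, not Bro internals: one writer per unique path, and nothing ever tears old ones down.

```python
import io

# path -> open handle; stands in for Bro's per-path log writer threads
writers = {}

def get_writer(path):
    # A writer is created on first use of a path and never reaped.
    if path not in writers:
        writers[path] = io.StringIO()  # real Bro would open an on-disk file
    return writers[path]

# A timestamped path_func yields a brand-new path every hour...
for hour in range(24):
    get_writer("conn_2018-03-20_%02d" % hour).write("log line\n")

# ...so after a day, 24 writers (and their file descriptors) are still live.
print(len(writers))
```

With rotation enabled, only the current writer per stream would ever be closed, which matches the behavior described above.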
> -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180321/73e74399/attachment.html From jazoff at illinois.edu Wed Mar 21 13:52:28 2018 From: jazoff at illinois.edu (Azoff, Justin S) Date: Wed, 21 Mar 2018 20:52:28 +0000 Subject: [Bro] local.bro causing memory leak In-Reply-To: References: Message-ID: <3593B396-2C1E-4A14-B9E8-97F5B7D251C4@illinois.edu> > On Mar 21, 2018, at 3:57 PM, Benjamin Wood wrote: > > I'm wondering that now too. > > At some point we realized that events weren't making it all the way to splunk. But I couldn't tell you why. I'm going to be taking a step back and re-evaluate why the forwarder didn't work. > > I also figured out that using path_func in this way is a very BAD IDEA. I've spent a couple days crawling through the source code for the logging framework and testing some things. > > For every unique filename, a new thread will be started and a new writer will be created on a per file basis. The bad news is, there is no way to reap these threads when they are no longer needed. > > The only "in framework" process that will close file descriptors and reap process threads is rotation. Even if you enable rotation, files can still slip through, because rotation seems to only be effective against the current writer. Well they should all be current and rotation will work. path_func is not normally used for splitting a file based on time, it's used for doing things like Log::remove_default_filter(HTTP::LOG); Log::add_filter(HTTP::LOG, [ $name = "http-directions", $path_func(id: Log::ID, path: string, rec: HTTP::Info) = { local l = Site::is_local_addr(rec$id$orig_h); local r = Site::is_local_addr(rec$id$resp_h); if(l && r) return "http_internal"; if (l) return "http_outbound"; else return "http_inbound"; } ]); in this case, internal,outbound, and inbound are all current writers and they will all get rotated. 
> I don't know if this breaks a contract outlined in the docs, but it seems like if this is an intended use of path_func then this is a bug that should be fixed. It's not a crazy idea, you're just the first person to ever do that. > The only way that I could resolve the problem in bro alone, would be to author a custom log writer that would name the files in the way I wanted, and close these dangling file descriptors. I can think of 2 solutions: 1) Just turn rotation back on and set the rotation interval to 5 minutes and disable compression. Point splunk at /usr/local/bro/logs/*/*.log The end result will be the same, the only downside is all data in splunk will have a 5 minute lag. 2) Enable rotation, but override Log::default_rotation_postprocessors the default runs this: redef Log::default_rotation_postprocessors += { [Log::WRITER_ASCII] = default_rotation_postprocessor_func }; ... # Default function to postprocess a rotated ASCII log file. It moves the rotated # file to a new name that includes a timestamp with the opening time, and then # runs the writer's default postprocessor command on it. function default_rotation_postprocessor_func(info: Log::RotationInfo) : bool { # If the filename has a ".gz" extension, then keep it. local gz = info$fname[-3:] == ".gz" ? ".gz" : ""; # Move file to name including both opening and closing time. local dst = fmt("%s.%s.log%s", info$path, strftime(Log::default_rotation_date_format, info$open), gz); system(fmt("/bin/mv %s %s", info$fname, dst)); # Run default postprocessor. return Log::run_rotation_postprocessor_cmd(info, dst); } if you redef Log::default_rotation_postprocessors to be empty, bro should close the old filenames and then "rotate" them. ? 
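For reference, the rename that `default_rotation_postprocessor_func` performs can be sketched in Python. The date format is Bro's default `Log::default_rotation_date_format` (`"%y-%m-%d_%H.%M.%S"`); this sketch uses UTC for determinism where the real function uses the local clock:

```python
import time

def default_rotation_dst(path, open_ts, fname):
    # Mirrors the default ASCII postprocessor: keep a ".gz" extension,
    # then name the file "<path>.<open time>.log[.gz]".
    gz = ".gz" if fname.endswith(".gz") else ""
    stamp = time.strftime("%y-%m-%d_%H.%M.%S", time.gmtime(open_ts))
    return "%s.%s.log%s" % (path, stamp, gz)

print(default_rotation_dst("conn", 0, "conn.log"))
```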
Justin Azoff From jlay at slave-tothe-box.net Wed Mar 21 16:50:18 2018 From: jlay at slave-tothe-box.net (James Lay) Date: Wed, 21 Mar 2018 17:50:18 -0600 Subject: [Bro] Converting my own feeds to bro intel In-Reply-To: References: Message-ID: <1521676218.2308.1.camel@slave-tothe-box.net> This should fit the bill: https://github.com/jonschipp/mal-dnssearch If you're using effective domain you'll need to to some grep/seding to change it. James On Tue, 2018-03-20 at 23:14 +0530, Blason R wrote: > Hi, > > I do have certain OSINT Feeds and wanted to convert those to > intel.dat and later consumed by ELK stack. Can someone guide how do I > convert those IP addresses into intel.dat. > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180321/e10a58ba/attachment.html From dnthayer at illinois.edu Wed Mar 21 17:46:33 2018 From: dnthayer at illinois.edu (Daniel Thayer) Date: Wed, 21 Mar 2018 19:46:33 -0500 Subject: [Bro] all broctl instances are running yet broctl status shows stopped In-Reply-To: References: <11d73228-169e-7f0e-d32b-62e297748d6e@illinois.edu> Message-ID: <7242a03a-9316-f4dc-30f1-823d13d7c8b1@illinois.edu> On 3/21/18 5:58 AM, william de ping wrote: > Hi Daniel, > > Thanks > > I have deleted another bro environment on that server and doubled > checked that there are no other broctl\bro executable besides the > work_dir and build_dir. > > Yet this issue still occures. > I run bro with a specific user and on "top" I see that bro is running > under that user, yet "./bin/broctl status" still returns that all > instances are stopped. > > Any suggestions ? > > Thanks again > B When you run "broctl diag", it will output the contents of several files in the Bro working directory (this is the directory where bro is running). 
For example, it will show you the contents of the ".status" file and "stdout.log", and several other files. If you don't see anything in the output, but you are sure that bro is running (and producing logs), then that means bro is running in a different directory. Each installation of bro uses its own directory paths for locations of the config files, working directory, executables, etc. You can see these by running "broctl config". You can check if the output of "broctl config | grep spooldir" is the parent directory of the directory where you are seeing bro producing log files. From blason16 at gmail.com Wed Mar 21 20:37:58 2018 From: blason16 at gmail.com (Blason R) Date: Thu, 22 Mar 2018 09:07:58 +0530 Subject: [Bro] Converting my own feeds to bro intel In-Reply-To: <1521676218.2308.1.camel@slave-tothe-box.net> References: <1521676218.2308.1.camel@slave-tothe-box.net> Message-ID: Thanks appreciate your quick answer. Let me dive in :) On Thu, Mar 22, 2018 at 5:20 AM, James Lay wrote: > This should fit the bill: > > https://github.com/jonschipp/mal-dnssearch > > If you're using effective domain you'll need to to some grep/seding to > change it. > > James > > On Tue, 2018-03-20 at 23:14 +0530, Blason R wrote: > > Hi, > > I do have certain OSINT Feeds and wanted to convert those to intel.dat and > later consumed by ELK stack. Can someone guide how do I convert those IP > addresses into intel.dat. > > _______________________________________________ > Bro mailing listbro at bro-ids.orghttp://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180322/1f0cfbc3/attachment-0001.html From jholmes at psu.edu Thu Mar 22 09:32:50 2018 From: jholmes at psu.edu (Jason Holmes) Date: Thu, 22 Mar 2018 12:32:50 -0400 Subject: [Bro] local.bro causing memory leak In-Reply-To: References: <9E94060A-D614-4178-BB3C-76C66376740A@corelight.com> Message-ID: <8ca9bf27-36c0-5391-526f-e4a21dac9f61@psu.edu> We're streaming JSON versions of Bro logs into Splunk without an issue. Some pointers that may help: 1. Set your initCrcLength to something like 2048 in your monitor statement in your inputs.conf for Bro logs. The default is 256 bytes, which can be too small to extend past the headers at the beginning of a Bro log for some log types. If you don't do something like this, Splunk will get confused when logs rotate because it will find a log with a different name having the same CRC. This could be why you're having issues with file renames on log rotation. 2. If you rotate your logs off to some other server for long term storage, keep a day or three local as well and have Splunk monitor those directories as well. If you have the initCrcLength set, Splunk is smart enough to recognize that conn.log and conn-datestamp.log are the same thing if they have the same initCrcLength and won't reindex the rotated log. On the other hand, if Splunk was down or had a log queued for batch processing and didn't get it before it was rotated, it'll pick it up from the archive directory. We accomplish this by rotating to an archive directory on the same partition on the Bro manager. That makes the rotate time almost nothing since the move is essentially a rename rather than moving all of those bytes of logs. We then use a cron job with rsync to copy the files over to long term storage. Another cron job removes files that are too old. 
Example monitor statements: [monitor:///path/to/your/bro/spool/manager/] disabled = 0 sourcetype = json_bro index = your_bro_index initCrcLength = 2048 whitelist = (dns|notice|weird)_json.*\.log$ [monitor:///path/to/your/bro/spool/archive/20*/] disabled = 0 sourcetype = json_bro index = your_bro_index initCrcLength = 2048 whitelist = (dns|notice|weird)_json.*\.log$ 3. If you're moving a massive amount of Bro logs and are regularly falling behind, try a heavy forwarder rather than a universal forwarder and bump the number of parallelIngestionPipelines in your server.conf for your Bro node up. Thanks, -- Jason Holmes On 3/20/18 4:11 PM, Benjamin Wood wrote: > Thanks Seth. > > The whole problem I'm trying to solve is steaming data into splunk. > Splunk forwarder's don't like it when filenames change, and the > artificial delay created by rotating logs adds too much latency. The > solution that was proposed was "don't rotate logs", and leave them in > place long enough for the forwarders to finish. > > At this point I've got to step back and ask, "Am I doing it wrong?" This > problem has to have been solved by others. I'm certain there is a way to > stream my data to splunk that is better than this. > > The file rotation and renaming functions give me enough to play with to > solve the problem using bro-script. > > Thanks again for the feedback, > Ben > > On Tue, Mar 20, 2018 at 2:50 PM, Seth Hall > wrote: > > __ > > On 19 Mar 2018, at 15:31, Benjamin Wood wrote: > > I'm running my cluster with broctl, and rotation is turned off > because I'm > naming files with a timestamp to begin with. > > Justin got your problem right. If you turn off file rotation, then > Bro is never closing any of these hourly logs. You have to be really > careful with how you use $path_func because you can easily get > yourself into hot water. > > Alternately you need to define a rotation interval and post > processor. Something like this... 
> function my_log_post_processor(info: Log::RotationInfo): bool
>     {
>     local ext = sub(info$fname, /^[^\-]+-[0-9]+-[0-9]+-[0-9]+_[0-9]+\.[0-9]+\.[0-9]+\./, "");
>
>     # Move file to name including both opening and closing time.
>     local dst = fmt("%s_%s_%s-%s.%s", info$path, strftime("%Y%m%d", info$open),
>                     strftime("%H:%M:%S", info$open),
>                     strftime("%H:%M:%S%z", info$close),
>                     ext);
>     local cmd = fmt("/bin/mv %s %s/%s", info$fname, "/data/logs", dst);
>     system(cmd);
>
>     return T;
>     }
>
> event bro_init()
>     {
>     for ( id in Log::active_streams )
>         {
>         local filter = Log::get_filter(id, "default");
>         filter$interv = 1hr;
>         filter$postprocessor = my_log_post_processor;
>         Log::add_filter(id, filter);
>         }
>     }
>
> Something like that will enable you to turn off log rotation in broctl (but you'll lose some broctl niceties as well).
>
> .Seth
>
> --
> Seth Hall * Corelight, Inc * www.corelight.com
>
> _______________________________________________
> Bro mailing list
> bro at bro-ids.org
> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro

From 2015223040113 at stu.scu.edu.cn Sun Mar 25 19:16:51 2018 From: 2015223040113 at stu.scu.edu.cn (=?GBK?B?wO7RqcDy?=) Date: Mon, 26 Mar 2018 10:16:51 +0800 Subject: [Bro] How to change the situation that BRO signature only match once at most Message-ID: <18032610165195bfdf507f7ff1ba10906a0f40bdc232@stu.scu.edu.cn> Hi, everyone, I have recently been working with Bro IDS; that is, I want to intercept some REST messages from the network interface using signatures, and I found that I can only intercept a part of all of the messages. For example, I can use tshark to intercept, let's say, 100 messages, but with Bro, there are only 50. 
And I have read the official document that says, "Each signature is reported at most once for every connection, further matches of the same signature are ignored". I just want to know is their any chance to change this situation? or did I configure something wrong? Regards, Sherry from China -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180326/b5dcdef0/attachment.html From brot212 at googlemail.com Tue Mar 27 09:31:57 2018 From: brot212 at googlemail.com (D.W.) Date: Tue, 27 Mar 2018 18:31:57 +0200 Subject: [Bro] New bro types when writing a plugin Message-ID: <56872b17-aed5-5d69-20c6-fcc6edbdaedc@googlemail.com> Hey there, I'm writing an analyzer as a plugin and I would like to create some new bro data type (record type to be exact) to hand over some protocol data in a compact form as parameters to the event functions. For now I have declared the new types in types.bif and defined them in init-bare.bro, but I don't think that this is the right way, because I have to manually modify the bro source files. Is there a way to declare and define the new type inside the plugin source files, so that the types will be featured in bro after the plugin was installed? Greetings, Dane From vitaly.repin at gmail.com Wed Mar 28 00:32:33 2018 From: vitaly.repin at gmail.com (Vitaly Repin) Date: Wed, 28 Mar 2018 10:32:33 +0300 Subject: [Bro] New bro types when writing a plugin In-Reply-To: <56872b17-aed5-5d69-20c6-fcc6edbdaedc@googlemail.com> References: <56872b17-aed5-5d69-20c6-fcc6edbdaedc@googlemail.com> Message-ID: Hello, Take a look into this example: https://github.com/vitalyrepin/uap-bro I have defined three record types in that plugin: DeviceRec, UserAgentRec and AgentRec. P.S. I think bro-dev is a better mailing list to discuss bro dev. issues: http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev 2018-03-27 19:31 GMT+03:00 D.W. 
: > Hey there, > > I'm writing an analyzer as a plugin and I would like to create some new > bro data type (record type to be exact) to hand over some protocol data > in a compact form as parameters to the event functions. > > For now I have declared the new types in types.bif and defined them in > init-bare.bro, but I don't think that this is the right way, because I > have to manually modify the bro source files. > > Is there a way to declare and define the new type inside the plugin > source files, so that the types will be featured in bro after the plugin > was installed? > > Greetings, > > Dane > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -- WBR & WBW, Vitaly -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180328/9c532092/attachment.html From brot212 at googlemail.com Wed Mar 28 01:41:50 2018 From: brot212 at googlemail.com (DW) Date: Wed, 28 Mar 2018 10:41:50 +0200 Subject: [Bro] New bro types when writing a plugin In-Reply-To: References: <56872b17-aed5-5d69-20c6-fcc6edbdaedc@googlemail.com> Message-ID: <45c7e8a7-b363-87e7-a6af-0cae70691005@googlemail.com> Hey, thanks for the hint. I declared my types in types.bif and defined them in types.bro now, but now I can't access them anymore in my Source.cc file. I get the following error: error: 'BifType::Record::My_Type' has not been declared My .cc line: rl = new RecordVal(BifType::Record::My_Type); How can I access the types now? P.S. Yeah, I think the other mailing list is better for this purpose, but I don't want to start a new question now. :-) On 28.03.2018 at 09:32, Vitaly Repin wrote: > Hello, > > > Take a look into this example: https://github.com/vitalyrepin/uap-bro > I have defined three record types in that plugin: DeviceRec, > UserAgentRec and AgentRec. > > P.S. 
I think bro-dev is a better mailing list to discuss bro dev. > issues: http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev > > 2018-03-27 19:31 GMT+03:00 D.W. >: > > Hey there, > > I'm writing an analyzer as a plugin and I would like to create > some new > bro data type (record type to be exact) to hand over some protocol > data > in a compact form as parameters to the event functions. > > For now I have declared the new types in types.bif and defined them in > init-bare.bro, but I don't think that this is the right way, because I > have to manually modify the bro source files. > > Is there a way to declare and define the new type inside the plugin > source files, so that the types will be featured in bro after the > plugin > was installed? > > Greetings, > > Dane > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > > > > > -- > WBR & WBW, Vitaly -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180328/07b76281/attachment.html From philosnef at gmail.com Wed Mar 28 09:52:34 2018 From: philosnef at gmail.com (erik clark) Date: Wed, 28 Mar 2018 12:52:34 -0400 Subject: [Bro] filebeat +elk Message-ID: I am trying to ingest bro 2.5 json logs into an elk stack, using filebeat to push the logs. Is that even the best way to do this? I have found MUCH outdated material on ingesting bro logs into an elk stack, but very little that is up to date, and some of which is up to date but is using older versions of software from elastic.co. If anyone has a modern bro/elk integration document they use(d) to set their environment up, it would be greatly appreciated if you could share. Thanks! Erik -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180328/90916de0/attachment.html From zeolla at gmail.com Wed Mar 28 10:09:16 2018 From: zeolla at gmail.com (Zeolla@GMail.com) Date: Wed, 28 Mar 2018 17:09:16 +0000 Subject: [Bro] filebeat +elk In-Reply-To: References: Message-ID: Do you specifically need to send it to logstash or do you just need it to get inserted into elasticsearch? Jon On Wed, Mar 28, 2018 at 1:07 PM erik clark wrote: > I am trying to ingest bro 2.5 json logs into an elk stack, using filebeat > to push the logs. Is that even the best way to do this? I have found MUCH > outdated material on ingesting bro logs into an elk stack, but very little > that is up to date, and some of which is up to date but is using older > versions of software from elastic.co. If anyone has a modern bro/elk > integration document they use(d) to set their environment up, it would be > greatly appreciated if you could share. Thanks! > > Erik > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -- Jon -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180328/f778457c/attachment.html From pkelley at hyperionavenue.com Wed Mar 28 10:28:46 2018 From: pkelley at hyperionavenue.com (Patrick Kelley) Date: Wed, 28 Mar 2018 13:28:46 -0400 Subject: [Bro] filebeat +elk In-Reply-To: References: Message-ID: Erik, I?m doing this with Ubuntu and Pi devices. I?ll send you all of my notes outside of the main channel. Patrick Kelley, CISSP, C|EH, ITIL Principal Security Engineer patrick.kelley at criticalpathsecurity.com > On Mar 28, 2018, at 1:09 PM, Zeolla at GMail.com wrote: > > Do you specifically need to send it to logstash or do you just need it to get inserted into elasticsearch? 
> > Jon > >> On Wed, Mar 28, 2018 at 1:07 PM erik clark wrote: >> I am trying to ingest bro 2.5 json logs into an elk stack, using filebeat to push the logs. Is that even the best way to do this? I have found MUCH outdated material on ingesting bro logs into an elk stack, but very little that is up to date, and some of which is up to date but is using older versions of software from elastic.co. If anyone has a modern bro/elk integration document they use(d) to set their environment up, it would be greatly appreciated if you could share. Thanks! >> >> Erik >> _______________________________________________ >> Bro mailing list >> bro at bro-ids.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -- > Jon > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180328/21dd85f6/attachment-0001.html From blason16 at gmail.com Wed Mar 28 10:29:43 2018 From: blason16 at gmail.com (Blason R) Date: Wed, 28 Mar 2018 22:59:43 +0530 Subject: [Bro] filebeat +elk In-Reply-To: References: Message-ID: I guess you refer to securityonion they already have done that and lot of logstash config file. Hats off to SO folks and Justin Henderson On Wed, Mar 28, 2018 at 10:39 PM, Zeolla at GMail.com wrote: > Do you specifically need to send it to logstash or do you just need it to > get inserted into elasticsearch? > > Jon > > On Wed, Mar 28, 2018 at 1:07 PM erik clark wrote: > >> I am trying to ingest bro 2.5 json logs into an elk stack, using filebeat >> to push the logs. Is that even the best way to do this? I have found MUCH >> outdated material on ingesting bro logs into an elk stack, but very little >> that is up to date, and some of which is up to date but is using older >> versions of software from elastic.co. 
If anyone has a modern bro/elk >> integration document they use(d) to set their environment up, it would be >> greatly appreciated if you could share. Thanks! >> >> Erik >> _______________________________________________ >> Bro mailing list >> bro at bro-ids.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > -- > > Jon > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180328/27d8ff01/attachment.html From michalpurzynski1 at gmail.com Wed Mar 28 10:52:39 2018 From: michalpurzynski1 at gmail.com (=?utf-8?Q?Micha=C5=82_Purzy=C5=84ski?=) Date: Wed, 28 Mar 2018 10:52:39 -0700 Subject: [Bro] filebeat +elk In-Reply-To: References: Message-ID: <524BDB72-63F8-43DE-BF52-9696565D6225@gmail.com> Sending details outside of the mailing list is not cool and against what the open source community stands for. Anyway, we?ve had a great success with taking Bro JSON logs and shipping them over to RabbitMQ with syslog-ng (no parsing done on the syslog-ng side) and fetching those with MozDef workers (which are python). 6k eps no sweat. > On Mar 28, 2018, at 10:28 AM, Patrick Kelley wrote: > > Erik, > > I?m doing this with Ubuntu and Pi devices. I?ll send you all of my notes outside of the main channel. > > Patrick Kelley, CISSP, C|EH, ITIL > Principal Security Engineer > patrick.kelley at criticalpathsecurity.com > > >> On Mar 28, 2018, at 1:09 PM, Zeolla at GMail.com wrote: >> >> Do you specifically need to send it to logstash or do you just need it to get inserted into elasticsearch? >> >> Jon >> >>> On Wed, Mar 28, 2018 at 1:07 PM erik clark wrote: >>> I am trying to ingest bro 2.5 json logs into an elk stack, using filebeat to push the logs. Is that even the best way to do this? 
I have found MUCH outdated material on ingesting bro logs into an elk stack, but very little that is up to date, and some of which is up to date but is using older versions of software from elastic.co. If anyone has a modern bro/elk integration document they use(d) to set their environment up, it would be greatly appreciated if you could share. Thanks! >>> >>> Erik >>> _______________________________________________ >>> Bro mailing list >>> bro at bro-ids.org >>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >> -- >> Jon >> >> _______________________________________________ >> Bro mailing list >> bro at bro-ids.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180328/2691b3b3/attachment.html From blason16 at gmail.com Wed Mar 28 10:55:45 2018 From: blason16 at gmail.com (Blason R) Date: Wed, 28 Mar 2018 23:25:45 +0530 Subject: [Bro] filebeat +elk In-Reply-To: References: Message-ID: By the way Bro does log in JSON format that can directly be ingested into elastic search On Wed, Mar 28, 2018 at 10:58 PM, Patrick Kelley wrote: > Erik, > > I?m doing this with Ubuntu and Pi devices. I?ll send you all of my notes > outside of the main channel. > > *Patrick Kelley, CISSP, C|EH, ITIL* > *Principal Security Engineer* > patrick.kelley at criticalpathsecurity.com > > > On Mar 28, 2018, at 1:09 PM, Zeolla at GMail.com wrote: > > Do you specifically need to send it to logstash or do you just need it to > get inserted into elasticsearch? > > Jon > > On Wed, Mar 28, 2018 at 1:07 PM erik clark wrote: > >> I am trying to ingest bro 2.5 json logs into an elk stack, using filebeat >> to push the logs. Is that even the best way to do this? 
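[As Blason notes, Bro can write its logs as JSON natively; in Bro 2.5 that is normally switched on from local.bro. A sketch — the json-logs tuning script ships with the stock policy scripts, and if your layout differs the redef form does the same thing:

```bro
# local.bro -- emit all ASCII logs as JSON instead of tab-separated text
@load tuning/json-logs

# or set the writer options directly:
# redef LogAscii::use_json = T;
# redef LogAscii::json_timestamps = JSON::TS_ISO8601;
```
]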
I have found MUCH >> outdated material on ingesting bro logs into an elk stack, but very little >> that is up to date, and some of which is up to date but is using older >> versions of software from elastic.co. If anyone has a modern bro/elk >> integration document they use(d) to set their environment >> up, it would be >> greatly appreciated if you could share. Thanks! >> >> Erik >> _______________________________________________ >> Bro mailing list >> bro at bro-ids.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > -- > > Jon > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180328/4415fada/attachment.html From patrick.kelley at criticalpathsecurity.com Wed Mar 28 11:14:11 2018 From: patrick.kelley at criticalpathsecurity.com (Patrick Kelley) Date: Wed, 28 Mar 2018 14:14:11 -0400 Subject: [Bro] filebeat +elk In-Reply-To: <524BDB72-63F8-43DE-BF52-9696565D6225@gmail.com> References: <524BDB72-63F8-43DE-BF52-9696565D6225@gmail.com> Message-ID: Michał, It's not cool to assume the role of defining how others give away their personal time and efforts. I'm sure you'll manage just fine with my pushing unpolished notes to a particular person, as opposed to mass transmitting what could be a complete "goat rodeo" for everyone else. The "community" worked just fine. A person had a need. They asked. It was filled. Go have a coke and a smile. -PK On Wed, Mar 28, 2018 at 1:52 PM, Michał Purzyński < michalpurzynski1 at gmail.com> wrote: > Sending details outside of the mailing list is not cool and against what > the open source community stands for.
> > Anyway, we?ve had a great success with taking Bro JSON logs and shipping > them over to RabbitMQ with syslog-ng (no parsing done on the syslog-ng > side) and fetching those with MozDef workers (which are python). > > 6k eps no sweat. > > > On Mar 28, 2018, at 10:28 AM, Patrick Kelley > wrote: > > Erik, > > I?m doing this with Ubuntu and Pi devices. I?ll send you all of my notes > outside of the main channel. > > *Patrick Kelley, CISSP, C|EH, ITIL* > *Principal Security Engineer* > patrick.kelley at criticalpathsecurity.com > > > On Mar 28, 2018, at 1:09 PM, Zeolla at GMail.com wrote: > > Do you specifically need to send it to logstash or do you just need it to > get inserted into elasticsearch? > > Jon > > On Wed, Mar 28, 2018 at 1:07 PM erik clark wrote: > >> I am trying to ingest bro 2.5 json logs into an elk stack, using filebeat >> to push the logs. Is that even the best way to do this? I have found MUCH >> outdated material on ingesting bro logs into an elk stack, but very little >> that is up to date, and some of which is up to date but is using older >> versions of software from elastic.co. If anyone has a modern bro/elk >> integration document they use(d) to set their environment up, it would be >> greatly appreciated if you could share. Thanks! 
>> >> Erik >> _______________________________________________ >> Bro mailing list >> bro at bro-ids.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > -- > > Jon > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -- *Patrick Kelley, CISSP, C|EH, ITIL* *CTO* patrick.kelley at criticalpathsecurity.com (o) 770-224-6482 -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180328/937cab7a/attachment-0001.html From daniel.guerra69 at gmail.com Wed Mar 28 11:18:24 2018 From: daniel.guerra69 at gmail.com (Daniel Guerra) Date: Wed, 28 Mar 2018 20:18:24 +0200 Subject: [Bro] filebeat +elk In-Reply-To: References: Message-ID: <25f3322c-9ef0-30ba-a053-0a742391e0c9@gmail.com> I would use json to stdout with a python script to insert it in elasticsearch. I think its the most efficient and stable method. The latest elasticsearch needs separate index for the different log types. There is a bro-pkg for json to stdout. Op 28/03/2018 om 18:52 schreef erik clark: > I am trying to ingest bro 2.5 json logs into an elk stack, using > filebeat to push the logs. Is that even the best way to do this? I > have found MUCH outdated material on ingesting bro logs into an elk > stack, but very little that is up to date, and some of which is up to > date but is using older versions of software from elastic.co > . If anyone has a modern bro/elk integration > document they use(d) to set their environment up, it would be greatly > appreciated if you could share. Thanks! 
> > Erik > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180328/8bd1043d/attachment.html From philosnef at gmail.com Wed Mar 28 11:21:38 2018 From: philosnef at gmail.com (erik clark) Date: Wed, 28 Mar 2018 14:21:38 -0400 Subject: [Bro] filebeat +elk In-Reply-To: References: Message-ID: I just need to get it into ES. I am going to pump eve.json in as well. I have no experience with the ELK stack at all, other than some ES work from dealing with moloch content going in there and configuring it appropriately. If I can just bypass everything and push eve.json and bro json logs directly in, that would be fantastic. Thanks Jon! On Wed, Mar 28, 2018 at 1:09 PM, Zeolla at GMail.com wrote: > Do you specifically need to send it to logstash or do you just need it to > get inserted into elasticsearch? > > Jon > > On Wed, Mar 28, 2018 at 1:07 PM erik clark wrote: > >> I am trying to ingest bro 2.5 json logs into an elk stack, using filebeat >> to push the logs. Is that even the best way to do this? I have found MUCH >> outdated material on ingesting bro logs into an elk stack, but very little >> that is up to date, and some of which is up to date but is using older >> versions of software from elastic.co. If anyone has a modern bro/elk >> integration document they use(d) to set their environment up, it would be >> greatly appreciated if you could share. Thanks! >> >> Erik >> _______________________________________________ >> Bro mailing list >> bro at bro-ids.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > -- > > Jon > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180328/3948ccd9/attachment.html From zeolla at gmail.com Wed Mar 28 11:23:59 2018 From: zeolla at gmail.com (Zeolla@GMail.com) Date: Wed, 28 Mar 2018 18:23:59 +0000 Subject: [Bro] filebeat +elk In-Reply-To: References: Message-ID: No guarantees, but this[1] may be helpful. I've recently moved to pushing things to kafka using this[2], which eventually feeds into ES using Apache Metron which adds some other benefits but is meant for large scale environments (i.e. it is definitely _not_ lightweight). 1: https://github.com/bro/bro-plugins/tree/00d039442b97ba545e6020200d96a3cba9d9181b/elasticsearch 2: https://github.com/apache/metron-bro-plugin-kafka Jon On Wed, Mar 28, 2018 at 2:21 PM erik clark wrote: > I just need to get it into ES. I am going to pump eve.json in as well. I > have no experience with the ELK stack at all, other than some ES work from > dealing with moloch content going in there and configuring it appropriately. > If I can just bypass everything and push eve.json and bro json logs > directly in, that would be fantastic. > > Thanks Jon! > > On Wed, Mar 28, 2018 at 1:09 PM, Zeolla at GMail.com > wrote: > >> Do you specifically need to send it to logstash or do you just need it to >> get inserted into elasticsearch? >> >> Jon >> >> On Wed, Mar 28, 2018 at 1:07 PM erik clark wrote: >> >>> I am trying to ingest bro 2.5 json logs into an elk stack, using >>> filebeat to push the logs. Is that even the best way to do this? I have >>> found MUCH outdated material on ingesting bro logs into an elk stack, but >>> very little that is up to date, and some of which is up to date but is >>> using older versions of software from elastic.co. If anyone has a >>> modern bro/elk integration document they use(d) to set their environment >>> up, it would be greatly appreciated if you could share. Thanks! 
>>> >>> Erik >>> _______________________________________________ >>> Bro mailing list >>> bro at bro-ids.org >>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >> >> -- >> >> Jon >> > > -- Jon -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180328/ddc38dd3/attachment.html From reswob10 at gmail.com Wed Mar 28 11:32:38 2018 From: reswob10 at gmail.com (craig bowser) Date: Wed, 28 Mar 2018 18:32:38 +0000 Subject: [Bro] filebeat +elk In-Reply-To: <25f3322c-9ef0-30ba-a053-0a742391e0c9@gmail.com> References: <25f3322c-9ef0-30ba-a053-0a742391e0c9@gmail.com> Message-ID: So at work I was using logstash on bro, reading each file, parsing and enhancing the data, then sending to elasticsearch. But then that was taking too many resources from bro, so now I'm using filebeat to send each log to a logstash server which parses, enhances and sends to elasticsearch. At home I'm using syslog-ng to send bro logs to logstash. The suggestion to use rabbitmq is good as well. On Wed, Mar 28, 2018, 2:23 PM Daniel Guerra wrote: > I would use json to stdout with a python script to > > insert it in elasticsearch. I think it's the most efficient > > and stable method. The latest elasticsearch needs > > separate index for the different log types. > > There is a bro-pkg for json to stdout. > > > > > On 28/03/2018 at 18:52, erik clark wrote: > > I am trying to ingest bro 2.5 json logs into an elk stack, using filebeat > to push the logs. Is that even the best way to do this? I have found MUCH > outdated material on ingesting bro logs into an elk stack, but very little > that is up to date, and some of which is up to date but is using older > versions of software from elastic.co. If anyone has a modern bro/elk > integration document they use(d) to set their environment up, it would be > greatly appreciated if you could share. Thanks!
> > Erik > > > _______________________________________________ > Bro mailing listbro at bro-ids.orghttp://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180328/e50c19c1/attachment-0001.html From blason16 at gmail.com Wed Mar 28 11:35:05 2018 From: blason16 at gmail.com (Blason R) Date: Thu, 29 Mar 2018 00:05:05 +0530 Subject: [Bro] filebeat +elk In-Reply-To: References: Message-ID: Undoubtedly go ahead with Filebeat and elasticsearch and you should be good to go. ES will automatically index since those being into JSON On Wed, Mar 28, 2018 at 11:51 PM, erik clark wrote: > I just need to get it into ES. I am going to pump eve.json in as well. I > have no experience with the ELK stack at all, other than some ES work from > dealing with moloch content going in there and configuring it appropriately. > If I can just bypass everything and push eve.json and bro json logs > directly in, that would be fantastic. > > Thanks Jon! > > On Wed, Mar 28, 2018 at 1:09 PM, Zeolla at GMail.com > wrote: > >> Do you specifically need to send it to logstash or do you just need it to >> get inserted into elasticsearch? >> >> Jon >> >> On Wed, Mar 28, 2018 at 1:07 PM erik clark wrote: >> >>> I am trying to ingest bro 2.5 json logs into an elk stack, using >>> filebeat to push the logs. Is that even the best way to do this? I have >>> found MUCH outdated material on ingesting bro logs into an elk stack, but >>> very little that is up to date, and some of which is up to date but is >>> using older versions of software from elastic.co. If anyone has a >>> modern bro/elk integration document they use(d) to set their environment >>> up, it would be greatly appreciated if you could share. Thanks! 
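[For the Filebeat-straight-to-Elasticsearch route, a minimal Filebeat 6.x sketch; the Bro log path and the Elasticsearch host are assumptions to adapt, not canonical values:

```yaml
filebeat.prospectors:
  - type: log
    paths:
      - /usr/local/bro/logs/current/*.log   # assumed Bro log directory
    json.keys_under_root: true   # lift the Bro JSON fields to top level
    json.add_error_key: true     # tag any line that fails JSON parsing

output.elasticsearch:
  hosts: ["localhost:9200"]
```
]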
>>> >>> Erik >>> _______________________________________________ >>> Bro mailing list >>> bro at bro-ids.org >>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >> >> -- >> >> Jon >> > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180329/8d48280e/attachment.html From michalpurzynski1 at gmail.com Wed Mar 28 11:45:11 2018 From: michalpurzynski1 at gmail.com (=?utf-8?Q?Micha=C5=82_Purzy=C5=84ski?=) Date: Wed, 28 Mar 2018 11:45:11 -0700 Subject: [Bro] filebeat +elk In-Reply-To: References: Message-ID: <17799AA1-4F36-4130-82A3-C82C983E465E@gmail.com> > On Mar 28, 2018, at 11:23 AM, Zeolla at GMail.com wrote: > > No guarantees, but this[1] may be helpful. I've recently moved to pushing things to kafka using this[2], which eventually feeds into ES using Apache Metron which adds some other benefits but is meant for large scale environments (i.e. it is definitely _not_ lightweight). > > 1: https://github.com/bro/bro-plugins/tree/00d039442b97ba545e6020200d96a3cba9d9181b/elasticsearch > 2: https://github.com/apache/metron-bro-plugin-kafka > > Jon > >> On Wed, Mar 28, 2018 at 2:21 PM erik clark wrote: >> I just need to get it into ES. I am going to pump eve.json in as well. I have no experience with the ELK stack at all, other than some ES work from dealing with moloch content going in there and configuring it appropriately. >> If I can just bypass everything and push eve.json and bro json logs directly in, that would be fantastic. >> >> Thanks Jon! >> >>> On Wed, Mar 28, 2018 at 1:09 PM, Zeolla at GMail.com wrote: >>> Do you specifically need to send it to logstash or do you just need it to get inserted into elasticsearch? 
>>> >>> Jon >>> >>>> On Wed, Mar 28, 2018 at 1:07 PM erik clark wrote: >>>> I am trying to ingest bro 2.5 json logs into an elk stack, using filebeat to push the logs. Is that even the best way to do this? I have found MUCH outdated material on ingesting bro logs into an elk stack, but very little that is up to date, and some of which is up to date but is using older versions of software from elastic.co. If anyone has a modern bro/elk integration document they use(d) to set their environment up, it would be greatly appreciated if you could share. Thanks! >>>> >>>> Erik >>>> _______________________________________________ >>>> Bro mailing list >>>> bro at bro-ids.org >>>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >>> -- >>> Jon >>> >> > -- > Jon > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180328/17e94c5f/attachment-0001.html -------------- next part -------------- A non-text attachment was scrubbed... Name: image1.jpeg Type: image/jpeg Size: 64259 bytes Desc: not available Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180328/17e94c5f/attachment-0001.jpeg From daniel.guerra69 at gmail.com Wed Mar 28 11:55:55 2018 From: daniel.guerra69 at gmail.com (Daniel Guerra) Date: Wed, 28 Mar 2018 20:55:55 +0200 Subject: [Bro] filebeat +elk In-Reply-To: References: <25f3322c-9ef0-30ba-a053-0a742391e0c9@gmail.com> Message-ID: <403e1533-c760-507c-175f-4e72eb73e21b@gmail.com> No, no logstash. Like this , with bro writing json to stdout The python script takes data from stdin and writes it real-time into elasticsearch. You need to add _stream to know which type of log it is. 
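[A minimal sketch of that stdin feeder; the host, the per-stream index naming, and the _stream field name are illustrative choices here, not the bro-pkg script itself:

```python
import json
import sys
import urllib.request

ES_URL = "http://localhost:9200/_bulk"  # assumed local Elasticsearch

def to_bulk(lines, stream):
    """Turn JSON log lines into an Elasticsearch bulk body, tagging each
    document with a _stream field so the log type survives the merge."""
    out = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        doc = json.loads(line)
        doc["_stream"] = stream
        # one index per log type keeps ES 6.x's single-type mapping happy
        out.append(json.dumps({"index": {"_index": "bro-%s" % stream}}))
        out.append(json.dumps(doc))
    return "\n".join(out) + "\n"

def send(body):
    req = urllib.request.Request(
        ES_URL,
        data=body.encode(),
        headers={"Content-Type": "application/x-ndjson"},
    )
    urllib.request.urlopen(req)

if __name__ == "__main__" and "--send" in sys.argv:
    # e.g.: tail -F conn.log | python es_feed.py --send conn
    send(to_bulk(sys.stdin, sys.argv[-1]))
```

Run it as e.g. `tail -F conn.log | python es_feed.py --send conn` (es_feed.py being whatever you name the file).]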
On 28/03/2018 at 20:32, craig bowser wrote: > So at work I was using logstash on bro, reading each file, parsing > and enhancing the data, then sending to elasticsearch. But then that > was taking too many resources from bro, so now I'm using filebeat to > send each log to a logstash server which parses, enhances and sends to > elasticsearch. > > At home I'm using syslog-ng to send bro logs to logstash. > > The suggestion to use rabbitmq is good as well. > > On Wed, Mar 28, 2018, 2:23 PM Daniel Guerra > wrote: > > I would use json to stdout with a python script to > > insert it in elasticsearch. I think it's the most efficient > > and stable method. The latest elasticsearch needs > > separate index for the different log types. > > There is a bro-pkg for json to stdout. > > > > > On 28/03/2018 at 18:52, erik clark wrote: >> I am trying to ingest bro 2.5 json logs into an elk stack, using >> filebeat to push the logs. Is that even the best way to do this? >> I have found MUCH outdated material on ingesting bro logs into an >> elk stack, but very little that is up to date, and some of which >> is up to date but is using older versions of software from >> elastic.co . If anyone has a modern bro/elk >> integration document they use(d) to set their environment up, it >> would be greatly appreciated if you could share. Thanks! >> >> Erik >> >> >> _______________________________________________ >> Bro mailing list >> bro at bro-ids.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180328/6e2d7b1b/attachment.html From blake_moss at byu.edu Wed Mar 28 11:59:09 2018 From: blake_moss at byu.edu (Blake Moss) Date: Wed, 28 Mar 2018 18:59:09 +0000 Subject: [Bro] filebeat +elk In-Reply-To: References: <25f3322c-9ef0-30ba-a053-0a742391e0c9@gmail.com>, Message-ID: <8dcdea397a2a41baae962800f69e31cb@MB1.byu.local> On this subject, We?ve had issues with both filebeats and logstash reading logs (written to files) once events per second reaches upwards of 3k. We are currently looking into using the bro kafka plugin. Has anyone else had issues with logstash or filebeats bottlenecking? From: craig bowser Sent: Wednesday, March 28, 2018 12:44 PM To: Daniel Guerra Cc: bro at bro.org Subject: Re: [Bro] filebeat +elk So at job I was using logstash on bro and reading each file, parsing and enhancing the data then sending to elasticsearch. But then that was talking too many resources from bro, do not I'm using filebeat to send each log to a logstash server which parses, enhances and sends to elasticsearch. At home I'm using syslog-ng to send bro logs to logstash The suggestion to use rabbitmq is good as well. On Wed, Mar 28, 2018, 2:23 PM Daniel Guerra > wrote: I would use json to stdout with a python script to insert it in elasticsearch. I think its the most efficient and stable method. The latest elasticsearch needs separate index for the different log types. There is a bro-pkg for json to stdout. Op 28/03/2018 om 18:52 schreef erik clark: I am trying to ingest bro 2.5 json logs into an elk stack, using filebeat to push the logs. Is that even the best way to do this? I have found MUCH outdated material on ingesting bro logs into an elk stack, but very little that is up to date, and some of which is up to date but is using older versions of software from elastic.co. 
If anyone has a modern bro/elk integration document they use(d) to set their environment up, it would be greatly appreciated if you could share. Thanks! Erik _______________________________________________ Bro mailing list bro at bro-ids.org http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro _______________________________________________ Bro mailing list bro at bro-ids.org http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180328/72b74ca1/attachment.html From patrick.kelley at criticalpathsecurity.com Wed Mar 28 12:25:37 2018 From: patrick.kelley at criticalpathsecurity.com (Patrick Kelley) Date: Wed, 28 Mar 2018 15:25:37 -0400 Subject: [Bro] filebeat +elk In-Reply-To: <8dcdea397a2a41baae962800f69e31cb@MB1.byu.local> References: <25f3322c-9ef0-30ba-a053-0a742391e0c9@gmail.com> <8dcdea397a2a41baae962800f69e31cb@MB1.byu.local> Message-ID: I've had some issues as you described with Logstash. About the same EPS. I moved away from Filebeat some time ago. Unrelated issues. Kafka has worked quite well. I recommend the Apache Metron. https://metron.apache.org/current-book/metron-sensors/bro-plugin-kafka/index.html bro -N should output the following: Apache::Kafka (dynamic, version 0.2) On Wed, Mar 28, 2018 at 2:59 PM, Blake Moss wrote: > On this subject, We?ve had issues with both filebeats and logstash reading > logs (written to files) once events per second reaches upwards of 3k. We > are currently looking into using the bro kafka plugin. Has anyone else had > issues with logstash or filebeats bottlenecking? > > > > *From: *craig bowser > *Sent: *Wednesday, March 28, 2018 12:44 PM > *To: *Daniel Guerra > *Cc: *bro at bro.org > *Subject: *Re: [Bro] filebeat +elk > > > So at job I was using logstash on bro and reading each file, parsing and > enhancing the data then sending to elasticsearch. 
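[The Kafka plugin Patrick mentions is configured from local.bro; a sketch along the lines of the Metron plugin's documentation, with the topic name and broker address as placeholders to adapt:

```bro
@load Apache/Kafka/logs-to-kafka.bro

# choose which log streams are shipped, and where
redef Kafka::logs_to_send = set(Conn::LOG, HTTP::LOG, DNS::LOG);
redef Kafka::topic_name = "bro";
redef Kafka::kafka_conf = table(["metadata.broker.list"] = "localhost:9092");
```
]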
But then that was talking > too many resources from bro, do not I'm using filebeat to send each log to > a logstash server which parses, enhances and sends to elasticsearch. > > At home I'm using syslog-ng to send bro logs to logstash > > The suggestion to use rabbitmq is good as well. > > On Wed, Mar 28, 2018, 2:23 PM Daniel Guerra > wrote: > >> I would use json to stdout with a python script to >> >> insert it in elasticsearch. I think its the most efficient >> >> and stable method. The latest elasticsearch needs >> >> separate index for the different log types. >> >> There is a bro-pkg for json to stdout. >> >> >> >> >> Op 28/03/2018 om 18:52 schreef erik clark: >> >> I am trying to ingest bro 2.5 json logs into an elk stack, using filebeat >> to push the logs. Is that even the best way to do this? I have found MUCH >> outdated material on ingesting bro logs into an elk stack, but very little >> that is up to date, and some of which is up to date but is using older >> versions of software from elastic.co. If anyone has a modern bro/elk >> integration document they use(d) to set their environment up, it would be >> greatly appreciated if you could share. Thanks! >> >> Erik >> >> >> _______________________________________________ >> Bro mailing listbro at bro-ids.orghttp://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >> >> >> _______________________________________________ >> Bro mailing list >> bro at bro-ids.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -- *Patrick Kelley, CISSP, C|EH, ITIL* *CTO* patrick.kelley at criticalpathsecurity.com (o) 770-224-6482 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180328/6e90ad3a/attachment.html From perry29 at llnl.gov Wed Mar 28 13:42:01 2018 From: perry29 at llnl.gov (Perry, David) Date: Wed, 28 Mar 2018 20:42:01 +0000 Subject: [Bro] filebeat +elk In-Reply-To: <8dcdea397a2a41baae962800f69e31cb@MB1.byu.local> References: <25f3322c-9ef0-30ba-a053-0a742391e0c9@gmail.com> <8dcdea397a2a41baae962800f69e31cb@MB1.byu.local> Message-ID: I had logstash reading Bro http log and then turned on DNS lookup in logstash. It quickly got overloaded. I turned off DNS in logstash and had no more issues of that sort. Logstash geoip-lite is able to keep up. I am not using JSON files, btw. David On Mar 28, 2018, at 11:59 AM, Blake Moss > wrote: On this subject, We?ve had issues with both filebeats and logstash reading logs (written to files) once events per second reaches upwards of 3k. We are currently looking into using the bro kafka plugin. Has anyone else had issues with logstash or filebeats bottlenecking? From: craig bowser Sent: Wednesday, March 28, 2018 12:44 PM To: Daniel Guerra Cc: bro at bro.org Subject: Re: [Bro] filebeat +elk So at job I was using logstash on bro and reading each file, parsing and enhancing the data then sending to elasticsearch. But then that was talking too many resources from bro, do not I'm using filebeat to send each log to a logstash server which parses, enhances and sends to elasticsearch. At home I'm using syslog-ng to send bro logs to logstash The suggestion to use rabbitmq is good as well. On Wed, Mar 28, 2018, 2:23 PM Daniel Guerra > wrote: I would use json to stdout with a python script to insert it in elasticsearch. I think its the most efficient and stable method. The latest elasticsearch needs separate index for the different log types. There is a bro-pkg for json to stdout. Op 28/03/2018 om 18:52 schreef erik clark: I am trying to ingest bro 2.5 json logs into an elk stack, using filebeat to push the logs. 
Is that even the best way to do this? I have found MUCH outdated material on ingesting bro logs into an elk stack, but very little that is up to date, and some of which is up to date but is using older versions of software from elastic.co. If anyone has a modern bro/elk integration document they use(d) to set their environment up, it would be greatly appreciated if you could share. Thanks! Erik _______________________________________________ Bro mailing list bro at bro-ids.org http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro _______________________________________________ Bro mailing list bro at bro-ids.org http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro _______________________________________________ Bro mailing list bro at bro-ids.org http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180328/97020655/attachment-0001.html From promero at cenic.org Wed Mar 28 13:57:54 2018 From: promero at cenic.org (Philip Romero) Date: Wed, 28 Mar 2018 13:57:54 -0700 Subject: [Bro] filebeat +elk In-Reply-To: References: Message-ID: <691ae041-3979-0948-06dd-6f08ab0907eb@cenic.org> Erik, We are using filebeat to feed our bro 2.5.3 logs into logstash for a small 5 node elastic stack cluster. We're running elastic 6.0.x currently and are in the process of upgrading to 6.2. This is just a starting point for us and it seems to be working well. We're not doing any json output from bro, but the native file format with logstash-side processing is working fine. Below are the files I'm currently feeding into elastic. 
//logs/current/capture_loss.log //logs/current/conn.log //logs/current/dns.log //logs/current/files.log //logs/current/ftp.log //logs/current/http.log //logs/current/intel.log //logs/current/notice.log //logs/current/radius.log //logs/current/smb_files.log //logs/current/smb_mapping.log //logs/current/smtp.log //logs/current/software.log //logs/current/ssh.log On 3/28/18 1:42 PM, bro-request at bro.org wrote: > On 28/03/2018 at 18:52, erik clark wrote: >>> I am trying to ingest bro 2.5 json logs into an elk stack, using filebeat >>> to push the logs. Is that even the best way to do this? I have found MUCH >>> outdated material on ingesting bro logs into an elk stack, but very little >>> that is up to date, and some of which is up to date but is using older >>> versions of software from elastic.co. If anyone has a modern bro/elk >>> integration document they use(d) to set their environment up, it would be >>> greatly appreciated if you could share. Thanks! >>> >>> Erik >>> -- Philip Romero, CISSP, CISA Sr. Information Security Analyst CENIC promero at cenic.org Phone: (714) 220-3430 Mobile: (562) 237-9290 -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180328/790be1a0/attachment.html From perry29 at llnl.gov Wed Mar 28 15:07:51 2018 From: perry29 at llnl.gov (Perry, David) Date: Wed, 28 Mar 2018 22:07:51 +0000 Subject: [Bro] filebeat +elk In-Reply-To: References: <25f3322c-9ef0-30ba-a053-0a742391e0c9@gmail.com> <8dcdea397a2a41baae962800f69e31cb@MB1.byu.local> Message-ID: <2C57DA6E-A279-4F35-A77C-FA99952F51B1@llnl.gov> One unresolved issue is that my http data often has a uri field that does not get parsed correctly by logstash. If the uri (or any field) contains " (quote) characters, this causes a _csvparsefailure in logstash. 
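A stray quote inside an unquoted field is exactly what trips strict CSV parsers: logstash's csv filter sits on Ruby's CSV library, which raises on it. One workaround is to tell the parser there is no quote character at all. The sketch below shows the idea in Python rather than logstash, with a made-up, abbreviated tab-separated http.log line (Bro logs are tab-delimited); field values and names here are illustrative only:

```python
import csv
import io

# Made-up, truncated Bro http.log line (tab-separated). The uri field
# contains a stray double quote, which a strict CSV parser would treat
# as an opening quote and then choke on.
raw = 'CExample1\t10.0.0.1\t198.51.100.7\t/search?q="unbalanced\t200\n'

# Declaring that no quote character exists (QUOTE_NONE) makes `"` an
# ordinary byte, so the line splits cleanly on tabs.
reader = csv.reader(io.StringIO(raw), delimiter='\t', quoting=csv.QUOTE_NONE)
fields = next(reader)

print(len(fields))   # 5
print(fields[3])     # /search?q="unbalanced
```

In logstash the analogous move is a mutate/gsub that replaces `"` in the message before the csv filter runs, which matches the suggestion David got on discuss.elastic.co; switching Bro to JSON output sidesteps the problem entirely.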
I posed the problem to discuss.elastic.co, but got no solution, just the suggestion to replace the quote characters with something else, and hints elsewhere that the problem might lie deep in the Ruby libraries used by logstash. Search for "CSV Filter - Quote character causing _csvparsefailure" to see the interactions. David On Mar 28, 2018, at 1:42 PM, Perry, David > wrote: I had logstash reading the Bro http log and then turned on DNS lookup in logstash. It quickly got overloaded. I turned off DNS in logstash and had no more issues of that sort. Logstash geoip-lite is able to keep up. I am not using JSON files, btw. David On Mar 28, 2018, at 11:59 AM, Blake Moss > wrote: On this subject, We've had issues with both filebeats and logstash reading logs (written to files) once events per second reaches upwards of 3k. We are currently looking into using the bro kafka plugin. Has anyone else had issues with logstash or filebeats bottlenecking? From: craig bowser Sent: Wednesday, March 28, 2018 12:44 PM To: Daniel Guerra Cc: bro at bro.org Subject: Re: [Bro] filebeat +elk So at work I was using logstash on bro and reading each file, parsing and enhancing the data then sending to elasticsearch. But then that was taking too many resources from bro, so now I'm using filebeat to send each log to a logstash server which parses, enhances and sends to elasticsearch. At home I'm using syslog-ng to send bro logs to logstash The suggestion to use rabbitmq is good as well. On Wed, Mar 28, 2018, 2:23 PM Daniel Guerra > wrote: I would use json to stdout with a python script to insert it in elasticsearch. I think it's the most efficient and stable method. The latest elasticsearch needs a separate index for the different log types. There is a bro-pkg for json to stdout. On 28/03/2018 at 18:52, erik clark wrote: I am trying to ingest bro 2.5 json logs into an elk stack, using filebeat to push the logs. Is that even the best way to do this? 
I have found MUCH outdated material on ingesting bro logs into an elk stack, but very little that is up to date, and some of which is up to date but is using older versions of software from elastic.co. If anyone has a modern bro/elk integration document they use(d) to set their environment up, it would be greatly appreciated if you could share. Thanks! Erik _______________________________________________ Bro mailing list bro at bro-ids.org http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro _______________________________________________ Bro mailing list bro at bro-ids.org http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro _______________________________________________ Bro mailing list bro at bro-ids.org http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro _______________________________________________ Bro mailing list bro at bro-ids.org http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180328/e67ebc83/attachment-0001.html From philosnef at gmail.com Thu Mar 29 06:24:41 2018 From: philosnef at gmail.com (erik clark) Date: Thu, 29 Mar 2018 09:24:41 -0400 Subject: [Bro] filebeat +elk In-Reply-To: <17799AA1-4F36-4130-82A3-C82C983E465E@gmail.com> References: <17799AA1-4F36-4130-82A3-C82C983E465E@gmail.com> Message-ID: I ended up using logstash, rsyslog, es, and kibana. Next up, using Yelp's elastalert! Thank you all for your assistance! On Wed, Mar 28, 2018 at 2:45 PM, Michał Purzyński < michalpurzynski1 at gmail.com> wrote: > [image: image1.jpeg] > > On Mar 28, 2018, at 11:23 AM, Zeolla at GMail.com wrote: > > No guarantees, but this[1] may be helpful. I've recently moved to pushing > things to kafka using this[2], which eventually feeds into ES using Apache > Metron which adds some other benefits but is meant for large scale > environments (i.e. it is definitely _not_ lightweight). 
> > 1: https://github.com/bro/bro-plugins/tree/00d039442b97ba545e6020200d96a3cba9d9181b/elasticsearch > 2: https://github.com/apache/metron-bro-plugin-kafka > > Jon > > On Wed, Mar 28, 2018 at 2:21 PM erik clark wrote: > >> I just need to get it into ES. I am going to pump eve.json in as well. I >> have no experience with the ELK stack at all, other than some ES work from >> dealing with moloch content going in there and configuring it appropriately. >> If I can just bypass everything and push eve.json and bro json logs >> directly in, that would be fantastic. >> >> Thanks Jon! >> >> On Wed, Mar 28, 2018 at 1:09 PM, Zeolla at GMail.com >> wrote: >> >>> Do you specifically need to send it to logstash or do you just need it >>> to get inserted into elasticsearch? >>> >>> Jon >>> >>> On Wed, Mar 28, 2018 at 1:07 PM erik clark wrote: >>> >>>> I am trying to ingest bro 2.5 json logs into an elk stack, using >>>> filebeat to push the logs. Is that even the best way to do this? I have >>>> found MUCH outdated material on ingesting bro logs into an elk stack, but >>>> very little that is up to date, and some of which is up to date but is >>>> using older versions of software from elastic.co. If anyone has a >>>> modern bro/elk integration document they use(d) to set their environment >>>> up, it would be greatly appreciated if you could share. Thanks! >>>> >>>> Erik >>>> _______________________________________________ >>>> Bro mailing list >>>> bro at bro-ids.org >>>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >>> >>> -- >>> >>> Jon >>> >> >> -- > > Jon > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180329/5cd1acea/attachment-0001.html -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image1.jpeg Type: image/jpeg Size: 64259 bytes Desc: not available Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180329/5cd1acea/attachment-0001.jpeg From pssunu6 at gmail.com Thu Mar 29 08:56:06 2018 From: pssunu6 at gmail.com (ps sunu) Date: Thu, 29 Mar 2018 21:26:06 +0530 Subject: [Bro] jzeolla-metron-bro-plugin-kafka Message-ID: Hi, does jzeolla-metron-bro-plugin-kafka support SSL keys? I am not finding any example for this https://github.com/JonZeolla/jzeolla-metron-bro-plugin-kafka Regards, Sunu -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180329/810705e4/attachment.html From zeolla at gmail.com Thu Mar 29 16:44:01 2018 From: zeolla at gmail.com (Zeolla@GMail.com) Date: Thu, 29 Mar 2018 23:44:01 +0000 Subject: [Bro] jzeolla-metron-bro-plugin-kafka In-Reply-To: References: Message-ID: Please see my response in https://github.com/JonZeolla/jzeolla-metron-bro-plugin-kafka/issues/3 The summary is: in theory yes, but I've never tested it. It uses librdkafka underneath, which has ssl support. See https://github.com/edenhill/librdkafka/wiki/Using-SSL-with-librdkafka and https://github.com/apache/metron-bro-plugin-kafka/blob/master/README.md Any configuration value accepted by librdkafka can be added to the kafka_conf configuration table. Jon On Thu, Mar 29, 2018, 12:05 ps sunu wrote: > Hi, > does jzeolla-metron-bro-plugin-kafka support SSL keys? > I am not finding any example for this > > https://github.com/JonZeolla/jzeolla-metron-bro-plugin-kafka > > > Regards, > Sunu > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -- Jon -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180329/568a319b/attachment.html From gordonjamesr at gmail.com Sat Mar 31 15:14:07 2018 From: gordonjamesr at gmail.com (James Gordon) Date: Sat, 31 Mar 2018 18:14:07 -0400 Subject: [Bro] Intel::FILE_NAME and SMB_files behavior questions Message-ID: Hey everyone, I have a few questions on behavioral issues with the intel framework and SMB / SMB file logging: 1. I'm not sure if this is expected behavior or not, but it doesn't look like filenames parsed in smb_files.log are properly being logged in files.log. We had a red team exercise recently where our red team was able to successfully retrieve the ntds.dit file off of one of our domain controllers. This transfer occurred over SMB, so I figured we could add ntds.dit to the Intel framework so that next time we don't have to dig in logs to find out that our domain is owned -- we'll have a handy alert to tell us :) I did some testing with this though, and while I see "ntds.dit" logged clearly in the name field in smb_files.log, I don't have a corresponding entry in files.log for this file transfer, and therefore no Intel match. What makes this weirder is I have other irrelevant files from this connection logged in files.log, that I didn't actually touch or move during this connection: bro at SObro:/nsm/bro/logs/current$ cat /opt/bro/share/bro/intel/intel.dat | grep ntds.dit ntds.dit Intel::FILE_NAME domain ownage - update your resume! 
F bro at SObro:/nsm/bro/logs/2018-03-31$ zcat smb_files.16\:00\:00-17\:00\:00.log.gz | bro-cut uid id.orig_h id.resp_h id.resp_p action name | grep ntds.dit C35jBF1HlcrVNLiXW2 1.1.1.1 2.2.2.2 445 SMB::FILE_OPEN share path\\and more\\more\\my testing directory\\ntds.dit C35jBF1HlcrVNLiXW2 1.1.1.1 2.2.2.2 445 SMB::FILE_OPEN share path\\and more\\more\\my testing directory \\ntds.dit C35jBF1HlcrVNLiXW2 1.1.1.1 2.2.2.2 445 SMB::FILE_OPEN share path\\and more\\more\\my testing directory \\ntds.dit C35jBF1HlcrVNLiXW2 1.1.1.1 2.2.2.2 445 SMB::FILE_OPEN share path\\and more\\more\\my testing directory \\ntds.dit C35jBF1HlcrVNLiXW2 1.1.1.1 2.2.2.2 445 SMB::FILE_OPEN ntds.dit C35jBF1HlcrVNLiXW2 1.1.1.1 2.2.2.2 445 SMB::FILE_OPEN ntds.dit C35jBF1HlcrVNLiXW2 1.1.1.1 2.2.2.2 445 SMB::FILE_OPEN ntds.dit C35jBF1HlcrVNLiXW2 1.1.1.1 2.2.2.2 445 SMB::FILE_OPEN ntds.dit If I search for "ntds.dit" in files.log, I get nothing. If I search for the connection UID in files.log, there are some files logged -- but not the only file I actually transferred over this connection! bro at SObro:/nsm/bro/logs/2018-03-31$ zcat files.16\:00\:00-17\:00\:00.log.gz | bro-cut conn_uids tx_hosts rx_hosts source filename | grep C35jBF1HlcrVNLiXW2 C35jBF1HlcrVNLiXW2 1.1.1.1 2.2.2.2 SMB desktop.ini C35jBF1HlcrVNLiXW2 1.1.1.1 2.2.2.2 SMB share path\\and more\\more\\not my testing directory!? \\desktop.ini C35jBF1HlcrVNLiXW2 1.1.1.1 2.2.2.2 SMB share path\\and more\\more\\my testing directory \\random <> file that lives at this path.exe C35jBF1HlcrVNLiXW2 1.1.1.1 2.2.2.2 SMB desktop.ini C35jBF1HlcrVNLiXW2 1.1.1.1 2.2.2.2 SMB favorites\\desktop.ini C35jBF1HlcrVNLiXW2 1.1.1.1 2.2.2.2 SMB Random excel file that lives in my testing directory.xls C35jBF1HlcrVNLiXW2 1.1.1.1 2.2.2.2 SMB random executable that lives in my testing directory.exe Is there something wrong with my Bro instance? I feel like filenames from the smb_files "name" field should *all* be fed into files.log. 
I tested this with two different share paths and similar results -- everything gets logged as I would expect in smb_files.log but this filename never shows up in files.log. How can I reliably alert on file names transferred over SMB? 2. As part of the above red team exercise, I found (what I suspect) are some instances of Meterpreter being transferred from popped hosts back to the adversary system over SMB. These were logged in smb_files.log with names like: "Temp\\PBetVKZU.tmp" and "Temp\\FapcPatS.tmp". I don't think the Intel framework supports wildcards -- is there a way to alert on files transferred that match a regex such as "Temp\\[a-zA-Z]{8}.tmp", or even: "Temp\\*.tmp"? 3. Unrelated to the Intel framework - it seems like smb_files.log is super noisy. If I browse to a share drive, a massive amount of the contents of the share are enumerated in the smb_files log without taking any action (with the "action" field indicating SMB::FILE_OPEN). This feels like expected behavior in SMB. Is there any way to "filter" the log to only log files that are actually opened, written to, moved, deleted, or had any real operation occur against them? We're running Bro 2.5.3 in Security Onion (Ubuntu 14.04). The intel framework is loaded and successfully fires on other indicators we have running. Thanks! James Gordon -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180331/a70047a7/attachment.html From email4myth at gmail.com Sat Mar 31 19:43:50 2018 From: email4myth at gmail.com (Myth Ren) Date: Sun, 1 Apr 2018 10:43:50 +0800 Subject: [Bro] [BRO-ISSUE]: bro crash when so many Reporter::Error calls In-Reply-To: <56CD4701-71F3-46C7-8B04-41DD7A1F3D9A@illinois.edu> References: <56CD4701-71F3-46C7-8B04-41DD7A1F3D9A@illinois.edu> Message-ID: Hi Justin, currently I have a Bro code snippet that stably reproduces the crash. 
(no Reporter logs go to Kafka; the default filter is explicitly removed in `bro_init`) I recorded the backtrace and thread info, along with the snippet, in this gist: https://gist.github.com/MythRen/3d77111fb810cac941c48311dc273289. Steps to reproduce: 0. prepare an environment with bro 2.5.1 installed, with the kafka plugin 1. upload the `test.crash.bro` to your server 2. run `$BRO_BIN/bro -i INTERFACE -C test.crash.bro` 3. open another terminal and log in to your server 4. use `ab` to generate more traffic: ab -n 100000000 -c 10 'https://www.google.com' (better with a local http server; make sure the traffic goes out through the INTERFACE from step 2) 5. wait for the crash -- usually within seconds, in my test case. I think the problem is the unsynchronized access to `head` and `tail` between `Event.cc#86-91` and `Event.cc#118-119` from multiple threads, but I don't know what side effects adding a lock here would have. I hope you and others can help. Best regards, Myth 2018-01-26 0:46 GMT+08:00 Azoff, Justin S >: > > > On Jan 25, 2018, at 11:18 AM, Myth Ren wrote: > > > > then KafkaWriter calls Reporter::Error to report the runtime error. > > This would be a problem if bro is configured to send the reporter.log to > Kafka. > > Reporter::Error generates a reporter_error event which then calls > > Log::write(Reporter::LOG, [$ts=t, $level=ERROR, $message=msg, > $location=location]); > > So you're probably also ending up in the situation where bro is trying to > log to Kafka the fact that Kafka is broken. > > If you tell bro to not send the reporter.log to Kafka, does your problem > go away? > > -- > Justin Azoff > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180401/470d35fc/attachment-0001.html
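Whether a lock belongs at exactly those lines of Event.cc is the open question Myth raises, but the invariant such a lock would enforce can be sketched: every read or write of the shared `head`/`tail` pointers happens under one mutex, so an enqueuer and a dequeuer never see each other's half-updated linked-list state. The sketch below is Python rather than Bro's C++ event queue, and the class and names are illustrative only, not Bro's:

```python
import threading

class LinkedQueue:
    """A singly linked queue whose head/tail updates are guarded by one lock.

    Mirrors the general fix for a race between enqueue and dequeue paths:
    both operations take the same mutex before touching head or tail.
    """

    class _Node:
        __slots__ = ("value", "next")
        def __init__(self, value):
            self.value = value
            self.next = None

    def __init__(self):
        self._lock = threading.Lock()
        self._head = None
        self._tail = None

    def enqueue(self, value):
        node = self._Node(value)
        with self._lock:            # without the lock, two writers can both
            if self._tail is None:  # see tail is None and lose an element
                self._head = node
            else:
                self._tail.next = node
            self._tail = node

    def dequeue(self):
        with self._lock:
            if self._head is None:
                return None
            node = self._head
            self._head = node.next
            if self._head is None:  # queue drained: tail must be reset too
                self._tail = None
            return node.value


# Hammer the queue from several producer threads; with the lock in place,
# every enqueued item must survive to be drained afterwards.
q = LinkedQueue()
threads = [
    threading.Thread(
        target=lambda base=i: [q.enqueue(base * 1000 + j) for j in range(1000)]
    )
    for i in range(4)
]
for t in threads:
    t.start()
for t in threads:
    t.join()

drained = []
while True:
    v = q.dequeue()
    if v is None:
        break
    drained.append(v)
print(len(drained))  # 4000
```

The sketch only shows the pattern; whether taking a mutex on Bro's hot event path is acceptable for performance is a separate question for the developers.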