From zaixer at gmail.com Sun Jan 1 12:29:14 2017 From: zaixer at gmail.com (M A) Date: Sun, 1 Jan 2017 23:29:14 +0300 Subject: [Bro] "to_string" ? Message-ID: Hello, I am creating a simple script to plot specific fields for different protocols counted and sorted. Your suggestions and feedback will be highly appreciated. Its just a prototype for basic HTTP fields, but I am planning to include DNS,SMB,SMTP and SSL. You can find the script here: https://github.com/eaam/Bro/blob/master/dissect.bro On a side note, I am stuck upon a situation where I wanted to handle all incoming data as strings regardless of the original field type. (For example, I would like to treat HTTP STATUS CODE as a string and not count, the same for IP, Ports...etc). however, I could not find something like "to_string" function here https://www.bro.org/sphinx/scripts/base/bif/bro.bif.bro.html . to_addr : function Converts a string to an addr . to_count : function Converts a string to a count . to_double : function Converts a string to a double . to_int : function Converts a string to an int . to_port : function Converts a string to a port . to_subnet : function Converts a string to a subnet . Am I missing something ? Thanks in advance Moh -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170101/7fc1ffa8/attachment.html From dopheide at gmail.com Sun Jan 1 13:27:07 2017 From: dopheide at gmail.com (Mike Dopheide) Date: Sun, 1 Jan 2017 15:27:07 -0600 Subject: [Bro] "to_string" ? In-Reply-To: References: Message-ID: You should be able to just use fmt(). mystring = fmt("%d",status_code); Dop On Sunday, January 1, 2017, M A wrote: > > Hello, > > I am creating a simple script to plot specific fields for different > protocols counted and sorted. > > Your suggestions and feedback will be highly appreciated. Its just a > prototype for basic HTTP fields, but I am planning to include DNS,SMB,SMTP > and SSL. > > You can find the script here: https://github.com/eaam/ > Bro/blob/master/dissect.bro > > > On a side note, I am stuck upon a situation where I wanted to handle all > incoming data as strings regardless of the original field type. (For > example, I would like to treat HTTP STATUS CODE as a string and not count, > the same for IP, Ports...etc). however, I could not find something like > "to_string" function here > > https://www.bro.org/sphinx/scripts/base/bif/bro.bif.bro.html . > > to_addr > > : function > Converts > a string > to > an addr > . > to_count > > : function > Converts > a string > to a > count . > to_double > > : function > Converts > a string > to a > double > . > to_int > : > function > Converts > a string > to > an int . > to_port > > : function > Converts > a string > to a > port . > to_subnet > > : function > Converts > a string > to a > subnet > . > Am I missing something ? > > Thanks in advance > Moh > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170101/8212505a/attachment-0001.html From johanna at icir.org Mon Jan 2 02:19:02 2017 From: johanna at icir.org (Johanna Amann) Date: Mon, 2 Jan 2017 11:19:02 +0100 Subject: [Bro] "to_string" ? In-Reply-To: References: Message-ID: <20170102101902.ecnlhotl6l25ue2c@Beezling.fritz.box> And for one more alternative, which is used quite extensively in the Bro base scripts - the cat function can convert basically anything into strings. 
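For example, a quick sketch of both approaches with made-up values (the status code and address are just placeholders):

event bro_init()
	{
	# Placeholder values, only to show the conversions.
	local code: count = 404;
	local client: addr = 192.168.1.10;

	print fmt("%d", code);   # "404" via fmt()
	print cat(client);       # "192.168.1.10" via cat()
	}
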
https://www.bro.org/sphinx/scripts/base/bif/bro.bif.bro.html#id-cat Johanna On Sun, Jan 01, 2017 at 03:27:07PM -0600, Mike Dopheide wrote: > You should be able to just use fmt(). > > mystring = fmt("%d",status_code); > > Dop > > > > On Sunday, January 1, 2017, M A wrote: > > > > > Hello, > > > > I am creating a simple script to plot specific fields for different > > protocols counted and sorted. > > > > Your suggestions and feedback will be highly appreciated. Its just a > > prototype for basic HTTP fields, but I am planning to include DNS,SMB,SMTP > > and SSL. > > > > You can find the script here: https://github.com/eaam/ > > Bro/blob/master/dissect.bro > > > > > > On a side note, I am stuck upon a situation where I wanted to handle all > > incoming data as strings regardless of the original field type. (For > > example, I would like to treat HTTP STATUS CODE as a string and not count, > > the same for IP, Ports...etc). however, I could not find something like > > "to_string" function here > > > > https://www.bro.org/sphinx/scripts/base/bif/bro.bif.bro.html . > > > > to_addr > > > > : function > > Converts > > a string > > to > > an addr > > . > > to_count > > > > : function > > Converts > > a string > > to a > > count . > > to_double > > > > : function > > Converts > > a string > > to a > > double > > . > > to_int > > : > > function > > Converts > > a string > > to > > an int . > > to_port > > > > : function > > Converts > > a string > > to a > > port . > > to_subnet > > > > : function > > Converts > > a string > > to a > > subnet > > . > > Am I missing something ? > > > > Thanks in advance > > Moh > > > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From siberkartal at gmail.com Mon Jan 2 11:58:30 2017 From: siberkartal at gmail.com (=?UTF-8?B?QmV5YXogxZ5hcGth?=) Date: Mon, 2 Jan 2017 21:58:30 +0200 Subject: [Bro] Custom log file Message-ID: Hi all, I want to generate custom log files. For example, the columns of one log file should be like the following. ts,uid,fuid,geo_location,idresp_h,idresp_p,method,status_code,trans_depth,response_body_len,mime_type,host,uri,referrer,source,filename,md5,sha1,extracted,flash_version. Those fields are found in files.log and http.log. To get those values I need connection, fa_file and fa_metadata records. More precisely, I need connection for resp_h and resp_p through conn_id record (c$id$resp_h and c$id$resp_p). I need connection for ts, uid, trans_depth, method, status_code, response_body_len, host, uri, referrer, _flash_version through HTTP::Info record. I need connection for geo location through lookup_location(resp_h). I need fa_metadata for mime_type, since I extract only particular mime types and also I build the filename that is going to be extracted. I need fa_file for fuid, source, filename, md5, sha1, and extracted through Files::Info record. connection and fa_file records are accessible in event file_over_new_connection. However, at this phase md5, sha1, extracted, and response_body_len values are not present. For this reason, I used HTTP::log_http and Files::log_files events. I can get all values from that events except resp_h and resp_p. I use file_over_new_connection for that since I also extract geo location from here. But in this case, due to the nature for network traffic that is not synchronic, resp_h and geo location values in my custom log file is erroneous. How do you do that? 
I read related parts of the documentation and source codes of the bro and alse reviewed the Bro archive for last one year. I keep the post short in order not to be wordy for now. I can send my script also. Any help is quite appriciated, Thanks, -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170102/4d796e13/attachment.html From johanna at icir.org Tue Jan 3 02:35:19 2017 From: johanna at icir.org (Johanna Amann) Date: Tue, 3 Jan 2017 11:35:19 +0100 Subject: [Bro] Syntax and Semantics message Broccoli In-Reply-To: References: Message-ID: <20170103103516.2bt6xtkxvf4lzzwz@Beezling.fritz.box> Hi Alberto, the messages that are exchanged are basically Bro events. So you can send/receive any kind of data that you can usually send/receive in Bro events (with a small number of exceptions that you will probably not care about - I don't think that broccoli supports opaques, e.g.). The best way get started at this is probably to look at the test folder in broccoli, which has a few C programs and Bro scripts that exchange data. Note that there also is a newer communication library (named Broker) that can be used to do basically the same job - the API is not quite finished yet though, but at some point of time you might have to port things over to it. Johanna On Tue, Dec 13, 2016 at 11:50:10PM +0100, Alberto Ciolini wrote: > I have to develop a program, using Broccoli, that communicates with Bro > which installed on raspberry pi. My question is if there is a standard for > syntax and semantics for message that Broccoli and Bro send each other. > How messages are sent from broccoli to bro? > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From johanna at icir.org Tue Jan 3 02:45:33 2017 From: johanna at icir.org (Johanna Amann) Date: Tue, 3 Jan 2017 11:45:33 +0100 Subject: [Bro] deep cluster documentation & status In-Reply-To: References: Message-ID: <20170103104533.eyf7ire2eag6k2nx@Beezling.fritz.box> Hello Eric, > Is there any additional documentation on the deep cluster as noted here: > > https://www.bro.org/development/projects/deep-cluster.html There is not as far as I know. Also - note that this is a project description and there is no guarantee that anything that is described in there is working or will work like that in the future. It also is not anywhere close to done as far as I know. The best person to contact with questions about this is probably Mathias Fischer (mfischer at informatik.uni-hamburg.de). Johanna > I would like to contribute to this, but the status of this project is > unclear from the documentation, and there are some requirements that need > to be laid out in Bro itself to make this work, such as logging the > hostname associated with a given worker node in every log file in order to > track node health. > > The @stats option gives you incremental information for all node types, > BUT, that is all it does. Determining from incremental counters when Bro > fails or loses capture through a network connectivity issue becomes > impossible when all the data in the logger node is intermingled. Having the > hostname in all the logs means you can simply track the event count rate > (non-incremental) in your visualization tool of choice, like ELK or Splunk. 
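
(A minimal sketch of that per-node tagging idea, for one stream: assuming a broctl-managed cluster where Cluster::node is populated, the worker name can be added to conn.log as below, and the same redef pattern repeated for other log streams.)

@load base/frameworks/cluster
@load base/protocols/conn

# Sketch: record which cluster node produced each conn.log entry.
redef record Conn::Info += {
	node: string &log &optional;
};

event connection_state_remove(c: connection) &priority=-4
	{
	# Runs after the conn record is populated (priority 5) and just
	# before it is written to conn.log (priority -5).
	if ( c?$conn )
		c$conn$node = Cluster::node;
	}
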
> _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From johanna at icir.org Tue Jan 3 02:46:03 2017 From: johanna at icir.org (Johanna Amann) Date: Tue, 3 Jan 2017 11:46:03 +0100 Subject: [Bro] Smart Phone!! In-Reply-To: References: Message-ID: <20170103104603.h7x55um774qnxt5s@Beezling.fritz.box> On Thu, Dec 15, 2016 at 07:45:47PM +0200, abdulrahman musallam wrote: > Is there any smart phone application that supports bro(shows network > statistics, notifications on detection)?? No, I am not aware of anything like that. Johanna From johanna at icir.org Tue Jan 3 02:48:23 2017 From: johanna at icir.org (Johanna Amann) Date: Tue, 3 Jan 2017 11:48:23 +0100 Subject: [Bro] specific logging per worker In-Reply-To: References: Message-ID: <20170103104823.u7z46mltyuxldpth@Beezling.fritz.box> On Fri, Dec 16, 2016 at 02:09:09PM +1100, John Edwards wrote: > Hi all, > > If i have a cluster that contains 2 workers among a proxy and logger etc, > Worker 1 watches and logs everything, Is there a way i can tell worker 2 to > only log a specific protocol and not watch everything the Worker 1? You can add worker-specific configuration to local.bro using the @if directive. For example something like... @if ( Cluster::node == "worker-1" ) # things here will only be executed on node named worker-1 @endif That being said - why exactly do you want to do that? In a traditional cluster setting, the traffic is split eavenly among the workers and you typically want everyone to perform exactly the same actions. Johanna From johanna at icir.org Tue Jan 3 02:52:35 2017 From: johanna at icir.org (Johanna Amann) Date: Tue, 3 Jan 2017 11:52:35 +0100 Subject: [Bro] Detection of backdoors with Bro. In-Reply-To: References: Message-ID: <20170103105235.fj6rusudplngb4zp@Beezling.fritz.box> Hello Luca, > I noticed that the bro script Backdoor.bro has been deprecated with Bro > 2.5. You are right, the backdoor analyzer has been deprecated (note - not backdoor.bro, that also existed and was removed after 1.5). > So,what is now the script or group of scripts (or method) used to deal > with this kind of problem. As a use Bro mainly to read tcpdump pcaps of my > desktop Internet/browser sessions and malware installed this way is a > concern. Are you actually using the functionality that the backdoor analyzer provides? As far as I am aware, it has not been active by default in any recent version of Bro - you always needed to activate it yourself - and has not seen any active maintenance in a while. If you have been using this in practice, and it has been useful to you, I would actually be interested in hearing about it. In any case - you should always be able to use the current version of it and compile it as a module, in case it will be removed in a future version of Bro. I hope this helps, Johanna From johanna at icir.org Tue Jan 3 03:07:46 2017 From: johanna at icir.org (Johanna Amann) Date: Tue, 3 Jan 2017 12:07:46 +0100 Subject: [Bro] Modification of bro source code In-Reply-To: References: Message-ID: <20170103110746.k3arhdk2gpcjkh22@Beezling.fritz.box> Hello Yagyesh, what exactly do you want to debug? Something in scriptland, or something that is deeper in the c++ source code? For Bro scripts - there is the Bro debugger (see talk at BroCon 2016; slides at https://www.bro.org/brocon2016/slides/grigorescu_debugger.pdf). 
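
For instance (the trace and script names below are placeholders), the script debugger can be started from the command line with:

	bro -d -r trace.pcap myscript.bro
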
If you want to interact with the actual C++ code - yup, you are right, this will require a bit of knowledge of the internals of Bro. You basically have lldb there, as well as adding debug output to the Bro source code (which you need to change for that); there is not really much more there. Johanna On Thu, Dec 22, 2016 at 06:28:27PM -0500, Yagyesh Srivastava wrote: > hi , > > Does anyone know how to debug in bro other than using lldb? > lldb just gives the frame variables of that particular frame, while making > modifications in bro source code, require knowing the values of data > members of some other class defined somewhere else. > One way i can think of is by individually going and checking every variable > when its getting populated, but that seems a tedious task considering the > multiple inheritance going on. > > Is there another way to debug? > Please let me know, thanks!! > > Regards, > Yagyesh > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From blackhole.em at gmail.com Tue Jan 3 05:53:02 2017 From: blackhole.em at gmail.com (Joe Blow) Date: Tue, 3 Jan 2017 08:53:02 -0500 Subject: [Bro] Smart Phone!! In-Reply-To: <20170103104603.h7x55um774qnxt5s@Beezling.fritz.box> References: <20170103104603.h7x55um774qnxt5s@Beezling.fritz.box> Message-ID: Why not just pump it into ES and stare at it with kibana dashboards? Cheers, JB On Tue, Jan 3, 2017 at 5:46 AM, Johanna Amann wrote: > On Thu, Dec 15, 2016 at 07:45:47PM +0200, abdulrahman musallam wrote: > > Is there any smart phone application that supports bro(shows network > > statistics, notifications on detection)?? > > No, I am not aware of anything like that. > > Johanna > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170103/4cb0b390/attachment.html From pkelley at hyperionavenue.com Tue Jan 3 04:12:09 2017 From: pkelley at hyperionavenue.com (Patrick Kelley) Date: Tue, 03 Jan 2017 07:12:09 -0500 Subject: [Bro] Smart Phone!! In-Reply-To: References: <20170103104603.h7x55um774qnxt5s@Beezling.fritz.box> Message-ID: That?s what I?m currently doing. Best approach I?ve found. Patrick Kelley, CISSP The limit to which you have accepted being comfortable is the limit to which you have grown. Accept new challenges as an opportunity to enrich yourself and not as a point of potential failure. From: on behalf of Joe Blow Date: Tuesday, January 3, 2017 at 8:53 AM To: Johanna Amann Cc: abdulrahman musallam , Bro-IDS Subject: Re: [Bro] Smart Phone!! Why not just pump it into ES and stare at it with kibana dashboards? Cheers, JB On Tue, Jan 3, 2017 at 5:46 AM, Johanna Amann wrote: > On Thu, Dec 15, 2016 at 07:45:47PM +0200, abdulrahman musallam wrote: >> > Is there any smart phone application that supports bro(shows network >> > statistics, notifications on detection)?? > > No, I am not aware of anything like that. > > Johanna > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro _______________________________________________ Bro mailing list bro at bro-ids.org http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170103/670b5aab/attachment-0001.html From blackhole.em at gmail.com Tue Jan 3 07:23:38 2017 From: blackhole.em at gmail.com (Joe Blow) Date: Tue, 03 Jan 2017 10:23:38 -0500 Subject: [Bro] Smart Phone!! In-Reply-To: Message-ID: <586bc1fd.12306b0a.a3004.1509@mx.google.com> An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170103/07da1144/attachment.html From jazoff at illinois.edu Tue Jan 3 07:26:16 2017 From: jazoff at illinois.edu (Azoff, Justin S) Date: Tue, 3 Jan 2017 15:26:16 +0000 Subject: [Bro] Custom log file In-Reply-To: References: Message-ID: <144591C2-C90E-4A80-8A93-C4268E811F4C@illinois.edu> > On Jan 2, 2017, at 2:58 PM, Beyaz ?apka wrote: > > For this reason, I used HTTP::log_http and Files::log_files events. > I can get all values from that events except resp_h and resp_p. > Oh? Those two fields are part of the `id` field in the HTTP::Info record. -- - Justin Azoff From siberkartal at gmail.com Tue Jan 3 08:26:17 2017 From: siberkartal at gmail.com (=?UTF-8?B?QmV5YXogxZ5hcGth?=) Date: Tue, 3 Jan 2017 19:26:17 +0300 Subject: [Bro] Fwd: Custom log file In-Reply-To: References: <144591C2-C90E-4A80-8A93-C4268E811F4C@illinois.edu> Message-ID: You are right. They are available also from there. But it is not the solution of the problem. Using both HTTP::log and Files::log_files makes inconsistency. Because events for all packets occurring concurrently. While I got tcp stream x in HTTP::log_http, I could get tcp stream 2 in Files::log_files. So, I left this approach and developed two scripts, but they have problems also, details are below. The data fields are extracted from the event file_state_remove(f: fa_file) in the following script. https://pastebin.mozilla.org/8958179 There is two problem in here. 1 request is missing in the output of the script, because it returns 302 redirection HTTP status code, I think. Alos, response_body_len is not available in here, since it will be available in connection_state_remove (c: connection). In the second script, The data fields are extracted from the event connection_state_remove(c: connection) in the following script. https://pastebin.mozilla.org/8958182 There are two problems in here. 5 requests are missing in the output of the script, because in one connection, there is multiple http responses, I think. Secondly, it is not possible to extract 5 fields, because they are only accessible via the f$info record. There is 11 requests in the sample pcap. http://wikisend.com/download/488246/test.pcap On Tue, Jan 3, 2017 at 6:26 PM, Azoff, Justin S wrote: > > > On Jan 2, 2017, at 2:58 PM, Beyaz ?apka wrote: > > > > For this reason, I used HTTP::log_http and Files::log_files events. > > I can get all values from that events except resp_h and resp_p. > > > > Oh? Those two fields are part of the `id` field in the HTTP::Info record. > > -- > - Justin Azoff > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170103/41779fb3/attachment.html From jdopheid at illinois.edu Tue Jan 3 09:13:13 2017 From: jdopheid at illinois.edu (Dopheide, Jeannette M) Date: Tue, 3 Jan 2017 17:13:13 +0000 Subject: [Bro] Bro4Pros 2017: One speaking spot left Message-ID: Update: we have one speaking spot left. If you?re attending Bro4Pros, don?t forget to submit your presentation proposal. More details can be found in the thread below. 
Thanks, Jeannette Dopheide ------ Jeannette Dopheide Training and Outreach Coordinator National Center for Supercomputing Applications University of Illinois at Urbana-Champaign On 12/16/16, 8:59 AM, "bro-bounces at bro.org on behalf of Dopheide, Jeannette M" wrote: Hello Bro Community, If you?re attending Bro4Pros 2017 on Feb. 2nd, consider submitting a CFP proposal. We?re looking for presentations that advanced users can apply to their day-to-day activities as a security professional. Send abstracts (max 500 words) to: info at bro.org Subject: Bro4Pros 2017 Call for Presentations Submission due date: January 6th, 2017 Every presentation is limited to 45 minutes including questions and discussion. Feel free to reach out to us if you have questions. https://www.bro.org/community/bro4pros2017.html Thanks, Jeannette Dopheide ------ Jeannette Dopheide Training and Outreach Coordinator National Center for Supercomputing Applications University of Illinois at Urbana-Champaign _______________________________________________ Bro mailing list bro at bro-ids.org http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From siberkartal at gmail.com Tue Jan 3 10:53:24 2017 From: siberkartal at gmail.com (=?UTF-8?B?QmV5YXogxZ5hcGth?=) Date: Tue, 3 Jan 2017 21:53:24 +0300 Subject: [Bro] Custom log file In-Reply-To: <144591C2-C90E-4A80-8A93-C4268E811F4C@illinois.edu> References: <144591C2-C90E-4A80-8A93-C4268E811F4C@illinois.edu> Message-ID: Hi Justin, HTTP::log_http and Files::log_files based approach is working now. https://pastebin.mozilla.org/8958232 But I came to that point with trial-and-error method. Here is the success story. I should build filename at the event file_over_new_connection . I should update filename with the extension in the file_sniff and call extract, md5, and sha1 analyzers in here. I do not know why I need to extract filename at the file_over_new_connection method, but not in file_sniff or something else. This script may work just for that sample, I need some guidance. Thanks, On Tue, Jan 3, 2017 at 6:26 PM, Azoff, Justin S wrote: > > > On Jan 2, 2017, at 2:58 PM, Beyaz ?apka wrote: > > > > For this reason, I used HTTP::log_http and Files::log_files events. > > I can get all values from that events except resp_h and resp_p. > > > > Oh? Those two fields are part of the `id` field in the HTTP::Info record. > > -- > - Justin Azoff > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170103/4bd5f798/attachment-0001.html From seth at icir.org Tue Jan 3 14:05:20 2017 From: seth at icir.org (Seth Hall) Date: Tue, 3 Jan 2017 17:05:20 -0500 Subject: [Bro] SHA256 Hash File Analyzer In-Reply-To: References: Message-ID: <8E16795F-7044-4B63-A47F-6A287F7B4246@icir.org> > On Dec 30, 2016, at 11:00 AM, Ryan Stillions wrote: > > any thoughts if this would be the same with 2.5 as it was when you originally posted? I didn't see anything specific about it in release notes, so would we be correct to assume the SHA256 analyzer would probably perform the same as what you saw back in Feb 16? I would expect to still see a similar performance hit from enabling SHA256, but I don't really know. Someone needs to test it. Justin's point about the performance of OpenSSL is most of the picture, but there is still some additional overhead due to having another analyzer attached to a file so I wouldn't go totally off of the openssl benchmark testing. 
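
If someone does want to test it, a minimal sketch for enabling it on every file - mirroring what hash-all-files does for MD5/SHA1 - would be:

@load base/frameworks/files

event file_new(f: fa_file)
	{
	Files::add_analyzer(f, Files::ANALYZER_SHA256);
	}

and then compare CPU usage with and without that script loaded.
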
.Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From fatema.bannatwala at gmail.com Tue Jan 3 14:12:27 2017 From: fatema.bannatwala at gmail.com (fatema bannatwala) Date: Tue, 3 Jan 2017 17:12:27 -0500 Subject: [Bro] Segmentation fault while using own signature. Message-ID: Hi all, So I have a case where if I use following regex in sig file, it works, but when I edit it and make it more strict I get segmentation fault in like 5 minutes after bro gets normally started: The working version: signature rootkit-potential { payload /.*[0-9\.]{7,15}\|[0-9]{1,5}.*/ event "Potential rootkit" tcp-state originator } signature rootkit-malware { payload /.*SSH-2\.5-OpenSSH_6\.1\.9.[0-9\.]{7,15}\|\d{1,5}.*/ event "rootkit malware" tcp-state originator } When I change regex to be more restrictive, Seg fault occurs: signature rootkit-potential { payload /.*(?:\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\|\d{1,5}).*/ event "Potential rootkit" tcp-state originator } signature rootkit-malware { payload /.*SSH-2\.5-OpenSSH_6\.1\.9.(?:\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\|\d{1,5}).*/ event "rootkit malware" tcp-state originator } Any idea what might be going wrong? Thanks, Fatema. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170103/d1f6209e/attachment.html From seth at icir.org Tue Jan 3 19:00:09 2017 From: seth at icir.org (Seth Hall) Date: Tue, 3 Jan 2017 22:00:09 -0500 Subject: [Bro] Bro 2.5 and log rotation In-Reply-To: <1482414587.2534.7.camel@slave-tothe-box.net> References: <1482414587.2534.7.camel@slave-tothe-box.net> Message-ID: I've seen this before when people are generating really huge logs and IO on their system goes crazy because the previous logs are still being compressed which runs into a downward spiral that it never recovers from. For those logs that you have which haven't been rotated as you expected, was there a gzip process running in the background? I suspect that you have a lot of gzip processes running and a very high system load. .Seth > On Dec 22, 2016, at 8:49 AM, James Lay wrote: > > I guess I'm in this boat as well. Since my upgrade, bro will stop rotating logs at some point. I'm not running bro via broctl. Here's my process for log rotation: > > local.bro: > redef Log::default_rotation_interval = 86400 secs; > redef Log::default_rotation_postprocessor_cmd = "archive-log"; > > broctl.cfg: > LogRotationInterval = 86400 > > sudo /usr/local/bro/bin/broctl install > > sudo ln -s /usr/local/bro/share/broctl/scripts/archive-log /usr/local/bin/ > sudo ln -s /usr/local/bro/share/broctl/scripts/broctl-config.sh /usr/local/bin/ > sudo ln -s /usr/local/bro/share/broctl/scripts/make-archive-name /usr/local/bin/ > sudo ln -s /usr/local/bro/share/broctl/scripts/expire-logs /usr/local/bin/ > sudo ln -s /usr/local/bro/share/broctl/scripts/delete-log /usr/local/bin/ > sudo ln -s /usr/local/bro/share/broctl/scripts/cflow-stats /usr/local/bin/ > sudo ln -s /usr/local/bro/share/broctl/scripts/stats-to-csv /usr/local/bin/ > > This will work for a while. But at some point it stops: > > > at the core I believe it's because bro, after sometime, won't respond to a "normal" kill command. A "sudo killall bro" will do nothing. Usually I'll "sudo killall bro", wait a minute, and then my spool directory will be empty, I'll have an email with stats, and I'll have my new archive directory. 
I'll have to -9 it in order to get it to stop, I've restarted this morning and will see how many days it will go. Thank you. > > James > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From seth at icir.org Tue Jan 3 19:23:13 2017 From: seth at icir.org (Seth Hall) Date: Tue, 3 Jan 2017 22:23:13 -0500 Subject: [Bro] Mime-type issues (text/plain and application/x-msdownload) In-Reply-To: References: Message-ID: <015D67C8-2114-4266-83CA-54AD8E80E3BC@icir.org> > On Dec 28, 2016, at 9:11 AM, Beyaz ?apka wrote: > > Bro says the mime-type as "text/plain" for the response of first HTTP GET request. > However, at least, wireshark (and also CapTipper) says it is "text/html". > The correct one is text/html, it is clear. Are you referring to the first request in the "10.12.13.102 49192 195.133.48.182 80" connection? It's showing as text/html for me in Bro 2.5. > I think, bro does not look only Content-Type (maybe due to malicious manipulation), but makes some heuristics. But there should be some issues for this case. We have a fairly large set of signatures that identify file types. In HTTP traffic, the Content-Type header doesn't factor in at all. > The other one is that, there are 3 binary files in this pcap. > Bro extracts them pretty fine. > However again there are some issues about content-type. > While their content type is application/x-msdownload, the http.log and files.log says dash dash (not found). Due to the fact that we detect mime type with signatures and I don't seem to be able to find any information about what application/x-msdownload is, I don't think we'll be able to make that detection. The files that are transferred are unrecognizable binary data too (at least I was unable to see anything recognizable there). .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From jedwards2728 at gmail.com Wed Jan 4 00:49:08 2017 From: jedwards2728 at gmail.com (John Edwards) Date: Wed, 4 Jan 2017 19:49:08 +1100 Subject: [Bro] specific logging per worker In-Reply-To: <20170103104823.u7z46mltyuxldpth@Beezling.fritz.box> References: <20170103104823.u7z46mltyuxldpth@Beezling.fritz.box> Message-ID: Hi Johanna, Thanks for the info, I have 1 worker up at the border inspecting everything and another worker below a few firewall and IPS systems. i have just installed another worker below all these inspection points but because all workers feed into a SIEM there no need for the likes of the conn.log etc to be logging as much as it is off the same link duplicated into the SIEM as its charged based on consumption. So if we had a worker below our inspection points only logging some of the log types we would still get the security benefit of having a worker placed there without the storage requirements. Thanks John On Tue, Jan 3, 2017 at 9:48 PM, Johanna Amann wrote: > On Fri, Dec 16, 2016 at 02:09:09PM +1100, John Edwards wrote: > > Hi all, > > > > If i have a cluster that contains 2 workers among a proxy and logger etc, > > Worker 1 watches and logs everything, Is there a way i can tell worker 2 > to > > only log a specific protocol and not watch everything the Worker 1? > > You can add worker-specific configuration to local.bro using the @if > directive. > > For example something like... 
> > @if ( Cluster::node == "worker-1" ) > > # things here will only be executed on node named worker-1 > > @endif > > That being said - why exactly do you want to do that? In a traditional > cluster setting, the traffic is split eavenly among the workers and you > typically want everyone to perform exactly the same actions. > > Johanna > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170104/9741c551/attachment.html From siberkartal at gmail.com Wed Jan 4 01:32:49 2017 From: siberkartal at gmail.com (=?UTF-8?B?QmV5YXogxZ5hcGth?=) Date: Wed, 4 Jan 2017 12:32:49 +0300 Subject: [Bro] Mime-type issues (text/plain and application/x-msdownload) In-Reply-To: <015D67C8-2114-4266-83CA-54AD8E80E3BC@icir.org> References: <015D67C8-2114-4266-83CA-54AD8E80E3BC@icir.org> Message-ID: Yes, I talk about the response for the first HTTP request. signature file-html is good but still could be better. The signature only check for the starting of the file for particular patterns, the problem originates from that. >From where, you are not able to find any information about what application/x-msdownload is? If you are talking about *.sig files in bro directories, of course it does not exist. However google says much: https://msdn.microsoft.com/en-us/library/ms775147(v=vs.85).aspx application/x-msdownload is Executable (.exe or .dll) file. It is similar to signature file-magic-auto433. In addition, sure, they are unrecognized binary data, since they are encrypted. I think, file-magic-auto433 flags plain ones correctly, but gives its mime type as application/x-dosexec I will duplicate it and add an additional check (http-reply-header /Content-type: application/x-msdownload/) for a workaround. On Wed, Jan 4, 2017 at 6:23 AM, Seth Hall wrote: > > > On Dec 28, 2016, at 9:11 AM, Beyaz ?apka wrote: > > > > Bro says the mime-type as "text/plain" for the response of first HTTP > GET request. > > However, at least, wireshark (and also CapTipper) says it is > "text/html". > > The correct one is text/html, it is clear. > > Are you referring to the first request in the "10.12.13.102 49192 > 195.133.48.182 80" connection? It's showing as text/html for me in Bro > 2.5. > > > I think, bro does not look only Content-Type (maybe due to malicious > manipulation), but makes some heuristics. But there should be some issues > for this case. > > We have a fairly large set of signatures that identify file types. In > HTTP traffic, the Content-Type header doesn't factor in at all. > > > The other one is that, there are 3 binary files in this pcap. > > Bro extracts them pretty fine. > > However again there are some issues about content-type. > > While their content type is application/x-msdownload, the http.log and > files.log says dash dash (not found). > > Due to the fact that we detect mime type with signatures and I don't seem > to be able to find any information about what application/x-msdownload is, > I don't think we'll be able to make that detection. The files that are > transferred are unrecognizable binary data too (at least I was unable to > see anything recognizable there). > > .Seth > > > -- > Seth Hall > International Computer Science Institute > (Bro) because everyone has a network > http://www.bro.org/ > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170104/ef307f12/attachment-0001.html From jlay at slave-tothe-box.net Wed Jan 4 03:19:30 2017 From: jlay at slave-tothe-box.net (James Lay) Date: Wed, 04 Jan 2017 04:19:30 -0700 Subject: [Bro] Bro 2.5 and log rotation In-Reply-To: References: <1482414587.2534.7.camel@slave-tothe-box.net> Message-ID: <1483528770.3264.4.camel@slave-tothe-box.net> Thanks Seth, Interestingly, this is on my home network...the largest compressed file in looking at past logs was tcprecovery at 7.8 megs. ?On a hunch, after this issue came up again on Christmas day, I disabled?TCPRS and have had no issues since. James On Tue, 2017-01-03 at 22:00 -0500, Seth Hall wrote: > I've seen this before when people are generating really huge logs and > IO on their system goes crazy because the previous logs are still > being compressed which runs into a downward spiral that it never > recovers from.??For those logs that you have which haven't been > rotated as you expected, was there a gzip process running in the > background???I suspect that you have a lot of gzip processes running > and a very high system load. > > ? .Seth > > > > > > On Dec 22, 2016, at 8:49 AM, James Lay > > wrote: > > > > I guess I'm in this boat as well.??Since my upgrade, bro will stop > > rotating logs at some point.??I'm not running bro via > > broctl.??Here's my process for log rotation: > > > > local.bro: > > ????????redef Log::default_rotation_interval = 86400 secs; > > ????????redef Log::default_rotation_postprocessor_cmd = "archive- > > log"; > > > > broctl.cfg: > > ????????LogRotationInterval = 86400 > > > > sudo /usr/local/bro/bin/broctl install > > > > sudo ln -s /usr/local/bro/share/broctl/scripts/archive-log > > /usr/local/bin/ > > sudo ln -s /usr/local/bro/share/broctl/scripts/broctl-config.sh > > /usr/local/bin/ > > sudo ln -s /usr/local/bro/share/broctl/scripts/make-archive-name > > /usr/local/bin/ > > sudo ln -s /usr/local/bro/share/broctl/scripts/expire-logs > > /usr/local/bin/ > > sudo ln -s /usr/local/bro/share/broctl/scripts/delete-log > > /usr/local/bin/ > > sudo ln -s /usr/local/bro/share/broctl/scripts/cflow-stats > > /usr/local/bin/ > > sudo ln -s /usr/local/bro/share/broctl/scripts/stats-to-csv > > /usr/local/bin/ > > > > This will work for a while.??But at some point it stops: > > > > > > at the core I believe it's because bro, after sometime, won't > > respond to a "normal" kill command.??A "sudo killall bro" will do > > nothing.??Usually I'll "sudo killall bro", wait a minute, and then > > my spool directory will be empty, I'll have an email with stats, > > and I'll have my new archive directory.??I'll have to -9 it in > > order to get it to stop,??I've restarted this morning and will see > > how many days it will go.??Thank you. > > > > James > > _______________________________________________ > > Bro mailing list > > bro at bro-ids.org > > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -- > Seth Hall > International Computer Science Institute > (Bro) because everyone has a network > http://www.bro.org/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170104/aee9af72/attachment.html From siberkartal at gmail.com Wed Jan 4 03:56:11 2017 From: siberkartal at gmail.com (=?UTF-8?B?QmV5YXogxZ5hcGth?=) Date: Wed, 4 Jan 2017 14:56:11 +0300 Subject: [Bro] Mime-type issues (text/plain and application/x-msdownload) In-Reply-To: References: <015D67C8-2114-4266-83CA-54AD8E80E3BC@icir.org> Message-ID: Hi Seth, I tried it in script land. https://pastebin.mozilla.org/8958431 This is the related part. ... event http_header(c: connection, is_orig: bool, name: string, value: string) &priority=3 { if ( !is_orig ) { if ( to_lower(name) == "content-type" && value == "application/x-msdownload" ) { #c$http$resp_mime_types[0] = "application/x-msdownload"; print(c$uid); _mime_type = "application/x-msdownload"; } } } ... It does not set mime_type in the output correctly, order is erroneous. It sets mime_type of a different response, so it also breaks it. Could you please test it with the pcap file in the first post? Thanks, On Wed, Jan 4, 2017 at 12:32 PM, Beyaz ?apka wrote: > Yes, I talk about the response for the first HTTP request. > signature file-html is good but still could be better. > The signature only check for the starting of the file for particular > patterns, the problem originates from that. > > From where, you are not able to find any information about what > application/x-msdownload is? > If you are talking about *.sig files in bro directories, of course it does > not exist. > However google says much: https://msdn.microsoft. > com/en-us/library/ms775147(v=vs.85).aspx > application/x-msdownload is Executable (.exe or .dll) file. > It is similar to signature file-magic-auto433. > > In addition, sure, they are unrecognized binary data, since they are > encrypted. > I think, file-magic-auto433 flags plain ones correctly, but gives its > mime type as application/x-dosexec > I will duplicate it and add an additional check > (http-reply-header /Content-type: application/x-msdownload/) for a > workaround. > > On Wed, Jan 4, 2017 at 6:23 AM, Seth Hall wrote: > >> >> > On Dec 28, 2016, at 9:11 AM, Beyaz ?apka wrote: >> > >> > Bro says the mime-type as "text/plain" for the response of first HTTP >> GET request. >> > However, at least, wireshark (and also CapTipper) says it is >> "text/html". >> > The correct one is text/html, it is clear. >> >> Are you referring to the first request in the "10.12.13.102 49192 >> 195.133.48.182 80" connection? It's showing as text/html for me in Bro >> 2.5. >> >> > I think, bro does not look only Content-Type (maybe due to malicious >> manipulation), but makes some heuristics. But there should be some issues >> for this case. >> >> We have a fairly large set of signatures that identify file types. In >> HTTP traffic, the Content-Type header doesn't factor in at all. >> >> > The other one is that, there are 3 binary files in this pcap. >> > Bro extracts them pretty fine. >> > However again there are some issues about content-type. >> > While their content type is application/x-msdownload, the http.log and >> files.log says dash dash (not found). >> >> Due to the fact that we detect mime type with signatures and I don't seem >> to be able to find any information about what application/x-msdownload is, >> I don't think we'll be able to make that detection. The files that are >> transferred are unrecognizable binary data too (at least I was unable to >> see anything recognizable there). 
>> >> .Seth >> >> >> -- >> Seth Hall >> International Computer Science Institute >> (Bro) because everyone has a network >> http://www.bro.org/ >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170104/90b55131/attachment.html From seth at icir.org Wed Jan 4 09:03:49 2017 From: seth at icir.org (Seth Hall) Date: Wed, 4 Jan 2017 12:03:49 -0500 Subject: [Bro] Mime-type issues (text/plain and application/x-msdownload) In-Reply-To: References: <015D67C8-2114-4266-83CA-54AD8E80E3BC@icir.org> Message-ID: <2F0B3FD6-11D0-4FCC-95DB-CF5F5A5C9BD7@icir.org> > On Jan 4, 2017, at 4:32 AM, Beyaz ?apka wrote: > > Yes, I talk about the response for the first HTTP request. > signature file-html is good but still could be better. > The signature only check for the starting of the file for particular patterns, the problem originates from that. We accept patches if you have improvements to be made on our file type detection. > From where, you are not able to find any information about what application/x-msdownload is? > If you are talking about *.sig files in bro directories, of course it does not exist. > However google says much: https://msdn.microsoft.com/en-us/library/ms775147(v=vs.85).aspx That link doesn't actually describe what the purpose of that mime type is or what exactly the file format should look like. It's just more of the same stuff that I already found that makes references to the mime type and places some relation to windows executables but we already identify windows executables as application/x-dosexec as you discovered. > In addition, sure, they are unrecognized binary data, since they are encrypted. > I think, file-magic-auto433 flags plain ones correctly, but gives its mime type as application/x-dosexec > I will duplicate it and add an additional check (http-reply-header /Content-type: application/x-msdownload/) for a workaround. Our file type detection is is meant to detect file types by inspecting the file content. What you want to do is just something different from the way Bro works and you are already doing the right thing by writing your own script to do something extra with that header. .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From dnj0496 at gmail.com Wed Jan 4 20:40:52 2017 From: dnj0496 at gmail.com (Dk Jack) Date: Wed, 4 Jan 2017 20:40:52 -0800 Subject: [Bro] bif example Message-ID: Hi, I have a question about BIF example . I am trying to write my own BIF functions. I'd like to store some data (i.e. pass in a record to a BIF function) and retrieve it later as a record when I am processing traffic. In the example, I see 'foobar' record is defined in bro.init. There is a declaration of foobar record in types.bif. This is being accessed in bro.bif. How is the 'foobar' record type resolved when it's referenced in bro.bif? Is the example complete or is it missing some includes and such? I tried to the same but my bro script fails because my bif file doesn't know about my record type. I included my 'types.bif.h' in my bif file get it compiled without errors. But it fails to load because it does not know about my record type. I get the error 'identifier not defined:'. Any help is appreciated. Thanks. Dk. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170104/2dc4af65/attachment.html From dirk.leinenbach at consistec.de Thu Jan 5 00:19:17 2017 From: dirk.leinenbach at consistec.de (Dirk Leinenbach) Date: Thu, 5 Jan 2017 09:19:17 +0100 Subject: [Bro] Detecting lost broccoli events in python Message-ID: <586E0185.9090506@consistec.de> Hi all, I'm receiving bro events with a python script via broccoli python bindings. Is it possible to detect overload scenarios (events are being dropped because python not fast enough) and log them in some way? Preferably I would like to detect this from python, but if it's possible on the sender side that would also be of help. Does anybody have an idea? I didn't find anything in the broccoli-python doc. Thanks, Dirk -- Dr.-Ing. Dirk Leinenbach - Leitung Softwareentwicklung consistec Engineering & Consulting GmbH ------------------------------------------------------------------ Europaallee 5 Fon: +49 (0)681 / 959044-0 D-66113 Saarbr?cken Fax: +49 (0)681 / 959044-11 http://www.consistec.de e-mail: dirk.leinenbach at consistec.de Registergericht: Amtsgericht Saarbr?cken Registerblatt: HRB12003 Gesch?ftsf?hrer: Dr. Thomas Sinnwell, Volker Leiendecker, Stefan Sinnwell From hovsep.sanjay.levi at gmail.com Thu Jan 5 07:27:21 2017 From: hovsep.sanjay.levi at gmail.com (Hovsep Levi) Date: Thu, 5 Jan 2017 15:27:21 +0000 Subject: [Bro] Bro 2.5 Logger crash --> Broken Log Directory naming In-Reply-To: <03a501d25d50$0221ebb0$0665c310$@uoregon.edu> References: <03a501d25d50$0221ebb0$0665c310$@uoregon.edu> Message-ID: When our cluster becomes unstable we see the same behavior. I think making the cluster stable is the answer. I suspect you have the same problem we do in that the logs are not written to disk fast enough and slowly buffer all memory until Bro crashes. I think the answer is to enable multiple loggers and Kafka export but I've not figured out how to do that yet. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170105/6ba55dbf/attachment.html From hovsep.sanjay.levi at gmail.com Thu Jan 5 07:31:33 2017 From: hovsep.sanjay.levi at gmail.com (Hovsep Levi) Date: Thu, 5 Jan 2017 15:31:33 +0000 Subject: [Bro] Bro cluster requirements and manager logging backlog bug In-Reply-To: References: <39591A79-2357-45E1-AFE5-7009E764326E@illinois.edu> <4A126664-FC0D-465B-896C-0BD93809CE33@illinois.edu> <6EDF7C86-443B-4441-8A74-00C09A653845@illinois.edu> Message-ID: Ok. Do you know offhand what file I would look into to make that change ? Also, after creating multiple loggers how would I make each one disable local logging and instead use a kafka export ? -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170105/43555d89/attachment.html From jazoff at illinois.edu Thu Jan 5 07:46:09 2017 From: jazoff at illinois.edu (Azoff, Justin S) Date: Thu, 5 Jan 2017 15:46:09 +0000 Subject: [Bro] Bro cluster requirements and manager logging backlog bug In-Reply-To: References: <39591A79-2357-45E1-AFE5-7009E764326E@illinois.edu> <4A126664-FC0D-465B-896C-0BD93809CE33@illinois.edu> <6EDF7C86-443B-4441-8A74-00C09A653845@illinois.edu> Message-ID: <7A125A59-5577-4BC4-BDCB-5DD4F27E7BFD@illinois.edu> > On Jan 5, 2017, at 10:31 AM, Hovsep Levi wrote: > > Ok. Do you know offhand what file I would look into to make that change ? 
Also, after creating multiple loggers how would I make each one disable local logging and instead use a kafka export ? > Someone that does this now using the kafka plugin could answer better, but I think it's a matter of using a small script. The kafka plugin comes with a script like this: event bro_init() &priority=-5 { for (stream_id in Log::active_streams) { if (stream_id in Kafka::logs_to_send) { local filter: Log::Filter = [ $name = fmt("kafka-%s", stream_id), $writer = Log::WRITER_KAFKAWRITER, $config = table(["stream_id"] = fmt("%s", stream_id)) ]; Log::add_filter(stream_id, filter); } } } I think you would change it to be something like event bro_init() &priority=-5 { for (stream_id in Log::active_streams) { local filter: Log::Filter = [ $name = fmt("kafka-%s", stream_id), $writer = Log::WRITER_KAFKAWRITER, $config = table(["stream_id"] = fmt("%s", stream_id)) ]; Log::remove_default_filter(stream_id) Log::add_filter(stream_id, filter); } } -- - Justin Azoff From andrew.dellana at bayer.com Thu Jan 5 08:10:43 2017 From: andrew.dellana at bayer.com (Andrew Dellana) Date: Thu, 5 Jan 2017 16:10:43 +0000 Subject: [Bro] Connection summary question Message-ID: <17063fa8d1624233926ca5beeb6757f9@moxde9.na.bayer.cnb> Hello, I have been receiving the ?[Bro] Connection summary? for a while now but don?t see any connections via SSH ? but I need to SSH down to the server location which is where the Network Tap and Bro are located. Is there a way to tell Bro to look on port 22 or a way to tell it to look at specific port numbers? Best regards, Andrew Dellana Intern ________________________ -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170105/467ec97a/attachment-0001.html From krissecinfo at gmail.com Thu Jan 5 09:03:52 2017 From: krissecinfo at gmail.com (Kris Secinfo) Date: Thu, 5 Jan 2017 11:03:52 -0600 Subject: [Bro] user agent string data enrichment Message-ID: All- I am new to Bro, and am trying to find a way to "enrich" the user agent string to a more readable format. Is there a way that Bro can read the value that is in the user agent string, compare it to a table of known strings and present the "readable" value in a new field? For example, I would want Bro to see Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36 and add a new field that reads something to the effect of "Google Chrome Version 55.0.2883.87 m (64-bit)" Thanks in advance for any new tips/starting points offered! -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170105/5c8fe07c/attachment.html From bro at pingtrip.com Thu Jan 5 09:50:40 2017 From: bro at pingtrip.com (Dave Crawford) Date: Thu, 5 Jan 2017 12:50:40 -0500 Subject: [Bro] Detecting multiple Email attachments Message-ID: I?m looking to generate a notice when an email has both a PDF and Excel document attached and wanted a sanity check on a solution before I started coding First, create a lookup table to track file mime-types over a period of time. Something like: global fuid_mime_state: table[string] of string &create_expire=2min &expire_func=fuid_out; Second, on ?file_state_remove" events (where the source is SMTP) add the details to the tracking table. 
Something like: fuid_mime_state[f$id] = f$info$mime_type And finally, on "SMTP::log_smtp" events loop through the rec?fuids vector and look them up in the fuid_mime_state table to see if both a PDF and Excel doc are attached. Does this approach make sense, or am I overlooking an easier solution? -Dave From jazoff at illinois.edu Thu Jan 5 09:53:32 2017 From: jazoff at illinois.edu (Azoff, Justin S) Date: Thu, 5 Jan 2017 17:53:32 +0000 Subject: [Bro] user agent string data enrichment In-Reply-To: References: Message-ID: <60CA7D13-A450-46BD-AC7A-ADC07284C884@illinois.edu> > On Jan 5, 2017, at 12:03 PM, Kris Secinfo wrote: > > All- > I am new to Bro, and am trying to find a way to "enrich" the user agent string to a more readable format. Is there a way that Bro can read the value that is in the user agent string, compare it to a table of known strings and present the "readable" value in a new field? > For example, I would want Bro to see > > Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36 > > and add a new field that reads something to the effect of "Google Chrome Version 55.0.2883.87 m (64-bit)" > > Thanks in advance for any new tips/starting points offered! There is code that generates the software.log entry that tries to normalize things a bit. Does the software.log by any chance already contain the result that you want? -- - Justin Azoff From hovsep.sanjay.levi at gmail.com Thu Jan 5 12:50:30 2017 From: hovsep.sanjay.levi at gmail.com (Hovsep Levi) Date: Thu, 5 Jan 2017 20:50:30 +0000 Subject: [Bro] Bro cluster requirements and manager logging backlog bug In-Reply-To: <7A125A59-5577-4BC4-BDCB-5DD4F27E7BFD@illinois.edu> References: <39591A79-2357-45E1-AFE5-7009E764326E@illinois.edu> <4A126664-FC0D-465B-896C-0BD93809CE33@illinois.edu> <6EDF7C86-443B-4441-8A74-00C09A653845@illinois.edu> <7A125A59-5577-4BC4-BDCB-5DD4F27E7BFD@illinois.edu> Message-ID: Thanks. I've found the BroControl scripts to modify the logger setup and will be testing soon. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170105/6e5fb491/attachment.html From rmarsh at salesforce.com Thu Jan 5 14:16:57 2017 From: rmarsh at salesforce.com (Rhette Wallach) Date: Thu, 5 Jan 2017 14:16:57 -0800 Subject: [Bro] Exfil scripts Message-ID: Hi All, I'm relatively new to Bro and would like input if there are other exfiltration detection scripts out there other than these two: https://github.com/sooshie/bro-scripts/blob/master/2.4-scrip ts/dns-bad_behavior.bro https://github.com/reservoirlabs/bro-scripts/tree/master/ exfil-detection-framework Any others? Additionally, when I try to run the first script, I get a split string error on this line: local parts = split_string(key$str, /, /); This is odd because my understanding is that the split_string function should be built-in and part of base/bif/strings.bif.bro, and it's function is defined here: is a defined function as per here ( https://www.bro.org/sphinx/scripts/base/bif/strings.bif.bro.html). Any input on either of these questions would be appreciated. Thanks! rhette -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170105/c6f8959e/attachment.html From vladg at illinois.edu Fri Jan 6 08:56:11 2017 From: vladg at illinois.edu (Vlad Grigorescu) Date: Fri, 06 Jan 2017 10:56:11 -0600 Subject: [Bro] compressed file analyzer + docx files In-Reply-To: References: Message-ID: erik clark writes: > Has anyone given any thought as to the possiblity of using a compressed > file analyzer to open and detect embedded flash files in docx files, or > macros in the same? I realize that that means we need a file analyzer > first, but I have been thinking about alternate use cases for the analyzer, > and this one sprung to mind... Do you know what the format looks like? I took a crack at a zip file analyzer a while back, but it turns out that the only authoritative data is in the footer, so that doesn't work with incremental parsing. --Vlad -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 800 bytes Desc: not available Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170106/3580447e/attachment.bin From hovsep.sanjay.levi at gmail.com Fri Jan 6 11:14:21 2017 From: hovsep.sanjay.levi at gmail.com (Hovsep Levi) Date: Fri, 6 Jan 2017 19:14:21 +0000 Subject: [Bro] Bro cluster requirements and manager logging backlog bug In-Reply-To: References: <39591A79-2357-45E1-AFE5-7009E764326E@illinois.edu> <4A126664-FC0D-465B-896C-0BD93809CE33@illinois.edu> <6EDF7C86-443B-4441-8A74-00C09A653845@illinois.edu> <7A125A59-5577-4BC4-BDCB-5DD4F27E7BFD@illinois.edu> Message-ID: Here's the problem I'm having so far. I've included most of the modified code below. I think I'm using the wrong data type for logger_str_array ? [bro at mgr /opt/bro]$ bin/broctl start starting logger-1 ... starting logger-2 ... starting logger-3 (was crashed) ... starting logger-4 ... 
logger-3 terminated immediately after starting; check output with "diag" logger-2 terminated immediately after starting; check output with "diag" logger-1 terminated immediately after starting; check output with "diag" logger-4 terminated immediately after starting; check output with "diag" ==== stderr.log error in /opt/bro_data/spool/installed-scripts-do-not-touch/auto/cluster-layout.bro, line 146: not a record (logger-1$manager) error in /opt/bro_data/spool/installed-scripts-do-not-touch/auto/cluster-layout.bro, line 147: not a record (logger-2$manager) error in /opt/bro_data/spool/installed-scripts-do-not-touch/auto/cluster-layout.bro, line 148: not a record (logger-3$manager) error in /opt/bro_data/spool/installed-scripts-do-not-touch/auto/cluster-layout.bro, line 149: not a record (logger-4$manager) error in /opt/bro_data/spool/installed-scripts-do-not-touch/auto/cluster-layout.bro, line 150: not a record (logger-1$manager) error in /opt/bro_data/spool/installed-scripts-do-not-touch/auto/cluster-layout.bro, line 151: not a record (logger-2$manager) error in /opt/bro_data/spool/installed-scripts-do-not-touch/auto/cluster-layout.bro, line 152: not a record (logger-3$manager) error in /opt/bro_data/spool/installed-scripts-do-not-touch/auto/cluster-layout.bro, line 153: not a record (logger-4$manager) error in /opt/bro_data/spool/installed-scripts-do-not-touch/auto/cluster-layout.bro, line 22: uninitialized list value ($node_type=Cluster::WORKER, $ip=10.1.1.1, $zone_id=, $p=47778/tcp, $interface=myri0, $logger=logger-2$ = manager, $proxy=proxy-10) error in /opt/bro_data/spool/installed-scripts-do-not-touch/auto/cluster-layout.bro, line 22: bad record initializer ([$node_type=Cluster::WORKER, $ip=10.1.1.1, $zone_id=, $p=47778/tcp, $interface=myri0, $logger=logger-2$ = manager, $proxy=proxy-10]) from install.py... #------------------------------------------------------------------------------------------# # For using multiple loggers create a unique string for each # and store in loggers_str_array[]. Reset loggerstr to an empty # string for the manager definition found in the next section below. #------------------------------------------------------------------------------------------# if loggers: # Use the first logger in list, since only one logger is allowed. # logger = loggers[0] manager_is_logger = "F" # loggerstr = '$logger="%s", ' % logger.name loggerstr = "" for l in loggers: lstr = '$logger="%s"' % l.name loggers_str_array.append(lstr) else: # If no logger exists, then manager does the logging. manager_is_logger = "T" loggerstr = "" ostr = "# Automatically generated. Do not edit.\n" ostr += "redef Cluster::manager_is_logger = %s;\n" % manager_is_logger ostr += "redef Cluster::nodes = {\n" #------------------------------------------------------------------------------------------# # For the cluster-layout replace the single definition of a logger with # all loggers defined in loggers[]. # #------------------------------------------------------------------------------------------# # Control definition. For now just reuse the manager information. 
ostr += '\t["control"] = [$node_type=Cluster::CONTROL, $ip=%s, $zone_id="%s", $p=%s/tcp],\n' % (util.format_bro_addr(manager.addr), config.Config.zoneid, broport.use_port(None)) # Logger definition # if loggers: # ostr += '\t["%s"] = [$node_type=Cluster::LOGGER, $ip=%s, $zone_id="%s", $p=%s/tcp],\n' % (logger.name, util.format_bro_addr(logger.addr), logger.zone_id, broport.use_port(logger)) # Multi-logger setup for testing # Define an array of loggers for assigning to workers # if loggers: for l in loggers: ostr += '\t["%s"] = [$node_type=Cluster::LOGGER, $ip=%s, $zone_id="%s", $p=%s/tcp],\n' % (l.name, util.format_bro_addr(l.addr), l.zone_id, broport.use_port(l)) #------------------------------------------------------------------------------------------# # For the workers assign a logger by balancing the total number # of loggers across the total worker count. # # Where loggerstr was previously used loggers_str_array[] is now used #------------------------------------------------------------------------------------------# # Workers definition for w in workers: p = w.count % len(proxies) logger_index = w.count % len(loggers) # ostr += '\t["%s"] = [$node_type=Cluster::WORKER, $ip=%s, $zone_id="%s", $p=%s/tcp, $interface="%s", %s$manager="%s", $proxy="%s"],\n' % (w.name, util.format_bro_addr(w.addr), w.zone_id, broport.use_port(w), w.interface, loggerstr, manager.name, proxies[p].name) ostr += '\t["%s"] = [$node_type=Cluster::WORKER, $ip=%s, $zone_id="%s", $p=%s/tcp, $interface="%s", %s$manager="%s", $proxy="%s"],\n' % (w.name, util.format_bro_addr(w.addr), w.zone_id, broport.use_port(w), w.interface, loggers_str_array[logger_index], manager.name, proxies[p].name) -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170106/1b75de98/attachment-0001.html From hovsep.sanjay.levi at gmail.com Fri Jan 6 11:28:26 2017 From: hovsep.sanjay.levi at gmail.com (Hovsep Levi) Date: Fri, 6 Jan 2017 19:28:26 +0000 Subject: [Bro] Bro cluster requirements and manager logging backlog bug In-Reply-To: References: <39591A79-2357-45E1-AFE5-7009E764326E@illinois.edu> <4A126664-FC0D-465B-896C-0BD93809CE33@illinois.edu> <6EDF7C86-443B-4441-8A74-00C09A653845@illinois.edu> <7A125A59-5577-4BC4-BDCB-5DD4F27E7BFD@illinois.edu> Message-ID: Ok I fixed that... loggers[logger_index].name should be used , it's a reference to the previously defined loggers_str_array[] entries. Now it's... error in /opt/bro_data/spool/installed-scripts-do-not-touch/auto/cluster-layout.bro, line 22: unknown identifier logger, at or near "logger" -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170106/1a1c765d/attachment.html From hovsep.sanjay.levi at gmail.com Fri Jan 6 11:50:35 2017 From: hovsep.sanjay.levi at gmail.com (Hovsep Levi) Date: Fri, 6 Jan 2017 19:50:35 +0000 Subject: [Bro] Bro cluster requirements and manager logging backlog bug In-Reply-To: References: <39591A79-2357-45E1-AFE5-7009E764326E@illinois.edu> <4A126664-FC0D-465B-896C-0BD93809CE33@illinois.edu> <6EDF7C86-443B-4441-8A74-00C09A653845@illinois.edu> <7A125A59-5577-4BC4-BDCB-5DD4F27E7BFD@illinois.edu> Message-ID: The format string was the problem. The value of logger_str_array[i] would be $logger="logger-1". When trying to set it differently the %s was substituted incorrectly and didnt' have the correct template of $logger="logger-name". (My hack needs to be rewritten.) 
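For reference, a correctly generated worker entry in cluster-layout.bro carries the logger as its own quoted node name, separate from $manager and $proxy. The following is a hand-written sketch with made-up node names and addresses, not actual broctl output:

@load base/frameworks/cluster

# Hand-written illustration only (not generated by broctl): the $logger
# field holds a quoted logger node name of its own, alongside $manager
# and $proxy.
redef Cluster::nodes += {
    ["worker-1-1"] = [$node_type=Cluster::WORKER, $ip=10.1.1.1, $zone_id="",
                      $p=47778/tcp, $interface="myri0",
                      $logger="logger-2", $manager="manager", $proxy="proxy-10"],
};

That is the shape the %s substitution has to produce for every worker, which is why the per-logger fragment needs to be the full $logger="logger-name" template.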
Now it's fatal error in /opt/bro_data/spool/installed-scripts-do-not-touch/site/local.bro, line 126: can't find logs-to-kafka.bro which I think means I'm pass the multiple logger configuration and closer to getting this running. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170106/0efa73c4/attachment.html From hovsep.sanjay.levi at gmail.com Fri Jan 6 13:58:13 2017 From: hovsep.sanjay.levi at gmail.com (Hovsep Levi) Date: Fri, 6 Jan 2017 21:58:13 +0000 Subject: [Bro] Bro cluster requirements and manager logging backlog bug In-Reply-To: References: <39591A79-2357-45E1-AFE5-7009E764326E@illinois.edu> <4A126664-FC0D-465B-896C-0BD93809CE33@illinois.edu> <6EDF7C86-443B-4441-8A74-00C09A653845@illinois.edu> <7A125A59-5577-4BC4-BDCB-5DD4F27E7BFD@illinois.edu> Message-ID: I'm using four loggers and the memory usage remains stable. When I re-enable writing logs to disk there's a difference since logs/current is a symlink to the first logger, spool/logger-1; the other loggers write into their own spool directories (ex: "spool/logger-3"). I think you mentioned this before. For some reason logger-1 and logger-3 are doing all of the work, there are no logs in logger-2 and logger-4 and the communication.log files for each doesn't show any worker communications. At startup there was "peer sent worker-1-1" but nothing afterwards. I'm not sure yet if this happens when Kafka only logging is enabled. The cluster-layout.bro looks correct and shows the 4 loggers are distributed among the workers correctly, so it's not that. When I reduced the number of loggers to 2 it's the same phenomenon, logger-1 is working OK but logger-2 seems to be stalled. Only one worker has sent data and it's very low volume. Overall the multiple logger setup shows promise for fixing the issue but there's a few more things to discover and tune. It seems the reason the cluster is stable is because only half of the logs are being received when using multiple loggers. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170106/dee21468/attachment.html From jazoff at illinois.edu Fri Jan 6 14:16:58 2017 From: jazoff at illinois.edu (Azoff, Justin S) Date: Fri, 6 Jan 2017 22:16:58 +0000 Subject: [Bro] Bro cluster requirements and manager logging backlog bug In-Reply-To: References: <39591A79-2357-45E1-AFE5-7009E764326E@illinois.edu> <4A126664-FC0D-465B-896C-0BD93809CE33@illinois.edu> <6EDF7C86-443B-4441-8A74-00C09A653845@illinois.edu> <7A125A59-5577-4BC4-BDCB-5DD4F27E7BFD@illinois.edu> Message-ID: Looks like you worked out the broctl changes the right way. That code is a bit crufty, but what you have will work. There's a much easier way to do the distributing of workers/proxies: >>> import itertools >>> loggers = ['logger-1', 'logger-2'] >>> logger_cycler = itertools.cycle(loggers) >>> next(logger_cycler) 'logger-1' >>> next(logger_cycler) 'logger-2' >>> next(logger_cycler) 'logger-1' >>> next(logger_cycler) 'logger-2' > On Jan 6, 2017, at 4:58 PM, Hovsep Levi wrote: > > I'm using four loggers and the memory usage remains stable. When I re-enable writing logs to disk there's a difference since logs/current is a symlink to the first logger, spool/logger-1; the other loggers write into their own spool directories (ex: "spool/logger-3"). I think you mentioned this before. Yep. 
It's an issue for purely local logging, and I'm not sure if rotation would work (but maybe it does? you tell me :-)) For people that use splunk/logstash/kafka it's mostly a non-issue since it will get re-aggregated anyway. > For some reason logger-1 and logger-3 are doing all of the work, there are no logs in logger-2 and logger-4 and the communication.log files for each doesn't show any worker communications. At startup there was "peer sent worker-1-1" but nothing afterwards. I'm not sure yet if this happens when Kafka only logging is enabled. The cluster-layout.bro looks correct and shows the 4 loggers are distributed among the workers correctly, so it's not that. > > When I reduced the number of loggers to 2 it's the same phenomenon, logger-1 is working OK but logger-2 seems to be stalled. Only one worker has sent data and it's very low volume. > > Overall the multiple logger setup shows promise for fixing the issue but there's a few more things to discover and tune. It seems the reason the cluster is stable is because only half of the logs are being received when using multiple loggers. It's very promising that you were seeing traffic to logger-1 and logger-3, so it is at least proving that multiple loggers will work. If you ran 4 workers but only one was doing anything I'd be worried. I'd be interested in knowing what happens if you ran 6 or 8 loggers. Can you post what the resulting cluster-layout looked like for 2 and 4 workers? Maybe it's a simple problem and it's just not evenly distributing things. -- - Justin Azoff From jazoff at illinois.edu Fri Jan 6 14:22:46 2017 From: jazoff at illinois.edu (Azoff, Justin S) Date: Fri, 6 Jan 2017 22:22:46 +0000 Subject: [Bro] Bro cluster requirements and manager logging backlog bug In-Reply-To: References: <39591A79-2357-45E1-AFE5-7009E764326E@illinois.edu> <4A126664-FC0D-465B-896C-0BD93809CE33@illinois.edu> <6EDF7C86-443B-4441-8A74-00C09A653845@illinois.edu> <7A125A59-5577-4BC4-BDCB-5DD4F27E7BFD@illinois.edu> Message-ID: > On Jan 6, 2017, at 5:16 PM, Justin Azoff wrote: > > That code is a bit crufty, but what you have will work. There's a much easier way to do the distributing of workers/proxies: To be clear.. The code that was already there is crufty, the easier ways didn't exist back when it was written. -- - Justin Azoff From hovsep.sanjay.levi at gmail.com Fri Jan 6 16:41:32 2017 From: hovsep.sanjay.levi at gmail.com (Hovsep Levi) Date: Sat, 7 Jan 2017 00:41:32 +0000 Subject: [Bro] Bro cluster requirements and manager logging backlog bug In-Reply-To: References: <39591A79-2357-45E1-AFE5-7009E764326E@illinois.edu> <4A126664-FC0D-465B-896C-0BD93809CE33@illinois.edu> <6EDF7C86-443B-4441-8A74-00C09A653845@illinois.edu> <7A125A59-5577-4BC4-BDCB-5DD4F27E7BFD@illinois.edu> Message-ID: Actually file rotation does work but it's prone to fail because of a timestamp collision. Each rotated file is named based on the timestamp when the rotation started.. so they are about 10-20 seconds different in name. (ex: x509.22:51:59.. x509.22:52:20.. x509.22:52:30). I guess the fix would be to change the filenames relative to each logger, ex: "logger-1_x509..." or something more clever like merging all logger files into a single zip file. A cluster-layout for 2 loggers and 8 loggers is attached. I don't think there's anything to fix here based on the comments below. When I configure 8 loggers only 3 loggers are working. (logger-3, logger-4, and logger-8). I restarted the cluster and this time 5 of the loggers are working. (2,3,4,6,8). 
Still looking into why this happens. This problem would affect the Kafka export since each logger would be exporting. Restarting the failed loggers didn't fix the log flow. It looks like they are associating with the assigned logger correctly after startup and there's nothing indicative in the worker logs stderr or stdout. >From logger-1/communication.log after restarting logger-1 post-cluster startup: 1483746743.134338 logger-1 parent - - - info [#10005/10.1.1.2:51512] peer sent class "worker-1-8" 1483746743.134338 logger-1 parent - - - info [#10005/10.1.1.2:51512] phase: handshake 1483746743.135891 logger-1 child - - - info [#10006/10.1.1.3:17887] accepted clear connection 1483746743.137351 logger-1 parent - - - info [#10006/10.1.1.3:17887] added peer 1483746743.137351 logger-1 parent - - - info [#10006/10.1.1.3:17887] peer connected 1483746743.137351 logger-1 parent - - - info [#10006/10.1.1.3:17887] phase: version 1483746743.137351 logger-1 script - - - info connection established 1483746743.139263 logger-1 parent - - - info [#10006/10.1.1.3:17887] peer sent class "worker-3-12" 1483746743.139263 logger-1 parent - - - info [#10006/10.1.1.3:17887] phase: handshake -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170107/367e83fc/attachment-0001.html -------------- next part -------------- A non-text attachment was scrubbed... Name: cluster-layout__2-loggers.bro Type: application/octet-stream Size: 49091 bytes Desc: not available Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170107/367e83fc/attachment-0001.obj From hovsep.sanjay.levi at gmail.com Fri Jan 6 16:42:35 2017 From: hovsep.sanjay.levi at gmail.com (Hovsep Levi) Date: Sat, 7 Jan 2017 00:42:35 +0000 Subject: [Bro] Bro cluster requirements and manager logging backlog bug In-Reply-To: References: <39591A79-2357-45E1-AFE5-7009E764326E@illinois.edu> <4A126664-FC0D-465B-896C-0BD93809CE33@illinois.edu> <6EDF7C86-443B-4441-8A74-00C09A653845@illinois.edu> <7A125A59-5577-4BC4-BDCB-5DD4F27E7BFD@illinois.edu> Message-ID: Here's the 8-logger cluster-layout. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170107/8c4c8e49/attachment-0001.html -------------- next part -------------- A non-text attachment was scrubbed... Name: cluster-layout__8-loggers.bro Type: application/octet-stream Size: 49613 bytes Desc: not available Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170107/8c4c8e49/attachment-0001.obj From hovsep.sanjay.levi at gmail.com Fri Jan 6 17:07:43 2017 From: hovsep.sanjay.levi at gmail.com (Hovsep Levi) Date: Sat, 7 Jan 2017 01:07:43 +0000 Subject: [Bro] Bro cluster requirements and manager logging backlog bug In-Reply-To: References: Message-ID: Actually file rotation does work but it's prone to fail because of a timestamp collision. Each rotated file is named based on the timestamp when the rotation started.. so they are about 10-20 seconds different in name. (ex: x509.22:51:59.. x509.22:52:20.. x509.22:52:30). I guess the fix would be to change the filenames relative to each logger, ex: "logger-1_x509..." or something more clever like merging all logger files into a single zip file. A cluster-layout for 2 loggers is attached. I don't think there's anything to fix here based on the comments below. When I configure 8 loggers only 3 loggers are working. (logger-3, logger-4, and logger-8). 
I restarted the cluster and this time 5 of the loggers are working. (2,3,4,6,8). Still looking into why this happens. This problem would affect the Kafka export since each logger would be exporting. Restarting the failed loggers didn't fix the log flow. It looks like they are associating with the assigned logger correctly after startup and there's nothing indicative in the worker logs stderr or stdout. >From logger-1/communication.log after restarting logger-1 post-cluster startup: 1483746743.134338 logger-1 parent - - - info [#10005/10.1.1.2:51512] peer sent class "worker-1-8" 1483746743.134338 logger-1 parent - - - info [#10005/10.1.1.2:51512] phase: handshake 1483746743.135891 logger-1 child - - - info [#10006/10.1.1.3:17887] accepted clear connection 1483746743.137351 logger-1 parent - - - info [#10006/10.1.1.3:17887] added peer 1483746743.137351 logger-1 parent - - - info [#10006/10.1.1.3:17887] peer connected 1483746743.137351 logger-1 parent - - - info [#10006/10.1.1.3:17887] phase: version 1483746743.137351 logger-1 script - - - info connection established 1483746743.139263 logger-1 parent - - - info [#10006/10.1.1.3:17887] peer sent class "worker-3-12" 1483746743.139263 logger-1 parent - - - info [#10006/10.1.1.3:17887] phase: handshake -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170107/fd280d09/attachment.html From hovsep.sanjay.levi at gmail.com Fri Jan 6 17:08:53 2017 From: hovsep.sanjay.levi at gmail.com (Hovsep Levi) Date: Sat, 7 Jan 2017 01:08:53 +0000 Subject: [Bro] Bro cluster requirements and manager logging backlog bug In-Reply-To: References: Message-ID: 2-logger cluster layout attached. (mailing list limits the message sizes to 100k). -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170107/ecc1f26c/attachment-0001.html -------------- next part -------------- A non-text attachment was scrubbed... Name: cluster-layout__2-loggers.bro Type: application/octet-stream Size: 49091 bytes Desc: not available Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170107/ecc1f26c/attachment-0001.obj From hovsep.sanjay.levi at gmail.com Fri Jan 6 17:31:46 2017 From: hovsep.sanjay.levi at gmail.com (Hovsep Levi) Date: Sat, 7 Jan 2017 01:31:46 +0000 Subject: [Bro] Bro cluster requirements and manager logging backlog bug In-Reply-To: References: Message-ID: It also appears that not all workers are connecting to the loggers. This time after restarting 6 of 8 loggers are active but only 18 workers are actively sending data. [bro at mgr /opt/bro]$ grep worker spool/logger-*/communication.log | awk '{print $2}' | sort -u |grep -v logger | wc -l 18 -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170107/a2a8e8cf/attachment.html From hovsep.sanjay.levi at gmail.com Fri Jan 6 18:20:56 2017 From: hovsep.sanjay.levi at gmail.com (Hovsep Levi) Date: Sat, 7 Jan 2017 02:20:56 +0000 Subject: [Bro] Bro cluster requirements and manager logging backlog bug In-Reply-To: References: Message-ID: I went back to testing Kafka only with logs to disk disabled and it seems to work fine with a single logger, the memory usage is stable but it will take a few days of testing to be sure. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170107/b3162193/attachment.html From jason_haar at trimble.com Fri Jan 6 19:47:42 2017 From: jason_haar at trimble.com (Jason Haar) Date: Sat, 7 Jan 2017 16:47:42 +1300 Subject: [Bro] does bro-ids support parsing QUIC? Message-ID: Hey there I'm using the ssl.log files to augment our proxy logs (we have transparent proxy on port 80, but I believe TLS intercept has no future, so I'm using bro-ids to capture tcp/443 SNI data - as it's better than doing nothing) Works well - but I don't think QUIC is supported? Any chance of that being supported - same outcome as HTTPS: just after the SNI data... FYI: QUIC is basically HTTP/2 over UDP -- Cheers Jason Haar Information Security Manager, Trimble Navigation Ltd. Phone: +1 408 481 8171 PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1 -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170107/98469df8/attachment.html From johanna at icir.org Fri Jan 6 22:57:50 2017 From: johanna at icir.org (Johanna Amann) Date: Sat, 07 Jan 2017 07:57:50 +0100 Subject: [Bro] does bro-ids support parsing QUIC? In-Reply-To: References: Message-ID: <47E43800-27CB-444F-B196-EF4FBC6CB67A@icir.org> Hello Jason, > I'm using the ssl.log files to augment our proxy logs (we have > transparent > proxy on port 80, but I believe TLS intercept has no future, so I'm > using > bro-ids to capture tcp/443 SNI data - as it's better than doing > nothing) > > Works well - but I don't think QUIC is supported? Any chance of that > being > supported - same outcome as HTTPS: just after the SNI data... No, it is not supported. There is a chance of it being supported, but if that happens it is likely not going to happen in the very near term (I looked into it a bit ago and would like to add it, but I am quite a bit short of time at the moment). > FYI: QUIC is basically HTTP/2 over UDP While that certainly is true from an outcome point of view, it sadly is not quite true from a protocol point of view (HTTP/2 is just TLS, QUIC does its own thing everywhere, including having special compression for cleartext stuff if I remember it correctly - that is a bit of work...). Johanna From jlay at slave-tothe-box.net Sat Jan 7 04:25:39 2017 From: jlay at slave-tothe-box.net (James Lay) Date: Sat, 07 Jan 2017 05:25:39 -0700 Subject: [Bro] does bro-ids support parsing QUIC? In-Reply-To: <47E43800-27CB-444F-B196-EF4FBC6CB67A@icir.org> References: <47E43800-27CB-444F-B196-EF4FBC6CB67A@icir.org> Message-ID: <1483791939.2614.1.camel@slave-tothe-box.net> On Sat, 2017-01-07 at 07:57 +0100, Johanna Amann wrote: > Hello Jason, > > > > > I'm using the ssl.log files to augment our proxy logs (we have? > > transparent > > proxy on port 80, but I believe TLS intercept has no future, so > > I'm? > > using > > bro-ids to capture tcp/443 SNI data - as it's better than doing? > > nothing) > > > > Works well - but I don't think QUIC is supported? Any chance of > > that? > > being > > supported - same outcome as HTTPS: just after the SNI data... > No, it is not supported. There is a chance of it being supported, but > if? > that happens it is likely not going to happen in the very near term > (I? > looked into it a bit ago and would like to add it, but I am quite a > bit? > short of time at the moment). > > > > > FYI: QUIC is basically HTTP/2 over UDP > While that certainly is true from an outcome point of view, it sadly > is? 
> not quite true from a protocol point of view (HTTP/2 is just TLS, > QUIC? > does its own thing everywhere, including having special compression > for? > cleartext stuff if I remember it correctly - that is a bit of > work...). > > Johanna > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro You can use protosigs (https://github.com/broala/bro-protosigs) to catch QUIC: signature protosig_ssl_udpquic { ? ip-proto == udp ? dst-port ==443 ? payload /.*\x51\x30\x33/ ? eval ProtoSig::match } signature protosig_ssl_tcpquic { ? ip-proto == tcp ? dst-port ==443 ? payload /\x31\x51\x54\x56/ ? eval ProtoSig::match } James -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170107/0781c642/attachment.html From jazoff at illinois.edu Sat Jan 7 07:33:52 2017 From: jazoff at illinois.edu (Azoff, Justin S) Date: Sat, 7 Jan 2017 15:33:52 +0000 Subject: [Bro] Bro cluster requirements and manager logging backlog bug In-Reply-To: References: <39591A79-2357-45E1-AFE5-7009E764326E@illinois.edu> <4A126664-FC0D-465B-896C-0BD93809CE33@illinois.edu> <6EDF7C86-443B-4441-8A74-00C09A653845@illinois.edu> <7A125A59-5577-4BC4-BDCB-5DD4F27E7BFD@illinois.edu> Message-ID: <59AB5C74-61A5-46A8-9AFF-5458022F8C01@illinois.edu> > On Jan 6, 2017, at 7:41 PM, Hovsep Levi wrote: > > > When I configure 8 loggers only 3 loggers are working. (logger-3, logger-4, and logger-8). I restarted the cluster and this time 5 of the loggers are working. (2,3,4,6,8). Still looking into why this happens. Running broctl print Communication::nodes May shed some light on that. If it times out you can do broctl print Communication::nodes logger-1 broctl print Communication::nodes logger-2 broctl print Communication::nodes worker-1-1 broctl print Communication::nodes worker-1-2 broctl print Communication::nodes worker-1-3 to display it from individual nodes. You may also just want to try running tcpdump when the workers start up, you should see tcp connections to 10.1.1.1 on ports 47761 and 47762 from the worker nodes. -- - Justin Azoff From gordonjamesr at gmail.com Mon Jan 9 07:27:31 2017 From: gordonjamesr at gmail.com (James Gordon) Date: Mon, 9 Jan 2017 07:27:31 -0800 Subject: [Bro] Writing logs to both ACII and JSON Message-ID: Hello all, Apologies in advance if this is an uninformed question - is it possible to configure Bro to write logs to both ASCII and JSON outputs (in different directories, preferably)? There's another active thread on the mailing list at the moment about using multiple logger instances in Bro 2.5 which got me thinking that maybe this problem could be addressed by running multiple logger instances - one for ASCII logs, and one for JSON. If I understand the architecture correctly, I'd love to see a single manager instance duplicate all the data to send to both logger instances, and have one write ASCII and one to JSON. Is this a possibility? If so, how would I go about configuring this? I should note that my bro-knowledge is pretty limited to loading scripts from git hub and some very basic whitelisting, so unfortunately I'm not very comfortable modifying or writing bro code. 
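As a rough illustration of the approach that comes up later in this thread: rather than a second logger instance, a second filter can be attached to each log stream and switched to JSON, while the default ASCII filter keeps writing as before. A minimal sketch for a single stream follows; the output path is made up and it assumes the ASCII writer honors a per-filter "use_json" option:

@load base/protocols/conn

# Minimal sketch (not the add-json script referenced below): duplicate
# conn.log into a JSON copy next to the default ASCII log. The path is
# made up and the "use_json" per-filter option is an assumption here.
event bro_init()
    {
    local f = copy(Log::get_filter(Conn::LOG, "default"));
    f$name = "conn-json";
    f$path = "/nsm/bro/logs/json/conn-json";
    f$config = table(["use_json"] = "T");
    Log::add_filter(Conn::LOG, f);
    }

The same pattern can be repeated per stream, which is essentially what the script linked in the reply below automates.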
My organization relies on ASCII logs for plain text retention and all our normal 'nix plain text searching utilities, but I've been experimenting with Graylog and importing Bro JSON logs to Graylog is too easy and flexible for us to find the time to write grok parsers for our ASCII logs. We're not prepared to not write logs to ASCII in prod, so I'm hopeful that there's an almost easy way to get logs in both formats. For additional context we run Bro on Security Onion, and we're currently running 2.4.1 in prod but plan to upgrade to 2.5 soon. I do have a test environment with Security Onion and Bro 2.5 available to me. Any advice / steps on how to achieve this would be much appreciated! Thanks, James Gordon -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170109/5f843910/attachment.html From jan.grashoefer at gmail.com Mon Jan 9 08:03:58 2017 From: jan.grashoefer at gmail.com (=?UTF-8?Q?Jan_Grash=c3=b6fer?=) Date: Mon, 9 Jan 2017 17:03:58 +0100 Subject: [Bro] Writing logs to both ACII and JSON In-Reply-To: References: Message-ID: <29c4cedb-7ba3-13a1-7a80-2eb1d6b5d2f6@gmail.com> Hi James, > Apologies in advance if this is an uninformed question - is it possible to > configure Bro to write logs to both ASCII and JSON outputs (in different > directories, preferably)? some time ago I have written a small script that should fit your needs: https://gist.github.com/J-Gras/f9f86828f9e9d9c0b8f0908bc3573bb0 Using path_json you should also be able to log into a different directory. I hope this helps, Jan From gfaulkner.nsm at gmail.com Mon Jan 9 13:11:40 2017 From: gfaulkner.nsm at gmail.com (Gary Faulkner) Date: Mon, 9 Jan 2017 15:11:40 -0600 Subject: [Bro] postfix instead of sendmail for bro emails Message-ID: I seem to recall a conversation about substituting postfix for sendmail for sending email from bro/broctl, but now I can't find it. Is there anything special that needed to be done to use postfix instead, or was sendmail a hard requirement? I'm getting some pushback on using sendmail as listed in the Bro docs as the system admins prefer to use postfix. From jazoff at illinois.edu Mon Jan 9 13:47:56 2017 From: jazoff at illinois.edu (Azoff, Justin S) Date: Mon, 9 Jan 2017 21:47:56 +0000 Subject: [Bro] postfix instead of sendmail for bro emails In-Reply-To: References: Message-ID: <1B9D98DE-4D7B-40F0-9D5F-CFF3186A1702@illinois.edu> > On Jan 9, 2017, at 4:11 PM, Gary Faulkner wrote: > > I seem to recall a conversation about substituting postfix for sendmail > for sending email from bro/broctl, but now I can't find it. Is there > anything special that needed to be done to use postfix instead, or was > sendmail a hard requirement? I'm getting some pushback on using sendmail > as listed in the Bro docs as the system admins prefer to use postfix. Bro does not require sendmail, it just requires the 'sendmail' binary. Postfix (and every other smtp server) provides a compatible binary. -- - Justin Azoff From jdopheid at illinois.edu Tue Jan 10 10:41:28 2017 From: jdopheid at illinois.edu (Dopheide, Jeannette M) Date: Tue, 10 Jan 2017 18:41:28 +0000 Subject: [Bro] Bro4Pros 2017 Agenda Message-ID: Attention Bro4Pros 2017 attendees, the agenda has been posted to our website [1]. Here it is below: 8:00am Breakfast 8:50am Welcome and Introduction 9:00am SSL fingerprinting ? John Althouse & Jeff Atkinson, Salesforce 9:45am Netmap ? Seth Hall, ICSI/Corelight 10:30am Break 10:45am Organizational development and Bro ? 
Adam Kniffen, Cisco Systems 11:30am OSQuery ? Steffen Haas, University of Hamburg 12:15pm Lunch 1:30pm This year in the Lab ? Aashish Sharma & Jay Krous, Berkeley Lab 2:15pm A High-speed multi-tenant Bro solution using SR-IOV & containers ? Ed Sealing, Sealing Technologies 3:00pm Break 3:15pm CAF and Bro ? Dominik Charousset, Hamburg University of Applied Sciences 4:00pm Using VLAN tags to physically map traffic flows ? Dilip Madathil, Reservoir Labs 5:00pm Happy hour, light refreshments 6:00pm Conference ends [1] https://www.bro.org/community/bro4pros2017.html#agenda We look forward to seeing you in San Francisco! ------ Jeannette Dopheide Training and Outreach Coordinator National Center for Supercomputing Applications University of Illinois at Urbana-Champaign From andrew.dellana at bayer.com Tue Jan 10 11:35:12 2017 From: andrew.dellana at bayer.com (Andrew Dellana) Date: Tue, 10 Jan 2017 19:35:12 +0000 Subject: [Bro] email alerts Message-ID: <8d513f0cd92643068e978de24ffda785@moxde9.na.bayer.cnb> Hello, We have email alerts configured (connection summary and dropped packets) to be emailed on the hour they worked for several weeks, but over the past few days we have not received any. Is there a reason for these not showing up? Freundliche Gr??e / Best regards, Andrew Dellana Intern ________________________ Bayer: Science For A Better Life Bayer U.S. LLC Country Platform US Scientific Computing Competence Ctr Bayer Road 15205 Pittsburgh (PA), United States Tel: +1 412 777-2043 E-mail: andrew.dellana at bayer.com -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170110/d99fb16a/attachment.html From dnthayer at illinois.edu Tue Jan 10 12:04:13 2017 From: dnthayer at illinois.edu (Daniel Thayer) Date: Tue, 10 Jan 2017 14:04:13 -0600 Subject: [Bro] email alerts In-Reply-To: <8d513f0cd92643068e978de24ffda785@moxde9.na.bayer.cnb> References: <8d513f0cd92643068e978de24ffda785@moxde9.na.bayer.cnb> Message-ID: <6d7ad967-a14a-d9c2-0481-fa3e99626aa1@illinois.edu> On 1/10/17 1:35 PM, Andrew Dellana wrote: > Hello, > > > > We have email alerts configured (connection summary and dropped packets) > to be emailed on the hour they worked for several weeks, but over the > past few days we have not received any. Is there a reason for these not > showing up? > Did you check if the connection summary reports are being created? The connection summary reports are stored along with your other log files. Try this: ls -l /usr/local/bro/logs/2017-01-10/conn-summary* From albertociolini92 at gmail.com Wed Jan 11 06:26:19 2017 From: albertociolini92 at gmail.com (Alberto Ciolini) Date: Wed, 11 Jan 2017 14:26:19 +0000 Subject: [Bro] Bro and Wireshark Message-ID: I was wondering if is there a way to take packets with bro from wireshark? -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170111/db0d5d32/attachment.html From jan.grashoefer at gmail.com Wed Jan 11 06:31:57 2017 From: jan.grashoefer at gmail.com (=?UTF-8?Q?Jan_Grash=c3=b6fer?=) Date: Wed, 11 Jan 2017 15:31:57 +0100 Subject: [Bro] Bro and Wireshark In-Reply-To: References: Message-ID: <28de4887-9e3a-842f-6c07-aeccbe65b9db@gmail.com> > I was wondering if is there a way to take packets with bro from wireshark? 
Bro can read pcap files: https://www.bro.org/sphinx/quickstart/#reading-packet-capture-pcap-files Jan From neslog at gmail.com Wed Jan 11 06:42:55 2017 From: neslog at gmail.com (Neslog) Date: Wed, 11 Jan 2017 09:42:55 -0500 Subject: [Bro] Bro and Wireshark In-Reply-To: <28de4887-9e3a-842f-6c07-aeccbe65b9db@gmail.com> References: <28de4887-9e3a-842f-6c07-aeccbe65b9db@gmail.com> Message-ID: Be sure to not save as pcap-ng. On Jan 11, 2017 9:40 AM, "Jan Grash?fer" wrote: > > I was wondering if is there a way to take packets with bro from > wireshark? > > Bro can read pcap files: > https://www.bro.org/sphinx/quickstart/#reading-packet-capture-pcap-files > > Jan > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170111/0e30ebbd/attachment.html From hongdal at g.clemson.edu Wed Jan 11 13:17:19 2017 From: hongdal at g.clemson.edu (Hongda Li) Date: Wed, 11 Jan 2017 16:17:19 -0500 Subject: [Bro] Dataset for Bro evaluation Message-ID: Hello all, I would like to do some evaluation of Bro. My plan is to: (1) Replay network traffic dataset to Bro and observe its CPU/memory usage. (2) Replay network traffic dataset to Bro and observe the throughput achieved by Bro without dropping packets. (3) Replay network traffic dataset to Bro with different configurations (e.g., enable some of the scripts) and observe the CPU/memory usage, throughput, etc. I guess datasets without payloads (e.g., LBNL/ICSI enterprise traces) are not suitable for my plan, since the performance of Bro depends on the content of the traffic. But it is difficult to get access to the traffic datasets with payloads due to privacy issues. Does anybody have any suggestions to help accomplish the tasks listed in the above plan? Also, if necessary, I want to start a thread here discussing how you (researchers, operators and developers) effectively evaluate Bro. Appreciate any comments. Best regards, Hongda ---------------------- Hongda Li, Graduate Research Assistant Division of Computer Science, School of Computing Clemson University Email: hongdal at clemson.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170111/f8dcff4f/attachment.html From gordonjamesr at gmail.com Wed Jan 11 13:31:46 2017 From: gordonjamesr at gmail.com (James Gordon) Date: Wed, 11 Jan 2017 13:31:46 -0800 Subject: [Bro] Writing logs to both ACII and JSON In-Reply-To: <029d3bd1-747b-8d79-28e0-d730a1fb1d67@gmail.com> References: <29c4cedb-7ba3-13a1-7a80-2eb1d6b5d2f6@gmail.com> <45295951-87e6-fd40-840f-15cb236ef4b7@gmail.com> <029d3bd1-747b-8d79-28e0-d730a1fb1d67@gmail.com> Message-ID: Jan, + re-adding the bro mailing list because email is hard and I accidentally removed it - and in case there's a bug impacting this script in v 2.5, I tested this script on my physical security onion box, as well a security onion VM and a CentOS VM both with fresh installs of Bro 2.5. I tested with live network traffic and with a pcap and consistently get different results in my JSON log dir every time I run bro against the pcap. When I run bro against a pcap, I get the following error: "expression error in /opt/bro/share/bro/test/./add-json.bro, line 34: field value missing [Log::filter$path]" It looks like that line refers back to the json path. 
I have the json path defined as: const path_json = "/nsm/bro/logs/json/" &redef; - is this the correct way to define the log path? Here's some examples of the inconsistencies I see (this is reproduceable on all three systems). I'll run the same pcap through Bro twice and we'll get a different number of JSON logs, and different entries in the files - but ASCII logs always turn out the same. root at sensor:/home/sensor/test# /opt/bro/bin/bro -r test.pcap /opt/bro/share/bro/site/local.bro expression error in /opt/bro/share/bro/test/./add-json.bro, line 34: field value missing [Log::filter$path] root at sensor:/home/sensor/test# ls capture_loss.log dhcp.log files.log loaded_scripts.log packet_filter.log ssl.log test.pcap weird.log conn.log dns.log http.log notice.log reporter.log stats.log tunnel.log x509.log root at sensor:/home/sensor/test# ls | wc -l 16 root at sensor:/home/sensor/test# cat conn.log | wc -l 1631 root at sensor:/home/sensor/test# ls /nsm/bro/logs/json/ dhcp-json.log tunnel-json.log x509-json.log root at sensor:/home/sensor/test# ls /nsm/bro/logs/json/ | wc -l 3 As you can see there was no JSON conn log generated - so i'll compare the dhcp logs: root at sensor:/home/sensor/test# cat dhcp.log | wc -l 11 root at sensor:/home/sensor/test# cat /nsm/bro/logs/json/dhcp-json.log | wc -l 2 Some of the lines (8) in the ASCII file are headers so this log only missed one entry. It still missed logging all 1631 connections in the pcap to conn.log. I'll clear out the logs now and try again, and we'll get a different number types of json logs created. root at sensor:/home/sensor/test# rm *.log root at sensor:/home/sensor/test# rm /nsm/bro/logs/json/* root at sensor:/home/sensor/test# /opt/bro/bin/bro -r test.pcap /opt/bro/share/bro/site/local.bro expression error in /opt/bro/share/bro/test/./add-json.bro, line 34: field value missing [Log::filter$path] root at sensor:/home/sensor/test# ls capture_loss.log dhcp.log files.log loaded_scripts.log packet_filter.log ssl.log test.pcap weird.log conn.log dns.log http.log notice.log reporter.log stats.log tunnel.log x509.log root at sensor:/home/sensor/test# ls | wc -l 16 root at sensor:/home/sensor/test# cat conn.log | wc -l 1631 root at sensor:/home/sensor/test# ls /nsm/bro/logs/json/ capture_loss-json.log files-json.log packet_filter-json.log weird-json.log conn-json.log loaded_scripts-json.log reporter-json.log x509-json.log dhcp-json.log notice-json.log tunnel-json.log root at sensor:/home/sensor/test# ls /nsm/bro/logs/json/ | wc -l 11 root at sensor:/home/sensor/test# cat /nsm/bro/logs/json/conn-json.log | wc -l 1622 This time it logged all the connections, but it failed to even create http, ssl, stats, or dns json logs. This script is exactly the functionality I need, I just can't seem to get it working correctly. I don't begin to understand why I get different results every time I run the same pcap through Bro. Thanks! James Gordon On Tue, Jan 10, 2017 at 2:21 AM, Jan Grash?fer wrote: > > The logs don't ever seem to equal out. I noticed this first with the > tunnel > > log because tunnel.log was present in the ascii log dir, but not in the > > json log dir. However the issue seems consistent across all my log files > - > > some events just aren't making it to the json files. > > In this case I would try to reproduce this issue consistently. You could > first try it by reading a pcap into Bro. I doubt that will cause > different ASCII and JSON logs but it might be worth a try. 
If that does > work, you might want to replay traffic into your setup. You could try > enable only JSON, only ASCII and both and see which ones differ. > > Looking at the script I linked, there is no logical explanation for this > behavior in the code. Therefore I guess its a deployment related issue > or maybe even some kind of bug. Please, let me know about your findings. > > Cheers, > Jan > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170111/32193168/attachment.html From vladg at illinois.edu Wed Jan 11 13:49:51 2017 From: vladg at illinois.edu (Vlad Grigorescu) Date: Wed, 11 Jan 2017 15:49:51 -0600 Subject: [Bro] Dataset for Bro evaluation In-Reply-To: References: Message-ID: Hongda Li writes: > But it is difficult to get access to the traffic datasets with payloads due > to privacy issues. > Does anybody have any suggestions to help accomplish the tasks listed in > the above plan? > > Also, if necessary, I want to start a thread here discussing how you > (researchers, operators and developers) effectively evaluate Bro. I would recommend creating your own dataset and using that. It will be the best way to evaluate performance based on your particular traffic. --Vlad -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 800 bytes Desc: not available Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170111/a7430737/attachment.bin From hongdal at g.clemson.edu Wed Jan 11 14:10:49 2017 From: hongdal at g.clemson.edu (Hongda Li) Date: Wed, 11 Jan 2017 17:10:49 -0500 Subject: [Bro] Dataset for Bro evaluation In-Reply-To: References: Message-ID: Thanks Vlad. > I would recommend creating your own dataset and using that. It will be > the best way to evaluate performance based on your particular traffic. Using my own creating dataset would be a good idea. However, creating large-scale dataset for recording is sometimes time consuming and costly. Further, the method used for trace generation may produce simplistic workloads. Is there any suggestion regarding effectively creating large-scale dataset? E.g., any methods, tools and platforms that are useful? Best regards, Hongda ---------------------- Hongda Li, Graduate Research Assistant Division of Computer Science, School of Computing Clemson University Email: hongdal at clemson.edu On Wed, Jan 11, 2017 at 4:49 PM, Vlad Grigorescu wrote: > Hongda Li writes: > > > But it is difficult to get access to the traffic datasets with payloads > due > > to privacy issues. > > Does anybody have any suggestions to help accomplish the tasks listed in > > the above plan? > > > > Also, if necessary, I want to start a thread here discussing how you > > (researchers, operators and developers) effectively evaluate Bro. > > I would recommend creating your own dataset and using that. It will be > the best way to evaluate performance based on your particular traffic. > > --Vlad > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170111/d8a0ade5/attachment-0001.html From jedwards2728 at gmail.com Wed Jan 11 15:10:44 2017 From: jedwards2728 at gmail.com (John Edwards) Date: Thu, 12 Jan 2017 10:10:44 +1100 Subject: [Bro] Downgrade Bro from 2.5 to 2.4 Message-ID: Hi, Can someone point me to an ubuntu .deb 2.4 bro package? 
I have upgraded our production sensor and it has broken the Splunk TA for Bro and HTTP log isnt ingesting anymore. quickest way is to downgrade back to 2.4. Anyone know where i can find it? Seems everywhere i have looked the repos have the 2.5 copy only Cheers, John -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170112/5c605324/attachment.html From jazoff at illinois.edu Wed Jan 11 15:12:54 2017 From: jazoff at illinois.edu (Azoff, Justin S) Date: Wed, 11 Jan 2017 23:12:54 +0000 Subject: [Bro] Writing logs to both ACII and JSON In-Reply-To: References: <29c4cedb-7ba3-13a1-7a80-2eb1d6b5d2f6@gmail.com> <45295951-87e6-fd40-840f-15cb236ef4b7@gmail.com> <029d3bd1-747b-8d79-28e0-d730a1fb1d67@gmail.com> Message-ID: > When I run bro against a pcap, I get the following error: > "expression error in /opt/bro/share/bro/test/./add-json.bro, line 34: field value missing [Log::filter$path]" > > It looks like that line refers back to the json path. I have the json path defined as: const path_json = "/nsm/bro/logs/json/" &redef; - is this the correct way to define the log path? > No.. as the error message says there is a problem with filter$path being missing, not path_json. > This script is exactly the functionality I need, I just can't seem to get it working correctly. I don't begin to understand why I get different results every time I run the same pcap through Bro. Because the script does this: for ( id in Log::active_streams ) { if ( (enable_all_json || (id in include_json)) && (id !in exclude_json) ) { local filter = Log::get_filter(id, "default"); filter$name = string_cat(filter$name, "_json"); filter$path = string_cat(path_json, filter$path, "-json"); filter$config = config_json; filter$interv = interv_json; Log::add_filter(id, filter); } } and Log::active_streams is a hash table populated at startup and the iteration order is random: $ cat b.bro ;echo one:;bro b.bro |head; echo two:; bro b.bro |head event bro_init() { for ( id in Log::active_streams ) print id; } one: Weird::LOG PacketFilter::LOG Conn::LOG NetControl::SHUNT DNS::LOG FTP::LOG SIP::LOG SNMP::LOG Syslog::LOG DPD::LOG two: DPD::LOG Software::LOG IRC::LOG RFB::LOG SSL::LOG KRB::LOG SOCKS::LOG Syslog::LOG Log::UNKNOWN DHCP::LOG It's failing on one of your log files because filter$path is not set. Once that happens the event aborts and everything after that does not get json added. The loop needs to check filter?$path before trying to use it. You also probably have something broken (or at least weird) in your configuration because this error does not occur on a stock 2.5 config, so it's probably useful to figure out which of your logs has no path for some reason. -- - Justin Azoff From jlay at slave-tothe-box.net Wed Jan 11 15:14:09 2017 From: jlay at slave-tothe-box.net (James Lay) Date: Wed, 11 Jan 2017 16:14:09 -0700 Subject: [Bro] Downgrade Bro from 2.5 to 2.4 In-Reply-To: References: Message-ID: <35e5d291e0cc4fb22c469804ae214aa2@localhost> On 2017-01-11 16:10, John Edwards wrote: > Hi, > > Can someone point me to an ubuntu .deb 2.4 bro package? I have > upgraded our production sensor and it has broken the Splunk TA for Bro > and HTTP log isnt ingesting anymore. quickest way is to downgrade > back to 2.4. Anyone know where i can find it? 
Seems everywhere i have > looked the repos have the 2.5 copy only > > Cheers, > John > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro Might still be in your apt cache at: /var/cache/apt/archives/ James From jan.grashoefer at gmail.com Wed Jan 11 15:22:08 2017 From: jan.grashoefer at gmail.com (=?UTF-8?Q?Jan_Grash=c3=b6fer?=) Date: Thu, 12 Jan 2017 00:22:08 +0100 Subject: [Bro] Writing logs to both ACII and JSON In-Reply-To: References: <29c4cedb-7ba3-13a1-7a80-2eb1d6b5d2f6@gmail.com> <45295951-87e6-fd40-840f-15cb236ef4b7@gmail.com> <029d3bd1-747b-8d79-28e0-d730a1fb1d67@gmail.com> Message-ID: <4835d2a1-cdf0-7997-28dc-3c7c07ab9e73@gmail.com> > When I run bro against a pcap, I get the following error: > "expression error in /opt/bro/share/bro/test/./add-json.bro, line 34: field > value missing [Log::filter$path]" I've just tested the script using 2.4.1 and 2.5 on try.bro.org (http://try.bro.org/#/trybro/saved/115989) and locally using 2.5 with a different path for JSON-logs. Unfortunately I am unable to reproduce this error. Maybe we can shed some light on this if we know which log doesn't provide a path. Can you try to replace line 34 with: if ( filter?$path ) filter$path = string_cat(path_json, filter$path, "-json"); else Reporter::error(fmt("Path missing for %s", id)); That should provide some hint on which logs don't define a filter path. If you can share your test pcap that might be of interest, too. One thing I could imagine would be some kind of timing issue. Maybe playing with the events &priority has influence on your results. Jan From slagell at illinois.edu Wed Jan 11 16:16:17 2017 From: slagell at illinois.edu (Slagell, Adam J) Date: Thu, 12 Jan 2017 00:16:17 +0000 Subject: [Bro] Segmentation fault while using own signature. Message-ID: <6773E4E7-E864-46CF-8E40-B107F3161EEB@illinois.edu> Not sure, it deserves a response and ticket if no one has done that. > On Jan 3, 2017, at 4:12 PM, fatema bannatwala wrote: > > Hi all, > > So I have a case where if I use following regex in sig file, it works, but when I edit it and make it more strict I get segmentation fault in like 5 minutes after bro gets normally started: > > The working version: > > signature rootkit-potential { > payload /.*[0-9\.]{7,15}\|[0-9]{1,5}.*/ > event "Potential rootkit" > tcp-state originator > } > > signature rootkit-malware { > payload /.*SSH-2\.5-OpenSSH_6\.1\.9.[0-9\.]{7,15}\|\d{1,5}.*/ > event "rootkit malware" > tcp-state originator > } > > When I change regex to be more restrictive, Seg fault occurs: > > signature rootkit-potential { > payload /.*(?:\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\|\d{1,5}).*/ > event "Potential rootkit" > tcp-state originator > } > > signature rootkit-malware { > payload /.*SSH-2\.5-OpenSSH_6\.1\.9.(?:\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\|\d{1,5}).*/ > event "rootkit malware" > tcp-state originator > } > > Any idea what might be going wrong? > > Thanks, > Fatema. > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From ralph.holz.tech at gmail.com Wed Jan 11 17:04:53 2017 From: ralph.holz.tech at gmail.com (Ralph Holz) Date: Thu, 12 Jan 2017 01:04:53 +0000 Subject: [Bro] Core affinity on AMD Opteron 6276 Message-ID: Hi everyone, I've been told this is the right place to share experience and maybe a script for our Opteron setup, and get feedback if this is the right thing to do. 
We're trialling Bro on an AMD Opteron 6276 in our campus network and did not find much useful information on the net on how to configure this particular setup. It's running fine now and we're waiting for students to return to uni so we can test under a higher load (they need to watch some videos...). Anyway, the 6276 is a strange machine in that AMD markets it as a 64-core machine - 4 sockets, 16 cores each. However, cores are paired within one socket, and each pair shares resources and data lines: CPU freq regulation, FPU, L2+L3, and instruction fetch and decode circuitry (!). This makes it quite unusual - under Linux, you will find that different methods to count the cores give you a different number, sometimes 32, sometimes 64. For our experiments, we chose to use /proc/cpuinfo to determine which cores are pairs - they should share "physical ID" and "core ID". The attached hacky script generates a node_cluster.cfg that places 32 workers on 32 cores that should not be paired (logger, manager, proxy are going to sit on another host). As far as I can tell, we are not experiencing packet loss at all with the out-of-the-box scripts loaded, but we're at just 2Gbit load atm, and come the new semester we'll know more. The moment I add another core, I have one worker that is experiencing loss, so I am guessing this is the upper limit. I'd be happy to receive feedback if this is a reasonable setup and the right thing to do. Ralph -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170112/1751bbe9/attachment-0001.html -------------- next part -------------- A non-text attachment was scrubbed... Name: create_config_from_file.py Type: text/x-python-script Size: 1392 bytes Desc: not available Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170112/1751bbe9/attachment-0001.bin From gordonjamesr at gmail.com Wed Jan 11 17:14:48 2017 From: gordonjamesr at gmail.com (James Gordon) Date: Wed, 11 Jan 2017 20:14:48 -0500 Subject: [Bro] Writing logs to both ACII and JSON In-Reply-To: <4835d2a1-cdf0-7997-28dc-3c7c07ab9e73@gmail.com> References: <29c4cedb-7ba3-13a1-7a80-2eb1d6b5d2f6@gmail.com> <45295951-87e6-fd40-840f-15cb236ef4b7@gmail.com> <029d3bd1-747b-8d79-28e0-d730a1fb1d67@gmail.com> <4835d2a1-cdf0-7997-28dc-3c7c07ab9e73@gmail.com> Message-ID: Looks like I was wrong about this being a 'standard' config - I fired this script up on a new VM and it worked with no issues. I pulled my local.bro off one of the machines I was testing with earlier and I had the SMB analyzer enabled. I enabled SMB on the new VM, added your lines from your last post and I have the following entries in reporter.log: 0.000000 Reporter::ERROR Path missing for SMB::MAPPING_LOG /usr/local/bro/share/bro/test/./add-json.bro, line 35 0.000000 Reporter::ERROR Path missing for SMB::CMD_LOG /usr/local/bro/share/bro/test/./add-json.bro, line 35 0.000000 Reporter::ERROR Path missing for SMB::FILES_LOG /usr/local/bro/share/bro/test/./add-json.bro, line 35 Any ideas on how to fix this (preferably), or hard exclude the SMB files that cause issues? I ran with SMB disabled for about half an hour on a *very* slow network and everything worked as expected. Threw some test pcaps that I pulled off a Security Onion machine and those also ran well and logged as expected. Thanks! 
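Putting the two pieces from this thread together, the filter loop quoted earlier plus Jan's filter?$path check, gives a version that simply skips streams like the SMB logs that ship without a default path instead of aborting mid-loop. A sketch only, with the JSON path hard-coded and the "use_json" option assumed rather than copied from the linked gist:

# Sketch combining the loop quoted earlier with the filter?$path guard.
# Streams without a default path (e.g. the SMB logs above) are skipped.
# The path and the "use_json" option are assumptions, not the gist's code.
event bro_init() &priority=-5
    {
    for ( id in Log::active_streams )
        {
        local filter = copy(Log::get_filter(id, "default"));
        if ( ! filter?$path )
            next;
        filter$name = string_cat(filter$name, "_json");
        filter$path = string_cat("/nsm/bro/logs/json/", filter$path, "-json");
        filter$config = table(["use_json"] = "T");
        Log::add_filter(id, filter);
        }
    }

With the guard in place the random iteration order of Log::active_streams no longer matters, which also accounts for the run-to-run differences described above.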
James Gordon

On Wed, Jan 11, 2017 at 6:22 PM, Jan Grash?fer wrote:

> > When I run bro against a pcap, I get the following error:
> > "expression error in /opt/bro/share/bro/test/./add-json.bro, line 34:
> field
> > value missing [Log::filter$path]"
>
> I've just tested the script using 2.4.1 and 2.5 on try.bro.org
> (http://try.bro.org/#/trybro/saved/115989) and locally using 2.5 with a
> different path for JSON-logs. Unfortunately I am unable to reproduce
> this error.
>
> Maybe we can shed some light on this if we know which log doesn't
> provide a path. Can you try to replace line 34 with:
>
> if ( filter?$path )
> filter$path = string_cat(path_json, filter$path, "-json");
> else
> Reporter::error(fmt("Path missing for %s", id));
>
> That should provide some hint on which logs don't define a filter path.
> If you can share your test pcap that might be of interest, too. One
> thing I could imagine would be some kind of timing issue. Maybe playing
> with the events &priority has influence on your results.
>
> Jan
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170111/b4d4584e/attachment.html

From jedwards2728 at gmail.com  Wed Jan 11 18:14:54 2017
From: jedwards2728 at gmail.com (John Edwards)
Date: Thu, 12 Jan 2017 13:14:54 +1100
Subject: [Bro] Bro Digest, Vol 129, Issue 19
In-Reply-To:
References:
Message-ID:

Bingo! got it, Thanks James
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170112/031ad3e8/attachment-0001.html From jan.grashoefer at gmail.com Thu Jan 12 02:00:59 2017 From: jan.grashoefer at gmail.com (=?UTF-8?Q?Jan_Grash=c3=b6fer?=) Date: Thu, 12 Jan 2017 11:00:59 +0100 Subject: [Bro] Writing logs to both ACII and JSON In-Reply-To: References: <29c4cedb-7ba3-13a1-7a80-2eb1d6b5d2f6@gmail.com> <45295951-87e6-fd40-840f-15cb236ef4b7@gmail.com> <029d3bd1-747b-8d79-28e0-d730a1fb1d67@gmail.com> <4835d2a1-cdf0-7997-28dc-3c7c07ab9e73@gmail.com> Message-ID: <0fe7e15e-003a-004d-09ee-0fcaa544e945@gmail.com> > 0.000000 Reporter::ERROR Path missing for SMB::MAPPING_LOG > /usr/local/bro/share/bro/test/./add-json.bro, > line 35 > > 0.000000 Reporter::ERROR Path missing for SMB::CMD_LOG > /usr/local/bro/share/bro/test/./add-json.bro, > line 35 > > 0.000000 Reporter::ERROR Path missing for SMB::FILES_LOG > /usr/local/bro/share/bro/test/./add-json.bro, > line 35 Using the SMB-Analyzer I was able to reproduce the issue: The SMB-Analyzer does not set path, which is indeed optional but used for all the other logs by convention. > Any ideas on how to fix this (preferably), or hard exclude the SMB files > that cause issues? I have fixed the script but I need some more testing (just noticed that path_func wasn't supported as well). For now, you can use exclude_json to exclude SMB::MAPPING_LOG, SMB::CMD_LOG and SMB::FILES_LOG. Jan From philosnef at gmail.com Thu Jan 12 04:20:12 2017 From: philosnef at gmail.com (erik clark) Date: Thu, 12 Jan 2017 07:20:12 -0500 Subject: [Bro] Core affinity on AMD Opteron 6276 Message-ID: Ralph, you may want to look back at the archives. Michal and I think Justin had posted an extensive discussion on how to identify and pin cpus. See: http://mailman.icsi.berkeley.edu/pipermail/bro/2016-October/010743.html The suggested lstopo is a very good way to enumerate your cores, as indicated in that thread. :) Also, regarding 32 workers, we are handling 6Gb/s traffic with af_packet with just 18 workers, minimum memory usage, but fairly high rate of cpu usage. Our drop rate is under .5% across all workers. Lastly, read the Bro bit here: https://www.sans.org/reading-room/whitepapers/intrusion/open-source-ids-high-performance-shootout-35772 We have found that this is indeed fairly accurate with regards to worker count and pps consumption by bro. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170112/5900120c/attachment.html From johanna at icir.org Thu Jan 12 04:44:36 2017 From: johanna at icir.org (Johanna Amann) Date: Thu, 12 Jan 2017 13:44:36 +0100 Subject: [Bro] Writing logs to both ACII and JSON In-Reply-To: <0fe7e15e-003a-004d-09ee-0fcaa544e945@gmail.com> References: <29c4cedb-7ba3-13a1-7a80-2eb1d6b5d2f6@gmail.com> <45295951-87e6-fd40-840f-15cb236ef4b7@gmail.com> <029d3bd1-747b-8d79-28e0-d730a1fb1d67@gmail.com> <4835d2a1-cdf0-7997-28dc-3c7c07ab9e73@gmail.com> <0fe7e15e-003a-004d-09ee-0fcaa544e945@gmail.com> Message-ID: <20170112124436.266gxkfljwvf2eqo@Beezling.fritz.box> On Thu, Jan 12, 2017 at 11:00:59AM +0100, Jan Grash?fer wrote: > > 0.000000 Reporter::ERROR Path missing for SMB::MAPPING_LOG > > /usr/local/bro/share/bro/test/./add-json.bro, > > line 35 > > > > 0.000000 Reporter::ERROR Path missing for SMB::CMD_LOG > > /usr/local/bro/share/bro/test/./add-json.bro, > > line 35 > > > > 0.000000 Reporter::ERROR Path missing for SMB::FILES_LOG > > /usr/local/bro/share/bro/test/./add-json.bro, > > line 35 > > Using the SMB-Analyzer I was able to reproduce the issue: The > SMB-Analyzer does not set path, which is indeed optional but used for > all the other logs by convention. Yup, you are right. This looks like an oversight, the path should have been set for all the create_stream calls. I will fix this in master in a few minutes - thanks for finding this :) Johanna From jan.grashoefer at gmail.com Thu Jan 12 06:31:47 2017 From: jan.grashoefer at gmail.com (=?UTF-8?Q?Jan_Grash=c3=b6fer?=) Date: Thu, 12 Jan 2017 15:31:47 +0100 Subject: [Bro] Writing logs to both ACII and JSON In-Reply-To: <20170112124436.266gxkfljwvf2eqo@Beezling.fritz.box> References: <29c4cedb-7ba3-13a1-7a80-2eb1d6b5d2f6@gmail.com> <45295951-87e6-fd40-840f-15cb236ef4b7@gmail.com> <029d3bd1-747b-8d79-28e0-d730a1fb1d67@gmail.com> <4835d2a1-cdf0-7997-28dc-3c7c07ab9e73@gmail.com> <0fe7e15e-003a-004d-09ee-0fcaa544e945@gmail.com> <20170112124436.266gxkfljwvf2eqo@Beezling.fritz.box> Message-ID: <0924ee65-bc31-7755-7408-7db9bbabf723@gmail.com> >> Using the SMB-Analyzer I was able to reproduce the issue: The >> SMB-Analyzer does not set path, which is indeed optional but used for >> all the other logs by convention. > > Yup, you are right. This looks like an oversight, the path should have > been set for all the create_stream calls. I will fix this in master in a > few minutes - thanks for finding this :) Thanks a lot for the quick fix! This way the handling of streams is more consistent across the streams. I will also update my script once I find some time for testing, as not specifying the path is generally valid (cf. https://www.bro.org/sphinx/scripts/base/frameworks/logging/main.bro.html#type-Log::Filter). Jan From daniel.manzo at bayer.com Thu Jan 12 08:04:32 2017 From: daniel.manzo at bayer.com (Daniel Manzo) Date: Thu, 12 Jan 2017 16:04:32 +0000 Subject: [Bro] Tap configuration Message-ID: <6a7be6035e6248aa8571c6f59d54658a@moxde9.na.bayer.cnb> Hi all, I have Bro 2.4 configured on a RHEL 6.8 server and was wondering how to properly configure the network interfaces so that Bro can see as much of the network traffic as possible. My tap is connected in line with the network, and I believe that I was previously seeing the correct traffic, but now Bro has reporting much less information. I want to make sure that I have the interfaces configured correctly before moving on to troubleshooting other areas. 
Currently, I have two eth interfaces set up in PROMISC mode. Thank you for the help Best regards, Dan Manzo -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170112/b950d4c6/attachment.html From gordonjamesr at gmail.com Thu Jan 12 10:07:24 2017 From: gordonjamesr at gmail.com (James Gordon) Date: Thu, 12 Jan 2017 10:07:24 -0800 Subject: [Bro] Writing logs to both ACII and JSON In-Reply-To: <0924ee65-bc31-7755-7408-7db9bbabf723@gmail.com> References: <29c4cedb-7ba3-13a1-7a80-2eb1d6b5d2f6@gmail.com> <45295951-87e6-fd40-840f-15cb236ef4b7@gmail.com> <029d3bd1-747b-8d79-28e0-d730a1fb1d67@gmail.com> <4835d2a1-cdf0-7997-28dc-3c7c07ab9e73@gmail.com> <0fe7e15e-003a-004d-09ee-0fcaa544e945@gmail.com> <20170112124436.266gxkfljwvf2eqo@Beezling.fritz.box> <0924ee65-bc31-7755-7408-7db9bbabf723@gmail.com> Message-ID: Jan + all, Thanks for your help on all this! The script is working great with the exclusion of SMB logs. Apologies for all the confusion on my side - I'm not much of a programmer, but use Bro daily as a vital data source at my job. Anything to enhance the data we get is always good, and JSON makes it much easier to ingest into other sources. I've already come across another Bro script that the add-json.bro script doesn't seem to agree with, but will unload that script as it doesn't provide much value for my org. I look forward to seeing an updated version that can handle these stray log files though! Thanks again! James Gordon On Thu, Jan 12, 2017 at 6:31 AM, Jan Grash?fer wrote: > >> Using the SMB-Analyzer I was able to reproduce the issue: The > >> SMB-Analyzer does not set path, which is indeed optional but used for > >> all the other logs by convention. > > > > Yup, you are right. This looks like an oversight, the path should have > > been set for all the create_stream calls. I will fix this in master in a > > few minutes - thanks for finding this :) > > Thanks a lot for the quick fix! This way the handling of streams is more > consistent across the streams. I will also update my script once I find > some time for testing, as not specifying the path is generally valid > (cf. > https://www.bro.org/sphinx/scripts/base/frameworks/ > logging/main.bro.html#type-Log::Filter). > > Jan > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170112/a0db4c0a/attachment-0001.html From hosom at battelle.org Thu Jan 12 11:21:14 2017 From: hosom at battelle.org (Hosom, Stephen M) Date: Thu, 12 Jan 2017 19:21:14 +0000 Subject: [Bro] Tap configuration In-Reply-To: <6a7be6035e6248aa8571c6f59d54658a@moxde9.na.bayer.cnb> References: <6a7be6035e6248aa8571c6f59d54658a@moxde9.na.bayer.cnb> Message-ID: Have you looked into checksum offloading? If enabled, it can result in Bro not producing many of the logs you would expect. From: bro-bounces at bro.org [mailto:bro-bounces at bro.org] On Behalf Of Daniel Manzo Sent: Thursday, January 12, 2017 11:05 AM To: bro at bro.org Subject: [Bro] Tap configuration Hi all, I have Bro 2.4 configured on a RHEL 6.8 server and was wondering how to properly configure the network interfaces so that Bro can see as much of the network traffic as possible. My tap is connected in line with the network, and I believe that I was previously seeing the correct traffic, but now Bro has reporting much less information. 
I want to make sure that I have the interfaces configured correctly before moving on to troubleshooting other areas. Currently, I have two eth interfaces set up in PROMISC mode. Thank you for the help Best regards, Dan Manzo -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170112/eab0153b/attachment.html From vladg at illinois.edu Thu Jan 12 11:33:55 2017 From: vladg at illinois.edu (Vlad Grigorescu) Date: Thu, 12 Jan 2017 13:33:55 -0600 Subject: [Bro] Segmentation fault while using own signature. In-Reply-To: References: Message-ID: I could be mistaken, but some of these don't look like correct escape sequences for Bro regular expressions. Check out the PATTERNS section of the flex documentation: http://dinosaur.compilertools.net/flex/manpage.html --Vlad fatema bannatwala writes: > Hi all, > > So I have a case where if I use following regex in sig file, it works, but > when I edit it and make it more strict I get segmentation fault in like 5 > minutes after bro gets normally started: > > The working version: > > signature rootkit-potential { > payload /.*[0-9\.]{7,15}\|[0-9]{1,5}.*/ > event "Potential rootkit" > tcp-state originator > } > > signature rootkit-malware { > payload /.*SSH-2\.5-OpenSSH_6\.1\.9.[0-9\.]{7,15}\|\d{1,5}.*/ > event "rootkit malware" > tcp-state originator > } > > When I change regex to be more restrictive, Seg fault occurs: > > signature rootkit-potential { > payload /.*(?:\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\|\d{1,5}).*/ > event "Potential rootkit" > tcp-state originator > } > > signature rootkit-malware { > payload > /.*SSH-2\.5-OpenSSH_6\.1\.9.(?:\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\|\d{1,5}).*/ > event "rootkit malware" > tcp-state originator > } > > Any idea what might be going wrong? > > Thanks, > Fatema. > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 800 bytes Desc: not available Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170112/03eacd66/attachment.bin From dopheide at gmail.com Thu Jan 12 11:53:22 2017 From: dopheide at gmail.com (Mike Dopheide) Date: Thu, 12 Jan 2017 13:53:22 -0600 Subject: [Bro] Segmentation fault while using own signature. In-Reply-To: References: Message-ID: That's a good catch, I _think_ \d isn't supported, so you'll want to use [0-9]. I've chatted with Fatema off-list and I don't think this is the problem though. The \d should just cause the signature to not match correctly. -Dop On Thu, Jan 12, 2017 at 1:33 PM, Vlad Grigorescu wrote: > I could be mistaken, but some of these don't look like correct escape > sequences for Bro regular expressions. 
> > Check out the PATTERNS section of the flex documentation: > > http://dinosaur.compilertools.net/flex/manpage.html > > --Vlad > > > fatema bannatwala writes: > > > Hi all, > > > > So I have a case where if I use following regex in sig file, it works, > but > > when I edit it and make it more strict I get segmentation fault in like 5 > > minutes after bro gets normally started: > > > > The working version: > > > > signature rootkit-potential { > > payload /.*[0-9\.]{7,15}\|[0-9]{1,5}.*/ > > event "Potential rootkit" > > tcp-state originator > > } > > > > signature rootkit-malware { > > payload /.*SSH-2\.5-OpenSSH_6\.1\.9.[0-9\.]{7,15}\|\d{1,5}.*/ > > event "rootkit malware" > > tcp-state originator > > } > > > > When I change regex to be more restrictive, Seg fault occurs: > > > > signature rootkit-potential { > > payload /.*(?:\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\|\d{1,5}).*/ > > event "Potential rootkit" > > tcp-state originator > > } > > > > signature rootkit-malware { > > payload > > /.*SSH-2\.5-OpenSSH_6\.1\.9.(?:\d{1,3}\.\d{1,3}\.\d{1,3}\.\ > d{1,3}\|\d{1,5}).*/ > > event "rootkit malware" > > tcp-state originator > > } > > > > Any idea what might be going wrong? > > > > Thanks, > > Fatema. > > _______________________________________________ > > Bro mailing list > > bro at bro-ids.org > > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170112/eeead65c/attachment.html From neslog at gmail.com Thu Jan 12 13:59:20 2017 From: neslog at gmail.com (Neslog) Date: Thu, 12 Jan 2017 16:59:20 -0500 Subject: [Bro] Tap configuration In-Reply-To: References: <6a7be6035e6248aa8571c6f59d54658a@moxde9.na.bayer.cnb> Message-ID: I've had success disabling checksum. ignore_checksums On Jan 12, 2017 2:24 PM, "Hosom, Stephen M" wrote: > Have you looked into checksum offloading? If enabled, it can result in Bro > not producing many of the logs you would expect. > > > > *From:* bro-bounces at bro.org [mailto:bro-bounces at bro.org] *On Behalf Of *Daniel > Manzo > *Sent:* Thursday, January 12, 2017 11:05 AM > *To:* bro at bro.org > *Subject:* [Bro] Tap configuration > > > > Hi all, > > > > I have Bro 2.4 configured on a RHEL 6.8 server and was wondering how to > properly configure the network interfaces so that Bro can see as much of > the network traffic as possible. My tap is connected in line with the > network, and I believe that I was previously seeing the correct traffic, > but now Bro has reporting much less information. I want to make sure that I > have the interfaces configured correctly before moving on to > troubleshooting other areas. Currently, I have two eth interfaces set up in > PROMISC mode. Thank you for the help > > > > Best regards, > > Dan Manzo > > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170112/242fbb8e/attachment.html From sudo.darkstar at gmail.com Thu Jan 12 15:34:15 2017 From: sudo.darkstar at gmail.com (John B. 
Althouse III) Date: Thu, 12 Jan 2017 18:34:15 -0500 Subject: [Bro] Comparing file details and connection details at the same time Message-ID: Brograming question; I want to my script to look at the conn details of a ssl session, orig_h, resp_h, ect. and also look at specific file details for that session, x509::certificate.sig_alg How do I correlate the two in a Bro script since Bro handles connections and files separately? My thought process was to use 'event ssl_established' since it would have most of what I want but it doesn't have x509 file details like the certificate.sig_alg and I wasn't able to find the event that would contain both. Anyone know how I can do this? -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170112/2cd7ffd3/attachment.html From klehigh at iu.edu Thu Jan 12 16:13:23 2017 From: klehigh at iu.edu (Keith Lehigh) Date: Thu, 12 Jan 2017 19:13:23 -0500 Subject: [Bro] Comparing file details and connection details at the same time In-Reply-To: References: Message-ID: <52DF705B-C59F-4CEE-B255-8F8EE80A84E9@iu.edu> Specifically for x509 certificates, you might want to look at the x509_certificate event, which includes the connection details & the parsed certificate fields in one handy event. The ?misc/dump-events? script is invaluable for examining packet captures to figure out what events fire and what data is available for a given event. bro -r some.pcap misc/dump-events - Keith > On Jan 12, 2017, at 18:34, John B. Althouse III wrote: > > Brograming question; > > I want to my script to look at the conn details of a ssl session, orig_h, resp_h, ect. and also look at specific file details for that session, x509::certificate.sig_alg > > How do I correlate the two in a Bro script since Bro handles connections and files separately? > > My thought process was to use 'event ssl_established' since it would have most of what I want but it doesn't have x509 file details like the certificate.sig_alg and I wasn't able to find the event that would contain both. > > Anyone know how I can do this? > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3569 bytes Desc: not available Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170112/1510d4ea/attachment.bin From ralph.holz.tech at gmail.com Thu Jan 12 19:33:30 2017 From: ralph.holz.tech at gmail.com (Ralph Holz) Date: Fri, 13 Jan 2017 03:33:30 +0000 Subject: [Bro] Core affinity on AMD Opteron 6276 In-Reply-To: References: Message-ID: Hi Erik, Thanks for the links! I used lstopo to get an idea of the hardware layout, but I wanted some script that would figure out which cores share circuitry (because Opteron). Reading through your thread, it seems my script does the right thing. As for your setup - do you use an Opteron? And what is the clock rate of your cores? Thanks again! Ralph On Thu, Jan 12, 2017 at 11:20 PM erik clark wrote: > Ralph, you may want to look back at the archives. Michal and I think > Justin had posted an extensive discussion on how to identify and pin cpus. > See: > > http://mailman.icsi.berkeley.edu/pipermail/bro/2016-October/010743.html > > The suggested lstopo is a very good way to enumerate your cores, as > indicated in that thread. 
:) > > Also, regarding 32 workers, we are handling 6Gb/s traffic with af_packet > with just 18 workers, minimum memory usage, but fairly high rate of cpu > usage. Our drop rate is under .5% across all workers. > > Lastly, read the Bro bit here: > > > https://www.sans.org/reading-room/whitepapers/intrusion/open-source-ids-high-performance-shootout-35772 > > We have found that this is indeed fairly accurate with regards to worker > count and pps consumption by bro. > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170113/fda5bfcd/attachment.html From philosnef at gmail.com Fri Jan 13 04:43:51 2017 From: philosnef at gmail.com (erik clark) Date: Fri, 13 Jan 2017 07:43:51 -0500 Subject: [Bro] Core affinity on AMD Opteron 6276 In-Reply-To: References: Message-ID: We are an intel shop, so I couldnt say for sure. We are running 2.5ghz? cores. On Thu, Jan 12, 2017 at 10:33 PM, Ralph Holz wrote: > Hi Erik, > > Thanks for the links! > > I used lstopo to get an idea of the hardware layout, but I wanted some > script that would figure out which cores share circuitry (because Opteron). > Reading through your thread, it seems my script does the right thing. > > As for your setup - do you use an Opteron? And what is the clock rate of > your cores? > > Thanks again! > > Ralph > > On Thu, Jan 12, 2017 at 11:20 PM erik clark wrote: > >> Ralph, you may want to look back at the archives. Michal and I think >> Justin had posted an extensive discussion on how to identify and pin cpus. >> See: >> >> http://mailman.icsi.berkeley.edu/pipermail/bro/2016-October/010743.html >> >> The suggested lstopo is a very good way to enumerate your cores, as >> indicated in that thread. :) >> >> Also, regarding 32 workers, we are handling 6Gb/s traffic with af_packet >> with just 18 workers, minimum memory usage, but fairly high rate of cpu >> usage. Our drop rate is under .5% across all workers. >> >> Lastly, read the Bro bit here: >> >> https://www.sans.org/reading-room/whitepapers/intrusion/ >> open-source-ids-high-performance-shootout-35772 >> >> We have found that this is indeed fairly accurate with regards to worker >> count and pps consumption by bro. >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170113/e77b8b45/attachment-0001.html From daniel.manzo at bayer.com Fri Jan 13 05:58:25 2017 From: daniel.manzo at bayer.com (Daniel Manzo) Date: Fri, 13 Jan 2017 13:58:25 +0000 Subject: [Bro] Tap configuration In-Reply-To: References: <6a7be6035e6248aa8571c6f59d54658a@moxde9.na.bayer.cnb> Message-ID: <69d5068cc10b48719f5e52181224e124@moxde9.na.bayer.cnb> I have tried disabling checksum offloading, but still no luck. Here is the ifcfg file for my eth interface: DEVICE=eth12 ONBOOT=yes BOOTPROTO=static PROMISC=yes USERCTL=no Freundliche Gr??e / Best regards, Dan Manzo Asst Analyst I ________________________ Bayer: Science For A Better Life Bayer U.S. LLC Country Platform US Scientific Computing Competence Ctr Bayer Road 15205 Pittsburgh (PA), United States Tel: +1 412 7772171 Mobile: +1 412 5258332 E-mail: daniel.manzo at bayer.com From: Neslog [mailto:neslog at gmail.com] Sent: Thursday, January 12, 2017 4:59 PM To: Hosom, Stephen M Cc: Bro-IDS; Daniel Manzo Subject: Re: [Bro] Tap configuration I've had success disabling checksum. 
ignore_checksums On Jan 12, 2017 2:24 PM, "Hosom, Stephen M" > wrote: Have you looked into checksum offloading? If enabled, it can result in Bro not producing many of the logs you would expect. From: bro-bounces at bro.org [mailto:bro-bounces at bro.org] On Behalf Of Daniel Manzo Sent: Thursday, January 12, 2017 11:05 AM To: bro at bro.org Subject: [Bro] Tap configuration Hi all, I have Bro 2.4 configured on a RHEL 6.8 server and was wondering how to properly configure the network interfaces so that Bro can see as much of the network traffic as possible. My tap is connected in line with the network, and I believe that I was previously seeing the correct traffic, but now Bro has reporting much less information. I want to make sure that I have the interfaces configured correctly before moving on to troubleshooting other areas. Currently, I have two eth interfaces set up in PROMISC mode. Thank you for the help Best regards, Dan Manzo _______________________________________________ Bro mailing list bro at bro-ids.org http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170113/a9fe2c29/attachment.html From seth at icir.org Fri Jan 13 06:28:52 2017 From: seth at icir.org (Seth Hall) Date: Fri, 13 Jan 2017 09:28:52 -0500 Subject: [Bro] Tap configuration In-Reply-To: <69d5068cc10b48719f5e52181224e124@moxde9.na.bayer.cnb> References: <6a7be6035e6248aa8571c6f59d54658a@moxde9.na.bayer.cnb> <69d5068cc10b48719f5e52181224e124@moxde9.na.bayer.cnb> Message-ID: I would recommend leaving checksum validation on in Bro, but disable checksum offloading on the NIC. I typically point people to this blog post by Doug Burks (of the SecurityOnion project)... http://blog.securityonion.net/2011/10/when-is-full-packet-capture-not-full.html There is one further thing I would recommend though that we discovered well after this blog post was written. If you are using an Intel NIC with the ixgbe driver, your nic has a feature called "flow director" that you will want to disable because it will negatively impact your analysis by reordering packets. It can be disabled like this on linux: ethtool -L eth12 combined 1 This will cause your NIC to have only a single hardware queue which will disable the flow director feature and prevent your NIC from reordering packets. Do that along with the suggestions in the blog post above and things should be better. .Seth > On Jan 13, 2017, at 8:58 AM, Daniel Manzo wrote: > > I have tried disabling checksum offloading, but still no luck. Here is the ifcfg file for my eth interface: > > DEVICE=eth12 > ONBOOT=yes > BOOTPROTO=static > PROMISC=yes > USERCTL=no > > Freundliche Gr??e / Best regards, > > Dan Manzo > Asst Analyst I > ________________________ > > Bayer: Science For A Better Life > > Bayer U.S. LLC > Country Platform US > Scientific Computing Competence Ctr > Bayer Road > 15205 Pittsburgh (PA), United States > Tel: +1 412 7772171 > Mobile: +1 412 5258332 > E-mail: daniel.manzo at bayer.com > > From: Neslog [mailto:neslog at gmail.com] > Sent: Thursday, January 12, 2017 4:59 PM > To: Hosom, Stephen M > Cc: Bro-IDS; Daniel Manzo > Subject: Re: [Bro] Tap configuration > > I've had success disabling checksum. > ignore_checksums > > > On Jan 12, 2017 2:24 PM, "Hosom, Stephen M" wrote: > Have you looked into checksum offloading? If enabled, it can result in Bro not producing many of the logs you would expect. 
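The ignore_checksums option mentioned above is a standard Bro redef, so in a local script it would presumably be the single line sketched below. This is only a sketch of that knob; the recommendation given earlier in this thread is to leave checksum validation on in Bro and disable checksum offloading on the NIC instead.

# Sketch only: make Bro skip checksum validation entirely.
# The advice in this thread is to fix offloading on the NIC rather than use this.
redef ignore_checksums = T;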
> > From: bro-bounces at bro.org [mailto:bro-bounces at bro.org] On Behalf Of Daniel Manzo > Sent: Thursday, January 12, 2017 11:05 AM > To: bro at bro.org > Subject: [Bro] Tap configuration > > Hi all, > > I have Bro 2.4 configured on a RHEL 6.8 server and was wondering how to properly configure the network interfaces so that Bro can see as much of the network traffic as possible. My tap is connected in line with the network, and I believe that I was previously seeing the correct traffic, but now Bro has reporting much less information. I want to make sure that I have the interfaces configured correctly before moving on to troubleshooting other areas. Currently, I have two eth interfaces set up in PROMISC mode. Thank you for the help > > Best regards, > Dan Manzo > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From seth at icir.org Fri Jan 13 06:38:20 2017 From: seth at icir.org (Seth Hall) Date: Fri, 13 Jan 2017 09:38:20 -0500 Subject: [Bro] Segmentation fault while using own signature. In-Reply-To: References: Message-ID: <8DA7E854-CC5A-4DC0-BF9B-F381EF616FE6@icir.org> Hi Fatema, Have you been able to get a stack trace? That would be the most helpful. I suspect that Dop is right though, the problem you're encountering with Bro crashing much be somewhere else. I have a hard time believing that this is the cause of the crash. Another small note about the regular expressions you are writing is that Bro doesn't support the (?:abc) mechanism to prevent captures from occurring. You can leave out the "?:" when writing regular expressions. Bro has "flex-ish" regular expressions but it doesn't support all of the features that flex has. .Seth > On Jan 3, 2017, at 5:12 PM, fatema bannatwala wrote: > > Hi all, > > So I have a case where if I use following regex in sig file, it works, but when I edit it and make it more strict I get segmentation fault in like 5 minutes after bro gets normally started: > > The working version: > > signature rootkit-potential { > payload /.*[0-9\.]{7,15}\|[0-9]{1,5}.*/ > event "Potential rootkit" > tcp-state originator > } > > signature rootkit-malware { > payload /.*SSH-2\.5-OpenSSH_6\.1\.9.[0-9\.]{7,15}\|\d{1,5}.*/ > event "rootkit malware" > tcp-state originator > } > > When I change regex to be more restrictive, Seg fault occurs: > > signature rootkit-potential { > payload /.*(?:\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\|\d{1,5}).*/ > event "Potential rootkit" > tcp-state originator > } > > signature rootkit-malware { > payload /.*SSH-2\.5-OpenSSH_6\.1\.9.(?:\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\|\d{1,5}).*/ > event "rootkit malware" > tcp-state originator > } > > Any idea what might be going wrong? > > Thanks, > Fatema. 
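Pulling together the notes above (Bro's flex-style patterns support neither \d nor the (?: ... ) non-capturing group), the stricter signatures could presumably be rewritten with plain [0-9] classes as sketched below. This is an untested sketch and is not known to avoid the reported crash; it only restates the same match using constructs the replies describe as supported.

# Sketch: stricter IP|port match using [0-9] instead of \d and no (?: ... ) grouping.
signature rootkit-potential {
    payload /.*[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\|[0-9]{1,5}.*/
    event "Potential rootkit"
    tcp-state originator
}

# Sketch: the same rewrite applied to the banner-specific signature.
signature rootkit-malware {
    payload /.*SSH-2\.5-OpenSSH_6\.1\.9.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\|[0-9]{1,5}.*/
    event "rootkit malware"
    tcp-state originator
}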
> > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From seth at icir.org Fri Jan 13 06:40:17 2017 From: seth at icir.org (Seth Hall) Date: Fri, 13 Jan 2017 09:40:17 -0500 Subject: [Bro] Comparing file details and connection details at the same time In-Reply-To: <52DF705B-C59F-4CEE-B255-8F8EE80A84E9@iu.edu> References: <52DF705B-C59F-4CEE-B255-8F8EE80A84E9@iu.edu> Message-ID: > On Jan 12, 2017, at 7:13 PM, Keith Lehigh wrote: > > The ?misc/dump-events? script is invaluable for examining packet captures to figure out what events fire and what data is available for a given event. There is one small caveat to this too. If an event isn't handled by an existing script, that event won't be generated and won't show up in the output from the dump-events script. In many cases this all works out ok, but I wanted to point it out to save someone a headache trying to figure out why an event isn't being generated. .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From seth at icir.org Fri Jan 13 10:28:54 2017 From: seth at icir.org (Seth Hall) Date: Fri, 13 Jan 2017 13:28:54 -0500 Subject: [Bro] Segmentation fault while using own signature. In-Reply-To: References: <8DA7E854-CC5A-4DC0-BF9B-F381EF616FE6@icir.org> Message-ID: <6C1BBDA0-46C5-4264-BA39-E9BB006B77A8@icir.org> > On Jan 13, 2017, at 12:06 PM, fatema bannatwala wrote: > , > I wrote a little script to run gstack for all bro processes for every minute. And ran it when I loaded the new sig and restarted bro. > I have attached the output files for two sensors where I captured the gstack stats. Let me know if that's not the correct way of capturing stack trace. You need to collect a core dump when the crash happens and get a stack trace from that. If this is on Linux, you will need to set your kernel.core_pattern sysctl value to something like the following.... sudo sysctl -w kernel.core_pattern=core.%e-%t-%p If you have things set this way and you have gdb installed, broctl should automatically generate a stack trace when it restarts the dead process. .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From sudo.darkstar at gmail.com Fri Jan 13 11:11:33 2017 From: sudo.darkstar at gmail.com (John B. Althouse III) Date: Fri, 13 Jan 2017 14:11:33 -0500 Subject: [Bro] Comparing file details and connection details at the same time In-Reply-To: References: <52DF705B-C59F-4CEE-B255-8F8EE80A84E9@iu.edu> Message-ID: Thanks Keith! For anyone else asking the same question; fa_file contains conns which holds the connection details in table format. Example: event x509_certificate(f: fa_file , cert_ref: opaque of x509 , cert: X509::Certificate ) { for ( cid in f$conns ) { if ( cid$resp_h == 10.0.0.1 ) ect.. On Fri, Jan 13, 2017 at 9:40 AM, Seth Hall wrote: > > > On Jan 12, 2017, at 7:13 PM, Keith Lehigh wrote: > > > > The ?misc/dump-events? script is invaluable for examining packet > captures to figure out what events fire and what data is available for a > given event. > > There is one small caveat to this too. If an event isn't handled by an > existing script, that event won't be generated and won't show up in the > output from the dump-events script. 
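For completeness, John's fragment above might be filled out roughly as follows. This is a hypothetical sketch: the 10.0.0.1 address is just the placeholder from his example, and the f?$conns guard is an assumption since the conns table on fa_file is optional.

event x509_certificate(f: fa_file, cert_ref: opaque of x509, cert: X509::Certificate)
    {
    # The conns table maps each conn_id carrying this file to its connection record.
    if ( ! f?$conns )
        return;

    for ( cid in f$conns )
        {
        # cid$orig_h / cid$resp_h / cid$resp_p give the connection details,
        # while cert$sig_alg comes from the parsed certificate.
        if ( cid$resp_h == 10.0.0.1 )
            print fmt("%s -> %s:%s sig_alg=%s", cid$orig_h, cid$resp_h, cid$resp_p, cert$sig_alg);
        }
    }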
In many cases this all works out ok, > but I wanted to point it out to save someone a headache trying to figure > out why an event isn't being generated. > > .Seth > > -- > Seth Hall > International Computer Science Institute > (Bro) because everyone has a network > http://www.bro.org/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170113/1e109fd3/attachment.html From rleonar7 at uoregon.edu Fri Jan 13 11:15:46 2017 From: rleonar7 at uoregon.edu (Ryan Leonard) Date: Fri, 13 Jan 2017 11:15:46 -0800 Subject: [Bro] Logger Child Memory Leak (logger crashing often) Message-ID: <006a01d26dd1$71c36930$554a3b90$@uoregon.edu> Hey All, Running Bro 2.5 on a single server with 20 cores and some 240 GB of memory. node.cfg specifies 14 workers, 2 proxies, 1 manager and a 1 logger process. We are running a custom build of bro built with tmalloc enabled and pfring enabled. I'm working to get my bro cluster stable. As it stand, often the logger process will crash causing us to lose a period of log files. Looking at the output of broctl top, it seems that the system is likely killing the bro logger process when it sees the amount of memory resources it is consuming. ==== stderr.log listening on p5p2 1484325490.230681 received termination signal # broctl top Name Type Host Pid Proc VSize Rss Cpu Cmd logger logger localhost 47880 parent 4G 3G 82% bro logger logger localhost 47902 child 38G 37G 13% bro . As I've been writing this email I have watched the logger process's memory utilization slowly climb from 16% to 17% (broctl top is now indicating 41G memory usage by logger child) I've been investigating if the bottleneck goes back to our storage solution, which is just a bunch of disks. Based on utilization indicated by iostat and iotop's output, it seems like the Bro logger process is writing around 4MB/s to our disc which seems reasonable and does not indicate a bottleneck to me. Aside: there is a tangential problem in that currently we are seeing a very high drop rate indicated by netstats: # broctl netstats worker-1-1: 1484334410.387744 recvd=1087209145 dropped=3525526435 link=317784575 worker-1-10: 1484334410.711691 recvd=2916765681 dropped=1696150851 link=317965517 . Thanks for any insights or suggestions! -Ryan -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170113/1907361c/attachment.html From jazoff at illinois.edu Fri Jan 13 11:36:11 2017 From: jazoff at illinois.edu (Azoff, Justin S) Date: Fri, 13 Jan 2017 19:36:11 +0000 Subject: [Bro] Logger Child Memory Leak (logger crashing often) In-Reply-To: <006a01d26dd1$71c36930$554a3b90$@uoregon.edu> References: <006a01d26dd1$71c36930$554a3b90$@uoregon.edu> Message-ID: > On Jan 13, 2017, at 2:15 PM, Ryan Leonard wrote: > > Hey All, > > Running Bro 2.5 on a single server with 20 cores and some 240 GB of memory. > node.cfg specifies 14 workers, 2 proxies, 1 manager and a 1 logger process. > We are running a custom build of bro built with tmalloc enabled and pfring enabled. > > I?m working to get my bro cluster stable. As it stand, often the logger process will crash causing us to lose a period of log files. Looking at the output of broctl top, it seems that the system is likely killing the bro logger process when it sees the amount of memory resources it is consuming. 
> > ==== stderr.log > listening on p5p2 > > 1484325490.230681 received termination signal > > # broctl top > Name Type Host Pid Proc VSize Rss Cpu Cmd > logger logger localhost 47880 parent 4G 3G 82% bro > logger logger localhost 47902 child 38G 37G 13% bro > Most likely this isn't a leak, but that the logger process isn't able to process the data fast enough. What model CPUs does this server have? Can you show what this command outputs after bro has been running for a bit: top -b -n 1 -H -o TIME | fgrep bro: | head -n 20 The last column will be truncated, don't worry about that. -- - Justin Azoff From rleonar7 at uoregon.edu Fri Jan 13 11:42:18 2017 From: rleonar7 at uoregon.edu (Ryan Leonard) Date: Fri, 13 Jan 2017 11:42:18 -0800 Subject: [Bro] Logger Child Memory Leak (logger crashing often) In-Reply-To: References: <006a01d26dd1$71c36930$554a3b90$@uoregon.edu> Message-ID: <007901d26dd5$27262a80$75727f80$@uoregon.edu> Hey Justin, The results of running top (the "-o TIME" parameter set was unavailable on my system) # top -b -n 1 -H | fgrep bro: | head -n 20 48059 root 20 0 4496m 4.0g 260m S 27.7 1.7 38:36.49 bro: conn/Log:: 47908 root 20 0 4496m 4.0g 260m S 5.5 1.7 8:29.99 bro: weird/Log: 47911 root 20 0 4496m 4.0g 260m S 5.5 1.7 8:32.29 bro: dns/Log::W 47907 root 20 0 4496m 4.0g 260m S 1.8 1.7 4:36.68 bro: syslog/Log 10331 root 20 0 19.1g 15g 257m S 0.0 6.7 0:06.92 bro: /opt/bro/f 29622 root 20 0 134m 59m 5396 S 0.0 0.0 0:00.10 bro: loaded_scr 29634 root 20 0 593m 520m 5444 S 0.0 0.2 0:00.12 bro: loaded_scr 29624 root 20 0 175m 59m 5476 S 0.0 0.0 0:00.26 bro: /opt/bro/f 29632 root 20 0 175m 59m 5476 S 0.0 0.0 0:00.19 bro: loaded_scr 47901 root 20 0 4496m 4.0g 260m S 0.0 1.7 0:00.15 bro: packet_fil 47903 root 20 0 4496m 4.0g 260m S 0.0 1.7 0:00.07 bro: loaded_scr 47904 root 20 0 4496m 4.0g 260m S 0.0 1.7 0:00.06 bro: reporter/L 47905 root 20 0 4496m 4.0g 260m S 0.0 1.7 0:00.26 bro: communicat 47906 root 20 0 4496m 4.0g 260m S 0.0 1.7 0:00.08 bro: stats/Log: 47909 root 20 0 4496m 4.0g 260m S 0.0 1.7 0:00.76 bro: known_serv 47910 root 20 0 4496m 4.0g 260m S 0.0 1.7 0:00.30 bro: known_host 47912 root 20 0 4496m 4.0g 260m S 0.0 1.7 0:00.88 bro: software/L 47913 root 20 0 4496m 4.0g 260m S 0.0 1.7 0:00.07 bro: dce_rpc/Lo 47914 root 20 0 4496m 4.0g 260m S 0.0 1.7 0:03.22 bro: files/Log: 47915 root 20 0 4496m 4.0g 260m S 0.0 1.7 0:10.19 bro: http/Log:: For some more information -- the processors we are running are the following: # lstopo -v | grep Socket Socket L#0 (P#0 CPUModel="Intel(R) Xeon(R) CPU E5-2670 v2 @ 2.50GHz" CPUType=x86_64) Socket L#1 (P#1 CPUModel="Intel(R) Xeon(R) CPU E5-2670 v2 @ 2.50GHz" CPUType=x86_64) Hyper threading is disabled on this server. Thanks! -Ryan From bkellogg at dresser-rand.com Fri Jan 13 12:11:35 2017 From: bkellogg at dresser-rand.com (Kellogg, Brian (GS IT PG-DR)) Date: Fri, 13 Jan 2017 20:11:35 +0000 Subject: [Bro] ICAP plugin Message-ID: Just started reviewing the presentations from BroCon. Is the ICAP plugin available? 
Thanks all, Brian From jazoff at illinois.edu Fri Jan 13 12:17:28 2017 From: jazoff at illinois.edu (Azoff, Justin S) Date: Fri, 13 Jan 2017 20:17:28 +0000 Subject: Re: [Bro] Logger Child Memory Leak (logger crashing often) In-Reply-To: <007901d26dd5$27262a80$75727f80$@uoregon.edu> References: <006a01d26dd1$71c36930$554a3b90$@uoregon.edu> <007901d26dd5$27262a80$75727f80$@uoregon.edu> Message-ID: <6419D914-C020-465B-88B1-D3E954BEA552@illinois.edu> > On Jan 13, 2017, at 2:42 PM, Ryan Leonard wrote: > > Hey Justin, > > The results of running top (the "-o TIME" parameter set was unavailable on my system) Ah, it looks like the default is close enough. > # top -b -n 1 -H | fgrep bro: | head -n 20 > 48059 root 20 0 4496m 4.0g 260m S 27.7 1.7 38:36.49 bro: conn/Log:: > 47908 root 20 0 4496m 4.0g 260m S 5.5 1.7 8:29.99 bro: weird/Log: This shows most of your time is spent writing the conn.log and the weird.log. Does your conn.log look normal? The main thing to check for when using pf_ring is to see if things are actually being load balanced properly. If you make a single tcp connection, does it get logged to the conn.log once, or 14 times? What does this command output: cat /bro/logs/current/conn.log |bro-cut history|sort|uniq -c|sort -rn|head -n 50 The weird.log shouldn't be very large, what does this output? cat /bro/logs/current/weird.log|bro-cut name|sort|uniq -c|sort -rn > For some more information -- the processors we are running are the following: > # lstopo -v | grep Socket > Socket L#0 (P#0 CPUModel="Intel(R) Xeon(R) CPU E5-2670 v2 @ 2.50GHz" CPUType=x86_64) > Socket L#1 (P#1 CPUModel="Intel(R) Xeon(R) CPU E5-2670 v2 @ 2.50GHz" CPUType=x86_64) > > Hyper threading is disabled on this server. Well, those are good CPUs, things should be keeping up a bit better than they are. Are you using the pin_cpus setting in your node.cfg? -- - Justin Azoff From daniel.manzo at bayer.com Fri Jan 13 12:28:18 2017 From: daniel.manzo at bayer.com (Daniel Manzo) Date: Fri, 13 Jan 2017 20:28:18 +0000 Subject: Re: [Bro] Tap configuration In-Reply-To: References: <6a7be6035e6248aa8571c6f59d54658a@moxde9.na.bayer.cnb> <69d5068cc10b48719f5e52181224e124@moxde9.na.bayer.cnb> Message-ID: Thank you for the help. I tried the settings, but I haven't noticed any difference in packets. The main test that I am doing is to open two putty sessions to the server, and have one running capstats on eth12 while my other session downloads a 1GB file to /dev/null. Last week, I was able to see the packets increase greatly via capstats, but now they stay steady at 7 or 8 packets per second. Best regards, Dan Manzo -----Original Message----- From: Seth Hall [mailto:seth at icir.org] Sent: Friday, January 13, 2017 9:29 AM To: Daniel Manzo Cc: Neslog; Hosom, Stephen M; Bro-IDS Subject: Re: [Bro] Tap configuration I would recommend leaving checksum validation on in Bro, but disable checksum offloading on the NIC. I typically point people to this blog post by Doug Burks (of the SecurityOnion project)... http://blog.securityonion.net/2011/10/when-is-full-packet-capture-not-full.html There is one further thing I would recommend though that we discovered well after this blog post was written. If you are using an Intel NIC with the ixgbe driver, your nic has a feature called "flow director" that you will want to disable because it will negatively impact your analysis by reordering packets.
It can be disabled like this on linux: ethtool -L eth12 combined 1 This will cause your NIC to have only a single hardware queue which will disable the flow director feature and prevent your NIC from reordering packets. Do that along with the suggestions in the blog post above and things should be better. .Seth > On Jan 13, 2017, at 8:58 AM, Daniel Manzo wrote: > > I have tried disabling checksum offloading, but still no luck. Here is the ifcfg file for my eth interface: > > DEVICE=eth12 > ONBOOT=yes > BOOTPROTO=static > PROMISC=yes > USERCTL=no > > Freundliche Gr??e / Best regards, > > Dan Manzo > Asst Analyst I > ________________________ > > Bayer: Science For A Better Life > > Bayer U.S. LLC > Country Platform US > Scientific Computing Competence Ctr > Bayer Road > 15205 Pittsburgh (PA), United States > Tel: +1 412 7772171 > Mobile: +1 412 5258332 > E-mail: daniel.manzo at bayer.com > > From: Neslog [mailto:neslog at gmail.com] > Sent: Thursday, January 12, 2017 4:59 PM > To: Hosom, Stephen M > Cc: Bro-IDS; Daniel Manzo > Subject: Re: [Bro] Tap configuration > > I've had success disabling checksum. > ignore_checksums > > > On Jan 12, 2017 2:24 PM, "Hosom, Stephen M" wrote: > Have you looked into checksum offloading? If enabled, it can result in Bro not producing many of the logs you would expect. > > From: bro-bounces at bro.org [mailto:bro-bounces at bro.org] On Behalf Of Daniel Manzo > Sent: Thursday, January 12, 2017 11:05 AM > To: bro at bro.org > Subject: [Bro] Tap configuration > > Hi all, > > I have Bro 2.4 configured on a RHEL 6.8 server and was wondering how to properly configure the network interfaces so that Bro can see as much of the network traffic as possible. My tap is connected in line with the network, and I believe that I was previously seeing the correct traffic, but now Bro has reporting much less information. I want to make sure that I have the interfaces configured correctly before moving on to troubleshooting other areas. Currently, I have two eth interfaces set up in PROMISC mode. Thank you for the help > > Best regards, > Dan Manzo > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From mfernandez at mitre.org Fri Jan 13 12:40:40 2017 From: mfernandez at mitre.org (Fernandez, Mark I) Date: Fri, 13 Jan 2017 20:40:40 +0000 Subject: [Bro] ICAP plugin In-Reply-To: References: Message-ID: Hi Brian, Bureaucracy has taken hold. Approval to release the ICAP Analyzer source code to the public still pending. Once approved, I will notify the group and promptly make it available. Cheers! Mark -----Original Message----- From: bro-bounces at bro.org [mailto:bro-bounces at bro.org] On Behalf Of Kellogg, Brian (GS IT PG-DR) Sent: Friday, January 13, 2017 3:12 PM To: bro at bro.org Subject: [Bro] ICAP plugin Just started reviewing the presentations from BroCon. Is the ICAP plugin available? 
Thanks all, Brian _______________________________________________ Bro mailing list bro at bro-ids.org http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From rleonar7 at uoregon.edu Fri Jan 13 13:03:55 2017 From: rleonar7 at uoregon.edu (Ryan Leonard) Date: Fri, 13 Jan 2017 13:03:55 -0800 Subject: [Bro] Logger Child Memory Leak (logger crashing often) In-Reply-To: <6419D914-C020-465B-88B1-D3E954BEA552@illinois.edu> References: <006a01d26dd1$71c36930$554a3b90$@uoregon.edu> <007901d26dd5$27262a80$75727f80$@uoregon.edu> <6419D914-C020-465B-88B1-D3E954BEA552@illinois.edu> Message-ID: Thank you for the response Justin! I will continue investigating whether or not pfring is misconfigured. Last week I did fiddle with CPU pinning with no change to the logger crashing behavior. The current configuration is: Manager: pinned to 0 Logger: pinned to 1 Workers: pinned to 2-15 Proxies: unpinned # cat /home/bro/logs/current/conn.log |bro-cut history|sort|uniq -c|sort -rn|head -n 50 1691591 D 1393495 ^d 988587 A 587617 S 490552 ^a 207430 - 205059 Dd 193311 ^f 183811 ^h 180860 F 163651 ^dA 130203 Ad 114927 DA 114207 ^dD 111956 AD 98371 Da 82048 R 69047 ^c 68888 Aa 56273 ^ad 54982 ^aA 49199 ^aD 49005 ^da 41245 Adc 40925 ^dAc 38275 ^r 36261 SA 35243 Acd 32155 ^dcA 31712 ^hA 30753 DdA 30186 ^cA 28592 AF 27648 SD 27378 Ac 26598 ^df 23810 DF 23686 ^cAd 23194 Af 22530 ADd 22168 ^cdA 21580 ^hd 21373 ^hD 20249 Sd 19126 ^dAD 19051 Df 18797 ^cd 17837 Sa 17813 ^ha 17813 ^dc # cat /home/bro/logs/current/weird.log|bro-cut name|sort|uniq -c|sort -rn 297534 dns_unmatched_msg 159574 dns_unmatched_reply 85718 above_hole_data_without_any_acks 51007 truncated_tcp_payload 31704 TCP_ack_underflow_or_misorder 28605 possible_split_routing 18759 data_before_established 10649 TCP_seq_underflow_or_misorder 4952 bad_TCP_checksum 1915 active_connection_reuse 1729 line_terminated_with_single_CR 1535 inappropriate_FIN 1149 DNS_RR_unknown_type 1042 excessive_data_without_further_acks 569 SYN_seq_jump 511 unknown_dce_rpc_auth_type_68 333 window_recision 313 SYN_inside_connection 313 DNS_RR_length_mismatch 215 bad_HTTP_request 203 DNS_Conn_count_too_large 173 connection_originator_SYN_ack 132 NUL_in_line 123 binpac exception: out_of_bound: SSLRecord:length: 13 > 2 57 FIN_advanced_last_seq 51 bad_UDP_checksum 33 SYN_after_close 32 data_after_reset 31 binpac exception: out_of_bound: Syslog_Priority:lt: 1 > 0 30 DNS_truncated_len_lt_hdr_len 29 ssl_early_application_data 21 DNS_truncated_RR_rdlength_lt_len 20 SYN_with_data 18 non_ip_packet_in_encap 12 unknown_HTTP_method 10 dnp3_corrupt_header_checksum 8 SYN_after_reset 7 bad_SYN_ack 6 empty_http_request 5 Teredo_bubble_with_payload 5 binpac exception: out_of_bound: SSLRecord:rec: 61752 > 39 (Abbriviated -- see attached for full output) >> # top -b -n 1 -H | fgrep bro: | head -n 20 >> 48059 root 20 0 4496m 4.0g 260m S 27.7 1.7 38:36.49 bro: conn/Log:: >> 47908 root 20 0 4496m 4.0g 260m S 5.5 1.7 8:29.99 bro: weird/Log: > This shows most of your time is spent writing the conn.log and the weird.log. > > Does your conn.log look normal? The main thing to check for when using pf_ring is to see if things are actually being load balanced properly. If you make a single tcp connection, does it get logged to the conn.log once, or 14 times? > > What does this command output: > > cat /bro/logs/current/conn.log |bro-cut history|sort|uniq -c|sort -rn|head -n 50 > > The weird.log shouldn't be very large, what does this output? 
> > cat /bro/logs/current/weird.log|bro-cut name|sort|uniq -c|sort -rn > >> For some more information -- the processors we are running are the following: >> # lstopo -v | grep Socket >> Socket L#0 (P#0 CPUModel="Intel(R) Xeon(R) CPU E5-2670 v2 @ 2.50GHz" CPUType=x86_64) >> Socket L#1 (P#1 CPUModel="Intel(R) Xeon(R) CPU E5-2670 v2 @ 2.50GHz" CPUType=x86_64) >> >> Hyper threading is disabled on this server. > Well, those are good CPUs, things should be keeping up a bit better than they are. > > Are you using the pin_cpus setting in your node.cfg? > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170113/730b7db5/attachment-0001.html -------------- next part -------------- # cat /home/bro/logs/current/weird.log|bro-cut name|sort|uniq -c|sort -rn 297534 dns_unmatched_msg 159574 dns_unmatched_reply 85718 above_hole_data_without_any_acks 51007 truncated_tcp_payload 31704 TCP_ack_underflow_or_misorder 28605 possible_split_routing 18759 data_before_established 10649 TCP_seq_underflow_or_misorder 4952 bad_TCP_checksum 1915 active_connection_reuse 1729 line_terminated_with_single_CR 1535 inappropriate_FIN 1149 DNS_RR_unknown_type 1042 excessive_data_without_further_acks 569 SYN_seq_jump 511 unknown_dce_rpc_auth_type_68 333 window_recision 313 SYN_inside_connection 313 DNS_RR_length_mismatch 215 bad_HTTP_request 203 DNS_Conn_count_too_large 173 connection_originator_SYN_ack 132 NUL_in_line 123 binpac exception: out_of_bound: SSLRecord:length: 13 > 2 57 FIN_advanced_last_seq 51 bad_UDP_checksum 33 SYN_after_close 32 data_after_reset 31 binpac exception: out_of_bound: Syslog_Priority:lt: 1 > 0 30 DNS_truncated_len_lt_hdr_len 29 ssl_early_application_data 21 DNS_truncated_RR_rdlength_lt_len 20 SYN_with_data 18 non_ip_packet_in_encap 12 unknown_HTTP_method 10 dnp3_corrupt_header_checksum 8 SYN_after_reset 7 bad_SYN_ack 6 empty_http_request 5 Teredo_bubble_with_payload 5 binpac exception: out_of_bound: SSLRecord:rec: 61752 > 39 4 binpac exception: out_of_bound: SSLRecord:rec: 63043 > 39 4 binpac exception: out_of_bound: SSLRecord:rec: 5684 > 149 4 binpac exception: out_of_bound: SSLRecord:rec: 23176 > 38 4 binpac exception: out_of_bound: SSLRecord:rec: 15495 > 39 4 bad_ICMP_checksum 3 unknown_SIP_method 3 DNS_truncated_ans_too_short 3 dnp3_header_lacks_magic 3 binpac exception: out_of_bound: SSLRecord:rec: 9782 > 39 3 binpac exception: out_of_bound: SSLRecord:rec: 7642 > 39 3 binpac exception: out_of_bound: SSLRecord:rec: 65351 > 38 3 binpac exception: out_of_bound: SSLRecord:rec: 59434 > 39 3 binpac exception: out_of_bound: SSLRecord:rec: 56793 > 39 3 binpac exception: out_of_bound: SSLRecord:rec: 5095 > 39 3 binpac exception: out_of_bound: SSLRecord:rec: 42784 > 39 3 binpac exception: out_of_bound: SSLRecord:rec: 41009 > 164 3 binpac exception: out_of_bound: SSLRecord:rec: 39849 > 39 3 binpac exception: out_of_bound: SSLRecord:rec: 37222 > 39 3 binpac exception: out_of_bound: SSLRecord:rec: 36749 > 39 3 binpac exception: out_of_bound: SSLRecord:rec: 35100 > 38 3 binpac exception: out_of_bound: SSLRecord:rec: 35020 > 39 3 binpac exception: out_of_bound: SSLRecord:rec: 29452 > 38 3 binpac exception: out_of_bound: SSLRecord:rec: 14468 > 39 3 binpac exception: out_of_bound: SSLRecord:rec: 14160 > 39 3 binpac exception: out_of_bound: SSLRecord:rec: 10405 > 39 3 binpac exception: invalid index for case: Handshake_Response_Packet: 0 3 bad_HTTP_request_with_version 2 unknown_protocol_97 2 unknown_protocol_50 2 
UDP_datagram_length_mismatch(48!=54) 2 truncated_header 2 premature_connection_reuse 2 inflate_failed 2 fragment_inconsistency 2 binpac exception: out_of_bound: SSLRecord:rec: 9084 > 39 2 binpac exception: out_of_bound: SSLRecord:rec: 7813 > 38 2 binpac exception: out_of_bound: SSLRecord:rec: 7324 > 39 2 binpac exception: out_of_bound: SSLRecord:rec: 65154 > 39 2 binpac exception: out_of_bound: SSLRecord:rec: 64815 > 39 2 binpac exception: out_of_bound: SSLRecord:rec: 61977 > 39 2 binpac exception: out_of_bound: SSLRecord:rec: 61598 > 39 2 binpac exception: out_of_bound: SSLRecord:rec: 61504 > 39 2 binpac exception: out_of_bound: SSLRecord:rec: 61383 > 39 2 binpac exception: out_of_bound: SSLRecord:rec: 61266 > 38 2 binpac exception: out_of_bound: SSLRecord:rec: 60935 > 39 2 binpac exception: out_of_bound: SSLRecord:rec: 60790 > 39 2 binpac exception: out_of_bound: SSLRecord:rec: 60295 > 39 2 binpac exception: out_of_bound: SSLRecord:rec: 59738 > 39 2 binpac exception: out_of_bound: SSLRecord:rec: 59547 > 149 2 binpac exception: out_of_bound: SSLRecord:rec: 59534 > 39 2 binpac exception: out_of_bound: SSLRecord:rec: 58561 > 39 2 binpac exception: out_of_bound: SSLRecord:rec: 57845 > 39 2 binpac exception: out_of_bound: SSLRecord:rec: 57443 > 39 2 binpac exception: out_of_bound: SSLRecord:rec: 55907 > 38 2 binpac exception: out_of_bound: SSLRecord:rec: 55878 > 39 2 binpac exception: out_of_bound: SSLRecord:rec: 55156 > 39 2 binpac exception: out_of_bound: SSLRecord:rec: 54717 > 39 2 binpac exception: out_of_bound: SSLRecord:rec: 54211 > 39 2 binpac exception: out_of_bound: SSLRecord:rec: 53288 > 39 2 binpac exception: out_of_bound: SSLRecord:rec: 5306 > 39 2 binpac exception: out_of_bound: SSLRecord:rec: 52842 > 1350 2 binpac exception: out_of_bound: SSLRecord:rec: 51609 > 39 2 binpac exception: out_of_bound: SSLRecord:rec: 51523 > 38 2 binpac exception: out_of_bound: SSLRecord:rec: 50937 > 39 2 binpac exception: out_of_bound: SSLRecord:rec: 50145 > 39 2 binpac exception: out_of_bound: SSLRecord:rec: 49079 > 149 2 binpac exception: out_of_bound: SSLRecord:rec: 48953 > 39 2 binpac exception: out_of_bound: SSLRecord:rec: 48179 > 39 2 binpac exception: out_of_bound: SSLRecord:rec: 47074 > 39 2 binpac exception: out_of_bound: SSLRecord:rec: 46885 > 39 2 binpac exception: out_of_bound: SSLRecord:rec: 46428 > 39 2 binpac exception: out_of_bound: SSLRecord:rec: 44815 > 39 2 binpac exception: out_of_bound: SSLRecord:rec: 444 > 39 2 binpac exception: out_of_bound: SSLRecord:rec: 44255 > 39 2 binpac exception: out_of_bound: SSLRecord:rec: 42990 > 39 2 binpac exception: out_of_bound: SSLRecord:rec: 42473 > 39 2 binpac exception: out_of_bound: SSLRecord:rec: 40347 > 39 2 binpac exception: out_of_bound: SSLRecord:rec: 39354 > 39 2 binpac exception: out_of_bound: SSLRecord:rec: 3863 > 39 2 binpac exception: out_of_bound: SSLRecord:rec: 37919 > 39 2 binpac exception: out_of_bound: SSLRecord:rec: 37330 > 39 2 binpac exception: out_of_bound: SSLRecord:rec: 37210 > 39 2 binpac exception: out_of_bound: SSLRecord:rec: 3679 > 39 2 binpac exception: out_of_bound: SSLRecord:rec: 36590 > 39 2 binpac exception: out_of_bound: SSLRecord:rec: 36553 > 39 2 binpac exception: out_of_bound: SSLRecord:rec: 35760 > 39 2 binpac exception: out_of_bound: SSLRecord:rec: 35311 > 39 2 binpac exception: out_of_bound: SSLRecord:rec: 3216 > 39 2 binpac exception: out_of_bound: SSLRecord:rec: 30377 > 149 2 binpac exception: out_of_bound: SSLRecord:rec: 28377 > 38 2 binpac exception: out_of_bound: SSLRecord:rec: 27178 > 39 2 binpac 
exception: out_of_bound: SSLRecord:rec: 26611 > 39 2 binpac exception: out_of_bound: SSLRecord:rec: 25868 > 39 2 binpac exception: out_of_bound: SSLRecord:rec: 25665 > 39 2 binpac exception: out_of_bound: SSLRecord:rec: 25660 > 39 2 binpac exception: out_of_bound: SSLRecord:rec: 25321 > 39 2 binpac exception: out_of_bound: SSLRecord:rec: 2496 > 39 2 binpac exception: out_of_bound: SSLRecord:rec: 24780 > 39 2 binpac exception: out_of_bound: SSLRecord:rec: 24757 > 39 2 binpac exception: out_of_bound: SSLRecord:rec: 23968 > 149 2 binpac exception: out_of_bound: SSLRecord:rec: 22107 > 39 2 binpac exception: out_of_bound: SSLRecord:rec: 22064 > 39 2 binpac exception: out_of_bound: SSLRecord:rec: 21374 > 39 2 binpac exception: out_of_bound: SSLRecord:rec: 20948 > 148 2 binpac exception: out_of_bound: SSLRecord:rec: 20770 > 39 2 binpac exception: out_of_bound: SSLRecord:rec: 20693 > 39 2 binpac exception: out_of_bound: SSLRecord:rec: 18420 > 39 2 binpac exception: out_of_bound: SSLRecord:rec: 18057 > 39 2 binpac exception: out_of_bound: SSLRecord:rec: 17284 > 35 2 binpac exception: out_of_bound: SSLRecord:rec: 15352 > 174 2 binpac exception: out_of_bound: SSLRecord:rec: 14525 > 39 2 binpac exception: out_of_bound: SSLRecord:rec: 11509 > 39 2 binpac exception: out_of_bound: SSLRecord:rec: 10880 > 1350 1 unknown_protocol_89 1 unknown_protocol_2 1 unknown_protocol_103 1 truncated_GRE 1 missing_HTTP_uri 1 irc_invalid_command 1 DNS_truncated_quest_too_short 1 DNS_RR_bad_length 1 DNS_NAME_too_long 1 DNS_label_too_long 1 DNS_label_len_gt_pkt 1 DNS_label_forward_compress_offset 1 binpac exception: out_of_bound: SSLRecord:rec: 9945 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 9462 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 9414 > 38 1 binpac exception: out_of_bound: SSLRecord:rec: 9408 > 111 1 binpac exception: out_of_bound: SSLRecord:rec: 8499 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 8295 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 7458 > 23 1 binpac exception: out_of_bound: SSLRecord:rec: 7393 > 161 1 binpac exception: out_of_bound: SSLRecord:rec: 7194 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 6946 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 65151 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 65049 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 64995 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 64785 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 64569 > 1350 1 binpac exception: out_of_bound: SSLRecord:rec: 63622 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 62877 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 6235 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 62026 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 61436 > 149 1 binpac exception: out_of_bound: SSLRecord:rec: 61302 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 61108 > 38 1 binpac exception: out_of_bound: SSLRecord:rec: 60758 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 6053 > 165 1 binpac exception: out_of_bound: SSLRecord:rec: 60456 > 38 1 binpac exception: out_of_bound: SSLRecord:rec: 59834 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 59002 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 58995 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 58553 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 58314 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 58264 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 57993 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 
57918 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 57525 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 5733 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 57234 > 38 1 binpac exception: out_of_bound: SSLRecord:rec: 5676 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 56630 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 56178 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 56056 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 55956 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 55939 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 55868 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 55492 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 55466 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 54977 > 53 1 binpac exception: out_of_bound: SSLRecord:rec: 54885 > 168 1 binpac exception: out_of_bound: SSLRecord:rec: 54512 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 54406 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 54 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 54007 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 53706 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 52549 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 52294 > 38 1 binpac exception: out_of_bound: SSLRecord:rec: 51595 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 51237 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 50948 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 50671 > 163 1 binpac exception: out_of_bound: SSLRecord:rec: 50147 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 5012 > 150 1 binpac exception: out_of_bound: SSLRecord:rec: 49959 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 49768 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 4921 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 48836 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 484 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 47993 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 47857 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 47596 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 47415 > 36 1 binpac exception: out_of_bound: SSLRecord:rec: 47152 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 47071 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 46948 > 38 1 binpac exception: out_of_bound: SSLRecord:rec: 46837 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 46062 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 44652 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 44552 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 43704 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 43455 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 43352 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 43326 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 43273 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 43145 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 43096 > 38 1 binpac exception: out_of_bound: SSLRecord:rec: 42086 > 38 1 binpac exception: out_of_bound: SSLRecord:rec: 41846 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 4158 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 41554 > 1350 1 binpac exception: out_of_bound: SSLRecord:rec: 41424 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 41082 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 40276 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 4012 > 149 1 binpac exception: out_of_bound: SSLRecord:rec: 39645 > 39 1 binpac 
exception: out_of_bound: SSLRecord:rec: 39188 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 38656 > 38 1 binpac exception: out_of_bound: SSLRecord:rec: 38633 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 38244 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 38232 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 37819 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 3748 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 37469 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 36878 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 36192 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 36090 > 149 1 binpac exception: out_of_bound: SSLRecord:rec: 36072 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 3552 > 38 1 binpac exception: out_of_bound: SSLRecord:rec: 35507 > 150 1 binpac exception: out_of_bound: SSLRecord:rec: 34777 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 3472 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 33850 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 33661 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 33229 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 33068 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 32935 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 32468 > 38 1 binpac exception: out_of_bound: SSLRecord:rec: 3221 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 31643 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 30148 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 29818 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 28904 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 28819 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 27866 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 27419 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 27341 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 27281 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 27151 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 27026 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 26583 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 25035 > 1350 1 binpac exception: out_of_bound: SSLRecord:rec: 24891 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 24751 > 38 1 binpac exception: out_of_bound: SSLRecord:rec: 23483 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 23284 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 22813 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 22786 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 21508 > 149 1 binpac exception: out_of_bound: SSLRecord:rec: 21151 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 20965 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 20917 > 38 1 binpac exception: out_of_bound: SSLRecord:rec: 20758 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 19771 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 19489 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 19402 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 18797 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 18563 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 18090 > 42 1 binpac exception: out_of_bound: SSLRecord:rec: 17909 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 17905 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 17738 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 17521 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 16898 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 16629 > 39 1 binpac exception: 
out_of_bound: SSLRecord:rec: 1652 > 38 1 binpac exception: out_of_bound: SSLRecord:rec: 16421 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 16304 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 16278 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 16121 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 16079 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 15885 > 150 1 binpac exception: out_of_bound: SSLRecord:rec: 158 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 15773 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 15493 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 15118 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 14765 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 14159 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 14118 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 13646 > 38 1 binpac exception: out_of_bound: SSLRecord:rec: 13122 > 153 1 binpac exception: out_of_bound: SSLRecord:rec: 12973 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 12916 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 12695 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 12273 > 1350 1 binpac exception: out_of_bound: SSLRecord:rec: 12261 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 12125 > 38 1 binpac exception: out_of_bound: SSLRecord:rec: 11615 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 11199 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 11032 > 39 1 binpac exception: out_of_bound: SSLRecord:rec: 10305 > 39 1 binpac exception: out_of_bound: SSLRecord:length: 13 > 11 From bro at pingtrip.com Sat Jan 14 11:26:30 2017 From: bro at pingtrip.com (Dave Crawford) Date: Sat, 14 Jan 2017 14:26:30 -0500 Subject: [Bro] Tap configuration In-Reply-To: References: <6a7be6035e6248aa8571c6f59d54658a@moxde9.na.bayer.cnb> <69d5068cc10b48719f5e52181224e124@moxde9.na.bayer.cnb> Message-ID: <66492EEE-40EA-4B44-BAA4-47B81AD0A882@pingtrip.com> This is what I use in my sensor /etc/network/interfaces config along with a custom ?post-up? script. I use Debian for my Bro clusters, so your application will differ. I?m also using af_packet (v4.8.0 kernel) so some of the performance settings may need to be adjusted for your setup. My tuning is aimed at keeping the packets in L3 cache on the CPU vid the NIC hardware, hence the reduced rings. auto eth6 iface eth6 inet manual up ip link set $IFACE promisc on arp off mtu 1500 up down ip link set $IFACE promisc off down post-up /opt/tools/post-up_settings.sh $IFACE And the /opt/tools/post-up_settings.sh script: #!/bin/bash IFACE=$1 if [[ -n "$IFACE" ]]; then # Lower the NIC ring descriptor size /sbin/ethtool -G $IFACE rx 512 # Disable offloading functions for i in rx tx sg tso ufo gso gro lro rxhash ntuple txvlan rxvlan; do ethtool -K $IFACE $i off; done # Enforce a single RX queue /sbin/ethtool -L $IFACE combined 1 # Disable pause frames /sbin/ethtool -A $IFACE rx off tx off # Limit the maximum number of interrupts per second /sbin/ethtool -C $IFACE adaptive-rx on rx-usecs 100 # Disable IPv6 /bin/echo 1 > /proc/sys/net/ipv6/conf/$IFACE/disable_ipv6 # Pin IRQ to local CPU /opt/tools/set_irq_affinity local $IFACE fi -Dave > On Jan 13, 2017, at 3:28 PM, Daniel Manzo wrote: > > Thank you for the help. I tried the settings, but I have noticed any difference in packets. The main test that I am doing is that I would open two putty sessions to the server, and have one running capstats on eth12 while my other session was downloading a 1GB file to /dev/null. 
Last week, I was able to see the packets increase greatly via capstats, but now they stay steady at 7 or 8 packets per second. > > Best regards, > Dan Manzo > > -----Original Message----- > From: Seth Hall [mailto:seth at icir.org] > Sent: Friday, January 13, 2017 9:29 AM > To: Daniel Manzo > Cc: Neslog; Hosom, Stephen M; Bro-IDS > Subject: Re: [Bro] Tap configuration > > I would recommend leaving checksum validation on in Bro, but disable checksum offloading on the NIC. > > I typically point people to this blog post by Doug Burks (of the SecurityOnion project)... > http://blog.securityonion.net/2011/10/when-is-full-packet-capture-not-full.html > > There is one further thing I would recommend though that we discovered well after this blog post was written. If you are using an Intel NIC with the ixgbe driver, your nic has a feature called "flow director" that you will want to disable because it will negatively impact your analysis by reordering packets. It can be disabled like this on linux: > ethtool -L eth12 combined 1 > > This will cause your NIC to have only a single hardware queue which will disable the flow director feature and prevent your NIC from reordering packets. Do that along with the suggestions in the blog post above and things should be better. > > .Seth > > >> On Jan 13, 2017, at 8:58 AM, Daniel Manzo wrote: >> >> I have tried disabling checksum offloading, but still no luck. Here is the ifcfg file for my eth interface: >> >> DEVICE=eth12 >> ONBOOT=yes >> BOOTPROTO=static >> PROMISC=yes >> USERCTL=no >> >> Freundliche Gr??e / Best regards, >> >> Dan Manzo >> Asst Analyst I >> ________________________ >> >> Bayer: Science For A Better Life >> >> Bayer U.S. LLC >> Country Platform US >> Scientific Computing Competence Ctr >> Bayer Road >> 15205 Pittsburgh (PA), United States >> Tel: +1 412 7772171 >> Mobile: +1 412 5258332 >> E-mail: daniel.manzo at bayer.com >> >> From: Neslog [mailto:neslog at gmail.com] >> Sent: Thursday, January 12, 2017 4:59 PM >> To: Hosom, Stephen M >> Cc: Bro-IDS; Daniel Manzo >> Subject: Re: [Bro] Tap configuration >> >> I've had success disabling checksum. >> ignore_checksums >> >> >> On Jan 12, 2017 2:24 PM, "Hosom, Stephen M" wrote: >> Have you looked into checksum offloading? If enabled, it can result in Bro not producing many of the logs you would expect. >> >> From: bro-bounces at bro.org [mailto:bro-bounces at bro.org] On Behalf Of Daniel Manzo >> Sent: Thursday, January 12, 2017 11:05 AM >> To: bro at bro.org >> Subject: [Bro] Tap configuration >> >> Hi all, >> >> I have Bro 2.4 configured on a RHEL 6.8 server and was wondering how to properly configure the network interfaces so that Bro can see as much of the network traffic as possible. My tap is connected in line with the network, and I believe that I was previously seeing the correct traffic, but now Bro has reporting much less information. I want to make sure that I have the interfaces configured correctly before moving on to troubleshooting other areas. Currently, I have two eth interfaces set up in PROMISC mode. 
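Related to the ignore_checksums suggestion above: as a quick diagnostic (not a fix), checksum validation can also be switched off in Bro itself, so that traffic mangled by NIC checksum offloading still shows up in the logs. A one-line sketch for local.bro, assuming the ignore_checksums option available in Bro 2.4/2.5:

# Diagnostic only: accept packets even if their IP/TCP/UDP checksums are wrong.
# If the missing logs come back with this set, checksum offloading on the
# capture NIC is the culprit and should be disabled there instead (see the
# ethtool advice and the Security Onion post referenced above).
redef ignore_checksums = T;

Leaving validation on in Bro and disabling offloading on the NIC, as recommended above, remains the cleaner long-term setup.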
Thank you for the help
>>
>> Best regards,
>> Dan Manzo
>>
>>
>> _______________________________________________
>> Bro mailing list
>> bro at bro-ids.org
>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro
>> _______________________________________________
>> Bro mailing list
>> bro at bro-ids.org
>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro
>
>
>
> --
> Seth Hall
> International Computer Science Institute
> (Bro) because everyone has a network
> http://www.bro.org/
>
>
> _______________________________________________
> Bro mailing list
> bro at bro-ids.org
> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170114/15514309/attachment.html

From leonardo.mokarzel.falcon at gmail.com Sat Jan 14 14:00:55 2017
From: leonardo.mokarzel.falcon at gmail.com (Leonardo Mokarzel Falcon)
Date: Sat, 14 Jan 2017 23:00:55 +0100
Subject: [Bro] Bro stops logging to sqlite
Message-ID:

Hi Bro community,

Currently I have configured my Bro instance to send DNS logs to the sqlite database: /usr/local/bro/logs/current/dns.sqlite. I'm then reading these logs from a Python script and deleting the lines which were read. I'm facing the issue that Bro stops logging to the same sqlite file if the lines are deleted by my Python program.

Has someone faced similar issues in the past?

Thanks!

Kind regards,

Leonardo Mokarzel Falcon
@LMokarzel

From asharma at lbl.gov Sat Jan 14 16:37:51 2017
From: asharma at lbl.gov (Aashish Sharma)
Date: Sat, 14 Jan 2017 16:37:51 -0800
Subject: [Bro] Bro stops logging to sqlite
In-Reply-To:
References:
Message-ID: <20170115003749.GB47018@mac-822.local>

Leonardo,

Yes, SQLite table locking is quite elementary. I have limited understanding of it, but my impression is that when your Python program is making deletes, it locks the table, Bro's SQLite log writer can no longer get its writes through, and the writer gives up and terminates the connection. You should see an ERROR in reporter.log similar to:

0.000000  Reporter::ERROR  /home/bro//Log::WRITER_SQLITE: SQLite call failed: database table is locked: dns  (empty)

You should be able to catch this reporter error in this event:

event reporter_error(t: time, msg: string, location: string)
    {
    if ( /WRITER_SQLITE/ in msg )
        NOTICE([$note=WRITER_SQLITE_CRASH, $msg=msg]);
    }

And maybe try to re-initialize the stream again, but that generally doesn't seem to work. A second option is to experiment with SQLite's locking modes:

http://www.sqlite.org/wal.html

PRAGMA journal_mode=WAL;
pragma synchronous=1;

and see if that helps. Basically, your Python program needs to avoid contending with Bro's writes. I think using Postgres is a better option if you have multiple reads/writes going on, since Postgres does row-level locks, unlike SQLite. An SQLite DB is great for read-only or write-only applications, but again I have limited understanding here...

Hope this helps.

Aashish

On Sat, Jan 14, 2017 at 11:00:55PM +0100, Leonardo Mokarzel Falcon wrote:
> Hi Bro community,
>
> Currently I have configured my Bro instance to send DNS logs to the sqlite database: /usr/local/bro/logs/current/dns.sqlite. I'm then reading these logs from a Python script and deleting the lines which were read. I'm facing the issue that Bro stops logging to the same sqlite file if the lines are deleted by my Python program.
>
> Has someone faced similar issues in the past?
>
> Thanks!
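For reference, a minimal, self-contained sketch that combines the two pieces discussed above: it declares the notice type that the reporter_error snippet assumes, and it attaches an SQLite filter to the DNS log stream, following the pattern from the Bro logging documentation. The module name, database path, and the "dns" table name are illustrative assumptions, not taken from the setup described in this thread:

@load base/protocols/dns
@load base/frameworks/notice

module SQLiteWatch;   # hypothetical module name, just for this sketch

export {
    redef enum Notice::Type += {
        ## Raised when the SQLite log writer reports a failure.
        WRITER_SQLITE_CRASH
    };
}

event bro_init()
    {
    # Send dns.log to an SQLite database in addition to the default ASCII log.
    # "tablename" selects the table inside the database file.
    local filter: Log::Filter = [$name="dns-sqlite",
                                 $path="/usr/local/bro/logs/dns",
                                 $writer=Log::WRITER_SQLITE,
                                 $config=table(["tablename"] = "dns")];
    Log::add_filter(DNS::LOG, filter);
    }

event reporter_error(t: time, msg: string, location: string)
    {
    # Surface SQLite writer errors (e.g. "database table is locked") as a notice.
    if ( /WRITER_SQLITE/ in msg )
        NOTICE([$note=WRITER_SQLITE_CRASH, $msg=msg]);
    }

Whether re-adding the filter after such an error actually resumes logging is, as noted above, hit and miss; reducing lock contention on the reader side (WAL mode, short transactions) is the more reliable approach.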
> > Kind regards, > > Leonardo Mokarzel Falcon > @LMokarzel > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From jan.grashoefer at gmail.com Sun Jan 15 06:36:02 2017 From: jan.grashoefer at gmail.com (=?UTF-8?Q?Jan_Grash=c3=b6fer?=) Date: Sun, 15 Jan 2017 15:36:02 +0100 Subject: [Bro] Writing logs to both ACII and JSON In-Reply-To: References: <29c4cedb-7ba3-13a1-7a80-2eb1d6b5d2f6@gmail.com> <45295951-87e6-fd40-840f-15cb236ef4b7@gmail.com> <029d3bd1-747b-8d79-28e0-d730a1fb1d67@gmail.com> <4835d2a1-cdf0-7997-28dc-3c7c07ab9e73@gmail.com> <0fe7e15e-003a-004d-09ee-0fcaa544e945@gmail.com> <20170112124436.266gxkfljwvf2eqo@Beezling.fritz.box> <0924ee65-bc31-7755-7408-7db9bbabf723@gmail.com> Message-ID: Hi James, > I've already come across another Bro script that the add-json.bro script > doesn't seem to agree with, but will unload that script as it doesn't > provide much value for my org. I look forward to seeing an updated version > that can handle these stray log files though! meanwhile I have updated the script. It should work with SMB using Bro 2.5 and supports path functions. I hope the new version also works with the third-party script you mentioned. As additional JSON-logging seems to be a quite common requirement, I have added the script as a package for bro-pkg. Thanks to Johanna it's already merged! If you have configured bro-pkg, the following will install the script: bro-pkg install add-json The package is located at https://github.com/J-Gras/add-json. In case you encounter any problems, please let me know. Best regards, Jan From zeolla at gmail.com Sun Jan 15 20:38:07 2017 From: zeolla at gmail.com (Zeolla@GMail.com) Date: Mon, 16 Jan 2017 04:38:07 +0000 Subject: [Bro] Tap configuration In-Reply-To: <66492EEE-40EA-4B44-BAA4-47B81AD0A882@pingtrip.com> References: <6a7be6035e6248aa8571c6f59d54658a@moxde9.na.bayer.cnb> <69d5068cc10b48719f5e52181224e124@moxde9.na.bayer.cnb> <66492EEE-40EA-4B44-BAA4-47B81AD0A882@pingtrip.com> Message-ID: So I'm not sure I follow exactly why you'd want to specifically emphasize keeping packets in the L3 cache. Is there a specific hardware configuration where this makes more sense? As of right now, I do pretty much the same thing you posted earlier except I map the # of RX queues to the # of physical CPU cores and maximize the NIC ring descriptor size. Jon On Sat, Jan 14, 2017 at 2:35 PM Dave Crawford wrote: > This is what I use in my sensor /etc/network/interfaces config along with > a custom ?post-up? script. I use Debian for my Bro clusters, so your > application will differ. I?m also using af_packet (v4.8.0 kernel) so some > of the performance settings may need to be adjusted for your setup. My > tuning is aimed at keeping the packets in L3 cache on the CPU vid the NIC > hardware, hence the reduced rings. 
> > auto eth6 > iface eth6 inet manual > up ip link set $IFACE promisc on arp off mtu 1500 up > down ip link set $IFACE promisc off down > post-up /opt/tools/post-up_settings.sh $IFACE > > > And the /opt/tools/post-up_settings.sh script: > > #!/bin/bash > > IFACE=$1 > > if [[ -n "$IFACE" ]]; then > > # Lower the NIC ring descriptor size > /sbin/ethtool -G $IFACE rx 512 > > # Disable offloading functions > for i in rx tx sg tso ufo gso gro lro rxhash ntuple txvlan rxvlan; do > ethtool -K $IFACE $i off; done > > # Enforce a single RX queue > /sbin/ethtool -L $IFACE combined 1 > > # Disable pause frames > /sbin/ethtool -A $IFACE rx off tx off > > # Limit the maximum number of interrupts per second > /sbin/ethtool -C $IFACE adaptive-rx on rx-usecs 100 > > # Disable IPv6 > /bin/echo 1 > /proc/sys/net/ipv6/conf/$IFACE/disable_ipv6 > > # Pin IRQ to local CPU > /opt/tools/set_irq_affinity local $IFACE > fi > > -Dave > > On Jan 13, 2017, at 3:28 PM, Daniel Manzo wrote: > > Thank you for the help. I tried the settings, but I have noticed any > difference in packets. The main test that I am doing is that I would open > two putty sessions to the server, and have one running capstats on eth12 > while my other session was downloading a 1GB file to /dev/null. Last week, > I was able to see the packets increase greatly via capstats, but now they > stay steady at 7 or 8 packets per second. > > Best regards, > Dan Manzo > > -----Original Message----- > From: Seth Hall [mailto:seth at icir.org ] > Sent: Friday, January 13, 2017 9:29 AM > To: Daniel Manzo > Cc: Neslog; Hosom, Stephen M; Bro-IDS > Subject: Re: [Bro] Tap configuration > > I would recommend leaving checksum validation on in Bro, but disable > checksum offloading on the NIC. > > I typically point people to this blog post by Doug Burks (of the > SecurityOnion project)... > > http://blog.securityonion.net/2011/10/when-is-full-packet-capture-not-full.html > > There is one further thing I would recommend though that we discovered > well after this blog post was written. If you are using an Intel NIC with > the ixgbe driver, your nic has a feature called "flow director" that you > will want to disable because it will negatively impact your analysis by > reordering packets. It can be disabled like this on linux: > ethtool -L eth12 combined 1 > > This will cause your NIC to have only a single hardware queue which will > disable the flow director feature and prevent your NIC from reordering > packets. Do that along with the suggestions in the blog post above and > things should be better. > > .Seth > > > On Jan 13, 2017, at 8:58 AM, Daniel Manzo wrote: > > I have tried disabling checksum offloading, but still no luck. Here is the > ifcfg file for my eth interface: > > DEVICE=eth12 > ONBOOT=yes > BOOTPROTO=static > PROMISC=yes > USERCTL=no > > Freundliche Gr??e / Best regards, > > Dan Manzo > Asst Analyst I > ________________________ > > Bayer: Science For A Better Life > > Bayer U.S. LLC > Country Platform US > Scientific Computing Competence Ctr > Bayer Road > 15205 Pittsburgh (PA), United States > Tel: +1 412 7772171 <(412)%20777-2171> > Mobile: +1 412 5258332 <(412)%20525-8332> > E-mail: daniel.manzo at bayer.com > > From: Neslog [mailto:neslog at gmail.com] > Sent: Thursday, January 12, 2017 4:59 PM > To: Hosom, Stephen M > Cc: Bro-IDS; Daniel Manzo > Subject: Re: [Bro] Tap configuration > > I've had success disabling checksum. > ignore_checksums > > > On Jan 12, 2017 2:24 PM, "Hosom, Stephen M" wrote: > Have you looked into checksum offloading? 
If enabled, it can result in Bro > not producing many of the logs you would expect. > > From: bro-bounces at bro.org [mailto:bro-bounces at bro.org] On Behalf Of > Daniel Manzo > Sent: Thursday, January 12, 2017 11:05 AM > To: bro at bro.org > Subject: [Bro] Tap configuration > > Hi all, > > I have Bro 2.4 configured on a RHEL 6.8 server and was wondering how to > properly configure the network interfaces so that Bro can see as much of > the network traffic as possible. My tap is connected in line with the > network, and I believe that I was previously seeing the correct traffic, > but now Bro has reporting much less information. I want to make sure that I > have the interfaces configured correctly before moving on to > troubleshooting other areas. Currently, I have two eth interfaces set up in > PROMISC mode. Thank you for the help > > Best regards, > Dan Manzo > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > > > > -- > Seth Hall > International Computer Science Institute > (Bro) because everyone has a network > http://www.bro.org/ > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -- Jon Sent from my mobile device -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170116/94e0b01f/attachment.html From philosnef at gmail.com Tue Jan 17 05:28:43 2017 From: philosnef at gmail.com (erik clark) Date: Tue, 17 Jan 2017 08:28:43 -0500 Subject: [Bro] proxy nodes Message-ID: I have a cluster of three systems; 2 workers and 1 logger. Both workers can be accessed by ssh from the manager, and both workers can call back to the logger. However, other than ssh, the only open port available for the workers to talk out of is 22. Should I run a proxy for each worker host, bound to that workers address, so that the workers have access to a proxy locally, since they won't be able to talk out to other proxies? -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170117/4aca6388/attachment.html From jdopheid at illinois.edu Tue Jan 17 09:01:26 2017 From: jdopheid at illinois.edu (Dopheide, Jeannette M) Date: Tue, 17 Jan 2017 17:01:26 +0000 Subject: [Bro] Bro4Pros 2017: reminder to cancel vacated registration Message-ID: <39D675C7-B01B-4402-94DD-B78B5E748787@illinois.edu> Attention Bro4Pros attendees, If you are unable to attend Bro4Pros on Thursday February 2nd, please cancel your registration so that we may open the spot to others. For those of you attending Bro4Pros, see you in a couple weeks! Thanks, Jeannette Dopheide ------ Jeannette Dopheide Training and Outreach Coordinator National Center for Supercomputing Applications University of Illinois at Urbana-Champaign From fatema.bannatwala at gmail.com Tue Jan 17 13:07:58 2017 From: fatema.bannatwala at gmail.com (fatema bannatwala) Date: Tue, 17 Jan 2017 16:07:58 -0500 Subject: [Bro] Segmentation fault while using own signature. 
In-Reply-To: <6C1BBDA0-46C5-4264-BA39-E9BB006B77A8@icir.org> References: <8DA7E854-CC5A-4DC0-BF9B-F381EF616FE6@icir.org> <6C1BBDA0-46C5-4264-BA39-E9BB006B77A8@icir.org> Message-ID: Hi Seth, On one of our sensors, I did: $ sudo sysctl -w kernel.core_pattern=core.%e-%t-%p $ sudo sysctl -a | grep "kernel.core" kernel.core_pattern = core.%e-%t-%p Also, verified that I have gdb installed: $ which gdb /usr/bin/gdb Also, I m starting bro with following commands on manager: sudo -u bro /usr/local/bro/2.5/bin/broctl install sudo -u bro /usr/local/bro/2.5/bin/broctl restart However, when seeing the crash report on the sensor, it says No core file was found: (Any idea, why broctl isn't generating the core dump, or do I have to include any file in local.bro for the same?) $ cd /mnt/brolog/spool/tmp/post-terminate-worker-2017-01-17-15-50-21-90688-crash $ less .crash-diag.out No core file found. Bro 2.5 Linux 3.10.0-327.36.3.el7.x86_64 Bro plugins: (none found) ==== No reporter.log ==== stderr.log internal warning in /usr/local/bro/2.5/share/bro/site/connStats.bro, line 3: Discarded extraneous Broxygen comment: aashish: need to port to file analysis framework warning in /usr/local/bro/2.5/share/bro/site/connStats.bro, line 39: dangerous assignment of double to integral (ConnStats::out$EstinboundConns = ConnStats::result[EstinboundConns]$sum) warning in /usr/local/bro/2.5/share/bro/site/connStats.bro, line 40: dangerous assignment of double to integral (ConnStats::out$EstoutboundConns = ConnStats::result[EstoutboundConns]$sum) Warning: Kernel filter failed: Bad address listening on em1 Warning: Kernel filter failed: Bad address 1484685887.668496 processing suspended 1484685887.668496 processing continued /usr/local/bro/2.5/share/broctl/scripts/run-bro: line 107: 121052 Segmentation fault nohup ${pin_command} $pin_cpu "$mybro" "$@" ==== stdout.log max memory size (kbytes, -m) unlimited data seg size (kbytes, -d) unlimited virtual memory (kbytes, -v) unlimited core file size (blocks, -c) unlimited ==== .cmdline -i em1 -U .status -p broctl -p broctl-live -p local -p worker-1-9 local.bro broctl base/frameworks/cluster local-worker.bro broctl/auto ==== .env_vars PATH=/usr/local/bro/2.5/bin:/usr/local/bro/2.5/share/broctl/scripts:/usr/local/bin:/usr/bin BROPATH=/mnt/brolog/spool/installed-scripts-do-not-touch/site::/mnt/brolog/spool/installed-scripts-do-not-touch/auto:/usr/local/bro/2.5/share/bro:/usr/local/bro/2.5/share/bro/policy:/usr/local/bro/2.5/share/bro/site CLUSTER_NODE=worker-1-9 ==== .status RUNNING [net_run] ==== prof.log 1484686157.516259 TCP-States: Inact. Syn. SA Part. Est. Fin. Rst. 1484686157.516259 TCP-States:Inact. 24 4 3 2 1484686157.516259 TCP-States:Syn. 118 1 1484686157.516259 TCP-States:SA 6 1484686157.516259 TCP-States:Part. 38 335 9 2 1484686157.516259 TCP-States:Est. 602 81 2 1484686157.516259 TCP-States:Fin. 3 5 3 107 1 1484686157.516259 TCP-States:Rst. 2 1484686157.516259 Connections expired due to inactivity: 1525 1484686157.516259 Total reassembler data: 1178K ==== No packet_filter.log ==== No loaded_scripts.log On Fri, Jan 13, 2017 at 1:28 PM, Seth Hall wrote: > > > On Jan 13, 2017, at 12:06 PM, fatema bannatwala < > fatema.bannatwala at gmail.com> wrote: > > > , > > I wrote a little script to run gstack for all bro processes for every > minute. And ran it when I loaded the new sig and restarted bro. > > I have attached the output files for two sensors where I captured the > gstack stats. Let me know if that's not the correct way of capturing stack > trace. 
> > You need to collect a core dump when the crash happens and get a stack > trace from that. If this is on Linux, you will need to set your > kernel.core_pattern sysctl value to something like the following.... > > sudo sysctl -w kernel.core_pattern=core.%e-%t-%p > > If you have things set this way and you have gdb installed, broctl should > automatically generate a stack trace when it restarts the dead process. > > .Seth > > -- > Seth Hall > International Computer Science Institute > (Bro) because everyone has a network > http://www.bro.org/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170117/f5da7b3b/attachment.html From seth at icir.org Tue Jan 17 19:58:59 2017 From: seth at icir.org (Seth Hall) Date: Tue, 17 Jan 2017 22:58:59 -0500 Subject: [Bro] Segmentation fault while using own signature. In-Reply-To: References: <8DA7E854-CC5A-4DC0-BF9B-F381EF616FE6@icir.org> <6C1BBDA0-46C5-4264-BA39-E9BB006B77A8@icir.org> Message-ID: <9041F357-128D-4506-884B-ACF86DC6CC80@icir.org> > On Jan 17, 2017, at 4:07 PM, fatema bannatwala wrote: > Also, I m starting bro with following commands on manager: > sudo -u bro /usr/local/bro/2.5/bin/broctl install > sudo -u bro /usr/local/bro/2.5/bin/broctl restart > > However, when seeing the crash report on the sensor, it says No core file was found: > (Any idea, why broctl isn't generating the core dump, or do I have to include any file in local.bro for the same?) Ah! I suspect the problem is that you're starting Bro as the Bro user which probably doesn't have permission to increase it's maximum core file size to unlimited. You can edit /etc/security/limits.conf and add the following line to it... * soft core unlimited That should make it possible for Bro to have arbitrarily large core dumps. .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From philosnef at gmail.com Wed Jan 18 04:51:54 2017 From: philosnef at gmail.com (erik clark) Date: Wed, 18 Jan 2017 07:51:54 -0500 Subject: [Bro] traffic to logger from workers Message-ID: Does the logger receive traffic over an encrypted tunnel? It does not appear to be the case. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170118/19fb8852/attachment.html From shirkdog.bsd at gmail.com Wed Jan 18 08:44:38 2017 From: shirkdog.bsd at gmail.com (Michael Shirk) Date: Wed, 18 Jan 2017 11:44:38 -0500 Subject: [Bro] Best set up practice In-Reply-To: References: <1500FC08-38ED-4394-8469-65387DA0E8F0@gmail.com> Message-ID: I wrote up a basic how-to for getting Bro working within a FreeBSD jail. https://www.daemon-security.com/2017/01/bro-jail-0118.html -- Michael Shirk Daemon Security, Inc. http://www.daemon-security.com On Dec 10, 2016 11:49 AM, "Michael Shirk" wrote: > In the FreeBSD sense, jail all the things. You will be able to find some > write-ups for Snort, but not so much for Bro, which I will look to create > and blog about. > > The main thing is that when you setup the jail, make sure the jail is > configured for the interface you wish to monitor. You world normally > monitor the LAN side, but you could have a separate jail configured to > monitor the external side in a separate jail looking for threats and > traffic making it in and out of your firewall. 
>
> A couple of additional items I myself have not had the chance to play with,
> but which should be possible in Bro 2.5: the ability to interact with ipfw/pf
> via the NetControl Framework to update the firewall on the fly, and to
> shunt flows.
>
> As far as logging, I normally stick to the standard Bro log files, and you
> can run tools from the host OS to process the log files in the jail if you
> want.
>
>
>
> --
> Michael Shirk
> Daemon Security, Inc.
> http://www.daemon-security.com
>
>
> On Dec 9, 2016 13:31, "Todd Carpenter" wrote:
>
>> Hi all,
>>
>> Just joined the list and had a question - that I apparently sent to
>> customer support... oops.
>>
>> Anyway, I'm building a FreeBSD server and was wondering what the best
>> practice / placement for Bro would be.
>>
>> Essentially it's a forward-facing firewall based on FreeBSD. So I was
>> wondering if it's best to deploy on the host OS, or create a jail or two and
>> funnel traffic through that? I also wanted to know if there were any
>> special considerations with jails / setup.
>>
>> Some options I came up with:
>>
>> internet > firewall > lan/dmz
>> internet > firewall > nginx proxy > lan/dmz
>> internet > firewall > dmz jail > NO lan
>> internet > firewall > bro jail > proxy jail > lan/dmz
>>
>> Thanks!
>> _______________________________________________
>> Bro mailing list
>> bro at bro-ids.org
>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro
>
>

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170118/d9918fe7/attachment-0001.html

From fatema.bannatwala at gmail.com Wed Jan 18 08:56:20 2017
From: fatema.bannatwala at gmail.com (fatema bannatwala)
Date: Wed, 18 Jan 2017 11:56:20 -0500
Subject: [Bro] Segmentation fault while using own signature.
In-Reply-To: <9041F357-128D-4506-884B-ACF86DC6CC80@icir.org>
References: <8DA7E854-CC5A-4DC0-BF9B-F381EF616FE6@icir.org> <6C1BBDA0-46C5-4264-BA39-E9BB006B77A8@icir.org> <9041F357-128D-4506-884B-ACF86DC6CC80@icir.org>
Message-ID:

Hi Seth,

Thanks for the suggestions, still getting no core dump:

$ less /etc/security/limits.conf
#Editing the core dump limit to unlimited for Bro debugging
#* soft core 0
* soft core unlimited

$ less .crash-diag.out
No core file found.

Bro 2.5
Linux 3.10.0-327.36.3.el7.x86_64

Bro plugins: (none found)

==== No reporter.log
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170118/6678c13d/attachment.html From zeolla at gmail.com Wed Jan 18 09:16:45 2017 From: zeolla at gmail.com (Zeolla@GMail.com) Date: Wed, 18 Jan 2017 17:16:45 +0000 Subject: [Bro] Segmentation fault while using own signature. In-Reply-To: References: <8DA7E854-CC5A-4DC0-BF9B-F381EF616FE6@icir.org> <6C1BBDA0-46C5-4264-BA39-E9BB006B77A8@icir.org> <9041F357-128D-4506-884B-ACF86DC6CC80@icir.org> Message-ID: I've run into issues with getting core dumps in the past. I documented some of them as comments against broala KBs, but I'm not sure where those exist now that it has been renamed. What OS are you running? Recalling from memory, there are different things that can stop successful cores using the afore-mentioned config depending on the platform (I think it was ABRT?). Happy to pull that back up again if you continue to have an issue. Jon On Wed, Jan 18, 2017 at 12:03 PM fatema bannatwala < fatema.bannatwala at gmail.com> wrote: > Hi Seth, > > Thanks for the suggestions, still getting No core dump: > > $ less /etc/security/limits.conf > #Editing the core dump limit to unlimited for Bro debugging > #* soft core 0 > * soft core unlimited > > $ less .crash-diag.out > No core file found. > > Bro 2.5 > Linux 3.10.0-327.36.3.el7.x86_64 > > Bro plugins: (none found) > > ==== No reporter.log > > > > I will check to see what am I missing. > > Thanks, > Fatema. > > On Tue, Jan 17, 2017 at 10:58 PM, Seth Hall wrote: > > > > On Jan 17, 2017, at 4:07 PM, fatema bannatwala < > fatema.bannatwala at gmail.com> wrote: > > > Also, I m starting bro with following commands on manager: > > sudo -u bro /usr/local/bro/2.5/bin/broctl install > > sudo -u bro /usr/local/bro/2.5/bin/broctl restart > > > > However, when seeing the crash report on the sensor, it says No core > file was found: > > (Any idea, why broctl isn't generating the core dump, or do I have to > include any file in local.bro for the same?) > > Ah! I suspect the problem is that you're starting Bro as the Bro user > which probably doesn't have permission to increase it's maximum core file > size to unlimited. > > You can edit /etc/security/limits.conf and add the following line to it... > > * soft core unlimited > > That should make it possible for Bro to have arbitrarily large core dumps. > > .Seth > > -- > Seth Hall > International Computer Science Institute > (Bro) because everyone has a network > http://www.bro.org/ > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -- Jon Sent from my mobile device -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170118/a7f61975/attachment.html From fatema.bannatwala at gmail.com Wed Jan 18 09:27:23 2017 From: fatema.bannatwala at gmail.com (fatema bannatwala) Date: Wed, 18 Jan 2017 12:27:23 -0500 Subject: [Bro] Segmentation fault while using own signature. In-Reply-To: References: <8DA7E854-CC5A-4DC0-BF9B-F381EF616FE6@icir.org> <6C1BBDA0-46C5-4264-BA39-E9BB006B77A8@icir.org> <9041F357-128D-4506-884B-ACF86DC6CC80@icir.org> Message-ID: Hi Jon, Thanks for lending some help. Appreciate it. We are running CentOS on our bro sensors as well as on manager. Here's the full info: Linux sensor1.xx.xx 3.10.0-327.36.3.el7.x86_64 #1 SMP Mon Oct 24 16:09:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux CentOS Linux release 7.2.1511 (Core) Thanks, Fatema. 
On Wed, Jan 18, 2017 at 12:16 PM, Zeolla at GMail.com wrote: > I've run into issues with getting core dumps in the past. I documented > some of them as comments against broala KBs, but I'm not sure where those > exist now that it has been renamed. What OS are you running? Recalling > from memory, there are different things that can stop successful cores > using the afore-mentioned config depending on the platform (I think it was > ABRT?). Happy to pull that back up again if you continue to have an issue. > > Jon > > On Wed, Jan 18, 2017 at 12:03 PM fatema bannatwala < > fatema.bannatwala at gmail.com> wrote: > >> Hi Seth, >> >> Thanks for the suggestions, still getting No core dump: >> >> $ less /etc/security/limits.conf >> #Editing the core dump limit to unlimited for Bro debugging >> #* soft core 0 >> * soft core unlimited >> >> $ less .crash-diag.out >> No core file found. >> >> Bro 2.5 >> Linux 3.10.0-327.36.3.el7.x86_64 >> >> Bro plugins: (none found) >> >> ==== No reporter.log >> >> >> >> I will check to see what am I missing. >> >> Thanks, >> Fatema. >> >> On Tue, Jan 17, 2017 at 10:58 PM, Seth Hall wrote: >> >> >> > On Jan 17, 2017, at 4:07 PM, fatema bannatwala < >> fatema.bannatwala at gmail.com> wrote: >> >> > Also, I m starting bro with following commands on manager: >> > sudo -u bro /usr/local/bro/2.5/bin/broctl install >> > sudo -u bro /usr/local/bro/2.5/bin/broctl restart >> > >> > However, when seeing the crash report on the sensor, it says No core >> file was found: >> > (Any idea, why broctl isn't generating the core dump, or do I have to >> include any file in local.bro for the same?) >> >> Ah! I suspect the problem is that you're starting Bro as the Bro user >> which probably doesn't have permission to increase it's maximum core file >> size to unlimited. >> >> You can edit /etc/security/limits.conf and add the following line to it... >> >> * soft core unlimited >> >> That should make it possible for Bro to have arbitrarily large core dumps. >> >> .Seth >> >> -- >> Seth Hall >> International Computer Science Institute >> (Bro) because everyone has a network >> http://www.bro.org/ >> >> >> _______________________________________________ >> Bro mailing list >> bro at bro-ids.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > -- > > Jon > > Sent from my mobile device > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170118/58b96773/attachment-0001.html From jazoff at illinois.edu Wed Jan 18 09:33:38 2017 From: jazoff at illinois.edu (Azoff, Justin S) Date: Wed, 18 Jan 2017 17:33:38 +0000 Subject: [Bro] Segmentation fault while using own signature. In-Reply-To: References: <8DA7E854-CC5A-4DC0-BF9B-F381EF616FE6@icir.org> <6C1BBDA0-46C5-4264-BA39-E9BB006B77A8@icir.org> <9041F357-128D-4506-884B-ACF86DC6CC80@icir.org> Message-ID: > On Jan 18, 2017, at 11:56 AM, fatema bannatwala wrote: > > Hi Seth, > > Thanks for the suggestions, still getting No core dump: I'd just run bro from a shell.. you said it crashes pretty quickly right? sudo su - mkdir /tmp/brotest cd /tmp/brotest ulimit -c unlimited /usr/local/bro/2.5/bin/bro -i eth0 local then it should crash and dump the core file right there. (replace eth0 with whatever) -- - Justin Azoff From zeolla at gmail.com Wed Jan 18 09:36:04 2017 From: zeolla at gmail.com (Zeolla@GMail.com) Date: Wed, 18 Jan 2017 17:36:04 +0000 Subject: [Bro] Segmentation fault while using own signature. 
In-Reply-To: References: <8DA7E854-CC5A-4DC0-BF9B-F381EF616FE6@icir.org> <6C1BBDA0-46C5-4264-BA39-E9BB006B77A8@icir.org> <9041F357-128D-4506-884B-ACF86DC6CC80@icir.org> Message-ID: Here are some reading materials that may help. Jon On Wed, Jan 18, 2017 at 12:27 PM fatema bannatwala < fatema.bannatwala at gmail.com> wrote: > Hi Jon, > > Thanks for lending some help. Appreciate it. > We are running CentOS on our bro sensors as well as on manager. > > Here's the full info: > Linux sensor1.xx.xx 3.10.0-327.36.3.el7.x86_64 #1 SMP Mon Oct 24 16:09:20 > UTC 2016 x86_64 x86_64 x86_64 GNU/Linux > CentOS Linux release 7.2.1511 (Core) > > Thanks, > Fatema. > > > On Wed, Jan 18, 2017 at 12:16 PM, Zeolla at GMail.com > wrote: > > I've run into issues with getting core dumps in the past. I documented > some of them as comments against broala KBs, but I'm not sure where those > exist now that it has been renamed. What OS are you running? Recalling > from memory, there are different things that can stop successful cores > using the afore-mentioned config depending on the platform (I think it was > ABRT?). Happy to pull that back up again if you continue to have an issue. > > Jon > > On Wed, Jan 18, 2017 at 12:03 PM fatema bannatwala < > fatema.bannatwala at gmail.com> wrote: > > Hi Seth, > > Thanks for the suggestions, still getting No core dump: > > $ less /etc/security/limits.conf > #Editing the core dump limit to unlimited for Bro debugging > #* soft core 0 > * soft core unlimited > > $ less .crash-diag.out > No core file found. > > Bro 2.5 > Linux 3.10.0-327.36.3.el7.x86_64 > > Bro plugins: (none found) > > ==== No reporter.log > > > > I will check to see what am I missing. > > Thanks, > Fatema. > > On Tue, Jan 17, 2017 at 10:58 PM, Seth Hall wrote: > > > > On Jan 17, 2017, at 4:07 PM, fatema bannatwala < > fatema.bannatwala at gmail.com> wrote: > > > Also, I m starting bro with following commands on manager: > > sudo -u bro /usr/local/bro/2.5/bin/broctl install > > sudo -u bro /usr/local/bro/2.5/bin/broctl restart > > > > However, when seeing the crash report on the sensor, it says No core > file was found: > > (Any idea, why broctl isn't generating the core dump, or do I have to > include any file in local.bro for the same?) > > Ah! I suspect the problem is that you're starting Bro as the Bro user > which probably doesn't have permission to increase it's maximum core file > size to unlimited. > > You can edit /etc/security/limits.conf and add the following line to it... > > * soft core unlimited > > That should make it possible for Bro to have arbitrarily large core dumps. > > .Seth > > -- > Seth Hall > International Computer Science Institute > (Bro) because everyone has a network > http://www.bro.org/ > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > -- > > Jon > > Sent from my mobile device > > > -- Jon Sent from my mobile device -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170118/85ea4823/attachment.html From fatema.bannatwala at gmail.com Wed Jan 18 11:30:34 2017 From: fatema.bannatwala at gmail.com (fatema bannatwala) Date: Wed, 18 Jan 2017 14:30:34 -0500 Subject: [Bro] Segmentation fault while using own signature. 
In-Reply-To: References: <8DA7E854-CC5A-4DC0-BF9B-F381EF616FE6@icir.org> <6C1BBDA0-46C5-4264-BA39-E9BB006B77A8@icir.org> <9041F357-128D-4506-884B-ACF86DC6CC80@icir.org> Message-ID: Thanks Jon for the links! Thanks Justin for alternative. We have our cluster in production, hence currently that sig file is disabled so that the cluster runs properly. Hence, to recreate the seg fault issue this time, rather than enabling it for the whole cluster, I just enabled it (in local.bro) for the previous version of bro that we still have around, and ran a single bro process for that old version, as you suggested. This time I was able to generate core dump for that single process. I ran the core dump through the crash-diag script: ========================================================================== $ /usr/local/bro/2.4.1/share/broctl/scripts/crash-diag /tmp/brotest/ Bro 2.5 Linux 3.10.0-327.36.3.el7.x86_64 core.bro-1484765328-89288 Core was generated by `/usr/local/bro/2.4.1/bin/bro -i eth2 local'. Program terminated with signal 11, Segmentation fault. #0 0x00000000005ed589 in Func::Func (this=0x7ffd504940e0) at /home/fa/bro-2.5/src/Func.cc:63 Thread 1 (LWP 89288): #0 0x00000000005ed589 in Func::Func (this=0x7ffd504940e0) at /home/fa/bro-2.5/src/Func.cc:63 #1 0x00000000047dfaf0 in ?? () #2 0x00007ffd504940e0 in ?? () #3 0x0000000000000000 in ?? () ==== No reporter.log ==== No stderr.log ==== No stdout.log ==== No .cmdline ==== No .env_vars ==== No .status ==== prof.log 1484765327.517900 TCP-States: Inact. Syn. SA Part. Est. Fin. Rst. 1484765327.517900 TCP-States:Inact. 454 1611 19 7 1484765327.517900 TCP-States:Syn. 2360 1027 12 262 34 1484765327.517900 TCP-States:SA 31 22 1484765327.517900 TCP-States:Part. 807 6489 1036 824 18 1484765327.517900 TCP-States:Est. 13778 3785 97 1484765327.517900 TCP-States:Fin. 61 619 3114 2401 33 1484765327.517900 TCP-States:Rst. 27 14 106 45 7 1484765327.517900 Connections expired due to inactivity: 37736 1484765327.517900 Total reassembler data: 21844K ==== packet_filter.log #separator \x09 #set_separator , #empty_field (empty) #unset_field - #path packet_filter #open 2017-01-18-13-44-17 #fields ts node filter init success #types time string string bool bool 1484765057.522496 bro ip or not ip T T ==== loaded_scripts.log #separator \x09 #set_separator , #empty_field (empty) #unset_field - #path loaded_scripts #open 2017-01-18-13-44-17 #fields name #types string /usr/local/bro/2.4.1/share/bro/base/init-bare.bro /usr/local/bro/2.4.1/share/bro/base/bif/const.bif.bro .......... (And a whole lot of loaded scripts, truncated) ============================================================================= The interesting thing is, I don't have such folder as: /home/fa/ *bro-2.5/src*/Func.cc:63 in the home dir on that machine, where the error reported according to coredump. But located the Func.cc file and saw the function where the seg fault was reported: Func::Func() : scope(0), type(0) { unique_id = unique_ids.size(); unique_ids.push_back(this); } Don't have much intuition though, that what have caused it :/ Thanks, Fatema. On Wed, Jan 18, 2017 at 12:33 PM, Azoff, Justin S wrote: > > > On Jan 18, 2017, at 11:56 AM, fatema bannatwala < > fatema.bannatwala at gmail.com> wrote: > > > > Hi Seth, > > > > Thanks for the suggestions, still getting No core dump: > > I'd just run bro from a shell.. you said it crashes pretty quickly right? 
> > sudo su - > mkdir /tmp/brotest > cd /tmp/brotest > ulimit -c unlimited > /usr/local/bro/2.5/bin/bro -i eth0 local > > then it should crash and dump the core file right there. > > (replace eth0 with whatever) > > -- > - Justin Azoff > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170118/a2452335/attachment-0001.html From johanna at icir.org Wed Jan 18 16:06:48 2017 From: johanna at icir.org (Johanna Amann) Date: Wed, 18 Jan 2017 16:06:48 -0800 Subject: [Bro] traffic to logger from workers In-Reply-To: References: Message-ID: <20170119000629.nwufvi5pexwfk4k6@wifi218.sys.ICSI.Berkeley.EDU> n Wed, Jan 18, 2017 at 07:51:54AM -0500, erik clark wrote: > Does the logger receive traffic over an encrypted tunnel? It does not > appear to be the case. No, Bro to Bro communication is not encrypted. Johanna From johanna at icir.org Wed Jan 18 16:11:58 2017 From: johanna at icir.org (Johanna Amann) Date: Wed, 18 Jan 2017 16:11:58 -0800 Subject: [Bro] Downgrade Bro from 2.5 to 2.4 In-Reply-To: <35e5d291e0cc4fb22c469804ae214aa2@localhost> References: <35e5d291e0cc4fb22c469804ae214aa2@localhost> Message-ID: <20170119001158.jduvc534oeo7rleh@wifi218.sys.ICSI.Berkeley.EDU> On Wed, Jan 11, 2017 at 04:14:09PM -0700, James Lay wrote: > On 2017-01-11 16:10, John Edwards wrote: > > Hi, > > > > Can someone point me to an ubuntu .deb 2.4 bro package? I have > > upgraded our production sensor and it has broken the Splunk TA for Bro > > and HTTP log isnt ingesting anymore. quickest way is to downgrade > > back to 2.4. Anyone know where i can find it? Seems everywhere i have > > looked the repos have the 2.5 copy only > > > > Cheers, > > John > > > > _______________________________________________ > > Bro mailing list > > bro at bro-ids.org > > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > Might still be in your apt cache at: /var/cache/apt/archives/ Also - I am not sure if this is of much help, but the source files to generate the .deb/.rpm packages are available at https://build.opensuse.org/package/show/network:bro/bro?rev=6 Johanna From johanna at icir.org Wed Jan 18 16:13:41 2017 From: johanna at icir.org (Johanna Amann) Date: Wed, 18 Jan 2017 16:13:41 -0800 Subject: [Bro] Detecting lost broccoli events in python In-Reply-To: <586E0185.9090506@consistec.de> References: <586E0185.9090506@consistec.de> Message-ID: <20170119001341.vu2hn7aq24aifkvc@wifi218.sys.ICSI.Berkeley.EDU> Hello Dirk, as far as I am aware, events never should be dropped -- you probably will either see memory growth on the sender side, lag, or a broken connection at some point of time. Johanna On Thu, Jan 05, 2017 at 09:19:17AM +0100, Dirk Leinenbach wrote: > Hi all, > > I'm receiving bro events with a python script via broccoli python > bindings. Is it possible to detect overload scenarios (events are being > dropped because python not fast enough) and log them in some way? > Preferably I would like to detect this from python, but if it's possible > on the sender side that would also be of help. > > Does anybody have an idea? I didn't find anything in the broccoli-python > doc. > > Thanks, > > Dirk > > -- > > Dr.-Ing. 
Dirk Leinenbach - Leitung Softwareentwicklung > consistec Engineering & Consulting GmbH > ------------------------------------------------------------------ > > Europaallee 5 Fon: +49 (0)681 / 959044-0 > D-66113 Saarbr?cken Fax: +49 (0)681 / 959044-11 > http://www.consistec.de e-mail: dirk.leinenbach at consistec.de > > Registergericht: Amtsgericht Saarbr?cken > Registerblatt: HRB12003 > Gesch?ftsf?hrer: Dr. Thomas Sinnwell, Volker Leiendecker, Stefan Sinnwell > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > From johanna at icir.org Wed Jan 18 16:34:46 2017 From: johanna at icir.org (Johanna Amann) Date: Wed, 18 Jan 2017 16:34:46 -0800 Subject: [Bro] bif example In-Reply-To: References: Message-ID: <20170119003446.ydz6y7tufvt5ylfq@wifi218.sys.ICSI.Berkeley.EDU> Hi, > I have a question about BIF example > . I am trying > to write my own BIF functions. I'd like to store some data (i.e. pass in a > record to a BIF function) and retrieve it later as a record when I am > processing traffic. I am not quite sure that I understand - do you want the bif to store data that can be accessed later by the same (or a different) bif? I am not sure if I know of anyone doing that - it is more common for a bif to return data, that the user then can store somewhere in scriptland (e.g. in the connection record). > In the example, I see 'foobar' record is defined in bro.init. There is a > declaration of foobar record in types.bif. This is being accessed in > bro.bif. How is the 'foobar' record type resolved when it's referenced > in bro.bif? Is the example complete or is it missing some includes and > such? The example is a bit out of date here as bro.init does not exist anymore. I assume the best way to see how something like this works is to look at the bifs that are added by one of the individual protocol or file analyzers, since they are smaller, all necessary files are contained in a directory, and they work very similar to how you would add bifs in a package that you create. That being said, the general approach is correct - you create a type in scriptland, e.g. by adding it to init-bare.bro, you then can add it to types.bif, and use it, either globally in bro.bif (which is a bit special), or if you are creating your own functions.bif, in there, after including types.bif.h. > I tried to the same but my bro script fails because my bif file doesn't > know about my record type. I included my 'types.bif.h' in my bif file get > it compiled without errors. But it fails to load because it does not know > about my record type. I get the error 'identifier not defined:'. Any help > is appreciated. Thanks. That sounds like more of a problem with the original definition of the type - where exactly did you define it? init-bare? Johanna From pssunu6 at gmail.com Thu Jan 19 00:43:30 2017 From: pssunu6 at gmail.com (ps sunu) Date: Thu, 19 Jan 2017 14:13:30 +0530 Subject: [Bro] notice log content into new log file Message-ID: Hi Is it possible to write Notice log content into new log file Regards, sunu -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170119/e405dd6b/attachment.html From jan.grashoefer at gmail.com Thu Jan 19 01:12:14 2017 From: jan.grashoefer at gmail.com (=?UTF-8?Q?Jan_Grash=c3=b6fer?=) Date: Thu, 19 Jan 2017 10:12:14 +0100 Subject: [Bro] notice log content into new log file In-Reply-To: References: Message-ID: <4c3f40d5-898e-46d3-2e96-0410f14af694@gmail.com> Hi sunu, > Is it possible to write Notice log content into new log file I am not sure what you mean by new log file but you might want to have a look at: https://www.bro.org/sphinx-git/frameworks/logging.html#filters Best regards, Jan From pssunu6 at gmail.com Thu Jan 19 01:42:11 2017 From: pssunu6 at gmail.com (ps sunu) Date: Thu, 19 Jan 2017 15:12:11 +0530 Subject: [Bro] notice log content into new log file In-Reply-To: <4c3f40d5-898e-46d3-2e96-0410f14af694@gmail.com> References: <4c3f40d5-898e-46d3-2e96-0410f14af694@gmail.com> Message-ID: Hi, @load base/frameworks/notice module DetectTor; export { redef enum Notice::Type += { ## Indicates that a host using Tor was discovered. DetectTor::Found }; ## Distinct Tor-like X.509 certificates to see before deciding it's Tor. const tor_cert_threshold = 10.0; ## Time period to see the :bro:see:`tor_cert_threshold` certificates ## before deciding it's Tor. const tor_cert_period = 5min; # Number of Tor certificate samples to collect. const tor_cert_samples = 3 &redef; } event bro_init() { local r1 = SumStats::Reducer($stream="ssl.tor-looking-cert", $apply=set(SumStats::UNIQUE, SumStats::SAMPLE), $num_samples=tor_cert_samples); SumStats::create([$name="detect-tor", $epoch=tor_cert_period, $reducers=set(r1), $threshold_val(key: SumStats::Key, result: SumStats::Result) = { return result["ssl.tor-looking-cert"]$unique+0.0; }, $threshold=tor_cert_threshold, $threshold_crossed(key: SumStats::Key, result: SumStats::Result) = { local r = result["ssl.tor-looking-cert"]; local samples = r$samples; local sub_msg = fmt("Sampled certificates: "); for ( i in samples ) { if ( samples[i]?$str ) sub_msg = fmt("%s%s %s", sub_msg, i==0 ? "":",", samples[i]$str); } NOTICE([$note=DetectTor::Found, $msg=fmt("%s was found using Tor by connecting to servers with at least %d unique weird certs", key$host, r$unique), $sub=sub_msg, $src=key$host, $identifier=cat(key$host)]); }]); } event ssl_established(c: connection ) { if ( c$ssl?$subject && /^CN=www.[^=,]*$/ == c$ssl$subject && c$ssl?$issuer && /^CN=www.[^=,]*$/ == c$ssl$issuer ) { SumStats::observe("ssl.tor-looking-cert", [$host=c$id$orig_h], [$str=c$ssl$subject]); } } Above code is my sample code , and the code will generate notice.log when tor found below code is the part which will write the content in notice.log , i want this content into separate log example tor.log, is it possible ?? 
NOTICE([$note=DetectTor::Found, $msg=fmt("%s was found using Tor by connecting to servers with at least %d unique weird certs", key$host, r$unique), $sub=sub_msg, $src=key$host, $identifier=cat(key$host)]); }]); } Regards, sunu On Thu, Jan 19, 2017 at 2:42 PM, Jan Grash?fer wrote: > Hi sunu, > > > Is it possible to write Notice log content into new log file > > I am not sure what you mean by new log file but you might want to have a > look at: https://www.bro.org/sphinx-git/frameworks/logging.html#filters > > Best regards, > Jan > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170119/120f264f/attachment-0001.html From jan.grashoefer at gmail.com Thu Jan 19 01:50:36 2017 From: jan.grashoefer at gmail.com (=?UTF-8?Q?Jan_Grash=c3=b6fer?=) Date: Thu, 19 Jan 2017 10:50:36 +0100 Subject: [Bro] notice log content into new log file In-Reply-To: References: <4c3f40d5-898e-46d3-2e96-0410f14af694@gmail.com> Message-ID: Hi sunu, > Above code is my sample code , and the code > will generate notice.log when tor found below code is the part which > will write the content in notice.log , i want this content into > separate log so what you want is to write some data to your own log stream. Have a look at: https://www.bro.org/sphinx-git/frameworks/logging.html#streams Additionally there is an example on try.bro.org: http://try.bro.org/#/?example=modules-log-factorial I hope this helps, Jan From zaixer at gmail.com Thu Jan 19 03:33:37 2017 From: zaixer at gmail.com (M A) Date: Thu, 19 Jan 2017 14:33:37 +0300 Subject: [Bro] Segmentation fault while using own signature. In-Reply-To: References: <8DA7E854-CC5A-4DC0-BF9B-F381EF616FE6@icir.org> <6C1BBDA0-46C5-4264-BA39-E9BB006B77A8@icir.org> <9041F357-128D-4506-884B-ACF86DC6CC80@icir.org> Message-ID: I have come across the same behavior while testing a script against some odd PCAPs and found that most of the time tweaking the filter to be more restrictive solved the issue. so, what you can do is: 1-Test against one signature only to determine which specific signature causes the issue. 2-Count # of "Potential rootkit" string existence for before and after and see which one got more hits (supposedly this should be the one causing the issue -less restrictive). This might validate that regex is working as expected.....usage of debugging PRINT also might come handy. Thanks On 18 January 2017 at 22:30, fatema bannatwala wrote: > Thanks Jon for the links! > > Thanks Justin for alternative. > > We have our cluster in production, hence currently that sig file is > disabled so that the cluster runs properly. > Hence, to recreate the seg fault issue this time, rather than enabling it > for the whole cluster, I just enabled it (in local.bro) for the previous > version > of bro that we still have around, and ran a single bro process for that > old version, as you suggested. > This time I was able to generate core dump for that single process. > > I ran the core dump through the crash-diag script: > > ========================================================================== > $ /usr/local/bro/2.4.1/share/broctl/scripts/crash-diag /tmp/brotest/ > > Bro 2.5 > Linux 3.10.0-327.36.3.el7.x86_64 > > core.bro-1484765328-89288 > > Core was generated by `/usr/local/bro/2.4.1/bin/bro -i eth2 local'. 
> Program terminated with signal 11, Segmentation fault. > #0 0x00000000005ed589 in Func::Func (this=0x7ffd504940e0) at > /home/fa/bro-2.5/src/Func.cc:63 > > Thread 1 (LWP 89288): > #0 0x00000000005ed589 in Func::Func (this=0x7ffd504940e0) at > /home/fa/bro-2.5/src/Func.cc:63 > #1 0x00000000047dfaf0 in ?? () > #2 0x00007ffd504940e0 in ?? () > #3 0x0000000000000000 in ?? () > > ==== No reporter.log > > ==== No stderr.log > > ==== No stdout.log > > ==== No .cmdline > > ==== No .env_vars > > ==== No .status > > ==== prof.log > 1484765327.517900 TCP-States: Inact. Syn. SA Part. Est. > Fin. Rst. > 1484765327.517900 TCP-States:Inact. 454 1611 > 19 7 > 1484765327.517900 TCP-States:Syn. 2360 1027 12 > 262 34 > 1484765327.517900 TCP-States:SA 31 22 > 1484765327.517900 TCP-States:Part. 807 6489 1036 > 824 18 > 1484765327.517900 TCP-States:Est. 13778 > 3785 97 > 1484765327.517900 TCP-States:Fin. 61 619 3114 > 2401 33 > 1484765327.517900 TCP-States:Rst. 27 14 106 > 45 7 > 1484765327.517900 Connections expired due to inactivity: 37736 > 1484765327.517900 Total reassembler data: 21844K > > ==== packet_filter.log > #separator \x09 > #set_separator , > #empty_field (empty) > #unset_field - > #path packet_filter > #open 2017-01-18-13-44-17 > #fields ts node filter init success > #types time string string bool bool > 1484765057.522496 bro ip or not ip T T > > ==== loaded_scripts.log > #separator \x09 > #set_separator , > #empty_field (empty) > #unset_field - > #path loaded_scripts > #open 2017-01-18-13-44-17 > #fields name > #types string > /usr/local/bro/2.4.1/share/bro/base/init-bare.bro > /usr/local/bro/2.4.1/share/bro/base/bif/const.bif.bro > .......... (And a whole lot of loaded scripts, truncated) > > ============================================================ > ================= > > The interesting thing is, I don't have such folder as: /home/fa/ > *bro-2.5/src*/Func.cc:63 in the home dir on that machine, where the error > reported according to coredump. > But located the Func.cc file and saw the function where the seg fault was > reported: > > Func::Func() : scope(0), type(0) > { > unique_id = unique_ids.size(); > unique_ids.push_back(this); > } > > Don't have much intuition though, that what have caused it :/ > > Thanks, > Fatema. > > On Wed, Jan 18, 2017 at 12:33 PM, Azoff, Justin S > wrote: > >> >> > On Jan 18, 2017, at 11:56 AM, fatema bannatwala < >> fatema.bannatwala at gmail.com> wrote: >> > >> > Hi Seth, >> > >> > Thanks for the suggestions, still getting No core dump: >> >> I'd just run bro from a shell.. you said it crashes pretty quickly right? >> >> sudo su - >> mkdir /tmp/brotest >> cd /tmp/brotest >> ulimit -c unlimited >> /usr/local/bro/2.5/bin/bro -i eth0 local >> >> then it should crash and dump the core file right there. >> >> (replace eth0 with whatever) >> >> -- >> - Justin Azoff >> >> > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170119/6bc3210b/attachment.html From philosnef at gmail.com Thu Jan 19 05:37:30 2017 From: philosnef at gmail.com (erik clark) Date: Thu, 19 Jan 2017 08:37:30 -0500 Subject: [Bro] traffic to logger from workers In-Reply-To: <20170119000629.nwufvi5pexwfk4k6@wifi218.sys.ICSI.Berkeley.EDU> References: <20170119000629.nwufvi5pexwfk4k6@wifi218.sys.ICSI.Berkeley.EDU> Message-ID: This seems to be a pretty big oversight. Depending on the controls you implement from NIST 800-53 Rev 4, encryption between processes is mentioned. In our environment, it is not just nice to have, it is a requirement. Since no Bro to Bro communication is encrypted, this makes it 100% impossible for us to have a Bro cluster spanning multiple servers. We are relegated to load balancing via a smart tap and hosting all-in-one Bro instances in disparate hardware, and then forwarding the logs off the box with Splunk which _does_ do encrypted log handoff to the indexers. I understand that there is some concern about possible performance implications, but making an application that is completely devoid of FIPS 140-2 compliance does not seem to be very good. What can be done to get encryption into Bro to Bro communication? If nothing else, at least to the logger. The other elements (workers, proxies) can be handled by pushing proxies to the individual hosts and blocking proxy port requests from Bro between hosts. On Wed, Jan 18, 2017 at 7:06 PM, Johanna Amann wrote: > n Wed, Jan 18, 2017 at 07:51:54AM -0500, erik clark wrote: > > Does the logger receive traffic over an encrypted tunnel? It does not > > appear to be the case. > > No, Bro to Bro communication is not encrypted. > > Johanna > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170119/61610aa0/attachment.html From jazoff at illinois.edu Thu Jan 19 07:12:33 2017 From: jazoff at illinois.edu (Azoff, Justin S) Date: Thu, 19 Jan 2017 15:12:33 +0000 Subject: [Bro] traffic to logger from workers In-Reply-To: References: <20170119000629.nwufvi5pexwfk4k6@wifi218.sys.ICSI.Berkeley.EDU> Message-ID: <9FB884A1-3816-46E3-B06E-0B986509CA5A@illinois.edu> > On Jan 19, 2017, at 8:37 AM, erik clark wrote: > > This seems to be a pretty big oversight. Depending on the controls you implement from NIST 800-53 Rev 4, encryption between processes is mentioned. In our environment, it is not just nice to have, it is a requirement. > > Since no Bro to Bro communication is encrypted, this makes it 100% impossible for us to have a Bro cluster spanning multiple servers. We are relegated to load balancing via a smart tap and hosting all-in-one Bro instances in disparate hardware, and then forwarding the logs off the box with Splunk which _does_ do encrypted log handoff to the indexers. > > I understand that there is some concern about possible performance implications, but making an application that is completely devoid of FIPS 140-2 compliance does not seem to be very good. > If encryption between all processes is a requirement in your environment then what exactly is Bro seeing via the taps? Anything that Bro is seeing on the taps is not encrypted and is ALREADY being transmitted in plain text in the first place. > What can be done to get encryption into Bro to Bro communication? If nothing else, at least to the logger. 
The other elements (workers, proxies) can be handled by pushing proxies to the individual hosts and blocking proxy port requests from Bro between hosts. ipsec, openvpn, etc. Or possibly via tls via broker at some point. -- - Justin Azoff From vladg at illinois.edu Thu Jan 19 07:51:39 2017 From: vladg at illinois.edu (Vlad Grigorescu) Date: Thu, 19 Jan 2017 09:51:39 -0600 Subject: [Bro] traffic to logger from workers In-Reply-To: <9FB884A1-3816-46E3-B06E-0B986509CA5A@illinois.edu> References: <20170119000629.nwufvi5pexwfk4k6@wifi218.sys.ICSI.Berkeley.EDU> <9FB884A1-3816-46E3-B06E-0B986509CA5A@illinois.edu> Message-ID: I've used stunnel for this in the past, and it worked well. "Azoff, Justin S" writes: >> On Jan 19, 2017, at 8:37 AM, erik clark wrote: >> >> This seems to be a pretty big oversight. Depending on the controls you implement from NIST 800-53 Rev 4, encryption between processes is mentioned. In our environment, it is not just nice to have, it is a requirement. >> >> Since no Bro to Bro communication is encrypted, this makes it 100% impossible for us to have a Bro cluster spanning multiple servers. We are relegated to load balancing via a smart tap and hosting all-in-one Bro instances in disparate hardware, and then forwarding the logs off the box with Splunk which _does_ do encrypted log handoff to the indexers. >> >> I understand that there is some concern about possible performance implications, but making an application that is completely devoid of FIPS 140-2 compliance does not seem to be very good. >> > > If encryption between all processes is a requirement in your environment then what exactly is Bro seeing via the taps? Anything that Bro is seeing on the taps is not encrypted and is ALREADY being transmitted in plain text in the first place. > >> What can be done to get encryption into Bro to Bro communication? If nothing else, at least to the logger. The other elements (workers, proxies) can be handled by pushing proxies to the individual hosts and blocking proxy port requests from Bro between hosts. > > ipsec, openvpn, etc. Or possibly via tls via broker at some point. > > -- > - Justin Azoff > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 800 bytes Desc: not available Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170119/efe8b83f/attachment.bin From vladg at illinois.edu Thu Jan 19 08:02:09 2017 From: vladg at illinois.edu (Vlad Grigorescu) Date: Thu, 19 Jan 2017 10:02:09 -0600 Subject: [Bro] Best set up practice In-Reply-To: References: <1500FC08-38ED-4394-8469-65387DA0E8F0@gmail.com> Message-ID: Thanks, Michael! I've been meaning to look into this for a while. I'll have to give this a shot. --Vlad Michael Shirk writes: > I wrote up a basic how-to for getting Bro working within a FreeBSD jail. > > https://www.daemon-security.com/2017/01/bro-jail-0118.html > > > -- > Michael Shirk > Daemon Security, Inc. > http://www.daemon-security.com > > On Dec 10, 2016 11:49 AM, "Michael Shirk" wrote: > >> In the FreeBSD sense, jail all the things. You will be able to find some >> write-ups for Snort, but not so much for Bro, which I will look to create >> and blog about. >> >> The main thing is that when you setup the jail, make sure the jail is >> configured for the interface you wish to monitor. 
You world normally >> monitor the LAN side, but you could have a separate jail configured to >> monitor the external side in a separate jail looking for threats and >> traffic making it in and out of your firewall. >> >> A couple of additional items I myself have not had the chance to play with >> but should be possible in Bro 2.5 is the ability to interact with ipfw/pf >> with the NetControl Framework to use update the firewall on the fly, also >> for shunting flows. >> >> As far as logging, I normally stick to the standard Bro log files, and you >> can run tools from the host OS to process the log files in the jail if you >> want. >> >> >> >> -- >> Michael Shirk >> Daemon Security, Inc. >> http://www.daemon-security.com >> >> >> On Dec 9, 2016 13:31, "Todd Carpenter" wrote: >> >>> Hi all, >>> >>> Just joined the list and had a question ? that I apparently sent to >>> customer support ..oops. >>> >>> anyways Im building a freebsd server and was wondering what the best >>> practice / placement for bro would be >>> >>> Essentially It?s a forward facing firewall based on freebsd. SO I was >>> wondering if its best to deploy on the host OS, or create a jail or two and >>> funnel traffic through that? I also wanted to know if there were any >>> special considerations with jails / setup. >>> >>> some options I came up with .. >>> >>> internet > firewall > lan/dmz >>> internet > firewall > nginx proxy > lan/dmz >>> internet > firewall > dmz jail > NO lan >>> internet > firewall > bro jail > proxy jail > lan/dmz >>> >>> Thanks! >>> _______________________________________________ >>> Bro mailing list >>> bro at bro-ids.org >>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >> >> > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 800 bytes Desc: not available Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170119/f0bff255/attachment.bin From hosom at battelle.org Thu Jan 19 09:35:12 2017 From: hosom at battelle.org (Hosom, Stephen M) Date: Thu, 19 Jan 2017 17:35:12 +0000 Subject: [Bro] traffic to logger from workers In-Reply-To: References: <20170119000629.nwufvi5pexwfk4k6@wifi218.sys.ICSI.Berkeley.EDU> Message-ID: Which 800-53 control are you referencing? I?d like to help you. From: bro-bounces at bro.org [mailto:bro-bounces at bro.org] On Behalf Of erik clark Sent: Thursday, January 19, 2017 8:38 AM To: Johanna Amann Cc: Bro-IDS Subject: Re: [Bro] traffic to logger from workers This seems to be a pretty big oversight. Depending on the controls you implement from NIST 800-53 Rev 4, encryption between processes is mentioned. In our environment, it is not just nice to have, it is a requirement. Since no Bro to Bro communication is encrypted, this makes it 100% impossible for us to have a Bro cluster spanning multiple servers. We are relegated to load balancing via a smart tap and hosting all-in-one Bro instances in disparate hardware, and then forwarding the logs off the box with Splunk which _does_ do encrypted log handoff to the indexers. I understand that there is some concern about possible performance implications, but making an application that is completely devoid of FIPS 140-2 compliance does not seem to be very good. What can be done to get encryption into Bro to Bro communication? If nothing else, at least to the logger. 
The other elements (workers, proxies) can be handled by pushing proxies to the individual hosts and blocking proxy port requests from Bro between hosts. On Wed, Jan 18, 2017 at 7:06 PM, Johanna Amann > wrote: n Wed, Jan 18, 2017 at 07:51:54AM -0500, erik clark wrote: > Does the logger receive traffic over an encrypted tunnel? It does not > appear to be the case. No, Bro to Bro communication is not encrypted. Johanna -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170119/d5e5c040/attachment.html From charles.a.fair at gmail.com Thu Jan 19 09:56:14 2017 From: charles.a.fair at gmail.com (Charles Fair) Date: Thu, 19 Jan 2017 12:56:14 -0500 Subject: [Bro] Simple way to get a combined unique IP list from an arbitrary date range Message-ID: <533D672A-9691-45DA-9F44-4D99359C38D6@gmail.com> Help with this would be greatly appreciated. I am trying to figure out a simple way to get a combined unique ip list from an arbitrary date range. I want the unique IP addresses as a single list from the conn.log fields ip.orig_h and ip.resp_h. Answering questions like give me the unique IPs from the past 7/14/30/60/90 days would be quite tedious this way. I can do it manually as the below example using a temp file for the working data. Thanks! Chuck ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ #!/bin/bash # # Create a single list of all unique IP addresses with a # sorted descending count from the # conn.log consisting of ip.orig_h and ip.resp_h # for a given five day period # zcat 2016-01-01/conn.* 2016-01-02/conn.* 2016-01-03/conn.* 2016-01-04/conn.* 2016-01-05/conn.* | bro-cut ip.orig_h > /tmp/tempalluniqip.txt zcat 2016-01-01/conn.* 2016-01-02/conn.* 2016-01-03/conn.* 2016-01-04/conn.* 2016-01-05/conn.* | bro-cut ip.resp_h >> /tmp/tempalluniqip.txt cat /tmp/tempalluniqip.txt | sort -n | uniq -c | sort -n > /tmp/alluniqip.txt ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ From shirkdog.bsd at gmail.com Thu Jan 19 09:56:55 2017 From: shirkdog.bsd at gmail.com (Michael Shirk) Date: Thu, 19 Jan 2017 12:56:55 -0500 Subject: [Bro] traffic to logger from workers In-Reply-To: References: <20170119000629.nwufvi5pexwfk4k6@wifi218.sys.ICSI.Berkeley.EDU> Message-ID: I think this refers to AC-4 for information flow enforcement. But this is where you would configure border protections or segmentation of your bro data on its own private network, or configure encrypted tunnels. -- Michael Shirk Daemon Security, Inc. http://www.daemon-security.com On Jan 19, 2017 12:36 PM, "Hosom, Stephen M" wrote: > Which 800-53 control are you referencing? I?d like to help you. > > > > *From:* bro-bounces at bro.org [mailto:bro-bounces at bro.org] *On Behalf Of *erik > clark > *Sent:* Thursday, January 19, 2017 8:38 AM > *To:* Johanna Amann > *Cc:* Bro-IDS > *Subject:* Re: [Bro] traffic to logger from workers > > > > This seems to be a pretty big oversight. Depending on the controls you > implement from NIST 800-53 Rev 4, encryption between processes is > mentioned. In our environment, it is not just nice to have, it is a > requirement. > > > > Since no Bro to Bro communication is encrypted, this makes it 100% > impossible for us to have a Bro cluster spanning multiple servers. We are > relegated to load balancing via a smart tap and hosting all-in-one Bro > instances in disparate hardware, and then forwarding the logs off the box > with Splunk which _does_ do encrypted log handoff to the indexers. 
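On the "configure encrypted tunnels" suggestion above: stunnel (which Vlad mentions elsewhere in the thread) is probably the least invasive way to wrap the worker-to-logger TCP session in TLS today. A rough sketch of the two stunnel.conf files involved (the port numbers and host name are placeholders, the real communication port is whatever broctl assigned in your cluster, and the workers then have to be pointed at the local stunnel endpoint rather than at the logger directly):

; worker box (client mode): accept cleartext locally, forward as TLS to the logger box
client = yes
[bro-logger]
accept  = 127.0.0.1:47761
connect = logger.example.org:47762

; logger box (server mode): accept TLS from the workers, hand cleartext to the local Bro logger
cert = /etc/stunnel/stunnel.pem
[bro-logger]
accept  = 0.0.0.0:47762
connect = 127.0.0.1:47761

An IPsec or OpenVPN tunnel between the cluster hosts gets the same result without touching the Bro configuration at all.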
> > > > I understand that there is some concern about possible performance > implications, but making an application that is completely devoid of FIPS > 140-2 compliance does not seem to be very good. > > > > What can be done to get encryption into Bro to Bro communication? If > nothing else, at least to the logger. The other elements (workers, proxies) > can be handled by pushing proxies to the individual hosts and blocking > proxy port requests from Bro between hosts. > > > > On Wed, Jan 18, 2017 at 7:06 PM, Johanna Amann wrote: > > n Wed, Jan 18, 2017 at 07:51:54AM -0500, erik clark wrote: > > Does the logger receive traffic over an encrypted tunnel? It does not > > appear to be the case. > > No, Bro to Bro communication is not encrypted. > > Johanna > > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170119/a0249db0/attachment-0001.html From DHoelzer at sans.org Thu Jan 19 10:04:07 2017 From: DHoelzer at sans.org (Hoelzer, Dave) Date: Thu, 19 Jan 2017 18:04:07 +0000 Subject: [Bro] traffic to logger from workers In-Reply-To: References: <20170119000629.nwufvi5pexwfk4k6@wifi218.sys.ICSI.Berkeley.EDU> Message-ID: <1596894D-92EF-4E1C-8D88-F1C97429C7F1@sans.org> Isn?t that how everyone does it? I never have IDS or other security events passing over the internal network. It?s always on a private, dark, network. ??????????????????????? David Hoelzer Dean of Faculty, STI Fellow, SANS.org On Jan 19, 2017, at 12:56 PM, Michael Shirk > wrote: I think this refers to AC-4 for information flow enforcement. But this is where you would configure border protections or segmentation of your bro data on its own private network, or configure encrypted tunnels. -- Michael Shirk Daemon Security, Inc. http://www.daemon-security.com On Jan 19, 2017 12:36 PM, "Hosom, Stephen M" > wrote: Which 800-53 control are you referencing? I?d like to help you. From: bro-bounces at bro.org [mailto:bro-bounces at bro.org] On Behalf Of erik clark Sent: Thursday, January 19, 2017 8:38 AM To: Johanna Amann > Cc: Bro-IDS > Subject: Re: [Bro] traffic to logger from workers This seems to be a pretty big oversight. Depending on the controls you implement from NIST 800-53 Rev 4, encryption between processes is mentioned. In our environment, it is not just nice to have, it is a requirement. Since no Bro to Bro communication is encrypted, this makes it 100% impossible for us to have a Bro cluster spanning multiple servers. We are relegated to load balancing via a smart tap and hosting all-in-one Bro instances in disparate hardware, and then forwarding the logs off the box with Splunk which _does_ do encrypted log handoff to the indexers. I understand that there is some concern about possible performance implications, but making an application that is completely devoid of FIPS 140-2 compliance does not seem to be very good. What can be done to get encryption into Bro to Bro communication? If nothing else, at least to the logger. The other elements (workers, proxies) can be handled by pushing proxies to the individual hosts and blocking proxy port requests from Bro between hosts. On Wed, Jan 18, 2017 at 7:06 PM, Johanna Amann > wrote: n Wed, Jan 18, 2017 at 07:51:54AM -0500, erik clark wrote: > Does the logger receive traffic over an encrypted tunnel? It does not > appear to be the case. 
No, Bro to Bro communication is not encrypted. Johanna _______________________________________________ Bro mailing list bro at bro-ids.org http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro _______________________________________________ Bro mailing list bro at bro-ids.org http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170119/d16389bc/attachment.html From jazoff at illinois.edu Thu Jan 19 10:23:32 2017 From: jazoff at illinois.edu (Azoff, Justin S) Date: Thu, 19 Jan 2017 18:23:32 +0000 Subject: [Bro] Simple way to get a combined unique IP list from an arbitrary date range In-Reply-To: <533D672A-9691-45DA-9F44-4D99359C38D6@gmail.com> References: <533D672A-9691-45DA-9F44-4D99359C38D6@gmail.com> Message-ID: <8F1E67EB-4C11-46AA-8C95-BFBF42B22473@illinois.edu> > On Jan 19, 2017, at 12:56 PM, Charles Fair wrote: > > Help with this would be greatly appreciated. I am trying to figure out a simple way to get a combined unique ip list from an arbitrary date range. I want the unique IP addresses as a single list from the conn.log fields ip.orig_h and ip.resp_h. Answering questions like give me the unique IPs from the past 7/14/30/60/90 days would be quite tedious this way. > > I can do it manually as the below example using a temp file for the working data. > > Thanks! > > Chuck This should do it: zcat 2016-01-0{1,5}/conn.* | bro-cut id.orig_h id.resp_h -F $'\n' | sort | uniq -c | sort -n > /tmp/alluniqip.txt If you're going to be doing that a lot, it would make sense to process each day individually (but keep them sorted by ip), then reporting on a date range would just involve doing a k-way merge across multiple days of data. I use this program as a replacement for sort | uniq -c | sort -n, as long as you have the memory it ends up being a lot faster: #!/usr/bin/env python import sys from collections import defaultdict c = defaultdict(int) for line in sys.stdin: c[line] += 1 top = sorted(c.items(), key=lambda (k,v): v) for k, v in top: print v, k, -- - Justin Azoff From andrew.dellana at bayer.com Thu Jan 19 10:58:37 2017 From: andrew.dellana at bayer.com (Andrew Dellana) Date: Thu, 19 Jan 2017 18:58:37 +0000 Subject: [Bro] Can't get "Notice::ACTION_EMAIL" to work Message-ID: <938005758d284196b951f976fa084cd6@moxde9.na.bayer.cnb> I am still new to bro scripting and I am working with the vt_check that sooshie wrote and trying to configure email notifications for any virus findings (monitoring multiple interfaces via network tap). I looked into the notice framework section on the webpage and am getting an error: "error in ./VT_Check.bro, line 117: unknown identifier Virus_Total_Alert, at or near "Virus_Total_Alert" ". Line 117 is the "Notice::ACTION_EMAIL" line. hook Notice::policy(n: Notice::Info) { if ( n?$conn && n$conn?$http && n$conn$http?$host ) n$email_body_sections[|n$email_body_sections|] = fmt("Virus_Total_Alert header: %s", n$conn$http$host); } Notice::ACTION_EMAIL ([$note=Virus_Total_Alert, $msg=fmt("Detected potential virus effecting computer.", key$host, r$num), $src=key$host, $identifier=cat(key$host)]); Thanks, Andrew Dellana -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170119/41d9b331/attachment-0001.html From asharma at lbl.gov Thu Jan 19 11:14:05 2017 From: asharma at lbl.gov (Aashish Sharma) Date: Thu, 19 Jan 2017 11:14:05 -0800 Subject: [Bro] Can't get "Notice::ACTION_EMAIL" to work In-Reply-To: <938005758d284196b951f976fa084cd6@moxde9.na.bayer.cnb> References: <938005758d284196b951f976fa084cd6@moxde9.na.bayer.cnb> Message-ID: <20170119191404.GA86567@mac-822.local> Andrew, I'd say everyone sets up this differently. (there are quite a few ways). Here is one simple manner in which you can escalate a notice to be also emailed. I'd first simply generate a notice like this in relevant policy: local msg=fmt("Detected potential virus effecting computer.", key$host, r$num); NOTICE([$note=Virus_Total_Alert, $msg=msg, $src=key$host, $identifier=cat(key$host)]); Then, hook Notice::policy(n: Notice::Info) { if ( n$note == Virus_Total_Alert) { add n$actions[Notice::ACTION_EMAIL];} } Hope this helps, Aashish On Thu, Jan 19, 2017 at 06:58:37PM +0000, Andrew Dellana wrote: > I am still new to bro scripting and I am working with the vt_check that sooshie wrote and trying to configure email notifications for any virus findings (monitoring multiple interfaces via network tap). I looked into the notice framework section on the webpage and am getting an error: "error in ./VT_Check.bro, line 117: unknown identifier Virus_Total_Alert, at or near "Virus_Total_Alert" ". Line 117 is the "Notice::ACTION_EMAIL" line. > > > hook Notice::policy(n: Notice::Info) > { > if ( n?$conn && n$conn?$http && n$conn$http?$host ) > n$email_body_sections[|n$email_body_sections|] = fmt("Virus_Total_Alert header: %s", n$conn$http$host); > } > > Notice::ACTION_EMAIL ([$note=Virus_Total_Alert, > $msg=fmt("Detected potential virus effecting computer.", key$host, r$num), > $src=key$host, > $identifier=cat(key$host)]); > > > Thanks, > > Andrew Dellana > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From andrew.dellana at bayer.com Thu Jan 19 11:24:14 2017 From: andrew.dellana at bayer.com (Andrew Dellana) Date: Thu, 19 Jan 2017 19:24:14 +0000 Subject: [Bro] Can't get "Notice::ACTION_EMAIL" to work In-Reply-To: <20170119191404.GA86567@mac-822.local> References: <938005758d284196b951f976fa084cd6@moxde9.na.bayer.cnb> <20170119191404.GA86567@mac-822.local> Message-ID: <09db6e7f2dbd474e97d8b04390827514@moxde9.na.bayer.cnb> Thanks Aashish! I added it in and ran the script but now it dislikes the 'key$host' in the first line. (unknown identifier key, at or near "key") Thanks, Andrew Dellana -----Original Message----- From: Aashish Sharma [mailto:asharma at lbl.gov] Sent: Thursday, January 19, 2017 2:14 PM To: Andrew Dellana Cc: bro at bro.org Subject: Re: [Bro] Can't get "Notice::ACTION_EMAIL" to work Andrew, I'd say everyone sets up this differently. (there are quite a few ways). Here is one simple manner in which you can escalate a notice to be also emailed. 
I'd first simply generate a notice like this in relevant policy: local msg=fmt("Detected potential virus effecting computer.", key$host, r$num); NOTICE([$note=Virus_Total_Alert, $msg=msg, $src=key$host, $identifier=cat(key$host)]); Then, hook Notice::policy(n: Notice::Info) { if ( n$note == Virus_Total_Alert) { add n$actions[Notice::ACTION_EMAIL];} } Hope this helps, Aashish On Thu, Jan 19, 2017 at 06:58:37PM +0000, Andrew Dellana wrote: > I am still new to bro scripting and I am working with the vt_check that sooshie wrote and trying to configure email notifications for any virus findings (monitoring multiple interfaces via network tap). I looked into the notice framework section on the webpage and am getting an error: "error in ./VT_Check.bro, line 117: unknown identifier Virus_Total_Alert, at or near "Virus_Total_Alert" ". Line 117 is the "Notice::ACTION_EMAIL" line. > > > hook Notice::policy(n: Notice::Info) > { > if ( n?$conn && n$conn?$http && n$conn$http?$host ) > n$email_body_sections[|n$email_body_sections|] = fmt("Virus_Total_Alert header: %s", n$conn$http$host); > } > > Notice::ACTION_EMAIL ([$note=Virus_Total_Alert, > $msg=fmt("Detected potential virus effecting computer.", key$host, r$num), > $src=key$host, > $identifier=cat(key$host)]); > > > Thanks, > > Andrew Dellana > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From asharma at lbl.gov Thu Jan 19 11:41:48 2017 From: asharma at lbl.gov (Aashish Sharma) Date: Thu, 19 Jan 2017 11:41:48 -0800 Subject: [Bro] Can't get "Notice::ACTION_EMAIL" to work In-Reply-To: <09db6e7f2dbd474e97d8b04390827514@moxde9.na.bayer.cnb> References: <938005758d284196b951f976fa084cd6@moxde9.na.bayer.cnb> <20170119191404.GA86567@mac-822.local> <09db6e7f2dbd474e97d8b04390827514@moxde9.na.bayer.cnb> Message-ID: <20170119194146.GB86567@mac-822.local> oh my bad, I didn't quite read > local msg=fmt("Detected potential virus effecting computer.", key$host, r$num); it should be: local msg=fmt("Detected potential virus effecting computer: %s, %s", key$host, r$num); On Thu, Jan 19, 2017 at 07:24:14PM +0000, Andrew Dellana wrote: > Thanks Aashish! > > I added it in and ran the script but now it dislikes the 'key$host' in the first line. (unknown identifier key, at or near "key") > > > Thanks, > > Andrew Dellana > > -----Original Message----- > From: Aashish Sharma [mailto:asharma at lbl.gov] > Sent: Thursday, January 19, 2017 2:14 PM > To: Andrew Dellana > Cc: bro at bro.org > Subject: Re: [Bro] Can't get "Notice::ACTION_EMAIL" to work > > Andrew, > > I'd say everyone sets up this differently. (there are quite a few ways). > > Here is one simple manner in which you can escalate a notice to be also emailed. I'd first simply generate a notice like this in relevant policy: > > local msg=fmt("Detected potential virus effecting computer.", key$host, r$num); > NOTICE([$note=Virus_Total_Alert, $msg=msg, $src=key$host, $identifier=cat(key$host)]); > > > Then, > > hook Notice::policy(n: Notice::Info) > { > if ( n$note == Virus_Total_Alert) > { add n$actions[Notice::ACTION_EMAIL];} > } > > > Hope this helps, > Aashish > > > On Thu, Jan 19, 2017 at 06:58:37PM +0000, Andrew Dellana wrote: > > I am still new to bro scripting and I am working with the vt_check that sooshie wrote and trying to configure email notifications for any virus findings (monitoring multiple interfaces via network tap). 
I looked into the notice framework section on the webpage and am getting an error: "error in ./VT_Check.bro, line 117: unknown identifier Virus_Total_Alert, at or near "Virus_Total_Alert" ". Line 117 is the "Notice::ACTION_EMAIL" line. > > > > > > hook Notice::policy(n: Notice::Info) > > { > > if ( n?$conn && n$conn?$http && n$conn$http?$host ) > > n$email_body_sections[|n$email_body_sections|] = fmt("Virus_Total_Alert header: %s", n$conn$http$host); > > } > > > > Notice::ACTION_EMAIL ([$note=Virus_Total_Alert, > > $msg=fmt("Detected potential virus effecting computer.", key$host, r$num), > > $src=key$host, > > $identifier=cat(key$host)]); > > > > > > Thanks, > > > > Andrew Dellana > > > _______________________________________________ > > Bro mailing list > > bro at bro-ids.org > > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > From jazoff at illinois.edu Thu Jan 19 12:02:58 2017 From: jazoff at illinois.edu (Azoff, Justin S) Date: Thu, 19 Jan 2017 20:02:58 +0000 Subject: [Bro] Can't get "Notice::ACTION_EMAIL" to work In-Reply-To: <20170119191404.GA86567@mac-822.local> References: <938005758d284196b951f976fa084cd6@moxde9.na.bayer.cnb> <20170119191404.GA86567@mac-822.local> Message-ID: > On Jan 19, 2017, at 2:14 PM, Aashish Sharma wrote: > > > Then, > > hook Notice::policy(n: Notice::Info) > { > if ( n$note == Virus_Total_Alert) > { add n$actions[Notice::ACTION_EMAIL];} > } This 2nd part is a common use case and is also built into the default notice::policy as if ( n$note in Notice::emailed_types ) add n$actions[ACTION_EMAIL]; so all you need in your scripts is redef Notice::emailed_types += { Virus_Total_Alert }; -- - Justin Azoff From bro at pingtrip.com Fri Jan 20 07:29:55 2017 From: bro at pingtrip.com (Dave Crawford) Date: Fri, 20 Jan 2017 10:29:55 -0500 Subject: [Bro] Tap configuration In-Reply-To: References: <6a7be6035e6248aa8571c6f59d54658a@moxde9.na.bayer.cnb> <69d5068cc10b48719f5e52181224e124@moxde9.na.bayer.cnb> <66492EEE-40EA-4B44-BAA4-47B81AD0A882@pingtrip.com> Message-ID: <798B1629-E3C1-4214-AE82-45C90F44139E@pingtrip.com> Sorry for the delayed response Jon, We?ve been tracking the "Suricata Extreme Performance Tuning" efforts where they?re hitting 20Gbps on a single box. They have a pretty good write-up of their research into the flow of a packet through the hardware/software layers: https://github.com/pevma/SEPTun The RSS=1 setting is to avoid packet re-ording impacts. Are you using PF_RING DNA with symmetric RSS? One new issue we?re working on since switching from pf_ring to af_packet is that the CPU that worker-1 is pinned to runs close to 100% while the 9 other workers are all much lower than that. We?re been able to determine that its due to the Intel ?set_irq_affinity? script and its pinning every IRQ call for the NIC (and single RSS to the same CPU, which is the first core in that NUMA node. We?re always looking for tuning feedback/best practices so I appreciate your questions. -Dave > On Jan 15, 2017, at 11:38 PM, Zeolla at GMail.com wrote: > > So I'm not sure I follow exactly why you'd want to specifically emphasize keeping packets in the L3 cache. Is there a specific hardware configuration where this makes more sense? > > As of right now, I do pretty much the same thing you posted earlier except I map the # of RX queues to the # of physical CPU cores and maximize the NIC ring descriptor size. > > Jon > > On Sat, Jan 14, 2017 at 2:35 PM Dave Crawford > wrote: > This is what I use in my sensor /etc/network/interfaces config along with a custom ?post-up? 
script. I use Debian for my Bro clusters, so your application will differ. I?m also using af_packet (v4.8.0 kernel) so some of the performance settings may need to be adjusted for your setup. My tuning is aimed at keeping the packets in L3 cache on the CPU vid the NIC hardware, hence the reduced rings. > > auto eth6 > iface eth6 inet manual > up ip link set $IFACE promisc on arp off mtu 1500 up > down ip link set $IFACE promisc off down > post-up /opt/tools/post-up_settings.sh $IFACE > > > And the /opt/tools/post-up_settings.sh script: > > #!/bin/bash > > IFACE=$1 > > if [[ -n "$IFACE" ]]; then > > # Lower the NIC ring descriptor size > /sbin/ethtool -G $IFACE rx 512 > > # Disable offloading functions > for i in rx tx sg tso ufo gso gro lro rxhash ntuple txvlan rxvlan; do ethtool -K $IFACE $i off; done > > # Enforce a single RX queue > /sbin/ethtool -L $IFACE combined 1 > > # Disable pause frames > /sbin/ethtool -A $IFACE rx off tx off > > # Limit the maximum number of interrupts per second > /sbin/ethtool -C $IFACE adaptive-rx on rx-usecs 100 > > # Disable IPv6 > /bin/echo 1 > /proc/sys/net/ipv6/conf/$IFACE/disable_ipv6 > > # Pin IRQ to local CPU > /opt/tools/set_irq_affinity local $IFACE > fi > > -Dave > >> On Jan 13, 2017, at 3:28 PM, Daniel Manzo > wrote: >> >> Thank you for the help. I tried the settings, but I have noticed any difference in packets. The main test that I am doing is that I would open two putty sessions to the server, and have one running capstats on eth12 while my other session was downloading a 1GB file to /dev/null. Last week, I was able to see the packets increase greatly via capstats, but now they stay steady at 7 or 8 packets per second. >> >> Best regards, >> Dan Manzo >> >> -----Original Message----- >> From: Seth Hall [mailto:seth at icir.org ] >> Sent: Friday, January 13, 2017 9:29 AM >> To: Daniel Manzo >> Cc: Neslog; Hosom, Stephen M; Bro-IDS >> Subject: Re: [Bro] Tap configuration >> >> I would recommend leaving checksum validation on in Bro, but disable checksum offloading on the NIC. >> >> I typically point people to this blog post by Doug Burks (of the SecurityOnion project)... >> http://blog.securityonion.net/2011/10/when-is-full-packet-capture-not-full.html >> >> There is one further thing I would recommend though that we discovered well after this blog post was written. If you are using an Intel NIC with the ixgbe driver, your nic has a feature called "flow director" that you will want to disable because it will negatively impact your analysis by reordering packets. It can be disabled like this on linux: >> ethtool -L eth12 combined 1 >> >> This will cause your NIC to have only a single hardware queue which will disable the flow director feature and prevent your NIC from reordering packets. Do that along with the suggestions in the blog post above and things should be better. >> >> .Seth >> >> >>> On Jan 13, 2017, at 8:58 AM, Daniel Manzo > wrote: >>> >>> I have tried disabling checksum offloading, but still no luck. Here is the ifcfg file for my eth interface: >>> >>> DEVICE=eth12 >>> ONBOOT=yes >>> BOOTPROTO=static >>> PROMISC=yes >>> USERCTL=no >>> >>> Freundliche Gr??e / Best regards, >>> >>> Dan Manzo >>> Asst Analyst I >>> ________________________ >>> >>> Bayer: Science For A Better Life >>> >>> Bayer U.S. 
LLC >>> Country Platform US >>> Scientific Computing Competence Ctr >>> Bayer Road >>> 15205 Pittsburgh (PA), United States >>> Tel: +1 412 7772171 >>> Mobile: +1 412 5258332 >>> E-mail: daniel.manzo at bayer.com >>> >>> From: Neslog [mailto:neslog at gmail.com ] >>> Sent: Thursday, January 12, 2017 4:59 PM >>> To: Hosom, Stephen M >>> Cc: Bro-IDS; Daniel Manzo >>> Subject: Re: [Bro] Tap configuration >>> >>> I've had success disabling checksum. >>> ignore_checksums >>> >>> >>> On Jan 12, 2017 2:24 PM, "Hosom, Stephen M" > wrote: >>> Have you looked into checksum offloading? If enabled, it can result in Bro not producing many of the logs you would expect. >>> >>> From: bro-bounces at bro.org [mailto:bro-bounces at bro.org ] On Behalf Of Daniel Manzo >>> Sent: Thursday, January 12, 2017 11:05 AM >>> To: bro at bro.org >>> Subject: [Bro] Tap configuration >>> >>> Hi all, >>> >>> I have Bro 2.4 configured on a RHEL 6.8 server and was wondering how to properly configure the network interfaces so that Bro can see as much of the network traffic as possible. My tap is connected in line with the network, and I believe that I was previously seeing the correct traffic, but now Bro has reporting much less information. I want to make sure that I have the interfaces configured correctly before moving on to troubleshooting other areas. Currently, I have two eth interfaces set up in PROMISC mode. Thank you for the help >>> >>> Best regards, >>> Dan Manzo >>> >>> >>> _______________________________________________ >>> Bro mailing list >>> bro at bro-ids.org >>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >>> _______________________________________________ >>> Bro mailing list >>> bro at bro-ids.org >>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >> >> >> >> -- >> Seth Hall >> International Computer Science Institute >> (Bro) because everyone has a network >> http://www.bro.org/ >> >> >> _______________________________________________ >> Bro mailing list >> bro at bro-ids.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -- > Jon > > Sent from my mobile device > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170120/153c0f26/attachment-0001.html From jdopheid at illinois.edu Fri Jan 20 11:48:27 2017 From: jdopheid at illinois.edu (Dopheide, Jeannette M) Date: Fri, 20 Jan 2017 19:48:27 +0000 Subject: [Bro] 1 open seat at Bro4Pros Message-ID: <98750E6F-C230-4593-AB0F-83F5C8FEE6E0@illinois.edu> We just had a cancellation for Bro4Pros come through, grab your seat to while it?s still available: https://www.eventbrite.com/e/bro4pros-2017-tickets-29303802462 ------ Jeannette Dopheide Training and Outreach Coordinator National Center for Supercomputing Applications University of Illinois at Urbana-Champaign From project722 at gmail.com Sat Jan 21 03:54:04 2017 From: project722 at gmail.com (project722) Date: Sat, 21 Jan 2017 05:54:04 -0600 Subject: [Bro] Web GUI for Bro? Message-ID: Got Bro 2.4.1 working on a RHEL 6 system. Can anyone provide suggestions on what I should use as a web GUI for bro? What is the best options out there? NOTE - my version of Bro was compiled from source. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170121/cc3bea36/attachment.html

From pkelley at hyperionavenue.com  Sat Jan 21 05:22:23 2017
From: pkelley at hyperionavenue.com (Patrick Kelley)
Date: Sat, 21 Jan 2017 08:22:23 -0500
Subject: [Bro] Web GUI for Bro?
In-Reply-To:
References:
Message-ID:

You might consider using an ELK stack for it for an open-source solution.
If your traffic is light, there is a free version of Splunk out there.

Adjust your filebeat yaml file to pickup the Bro logs.

/usr/local/bro/logs/current/*.log

https://www.digitalocean.com/community/tutorials/how-to-install-elasticsearch-logstash-and-kibana-elk-stack-on-ubuntu-14-04

Packetsled makes a solid commercial solution built on Bro.

Patrick Kelley, CISSP
Hyperion Avenue Labs
(770) 881-6538
The limit to which you have accepted being comfortable is the limit to which you have grown. Accept new challenges as an opportunity to enrich yourself and not as a point of potential failure.

From:  on behalf of project722
Date: Saturday, January 21, 2017 at 6:54 AM
To:
Subject: [Bro] Web GUI for Bro?

Got Bro 2.4.1 working on a RHEL 6 system. Can anyone provide suggestions on what I should use as a web GUI for bro? What is the best options out there? NOTE - my version of Bro was compiled from source.

_______________________________________________
Bro mailing list
bro at bro-ids.org
http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170121/4b3d7222/attachment.html

From charles.a.fair at gmail.com  Sat Jan 21 20:09:43 2017
From: charles.a.fair at gmail.com (Charles Fair)
Date: Sat, 21 Jan 2017 23:09:43 -0500
Subject: [Bro] Web GUI for Bro?
Message-ID:

Got Bro 2.4.1 working on a RHEL 6 system. Can anyone provide suggestions on what I should use as a web GUI for bro? What is the best options out there? NOTE - my version of Bro was compiled from source.

I second Patrick Kelley's suggestion. That would be a pretty straightforward way to get Bro data into a GUI on the build you currently have.

We have a Github project that builds out a Bro sensor that includes an integrated ELK system, on minimal CentOS 7.3. It is built with Ansible, or original version with Chef, and can be easily customized for your needs:

http://rocknsm.io
https://github.com/rocknsm/rock/tree/v2.0-beta
https://github.com/rocknsm/rock/tree/v2.0-beta/scripts

It can build an ISO with all updates for offline system builds.

You could also use Splunk, Graylog, or ELSA.

-- Chuck

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170121/f5afa991/attachment.html

From fatema.bannatwala at gmail.com  Mon Jan 23 09:12:56 2017
From: fatema.bannatwala at gmail.com (fatema bannatwala)
Date: Mon, 23 Jan 2017 12:12:56 -0500
Subject: [Bro] Segmentation fault while using own signature.
In-Reply-To:
References: <8DA7E854-CC5A-4DC0-BF9B-F381EF616FE6@icir.org> <6C1BBDA0-46C5-4264-BA39-E9BB006B77A8@icir.org> <9041F357-128D-4506-884B-ACF86DC6CC80@icir.org>
Message-ID:

Thank you all for helping to troubleshoot the problem. :) :)

Finally, was able to get the issue resolved.
So the problem boiled down to having very loose regex.
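A general note for anyone feeding signature matches into script-land: the text a payload pattern captures arrives as an arbitrary string, so it is worth validating it before converting it to a typed value. A minimal sketch, assuming the captured digits end up in a string variable named port_str (a made-up name for illustration, not the real variable in the production script):

    local p: port;
    # In Bro, pattern == string does exact, whole-string matching,
    # so the conversion only runs when the capture really is a bare 1-5 digit number.
    if ( /[0-9]{1,5}/ == port_str )
        p = to_port(fmt("%s/tcp", port_str));

What the loose pattern was actually matching, and how the signature was tightened, is spelled out below.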
The regex expression itself wasn't the problem, but the bro script that was using that signature was expecting a particular type of data type (port format), but since the regex was so loose that anything matching was passed onto the script, and then the script start complaining that the port format extracted is wrong. Following would help to clarify it more: my sig file had a signature with following payload match: (It was expected to match "|" type of data in payload) payload /.*[0-9\.]{7,15}\|[0-9]{1,5}.*/ And the bro script was splitting that particular chunk of data from the payload and assigning it to an IP n port type variable. v_port = to_port(fmt("%s/tcp",v_strs[i])); When I ran the script and sig file with a bro instance, got following error messages: /rootkit.bro, line 57: wrong port format, must be /[0-9]{1,5}\/(tcp|udp|icmp)/ (to_port(fmt(%s/tcp, Site::v_strs[Site::i]))) Little troubleshooting revealed that the v_strs[i] was getting data like: 1;29645166|29663045|1;;cs=v%3fhttp:// ad.doubleclick.net/dot.gif?1258562851900657 HTTP/1.1\x0d\x0aHost: ad... which is definitely not a port, and hence I changed the loose regex to something like: payload /.*([0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\|[0-9]{1,5}).*/ to match the "|" more accurately, and only get triggered when payload actually has a legit IP and port. Tested it and working so far without any seg fault/issues. Haven't tried on prod though. Mike - Don't know whether you came across this type of errors, but might also want to restrict the first payload to match |. Thanks, Fatema. On Thu, Jan 19, 2017 at 6:33 AM, M A wrote: > > I have come across the same behavior while testing a script against some > odd PCAPs and found that most of the time tweaking the filter to be more > restrictive solved the issue. so, what you can do is: > > 1-Test against one signature only to determine which specific signature > causes the issue. > 2-Count # of "Potential rootkit" string existence for before and after > and see which one got more hits (supposedly this should be the one causing > the issue -less restrictive). This might validate that regex is working as > expected.....usage of debugging PRINT also might come handy. > > Thanks > > > > On 18 January 2017 at 22:30, fatema bannatwala < > fatema.bannatwala at gmail.com> wrote: > >> Thanks Jon for the links! >> >> Thanks Justin for alternative. >> >> We have our cluster in production, hence currently that sig file is >> disabled so that the cluster runs properly. >> Hence, to recreate the seg fault issue this time, rather than enabling it >> for the whole cluster, I just enabled it (in local.bro) for the previous >> version >> of bro that we still have around, and ran a single bro process for that >> old version, as you suggested. >> This time I was able to generate core dump for that single process. >> >> I ran the core dump through the crash-diag script: >> >> ============================================================ >> ============== >> $ /usr/local/bro/2.4.1/share/broctl/scripts/crash-diag /tmp/brotest/ >> >> Bro 2.5 >> Linux 3.10.0-327.36.3.el7.x86_64 >> >> core.bro-1484765328-89288 >> >> Core was generated by `/usr/local/bro/2.4.1/bin/bro -i eth2 local'. >> Program terminated with signal 11, Segmentation fault. >> #0 0x00000000005ed589 in Func::Func (this=0x7ffd504940e0) at >> /home/fa/bro-2.5/src/Func.cc:63 >> >> Thread 1 (LWP 89288): >> #0 0x00000000005ed589 in Func::Func (this=0x7ffd504940e0) at >> /home/fa/bro-2.5/src/Func.cc:63 >> #1 0x00000000047dfaf0 in ?? 
() >> #2 0x00007ffd504940e0 in ?? () >> #3 0x0000000000000000 in ?? () >> >> ==== No reporter.log >> >> ==== No stderr.log >> >> ==== No stdout.log >> >> ==== No .cmdline >> >> ==== No .env_vars >> >> ==== No .status >> >> ==== prof.log >> 1484765327.517900 TCP-States: Inact. Syn. SA Part. Est. >> Fin. Rst. >> 1484765327.517900 TCP-States:Inact. 454 1611 >> 19 7 >> 1484765327.517900 TCP-States:Syn. 2360 1027 12 >> 262 34 >> 1484765327.517900 TCP-States:SA 31 22 >> 1484765327.517900 TCP-States:Part. 807 6489 1036 >> 824 18 >> 1484765327.517900 TCP-States:Est. >> 13778 3785 97 >> 1484765327.517900 TCP-States:Fin. 61 619 3114 >> 2401 33 >> 1484765327.517900 TCP-States:Rst. 27 14 106 >> 45 7 >> 1484765327.517900 Connections expired due to inactivity: 37736 >> 1484765327.517900 Total reassembler data: 21844K >> >> ==== packet_filter.log >> #separator \x09 >> #set_separator , >> #empty_field (empty) >> #unset_field - >> #path packet_filter >> #open 2017-01-18-13-44-17 >> #fields ts node filter init success >> #types time string string bool bool >> 1484765057.522496 bro ip or not ip T T >> >> ==== loaded_scripts.log >> #separator \x09 >> #set_separator , >> #empty_field (empty) >> #unset_field - >> #path loaded_scripts >> #open 2017-01-18-13-44-17 >> #fields name >> #types string >> /usr/local/bro/2.4.1/share/bro/base/init-bare.bro >> /usr/local/bro/2.4.1/share/bro/base/bif/const.bif.bro >> .......... (And a whole lot of loaded scripts, truncated) >> >> ============================================================ >> ================= >> >> The interesting thing is, I don't have such folder as: /home/fa/ >> *bro-2.5/src*/Func.cc:63 in the home dir on that machine, where the >> error reported according to coredump. >> But located the Func.cc file and saw the function where the seg fault was >> reported: >> >> Func::Func() : scope(0), type(0) >> { >> unique_id = unique_ids.size(); >> unique_ids.push_back(this); >> } >> >> Don't have much intuition though, that what have caused it :/ >> >> Thanks, >> Fatema. >> >> On Wed, Jan 18, 2017 at 12:33 PM, Azoff, Justin S >> wrote: >> >>> >>> > On Jan 18, 2017, at 11:56 AM, fatema bannatwala < >>> fatema.bannatwala at gmail.com> wrote: >>> > >>> > Hi Seth, >>> > >>> > Thanks for the suggestions, still getting No core dump: >>> >>> I'd just run bro from a shell.. you said it crashes pretty quickly right? >>> >>> sudo su - >>> mkdir /tmp/brotest >>> cd /tmp/brotest >>> ulimit -c unlimited >>> /usr/local/bro/2.5/bin/bro -i eth0 local >>> >>> then it should crash and dump the core file right there. >>> >>> (replace eth0 with whatever) >>> >>> -- >>> - Justin Azoff >>> >>> >> >> _______________________________________________ >> Bro mailing list >> bro at bro-ids.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170123/325fdd22/attachment.html From pssunu6 at gmail.com Mon Jan 23 13:07:56 2017 From: pssunu6 at gmail.com (ps sunu) Date: Tue, 24 Jan 2017 02:37:56 +0530 Subject: [Bro] unusual_http_methods.bro script error Message-ID: Hi, i am using bro 2.5 version and i tried to compile below code and its getting error @load base/frameworks/notice @load base/protocols/http module MozillaUnusualHTTP; export { redef enum Notice::Type += { Interesting_HTTP_Method_Success, Interesting_HTTP_Method_Fail, }; redef enum HTTP::Tags += { HTTP_BAD_METHOD_OK, HTTP_BAD_METHOD_FAIL, }; global whitelist_hosts_methods: table[addr, string] of set[subnet] = table() &redef; const suspicious_http_methods: set[string] = { "DELETE", "TRACE", "CONNECT", "PROPPATCH", "MKCOL", "SEARCH", "COPY", "MOVE", "LOCK", "UNLOCK", "POLL", "REPORT", "SUBSCRIBE", "BMOVE" } &redef; const monitor_ip_spaces: set[subnet] &redef; const monitor_ports: set[port] &redef; const ignore_hosts_orig: set[subnet] &redef; const ignore_hosts_resp: set[subnet] &redef; } event http_reply(c: connection, version: string, code: count, reason: string ) { local cluster_client_ip: addr; if ( ! c?$http ) return; if ( ! c$http?$method ) return; if ( c$id$resp_h !in monitor_ip_spaces ) return; if ( c$id$resp_p !in monitor_ports ) return; if ( c$id$resp_h in ignore_hosts_resp ) return; if ( c$id$orig_h in ignore_hosts_orig ) return; if ( ! c$http?$cluster_client_ip ) cluster_client_ip = c$id$orig_h; else cluster_client_ip = to_addr(c$http$cluster_client_ip); if ( ( c$http?$cluster_client_ip ) && ( to_addr(c$http$cluster_client_ip) in ignore_hosts_orig ) ) return; if ( c$http$method ! in suspicious_http_methods ) return; if ( [c$id$resp_h, c$http$method] in whitelist_hosts_methods ) { if ( c$id$orig_h in whitelist_hosts_methods[c$id$resp_h, c$http$method] ) return; if ( cluster_client_ip in whitelist_hosts_methods[c$id$resp_h, c$http$method] ) return; } else { if ( c$http$status_code < 300 ) { add c$http$tags[HTTP_BAD_METHOD_OK]; NOTICE([$note=Interesting_HTTP_Method_Success, $msg=fmt("%s successfully used method %s on %s host %s", cluster_client_ip, c$http$method, c$id$resp_h, c$http$host), $uid=c$uid, $id=c$id, $identifier=cat(c$http$host,c$http$method,cluster_client_ip)]); } else { add c$http$tags[HTTP_BAD_METHOD_FAIL]; NOTICE([$note=Interesting_HTTP_Method_Fail, $msg=fmt("%s failed to used method %s on %s host %s", cluster_client_ip, c$http$method, c$id$resp_h, c$http$host), $uid=c$uid, $id=c$id, $identifier=cat(c$http$host,c$http$method,cluster_client_ip)]); } } error in /home/binu/bro/bro-findings/bro-gramming/unusual_http_methods.bro, line 68: no such field in record (MozillaUnusualHTTP::c$http?$cluster_client_ip) error in /home/binu/bro/bro-findings/bro-gramming/unusual_http_methods.bro, line 71: no such field in record (MozillaUnusualHTTP::c$http$cluster_client_ip) error in string and /home/binu/bro/bro-findings/bro-gramming/unusual_http_methods.bro, line 71: type clash (string and MozillaUnusualHTTP::c$http$) error in /home/binu/bro/bro-findings/bro-gramming/unusual_http_methods.bro, line 71 and string: type mismatch (MozillaUnusualHTTP::c$http$ and string) error in /home/binu/bro/bro-findings/bro-gramming/unusual_http_methods.bro, line 71: argument type mismatch in function call (to_addr(MozillaUnusualHTTP::c$http$)) error in /home/binu/bro/bro-findings/bro-gramming/unusual_http_methods.bro, line 72: no such field in record (MozillaUnusualHTTP::c$http?$cluster_client_ip) error in 
/home/binu/bro/bro-findings/bro-gramming/unusual_http_methods.bro, line 72: no such field in record (MozillaUnusualHTTP::c$http$cluster_client_ip) error in string and /home/binu/bro/bro-findings/bro-gramming/unusual_http_methods.bro, line 72: type clash (string and MozillaUnusualHTTP::c$http$) error in /home/binu/bro/bro-findings/bro-gramming/unusual_http_methods.bro, line 72 and string: type mismatch (MozillaUnusualHTTP::c$http$ and string) error in /home/binu/bro/bro-findings/bro-gramming/unusual_http_methods.bro, line 72: argument type mismatch in function call (to_addr(MozillaUnusualHTTP::c$http$)) Regards, sunu error -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170124/fddd7cc1/attachment-0001.html From fatema.bannatwala at gmail.com Mon Jan 23 17:19:55 2017 From: fatema.bannatwala at gmail.com (fatema bannatwala) Date: Mon, 23 Jan 2017 20:19:55 -0500 Subject: [Bro] unusual_http_methods.bro script error Message-ID: Hi Sunu, Quick look at your script, tells that you are using c$http$cluster_client_ip, but http record doesn't have any field name "cluster_client_ip". I think what you want is c$id$orig_ip as the client ip, if that's what the purpose of cluster_client_ip is. Also, a great resource to test out your scripts is try to run them on try.bro.org (great web interface written by Justin, where you can include print statements like "print c$http; " in your scripts to check to see all the fields of http record, and then use them accordingly). Thanks, Fatema. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170123/569162bb/attachment.html From dopheide at gmail.com Mon Jan 23 20:17:46 2017 From: dopheide at gmail.com (Mike Dopheide) Date: Mon, 23 Jan 2017 22:17:46 -0600 Subject: [Bro] Segmentation fault while using own signature. In-Reply-To: References: <8DA7E854-CC5A-4DC0-BF9B-F381EF616FE6@icir.org> <6C1BBDA0-46C5-4264-BA39-E9BB006B77A8@icir.org> <9041F357-128D-4506-884B-ACF86DC6CC80@icir.org> Message-ID: Fatema, thanks for continuing to dig into this. "which is definitely not a port, and hence I changed the loose regex to something like: payload /.*([0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\|[0-9]{1,5}).*/ to match the "|" more accurately, and only get triggered when payload actually has a legit IP and port." I had been running a regex similar to this already, but without the ( )'s. I'd like clarification on if that's what ends up getting passed to the signature match event as 'data'. I've added the ( ) 's today just in case. Regardless, looks like I could do a little cleanup further into the script logic as well. I really appreciate the feedback. Interestingly, we just had a hit today that was an oddly formatted HTTP Cookie, but it was just a "potential" and obviously didn't result in a callback. -Dop On Mon, Jan 23, 2017 at 11:12 AM, fatema bannatwala < fatema.bannatwala at gmail.com> wrote: > Thank you all for helping to troubleshoot the problem. :) :) > Finally, was able to get the issue resolved. > > So the problem boiled down to having very loose regex. > > The regex expression itself wasn't the problem, but the bro script that > was using that signature was expecting a particular type of data type (port > format), > but since the regex was so loose that anything matching was passed onto > the script, and then the script start complaining that the port format > extracted is wrong. 
> Following would help to clarify it more: > > my sig file had a signature with following payload match: > (It was expected to match "|" type of data in payload) > payload /.*[0-9\.]{7,15}\|[0-9]{1,5}.*/ > > And the bro script was splitting that particular chunk of data from the > payload and assigning it to an IP n port type variable. > v_port = to_port(fmt("%s/tcp",v_strs[i])); > > When I ran the script and sig file with a bro instance, got following > error messages: > /rootkit.bro, line 57: wrong port format, must be > /[0-9]{1,5}\/(tcp|udp|icmp)/ (to_port(fmt(%s/tcp, Site::v_strs[Site::i]))) > > Little troubleshooting revealed that the v_strs[i] was getting data like: > 1;29645166|29663045|1;;cs=v%3fhttp://ad.doubleclick.net/ > dot.gif?1258562851900657 HTTP/1.1\x0d\x0aHost: ad... > > which is definitely not a port, and hence I changed the loose regex to > something like: > payload /.*([0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\|[0-9]{1,5}).*/ > to match the "|" more accurately, and only get triggered when > payload actually has a legit IP and port. > > Tested it and working so far without any seg fault/issues. > Haven't tried on prod though. > > Mike - Don't know whether you came across this type of errors, but might > also want to restrict the first payload to match |. > > Thanks, > Fatema. > > > > On Thu, Jan 19, 2017 at 6:33 AM, M A wrote: > >> >> I have come across the same behavior while testing a script against some >> odd PCAPs and found that most of the time tweaking the filter to be more >> restrictive solved the issue. so, what you can do is: >> >> 1-Test against one signature only to determine which specific signature >> causes the issue. >> 2-Count # of "Potential rootkit" string existence for before and after >> and see which one got more hits (supposedly this should be the one causing >> the issue -less restrictive). This might validate that regex is working as >> expected.....usage of debugging PRINT also might come handy. >> >> Thanks >> >> >> >> On 18 January 2017 at 22:30, fatema bannatwala < >> fatema.bannatwala at gmail.com> wrote: >> >>> Thanks Jon for the links! >>> >>> Thanks Justin for alternative. >>> >>> We have our cluster in production, hence currently that sig file is >>> disabled so that the cluster runs properly. >>> Hence, to recreate the seg fault issue this time, rather than enabling >>> it for the whole cluster, I just enabled it (in local.bro) for the previous >>> version >>> of bro that we still have around, and ran a single bro process for that >>> old version, as you suggested. >>> This time I was able to generate core dump for that single process. >>> >>> I ran the core dump through the crash-diag script: >>> >>> ============================================================ >>> ============== >>> $ /usr/local/bro/2.4.1/share/broctl/scripts/crash-diag /tmp/brotest/ >>> >>> Bro 2.5 >>> Linux 3.10.0-327.36.3.el7.x86_64 >>> >>> core.bro-1484765328-89288 >>> >>> Core was generated by `/usr/local/bro/2.4.1/bin/bro -i eth2 local'. >>> Program terminated with signal 11, Segmentation fault. >>> #0 0x00000000005ed589 in Func::Func (this=0x7ffd504940e0) at >>> /home/fa/bro-2.5/src/Func.cc:63 >>> >>> Thread 1 (LWP 89288): >>> #0 0x00000000005ed589 in Func::Func (this=0x7ffd504940e0) at >>> /home/fa/bro-2.5/src/Func.cc:63 >>> #1 0x00000000047dfaf0 in ?? () >>> #2 0x00007ffd504940e0 in ?? () >>> #3 0x0000000000000000 in ?? 
() >>> >>> ==== No reporter.log >>> >>> ==== No stderr.log >>> >>> ==== No stdout.log >>> >>> ==== No .cmdline >>> >>> ==== No .env_vars >>> >>> ==== No .status >>> >>> ==== prof.log >>> 1484765327.517900 TCP-States: Inact. Syn. SA Part. >>> Est. Fin. Rst. >>> 1484765327.517900 TCP-States:Inact. 454 >>> 1611 19 7 >>> 1484765327.517900 TCP-States:Syn. 2360 1027 12 >>> 262 34 >>> 1484765327.517900 TCP-States:SA 31 22 >>> 1484765327.517900 TCP-States:Part. 807 6489 >>> 1036 824 18 >>> 1484765327.517900 TCP-States:Est. >>> 13778 3785 97 >>> 1484765327.517900 TCP-States:Fin. 61 619 >>> 3114 2401 33 >>> 1484765327.517900 TCP-States:Rst. 27 14 106 >>> 45 7 >>> 1484765327.517900 Connections expired due to inactivity: 37736 >>> 1484765327.517900 Total reassembler data: 21844K >>> >>> ==== packet_filter.log >>> #separator \x09 >>> #set_separator , >>> #empty_field (empty) >>> #unset_field - >>> #path packet_filter >>> #open 2017-01-18-13-44-17 >>> #fields ts node filter init success >>> #types time string string bool bool >>> 1484765057.522496 bro ip or not ip T T >>> >>> ==== loaded_scripts.log >>> #separator \x09 >>> #set_separator , >>> #empty_field (empty) >>> #unset_field - >>> #path loaded_scripts >>> #open 2017-01-18-13-44-17 >>> #fields name >>> #types string >>> /usr/local/bro/2.4.1/share/bro/base/init-bare.bro >>> /usr/local/bro/2.4.1/share/bro/base/bif/const.bif.bro >>> .......... (And a whole lot of loaded scripts, truncated) >>> >>> ============================================================ >>> ================= >>> >>> The interesting thing is, I don't have such folder as: /home/fa/ >>> *bro-2.5/src*/Func.cc:63 in the home dir on that machine, where the >>> error reported according to coredump. >>> But located the Func.cc file and saw the function where the seg fault >>> was reported: >>> >>> Func::Func() : scope(0), type(0) >>> { >>> unique_id = unique_ids.size(); >>> unique_ids.push_back(this); >>> } >>> >>> Don't have much intuition though, that what have caused it :/ >>> >>> Thanks, >>> Fatema. >>> >>> On Wed, Jan 18, 2017 at 12:33 PM, Azoff, Justin S >>> wrote: >>> >>>> >>>> > On Jan 18, 2017, at 11:56 AM, fatema bannatwala < >>>> fatema.bannatwala at gmail.com> wrote: >>>> > >>>> > Hi Seth, >>>> > >>>> > Thanks for the suggestions, still getting No core dump: >>>> >>>> I'd just run bro from a shell.. you said it crashes pretty quickly >>>> right? >>>> >>>> sudo su - >>>> mkdir /tmp/brotest >>>> cd /tmp/brotest >>>> ulimit -c unlimited >>>> /usr/local/bro/2.5/bin/bro -i eth0 local >>>> >>>> then it should crash and dump the core file right there. >>>> >>>> (replace eth0 with whatever) >>>> >>>> -- >>>> - Justin Azoff >>>> >>>> >>> >>> _______________________________________________ >>> Bro mailing list >>> bro at bro-ids.org >>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170123/55e3b382/attachment.html From rodrigokroll at gmail.com Tue Jan 24 07:39:52 2017 From: rodrigokroll at gmail.com (-- Rodrigo Kroll --) Date: Tue, 24 Jan 2017 10:39:52 -0500 Subject: [Bro] Intel.log wrong format Message-ID: Good morning guys, I'm using the INTEL bro framework successfully. I'm having a hard time to understand why inside my intel.log file, the information "Intel::ADDR" is showing twice. In identified by the fields "seen.indicator_type" and "matched sources". 
Which seems wrong, in my understanding matched sources should've been identified by the text "Bad Reputation Domain", which is actually end up being identified as the field "fuid". A log sample is below: root at BroTest:~# zcat /usr/local/bro/logs/2017-01-23/intel.13\:00\:00-14\:00\:00.log.gz #separator \x09 #set_separator , #empty_field (empty) #unset_field - #path intel #open 2017-01-23-13-01-54 #fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p seen.indicator seen.indicator_type seen.where seen.node matched sources fuid file_mime_type file_desc #types time string addr port addr port string enum enum string set[enum] set[string] string string string 1485194513.356126 CVmspB2e68PB5ZiXU5 192.168.1.3 47712 XXX.XXX.XXX.XXX 80 XXX.XXX.XXX.XXX Intel::ADDR Conn::IN_RESP bro Intel::ADDR Bad Reputation Domain - - - 1485194630.876093 CT0uqm4aoaPeGA2RU4 192.168.1.3 47714 XXX.XXX.XXX.XXX 80 XXX.XXX.XXX.XXX Intel::ADDR Conn::IN_RESP bro Intel::ADDR Bad Reputation Domain - - - 1485194636.036057 CbG2JX2YHPJXciEb59 192.168.1.3 47716 XXX.XXX.XXX.XXX 80 XXX.XXX.XXX.XXX Intel::ADDR Conn::IN_RESP bro Intel::ADDR Bad Reputation Domain - - - 1485194640.586000 CCEoOs3ka9x4Qeqo7f 192.168.1.3 47718 XXX.XXX.XXX.XXX 80 XXX.XXX.XXX.XXX Intel::ADDR Conn::IN_RESP bro Intel::ADDR Bad Reputation Domain - - - 1485195059.276054 CyJZA6iIJMyaC6QL8 192.168.1.100 41913 XXX.XXX.XXX.XXX 80 XXX.XXX.XXX.XXX Intel::ADDR Conn::IN_RESP bro Intel::ADDR Bad Reputation Domain - - - 1485195061.556121 Cogijk3k5VH5Oxp9o9 192.168.1.3 47720 XXX.XXX.XXX.XXX 80 XXX.XXX.XXX.XXX Intel::ADDR Conn::IN_RESP bro Intel::ADDR Bad Reputation Domain - - - 1485195102.716131 CYGoic29UuEmw9iO5 192.168.1.3 47722 XXX.XXX.XXX.XXX 80 XXX.XXX.XXX.XXX Intel::ADDR Conn::IN_RESP bro Intel::ADDR Bad Reputation Domain - - - 1485195327.906063 CinQa13NxfIZEwyg73 192.168.1.3 47724 XXX.XXX.XXX.XXX 80 XXX.XXX.XXX.XXX Intel::ADDR Conn::IN_RESP bro Intel::ADDR Bad Reputation Domain - - - Any help would be very useful! Thank you -- Rodrigo Kroll -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170124/f905843e/attachment-0001.html From jazoff at illinois.edu Tue Jan 24 08:02:53 2017 From: jazoff at illinois.edu (Azoff, Justin S) Date: Tue, 24 Jan 2017 16:02:53 +0000 Subject: [Bro] Intel.log wrong format In-Reply-To: References: Message-ID: The log is fine, I think you're just looking at the wrong columns. Try piping the log file to this alias, and you'll see that the fields line up the way they are supposed to. alias bro-column="sed \"s/fields.//;s/types.//\" | column -s $'\t' -t" -- - Justin Azoff > On Jan 24, 2017, at 10:39 AM, -- Rodrigo Kroll -- wrote: > > Good morning guys, > > I'm using the INTEL bro framework successfully. I'm having a hard time to understand why inside my intel.log file, the information "Intel::ADDR" is showing twice. In identified by the fields "seen.indicator_type" and "matched sources". > > Which seems wrong, in my understanding matched sources should've been identified by the text "Bad Reputation Domain", which is actually end up being identified as the field "fuid". 
> > A log sample is below: > > root at BroTest:~# zcat /usr/local/bro/logs/2017-01-23/intel.13\:00\:00-14\:00\:00.log.gz > #separator \x09 > #set_separator , > #empty_field (empty) > #unset_field - > #path intel > #open 2017-01-23-13-01-54 > #fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p seen.indicator seen.indicator_type seen.where seen.node matched sources fuid file_mime_type file_desc > #types time string addr port addr port string enum enum string set[enum] set[string] string string string > 1485194513.356126 CVmspB2e68PB5ZiXU5 192.168.1.3 47712 XXX.XXX.XXX.XXX 80 XXX.XXX.XXX.XXX Intel::ADDR Conn::IN_RESP bro Intel::ADDR Bad Reputation Domain - - - From fatema.bannatwala at gmail.com Tue Jan 24 08:13:11 2017 From: fatema.bannatwala at gmail.com (fatema bannatwala) Date: Tue, 24 Jan 2017 11:13:11 -0500 Subject: [Bro] Intel.log wrong format Message-ID: Hi Rodrigo, I had the same feeling when I first looked at my intel.log file. The thing is that "matched" and "sources" are two different fields. What you are seeing is correct, with Intel::ADDR in "matched" and Bad Reputation Domain in "sources" field. -Fatema. P.S: Here is the description of the intel record: Type: record ts: time &log Timestamp when the data was discovered. uid: string &log &optional If a connection was associated with this intelligence hit, this is the uid for the connection id: conn_id &log &optional If a connection was associated with this intelligence hit, this is the conn_id for the connection. seen: Intel::Seen &log Where the data was seen. matched: Intel::TypeSet &log Which indicator types matched. sources: set [ string ] &log &default = { } &optional Sources which supplied data that resulted in this match. fuid: string &log &optional (present if *base/frameworks/intel/files.bro* is loaded) If a file was associated with this intelligence hit, this is the uid for the file. file_mime_type: string &log &optional (present if *base/frameworks/intel/files.bro* is loaded) A mime type if the intelligence hit is related to a file. If the $f field is provided this will be automatically filled out. file_desc: string &log &optional (present if *base/frameworks/intel/files.bro* is loaded) Frequently files can be ?described? to give a bit more context. If the $f field is provided this field will be automatically filled out. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170124/7be1a8cb/attachment.html From rodrigokroll at gmail.com Tue Jan 24 08:24:55 2017 From: rodrigokroll at gmail.com (-- Rodrigo Kroll --) Date: Tue, 24 Jan 2017 11:24:55 -0500 Subject: [Bro] Intel.log wrong format In-Reply-To: References: Message-ID: Hello ALL, Fatema, you are right! Thank you so much! Have a great day On Tue, Jan 24, 2017 at 11:13 AM, fatema bannatwala < fatema.bannatwala at gmail.com> wrote: > Hi Rodrigo, > > I had the same feeling when I first looked at my intel.log file. > The thing is that "matched" and "sources" are two different fields. > What you are seeing is correct, with Intel::ADDR in "matched" and > Bad Reputation Domain in "sources" field. > > -Fatema. > > P.S: Here is the description of the intel record: > > Type: > > record > > ts: time > > &log > > > Timestamp when the data was discovered. 
> uid: string > > &log > > &optional > > > If a connection was associated with this intelligence hit, this is the uid > for the connection > id: conn_id > > &log > > &optional > > > If a connection was associated with this intelligence hit, this is the > conn_id for the connection. > seen: Intel::Seen > > &log > > > Where the data was seen. > matched: Intel::TypeSet > > &log > > > Which indicator types matched. > sources: set > [ > string > ] > &log > > &default > > = { } &optional > > > Sources which supplied data that resulted in this match. > fuid: string > > &log > > &optional > > > (present if *base/frameworks/intel/files.bro* > is > loaded) > > If a file was associated with this intelligence hit, this is the uid for > the file. > file_mime_type: string > > &log > > &optional > > > (present if *base/frameworks/intel/files.bro* > is > loaded) > > A mime type if the intelligence hit is related to a file. If the $f field > is provided this will be automatically filled out. > file_desc: string > > &log > > &optional > > > (present if *base/frameworks/intel/files.bro* > is > loaded) > > Frequently files can be ?described? to give a bit more context. If the $f > field is provided this field will be automatically filled out. > -- Rodrigo Kroll -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170124/7c88145d/attachment-0001.html From jan.grashoefer at gmail.com Tue Jan 24 08:25:23 2017 From: jan.grashoefer at gmail.com (=?UTF-8?Q?Jan_Grash=c3=b6fer?=) Date: Tue, 24 Jan 2017 17:25:23 +0100 Subject: [Bro] Intel.log wrong format In-Reply-To: References: Message-ID: Hi Rodrigo, > I'm using the INTEL bro framework successfully. I'm having a hard time to > understand why inside my intel.log file, the information "Intel::ADDR" is > showing twice. In identified by the fields "seen.indicator_type" and > "matched sources". nice to hear that the intel framework is useful to you. As Justin already pointed out, "matched" and "sources" are two different fields. The fields "seen.indicator_type" and "matched" have a slightly different meaning. For example if you specify a subnet in your intel file and you see a connection to an IP inside this subnet, "seen.indicator_type" will be Intel::ADDR while "matched" will be Intel::SUBNET. For more details about the data model the blog post about the intelligence framework update might be interesting: http://blog.bro.org/2016/12/the-intelligence-framework-update.html I hope this helps, Jan From fatema.bannatwala at gmail.com Tue Jan 24 11:20:44 2017 From: fatema.bannatwala at gmail.com (fatema bannatwala) Date: Tue, 24 Jan 2017 14:20:44 -0500 Subject: [Bro] intel.log file stops getting generated. Message-ID: Hi All, Running Bro 2.5, everything is working except intel.log file stop getting generated. Last event in that file was around 12:45pm today, and after it got rotated, I didn't see intel.log for 1pm hour and still no log for intel.log in the current log dir. Don't know why all of a sudden intel.log stopped geting generated. I checked: 1. The conn.log, and seeing the connections from IPs listed as bad in intel feed. $ less bad-IP.intel | grep "61.240.xx.yy" 61.240.xx.yy Intel::ADDR scanner 85 csirtg.io $ less conn.log | grep "61.240.144.65" 1485280794.930507 CzUCmv3TFKLcYxFps1 61.240.xx.yy 40805 128.4.107.206 8081 tcp - - - - S0 F T 0 S 1 40 0 0 ( empty) 2. Permissions on the intel input files are fine,i.e bro readable. 3. 
No major activity related to Bro happened around 12:45ish that could impact any Bro processing.

Any leads/suggestions?

Thanks,
Fatema.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170124/3bea3163/attachment.html

From lc.taylor at protonmail.com Wed Jan 25 00:32:37 2017
From: lc.taylor at protonmail.com (Lincy Taylor)
Date: Wed, 25 Jan 2017 03:32:37 -0500
Subject: [Bro] Lots of dns_unmatched_msg, dns_unmatched_reply in weird.log
Message-ID:

Hello all:

I recently found lots of "dns_unmatched_msg" and "dns_unmatched_reply" errors in weird.log of Bro, which look like the following:

1485331604.840044   CSdHx91xFbEKdyo3Pi   172.16.185.11   40721   8.8.8.8   53   dns_unmatched_reply   -   F   bro
1485331609.712570   Cw4TXS1DvS49mvRtN4   172.16.185.11   58915   8.8.8.8   53   dns_unmatched_reply   -   F   bro
1485331619.101223   CSdHx91xFbEKdyo3Pi   172.16.185.11   40721   8.8.8.8   53   dns_unmatched_msg     -   F   bro
1485331619.115208   CGwJfm35oSWSuMdVS6   172.16.185.11   50308   8.8.8.8   53   dns_unmatched_reply   -   F   bro
1485331619.115208   Cw4TXS1DvS49mvRtN4   172.16.185.11   58915   8.8.8.8   53   dns_unmatched_msg     -   F   bro
1485331619.115208   CGwJfm35oSWSuMdVS6   172.16.185.11   50308   8.8.8.8   53   dns_unmatched_msg     -   F   bro

I used tcpdump to create a traffic dump of several DNS queries made by dig on Ubuntu to 8.8.8.8 and analyzed it with "bro -r"; the errors are still there in weird.log. The errors seem to be related to a mismatch of the query ID between query and response messages, according to the snippet in "share/bro/base/protocols/dns/main.bro". But by tracing the traffic dump in Wireshark I found the query IDs are consistent for each DNS query and response.

Has anyone experienced the same issue before?

I attached the log files and pcap file within this message, please help me to find out the root cause. Thank you!

Sent with [ProtonMail](https://protonmail.com) Secure Email.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170125/f13b93ef/attachment.html
-------------- next part --------------
A non-text attachment was scrubbed...
Name: dns_8.8.8.8.pcap
Type: application/vnd.tcpdump.pcap
Size: 796 bytes
Desc: not available
Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170125/f13b93ef/attachment.bin
-------------- next part --------------
A non-text attachment was scrubbed...
Name: dns.log
Type: text/x-log
Size: 1027 bytes
Desc: not available
Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170125/f13b93ef/attachment-0001.bin
-------------- next part --------------
A non-text attachment was scrubbed...
Name: weird.log
Type: text/x-log
Size: 939 bytes
Desc: not available
Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170125/f13b93ef/attachment-0002.bin

From jan.grashoefer at gmail.com Wed Jan 25 01:10:12 2017
From: jan.grashoefer at gmail.com (=?UTF-8?Q?Jan_Grash=c3=b6fer?=)
Date: Wed, 25 Jan 2017 10:10:12 +0100
Subject: [Bro] intel.log file stops getting generated.
In-Reply-To:
References:
Message-ID: <1943a9ee-2924-103c-7d9c-10f7a2c3e3a0@gmail.com>

Hi Fatema,

> Running Bro 2.5, everything is working except intel.log file stop getting
> generated.

just to be sure: You haven't configured intel expiration, right?

> Last event in that file was around 12:45pm today, and after it got rotated,
> I didn't see intel.log for 1pm hour and still no log for intel.log in the
> current log dir.
>
> Don't know why all of a sudden intel.log stopped geting generated.

How long was that instance running and is that behavior reproducible? Have you noticed anything in reporter.log?

To debug whether this is a logging issue or an intel framework issue, you might add a debug print to the matching event.

Jan

From jlay at slave-tothe-box.net Wed Jan 25 04:05:14 2017
From: jlay at slave-tothe-box.net (James Lay)
Date: Wed, 25 Jan 2017 05:05:14 -0700
Subject: [Bro] Lots of dns_unmatched_msg, dns_unmatched_reply in weird.log
In-Reply-To:
References:
Message-ID: <1485345914.2567.7.camel@slave-tothe-box.net>

On Wed, 2017-01-25 at 03:32 -0500, Lincy Taylor wrote:
> Hello all:
>
> I recently found lots of "dns_unmatched_msg" and
> "dns_unmatched_reply" errors in weird.log of Bro, which look like the
> following:
>
> 1485331604.840044   CSdHx91xFbEKdyo3Pi   172.16.185.11   40721   8.8.8.8   53   dns_unmatched_reply   -   F   bro
> 1485331609.712570   Cw4TXS1DvS49mvRtN4   172.16.185.11   58915   8.8.8.8   53   dns_unmatched_reply   -   F   bro
> 1485331619.101223   CSdHx91xFbEKdyo3Pi   172.16.185.11   40721   8.8.8.8   53   dns_unmatched_msg     -   F   bro
> 1485331619.115208   CGwJfm35oSWSuMdVS6   172.16.185.11   50308   8.8.8.8   53   dns_unmatched_reply   -   F   bro
> 1485331619.115208   Cw4TXS1DvS49mvRtN4   172.16.185.11   58915   8.8.8.8   53   dns_unmatched_msg     -   F   bro
> 1485331619.115208   CGwJfm35oSWSuMdVS6   172.16.185.11   50308   8.8.8.8   53   dns_unmatched_msg     -   F   bro
>
> I used tcpdump to create a traffic dump of several DNS queries made
> by dig on Ubuntu to 8.8.8.8 and analyzed it with "bro -r"; the errors are
> still there in weird.log. The errors seem to be related to a mismatch
> of the query ID between query and response messages, according to the
> snippet in "share/bro/base/protocols/dns/main.bro". But by tracing the
> traffic dump in Wireshark I found the query IDs are consistent for each
> DNS query and response.
>
> Has anyone experienced the same issue before?
>
> I attached the log files and pcap file within this message, please
> help me to find out the root cause. Thank you!
>
> Sent with ProtonMail Secure Email.
>
> _______________________________________________
> Bro mailing list
> bro at bro-ids.org
> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro

Make sure you set your local net to include the 172 net. As a test on the pcap I ran:

bro -C -r pcaps/dns_8.8.8.8.pcap local "Site::local_nets += { 172.16.0.0/12 }"

This gets me conn and dns, but no weird log.

James
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170125/e6583915/attachment.html

From project722 at gmail.com Wed Jan 25 05:48:32 2017
From: project722 at gmail.com (project722)
Date: Wed, 25 Jan 2017 07:48:32 -0600
Subject: [Bro] Web GUI for Bro?
In-Reply-To:
References:
Message-ID:

Thanks All. I am looking into ELK.

On Tue, Jan 24, 2017 at 2:44 AM, Kevin Ross wrote:

> As said before, ELK is your best bet. Here is a link that may interest you.
> The learning curve may be steep but it is worth it in the end (assuming you
> are putting this together yourself and not using an all-in-one solution that
> provides it for you) when you can query logs as easily as a Google search
> and visualise.
>
> https://www.elastic.co/blog/bro-ids-elastic-stack
>
> Also you could use Security Onion, which uses ELSA to present these logs,
> although my preference these days would be ELK because I find it easier to
> add in new data sources (i.e. once you understand Logstash and parsing logs
> you can easily parse any log you have to correlate Bro, IDS, network and
> even host logs).
>
> https://github.com/mcholste/elsa
> http://blog.bro.org/2012/01/monster-logs.html
>
> On 21 January 2017 at 11:54, project722 wrote:
>
>> Got Bro 2.4.1 working on a RHEL 6 system. Can anyone provide suggestions
>> on what I should use as a web GUI for bro? What is the best options out
>> there? NOTE - my version of Bro was compiled from source.
>>
>> _______________________________________________
>> Bro mailing list
>> bro at bro-ids.org
>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro
>>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170125/652536d8/attachment-0001.html

From pssunu6 at gmail.com Wed Jan 25 08:38:28 2017
From: pssunu6 at gmail.com (ps sunu)
Date: Wed, 25 Jan 2017 22:08:28 +0530
Subject: [Bro] (no subject)
Message-ID:

Hi,
any editor is there for bro development ?

Regards,
Sunu
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170125/af22460c/attachment.html

From pssunu6 at gmail.com Wed Jan 25 08:39:31 2017
From: pssunu6 at gmail.com (ps sunu)
Date: Wed, 25 Jan 2017 22:09:31 +0530
Subject: [Bro] unusual_http_methods.bro script error
In-Reply-To:
References:
Message-ID:

Thanks guys, problem solved. Thanks for your support.

Regards,
Sunu

On Tue, Jan 24, 2017 at 6:49 AM, fatema bannatwala < fatema.bannatwala at gmail.com> wrote:

> Hi Sunu,
>
> Quick look at your script, tells that you are using
> c$http$cluster_client_ip,
> but http record doesn't have any field name "cluster_client_ip".
> I think what you want is c$id$orig_ip as the client ip, if that's what the
> purpose of cluster_client_ip is.
>
> Also, a great resource to test out your scripts is try to run them on
> try.bro.org (great web interface written by Justin, where you can include
> print statements like "print c$http; " in your scripts to check to see all
> the fields of http record, and then use them accordingly).
>
> Thanks,
> Fatema.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170125/24cc85b2/attachment.html From shirkdog.bsd at gmail.com Wed Jan 25 08:44:15 2017 From: shirkdog.bsd at gmail.com (Michael Shirk) Date: Wed, 25 Jan 2017 11:44:15 -0500 Subject: [Bro] (no subject) In-Reply-To: References: Message-ID: vi with a plugin like this: https://github.com/mephux/bro.vim Some may say emacs, some may say nano...Just look for what editor supports the syntax. -- Michael Shirk Daemon Security, Inc. http://www.daemon-security.com On Jan 25, 2017 11:40 AM, "ps sunu" wrote: > Hi, > any editor is there for bro development ? > > Regards, > Sunu > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170125/ebc32e82/attachment.html From fatema.bannatwala at gmail.com Wed Jan 25 09:18:16 2017 From: fatema.bannatwala at gmail.com (fatema bannatwala) Date: Wed, 25 Jan 2017 12:18:16 -0500 Subject: [Bro] intel.log file stops getting generated. In-Reply-To: References: Message-ID: It turns out to be the performance issue. I restarted the bro cluster and it started getting generated, but have another issue: The bro sensors are utilizing almost 100% memory as well as some part of swap. We recently have upgraded the kernel and centos to 7.3 on bro cluster, as well as using latest pfring v6.4.1 We have 4 bro sensors each with 132G of memory and 24 core cpu @ 2.50GHz with 48 On-line CPU(s) (0-17)), and each running 22 bro processes. The memory usage on the sensors is around ~129G total used free shared buff/cache available Mem: 131921372 129912824 801672 11224 1206876 1270972 Swap: 8388600 3378312 5010288 The peak traffic we see usually toggles around 6-7Gbps. Don't know if this started happening after the upgrade to Bro 2.5, but the sensors become un-responsive because of this. I have checked that the ethtool settings on sensors are set to: rx off tx off tso off sg off gso off gro off Also, have commented out some scripts, that I used to run with 2.4.1, but no luck with memory usage. Any leads/suggestions? Thanks, Fatema. On Tue, Jan 24, 2017 at 2:20 PM, fatema bannatwala < fatema.bannatwala at gmail.com> wrote: > Hi All, > > Running Bro 2.5, everything is working except intel.log file stop getting > generated. > Last event in that file was around 12:45pm today, and after it got rotated, > I didn't see intel.log for 1pm hour and still no log for intel.log in the > current log dir. > > Don't know why all of a sudden intel.log stopped geting generated. > > I checked: > 1. The conn.log, and seeing the connections from IPs listed as bad in > intel feed. > $ less bad-IP.intel | grep "61.240.xx.yy" > 61.240.xx.yy Intel::ADDR scanner 85 csirtg.io > > $ less conn.log | grep "61.240.144.65" > 1485280794.930507 CzUCmv3TFKLcYxFps1 61.240.xx.yy 40805 > 128.4.107.206 8081 tcp - - - - S0 F > T 0 S 1 40 0 0 ( empty) > > 2. Permissions on the intel input files are fine,i.e bro readable. > 3. No major activity related to Bro happened during 12:45ish, that can > impact any Bro processing. > > Any leads/suggestions? > > Thanks, > Fatema. > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170125/2cd78461/attachment-0001.html From jazoff at illinois.edu Wed Jan 25 09:21:12 2017 From: jazoff at illinois.edu (Azoff, Justin S) Date: Wed, 25 Jan 2017 17:21:12 +0000 Subject: [Bro] (no subject) In-Reply-To: References: Message-ID: <2FC04063-617A-46F3-AB13-0F99EAADAC91@illinois.edu> > On Jan 25, 2017, at 11:38 AM, ps sunu wrote: > > Hi, > any editor is there for bro development ? Yes. Any editor :-) I've added support for bro linting to vim via Syntactic, and atom via linter with linter-bro. It is helpful for displaying errors as you write them, so if you want something like this: https://pbs.twimg.com/media/Cr4CkanW8AUp0m0.png You can use atom or vim. -- - Justin Azoff From jazoff at illinois.edu Wed Jan 25 09:23:59 2017 From: jazoff at illinois.edu (Azoff, Justin S) Date: Wed, 25 Jan 2017 17:23:59 +0000 Subject: [Bro] intel.log file stops getting generated. In-Reply-To: References: Message-ID: <04F8FE7A-37A5-4B83-8997-BF679705A5BA@illinois.edu> > On Jan 25, 2017, at 12:18 PM, fatema bannatwala wrote: > > It turns out to be the performance issue. > I restarted the bro cluster and it started getting generated, but have another issue: > The bro sensors are utilizing almost 100% memory as well as some part of swap. > > We recently have upgraded the kernel and centos to 7.3 on bro cluster, as well as using latest pfring v6.4.1 > We have 4 bro sensors each with 132G of memory and 24 core cpu @ 2.50GHz with 48 On-line CPU(s) (0-17)), and each running 22 bro processes. with pf_ring the first thing to check would be to verify that bro is using pf_ring correctly. If it's not, you end up analyzing 100% of the traffic 22 times. If you do a ls -l /proc/net/pf_ring/ and cat /proc/net/pf_ring/info it should show rings in use and one file per bro process, like: -r--r--r--. 1 root root 0 Jan 25 11:23 36549-p1p1.376 -r--r--r--. 1 root root 0 Jan 25 11:23 36552-p1p1.369 -r--r--r--. 1 root root 0 Jan 25 11:23 36561-p1p1.377 -r--r--r--. 1 root root 0 Jan 25 11:23 36581-p1p1.372 -r--r--r--. 1 root root 0 Jan 25 11:23 36594-p1p1.375 -r--r--r--. 1 root root 0 Jan 25 11:23 36600-p1p1.378 -r--r--r--. 1 root root 0 Jan 25 11:23 36608-p1p1.371 -r--r--r--. 1 root root 0 Jan 25 11:23 36611-p1p2.373 -- - Justin Azoff From fatema.bannatwala at gmail.com Wed Jan 25 09:45:00 2017 From: fatema.bannatwala at gmail.com (fatema bannatwala) Date: Wed, 25 Jan 2017 12:45:00 -0500 Subject: [Bro] intel.log file stops getting generated. In-Reply-To: <04F8FE7A-37A5-4B83-8997-BF679705A5BA@illinois.edu> References: <04F8FE7A-37A5-4B83-8997-BF679705A5BA@illinois.edu> Message-ID: Hi Justin, Thanks for suggestions. 
Here are the stats (Looks like bro using pf_ring correctly though): $ ls -l /proc/net/pf_ring/ total 0 -r--r--r-- 1 root root 0 Jan 25 12:40 74966-em1.1612 -r--r--r-- 1 root root 0 Jan 25 12:40 74968-em1.1616 -r--r--r-- 1 root root 0 Jan 25 12:40 74969-em1.1618 -r--r--r-- 1 root root 0 Jan 25 12:40 74970-em1.1620 -r--r--r-- 1 root root 0 Jan 25 12:40 74977-em1.1615 -r--r--r-- 1 root root 0 Jan 25 12:40 74998-em1.1621 -r--r--r-- 1 root root 0 Jan 25 12:40 75001-em1.1614 -r--r--r-- 1 root root 0 Jan 25 12:40 75026-em1.1629 -r--r--r-- 1 root root 0 Jan 25 12:40 75027-em1.1631 -r--r--r-- 1 root root 0 Jan 25 12:40 75040-em1.1622 -r--r--r-- 1 root root 0 Jan 25 12:40 75042-em1.1619 -r--r--r-- 1 root root 0 Jan 25 12:40 75051-em1.1627 -r--r--r-- 1 root root 0 Jan 25 12:40 75072-em1.1633 -r--r--r-- 1 root root 0 Jan 25 12:40 75076-em1.1613 -r--r--r-- 1 root root 0 Jan 25 12:40 75077-em1.1623 -r--r--r-- 1 root root 0 Jan 25 12:40 75097-em1.1625 -r--r--r-- 1 root root 0 Jan 25 12:40 75102-em1.1632 -r--r--r-- 1 root root 0 Jan 25 12:40 75105-em1.1624 -r--r--r-- 1 root root 0 Jan 25 12:40 75106-em1.1630 -r--r--r-- 1 root root 0 Jan 25 12:40 75107-em1.1626 -r--r--r-- 1 root root 0 Jan 25 12:40 75109-em1.1628 -r--r--r-- 1 root root 0 Jan 25 12:40 75110-em1.1617 $ cat /proc/net/pf_ring/info PF_RING Version : 6.4.1 (unknown) Total rings : 22 Standard (non ZC) Options Ring slots : 32768 Slot version : 16 Capture TX : No [RX only] IP Defragment : No Socket Mode : Standard Total plugins : 0 Cluster Fragment Queue : 14140 Cluster Fragment Discard : 0 $ free total used free shared buff/cache available Mem: 131921372 130028924 684760 11916 1207688 1161016 Swap: 8388600 3253200 5135400 On Wed, Jan 25, 2017 at 12:23 PM, Azoff, Justin S wrote: > > > On Jan 25, 2017, at 12:18 PM, fatema bannatwala < > fatema.bannatwala at gmail.com> wrote: > > > > It turns out to be the performance issue. > > I restarted the bro cluster and it started getting generated, but have > another issue: > > The bro sensors are utilizing almost 100% memory as well as some part of > swap. > > > > We recently have upgraded the kernel and centos to 7.3 on bro cluster, > as well as using latest pfring v6.4.1 > > We have 4 bro sensors each with 132G of memory and 24 core cpu @ 2.50GHz > with 48 On-line CPU(s) (0-17)), and each running 22 bro processes. > > with pf_ring the first thing to check would be to verify that bro is using > pf_ring correctly. If it's not, you end up analyzing 100% of the traffic > 22 times. > > If you do a > > ls -l /proc/net/pf_ring/ > > and > > cat /proc/net/pf_ring/info > > it should show rings in use and one file per bro process, like: > > -r--r--r--. 1 root root 0 Jan 25 11:23 36549-p1p1.376 > -r--r--r--. 1 root root 0 Jan 25 11:23 36552-p1p1.369 > -r--r--r--. 1 root root 0 Jan 25 11:23 36561-p1p1.377 > -r--r--r--. 1 root root 0 Jan 25 11:23 36581-p1p1.372 > -r--r--r--. 1 root root 0 Jan 25 11:23 36594-p1p1.375 > -r--r--r--. 1 root root 0 Jan 25 11:23 36600-p1p1.378 > -r--r--r--. 1 root root 0 Jan 25 11:23 36608-p1p1.371 > -r--r--r--. 1 root root 0 Jan 25 11:23 36611-p1p2.373 > > -- > - Justin Azoff > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170125/bf89d0f9/attachment.html From jazoff at illinois.edu Wed Jan 25 09:47:21 2017 From: jazoff at illinois.edu (Azoff, Justin S) Date: Wed, 25 Jan 2017 17:47:21 +0000 Subject: [Bro] intel.log file stops getting generated. 
In-Reply-To: References: <04F8FE7A-37A5-4B83-8997-BF679705A5BA@illinois.edu> Message-ID: <0C88FC7B-4E32-4C63-B7F9-2428558924AF@illinois.edu> > On Jan 25, 2017, at 12:45 PM, fatema bannatwala wrote: > > Hi Justin, > > Thanks for suggestions. > Here are the stats (Looks like bro using pf_ring correctly though): Yes.. that is how it should look.. very important to verify that before checking anything else :-) What does your 'broctl top' output look like? That will break things down by each process -- - Justin Azoff From fatema.bannatwala at gmail.com Wed Jan 25 10:02:59 2017 From: fatema.bannatwala at gmail.com (fatema bannatwala) Date: Wed, 25 Jan 2017 13:02:59 -0500 Subject: [Bro] intel.log file stops getting generated. In-Reply-To: <0C88FC7B-4E32-4C63-B7F9-2428558924AF@illinois.edu> References: <04F8FE7A-37A5-4B83-8997-BF679705A5BA@illinois.edu> <0C88FC7B-4E32-4C63-B7F9-2428558924AF@illinois.edu> Message-ID: Forgot to mention about the arch. of cluster: 1 manager node (which is defined as logger as well) 4 worker nodes (which are defined as proxies as well) Before (in 2.4.1) we used to have manager act as proxy, but because of performance issue (i.e bro unable to rotate logs on manager), moved the proxy functionality to the workers. Attaching the output of 'broctl top', as it will swamp this email with text if pasted in the body :-) Thanks, Fatema On Wed, Jan 25, 2017 at 12:47 PM, Azoff, Justin S wrote: > > On Jan 25, 2017, at 12:45 PM, fatema bannatwala < > fatema.bannatwala at gmail.com> wrote: > > > > Hi Justin, > > > > Thanks for suggestions. > > Here are the stats (Looks like bro using pf_ring correctly though): > > Yes.. that is how it should look.. very important to verify that before > checking anything else :-) > > What does your 'broctl top' output look like? > > That will break things down by each process > > -- > - Justin Azoff > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170125/9cdf2be4/attachment-0001.html -------------- next part -------------- Name Type Host Pid Proc VSize Rss Cpu Cmd logger logger manager.xx.xx.xx 22042 parent 2G 145M 18% bro logger logger manager.xx.xx.xx 22059 child 354M 295M 6% bro manager manager manager.xx.xx.xx 22115 parent 3G 2G 6% bro manager manager manager.xx.xx.xx 22134 child 1G 303M 0% bro proxy-1 proxy wrk1.xx.xx.xx 78639 child 212M 72M 25% bro proxy-1 proxy wrk1.xx.xx.xx 78638 parent 1G 1G 12% bro proxy-2 proxy wrk2.xx.xx.xx 76757 child 212M 73M 18% bro proxy-2 proxy wrk2.xx.xx.xx 76756 parent 1G 1G 6% bro proxy-3 proxy wrk3.xx.xx.xx 74756 child 212M 47M 31% bro proxy-3 proxy wrk3.xx.xx.xx 74755 parent 1G 1G 6% bro proxy-4 proxy wrk4.xx.xx.xx 63611 child 212M 50M 12% bro proxy-4 proxy wrk4.xx.xx.xx 63610 parent 1G 1G 0% bro worker-1-1 worker wrk1.xx.xx.xx 78985 parent 5G 5G 18% bro worker-1-1 worker wrk1.xx.xx.xx 79003 child 429M 269M 0% bro worker-1-10 worker wrk1.xx.xx.xx 78976 parent 5G 5G 6% bro worker-1-10 worker wrk1.xx.xx.xx 78999 child 428M 269M 0% bro worker-1-11 worker wrk1.xx.xx.xx 78977 parent 5G 5G 12% bro worker-1-11 worker wrk1.xx.xx.xx 78995 child 429M 269M 0% bro worker-1-12 worker wrk1.xx.xx.xx 78972 parent 5G 5G 6% bro worker-1-12 worker wrk1.xx.xx.xx 78994 child 428M 269M 0% bro worker-1-13 worker wrk1.xx.xx.xx 78981 parent 5G 5G 12% bro worker-1-13 worker wrk1.xx.xx.xx 79024 child 430M 269M 0% bro worker-1-14 worker wrk1.xx.xx.xx 78984 parent 5G 5G 31% bro worker-1-14 worker wrk1.xx.xx.xx 79037 child 429M 269M 0% bro worker-1-15 worker wrk1.xx.xx.xx 78978 parent 5G 5G 18% bro worker-1-15 worker wrk1.xx.xx.xx 79031 child 428M 269M 0% bro worker-1-16 worker wrk1.xx.xx.xx 78992 parent 5G 5G 100% bro worker-1-16 worker wrk1.xx.xx.xx 79034 child 428M 268M 0% bro worker-1-17 worker wrk1.xx.xx.xx 78979 parent 5G 5G 6% bro worker-1-17 worker wrk1.xx.xx.xx 79025 child 429M 268M 0% bro worker-1-18 worker wrk1.xx.xx.xx 78983 parent 5G 5G 12% bro worker-1-18 worker wrk1.xx.xx.xx 79027 child 429M 269M 0% bro worker-1-19 worker wrk1.xx.xx.xx 78991 parent 5G 5G 18% bro worker-1-19 worker wrk1.xx.xx.xx 79035 child 428M 268M 0% bro worker-1-2 worker wrk1.xx.xx.xx 78973 parent 5G 5G 12% bro worker-1-2 worker wrk1.xx.xx.xx 79033 child 428M 269M 0% bro worker-1-20 worker wrk1.xx.xx.xx 78987 parent 5G 5G 12% bro worker-1-20 worker wrk1.xx.xx.xx 79008 child 429M 269M 0% bro worker-1-21 worker wrk1.xx.xx.xx 78989 parent 5G 5G 12% bro worker-1-21 worker wrk1.xx.xx.xx 79026 child 429M 269M 0% bro worker-1-22 worker wrk1.xx.xx.xx 78974 parent 5G 5G 12% bro worker-1-22 worker wrk1.xx.xx.xx 78998 child 429M 269M 0% bro worker-1-3 worker wrk1.xx.xx.xx 78986 parent 5G 5G 6% bro worker-1-3 worker wrk1.xx.xx.xx 79029 child 428M 268M 0% bro worker-1-4 worker wrk1.xx.xx.xx 78982 parent 5G 5G 25% bro worker-1-4 worker wrk1.xx.xx.xx 79036 child 429M 269M 0% bro worker-1-5 worker wrk1.xx.xx.xx 78975 parent 5G 5G 12% bro worker-1-5 worker wrk1.xx.xx.xx 78997 child 428M 268M 0% bro worker-1-6 worker wrk1.xx.xx.xx 78993 parent 5G 5G 12% bro worker-1-6 worker wrk1.xx.xx.xx 79028 child 429M 269M 0% bro worker-1-7 worker wrk1.xx.xx.xx 78990 parent 5G 5G 12% bro worker-1-7 worker wrk1.xx.xx.xx 79032 child 429M 269M 0% bro worker-1-8 worker wrk1.xx.xx.xx 78980 parent 5G 5G 12% bro worker-1-8 worker wrk1.xx.xx.xx 78996 child 429M 268M 0% bro worker-1-9 worker wrk1.xx.xx.xx 78988 parent 5G 5G 56% bro worker-1-9 worker wrk1.xx.xx.xx 79030 child 428M 268M 0% bro worker-2-1 worker 
wrk2.xx.xx.xx 77108 parent 5G 5G 18% bro worker-2-1 worker wrk2.xx.xx.xx 77154 child 430M 271M 0% bro worker-2-10 worker wrk2.xx.xx.xx 77102 parent 5G 5G 12% bro worker-2-10 worker wrk2.xx.xx.xx 77140 child 429M 270M 0% bro worker-2-11 worker wrk2.xx.xx.xx 77090 parent 5G 5G 12% bro worker-2-11 worker wrk2.xx.xx.xx 77115 child 430M 270M 0% bro worker-2-12 worker wrk2.xx.xx.xx 77091 parent 5G 5G 12% bro worker-2-12 worker wrk2.xx.xx.xx 77116 child 430M 270M 0% bro worker-2-13 worker wrk2.xx.xx.xx 77106 parent 5G 5G 37% bro worker-2-13 worker wrk2.xx.xx.xx 77146 child 430M 270M 0% bro worker-2-14 worker wrk2.xx.xx.xx 77104 parent 5G 5G 62% bro worker-2-14 worker wrk2.xx.xx.xx 77147 child 430M 269M 6% bro worker-2-15 worker wrk2.xx.xx.xx 77098 parent 5G 5G 12% bro worker-2-15 worker wrk2.xx.xx.xx 77144 child 434M 272M 0% bro worker-2-16 worker wrk2.xx.xx.xx 77109 parent 5G 5G 12% bro worker-2-16 worker wrk2.xx.xx.xx 77143 child 430M 270M 6% bro worker-2-17 worker wrk2.xx.xx.xx 77100 parent 5G 5G 18% bro worker-2-17 worker wrk2.xx.xx.xx 77141 child 429M 269M 0% bro worker-2-18 worker wrk2.xx.xx.xx 77095 parent 5G 5G 18% bro worker-2-18 worker wrk2.xx.xx.xx 77153 child 429M 269M 0% bro worker-2-19 worker wrk2.xx.xx.xx 77111 parent 5G 5G 18% bro worker-2-19 worker wrk2.xx.xx.xx 77150 child 430M 270M 0% bro worker-2-2 worker wrk2.xx.xx.xx 77105 parent 5G 5G 25% bro worker-2-2 worker wrk2.xx.xx.xx 77155 child 429M 270M 0% bro worker-2-20 worker wrk2.xx.xx.xx 77094 parent 5G 5G 12% bro worker-2-20 worker wrk2.xx.xx.xx 77152 child 430M 270M 0% bro worker-2-21 worker wrk2.xx.xx.xx 77096 parent 5G 5G 12% bro worker-2-21 worker wrk2.xx.xx.xx 77149 child 430M 271M 0% bro worker-2-22 worker wrk2.xx.xx.xx 77101 parent 5G 5G 18% bro worker-2-22 worker wrk2.xx.xx.xx 77142 child 429M 270M 0% bro worker-2-3 worker wrk2.xx.xx.xx 77099 parent 5G 5G 25% bro worker-2-3 worker wrk2.xx.xx.xx 77148 child 430M 270M 0% bro worker-2-4 worker wrk2.xx.xx.xx 77093 parent 5G 5G 18% bro worker-2-4 worker wrk2.xx.xx.xx 77157 child 430M 269M 0% bro worker-2-5 worker wrk2.xx.xx.xx 77107 parent 5G 5G 12% bro worker-2-5 worker wrk2.xx.xx.xx 77117 child 429M 269M 6% bro worker-2-6 worker wrk2.xx.xx.xx 77092 parent 5G 5G 18% bro worker-2-6 worker wrk2.xx.xx.xx 77158 child 429M 270M 0% bro worker-2-7 worker wrk2.xx.xx.xx 77110 parent 5G 5G 18% bro worker-2-7 worker wrk2.xx.xx.xx 77151 child 430M 270M 0% bro worker-2-8 worker wrk2.xx.xx.xx 77097 parent 5G 5G 18% bro worker-2-8 worker wrk2.xx.xx.xx 77156 child 430M 270M 6% bro worker-2-9 worker wrk2.xx.xx.xx 77103 parent 5G 5G 12% bro worker-2-9 worker wrk2.xx.xx.xx 77145 child 429M 269M 0% bro worker-3-1 worker wrk3.xx.xx.xx 74969 parent 5G 5G 18% bro worker-3-1 worker wrk3.xx.xx.xx 75141 child 429M 270M 0% bro worker-3-10 worker wrk3.xx.xx.xx 74968 parent 5G 5G 12% bro worker-3-10 worker wrk3.xx.xx.xx 75139 child 427M 268M 0% bro worker-3-11 worker wrk3.xx.xx.xx 75076 parent 5G 5G 12% bro worker-3-11 worker wrk3.xx.xx.xx 75112 child 429M 269M 0% bro worker-3-12 worker wrk3.xx.xx.xx 74966 parent 5G 5G 12% bro worker-3-12 worker wrk3.xx.xx.xx 75111 child 430M 270M 0% bro worker-3-13 worker wrk3.xx.xx.xx 74970 parent 5G 5G 18% bro worker-3-13 worker wrk3.xx.xx.xx 75143 child 429M 269M 0% bro worker-3-14 worker wrk3.xx.xx.xx 74977 parent 5G 5G 12% bro worker-3-14 worker wrk3.xx.xx.xx 75128 child 428M 268M 6% bro worker-3-15 worker wrk3.xx.xx.xx 75102 parent 5G 5G 18% bro worker-3-15 worker wrk3.xx.xx.xx 75154 child 430M 270M 0% bro worker-3-16 worker wrk3.xx.xx.xx 74998 parent 5G 5G 
12% bro worker-3-16 worker wrk3.xx.xx.xx 75140 child 429M 270M 0% bro worker-3-17 worker wrk3.xx.xx.xx 75001 parent 5G 5G 18% bro worker-3-17 worker wrk3.xx.xx.xx 75113 child 429M 270M 0% bro worker-3-18 worker wrk3.xx.xx.xx 75026 parent 5G 5G 18% bro worker-3-18 worker wrk3.xx.xx.xx 75152 child 429M 270M 0% bro worker-3-19 worker wrk3.xx.xx.xx 75027 parent 5G 5G 18% bro worker-3-19 worker wrk3.xx.xx.xx 75151 child 430M 269M 0% bro worker-3-2 worker wrk3.xx.xx.xx 75042 parent 5G 5G 18% bro worker-3-2 worker wrk3.xx.xx.xx 75114 child 429M 270M 0% bro worker-3-20 worker wrk3.xx.xx.xx 75051 parent 5G 5G 12% bro worker-3-20 worker wrk3.xx.xx.xx 75148 child 429M 270M 0% bro worker-3-21 worker wrk3.xx.xx.xx 75040 parent 5G 5G 25% bro worker-3-21 worker wrk3.xx.xx.xx 75142 child 429M 269M 0% bro worker-3-22 worker wrk3.xx.xx.xx 75107 parent 5G 5G 18% bro worker-3-22 worker wrk3.xx.xx.xx 75145 child 429M 270M 0% bro worker-3-3 worker wrk3.xx.xx.xx 75072 parent 6G 5G 18% bro worker-3-3 worker wrk3.xx.xx.xx 75153 child 430M 270M 0% bro worker-3-4 worker wrk3.xx.xx.xx 75077 parent 5G 5G 18% bro worker-3-4 worker wrk3.xx.xx.xx 75144 child 429M 268M 0% bro worker-3-5 worker wrk3.xx.xx.xx 75110 parent 5G 5G 12% bro worker-3-5 worker wrk3.xx.xx.xx 75130 child 429M 270M 0% bro worker-3-6 worker wrk3.xx.xx.xx 75109 parent 5G 5G 18% bro worker-3-6 worker wrk3.xx.xx.xx 75147 child 430M 271M 0% bro worker-3-7 worker wrk3.xx.xx.xx 75097 parent 5G 5G 12% bro worker-3-7 worker wrk3.xx.xx.xx 75149 child 429M 270M 0% bro worker-3-8 worker wrk3.xx.xx.xx 75150 child 430M 269M 100% bro worker-3-8 worker wrk3.xx.xx.xx 75106 parent 5G 5G 0% bro worker-3-9 worker wrk3.xx.xx.xx 75105 parent 5G 5G 12% bro worker-3-9 worker wrk3.xx.xx.xx 75146 child 429M 270M 0% bro worker-4-1 worker wrk4.xx.xx.xx 63940 parent 5G 5G 18% bro worker-4-1 worker wrk4.xx.xx.xx 63970 child 430M 270M 0% bro worker-4-10 worker wrk4.xx.xx.xx 63937 parent 5G 5G 12% bro worker-4-10 worker wrk4.xx.xx.xx 63974 child 431M 271M 0% bro worker-4-11 worker wrk4.xx.xx.xx 63936 parent 5G 5G 75% bro worker-4-11 worker wrk4.xx.xx.xx 63967 child 430M 270M 31% bro worker-4-12 worker wrk4.xx.xx.xx 63927 parent 5G 5G 12% bro worker-4-12 worker wrk4.xx.xx.xx 63966 child 430M 269M 0% bro worker-4-13 worker wrk4.xx.xx.xx 63938 parent 5G 5G 25% bro worker-4-13 worker wrk4.xx.xx.xx 63968 child 430M 270M 0% bro worker-4-14 worker wrk4.xx.xx.xx 63939 parent 5G 5G 18% bro worker-4-14 worker wrk4.xx.xx.xx 64008 child 430M 269M 6% bro worker-4-15 worker wrk4.xx.xx.xx 63941 parent 5G 5G 12% bro worker-4-15 worker wrk4.xx.xx.xx 63972 child 430M 270M 0% bro worker-4-16 worker wrk4.xx.xx.xx 63931 parent 5G 5G 25% bro worker-4-16 worker wrk4.xx.xx.xx 64003 child 430M 271M 0% bro worker-4-17 worker wrk4.xx.xx.xx 63935 parent 5G 5G 25% bro worker-4-17 worker wrk4.xx.xx.xx 64000 child 430M 270M 0% bro worker-4-18 worker wrk4.xx.xx.xx 63928 parent 5G 5G 18% bro worker-4-18 worker wrk4.xx.xx.xx 63995 child 430M 270M 0% bro worker-4-19 worker wrk4.xx.xx.xx 63932 parent 5G 5G 18% bro worker-4-19 worker wrk4.xx.xx.xx 63987 child 430M 271M 0% bro worker-4-2 worker wrk4.xx.xx.xx 63933 parent 6G 5G 37% bro worker-4-2 worker wrk4.xx.xx.xx 64009 child 430M 270M 0% bro worker-4-20 worker wrk4.xx.xx.xx 63930 parent 5G 5G 37% bro worker-4-20 worker wrk4.xx.xx.xx 63969 child 430M 271M 6% bro worker-4-21 worker wrk4.xx.xx.xx 63929 parent 5G 5G 12% bro worker-4-21 worker wrk4.xx.xx.xx 64005 child 430M 270M 0% bro worker-4-22 worker wrk4.xx.xx.xx 63934 parent 5G 5G 31% bro worker-4-22 worker 
wrk4.xx.xx.xx 63975 child 431M 271M 0% bro
worker-4-3 worker wrk4.xx.xx.xx 63945 parent 5G 5G 12% bro
worker-4-3 worker wrk4.xx.xx.xx 63973 child 430M 270M 0% bro
worker-4-4 worker wrk4.xx.xx.xx 63950 parent 5G 5G 43% bro
worker-4-4 worker wrk4.xx.xx.xx 64001 child 430M 270M 0% bro
worker-4-5 worker wrk4.xx.xx.xx 63949 parent 5G 5G 18% bro
worker-4-5 worker wrk4.xx.xx.xx 64002 child 430M 270M 0% bro
worker-4-6 worker wrk4.xx.xx.xx 63957 parent 5G 5G 6% bro
worker-4-6 worker wrk4.xx.xx.xx 64006 child 430M 270M 0% bro
worker-4-7 worker wrk4.xx.xx.xx 63964 parent 5G 5G 12% bro
worker-4-7 worker wrk4.xx.xx.xx 64007 child 430M 270M 0% bro
worker-4-8 worker wrk4.xx.xx.xx 63962 parent 5G 5G 56% bro
worker-4-8 worker wrk4.xx.xx.xx 63971 child 430M 271M 50% bro
worker-4-9 worker wrk4.xx.xx.xx 63965 parent 5G 5G 18% bro
worker-4-9 worker wrk4.xx.xx.xx 64004 child 430M 270M 0% bro

From jazoff at illinois.edu Wed Jan 25 10:13:15 2017
From: jazoff at illinois.edu (Azoff, Justin S)
Date: Wed, 25 Jan 2017 18:13:15 +0000
Subject: [Bro] intel.log file stops getting generated.
In-Reply-To:
References: <04F8FE7A-37A5-4B83-8997-BF679705A5BA@illinois.edu> <0C88FC7B-4E32-4C63-B7F9-2428558924AF@illinois.edu>
Message-ID: <374E736C-1E3A-4564-8942-DCDF2F4DF470@illinois.edu>

Interesting, so all of your workers are pretty much the same at

worker-1-12 worker wrk1.xx.xx.xx 78972 parent 5G 5G 6% bro
worker-1-12 worker wrk1.xx.xx.xx 78994 child 428M 269M 0% bro

Do you have any system monitoring graphs that would show memory usage over time? I wonder if they are quickly growing to 5G at startup, or if they are slowly growing over time. In a pinch, you can do things like throw something like (date;broctl top) in cron and send the output to a file.

Are you loading misc/detect-traceroute or misc/scan.bro ?

--
- Justin Azoff

> On Jan 25, 2017, at 1:02 PM, fatema bannatwala wrote:
>
> Forgot to mention about the arch. of cluster:
> 1 manager node (which is defined as logger as well)
> 4 worker nodes (which are defined as proxies as well)
>
> Before (in 2.4.1) we used to have manager act as proxy, but because of performance issue (i.e bro unable to rotate logs on manager), moved the proxy functionality to the workers.
>
> Attaching the output of 'broctl top', as it will swamp this email with text if pasted in the body :-)
>
> Thanks,
> Fatema
>
> On Wed, Jan 25, 2017 at 12:47 PM, Azoff, Justin S wrote:
> > On Jan 25, 2017, at 12:45 PM, fatema bannatwala wrote:
> > >
> > > Hi Justin,
> > >
> > > Thanks for suggestions.
> > > Here are the stats (Looks like bro using pf_ring correctly though):
> >
> > Yes.. that is how it should look.. very important to verify that before checking anything else :-)
> >
> > What does your 'broctl top' output look like?
> >
> > That will break things down by each process
> >
> > --
> > - Justin Azoff

From fatema.bannatwala at gmail.com Wed Jan 25 10:28:14 2017
From: fatema.bannatwala at gmail.com (fatema bannatwala)
Date: Wed, 25 Jan 2017 13:28:14 -0500
Subject: [Bro] intel.log file stops getting generated.
In-Reply-To: <374E736C-1E3A-4564-8942-DCDF2F4DF470@illinois.edu>
References: <04F8FE7A-37A5-4B83-8997-BF679705A5BA@illinois.edu> <0C88FC7B-4E32-4C63-B7F9-2428558924AF@illinois.edu> <374E736C-1E3A-4564-8942-DCDF2F4DF470@illinois.edu>
Message-ID:

Yeah, all procs pretty much the same, not sure why there is a parent/child pair for each process, thought it would just be 22 processes per node, hmm interesting.
I think we don't have any system monitoring graphs on the workers (Looking into installing some tool to do that, was googling about the same :)).
I can setup a cron to do broctl top and send the output to a file.

The misc/detect-traceroute script isn't loaded, but misc/scan is loaded in local.bro, was just about to configure Aashish's scan-NG script to detect other kinds of scans as well, but seeing the boxes already swapping, chucked the plan :(

Thanks,
Fatema.

On Wed, Jan 25, 2017 at 1:13 PM, Azoff, Justin S wrote:
> Interesting, so all of your workers are pretty much the same at
>
> worker-1-12 worker wrk1.xx.xx.xx 78972 parent 5G 5G 6% bro
> worker-1-12 worker wrk1.xx.xx.xx 78994 child 428M 269M 0% bro
>
> Do you have any system monitoring graphs that would show memory usage over time? I wonder if they are quickly growing to 5G at startup, or if they are slowly growing over time. In a pinch, you can do things like throw something like (date;broctl top) in cron and send the output to a file.
>
> Are you loading misc/detect-traceroute or misc/scan.bro ?
>
> --
> - Justin Azoff
>
> > On Jan 25, 2017, at 1:02 PM, fatema bannatwala <fatema.bannatwala at gmail.com> wrote:
> >
> > Forgot to mention about the arch. of cluster:
> > 1 manager node (which is defined as logger as well)
> > 4 worker nodes (which are defined as proxies as well)
> >
> > Before (in 2.4.1) we used to have manager act as proxy, but because of performance issue (i.e bro unable to rotate logs on manager), moved the proxy functionality to the workers.
> >
> > Attaching the output of 'broctl top', as it will swamp this email with text if pasted in the body :-)
> >
> > Thanks,
> > Fatema
> >
> > On Wed, Jan 25, 2017 at 12:47 PM, Azoff, Justin S wrote:
> > > On Jan 25, 2017, at 12:45 PM, fatema bannatwala <fatema.bannatwala at gmail.com> wrote:
> > > >
> > > > Hi Justin,
> > > >
> > > > Thanks for suggestions.
> > > > Here are the stats (Looks like bro using pf_ring correctly though):
> > >
> > > Yes.. that is how it should look.. very important to verify that before checking anything else :-)
> > >
> > > What does your 'broctl top' output look like?
> > >
> > > That will break things down by each process
> > >
> > > --
> > > - Justin Azoff

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170125/cfb8d871/attachment.html

From jdopheid at illinois.edu Wed Jan 25 10:31:52 2017
From: jdopheid at illinois.edu (Dopheide, Jeannette M)
Date: Wed, 25 Jan 2017 18:31:52 +0000
Subject: [Bro] 1 more open seat at Bro4Pros
Message-ID:

We had another cancellation for Bro4Pros, grab your seat while it's available:
https://www.eventbrite.com/e/bro4pros-2017-tickets-29303802462

------
Jeannette Dopheide
Training and Outreach Coordinator
National Center for Supercomputing Applications
University of Illinois at Urbana-Champaign

From jazoff at illinois.edu Wed Jan 25 10:42:41 2017
From: jazoff at illinois.edu (Azoff, Justin S)
Date: Wed, 25 Jan 2017 18:42:41 +0000
Subject: [Bro] intel.log file stops getting generated.
In-Reply-To:
References: <04F8FE7A-37A5-4B83-8997-BF679705A5BA@illinois.edu> <0C88FC7B-4E32-4C63-B7F9-2428558924AF@illinois.edu> <374E736C-1E3A-4564-8942-DCDF2F4DF470@illinois.edu>
Message-ID: <9D746B5D-EFD7-44DA-9175-F20916849446@illinois.edu>

> On Jan 25, 2017, at 1:28 PM, fatema bannatwala wrote:
>
> Yeah, all procs pretty much the same, not sure why there is a parent/child pair for each process, thought it would just be 22 processes per node, hmm interesting.

The child process handles the communication to the manager/proxies. These will go away once the conversion to broker is done.

> I think we don't have any system monitoring graphs on the workers (Looking into installing some tool to do that, was googling about the same :)).
> I can setup a cron to do broctl top and send the output to a file.

Munin is crazy easy to get up and running and does the job, but it's not the best monitoring system out there. You can also use things like sar to collect data and use something else to graph it.
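For a quick look from inside Bro itself, a minimal sketch along these lines records each node's memory use at a fixed interval (assuming a stock Bro 2.4/2.5 scripting environment; the event name and the 5-minute interval are arbitrary choices):

global report_memory: event();

event report_memory()
    {
    # resource_usage() is a built-in that returns per-process resource
    # statistics; peer_description names the local node (e.g. worker-1-1).
    local res = resource_usage();
    print fmt("%s mem=%d", peer_description, res$mem);
    schedule 5 min { report_memory() };
    }

event bro_init()
    {
    schedule 5 min { report_memory() };
    }

Loading the stock policy script misc/stats (@load misc/stats) gives much the same information in stats.log, which is usually enough to tell whether the workers jump to 5G right at startup or creep up over time.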
> > > The misc/detect-traceroute script isn't loaded, but misc/scan is loaded > in local.bro, was just about to configure Aashish's scan-NG script to > detect other kind of scans as well, but > > seeing the boxes already swaping, chucked the plan :( > > Ah.. if your network sees a lot of scan traffic, scan.bro could be what is > killing your cluster. > > If you run these commands, what values do you get? > > wc -l conn.log > cat conn.log|bro-cut id.resp_p |fgrep -cw 23 > cat conn.log|bro-cut history|sort|uniq -c |sort -rn|head > > -- > - Justin Azoff > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170125/97e32f24/attachment-0001.html From jazoff at illinois.edu Wed Jan 25 11:42:19 2017 From: jazoff at illinois.edu (Azoff, Justin S) Date: Wed, 25 Jan 2017 19:42:19 +0000 Subject: [Bro] intel.log file stops getting generated. In-Reply-To: References: <04F8FE7A-37A5-4B83-8997-BF679705A5BA@illinois.edu> <0C88FC7B-4E32-4C63-B7F9-2428558924AF@illinois.edu> <374E736C-1E3A-4564-8942-DCDF2F4DF470@illinois.edu> <9D746B5D-EFD7-44DA-9175-F20916849446@illinois.edu> Message-ID: > On Jan 25, 2017, at 2:06 PM, fatema bannatwala wrote: > > Thanks Justin for suggesting some tools :-) will try those (Maybe Munin first) > > Here's the output of the cmds: > > $ wc -l conn.log > 12913751 conn.log > > $ cat conn.log|bro-cut id.resp_p |fgrep -cw 23 > 3 > > $ cat conn.log|bro-cut history|sort|uniq -c |sort -rn|head > 4230547 S > 2938925 Dd > 1059285 ShADadFf > 968902 ShADadfF > 915401 D > 212507 ShAFf > 177731 SAF > 177359 ShADadFfR > 159024 ShADadfFr > 140911 ShADdaFf Interesting, you're not seeing port 23 scans, but you are seeing a lot of scans.. 1/3 of your connections are unanswered Syn packets. This would show what port is being scanned: cat conn.log |bro-cut id.resp_p history|fgrep -w S|sort|uniq -c|sort -nr|head Disabling scan.bro would likely help a lot. -- - Justin Azoff From pssunu6 at gmail.com Wed Jan 25 11:59:06 2017 From: pssunu6 at gmail.com (ps sunu) Date: Thu, 26 Jan 2017 01:29:06 +0530 Subject: [Bro] intel log fields adding and processing Message-ID: Hi, I have a script which will add one field in intel.log, that part is working now i want read the output from intel.log seen.where field example if seen.where is HTTP::IN_HOST_HEADER and i need to write "itsOk" into my intel.log new field the problem is i am not able to get seen.where field output my code @load frameworks/intel/seen export { global address: table[addr] of string &synchronized &write_expire=7day; redef Intel::read_files += { fmt("%s/intel-1.dat", @DIR) }; redef record Intel::Info += { category: string &optional &log; attribute: string &log &optional; }; } event Intel::log_intel (rec: Intel::Seen) { address[rec$host] = rec$where; host_name_dhcp[rec$assigned_ip] = rec$hostname; } any way to do this ? Regards, sunu -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170126/301887a2/attachment.html From jazoff at illinois.edu Wed Jan 25 12:05:39 2017 From: jazoff at illinois.edu (Azoff, Justin S) Date: Wed, 25 Jan 2017 20:05:39 +0000 Subject: [Bro] intel log fields adding and processing In-Reply-To: References: Message-ID: > On Jan 25, 2017, at 2:59 PM, ps sunu wrote: > > Hi, > I have a script which will add one field in intel.log, that part is working > now i want read the output from intel.log seen.where field example if seen.where is HTTP::IN_HOST_HEADER and i need to write "itsOk" into my intel.log new field > > the problem is i am not able to get seen.where field output > The main issue is that the log_intel event is called with a Intel::Info, not an Intel::Seen. seen.where is the representation of the info record$seen$where field, so you need to do something like this: event Intel::log_intel (rec: Intel::Info) { print "rec$seen$where is", rec$seen$where; } http://try.bro.org/#/trybro/saved/118697 -- - Justin Azoff From fatema.bannatwala at gmail.com Wed Jan 25 12:29:44 2017 From: fatema.bannatwala at gmail.com (fatema bannatwala) Date: Wed, 25 Jan 2017 15:29:44 -0500 Subject: [Bro] intel.log file stops getting generated. In-Reply-To: References: <04F8FE7A-37A5-4B83-8997-BF679705A5BA@illinois.edu> <0C88FC7B-4E32-4C63-B7F9-2428558924AF@illinois.edu> <374E736C-1E3A-4564-8942-DCDF2F4DF470@illinois.edu> <9D746B5D-EFD7-44DA-9175-F20916849446@illinois.edu> Message-ID: Thanks Justin! Happening again, no intel.log file getting generated (I don't know why poor intel file getting impacted, n not any other log file :-/ ) Here's the stats (before I go ahead and disable scan.bro, and restart the cluster) $ cat conn.log |bro-cut id.resp_p history|fgrep -w S|sort|uniq -c|sort -nr|head 398587 2323 S 256953 5358 S 205109 7547 S 115442 6789 S 101712 22 S 97051 81 S 90099 5800 S 44297 40884 S 43943 40876 S 35522 80 S $ free total used free shared buff/cache available Mem: 131921372 131069700 562628 18476 289044 223264 Swap: 8388600 4443208 3945392 As it can be seen above, worker1 using almost 100% memory :( Going to disable scan.bro, and restart the cluster. Also, will get Munin to have system monitoring graph on the sensors. Thanks, Fatema. On Wed, Jan 25, 2017 at 2:42 PM, Azoff, Justin S wrote: > > On Jan 25, 2017, at 2:06 PM, fatema bannatwala < > fatema.bannatwala at gmail.com> wrote: > > > > Thanks Justin for suggesting some tools :-) will try those (Maybe Munin > first) > > > > Here's the output of the cmds: > > > > $ wc -l conn.log > > 12913751 conn.log > > > > $ cat conn.log|bro-cut id.resp_p |fgrep -cw 23 > > 3 > > > > $ cat conn.log|bro-cut history|sort|uniq -c |sort -rn|head > > 4230547 S > > 2938925 Dd > > 1059285 ShADadFf > > 968902 ShADadfF > > 915401 D > > 212507 ShAFf > > 177731 SAF > > 177359 ShADadFfR > > 159024 ShADadfFr > > 140911 ShADdaFf > > Interesting, you're not seeing port 23 scans, but you are seeing a lot of > scans.. 1/3 of your connections are unanswered Syn packets. > > This would show what port is being scanned: > > cat conn.log |bro-cut id.resp_p history|fgrep -w S|sort|uniq -c|sort > -nr|head > > Disabling scan.bro would likely help a lot. > > -- > - Justin Azoff > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170125/eee483ee/attachment.html From jazoff at illinois.edu Wed Jan 25 13:13:22 2017 From: jazoff at illinois.edu (Azoff, Justin S) Date: Wed, 25 Jan 2017 21:13:22 +0000 Subject: [Bro] intel.log file stops getting generated. In-Reply-To: References: <04F8FE7A-37A5-4B83-8997-BF679705A5BA@illinois.edu> <0C88FC7B-4E32-4C63-B7F9-2428558924AF@illinois.edu> <374E736C-1E3A-4564-8942-DCDF2F4DF470@illinois.edu> <9D746B5D-EFD7-44DA-9175-F20916849446@illinois.edu> Message-ID: <08A108BA-BAE5-4C85-A4B3-206B507932A0@illinois.edu> > On Jan 25, 2017, at 3:29 PM, fatema bannatwala wrote: > > Thanks Justin! > > Happening again, no intel.log file getting generated (I don't know why poor intel file getting impacted, n not any other log file :-/ ) > > Here's the stats (before I go ahead and disable scan.bro, and restart the cluster) > $ cat conn.log |bro-cut id.resp_p history|fgrep -w S|sort|uniq -c|sort -nr|head > 398587 2323 S > 256953 5358 S > 205109 7547 S > 115442 6789 S > 101712 22 S > 97051 81 S > 90099 5800 S > 44297 40884 S > 43943 40876 S > 35522 80 S Ah.. that looks about right for the constant flood of IoT Scan crap. Are you filtering port 23 before bro can see it? 23 would be about 10x the volume of 2323. -- - Justin Azoff From fatema.bannatwala at gmail.com Wed Jan 25 13:23:04 2017 From: fatema.bannatwala at gmail.com (fatema bannatwala) Date: Wed, 25 Jan 2017 16:23:04 -0500 Subject: [Bro] intel.log file stops getting generated. In-Reply-To: <08A108BA-BAE5-4C85-A4B3-206B507932A0@illinois.edu> References: <04F8FE7A-37A5-4B83-8997-BF679705A5BA@illinois.edu> <0C88FC7B-4E32-4C63-B7F9-2428558924AF@illinois.edu> <374E736C-1E3A-4564-8942-DCDF2F4DF470@illinois.edu> <9D746B5D-EFD7-44DA-9175-F20916849446@illinois.edu> <08A108BA-BAE5-4C85-A4B3-206B507932A0@illinois.edu> Message-ID: Ah, makes sense, yes port 23 is getting blocked at the border, hence Bro wouldn't be seeing any traffic to port 23... :) Disabled the scan.bro file. Is there any other script(s) that can be used in place of scan.bro , i.e scan-NG would also have same effect as well? Thanks Justin for the help to troubleshoot the issue, will keep an eye on the sensors for any performance hit for next 24 hours. On Wed, Jan 25, 2017 at 4:13 PM, Azoff, Justin S wrote: > > > On Jan 25, 2017, at 3:29 PM, fatema bannatwala < > fatema.bannatwala at gmail.com> wrote: > > > > Thanks Justin! > > > > Happening again, no intel.log file getting generated (I don't know why > poor intel file getting impacted, n not any other log file :-/ ) > > > > Here's the stats (before I go ahead and disable scan.bro, and restart > the cluster) > > $ cat conn.log |bro-cut id.resp_p history|fgrep -w S|sort|uniq -c|sort > -nr|head > > 398587 2323 S > > 256953 5358 S > > 205109 7547 S > > 115442 6789 S > > 101712 22 S > > 97051 81 S > > 90099 5800 S > > 44297 40884 S > > 43943 40876 S > > 35522 80 S > > Ah.. that looks about right for the constant flood of IoT Scan crap. Are > you filtering port 23 before bro can see it? 23 would be about 10x the > volume of 2323. > > > -- > - Justin Azoff > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170125/be53a509/attachment.html From daniel.guerra69 at gmail.com Wed Jan 25 13:27:00 2017 From: daniel.guerra69 at gmail.com (Daniel Guerra) Date: Wed, 25 Jan 2017 22:27:00 +0100 Subject: [Bro] Web GUI for Bro? In-Reply-To: References: Message-ID: Hi, Check my docker project. 
https://hub.docker.com/r/danielguerra/bro-debian-elasticsearch/ The quick way : export DOCKERHOST=":8080" wget https://raw.githubusercontent.com/danielguerra69/bro-debian-elasticsearch/master/docker-compose.yml docker-compose pull docker-compose up You can send pcap data with pcap to port 1969 ?nc dockerip 1969 < mypcapfile? After this open your browser to dockerip:5601 for kibana, its preconfigured with some queries and desktops. > On 25 Jan 2017, at 14:48, project722 wrote: > > Thanks All. I am looking into ELK. > > On Tue, Jan 24, 2017 at 2:44 AM, Kevin Ross > wrote: > As said before ELK is your best bet. Here is a link that may interest you. The learning curve may be steep but it is worth it in the end (assuming you are putting this together yourself and not a all in one solution that provides it for you) when you can query logs as easily as a google search and visualise. > > https://www.elastic.co/blog/bro-ids-elastic-stack > > Also you could use security oniion and it uses ELSA to present these logs although my preference these days because of its easier ability I find to add in new data sources would be ELK (i.e once you understand logstash and parsing logs you can easily parse any log you have to correlate Bro, IDS, network and even host logs). > > https://github.com/mcholste/elsa > http://blog.bro.org/2012/01/monster-logs.html > > On 21 January 2017 at 11:54, project722 > wrote: > Got Bro 2.4.1 working on a RHEL 6 system. Can anyone provide suggestions on what I should use as a web GUI for bro? What is the best options out there? NOTE - my version of Bro was compiled from source. > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170125/84432b7c/attachment-0001.html From jazoff at illinois.edu Wed Jan 25 13:27:53 2017 From: jazoff at illinois.edu (Azoff, Justin S) Date: Wed, 25 Jan 2017 21:27:53 +0000 Subject: [Bro] intel.log file stops getting generated. In-Reply-To: References: <04F8FE7A-37A5-4B83-8997-BF679705A5BA@illinois.edu> <0C88FC7B-4E32-4C63-B7F9-2428558924AF@illinois.edu> <374E736C-1E3A-4564-8942-DCDF2F4DF470@illinois.edu> <9D746B5D-EFD7-44DA-9175-F20916849446@illinois.edu> <08A108BA-BAE5-4C85-A4B3-206B507932A0@illinois.edu> Message-ID: > On Jan 25, 2017, at 4:23 PM, fatema bannatwala wrote: > > Ah, makes sense, yes port 23 is getting blocked at the border, hence Bro wouldn't be seeing any traffic to port 23... :) > Disabled the scan.bro file. Is there any other script(s) that can be used in place of scan.bro , i.e scan-NG would also have same effect as well? > Thanks Justin for the help to troubleshoot the issue, will keep an eye on the sensors for any performance hit for next 24 hours. scan-NG will work a lot better than scan.bro. I have a version that is kind of like 'scan-ng-lite' but from a users point of view it doesn't add much over scan-NG, so you should just use that. -- - Justin Azoff From fatema.bannatwala at gmail.com Wed Jan 25 13:43:24 2017 From: fatema.bannatwala at gmail.com (fatema bannatwala) Date: Wed, 25 Jan 2017 16:43:24 -0500 Subject: [Bro] intel.log file stops getting generated. 
In-Reply-To: References: <04F8FE7A-37A5-4B83-8997-BF679705A5BA@illinois.edu> <0C88FC7B-4E32-4C63-B7F9-2428558924AF@illinois.edu> <374E736C-1E3A-4564-8942-DCDF2F4DF470@illinois.edu> <9D746B5D-EFD7-44DA-9175-F20916849446@illinois.edu> <08A108BA-BAE5-4C85-A4B3-206B507932A0@illinois.edu> Message-ID: Alrighty, yeah was looking into how to configure the script according to the environment. It appears that we have to define the list of allocated subnets in the network, as landmine works on watching connections which are not in allocated subnets. Defining the allocated subnets is a pain, have a whole lot list of subnets that are allocated and have just couple of subnets that constitute the darknet, hence was tweaking around the scripts to change that setting from defining allocated subnets to rather defining un-allocated subnets, which is much easier. Thanks, Fatema. On Wed, Jan 25, 2017 at 4:27 PM, Azoff, Justin S wrote: > > > On Jan 25, 2017, at 4:23 PM, fatema bannatwala < > fatema.bannatwala at gmail.com> wrote: > > > > Ah, makes sense, yes port 23 is getting blocked at the border, hence Bro > wouldn't be seeing any traffic to port 23... :) > > Disabled the scan.bro file. Is there any other script(s) that can be > used in place of scan.bro , i.e scan-NG would also have same effect as well? > > Thanks Justin for the help to troubleshoot the issue, will keep an eye > on the sensors for any performance hit for next 24 hours. > > scan-NG will work a lot better than scan.bro. I have a version that is > kind of like 'scan-ng-lite' but from a users point of view it doesn't add > much over scan-NG, so you should just use that. > > > > -- > - Justin Azoff > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170125/34f388c7/attachment.html From project722 at gmail.com Wed Jan 25 13:55:40 2017 From: project722 at gmail.com (project722) Date: Wed, 25 Jan 2017 15:55:40 -0600 Subject: [Bro] Web GUI for Bro? In-Reply-To: References: Message-ID: This ELK/Bro combo is turning out to be more of a learning curve than I has hoped for. I can get the logs over to elasticsearch and into Kibana, but I can only see them on the "Discovery" tab. I save the search to use with a visualization, but it wants to do something by "count" and its not breaking down the connections in conn.log and graphing them like I had hoped for. Here is my logstash conf file. 
input { stdin { } file { path => "/opt/bro/logs/current/*.log" start_position => "beginning" } } filter { if [message] =~ /^(\d+\.\d{6}\s+\S+\s+(?:[\d\.]+|[\w:]+|-)\s+(?:\d+|-)\s+(?:[\d\.]+|[\w:]+|-)\s+(?:\d+|-)\s+\S+\s+\S+\s+\S+\s+\S+\s+[^:]+::\S+\s+[^:]+::\S+\s+\S+(?:\s\S+)*$)/ { grok{ patterns_dir => "/opt/logstash/custom_patterns" match => { message => "%{291009}" } add_field => [ "rule_id", "291009" ] add_field => [ "Device Type", "IPSIDSDevice" ] add_field => [ "Object", "Process" ] add_field => [ "Action", "General" ] add_field => [ "Status", "Informational" ] } } #translate { # field => "evt_dstip" # destination => "malicious_IP" # dictionary_path => '/opt/logstash/maliciousIPV4.yaml' #} #translate { # field => "evt_srcip" # destination => "malicious_IP" # dictionary_path => '/opt/logstash/maliciousIPV4.yaml' #} #translate { # field => "md5" # destination => "maliciousMD5" # dictionary_path => '/opt/logstash/maliciousMD5.yaml' #} #date { # match => [ "start_time", "UNIX" ] #} } output { elasticsearch { hosts => ["localhost:9200"] } stdout { codec => rubydebug } In Kibana under the Discover tab I can see my messages from conn.log. How can I get this data properly graphed and broken down more like how the connection summary emails are broken down? January 25th 2017, 15:52:57.702 1485381116.563095 CN2Wu7l8JEjji3ht3 192.168.100.102 58128 192.168.100.103 161 udp snmp 0.010298 53 53 SF T T 0 Dd 1 81 1 81 (empty) On Wed, Jan 25, 2017 at 3:27 PM, Daniel Guerra wrote: > Hi, > > Check my docker project. > > https://hub.docker.com/r/danielguerra/bro-debian-elasticsearch/ > > The quick way : > > export DOCKERHOST=":8080" > wget https://raw.githubusercontent.com/danielguerra69/bro-debian- > elasticsearch/master/docker-compose.yml > docker-compose pull > docker-compose up > > You can send pcap data with pcap to port 1969 ?nc dockerip 1969 < > mypcapfile? > > After this open your browser to dockerip:5601 for kibana, its > preconfigured with some > queries and desktops. > > > On 25 Jan 2017, at 14:48, project722 wrote: > > Thanks All. I am looking into ELK. > > On Tue, Jan 24, 2017 at 2:44 AM, Kevin Ross > wrote: > >> As said before ELK is your best bet. Here is a link that may interest >> you. The learning curve may be steep but it is worth it in the end >> (assuming you are putting this together yourself and not a all in one >> solution that provides it for you) when you can query logs as easily as a >> google search and visualise. >> >> https://www.elastic.co/blog/bro-ids-elastic-stack >> >> Also you could use security oniion and it uses ELSA to present these logs >> although my preference these days because of its easier ability I find to >> add in new data sources would be ELK (i.e once you understand logstash and >> parsing logs you can easily parse any log you have to correlate Bro, IDS, >> network and even host logs). >> >> https://github.com/mcholste/elsa >> http://blog.bro.org/2012/01/monster-logs.html >> >> On 21 January 2017 at 11:54, project722 wrote: >> >>> Got Bro 2.4.1 working on a RHEL 6 system. Can anyone provide suggestions >>> on what I should use as a web GUI for bro? What is the best options out >>> there? NOTE - my version of Bro was compiled from source. 
>>> >>> _______________________________________________ >>> Bro mailing list >>> bro at bro-ids.org >>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >>> >>> >> >> > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170125/92eb61e1/attachment-0001.html From jlay at slave-tothe-box.net Wed Jan 25 14:02:50 2017 From: jlay at slave-tothe-box.net (James Lay) Date: Wed, 25 Jan 2017 15:02:50 -0700 Subject: [Bro] Web GUI for Bro? In-Reply-To: References: Message-ID: <4e6f5db643bbed774ccb37816f305981@localhost> On 2017-01-25 14:55, project722 wrote: > This ELK/Bro combo is turning out to be more of a learning curve than > I has hoped for. I can get the logs over to elasticsearch and into > Kibana, but I can only see them on the "Discovery" tab. I save the > search to use with a visualization, but it wants to do something by > "count" and its not breaking down the connections in conn.log and > graphing them like I had hoped for. Here is my logstash conf file. > > input { > stdin { } > file { > path => "/opt/bro/logs/current/*.log" > start_position => "beginning" > } > } > > filter { > if [message] =~ > /^(\d+\.\d{6}\s+\S+\s+(?:[\d\.]+|[\w:]+|-)\s+(?:\d+|-)\s+(?:[\d\.]+|[\w:]+|-)\s+(?:\d+|-)\s+\S+\s+\S+\s+\S+\s+\S+\s+[^:]+::\S+\s+[^:]+::\S+\s+\S+(?:\s\S+)*$)/ > { > grok{ > patterns_dir => "/opt/logstash/custom_patterns" > match => { > message => "%{291009}" > } > add_field => [ "rule_id", "291009" ] > add_field => [ "Device Type", "IPSIDSDevice" ] > add_field => [ "Object", "Process" ] > add_field => [ "Action", "General" ] > add_field => [ "Status", "Informational" ] > } > } > > #translate { > # field => "evt_dstip" > # destination => "malicious_IP" > # dictionary_path => '/opt/logstash/maliciousIPV4.yaml' > #} > #translate { > # field => "evt_srcip" > # destination => "malicious_IP" > # dictionary_path => '/opt/logstash/maliciousIPV4.yaml' > #} > #translate { > # field => "md5" > # destination => "maliciousMD5" > # dictionary_path => '/opt/logstash/maliciousMD5.yaml' > #} > #date { > # match => [ "start_time", "UNIX" ] > #} > > } > > output { > elasticsearch { hosts => ["localhost:9200"] } > stdout { codec => rubydebug } > > In Kibana under the Discover tab I can see my messages from conn.log. > How can I get this data properly graphed and broken down more like how > the connection summary emails are broken down? > > January 25th 2017, 15:52:57.702 > > 1485381116.563095 CN2Wu7l8JEjji3ht3 192.168.100.102 58128 > 192.168.100.103 161 udp snmp 0.010298 53 53 SF T T 0 Dd 1 81 1 81 > (empty) > > On Wed, Jan 25, 2017 at 3:27 PM, Daniel Guerra > wrote: > >> Hi, >> >> Check my docker project. >> >> https://hub.docker.com/r/danielguerra/bro-debian-elasticsearch/ [1] >> >> The quick way : >> >> export DOCKERHOST=":8080" >> wget >> > https://raw.githubusercontent.com/danielguerra69/bro-debian-elasticsearch/master/docker-compose.yml >> [2] >> docker-compose pull >> docker-compose up >> >> You can send pcap data with pcap to port 1969 ?nc dockerip 1969 < >> mypcapfile? >> >> After this open your browser to dockerip:5601 for kibana, its >> preconfigured with some >> queries and desktops. >> >> On 25 Jan 2017, at 14:48, project722 wrote: >> >> Thanks All. I am looking into ELK. 
>> >> On Tue, Jan 24, 2017 at 2:44 AM, Kevin Ross >> wrote: >> >> As said before ELK is your best bet. Here is a link that may >> interest you. The learning curve may be steep but it is worth it in >> the end (assuming you are putting this together yourself and not a >> all in one solution that provides it for you) when you can query >> logs as easily as a google search and visualise. >> >> https://www.elastic.co/blog/bro-ids-elastic-stack [3] >> >> Also you could use security oniion and it uses ELSA to present these >> logs although my preference these days because of its easier ability >> I find to add in new data sources would be ELK (i.e once you >> understand logstash and parsing logs you can easily parse any log >> you have to correlate Bro, IDS, network and even host logs). >> >> https://github.com/mcholste/elsa [4] >> http://blog.bro.org/2012/01/monster-logs.html [5] >> >> On 21 January 2017 at 11:54, project722 >> wrote: >> >> Got Bro 2.4.1 working on a RHEL 6 system. Can anyone provide >> suggestions on what I should use as a web GUI for bro? What is the >> best options out there? NOTE - my version of Bro was compiled from >> source. >> Mod this to your liking and see how it goes: ##### input { file { type => "connlog" path => "/usr/local/bro/spool/bro/conn.log" sincedb_path => "/var/lib/logstash/.sincedbconn" } file { type => "ssllog" path => "/usr/local/bro/spool/bro/ssl.log" sincedb_path => "/var/lib/logstash/.sincedbssl" } } filter { #bro conn.log if [type] == "connlog" { if [message] =~ "^#" { drop { } } else { grok { match => [ "message", "(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*)))" ] } } } #bro ssl.log if [type] == "ssllog" { if [message] =~ "^#" { drop { } } else { grok { match => [ "message", "(?(.*?))\t%{DATA:uid}\t%{DATA:src_ip}\t%{DATA:src_port}\t%{DATA:dst_ip}\t%{DATA:dst_port}\t%{DATA:version}\t%{DATA:cipher}\t%{DATA:curve}\t%{DATA:hostname}\t%{DATA:resumed}\t%{DATA:last_alert}\t%{DATA:next_protocol}\t%{DATA:established}\t%{DATA:cert_chain_fuids}\t%{DATA:client_cert_chain_fuids}\t%{DATA:subject}\t%{DATA:issuer}\t%{DATA:client_subject}\t%{DATA:client_issuer}\t%{DATA:validation_status}\t%{DATA:notary.first_seen}\t%{DATA:notary.last_seen}\t%{DATA:notary.times_seen}\t%{DATA:notary.valid}" ] } } } #geoip source geoip { source => "src_ip" target => "src_geoip" } #geoip destination geoip { source => "dst_ip" target => "dst_geoip" } mutate { convert => [ "resp_bytes", "integer" ] convert => [ "resp_ip_bytes", "integer" ] convert => [ "orig_bytes", "integer" ] convert => [ "orig_ip_bytes", "integer" ] convert => [ "src_port", "integer" ] convert => [ "dst_port", "integer" ] gsub => [ "src_geoip.country_name", "[ ]", "_", "dst_geoip.country_name", "[ ]", "_", "proto", "tcp", "TCP", "proto", "udp", "UDP", "proto", "icmp", "ICMP" ] } } output { #uncomment below for testing #stdout { codec => rubydebug } elasticsearch { } } #### James From jazoff at illinois.edu Wed Jan 25 14:10:15 2017 From: jazoff at illinois.edu (Azoff, Justin S) Date: Wed, 25 Jan 2017 22:10:15 +0000 Subject: [Bro] intel.log file stops getting generated. 
In-Reply-To: References: <04F8FE7A-37A5-4B83-8997-BF679705A5BA@illinois.edu> <0C88FC7B-4E32-4C63-B7F9-2428558924AF@illinois.edu> <374E736C-1E3A-4564-8942-DCDF2F4DF470@illinois.edu> <9D746B5D-EFD7-44DA-9175-F20916849446@illinois.edu> <08A108BA-BAE5-4C85-A4B3-206B507932A0@illinois.edu> Message-ID: > On Jan 25, 2017, at 4:43 PM, fatema bannatwala wrote: > > Alrighty, yeah was looking into how to configure the script according to the environment. > It appears that we have to define the list of allocated subnets in the network, > as landmine works on watching connections which are not in allocated subnets. > > Defining the allocated subnets is a pain, have a whole lot list of subnets that are allocated and > have just couple of subnets that constitute the darknet, hence was tweaking around the scripts to change that setting > from defining allocated subnets to rather defining un-allocated subnets, which is much easier. That part is optional(but extremely useful). I'm glad you brought this up, the darknet configuration problem is something I've been thinking about how to fix: * Some people define darknet as NOT allocated. * Some people know exactly which subnets are dark. I did write a version of the darknet code that auto-tunes itself based on allocated subnets, it's part of my scan code: https://gist.github.com/JustinAzoff/80f97af4f4fbb91ae26492b919a50434 One can let it run for a while, and then do a broctl print Site::used_address_space manager to dump out what it figures out as active, and then put it in a file that does @load ./dark-nets redef Site::used_address_space = { ... } It's not perfect but it's a start. broker with its persistent data store support may be what is needed to make it more useful. The only issue is it doesn't support something like a honey net that does technically exist: the auto tuning code will flag it as an allocated subnet. I need to work out how it should be overridden in cases like that. Aside from the auto detection the function just comes down to return (a in local_nets && a !in used_address_space); In your case you want this instead return (a in dark_address_space); so I think the simplest thing that may possibly work for everyone is something like global dark_address_space: set[subnet] &redef; and change the is_darknet logic to be if(|dark_address_space|) return (a in dark_address_space); else return (a in local_nets && a !in used_address_space); Or maybe just return (a in local_nets && (a in dark_address_space || a !in used_address_space); but I could see a different user wanting this instead: return (a in local_nets && a in dark_address_space && a !in used_address_space); for the use case of "dark_address_space is my darknet subnets, but something may be allocated without us knowing, so double check!" I haven't quite figured this out yet.. Maybe the answer is that there isn't a one size fits all implementation and I just need to have 4 is_darknet functions depending on how people want it to work: return (a in dark_address_space); #mode=darknet return (a in local_nets && a !in used_address_space); #mode=not_allocated return (a in local_nets && (a in dark_address_space || a !in used_address_space); #mode=darknet_or_not_allocated return (a in local_nets && a in dark_address_space && a !in used_address_space); #mode=darknet_and_not_allocated actually, now that I finally wrote all this out, I see that it's just the 4 combinations of 2 boolean flags. 
-- - Justin Azoff From fatema.bannatwala at gmail.com Wed Jan 25 15:16:34 2017 From: fatema.bannatwala at gmail.com (fatema bannatwala) Date: Wed, 25 Jan 2017 18:16:34 -0500 Subject: [Bro] intel.log file stops getting generated. In-Reply-To: References: <04F8FE7A-37A5-4B83-8997-BF679705A5BA@illinois.edu> <0C88FC7B-4E32-4C63-B7F9-2428558924AF@illinois.edu> <374E736C-1E3A-4564-8942-DCDF2F4DF470@illinois.edu> <9D746B5D-EFD7-44DA-9175-F20916849446@illinois.edu> <08A108BA-BAE5-4C85-A4B3-206B507932A0@illinois.edu> Message-ID: Great! glad to know that I am not the only one dealing with this glitch in scan-NG :-) Now it totally makes sense, as I was thinking that in our case darknet and un-allocated subnets are same but will have to be careful, as you mentioned, when un-allocated subnets can get assigned without us knowing! I knew that there were two upgraded scan scripts available other than the one that gets ship with Bro by default, one that you wrote scan.bro, and another scan-NG script. Hence was thinking to migrate to use one of those, and stumbled across this darknet and allocated net defining issue. (There was no way I could be able to define the complete list of allocated subnets in scan-NG config, without missing anything :-) , at that time I though it's required to define the allocated subnets, as there's a comment in scan-config.bro that reads like: "####Important to configure for Landmine detection #### if subnet_feed is empty then LandMine detection wont work " hence thought of tweaking that setting to rather define darknet, but never got around to it) Great to know that there's already some code written by you that works around this issue! Thanks a ton for all the explanation and link to your scan script, great help! will go through it and see if I can get it up and running in our cluster :-) Thanks Justin! On Wed, Jan 25, 2017 at 5:10 PM, Azoff, Justin S wrote: > > > On Jan 25, 2017, at 4:43 PM, fatema bannatwala < > fatema.bannatwala at gmail.com> wrote: > > > > Alrighty, yeah was looking into how to configure the script according to > the environment. > > It appears that we have to define the list of allocated subnets in the > network, > > as landmine works on watching connections which are not in allocated > subnets. > > > > Defining the allocated subnets is a pain, have a whole lot list of > subnets that are allocated and > > have just couple of subnets that constitute the darknet, hence was > tweaking around the scripts to change that setting > > from defining allocated subnets to rather defining un-allocated subnets, > which is much easier. > > That part is optional(but extremely useful). I'm glad you brought this > up, the darknet configuration problem is something I've been thinking about > how to fix: > > * Some people define darknet as NOT allocated. > * Some people know exactly which subnets are dark. > > I did write a version of the darknet code that auto-tunes itself based on > allocated subnets, it's part of my scan code: > > https://gist.github.com/JustinAzoff/80f97af4f4fbb91ae26492b919a50434 > > One can let it run for a while, and then do a > > broctl print Site::used_address_space manager > > to dump out what it figures out as active, and then put it in a file that > does > > @load ./dark-nets > redef Site::used_address_space = { > ... > } > > It's not perfect but it's a start. broker with its persistent data store > support may be what is needed to make it more useful. 
> > The only issue is it doesn't support something like a honey net that does > technically exist: the auto tuning code will flag it as an allocated > subnet. I need to work out how it should be overridden in cases like that. > > Aside from the auto detection the function just comes down to > > return (a in local_nets && a !in used_address_space); > > In your case you want this instead > > return (a in dark_address_space); > > so I think the simplest thing that may possibly work for everyone is > something like > > global dark_address_space: set[subnet] &redef; > > and change the is_darknet logic to be > > > if(|dark_address_space|) > return (a in dark_address_space); > else > return (a in local_nets && a !in used_address_space); > > Or maybe just > > return (a in local_nets && (a in dark_address_space || a !in > used_address_space); > > but I could see a different user wanting this instead: > > return (a in local_nets && a in dark_address_space && a !in > used_address_space); > > for the use case of "dark_address_space is my darknet subnets, but > something may be allocated without us knowing, so double check!" > > I haven't quite figured this out yet.. Maybe the answer is that there > isn't a one size fits all implementation and I just need to have 4 > is_darknet functions depending on how people want it to work: > > > return (a in dark_address_space); #mode=darknet > > return (a in local_nets && a !in used_address_space); > #mode=not_allocated > > return (a in local_nets && (a in dark_address_space || a !in > used_address_space); #mode=darknet_or_not_allocated > > return (a in local_nets && a in dark_address_space && a !in > used_address_space); #mode=darknet_and_not_allocated > > actually, now that I finally wrote all this out, I see that it's just the > 4 combinations of 2 boolean flags. > > -- > - Justin Azoff > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170125/01f267e1/attachment-0001.html From lc.taylor at protonmail.com Wed Jan 25 17:23:02 2017 From: lc.taylor at protonmail.com (Lincy Taylor) Date: Wed, 25 Jan 2017 20:23:02 -0500 Subject: [Bro] Lots of dns_unmatched_msg, dns_unmatched_reply in weird.log In-Reply-To: <1485345914.2567.7.camel@slave-tothe-box.net> References: <1485345914.2567.7.camel@slave-tothe-box.net> Message-ID: Hello James, I finally found the root cause with your provided parameters running bro. The error was due to the offloading of checksumming to adapter on my local system while the traffic was captured, which is already mentioned on bro's website[1]. So many thanks for your help! 1. https://www.bro.org/documentation/faq.html#why-isn-t-bro-producing-the-logs-i-expect-a-note-about-checksums Lincy Sent with [ProtonMail](https://protonmail.com) Secure Email. -------- Original Message -------- Subject: Re: [Bro] Lots of dns_unmatched_msg, dns_unmatched_reply in weird.log Local Time: 2017?1?25? 8:05 ?? UTC Time: 2017?1?25? ??12?05? 
From: jlay at slave-tothe-box.net To: bro at bro.org On Wed, 2017-01-25 at 03:32 -0500, Lincy Taylor wrote: Hello all: I recently found lots of "dns_unmatched_msg" and "dns_unmatched_reply" errors in weird.log of Bro, which likes the following: 1485331604.840044 CSdHx91xFbEKdyo3Pi 172.16.185.11 40721 8.8.8.8 53 dns_unmatched_reply - F bro 1485331609.712570 Cw4TXS1DvS49mvRtN4 172.16.185.11 58915 8.8.8.8 53 dns_unmatched_reply - F bro 1485331619.101223 CSdHx91xFbEKdyo3Pi 172.16.185.11 40721 8.8.8.8 53 dns_unmatched_msg - F bro 1485331619.115208 CGwJfm35oSWSuMdVS6 172.16.185.11 50308 8.8.8.8 53 dns_unmatched_reply - F bro 1485331619.115208 Cw4TXS1DvS49mvRtN4 172.16.185.11 58915 8.8.8.8 53 dns_unmatched_msg - F bro 1485331619.115208 CGwJfm35oSWSuMdVS6 172.16.185.11 50308 8.8.8.8 53 dns_unmatched_msg - F bro I used tcpdump to create a traffic dump of several dns queries made by dig on ubuntu to 8.8.8.8 and analyzed by "bro -r", the errors are still there in weird.log. The errors seems to be related to an unmatch of query id of query and response messages according to snippet in "share/bro/base/protocols/dns/main.bro". But I found the query ids are consistent with each of DNS query and response by tracing the traffic dump in wireshark. Has anyone experienced the same issue before? I attached the log files and pcap file within this message, please help me to find out the root cause. Thank you! Sent with [ProtonMail](https://protonmail.com) Secure Email. _______________________________________________ Bro mailing list bro at bro-ids.org http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro Make sure you set your local net to include the 172 net. As a test on the pcap I ran: bro -C -r pcaps/dns_8.8.8.8.pcap local "Site::local_nets += { 172.16.0.0/12 }" This gets me conn and dns, but no weird log. James -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170125/0de8ecb0/attachment.html From pssunu6 at gmail.com Thu Jan 26 00:39:30 2017 From: pssunu6 at gmail.com (ps sunu) Date: Thu, 26 Jan 2017 14:09:30 +0530 Subject: [Bro] intel log fields adding and processing In-Reply-To: References: Message-ID: Thanks Now i need to write the if condition output into Intel.log category field which i have added in intel.log my latest code @load frameworks/intel/seen export { redef Intel::read_files += { fmt("%s/intel-1.dat", @DIR) }; redef record Intel::Info += { category: string &optional &log; attribute: string &log &optional; }; } event Intel::log_intel (rec: Intel::Info) { if ( rec$seen$where == HTTP::IN_HOST_HEADER ) { print "True"; } else { print "False "; } print "rec$seen$where is", rec$seen$where; } I need if condition True string into intel.log category field its possible ? http://try.bro.org/#/trybro/saved/118899 Regards, Sunu On Thu, Jan 26, 2017 at 1:35 AM, Azoff, Justin S wrote: > > > On Jan 25, 2017, at 2:59 PM, ps sunu wrote: > > > > Hi, > > I have a script which will add one field in > intel.log, that part is working > > now i want read the output from intel.log seen.where field example > if seen.where is HTTP::IN_HOST_HEADER and i need to write "itsOk" into my > intel.log new field > > > > the problem is i am not able to get seen.where field output > > > > The main issue is that the log_intel event is called with a Intel::Info, > not an Intel::Seen. 
> > seen.where is the representation of the info record$seen$where field, so > you need to do something like this: > > event Intel::log_intel (rec: Intel::Info) > { > print "rec$seen$where is", rec$seen$where; > } > > http://try.bro.org/#/trybro/saved/118697 > > > > -- > - Justin Azoff > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170126/bba34690/attachment.html From pssunu6 at gmail.com Thu Jan 26 04:17:25 2017 From: pssunu6 at gmail.com (ps sunu) Date: Thu, 26 Jan 2017 17:47:25 +0530 Subject: [Bro] adding output into intel.log Message-ID: i need to write the if condition output into Intel.log category field which i have added in intel.log my latest code @load frameworks/intel/seen export { redef Intel::read_files += { fmt("%s/intel-1.dat", @DIR) }; redef record Intel::Info += { category: string &optional &log; attribute: string &log &optional; }; } event Intel::log_intel (rec: Intel::Info) { if ( rec$seen$where == HTTP::IN_HOST_HEADER ) { print "True"; } else { print "False "; } print "rec$seen$where is", rec$seen$where; } I need if condition True string into intel.log category field its possible ? http://try.bro.org/#/trybro/saved/118899 Regards, Sunu -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170126/0df6e4e2/attachment.html From craigp at iup.edu Thu Jan 26 04:40:15 2017 From: craigp at iup.edu (Craig Pluchinsky) Date: Thu, 26 Jan 2017 07:40:15 -0500 (EST) Subject: [Bro] Web GUI for Bro? In-Reply-To: <4e6f5db643bbed774ccb37816f305981@localhost> References: <4e6f5db643bbed774ccb37816f305981@localhost> Message-ID: I started to use the csv filter instead of grok. Just change the delimiter to a literal tab. Also make sure to not use "." in the column names. I just copied the bro field names. if [type] == "bro_conn" { csv { columns => [ "ts","uid","orig_h","orig_p","resp_h","resp_p","proto","service","duration","orig_bytes","resp_bytes","conn_state","local_orig","local_resp","missed_bytes","history","orig_pkts","orig_ip_bytes","resp_pkts","resp_ip_bytes","tunnel_parents","peer_descr","orig_cc","resp_cc" ] separator => " " } } ------------------------------- Craig Pluchinsky IT Services Indiana University of Pennsylvania 724-357-3327 On Wed, 25 Jan 2017, James Lay wrote: > On 2017-01-25 14:55, project722 wrote: >> This ELK/Bro combo is turning out to be more of a learning curve than >> I has hoped for. I can get the logs over to elasticsearch and into >> Kibana, but I can only see them on the "Discovery" tab. I save the >> search to use with a visualization, but it wants to do something by >> "count" and its not breaking down the connections in conn.log and >> graphing them like I had hoped for. Here is my logstash conf file. 
>> >> input { >> stdin { } >> file { >> path => "/opt/bro/logs/current/*.log" >> start_position => "beginning" >> } >> } >> >> filter { >> if [message] =~ >> /^(\d+\.\d{6}\s+\S+\s+(?:[\d\.]+|[\w:]+|-)\s+(?:\d+|-)\s+(?:[\d\.]+|[\w:]+|-)\s+(?:\d+|-)\s+\S+\s+\S+\s+\S+\s+\S+\s+[^:]+::\S+\s+[^:]+::\S+\s+\S+(?:\s\S+)*$)/ >> { >> grok{ >> patterns_dir => "/opt/logstash/custom_patterns" >> match => { >> message => "%{291009}" >> } >> add_field => [ "rule_id", "291009" ] >> add_field => [ "Device Type", "IPSIDSDevice" ] >> add_field => [ "Object", "Process" ] >> add_field => [ "Action", "General" ] >> add_field => [ "Status", "Informational" ] >> } >> } >> >> #translate { >> # field => "evt_dstip" >> # destination => "malicious_IP" >> # dictionary_path => '/opt/logstash/maliciousIPV4.yaml' >> #} >> #translate { >> # field => "evt_srcip" >> # destination => "malicious_IP" >> # dictionary_path => '/opt/logstash/maliciousIPV4.yaml' >> #} >> #translate { >> # field => "md5" >> # destination => "maliciousMD5" >> # dictionary_path => '/opt/logstash/maliciousMD5.yaml' >> #} >> #date { >> # match => [ "start_time", "UNIX" ] >> #} >> >> } >> >> output { >> elasticsearch { hosts => ["localhost:9200"] } >> stdout { codec => rubydebug } >> >> In Kibana under the Discover tab I can see my messages from conn.log. >> How can I get this data properly graphed and broken down more like how >> the connection summary emails are broken down? >> >> January 25th 2017, 15:52:57.702 >> >> 1485381116.563095 CN2Wu7l8JEjji3ht3 192.168.100.102 58128 >> 192.168.100.103 161 udp snmp 0.010298 53 53 SF T T 0 Dd 1 81 1 81 >> (empty) >> >> On Wed, Jan 25, 2017 at 3:27 PM, Daniel Guerra >> wrote: >> >>> Hi, >>> >>> Check my docker project. >>> >>> https://hub.docker.com/r/danielguerra/bro-debian-elasticsearch/ [1] >>> >>> The quick way : >>> >>> export DOCKERHOST=":8080" >>> wget >>> >> https://raw.githubusercontent.com/danielguerra69/bro-debian-elasticsearch/master/docker-compose.yml >>> [2] >>> docker-compose pull >>> docker-compose up >>> >>> You can send pcap data with pcap to port 1969 ?nc dockerip 1969 < >>> mypcapfile? >>> >>> After this open your browser to dockerip:5601 for kibana, its >>> preconfigured with some >>> queries and desktops. >>> >>> On 25 Jan 2017, at 14:48, project722 wrote: >>> >>> Thanks All. I am looking into ELK. >>> >>> On Tue, Jan 24, 2017 at 2:44 AM, Kevin Ross >>> wrote: >>> >>> As said before ELK is your best bet. Here is a link that may >>> interest you. The learning curve may be steep but it is worth it in >>> the end (assuming you are putting this together yourself and not a >>> all in one solution that provides it for you) when you can query >>> logs as easily as a google search and visualise. >>> >>> https://www.elastic.co/blog/bro-ids-elastic-stack [3] >>> >>> Also you could use security oniion and it uses ELSA to present these >>> logs although my preference these days because of its easier ability >>> I find to add in new data sources would be ELK (i.e once you >>> understand logstash and parsing logs you can easily parse any log >>> you have to correlate Bro, IDS, network and even host logs). >>> >>> https://github.com/mcholste/elsa [4] >>> http://blog.bro.org/2012/01/monster-logs.html [5] >>> >>> On 21 January 2017 at 11:54, project722 >>> wrote: >>> >>> Got Bro 2.4.1 working on a RHEL 6 system. Can anyone provide >>> suggestions on what I should use as a web GUI for bro? What is the >>> best options out there? NOTE - my version of Bro was compiled from >>> source. 
>>> > > Mod this to your liking and see how it goes: > > ##### > input { > file { > type => "connlog" > path => "/usr/local/bro/spool/bro/conn.log" > sincedb_path => "/var/lib/logstash/.sincedbconn" > } > > file { > type => "ssllog" > path => "/usr/local/bro/spool/bro/ssl.log" > sincedb_path => "/var/lib/logstash/.sincedbssl" > } > } > > filter { > #bro conn.log > if [type] == "connlog" { > if [message] =~ "^#" { > drop { } > } else { > grok { > match => [ "message", > "(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*)))" > ] > } > } > } > > #bro ssl.log > if [type] == "ssllog" { > if [message] =~ "^#" { > drop { } > } else { > grok { > match => [ "message", > "(?(.*?))\t%{DATA:uid}\t%{DATA:src_ip}\t%{DATA:src_port}\t%{DATA:dst_ip}\t%{DATA:dst_port}\t%{DATA:version}\t%{DATA:cipher}\t%{DATA:curve}\t%{DATA:hostname}\t%{DATA:resumed}\t%{DATA:last_alert}\t%{DATA:next_protocol}\t%{DATA:established}\t%{DATA:cert_chain_fuids}\t%{DATA:client_cert_chain_fuids}\t%{DATA:subject}\t%{DATA:issuer}\t%{DATA:client_subject}\t%{DATA:client_issuer}\t%{DATA:validation_status}\t%{DATA:notary.first_seen}\t%{DATA:notary.last_seen}\t%{DATA:notary.times_seen}\t%{DATA:notary.valid}" > ] > } > } > } > #geoip source > geoip { > source => "src_ip" > target => "src_geoip" > } > > #geoip destination > geoip { > source => "dst_ip" > target => "dst_geoip" > } > > mutate { > convert => [ "resp_bytes", "integer" ] > convert => [ "resp_ip_bytes", "integer" ] > convert => [ "orig_bytes", "integer" ] > convert => [ "orig_ip_bytes", "integer" ] > convert => [ "src_port", "integer" ] > convert => [ "dst_port", "integer" ] > gsub => [ > "src_geoip.country_name", "[ ]", "_", > "dst_geoip.country_name", "[ ]", "_", > "proto", "tcp", "TCP", > "proto", "udp", "UDP", > "proto", "icmp", "ICMP" > ] > } > } > > output { > #uncomment below for testing > #stdout { codec => rubydebug } > elasticsearch { } > } > #### > > James > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From jlay at slave-tothe-box.net Thu Jan 26 05:37:33 2017 From: jlay at slave-tothe-box.net (James Lay) Date: Thu, 26 Jan 2017 06:37:33 -0700 Subject: [Bro] Web GUI for Bro? In-Reply-To: References: <4e6f5db643bbed774ccb37816f305981@localhost> Message-ID: <1485437853.2781.0.camel@slave-tothe-box.net> Oh yea that's a lot easier...thanks for that Craig! James On Thu, 2017-01-26 at 07:40 -0500, Craig Pluchinsky wrote: > I started to use the csv filter instead of grok.? Just change the? > delimiter to a literal tab.? Also make sure to not use "." in the > column? > names.? I just copied the bro field names. > > ?? if [type] == "bro_conn" { > ???? csv { > ?????? columns => [? > "ts","uid","orig_h","orig_p","resp_h","resp_p","proto","service","dur > ation","orig_bytes","resp_bytes","conn_state","local_orig","local_res > p","missed_bytes","history","orig_pkts","orig_ip_bytes","resp_pkts"," > resp_ip_bytes","tunnel_parents","peer_descr","orig_cc","resp_cc"? > ] > ?????? separator => "??? " > ???? } > ?? } > > > > ------------------------------- > Craig Pluchinsky > IT Services > Indiana University of Pennsylvania > 724-357-3327 > > > On Wed, 25 Jan 2017, James Lay wrote: > > > On 2017-01-25 14:55, project722 wrote: > >> This ELK/Bro combo is turning out to be more of a learning curve > than > >> I has hoped for. 
I can get the logs over to elasticsearch and into > >> Kibana, but I can only see them on the "Discovery" tab. I save the > >> search to use with a visualization, but it wants to do something > by > >> "count" and its not breaking down the connections in conn.log and > >> graphing them like I had hoped for. Here is my logstash conf file. > >>? > >> input { > >>?? stdin { } > >>?? file { > >>???? path => "/opt/bro/logs/current/*.log" > >>???? start_position => "beginning" > >>?? } > >> } > >>? > >> filter { > >>?? if [message] =~ > >> /^(\d+\.\d{6}\s+\S+\s+(?:[\d\.]+|[\w:]+|-)\s+(?:\d+|- > )\s+(?:[\d\.]+|[\w:]+|-)\s+(?:\d+|- > )\s+\S+\s+\S+\s+\S+\s+\S+\s+[^:]+::\S+\s+[^:]+::\S+\s+\S+(?:\s\S+)*$) > / > >> { > >>???? grok{ > >>?????? patterns_dir => "/opt/logstash/custom_patterns" > >>?????? match => { > >>???????? message => "%{291009}" > >>?????? } > >>?????? add_field => [ "rule_id", "291009" ] > >>?????? add_field => [ "Device Type", "IPSIDSDevice" ] > >>?????? add_field => [ "Object", "Process" ] > >>?????? add_field => [ "Action", "General" ] > >>?????? add_field => [ "Status", "Informational" ] > >>???? } > >>?? } > >> > >>?? #translate { > >>?? #? field => "evt_dstip" > >>?? #? destination => "malicious_IP" > >>?? #?? dictionary_path => '/opt/logstash/maliciousIPV4.yaml' > >>?? #} > >>?? #translate { > >>?? #? field => "evt_srcip" > >>?? #? destination => "malicious_IP" > >>?? #? dictionary_path => '/opt/logstash/maliciousIPV4.yaml' > >>?? #} > >>?? #translate { > >>?? #? field => "md5" > >>?? #? destination => "maliciousMD5" > >>?? #? dictionary_path => '/opt/logstash/maliciousMD5.yaml' > >>?? #} > >>?? #date { > >>?? #? match => [ "start_time", "UNIX" ] > >>?? #} > >>? > >> } > >>? > >> output { > >>?? elasticsearch { hosts => ["localhost:9200"] } > >>?? stdout { codec => rubydebug } > >>? > >> In Kibana under the Discover tab I can see my messages from > conn.log. > >> How can I get this data properly graphed and broken down more like > how > >> the connection summary emails are broken down? > >> > >>??????????????January 25th 2017, 15:52:57.702 > >>? > >> 1485381116.563095 CN2Wu7l8JEjji3ht3 192.168.100.102 58128 > >> 192.168.100.103 161 udp snmp 0.010298 53 53 SF T T 0 Dd 1 81 1 81 > >> (empty) > >>? > >> On Wed, Jan 25, 2017 at 3:27 PM, Daniel Guerra > >> wrote: > >>? > >>> Hi, > >>>? > >>> Check my docker project. > >>>? > >>> https://hub.docker.com/r/danielguerra/bro-debian-elasticsearch/?[ > 1] > >>>? > >>> The quick way : > >>>? > >>> export DOCKERHOST=":8080" > >>> wget > >>>? > >> https://raw.githubusercontent.com/danielguerra69/bro-debian-elasti > csearch/master/docker-compose.yml > >>> [2] > >>> docker-compose pull > >>> docker-compose up > >>>? > >>> You can send pcap data with pcap to port 1969 ?nc dockerip 1969 < > >>> mypcapfile? > >>>? > >>> After this open your browser to dockerip:5601 for kibana, its > >>> preconfigured with some > >>> queries and desktops. > >>>? > >>> On 25 Jan 2017, at 14:48, project722 > wrote: > >>>? > >>> Thanks All. I am looking into ELK. > >>>? > >>> On Tue, Jan 24, 2017 at 2:44 AM, Kevin Ross > >>> wrote: > >>>? > >>> As said before ELK is your best bet. Here is a link that may > >>> interest you. The learning curve may be steep but it is worth it > in > >>> the end (assuming you are putting this together yourself and not > a > >>> all in one solution that provides it for you) when you can query > >>> logs as easily as a google search and visualise. > >>>? > >>> https://www.elastic.co/blog/bro-ids-elastic-stack?[3] > >>>? 
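For the "break it down like the connection summary emails" part, one option that does not involve Kibana at all is to let Bro aggregate the data itself with the SumStats framework. A minimal sketch under a few assumptions: the stream name "conn.resp" and the 5-minute epoch are arbitrary choices of mine, and in a real deployment you would probably Log::write the results to a custom stream rather than print them:

@load base/frameworks/sumstats

event bro_init()
    {
    local r1: SumStats::Reducer = [$stream="conn.resp", $apply=set(SumStats::SUM)];
    SumStats::create([$name="conn-by-responder",
                      $epoch=5min,
                      $reducers=set(r1),
                      $epoch_result(ts: time, key: SumStats::Key, result: SumStats::Result) =
                          {
                          local r = result["conn.resp"];
                          # r$sum is the number of observations for this responder in the epoch.
                          print fmt("%d connections to %s in the last 5 minutes",
                                    double_to_count(r$sum), key$host);
                          }]);
    }

event connection_state_remove(c: connection)
    {
    # One observation per completed connection, keyed by responder address.
    SumStats::observe("conn.resp", [$host=c$id$resp_h], [$num=1]);
    }

On a cluster the epoch results are merged on the manager, so the per-responder counts come out whole even though each worker only sees part of the traffic.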
> >>> Also you could use security oniion and it uses ELSA to present > these > >>> logs although my preference these days because of its easier > ability > >>> I find to add in new data sources would be ELK (i.e once you > >>> understand logstash and parsing logs you can easily parse any log > >>> you have to correlate Bro, IDS, network and even host logs). > >>>? > >>> https://github.com/mcholste/elsa?[4] > >>> http://blog.bro.org/2012/01/monster-logs.html?[5] > >>>? > >>> On 21 January 2017 at 11:54, project722 > >>> wrote: > >>>? > >>> Got Bro 2.4.1 working on a RHEL 6 system. Can anyone provide > >>> suggestions on what I should use as a web GUI for bro? What is > the > >>> best options out there? NOTE - my version of Bro was compiled > from > >>> source. > >>>? > > > > Mod this to your liking and see how it goes: > > > > ##### > > input { > >???????? file { > >???????????????? type => "connlog" > >???????????????? path => "/usr/local/bro/spool/bro/conn.log" > >???????????????? sincedb_path => "/var/lib/logstash/.sincedbconn" > >???????? } > > > >???????? file { > >???????????????? type => "ssllog" > >???????????????? path => "/usr/local/bro/spool/bro/ssl.log" > >???????????????? sincedb_path => "/var/lib/logstash/.sincedbssl" > >???????? } > > } > > > > filter { > >???????? #bro conn.log > >???????? if [type] == "connlog" { > >???????????????? if [message] =~ "^#" { > >???????????????????????? drop { } > >???????????????? } else { > >???????????????????????? grok { > >???????????????????????????????? match => [ "message",? > > > "(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.* > ?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(? > e>(.*?))\t(?(.*?))\t(?(.*?))\t(?(.* > ?))\t(?(.*?))\t(?(.*?))\t(?(.*?)) > \t(?(.*?))\t(?(.*?))\t(?(.*?))\t( > ?(.*?))\t(?(.*?))\t(?(.*?) > )\t(?(.*)))"? > > ] > >???????????????????????? } > >???????????????? } > >???????? } > > > >???????? #bro ssl.log > >???????? if [type] == "ssllog" { > >???????????????? if [message] =~ "^#" { > >???????????????????????? drop { } > >???????????????? } else { > >???????????????????????? grok { > >???????????????????????????????? match => [ "message",? > > > "(?(.*?))\t%{DATA:uid}\t%{DATA:src_ip}\t%{DATA:src_port}\t% > {DATA:dst_ip}\t%{DATA:dst_port}\t%{DATA:version}\t%{DATA:cipher}\t%{D > ATA:curve}\t%{DATA:hostname}\t%{DATA:resumed}\t%{DATA:last_alert}\t%{ > DATA:next_protocol}\t%{DATA:established}\t%{DATA:cert_chain_fuids}\t% > {DATA:client_cert_chain_fuids}\t%{DATA:subject}\t%{DATA:issuer}\t%{DA > TA:client_subject}\t%{DATA:client_issuer}\t%{DATA:validation_status}\ > t%{DATA:notary.first_seen}\t%{DATA:notary.last_seen}\t%{DATA:notary.t > imes_seen}\t%{DATA:notary.valid}"? > > ] > >???????????????????????? } > >???????????????? } > >???????? } > >???????????????? #geoip source > >???????????????? geoip { > >???????????????????????? source => "src_ip" > >???????????????????????? target => "src_geoip" > >???????????????? } > > > >???????????????? #geoip destination > >???????????????? geoip { > >???????????????????????? source => "dst_ip" > >???????????????????????? target => "dst_geoip" > >???????????????? } > > > >???????????????? mutate { > >???????????????????????? convert => [ "resp_bytes", "integer" ] > >???????????????????????? convert => [ "resp_ip_bytes", "integer" ] > >???????????????????????? convert => [ "orig_bytes", "integer" ] > >???????????????????????? convert => [ "orig_ip_bytes", "integer" ] > >???????????????????????? convert => [ "src_port", "integer" ] > >???????????????????????? 
convert => [ "dst_port", "integer" ] > >???????????????????????? gsub => [ > >???????????????????????????????? "src_geoip.country_name", "[ ]", > "_", > >???????????????????????????????? "dst_geoip.country_name", "[ ]", > "_", > >???????????????????????????????? "proto", "tcp", "TCP", > >???????????????????????????????? "proto", "udp", "UDP", > >???????????????????????????????? "proto", "icmp", "ICMP" > >???????????????????????? ] > >???????????????? } > > } > > > > output { > >???????? #uncomment below for testing > >???????? #stdout { codec => rubydebug } > >???????? elasticsearch { } > > } > > #### > > > > James > > _______________________________________________ > > Bro mailing list > > bro at bro-ids.org > > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170126/9d1e675c/attachment-0001.html From espressobeanies at gmail.com Thu Jan 26 07:14:51 2017 From: espressobeanies at gmail.com (Espresso Beanies) Date: Thu, 26 Jan 2017 10:14:51 -0500 Subject: [Bro] Issue with Bro plugins not loading Message-ID: Hi, I'm trying to install several Bro plugins, but when I go to reference the plugins directly in my local.bro file, I get "Fatal error in ...local.bro, line xxx: can't find [manager,proxy,worker] scripts failed." Bro-Pkg manager shows me they're installed and I see the plugin files and locations on my Bro IDS instance. Is there a way I can troubleshoot? One of them is the 'bro-long-connections' from GitHub. Thanks, -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170126/99f29736/attachment.html From jlay at slave-tothe-box.net Thu Jan 26 08:14:20 2017 From: jlay at slave-tothe-box.net (James Lay) Date: Thu, 26 Jan 2017 09:14:20 -0700 Subject: [Bro] Lots of dns_unmatched_msg, dns_unmatched_reply in weird.log In-Reply-To: References: <1485345914.2567.7.camel@slave-tothe-box.net> Message-ID: <1818685e8f2287daa17001b05a5148a4@localhost> Glad you found the source of the issue...nice work! James On 2017-01-25 18:23, Lincy Taylor wrote: > Hello James, > > I finally found the root cause with your provided parameters > running bro. The error was due to the offloading of checksumming to > adapter on my local system while the traffic was captured, which is > already mentioned on bro's website[1]. So many thanks for your help! > > 1. > https://www.bro.org/documentation/faq.html#why-isn-t-bro-producing-the-logs-i-expect-a-note-about-checksums > > Lincy > > Sent with ProtonMail [1] Secure Email. > >> -------- Original Message -------- >> >> Subject: Re: [Bro] Lots of dns_unmatched_msg, dns_unmatched_reply in >> weird.log >> >> Local Time: 2017?1?25? 8:05 ?? >> >> UTC Time: 2017?1?25? ??12?05? 
>> >> From: jlay at slave-tothe-box.net >> >> To: bro at bro.org >> >> On Wed, 2017-01-25 at 03:32 -0500, Lincy Taylor wrote: >> >>> Hello all: >>> >>> I recently found lots of "dns_unmatched_msg" and >>> "dns_unmatched_reply" errors in weird.log of Bro, which likes the >>> following: >>> >>> 1485331604.840044 CSdHx91xFbEKdyo3Pi 172.16.185.11 >>> 40721 8.8.8.8 53 dns_unmatched_reply - F >>> bro >>> >>> 1485331609.712570 Cw4TXS1DvS49mvRtN4 172.16.185.11 >>> 58915 8.8.8.8 53 dns_unmatched_reply - F >>> bro >>> >>> 1485331619.101223 CSdHx91xFbEKdyo3Pi 172.16.185.11 >>> 40721 8.8.8.8 53 dns_unmatched_msg - F >>> bro >>> >>> 1485331619.115208 CGwJfm35oSWSuMdVS6 172.16.185.11 >>> 50308 8.8.8.8 53 dns_unmatched_reply - F >>> bro >>> >>> 1485331619.115208 Cw4TXS1DvS49mvRtN4 172.16.185.11 >>> 58915 8.8.8.8 53 dns_unmatched_msg - F >>> bro >>> >>> 1485331619.115208 CGwJfm35oSWSuMdVS6 172.16.185.11 >>> 50308 8.8.8.8 53 dns_unmatched_msg - F >>> bro >>> >>> I used tcpdump to create a traffic dump of several dns queries >>> made by dig on ubuntu to 8.8.8.8 and analyzed by "bro -r", the >>> errors are still there in weird.log. The errors seems to be >>> related to an unmatch of query id of query and response messages >>> according to snippet in "share/bro/base/protocols/dns/main.bro". >>> But I found the query ids are consistent with each of DNS query >>> and response by tracing the traffic dump in wireshark. >>> >>> Has anyone experienced the same issue before? >>> >>> I attached the log files and pcap file within this message, please >>> help me to find out the root cause. Thank you! >>> >>> Sent with ProtonMail [1] Secure Email. >>> >>> _______________________________________________ >>> Bro mailing list >>> bro at bro-ids.org >>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >> >> Make sure you set your local net to include the 172 net. As a test >> on the pcap I ran: >> >> bro -C -r pcaps/dns_8.8.8.8.pcap local "Site::local_nets += { >> 172.16.0.0/12 }" >> >> This gets me conn and dns, but no weird log. >> >> James > > > > Links: > ------ > [1] https://protonmail.com > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From pssunu6 at gmail.com Thu Jan 26 09:53:11 2017 From: pssunu6 at gmail.com (ps sunu) Date: Thu, 26 Jan 2017 23:23:11 +0530 Subject: [Bro] intel log fields adding and processing In-Reply-To: References: Message-ID: Thanks i solved the problem On Thu, Jan 26, 2017 at 2:09 PM, ps sunu wrote: > Thanks > > Now i need to write the if condition output into Intel.log > category field which i have added in intel.log > > my latest code > > > @load frameworks/intel/seen > > export { > > redef Intel::read_files += { > fmt("%s/intel-1.dat", @DIR) > }; > > redef record Intel::Info += { > category: string &optional &log; > attribute: string &log &optional; > > > }; > } > > event Intel::log_intel (rec: Intel::Info) > { > > if ( rec$seen$where == HTTP::IN_HOST_HEADER ) > { > print "True"; > } > else > { > print "False "; > } > print "rec$seen$where is", rec$seen$where; > > > } > > I need if condition True string into intel.log category field its > possible ? 
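Whatever ends up in the new category field, the handler first has to cope with Info records that carry no $seen at all. In particular, doing a second Log::write(Intel::LOG, ...) from inside Intel::log_intel creates exactly such a record, and when the log event fires again for it, an unguarded rec$seen$where is what raises the "field value missing [rec$seen$where]" error that appears a little later in this digest. A minimal guarded sketch (it assumes frameworks/intel/seen is loaded, which the script above already does; the printed text is just a placeholder):

event Intel::log_intel(rec: Intel::Info)
    {
    # Guard the optional pieces before touching them.
    if ( rec?$seen && rec$seen?$where && rec$seen$where == HTTP::IN_HOST_HEADER )
        print "intel hit seen in the HTTP Host header";
    }

As far as I can tell, by the time log_intel fires the record has already been handed to the log writer, so mutating rec here does not change the line that lands in intel.log, which is presumably why the extra Log::write was attempted in the first place. Newer releases of the intel framework grew a hook for extending the match record before it is written, if I recall correctly; if upgrading is an option, that is the cleaner route.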
> > http://try.bro.org/#/trybro/saved/118899 > > > > Regards, > Sunu > > On Thu, Jan 26, 2017 at 1:35 AM, Azoff, Justin S > wrote: > >> >> > On Jan 25, 2017, at 2:59 PM, ps sunu wrote: >> > >> > Hi, >> > I have a script which will add one field in >> intel.log, that part is working >> > now i want read the output from intel.log seen.where field example >> if seen.where is HTTP::IN_HOST_HEADER and i need to write "itsOk" into my >> intel.log new field >> > >> > the problem is i am not able to get seen.where field output >> > >> >> The main issue is that the log_intel event is called with a Intel::Info, >> not an Intel::Seen. >> >> seen.where is the representation of the info record$seen$where field, so >> you need to do something like this: >> >> event Intel::log_intel (rec: Intel::Info) >> { >> print "rec$seen$where is", rec$seen$where; >> } >> >> http://try.bro.org/#/trybro/saved/118697 >> >> >> >> -- >> - Justin Azoff >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170126/36b88915/attachment.html From jlay at slave-tothe-box.net Thu Jan 26 09:56:07 2017 From: jlay at slave-tothe-box.net (James Lay) Date: Thu, 26 Jan 2017 10:56:07 -0700 Subject: [Bro] intel log fields adding and processing In-Reply-To: References: Message-ID: <5a949d1f31cacdee1410f311fa606460@localhost> Care to share the completed script? James On 2017-01-26 10:53, ps sunu wrote: > Thanks i solved the problem > > On Thu, Jan 26, 2017 at 2:09 PM, ps sunu wrote: > >> Thanks >> >> Now i need to write the if condition output into >> Intel.log category field which i have added in intel.log >> >> my latest code >> >> @load frameworks/intel/seen >> >> export { >> >> redef Intel::read_files += { >> fmt("%s/intel-1.dat", @DIR) >> }; >> >> redef record Intel::Info += { >> category: string &optional &log; >> attribute: string &log &optional; >> >> }; >> } >> >> event Intel::log_intel (rec: Intel::Info) >> { >> >> if ( rec$seen$where == HTTP::IN_HOST_HEADER ) >> { >> print "True"; >> } >> else >> { >> print "False "; >> } >> >> print "rec$seen$where is", rec$seen$where; >> >> } >> >> I need if condition True string into intel.log category field >> its possible ? >> >> http://try.bro.org/#/trybro/saved/118899 [2] >> >> Regards, >> Sunu >> >> On Thu, Jan 26, 2017 at 1:35 AM, Azoff, Justin S >> wrote: >> >>>> On Jan 25, 2017, at 2:59 PM, ps sunu wrote: >>>> >>>> Hi, >>>> I have a script which will add one field >>> in intel.log, that part is working >>>> now i want read the output from intel.log seen.where field >>> example if seen.where is HTTP::IN_HOST_HEADER and i need to >>> write "itsOk" into my intel.log new field >>>> >>>> the problem is i am not able to get seen.where field >>> output >>>> >>> >>> The main issue is that the log_intel event is called with a >>> Intel::Info, not an Intel::Seen. 
>>> >>> seen.where is the representation of the info record$seen$where >>> field, so you need to do something like this: >>> >>> event Intel::log_intel (rec: Intel::Info) >>> { >>> print "rec$seen$where is", rec$seen$where; >>> } >>> >>> http://try.bro.org/#/trybro/saved/118697 [1] >>> >>> -- >>> - Justin Azoff > > > > Links: > ------ > [1] http://try.bro.org/#/trybro/saved/118697 > [2] http://try.bro.org/#/trybro/saved/118899 > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From pssunu6 at gmail.com Thu Jan 26 09:59:14 2017 From: pssunu6 at gmail.com (ps sunu) Date: Thu, 26 Jan 2017 23:29:14 +0530 Subject: [Bro] field value missing [rec$seen$where] error Message-ID: Hi, event Intel::log_intel (rec: Intel::Info) { error line ---> if ( rec$seen$where == HTTP::IN_HOST_HEADER ) { Log::write(Intel::LOG,[$ts=network_time(),$test=fmt("host header"),$test1=fmt("ihttp") ]); } my above code is running and generating intel.log but its giving below error anythink i am missing ? line 21: field value missing [rec$seen$where] Regards, Sunu -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170126/f65834b4/attachment-0001.html From Ben.McDowall at spark.co.nz Thu Jan 26 11:01:11 2017 From: Ben.McDowall at spark.co.nz (Ben McDowall) Date: Thu, 26 Jan 2017 19:01:11 +0000 Subject: [Bro] Adding HTTP URL to Threat Intel Message-ID: I have a scenario where for TOR related IPs its important I understand WHERE they went As an example I want to know if a TOR IP accessed I care if it accessed http://mycompany.com/webmail/mail/0,12323123,123123 I don't care if it accessed http://mycompany.com/login I know TOR nodes will always try and access our services to poke around etc but I really care if someone logs into an account successfully There is two ways I thought of doing this 1: Enrich the intel.log with http URL information (pump into SIEM for further analysis) 2: Write a custom bro script to do additional analysis. Anyone tackled a similar challenge and can share? Cheers Kind Regards ________________________________ [spark] Ben McDowall Technical Lead Spark Security Incident Response Team (S-SIRT) Spark Platforms T 027 469 5887 (extn 96239) E Ben.McDowall at spark.co.nz Level 8, Mayoral Drive Building 31 Airedale Street Private Bag 92028, Auckland 1010 www.spark.co.nz [Spark @ Twitter] [Spark @ Facebook] [Spark @ YouTube] ________________________________ This communication, including any attachments, is confidential. If you are not the intended recipient, you should not read it - please contact me immediately, destroy it, and do not copy or use any part of this communication or disclose anything about it. Thank you. Please note that this communication does not designate an information system for the purposes of the Electronic Transactions Act 2002. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170126/e8ee1509/attachment-0001.html -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 20987 bytes Desc: image001.png Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170126/e8ee1509/attachment-0005.bin -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image002.png Type: image/png Size: 167 bytes Desc: image002.png Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170126/e8ee1509/attachment-0006.bin -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 656 bytes Desc: image003.png Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170126/e8ee1509/attachment-0007.bin -------------- next part -------------- A non-text attachment was scrubbed... Name: image004.png Type: image/png Size: 499 bytes Desc: image004.png Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170126/e8ee1509/attachment-0008.bin -------------- next part -------------- A non-text attachment was scrubbed... Name: image005.png Type: image/png Size: 794 bytes Desc: image005.png Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170126/e8ee1509/attachment-0009.bin From seth at icir.org Thu Jan 26 11:09:11 2017 From: seth at icir.org (Seth Hall) Date: Thu, 26 Jan 2017 14:09:11 -0500 Subject: [Bro] field value missing [rec$seen$where] error In-Reply-To: References: Message-ID: <18A1940A-E51F-404B-A464-D968C1407822@icir.org> > On Jan 26, 2017, at 12:59 PM, ps sunu wrote: > > error line ---> if ( rec$seen$where == HTTP::IN_HOST_HEADER ) It's just rec$seen .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From bro at pingtrip.com Fri Jan 27 20:49:34 2017 From: bro at pingtrip.com (Dave Crawford) Date: Fri, 27 Jan 2017 23:49:34 -0500 Subject: [Bro] ActiveHTTP Message-ID: <92596923-DDD9-4879-8E38-5776154B4ADF@pingtrip.com> I?m testing a new script in 2.5 that uses ActiveHTTP but I'm unable to retrieve the response. With a simple test script of: when ( local resp = ActiveHTTP::request([$url="https://www.google.com/"]) ) { print ?Inside the Matrix." } I can see the ActiveHTTP request was successful based on the temporary files created: -rw-r--r-- 1 dave wheel 162 Jan 27 23:43 /tmp/bro-activehttp-HJKhXt6UYXi_body -rw-r--r-- 1 dave wheel 163 Jan 27 23:43 /tmp/bro-activehttp-HJKhXt6UYXi_headers But the print statement within the when block never executes. Any ideas what I?m missing? -Dave -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170127/b09fe9bb/attachment.html From bro at pingtrip.com Sat Jan 28 06:15:25 2017 From: bro at pingtrip.com (Dave Crawford) Date: Sat, 28 Jan 2017 09:15:25 -0500 Subject: [Bro] ActiveHTTP In-Reply-To: <92596923-DDD9-4879-8E38-5776154B4ADF@pingtrip.com> References: <92596923-DDD9-4879-8E38-5776154B4ADF@pingtrip.com> Message-ID: <50DE7D75-FA15-46DA-A6A8-FD13B6DDBF90@pingtrip.com> I added simple print statements in base/utils/active-http.bro and it doesn?t appear to be entering it?s when() block either. These are the two print statements I added: print "Entering the ActiveHTTP::Request when() block"; return when ( local result = Exec::run([$cmd=cmd, $stdin=stdin_data, $read_files=set(bodyfile, headersfile)]) ) { print "In ActiveHTTP::Request when() block"; # If there is no response line then nothing else will work either. And the second print doesn?t execute: $ bro -r test.pcap local ../test.bro Entering the ActiveHTTP::Request when() block... I have ?exit_only_after_terminate? set to true so it just hangs at this point until I ctrl-c and I see the tmp files deleted. 
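Since the digest cuts off here, a sketch of what such a script can look like, with the assumptions spelled out: the endpoint URL below is a placeholder, to_json() is the script-land helper from base/utils/json in the releases I have checked (newer versions move it into the core with a slightly different signature), and the Content-Type header is passed through ActiveHTTP's addl_curl_args field because the Request record has no dedicated header field there:

@load base/frameworks/notice
@load base/utils/active-http
@load base/utils/json

# Placeholder endpoint -- point this at the real collector.
const notice_endpoint = "http://127.0.0.1:8080/notices" &redef;

function post_notice(n: Notice::Info)
    {
    local req: ActiveHTTP::Request = [$url=notice_endpoint,
                                      $method="POST",
                                      $client_data=to_json(n, T),
                                      $addl_curl_args="-H \"Content-Type: application/json\""];

    when ( local resp = ActiveHTTP::request(req) )
        {
        if ( resp$code != 200 )
            Reporter::warning(fmt("notice POST returned %d", resp$code));
        }
    }

hook Notice::policy(n: Notice::Info)
    {
    # Gate on n$note here if only specific notice types should be shipped.
    post_notice(n);
    }

Tying back to the ActiveHTTP thread above: when exercising this against a pcap rather than live traffic, run with --pseudo-realtime and redef exit_only_after_terminate=T, otherwise Bro can exit before the when() body ever runs.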
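Circling back to the files.log question from earlier in this digest: rather than editing base/frameworks/files/main.bro in place, the usual pattern is to extend Files::Info from a site script and fill the new columns when the file is tied to a connection. A minimal sketch; it guards for the case where the base scripts have not yet created f$info, and note that a file seen over several connections will keep the endpoints of the last one observed, alongside the tx_hosts/rx_hosts/conn_uids columns files.log already has:

redef record Files::Info += {
    orig_h: addr &optional &log;
    orig_p: port &optional &log;
    resp_h: addr &optional &log;
    resp_p: port &optional &log;
};

# Negative priority so the base scripts set up f$info first.
event file_over_new_connection(f: fa_file, c: connection, is_orig: bool) &priority=-5
    {
    if ( ! f?$info )
        return;

    f$info$orig_h = c$id$orig_h;
    f$info$orig_p = c$id$orig_p;
    f$info$resp_h = c$id$resp_h;
    f$info$resp_p = c$id$resp_p;
    }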
-Dave > On Jan 27, 2017, at 11:49 PM, Dave Crawford wrote: > > I?m testing a new script in 2.5 that uses ActiveHTTP but I'm unable to retrieve the response. With a simple test script of: > > when ( local resp = ActiveHTTP::request([$url="https://www.google.com /"]) ) > { > print ?Inside the Matrix." > } > > I can see the ActiveHTTP request was successful based on the temporary files created: > > -rw-r--r-- 1 dave wheel 162 Jan 27 23:43 /tmp/bro-activehttp-HJKhXt6UYXi_body > -rw-r--r-- 1 dave wheel 163 Jan 27 23:43 /tmp/bro-activehttp-HJKhXt6UYXi_headers > > But the print statement within the when block never executes. Any ideas what I?m missing? > > -Dave > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170128/6d1b34f4/attachment.html From jazoff at illinois.edu Sat Jan 28 11:28:57 2017 From: jazoff at illinois.edu (Azoff, Justin S) Date: Sat, 28 Jan 2017 19:28:57 +0000 Subject: [Bro] ActiveHTTP In-Reply-To: <92596923-DDD9-4879-8E38-5776154B4ADF@pingtrip.com> References: <92596923-DDD9-4879-8E38-5776154B4ADF@pingtrip.com> Message-ID: > On Jan 27, 2017, at 11:49 PM, Dave Crawford wrote: > > I?m testing a new script in 2.5 that uses ActiveHTTP but I'm unable to retrieve the response. With a simple test script of: > > when ( local resp = ActiveHTTP::request([$url="https://www.google.com/"]) ) > { > print ?Inside the Matrix." > } > > I can see the ActiveHTTP request was successful based on the temporary files created: > > -rw-r--r-- 1 dave wheel 162 Jan 27 23:43 /tmp/bro-activehttp-HJKhXt6UYXi_body > -rw-r--r-- 1 dave wheel 163 Jan 27 23:43 /tmp/bro-activehttp-HJKhXt6UYXi_headers > > But the print statement within the when block never executes. Any ideas what I?m missing? > > -Dave If you still have the temp files it means something went wrong along the way. Is bro writing out a reporter.log? -- - Justin Azoff From bro at pingtrip.com Sat Jan 28 11:32:40 2017 From: bro at pingtrip.com (Dave Crawford) Date: Sat, 28 Jan 2017 14:32:40 -0500 Subject: [Bro] ActiveHTTP In-Reply-To: References: <92596923-DDD9-4879-8E38-5776154B4ADF@pingtrip.com> Message-ID: Hi Justin, I responded with a follow-up to my original email and temp files are there because I have ?exit_only_after_terminate? set to true, so it pauses until I ctrl-c and the tmp files are then deleted. -Dave > On Jan 28, 2017, at 2:28 PM, Azoff, Justin S wrote: > >> >> On Jan 27, 2017, at 11:49 PM, Dave Crawford wrote: >> >> I?m testing a new script in 2.5 that uses ActiveHTTP but I'm unable to retrieve the response. With a simple test script of: >> >> when ( local resp = ActiveHTTP::request([$url="https://www.google.com/"]) ) >> { >> print ?Inside the Matrix." >> } >> >> I can see the ActiveHTTP request was successful based on the temporary files created: >> >> -rw-r--r-- 1 dave wheel 162 Jan 27 23:43 /tmp/bro-activehttp-HJKhXt6UYXi_body >> -rw-r--r-- 1 dave wheel 163 Jan 27 23:43 /tmp/bro-activehttp-HJKhXt6UYXi_headers >> >> But the print statement within the when block never executes. Any ideas what I?m missing? >> >> -Dave > > If you still have the temp files it means something went wrong along the way. Is bro writing out a reporter.log? > > > -- > - Justin Azoff -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170128/48ba1b8d/attachment-0001.html From jazoff at illinois.edu Sat Jan 28 11:39:26 2017 From: jazoff at illinois.edu (Azoff, Justin S) Date: Sat, 28 Jan 2017 19:39:26 +0000 Subject: [Bro] ActiveHTTP In-Reply-To: References: <92596923-DDD9-4879-8E38-5776154B4ADF@pingtrip.com> Message-ID: <26FADE91-D154-4C4E-A8F0-A5D50DC5F977@illinois.edu> > On Jan 28, 2017, at 2:32 PM, Dave Crawford wrote: > > Hi Justin, > > I responded with a follow-up to my original email and temp files are there because I have ?exit_only_after_terminate? set to true, so it pauses until I ctrl-c and the tmp files are then deleted. > > -Dave No, the files are there because something went wrong along the way. Is bro writing out a reporter.log? The code normally works fine, something is broken in your environment. $ cat b.bro redef exit_only_after_terminate=T; when ( local resp = ActiveHTTP::request([$url="https://www.google.com/"]) ) { print resp; terminate(); } $ bro --version bro version 2.5 $ bro b.bro [code=200, msg=OK\x0d, body= -- - Justin Azoff From bro at pingtrip.com Sat Jan 28 11:53:45 2017 From: bro at pingtrip.com (Dave Crawford) Date: Sat, 28 Jan 2017 14:53:45 -0500 Subject: [Bro] ActiveHTTP In-Reply-To: <26FADE91-D154-4C4E-A8F0-A5D50DC5F977@illinois.edu> References: <92596923-DDD9-4879-8E38-5776154B4ADF@pingtrip.com> <26FADE91-D154-4C4E-A8F0-A5D50DC5F977@illinois.edu> Message-ID: <6391AA8D-067C-4239-9C58-0E3BCE143A43@pingtrip.com> Interestingly your test script works as expected when run as: bro b.bro But if I pass it a PCAP it exhibits the same condition where the when loop isn?t entered: bro -r test.pcap b.bro This is the test PCAP I was testing with: https://github.com/LiamRandall/BroTraining-Montreal/raw/master/signature-framework/1-mswab_yayih/Mswab_Yayih_FD1BE09E499E8E380424B3835FC973A8_2012-03.pcap -Dave > On Jan 28, 2017, at 2:39 PM, Azoff, Justin S wrote: > > >> On Jan 28, 2017, at 2:32 PM, Dave Crawford wrote: >> >> Hi Justin, >> >> I responded with a follow-up to my original email and temp files are there because I have ?exit_only_after_terminate? set to true, so it pauses until I ctrl-c and the tmp files are then deleted. >> >> -Dave > > No, the files are there because something went wrong along the way. Is bro writing out a reporter.log? > > The code normally works fine, something is broken in your environment. > > $ cat b.bro > redef exit_only_after_terminate=T; > when ( local resp = ActiveHTTP::request([$url="https://www.google.com/"]) ) > { > print resp; > terminate(); > } > $ bro --version > bro version 2.5 > $ bro b.bro > [code=200, msg=OK\x0d, body= > > > -- > - Justin Azoff > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170128/fd5ac4eb/attachment.html From fatema.bannatwala at gmail.com Sun Jan 29 08:43:38 2017 From: fatema.bannatwala at gmail.com (fatema bannatwala) Date: Sun, 29 Jan 2017 11:43:38 -0500 Subject: [Bro] intel.log file stops getting generated. In-Reply-To: References: <04F8FE7A-37A5-4B83-8997-BF679705A5BA@illinois.edu> <0C88FC7B-4E32-4C63-B7F9-2428558924AF@illinois.edu> <374E736C-1E3A-4564-8942-DCDF2F4DF470@illinois.edu> <9D746B5D-EFD7-44DA-9175-F20916849446@illinois.edu> <08A108BA-BAE5-4C85-A4B3-206B507932A0@illinois.edu> Message-ID: I think I know what was causing the intel log not to get generated. 
As suggested by Justin, I disabled the scan.bro, and the sensors looks in pretty stable state with 35-40% memory usage overall. I realized (and tested) that when the following lines are enabled in local.bro, the intel.log stops getting generated after a day or so: @load frameworks/intel/do_expire redef Intel::item_expiration = 1day; And when I comment out the above lines, the intel.log doesn't have any problem, and starts getting generated. I have been testing it for past two days and verified, with scan.bro disabled and sensors in pretty good state in terms of resource usage. Hence, my hypothesis is, something is not working correctly, that is causing the intel feeds to expire in 1day and Bro no longer has valid intel feeds in memory to compare it with traffic and hence causing no intel logs getting generated. We are pulling down the feeds every day around 6:45am in morning in the bro feed dir. I was thinking that if the feeds are not getting updated (i.e if the feeds are same as they were before pulling), then it might cause all the old feeds (longer than 1 day) to expire and hence Bro not generating intel.log. But I compared the old feeds with new feeds, and verified that every day some IPs get added and some get removed. Hence every day the feeds get modified and don't remain the same. I will still try to troubleshoot the issue, but for time being I have disabled the do_expire script so that intel.log file is generated. Anyone observed this kind of issue/behavior? Thanks, Fatema. On Wed, Jan 25, 2017 at 6:16 PM, fatema bannatwala < fatema.bannatwala at gmail.com> wrote: > Great! glad to know that I am not the only one dealing with this glitch in > scan-NG :-) > Now it totally makes sense, as I was thinking that in our case darknet and > un-allocated subnets are same > but will have to be careful, as you mentioned, when un-allocated subnets > can get assigned without us knowing! > > I knew that there were two upgraded scan scripts available other than the > one that gets ship with Bro by default, > one that you wrote scan.bro, and another scan-NG script. > > Hence was thinking to migrate to use one of those, and stumbled across > this darknet and allocated net defining issue. > (There was no way I could be able to define the complete list of allocated > subnets in scan-NG config, without missing anything :-) , > at that time I though it's required to define the allocated subnets, as > there's a comment in scan-config.bro that reads like: > "####Important to configure for Landmine detection > #### if subnet_feed is empty then LandMine detection wont work " > hence thought of tweaking that setting to rather define darknet, but > never got around to it) > > Great to know that there's already some code written by you that works > around this issue! > Thanks a ton for all the explanation and link to your scan script, great > help! > will go through it and see if I can get it up and running in our cluster > :-) > > Thanks Justin! > > > > On Wed, Jan 25, 2017 at 5:10 PM, Azoff, Justin S > wrote: > >> >> > On Jan 25, 2017, at 4:43 PM, fatema bannatwala < >> fatema.bannatwala at gmail.com> wrote: >> > >> > Alrighty, yeah was looking into how to configure the script according >> to the environment. >> > It appears that we have to define the list of allocated subnets in the >> network, >> > as landmine works on watching connections which are not in allocated >> subnets. 
>> > >> > Defining the allocated subnets is a pain, have a whole lot list of >> subnets that are allocated and >> > have just couple of subnets that constitute the darknet, hence was >> tweaking around the scripts to change that setting >> > from defining allocated subnets to rather defining un-allocated >> subnets, which is much easier. >> >> That part is optional(but extremely useful). I'm glad you brought this >> up, the darknet configuration problem is something I've been thinking about >> how to fix: >> >> * Some people define darknet as NOT allocated. >> * Some people know exactly which subnets are dark. >> >> I did write a version of the darknet code that auto-tunes itself based on >> allocated subnets, it's part of my scan code: >> >> https://gist.github.com/JustinAzoff/80f97af4f4fbb91ae26492b919a50434 >> >> One can let it run for a while, and then do a >> >> broctl print Site::used_address_space manager >> >> to dump out what it figures out as active, and then put it in a file that >> does >> >> @load ./dark-nets >> redef Site::used_address_space = { >> ... >> } >> >> It's not perfect but it's a start. broker with its persistent data store >> support may be what is needed to make it more useful. >> >> The only issue is it doesn't support something like a honey net that does >> technically exist: the auto tuning code will flag it as an allocated >> subnet. I need to work out how it should be overridden in cases like that. >> >> Aside from the auto detection the function just comes down to >> >> return (a in local_nets && a !in used_address_space); >> >> In your case you want this instead >> >> return (a in dark_address_space); >> >> so I think the simplest thing that may possibly work for everyone is >> something like >> >> global dark_address_space: set[subnet] &redef; >> >> and change the is_darknet logic to be >> >> >> if(|dark_address_space|) >> return (a in dark_address_space); >> else >> return (a in local_nets && a !in used_address_space); >> >> Or maybe just >> >> return (a in local_nets && (a in dark_address_space || a !in >> used_address_space); >> >> but I could see a different user wanting this instead: >> >> return (a in local_nets && a in dark_address_space && a !in >> used_address_space); >> >> for the use case of "dark_address_space is my darknet subnets, but >> something may be allocated without us knowing, so double check!" >> >> I haven't quite figured this out yet.. Maybe the answer is that there >> isn't a one size fits all implementation and I just need to have 4 >> is_darknet functions depending on how people want it to work: >> >> >> return (a in dark_address_space); #mode=darknet >> >> return (a in local_nets && a !in used_address_space); >> #mode=not_allocated >> >> return (a in local_nets && (a in dark_address_space || a !in >> used_address_space); #mode=darknet_or_not_allocated >> >> return (a in local_nets && a in dark_address_space && a !in >> used_address_space); #mode=darknet_and_not_allocated >> >> actually, now that I finally wrote all this out, I see that it's just the >> 4 combinations of 2 boolean flags. >> >> -- >> - Justin Azoff >> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170129/89c8a722/attachment.html From jan.grashoefer at gmail.com Sun Jan 29 09:41:34 2017 From: jan.grashoefer at gmail.com (=?UTF-8?Q?Jan_Grash=c3=b6fer?=) Date: Sun, 29 Jan 2017 18:41:34 +0100 Subject: [Bro] ActiveHTTP In-Reply-To: <6391AA8D-067C-4239-9C58-0E3BCE143A43@pingtrip.com> References: <92596923-DDD9-4879-8E38-5776154B4ADF@pingtrip.com> <26FADE91-D154-4C4E-A8F0-A5D50DC5F977@illinois.edu> <6391AA8D-067C-4239-9C58-0E3BCE143A43@pingtrip.com> Message-ID: <3fa2c498-1cc9-d252-e542-544d52e1330f@gmail.com> Hi Dave, > But if I pass it a PCAP it exhibits the same condition where the when loop isn?t entered: > > bro -r test.pcap b.bro my guess would be that reading a pcap causes timing problems. Have you tried processing the pcap using --pseudo-realtime? Jan From jan.grashoefer at gmail.com Sun Jan 29 09:58:20 2017 From: jan.grashoefer at gmail.com (=?UTF-8?Q?Jan_Grash=c3=b6fer?=) Date: Sun, 29 Jan 2017 18:58:20 +0100 Subject: [Bro] intel.log file stops getting generated. In-Reply-To: References: <0C88FC7B-4E32-4C63-B7F9-2428558924AF@illinois.edu> <374E736C-1E3A-4564-8942-DCDF2F4DF470@illinois.edu> <9D746B5D-EFD7-44DA-9175-F20916849446@illinois.edu> <08A108BA-BAE5-4C85-A4B3-206B507932A0@illinois.edu> Message-ID: <010f287e-17d8-bb8b-a4c4-2bb143b46fa5@gmail.com> > We are pulling down the feeds every day around 6:45am in morning in the bro > feed dir. > I was thinking that if the feeds are not getting updated > (i.e if the feeds are same as they were before pulling), then it might > cause all the old feeds (longer than 1 day) to expire and hence > Bro not generating intel.log. That is how it is supposed to work. Updating the feed files requires atomic operations like "mv". How do you pull the feeds? > I will still try to troubleshoot the issue, but for time being I have > disabled the do_expire script so that intel.log file is generated. For debugging a good start might be to test the three cases: 1. "Old" indicators that should have been expired -> no hit 2. Readded indicators that have already been added -> hit (again) 3. "New" indicators that were added the first time -> hit Further it would be good to know if you can reproduce the same issue on a smaller time scale. Jan From bro at pingtrip.com Sun Jan 29 14:37:30 2017 From: bro at pingtrip.com (Dave Crawford) Date: Sun, 29 Jan 2017 17:37:30 -0500 Subject: [Bro] ActiveHTTP In-Reply-To: <3fa2c498-1cc9-d252-e542-544d52e1330f@gmail.com> References: <92596923-DDD9-4879-8E38-5776154B4ADF@pingtrip.com> <26FADE91-D154-4C4E-A8F0-A5D50DC5F977@illinois.edu> <6391AA8D-067C-4239-9C58-0E3BCE143A43@pingtrip.com> <3fa2c498-1cc9-d252-e542-544d52e1330f@gmail.com> Message-ID: <9E6615FC-F5C8-48AF-8BEF-B2883F52757D@pingtrip.com> I tried with ?pseudo-realtime as well as creating a new PCAP to test with but it still exhibits the same behavior. ActiveHTTP successfully makes the request, and receives a response based other the contents of the temp files, but the when() block is never executed. The reporter.log only has an event for the termination: #types time enum string string 1485725443.690539 Reporter::INFO received termination signal (empty) Is anyone able to re-create the same issue or is this limited to my environment? -Dave > On Jan 29, 2017, at 12:41 PM, Jan Grash?fer wrote: > > Hi Dave, > >> But if I pass it a PCAP it exhibits the same condition where the when loop isn?t entered: >> >> bro -r test.pcap b.bro > > my guess would be that reading a pcap causes timing problems. 
Have you > tried processing the pcap using --pseudo-realtime? > > Jan > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170129/ce8e48a4/attachment.html From karol at babioch.de Mon Jan 30 06:34:53 2017 From: karol at babioch.de (Karol Babioch) Date: Mon, 30 Jan 2017 15:34:53 +0100 Subject: [Bro] Getting SSL events into Python Message-ID: <181a0dd9-aec3-700f-92df-6dda30da1e66@babioch.de> Hi, I'm currently researching SSL/TLS handshakes and want to process several events Bro provides with the SSL plugin. I've installed Bro along with broccoli and broccoli-python and the "broping" example (from the test directory) is working just fine. For each "ping" event I sent to Bro, a "pong" is received and processed in my Python script. However, in case of the SSL my callbacks are never executed. The most simplified version looks something like this: > #! /usr/bin/env python > > from broccoli import * > > @event > def ssl_established(c): > print('established') > > bc = Connection("127.0.0.1:47760") > > while True: > bc.processInput() To my understanding I don't even have to load the SSL plugin, since it resides within "base", but nevertheless my local.bro contains the following: > @load broping > @load base/protocols/ssl When starting Bro and executing the Python script mentioned above, nothing happens, even if SSL traffic is going through the interface (and/or coming from a recorded pcap). I've also tried to register callbacks for various other SSL related events (ssl_client_hello, ssl_server_hello, etc.), but in no case were my callbacks invoked. The only difference to the "broping.py" from the examples, is that I'm not sending any events, but just want to receive them (hence I'm calling processInput() regularly). What am I missing here? Do I somehow need to enable the SSL functionality within Bro? How can I further debug the problem? Any help is very much appreciated, since I've spent a fair amount of time on this already, with no real progress. Thank you very much! Best regards, Karol Babioch -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170130/44105c64/attachment.bin From bro at pingtrip.com Mon Jan 30 09:34:31 2017 From: bro at pingtrip.com (Dave Crawford) Date: Mon, 30 Jan 2017 12:34:31 -0500 Subject: [Bro] ActiveHTTP In-Reply-To: <9E6615FC-F5C8-48AF-8BEF-B2883F52757D@pingtrip.com> References: <92596923-DDD9-4879-8E38-5776154B4ADF@pingtrip.com> <26FADE91-D154-4C4E-A8F0-A5D50DC5F977@illinois.edu> <6391AA8D-067C-4239-9C58-0E3BCE143A43@pingtrip.com> <3fa2c498-1cc9-d252-e542-544d52e1330f@gmail.com> <9E6615FC-F5C8-48AF-8BEF-B2883F52757D@pingtrip.com> Message-ID: I?ve been able to test this in another environment (Debian 8.7 x64) and unlike OS X where the ActiveHTTP conducts a successful request but then doesn?t enter the when{} block, on Debian it errors with the following written to reporter.log: $ bro --version bro version 2.5-30 $ bro b.bro 0.000000 Reporter::ERROR curl -s -g -o "/tmp/bro-activehttp-XMayZ2GFnB6_body" -D "/tmp/bro-activehttp-XMayZ2GFnB6_headers" -X "GET" -m 60 "https://www.google.com/" && touch /tmp/bro-activehttp-XMayZ2GFnB6_body |/Input::READER_RAW: Child process exited with non-zero return code 127 (empty) 0.000000 Reporter::WARNING Stream vqz7bJcG1Pg is already queued for removal. Ignoring remove. (empty) 0.000000 Reporter::ERROR /tmp/bro-activehttp-XMayZ2GFnB6_body/Input::READER_RAW: Init: cannot open /tmp/bro-activehttp-XMayZ2GFnB6_body (empty) 0.000000 Reporter::ERROR /tmp/bro-activehttp-XMayZ2GFnB6_body/Input::READER_RAW: Init failed (empty) 0.000000 Reporter::ERROR /tmp/bro-activehttp-XMayZ2GFnB6_body/Input::READER_RAW: terminating thread (empty) 0.000000 Reporter::ERROR /tmp/bro-activehttp-XMayZ2GFnB6_headers/Input::READER_RAW: Init: cannot open /tmp/bro-activehttp-XMayZ2GFnB6_headers (empty) 0.000000 Reporter::ERROR /tmp/bro-activehttp-XMayZ2GFnB6_headers/Input::READER_RAW: Init failed (empty) 0.000000 Reporter::ERROR /tmp/bro-activehttp-XMayZ2GFnB6_headers/Input::READER_RAW: terminating thread (empty) 0.000000 Reporter::INFO received termination signal (empty) #close 2017-01-30-12-26-47 > On Jan 29, 2017, at 5:37 PM, Dave Crawford wrote: > > I tried with ?pseudo-realtime as well as creating a new PCAP to test with but it still exhibits the same behavior. ActiveHTTP successfully makes the request, and receives a response based other the contents of the temp files, but the when() block is never executed. > > The reporter.log only has an event for the termination: > > #types time enum string string > 1485725443.690539 Reporter::INFO received termination signal (empty) > > Is anyone able to re-create the same issue or is this limited to my environment? > > -Dave > >> On Jan 29, 2017, at 12:41 PM, Jan Grash?fer > wrote: >> >> Hi Dave, >> >>> But if I pass it a PCAP it exhibits the same condition where the when loop isn?t entered: >>> >>> bro -r test.pcap b.bro >> >> my guess would be that reading a pcap causes timing problems. Have you >> tried processing the pcap using --pseudo-realtime? >> >> Jan >> _______________________________________________ >> Bro mailing list >> bro at bro-ids.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170130/55ac2953/attachment.html From bro at pingtrip.com Mon Jan 30 09:47:46 2017 From: bro at pingtrip.com (Dave Crawford) Date: Mon, 30 Jan 2017 12:47:46 -0500 Subject: [Bro] ActiveHTTP In-Reply-To: References: <92596923-DDD9-4879-8E38-5776154B4ADF@pingtrip.com> <26FADE91-D154-4C4E-A8F0-A5D50DC5F977@illinois.edu> <6391AA8D-067C-4239-9C58-0E3BCE143A43@pingtrip.com> <3fa2c498-1cc9-d252-e542-544d52e1330f@gmail.com> <9E6615FC-F5C8-48AF-8BEF-B2883F52757D@pingtrip.com> Message-ID: <829BF14C-90C5-48CB-9ED2-74C9CED5FE5C@pingtrip.com> Ok, scratch that error message. The box I was testing on didn?t have curl installed. After installing curl the test script has the same behavior as when run on OS X. Work great by itself but hangs before the when{} block if passed a PCAP. > On Jan 30, 2017, at 12:34 PM, Dave Crawford wrote: > > I?ve been able to test this in another environment (Debian 8.7 x64) and unlike OS X where the ActiveHTTP conducts a successful request but then doesn?t enter the when{} block, on Debian it errors with the following written to reporter.log: > > $ bro --version > bro version 2.5-30 > > $ bro b.bro > > 0.000000 Reporter::ERROR curl -s -g -o "/tmp/bro-activehttp-XMayZ2GFnB6_body" -D "/tmp/bro-activehttp-XMayZ2GFnB6_headers" -X "GET" -m 60 "https://www.google.com/ " && touch /tmp/bro-activehttp-XMayZ2GFnB6_body |/Input::READER_RAW: Child process exited with non-zero return code 127 (empty) > 0.000000 Reporter::WARNING Stream vqz7bJcG1Pg is already queued for removal. Ignoring remove. (empty) > 0.000000 Reporter::ERROR /tmp/bro-activehttp-XMayZ2GFnB6_body/Input::READER_RAW: Init: cannot open /tmp/bro-activehttp-XMayZ2GFnB6_body (empty) > 0.000000 Reporter::ERROR /tmp/bro-activehttp-XMayZ2GFnB6_body/Input::READER_RAW: Init failed (empty) > 0.000000 Reporter::ERROR /tmp/bro-activehttp-XMayZ2GFnB6_body/Input::READER_RAW: terminating thread (empty) > 0.000000 Reporter::ERROR /tmp/bro-activehttp-XMayZ2GFnB6_headers/Input::READER_RAW: Init: cannot open /tmp/bro-activehttp-XMayZ2GFnB6_headers (empty) > 0.000000 Reporter::ERROR /tmp/bro-activehttp-XMayZ2GFnB6_headers/Input::READER_RAW: Init failed (empty) > 0.000000 Reporter::ERROR /tmp/bro-activehttp-XMayZ2GFnB6_headers/Input::READER_RAW: terminating thread (empty) > 0.000000 Reporter::INFO received termination signal (empty) > #close 2017-01-30-12-26-47 > > >> On Jan 29, 2017, at 5:37 PM, Dave Crawford > wrote: >> >> I tried with ?pseudo-realtime as well as creating a new PCAP to test with but it still exhibits the same behavior. ActiveHTTP successfully makes the request, and receives a response based other the contents of the temp files, but the when() block is never executed. >> >> The reporter.log only has an event for the termination: >> >> #types time enum string string >> 1485725443.690539 Reporter::INFO received termination signal (empty) >> >> Is anyone able to re-create the same issue or is this limited to my environment? >> >> -Dave >> >>> On Jan 29, 2017, at 12:41 PM, Jan Grash?fer > wrote: >>> >>> Hi Dave, >>> >>>> But if I pass it a PCAP it exhibits the same condition where the when loop isn?t entered: >>>> >>>> bro -r test.pcap b.bro >>> >>> my guess would be that reading a pcap causes timing problems. Have you >>> tried processing the pcap using --pseudo-realtime? 
>>> >>> Jan >>> _______________________________________________ >>> Bro mailing list >>> bro at bro-ids.org >>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >> >> _______________________________________________ >> Bro mailing list >> bro at bro-ids.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170130/592bb640/attachment-0001.html From jlay at slave-tothe-box.net Mon Jan 30 10:08:03 2017 From: jlay at slave-tothe-box.net (James Lay) Date: Mon, 30 Jan 2017 11:08:03 -0700 Subject: [Bro] Config.log Message-ID: <9ee7e80fbafdce5e5a9af45b05475e17@localhost> An odd request I'm sure....almost all other apps that I do the ./configure, make, sudo make install dance leave me with a config.log. Bro does not. Can we get this please? Unless the information is contained somewhere else of course...thank you. James From karol at babioch.de Mon Jan 30 11:01:37 2017 From: karol at babioch.de (Karol Babioch) Date: Mon, 30 Jan 2017 20:01:37 +0100 Subject: [Bro] Config.log In-Reply-To: <9ee7e80fbafdce5e5a9af45b05475e17@localhost> References: <9ee7e80fbafdce5e5a9af45b05475e17@localhost> Message-ID: <3e1389de-7023-53bc-d19c-d9482d256a5e@babioch.de> Hi, Am 30.01.2017 um 19:08 schrieb James Lay: > An odd request I'm sure....almost all other apps that I do the > ./configure, make, sudo make install dance leave me with a > config.log. This is usually done by Autotools. Bro uses CMake, which doesn't provide such a mechanism by itself, see [1]. However, I'm not familiar with the codebase at all, so maybe something like that was already implemented. > Unless the information is contained somewhere else of course...thank > you. What in particular are you actually looking for? CMake should complain about missing dependencies, etc., and at least in my case it did. Best regards, Karol Babioch [1]: http://public.kitware.com/pipermail/cmake/2008-January/019426.html -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170130/2047d60b/attachment.bin From jan.grashoefer at gmail.com Mon Jan 30 11:02:13 2017 From: jan.grashoefer at gmail.com (=?UTF-8?Q?Jan_Grash=c3=b6fer?=) Date: Mon, 30 Jan 2017 20:02:13 +0100 Subject: [Bro] ActiveHTTP In-Reply-To: <829BF14C-90C5-48CB-9ED2-74C9CED5FE5C@pingtrip.com> References: <92596923-DDD9-4879-8E38-5776154B4ADF@pingtrip.com> <26FADE91-D154-4C4E-A8F0-A5D50DC5F977@illinois.edu> <6391AA8D-067C-4239-9C58-0E3BCE143A43@pingtrip.com> <3fa2c498-1cc9-d252-e542-544d52e1330f@gmail.com> <9E6615FC-F5C8-48AF-8BEF-B2883F52757D@pingtrip.com> <829BF14C-90C5-48CB-9ED2-74C9CED5FE5C@pingtrip.com> Message-ID: <533e24eb-3589-4f84-e7e1-3032142efbb9@gmail.com> > Ok, scratch that error message. The box I was testing on didn?t have curl installed. After installing curl the test script has the same behavior as when run on OS X. Work great by itself but hangs before the when{} block if passed a PCAP. bro --pseudo-realtime -r Mswab_Yayih_FD1BE09E499E8E380424B3835FC973A8_2012-03.pcap b.bro works for me. Takes about one and a half minute (the PCAP covers ~5mins) to spit out the result. 
Jan From bro at pingtrip.com Mon Jan 30 11:54:04 2017 From: bro at pingtrip.com (Dave Crawford) Date: Mon, 30 Jan 2017 14:54:04 -0500 Subject: [Bro] ActiveHTTP In-Reply-To: <533e24eb-3589-4f84-e7e1-3032142efbb9@gmail.com> References: <92596923-DDD9-4879-8E38-5776154B4ADF@pingtrip.com> <26FADE91-D154-4C4E-A8F0-A5D50DC5F977@illinois.edu> <6391AA8D-067C-4239-9C58-0E3BCE143A43@pingtrip.com> <3fa2c498-1cc9-d252-e542-544d52e1330f@gmail.com> <9E6615FC-F5C8-48AF-8BEF-B2883F52757D@pingtrip.com> <829BF14C-90C5-48CB-9ED2-74C9CED5FE5C@pingtrip.com> <533e24eb-3589-4f84-e7e1-3032142efbb9@gmail.com> Message-ID: <80700D67-5588-4637-96CC-36AEF480F299@pingtrip.com> Thanks Jan, what version of Bro are you running and on which platform? I have 'bro version 2.5-30?, compiled from Github master, on Debian 8.7 and macOS 10.12.2 and both hang until I ctrl-C, and neither enters the when{} block: macOS$ time bro -r bro_dev/Mswab_Yayih_FD1BE09E499E8E380424B3835FC973A8_2012-03.pcap b.bro ^C1330843811.964963 received termination signal real 8m30.316s user 1m31.343s sys 6m58.036s debian$ time bro -r test2.pcap b.bro ^C1330843811.964963 received termination signal real 2m42.507s user 1m19.328s sys 1m23.168s > On Jan 30, 2017, at 2:02 PM, Jan Grash?fer wrote: > >> Ok, scratch that error message. The box I was testing on didn?t have curl installed. After installing curl the test script has the same behavior as when run on OS X. Work great by itself but hangs before the when{} block if passed a PCAP. > > bro --pseudo-realtime -r > Mswab_Yayih_FD1BE09E499E8E380424B3835FC973A8_2012-03.pcap b.bro > > works for me. Takes about one and a half minute (the PCAP covers ~5mins) > to spit out the result. > > Jan > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170130/e6247acd/attachment.html From jan.grashoefer at gmail.com Mon Jan 30 12:21:33 2017 From: jan.grashoefer at gmail.com (=?UTF-8?Q?Jan_Grash=c3=b6fer?=) Date: Mon, 30 Jan 2017 21:21:33 +0100 Subject: [Bro] ActiveHTTP In-Reply-To: References: <92596923-DDD9-4879-8E38-5776154B4ADF@pingtrip.com> <26FADE91-D154-4C4E-A8F0-A5D50DC5F977@illinois.edu> <6391AA8D-067C-4239-9C58-0E3BCE143A43@pingtrip.com> <3fa2c498-1cc9-d252-e542-544d52e1330f@gmail.com> <9E6615FC-F5C8-48AF-8BEF-B2883F52757D@pingtrip.com> <829BF14C-90C5-48CB-9ED2-74C9CED5FE5C@pingtrip.com> <533e24eb-3589-4f84-e7e1-3032142efbb9@gmail.com> Message-ID: > Thanks Jan, what version of Bro are you running and on which platform? I am using Bro 2.5 on Fedora 23 (4.8 kernel). > I have 'bro version 2.5-30?, compiled from Github master, on Debian 8.7 and macOS 10.12.2 and both hang until I ctrl-C, and neither enters the when{} block: $ time bro --pseudo-realtime -r Mswab_Yayih_FD1BE09E499E8E380424B3835FC973A8_2012-03.pcap b.bro [code=302, msg=Found\x0d, body=...] 1485807420.620682 received termination signal real 1m0.583s user 0m26.229s sys 0m34.185s Without "--pseudo-realtime" it seems to hang for me, too. Have you tried using it? 
Jan From jlay at slave-tothe-box.net Mon Jan 30 13:40:21 2017 From: jlay at slave-tothe-box.net (James Lay) Date: Mon, 30 Jan 2017 14:40:21 -0700 Subject: [Bro] Config.log In-Reply-To: <3e1389de-7023-53bc-d19c-d9482d256a5e@babioch.de> References: <9ee7e80fbafdce5e5a9af45b05475e17@localhost> <3e1389de-7023-53bc-d19c-d9482d256a5e@babioch.de> Message-ID: On 2017-01-30 12:01, Karol Babioch wrote: > Hi, > > Am 30.01.2017 um 19:08 schrieb James Lay: >> An odd request I'm sure....almost all other apps that I do the >> ./configure, make, sudo make install dance leave me with a >> config.log. > > This is usually done by Autotools. Bro uses CMake, which doesn't > provide > such a mechanism by itself, see [1]. However, I'm not familiar with the > codebase at all, so maybe something like that was already implemented. > >> Unless the information is contained somewhere else of course...thank >> you. > > What in particular are you actually looking for? CMake should complain > about missing dependencies, etc., and at least in my case it did. > > Best regards, > Karol Babioch > > [1]: http://public.kitware.com/pipermail/cmake/2008-January/019426.html > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro We...truth be told I have some installs which may not have been installed in the standard location...so..I go back and look at my config.log for any special config options I may have passed. I know...I should just document it and drive on, which I've done on most..just...not all :) That was really the only reason. James From bro at pingtrip.com Mon Jan 30 13:44:27 2017 From: bro at pingtrip.com (Dave Crawford) Date: Mon, 30 Jan 2017 16:44:27 -0500 Subject: [Bro] ActiveHTTP In-Reply-To: References: <92596923-DDD9-4879-8E38-5776154B4ADF@pingtrip.com> <26FADE91-D154-4C4E-A8F0-A5D50DC5F977@illinois.edu> <6391AA8D-067C-4239-9C58-0E3BCE143A43@pingtrip.com> <3fa2c498-1cc9-d252-e542-544d52e1330f@gmail.com> <9E6615FC-F5C8-48AF-8BEF-B2883F52757D@pingtrip.com> <829BF14C-90C5-48CB-9ED2-74C9CED5FE5C@pingtrip.com> <533e24eb-3589-4f84-e7e1-3032142efbb9@gmail.com> Message-ID: <9D34C6F6-B24D-4111-B271-67E685DDC6E2@pingtrip.com> > On Jan 30, 2017, at 3:21 PM, Jan Grash?fer wrote: > > $ time bro --pseudo-realtime -r > Mswab_Yayih_FD1BE09E499E8E380424B3835FC973A8_2012-03.pcap b.bro > [code=302, msg=Found\x0d, body=...] > 1485807420.620682 received termination signal > > real 1m0.583s > user 0m26.229s > sys 0m34.185s > > Without "--pseudo-realtime" it seems to hang for me, too. Have you tried > using it? > > Jan Thanks Jan! So on the --pseudo-realtime option did the trick. I had similar results on Debian as you: real 1m0.579s user 0m31.236s sys 0m29.344s And similar results on macOS: real 1m0.568s user 0m13.238s sys 0m47.192s I at least now have a comfort level to continue writing my script (my production Bro boxes are Debian). -Dave -------------- next part -------------- An HTML attachment was scrubbed... 
From jazoff at illinois.edu Mon Jan 30 13:47:23 2017
From: jazoff at illinois.edu (Azoff, Justin S)
Date: Mon, 30 Jan 2017 21:47:23 +0000
Subject: [Bro] Config.log
In-Reply-To:
References: <9ee7e80fbafdce5e5a9af45b05475e17@localhost>
	<3e1389de-7023-53bc-d19c-d9482d256a5e@babioch.de>
Message-ID:

> On Jan 30, 2017, at 4:40 PM, James Lay wrote:
>
> We...truth be told I have some installs which may not have been installed
> in the standard location...so..I go back and look at my config.log for any
> special config options I may have passed. I know...I should just document
> it and drive on, which I've done on most..just...not all :) That was
> really the only reason.

$ cat build/config.status
# This is the command used to configure this build
./configure --prefix=/usr/local/bro --with-pcap=/opt/pfring --with-jemalloc=/usr/

--
- Justin Azoff

From jlay at slave-tothe-box.net Mon Jan 30 13:50:15 2017
From: jlay at slave-tothe-box.net (James Lay)
Date: Mon, 30 Jan 2017 14:50:15 -0700
Subject: [Bro] Config.log
In-Reply-To:
References: <9ee7e80fbafdce5e5a9af45b05475e17@localhost>
	<3e1389de-7023-53bc-d19c-d9482d256a5e@babioch.de>
Message-ID: <21e88bdcbdd45084eefd5bcb4e5ec5f3@localhost>

On 2017-01-30 14:47, Azoff, Justin S wrote:
>> On Jan 30, 2017, at 4:40 PM, James Lay wrote:
>>
>> We...truth be told I have some installs which may not have been installed
>> in the standard location...so..I go back and look at my config.log for any
>> special config options I may have passed. I know...I should just document
>> it and drive on, which I've done on most..just...not all :) That was
>> really the only reason.
>
> $ cat build/config.status
> # This is the command used to configure this build
> ./configure --prefix=/usr/local/bro --with-pcap=/opt/pfring
> --with-jemalloc=/usr/

Brilliant...thanks Justin!

James

From jdopheid at illinois.edu Mon Jan 30 17:22:20 2017
From: jdopheid at illinois.edu (Dopheide, Jeannette M)
Date: Tue, 31 Jan 2017 01:22:20 +0000
Subject: [Bro] Bro4Pros 2017: reminder to cancel vacated registration
In-Reply-To: <39D675C7-B01B-4402-94DD-B78B5E748787@illinois.edu>
References: <39D675C7-B01B-4402-94DD-B78B5E748787@illinois.edu>
Message-ID: <7EFD7D614A2BB84ABEA19B2CEDD246582621C6E5@CITESMBX5.ad.uillinois.edu>

Hello,

Friendly reminder to please cancel your registration if you know you will
not be able to attend Bro4Pros this Thursday. You can cancel on the
Eventbrite site or contact me and I will cancel it for you. Otherwise,
we'll see you Thursday.

Thanks,
Jeannette Dopheide

________________________________________
From: bro-bounces at bro.org [bro-bounces at bro.org] on behalf of Dopheide, Jeannette M [jdopheid at illinois.edu]
Sent: Tuesday, January 17, 2017 11:01 AM
To: bro at bro.org
Subject: [Bro] Bro4Pros 2017: reminder to cancel vacated registration

Attention Bro4Pros attendees,

If you are unable to attend Bro4Pros on Thursday February 2nd, please
cancel your registration so that we may open the spot to others.

For those of you attending Bro4Pros, see you in a couple weeks!
Thanks,
Jeannette Dopheide

------
Jeannette Dopheide
Training and Outreach Coordinator
National Center for Supercomputing Applications
University of Illinois at Urbana-Champaign

_______________________________________________
Bro mailing list
bro at bro-ids.org
http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro

From pratik.inamdar at sjsu.edu Mon Jan 30 17:24:16 2017
From: pratik.inamdar at sjsu.edu (Pratik Inamdar)
Date: Mon, 30 Jan 2017 18:24:16 -0700
Subject: [Bro] Bro4Pros 2017: reminder to cancel vacated registration
In-Reply-To: <7EFD7D614A2BB84ABEA19B2CEDD246582621C6E5@CITESMBX5.ad.uillinois.edu>
References: <39D675C7-B01B-4402-94DD-B78B5E748787@illinois.edu>
	<7EFD7D614A2BB84ABEA19B2CEDD246582621C6E5@CITESMBX5.ad.uillinois.edu>
Message-ID:

Please cancel my registration. Thank you!

On 30 Jan 2017 18:23, "Dopheide, Jeannette M" wrote:

> Hello,
>
> Friendly reminder to please cancel your registration if you know you will
> not be able to attend Bro4Pros this Thursday. You can cancel on the
> Eventbrite site or contact me and I will cancel it for you. Otherwise,
> we'll see you Thursday.
>
> Thanks,
> Jeannette Dopheide
>
> ________________________________________
> From: bro-bounces at bro.org [bro-bounces at bro.org] on behalf of Dopheide,
> Jeannette M [jdopheid at illinois.edu]
> Sent: Tuesday, January 17, 2017 11:01 AM
> To: bro at bro.org
> Subject: [Bro] Bro4Pros 2017: reminder to cancel vacated registration
>
> Attention Bro4Pros attendees,
>
> If you are unable to attend Bro4Pros on Thursday February 2nd, please
> cancel your registration so that we may open the spot to others.
>
> For those of you attending Bro4Pros, see you in a couple weeks!
>
> Thanks,
> Jeannette Dopheide
>
> ------
> Jeannette Dopheide
> Training and Outreach Coordinator
> National Center for Supercomputing Applications
> University of Illinois at Urbana-Champaign
>
>
>
> _______________________________________________
> Bro mailing list
> bro at bro-ids.org
> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro
>
> _______________________________________________
> Bro mailing list
> bro at bro-ids.org
> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro
>

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170130/6654d7d0/attachment-0001.html

From pssunu6 at gmail.com Tue Jan 31 04:29:25 2017
From: pssunu6 at gmail.com (ps sunu)
Date: Tue, 31 Jan 2017 17:59:25 +0530
Subject: [Bro] files.log need to add id [orig_h,p and resp_h,p]
Message-ID:

Hi,

I need to add id [orig_h, orig_p, resp_h, resp_p] in files.log, so I tried
to add the content into opt/bro/share/bro/base/frameworks/files/main.bro
but it's not accepting it.

I added the below code into main.bro:

id: conn_id &log;

and

function set_info(f: fa_file)
	{
	if ( ! f?$info )
		{
		local tmp: Info = Info($ts=f$last_active, $fuid=f$id, $id=f$conns);
		f$info = tmp;
		print "test", f$conns;
		}
	}

Any other way to do this?

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170131/f534bb21/attachment.html

From bro at pingtrip.com Tue Jan 31 06:49:32 2017
From: bro at pingtrip.com (Dave Crawford)
Date: Tue, 31 Jan 2017 09:49:32 -0500
Subject: [Bro] Converting Notice::Info to JSON
Message-ID: <8084C305-D1BE-4E68-8A21-684F7A4E50A5@pingtrip.com>

I'm creating a script that hooks Notice notice/policy and executes an
ActiveHTTP call to submit specific notice events to a REST endpoint.
In the submission I'd like to include the Notice::Info object as a JSON
data field so I tried:

to_json(n)

But it produces the following error:

1485869266.028563 error in /Users/dave/Projects/bro/share/bro/base/utils/json.bro, line 26: wrong port format, must be /[0-9]{1,5}\/(tcp|udp|icmp)/ (to_port(cat(v)))

Do I need to manually re-package all the fields of the Notice::Info, and
if so, has anyone already done this so I can borrow the code? :-)

This is the Notice::Info object I'm testing with:

[ts=1485872499.141021, uid=CSRU563utEL1B2yFl5, id=[orig_h=10.0.2.15, orig_p=1381/tcp, resp_h=199.192.156.134, resp_p=443/tcp], conn=, iconn=, f=, fuid=, file_mime_type=, file_desc=, proto=tcp, note=Signatures::Sensitive_Signature, msg=10.0.2.15: ATTACK-RESPONSES Microsoft cmd.exe banner (reverse-shell originator), sub=POST /bbs/info.asp HTTP/1.1\x0d\x0aHost: 199.192.156.134:443\x0d\x0aContent-Length: 165\x0d\x0aConnection: Keep-Alive\x0d\x0aCache-Control: no-cache\x0d\x0a\x0d\x0a3D333531501A..., src=10.0.2.15, dst=199.192.156.134, p=443/tcp, n=, src_peer=[id=0, host=127.0.0.1, p=0/unknown, is_local=T, descr=bro, class=], peer_descr=bro, actions={ Phantom::ACTION_PHANTOM, Notice::ACTION_LOG }, email_body_sections=[], email_delay_tokens={ }, identifier=, suppress_for=1.0 hr, dropped=F, remote_location=]

-Dave

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170131/c74747ad/attachment.html

From jazoff at illinois.edu Tue Jan 31 08:16:25 2017
From: jazoff at illinois.edu (Azoff, Justin S)
Date: Tue, 31 Jan 2017 16:16:25 +0000
Subject: [Bro] Converting Notice::Info to JSON
In-Reply-To: <8084C305-D1BE-4E68-8A21-684F7A4E50A5@pingtrip.com>
References: <8084C305-D1BE-4E68-8A21-684F7A4E50A5@pingtrip.com>
Message-ID:

> On Jan 31, 2017, at 9:49 AM, Dave Crawford wrote:
>
> I'm creating a script that hooks Notice notice/policy and executes an
> ActiveHTTP call to submit specific notice events to a REST endpoint. In
> the submission I'd like to include the Notice::Info object as a JSON data
> field so I tried:
>
> to_json(n)
>
> But it produces the following error:
>
> 1485869266.028563 error in /Users/dave/Projects/bro/share/bro/base/utils/json.bro, line 26: wrong port format, must be /[0-9]{1,5}\/(tcp|udp|icmp)/ (to_port(cat(v)))

This looks like a bug in to_json (or possibly to_port)... but it's harmless
and there are some workarounds you can do.

The json.bro code does this to convert ports to strings for json:

        case "port":
                return cat(port_to_count(to_port(cat(v))));

but the unknown/uninitialized port of 0/unknown breaks to_port. It seems
to_port needs to account for 0/unknown, or json.bro should just be doing:

        case "port":
                return cat(port_to_count(v));

I'm not sure why it does a double conversion like that in the first place.

In any case, the code still works even though it outputs that error. Since
it doesn't understand the port it returns 0/unknown anyway, so the end
result is the same:

$ cat j.bro
event bro_init()
{
    local c: conn_id;
    c$orig_h=1.2.3.4;
    c$resp_p=0/unknown;
    print to_json(c);
}
$ bro j.bro
error in /usr/local/Cellar/bro/HEAD/share/bro/base/utils/json.bro, line 26: wrong port format, must be /[0-9]{1,5}\/(tcp|udp|icmp)/ (to_port(cat(v)))
{"orig_h": "1.2.3.4", "resp_p": 0}
$

You could probably avoid the whole issue by using to_json like this:

to_json(note, T);

to set the only_loggable option to true which should cause it to ignore
fields that aren't normally logged in the first place.

--
- Justin Azoff
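(Putting this suggestion together with the original question, a rough
sketch of such a notice-forwarding script might look like the following.
This is not Dave's actual code: the endpoint URL is a placeholder, and the
filter on Signatures::Sensitive_Signature is only an example taken from the
notice shown above.)

@load base/frameworks/notice
@load base/utils/active-http
@load base/utils/json

hook Notice::policy(n: Notice::Info)
	{
	# Only forward the notice types we care about (example filter).
	if ( n$note != Signatures::Sensitive_Signature )
		return;

	# Placeholder endpoint; to_json(n, T) keeps only the loggable fields.
	local req = ActiveHTTP::Request($url="http://127.0.0.1:8080/notices",
	                                $method="POST",
	                                $client_data=to_json(n, T));

	when ( local resp = ActiveHTTP::request(req) )
		{
		if ( resp$code != 200 )
			print fmt("notice POST failed: %d", resp$code);
		}
	}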
From jazoff at illinois.edu Tue Jan 31 11:55:28 2017
From: jazoff at illinois.edu (Azoff, Justin S)
Date: Tue, 31 Jan 2017 19:55:28 +0000
Subject: [Bro] Possible system tweak for reducing memory usage
Message-ID:

TL;DR: It's possible that transparent huge pages and bro do not get along,
try doing a

# on all nodes
echo never > /sys/kernel/mm/transparent_hugepage/enabled
# then
broctl restart

There are ways to make that permanent if it helps.


I've been doing some research to try to figure out why some people have
more memory issues than others. I think the kernel feature Transparent
Huge Pages (THP) and bro may not get along well. It's supposed to help
performance for memory allocations, but many services recommend disabling
it (mongodb, redis, mysql). For example:

> Transparent Huge Pages (THP) is a Linux memory management system that
> reduces the overhead of Translation Lookaside Buffer (TLB) lookups on
> machines with large amounts of memory by using larger memory pages.
>
> However, database workloads often perform poorly with THP, because they
> tend to have sparse rather than contiguous memory access patterns. You
> should disable THP on Linux machines to ensure best performance with
> MongoDB.

Bro memory allocations can best be described as unpredictable, especially
on 'cluster in a box' deployments.

On our systems, disabling it drops bro worker memory usage by 20% and
manager/logger usage by even more, but since we only have one of those
it's harder to compare. For workers I disabled THP on half the nodes, and
the post bro restart memory usage is consistently lower.

--
- Justin Azoff

From shirkdog.bsd at gmail.com Tue Jan 31 12:09:40 2017
From: shirkdog.bsd at gmail.com (Michael Shirk)
Date: Tue, 31 Jan 2017 15:09:40 -0500
Subject: [Bro] Possible system tweak for reducing memory usage
In-Reply-To:
References:
Message-ID:

This was always a RHEL6/CentOS6 requirement for applications like you
stated.

Which OS are you noticing the issue and the performance gains on?

--
Michael Shirk
Daemon Security, Inc.
http://www.daemon-security.com

On Jan 31, 2017 3:04 PM, "Azoff, Justin S" wrote:

> TL;DR: It's possible that transparent huge pages and bro do not get along,
> try doing a
>
> # on all nodes
> echo never > /sys/kernel/mm/transparent_hugepage/enabled
> # then
> broctl restart
>
> There are ways to make that permanent if it helps.
>
>
> I've been doing some research to try to figure out why some people have
> more memory issues than others. I think the kernel feature Transparent
> Huge Pages (THP) and bro may not get along well. It's supposed to help
> performance for memory allocations, but many services recommend disabling
> it (mongodb, redis, mysql). For example:
>
> > Transparent Huge Pages (THP) is a Linux memory management system that
> > reduces the overhead of Translation Lookaside Buffer (TLB) lookups on
> > machines with large amounts of memory by using larger memory pages.
> >
> > However, database workloads often perform poorly with THP, because they
> > tend to have sparse rather than contiguous memory access patterns. You
> > should disable THP on Linux machines to ensure best performance with
> > MongoDB.
>
> Bro memory allocations can best be described as unpredictable, especially
> on 'cluster in a box' deployments.
>
> On our systems, disabling it drops bro worker memory usage by 20% and
> manager/logger usage by even more, but since we only have one of those
> it's harder to compare.
> For workers I disabled THP on half the nodes, and the
> post bro restart memory usage is consistently lower.
>
> --
> - Justin Azoff
>
>
> _______________________________________________
> Bro mailing list
> bro at bro-ids.org
> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro
>

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170131/99cc6e78/attachment.html

From espressobeanies at gmail.com Tue Jan 31 12:09:48 2017
From: espressobeanies at gmail.com (Espresso Beanies)
Date: Tue, 31 Jan 2017 15:09:48 -0500
Subject: [Bro] Difference between Bro Clustering method and lb_procs
Message-ID:

Good afternoon,

What is the difference between the Bro Clustering method of creating
multiple workers and lb_procs? I see the same # of CPUs in-use regardless.

Sincerely,

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170131/5724e40b/attachment.html

From jazoff at illinois.edu Tue Jan 31 12:18:57 2017
From: jazoff at illinois.edu (Azoff, Justin S)
Date: Tue, 31 Jan 2017 20:18:57 +0000
Subject: [Bro] Possible system tweak for reducing memory usage
In-Reply-To:
References:
Message-ID:

> On Jan 31, 2017, at 3:09 PM, Michael Shirk wrote:
>
> This was always a RHEL6/CentOS6 requirement for applications like you stated.
>
> Which OS are you noticing the issue and the performance gains on?

Centos 7.3

--
- Justin Azoff

From hosom at battelle.org Tue Jan 31 12:29:51 2017
From: hosom at battelle.org (Hosom, Stephen M)
Date: Tue, 31 Jan 2017 20:29:51 +0000
Subject: [Bro] Possible system tweak for reducing memory usage
In-Reply-To:
References:
Message-ID:

Can confirm that this has always been a performance improvement in our
environment. We have historically used Ubuntu 14.04 and 16.04.

-----Original Message-----
From: bro-bounces at bro.org [mailto:bro-bounces at bro.org] On Behalf Of Azoff, Justin S
Sent: Tuesday, January 31, 2017 3:19 PM
To: Michael Shirk
Cc: bro
Subject: Re: [Bro] Possible system tweak for reducing memory usage

> On Jan 31, 2017, at 3:09 PM, Michael Shirk wrote:
>
> This was always a RHEL6/CentOS6 requirement for applications like you stated.
>
> Which OS are you noticing the issue and the performance gains on?

Centos 7.3

--
- Justin Azoff

_______________________________________________
Bro mailing list
bro at bro-ids.org
http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro

From bro at pingtrip.com Tue Jan 31 12:39:47 2017
From: bro at pingtrip.com (Dave Crawford)
Date: Tue, 31 Jan 2017 15:39:47 -0500
Subject: [Bro] Converting Notice::Info to JSON
In-Reply-To:
References: <8084C305-D1BE-4E68-8A21-684F7A4E50A5@pingtrip.com>
Message-ID: <93400B68-43ED-4070-9181-C1677EB6C2A1@pingtrip.com>

> On Jan 31, 2017, at 11:16 AM, Azoff, Justin S wrote:
>
> You could probably avoid the whole issue by using to_json like this:
>
> to_json(note, T);
>
> to set the only_loggable option to true which should cause it to ignore
> fields that aren't normally logged in the first place.
>
> --
> - Justin Azoff
>

Thanks Justin, that did the trick.

-Dave

From hovsep.sanjay.levi at gmail.com Tue Jan 31 16:11:52 2017
From: hovsep.sanjay.levi at gmail.com (Hovsep Levi)
Date: Wed, 1 Feb 2017 00:11:52 +0000
Subject: [Bro] Possible system tweak for reducing memory usage
In-Reply-To:
References:
Message-ID:

I think pf_ring ZC requires hugepages so this fix benefits a subset of
Linux deployments.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170201/09f71331/attachment.html

From hovsep.sanjay.levi at gmail.com Tue Jan 31 16:25:16 2017
From: hovsep.sanjay.levi at gmail.com (Hovsep Levi)
Date: Wed, 1 Feb 2017 00:25:16 +0000
Subject: [Bro] Logging and memory leak
Message-ID:

Noticed this today and I believe it's related to a recent cluster crash.
The virtual memory size of the Bro manager continues to grow slowly (56G
right now) while the resident active memory remains around 4G. To my
knowledge this suggests a memory leak. The system is FreeBSD.

We are using a Kafka only based output with local file logging disabled
and a modified number of loggers. I think a single logger works fine for
the most part but I used 4 here for testing after the recent cluster
crash.

Regarding the crash, the cluster was running fine and then active memory
spiked sharply until reaching the ceiling and consuming swap. I suspect it
was related to the VSIZE of the manager reaching a maximum of some sort
and triggering the failure.

Name      Type     Host      Pid    Proc    VSize  Rss   Cpu   Cmd
logger-1  logger   10.1.1.1  85959  parent  176M   67M   6%    bro
logger-1  logger   10.1.1.1  86115  child   194M   62M   0%    bro
logger-2  logger   10.1.1.1  85962  parent  709M   175M  19%   bro
logger-2  logger   10.1.1.1  86017  child   194M   65M   1%    bro
logger-3  logger   10.1.1.1  85965  parent  663M   164M  13%   bro
logger-3  logger   10.1.1.1  86114  child   202M   71M   0%    bro
logger-4  logger   10.1.1.1  85967  parent  663M   157M  17%   bro
logger-4  logger   10.1.1.1  86113  child   194M   63M   0%    bro
manager   manager  10.1.1.1  86204  child   878M   649M  100%  bro
manager   manager  10.1.1.1  86109  parent  56G    4G    16%   bro

last pid:  1482;  load averages:  4.73,  4.73,  4.58   up 3+23:25:56  00:12:45
52 processes:  2 running, 50 sleeping
CPU:  3.4% user,  0.3% nice,  3.3% system,  0.0% interrupt, 92.9% idle
Mem: 538M Active, 6674M Inact, 17G Wired, 26M Cache, 101G Free
ARC: 14G Total, 2325M MFU, 11G MRU, 144K Anon, 57M Header, 548M Other
Swap: 12G Total, 17M Used, 12G Free

  PID USERNAME    THR PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
86204 bro           1 108    5   878M   649M CPU27  27  26.0H 100.00% bro
85962 bro         172  20    0   709M   175M select 25  22.4H  25.63% bro
86109 bro           7  20    0 57682M  4879M uwait  13 499:16  22.17% bro
85967 bro         162  20    0   663M   157M select 13  19.7H  20.80% bro
85965 bro         162  20    0   663M   164M select 23  18.2H  16.31% bro
85959 bro          21  20    0   176M 69352K select 40 193:32   6.30% bro
86017 bro           1  25    5   194M 66772K select 36  28:18   1.27% bro
86113 bro           1  25    5   194M 65068K select 42  23:28   0.98% bro
86114 bro           1  25    5   202M 73308K select 38  13:43   0.39% bro

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170201/f0737a5f/attachment-0001.html

From jazoff at illinois.edu Tue Jan 31 16:26:41 2017
From: jazoff at illinois.edu (Azoff, Justin S)
Date: Wed, 1 Feb 2017 00:26:41 +0000
Subject: [Bro] Possible system tweak for reducing memory usage
In-Reply-To:
References:
Message-ID: <5B0182AB-46C2-43FA-9D66-33BAABC43B5E@illinois.edu>

> On Jan 31, 2017, at 7:11 PM, Hovsep Levi wrote:
>
> I think pf_ring ZC requires hugepages so this fix benefits a subset of Linux deployments.
>

huge pages or transparent huge pages?
--
- Justin Azoff

From jazoff at illinois.edu Tue Jan 31 16:27:54 2017
From: jazoff at illinois.edu (Azoff, Justin S)
Date: Wed, 1 Feb 2017 00:27:54 +0000
Subject: [Bro] Logging and memory leak
In-Reply-To:
References:
Message-ID:

> On Jan 31, 2017, at 7:25 PM, Hovsep Levi wrote:
>
> Noticed this today and I believe it's related to a recent cluster crash.
> The virtual memory size of the Bro manager continues to grow slowly (56G
> right now) while the resident active memory remains around 4G. To my
> knowledge this suggests a memory leak. The system is FreeBSD.

Are you loading misc/scan or misc/detect-traceroute ?

--
- Justin Azoff

From jazoff at illinois.edu Tue Jan 31 16:28:49 2017
From: jazoff at illinois.edu (Azoff, Justin S)
Date: Wed, 1 Feb 2017 00:28:49 +0000
Subject: [Bro] Logging and memory leak
In-Reply-To:
References:
Message-ID:

> On Jan 31, 2017, at 7:25 PM, Hovsep Levi wrote:
>
> We are using a Kafka only based output with local file logging disabled
> and a modified number of loggers. I think a single logger works fine for
> the most part but I used 4 here for testing after the recent cluster
> crash.

Oh! You got multiple loggers working? I thought you had the broctl changes
right before, but you were seeing those weird issues.. did you figure out
what was causing them?

--
- Justin Azoff

From hovsep.sanjay.levi at gmail.com Tue Jan 31 16:34:47 2017
From: hovsep.sanjay.levi at gmail.com (Hovsep Levi)
Date: Wed, 1 Feb 2017 00:34:47 +0000
Subject: [Bro] Possible system tweak for reducing memory usage
In-Reply-To: <5B0182AB-46C2-43FA-9D66-33BAABC43B5E@illinois.edu>
References: <5B0182AB-46C2-43FA-9D66-33BAABC43B5E@illinois.edu>
Message-ID:

Oops. Only hugepages, not transparent hugepages. Nevermind what I said.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170201/c7c59e0b/attachment.html

From hovsep.sanjay.levi at gmail.com Tue Jan 31 16:36:26 2017
From: hovsep.sanjay.levi at gmail.com (Hovsep Levi)
Date: Wed, 1 Feb 2017 00:36:26 +0000
Subject: [Bro] Logging and memory leak
In-Reply-To:
References:
Message-ID:

No, both are disabled.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170201/dfea07f1/attachment.html

From hovsep.sanjay.levi at gmail.com Tue Jan 31 17:05:00 2017
From: hovsep.sanjay.levi at gmail.com (Hovsep Levi)
Date: Wed, 1 Feb 2017 01:05:00 +0000
Subject: [Bro] Logging and memory leak
In-Reply-To:
References:
Message-ID:

Not really, I was going to reply to the old thread regarding this. I'm in
the process of switching back to a single logger and considering trying
Kafka export from each worker directly.

Right now the logs are backlogged by about 20 minutes which I suspect is
the bottleneck issue. Apparently when using only Kafka export it has taken
24+ hours to reach this state as opposed to previously with file based
logging where the logs would be delayed by 20 minutes within an hour's
time.

I'm sure the weird issues still exist with multiple loggers, I just can't
see them as easily right now, my Logstash parser doesn't handle them yet.
The priority has been to get the cluster stable, after that I'll have time
to work on optimization. It seems with the current configuration a cluster
restart once per day is going to be required. I'm also about to add
another 44 workers to resolve the 9-17% packet loss per worker during
peak.
I'm expecting the individual worker export to have its own set of
challenges so my time may be better spent re-writing the logger node for
high volume. Right now I don't know how to reconfigure Bro to Kafka export
from the workers directly, have to read more source.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20170201/f823aacb/attachment.html
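(For readers looking at the same problem: the Kafka output discussed above
is typically provided by a third-party log writer plugin rather than by Bro
itself. A minimal configuration sketch, assuming the Apache Metron Kafka
writer plugin -- option and script names follow that plugin and may differ
in other builds, and the broker addresses are placeholders -- looks roughly
like this. It only selects which streams go to Kafka and where the brokers
live; it does not by itself change which cluster node performs the writes,
which is presumably the part being investigated above.)

# Sketch only: assumes the third-party "metron-bro-plugin-kafka" log writer
# is installed; the names below come from that plugin and may differ by version.
@load Bro/Kafka/logs-to-kafka.bro

redef Kafka::topic_name = "bro";
redef Kafka::logs_to_send = set(Conn::LOG, HTTP::LOG, DNS::LOG, Files::LOG);
redef Kafka::kafka_conf = table(
    ["metadata.broker.list"] = "kafka1.example.com:9092,kafka2.example.com:9092"
);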