From bkellogg at dresser-rand.com Fri Jan 2 08:46:34 2015 From: bkellogg at dresser-rand.com (Kellogg, Brian D (OLN)) Date: Fri, 2 Jan 2015 16:46:34 +0000 Subject: [Bro] adding srcip to correlation script Message-ID: I'm working with the correlation script released by CrowdStrike, thank you BTW, and I want to populate the "srcip" field with the correct source IP so that I can do a groupby on that field in ELSA. How do I get the conn record for this connection into the below function so that I can add $conn=c to the notice? Not sure what the best way to do this is; can I just add it to the function arguments, or define "c" as a local and then assign the source IP, "idx" in this case, to c$id$orig_h? function alerts_out(t: table[addr] of set[string], idx: addr): interval thanks, Brian -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150102/1bbf6455/attachment.html From damonrouse at gmail.com Fri Jan 2 09:58:11 2015 From: damonrouse at gmail.com (Damon Rouse) Date: Fri, 2 Jan 2015 09:58:11 -0800 Subject: [Bro] (no subject) Message-ID: Happy New Year Everyone!!! Has anyone ever seen the following error before? Email alerts that come in look like this: Subject: [Bro] cron: stats-to-csv failed Body: stats-to-csv failed -- [Automatically generated.] I started receiving these yesterday. They come in every 5 minutes and I've never received them before yesterday. Bro is running fine, my system is completely updated and everything looks good when I run a sostat (running Bro under Security Onion). Any insight is appreciated as I have no idea if they are something I should look into or not. Thanks Damon -------------- next part -------------- An HTML attachment was scrubbed...
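On Brian's question at the top of this digest: inside a table-expiration function there is no connection record to attach, so one common approach is to fill the notice's $src field from the table index instead of $conn. A minimal, hypothetical sketch follows; the notice type, message, and table layout are invented for illustration and will differ from the actual CrowdStrike script:

```bro
# Hypothetical sketch: raise a notice from a table-expiration function.
# No conn record exists here, so populate $src from the index directly;
# downstream consumers (e.g. ELSA) can then group on that address.
function alerts_out(t: table[addr] of set[string], idx: addr): interval
    {
    NOTICE([$note=Correlation::Alerts_Correlated,  # invented notice type
            $src=idx,
            $msg=fmt("%d correlated alerts for %s", |t[idx]|, idx)]);
    return 0 secs;
    }
```

The notice framework's $src field exists precisely for cases where only an address, not a full connection, is known.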
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150102/6dc3054f/attachment.html From dnthayer at illinois.edu Fri Jan 2 14:16:19 2015 From: dnthayer at illinois.edu (Thayer, Daniel N) Date: Fri, 2 Jan 2015 22:16:19 +0000 Subject: [Bro] (no subject) In-Reply-To: References: Message-ID: <8F865DA62E66F543B6104A2835719CF93064C24B@CITESMBX5.ad.uillinois.edu> The stats-to-csv script creates files with a ".csv" file extension in the directory <prefix>/logs/stats/www/ (where <prefix> is the Bro install directory). In order for this script to work, it needs to read two files: <prefix>/spool/stats.log and <prefix>/logs/stats/meta.dat From: bro-bounces at bro.org [bro-bounces at bro.org] on behalf of Damon Rouse [damonrouse at gmail.com] Sent: Friday, January 02, 2015 11:58 AM To: bro at bro-ids.org Subject: [Bro] (no subject) Happy New Year Everyone!!! Has anyone ever seen the following error before? Email alerts that come in look like this: Subject: [Bro] cron: stats-to-csv failed Body: stats-to-csv failed -- [Automatically generated.] I started receiving these yesterday. They come in every 5 minutes and I've never received them before yesterday. Bro is running fine, my system is completely updated and everything looks good when I run a sostat (running BRO under Security Onion). Any insight is appreciated as I have no idea if they are something I should look into or not. Thanks Damon From wren3 at illinois.edu Sat Jan 3 20:25:50 2015 From: wren3 at illinois.edu (Ren, Wenyu) Date: Sun, 4 Jan 2015 04:25:50 +0000 Subject: [Bro] Question about the Intelligence framework Message-ID: Dear all, I am trying to extend the current Intelligence framework to support an indicator type of my own. I am wondering how to inform the Intelligence framework that data of my own type has been discovered and its presence should be checked within the intelligence data set. Do you know in which file the code for the currently supported indicator types is located?
The documentation for the Intelligence Framework mentioned some "package of hook scripts". Where can I find those scripts? Thanks a lot, Wenyu From jburke at wapacklabs.com Mon Jan 5 05:03:57 2015 From: jburke at wapacklabs.com (Jesse V. Burke) Date: Mon, 05 Jan 2015 08:03:57 -0500 Subject: [Bro] Basic Cluster Install Issues Message-ID: <54AA8BBD.1000402@wapacklabs.com> Hello All, I am currently trying to configure bro-2.3.1 for a basic cluster as outlined on bro.org. I have a physical box with the IP 192.168.1.144 (Ubuntu 14.04 server) and a VM (Ubuntu 12.04 LTS) with the IP 192.168.1.107. I have tried using both the root user with ssh keys generated and exchanged between the machines and a 'bro' user -- after changing the ownership of /nsm/bro to that user. I can execute /nsm/bro/bin/broctl and perform an install fine (the proxy and manager boot), but worker-1 remains stuck initializing. Then when I run a status, worker-1 states it's crashed. Running a diag shows: /nsm/bro/bin/bro: 1: /nsm/bro/bin/bro: Syntax error: Unterminated quoted string /nsm/bro/bin/bro: 1: /nsm/bro/bin/bro: ELF: not found The second line has some funny characters in it, which makes me think I might be missing a dependency. Has anyone seen this or know of a quick fix? Literally all I did was follow the guide: commented out the stand-alone stuff in node.cfg, un-commented the cluster stuff, set my hosts to the IPs, exchanged SSH keys, ran broctl install, then status to see nothing working. Thanks guys! From seth at icir.org Mon Jan 5 07:12:56 2015 From: seth at icir.org (Seth Hall) Date: Mon, 5 Jan 2015 10:12:56 -0500 Subject: [Bro] Question about the Intelligence framework In-Reply-To: References: Message-ID: > On Jan 3, 2015, at 11:25 PM, Ren, Wenyu wrote: > > I am trying to extend the current Intelligence framework to support some indicator of my own type. Cool! What's the type?
If it's a fairly generic type, it could probably make sense to include it in Bro for the next release so that people can just import data for that type and have it "automatically" work. :) > Do you know in which file the code for the currently supported indicator types is located? The documentation for the Intelligence Framework mentioned some "package of hook scripts". Where can I find those scripts? Yes, you can find them in <prefix>/share/bro/policy/frameworks/intel/seen/ The scripts in that directory send data into the intel framework to be checked against the loaded intelligence data sets. .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From seth at icir.org Mon Jan 5 07:19:43 2015 From: seth at icir.org (Seth Hall) Date: Mon, 5 Jan 2015 10:19:43 -0500 Subject: [Bro] Basic Cluster Install Issues In-Reply-To: <54AA8BBD.1000402@wapacklabs.com> References: <54AA8BBD.1000402@wapacklabs.com> Message-ID: > On Jan 5, 2015, at 8:03 AM, Jesse V. Burke wrote: > > /nsm/bro/bin/bro: 1: /nsm/bro/bin/bro: Syntax error: Unterminated quoted > string > /nsm/bro/bin/bro: 1: /nsm/bro/bin/bro: ELF: not found I'm curious whether your Bro binary isn't getting copied over to your worker node correctly. I would take a look at your Bro binary to see if it is in fact a binary. :) Could you also send me your node.cfg off list? .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From wren3 at illinois.edu Mon Jan 5 17:57:32 2015 From: wren3 at illinois.edu (Ren, Wenyu) Date: Tue, 6 Jan 2015 01:57:32 +0000 Subject: [Bro] Question about the Reducer in the summary statistics framework Message-ID: Dear all, I have a question about the reducer in the summary statistics framework. I have multiple reducers which have the same epoch. However, I would like them to run in a certain sequence.
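Returning to the intel framework question earlier in this digest: the shipped "seen" scripts Seth points to all follow one small pattern, sketched below. This is a hedged approximation of the Bro 2.x API (the event signature and record fields are from that era), not a drop-in replacement for the real scripts in that directory:

```bro
@load base/frameworks/intel
@load policy/frameworks/intel/seen

# Sketch of how the "seen" scripts work: watch an event, wrap the
# observed value in an Intel::Seen record, and hand it to Intel::seen()
# for matching against the loaded intelligence data sets.
event dns_request(c: connection, msg: dns_msg, query: string, qtype: count, qclass: count)
    {
    Intel::seen([$indicator=query,
                 $indicator_type=Intel::DOMAIN,
                 $conn=c,
                 $where=DNS::IN_REQUEST]);
    }
```

A custom indicator type would mean adding a new Intel::Type value and a script like this that calls Intel::seen() whenever data of that type is observed.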
For example, I would like the running sequence reducer1->reducer2->reducer3. I still want them to have the same epoch and remain synchronized. Does anyone know how that can be achieved? Thanks a lot. Best, Wenyu From seth at icir.org Mon Jan 5 18:16:48 2015 From: seth at icir.org (Seth Hall) Date: Mon, 5 Jan 2015 21:16:48 -0500 Subject: [Bro] Question about the Reducer in the summary statistics framework In-Reply-To: References: Message-ID: > On Jan 5, 2015, at 8:57 PM, Ren, Wenyu wrote: > > I have a question about the reducer in the summary statistics framework. I have multiple reducers which have the same epoch. However, I would like them to run in a certain sequence. For example, I would like the running sequence reducer1->reducer2->reducer3. I still want them to have the same epoch and remain synchronized. Does anyone know how that can be achieved? Are these built-in reducers or did you write your own? I'm curious as to why you need them to run in a certain order. .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From seth at icir.org Tue Jan 6 06:06:01 2015 From: seth at icir.org (Seth Hall) Date: Tue, 6 Jan 2015 09:06:01 -0500 Subject: [Bro] Question about the Reducer in the summary statistics framework In-Reply-To: References: <, <>> Message-ID: <2F0C7AF2-D3B6-491A-8F4B-0E92459B90E8@icir.org> > On Jan 5, 2015, at 10:11 PM, Ren, Wenyu wrote: > > They are my own reducers. I want them to run in a certain order because the reducers will construct a multi-level data structure together. Each reducer is responsible for one level and I want the upper levels to be constructed first. I think you're talking about calculations and not reducers? Take a look at the variance calculation:
# Reduced priority since this depends on the average
hook compose_resultvals_hook(result: ResultVal, rv1: ResultVal, rv2: ResultVal) &priority=-5
You can set the priority of your composition hook to force the values to compose in a specific order. .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From seth at icir.org Tue Jan 6 13:24:16 2015 From: seth at icir.org (Seth Hall) Date: Tue, 6 Jan 2015 16:24:16 -0500 Subject: [Bro] Question about the Reducer in the summary statistics framework In-Reply-To: References: <, <>> <, <2F0C7AF2-D3B6-491A-8F4B-0E92459B90E8@icir.org> <>> Message-ID: > On Jan 6, 2015, at 4:05 PM, Ren, Wenyu wrote: > > SumStats::create([$name="sender-counters", > $epoch=sample_interval, $reducers=set(r1), > $epoch_result(ts: time, key: SumStats::Key, result: SumStats::Result) = > { > #different construction for different levels > }]); > > Can I also use the priority for this function and where should I put the "&priority=x"?  When you say "reducer", do you really mean "calculation"? Your example has only a single reducer attached to it. That result value in the callback will have the results for any reducers and their associated calculations in it. .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From wren3 at illinois.edu Tue Jan 6 17:53:36 2015 From: wren3 at illinois.edu (Ren, Wenyu) Date: Wed, 7 Jan 2015 01:53:36 +0000 Subject: [Bro] How to update the epoch in SumStats on the run Message-ID: Dear all, Is there any way to modify the epoch value of the SumStats while it is running? I want the sampling frequency to adapt to changes in the traffic volume.
Thanks a lot, Wenyu From wren3 at illinois.edu Tue Jan 6 19:06:23 2015 From: wren3 at illinois.edu (Ren, Wenyu) Date: Wed, 7 Jan 2015 03:06:23 +0000 Subject: [Bro] How to update the epoch in SumStats on the run In-Reply-To: References: Message-ID: Solved, never mind. Wenyu ________________________________________ From: bro-bounces at bro.org [bro-bounces at bro.org] on behalf of Ren, Wenyu [wren3 at illinois.edu] Sent: Tuesday, January 06, 2015 7:53 PM To: bro at bro.org Subject: [Bro] How to update the epoch in SumStats on the run Dear all, Is there any way to modify the epoch value of the SumStats while it is running? I want the sampling frequency to adapt to changes in the traffic volume. Thanks a lot, Wenyu _______________________________________________ Bro mailing list bro at bro-ids.org http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From cbakkers at yahoo.de Wed Jan 7 02:04:01 2015 From: cbakkers at yahoo.de (coen bakkers) Date: Wed, 7 Jan 2015 10:04:01 +0000 (UTC) Subject: [Bro] Bro with 10Gb NIC's or higher Message-ID: <1265785523.5697897.1420625041610.JavaMail.yahoo@jws11141.mail.ir2.yahoo.com> Does anyone have experience with higher speed NIC's and Bro? Will it sustain 10Gb speeds or more provided the hardware is spec'd appropriately? regards, Coen -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150107/45c76882/attachment.html From vitologrillo at gmail.com Wed Jan 7 07:17:23 2015 From: vitologrillo at gmail.com (Vito Logrillo) Date: Wed, 7 Jan 2015 16:17:23 +0100 Subject: [Bro] Differences between conn.log and known_services.log Message-ID: Hi, conn.log and known_services.log have a field named "service": sometimes this field is empty in conn.log but not in known_services.log... Why? This field should be processed in the same way by the two logs... or not?
Thanks, Vito From seth at icir.org Wed Jan 7 09:41:10 2015 From: seth at icir.org (Seth Hall) Date: Wed, 7 Jan 2015 12:41:10 -0500 Subject: [Bro] Differences between conn.log and known_services.log In-Reply-To: References: Message-ID: <13F1D97B-1C5A-4CBF-AFAD-80E424521565@icir.org> > On Jan 7, 2015, at 10:17 AM, Vito Logrillo wrote: > > conn.log and known_services.log have a field named "service": > sometimes this field is empty in conn.log but not in known_services.log... > Why? It's due to what is actually being logged in both of those logs. conn.log has information per-connection, so you can imagine that someone might connect to a host and not actually speak the protocol that the server speaks, and we don't detect any protocol. known_services.log is generally trying to figure out what protocol a host-port pair speaks and logs that. If no protocol is detected, we try to delay logging the fact that the port is held open in the hopes that a better connection will happen later. Make sense? .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From doris at bro.org Wed Jan 7 10:43:35 2015 From: doris at bro.org (Doris Schioberg) Date: Wed, 07 Jan 2015 10:43:35 -0800 Subject: [Bro] Bro4Pros: The first Bro workshop for advanced users Message-ID: <54AD7E57.2090702@bro.org> The Bro Team is happy to announce our first workshop for advanced Bro users, "Bro4Pros". Going beyond the introductory level of our regular BroCons, this workshop aims at users who are already using Bro on a daily basis, feel comfortable customizing its configuration, and have written a few scripts of their own already. Bro4Pros will take place on February 18 & 19, 2015, in San Francisco, CA. We are grateful to OpenDNS for hosting us at their headquarters on 135 Bluxome St. Seating is very limited as this will be a smaller, more interactive event. Act fast to secure your registration.
For more details please refer to https://bro.org/community/bro4pros2015.html. Please RSVP here: https://www.regonline.com/bro4pros2015. We thank our hosts OpenDNS for sponsoring the event. Looking forward to seeing you, The Bro Team -- Doris Schioberg Bro Outreach, Training, and Education Coordinator International Computer Science Institute (ICSI Berkeley) Phone: +1 (510) 289-8406 * doris at bro.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 882 bytes Desc: OpenPGP digital signature Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150107/a6717767/attachment.bin From vitologrillo at gmail.com Thu Jan 8 01:45:43 2015 From: vitologrillo at gmail.com (Vito Logrillo) Date: Thu, 8 Jan 2015 10:45:43 +0100 Subject: [Bro] Differences between conn.log and known_services.log In-Reply-To: <13F1D97B-1C5A-4CBF-AFAD-80E424521565@icir.org> References: <13F1D97B-1C5A-4CBF-AFAD-80E424521565@icir.org> Message-ID: Hi Seth, thanks for your reply. Is it correct to say that the difference between conn.log and known_services.log is that conn.log is based on a real-time analysis and known_services.log is based on a delayed analysis? Is that right or not? Another question: if known_services identifies a service on an addr/port, is that information later used by conn.log or not? Thanks Vito 2015-01-07 18:41 GMT+01:00 Seth Hall : > >> On Jan 7, 2015, at 10:17 AM, Vito Logrillo wrote: >> >> conn.log and known_services.log have a field named "service": >> sometimes this field is empty in conn.log but not in known_services.log... >> Why? > > It's due to what is actually being logged in both of those logs. conn.log has information per-connection, so you can imagine that someone might connect to a host and not actually speak the protocol that the server speaks, and we don't detect any protocol.
known_services.log is generally trying to figure out what protocol a host-port pair speaks and logs that. If no protocol is detected, we try to delay logging the fact that the port is held open in the hopes that a better connection will happen later. > > Make sense? > > .Seth > > -- > Seth Hall > International Computer Science Institute > (Bro) because everyone has a network > http://www.bro.org/ > From jdonnelly at dyn.com Thu Jan 8 06:26:48 2015 From: jdonnelly at dyn.com (John Donnelly) Date: Thu, 8 Jan 2015 08:26:48 -0600 Subject: [Bro] print statement redirection Message-ID: Hi, I am trying to locate how the "print" and "print fmt" statements work in Bro scripts; where are they implemented in source? Specifically, I am trying to determine if they always use stdout and stderr as inherited by src/main.cc Any suggestions welcome! -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150108/89be68f7/attachment.html From seth at icir.org Thu Jan 8 06:39:09 2015 From: seth at icir.org (Seth Hall) Date: Thu, 8 Jan 2015 09:39:09 -0500 Subject: [Bro] print statement redirection In-Reply-To: References: Message-ID: > On Jan 8, 2015, at 9:26 AM, John Donnelly wrote: > > I am trying to locate how the "print" and "print fmt" statements work in Bro scripts; where are they implemented in source? Specifically, I am trying to determine if they always use stdout and stderr as inherited by src/main.cc Is this sort of what you're looking for?
global my_file = open("test.txt"); event bro_init() { print my_file, "woo!"; } .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From jdonnelly at dyn.com Thu Jan 8 06:47:51 2015 From: jdonnelly at dyn.com (John Donnelly) Date: Thu, 8 Jan 2015 08:47:51 -0600 Subject: [Bro] print statement redirection In-Reply-To: References: Message-ID: That is interesting, but it doesn't really satisfy my curiosity. Does the scripting I/O not use stderr + stdout? I am interested in getting Bro to redirect to syslog by inheriting file descriptors on startup. On Thu, Jan 8, 2015 at 8:39 AM, Seth Hall wrote: > > > On Jan 8, 2015, at 9:26 AM, John Donnelly wrote: > > > > I am trying to locate how the "print" and "print fmt" statements > work in bro scripts ; where are they implemented in source ? Specifically > I am trying to determine if they always use stdout and stderr as > inherited by src/main.cc > > Is this sort of what you're looking for? > > global my_file = open("test.txt"); > event bro_init() > { > print my_file, "woo!"; > } > > .Seth > > -- > Seth Hall > International Computer Science Institute > (Bro) because everyone has a network > http://www.bro.org/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150108/a59e92b0/attachment.html From seth at icir.org Thu Jan 8 07:03:05 2015 From: seth at icir.org (Seth Hall) Date: Thu, 8 Jan 2015 10:03:05 -0500 Subject: [Bro] Differences between conn.log and known_services.log In-Reply-To: References: <13F1D97B-1C5A-4CBF-AFAD-80E424521565@icir.org> Message-ID: <833193C8-3177-43CB-80D8-53E787353707@icir.org> > On Jan 8, 2015, at 4:45 AM, Vito Logrillo wrote: > > Is it correct to say that the difference > between conn.log and known_services.log is that conn.log is based on a > real-time analysis and and known_services.log is based on a delayed > analysis?is it right or not?
Technically that's correct, but I would say that it's more accurate to say that the two logs are logging different things. conn.log is logging attributes of connections, and known_services.log is logging aspects of host/port pairs. > Another question: if known_services identifies a service on a > addr/port, that information is later used by conn.log or not? No, it wouldn't make sense to do that. The service field in conn.log is solely showing you what analyzer(s) Bro used successfully to analyze the traffic on that particular connection. .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From seth at icir.org Thu Jan 8 07:03:37 2015 From: seth at icir.org (Seth Hall) Date: Thu, 8 Jan 2015 10:03:37 -0500 Subject: [Bro] print statement redirection In-Reply-To: References: Message-ID: > On Jan 8, 2015, at 9:47 AM, John Donnelly wrote: > > That is interesting, but it doesn't really satisfy my curiosity. Does the scripting I/O not use stderr + stdout? I am interested in getting Bro to redirect to syslog by inheriting file descriptors on startup. Ah, ok. I didn't understand where you are trying to get to in the end. It does use stdout if you don't send the content off to a certain location. I take it that you have a script you've written that prints? I'm asking since Bro doesn't print much to stdout or stderr by default, so I don't immediately see the utility of redirecting those to syslog. The implementation for the print statement can be found in Stmt.cc around line 259.
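To tie the print-redirection answers together: print writes to stdout unless its first argument is an open file handle, and for syslog specifically Bro has a built-in syslog() function, so no descriptor tricks are needed. A small sketch (Bro 2.x; the filename is arbitrary):

```bro
# Sketch of the three script-level output destinations discussed here.
global out = open("print_demo.log");

event bro_init()
    {
    print "goes to stdout";               # default destination
    print out, "goes to the file";        # explicit file handle
    syslog("goes to the system logger");  # built-in syslog() BIF
    }
```

This keeps the redirection decision in the script itself instead of relying on inherited file descriptors at startup.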
.Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From robin at icir.org Thu Jan 8 07:27:46 2015 From: robin at icir.org (Robin Sommer) Date: Thu, 8 Jan 2015 07:27:46 -0800 Subject: [Bro] print statement redirection In-Reply-To: References: Message-ID: <20150108152746.GB89287@icir.org> On Thu, Jan 08, 2015 at 08:47 -0600, John Donnelly wrote: > That is interesting , but it doesn't really satisfy my curiosity Does the > scripting I/O not use stderr + stdout ? By default, print does go to stdout. The code is in src/Stmt.cc, PrintStmt::DoExec(). > I am interested in getting bro to redirect to syslog by inheriting > file descriptors on startup . Just to be sure, you know there's a builtin function syslog(msg), right? Robin -- Robin Sommer * ICSI/LBNL * robin at icir.org * www.icir.org/robin From mike.patterson at uwaterloo.ca Thu Jan 8 07:28:55 2015 From: mike.patterson at uwaterloo.ca (Mike Patterson) Date: Thu, 8 Jan 2015 15:28:55 +0000 Subject: [Bro] Bro with 10Gb NIC's or higher In-Reply-To: <1265785523.5697897.1420625041610.JavaMail.yahoo@jws11141.mail.ir2.yahoo.com> References: <1265785523.5697897.1420625041610.JavaMail.yahoo@jws11141.mail.ir2.yahoo.com> Message-ID: <58DBF613-2EF8-48BB-BA2F-B4EECEF9D543@uwaterloo.ca> Succinctly, yes, although that provision is a big one. I'm running Bro on two 10 gig interfaces, an Intel X520 and an Endace DAG 9.2X2. Both perform reasonably well. Although my hardware is somewhat underspecced (Dell R710s of differing vintages), I still get tons of useful data. If your next question would be "how should I spec my hardware", that's quite difficult to answer because it depends on a lot. Get the hottest CPUs you can afford, with as many cores. If you're actually sustaining 10+Gb you'll probably want at least 20-30 cores. I'm sustaining 4.5Gb or so on 8 3.7Ghz cores, but Bro reports 10% or so loss. 
Note that some hardware configurations will limit the number of streams you can feed to Bro, e.g. my DAG can only produce 16 streams, so even if I had it in a 24 core box, I'd only be making use of 2/3 of my CPU. Mike > On Jan 7, 2015, at 5:04 AM, coen bakkers wrote: > > Does anyone have experience with higher speed NIC's and Bro? Will it sustain 10Gb speeds or more provided the hardware is spec'd appropriately? > > regards, > > Coen > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From vitologrillo at gmail.com Thu Jan 8 07:34:06 2015 From: vitologrillo at gmail.com (Vito Logrillo) Date: Thu, 8 Jan 2015 16:34:06 +0100 Subject: [Bro] Differences between conn.log and known_services.log In-Reply-To: <833193C8-3177-43CB-80D8-53E787353707@icir.org> References: <13F1D97B-1C5A-4CBF-AFAD-80E424521565@icir.org> <833193C8-3177-43CB-80D8-53E787353707@icir.org> Message-ID: Hi Seth, thanks for your reply, but I have some doubts; I'll try to explain better. Sometimes in conn.log I have an output like this: ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto service ....... xxx CYePUY1fgIZQcJHerb 10.0.1.2 40077 10.0.5.6 67 udp - ..... and in known_services.log something like: ts host port_num port_proto service xxx 10.0.5.6 67 udp DHCP (ip addrs are totally arbitrary) Why do you think that a log like below is totally wrong? ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto service ....... xxx CYePUY1fgIZQcJHerb 10.0.1.2 40077 10.0.5.6 67 udp DHCP ..... In this case, I've used information present in known_services.log to integrate the info present in conn.log, so the service field in conn.log is not empty. What's wrong with this?
Regards, Vito 2015-01-08 16:03 GMT+01:00 Seth Hall : > >> On Jan 8, 2015, at 4:45 AM, Vito Logrillo wrote: >> >> Is it correct to say that the difference >> between conn.log and known_services.log is that conn.log is based on a >> real-time analysis and known_services.log is based on a delayed >> analysis? Is that right or not? > > Technically that's correct, but I would say that it's more accurate to say that the two logs are logging different things. conn.log is logging attributes of connections, and known_services.log is logging aspects of host/port pairs. > >> Another question: if known_services identifies a service on a >> addr/port, that information is later used by conn.log or not? > > No, it wouldn't make sense to do that. The service field in conn.log is solely showing you what analyzer(s) Bro used successfully to analyze the traffic on that particular connection. > > .Seth > > -- > Seth Hall > International Computer Science Institute > (Bro) because everyone has a network > http://www.bro.org/ > From gl89 at cornell.edu Thu Jan 8 08:24:09 2015 From: gl89 at cornell.edu (Glenn Forbes Fleming Larratt) Date: Thu, 8 Jan 2015 11:24:09 -0500 (EST) Subject: [Bro] [maintenance] what would cause a backlog/erasure in "...logs/current"? Message-ID: Folks, My Bro cluster is happily flagging and accumulating data - but: 1. The last two hourly cycles left uncompressed logfiles in /opt/app/bro/logs/current: : : -rw-r--r-- 1 bro bro 73529 Jan 8 11:00 reporter-15-01-08_10.00.00.log -rw-r--r-- 1 bro bro 749059 Jan 8 11:00 tunnel-15-01-08_10.00.00.log -rw-r--r-- 1 bro bro 2474781 Jan 8 11:00 weird-15-01-08_10.00.00.log -rw-r--r-- 1 bro bro 17062559659 Jan 8 10:00 conn-15-01-08_09.00.00.log -rw-r--r-- 1 bro bro 2260979370 Jan 8 10:00 files-15-01-08_09.00.00.log -rw-r--r-- 1 bro bro 4942559737 Jan 8 10:00 http-15-01-08_09.00.00.log : etc. : 2. No gzip processes were in evidence; 3.
Figuring it might be the appropriate proverbial kick in the pants, I did a "broctl restart", which ran cleanly - and to all appearances, *erased* the older uncompressed files in question. I now have a hole where the data from 10:00-12:00 today used to be - can anyone shed light on what's going on here? Thanks, -- Glenn Forbes Fleming Larratt Cornell University IT Security Office From jdonnelly at dyn.com Thu Jan 8 08:31:37 2015 From: jdonnelly at dyn.com (John Donnelly) Date: Thu, 8 Jan 2015 10:31:37 -0600 Subject: [Bro] Bro with 10Gb NIC's or higher In-Reply-To: <58DBF613-2EF8-48BB-BA2F-B4EECEF9D543@uwaterloo.ca> References: <1265785523.5697897.1420625041610.JavaMail.yahoo@jws11141.mail.ir2.yahoo.com> <58DBF613-2EF8-48BB-BA2F-B4EECEF9D543@uwaterloo.ca> Message-ID: How does one know if bro is dropping (10%) of messages ? On Thu, Jan 8, 2015 at 9:28 AM, Mike Patterson wrote: > Succinctly, yes, although that provision is a big one. > > I'm running Bro on two 10 gig interfaces, an Intel X520 and an Endace DAG > 9.2X2. Both perform reasonably well. Although my hardware is somewhat > underspecced (Dell R710s of differing vintages), I still get tons of useful > data. > > If your next question would be "how should I spec my hardware", that's > quite difficult to answer because it depends on a lot. Get the hottest CPUs > you can afford, with as many cores. If you're actually sustaining 10+Gb > you'll probably want at least 20-30 cores. I'm sustaining 4.5Gb or so on 8 > 3.7Ghz cores, but Bro reports 10% or so loss. Note that some hardware > configurations will limit the number of streams you can feed to Bro, eg my > DAG can only produce 16 streams so even if I had it in a 24 core box, I'd > only be making use of 2/3 of my CPU. > > Mike > > > On Jan 7, 2015, at 5:04 AM, coen bakkers wrote: > > > > Does anyone have experience with higher speed NIC's and Bro? Will it > sustain 10Gb speeds or more provide the hardware is spec'd appropriately? 
> > > > regards, > > > > Coen > > _______________________________________________ > > Bro mailing list > > bro at bro-ids.org > > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150108/8255fd32/attachment.html From mike.patterson at uwaterloo.ca Thu Jan 8 08:32:12 2015 From: mike.patterson at uwaterloo.ca (Mike Patterson) Date: Thu, 8 Jan 2015 16:32:12 +0000 Subject: [Bro] Bro with 10Gb NIC's or higher In-Reply-To: References: <1265785523.5697897.1420625041610.JavaMail.yahoo@jws11141.mail.ir2.yahoo.com> <58DBF613-2EF8-48BB-BA2F-B4EECEF9D543@uwaterloo.ca> Message-ID: <7AF43436-63D7-40F0-8AFB-654E5EE74089@uwaterloo.ca> capture_loss log, not enabled by default. -- The most difficult thing in the world is to know how to do a thing and to watch someone else doing it wrong, without commenting. - T.H. White > On Jan 8, 2015, at 11:31 AM, John Donnelly wrote: > > How does one know if bro is dropping (10%) of messages ? > > On Thu, Jan 8, 2015 at 9:28 AM, Mike Patterson wrote: > Succinctly, yes, although that provision is a big one. > > I'm running Bro on two 10 gig interfaces, an Intel X520 and an Endace DAG 9.2X2. Both perform reasonably well. Although my hardware is somewhat underspecced (Dell R710s of differing vintages), I still get tons of useful data. > > If your next question would be "how should I spec my hardware", that's quite difficult to answer because it depends on a lot. Get the hottest CPUs you can afford, with as many cores. If you're actually sustaining 10+Gb you'll probably want at least 20-30 cores. I'm sustaining 4.5Gb or so on 8 3.7Ghz cores, but Bro reports 10% or so loss. 
Note that some hardware configurations will limit the number of streams you can feed to Bro, eg my DAG can only produce 16 streams so even if I had it in a 24 core box, I'd only be making use of 2/3 of my CPU. > > Mike > > > On Jan 7, 2015, at 5:04 AM, coen bakkers wrote: > > > > Does anyone have experience with higher speed NIC's and Bro? Will it sustain 10Gb speeds or more provide the hardware is spec'd appropriately? > > > > regards, > > > > Coen > > _______________________________________________ > > Bro mailing list > > bro at bro-ids.org > > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > From latt0050 at umn.edu Thu Jan 8 08:41:01 2015 From: latt0050 at umn.edu (Brandon Lattin) Date: Thu, 8 Jan 2015 10:41:01 -0600 Subject: [Bro] Bro with 10Gb NIC's or higher In-Reply-To: References: <1265785523.5697897.1420625041610.JavaMail.yahoo@jws11141.mail.ir2.yahoo.com> <58DBF613-2EF8-48BB-BA2F-B4EECEF9D543@uwaterloo.ca> Message-ID: Turn on the capture-loss script by adding the following to your local.bro: @load misc/capture-loss On Thu, Jan 8, 2015 at 10:31 AM, John Donnelly wrote: > How does one know if bro is dropping (10%) of messages ? > > On Thu, Jan 8, 2015 at 9:28 AM, Mike Patterson < > mike.patterson at uwaterloo.ca> wrote: > >> Succinctly, yes, although that provision is a big one. >> >> I'm running Bro on two 10 gig interfaces, an Intel X520 and an Endace DAG >> 9.2X2. Both perform reasonably well. Although my hardware is somewhat >> underspecced (Dell R710s of differing vintages), I still get tons of useful >> data. >> >> If your next question would be "how should I spec my hardware", that's >> quite difficult to answer because it depends on a lot. Get the hottest CPUs >> you can afford, with as many cores. If you're actually sustaining 10+Gb >> you'll probably want at least 20-30 cores. 
I'm sustaining 4.5Gb or so on 8 >> 3.7Ghz cores, but Bro reports 10% or so loss. Note that some hardware >> configurations will limit the number of streams you can feed to Bro, eg my >> DAG can only produce 16 streams so even if I had it in a 24 core box, I'd >> only be making use of 2/3 of my CPU. >> >> Mike >> >> > On Jan 7, 2015, at 5:04 AM, coen bakkers wrote: >> > >> > Does anyone have experience with higher speed NIC's and Bro? Will it >> sustain 10Gb speeds or more provide the hardware is spec'd appropriately? >> > >> > regards, >> > >> > Coen >> > _______________________________________________ >> > Bro mailing list >> > bro at bro-ids.org >> > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >> >> >> _______________________________________________ >> Bro mailing list >> bro at bro-ids.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >> > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -- Brandon Lattin Security Analyst University of Minnesota - University Information Security Office: 612-626-6672 -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150108/1ce9a78a/attachment.html From gfaulkner.nsm at gmail.com Thu Jan 8 10:36:08 2015 From: gfaulkner.nsm at gmail.com (Gary Faulkner) Date: Thu, 08 Jan 2015 12:36:08 -0600 Subject: [Bro] [maintenance] what would cause a backlog/erasure in "...logs/current"? In-Reply-To: References: Message-ID: <54AECE18.6020709@gmail.com> Hello, I've seen this sort of thing happen when one or more log files get really large, typically dns.log, conn.log, or http.log. It seems to partially rotate the logs, perform the initial rename, but not compress them and do the second renaming (you'll notice that the naming convention differs from the already moved and compressed logs). 
You may also notice you stop getting connection summaries when this happens. I suspect that part of the post-processing never happens or never completes. Many times I find the logs end up really big due to one or more misbehaving hosts, such as open DNS resolvers participating in a DDOS or a compromised host aggressively scanning/attacking something. Your best bet is to manually move (and compress) the logs that missed rotation, then restart Bro. If you restart Bro without first moving the logs that didn't fully get processed on prior rotations, I've noticed those logs tend to simply get deleted. The only workaround I've found is to address the traffic that is causing the logs to explode. Regards, Gary On 1/8/2015 10:24 AM, Glenn Forbes Fleming Larratt wrote: > Folks, > > My Bro cluster is happily flagging and accumulating data - but: > > 1. The last two hourly cycles left uncompressed logfiles in > /opt/app/bro/logs/current: > > : > : > -rw-r--r-- 1 bro bro 73529 Jan 8 11:00 reporter-15-01-08_10.00.00.log > -rw-r--r-- 1 bro bro 749059 Jan 8 11:00 tunnel-15-01-08_10.00.00.log > -rw-r--r-- 1 bro bro 2474781 Jan 8 11:00 weird-15-01-08_10.00.00.log > -rw-r--r-- 1 bro bro 17062559659 Jan 8 10:00 conn-15-01-08_09.00.00.log > -rw-r--r-- 1 bro bro 2260979370 Jan 8 10:00 files-15-01-08_09.00.00.log > -rw-r--r-- 1 bro bro 4942559737 Jan 8 10:00 http-15-01-08_09.00.00.log > : etc. > : > > 2. No gzip processes were in evidence; > > 3. Figuring it might be the appropriate proverbial kick in the pants, I > did a "broctl restart", which ran cleanly - and to all appearances, > *erased* the older uncompressed files in question. > > I now have a hole where the data from 10:00-12:00 today used to be - can > anyone shed light on what's going on here? > > Thanks, > From robin at icir.org Thu Jan 8 11:37:13 2015 From: robin at icir.org (Robin Sommer) Date: Thu, 8 Jan 2015 11:37:13 -0800 Subject: [Bro] [maintenance] what would cause a backlog/erasure in "...logs/current"?
In-Reply-To: <54AECE18.6020709@gmail.com> References: <54AECE18.6020709@gmail.com> Message-ID: <20150108193713.GC36431@icir.org> On Thu, Jan 08, 2015 at 12:36 -0600, Gary Faulkner wrote: > If you restart bro without moving the logs the logs that didn't fully > get processed on prior rotations first I've noticed those logs tend to > simply get deleted. Not good. Broctl should always compress and archive any left-over logs, not simply delete them on restart. If it doesn't, that's a bug. For fixing that it would be very helpful to have a way to reproduce the problem. If anybody knows how to do that, please file a corresponding ticket. Robin -- Robin Sommer * ICSI/LBNL * robin at icir.org * www.icir.org/robin From hardenrm at uchicago.edu Fri Jan 9 08:35:31 2015 From: hardenrm at uchicago.edu (Ryan Harden) Date: Fri, 9 Jan 2015 16:35:31 +0000 Subject: [Bro] Bro Cluster Specification Hints Message-ID: I'm working on speccing a cluster and was curious if there is a knowledgebase or something that lists the gotchas and caveats like the one given in the 10G NIC thread (DAG card 16 stream limitation). Having something to refer to when building a new cluster would save lots of time/money in experimentation and tweaking when more optimal systems/NICs could have been purchased upfront. Such as: Is there a 10G NIC that most have success with? Specific drivers/kernel version? Are there any platforms to steer away from as far as server/motherboard/bus/CPU/etc? Many moons ago there used to be a ###Mb/Core rule of thumb, does something like that exist? Has there been any success with blade chassis versus standalone?
Thanks /Ryan Ryan Harden Research and Advanced Networking Architect University of Chicago - AS160 P: 773-834-5441 From donaldson8 at llnl.gov Fri Jan 9 10:03:38 2015 From: donaldson8 at llnl.gov (Donaldson, John) Date: Fri, 9 Jan 2015 18:03:38 +0000 Subject: [Bro] Bro with 10Gb NIC's or higher In-Reply-To: <58DBF613-2EF8-48BB-BA2F-B4EECEF9D543@uwaterloo.ca> References: <1265785523.5697897.1420625041610.JavaMail.yahoo@jws11141.mail.ir2.yahoo.com> <58DBF613-2EF8-48BB-BA2F-B4EECEF9D543@uwaterloo.ca> Message-ID: I'd agree with all of this. We're monitoring a few 10Gbps network segments with DAG 9.2X2s, too. I'll add in that, when processing that much traffic on a single device, you'll definitely not want to skimp on memory. I'm not sure which configurations you're using that might be limiting you to 16 streams -- we've run with at least 24 streams, and (at least with the 9.2X2s) you should be able to work with up to 32 receive streams. v/r John Donaldson > -----Original Message----- > From: bro-bounces at bro.org [mailto:bro-bounces at bro.org] On Behalf Of > Mike Patterson > Sent: Thursday, January 08, 2015 7:29 AM > To: coen bakkers > Cc: bro at bro.org > Subject: Re: [Bro] Bro with 10Gb NIC's or higher > > Succinctly, yes, although that provision is a big one. > > I'm running Bro on two 10 gig interfaces, an Intel X520 and an Endace DAG > 9.2X2. Both perform reasonably well. Although my hardware is somewhat > underspecced (Dell R710s of differing vintages), I still get tons of useful data. > > If your next question would be "how should I spec my hardware", that's > quite difficult to answer because it depends on a lot. Get the hottest CPUs > you can afford, with as many cores. If you're actually sustaining 10+Gb you'll > probably want at least 20-30 cores. I'm sustaining 4.5Gb or so on 8 3.7Ghz > cores, but Bro reports 10% or so loss.
Note that some hardware > configurations will limit the number of streams you can feed to Bro, eg my > DAG can only produce 16 streams so even if I had it in a 24 core box, I'd > only be making use of 2/3 of my CPU. > > Mike > > > On Jan 7, 2015, at 5:04 AM, coen bakkers wrote: > > > > Does anyone have experience with higher speed NIC's and Bro? Will it > sustain 10Gb speeds or more provide the hardware is spec'd appropriately? > > > > regards, > > > > Coen > > _______________________________________________ > > Bro mailing list > > bro at bro-ids.org > > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From pooh_champ19 at yahoo.com Fri Jan 9 10:01:22 2015 From: pooh_champ19 at yahoo.com (pooja) Date: Fri, 9 Jan 2015 18:01:22 +0000 (UTC) Subject: [Bro] Query for converting bro captured traffic into KDD CUP dataset format Message-ID: hello, I am doing research work on intrusion detection systems. I am working on live traffic and for that I am using Bro. I have used the "bro -i eth0" command to capture traffic. Now I have to convert that into the KDD CUP dataset format. Can anyone please help me out with this task? If possible, can you provide me any script, code, command, or other resource for this? I would be grateful for any help, as this is the core part of my research work. Thanking you Pooja Champaneria From mike.patterson at uwaterloo.ca Fri Jan 9 10:20:17 2015 From: mike.patterson at uwaterloo.ca (Mike Patterson) Date: Fri, 9 Jan 2015 18:20:17 +0000 Subject: [Bro] Bro with 10Gb NIC's or higher In-Reply-To: References: <1265785523.5697897.1420625041610.JavaMail.yahoo@jws11141.mail.ir2.yahoo.com> <58DBF613-2EF8-48BB-BA2F-B4EECEF9D543@uwaterloo.ca> Message-ID: <7F7CA4D1-B39D-466B-876E-333A8A2CC980@uwaterloo.ca> You're right, it's 32 on mine. I posted some specs for my system a couple of years ago now, I think.
6-8GB per worker should give some headroom (my workers usually use about 5 apiece I think). Mike -- Simple, clear purpose and principles give rise to complex and intelligent behavior. Complex rules and regulations give rise to simple and stupid behavior. - Dee Hock > On Jan 9, 2015, at 1:03 PM, Donaldson, John wrote: > > I'd agree with all of this. We're monitoring a few 10Gbps network segments with DAG 9.2X2s, too. I'll add in that, when processing that much traffic on a single device, you'll definitely not want to skimp on memory. > > I'm not sure which configurations you're using that might be limiting you to 16 streams -- we're run with at least 24 streams, and (at least with the 9.2X2s) you should be able to work with up to 32 receive streams. > > v/r > > John Donaldson > >> -----Original Message----- >> From: bro-bounces at bro.org [mailto:bro-bounces at bro.org] On Behalf Of >> Mike Patterson >> Sent: Thursday, January 08, 2015 7:29 AM >> To: coen bakkers >> Cc: bro at bro.org >> Subject: Re: [Bro] Bro with 10Gb NIC's or higher >> >> Succinctly, yes, although that provision is a big one. >> >> I'm running Bro on two 10 gig interfaces, an Intel X520 and an Endace DAG >> 9.2X2. Both perform reasonably well. Although my hardware is somewhat >> underspecced (Dell R710s of differing vintages), I still get tons of useful data. >> >> If your next question would be "how should I spec my hardware", that's >> quite difficult to answer because it depends on a lot. Get the hottest CPUs >> you can afford, with as many cores. If you're actually sustaining 10+Gb you'll >> probably want at least 20-30 cores. I'm sustaining 4.5Gb or so on 8 3.7Ghz >> cores, but Bro reports 10% or so loss. Note that some hardware >> configurations will limit the number of streams you can feed to Bro, eg my >> DAG can only produce 16 streams so even if I had it in a 24 core box, I'd only >> be making use of 2/3 of my CPU. 
>> >> Mike >> >>> On Jan 7, 2015, at 5:04 AM, coen bakkers wrote: >>> >>> Does anyone have experience with higher speed NIC's and Bro? Will it >> sustain 10Gb speeds or more provide the hardware is spec'd appropriately? >>> >>> regards, >>> >>> Coen >>> _______________________________________________ >>> Bro mailing list >>> bro at bro-ids.org >>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >> >> >> _______________________________________________ >> Bro mailing list >> bro at bro-ids.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From asharma at lbl.gov Fri Jan 9 11:00:54 2015 From: asharma at lbl.gov (Aashish Sharma) Date: Fri, 9 Jan 2015 11:00:54 -0800 Subject: [Bro] Bro with 10Gb NIC's or higher In-Reply-To: <7F7CA4D1-B39D-466B-876E-333A8A2CC980@uwaterloo.ca> References: <1265785523.5697897.1420625041610.JavaMail.yahoo@jws11141.mail.ir2.yahoo.com> <58DBF613-2EF8-48BB-BA2F-B4EECEF9D543@uwaterloo.ca> <7F7CA4D1-B39D-466B-876E-333A8A2CC980@uwaterloo.ca> Message-ID: <20150109190051.GM14004@yaksha.lbl.gov> While we at LBNL continue to work towards formal documentation, I figured I'd reply now rather than cause further delays. Here is the 100G cluster setup we've done: - 5 nodes running 10 workers + 1 proxy each on them - 100G split by Arista to 5x10G - 10G on each node is further split by Myricom to 10x1G/worker, with shunting enabled !! Note: Scott Campbell did some very early work on the concept of shunting (http://dl.acm.org/citation.cfm?id=2195223.2195788) We are using the react-framework, written by Justin Azoff, to talk to the Arista. With shunting enabled, the cluster isn't even truly seeing 10G anymore. oh btw, capture-loss is a good policy to run for sure. With the above setup we get ~ 0.xx % packet drops.
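[Editor's note] The shunting idea described above can be sketched roughly as follows. This is a hypothetical illustration only, not the actual react-framework or Arista code; the threshold value and the helper names (`install_acl`, `remove_acl`, `on_stats`, `on_connection_end`) are all made up for the example:

```python
# Hypothetical sketch of flow shunting: once a connection's byte count crosses
# a threshold, stop feeding its bulk traffic to the workers by installing a
# per-4-tuple ACL on the tap switch, and remove the ACL when the flow ends.
SHUNT_THRESHOLD_BYTES = 50 * 1024 * 1024   # illustrative cutoff, not LBNL's value

shunted = set()  # 4-tuples currently ACL'd out on the switch

def install_acl(flow):
    """Placeholder for pushing an ACL to the switch (e.g. via its management API)."""
    print("shunt", flow)

def remove_acl(flow):
    print("unshunt", flow)

def on_stats(flow, total_bytes):
    """Called periodically with a (src, sport, dst, dport) tuple and bytes seen so far."""
    if total_bytes > SHUNT_THRESHOLD_BYTES and flow not in shunted:
        shunted.add(flow)
        install_acl(flow)

def on_connection_end(flow):
    """Clean up the ACL once the connection is over, so short/control traffic flows again."""
    if flow in shunted:
        shunted.discard(flow)
        remove_acl(flow)

# Example: a 200 MB flow gets shunted, then cleaned up when it ends.
big = ("10.0.0.1", 50000, "192.0.2.9", 443)
on_stats(big, 200 * 1024 * 1024)
on_connection_end(big)
```

The point of the design, as described in the message above, is that control packets keep reaching Bro (so connection state and logs stay accurate) while the bulk payload of elephant flows never hits the workers.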
(Depending on the kind of traffic you are monitoring, you may need slightly different shunting logic.) Here are the hardware specs per node: - Motherboard-SM, X9DRi-F - Intel E5-2643V2 3.5GHz Ivy Bridge (2x6 = 12 cores) - 128GB DDRIII 1600MHz ECC/REG - (8x16GB Modules Installed) - 10G-PCIE2-8C2-2S+; Myricom 10G "Gen2" (5 GT/s) PCI Express NIC with two SFP+ - Myricom 10G-SR Modules On the tapping side we have: - Arista 7504 (gets fed 100G TX/RX + backup and other 10Gb links) - Arista 7150 (symmetric hashing via DANZ - splitting tcp sessions 1/link - 5 links to nodes) On the Bro side: 5 nodes accepting 5 links from the 7150 Each node running 10 workers + 1 proxy Myricom splitting/load balancing to each worker on the node. Hope this helps, let us know if you have any further questions. Thanks, Aashish On Fri, Jan 09, 2015 at 06:20:17PM +0000, Mike Patterson wrote: >> You're right, it's 32 on mine. >> >> I posted some specs for my system a couple of years ago now, I think. >> >> 6-8GB per worker should give some headroom (my workers usually use about 5 apiece I think). >> >> Mike >> >> -- >> Simple, clear purpose and principles give rise to complex and >> intelligent behavior. Complex rules and regulations give rise >> to simple and stupid behavior. - Dee Hock >> >> > On Jan 9, 2015, at 1:03 PM, Donaldson, John wrote: >> > >> > I'd agree with all of this. We're monitoring a few 10Gbps network segments with DAG 9.2X2s, too. I'll add in that, when processing that much traffic on a single device, you'll definitely not want to skimp on memory. >> > >> > I'm not sure which configurations you're using that might be limiting you to 16 streams -- we're run with at least 24 streams, and (at least with the 9.2X2s) you should be able to work with up to 32 receive streams.
> > > > v/r > > > > John Donaldson > > > >> -----Original Message----- > >> From: bro-bounces at bro.org [mailto:bro-bounces at bro.org] On Behalf Of > >> Mike Patterson > >> Sent: Thursday, January 08, 2015 7:29 AM > >> To: coen bakkers > >> Cc: bro at bro.org > >> Subject: Re: [Bro] Bro with 10Gb NIC's or higher > >> > >> Succinctly, yes, although that provision is a big one. > >> > >> I'm running Bro on two 10 gig interfaces, an Intel X520 and an Endace DAG > >> 9.2X2. Both perform reasonably well. Although my hardware is somewhat > >> underspecced (Dell R710s of differing vintages), I still get tons of useful data. > >> > >> If your next question would be "how should I spec my hardware", that's > >> quite difficult to answer because it depends on a lot. Get the hottest CPUs > >> you can afford, with as many cores. If you're actually sustaining 10+Gb you'll > >> probably want at least 20-30 cores. I'm sustaining 4.5Gb or so on 8 3.7Ghz > >> cores, but Bro reports 10% or so loss. Note that some hardware > >> configurations will limit the number of streams you can feed to Bro, eg my > >> DAG can only produce 16 streams so even if I had it in a 24 core box, I'd only > >> be making use of 2/3 of my CPU. > >> > >> Mike > >> > >>> On Jan 7, 2015, at 5:04 AM, coen bakkers wrote: > >>> > >>> Does anyone have experience with higher speed NIC's and Bro? Will it > >> sustain 10Gb speeds or more provide the hardware is spec'd appropriately? 
> >>> > >>> regards, > >>> > >>> Coen > >>> _______________________________________________ > >>> Bro mailing list > >>> bro at bro-ids.org > >>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > >> > >> > >> _______________________________________________ > >> Bro mailing list > >> bro at bro-ids.org > >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -- Aashish Sharma (asharma at lbl.gov) Cyber Security, Lawrence Berkeley National Laboratory http://go.lbl.gov/pgp-aashish Office: (510)-495-2680 Cell: (510)-612-7971 -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 189 bytes Desc: not available Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150109/8ece2184/attachment.bin From luke at geekempire.com Fri Jan 9 11:11:54 2015 From: luke at geekempire.com (Mike Reeves) Date: Fri, 9 Jan 2015 14:11:54 -0500 Subject: [Bro] Bro with 10Gb NIC's or higher In-Reply-To: <7F7CA4D1-B39D-466B-876E-333A8A2CC980@uwaterloo.ca> References: <1265785523.5697897.1420625041610.JavaMail.yahoo@jws11141.mail.ir2.yahoo.com> <58DBF613-2EF8-48BB-BA2F-B4EECEF9D543@uwaterloo.ca> <7F7CA4D1-B39D-466B-876E-333A8A2CC980@uwaterloo.ca> Message-ID: In all of the 10G deployments I have done I always do multiple boxes behind a flow based load balancer. That way I can use commodity boxes without special NICs and keep them at a reasonable price point. The bang for the buck goes down when you talk 4 x 12 core HT processors etc. vs a dual 10 core HT. You also get the ability to have some fault tolerance where if you have hardware issues you are not blind. I have a few deployments that are going from 10G to 100G and the only thing we have to change is the inbound interfaces on the LB gear. 
The other positive is as usage goes up I can add additional capacity incrementally instead of having to re-solution. Thanks Mike > On Jan 9, 2015, at 1:20 PM, Mike Patterson wrote: > > You're right, it's 32 on mine. > > I posted some specs for my system a couple of years ago now, I think. > > 6-8GB per worker should give some headroom (my workers usually use about 5 apiece I think). > > Mike > > -- > Simple, clear purpose and principles give rise to complex and > intelligent behavior. Complex rules and regulations give rise > to simple and stupid behavior. - Dee Hock > >> On Jan 9, 2015, at 1:03 PM, Donaldson, John wrote: >> >> I'd agree with all of this. We're monitoring a few 10Gbps network segments with DAG 9.2X2s, too. I'll add in that, when processing that much traffic on a single device, you'll definitely not want to skimp on memory. >> >> I'm not sure which configurations you're using that might be limiting you to 16 streams -- we're run with at least 24 streams, and (at least with the 9.2X2s) you should be able to work with up to 32 receive streams. >> >> v/r >> >> John Donaldson >> >>> -----Original Message----- >>> From: bro-bounces at bro.org [mailto:bro-bounces at bro.org] On Behalf Of >>> Mike Patterson >>> Sent: Thursday, January 08, 2015 7:29 AM >>> To: coen bakkers >>> Cc: bro at bro.org >>> Subject: Re: [Bro] Bro with 10Gb NIC's or higher >>> >>> Succinctly, yes, although that provision is a big one. >>> >>> I'm running Bro on two 10 gig interfaces, an Intel X520 and an Endace DAG >>> 9.2X2. Both perform reasonably well. Although my hardware is somewhat >>> underspecced (Dell R710s of differing vintages), I still get tons of useful data. >>> >>> If your next question would be "how should I spec my hardware", that's >>> quite difficult to answer because it depends on a lot. Get the hottest CPUs >>> you can afford, with as many cores. If you're actually sustaining 10+Gb you'll >>> probably want at least 20-30 cores. 
I'm sustaining 4.5Gb or so on 8 3.7Ghz >>> cores, but Bro reports 10% or so loss. Note that some hardware >>> configurations will limit the number of streams you can feed to Bro, eg my >>> DAG can only produce 16 streams so even if I had it in a 24 core box, I'd only >>> be making use of 2/3 of my CPU. >>> >>> Mike >>> >>>> On Jan 7, 2015, at 5:04 AM, coen bakkers wrote: >>>> >>>> Does anyone have experience with higher speed NIC's and Bro? Will it >>> sustain 10Gb speeds or more provide the hardware is spec'd appropriately? >>>> >>>> regards, >>>> >>>> Coen >>>> _______________________________________________ >>>> Bro mailing list >>>> bro at bro-ids.org >>>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >>> >>> >>> _______________________________________________ >>> Bro mailing list >>> bro at bro-ids.org >>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From jdonnelly at dyn.com Fri Jan 9 11:31:16 2015 From: jdonnelly at dyn.com (John Donnelly) Date: Fri, 9 Jan 2015 13:31:16 -0600 Subject: [Bro] Bro with 10Gb NIC's or higher In-Reply-To: References: <1265785523.5697897.1420625041610.JavaMail.yahoo@jws11141.mail.ir2.yahoo.com> <58DBF613-2EF8-48BB-BA2F-B4EECEF9D543@uwaterloo.ca> Message-ID: Hi, What is the name of the log and where is it located at ? On Thu, Jan 8, 2015 at 10:41 AM, Brandon Lattin wrote: > Turn on the capture-loss script by adding the following to your local.bro: > > @load misc/capture-loss > > On Thu, Jan 8, 2015 at 10:31 AM, John Donnelly wrote: > >> How does one know if bro is dropping (10%) of messages ? >> >> On Thu, Jan 8, 2015 at 9:28 AM, Mike Patterson < >> mike.patterson at uwaterloo.ca> wrote: >> >>> Succinctly, yes, although that provision is a big one. >>> >>> I'm running Bro on two 10 gig interfaces, an Intel X520 and an Endace >>> DAG 9.2X2. Both perform reasonably well. 
Although my hardware is somewhat >>> underspecced (Dell R710s of differing vintages), I still get tons of useful >>> data. >>> >>> If your next question would be "how should I spec my hardware", that's >>> quite difficult to answer because it depends on a lot. Get the hottest CPUs >>> you can afford, with as many cores. If you're actually sustaining 10+Gb >>> you'll probably want at least 20-30 cores. I'm sustaining 4.5Gb or so on 8 >>> 3.7Ghz cores, but Bro reports 10% or so loss. Note that some hardware >>> configurations will limit the number of streams you can feed to Bro, eg my >>> DAG can only produce 16 streams so even if I had it in a 24 core box, I'd >>> only be making use of 2/3 of my CPU. >>> >>> Mike >>> >>> > On Jan 7, 2015, at 5:04 AM, coen bakkers wrote: >>> > >>> > Does anyone have experience with higher speed NIC's and Bro? Will it >>> sustain 10Gb speeds or more provide the hardware is spec'd appropriately? >>> > >>> > regards, >>> > >>> > Coen >>> > _______________________________________________ >>> > Bro mailing list >>> > bro at bro-ids.org >>> > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >>> >>> >>> _______________________________________________ >>> Bro mailing list >>> bro at bro-ids.org >>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >>> >> >> >> _______________________________________________ >> Bro mailing list >> bro at bro-ids.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >> > > > > -- > Brandon Lattin > Security Analyst > University of Minnesota - University Information Security > Office: 612-626-6672 > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150109/a57bd598/attachment.html From michalpurzynski1 at gmail.com Fri Jan 9 12:01:33 2015 From: michalpurzynski1 at gmail.com (=?UTF-8?B?TWljaGHFgiBQdXJ6ecWEc2tp?=) Date: Fri, 9 Jan 2015 21:01:33 +0100 Subject: [Bro] Bro with 10Gb NIC's or higher In-Reply-To: <20150109190051.GM14004@yaksha.lbl.gov> References: <1265785523.5697897.1420625041610.JavaMail.yahoo@jws11141.mail.ir2.yahoo.com> <58DBF613-2EF8-48BB-BA2F-B4EECEF9D543@uwaterloo.ca> <7F7CA4D1-B39D-466B-876E-333A8A2CC980@uwaterloo.ca> <20150109190051.GM14004@yaksha.lbl.gov> Message-ID: Do you really see and can handle 1Gbit/sec of traffic per core? I'm curious. I would say, with a 2.6Ghz CPU my educated guess would be somewhere about 250Mbit/sec / core with Bro. Of course configuration is everything here, I'm just looking into "given you do it right, that's what's possible". On Fri, Jan 9, 2015 at 8:00 PM, Aashish Sharma wrote: > While, we at LBNL continue to work towards a formal documentation, I think I'd reply then causing further delays: > > Here is the 100G cluster setup we've done: > > - 5 nodes running 10 workers + 1 proxy each on them > - 100G split by arista to 5x10G > - 10G on each node is further split my myricom to 10x1G/worker with shunting enabled !! > > Note: Scott Campbell did some very early work on the concept of shunting > (http://dl.acm.org/citation.cfm?id=2195223.2195788) > > We are using react-framework to talk to arista written by Justin Azoff. > > With Shunting enabled cluster isn't even truly seeing 10G anymore. > > oh btw, Capture_loss is a good policy to run for sure. With above setup we get ~ 0.xx % packet drops. 
> > (Depending on kind of traffic you are monitoring you may need a slightly different shunting logic) > > > Here is hardware specs / node: > > - Motherboard-SM, X9DRi-F > - Intel E5-2643V2 3.5GHz Ivy Bridge (2x6-=12 Cores) > - 128GB DDRIII 1600MHz ECC/REG - (8x16GB Modules Installed) > - 10G-PCIE2-8C2-2S+; Myricom 10G "Gen2" (5 GT/s) PCI Express NIC with two SFP+ > - Myricom 10G-SR Modules > > On tapping side we have > - Arista 7504 (gets fed 100G TX/RX + backup and other 10Gb links) > - Arista 7150 (Symetric hashing via DANZ - splitting tcp sessions 1/link - 5 links to nodes > > on Bro side: > 5 nodes accepting 5 links from 7150 > Each node running 10 workers + 1 proxy > Myricom spliting/load balancing to each worker on the node. > > > Hope this helps, > > let us know if you have any further questions. > > Thanks, > Aashish > > On Fri, Jan 09, 2015 at 06:20:17PM +0000, Mike Patterson wrote: >> You're right, it's 32 on mine. >> >> I posted some specs for my system a couple of years ago now, I think. >> >> 6-8GB per worker should give some headroom (my workers usually use about 5 apiece I think). >> >> Mike >> >> -- >> Simple, clear purpose and principles give rise to complex and >> intelligent behavior. Complex rules and regulations give rise >> to simple and stupid behavior. - Dee Hock >> >> > On Jan 9, 2015, at 1:03 PM, Donaldson, John wrote: >> > >> > I'd agree with all of this. We're monitoring a few 10Gbps network segments with DAG 9.2X2s, too. I'll add in that, when processing that much traffic on a single device, you'll definitely not want to skimp on memory. >> > >> > I'm not sure which configurations you're using that might be limiting you to 16 streams -- we're run with at least 24 streams, and (at least with the 9.2X2s) you should be able to work with up to 32 receive streams. 
>> > >> > v/r >> > >> > John Donaldson >> > >> >> -----Original Message----- >> >> From: bro-bounces at bro.org [mailto:bro-bounces at bro.org] On Behalf Of >> >> Mike Patterson >> >> Sent: Thursday, January 08, 2015 7:29 AM >> >> To: coen bakkers >> >> Cc: bro at bro.org >> >> Subject: Re: [Bro] Bro with 10Gb NIC's or higher >> >> >> >> Succinctly, yes, although that provision is a big one. >> >> >> >> I'm running Bro on two 10 gig interfaces, an Intel X520 and an Endace DAG >> >> 9.2X2. Both perform reasonably well. Although my hardware is somewhat >> >> underspecced (Dell R710s of differing vintages), I still get tons of useful data. >> >> >> >> If your next question would be "how should I spec my hardware", that's >> >> quite difficult to answer because it depends on a lot. Get the hottest CPUs >> >> you can afford, with as many cores. If you're actually sustaining 10+Gb you'll >> >> probably want at least 20-30 cores. I'm sustaining 4.5Gb or so on 8 3.7Ghz >> >> cores, but Bro reports 10% or so loss. Note that some hardware >> >> configurations will limit the number of streams you can feed to Bro, eg my >> >> DAG can only produce 16 streams so even if I had it in a 24 core box, I'd only >> >> be making use of 2/3 of my CPU. >> >> >> >> Mike >> >> >> >>> On Jan 7, 2015, at 5:04 AM, coen bakkers wrote: >> >>> >> >>> Does anyone have experience with higher speed NIC's and Bro? Will it >> >> sustain 10Gb speeds or more provide the hardware is spec'd appropriately? 
>> >>> >> >>> regards, >> >>> >> >>> Coen >> >>> _______________________________________________ >> >>> Bro mailing list >> >>> bro at bro-ids.org >> >>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >> >> >> >> >> >> _______________________________________________ >> >> Bro mailing list >> >> bro at bro-ids.org >> >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >> >> >> _______________________________________________ >> Bro mailing list >> bro at bro-ids.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > -- > Aashish Sharma (asharma at lbl.gov) > Cyber Security, > Lawrence Berkeley National Laboratory > http://go.lbl.gov/pgp-aashish > Office: (510)-495-2680 Cell: (510)-612-7971 > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From asharma at lbl.gov Fri Jan 9 12:26:01 2015 From: asharma at lbl.gov (Aashish Sharma) Date: Fri, 9 Jan 2015 12:26:01 -0800 Subject: [Bro] Bro with 10Gb NIC's or higher In-Reply-To: References: <1265785523.5697897.1420625041610.JavaMail.yahoo@jws11141.mail.ir2.yahoo.com> <58DBF613-2EF8-48BB-BA2F-B4EECEF9D543@uwaterloo.ca> <7F7CA4D1-B39D-466B-876E-333A8A2CC980@uwaterloo.ca> <20150109190051.GM14004@yaksha.lbl.gov> Message-ID: > Do you really see and can handle 1Gbit/sec of traffic per core? I'm curious. Haven't measured whether a core can handle 1Gbit/sec, but I highly doubt it. What saves us is the shunting capability - basically Bro identifies and cuts off the rest of the big flows by placing a src/src-port - dst/dst-port ACL on the Arista while continuing to allow control packets (and dynamically removes the ACL once the connection ends). So each core doesn't really see anything more than 20-40 Mbps (approximation). (Note to self: it would be good to get these numbers in a plot.) Thanks, Aashish On Fri, Jan 9, 2015 at 12:01 PM, Micha?
Purzy?ski < michalpurzynski1 at gmail.com> wrote: > Do you really see and can handle 1Gbit/sec of traffic per core? I'm > curious. > > I would say, with a 2.6Ghz CPU my educated guess would be somewhere > about 250Mbit/sec / core with Bro. Of course configuration is > everything here, I'm just looking into "given you do it right, that's > what's possible". > > On Fri, Jan 9, 2015 at 8:00 PM, Aashish Sharma wrote: > > While, we at LBNL continue to work towards a formal documentation, I > think I'd reply then causing further delays: > > > > Here is the 100G cluster setup we've done: > > > > - 5 nodes running 10 workers + 1 proxy each on them > > - 100G split by arista to 5x10G > > - 10G on each node is further split my myricom to 10x1G/worker with > shunting enabled !! > > > > Note: Scott Campbell did some very early work on the concept of shunting > > (http://dl.acm.org/citation.cfm?id=2195223.2195788) > > > > We are using react-framework to talk to arista written by Justin Azoff. > > > > With Shunting enabled cluster isn't even truly seeing 10G anymore. > > > > oh btw, Capture_loss is a good policy to run for sure. With above setup > we get ~ 0.xx % packet drops. > > > > (Depending on kind of traffic you are monitoring you may need a slightly > different shunting logic) > > > > > > Here is hardware specs / node: > > > > - Motherboard-SM, X9DRi-F > > - Intel E5-2643V2 3.5GHz Ivy Bridge (2x6-=12 Cores) > > - 128GB DDRIII 1600MHz ECC/REG - (8x16GB Modules Installed) > > - 10G-PCIE2-8C2-2S+; Myricom 10G "Gen2" (5 GT/s) PCI Express NIC with > two SFP+ > > - Myricom 10G-SR Modules > > > > On tapping side we have > > - Arista 7504 (gets fed 100G TX/RX + backup and other 10Gb links) > > - Arista 7150 (Symetric hashing via DANZ - splitting tcp sessions 1/link > - 5 links to nodes > > > > on Bro side: > > 5 nodes accepting 5 links from 7150 > > Each node running 10 workers + 1 proxy > > Myricom spliting/load balancing to each worker on the node. 
> > > > > > Hope this helps, > > > > let us know if you have any further questions. > > > > Thanks, > > Aashish > > > > On Fri, Jan 09, 2015 at 06:20:17PM +0000, Mike Patterson wrote: > >> You're right, it's 32 on mine. > >> > >> I posted some specs for my system a couple of years ago now, I think. > >> > >> 6-8GB per worker should give some headroom (my workers usually use > about 5 apiece I think). > >> > >> Mike > >> > >> -- > >> Simple, clear purpose and principles give rise to complex and > >> intelligent behavior. Complex rules and regulations give rise > >> to simple and stupid behavior. - Dee Hock > >> > >> > On Jan 9, 2015, at 1:03 PM, Donaldson, John > wrote: > >> > > >> > I'd agree with all of this. We're monitoring a few 10Gbps network > segments with DAG 9.2X2s, too. I'll add in that, when processing that much > traffic on a single device, you'll definitely not want to skimp on memory. > >> > > >> > I'm not sure which configurations you're using that might be limiting > you to 16 streams -- we're run with at least 24 streams, and (at least > with the 9.2X2s) you should be able to work with up to 32 receive streams. > >> > > >> > v/r > >> > > >> > John Donaldson > >> > > >> >> -----Original Message----- > >> >> From: bro-bounces at bro.org [mailto:bro-bounces at bro.org] On Behalf Of > >> >> Mike Patterson > >> >> Sent: Thursday, January 08, 2015 7:29 AM > >> >> To: coen bakkers > >> >> Cc: bro at bro.org > >> >> Subject: Re: [Bro] Bro with 10Gb NIC's or higher > >> >> > >> >> Succinctly, yes, although that provision is a big one. > >> >> > >> >> I'm running Bro on two 10 gig interfaces, an Intel X520 and an > Endace DAG > >> >> 9.2X2. Both perform reasonably well. Although my hardware is somewhat > >> >> underspecced (Dell R710s of differing vintages), I still get tons of > useful data. > >> >> > >> >> If your next question would be "how should I spec my hardware", > that's > >> >> quite difficult to answer because it depends on a lot. 
Get the > hottest CPUs > >> >> you can afford, with as many cores. If you're actually sustaining > 10+Gb you'll > >> >> probably want at least 20-30 cores. I'm sustaining 4.5Gb or so on 8 > 3.7Ghz > >> >> cores, but Bro reports 10% or so loss. Note that some hardware > >> >> configurations will limit the number of streams you can feed to Bro, > eg my > >> >> DAG can only produce 16 streams so even if I had it in a 24 core > box, I'd only > >> >> be making use of 2/3 of my CPU. > >> >> > >> >> Mike > >> >> > >> >>> On Jan 7, 2015, at 5:04 AM, coen bakkers wrote: > >> >>> > >> >>> Does anyone have experience with higher speed NIC's and Bro? Will it > >> >> sustain 10Gb speeds or more provide the hardware is spec'd > appropriately? > >> >>> > >> >>> regards, > >> >>> > >> >>> Coen > >> >>> _______________________________________________ > >> >>> Bro mailing list > >> >>> bro at bro-ids.org > >> >>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > >> >> > >> >> > >> >> _______________________________________________ > >> >> Bro mailing list > >> >> bro at bro-ids.org > >> >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > >> > >> > >> _______________________________________________ > >> Bro mailing list > >> bro at bro-ids.org > >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > > > -- > > Aashish Sharma (asharma at lbl.gov) > > Cyber Security, > > Lawrence Berkeley National Laboratory > > http://go.lbl.gov/pgp-aashish > > Office: (510)-495-2680 Cell: (510)-612-7971 > > > > _______________________________________________ > > Bro mailing list > > bro at bro-ids.org > > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150109/6ea9bea0/attachment.html From soehlert at illinois.edu Fri Jan 9 13:02:41 2015 From: soehlert at illinois.edu (Oehlert, Samuel) Date: Fri, 9 Jan 2015 15:02:41 -0600 Subject: [Bro] Bro with 10Gb NIC's or higher In-Reply-To: References: <1265785523.5697897.1420625041610.JavaMail.yahoo@jws11141.mail.ir2.yahoo.com> <58DBF613-2EF8-48BB-BA2F-B4EECEF9D543@uwaterloo.ca> Message-ID: <54B041F1.3040706@illinois.edu> Capture_loss.log and it should be with all your other logs once you turn it on. Remember to install, check, and restart brocontrol to get it turned on. On 1/9/15, 1:31 PM, John Donnelly wrote: > Hi, > What is the name of the log and where is it located at ? > > > On Thu, Jan 8, 2015 at 10:41 AM, Brandon Lattin > wrote: > > Turn on the capture-loss script by adding the following to your > local.bro: > > @load misc/capture-loss > > On Thu, Jan 8, 2015 at 10:31 AM, John Donnelly > wrote: > > How does one know if bro is dropping (10%) of messages ? > > On Thu, Jan 8, 2015 at 9:28 AM, Mike Patterson > > wrote: > > Succinctly, yes, although that provision is a big one. > > I'm running Bro on two 10 gig interfaces, an Intel X520 > and an Endace DAG 9.2X2. Both perform reasonably well. > Although my hardware is somewhat underspecced (Dell R710s > of differing vintages), I still get tons of useful data. > > If your next question would be "how should I spec my > hardware", that's quite difficult to answer because it > depends on a lot. Get the hottest CPUs you can afford, > with as many cores. If you're actually sustaining 10+Gb > you'll probably want at least 20-30 cores. I'm sustaining > 4.5Gb or so on 8 3.7Ghz cores, but Bro reports 10% or so > loss. Note that some hardware configurations will limit > the number of streams you can feed to Bro, eg my DAG can > only produce 16 streams so even if I had it in a 24 core > box, I'd only be making use of 2/3 of my CPU. 
> > Mike > > > On Jan 7, 2015, at 5:04 AM, coen bakkers > > wrote: > > > > Does anyone have experience with higher speed NIC's and > Bro? Will it sustain 10Gb speeds or more provide the > hardware is spec'd appropriately? > > > > regards, > > > > Coen > > _______________________________________________ > > Bro mailing list > > bro at bro-ids.org > > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > > > > -- > Brandon Lattin > Security Analyst > University of Minnesota - University Information Security > Office: 612-626-6672 > > > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -- Sam Oehlert Security Engineer NCSA soehlert at illinois.edu (217)300-1076 -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150109/3d24b86a/attachment-0001.html From seth at icir.org Fri Jan 9 13:29:58 2015 From: seth at icir.org (Seth Hall) Date: Fri, 9 Jan 2015 16:29:58 -0500 Subject: [Bro] Bro with 10Gb NIC's or higher In-Reply-To: <54B041F1.3040706@illinois.edu> References: <1265785523.5697897.1420625041610.JavaMail.yahoo@jws11141.mail.ir2.yahoo.com> <58DBF613-2EF8-48BB-BA2F-B4EECEF9D543@uwaterloo.ca> <54B041F1.3040706@illinois.edu> Message-ID: <2689AD2B-9ADB-4EBF-85AE-704E670F0BC2@icir.org> > On Jan 9, 2015, at 4:02 PM, Oehlert, Samuel wrote: > > Capture_loss.log and it should be with all your other logs once you turn it on. Remember to install, check, and restart brocontrol to get it turned on. After you add the following to local.bro of course? 
@load misc/capture-loss .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From jdonnelly at dyn.com Fri Jan 9 13:37:46 2015 From: jdonnelly at dyn.com (John Donnelly) Date: Fri, 9 Jan 2015 15:37:46 -0600 Subject: [Bro] Bro with 10Gb NIC's or higher In-Reply-To: <2689AD2B-9ADB-4EBF-85AE-704E670F0BC2@icir.org> References: <1265785523.5697897.1420625041610.JavaMail.yahoo@jws11141.mail.ir2.yahoo.com> <58DBF613-2EF8-48BB-BA2F-B4EECEF9D543@uwaterloo.ca> <54B041F1.3040706@illinois.edu> <2689AD2B-9ADB-4EBF-85AE-704E670F0BC2@icir.org> Message-ID: Ok - I found it in / - along with "weird" How can I specify another directory? What do the fields mean? root at x64-01:/# cat cap* 1420832673.023244,900.000068,bro,0,0,0.0 1420833573.023279,900.000035,bro,0,6,0.0 1420833727.951157,154.927878,bro,0,0,0.0 1420833885.693988,154.676438,bro,0,0,0.0 On Fri, Jan 9, 2015 at 3:29 PM, Seth Hall wrote: > > > On Jan 9, 2015, at 4:02 PM, Oehlert, Samuel > wrote: > > > > Capture_loss.log and it should be with all your other logs once you turn > it on. Remember to install, check, and restart brocontrol to get it turned > on. > > After you add the following to local.bro of course? > > @load misc/capture-loss > > .Seth > > -- > Seth Hall > International Computer Science Institute > (Bro) because everyone has a network > http://www.bro.org/ > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150109/03e62207/attachment.html From awells at digiumcloud.com Fri Jan 9 13:42:21 2015 From: awells at digiumcloud.com (Aubrey Wells) Date: Fri, 9 Jan 2015 16:42:21 -0500 Subject: [Bro] Bro with 10Gb NIC's or higher In-Reply-To: References: <1265785523.5697897.1420625041610.JavaMail.yahoo@jws11141.mail.ir2.yahoo.com> <58DBF613-2EF8-48BB-BA2F-B4EECEF9D543@uwaterloo.ca> <54B041F1.3040706@illinois.edu> <2689AD2B-9ADB-4EBF-85AE-704E670F0BC2@icir.org> Message-ID: My file has the headers in it: $ head -8 capture_loss.log #separator \x09 #set_separator , #empty_field (empty) #unset_field - #path capture_loss #open 2015-01-09-16-33-16 #fields ts ts_delta peer gaps acks percent_lost #types time interval string count count double --------------------- Aubrey Wells Manager, Network Operations Digium Cloud Services Main: 888.305.3850 Support: 877.344.4861 or http://www.digium.com/en/support On Fri, Jan 9, 2015 at 4:37 PM, John Donnelly wrote: > Ok - I found it it : / - along with "weird" > > How I can I specify another directory ? > What do the fields mean ? > > root at x64-01:/# cat cap* > 1420832673.023244,900.000068,bro,0,0,0.0 > 1420833573.023279,900.000035,bro,0,6,0.0 > 1420833727.951157,154.927878,bro,0,0,0.0 > 1420833885.693988,154.676438,bro,0,0,0.0 > > > > > On Fri, Jan 9, 2015 at 3:29 PM, Seth Hall wrote: > >> >> > On Jan 9, 2015, at 4:02 PM, Oehlert, Samuel >> wrote: >> > >> > Capture_loss.log and it should be with all your other logs once you >> turn it on. Remember to install, check, and restart brocontrol to get it >> turned on. >> >> After you add the following to local.bro of course? 
>> >> @load misc/capture-loss >> >> .Seth >> >> -- >> Seth Hall >> International Computer Science Institute >> (Bro) because everyone has a network >> http://www.bro.org/ >> >> > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150109/d414ef46/attachment.html From ytl at slac.stanford.edu Fri Jan 9 13:44:18 2015 From: ytl at slac.stanford.edu (Li, Yee Ting) Date: Fri, 9 Jan 2015 21:44:18 +0000 Subject: [Bro] Bro with 10Gb NIC's or higher In-Reply-To: References: <1265785523.5697897.1420625041610.JavaMail.yahoo@jws11141.mail.ir2.yahoo.com> <58DBF613-2EF8-48BB-BA2F-B4EECEF9D543@uwaterloo.ca> <7F7CA4D1-B39D-466B-876E-333A8A2CC980@uwaterloo.ca> <20150109190051.GM14004@yaksha.lbl.gov> Message-ID: Hi, I concur with Aashish; the biggest help is the shunting of large flows (and possibly encrypted flows). We have a Cisco Nexus 3172 (6x40gbps + 48x10gbps copper) load balancing to 6 x Dell 620s (E5-2695 v2 @ 2.40GHz x 24), each with Intel X540-AT2s (2x10gbps copper) running 20 workers each (with pfring/dna), sustaining about 5gbps, and we still see packet loss >5% on some workers due to the elephant flows in our environment. Yee. 
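The flow shunting that Aashish and Yee describe above can be sketched in a few lines. This is an illustration only: the byte threshold, the ACL rule format, and every function name below are invented for the example; the actual LBNL deployment drives the Arista through Justin Azoff's react-framework, and a real deployment would also exempt control packets (SYN/FIN/RST) so Bro still sees connection teardown.

```python
# Sketch of flow shunting: once a connection has transferred more than a
# cutoff number of bytes, stop sending its packets to the sensor by
# installing a 4-tuple deny rule on the upstream switch, and remove the
# rule again when the connection ends. All names are illustrative.

SHUNT_THRESHOLD = 100 * 1024 * 1024  # 100 MB; a site-specific tuning knob

def make_acl(orig_h, orig_p, resp_h, resp_p):
    # One deny rule per direction of the flow.
    return [
        f"deny tcp host {orig_h} eq {orig_p} host {resp_h} eq {resp_p}",
        f"deny tcp host {resp_h} eq {resp_p} host {orig_h} eq {orig_p}",
    ]

class Shunter:
    def __init__(self, threshold=SHUNT_THRESHOLD):
        self.threshold = threshold
        self.bytes_seen = {}   # conn 4-tuple -> running byte count
        self.shunted = set()   # connections currently cut off
        self.rules = []        # stand-in for rules pushed to the switch

    def on_packet(self, conn, payload_len):
        # Count bytes until the flow crosses the threshold, then shunt it.
        if conn in self.shunted:
            return
        self.bytes_seen[conn] = self.bytes_seen.get(conn, 0) + payload_len
        if self.bytes_seen[conn] > self.threshold:
            self.shunted.add(conn)
            self.rules.extend(make_acl(*conn))

    def on_connection_end(self, conn):
        # Dynamically remove the ACL once the connection ends.
        if conn in self.shunted:
            self.shunted.discard(conn)
            for rule in make_acl(*conn):
                self.rules.remove(rule)
```

With logic like this in place, each worker only ever sees the head of every elephant flow, which is why the per-core load in the setup described above stays in the tens of Mbps even on a 100G tap.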
> On 9 Jan 2015, at 12:26, Aashish Sharma wrote: > What saves us is the shunting capability - basically Bro identifies and cuts off the rest of the big flows by placing a src, src port - dst, dst port ACL on the Arista while continuing to allow control packets (and dynamically removes the ACL once the connection ends) From seth at icir.org Fri Jan 9 13:45:03 2015 From: seth at icir.org (Seth Hall) Date: Fri, 9 Jan 2015 16:45:03 -0500 Subject: [Bro] Bro with 10Gb NIC's or higher In-Reply-To: References: <1265785523.5697897.1420625041610.JavaMail.yahoo@jws11141.mail.ir2.yahoo.com> <58DBF613-2EF8-48BB-BA2F-B4EECEF9D543@uwaterloo.ca> <54B041F1.3040706@illinois.edu> <2689AD2B-9ADB-4EBF-85AE-704E670F0BC2@icir.org> Message-ID: <70E168CC-D490-419F-8E63-CD98D329F87A@icir.org> > On Jan 9, 2015, at 4:37 PM, John Donnelly wrote: > > How can I specify another directory? What do you mean? > What do the fields mean? It's documented: https://www.bro.org/sphinx/scripts/policy/misc/capture-loss.bro.html#type-CaptureLoss::Info > root at x64-01:/# cat cap* > 1420832673.023244,900.000068,bro,0,0,0.0 > 1420833573.023279,900.000035,bro,0,6,0.0 > 1420833727.951157,154.927878,bro,0,0,0.0 > 1420833885.693988,154.676438,bro,0,0,0.0 That last number is the estimated percent of packet loss. Unnnnnfortunately, I think I know enough to guess that your traffic is heavily leaning toward DNS, and capture-loss relies on having a lot of TCP available, so in your case the numbers might be misleading. .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From wren3 at illinois.edu Fri Jan 9 15:54:50 2015 From: wren3 at illinois.edu (Ren, Wenyu) Date: Fri, 9 Jan 2015 23:54:50 +0000 Subject: [Bro] Question about the sequence to call epoch_result function Message-ID: Dear all, In SumStats::SumStat, the epoch_result function will be called once for each key in each epoch. But in what sequence are these functions called? 
Is there a way to control which key's function is called first? Another question is about the epoch_finished function. Is this function called after all the epoch_result calls for this interval, or before? Thanks a lot. Wenyu From liburdi.joshua at gmail.com Fri Jan 9 17:36:20 2015 From: liburdi.joshua at gmail.com (Josh Liburdi) Date: Fri, 9 Jan 2015 17:36:20 -0800 Subject: [Bro] adding srcip to correlation script In-Reply-To: References: Message-ID: Hi Brian, I wrote the script you're referring to, so hopefully I can help. (Sorry for taking so long to reply to your message, I meant to do this earlier but haven't had time.) I don't use ELSA, but based on your description it sounds like it parses the Bro notice c$id fields and not the src or dst fields. This script doesn't use the c$id fields since no connection record exists after correlation has taken place; the only field containing a connection artifact is the src field, so that is the field you would want to groupby. It sounds like the fix for this could be in ELSA, but if you'd like to alter the Bro script to support the ELSA srcip field as it is now, then this (ugly solution) should work: Change this line in each notice: $src=idx, To this: $id=[$orig_h=idx,$orig_p=0/tcp,$resp_h=0.0.0.0,$resp_p=0/tcp], By doing that, we're faking a full connection record to get the idx value into the c$id$orig_h field (and thus the srcip field in ELSA). Hope this helps! Let me know if I was way off base. Josh On Fri, Jan 2, 2015 at 8:46 AM, Kellogg, Brian D (OLN) wrote: > I'm working with the correlation script released by CrowdStrike, thank you > BTW, and I want to populate the 'srcip' field with the correct source IP so > that I can do a groupby on that field in ELSA. How do I get the conn record > for this connection into the below function so that I can add $conn=c to the > notice? Not sure what the best way to do this is; can I just add it to the > function arguments or define 'c' 
as a local and then assign the source IP, > 'idx' in this case, to c$id$orig_h. > > > > function alerts_out(t: table[addr] of set[string], idx: addr): interval > > > > > > thanks, > > Brian > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From seth at icir.org Fri Jan 9 19:11:14 2015 From: seth at icir.org (Seth Hall) Date: Fri, 9 Jan 2015 22:11:14 -0500 Subject: [Bro] Bro with 10Gb NIC's or higher In-Reply-To: References: <1265785523.5697897.1420625041610.JavaMail.yahoo@jws11141.mail.ir2.yahoo.com> <58DBF613-2EF8-48BB-BA2F-B4EECEF9D543@uwaterloo.ca> <7F7CA4D1-B39D-466B-876E-333A8A2CC980@uwaterloo.ca> <20150109190051.GM14004@yaksha.lbl.gov> Message-ID: > On Jan 9, 2015, at 4:44 PM, Li, Yee Ting wrote: > > i concur with Aashish; the biggest help is the shunting of large flows (and possibly encrypted flows). Yes, this has been high on our radar for quite a while. I suspect we're getting closer and closer to getting this into Bro in a generic manner. Unfortunately it just takes a lot of time and experience to get it right. .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From wren3 at illinois.edu Sat Jan 10 17:41:16 2015 From: wren3 at illinois.edu (Ren, Wenyu) Date: Sun, 11 Jan 2015 01:41:16 +0000 Subject: [Bro] How to define a link list in Bro Message-ID: Dear all, Does anyone know how to construct a linked list in Bro? When I try to define a type that represents a node, it needs to contain a field that has a type of itself, which Bro forbids. I have also tried to redefine, but Bro forbids me from changing the type when I redefine my node. Thanks a lot. 
Wenyu From vlad at grigorescu.org Sat Jan 10 17:49:37 2015 From: vlad at grigorescu.org (Vlad Grigorescu) Date: Sat, 10 Jan 2015 19:49:37 -0600 Subject: [Bro] How to define a link list in Bro In-Reply-To: References: Message-ID: That's not something that the Bro scripting language supports. When you start getting into more advanced data structures like that, that's often a sign that you need to write a plugin in C++ instead of trying to do this in script-land. What are you trying to accomplish with linked lists? --Vlad On Sat, Jan 10, 2015 at 7:41 PM, Ren, Wenyu wrote: > Dear all, > > Does anyone know how to construct a linked list in Bro? When I try to define > a type that represents a node, it needs to contain a field that has a type > of itself, which Bro forbids. I have also tried to redefine, but > Bro forbids me from changing the type when I redefine my node. > > Thanks a lot. > > Wenyu > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150110/0a9af931/attachment.html From vitologrillo at gmail.com Mon Jan 12 02:18:29 2015 From: vitologrillo at gmail.com (Vito Logrillo) Date: Mon, 12 Jan 2015 11:18:29 +0100 Subject: [Bro] Question about known_services.log Message-ID: Hi, I have a question about known_services.log: why is the service field treated as set[string] and not as string? Another question: why, using code like the example below, do I sometimes obtain an empty rec$service? event Known::log_known_services(rec: Known::ServicesInfo) &priority=5 { known_services_buffer_vec = ([$ts = rec$ts,$service_addr = rec$host, $service_port = rec$port_num, $service = rec$service]); } Thanks. 
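On the first question above: the service field is a set[string] because a single listening port can be labelled with more than one application protocol over time (for example HTTP and SSL on the same port), so the log column carries the whole set of tags, and it can legitimately be empty when no protocol was identified before the entry was written. Any downstream consumer of known_services.log therefore has to split that column on the set separator. A minimal sketch of such a parser follows; the field order matches the default Known::ServicesInfo layout (ts, host, port_num, port_proto, service) and the tab/comma separators from the standard log headers, but you should check it against the #fields line of your own log:

```python
# Sketch: read known_services.log lines and split the set-valued
# "service" column into Python sets. Adjust field order and separators
# to the #separator / #set_separator / #fields headers of your log.

def parse_known_services(lines, sep="\t", set_sep=",", empty="(empty)"):
    services = []
    for line in lines:
        if line.startswith("#"):       # skip header/metadata lines
            continue
        ts, host, port_num, port_proto, service = line.rstrip("\n").split(sep)
        # An empty set is logged as "(empty)" (or "-" if unset).
        tags = set() if service in (empty, "-") else set(service.split(set_sep))
        services.append({
            "ts": float(ts),
            "host": host,
            "port": int(port_num),
            "proto": port_proto,
            "service": tags,           # may legitimately be empty
        })
    return services
```

Handling the empty case explicitly, as above, also answers the second question: code that assumes rec$service always has a member will misbehave on entries logged before any protocol was identified.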
From bkellogg at dresser-rand.com Mon Jan 12 05:32:36 2015 From: bkellogg at dresser-rand.com (Kellogg, Brian D (OLN)) Date: Mon, 12 Jan 2015 13:32:36 +0000 Subject: [Bro] adding srcip to correlation script In-Reply-To: References: Message-ID: Thanks for the response. I tried something similar already, but it wants the connection unique ID field filed as well and haven't figured out how to handle that yet. Haven't had time to play with it beyond my first attempt. Thanks -----Original Message----- From: Josh Liburdi [mailto:liburdi.joshua at gmail.com] Sent: Friday, January 09, 2015 8:36 PM To: Kellogg, Brian D (OLN) Cc: bro at bro.org Subject: Re: [Bro] adding srcip to correlation script Hi Brian, I wrote the script you're referring to, so hopefully I can help. (Sorry for taking so long to reply to your message, I meant to do this earlier but haven't had time.) I don't use ELSA, but based on your description it sounds like it parses the Bro notice c$id fields and not the src or dst fields. This script doesn't use the c$id fields since no connection record exists after correlation has taken place; the only field containing a connection artifact is the src field, so that is the field you would want to groupby. It sounds like the fix for this could be in ELSA, but if you'd like to alter the Bro script to support the ELSA srcip field as it is now, then this (ugly solution) should work: Change this line in each notice: $src=idx, To this: $id=[$orig_h=idx,$orig_p=0/tcp,$resp_h=0.0.0.0,$resp_p=0/tcp], By doing that, we're faking a full connection record to get the idx value into the c$id$orig_h field (and thus the srcip field in ELSA). Hope this helps! Let me know if I was way off base. Josh On Fri, Jan 2, 2015 at 8:46 AM, Kellogg, Brian D (OLN) wrote: > I?m working with the correlation script released by CrowdStrike, thank > you BTW, and I want to populated the ?srcip? field with the correct > source IP so that I can do a groupby on that field in ELSA. 
How do I > get the conn record for this connection into the below function so > that I can add $conn=c to the notice? Not sure what the best way to > do this is; can I just add it to the function arguments or define ?c? > as a local and then assign the source IP, ?idx? in this case, to c$id$orig_h. > > > > function alerts_out(t: table[addr] of set[string], idx: addr): > interval > > > > > > thanks, > > Brian > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From kelly at utexas.edu Mon Jan 12 09:43:58 2015 From: kelly at utexas.edu (Kelly Kerby) Date: Mon, 12 Jan 2015 11:43:58 -0600 Subject: [Bro] Bro with 10Gb NIC's or higher In-Reply-To: References: <1265785523.5697897.1420625041610.JavaMail.yahoo@jws11141.mail.ir2.yahoo.com> <58DBF613-2EF8-48BB-BA2F-B4EECEF9D543@uwaterloo.ca> <7F7CA4D1-B39D-466B-876E-333A8A2CC980@uwaterloo.ca> Message-ID: <54B407DE.3000600@utexas.edu> Hello, We at UT Austin are fairly new to Bro and new to the list (been following, but never posted), but I thought I'd share my experience. We have had good luck monitoring our traffic which sustains ~17-20 Gbps during peak hours with 2 devices made by a company called Netronome. The traffic is distributed between the 2 clustered devices using an integrated load balancer which evenly spreads the traffic across all the processors which have been pinned to corresponding bro workers. We see very little traffic loss - random ~2-3% drops per Bro instance with the occasional larger ~10% drop. Our configuration: - 2 clustered devices 40 cores each with 32 workers and 4 proxies - Primary device with 2 10 gig cards Hope this is helpful. -Kelly UT Austin On 1/9/15 1:11 PM, Mike Reeves wrote: > In all of the 10G deployments I have done I always do multiple boxes behind a flow based load balancer. That way I can use commodity boxes without special NICs and keep them at a reasonable price point. 
The bang for the buck goes down when you talk 4 x 12 core HT processors etc. vs a dual 10 core HT. You also get the ability to have some fault tolerance where if you have hardware issues you are not blind. I have a few deployments that are going from 10G to 100G and the only thing we have to change is the inbound interfaces on the LB gear. The other positive is as usage goes up I can add additional capacity incrementally instead of having to re-solution. > > Thanks > > Mike > > > > >> On Jan 9, 2015, at 1:20 PM, Mike Patterson wrote: >> >> You're right, it's 32 on mine. >> >> I posted some specs for my system a couple of years ago now, I think. >> >> 6-8GB per worker should give some headroom (my workers usually use about 5 apiece I think). >> >> Mike >> >> -- >> Simple, clear purpose and principles give rise to complex and >> intelligent behavior. Complex rules and regulations give rise >> to simple and stupid behavior. - Dee Hock >> >>> On Jan 9, 2015, at 1:03 PM, Donaldson, John wrote: >>> >>> I'd agree with all of this. We're monitoring a few 10Gbps network segments with DAG 9.2X2s, too. I'll add in that, when processing that much traffic on a single device, you'll definitely not want to skimp on memory. >>> >>> I'm not sure which configurations you're using that might be limiting you to 16 streams -- we're run with at least 24 streams, and (at least with the 9.2X2s) you should be able to work with up to 32 receive streams. >>> >>> v/r >>> >>> John Donaldson >>> >>>> -----Original Message----- >>>> From: bro-bounces at bro.org [mailto:bro-bounces at bro.org] On Behalf Of >>>> Mike Patterson >>>> Sent: Thursday, January 08, 2015 7:29 AM >>>> To: coen bakkers >>>> Cc: bro at bro.org >>>> Subject: Re: [Bro] Bro with 10Gb NIC's or higher >>>> >>>> Succinctly, yes, although that provision is a big one. >>>> >>>> I'm running Bro on two 10 gig interfaces, an Intel X520 and an Endace DAG >>>> 9.2X2. Both perform reasonably well. 
Although my hardware is somewhat >>>> underspecced (Dell R710s of differing vintages), I still get tons of useful data. >>>> >>>> If your next question would be "how should I spec my hardware", that's >>>> quite difficult to answer because it depends on a lot. Get the hottest CPUs >>>> you can afford, with as many cores. If you're actually sustaining 10+Gb you'll >>>> probably want at least 20-30 cores. I'm sustaining 4.5Gb or so on 8 3.7Ghz >>>> cores, but Bro reports 10% or so loss. Note that some hardware >>>> configurations will limit the number of streams you can feed to Bro, eg my >>>> DAG can only produce 16 streams so even if I had it in a 24 core box, I'd only >>>> be making use of 2/3 of my CPU. >>>> >>>> Mike >>>> >>>>> On Jan 7, 2015, at 5:04 AM, coen bakkers wrote: >>>>> >>>>> Does anyone have experience with higher speed NIC's and Bro? Will it >>>> sustain 10Gb speeds or more provide the hardware is spec'd appropriately? >>>>> >>>>> regards, >>>>> >>>>> Coen >>>>> _______________________________________________ >>>>> Bro mailing list >>>>> bro at bro-ids.org >>>>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >>>> >>>> >>>> _______________________________________________ >>>> Bro mailing list >>>> bro at bro-ids.org >>>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >> >> >> _______________________________________________ >> Bro mailing list >> bro at bro-ids.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- A non-text attachment was scrubbed... 
Name: smime.p7s Type: application/pkcs7-signature Size: 3858 bytes Desc: S/MIME Cryptographic Signature Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150112/23b87765/attachment.bin From liburdi.joshua at gmail.com Mon Jan 12 22:35:20 2015 From: liburdi.joshua at gmail.com (Josh Liburdi) Date: Mon, 12 Jan 2015 22:35:20 -0800 (PST) Subject: [Bro] adding srcip to correlation script In-Reply-To: References: Message-ID: <1421130920635.b1ff7d93@Nodemailer> It sounds odd that ELSA requires the conn uid field-- there are many scripts that do not put conn uid values in the notice. Out of curiosity, have you (or anyone) seen any scanning notices in ELSA? ? Sent from Mailbox On Mon, Jan 12, 2015 at 5:32 AM, Kellogg, Brian D (OLN) wrote: > Thanks for the response. > I tried something similar already, but it wants the connection unique ID field filed as well and haven't figured out how to handle that yet. Haven't had time to play with it beyond my first attempt. Thanks > -----Original Message----- > From: Josh Liburdi [mailto:liburdi.joshua at gmail.com] > Sent: Friday, January 09, 2015 8:36 PM > To: Kellogg, Brian D (OLN) > Cc: bro at bro.org > Subject: Re: [Bro] adding srcip to correlation script > Hi Brian, > I wrote the script you're referring to, so hopefully I can help. > (Sorry for taking so long to reply to your message, I meant to do this earlier but haven't had time.) > I don't use ELSA, but based on your description it sounds like it parses the Bro notice c$id fields and not the src or dst fields. This script doesn't use the c$id fields since no connection record exists after correlation has taken place; the only field containing a connection artifact is the src field, so that is the field you would want to groupby. 
It sounds like the fix for this could be in ELSA, but if you'd like to alter the Bro script to support the ELSA srcip field as it is now, then this (ugly solution) should work: > Change this line in each notice: $src=idx, To this: $id=[$orig_h=idx,$orig_p=0/tcp,$resp_h=0.0.0.0,$resp_p=0/tcp], > By doing that, we're faking a full connection record to get the idx value into the c$id$orig_h field (and thus the srcip field in ELSA). > Hope this helps! Let me know if I was way off base. > Josh > On Fri, Jan 2, 2015 at 8:46 AM, Kellogg, Brian D (OLN) wrote: >> I?m working with the correlation script released by CrowdStrike, thank >> you BTW, and I want to populated the ?srcip? field with the correct >> source IP so that I can do a groupby on that field in ELSA. How do I >> get the conn record for this connection into the below function so >> that I can add $conn=c to the notice? Not sure what the best way to >> do this is; can I just add it to the function arguments or define ?c? >> as a local and then assign the source IP, ?idx? in this case, to c$id$orig_h. >> >> >> >> function alerts_out(t: table[addr] of set[string], idx: addr): >> interval >> >> >> >> >> >> thanks, >> >> Brian >> >> >> _______________________________________________ >> Bro mailing list >> bro at bro-ids.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150112/3014ec0a/attachment.html From andrew.ratcliffe at nswcsystems.co.uk Tue Jan 13 00:30:25 2015 From: andrew.ratcliffe at nswcsystems.co.uk (Andrew Ratcliffe) Date: Tue, 13 Jan 2015 08:30:25 +0000 Subject: [Bro] Crowdstrike Additional Intel types Message-ID: Hi All, I was trying out the Crowdstrike bro additional Intel framework types http://blog.crowdstrike.com/maximizing-network-threat-intel-bro/ and very cool they are too. 
But does anyone know if the Intel::USER_NAME could be extended to CIFS/SMB where the username is in the clear? I have seen APT activity where service accounts that have been cracked and then used to attempt to authenticate to devices around the network. A simple CIFS honeypot might be used to attract an attacker to attempt authentication. Or even the metasploit module: msf exploit(phpmyadmin_config) > use auxiliary/server/capture/smb msf auxiliary(smb) > set JOHNPWFILE /root/johnpwfile JOHNPWFILE => /root/johnpwfile msf auxiliary(smb) > exploit [*] Auxiliary module execution completed msf auxiliary(smb) > [*] Server started. [*] SMB Captured - 2015-01-12 20:55:09 +0000 NTLMv2 Response Captured from 172.31.254.13:53729 - 172.31.254.13 USER:andy DOMAIN: OS:Mac OS X 10.10 LM:SMBFS 3.0.0 LMHASH:4d983d718a78a8692a5501f05c54f90a LM_CLIENT_CHALLENGE:cb67074c9d31d0bb NTHASH:728a9e6db88b8b4ed3ff7832cfe8fc7e NT_CLIENT_CHALLENGE:0101000000000000009e550caa2ed001cb67074c9d31d0bb00000000000000000200000000000000 If it were possible to extend the scripts to examine the SMB username then the Intel framework would pick up on this activity just using a list of usernames that should not appear on the network. Kind regards, Andy Andrew.Ratcliffe at NSWCSystems.co.uk CISSP, GCIA, GCIH, GPEN, GWAPT, CSTA, CSTP, CWSA Blog.InfoSecMatters.net -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150113/0e8c1c4f/attachment.html From bkellogg at dresser-rand.com Tue Jan 13 05:46:24 2015 From: bkellogg at dresser-rand.com (Kellogg, Brian D (OLN)) Date: Tue, 13 Jan 2015 13:46:24 +0000 Subject: [Bro] adding srcip to correlation script In-Reply-To: <1421130920635.b1ff7d93@Nodemailer> References: <1421130920635.b1ff7d93@Nodemailer> Message-ID: Sorry, I was getting an error in Bro about the Conn ID field; not ELSA. And the notices were not showing up in notice.log. I probably just missed something simple. 
I really didn't have time to look into it. I see scanning notices in ELSA, but unfortunately they come in with the loopback IP as the default for src and dst, which can make them hard to correlate.

From: Josh Liburdi [mailto:liburdi.joshua at gmail.com]
Sent: Tuesday, January 13, 2015 1:35 AM
To: Kellogg, Brian D (OLN)
Cc: bro at bro.org
Subject: RE: [Bro] adding srcip to correlation script

It sounds odd that ELSA requires the conn uid field-- there are many scripts that do not put conn uid values in the notice. Out of curiosity, have you (or anyone) seen any scanning notices in ELSA?

Sent from Mailbox

On Mon, Jan 12, 2015 at 5:32 AM, Kellogg, Brian D (OLN) wrote:
Thanks for the response. I tried something similar already, but it wants the connection unique ID field filled in as well and I haven't figured out how to handle that yet. Haven't had time to play with it beyond my first attempt.
Thanks

-----Original Message-----
From: Josh Liburdi [mailto:liburdi.joshua at gmail.com]
Sent: Friday, January 09, 2015 8:36 PM
To: Kellogg, Brian D (OLN)
Cc: bro at bro.org
Subject: Re: [Bro] adding srcip to correlation script

Hi Brian,
I wrote the script you're referring to, so hopefully I can help. (Sorry for taking so long to reply to your message, I meant to do this earlier but haven't had time.) I don't use ELSA, but based on your description it sounds like it parses the Bro notice c$id fields and not the src or dst fields. This script doesn't use the c$id fields since no connection record exists after correlation has taken place; the only field containing a connection artifact is the src field, so that is the field you would want to groupby.
It sounds like the fix for this could be in ELSA, but if you'd like to alter the Bro script to support the ELSA srcip field as it is now, then this (ugly solution) should work:

Change this line in each notice:
    $src=idx,
To this:
    $id=[$orig_h=idx,$orig_p=0/tcp,$resp_h=0.0.0.0,$resp_p=0/tcp],

By doing that, we're faking a full connection record to get the idx value into the c$id$orig_h field (and thus the srcip field in ELSA).
Hope this helps! Let me know if I was way off base.
Josh

On Fri, Jan 2, 2015 at 8:46 AM, Kellogg, Brian D (OLN) wrote:
> I'm working with the correlation script released by CrowdStrike, thank
> you BTW, and I want to populate the "srcip" field with the correct
> source IP so that I can do a groupby on that field in ELSA. How do I
> get the conn record for this connection into the below function so
> that I can add $conn=c to the notice? Not sure what the best way to
> do this is; can I just add it to the function arguments or define "c"
> as a local and then assign the source IP, "idx" in this case, to c$id$orig_h.
>
> function alerts_out(t: table[addr] of set[string], idx: addr): interval
>
> thanks,
> Brian
>
> _______________________________________________
> Bro mailing list
> bro at bro-ids.org
> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150113/bbd6f60d/attachment-0001.html

From seth at icir.org Tue Jan 13 06:10:18 2015
From: seth at icir.org (Seth Hall)
Date: Tue, 13 Jan 2015 09:10:18 -0500
Subject: [Bro] Crowdstrike Additional Intel types
In-Reply-To:
References:
Message-ID: <74BAE134-64FF-4003-9E22-21B2F9851678@icir.org>

> On Jan 13, 2015, at 3:30 AM, Andrew Ratcliffe wrote:
>
> But does anyone know if the Intel::USER_NAME could be extended to CIFS/SMB where the username is in the clear?
Even better is that development on the Authentication framework has been picked up again and is making some progress. Personally, I'd like to see it make its way into 2.4 so that we'd be able to have a generic, abstract implementation for authentication handling.

In the case of SMB, we'd just have a script that feeds SMB authentication information into the authentication framework in the cases where we can grab it, and there will be another script that handles new authentications and feeds them into the intel framework. It should simplify and unify the work that Josh was aiming for with his scripts.
I see writers at src/logging/writers: SQLite, ElasticSearch, DataSeries, Ascii. I want a writer for MySQL. If there is none, I need to write one. Are there any general hints on how to do this? Maybe there are some documents on how to add a writer.

What do you think of the idea of filling up a MySQL database with normalised data (in batches of 50,000 entries)?

Thank you.

From andrew.ratcliffe at nswcsystems.co.uk Tue Jan 13 06:51:03 2015
From: andrew.ratcliffe at nswcsystems.co.uk (Andrew Ratcliffe)
Date: Tue, 13 Jan 2015 14:51:03 +0000
Subject: [Bro] Crowdstrike Additional Intel types
In-Reply-To: <74BAE134-64FF-4003-9E22-21B2F9851678@icir.org>
References: <74BAE134-64FF-4003-9E22-21B2F9851678@icir.org>
Message-ID: <18CAE273-BA22-4A10-A510-E89B7BF1B940@nswcsystems.co.uk>

Hi Seth,
Thanks for the response. It sounds like there are a few strands coming together soon that will make this, and much more, all possible; sounds good.

BTW: Enjoyed Floss Weekly 296!
Kind regards,
Andy
Andrew.Ratcliffe at NSWCSystems.co.uk
CISSP, GCIA, GCIH, GPEN, GWAPT, CSTA, CSTP
Blog.InfoSecMatters.net

On 13 Jan 2015, at 14:10, Seth Hall wrote:

On Jan 13, 2015, at 3:30 AM, Andrew Ratcliffe wrote:

But does anyone know if the Intel::USER_NAME could be extended to CIFS/SMB where the username is in the clear?

Even better is that development on the Authentication framework has been picked up again and is making some progress. Personally, I'd like to see it make its way into 2.4 so that we'd be able to have a generic, abstract implementation for authentication handling.

In the case of SMB, we'd just have a script that feeds SMB authentication information into the authentication framework in the cases where we can grab it, and there will be another script that handles new authentications and feeds them into the intel framework. It should simplify and unify the work that Josh was aiming for with his scripts.
Unfortunately, the authentication framework is difficult enough that it's taken quite a few years and input from at least 4 people to cover a good set of the potential use cases.

I have seen APT activity where service accounts have been cracked and then used to attempt to authenticate to devices around the network. A simple CIFS honeypot might be used to attract an attacker to attempt authentication.

Agreed, although in this case, your whole network would be the honeypot. :)

If it were possible to extend the scripts to examine the SMB username then the Intel framework would pick up on this activity just using a list of usernames that should not appear on the network.

Yep, that should be pretty easy to deal with once the rewritten SMB analyzer makes it into Bro along with the authentication framework, which should make authentication handling much nicer for people writing scripts.

.Seth

--
Seth Hall
International Computer Science Institute
(Bro) because everyone has a network
http://www.bro.org/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150113/1e50a484/attachment.html

From liburdi.joshua at gmail.com Tue Jan 13 09:49:54 2015
From: liburdi.joshua at gmail.com (Josh Liburdi)
Date: Tue, 13 Jan 2015 09:49:54 -0800
Subject: [Bro] Crowdstrike Additional Intel types
In-Reply-To: <18CAE273-BA22-4A10-A510-E89B7BF1B940@nswcsystems.co.uk>
References: <74BAE134-64FF-4003-9E22-21B2F9851678@icir.org> <18CAE273-BA22-4A10-A510-E89B7BF1B940@nswcsystems.co.uk>
Message-ID:

Seth nailed it-- the Intel::USER_NAME fit a specific use case for me (FTP authentication), so that's why I added it instead of waiting for the Authentication framework. With that said, if the merge of the SMB analyzer beats the merge of the Authentication framework, then we can use a similar approach to check for SMB users via the Intel framework using the username field in smb_cmd.log.
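In Bro script terms, that similar approach might look roughly like the following sketch, modeled on the existing FTP handling. Since the SMB analyzer is not merged yet, the smb_auth_username event and the SMB::IN_AUTH location below are hypothetical placeholders, not real API:

```bro
# Sketch only: feed an SMB user name into the intel framework,
# in the style of the existing FTP username handling.
module SMB;

export {
    # Register a new "where seen" location for intel matching.
    # SMB::IN_AUTH is a hypothetical name.
    redef enum Intel::Where += { SMB::IN_AUTH };
}

# Hypothetical event; a merged SMB analyzer would need to supply
# something equivalent that exposes the cleartext user name.
event smb_auth_username(c: connection, username: string)
    {
    Intel::seen([$indicator=username,
                 $indicator_type=Intel::USER_NAME,
                 $conn=c,
                 $where=SMB::IN_AUTH]);
    }
```

With a list of usernames that should never appear on the wire loaded as Intel::USER_NAME indicators, any match would then surface through the normal intel framework logging.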
Very excited to see the Authentication framework when it's ready; that should make all of this (and more) easier.

On Tue, Jan 13, 2015 at 6:51 AM, Andrew Ratcliffe wrote:
> Hi Seth,
> Thanks for the response. It sounds like there are a few strands coming
> together soon that will make this, and much more, all possible; sounds good.
>
> BTW: Enjoyed Floss Weekly 296!
> Kind regards,
> Andy
> Andrew.Ratcliffe at NSWCSystems.co.uk
> CISSP, GCIA, GCIH, GPEN, GWAPT, CSTA, CSTP
> Blog.InfoSecMatters.net
>
> On 13 Jan 2015, at 14:10, Seth Hall wrote:
>
> On Jan 13, 2015, at 3:30 AM, Andrew Ratcliffe wrote:
>
> But does anyone know if the Intel::USER_NAME could be extended to CIFS/SMB
> where the username is in the clear?
>
> Even better is that development on the Authentication framework has been
> picked up again and is making some progress. Personally, I'd like to see it
> make its way into 2.4 so that we'd be able to have a generic, abstract
> implementation for authentication handling.
>
> In the case of SMB, we'd just have a script that feeds SMB authentication
> information into the authentication framework in the cases where we can grab it,
> and there will be another script that handles new authentications and feeds
> them into the intel framework. It should simplify and unify the work that
> Josh was aiming for with his scripts.
>
> Unfortunately, the authentication framework is difficult enough that it's
> taken quite a few years and input from at least 4 people to cover a good set
> of the potential use cases.
>
> I have seen APT activity where service accounts have been cracked and
> then used to attempt to authenticate to devices around the network. A simple
> CIFS honeypot might be used to attract an attacker to attempt
> authentication.
>
> Agreed, although in this case, your whole network would be the honeypot.
:)
>
> If it were possible to extend the scripts to examine the SMB username then
> the Intel framework would pick up on this activity just using a list of
> usernames that should not appear on the network.
>
> Yep, that should be pretty easy to deal with once the rewritten SMB analyzer
> makes it into Bro along with the authentication framework, which should make
> authentication handling much nicer for people writing scripts.
>
> .Seth
>
> --
> Seth Hall
> International Computer Science Institute
> (Bro) because everyone has a network
> http://www.bro.org/
>
> _______________________________________________
> Bro mailing list
> bro at bro-ids.org
> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro

From seth at icir.org Tue Jan 13 10:16:38 2015
From: seth at icir.org (Seth Hall)
Date: Tue, 13 Jan 2015 13:16:38 -0500
Subject: [Bro] Crowdstrike Additional Intel types
In-Reply-To:
References: <74BAE134-64FF-4003-9E22-21B2F9851678@icir.org> <18CAE273-BA22-4A10-A510-E89B7BF1B940@nswcsystems.co.uk>
Message-ID: <83DAA1DF-141A-47FB-B680-2FD056E8CCC6@icir.org>

> On Jan 13, 2015, at 12:49 PM, Josh Liburdi wrote:
>
> Seth nailed it-- the Intel::USER_NAME fit a specific use case for me
> (FTP authentication), so that's why I added it instead of waiting for
> the Authentication framework.

And thanks for that. I definitely think it makes sense to bash your way through any problems you're running into rather than wait for a Bro release where we (at least attempt to) elegantly solve the problem. :)

.Seth

--
Seth Hall
International Computer Science Institute
(Bro) because everyone has a network
http://www.bro.org/

From damonrouse at gmail.com Tue Jan 13 13:34:47 2015
From: damonrouse at gmail.com (Damon Rouse)
Date: Tue, 13 Jan 2015 13:34:47 -0800
Subject: [Bro] Intel Framework Question
Message-ID:

I've just started playing with the intel framework and have a question for everyone. How are people automating the conversion of their intel data (threat feeds, etc.)
into the format the BRO intel files require. Are there any solutions out there to automate this?

Thanks
Damon
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150113/adeb22dd/attachment.html

From anthony.kasza at gmail.com Tue Jan 13 13:59:40 2015
From: anthony.kasza at gmail.com (anthony kasza)
Date: Tue, 13 Jan 2015 13:59:40 -0800
Subject: [Bro] Intel Framework Question
In-Reply-To:
References:
Message-ID:

Python is nice. I think Jon Schipp has a script or two that assist in converting indicators too.

-AK

On Jan 13, 2015 1:38 PM, "Damon Rouse" wrote:
> I've just started playing with the intel framework and have a question for
> everyone. How are people automating the conversion of their intel data
> (threat feeds, etc.) into the format the BRO intel files require.
>
> Are there any solutions out there to automate this?
>
> Thanks
> Damon
>
> _______________________________________________
> Bro mailing list
> bro at bro-ids.org
> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150113/29075c56/attachment.html

From jonschipp at gmail.com Tue Jan 13 14:19:52 2015
From: jonschipp at gmail.com (Jon Schipp)
Date: Tue, 13 Jan 2015 16:19:52 -0600
Subject: [Bro] Intel Framework Question
In-Reply-To:
References:
Message-ID:

$ wget https://raw.githubusercontent.com/jonschipp/mal-dnssearch/master/tools/mal-dns2bro.sh

:)

On Tue, Jan 13, 2015 at 3:59 PM, anthony kasza wrote:
> Python is nice. I think Jon Schipp has a script or two that assist in
> converting indicators too.
>
> -AK
>
> On Jan 13, 2015 1:38 PM, "Damon Rouse" wrote:
>>
>> I've just started playing with the intel framework and have a question for
>> everyone. How are people automating the conversion of their intel data
>> (threat feeds, etc.)
into the format the BRO intel files require.
>>
>> Are there any solutions out there to automate this?
>>
>> Thanks
>> Damon
>>
>> _______________________________________________
>> Bro mailing list
>> bro at bro-ids.org
>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro
>
> _______________________________________________
> Bro mailing list
> bro at bro-ids.org
> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro

--
Jon Schipp,
jonschipp.com, sickbits.net, opennsm.ncsa.illinois.edu

From jonschipp at gmail.com Tue Jan 13 14:22:17 2015
From: jonschipp at gmail.com (Jon Schipp)
Date: Tue, 13 Jan 2015 16:22:17 -0600
Subject: [Bro] Intel Framework Question
In-Reply-To:
References:
Message-ID:

Also, CIF has a Bro output plugin. The following article on the Bro Blog covers using both of the aforementioned tools: http://blog.bro.org/2014/01/intelligence-data-and-bro_4980.html

On Tue, Jan 13, 2015 at 4:19 PM, Jon Schipp wrote:
> $ wget https://raw.githubusercontent.com/jonschipp/mal-dnssearch/master/tools/mal-dns2bro.sh
> :)
>
> On Tue, Jan 13, 2015 at 3:59 PM, anthony kasza wrote:
>> Python is nice. I think Jon Schipp has a script or two that assist in
>> converting indicators too.
>>
>> -AK
>>
>> On Jan 13, 2015 1:38 PM, "Damon Rouse" wrote:
>>>
>>> I've just started playing with the intel framework and have a question for
>>> everyone. How are people automating the conversion of their intel data
>>> (threat feeds, etc.) into the format the BRO intel files require.
>>>
>>> Are there any solutions out there to automate this?
>>>
>>> Thanks
>>> Damon
>>>
>>> _______________________________________________
>>> Bro mailing list
>>> bro at bro-ids.org
>>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro
>>
>> _______________________________________________
>> Bro mailing list
>> bro at bro-ids.org
>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro
>
> --
> Jon Schipp,
> jonschipp.com, sickbits.net, opennsm.ncsa.illinois.edu

--
Jon Schipp,
jonschipp.com, sickbits.net, opennsm.ncsa.illinois.edu

From wsladekjr at hotmail.com Wed Jan 14 07:53:56 2015
From: wsladekjr at hotmail.com (Ward Sladek)
Date: Wed, 14 Jan 2015 09:53:56 -0600
Subject: [Bro] Redefine const that does not have "&redef" attribute
Message-ID:

I want to redefine Bro's HTTP ports but I'm not having any luck...

The following code is in base/protocols/http/main.bro:

const ports = { 81/tcp, 631/tcp, 1080/tcp, 8000/tcp, 8888/tcp, };
redef likely_server_ports += { ports };

Here is what I've tried:

redef HTTP::ports = { 81/tcp, 631/tcp, 1080/tcp, 8000/tcp, 8888/tcp, };

Which generates the error "already defined (HTTP::ports)". I also tried:

const custom_http_ports = { 81/tcp, 631/tcp, 1080/tcp, 8000/tcp, 8888/tcp, };
redef HTTP::likely_server_ports += { custom_http_ports };

Which generates the error ""redef" used but not previously defined (HTTP::likely_server_ports)".

A nudge in the right direction would be appreciated. Thanks

From hosom at battelle.org Wed Jan 14 09:06:41 2015
From: hosom at battelle.org (Hosom, Stephen M)
Date: Wed, 14 Jan 2015 17:06:41 +0000
Subject: [Bro] Redefine const that does not have "&redef" attribute
In-Reply-To:
References:
Message-ID:

Without the &redef flag set, you can't redefine a constant. You would have to modify Bro's HTTP scripts in order to make the change you are trying to make. That is generally a bad idea.
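For script authors, the distinction looks like this. A minimal sketch, where MyModule and its constants are made-up examples rather than anything in the Bro distribution:

```bro
# Only constants declared with &redef can be changed later.
module MyModule;

export {
    # Site code may alter this one with a redef statement.
    const watched_ports: set[port] = { 81/tcp } &redef;

    # Without &redef, any later redef of this is a script error.
    const fixed_ports: set[port] = { 631/tcp };
}

# Elsewhere (e.g. in local.bro):
redef MyModule::watched_ports += { 8888/tcp };   # OK
# redef MyModule::fixed_ports += { 8000/tcp };   # error at parse time
```

HTTP::ports in the stock scripts is declared without &redef, which is why both attempts above fail.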
I suspect that you're trying to get Bro to detect HTTP on a non-standard port. If this is the case, then you are likely already analyzing the traffic, as Bro dynamically detects HTTP running on any port and analyzes it all the same. Try capturing the non-standard HTTP and running it through Bro to see if it finds it, I'll bet that it does. The signatures that enable the HTTP analyzer on non-standard ports are located at bro/scripts/base/protocols/http/dpd.sig ( https://github.com/bro/bro/blob/master/scripts/base/protocols/http/dpd.sig ) . Don't modify those either though. If you truly have found an HTTP traffic pattern that Bro isn't detecting, you should write a signature similar to these ones, and include 'enable "http"' like they have done here. Here's a link to the documentation on signatures: https://www.bro.org/sphinx-git/frameworks/signatures.html Let me know how it goes! From: bro-bounces at bro.org [mailto:bro-bounces at bro.org] On Behalf Of Ward Sladek Sent: Wednesday, January 14, 2015 10:54 AM To: bro at bro.org Subject: [Bro] Redefine const that does not have "&redef" attribute I want to redefine Bro's HTTP ports but I'm not having any luck... The following code is in base/protocols/http/main.bro const ports = { 81/tcp, 631/tcp, 1080/tcp, 8000/tcp, 8888/tcp, }; redef likely_server_ports += { ports }; Here is what I've tried: redef HTTP::ports = { 81/tcp, 631/tcp, 1080/tcp, 8000/tcp, 8888/tcp, }; Which generates error "already defined (HTTP::ports)".... I also tried: const custom_http_ports = { 81/tcp, 631/tcp, 1080/tcp, 8000/tcp, 8888/tcp, }; redef HTTP::likely_server_ports += { custom_http_ports }; Which generates error ""redef" used but not previously defined (HTTP::likely_server_ports)" A nudge in the right direction would be appreciated. Thanks -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150114/c73d729a/attachment-0001.html

From wsladekjr at hotmail.com Wed Jan 14 10:14:22 2015
From: wsladekjr at hotmail.com (Ward Sladek)
Date: Wed, 14 Jan 2015 12:14:22 -0600
Subject: [Bro] Redefine const that does not have "&redef" attribute
In-Reply-To:
References:
Message-ID:

Thanks for the references and tips, that helps... But I'm actually trying to do the opposite - instead of getting more detection, I'm trying to get less. I want to exclude port 80 as our proxy has that covered (essentially causing duplication in SIEM)...

From: hosom at battelle.org
To: wsladekjr at hotmail.com; bro at bro.org
Subject: RE: [Bro] Redefine const that does not have "&redef" attribute
Date: Wed, 14 Jan 2015 17:06:41 +0000

Without the &redef flag set, you can't redefine a constant. You would have to modify Bro's HTTP scripts in order to make the change you are trying to make. That is generally a bad idea.

I suspect that you're trying to get Bro to detect HTTP on a non-standard port. If this is the case, then you are likely already analyzing the traffic, as Bro dynamically detects HTTP running on any port and analyzes it all the same. Try capturing the non-standard HTTP and running it through Bro to see if it finds it; I'll bet that it does.

The signatures that enable the HTTP analyzer on non-standard ports are located at bro/scripts/base/protocols/http/dpd.sig ( https://github.com/bro/bro/blob/master/scripts/base/protocols/http/dpd.sig ). Don't modify those either though. If you truly have found an HTTP traffic pattern that Bro isn't detecting, you should write a signature similar to these ones, and include 'enable "http"' like they have done here.

Here's a link to the documentation on signatures: https://www.bro.org/sphinx-git/frameworks/signatures.html

Let me know how it goes!
From: bro-bounces at bro.org [mailto:bro-bounces at bro.org] On Behalf Of Ward Sladek Sent: Wednesday, January 14, 2015 10:54 AM To: bro at bro.org Subject: [Bro] Redefine const that does not have "&redef" attribute I want to redefine Bro's HTTP ports but I'm not having any luck... The following code is in base/protocols/http/main.bro const ports = { 81/tcp, 631/tcp, 1080/tcp, 8000/tcp, 8888/tcp, }; redef likely_server_ports += { ports }; Here is what I've tried: redef HTTP::ports = { 81/tcp, 631/tcp, 1080/tcp, 8000/tcp, 8888/tcp, }; Which generates error "already defined (HTTP::ports)".... I also tried: const custom_http_ports = { 81/tcp, 631/tcp, 1080/tcp, 8000/tcp, 8888/tcp, }; redef HTTP::likely_server_ports += { custom_http_ports }; Which generates error ""redef" used but not previously defined (HTTP::likely_server_ports)" A nudge in the right direction would be appreciated. Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150114/b87b3721/attachment.html From hosom at battelle.org Wed Jan 14 10:26:40 2015 From: hosom at battelle.org (Hosom, Stephen M) Date: Wed, 14 Jan 2015 18:26:40 +0000 Subject: [Bro] Redefine const that does not have "&redef" attribute In-Reply-To: References: , Message-ID: That helps a lot. There are a lot of ways to solve your problem. If you want to not analyze this traffic at all, I'd go the route of dropping the traffic with either your tap aggregation tool or a bpf in Bro. This would have the benefit of not using resources on traffic you're not interested in. Documentation here: https://www.bro.org/sphinx/scripts/base/frameworks/packet-filter/main.bro.html If you want to not log this traffic, but still analyze it, you can do this with log filters using the logging framework. The logging framework would also have the option of still logging the traffic, but logging it to a separate log. 
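A rough sketch of that filtering approach, assuming the stock HTTP::Info record (where the responder port lives in id$resp_p); the "pre-proxy-http" name is just an example:

```bro
event bro_init()
    {
    # Route port-80 (proxy-covered) traffic to its own log file...
    Log::add_filter(HTTP::LOG, [$name="pre-proxy-http",
                                $path="pre-proxy-http",
                                $pred=function(rec: HTTP::Info): bool
                                    { return rec$id$resp_p == 80/tcp; }]);

    # ...and keep those records out of the default http.log.
    local f = Log::get_filter(HTTP::LOG, "default");
    f$pred = function(rec: HTTP::Info): bool
        { return rec$id$resp_p != 80/tcp; };
    Log::add_filter(HTTP::LOG, f);
    }
```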
In this way, you could have pre-proxy-http.log and http.log (just for example). It's incredibly flexible. Documentation here: https://www.bro.org/development/logging.html#filtering There are probably other ways to accomplish what you want... but those are just a couple. It's possible someone else will chime in with better solutions. From: Ward Sladek [mailto:wsladekjr at hotmail.com] Sent: Wednesday, January 14, 2015 1:14 PM To: Hosom, Stephen M; bro at bro.org Subject: RE: [Bro] Redefine const that does not have "&redef" attribute Thanks for the references and tips, that helps... But I'm actually trying to do the opposite - instead of getting more detection, I'm trying to get less. I want to exclude port 80 as our proxy has that covered (essentially causing duplication in SIEM)... ________________________________ From: hosom at battelle.org To: wsladekjr at hotmail.com; bro at bro.org Subject: RE: [Bro] Redefine const that does not have "&redef" attribute Date: Wed, 14 Jan 2015 17:06:41 +0000 Without the &redef flag set, you can't redefine a constant. You would have to modify Bro's HTTP scripts in order to make the change you are trying to make. That is generally a bad idea. I suspect that you're trying to get Bro to detect HTTP on a non-standard port. If this is the case, then you are likely already analyzing the traffic, as Bro dynamically detects HTTP running on any port and analyzes it all the same. Try capturing the non-standard HTTP and running it through Bro to see if it finds it, I'll bet that it does. The signatures that enable the HTTP analyzer on non-standard ports are located at bro/scripts/base/protocols/http/dpd.sig ( https://github.com/bro/bro/blob/master/scripts/base/protocols/http/dpd.sig ) . Don't modify those either though. If you truly have found an HTTP traffic pattern that Bro isn't detecting, you should write a signature similar to these ones, and include 'enable "http"' like they have done here. 
Here's a link to the documentation on signatures: https://www.bro.org/sphinx-git/frameworks/signatures.html

Let me know how it goes!

From: bro-bounces at bro.org [mailto:bro-bounces at bro.org] On Behalf Of Ward Sladek
Sent: Wednesday, January 14, 2015 10:54 AM
To: bro at bro.org
Subject: [Bro] Redefine const that does not have "&redef" attribute

I want to redefine Bro's HTTP ports but I'm not having any luck...

The following code is in base/protocols/http/main.bro:

const ports = { 81/tcp, 631/tcp, 1080/tcp, 8000/tcp, 8888/tcp, };
redef likely_server_ports += { ports };

Here is what I've tried:

redef HTTP::ports = { 81/tcp, 631/tcp, 1080/tcp, 8000/tcp, 8888/tcp, };

Which generates the error "already defined (HTTP::ports)". I also tried:

const custom_http_ports = { 81/tcp, 631/tcp, 1080/tcp, 8000/tcp, 8888/tcp, };
redef HTTP::likely_server_ports += { custom_http_ports };

Which generates the error ""redef" used but not previously defined (HTTP::likely_server_ports)".

A nudge in the right direction would be appreciated. Thanks
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150114/94b1e765/attachment-0001.html

From tiburcesotohou at yahoo.fr Wed Jan 14 10:54:22 2015
From: tiburcesotohou at yahoo.fr (SOTOHOU Osince Tiburce)
Date: Wed, 14 Jan 2015 18:54:22 +0000
Subject: [Bro] Bro alert
Message-ID: <1421261662.74070.YahooMailAndroidMobile@web172206.mail.ir2.yahoo.com>

Hi,
I have used the 'ms08_067_netapi' exploit from Metasploit to attack an XP machine that Bro is monitoring. The attack succeeded and Bro detects the communication between the attacker machine and the attacked machine, but it does not alert. Why, and what do I have to do?
Thanks
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150114/942fb7ff/attachment.html

From jlay at slave-tothe-box.net Sat Jan 17 06:37:17 2015
From: jlay at slave-tothe-box.net (James Lay)
Date: Sat, 17 Jan 2015 07:37:17 -0700
Subject: [Bro] Revisiting log rotate only
Message-ID: <1421505437.3223.16.camel@JamesiMac>

Hey all,

I posted about this last August here:
http://mailman.icsi.berkeley.edu/pipermail/bro/2014-August/007329.html

I also noticed someone had a disappearing-log event, which I have seen before as well, here:
http://mailman.icsi.berkeley.edu/pipermail/bro/2015-January/007935.html

I documented my process for installing bro on Ubuntu 14.04 using just log rotation below:

sudo apt-get -y install cmake
sudo apt-get -y install python-dev
sudo apt-get -y install swig
cp /usr/local/bro/share/bro/site
cp /opt/bin/startbro <- command line bro with long --filter line
cp /opt/bin/startbro to /etc/rc.local
sudo ln -s /usr/local/bro/bin/bro /usr/local/bin/
sudo ln -s /usr/local/bro/bin/bro-cut /usr/local/bin/
sudo ln -s /usr/local/bro/bin/broctl /usr/local/bin/
sudo ln -s /usr/local/bro/share/broctl/scripts/archive-log /usr/local/bin/
sudo ln -s /usr/local/bro/share/broctl/scripts/broctl-config.sh /usr/local/bin/
sudo ln -s /usr/local/bro/share/broctl/scripts/create-link-for-log /usr/local/bin/
sudo ln -s /usr/local/bro/share/broctl/scripts/make-archive-name /usr/local/bin/
git clone https://github.com/jonschipp/mal-dnssearch.git
sudo make install

Specifics on log rotate only: add the below to local.bro:

redef Log::default_rotation_interval = 86400 secs;
redef Log::default_rotation_postprocessor_cmd = "archive-log";

Edit the below in broctl.cfg:

MailTo = jlay at slave-tothe-box.net
LogRotationInterval = 86400

sudo /usr/local/bro/bin/broctl install

Besides the edits to broctl.cfg, file locations are the default. The above works well usually...it's after a reboot that I have found things go bad.
Usually logs get rotated at midnight and I get an email with statistics, just what I need. I rebooted the machine on the 13th, and that's the last email or log rotation I got....this morning I see current has files and my logstash instance has data, so I believe the rotation got..."stuck". I'm kicking myself for not heading/tailing the files first, but after issuing a "sudo killall bro", those files in current vanished, no directory was created, and I received no email; that data is now gone (no big deal as this is at home). I decided to run broctl install again, then start and kill bro one more time. At that point, I got a new directory with log rotation and an email with a few minutes' worth of stats.

Please let me know if there's something I can do on my end to troubleshoot. Thank you.

James
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150117/e3e3405c/attachment.html

From andrew.ratcliffe at nswcsystems.co.uk Sun Jan 18 04:02:13 2015
From: andrew.ratcliffe at nswcsystems.co.uk (Andrew Ratcliffe)
Date: Sun, 18 Jan 2015 12:02:13 +0000
Subject: [Bro] Bro Intel framework - filter out
Message-ID: <16513199-4EB3-4466-BAD5-29B305341105@nswcsystems.co.uk>

Hi,
I am using a threat intelligence feed from a local installation of the Collective Intelligence Framework v2 and putting data into the Bro Intel framework.

andy at cif2:~$ cif --cc US --tags botnet -l 10 -c 85 -f bro > intel-2.dat

#fields indicator indicator_type meta.desc meta.cif_confidence meta.source
50.17.195.149 Intel::ADDR botnet|gozi 85 bambenekconsulting.com
50.17.195.149 Intel::ADDR botnet|gozi 85 bambenekconsulting.com
50.17.195.149 Intel::ADDR botnet|gozi 85 bambenekconsulting.com
50.17.195.149 Intel::ADDR botnet|gozi 85 bambenekconsulting.com

echo -e "testmyids.com\tIntel::DOMAIN\tsuspicious\t85\tTester" >> intel-2.dat

I added the echo line above for testing purposes, so I can trigger an Intel alert to test everything is working.
This all works great and I can check my Kibana Bro intel dashboard for alerts. The problem is that CIF2 queries DNS servers for IP addresses for domains in the intel data - so I get a false positive showing my CIF2 server as the source. I think the answer is to filter out my CIF2 server from Bro, but I've not managed to find an example I can follow anywhere. Any suggestions much appreciated. Kind regards, Andy Andrew.Ratcliffe at NSWCSystems.co.uk CISSP, GCIA, GCIH, GPEN, GWAPT, CSTA, CSTP Blog.InfoSecMatters.net -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150118/a8334747/attachment.html From daniel.harrison4 at baesystems.com Sun Jan 18 09:12:51 2015 From: daniel.harrison4 at baesystems.com (Harrison, Daniel (US SSA)) Date: Sun, 18 Jan 2015 17:12:51 +0000 Subject: [Bro] Log all client cipher suites Message-ID: <20150118171254.2C6262C4024@rock.ICSI.Berkeley.EDU> I am trying to write a script to log all client_hello cipher suites to the ssl log, preferably in the ASCII hex format as they look in the pcap. I hacked up a similar script and got it to create the log entry but the column shows only (empty). Any idea on how to do this? Thanks. ****************************** @load base/protocols/ssl/main module SSL; export { redef record Info += { ciphers: vector of string &log &optional; }; ## A boolean value to determine if client cipher suites are to be logged. const log_ciphers = T &redef; } event ssl_client_hello(c: connection, version: count, possible_ts: time, client_random: string, session_id: string, ciphers: index_vec) { if ( ! c?$ssl ) return; if ( log_ciphers ) { c$ssl$ciphers = vector(); } } ****************************** Scott Harrison -------------- next part -------------- An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150118/110b5cf7/attachment-0001.html -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 6727 bytes Desc: not available Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150118/110b5cf7/attachment-0001.bin From johanna at icir.org Sun Jan 18 10:02:18 2015 From: johanna at icir.org (Johanna Amann) Date: Sun, 18 Jan 2015 10:02:18 -0800 Subject: [Bro] Log all client cipher suites In-Reply-To: <20150118171254.2C6262C4024@rock.ICSI.Berkeley.EDU> References: <20150118171254.2C6262C4024@rock.ICSI.Berkeley.EDU> Message-ID: <20150118180218.GA6913@Beezling.local> Hello Daniel, On Sun, Jan 18, 2015 at 05:12:51PM +0000, Harrison, Daniel (US SSA) wrote: > I am trying to write a script to log all client_hello cipher suites to the > ssl log, preferably in the ascii hex format as they look in the pcap. I > hacked up a similar script and got it to create the log entry but the column > shows only (empty). Any idea on how to do this? Thanks. The reason your script does not work at the moment is that you only assigned an empty vector in the ssl_client_hello event without passing it the actual data. I modified it slightly below to just dump the raw number of all client ciphers, converted into hex, into the log. Note that it drops leading zeros. I hope this helps, Johanna ---- @load base/protocols/ssl/main module SSL; export { redef record Info += { ciphers: vector of string &log &optional; }; ## A boolean value to determine if client cipher suites are to be logged. const log_ciphers = T &redef; } event ssl_client_hello(c: connection, version: count, possible_ts: time, client_random: string, session_id: string, ciphers: index_vec) { if ( !
c?$ssl ) return; if ( log_ciphers ) { c$ssl$ciphers = vector(); for ( i in ciphers ) c$ssl$ciphers[i] = fmt("%x", ciphers[i]); } } From itsecderek at gmail.com Sun Jan 18 12:11:49 2015 From: itsecderek at gmail.com (Derek Banks) Date: Sun, 18 Jan 2015 15:11:49 -0500 Subject: [Bro] Bro Intel framework - filter out In-Reply-To: <16513199-4EB3-4466-BAD5-29B305341105@nswcsystems.co.uk> References: <16513199-4EB3-4466-BAD5-29B305341105@nswcsystems.co.uk> Message-ID: You could stop CIF from doing the lookups if you wanted to (or not, depends on if you want that data). Something like this (depending on how you are doing notices) should work: const intel_server_whitelist = {10.10.10.10}; hook Notice::policy(n: Notice::Info) { if ( n$note == Intel::Notice && n?$src && !(n$src in intel_server_whitelist ) ) { add n$actions[Notice::ACTION_EMAIL]; } } Regards, Derek On Sun, Jan 18, 2015 at 7:02 AM, Andrew Ratcliffe < andrew.ratcliffe at nswcsystems.co.uk> wrote: > Hi, > I am using a threat intelligence feed from a local installation of the > Collective Intelligence Framework v2 and putting data into the Bro Intel > framework. > andy at cif2:~$ cif --cc US --tags botnet -l 10 -c 85 -f bro > intel-2.dat > #fields indicator indicator_type meta.desc meta.cif_confidence meta.source > 50.17.195.149 Intel::ADDR botnet|gozi 85 bambenekconsulting.com > 50.17.195.149 Intel::ADDR botnet|gozi 85 bambenekconsulting.com > 50.17.195.149 Intel::ADDR botnet|gozi 85 bambenekconsulting.com > 50.17.195.149 Intel::ADDR botnet|gozi 85 bambenekconsulting.com > > echo -e "testmyids.com\tIntel::DOMAIN\tsuspicious\t85\tTester" >> > intel-2.dat > Add the above for testing purposes so I can trigger an Intel alert to test > everything is working. > > This all works great and I can check my Kibana Bro intel dashboard for > alerts. > > The problem is that, CIF2 queries DNS servers for IP addresses for > domains in the intel data - so I get a false positive showing my CIF2 > server as the source.
> > I think the answer is to filter out my CIF2 server from Bro, but I've > not managed to find an example I can follow anywhere. Any suggestions > much appreciated. > > Kind regards, > Andy > Andrew.Ratcliffe at NSWCSystems.co.uk > CISSP, GCIA, GCIH, GPEN, GWAPT, CSTA, CSTP > Blog.InfoSecMatters.net > > > > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150118/c530b971/attachment.html From andrew.ratcliffe at nswcsystems.co.uk Sun Jan 18 14:50:12 2015 From: andrew.ratcliffe at nswcsystems.co.uk (Andrew Ratcliffe) Date: Sun, 18 Jan 2015 22:50:12 +0000 Subject: [Bro] Bro Intel framework - filter out In-Reply-To: References: <16513199-4EB3-4466-BAD5-29B305341105@nswcsystems.co.uk> Message-ID: <498FC5EB-A6F1-492F-B3C4-7480B8203AAB@nswcsystems.co.uk> Thanks for the suggestion. I'm not using the notice, though, just the intel.log: @load frameworks/intel/seen redef Intel::read_files += { "/usr/local/bro/share/bro/site/intel-2.dat" }; Is there no way to simply apply a BPF filter to Bro? Kind regards, Andy Andrew.Ratcliffe at NSWCSystems.co.uk CISSP, GCIA, GCIH, GPEN, GWAPT, CSTA, CSTP, CWSA Blog.InfoSecMatters.net On 18 Jan 2015, at 20:11, Derek Banks > wrote: You could stop CIF from doing the lookups if you wanted to (or not, depends on if you want that data).
Something like this (depending on how you are doing notices) should work: const intel_server_whitelist = {10.10.10.10}; hook Notice::policy(n: Notice::Info) { if ( n$note == Intel::Notice && n?$src && !(n$src in intel_server_whitelist ) ) { add n$actions[Notice::ACTION_EMAIL]; } } Regards, Derek On Sun, Jan 18, 2015 at 7:02 AM, Andrew Ratcliffe > wrote: Hi, I am using a threat intelligence feed from a local installation of the Collective Intelligence Framework v2 and putting data into the Bro Intel framework. andy at cif2:~$ cif --cc US --tags botnet -l 10 -c 85 -f bro > intel-2.dat #fields indicator indicator_type meta.desc meta.cif_confidence meta.source 50.17.195.149 Intel::ADDR botnet|gozi 85 bambenekconsulting.com 50.17.195.149 Intel::ADDR botnet|gozi 85 bambenekconsulting.com 50.17.195.149 Intel::ADDR botnet|gozi 85 bambenekconsulting.com 50.17.195.149 Intel::ADDR botnet|gozi 85 bambenekconsulting.com echo -e "testmyids.com\tIntel::DOMAIN\tsuspicious\t85\tTester" >> intel-2.dat Add the above for testing purposes so I can trigger an Intel alert to test everything is working. This all works great and I can check my Kibana Bro intel dashboard for alerts. The problem is that, CIF2 queries DNS servers for IP addresses for domains in the intel data - so I get a false positive showing my CIF2 server as the source. I think the answer is to filter out my CIF2 server from Bro, but I?ve not managed to find an example I can follow anywhere. Any suggestions much appreciated. Kind regards, Andy Andrew.Ratcliffe at NSWCSystems.co.uk CISSP, GCIA, GCIH, GPEN, GWAPT, CSTA, CSTP Blog.InfoSecMatters.net _______________________________________________ Bro mailing list bro at bro-ids.org http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150118/12ace742/attachment-0001.html From mike.patterson at uwaterloo.ca Sun Jan 18 15:31:48 2015 From: mike.patterson at uwaterloo.ca (Mike Patterson) Date: Sun, 18 Jan 2015 23:31:48 +0000 Subject: [Bro] Bro Intel framework - filter out In-Reply-To: <498FC5EB-A6F1-492F-B3C4-7480B8203AAB@nswcsystems.co.uk> References: <16513199-4EB3-4466-BAD5-29B305341105@nswcsystems.co.uk> <498FC5EB-A6F1-492F-B3C4-7480B8203AAB@nswcsystems.co.uk> Message-ID: <697C027B-B61F-452A-B381-9F91873FE367@uwaterloo.ca> Here's how I do it: event bro_init() &priority=-12 { restrict_filters["ignore"] = "not (net 10.0.0.1/24 or host 10.1.2.3)"; PacketFilter::install(); } There's probably other, possibly even better, ways to do it, but this works for me. Mike > On Jan 18, 2015, at 5:50 PM, Andrew Ratcliffe wrote: > > Thanks for the suggestion. I'm not using the notice though just the intel.log : > > @load frameworks/intel/seen > > redef Intel::read_files += { > "/usr/local/bro/share/bro/site/intel-2.dat" > }; > > Is there no way to simply apply a BPF filter to Bro? > Kind regards, > Andy > Andrew.Ratcliffe at NSWCSystems.co.uk > CISSP, GCIA, GCIH, GPEN, GWAPT, CSTA, CSTP, CWSA > Blog.InfoSecMatters.net > > > > >> On 18 Jan 2015, at 20:11, Derek Banks wrote: >> >> You could stop CIF from doing the lookups if you wanted to (or not, depends on if you want that data). Something like this (depending on how you are doing notices) should work: >> >> const intel_server_whitelist = {10.10.10.10}; >> >> hook Notice::policy(n: Notice::Info) >> { >> if ( n$note == Intel::Notice && n?$src && !(n$src in intel_server_whitelist ) ) >> { >> add n$actions[Notice::ACTION_EMAIL]; >> } >> } >> >> Regards, >> Derek >> >> On Sun, Jan 18, 2015 at 7:02 AM, Andrew Ratcliffe wrote: >> Hi, >> I am using a threat intelligence feed from a local installation of the Collective Intelligence Framework v2 and putting data into the Bro Intel framework.
>> andy at cif2:~$ cif --cc US --tags botnet -l 10 -c 85 -f bro > intel-2.dat >> #fields indicator >> indicator_type meta.desc >> meta.cif_confidence meta.source >> 50.17.195.149 Intel::ADDR >> botnet|gozi 85 >> bambenekconsulting.com >> 50.17.195.149 Intel::ADDR >> botnet|gozi 85 >> bambenekconsulting.com >> 50.17.195.149 Intel::ADDR >> botnet|gozi 85 >> bambenekconsulting.com >> 50.17.195.149 Intel::ADDR >> botnet|gozi 85 >> bambenekconsulting.com >> >> echo -e "testmyids.com\tIntel::DOMAIN\tsuspicious\t85\tTester" >> intel-2.dat >> Add the above for testing purposes so I can trigger an Intel alert to test everything is working. >> >> This all works great and I can check my Kibana Bro intel dashboard for alerts. >> >> The problem is that, CIF2 queries DNS servers for IP addresses for domains in the intel data - so I get a false positive showing my CIF2 server as the source. >> >> I think the answer is to filter out my CIF2 server from Bro, but I?ve not managed to find an example I can follow anywhere. Any suggestions much appreciated. >> >> Kind regards, >> Andy >> Andrew.Ratcliffe at NSWCSystems.co.uk >> CISSP, GCIA, GCIH, GPEN, GWAPT, CSTA, CSTP >> Blog.InfoSecMatters.net >> >> >> >> >> >> _______________________________________________ >> Bro mailing list >> bro at bro-ids.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >> > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From jdonnelly at dyn.com Mon Jan 19 04:06:26 2015 From: jdonnelly at dyn.com (John Donnelly) Date: Mon, 19 Jan 2015 06:06:26 -0600 Subject: [Bro] Do I have to free params sent to mgr.dispatch ? 
Message-ID: Given the following sample: RecordVal* r = new RecordVal(dns_telemetry_qname_stats); r->Assign(0, new Val(ts, TYPE_DOUBLE)); r->Assign(1, new StringVal(key)); r->Assign(2, new Val(qname_v->zone_id, TYPE_COUNT)); r->Assign(3, new Val(qname_v->cust_id, TYPE_COUNT)); r->Assign(4, new Val(qname_v->cnt, TYPE_COUNT)); r->Assign(5, new StringVal(sts)); val_list* vl = new val_list; vl->append(r); mgr.Dispatch(new Event(dns_telemetry_qname_info, vl), true); Does Dispatch delete these resources ? -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150119/0fb0e237/attachment.html From jlay at slave-tothe-box.net Mon Jan 19 05:57:35 2015 From: jlay at slave-tothe-box.net (James Lay) Date: Mon, 19 Jan 2015 06:57:35 -0700 Subject: [Bro] Revisiting log rotate only In-Reply-To: <1421505437.3223.16.camel@JamesiMac> References: <1421505437.3223.16.camel@JamesiMac> Message-ID: <1421675855.3196.3.camel@JamesiMac> On Sat, 2015-01-17 at 07:37 -0700, James Lay wrote: > Hey all, > > I posted about this last August here: > > http://mailman.icsi.berkeley.edu/pipermail/bro/2014-August/007329.html > > I also noticed someone have a disappearing log event which I have seen > before as well here: > > http://mailman.icsi.berkeley.edu/pipermail/bro/2015-January/007935.html > > I documented my process on installing bro on Ubuntu 14.04 using just > log rotation below: > > sudo apt-get -y install cmake > sudo apt-get -y install python-dev > sudo apt-get -y install swig > cp /usr/local/bro/share/bro/site > cp /opt/bin/startbro <- command line bro with long --filter line > cp /opt/bin/startbro to /etc/rc.local > sudo ln -s /usr/local/bro/bin/bro /usr/local/bin/ > sudo ln -s /usr/local/bro/bin/bro-cut /usr/local/bin/ > sudo ln -s /usr/local/bro/bin/broctl /usr/local/bin/ > sudo ln > -s /usr/local/bro/share/broctl/scripts/archive-log /usr/local/bin/ > sudo ln > -s 
/usr/local/bro/share/broctl/scripts/broctl-config.sh /usr/local/bin/ > sudo ln > -s /usr/local/bro/share/broctl/scripts/create-link-for-log /usr/local/bin/ > sudo ln > -s /usr/local/bro/share/broctl/scripts/make-archive-name /usr/local/bin/ > git clone https://github.com/jonschipp/mal-dnssearch.git > sudo make install > > specifics on log rotate only: > > add the below to local.bro > redef Log::default_rotation_interval = 86400 secs; > redef Log::default_rotation_postprocessor_cmd = "archive-log"; > edit the below in broctl.cfg > MailTo = jlay at slave-tothe-box.net > LogRotationInterval = 86400 > sudo /usr/local/bro/bin/broctl install > > Besides the edits to broctl.cfg, file locations are the default. The > above works well usually...it's after a reboot I have found things go > bad. Usually logs get rotated at midnight and I get an email with > statistics, just what I need. I rebooted the machine on the 13, and > that's the last email or log rotation I got....this morning I see > current has files and my logstash instance has data so I believe the > rotation got..."stuck". I'm kicking myself for not heading/tailing > the files first, but after issuing a "sudo killall bro", those file in > current vanished, no directory was created, and I received no email, > that data is now gone (no big deal as this is at home). I decided to > run broctl install again, then start and kill bro one more time. At > that point, I got a new directory with log rotation and an email with > minutes or so of stats. Please let me know if there's something I can > do on my end to trouble shoot. Thank you. > > James > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro Confirming that this method is no longer working. Heading my connlog file I see: #open 2015-01-19-00-00-05 my /usr/local/bro/logs is completely missing Jan 18th. 
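The #open line James quotes is the header every Bro ASCII log starts with, so comparing it against the expected rotation time is a quick check for stuck rotation. A small Python sketch of reading it (real logs separate the fields with tabs; the helper name is mine):

```python
from datetime import datetime

def log_open_time(header_line: str) -> datetime:
    """Extract the timestamp from a Bro ASCII log '#open' header line,
    e.g. '#open 2015-01-19-00-00-05'."""
    tag, ts = header_line.split()[:2]
    assert tag == "#open"
    return datetime.strptime(ts, "%Y-%m-%d-%H-%M-%S")
```

If the timestamp is a day (or more) older than the last expected rotation boundary, rotation never fired for that file.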
From my broctl.cfg: SpoolDir = /usr/local/bro/spool LogDir = /usr/local/bro/logs LogRotationInterval = 86400 From my /usr/local/bro/share/bro/site/local.bro: redef Log::default_rotation_interval = 86400 secs; redef Log::default_rotation_postprocessor_cmd = "archive-log"; Anything else I can do to debug this? Thank you. James -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150119/31353507/attachment.html From seth at icir.org Mon Jan 19 07:19:51 2015 From: seth at icir.org (Seth Hall) Date: Mon, 19 Jan 2015 10:19:51 -0500 Subject: [Bro] Bro Intel framework - filter out In-Reply-To: <697C027B-B61F-452A-B381-9F91873FE367@uwaterloo.ca> References: <16513199-4EB3-4466-BAD5-29B305341105@nswcsystems.co.uk> <498FC5EB-A6F1-492F-B3C4-7480B8203AAB@nswcsystems.co.uk> <697C027B-B61F-452A-B381-9F91873FE367@uwaterloo.ca> Message-ID: <11B5C774-1D37-494E-8998-B0E1611E85C5@icir.org> > On Jan 18, 2015, at 6:31 PM, Mike Patterson wrote: > > There's probably other, possibly even better, ways to do it, but this works for me. FWIW, there is the exclude function in the packet filter framework. event bro_init() { PacketFilter::exclude("ignore this stuff", "net 10.0.0.1/24 or host 10.1.2.3"); } .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From andrew.ratcliffe at nswcsystems.co.uk Mon Jan 19 09:00:04 2015 From: andrew.ratcliffe at nswcsystems.co.uk (Andrew Ratcliffe) Date: Mon, 19 Jan 2015 17:00:04 +0000 Subject: [Bro] Bro Intel framework - filter out In-Reply-To: <11B5C774-1D37-494E-8998-B0E1611E85C5@icir.org> References: <16513199-4EB3-4466-BAD5-29B305341105@nswcsystems.co.uk> <498FC5EB-A6F1-492F-B3C4-7480B8203AAB@nswcsystems.co.uk> <697C027B-B61F-452A-B381-9F91873FE367@uwaterloo.ca> <11B5C774-1D37-494E-8998-B0E1611E85C5@icir.org> Message-ID: Thanks, that's really what I was looking for.
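The PacketFilter::exclude call discussed above amounts to negating a BPF predicate of the form "net X or host Y" out of the capture filter. Its effect on a given address can be modelled in a few lines of Python to preview which traffic would disappear from the intel matches (addresses are the placeholder values used in the thread; the helper is an illustration, not part of Bro):

```python
import ipaddress

def excluded(ip: str, nets=("10.0.0.1/24",), hosts=("10.1.2.3",)) -> bool:
    """True if `ip` matches 'net <nets> or host <hosts>' -- i.e. the
    traffic a 'not (...)' capture filter would drop.
    strict=False tolerates prefixes written with host bits set,
    like the 10.0.0.1/24 in the example filter."""
    addr = ipaddress.ip_address(ip)
    return str(addr) in hosts or any(
        addr in ipaddress.ip_network(n, strict=False) for n in nets
    )
```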
I had seen the PacketFilter framework in the Bro documentation but when I look at the Bro docs it's hard to figure out how to do stuff; I guess it's me, I really need to find a good resource for learning the Bro language. Kind regards, Andy Andrew.Ratcliffe at NSWCSystems.co.uk CISSP, GCIA, GCIH, GPEN, GWAPT, CSTA, CSTP Blog.InfoSecMatters.net On 19 Jan 2015, at 15:19, Seth Hall > wrote: On Jan 18, 2015, at 6:31 PM, Mike Patterson > wrote: There's probably other, possibly even better, ways to do it, but this works for me. FWIW, there is the exclude function in the packet filter framework. event bro_init() { PacketFilter::exclude("ignore this stuff", "net 10.0.0.1/24 or host 10.1.2.3"); } .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150119/16456e67/attachment-0001.html From damonrouse at gmail.com Mon Jan 19 12:16:44 2015 From: damonrouse at gmail.com (Damon Rouse) Date: Mon, 19 Jan 2015 12:16:44 -0800 Subject: [Bro] Stats.log Growing Out of Control!!! Message-ID: @Dan: Both those files are there. My main issue seems to be that my stats.log file is growing by 20-30MB every 5 minutes when the cron runs. I then get the email from my original post. I'm circling back here to hopefully find a resolution. I opened a thread in the Security Onion and tried limiting these events in my broctl.cfg; that doesn't seem to work. I've stopped Bro, deleted the stats dir, did broctl install and then start, no go there either. Here's my SO thread for ref: https://groups.google.com/forum/#!topic/security-onion/bdmFGn3oj24 If anyone has any ideas or thoughts, please let me know. Any help is truly appreciated!
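At 20-30MB appended per 5-minute cron run, that stats.log is on track to grow by roughly 5.8-8.6GB per day; the arithmetic, for reference:

```python
def daily_growth_mb(mb_per_run: float, run_interval_min: int = 5) -> float:
    """Projected daily growth for a file appended once per cron run."""
    runs_per_day = 24 * 60 // run_interval_min   # 288 runs at 5-minute spacing
    return mb_per_run * runs_per_day
```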
Thanks Damon On Fri, Jan 2, 2015 at 2:16 PM, Thayer, Daniel N wrote: > The stats-to-csv script creates files with a ".csv" file extension in > the directory /logs/stats/www/ (where is the bro > install directory). In order for this script to work, it needs to > read two files: /spool/stats.log and /logs/stats/meta.dat > > > > > From: bro-bounces at bro.org [bro-bounces at bro.org] on behalf of Damon Rouse [ > damonrouse at gmail.com] > > Sent: Friday, January 02, 2015 11:58 AM > > To: bro at bro-ids.org > > Subject: [Bro] (no subject) > > > > > > > Happy New Year Everyone!!! > > Has anyone ever seen the following error before? Email alerts that come > in looks like this: > > > > > Subject: [Bro] cron: stats-to-csv failed > Body: > stats-to-csv failed > -- > [Automatically generated.] > > I started receiving these yesterday. They come in every 5 minutes and > I've never received them before yesterday. > > Bro is running fine, my system is completely updated and everything looks > good when I run a sostat (running BRO under Security Onion). > > Any insight is appreciated as I have no idea if they are something I > should look into or not. > > Thanks > Damon > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150119/758cfdd7/attachment.html From daniel.harrison4 at baesystems.com Mon Jan 19 14:40:23 2015 From: daniel.harrison4 at baesystems.com (Harrison, Daniel (US SSA)) Date: Mon, 19 Jan 2015 22:40:23 +0000 Subject: [Bro] Log all client cipher suites In-Reply-To: <20150118180218.GA6913@Beezling.local> Message-ID: <20150119224032.52DF72C401A@rock.ICSI.Berkeley.EDU> That worked, thanks. I changed the format to add leading zeros for the 2-byte ciphers but that doesn't take into account the 3-byte ones. Is there an easy way to keep the leading zeros in the hex no matter the length?
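The padding question above is plain printf-style width formatting: %x drops leading zeros, while a width specifier such as %04x pads to four hex digits (two bytes). Bro's fmt follows the same conventions; in Python terms:

```python
def cipher_hex(code: int, width: int = 4) -> str:
    """Format a cipher-suite code as fixed-width, zero-padded hex."""
    return format(code, "0%dx" % width)

# '%x'-style formatting drops the leading zero:
# format(0x2f, 'x') -> '2f'    vs.    cipher_hex(0x2f) -> '002f'
```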
@load base/protocols/ssl/main module SSL; export { redef record Info += { ciphers: vector of string &log &optional; }; ## A boolean value to determine if client headers are to be logged. const log_ciphers = T &redef; } event ssl_client_hello(c: connection, version: count, possible_ts: time, client_random: string, session_id: string, ciphers: index_vec) { if ( ! c?$ssl ) return; if ( log_ciphers ) { c$ssl$ciphers = vector(); for ( i in ciphers ) c$ssl$ciphers[i] = fmt("%04x", ciphers[i]); } } -----Original Message----- From: Johanna Amann [mailto:johanna at icir.org] Sent: Sunday, January 18, 2015 1:02 PM To: Harrison, Daniel (US SSA) Cc: bro at bro.org Subject: Re: [Bro] Log all client cipher suites Hello Daniel, On Sun, Jan 18, 2015 at 05:12:51PM +0000, Harrison, Daniel (US SSA) wrote: > I am trying to write a script to log all client_hello cipher suites to > the ssl log, preferably in the ascii hex format as they look in the > pcap. I hacked up a similar script and got it to create the log entry > but the column shows only (empty). Any idea on how to do this? Thanks. The reason your script does not work at the moment is, that you only assigned an empty vector in the ssl_client_hello event without passing it the actual data. I modified it slightly below to just dump the raw number of all client ciphers, converted into hex, into the log. Note that it drops 0's in the front. I hope this helps, Johanna ---- @load base/protocols/ssl/main module SSL; export { redef record Info += { ciphers: vector of string &log &optional; }; ## A boolean value to determine if client headers are to be logged. const log_ciphers = T &redef; } event ssl_client_hello(c: connection, version: count, possible_ts: time, client_random: string, session_id: string, ciphers: index_vec) { if ( ! 
c?$ssl ) return; if ( log_ciphers ) { c$ssl$ciphers = vector(); for ( i in ciphers ) c$ssl$ciphers[i] = fmt("%x", ciphers[i]); } } -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 6727 bytes Desc: not available Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150119/21039f73/attachment.bin From dnthayer at illinois.edu Mon Jan 19 15:39:18 2015 From: dnthayer at illinois.edu (Daniel Thayer) Date: Mon, 19 Jan 2015 17:39:18 -0600 Subject: [Bro] Stats.log Growing Out of Control!!! In-Reply-To: References: Message-ID: <54BD95A6.6000907@illinois.edu> I'd like to know why the stats-to-csv script is failing. Could you apply the attached patch, and then send me the contents of the "stats-to-csv failed" email? To apply the patch you'll need to change directory to (where is the Bro install prefix directory): /lib/broctl/BroControl In that directory you should see a file named "cron.py". On 01/19/2015 02:16 PM, Damon Rouse wrote: > @Dan: Both those files are there. > > What my main issue seems to be is that my stats.log file is growing by > 20-30MB every 5 minutes when the cron runs. I then get the email below > in my original post. > > I'm circling back here to hopefully find a resolution. I opened a > thread in the Security Onion and tried limiting these events in my > broctl.cfg. doesn't seem to work. I've stopped Bro, deleted the stats > dir, did brotcl install and then start, no go there either. > > Here's my SO thread for ref: > https://groups.google.com/forum/#!topic/security-onion/bdmFGn3oj24 > > If anyone has any ideas or thoughts, please let me know. Any help is > truly appreciated! > > Thanks > Damon > > On Fri, Jan 2, 2015 at 2:16 PM, Thayer, Daniel N > wrote: > > The stats-to-csv script creates files with a ".csv" file extension in > the directory /logs/stats/www/ (where is the bro > install directory). 
In order for this script to work, it needs to > read two files: /spool/stats.log and > /logs/stats/meta.dat > > > > > From: bro-bounces at bro.org > [bro-bounces at bro.org ] on behalf of > Damon Rouse [damonrouse at gmail.com ] > > Sent: Friday, January 02, 2015 11:58 AM > > To: bro at bro-ids.org > > Subject: [Bro] (no subject) > > > > > > > Happy New Year Everyone!!! > > Has anyone ever seen the following error before? Email alerts that > come in looks like this: > > > > > Subject: [Bro] cron: stats-to-csv failed > Body: > stats-to-csv failed > -- > [Automatically generated.] > > I started receiving these yesterday. They come in every 5 minutes > and I've never received them before yesterday. > > Bro is running fine, my system is completely updated and everything > looks good when I run a sostat (running BRO under Security Onion). > > Any insight is appreciated as I have no idea if they are something I > should look into or not. > > Thanks > Damon > > > > > > > -------------- next part -------------- A non-text attachment was scrubbed... Name: broctlcron.patch Type: text/x-patch Size: 388 bytes Desc: not available Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150119/b7b099b6/attachment.bin From damonrouse at gmail.com Mon Jan 19 17:26:39 2015 From: damonrouse at gmail.com (Damon Rouse) Date: Mon, 19 Jan 2015 17:26:39 -0800 Subject: [Bro] Stats.log Growing Out of Control!!! In-Reply-To: <54BD95A6.6000907@illinois.edu> References: <54BD95A6.6000907@illinois.edu> Message-ID: Here's the output after patching the cron.py file stats-to-csv failed ['manager ...', 'Traceback (most recent call last):', ' File "/opt/bro/share/broctl/scripts/stats-to-csv", line 134, in ', ' processNode(stats, wwwdir, "manager", False)', ' File "/opt/bro/share/broctl/scripts/stats-to-csv", line 87, in processNode', ' if m[1] != node:', 'IndexError: list index out of range'] -- [Automatically generated.] 
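The traceback above shows stats-to-csv indexing m[1] on a line that split into fewer than two fields, i.e. a malformed stats.log record. The generic guard against that class of failure looks like this (a Python sketch of the pattern, with a hypothetical helper name; not the actual broctl fix):

```python
def matches_node(line: str, node: str):
    """Split a whitespace-delimited stats record and compare its second
    field to `node`, tolerating short or corrupt lines instead of
    raising IndexError."""
    m = line.split()
    if len(m) < 2 or m[1] != node:
        return None
    return m
```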
On Mon, Jan 19, 2015 at 3:39 PM, Daniel Thayer wrote: > I'd like to know why the stats-to-csv script is failing. > Could you apply the attached patch, and then send me > the contents of the "stats-to-csv failed" email? > > To apply the patch you'll need to change directory to (where > is the Bro install prefix directory): > /lib/broctl/BroControl > In that directory you should see a file named "cron.py". > > > > On 01/19/2015 02:16 PM, Damon Rouse wrote: > >> @Dan: Both those files are there. >> >> What my main issue seems to be is that my stats.log file is growing by >> 20-30MB every 5 minutes when the cron runs. I then get the email below >> in my original post. >> >> I'm circling back here to hopefully find a resolution. I opened a >> thread in the Security Onion and tried limiting these events in my >> broctl.cfg. doesn't seem to work. I've stopped Bro, deleted the stats >> dir, did brotcl install and then start, no go there either. >> >> Here's my SO thread for ref: >> https://groups.google.com/forum/#!topic/security-onion/bdmFGn3oj24 >> >> If anyone has any ideas or thoughts, please let me know. Any help is >> truly appreciated! >> >> Thanks >> Damon >> >> On Fri, Jan 2, 2015 at 2:16 PM, Thayer, Daniel N > > wrote: >> >> The stats-to-csv script creates files with a ".csv" file extension in >> the directory /logs/stats/www/ (where is the bro >> install directory). In order for this script to work, it needs to >> read two files: /spool/stats.log and >> /logs/stats/meta.dat >> >> >> >> >> From: bro-bounces at bro.org >> [bro-bounces at bro.org ] on behalf of >> Damon Rouse [damonrouse at gmail.com ] >> >> Sent: Friday, January 02, 2015 11:58 AM >> >> To: bro at bro-ids.org >> >> Subject: [Bro] (no subject) >> >> >> >> >> >> >> Happy New Year Everyone!!! >> >> Has anyone ever seen the following error before? 
Email alerts that >> come in looks like this: >> >> >> >> >> Subject: [Bro] cron: stats-to-csv failed >> Body: >> stats-to-csv failed >> -- >> [Automatically generated.] >> >> I started receiving these yesterday. They come in every 5 minutes >> and I've never received them before yesterday. >> >> Bro is running fine, my system is completely updated and everything >> looks good when I run a sostat (running BRO under Security Onion). >> >> Any insight is appreciated as I have no idea if they are something I >> should look into or not. >> >> Thanks >> Damon >> >> >> >> >> >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150119/2d970612/attachment-0001.html From johanna at icir.org Mon Jan 19 20:52:58 2015 From: johanna at icir.org (Johanna Amann) Date: Mon, 19 Jan 2015 20:52:58 -0800 Subject: [Bro] Log all client cipher suites In-Reply-To: <20150119224032.52DF72C401A@rock.ICSI.Berkeley.EDU> References: <20150118180218.GA6913@Beezling.local> <20150119224032.52DF72C401A@rock.ICSI.Berkeley.EDU> Message-ID: <20150120045258.GA16534@Beezling.local> Hello Daniel, On Mon, Jan 19, 2015 at 10:40:23PM +0000, Harrison, Daniel (US SSA) wrote: > That worked, thanks. I changed the format to add leading zeros for the 2 > byte ciphers but that doesn't take into account the 3byte ones. > Is there an easy way to keep the leading zeros in the hex no matter the > length? All cipher suites in TLS are always exactly 2 bytes long - so if you have code to handle that, you should be good. Johanna From dnthayer at illinois.edu Mon Jan 19 21:20:04 2015 From: dnthayer at illinois.edu (Daniel Thayer) Date: Mon, 19 Jan 2015 23:20:04 -0600 Subject: [Bro] Stats.log Growing Out of Control!!! 
In-Reply-To: References: <54BD95A6.6000907@illinois.edu> Message-ID: <54BDE584.8080502@illinois.edu> Your spool/stats.log file became corrupt somehow, and then you started getting "stats-to-csv failed" emails every time cron ran. This was preventing broctl from removing this file, which explains why you were seeing such a fast rate of growth in the size of your logs/stats/stats.log file (broctl cron always appends spool/stats.log to logs/stats/stats.log). To fix this, you could just delete the spool/stats.log file, then you should no longer see the "stats-to-csv failed" emails. I will improve broctl in the next release to mitigate this problem. Thanks for reporting this issue. On 01/19/2015 07:26 PM, Damon Rouse wrote: > Here's the output after patching the cron.py file > > stats-to-csv failed > > ['manager ...', 'Traceback (most recent call last):', ' File > "/opt/bro/share/broctl/scripts/stats-to-csv", line 134, in ', > ' processNode(stats, wwwdir, "manager", False)', ' File > "/opt/bro/share/broctl/scripts/stats-to-csv", line 87, in processNode', > ' if m[1] != node:', 'IndexError: list index out of range'] > > -- > > [Automatically generated.] > > > On Mon, Jan 19, 2015 at 3:39 PM, Daniel Thayer > wrote: > > I'd like to know why the stats-to-csv script is failing. > Could you apply the attached patch, and then send me > the contents of the "stats-to-csv failed" email? > > To apply the patch you'll need to change directory to (where > is the Bro install prefix directory): > /lib/broctl/BroControl > In that directory you should see a file named "cron.py". > > > > On 01/19/2015 02:16 PM, Damon Rouse wrote: > > @Dan: Both those files are there. > > What my main issue seems to be is that my stats.log file is > growing by > 20-30MB every 5 minutes when the cron runs. I then get the > email below > in my original post. > > I'm circling back here to hopefully find a resolution. 
I opened a > thread in the Security Onion and tried limiting these events in my > broctl.cfg. doesn't seem to work. I've stopped Bro, deleted the > stats > dir, did brotcl install and then start, no go there either. > > Here's my SO thread for ref: > https://groups.google.com/__forum/#!topic/security-onion/__bdmFGn3oj24 > > > If anyone has any ideas or thoughts, please let me know. Any > help is > truly appreciated! > > Thanks > Damon > > On Fri, Jan 2, 2015 at 2:16 PM, Thayer, Daniel N > > >__> > wrote: > > The stats-to-csv script creates files with a ".csv" file > extension in > the directory /logs/stats/www/ (where is > the bro > install directory). In order for this script to work, it > needs to > read two files: /spool/stats.log and > /logs/stats/meta.dat > > > > > From: bro-bounces at bro.org > > > [bro-bounces at bro.org > >] on > behalf of > Damon Rouse [damonrouse at gmail.com > >] > > Sent: Friday, January 02, 2015 11:58 AM > > To: bro at bro-ids.org > > > > Subject: [Bro] (no subject) > > > > > > > Happy New Year Everyone!!! > > Has anyone ever seen the following error before? Email > alerts that > come in looks like this: > > > > > Subject: [Bro] cron: stats-to-csv failed > Body: > stats-to-csv failed > -- > [Automatically generated.] > > I started receiving these yesterday. They come in every 5 > minutes > and I've never received them before yesterday. > > Bro is running fine, my system is completely updated and > everything > looks good when I run a sostat (running BRO under Security > Onion). > > Any insight is appreciated as I have no idea if they are > something I > should look into or not. > > Thanks > Damon > > > > > > > > From damonrouse at gmail.com Mon Jan 19 23:18:08 2015 From: damonrouse at gmail.com (Damon Rouse) Date: Mon, 19 Jan 2015 23:18:08 -0800 Subject: [Bro] Stats.log Growing Out of Control!!! 
In-Reply-To: <54BDE584.8080502@illinois.edu> References: <54BD95A6.6000907@illinois.edu> <54BDE584.8080502@illinois.edu> Message-ID: Thanks Dan! That worked like a charm...no emails and my /nsm/bro/logs/stats/stats.log is no longer growing out of control. Thanks again and I really appreciate all your help on this! Damon On Mon, Jan 19, 2015 at 9:20 PM, Daniel Thayer wrote: > Your spool/stats.log file became corrupt somehow, and then you started > getting "stats-to-csv failed" emails every time cron ran. This was > preventing broctl from removing this file, which explains why you were > seeing such a fast rate of growth in the size of your > logs/stats/stats.log file (broctl cron always appends spool/stats.log > to logs/stats/stats.log). > > To fix this, you could just delete the spool/stats.log file, then > you should no longer see the "stats-to-csv failed" emails. > > I will improve broctl in the next release to mitigate this problem. > Thanks for reporting this issue. > > > On 01/19/2015 07:26 PM, Damon Rouse wrote: > >> Here's the output after patching the cron.py file >> >> stats-to-csv failed >> >> ['manager ...', 'Traceback (most recent call last):', ' File >> "/opt/bro/share/broctl/scripts/stats-to-csv", line 134, in ', >> ' processNode(stats, wwwdir, "manager", False)', ' File >> "/opt/bro/share/broctl/scripts/stats-to-csv", line 87, in processNode', >> ' if m[1] != node:', 'IndexError: list index out of range'] >> >> -- >> >> [Automatically generated.] >> >> >> On Mon, Jan 19, 2015 at 3:39 PM, Daniel Thayer > > wrote: >> >> I'd like to know why the stats-to-csv script is failing. >> Could you apply the attached patch, and then send me >> the contents of the "stats-to-csv failed" email? >> >> To apply the patch you'll need to change directory to (where >> is the Bro install prefix directory): >> /lib/broctl/BroControl >> In that directory you should see a file named "cron.py". 
>> >> >> >> On 01/19/2015 02:16 PM, Damon Rouse wrote: >> >> @Dan: Both those files are there. >> >> What my main issue seems to be is that my stats.log file is >> growing by >> 20-30MB every 5 minutes when the cron runs. I then get the >> email below >> in my original post. >> >> I'm circling back here to hopefully find a resolution. I opened a >> thread in the Security Onion and tried limiting these events in my >> broctl.cfg. doesn't seem to work. I've stopped Bro, deleted the >> stats >> dir, did brotcl install and then start, no go there either. >> >> Here's my SO thread for ref: >> https://groups.google.com/__forum/#!topic/security-onion/_ >> _bdmFGn3oj24 >> > bdmFGn3oj24> >> >> If anyone has any ideas or thoughts, please let me know. Any >> help is >> truly appreciated! >> >> Thanks >> Damon >> >> On Fri, Jan 2, 2015 at 2:16 PM, Thayer, Daniel N >> >> >__> >> wrote: >> >> The stats-to-csv script creates files with a ".csv" file >> extension in >> the directory /logs/stats/www/ (where is >> the bro >> install directory). In order for this script to work, it >> needs to >> read two files: /spool/stats.log and >> /logs/stats/meta.dat >> >> >> >> >> From: bro-bounces at bro.org >> > >> [bro-bounces at bro.org >> >] on >> behalf of >> Damon Rouse [damonrouse at gmail.com >> > >] >> >> Sent: Friday, January 02, 2015 11:58 AM >> >> To: bro at bro-ids.org >> > >> >> Subject: [Bro] (no subject) >> >> >> >> >> >> >> Happy New Year Everyone!!! >> >> Has anyone ever seen the following error before? Email >> alerts that >> come in looks like this: >> >> >> >> >> Subject: [Bro] cron: stats-to-csv failed >> Body: >> stats-to-csv failed >> -- >> [Automatically generated.] >> >> I started receiving these yesterday. They come in every 5 >> minutes >> and I've never received them before yesterday. >> >> Bro is running fine, my system is completely updated and >> everything >> looks good when I run a sostat (running BRO under Security >> Onion). 
>>
>> Any insight is appreciated as I have no idea if they are something I
>> should look into or not.
>>
>> Thanks
>> Damon

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150119/4704aa38/attachment.html

From Emmanuel.TORQUATO at monext.net  Tue Jan 20 03:45:59 2015
From: Emmanuel.TORQUATO at monext.net (Emmanuel TORQUATO)
Date: Tue, 20 Jan 2015 12:45:59 +0100
Subject: [Bro] using binpac for protocol parser
Message-ID: 

Hello All,

I would like to use binpac to create a protocol analyzer. The protocol is called cb2a; it is a bank exchange protocol. I have found very few sources that explain how to build an analyzer with binpac from scratch. The only ones I have are "binpac: A yacc for writing application protocol Parsers" and the sample-message example.

However, I have been able to use binpac to generate the .cc and .h files. After adding the new folder /usr/src/bro/src/analyzer/protocol/cb2a to the CMakeLists.txt with the files below and running ./configure and then make, I get the error:

Linking CXX executable bro
CMakeFiles/bro.dir/plugins.cc.o: In function `__make_sure_to_use_plugin_globals()':
/usr/src/bro-2.3/build/src/plugins.cc:69: undefined reference to `plugin::Bro_Cb2a::__plugin'

There is something to do with the file Plugin.cc, but I don't know what... this file is not generated by binpac, so I wrote one myself, but I still get the same error. Can anyone help me, please?
Files:

## Plugin.cc ##
#include "plugin/Plugin.h"
#include "cb2a_pac.h"

BRO_PLUGIN_BEGIN(Bro, Cb2a)
	BRO_PLUGIN_DESCRIPTION("Cb2a analyzer");
	BRO_PLUGIN_BIF_FILE(events);
BRO_PLUGIN_END

## Cb2a.pac ##
%include binpac.pac
%include bro.pac

%extern{
#include "events.bif.h"
%}

analyzer cb2a withcontext {
	connection: cb2a_Conn;
	flow: cb2a_Flow;
};

%include cb2a-protocol.pac
%include cb2a-analyzer.pac

## Cb2a-analyzer.pac ##
connection cb2a_Conn(bro_analyzer: BroAnalyzer) {
	upflow = cb2a_Flow(true);
	downflow = cb2a_Flow(false);
};

flow cb2a_Flow(is_orig: bool) {
	flowunit = CB2A_Header withcontext (connection, this);

	function deliver_message(length: uint32): bool
		%{
		if ( ::cb2a_header )
			{
			BifEvent::generate_cb2a_header(
				connection()->bro_analyzer(),
				connection()->bro_analyzer()->Conn(),
				is_orig(), length);
			}
		return true;
		%}
};

## CB2A-protocol.pac ##
type CB2A_Header = record {
	length: uint32;
	pgi_field: uint8 &check(pgi_field == 0xc1 || pgi_field == 0xc2 ||
	                        pgi_field == 0xc3 || pgi_field == 0xc4);
	lgi_length: uint8;
	after_length: uint8[length - 2];
} &byteorder = bigendian &length = msg_length &let {
	msg_length: int = length + 4;
	deliver: bool = $context.flow.deliver_message(length);
};

## Events.bif ##
event cb2a_header%(c: connection, is_orig: bool, length: count%);

Regards,
Emmanuel Torquato

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150120/7b338218/attachment-0001.html

From brunotaf31 at gmail.com  Tue Jan 20 05:39:25 2015
From: brunotaf31 at gmail.com (A Bruno)
Date: Tue, 20 Jan 2015 14:39:25 +0100
Subject: [Bro] Fwd: Disable binpac at compile time?
In-Reply-To: 
References: 
Message-ID: 

Hello,

I'd like to use gcov/lcov on bro in order to evaluate the code coverage of some bro tests. However, I encounter a problem when I try to generate the lcov report. It seems to be due to some .yy files which are no longer available after bro has been compiled.
My tests don't need binpac functionality, so I don't really need these .yy files. Is it possible to disable the binpac dependency in bro? If so, could someone show me how to disable binpac so that bro compiles without it?

Thanks in advance.

Best Regards,
Bruno.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150120/405eb5e1/attachment.html

From seth at icir.org  Tue Jan 20 06:01:07 2015
From: seth at icir.org (Seth Hall)
Date: Tue, 20 Jan 2015 09:01:07 -0500
Subject: [Bro] Bro Intel framework - filter out
In-Reply-To: 
References: <16513199-4EB3-4466-BAD5-29B305341105@nswcsystems.co.uk> <498FC5EB-A6F1-492F-B3C4-7480B8203AAB@nswcsystems.co.uk> <697C027B-B61F-452A-B381-9F91873FE367@uwaterloo.ca> <11B5C774-1D37-494E-8998-B0E1611E85C5@icir.org>
Message-ID: 

> On Jan 19, 2015, at 12:00 PM, Andrew Ratcliffe wrote:
>
> I guess it's me, I really need to find a good resource for learning the bro language.

It's not really you; lots of tutorials need to be written for various parts of Bro. :/

  .Seth

--
Seth Hall
International Computer Science Institute
(Bro) because everyone has a network
http://www.bro.org/

From jdonnelly at dyn.com  Tue Jan 20 07:23:11 2015
From: jdonnelly at dyn.com (John Donnelly)
Date: Tue, 20 Jan 2015 09:23:11 -0600
Subject: [Bro] How can I use the USE_PERFTOOLS_DEBUG ?
Message-ID: 

I see:

#ifdef USE_PERFTOOLS_DEBUG
	fprintf(stderr, "    -m|--mem-leaks   | show leaks  [perftools]\n");
	fprintf(stderr, "    -M|--mem-profile | record heap [perftools]\n");
#endif

in main.cc - How can I use these?

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150120/147d6dd3/attachment.html

From robin at icir.org  Tue Jan 20 07:49:20 2015
From: robin at icir.org (Robin Sommer)
Date: Tue, 20 Jan 2015 07:49:20 -0800
Subject: [Bro] How can I use the USE_PERFTOOLS_DEBUG ?
In-Reply-To: 
References: 
Message-ID: <20150120154920.GA43442@icir.org>

On Tue, Jan 20, 2015 at 09:23 -0600, John Donnelly wrote:

> #ifdef USE_PERFTOOLS_DEBUG

What this does is activate perftools' HeapChecker/HeapProfiler during Bro's main loop (i.e., ignoring all initialization/shutdown code, which has known but uninteresting leaks). If you then set the environment variable HEAPCHECK=local and run Bro with -m, it will record leaks per https://google-perftools.googlecode.com/svn/trunk/doc/heap_checker.html. Similar for heap profiling.

Take a look at the tests in testing/btest/core/leaks, they use this.

Robin

--
Robin Sommer * ICSI/LBNL * robin at icir.org * www.icir.org/robin

From jsiwek at illinois.edu  Tue Jan 20 08:19:42 2015
From: jsiwek at illinois.edu (Siwek, Jon)
Date: Tue, 20 Jan 2015 16:19:42 +0000
Subject: [Bro] Do I have to free params sent to mgr.dispatch ?
In-Reply-To: 
References: 
Message-ID: 

> On Jan 19, 2015, at 6:06 AM, John Donnelly wrote:
>
> Given the following sample:
>
> RecordVal* r = new RecordVal(dns_telemetry_qname_stats);
> r->Assign(0, new Val(ts, TYPE_DOUBLE));
> r->Assign(1, new StringVal(key));
> r->Assign(2, new Val(qname_v->zone_id, TYPE_COUNT));
> r->Assign(3, new Val(qname_v->cust_id, TYPE_COUNT));
> r->Assign(4, new Val(qname_v->cnt, TYPE_COUNT));
> r->Assign(5, new StringVal(sts));
>
> val_list* vl = new val_list;
> vl->append(r);
>
> mgr.Dispatch(new Event(dns_telemetry_qname_info, vl), true);
>
> Does Dispatch delete these resources ?
>
> _______________________________________________
> Bro mailing list
> bro at bro-ids.org
> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro

From jsiwek at illinois.edu  Tue Jan 20 08:23:43 2015
From: jsiwek at illinois.edu (Siwek, Jon)
Date: Tue, 20 Jan 2015 16:23:43 +0000
Subject: [Bro] Do I have to free params sent to mgr.dispatch ?
In-Reply-To: 
References: 
Message-ID: <17B363A6-50C0-4393-A59B-E43FA56EA8BC@illinois.edu>

> On Jan 19, 2015, at 6:06 AM, John Donnelly wrote:
>
> Given the following sample:
>
> RecordVal* r = new RecordVal(dns_telemetry_qname_stats);
> r->Assign(0, new Val(ts, TYPE_DOUBLE));
> r->Assign(1, new StringVal(key));
> r->Assign(2, new Val(qname_v->zone_id, TYPE_COUNT));
> r->Assign(3, new Val(qname_v->cust_id, TYPE_COUNT));
> r->Assign(4, new Val(qname_v->cnt, TYPE_COUNT));
> r->Assign(5, new StringVal(sts));
>
> val_list* vl = new val_list;
> vl->append(r);
>
> mgr.Dispatch(new Event(dns_telemetry_qname_info, vl), true);
>
> Does Dispatch delete these resources ?

It should. Dispatch() will call all event handlers immediately. QueueEvent() is commonly used in most places in the code, and will dispatch the event at a later time.

You can also check whether "dns_telemetry_qname_info" evaluates to true before creating the argument list -- i.e., if no event handler is defined, you don't need to create arguments for it.

- Jon

From jdonnelly at dyn.com  Tue Jan 20 09:47:29 2015
From: jdonnelly at dyn.com (John Donnelly)
Date: Tue, 20 Jan 2015 11:47:29 -0600
Subject: [Bro] How can I use the USE_PERFTOOLS_DEBUG ?
In-Reply-To: <20150120154920.GA43442@icir.org>
References: <20150120154920.GA43442@icir.org>
Message-ID: 

Thanks.

Leak check net_run detected leaks of 96678 bytes in 912 objects
The 20 largest leaks:
Leak of 17920 bytes in 35 objects allocated from:
	@ 72ba06
	@ 72b48d
	@ 72aabc
	@ 72a00b
	@ 7298f7
	@ 728e47
	@ 727916
	@ 727df6
	@ 7273a8
	@ 724fa9
	@ 69a5d5
	@ 5f29e5
	@ 7f250155dec5
	@ 5d1bd8

What does @xxxxxx mean ? A function address ?

On Tue, Jan 20, 2015 at 9:49 AM, Robin Sommer wrote:

> On Tue, Jan 20, 2015 at 09:23 -0600, John Donnelly wrote:
>
> > #ifdef USE_PERFTOOLS_DEBUG
>
> What this does is activating perftool's HeapChecker/HeapProfiler
> during Bro's main loop (i.e., ignoring of all initialization/shutdown
> code, which has known but uninteresting leaks).
If you then set the > environment variable HEAPCHECK=local and run Bro with -m, it will > record leaks per > https://google-perftools.googlecode.com/svn/trunk/doc/heap_checker.html. > Similar for heap profiling. > > Take a look at the tests in testing/btest/core/leaks, they use this. > > Robin > > -- > Robin Sommer * ICSI/LBNL * robin at icir.org * www.icir.org/robin > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150120/010e8307/attachment-0001.html From dnthayer at illinois.edu Tue Jan 20 12:04:33 2015 From: dnthayer at illinois.edu (Daniel Thayer) Date: Tue, 20 Jan 2015 14:04:33 -0600 Subject: [Bro] Revisiting log rotate only In-Reply-To: <1421675855.3196.3.camel@JamesiMac> References: <1421505437.3223.16.camel@JamesiMac> <1421675855.3196.3.camel@JamesiMac> Message-ID: <54BEB4D1.5060400@illinois.edu> On 01/19/2015 07:57 AM, James Lay wrote: > On Sat, 2015-01-17 at 07:37 -0700, James Lay wrote: >> Hey all, >> >> I posted about this last August here: >> >> http://mailman.icsi.berkeley.edu/pipermail/bro/2014-August/007329.html >> >> I also noticed someone have a disappearing log event which I have seen >> before as well here: >> >> http://mailman.icsi.berkeley.edu/pipermail/bro/2015-January/007935.html >> >> I documented my process on installing bro on Ubuntu 14.04 using just >> log rotation below: >> >> sudo apt-get -y install cmake >> sudo apt-get -y install python-dev >> sudo apt-get -y install swig >> cp /usr/local/bro/share/bro/site >> cp /opt/bin/startbro <- command line bro with long --filter line >> cp /opt/bin/startbro to /etc/rc.local >> sudo ln -s /usr/local/bro/bin/bro /usr/local/bin/ >> sudo ln -s /usr/local/bro/bin/bro-cut /usr/local/bin/ >> sudo ln -s /usr/local/bro/bin/broctl /usr/local/bin/ >> sudo ln -s /usr/local/bro/share/broctl/scripts/archive-log /usr/local/bin/ >> sudo ln -s /usr/local/bro/share/broctl/scripts/broctl-config.sh >> /usr/local/bin/ >> 
sudo ln -s /usr/local/bro/share/broctl/scripts/create-link-for-log >> /usr/local/bin/ >> sudo ln -s /usr/local/bro/share/broctl/scripts/make-archive-name >> /usr/local/bin/ >> git clone https://github.com/jonschipp/mal-dnssearch.git >> sudo make install >> >> specifics on log rotate only: >> >> add the below to local.bro >> redef Log::default_rotation_interval = 86400 secs; >> redef Log::default_rotation_postprocessor_cmd = "archive-log"; >> edit the below in broctl.cfg >> MailTo = jlay at slave-tothe-box.net >> LogRotationInterval = 86400 >> sudo /usr/local/bro/bin/broctl install >> >> Besides the edits to broctl.cfg, file locations are the default. The >> above works well usually...it's after a reboot I have found things go >> bad. Usually logs get rotated at midnight and I get an email with >> statistics, just what I need. I rebooted the machine on the 13, and >> that's the last email or log rotation I got....this morning I see >> current has files and my logstash instance has data so I believe the >> rotation got..."stuck". I'm kicking myself for not heading/tailing >> the files first, but after issuing a "sudo killall bro", those file in >> current vanished, no directory was created, and I received no email, >> that data is now gone (no big deal as this is at home). I decided to >> run broctl install again, then start and kill bro one more time. At >> that point, I got a new directory with log rotation and an email with >> minutes or so of stats. Please let me know if there's something I can >> do on my end to trouble shoot. Thank you. >> >> James >> _______________________________________________ >> Bro mailing list >> bro at bro-ids.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > Confirming that this method is no longer working. Heading my connlog > file I see: > > #open 2015-01-19-00-00-05 > > my /usr/local/bro/logs is completely missing Jan 18th. 
From my broctl.cfg: > > SpoolDir = /usr/local/bro/spool > LogDir = /usr/local/bro/logs > LogRotationInterval = 86400 > > From my /usr/local/bro/share/bro/site/local.bro: > > redef Log::default_rotation_interval = 86400 secs; > redef Log::default_rotation_postprocessor_cmd = "archive-log"; > > Anything else I can do to debug this? Thank you. > > James Are you using broctl to start and stop Bro? What does /opt/bin/startbro do? From dnthayer at illinois.edu Tue Jan 20 14:17:42 2015 From: dnthayer at illinois.edu (Daniel Thayer) Date: Tue, 20 Jan 2015 16:17:42 -0600 Subject: [Bro] Revisiting log rotate only In-Reply-To: <9a5b19652863fc2f4068ca2fcf1e1d5b@localhost> References: <1421505437.3223.16.camel@JamesiMac> <1421675855.3196.3.camel@JamesiMac> <54BEB4D1.5060400@illinois.edu> <9a5b19652863fc2f4068ca2fcf1e1d5b@localhost> Message-ID: <54BED406.2070709@illinois.edu> On 01/20/2015 04:13 PM, James Lay wrote: > On 2015-01-20 01:04 PM, Daniel Thayer wrote: >> On 01/19/2015 07:57 AM, James Lay wrote: >>> On Sat, 2015-01-17 at 07:37 -0700, James Lay wrote: >>>> Hey all, >>>> >>>> I posted about this last August here: >>>> >>>> >>>> http://mailman.icsi.berkeley.edu/pipermail/bro/2014-August/007329.html >>>> >>>> I also noticed someone have a disappearing log event which I have seen >>>> before as well here: >>>> >>>> >>>> http://mailman.icsi.berkeley.edu/pipermail/bro/2015-January/007935.html >>>> >>>> I documented my process on installing bro on Ubuntu 14.04 using just >>>> log rotation below: >>>> >>>> sudo apt-get -y install cmake >>>> sudo apt-get -y install python-dev >>>> sudo apt-get -y install swig >>>> cp /usr/local/bro/share/bro/site >>>> cp /opt/bin/startbro <- command line bro with long --filter line >>>> cp /opt/bin/startbro to /etc/rc.local >>>> sudo ln -s /usr/local/bro/bin/bro /usr/local/bin/ >>>> sudo ln -s /usr/local/bro/bin/bro-cut /usr/local/bin/ >>>> sudo ln -s /usr/local/bro/bin/broctl /usr/local/bin/ >>>> sudo ln -s 
/usr/local/bro/share/broctl/scripts/archive-log >>>> /usr/local/bin/ >>>> sudo ln -s /usr/local/bro/share/broctl/scripts/broctl-config.sh >>>> /usr/local/bin/ >>>> sudo ln -s /usr/local/bro/share/broctl/scripts/create-link-for-log >>>> /usr/local/bin/ >>>> sudo ln -s /usr/local/bro/share/broctl/scripts/make-archive-name >>>> /usr/local/bin/ >>>> git clone https://github.com/jonschipp/mal-dnssearch.git >>>> sudo make install >>>> >>>> specifics on log rotate only: >>>> >>>> add the below to local.bro >>>> redef Log::default_rotation_interval = 86400 secs; >>>> redef Log::default_rotation_postprocessor_cmd = "archive-log"; >>>> edit the below in broctl.cfg >>>> MailTo = jlay at slave-tothe-box.net >>>> LogRotationInterval = 86400 >>>> sudo /usr/local/bro/bin/broctl install >>>> >>>> Besides the edits to broctl.cfg, file locations are the default. The >>>> above works well usually...it's after a reboot I have found things go >>>> bad. Usually logs get rotated at midnight and I get an email with >>>> statistics, just what I need. I rebooted the machine on the 13, and >>>> that's the last email or log rotation I got....this morning I see >>>> current has files and my logstash instance has data so I believe the >>>> rotation got..."stuck". I'm kicking myself for not heading/tailing >>>> the files first, but after issuing a "sudo killall bro", those file in >>>> current vanished, no directory was created, and I received no email, >>>> that data is now gone (no big deal as this is at home). I decided to >>>> run broctl install again, then start and kill bro one more time. At >>>> that point, I got a new directory with log rotation and an email with >>>> minutes or so of stats. Please let me know if there's something I can >>>> do on my end to trouble shoot. Thank you. 
>>>> >>>> James >>>> _______________________________________________ >>>> Bro mailing list >>>> bro at bro-ids.org >>>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >>> >>> Confirming that this method is no longer working. Heading my connlog >>> file I see: >>> >>> #open 2015-01-19-00-00-05 >>> >>> my /usr/local/bro/logs is completely missing Jan 18th. From my >>> broctl.cfg: >>> >>> SpoolDir = /usr/local/bro/spool >>> LogDir = /usr/local/bro/logs >>> LogRotationInterval = 86400 >>> >>> From my /usr/local/bro/share/bro/site/local.bro: >>> >>> redef Log::default_rotation_interval = 86400 secs; >>> redef Log::default_rotation_postprocessor_cmd = "archive-log"; >>> >>> Anything else I can do to debug this? Thank you. >>> >>> James >> >> Are you using broctl to start and stop Bro? What does /opt/bin/startbro >> do? > > Thanks for looking Daniel. I am starting this with the below: > > /usr/local/bro/bin/bro --no-checksums -i eth0 -i ppp0 --filter '( large > filter line here)' local "Site::local_nets += { 192.168.1.0/24 }" > > I'm not using broctl. The only small portion that I am is for the log > rotation as outlined in the email thread. After killing and starting > bro yesterday, this morning at midnight logs got rotated and I got my > report email. This appears to happen after a complete reboot of the > device. It's very odd. Thanks again. > > James What command do you use to stop (or restart) Bro? 
From jlay at slave-tothe-box.net Tue Jan 20 14:13:05 2015 From: jlay at slave-tothe-box.net (James Lay) Date: Tue, 20 Jan 2015 15:13:05 -0700 Subject: [Bro] Revisiting log rotate only In-Reply-To: <54BEB4D1.5060400@illinois.edu> References: <1421505437.3223.16.camel@JamesiMac> <1421675855.3196.3.camel@JamesiMac> <54BEB4D1.5060400@illinois.edu> Message-ID: <9a5b19652863fc2f4068ca2fcf1e1d5b@localhost> On 2015-01-20 01:04 PM, Daniel Thayer wrote: > On 01/19/2015 07:57 AM, James Lay wrote: >> On Sat, 2015-01-17 at 07:37 -0700, James Lay wrote: >>> Hey all, >>> >>> I posted about this last August here: >>> >>> >>> http://mailman.icsi.berkeley.edu/pipermail/bro/2014-August/007329.html >>> >>> I also noticed someone have a disappearing log event which I have >>> seen >>> before as well here: >>> >>> >>> http://mailman.icsi.berkeley.edu/pipermail/bro/2015-January/007935.html >>> >>> I documented my process on installing bro on Ubuntu 14.04 using >>> just >>> log rotation below: >>> >>> sudo apt-get -y install cmake >>> sudo apt-get -y install python-dev >>> sudo apt-get -y install swig >>> cp /usr/local/bro/share/bro/site >>> cp /opt/bin/startbro <- command line bro with long --filter line >>> cp /opt/bin/startbro to /etc/rc.local >>> sudo ln -s /usr/local/bro/bin/bro /usr/local/bin/ >>> sudo ln -s /usr/local/bro/bin/bro-cut /usr/local/bin/ >>> sudo ln -s /usr/local/bro/bin/broctl /usr/local/bin/ >>> sudo ln -s /usr/local/bro/share/broctl/scripts/archive-log >>> /usr/local/bin/ >>> sudo ln -s /usr/local/bro/share/broctl/scripts/broctl-config.sh >>> /usr/local/bin/ >>> sudo ln -s /usr/local/bro/share/broctl/scripts/create-link-for-log >>> /usr/local/bin/ >>> sudo ln -s /usr/local/bro/share/broctl/scripts/make-archive-name >>> /usr/local/bin/ >>> git clone https://github.com/jonschipp/mal-dnssearch.git >>> sudo make install >>> >>> specifics on log rotate only: >>> >>> add the below to local.bro >>> redef Log::default_rotation_interval = 86400 secs; >>> redef 
Log::default_rotation_postprocessor_cmd = "archive-log"; >>> edit the below in broctl.cfg >>> MailTo = jlay at slave-tothe-box.net >>> LogRotationInterval = 86400 >>> sudo /usr/local/bro/bin/broctl install >>> >>> Besides the edits to broctl.cfg, file locations are the default. >>> The >>> above works well usually...it's after a reboot I have found things >>> go >>> bad. Usually logs get rotated at midnight and I get an email with >>> statistics, just what I need. I rebooted the machine on the 13, >>> and >>> that's the last email or log rotation I got....this morning I see >>> current has files and my logstash instance has data so I believe >>> the >>> rotation got..."stuck". I'm kicking myself for not heading/tailing >>> the files first, but after issuing a "sudo killall bro", those file >>> in >>> current vanished, no directory was created, and I received no >>> email, >>> that data is now gone (no big deal as this is at home). I decided >>> to >>> run broctl install again, then start and kill bro one more time. >>> At >>> that point, I got a new directory with log rotation and an email >>> with >>> minutes or so of stats. Please let me know if there's something I >>> can >>> do on my end to trouble shoot. Thank you. >>> >>> James >>> _______________________________________________ >>> Bro mailing list >>> bro at bro-ids.org >>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >> >> Confirming that this method is no longer working. Heading my >> connlog >> file I see: >> >> #open 2015-01-19-00-00-05 >> >> my /usr/local/bro/logs is completely missing Jan 18th. From my >> broctl.cfg: >> >> SpoolDir = /usr/local/bro/spool >> LogDir = /usr/local/bro/logs >> LogRotationInterval = 86400 >> >> From my /usr/local/bro/share/bro/site/local.bro: >> >> redef Log::default_rotation_interval = 86400 secs; >> redef Log::default_rotation_postprocessor_cmd = "archive-log"; >> >> Anything else I can do to debug this? Thank you. 
>> >> James > > Are you using broctl to start and stop Bro? What does > /opt/bin/startbro > do? Thanks for looking Daniel. I am starting this with the below: /usr/local/bro/bin/bro --no-checksums -i eth0 -i ppp0 --filter '( large filter line here)' local "Site::local_nets += { 192.168.1.0/24 }" I'm not using broctl. The only small portion that I am is for the log rotation as outlined in the email thread. After killing and starting bro yesterday, this morning at midnight logs got rotated and I got my report email. This appears to happen after a complete reboot of the device. It's very odd. Thanks again. James From jlay at slave-tothe-box.net Tue Jan 20 14:52:22 2015 From: jlay at slave-tothe-box.net (James Lay) Date: Tue, 20 Jan 2015 15:52:22 -0700 Subject: [Bro] Revisiting log rotate only In-Reply-To: <54BED406.2070709@illinois.edu> References: <1421505437.3223.16.camel@JamesiMac> <1421675855.3196.3.camel@JamesiMac> <54BEB4D1.5060400@illinois.edu> <9a5b19652863fc2f4068ca2fcf1e1d5b@localhost> <54BED406.2070709@illinois.edu> Message-ID: On 2015-01-20 03:17 PM, Daniel Thayer wrote: > On 01/20/2015 04:13 PM, James Lay wrote: >> On 2015-01-20 01:04 PM, Daniel Thayer wrote: >>> On 01/19/2015 07:57 AM, James Lay wrote: >>>> On Sat, 2015-01-17 at 07:37 -0700, James Lay wrote: >>>>> Hey all, >>>>> >>>>> I posted about this last August here: >>>>> >>>>> >>>>> >>>>> http://mailman.icsi.berkeley.edu/pipermail/bro/2014-August/007329.html >>>>> >>>>> I also noticed someone have a disappearing log event which I have >>>>> seen >>>>> before as well here: >>>>> >>>>> >>>>> >>>>> http://mailman.icsi.berkeley.edu/pipermail/bro/2015-January/007935.html >>>>> >>>>> I documented my process on installing bro on Ubuntu 14.04 using >>>>> just >>>>> log rotation below: >>>>> >>>>> sudo apt-get -y install cmake >>>>> sudo apt-get -y install python-dev >>>>> sudo apt-get -y install swig >>>>> cp /usr/local/bro/share/bro/site >>>>> cp /opt/bin/startbro <- command line bro with long --filter 
line >>>>> cp /opt/bin/startbro to /etc/rc.local >>>>> sudo ln -s /usr/local/bro/bin/bro /usr/local/bin/ >>>>> sudo ln -s /usr/local/bro/bin/bro-cut /usr/local/bin/ >>>>> sudo ln -s /usr/local/bro/bin/broctl /usr/local/bin/ >>>>> sudo ln -s /usr/local/bro/share/broctl/scripts/archive-log >>>>> /usr/local/bin/ >>>>> sudo ln -s /usr/local/bro/share/broctl/scripts/broctl-config.sh >>>>> /usr/local/bin/ >>>>> sudo ln -s >>>>> /usr/local/bro/share/broctl/scripts/create-link-for-log >>>>> /usr/local/bin/ >>>>> sudo ln -s /usr/local/bro/share/broctl/scripts/make-archive-name >>>>> /usr/local/bin/ >>>>> git clone https://github.com/jonschipp/mal-dnssearch.git >>>>> sudo make install >>>>> >>>>> specifics on log rotate only: >>>>> >>>>> add the below to local.bro >>>>> redef Log::default_rotation_interval = 86400 secs; >>>>> redef Log::default_rotation_postprocessor_cmd = "archive-log"; >>>>> edit the below in broctl.cfg >>>>> MailTo = jlay at slave-tothe-box.net >>>>> >>>>> LogRotationInterval = 86400 >>>>> sudo /usr/local/bro/bin/broctl install >>>>> >>>>> Besides the edits to broctl.cfg, file locations are the default. >>>>> The >>>>> above works well usually...it's after a reboot I have found >>>>> things go >>>>> bad. Usually logs get rotated at midnight and I get an email >>>>> with >>>>> statistics, just what I need. I rebooted the machine on the 13, >>>>> and >>>>> that's the last email or log rotation I got....this morning I see >>>>> current has files and my logstash instance has data so I believe >>>>> the >>>>> rotation got..."stuck". I'm kicking myself for not >>>>> heading/tailing >>>>> the files first, but after issuing a "sudo killall bro", those >>>>> file in >>>>> current vanished, no directory was created, and I received no >>>>> email, >>>>> that data is now gone (no big deal as this is at home). I >>>>> decided to >>>>> run broctl install again, then start and kill bro one more time. 
>>>>> At >>>>> that point, I got a new directory with log rotation and an email >>>>> with >>>>> minutes or so of stats. Please let me know if there's something >>>>> I can >>>>> do on my end to trouble shoot. Thank you. >>>>> >>>>> James >>>>> _______________________________________________ >>>>> Bro mailing list >>>>> bro at bro-ids.org >>>>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >>>> >>>> Confirming that this method is no longer working. Heading my >>>> connlog >>>> file I see: >>>> >>>> #open 2015-01-19-00-00-05 >>>> >>>> my /usr/local/bro/logs is completely missing Jan 18th. From my >>>> broctl.cfg: >>>> >>>> SpoolDir = /usr/local/bro/spool >>>> LogDir = /usr/local/bro/logs >>>> LogRotationInterval = 86400 >>>> >>>> From my /usr/local/bro/share/bro/site/local.bro: >>>> >>>> redef Log::default_rotation_interval = 86400 secs; >>>> redef Log::default_rotation_postprocessor_cmd = "archive-log"; >>>> >>>> Anything else I can do to debug this? Thank you. >>>> >>>> James >>> >>> Are you using broctl to start and stop Bro? What does >>> /opt/bin/startbro >>> do? >> >> Thanks for looking Daniel. I am starting this with the below: >> >> /usr/local/bro/bin/bro --no-checksums -i eth0 -i ppp0 --filter '( >> large >> filter line here)' local "Site::local_nets += { 192.168.1.0/24 }" >> >> I'm not using broctl. The only small portion that I am is for the >> log >> rotation as outlined in the email thread. After killing and >> starting >> bro yesterday, this morning at midnight logs got rotated and I got >> my >> report email. This appears to happen after a complete reboot of the >> device. It's very odd. Thanks again. >> >> James > > What command do you use to stop (or restart) Bro? The classic: sudo killall bro :) when I have to do it manually. Then start with the command line above. Thanks again. 
James From scampbell at lbl.gov Tue Jan 20 18:38:27 2015 From: scampbell at lbl.gov (Scott Campbell) Date: Tue, 20 Jan 2015 21:38:27 -0500 Subject: [Bro] wordpress passive version/plugin tester Message-ID: <54BF1123.3000901@lbl.gov> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Given the breakneck patch cycle that wordpress and it's mighty army of plugins goes through, I put together a quick bit of policy that will look out for communications between the host and api.wordpress.com and record all the relevant data. This can probably be improved, but it seems a nice place to start. Code can be found here: https://github.com/set-element/misc-scripts/blob/master/wordpress.bro Sample software.log output looks like: > > nerscs-mbp:tmp scottc$ more software.log #separator \x09 > #set_separator , #empty_field (empty) #unset_field - #path > software #open 2015-01-20-17-30-01 #fields ts host host_p > software_type name version.major version.minor > version.minor2 version.minor3 version.addl unparsed_version > #types time addr port enum string count count > count count string string 1421262142.829722 10.10.10160 > 42440 WP_PARSE::WEB_WORDPRESS_CORE Wordpress 3 4 > 1 - - 3.4.1 1421262142.829722 10.10.10160 > 42440 WP_PARSE::WEB_WORDPRESS_APP WP_PHP 5 3 3 > - - 5.3.3 1421262142.829722 10.10.10160 42440 > WP_PARSE::WEB_WORDPRESS_APP WP_MySQL 5 0 95 > - - 5.0.95 1421262143.379851 10.10.10160 42441 > WP_PARSE::WEB_WORDPRESS_PLUGIN Akismet 2 5 6 - > - 2.5.6 1421262143.379851 10.10.10160 42441 > WP_PARSE::WEB_WORDPRESS_PLUGIN Contact+Form+Plugin 3 23 > - - - 3.23 1421262143.379851 10.10.10160 > 42441 WP_PARSE::WEB_WORDPRESS_PLUGIN Custom+Meta+Widget 1 > 4 0 - - 1.4.0 1421262143.379851 > 10.10.10160 42441 WP_PARSE::WEB_WORDPRESS_PLUGIN Hello+Dolly > 1 6 - - - 1.6 1421262143.379851 > 10.10.10160 42441 WP_PARSE::WEB_WORDPRESS_PLUGIN > Jetpack+by+WordPress.com 1 6 1 - - > 1.6.1 1421262143.379851 10.10.10160 42441 > WP_PARSE::WEB_WORDPRESS_PLUGIN papercite 0 5 5 > - - 0.5.5 
1421262143.379851 10.10.10160 42441 > WP_PARSE::WEB_WORDPRESS_PLUGIN Revision+Control 2 1 > - - - 2.1 1421262143.379851 10.10.10160 > 42441 WP_PARSE::WEB_WORDPRESS_PLUGIN Ultimate+TinyMCE 3 > 0 - - - 3.0 1421262143.379851 > 10.10.10160 42441 WP_PARSE::WEB_WORDPRESS_PLUGIN > WordPress+Importer 0 6 - - - > 0.6 #close 2015-01-20-17-30-01 enjoy! scott -----BEGIN PGP SIGNATURE----- Version: GnuPG/MacGPG2 v2.0.22 (Darwin) Comment: GPGTools - http://gpgtools.org iEYEARECAAYFAlS/ESMACgkQK2Plq8B7ZBx18wCgiN7at9Iweu3TjitrdDzS7Mg3 aOQAn1ievv0WTfsk3Z/hg01oAycVwRzd =zaMV -----END PGP SIGNATURE----- From dnthayer at illinois.edu Tue Jan 20 19:27:11 2015 From: dnthayer at illinois.edu (Daniel Thayer) Date: Tue, 20 Jan 2015 21:27:11 -0600 Subject: [Bro] Revisiting log rotate only In-Reply-To: References: <1421505437.3223.16.camel@JamesiMac> <1421675855.3196.3.camel@JamesiMac> <54BEB4D1.5060400@illinois.edu> <9a5b19652863fc2f4068ca2fcf1e1d5b@localhost> <54BED406.2070709@illinois.edu> Message-ID: <54BF1C8F.8020104@illinois.edu> On 01/20/2015 04:52 PM, James Lay wrote: > On 2015-01-20 03:17 PM, Daniel Thayer wrote: >> On 01/20/2015 04:13 PM, James Lay wrote: >>> On 2015-01-20 01:04 PM, Daniel Thayer wrote: >>>> On 01/19/2015 07:57 AM, James Lay wrote: >>>>> On Sat, 2015-01-17 at 07:37 -0700, James Lay wrote: >>>>>> Hey all, >>>>>> >>>>>> I posted about this last August here: >>>>>> >>>>>> >>>>>> >>>>>> http://mailman.icsi.berkeley.edu/pipermail/bro/2014-August/007329.html >>>>>> >>>>>> I also noticed someone have a disappearing log event which I have >>>>>> seen >>>>>> before as well here: >>>>>> >>>>>> >>>>>> >>>>>> http://mailman.icsi.berkeley.edu/pipermail/bro/2015-January/007935.html >>>>>> >>>>>> I documented my process on installing bro on Ubuntu 14.04 using >>>>>> just >>>>>> log rotation below: >>>>>> >>>>>> sudo apt-get -y install cmake >>>>>> sudo apt-get -y install python-dev >>>>>> sudo apt-get -y install swig >>>>>> cp /usr/local/bro/share/bro/site >>>>>> cp 
/opt/bin/startbro <- command line bro with long --filter line >>>>>> cp /opt/bin/startbro to /etc/rc.local >>>>>> sudo ln -s /usr/local/bro/bin/bro /usr/local/bin/ >>>>>> sudo ln -s /usr/local/bro/bin/bro-cut /usr/local/bin/ >>>>>> sudo ln -s /usr/local/bro/bin/broctl /usr/local/bin/ >>>>>> sudo ln -s /usr/local/bro/share/broctl/scripts/archive-log >>>>>> /usr/local/bin/ >>>>>> sudo ln -s /usr/local/bro/share/broctl/scripts/broctl-config.sh >>>>>> /usr/local/bin/ >>>>>> sudo ln -s >>>>>> /usr/local/bro/share/broctl/scripts/create-link-for-log >>>>>> /usr/local/bin/ >>>>>> sudo ln -s /usr/local/bro/share/broctl/scripts/make-archive-name >>>>>> /usr/local/bin/ >>>>>> git clone https://github.com/jonschipp/mal-dnssearch.git >>>>>> sudo make install >>>>>> >>>>>> specifics on log rotate only: >>>>>> >>>>>> add the below to local.bro >>>>>> redef Log::default_rotation_interval = 86400 secs; >>>>>> redef Log::default_rotation_postprocessor_cmd = "archive-log"; >>>>>> edit the below in broctl.cfg >>>>>> MailTo = jlay at slave-tothe-box.net >>>>>> >>>>>> LogRotationInterval = 86400 >>>>>> sudo /usr/local/bro/bin/broctl install >>>>>> >>>>>> Besides the edits to broctl.cfg, file locations are the default. >>>>>> The >>>>>> above works well usually...it's after a reboot I have found >>>>>> things go >>>>>> bad. Usually logs get rotated at midnight and I get an email >>>>>> with >>>>>> statistics, just what I need. I rebooted the machine on the 13, >>>>>> and >>>>>> that's the last email or log rotation I got....this morning I see >>>>>> current has files and my logstash instance has data so I believe >>>>>> the >>>>>> rotation got..."stuck". I'm kicking myself for not >>>>>> heading/tailing >>>>>> the files first, but after issuing a "sudo killall bro", those >>>>>> file in >>>>>> current vanished, no directory was created, and I received no >>>>>> email, >>>>>> that data is now gone (no big deal as this is at home). 
I >>>>>> decided to >>>>>> run broctl install again, then start and kill bro one more time. >>>>>> At >>>>>> that point, I got a new directory with log rotation and an email >>>>>> with >>>>>> minutes or so of stats. Please let me know if there's something >>>>>> I can >>>>>> do on my end to trouble shoot. Thank you. >>>>>> >>>>>> James >>>>>> _______________________________________________ >>>>>> Bro mailing list >>>>>> bro at bro-ids.org >>>>>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >>>>> >>>>> Confirming that this method is no longer working. Heading my >>>>> connlog >>>>> file I see: >>>>> >>>>> #open 2015-01-19-00-00-05 >>>>> >>>>> my /usr/local/bro/logs is completely missing Jan 18th. From my >>>>> broctl.cfg: >>>>> >>>>> SpoolDir = /usr/local/bro/spool >>>>> LogDir = /usr/local/bro/logs >>>>> LogRotationInterval = 86400 >>>>> >>>>> From my /usr/local/bro/share/bro/site/local.bro: >>>>> >>>>> redef Log::default_rotation_interval = 86400 secs; >>>>> redef Log::default_rotation_postprocessor_cmd = "archive-log"; >>>>> >>>>> Anything else I can do to debug this? Thank you. >>>>> >>>>> James >>>> >>>> Are you using broctl to start and stop Bro? What does >>>> /opt/bin/startbro >>>> do? >>> >>> Thanks for looking Daniel. I am starting this with the below: >>> >>> /usr/local/bro/bin/bro --no-checksums -i eth0 -i ppp0 --filter '( >>> large >>> filter line here)' local "Site::local_nets += { 192.168.1.0/24 }" >>> >>> I'm not using broctl. The only small portion that I am is for the >>> log >>> rotation as outlined in the email thread. After killing and >>> starting >>> bro yesterday, this morning at midnight logs got rotated and I got >>> my >>> report email. This appears to happen after a complete reboot of the >>> device. It's very odd. Thanks again. >>> >>> James >> >> What command do you use to stop (or restart) Bro? > > The classic: sudo killall bro :) when I have to do it manually. Then > start with the command line above. Thanks again. 
> > James OK, since you're not using broctl to start/stop bro, here's what happens: When you stop bro, bro will rotate all log files (rename them with a timestamp). Then, bro will spawn "archive-log" processes, one per log file, to archive (i.e., copy or gzip to another directory) each rotated log file. This can take some time, depending on the log file size, and whether you're generating connection summary reports or not. If the machine is rebooted while this is happening, then one or more of the rotated logs might not get archived (because the "archive-log" processes were killed before they had a chance to finish). Next time you boot your machine and start bro, the rotated logs will still be there (unless you have some other script that removes that directory), but they will never get archived automatically. And, because the rotated log filenames contain a date/timestamp, they will not be overwritten by new logs. To avoid this issue when you want to reboot, I suggest stopping bro, and then waiting for all the logs to finish being archived, then reboot. 
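[Editorial sketch] To make the moving parts in Daniel's explanation concrete, here is the standalone (non-broctl) rotation setup from this thread as one annotated local.bro fragment. The comments are assumptions about this particular setup, not behavior broctl enforces: archive-log is the stock broctl helper script, and because bro itself spawns the postprocessor at rotation time and at shutdown, the command must be resolvable through the PATH of the running bro process (e.g. the environment /etc/rc.local runs with).

```bro
# Rotate every log stream once a day and hand each rotated file to the
# "archive-log" helper that ships with broctl.  Bro spawns one
# archive-log process per log file, both at the rotation interval and
# when bro shuts down.
redef Log::default_rotation_interval = 86400 secs;

# This command is looked up via the PATH of the running bro process.
# When bro is started from /etc/rc.local, that PATH may not include
# /usr/local/bin, so a symlinked archive-log can silently fail to run;
# an absolute path avoids that (the path below is the default broctl
# install location -- adjust for your system).
redef Log::default_rotation_postprocessor_cmd =
    "/usr/local/bro/share/broctl/scripts/archive-log";
```

Before rebooting, stop bro and wait until no archive-log processes remain; otherwise the rotated-but-unarchived files are left behind in the directory bro was started from, exactly as described above.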
From jlay at slave-tothe-box.net Wed Jan 21 03:01:41 2015 From: jlay at slave-tothe-box.net (James Lay) Date: Wed, 21 Jan 2015 04:01:41 -0700 Subject: [Bro] Revisiting log rotate only In-Reply-To: <54BF1C8F.8020104@illinois.edu> References: <1421505437.3223.16.camel@JamesiMac> <1421675855.3196.3.camel@JamesiMac> <54BEB4D1.5060400@illinois.edu> <9a5b19652863fc2f4068ca2fcf1e1d5b@localhost> <54BED406.2070709@illinois.edu> <54BF1C8F.8020104@illinois.edu> Message-ID: <1421838101.3220.16.camel@JamesiMac> On Tue, 2015-01-20 at 21:27 -0600, Daniel Thayer wrote: > On 01/20/2015 04:52 PM, James Lay wrote: > > On 2015-01-20 03:17 PM, Daniel Thayer wrote: > >> On 01/20/2015 04:13 PM, James Lay wrote: > >>> On 2015-01-20 01:04 PM, Daniel Thayer wrote: > >>>> On 01/19/2015 07:57 AM, James Lay wrote: > >>>>> On Sat, 2015-01-17 at 07:37 -0700, James Lay wrote: > >>>>>> Hey all, > >>>>>> > >>>>>> I posted about this last August here: > >>>>>> > >>>>>> > >>>>>> > >>>>>> http://mailman.icsi.berkeley.edu/pipermail/bro/2014-August/007329.html > >>>>>> > >>>>>> I also noticed someone have a disappearing log event which I have > >>>>>> seen > >>>>>> before as well here: > >>>>>> > >>>>>> > >>>>>> > >>>>>> http://mailman.icsi.berkeley.edu/pipermail/bro/2015-January/007935.html > >>>>>> > >>>>>> I documented my process on installing bro on Ubuntu 14.04 using > >>>>>> just > >>>>>> log rotation below: > >>>>>> > >>>>>> sudo apt-get -y install cmake > >>>>>> sudo apt-get -y install python-dev > >>>>>> sudo apt-get -y install swig > >>>>>> cp /usr/local/bro/share/bro/site > >>>>>> cp /opt/bin/startbro <- command line bro with long --filter line > >>>>>> cp /opt/bin/startbro to /etc/rc.local > >>>>>> sudo ln -s /usr/local/bro/bin/bro /usr/local/bin/ > >>>>>> sudo ln -s /usr/local/bro/bin/bro-cut /usr/local/bin/ > >>>>>> sudo ln -s /usr/local/bro/bin/broctl /usr/local/bin/ > >>>>>> sudo ln -s /usr/local/bro/share/broctl/scripts/archive-log > >>>>>> /usr/local/bin/ > >>>>>> sudo ln -s 
/usr/local/bro/share/broctl/scripts/broctl-config.sh > >>>>>> /usr/local/bin/ > >>>>>> sudo ln -s > >>>>>> /usr/local/bro/share/broctl/scripts/create-link-for-log > >>>>>> /usr/local/bin/ > >>>>>> sudo ln -s /usr/local/bro/share/broctl/scripts/make-archive-name > >>>>>> /usr/local/bin/ > >>>>>> git clone https://github.com/jonschipp/mal-dnssearch.git > >>>>>> sudo make install > >>>>>> > >>>>>> specifics on log rotate only: > >>>>>> > >>>>>> add the below to local.bro > >>>>>> redef Log::default_rotation_interval = 86400 secs; > >>>>>> redef Log::default_rotation_postprocessor_cmd = "archive-log"; > >>>>>> edit the below in broctl.cfg > >>>>>> MailTo = jlay at slave-tothe-box.net > >>>>>> > >>>>>> LogRotationInterval = 86400 > >>>>>> sudo /usr/local/bro/bin/broctl install > >>>>>> > >>>>>> Besides the edits to broctl.cfg, file locations are the default. > >>>>>> The > >>>>>> above works well usually...it's after a reboot I have found > >>>>>> things go > >>>>>> bad. Usually logs get rotated at midnight and I get an email > >>>>>> with > >>>>>> statistics, just what I need. I rebooted the machine on the 13, > >>>>>> and > >>>>>> that's the last email or log rotation I got....this morning I see > >>>>>> current has files and my logstash instance has data so I believe > >>>>>> the > >>>>>> rotation got..."stuck". I'm kicking myself for not > >>>>>> heading/tailing > >>>>>> the files first, but after issuing a "sudo killall bro", those > >>>>>> file in > >>>>>> current vanished, no directory was created, and I received no > >>>>>> email, > >>>>>> that data is now gone (no big deal as this is at home). I > >>>>>> decided to > >>>>>> run broctl install again, then start and kill bro one more time. > >>>>>> At > >>>>>> that point, I got a new directory with log rotation and an email > >>>>>> with > >>>>>> minutes or so of stats. Please let me know if there's something > >>>>>> I can > >>>>>> do on my end to trouble shoot. Thank you. 
> >>>>>> > >>>>>> James > >>>>>> _______________________________________________ > >>>>>> Bro mailing list > >>>>>> bro at bro-ids.org > >>>>>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > >>>>> > >>>>> Confirming that this method is no longer working. Heading my > >>>>> connlog > >>>>> file I see: > >>>>> > >>>>> #open 2015-01-19-00-00-05 > >>>>> > >>>>> my /usr/local/bro/logs is completely missing Jan 18th. From my > >>>>> broctl.cfg: > >>>>> > >>>>> SpoolDir = /usr/local/bro/spool > >>>>> LogDir = /usr/local/bro/logs > >>>>> LogRotationInterval = 86400 > >>>>> > >>>>> From my /usr/local/bro/share/bro/site/local.bro: > >>>>> > >>>>> redef Log::default_rotation_interval = 86400 secs; > >>>>> redef Log::default_rotation_postprocessor_cmd = "archive-log"; > >>>>> > >>>>> Anything else I can do to debug this? Thank you. > >>>>> > >>>>> James > >>>> > >>>> Are you using broctl to start and stop Bro? What does > >>>> /opt/bin/startbro > >>>> do? > >>> > >>> Thanks for looking Daniel. I am starting this with the below: > >>> > >>> /usr/local/bro/bin/bro --no-checksums -i eth0 -i ppp0 --filter '( > >>> large > >>> filter line here)' local "Site::local_nets += { 192.168.1.0/24 }" > >>> > >>> I'm not using broctl. The only small portion that I am is for the > >>> log > >>> rotation as outlined in the email thread. After killing and > >>> starting > >>> bro yesterday, this morning at midnight logs got rotated and I got > >>> my > >>> report email. This appears to happen after a complete reboot of the > >>> device. It's very odd. Thanks again. > >>> > >>> James > >> > >> What command do you use to stop (or restart) Bro? > > > > The classic: sudo killall bro :) when I have to do it manually. Then > > start with the command line above. Thanks again. > > > > James > > OK, since you're not using broctl to start/stop bro, here's > what happens: > > When you stop bro, bro will rotate all log files (rename them with > a timestamp). 
Then, bro will spawn "archive-log" processes, one > per log file, to archive (i.e., copy or gzip to another directory) > each rotated log file. This can take some time, depending on the > log file size, and whether you're generating connection summary > reports or not. If the machine is rebooted while this is > happening, then one or more of the rotated logs might not get > archived (because the "archive-log" processes were killed before > they had a chance to finish). > > Next time you boot your machine and start bro, the rotated logs will > still be there (unless you have some other script that removes that > directory), but they will never get archived automatically. > And, because the rotated log filenames contain a date/timestamp, they > will not be overwritten by new logs. > > To avoid this issue when you want to reboot, I suggest stopping bro, > and then waiting for all the logs to finish being archived, then reboot. Thanks Daniel, So compressed the entire directory of log files is 7.5 megs....really small, so I don't think it's a question of getting stuck during compression (truth be told the box doing the bro-ing is sitting right next to the box I'm typing this email on...I can hear the drive whir away when I stop bro and it lasts maybe 30 seconds). Also, before reboot I manually stop bro...out of habit. My only thought is that *maybe* the path of /usr/local/bin/ where I've symlinked the additional scripts aren't seen when my startbro script is run from /etc/rc.local file? In any case I can reproduce the behavior on reboot, so if there's a way to debug this I'd love to give it a go. I'll research the path thing on my end (Ubuntu 14.0.4) and I'll try a) rebooting and starting bro manually and b) symlinking the script files to /usr/local/sbin/. I'll report my findings for anyone else out there, but I kinda think most people are just using broctl anyways :) Thanks again Daniel. James -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150121/cab6a6f7/attachment-0001.html -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/png Size: 925 bytes Desc: not available Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150121/cab6a6f7/attachment-0001.bin From robin at icir.org Wed Jan 21 08:16:51 2015 From: robin at icir.org (Robin Sommer) Date: Wed, 21 Jan 2015 08:16:51 -0800 Subject: [Bro] How can I use the USE_PERFTOOLS_DEBUG ? In-Reply-To: References: <20150120154920.GA43442@icir.org> Message-ID: <20150121161651.GC10297@icir.org> On Tue, Jan 20, 2015 at 11:47 -0600, John Donnelly wrote: > @ 5d1bd8 > > What does @xxxxxx mean ? A function address ? yeah, addr2line should be able to make them to something more readable. Robin -- Robin Sommer * ICSI/LBNL * robin at icir.org * www.icir.org/robin From dnthayer at illinois.edu Wed Jan 21 08:17:42 2015 From: dnthayer at illinois.edu (Daniel Thayer) Date: Wed, 21 Jan 2015 10:17:42 -0600 Subject: [Bro] Revisiting log rotate only In-Reply-To: <1421838101.3220.16.camel@JamesiMac> References: <1421505437.3223.16.camel@JamesiMac> <1421675855.3196.3.camel@JamesiMac> <54BEB4D1.5060400@illinois.edu> <9a5b19652863fc2f4068ca2fcf1e1d5b@localhost> <54BED406.2070709@illinois.edu> <54BF1C8F.8020104@illinois.edu> <1421838101.3220.16.camel@JamesiMac> Message-ID: <54BFD126.8020007@illinois.edu> On 01/21/2015 05:01 AM, James Lay wrote: > On Tue, 2015-01-20 at 21:27 -0600, Daniel Thayer wrote: >> On 01/20/2015 04:52 PM, James Lay wrote: >> > On 2015-01-20 03:17 PM, Daniel Thayer wrote: >> >> On 01/20/2015 04:13 PM, James Lay wrote: >> >>> On 2015-01-20 01:04 PM, Daniel Thayer wrote: >> >>>> On 01/19/2015 07:57 AM, James Lay wrote: >> >>>>> On Sat, 2015-01-17 at 07:37 -0700, James Lay wrote: >> >>>>>> Hey all, >> >>>>>> >> >>>>>> I posted about this last August here: >> >>>>>> >> >>>>>> >> >>>>>> >> 
>>>>>>http://mailman.icsi.berkeley.edu/pipermail/bro/2014-August/007329.html >> >>>>>> >> >>>>>> I also noticed someone have a disappearing log event which I have >> >>>>>> seen >> >>>>>> before as well here: >> >>>>>> >> >>>>>> >> >>>>>> >> >>>>>>http://mailman.icsi.berkeley.edu/pipermail/bro/2015-January/007935.html >> >>>>>> >> >>>>>> I documented my process on installing bro on Ubuntu 14.04 using >> >>>>>> just >> >>>>>> log rotation below: >> >>>>>> >> >>>>>> sudo apt-get -y install cmake >> >>>>>> sudo apt-get -y install python-dev >> >>>>>> sudo apt-get -y install swig >> >>>>>> cp /usr/local/bro/share/bro/site >> >>>>>> cp /opt/bin/startbro <- command line bro with long --filter line >> >>>>>> cp /opt/bin/startbro to /etc/rc.local >> >>>>>> sudo ln -s /usr/local/bro/bin/bro /usr/local/bin/ >> >>>>>> sudo ln -s /usr/local/bro/bin/bro-cut /usr/local/bin/ >> >>>>>> sudo ln -s /usr/local/bro/bin/broctl /usr/local/bin/ >> >>>>>> sudo ln -s /usr/local/bro/share/broctl/scripts/archive-log >> >>>>>> /usr/local/bin/ >> >>>>>> sudo ln -s /usr/local/bro/share/broctl/scripts/broctl-config.sh >> >>>>>> /usr/local/bin/ >> >>>>>> sudo ln -s >> >>>>>> /usr/local/bro/share/broctl/scripts/create-link-for-log >> >>>>>> /usr/local/bin/ >> >>>>>> sudo ln -s /usr/local/bro/share/broctl/scripts/make-archive-name >> >>>>>> /usr/local/bin/ >> >>>>>> git clonehttps://github.com/jonschipp/mal-dnssearch.git >> >>>>>> sudo make install >> >>>>>> >> >>>>>> specifics on log rotate only: >> >>>>>> >> >>>>>> add the below to local.bro >> >>>>>> redef Log::default_rotation_interval = 86400 secs; >> >>>>>> redef Log::default_rotation_postprocessor_cmd = "archive-log"; >> >>>>>> edit the below in broctl.cfg >> >>>>>> MailTo =jlay at slave-tothe-box.net >> >>>>>> >> >>>>>> LogRotationInterval = 86400 >> >>>>>> sudo /usr/local/bro/bin/broctl install >> >>>>>> >> >>>>>> Besides the edits to broctl.cfg, file locations are the default. 
>> >>>>>> The >> >>>>>> above works well usually...it's after a reboot I have found >> >>>>>> things go >> >>>>>> bad. Usually logs get rotated at midnight and I get an email >> >>>>>> with >> >>>>>> statistics, just what I need. I rebooted the machine on the 13, >> >>>>>> and >> >>>>>> that's the last email or log rotation I got....this morning I see >> >>>>>> current has files and my logstash instance has data so I believe >> >>>>>> the >> >>>>>> rotation got..."stuck". I'm kicking myself for not >> >>>>>> heading/tailing >> >>>>>> the files first, but after issuing a "sudo killall bro", those >> >>>>>> file in >> >>>>>> current vanished, no directory was created, and I received no >> >>>>>> email, >> >>>>>> that data is now gone (no big deal as this is at home). I >> >>>>>> decided to >> >>>>>> run broctl install again, then start and kill bro one more time. >> >>>>>> At >> >>>>>> that point, I got a new directory with log rotation and an email >> >>>>>> with >> >>>>>> minutes or so of stats. Please let me know if there's something >> >>>>>> I can >> >>>>>> do on my end to trouble shoot. Thank you. >> >>>>>> >> >>>>>> James >> >>>>>> _______________________________________________ >> >>>>>> Bro mailing list >> >>>>>>bro at bro-ids.org >> >>>>>>http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >> >>>>> >> >>>>> Confirming that this method is no longer working. Heading my >> >>>>> connlog >> >>>>> file I see: >> >>>>> >> >>>>> #open 2015-01-19-00-00-05 >> >>>>> >> >>>>> my /usr/local/bro/logs is completely missing Jan 18th. From my >> >>>>> broctl.cfg: >> >>>>> >> >>>>> SpoolDir = /usr/local/bro/spool >> >>>>> LogDir = /usr/local/bro/logs >> >>>>> LogRotationInterval = 86400 >> >>>>> >> >>>>> From my /usr/local/bro/share/bro/site/local.bro: >> >>>>> >> >>>>> redef Log::default_rotation_interval = 86400 secs; >> >>>>> redef Log::default_rotation_postprocessor_cmd = "archive-log"; >> >>>>> >> >>>>> Anything else I can do to debug this? Thank you. 
>> >>>>> >> >>>>> James >> >>>> >> >>>> Are you using broctl to start and stop Bro? What does >> >>>> /opt/bin/startbro >> >>>> do? >> >>> >> >>> Thanks for looking Daniel. I am starting this with the below: >> >>> >> >>> /usr/local/bro/bin/bro --no-checksums -i eth0 -i ppp0 --filter '( >> >>> large >> >>> filter line here)' local "Site::local_nets += { 192.168.1.0/24 }" >> >>> >> >>> I'm not using broctl. The only small portion that I am is for the >> >>> log >> >>> rotation as outlined in the email thread. After killing and >> >>> starting >> >>> bro yesterday, this morning at midnight logs got rotated and I got >> >>> my >> >>> report email. This appears to happen after a complete reboot of the >> >>> device. It's very odd. Thanks again. >> >>> >> >>> James >> >> >> >> What command do you use to stop (or restart) Bro? >> > >> > The classic: sudo killall bro :) when I have to do it manually. Then >> > start with the command line above. Thanks again. >> > >> > James >> >> OK, since you're not using broctl to start/stop bro, here's >> what happens: >> >> When you stop bro, bro will rotate all log files (rename them with >> a timestamp). Then, bro will spawn "archive-log" processes, one >> per log file, to archive (i.e., copy or gzip to another directory) >> each rotated log file. This can take some time, depending on the >> log file size, and whether you're generating connection summary >> reports or not. If the machine is rebooted while this is >> happening, then one or more of the rotated logs might not get >> archived (because the "archive-log" processes were killed before >> they had a chance to finish). >> >> Next time you boot your machine and start bro, the rotated logs will >> still be there (unless you have some other script that removes that >> directory), but they will never get archived automatically. >> And, because the rotated log filenames contain a date/timestamp, they >> will not be overwritten by new logs. 
>> >> To avoid this issue when you want to reboot, I suggest stopping bro, >> and then waiting for all the logs to finish being archived, then reboot. > > Thanks Daniel, > > So compressed the entire directory of log files is 7.5 megs....really > small, so I don't think it's a question of getting stuck during > compression (truth be told the box doing the bro-ing is sitting right > next to the box I'm typing this email on...I can hear the drive whir > away when I stop bro and it lasts maybe 30 seconds). Also, before > reboot I manually stop bro...out of habit. My only thought is that > *maybe* the path of /usr/local/bin/ where I've symlinked the additional > scripts aren't seen when my startbro script is run from /etc/rc.local > file? In any case I can reproduce the behavior on reboot, so if there's > a way to debug this I'd love to give it a go. I'll research the path > thing on my end (Ubuntu 14.0.4) and I'll try a) rebooting and starting > bro manually and b) symlinking the script files to /usr/local/sbin/. > I'll report my findings for anyone else out there, but I kinda think > most people are just using broctl anyways :) Thanks again Daniel. > > James One other thing to check is which directory you are starting Bro from, because that's where Bro will create its log files (if you were using broctl, this should be /usr/local/bro/spool/bro). If you ever notice that you are missing logs in the archive directory (a subdirectory of /usr/local/bro/logs), then you'll want to check the directory where you were running Bro to see if it contains any unarchived logs (if you were using broctl to start/stop bro, then you'd also need to check all subdirectories of /usr/local/bro/spool/tmp). From dnthayer at illinois.edu Wed Jan 21 08:38:05 2015 From: dnthayer at illinois.edu (Daniel Thayer) Date: Wed, 21 Jan 2015 10:38:05 -0600 Subject: [Bro] [maintenance] what would cause a backlog/erasure in "...logs/current"? 
In-Reply-To: References: Message-ID: <54BFD5ED.60000@illinois.edu> On 01/08/2015 10:24 AM, Glenn Forbes Fleming Larratt wrote: > Folks, > > My Bro cluster is happily flagging and accumulating data - but: > > 1. The last two hourly cycles left uncompressed logfiles in > /opt/app/bro/logs/current: > > : > : > -rw-r--r-- 1 bro bro 73529 Jan 8 11:00 reporter-15-01-08_10.00.00.log > -rw-r--r-- 1 bro bro 749059 Jan 8 11:00 tunnel-15-01-08_10.00.00.log > -rw-r--r-- 1 bro bro 2474781 Jan 8 11:00 weird-15-01-08_10.00.00.log > -rw-r--r-- 1 bro bro 17062559659 Jan 8 10:00 conn-15-01-08_09.00.00.log > -rw-r--r-- 1 bro bro 2260979370 Jan 8 10:00 files-15-01-08_09.00.00.log > -rw-r--r-- 1 bro bro 4942559737 Jan 8 10:00 http-15-01-08_09.00.00.log > : etc. > : > > 2. No gzip processes were in evidence; > > 3. Figuring it might be the appropriate proverbial kick in the pants, I > did a "broctl restart", which ran cleanly - and to all appearances, > *erased* the older uncompressed files in question. > > I now have a hole where the data from 10:00-12:00 today used to be - can > anyone shed light on what's going on here? > Has this happened more than once? Have you tried looking for unarchived log files in /opt/app/bro/spool/tmp ? If there were any unarchived log files in logs/current, then when you do a broctl stop (or broctl restart), I would expect that they would get moved into a subdirectory of spool/tmp/ (assuming that you're using a recent version of broctl). From pooh_champ19 at yahoo.com Wed Jan 21 23:41:43 2015 From: pooh_champ19 at yahoo.com (pooja) Date: Thu, 22 Jan 2015 07:41:43 +0000 (UTC) Subject: [Bro] About port script to store service name Message-ID: hello, I have installed bro-2.3 version in my laptop. 
To store the service name (HTTP, FTP, TELNET, etc.) I have used the following function: function service_name(c: connection): string { local p = c$id$resp_p; if ( p in port ) return port[p]; else return "other"; } I need to load the port.bro script in the file containing the above function. I have tried running all three of the port.bro scripts that are already present at these paths: 1)/bro-2.3/testing/btest/language 2)/bro-2.3/testing/btest/scripts/base/frameworks/input 3)/bro-2.3/testing/btest/scripts/base/frameworks/input/sqlite The problem is that I get a path error while loading port.bro, whereas all the other script files load as required without any error. If I have missed a port.bro script that would be useful for storing the type of service, please let me know. From giedrius.ramas at gmail.com Thu Jan 22 06:44:12 2015 From: giedrius.ramas at gmail.com (Giedrius Ramas) Date: Thu, 22 Jan 2015 16:44:12 +0200 Subject: [Bro] [bro] Bro intelligence framework meta data issue. Message-ID: Hi all, I am facing an issue when trying to get Bro intel working. The matter is that I cannot get meta data from Intel::MetaData. The Bro intelligence framework itself is working fine. Here is my intel.dat file: #fields indicator indicator_type meta.desc meta.cif_confidence meta.source honargah.ir/images/sampledata/2013gdoc Intel::URL phishing 85 phishtank.com and intel.log output: #separator \x09 #set_separator , #empty_field (empty) #unset_field - #path intel #open 2015-01-22-09-36-43 #fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p fuid file_mime_type file_desc seen.indicator seen.indicator_type seen.where sources #types time string addr port addr port string string string string enum enum set[string] 1421919403.137259 Cz3Nvm4BHmAtqNxKHa 10.3.2.2 63982 142.4.119.66 80 - -- buy-pokerist-chips.com/wealth/t/ Intel::URL HTTP::IN_URL phishtank.com So as you can see, there aren't any meta data fields in the intel.log output. 
Please shed some light on this; where should I look for troubleshooting? I have these scripts loaded: @load frameworks/intel/seen @load frameworks/intel/do_notice @load policy/integration/collective-intel -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150122/a341ca4b/attachment.html From silusilusilu at gmail.com Fri Jan 23 05:43:20 2015 From: silusilusilu at gmail.com (fasf safas) Date: Fri, 23 Jan 2015 14:43:20 +0100 Subject: [Bro] How to modify dns.log Message-ID: Hi, I want to introduce two new fields in dns.log. I've tried code like this: -----script.bro------ redef record DNS::Info += { foo: bool &optional &log; }; event DNS::log_dns (rec: DNS::Info) { if(condition) rec$foo = T; } ------------------------- without any results. For example, if I want to modify conn.log, I can use event connection_state_remove(c: connection). For dns.log, which event should be called? Thanks Fab -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150123/94a556b4/attachment.html From seth at icir.org Fri Jan 23 09:11:41 2015 From: seth at icir.org (Seth Hall) Date: Fri, 23 Jan 2015 12:11:41 -0500 Subject: [Bro] How to modify dns.log In-Reply-To: References: Message-ID: <1705FE67-4FB9-49F8-A244-F6B4A9049422@icir.org> > On Jan 23, 2015, at 8:43 AM, fasf safas wrote: > > For dns.log, which event should be called? The event you should handle is the one that has the data you're basing your condition (in your example) on. The log events are too late; the data is already set and gone at that point. I think there might be some justification for turning those log events into hooks so you could actually modify the record in place before it's actually logged (we'll discuss this internally). What is the condition you're working with in your dns log?
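For reference, the approach Seth describes could be sketched roughly like this: extend DNS::Info as in the original script, but set the new field from an early event such as dns_request, where the base DNS scripts have already populated c$dns (the regex condition below is a placeholder only; adapt it to whatever the new field should encode):

```
# Sketch only: populate the extra field from dns_request instead of
# DNS::log_dns -- by log-write time the record is already gone.
redef record DNS::Info += {
    foo: bool &log &optional;
};

event dns_request(c: connection, msg: dns_msg, query: string, qtype: count, qclass: count)
    {
    # placeholder condition -- replace with your own test on the query
    if ( /example/ in query )
        c$dns$foo = T;
    }
```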
.Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From qhu009 at aucklanduni.ac.nz Fri Jan 23 16:46:16 2015 From: qhu009 at aucklanduni.ac.nz (Qinwen Hu) Date: Sat, 24 Jan 2015 13:46:16 +1300 Subject: [Bro] How can I extract the DNS query Message-ID: Hi, I am a new Bro user. Recently, I observed a new way to launch IPv6 address scanning. For instance, an attacker sends an IPv6 reverse DNS lookup query to a target DNS server and extracts an IPv6 record from the reverse DNS zone. The DNS query looks like: 0.0.0.0.0.0.0.0.0.f.d.0.1.0.0.2.ip6.arpa 1.0.0.0.0.0.0.0.0.f.d.0.1.0.0.2.ip6.arpa 2.0.0.0.0.0.0.0.0.f.d.0.1.0.0.2.ip6.arpa 0.2.0.0.0.0.0.0.0.0.f.d.0.1.0.0.2.ip6.arpa I am trying to use Bro to detect this kind of attack, but when I use main.bro to read my trace file, I can't extract the DNS query. I looked at the dns_request event and added some debug messages in that routine; again, I can't see the ip6.arpa query printed out. To detect this attack, I have to extract each DNS query and compare it with the previous one. Is it possible to extract the DNS query using some existing functions? Do you have any suggestions? Many thanks for your attention to this matter. Have a nice day. Kind regards, Steven -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150124/9ab34625/attachment.html From pachinko.tw at gmail.com Sun Jan 25 04:42:47 2015 From: pachinko.tw at gmail.com (Po-Ching Lin) Date: Sun, 25 Jan 2015 20:42:47 +0800 Subject: [Bro] A strange connection Message-ID: <54C4E4C7.5090703@gmail.com> I saw a strange connection in a connection log. In this connection, the original bytes are 114,502,461, but most of the bytes are simply missing (114,502,154 bytes according to the missed bytes field). The original IP bytes are relatively few (only 519 bytes). What is the possible cause of the large sequence gap?
Is it due to capture loss? Thanks. 1419498119.991707 CLQP0QdahFaFha0U2 140.x.x.x 58967 66.171.248.x 80 tcp http 253.220343 114502461 592490922 SF T 114502154 ShADadfF 5 519 6 578 (empty) Po-Ching From bala150985 at gmail.com Sun Jan 25 06:40:52 2015 From: bala150985 at gmail.com (Balasubramaniam Natarajan) Date: Sun, 25 Jan 2015 20:10:52 +0530 Subject: [Bro] A strange connection In-Reply-To: <54C4E4C7.5090703@gmail.com> References: <54C4E4C7.5090703@gmail.com> Message-ID: On Sun, Jan 25, 2015 at 6:12 PM, Po-Ching Lin wrote: > > 1419498119.991707 CLQP0QdahFaFha0U2 140.x.x.x 58967 > 66.171.248.x 80 tcp http 253.220343 114502461 592490922 > *SF* T 114502154 > ShADadfF 5 519 6 578 (empty) > > Po-Ching > > Is this by any chance a SF scan ? If this were a normal connection won't we be seeing an Ack Flag, Push Flag in addition to the SF noted above ? -- Regards, Balasubramaniam Natarajan http://blog.etutorshop.com -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150125/ec509b38/attachment.html From pachinko.tw at gmail.com Sun Jan 25 06:53:10 2015 From: pachinko.tw at gmail.com (Po-Ching Lin) Date: Sun, 25 Jan 2015 22:53:10 +0800 Subject: [Bro] A strange connection In-Reply-To: References: <54C4E4C7.5090703@gmail.com> Message-ID: <54C50356.1000609@gmail.com> Dear Balasubramania, The history field is ShADAdfF, so I think it is a normal connection. SF just means normal establishment and termination. Po-Ching On 2015/1/25 10:40 PM, Balasubramaniam Natarajan wrote: > > > On Sun, Jan 25, 2015 at 6:12 PM, Po-Ching Lin > wrote: > > > 1419498119.991707 CLQP0QdahFaFha0U2 140.x.x.x 58967 66.171.248.x 80 tcp http 253.220343 114502461 592490922 *SF* T 114502154 > ShADadfF 5 519 6 578 (empty) > > Po-Ching > > > Is this by any chance a SF scan ? If this were a normal connection won't we be seeing an Ack Flag, Push Flag in addition to the SF noted above ? 
> > -- > Regards, > Balasubramaniam Natarajan > http://blog.etutorshop.com -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150125/129cfb56/attachment.html From mlaterma at ucalgary.ca Sun Jan 25 06:58:19 2015 From: mlaterma at ucalgary.ca (Michel Laterman) Date: Sun, 25 Jan 2015 07:58:19 -0700 Subject: [Bro] A strange connection Message-ID: Hello, I recently saw the same thing in my logs. It's because orig_bytes and resp_bytes use sequence numbers to find bytes transferred; you are seeing the sequence number rollover. orig_ip_bytes and resp_ip_bytes should have the correct values of bytes (with TCP headers). Michel On Jan 25, 2015 7:40 AM, Balasubramaniam Natarajan wrote: > > > > On Sun, Jan 25, 2015 at 6:12 PM, Po-Ching Lin wrote: >> >> >> 1419498119.991707? ? ? ?CLQP0QdahFaFha0U2? ? ? ?140.x.x.x? 58967 66.171.248.x? 80? ? ? tcp? ?http? ? ?253.220343? ? ? 114502461 592490922? ? ? ?SF? ? ? T? ? ? ?114502154 >> ShADadfF 5? ? ? ?519? ? ?6? ? ? ?578? ? ?(empty) >> >> Po-Ching >> > > Is this by any chance a SF scan ?? If this were a normal connection won't we be seeing an Ack Flag, Push Flag in addition to the SF noted above ? > > -- > Regards, > Balasubramaniam Natarajan > http://blog.etutorshop.com From pachinko.tw at gmail.com Mon Jan 26 07:22:28 2015 From: pachinko.tw at gmail.com (Po-Ching Lin) Date: Mon, 26 Jan 2015 23:22:28 +0800 Subject: [Bro] A strange connection In-Reply-To: References: Message-ID: <54C65BB4.2080306@gmail.com> Dear Michel, If there are duplicated packets due to packet retransmission, will orig_ip_bytes and resp_ip_bytes be still correct (I mean the bytes may be counted more than once)? If not, what are the reliable fields to derive the transmitted bytes (not counting duplicated ones)? Thanks. Po-Ching On 2015/1/25 10:58PM, Michel Laterman wrote: > Hello, > > I recently saw the same thing in my logs. 
It's because orig_bytes and resp_bytes use sequence numbers to find bytes transferred; you are seeing the sequence number rollover. orig_ip_bytes and resp_ip_bytes should have the correct values of bytes (with TCP headers). > > Michel On Jan 25, 2015 7:40 AM, Balasubramaniam Natarajan wrote: >> >> >> On Sun, Jan 25, 2015 at 6:12 PM, Po-Ching Lin wrote: >>> >>> 1419498119.991707 CLQP0QdahFaFha0U2 140.x.x.x 58967 66.171.248.x 80 tcp http 253.220343 114502461 592490922 SF T 114502154 >>> ShADadfF 5 519 6 578 (empty) >>> >>> Po-Ching >>> >> Is this by any chance a SF scan ? If this were a normal connection won't we be seeing an Ack Flag, Push Flag in addition to the SF noted above ? >> >> -- >> Regards, >> Balasubramaniam Natarajan >> http://blog.etutorshop.com From jxbatchelor at gmail.com Mon Jan 26 07:35:03 2015 From: jxbatchelor at gmail.com (Jason Batchelor) Date: Mon, 26 Jan 2015 09:35:03 -0600 Subject: [Bro] Best Way to Grab Unique Domains and IPs for Rotation Interval Message-ID: Hello all: I would like to have a rotated pair of files that simply logs unique IPs and domains seen for a given interval (in my case 30 minutes). In evaluating ways to do this - I noted the known_hosts.bro script to be a great reference point, so I cooked something up that I thought would do this for me. My script is as follows (it is heavily based on the known_hosts script).: ================= # Jason Batchelor # 1/26/2015 # Log unique IPs and domains for a given interval @load base/utils/directions-and-hosts module Unique; export { ## The logging stream identifiers. redef enum Log::ID += { IPS_LOG }; redef enum Log::ID += { DOMAINS_LOG }; ## The record type which contains the column fields of the unique_ips log. type UniqueIpInfo: record { ## The timestamp at which the host was detected. 
ts: time &log; ## The address that was detected host: addr &log; }; type UniqueDomainInfo: record { ts: time &log; domain: string &log; }; # When the expire interval refreshes const UNIQUE_EXPIRE_INTERVAL = 30min &redef; ## The set of all known addresses/doamins to store for preventing duplicate ## logging of addresses. It can also be used from other scripts to ## inspect if an address has been seen in use. ## Maintain the list of known domains/ips for 30 mins so that the existence ## of each individual address is logged each time logs are rotated. global unique_ips: set[addr] &create_expire=UNIQUE_EXPIRE_INTERVAL &synchronized &redef; global unique_domains: set[string] &create_expire=UNIQUE_EXPIRE_INTERVAL &synchronized &redef; ## An event that can be handled to access the :bro:type:`Known::HostsInfo` ## record as it is sent on to the logging framework. global log_unique_ips: event(rec: UniqueIpInfo); global log_unique_domains: event(rec: UniqueDomainInfo); } event bro_init() { Log::create_stream(Unique::IPS_LOG, [$columns=UniqueIpInfo, $ev=log_unique_ips]); Log::create_stream(Unique::DOMAINS_LOG, [$columns=UniqueDomainInfo, $ev=log_unique_domains]); } event new_connection(c: connection) &priority=5 { local id = c$id; for ( host in set(id$orig_h, id$resp_h) ) { if ( host !in unique_ips ) { add unique_ips[host]; Log::write(Unique::IPS_LOG, [$ts=network_time(), $host=host]); } } } event dns_request(c: connection, msg: dns_msg, query: string, qtype: count, qclass: count) &priority=5 { if ( query !in unique_domains ) { add unique_domains[query]; Log::write(Unique::DOMAINS_LOG, [$ts=network_time(), $domain=query]); } } ======================= My efforts appear to be in vain however, because while this works as designed against a standalone pcap file - it does not produce 100% reliable results when run on the wire. I encountered the following issues: * Duplicate IP addresses and domains being logged to the same file despite the scripted logic designed to prevent this. 
I confirmed this by simply grepping for certain IPs and domains in the generated log file. There were two entries where in reality there should be one. grep login.yahoo.com unique_domains.log | awk 'BEGIN {FS=OFS="\t"} {$1=strftime("%D %T",$1)} {print}' 01/26/15 15:01:23 login.yahoo.com 01/26/15 15:09:04 login.yahoo.com FWIW - there were 10 requests for that domain in the dns.log, so some dupes are not making it in, but clearly not all. Similar behavior was present for IPs as well. * Some IPs/domains do not appear to be getting logged at all. I parsed out unique IPs from the conn.log, sorted and uniqued them, then stacked them up against the output generated by my script, and there were significant differences in file sizes. The output from the sorted/uniqued conn.log was much greater, and I can only assume that represents data gaps. I was curious and decided to check out the log file for known_hosts. I noted the same kind of issues there as well, where duplicate host records were being written. grep '10.10.10.1$' known_hosts.log | awk 'BEGIN {FS=OFS="\t"} {$1=strftime("%D %T",$1)} {print}' 01/26/15 15:00:17 10.10.10.1 01/26/15 15:00:17 10.10.10.1 01/26/15 15:00:17 10.10.10.1 01/26/15 15:00:17 10.10.10.1 01/26/15 15:00:17 10.10.10.1 Is there a better way for me to be doing this? My criteria are simple: I want a list of unique IPs and domains for a given time interval. I don't care whether the connection was established or not. Ideally, if it is in the conn.log/dns.log at some point during the log interval, it should be in one of my logs exactly once as well. I would ideally like to be doing this with Bro. Right now I have a horrendous cron/bash command that is executed against archived conn/dns logs to get me the data I need. I really don't like that approach and would even be willing to settle for kicking off my horrendous bash command from within a Bro script if there were some 'log complete' event for my two log types.
Any assistance is much appreciated! Thanks, Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150126/8075a07b/attachment.html From jsiwek at illinois.edu Mon Jan 26 08:47:55 2015 From: jsiwek at illinois.edu (Siwek, Jon) Date: Mon, 26 Jan 2015 16:47:55 +0000 Subject: [Bro] Bro v2.3.2 release Message-ID: <80670CB1-3BEF-4359-A375-D9DF788A5A97@illinois.edu> Bro v2.3.2 is now available. For more details on vulnerabilities addressed, see this blog post: http://blog.bro.org/2015/01/bro-232-release.html The new version can be downloaded from: https://www.bro.org/download/index.html - Jon From abenson at gmail.com Mon Jan 26 08:59:50 2015 From: abenson at gmail.com (Andrew Benson) Date: Mon, 26 Jan 2015 09:59:50 -0700 Subject: [Bro] Strange Issue with Live Capture Message-ID: We're currently using Endace DAG capture cards to feed directly to bro, snort, and a rolling packet capture. The network we're currently looking at has a high number of retransmissions (at one point we counted 45% of traffic being retransmissions). Bro is currently logging each packet as a separate connection in conn.log, and is failing to run the protocol analyzers correctly (i.e. it'll detect it as FTP, but will only log the action, not the login, response). What's weird is that if I run bro against the rolling pcap, it works correctly. This problem only occurs when bro is listening to the device directly. This problem is still occurring with 2.3.1, so I'm at a loss. I enabled the capture-loss module, and it's reporting 0%. The capture card doesn't seem to be dropping anything either. Seen anything similar or have any suggestions for troubleshooting/fixing? -- AndrewB Knowing is Half the Battle. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150126/5c0a549b/attachment.html From mlaterma at ucalgary.ca Mon Jan 26 09:08:01 2015 From: mlaterma at ucalgary.ca (Michel Laterman) Date: Mon, 26 Jan 2015 17:08:01 +0000 Subject: [Bro] A strange connection In-Reply-To: <54C65BB4.2080306@gmail.com> References: , <54C65BB4.2080306@gmail.com> Message-ID: <1422292080385.54571@ucalgary.ca> I believe that orig_ip_bytes (and resp_ip_bytes) would recount bytes; the description of the fields states that they use the IP level total_length field to take their measurements. Michel ________________________________________ From: Po-Ching Lin Sent: January 26, 2015 8:22 AM To: Michel Laterman; Balasubramaniam Natarajan Cc: bro Subject: Re: [Bro] A strange connection Dear Michel, If there are duplicated packets due to packet retransmission, will orig_ip_bytes and resp_ip_bytes be still correct (I mean the bytes may be counted more than once)? If not, what are the reliable fields to derive the transmitted bytes (not counting duplicated ones)? Thanks. Po-Ching On 2015/1/25 10:58PM, Michel Laterman wrote: > Hello, > > I recently saw the same thing in my logs. It's because orig_bytes and resp_bytes use sequence numbers to find bytes transferred; you are seeing the sequence number rollover. orig_ip_bytes and resp_ip_bytes should have the correct values of bytes (with TCP headers). > > Michel On Jan 25, 2015 7:40 AM, Balasubramaniam Natarajan wrote: >> >> >> On Sun, Jan 25, 2015 at 6:12 PM, Po-Ching Lin wrote: >>> >>> 1419498119.991707 CLQP0QdahFaFha0U2 140.x.x.x 58967 66.171.248.x 80 tcp http 253.220343 114502461 592490922 SF T 114502154 >>> ShADadfF 5 519 6 578 (empty) >>> >>> Po-Ching >>> >> Is this by any chance a SF scan ? If this were a normal connection won't we be seeing an Ack Flag, Push Flag in addition to the SF noted above ? 
>> >> -- >> Regards, >> Balasubramaniam Natarajan >> http://blog.etutorshop.com From lists at g-clef.net Mon Jan 26 09:21:19 2015 From: lists at g-clef.net (Aaron Gee-Clough) Date: Mon, 26 Jan 2015 12:21:19 -0500 Subject: [Bro] Strange Issue with Live Capture References: Message-ID: <54C6778F.40401@g-clef.net> If I were to bet, I'd guess it has something to do with how the Endace card is load-balancing packets across your bro workers. If the retransmission packets are ending up on different workers than the original session, then each worker will think it's got a new session, and log it accordingly. How do you have the Endace card configured? (for the 9.2X2 I have, n_tuple_select is the pertinent config option.) aaron On 01/26/2015 11:59 AM, Andrew Benson wrote: > We're currently using Endace DAG capture cards to feed directly to bro, > snort, and a rolling packet capture. > > The network we're currently looking at has a high number of retransmissions > (at one point we counted 45% of traffic being retransmissions). > > Bro is currently logging each packet as a separate connection in conn.log, > and is failing to run the protocol analyzers correctly (i.e. it'll detect > it as FTP, but will only log the action, not the login, response). > > What's weird is that if I run bro against the rolling pcap, it works > correctly. This problem only occurs when bro is listening to the device > directly. > > This problem is still occurring with 2.3.1, so I'm at a loss. I enabled the > capture-loss module, and it's reporting 0%. The capture card doesn't seem > to be dropping anything either. > > Seen anything similar or have any suggestions for troubleshooting/fixing? > > -- > AndrewB > Knowing is Half the Battle. 
> > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > From rensvdheijden at gmail.com Mon Jan 26 10:32:35 2015 From: rensvdheijden at gmail.com (Rens van der Heijden) Date: Mon, 26 Jan 2015 19:32:35 +0100 Subject: [Bro] SYN Flood detection Message-ID: <54C68843.5020803@gmail.com> Hi everyone, As part of demonstrating Bro in a class setting, I'm preparing an exercise that asks students to detect SYN floods. I found some older Bro code that does this (and references to it on Robin Sommer's slides from a 2007 talk): http://www.gnu-darwin.org/www001/src/ports/security/bro/work/bro-1.2.1/policy/synflood.bro I noticed, however, that I couldn't find anything similar in SumStats. It might be that I missed something, but maybe SYN floods just aren't as interesting anymore? Does anyone know what happened there? Anyway, I tried to write a quick script to test it out first, which turned out to use a lot of memory (at least, in my perception -- perhaps it's an issue with the VM I'm testing it in though), which I guess might be the reason. Here's the code I used (unlike /scripts/policy/misc/scan.bro , this script uses connection_SYN_packet, which means we can detect SYNs that are not responded to): event connection_SYN_packet(c: connection, pkt: SYN_packet) { SumStats::observe("tcp.syn.rcvd", [$host=c$id$orig_h], [$str=fmt("%s",c$id$resp_h)]); } function f(ts: time, key: SumStats::Key, result: SumStats::Result) { local r = result["tcp.syn.rcvd"]; print fmt("Saw %d SYNs from %s", r$num, key$host); } event bro_init() { local r1 = SumStats::Reducer($stream="tcp.syn.rcvd", $apply=set(SumStats::SUM)); SumStats::create([$name="tcp.syn.scan", $epoch=30min, $reducers=set(r1), $epoch_result=f, $epoch_finished(ts: time) = { print " -- new Epoch --"; }]); } Greetings, Rens -------------- next part -------------- An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150126/1793ef20/attachment.html From abenson at gmail.com Mon Jan 26 11:38:59 2015 From: abenson at gmail.com (Andrew Benson) Date: Mon, 26 Jan 2015 12:38:59 -0700 Subject: [Bro] Strange Issue with Live Capture In-Reply-To: <54C69427.4000401@g-clef.net> References: <54C6778F.40401@g-clef.net> <54C69427.4000401@g-clef.net> Message-ID: I think if it were getting multiple copies of each packet, it'd be logging those as well. The dup3 on the card just duplicates the stream so each process receives the same traffic. If a process attaches to the stream (dag0:0, :2, and :4) that stream can't be attached from another process. Looking back we had some time over the weekend when the traffic was a little slower and it didn't have any errors. Likewise, we had a test at a previous site (mockup of this one) that didn't have the issue, and we've been using this setup for a few years, this is just the first I've ever seen this issue. It's just really weird. I wonder how I could check to see if there's something causing bro to break when talking to the card? It's just weird that I can't recreate it anywhere else. -- AndrewB Knowing is Half the Battle. On 26 January 2015 at 12:23, Aaron Gee-Clough wrote: > > Right, so if it works with the pcaps, then there's clearly a problem with > bro's interaction with the card (hence my focusing on the load-balancing > aspect). If you're not load-balancing but are duplicating, are you sure > bro's not reading all three streams, and getting multiple copies of every > packet? > > I don't have experience with the 7.5G4, so I'm not sure how dup3 works. > > aaron > > > On 01/26/2015 02:14 PM, Andrew Benson wrote: > >> We are using the 7.5G4. We currently have it configured using dup3 to send >> all the traffic three streams (snort, bro, and tcpdump), so we're not load >> balancing between processes. I can verify against the pcaps that bro is >> reading the packet. 
It shows the S0 for the handshake, and then OTH for >> every packet after that. Every packet is accounted for. >> >> >> -- >> AndrewB >> Knowing is Half the Battle. >> >> On 26 January 2015 at 10:21, Aaron Gee-Clough wrote: >> >> >>> If I were to bet, I'd guess it has something to do with how the Endace >>> card is load-balancing packets across your bro workers. If the >>> retransmission packets are ending up on different workers than the >>> original session, then each worker will think it's got a new session, >>> and log it accordingly. >>> >>> How do you have the Endace card configured? (for the 9.2X2 I have, >>> n_tuple_select is the pertinent config option.) >>> >>> aaron >>> >>> On 01/26/2015 11:59 AM, Andrew Benson wrote: >>> >>>> We're currently using Endace DAG capture cards to feed directly to bro, >>>> snort, and a rolling packet capture. >>>> >>>> The network we're currently looking at has a high number of >>>> >>> retransmissions >>> >>>> (at one point we counted 45% of traffic being retransmissions). >>>> >>>> Bro is currently logging each packet as a separate connection in >>>> >>> conn.log, >>> >>>> and is failing to run the protocol analyzers correctly (i.e. it'll >>>> detect >>>> it as FTP, but will only log the action, not the login, response). >>>> >>>> What's weird is that if I run bro against the rolling pcap, it works >>>> correctly. This problem only occurs when bro is listening to the device >>>> directly. >>>> >>>> This problem is still occurring with 2.3.1, so I'm at a loss. I enabled >>>> >>> the >>> >>>> capture-loss module, and it's reporting 0%. The capture card doesn't >>>> seem >>>> to be dropping anything either. >>>> >>>> Seen anything similar or have any suggestions for >>>> troubleshooting/fixing? >>>> >>>> -- >>>> AndrewB >>>> Knowing is Half the Battle. 
>>>> >>>> >>>> >>>> _______________________________________________ >>>> Bro mailing list >>>> bro at bro-ids.org >>>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >>>> >>>> _______________________________________________ >>> Bro mailing list >>> bro at bro-ids.org >>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >>> >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150126/74c6758e/attachment.html From abenson at gmail.com Mon Jan 26 11:57:31 2015 From: abenson at gmail.com (Andrew Benson) Date: Mon, 26 Jan 2015 12:57:31 -0700 Subject: [Bro] Strange Issue with Live Capture In-Reply-To: References: <54C6778F.40401@g-clef.net> <54C69427.4000401@g-clef.net> Message-ID: I just had an epiphany when thinking about your response: the site we're dealing with configured the SPAN incorrectly. Looking back through, I'm seeing *every* packet twice. I think they configured it to SPAN every port on that switch, including the SPAN. I'm going to have them fix that, maybe use the VLANs as the source as I had originally directed them. -- AndrewB Knowing is Half the Battle. On 26 January 2015 at 12:38, Andrew Benson wrote: > I think if it were getting multiple copies of each packet, it'd be logging > those as well. The dup3 on the card just duplicates the stream so each > process receives the same traffic. If a process attaches to the stream > (dag0:0, :2, and :4) that stream can't be attached from another process. > > Looking back we had some time over the weekend when the traffic was a > little slower and it didn't have any errors. Likewise, we had a test at a > previous site (mockup of this one) that didn't have the issue, and we've > been using this setup for a few years, this is just the first I've ever > seen this issue. It's just really weird. > > I wonder how I could check to see if there's something causing bro to > break when talking to the card? 
It's just weird that I can't recreate it > anywhere else. > > -- > AndrewB > Knowing is Half the Battle. > > On 26 January 2015 at 12:23, Aaron Gee-Clough wrote: > >> >> Right, so if it works with the pcaps, then there's clearly a problem with >> bro's interaction with the card (hence my focusing on the load-balancing >> aspect). If you're not load-balancing but are duplicating, are you sure >> bro's not reading all three streams, and getting multiple copies of every >> packet? >> >> I don't have experience with the 7.5G4, so I'm not sure how dup3 works. >> >> aaron >> >> >> On 01/26/2015 02:14 PM, Andrew Benson wrote: >> >>> We are using the 7.5G4. We currently have it configured using dup3 to >>> send >>> all the traffic three streams (snort, bro, and tcpdump), so we're not >>> load >>> balancing between processes. I can verify against the pcaps that bro is >>> reading the packet. It shows the S0 for the handshake, and then OTH for >>> every packet after that. Every packet is accounted for. >>> >>> >>> -- >>> AndrewB >>> Knowing is Half the Battle. >>> >>> On 26 January 2015 at 10:21, Aaron Gee-Clough wrote: >>> >>> >>>> If I were to bet, I'd guess it has something to do with how the Endace >>>> card is load-balancing packets across your bro workers. If the >>>> retransmission packets are ending up on different workers than the >>>> original session, then each worker will think it's got a new session, >>>> and log it accordingly. >>>> >>>> How do you have the Endace card configured? (for the 9.2X2 I have, >>>> n_tuple_select is the pertinent config option.) >>>> >>>> aaron >>>> >>>> On 01/26/2015 11:59 AM, Andrew Benson wrote: >>>> >>>>> We're currently using Endace DAG capture cards to feed directly to bro, >>>>> snort, and a rolling packet capture. >>>>> >>>>> The network we're currently looking at has a high number of >>>>> >>>> retransmissions >>>> >>>>> (at one point we counted 45% of traffic being retransmissions). 
>>>>> >>>>> Bro is currently logging each packet as a separate connection in >>>>> >>>> conn.log, >>>> >>>>> and is failing to run the protocol analyzers correctly (i.e. it'll >>>>> detect >>>>> it as FTP, but will only log the action, not the login, response). >>>>> >>>>> What's weird is that if I run bro against the rolling pcap, it works >>>>> correctly. This problem only occurs when bro is listening to the device >>>>> directly. >>>>> >>>>> This problem is still occurring with 2.3.1, so I'm at a loss. I enabled >>>>> >>>> the >>>> >>>>> capture-loss module, and it's reporting 0%. The capture card doesn't >>>>> seem >>>>> to be dropping anything either. >>>>> >>>>> Seen anything similar or have any suggestions for >>>>> troubleshooting/fixing? >>>>> >>>>> -- >>>>> AndrewB >>>>> Knowing is Half the Battle. >>>>> >>>>> >>>>> >>>>> _______________________________________________ >>>>> Bro mailing list >>>>> bro at bro-ids.org >>>>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >>>>> >>>>> _______________________________________________ >>>> Bro mailing list >>>> bro at bro-ids.org >>>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >>>> >>>> >>> > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150126/408028c7/attachment-0001.html From gl89 at cornell.edu Mon Jan 26 12:01:56 2015 From: gl89 at cornell.edu (Glenn Forbes Fleming Larratt) Date: Mon, 26 Jan 2015 15:01:56 -0500 (EST) Subject: [Bro] [maintenance] what would cause a backlog/erasure in "...logs/current"? In-Reply-To: <54BFD5ED.60000@illinois.edu> References: <54BFD5ED.60000@illinois.edu> Message-ID: I have anecdotal information, but no direct experience beyond the one time. I used a 'find' command to search throughout my installation for the lost files, to no effect. I would have expected the same thing, but found them gone instead. FWIW, broctl reports itself as v1.3 (in a v2.3 Bro deployment). 
Thanks, -g -- Glenn Forbes Fleming Larratt Cornell University IT Security Office On Wed, 21 Jan 2015, Daniel Thayer wrote: > On 01/08/2015 10:24 AM, Glenn Forbes Fleming Larratt wrote: >> Folks, >> >> My Bro cluster is happily flagging and accumulating data - but: >> >> 1. The last two hourly cycles left uncompressed logfiles in >> /opt/app/bro/logs/current: >> >> : >> : >> -rw-r--r-- 1 bro bro 73529 Jan 8 11:00 >> reporter-15-01-08_10.00.00.log >> -rw-r--r-- 1 bro bro 749059 Jan 8 11:00 tunnel-15-01-08_10.00.00.log >> -rw-r--r-- 1 bro bro 2474781 Jan 8 11:00 weird-15-01-08_10.00.00.log >> -rw-r--r-- 1 bro bro 17062559659 Jan 8 10:00 conn-15-01-08_09.00.00.log >> -rw-r--r-- 1 bro bro 2260979370 Jan 8 10:00 files-15-01-08_09.00.00.log >> -rw-r--r-- 1 bro bro 4942559737 Jan 8 10:00 http-15-01-08_09.00.00.log >> : etc. >> : >> >> 2. No gzip processes were in evidence; >> >> 3. Figuring it might be the appropriate proverbial kick in the pants, I >> did a "broctl restart", which ran cleanly - and to all appearances, >> *erased* the older uncompressed files in question. >> >> I now have a hole where the data from 10:00-12:00 today used to be - can >> anyone shed light on what's going on here? >> > > Has this happened more than once? Have you tried looking for > unarchived log files in /opt/app/bro/spool/tmp ? > > If there were any unarchived log files in logs/current, then when > you do a broctl stop (or broctl restart), I would expect that they > would get moved into a subdirectory of spool/tmp/ (assuming that > you're using a recent version of broctl). 
> > From daniel.harrison4 at baesystems.com Mon Jan 26 19:24:35 2015 From: daniel.harrison4 at baesystems.com (Harrison, Daniel (US SSA)) Date: Tue, 27 Jan 2015 03:24:35 +0000 Subject: [Bro] Missing server responses with keep-alive traffic Message-ID: <20150127032439.81B8E2C404A@rock.ICSI.Berkeley.EDU> We receive a lot of requests for a site via Akamai using keep-alives and I have noticed that the http.log is missing the server response info for these connections. When I do get the infrequent server response it does not include the client request data. I have tried increasing a couple of the timeouts without any change. Am I missing something obvious? Thank you, Daniel Harrison -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150127/d8e752a8/attachment.html -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 10949 bytes Desc: not available Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150127/d8e752a8/attachment.bin From plutochen2010 at gmail.com Mon Jan 26 22:17:33 2015 From: plutochen2010 at gmail.com (Clement Chen) Date: Mon, 26 Jan 2015 22:17:33 -0800 Subject: [Bro] Bro with 10Gb NIC's or higher In-Reply-To: References: <1265785523.5697897.1420625041610.JavaMail.yahoo@jws11141.mail.ir2.yahoo.com> <58DBF613-2EF8-48BB-BA2F-B4EECEF9D543@uwaterloo.ca> <7F7CA4D1-B39D-466B-876E-333A8A2CC980@uwaterloo.ca> <20150109190051.GM14004@yaksha.lbl.gov> Message-ID: Hi Aashish, Could you please elaborate a little bit more on the "shunting capability"? Do you mean using a BPF filter for bro? And what are some good filters for large/encrypted flows? Thanks. -Clement On Fri, Jan 9, 2015 at 12:26 PM, Aashish Sharma wrote: > > Do you really see and can handle 1Gbit/sec of traffic per core? I'm > curious. > > Haven't measured if a core can handle 1Gbit/sec but I highly highly doubt. 
> > What saves us is the shunting capability - basically bro identifies and > cuts off the rest of the big flows by placing a src,src port - dst, > dst-port ACL on arista while continuing to allow control packets (and > dynamically removes ACL once connection_ends) > > So each core doesn't really see anything more then 20-40 Mbps > (approximation) > > (Notes for self, it would be good to get these numbers in a plot) > > Thanks, > Aashish > > On Fri, Jan 9, 2015 at 12:01 PM, Micha? Purzy?ski < > michalpurzynski1 at gmail.com> wrote: > >> Do you really see and can handle 1Gbit/sec of traffic per core? I'm >> curious. >> >> I would say, with a 2.6Ghz CPU my educated guess would be somewhere >> about 250Mbit/sec / core with Bro. Of course configuration is >> everything here, I'm just looking into "given you do it right, that's >> what's possible". >> >> On Fri, Jan 9, 2015 at 8:00 PM, Aashish Sharma wrote: >> > While, we at LBNL continue to work towards a formal documentation, I >> think I'd reply then causing further delays: >> > >> > Here is the 100G cluster setup we've done: >> > >> > - 5 nodes running 10 workers + 1 proxy each on them >> > - 100G split by arista to 5x10G >> > - 10G on each node is further split my myricom to 10x1G/worker with >> shunting enabled !! >> > >> > Note: Scott Campbell did some very early work on the concept of shunting >> > (http://dl.acm.org/citation.cfm?id=2195223.2195788) >> > >> > We are using react-framework to talk to arista written by Justin Azoff. >> > >> > With Shunting enabled cluster isn't even truly seeing 10G anymore. >> > >> > oh btw, Capture_loss is a good policy to run for sure. With above setup >> we get ~ 0.xx % packet drops. 
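The shunting behaviour described above (install a per-4-tuple ACL once a flow grows large, remove it when the connection ends) can be sketched as a toy in Python. This is illustrative only, not the actual LBNL/react-framework code: the `Shunter` class and the 50 MB threshold are invented, and a real deployment would also keep control packets flowing while programming the Arista.

```python
# Toy sketch of flow shunting: once a connection exceeds a byte threshold,
# hand its 4-tuple to the upstream switch as a drop rule so workers stop
# seeing its bulk payload; remove the rule when the connection ends.
SHUNT_THRESHOLD = 50 * 1024 * 1024   # hypothetical 50 MB cutoff

class Shunter:
    def __init__(self, threshold=SHUNT_THRESHOLD):
        self.threshold = threshold
        self.acls = set()            # 4-tuples currently shunted on the switch

    def on_bytes(self, four_tuple, total_bytes):
        """Called as a connection accumulates bytes; True if newly shunted."""
        if total_bytes >= self.threshold and four_tuple not in self.acls:
            self.acls.add(four_tuple)    # here: push src/sport/dst/dport ACL
            return True
        return False

    def on_connection_end(self, four_tuple):
        self.acls.discard(four_tuple)    # dynamically remove the ACL

s = Shunter()
flow = ("10.0.0.1", 40000, "10.0.0.2", 443)
s.on_bytes(flow, 1024)                        # small so far: keep inspecting
shunted = s.on_bytes(flow, 60 * 1024 * 1024)  # crossed threshold: shunt it
```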
>> > >> > (Depending on kind of traffic you are monitoring you may need a >> slightly different shunting logic) >> > >> > >> > Here is hardware specs / node: >> > >> > - Motherboard-SM, X9DRi-F >> > - Intel E5-2643V2 3.5GHz Ivy Bridge (2x6-=12 Cores) >> > - 128GB DDRIII 1600MHz ECC/REG - (8x16GB Modules Installed) >> > - 10G-PCIE2-8C2-2S+; Myricom 10G "Gen2" (5 GT/s) PCI Express NIC with >> two SFP+ >> > - Myricom 10G-SR Modules >> > >> > On tapping side we have >> > - Arista 7504 (gets fed 100G TX/RX + backup and other 10Gb links) >> > - Arista 7150 (Symetric hashing via DANZ - splitting tcp sessions >> 1/link - 5 links to nodes >> > >> > on Bro side: >> > 5 nodes accepting 5 links from 7150 >> > Each node running 10 workers + 1 proxy >> > Myricom spliting/load balancing to each worker on the node. >> > >> > >> > Hope this helps, >> > >> > let us know if you have any further questions. >> > >> > Thanks, >> > Aashish >> > >> > On Fri, Jan 09, 2015 at 06:20:17PM +0000, Mike Patterson wrote: >> >> You're right, it's 32 on mine. >> >> >> >> I posted some specs for my system a couple of years ago now, I think. >> >> >> >> 6-8GB per worker should give some headroom (my workers usually use >> about 5 apiece I think). >> >> >> >> Mike >> >> >> >> -- >> >> Simple, clear purpose and principles give rise to complex and >> >> intelligent behavior. Complex rules and regulations give rise >> >> to simple and stupid behavior. - Dee Hock >> >> >> >> > On Jan 9, 2015, at 1:03 PM, Donaldson, John >> wrote: >> >> > >> >> > I'd agree with all of this. We're monitoring a few 10Gbps network >> segments with DAG 9.2X2s, too. I'll add in that, when processing that much >> traffic on a single device, you'll definitely not want to skimp on memory. >> >> > >> >> > I'm not sure which configurations you're using that might be >> limiting you to 16 streams -- we're run with at least 24 streams, and (at >> least with the 9.2X2s) you should be able to work with up to 32 receive >> streams. 
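For reference, the per-node layout described above (ten Myricom-balanced workers plus one proxy per node) would correspond to a BroControl node.cfg along these lines. Hostnames and the interface name are placeholders, and this is a sketch rather than LBNL's actual configuration:

```
[manager]
type=manager
host=manager.example.org

[proxy-node1]
type=proxy
host=node1.example.org

[worker-node1]
type=worker
host=node1.example.org
interface=eth2
lb_method=myricom
lb_procs=10
```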
>> >> > >> >> > v/r >> >> > >> >> > John Donaldson >> >> > >> >> >> -----Original Message----- >> >> >> From: bro-bounces at bro.org [mailto:bro-bounces at bro.org] On Behalf Of >> >> >> Mike Patterson >> >> >> Sent: Thursday, January 08, 2015 7:29 AM >> >> >> To: coen bakkers >> >> >> Cc: bro at bro.org >> >> >> Subject: Re: [Bro] Bro with 10Gb NIC's or higher >> >> >> >> >> >> Succinctly, yes, although that provision is a big one. >> >> >> >> >> >> I'm running Bro on two 10 gig interfaces, an Intel X520 and an >> Endace DAG >> >> >> 9.2X2. Both perform reasonably well. Although my hardware is >> somewhat >> >> >> underspecced (Dell R710s of differing vintages), I still get tons >> of useful data. >> >> >> >> >> >> If your next question would be "how should I spec my hardware", >> that's >> >> >> quite difficult to answer because it depends on a lot. Get the >> hottest CPUs >> >> >> you can afford, with as many cores. If you're actually sustaining >> 10+Gb you'll >> >> >> probably want at least 20-30 cores. I'm sustaining 4.5Gb or so on 8 >> 3.7Ghz >> >> >> cores, but Bro reports 10% or so loss. Note that some hardware >> >> >> configurations will limit the number of streams you can feed to >> Bro, eg my >> >> >> DAG can only produce 16 streams so even if I had it in a 24 core >> box, I'd only >> >> >> be making use of 2/3 of my CPU. >> >> >> >> >> >> Mike >> >> >> >> >> >>> On Jan 7, 2015, at 5:04 AM, coen bakkers >> wrote: >> >> >>> >> >> >>> Does anyone have experience with higher speed NIC's and Bro? Will >> it >> >> >> sustain 10Gb speeds or more provide the hardware is spec'd >> appropriately? 
>> >> >>> >> >> >>> regards, >> >> >>> >> >> >>> Coen >> >> >>> _______________________________________________ >> >> >>> Bro mailing list >> >> >>> bro at bro-ids.org >> >> >>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >> >> >> >> >> >> >> >> >> _______________________________________________ >> >> >> Bro mailing list >> >> >> bro at bro-ids.org >> >> >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >> >> >> >> >> >> _______________________________________________ >> >> Bro mailing list >> >> bro at bro-ids.org >> >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >> > >> > -- >> > Aashish Sharma (asharma at lbl.gov) >> > Cyber Security, >> > Lawrence Berkeley National Laboratory >> > http://go.lbl.gov/pgp-aashish >> > Office: (510)-495-2680 Cell: (510)-612-7971 >> > >> > _______________________________________________ >> > Bro mailing list >> > bro at bro-ids.org >> > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >> > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150126/f21f2dad/attachment-0001.html From kristoffer.bjork at gmail.com Mon Jan 26 23:01:30 2015 From: kristoffer.bjork at gmail.com (=?UTF-8?Q?Kristoffer_Bj=C3=B6rk?=) Date: Tue, 27 Jan 2015 08:01:30 +0100 Subject: [Bro] Bro with 10Gb NIC's or higher In-Reply-To: References: <1265785523.5697897.1420625041610.JavaMail.yahoo@jws11141.mail.ir2.yahoo.com> <58DBF613-2EF8-48BB-BA2F-B4EECEF9D543@uwaterloo.ca> <7F7CA4D1-B39D-466B-876E-333A8A2CC980@uwaterloo.ca> <20150109190051.GM14004@yaksha.lbl.gov> Message-ID: It can be donw with BPF, take a look at this https://www.bro.org/sphinx-git/scripts/policy/frameworks/packet-filter/shunt.bro.html Usually for lots of traffic it is better to do in a hardware device in front of your bro machines though. I know some people are using arista switches for loadbalancing and shunting but there are probably other devices also that work well for this. //Kristoffer On Tue, Jan 27, 2015 at 7:17 AM, Clement Chen wrote: > Hi Aashish, > > Could you please elaborate a little bit more on the "shunting capability"? > Do you mean using a BPF filter for bro? And what are some good filters for > large/encrypted flows? > > Thanks. > > -Clement > > On Fri, Jan 9, 2015 at 12:26 PM, Aashish Sharma wrote: > >> > Do you really see and can handle 1Gbit/sec of traffic per core? I'm >> curious. >> >> Haven't measured if a core can handle 1Gbit/sec but I highly highly >> doubt. >> >> What saves us is the shunting capability - basically bro identifies and >> cuts off the rest of the big flows by placing a src,src port - dst, >> dst-port ACL on arista while continuing to allow control packets (and >> dynamically removes ACL once connection_ends) >> >> So each core doesn't really see anything more then 20-40 Mbps >> (approximation) >> >> (Notes for self, it would be good to get these numbers in a plot) >> >> Thanks, >> Aashish >> >> On Fri, Jan 9, 2015 at 12:01 PM, Micha? 
Purzy?ski < >> michalpurzynski1 at gmail.com> wrote: >> >>> Do you really see and can handle 1Gbit/sec of traffic per core? I'm >>> curious. >>> >>> I would say, with a 2.6Ghz CPU my educated guess would be somewhere >>> about 250Mbit/sec / core with Bro. Of course configuration is >>> everything here, I'm just looking into "given you do it right, that's >>> what's possible". >>> >>> On Fri, Jan 9, 2015 at 8:00 PM, Aashish Sharma wrote: >>> > While, we at LBNL continue to work towards a formal documentation, I >>> think I'd reply then causing further delays: >>> > >>> > Here is the 100G cluster setup we've done: >>> > >>> > - 5 nodes running 10 workers + 1 proxy each on them >>> > - 100G split by arista to 5x10G >>> > - 10G on each node is further split my myricom to 10x1G/worker with >>> shunting enabled !! >>> > >>> > Note: Scott Campbell did some very early work on the concept of >>> shunting >>> > (http://dl.acm.org/citation.cfm?id=2195223.2195788) >>> > >>> > We are using react-framework to talk to arista written by Justin Azoff. >>> > >>> > With Shunting enabled cluster isn't even truly seeing 10G anymore. >>> > >>> > oh btw, Capture_loss is a good policy to run for sure. With above >>> setup we get ~ 0.xx % packet drops. 
>>> > >>> > (Depending on kind of traffic you are monitoring you may need a >>> slightly different shunting logic) >>> > >>> > >>> > Here is hardware specs / node: >>> > >>> > - Motherboard-SM, X9DRi-F >>> > - Intel E5-2643V2 3.5GHz Ivy Bridge (2x6-=12 Cores) >>> > - 128GB DDRIII 1600MHz ECC/REG - (8x16GB Modules Installed) >>> > - 10G-PCIE2-8C2-2S+; Myricom 10G "Gen2" (5 GT/s) PCI Express NIC with >>> two SFP+ >>> > - Myricom 10G-SR Modules >>> > >>> > On tapping side we have >>> > - Arista 7504 (gets fed 100G TX/RX + backup and other 10Gb links) >>> > - Arista 7150 (Symetric hashing via DANZ - splitting tcp sessions >>> 1/link - 5 links to nodes >>> > >>> > on Bro side: >>> > 5 nodes accepting 5 links from 7150 >>> > Each node running 10 workers + 1 proxy >>> > Myricom spliting/load balancing to each worker on the node. >>> > >>> > >>> > Hope this helps, >>> > >>> > let us know if you have any further questions. >>> > >>> > Thanks, >>> > Aashish >>> > >>> > On Fri, Jan 09, 2015 at 06:20:17PM +0000, Mike Patterson wrote: >>> >> You're right, it's 32 on mine. >>> >> >>> >> I posted some specs for my system a couple of years ago now, I think. >>> >> >>> >> 6-8GB per worker should give some headroom (my workers usually use >>> about 5 apiece I think). >>> >> >>> >> Mike >>> >> >>> >> -- >>> >> Simple, clear purpose and principles give rise to complex and >>> >> intelligent behavior. Complex rules and regulations give rise >>> >> to simple and stupid behavior. - Dee Hock >>> >> >>> >> > On Jan 9, 2015, at 1:03 PM, Donaldson, John >>> wrote: >>> >> > >>> >> > I'd agree with all of this. We're monitoring a few 10Gbps network >>> segments with DAG 9.2X2s, too. I'll add in that, when processing that much >>> traffic on a single device, you'll definitely not want to skimp on memory. 
>>> >> > >>> >> > I'm not sure which configurations you're using that might be >>> limiting you to 16 streams -- we're run with at least 24 streams, and (at >>> least with the 9.2X2s) you should be able to work with up to 32 receive >>> streams. >>> >> > >>> >> > v/r >>> >> > >>> >> > John Donaldson >>> >> > >>> >> >> -----Original Message----- >>> >> >> From: bro-bounces at bro.org [mailto:bro-bounces at bro.org] On Behalf >>> Of >>> >> >> Mike Patterson >>> >> >> Sent: Thursday, January 08, 2015 7:29 AM >>> >> >> To: coen bakkers >>> >> >> Cc: bro at bro.org >>> >> >> Subject: Re: [Bro] Bro with 10Gb NIC's or higher >>> >> >> >>> >> >> Succinctly, yes, although that provision is a big one. >>> >> >> >>> >> >> I'm running Bro on two 10 gig interfaces, an Intel X520 and an >>> Endace DAG >>> >> >> 9.2X2. Both perform reasonably well. Although my hardware is >>> somewhat >>> >> >> underspecced (Dell R710s of differing vintages), I still get tons >>> of useful data. >>> >> >> >>> >> >> If your next question would be "how should I spec my hardware", >>> that's >>> >> >> quite difficult to answer because it depends on a lot. Get the >>> hottest CPUs >>> >> >> you can afford, with as many cores. If you're actually sustaining >>> 10+Gb you'll >>> >> >> probably want at least 20-30 cores. I'm sustaining 4.5Gb or so on >>> 8 3.7Ghz >>> >> >> cores, but Bro reports 10% or so loss. Note that some hardware >>> >> >> configurations will limit the number of streams you can feed to >>> Bro, eg my >>> >> >> DAG can only produce 16 streams so even if I had it in a 24 core >>> box, I'd only >>> >> >> be making use of 2/3 of my CPU. >>> >> >> >>> >> >> Mike >>> >> >> >>> >> >>> On Jan 7, 2015, at 5:04 AM, coen bakkers >>> wrote: >>> >> >>> >>> >> >>> Does anyone have experience with higher speed NIC's and Bro? Will >>> it >>> >> >> sustain 10Gb speeds or more provide the hardware is spec'd >>> appropriately? 
>>> >> >>> >>> >> >>> regards, >>> >> >>> >>> >> >>> Coen >>> >> >>> _______________________________________________ >>> >> >>> Bro mailing list >>> >> >>> bro at bro-ids.org >>> >> >>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >>> >> >> >>> >> >> >>> >> >> _______________________________________________ >>> >> >> Bro mailing list >>> >> >> bro at bro-ids.org >>> >> >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >>> >> >>> >> >>> >> _______________________________________________ >>> >> Bro mailing list >>> >> bro at bro-ids.org >>> >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >>> > >>> > -- >>> > Aashish Sharma (asharma at lbl.gov) >>> > Cyber Security, >>> > Lawrence Berkeley National Laboratory >>> > http://go.lbl.gov/pgp-aashish >>> > Office: (510)-495-2680 Cell: (510)-612-7971 >>> > >>> > _______________________________________________ >>> > Bro mailing list >>> > bro at bro-ids.org >>> > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >>> >> >> >> _______________________________________________ >> Bro mailing list >> bro at bro-ids.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >> > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150127/639ec455/attachment.html From stanv at altlinux.org Tue Jan 27 06:19:24 2015 From: stanv at altlinux.org (Andrew V. Stepanov) Date: Tue, 27 Jan 2015 17:19:24 +0300 Subject: [Bro] RST documentation for BRO Message-ID: <54C79E6C.9030401@altlinux.org> Hello. I need to make RPM package for BRO. https://www.bro.org/ I need to run in SPEC file: make doc But it fails with: Running Sphinx v1.3a0 loading pickled environment... 
not yet created building [html]: targets for 422 source files that are out of date updating environment: 422 added, 0 changed, 0 removed reading sources... [ 0%] broids/index running test doc/sphinx/ftp-bruteforce.btest ... reading sources... [ 0%] cluster/index reading sources... [ 0%] components/binpac/README Exception occurred: File "/usr/lib/python2.7/site-packages/pygments/lexers/__init__.py", line 252, in guess_lexer raise ClassNotFound('no lexer matching the text found') ClassNotFound: no lexer matching the text found The full traceback has been saved in /usr/src/tmp/sphinx-err-5Cm0Xi.log, if you want to report the issue to the developers. Contents of /usr/src/tmp/sphinx-err-5Cm0Xi.log is: # Sphinx version: 1.3a0 # Python version: 2.7.8 # Docutils version: 0.13 repository # Jinja2 version: 2.8-dev # Loaded extensions: # rst_directive from /usr/src/RPM/BUILD/bro-2.3.1/build/doc/sphinx_input/ext/rst_directive.py # bro from /usr/src/RPM/BUILD/bro-2.3.1/build/doc/sphinx_input/ext/bro.py # adapt-toc from /usr/src/RPM/BUILD/bro-2.3.1/build/doc/sphinx_input/ext/adapt-toc.py # sphinx.ext.todo from /usr/lib/python2.7/site-packages/sphinx/ext/todo.pyc # broxygen from /usr/src/RPM/BUILD/bro-2.3.1/build/doc/sphinx_input/ext/broxygen.py # btest-sphinx from /usr/src/RPM/BUILD/bro-2.3.1/aux/btest/sphinx/btest-sphinx.pyc Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/sphinx/cmdline.py", line 254, in main app.build(force_all, filenames) File "/usr/lib/python2.7/site-packages/sphinx/application.py", line 215, in build self.builder.build_update() File "/usr/lib/python2.7/site-packages/sphinx/builders/__init__.py", line 214, in build_update 'out of date' % len(to_build)) File "/usr/lib/python2.7/site-packages/sphinx/builders/__init__.py", line 234, in build purple, length): File "/usr/lib/python2.7/site-packages/sphinx/builders/__init__.py", line 134, in status_iterator for item in iterable: File 
"/usr/lib/python2.7/site-packages/sphinx/environment.py", line 474, in update_generator self.read_doc(docname, app=app) File "/usr/lib/python2.7/site-packages/sphinx/environment.py", line 621, in read_doc pub.publish() File "/usr/lib/python2.7/site-packages/docutils/core.py", line 217, in publish self.settings) File "/usr/lib/python2.7/site-packages/docutils/readers/__init__.py", line 72, in read self.parse() File "/usr/lib/python2.7/site-packages/docutils/readers/__init__.py", line 78, in parse self.parser.parse(self.input, document) File "/usr/lib/python2.7/site-packages/docutils/parsers/rst/__init__.py", line 172, in parse self.statemachine.run(inputlines, document, inliner=self.inliner) File "/usr/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line 170, in run input_source=document['source']) File "/usr/lib/python2.7/site-packages/docutils/statemachine.py", line 239, in run context, state, transitions) File "/usr/lib/python2.7/site-packages/docutils/statemachine.py", line 460, in check_line return method(match, context, next_state) File "/usr/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line 2961, in text self.section(title.lstrip(), source, style, lineno + 1, messages) File "/usr/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line 327, in section self.new_subsection(title, lineno, messages) File "/usr/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line 395, in new_subsection node=section_node, match_titles=True) File "/usr/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line 282, in nested_parse node=node, match_titles=match_titles) File "/usr/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line 195, in run results = StateMachineWS.run(self, input_lines, input_offset) File "/usr/lib/python2.7/site-packages/docutils/statemachine.py", line 239, in run context, state, transitions) File "/usr/lib/python2.7/site-packages/docutils/statemachine.py", line 460, in check_line return 
method(match, context, next_state) File "/usr/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line 2726, in underline self.section(title, source, style, lineno - 1, messages) File "/usr/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line 327, in section self.new_subsection(title, lineno, messages) File "/usr/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line 395, in new_subsection node=section_node, match_titles=True) File "/usr/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line 282, in nested_parse node=node, match_titles=match_titles) File "/usr/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line 195, in run results = StateMachineWS.run(self, input_lines, input_offset) File "/usr/lib/python2.7/site-packages/docutils/statemachine.py", line 239, in run context, state, transitions) File "/usr/lib/python2.7/site-packages/docutils/statemachine.py", line 460, in check_line return method(match, context, next_state) File "/usr/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line 2726, in underline self.section(title, source, style, lineno - 1, messages) File "/usr/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line 327, in section self.new_subsection(title, lineno, messages) File "/usr/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line 395, in new_subsection node=section_node, match_titles=True) File "/usr/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line 282, in nested_parse node=node, match_titles=match_titles) File "/usr/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line 195, in run results = StateMachineWS.run(self, input_lines, input_offset) File "/usr/lib/python2.7/site-packages/docutils/statemachine.py", line 239, in run context, state, transitions) File "/usr/lib/python2.7/site-packages/docutils/statemachine.py", line 460, in check_line return method(match, context, next_state) File 
"/usr/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line 2726, in underline self.section(title, source, style, lineno - 1, messages) File "/usr/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line 327, in section self.new_subsection(title, lineno, messages) File "/usr/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line 395, in new_subsection node=section_node, match_titles=True) File "/usr/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line 282, in nested_parse node=node, match_titles=match_titles) File "/usr/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line 195, in run results = StateMachineWS.run(self, input_lines, input_offset) File "/usr/lib/python2.7/site-packages/docutils/statemachine.py", line 239, in run context, state, transitions) File "/usr/lib/python2.7/site-packages/docutils/statemachine.py", line 460, in check_line return method(match, context, next_state) File "/usr/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line 2299, in explicit_markup nodelist, blank_finish = self.explicit_construct(match) File "/usr/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line 2311, in explicit_construct return method(self, expmatch) File "/usr/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line 2054, in directive directive_class, match, type_name, option_presets) File "/usr/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line 2103, in run_directive result = directive_instance.run() File "/usr/src/RPM/BUILD/bro-2.3.1/build/doc/sphinx_input/ext/rst_directive.py", line 138, in run lexer = guess_lexer(content) File "/usr/lib/python2.7/site-packages/pygments/lexers/__init__.py", line 252, in guess_lexer raise ClassNotFound('no lexer matching the text found') ClassNotFound: no lexer matching the text found $ rpm -q python-module-docutils python-module-docutils-0.13-alt1.svn20140708 bro-2.3.1.tar.gz 
https://github.com/bro/bro/tree/master/doc/components/binpac https://github.com/bro/binpac/blob/master/README Could you please help me ? From stanv at altlinux.org Tue Jan 27 06:29:57 2015 From: stanv at altlinux.org (Andrew V. Stepanov) Date: Tue, 27 Jan 2015 17:29:57 +0300 Subject: [Bro] RST documentation for BRO In-Reply-To: <54C79E6C.9030401@altlinux.org> References: <54C79E6C.9030401@altlinux.org> Message-ID: <54C7A0E5.50105@altlinux.org> People from pygments claim to ./doc/ext/rst_directive.py birkenfeld says: 15:31 hm 15:31 ok. in any case, lexer guessing is not always successful 15:31 and varies with pygments vesrion 15:31 so they should really guard against ClassNotFound Please fix it! 27.01.15 17:19, Andrew V. Stepanov ?????: > Hello. > > I need to make RPM package for BRO. > > https://www.bro.org/ > > I need to run in SPEC file: make doc > > But it fails with: > > Running Sphinx v1.3a0 > loading pickled environment... not yet created > building [html]: targets for 422 source files that are out of date > updating environment: 422 added, 0 changed, 0 removed > reading sources... [ 0%] broids/index > running test doc/sphinx/ftp-bruteforce.btest ... > reading sources... [ 0%] cluster/index > reading sources... [ 0%] components/binpac/README > > Exception occurred: > File "/usr/lib/python2.7/site-packages/pygments/lexers/__init__.py", > line 252, in guess_lexer > raise ClassNotFound('no lexer matching the text found') > ClassNotFound: no lexer matching the text found > The full traceback has been saved in /usr/src/tmp/sphinx-err-5Cm0Xi.log, > if you want to report the issue to the developers. 
> [...]
>
> Could you please help me ?
From jsiwek at illinois.edu Tue Jan 27 08:18:23 2015 From: jsiwek at illinois.edu (Siwek, Jon) Date: Tue, 27 Jan 2015 16:18:23 +0000 Subject: [Bro] RST documentation for BRO In-Reply-To: <54C7A0E5.50105@altlinux.org> References: <54C79E6C.9030401@altlinux.org> <54C7A0E5.50105@altlinux.org> Message-ID: <0ECD784B-C601-473A-A955-357F99E6C0FE@illinois.edu> Here's a small patch you can try: https://github.com/bro/bro/commit/36bc7ba5b5d25cea881db22fb1a5bc2bc5fbc3e4 - Jon > On Jan 27, 2015, at 8:29 AM, Andrew V. Stepanov wrote: > > > People from pygments point to > ./doc/ext/rst_directive.py > > birkenfeld says: > 15:31 hm > 15:31 ok. in any case, lexer guessing is not always successful > 15:31 and varies with pygments version > 15:31 so they should really guard against ClassNotFound > > > Please fix it! > > 27.01.15 17:19, Andrew V. Stepanov wrote: >> Hello. >> >> I need to make an RPM package for BRO. >> >> https://www.bro.org/ >> >> I need to run in the SPEC file: make doc >> >> But it fails with: >> >> Running Sphinx v1.3a0 >> loading pickled environment... not yet created >> building [html]: targets for 422 source files that are out of date >> updating environment: 422 added, 0 changed, 0 removed >> reading sources... [ 0%] broids/index >> running test doc/sphinx/ftp-bruteforce.btest ... >> reading sources... [ 0%] cluster/index >> reading sources... [ 0%] components/binpac/README >> >> Exception occurred: >> File "/usr/lib/python2.7/site-packages/pygments/lexers/__init__.py", >> line 252, in guess_lexer >> raise ClassNotFound('no lexer matching the text found') >> ClassNotFound: no lexer matching the text found >> The full traceback has been saved in /usr/src/tmp/sphinx-err-5Cm0Xi.log, >> if you want to report the issue to the developers.
>> >> >> Contents of /usr/src/tmp/sphinx-err-5Cm0Xi.log is: >> >> >> # Sphinx version: 1.3a0 >> # Python version: 2.7.8 >> # Docutils version: 0.13 repository >> # Jinja2 version: 2.8-dev >> # Loaded extensions: >> # rst_directive from >> /usr/src/RPM/BUILD/bro-2.3.1/build/doc/sphinx_input/ext/rst_directive.py >> # bro from /usr/src/RPM/BUILD/bro-2.3.1/build/doc/sphinx_input/ext/bro.py >> # adapt-toc from >> /usr/src/RPM/BUILD/bro-2.3.1/build/doc/sphinx_input/ext/adapt-toc.py >> # sphinx.ext.todo from >> /usr/lib/python2.7/site-packages/sphinx/ext/todo.pyc >> # broxygen from >> /usr/src/RPM/BUILD/bro-2.3.1/build/doc/sphinx_input/ext/broxygen.py >> # btest-sphinx from >> /usr/src/RPM/BUILD/bro-2.3.1/aux/btest/sphinx/btest-sphinx.pyc >> Traceback (most recent call last): >> File "/usr/lib/python2.7/site-packages/sphinx/cmdline.py", line 254, >> in main >> app.build(force_all, filenames) >> File "/usr/lib/python2.7/site-packages/sphinx/application.py", line >> 215, in build >> self.builder.build_update() >> File "/usr/lib/python2.7/site-packages/sphinx/builders/__init__.py", >> line 214, in build_update >> 'out of date' % len(to_build)) >> File "/usr/lib/python2.7/site-packages/sphinx/builders/__init__.py", >> line 234, in build >> purple, length): >> File "/usr/lib/python2.7/site-packages/sphinx/builders/__init__.py", >> line 134, in status_iterator >> for item in iterable: >> File "/usr/lib/python2.7/site-packages/sphinx/environment.py", line >> 474, in update_generator >> self.read_doc(docname, app=app) >> File "/usr/lib/python2.7/site-packages/sphinx/environment.py", line >> 621, in read_doc >> pub.publish() >> File "/usr/lib/python2.7/site-packages/docutils/core.py", line 217, >> in publish >> self.settings) >> File "/usr/lib/python2.7/site-packages/docutils/readers/__init__.py", >> line 72, in read >> self.parse() >> File "/usr/lib/python2.7/site-packages/docutils/readers/__init__.py", >> line 78, in parse >> self.parser.parse(self.input, document) >> File >> 
"/usr/lib/python2.7/site-packages/docutils/parsers/rst/__init__.py", >> line 172, in parse >> self.statemachine.run(inputlines, document, inliner=self.inliner) >> File >> "/usr/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line >> 170, in run >> input_source=document['source']) >> File "/usr/lib/python2.7/site-packages/docutils/statemachine.py", >> line 239, in run >> context, state, transitions) >> File "/usr/lib/python2.7/site-packages/docutils/statemachine.py", >> line 460, in check_line >> return method(match, context, next_state) >> File >> "/usr/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line >> 2961, in text >> self.section(title.lstrip(), source, style, lineno + 1, messages) >> File >> "/usr/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line >> 327, in section >> self.new_subsection(title, lineno, messages) >> File >> "/usr/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line >> 395, in new_subsection >> node=section_node, match_titles=True) >> File >> "/usr/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line >> 282, in nested_parse >> node=node, match_titles=match_titles) >> File >> "/usr/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line >> 195, in run >> results = StateMachineWS.run(self, input_lines, input_offset) >> File "/usr/lib/python2.7/site-packages/docutils/statemachine.py", >> line 239, in run >> context, state, transitions) >> File "/usr/lib/python2.7/site-packages/docutils/statemachine.py", >> line 460, in check_line >> return method(match, context, next_state) >> File >> "/usr/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line >> 2726, in underline >> self.section(title, source, style, lineno - 1, messages) >> File >> "/usr/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line >> 327, in section >> self.new_subsection(title, lineno, messages) >> File >> "/usr/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line >> 
395, in new_subsection >> node=section_node, match_titles=True) >> File >> "/usr/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line >> 282, in nested_parse >> node=node, match_titles=match_titles) >> File >> "/usr/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line >> 195, in run >> results = StateMachineWS.run(self, input_lines, input_offset) >> File "/usr/lib/python2.7/site-packages/docutils/statemachine.py", >> line 239, in run >> context, state, transitions) >> File "/usr/lib/python2.7/site-packages/docutils/statemachine.py", >> line 460, in check_line >> return method(match, context, next_state) >> File >> "/usr/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line >> 2726, in underline >> self.section(title, source, style, lineno - 1, messages) >> File >> "/usr/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line >> 327, in section >> self.new_subsection(title, lineno, messages) >> File >> "/usr/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line >> 395, in new_subsection >> node=section_node, match_titles=True) >> File >> "/usr/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line >> 282, in nested_parse >> node=node, match_titles=match_titles) >> File >> "/usr/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line >> 195, in run >> results = StateMachineWS.run(self, input_lines, input_offset) >> File "/usr/lib/python2.7/site-packages/docutils/statemachine.py", >> line 239, in run >> context, state, transitions) >> File "/usr/lib/python2.7/site-packages/docutils/statemachine.py", >> line 460, in check_line >> return method(match, context, next_state) >> File >> "/usr/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line >> 2726, in underline >> self.section(title, source, style, lineno - 1, messages) >> File >> "/usr/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line >> 327, in section >> self.new_subsection(title, lineno, messages) >> File 
>> "/usr/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line >> 395, in new_subsection >> node=section_node, match_titles=True) >> File >> "/usr/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line >> 282, in nested_parse >> node=node, match_titles=match_titles) >> File >> "/usr/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line >> 195, in run >> results = StateMachineWS.run(self, input_lines, input_offset) >> File "/usr/lib/python2.7/site-packages/docutils/statemachine.py", >> line 239, in run >> context, state, transitions) >> File "/usr/lib/python2.7/site-packages/docutils/statemachine.py", >> line 460, in check_line >> return method(match, context, next_state) >> File >> "/usr/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line >> 2299, in explicit_markup >> nodelist, blank_finish = self.explicit_construct(match) >> File >> "/usr/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line >> 2311, in explicit_construct >> return method(self, expmatch) >> File >> "/usr/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line >> 2054, in directive >> directive_class, match, type_name, option_presets) >> File >> "/usr/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line >> 2103, in run_directive >> result = directive_instance.run() >> File >> "/usr/src/RPM/BUILD/bro-2.3.1/build/doc/sphinx_input/ext/rst_directive.py", >> line 138, in run >> lexer = guess_lexer(content) >> File "/usr/lib/python2.7/site-packages/pygments/lexers/__init__.py", >> line 252, in guess_lexer >> raise ClassNotFound('no lexer matching the text found') >> ClassNotFound: no lexer matching the text found >> >> $ rpm -q python-module-docutils >> python-module-docutils-0.13-alt1.svn20140708 >> >> bro-2.3.1.tar.gz >> >> https://github.com/bro/bro/tree/master/doc/components/binpac >> https://github.com/bro/binpac/blob/master/README >> >> Could you please help me ? 
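The guard birkenfeld suggests can be sketched like this (a hypothetical patch against the rst_directive.py extension; the fallback to TextLexer is my assumption, and the actual upstream fix may differ in detail):

```python
# Hypothetical guard around Pygments' lexer guessing, per the IRC advice
# quoted above: catch ClassNotFound instead of letting it abort the build.
from pygments.lexers import TextLexer, guess_lexer
from pygments.util import ClassNotFound

def safe_guess_lexer(content):
    """Return a guessed lexer, falling back to plain text on failure."""
    try:
        return guess_lexer(content)
    except ClassNotFound:
        # Lexer guessing is not always successful and varies with the
        # Pygments version, so degrade gracefully to a plain-text lexer.
        return TextLexer()
```

The directive's `lexer = guess_lexer(content)` call would then become `lexer = safe_guess_lexer(content)`.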
> _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > From abenson at gmail.com Tue Jan 27 08:49:18 2015 From: abenson at gmail.com (Andrew Benson) Date: Tue, 27 Jan 2015 09:49:18 -0700 Subject: [Bro] Strange Issue with Live Capture In-Reply-To: References: <54C6778F.40401@g-clef.net> <54C69427.4000401@g-clef.net> Message-ID: I had them correct the issue; they were in fact sending us duplicates of packets. The issue still persists, though. I'm not sure what else to check. Is there anything like a timeout setting that could be causing it to assume the connection never completed? I've just found out one of our other teams has been having a similar problem, but with 2.3 only. They had discussed it with Liam, but never found a resolution. It's weird that I'm having the same issue with both 2.1 and 2.3. -- AndrewB Knowing is Half the Battle. On 26 January 2015 at 12:57, Andrew Benson wrote: > I just had an epiphany when thinking about your response: the site we're > dealing with configured the SPAN incorrectly. Looking back through, I'm > seeing *every* packet twice. I think they configured it to SPAN every port > on that switch, including the SPAN. I'm going to have them fix that, maybe > use the VLANs as the source as I had originally directed them. > > -- > AndrewB > Knowing is Half the Battle. > > On 26 January 2015 at 12:38, Andrew Benson wrote: > >> I think if it were getting multiple copies of each packet, it'd be >> logging those as well. The dup3 on the card just duplicates the stream so >> each process receives the same traffic. If a process attaches to a stream >> (dag0:0, :2, and :4), that stream can't be attached from another process. >> >> Looking back, we had some time over the weekend when the traffic was a
Likewise, we had a test at a >> previous site (mockup of this one) that didn't have the issue, and we've >> been using this setup for a few years, this is just the first I've ever >> seen this issue. It's just really weird. >> >> I wonder how I could check to see if there's something causing bro to >> break when talking to the card? It's just weird that I can't recreate it >> anywhere else. >> >> -- >> AndrewB >> Knowing is Half the Battle. >> >> On 26 January 2015 at 12:23, Aaron Gee-Clough wrote: >> >>> >>> Right, so if it works with the pcaps, then there's clearly a problem >>> with bro's interaction with the card (hence my focusing on the >>> load-balancing aspect). If you're not load-balancing but are duplicating, >>> are you sure bro's not reading all three streams, and getting multiple >>> copies of every packet? >>> >>> I don't have experience with the 7.5G4, so I'm not sure how dup3 works. >>> >>> aaron >>> >>> >>> On 01/26/2015 02:14 PM, Andrew Benson wrote: >>> >>>> We are using the 7.5G4. We currently have it configured using dup3 to >>>> send >>>> all the traffic three streams (snort, bro, and tcpdump), so we're not >>>> load >>>> balancing between processes. I can verify against the pcaps that bro is >>>> reading the packet. It shows the S0 for the handshake, and then OTH for >>>> every packet after that. Every packet is accounted for. >>>> >>>> >>>> -- >>>> AndrewB >>>> Knowing is Half the Battle. >>>> >>>> On 26 January 2015 at 10:21, Aaron Gee-Clough wrote: >>>> >>>> >>>>> If I were to bet, I'd guess it has something to do with how the Endace >>>>> card is load-balancing packets across your bro workers. If the >>>>> retransmission packets are ending up on different workers than the >>>>> original session, then each worker will think it's got a new session, >>>>> and log it accordingly. >>>>> >>>>> How do you have the Endace card configured? (for the 9.2X2 I have, >>>>> n_tuple_select is the pertinent config option.) 
>>>>> >>>>> aaron >>>>> >>>>> On 01/26/2015 11:59 AM, Andrew Benson wrote: >>>>> >>>>>> We're currently using Endace DAG capture cards to feed directly to >>>>>> bro, >>>>>> snort, and a rolling packet capture. >>>>>> >>>>>> The network we're currently looking at has a high number of >>>>>> >>>>> retransmissions >>>>> >>>>>> (at one point we counted 45% of traffic being retransmissions). >>>>>> >>>>>> Bro is currently logging each packet as a separate connection in >>>>>> >>>>> conn.log, >>>>> >>>>>> and is failing to run the protocol analyzers correctly (i.e. it'll >>>>>> detect >>>>>> it as FTP, but will only log the action, not the login, response). >>>>>> >>>>>> What's weird is that if I run bro against the rolling pcap, it works >>>>>> correctly. This problem only occurs when bro is listening to the >>>>>> device >>>>>> directly. >>>>>> >>>>>> This problem is still occurring with 2.3.1, so I'm at a loss. I >>>>>> enabled >>>>>> >>>>> the >>>>> >>>>>> capture-loss module, and it's reporting 0%. The capture card doesn't >>>>>> seem >>>>>> to be dropping anything either. >>>>>> >>>>>> Seen anything similar or have any suggestions for >>>>>> troubleshooting/fixing? >>>>>> >>>>>> -- >>>>>> AndrewB >>>>>> Knowing is Half the Battle. >>>>>> >>>>>> >>>>>> >>>>>> _______________________________________________ >>>>>> Bro mailing list >>>>>> bro at bro-ids.org >>>>>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >>>>>> >>>>>> _______________________________________________ >>>>> Bro mailing list >>>>> bro at bro-ids.org >>>>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >>>>> >>>>> >>>> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150127/d105b821/attachment-0001.html From plutochen2010 at gmail.com Tue Jan 27 13:21:35 2015 From: plutochen2010 at gmail.com (Clement Chen) Date: Tue, 27 Jan 2015 13:21:35 -0800 Subject: [Bro] Use PFRING_ZC for Bro Message-ID: Hi all, I am trying to use PFRING_ZC for Bro in my Security Onion box. I got the license from ntop but there was little documentation on how to enable this. Would appreciate any help/pointers to docs. I will compile step-by-step instructions if I get this working. I have the Intel 82599EB 10G card and the ixgbe-zc driver installed. #dkms status ixgbe-zc, 3.22.3, 3.13.0-44-generic, x86_64: installed pf_ring, 6, 3.13.0-35-generic, x86_64: installed pf_ring, 6, 3.13.0-44-generic, x86_64: installed (WARNING! Diff between built and installed module!) pfring, 6.0.3, 3.13.0-44-generic, x86_64: installed Not sure what to do next and how to enable it for Bro. Thanks. -Clement -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150127/d48615e7/attachment.html From plutochen2010 at gmail.com Tue Jan 27 13:42:40 2015 From: plutochen2010 at gmail.com (Clement Chen) Date: Tue, 27 Jan 2015 13:42:40 -0800 Subject: [Bro] Use PFRING_ZC for Bro In-Reply-To: <7D934230A65C8F498E98A3E6764DB983BE06E5C7@UCCS-EX3.uccs.edu> References: <7D934230A65C8F498E98A3E6764DB983BE06E5C7@UCCS-EX3.uccs.edu> Message-ID: I was seeing a 60% packet loss rate. After some aggressive BPF filtering, it went down to about 15%-20%. Are you using a big box? Mine is a 24-core CPU with 64GB mem. There is an email thread about Bro with a 10G card where many people also see pretty significant packet loss. It would be great if you could share your configs and also your traffic throughput. Thanks. -Clement On Tue, Jan 27, 2015 at 1:35 PM, Greg Williams wrote: > Why do you want to use it?
I'm using Security Onion with Bro and 2x2 > Intel x520 10Gb cards and have no packet loss with the base SO > configuration. > > > > Greg Williams, M.E., ISA, GPEN, GCFE > > Director of Networks and Infrastructure > Interim IT Security Manager/Information Security Officer/HIPAA Security > Officer > University of Colorado Colorado Springs - Department of Information > Technology > Phone: 719-255-3211 > > > > *From:* bro-bounces at bro.org [mailto:bro-bounces at bro.org] *On Behalf Of *Clement > Chen > *Sent:* Tuesday, January 27, 2015 2:22 PM > *To:* bro at bro.org > *Subject:* [Bro] Use PFRING_ZC for Bro > > > > Hi all, > > > > I am trying to use PFRING_ZC for Bro in my security onion box. I got the > license from ntop but there was little documentation on how to enable this. > > > > Would appreciate any help/pointer to docs. I will compile step-by-step > instructions if I get this working. > > > > I have the Intel 82599EB 10G card and the ixgbe-zc driver installed. > > > > #dkms status > > ixgbe-zc, 3.22.3, 3.13.0-44-generic, x86_64: installed > > pf_ring, 6, 3.13.0-35-generic, x86_64: installed > > pf_ring, 6, 3.13.0-44-generic, x86_64: installed (WARNING! Diff between > built and installed module!) > > pfring, 6.0.3, 3.13.0-44-generic, x86_64: installed > > > > not sure what to do next and how to enable it for Bro. > > > > Thanks. > > > > -Clement > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150127/f3218eb9/attachment.html From stanv at altlinux.org Wed Jan 28 01:18:24 2015 From: stanv at altlinux.org (Andrew V. Stepanov) Date: Wed, 28 Jan 2015 12:18:24 +0300 Subject: [Bro] RST documentation for BRO In-Reply-To: <54C7A0E5.50105@altlinux.org> References: <54C79E6C.9030401@altlinux.org> <54C7A0E5.50105@altlinux.org> Message-ID: <54C8A960.1050601@altlinux.org> 27.01.15 17:29, Andrew V.
Stepanov wrote: Commit https://github.com/bro/bro/commit/36bc7ba5b5d25cea881db22fb1a5bc2bc5fbc3e4 works well Thank you! From hhoffman at ip-solutions.net Wed Jan 28 05:43:34 2015 From: hhoffman at ip-solutions.net (Harry Hoffman) Date: Wed, 28 Jan 2015 08:43:34 -0500 Subject: [Bro] DPDK and Bro Message-ID: <54C8E786.3030501@ip-solutions.net> Hi All, Whilst enduring snow-mageddon here in the northeast of the US, I had some free time on my hands and was reading up on DPDK (http://dpdk.org/). I was wondering if any of the Bro devs were looking at integrating it with Bro? Cheers, Harry From jdonnelly at dyn.com Wed Jan 28 06:17:30 2015 From: jdonnelly at dyn.com (John Donnelly) Date: Wed, 28 Jan 2015 08:17:30 -0600 Subject: [Bro] Bro v2.3.2 release In-Reply-To: <80670CB1-3BEF-4359-A375-D9DF788A5A97@illinois.edu> References: <80670CB1-3BEF-4359-A375-D9DF788A5A97@illinois.edu> Message-ID: Hi, I am getting this error on a fresh checkout: git clone --recursive cd bro ./configure --enable-debug make[3]: Entering directory `/work/jpd/dyn/src/bro-fork/bro/build' [ 20%] Building CXX object src/analyzer/protocol/bittorrent/CMakeFs/plugin-Bro-BitTorrent.dir/BitTorrent.cc.o In file included from /work/jpd/dyn/src/bro-fork/bro/src/Net.h:12:0, from /work/jpd/dyn/src/bro-fork/bro/src/RuleMatcher.h:15, from /work/jpd/dyn/src/bro-fork/bro/src/Conn.h:13, from /work/jpd/dyn/src/bro-fork/bro/src/analyzer/protocol/tcp/TCP.h:11, from /work/jpd/dyn/src/bro-fork/bro/src/analyzer/protocol/bittorrent/BitTorrent.h:6, from /work/jpd/dyn/src/bro-fork/bro/src/analyzer/protocol/bittorrent/BitTorrent.cc:3: /work/jpd/dyn/src/bro-fork/bro/src/iosource/PktSrc.h: In constructor 'iosource::PktSrc::Properties::Properties()': /work/jpd/dyn/src/bro-fork/bro/src/iosource/PktSrc.h:272:14: error: 'PCAP_NETMASK_UNKNOWN' was not declared in this scope netmask = PCAP_NETMASK_UNKNOWN; ^ On Mon, Jan 26, 2015 at 10:47 AM, Siwek, Jon wrote: > Bro v2.3.2 is now available.
For more details on vulnerabilities > addressed, see this blog post: > > http://blog.bro.org/2015/01/bro-232-release.html > > The new version can be downloaded from: > > https://www.bro.org/download/index.html > > - Jon > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150128/22429321/attachment.html From jdonnelly at dyn.com Wed Jan 28 07:19:32 2015 From: jdonnelly at dyn.com (John Donnelly) Date: Wed, 28 Jan 2015 09:19:32 -0600 Subject: [Bro] Bro v2.3.2 release In-Reply-To: References: <80670CB1-3BEF-4359-A375-D9DF788A5A97@illinois.edu> Message-ID: #define PCAP_NETMASK_UNKNOWN 0xffffffff "/usr/include/pcap/pcap.h" 484L, 18147C is on my system; adding #include "pcap.h" to PktSrc.h doesn't solve it. So I added #define PCAP_NETMASK_UNKNOWN 0xffffffff to PktSrc.h to get farther.
On Wed, Jan 28, 2015 at 8:17 AM, John Donnelly wrote: > Hi > > I am getting this error on a fresh checkout: > > git clone --recursive > cd bro > ./configure --enable-debug > > > make[3]: Entering directory `/work/jpd/dyn/src/bro-fork/bro/build' > [ 20%] Building CXX object > src/analyzer/protocol/bittorrent/CMakeFs/plugin-Bro-BitTorrent.dir/BitTorrent.cc.o > In file included from /work/jpd/dyn/src/bro-fork/bro/src/Net.h:12:0, > from /work/jpd/dyn/src/bro-fork/bro/src/RuleMatcher.h:15, > from /work/jpd/dyn/src/bro-fork/bro/src/Conn.h:13, > from > /work/jpd/dyn/src/bro-fork/bro/src/analyzer/protocol/tcp/TCP.h:11, > from > /work/jpd/dyn/src/bro-fork/bro/src/analyzer/protocol/bittorrent/BitTorrent.h:6, > from > /work/jpd/dyn/src/bro-fork/bro/src/analyzer/protocol/bittorrent/BitTorrent.cc:3: > /work/jpd/dyn/src/bro-fork/bro/src/iosource/PktSrc.h: In constructor > ?iosource::PktSrc::Properties::Properties()?: > /work/jpd/dyn/src/bro-fork/bro/src/iosource/PktSrc.h:272:14: error: > ?PCAP_NETMASK_UNKNOWN? was not declared in this scope > netmask = PCAP_NETMASK_UNKNOWN; > ^ > > > On Mon, Jan 26, 2015 at 10:47 AM, Siwek, Jon wrote: > >> Bro v2.3.2 is now available. For more details on vulnerabilities >> addressed, see this blog post: >> >> http://blog.bro.org/2015/01/bro-232-release.html >> >> The new version can be downloaded from: >> >> https://www.bro.org/download/index.html >> >> - Jon >> _______________________________________________ >> Bro mailing list >> bro at bro-ids.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150128/7ffe3368/attachment-0001.html From jdonnelly at dyn.com Wed Jan 28 07:20:33 2015 From: jdonnelly at dyn.com (John Donnelly) Date: Wed, 28 Jan 2015 09:20:33 -0600 Subject: [Bro] Bro v2.3.2 release In-Reply-To: <80670CB1-3BEF-4359-A375-D9DF788A5A97@illinois.edu> References: <80670CB1-3BEF-4359-A375-D9DF788A5A97@illinois.edu> Message-ID: Yet another build failure: make[3]: Leaving directory `/work/jpd/dyn/src/bro-fork/bro/build' make[3]: Entering directory `/work/jpd/dyn/src/bro-fork/bro/build' [ 70%] Building CXX object src/iosource/CMakeFiles/bro_iosource.dir/BPF_Program.cc.o [ 70%] Building CXX object src/iosource/CMakeFiles/bro_iosource.dir/Component.cc.o [ 70%] Building CXX object src/iosource/CMakeFiles/bro_iosource.dir/Manager.cc.o [ 70%] Building CXX object src/iosource/CMakeFiles/bro_iosource.dir/PktDumper.cc.o [ 70%] Building CXX object src/iosource/CMakeFiles/bro_iosource.dir/PktSrc.cc.o /work/jpd/dyn/src/bro-fork/bro/src/iosource/PktSrc.cc: In member function 'bool iosource::PktSrc::ApplyBPFFilter(int, const pcap_pkthdr*, const u_char*)': /work/jpd/dyn/src/bro-fork/bro/src/iosource/PktSrc.cc:516:57: error: 'pcap_offline_filter' was not declared in this scope return pcap_offline_filter(code->GetProgram(), hdr, pkt); ^ /work/jpd/dyn/src/bro-fork/bro/src/iosource/PktSrc.cc:517:2: warning: control reaches end of non-void function [-Wreturn-type] } ^ make[3]: *** [src/iosource/CMakeFiles/bro_iosource.dir/PktSrc.cc.o] Error 1 make[3]: Leaving directory `/work/jpd/dyn/src/bro-fork/bro/build' make[2]: *** [src/iosource/CMakeFiles/bro_iosource.dir/all] Error 2 make[2]: Leaving directory `/work/jpd/dyn/src/bro-fork/bro/build' make[1]: *** [all] Error 2 make[1]: Leaving directory `/work/jpd/dyn/src/bro-fork/bro/build' make: *** [all] Error 2 shell returned 2 On Mon, Jan 26, 2015 at 10:47 AM, Siwek, Jon wrote: > Bro v2.3.2 is now available.
For more details on vulnerabilities > addressed, see this blog post: > > http://blog.bro.org/2015/01/bro-232-release.html > > The new version can be downloaded from: > > https://www.bro.org/download/index.html > > - Jon > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150128/a4514a1d/attachment.html From jsiwek at illinois.edu Wed Jan 28 07:45:12 2015 From: jsiwek at illinois.edu (Siwek, Jon) Date: Wed, 28 Jan 2015 15:45:12 +0000 Subject: [Bro] Bro v2.3.2 release In-Reply-To: References: <80670CB1-3BEF-4359-A375-D9DF788A5A97@illinois.edu> Message-ID: <8337985A-AF28-4C7C-9D80-1A4C30EDE0EF@illinois.edu> > On Jan 28, 2015, at 8:17 AM, John Donnelly wrote: > > I am getting this error on a fresh checkout: > > git clone --recursive To use 2.3.2 from a git clone, you also have to check out the v2.3.2 tag and ensure the git submodules are tracking the correct versions. But I'll follow up with your problem building the master branch in the tracker ticket you created. - Jon From robin at icir.org Wed Jan 28 07:58:02 2015 From: robin at icir.org (Robin Sommer) Date: Wed, 28 Jan 2015 07:58:02 -0800 Subject: [Bro] DPDK and Bro In-Reply-To: <54C8E786.3030501@ip-solutions.net> References: <54C8E786.3030501@ip-solutions.net> Message-ID: <20150128155802.GA38314@icir.org> On Wed, Jan 28, 2015 at 08:43 -0500, Harry Hoffman wrote: > Whilst enduring snow-mageddon here in the northeast of the US I had some > free time on my hands and was reading up on DPDK (http://dpdk.org/). I don't think anybody is looking more closely into DPDK on our end.
However, we are working on a system that uses netmap to provide similar capabilities; see https://github.com/bro/packet-bricks Robin -- Robin Sommer * ICSI/LBNL * robin at icir.org * www.icir.org/robin From luismiguelferreirasilva at gmail.com Wed Jan 28 09:29:51 2015 From: luismiguelferreirasilva at gmail.com (Luis Miguel Silva) Date: Wed, 28 Jan 2015 10:29:51 -0700 Subject: [Bro] Developing my own writer driver Message-ID: Dear all, I'm brand new to bro (just found out about it and tried it yesterday) and I'm very intrigued by its capabilities. The documentation says we can write outputs into databases BUT, as I got to the logging framework documentation, it seems the only "non file based" writer driver available is for sqlite. I'm really interested in using a server based SQL instance (like postgresql, mysql or mariadb) AND a NoSQL service (mongodb or couchdb). Are there any other writer drivers available (even if they are not officially supported / are part of non committed contributions)? If not, can someone give me some pointers on how to develop extra writer drivers? Thank you, Luis Silva -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150128/652b6abf/attachment.html From johanna at icir.org Wed Jan 28 10:14:29 2015 From: johanna at icir.org (Johanna Amann) Date: Wed, 28 Jan 2015 10:14:29 -0800 Subject: [Bro] Developing my own writer driver In-Reply-To: References: Message-ID: <20150128181429.GA67897@Beezling.local> There is a prototype of a postgresql writer in a branch. However, it is seriously outdated and will not compile with current Bro versions -- it was based on either 2.2 or 2.1. And even for old bro versions it was barely functional. The best way to learn how to write logging writers is probably to take a look at the already existing ones - they are mostly decoupled from Bro and quite easy to write.
Johanna On Wed, Jan 28, 2015 at 10:29:51AM -0700, Luis Miguel Silva wrote: > Dear all, > > I'm brand new to bro (just found out about it and tried yesterday) and I'm > very intrigued by its capabilities. > > The documentation says we can write outputs into databases BUT, as I got to > the logging framework documentation, it seems the only "non file based" > writer driver available is for sqlite. > > I'm really interested in using a server based SQL instance (like > postgresql, mysql or mariadb) AND a NoSQL service (mongodb or couchdb). > > Are there any other writer drivers available (even if they are not > officially supported / are part of non committed contributions)? > > If not, can someone give me some pointers on how to develop extra writer > drivers? > > Thank you, > Luis Silva > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From john at giggled.org Wed Jan 28 10:39:46 2015 From: john at giggled.org (John Green) Date: Wed, 28 Jan 2015 18:39:46 +0000 Subject: [Bro] Developing my own writer driver In-Reply-To: References: Message-ID: Hi Luis, I had a similar requirement a while back and took a different approach to get my data into Postgres by importing the output from the default text writer. This was largely to keep things as simple as possible on the sensor side. See https://github.com/j-o-h-n-g/Mortimer/blob/master/broimport.py The code is quite dirty in places, but might give you some ideas for possible bro<->postgres type mappings. John On 28 January 2015 at 17:29, Luis Miguel Silva wrote: > Dear all, > > I'm brand new to bro (just found out about it and tried yesterday) and I'm > very intrigued by its capabilities. > > The documentation says we can write outputs into databases BUT, as I got to > the logging framework documentation, it seems the only "non file based" > writer driver available is for sqlite. 
> > I'm really interested in using a server based SQL instance (like postgresql, > mysql or mariadb) AND a NoSQL service (mongodb or couchdb). > > Are there any other writer drivers available (even if they are not > officially supported / are part of non committed contributions)? > > If not, can someone give me some pointers on how to develop extra writer > drivers? > > Thank you, > Luis > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From luismiguelferreirasilva at gmail.com Wed Jan 28 11:01:16 2015 From: luismiguelferreirasilva at gmail.com (Luis Miguel Silva) Date: Wed, 28 Jan 2015 12:01:16 -0700 Subject: [Bro] Developing my own writer driver In-Reply-To: References: Message-ID: Thanks John, that is very useful, though I was trying to avoid using an external import script as that will introduce a delay between the time the events happen and the records get added to the DB. Also, based on my experience, it can be pretty expensive to run an import script that parses through the data and translates it into insert calls. Are the text writer logs created in append mode? If so, I could potentially have an external process that listens for new lines and adds things "in near realtime". Out of curiosity, why didn't you create a custom writer instead? ...simplicity? Thank you! Luis On Wed, Jan 28, 2015 at 11:39 AM, John Green wrote: > Hi Luis, > I had a similar requirement a while back and took a different approach > to get my data into Postgres by importing the output from the default > text writer. This was largely to keep things as simple as possible on > the sensor side. > > See https://github.com/j-o-h-n-g/Mortimer/blob/master/broimport.py > > The code is quite dirty in places, but might give you some ideas for > possible bro<->postgres type mappings.
> > John > > On 28 January 2015 at 17:29, Luis Miguel Silva > wrote: > > Dear all, > > > > I'm brand new to bro (just found out about it and tried yesterday) and > I'm > > very intrigued by its capabilities. > > > > The documentation says we can write outputs into databases BUT, as I got > to > > the logging framework documentation, it seems the only "non file based" > > writer driver available is for sqlite. > > > > I'm really interested in using a server based SQL instance (like > postgresql, > > mysql or mariadb) AND a NoSQL service (mongodb or couchdb). > > > > Are there any other writer drivers available (even if they are not > > officially supported / are part of non committed contributions)? > > > > If not, can someone give me some pointers on how to develop extra writer > > drivers? > > > > Thank you, > > Luis Silva > > _______________________________________________ > > Bro mailing list > > bro at bro-ids.org > > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From john at giggled.org Wed Jan 28 10:39:46 2015 From: john at giggled.org (John Green) Date: Wed, 28 Jan 2015 18:39:46 +0000 Subject: [Bro] Developing my own writer driver In-Reply-To: References: Message-ID: On 28 January 2015 at 19:01, Luis Miguel Silva wrote: > Out of curisotiy, why didn't you create a custom writer instead? > ...simplicity? At the time, simplicity; I also had multiple remote sensors with restricted network connectivity. I would rsync, or physically transfer, the completed logs back to a central postgres server for import and analysis. Real time alerting wasn't that important. Getting the data into Postgres did facilitate the writing of some useful SQL queries to spot odd/malicious behaviour. If I were doing it again I would probably investigate using Postgres Foreign Data Wrappers instead.
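For readers following this thread: the "external process that listens for new lines" idea is straightforward to prototype, because the default ASCII writer appends tab-separated records under a "#fields" header line. A minimal Python sketch of such a tailer/parser (the function names and the polling approach are illustrative, not part of Bro or of broimport.py):

```python
import time

def parse_bro_ascii(lines):
    """Yield one dict per record from a Bro ASCII (TSV) log stream.

    Assumes the default ASCII writer format: '#'-prefixed header lines,
    with '#fields' naming the columns, and tab-separated record lines.
    """
    fields = []
    for line in lines:
        line = line.rstrip("\n")
        if not line:
            continue
        if line.startswith("#"):
            if line.startswith("#fields"):
                fields = line.split("\t")[1:]  # column names follow the tag
            continue
        yield dict(zip(fields, line.split("\t")))

def follow(path, poll=1.0):
    """Tail a log file opened in append mode, yielding new lines as they arrive."""
    with open(path) as f:
        f.seek(0, 2)  # start at end of file, only see new records
        while True:
            line = f.readline()
            if line:
                yield line
            else:
                time.sleep(poll)
```

Feeding `follow("conn.log")` into `parse_bro_ascii` would give near-realtime dicts that an importer could turn into database inserts; note this sketch ignores log rotation, which a production tailer would have to handle.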
John From luismiguelferreirasilva at gmail.com Wed Jan 28 11:37:02 2015 From: luismiguelferreirasilva at gmail.com (Luis Miguel Silva) Date: Wed, 28 Jan 2015 12:37:02 -0700 Subject: [Bro] Developing my own writer driver In-Reply-To: References: Message-ID: Cool! And since you talked about multiple remote sensors, let me ask another question that I'm really curious about... Let's imagine that I have a low end machine capturing traffic and want to send the prefiltered events into a more beefy remote machine for analysis and event capturing. Can I do that? Based on bro's Input framework, I believe I can redirect an entire tcpdump into it BUT, I want *some* filtering to happen upfront, though the MAIN processing work should be executed somewhere else. From what I understood (based on this architectural description), my low end computer in charge of the sniffing would run the "manager" code and my beefy machine(s) would run the workers. Is that how I would set things up? And who writes the outputs, is it the workers OR do the workers pass the result back to the manager? Thank you, Luis On Wed, Jan 28, 2015 at 12:22 PM, John Green wrote: > On 28 January 2015 at 19:01, Luis Miguel Silva > wrote: > > Out of curisotiy, why didn't you create a custom writer instead? > > ...simplicity? > > At the time simplicity and I had multiple remote sensors with > restricted network connectivity. I would rsync, or physically > transfer, the completed logs back to a central postgres server for > import and analysis. Real time alerting wasn't that important. > > Getting the data into Postgres did facilitate the writing of some > useful SQL queries to spot odd/malicious behaviour. If I was doing it > again I probably investigate using Postgres Foreign Data Wrappers > instead. > > John > -------------- next part -------------- An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150128/2bd83f27/attachment.html From seth at icir.org Wed Jan 28 11:40:54 2015 From: seth at icir.org (Seth Hall) Date: Wed, 28 Jan 2015 14:40:54 -0500 Subject: [Bro] DPDK and Bro In-Reply-To: <54C8E786.3030501@ip-solutions.net> References: <54C8E786.3030501@ip-solutions.net> Message-ID: <3A672E2F-DB3A-4B6D-9A73-B96DC2511C9D@icir.org> > On Jan 28, 2015, at 8:43 AM, Harry Hoffman wrote: > > Whilst enduring snow-mageddon here in the northeast of the US I had some > free time on my hands and was reading up on DPDK (http://dpdk.org/). I > was wondering if any of the Bro devs were looking at integration with Bro? I looked at DPDK a while back but as Robin said, we ended up moving more in the direction of netmap because it gives us cross-platform capability (freebsd and linux) and it works for multiple vendors' NICs unlike DPDK. .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From seth at icir.org Wed Jan 28 12:27:57 2015 From: seth at icir.org (Seth Hall) Date: Wed, 28 Jan 2015 15:27:57 -0500 Subject: [Bro] Strange Issue with Live Capture In-Reply-To: References: <54C6778F.40401@g-clef.net> <54C69427.4000401@g-clef.net> Message-ID: <07DF86C8-D380-459F-B45C-69A5B23C9A73@icir.org> > On Jan 27, 2015, at 11:49 AM, Andrew Benson wrote: > > I had them correct the issue, they were in fact sending us duplicates of packets. > > The issue still persists, though. I'm not sure what else to check? Could you send me some lines from conn.log?
.Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From seth at icir.org Wed Jan 28 12:39:41 2015 From: seth at icir.org (Seth Hall) Date: Wed, 28 Jan 2015 15:39:41 -0500 Subject: [Bro] A strange connection In-Reply-To: <54C65BB4.2080306@gmail.com> References: <54C65BB4.2080306@gmail.com> Message-ID: <89DD8072-D1AB-41FC-9F4C-F0A156F025F7@icir.org> > On Jan 26, 2015, at 10:22 AM, Po-Ching Lin wrote: > > If there are duplicated packets due to packet retransmission, will orig_ip_bytes and resp_ip_bytes > be still correct (I mean the bytes may be counted more than once)? If not, what are the reliable fields to > derive the transmitted bytes (not counting duplicated ones)? Thanks. It's the (orig/resp)_bytes field as you suspect. Something happened in this connection that tricked Bro's sequence id tracking which caused the larger numbers in those fields. If you find it again and are able to capture a pcap of it, we'd be interested in seeing it. Thanks, .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From luismiguelferreirasilva at gmail.com Wed Jan 28 12:42:12 2015 From: luismiguelferreirasilva at gmail.com (Luis Miguel Silva) Date: Wed, 28 Jan 2015 13:42:12 -0700 Subject: [Bro] Developing my own writer driver In-Reply-To: References: Message-ID: I realized my last question was off-topic from the original thread, so I'm going to create a new thread for it. Thank you! Luis On Wed, Jan 28, 2015 at 12:37 PM, Luis Miguel Silva < luismiguelferreirasilva at gmail.com> wrote: > Cool! And since you talked about multiple remote sensors, let me ask > another question that I'm really curious about... > > Lets imagine that I have a low end machine capturing traffic and want to > send the prefiltered events into a more beefy remote machine for analysis > and event capturing. Can I do that?
> > Based on bro's Input framework, I believe I can redirect an entire tcpdump > into it BUT, I want *some* filtering to happen upfront, though the MAIN > processing work should be executed somewhere else. > > From what I understood (based on this architectural description > ), my low end computer in > charge of the sniffing would run the "manager" code and my beefy machine(s) > would run the workers. Is that how I would set things up? > And who writes the outputs, is it the workers OR do the workers pass the > result back to the manager? > > Thank you, > Luis > > On Wed, Jan 28, 2015 at 12:22 PM, John Green wrote: > >> On 28 January 2015 at 19:01, Luis Miguel Silva >> wrote: >> > Out of curisotiy, why didn't you create a custom writer instead? >> > ...simplicity? >> >> At the time simplicity and I had multiple remote sensors with >> restricted network connectivity. I would rsync, or physically >> transfer, the completed logs back to a central postgres server for >> import and analysis. Real time alerting wasn't that important. >> >> Getting the data into Postgres did facilitate the writing of some >> useful SQL queries to spot odd/malicious behaviour. If I was doing it >> again I probably investigate using Postgres Foreign Data Wrappers >> instead. >> >> John >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150128/107db2b2/attachment.html From seth at icir.org Wed Jan 28 12:46:19 2015 From: seth at icir.org (Seth Hall) Date: Wed, 28 Jan 2015 15:46:19 -0500 Subject: [Bro] [bro] Bro intelligence framework meta data issue. In-Reply-To: References: Message-ID: > On Jan 22, 2015, at 9:44 AM, Giedrius Ramas wrote: > > So as you can see there are any meta data fields on intel.log output. > > Please shed some light on this , Where should I look for troubleshooting ? Sorry about that. 
When I designed the intel framework, I ran into a few conceptual issues that I just deferred to a later date. I have done some work to address the shortcoming and I'm hoping to get it merged back in for the 2.4 release. I'll give some guidance now if you'd like to work with it today.

Clone this repository into your site/ directory:
cd /share/bro/site/
git clone https://github.com/sethhall/intel-ext.git intel-ext

Add the "intel-ext" module to your local.bro:
echo "@load intel-ext" >> local.bro

Write and load a script that looks like this:

====script=====
redef record Intel::Info += {
	descriptions: set[string] &optional &log;
};

event Intel::extend_match(info: Intel::Info, s: Intel::Seen, items: set[Intel::Item]) &priority=0
	{
	for ( item in items )
		{
		if ( ! info?$descriptions )
			info$descriptions = set();

		add info$descriptions[item$meta$desc];
		}
	}
====end script====

This will add descriptions from all of your intel in a log named intel-ext.log. Let me know if it works for you. .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From luismiguelferreirasilva at gmail.com Wed Jan 28 12:45:59 2015 From: luismiguelferreirasilva at gmail.com (Luis Miguel Silva) Date: Wed, 28 Jan 2015 13:45:59 -0700 Subject: [Bro] offloading the processing to different nodes Message-ID: Dear all, Let's imagine that I have a low end machine capturing traffic and want to send the pre-filtered events into a more beefy remote machine for analysis and event capturing. Can I do that? Based on bro's Input framework, I believe I can redirect an entire tcpdump into it BUT, I want *some* filtering to happen upfront, though the MAIN processing work should be executed somewhere else. From what I understood (based on this architectural description), my low end computer in charge of the sniffing would run the "manager" code and my beefy machine(s) would run the workers. Is that how I would set things up?
And who writes the outputs, is it the workers OR do the workers pass the result back to the manager? Also, if I were to use the File analysis framework, would it be possible to extract and analyze the files in the beefy computers instead of the manager node? I suspect I'll have to transfer the full connection flow (so the file can be extracted) and that will generate a LOT of traffic (which is something I want to avoid). Are my assumptions correct? Thank you, Luis Silva -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150128/50a51670/attachment.html From luismiguelferreirasilva at gmail.com Wed Jan 28 12:57:33 2015 From: luismiguelferreirasilva at gmail.com (Luis Miguel Silva) Date: Wed, 28 Jan 2015 13:57:33 -0700 Subject: [Bro] Discovering known_hosts outside the network segment we are analyzing Message-ID: Dear all, As I started playing around with bro, I noticed the ability to identify known_hosts in the network. My problem is that I need to identify hosts that are NOT part of my networks.cfg: root at local-bro:~# cat /usr/local/bro/etc/networks.cfg # List of local networks in CIDR notation, optionally followed by a # descriptive tag. # For example, "10.0.0.0/8" or "fe80::/64" are valid prefixes. 192.168.1.0/24 Private IP space root at local-bro:~# The default networks.cfg had multiple networks but, what I want to do is detect what "invalid" traffic is flowing in the network (e.g. machines in a 192.168.0.0/24 segment, sending out packets in my 192.168.1.0/24 network). Here's my use case: - I install a routing / sniffing appliance between the router and the existing local network (192.168.0.0/24) so I can sniff the traffic with bro - My appliance changes the network segment for the internal network to something else (e.g.
192.168.1.0/24) and starts serving addresses in that range using dhcp -- all dynamically configured devices set up with the new address -- but then I discover that there were some devices in the previous network that had static ip addresses in the 192.168.0.0/24 range, so they stop working What I would LIKE to do is have bro detect the "orphaned" 192.168.0.0/24 nodes in the known_hosts, even though my network is now 192.168.1.0/24. I could do this by externally sniffing for arp requests but I would really like to do it through bro... Is the solution to specify all internal reserved ranges in networks.cfg? 192.168.0.0/16 10.0.0.0/8 ... Is this good practice? And is there a better approach to achieve what I need? Thank you, Luis -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150128/5210cb4b/attachment.html From somdutta.bose at gmail.com Wed Jan 28 13:40:45 2015 From: somdutta.bose at gmail.com (Somdutta Bose) Date: Thu, 29 Jan 2015 03:10:45 +0530 Subject: [Bro] Data anonymization and transformation using Bro Message-ID: Hi, I was reading a paper, "A High-level Programming Environment for Packet Trace Anonymization and Transformation" by Ruoming Pang and Vern Paxson, which talks about anonymizing network data using Bro. It was mentioned that it was developed as an extension to Bro. Could you please let me know where I can find the source code of the mentioned extension so that I can implement scripts to anonymize network data. Thanks Somdutta Bose -------------- next part -------------- An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150129/767f3dd5/attachment-0001.html From donaldson8 at llnl.gov Wed Jan 28 14:07:10 2015 From: donaldson8 at llnl.gov (Donaldson, John) Date: Wed, 28 Jan 2015 22:07:10 +0000 Subject: [Bro] Discovering known_hosts outside the network segment we are analyzing In-Reply-To: References: Message-ID: Are you thinking of something along the lines of: redef Known::host_tracking = ALL_HOSTS; (see https://www.bro.org/sphinx/scripts/policy/protocols/conn/known-hosts.bro.html) This should record ALL observed hosts in the known_hosts file. v/r John From: bro-bounces at bro.org [mailto:bro-bounces at bro.org] On Behalf Of Luis Miguel Silva Sent: Wednesday, January 28, 2015 12:58 PM To: bro Subject: [Bro] Discovering known_hosts outside the network segment we are analyzing Dear all, As I started playing around with bro, I noticed the ability to identify known_hosts in the network. My problem is that I need to identify hosts that are NOT part of my networks.cfg: root at local-bro:~# cat /usr/local/bro/etc/networks.cfg # List of local networks in CIDR notation, optionally followed by a # descriptive tag. # For example, "10.0.0.0/8" or "fe80::/64" are valid prefixes. 192.168.1.0/24 Private IP space root at local-bro:~# The default networks.cfg had multiple networks but, what I want to do is detect what "invalid" traffic is flowing in the network (e.g. machines in a 192.168.0.0/24 segment, sending out packets in my 192.168.1.0/24 network). Here's my use case: - I install a routing / sniffing appliance between the router and the existing local network (192.168.0.0/24) so I can sniff the traffic with bro - My appliance changes the network segment for the internal network to something else (e.g. 
192.168.1.0/24) and starts serving addresses in that range using dhcp -- all dynamically configured devices setup with the new address -- but then I discover that there were some devices in the previous network that had static ip addresses in the 192.168.0.0/24 range, so they stop working What I would LIKE to do is have bro detect the "orphaned" 192.168.0.0/24 nodes in the known_hosts, even though my network is now 192.168.1.0/24. I could do this by externally sniffing for arp requests but I would really like to do it through bro... Is the solution to specify all internal reserved ranges in networks.cfg? 192.168.0.0/16 10.0.0.0/8 ... Is this good practice? And is there a better approach to achieve what I need? Thank you, Luis -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150128/9e25b8ba/attachment.html From slagell at illinois.edu Wed Jan 28 14:07:42 2015 From: slagell at illinois.edu (Slagell, Adam J) Date: Wed, 28 Jan 2015 22:07:42 +0000 Subject: [Bro] Data anonymization and transformation using Bro In-Reply-To: References: Message-ID: <5B9B5DA0-CE6F-4535-87FA-39B91BD8C4BF@illinois.edu> This code was actually pulled out of Bro a long time ago and will not work with the past several versions, including all of those in our git repository. Maybe send a request to info at bro.org asking for Vern's attention in the subject as he may have it somewhere along with the appropriate version of Bro. On Jan 28, 2015, at 3:40 PM, Somdutta Bose > wrote: Hi, I was reading a paper A High-level Programming Environment for Packet Trace Anonymization and Transformation by Ruoming Pang and Vern Paxson, which talks about anonymizing network data using Bro. It was mentioned that it was developed as an extension to Bro. Could you please let me know where I can find the source code of the mentioned extension so that I can implement scripts to anonymize network data.
Thanks Somdutta Bose _______________________________________________ Bro mailing list bro at bro-ids.org http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro ------ Adam J. Slagell Chief Information Security Officer Assistant Director, Cybersecurity Directorate National Center for Supercomputing Applications University of Illinois at Urbana-Champaign www.slagell.info "Under the Illinois Freedom of Information Act (FOIA), any written communication to or from University employees regarding University business is a public record and may be subject to public disclosure." -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150128/2dffbd1d/attachment.html From luismiguelferreirasilva at gmail.com Wed Jan 28 14:55:48 2015 From: luismiguelferreirasilva at gmail.com (Luis Miguel Silva) Date: Wed, 28 Jan 2015 15:55:48 -0700 Subject: [Bro] Discovering known_hosts outside the network segment we are analyzing In-Reply-To: References: Message-ID: Ah, yes! This seems to be exactly what I was looking for! Let me ask you something else though, what is the best practice to set that variable without changing the base known-hosts.bro script? (as I was reading the documentation yesterday, it said we should avoid making changes to the base scripts). Do I set that global parameter somewhere in a config file OR should I copy the known-hosts.bro script to my site/ directory and change it there? p.s. this is probably a VERY stupid question but I'm brand new to bro (less than 24h), so I'm still trying to figure out how to properly use it :o) Thank you, Luis On Wed, Jan 28, 2015 at 3:07 PM, Donaldson, John wrote: > Are you thinking of something along the lines of: > > > > redef Known::host_tracking = ALL_HOSTS; > > > > (see > https://www.bro.org/sphinx/scripts/policy/protocols/conn/known-hosts.bro.html > ) > > > > This should record ALL observed hosts in the known_hosts file.
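As an aside for this thread: the "specify all internal reserved ranges" check that networks.cfg performs can also be prototyped outside Bro. A minimal Python sketch using the standard ipaddress module, with the network values taken from this thread (the classify helper is illustrative and not a Bro API):

```python
import ipaddress

# RFC 1918 private ranges -- the "internal reserved ranges" idea from the thread.
PRIVATE_NETS = [ipaddress.ip_network(n)
                for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

# The renumbered segment from the use case described above.
EXPECTED_NET = ipaddress.ip_network("192.168.1.0/24")

def classify(host):
    """Return 'expected' for hosts in the current segment, 'orphaned' for
    private addresses outside it (e.g. a static 192.168.0.x leftover),
    and 'external' for everything else."""
    addr = ipaddress.ip_address(host)
    if addr in EXPECTED_NET:
        return "expected"
    if any(addr in net for net in PRIVATE_NETS):
        return "orphaned"
    return "external"
```

With this framing, listing all of RFC 1918 in networks.cfg is the Bro-side equivalent of PRIVATE_NETS: the "orphaned" 192.168.0.x hosts are then still treated as local and show up in known_hosts.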
> > > v/r John > > From: bro-bounces at bro.org [mailto:bro-bounces at bro.org] On Behalf Of Luis > Miguel Silva > Sent: Wednesday, January 28, 2015 12:58 PM > To: bro > Subject: [Bro] Discovering known_hosts outside the network segment we > are analyzing > > > > Dear all, > > As I started playing around with bro, I noticed the ability to identify > known_hosts in the network. > > My problem is that I need to identify hosts that are NOT part of my > networks.cfg: > root at local-bro:~# cat /usr/local/bro/etc/networks.cfg > # List of local networks in CIDR notation, optionally followed by a > # descriptive tag. > # For example, "10.0.0.0/8" or "fe80::/64" are valid prefixes. > > 192.168.1.0/24 Private IP space > root at local-bro:~# > > The default networks.cfg had multiple networks but, what I want to do is > detect what "invalid" traffic is flowing in the network (e.g. machines in a 192.168.0.0/24 > segment, sending out packets in my 192.168.1.0/24 > network). > > Here's my use case: > > - I install a routing / sniffing appliance between the router and the > existing local network (192.168.0.0/24) so I > can sniff the traffic with bro > > - My appliance changes the network segment for the internal network to > something else (e.g. 192.168.1.0/24) and starts > serving addresses in that range using dhcp > > -- all dynamically configured devices setup with the new address > > -- but then I discover that there were some devices in the previous > network that had static ip addresses in the 192.168.0.0/24 > range, so they stop working > > What I would LIKE to do is have bro detect the "orphaned" 192.168.0.0/24 > nodes in the known_hosts, even though my network > is now 192.168.1.0/24. > > I could do this by externally sniffing for arp requests but I would really > like to do it through bro... > > Is the solution to specify all internal reserved ranges in networks.cfg? > > > 192.168.0.0/16 10.0.0.0/8 ... > > Is this good practice?
And is there a better approach to achieve what I > need? > > > Thank you, > Luis > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150128/6f1f758b/attachment-0001.html From donaldson8 at llnl.gov Wed Jan 28 15:26:38 2015 From: donaldson8 at llnl.gov (Donaldson, John) Date: Wed, 28 Jan 2015 23:26:38 +0000 Subject: [Bro] Discovering known_hosts outside the network segment we are analyzing In-Reply-To: References: Message-ID: Just add the redef line to somewhere in your local site config. No need to change things anywhere else. From: Luis Miguel Silva [mailto:luismiguelferreirasilva at gmail.com] Sent: Wednesday, January 28, 2015 2:56 PM To: Donaldson, John Cc: bro Subject: Re: [Bro] Discovering known_hosts outside the network segment we are analyzing Ah, yes! This seems to be exactly what I was looking for! Let me ask you something else though, what is the best practice to set that variable without changing the base known-hosts.bro script? (as I was reading the documentation yesterday, it said we should avoid making changes to the base scripts). Do I set that global parameter somewhere in a config file OR should I copy the known-hosts.bro script to my site/ directory and change it there? p.s. this is probably a VERY stupid question but I'm brand new to bro (less then 24h), so I'm still trying to figure out how to properly use it :o) Thank you, Luis On Wed, Jan 28, 2015 at 3:07 PM, Donaldson, John > wrote: Are you thinking of something along the lines of: redef Known::host_tracking = ALL_HOSTS; (see https://www.bro.org/sphinx/scripts/policy/protocols/conn/known-hosts.bro.html) This should record ALL observed hosts in the known_hosts file. 
v/r John From: bro-bounces at bro.org [mailto:bro-bounces at bro.org] On Behalf Of Luis Miguel Silva Sent: Wednesday, January 28, 2015 12:58 PM To: bro Subject: [Bro] Discovering known_hosts outside the network segment we are analyzing Dear all, As I started playing around with bro, I noticed the ability to identify known_hosts in the network. My problem is that I need to identify hosts that are NOT part of my networks.cfg: root at local-bro:~# cat /usr/local/bro/etc/networks.cfg # List of local networks in CIDR notation, optionally followed by a # descriptive tag. # For example, "10.0.0.0/8" or "fe80::/64" are valid prefixes. 192.168.1.0/24 Private IP space root at local-bro:~# The default networks.cfg had multiple networks but, what I want to do is detect what "invalid" traffic is flowing in the network (e.g. machines in a 192.168.0.0/24 segment, sending out packets in my 192.168.1.0/24 network). Here's my use case: - I install a routing / sniffing appliance between the router and the existing local network (192.168.0.0/24) so I can sniff the traffic with bro - My appliance changes the network segment for the internal network to something else (e.g. 192.168.1.0/24) and starts serving addresses in that range using dhcp -- all dynamically configured devices setup with the new address -- but then I discover that there were some devices in the previous network that had static ip addresses in the 192.168.0.0/24 range, so they stop working What I would LIKE to do is have bro detect the "orphaned" 192.168.0.0/24 nodes in the known_hosts, even though my network is now 192.168.1.0/24. I could do this by externally sniffing for arp requests but I would really like to do it through bro... Is the solution to specify all internal reserved ranges in networks.cfg? 192.168.0.0/16 10.0.0.0/8 ... Is this good practice? And is there a better approach to achieve what I need? Thank you, Luis -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150128/08a7cf61/attachment.html From qhu009 at aucklanduni.ac.nz Wed Jan 28 15:38:47 2015 From: qhu009 at aucklanduni.ac.nz (Qinwen Hu) Date: Thu, 29 Jan 2015 12:38:47 +1300 Subject: [Bro] How to configure Bro to detect UDP port 53 Message-ID: Hi all, I am a new Bro user, I did a few experiments to read the same DNS trace file via the Bro online version and Bro from my personal PC. The version number is 2.3.1. I got some interesting results. the online version checks UDP ports 53, 5353 and 5355 id.resp_p proto trans_id query 5353 UDP But, the one on my PC only checks ports 5353 and 5355. Is this a configuration issue? And is there a way that I can configure my Bro to check port 53? Thanks Steven -- Qinwen Hu Ph.D. Candidate, Computer Science, University of Auckland -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150129/ae3917a5/attachment-0001.html From qhu009 at aucklanduni.ac.nz Wed Jan 28 15:42:50 2015 From: qhu009 at aucklanduni.ac.nz (Qinwen Hu) Date: Thu, 29 Jan 2015 12:42:50 +1300 Subject: [Bro] How to configure Bro to detect UDP port 53 Message-ID: Hi all, Please ignore my previous unfinished email. I am a new Bro user, I did a few experiments to read the same DNS trace file via the Bro online version and Bro from my personal PC. The version number is 2.3.1. I got some interesting results. the online version checks UDP ports 53, 5353 and 5355:

id.resp_p  proto  trans_id  query
5353       UDP    0         sc-cs
53         UDP    533       2.0.0.0.0.0.....ip6.arpa

But, the one on my PC only checks ports 5353 and 5355:

id.resp_p  proto  trans_id  query
5353       UDP    0         sc-cs
53         UDP    533       -

Is this a configuration issue? And is there a way that I can configure my Bro to check port 53? Thanks Steven -------------- next part -------------- An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150129/9eb71d2b/attachment.html From giedrius.ramas at gmail.com Thu Jan 29 00:06:24 2015 From: giedrius.ramas at gmail.com (Giedrius Ramas) Date: Thu, 29 Jan 2015 10:06:24 +0200 Subject: [Bro] [bro] Bro intelligence framework meta data issue. In-Reply-To: References: Message-ID: Thank you for writing me back. I have just tried your suggestion; however, still no luck. Here is what I have done: My intel data file looks like: #fields indicator indicator_type meta.desc meta.cif_confidence meta.source summitcpas.com/process/mbb/m2uAccountUpdate/M2ULoginsdo.html Intel::URL phishing 85 phishtank.com ==================================== Here's what I get now in intel_ext.log #separator \x09 #set_separator , #empty_field (empty) #unset_field - #path intel_ext #open 2015-01-29-07-58-01 #fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p fuid file_mime_type file_desc seen.indicator seen.indicator_type seen.where sources descriptions #types time string addr port addr port string string string string enum enum set[string] set[string] 1422518281.529553 CUZQFO0cVtr52M9zj 10.3.2.2 49789 64.207.177.234 80 - -- summitcpas.com/process/mbb/m2uAccountUpdate/M2ULoginsdo.html Intel::URL HTTP::IN_URL phishtank.com phishing Still missing the meta.desc, meta.cif_confidence and meta.source fields. =========================================== Here is what my Bro config looks like. local.bro @load intel-ext @load custom =========================================== /opt/bro/share/bro/custom# cat custom.bro redef record Intel::Info += { descriptions: set[string] &optional &log; }; event Intel::extend_match(info: Intel::Info, s: Intel::Seen, items: set[Intel::Item]) &priority=0 { for ( item in items ) { if ( !
info?$descriptions ) info$descriptions = set(); add info$descriptions[item$meta$desc]; } } ============================================= cat loaded_scripts.log /opt/bro/share/bro/intel-ext/__load__.bro /opt/bro/share/bro/intel-ext/scripts/main.bro /opt/bro/share/bro/intel-ext/scripts/extend.bro /opt/bro/share/bro/intel-ext/scripts/log.bro /opt/bro/share/bro/custom/__load__.bro /opt/bro/share/bro/custom/custom.bro On Wed, Jan 28, 2015 at 10:46 PM, Seth Hall wrote: > > > On Jan 22, 2015, at 9:44 AM, Giedrius Ramas > wrote: > > > > So as you can see there are any meta data fields on intel.log output. > > > > Please shed some light on this , Where should I look for troubleshooting > ? > > Sorry about that. When I designed the intel framework, I ran into a few > conceptual issues that I just offset to a later date. I have done some > work to address the shortcoming and I'm hoping to get it merged back in for > the 2.4 release. I'll give some guidance now if you'd like to work with it > today: > > Clone this repository into your site/ directory: > cd /share/bro/site/ > git clone https://github.com/sethhall/intel-ext.git intel-ext > > Add the "intel-ext" module to your local.bro: > echo "@load intel-ext" >> local.bro > > Write and load a script that looks like this: > > ====script===== > redef record Intel::Info += { > descriptions: set[string] &optional &log; > }; > > event Intel::extend_match(info: Intel::Info, s: Intel::Seen, items: > set[Intel::Item]) &priority=0 > { > for ( item in items ) > { > if ( ! info?$descriptions ) > info$descriptions = set(); > > add info$descriptions[item$meta$desc]; > } > } > ====end script==== > > This will add descriptions from all of your intel in a log named > intel-ext.log. Let me know if it works for you. > > .Seth > > -- > Seth Hall > International Computer Science Institute > (Bro) because everyone has a network > http://www.bro.org/ > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150129/e0b0a05f/attachment.html

From jtaylor1024 at yahoo.com Thu Jan 29 07:47:15 2015
From: jtaylor1024 at yahoo.com (Jerome Taylor)
Date: Thu, 29 Jan 2015 15:47:15 +0000 (UTC)
Subject: [Bro] Bro with 10Gb NIC's or higher
In-Reply-To: <54B407DE.3000600@utexas.edu>
References: <54B407DE.3000600@utexas.edu>
Message-ID: <50866726.2447483.1422546435095.JavaMail.yahoo@mail.yahoo.com>

Kelly makes a very good point. Using an intelligent NIC as a front end to Bro makes a lot of sense for networks with strenuous traffic profiles and/or high data throughput. Shunting or filtering in software on the host eats up computing resources that could better be used to run additional worker threads. Hardware-based filtering and load balancing gives you back these cycles. Filtering out flows that are not of interest is a typical use case (examples being SSL traffic, fragmented packets, elephant flows, etc.).

Regards,
Jerome Taylor
M: 978-764-1269

On Monday, January 12, 2015 12:47 PM, Kelly Kerby wrote:

Hello,

We at UT Austin are fairly new to Bro and new to the list (been following, but never posted), but I thought I'd share my experience. We have had good luck monitoring our traffic, which sustains ~17-20 Gbps during peak hours, with 2 devices made by a company called Netronome. The traffic is distributed between the 2 clustered devices using an integrated load balancer which evenly spreads the traffic across all the processors, which have been pinned to corresponding Bro workers. We see very little traffic loss: random ~2-3% drops per Bro instance with the occasional larger ~10% drop.

Our configuration:
- 2 clustered devices, 40 cores each, with 32 workers and 4 proxies
- Primary device with 2 10 gig cards

Hope this is helpful.

-Kelly
UT Austin

On 1/9/15 1:11 PM, Mike Reeves wrote:
> In all of the 10G deployments I have done I always do multiple boxes behind a flow-based load balancer.
> That way I can use commodity boxes without special NICs and keep them at a reasonable price point. The bang for the buck goes down when you talk 4 x 12 core HT processors etc. vs. a dual 10 core HT. You also get the ability to have some fault tolerance, where if you have hardware issues you are not blind. I have a few deployments that are going from 10G to 100G and the only thing we have to change is the inbound interfaces on the LB gear. The other positive is that as usage goes up I can add additional capacity incrementally instead of having to re-solution.
>
> Thanks
>
> Mike
>
>> On Jan 9, 2015, at 1:20 PM, Mike Patterson wrote:
>>
>> You're right, it's 32 on mine.
>>
>> I posted some specs for my system a couple of years ago now, I think.
>>
>> 6-8GB per worker should give some headroom (my workers usually use about 5 apiece I think).
>>
>> Mike
>>
>> --
>> Simple, clear purpose and principles give rise to complex and
>> intelligent behavior. Complex rules and regulations give rise
>> to simple and stupid behavior. - Dee Hock
>>
>>> On Jan 9, 2015, at 1:03 PM, Donaldson, John wrote:
>>>
>>> I'd agree with all of this. We're monitoring a few 10Gbps network segments with DAG 9.2X2s, too. I'll add in that, when processing that much traffic on a single device, you'll definitely not want to skimp on memory.
>>>
>>> I'm not sure which configurations you're using that might be limiting you to 16 streams -- we're running with at least 24 streams, and (at least with the 9.2X2s) you should be able to work with up to 32 receive streams.
>>>
>>> v/r
>>>
>>> John Donaldson
>>>
>>>> -----Original Message-----
>>>> From: bro-bounces at bro.org [mailto:bro-bounces at bro.org] On Behalf Of Mike Patterson
>>>> Sent: Thursday, January 08, 2015 7:29 AM
>>>> To: coen bakkers
>>>> Cc: bro at bro.org
>>>> Subject: Re: [Bro] Bro with 10Gb NIC's or higher
>>>>
>>>> Succinctly, yes, although that provision is a big one.
>>>>
>>>> I'm running Bro on two 10 gig interfaces, an Intel X520 and an Endace DAG 9.2X2. Both perform reasonably well. Although my hardware is somewhat underspecced (Dell R710s of differing vintages), I still get tons of useful data.
>>>>
>>>> If your next question would be "how should I spec my hardware", that's quite difficult to answer because it depends on a lot. Get the hottest CPUs you can afford, with as many cores. If you're actually sustaining 10+Gb you'll probably want at least 20-30 cores. I'm sustaining 4.5Gb or so on 8 3.7GHz cores, but Bro reports 10% or so loss. Note that some hardware configurations will limit the number of streams you can feed to Bro, e.g. my DAG can only produce 16 streams, so even if I had it in a 24 core box, I'd only be making use of 2/3 of my CPU.
>>>>
>>>> Mike
>>>>
>>>>> On Jan 7, 2015, at 5:04 AM, coen bakkers wrote:
>>>>>
>>>>> Does anyone have experience with higher speed NIC's and Bro? Will it sustain 10Gb speeds or more provided the hardware is spec'd appropriately?
>>>>>
>>>>> regards,
>>>>>
>>>>> Coen
>>>>> _______________________________________________
>>>>> Bro mailing list
>>>>> bro at bro-ids.org
>>>>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro
>>
>> _______________________________________________
>> Bro mailing list
>> bro at bro-ids.org
>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro
>
> _______________________________________________
> Bro mailing list
> bro at bro-ids.org
> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro

_______________________________________________
Bro mailing list
bro at bro-ids.org
http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150129/c1f7d310/attachment-0001.html From seth at icir.org Thu Jan 29 08:06:35 2015 From: seth at icir.org (Seth Hall) Date: Thu, 29 Jan 2015 11:06:35 -0500 Subject: [Bro] [bro] Bro intelligence framework meta data issue. In-Reply-To: References: Message-ID: <9E09DF9A-2B31-4705-9DF0-D74EBF3149B8@icir.org> > On Jan 29, 2015, at 3:06 AM, Giedrius Ramas wrote: > > #fields indicator indicator_type meta.desc meta.cif_confidence meta.source > summitcpas.com/process/mbb/m2uAccountUpdate/M2ULoginsdo.html Intel::URL phishing 85 phishtank.com > > 1422518281.529553 CUZQFO0cVtr52M9zj 10.3.2.2 49789 64.207.177.234 80 - -- summitcpas.com/process/mbb/m2uAccountUpdate/M2ULoginsdo.html Intel::URL HTTP::IN_URL phishtank.com phishing > > Still missing meta.desc meta.cif_confidence meta.source fields. Actually, meta.desc is there (so is meta.source). The descriptions were all that I added with my script. If you want more information added you will have to add it in your custom script. My example should make it easy for you. .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From robin at icir.org Thu Jan 29 08:23:16 2015 From: robin at icir.org (Robin Sommer) Date: Thu, 29 Jan 2015 08:23:16 -0800 Subject: [Bro] Data anonymization and transformation using Bro In-Reply-To: <5B9B5DA0-CE6F-4535-87FA-39B91BD8C4BF@illinois.edu> References: <5B9B5DA0-CE6F-4535-87FA-39B91BD8C4BF@illinois.edu> Message-ID: <20150129162316.GH1745@icir.org> On Wed, Jan 28, 2015 at 22:07 +0000, Adam J. Slagell wrote: > the appropriate version of Bro. The code was last part of 1.5.3, but even there it wasn't straight-forward to use it, and some parts were already broken. There's something new [1] in the queue that is bringing this functionality back, but it's not quite ready for prime time yet. 
Robin

[1] http://www.icir.org/hilti/

--
Robin Sommer * ICSI/LBNL * robin at icir.org * www.icir.org/robin

From vitologrillo at gmail.com Thu Jan 29 08:56:05 2015
From: vitologrillo at gmail.com (Vito Logrillo)
Date: Thu, 29 Jan 2015 17:56:05 +0100
Subject: [Bro] How remove or redefine a field in a log?
Message-ID:

Hi,
is it possible to remove or redefine an existing field in a log? For example, if I want to remove only the field

local_orig: bool &log &optional;

in conn.log, how can I do it? And if I want to redefine it in this way:

local_orig: string &optional &log;

?
Thanks,
Vito

From luismiguelferreirasilva at gmail.com Thu Jan 29 09:36:14 2015
From: luismiguelferreirasilva at gmail.com (Luis Miguel Silva)
Date: Thu, 29 Jan 2015 10:36:14 -0700
Subject: [Bro] How remove or redefine a field in a log?
In-Reply-To: References: Message-ID:

Vito,

I'm brand new to Bro, so I apologize if this isn't a good suggestion... But as I was reading the documentation, I came across this, which might help you with what you need:
https://www.bro.org/development/logging.html#extending

It doesn't redefine an existing field but it allows you to, at least, append to it!

As for removing an existing field, just looking at the example on how to EXTEND logging (which basically adds an element to the Conn::Info record), couldn't we do something like this?

*delete Conn::Info['field']*

Best,
Luis

On Thu, Jan 29, 2015 at 9:56 AM, Vito Logrillo wrote:
> Hi,
> is it possible to remove or redefine an existing field in a log?
> For example, if I want to remove only the field
>
> local_orig: bool &log &optional;
>
> in conn.log, how can I do it?
> And if I want to redefine it in this way:
>
> local_orig: string &optional &log;
>
> ?
> Thanks,
> Vito
> _______________________________________________
> Bro mailing list
> bro at bro-ids.org
> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150129/77114dc6/attachment.html From luismiguelferreirasilva at gmail.com Thu Jan 29 12:07:10 2015 From: luismiguelferreirasilva at gmail.com (Luis Miguel Silva) Date: Thu, 29 Jan 2015 13:07:10 -0700 Subject: [Bro] Why am I seeing SSL "files" in my files.log? Message-ID: Dear all, I've been looking at my files.log file and I'm seeing a lot of logged transfers for source=SSL. root at appliance:/usr/local/bro/logs# cat current/files.log | grep -i ssl | head 1422561677.508576 FmK9Jn1by8UfJ7Uk6c 216.58.217.46 192.168.200.235 CUEEAE4YJ25B6LwU03 SSL 0 X509,MD5,SHA1 - -0.000000 F F 1737 - 0 0 F - 04805888dbfa26c78e52f8860be4a776 43ae5511994a4d13b2b1e8b013bff7196c5645d2 - - 1422561677.508576 FrcIKka3GRTlXwCYk 216.58.217.46 192.168.200.235 CUEEAE4YJ25B6LwU03 SSL 0 X509,MD5,SHA1 - -0.000000 F F 1012 - 0 0 F - 46f1bf2f24dd3aa9cfd760a3bade5ec7 bbdce13e9d537a5229915cb123c7aab0a855e798 - - 1422561677.508576 FEuCUs4oRjvbJIPB68 216.58.217.46 192.168.200.235 CUEEAE4YJ25B6LwU03 SSL 0 X509,MD5,SHA1 - -0.000000 F F 897 - 0 0 F - 2e7db2a31d0e3da4b25f49b9542a2e1a 7359755c6df9a0abc3060bce369564c8ec4542a3 - - 1422561677.588403 FKhNYN30aqixQTq0ya 216.58.217.14 192.168.200.235 CWx7Gs1ETyWn2IKu4h SSL 0 X509,MD5,SHA1 - -0.000000 F F 1737 - 0 0 F - 04805888dbfa26c78e52f8860be4a776 43ae5511994a4d13b2b1e8b013bff7196c5645d2 - - 1422561677.588403 F6KI5g2pFla0x2h4w4 216.58.217.14 192.168.200.235 CWx7Gs1ETyWn2IKu4h SSL 0 X509,MD5,SHA1 - -0.000000 F F 1012 - 0 0 F - 46f1bf2f24dd3aa9cfd760a3bade5ec7 bbdce13e9d537a5229915cb123c7aab0a855e798 - - 1422561677.588403 FMD4Yq4JDMdG7dTnC6 216.58.217.14 192.168.200.235 CWx7Gs1ETyWn2IKu4h SSL 0 X509,MD5,SHA1 - -0.000000 F F 897 - 0 0 F - 2e7db2a31d0e3da4b25f49b9542a2e1a 7359755c6df9a0abc3060bce369564c8ec4542a3 - - 1422561680.734060 F6kS0Y3B6xPUSr5bQ3 54.244.242.173 192.168.200.227 C2s8C31rDqouwSyREj SSL 0 X509,MD5,SHA1 - -0.000000 F F 931 - 0 0 F - 591c402fa2cbf8279323e5336dfe78e2 
37c4666a6fb5535e01a113f5a25c7ae2b7d942c5 - -
1422561681.173742 FU1DBs1wCoSQhuW2O3 54.203.249.201 192.168.200.235 CIJSA81yUj2OZ3Zec SSL 0 X509,MD5,SHA1 - -0.000000 F F 1362 - 0 0 F - 1595a86ed4570a4804ccb459ba49c710 be032d527dcc970b2cb056c953036b3dac6d299f - -
1422561681.173742 FnauTv4UWVVeIEhKfb 54.203.249.201 192.168.200.227 CIJSA81yUj2OZ3Zec SSL 0 X509,MD5,SHA1 - -0.000000 F F 1433 - 0 0 F - f9a20bda18c130a3dd2c9300646baa70 12c9b291d19d3632d44f1069551c46490aea0542 - -
1422561681.173742 FJLfsb48MeGcQiiID5 54.203.249.201 192.168.200.227 CIJSA81yUj2OZ3Zec SSL 0 X509,MD5,SHA1 - -0.000000 F F 1087 - 0 0 F - d9e1f5ce2bf6982005dc6d95aa9f9875 20ee1b7a0dbae0cf16f5a6327fc4ae1cef25f12c - -
root at appliance:/usr/local/bro/logs#

What are these? Are these SSL certificates that are being transferred?

Thank you,
Luis

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150129/bc5df61b/attachment.html

From liam.randall at gmail.com Thu Jan 29 12:22:21 2015
From: liam.randall at gmail.com (Liam Randall)
Date: Thu, 29 Jan 2015 15:22:21 -0500
Subject: [Bro] Why am I seeing SSL "files" in my files.log?
In-Reply-To: References: Message-ID:

These are the X509 certificates that are exchanged as part of the SSL/TLS handshake. The "X509, MD5, SHA1" indicates that three file analyzers were attached to the file. For further details on the information extracted from the cert, pivot using the file id into the x509.log. I think in a default configuration of Bro you'll see that only the host certificate is logged (client and server); that behavior can be modified:
https://www.bro.org/sphinx/_downloads/log-hostcerts-only.bro

Thanks,
Liam

On Thu, Jan 29, 2015 at 3:07 PM, Luis Miguel Silva wrote:
> Dear all,
>
> I've been looking at my files.log file and I'm seeing a lot of logged
> transfers for source=SSL.
> > root at appliance:/usr/local/bro/logs# cat current/files.log | grep -i ssl > | head > 1422561677.508576 FmK9Jn1by8UfJ7Uk6c 216.58.217.46 > 192.168.200.235 CUEEAE4YJ25B6LwU03 SSL 0 X509,MD5,SHA1 - > -0.000000 F F 1737 - 0 0 F - > 04805888dbfa26c78e52f8860be4a776 > 43ae5511994a4d13b2b1e8b013bff7196c5645d2 - - > 1422561677.508576 FrcIKka3GRTlXwCYk 216.58.217.46 > 192.168.200.235 CUEEAE4YJ25B6LwU03 SSL 0 X509,MD5,SHA1 - > -0.000000 F F 1012 - 0 0 F - > 46f1bf2f24dd3aa9cfd760a3bade5ec7 > bbdce13e9d537a5229915cb123c7aab0a855e798 - - > 1422561677.508576 FEuCUs4oRjvbJIPB68 216.58.217.46 > 192.168.200.235 CUEEAE4YJ25B6LwU03 SSL 0 X509,MD5,SHA1 - > -0.000000 F F 897 - 0 0 F - > 2e7db2a31d0e3da4b25f49b9542a2e1a > 7359755c6df9a0abc3060bce369564c8ec4542a3 - - > 1422561677.588403 FKhNYN30aqixQTq0ya 216.58.217.14 > 192.168.200.235 CWx7Gs1ETyWn2IKu4h SSL 0 X509,MD5,SHA1 - > -0.000000 F F 1737 - 0 0 F - > 04805888dbfa26c78e52f8860be4a776 > 43ae5511994a4d13b2b1e8b013bff7196c5645d2 - - > 1422561677.588403 F6KI5g2pFla0x2h4w4 216.58.217.14 > 192.168.200.235 CWx7Gs1ETyWn2IKu4h SSL 0 X509,MD5,SHA1 - > -0.000000 F F 1012 - 0 0 F - > 46f1bf2f24dd3aa9cfd760a3bade5ec7 > bbdce13e9d537a5229915cb123c7aab0a855e798 - - > 1422561677.588403 FMD4Yq4JDMdG7dTnC6 216.58.217.14 > 192.168.200.235 CWx7Gs1ETyWn2IKu4h SSL 0 X509,MD5,SHA1 - > -0.000000 F F 897 - 0 0 F - > 2e7db2a31d0e3da4b25f49b9542a2e1a > 7359755c6df9a0abc3060bce369564c8ec4542a3 - - > 1422561680.734060 F6kS0Y3B6xPUSr5bQ3 54.244.242.173 > 192.168.200.227 C2s8C31rDqouwSyREj SSL 0 X509,MD5,SHA1 - > -0.000000 F F 931 - 0 0 F - > 591c402fa2cbf8279323e5336dfe78e2 > 37c4666a6fb5535e01a113f5a25c7ae2b7d942c5 - - > 1422561681.173742 FU1DBs1wCoSQhuW2O3 54.203.249.201 > 192.168.200.227 CIJSA81yUj2OZ3Zec SSL 0 X509,MD5,SHA1 - > -0.000000 F F 1362 - 0 0 F - > 1595a86ed4570a4804ccb459ba49c710 > be032d527dcc970b2cb056c953036b3dac6d299f - - > 1422561681.173742 FnauTv4UWVVeIEhKfb 54.203.249.201 > 192.168.200.227 CIJSA81yUj2OZ3Zec SSL 0 X509,MD5,SHA1 - > 
-0.000000 F F 1433 - 0 0 F - f9a20bda18c130a3dd2c9300646baa70 12c9b291d19d3632d44f1069551c46490aea0542 - -
> 1422561681.173742 FJLfsb48MeGcQiiID5 54.203.249.201 192.168.200.227 CIJSA81yUj2OZ3Zec SSL 0 X509,MD5,SHA1 - -0.000000 F F 1087 - 0 0 F - d9e1f5ce2bf6982005dc6d95aa9f9875 20ee1b7a0dbae0cf16f5a6327fc4ae1cef25f12c - -
> root at appliance:/usr/local/bro/logs#
>
> What are these? Are these SSL certificates that are being transferred?
>
> Thank you,
> Luis
>
> _______________________________________________
> Bro mailing list
> bro at bro-ids.org
> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150129/e586025a/attachment.html

From kmcmahon at mitre.org Thu Jan 29 13:57:33 2015
From: kmcmahon at mitre.org (McMahon, Kevin J)
Date: Thu, 29 Jan 2015 21:57:33 +0000
Subject: [Bro] Logging to SQLite only writes to one file
Message-ID: <00D3CD29F7C24A44B4D23450BB8E55B31070090C@IMCMBX03.MITRE.ORG>

A colleague of mine (not on this list) is trying to write logs to SQLite. The entries below were added to the bro_init event. The system creates both of these tables, but only writes records to one of the tables. The indication was that it seems to only write to whichever table is written to first. Does anyone know why this might be, or have any similar experiences? (The colleague did confirm that there should have been multiple entries in each of the logs, and initially had SQLite entries for all of the standard logs.)

local connFilter: Log::Filter = [
    $name="sqlite",
    $path="/var/lib/sqlite/bro_db",
    $config=table(["tablename"] = "conn"),
    $writer=Log::WRITER_SQLITE
];
Log::add_filter(Conn::LOG, connFilter);

local weirdFilter: Log::Filter = [
    $name="sqlite",
    $path="/var/lib/sqlite/bro_db",
    $config=table(["tablename"] = "weird"),
    $writer=Log::WRITER_SQLITE
];
Log::add_filter(Weird::LOG, weirdFilter);
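One thing worth testing against the setup above (an untested sketch, not a confirmed fix): both filters share $name="sqlite" and the same $path, and if the SQLite writer keys its writer instance on the output path, the second filter could silently be mapped onto the first writer. Giving each filter a unique name and its own database file sidesteps that possibility:

```bro
event bro_init()
    {
    # Hypothetical workaround: one SQLite database file per log stream,
    # with distinct filter names, so no two filters share a writer instance.
    local connFilter: Log::Filter = [
        $name="sqlite-conn",
        $path="/var/lib/sqlite/bro_conn",
        $config=table(["tablename"] = "conn"),
        $writer=Log::WRITER_SQLITE
    ];
    Log::add_filter(Conn::LOG, connFilter);

    local weirdFilter: Log::Filter = [
        $name="sqlite-weird",
        $path="/var/lib/sqlite/bro_weird",
        $config=table(["tablename"] = "weird"),
        $writer=Log::WRITER_SQLITE
    ];
    Log::add_filter(Weird::LOG, weirdFilter);
    }
```

If cross-table queries are needed afterwards, SQLite's ATTACH DATABASE can join the per-log files in a single session.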
-------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150129/bf2c3b01/attachment-0001.html From luismiguelferreirasilva at gmail.com Thu Jan 29 22:48:44 2015 From: luismiguelferreirasilva at gmail.com (Luis Miguel Silva) Date: Thu, 29 Jan 2015 23:48:44 -0700 Subject: [Bro] Elasticsearch Writer vs logstash Message-ID: Dear all, I'm interested in dumping my bro logs into an elastic search instance and, based on what I was able to learn thus far, it seems I have two different options: - use the elasticsearch writer (which the documentation says should not be used in production as it doesn't have any error checking) - or use logstash to read info directly from the bro logs and externally dump it into elasticsearch It seems to me the logstash route is better, given that I should be able to massage the data into more "user friendly" fields that can be easily queried with elasticsearch. So my question is, based on your experience, what is the best option? And, if you do use logstash, can you share your logstash config? Thanks in advance, Luis -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150129/64d5c7eb/attachment.html From luismiguelferreirasilva at gmail.com Thu Jan 29 23:11:28 2015 From: luismiguelferreirasilva at gmail.com (Luis Miguel Silva) Date: Fri, 30 Jan 2015 00:11:28 -0700 Subject: [Bro] Elasticsearch Writer vs logstash In-Reply-To: References: Message-ID: ...I just found a website that has a tutorial on how to parse bro logs with logstash AND points to the config used in the distro Security Onion . So I'd just like to know what your thoughts are on using the elasticsearch writer vs logstash? 
Thank you, Luis On Thu, Jan 29, 2015 at 11:48 PM, Luis Miguel Silva < luismiguelferreirasilva at gmail.com> wrote: > Dear all, > > I'm interested in dumping my bro logs into an elastic search instance and, > based on what I was able to learn thus far, it seems I have two different > options: > - use the elasticsearch writer (which the documentation says should not be > used in production as it doesn't have any error checking) > - or use logstash to read info directly from the bro logs and externally > dump it into elasticsearch > > It seems to me the logstash route is better, given that I should be able > to massage the data into more "user friendly" fields that can be easily > queried with elasticsearch. > > So my question is, based on your experience, what is the best option? And, > if you do use logstash, can you share your logstash config? > > Thanks in advance, > Luis > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150130/2f929107/attachment.html From anthony.kasza at gmail.com Thu Jan 29 23:29:28 2015 From: anthony.kasza at gmail.com (anthony kasza) Date: Thu, 29 Jan 2015 23:29:28 -0800 Subject: [Bro] Elasticsearch Writer vs logstash In-Reply-To: References: Message-ID: I thought the ES writer had some issues it needed worked out around indexes or something. Seth? -AK On Jan 29, 2015 11:17 PM, "Luis Miguel Silva" < luismiguelferreirasilva at gmail.com> wrote: > ...I just found a website that has a tutorial on how to parse bro logs > with logstash > AND points to the config used in the distro Security Onion > > . > > So I'd just like to know what your thoughts are on using the elasticsearch > writer vs logstash? 
> Thank you,
> Luis
>
> On Thu, Jan 29, 2015 at 11:48 PM, Luis Miguel Silva wrote:
>
>> Dear all,
>>
>> I'm interested in dumping my bro logs into an Elasticsearch instance and, based on what I was able to learn thus far, it seems I have two different options:
>> - use the elasticsearch writer (which the documentation says should not be used in production as it doesn't have any error checking)
>> - or use logstash to read info directly from the bro logs and externally dump it into elasticsearch
>>
>> It seems to me the logstash route is better, given that I should be able to massage the data into more "user friendly" fields that can be easily queried with elasticsearch.
>>
>> So my question is, based on your experience, what is the best option? And, if you do use logstash, can you share your logstash config?
>>
>> Thanks in advance,
>> Luis
>
> _______________________________________________
> Bro mailing list
> bro at bro-ids.org
> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150129/89eb3ae0/attachment.html

From giedrius.ramas at gmail.com Thu Jan 29 23:37:26 2015
From: giedrius.ramas at gmail.com (Giedrius Ramas)
Date: Fri, 30 Jan 2015 09:37:26 +0200
Subject: [Bro] [bro] Bro intelligence framework meta data issue.
In-Reply-To: <9E09DF9A-2B31-4705-9DF0-D74EBF3149B8@icir.org>
References: <9E09DF9A-2B31-4705-9DF0-D74EBF3149B8@icir.org>
Message-ID:

Tons of thanks, got it working.
On Thu, Jan 29, 2015 at 6:06 PM, Seth Hall wrote: > > > On Jan 29, 2015, at 3:06 AM, Giedrius Ramas > wrote: > > > > #fields indicator indicator_type meta.desc > meta.cif_confidence meta.source > > summitcpas.com/process/mbb/m2uAccountUpdate/M2ULoginsdo.html > Intel::URL phishing 85 phishtank.com > > > > 1422518281.529553 CUZQFO0cVtr52M9zj 10.3.2.2 49789 > 64.207.177.234 80 - -- > summitcpas.com/process/mbb/m2uAccountUpdate/M2ULoginsdo.html > Intel::URL HTTP::IN_URL phishtank.com phishing > > > > Still missing meta.desc meta.cif_confidence meta.source fields. > > Actually, meta.desc is there (so is meta.source). The descriptions were > all that I added with my script. If you want more information added you > will have to add it in your custom script. My example should make it easy > for you. > > .Seth > > -- > Seth Hall > International Computer Science Institute > (Bro) because everyone has a network > http://www.bro.org/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150130/d1ae5626/attachment.html From luismiguelferreirasilva at gmail.com Thu Jan 29 23:40:37 2015 From: luismiguelferreirasilva at gmail.com (Luis Miguel Silva) Date: Fri, 30 Jan 2015 00:40:37 -0700 Subject: [Bro] bro cluster security Message-ID: All, As I was looking at the bro cluster documentation , I noticed there wasn't any information / configuration parameters to authenticate / authorize the communication between the manager, worker and proxy components. How do we protect against malicious processes from impersonating real components? Thank you, Luis -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150130/d9e901f3/attachment.html

From bro at pingtrip.com Fri Jan 30 04:17:30 2015
From: bro at pingtrip.com (Dave Crawford)
Date: Fri, 30 Jan 2015 07:17:30 -0500
Subject: [Bro] bro cluster security
In-Reply-To: References: Message-ID:

Can you mitigate the risk by running a local firewall (e.g. IPTables on Linux, or PF on FreeBSD) on each component with explicit rules pairing manager<->workers<->proxies on the appropriate ports?

-Dave

> On Jan 30, 2015, at 2:40 AM, Luis Miguel Silva wrote:
>
> All,
>
> As I was looking at the bro cluster documentation, I noticed there wasn't any information / configuration parameters to authenticate / authorize the communication between the manager, worker and proxy components.
>
> How do we protect against malicious processes from impersonating real components?
>
> Thank you,
> Luis
> _______________________________________________
> Bro mailing list
> bro at bro-ids.org
> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150130/41265ae7/attachment-0001.html

From luismiguelferreirasilva at gmail.com Fri Jan 30 04:33:59 2015
From: luismiguelferreirasilva at gmail.com (Luis Miguel Silva)
Date: Fri, 30 Jan 2015 05:33:59 -0700
Subject: [Bro] bro cluster security
In-Reply-To: References: Message-ID:

I guess I could, though that wouldn't protect from attacks coming from authorized hosts.

Anyway, I'm just trying to figure out what level of security is built in!

Thanks,
Luis

On Fri, Jan 30, 2015 at 5:17 AM, Dave Crawford wrote:
> Can you mitigate the risk by running a local firewall (e.g. IPTables on
> Linux, or PF on FreeBSD) on each component with explicit rules pairing
> manager<->workers<->proxies on the appropriate ports?
>
> -Dave
>
> On Jan 30, 2015, at 2:40 AM, Luis Miguel Silva wrote:
>
> All,
>
> As I was looking at the bro cluster documentation, I noticed there wasn't any information / configuration parameters to authenticate / authorize the communication between the manager, worker and proxy components.
>
> How do we protect against malicious processes from impersonating real components?
>
> Thank you,
> Luis
> _______________________________________________
> Bro mailing list
> bro at bro-ids.org
> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150130/f734c7e7/attachment.html

From hosom at battelle.org Fri Jan 30 04:51:06 2015
From: hosom at battelle.org (Hosom, Stephen M)
Date: Fri, 30 Jan 2015 12:51:06 +0000
Subject: [Bro] Elasticsearch Writer vs logstash
In-Reply-To: References: Message-ID:

Some things to think about:

1. Logstash is easy, but all the easiness that comes with it comes at a performance hit.
   a. If you go this way, you could probably make this "easier" by logging Bro's logs to JSON for Logstash to send to Elasticsearch.
      i. This will put you in an odd spot compared to other Bro deployments. Not many people log JSON logs. If you do this, you'll want to use jq as a replacement for bro-cut.
   b. Make sure you look at Heka as an alternative.
2. Some people have had success with the NSQ writer and using NSQ, but that is also not what most people would consider a "production" deployment.

If you do nothing else, please use a recent version of Elasticsearch. Older versions of Elasticsearch were MUCH worse on performance and lacked features that are very nice to have. You'll want to look into tuning Elasticsearch as well. There are MANY articles out there on how to tune Elasticsearch for indexing large data volumes.
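The JSON option mentioned in 1.a above can be enabled without Logstash filters by switching Bro's ASCII writer to JSON output; a minimal sketch, assuming Bro 2.3 or later, where the LogAscii::use_json option exists:

```bro
# local.bro -- emit every ASCII log (conn.log, http.log, ...) as one JSON
# object per line, which Logstash can ingest with its json codec instead
# of a field-by-field grok/csv filter.
redef LogAscii::use_json = T;
```

Note that with JSON output bro-cut no longer applies to the logs; a tool like jq takes its place, as mentioned above.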
Finally, keep in mind that a lot of how you keep Bro's logs can vary depending on the size of your environment and your tolerance level for risk. If you can't risk losing indexed logs when Elasticsearch is down, then you'll want to look into a queuing system like Redis, NSQ, or RabbitMQ. Seems like everyone has their pet implementation of AMQP, so I'll let you sort that one out.

This conversation could really go on forever... feel free to hop on #bro on freenode if you want to chat.

From: bro-bounces at bro.org [mailto:bro-bounces at bro.org] On Behalf Of anthony kasza
Sent: Friday, January 30, 2015 2:29 AM
To: Luis Miguel Silva
Cc: bro
Subject: Re: [Bro] Elasticsearch Writer vs logstash

I thought the ES writer had some issues that needed to be worked out around indexes or something. Seth?

-AK

On Jan 29, 2015 11:17 PM, "Luis Miguel Silva" wrote:

...I just found a website that has a tutorial on how to parse bro logs with logstash AND points to the config used in the distro Security Onion.

So I'd just like to know what your thoughts are on using the elasticsearch writer vs logstash?

Thank you,
Luis

On Thu, Jan 29, 2015 at 11:48 PM, Luis Miguel Silva wrote:

Dear all,

I'm interested in dumping my bro logs into an Elasticsearch instance and, based on what I was able to learn thus far, it seems I have two different options:
- use the elasticsearch writer (which the documentation says should not be used in production as it doesn't have any error checking)
- or use logstash to read info directly from the bro logs and externally dump it into elasticsearch

It seems to me the logstash route is better, given that I should be able to massage the data into more "user friendly" fields that can be easily queried with elasticsearch.

So my question is, based on your experience, what is the best option? And, if you do use logstash, can you share your logstash config?
Thanks in advance,
Luis

_______________________________________________
Bro mailing list
bro at bro-ids.org
http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150130/f96e2743/attachment-0001.html

From bro at pingtrip.com Fri Jan 30 05:25:48 2015
From: bro at pingtrip.com (Dave Crawford)
Date: Fri, 30 Jan 2015 08:25:48 -0500
Subject: [Bro] bro cluster security
In-Reply-To: References: Message-ID: <68009078-679E-4C03-A63D-D53CD72A4F55@pingtrip.com>

True, but I'd argue that if an attack is sourcing from a Bro component, an authorization/authentication mechanism would be the least of concerns.

> On Jan 30, 2015, at 7:33 AM, Luis Miguel Silva wrote:
>
> I guess I could, though that wouldn't protect from attacks coming from authorized hosts.
>
> Anyway, I'm just trying to figure out what level of security is built in!
>
> Thanks,
> Luis
>
> On Fri, Jan 30, 2015 at 5:17 AM, Dave Crawford wrote:
> Can you mitigate the risk by running a local firewall (e.g. IPTables on Linux, or PF on FreeBSD) on each component with explicit rules pairing manager<->workers<->proxies on the appropriate ports?
>
> -Dave
>
>> On Jan 30, 2015, at 2:40 AM, Luis Miguel Silva wrote:
>>
>> All,
>>
>> As I was looking at the bro cluster documentation, I noticed there wasn't any information / configuration parameters to authenticate / authorize the communication between the manager, worker and proxy components.
>>
>> How do we protect against malicious processes from impersonating real components?
>>
>> Thank you,
>> Luis
>> _______________________________________________
>> Bro mailing list
>> bro at bro-ids.org
>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150130/60bc25e4/attachment.html From slagell at illinois.edu Fri Jan 30 05:31:42 2015 From: slagell at illinois.edu (Slagell, Adam J) Date: Fri, 30 Jan 2015 13:31:42 +0000 Subject: [Bro] bro cluster security In-Reply-To: References: , Message-ID: <635E55BA-BE6F-4BF8-B304-D1064579D8AC@illinois.edu> A common setup would be to have the cluster privately addressed and behind a bastion host, using ssh host keys between trusted hosts. On Jan 30, 2015, at 6:41 AM, Luis Miguel Silva wrote: I guess I could, though that wouldn't protect from attacks coming from authorized hosts. Anyway, I'm just trying to figure out what level of security is built in! Thanks, Luis On Fri, Jan 30, 2015 at 5:17 AM, Dave Crawford wrote: Can you mitigate the risk by running a local firewall (e.g. IPTables on Linux, or PF on FreeBSD) on each component with explicit rules pairing manager<->workers<->proxies on the appropriate ports? -Dave On Jan 30, 2015, at 2:40 AM, Luis Miguel Silva wrote: All, As I was looking at the bro cluster documentation, I noticed there wasn't any information / configuration parameters to authenticate / authorize the communication between the manager, worker and proxy components. How do we protect against malicious processes from impersonating real components? Thank you, Luis _______________________________________________ Bro mailing list bro at bro-ids.org http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro _______________________________________________ Bro mailing list bro at bro-ids.org http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -------------- next part -------------- An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150130/ded511ef/attachment.html From jlay at slave-tothe-box.net Fri Jan 30 05:41:47 2015 From: jlay at slave-tothe-box.net (James Lay) Date: Fri, 30 Jan 2015 06:41:47 -0700 Subject: [Bro] Elasticsearch Writer vs logstash In-Reply-To: References: Message-ID: <1422625307.3168.5.camel@JamesiMac> On Thu, 2015-01-29 at 23:48 -0700, Luis Miguel Silva wrote: > Dear all, > > > > I'm interested in dumping my bro logs into an elastic search instance > and, based on what I was able to learn thus far, it seems I have two > different options: > - use the elasticsearch writer (which the documentation says should > not be used in production as it doesn't have any error checking) > - or use logstash to read info directly from the bro logs and > externally dump it into elasticsearch > > > It seems to me the logstash route is better, given that I should be > able to massage the data into more "user friendly" fields that can be > easily queried with elasticsearch. > > > So my question is, based on your experience, what is the best option? > And, if you do use logstash, can you share your logstash config? > > > Thanks in advance, > Luis > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro I've used bro and logstash with good success...one setup is everything is on one machine, the other is remote using rsyslog to get the data to logstash. I tried going direct bro->elasticsearch, but logstash creates logstash-* shards, and bro creates bro-* shards, and kibana had a hard time seeing both. 
I'm currently just piping conn.log, but here's my logstash entry: "(?<ts>(.*?))\t(?<uid>(.*?))\t(?<id_orig_h>(.*?))\t(?<id_orig_p>(.*?))\t(?<id_resp_h>(.*?))\t(?<id_resp_p>(.*?))\t(?<proto>(.*?))\t(?<service>(.*?))\t(?<duration>(.*?))\t(?<orig_bytes>(.*?))\t(?<resp_bytes>(.*?))\t(?<conn_state>(.*?))\t(?<local_orig>(.*?))\t(?<missed_bytes>(.*?))\t(?<history>(.*?))\t(?<orig_pkts>(.*?))\t(?<orig_ip_bytes>(.*?))\t(?<resp_pkts>(.*?))\t(?<resp_ip_bytes>(.*?))\t(?<tunnel_parents>(.*))" An interesting gotcha is the fact that the above doesn't see sizes as values but strings, so I had to add a mutate to get that to work: mutate { convert => [ "resp_bytes", "integer" ] convert => [ "resp_ip_bytes", "integer" ] convert => [ "orig_bytes", "integer" ] convert => [ "orig_ip_bytes", "integer" ] } Hope that helps...feel free to ping me off list if you need any help. James -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150130/06c9d7ba/attachment.html From michalpurzynski1 at gmail.com Fri Jan 30 08:55:42 2015 From: michalpurzynski1 at gmail.com (=?UTF-8?B?TWljaGHFgiBQdXJ6ecWEc2tp?=) Date: Fri, 30 Jan 2015 17:55:42 +0100 Subject: [Bro] Elasticsearch Writer vs logstash In-Reply-To: References: Message-ID: I keep Bro logging to files just to keep some local cache that's easy for quick browsing/query and to offload Bro from writing to ES. Heka reads Bro logs, transforms with Lua scripts inside a sandbox and can output to whatever you want. I think there's ES output from Heka already. On Fri, Jan 30, 2015 at 1:51 PM, Hosom, Stephen M wrote: > Some things to think about: > > > > 1. Logstash is easy, but all the easiness that comes with it comes at > a performance hit. > > a. If you go this way, you could probably make this "easier" by > logging Bro's logs to JSON for Logstash to send to Elasticsearch. > > i. This > will put you in an odd spot compared to other Bro deployments. Not many > people log JSON logs. If you do this, you'll want to use jq as a replacement > for bro-cut. > > b. Make sure you look at Heka as an alternative. > > 2.
Some people have had success with the NSQ writer and using NSQ, but > that is also not what most people would consider a "production" deployment. > > > > If you do nothing else, please use a recent version of Elasticsearch. Older > versions of Elasticsearch were MUCH worse on performance and lacked features > that are very nice to have. You'll want to look into tuning Elasticsearch as > well. There are MANY articles out there on how to tune Elasticsearch for > indexing large data volumes. > > > > Finally, keep in mind that a lot of how you keep Bro's logs can vary > depending on the size of your environment and your tolerance level for risk. > If you can't risk losing indexed logs when Elasticsearch is down, then > you'll want to look into a queuing system like Redis, NSQ, or RabbitMQ. > Seems like everyone has their pet implementation of AMQP, so I'll let you > sort that one out. This conversation could really go on forever... feel free > to hop on #bro on freenode if you want to chat. > > > > From: bro-bounces at bro.org [mailto:bro-bounces at bro.org] On Behalf Of anthony > kasza > Sent: Friday, January 30, 2015 2:29 AM > To: Luis Miguel Silva > Cc: bro > Subject: Re: [Bro] Elasticsearch Writer vs logstash > > > > I thought the ES writer had some issues that needed to be worked out around indexes > or something. Seth? > > -AK > > On Jan 29, 2015 11:17 PM, "Luis Miguel Silva" wrote: > > ...I just found a website that has a tutorial on how to parse bro logs with > logstash AND points to the config used in the distro Security Onion. > > > > So I'd just like to know what your thoughts are on using the elasticsearch > writer vs logstash?
> > > > Thank you, > > Luis > > > > On Thu, Jan 29, 2015 at 11:48 PM, Luis Miguel Silva wrote: > > Dear all, > > > > I'm interested in dumping my bro logs into an elastic search instance and, > based on what I was able to learn thus far, it seems I have two different > options: > > - use the elasticsearch writer (which the documentation says should not be > used in production as it doesn't have any error checking) > > - or use logstash to read info directly from the bro logs and externally > dump it into elasticsearch > > > > It seems to me the logstash route is better, given that I should be able to > massage the data into more "user friendly" fields that can be easily queried > with elasticsearch. > > > > So my question is, based on your experience, what is the best option? And, > if you do use logstash, can you share your logstash config? > > > > Thanks in advance, > > Luis > > > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From seth at icir.org Fri Jan 30 09:00:42 2015 From: seth at icir.org (Seth Hall) Date: Fri, 30 Jan 2015 12:00:42 -0500 Subject: [Bro] Elasticsearch Writer vs logstash In-Reply-To: References: Message-ID: > On Jan 30, 2015, at 2:29 AM, anthony kasza wrote: > > I thought the ES writer had some issues that needed to be worked out around indexes or something. Seth? I'll go ahead and do the long response. :) This has been an area of confusion for people for quite some time. That's been my fault to a great degree; I've been looking to provide guidance on the topic and offer easy configurations, but it's been difficult to create that. I'll do a breakdown of the current status of a number of different methods.
== Bro -> ES (with ES log writer) == This was the original output method and has been in Bro since 2.1, I think. It was written pretty quickly because we assumed that we would just be able to shove logs at ES as fast as we could and it could accept them. This method absolutely works on many small deployments and is very easy. Just load one script and away you go. The problem with this method is that on larger deployments this causes the log messages to get queued up in Bro as the main Bro thread shuttles them over to the thread that will actually transfer the logs to ES. Typically people think this is a memory leak, but it's just that too many logs are being held in memory and not getting a chance to be flushed because ES is taking so long to respond. It's not a very fun result. We've been stuck for quite some time at this dilemma. == Bro -> NSQ -> Forwarding tool -> ES == This seems to be the most promising mechanism right now. We take advantage of the fact that NSQ spools to disk to deal with any memory overload issues, and it always quickly accepts logs from Bro, which keeps Bro's log queues flushed nicely. There is a prototype of a tool for forwarding, but it's still pretty rough. I haven't had time to get back to it and clean it up and write documentation. (It's written in Go; if anyone's interested in taking this on, get in touch with me!) It looks like this method works well and can cope with ES becoming overloaded without causing anything to crash. There are still some larger questions that we need to answer relating to ES tuning, because the default template that ES uses for Bro logs does a lot of stuff that we don't need and causes a lot of unnecessary overhead. Vlad Grigorescu has done some work in this area, but in my opinion we still need to explore ways to automate this process. == Bro -> JSON logs -> logstash -> ES == Some people are using this because it's really easy to set up and somewhat resilient.
At least Bro doesn't get overwhelmed, because it's just writing logs to disk. I do recommend that people write JSON logs and try to avoid creating filters with logstash to parse the Bro logs. It's just going to increase your work for this one small part of your overall architecture. The logstash config with JSON logs should be *very* short (I don't have it offhand). To get Bro to output JSON logs, it just takes putting this in local.bro (or some other loaded script): redef LogAscii::use_json=T; I really don't know about the performance of logstash with really high log volume, but I don't have high hopes for it either. I hope this helps with some of the background. :) .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From jswan at sugf.com Fri Jan 30 09:08:45 2015 From: jswan at sugf.com (Swan, Jay) Date: Fri, 30 Jan 2015 17:08:45 +0000 Subject: [Bro] Elasticsearch Writer vs logstash In-Reply-To: <1422625307.3168.5.camel@JamesiMac> References: <1422625307.3168.5.camel@JamesiMac> Message-ID: <20150130170855.26B0A2C4083@rock.ICSI.Berkeley.EDU> Yet another option: use nxlog on the Bro node. Have it forward the logs to Logstash as raw JSON and use the json_lines codec in Logstash to feed to Elasticsearch. The reason I like this option is that it allows you to do complex processing locally rather than use a lot of complex grok filters in Logstash, which can be really slow. Nxlog can do type conversions, regex filters, and a lot more. It also keeps your Logstash config simple. I configure nxlog to use a separate TCP or UDP output using JSON formatting, then my Logstash config just looks like: input { tcp { port => 5000 type => bro_dns codec => json_lines } etc. } I would love to have the native Elasticsearch writer fixed up and blessed for production use, though!
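Seth's point about keeping the logstash side trivial once Bro emits JSON, and James's earlier gotcha about byte counters arriving as strings, can be illustrated with a short Python sketch. The field names follow the default conn.log schema; the sample line and the `normalize` helper are illustrative, not part of any tool mentioned above:

```python
import json

# Byte/packet counters that should be numeric when the entry is indexed.
# Bro's JSON writer already emits numbers for these; the coercion matters
# when values arrive as strings (e.g. from a TSV-to-JSON conversion).
INT_FIELDS = ["orig_bytes", "resp_bytes", "orig_ip_bytes", "resp_ip_bytes",
              "orig_pkts", "resp_pkts", "missed_bytes"]

def normalize(line):
    """Parse one JSON-formatted conn.log line and force counters to int."""
    entry = json.loads(line)
    for field in INT_FIELDS:
        value = entry.get(field)
        if isinstance(value, str) and value.isdigit():
            entry[field] = int(value)
    return entry

sample = ('{"ts":1422625307.3,"uid":"C1","id.orig_h":"10.0.0.1",'
          '"id.resp_h":"8.8.8.8","orig_bytes":"120","resp_bytes":"4560"}')
entry = normalize(sample)
print(entry["orig_bytes"] + entry["resp_bytes"])  # 4680
```

With the counters coerced up front, no `mutate { convert => ... }` filter is needed downstream.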
Jay From: bro-bounces at bro.org [mailto:bro-bounces at bro.org] On Behalf Of James Lay Sent: Friday, January 30, 2015 6:42 AM To: bro at bro.org Subject: Re: [Bro] Elasticsearch Writer vs logstash On Thu, 2015-01-29 at 23:48 -0700, Luis Miguel Silva wrote: Dear all, I'm interested in dumping my bro logs into an elastic search instance and, based on what I was able to learn thus far, it seems I have two different options: - use the elasticsearch writer (which the documentation says should not be used in production as it doesn't have any error checking) - or use logstash to read info directly from the bro logs and externally dump it into elasticsearch It seems to me the logstash route is better, given that I should be able to massage the data into more "user friendly" fields that can be easily queried with elasticsearch. So my question is, based on your experience, what is the best option? And, if you do use logstash, can you share your logstash config? Thanks in advance, Luis _______________________________________________ Bro mailing list bro at bro-ids.org http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro I've used bro and logstash with good success...one setup is everything is on one machine, the other is remote using rsyslog to get the data to logstash. I tried going direct bro->elasticsearch, but logstash creates logstash-* shards, and bro creates bro-* shards, and kibana had a hard time seeing both. 
I'm currently just piping conn.log, but here's my logstash entry: "(?<ts>(.*?))\t(?<uid>(.*?))\t(?<id_orig_h>(.*?))\t(?<id_orig_p>(.*?))\t(?<id_resp_h>(.*?))\t(?<id_resp_p>(.*?))\t(?<proto>(.*?))\t(?<service>(.*?))\t(?<duration>(.*?))\t(?<orig_bytes>(.*?))\t(?<resp_bytes>(.*?))\t(?<conn_state>(.*?))\t(?<local_orig>(.*?))\t(?<missed_bytes>(.*?))\t(?<history>(.*?))\t(?<orig_pkts>(.*?))\t(?<orig_ip_bytes>(.*?))\t(?<resp_pkts>(.*?))\t(?<resp_ip_bytes>(.*?))\t(?<tunnel_parents>(.*))" An interesting gotcha is the fact that the above doesn't see sizes as values but strings, so I had to add a mutate to get that to work: mutate { convert => [ "resp_bytes", "integer" ] convert => [ "resp_ip_bytes", "integer" ] convert => [ "orig_bytes", "integer" ] convert => [ "orig_ip_bytes", "integer" ] } Hope that helps...feel free to ping me off list if you need any help. James -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150130/4ca17375/attachment.html From luismiguelferreirasilva at gmail.com Fri Jan 30 11:03:45 2015 From: luismiguelferreirasilva at gmail.com (Luis Miguel Silva) Date: Fri, 30 Jan 2015 12:03:45 -0700 Subject: [Bro] Elasticsearch Writer vs logstash In-Reply-To: References: Message-ID: Everyone, since Seth's answer was the most complete one, I'll just reply to this one and talk about the other options you guys kindly pointed me to! ####################################################### Seth, Maybe you can give me some advice on what will probably work best for my usage scenario... - I have an ARM 7 appliance (with a dual core at 1ghz processor and 1gb of ram) - I want to MOSTLY run the default bro config, just to learn more about the network and see what is going on (e.g. identify hosts, software and some more relevant connection info about what is going on) - It will be capturing my home traffic (e.g.
4 users total and about 10-15 devices) - the appliance isn't super powerful, so I'm thinking about: -- *option 1*: sending the resulting logs to a remote database of some sort (and I include "elasticsearch" in my database definition though, in the end, I might end up using mongo or a relational database of some sort) and query information offline (as to not overload the appliance) -- *option 2*: (and I don't even know if this is possible), set up some sort of a bro cluster, where the appliance sniffs and pre-filters the connections, but offloads the actual processing work / log writing to one or more remote machines --- is this even possible? The reason why I'm asking is that, if I can get the log files to land and the processing to occur in a remote machine, then maybe I do not necessarily care about how I then convert the logs into a format I can easily query... Given that I do not even think that option 2 is a real option, here are all the options that have been presented thus far: *option 1* - Bro [at appliance] -> ES [at remote location] *option 2* - Bro [at appliance] -> NSQ [at appliance] -> Forwarding tool [at remote location] -> ES [at remote location] *option 3* - Bro -> JSON logs [at appliance] -> logstash [at appliance] -> ES [at remote location] *option 4* - Bro -> nxlog [at appliance] -> logstash [at remote location] -> ES [at remote location] (does it even make sense to consider this option if Bro can output json?) *option 5* - Bro -> Heka [at appliance] -> ES [at remote location] (how heavy is Heka and how well does it perform?) *option 6* - Bro -> rsyslog [at appliance] -> logstash [at remote location] -> ES [at remote location] *Note*: "[at appliance]" means the processing would happen on the appliance and "[at remote location]" means the location where the data ends up.
So the question is, given that I want to minimize the amount of processing / memory, I want to offload the "log data" anyway AND I'll be monitoring a typical home connection, what option do you guys think will work best for me? At a glance, I think *option 2* and *options 4 and 6* (which are very similar options; we just change the local log forwarding service) are the options that will perform the best. Thank you, Luis On Fri, Jan 30, 2015 at 10:00 AM, Seth Hall wrote: > > > On Jan 30, 2015, at 2:29 AM, anthony kasza > wrote: > > > > I thought the ES writer had some issues that needed to be worked out around > indexes or something. Seth? > > I'll go ahead and do the long response. :) > > This has been an area of confusion for people for quite some time. That's > been my fault to a great degree; I've been looking to provide guidance on > the topic and offer easy configurations, but it's been difficult to create > that. I'll do a breakdown of the current status of a number of different > methods. > > == Bro -> ES (with ES log writer) == > This was the original output method and has been in Bro since 2.1, I > think. It was written pretty quickly because we assumed that we would just > be able to shove logs at ES as fast as we could and it could accept them. > This method absolutely works on many small deployments and is very easy. > Just load one script and away you go. > > The problem with this method is that on larger deployments this causes the > log messages to get queued up in Bro as the main Bro thread shuttles them > over to the thread that will actually transfer the logs to ES. Typically > people think this is a memory leak, but it's just that too many logs are > being held in memory and not getting a chance to be flushed because ES is > taking so long to respond. It's not a very fun result. We've been stuck > for quite some time at this dilemma. > > == Bro -> NSQ -> Forwarding tool -> ES == > This seems to be the most promising mechanism right now.
We take > advantage of the fact that NSQ spools to disk to deal with any memory > overload issues, and it always quickly accepts logs from Bro, which keeps > Bro's log queues flushed nicely. There is a prototype of a tool > for forwarding, but it's still pretty rough. I haven't had time to get > back to it and clean it up and write documentation. (It's written in Go; if > anyone's interested in taking this on, get in touch with me!) > > It looks like this method works well and can cope with ES becoming > overloaded without causing anything to crash. There are still some larger > questions that we need to answer relating to ES tuning, because the default > template that ES uses for Bro logs does a lot of stuff that we don't need > and causes a lot of unnecessary overhead. Vlad Grigorescu has done some > work in this area, but in my opinion we still need to explore ways to > automate this process. > > == Bro -> JSON logs -> logstash -> ES == > Some people are using this because it's really easy to set up and somewhat > resilient. At least Bro doesn't get overwhelmed, because it's just writing > logs to disk. I do recommend that people write JSON logs and try to avoid > creating filters with logstash to parse the Bro logs. It's just going to > increase your work for this one small part of your overall architecture. > The logstash config with JSON logs should be *very* short (I don't have it > offhand). > > To get Bro to output JSON logs, it just takes putting this in local.bro > (or some other loaded script): > redef LogAscii::use_json=T; > > I really don't know about the performance of logstash with really high log > volume, but I don't have high hopes for it either. > > I hope this helps with some of the background. :) > > .Seth > > -- > Seth Hall > International Computer Science Institute > (Bro) because everyone has a network > http://www.bro.org/ > > -------------- next part -------------- An HTML attachment was scrubbed...
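The queue-and-forward pattern behind option 2 (NSQ absorbs bursts on disk while a small forwarder drains it into ES in batches) can be sketched in Python. Everything here is hypothetical scaffolding: the `send_batch` callback stands in for a real ES bulk request, and the deque stands in for NSQ's spool:

```python
from collections import deque

def forward(queue, send_batch, batch_size=3):
    """Drain queued log lines into fixed-size batches.

    Batches the sink rejects are pushed back onto the front of the
    queue, so nothing is lost while Elasticsearch is down.
    """
    sent = 0
    while queue:
        batch = [queue.popleft() for _ in range(min(batch_size, len(queue)))]
        if send_batch(batch):
            sent += len(batch)
        else:
            # Sink unavailable: requeue in original order and stop; a real
            # forwarder would sleep and retry while NSQ keeps spooling.
            queue.extendleft(reversed(batch))
            break
    return sent

# Simulate a sink that fails on its second bulk request, then recovers.
logs = deque('{{"uid":"C{}"}}'.format(i) for i in range(7))
calls = {"n": 0}

def flaky_sink(batch):
    calls["n"] += 1
    return calls["n"] != 2

first = forward(logs, flaky_sink)   # stops after the failed batch
second = forward(logs, flaky_sink)  # remaining lines go through
print(first, second)  # 3 4
```

The point of the pattern is that a failed flush never drops lines; the queue simply grows until the sink recovers, which is exactly the property the ES writer lacked.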
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150130/d269d448/attachment-0001.html From rogeriobastos at pop-ba.rnp.br Fri Jan 30 12:15:56 2015 From: rogeriobastos at pop-ba.rnp.br (rogeriobastos) Date: Fri, 30 Jan 2015 17:15:56 -0300 Subject: [Bro] Using bro without traffic capture Message-ID: <373fd9828909647b77ca63490ec143cd@pop-ba.rnp.br> Hi guys, I'm working on a project to develop a Network Security Early Warning System (NS-EWS). We need to correlate events, but we can't capture network traffic because of privacy concerns. I think we can insert events into bro with broccoli and use it to correlate events. I would like to know if anyone has made something similar or has some suggestions on how to do this. -- Rogerio Bastos PoP-BA/RNP From qhu009 at aucklanduni.ac.nz Wed Jan 28 15:35:45 2015 From: qhu009 at aucklanduni.ac.nz (Qinwen Hu) Date: Wed, 28 Jan 2015 23:35:45 -0000 Subject: [Bro] How to configure Bro to detect UDP port 53 Message-ID: Hi all, I am a new Bro user. I did a few experiments reading the same DNS trace file with the Bro online version and with Bro on my personal PC. The version number is 2.3.1. I got some interesting results: the online version checks UDP ports 53, 5353, and 5355 (port 53 has record), but the one on my PC only checks ports 5353 and 5355 (no DNS query print out). Is this a configuration issue? And is there a way that I can configure my Bro to check port 53? Thanks Steven -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150128/f418e58e/attachment.html -------------- next part -------------- A non-text attachment was scrubbed... Name: No DNS query print out.png Type: image/png Size: 76225 bytes Desc: not available Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150128/f418e58e/attachment.bin -------------- next part -------------- A non-text attachment was scrubbed...
Name: port 53 has record.png Type: image/png Size: 22362 bytes Desc: not available Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20150128/f418e58e/attachment-0001.bin
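For Steven's question, one quick way to check which ports a given Bro build actually logged as DNS is to tally the responder ports in dns.log. A minimal Python sketch — the log fragment is made up, and the column position assumes the default tab-separated layout where `id.resp_p` is the sixth field:

```python
from collections import Counter

# A made-up fragment of a tab-separated dns.log; only the first six
# columns (ts, uid, id.orig_h, id.orig_p, id.resp_h, id.resp_p) matter here.
SAMPLE_DNS_LOG = """\
#fields\tts\tuid\tid.orig_h\tid.orig_p\tid.resp_h\tid.resp_p
1422486945.0\tC1\t10.0.0.5\t40000\t8.8.8.8\t53
1422486946.0\tC2\t10.0.0.5\t5353\t224.0.0.251\t5353
1422486947.0\tC3\t10.0.0.6\t50000\t8.8.4.4\t53
"""

def resp_port_counts(log_text):
    """Count responder ports seen in dns.log, skipping '#' header lines."""
    counts = Counter()
    for line in log_text.splitlines():
        if not line or line.startswith("#"):
            continue
        counts[line.split("\t")[5]] += 1
    return counts

counts = resp_port_counts(SAMPLE_DNS_LOG)
print(counts)  # Counter({'53': 2, '5353': 1})
```

If port 53 never shows up in the tally while 5353/5355 do, the DNS analyzer simply isn't being attached to port-53 traffic on that installation, which matches the symptom described above.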