From hckim at narusec.com Mon Sep 1 02:03:08 2014
From: hckim at narusec.com (김희철)
Date: Mon, 1 Sep 2014 18:03:08 +0900
Subject: [Bro] lookup_asn
Message-ID: 

Hi, I am trying to add the AS name to a log. I found that Bro 2.3 has lookup_asn, which returns the AS number only. Does Bro 2.3 have any function to print the AS name to a log?

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140901/4be0290e/attachment.html

From seth at icir.org Mon Sep 1 20:15:16 2014
From: seth at icir.org (Seth Hall)
Date: Mon, 1 Sep 2014 23:15:16 -0400
Subject: [Bro] connecting to bro with broccoli
In-Reply-To: 
References: 
Message-ID: 

On Aug 31, 2014, at 2:31 PM, daniel nagar wrote:

> I was sending out many HTTP requests which caused many events to be raised per request/response

Generally, I wouldn't recommend sending around protocol-based events. Sending anything with a connection record that needs to be serialized and deserialized is probably not a good idea.

Why are you sending so much data, by the way? You may have approached the problem with a suboptimal design.

> I've figured out the memory expansion problem: it seems that the "ChunkQueue" in "ChunkedIO" does not have a limit, and I was sending events at higher speeds than my broccoli client could process, so the queue just kept growing.

I was sort of curious if that's what was going on. Nice to have an answer to that. :)

> This is a temporary fix in my opinion; a more robust communication framework is needed, such as using an external queue (such as ActiveMQ / ZeroMQ) for transferring events/chunks.

There is already a major overhaul of Bro's communication system underway.

.Seth

--
Seth Hall
International Computer Science Institute
(Bro) because everyone has a network
http://www.bro.org/

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 495 bytes
Desc: Message signed with OpenPGP using GPGMail
Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140901/cd132e92/attachment.bin

From dngr7512 at gmail.com Tue Sep 2 01:38:58 2014
From: dngr7512 at gmail.com (daniel nagar)
Date: Tue, 2 Sep 2014 11:38:58 +0300
Subject: [Bro] connecting to bro with broccoli
In-Reply-To: 
References: 
Message-ID: 

>
> Why are you sending so much data by the way? You may have approached the
> problem with a suboptimal design

I'm extracting information about HTTP requests/responses going through the network, and I'm using an external database to save some of that data, so I couldn't just use Bro scripting; broccoli was a nice solution at the time. If you have any suggestions for how I could implement my application without using broccoli, that would be great.

> There is already a major overhaul of Bro's communication system underway

Is there a place I can find more information about that?

Another problem I had: I tried upgrading to Bro 2.3, but I couldn't receive any events through broccoli like I was receiving with Bro 2.2, no matter what configuration I used on the broccoli client side. Should I have enabled it on the Bro side somehow?

On Tue, Sep 2, 2014 at 6:15 AM, Seth Hall wrote:

>
> On Aug 31, 2014, at 2:31 PM, daniel nagar wrote:
>
> > I was sending out many HTTP requests which caused many events to be raised
> > per request/response
>
> Generally, I wouldn't recommend sending around protocol-based events.
> Sending anything with a connection record that needs to be serialized and
> deserialized is probably not a good idea.
>
> Why are you sending so much data, by the way? You may have approached the
> problem with a suboptimal design.
> > > I've figured out the memory expansion problem, it seems that the > "ChunkQueue" in "ChunkedIO" does not have a limit and I was sending events > at higher speeds than my broccoli client could process so the queue just > kept growing. > > I was sort of curious if that's what was going on. Nice to have an answer > to that. :) > > > This is a temporary fix in my opinion, a more robust communication > framework is needed such as using an external queue (such as ActiveMQ / > ZeroMQ) for transferring events/chunks. > > There is already major overhaul of Bro's communication system underway. > > .Seth > > -- > Seth Hall > International Computer Science Institute > (Bro) because everyone has a network > http://www.bro.org/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140902/e7b7e94f/attachment.html From seth at icir.org Tue Sep 2 06:12:48 2014 From: seth at icir.org (Seth Hall) Date: Tue, 2 Sep 2014 09:12:48 -0400 Subject: [Bro] connecting to bro with broccoli In-Reply-To: References: Message-ID: On Sep 2, 2014, at 4:38 AM, daniel nagar wrote: >> Why are you sending so much data by the way? You may have approached the problem with a suboptimal design > > I'm extracting information about HTTP requests/responses going through the network and I'm using an external database to save some of that data so I couldn't just use Bro scripting so using broccoli was a nice solution at that time. Ah. You could write a logging writer. We do have an SQLite writer already and there is a PostgreSQL writer in the pipeline. Alternately, you could write to a log on disk and then have some other process read that file in and pass it to the database. >> There is already major overhaul of Bro's communication system underway > > Is there a place I can find more information about that? Not really yet. 
It's in the early implementation phase still and there is no timeline on when it will be functional yet. > Another problem I had is that I tried upgrading to Bro 2.3 but I couldn't receive any event through broccoli like I was receiving with Bro 2.2 no matter what configuration I was using on the bro client side, should have I enabled it on the Bro side somehow? Are you positive that you're running all of the same scripts that you were and that you're using Broccoli from Bro 2.3? I'm not sure off the top of my head if there were any compatibility changes between the two releases or not, but it's certainly possible. .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 495 bytes Desc: Message signed with OpenPGP using GPGMail Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140902/e3f0396e/attachment.bin From luke at geekempire.com Tue Sep 2 07:54:14 2014 From: luke at geekempire.com (Mike Reeves) Date: Tue, 2 Sep 2014 10:54:14 -0400 Subject: [Bro] SMB Message-ID: <4BD95AC5-2CE9-47A7-9520-B141D662C053@geekempire.com> Looking for the branch I can try that has SMB in it? Is it in main? Thanks Mike From michalpurzynski1 at gmail.com Tue Sep 2 10:38:03 2014 From: michalpurzynski1 at gmail.com (Michal Purzynski) Date: Tue, 02 Sep 2014 10:38:03 -0700 Subject: [Bro] SMB In-Reply-To: <4BD95AC5-2CE9-47A7-9520-B141D662C053@geekempire.com> References: <4BD95AC5-2CE9-47A7-9520-B141D662C053@geekempire.com> Message-ID: <5406007B.9080309@gmail.com> On 9/2/14, 7:54 AM, Mike Reeves wrote: > Looking for the branch I can try that has SMB in it? Is it in main? 
>
> Thanks
>
> Mike
> _______________________________________________
> Bro mailing list
> bro at bro-ids.org
> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro

Connecting to the thread and adding two more questions:

- is it possible to use the SMB packet analyzer on Bro 2.3?
- do we have ways to detect other similar protocols? NFS, I'm looking at you. And MySQL. And Postgres.

From SHille at heartland.com Tue Sep 2 13:41:59 2014
From: SHille at heartland.com (Hille, Samson)
Date: Tue, 2 Sep 2014 20:41:59 +0000
Subject: [Bro] Using Bro's file extraction script
Message-ID: <0B774F5E2B1584419DA3BCD9E8D1B4C477E47527@HOMX03.hdcare.local>

Hello!

I am using Bro in Doug Burks's Security Onion Suite. I was wondering if there is a way to have the Bro script that extracts executables also send the executables to my firewall's API?

Example of the API command that might be included in the Bro script:

curl -i -k -vv -F apikey=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx -F file=@/nsm/bro/extracted/HTTP-FEQ4PS1wXd5LAgG3I4.exe https://examplefirewallapi.com

Taking this one step further: make the script verify the executables' file hashes before sending them to the API (to prevent checking the exact same exe twice).

Any feedback would be greatly appreciated!

Samson Hille
IT Security Analyst

________________________________

Privacy Notice: This electronic mail message, and any attachments, are confidential and are intended for the exclusive use of the addressee(s) and may contain information that is proprietary and that may be Individually Identifiable or Protected Health Information under HIPAA. If you are not the intended recipient, please immediately contact the sender by telephone, or by email, and destroy all copies of this message. If you are a regular recipient of our electronic mail, please notify us promptly if you change your email address.
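The hash check proposed above can be sketched as a short Bro policy script. This is an untested illustration, not the actual Security Onion setup: it assumes the Bro 2.2+ files framework with the hash and extract analyzers enabled, and `submit.sh` is a hypothetical wrapper for the curl call to the firewall API.

```bro
# Hypothetical sketch: hand each extracted executable to an external
# submit script at most once, keyed by its SHA-1 hash.
global submitted_hashes: set[string];

event file_state_remove(f: fa_file)
    {
    # Only act on files that were both hashed and extracted to disk.
    if ( ! f?$info || ! f$info?$sha1 || ! f$info?$extracted )
        return;

    if ( f$info$sha1 in submitted_hashes )
        return;   # exact same file was already submitted; skip it

    add submitted_hashes[f$info$sha1];

    # submit.sh stands in for whatever posts the file to the API.
    system(fmt("/usr/local/bin/submit.sh %s", f$info$extracted));
    }
```

Note that the `submitted_hashes` set grows without bound in this sketch; a real deployment would want to expire entries or persist them externally.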
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140902/58ff03cf/attachment.html

From sconzo at visiblerisk.com Tue Sep 2 14:00:01 2014
From: sconzo at visiblerisk.com (Mike Sconzo)
Date: Tue, 2 Sep 2014 16:00:01 -0500
Subject: [Bro] Using Bro's file extraction script
In-Reply-To: <0B774F5E2B1584419DA3BCD9E8D1B4C477E47527@HOMX03.hdcare.local>
References: <0B774F5E2B1584419DA3BCD9E8D1B4C477E47527@HOMX03.hdcare.local>
Message-ID: 

You can do pretty much whatever you'd like with the extraction stuff. Here's something I wrote that uses curl to check the VirusTotal API:

https://github.com/sooshie/bro-scripts/blob/master/2.2-scripts/vt_check.bro

There's no reason you can't reference the extracted file and curl it elsewhere. We actually did something similar: instead of calling curl directly, we wrote an external script that keeps track of hashes and then does the submit, etc. It works nicely. The biggest challenge was getting something to keep track of the hashes to check for duplicates.

-=Mike

On Tue, Sep 2, 2014 at 3:41 PM, Hille, Samson wrote:

> Hello!
>
> I am using Bro in Doug Burks's Security Onion Suite.
>
> I was wondering if there is a way to have the Bro script that extracts
> executables also send the executables to my firewall's API?
>
> Example of the API command that might be included in the Bro script:
>
> curl -i -k -vv -F apikey=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx -F
> file=@/nsm/bro/extracted/HTTP-FEQ4PS1wXd5LAgG3I4.exe
> https://examplefirewallapi.com
>
> Taking this one step further: make the script verify the executables'
> file hashes before sending them to the API (to prevent checking the
> exact same exe twice).
>
> Any feedback would be greatly appreciated!
> > > > *Samson Hille* > > IT Security Analyst > > > > ------------------------------ > > Privacy Notice: This electronic mail message, and any attachments, are > confidential and are intended for > the exclusive use of the addressee(s) and may contain information that is > proprietary and that may be > Individually Identifiable or Protected Health Information under HIPAA. If > you are not the intended > recipient, please immediately contact the sender by telephone, or by > email, and destroy all copies of this > message. If you are a regular recipient of our electronic mail, please > notify us promptly if you change > your email address. > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -- cat ~/.bash_history > documentation.txt -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140902/8278037a/attachment.html From vlad at grigorescu.org Tue Sep 2 18:22:26 2014 From: vlad at grigorescu.org (Vlad Grigorescu) Date: Tue, 2 Sep 2014 20:22:26 -0500 Subject: [Bro] SMB In-Reply-To: <4BD95AC5-2CE9-47A7-9520-B141D662C053@geekempire.com> References: <4BD95AC5-2CE9-47A7-9520-B141D662C053@geekempire.com> Message-ID: topic/vladg/smb It compiles against 2.3, doesn't compile against master yet (due to the plugin rewrite). At this point, it's only been lightly tested. Given the complexity of the protocol, it's still buggy, and there are large areas that aren't supported yet. Please let us know if you test it and encounter any issues - fair warning: they'll probably be difficult to fix without a PCAP :-) --Vlad On Tue, Sep 2, 2014 at 9:54 AM, Mike Reeves wrote: > Looking for the branch I can try that has SMB in it? Is it in main? 
> > Thanks > > Mike > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140902/8398dae6/attachment.html From vlad at grigorescu.org Tue Sep 2 18:33:11 2014 From: vlad at grigorescu.org (Vlad Grigorescu) Date: Tue, 2 Sep 2014 20:33:11 -0500 Subject: [Bro] SMB In-Reply-To: <5406007B.9080309@gmail.com> References: <4BD95AC5-2CE9-47A7-9520-B141D662C053@geekempire.com> <5406007B.9080309@gmail.com> Message-ID: On Tue, Sep 2, 2014 at 12:38 PM, Michal Purzynski < michalpurzynski1 at gmail.com> wrote: > - do we have ways to detect other similar protocols? NFS, I'm looking at > you. And MySQL. And Postgres. I'm hoping you mean similar from a functionality standpoint, and not similar based on what's on the wire... :-) There was an old NFS analyzer: https://github.com/bro/bro/blob/v2.1/src/NFS.cc Apparently it didn't work all that well, but it might be a jumping off point. There's a MySQL analyzer that's currently in beta in topic/vladg/smb. I don't know of anyone working on Postgres right now. --Vlad -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140902/c985d36c/attachment.html From inetjunkmail at gmail.com Wed Sep 3 10:14:16 2014 From: inetjunkmail at gmail.com (inetjunkmail) Date: Wed, 3 Sep 2014 13:14:16 -0400 Subject: [Bro] Adding options to bro managed by broctl In-Reply-To: <49FAE13F-E71B-4F01-AAB3-F846D8DA5030@icir.org> References: <49FAE13F-E71B-4F01-AAB3-F846D8DA5030@icir.org> Message-ID: Seth: Thanks for the direction. We ended up leveraging the capture_filter as you described. Our traffic is MPLS so the capture filter is a little more complicated but we've got it working well. 
If anyone else needs it, here's what we've done to use capture_filters in an MPLS environment. We have anywhere from 0-2 MPLS labels on our traffic, so:

redef capture_filters += {
    ["inet_fltr"] = "(net 1.0.0.0/24 or port 443) or (mpls and (net 1.0.0.0/24 or port 443)) or (mpls and mpls and (net 1.0.0.0/24 or port 443))"
};

There may be some better way to recursively pop any number of MPLS labels, but this seems to work OK in our environment. Ultimately, we intend to have our tap aggregator pop the MPLS labels and apply any necessary filters, but MPLS label popping is only roadmapped at this point on our tool.

Thanks

On Sun, Aug 31, 2014 at 12:07 PM, Seth Hall wrote:

>
> On Aug 28, 2014, at 11:07 AM, James Lay wrote:
>
> > broargs = -f 'net 1.0.0.0/24 or port 443'
> >
> > to your broctl.cfg file.
>
> That will work, but technically it might be a bit better to do something
> like this...
>
> redef capture_filters += {
>     ["watched network"] = "net 1.0.0.0/24",
>     ["https"] = "port 443"
> };
>
> If you build up what you want to capture this way, it gives Bro the chance
> to automatically build your BPF filters for you, including checking each
> component of your filter for mistakes, which it will then detect at startup,
> telling you which component of your filter failed. If you use the above
> lines to indicate the traffic you'd like to allow into Bro, you can also
> set restriction filters to limit things a bit. For instance, in that
> 1.0.0.0/24 subnet you might want to ignore a single host. You could
> implement that by adding the following lines (restrict_filters are ANDed
> onto the capture filter, so the host to ignore is negated)...
>
> redef restrict_filters += {
>     ["unmonitored host"] = "not host 1.0.0.54"
> };
>
> The filter that would ultimately be constructed by those lines is...
> ((port 443) or (net 1.0.0.0/24)) and (not host 1.0.0.54)
>
> One thing to be careful with here, though: generally, when you take
> the stance that you are doing filtering, you have to be really careful to
> understand your traffic.
> If you have any traffic with MPLS or VLAN tags,
> the filters I gave won't allow that traffic through. If you're interested
> in doing ARP analysis, you won't see those packets either. Same goes for
> IPv6.
>
> Filtering is an area where we've tried to make things simple by running a
> fully open filter; there are a lot of dragons when you stray from that
> path. :)
>
> .Seth
>
> --
> Seth Hall
> International Computer Science Institute
> (Bro) because everyone has a network
> http://www.bro.org/
>
> _______________________________________________
> Bro mailing list
> bro at bro-ids.org
> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140903/39d1f5e1/attachment.html

From pooh_champ19 at yahoo.com Wed Sep 3 23:52:32 2014
From: pooh_champ19 at yahoo.com (pooja champaneria)
Date: Wed, 3 Sep 2014 23:52:32 -0700
Subject: [Bro] Bro Digest, Vol 101, Issue 4
In-Reply-To: 
References: 
Message-ID: <1409813552.1721.YahooMailNeo@web161804.mail.bf1.yahoo.com>

Respected sir,

I have just started using the Bro IDS and am going through your tutorials to learn it. When I ran the command that shows the connections lasting more than 60 seconds, it gave me an absolutely correct result. Now I want to see all the connections that take place; how can I implement that? I am also unsure of how to use policy scripts. Please suggest a solution, and resources that would help me learn Bro faster.

Reply awaited.
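A policy-script answer to the question above might look like the sketch below. It assumes Bro 2.x event and record names and is purely illustrative; the default conn.log already records every connection without any extra scripting.

```bro
# Illustrative sketch (untested): report every connection when Bro
# flushes its state, rather than only those lasting more than 60 seconds.
event connection_state_remove(c: connection)
    {
    print fmt("%s:%s -> %s:%s lasted %s",
              c$id$orig_h, c$id$orig_p,
              c$id$resp_h, c$id$resp_p,
              c$duration);
    }
```

Saved to a file, such a script would be passed on the command line (e.g. `bro -r trace.pcap myscript.bro`) alongside the default policy.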
On Thursday, September 4, 2014 12:31 AM, "bro-request at bro.org" wrote:

Send Bro mailing list submissions to bro at bro.org

To subscribe or unsubscribe via the World Wide Web, visit
http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro
or, via email, send a message with subject or body 'help' to
bro-request at bro.org

You can reach the person managing the list at bro-owner at bro.org

When replying, please edit your Subject line so it is more specific than "Re: Contents of Bro digest..."

Today's Topics:

   1. Re: Adding options to bro managed by broctl (inetjunkmail)

----------------------------------------------------------------------

Message: 1
Date: Wed, 3 Sep 2014 13:14:16 -0400
From: inetjunkmail
Subject: Re: [Bro] Adding options to bro managed by broctl
To: Seth Hall
Cc: bro at bro.org
Message-ID: 
Content-Type: text/plain; charset="utf-8"

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140903/39d1f5e1/attachment-0001.html ------------------------------ _______________________________________________ Bro mailing list Bro at bro.org http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro End of Bro Digest, Vol 101, Issue 4 *********************************** -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140903/1349d9cf/attachment.html From jdopheid at illinois.edu Thu Sep 4 08:19:19 2014 From: jdopheid at illinois.edu (Dopheide, Jeannette M) Date: Thu, 4 Sep 2014 15:19:19 +0000 Subject: [Bro] BroCon '14: Slides and Videos are available Message-ID: We have updated the BroCon '14 event page to include the slides and videos from the presentations. http://www.bro.org/community/brocon2014.html Note: we are not publicly posting solutions to the training exercises. To request solutions, please contact info at bro.org. Regards, Jeannette Dopheide ------ Jeannette M. Dopheide Bro Outreach Coordinator National Center for Supercomputing Applications University of Illinois at Urbana-Champaign From tyler.schoenke at colorado.edu Thu Sep 4 11:37:46 2014 From: tyler.schoenke at colorado.edu (Tyler T. Schoenke) Date: Thu, 4 Sep 2014 12:37:46 -0600 Subject: [Bro] DDoS detection Message-ID: <0AA5D924DE90AF48BBD563CCD296B8FB010E4D4CE888@EXC2.ad.colorado.edu> Just wondering if anyone has a DDoS detection script for Bro 2.2+. I saw there was an older one for Bro 1.5, but was wondering if someone created an updated one using the new SumStats framework. Please let me know if there is an out-of-the-box way to detect DDoS that I am missing. Thanks, Tyler -- -- Tyler Schoenke Network Security Program Manager IT Security Office University of Colorado at Boulder -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140904/0b3d5ad8/attachment.html

From r.fulton at auckland.ac.nz Thu Sep 4 13:39:49 2014
From: r.fulton at auckland.ac.nz (Russell Fulton)
Date: Thu, 4 Sep 2014 20:39:49 +0000
Subject: [Bro] can not connect to git repository
Message-ID: <3B9E333D-B320-4E16-BE93-272F177FA922@auckland.ac.nz>

user at secmonprd03:~$ git clone --recursive git://git.bro.org/time-machine
Cloning into 'time-machine'...
fatal: unable to connect to git.bro.org:
git.bro.org[0: 192.150.187.43]: errno=Connection refused

I can ping it. Tried from two different machines here.

Russell

From r.fulton at auckland.ac.nz Thu Sep 4 13:52:05 2014
From: r.fulton at auckland.ac.nz (Russell Fulton)
Date: Thu, 4 Sep 2014 20:52:05 +0000
Subject: [Bro] can not connect to git repository
In-Reply-To: <3B9E333D-B320-4E16-BE93-272F177FA922@auckland.ac.nz>
References: <3B9E333D-B320-4E16-BE93-272F177FA922@auckland.ac.nz>
Message-ID: <82F5FB7D-DFB6-453F-9064-DA6888B7BB78@auckland.ac.nz>

Figured it out. Turned out the second machine I tested it from was not the one I thought it was. Problems with the firewall.

R

On 5/09/2014, at 8:39 am, Russell Fulton wrote:

> user at secmonprd03:~$ git clone --recursive git://git.bro.org/time-machine
> Cloning into 'time-machine'...
> fatal: unable to connect to git.bro.org:
> git.bro.org[0: 192.150.187.43]: errno=Connection refused
>
> I can ping it. Tried from two different machines here.
>
> Russell

From johanna at icir.org Thu Sep 4 14:02:36 2014
From: johanna at icir.org (Johanna Amann)
Date: Thu, 04 Sep 2014 14:02:36 -0700
Subject: [Bro] can not connect to git repository
In-Reply-To: <3B9E333D-B320-4E16-BE93-272F177FA922@auckland.ac.nz>
References: <3B9E333D-B320-4E16-BE93-272F177FA922@auckland.ac.nz>
Message-ID: <41E493F1-EC5B-4540-8987-9B3AFCED1630@icir.org>

Hi, this works fine from several machines all over the world to which I have access. Could it be some local network policy on the git port?
Johanna On 4 Sep 2014, at 13:39, Russell Fulton wrote: > user at secmonprd03:~$ git clone --recursive > git://git.bro.org/time-machine > Cloning into 'time-machine'... > fatal: unable to connect to git.bro.org: > git.bro.org[0: 192.150.187.43]: errno=Connection refused > > I can ping it. Tried from two different machines here. > > Russell > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From seth at icir.org Thu Sep 4 18:31:27 2014 From: seth at icir.org (Seth Hall) Date: Thu, 4 Sep 2014 21:31:27 -0400 Subject: [Bro] DDoS detection In-Reply-To: <0AA5D924DE90AF48BBD563CCD296B8FB010E4D4CE888@EXC2.ad.colorado.edu> References: <0AA5D924DE90AF48BBD563CCD296B8FB010E4D4CE888@EXC2.ad.colorado.edu> Message-ID: <9004354B-0BEA-4C47-9745-7223D48B5C02@icir.org> On Sep 4, 2014, at 2:37 PM, Tyler T. Schoenke wrote: > Just wondering if anyone has a DDoS detection script for Bro 2.2+. Justin, did you ever end up creating one? It should be pretty easy once you define what exactly you want to measure. .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc
Type: application/pgp-signature
Size: 495 bytes
Desc: Message signed with OpenPGP using GPGMail
Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140904/dcb48a03/attachment.bin

From fengqingleiyue at 163.com Fri Sep 5 04:27:58 2014
From: fengqingleiyue at 163.com (fql)
Date: Fri, 5 Sep 2014 19:27:58 +0800 (CST)
Subject: [Bro] How to use Bro to listen netflow from some ip and port
Message-ID: <5a378da3.e439.14845922469.Coremail.fengqingleiyue@163.com>

Hi everyone,

I have been using Bro these days to collect data for network behavior detection. Now I would like Bro to listen for NetFlow v5 and NetFlow v9 records from my newly bought network device, but I cannot find anything about this in the documentation at http://www.bro.org/documentation/index.html. I have tried for a long time without success. If you know how to set up Bro to listen for NetFlow (both v5 and v9), please tell me. Thank you very much.

Regards,
fql

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140905/c2231d2e/attachment.html

From yardley at illinois.edu Fri Sep 5 05:54:09 2014
From: yardley at illinois.edu (Yardley, Tim)
Date: Fri, 5 Sep 2014 12:54:09 +0000
Subject: [Bro] Web GUI's
Message-ID: 

All,

I'm curious what people use as a GUI for Bro in production. I'm aware that there are a few options out there, but I'd like to know what the consensus is on the preferred approach. Items I am interested in...

Bro administration
Bro dashboard
Bro log details/analysis
Bro policy definition
Etc.

Appreciate your thoughts!

Tim

--
Tim Yardley
Associate Director of Technology
Information Trust Institute, University of Illinois
yardley at illinois.edu

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140905/b0674f07/attachment.html From seth at icir.org Fri Sep 5 06:13:44 2014 From: seth at icir.org (Seth Hall) Date: Fri, 5 Sep 2014 09:13:44 -0400 Subject: [Bro] Web GUI's In-Reply-To: References: Message-ID: <608C57F8-5C3C-4BA0-A470-9496A877DA99@icir.org> On Sep 5, 2014, at 8:54 AM, Yardley, Tim wrote: > Bro administration > Bro dashboard > Bro log details/analysis > Bro policy definition There is no existing GUI for most of this. The only one that people have really approached is in log analysis and most people use splunk for those, although some people are starting to use ElasticSearch with Kibana for that. .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 495 bytes Desc: Message signed with OpenPGP using GPGMail Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140905/7a6616c1/attachment.bin From doug.burks at gmail.com Fri Sep 5 06:20:14 2014 From: doug.burks at gmail.com (Doug Burks) Date: Fri, 5 Sep 2014 09:20:14 -0400 Subject: [Bro] Web GUI's In-Reply-To: <608C57F8-5C3C-4BA0-A470-9496A877DA99@icir.org> References: <608C57F8-5C3C-4BA0-A470-9496A877DA99@icir.org> Message-ID: On Fri, Sep 5, 2014 at 9:13 AM, Seth Hall wrote: > > On Sep 5, 2014, at 8:54 AM, Yardley, Tim wrote: > >> Bro administration >> Bro dashboard >> Bro log details/analysis >> Bro policy definition > > There is no existing GUI for most of this. The only one that people have really approached is in log analysis and most people use splunk for those, although some people are starting to use ElasticSearch with Kibana for that. Many folks use ELSA as well. -- Doug Burks Need Security Onion Training or Commercial Support? 
http://securityonionsolutions.com

From slagell at illinois.edu Fri Sep 5 06:24:16 2014
From: slagell at illinois.edu (Slagell, Adam J)
Date: Fri, 5 Sep 2014 13:24:16 +0000
Subject: Re: [Bro] Web GUI's
In-Reply-To: <608C57F8-5C3C-4BA0-A470-9496A877DA99@icir.org>
References: <608C57F8-5C3C-4BA0-A470-9496A877DA99@icir.org>
Message-ID: <05DFF4FD-8576-4CCB-914B-68216475A6ED@illinois.edu>

On Sep 5, 2014, at 8:13 AM, Seth Hall wrote:

> The only one that people have really approached is in log analysis and most people use splunk for those, although some people are starting to use ElasticSearch with Kibana for that.

This is not an endorsement of anything, but we use Splunk, and there is:

https://github.com/grigorescu/Brownian
http://opensecgeek.blogspot.com/2013/02/nsm-with-bro-ids-part-4-bro-and-elsa.html

But these do nothing for administration of Bro. Though, as we daemonize broctl, I could see someone writing a nice web interface for that for Bro 2.4.

------
Adam J. Slagell
Chief Information Security Officer
Assistant Director, Cybersecurity Directorate
National Center for Supercomputing Applications
University of Illinois at Urbana-Champaign
www.slagell.info

"Under the Illinois Freedom of Information Act (FOIA), any written communication to or from University employees regarding University business is a public record and may be subject to public disclosure."
From seth at icir.org Fri Sep 5 06:34:16 2014 From: seth at icir.org (Seth Hall) Date: Fri, 5 Sep 2014 09:34:16 -0400 Subject: [Bro] Web GUI's In-Reply-To: <05DFF4FD-8576-4CCB-914B-68216475A6ED@illinois.edu> References: <608C57F8-5C3C-4BA0-A470-9496A877DA99@icir.org> <05DFF4FD-8576-4CCB-914B-68216475A6ED@illinois.edu> Message-ID: <3172FD84-25C8-445B-83B9-2F1EC753109E@icir.org> On Sep 5, 2014, at 9:24 AM, Slagell, Adam J wrote: > This is not an endorsement of anything, but we use Splunk and there is: > https://github.com/grigorescu/Brownian Don't forget: http://brownian.bro.org/ .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From hosom at battelle.org Fri Sep 5 06:37:10 2014 From: hosom at battelle.org (Hosom, Stephen M) Date: Fri, 5 Sep 2014 13:37:10 +0000 Subject: [Bro] Web GUI's In-Reply-To: <05DFF4FD-8576-4CCB-914B-68216475A6ED@illinois.edu> References: <608C57F8-5C3C-4BA0-A470-9496A877DA99@icir.org> <05DFF4FD-8576-4CCB-914B-68216475A6ED@illinois.edu> Message-ID: Depending on what 'administration' consists of, some users have written Web UIs to perform some tasks. For example, we have an in-house Django app that generates intelligence files. Justin has a Django app that generates generic tables for Bro. His would include the intel file use case, but was not as user friendly as I needed. -----Original Message----- From: bro-bounces at bro.org [mailto:bro-bounces at bro.org] On Behalf Of Slagell, Adam J Sent: Friday, September 05, 2014 9:24 AM To: Seth Hall Cc: bro at bro.org Subject: Re: [Bro] Web GUI's On Sep 5, 2014, at 8:13 AM, Seth Hall wrote: > The only one that people have really approached is in log analysis and most people use splunk for those, although some people are starting to use ElasticSearch with Kibana for that. 
This is not an endorsement of anything, but we use Splunk and there is: https://github.com/grigorescu/Brownian http://opensecgeek.blogspot.com/2013/02/nsm-with-bro-ids-part-4-bro-and-elsa.html But these do nothing for administration of Bro. Though I could see as we daemonize broctl someone writing a nice web interface for that for Bro 2.4. ------ Adam J. Slagell Chief Information Security Officer Assistant Director, Cybersecurity Directorate National Center for Supercomputing Applications University of Illinois at Urbana-Champaign www.slagell.info "Under the Illinois Freedom of Information Act (FOIA), any written communication to or from University employees regarding University business is a public record and may be subject to public disclosure." _______________________________________________ Bro mailing list bro at bro-ids.org http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From jdopheid at illinois.edu Fri Sep 5 12:49:46 2014 From: jdopheid at illinois.edu (Dopheide, Jeannette M) Date: Fri, 5 Sep 2014 19:49:46 +0000 Subject: [Bro] Bro Blog: Announcing Try.bro Message-ID: We've officially launched try.bro.org! Blog post here: http://blog.bro.org/2014/09/announcing-trybro.html ------ Jeannette M. Dopheide Bro Outreach Coordinator National Center for Supercomputing Applications University of Illinois at Urbana-Champaign From tyler.schoenke at colorado.edu Fri Sep 5 14:34:28 2014 From: tyler.schoenke at colorado.edu (Tyler T. Schoenke) Date: Fri, 5 Sep 2014 15:34:28 -0600 Subject: [Bro] DDoS detection In-Reply-To: <48FAFDDA-BAE4-4B11-86D2-8F4F288062CA@illinois.edu> References: <0AA5D924DE90AF48BBD563CCD296B8FB010E4D4CE888@EXC2.ad.colorado.edu> <9004354B-0BEA-4C47-9745-7223D48B5C02@icir.org> <48FAFDDA-BAE4-4B11-86D2-8F4F288062CA@illinois.edu> Message-ID: <0AA5D924DE90AF48BBD563CCD296B8FB010E4D4CEA27@EXC2.ad.colorado.edu> Thanks guys! Justin, if I'm reading this correctly, the script will only look at ports specified in dos_ports. 
Is there a way to match all ports? Are there dangers in doing that? Tyler -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140905/bdd4ec17/attachment.html From dave at dechellis.com Sun Sep 7 13:59:27 2014 From: dave at dechellis.com (Dave DeChellis) Date: Sun, 7 Sep 2014 15:59:27 -0500 (EST) Subject: [Bro] Question on file hashes and cyrmu db In-Reply-To: <1796319538.1573830.1408065982756.open-xchange@bosoxweb03.eigbox.net> References: <1796319538.1573830.1408065982756.open-xchange@bosoxweb03.eigbox.net> Message-ID: <1581448715.4171794.1410123567304.open-xchange@bosoxweb01.eigbox.net> Thanks to a few of you for helping me offline for this, I was sidetracked on other projects. I'm noticing some inconsistencies with Bro and the Cymru Hash Service on my Bro box (2.3) 1. When I download some files I expect to match through the service, it fails but it matches virustotal when I enter the MD5/SHA1 hash on their site. 2. When I do get some matches from Cymru, I don't get the entry in notice.log via the detect bro script. I did change the detect-MHR.bro and made the following changes: changed the percent down to 1 (just to test) and added the .zip mime extension I am running with checksums disabled and I've experienced this on a few bro boxes including a virtual I have loaded. For others who are doing dynamic analysis of files for malware/viruses, is this the best approach? Is there anything else I could try before I dig deeper into the code? I've verified it's nothing stupid like DNS queries failing, what I haven't done is started to dump the SHA256 to see if I have better luck with this hash value. Also, the script seems to work with pcap files that people have provided so the network could be the issue but I don't see any signs of packet loss, frame errors or other data. 
Thanks again Dave > On August 14, 2014 at 8:26 PM Dave DeChellis wrote: > > Hello, > > I'm helping to customize an existing deployment of Bro and while I think I'm > collecting all the file info correctly, I'm not hitting any matches when I run > the hashes against cymru's database. I was wondering if someone could > confirm that none of these hashes match either. I've run them against the > DNS,Whois and web queries and had no luck. I work at a very open place and I > find it almost impossible that not one of the 1.7M hashes match. In the > event there are no matches, could someone point me to some sample pcap files > so I can test my scripts? > > If someone wanted to help cross correlate my findings, I could send offline a > .gz of 1.7M hashes from a few hours of collection. > > > Thanks again for any help or assistance > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140907/62548631/attachment.html From doug.burks at gmail.com Sun Sep 7 14:40:59 2014 From: doug.burks at gmail.com (Doug Burks) Date: Sun, 7 Sep 2014 17:40:59 -0400 Subject: [Bro] Question on file hashes and cyrmu db In-Reply-To: <1581448715.4171794.1410123567304.open-xchange@bosoxweb01.eigbox.net> References: <1796319538.1573830.1408065982756.open-xchange@bosoxweb03.eigbox.net> <1581448715.4171794.1410123567304.open-xchange@bosoxweb01.eigbox.net> Message-ID: On Sunday, September 7, 2014, Dave DeChellis wrote: > > Also, the script seems to work with pcap files that people have provided > so the network could be the issue but I don't see any signs of packet loss, > frame errors or other data. > > Hi Dave, Is it possible that NIC offloading functions are a factor? http://blog.securityonion.net/2011/10/when-is-full-packet-capture-not-full.html -- Doug Burks Need Security Onion Training or Commercial Support? 
http://securityonionsolutions.com -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140907/f1a043bb/attachment.html From dave at dechellis.com Sun Sep 7 15:34:18 2014 From: dave at dechellis.com (Dave DeChellis) Date: Sun, 07 Sep 2014 18:34:18 -0400 Subject: [Bro] Question on file hashes and cyrmu db Message-ID: <20140907223435.974EB2C4055@rock.ICSI.Berkeley.EDU> An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140907/c12dacbd/attachment.html From seth at icir.org Mon Sep 8 06:28:06 2014 From: seth at icir.org (Seth Hall) Date: Mon, 8 Sep 2014 09:28:06 -0400 Subject: [Bro] Question on file hashes and cyrmu db In-Reply-To: <1581448715.4171794.1410123567304.open-xchange@bosoxweb01.eigbox.net> References: <1796319538.1573830.1408065982756.open-xchange@bosoxweb03.eigbox.net> <1581448715.4171794.1410123567304.open-xchange@bosoxweb01.eigbox.net> Message-ID: <745FA782-ED17-44E0-9904-F480460E198B@icir.org> On Sep 7, 2014, at 4:59 PM, Dave DeChellis wrote: > 2. When I do get some matches from Cymru, I don't get the entry in notice.log via the detect bro script. How do you know you get a match from Team Cymru if it doesn't show up in your notice.log? > I did change the detect-MHR.bro and made the following changes: changed the percent down to 1 (just to test) and added the .zip mime extension You should really avoid making changes to that file. 
Instead you should have done this in local.bro (or elsewhere, just a script you control): redef TeamCymruMalwareHashRegistry::match_file_types += /application\/zip/; redef TeamCymruMalwareHashRegistry::notice_threshold = 1; .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From dhoelzer at sans.org Mon Sep 8 06:36:31 2014 From: dhoelzer at sans.org (David Hoelzer) Date: Mon, 8 Sep 2014 09:36:31 -0400 Subject: [Bro] Finding SYNs... Message-ID: Good morning! I'm curious as to whether or not an invalid checksum as a result of offloading would prevent the tcp_SYN_packet event from firing? I took a cursory look but thought it might be faster to just ask. :) Thanks! From jsiwek at illinois.edu Mon Sep 8 07:16:38 2014 From: jsiwek at illinois.edu (Siwek, Jon) Date: Mon, 8 Sep 2014 14:16:38 +0000 Subject: [Bro] Finding SYNs... In-Reply-To: References: Message-ID: <8670A496-1C53-4F95-85D7-C3955F3BF5A8@illinois.edu> On Sep 8, 2014, at 8:36 AM, David Hoelzer wrote: > I'm curious as to whether or not an invalid checksum as a result of offloading would prevent the tcp_SYN_packet event from firing? If you mean "connection_SYN_packet", the default behavior is to not generate that event for packets w/ invalid checksums. - Jon From dhoelzer at sans.org Mon Sep 8 07:17:37 2014 From: dhoelzer at sans.org (David Hoelzer) Date: Mon, 8 Sep 2014 10:17:37 -0400 Subject: [Bro] Finding SYNs... In-Reply-To: <8670A496-1C53-4F95-85D7-C3955F3BF5A8@illinois.edu> References: <8670A496-1C53-4F95-85D7-C3955F3BF5A8@illinois.edu> Message-ID: <2FE1A6D2-821D-49C1-9BE7-C98834A1832D@sans.org> Oops! Sorry, yes. Thanks! I notice that you say "default behavior". Is there an enum or constant that I can adjust to change this behavior? 
On Sep 8, 2014, at 10:16 AM, Siwek, Jon wrote: > > On Sep 8, 2014, at 8:36 AM, David Hoelzer wrote: > >> I'm curious as to whether or not an invalid checksum as a result of offloading would prevent the tcp_SYN_packet event from firing? > > If you mean "connection_SYN_packet", the default behavior is to not generate that event for packets w/ invalid checksums. > > - Jon From jsiwek at illinois.edu Mon Sep 8 07:24:05 2014 From: jsiwek at illinois.edu (Siwek, Jon) Date: Mon, 8 Sep 2014 14:24:05 +0000 Subject: [Bro] Finding SYNs... In-Reply-To: <2FE1A6D2-821D-49C1-9BE7-C98834A1832D@sans.org> References: <8670A496-1C53-4F95-85D7-C3955F3BF5A8@illinois.edu> <2FE1A6D2-821D-49C1-9BE7-C98834A1832D@sans.org> Message-ID: <8BD3C425-11BE-47CA-860E-A5835C9437EB@illinois.edu> On Sep 8, 2014, at 9:17 AM, David Hoelzer wrote: > I notice that you say "default behavior". Is there an enum or constant that I can adjust to change this behavior? Not specifically for this event, but you can generally ignore checksum errors via "redef ignore_checksums=T;" or using the -C command line option. - Jon From dhoelzer at sans.org Mon Sep 8 07:48:07 2014 From: dhoelzer at sans.org (David Hoelzer) Date: Mon, 8 Sep 2014 10:48:07 -0400 Subject: [Bro] Finding SYNs... In-Reply-To: <8BD3C425-11BE-47CA-860E-A5835C9437EB@illinois.edu> References: <8670A496-1C53-4F95-85D7-C3955F3BF5A8@illinois.edu> <2FE1A6D2-821D-49C1-9BE7-C98834A1832D@sans.org> <8BD3C425-11BE-47CA-860E-A5835C9437EB@illinois.edu> Message-ID: Thanks! On Sep 8, 2014, at 10:24 AM, Siwek, Jon wrote: > > On Sep 8, 2014, at 9:17 AM, David Hoelzer wrote: > >> I notice that you say "default behavior". Is there an enum or constant that I can adjust to change this behavior? > > Not specifically for this event, but you can generally ignore checksum errors via "redef ignore_checksums=T;" or using the -C command line option. 
> > - Jon From brianallen at wustl.edu Mon Sep 8 11:48:51 2014 From: brianallen at wustl.edu (Allen, Brian) Date: Mon, 8 Sep 2014 18:48:51 +0000 Subject: [Bro] bro-column Message-ID: Hi, I was at Archcon last Friday and Liam Randall gave a great day-long training session. I was wondering where I could find the command bro-column he mentioned? We are going to upgrade to bro 2.3 soon (I assume it will be included there?) but in the meantime I'd like to have that tool available. Thanks, Brian Allen Information Security Manager Washington University -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140908/69f21c71/attachment.html From liam.randall at gmail.com Mon Sep 8 11:53:42 2014 From: liam.randall at gmail.com (Liam Randall) Date: Mon, 8 Sep 2014 19:53:42 +0100 Subject: [Bro] bro-column In-Reply-To: References: Message-ID: Thanks Brian. It's set up in the .bashrc on the training vm: alias bro-column="sed \"s/fields.//;s/types.//\" | column -s $'\t' -t" :) Liam On Mon, Sep 8, 2014 at 7:48 PM, Allen, Brian wrote: > Hi, I was at Archcon last Friday and Liam Randall gave a great day-long > training session. I was wondering where I could find the command > bro-column he mentioned? We are going to upgrade to bro 2.3 soon (I assume > it will be included there?) but in the meantime I'd like to have that tool > available. > > Thanks, > Brian Allen > Information Security Manager > Washington University > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... 
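For anyone not on the training VM, the alias is just a sed/column pipeline and can be tried directly. The log lines below are fabricated for illustration; in practice you would pipe in a real conn.log:

```shell
# "s/fields.//" and "s/types.//" remove the "#fields<TAB>"/"#types<TAB>"
# header prefixes (the "." matches the tab that follows), so the header
# names line up with their columns; column(1) then aligns on tabs.
printf '#fields\tts\tid.orig_h\tid.resp_h\n#types\ttime\taddr\taddr\n1410123567.0\t10.0.0.5\t192.0.2.1\n' \
  | sed "s/fields.//;s/types.//" \
  | column -s $'\t' -t
```

Note that `column -s` is the util-linux behavior; BSD/macOS `column` differs slightly.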
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140908/fc52baf4/attachment.html From dave at dechellis.com Mon Sep 8 15:09:34 2014 From: dave at dechellis.com (Dave DeChellis) Date: Mon, 8 Sep 2014 17:09:34 -0500 (EST) Subject: [Bro] Question on file hashes and cyrmu db In-Reply-To: <745FA782-ED17-44E0-9904-F480460E198B@icir.org> References: <1796319538.1573830.1408065982756.open-xchange@bosoxweb03.eigbox.net> <1581448715.4171794.1410123567304.open-xchange@bosoxweb01.eigbox.net> <745FA782-ED17-44E0-9904-F480460E198B@icir.org> Message-ID: <1299983301.4308938.1410214174213.open-xchange@bosoxweb01.eigbox.net> > On September 8, 2014 at 8:28 AM Seth Hall wrote: > > > > On Sep 7, 2014, at 4:59 PM, Dave DeChellis wrote: > > > 2. When I do get some matches from Cymru, I don't get the entry in > > notice.log via the detect bro script. > > How do you know you get a match from Team Cymru if it doesn't show up in your > notice.log? I manually dumped the MD5/SHA1 hashes from files.log and imported it into their web portal. For the ones that matched, I confirmed that the DNS query returned a match also. > > > I did change the detect-MHR.bro and made the following changes: changed the > > percent down to 1 (just to test) and added the .zip mime extension > > You should really avoid making changes to that file. Instead you should have > done this in local.bro (or elsewhere, just a script you control): > > redef TeamCymruMalwareHashRegistry::match_file_types += /application\/zip/; > redef TeamCymruMalwareHashRegistry::notice_threshold = 1; > Thanks again Dave > .Seth > > -- > Seth Hall > International Computer Science Institute > (Bro) because everyone has a network > http://www.bro.org/ > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140908/501f578c/attachment.html From jdopheid at illinois.edu Mon Sep 8 15:57:38 2014 From: jdopheid at illinois.edu (Dopheide, Jeannette M) Date: Mon, 8 Sep 2014 22:57:38 +0000 Subject: [Bro] Bro v2.3.1 has been released Message-ID: <7EFD7D614A2BB84ABEA19B2CEDD24658018D6F65@CITESMBX5.ad.uillinois.edu> Bro v2.3.1 has been released. This release addresses a potential DOS vector using specially crafted DNS packets. It also fixes a bug in the OCSP validation code that could lead to crashes as well as a memory leak. The source distribution and binary packages are available on our downloads page. See CHANGES for the full commit list. Since this release addresses a bug fix, we encourage users to review and install at their earliest convenience. Feedback is encouraged and should be sent to the Bro mailing list. The Bro Team -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140908/3be59dbb/attachment.html From michalpurzynski1 at gmail.com Mon Sep 8 16:50:55 2014 From: michalpurzynski1 at gmail.com (=?UTF-8?B?TWljaGHFgiBQdXJ6ecWEc2tp?=) Date: Mon, 8 Sep 2014 16:50:55 -0700 Subject: [Bro] Bro and amplifications attacks Message-ID: Hello. Let's say I wanted to detect an amplification attack using DNS/SNMP/NTP. Kind of just in case the edge filters and careful configuration and scanning for vulnerabilities didn't catch everything. Bro has analyzers for DNS/SNMP/NTP. A few months ago it was monlist in NTP, someone might come up with something else another day. I think it might be a good use of a SumStat framework. The key would be a client who has number of packets towards him counted and expired early and frequently. I see a problem here - a large number of keys. Also, I don't mind when a single client sends me 100 requests and gets 150 packets of answers, but I do if he sends 1 packet and gets 10 in return every time. 
Quite a field for false positives here. Any ideas how to correlate it? -- Michał Purzyński -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140908/ea1771c0/attachment.html From vlad at grigorescu.org Tue Sep 9 09:41:20 2014 From: vlad at grigorescu.org (Vlad Grigorescu) Date: Tue, 9 Sep 2014 11:41:20 -0500 Subject: [Bro] SMB In-Reply-To: <6BE58C92-3BFA-4F66-B323-65A9EDABB135@geekempire.com> References: <4BD95AC5-2CE9-47A7-9520-B141D662C053@geekempire.com> <5406007B.9080309@gmail.com> <6BE58C92-3BFA-4F66-B323-65A9EDABB135@geekempire.com> Message-ID: There are no SMB policy scripts yet. Just the base scripts to generate the various SMB logs. --Vlad On Mon, Sep 8, 2014 at 5:36 PM, Mike Reeves wrote: > Are there any Bro scripts for SMB or is this something I need to figure > out on my own? > > > > On Sep 2, 2014, at 9:33 PM, Vlad Grigorescu wrote: > > On Tue, Sep 2, 2014 at 12:38 PM, Michal Purzynski < > michalpurzynski1 at gmail.com> wrote: > >> - do we have ways to detect other similar protocols? NFS, I'm looking at >> you. And MySQL. And Postgres. > > > I'm hoping you mean similar from a functionality standpoint, and not > similar based on what's on the wire... :-) > > There was an old NFS analyzer: > https://github.com/bro/bro/blob/v2.1/src/NFS.cc Apparently it didn't > work all that well, but it might be a jumping off point. > > There's a MySQL analyzer that's currently in beta in topic/vladg/smb. I > don't know of anyone working on Postgres right now. > > --Vlad > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140909/e052b466/attachment.html From EGAMARRO at depaul.edu Tue Sep 9 15:05:01 2014 From: EGAMARRO at depaul.edu (Gamarro, Estuardo) Date: Tue, 9 Sep 2014 22:05:01 +0000 Subject: [Bro] Napatech's Libpcap on Bro 2.3.1 Message-ID: <2E370166E445E24EB69118AADE8849C9A3E95914@xmbprd02-dft.dpu.depaul.edu> Hello, I'm a new Bro user, and for the most part it has been easy to install/run Bro with Intel cards, but I hit a bump with Napatech libraries. Compiling Bro {2.3.0, 2.3.1} with Napatech's libraries [1] completes without errors, but attempting to run bro or capture packets from the command line results in full packet loss. No obvious errors, it just runs without processing packets. Oddly enough, Bro seemed to know how many packets it dropped... Disabling nonblocking in PktSrc.cc [2] allowed Bro to finally capture packets, and as far as I can tell seems to work OK. Has anybody else experienced this problem or found a better solution? E.J. Gamarro [1] Libpcap version 1.4 NT Drivers 3gd 15.2 (Linux) [2] 528c528 < if ( pcap_setnonblock(pd, 1, tmp_errbuf) < 0 ) --- > if ( pcap_setnonblock(pd, 0, tmp_errbuf) < 0 ) -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5969 bytes Desc: not available Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140909/1b60aac5/attachment.bin From grutz at jingojango.net Tue Sep 9 17:21:21 2014 From: grutz at jingojango.net (Kurt Grutzmacher) Date: Tue, 9 Sep 2014 17:21:21 -0700 Subject: [Bro] Napatech's Libpcap on Bro 2.3.1 In-Reply-To: <2E370166E445E24EB69118AADE8849C9A3E95914@xmbprd02-dft.dpu.depaul.edu> References: <2E370166E445E24EB69118AADE8849C9A3E95914@xmbprd02-dft.dpu.depaul.edu> Message-ID: We have had good success with the nPulse patch applied to 2.1 for direct access but have not moved to the 2.3 branches yet. 
https://github.com/nPulse/bro/tree/release/2.1-napatech -- Kurt Grutzmacher -=- grutz at jingojango.net On Tue, Sep 9, 2014 at 3:05 PM, Gamarro, Estuardo wrote: > Hello, > > I'm a new Bro user, and for the most part it has been easy to > install/run Bro with Intel cards, but I hit a bump with Napatech libraries. > Compiling Bro {2.3.0, 2.3.1} with Napatech's libraries [1] completes > without errors, but attempting to run bro or captures packets from command > line results in full packet loss. No obvious errors, it just runs without > processing packets. Oddly enough, Bro seemed to know how many packets it > dropped... > > Disabling nonblocking in PktSrc.cc [2] allowed Bro to finally > capture packets, and as far as I can tell seems to work OK. Has anybody > else experienced this problem or found a better solution? > > > E.J. Gamarro > > > [1] > Libpcap version 1.4 > NT Drivers 3gd 15.2 (Linux) > > [2] > 528c528 > < if ( pcap_setnonblock(pd, 1, tmp_errbuf) < 0 ) > --- > > if ( pcap_setnonblock(pd, 0, tmp_errbuf) < 0 ) > > > > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140909/195cd708/attachment.html From rotsted at reservoir.com Wed Sep 10 12:14:24 2014 From: rotsted at reservoir.com (Robert Rotsted) Date: Wed, 10 Sep 2014 12:14:24 -0700 Subject: [Bro] Exfil Framework Released Message-ID: Hi all, As announced at BroCon, Reservoir Labs just released the Exfil Framework on Github. The Exfil Framework is a suite of Bro scripts that detect file uploads in TCP connections. The Exfil Framework can detect file uploads in most TCP sessions including sessions that have encrypted payloads (SCP,SFTP,HTTPS). 
The scripts are located at: https://github.com/reservoirlabs/bro-scripts/tree/master/exfil-detection-framework Feel free to reach out to me if you have any questions, comments or suggestions for improvement. Best, Bob -- Bob Rotsted Senior Engineer Reservoir Labs, Inc. From jlay at slave-tothe-box.net Wed Sep 10 12:39:32 2014 From: jlay at slave-tothe-box.net (James Lay) Date: Wed, 10 Sep 2014 13:39:32 -0600 Subject: [Bro] Exfil Framework Released In-Reply-To: References: Message-ID: <29f4cfc6c454e04eb02e07d11ba94531@localhost> On 2014-09-10 13:14, Robert Rotsted wrote: > Hi all, > > As announced at BroCon, Reservoir Labs just released the Exfil > Framework on Github. > > The Exfil Framework is a suite of Bro scripts that detect file > uploads > in TCP connections. The Exfil Framework can detect file uploads in > most TCP sessions including sessions that have encrypted payloads > (SCP,SFTP,HTTPS). > > The scripts are located at: > > https://github.com/reservoirlabs/bro-scripts/tree/master/exfil-detection-framework > > Feel free to reach out to me if you have any questions, comments or > suggestions for improvement. > > Best, > > Bob Good stuff...thanks Bob. James From bkellogg at dresser-rand.com Fri Sep 12 10:18:28 2014 From: bkellogg at dresser-rand.com (Kellogg, Brian D (OLN)) Date: Fri, 12 Sep 2014 17:18:28 +0000 Subject: [Bro] Exfil Framework Released In-Reply-To: References: Message-ID: Thank you for this, very much. I planned on writing something similar and have not had the time. Glad I didn't as yours is better than mine would have been to start with. These are some thoughts I have and plan to include in your scripts on my NSMs at some point. 1. A global ignore list of IPs for sources that are used for file uploads. Export { ... global ignored_sources_conn: set[subnet] = [1.1.1.1/32, 2.3.4.0/24] &redef; ... } event connection_established (c: connection) { ... if (c$id$orig_h in ignored_sources_conn ) return; ... } 2. 
Another global variable under which the estimated file size does not raise a notice. 3. Another global variable that tracks how many uploads any given source sends in X amount of time above which a notice is raised no matter how large the uploaded files were. I do the above in my rudimentary exfil script that simply looks at total upload size on connection end and have found it very useful. I've been running your scripts on two of our busiest Inet connections for the past couple hours and have seen no appreciable uptick in cpu or memory usage on Bro 2.3. I have it set to watch all RFC1918 connections to the Inet. Thanks again, Brian -----Original Message----- From: bro-bounces at bro.org [mailto:bro-bounces at bro.org] On Behalf Of Robert Rotsted Sent: Wednesday, September 10, 2014 3:14 PM To: Bro Mailing List Subject: [Bro] Exfil Framework Released Hi all, As announced at BroCon, Reservoir Labs just released the Exfil Framework on Github. The Exfil Framework is a suite of Bro scripts that detect file uploads in TCP connections. The Exfil Framework can detect file uploads in most TCP sessions including sessions that have encrypted payloads (SCP,SFTP,HTTPS). The scripts are located at: https://github.com/reservoirlabs/bro-scripts/tree/master/exfil-detection-framework Feel free to reach out to me if you have any questions, comments or suggestions for improvement. Best, Bob -- Bob Rotsted Senior Engineer Reservoir Labs, Inc. _______________________________________________ Bro mailing list bro at bro-ids.org http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From bkellogg at dresser-rand.com Fri Sep 12 11:05:38 2014 From: bkellogg at dresser-rand.com (Kellogg, Brian D (OLN)) Date: Fri, 12 Sep 2014 18:05:38 +0000 Subject: [Bro] Exfil Framework Released In-Reply-To: References: Message-ID: Here's some quick additions to the app-exfil-conn.bro script. ##! 
Watch all TCP,UDP,ICMP flows for Data Exfil module Exfil; export { ## Defines which subnets are monitored for data exfiltration global watched_subnets_conn: set[subnet] = [10.0.0.0/8] &redef; ## Defines which subnet/host sources to ignore global ignored_orig_conn: set[subnet] = [10.1.1.1/32, 10.3.4.0/24] &redef; ## Defines which subnet/host destinations to ignore global ignored_resp_conn: set[subnet] = [110.1.143.77/32, 9.3.4.0/24] &redef; ## Defines whether connections with local destinations should be monitored for data exfiltration global ignore_local_dest_conn: bool = T &redef; ## Defines the thresholds and polling interval for the exfil framework. See main.bro for more details. global settings_conn: Settings &redef; } event connection_established (c: connection) { if (ignore_local_dest_conn == T && Site::is_local_addr(c$id$resp_h) == T) return; if (c$id$orig_h !in watched_subnets_conn ) return; if (c$id$orig_h in ignored_orig_conn ) return; if (c$id$resp_h in ignored_resp_conn ) return; Exfil::watch_connection(c , settings_conn); } -----Original Message----- From: bro-bounces at bro.org [mailto:bro-bounces at bro.org] On Behalf Of Robert Rotsted Sent: Wednesday, September 10, 2014 3:14 PM To: Bro Mailing List Subject: [Bro] Exfil Framework Released Hi all, As announced at BroCon, Reservoir Labs just released the Exfil Framework on Github. The Exfil Framework is a suite of Bro scripts that detect file uploads in TCP connections. The Exfil Framework can detect file uploads in most TCP sessions including sessions that have encrypted payloads (SCP,SFTP,HTTPS). The scripts are located at: https://github.com/reservoirlabs/bro-scripts/tree/master/exfil-detection-framework Feel free to reach out to me if you have any questions, comments or suggestions for improvement. Best, Bob -- Bob Rotsted Senior Engineer Reservoir Labs, Inc. 
_______________________________________________ Bro mailing list bro at bro-ids.org http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From vat at mnworks.dk Mon Sep 15 04:02:20 2014 From: vat at mnworks.dk (Victor-Alexandru Truica) Date: Mon, 15 Sep 2014 13:02:20 +0200 Subject: [Bro] broctl reading from pcap files Message-ID: <5416C73C.4090207@mnworks.dk> Hello, I'm using BRO in Security Onion and I need to test the traffic captured from a deployment in a test environment. Instead of monitoring an interface, i want to read from a directory containing pcap files (and/or a large pcap file). SO uses broctl in its scripts to start/manage BRO but I don't know if there is an argument to add in any of broctl config files (node.cfg, broctl.cfg) that will make BRO read from PCAP files. I've also looked into BROs cli and if I were to use this it would be a problem because of the way logs are being stored in SO - in timestamped folders and a "current" folder. My questions are: - can broctl read from PCAP files? - can i use BROs cli to save the log files in a SO fashion (timestamped directories and others) without additional bash? Thanks! -- Victor-Alexandru Truica Product Architect MN Works ApS - www.mnworks.dk Telephone (DK) : +45 50 36 93 72 Blog/Website : http://truica-victor.com E-Mail : vat at mnworks.dk -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140915/6cebc01d/attachment.html From seth at icir.org Mon Sep 15 05:44:44 2014 From: seth at icir.org (Seth Hall) Date: Mon, 15 Sep 2014 08:44:44 -0400 Subject: [Bro] broctl reading from pcap files In-Reply-To: <5416C73C.4090207@mnworks.dk> References: <5416C73C.4090207@mnworks.dk> Message-ID: <565C26E1-9E46-43FC-8F63-4DD78E6AE58C@icir.org> On Sep 15, 2014, at 7:02 AM, Victor-Alexandru Truica wrote: > - can broctl read from PCAP files? Yes, look into the "process" command in broctl. 
> - can i use BROs cli to save the log files in a SO fashion (timestamped directories and others) without additional bash? There was a question like that on the mailing list recently. http://mailman.icsi.berkeley.edu/pipermail/bro/2014-August/007346.html The gist is that you need to set a rotation interval and then provide a program which it will call to do the actual log rotation. .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From lists at g-clef.net Mon Sep 15 06:22:13 2014 From: lists at g-clef.net (Aaron Gee-Clough) Date: Mon, 15 Sep 2014 09:22:13 -0400 Subject: [Bro] Removing IP from Intel Framework? Message-ID: <5416E805.4080101@g-clef.net> All, I'm working with the intel framework and enjoying it, but have hit a bit of a problem: I can successfully add new IPs to watchlists in the framework, but I can't remove them without restarting bro. I'd like to be able to do this to handle false-positives, for example. The fact that new watchlist entries are flagged says to me that I'm doing the "create the file then move it into place" bit properly...I don't know what's up with removing entries, though. I'm running bro 2.3 (the 06/16/14 release), and am invoking the intel framework like this: @load frameworks/intel/seen @load frameworks/intel/do_notice redef Intel::read_files += { "/opt/bro/etc/internalList.dat", }; internalList.dat looks like: #fields indicator indicator_type meta.source meta.url meta.do_notice meta.if_in targetDomain.blah Intel::DOMAIN internal_monitoring https://internalsite/campaign?arg1=text&arg2=some%20more%20text T - Any ideas? Thanks. Aaron From seth at icir.org Mon Sep 15 06:24:23 2014 From: seth at icir.org (Seth Hall) Date: Mon, 15 Sep 2014 09:24:23 -0400 Subject: [Bro] Bro and amplifications attacks In-Reply-To: References: Message-ID: <2DBB9D95-2259-4EAD-986E-D1A047D3D688@icir.org> On Sep 8, 2014, at 7:50 PM, Michał 
Purzyński wrote: > Let's say I wanted to detect an amplification attack using DNS/SNMP/NTP. Kind of just in case the edge filters and careful configuration and scanning for vulnerabilities didn't catch everything. Generically detecting amplification attacks seems too broadly scoped to me. I'm not sure that I'd even know how to approach that in a way that would work well. Do you have any more concrete ideas? > I think it might be a good use of the SumStats framework. I know you've been playing with SumStats a bit recently, have you tried to tackle amplification attack detection? > The key would be a client who has the number of packets towards him counted and expired early and frequently. I see a problem here - a large number of keys. Regarding the large number of keys, that's already happening with scan.bro. Unfortunately I think it's mostly unavoidable but doesn't seem to cause too much trouble in practice. Although I can see it causing problems in certain circumstances. .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From luke at geekempire.com Mon Sep 15 07:00:29 2014 From: luke at geekempire.com (Mike Reeves) Date: Mon, 15 Sep 2014 10:00:29 -0400 Subject: [Bro] Removing IP from Intel Framework? In-Reply-To: <5416E805.4080101@g-clef.net> References: <5416E805.4080101@g-clef.net> Message-ID: Unfortunately there is no way to remove them without restarting. On Monday, September 15, 2014, Aaron Gee-Clough wrote: > > All, > > I'm working with the intel framework and enjoying it, but have hit a bit > of a problem: I can successfully add new IPs to watchlists in the > framework, but I can't remove them without restarting bro. I'd like to > be able to do this to handle false-positives, for example. > > The fact that new watchlist entries are flagged says to me that I'm > doing the "create the file then move it into place" bit properly...I > don't know what's up with removing entries, though.
> > I'm running bro 2.3 (the 06/16/14 release), and am invoking the intel > framework like this: > > @load frameworks/intel/seen > @load frameworks/intel/do_notice > > redef Intel::read_files += { > "/opt/bro/etc/internalList.dat", > }; > > internalList.dat looks like: > > #fields indicator indicator_type meta.source meta.url > meta.do_notice > meta.if_in > targetDomain.blah Intel::DOMAIN internal_monitoring > https://internalsite/campaign?arg1=text&arg2=some%20more%20text T - > > > Any ideas? > > Thanks. > > Aaron > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140915/96bf38cd/attachment.html From michalpurzynski1 at gmail.com Mon Sep 15 09:17:57 2014 From: michalpurzynski1 at gmail.com (Michał Purzyński) Date: Mon, 15 Sep 2014 18:17:57 +0200 Subject: [Bro] Bro and amplification attacks In-Reply-To: <2DBB9D95-2259-4EAD-986E-D1A047D3D688@icir.org> References: <2DBB9D95-2259-4EAD-986E-D1A047D3D688@icir.org> Message-ID: On Mon, Sep 15, 2014 at 3:24 PM, Seth Hall wrote: > > On Sep 8, 2014, at 7:50 PM, Michał Purzyński > wrote: > > > Let's say I wanted to detect an amplification attack using DNS/SNMP/NTP. > Kind of just in case the edge filters and careful configuration and > scanning for vulnerabilities didn't catch everything. > > Generically detecting amplification attacks seems too broadly scoped to > me. I'm not sure that I'd even know how to approach that in a way that would > work well. Do you have any more concrete ideas? > > A ratio of data sent by me in response to a request I've got from a client. If one packet generates 10 and it's UDP, we might have a problem. That requires tracing a flow in progress, and that's not something I'm brave enough to do.
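[Archive editor's note: the byte-ratio idea above can be approximated without tracing flows in progress, since Bro's connection record carries per-endpoint payload byte counts at teardown. The following is an untested, hypothetical sketch - the threshold and the print output are illustrative, not from the thread:

```
# Hypothetical sketch: flag UDP flows whose response volume is many times
# the request volume, using endpoint sizes available at connection teardown.
const ampl_factor = 10 &redef;

event connection_state_remove(c: connection)
    {
    if ( get_port_transport_proto(c$id$resp_p) != udp )
        return;

    # c$orig$size and c$resp$size are payload bytes seen from each endpoint.
    if ( c$orig$size > 0 && c$resp$size >= ampl_factor * c$orig$size )
        print fmt("possible amplification: %s -> %s (%d req / %d resp bytes)",
                  c$id$resp_h, c$id$orig_h, c$orig$size, c$resp$size);
    }
```

The trade-off is latency: connection_state_remove only fires after the UDP flow times out, so this would detect rather than interrupt an ongoing attack.]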
> > I think it might be a good use of the SumStats framework. > > I know you've been playing with SumStats a bit recently, have you tried to > tackle amplification attack detection? > > I was thinking about the UDP events but they are per packet, and frankly it's hard to change that, due to the very nature of the protocol. The UDP "session done" event does not give me information about the amount of data transmitted, unless I'm wrong? > > The key would be a client who has the number of packets towards him counted > and expired early and frequently. I see a problem here - a large number of > keys. > > Regarding the large number of keys, that's already happening with > scan.bro. Unfortunately I think it's mostly unavoidable but doesn't seem > to cause too much trouble in practice. Although I can see it causing > problems in certain circumstances. > > With all problematic protocols rate-limited to a Mbit/sec, this might or might not be an issue. I could go for per-packet analysis, but I don't know if the side effects are worth the effort. Still thinking about it. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140915/ca7252f6/attachment.html From seth at icir.org Mon Sep 15 10:13:12 2014 From: seth at icir.org (Seth Hall) Date: Mon, 15 Sep 2014 13:13:12 -0400 Subject: [Bro] Removing IP from Intel Framework? In-Reply-To: References: <5416E805.4080101@g-clef.net> Message-ID: On Sep 15, 2014, at 10:00 AM, Mike Reeves wrote: > Unfortunately there is no way to remove them without restarting. I'm hoping that I can get a repository up on github today/tonight that makes your statement incorrect. :) .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From hammadog at gmail.com Mon Sep 15 10:55:27 2014 From: hammadog at gmail.com (Tom OBrion) Date: Mon, 15 Sep 2014 13:55:27 -0400 Subject: [Bro] Removing IP from Intel Framework?
In-Reply-To: References: <5416E805.4080101@g-clef.net> Message-ID: That's awesome, Seth! Thanks Tom "Life is too short to spend time with people who suck the happy out of you." On Sep 15, 2014, at 1:13 PM, Seth Hall wrote: > > On Sep 15, 2014, at 10:00 AM, Mike Reeves wrote: > >> Unfortunately there is no way to remove them without restarting. > > I'm hoping that I can get a repository up on github today/tonight that makes your statement incorrect. :) > > .Seth > > -- > Seth Hall > International Computer Science Institute > (Bro) because everyone has a network > http://www.bro.org/ > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4821 bytes Desc: not available Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140915/cea4f0f7/attachment.bin From seth at icir.org Mon Sep 15 13:53:31 2014 From: seth at icir.org (Seth Hall) Date: Mon, 15 Sep 2014 16:53:31 -0400 Subject: [Bro] Removing IP from Intel Framework? In-Reply-To: References: <5416E805.4080101@g-clef.net> Message-ID: On Sep 15, 2014, at 1:13 PM, Seth Hall wrote: > I'm hoping that I can get a repository up on github today/tonight that makes your statement incorrect. :) https://github.com/sethhall/intel-ext This repository adds two features. - You can extend your intel log (now named intel_ext.log). - You can whitelist items. These features will likely be integrated into Bro at a future date. I'm trying to use this ext repository as a way to vet features for the intel framework before integrating them right into the main distribution. If you want to start whitelisting intel items at runtime, you should create a new intel file with an extra "meta.whitelist" field and set the field value to "T" (there is a test that shows this).
As you add elements to this intel file, those items won't show up in your intel_ext.log. The intel file will look something like this...

#fields	indicator	indicator_type	meta.source	meta.whitelist
bro.org	Intel::DOMAIN	my_whitelist	T

You should probably maintain this as a separate file and make sure that you are giving the source as something distinct from where the data comes from originally (it's "my_whitelist" in my example). Have fun! :) .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From michalpurzynski1 at gmail.com Mon Sep 15 14:05:53 2014 From: michalpurzynski1 at gmail.com (Michał Purzyński) Date: Mon, 15 Sep 2014 23:05:53 +0200 Subject: [Bro] Removing IP from Intel Framework? In-Reply-To: References: <5416E805.4080101@g-clef.net> Message-ID: W00t, thanks a lot, testing ASAP. On Mon, Sep 15, 2014 at 10:53 PM, Seth Hall wrote: > > On Sep 15, 2014, at 1:13 PM, Seth Hall wrote: > > > I'm hoping that I can get a repository up on github today/tonight that > makes your statement incorrect. :) > > https://github.com/sethhall/intel-ext > > This repository adds two features. > - You can extend your intel log (now named intel_ext.log). > - You can whitelist items. > > These features will likely be integrated into Bro at a future date. I'm > trying to use this ext repository as a way to vet features for the intel > framework before integrating them right into the main distribution. > > If you want to start whitelisting intel items at runtime, you should > create a new intel file with an extra "meta.whitelist" field and set the > field value to "T" (there is a test that shows this). As you add elements > to this intel file, those items won't show up in your intel_ext.log. > > The intel file will look something like this...
> > #fields indicator indicator_type meta.source meta.whitelist > bro.org Intel::DOMAIN my_whitelist T > > You should probably maintain this as a separate file and make sure that > you are giving the source as something distinct from where the data comes > from originally (it's "my_whitelist" in my example). > > Have fun! :) > > .Seth > > -- > Seth Hall > International Computer Science Institute > (Bro) because everyone has a network > http://www.bro.org/ > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -- Michał Purzyński -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140915/1a9b672b/attachment.html From liburdi.joshua at gmail.com Mon Sep 15 15:24:04 2014 From: liburdi.joshua at gmail.com (Josh Liburdi) Date: Mon, 15 Sep 2014 15:24:04 -0700 Subject: [Bro] Removing IP from Intel Framework? In-Reply-To: References: <5416E805.4080101@g-clef.net> Message-ID: Just to clarify a couple things ... Do in-line indicator changes require a restart? That is, if my intel file is deployed with indicator blah.org and a whitelist value of F, then later I update that value to T, do I need to restart for that change to be picked up? IIRC you still would need to restart for the value change to be read. The whitelisting also wouldn't decrease any processing requirements of the Intel framework since the initial indicator match is still occurring, right? Josh On Mon, Sep 15, 2014 at 1:53 PM, Seth Hall wrote: > > On Sep 15, 2014, at 1:13 PM, Seth Hall wrote: > >> I'm hoping that I can get a repository up on github today/tonight that makes your statement incorrect. :) > > https://github.com/sethhall/intel-ext > > This repository adds two features. > - You can extend your intel log (now named intel_ext.log). > - You can whitelist items.
> > These features will likely be integrated into Bro at a future date. I'm trying to use this ext repository as a way to vet features for the intel framework before integrating them right into the main distribution. > > If you want to start whitelisting intel items at runtime, you should create a new intel file with an extra "meta.whitelist" field and set the field value to "T" (there is a test that shows this). As you add elements to this intel file, those items won't show up in your intel_ext.log. > > The intel file will look something like this... > > #fields indicator indicator_type meta.source meta.whitelist > bro.org Intel::DOMAIN my_whitelist T > > You should probably maintain this as a separate file and make sure that you are giving the source as something distinct from where the data comes from originally (it's "my_whitelist" in my example). > > Have fun! :) > > .Seth > > -- > Seth Hall > International Computer Science Institute > (Bro) because everyone has a network > http://www.bro.org/ > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From blackhole.em at gmail.com Mon Sep 15 15:38:59 2014 From: blackhole.em at gmail.com (Joe Blow) Date: Mon, 15 Sep 2014 18:38:59 -0400 Subject: [Bro] Bro + Log rotation (solr ?) Message-ID: Hey all, I'm using Bro + rsyslog filereader in order to pump Bro into our big data solution (Apache SOLR). I'm using custom python scripts to parse the incoming bro messages, batch them into appropriate sizes, and then POST them to the SOLR cluster we have setup. The main problem I'm running into is that rsyslog does not seem to 'follow' the files once they have gone through a Bro logrotate. Is there a way to completely disable logrotate? Has anyone had any luck with the Bro logrotate and not 'losing' file handles? I'd love some help in this matter. Also - I know that Bro supports Elasticsearch POSTing (via libcurl).
Is there any reason why an Apache SOLR module can't be written/adapted? I don't see a need to write to a file and worry about file handles, when it's almost exactly the same to POST to SOLR as it is to ES. Since it's all libcurl (and JSON) under the hood, I'd be glad to post/share the SOLR schemas I've created for the Bro data. Thanks in advance. Cheers, JB -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140915/5cbe9c92/attachment.html From jlay at slave-tothe-box.net Mon Sep 15 15:56:37 2014 From: jlay at slave-tothe-box.net (James Lay) Date: Mon, 15 Sep 2014 16:56:37 -0600 Subject: [Bro] Bro + Log rotation (solr ?) In-Reply-To: References: Message-ID: On 2014-09-15 16:38, Joe Blow wrote: > Hey all, > > I'm using Bro + rsyslog filereader in order to pump Bro into our big > data solution (Apache SOLR). I'm using custom python scripts to parse > the incoming bro messages, batch them into appropriate sizes, and > then > POST them to the SOLR cluster we have setup. The main problem I'm > running into is that rsyslog does not seem to 'follow' the files once > they have gone through a Bro logrotate. Is there a way to completely > disable logrotate? Has anyone had any luck with the Bro logrotate > and not 'losing' file handles? > > I'd love some help in this matter. Also - I know that Bro supports > Elasticsearch POSTing (via libcurl). Is there any reason why an > Apache SOLR module can't be written/adapted? I don't see a need to > write to a file and worry about file handles, when it's almost exactly > the same to POST to SOLR as it is to ES. Since it's all libcurl (and > JSON) under the hood, I'd be glad to post/share the SOLR schemas I've > created for the Bro data. > > Thanks in advance. > > Cheers, > > JB I experienced the same thing, but since I rotate the files manually, I restart the syslog service after rotating and that's done the trick for me.
James From grutz at jingojango.net Mon Sep 15 16:40:35 2014 From: grutz at jingojango.net (Kurt Grutzmacher) Date: Mon, 15 Sep 2014 16:40:35 -0700 Subject: [Bro] Bro + Log rotation (solr ?) In-Reply-To: References: Message-ID: Hey Joe, It is possible with the current setup to write your own logging utility to pipe events directly to your system of choice. Since SOLR is REST-based, just copy over the ElasticSearch module and do some code tweaking. Be aware that the devs are working on a new modular method for extending Bro that will include logging. Should hopefully be a less-painful migration. -- Kurt Grutzmacher -=- grutz at jingojango.net On Mon, Sep 15, 2014 at 3:38 PM, Joe Blow wrote: > Hey all, > > I'm using Bro + rsyslog filereader in order to pump Bro into our big data > solution (Apache SOLR). I'm using custom python scripts to parse the > incoming bro messages, batch them into appropriate sizes, and then POST > them to the SOLR cluster we have setup. The main problem I'm running into > is that rsyslog does not seem to 'follow' the files once they have gone > through a Bro logrotate. Is there a way to completely disable logrotate? > Has anyone had any luck with the Bro logrotate and not 'losing' file > handles? > > I'd love some help in this matter. Also - I know that Bro supports > elastic search POSTing (via libcurl). Is there any reason why an Apache > SOLR module can't be written/adapted? I don't see a need to write to a > file and worry about file handles, when it's almost exactly the same to > POST to SOLR as it is to ES. Since it's all libcurl (and JSON) under the > hood, I'd be glad to post/share the SOLR schemas I've created for the Bro > data. > > Thanks in advance. > > Cheers, > > JB > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140915/adce2f9b/attachment.html From seth at icir.org Mon Sep 15 19:13:23 2014 From: seth at icir.org (Seth Hall) Date: Mon, 15 Sep 2014 22:13:23 -0400 Subject: [Bro] Removing IP from Intel Framework? In-Reply-To: References: <5416E805.4080101@g-clef.net> Message-ID: On Sep 15, 2014, at 6:24 PM, Josh Liburdi wrote: > Do in-line indicator changes require a restart? That is, if my intel > file is deployed with indicator blah.org and a whitelist value of F, > then later I update that value to T, do I need to restart for that > change to be picked up? IIRC you still would need to restart for the > value change to be read. I wouldn't recommend setting a whitelist value in your normal intel datasets. I would maintain it as a separate file as I recommended in my previous email. > The whitelisting also wouldn't decrease any processing requirements of > the Intel framework since the initial indicator match is still > occurring, right? Having fewer items being matched really doesn't change your processing time overhead so there isn't really an optimization to be made there. It primarily just uses less memory at runtime but you wouldn't notice that either unless you have some sort of monstrous whitelist file. The only case where I could see it really helping would be if you are having a really huge number of hits, but I still suspect most people wouldn't notice. .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From blackhole.em at gmail.com Tue Sep 16 06:58:37 2014 From: blackhole.em at gmail.com (Joe Blow) Date: Tue, 16 Sep 2014 09:58:37 -0400 Subject: [Bro] Bro + Log rotation (solr ?) In-Reply-To: References: Message-ID: Hey James, How exactly are you completely disabling the Bro file rotation?
This is what I tried in broctl.cfg:

SitePolicyStandalone = local.bro
CfgDir = /usr/local/bro/etc
SpoolDir = /usr/local/bro/spool
LogDir = /usr/local/bro/logs
LogRotationInterval = 0
MinDiskSpace = 5

But I still see gz files being created. Am I missing something to completely disable it? Cheers, Justin On Mon, Sep 15, 2014 at 6:56 PM, James Lay wrote: > On 2014-09-15 16:38, Joe Blow wrote: > > Hey all, > > > > I'm using Bro + rsyslog filereader in order to pump Bro into our big > > data solution (Apache SOLR). I'm using custom python scripts to parse > > the incoming bro messages, batch them into appropriate sizes, and > > then > > POST them to the SOLR cluster we have setup. The main problem I'm > > running into is that rsyslog does not seem to 'follow' the files once > > they have gone through a Bro logrotate. Is there a way to completely > > disable logrotate? Has anyone had any luck with the Bro logrotate > > and not 'losing' file handles? > > > > I'd love some help in this matter. Also - I know that Bro supports > > elastic search POSTing (via libcurl). Is there any reason why an > > Apache SOLR module can't be written/adapted? I don't see a need to > > write to a file and worry about file handles, when it's almost exactly > > the same to POST to SOLR as it is to ES. Since it's all libcurl (and > > JSON) under the hood, I'd be glad to post/share the SOLR schemas I've > > created for the Bro data. > > > > Thanks in advance. > > > > Cheers, > > > > JB > > I experienced the same thing, but since I rotate the files manually, I > restart the syslog service after rotating and that's done the trick for > me. > > James > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -------------- next part -------------- An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140916/02267dae/attachment.html From jlay at slave-tothe-box.net Tue Sep 16 07:26:25 2014 From: jlay at slave-tothe-box.net (James Lay) Date: Tue, 16 Sep 2014 08:26:25 -0600 Subject: [Bro] Bro + Log rotation (solr ?) In-Reply-To: References: Message-ID: <3990d792440c23539ab7d82a2daba354@localhost> On 2014-09-16 07:58, Joe Blow wrote: > Hey James, > > How exactly are you completely disabling the Bro file rotation? This > is what I tried in broctl.cfg: > > SitePolicyStandalone = local.bro > CfgDir = /usr/local/bro/etc > SpoolDir = /usr/local/bro/spool > LogDir = /usr/local/bro/logs > LogRotationInterval = 0 > MinDiskSpace = 5 > > But I still see gz files being created. Am I missing something to > completely disable? > > Cheers, > > Justin > > On Mon, Sep 15, 2014 at 6:56 PM, James Lay wrote: > >> On 2014-09-15 16:38, Joe Blow wrote: >> > Hey all, >> > >> > I'm using Bro + rsyslog filereader in order to pump Bro into our >> big >> > data solution (Apache SOLR). I'm using custom python scripts to >> parse >> > the incoming bro messages, batch them into appropriate sizes, and >> > then >> > POST them to the SOLR cluster we have setup. The main problem >> I'm >> > running into is that rsyslog does not seem to 'follow' the files >> once >> > they have gone through a Bro logrotate. Is there a way to >> completely >> > disable logrotate? Has anyone had any luck with the Bro >> logrotate >> > and not 'losing' file handles? >> > >> > I'd love some help in this matter. Also - I know that Bro >> supports >> > elastic search POSTing (via libcurl). Is there any reason why >> an >> > Apache SOLR module can't be written/adapted? I don't see a need >> to >> > write to a file and worry about file handles, when it's almost >> exactly >> > the same to POST to SOLR as it is to ES.
Since it's all libcurl >> (and >> > JSON) under the hood, I'd be glad to post/share the SOLR schemas >> I've >> > created for the Bro data. >> > >> > Thanks in advance. >> > >> > Cheers, >> > >> > JB >> >> I experienced the same thing, but since I rotate the files >> manually, I >> restart the syslog service after rotating and that's done the trick >> for >> me. >> >> James I don't use broctl, I use the bro command line only. Something like:

/usr/local/bro/bin/bro --no-checksums -i eth0 local "Site::local_nets += { 192.168.1.0/24 }"

James From dnthayer at illinois.edu Tue Sep 16 08:10:40 2014 From: dnthayer at illinois.edu (Daniel Thayer) Date: Tue, 16 Sep 2014 10:10:40 -0500 Subject: [Bro] Bro + Log rotation (solr ?) In-Reply-To: References: Message-ID: <541852F0.7000306@illinois.edu> After changing broctl.cfg, did you remember to run "broctl install"? Your changes do not take effect until you "install" them. Next, you need to restart Bro ("broctl restart") so that Bro will read the new settings. On 09/16/2014 08:58 AM, Joe Blow wrote: > Hey James, > > How exactly are you completely disabling the Bro file rotation? This is > what I tried in broctl.cfg: > > SitePolicyStandalone = local.bro > CfgDir = /usr/local/bro/etc > SpoolDir = /usr/local/bro/spool > LogDir = /usr/local/bro/logs > LogRotationInterval = 0 > MinDiskSpace = 5 > > But I still see gz files being created. Am I missing something to > completely disable? > > Cheers, > > Justin > > On Mon, Sep 15, 2014 at 6:56 PM, James Lay > wrote: > > On 2014-09-15 16:38, Joe Blow wrote: > > Hey all, > > > > I'm using Bro + rsyslog filereader in order to pump Bro into our big > > data solution (Apache SOLR). I'm using custom python scripts to parse > > the incoming bro messages, batch them into appropriate sizes, and > > then > > POST them to the SOLR cluster we have setup. The main problem I'm > > running into is that rsyslog does not seem to 'follow' the files once > > they have gone through a Bro logrotate.
Is there a way to completely > > disable logrotate? Has anyone had any luck with the Bro logrotate > > and not 'losing' file handles? > > > > I'd love some help in this matter. Also - I know that Bro supports > > elastic search POSTing (via libcurl). Is there any reason why an > > Apache SOLR module can't be written/adapted? I don't see a need to > > write to a file and worry about file handles, when it's almost exactly > > the same to POST to SOLR as it is to ES. Since it's all libcurl (and > > JSON) under the hood, I'd be glad to post/share the SOLR schemas I've > > created for the Bro data. > > > > Thanks in advance. > > > > Cheers, > > > > JB > > I experienced the same thing, but since I rotate the files manually, I > restart the syslog service after rotating and that's done the trick for > me. > > James > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > From doris at bro.org Tue Sep 16 11:06:14 2014 From: doris at bro.org (Doris Schioberg) Date: Tue, 16 Sep 2014 11:06:14 -0700 Subject: [Bro] First Bro Newsletter Message-ID: <54187C16.4010607@bro.org> We are happy to announce the first issue of the new Bro Monthly newsletter http://blog.bro.org/2014/09/bro-monthly-1.html. This very first newsletter is longer than future newsletters will be since there was BroCon'14 and a lot of other special things were going on in August. Have a look. We'd love to hear your feedback on it!
-The Bro Team -- Doris Schioberg Bro Outreach, Training, and Education Coordinator International Computer Science Institute (ICSI Berkeley) Phone: +1 (510) 289-8406 * doris at bro.org From jonathon.s.wright at gmail.com Tue Sep 16 18:54:28 2014 From: jonathon.s.wright at gmail.com (Jonathon Wright) Date: Tue, 16 Sep 2014 15:54:28 -1000 Subject: [Bro] Bro Log ingestion Message-ID: Hello,

Requirement: I'm trying to find the most efficient way to ingest all of Bro's logs, where Bro is running on multiple servers, and get a single server/point of query/mining/reporting, etc. Servers are running Red Hat 6.5 and Bro 2.3 built from source with file extraction enabled (HTTP protocol for exe files). All Bro logs and extracted files seem to be by default owned by root:root, but I'd like to have them available to a non-root group once on the single server/point/interface to the analyst. (My apologies if this has been covered, but I do not know where to search other than just ask or google it.)

Current setup: Red Hat is running fine, Bro 2.3 with file extraction is working fine. So no worries, I just need the best methodology to implement for ingesting all the Bro logs (and extracted files) to a single point for analysis/mining/querying/reporting etc.

Research: Looking around and doing some reading, I've found two possible solutions, ELSA and Logstash, although I don't know them very well and/or what their capabilities are either. But I'd like to know if they are viable, especially given my scenario, or if there is something better. Also, a how-to so I can set it up.

I look forward to your reply, thanks! JW -------------- next part -------------- An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140916/6484054a/attachment.html From rsreese at gmail.com Tue Sep 16 19:28:21 2014 From: rsreese at gmail.com (Stephen Reese) Date: Tue, 16 Sep 2014 22:28:21 -0400 Subject: [Bro] Bro Log ingestion In-Reply-To: References: Message-ID: On Tue, Sep 16, 2014 at 9:54 PM, Jonathon Wright < jonathon.s.wright at gmail.com> wrote: > > Research > Looking around and doing some reading, I've found two possible solutions > ELSA and LOGSTASH although I don't know them very well and / or what their > capabilities are either. But I'd like to know if they are viable, > especially given my scenario, or if there is something better. Also, a > how-to so I can set it up. > You might want to skip on the Logstash piece and push the data directly to ElasticSearch per [1] unless you have a specific requirement. From there you could use Kibana [2] or whatever to interface with data stored in ElasticSearch. [1] https://www.bro.org/sphinx/frameworks/logging-elasticsearch.html [2] http://www.elasticsearch.org/overview/kibana/ -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140916/89cebff2/attachment.html From jonathon.s.wright at gmail.com Tue Sep 16 20:03:32 2014 From: jonathon.s.wright at gmail.com (Jonathon Wright) Date: Tue, 16 Sep 2014 17:03:32 -1000 Subject: [Bro] Bro Log ingestion In-Reply-To: References: Message-ID: Thanks Stephen, I'll take a look at those. I'm assuming my central point server would then need Apache with ElasticSearch and Kibana installed. I'm sure more questions will come as I start looking into this. Thanks again for the info!
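[Archive editor's note: per the documentation linked as [1] above, enabling the Elasticsearch writer in Bro 2.3 is a small policy-script change. A minimal sketch, where the host and port values are placeholders for your own cluster:

```
# local.bro: send logs to Elasticsearch in addition to the normal files.
@load tuning/logs-to-elasticsearch
redef LogElasticSearch::server_host = "es.example.local";
redef LogElasticSearch::server_port = 9200;
```

Note the writer POSTs asynchronously, a property discussed later in this thread.]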
On Tue, Sep 16, 2014 at 4:28 PM, Stephen Reese wrote: > On Tue, Sep 16, 2014 at 9:54 PM, Jonathon Wright < > jonathon.s.wright at gmail.com> wrote: >> >> Research >> Looking around and doing some reading, I've found two possible solutions >> ELSA and LOGSTASH although I don't know them very well and / or what their >> capabilities are either. But I'd like to know if they are viable, >> especially given my scenario, or if there is something better. Also, a >> how-to so I can set it up. >> > > You might want to skip on the Logstash piece and push the data directly to > ElasticSearch per [1] unless you have a specific requirement. From there > you could use Kibana [2] or whatever to interface with data stored in > ElasticSearch. > > [1] https://www.bro.org/sphinx/frameworks/logging-elasticsearch.html > [2] http://www.elasticsearch.org/overview/kibana/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140916/c3f1c72b/attachment.html From hosom at battelle.org Wed Sep 17 04:54:38 2014 From: hosom at battelle.org (Hosom, Stephen M) Date: Wed, 17 Sep 2014 11:54:38 +0000 Subject: [Bro] Bro Log ingestion In-Reply-To: References: Message-ID: Jonathon, As a nit-pick, just because the files are owned by root, doesn't mean they aren't world-readable. The absolute simplest solution to allow the logs to be viewable by non-root users is to scp them to a centralized server, but I'm guessing you want something a little fancier than that. If you can do it, go with free Splunk. If you can afford it, go with paid Splunk. Otherwise: For log viewing with Elasticsearch, Kibana works great, but you could also check out Brownian: https://github.com/grigorescu/Brownian. For log storage, if you want to consider something other than Elasticsearch, VAST is an option! https://github.com/mavam/vast There's no GUI, so that might be a downer for you.
As far as Elasticsearch architecture goes, using Bro to write directly into Elasticsearch is definitely the easiest option. The only concern with this setup is that if Elasticsearch gets busy, nobody is happy. Elasticsearch has a tendency to drop writes when it is too occupied. This, combined with the fact that (to the best of my knowledge) the Elasticsearch writer is "send it and forget it", could result in some hardship if you under-build your Elasticsearch cluster or you undergo a period of unusually high utilization. Seth has some interesting stuff using NSQ that he has written, but I'm not sure that it is technically "supported". His NSQ stuff allows you to send the events to Elasticsearch at a rate that Elasticsearch is comfortable with. Lastly, you could use the Logstash agent to send logs to a Redis server, which buffers the logs for additional Logstash agents to pull from and parse to insert into Elasticsearch. At the moment, I think that this is the most redundant setup. If you want as many logs to make it into Elasticsearch as possible while keeping the Bro side of things as simple as possible, this is likely the way to go. The downside is that this can require quite the large amount of infrastructure... and the only way to find out exactly how much your environment will need is to build it and see. It also requires that you keep up to date in knowledge on 3 pieces of software and how they interact... Hopefully that helps at least a little! -Stephen From: bro-bounces at bro.org [mailto:bro-bounces at bro.org] On Behalf Of Jonathon Wright Sent: Tuesday, September 16, 2014 11:04 PM To: Stephen Reese Cc: bro at bro.org Subject: Re: [Bro] Bro Log ingestion Thanks Stephen, I'll take a look at those. I'm assuming my central point server would then need Apache with ElasticSearch and Kibana installed. I'm sure more questions will come as I start looking into this. Thanks again for the info!
On Tue, Sep 16, 2014 at 4:28 PM, Stephen Reese > wrote: On Tue, Sep 16, 2014 at 9:54 PM, Jonathon Wright > wrote: Research Looking around and doing some reading, I've found two possible solutions ELSA and LOGSTASH although I don't know them very well and / or what their capabilities are either. But I'd like to know if they are viable, especially given my scenario, or if there is something better. Also, a how-to so I can set it up. You might want to skip on the Logstash piece and push the data directly to ElasticSearch per [1] unless you have a specific requirement. From there you could use Kibana [2] or whatever to interface with data stored in ElasticSearch. [1] https://www.bro.org/sphinx/frameworks/logging-elasticsearch.html [2] http://www.elasticsearch.org/overview/kibana/ -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140917/4d597e2b/attachment.html From jlanders at paymetric.com Wed Sep 17 05:26:49 2014 From: jlanders at paymetric.com (John Landers) Date: Wed, 17 Sep 2014 07:26:49 -0500 Subject: [Bro] Bro Log ingestion In-Reply-To: References: Message-ID: <199F5CD38D5E984990F92D2CDE955F66581BE3DEC1@34093-MBX-C12.mex07a.mlsrvr.com> I'm not sure it's an option for you, but I'm using Splunk to ingest logs from multiple Bro sensors. It's a great way to complement the other data I have in Splunk, and after creating some field extractions, it becomes really easy to search the data or create statistics of the data. John Landers From: bro-bounces at bro.org [mailto:bro-bounces at bro.org] On Behalf Of Jonathon Wright Sent: Tuesday, September 16, 2014 8:54 PM To: bro at bro.org Subject: [Bro] Bro Log ingestion Hello, Requirement: I'm trying to find the most efficient way to ingest all of Bro's logs, where Bro is running on multiple servers, and get a single server/point of query/mining/reporting, etc.
Servers are running Red Hat 6.5 and Bro 2.3 built from source with file extraction enabled (HTTP protocol for exe files). All Bro logs and extracted files seem to be by default owned by root:root, but I'd like to have them available to a non-root group once on the single server/point/interface to the analyst. (My apologies if this has been covered, but I do not know where to search other than just ask or google it. ) Current setup Red Hat is running fine, Bro 2.3 with file extraction is working fine. So no worries, I just need the best methodology to implement for ingesting all the Bro logs (and extracted files) to a single point for analysis/mining/querying/reporting etc. Research Looking around and doing some reading, I've found two possible solutions ELSA and LOGSTASH although I don't know them very well and / or what their capabilities are either. But I'd like to know if they are viable, especially given my scenario, or if there is something better. Also, a how-to so I can set it up. I look forward to your reply, thanks! JW -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140917/ab90d50a/attachment.html -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 6593 bytes Desc: not available Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140917/ab90d50a/attachment.bin From paul.halliday at gmail.com Wed Sep 17 06:04:43 2014 From: paul.halliday at gmail.com (Paul Halliday) Date: Wed, 17 Sep 2014 10:04:43 -0300 Subject: [Bro] Bro Log ingestion In-Reply-To: References: Message-ID: I am using logstash. I have Bro 2.3 running on a sensor and the logs are sent to a collector via syslog-ng. There, they are written to disk where they are read by logstash and sent to elasticsearch. 
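[Editor's note: Paul's pipeline just described (syslog-ng writes the forwarded Bro logs to disk on the collector; Logstash tails them into Elasticsearch) can be sketched with a minimal Logstash config. This is an illustrative sketch, not Paul's actual configuration: the file path and type tag are placeholders, and the `host =>` option matches the Logstash 1.x elasticsearch output (later releases renamed it `hosts =>`).]

```conf
input {
  file {
    # Wherever syslog-ng writes the forwarded Bro logs on the collector.
    path => "/var/log/bro/conn.log"
    type => "bro_conn"
  }
}

output {
  elasticsearch {
    # Elasticsearch node reachable from the collector (placeholder).
    host => "localhost"
  }
}
```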
I use logrotate to gzip these files once they get close to about a gig and keep them just in case ES craps out or I need to process them in other ways. I use Squert (www.squertproject.org) to browse them once in ES, but Kibana would probably be a more versatile tool. I process anywhere from 1800-2500 entries/second on an 8-core box with 96 GB of RAM running FreeBSD. If you want to quickly PoC something, take a look at Security Onion ( http://blog.securityonion.net/). On Tue, Sep 16, 2014 at 10:54 PM, Jonathon Wright < jonathon.s.wright at gmail.com> wrote: > Hello, > > Requirement: > I'm trying to find the most efficient way to ingest all of Bro's > logs, where Bro is running on multiple servers, and get a > single server/point of query/mining/reporting, etc. Servers are > running Red Hat 6.5 and Bro 2.3 built from source with file extraction > enabled (HTTP protocol for exe files). All Bro logs and extracted files > seem to be by default owned by root:root, but I'd like to have them > available to a non-root group once on the single server/point/interface to > the analyst. > > > (My apologies if this has been covered, but I do not know where to search > other than just ask or google it. ) > > Current setup > Red Hat is running fine, Bro 2.3 with file extraction is working fine. So > no worries, I just need the best methodology to implement for ingesting all > the Bro logs (and extracted files) to a single point for > analysis/mining/querying/reporting etc. > > Research > Looking around and doing some reading, I've found two possible solutions > ELSA and LOGSTASH although I don't know them very well and / or what their > capabilities are either. But I'd like to know if they are viable, > especially given my scenario, or if there is something better. Also, a > how-to so I can set it up. > > I look forward to your reply, thanks!
> > JW > > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -- Paul Halliday http://www.pintumbler.org/ -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140917/60b09f54/attachment.html From will.havlovick at zenimax.com Wed Sep 17 06:13:13 2014 From: will.havlovick at zenimax.com (Will Havlovick) Date: Wed, 17 Sep 2014 13:13:13 +0000 Subject: [Bro] Bro Log ingestion In-Reply-To: <199F5CD38D5E984990F92D2CDE955F66581BE3DEC1@34093-MBX-C12.mex07a.mlsrvr.com> References: , <199F5CD38D5E984990F92D2CDE955F66581BE3DEC1@34093-MBX-C12.mex07a.mlsrvr.com> Message-ID: <55EDB926-EE50-4709-A38B-DB7E6C504EEB@zenimax.com> I agree with using Splunk. It has really helped us with our massive amount of Bro logs. We are also dumping other logs (AD, FW, etc.) into Splunk and correlating them with the Bro logs. With Splunk though, it does tend to get pricey as you put more and more data into it. But I believe you can use up to 500 MB a day without cost. Will On 17.09.2014, at 08:49, "John Landers" > wrote: I'm not sure it's an option for you, but I'm using Splunk to ingest logs from multiple Bro sensors. It's a great way to complement the other data I have in Splunk, and after creating some field extractions, it becomes really easy to search the data or create statistics of the data. John Landers From: bro-bounces at bro.org [mailto:bro-bounces at bro.org] On Behalf Of Jonathon Wright Sent: Tuesday, September 16, 2014 8:54 PM To: bro at bro.org Subject: [Bro] Bro Log ingestion Hello, Requirement: I'm trying to find the most efficient way to ingest all of Bro's logs, where Bro is running on multiple servers, and get a single server/point of query/mining/reporting, etc. Servers are running Red Hat 6.5 and Bro 2.3 built from source with file extraction enabled (HTTP protocol for exe files).
All Bro logs and extracted files seem to be by default owned by root:root, but I'd like to have them available to a non-root group once on the single server/point/interface to the analyst. (My apologies if this has been covered, but I do not know where to search other than just ask or google it.) Current setup Red Hat is running fine, Bro 2.3 with file extraction is working fine. So no worries, I just need the best methodology to implement for ingesting all the Bro logs (and extracted files) to a single point for analysis/mining/querying/reporting etc. Research Looking around and doing some reading, I've found two possible solutions ELSA and LOGSTASH although I don't know them very well and / or what their capabilities are either. But I'd like to know if they are viable, especially given my scenario, or if there is something better. Also, a how-to so I can set it up. I look forward to your reply, thanks! JW _______________________________________________ Bro mailing list bro at bro-ids.org http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140917/a809ea0b/attachment.html From rsreese at gmail.com Wed Sep 17 07:42:55 2014 From: rsreese at gmail.com (Stephen Reese) Date: Wed, 17 Sep 2014 10:42:55 -0400 Subject: [Bro] Bro Log ingestion In-Reply-To: References: Message-ID: Jonathon, As pointed out, a Redis solution may be an ideal open-source route, e.g. http://michael.bouvy.net/blog/en/2013/11/19/collect-visualize-your-logs-logstash-elasticsearch-redis-kibana/ On Wednesday, September 17, 2014, Hosom, Stephen M wrote: > Jonathon, > > > > As a nit-pick, just because the files are owned by root, doesn't mean they > aren't world-readable. :) The absolute simplest solution to allow the logs > to be viewable by non-root users is to scp them to a centralized server, > but I'm guessing you want something a little fancier than that.
> > > > -Stephen -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140917/45fe1340/attachment.html From jonathon.s.wright at gmail.com Wed Sep 17 12:25:43 2014 From: jonathon.s.wright at gmail.com (Jonathon Wright) Date: Wed, 17 Sep 2014 09:25:43 -1000 Subject: [Bro] Bro Log ingestion In-Reply-To: References: Message-ID: Quite the responses, thanks! Here are my thoughts. I saw your post Doug, and on some of our projects we can use Security Onion w/Bro and ELSA, but in this case it must be a RHEL based solution.
The solution Stephen R. demo'd with the Kibana approach [1] is pretty nice. But it brought an issue to my attention. It appears that Logstash needs to start up listening on a different port, 9292. I'm wondering if I missed something, or why Kibana wouldn't simply run as a plugin or additional module under Apache on port 443. We are in a highly regulated network, and if I stand up an Apache server (where all the Bro logs are going to be placed), and the Apache server is listening on a non-secure (!443) port such as 9292, then it causes flags to be thrown up everywhere and always kills my project. Additional thoughts on that? Stephen H, not a nit-pick at all, great post! =) My method for moving the logs from all the sensors to a central collector at this point is still in the works. My best route is probably to use 'rsync'. The problem I have right now is that Bro logs and extracted files have 600 permissions when they are created. The cause is simply the umask for root on the servers, which is set to 077. Since the servers are configured (correctly) to not allow SSH by root, then my rsync proposal also died, since all the files are accessible by root only. Also, I'm unable to change the umask of root (regulations, not know-how), so short of creating an every-minute chmod 644 cron job, I'm scratching my head on how to get the logs over to the collector/Apache server. You make an excellent point though: "The downside is that this can require quite the large amount of infrastructure... and the only way to find out exactly how much your environment will need is to build it and see. It also requires that you keep up to date in knowledge on 3 pieces of software and how they interact..." The knowledge and infrastructure count / increase is a large flag that will prohibit that endeavor (but great to know about). Both you, John L., and Will H. indicate Splunk though as your solution, which gives me another option.
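[Editor's note: one workaround sketch for the umask problem Jonathon describes. Since root's 077 umask makes Bro create 0600 files, a root-owned job on the sensor can copy them into a staging directory with an explicit mode before shipping them off. All paths here are hypothetical demo paths, and the rsync/cron line is illustrative (user and host are placeholders), not a tested deployment.]

```shell
# Simulate Bro writing a log under root's restrictive umask.
umask 077
mkdir -p /tmp/bro-demo/logs /tmp/bro-demo/staging
echo 'demo entry' > /tmp/bro-demo/logs/conn.log      # created mode 0600

# install(1) copies the file and sets the mode explicitly, ignoring
# the umask, so the staged copy is group/world readable.
install -m 0644 /tmp/bro-demo/logs/conn.log /tmp/bro-demo/staging/conn.log

stat -c '%a' /tmp/bro-demo/logs/conn.log      # → 600
stat -c '%a' /tmp/bro-demo/staging/conn.log   # → 644

# A root cron entry could then push the staged copies to the collector
# as a non-root user (placeholders: loguser, collector):
# */5 * * * * root rsync -az /tmp/bro-demo/staging/ loguser@collector:/data/bro/
```

This sidesteps the "can't change root's umask" restriction because `install -m` applies the mode itself rather than inheriting it from the umask.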
But I have the same "question about ingestion" =) How did you get the logs from the multiple sensors to the "ingestion / collector server"? Rsync, SCP, owner / permission issues? I'm interested for sure. But... the cost is a big no-no as well. As Will H. indicated, the cost can go up based on usage, and I do need a truly open-source free solution, so I am now leaning back to ElasticSearch / LogStash unless I missed something. Paul H., you get to use FreeBSD... Man, do I miss FreeBSD! Give me packages or give me death, haha. Ever since we were forced to use RHEL I miss it more and more! But to your comments, this sentence really caught my attention: "...the logs are sent to a collector via syslog-ng.." Then you said "There, they are written to disk where they are read by logstash and sent to elasticsearch". Since I'm leaning toward the Logstash / ElasticSearch method, based on above thoughts, can you share a bit more on how you set up the syslog-ng, logstash, elasticsearch? That seems to be really close to meeting my requirement. I'm assuming you installed them from source and set them in the rc.conf to enabled YES to start up on boot. I'm more interested in the details of the conf files, with what arguments the daemons start up, and especially how you were able to get the syslog-ng piece working between the sensor and the collector. [1] http://www.appliednsm.com/parsing-bro-logs-with-logstash/ Thanks again to all, this is great stuff. JW On Wed, Sep 17, 2014 at 4:42 AM, Stephen Reese wrote: > Jonathon, > > As pointed out, a Redis solution may be an ideal open-source route, e.g. > http://michael.bouvy.net/blog/en/2013/11/19/collect-visualize-your-logs-logstash-elasticsearch-redis-kibana/ > > > On Wednesday, September 17, 2014, Hosom, Stephen M > wrote: > >> Jonathon, >> >> >> >> As a nit-pick, just because the files are owned by root, doesn't mean >> they aren't world-readable.
>> -Stephen -------------- next part -------------- An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140917/e4548b27/attachment.html From jlay at slave-tothe-box.net Wed Sep 17 13:06:38 2014 From: jlay at slave-tothe-box.net (James Lay) Date: Wed, 17 Sep 2014 14:06:38 -0600 Subject: [Bro] Bro Log ingestion In-Reply-To: References: Message-ID: On 2014-09-17 13:25, Jonathon Wright wrote: > Quite the responses, thanks! > > Here are my thoughts. > > I saw your post Doug, and on some of our projects we can use Security > Onion w/Bro and ELSA, but in this case it must be a RHEL based > solution. The solution Stephen R. demo'd with the Kibana approach [1] > is pretty nice. But it brought an issue to my attention. It appears > that Logstash needs to start up listening on a different port, 9292. > I'm > wondering if I missed something or why Kibana wouldn't simply run as a > plugin or additional module under Apache on port 443. We are in > a highly regulated network, and if I stand up an Apache server > (where all the Bro logs are going to be placed), and the Apache > server > is listening on a non-secure (!443) port such as 9292, then it > causes flags to be thrown up everywhere and always kills my project. > Additional thoughts on that? To set this straight... Logstash itself doesn't listen on any port unless configured to do so. The Elasticsearch engine behind it does; you'd need to have the backend Elasticsearch engine able to listen on port 9200, and your client workstation will need to be able to connect to it on that port. As for Kibana, it works just fine with any current Apache install. > Stephen H, not a nit-pick at all, great post! =) My method for moving > the logs from all the sensors to a central collector at this point > is still in the works. My best route is probably to use rsync. The > problem I have right now is that Bro logs and extracted files have > 600 > permissions when they are created. The cause is simply the umask for > root on the servers, which is set to 077.
Since the servers are > configured (correctly) to not allow SSH by root, then my rsync > proposal also died since all the files are accessible by root only. > Also, I'm unable to change the umask of root (regulations, not know- > how), so short of creating an every-minute chmod 644 cron job, I'm > scratching my head on how to get the logs over to the collector/ > Apache server. Rsyslog on my sensors has been excellent to pipe to a listening Logstash instance (high ports mean I can run as a standard user). Conversely, you can use a cheesy hack of "sudo /usr/bin/tail -f conn.log | logger -d -n remote.syslog.ip -P -u /tmp/ignored". This worked while I was getting my rsyslog instance able to read the conn.log file. Since rsyslog is running as root, it's able to read the Bro files. > You make an excellent point though " The downside is that this can > require quite the large amount of infrastructure... and the only way > to find out exactly how much your environment will need is to build > it > and see. It also requires that you keep up to date in knowledge on 3 > pieces of software and how they interact..." > The knowledge and infrastructure count / increase is a large > flag that will prohibit that endeavor (but great to know about). > > Both you, John L., and Will H. indicate Splunk though as your > solution which gives me another option. But I have the same > "question about ingestion" =) How did you get the logs from the > multiple sensors to the "ingestion / collector server"? Rsync, SCP, > owner / permission issues? I'm interested for sure. But... the cost > is > a big no-no as well. As Will H. indicated the cost can go up based on > usage, I do need a truly open-source free solution, so I am now > leaning back to ElasticSearch / LogStash unless I missed something. > > Paul H., you get to use FreeBSD... Man, do I miss FreeBSD! > Give me packages or give me death, haha. Ever since we were forced to > use RHEL I miss it more and more!
But to your comments, this sentence > really caught my attention: "...the logs are sent to a collector via > syslog-ng.." Then you said "There, they are written to disk where > they > are read by logstash and sent to elasticsearch". Since I'm leaning toward > the Logstash / ElasticSearch method, based on above thoughts, can you > share a bit more on how you set up the syslog-ng, logstash, > elasticsearch? That seems to be really close to meeting my > requirement. I'm assuming you installed them from source and set them in > the rc.conf to enabled YES to start up on boot. I'm more interested in > the details of the conf files, with what arguments the daemons > start up, and especially how you were able to get the syslog-ng piece > working between the sensor and the collector. > > [1] http://www.appliednsm.com/parsing-bro-logs-with-logstash/ [7] With the Logstash entries from the above link, orig_bytes, orig_ip_bytes, resp_bytes, and resp_ip_bytes are not treated as integers, so you'll need this in your filter entry in your logstash.conf:

mutate {
  convert => [ "resp_bytes", "integer" ]
  convert => [ "resp_ip_bytes", "integer" ]
  convert => [ "orig_bytes", "integer" ]
  convert => [ "orig_ip_bytes", "integer" ]
}

Let me know if you need any assistance... I have a full working setup of a single backend host running Logstash/Elasticsearch/Kibana, with a syslog server piping firewall hits to it, and an IDS piping Bro's conn log and Snort IDS logs to it. James > > Thanks again to all, this is great stuff. > > JW > > On Wed, Sep 17, 2014 at 4:42 AM, Stephen Reese [8]> > wrote: > >> Jonathon, >> >> As pointed out, a Redis solution may be an ideal open-source route, >> > > e.g. http://michael.bouvy.net/blog/en/2013/11/19/collect-visualize-your-logs-logstash-elasticsearch-redis-kibana/ >> [5] >> >> On Wednesday, September 17, 2014, Hosom, Stephen M >> wrote: >> >>> Jonathon,
>>>> >>>> [1] >>>> https://www.bro.org/sphinx/frameworks/logging-elasticsearch.html >>>> [1] >>>> [2] http://www.elasticsearch.org/overview/kibana/ [2] >>> > > > > Links: > ------ > [1] https://www.bro.org/sphinx/frameworks/logging-elasticsearch.html > [2] http://www.elasticsearch.org/overview/kibana/ > [3] https://github.com/grigorescu/Brownian > [4] https://github.com/mavam/vast > [5] > > http://michael.bouvy.net/blog/en/2013/11/19/collect-visualize-your-logs-logstash-elasticsearch-redis-kibana/ > [6] mailto:hosom at battelle.org > [7] http://www.appliednsm.com/parsing-bro-logs-with-logstash/ > [8] mailto:rsreese at gmail.com From jlanders at paymetric.com Wed Sep 17 13:08:34 2014 From: jlanders at paymetric.com (John Landers) Date: Wed, 17 Sep 2014 15:08:34 -0500 Subject: [Bro] Bro Log ingestion In-Reply-To: References: Message-ID: <199F5CD38D5E984990F92D2CDE955F66581BE3E031@34093-MBX-C12.mex07a.mlsrvr.com> As it relates to Splunk, you can consume the data in a number of ways. I use a universal forwarder (an agent on the box) and configure it to monitor the logs I want to consume (conn.log, dns.log, files.log, etc.) in the Bro "current" working directory. So, as Bro logs it to file, it gets replicated to the Splunk indexer by the agent. Once the file rolls, I don't care anymore. Though if you wanted to ingest old logs, that would be pretty easy to accomplish as well. (Just reference Splunk documentation on the inputs.conf config file.) John Landers From: bro-bounces at bro.org [mailto:bro-bounces at bro.org] On Behalf Of Jonathon Wright Sent: Wednesday, September 17, 2014 2:26 PM To: Stephen Reese Cc: bro at bro.org Subject: Re: [Bro] Bro Log ingestion Quite the responses, thanks! Here are my thoughts. I saw your post Doug, and on some of our projects we can use Security Onion w/Bro and ELSA, but in this case it must be a RHEL based solution. The solution Stephen R. demo'd with the Kibana approach [1] is pretty nice. But it brought an issue to my attention.
It appears that Logstash needs to start up listening on a different port, 9292. I'm wondering if I missed something or why Kibana wouldn't simply run as a plugin or additional module under apache on port 443. We are in a highly regulated network, and if I stand up an Apache server (where all the Bro logs are going to be placed), and the Apache server is listening on a non-secure (!443) port such as 9292, then it causes flags to be thrown up everywhere and always kills my project. Additional thoughts on that? Stephen H, not a nit-pick at all, great post! =) My method for moving the logs from all the sensors to a central collector at this point is still in the works. My best route is probably to use 'rsync'. The problem I have right now is that Bro logs and extracted files have 600 permissions when they are created. The cause is simply the umask for root on the servers, which is set to 077. Since the servers are configured (correctly) to not allow SSH by root, then my rsync proposal also died since all the files are accessible by root only. Also, I'm unable to change the umask of root (regulations, not know-how) so short of creating an every minute chmod 644 cronjob, I'm scratching my head on how to get the logs over to the collector/apache server. You make an excellent point though " The downside is that this can require quite the large amount of infrastructure... and the only way to find out exactly how much your environment will need is to build it and see. It also requires that you keep up to date in knowledge on 3 pieces of software and how they interact..." The knowledge and infrastructure count / increase is a large flag that will prohibit that endeavor (but great to know about). Both you, John L., and Will H. indicate Splunk though as your solution which gives me another option. But I have the same "question about ingestion" =) How did you get the logs from the multiple sensors to the "ingestion / collector server"? Rsync, SCP, owner / permission issues?
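As an aside, the every-minute chmod cron job mentioned above is simple to sketch. The Bro log path is an assumption, and the demonstration below uses a scratch directory rather than real logs:

```shell
# Crontab entry sketch (assumed path): re-permission Bro logs each minute so a
# non-root transfer user can read them:
#   * * * * * find /usr/local/bro/logs -type f -perm 0600 -exec chmod 644 {} +
#
# The effect, demonstrated on a scratch directory:
LOGDIR=$(mktemp -d)
( umask 077; touch "$LOGDIR/conn.log" )   # file is created 0600, as under root's 077 umask
find "$LOGDIR" -type f -perm 0600 -exec chmod 644 {} +
stat -c '%a' "$LOGDIR/conn.log"           # now world-readable
```

A find-based sweep only touches files that are still mode 600, but it remains a workaround; relaxing the umask of the process that creates the logs would be cleaner if policy ever allowed it.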
I'm interested for sure. But.....the cost is a big no-no as well. As Will H. indicated, the cost can go up based on usage, and I do need a truly open-source free solution, so I am now leaning back to ElasticSearch / LogStash unless I missed something. Paul H., you get to use FreeBSD... ... Man do I miss FreeBSD! Give me packages or give me death, haha. Ever since we were forced to use RHEL I miss it more and more! But to your comments, this sentence really caught my attention: "...the logs are sent to a collector via syslog-ng.." Then you said "There, they are written to disk where they are read by logstash and sent to elasticsearch". Since I'm leaning toward the Logstash / ElasticSearch method, based on above thoughts, can you share a bit more on how you set up the syslog-ng, logstash, elasticsearch? That seems to be really close to meeting my requirement. I'm assuming you installed them from source and set them to YES in rc.conf to start up on boot. I'm more interested in the details of the conf files and with what arguments the daemons start up, and especially how you were able to get the syslog-ng piece working between the sensor and the collector. [1] http://www.appliednsm.com/parsing-bro-logs-with-logstash/ Thanks again to all, this is great stuff. JW On Wed, Sep 17, 2014 at 4:42 AM, Stephen Reese wrote: Jonathon, As pointed out, a Redis solution may be an ideal open-source route, e.g. http://michael.bouvy.net/blog/en/2013/11/19/collect-visualize-your-logs-logstash-elasticsearch-redis-kibana/ On Wednesday, September 17, 2014, Hosom, Stephen M wrote: Jonathon, As a nit-pick, just because the files are owned by root, doesn't mean they aren't world-readable. :) The absolute simplest solution to allow the logs to be viewable by non-root users is to scp them to a centralized server, but I'm guessing you want something a little fancier than that. If you can do it, go with free Splunk. If you can afford it, go with paid Splunk.
Otherwise: For log viewing with Elasticsearch Kibana works great, but you could also check out Brownian: https://github.com/grigorescu/Brownian. For log storage, if you want to consider something other than Elasticsearch, VAST is an option! https://github.com/mavam/vast There's no GUI, so that might be a downer for you. As far as Elasticsearch architecture goes, using Bro to write directly into Elasticsearch is definitely the easiest option. The only concern with this setup is that if Elasticsearch gets busy, nobody is happy. Elasticsearch has a tendency to drop writes when it is too occupied. This, combined with the fact that (to the best of my knowledge) the Elasticsearch writer is a "send it and forget it", could result in some hardship if you under-build your Elasticsearch cluster or you undergo a period of unusually high utilization. Seth has some interesting stuff using NSQ that he has written, but I'm not sure that it is technically "supported". His NSQ stuff allows you to send the events to Elasticsearch at a rate that Elasticsearch is comfortable with. Lastly, you could use the Logstash agent to send logs to a Redis server, which buffers the logs for additional Logstash agents to pull from and parse to insert into Elasticsearch. At the moment, I think that this is the most redundant setup. If you want as many logs to make it into Elasticsearch as possible while keeping the Bro side of things as simple as possible, this is likely the way to go. The downside is that this can require quite the large amount of infrastructure... and the only way to find out exactly how much your environment will need is to build it and see. It also requires that you keep up to date in knowledge on 3 pieces of software and how they interact... Hopefully that helps at least a little!
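For reference, the Redis-buffered pipeline described above usually splits Logstash into a "shipper" and an "indexer". A minimal Logstash-1.4-era config sketch; the hostnames, paths, and the Redis list key are assumptions, not anything from this thread:

```
# shipper.conf -- runs on each sensor, tails Bro logs and pushes lines to Redis
input  { file  { path => "/usr/local/bro/logs/current/*.log" } }
output { redis { host => "redis.example.com" data_type => "list" key => "bro" } }

# indexer.conf -- runs near the cluster, pops from Redis, writes to Elasticsearch
input  { redis { host => "redis.example.com" data_type => "list" key => "bro" } }
output { elasticsearch { host => "es.example.com" } }
```

Redis simply buffers between the two tiers, so a momentarily slow Elasticsearch backs up the queue instead of dropping writes.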
-Stephen From: bro-bounces at bro.org [mailto:bro-bounces at bro.org] On Behalf Of Jonathon Wright Sent: Tuesday, September 16, 2014 11:04 PM To: Stephen Reese Cc: bro at bro.org Subject: Re: [Bro] Bro Log ingestion Thanks Steven, I'll take a look at those. I'm assuming my central point server would then need Apache with ElasticSearch and Kibana installed. I'm sure more questions will come as I start looking into this. Thanks again for the info! On Tue, Sep 16, 2014 at 4:28 PM, Stephen Reese wrote: On Tue, Sep 16, 2014 at 9:54 PM, Jonathon Wright wrote: Research Looking around and doing some reading, I've found two possible solutions ELSA and LOGSTASH although I don't know them very well and / or what their capabilities are either. But I'd like to know if they are viable, especially given my scenario, or if there is something better. Also, a how-to so I can set it up. You might want to skip on the Logstash piece and push the data directly to ElasticSearch per [1] unless you have a specific requirement. From there you could use Kibana [2] or whatever to interface with data stored in ElasticSearch. [1] https://www.bro.org/sphinx/frameworks/logging-elasticsearch.html [2] http://www.elasticsearch.org/overview/kibana/ -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140917/aa798560/attachment.html -------------- next part -------------- A non-text attachment was scrubbed...
Name: smime.p7s Type: application/pkcs7-signature Size: 6593 bytes Desc: not available Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140917/aa798560/attachment.bin From donaldson8 at llnl.gov Wed Sep 17 14:08:56 2014 From: donaldson8 at llnl.gov (Donaldson, John) Date: Wed, 17 Sep 2014 21:08:56 +0000 Subject: [Bro] Bro Log ingestion In-Reply-To: <199F5CD38D5E984990F92D2CDE955F66581BE3E031@34093-MBX-C12.mex07a.mlsrvr.com> References: <199F5CD38D5E984990F92D2CDE955F66581BE3E031@34093-MBX-C12.mex07a.mlsrvr.com> Message-ID: We also feed our Bro logs into Splunk and have been pretty happy with that. We have a pretty good idea of what our daily volume looks like, and have been able to plan comfortably around that. We've only been bitten by unusually large spikes in volume once or twice in the couple of years that we've been Splunking our data. John Donaldson -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140917/3b35060e/attachment.html From anthony.kasza at gmail.com Wed Sep 17 14:19:32 2014 From: anthony.kasza at gmail.com (anthony kasza) Date: Wed, 17 Sep 2014 14:19:32 -0700 Subject: [Bro] Plugins Message-ID: Hi list, I'm curious if anyone has had success with the new plugin structure (for the bro binary, not broctl plugins). Has anyone used it yet? If so, what have you done? -AK -------------- next part -------------- An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140917/777b0a4a/attachment.html From jonathon.s.wright at gmail.com Wed Sep 17 15:38:13 2014 From: jonathon.s.wright at gmail.com (Jonathon Wright) Date: Wed, 17 Sep 2014 12:38:13 -1000 Subject: [Bro] Bro Log ingestion In-Reply-To: References: <199F5CD38D5E984990F92D2CDE955F66581BE3E031@34093-MBX-C12.mex07a.mlsrvr.com> Message-ID: Excellent information James. Thanks also for the vote of confidence too John, but you guys are making it harder, haha. It seems I need more information to determine the best course as the opinions are varied over using Splunk or LogStash. James, couple questions on your post. So if I understand correctly, ElasticSearch is what listens (as a virtual Apache module I'm assuming?), LogStash merely feeds ElasticSearch the logs. Getting logs to the server that is running LogStash and ElasticSearch is where Rsyslog-vs-Splunk-vs-whatever else comes into play...correct? You indicated "Rsyslog on my sensors have been excellent to pipe to a listening Logstash instance (high ports mean I can run as standard user)." Does this mean you have LogStash listening on a high port where rsyslog connects to? If so, this would be a problem for me. In my over-regulated environment, the logs have to be transferred on a low port, preferably on a known standard port (such as ssh/22), and the logs must be transferred on an encrypted channel. This is the main reason I initially wanted to use rsync, which uses ssh, encrypts the connection, and obviously runs on a known/standard low port, 22. The problem being that rsync runs with permissions of the thread owner, in this case a non-root user. And since root is not allowed to SSH into a box, I cannot use rsync. So... can you elaborate a bit more on what ports you are using (or is it random high ports), and if it's encrypted, or if you have any other thoughts on how I can solve the movement of the Bro logs in a secure manner?
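One hedged possibility for the constraint above: rsyslog ships with RHEL, can tail files with its imfile module, and forwards over a single fixed TCP port (TLS is available via its gtls driver). A legacy-syntax config sketch, with the log path and collector hostname assumed:

```
# /etc/rsyslog.d/bro.conf on a sensor (rsyslog v5 legacy syntax; paths assumed)
$ModLoad imfile
$InputFileName /usr/local/bro/logs/current/conn.log
$InputFileTag bro_conn:
$InputFileStateFile stat-bro-conn
$InputRunFileMonitor
# @@host:port forwards over TCP; pick whatever low port policy allows
*.* @@collector.example.com:514
```

Because rsyslog runs as a system service, this also sidesteps the root-only file permissions that break a non-root rsync.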
Once I have a good solution for getting the Bro logs over to the collector/apache server, I'd be real excited to discuss some more details about logstash.conf and configuring it to feed ElasticSearch. Any additional thoughts from the group are welcome, thanks again for the assistance thus far! On Wed, Sep 17, 2014 at 11:08 AM, Donaldson, John wrote: > We also feed our Bro logs into Splunk and have been pretty happy with > that. We have a pretty good idea of what our daily volume looks like, and > have been able to plan comfortably around that. We've only been bitten by > unusually large spikes in volume once or twice in the couple of years that > we've been Splunking our data. > > John Donaldson -------------- next part -------------- An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140917/7d3121b3/attachment.html From jlay at slave-tothe-box.net Wed Sep 17 16:56:22 2014 From: jlay at slave-tothe-box.net (James Lay) Date: Wed, 17 Sep 2014 17:56:22 -0600 Subject: [Bro] Bro Log ingestion In-Reply-To: References: <199F5CD38D5E984990F92D2CDE955F66581BE3E031@34093-MBX-C12.mex07a.mlsrvr.com> Message-ID: <1410998182.2909.20.camel@JamesiMac> Ok....so elasticsearch is its own, self-contained application, as is Logstash. Logstash formats the data to go into elasticsearch (or bro can go direct). Kibana is the web front end that you use to get to the elasticsearch backend. This link http://logstash.net/docs/1.4.2/ contains all the inputs, codecs, filters, and outputs that logstash supports. I've been using the udp input for going to the remote logstash/elasticsearch server. I chose to have Logstash listening on a high port, but you can just as easily choose something else. I am not sure on encryption, so you'll want to look through that list. Also, you can test all of this in one shot by downloading the logstash tar.gz, starting it with the embedded elasticsearch, then starting the web agent...it's pretty cool for testing. James On Wed, 2014-09-17 at 12:38 -1000, Jonathon Wright wrote: > Excellent information James. Thanks also for the vote of confidence > too John, but you guys are making it harder, haha. It seems I need > more information to determine the best course as the opinions are > varied over using Splunk or LogStash. > > James, couple questions on your post. > > So if I understand correctly, ElasticSearch is what listens (as a > virtual Apache module I'm assuming?), LogStash merely feeds > ElasticSearch the logs. Getting logs to the server that is running > LogStash and ElasticSearch is where Rsyslog-vs-Splunk-vs-whatever else > comes into play...correct?
> > You indicated "Rsyslog on my sensors have been excellent to pipe to a > listening Logstash instance (high ports mean I can run as standard > user)." Does this mean you have LogStash listening on a high port > where rsyslog connects to? If so, this would be a problem for me. In > my over-regulated environment, the logs have to be transferred on > a low port, preferably on a known standard port (such as ssh/22), and > the logs must be transferred on an encrypted channel. This is the main > reason I initially wanted to use rsync, which uses ssh, encrypts the > connection, and obviously runs on a known/standard low port, 22. The > problem being that rsync runs with permissions of the thread owner, in > this case a non-root user. And since root is not allowed to SSH into a > box, I cannot use rsync. So... can you elaborate a bit more on what > ports you are using (or is it random high ports), and if it's > encrypted, or if you have any other thoughts on how I can solve the > movement of the Bro logs in a secure manner? > > Once I have a good solution for getting the Bro logs over to the > collector/apache server, I'd be real excited to discuss some more > details about logstash.conf and configuring it to feed ElasticSearch. > > Any additional thoughts from the group are welcome, thanks again for > the assistance thus far! > > > > On Wed, Sep 17, 2014 at 11:08 AM, Donaldson, John > wrote: > > We also feed our Bro logs into Splunk and have been pretty > happy with that. We have a pretty good idea of what our daily > volume looks like, and have been able to plan comfortably > around that. We've only been bitten by unusually large spikes > in volume once or twice in the couple of years that we've been > Splunking our data.
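The 0600-permission problem discussed above (root-owned logs that a non-root rsync user cannot read) has a simple pre-sync workaround, assuming policy permits relaxing the mode: sweep the log directory and add group-read before the transfer. A hedged Python sketch; the path handling and the choice of only adding the group-read bit are my assumptions.

```python
# Hypothetical workaround for root-umask-077 log files: grant group-read
# so a non-root rsync account can pick them up. Adjust bits to local policy.
import os
import stat

def make_group_readable(path):
    """Walk `path` and add S_IRGRP to any file missing it.

    Returns the list of files whose mode was changed."""
    changed = []
    for root, _dirs, files in os.walk(path):
        for name in files:
            p = os.path.join(root, name)
            mode = stat.S_IMODE(os.stat(p).st_mode)  # permission bits only
            if not mode & stat.S_IRGRP:
                os.chmod(p, mode | stat.S_IRGRP)
                changed.append(p)
    return changed
```

Run from cron (or a systemd timer) just before the rsync job, this avoids the every-minute blanket `chmod 644` mentioned later in the thread by touching only files that need it.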
> > > > > John Donaldson > > > > > > > From: bro-bounces at bro.org [mailto:bro-bounces at bro.org] On > Behalf Of John Landers > Sent: Wednesday, September 17, 2014 1:09 PM > To: Jonathon Wright; Stephen Reese > > > > Cc: bro at bro.org > Subject: Re: [Bro] Bro Log ingestion > > > As it relates to Splunk, you can consume the data in a number > of ways. I use a universal forwarder (an agent on the box) and > configure it to monitor the logs I want to consume (conn.log, > dns.log, files.log, etc.) in the Bro 'current' working > directory. > > > > So, as Bro logs it to file, it gets replicated to the Splunk > indexer by the agent. Once the file rolls, I don't care > anymore. Though if you wanted to ingest old logs, that would > be pretty easy to accomplish as well. (Just reference splunk > documentation on the inputs.conf config file.) > > > > > > > > > > John Landers > > > > From: bro-bounces at bro.org [mailto:bro-bounces at bro.org] On > Behalf Of Jonathon Wright > Sent: Wednesday, September 17, 2014 2:26 PM > To: Stephen Reese > Cc: bro at bro.org > Subject: Re: [Bro] Bro Log ingestion > > > > > Quite the responses, thanks! > > > > > > Here are my thoughts. > > > > > > I saw your post Doug, and on some of our projects we can use > Security Onion w/Bro and ELSA, but in this case it must be a > RHEL based solution. The solution Stephen R. demo'd with the > Kibana approach [1] is pretty nice. But it brought an issue to > my attention. It appears that Logstash needs to start up > listening on a different port, 9292. I'm wondering if I missed > something or why Kibana wouldn't simply run as a plugin or > additional module under apache on port 443. We are in a highly > regulated network, and if I stand up an Apache server (where > all the Bro logs are going to be placed), and the Apache > server is listening on a non secure (!443) port such as 9292, > then it causes flags to be thrown up everywhere and always > kills my project. Additional thoughts on that?
> > > > > > Stephen H, not a nit-pick at all, great post! =) My method for > moving the logs from all the sensors to a central collector > at this point is still in the works. My best route is > probably to use 'rsync'. The problem I have right now is that > Bro logs and extracted files have 600 permissions when they > are created. The cause is simply the umask for root on the > servers, which is set to 077. Since the servers are configured > (correctly) to not allow SSH by root, then my rsync proposal > also died since all the files are accessible by root only. > Also, I'm unable to change the umask of root (regulations, not > know-how) so short of creating an every-minute chmod 644 > cronjob, I'm scratching my head on how to get the logs over to > the collector / apache server. > > > > > > You make an excellent point though: "The downside is that this > can require quite the large amount of infrastructure... and the > only way to find out exactly how much your environment will > need is to build it and see. It also requires that you keep up > to date in knowledge on 3 pieces of software and how they > interact..." > > > The knowledge and infrastructure count / increase is a large > flag that will prohibit that endeavor (but great to know > about). > > > > > > Both you, John L., and Will H. indicate Splunk as your > solution, which gives me another option. But I have the same > "question about ingestion" =) How did you get the logs from > the multiple sensors to the "ingestion / collector server"? > Rsync, SCP, owner / permission issues? I'm interested for > sure. But.....the cost is a big no-no as well. As Will H. > indicated, the cost can go up based on usage, and I do need a truly > open-source free solution, so I am now leaning back to > ElasticSearch / LogStash unless I missed something. > > > > > > Paul H., you get to use FreeBSD... ... Man do I miss > FreeBSD! Give me packages or give me death, haha.
Ever since > we were forced to use RHEL I miss it more and more! But to > your comments, this sentence really caught my attention: > "...the logs are sent to a collector via syslog-ng..." Then you > said "There, they are written to disk where they are read by > logstash and sent to elasticsearch". Since I'm leaning toward the > Logstash / ElasticSearch method, based on above thoughts, can > you share a bit more on how you set up the syslog-ng, > logstash, elasticsearch? That seems to be really close to > meeting my requirement. I'm assuming you installed them from > source and set them in rc.conf to YES to start up > on boot. I'm more interested in the details of the conf files, > with what arguments the daemons start up, and especially how > you were able to get the syslog-ng piece working between the > sensor and the collector. > > > > > > > > > [1] http://www.appliednsm.com/parsing-bro-logs-with-logstash/ > > > > > > > > > Thanks again to all, this is great stuff. > > > > > > JW > > > > > > > > > > > > > > > On Wed, Sep 17, 2014 at 4:42 AM, Stephen Reese > wrote: > > > Jonathon, > > > > > > As pointed out, a Redis solution may be an ideal > open-source route, > e.g. http://michael.bouvy.net/blog/en/2013/11/19/collect-visualize-your-logs-logstash-elasticsearch-redis-kibana/ > > > > > On Wednesday, September 17, 2014, Hosom, Stephen M > wrote: > > > Jonathon, > > > > As a nit-pick, just because the files are > owned by root, doesn't mean they aren't > world-readable. :) The absolute simplest > solution to allow the logs to be viewable by > non-root users is to scp them to a centralized > server, but I'm guessing you want something a > little fancier than that. > > > > If you can do it, go with free Splunk. If you > can afford it, go with paid Splunk. > > > > Otherwise: > > > > For log viewing with Elasticsearch, Kibana > works great, but you could also check out > Brownian: > https://github.com/grigorescu/Brownian.
> > > > For log storage, if you want to consider > something other than Elasticsearch, VAST is an > option! https://github.com/mavam/vast There's > no GUI, so that might be a downer for you. > > > > As far as Elasticsearch architecture goes, > using Bro to write directly into Elasticsearch > is definitely the easiest option. The only > concern with this setup is that if > Elasticsearch gets busy, nobody is happy. > Elasticsearch has a tendency to drop writes > when it is too occupied. This, combined with > the fact that (to the best of my knowledge) > the Elasticsearch writer is a 'send it and > forget it', could result in some hardship if > you under-build your Elasticsearch cluster or > you undergo a period of unusually high > utilization. > > > > Seth has some interesting stuff using NSQ that > he has written, but I'm not sure that it is > technically 'supported'. His NSQ stuff allows > you to send the events to Elasticsearch at a > rate that Elasticsearch is comfortable with. > > > > Lastly, you could use the Logstash agent to > send logs to a Redis server, which buffers the > logs for additional Logstash agents to pull > from and parse to insert into Elasticsearch. > At the moment, I think that this is the most > redundant setup. If you want as many logs to > make it into Elasticsearch as possible while > keeping the Bro side of things as simple as > possible, this is likely the way to go. The > downside is that this can require quite the > large amount of infrastructure... and the only > way to find out exactly how much your > environment will need is to build it and see. > It also requires that you keep up to date in > knowledge on 3 pieces of software and how they > interact... > > > > Hopefully that helps at least a little!
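The "drop writes when busy" behavior described above is, at its core, an unbounded producer meeting a bounded consumer. A toy Python sketch of the tradeoff between a fire-and-forget writer and a Redis-style buffer; the queue size and names are purely illustrative, not anything from Bro's or Elasticsearch's actual internals.

```python
# Toy model of the architecture tradeoff: a bounded queue stands in for the
# indexer's intake capacity. Sizes and names are illustrative assumptions.
import queue

buf = queue.Queue(maxsize=3)  # stand-in for the downstream store's capacity

def send_fire_and_forget(event):
    """Like a 'send it and forget it' writer: silently drops when full.

    Returns False on a drop, but a true fire-and-forget caller never checks."""
    try:
        buf.put_nowait(event)
        return True
    except queue.Full:
        return False

def drain(n):
    """A Redis-style buffer consumer pulls at its own comfortable pace."""
    return [buf.get_nowait() for _ in range(min(n, buf.qsize()))]
```

The buffered design wins exactly because `drain` sets the pace; the cost, as noted above, is the extra infrastructure that holds the queue.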
> > > > -Stephen > > > > From: bro-bounces at bro.org > [mailto:bro-bounces at bro.org] On Behalf Of > Jonathon Wright > Sent: Tuesday, September 16, 2014 11:04 PM > To: Stephen Reese > Cc: bro at bro.org > Subject: Re: [Bro] Bro Log ingestion > > [...] -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140917/4783661e/attachment.html From hosom at battelle.org Thu Sep 18 05:01:11 2014 From: hosom at battelle.org (Hosom, Stephen M) Date: Thu, 18 Sep 2014 12:01:11 +0000 Subject: [Bro] Bro Log ingestion In-Reply-To: References: <199F5CD38D5E984990F92D2CDE955F66581BE3E031@34093-MBX-C12.mex07a.mlsrvr.com> Message-ID: If you need a protocol that plays well with Logstash, can operate on a custom 'low port' (sigh...), and also supports encryption, you have described lumberjack.
Lumberjack is supported by the Logstash Agent (when you run full-blown Logstash on the Bro boxes as forwarders) or by a smaller, slimmer application called Logstash-Forwarder. You're going to limit yourself a lot by using that configuration, though. First of all, you're going to need a very good working knowledge of TLS. The Logstash-Forwarder and Agent are both very picky about their TLS, and requests for options allowing users to circumvent some of the checks have largely been met with opposition. As far as getting Logstash to connect to your elasticsearch cluster, the documentation is very clear. Depending on which version of elasticsearch you're using, the steps will vary. If you want some help configuring it, I could help you offline... seems like a pretty big distraction for me to send Logstash and Elasticsearch configuration tips on this mailing list. From: bro-bounces at bro.org [mailto:bro-bounces at bro.org] On Behalf Of Jonathon Wright Sent: Wednesday, September 17, 2014 6:38 PM To: Donaldson, John Cc: bro at bro.org Subject: Re: [Bro] Bro Log ingestion [...] -------------- next part -------------- An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140918/9571be8c/attachment.html From seth at icir.org Thu Sep 18 06:23:27 2014 From: seth at icir.org (Seth Hall) Date: Thu, 18 Sep 2014 09:23:27 -0400 Subject: [Bro] Plugins In-Reply-To: References: Message-ID: <105E0C1B-2360-4127-A838-CFD29AEDA7D1@icir.org> On Sep 17, 2014, at 5:19 PM, anthony kasza wrote: > I'm curious if anyone has had success with the new plugin structure (for the bro binary, not broctl plugins). Has anyone used it yet? If so, what have you done? I started working on a RAR analyzer as a plugin recently. ;) .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From jxbatchelor at gmail.com Thu Sep 18 09:00:10 2014 From: jxbatchelor at gmail.com (Jason Batchelor) Date: Thu, 18 Sep 2014 11:00:10 -0500 Subject: [Bro] File Extraction Related Scripting Questions Message-ID: Hello: I would like a quick way to simply get the directory size of the extract_files directory. If it meets a certain threshold I don't want to extract the file. I tried looking for a builtin function that did this but could not locate one. I then attempted to do the following system command: local somevar = system(fmt("du -b %s | cut -f1", FileExtract::prefix)) However, I am unable to capture the output (since it goes directly to stdout). Does anyone have any advice on how to tackle this? Additionally, I was wondering if Bro is able to identify MIME types of modern Office documents down to the type of application they support (Excel, Powerpoint, etc)... From my testing, it seems that the only thing one gets is 'application/zip' for the MIME type for a modern office document, this is technically correct, but I was hoping for a way to home in on this a little more by being able to specify 'application/vnd.openxmlformats-officedocument.presentationml.presentation' (if I wanted pptx files). Does Bro MIME detection support this in any way?
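For what it's worth, the finer-grained identification asked about here is possible outside of Bro because OOXML files are ZIP containers whose top-level entry names reveal the generating application. A Python sketch of that check; the marker-to-MIME mapping below is my own heuristic assumption, not Bro behavior.

```python
# Hypothetical sketch: distinguish docx/xlsx/pptx from a plain ZIP by
# looking at the container's top-level entries. Mapping is an assumption.
import io
import zipfile

OOXML_MARKERS = {
    "word/": "application/vnd.openxmlformats-officedocument.wordprocessingml.document",
    "xl/": "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
    "ppt/": "application/vnd.openxmlformats-officedocument.presentationml.presentation",
}

def ooxml_mime(data: bytes) -> str:
    """Return the specific OOXML MIME type, or application/zip as a fallback."""
    with zipfile.ZipFile(io.BytesIO(data)) as zf:
        names = zf.namelist()
    for prefix, mime in OOXML_MARKERS.items():
        if any(n.startswith(prefix) for n in names):
            return mime
    return "application/zip"
```

A more robust variant would parse `[Content_Types].xml` inside the archive, but the directory-prefix check is enough to separate pptx from an ordinary ZIP.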
Many thanks, Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140918/48a45f53/attachment.html From jxbatchelor at gmail.com Thu Sep 18 10:12:01 2014 From: jxbatchelor at gmail.com (Jason Batchelor) Date: Thu, 18 Sep 2014 12:12:01 -0500 Subject: [Bro] File Extraction Related Scripting Questions In-Reply-To: References: Message-ID: FWIW - I managed to cobble together the following poc once I stumbled across 'exec' :)

when ( local dir_size = Exec::run([$cmd=fmt("du -b %s | cut -f1", FileExtract::prefix)]) )
    {
    if ( to_int(dir_size$stdout[0]) < dir_size_limit )
        print "file can be written";
    else
        print "file cannot be written";
    }

Interested if this is the 'best' way or not. The drawback is that this required the use of 'when', which requires me to wait a little bit before I can utilize the returned result. It also seems that if I place an 'extract' analyzer inside the if statement when a file can be written, I get the error 'field value missing [dir_size$stdout]'. This probably relates to a timing issue on the part of the issued command, I am guessing? Back to the drawing board I suppose, but that is as far as I've gotten so far :) Also interested as well in the MIME type question with respect to Office documents. Thanks! Jason On Thu, Sep 18, 2014 at 11:00 AM, Jason Batchelor wrote: > Hello: > > I would like a quick way to simply get the directory size of the > extract_files directory. If it meets a certain threshold I don't want to > extract the file. I tried looking for a builtin function that did this but > could not locate one. I then attempted to do the following system command: > > local somevar = system(fmt("du -b %s | cut -f1", FileExtract::prefix)) > > However, I am unable to capture the output (since it goes directly to > stdout). Does anyone have any advice on how to tackle this?
> > Additionally, I was wondering if Bro is able to identify MIME types of > modern Office documents down to the type of application they support > (Excel, Powerpoint, etc)... From my testing, it seems that the only thing > one gets is 'application/zip' for the MIME type for a modern office > document, this is technically correct, but I was hoping for a way to home > in on this a little more by being able to specify > 'application/vnd.openxmlformats-officedocument.presentationml.presentation' > (if I wanted pptx files). Does Bro MIME detection support this in any way? > > Many thanks, > Jason > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140918/ea8c8bc4/attachment.html From seth at icir.org Thu Sep 18 12:47:15 2014 From: seth at icir.org (Seth Hall) Date: Thu, 18 Sep 2014 15:47:15 -0400 Subject: [Bro] File Extraction Related Scripting Questions In-Reply-To: References: Message-ID: <884D2774-E31E-43D3-B901-E68CD3CFC066@icir.org> On Sep 18, 2014, at 1:12 PM, Jason Batchelor wrote: > FWIW - I managed to cobble together the following poc once I stumbled across 'exec' :) Yep, that's probably the correct thing to do for now. > The drawback is this required the use of 'when' which requires me to wait a little bit before I can utilize the returned result. Since Bro needs to keep running in a non-blocking manner all the time, basically any solution you aim for will be using 'when', since looking at the file system is almost intrinsically a blocking operation. What I would recommend is that you have a scheduled event that regularly checks the size of the directory and modifies a global value to let you know if you're safe to extract or not. That will combine the benefit of the asynchronous operation with the benefit of being able to check in an if statement if your extraction directory is overly full.
> It also seems that if I place an 'extract' analyzer inside the if statement when a file can be written, I get the error 'field value missing [dir_size$stdout]'. This probably relates to a timing issue on the part of the issued command I am guessing?  I'm not sure why you're seeing that problem, that seems weird. However, I wouldn't expect that to work generally, because by the time the when statement returns, quite a bit of the file could already have transferred. > Also interested as well in the MIME type question with respect to Office documents. Yeah, what would help a lot there is for someone to pull together files that they don't feel are being detected with accurate mime types and to provide those files or links to files on the internet that don't get detected accurately. .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From jxbatchelor at gmail.com Thu Sep 18 14:26:25 2014 From: jxbatchelor at gmail.com (Jason Batchelor) Date: Thu, 18 Sep 2014 16:26:25 -0500 Subject: [Bro] File Extraction Related Scripting Questions In-Reply-To: <884D2774-E31E-43D3-B901-E68CD3CFC066@icir.org> References: <884D2774-E31E-43D3-B901-E68CD3CFC066@icir.org> Message-ID: Thanks Seth, Is it expected for Bro to detect specific Office documents? Older versions of Office documents (2003 and back I believe) had easily identifiable file magic that one could use to at least inform the observation that the file is an MS office document (ex: D0 CF 11 E0 A1 B1 1A E1). Now however, with 'office open xml', the initial file magic for modern Office documents is equivalent to that of a zip file at face value. However, if properly instrumented you can delineate between a regular ZIP file and an Office document, down to being able to state if one is a pptx, docx, etc... When extracting files off the wire that are what I believe to be modern MS Office documents, Bro tends to classify them as ZIP files via the MIME type.
This is technically correct; however, there are higher-fidelity attributes that may be absent, such as the fact that the file is an Office document for Word, Powerpoint, etc. I did a little playing around to see if perhaps Bro was simply claiming the 'ZIP' file magic identification was the strongest - and the other more application-centric file magic identifiers were buried in the variable f$mime_types mime_matches vector as defined here: https://www.bro.org/sphinx-git/scripts/base/init-bare.bro.html#type-mime_matches After running, it doesn't seem that Bro is identifying this. Below is the snip of code I am using to attempt this:

if ( f?$mime_types && |f$mime_types| == 2 )
    {
    if ( f$mime_types[0]$mime == "application/zip" )
        {
        ext = ext_map_zip_subset[f$mime_types[1]$mime];
        }
    }

The subset mime types I have defined to map the specific versions of MS Office are as follows: application/vnd.openxmlformats-officedocument.wordprocessingml.document application/vnd.openxmlformats-officedocument.spreadsheetml.sheet application/vnd.openxmlformats-officedocument.presentationml.presentation Also curious about JAR files; those seem to fold into the broader ZIP file magic but may be detected with something a little more specific. The magic/type list posted here (under 'zip') perhaps better illustrates the issue: http://www.garykessler.net/library/file_sigs.html Hope that helps explain better, Jason On Thu, Sep 18, 2014 at 2:47 PM, Seth Hall wrote: > > On Sep 18, 2014, at 1:12 PM, Jason Batchelor > wrote: > > > FWIW - I managed to cobble together the following poc once I stumbled > across 'exec' :) > > Yep, that's probably the correct thing to do for now. > > > The drawback is this required the use of 'when' which requires me to > wait a little bit before I can utilize the returned result.
> > Since Bro needs to keep running in a non-blocking manner all the time, > basically any solution you aim for will be using when since looking at the > file system is almost intrinsically a blocking operation. > > What I would recommend is that you have a scheduled event that regularly > checks the size of the directory and modifies a global value to let you > know if you're safe to extract or not. That will combine the benefit of > the asynchronous operation with the benefit of being able to check in an if > statement if your extraction directory is overly full. > > > It also seems that if I place an 'extract' analyzer inside the if > statement when a file can be written, I get the error 'field value missing > [dir_size$stdout]'. This probably relates to a timing issue on the part of > the issued command I am guessing? > > I'm not sure why you're seeing that problem, that seems weird. However, I > wouldn't expect that to work generally because once the when statement > returns could be after quite a bit of the file has already transferred. > > > Also interested as well in the MIME type question with respect to > Office documents. > > Yeah, what would help a lot there is for someone to pull together files > that they don't feel are being detected with accurate mime types and to > provide those files or links to files on the internet that don't get > detected accurately. > > .Seth > > -- > Seth Hall > International Computer Science Institute > (Bro) because everyone has a network > http://www.bro.org/ > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140918/f9decca5/attachment.html From jxbatchelor at gmail.com Fri Sep 19 08:57:49 2014 From: jxbatchelor at gmail.com (Jason Batchelor) Date: Fri, 19 Sep 2014 10:57:49 -0500 Subject: [Bro] File Extraction Related Scripting Questions In-Reply-To: References: <884D2774-E31E-43D3-B901-E68CD3CFC066@icir.org> Message-ID: Seth and group: As a tangential (but related) topic, it appears that the MIME type 'application/msword' is possibly being used to universally classify all legacy MS Office filetypes (doc, ppt, xls, etc), instead of just Office documents with the .doc extension. From what I understand, the MIME type 'application/msword' is specifically used for 'doc' files and is not universal to describe any or all Office documents. The following table supports this notion, as you can see different extensions within Office have their own respective MIME type. http://filext.com/faq/office_mime_types.php In testing this, I had a script that pulled files off the wire that matched the 'msword' MIME type, waited, then reviewed. I ended up with a diverse collection of office documents (doc/xls). It may be purposeful, since all OLECF files have the same magic (D0 CF 11 E0 A1 B1 1A E1). Is this the case? Would it be more appropriate/clear to have a MIME type such as 'application/ole'? Additionally, if you look 512 bytes in you can determine the type of file for older office documents. Is this an opportunity to create clearer, more specific file type signatures? I am certainly not an authority on this matter, but would appreciate any insight into the topic as it will help drive the direction of a solution I am developing. Thanks, Jason On Thu, Sep 18, 2014 at 4:26 PM, Jason Batchelor wrote: > Thanks Seth, > > Is it expected for Bro to detect specific Office documents? 
Older versions > of Office documents (2003 and back I believe) had easily identifiable file > magic that one could use to at least inform the observation that the file > is an MS office document (ex: D0 CF 11 E0 A1 B1 1A E1). > > Now however, with 'office open xml', the initial file magic for modern > Office documents is equivilent to that of a zip file at face value. > However, if properly insturmented you can deliniate between a regular ZIP > file and an Office document, down to being able to state if one is a pptx, > docx, etc... > > When extracting files off the wire that are what I believe to be modern MS > Office documents, Bro tends to classify them as ZIP files via the MIME > type. This is technically correct, however there are higher fidelity > attributes that may be absent, such as the fact that the file is an Office > document for Word, Powerpoint, etc. > > I did a little playing around to see if perhaps Bro was simply claiming > the 'ZIP' file magic identification was the strongest - and the other more > application centric file magic identifiers were burried in the variable > f$mime_types mime_matches vector as defined here: > > > https://www.bro.org/sphinx-git/scripts/base/init-bare.bro.html#type-mime_matches > > After running , it doesn't seem that Bro is identifying this. Below is the > snip of code I am using to attempt this: > > if ( f?$mime_types && |f$mime_types| == 2 ) > { > if ( f$mime_types[0]$mime == "application/zip" ) > { > ext = ext_map_zip_subset[f$mime_types[1]$mime]; > } > } > The subset mime types I have defined to map the specific versions of MS > Office are as follows: > > application/vnd.openxmlformats-officedocument.wordprocessingml.document > application/vnd.openxmlformats-officedocument.spreadsheetml.sheet > application/vnd.openxmlformats-officedocument.presentationml.presentation > > Also curious about JAR files too, those seem to fold into the broader ZIP > file magic but may be detected with something a little more specific. 
> > The magic/type list posted here (under 'zip') perhaps better illustrates > the issue: > http://www.garykessler.net/library/file_sigs.html > > Hope that helps explain better, > Jason > > > > > > On Thu, Sep 18, 2014 at 2:47 PM, Seth Hall wrote: > >> >> On Sep 18, 2014, at 1:12 PM, Jason Batchelor >> wrote: >> >> > FWIW - I managed to cobble together the following poc once I stumbled >> across 'exec' :) >> >> Yep, that's probably the correct thing to do for now. >> >> > The drawback is this required the use of 'when' which requires me to >> wait a little bit before I can utilize the returned result. >> >> Since Bro needs to keep running in a non-blocking manner all the time, >> basically any solution you aim for will be using when since looking at the >> file system is almost intrinsically a blocking operation. >> >> What I would recommend is that you have a scheduled event that regularly >> checks the size of the directory and modifies a global value to let you >> know if you're safe to extract or not. That will combine the benefit of >> the asynchronous operation with the benefit of being able to check in an if >> statement if your extraction directory is overly full. >> >> > It also seems that if I place an 'extract' analyzer inside the if >> statement when a file can be written, I get the error 'field value missing >> [dir_size$stdout]'. This probably relates to a timing issue on the part of >> the issued command I am guessing? >> >> I'm not sure why you're seeing that problem, that seems weird. However, >> I wouldn't expect that to work generally because once the when statement >> returns could be after quite a bit of the file has already transferred. >> >> > Also interested as well in the MIME type question with respect to >> Office documents. 
>> >> Yeah, what would help a lot there is for someone to pull together files >> that they don't feel are being detected with accurate mime types and to >> provide those files or links to files on the internet that don't get >> detected accurately. >> >> .Seth >> >> -- >> Seth Hall >> International Computer Science Institute >> (Bro) because everyone has a network >> http://www.bro.org/ >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140919/ced0e0ba/attachment.html From seth at icir.org Fri Sep 19 10:06:34 2014 From: seth at icir.org (Seth Hall) Date: Fri, 19 Sep 2014 13:06:34 -0400 Subject: [Bro] File Extraction Related Scripting Questions In-Reply-To: References: <884D2774-E31E-43D3-B901-E68CD3CFC066@icir.org> Message-ID: On Sep 19, 2014, at 11:57 AM, Jason Batchelor wrote: > It may be purposeful, since all OLECF files have the same magic (D0 CF 11 E0 A1 B1 1A E1). Is this the case? Would it be more appropriate/clear to have a MIME type such as 'application/ole'? Additionally, if you look 512 bytes in you can determine the type of file for older office documents. Is this an opportunity to create clearer, more specific file type signatures? I view this as the opportunity. We can make type signatures and indicators that fit our use case. Are you interested in leading an effort to clean up the MS Office document identification? That's a nice, tightly defined problem scope and it sounds like it's in an area that you need to address for yourself anyway. > I am certainly not an authority on this matter, but would appreciate any insight into the topic as it will help drive the direction of a solution I am developing. The general problem with this stuff is that everyone ends up saying the same thing. I'm sure that even libmagic developers would say the same thing because they are just trying to show mime types that are defined and allocated by IANA. 
This is an area where we're just going to have to let ourselves be free to extend and expand beyond libmagic or even IANA in some cases (they have a mechanism for unallocated extensions that we should evaluate closely). .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From phrackmod at gmail.com Fri Sep 19 10:29:56 2014 From: phrackmod at gmail.com (PeLo) Date: Fri, 19 Sep 2014 22:59:56 +0530 Subject: [Bro] Clarification needed Message-ID: <541C6814.8010104@gmail.com> Hello, I do have a question regarding data types. I would like to create a dynamic set of domain names in a script and would like to do a dns lookup on each of those domains in another script. The problem is that bro automatically tries to perform dns lookup on any domain names provided. Using a single domain name works well (*global restricted_domains = abc.com*) but when i try to assign a group of domains at a time (*global restriced_domains = { abc.com, 123.net };* or *global restricted_domains: set[addr]* and using *add* statement), I get an error which states "Type Clash". I would like to know if there is a way to create a set of hostnames so that I can work on them later. Since domain names are essentially strings, I think it would be nice to have an explicit conversion function to convert from strings to domain names. If there was one, the above problem would have been solved easily. - Pelo -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140919/48bc4fbd/attachment.html From jxbatchelor at gmail.com Fri Sep 19 10:41:14 2014 From: jxbatchelor at gmail.com (Jason Batchelor) Date: Fri, 19 Sep 2014 12:41:14 -0500 Subject: [Bro] File Extraction Related Scripting Questions In-Reply-To: References: <884D2774-E31E-43D3-B901-E68CD3CFC066@icir.org> Message-ID: > I view this as the opportunity. 
We can make type signatures and indicators that fit our use case. Are you interested in leading an effort to clean up the MS Office document identification? That's a nice, tightly defined problem scope and it sounds like it's in an area that you need to address for yourself anyway. I would be :). Would you mind pointing me in the right direction as to how I might make type signatures and indicators as you describe? If it is as simple as adding more detailed content to an existing file or library, could you point me to the file I should be tinkering with? I've done this sort of stuff before with Yara but have not explored doing so with Bro. Thanks, Jason On Fri, Sep 19, 2014 at 12:06 PM, Seth Hall wrote: > > On Sep 19, 2014, at 11:57 AM, Jason Batchelor > wrote: > > > It may be purposeful, since all OLECF files have the same magic (D0 CF > 11 E0 A1 B1 1A E1). Is this the case? Would it be more appropriate/clear to > have a MIME type such as 'application/ole'? Additionally, if you look 512 > bytes in you can determine the type of file for older office documents. Is > this an opportunity to create clearer, more specific file type signatures? > > I view this as the opportunity. We can make type signatures and > indicators that fit our use case. Are you interested in leading an effort > to clean up the MS Office document identification? That's a nice, tightly > defined problem scope and it sounds like it's in an area that you need to > address for yourself anyway. > > > I am certainly not an authority on this matter, but would appreciate > any insight into the topic as it will help drive the direction of a > solution I am developing. > > The general problem with this stuff is that everyone ends up saying that > same thing. I'm sure that even libmagic developers would say the same > thing because they are just trying to show mime types that are defined and > allocated by IANA. 
This is an area where we're just going to have to let > ourselves be free to extend and expand beyond libmagic or even IANA in some > cases (they have a mechanism for unallocated extensions that we should > evaluate closely). > > .Seth > > -- > Seth Hall > International Computer Science Institute > (Bro) because everyone has a network > http://www.bro.org/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140919/d4379bb3/attachment.html From jsiwek at illinois.edu Fri Sep 19 11:44:47 2014 From: jsiwek at illinois.edu (Siwek, Jon) Date: Fri, 19 Sep 2014 18:44:47 +0000 Subject: [Bro] Clarification needed In-Reply-To: <541C6814.8010104@gmail.com> References: <541C6814.8010104@gmail.com> Message-ID: On Sep 19, 2014, at 12:29 PM, PeLo wrote: > The problem is that bro automatically tries to perform dns lookup on any domain names provided. Using a single domain name works well (global restricted_domains = abc.com) but when i try to assign a group of domains at a time (global restriced_domains = { abc.com, 123.net }; or global restricted_domains: set[addr] and using add statement), I get an error which states "Type Clash". I would like to know if there is a way to create a set of hostnames so that I can work on them later. If you're purely using unquoted domain names, you can think of that as being automatically converted into a set[addr] at parse-time. E.g.

global mydomains: set[addr] = { bro.org, google.com };

for ( i in example.com )
    add mydomains[i];

print mydomains;

Note, the loop over example.com is because it's technically a set[addr] and you can only add a single element to the mydomains set at a time (at least I can't recall an easier way to merge two sets). > Since domain names are essentially strings, I think it would be nice to have an explicit conversion function to convert from strings to domain names. 
> If there was one, the above problem would have been solved easily. There's not really a distinct type for domain names - if the parser sees a string of characters that looks like a domain name and it's not in quotes, Bro will resolve those into a set[addr] as part of the initialization process. For run-time resolution of domain names, storing the domain name as a string data type (e.g. by putting quotes around it) and then passing that as an argument to the 'lookup_hostname' function may be what you want. - Jon From seth at icir.org Fri Sep 19 11:50:40 2014 From: seth at icir.org (Seth Hall) Date: Fri, 19 Sep 2014 14:50:40 -0400 Subject: [Bro] File Extraction Related Scripting Questions In-Reply-To: References: <884D2774-E31E-43D3-B901-E68CD3CFC066@icir.org> Message-ID: <154D582E-E62A-4FC6-9FC7-E0EF24AA366E@icir.org> On Sep 19, 2014, at 1:41 PM, Jason Batchelor wrote: > I would be :). Woo! > Would you mind pointing me in the right direction to how I might make type signatures and indicators as you describe. https://github.com/bro/bro/tree/master/scripts/base/frameworks/files/magic Any attention to those file detections would be great. I would also like to start getting some tests in place that verify we are detecting these files correctly going into the future. Feel free to ask if you have any questions. .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From seth at icir.org Fri Sep 19 12:01:03 2014 From: seth at icir.org (Seth Hall) Date: Fri, 19 Sep 2014 15:01:03 -0400 Subject: [Bro] Clarification needed In-Reply-To: <541C6814.8010104@gmail.com> References: <541C6814.8010104@gmail.com> Message-ID: On Sep 19, 2014, at 1:29 PM, PeLo wrote: > The problem is that bro automatically tries to perform dns lookup on any domain names provided. I don't tend to use that feature of Bro because I never have a problem that fits it quite right. 
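For reference, the two approaches described above can be sketched side by side; this is an untested sketch and the hostnames are only placeholders. Bare (unquoted) names are resolved once at parse time, while quoted strings go through lookup_hostname asynchronously at run time:

```
# Parse time: unquoted hostnames are resolved when the script loads
# and become a set[addr].
global restricted_ips: set[addr] = { bro.org, icir.org };

# Run time: lookup_hostname() must be used inside a when statement;
# its result is only available within the when body.
event bro_init()
    {
    when ( local addrs = lookup_hostname("bro.org") )
        {
        for ( a in addrs )
            add restricted_ips[a];
        }
    }
```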
> Using a single domain name works well (global restricted_domains = abc.com) but when i try to assign a group of domains at a time (global restriced_domains = { abc.com, 123.net }; or global restricted_domains: set[addr] and using add statement), I get an error which states "Type Clash". I would like to know if there is a way to create a set of hostnames so that I can work on them later. I would have to see your code to know exactly what was failing. > Since domain names are essentially strings, I think it would be nice to have an explicit conversion function to convert from strings to domain names. If there was one, the above problem would have been solved easily. https://www.bro.org/sphinx/scripts/base/bif/bro.bif.bro.html#id-lookup_hostname https://www.bro.org/sphinx/scripts/base/bif/bro.bif.bro.html#id-lookup_hostname_txt https://www.bro.org/sphinx/scripts/base/bif/bro.bif.bro.html#id-lookup_addr You can see an example using one of these scripts here: https://github.com/bro/bro/blob/master/scripts/policy/protocols/ssh/interesting-hostnames.bro#L34 (you have to use them in when statements) .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From phrackmod at gmail.com Fri Sep 19 12:43:28 2014 From: phrackmod at gmail.com (PeLo) Date: Sat, 20 Sep 2014 01:13:28 +0530 Subject: [Bro] Clarification needed In-Reply-To: References: <541C6814.8010104@gmail.com> Message-ID: Thanks for the links. Below is a sample code. Error messages are also included in comments. 
event bro_init() {
    ### This Works fine
    local amazon_ips = amazon.com;
    for (i in amazon_ips)
        print(i);

    ### Error occurs here
    ### Error Output
    ### ============
    ### error : type clash (addr and {74.125.236.213,2404:6800:4007:803::1015})
    ### error : type mismatch ({74.125.236.213,2404:6800:4007:803::1015} and addr)
    local google_ips: set[addr] = { mail.google.com, maps.google.com, youtube.com };
    for (i in google_ips)
        print(i);

    ### No errors and output here
    ### Anything wrong with the code???
    local ip_list: set[addr];
    local domain_list: set[string] = { "google.com", "bro.org" };

    for (domain in domain_list){
        when( local temp = lookup_hostname(domain) ){
            for (ip in temp)
                add ip_list[ip];
        }
    }
    for (i in ip_list)
        print(i);
}

On Sat, Sep 20, 2014 at 12:31 AM, Seth Hall wrote: > > On Sep 19, 2014, at 1:29 PM, PeLo wrote: > > > The problem is that bro automatically tries to perform dns lookup on any > domain names provided. > > I don't tend to use that feature of Bro because I never have a problem > that fits it quite right. > > > Using a single domain name works well (global restricted_domains = > abc.com) but when i try to assign a group of domains at a time (global > restriced_domains = { abc.com, 123.net }; or global restricted_domains: > set[addr] and using add statement), I get an error which states "Type > Clash". I would like to know if there is a way to create a set of hostnames > so that I can work on them later. > > I would have to see your code to know exactly what was failing. > > > Since domain names are essentially strings, I think it would be nice to > have an explicit conversion function to convert from strings to domain > names. If there was one, the above problem would have been solved easily. 
> > > https://www.bro.org/sphinx/scripts/base/bif/bro.bif.bro.html#id-lookup_hostname > > https://www.bro.org/sphinx/scripts/base/bif/bro.bif.bro.html#id-lookup_hostname_txt > https://www.bro.org/sphinx/scripts/base/bif/bro.bif.bro.html#id-lookup_addr > > You can see an example using one of these scripts here: > > https://github.com/bro/bro/blob/master/scripts/policy/protocols/ssh/interesting-hostnames.bro#L34 > > (you have to use them in when statements) > > .Seth > > -- > Seth Hall > International Computer Science Institute > (Bro) because everyone has a network > http://www.bro.org/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140920/9c21a5f8/attachment.html From lists at g-clef.net Fri Sep 19 12:57:51 2014 From: lists at g-clef.net (Aaron Gee-Clough) Date: Fri, 19 Sep 2014 15:57:51 -0400 Subject: [Bro] Multiple Intel framework hits for same connection? Message-ID: <541C8ABF.7070500@g-clef.net> Hello, all, I have a question about the intel framework: if a flow matches both an Intel::ADDR and Intel::CERT_HASH (for example), will the intel framework generate notice logs for both matches, or just one? Right now it looks like it's just flagging on one, but I'd like to make sure I haven't done something wrong. Thanks. aaron From seth at icir.org Fri Sep 19 13:15:11 2014 From: seth at icir.org (Seth Hall) Date: Fri, 19 Sep 2014 16:15:11 -0400 Subject: [Bro] Multiple Intel framework hits for same connection? In-Reply-To: <541C8ABF.7070500@g-clef.net> References: <541C8ABF.7070500@g-clef.net> Message-ID: <33F71F16-3AA4-4B0C-8542-9651698811BA@icir.org> On Sep 19, 2014, at 3:57 PM, Aaron Gee-Clough wrote: > I have a question about the intel framework: if a flow matches both an > Intel::ADDR and Intel::CERT_HASH (for example), will the intel framework > generate notice logs for both matches, or just one? It should definitely match both. That's a problem if it's not. 
.Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From seth at icir.org Fri Sep 19 13:19:53 2014 From: seth at icir.org (Seth Hall) Date: Fri, 19 Sep 2014 16:19:53 -0400 Subject: [Bro] Clarification needed In-Reply-To: References: <541C6814.8010104@gmail.com> Message-ID: <11E74A09-E81C-41D2-AA06-AEC12CEC5E95@icir.org> On Sep 19, 2014, at 3:43 PM, PeLo wrote: > ### Error occurs here > ### Error Output > ### ============ > ### error : type clash (addr and {74.125.236.213,2404:6800:4007:803::1015}) > ### error : type mismatch ({74.125.236.213,2404:6800:4007:803::1015} and addr) > > local google_ips: set[addr] = { mail.google.com, maps.google.com, youtube.com }; > for (i in google_ips) print(i); Ugh, I suspect this has something to do with using the "{ }" constructor syntax somewhere that it shouldn't be used. I.e., you've encountered a wart. > ### No errors and output here > ### Anything wrong with the code??? You have an issue where you are trying to synchronously access data from asynchronous operations. :) When statements return immediately and the body only executes after the condition becomes true. You are printing before you've actually gotten a response from the DNS server. Let me try restructuring your code a bit... http://try.bro.org/#/trybro/saved/89b6a856-c785-4cea-bfc3-206947bc054a Does that explain it a bit better? 
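A rough sketch of that kind of restructuring (untested; the pending-lookup counter is just one hypothetical way to know when all of the asynchronous lookups have finished):

```
redef exit_only_after_terminate = T;

global ip_list: set[addr];
global pending_lookups = 0;

event bro_init()
    {
    local domain_list: set[string] = { "google.com", "bro.org" };
    pending_lookups = |domain_list|;

    for ( domain in domain_list )
        {
        when ( local temp = lookup_hostname(domain) )
            {
            for ( ip in temp )
                add ip_list[ip];

            # Print only after every lookup has returned.
            --pending_lookups;
            if ( pending_lookups == 0 )
                {
                for ( i in ip_list )
                    print i;
                terminate();
                }
            }
        }
    }
```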
.Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From jsiwek at illinois.edu Fri Sep 19 13:22:58 2014 From: jsiwek at illinois.edu (Siwek, Jon) Date: Fri, 19 Sep 2014 20:22:58 +0000 Subject: [Bro] Clarification needed In-Reply-To: References: <541C6814.8010104@gmail.com> Message-ID: <500297F3-3C50-495A-B134-291F7FD0A09D@illinois.edu> On Sep 19, 2014, at 2:43 PM, PeLo wrote: > ### Error occurs here > ### Error Output > ### ============ > ### error : type clash (addr and {74.125.236.213,2404:6800:4007:803::1015}) > ### error : type mismatch ({74.125.236.213,2404:6800:4007:803::1015} and addr) > > local google_ips: set[addr] = { mail.google.com, maps.google.com, youtube.com }; > for (i in google_ips) print(i); Moving the declaration up into a global makes it work for me. > ### No errors and output here > ### Anything wrong with the code??? > > local ip_list: set[addr]; > local domain_list: set[string] = { "google.com", "bro.org" }; > > for (domain in domain_list){ > when( local temp = lookup_hostname(domain) ){ > for (ip in temp) > add ip_list[ip]; > } > } > for (i in ip_list) print(i); > } In the absence of input sources (e.g. reading live traffic), it may terminate before the lookup returns. The way to tell it not to do that is to redef 'exit_only_after_terminate'. Then you also have to do the printing within the body of the when statement as that's when results are actually available. For example see: http://try.bro.org/#/trybro/saved/46c2c025-6462-41e2-a581-14a9c3eba656 - Jon From phrackmod at gmail.com Fri Sep 19 13:44:41 2014 From: phrackmod at gmail.com (PeLo) Date: Sat, 20 Sep 2014 02:14:41 +0530 Subject: [Bro] Clarification needed In-Reply-To: <11E74A09-E81C-41D2-AA06-AEC12CEC5E95@icir.org> References: <541C6814.8010104@gmail.com> <11E74A09-E81C-41D2-AA06-AEC12CEC5E95@icir.org> Message-ID: 
Regarding the *Schedule* statement used in the code, I see that the execution is halted until the specified time expires. Since Bro executes all the event handlers in a FIFO style, if by mistake I wrote a schedule statement with a time interval of say 10 sec, will this then block the execution of all the event handlers in the queue, thereby delaying the whole process? - Pelo On Sat, Sep 20, 2014 at 1:49 AM, Seth Hall wrote: > > On Sep 19, 2014, at 3:43 PM, PeLo wrote: > > > ### Error occurs here > > ### Error Output > > ### ============ > > ### error : type clash (addr and > {74.125.236.213,2404:6800:4007:803::1015}) > > ### error : type mismatch > ({74.125.236.213,2404:6800:4007:803::1015} and addr) > > > > local google_ips: set[addr] = { mail.google.com, maps.google.com, > youtube.com }; > > for (i in google_ips) print(i); > > Ugh, I suspect this has something to do with using the "{ }" constructor > syntax somewhere that it shouldn't be used. I.e., you've encountered a > wart. > > > ### No errors and output here > > ### Anything wrong with the code??? > > You have an issue where you are trying to synchronously access data from > asynchronous operations. :) > > When statements return immediately and the body only executes after the > condition becomes true. You are printing before you've actually gotten a > response from the DNS server. Let me try restructuring your code a bit... > > > http://try.bro.org/#/trybro/saved/89b6a856-c785-4cea-bfc3-206947bc054a > > Does that explain it a bit better? > > .Seth > > -- > Seth Hall > International Computer Science Institute > (Bro) because everyone has a network > http://www.bro.org/ > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140920/2e89183e/attachment.html From phrackmod at gmail.com Fri Sep 19 13:53:42 2014 From: phrackmod at gmail.com (PeLo) Date: Sat, 20 Sep 2014 02:23:42 +0530 Subject: [Bro] Clarification needed In-Reply-To: <500297F3-3C50-495A-B134-291F7FD0A09D@illinois.edu> References: <541C6814.8010104@gmail.com> <500297F3-3C50-495A-B134-291F7FD0A09D@illinois.edu> Message-ID: Regarding 'exit_only_after_terminate', does the execution repeat a specific loop until it satisfies the condition? As per the logic, it should be repeating the loop from the statement right after the when statement. Could you clear this up for me? On Sat, Sep 20, 2014 at 1:52 AM, Siwek, Jon wrote: > > On Sep 19, 2014, at 2:43 PM, PeLo wrote: > > > ### Error occurs here > > ### Error Output > > ### ============ > > ### error : type clash (addr and > {74.125.236.213,2404:6800:4007:803::1015}) > > ### error : type mismatch > ({74.125.236.213,2404:6800:4007:803::1015} and addr) > > > > local google_ips: set[addr] = { mail.google.com, maps.google.com, > youtube.com }; > > for (i in google_ips) print(i); > > Moving the declaration up into a global makes it work for me. > > > ### No errors and output here > > ### Anything wrong with the code??? > > > > local ip_list: set[addr]; > > local domain_list: set[string] = { "google.com", "bro.org" }; > > > > for (domain in domain_list){ > > when( local temp = lookup_hostname(domain) ){ > > for (ip in temp) > > add ip_list[ip]; > > } > > } > > for (i in ip_list) print(i); > > } > > In the absence of input sources (e.g. reading live traffic), it may > terminate before the lookup returns. The way to tell it not to do that is > to redef 'exit_only_after_terminate'. Then you also have to do the printing > within the body of the when statement as that's when results are actually > available. 
For example see: > > http://try.bro.org/#/trybro/saved/46c2c025-6462-41e2-a581-14a9c3eba656 > > - Jon -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140920/636bf760/attachment.html From gc355804 at ohio.edu Fri Sep 19 17:12:56 2014 From: gc355804 at ohio.edu (Clark, Gilbert) Date: Sat, 20 Sep 2014 00:12:56 +0000 Subject: [Bro] Clarification needed In-Reply-To: References: <541C6814.8010104@gmail.com> <11E74A09-E81C-41D2-AA06-AEC12CEC5E95@icir.org>, Message-ID: <1411171976050.63414@ohio.edu> Neither timers (schedule blocks) nor triggers block execution. Instead, when bro sees a timer / trigger, it just makes a note of it and moves on to the next line of code it sees. In the case of the timer described below, bro would keep doing other stuff for 1 second before eventually coming back to execute the code in the { }. In the case of the typo, bro would keep doing other stuff for 10 seconds before eventually coming back to execute the code in the { }. Triggers operate in a similar fashion to timers, except that the conditions for *every* trigger are evaluated at least once / every packet bro observes. In general, this means that *every registered trigger* is going to add per-packet overhead, so there's a pretty good argument to be made that relatively few triggers should be active at once. Also, as far as I know, exit_only_after_terminate is a global flag that will simply request that bro wait to exit until there's an explicit request to do so [1]. It shouldn't really have any impact on bro's execution otherwise: it's only there to allow operations with longer execution times to complete before bro actually exits. As a note, there are actually relatively few blocking calls supported by bro just because blocking script execution for any reason is going to eat through queue space *incredibly* quickly (and likely lead to burst losses). 
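To make the non-blocking timer behavior concrete, a small sketch (untested; the event name is hypothetical): scheduling returns immediately, and the handler body runs only when the timer fires.

```
event check_status()
    {
    # Runs roughly 10 seconds after being scheduled; rescheduling
    # from inside the handler makes it periodic.
    print "periodic check";
    schedule 10 sec { check_status() };
    }

event bro_init()
    {
    schedule 10 sec { check_status() };
    # Execution continues immediately; nothing blocks here.
    }
```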
HTH, Gilbert [1] http://comments.gmane.org/gmane.comp.security.detection.bro/5998 From: bro-bounces at bro.org on behalf of PeLo Sent: Friday, September 19, 2014 4:44 PM To: Seth Hall Cc: bro at bro.org Subject: Re: [Bro] Clarification needed Regarding the Schedule statement used in the code, I see that the execution is halted until the specified time expires. Since Bro executes all the event handlers in a FIFO style, if by mistake I wrote a schedule statement with a time interval of say 10 sec, will this then block the execution of all the event handlers in the queue, thereby delaying the whole process? - Pelo On Sat, Sep 20, 2014 at 1:49 AM, Seth Hall wrote: On Sep 19, 2014, at 3:43 PM, PeLo wrote: > ### Error occurs here > ### Error Output > ### ============ > ### error : type clash (addr and {74.125.236.213,2404:6800:4007:803::1015}) > ### error : type mismatch ({74.125.236.213,2404:6800:4007:803::1015} and addr) > > local google_ips: set[addr] = { mail.google.com, maps.google.com, youtube.com }; > for (i in google_ips) print(i); Ugh, I suspect this has something to do with using the "{ }" constructor syntax somewhere that it shouldn't be used. I.e., you've encountered a wart. > ### No errors and output here > ### Anything wrong with the code??? You have an issue where you are trying to synchronously access data from asynchronous operations. :) When statements return immediately and the body only executes after the condition becomes true. You are printing before you've actually gotten a response from the DNS server. Let me try restructuring your code a bit... http://try.bro.org/#/trybro/saved/89b6a856-c785-4cea-bfc3-206947bc054a Does that explain it a bit better? 
.Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From gc355804 at ohio.edu Fri Sep 19 21:28:44 2014 From: gc355804 at ohio.edu (Clark, Gilbert) Date: Sat, 20 Sep 2014 04:28:44 +0000 Subject: [Bro] Clarification needed In-Reply-To: <1411171976050.63414@ohio.edu> References: <541C6814.8010104@gmail.com> <11E74A09-E81C-41D2-AA06-AEC12CEC5E95@icir.org>, , <1411171976050.63414@ohio.edu> Message-ID: <1411187321938.87048@ohio.edu> In hindsight, that triggers paragraph was poorly written. Let me try again: Triggers operate in a similar fashion to timers, except that the conditions for *every* trigger are evaluated at least once per packet bro observes. Note, however, that bro is pretty intelligent about the way it evaluates the condition for a trigger. Bro keeps track of which values a when() depends on, and will only *re-execute* the code in the when() if one of those values has changed. Thus, the total overhead per trigger per packet should work out to be a function of *not only* how many active triggers there are, *but also* what exactly those triggers have defined in their when(). More experienced bro folks can feel free to correct / refine the above if desired :) Regardless, I'm afraid the last e-mail I sent may have come across more strongly than I intended. The point I was trying to make there wasn't that "defining triggers is terribly expensive and no one should ever do it", but instead that the cost of maintaining a trigger could be relatively more expensive than maintaining a timer, and that their use should therefore be considered more carefully. Cheers, Gilbert ________________________________________ From: bro-bounces at bro.org on behalf of Clark, Gilbert Sent: Friday, September 19, 2014 8:12 PM To: PeLo; Seth Hall Cc: bro at bro.org Subject: Re: [Bro] Clarification needed [...] _______________________________________________ Bro mailing list bro at bro-ids.org http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From gc355804 at ohio.edu Sat Sep 20 12:48:39 2014 From: gc355804 at ohio.edu (Clark, Gilbert) Date: Sat, 20 Sep 2014 19:48:39 +0000 Subject: [Bro] Clarification needed In-Reply-To: <1411187321938.87048@ohio.edu> References: <541C6814.8010104@gmail.com> <11E74A09-E81C-41D2-AA06-AEC12CEC5E95@icir.org>, , <1411171976050.63414@ohio.edu>,<1411187321938.87048@ohio.edu> Message-ID: <1411242518557.85285@ohio.edu> I went and double-checked what I wrote about triggers (again), and found that they don't seem to work quite how I thought they did. There seems to be a lot *less* overhead than I was expecting there to be [1].
So, rather than trying to correct myself again and possibly putting more FUD on the list: would someone (e.g. Robin, Seth) mind offering the Right Explanation (tm) here? It looks like I don't understand triggers quite as well as I originally thought I did, so time to retract and punt the question! Cheers, Gilbert

[1] Script I used to test the overhead introduced by a set of triggers:

redef exit_only_after_terminate = T;

module Counters;

global conn_count: int;
global exec_count: int;
global eval_count: int;
global pending_count: int;
global shared_var: int;
global target_count: int;

event bro_init()
	{
	Counters::conn_count = 0;
	Counters::exec_count = 0;
	Counters::eval_count = 0;
	Counters::pending_count = 0;
	Counters::shared_var = 0;
	Counters::target_count = 0;
	}

function evalfunction(): bool
	{
	eval_count = eval_count + 1;
	return shared_var > target_count;
	}

event new_packet(c: connection, p: pkt_hdr)
	{
	pending_count = pending_count + Instrumentation::GetPendingTriggerCount();
	}

event connection_established(c: connection)
	{
	conn_count = conn_count + 1;
	shared_var = shared_var + 1;
	when ( evalfunction() )
		{
		exec_count = exec_count + 1;
		target_count += 100;
		print("New target count:");
		print target_count;
		}
	}

event bro_done()
	{
	print("Counters:");
	print("Executed");
	print(exec_count);
	print("Number of connections");
	print(conn_count);
	print("Number of evaluations");
	print(eval_count);
	print("Total pending");
	print(pending_count);
	print("Shared value");
	print(shared_var);
	print("Next target count");
	print(target_count);
	}

________________________________________ From: bro-bounces at bro.org on behalf of Clark, Gilbert Sent: Saturday, September 20, 2014 12:28 AM To: PeLo; Seth Hall Cc: bro at bro.org Subject: Re: [Bro] Clarification needed [...] _______________________________________________ Bro mailing list bro at bro-ids.org http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From roihat168 at yahoo.com Sun Sep 21 04:37:27 2014 From: roihat168 at yahoo.com (roi hatam) Date: Sun, 21 Sep 2014 04:37:27 -0700 Subject: [Bro] broccoli and elasticsearch Message-ID: <1411299447.55888.YahooMailNeo@web162105.mail.bf1.yahoo.com> Hello, I configured my bro to send http requests to broccoli and elasticsearch. My question is: can I send to broccoli the _index, _type and _id which elasticsearch creates, or should I search for every request myself with my unique request_id? -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140921/7855257f/attachment.html From blackhole.em at gmail.com Mon Sep 22 07:19:00 2014 From: blackhole.em at gmail.com (Joe Blow) Date: Mon, 22 Sep 2014 10:19:00 -0400 Subject: [Bro] Memory leak? Message-ID: Hey guys, I'm trying to figure out if there is a memory leak in Bro that's causing me issues.
I wonder if it's a memory leak because every few hours I'll have a worker die like this: If you want to help us debug this problem, then please forward this mail to reports at bro.org Bro 2.3.1 Linux 2.6.32-358.11.1.el6.x86_64 ==== No reporter.log ==== stderr.log listening on bond1, capture length 8192 bytes 1411357505.663118 processing suspended 1411357505.663118 processing continued 1411392324.017784 received termination signal 1411392324.017784 105126164 packets received on interface bond1, 0 dropped ==== stdout.log max memory size (kbytes, -m) unlimited data seg size (kbytes, -d) unlimited virtual memory (kbytes, -v) unlimited core file size (blocks, -c) unlimited ==== .cmdline -i bond1 -U .status -p broctl -p broctl-live -p local -p worker-0-3 local.bro broctl base/frameworks/cluster local-worker.bro broctl/auto ==== .env_vars PATH=/usr/local/bro/bin:/usr/local/bro/share/broctl/scripts:/sbin:/bin:/usr/sbin:/usr/bin BROPATH=/usr/local/bro/spool/installed-scripts-do-not-touch/site::/usr/local/bro/spool/installed-scripts-do-not-touch/auto:/usr/local/bro/share/bro:/usr/local/bro/share/bro/policy:/usr/local/bro/share/bro/site CLUSTER_NODE=worker-0-3 ==== .status TERMINATED [atexit] ==== No prof.log ==== No packet_filter.log ==== No loaded_scripts.log -- [Automatically generated.] The picture of the system's memory looks like this: [image: Inline image 1] This sort of corresponds with the crashing of Bro. Any ideas? Where would you guys start digging if the workers just started dying (seemingly) randomly? Cheers, JB -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140922/3dd55f80/attachment.html -------------- next part -------------- A non-text attachment was scrubbed...
Name: image.png Type: image/png Size: 21941 bytes Desc: not available Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140922/3dd55f80/attachment.bin From hosom at battelle.org Mon Sep 22 07:24:55 2014 From: hosom at battelle.org (Hosom, Stephen M) Date: Mon, 22 Sep 2014 14:24:55 +0000 Subject: [Bro] Memory leak? In-Reply-To: References: Message-ID: At a glance, with only the information available, it looks more like Bro is legitimately filling up the system memory and the kernel is OOM-killing your Bro worker. How much memory is in this worker, and what kind of connection is it monitoring? Are you willing to provide a copy of your reporter.log? From: bro-bounces at bro.org [mailto:bro-bounces at bro.org] On Behalf Of Joe Blow Sent: Monday, September 22, 2014 10:19 AM To: bro at bro-ids.org List Subject: [Bro] Memory leak? [...] -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140922/dcfb4e67/attachment.html -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.jpg Type: image/jpeg Size: 19357 bytes Desc: image002.jpg Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140922/dcfb4e67/attachment.jpg From blackhole.em at gmail.com Mon Sep 22 13:26:40 2014 From: blackhole.em at gmail.com (Joe Blow) Date: Mon, 22 Sep 2014 16:26:40 -0400 Subject: [Bro] Memory leak? In-Reply-To: References: Message-ID: Fantastic advice. The reporter.log showed a bug for an empty variable inside a when statement. Fixed it with an if-check and I'm not seeing any errors. Hopefully this will quell the memory issues. Thanks for the help. Cheers, JB On Mon, Sep 22, 2014 at 10:24 AM, Hosom, Stephen M wrote: > At a glance, with only the information available, it looks more like Bro > is legitimately filling up the system memory and the kernel is OOM-killing > your Bro worker. How much memory is in this worker, what kind of connection > is it monitoring? Are you willing to provide a copy of your reporter.log?
> [...] > > Cheers, > JB > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140922/9eca8f00/attachment.html -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.jpg Type: image/jpeg Size: 19357 bytes Desc: not available Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140922/9eca8f00/attachment.jpg From anthony.kasza at gmail.com Mon Sep 22 14:20:07 2014 From: anthony.kasza at gmail.com (anthony kasza) Date: Mon, 22 Sep 2014 14:20:07 -0700 Subject: [Bro] Stepping Stone Detection Message-ID: I've noticed some remnants of Vern's work around detecting systems used as stepping stones within Bro's source. Could someone on the list shed light on why and when it was deprecated? Many thanks, -AK -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140922/1f890c36/attachment.html From bro at pingtrip.com Mon Sep 22 15:04:23 2014 From: bro at pingtrip.com (Dave Crawford) Date: Mon, 22 Sep 2014 18:04:23 -0400 Subject: [Bro] Cluster Best Practices Message-ID: <3511A10F-E9A3-48AD-B62E-E840194EE8C2@pingtrip.com> I'm looking for feedback (or pointers to existing write-ups) on "best practices" for Bro cluster deployments. I'm planning to deploy workers to multiple geographic datacenters and I'm looking to weigh the pros/cons of two scenarios: 1) Global Manager for all workers - Should there also be a global proxy, or are there benefits to having one in each datacenter? 2) Local Manager (per datacenter) for workers in that specific datacenter - Proxy would be local as well A global manager would obviously be easier to manage/maintain, but my concerns are: - Amount of "long-haul" traffic being generated to push log events to the manager - If the manager crashes, do the workers queue events until they re-connect to the manager?
In a scenario of separate managers per datacenter: - Can proxies still "sync" with each other? (e.g. push intel to workers watching similar traffic in each datacenter) Thanks! -Dave -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140922/2971a110/attachment.html From hosom at battelle.org Mon Sep 22 16:45:10 2014 From: hosom at battelle.org (Hosom, Stephen M) Date: Mon, 22 Sep 2014 23:45:10 +0000 Subject: [Bro] Memory leak? In-Reply-To: References: Message-ID: No problem at all! Errors in script land DO cause memory leaks :) -----Original Message----- From: Joe Blow [blackhole.em at gmail.com] Sent: Monday, September 22, 2014 04:27 PM Eastern Standard Time To: Hosom, Stephen M Cc: bro at bro-ids.org List Subject: Re: [Bro] Memory leak? [...] -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140922/4d6b7f67/attachment.html -------------- next part -------------- A non-text attachment was scrubbed...
Name: image002.jpg Type: image/jpeg Size: 19357 bytes Desc: image002.jpg Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140922/4d6b7f67/attachment.jpg From anthony.kasza at gmail.com Mon Sep 22 19:27:56 2014 From: anthony.kasza at gmail.com (anthony kasza) Date: Mon, 22 Sep 2014 19:27:56 -0700 Subject: [Bro] Stream Extraction from Scriptland Message-ID: Hello List, I'd first like to point out something I never knew existed: bro/aux/bro-aux/devel-tools/extract-conn-by-uid will build a BPF from a UID in a conn.log file and extract that stream from a pcap. Nifty. That got me thinking about whether it would be possible to call extract-conn-by-uid from scriptland with the exec framework. I wrote a few PoC scripts, but things became rather complicated when I couldn't figure out when the conn.log file is available on disk from scriptland. I'm curious if anyone has done something similar to this before. I then started playing around with the set_record_packets bif, but I could not seem to get that function to do anything with packets that weren't TCP data packets. The documentation around this function says nothing specifically about TCP and only references 'connections', which in Bro parlance includes TCP, UDP, and ICMP. I've included a sample trace file, bro script, and some notes that resemble a bug report around the set_record_packets functionality. I suppose my root question is this: is there a way to use Bro scripting to identify a connection of interest and have it written to disk (either with the exec framework or with set_record_packets) instead of including dumb BPFs with Bro's invocation?
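The exec-framework idea sketched above might look roughly like the following (a sketch only, assuming Bro 2.3's Exec module; the tool path and the "<uid>" argument are placeholders, and the question of when conn.log is actually flushed to disk still applies):

```bro
@load base/utils/exec

event bro_init()
	{
	# Placeholders for illustration: real path and a real conn.log UID
	# would go here.
	local cmd = "/path/to/extract-conn-by-uid <uid> trace.pcap out.pcap";

	# Exec::run is asynchronous and must be used inside a when() block.
	when ( local res = Exec::run([$cmd=cmd]) )
		{
		if ( res$exit_code == 0 )
			print "extraction finished";
		}
	}
```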
Thanks all, -AK -------------- next part -------------- When running: bro connection_extractor.bro -Cr sample.pcap -w interesting.pcap Debian version 7.5 Bro version 2.3-178 Expected results: interesting.pcap contains only packets relating to DNS connections Actual results: interesting.pcap contains ICMP packets, TCP signaling (SYN, SYNACK, FINACK) packets, and DNS packets sample.pcap was generated by running the following commands: ping -c 3 example.com curl ww.google.com Does set_record_packets only work with TCP connections? -------------- next part -------------- A non-text attachment was scrubbed... Name: connection_extractor.bro Type: application/octet-stream Size: 806 bytes Desc: not available Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140922/110fcfd4/attachment.obj -------------- next part -------------- A non-text attachment was scrubbed... Name: sample.pcap Type: application/cap Size: 22739 bytes Desc: not available Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140922/110fcfd4/attachment.bin From seth at icir.org Tue Sep 23 05:42:32 2014 From: seth at icir.org (Seth Hall) Date: Tue, 23 Sep 2014 08:42:32 -0400 Subject: [Bro] Stream Extraction from Scriptland In-Reply-To: References: Message-ID: On Sep 22, 2014, at 10:27 PM, anthony kasza wrote: > I supposed my root question is this: is there a way to use Bro > scripting to identify a connection of interest and have it written to > disk (either with the exec framework or with set_record_packet) > instead of including dumb BPFs with Bro's invocation? That's one of the features of the TimeMachine framework that I haven't finished yet. :) You can use the set_record_packets BiF as you found too, but that requires that you are running Bro with the -w flag to write packets to disk.
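A rough sketch of the -w approach Seth mentions (run as e.g. `bro -i eth0 -w interesting.pcap script.bro`; the DNS-port test is just an example heuristic, not a complete filter):

```bro
event new_connection(c: connection)
	{
	# With -w, packets are recorded to the output file by default.
	# Turn recording off for connections we don't care about, keeping
	# recording enabled only for DNS-like traffic.
	if ( c$id$resp_p != 53/udp && c$id$resp_p != 53/tcp )
		set_record_packets(c$id, F);
	}
```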
Ultimately I think that something like the TimeMachine approach is the most scalable, because you could even do your bulk packet recording on a separate device and just have Bro communicate with it when you want to extract some packets (even going back in time). .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From seth at icir.org Tue Sep 23 05:50:21 2014 From: seth at icir.org (Seth Hall) Date: Tue, 23 Sep 2014 08:50:21 -0400 Subject: [Bro] Cluster Best Practices In-Reply-To: <3511A10F-E9A3-48AD-B62E-E840194EE8C2@pingtrip.com> References: <3511A10F-E9A3-48AD-B62E-E840194EE8C2@pingtrip.com> Message-ID: On Sep 22, 2014, at 6:04 PM, Dave Crawford wrote: > I'm looking for feedback (or pointers to existing write-ups) on "best practices" for Bro cluster deployments. I'm planning to deploy workers to multiple geographic datacenters and I'm looking to weigh the pros/cons of two scenarios: > > 1) Global Manager for all workers > - Should there also be a global proxy or are there benefits to having one in each datacenter? Right now you can only have a single manager, and your proxies can't currently be tied to specific workers, so if you set up one large cluster across multiple data centers it's likely that you would end up with workers connecting to proxies in the other data center. > 2) Local Manager (per datacenter) for workers in that specific datacenter > - Proxy would be local as well I think my previous answer covered this as well. :) > A global manager would obviously be easier to manage/maintain but my concerns are: > - Amount of "long-haul" traffic being generated to push log events to the manager > - If the manager crashes are the workers queuing events until they re-connect to the manager? Right now, workers don't queue events like this.
Events are delivered immediately to everyone subscribed to them (so if a host has crashed and is not connected, it's obviously not subscribed at that moment). > In a scenario of separate managers per datacenter: > - Can proxies still "sync" with each other? (e.g. push intel to workers watching similar traffic in each datacenter) Intel data is not synchronized through the proxies. I chose to do manual synchronization through events for the intel framework. Your questions bring up a larger goal we have of creating hierarchical clusters. This is something where I think the current broctl overhaul is the first step toward that, but there is a lot more work to do. Ultimately what I'd like to see is that no matter how large your cluster is or how geographically dispersed, you can run your entire infrastructure as one Bro cluster. Unfortunately we aren't there yet, and it's going to be a while before we are. For now, I would set up separate clusters in each data center. .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From vlad at grigorescu.org Tue Sep 23 07:08:16 2014 From: vlad at grigorescu.org (Vlad Grigorescu) Date: Tue, 23 Sep 2014 09:08:16 -0500 Subject: [Bro] Stepping Stone Detection In-Reply-To: References: Message-ID: If I recall correctly, the detection doesn't work well on clusters: the same worker would need to see all traffic associated with a given stepping stone (both traffic from the internet to that hop, and from that hop to the target system). --Vlad On Mon, Sep 22, 2014 at 4:20 PM, anthony kasza wrote: > I've noticed some remnants of Vern's work around detecting systems used as > stepping stones within Bro's source. Could someone on the list shed light > on why and when it was deprecated?
Many thanks, > > -AK > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140923/eea47184/attachment.html From anthony.kasza at gmail.com Tue Sep 23 07:24:51 2014 From: anthony.kasza at gmail.com (anthony kasza) Date: Tue, 23 Sep 2014 07:24:51 -0700 Subject: [Bro] Stepping Stone Detection In-Reply-To: References: Message-ID: That makes sense. Thanks for satisfying my curiosity. -AK On Sep 23, 2014 7:08 AM, "Vlad Grigorescu" wrote: > If I recall correctly, I believe the detection doesn't work well on > clusters. The same worker would need to see all traffic associated with a > given stepping stone (both traffic from the internet to that hop, and from > that hop to the target system). > > --Vlad > > On Mon, Sep 22, 2014 at 4:20 PM, anthony kasza > wrote: > >> I've noticed some remnants of Vern's work around detecting systems used >> as stepping stones within Bro's source. Could someone on the list shed >> light on why and when it was deprecated? Many thanks, >> >> -AK >> >> _______________________________________________ >> Bro mailing list >> bro at bro-ids.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140923/b4252f5f/attachment.html From anthony.kasza at gmail.com Tue Sep 23 07:35:33 2014 From: anthony.kasza at gmail.com (anthony kasza) Date: Tue, 23 Sep 2014 07:35:33 -0700 Subject: [Bro] Stream Extraction from Scriptland In-Reply-To: References: Message-ID: I agree with your point on scalability and look forward to the TimeMachine framework. It does seem that set_record_packets only works on TCP data packets, though. 
I'm not sure if that's an issue with the function or with the documentation about the function. The script I included sets recording to false for all new connections with set_record_packets, then sets it back to true from the connection_state_remove event if they contain DNS. The notes.txt file shows the bro command I ran (including the -w option) against the sample.pcap file, included previously, to produce a new trace file with unexpected contents. Is this a bug in the function or am I reading the doc incorrectly? Thanks all (Seth). -AK On Sep 23, 2014 5:42 AM, "Seth Hall" wrote: > > On Sep 22, 2014, at 10:27 PM, anthony kasza > wrote: > > > I suppose my root question is this: is there a way to use Bro > > scripting to identify a connection of interest and have it written to > > disk (either with the exec framework or with set_record_packet) > > instead of including dumb BPFs with Bro's invocation? > > That's one of the features of the TimeMachine framework that I haven't > finished yet. :) > > You can use the set_record_packets BiF as you found too, but that requires > that you are running Bro with the -w flag to write packets to disk. > Ultimately I think that something like the TimeMachine approach is the most > scalable because you could even do your bulk packet recording on a > separate device and just have Bro communicate to it when you want to > extract some packets (even going back in time). > > .Seth > > -- > Seth Hall > International Computer Science Institute > (Bro) because everyone has a network > http://www.bro.org/ > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140923/2473a67e/attachment.html From robin at icir.org Tue Sep 23 07:54:35 2014 From: robin at icir.org (Robin Sommer) Date: Tue, 23 Sep 2014 07:54:35 -0700 Subject: [Bro] Stepping Stone Detection In-Reply-To: References: Message-ID: <20140923145435.GL18134@icir.org> On Tue, Sep 23, 2014 at 09:08 -0500, Vlad Grigorescu wrote: > If I recall correctly, I believe the detection doesn't work well on > clusters. Yeah, that's one problem. Another (related) is that conceptually the stepping stone detector is hardcoded into the core system, rather than implemented in script-land as pretty much everything else is. Robin -- Robin Sommer * Phone +1 (510) 722-6541 * robin at icir.org ICSI/LBNL * Fax +1 (510) 666-2956 * www.icir.org/robin From robin at icir.org Tue Sep 23 07:59:38 2014 From: robin at icir.org (Robin Sommer) Date: Tue, 23 Sep 2014 07:59:38 -0700 Subject: [Bro] Plugins In-Reply-To: References: Message-ID: <20140923145938.GM18134@icir.org> On Wed, Sep 17, 2014 at 14:19 -0700, anthony kasza wrote: > I'm curious if anyone has had success with the new plugin structure (for > the bro binary, not broctl plugins). Has anyone used it yet? If so, what > have you done? I've used it, naturally. :) The bro-plugins repository has all the plugins I've implemented so far. There's also a set of unit tests in testing/btest/plugins, which can serve as examples as well. And it's easy to miss right now that there's some documentation on writing plugins: https://www.bro.org/sphinx-git/devel/plugins.html (that's not yet linked from anywhere, need to fix). 
Robin -- Robin Sommer * Phone +1 (510) 722-6541 * robin at icir.org ICSI/LBNL * Fax +1 (510) 666-2956 * www.icir.org/robin From ian.richmond at ge.com Tue Sep 23 07:59:48 2014 From: ian.richmond at ge.com (Richmond, Ian (GE Corporate)) Date: Tue, 23 Sep 2014 14:59:48 +0000 Subject: [Bro] peer_description in intel framework Message-ID: Morning Bro List, I've noticed in my scripting attempts that I can't seem to identify the worker that matched an item from the intel framework. This works for instance when trying to get the peer_description into the conn log like this (after a redef):

event connection_state_remove(c: connection)
    {
    if ( c?$conn )
        c$conn$worker_name = peer_description;
    }

But if the same thing is tried with the Intel framework:

event Intel::match(s: Intel::Seen, items: set[Intel::Item])
    {
    if ( s?$conn )
        s$worker_name = peer_description;
    }

The worker_name remains "manager". Are intel framework hits processed from worker to manager in a way that loses the peer_description tied to the intel hit? Is there a way to script around this and deliver the peer_description to the intel notice? Am I doing something wrong? Thanks. Ian Richmond From robin at icir.org Tue Sep 23 08:07:23 2014 From: robin at icir.org (Robin Sommer) Date: Tue, 23 Sep 2014 08:07:23 -0700 Subject: [Bro] Stream Extraction from Scriptland In-Reply-To: References: Message-ID: <20140923150723.GN18134@icir.org> On Tue, Sep 23, 2014 at 07:35 -0700, anthony kasza wrote: > It does seem that set_record_packets only works on TCP data packets, > though. Actually, I'm surprised that it works with TCP at all. The problem with set_record_packets() is that at the time when an event handler calls it, the packet may already be gone at Bro's lower levels (handlers are executed asynchronously and Bro doesn't buffer any packets). Have you tried calling set_record_packets(c$id, T) in new_connection() for UDP traffic? (Understood that that isn't what you want, just to see if it works). 
Regarding TM integration, the old 1.5 time machine script may actually still work; it has the functionality. Robin -- Robin Sommer * Phone +1 (510) 722-6541 * robin at icir.org ICSI/LBNL * Fax +1 (510) 666-2956 * www.icir.org/robin From michalpurzynski1 at gmail.com Tue Sep 23 08:46:17 2014 From: michalpurzynski1 at gmail.com (=?UTF-8?B?TWljaGHFgiBQdXJ6ecWEc2tp?=) Date: Tue, 23 Sep 2014 17:46:17 +0200 Subject: [Bro] Stepping Stone Detection In-Reply-To: <20140923145435.GL18134@icir.org> References: <20140923145435.GL18134@icir.org> Message-ID: This is harder than it sounds. Bro could be used to provide input to some kind of machine learning system that discovers patterns in how/when your internal servers are accessed and warns on something that's 'interesting', with potentially a scoring system. On Tue, Sep 23, 2014 at 4:54 PM, Robin Sommer wrote: > > > On Tue, Sep 23, 2014 at 09:08 -0500, Vlad Grigorescu wrote: > >> If I recall correctly, I believe the detection doesn't work well on >> clusters. > > Yeah, that's one problem. Another (related) is that conceptually the > stepping stone detector is hardcoded into the core system, rather than > implemented in script-land as pretty much everything else is. > > Robin > > -- > Robin Sommer * Phone +1 (510) 722-6541 * robin at icir.org > ICSI/LBNL * Fax +1 (510) 666-2956 * www.icir.org/robin > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -- Michał Purzyński From seth at icir.org Tue Sep 23 08:48:23 2014 From: seth at icir.org (Seth Hall) Date: Tue, 23 Sep 2014 11:48:23 -0400 Subject: [Bro] peer_description in intel framework In-Reply-To: References: Message-ID: On Sep 23, 2014, at 10:59 AM, Richmond, Ian (GE Corporate) wrote: > I've noticed in my scripting attempts that I can't seem to identify the worker that matched an item from the intel framework. 
> This works for instance when trying to get the peer_description into the conn log like this ( after a redef ): Arg! Total design oversight on my part! > event Intel::match(s: Intel::Seen, items: set[Intel::Item]) { > if (s?$conn) > s$worker_name = peer_description; > } This makes sense because the Intel::match event is actually generated on the manager in clusters right now. It's even documented. :) https://www.bro.org/sphinx-git/scripts/base/frameworks/intel/main.bro.html#id-Intel::match > Is there a way to script around this and deliver the peer_description to the intel notice? Am I doing something wrong? It would be easy to add that into the intel framework. I'll do a commit now that adds it (but it will only be in the master branch of our git repository for now). .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From seth at icir.org Tue Sep 23 09:30:47 2014 From: seth at icir.org (Seth Hall) Date: Tue, 23 Sep 2014 12:30:47 -0400 Subject: [Bro] peer_description in intel framework In-Reply-To: References: Message-ID: <376CEF07-1426-4AF5-AAC6-3CC103FFE952@icir.org> On Sep 23, 2014, at 11:48 AM, Seth Hall wrote: > It would be easy to add that into the intel framework. I'll do a commit now that adds it (but it will only be in the master branch of our git repository for now). I lied. It's in fastpath instead of master, but we'll make sure and get it there soon. The only change is that in the seen record, you will have a field named "node" that will have the name of the node where the match happened. 
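[Editor's note: once that change lands, usage should look something like the following sketch. This is hedged: it assumes the new field is indeed named "node" as described above, and that you are running the patched branch.]

```
event Intel::match(s: Intel::Seen, items: set[Intel::Item])
    {
    # "node" carries the name of the cluster node where the hit occurred;
    # the field only exists with the change described above.
    if ( s?$node )
        print fmt("intel hit seen on node %s", s$node);
    }
```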
.Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From damian.gerow at shopify.com Tue Sep 23 11:14:43 2014 From: damian.gerow at shopify.com (Damian Gerow) Date: Tue, 23 Sep 2014 14:14:43 -0400 Subject: [Bro] Packet loss during log rotation Message-ID: I'm trying to set up a new standalone Bro instance, but I seem to be experiencing regular packet loss. The host is processing minimal traffic -- always <10Mbps, usually around 2Mbps -- but I've noticed that the packet loss almost always occurs at time of log rotation. Below is a quick sampling of the notice.log creation dates (^#open) and all instances of packet loss, covering today thus far. Is it normal that Bro drops packets during log rotation? Is there some kind of tuning I can/should be doing to address this? Or is this just a red herring?

#open 2014-09-23-00-02-10
2014-09-23T00:02:09+0000 PacketFilter::Dropped_Packets 2866 packets dropped after filtering, 143726 received, 143726 on link
2014-09-23T00:12:09+0000 PacketFilter::Dropped_Packets 94 packets dropped after filtering, 145724 received, 145724 on link
#open 2014-09-23-01-02-10
2014-09-23T01:02:09+0000 PacketFilter::Dropped_Packets 2803 packets dropped after filtering, 152045 received, 152045 on link
#open 2014-09-23-02-02-10
2014-09-23T02:02:09+0000 PacketFilter::Dropped_Packets 2772 packets dropped after filtering, 145405 received, 145405 on link
#open 2014-09-23-03-02-10
2014-09-23T03:02:09+0000 PacketFilter::Dropped_Packets 3197 packets dropped after filtering, 141184 received, 141184 on link
2014-09-23T03:57:09+0000 PacketFilter::Dropped_Packets 6 packets dropped after filtering, 140874 received, 140874 on link
#open 2014-09-23-04-02-10
2014-09-23T04:02:09+0000 PacketFilter::Dropped_Packets 2599 packets dropped after filtering, 136745 received, 136745 on link
#open 2014-09-23-05-02-10
2014-09-23T05:02:09+0000 PacketFilter::Dropped_Packets 2448 packets dropped after filtering, 134282 received, 134282 on link
#open 2014-09-23-06-02-10
2014-09-23T06:02:09+0000 PacketFilter::Dropped_Packets 2921 packets dropped after filtering, 131329 received, 131329 on link
#open 2014-09-23-07-02-10
2014-09-23T07:02:09+0000 PacketFilter::Dropped_Packets 3230 packets dropped after filtering, 139087 received, 139087 on link
#open 2014-09-23-08-00-07
2014-09-23T08:02:09+0000 PacketFilter::Dropped_Packets 44840 packets dropped after filtering, 179889 received, 179884 on link
#open 2014-09-23-09-02-10
2014-09-23T09:02:09+0000 PacketFilter::Dropped_Packets 3291 packets dropped after filtering, 135096 received, 135095 on link
#open 2014-09-23-10-02-10
2014-09-23T10:02:09+0000 PacketFilter::Dropped_Packets 2428 packets dropped after filtering, 134041 received, 134041 on link
#open 2014-09-23-11-02-10
2014-09-23T11:02:09+0000 PacketFilter::Dropped_Packets 2544 packets dropped after filtering, 131655 received, 131655 on link
#open 2014-09-23-12-02-10
2014-09-23T12:02:09+0000 PacketFilter::Dropped_Packets 2655 packets dropped after filtering, 136899 received, 136899 on link
#open 2014-09-23-13-02-10
2014-09-23T13:02:09+0000 PacketFilter::Dropped_Packets 2722 packets dropped after filtering, 142520 received, 142520 on link

-------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140923/67fc2386/attachment.html From seth at icir.org Tue Sep 23 11:33:37 2014 From: seth at icir.org (Seth Hall) Date: Tue, 23 Sep 2014 14:33:37 -0400 Subject: [Bro] Packet loss during log rotation In-Reply-To: References: Message-ID: <0A094B6D-251F-4109-9733-27DA9F98BFBA@icir.org> On Sep 23, 2014, at 2:14 PM, Damian Gerow wrote: > I'm trying to set up a new standalone Bro instance, but I seem to be experiencing regular packet loss. The host is processing minimal traffic -- always <10Mbps, usually around 2Mbps -- but I've noticed that the packet loss almost always occurs at time of log rotation. 
Are you running in cluster mode or standalone? If you're running in standalone, it's very possible that something is blocking briefly when the logs are rotated, which could cause a small backup of packets, leading to loss. .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From ian.richmond at ge.com Tue Sep 23 11:36:46 2014 From: ian.richmond at ge.com (Richmond, Ian (GE Corporate)) Date: Tue, 23 Sep 2014 18:36:46 +0000 Subject: [Bro] peer_description in intel framework In-Reply-To: <376CEF07-1426-4AF5-AAC6-3CC103FFE952@icir.org> References: <376CEF07-1426-4AF5-AAC6-3CC103FFE952@icir.org> Message-ID: Awesome, thanks! On 9/23/14, 12:30 PM, "Seth Hall" wrote: > >On Sep 23, 2014, at 11:48 AM, Seth Hall wrote: > >> It would be easy to add that into the intel framework. I'll do a >>commit now that adds it (but it will only be in the master branch of our >>git repository for now). > >I lied. It's in fastpath instead of master, but we'll make sure and get >it there soon. The only change is that in the seen record, you will have >a field named "node" that will have the name of the node where the match >happened. > > .Seth > >-- >Seth Hall >International Computer Science Institute >(Bro) because everyone has a network >http://www.bro.org/ > From damian.gerow at shopify.com Tue Sep 23 11:46:14 2014 From: damian.gerow at shopify.com (Damian Gerow) Date: Tue, 23 Sep 2014 14:46:14 -0400 Subject: [Bro] Packet loss during log rotation In-Reply-To: <0A094B6D-251F-4109-9733-27DA9F98BFBA@icir.org> References: <0A094B6D-251F-4109-9733-27DA9F98BFBA@icir.org> Message-ID: On Tue, Sep 23, 2014 at 2:33 PM, Seth Hall wrote: > > I'm trying to set up a new standalone Bro instance, but I seem to be > experiencing regular packet loss. The host is processing minimal traffic > -- always <10Mbps, usually around 2Mbps -- but I've noticed that the packet > loss almost always occurs at time of log rotation. 
> > Are you running in cluster mode or standalone? If you're running in > standalone, it's very possible that something is blocking briefly when the > logs are rotated, which could cause a small backup of packets, leading to > loss. > Standalone, as I slowly work towards cluster mode. Is there a single thread handling both reading packets and disk I/O? Even at 5Mbps, I would have expected a single thread to be able to keep up with everything, unless it's waiting for compression. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140923/76d0b3c7/attachment.html From seth at icir.org Tue Sep 23 11:51:58 2014 From: seth at icir.org (Seth Hall) Date: Tue, 23 Sep 2014 14:51:58 -0400 Subject: [Bro] Packet loss during log rotation In-Reply-To: References: <0A094B6D-251F-4109-9733-27DA9F98BFBA@icir.org> Message-ID: On Sep 23, 2014, at 2:46 PM, Damian Gerow wrote: > Standalone, as I slowly work towards cluster mode. Switching to cluster mode with a single worker process is easy. Just use the cluster config example and only configure a single worker. Things should work basically the same as before. > Is there a single thread handling both reading packets and disk I/O? Even at 5Mbps, I would have expected a single thread to be able to keep up with everything, unless it's waiting for compression. Sort of. The actual file I/O is threaded, but I think that the way the external script that performs the rotation is called might accidentally block in some cases. Probably an area we should look into more closely sometime. 
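[Editor's note: for reference, a one-worker node.cfg in the spirit of what Seth describes might look like the sketch below; the host and interface values are placeholders to adapt to your box.]

```
# node.cfg for a single-box "cluster": manager, proxy, and one worker,
# all on the same host.
[manager]
type=manager
host=localhost

[proxy-1]
type=proxy
host=localhost

[worker-1]
type=worker
host=localhost
interface=eth0
```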
.Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From anthony.kasza at gmail.com Tue Sep 23 19:27:23 2014 From: anthony.kasza at gmail.com (anthony kasza) Date: Tue, 23 Sep 2014 19:27:23 -0700 Subject: [Bro] Stream Extraction from Scriptland In-Reply-To: <20140923150723.GN18134@icir.org> References: <20140923150723.GN18134@icir.org> Message-ID: Hi Robin, The same results occur (UDP, ICMP and TCP signalling packets are present in the interesting.pcap file) when running: bro -Cr sample.pcap new_test.bro -w interesting.pcap Where sample.pcap is the previously included sample.pcap trace file and new_test.bro includes the following code:

event new_connection(c: connection)
    {
    if ( "DNS" in c$service ) # this is surprisingly set before this event is handled, so we can use it
        {
        set_record_packets(c$id, T);
        }
    else
        {
        set_record_packets(c$id, F);
        }
    }

To me, it seems the function only works with TCP connections. -AK On Tue, Sep 23, 2014 at 8:07 AM, Robin Sommer wrote: > > > On Tue, Sep 23, 2014 at 07:35 -0700, anthony kasza wrote: > >> It does seem that set_record_packets only works on TCP data packets, >> though. > > Actually, I'm surprised that it works with TCP at all. The problem > with set_record_packets() is that at the time when an event handler > calls it, the packet may already be gone at Bro's lower levels > (handlers are executed asynchronously and Bro doesn't buffer any > packets). > > Have you tried calling set_record_packets(c$id, T) in new_connection() > for UDP traffic? (Understood that that isn't what you want, just to > see if it works). > > Regarding TM integration, the old 1.5 time machine script may actually > still work, it has the functionality. 
> > Robin > > > -- > Robin Sommer * Phone +1 (510) 722-6541 * robin at icir.org > ICSI/LBNL * Fax +1 (510) 666-2956 * www.icir.org/robin From damian.gerow at shopify.com Wed Sep 24 09:34:55 2014 From: damian.gerow at shopify.com (Damian Gerow) Date: Wed, 24 Sep 2014 12:34:55 -0400 Subject: [Bro] Packet loss during log rotation In-Reply-To: References: <0A094B6D-251F-4109-9733-27DA9F98BFBA@icir.org> Message-ID: On Tue, Sep 23, 2014 at 2:51 PM, Seth Hall wrote: > > On Sep 23, 2014, at 2:46 PM, Damian Gerow > wrote: > > > Standalone, as I slowly work towards cluster mode. > > Switching to cluster mode with a single worker process is easy. Just use > the cluster config example and only configure a single worker. Things > should work basically the same as before. > Beauty. It does. I was even able to run both configurations together for a while, to confirm that everything was working as expected. Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140924/9b14d187/attachment.html From seth at icir.org Wed Sep 24 10:35:52 2014 From: seth at icir.org (Seth Hall) Date: Wed, 24 Sep 2014 13:35:52 -0400 Subject: [Bro] Packet loss during log rotation In-Reply-To: References: <0A094B6D-251F-4109-9733-27DA9F98BFBA@icir.org> Message-ID: On Sep 24, 2014, at 12:34 PM, Damian Gerow wrote: > Beauty. It does. I was even able to run both configurations together for a while, to confirm that everything was working as expected. You aren't seeing the packet loss when files rotate anymore? I think we still need to look into the rotation issue, I think that a few other people have seen similar effects. 
.Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From damian.gerow at shopify.com Wed Sep 24 10:49:39 2014 From: damian.gerow at shopify.com (Damian Gerow) Date: Wed, 24 Sep 2014 13:49:39 -0400 Subject: [Bro] Packet loss during log rotation In-Reply-To: References: <0A094B6D-251F-4109-9733-27DA9F98BFBA@icir.org> Message-ID: On Wed, Sep 24, 2014 at 1:35 PM, Seth Hall wrote: > > Beauty. It does. I was even able to run both configurations together > for a while, to confirm that everything was working as expected. > > You aren't seeing the packet loss when files rotate anymore? I think we > still need to look into the rotation issue, I think that a few other people > have seen similar effects. Correct. Granted, it's only been about three rotations with the cluster configuration, but none of those rotations have resulted in packet loss, whereas almost every rotation in 'standalone' does. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140924/99553dfd/attachment.html From seth at icir.org Wed Sep 24 10:56:47 2014 From: seth at icir.org (Seth Hall) Date: Wed, 24 Sep 2014 13:56:47 -0400 Subject: [Bro] Packet loss during log rotation In-Reply-To: References: <0A094B6D-251F-4109-9733-27DA9F98BFBA@icir.org> Message-ID: <4FDAD609-0351-4334-B64C-F309B129D7DE@icir.org> On Sep 24, 2014, at 1:49 PM, Damian Gerow wrote: > Correct. Granted, it's only been about three rotations with the cluster configuration, but none of those rotations have resulted in packet loss, whereas almost every rotation in 'standalone' does. Great! 
.Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From bro at pingtrip.com Wed Sep 24 19:20:44 2014 From: bro at pingtrip.com (Dave Crawford) Date: Wed, 24 Sep 2014 22:20:44 -0400 Subject: [Bro] Cluster Best Practices In-Reply-To: References: <3511A10F-E9A3-48AD-B62E-E840194EE8C2@pingtrip.com> Message-ID: On Sep 23, 2014, at 8:50 AM, Seth Hall wrote: > > On Sep 22, 2014, at 6:04 PM, Dave Crawford wrote: > >> I'm looking for feedback (or pointers to existing write-ups) on "best practices" for Bro cluster deployments. I'm planning to deploy workers to multiple geographic datacenters and I'm looking to weigh the pros/cons of two scenarios: >> >> 1) Global Manager for all workers >> - Should there also be a global proxy or are there benefits to having one in each datacenter? > > Right now you can only have a single manager, and your proxies can't currently be connected with specific workers, so it's likely that if you set up one large cluster across multiple data centers you would end up with workers connecting to proxies in the other data center. > >> 2) Local Manager (per datacenter) for workers in that specific datacenter >> - Proxy would be local as well > > I think my previous answer answered this as well. :) > >> A global manager would obviously be easier to manage/maintain but my concerns are: >> - Amount of "long-haul" traffic being generated to push log events to the manager >> - If the manager crashes, are the workers queuing events until they re-connect to the manager? > > Right now, workers don't queue events like this. Events are delivered immediately to everyone subscribed to them (so if a host is crashed and not connected, it's obviously not subscribed at that moment). > >> In a scenario of separate managers per datacenter: >> - Can proxies still "sync" with each other? (e.g. 
push intel to workers watching similar traffic in each datacenter) > > Intel data is not synchronized through the proxies. I chose to do manual synchronization through events for the intel framework. > > Your questions bring up a larger goal we have of creating hierarchical clusters. This is something where I think the current broctl overhaul is the first step toward that, but there is a lot more work to do. Ultimately what I'd like to see is that no matter how large your cluster is or how geographically dispersed, you can run your entire infrastructure as one Bro cluster. Unfortunately we aren't there yet and it's going to be a while before we are. > > For now, I would set up separate clusters in each data center. > > .Seth > > -- > Seth Hall > International Computer Science Institute > (Bro) because everyone has a network > http://www.bro.org/ > Thanks for the feedback, Seth. In the scenario of running separate clusters in each data center, is it possible to sync Intel between clusters? For example, inbound email is load balanced across multiple data centers, as well as outbound client internet traffic. My goal is to extract URLs from inbound emails and push them into the Intel framework for alerting when outbound traffic matches (e.g. a user clicked a link in an email). Would that require all data centers to be in a single cluster? -Dave From seth at icir.org Wed Sep 24 19:31:25 2014 From: seth at icir.org (Seth Hall) Date: Wed, 24 Sep 2014 22:31:25 -0400 Subject: [Bro] Cluster Best Practices In-Reply-To: References: <3511A10F-E9A3-48AD-B62E-E840194EE8C2@pingtrip.com> Message-ID: <248C4165-7CC4-4643-973D-0BF6DDA7A1D0@icir.org> On Sep 24, 2014, at 10:20 PM, Dave Crawford wrote: > Thanks for the feedback Seth. In the scenario of running separate clusters in each data center; is it possible to sync Intel between clusters? Unfortunately not without some work. 
> For example, inbound email is load balanced across multiple data centers, as well as outbound client internet traffic. My goal is to extract URLs from inbound emails and push them into the Intel framework for alerting when outbound traffic matches (e.g. user clicked a link in an email), would that require all data centers to be in a single cluster? This sort of stuff is one of the many reasons we've been talking about doing hierarchical Bro clusters because it should make it straightforward to actually maintain intelligence data like you'd like to. .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From scampbell at lbl.gov Wed Sep 24 19:53:49 2014 From: scampbell at lbl.gov (Scott Campbell) Date: Wed, 24 Sep 2014 22:53:49 -0400 Subject: [Bro] CVE-2014-6271/ detection script Message-ID: <542383BD.5090903@lbl.gov> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 I just posted a quick policy file which should look at header fields and examine the data section for the telltale formatting of a bash function. I have *not* tested this extensively, so please test before deploying. Happy to update with better regex etc... 
https://github.com/set-element/misc-scripts/blob/master/header-test.bro cheers, scott -----BEGIN PGP SIGNATURE----- Version: GnuPG/MacGPG2 v2.0.19 (Darwin) Comment: GPGTools - http://gpgtools.org iEYEARECAAYFAlQjg70ACgkQK2Plq8B7ZByhoACgzW+/Ks+8LzNErWW+TiVOnn8C T+kAnjmS6ilxS6NbxFkybu8iI53NAq3Y =d76q -----END PGP SIGNATURE----- From liam.randall at gmail.com Wed Sep 24 20:12:25 2014 From: liam.randall at gmail.com (Liam Randall) Date: Wed, 24 Sep 2014 23:12:25 -0400 Subject: [Bro] CVE-2014-6271/ detection script In-Reply-To: <542383BD.5090903@lbl.gov> References: <542383BD.5090903@lbl.gov> Message-ID: Hey Scott, Playing around with it, I couldn't get it to work via http headers without starting with: "() { " I unsuccessfully tried URI encoding a few other things as well, so for now I put up: \x28\x29\x20\x7b\x20 Here's my crack at it: https://github.com/CriticalStack/bro-scripts/tree/master/bash-cve-2014-6271 There are going to be a lot of other exploit vectors for this: DHCP, CUPS maybe? I'm going to try to update mine as new POCs emerge. Would love feedback or examples to update the regex. Liam On Wed, Sep 24, 2014 at 10:53 PM, Scott Campbell wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > I just posted a quick policy file which should look at header fields > and examine the data section for the telltale formatting of a bash > function. > > I have *not* tested this extensively, so please test before deploying. > Happy to update with better regex etc... 
> > https://github.com/set-element/misc-scripts/blob/master/header-test.bro > > cheers, > scott > -----BEGIN PGP SIGNATURE----- > Version: GnuPG/MacGPG2 v2.0.19 (Darwin) > Comment: GPGTools - http://gpgtools.org > > iEYEARECAAYFAlQjg70ACgkQK2Plq8B7ZByhoACgzW+/Ks+8LzNErWW+TiVOnn8C > T+kAnjmS6ilxS6NbxFkybu8iI53NAq3Y > =d76q > -----END PGP SIGNATURE----- > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140924/9f670cfb/attachment.html From gfaulkner.nsm at gmail.com Wed Sep 24 20:18:31 2014 From: gfaulkner.nsm at gmail.com (Gary Faulkner) Date: Wed, 24 Sep 2014 22:18:31 -0500 Subject: [Bro] CVE-2014-6271/ detection script In-Reply-To: <542383BD.5090903@lbl.gov> References: <542383BD.5090903@lbl.gov> Message-ID: <54238987.4000405@gmail.com> Critical Stack has a version as well: https://github.com/CriticalStack/bro-scripts/tree/cve-2014-6271/bash-cve-2014-6271 On 9/24/2014 9:53 PM, Scott Campbell wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > I just posted a quick policy file which should look at header fields > and examine the data section for the telltale formatting of a bash > function. > > I have *not* tested this extensively, so please test before deploying. > Happy to update with better regex etc... 
> > https://github.com/set-element/misc-scripts/blob/master/header-test.bro > > cheers, > scott > -----BEGIN PGP SIGNATURE----- > Version: GnuPG/MacGPG2 v2.0.19 (Darwin) > Comment: GPGTools - http://gpgtools.org > > iEYEARECAAYFAlQjg70ACgkQK2Plq8B7ZByhoACgzW+/Ks+8LzNErWW+TiVOnn8C > T+kAnjmS6ilxS6NbxFkybu8iI53NAq3Y > =d76q > -----END PGP SIGNATURE----- > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From gfaulkner.nsm at gmail.com Wed Sep 24 21:29:44 2014 From: gfaulkner.nsm at gmail.com (Gary Faulkner) Date: Wed, 24 Sep 2014 23:29:44 -0500 Subject: [Bro] CVE-2014-6271/ detection script In-Reply-To: References: <542383BD.5090903@lbl.gov> Message-ID: <54239A38.9010501@gmail.com> Oops, somehow I missed your email when I replied. Good work guys. On 9/24/2014 10:12 PM, Liam Randall wrote: > Hey Scott, > > Playing around with it, I couldn't get it to work via http headers with out > starting with: "() { " > > I unsuccessfully tried URI encoding a few other things as well, so for now > I put up: > \x28\x29\x20\x7b\x20 > > Here's my crack at it: > https://github.com/CriticalStack/bro-scripts/tree/master/bash-cve-2014-6271 > > There are going to be a lot of other exploit vectors for this- dhcp, cups > maybe? I'm going to try and update mine as new POCs emerge. > > Would love feedback or examples to update the regex. > > Liam > > On Wed, Sep 24, 2014 at 10:53 PM, Scott Campbell wrote: > >> -----BEGIN PGP SIGNED MESSAGE----- >> Hash: SHA1 >> >> I just posted a quick policy file which should look at header fields >> and examine the data section for the telltale formatting of a bash >> function. >> >> I have *not* tested this extensively, so please test before deploying. >> Happy to update with better regex etc... 
>> >> https://github.com/set-element/misc-scripts/blob/master/header-test.bro >> >> cheers, >> scott >> -----BEGIN PGP SIGNATURE----- >> Version: GnuPG/MacGPG2 v2.0.19 (Darwin) >> Comment: GPGTools - http://gpgtools.org >> >> iEYEARECAAYFAlQjg70ACgkQK2Plq8B7ZByhoACgzW+/Ks+8LzNErWW+TiVOnn8C >> T+kAnjmS6ilxS6NbxFkybu8iI53NAq3Y >> =d76q >> -----END PGP SIGNATURE----- >> _______________________________________________ >> Bro mailing list >> bro at bro-ids.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >> > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140924/5c37d8e0/attachment.html From nweaver at ICSI.Berkeley.EDU Thu Sep 25 08:06:43 2014 From: nweaver at ICSI.Berkeley.EDU (Nicholas Weaver) Date: Thu, 25 Sep 2014 08:06:43 -0700 Subject: [Bro] CVE-2014-6271/ detection script In-Reply-To: <54238987.4000405@gmail.com> References: <542383BD.5090903@lbl.gov> <54238987.4000405@gmail.com> Message-ID: <4AB31B83-E2A2-4230-974A-1CA4030C124E@icsi.berkeley.edu> On Sep 24, 2014, at 8:18 PM, Gary Faulkner wrote: > Critical Stack has a version as well: > https://github.com/CriticalStack/bro-scripts/tree/cve-2014-6271/bash-cve-2014-6271 The constraints based on experimenting that I just did to independently validate Liam's script: The regexp it's keying in on: /\x28\x29\x20\x7b\x20/ "() { " Is correct: adding/changing whitespace or other characters between the () or ) {, and removing the space after the { cause this to fail (but {\t MIGHT work, but my limited shell fu is not able to check that case). However, does anyone know if any web servers will urldecode headers?
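The constraints Nicholas describes are easy to sanity-check outside of Bro. A quick Python sketch (the header values here are made-up test strings, not captured traffic) confirms that the byte pattern the scripts key on matches only the exact "() { " prefix form:

```python
import re

# The exact byte sequence the detection scripts key on: "() { "
SHELLSHOCK_RE = re.compile(rb"\x28\x29\x20\x7b\x20")

# (value, should_match) pairs; the values are made-up test strings.
cases = [
    (b"() { :;}; /bin/ping -c 1 203.0.113.9", True),  # canonical PoC form
    (b"(){ :;}; id", False),    # no space between () and {
    (b"() {:;}; id", False),    # no space after the brace
    (b"( ) { :;}; id", False),  # extra space inside the parens
]

for value, should_match in cases:
    assert bool(SHELLSHOCK_RE.search(value)) == should_match, value
```

As Nicholas notes, any whitespace variation around the parens or brace breaks the match, which is also what breaks the exploit itself in unpatched bash.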
-- Nicholas Weaver it is a tale, told by an idiot, nweaver at icsi.berkeley.edu full of sound and fury, 510-666-2903 .signifying nothing PGP: http://www1.icsi.berkeley.edu/~nweaver/data/nweaver_pub.asc -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140925/b32b9902/attachment.bin From jxbatchelor at gmail.com Thu Sep 25 10:06:32 2014 From: jxbatchelor at gmail.com (Jason Batchelor) Date: Thu, 25 Sep 2014 12:06:32 -0500 Subject: [Bro] CVE-2014-6271/ detection script In-Reply-To: <4AB31B83-E2A2-4230-974A-1CA4030C124E@icsi.berkeley.edu> References: <542383BD.5090903@lbl.gov> <54238987.4000405@gmail.com> <4AB31B83-E2A2-4230-974A-1CA4030C124E@icsi.berkeley.edu> Message-ID: I took the version from Critical Stack and added the ability to whitelist certain ranges. It may be valuable if, for example, you have an external auditing service like White Hat Security conducting scans that you don't deem actionable. Perhaps a broader discussion, but would it be a good idea to have a global 'ip_whitelist' variable in Bro (assuming it doesn't have one)? Something that is present, and must always be defined by the end user. Just a thought, it might encourage future script writers to provision for things like this. Of course, there is an even broader philosophical discussion on whitelisting IP ranges, which is why I would suggest leaving the variable as something that needs to be defined by the end user.
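One subtlety worth keeping in mind if the whitelist grows beyond a single range: a test of the form "source not in this subnet" evaluated per subnet inside the alert loop will still fire for a whitelisted host on the iterations over the other subnets. The membership check should cover all ranges first. A minimal Python sketch of that semantics using the standard ipaddress module (the 192.0.2.0/24 entry is a made-up second range for illustration):

```python
from ipaddress import ip_address, ip_network

# Whitelisted scanner ranges; 63.128.163.0/27 is the WhiteHat range from
# the script, 192.0.2.0/24 is an illustrative second entry.
WHITELIST = [ip_network("63.128.163.0/27"), ip_network("192.0.2.0/24")]

def is_whitelisted(orig_h: str) -> bool:
    # Suppress alerts only when the source falls inside *some*
    # whitelisted range, i.e. test membership across all entries
    # before deciding, rather than one subnet at a time.
    addr = ip_address(orig_h)
    return any(addr in net for net in WHITELIST)

assert is_whitelisted("63.128.163.5")       # inside the WhiteHat /27
assert is_whitelisted("192.0.2.10")         # inside the second range
assert not is_whitelisted("198.51.100.7")   # outside both, would alert
```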
FWIW, Jason On Thu, Sep 25, 2014 at 10:06 AM, Nicholas Weaver wrote: > > On Sep 24, 2014, at 8:18 PM, Gary Faulkner > wrote: > > > Critical Stack has a version as well: > > > https://github.com/CriticalStack/bro-scripts/tree/cve-2014-6271/bash-cve-2014-6271 > > The constraints based on experimenting that I just did to independently > validate Liam's script: > > The regexp its keying in on: > > /\x28\x29\x20\x7b\x20/ > > "() { " > > Is correct: adding/changing whitespace or other characters between the () > or ) {, and removing the space after the { cause this to fail (but {\t > MIGHT work, but my limited shell fu is not able to check that case). > > However, does anyone know if any web servers will urldecode headers? > > -- > Nicholas Weaver it is a tale, told by an idiot, > nweaver at icsi.berkeley.edu full of sound and fury, > 510-666-2903 .signifying nothing > PGP: http://www1.icsi.berkeley.edu/~nweaver/data/nweaver_pub.asc > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140925/22d6193f/attachment.html -------------- next part --------------
# Copyright (c) 2014 Critical Stack LLC. All Rights Reserved.
# Liam Randall (@Hectaman)
# Set of detection routines to monitor for CVE-2014-6271
# CHANGES:
# 2014-9-7  Initial support for http header vector via mod_cgi
# 2014-9-25 Jason Batchelor: Added white listing support from known security vendor(s) IP ranges.
#           - 63.128.163.0/27 WhiteHat Security

module Bash;

export {
	redef enum Notice::Type += {
		## Indicates that a host may have attempted a bash cgi header attack
		HTTP_Header_Attack,
	};
}

event http_header(c: connection, is_orig: bool, name: string, value: string) &priority=3
	{
	local whitelist = vector(63.128.163.0/27);

	if ( is_orig )
		{
		for ( w in whitelist )
			{
			if ( c$id$orig_h !in whitelist[w] )
				{
				# This particular string seems to be necessary
				if ( /\x28\x29\x20\x7b\x20/ in value )
					{
					NOTICE([$note=Bash::HTTP_Header_Attack,
					        $conn=c,
					        $msg=fmt("%s may have attempted to exploit CVE-2014-6271, bash environment variable attack, via HTTP mod_cgi header against %s submitting \"%s\"=\"%s\"",
					                 c$id$orig_h, c$id$resp_h, name, value),
					        $identifier=c$uid]);
					}
				}
			}
		}
	}
From jxbatchelor at gmail.com Thu Sep 25 10:16:58 2014 From: jxbatchelor at gmail.com (Jason Batchelor) Date: Thu, 25 Sep 2014 12:16:58 -0500 Subject: [Bro] File Extraction Related Scripting Questions In-Reply-To: <154D582E-E62A-4FC6-9FC7-E0EF24AA366E@icir.org> References: <884D2774-E31E-43D3-B901-E68CD3CFC066@icir.org> <154D582E-E62A-4FC6-9FC7-E0EF24AA366E@icir.org> Message-ID: Just FYI to the group, I created the following after having spent some time looking at magic.sig. I placed them in general.sig and so far they seem to do the trick on identifying OLECF (legacy MS Office) and OOXML (modern MS Office) documents. Seth indicated to me offline this would be reviewed and folded into the next release. For your immediate use.
# Jason Batchelor Edits, 9/19/2014
# Signatures informed by the following resource
# http://www.garykessler.net/library/file_sigs.html

signature file-olecf {
	file-magic /(\xd0\xcf\x11\xe0\xa1\xb1\x1a\xe1)/
	file-mime "application/olecf", 150
}

signature file-ooxml {
	file-magic /(\x50\x4b\x03\x04\x14\x00\x06\x00)/
	file-mime "application/vnd.openxmlformats-officedocument", 100
}

On Fri, Sep 19, 2014 at 1:50 PM, Seth Hall wrote: > > On Sep 19, 2014, at 1:41 PM, Jason Batchelor > wrote: > > > I would be :). > > Woo! > > > Would you mind pointing me in the right direction to how I might make > type signatures and indicators as you describe. > > https://github.com/bro/bro/tree/master/scripts/base/frameworks/files/magic > > Any attention to those file detections would be great. I would also like > to start getting some tests in place that verify we are detecting these > files correctly going into the future. Feel free to ask if you have any > questions. > > .Seth > > -- > Seth Hall > International Computer Science Institute > (Bro) because everyone has a network > http://www.bro.org/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140925/139bcfcb/attachment.html From anthony.kasza at gmail.com Thu Sep 25 10:45:02 2014 From: anthony.kasza at gmail.com (anthony kasza) Date: Thu, 25 Sep 2014 10:45:02 -0700 Subject: [Bro] CVE-2014-6271/ detection script In-Reply-To: References: <542383BD.5090903@lbl.gov> <54238987.4000405@gmail.com> <4AB31B83-E2A2-4230-974A-1CA4030C124E@icsi.berkeley.edu> Message-ID: I know not all exploits of this vulnerability need to include a reverse shell, but it may be useful to monitor for outbound connections to an IP which previously made HTTP requests with headers matching this pattern. -AK On Sep 25, 2014 10:21 AM, "Jason Batchelor" wrote: > I took the version from Critical Stack and added the ability to whitelist > certain ranges.
It may be valuable if, for example, you have an external > auditing service like White Hat Security conducting scans that you > don't deem actionable. > > Perhaps a more broader discussion, but would it be a good idea to have a > global 'ip_whitelist' variable in Bro (assuming it doesn't have one)? > Something that is present, and must always be defined by the end user. Just > a thought, it might encourage future script writers to provision for things > like this. Of course, there is an even broader philisophical discussion on > whitelisting IP ranges, which is why I would suggest leaving the variable > as something that needs to be defined by the end user. > > FWIW, > Jason > > On Thu, Sep 25, 2014 at 10:06 AM, Nicholas Weaver < > nweaver at icsi.berkeley.edu> wrote: > >> >> On Sep 24, 2014, at 8:18 PM, Gary Faulkner >> wrote: >> >> > Critical Stack has a version as well: >> > >> https://github.com/CriticalStack/bro-scripts/tree/cve-2014-6271/bash-cve-2014-6271 >> >> The constraints based on experimenting that I just did to independently >> validate Liam's script: >> >> The regexp its keying in on: >> >> /\x28\x29\x20\x7b\x20/ >> >> "() { " >> >> Is correct: adding/changing whitespace or other characters between the () >> or ) {, and removing the space after the { cause this to fail (but {\t >> MIGHT work, but my limited shell fu is not able to check that case). >> >> However, does anyone know if any web servers will urldecode headers? 
>> >> -- >> Nicholas Weaver it is a tale, told by an idiot, >> nweaver at icsi.berkeley.edu full of sound and fury, >> 510-666-2903 .signifying nothing >> PGP: http://www1.icsi.berkeley.edu/~nweaver/data/nweaver_pub.asc >> >> >> _______________________________________________ >> Bro mailing list >> bro at bro-ids.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >> > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140925/910872cd/attachment.html From liam.randall at gmail.com Thu Sep 25 10:42:01 2014 From: liam.randall at gmail.com (Liam Randall) Date: Thu, 25 Sep 2014 13:42:01 -0400 Subject: [Bro] CVE-2014-6271/ detection script In-Reply-To: References: <542383BD.5090903@lbl.gov> <54238987.4000405@gmail.com> <4AB31B83-E2A2-4230-974A-1CA4030C124E@icsi.berkeley.edu> Message-ID: Hey Jason, I added support for ignoring subnets of subnets. Pulls accepted- I am kind of surprised there aren't POCs floating around for DHCP yet and other vectors... or maybe they are ;) ! Liam On Thu, Sep 25, 2014 at 1:06 PM, Jason Batchelor wrote: > I took the version from Critical Stack and added the ability to whitelist > certain ranges. It may be valuable if, for example, you have an external > auditing service like White Hat Security conducting scans that you > don't deem actionable. > > Perhaps a more broader discussion, but would it be a good idea to have a > global 'ip_whitelist' variable in Bro (assuming it doesn't have one)? > Something that is present, and must always be defined by the end user. Just > a thought, it might encourage future script writers to provision for things > like this. 
Of course, there is an even broader philisophical discussion on > whitelisting IP ranges, which is why I would suggest leaving the variable > as something that needs to be defined by the end user. > > FWIW, > Jason > > On Thu, Sep 25, 2014 at 10:06 AM, Nicholas Weaver < > nweaver at icsi.berkeley.edu> wrote: > >> >> On Sep 24, 2014, at 8:18 PM, Gary Faulkner >> wrote: >> >> > Critical Stack has a version as well: >> > >> https://github.com/CriticalStack/bro-scripts/tree/cve-2014-6271/bash-cve-2014-6271 >> >> The constraints based on experimenting that I just did to independently >> validate Liam's script: >> >> The regexp its keying in on: >> >> /\x28\x29\x20\x7b\x20/ >> >> "() { " >> >> Is correct: adding/changing whitespace or other characters between the () >> or ) {, and removing the space after the { cause this to fail (but {\t >> MIGHT work, but my limited shell fu is not able to check that case). >> >> However, does anyone know if any web servers will urldecode headers? >> >> -- >> Nicholas Weaver it is a tale, told by an idiot, >> nweaver at icsi.berkeley.edu full of sound and fury, >> 510-666-2903 .signifying nothing >> PGP: http://www1.icsi.berkeley.edu/~nweaver/data/nweaver_pub.asc >> >> >> _______________________________________________ >> Bro mailing list >> bro at bro-ids.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >> > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140925/522349b7/attachment.html From inetjunkmail at gmail.com Thu Sep 25 11:10:46 2014 From: inetjunkmail at gmail.com (inetjunkmail) Date: Thu, 25 Sep 2014 14:10:46 -0400 Subject: [Bro] Help resolving proxy crash Message-ID: All: I'm having an issue with the proxy service crashing. 
I was having it on 2.3 and I'm having it on 2.3.1 too. It generally occurs a few minutes after restarting. I have a _little_ evidence that suggests that it's stable at lower traffic rates. This is a single node cluster with 16 cores (32 counting hyper threading). Below is some hopefully relevant information. Can anyone provide some tips at what to look at next to correct the issue? Thanks -------------------------------------------------- [e at b3 ~]$ sudo broctl status proxy-3 Name Type Host Status Pid Peers Started proxy-3 proxy biggsanalyzer3 running 1810 ??? 25 Sep 13:09:07 [e at b3 ~]$ sudo broctl status proxy-3 [sudo] password for eric: Name Type Host Status Pid Peers Started proxy-3 proxy biggsanalyzer3 crashed [e at b3 ~]$ sudo broctl netstats worker-3-1: 1411666855.350645 recvd=23952666 dropped=0 link=23952666 worker-3-10: 1411666855.550643 recvd=26529426 dropped=0 link=26529426 worker-3-11: 1411666855.750069 recvd=25799879 dropped=0 link=25799879 worker-3-12: 1411666855.952250 recvd=27786138 dropped=0 link=27786138 worker-3-13: 1411666856.152395 recvd=33072225 dropped=0 link=33072225 worker-3-14: 1411666856.352869 recvd=26334798 dropped=0 link=26334798 worker-3-2: 1411666856.554573 recvd=26726716 dropped=0 link=26726716 worker-3-3: 1411666856.754446 recvd=32427073 dropped=0 link=32427073 worker-3-4: 1411666856.955059 recvd=26646497 dropped=0 link=26646497 worker-3-5: 1411666857.156298 recvd=27240324 dropped=0 link=27240324 worker-3-6: 1411666857.356603 recvd=24139487 dropped=0 link=24139487 worker-3-7: 1411666857.555774 recvd=28722053 dropped=0 link=28722053 worker-3-8: 1411666857.757538 recvd=27019501 dropped=0 link=27019501 worker-3-9: 1411666857.126295 recvd=25049180 dropped=0 link=25049180 [e at b3 ~]$ sudo broctl capstats Interface kpps mbps (10s average) ---------------------------------------- b3/em1 331.7 1146.1 Total 331.7 1146.1 [e at b3 ~]$ sudo broctl diag proxy-3 [proxy-3] Bro 2.3.1 Linux 3.10.0-123.6.3.el7.x86_64 core.1810 [New LWP 1810] 
[Thread debugging using libthread_db enabled] Using host libthread_db library "/lib64/libthread_db.so.1". Core was generated by `/usr/local/bro/bin/bro -U .status -p broctl -p broctl-live -p local -p proxy-3'. Program terminated with signal 6, Aborted. #0 0x00007fce6a5015c9 in raise () from /lib64/libc.so.6 Thread 1 (Thread 0x7fce6c86c840 (LWP 1810)): #0 0x00007fce6a5015c9 in raise () from /lib64/libc.so.6 #1 0x00007fce6a502cd8 in abort () from /lib64/libc.so.6 #2 0x000000000059dae1 in Reporter::InternalError (this=, fmt=fmt at entry=0x7f209b "%s") at /home/e/bro-2.3.1/src/Reporter.cc:137 #3 0x00000000005bc85a in InternalCommError (msg=, this=0x1915530) at /home/e/bro-2.3.1/src/RemoteSerializer.cc:3231 #4 RemoteSerializer::Poll (this=0x1915530, may_block=may_block at entry=false) at /home/e/bro-2.3.1/src/RemoteSerializer.cc:1576 #5 0x00000000005bc9df in Poll (may_block=false, this=0x1915530) at /home/e/bro-2.3.1/src/RemoteSerializer.cc:1413 #6 RemoteSerializer::NextTimestamp (this=0x1915530, local_network_time=0x7fffa1562040) at /home/e/bro-2.3.1/src/RemoteSerializer.cc:1380 #7 0x00000000005965fb in IOSourceRegistry::FindSoonest (this=0xaea0d0 , ts=ts at entry=0x7fffa1562108) at /home/e/bro-2.3.1/src/IOSource.cc:61 #8 0x000000000059fd82 in net_run () at /home/e/bro-2.3.1/src/Net.cc:370 #9 0x0000000000503df8 in main (argc=, argv=) at /home/e/bro-2.3.1/src/main.cc:1165 ==== No reporter.log ==== stderr.log internal error: unknown msg type 115 in Poll() /usr/local/bro/share/broctl/scripts/run-bro: line 85: 1810 Aborted (core dumped) nohup $mybro "$@" ==== stdout.log max memory size (kbytes, -m) unlimited data seg size (kbytes, -d) unlimited virtual memory (kbytes, -v) unlimited core file size (blocks, -c) unlimited ==== .cmdline -U .status -p broctl -p broctl-live -p local -p proxy-3 local.bro broctl base/frameworks/cluster local-proxy broctl/auto ==== .env_vars 
PATH=/usr/local/bro/bin:/usr/local/bro/share/broctl/scripts:/home/e/perl5/bin:/usr/local/bro/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/e/.local/bin:/home/e/bin BROPATH=/usr/local/bro/spool/installed-scripts-do-not-touch/site::/usr/local/bro/spool/installed-scripts-do-not-touch/auto:/usr/local/bro/share/bro:/usr/local/bro/share/bro/policy:/usr/local/bro/share/bro/site CLUSTER_NODE=proxy-3 ==== .status TERMINATED [internal_error] ==== No prof.log ==== No packet_filter.log ==== No loaded_scripts.log [e at b3 ~]$ cat /usr/local/bro/etc/node.cfg # Example BroControl node configuration. # # Example BroControl node configuration. # # This example has a standalone node ready to go except for possibly changing # the sniffing interface. # This is a complete standalone configuration. Most likely you will # only need to change the interface. #[bro] #type=standalone #host=localhost #interface=eth0 ## Below is an example clustered configuration. If you use this, ## remove the [bro] node above. [manager] type=manager host=b3 [proxy-3] type=proxy host=b3 [worker-3] type=worker host=b3 interface=em1 lb_method=pf_ring lb_procs=14 #pin_cpus=4,6,8,10,12,14,16,18,20,22,24,26,28,30 -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140925/ba102485/attachment.html From seth at icir.org Thu Sep 25 11:10:57 2014 From: seth at icir.org (Seth Hall) Date: Thu, 25 Sep 2014 14:10:57 -0400 Subject: [Bro] Shellshock successful exploitation detection Message-ID: I just pushed out an expanded take on a ShellShock detector that watches for successful exploitation. https://github.com/broala/bro-shellshock It logs all of the possible attacks over HTTP in http.log too. 
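The correlation idea behind watching for successful exploitation (flagging outbound connections from a host that was just probed, as Anthony suggested earlier in the thread) can be sketched in a few lines. This is a simplified, protocol-agnostic illustration in Python, not the actual logic of the bro-shellshock package; the addresses are documentation-range placeholders:

```python
# Remember which attacker hit which host with a Shellshock-style header,
# then flag any later connection from that host back to the attacker,
# a likely reverse shell or ping-back.
suspected = {}  # victim ip -> set of attacker ips seen

def saw_attack(attacker: str, victim: str) -> None:
    # Record an HTTP request whose header matched the "() { " pattern.
    suspected.setdefault(victim, set()).add(attacker)

def is_callback(src: str, dst: str) -> bool:
    # True when a previously probed host initiates a connection
    # back to one of the sources that probed it.
    return dst in suspected.get(src, set())

saw_attack("203.0.113.9", "192.0.2.80")
assert is_callback("192.0.2.80", "203.0.113.9")      # victim -> attacker
assert not is_callback("192.0.2.80", "198.51.100.1") # unrelated destination
```

In a real deployment the state would need expiration, and as Chris notes later in the thread, ICMP-based callbacks would not show up as ordinary connections at all.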
.Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From liam.randall at gmail.com Thu Sep 25 11:50:01 2014 From: liam.randall at gmail.com (Liam Randall) Date: Thu, 25 Sep 2014 14:50:01 -0400 Subject: [Bro] CVE-2014-6271/ detection script In-Reply-To: References: <542383BD.5090903@lbl.gov> <54238987.4000405@gmail.com> <4AB31B83-E2A2-4230-974A-1CA4030C124E@icsi.berkeley.edu> Message-ID: Here's another advanced detector from @Broala_: https://github.com/broala/bro-shellshock Nice work Seth. Liam On Thu, Sep 25, 2014 at 1:45 PM, anthony kasza wrote: > I know not all exploits of this vulnerability need to include a reverse > shell, but it may be useful to monitor for outbound connections to an IP > which previously made HTTP requests with headers matching this pattern. > > -AK > On Sep 25, 2014 10:21 AM, "Jason Batchelor" wrote: > >> I took the version from Critical Stack and added the ability to whitelist >> certain ranges. It may be valuable if, for example, you have an external >> auditing service like White Hat Security conducting scans that you >> don't deem actionable. >> >> Perhaps a more broader discussion, but would it be a good idea to have a >> global 'ip_whitelist' variable in Bro (assuming it doesn't have one)? >> Something that is present, and must always be defined by the end user. Just >> a thought, it might encourage future script writers to provision for things >> like this. Of course, there is an even broader philisophical discussion on >> whitelisting IP ranges, which is why I would suggest leaving the variable >> as something that needs to be defined by the end user. 
>> >> FWIW, >> Jason >> >> On Thu, Sep 25, 2014 at 10:06 AM, Nicholas Weaver < >> nweaver at icsi.berkeley.edu> wrote: >> >>> >>> On Sep 24, 2014, at 8:18 PM, Gary Faulkner >>> wrote: >>> >>> > Critical Stack has a version as well: >>> > >>> https://github.com/CriticalStack/bro-scripts/tree/cve-2014-6271/bash-cve-2014-6271 >>> >>> The constraints based on experimenting that I just did to independently >>> validate Liam's script: >>> >>> The regexp its keying in on: >>> >>> /\x28\x29\x20\x7b\x20/ >>> >>> "() { " >>> >>> Is correct: adding/changing whitespace or other characters between the >>> () or ) {, and removing the space after the { cause this to fail (but {\t >>> MIGHT work, but my limited shell fu is not able to check that case). >>> >>> However, does anyone know if any web servers will urldecode headers? >>> >>> -- >>> Nicholas Weaver it is a tale, told by an idiot, >>> nweaver at icsi.berkeley.edu full of sound and fury, >>> 510-666-2903 .signifying nothing >>> PGP: http://www1.icsi.berkeley.edu/~nweaver/data/nweaver_pub.asc >>> >>> >>> _______________________________________________ >>> Bro mailing list >>> bro at bro-ids.org >>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >>> >> >> >> _______________________________________________ >> Bro mailing list >> bro at bro-ids.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >> > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140925/23e39696/attachment.html From sconzo at visiblerisk.com Thu Sep 25 11:52:35 2014 From: sconzo at visiblerisk.com (Mike Sconzo) Date: Thu, 25 Sep 2014 13:52:35 -0500 Subject: [Bro] CVE-2014-6271/ detection script In-Reply-To: References: <542383BD.5090903@lbl.gov> <54238987.4000405@gmail.com> <4AB31B83-E2A2-4230-974A-1CA4030C124E@icsi.berkeley.edu> Message-ID: Anybody have a pcap (they're willing to share) to verify these scripts on? Thanks! On Thu, Sep 25, 2014 at 12:45 PM, anthony kasza wrote: > I know not all exploits of this vulnerability need to include a reverse > shell, but it may be useful to monitor for outbound connections to an IP > which previously made HTTP requests with headers matching this pattern. > > -AK > > On Sep 25, 2014 10:21 AM, "Jason Batchelor" wrote: >> >> I took the version from Critical Stack and added the ability to whitelist >> certain ranges. It may be valuable if, for example, you have an external >> auditing service like White Hat Security conducting scans that you don't >> deem actionable. >> >> Perhaps a more broader discussion, but would it be a good idea to have a >> global 'ip_whitelist' variable in Bro (assuming it doesn't have one)? >> Something that is present, and must always be defined by the end user. Just >> a thought, it might encourage future script writers to provision for things >> like this. Of course, there is an even broader philisophical discussion on >> whitelisting IP ranges, which is why I would suggest leaving the variable as >> something that needs to be defined by the end user. 
>> >> FWIW, >> Jason >> >> On Thu, Sep 25, 2014 at 10:06 AM, Nicholas Weaver >> wrote: >>> >>> >>> On Sep 24, 2014, at 8:18 PM, Gary Faulkner >>> wrote: >>> >>> > Critical Stack has a version as well: >>> > >>> > https://github.com/CriticalStack/bro-scripts/tree/cve-2014-6271/bash-cve-2014-6271 >>> >>> The constraints based on experimenting that I just did to independently >>> validate Liam's script: >>> >>> The regexp its keying in on: >>> >>> /\x28\x29\x20\x7b\x20/ >>> >>> "() { " >>> >>> Is correct: adding/changing whitespace or other characters between the () >>> or ) {, and removing the space after the { cause this to fail (but {\t MIGHT >>> work, but my limited shell fu is not able to check that case). >>> >>> However, does anyone know if any web servers will urldecode headers? >>> >>> -- >>> Nicholas Weaver it is a tale, told by an idiot, >>> nweaver at icsi.berkeley.edu full of sound and fury, >>> 510-666-2903 .signifying nothing >>> PGP: http://www1.icsi.berkeley.edu/~nweaver/data/nweaver_pub.asc >>> >>> >>> _______________________________________________ >>> Bro mailing list >>> bro at bro-ids.org >>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >> >> >> >> _______________________________________________ >> Bro mailing list >> bro at bro-ids.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -- cat ~/.bash_history > documentation.txt From liam.randall at gmail.com Thu Sep 25 11:59:00 2014 From: liam.randall at gmail.com (Liam Randall) Date: Thu, 25 Sep 2014 14:59:00 -0400 Subject: [Bro] CVE-2014-6271/ detection script In-Reply-To: References: <542383BD.5090903@lbl.gov> <54238987.4000405@gmail.com> <4AB31B83-E2A2-4230-974A-1CA4030C124E@icsi.berkeley.edu> Message-ID: @Broala_ has one posted in their repo. 
DHCP POC is out: https://www.trustedsec.com/september-2014/shellshock-dhcp-rce-proof-concept/ Add it to the list. Liam On Thu, Sep 25, 2014 at 2:52 PM, Mike Sconzo wrote: > Anybody have a pcap (they're willing to share) to verify these scripts on? > > Thanks! > > On Thu, Sep 25, 2014 at 12:45 PM, anthony kasza > wrote: > > I know not all exploits of this vulnerability need to include a reverse > > shell, but it may be useful to monitor for outbound connections to an IP > > which previously made HTTP requests with headers matching this pattern. > > > > -AK > > > > On Sep 25, 2014 10:21 AM, "Jason Batchelor" > wrote: > >> > >> I took the version from Critical Stack and added the ability to > whitelist > >> certain ranges. It may be valuable if, for example, you have an external > >> auditing service like White Hat Security conducting scans that you don't > >> deem actionable. > >> > >> Perhaps a more broader discussion, but would it be a good idea to have a > >> global 'ip_whitelist' variable in Bro (assuming it doesn't have one)? > >> Something that is present, and must always be defined by the end user. > Just > >> a thought, it might encourage future script writers to provision for > things > >> like this. Of course, there is an even broader philisophical discussion > on > >> whitelisting IP ranges, which is why I would suggest leaving the > variable as > >> something that needs to be defined by the end user. 
> >> > >> FWIW, > >> Jason > >> > >> On Thu, Sep 25, 2014 at 10:06 AM, Nicholas Weaver > >> wrote: > >>> > >>> > >>> On Sep 24, 2014, at 8:18 PM, Gary Faulkner > >>> wrote: > >>> > >>> > Critical Stack has a version as well: > >>> > > >>> > > https://github.com/CriticalStack/bro-scripts/tree/cve-2014-6271/bash-cve-2014-6271 > >>> > >>> The constraints based on experimenting that I just did to independently > >>> validate Liam's script: > >>> > >>> The regexp its keying in on: > >>> > >>> /\x28\x29\x20\x7b\x20/ > >>> > >>> "() { " > >>> > >>> Is correct: adding/changing whitespace or other characters between the > () > >>> or ) {, and removing the space after the { cause this to fail (but {\t > MIGHT > >>> work, but my limited shell fu is not able to check that case). > >>> > >>> However, does anyone know if any web servers will urldecode headers? > >>> > >>> -- > >>> Nicholas Weaver it is a tale, told by an idiot, > >>> nweaver at icsi.berkeley.edu full of sound and fury, > >>> 510-666-2903 .signifying nothing > >>> PGP: http://www1.icsi.berkeley.edu/~nweaver/data/nweaver_pub.asc > >>> > >>> > >>> _______________________________________________ > >>> Bro mailing list > >>> bro at bro-ids.org > >>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > >> > >> > >> > >> _______________________________________________ > >> Bro mailing list > >> bro at bro-ids.org > >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > > > > > _______________________________________________ > > Bro mailing list > > bro at bro-ids.org > > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > > > -- > cat ~/.bash_history > documentation.txt > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140925/db4f3967/attachment.html From hartley.87 at osu.edu Thu Sep 25 12:06:32 2014 From: hartley.87 at osu.edu (Hartley, Christopher J.) Date: Thu, 25 Sep 2014 19:06:32 +0000 Subject: [Bro] CVE-2014-6271/ detection script In-Reply-To: References: <542383BD.5090903@lbl.gov> <54238987.4000405@gmail.com> <4AB31B83-E2A2-4230-974A-1CA4030C124E@icsi.berkeley.edu> Message-ID: We used to do just that on one of our clusters. Sadly, it did not bear the fruit we hoped... coupled with an attempt at this exploit, it would certainly yield better results. I think you'll find that most attackers working on this are just trying to build a set of vulnerable servers for "phase 2"... 95% of the attempts we've seen so far are trying to use ICMP as a callback... Chris On Sep 25, 2014, at 1:45 PM, anthony kasza > wrote: I know not all exploits of this vulnerability need to include a reverse shell, but it may be useful to monitor for outbound connections to an IP which previously made HTTP requests with headers matching this pattern. -AK On Sep 25, 2014 10:21 AM, "Jason Batchelor" > wrote: I took the version from Critical Stack and added the ability to whitelist certain ranges. It may be valuable if, for example, you have an external auditing service like White Hat Security conducting scans that you don't deem actionable. Perhaps a more broader discussion, but would it be a good idea to have a global 'ip_whitelist' variable in Bro (assuming it doesn't have one)? Something that is present, and must always be defined by the end user. Just a thought, it might encourage future script writers to provision for things like this. Of course, there is an even broader philisophical discussion on whitelisting IP ranges, which is why I would suggest leaving the variable as something that needs to be defined by the end user.
FWIW, Jason On Thu, Sep 25, 2014 at 10:06 AM, Nicholas Weaver > wrote: On Sep 24, 2014, at 8:18 PM, Gary Faulkner > wrote: > Critical Stack has a version as well: > https://github.com/CriticalStack/bro-scripts/tree/cve-2014-6271/bash-cve-2014-6271 The constraints based on experimenting that I just did to independently validate Liam's script: The regexp its keying in on: /\x28\x29\x20\x7b\x20/ "() { " Is correct: adding/changing whitespace or other characters between the () or ) {, and removing the space after the { cause this to fail (but {\t MIGHT work, but my limited shell fu is not able to check that case). However, does anyone know if any web servers will urldecode headers? -- Nicholas Weaver it is a tale, told by an idiot, nweaver at icsi.berkeley.edu full of sound and fury, 510-666-2903 .signifying nothing PGP: http://www1.icsi.berkeley.edu/~nweaver/data/nweaver_pub.asc _______________________________________________ Bro mailing list bro at bro-ids.org http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro _______________________________________________ Bro mailing list bro at bro-ids.org http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro _______________________________________________ Bro mailing list bro at bro-ids.org http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140925/08647644/attachment.html From sconzo at visiblerisk.com Thu Sep 25 12:08:37 2014 From: sconzo at visiblerisk.com (Mike Sconzo) Date: Thu, 25 Sep 2014 14:08:37 -0500 Subject: [Bro] CVE-2014-6271/ detection script In-Reply-To: References: <542383BD.5090903@lbl.gov> <54238987.4000405@gmail.com> <4AB31B83-E2A2-4230-974A-1CA4030C124E@icsi.berkeley.edu> Message-ID: Awesome, thanks! On Thu, Sep 25, 2014 at 1:59 PM, Liam Randall wrote: > @Broala_ has one posted in their repo. 
> > DHCP POC is out: > > https://www.trustedsec.com/september-2014/shellshock-dhcp-rce-proof-concept/ > > Add it to the list. > > Liam > > On Thu, Sep 25, 2014 at 2:52 PM, Mike Sconzo wrote: >> >> Anybody have a pcap (they're willing to share) to verify these scripts on? >> >> Thanks! >> >> On Thu, Sep 25, 2014 at 12:45 PM, anthony kasza >> wrote: >> > I know not all exploits of this vulnerability need to include a reverse >> > shell, but it may be useful to monitor for outbound connections to an IP >> > which previously made HTTP requests with headers matching this pattern. >> > >> > -AK >> > >> > On Sep 25, 2014 10:21 AM, "Jason Batchelor" >> > wrote: >> >> >> >> I took the version from Critical Stack and added the ability to >> >> whitelist >> >> certain ranges. It may be valuable if, for example, you have an >> >> external >> >> auditing service like White Hat Security conducting scans that you >> >> don't >> >> deem actionable. >> >> >> >> Perhaps a more broader discussion, but would it be a good idea to have >> >> a >> >> global 'ip_whitelist' variable in Bro (assuming it doesn't have one)? >> >> Something that is present, and must always be defined by the end user. >> >> Just >> >> a thought, it might encourage future script writers to provision for >> >> things >> >> like this. Of course, there is an even broader philisophical discussion >> >> on >> >> whitelisting IP ranges, which is why I would suggest leaving the >> >> variable as >> >> something that needs to be defined by the end user. 
>> >> >> >> FWIW, >> >> Jason >> >> >> >> On Thu, Sep 25, 2014 at 10:06 AM, Nicholas Weaver >> >> wrote: >> >>> >> >>> >> >>> On Sep 24, 2014, at 8:18 PM, Gary Faulkner >> >>> wrote: >> >>> >> >>> > Critical Stack has a version as well: >> >>> > >> >>> > >> >>> > https://github.com/CriticalStack/bro-scripts/tree/cve-2014-6271/bash-cve-2014-6271 >> >>> >> >>> The constraints based on experimenting that I just did to >> >>> independently >> >>> validate Liam's script: >> >>> >> >>> The regexp its keying in on: >> >>> >> >>> /\x28\x29\x20\x7b\x20/ >> >>> >> >>> "() { " >> >>> >> >>> Is correct: adding/changing whitespace or other characters between the >> >>> () >> >>> or ) {, and removing the space after the { cause this to fail (but {\t >> >>> MIGHT >> >>> work, but my limited shell fu is not able to check that case). >> >>> >> >>> However, does anyone know if any web servers will urldecode headers? >> >>> >> >>> -- >> >>> Nicholas Weaver it is a tale, told by an idiot, >> >>> nweaver at icsi.berkeley.edu full of sound and fury, >> >>> 510-666-2903 .signifying nothing >> >>> PGP: http://www1.icsi.berkeley.edu/~nweaver/data/nweaver_pub.asc >> >>> >> >>> >> >>> _______________________________________________ >> >>> Bro mailing list >> >>> bro at bro-ids.org >> >>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >> >> >> >> >> >> >> >> _______________________________________________ >> >> Bro mailing list >> >> bro at bro-ids.org >> >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >> > >> > >> > _______________________________________________ >> > Bro mailing list >> > bro at bro-ids.org >> > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >> >> >> >> -- >> cat ~/.bash_history > documentation.txt >> _______________________________________________ >> Bro mailing list >> bro at bro-ids.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > -- cat ~/.bash_history > documentation.txt From jxbatchelor at gmail.com Thu Sep 25 12:34:53 2014 From: 
jxbatchelor at gmail.com (Jason Batchelor) Date: Thu, 25 Sep 2014 14:34:53 -0500 Subject: [Bro] CVE-2014-6271/ detection script In-Reply-To: References: <542383BD.5090903@lbl.gov> <54238987.4000405@gmail.com> <4AB31B83-E2A2-4230-974A-1CA4030C124E@icsi.berkeley.edu> Message-ID: Looks like Seth put one up: https://github.com/broala/bro-shellshock 'exploit.pcap' On Thu, Sep 25, 2014 at 1:52 PM, Mike Sconzo wrote: > Anybody have a pcap (they're willing to share) to verify these scripts on? > > Thanks! > > On Thu, Sep 25, 2014 at 12:45 PM, anthony kasza > wrote: > > I know not all exploits of this vulnerability need to include a reverse > > shell, but it may be useful to monitor for outbound connections to an IP > > which previously made HTTP requests with headers matching this pattern. > > > > -AK > > > > On Sep 25, 2014 10:21 AM, "Jason Batchelor" > wrote: > >> > >> I took the version from Critical Stack and added the ability to > whitelist > >> certain ranges. It may be valuable if, for example, you have an external > >> auditing service like White Hat Security conducting scans that you don't > >> deem actionable. > >> > >> Perhaps a more broader discussion, but would it be a good idea to have a > >> global 'ip_whitelist' variable in Bro (assuming it doesn't have one)? > >> Something that is present, and must always be defined by the end user. > Just > >> a thought, it might encourage future script writers to provision for > things > >> like this. Of course, there is an even broader philisophical discussion > on > >> whitelisting IP ranges, which is why I would suggest leaving the > variable as > >> something that needs to be defined by the end user. 
> >> > >> FWIW, > >> Jason > >> > >> On Thu, Sep 25, 2014 at 10:06 AM, Nicholas Weaver > >> wrote: > >>> > >>> > >>> On Sep 24, 2014, at 8:18 PM, Gary Faulkner > >>> wrote: > >>> > >>> > Critical Stack has a version as well: > >>> > > >>> > > https://github.com/CriticalStack/bro-scripts/tree/cve-2014-6271/bash-cve-2014-6271 > >>> > >>> The constraints based on experimenting that I just did to independently > >>> validate Liam's script: > >>> > >>> The regexp its keying in on: > >>> > >>> /\x28\x29\x20\x7b\x20/ > >>> > >>> "() { " > >>> > >>> Is correct: adding/changing whitespace or other characters between the > () > >>> or ) {, and removing the space after the { cause this to fail (but {\t > MIGHT > >>> work, but my limited shell fu is not able to check that case). > >>> > >>> However, does anyone know if any web servers will urldecode headers? > >>> > >>> -- > >>> Nicholas Weaver it is a tale, told by an idiot, > >>> nweaver at icsi.berkeley.edu full of sound and fury, > >>> 510-666-2903 .signifying nothing > >>> PGP: http://www1.icsi.berkeley.edu/~nweaver/data/nweaver_pub.asc > >>> > >>> > >>> _______________________________________________ > >>> Bro mailing list > >>> bro at bro-ids.org > >>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > >> > >> > >> > >> _______________________________________________ > >> Bro mailing list > >> bro at bro-ids.org > >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > > > > > _______________________________________________ > > Bro mailing list > > bro at bro-ids.org > > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > > > -- > cat ~/.bash_history > documentation.txt > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140925/89c93086/attachment.html From hartley.87 at osu.edu Thu Sep 25 12:53:01 2014 From: hartley.87 at osu.edu (Hartley, Christopher J.) 
Date: Thu, 25 Sep 2014 19:53:01 +0000 Subject: [Bro] CVE-2014-6271/ detection script In-Reply-To: References: <542383BD.5090903@lbl.gov> <54238987.4000405@gmail.com> <4AB31B83-E2A2-4230-974A-1CA4030C124E@icsi.berkeley.edu> Message-ID: Nice, Seth :) Chris On Sep 25, 2014, at 3:34 PM, Jason Batchelor > wrote: Looks like Seth put one up: https://github.com/broala/bro-shellshock 'exploit.pcap' On Thu, Sep 25, 2014 at 1:52 PM, Mike Sconzo > wrote: Anybody have a pcap (they're willing to share) to verify these scripts on? Thanks! On Thu, Sep 25, 2014 at 12:45 PM, anthony kasza > wrote: > I know not all exploits of this vulnerability need to include a reverse > shell, but it may be useful to monitor for outbound connections to an IP > which previously made HTTP requests with headers matching this pattern. > > -AK > > On Sep 25, 2014 10:21 AM, "Jason Batchelor" > wrote: >> >> I took the version from Critical Stack and added the ability to whitelist >> certain ranges. It may be valuable if, for example, you have an external >> auditing service like White Hat Security conducting scans that you don't >> deem actionable. >> >> Perhaps a more broader discussion, but would it be a good idea to have a >> global 'ip_whitelist' variable in Bro (assuming it doesn't have one)? >> Something that is present, and must always be defined by the end user. Just >> a thought, it might encourage future script writers to provision for things >> like this. Of course, there is an even broader philisophical discussion on >> whitelisting IP ranges, which is why I would suggest leaving the variable as >> something that needs to be defined by the end user. 
>> >> FWIW, >> Jason >> >> On Thu, Sep 25, 2014 at 10:06 AM, Nicholas Weaver >> > wrote: >>> >>> >>> On Sep 24, 2014, at 8:18 PM, Gary Faulkner > >>> wrote: >>> >>> > Critical Stack has a version as well: >>> > >>> > https://github.com/CriticalStack/bro-scripts/tree/cve-2014-6271/bash-cve-2014-6271 >>> >>> The constraints based on experimenting that I just did to independently >>> validate Liam's script: >>> >>> The regexp its keying in on: >>> >>> /\x28\x29\x20\x7b\x20/ >>> >>> "() { " >>> >>> Is correct: adding/changing whitespace or other characters between the () >>> or ) {, and removing the space after the { cause this to fail (but {\t MIGHT >>> work, but my limited shell fu is not able to check that case). >>> >>> However, does anyone know if any web servers will urldecode headers? >>> >>> -- >>> Nicholas Weaver it is a tale, told by an idiot, >>> nweaver at icsi.berkeley.edu full of sound and fury, >>> 510-666-2903 .signifying nothing >>> PGP: http://www1.icsi.berkeley.edu/~nweaver/data/nweaver_pub.asc >>> >>> >>> _______________________________________________ >>> Bro mailing list >>> bro at bro-ids.org >>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >> >> >> >> _______________________________________________ >> Bro mailing list >> bro at bro-ids.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -- cat ~/.bash_history > documentation.txt _______________________________________________ Bro mailing list bro at bro-ids.org http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -------------- next part -------------- An HTML attachment was scrubbed... 
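As a companion to the analysis in this thread, here is a minimal, hypothetical Bro sketch of the detection idea being validated. The pattern is the one quoted above; the module name, notice type, and identifier choice are invented for illustration and are not taken from any of the published scripts:

```bro
@load base/frameworks/notice

module ShellshockSketch;

export {
    redef enum Notice::Type += {
        ## A client sent an HTTP header matching the CVE-2014-6271 pattern.
        Possible_Shellshock
    };

    ## The byte sequence "() { " that the scripts in this thread key on.
    const shellshock_pattern = /\x28\x29\x20\x7b\x20/ &redef;
}

event http_header(c: connection, is_orig: bool, name: string, value: string)
    {
    # Only inspect headers sent by the client.
    if ( ! is_orig )
        return;

    if ( shellshock_pattern in value )
        NOTICE([$note=Possible_Shellshock,
                $conn=c,
                $msg=fmt("Possible Shellshock attempt in header %s", name),
                $sub=value,
                $identifier=cat(c$id$orig_h)]);
    }
```

Per Nick's testing, bash requires the space after the brace, so the fixed pattern should only miss exploit attempts if a server URL-decodes headers before bash sees them.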
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140925/b8967da2/attachment.html From nweaver at ICSI.Berkeley.EDU Thu Sep 25 13:02:58 2014 From: nweaver at ICSI.Berkeley.EDU (Nicholas Weaver) Date: Thu, 25 Sep 2014 13:02:58 -0700 Subject: [Bro] CVE-2014-6271/ detection script In-Reply-To: References: <542383BD.5090903@lbl.gov> <54238987.4000405@gmail.com> <4AB31B83-E2A2-4230-974A-1CA4030C124E@icsi.berkeley.edu> Message-ID: On Sep 25, 2014, at 11:59 AM, Liam Randall wrote: > @Broala_ has one posted in their repo. > > DHCP POC is out: > > https://www.trustedsec.com/september-2014/shellshock-dhcp-rce-proof-concept/ > > Add it to the list. I've got that working under Linux at least; it doesn't work on my Mac, but it may be that I'm not doing quite the right thing in the DHCP server. -- Nicholas Weaver it is a tale, told by an idiot, nweaver at icsi.berkeley.edu full of sound and fury, 510-666-2903 .signifying nothing PGP: http://www1.icsi.berkeley.edu/~nweaver/data/nweaver_pub.asc -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140925/9bae4cb4/attachment.bin From kchiang at sandia.gov Thu Sep 25 16:36:58 2014 From: kchiang at sandia.gov (Ken Chiang) Date: Thu, 25 Sep 2014 16:36:58 -0700 Subject: [Bro] bro question. Message-ID: <20140925233658.GA21988@sandia.gov> Hello all, I am setting up a service that uses bro to simply extract exe files from a network stream for sandbox analysis. Currently, everything in my test environment is local. I have an Apache web server that is serving up a few exe files. On the same server, I have bro 2.3.1 running the attached file extraction script below. The problem is that the extracted file never exactly matches the downloaded file and the behavior is very inconsistent, i.e. 
sometimes the file would be extracted and most times, the file would not even show up in the files.log log. I suspect that I need to do something to check for file write completion but don't know how to go about doing it, as there is not a file_done event. There is, however, a file_gap event that I read about. Has anyone successfully done this? I am using the loopback device on a Linux server. sudo bro -i lo extract.bro wget http://localhost/test.exe

================extract.bro=======================================

global ext_map: table[string] of string = {
    ["application/x-dosexec"] = "exe",
} &default="";

event file_new(f: fa_file)
    {
    if ( ! f?$mime_type || ext_map[f$mime_type] == "" )
        return;

    local ext = "";
    ext = ext_map[f$mime_type];

    local fname = fmt("%s-%s.%s", f$source, f$id, ext);
    Files::add_analyzer(f, Files::ANALYZER_EXTRACT, [$extract_filename=fname]);
    }

=======================================

Thanks, Ken From mkhan04 at gmail.com Thu Sep 25 17:31:19 2014 From: mkhan04 at gmail.com (M K) Date: Thu, 25 Sep 2014 20:31:19 -0400 Subject: [Bro] bro question. In-Reply-To: <20140925233658.GA21988@sandia.gov> References: <20140925233658.GA21988@sandia.gov> Message-ID: You're probably looking for the file_state_remove event ( https://www.bro.org/sphinx-git/scripts/base/bif/event.bif.bro.html#id-file_state_remove). Afaik, that is the only reliable way to know that a file has fully downloaded. On Thu, Sep 25, 2014 at 7:36 PM, Ken Chiang wrote: > Hello all, > > I am setting up a service that uses bro to simply extract exe files from a > network stream for sandbox analysis. Currently, everything in my test > environment is local. > > I have an Apache web server that is serving up a few exe files. On the > same server, I have bro 2.3.1 running the attached file extraction script > below. > > The problem is that the extracted file never exactly matches the downloaded > file and the behavior is very inconsistent, i.e. 
sometimes the file would > be extracted and most times, the file would not even show up in the > file.log log. > > I suspect that I need to do something to check for file write completion > but don't know how to go about doing it as there is not a file_done event. > There is,however, a file_gap event that I read about. > > Has anyone successfully done this? > > > I am using the loopback device on a linux server. > sudo bro -i lo extract.bro > > wget http://localhost/test.exe > > > ================extract.bro======================================= > > global ext_map: table[string] of string = { > ["application/x-dosexec"] = "exe", > } &default =""; > > event file_new(f: fa_file) > { > if ( ! f?$mime_type || ext_map[f$mime_type] == "" ) > return; > > local ext = ""; > ext = ext_map[f$mime_type]; > > local fname = fmt("%s-%s.%s", f$source, f$id, ext); > Files::add_analyzer(f, Files::ANALYZER_EXTRACT, > [$extract_filename=fname]); > } > > > > > ======================================= > > Thanks, > > Ken > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140925/1357ea09/attachment.html From edthoma at sandia.gov Fri Sep 26 09:23:19 2014 From: edthoma at sandia.gov (Thomas, Eric D) Date: Fri, 26 Sep 2014 16:23:19 +0000 Subject: [Bro] I miss cf Message-ID: Most of the time I use bro-cut, I just want to convert the date to human readable format. Usually I want to do it after I've grepped out logs and left out the bro log headers. How do I get the basic cf functionality back? -- Eric Thomas edthoma at sandia.gov -------------- next part -------------- An HTML attachment was scrubbed... 
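Following M K's pointer, a hedged sketch of how Ken's extractor could use file_state_remove to learn when Bro has finished writing a file. The event fires when Bro tears down its state for the file, i.e. the extract analyzer will write no more bytes, which is a safe point to hand the file to a sandbox. The print-based reporting here is illustrative only:

```bro
global ext_map: table[string] of string = {
    ["application/x-dosexec"] = "exe",
} &default="";

event file_new(f: fa_file)
    {
    if ( ! f?$mime_type || ext_map[f$mime_type] == "" )
        return;

    local fname = fmt("%s-%s.%s", f$source, f$id, ext_map[f$mime_type]);
    Files::add_analyzer(f, Files::ANALYZER_EXTRACT, [$extract_filename=fname]);
    }

# Fires once per file when Bro's state for it is removed; after this
# point the extracted file on disk will not change.
event file_state_remove(f: fa_file)
    {
    if ( f?$mime_type && ext_map[f$mime_type] != "" )
        print fmt("finished: %s-%s (seen %d of %s bytes)",
                  f$source, f$id, f$seen_bytes,
                  f?$total_bytes ? cat(f$total_bytes) : "unknown");
    }
```

Comparing seen_bytes against total_bytes (when the protocol supplies it) also gives a cheap check for the truncated-extraction symptom described above.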
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140926/3704c931/attachment.html From asharma at lbl.gov Fri Sep 26 09:36:24 2014 From: asharma at lbl.gov (Aashish Sharma) Date: Fri, 26 Sep 2014 09:36:24 -0700 Subject: [Bro] I miss cf In-Reply-To: References: Message-ID: <20140926163622.GA391@yaksha.lbl.gov> Eric: cf and hf are available at: ftp://ftp.ee.lbl.gov/ Aashish On Fri, Sep 26, 2014 at 04:23:19PM +0000, Thomas, Eric D wrote: > > Most of the time I use bro-cut, I just want to convert the date to human > readable format. Usually I want to do it after I've grepped out logs and > left out the bro log headers. How do I get the basic cf functionality back? > > -- > Eric Thomas > edthoma at sandia.gov > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -- Aashish Sharma (asharma at lbl.gov) Cyber Security, Lawrence Berkeley National Laboratory http://go.lbl.gov/pgp-aashish Office: (510)-495-2680 Cell: (510)-612-7971 -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 189 bytes Desc: not available Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140926/872d9cb5/attachment.bin From roihat168 at yahoo.com Sun Sep 28 01:37:16 2014 From: roihat168 at yahoo.com (roi hatam) Date: Sun, 28 Sep 2014 01:37:16 -0700 Subject: [Bro] communication with broccoli Message-ID: <1411893436.58478.YahooMailNeo@web162101.mail.bf1.yahoo.com> Hello, I have a C program which works fine with the broccoli library when Bro is in the standalone configuration. When I try to activate Bro with the manager, proxy, and two-worker configuration, broccoli does not get any events. Which port should broccoli listen on (manager, proxy, or workers)? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140928/ad76ec04/attachment.html From bala150985 at gmail.com Sun Sep 28 02:43:09 2014 From: bala150985 at gmail.com (Balasubramaniam Natarajan) Date: Sun, 28 Sep 2014 15:13:09 +0530 Subject: [Bro] Query regarding pattern match Message-ID: Hi Could someone explain to me why the pattern match of [:digit:] matches characters and prints s2 in the link given below? http://try.bro.org/#/trybro/saved/d1c1dbb4-0148-4097-8967-0a5d2e796d69 Just in case the above link changes as I work on a new exercise, I have uploaded a screenshot as well. -- Regards, Balasubramaniam Natarajan http://blog.etutorshop.com -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140928/6757b866/attachment.html From johanna at icir.org Sun Sep 28 05:15:57 2014 From: johanna at icir.org (Johanna Amann) Date: Sun, 28 Sep 2014 14:15:57 +0200 Subject: [Bro] Query regarding pattern match In-Reply-To: References: Message-ID: Hello, Bro regular expressions do not support this kind of character class. As far as I know they only support listing all letters you want to include, negation, or ranges. The line matches because the letter t is included in :digit: as well as s2. Johanna On 28 Sep 2014, at 11:43, Balasubramaniam Natarajan wrote: > Hi > > Could someone explain to me why the pattern match of > [:digit:] > matches characters and prints s2 in the link given below? > > http://try.bro.org/#/trybro/saved/d1c1dbb4-0148-4097-8967-0a5d2e796d69 > > Just in case the above link changes as I work on a new exercise, I have > uploaded a screenshot as well. 
> > > > > -- > Regards, > Balasubramaniam Natarajan > http://blog.etutorshop.com > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From bala150985 at gmail.com Sun Sep 28 07:46:16 2014 From: bala150985 at gmail.com (Balasubramaniam Natarajan) Date: Sun, 28 Sep 2014 20:16:16 +0530 Subject: [Bro] Query regarding pattern match In-Reply-To: References: Message-ID: Thanks for explaining. I got a bit confused by the Subexercise Part 2 example on the link https://www.bro.org/current/exercises/prog-primer/part1.bro Now I understand that we cannot use such character classes. On Sun, Sep 28, 2014 at 5:45 PM, Johanna Amann wrote: > Hello, > > Bro regular expressions does not support this kind of character classes. > As far as I know it only supports listing all letters you want to include, > negation or ranges. The line matches because the letter t is included in > :digit: as well as s2. > > Johanna > > -- Regards, Balasubramaniam Natarajan http://blog.etutorshop.com -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140928/3eeb326f/attachment.html From dngr7512 at gmail.com Sun Sep 28 07:46:51 2014 From: dngr7512 at gmail.com (daniel nagar) Date: Sun, 28 Sep 2014 17:46:51 +0300 Subject: [Bro] (no subject) Message-ID: I'm trying to collect HTTP events of the same request/response into a single structure (much like wireshark does) and extract it to an outside source like a message queue. What's is the best way to approach this problem? Much Thanks -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140928/b2c0bbed/attachment.html From johanna at icir.org Sun Sep 28 07:49:25 2014 From: johanna at icir.org (Johanna Amann) Date: Sun, 28 Sep 2014 16:49:25 +0200 Subject: [Bro] Query regarding pattern match In-Reply-To: References: Message-ID: Actually I was wrong - the example you cited explains it correctly. You need to use double parentheses with the character classes - if you use [[:digit:]] it works perfectly. Johanna On 28 Sep 2014, at 16:46, Balasubramaniam Natarajan wrote: > Thanks for explaining. > > I got a bit confused by the Subexercise Part 2 example on the link > https://www.bro.org/current/exercises/prog-primer/part1.bro Now I > understand that we cannot use such character classes. > > On Sun, Sep 28, 2014 at 5:45 PM, Johanna Amann > wrote: > >> Hello, >> >> Bro regular expressions does not support this kind of character >> classes. >> As far as I know it only supports listing all letters you want to >> include, >> negation or ranges. The line matches because the letter t is included >> in >> :digit: as well as s2. >> >> Johanna >> >> > > -- > Regards, > Balasubramaniam Natarajan > http://blog.etutorshop.com From dhuse at american.edu Mon Sep 29 11:11:54 2014 From: dhuse at american.edu (Alec Dhuse) Date: Mon, 29 Sep 2014 14:11:54 -0400 Subject: [Bro] ShellShock Detector for Bro Message-ID: Hi everyone, I'm trying to get the ShellShock Detector for Bro ( https://github.com/broala/bro-shellshock) installed. Currently I have the files on my Bro box and the config updated. But when I try to check the script I get an error with an unrelated library, see below: $ ./broctl check manager failed. 
error in /.../bro/share/bro/base/frameworks/sumstats/./main.bro, line 191 and /.../bro/share/bro/base/frameworks/sumstats/./plugins/./average.bro, line 17: incompatible types (hook(r:record { stream:string; apply:set[enum]; pred:function(key:record { str:string; host:addr; }; obs:record { num:count; dbl:double; str:string; };) : bool; normalize_key:function(key:record { str:string; host:addr; };) : record { str:string; host:addr; }; sid:string; }; val:double; data:record { num:count; dbl:double; str:string; }; rv:record { begin:time; end:time; num:count; average:double; };) : bool and hook(r:record { stream:string; apply:set[enum]; pred:function(key:record { str:string; host:addr; }; obs:record { num:count; dbl:double; str:string; };) : bool; normalize_key:function(key:record { str:string; host:addr; };) : record { str:string; host:addr; }; sid:string; }; val:double; obs:record { num:count; dbl:double; str:string; }; rv:record { begin:time; end:time; num:count; average:double; };) : bool) proxy-1 failed. I'm stuck at this point, so any help is appreciated. Thanks! - Alec -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140929/d5d0f18f/attachment.html From lists at g-clef.net Tue Sep 30 13:39:47 2014 From: lists at g-clef.net (Aaron Gee-Clough) Date: Tue, 30 Sep 2014 16:39:47 -0400 Subject: [Bro] Multiple Intel framework hits for same connection? References: <541C8ABF.7070500@g-clef.net> <33F71F16-3AA4-4B0C-8542-9651698811BA@icir.org> Message-ID: <542B1513.7080203@g-clef.net> Thanks, Seth. I've done some more digging on this, and I'm a bit more confused than when I started (not the first time that's been true). I have put a cert for a server I control into the intel framework with a line like this: 62e00e51aaf306e7738a50d7c1f4746d271f9a12 Intel::CERT_HASH blacklist_test https://testingurl/ T - Bro's intel framework never fires for connections to the host with this cert. 
I do see the cert's hash in files.log, so it is being passed over the wire past bro. If I add the host's IP to the intel file, the intel framework generates notices properly, so I know the intel framework is loaded & generally working. The thing that confuses me is that when I look at the scripts in policy/frameworks/intel/seen, I can see scripts that will generate source information for every Intel type *except* for Intel::USER_NAME and Intel::CERT_HASH. Am I barking up a wrong tree here, or did those two not get implemented in the intel framework scripts? If they did get implemented, then I'm not sure what I'm doing wrong...I just can't get bro to fire for SSL cert hashes. I'm running bro 2.3.1 (just updated today), if that makes any difference. Thanks. Aaron On 09/19/2014 04:15 PM, Seth Hall wrote: > > > On Sep 19, 2014, at 3:57 PM, Aaron Gee-Clough wrote: > >> I have a question about the intel framework: if a flow matches both an >> Intel::ADDR and Intel::CERT_HASH (for example), will the intel framework >> generate notice logs for both matches, or just one? > > It should definitely match both. That's a problem if it's not. > > .Seth > > -- > Seth Hall > International Computer Science Institute > (Bro) because everyone has a network > http://www.bro.org/ > From seth at icir.org Tue Sep 30 13:46:46 2014 From: seth at icir.org (Seth Hall) Date: Tue, 30 Sep 2014 16:46:46 -0400 Subject: [Bro] Multiple Intel framework hits for same connection? In-Reply-To: <542B1513.7080203@g-clef.net> References: <541C8ABF.7070500@g-clef.net> <33F71F16-3AA4-4B0C-8542-9651698811BA@icir.org> <542B1513.7080203@g-clef.net> Message-ID: <144199A0-660E-4A15-A03F-9D14211606C4@icir.org> On Sep 30, 2014, at 4:39 PM, Aaron Gee-Clough wrote: > If they did get implemented, then I'm not sure what I'm doing wrong...I just can't get bro to fire for SSL cert hashes. I'm running bro 2.3.1 (just updated today), if that makes any difference. Sorry, that's my mistake. 
I never actually implemented a script that used CERT_HASH. Just make those FILE_HASH instead. That's more proper anyway now that certs are handled as files. .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From liburdi.joshua at gmail.com Tue Sep 30 13:58:21 2014 From: liburdi.joshua at gmail.com (Josh Liburdi) Date: Tue, 30 Sep 2014 13:58:21 -0700 Subject: [Bro] Multiple Intel framework hits for same connection? In-Reply-To: <144199A0-660E-4A15-A03F-9D14211606C4@icir.org> References: <541C8ABF.7070500@g-clef.net> <33F71F16-3AA4-4B0C-8542-9651698811BA@icir.org> <542B1513.7080203@g-clef.net> <144199A0-660E-4A15-A03F-9D14211606C4@icir.org> Message-ID: There also aren't scripts that use USER_NAME, but I have some additions to fix that. :) On Tue, Sep 30, 2014 at 1:46 PM, Seth Hall wrote: > > On Sep 30, 2014, at 4:39 PM, Aaron Gee-Clough wrote: > >> If they did get implemented, then I'm not sure what I'm doing wrong...I just can't get bro to fire for SSL cert hashes. I'm running bro 2.3.1 (just updated today), if that makes any difference. > > Sorry, that's my mistake. I never actually implemented a script that used CERT_HASH. Just make those FILE_HASH instead. That's more proper anyway now that certs are handled as files. > > .Seth > > -- > Seth Hall > International Computer Science Institute > (Bro) because everyone has a network > http://www.bro.org/ > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From lists at g-clef.net Tue Sep 30 13:59:35 2014 From: lists at g-clef.net (Aaron Gee-Clough) Date: Tue, 30 Sep 2014 16:59:35 -0400 Subject: [Bro] Multiple Intel framework hits for same connection? 
References: <541C8ABF.7070500@g-clef.net> <33F71F16-3AA4-4B0C-8542-9651698811BA@icir.org> <542B1513.7080203@g-clef.net> <144199A0-660E-4A15-A03F-9D14211606C4@icir.org> Message-ID: <542B19B7.5020209@g-clef.net> No worries. That's fixed it. I'm seeing hits for certs when I change to use FILE_HASH. Thanks for your help. aaron On 09/30/2014 04:46 PM, Seth Hall wrote: > > > On Sep 30, 2014, at 4:39 PM, Aaron Gee-Clough > wrote: > >> If they did get implemented, then I'm not sure what I'm doing >> wrong...I just can't get bro to fire for SSL cert hashes. I'm >> running bro 2.3.1 (just updated today), if that makes any >> difference. > > Sorry, that's my mistake. I never actually implemented a script that > used CERT_HASH. Just make those FILE_HASH instead. That's more > proper anyway now that certs are handled as files. > > .Seth > > -- Seth Hall International Computer Science Institute (Bro) because > everyone has a network http://www.bro.org/ >
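For readers who hit the same issue: with Seth's advice, Aaron's indicator only needs its type changed from Intel::CERT_HASH to Intel::FILE_HASH. A hedged sketch follows; the file path is illustrative and the #fields header shows one plausible column layout (it must match the columns you actually use, and intel files must be tab-separated):

```
#fields	indicator	indicator_type	meta.source	meta.url	meta.do_notice	meta.if_in
62e00e51aaf306e7738a50d7c1f4746d271f9a12	Intel::FILE_HASH	blacklist_test	https://testingurl/	T	-
```

Loading it from a site script might look like:

```bro
# Load the observation scripts and the notice-generating policy,
# then point Bro at the intel file (path is illustrative).
@load policy/frameworks/intel/seen
@load policy/frameworks/intel/do_notice
redef Intel::read_files += { "/usr/local/bro/share/intel/blacklist.dat" };
```

Since certificates are handled as files, the SHA1 hash of the certificate seen in files.log is exactly what FILE_HASH matches against.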