From daniel.guerra69 at gmail.com  Tue Jul  1 05:01:29 2014
From: daniel.guerra69 at gmail.com (daniel.guerra69)
Date: Tue, 01 Jul 2014 14:01:29 +0200
Subject: [Bro] Unanswered http post
Message-ID: <53B2A319.9090409@gmail.com>

Hi,

I have an unanswered HTTP POST that contains a username and password. The DPD signature only fires when the POST is answered. Is there a way to deal with this? I would like to see the request in my http.log.

Regards,
Daniel

From jxbatchelor at gmail.com  Wed Jul  2 08:35:30 2014
From: jxbatchelor at gmail.com (Jason Batchelor)
Date: Wed, 2 Jul 2014 10:35:30 -0500
Subject: [Bro] Bro Scripting Question
Message-ID:

Hello all:

I am interested in learning Bro scripting. As a first exercise, I am attempting to write a simple script that extracts EXE files and uses the MD5 hash of each file as part of the filename written to disk. I am aware of, and have studied, the example and documentation here:

http://www.bro.org/bro-exchange-2013/exercises/faf.html
http://www.bro.org/sphinx-git/scripts/base/frameworks/files/main.bro.html#type-Files::Info

From that I came up with the following...

---------------------
@load base/frameworks/files
@load frameworks/files/hash-all-files

export {
    const ext_map: table[string] of string = {
        ["application/x-dosexec"] = "exe"
    } &redef;
}

event file_new(f: fa_file)
    {
    Files::add_analyzer(f, Files::ANALYZER_MD5);
    }

event file_hash(f: fa_file, kind: string, hash: string)
    {
    local ext = "";

    if ( f?$mime_type )
        ext = ext_map[f$mime_type];

    if ( kind == "md5" && ext != "" )
        local fname = fmt("%s-%s-%s", f$source, hash, ext);

    Files::add_analyzer(f, Files::ANALYZER_EXTRACT, [$extract_filename=fname]);
    }
-----------------------

The file extraction event shows up in files.log along with the appropriate filename, and the extract_files directory is created under the appropriate worker. Unfortunately, no file is ever written to disk. Oddly (to me at least), when I use the example script (from the above link) I am able to retrieve files.
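A side note on the first script, offered as an assumption rather than something confirmed in the thread: in Bro, indexing a table with a key it does not hold is a runtime error unless the table declares &default, so the unguarded ext_map lookup can abort the file_hash handler for any file whose MIME type is not in the table. A guarded sketch of that lookup (illustrative only):

```bro
event file_hash(f: fa_file, kind: string, hash: string)
    {
    local ext = "";

    # Guard the lookup: ext_map has no &default, so indexing it with an
    # unknown MIME type is a runtime error that aborts this handler.
    if ( f?$mime_type && f$mime_type in ext_map )
        ext = ext_map[f$mime_type];

    # ... remainder as in the original script ...
    }
```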
However, my goal was to have the hash in the filename written to disk (replacing the id). I tried the following derivative (more in line with the example), with no luck either. With this one I don't have any evidence in files.log that extractions are taking place.

-------------------------
@load base/frameworks/files
@load frameworks/files/hash-all-files

global ext_map: table[string] of string = {
    ["application/x-dosexec"] = "exe",
    ["text/plain"] = "txt",
    ["image/jpeg"] = "jpg",
    ["image/png"] = "png",
    ["text/html"] = "html",
} &default = "";

event file_new(f: fa_file)
    {
    if ( ! f?$mime_type || f$mime_type != "application/x-dosexec" )
        return;

    local ext = ext_map[f$mime_type];
    Files::add_analyzer(f, Files::ANALYZER_MD5);
    local fname = fmt("%s-%s-%s", f$source, f$info$md5, ext);
    Files::add_analyzer(f, Files::ANALYZER_EXTRACT, [$extract_filename=fname]);
    }

event file_hash(f: fa_file, kind: string, hash: string)
    {
    f$info$md5 = hash;
    }
-------------------------

Curious if anyone has any tips or pointers. This is likely something simple I am missing, or a lack of understanding on my part.

Thanks,
Jason
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140702/67d88f5f/attachment.html

From seth at icir.org  Wed Jul  2 10:01:33 2014
From: seth at icir.org (Seth Hall)
Date: Wed, 2 Jul 2014 13:01:33 -0400
Subject: [Bro] Bro Scripting Question
In-Reply-To:
References:
Message-ID:

On Jul 2, 2014, at 11:35 AM, Jason Batchelor wrote:

> Hello all:
>
> I am interested in learning Bro scripting, and I am attempting to write a simple first script that simply extracts EXE files and have the MD5 hash of the file as part of the filename written to disk.

You have a chicken-and-egg problem. :)

You have to begin extracting the file as soon as the file starts to be transferred, but you don't have the hash of the file until the file is done being transferred.
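One workable pattern, offered here as a rough sketch under stated assumptions rather than the list's confirmed solution: extract each file under a temporary, file-ID-based name, then move it to a hash-based name once the file is complete. The sketch assumes the stock extract and hash analyzers, extracted files landing in the default ./extract_files/ directory, and uses the system() BIF for the final move.

```bro
@load base/frameworks/files
@load base/files/extract
@load base/files/hash

# Sketch: spool each executable to a temporary name keyed by the file ID,
# then rename it to an MD5-based name once the file is complete.

event file_new(f: fa_file)
    {
    if ( ! f?$mime_type || f$mime_type != "application/x-dosexec" )
        return;

    Files::add_analyzer(f, Files::ANALYZER_MD5);
    Files::add_analyzer(f, Files::ANALYZER_EXTRACT,
                        [$extract_filename=fmt("tmp-%s", f$id)]);
    }

event file_state_remove(f: fa_file)
    {
    # By the time a file's state is removed, the MD5 analyzer has filled
    # in f$info$md5 (assuming the transfer completed), so the final
    # on-disk name can include the hash.  Paths below assume the default
    # extraction directory.
    if ( f?$info && f$info?$md5 && f$info?$extracted )
        system(fmt("mv extract_files/%s extract_files/%s-%s.exe",
                   f$info$extracted, f$source, f$info$md5));
    }
```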
I did some work quite a while back that would give you the ability to do what you want. It did it by spooling the file into a temporary file name and then moving the file to the correct name once the file is complete and all needed information is available. That's what you'll have to do. I'll let you spend some time implementing that if you're interested, but if you're having any trouble getting to a workable solution, reach out again and I can give you some more hints. ;)

.Seth

--
Seth Hall
International Computer Science Institute
(Bro) because everyone has a network
http://www.bro.org/

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 495 bytes
Desc: Message signed with OpenPGP using GPGMail
Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140702/5e5b2e23/attachment.bin

From jxbatchelor at gmail.com  Wed Jul  2 10:59:35 2014
From: jxbatchelor at gmail.com (Jason Batchelor)
Date: Wed, 2 Jul 2014 12:59:35 -0500
Subject: [Bro] Bro Scripting Question
In-Reply-To:
References:
Message-ID:

Thanks Seth, that helps. I thought of that as a possibility, but I didn't understand enough about what exactly happens during a file extract trigger to settle on that conclusion (is the file stream tagged, spooled in memory, hashed, then written, or something else... etc.). With those guidelines, whipping something up that does this should not be too terrible an exercise.

One additional question, however: if someone is interested in writing a new analyzer, what would be a good place to start? For example, what if someone wanted to write an analyzer that examined the MZ header of an executable for certain characteristics? What would be a good starting point for them? I've started reviewing the following...
http://www.bro.org/sphinx-git/scripts/base/frameworks/files/main.bro.html#type-Files::AnalyzerArgs As well as different modules like /files/extract/main.bro, but didn't know if you knew of a better place to begin for an ambitious novice :) Also Kevin, thanks for your reply. I think you are correct, and combining your input with Seth's, it is clear to me why the example was working and why I was getting halfway then zero results with my earlier attempts. Thanks, Jason On Wed, Jul 2, 2014 at 12:01 PM, Seth Hall wrote: > > On Jul 2, 2014, at 11:35 AM, Jason Batchelor > wrote: > > > Hello all: > > > > I am interested in learning Bro scripting, and I am attempting to write > a simple first script that simply extracts EXE files and have the MD5 hash > of the file as part of the filename written to disk. > > You have a chicken and egg problem. :) > > You have to begin extracting the file as soon as the file starts to be > transferred but you don't have the hash of the file until the file is done > being transferred. I did some work quite a while back that would give you > the ability to do what you want but it did it by spooling the file into a > temporary file name and then moving the file into the correct name once the > file is complete and all needed information is available. That's what > you'll have to do. > > I'll let you spend some time implementing that if you're interested, but > if you're having any trouble getting to a workable solution, reach out > again and I can give you some more hints. ;) > > .Seth > > -- > Seth Hall > International Computer Science Institute > (Bro) because everyone has a network > http://www.bro.org/ > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140702/10c017dd/attachment.html From seth at icir.org Wed Jul 2 11:36:06 2014 From: seth at icir.org (Seth Hall) Date: Wed, 2 Jul 2014 14:36:06 -0400 Subject: [Bro] Bro Scripting Question In-Reply-To: References: Message-ID: <92D8CEF2-F378-4636-8B77-CFD6BAC4F2E5@icir.org> On Jul 2, 2014, at 1:59 PM, Jason Batchelor wrote: > One additional question however, if someone is interested in writing a new analyzer, what would be a good place to start? You could watch Vlad Grigorescu's presentation at last year's Bro Exchange about how to write a protocol analyzer: https://www.youtube.com/watch?v=1eDIl9y6ZnM > For example, what if someone wanted to write an analyzer that examined the MZ header of an executable for certain characteristics? What would be a good starting point for them? I've started reviewing the following... Writing file analyzers is a tiny bit different than writing protocol analyzers but generally the same model holds in both cases. We actually have a draft of a windows executable analyzer that Vlad recently made some changes to and can be found in the git repository under topic/vladg/file-analysis-exe-analyzer if you're curious about what that would look like. It's still incomplete and doesn't do everything we'd like it to. Unfortunately it's still not something that you will be doing in a Bro script however (in case that's what you were asking). .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc
Type: application/pgp-signature
Size: 495 bytes
Desc: Message signed with OpenPGP using GPGMail
Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140702/23bd033c/attachment.bin

From itsecderek at gmail.com  Thu Jul  3 05:49:54 2014
From: itsecderek at gmail.com (Derek Banks)
Date: Thu, 3 Jul 2014 08:49:54 -0400
Subject: [Bro] Question about syntax with notice suppression on intel hits
Message-ID:

Hello all,

I am hooking into the notice framework to alert on hits from the intel framework. For a given hit I get multiple emails. I'd like to suppress the notice, but I am having a syntax issue. This is what I have that doesn't work; what is the right syntax to add a suppression interval of X minutes?

hook Notice::policy(n: Notice::Info)
    {
    if ( n$note == Intel::Notice && n?$src && !(n$src in intel_server_whitelist) )
        {
        add n$actions[Notice::ACTION_EMAIL];
        add n$suppress_for=5min;
        }
    }

Best Regards,
Derek
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140703/1573f4bd/attachment.html

From Bill.Stackpole at rit.edu  Thu Jul  3 10:51:55 2014
From: Bill.Stackpole at rit.edu (Bill Stackpole)
Date: Thu, 03 Jul 2014 17:51:55 +0000
Subject: [Bro] basic scripting questions...
Message-ID:

1 - how can I iterate thru the name/value pairs in any given bro event type?

For example, if I were to do the following:

*** begin script ***
event file_new(f: fa_file) {
    local finfo = f$info;
    local fuid = f$id;
    local fsource = f$source;
    local ftype = f$type;
    local fname = f$name;
    print fmt("*** found %s in %s. saved as %s. FileID is %s. \n File info is %s.", ftype, fsource, fname, fuid, finfo);
}
*** end script ***

The finfo variable contents would be displayed. (Is this a complete list of the name/value pairs?)

I would like to do the same with "event file_hash" but cannot understand how to display a similar variable to that of "info".
2 - as an extension of the above enumeration question, how do I determine what elements in a given event are available for me to use for conditionals/structured programming/etc.?

My first goal is to understand the variable types that are defined and be able to explain that to my students. Then we can move on to using them to create scripts that act on interesting things. Finally, I would like to explore machine learning with Bro.

Thanks!
Bill

Bill.stackpole at rit.edu

From JAzoff at albany.edu  Thu Jul  3 11:33:18 2014
From: JAzoff at albany.edu (Justin Azoff)
Date: Thu, 3 Jul 2014 14:33:18 -0400
Subject: [Bro] basic scripting questions...
In-Reply-To:
References:
Message-ID: <20140703183318.GB11061@datacomm.albany.edu>

I think the simplest thing that can help is something like this:

event file_new(f: fa_file)
    {
    print f;
    }

The documentation also has the structure info:

http://www.bro.org/sphinx/scripts/base/bif/event.bif.bro.html#id-file_new
http://www.bro.org/sphinx/scripts/base/init-bare.bro.html#type-fa_file

--
-- Justin Azoff

On Thu, Jul 03, 2014 at 05:51:55PM +0000, Bill Stackpole wrote:
> 1 - how can I iterate thru the name/value pairs in any given bro event
> type?
>
> For example, if I were to do the following:
>
> *** begin script ***
> event file_new(f: fa_file) {
> local finfo = f$info;
> local fuid = f$id;
> local fsource = f$source;
> local ftype = f$type;
>
> local fname = f$name;
>
> print fmt("*** found %s in %s. saved as %s. FileID is %s. \n File info is
> %s.", ftype, fsource, fname, fuid, finfo);
> *** end script ***
>
> The finfo variable contents would be displayed. (is this a complete list
> of the name/value pairs?)
>
> I would like to do the same with "event file_hash" but cannot understand
> how to display a similar variable to that of "info".
>
> 2 - as an extension of the above enumeration question, how do I determine
> what elements in a given event are available for me to use for
> conditionals/structured programming/etc?
>
>
> My first goal is to understand the variable types that are defined and be
> able to explain that to my students. Then we can move on to use them to
> create scripts to act on interesting things. Finally, I would like to
> explore machine learning with bro.
>
> Thanks!
> Bill
>
> Bill.stackpole at rit.edu

From jxbatchelor at gmail.com  Mon Jul  7 07:23:47 2014
From: jxbatchelor at gmail.com (Jason Batchelor)
Date: Mon, 7 Jul 2014 09:23:47 -0500
Subject: [Bro] Memory Consumption
In-Reply-To: <53B2127D.4060707@ohio.edu>
References: <86D9FB05-B648-4646-A7B7-7F08E663E67F@icir.org> <53ACA02A.70709@ohio.edu> <53ADD670.3000401@ohio.edu> <53B2127D.4060707@ohio.edu>
Message-ID:

I wanted to circle back on this because, after a little more digging, I believe I have found the root cause of my issue.

Ultimately, the high memory usage I was seeing was due to large amounts of memory previously allocated to (presumably) Bro processes becoming inactive. Inactive memory is memory that was allocated to a process that is no longer running. While this pool is one the OS draws on when it needs more memory (to accommodate other processes), it is NOT counted as 'free' memory. For example...

cat /proc/meminfo
MemTotal:       49376004 kB
MemFree:         1909988 kB
Buffers:          231036 kB
Cached:         17096308 kB
SwapCached:        75124 kB
Active:         21040696 kB
Inactive:       16141408 kB
Active(anon):   17410144 kB
Inactive(anon):  2445380 kB
Active(file):    3630552 kB
Inactive(file): 13696028 kB

Very little free memory (relatively speaking) here after running Bro over the weekend on a system that sees high volumes of traffic. This memory can be freed, however, and one need not reboot the server to make this happen...
; clear the memory buffers and cache
#] sync && echo 3 > /proc/sys/vm/drop_caches

cat /proc/meminfo
MemTotal:       49376004 kB
MemFree:        19784904 kB
Buffers:            3316 kB
Cached:           135400 kB
SwapCached:        75120 kB
Active:         17439080 kB
Inactive:        2554152 kB
Active(anon):   17410136 kB
Inactive(anon):  2445380 kB
Active(file):      28944 kB
Inactive(file):   108772 kB

Now that's more like it :)

At the end of the day I am not sure this is something to be concerned about, since the OS will allocate memory from the 'inactive' pool as well as the 'free' pool as warranted. However, if you are running monitoring apps that complain when system resources cross a certain threshold, this is certainly something to consider. Anyone experiencing issues similar to mine ought to be aware of it, in my mind.

Helpful link explaining:
http://www.tinylan.com/article/how-to-clear-inactive-memory-in-Linux

One note on performance: at least one article I found explains how clearing this may actually be a detriment.
http://apple.stackexchange.com/questions/67031/isnt-inactive-memory-a-waste-of-resources

However, there are offerings out there that claim to free inactive memory under the banner of performance optimization.
https://www.youtube.com/watch?v=qd9V9eSdjLQ

I'm not really sure what the case is with Bro; is anyone willing to weigh in on that point? Is it worth setting up a cron job for the command above every so often?

Many thanks,
Jason

On Mon, Jun 30, 2014 at 8:44 PM, Gilbert Clark wrote:

> No worries. Hope this works out :)
>
> Cheers,
> Gilbert
>
> On 6/30/14, 4:32 PM, Jason Batchelor wrote:
>
> Thanks again Gilbert.
>
> I put in an aggressive bpf filter that eliminates 90% of traffic and
> memory flatlined, so I do not believe there to be a memory leak.
>
> I then removed the old restrictive filter and replaced it with a more
> broad one that does nix out multicast and broadcast traffic.
I did notice > the ConnectionInactivityTimer to be much more stable actually. > > grep 'ConnectionInactivityTimer' prof.log | awk 'NR % 10 == 1' > 1404155766.423613 ConnectionInactivityTimer = 4412 > 1404155916.424489 ConnectionInactivityTimer = 4776 > 1404156066.425171 ConnectionInactivityTimer = 4627 > 1404156216.426120 ConnectionInactivityTimer = 4974 > 1404156366.426889 ConnectionInactivityTimer = 4784 > 1404156516.428065 ConnectionInactivityTimer = 4750 > 1404156666.429125 ConnectionInactivityTimer = 4687 > 1404156816.431119 ConnectionInactivityTimer = 5006 > 1404156966.431837 ConnectionInactivityTimer = 4830 > > So far with multicast and broadcast traffic being filtered out, I have > noticed the memory to gradually increase but at a much slower rate. It has > not come close to using all resources yet but I will check again tomorrow. > It would seem that the perscription for more RAM needs to be written based > on the observables collected so far. Many thanks for the helpful tips. > > > > On Fri, Jun 27, 2014 at 3:39 PM, Gilbert Clark wrote: > >> Hi Jason: >> >> It's only when *pending* begins to grow that there's a large queue of >> messages waiting to be written (as far as I know). In this case, pending >> stays at 0/0 for each of those log updates, so I don't think that's an >> issue (for this log, at least :) >> >> One possibility here is going to be that bro is actually leaking memory >> somewhere, and you're lucky enough to have found a bug :) Usually this is >> going to be identified by a steady rise in memory usage over time with a >> relatively constant traffic volume / types. One thing to try (if it's >> something you feel comfortable with) might be to pull down the current copy >> of bro in master, building that, and pushing this out to the nodes to see >> what kind of an effect that has (if anything). 
>> >> Another thing to try might be to start tweaking which scripts are >> actually loaded in local.bro: comment out all the @load statements in >> local.bro (add a '#' to the beginning of each line) and see what kind of an >> effect that has on memory utilization. Assuming memory usage drops, then >> you can start slowly start removing the # characters / restarting bro to >> re-load scripts one at a time. This is going to be pretty tedious, though. >> >> A third thing to try might be doing a bit of sampling so that bro only >> sees some % of incoming packets. From there, slowly start to bring traffic >> back to see how bro's memory utilization rises as traffic is restored. >> Might want to define a few different levels (drop 90% of connections, drop >> 75% of connections, drop 50% of connections, drop 25% of connections) and >> just leave the drop rate at each step for a while (read: a few hours would >> probably be a good start) to see what happens to the memory utilization. >> If memory utilization stays pretty constant at each level, and the overall >> memory pattern ends up looking like a staircase, it might be time to >> consider that RAM upgrade :) >> >> *** Note that it's important that traffic be connection-sampled when >> trying the above: packet-sampling will lead to dropped packets in the >> middle of connections, which might not work quite as expected. Seth: how >> does connection sampling work for the packet filter framework? I haven't >> ever really used it, but I think it can do that, right? Also, any other >> ideas / things I might be missing here? >> >> One other question: did adjusting those timeouts change the number of >> inactivity timers reported? Probably not relevant to this issue, but just >> wondering if the change had any measurable effect. >> >> -Gilbert >> >> >> On 6/27/14, 11:29 AM, Jason Batchelor wrote: >> >> Thanks Gilbert! >> >> I think I am getting close to at least isolating the issue. 
>> >> I redefined some of the inactivity timeout values to something pregger >> aggressive... >> redef tcp_inactivity_timeout = 15 sec; >> redef udp_inactivity_timeout = 5 sec; >> redef icmp_inactivity_timeout = 5 sec; >> >> After committing the changes and restarting I am still seeing the same >> kind of slow memory consumption behavior. >> >> I checked the IO statistics you gave above and think this is where I am >> getting backed up. Below is a brief escalation of just the http logs >> themselves. >> >> grep 'http/Log::WRITER_ASCII' prof.log | awk 'NR % 10 == 1' >> 1403880551.747191 http/Log::WRITER_ASCII in=40 out=11 pending=0/0 >> (#queue r/w: in=40/40 out=11/11) >> 1403880701.759282 http/Log::WRITER_ASCII in=632 out=160 pending=0/0 >> (#queue r/w: in=632/632 out=160/160) >> 1403880851.764553 http/Log::WRITER_ASCII in=1254 out=310 pending=0/0 >> (#queue r/w: in=1254/1254 out=310/310) >> 1403881001.794827 http/Log::WRITER_ASCII in=1881 out=459 pending=0/0 >> (#queue r/w: in=1881/1881 out=459/459) >> 1403881151.907771 http/Log::WRITER_ASCII in=2496 out=607 pending=0/0 >> (#queue r/w: in=2496/2496 out=607/607) >> 1403881302.133110 http/Log::WRITER_ASCII in=3140 out=754 pending=0/0 >> (#queue r/w: in=3140/3140 out=754/754) >> 1403881452.684259 http/Log::WRITER_ASCII in=3781 out=900 pending=0/0 >> (#queue r/w: in=3781/3781 out=900/900) >> 1403881611.446692 http/Log::WRITER_ASCII in=4321 out=1000 >> pending=0/0 (#queue r/w: in=4321/4321 out=1000/1000) >> 1403881783.945970 http/Log::WRITER_ASCII in=4816 out=1069 >> pending=0/0 (#queue r/w: in=4816/4816 out=1069/1069) >> 1403881991.154812 http/Log::WRITER_ASCII in=5435 out=1105 >> pending=0/0 (#queue r/w: in=5435/5435 out=1105/1105) >> 1403882156.814938 http/Log::WRITER_ASCII in=6066 out=1190 >> pending=0/0 (#queue r/w: in=6066/6066 out=1190/1190) >> 1403882336.215055 http/Log::WRITER_ASCII in=6690 out=1267 >> pending=0/0 (#queue r/w: in=6690/6690 out=1267/1267) >> 1403882494.089058 http/Log::WRITER_ASCII in=7350 
out=1377 >> pending=0/0 (#queue r/w: in=7350/7350 out=1377/1377) >> >> If I am interpreting this correctly, I am far exceeding my ability to >> write out logs as time goes on, resulting in a backup of that data in >> memory presumably. The same kind of behavior is seen in other log types as >> well. >> >> Am I interpreting this correctly? If so the solution seems to be I need >> faster drives and/or more memory :) >> >> >> >> On Thu, Jun 26, 2014 at 5:35 PM, Gilbert Clark wrote: >> >>> Hi: >>> >>> I believe this particular timer is a general timer used to track >>> inactivity for all protocols (but someone can correct me if I'm wrong :). >>> Maybe try tuning the following: >>> >>> const tcp_inactivity_timeout = 5 min &redef; >>> const udp_inactivity_timeout = 1 min &redef; >>> const icmp_inactivity_timeout = 1 min &redef; >>> >>> Reference: >>> http://www.notary.icsi.berkeley.edu/sphinx-git/scripts/base/init-bare.bro.html#id-udp_inactivity_timeout >>> >>> Also, I believe it's possible to set timeouts per-connection based on >>> properties of the established connections. 
For an example of doing this / >>> how this might be useful, take a look at: >>> >>> https://bro.org/sphinx/scripts/base/protocols/conn/inactivity.bro.html >>> >>> Re: interpreting prof.log output -- a few notes from my experience: >>> >>> There should be lines that include the number of connections currently >>> active for each major protocol type, e.g: >>> >>> Conns: tcp=1/130 udp=1/70 icmp=0/0 >>> >>> Syntax here is: tcp=/>> active connections ever observed> udp=/>> number of active connections ever observed> icmp=>> connections>/ >>> >>> The line following the above includes more detailed connection overhead >>> information: >>> >>> Conns: total=6528 current=2/2 ext=0 mem=9312K avg=4656.0 table=24K >>> connvals=6K >>> >>> A few notes about fields that might be useful there: >>> >>> * total=total number of connections (aggregate: not just at this >>> particular moment) >>> * current=X/Y: X and Y are two counts that will usually differ to some >>> extent, but both count the number of connections observed >>> - X: the number of active connections in total (not necessarily all >>> of which are kept in state tables) >>> - Y: the number of connections stored in bro's state tables (tcp + >>> udp + icmp) at this moment in time >>> * avg=average memory use (in bytes) per active connection >>> * table=total amount of memory used by the state tables (tcp + udp + >>> icmp) >>> >>> 'avg' and 'table' are only recorded occasionally because computing these >>> values can be expensive. When that "Global_sizes ..." output is >>> included in a log entry, these numbers will be accurate. Otherwise, they >>> will be 0. 
>>> >>> For an idea of the overhead associated with the Timer objects themselves >>> (read: the overhead for the timers isn't included in the overhead computed >>> for the connection state), take a look at the line that looks something >>> like: >>> >>> Timers: current=19 max=19 mem=1K lag=0.00s >>> >>> *current=number of timers currently active in total >>> *max=maximum number of timers ever active at once >>> *mem=total memory consumed by all of the currently active timers >>> (usually pretty small compared to other things, though) >>> >>> Also, one other note: under 'Threads', there's a bunch of lines that >>> look something like: >>> >>> http/Log::WRITER_ASCII in=11318 out=10882 pending=0/0 (#queue r/w: >>> in=11318/11318 out=10882/10882) >>> ssl/Log::WRITER_ASCII in=10931 out=10878 pending=0/0 (#queue r/w: >>> in=10931/10931 out=10878/10878) >>> files/Log::WRITER_ASCII in=10989 out=10792 pending=0/0 (#queue r/w: >>> in=10989/10989 out=10792/10792) >>> dhcp/Log::WRITER_ASCII in=1031 out=1029 pending=0/0 (#queue r/w: >>> in=1031/1031 out=1029/1029) >>> >>> Generally, pending X/Y will describe how much memory is currently being >>> consumed (relatively speaking) by messages waiting to be written to a log >>> file / that have been read from that input source but not yet processed by >>> bro. >>> >>> A pending X/Y that grows steadily over time is an indication that bro >>> could eventually run out of room to store outstanding log / input framework >>> messages, and that these messages could eventually come to consume so much >>> memory that the worker would thrash the machine into sweet digital oblivion. >>> >>> Hope something in there is useful, >>> Gilbert >>> >>> >>> On 6/26/14, 2:26 PM, Jason Batchelor wrote: >>> >>> Small follow up to this as well since it may be relevant. I notice the >>> timers for stale connections seems to increase in paralel with memory... 
>>> >>> grep 'ConnectionInactivityTimer' prof.log | awk 'NR % 10 == 1' >>> 1403802069.314888 ConnectionInactivityTimer = 5844 >>> 1403802219.315759 ConnectionInactivityTimer = 21747 >>> 1403802369.316387 ConnectionInactivityTimer = 32275 >>> 1403802519.317613 ConnectionInactivityTimer = 32716 >>> 1403802669.318303 ConnectionInactivityTimer = 32597 >>> 1403802819.319193 ConnectionInactivityTimer = 34207 >>> 1403802969.320204 ConnectionInactivityTimer = 39176 >>> 1403803119.321978 ConnectionInactivityTimer = 40394 >>> 1403803269.323058 ConnectionInactivityTimer = 38631 >>> 1403803419.323688 ConnectionInactivityTimer = 35847 >>> 1403803569.324716 ConnectionInactivityTimer = 34432 >>> 1403803719.325888 ConnectionInactivityTimer = 34591 >>> 1403803869.326713 ConnectionInactivityTimer = 34716 >>> 1403804019.327664 ConnectionInactivityTimer = 35361 >>> 1403804169.329254 ConnectionInactivityTimer = 35915 >>> 1403804319.330507 ConnectionInactivityTimer = 34994 >>> 1403804469.331842 ConnectionInactivityTimer = 33212 >>> 1403804619.332236 ConnectionInactivityTimer = 32290 >>> 1403804769.332993 ConnectionInactivityTimer = 32513 >>> 1403804919.333717 ConnectionInactivityTimer = 32592 >>> 1403805069.334477 ConnectionInactivityTimer = 32388 >>> 1403805219.334875 ConnectionInactivityTimer = 32932 >>> 1403805369.335753 ConnectionInactivityTimer = 31771 >>> 1403805519.337054 ConnectionInactivityTimer = 28749 >>> 1403805669.337563 ConnectionInactivityTimer = 26509 >>> 1403805819.339240 ConnectionInactivityTimer = 26654 >>> 1403805969.340812 ConnectionInactivityTimer = 26297 >>> 1403806119.341841 ConnectionInactivityTimer = 25362 >>> 1403806269.344342 ConnectionInactivityTimer = 24435 >>> 1403806419.345146 ConnectionInactivityTimer = 24954 >>> 1403806569.346057 ConnectionInactivityTimer = 24088 >>> 1403806719.347671 ConnectionInactivityTimer = 30207 >>> 1403806869.349643 ConnectionInactivityTimer = 34276 >>> >>> Notice the steady increase, then slight decrease, then steady increase 
>>> again. Is there a way to control these settings for performance testing >>> purposes? >>> >>> I know while I was tuning Suricata, I needed to be mindful of connection >>> timeouts and due to the volume of flows I am getting I needed to be pretty >>> aggressive. >>> >>> Thanks, >>> Jason >>> >>> >>> >>> On Thu, Jun 26, 2014 at 12:29 PM, Jason Batchelor >> > wrote: >>> >>>> Thanks Seth: >>>> >>>> I'm not sure I have a license for an experianced bro memory debugger, >>>> however I will document what I've done here for folks in hopes it proves >>>> useful! >>>> >>>> I've enabled profiling by adding the following. >>>> >>>> Vim /opt/bro/share/bro/site/local.bro >>>> @load misc/profiling >>>> >>>> Then enforced the changes... >>>> >>>> broctl stop >>>> broctl install >>>> broctl start >>>> >>>> At the moment I have 46308184k used 3067820k free memory. >>>> >>>> In /var/opt/bro/spool/worker-1-1, prof.log content is captured as you >>>> mentioned (and likewise for all nodes). >>>> >>>> Earlier you wrote: >>>> >>>> > Every so often in there will be an indication of the largest >>>> global variables >>>> >>>> Is this what you mean (taken from one worker)....? 
>>>> >>>> 1403803224.322453 Global_sizes > 100k: 0K >>>> 1403803224.322453 Known::known_services = 469K >>>> (3130/3130 entries) >>>> 1403803224.322453 Cluster::manager2worker_events = 137K >>>> 1403803224.322453 Weird::weird_ignore = 31492K >>>> (146569/146569 entries) >>>> 1403803224.322453 Known::certs = 58K (310/310 entries) >>>> 1403803224.322453 SumStats::threshold_tracker = 668K >>>> (4/2916 entries) >>>> 1403803224.322453 FTP::ftp_data_expected = 181K (46/46 >>>> entries) >>>> 1403803224.322453 Notice::suppressing = 595K (2243/2243 >>>> entries) >>>> 1403803224.322453 Communication::connected_peers = 156K >>>> (2/2 entries) >>>> 1403803224.322453 SumStats::sending_results = 8028K >>>> (3/5545 entries) >>>> 1403803224.322453 Software::tracked = 33477K >>>> (12424/31111 entries) >>>> 1403803224.322453 FTP::cmd_reply_code = 48K (325/325 >>>> entries) >>>> 1403803224.322453 SumStats::result_store = 27962K >>>> (5/19978 entries) >>>> 1403803224.322453 SSL::cipher_desc = 97K (356/356 >>>> entries) >>>> 1403803224.322453 RADIUS::attr_types = 44K (169/169 >>>> entries) >>>> 1403803224.322453 Weird::actions = 35K (163/163 entries) >>>> 1403803224.322453 Known::known_hosts = 3221K >>>> (21773/21773 entries) >>>> 1403803224.322453 Weird::did_log = 54K (287/287 entries) >>>> 1403803224.322453 SSL::recently_validated_certs = 8667K >>>> (24752/24752 entries) >>>> 1403803224.322453 Communication::nodes = 188K (4/4 >>>> entries) >>>> 1403803224.322453 SSL::root_certs = 204K (144/144 >>>> entries) >>>> 1403803224.322453 Global_sizes total: 116727K >>>> 1403803224.322453 Total number of table entries: 213548/260715 >>>> 1403803239.322685 ------------------------ >>>> 1403803239.322685 Memory: total=1185296K total_adj=1137108K malloced: >>>> 1144576K >>>> >>>> Any other pointers on how to interpret this data? >>>> >>>> FWIW, here are some additional statistics from the worker prof.log... 
>>>> >>>> grep "Memory: " prof.log | awk 'NR % 10 == 1' >>>> 0.000000 Memory: total=48188K total_adj=0K malloced: 47965K >>>> 1403802189.315606 Memory: total=614476K total_adj=566288K malloced: >>>> 614022K >>>> 1403802339.316381 Memory: total=938380K total_adj=890192K malloced: >>>> 938275K >>>> 1403802489.317426 Memory: total=1006168K total_adj=957980K malloced: >>>> 1003385K >>>> 1403802639.318199 Memory: total=1041288K total_adj=993100K malloced: >>>> 1035422K >>>> 1403802789.319107 Memory: total=1063544K total_adj=1015356K malloced: >>>> 1058229K >>>> 1403802939.320170 Memory: total=1140652K total_adj=1092464K malloced: >>>> 1139608K >>>> 1403803089.321327 Memory: total=1184540K total_adj=1136352K malloced: >>>> 1179411K >>>> 1403803239.322685 Memory: total=1185296K total_adj=1137108K malloced: >>>> 1144576K >>>> 1403803389.323680 Memory: total=1185296K total_adj=1137108K malloced: >>>> 1118961K >>>> 1403803539.324677 Memory: total=1185296K total_adj=1137108K malloced: >>>> 1092719K >>>> 1403803689.325763 Memory: total=1185296K total_adj=1137108K malloced: >>>> 1091447K >>>> >>>> >>>> On Thu, Jun 26, 2014 at 11:49 AM, Seth Hall wrote: >>>> >>>>> >>>>> On Jun 26, 2014, at 12:43 PM, Jason Batchelor >>>>> wrote: >>>>> >>>>> > > Bro typically does consume quite a bit of memory and you're a bit >>>>> tight on memory for the number of workers you're running. >>>>> > Curious what would you recommend for just bro itself? Double, triple >>>>> this? >>>>> >>>>> It seems like most people just put 128G of memory in Bro boxes now >>>>> because the cost just isn't really worth going any lower if there's a >>>>> remote possibility you might use it. >>>>> >>>>> > I will definitely take a look, thanks for the info! >>>>> >>>>> Feel free to ask again if you're having trouble. We really should >>>>> write up some debugging documentation for this process sometime. Anyone >>>>> with experience doing this memory debugging activity up for it?
Doesn't >>>>> have to be anything fancy, just the steps and various things to look at to >>>>> figure out what exactly is happening. >>>>> >>>>> .Seth >>>>> >>>>> -- >>>>> Seth Hall >>>>> International Computer Science Institute >>>>> (Bro) because everyone has a network >>>>> http://www.bro.org/ >>>>> >>>>> >>>> >>> >>> >>> _______________________________________________ >>> Bro mailing list >>> bro at bro-ids.org >>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >>> >> >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140707/3b039a89/attachment.html From JAzoff at albany.edu Mon Jul 7 07:33:39 2014 From: JAzoff at albany.edu (Justin Azoff) Date: Mon, 7 Jul 2014 10:33:39 -0400 Subject: [Bro] Memory Consumption In-Reply-To: References: <86D9FB05-B648-4646-A7B7-7F08E663E67F@icir.org> <53ACA02A.70709@ohio.edu> <53ADD670.3000401@ohio.edu> <53B2127D.4060707@ohio.edu> Message-ID: <20140707143339.GA18605@datacomm.albany.edu> On Mon, Jul 07, 2014 at 09:23:47AM -0500, Jason Batchelor wrote: > I wanted to circle back on this real quick because after doing a little more > poking into this matter I believe I have found the root cause of my issues. > > What was happening, ultimately, was that the high volume of memory usage I was > seeing was due to large amounts of memory previously allocated to (presumably) > bro processes, becoming inactive. Inactive memory is memory that has previously > been allocated to some process that is no longer running. While this pool of > memory is an option the OS goes to when it needs to go back to the well for > more memory (to accommodate other processes), it is NOT included in 'free > memory'.
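The free-versus-reclaimable distinction Jason describes can be checked directly from /proc/meminfo on the sensor. A minimal sketch (the sample values below are illustrative, not taken from the worker discussed above; on a live Linux box you would read /proc/meminfo instead of the sample text):

```python
import re

# Illustrative /proc/meminfo-style output (not real numbers from the worker).
SAMPLE = """MemTotal:       49152000 kB
MemFree:         3067820 kB
Buffers:          102400 kB
Cached:         20480000 kB
Inactive:       25600000 kB"""

def parse_meminfo(text):
    """Return a dict mapping each meminfo field to its size in kB."""
    fields = {}
    for line in text.splitlines():
        m = re.match(r"^(\w+):\s+(\d+)\s*kB", line)
        if m:
            fields[m.group(1)] = int(m.group(2))
    return fields

info = parse_meminfo(SAMPLE)
# Neither Cached nor Inactive is counted in MemFree, which is why 'free'
# can look alarmingly low even though the kernel can reclaim those pages.
print("free:", info["MemFree"], "kB")
print("cached:", info["Cached"], "kB")
print("inactive:", info["Inactive"], "kB")
```

The classic `free` output makes the same point in its "-/+ buffers/cache" line.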
Sounds like this is what you are talking about http://www.linuxatemyram.com/ -- -- Justin Azoff From nweaver at ICSI.Berkeley.EDU Mon Jul 7 08:05:39 2014 From: nweaver at ICSI.Berkeley.EDU (Nicholas Weaver) Date: Mon, 7 Jul 2014 08:05:39 -0700 Subject: [Bro] rexmit_inconsistency? Message-ID: <2682E9EF-C1BC-4C04-88ED-6708C6DFAC77@icsi.berkeley.edu> I'm trying to build a test for packet injection, which Bro should complain about as it generates retransmission inconsistencies and/or data after RST or other TCP weirdnesses. Yet in my simple test trace (attached) and this simple policy script: -------------- next part -------------- A non-text attachment was scrubbed... Name: inject.tcpdump Type: application/octet-stream Size: 14684 bytes Desc: not available Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140707/07453158/attachment.obj -------------- next part -------------- event rexmit_inconsistency(c: connection, t1: string, t2: string){ print "Inconsistency"; print t1; print t2; } it's not flagging. Is it because the data has already been ACKed and therefore the reassembler is no longer keeping track of the data? -- Nicholas Weaver it is a tale, told by an idiot, nweaver at icsi.berkeley.edu full of sound and fury, 510-666-2903 .signifying nothing PGP: http://www1.icsi.berkeley.edu/~nweaver/data/nweaver_pub.asc -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140707/07453158/attachment.bin From daniel.guerra69 at gmail.com Mon Jul 7 08:21:14 2014 From: daniel.guerra69 at gmail.com (daniel.guerra69) Date: Mon, 07 Jul 2014 17:21:14 +0200 Subject: [Bro] Unanswered http post Message-ID: <53BABAEA.4090203@gmail.com> Hi, I have an unanswered HTTP post, this post contains username and password.
The dpd signature only works when the post is answered. Is there a way to deal with this? I would like to see it in my http.log. Regards, Daniel From jsiwek at illinois.edu Mon Jul 7 08:34:35 2014 From: jsiwek at illinois.edu (Siwek, Jon) Date: Mon, 7 Jul 2014 15:34:35 +0000 Subject: [Bro] rexmit_inconsistency? In-Reply-To: <2682E9EF-C1BC-4C04-88ED-6708C6DFAC77@icsi.berkeley.edu> References: <2682E9EF-C1BC-4C04-88ED-6708C6DFAC77@icsi.berkeley.edu> Message-ID: <4A608338-0A68-4AD7-848D-C7B0C9E689C8@illinois.edu> On Jul 7, 2014, at 10:05 AM, Nicholas Weaver wrote: > > I'm trying to build a test for packet injection, which Bro should complain about as it generates retransmission inconsistencies and/or data after RST or other TCP weirdnesses. > > Yet in my simple test trace (attached) and this simple policy script: > > > > event rexmit_inconsistency(c: connection, t1: string, t2: string){ > print "Inconsistency"; > print t1; > print t2; > } > > it's not flagging. > > Is it because the data has already been ACKed and therefore the reassembler is no longer keeping track of the data? Probably, but didn't look close at the particular trace you gave --
if it has been ACK'd, I don't expect the reassembler to keep that data around and so can't compare with the contents of a future overlapping segment. - Jon From nweaver at ICSI.Berkeley.EDU Mon Jul 7 08:39:11 2014 From: nweaver at ICSI.Berkeley.EDU (Nicholas Weaver) Date: Mon, 7 Jul 2014 08:39:11 -0700 Subject: [Bro] rexmit_inconsistency? In-Reply-To: <4A608338-0A68-4AD7-848D-C7B0C9E689C8@illinois.edu> References: <2682E9EF-C1BC-4C04-88ED-6708C6DFAC77@icsi.berkeley.edu> <4A608338-0A68-4AD7-848D-C7B0C9E689C8@illinois.edu> Message-ID: On Jul 7, 2014, at 8:34 AM, Siwek, Jon wrote: >> Is it because the data has already been ACKed and therefore the reassembler is no longer keeping track of the data? > > Probably, but didn't look close at the particular trace you gave --
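For the custom client-side-only signature option mentioned in this thread, a starting point might look like the following. This is an untested sketch modeled on the stock HTTP DPD signatures shipped in Bro's base/protocols/http/dpd.sig, but without the requires-reverse-signature condition that normally ties activation to a matching server response; the payload pattern is illustrative only:

```bro
signature dpd_http_client_only {
    ip-proto == tcp
    # Match an HTTP method at the start of the payload.
    payload /^[[:space:]]*(GET|HEAD|POST)[[:space:]]*/
    # Only consider traffic sent by the connection originator.
    tcp-state originator
    enable "http"
}
```

Note the trade-off described above: without the reverse-signature requirement, anything that merely looks like an HTTP request from the client will enable the analyzer, including bogus traffic with no server behind it.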
Robin -- Robin Sommer * Phone +1 (510) 722-6541 * robin at icir.org ICSI/LBNL * Fax +1 (510) 666-2956 * www.icir.org/robin From jmellander at lbl.gov Mon Jul 7 10:32:46 2014 From: jmellander at lbl.gov (Jim Mellander) Date: Mon, 7 Jul 2014 10:32:46 -0700 Subject: [Bro] Unanswered http post In-Reply-To: <53BABAEA.4090203@gmail.com> References: <53BABAEA.4090203@gmail.com> Message-ID: The attached policy performs regular expression matching on http post bodies, and raises a notice on regular expression match. By default it looks for passwd|password (upper or lower case) in the body - not quite exactly what you requested, but should get you part of the way. Hope this helps On Mon, Jul 7, 2014 at 8:21 AM, daniel.guerra69 wrote: > Hi, > > I have an unanswered HTTP post, this post contains username and > password. The dpd signature only works when the post is answered. > Is there a way to deal with this ? I would like to see it in my http.log. > > Regards, > > Daniel > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140707/3c0153e5/attachment.html -------------- next part -------------- A non-text attachment was scrubbed... Name: http-sensitive_POSTs.bro Type: application/octet-stream Size: 2889 bytes Desc: not available Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140707/3c0153e5/attachment.obj From daniel.guerra69 at gmail.com Tue Jul 8 07:52:50 2014 From: daniel.guerra69 at gmail.com (daniel.guerra69) Date: Tue, 08 Jul 2014 16:52:50 +0200 Subject: [Bro] Unanswered http post In-Reply-To: <20140707155305.GL23418@icir.org> References: <53BABAEA.4090203@gmail.com> <20140707155305.GL23418@icir.org> Message-ID: <53BC05C2.3040309@gmail.com> Hi Robin, The problem is the dpd signature. 
I think I need a DPD signature that just matches on client side http. I tried this simple example but this doesn't work: signature password-sig { ip-proto == tcp dst-port == 80 payload /.*password/ enable "http" event "Found password!" } Could it be conflicting with the http dpd signature? Strings on the pcap show the POST I seek. On 07/07/2014 05:53 PM, Robin Sommer wrote: > On Mon, Jul 07, 2014 at 17:21 +0200, daniel.guerra69 wrote: >> I have an unanswered HTTP post, this post contains username and >> password. The dpd signature only works when the post is answered. > Generally the DPD signatures trigger only if there's something looking > like the assumed protocol on either side of the connection; that's to > avoid attacks where a client generates tons of bogus traffic without > any server responding. > > A more specific answer to your question depends on what exactly > "unanswered" means. If there's some reply from the server at all, > maybe we could tweak the DPD signature to take that into account. > Alternatively, you could add your own custom DPD signature that > matches on just client side traffic if that's what you prefer. > > Robin > From edthoma at sandia.gov Tue Jul 8 08:44:27 2014 From: edthoma at sandia.gov (Thomas, Eric D) Date: Tue, 8 Jul 2014 15:44:27 +0000 Subject: [Bro] Handling connections missing TCP handshake Message-ID: I have a pcap with a bunch of HTTP connections. The TCP handshake (SYN, SYN-ACK, ACK) is missing for most of those connections. When processing the PCAP with a default bro config, those HTTP sessions missing the handshake are not logged in http.log (I can see the GET requests and HTTP responses in the PCAP). Is there an easy way to get Bro's HTTP analyzer to process them anyway? -- Eric Thomas edthoma at sandia.gov -------------- next part -------------- An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140708/31c39319/attachment.html From itsecderek at gmail.com Tue Jul 8 09:43:32 2014 From: itsecderek at gmail.com (Derek Banks) Date: Tue, 8 Jul 2014 12:43:32 -0400 Subject: [Bro] Error when extracting URLs from email traffic Message-ID: Hello Bro list, I am attempting to write a script to extract URLs from SMTP. The script below is my starting point and it seems to work pretty well except that I am getting an error occasionally on some of the connections. The end goal (and I am a ways away atm) is to eventually get the URLs fed into the intel framework to attempt to alert on potential spearphishing. Script: @load base/frameworks/intel @load base/utils/urls @load ./where-locations.bro event file_over_new_connection(f: fa_file, c: connection, is_orig: bool) { const mail_servers = { 192.168.50.72, 192.168.50.75 }; if ( c$id$orig_h !in mail_servers ) return; if ( ! f?$conns ) return; if ( f$source != "SMTP" ) return; if ( ! f?$bof_buffer ) return; for ( cid in f$conns ) { local urls = find_all_urls_without_scheme(f$bof_buffer); for ( url in urls ) { print fmt(url); } } } The error is: 1404827445.346519 error in ./extract_urls_in_email_v1.bro, line 38: too few arguments for format (fmt(url) and Does anyone know what might be causing this error? Best Regards, Derek -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140708/0370ca56/attachment.html From JAzoff at albany.edu Tue Jul 8 09:51:18 2014 From: JAzoff at albany.edu (Justin Azoff) Date: Tue, 8 Jul 2014 12:51:18 -0400 Subject: [Bro] Error when extracting URLs from email traffic In-Reply-To: References: Message-ID: <20140708165118.GB18605@datacomm.albany.edu> On Tue, Jul 08, 2014 at 12:43:32PM -0400, Derek Banks wrote: > print fmt(url); > The error is: > 1404827445.346519 error in ./extract_urls_in_email_v1.bro, line 38: too few > arguments for format (fmt(url) and > > > Does anyone know what might be causing this error? fmt() is like sprintf. You just want: print url; -- -- Justin Azoff From liburdi.joshua at gmail.com Tue Jul 8 09:52:41 2014 From: liburdi.joshua at gmail.com (Josh Liburdi) Date: Tue, 8 Jul 2014 12:52:41 -0400 Subject: [Bro] Error when extracting URLs from email traffic In-Reply-To: References: Message-ID: I think your error might be a simple one ... fmt() should use this syntax: print fmt("%s",url); -Josh On Tue, Jul 8, 2014 at 12:43 PM, Derek Banks wrote: > Hello Bro list, > I am attempting to write a script to extract URLs from SMTP. The script > below is my starting point and it seems to work pretty well except that I am > getting an error occasionally on some of the connections. The end goal > (and I am a ways away atm) is to eventually get the URLs fed into the intel > framework to attempt to alert on potential spearphishing. > > Script: > @load base/frameworks/intel > @load base/utils/urls > @load ./where-locations.bro > > event file_over_new_connection(f: fa_file, c: connection, is_orig: bool) > { > const mail_servers = { 192.168.50.72, 192.168.50.75 }; > > if ( c$id$orig_h !in mail_servers ) > return; > if ( ! f?$conns ) > return; > if ( f$source != "SMTP" ) > return; > > if ( !
f?$bof_buffer ) > return; > > for ( cid in f$conns ) > { > local urls = find_all_urls_without_scheme(f$bof_buffer); > for ( url in urls ) > { > > print fmt(url); > > } > } > } > > The error is: > 1404827445.346519 error in ./extract_urls_in_email_v1.bro, line 38: too few > arguments for format (fmt(url) and > > > Does anyone know what might be causing this error? > > Best Regards, > Derek > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From hosom at battelle.org Tue Jul 8 09:57:07 2014 From: hosom at battelle.org (Hosom, Stephen M) Date: Tue, 8 Jul 2014 16:57:07 +0000 Subject: [Bro] Error when extracting URLs from email traffic In-Reply-To: References: Message-ID: This is actually a script that has been written already. Check out policy/frameworks/intel/seen/smtp-url-extraction.bro. You'll need to modify this script a little, but it has most of what you need. If you just want to see if certain URLs are in emails, then you could actually already do that with the Intelligence Framework, without having to write your own script. From: bro-bounces at bro.org [mailto:bro-bounces at bro.org] On Behalf Of Derek Banks Sent: Tuesday, July 08, 2014 12:44 PM To: bro at bro.org List Subject: [Bro] Error when extracting URLs from email traffic Hello Bro list, I am attempting to write a script to extract URLs from SMTP. The script below is my starting point and it seems to work pretty well except that I am getting an error occasionally on some of the connections. The end goal (and I am a ways away atm) is to eventually get the URLs fed into the intel framework to attempt to alert on potential spearphishing. Script: @load base/frameworks/intel @load base/utils/urls @load ./where-locations.bro event file_over_new_connection(f: fa_file, c: connection, is_orig: bool) { const mail_servers = { 192.168.50.72, 192.168.50.75 }; if ( c$id$orig_h !in mail_servers ) return; if ( !
f?$conns ) return; if ( f$source != "SMTP" ) return; if ( ! f?$bof_buffer ) return; for ( cid in f$conns ) { local urls = find_all_urls_without_scheme(f$bof_buffer); for ( url in urls ) { print fmt(url); } } } The error is: 1404827445.346519 error in ./extract_urls_in_email_v1.bro, line 38: too few arguments for format (fmt(url) and Does anyone know what might be causing this error? Best Regards, Derek -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140708/f5b5020b/attachment.html From liburdi.joshua at gmail.com Tue Jul 8 10:02:01 2014 From: liburdi.joshua at gmail.com (Josh Liburdi) Date: Tue, 8 Jul 2014 13:02:01 -0400 Subject: [Bro] Error when extracting URLs from email traffic In-Reply-To: References: Message-ID: Actually, nevermind. fmt() will accept either version if you are passing data into it. I copied your script and removed some elements (const mail_servers, logic checks for SMTP and mail_servers) and it processed correctly. -Josh On Tue, Jul 8, 2014 at 12:52 PM, Josh Liburdi wrote: > I think your error might be a simple one ... fmt() should use this > syntx: print fmt("%s",url); > > -Josh > > On Tue, Jul 8, 2014 at 12:43 PM, Derek Banks wrote: >> Hello Bro list, >> I am attempting to write a script to extract URLs from SMTP. The script >> below is my starting point and it seems to work pretty well except that I am >> getting an error occasionally on some of the connections. The end goal >> (and I am a ways away atm) is to eventually get the URLs fed into the intel >> framework to attempt to alert on potential spearphishing. >> >> Script: >> @load base/frameworks/intel >> @load base/utils/urls >> @load ./where-locations.bro >> >> event file_over_new_connection(f: fa_file, c: connection, is_orig: bool) >> { >> const mail_servers = { 192.168.50.72, 192.168.50.75 }; >> >> if ( c$id$orig_h !in mail_servers ) >> return; >> if ( ! 
f?$conns ) >> return; >> if ( f$source != "SMTP" ) >> return; >> >> if ( ! f?$bof_buffer ) >> return; >> >> for ( cid in f$conns ) >> { >> local urls = find_all_urls_without_scheme(f$bof_buffer); >> for ( url in urls ) >> { >> >> print fmt(url); >> >> } >> } >> } >> >> The error is: >> 1404827445.346519 error in ./extract_urls_in_email_v1.bro, line 38: too few >> arguments for format (fmt(url) and >> >> >> Does anyone know what might be causing this error? >> >> Best Regards, >> Derek >> >> _______________________________________________ >> Bro mailing list >> bro at bro-ids.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From itsecderek at gmail.com Tue Jul 8 10:24:11 2014 From: itsecderek at gmail.com (Derek Banks) Date: Tue, 8 Jul 2014 13:24:11 -0400 Subject: [Bro] Error when extracting URLs from email traffic In-Reply-To: References: Message-ID: Cool thanks all! If you just want to see if certain URLs are in emails, then you could > actually already do that with the Intelligence Framework, without having to > write your own script. > That's essentially what I want to do, I just want to generate the intel "on-the-fly" by taking out URLs from emails, white listing out common legit domains seen in our environment, feeding the list into the intel framework then writing a notice or a specific log file of potential spearphish when the URL is found in http traffic. Basically an attempt to alert on a clicker in a spearphish when we are not already aware that the Domain/URL is bad. It could turn out that the volume of clickers even after whitelisting makes it not feasible for analysis but I thought it would be a good exercise to go down the road. On Tue, Jul 8, 2014 at 12:57 PM, Hosom, Stephen M wrote: > This is actually a script that has been written already. Check out > policy/frameworks/intel/seen/smtp-url-extraction.bro. You?ll need to modify > this script a little, but it has most of what you need. 
> > > > If you just want to see if certain URLs are in emails, then you could > actually already do that with the Intelligence Framework, without having to > write your own script. > > > > *From:* bro-bounces at bro.org [mailto:bro-bounces at bro.org] *On Behalf Of *Derek > Banks > *Sent:* Tuesday, July 08, 2014 12:44 PM > *To:* bro at bro.org List > *Subject:* [Bro] Error when extracting URLs from email traffic > > > > Hello Bro list, > > I am attempting to write a script to extract URLs from SMTP. The script > below is my starting point and it seems to work pretty well except that I > am getting an error occasionally on some of the connections. The end goal > (and I am a ways away atm) is to eventually get the URLs fed into the intel > framework to attempt to alert on potential spearphishing. > > Script: > @load base/frameworks/intel > @load base/utils/urls > @load ./where-locations.bro > > event file_over_new_connection(f: fa_file, c: connection, is_orig: bool) > { > const mail_servers = { 192.168.50.72, 192.168.50.75 }; > > if ( c$id$orig_h !in mail_servers ) > return; > if ( ! f?$conns ) > return; > if ( f$source != "SMTP" ) > return; > > if ( ! f?$bof_buffer ) > return; > > for ( cid in f$conns ) > { > local urls = find_all_urls_without_scheme(f$bof_buffer); > for ( url in urls ) > { > > print fmt(url); > > } > } > } > > The error is: > 1404827445.346519 error in ./extract_urls_in_email_v1.bro, line 38: too > few arguments for format (fmt(url) and > > Does anyone know what might be causing this error? > > Best Regards, > Derek > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140708/4c9a4837/attachment.html From JAzoff at albany.edu Tue Jul 8 10:28:32 2014 From: JAzoff at albany.edu (Justin Azoff) Date: Tue, 8 Jul 2014 13:28:32 -0400 Subject: [Bro] Error when extracting URLs from email traffic In-Reply-To: References: Message-ID: <20140708172832.GC18605@datacomm.albany.edu> On Tue, Jul 08, 2014 at 01:02:01PM -0400, Josh Liburdi wrote: > Actually, nevermind. fmt() will accept either version if you are > passing data into it. I copied your script and removed some elements > (const mail_servers, logic checks for SMTP and mail_servers) and it > processed correctly. That works correctly most of the time, but it has the same problem that printf does: jazoff at air /tmp $ cat f.bro event bro_init() { local s = "hello %s world"; print fmt(s); } jazoff at air /tmp $ bro f.bro error in ./f.bro, line 3 and ./f.bro, line 2: too few arguments for format (fmt(s) and hello %s world) -- -- Justin Azoff From liburdi.joshua at gmail.com Tue Jul 8 12:08:15 2014 From: liburdi.joshua at gmail.com (Josh Liburdi) Date: Tue, 8 Jul 2014 15:08:15 -0400 Subject: [Bro] Error when extracting URLs from email traffic In-Reply-To: <20140708172832.GC18605@datacomm.albany.edu> References: <20140708172832.GC18605@datacomm.albany.edu> Message-ID: Good point, thanks Justin. -Josh On Tue, Jul 8, 2014 at 1:28 PM, Justin Azoff wrote: > On Tue, Jul 08, 2014 at 01:02:01PM -0400, Josh Liburdi wrote: >> Actually, nevermind. fmt() will accept either version if you are >> passing data into it. I copied your script and removed some elements >> (const mail_servers, logic checks for SMTP and mail_servers) and it >> processed correctly. 
> > That works correctly most of the time, but it has the same problem that printf does: > > jazoff at air /tmp $ cat f.bro > event bro_init() { > local s = "hello %s world"; > print fmt(s); > } > > jazoff at air /tmp $ bro f.bro > error in ./f.bro, line 3 and ./f.bro, line 2: too few arguments for format (fmt(s) and hello %s world) > > > -- > -- Justin Azoff From vern at icir.org Tue Jul 8 14:00:55 2014 From: vern at icir.org (Vern Paxson) Date: Tue, 08 Jul 2014 14:00:55 -0700 Subject: [Bro] rexmit_inconsistency? In-Reply-To: <4A608338-0A68-4AD7-848D-C7B0C9E689C8@illinois.edu> (Mon, 07 Jul 2014 15:34:35 -0000). Message-ID: <20140708210055.43AB22C40A5@rock.ICSI.Berkeley.EDU> > if it has been ACK'd, I don't expect the reassembler to keep that data around Indeed, it has to release the data upon ACK in order to not wind up buffering entire byte streams. Vern From jdopheid at illinois.edu Thu Jul 10 09:41:27 2014 From: jdopheid at illinois.edu (Dopheide, Jeannette M) Date: Thu, 10 Jul 2014 16:41:27 +0000 Subject: [Bro] BroCon '14 Agenda Message-ID: Bro Community, The BroCon '14 Agenda has been posted: http://www.bro.org/community/brocon2014.html#agenda Note: The schedule is subject to change. Don't forget to register for BroCon: https://www.regonline.com/brocon2014 And, in case you missed it, Bro v2.3 released last month: http://www.bro.org/download/index.html See you in August! The Bro Team ------ Jeannette M. Dopheide Bro Outreach Coordinator National Center for Supercomputing Applications University of Illinois at Urbana-Champaign From sangdrax8 at gmail.com Mon Jul 14 04:57:30 2014 From: sangdrax8 at gmail.com (sangdrax8) Date: Mon, 14 Jul 2014 07:57:30 -0400 Subject: [Bro] turn off host up/down emails Message-ID: How would I go about stopping the e-mails about hosts being up/down? I use nagios to track host status and would rather not have this done in multiple places. 
I believe this e-mail is done as part of the tasks from the cron job, but I don't know if there is an option to turn this off or if it would require a hook into the notice framework. If it does take a notice hook, what would be the cleanest way to suppress these e-mails? Perhaps there is a specific type that I could suppress? If someone could point me in the right direction it would be much appreciated! -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140714/00df93db/attachment.html From dnthayer at illinois.edu Mon Jul 14 06:44:45 2014 From: dnthayer at illinois.edu (Daniel Thayer) Date: Mon, 14 Jul 2014 08:44:45 -0500 Subject: [Bro] turn off host up/down emails In-Reply-To: References: Message-ID: <53C3DECD.4070304@illinois.edu> Those emails are generated by broctl cron, and there is no option to configure that behavior. On 07/14/2014 06:57 AM, sangdrax8 wrote: > How would I go about stopping the e-mails about hosts being up/down? I > use nagios to track host status and would rather not have this done in > multiple places. I believe this e-mail is done as part of the tasks > from the cron job, but I don't know if there is an option to turn this off or > if it would require a hook into the notice framework. If it does take a > notice hook, what would be the cleanest way to suppress these e-mails? > Perhaps there is a specific type that I could suppress? > > If someone could point me in the right direction it would be much > appreciated! > > From jlay at slave-tothe-box.net Mon Jul 14 07:13:25 2014 From: jlay at slave-tothe-box.net (James Lay) Date: Mon, 14 Jul 2014 08:13:25 -0600 Subject: [Bro] turn off host up/down emails In-Reply-To: <53C3DECD.4070304@illinois.edu> References: <53C3DECD.4070304@illinois.edu> Message-ID: On 2014-07-14 07:44, Daniel Thayer wrote: > Those emails are generated by broctl cron, and there is no option > to configure that behavior.
> > > On 07/14/2014 06:57 AM, sangdrax8 wrote: >> How would I go about stopping the e-mails about hosts being up/down? >> I >> use nagios to track host status and would rather not have this done >> in >> multiple places. I believe this e-mail is done as part of the tasks >> from the cron job, but I don't if there is an option to turn this >> off or >> if it would require a hook into the notice framework. If it does >> take a >> notice hook, what would be the cleanest way to suppress these >> e-mails? >> Perhaps there is a specific type that I could suppress? >> >> If someone could point me in the right direction it would be much >> appreciated! Before I stopped using broctl I had to configure postfix to drop these...not sure how your setup is though. Seth, we really need an option to turn these off. James From seth at icir.org Mon Jul 14 07:48:41 2014 From: seth at icir.org (Seth Hall) Date: Mon, 14 Jul 2014 10:48:41 -0400 Subject: [Bro] turn off host up/down emails In-Reply-To: References: <53C3DECD.4070304@illinois.edu> Message-ID: <5F6F874F-FFB1-46B9-9982-F34889E46220@icir.org> On Jul 14, 2014, at 10:13 AM, James Lay wrote: > Seth, we really need an option to turn these off. Fortunately I don't think I'm the appropriate person to direct this request to anymore! :) There is currently work going on to heavily revamp broctl and I'm sure that Justin and Daniel will take your suggestion into account. Justin, Daniel, do you guys have any thoughts into how this behavior could be better handled or have you already begun to restructure this code? I vaguely recall someone mentioning going through and documenting all of the cases where emails are sent from BroControl as an attempt to rethink the whole approach to be more user focused and modernized (with how people run broctl clusters these days). Am I remembering that correctly? 
.Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 495 bytes Desc: Message signed with OpenPGP using GPGMail Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140714/0224adc5/attachment.bin From JAzoff at albany.edu Mon Jul 14 08:01:12 2014 From: JAzoff at albany.edu (Justin Azoff) Date: Mon, 14 Jul 2014 11:01:12 -0400 Subject: [Bro] turn off host up/down emails In-Reply-To: References: Message-ID: <20140714150112.GE18605@datacomm.albany.edu> On Mon, Jul 14, 2014 at 07:57:30AM -0400, sangdrax8 wrote: > How would I go about stopping the e-mails about hosts being up/down? I use > nagios to track host status and would rather not have this done in multiple > places. I believe this e-mail is done as part of the tasks from the cron job, > but I don't if there is an option to turn this off or if it would require a > hook into the notice framework. If it does take a notice hook, what would be > the cleanest way to suppress these e-mails? Perhaps there is a specific type > that I could suppress? > > If someone could point me in the right direction it would be much appreciated! No option to disable these currently, but in the meantime if you can, just edit BroControl/cron.py and comment out line 202: util.output("host %s %s" % (node.host, alive == "1" and "up" or "down")) That will stop that output for now.. 
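If deleting the line outright feels too blunt, the same call can be wrapped in a guard so the behavior becomes switchable. This is a hypothetical sketch only: `mail_host_updown` is an invented flag, not an existing broctl.conf option, and real BroControl code would read such a setting from its own configuration:

```python
# Hypothetical guard around the host up/down output in BroControl's cron.py.
# 'mail_host_updown' is an invented setting, not a real broctl.conf option.
def report_host_status(output, host, alive, mail_host_updown=True):
    """Emit the 'host X up/down' line only when reporting is enabled."""
    if not mail_host_updown:
        return
    # Mirrors the original expression: alive == "1" means the host is up.
    output("host %s %s" % (host, "up" if alive == "1" else "down"))

lines = []
report_host_status(lines.append, "worker-1", "1")
report_host_status(lines.append, "worker-2", "0", mail_host_updown=False)
print(lines)  # ['host worker-1 up']
```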
-- -- Justin Azoff From jlay at slave-tothe-box.net Mon Jul 14 10:21:32 2014 From: jlay at slave-tothe-box.net (James Lay) Date: Mon, 14 Jul 2014 11:21:32 -0600 Subject: [Bro] turn off host up/down emails In-Reply-To: <5F6F874F-FFB1-46B9-9982-F34889E46220@icir.org> References: <53C3DECD.4070304@illinois.edu> <5F6F874F-FFB1-46B9-9982-F34889E46220@icir.org> Message-ID: <1696802810d31680f82e091c66117823@localhost> On 2014-07-14 08:48, Seth Hall wrote: > On Jul 14, 2014, at 10:13 AM, James Lay > wrote: > >> Seth, we really need an option to turn these off. > > Fortunately I don't think I'm the appropriate person to direct this > request to anymore! :) > > There is currently work going on to heavily revamp broctl and I'm > sure that Justin and Daniel will take your suggestion into account. > > Justin, Daniel, do you guys have any thoughts into how this behavior > could be better handled or have you already begun to restructure this > code? I vaguely recall someone mentioning going through and > documenting all of the cases where emails are sent from BroControl as > an attempt to rethink the whole approach to be more user focused and > modernized (with how people run broctl clusters these days). Am I > remembering that correctly? > > .Seth > > -- > Seth Hall > International Computer Science Institute > (Bro) because everyone has a network > http://www.bro.org/ Ah that's great thanks for the info Seth. I'm thinking just an option in broctl.conf: AlertOnInterface = no Type of thing. 
James From dnthayer at illinois.edu Mon Jul 14 12:00:57 2014 From: dnthayer at illinois.edu (Daniel Thayer) Date: Mon, 14 Jul 2014 14:00:57 -0500 Subject: [Bro] turn off host up/down emails In-Reply-To: <5F6F874F-FFB1-46B9-9982-F34889E46220@icir.org> References: <53C3DECD.4070304@illinois.edu> <5F6F874F-FFB1-46B9-9982-F34889E46220@icir.org> Message-ID: <53C428E9.9090306@illinois.edu> On 07/14/2014 09:48 AM, Seth Hall wrote: > > On Jul 14, 2014, at 10:13 AM, James Lay wrote: > >> Seth, we really need an option to turn these off. > > Fortunately I don't think I'm the appropriate person to direct this request to anymore! :) > > There is currently work going on to heavily revamp broctl and I'm sure that Justin and Daniel will take your suggestion into account. > > Justin, Daniel, do you guys have any thoughts into how this behavior could be better handled or have you already begun to restructure this code? I vaguely recall someone mentioning going through and documenting all of the cases where emails are sent from BroControl as an attempt to rethink the whole approach to be more user focused and modernized (with how people run broctl clusters these days). Am I remembering that correctly? > > .Seth > I've created a ticket as a reminder to address this issue before the next release. From sangdrax8 at gmail.com Tue Jul 15 05:05:12 2014 From: sangdrax8 at gmail.com (sangdrax8) Date: Tue, 15 Jul 2014 08:05:12 -0400 Subject: [Bro] turn off host up/down emails In-Reply-To: <53C428E9.9090306@illinois.edu> References: <53C3DECD.4070304@illinois.edu> <5F6F874F-FFB1-46B9-9982-F34889E46220@icir.org> <53C428E9.9090306@illinois.edu> Message-ID: Thank you, I'll comment out the indicated line until an official option is added at a later date. On Mon, Jul 14, 2014 at 3:00 PM, Daniel Thayer wrote: > On 07/14/2014 09:48 AM, Seth Hall wrote: > > > > On Jul 14, 2014, at 10:13 AM, James Lay > wrote: > > > >> Seth, we really need an option to turn these off. 
> > > > Fortunately I don't think I'm the appropriate person to direct this > request to anymore! :) > > > > There is currently work going on to heavily revamp broctl and I'm sure > that Justin and Daniel will take your suggestion into account. > > > > Justin, Daniel, do you guys have any thoughts into how this behavior > could be better handled or have you already begun to restructure this code? > I vaguely recall someone mentioning going through and documenting all of > the cases where emails are sent from BroControl as an attempt to rethink > the whole approach to be more user focused and modernized (with how people > run broctl clusters these days). Am I remembering that correctly? > > > > .Seth > > > > I've created a ticket as a reminder to address this issue before the > next release. > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140715/0dd7c129/attachment.html From jlay at slave-tothe-box.net Tue Jul 15 09:40:58 2014 From: jlay at slave-tothe-box.net (James Lay) Date: Tue, 15 Jul 2014 10:40:58 -0600 Subject: [Bro] SSLBL Message-ID: Interesting: https://sslbl.abuse.ch/blacklist/ Wonder if bro can support this? James From johanna at icir.org Tue Jul 15 09:55:34 2014 From: johanna at icir.org (Johanna Amann) Date: Tue, 15 Jul 2014 09:55:34 -0700 Subject: [Bro] SSLBL In-Reply-To: References: Message-ID: <80BCF1CC-982E-45D3-800A-C83139F9A395@icir.org> Hello James, using blacklists like this is actually quite easy nowadays. Just loading the list of blacklisted SHA-1 hashes into the intel framework and making sure that policy/frameworks/intel/seen/file-hashes.bro is loaded should be enough. 
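To sketch what Johanna describes for Bro 2.3 (the file path and the source label below are illustrative):

```
# In local.bro:
@load policy/frameworks/intel/seen/file-hashes
redef Intel::read_files += { "/usr/local/bro/share/sslbl.intel" };
```

The intel file itself must be tab-separated, with a `#fields indicator indicator_type meta.source` header line and one row per blacklisted SHA-1 hash, using `Intel::FILE_HASH` as the indicator type (certificates are treated as files).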
Certificates used in SSL connections are handled just like files, so if one of the certificates is encountered after loading the data, it should trigger a notification. You just have to reformat the list for the intel framework. Johanna On 15 Jul 2014, at 9:40, James Lay wrote: > Interesting: > > https://sslbl.abuse.ch/blacklist/ > > Wonder if bro can support this? > > James > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From jlay at slave-tothe-box.net Tue Jul 15 09:59:52 2014 From: jlay at slave-tothe-box.net (James Lay) Date: Tue, 15 Jul 2014 10:59:52 -0600 Subject: [Bro] SSLBL In-Reply-To: <80BCF1CC-982E-45D3-800A-C83139F9A395@icir.org> References: <80BCF1CC-982E-45D3-800A-C83139F9A395@icir.org> Message-ID: On 2014-07-15 10:55, Johanna Amann wrote: > Hello James, > > using blacklists like this is actually quite easy nowadays. Just > loading the list of blacklisted SHA-1 hashes into the intel framework > and making sure that policy/frameworks/intel/seen/file-hashes.bro is > loaded should be enough. > > Certificates used in SSL connections are handled just like files, so > if one of the certificates is encountered after loading the data, it > should trigger a notification. > > You just have to reformat the list for the intel framework. > > Johanna > > On 15 Jul 2014, at 9:40, James Lay wrote: > >> Interesting: >> >> https://sslbl.abuse.ch/blacklist/ >> >> Wonder if bro can support this? >> >> James Thank you Johanna...I will go down that path. James From pachinko.tw at gmail.com Wed Jul 16 06:25:58 2014 From: pachinko.tw at gmail.com (Po-Ching Lin) Date: Wed, 16 Jul 2014 21:25:58 +0800 Subject: [Bro] Bro as an IPS Message-ID: <53C67D66.9020705@gmail.com> Dear all, We would like to use Bro as an IPS, which reads packets from netfilter, ipfw, or something like that, and may drop packets when some conditions are met. 
Somebody refers to an old presentation at Bro workshop 2007, but the link has been broken. Is there any ready-to-use solution for this purpose, or should we resort to modifying the source code? Any suggestion is appreciated. Po-Ching From jdopheid at illinois.edu Wed Jul 16 11:01:58 2014 From: jdopheid at illinois.edu (Dopheide, Jeannette M) Date: Wed, 16 Jul 2014 18:01:58 +0000 Subject: [Bro] Bro as an IPS In-Reply-To: <53C67D66.9020705@gmail.com> Message-ID: Hello Po-Ching, Will you point me to the page you were on when you found the broken link? I'm not able to find this broken link. Thanks, Jeannette ------ Jeannette M. Dopheide Bro Outreach Coordinator National Center for Supercomputing Applications University of Illinois at Urbana-Champaign On 7/16/14, 8:25 AM, "Po-Ching Lin" wrote: >Dear all, > > We would like to use Bro as an IPS, which reads packets from >netfilter, ipfw, or something like that, and may drop packets when >some conditions are met. Somebody refers to an old presentation at >Bro workshop 2007, but the link has been broken. Is there any ready-to-use >solution for this purpose, or should we resort to modifying the source >code? >Any suggestion is appreciated. > >Po-Ching >_______________________________________________ >Bro mailing list >bro at bro-ids.org >http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From jdopheid at illinois.edu Thu Jul 17 06:07:09 2014 From: jdopheid at illinois.edu (Dopheide, Jeannette M) Date: Thu, 17 Jul 2014 13:07:09 +0000 Subject: [Bro] Bro as an IPS In-Reply-To: <53C749E1.7080502@gmail.com> Message-ID: Po-Ching, Unfortunately that forum is referencing old information from a previous generation of our site . But I was able to dig this up using the Way Back Machine: https://web.archive.org/web/20071029084731/http://www.bro-ids.org/bro-works hop-2007/slides/Bro-IPS.pdf Does this help answer your question? Thanks, Jeannette ------ Jeannette M. 
Dopheide Bro Outreach Coordinator National Center for Supercomputing Applications University of Illinois at Urbana-Champaign On 7/16/14, 10:58 PM, "Po-Ching Lin" wrote: >Dear Jeannette, > > It's >http://comments.gmane.org/gmane.comp.security.detection.bro/2369 > >Best regards, >Po-Ching > >On 2014/7/17 02:01 AM, Dopheide, Jeannette M wrote: >> Hello Po-Ching, >> >> Will you point me to the page you were on when you found the broken >>link? >> I'm not able to find this broken link. >> >> Thanks, >> Jeannette >> >> ------ >> >> Jeannette M. Dopheide >> Bro Outreach Coordinator >> National Center for Supercomputing Applications >> University of Illinois at Urbana-Champaign >> From slagell at illinois.edu Thu Jul 17 06:35:57 2014 From: slagell at illinois.edu (Slagell, Adam J) Date: Thu, 17 Jul 2014 13:35:57 +0000 Subject: [Bro] Bro as an IPS In-Reply-To: References: <53C749E1.7080502@gmail.com>, Message-ID: Bro really isn't an IPS, and that breaks the model of how Bro works. That said in the future our work with netmap and SDN technologies could allow you to do such things. I could see Bro as controlling a SDN firewall for example. > On Jul 17, 2014, at 9:15 AM, "Dopheide, Jeannette M" wrote: > > Po-Ching, > > Unfortunately that forum is referencing old information from a previous > generation of our site . But I was able to dig this up using the Way Back > Machine: > > https://web.archive.org/web/20071029084731/http://www.bro-ids.org/bro-works > hop-2007/slides/Bro-IPS.pdf > > > Does this help answer your question? > > Thanks, > Jeannette > > ------ > > Jeannette M. 
Dopheide > Bro Outreach Coordinator > National Center for Supercomputing Applications > University of Illinois at Urbana-Champaign > > > > > > >> On 7/16/14, 10:58 PM, "Po-Ching Lin" wrote: >> >> Dear Jeannette, >> >> It's >> http://comments.gmane.org/gmane.comp.security.detection.bro/2369 >> >> Best regards, >> Po-Ching >> >>> On 2014/7/17 02:01 AM, Dopheide, Jeannette M wrote: >>> Hello Po-Ching, >>> >>> Will you point me to the page you were on when you found the broken >>> link? >>> I'm not able to find this broken link. >>> >>> Thanks, >>> Jeannette >>> >>> ------ >>> >>> Jeannette M. Dopheide >>> Bro Outreach Coordinator >>> National Center for Supercomputing Applications >>> University of Illinois at Urbana-Champaign >>> > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From fabian at affolter-engineering.ch Thu Jul 17 07:17:31 2014 From: fabian at affolter-engineering.ch (Fabian Affolter) Date: Thu, 17 Jul 2014 16:17:31 +0200 Subject: [Bro] New BTest release Message-ID: <53C7DAFB.3000309@affolter-engineering.ch> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Hi all, I'm working on a btest package for the Fedora Package Collection (this is part of a bro update from 1.5 to 2.3 in Fedora). It seems that after the 0.52 release a couple of files were introduced to the git repository (especially docs like COPYING). It's possible that during the review process the missing license file could become a blocker. It would be nice if you could provide an updated source tarball of btest as 0.52.1 or 0.53. 
Thanks and kind regards, Fabian -----BEGIN PGP SIGNATURE----- Version: GnuPG v1 Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/ iEYEARECAAYFAlPH2vcACgkQ4jzS3TakOX+Y/QCePbsqXSq8q6pM8IZ7Udgbqk+h KpYAoIvkCZ9oXU5dqylFs8OVm1zoIcfF =xFik -----END PGP SIGNATURE----- From jlay at slave-tothe-box.net Thu Jul 17 08:33:21 2014 From: jlay at slave-tothe-box.net (James Lay) Date: Thu, 17 Jul 2014 09:33:21 -0600 Subject: [Bro] Append instead of overwrite Message-ID: Hey All, So I run bro instead of broctl. Currently, if I stop a running bro, and start it again, bro overwrites any previous log files...is there a way to change this behavior? Thank you. James From seth at icir.org Fri Jul 18 10:51:55 2014 From: seth at icir.org (Seth Hall) Date: Fri, 18 Jul 2014 13:51:55 -0400 Subject: [Bro] Having phun on a friday Message-ID: <52F27700-BBA7-46D4-B319-04CEBD48DE0F@icir.org> I wrote a Phant.io module this morning. Phant is a server for data handling written by SparkFun electronics. It's really meant as a data collection tool for the internet of things (generally for small, embedded devices) but I figured that Bro is a thing too. :) Here's the module: https://github.com/sethhall/brophant Here's a stream that I've been posting data to on the data.sparkfun.com phant instance: https://data.sparkfun.com/streams/XGGajLdOKWtzOJpA8w6y If you want to see how it's used: https://github.com/sethhall/brophant/blob/master/test/example.bro (also, I know that in my example script I have in the brophant module that the private key is there, but I'm not really concerned about it) Have fun! .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 495 bytes Desc: Message signed with OpenPGP using GPGMail Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140718/ac405bbf/attachment.bin From jlay at slave-tothe-box.net Fri Jul 18 15:51:05 2014 From: jlay at slave-tothe-box.net (James Lay) Date: Fri, 18 Jul 2014 16:51:05 -0600 Subject: [Bro] Binpac exception Message-ID: I added the below to remove syslog from getting logged in my local.bro, and I do not have a syslog.log as wanted: event bro_init() { Log::disable_stream(Syslog::LOG); } However I am seeing a large amount of the below in weird.log: 1405648595.773644 Comss94xWJf5CHpgnl 10.1.2.72 54619 10.21.0.23 514 binpac exception: string mismatch at /bro-2.3/src/analyzer/protocol/syslog/syslog-protocol.pac:8: \x0aexpected pattern: "[[:digit:]]+"\x0aactual data: "syslog message here" - F bro My start line: /usr/local/bin/bro --no-checksums -i eth0 local "Site::local_nets += { 192.168.1.0/24 }" Is there a way I can troubleshoot this? Thank you. James From vlad at grigorescu.org Fri Jul 18 16:47:26 2014 From: vlad at grigorescu.org (Vlad Grigorescu) Date: Fri, 18 Jul 2014 19:47:26 -0400 Subject: [Bro] Binpac exception In-Reply-To: References: Message-ID: Hi James, Try adding this to your local.bro: > event bro_init() { > Analyzer::disable_analyzer(Analyzer::ANALYZER_SYSLOG); > } This will disable the analyzer, while the code you tried will just disable the syslog.log output. 
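For anyone following along, a sketch contrasting the two approaches in one handler (both calls exist in stock Bro 2.3):

```
event bro_init()
    {
    # Disables only the syslog.log output; the Syslog analyzer still
    # runs and can still generate binpac weirds:
    Log::disable_stream(Syslog::LOG);

    # Disables the Syslog analyzer itself, so port 514 traffic is no
    # longer parsed and the weird.log entries stop:
    Analyzer::disable_analyzer(Analyzer::ANALYZER_SYSLOG);
    }
```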
Hope that helps, --Vlad On Fri, Jul 18, 2014 at 6:51 PM, James Lay wrote: > I added the below to remove syslog from getting logged in my local.bro, > and I do not have a syslog.log as wanted: > > event bro_init() > { > Log::disable_stream(Syslog::LOG); > } > > However I am seeing a large amount of the below in weird.log: > > > 1405648595.773644 Comss94xWJf5CHpgnl 10.1.2.72 54619 > 10.21.0.23 514 binpac exception: string mismatch at > /bro-2.3/src/analyzer/protocol/syslog/syslog-protocol.pac:8: > \x0aexpected pattern: "[[:digit:]]+"\x0aactual data: "syslog message > here" - F bro > > > My start line: > > /usr/local/bin/bro --no-checksums -i eth0 local "Site::local_nets += { > 192.168.1.0/24 }" > > Is there a way I can troubleshoot this? Thank you. > > James > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140718/7d6e78fd/attachment.html From jlay at slave-tothe-box.net Sat Jul 19 05:26:53 2014 From: jlay at slave-tothe-box.net (James Lay) Date: Sat, 19 Jul 2014 06:26:53 -0600 Subject: [Bro] Binpac exception In-Reply-To: References: Message-ID: <1405772813.2885.0.camel@JamesiMac> On Fri, 2014-07-18 at 19:47 -0400, Vlad Grigorescu wrote: > Analyzer::disable_analyzer(Analyzer::ANALYZER_SYSLOG); Thanks Vlad...I'll give that a go. James -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140719/83204607/attachment.html From juan.caballero at imdea.org Mon Jul 21 06:26:54 2014 From: juan.caballero at imdea.org (Juan Caballero) Date: Mon, 21 Jul 2014 15:26:54 +0200 Subject: [Bro] dpd unknown port Message-ID: <017701cfa4e7$70acfee0$5206fca0$@imdea.org> Hi all, With Bro 2.2 and/or 2.3 what is the best way to tell Bro that I want a DPD signature to be matched on any connection regardless of port? I know I can use Analyzer::register_for_ports at bro_init to enable a set of ports to analyze with an analyzer, but I have a case where I cannot predict a priori the destination port in use by the protocol. It does not seem like I can pass wildcards to register_for_port(s). If I have a DPD signature for the protocol (e.g., for HTTP) what is the easiest way to tell Bro to use the signature on any connection regardless of port? Can this be done on the scripting layer? If not, any pointers to where do I need to modify the code to hook say the HTTP analyzer for every connection? (I am not concerned about efficiency as I am running Bro on pcaps) BTW, I searched the mailing list for a reply but all hits I found for Bro 2.2 and 2.3 referred to Analyzer::register_for_ports Thanks, Juan From jsiwek at illinois.edu Mon Jul 21 07:54:27 2014 From: jsiwek at illinois.edu (Siwek, Jon) Date: Mon, 21 Jul 2014 14:54:27 +0000 Subject: [Bro] dpd unknown port In-Reply-To: <017701cfa4e7$70acfee0$5206fca0$@imdea.org> References: <017701cfa4e7$70acfee0$5206fca0$@imdea.org> Message-ID: <57BE605C-8C74-401F-B7E3-519FEDDD39C4@illinois.edu> On Jul 21, 2014, at 8:26 AM, Juan Caballero wrote: > If I have a DPD signature for > the protocol (e.g., for HTTP) what is the easiest way to tell Bro to use the > signature on any connection regardless of port? Can this be done on the > scripting layer? It sounds like you want to write a signature [1] with a particular "payload" content condition and an "enable" 
action to activate a particular protocol analyzer. - Jon [1] http://www.bro.org/sphinx/frameworks/signatures.html From juan.caballero at imdea.org Mon Jul 21 09:25:42 2014 From: juan.caballero at imdea.org (Juan Caballero) Date: Mon, 21 Jul 2014 18:25:42 +0200 Subject: [Bro] dpd unknown port In-Reply-To: <57BE605C-8C74-401F-B7E3-519FEDDD39C4@illinois.edu> References: <017701cfa4e7$70acfee0$5206fca0$@imdea.org> <57BE605C-8C74-401F-B7E3-519FEDDD39C4@illinois.edu> Message-ID: <018801cfa500$6a9f1fb0$3fdd5f10$@imdea.org> Hi Jon, Thanks for your answer > It sounds like you want to write a signature [1] with a particular "payload" content condition In my case I simply want to use protocols such as HTTP for which Bro already has a DPD signature, so no need to create a new one > and an "enable" action to activate a particular protocol analyzer. This is the step I do not know how to do. The only "enable" function I see is "Analyzer::enable_analyzer(Analyzer::ANALYZER_HTTP)" However when I use that function it does not seem to enable the DPD signature for all ports, i.e., an HTTP connection on port 7623/tcp does not get analyzed unless I use Analyzer::register_for_ports to add port 7623/tcp Any suggestions for this step? 
Thanks, Juan From jsiwek at illinois.edu Mon Jul 21 10:39:40 2014 From: jsiwek at illinois.edu (Siwek, Jon) Date: Mon, 21 Jul 2014 17:39:40 +0000 Subject: [Bro] dpd unknown port In-Reply-To: <018801cfa500$6a9f1fb0$3fdd5f10$@imdea.org> References: <017701cfa4e7$70acfee0$5206fca0$@imdea.org> <57BE605C-8C74-401F-B7E3-519FEDDD39C4@illinois.edu> <018801cfa500$6a9f1fb0$3fdd5f10$@imdea.org> Message-ID: <292C278F-3182-4F6A-8D37-ED91E8E6F392@illinois.edu> On Jul 21, 2014, at 11:25 AM, Juan Caballero wrote: >> It sounds like you want to write a signature [1] with a particular > "payload" content condition > > In my case I simply want to use protocols such as HTTP for which Bro already > has a DPD signature, so no need to create a new one > >> and an "enable" action to activate a particular protocol analyzer. > > This is the step I do not know how to do. The only "enable" function I see > is "Analyzer::enable_analyzer(Analyzer::ANALYZER_HTTP)" > However when I use that function it does not seem to enable the DPD > signature for all ports, i.e., an HTTP connection on port 7623/tcp does not > get analyzed unless I use Analyzer::register_for_ports to add port 7623/tcp > Any suggestions for this step? There are two main ways to tell a protocol analyzer what connections it needs to parse: (1) well-known ports (i.e. "Analyzer::register_for_ports()") (2) signatures (i.e. the documentation I linked to before) Those two are unrelated: the ports given to "Analyzer::register_for_ports()" will cause the analyzer to be activated on connections that use those ports regardless of whether any signatures match. And conversely, signature matches that enable an analyzer won't be restricted by what well-known ports are registered. The two are also specified in different grammars: you're already familiar with the scripting language that can be used for registering well-known ports. 
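As a sketch of that second grammar, a simplified port-independent signature modeled on Bro's shipped base/protocols/http/dpd.sig might look roughly like this (the signature name is illustrative; the shipped version also pairs it with a reverse server-side signature, and signature files can be loaded with `@load-sigs` or `bro -s`):

```
signature http_on_any_port {
    ip-proto == tcp
    payload /^[[:space:]]*(OPTIONS|GET|HEAD|POST|PUT|DELETE)[[:space:]]*/
    tcp-state originator
    enable "http"
}
```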
There's a different signature language that's described by that documentation I linked, and you can also see some examples by looking at "dpd.sig" files shipped in Bro. The "enable" action I referred to before is part of the signature language, not the scripting language. For the particular example you're giving, it may be worthwhile to figure out why the default HTTP signature (base/protocols/http/dpd.sig) is not matching and maybe write one that will (if you're desperate, do a signature to match every connection). - Jon From redlamb19 at gmail.com Mon Jul 21 21:36:25 2014 From: redlamb19 at gmail.com (Pete) Date: Mon, 21 Jul 2014 23:36:25 -0500 Subject: [Bro] Extracting File from Particular FTP Commands Message-ID: <53CDEA49.9000306@gmail.com> I am looking to extract data from an FTP session, but would only like to do so for those using the RETR or STOR command. I've been able to extract data from all FTP sessions by looking for the FTP_DATA source during a file_new event, but can't seem to find a way to access the associated ftp record with the command attribute. I'm assuming that this is complicated by the separate connection for ftp data. I've thought about modifying the default FTP::file_over_new_connection event to associate the ftp command channel with the data channel, but was wondering if there is a better (more accepted) approach before doing so. Any advice would be appreciated. My current file_new event is as follows: event file_new(f: fa_file) { if ( f$source != "FTP_DATA" ) return; for ( cid in f$conns ) { if ( f$conns[cid]?$ftp ) { print fmt("Command: %s", f$conns[cid]$ftp$command); } } local fname = fmt("%s_%s.bin", to_lower(f$source), f$id); Files::add_analyzer(f, Files::ANALYZER_EXTRACT, [$extract_filename=fname]); } To modify the base/protocols/ftp/files.bro script, I was going to simply add a statement to save the stored ftp record from ftp_data_expected to the current connection (c) which holds the FTP_DATA data. 
event file_over_new_connection(f: fa_file, c: connection, is_orig: bool) &priority=5 { if ( [c$id$resp_h, c$id$resp_p] !in ftp_data_expected ) return; local ftp = ftp_data_expected[c$id$resp_h, c$id$resp_p]; c$ftp = ftp; ftp$fuid = f$id; if ( f?$mime_type ) ftp$mime_type = f$mime_type; } From jsiwek at illinois.edu Tue Jul 22 07:19:01 2014 From: jsiwek at illinois.edu (Siwek, Jon) Date: Tue, 22 Jul 2014 14:19:01 +0000 Subject: [Bro] Extracting File from Particular FTP Commands In-Reply-To: <53CDEA49.9000306@gmail.com> References: <53CDEA49.9000306@gmail.com> Message-ID: On Jul 21, 2014, at 11:36 PM, Pete wrote: > I've thought about modifying the default FTP::file_over_new_connection event to associate the ftp command channel with the data > channel, but was wondering if there is a better (more accepted) approach before doing so. Maybe have your own "file_over_new_connection" handler that sets the field. The downside to modifying the default handler in-place is that you have to remember the change will be overwritten on the next Bro install. The downside of having your own handler is sometimes duplication of logic (e.g. the "ftp_data_expected" table lookup). You can decide which is better, but the general suggestion is usually to just maintain your own event handlers separately. - Jon From jacques.beland at ontario.ca Tue Jul 22 08:05:43 2014 From: jacques.beland at ontario.ca (Jacques Beland) Date: Tue, 22 Jul 2014 15:05:43 +0000 (UTC) Subject: [Bro] Installation Issue References: Message-ID: Seth Hall icir.org> writes: > > > On May 24, 2013, at 11:56 AM, "Champion,Jerry" synovus.com> wrote: > > > I am getting a Dependency is not satisfiable: libc6(<2.12) error message. > > What version of Ubuntu are you running? It's possible that our .deb was built for an older version. 
> > .Seth > > -- > Seth Hall > International Computer Science Institute > (Bro) because everyone has a network > http://www.bro.org/ > > _______________________________________________ > Bro mailing list > bro bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > Was this resolved? same issue here: Ubuntu VM installed from the SO iso (securityonion-12.04.4-20140222) soup reports everything up-to-date. ran the required updates: sudo apt-get install cmake make gcc g++ flex bison libpcap-dev libssl-dev python-dev swig zlib1g-dev Linux Onion 3.2.0-67-generic #101-Ubuntu SMP Tue Jul 15 17:46:11 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux results: jacques at Onion:~/Downloads$ sudo gdebi Bro-2.3-Linux-x86_64.deb Reading package lists... Done Building dependency tree Reading state information... Done Building data structures... Done Building data structures... Done This package is uninstallable Dependency is not satisfiable: libc6 (< 2.12) Thanks! Jacques From doug.burks at gmail.com Tue Jul 22 08:14:35 2014 From: doug.burks at gmail.com (Doug Burks) Date: Tue, 22 Jul 2014 11:14:35 -0400 Subject: [Bro] Installation Issue In-Reply-To: References: Message-ID: Hi Jacques, If you can wait a few weeks, we should have a Security Onion package for Bro 2.3 available in our normal PPA: https://code.google.com/p/security-onion/wiki/Roadmap On Tue, Jul 22, 2014 at 11:05 AM, Jacques Beland wrote: > Seth Hall icir.org> writes: > >> >> >> On May 24, 2013, at 11:56 AM, "Champion,Jerry" > synovus.com> wrote: >> >> > I am getting a Dependency is not satisfiable: libc6(<2.12) error message. >> >> What version of Ubuntu are you running? It's possible that our .deb was > built for an older version. 
>> >> .Seth >> >> -- >> Seth Hall >> International Computer Science Institute >> (Bro) because everyone has a network >> http://www.bro.org/ >> >> _______________________________________________ >> Bro mailing list >> bro bro-ids.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >> >> > > > > Was this resolved? > > same issue here: Ubuntu VM installed from the SO iso > (securityonion-12.04.4-20140222) > > soup reports everything up-to-date. ran the required updates: > > sudo apt-get install cmake make gcc g++ flex bison libpcap-dev libssl-dev > python-dev swig zlib1g-dev > > > Linux Onion 3.2.0-67-generic #101-Ubuntu SMP Tue Jul 15 17:46:11 UTC 2014 > x86_64 x86_64 x86_64 GNU/Linux > > results: > jacques at Onion:~/Downloads$ sudo gdebi Bro-2.3-Linux-x86_64.deb > Reading package lists... Done > Building dependency tree > Reading state information... Done > Building data structures... Done > Building data structures... Done > This package is uninstallable > Dependency is not satisfiable: libc6 (< 2.12) > > > Thanks! > > Jacques > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -- Doug Burks http://securityonionsolutions.com From jdopheid at illinois.edu Tue Jul 22 13:50:00 2014 From: jdopheid at illinois.edu (Dopheide, Jeannette M) Date: Tue, 22 Jul 2014 20:50:00 +0000 Subject: [Bro] BroCon '14 attendance hits 100 Message-ID: Bro Community, BroCon '14 has hit 100 attendees. The conference is less than a month away. If you haven't registered yet, please do so soon: http://www.bro.org/community/brocon2014.html See you soon, The Bro Team ------ Jeannette M. 
Dopheide Bro Outreach Coordinator National Center for Supercomputing Applications University of Illinois at Urbana-Champaign From xiangpan2011 at gmail.com Tue Jul 22 16:16:52 2014 From: xiangpan2011 at gmail.com (Xiang Pan) Date: Tue, 22 Jul 2014 16:16:52 -0700 Subject: [Bro] How to enable SMB analyzer in Bro 2.3? Message-ID: Hi All, I'm a newbie for bro. Currently I'm working on a project which needs to analyze smb traffic. I want to enable all the smb-related events so I googled a little bit and tried with the following script: ##################SCRIPT BEGIN############### const smbports = { 135/tcp, 137/tcp, 138/tcp, 139/tcp, 445/tcp }; redef capture_filters += { ["msrpc"] = "tcp port 135", ["netbios-ns"] = "tcp port 137", ["netbios-ds"] = "tcp port 138", ["netbios"] = "tcp port 139", ["smb"] = "tcp port 445" }; redef dpd_config += { [Analyzer::ANALYZER_SMB] = [$ports = smbports] }; redef likely_server_ports += { 445/tcp }; redef record connection += { smb: Info &optional; }; #analyze smb data event smb_com_read_andx(c: connection, hdr: smb_hdr, data: string){ print data; } ###################SCRIPT END################ Then I saved this file as smb_try.bro and executed command: bro -r ./smb.pcap -B dpd ./smb_try.bro However, bro gave me the following error message: "redef" used but not previously defined (dpd_config) It seems that bro can't find identifier dpd_config. Am I missing some scripts that need to be loaded in the beginning? What else should I do to enable smb analyzer? Best, Xiang -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140722/41fedcfe/attachment.html From mkolkebeck at gmail.com Tue Jul 22 20:48:22 2014 From: mkolkebeck at gmail.com (Mike Kolkebeck) Date: Tue, 22 Jul 2014 22:48:22 -0500 Subject: [Bro] How to enable SMB analyzer in Bro 2.3? 
In-Reply-To: References: Message-ID: <891201BE-95EA-43FF-A9BB-E1B18405A6B5@gmail.com> Assuming you're working in Bro 2.2 or 2.3, activating analyzers is much different than in previous versions. You should remove these lines: > redef dpd_config += { [Analyzer::ANALYZER_SMB] = [$ports = smbports] }; > And add this code: event bro_init() &priority=5 { Analyzer::register_for_ports(Analyzer::ANALYZER_SMB, smbports); } With that said, and to my knowledge, the SMB analyzer is still not in a complete, working state. Anyone, please correct me if I am wrong. I'd look forward to seeing if anyone, or the core development team, can make improvements on it. Seth did work on a 2.1 development branch, but this no longer seems to be functioning for the latest stable releases. > On Jul 22, 2014, at 6:16 PM, Xiang Pan wrote: > > Hi All, > > I'm a newbie for bro. Currently I'm working on a project which needs to analyze smb traffic. I want to enable all the smb-related events so I googled a little bit and tried with the following script: > > > > ##################SCRIPT BEGIN############### > > const smbports = { > > 135/tcp, 137/tcp, 138/tcp, 139/tcp, 445/tcp > > }; > > redef capture_filters += { > > ["msrpc"] = "tcp port 135", > > ["netbios-ns"] = "tcp port 137", > > ["netbios-ds"] = "tcp port 138", > > ["netbios"] = "tcp port 139", > > ["smb"] = "tcp port 445" > > }; > > redef dpd_config += { [Analyzer::ANALYZER_SMB] = [$ports = smbports] }; > > redef likely_server_ports += { 445/tcp }; > > redef record connection += { > > smb: Info &optional; > > }; > > > > #analyze smb data > > event smb_com_read_andx(c: connection, hdr: smb_hdr, data: string){ > > print data; > > } > > ###################SCRIPT END################ > > > > > > Then I saved this file as smb_try.bro and executed command: > > bro -r ./smb.pcap -B dpd ./smb_try.bro > > > > However, bro gave me the following error message: > > "redef" used but not previously defined (dpd_config) > > > > It seems that bro can't find 
identifier dpd_config. Am I missing some scripts that need to be loaded in the beginning? What else should I do to enable the SMB analyzer? > > > > Best, > > Xiang > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140722/843a045b/attachment.html

From jlay at slave-tothe-box.net Wed Jul 23 08:10:44 2014 From: jlay at slave-tothe-box.net (James Lay) Date: Wed, 23 Jul 2014 09:10:44 -0600 Subject: [Bro] Couple elasticsearch questions Message-ID: Hey all, A few questions: 1. Is there a proper way to set which logs to send to elasticsearch that I can use in local.bro instead of modifying logs-to-elasticsearch.bro? I am assuming that logs-to-elasticsearch.bro might change in future versions of bro. 2. The docs say to add @load tuning/logs-to-elasticsearch in local.bro...how can I send bro data to a remote elasticsearch server instead? 3. And lastly, as I look at the Brownian demo, I see that all the fields are correctly laid out...was this done with Brownian, or with elasticsearch itself? I'm trying to get bro data into logstash direct, instead of using log files. Thanks for any insight. James

From JAzoff at albany.edu Wed Jul 23 08:25:58 2014 From: JAzoff at albany.edu (Justin Azoff) Date: Wed, 23 Jul 2014 11:25:58 -0400 Subject: [Bro] Couple elasticsearch questions In-Reply-To: References: Message-ID: <20140723152558.GB10456@datacomm.albany.edu> On Wed, Jul 23, 2014 at 09:10:44AM -0600, James Lay wrote: > Hey all, > > A few questions: > > 1. Is there a proper way to set which logs to send to elasticsearch > that I can use in local.bro instead of modifying > logs-to-elasticsearch.bro? I am assuming that logs-to-elasticsearch.bro > might change in future versions of bro.
Yes, you should just create your own .bro file and take what you need from logs-to-elasticsearch.bro > 2. The docs say to add @load tuning/logs-to-elasticsearch in > local.bro...how can I send bro data to a remote elasticsearch server > instead? redef LogElasticSearch::server_host = "..."; > 3. And lastly, as I look at the Brownian demo, I see that all the > fields are correctly laid out...was this done with Brownian, or with > elasticsearch itself? No idea... Vlad would know :-) > I'm trying to get bro data into logstash direct, instead of using log > files. Thanks for any insight. Keep in mind that if communication between Bro and ES fails, you might have a very bad time. -- Justin Azoff

From seth at icir.org Wed Jul 23 08:40:40 2014 From: seth at icir.org (Seth Hall) Date: Wed, 23 Jul 2014 11:40:40 -0400 Subject: [Bro] Couple elasticsearch questions In-Reply-To: References: Message-ID: <864895E3-1ED5-47BF-997E-197B89A26A42@icir.org> On Jul 23, 2014, at 11:10 AM, James Lay wrote: > 1. Is there a proper way to set which logs to send to elasticsearch > that I can use in local.bro instead of modifying > logs-to-elasticsearch.bro? Yes, there are settings that you can change. In local.bro, you can do this...

@load tuning/logs-to-elasticsearch
redef LogElasticSearch::send_logs += {
    Conn::LOG,
    HTTP::LOG
};

That will only send the conn.log and http.log to ElasticSearch. > 2. The docs say to add @load tuning/logs-to-elasticsearch in > local.bro...how can I send bro data to a remote elasticsearch server > instead? redef LogElasticSearch::server_host = "1.2.3.4"; > 3. And lastly, as I look at the Brownian demo, I see that all the > fields are correctly laid out...was this done with Brownian, or with > elasticsearch itself? Could you explain what you mean by "correctly laid out"? > I'm trying to get bro data into logstash direct, instead of using log > files. Thanks for any insight. Cool!
With the current mechanism, you could encounter overload situations that cause Bro to grow in memory until you run out of memory. We're slowly working on extensions to the ES writer to make it write to a disk backed queuing system so that things should remain more stable over time. I am interested to hear any experiences you have with this though. .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 495 bytes Desc: Message signed with OpenPGP using GPGMail Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140723/1d9c9d12/attachment.bin

From roihat168 at yahoo.com Wed Jul 23 08:42:51 2014 From: roihat168 at yahoo.com (roi hatam) Date: Wed, 23 Jul 2014 08:42:51 -0700 Subject: [Bro] using broccoli to receive events to bro Message-ID: <1406130171.95036.YahooMailNeo@web162106.mail.bf1.yahoo.com> Hello, I need help please. I'm trying to connect with broccoli and intercept http requests. This is what my C code looks like:

------------------------------ ------------------------------ ----------------
#include #include #include #include #include #include #include #include

const char *host_str = "127.0.0.1";
const char *port_str = "47761";
int seqcheck = 0;

static void
http_request_c(BroConn *conn, void *data, ... /*BroRecord *c, BroString *method, BroString *original_URI, BroString *unescaped_URI, BroString *version*/){
    seqcheck++;
    fprintf(stdout, "inside http_request_c");
    fflush(stdout);

    conn = NULL;
    data = NULL;
}

int main(int argc, char **argv)
{
    BroConn *bc;
    char hostname[512];

    bro_init(NULL);
    snprintf(hostname, 512, "%s:%s", host_str, port_str);

    /* Connect to Bro */
    if (! (bc = bro_conn_new_str(hostname, BRO_CFLAG_RECONNECT | BRO_CFLAG_ALWAYS_QUEUE))){
        printf("Could not get Bro connection handle.\n");
        exit(-1);
    }

    bro_conn_set_class(bc, "control");

    if (! bro_conn_connect(bc)){
        printf("Could not connect to Bro at %s:%s.\n", host_str, port_str);
        exit(-1);
    }

    for ( ; ; ){
        bro_conn_process_input(bc);
        sleep(1);
        fprintf(stdout,"sleep...%d\n", seqcheck);
        fflush(stdout);
    }

    /* Disconnect from Bro and release state. */
    bro_conn_delete(bc);

    return 0;
}
------------------------------ ------------------------------ ----------------

and this is what the communication.log looks like:

------------------------------ ------------------------------ ----------------
1405931366.168104	bro	child	-	-	-	info	selects=13100000 canwrites=0 timeouts=13096722
1405931391.046791	bro	child	-	-	-	info	selects=13200000 canwrites=0 timeouts=13196721
1405931415.866668	bro	child	-	-	-	info	selects=13300000 canwrites=0 timeouts=13296721
1405931418.801334	bro	child	-	-	-	info	[#10014/127.0.0.1:50634] accepted clear connection
1405931418.801869	bro	parent	-	-	-	info	[#10014/127.0.0.1:50634] added peer
1405931418.801869	bro	parent	-	-	-	info	[#10014/127.0.0.1:50634] peer connected
1405931418.801869	bro	parent	-	-	-	info	[#10014/127.0.0.1:50634] phase: version
1405931418.802301	bro	parent	-	-	-	info	parent statistics: pending=0 bytes=121K/315286K chunks=2670/5206 io=1265/2097 bytes/io=0.10K/150.35K events=1056/2537 operations=0/0
1405931418.802301	bro	parent	-	-	-	info	child statistics: [0] pending=0 bytes=0K/0K chunks=0/0 io=0/0 bytes/io=-nanK/-nanK
1405931418.802301	bro	script	-	-	-	info	connection established
1405931418.802301	bro	script	-	-	-	info	requesting events matching /^?(Control::.*_request)$?/
1405931418.802301	bro	script	-	-	-	info	accepting state
1405931418.803341	bro	parent	-	-	-	info	[#10014/127.0.0.1:50634] peer sent class "control"
1405931418.803341	bro	parent	-	-	-	info	[#10014/127.0.0.1:50634] phase: handshake
1405931419.041201	bro	parent	-	-	-	info	[#10014/127.0.0.1:50634] peer does not support 64bit PIDs; using compatibility mode
1405931419.041201	bro	parent	-	-	-	info	[#10014/127.0.0.1:50634] peer is a Broccoli
1405931419.041201	bro	parent	-	-	-	info	[#10014/127.0.0.1:50634] phase: running
------------------------------ ------------------------------ ----------------

I don't see anything on my screen other than "sleep...0". I know for sure that http_request is triggered because I see it in the http.log. I will be very thankful for any kind of help. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140723/148aca60/attachment.html

From jsiwek at illinois.edu Wed Jul 23 08:44:34 2014 From: jsiwek at illinois.edu (Siwek, Jon) Date: Wed, 23 Jul 2014 15:44:34 +0000 Subject: [Bro] New BTest release In-Reply-To: <53C7DAFB.3000309@affolter-engineering.ch> References: <53C7DAFB.3000309@affolter-engineering.ch> Message-ID: <6009DF0D-61D2-440C-B024-1263B0F28489@illinois.edu> On Jul 17, 2014, at 9:17 AM, Fabian Affolter wrote: > I'm working on a btest package for the Fedora Package Collection (this > is part of a bro update from 1.5 to 2.3 in Fedora). It seems that > after the 0.52 release a couple of files were introduced to the git > repository (especially docs like COPYING). It's possible that during > the review process the missing license file could become a blocker. > > It would be nice if you could provide an updated source tarball of > btest as 0.52.1 or 0.53. Thanks, the packaging manifest was out of date.
An updated 0.53 tarball is now linked on the downloads page: https://www.bro.org/download/index.html - Jon From jlay at slave-tothe-box.net Wed Jul 23 08:50:59 2014 From: jlay at slave-tothe-box.net (James Lay) Date: Wed, 23 Jul 2014 09:50:59 -0600 Subject: [Bro] Couple elasticsearch questions In-Reply-To: <864895E3-1ED5-47BF-997E-197B89A26A42@icir.org> References: <864895E3-1ED5-47BF-997E-197B89A26A42@icir.org> Message-ID: On 2014-07-23 09:40, Seth Hall wrote: > On Jul 23, 2014, at 11:10 AM, James Lay > wrote: > >> 1. Is there a proper way to set which logs to send to elasticsearch >> that I can use in local.bro instead of modifying >> logs-to-elasticsearch.bro? > > Yes, there are settings that you can change. In local.bro, you can > do this... > > @load tuning/logs-to-elasticsearch > redef LogElasticSearch::send_logs += { > Conn::LOG, > HTTP::LOG > }; > > That will only send the conn.log and http.log to ElasticSearch. > >> 2. The docs say to add @load tuning/logs-to-elasticsearch in >> local.bro...how can I send bro data to a remote elasticsearch server >> instead? > > redef LogElasticSearch::server_host = "1.2.3.4"; > >> 3. And lastly, as I look at the Brownian demo, I see that all the >> fields are correctly laid out..was this down with Brownian, or with >> elasticsearch itself? > > Could you explain what you mean by "correctly laid out"? > >> I'm trying to get bro data into logstash direct, instead of using >> log >> files. Thanks for any insight. > > Cool! With the current mechanism, you could encounter overload > situations that cause Bro to grow in memory until you run out of > memory. We're slowly working on extensions to the ES writer to make > it write to a disk backed queuing system so that things should remain > more stable over time. I am interested to hear any experiences you > have with this though. > > .Seth Thanks for the responses Gents...they do help. So...for example here...I have snort currently going to logstash. 
In order to match fields I have this: filter { grok { match => [ "message", "%{SYSLOGTIMESTAMP:date} %{IPORHOST:device} %{WORD:snort}\[%{INT:snort_pid}\]\: \[%{INT:gid}\:%{INT:sid}\:%{INT:rev}\] %{DATA:ids_alert} \[Classification\: %{DATA:ids_classification}\] \[Priority\: %{INT:ids_priority}\] \{%{WORD:proto}\} %{IP:ids_src_ip}\:%{INT:ids_src_port} \-\> %{IP:ids_dst_ip}\:%{INT:ids_dst_port}" ] } to match: Jul 23 09:44:46 gateway snort[13205]: [1:2500084:3305] ET COMPROMISED Known Compromised or Hostile Host Traffic TCP group 43 [Classification: Misc Attack] [Priority: 2] {TCP} 61.174.51.229:6000 -> x.x.x.x:22 I'm guessing I'm going to have to create something like the above grok for each bro log file....which...is going to be a hoot ;) I was hoping that work was already done somewhere...and I think I had it working at one time for conn.log that I posted here some time ago. Thanks again...after looking at the Brownian source I think I'm going to have to just bite the bullet and generate the grok lines. James From craigp at iup.edu Wed Jul 23 08:58:05 2014 From: craigp at iup.edu (Craig Pluchinsky) Date: Wed, 23 Jul 2014 11:58:05 -0400 (EDT) Subject: [Bro] Couple elasticsearch questions In-Reply-To: References: <864895E3-1ED5-47BF-997E-197B89A26A42@icir.org> Message-ID: I've done most of them using grok and custom patterns. Conn.log below Using logstash to read the log files, process and insert into elasticsearch. Then using kibana as a web front end. grok { match => [ "message", "(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*))" ] } ------------------------------- Craig Pluchinsky IT Services Indiana University of Pennsylvania 724-357-3327 On Wed, 23 Jul 2014, James Lay wrote: > On 2014-07-23 09:40, Seth Hall wrote: >> On Jul 23, 2014, at 11:10 AM, James Lay >> wrote: >> >>> 1. 
Is there a proper way to set which logs to send to elasticsearch >>> that I can use in local.bro instead of modifying >>> logs-to-elasticsearch.bro? >> >> Yes, there are settings that you can change. In local.bro, you can >> do this... >> >> @load tuning/logs-to-elasticsearch >> redef LogElasticSearch::send_logs += { >> Conn::LOG, >> HTTP::LOG >> }; >> >> That will only send the conn.log and http.log to ElasticSearch. >> >>> 2. The docs say to add @load tuning/logs-to-elasticsearch in >>> local.bro...how can I send bro data to a remote elasticsearch server >>> instead? >> >> redef LogElasticSearch::server_host = "1.2.3.4"; >> >>> 3. And lastly, as I look at the Brownian demo, I see that all the >>> fields are correctly laid out..was this down with Brownian, or with >>> elasticsearch itself? >> >> Could you explain what you mean by "correctly laid out"? >> >>> I'm trying to get bro data into logstash direct, instead of using >>> log >>> files. Thanks for any insight. >> >> Cool! With the current mechanism, you could encounter overload >> situations that cause Bro to grow in memory until you run out of >> memory. We're slowly working on extensions to the ES writer to make >> it write to a disk backed queuing system so that things should remain >> more stable over time. I am interested to hear any experiences you >> have with this though. >> >> .Seth > > Thanks for the responses Gents...they do help. So...for example > here...I have snort currently going to logstash. 
In order to match > fields I have this: > > filter { > grok { > match => [ "message", "%{SYSLOGTIMESTAMP:date} > %{IPORHOST:device} %{WORD:snort}\[%{INT:snort_pid}\]\: > \[%{INT:gid}\:%{INT:sid}\:%{INT:rev}\] %{DATA:ids_alert} > \[Classification\: %{DATA:ids_classification}\] \[Priority\: > %{INT:ids_priority}\] \{%{WORD:proto}\} > %{IP:ids_src_ip}\:%{INT:ids_src_port} \-\> > %{IP:ids_dst_ip}\:%{INT:ids_dst_port}" ] > } > > to match: > > Jul 23 09:44:46 gateway snort[13205]: [1:2500084:3305] ET COMPROMISED > Known Compromised or Hostile Host Traffic TCP group 43 [Classification: > Misc Attack] [Priority: 2] {TCP} 61.174.51.229:6000 -> x.x.x.x:22 > > I'm guessing I'm going to have to create something like the above grok > for each bro log file....which...is going to be a hoot ;) I was hoping > that work was already done somewhere...and I think I had it working at > one time for conn.log that I posted here some time ago. Thanks > again...after looking at the Brownian source I think I'm going to have > to just bite the bullet and generate the grok lines. > > James > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > From jlay at slave-tothe-box.net Wed Jul 23 09:03:56 2014 From: jlay at slave-tothe-box.net (James Lay) Date: Wed, 23 Jul 2014 10:03:56 -0600 Subject: [Bro] Couple elasticsearch questions In-Reply-To: References: <864895E3-1ED5-47BF-997E-197B89A26A42@icir.org> Message-ID: <79abf3422866c1fff31aea4702bc08da@localhost> On 2014-07-23 09:58, Craig Pluchinsky wrote: > I've done most of them using grok and custom patterns. Conn.log > below Using logstash to read the log files, process and insert into > elasticsearch. Then using kibana as a web front end. 
> > grok { > match => [ "message", > > "(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*))" > ] > } > > > > ------------------------------- > Craig Pluchinsky > IT Services > Indiana University of Pennsylvania > 724-357-3327 > > > On Wed, 23 Jul 2014, James Lay wrote: > >> On 2014-07-23 09:40, Seth Hall wrote: >>> On Jul 23, 2014, at 11:10 AM, James Lay >>> wrote: >>> >>>> 1. Is there a proper way to set which logs to send to >>>> elasticsearch >>>> that I can use in local.bro instead of modifying >>>> logs-to-elasticsearch.bro? >>> >>> Yes, there are settings that you can change. In local.bro, you can >>> do this... >>> >>> @load tuning/logs-to-elasticsearch >>> redef LogElasticSearch::send_logs += { >>> Conn::LOG, >>> HTTP::LOG >>> }; >>> >>> That will only send the conn.log and http.log to ElasticSearch. >>> >>>> 2. The docs say to add @load tuning/logs-to-elasticsearch in >>>> local.bro...how can I send bro data to a remote elasticsearch >>>> server >>>> instead? >>> >>> redef LogElasticSearch::server_host = "1.2.3.4"; >>> >>>> 3. And lastly, as I look at the Brownian demo, I see that all the >>>> fields are correctly laid out..was this down with Brownian, or >>>> with >>>> elasticsearch itself? >>> >>> Could you explain what you mean by "correctly laid out"? >>> >>>> I'm trying to get bro data into logstash direct, instead of using >>>> log >>>> files. Thanks for any insight. >>> >>> Cool! With the current mechanism, you could encounter overload >>> situations that cause Bro to grow in memory until you run out of >>> memory. We're slowly working on extensions to the ES writer to >>> make >>> it write to a disk backed queuing system so that things should >>> remain >>> more stable over time. I am interested to hear any experiences you >>> have with this though. 
>>> >>> .Seth >> >> Thanks for the responses Gents...they do help. So...for example >> here...I have snort currently going to logstash. In order to match >> fields I have this: >> >> filter { >> grok { >> match => [ "message", "%{SYSLOGTIMESTAMP:date} >> %{IPORHOST:device} %{WORD:snort}\[%{INT:snort_pid}\]\: >> \[%{INT:gid}\:%{INT:sid}\:%{INT:rev}\] %{DATA:ids_alert} >> \[Classification\: %{DATA:ids_classification}\] \[Priority\: >> %{INT:ids_priority}\] \{%{WORD:proto}\} >> %{IP:ids_src_ip}\:%{INT:ids_src_port} \-\> >> %{IP:ids_dst_ip}\:%{INT:ids_dst_port}" ] >> } >> >> to match: >> >> Jul 23 09:44:46 gateway snort[13205]: [1:2500084:3305] ET >> COMPROMISED >> Known Compromised or Hostile Host Traffic TCP group 43 >> [Classification: >> Misc Attack] [Priority: 2] {TCP} 61.174.51.229:6000 -> x.x.x.x:22 >> >> I'm guessing I'm going to have to create something like the above >> grok >> for each bro log file....which...is going to be a hoot ;) I was >> hoping >> that work was already done somewhere...and I think I had it working >> at >> one time for conn.log that I posted here some time ago. Thanks >> again...after looking at the Brownian source I think I'm going to >> have >> to just bite the bullet and generate the grok lines. >> >> James Hey thanks Craig, that's a big help...if you'd be willing to share any of the others that would be excellent as well.
I have to admit I'm fairly excited to see a dashboard that shows me things like "show me snort ids hits AND firewall hits AND connection tracking" :) James From seth at icir.org Wed Jul 23 09:08:05 2014 From: seth at icir.org (Seth Hall) Date: Wed, 23 Jul 2014 12:08:05 -0400 Subject: [Bro] Couple elasticsearch questions In-Reply-To: References: <864895E3-1ED5-47BF-997E-197B89A26A42@icir.org> Message-ID: <76A28E50-BB11-4828-BEEC-84BAD394FAF0@icir.org> On Jul 23, 2014, at 11:50 AM, James Lay wrote: > I'm guessing I'm going to have to create something like the above grok > for each bro log file....which...is going to be a hoot ;) Are you saying that you're going to have to do this because you don't want Bro to write directly to ElasticSearch? .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 495 bytes Desc: Message signed with OpenPGP using GPGMail Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140723/213b8530/attachment.bin From jlay at slave-tothe-box.net Wed Jul 23 09:15:25 2014 From: jlay at slave-tothe-box.net (James Lay) Date: Wed, 23 Jul 2014 10:15:25 -0600 Subject: [Bro] Couple elasticsearch questions In-Reply-To: <76A28E50-BB11-4828-BEEC-84BAD394FAF0@icir.org> References: <864895E3-1ED5-47BF-997E-197B89A26A42@icir.org> <76A28E50-BB11-4828-BEEC-84BAD394FAF0@icir.org> Message-ID: <5a0053fea85892c5e080015871b041d9@localhost> On 2014-07-23 10:08, Seth Hall wrote: > On Jul 23, 2014, at 11:50 AM, James Lay > wrote: > >> I'm guessing I'm going to have to create something like the above >> grok >> for each bro log file....which...is going to be a hoot ;) > > Are you saying that you're going to have to do this because you don't > want Bro to write directly to ElasticSearch? 
> > .Seth > > -- > Seth Hall > International Computer Science Institute > (Bro) because everyone has a network > http://www.bro.org/ Negative. In order to get Logstash/Kibana to identify fields, the grok patterns are what is used. I guess that's the question for me....does Bro dump the data raw into elasticsearch? If it does then I'll need to include a grok line in my logstash config to parse out the data of each type of log that bro generates. I hope that makes sense..thanks Seth. James From rjenkins at rmjconsulting.net Wed Jul 23 09:22:19 2014 From: rjenkins at rmjconsulting.net (Ron Jenkins) Date: Wed, 23 Jul 2014 16:22:19 +0000 Subject: [Bro] Bro v2.3 and MySQL Message-ID: Good morning; Does v2.3 support writing directly to MySQL? Thank you! Ron Jenkins (Owner / Senior Architect) RMJ Consulting, LLC. "Bringing Companies and Solutions Together" 11715 Bricksome Ave STE B-7 Baton Rouge, LA 70816 Toll: 855-448-5214 Direct. 225-448-5214 Ext #101 Fax. 225-448-5324 Cell. 225-931-1632 Email. rjenkins at rmjconsulting.net Web. http://www.rmjconsulting.net Log Siphon. http://www.logsiphon.com Linkedin. www.linkedin.com/in/ronmjenkins/ Twitter: www.twitter.com/rmj__consulting RMJ Consulting's Technology Corner. https://www.rmjconsulting.net/main/paper.php -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140723/5fe82d49/attachment.html From mkhan04 at gmail.com Wed Jul 23 09:27:58 2014 From: mkhan04 at gmail.com (M K) Date: Wed, 23 Jul 2014 12:27:58 -0400 Subject: [Bro] Couple elasticsearch questions In-Reply-To: <5a0053fea85892c5e080015871b041d9@localhost> References: <864895E3-1ED5-47BF-997E-197B89A26A42@icir.org> <76A28E50-BB11-4828-BEEC-84BAD394FAF0@icir.org> <5a0053fea85892c5e080015871b041d9@localhost> Message-ID: Bro converts the data to json and then writes that to elasticsearch using ES's bulk interface. 
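[Editor's note on the bulk interface mentioned above: ES's _bulk endpoint takes newline-delimited JSON, where each document is preceded by an action line naming the target index, and the body must end with a trailing newline. A rough sketch of assembling such a payload in Python — the index name and field names here are invented for illustration, not what Bro's writer actually emits:]

```python
import json

# Documents to index; the keys mimic Bro conn.log fields but are made up.
docs = [
    {"ts": 1406130171.0, "id.orig_h": "10.0.0.1", "id.resp_h": "10.0.0.2"},
    {"ts": 1406130172.0, "id.orig_h": "10.0.0.3", "id.resp_h": "10.0.0.4"},
]

lines = []
for doc in docs:
    # Action line: tells ES which index (and, in 2014-era ES, type) to use.
    lines.append(json.dumps({"index": {"_index": "bro-conn", "_type": "conn"}}))
    # Source line: the document itself.
    lines.append(json.dumps(doc))

# The bulk API requires a newline after the last line as well.
payload = "\n".join(lines) + "\n"
print(payload)
```

The payload would then be POSTed to the cluster's `_bulk` endpoint; a "fire and forget" writer, as described below, simply never inspects the per-item results in the response.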
But it does a "fire and forget" so doesn't confirm that the data was actually accepted. I wrote an AMQPRiver writer a while back that allows you to leverage an ElasticSearch River, it provided for a higher level of reliability of data ingestion, but I haven't touched it since I wrote it a few months back. On Wed, Jul 23, 2014 at 12:15 PM, James Lay wrote: > On 2014-07-23 10:08, Seth Hall wrote: > > On Jul 23, 2014, at 11:50 AM, James Lay > > wrote: > > > >> I'm guessing I'm going to have to create something like the above > >> grok > >> for each bro log file....which...is going to be a hoot ;) > > > > Are you saying that you're going to have to do this because you don't > > want Bro to write directly to ElasticSearch? > > > > .Seth > > > > -- > > Seth Hall > > International Computer Science Institute > > (Bro) because everyone has a network > > http://www.bro.org/ > > Negative. In order to get Logstash/Kibana to identify fields, the grok > patterns are what is used. I guess that's the question for me....does > Bro dump the data raw into elasticsearch? If it does then I'll need to > include a grok line in my logstash config to parse out the data of each > type of log that bro generates. I hope that makes sense..thanks Seth. > > James > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140723/9000c44b/attachment.html

From jsiwek at illinois.edu Wed Jul 23 09:31:19 2014 From: jsiwek at illinois.edu (Siwek, Jon) Date: Wed, 23 Jul 2014 16:31:19 +0000 Subject: [Bro] using broccoli to receive events to bro In-Reply-To: <1406130171.95036.YahooMailNeo@web162106.mail.bf1.yahoo.com> References: <1406130171.95036.YahooMailNeo@web162106.mail.bf1.yahoo.com> Message-ID: <05BF59B0-C1BF-42BA-92D4-B8D8E4C19A4A@illinois.edu> On Jul 23, 2014, at 10:42 AM, roi hatam wrote: > I'm trying to connect with broccoli and intercept http requests. > > static void > http_request_c(BroConn *conn, void *data, ... /*BroRecord *c, BroString *method, BroString *original_URI, BroString *unescaped_URI, BroString *version*/){ > seqcheck++; > fprintf(stdout, "inside http_request_c"); > fflush(stdout); > > conn = NULL; > data = NULL; > } At least one thing that appears to be missing is that this callback is never registered, so it's not linked to any event generated in Bro. Doing that (sometime before your bro_conn_connect() call) will probably look something like: bro_event_registry_add(bc, "http_request", (BroEventFunc) http_request_c, NULL); - Jon

From seth at icir.org Wed Jul 23 09:31:41 2014 From: seth at icir.org (Seth Hall) Date: Wed, 23 Jul 2014 12:31:41 -0400 Subject: [Bro] Couple elasticsearch questions In-Reply-To: <5a0053fea85892c5e080015871b041d9@localhost> References: <864895E3-1ED5-47BF-997E-197B89A26A42@icir.org> <76A28E50-BB11-4828-BEEC-84BAD394FAF0@icir.org> <5a0053fea85892c5e080015871b041d9@localhost> Message-ID: <56A66A01-D8E6-4F78-9F1F-08C51EA8C723@icir.org> On Jul 23, 2014, at 12:15 PM, James Lay wrote: > Negative. In order to get Logstash/Kibana to identify fields, the grok > patterns are what is used. I guess that's the question for me....does > Bro dump the data raw into elasticsearch? Bro will write the logs directly into elasticsearch (with the fields separated and named correctly).
You don't need logstash at all. The only difference is that in your kibana config, you'll need to make it use slightly different index names. I'm hoping that this is something we'll have more guidance on at some point. I definitely recognize that more cleanup needs to done to this code to make it more resilient and make it easier to get to an end-result. .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 495 bytes Desc: Message signed with OpenPGP using GPGMail Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140723/e0dfa66c/attachment.bin From hosom at battelle.org Wed Jul 23 10:11:59 2014 From: hosom at battelle.org (Hosom, Stephen M) Date: Wed, 23 Jul 2014 17:11:59 +0000 Subject: [Bro] Couple elasticsearch questions In-Reply-To: <56A66A01-D8E6-4F78-9F1F-08C51EA8C723@icir.org> References: <864895E3-1ED5-47BF-997E-197B89A26A42@icir.org> <76A28E50-BB11-4828-BEEC-84BAD394FAF0@icir.org> <5a0053fea85892c5e080015871b041d9@localhost> <56A66A01-D8E6-4F78-9F1F-08C51EA8C723@icir.org> Message-ID: How does Bro handle indexes within ES? Does it rotate indexes, or does it write to one extremely large index with TTLs? -----Original Message----- From: bro-bounces at bro.org [mailto:bro-bounces at bro.org] On Behalf Of Seth Hall Sent: Wednesday, July 23, 2014 12:32 PM To: James Lay Cc: bro at bro-ids.org Subject: Re: [Bro] Couple elasticsearch questions On Jul 23, 2014, at 12:15 PM, James Lay wrote: > Negative. In order to get Logstash/Kibana to identify fields, the > grok patterns are what is used. I guess that's the question for > me....does Bro dump the data raw into elasticsearch? Bro will write the logs directly into elasticsearch (with the fields separated and named correctly). You don't need logstash at all. 
The only difference is that in your kibana config, you'll need to make it use slightly different index names. I'm hoping that this is something we'll have more guidance on at some point. I definitely recognize that more cleanup needs to be done to this code to make it more resilient and make it easier to get to an end-result. .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 495 bytes Desc: Message signed with OpenPGP using GPGMail Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140723/e0dfa66c/attachment.bin

From hosom at battelle.org Wed Jul 23 10:11:59 2014 From: hosom at battelle.org (Hosom, Stephen M) Date: Wed, 23 Jul 2014 17:11:59 +0000 Subject: [Bro] Couple elasticsearch questions In-Reply-To: <56A66A01-D8E6-4F78-9F1F-08C51EA8C723@icir.org> References: <864895E3-1ED5-47BF-997E-197B89A26A42@icir.org> <76A28E50-BB11-4828-BEEC-84BAD394FAF0@icir.org> <5a0053fea85892c5e080015871b041d9@localhost> <56A66A01-D8E6-4F78-9F1F-08C51EA8C723@icir.org> Message-ID: How does Bro handle indexes within ES? Does it rotate indexes, or does it write to one extremely large index with TTLs? -----Original Message----- From: bro-bounces at bro.org [mailto:bro-bounces at bro.org] On Behalf Of Seth Hall Sent: Wednesday, July 23, 2014 12:32 PM To: James Lay Cc: bro at bro-ids.org Subject: Re: [Bro] Couple elasticsearch questions On Jul 23, 2014, at 12:15 PM, James Lay wrote: > Negative. In order to get Logstash/Kibana to identify fields, the > grok patterns are what is used. I guess that's the question for > me....does Bro dump the data raw into elasticsearch? Bro will write the logs directly into elasticsearch (with the fields separated and named correctly). You don't need logstash at all. The only difference is that in your kibana config, you'll need to make it use slightly different index names. I'm hoping that this is something we'll have more guidance on at some point. I definitely recognize that more cleanup needs to be done to this code to make it more resilient and make it easier to get to an end-result. .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/

From openjaf at gmail.com Wed Jul 23 12:29:59 2014 From: openjaf at gmail.com (James Feister) Date: Wed, 23 Jul 2014 15:29:59 -0400 Subject: [Bro] Signature framework questions, endianness and bitwise operations Message-ID: Had some questions about the signature framework for detecting an application protocol. Is it possible to manipulate bytes for endianness, or will they always come in little endian? Is it possible to perform bitwise operations on payload bytes so that you may perform checks against subsets of bits within the byte? For example, I have to look at the first 4 bits of a big-endian application layer protocol. For my test cases I can match signatures against a known 8-bit little endian regex, but I'm not sure how to get to 4 bits because the next 4 bits will change in an operational environment. If not, I'm guessing I would have to pump all traffic through my binpac analyzer and do the detection there? Thanks, James -------------- next part -------------- An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140723/b36caba6/attachment.html

From seth at icir.org Wed Jul 23 12:44:00 2014 From: seth at icir.org (Seth Hall) Date: Wed, 23 Jul 2014 15:44:00 -0400 Subject: [Bro] Couple elasticsearch questions In-Reply-To: References: <864895E3-1ED5-47BF-997E-197B89A26A42@icir.org> <76A28E50-BB11-4828-BEEC-84BAD394FAF0@icir.org> <5a0053fea85892c5e080015871b041d9@localhost> <56A66A01-D8E6-4F78-9F1F-08C51EA8C723@icir.org> Message-ID: <01EBD266-0DC5-4D2C-A250-9CFBC8214577@icir.org> On Jul 23, 2014, at 1:11 PM, Hosom, Stephen M wrote: > How does Bro handle indexes within ES? Does it rotate indexes, or does it write to one extremely large index with TTLs? Right now we're handling indexes with Bro log rotation. The logs-to-elasticsearch script sets a log rotation interval of 3 hours so you'll have a new index created every three hours. Bro is also not doing anything to clean up old indexes so you'll have to do that on your own. .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 495 bytes Desc: Message signed with OpenPGP using GPGMail Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140723/cd28914a/attachment.bin

From michael.wenthold at gmail.com Wed Jul 23 13:20:27 2014 From: michael.wenthold at gmail.com (Michael Wenthold) Date: Wed, 23 Jul 2014 16:20:27 -0400 Subject: [Bro] Couple elasticsearch questions In-Reply-To: References: <864895E3-1ED5-47BF-997E-197B89A26A42@icir.org> Message-ID: I'm far from being an expert, but I think that using the built-in grok patterns and/or being more specific with the regex syntax will result in better logstash performance.
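[Editor's note: the specificity point is easy to demonstrate outside logstash. Grok patterns compile down to regexes, and a specific built-in pattern like %{NOTSPACE:field} (roughly \S+) extracts the same fields from a tab-separated record as a lazy (.*?) capture, while giving the engine far less room to backtrack. A small sketch in Python, with invented field names and values:]

```python
import re

# A tab-separated line shaped like a Bro log record (values are made up).
line = "1406130171.000000\tCXWv6p3arKYeMETxOg\ttcp\thttp"

# Lazy dot-star captures, like the generic (.*?) grok style quoted earlier.
lazy = re.match(
    r"(?P<ts>.*?)\t(?P<uid>.*?)\t(?P<proto>.*?)\t(?P<service>.*)", line)

# Specific captures, like grok's %{NOTSPACE:field}: same result, but the
# engine can never backtrack past a tab because \S+ cannot match one.
specific = re.match(
    r"(?P<ts>\S+)\t(?P<uid>\S+)\t(?P<proto>\S+)\t(?P<service>\S+)", line)

assert lazy.groupdict() == specific.groupdict()
print(specific.group("proto"))  # tcp
```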
for example, I'm profiling performance for some of our dns grok parsing patterns: match => [ "message", "(?[0-9\.]{14})[0-9]+\t%{IP:dns_requester}\s%(?[0-9]{1,5})\t%{IP:dns_server}\s%(?[0-9]{1,5})\t%{WORD:dns_query_proto}\t(?[0-9]+)\t%{HOSTNAME:dns_query}\t%{NOTSPACE:dns_query_class}\t(?[A-Za-z0-9\-\*]+)\t%{NOTSPACE:dns_query_result>[A-Z\*]+)\t(?[TF])\t(?[TF])\t(?[TF])\t%{GREEDYDATA:dns_response}" ] I'm also sure that there's more efficient ways to write it than what I did. The odd parsing of the timestamp is because we use logstash to rewrite event times where possible, using the actual event time with the date filter: date { match => [ "bro_event_time", "UNIX" ] } Just my .02. On Wed, Jul 23, 2014 at 11:58 AM, Craig Pluchinsky wrote: > I've done most of them using grok and custom patterns. Conn.log below > Using logstash to read the log files, process and insert into > elasticsearch. Then using kibana as a web front end. > > grok { > match => [ "message", > > "(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*?))\t(?(.*))" > ] > } > > > > ------------------------------- > Craig Pluchinsky > IT Services > Indiana University of Pennsylvania > 724-357-3327 > > > On Wed, 23 Jul 2014, James Lay wrote: > > > On 2014-07-23 09:40, Seth Hall wrote: > >> On Jul 23, 2014, at 11:10 AM, James Lay > >> wrote: > >> > >>> 1. Is there a proper way to set which logs to send to elasticsearch > >>> that I can use in local.bro instead of modifying > >>> logs-to-elasticsearch.bro? > >> > >> Yes, there are settings that you can change. In local.bro, you can > >> do this... > >> > >> @load tuning/logs-to-elasticsearch > >> redef LogElasticSearch::send_logs += { > >> Conn::LOG, > >> HTTP::LOG > >> }; > >> > >> That will only send the conn.log and http.log to ElasticSearch. > >> > >>> 2. 
The docs say to add @load tuning/logs-to-elasticsearch in > >>> local.bro...how can I send bro data to a remote elasticsearch server > >>> instead? > >> > >> redef LogElasticSearch::server_host = "1.2.3.4"; > >> > >>> 3. And lastly, as I look at the Brownian demo, I see that all the > >>> fields are correctly laid out...was this done with Brownian, or with > >>> elasticsearch itself? > >> > >> Could you explain what you mean by "correctly laid out"? > >> > >>> I'm trying to get bro data into logstash direct, instead of using > >>> log > >>> files. Thanks for any insight. > >> > >> Cool! With the current mechanism, you could encounter overload > >> situations that cause Bro to grow in memory until you run out of > >> memory. We're slowly working on extensions to the ES writer to make > >> it write to a disk-backed queuing system so that things should remain > >> more stable over time. I am interested to hear any experiences you > >> have with this though. > >> > >> .Seth > > > > Thanks for the responses Gents...they do help. So...for example > > here...I have snort currently going to logstash. 
In order to match > > fields I have this: > > > > filter { > > grok { > > match => [ "message", "%{SYSLOGTIMESTAMP:date} > > %{IPORHOST:device} %{WORD:snort}\[%{INT:snort_pid}\]\: > > \[%{INT:gid}\:%{INT:sid}\:%{INT:rev}\] %{DATA:ids_alert} > > \[Classification\: %{DATA:ids_classification}\] \[Priority\: > > %{INT:ids_priority}\] \{%{WORD:proto}\} > > %{IP:ids_src_ip}\:%{INT:ids_src_port} \-\> > > %{IP:ids_dst_ip}\:%{INT:ids_dst_port}" ] > > } > > > > to match: > > > > Jul 23 09:44:46 gateway snort[13205]: [1:2500084:3305] ET COMPROMISED > > Known Compromised or Hostile Host Traffic TCP group 43 [Classification: > > Misc Attack] [Priority: 2] {TCP} 61.174.51.229:6000 -> x.x.x.x:22 > > > > I'm guessing I'm going to have to create something like the above grok > > for each bro log file....which...is going to be a hoot ;) I was hoping > > that work was already done somewhere...and I think I had it working at > > one time for conn.log that I posted here some time ago. Thanks > > again...after looking at the Brownian source I think I'm going to have > > to just bite the bullet and generate the grok lines. > > > > James > > > > _______________________________________________ > > Bro mailing list > > bro at bro-ids.org > > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140723/ee1fc293/attachment.html From jsiwek at illinois.edu Wed Jul 23 13:42:05 2014 From: jsiwek at illinois.edu (Siwek, Jon) Date: Wed, 23 Jul 2014 20:42:05 +0000 Subject: [Bro] Signature framework questions, endianess and bitwise operations In-Reply-To: References: Message-ID: <2370AA84-DF99-408C-8CBB-D69DC38839C8@illinois.edu> On Jul 23, 2014, at 2:29 PM, James Feister wrote: > Had some questions about the signature framework for detecting an application protocol. > > Is it possible to manipulate bytes for endianness or will they always come in little endian? Byte order isn't considered; payloads are a string of bytes, and signatures may use a regex to match on that. > Is it possible to perform bitwise operations on payload bytes so that you may perform checks against subsets of bits within the byte? > > For example I have to look at the first 4 bits of a big-endian-defined application layer protocol. For my test cases I can match signatures against a known 8-bit little-endian regex but I'm not sure how to get to 4 bits because the next 4 bits will change in an operational environment. Can character classes express what you want? - Jon From mfw113 at psu.edu Wed Jul 23 17:39:58 2014 From: mfw113 at psu.edu (Mike Waite) Date: Wed, 23 Jul 2014 20:39:58 -0400 Subject: [Bro] Couple elasticsearch questions In-Reply-To: References: <864895E3-1ED5-47BF-997E-197B89A26A42@icir.org> Message-ID: <53D055DE.2040509@psu.edu> Take a look at http://brostash.herokuapp.com/ -Mike On 7/23/14, 11:50 AM, James Lay wrote: > I'm guessing I'm going to have to create something like the above grok > for each bro log file....which...is going to be a hoot ;) I was hoping > that work was already done somewhere...and I think I had it working at > one time for conn.log that I posted here some time ago. 
Thanks > again...after looking at the Brownian source I think I'm going to have > to just bite the bullet and generate the grok lines. > > James > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 601 bytes Desc: OpenPGP digital signature Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140723/4a26fd47/attachment.bin From jlay at slave-tothe-box.net Wed Jul 23 18:04:32 2014 From: jlay at slave-tothe-box.net (James Lay) Date: Wed, 23 Jul 2014 19:04:32 -0600 Subject: [Bro] Couple elasticsearch questions In-Reply-To: <53D055DE.2040509@psu.edu> References: <864895E3-1ED5-47BF-997E-197B89A26A42@icir.org> <53D055DE.2040509@psu.edu> Message-ID: <1406163872.2701.10.camel@JamesiMac> On Wed, 2014-07-23 at 20:39 -0400, Mike Waite wrote: > Take a look at > > http://brostash.herokuapp.com/ > > -Mike > > On 7/23/14, 11:50 AM, James Lay wrote: > > > I'm guessing I'm going to have to create something like the above grok > > for each bro log file....which...is going to be a hoot ;) I was hoping > > that work was already done somewhere...and I think I had it working at > > one time for conn.log that I posted here some time ago. Thanks > > again...after looking at the Brownian source I think I'm going to have > > to just bite the bullet and generate the grok lines. > > > > James > > > > _______________________________________________ > > Bro mailing list > > bro at bro-ids.org > > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro Yea that's a thing of beauty...thank you! 
James From jlay at slave-tothe-box.net Wed Jul 23 18:06:02 2014 From: jlay at slave-tothe-box.net (James Lay) Date: Wed, 23 Jul 2014 19:06:02 -0600 Subject: [Bro] Couple elasticsearch questions In-Reply-To: References: <864895E3-1ED5-47BF-997E-197B89A26A42@icir.org> <76A28E50-BB11-4828-BEEC-84BAD394FAF0@icir.org> <5a0053fea85892c5e080015871b041d9@localhost> Message-ID: <1406163962.2701.11.camel@JamesiMac> On Wed, 2014-07-23 at 12:27 -0400, M K wrote: > Bro converts the data to json and then writes that to elasticsearch > using ES's bulk interface. But it does a "fire and forget" so doesn't > confirm that the data was actually accepted. > > I wrote an AMQPRiver writer a while back that allows you to leverage > an ElasticSearch River, it provided for a higher level of reliability > of data ingestion, but I haven't touched it since I wrote it a few > months back. > > > > On Wed, Jul 23, 2014 at 12:15 PM, James Lay > wrote: > On 2014-07-23 10:08, Seth Hall wrote: > > On Jul 23, 2014, at 11:50 AM, James Lay > > > wrote: > > > >> I'm guessing I'm going to have to create something like the > above > >> grok > >> for each bro log file....which...is going to be a hoot ;) > > > > Are you saying that you're going to have to do this because > you don't > > want Bro to write directly to ElasticSearch? > > > > .Seth > > > > -- > > Seth Hall > > International Computer Science Institute > > (Bro) because everyone has a network > > http://www.bro.org/ > > > Negative. In order to get Logstash/Kibana to identify fields, > the grok > patterns are what is used. I guess that's the question for > me....does > Bro dump the data raw into elasticsearch? If it does then > I'll need to > include a grok line in my logstash config to parse out the > data of each > type of log that bro generates. I hope that makes > sense..thanks Seth. 
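For reference, the bulk interface M K mentions is Elasticsearch's `_bulk` endpoint, which takes newline-delimited JSON: an action line naming the target index, followed by the document itself. A Bro conn entry would go over the wire roughly like this (the index name and field subset are illustrative, following the three-hour rotated index scheme described earlier in the thread):

```
POST /_bulk
{"index": {"_index": "bro-201407231200", "_type": "conn"}}
{"ts": 1406127600.0, "id.orig_h": "10.0.0.1", "id.resp_h": "10.0.0.2", "id.resp_p": 80, "proto": "tcp"}
```

Because this is fire-and-forget on Bro's side, a document rejected by Elasticsearch at this layer is simply lost, which is the reliability concern the AMQPRiver writer was meant to address.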
> > James > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > > Thanks MK...that does help...this has been an interesting day of discovery. James From Robert_Yang at trendmicro.com.cn Wed Jul 23 23:45:54 2014 From: Robert_Yang at trendmicro.com.cn (Robert_Yang at trendmicro.com.cn) Date: Thu, 24 Jul 2014 06:45:54 +0000 Subject: [Bro] How to extract data to a eml file from smtp traffic Message-ID: <6FCE7872AA66C246990EC5623F91A014BBF85278@CDCEXMBX02.tw.trendnet.org> Hi everyone, I want to extract the whole data to an eml file from SMTP traffic. The system event - file_new() - only saves every MIME entity of an email as a separate file instead of the whole email. This is not what I want. I tried adding an event handler in ./share/bro/base/protocols/smtp/file.bro: event smtp_data(c: connection, is_orig: bool, data: string) { print fmt("DATA %d", |data|); } I print the size of every data chunk. The total of all the data sizes is always less than the actual size of the eml file (23137 bytes < 23831 bytes). So what am I missing? And how do I save the data to a file in the smtp_data event? Please help me with the above question if you are free. Thank you a lot! BR Robert Yang TREND MICRO EMAIL NOTICE The information contained in this email and any attachments is confidential and may be subject to copyright or other intellectual property protection. If you are not the intended recipient, you are not authorized to use or disclose this information, and we request that you notify us by reply mail or telephone and delete the original message from your mail system. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140724/5032b16b/attachment.html From seth at icir.org Thu Jul 24 06:40:59 2014 From: seth at icir.org (Seth Hall) Date: Thu, 24 Jul 2014 09:40:59 -0400 Subject: [Bro] How to extract data to a eml file from smtp traffic In-Reply-To: <6FCE7872AA66C246990EC5623F91A014BBF85278@CDCEXMBX02.tw.trendnet.org> References: <6FCE7872AA66C246990EC5623F91A014BBF85278@CDCEXMBX02.tw.trendnet.org> Message-ID: On Jul 24, 2014, at 2:45 AM, Robert_Yang at trendmicro.com.cn wrote: > I want to extract the whole data to an eml file from SMTP traffic. The system event - file_new() - only saves every MIME entity of an email as a separate file instead of the whole email. This is not what I want. I'm going to assume you're saying that you want the entire SMTP data transaction. I don't actually know what Microsoft does for their eml format, but it sounds like you're just describing a full MIME transfer. Eventually I think things will be changing with the SMTP analyzer where the whole message is passed as a file and the MIME analyzer will be separated as a file analyzer (it's directly integrated into the smtp analyzer right now). This will make it possible to get the whole message if you want it, but you'll also be able to have Bro extract and analyze all of the mime entities separately too. > I print the size of every data chunk. The total of all the data sizes is always less than the actual size of the eml file (23137 bytes < 23831 bytes). So what am I missing? And how do I save the data to a file in the smtp_data event? Could you send along a trace file where you are having this problem? .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ -------------- next part -------------- A non-text attachment was scrubbed... 
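For a small byte-count mismatch like the 23137-vs-23831 comparison quoted above, one thing worth ruling out is line-ending conversion: SMTP puts CRLF at the end of every line on the wire, while data written out on a Unix system often ends up LF-only, costing one byte per line. A quick check with made-up data:

```python
# Hypothetical message body in SMTP wire format (CRLF line endings).
crlf_body = b"Subject: test\r\n\r\nhello\r\nworld\r\n"
lf_body = crlf_body.replace(b"\r\n", b"\n")

# The size difference equals the number of line endings converted.
print(len(crlf_body) - len(lf_body), crlf_body.count(b"\r\n"))
# -> 4 4
```

Applied to the thread's numbers, a gap of 23831 - 23137 = 694 bytes would be consistent with a 694-line message saved with converted line endings.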
Name: signature.asc Type: application/pgp-signature Size: 495 bytes Desc: Message signed with OpenPGP using GPGMail Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140724/20f1c5f5/attachment.bin From openjaf at gmail.com Thu Jul 24 06:49:42 2014 From: openjaf at gmail.com (James Feister) Date: Thu, 24 Jul 2014 09:49:42 -0400 Subject: [Bro] Signature framework questions, endianess and bitwise operations In-Reply-To: <2370AA84-DF99-408C-8CBB-D69DC38839C8@illinois.edu> References: <2370AA84-DF99-408C-8CBB-D69DC38839C8@illinois.edu> Message-ID: On Wed, Jul 23, 2014 at 4:42 PM, Siwek, Jon wrote: > > > Is it possible to perform bitwise opperations on payload bytes so that > you may perform checks against subsets of bits within the byte? > > > > For example I have to look at the first 4 bits of a bigendian defined > application layer protocol. For my test cases I can match signatures > against a known 8 bit little endian regex but not sure how to get to 4 bits > because the next 4 bits will change in an operational environment. > > Can character classes express what you want? > I think so, but it would mean I could match the first 4 bits but would then have to include all possible permutations for the next 4 bits with each of those desired first 4. Had hoped I could just generate a mask to grab the first four bits 0x0F, and then match against those. James -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140724/af6908b8/attachment.html From robin at icir.org Thu Jul 24 07:26:30 2014 From: robin at icir.org (Robin Sommer) Date: Thu, 24 Jul 2014 07:26:30 -0700 Subject: [Bro] Signature framework questions, endianess and bitwise operations In-Reply-To: References: <2370AA84-DF99-408C-8CBB-D69DC38839C8@illinois.edu> Message-ID: <20140724142630.GI62432@icir.org> On Thu, Jul 24, 2014 at 09:49 -0400, James Feister wrote: > Had hoped I could just generate a mask to grab the first four bits 0x0F, > and then match against those. No, masking is not supported for payload data, only for header fields. Robin -- Robin Sommer * Phone +1 (510) 722-6541 * robin at icir.org ICSI/LBNL * Fax +1 (510) 666-2956 * www.icir.org/robin From jsiwek at illinois.edu Thu Jul 24 08:16:56 2014 From: jsiwek at illinois.edu (Siwek, Jon) Date: Thu, 24 Jul 2014 15:16:56 +0000 Subject: [Bro] Signature framework questions, endianess and bitwise operations In-Reply-To: References: <2370AA84-DF99-408C-8CBB-D69DC38839C8@illinois.edu> Message-ID: On Jul 24, 2014, at 8:49 AM, James Feister wrote: > I think so, but it would mean I could match the first 4 bits but would then have to include all possible permutations for the next 4 bits with each of those desired first 4. > > Had hoped I could just generate a mask to grab the first four bits 0x0F, and then match against those. Yeah, the result isn't always concise and you may want to code/script something to auto-generate character classes for a given mask/value, but that's a way that's worked for some signatures I've done. 
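The suggestion above — auto-generating a character class for a given mask/value pair — can be sketched in a few lines of Python; the resulting class could then be pasted into a signature's payload regex (the nibble values here are illustrative):

```python
import re

def charclass_for_mask(mask: int, value: int) -> str:
    """Build a regex character class matching every byte b
    where (b & mask) == value."""
    matching = [b for b in range(256) if (b & mask) == value]
    # Hex escapes keep the class printable regardless of byte values.
    return "[" + "".join("\\x%02x" % b for b in matching) + "]"

# Example: require the high nibble to be 0x4, ignore the low nibble.
cc = charclass_for_mask(0xF0, 0x40)
print(re.match(cc.encode(), b"\x4a") is not None)  # True: 0x4a & 0xF0 == 0x40
print(re.match(cc.encode(), b"\x3a") is not None)  # False
```

When the mask selects a contiguous run of bits like a nibble, the result collapses to a range (here `[\x40-\x4f]` would be equivalent and shorter); the enumeration handles non-contiguous masks too.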
- Jon From openjaf at gmail.com Thu Jul 24 08:50:18 2014 From: openjaf at gmail.com (James Feister) Date: Thu, 24 Jul 2014 11:50:18 -0400 Subject: [Bro] Signature framework questions, endianess and bitwise operations In-Reply-To: References: <2370AA84-DF99-408C-8CBB-D69DC38839C8@illinois.edu> Message-ID: On Thu, Jul 24, 2014 at 11:16 AM, Siwek, Jon wrote: > On Jul 24, 2014, at 8:49 AM, James Feister wrote: > >> I think so, but it would mean I could match the first 4 bits but would then have to include all possible permutations for the next 4 bits with each of those desired first 4. >> >> Had hoped I could just generate a mask to grab the first four bits 0x0F, and then match against those. > >Yeah, the result isn't always concise and you may want to code/script something to auto-generate character classes for a given mask/value, but that's a way that's worked for some signatures I've done. I will do that then. As an alternative, I wanted to look at every stream (tcp) and packet (udp) and then do the match in my analyzer code. But the site documentation only references DPM.cc to perform this hooking, which I can only find in the 2.1 code base, not 2.2 or 2.3. Which of the analyzers in the 2.3 release could I use as a reference? -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140724/8649625a/attachment.html From gfaulkner.nsm at gmail.com Thu Jul 24 11:51:05 2014 From: gfaulkner.nsm at gmail.com (Gary Faulkner) Date: Thu, 24 Jul 2014 13:51:05 -0500 Subject: [Bro] unmatched_HTTP_reply in weird.log Message-ID: <53D15599.2080508@gmail.com> Hello, Recently my Bro cluster started producing a lot of unmatched_HTTP_reply messages in weird.log and seemed to also stop logging outbound GET requests in http.log. 
I did some testing by following both Bro logs as I browsed to various websites and it looks like every time I visit a new site, the initial GET request doesn't get logged and a weird is generated. As such I'm wondering if this may be an indication that Bro is only seeing half the conversation? I can trace the change in logging behavior to a specific day, but I can't find any indication that there were any changes locally that would have stopped Bro from seeing any particular traffic. Any thoughts? Am I interpreting the logs correctly? Regards, Gary From JAzoff at albany.edu Thu Jul 24 12:09:17 2014 From: JAzoff at albany.edu (Justin Azoff) Date: Thu, 24 Jul 2014 15:09:17 -0400 Subject: [Bro] unmatched_HTTP_reply in weird.log In-Reply-To: <53D15599.2080508@gmail.com> References: <53D15599.2080508@gmail.com> Message-ID: <20140724190917.GC10456@datacomm.albany.edu> On Thu, Jul 24, 2014 at 01:51:05PM -0500, Gary Faulkner wrote: > Hello, > > Recently my Bro cluster started producing a lot of unmatched_HTTP_reply > messages in weird.log and seemed to also stop logging outbound GET > requests in http.log. I did some testing by following both Bro logs as I > browsed to various websites and it looks like every time I visit a new > site, the initial GET request doesn't get logged and a weird is > generated. As such I'm wondering if this may be an indication that Bro > is only seeing half the conversation? I can trace the change in logging > behavior to a specific day, but I can't find any indication that there > were any changes locally that would have stopped Bro from seeing any > particular traffic. Any thoughts? Am I interpreting the logs correctly? > > Regards, > Gary Most likely this is a problem upstream from Bro. To rule out Bro as the problem, do something like: tcpdump -nn -i eth1 host your.ip.address and port 80 If you only see one side of the conversation in tcpdump as well, that will rule Bro out as the problem. 
-- -- Justin Azoff From jlay at slave-tothe-box.net Thu Jul 24 12:59:31 2014 From: jlay at slave-tothe-box.net (James Lay) Date: Thu, 24 Jul 2014 13:59:31 -0600 Subject: [Bro] Couple elasticsearch questions In-Reply-To: <56A66A01-D8E6-4F78-9F1F-08C51EA8C723@icir.org> References: <864895E3-1ED5-47BF-997E-197B89A26A42@icir.org> <76A28E50-BB11-4828-BEEC-84BAD394FAF0@icir.org> <5a0053fea85892c5e080015871b041d9@localhost> <56A66A01-D8E6-4F78-9F1F-08C51EA8C723@icir.org> Message-ID: <1f1146ae2b9f3069ed3079728a9cb22d@localhost> On 2014-07-23 10:31, Seth Hall wrote: > On Jul 23, 2014, at 12:15 PM, James Lay > wrote: > >> Negative. In order to get Logstash/Kibana to identify fields, the >> grok >> patterns are what is used. I guess that's the question for >> me....does >> Bro dump the data raw into elasticsearch? > > Bro will write the logs directly into elasticsearch (with the fields > separated and named correctly). You don't need logstash at all. The > only difference is that in your kibana config, you'll need to make it > use slightly different index names. I'm hoping that this is > something > we'll have more guidance on at some point. I definitely recognize > that more cleanup needs to be done to this code to make it more > resilient > and make it easier to get to an end-result. > > .Seth > > -- > Seth Hall > International Computer Science Institute > (Bro) because everyone has a network > http://www.bro.org/ Confirming that this works like a champ. My testing here is using Logstash with its built-in Kibana, and a separate instance of Elasticsearch since there's more going in than just Bro. In fact the whole idea is to tie in bro, snort, and syslogs. With bro going direct to elasticsearch, there's nothing to really configure, save just to make sure your Kibana index is set to _all. Kibana also allows you to tweak the timestamp so the original unix time, after tweaking, shows up as 2014-07-24T12:16:05.795-06:00 for example. 
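The timestamp "tweak" described above is just the Bro epoch value rendered in a local timezone; it can be checked by hand in Python (UTC-6 assumed, matching the example output):

```python
from datetime import datetime, timezone, timedelta

# Bro logs timestamps as Unix epoch seconds; Kibana displays them
# in a local timezone. The same conversion, done by hand:
ts = 1406225765.795
local = timezone(timedelta(hours=-6))
print(datetime.fromtimestamp(ts, tz=local).isoformat())
# -> 2014-07-24T12:16:05.795000-06:00
```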
My next step will be to get snort and firewall logs in....ironically, the Bro portion has been the easiest :) Thanks for the work on this! James From jsiwek at illinois.edu Thu Jul 24 13:00:15 2014 From: jsiwek at illinois.edu (Siwek, Jon) Date: Thu, 24 Jul 2014 20:00:15 +0000 Subject: [Bro] Signature framework questions, endianess and bitwise operations In-Reply-To: References: <2370AA84-DF99-408C-8CBB-D69DC38839C8@illinois.edu> Message-ID: <902ED21E-0D2A-47D7-878A-890EE2556117@illinois.edu> On Jul 24, 2014, at 10:50 AM, James Feister wrote: > As an alternative I wanted to look at every stream (tcp) and packet (udp) then do the match in my analyzer code. But site documentation only references DPM.cc to perform this hooking which I can only find in the 2.1 code base not 2.2 or 2.3. Which of the analyzers in the 2.3 release could I use as a reference? analyzer::Manager::BuildInitialAnalyzerTree() is what the documentation should say for newer versions. Another way you might be able to do what you want without changing source code directly is to make a payload regex that matches everything and enables the analyzer you are writing. - Jon From openjaf at gmail.com Thu Jul 24 15:03:51 2014 From: openjaf at gmail.com (James Feister) Date: Thu, 24 Jul 2014 18:03:51 -0400 Subject: [Bro] Signature framework questions, endianess and bitwise operations In-Reply-To: <902ED21E-0D2A-47D7-878A-890EE2556117@illinois.edu> References: <2370AA84-DF99-408C-8CBB-D69DC38839C8@illinois.edu> <902ED21E-0D2A-47D7-878A-890EE2556117@illinois.edu> Message-ID: On Thu, Jul 24, 2014 at 4:00 PM, Siwek, Jon wrote: > analyzer::Manager::BuildInitialAnalyzerTree() is what the documentation should say for newer versions. Another way you might be able to do what you want without changing source code directly is to make a payload regex that matches everything and enables the analyzer you are writing. Thanks for the guidance. Will give that a go. 
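The match-everything approach suggested above would look roughly like this as a signature in a .sig file; the analyzer name "myproto" is a placeholder and must match whatever name your analyzer plugin registers:

```
signature dpd_myproto {
  ip-proto == udp
  payload /.*/
  enable "myproto"
}
```

Since the payload regex matches any byte sequence, the signature fires on the first payload of every UDP flow and hands the whole stream to the named analyzer, which can then do its own bit-level checks.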
Jim -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140724/ccd21c2f/attachment.html From Robert_Yang at trendmicro.com.cn Thu Jul 24 18:17:41 2014 From: Robert_Yang at trendmicro.com.cn (Robert_Yang at trendmicro.com.cn) Date: Fri, 25 Jul 2014 01:17:41 +0000 Subject: [Bro] How to extract data to a eml file from smtp traffic In-Reply-To: References: <6FCE7872AA66C246990EC5623F91A014BBF85278@CDCEXMBX02.tw.trendnet.org> Message-ID: <6FCE7872AA66C246990EC5623F91A014BBF86643@CDCEXMBX02.tw.trendnet.org> Hi Seth, Thanks for your rapid reply! Actually, I do want to catch the whole message, as you mentioned. In my environment, I send an eml file as the payload of the DATA command, then catch it with Bro and compare it with the original eml file. You mentioned that "Eventually I think things will be changing with the SMTP analyzer where the whole message is passed as a file", so I tried catching the data in the smtp_data event in files.bro, and I can indeed recover the original mail content. About the data size, I double-checked my data and found the root cause. The original eml file is 23831 bytes in Windows EOL (CRLF) format. The captured data is saved in UNIX EOL (LF) format, so it is a little smaller. After fixing this, the captured data matches the original eml file. Bro is very great! Robert Yang -----Original Message----- From: Seth Hall [mailto:seth at icir.org] Sent: 2014-07-24 21:41 To: Robert Yang (RD-CN) Cc: bro at bro.org Subject: Re: [Bro] How to extract data to a eml file from smtp traffic On Jul 24, 2014, at 2:45 AM, Robert_Yang at trendmicro.com.cn wrote: > I want to extract the whole data to an eml file from SMTP traffic. The system event - file_new() - only saves every MIME entity of an email as a separate file instead of the whole email. This is not what I want. I'm going to assume you're saying that you want the entire SMTP data transaction. 
I don't actually know what Microsoft does for their eml format, but it sounds like you're just describing a full MIME transfer. Eventually I think things will be changing with the SMTP analyzer where the whole message is passed as a file and the MIME analyzer will be separated as a file analyzer (it's directly integrated into the smtp analyzer right now). This will make it possible to get the whole message if you want it, but you'll also be able to have Bro extract and analyze all of the mime entities separately too. > I print the size of every data chunk. The total of all the data sizes is always less than the actual size of the eml file (23137 bytes < 23831 bytes). So what am I missing? And how do I save the data to a file in the smtp_data event? Could you send along a trace file where you are having this problem? .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ From jxbatchelor at gmail.com Fri Jul 25 07:03:40 2014 From: jxbatchelor at gmail.com (Jason Batchelor) Date: Fri, 25 Jul 2014 09:03:40 -0500 Subject: [Bro] Bro + Yara File Scanning Module? Message-ID: Hello all: I wanted to poke the hive mind to see if anyone has considered, or is actively pursuing, integrating Yara into a Bro script? An idea for a script I would like to write is to simply take any file from a 'file_new' event. 
Then add something like Files::ANALYZER_YARA that would do the heavy lifting and take a user-defined path to a master Yara file, scan the file, append the results to either files.log or notice.log, and finally extract any file that hit on a signature (for further analysis). Interested if this is something that has been considered previously? If so, what were the results? If not, I'm happy to spin off an effort of my own. Either way I see it as a good project to get into Bro scripting at a deeper level. Thanks, Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140725/4321f9f8/attachment.html From John_Lankau at sra.com Fri Jul 25 07:20:01 2014 From: John_Lankau at sra.com (Lankau, John) Date: Fri, 25 Jul 2014 14:20:01 +0000 Subject: [Bro] Bro + Yara File Scanning Module? In-Reply-To: References: Message-ID: <0464E9BF13BEE74EA662DE35911D64232B9BA2D8@SRAexMBX03.sra.com> Jason, I would be curious to hear more about this as well. I don't know if it already exists, but we are considering functionality here very similar to what you've described. We were considering moving the extracted files to another system for Yara scanning, but integrating it within Bro might be a more efficient process. Thanks, John From: bro-bounces at bro.org [mailto:bro-bounces at bro.org] On Behalf Of Jason Batchelor Sent: Friday, July 25, 2014 10:04 AM To: bro at bro.org Subject: [Bro] Bro + Yara File Scanning Module? Hello all: I wanted to poke the hive mind to see if anyone has considered, or is actively pursuing integrating Yara into a Bro script? An idea for a script I would like to write is to simply take any file from a 'file_new' event. 
Then add something like Files::ANALYZER_YARA that would do the heavy lifting and take a user-defined path to a master Yara file, scan the file, append the results to either files.log or notice.log, and finally extract any file that hit on a signature (for further analysis). Interested if this is something that has been considered previously? If so, what were the results? If not, I'm happy to spin off an effort of my own. Either way I see it as a good project to get into Bro scripting at a deeper level. Thanks, Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140725/03157611/attachment.html From luke at geekempire.com Fri Jul 25 07:38:33 2014 From: luke at geekempire.com (Mike Reeves) Date: Fri, 25 Jul 2014 10:38:33 -0400 Subject: [Bro] Bro + Yara File Scanning Module? In-Reply-To: References: Message-ID: The process I use is: all of the files are written to a directory, then a Python script monitors that directory for new files. It uses a Redis keystore and checks the sha256 of each file. If it exists in the keystore, it simply deletes the file and moves on. If it does not exist, it adds it to the keystore and then moves it somewhere else. This could be Yara or whatever. I will see if I can dig it up, but it was rather simple Python. I did this because I didn't want to tie up Bro, especially if you are seeing high file volume. Mike On Jul 25, 2014, at 10:03 AM, Jason Batchelor wrote: > Hello all: > > I wanted to poke the hive mind to see if anyone has considered, or is actively pursuing integrating Yara into a Bro script? > > An idea for a script I would like to write is to simply take any file from a 'file_new' event. 
Then add something like Files::ANALYZER_YARA that would do the heavy lifting and take a user defined path to a master Yara file, scan the file, append the results to either files.log or notice.log, and finally, extract any file that hit on a signature (for further analysis). > > Interested if this is something that has been considered previously? If so, what were the results? If not, I'm happy to spin off an effort of my own. Either way I see it as a good project to get into Bro scripting at a deeper level. > > Thanks, > Jason > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From seth at icir.org Fri Jul 25 07:54:27 2014 From: seth at icir.org (Seth Hall) Date: Fri, 25 Jul 2014 10:54:27 -0400 Subject: [Bro] Bro + Yara File Scanning Module? In-Reply-To: References: Message-ID: <967CE801-1149-4A3A-998F-EB7A3E006381@icir.org> On Jul 25, 2014, at 10:03 AM, Jason Batchelor wrote: > Interested if this is something that has been considered previously? If so, what were the results? If not, I'm happy to spin off an effort of my own. Either way I see it as a good project to get into Bro scripting at a deeper level. I was working on this a while ago and got it working. :) Unfortunately it required some changes to Yara itself to add an incremental analysis API which I need to update because the Yara developers have been making changes in the areas that I had to make changes. I've been thinking of coming back around to that code to get it cleaned up and contributed back to the Yara developers so that we could easily have a Yara analyzer in Bro. .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 495 bytes Desc: Message signed with OpenPGP using GPGMail Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140725/dee35d88/attachment.bin From grutz at jingojango.net Fri Jul 25 08:01:13 2014 From: grutz at jingojango.net (Kurt Grutzmacher) Date: Fri, 25 Jul 2014 08:01:13 -0700 Subject: [Bro] Bro + Yara File Scanning Module? In-Reply-To: References: Message-ID: These solutions are very awesome and mirror the path we are taking at Cisco with OpenSOC to scale up and out. I'll be speaking a bit more deeply about our plans at BroCon in a few weeks but the theories are very similar: gather telemetry data (bro logs), gather intelligence data (yara results, threat intel lists, etc), inspect (storm, python scripts, etc). For this specific instance we queue the logs through kafka to enter our storm topology and plan to throw the files into hdfs for retention/deeper analysis. -- Kurt Grutzmacher -=- grutz at jingojango.net On Fri, Jul 25, 2014 at 7:38 AM, Mike Reeves wrote: > The process I use is I have all of the files being written to a directory > then a python script monitors that for new files. It uses a Redis keystore > and checks the sha256 of the file. If it exists in the keystore it simply > deletes the file and moves on. If it does not exist it adds it to the > keystore and then moves it somewhere else. This could be Yara or whatever. > I will see if I can dig it up but it was rather simple python. I did this > because I didn't want to tie up Bro especially if you are seeing high file > volume. > > Mike > > On Jul 25, 2014, at 10:03 AM, Jason Batchelor > wrote: > > > Hello all: > > > > I wanted to poke the hive mind to see if anyone has considered, or is > actively pursuing integrating Yara into a Bro script? > > > > An idea for a script I would like to write is to simply take any file > from a 'file_new' event. 
Then add something like Files::ANALYZER_YARA that > would do the heavy lifting and take a user defined path to a master Yara > file, scan the file, append the results to either files.log or notice.log, > and finally, extract any file that hit on a signature (for further > analysis). > > > > Interested if this is something that has been considered previously? If > so, what were the results? If not, I'm happy to spin off an effort of my > own. Either way I see it as a good project to get into Bro scripting at a > deeper level. > > > > Thanks, > > Jason > > _______________________________________________ > > Bro mailing list > > bro at bro-ids.org > > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140725/53059d19/attachment.html From seth at icir.org Fri Jul 25 08:09:03 2014 From: seth at icir.org (Seth Hall) Date: Fri, 25 Jul 2014 11:09:03 -0400 Subject: [Bro] Bro + Yara File Scanning Module? In-Reply-To: <967CE801-1149-4A3A-998F-EB7A3E006381@icir.org> References: <967CE801-1149-4A3A-998F-EB7A3E006381@icir.org> Message-ID: <89642C3C-C1B0-4A45-9E51-ADD20679C5C6@icir.org> On Jul 25, 2014, at 10:54 AM, Seth Hall wrote: > On Jul 25, 2014, at 10:03 AM, Jason Batchelor wrote: > >> Interested if this is something that has been considered previously? If so, what were the results? If not, I'm happy to spin off an effort of my own. Either way I see it as a good project to get into Bro scripting at a deeper level. > > I was working on this a while ago and got it working. :) I forgot to mention one more point. 
It was pretty slow because of the internal architecture of Yara and I had started reworking a bit of Yara to fix the problem (compiling rule sets is slow and they mix rule match state with the compiled rule structure so you can't match multiple files concurrently with the same compiled rule set). Is there anyone out there interested in taking on this rework and pushing it to completion? (you need to know C) .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 495 bytes Desc: Message signed with OpenPGP using GPGMail Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140725/cb473283/attachment.bin From jxbatchelor at gmail.com Fri Jul 25 10:42:58 2014 From: jxbatchelor at gmail.com (Jason Batchelor) Date: Fri, 25 Jul 2014 12:42:58 -0500 Subject: [Bro] Bro + Yara File Scanning Module? In-Reply-To: <89642C3C-C1B0-4A45-9E51-ADD20679C5C6@icir.org> References: <967CE801-1149-4A3A-998F-EB7A3E006381@icir.org> <89642C3C-C1B0-4A45-9E51-ADD20679C5C6@icir.org> Message-ID: Out of curiosity, were you working with Yara 2.0 when you were developing? It is several orders of magnitude faster than previous versions. To your question, I would be interested in this effort but before diving in would like some time to familiarize myself more with Bro development. I will be at this years BroCon in pursuit of that goal and would welcome further collaboration toward this end :) Ideally, what I would love to see is a way to take actions on alerts generated by some kind of 'Files::ANALYZER_YARA'. So say if I have a ZIP file for example and a Yara rule to detect a ZIP. I think it would be very valuable for someone to not only just trigger on that, but then invoke an event that decompresses the ZIP and feeds the contents through the same scanning engine. 
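The unpack-and-rescan loop described here can be sketched in a few lines of Python; the `scan()` function is only a toy stand-in for a real compiled Yara rule set, with `zipfile` doing the decompression:

```python
import io
import zipfile

def scan(data: bytes):
    """Toy matcher standing in for Yara: flag PE and ZIP magic bytes."""
    hits = []
    if data[:2] == b"MZ":
        hits.append("dos_executable")
    if data[:2] == b"PK":
        hits.append("zip_container")
    return hits

def scan_recursive(data: bytes, depth=0, max_depth=3):
    """Scan a blob; on a ZIP hit, unpack it and feed every member back in."""
    results = [(depth, hit) for hit in scan(data)]
    if depth < max_depth and any(hit == "zip_container" for _, hit in results):
        with zipfile.ZipFile(io.BytesIO(data)) as zf:
            for member in zf.namelist():
                results.extend(scan_recursive(zf.read(member), depth + 1, max_depth))
    return results
```

Swapping `scan()` for a rule that recognizes a known crypter, and the `zipfile` step for the matching unmasking routine, gives the same recursive chain for the shellcode/dropper case.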
Now replace ZIP files with a known crypter/obfuscation or something else and you can perhaps start to see the power and possibilities that begin to unfold :) Full disclosure time...: I am a malware reverse engineer by trade. When I RE a binary I can tell my customers (the analysts) a lot about it; however, their ability to take action on the intelligence I give them is oftentimes limited by their capabilities / security posture as an organization. Enter Bro: with a modular framework, I look to this as a means to turn the observables I gain from my RE efforts into more valuable, actionable intelligence for my team. By implementing this modular 'take action on X' mentality with respect to Bro and Yara, my signatures get more mileage, as well as my observables on how certain crypters/encodings can be defeated. Imagine this: I have a signature for shellcode that decrypts a PE in a certain way, always at a certain offset. My Yara rule hits on this signature and triggers an event that unmasks the binary as well; out pops the dropper, which is scanned again, and hits on the signature I created for the dropper, etc., etc. So I've automated analysis that is usually done by someone more experienced on the command line. Not only that, but now the analyst knows more about what they are dealing with, which directly informs IR/Intel efforts. Hope that helps paint the picture a little more :) - Jason On Fri, Jul 25, 2014 at 10:09 AM, Seth Hall wrote: > > On Jul 25, 2014, at 10:54 AM, Seth Hall wrote: > > > On Jul 25, 2014, at 10:03 AM, Jason Batchelor > wrote: > > > >> Interested if this is something that has been considered previously? If > so, what were the results? If not, I'm happy to spin off an effort of my > own. Either way I see it as a good project to get into Bro scripting at a > deeper level. > > > > I was working on this a while ago and got it working. :) > > I forgot to mention one more point. 
It was pretty slow because of the > internal architecture of Yara and I had started reworking a bit of Yara to > fix the problem (compiling rule sets is slow and they mix rule match state > with the compiled rule structure so you can't match multiple files > concurrently with the same compiled rule set). > > Is there anyone out there interested in taking on this rework and pushing > it to completion? (you need to know C) > > .Seth > > -- > Seth Hall > International Computer Science Institute > (Bro) because everyone has a network > http://www.bro.org/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140725/cab9073a/attachment.html From seth at icir.org Fri Jul 25 11:45:04 2014 From: seth at icir.org (Seth Hall) Date: Fri, 25 Jul 2014 14:45:04 -0400 Subject: [Bro] Bro + Yara File Scanning Module? In-Reply-To: References: <967CE801-1149-4A3A-998F-EB7A3E006381@icir.org> <89642C3C-C1B0-4A45-9E51-ADD20679C5C6@icir.org> Message-ID: <213E6E65-7EA8-417B-8AC5-2E86D52F933F@icir.org> On Jul 25, 2014, at 1:42 PM, Jason Batchelor wrote: > Out of curiosity, were you working with Yara 2.0 when you were developing? It is several orders of magnitude faster than previous versions. I was working on it during the lead up to the 2.0 code so my work was developed around the changes they made. > To your question, I would be interested in this effort but before diving in would like some time to familiarize myself more with Bro development. I will be at this years BroCon in pursuit of that goal and would welcome further collaboration toward this end :) Once an incremental analysis api is added to Yara and Yara's match state and compiled rules are separated, the Bro module is really simple (and it's already been written somewhere...). > Ideally, what I would love to see is a way to take actions on alerts generated by some kind of 'Files::ANALYZER_YARA'. 
So say if I have a ZIP file for example and a Yara rule to detect a ZIP. I think it would be very valuable for someone to not only just trigger on that, but then invoke an event that decompresses the ZIP and feeds the contents through the same scanning engine. Now replace ZIP files with a known crypter/obfuscation or something else and you can perhaps start to see the power and possibilities that begin to unfold :) It's a bit more complicated than that unfortunately. :) Everything in Bro is organized around incremental analysis. If you have a yara rule fire you can't go back and look at the old data, it's gone already. You'd need to write Bro scripts that extract files temporarily and then possibly re-analyze them with new information. > By implementing this modular 'take action on X' mentality with respect to Bro and Yara, my signatures get more mileage, I agree there, but there are some questions left lingering. We aren't really sure if you'll be able to run large rule sets against all files and just how much help they will be. > Imagine this, I have a signature for shellcode that decrypts a PE in a certain way always at a certain offset. My Yara rule hits on this signature and triggers an event that unmasks the binary as well, out pops the dropper, that is scanned again, and hits on the signature I created for the dropper, etc, etc.. This is one of those areas where the file would need to be extracted and re-analyzed. > Hope that helps paint the picture a little more :) Yes! I'm just excited that someone that doesn't primarily look at network traffic is playing with Bro, or at least looking into it. :) .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 495 bytes Desc: Message signed with OpenPGP using GPGMail Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140725/4263098c/attachment.bin From anthony.kasza at gmail.com Fri Jul 25 12:08:16 2014 From: anthony.kasza at gmail.com (anthony kasza) Date: Fri, 25 Jul 2014 12:08:16 -0700 Subject: [Bro] Bro + Yara File Scanning Module? In-Reply-To: <213E6E65-7EA8-417B-8AC5-2E86D52F933F@icir.org> References: <967CE801-1149-4A3A-998F-EB7A3E006381@icir.org> <89642C3C-C1B0-4A45-9E51-ADD20679C5C6@icir.org> <213E6E65-7EA8-417B-8AC5-2E86D52F933F@icir.org> Message-ID: It probably isn't what you're looking for, but I tried making something similar to Yara a little while back. It is a hack on top of the Intel framework. https://github.com/anthonykasza/scratch_pad/tree/master/rules On Jul 25, 2014 11:48 AM, "Seth Hall" wrote: > > On Jul 25, 2014, at 1:42 PM, Jason Batchelor > wrote: > > > Out of curiosity, were you working with Yara 2.0 when you were > developing? It is several orders of magnitude faster than previous versions. > > I was working on it during the lead up to the 2.0 code so my work was > developed around the changes they made. > > > To your question, I would be interested in this effort but before > diving in would like some time to familiarize myself more with Bro > development. I will be at this years BroCon in pursuit of that goal and > would welcome further collaboration toward this end :) > > Once an incremental analysis api is added to Yara and Yara's match state > and compiled rules are separated, the Bro module is really simple (and it's > already been written somewhere...). > > > Ideally, what I would love to see is a way to take actions on alerts > generated by some kind of 'Files::ANALYZER_YARA'. So say if I have a ZIP > file for example and a Yara rule to detect a ZIP. 
I think it would be very > valuable for someone to not only just trigger on that, but then invoke an > event that decompresses the ZIP and feeds the contents through the same > scanning engine. Now replace ZIP files with a known crypter/obfuscation or > something else and you can perhaps start to see the power and possibilities > that begin to unfold :) > > It's a bit more complicated than that unfortunately. :) > > Everything in Bro is organized around incremental analysis. If you have a > yara rule fire you can't go back and look at the old data, it's gone > already. You'd need to write Bro scripts that extract files temporarily > and then possibly re-analyze them with new information. > > > By implementing this modular 'take action on X' mentality with respect > to Bro and Yara, my signatures get more milage, > > I agree there, but there are some questions left lingering. We aren't > really sure if you'll be able to run large rule sets again all files and > just how much help they will be. > > > Imagine this, I have a signature for shellcode that decrypts a PE in a > certain way always at a certain offset. My Yara rule hits on this signature > and triggers an event that unmaskes the binary as well, out pops the > dropper, that is scanned again, and hits on the signature I created for the > dropper, etc, etc.. > > This is one of those areas where the file would need to be extracted and > re-analyzed. > > > Hope that helps paint the picture a little more :) > > Yes! I'm just excited that someone that doesn't primarily look at network > traffic is playing with Bro, or at least looking into it. :) > > .Seth > > -- > Seth Hall > International Computer Science Institute > (Bro) because everyone has a network > http://www.bro.org/ > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140725/236b3bf4/attachment.html From jlay at slave-tothe-box.net Fri Jul 25 16:42:15 2014 From: jlay at slave-tothe-box.net (James Lay) Date: Fri, 25 Jul 2014 17:42:15 -0600 Subject: [Bro] Identifying interface when running with multiple interfaces Message-ID: <1406331735.2667.12.camel@JamesiMac> Hey all, So I run bro with: /usr/local/bin/bro --no-checksums -i eth0 -i ppp0 local "Site::local_nets += { x.x.x.x/32,192.168.1.0/24 }" & Is there something I can do to add a field that would let me know which interface the traffic came in on? Obviously in this example it's pretty simple...private IP space will be on eth0 whereas public will be on ppp0. I am thinking of scenarios where there might be the same IP space on several interfaces. Thanks for any guidance. James From seth at icir.org Fri Jul 25 21:32:27 2014 From: seth at icir.org (Seth Hall) Date: Sat, 26 Jul 2014 00:32:27 -0400 Subject: [Bro] Identifying interface when running with multiple interfaces In-Reply-To: <1406331735.2667.12.camel@JamesiMac> References: <1406331735.2667.12.camel@JamesiMac> Message-ID: On Jul 25, 2014, at 7:42 PM, James Lay wrote: > /usr/local/bin/bro --no-checksums -i eth0 -i ppp0 local > "Site::local_nets += { x.x.x.x/32,192.168.1.0/24 }" & > > Is there something I can do to add a field that would let me know which > interface the traffic came in on? Nope, sorry. I would recommend running this as a cluster with two workers. One sniffing each interface. This is how SecurityOnion approaches this issue. .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 495 bytes Desc: Message signed with OpenPGP using GPGMail Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140726/f53a9c26/attachment.bin From jlay at slave-tothe-box.net Sat Jul 26 05:37:02 2014 From: jlay at slave-tothe-box.net (James Lay) Date: Sat, 26 Jul 2014 06:37:02 -0600 Subject: [Bro] Identifying interface when running with multiple interfaces In-Reply-To: References: <1406331735.2667.12.camel@JamesiMac> Message-ID: <1406378222.2521.1.camel@JamesiMac> On Sat, 2014-07-26 at 00:32 -0400, Seth Hall wrote: > On Jul 25, 2014, at 7:42 PM, James Lay wrote: > > > /usr/local/bin/bro --no-checksums -i eth0 -i ppp0 local > > "Site::local_nets += { x.x.x.x/32,192.168.1.0/24 }" & > > > > Is there something I can do to add a field that would let me know which > > interface the traffic came in on? > > Nope, sorry. I would recommend running this as a cluster with two workers. One sniffing each interface. This is how SecurityOnion approaches this issue. > > .Seth > > -- > Seth Hall > International Computer Science Institute > (Bro) because everyone has a network > http://www.bro.org/ > Thanks Seth...does clustering require using broctl? James From mkolkebeck at gmail.com Mon Jul 28 14:56:19 2014 From: mkolkebeck at gmail.com (Mike Kolkebeck) Date: Mon, 28 Jul 2014 16:56:19 -0500 Subject: [Bro] File extraction filters Message-ID: <1EC46D25-1EB3-4C97-B6A1-DA2964836CFE@gmail.com> I have two questions on the file extraction framework: 1) If I only want to capture files from a specific worker or ip ranges, what is the best/simplest way to ensure that this happens? -I've tried using f$info$tx_hosts with event file_new, but this seems inconsistently populated, and using f$conns with event file_new seems consistent, but I don't know if it's the best/simplest way. 
2) If missing_bytes > 0, what is the best/simplest way to remove the file (and possibly clear it from logging a successful extract in the files.log file)? -I've tested using event file_state_remove, and I can use system to rm the file, but again I'm not sure this is the best/simplest way, and the files.log continues to show this as extracted. From jsiwek at illinois.edu Tue Jul 29 07:48:34 2014 From: jsiwek at illinois.edu (Siwek, Jon) Date: Tue, 29 Jul 2014 14:48:34 +0000 Subject: [Bro] File extraction filters In-Reply-To: <1EC46D25-1EB3-4C97-B6A1-DA2964836CFE@gmail.com> References: <1EC46D25-1EB3-4C97-B6A1-DA2964836CFE@gmail.com> Message-ID: <9C5B3759-374A-4246-A0B0-FD8954BA3999@illinois.edu> On Jul 28, 2014, at 4:56 PM, Mike Kolkebeck wrote: > I have two questions on the file extraction framework: > > 1) If I only want to capture files from a specific worker or ip ranges, what is the best/simplest way to ensure that this happens? > -I've tried using f$info$tx_hosts with event file_new, but this seems inconsistently populated, and using f$conns with event file_new seems consistent, but I don't know if it's the best/simplest way. In either case, I'd probably try using 'file_over_new_connection' instead of 'file_new'; it might end up not mattering for your use, but the fields you're inspecting are more closely associated with the former event. A given file can technically be transferred over many different connections, depending on the protocol involved, so using 'file_new' may not always give the full story since that's only ever raised once for a given file. Using f$info${tx,rx}_hosts may be better if transfer direction is important, otherwise f$conns should be fine. > 2) If missing_bytes > 0, what is the best/simplest way to remove the file (and possibly clear it from logging a successful extract in the files.log file)? 
> -I've tested using event file_state_remove, and I can use system to rm the file, but again I'm not sure this is the best/simplest way, and the files.log continues to show this as extracted. There's the 'file_gap' event that you might want to handle, call 'Files::remove_analyzer', then use a system call to rm the file, and finally 'delete f$info$extracted;' to unset the field and prevent it from being logged in files.log. - Jon From jlay at slave-tothe-box.net Tue Jul 29 08:26:21 2014 From: jlay at slave-tothe-box.net (James Lay) Date: Tue, 29 Jul 2014 09:26:21 -0600 Subject: [Bro] A question on barnyard2 integration Message-ID: <8a19224f856c1509c71de16a57e517d0@localhost> Ok actually two questions: 1) I'm not able to get this to load with either: @policy/integration/barnyard2 @integration/barnyard2 And from barnyard2 docs: alert_bro ---------------------------------------------------------------------------- Purpose: Send alerts to a Bro-IDS instance. Arguments: hostname:port Examples: output alert_bro: 127.0.0.1:47757 How do I set the port that bro listens to? Thank you. James From mkolkebeck at gmail.com Tue Jul 29 08:38:05 2014 From: mkolkebeck at gmail.com (Mike Kolkebeck) Date: Tue, 29 Jul 2014 10:38:05 -0500 Subject: [Bro] File extraction filters In-Reply-To: <9C5B3759-374A-4246-A0B0-FD8954BA3999@illinois.edu> References: <1EC46D25-1EB3-4C97-B6A1-DA2964836CFE@gmail.com> <9C5B3759-374A-4246-A0B0-FD8954BA3999@illinois.edu> Message-ID: Does "file_over_new_connection" fire at the same time as "file_new" when there is a new file? More specifically, will I ever lose any bytes by using this event over "file_new"? > On Jul 29, 2014, at 9:48 AM, "Siwek, Jon" wrote: > > >> On Jul 28, 2014, at 4:56 PM, Mike Kolkebeck wrote: >> >> I have two questions on the file extraction framework: >> >> 1) If I only want to capture files from a specific worker or ip ranges, what is the best/simplest way to ensure that this happens? 
>> -I've tried using f$info$tx_hosts with event file_new, but this seems inconsistently populated, and using f$conns with event file_new seems consistent, but I don't know if it's the best/simplest way. > > In either case, I'd probably try using 'file_over_new_connection' instead of 'file_new'; it might end up not mattering for your use, but the fields you're inspecting are more closely associated with the former event. A given file can technically be transferred over many different connections, depending on the protocol involved, so using 'file_new' may not always give the full story since that's only ever raised once for a given file. > > Using f$info${tx,rx}_hosts may be better if transfer direction is important, otherwise f$conns should be fine. > >> 2) If missing_bytes > 0, what is the best/simplest way to remove the file (and possibly clear it from logging a successful extract in the files.log file)? >> -I've tested using event file_state_remove, and I can use system to rm the file, but again I'm not sure this is the best/simplest way, and the files.log continues to show this as extracted. > > There's the 'file_gap' event that you might want to handle, call 'Files::remove_analyzer', then use a system call to rm the file, and finally 'delete f$info$extracted;' to unset the field and prevent it from being logged in files.log. 
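An out-of-band alternative to handling this inside Bro is to sweep the extraction directory against files.log after the fact. A minimal sketch, assuming Bro's tab-separated ASCII log format whose '#fields' line names the columns (the 'extracted' and 'missing_bytes' names follow the Files::Info record):

```python
import os

def sweep_incomplete(files_log, extract_dir):
    """Delete extracted files whose files.log entry shows missing bytes."""
    removed = []
    fields = []
    with open(files_log) as fh:
        for line in fh:
            line = line.rstrip("\n")
            if line.startswith("#fields"):
                fields = line.split("\t")[1:]  # column names follow the tag
                continue
            if line.startswith("#") or not fields or not line:
                continue
            row = dict(zip(fields, line.split("\t")))
            # '-' is the unset marker in Bro's ASCII logs
            if row.get("extracted", "-") == "-":
                continue
            missing = row.get("missing_bytes", "-")
            if missing == "-" or int(missing) <= 0:
                continue
            path = os.path.join(extract_dir, row["extracted"])
            if os.path.exists(path):
                os.remove(path)
                removed.append(row["extracted"])
    return removed
```

This cannot unset the extracted field in an already-written log entry, so the in-script file_gap approach remains the cleaner option when files.log itself should reflect the cleanup.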
> > - Jon From jlay at slave-tothe-box.net Tue Jul 29 09:14:50 2014 From: jlay at slave-tothe-box.net (James Lay) Date: Tue, 29 Jul 2014 10:14:50 -0600 Subject: [Bro] A question on barnyard2 integration In-Reply-To: <8a19224f856c1509c71de16a57e517d0@localhost> References: <8a19224f856c1509c71de16a57e517d0@localhost> Message-ID: <1d987c8370b63f6b0e97458797547dad@localhost> On 2014-07-29 09:26, James Lay wrote: > Ok actually two questions: > > 1) I'm not able to get this to load with either: > > @policy/integration/barnyard2 > @integration/barnyard2 > > And from barnyard2 docs: > > alert_bro > > > ---------------------------------------------------------------------------- > > Purpose: Send alerts to a Bro-IDS instance. > > Arguments: hostname:port > > Examples: > output alert_bro: 127.0.0.1:47757 > > How do I set the port that bro listens to? Thank you. > > James Ok I've got this loading now with the below in local.bro: @load policy/integration/barnyard2 tail: loaded_scripts.log: file truncated /usr/local/bro/share/bro/policy/integration/barnyard2/__load__.bro /usr/local/bro/share/bro/policy/integration/barnyard2/types.bro /usr/local/bro/share/bro/policy/integration/barnyard2/main.bro The next bit...how do I tell bro to open a listening port? Thank you. James From jsiwek at illinois.edu Tue Jul 29 09:37:33 2014 From: jsiwek at illinois.edu (Siwek, Jon) Date: Tue, 29 Jul 2014 16:37:33 +0000 Subject: [Bro] File extraction filters In-Reply-To: References: <1EC46D25-1EB3-4C97-B6A1-DA2964836CFE@gmail.com> <9C5B3759-374A-4246-A0B0-FD8954BA3999@illinois.edu> Message-ID: <1C9CE9D1-F1B9-4DD4-AFAC-7EC426817A4D@illinois.edu> On Jul 29, 2014, at 10:38 AM, Mike Kolkebeck wrote: > Does "file_over_new_connection" fire at the same time as "file_new" when there is a new file? More specifically, will I ever lose any bytes by using this event over "file_new"? 'file_new' is immediately followed by at least one 'file_over_new_connection' 
(if you're dealing w/ only files extracted from the network), so there's not a difference in terms of what bytes have been seen yet. But you may have to think about that event being raised more than once per file and possibly not at the start of a file after the first time, whereas 'file_new' is guaranteed to be once at the start of a file. Not sure which will end up better/simpler for the code you're writing, but hope that helps explain the differences. - Jon From jsiwek at illinois.edu Tue Jul 29 09:39:33 2014 From: jsiwek at illinois.edu (Siwek, Jon) Date: Tue, 29 Jul 2014 16:39:33 +0000 Subject: [Bro] A question on barnyard2 integration In-Reply-To: <1d987c8370b63f6b0e97458797547dad@localhost> References: <8a19224f856c1509c71de16a57e517d0@localhost> <1d987c8370b63f6b0e97458797547dad@localhost> Message-ID: <68DD395E-B72E-44BE-9F7C-0046F01208A6@illinois.edu> > The next bit...how do I tell bro to open a listening port? Thank you. @load frameworks/communication/listen The default port is 47757/tcp, you can redef "Communication::listen_port" to change it. - Jon From jlay at slave-tothe-box.net Tue Jul 29 09:50:18 2014 From: jlay at slave-tothe-box.net (James Lay) Date: Tue, 29 Jul 2014 10:50:18 -0600 Subject: [Bro] A question on barnyard2 integration In-Reply-To: <68DD395E-B72E-44BE-9F7C-0046F01208A6@illinois.edu> References: <8a19224f856c1509c71de16a57e517d0@localhost> <1d987c8370b63f6b0e97458797547dad@localhost> <68DD395E-B72E-44BE-9F7C-0046F01208A6@illinois.edu> Message-ID: On 2014-07-29 10:39, Siwek, Jon wrote: >> The next bit...how do I tell bro to open a listening port? Thank >> you. > > @load frameworks/communication/listen > > The default port is 47757/tcp, you can redef > "Communication::listen_port" to change it. > > - Jon Excellent thank you. Last question...I have this: @load tuning/logs-to-elasticsearch redef LogElasticSearch::send_logs += { Conn::LOG, }; Will I need to add an additional item? 
Or will bro pipe the barnyard2 data automatically to elasticsearch? Thanks again. James From seth at icir.org Tue Jul 29 10:14:17 2014 From: seth at icir.org (Seth Hall) Date: Tue, 29 Jul 2014 13:14:17 -0400 Subject: [Bro] A question on barnyard2 integration In-Reply-To: References: <8a19224f856c1509c71de16a57e517d0@localhost> <1d987c8370b63f6b0e97458797547dad@localhost> <68DD395E-B72E-44BE-9F7C-0046F01208A6@illinois.edu> Message-ID: <8743C372-4757-4F74-B972-86F886F3FACE@icir.org> On Jul 29, 2014, at 12:50 PM, James Lay wrote: > Will I need to add an additional item? Or will bro pipe the barnyard2 > data automatically to elasticsearch? Thanks again. If you don't specify to send the barnyard log to ES, then it won't go (unless you don't specify which logs to send and all logs are sent). The Log::ID for the barnyard2 log is: Barnyard2::LOG .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 495 bytes Desc: Message signed with OpenPGP using GPGMail Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140729/c36f44bf/attachment.bin From jlay at slave-tothe-box.net Tue Jul 29 10:32:11 2014 From: jlay at slave-tothe-box.net (James Lay) Date: Tue, 29 Jul 2014 11:32:11 -0600 Subject: [Bro] A question on barnyard2 integration In-Reply-To: <8743C372-4757-4F74-B972-86F886F3FACE@icir.org> References: <8a19224f856c1509c71de16a57e517d0@localhost> <1d987c8370b63f6b0e97458797547dad@localhost> <68DD395E-B72E-44BE-9F7C-0046F01208A6@illinois.edu> <8743C372-4757-4F74-B972-86F886F3FACE@icir.org> Message-ID: Perfect...thanks so much Seth. Sent from my iPhone > On Jul 29, 2014, at 11:14, Seth Hall wrote: > > >> On Jul 29, 2014, at 12:50 PM, James Lay wrote: >> >> Will I need to add an additional item? Or will bro pipe the barnyard2 >> data automatically to elasticsearch? 
Thanks again. > > If you don't specify to send the barnyard log to ES, then it won't go (unless you don't specify which logs to send and all logs are sent). The Log::ID for the barnyard2 log is: Barnyard2::LOG > > .Seth > > -- > Seth Hall > International Computer Science Institute > (Bro) because everyone has a network > http://www.bro.org/ > From jlay at slave-tothe-box.net Tue Jul 29 17:21:22 2014 From: jlay at slave-tothe-box.net (James Lay) Date: Tue, 29 Jul 2014 18:21:22 -0600 Subject: [Bro] A question on barnyard2 integration In-Reply-To: <8743C372-4757-4F74-B972-86F886F3FACE@icir.org> References: <8a19224f856c1509c71de16a57e517d0@localhost> <1d987c8370b63f6b0e97458797547dad@localhost> <68DD395E-B72E-44BE-9F7C-0046F01208A6@illinois.edu> <8743C372-4757-4F74-B972-86F886F3FACE@icir.org> Message-ID: <1406679682.2788.10.camel@JamesiMac> On Tue, 2014-07-29 at 13:14 -0400, Seth Hall wrote: > On Jul 29, 2014, at 12:50 PM, James Lay wrote: > > > Will I need to add an additional item? Or will bro pipe the barnyard2 > > data automatically to elasticsearch? Thanks again. > > If you don't specify to send the barnyard log to ES, then it won't go (unless you don't specify which logs to send and all logs are sent). The Log::ID for the barnyard2 log is: Barnyard2::LOG > > .Seth > > -- > Seth Hall > International Computer Science Institute > (Bro) because everyone has a network > http://www.bro.org/ > Hrmm....maybe I put this in wrong? @load tuning/logs-to-elasticsearch redef LogElasticSearch::send_logs += { Conn::LOG, Barnyard2::LOG }; Error in /usr/local/bro/share/bro/site/local.bro, line 91: unknown identifier Barnyard2::LOG, at or near "Barnyard2::LOG" James -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140729/ed562786/attachment.html From seth at icir.org Tue Jul 29 18:03:15 2014 From: seth at icir.org (Seth Hall) Date: Tue, 29 Jul 2014 21:03:15 -0400 Subject: [Bro] A question on barnyard2 integration In-Reply-To: <1406679682.2788.10.camel@JamesiMac> References: <8a19224f856c1509c71de16a57e517d0@localhost> <1d987c8370b63f6b0e97458797547dad@localhost> <68DD395E-B72E-44BE-9F7C-0046F01208A6@illinois.edu> <8743C372-4757-4F74-B972-86F886F3FACE@icir.org> <1406679682.2788.10.camel@JamesiMac> Message-ID: On Jul 29, 2014, at 8:21 PM, James Lay wrote: > Error in /usr/local/bro/share/bro/site/local.bro, line 91: unknown identifier Barnyard2::LOG, at or near "Barnyard2::LOG" Make sure you're loading the Barnyard2 integration stuff before adding those lines... @load policy/integration/barnyard2 .Seth -- Seth Hall International Computer Science Institute (Bro) because everyone has a network http://www.bro.org/ -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc
Type: application/pgp-signature
Size: 495 bytes
Desc: Message signed with OpenPGP using GPGMail
Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140729/1ecc07ca/attachment.bin

From jlay at slave-tothe-box.net Tue Jul 29 18:13:38 2014
From: jlay at slave-tothe-box.net (James Lay)
Date: Tue, 29 Jul 2014 19:13:38 -0600
Subject: [Bro] A question on barnyard2 integration
In-Reply-To:
References: <8a19224f856c1509c71de16a57e517d0@localhost>
	<1d987c8370b63f6b0e97458797547dad@localhost>
	<68DD395E-B72E-44BE-9F7C-0046F01208A6@illinois.edu>
	<8743C372-4757-4F74-B972-86F886F3FACE@icir.org>
	<1406679682.2788.10.camel@JamesiMac>
Message-ID: <1406682818.2788.11.camel@JamesiMac>

On Tue, 2014-07-29 at 21:03 -0400, Seth Hall wrote:
> On Jul 29, 2014, at 8:21 PM, James Lay wrote:
>
> > Error in /usr/local/bro/share/bro/site/local.bro, line 91: unknown identifier Barnyard2::LOG, at or near "Barnyard2::LOG"
>
> Make sure you're loading the Barnyard2 integration stuff before adding those lines...
>
> @load policy/integration/barnyard2
>
> .Seth
>
> --
> Seth Hall
> International Computer Science Institute
> (Bro) because everyone has a network
> http://www.bro.org/
>

Ah crud...had the Barnyard2::LOG line added on the production box, but the @load policy on the dev box 8-| Just one of those days I guess...thanks again Seth.

James

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140729/67a925de/attachment.html

From seth at icir.org Tue Jul 29 18:24:15 2014
From: seth at icir.org (Seth Hall)
Date: Tue, 29 Jul 2014 21:24:15 -0400
Subject: [Bro] A question on barnyard2 integration
In-Reply-To: <1406682818.2788.11.camel@JamesiMac>
References: <8a19224f856c1509c71de16a57e517d0@localhost>
	<1d987c8370b63f6b0e97458797547dad@localhost>
	<68DD395E-B72E-44BE-9F7C-0046F01208A6@illinois.edu>
	<8743C372-4757-4F74-B972-86F886F3FACE@icir.org>
	<1406679682.2788.10.camel@JamesiMac>
	<1406682818.2788.11.camel@JamesiMac>
Message-ID: <411C3A6A-1F7D-48DE-B6C8-786B620A2071@icir.org>

On Jul 29, 2014, at 9:13 PM, James Lay wrote:

> Ah crud...had the Barnyard2::LOG line added on the production box, but the @load policy on the dev box 8-| Just one of those days I guess...thanks again Seth.

No problem. I wouldn't even complain if you documented your experiences with this stuff somewhere. :)

.Seth

--
Seth Hall
International Computer Science Institute
(Bro) because everyone has a network
http://www.bro.org/

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 495 bytes
Desc: Message signed with OpenPGP using GPGMail
Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140729/57343116/attachment.bin

From netantho at gmail.com Wed Jul 30 15:08:19 2014
From: netantho at gmail.com (Anthony VEREZ)
Date: Wed, 30 Jul 2014 15:08:19 -0700
Subject: [Bro] SSLBL
In-Reply-To:
References: <80BCF1CC-982E-45D3-800A-C83139F9A395@icir.org>
Message-ID: <53D96CD3.8090709@gmail.com>

Hi,

I created a python script to fetch the latest version of the blacklist and convert it to the bro intel framework format:
https://gist.github.com/netantho/b4f5a3df008184119695#file-gistfile1-py

Thanks James and Johanna for the idea :)

Anthony

On 7/15/14, 9:59 AM, James Lay wrote:
> On 2014-07-15 10:55, Johanna Amann wrote:
>> Hello James,
>>
>> using blacklists like this is actually quite easy nowadays. Just
>> loading the list of blacklisted SHA-1 hashes into the intel framework
>> and making sure that policy/frameworks/intel/seen/file-hashes.bro is
>> loaded should be enough.
>>
>> Certificates used in SSL connections are handled just like files, so
>> if one of the certificates is encountered after loading the data, it
>> should trigger a notification.
>>
>> You just have to reformat the list for the intel framework.
>>
>> Johanna
>>
>> On 15 Jul 2014, at 9:40, James Lay wrote:
>>
>>> Interesting:
>>>
>>> https://sslbl.abuse.ch/blacklist/
>>>
>>> Wonder if bro can support this?
>>>
>>> James
>
> Thank you Johanna...I will go down that path.
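[Editor's note: Johanna's recipe above, spelled out as site configuration, would look something like the following sketch; the path to the reformatted blacklist file is hypothetical.]

```bro
# Match observed file hashes against loaded intel data. Certificates in SSL
# connections are handled like files, so their SHA-1 hashes are covered too.
@load policy/frameworks/intel/seen/file-hashes

# Hypothetical location of the SSLBL list, reformatted for the intel framework.
redef Intel::read_files += { "/usr/local/bro/share/bro/site/sslbl.intel" };
```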
>
> James
>
> _______________________________________________
> Bro mailing list
> bro at bro-ids.org
> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro
>

From johanna at icir.org Wed Jul 30 15:21:43 2014
From: johanna at icir.org (Johanna Amann)
Date: Wed, 30 Jul 2014 15:21:43 -0700
Subject: [Bro] SSLBL
In-Reply-To: <53D96CD3.8090709@gmail.com>
References: <80BCF1CC-982E-45D3-800A-C83139F9A395@icir.org>
	<53D96CD3.8090709@gmail.com>
Message-ID:

...and the same in perl:
https://github.com/0xxon/bro-utils/blob/master/convert-blacklist.pl

I sent that to James a while ago but forgot to CC the list.

Johanna

On 30 Jul 2014, at 15:08, Anthony VEREZ wrote:

> Hi,
>
> I created a python script to fetch the latest version of the blacklist
> and convert it to the bro intel framework format:
> https://gist.github.com/netantho/b4f5a3df008184119695#file-gistfile1-py
>
> Thanks James and Johanna for the idea :)
>
> Anthony
>
> On 7/15/14, 9:59 AM, James Lay wrote:
>> On 2014-07-15 10:55, Johanna Amann wrote:
>>> Hello James,
>>>
>>> using blacklists like this is actually quite easy nowadays. Just
>>> loading the list of blacklisted SHA-1 hashes into the intel framework
>>> and making sure that policy/frameworks/intel/seen/file-hashes.bro is
>>> loaded should be enough.
>>>
>>> Certificates used in SSL connections are handled just like files, so
>>> if one of the certificates is encountered after loading the data, it
>>> should trigger a notification.
>>>
>>> You just have to reformat the list for the intel framework.
>>>
>>> Johanna
>>>
>>> On 15 Jul 2014, at 9:40, James Lay wrote:
>>>
>>>> Interesting:
>>>>
>>>> https://sslbl.abuse.ch/blacklist/
>>>>
>>>> Wonder if bro can support this?
>>>>
>>>> James
>>
>> Thank you Johanna...I will go down that path.
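[Editor's note: neither script's contents appear in the archive, but the conversion both perform can be sketched roughly as follows. This is not Anthony's gist or Johanna's perl; it assumes the SSLBL CSV columns are listing date, SHA-1, listing reason, and the meta.source label is an arbitrary choice.]

```python
import csv
import io

# Intel framework input files are tab-separated with a "#fields" header line.
INTEL_HEADER = "#fields\tindicator\tindicator_type\tmeta.source\tmeta.desc"

def sslbl_to_intel(csv_text, source="sslbl.abuse.ch"):
    """Convert SSLBL blacklist CSV text into Bro intel-framework lines.

    Assumes each data row is: listing date, SHA-1 hash, listing reason.
    """
    lines = [INTEL_HEADER]
    for row in csv.reader(io.StringIO(csv_text)):
        if not row or row[0].startswith("#") or len(row) < 3:
            continue  # skip comments, blank lines, and malformed rows
        _listed, sha1, reason = row[0], row[1], row[2]
        # Certificates are seen as files, so their SHA-1s go in as FILE_HASH.
        lines.append("\t".join([sha1, "Intel::FILE_HASH", source, reason]))
    return "\n".join(lines) + "\n"
```

Writing the result to a file and pointing Intel::read_files at it should be all the remaining glue.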
>>
>> James
>>
>> _______________________________________________
>> Bro mailing list
>> bro at bro-ids.org
>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro
>>
>
> _______________________________________________
> Bro mailing list
> bro at bro-ids.org
> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro

From roihat168 at yahoo.com Thu Jul 31 01:59:06 2014
From: roihat168 at yahoo.com (roi hatam)
Date: Thu, 31 Jul 2014 01:59:06 -0700
Subject: [Bro] filter extension
Message-ID: <1406797146.6059.YahooMailNeo@web162104.mail.bf1.yahoo.com>

Hello

I'm trying to sniff HTTP requests, but I don't want to see any kind of pictures like gif, jpg, png, ... What is the earliest stage at which I can block those file extensions? The pcap filter?

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20140731/4fdf94dd/attachment.html

From jlay at slave-tothe-box.net Thu Jul 31 07:56:46 2014
From: jlay at slave-tothe-box.net (James Lay)
Date: Thu, 31 Jul 2014 08:56:46 -0600
Subject: [Bro] A question on barnyard2 integration
In-Reply-To: <411C3A6A-1F7D-48DE-B6C8-786B620A2071@icir.org>
References: <8a19224f856c1509c71de16a57e517d0@localhost>
	<1d987c8370b63f6b0e97458797547dad@localhost>
	<68DD395E-B72E-44BE-9F7C-0046F01208A6@illinois.edu>
	<8743C372-4757-4F74-B972-86F886F3FACE@icir.org>
	<1406679682.2788.10.camel@JamesiMac>
	<1406682818.2788.11.camel@JamesiMac>
	<411C3A6A-1F7D-48DE-B6C8-786B620A2071@icir.org>
Message-ID: <35fb641be6b1dc4bd603d83a717a9419@localhost>

On 2014-07-29 19:24, Seth Hall wrote:
> On Jul 29, 2014, at 9:13 PM, James Lay wrote:
>
>> Ah crud...had the Barnyard2::LOG line added on the production box,
>> but the @load policy on the dev box 8-| Just one of those days I
>> guess...thanks again Seth.
>
> No problem. I wouldn't even complain if you documented your
> experiences with this stuff somewhere.
:)
>
> .Seth
>
> --
> Seth Hall
> International Computer Science Institute
> (Bro) because everyone has a network
> http://www.bro.org/

Thanks Seth. So far I haven't been able to get this to work. Everything seems to be functioning, but I don't get any snort data into elasticsearch (I do get conn.log data though). Info below:

installed broccoli

recompiled barnyard2 with:
./configure --enable-ipv6 --enable-gre --enable-bro --with-mysql --with-tcl=/usr/local/lib
and I do see "checking for broccoli... yes"

local.bro:
@load frameworks/communication/listen
@load policy/integration/barnyard2
@load tuning/logs-to-elasticsearch
redef LogElasticSearch::send_logs += { Conn::LOG, Barnyard2::LOG };
redef LogElasticSearch::server_host = "x.x.x.x";

Proto Recv-Q Send-Q Local Address   Foreign Address   State    PID/Program name
tcp   0      0      0.0.0.0:47757   0.0.0.0:*         LISTEN   25340/bro

barnyard:
output alert_bro: 127.0.0.1:47757

from runtime with -v:
alert_bro Connecting to Bro (127.0.0.1:47757)...done.

But all I see is conn.log info...no barnyard2 data. Not sure what else to do at this point...thanks Seth.

James
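[Editor's note: for anyone retracing this thread, the load-order fix the earlier messages converged on amounts to the following local.bro fragment. This is a sketch, not a verbatim quote from the thread; the ElasticSearch host is a placeholder.]

```bro
# Load order matters: the Barnyard2 integration defines Barnyard2::LOG,
# which the send_logs redef below references. Loading it after the redef
# produces the "unknown identifier Barnyard2::LOG" error seen above.
@load policy/integration/barnyard2
@load tuning/logs-to-elasticsearch

redef LogElasticSearch::server_host = "127.0.0.1";  # placeholder ES host
redef LogElasticSearch::send_logs += { Conn::LOG, Barnyard2::LOG };
```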