From johanna at icir.org Thu Feb 1 09:10:57 2018 From: johanna at icir.org (Johanna Amann) Date: Thu, 01 Feb 2018 09:10:57 -0800 Subject: [Bro-Dev] Merged branches deletion In-Reply-To: <20180130233607.a2ehpp7at3yb5373@user237.sys.ICSI.Berkeley.EDU> References: <20180130233607.a2ehpp7at3yb5373@user237.sys.ICSI.Berkeley.EDU> Message-ID: <3447FE82-E643-4758-9FBD-4BCCB1970100@icir.org> This is done. Johanna On 30 Jan 2018, at 15:36, Johanna Amann wrote: > Hi, > > I am going to delete these (merged) branches thursday, unless someone > feels especially attached to them: > > topic/dnthayer/ticket1821 > topic/dnthayer/ticket1836 > topic/dnthayer/ticket1863 > topic/jazoff/contentline-limit > topic/jazoff/fix-gridftp > topic/jazoff/fix-intel-error > topic/jazoff/speedup-for > topic/robin/broker-logging > topic/robin/event-args > topic/robin/plugin-version-check > topic/seth/add-file-lookup-functions > topic/seth/input-thread-behavior > topic/seth/remove-dns-weird > topic/vladg/bit-1838 > > Johanna > _______________________________________________ > bro-dev mailing list > bro-dev at bro.org > http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev From mfernandez at mitre.org Fri Feb 2 06:54:02 2018 From: mfernandez at mitre.org (Fernandez, Mark I) Date: Fri, 2 Feb 2018 14:54:02 +0000 Subject: [Bro-Dev] Bro DCE-RPC Fix for AlterContext and AlterContextResponse Parsers In-Reply-To: References: Message-ID: Bro-Dev Group, I am digging thru the BinPAC code for the DCE-RPC analyzer, and I noticed a couple of developer-comments that I think could be related, and perhaps even resolved, by a simple fix. 1. Developer BinPAC Comments See Lines 153-155 of dce_rpc-protocol.pac [https://github.com/bro/bro/blob/master/src/analyzer/protocol/dce-rpc/dce_rpc-protocol.pac#L153], stating that DCE_RPC_ALTER_CONTEXT and DCE_RPC_ALTER_CONTEXT_RESP are not being handled correctly and consequently, the parsers for each one are disabled/commented out. 2. Issue / Problem: dce_rpc-protocol.pac According to the original Open Group specification for DCE RPC (dated October 1997), the format of the AlterContext packet is identical to the Bind packet, and the format of the AlterContextResponse is identical to the BindAck. See the following URL for more info; or I could send you the PDF document separately, if needed. http://pubs.opengroup.org/onlinepubs/9629399/chap12.htm#tagcjh_17_06_04_01 http://pubs.opengroup.org/onlinepubs/9629399/chap12.htm#tagcjh_17_06_04_02 When looking at the BinPAC file, the type records for DCE_RPC_ALTER_CONTEXT and DCE_RPC_BIND are different, should be identical. Similarly, the type records for DCE_RPC_ALTER_CONTEXT_RESP and DCE_RPC_BIND_ACK are very different, should be identical. 3. Proposed Fix: dce_rpc-protocol.pac Modify the type record for DCE_RPC_ALTER_CONTEXT to be identical to DCE_RPC_BIND. Modify the type record for DCE_RPC_ALTER_CONTEXT_RESP to be identical to DCE_RPC_BIND_ACK. Remove '#' on Lines 154 and 155 to un-comment these lines and re-enable the parsers. In dce_rpc-analyzer.pac, generate events resulting from the AlterContext packet to allow logging of the new binding information in script-land. 4. Developer Script-land Comments See Lines 137 and 187 of main.bro [https://github.com/bro/bro/blob/master/scripts/base/protocols/dce-rpc/main.bro#L137], stating a condition where sometimes the binding is not seen. I can think of a couple of scenarios under which this would occur: (a) packet loss/drop; and (b) AlterContext packet not parsed. 
I think the fix described above will address (b) and help reduce the number instances where the binding isn't seen. 5. Bro Issue Tracker I plan to submit this to Bro Issue Tracker. Just wanted to give you a heads up here. Cheers! Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.icsi.berkeley.edu/pipermail/bro-dev/attachments/20180202/4f9fde97/attachment.html From balint.martina at gmail.com Fri Feb 2 08:33:55 2018 From: balint.martina at gmail.com (Martina Balintova) Date: Fri, 2 Feb 2018 16:33:55 +0000 Subject: [Bro-Dev] (no subject) In-Reply-To: References: Message-ID: Hi, Here are 2 questions, one about usage of memory after free, another seems to me like memory leak. Could you please either confirm that these are bugs and suggest fixes or help me work out why no? Function src/loggin/Manager.cc: bool Manager::Write(EnumVal* id, RecordVal* columns) seems to be using memory that might have been freed already. CreateWriter function can delete info_and_fields and still return non null writer. Any suggestions how to fix it? Maybe do not delete info in CreateWriter if old writer still exists? Will info's memory be freed somewhere later? I do not have a test case for this issue though. // CreateWriter() will set the other fields in info. writer = CreateWriter(stream->id, filter->writer, info, filter->num_fields, arg_fields, filter->local, filter->remote, false, filter->name); if ( ! writer ) { Unref(columns); return false; } } // Alright, can do the write now. threading::Value** vals = RecordToFilterVals(stream, filter, columns); if ( ! PLUGIN_HOOK_WITH_RESULT(HOOK_LOG_WRITE, HookLogWrite(filter->writer->Type()->AsEnumType()->Lookup(fi lter->writer->InternalInt()), filter->name, *info, filter->num_fields, filter->fields, vals), true) ) Another question is - inside src/input/Manager.cc -> a lot of places where ev is allocated from EnumVal, it is not being freed or Unrefed. Eg in function Manager::Delete(...reader, vals) , it seems like predidx and ev are allocated, but are not freed if there was not convert_error. This looks like memory is leaked in all those cases. if ( stream->pred ) { int startpos = 0; Val* predidx = ValueToRecordVal(i, vals, stream->itype, &startpos, convert_error); if ( convert_error ) Unref(predidx); else { Ref(val); EnumVal *ev = new EnumVal(BifEnum::Input::EVENT_REMOVED, BifType::Enum::Input::Event); streamresult = CallPred(stream->pred, 3, ev, predidx, val); if ( streamresult == false ) { // keep it. Unref(idxval); success = true; } } } Thank you for suggestions, Martina -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.icsi.berkeley.edu/pipermail/bro-dev/attachments/20180202/b9bb1377/attachment.html From johanna at icir.org Fri Feb 2 09:34:56 2018 From: johanna at icir.org (Johanna Amann) Date: Fri, 2 Feb 2018 09:34:56 -0800 Subject: [Bro-Dev] (no subject) In-Reply-To: References: Message-ID: <20180202173456.k6nu4qf7jetpo43v@user237.sys.ICSI.Berkeley.EDU> Hi Martina, you have picked out one of the more confusing parts of Bro to look at. The logging code is sadly quite complex - mostly because it has to handle a lot of different cases. On Fri, Feb 02, 2018 at 04:33:55PM +0000, Martina Balintova wrote: > Hi, > Here are 2 questions, one about usage of memory after free, another seems > to me like memory leak. Could you please either confirm that these are bugs > and suggest fixes or help me work out why no? 
> > Function src/loggin/Manager.cc: bool Manager::Write(EnumVal* id, RecordVal* > columns) seems to be using memory that might have been freed already. > CreateWriter function can delete info_and_fields and still return non null > writer. Any suggestions how to fix it? Maybe do not delete info in > CreateWriter if old writer still exists? Will info's memory be freed > somewhere later? This is not a problem. If you check the 2 cases in which delete_info_and_fields are called, you see that one of them is when FindStream returns a nullptr, and the other one is if the Steram already exists in WriterMap. If you look at the Manager::Write function, both of these cases are checked before calling CreateStream - in both of these cases, CreateStream will not be called. So - this write-after-use cannto happen. I think I tried to think about a way to make this nicer looking in the past, but was not able to find one. > Another question is - inside src/input/Manager.cc -> a lot of places where > ev is allocated from EnumVal, it is not being freed or Unrefed. Eg in > function Manager::Delete(...reader, vals) , it seems like predidx and ev > are allocated, but are not freed if there was not convert_error. This looks > like memory is leaked in all those cases. The values here are only Unref'd in the error case, because they are consumed, and will be deleted, by CallPred in the case where there is no error. In this case they are basically "passed to scriptland" and their freeing will be managed by the script interpreter. Sadly it is a bit difficult to follow what happens in cases like these. Basically, one needs to knpw if each function call that goes into the Bro core will take ownership and free the data structure afterwards, or if that is up to the calling function. Often this means looking the function that is called up to see what happens there - and even then one has to run tests afterwards to make sure one got it correct in all cases. I hope this made a few things slightly more clear, Johanna > if ( stream->pred ) > { > int startpos = 0; > Val* predidx = ValueToRecordVal(i, vals, > stream->itype, &startpos, convert_error); > > if ( convert_error ) > Unref(predidx); > else > { > Ref(val); > EnumVal *ev = new > EnumVal(BifEnum::Input::EVENT_REMOVED, BifType::Enum::Input::Event); > > streamresult = > CallPred(stream->pred, 3, ev, predidx, val); > > if ( streamresult == false ) > { > // keep it. > Unref(idxval); > success = true; > } > } > > } > > > Thank you for suggestions, > > Martina > _______________________________________________ > bro-dev mailing list > bro-dev at bro.org > http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev From seth at corelight.com Sat Feb 3 19:46:07 2018 From: seth at corelight.com (Seth Hall) Date: Sat, 03 Feb 2018 22:46:07 -0500 Subject: [Bro-Dev] Bro DCE-RPC Fix for AlterContext and AlterContextResponse Parsers In-Reply-To: References: Message-ID: On 2 Feb 2018, at 9:54, Fernandez, Mark I wrote: > 5. Bro Issue Tracker > > I plan to submit this to Bro Issue Tracker. Just wanted to give you a > heads up here. Thanks Mark! Those were probably my comments. Unfortunately there were a number of areas where I just ran out of steam doing investigations into why things were happening the way they were so this investigation is deeply appreciated. Do you have PCAPs with ALTER_CONTEXT messages in them? Because this is difficult-to-understand change without seeing actual traffic it would be best if you were able to submit the changes along with tests. 
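To give you an idea of what that usually amounts to: the core tests for protocol analyzers are normally just a trace replay plus a baseline diff of the resulting log. A minimal sketch (the trace name here is made up, use whatever pcap you end up adding) would be:

```
# @TEST-EXEC: bro -r $TRACES/dce-rpc/alter-context.pcap %INPUT
# @TEST-EXEC: btest-diff dce_rpc.log
```

That, plus the pcap and a recorded baseline, is typically all a test needs.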
Thanks, .Seth -- Seth Hall * Corelight, Inc * www.corelight.com From mfernandez at mitre.org Thu Feb 8 04:16:02 2018 From: mfernandez at mitre.org (Fernandez, Mark I) Date: Thu, 8 Feb 2018 12:16:02 +0000 Subject: [Bro-Dev] Bro DCE-RPC Fix for AlterContext and AlterContextResponse Parsers In-Reply-To: References: Message-ID: Hi Seth, Yes, I have a pcap containing the ALTER_CONTEXT req/resp packets. I will start working on the bug fix and submit to BIT, with pcap and test script, hopefully soon. Cheers, Mark -----Original Message----- From: Seth Hall [mailto:seth at corelight.com] Sent: Saturday, February 3, 2018 10:46 PM To: Fernandez, Mark I Cc: bro-dev at bro.org Subject: Re: [Bro-Dev] Bro DCE-RPC Fix for AlterContext and AlterContextResponse Parsers On 2 Feb 2018, at 9:54, Fernandez, Mark I wrote: > 5. Bro Issue Tracker > > I plan to submit this to Bro Issue Tracker. Just wanted to give you a > heads up here. Thanks Mark! Those were probably my comments. Unfortunately there were a number of areas where I just ran out of steam doing investigations into why things were happening the way they were so this investigation is deeply appreciated. Do you have PCAPs with ALTER_CONTEXT messages in them? Because this is difficult-to-understand change without seeing actual traffic it would be best if you were able to submit the changes along with tests. Thanks, .Seth -- Seth Hall * Corelight, Inc * www.corelight.com From johanna at icir.org Thu Feb 8 10:01:55 2018 From: johanna at icir.org (Johanna Amann) Date: Thu, 8 Feb 2018 10:01:55 -0800 Subject: [Bro-Dev] 'async' update and proposal In-Reply-To: <20180130153842.GB39249@icir.org> References: <20180126194003.GA1786@icir.org> <20180127052530.3l4vnuhjoi6rnvs2@Beezling.local> <7289248a-4c95-9729-1fd0-59010cdd84ed@gmail.com> <20180129170014.GA39249@icir.org> <20180130153842.GB39249@icir.org> Message-ID: <20180208180155.avwgnnh4lgfttqud@user134.sys.ICSI.Berkeley.EDU> I just wanted to quickly chime in here to say that I generally like the idea of having these contexts. I have no idea how complex it would be to implement something like this, but that seems like it might be a relatively clean solution to our problem :) Johanna On Tue, Jan 30, 2018 at 07:38:42AM -0800, Robin Sommer wrote: > > > On Mon, Jan 29, 2018 at 13:58 -0600, you wrote: > > > And if 'function_call' starts as a synchronous function and later > > changes, that's also kind of a problem, so you might see people > > cautiously implementing the same type of code patterns everywhere > > even if not required for some cases. > > That's a good point more generally: if we require "async" at call > sites, an internal change to a framework can break existing code. > > > event protocol_event_1(c: connection ...) &context = { return c$id; } { ... } > > > > I only skimmed the paper, though seemed like it outlined a similar way > > of generalizing contexts/scopes ? > > Yeah, that's pretty much the idea there. For concurrency, we'd hash > that context value and use that to determine a target threat to > schedule execution too, just like in a cluster the process/machine is > determined. > > An attribute can work if we're confident that the relevant information > can always be extracted from the event parameters. In a concurrent > prototype many years ago we instead used a hardcoded set of choices > based on the underlying connection triggering the event (5-tuple, host > pair, src IP, dst IP). So you'd write (iirc): > > event protocol_event_1(c: connection ...) 
&scope = connection > > That detaches the context calculation from event parameters, with the > obvious disadvantage that it can't be customized any further. May be > there's some middle ground where we'd get both. > > (To clarify terminology: In that paper "scope" is the scheduling > granularity, e.g., "by connection". "context" is the current > instantiation of that scope (e.g., "1.2.3.4:1234,2.3.4.5:80" for > connection scope). > > Robin > > -- > Robin Sommer * ICSI/LBNL * robin at icir.org * www.icir.org/robin > _______________________________________________ > bro-dev mailing list > bro-dev at bro.org > http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev > From seth at corelight.com Thu Feb 8 10:55:25 2018 From: seth at corelight.com (Seth Hall) Date: Thu, 08 Feb 2018 13:55:25 -0500 Subject: [Bro-Dev] Bro DCE-RPC Fix for AlterContext and AlterContextResponse Parsers In-Reply-To: References: Message-ID: <9CCBD317-44F0-4EBD-98D9-66A55A2AAC8B@corelight.com> Cool, thanks Mark! .Seth On 8 Feb 2018, at 7:16, Fernandez, Mark I wrote: > Hi Seth, > > Yes, I have a pcap containing the ALTER_CONTEXT req/resp packets. I > will start working on the bug fix and submit to BIT, with pcap and > test script, hopefully soon. > > Cheers, > Mark > > -----Original Message----- > From: Seth Hall [mailto:seth at corelight.com] > Sent: Saturday, February 3, 2018 10:46 PM > To: Fernandez, Mark I > Cc: bro-dev at bro.org > Subject: Re: [Bro-Dev] Bro DCE-RPC Fix for AlterContext and > AlterContextResponse Parsers > > > > On 2 Feb 2018, at 9:54, Fernandez, Mark I wrote: > >> 5. Bro Issue Tracker >> >> I plan to submit this to Bro Issue Tracker. Just wanted to give you >> a >> heads up here. > > Thanks Mark! Those were probably my comments. Unfortunately there > were > a number of areas where I just ran out of steam doing investigations > into why things were happening the way they were so this investigation > is deeply appreciated. > > Do you have PCAPs with ALTER_CONTEXT messages in them? Because this > is > difficult-to-understand change without seeing actual traffic it would > be > best if you were able to submit the changes along with tests. > > Thanks, > .Seth > > -- > Seth Hall * Corelight, Inc * www.corelight.com -- Seth Hall * Corelight, Inc * www.corelight.com From pierre at droids-corp.org Mon Feb 12 12:19:38 2018 From: pierre at droids-corp.org (Pierre LALET) Date: Mon, 12 Feb 2018 21:19:38 +0100 Subject: [Bro-Dev] Logging TCP server banners Message-ID: <20180212201938.2hc7xdwivkjxrupd@droids-corp.org> Hi everyone, [This mail has been sent to bro@ first, but I think I might have more luck (and answers) here. Sorry for the inconvenience to those who have already read it.] For a network recon framework I am working on (https://ivre.rocks/ -- for those interested), I would like to log each "TCP server banner" Bro sees. I call "TCP server banner" the first chunk of data a server sends, before the client has sent data (if the client sends data before the server, I don't want to log anything). Here is what I have done so far (`PassiveRecon` is my module's name): ``` export { redef tcp_content_deliver_all_resp = T; [...] } [...] event tcp_contents(c: connection, is_orig: bool, seq: count, contents: string) { if (! 
is_orig && seq == 1 && c$orig$num_pkts == 2) { Log::write(PassiveRecon::LOG, [$ts=c$start_time, $host=c$id$resp_h, $srvport=c$id$resp_p, $recon_type=TCP_SERVER_BANNER, $value=contents]); } } ``` Basically, I consider we have a "TCP server banner" when `is_orig` is false, when `seq` equals 1 and when we have seen exactly two packets from the client (which should be a SYN and the first ACK). This solution generally works **but** I sometimes log a data chunk when I should not, particularly if I have missed part of the traffic. As an example, the following Scapy script creates a PCAP file that would trick my script into logging a "TCP server banner" while the client has actually sent some data (and we have missed an ACK packet, left as a comment in the script): ``` wrpcap("test.cap", [ Ether() / IP(dst="1.2.3.4", src="5.6.7.8") / TCP(dport=80, sport=5678, flags="S", ack=0, seq=555678), Ether() / IP(src="1.2.3.4", dst="5.6.7.8") / TCP(sport=80, dport=5678, flags="SA", seq=111234, ack=555679), # Ether() / IP(dst="1.2.3.4", src="5.6.7.8") / # TCP(dport=80, sport=5678, flags="A", ack=111235, seq=555679), Ether() / IP(dst="1.2.3.4", src="5.6.7.8") / TCP(dport=80, sport=5678, flags="PA", ack=111235, seq=555679) / "DATA", Ether() / IP(src="1.2.3.4", dst="5.6.7.8") / TCP(sport=80, dport=5678, flags="PA", seq=111235, ack=555683) / "DATA" ]) ``` Is there a way to know that I have not missed any packet from the client and/or a way to know that the client has not sent any data on the connection (like an equivalent of the `seq` parameter, but for the `ack`)? Also, when `seq` equals 1, am I certain that I have not missed any packet from the server? One more question: is there a better, cleaner, etc. way to do what I'm trying to do? Thanks a lot, Pierre -- Pierre http://pierre.droids-corp.org/ From seth at corelight.com Mon Feb 12 14:18:05 2018 From: seth at corelight.com (Seth Hall) Date: Mon, 12 Feb 2018 17:18:05 -0500 Subject: [Bro-Dev] Logging TCP server banners In-Reply-To: <20180212201938.2hc7xdwivkjxrupd@droids-corp.org> References: <20180212201938.2hc7xdwivkjxrupd@droids-corp.org> Message-ID: <649F70BD-9F44-4AB8-809D-09A4EAFC01DE@corelight.com> This fits with a feature that I've been talking to several people about for quite a while which would make a bit of the beginning of each direction of a connection available in script-land. That would help with your problem a bit, but it sounds like since there is a particular packet that you want, you may want to write your own analyzer that gets the exact data that you are looking for because you should be able to do packet level stuff easily there. .Seth On 12 Feb 2018, at 15:19, Pierre LALET wrote: > Hi everyone, > > [This mail has been sent to bro@ first, but I think I might have more > luck (and answers) here. Sorry for the inconvenience to those who have > already read it.] > > For a network recon framework I am working on (https://ivre.rocks/ -- > for those interested), I would like to log each "TCP server banner" > Bro sees. > > I call "TCP server banner" the first chunk of data a server sends, > before the client has sent data (if the client sends data before the > server, I don't want to log anything). > > Here is what I have done so far (`PassiveRecon` is my module's name): > > ``` > export { > redef tcp_content_deliver_all_resp = T; > > [...] > } > > [...] > > event tcp_contents(c: connection, is_orig: bool, seq: count, contents: > string) > { > if (! 
is_orig && seq == 1 && c$orig$num_pkts == 2) > { > Log::write(PassiveRecon::LOG, [$ts=c$start_time, > $host=c$id$resp_h, > $srvport=c$id$resp_p, > $recon_type=TCP_SERVER_BANNER, > $value=contents]); > } > } > ``` > > Basically, I consider we have a "TCP server banner" when `is_orig` is > false, when `seq` equals 1 and when we have seen exactly two packets > from the client (which should be a SYN and the first ACK). > > This solution generally works **but** I sometimes log a data chunk > when I should not, particularly if I have missed part of the > traffic. > > As an example, the following Scapy script creates a PCAP file that > would trick my script into logging a "TCP server banner" while the > client has actually sent some data (and we have missed an ACK packet, > left as a comment in the script): > > ``` > wrpcap("test.cap", [ > Ether() / IP(dst="1.2.3.4", src="5.6.7.8") / > TCP(dport=80, sport=5678, flags="S", ack=0, seq=555678), > Ether() / IP(src="1.2.3.4", dst="5.6.7.8") / > TCP(sport=80, dport=5678, flags="SA", seq=111234, ack=555679), > # Ether() / IP(dst="1.2.3.4", src="5.6.7.8") / > # TCP(dport=80, sport=5678, flags="A", ack=111235, seq=555679), > Ether() / IP(dst="1.2.3.4", src="5.6.7.8") / > TCP(dport=80, sport=5678, flags="PA", ack=111235, seq=555679) / > "DATA", > Ether() / IP(src="1.2.3.4", dst="5.6.7.8") / > TCP(sport=80, dport=5678, flags="PA", seq=111235, ack=555683) / > "DATA" > ]) > ``` > > Is there a way to know that I have not missed any packet from the > client and/or a way to know that the client has not sent any data on > the connection (like an equivalent of the `seq` parameter, but for the > `ack`)? > > Also, when `seq` equals 1, am I certain that I have not missed any > packet from the server? > > One more question: is there a better, cleaner, etc. way to do what I'm > trying to do? > > Thanks a lot, > > Pierre > > -- > Pierre > http://pierre.droids-corp.org/ > _______________________________________________ > bro-dev mailing list > bro-dev at bro.org > http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev -- Seth Hall * Corelight, Inc * www.corelight.com From jeffrey.bencteux at ssi.gouv.fr Tue Feb 13 00:15:21 2018 From: jeffrey.bencteux at ssi.gouv.fr (Bencteux Jeffrey) Date: Tue, 13 Feb 2018 09:15:21 +0100 Subject: [Bro-Dev] Logging TCP server banners In-Reply-To: <20180212201938.2hc7xdwivkjxrupd@droids-corp.org> References: <20180212201938.2hc7xdwivkjxrupd@droids-corp.org> Message-ID: > I call "TCP server banner" the first chunk of data a server sends, > before the client has sent data (if the client sends data before the > server, I don't want to log anything). A solution could be to blacklist such connections, i-e if there is data sent by the client, then do not log: > if (! is_orig && seq == 1 && c$orig$num_pkts == 2 && c$orig$size == 0) Another thing that comes to me is what if you miss the SYN or the SYN-ACK segment sent by your client? You will not log the banner so I am not sure about the second condition : c$orig$num_pkts == 2. I would remove it. 
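Putting the two changes together, your handler would then read roughly like this (untested sketch; same log call as in your original script, only the condition is different):

```
event tcp_contents(c: connection, is_orig: bool, seq: count, contents: string)
	{
	# Only log when the server speaks first: no payload from the
	# originator at all, and this is the first responder chunk.
	if ( ! is_orig && seq == 1 && c$orig$size == 0 )
		{
		Log::write(PassiveRecon::LOG, [$ts=c$start_time,
		                               $host=c$id$resp_h,
		                               $srvport=c$id$resp_p,
		                               $recon_type=TCP_SERVER_BANNER,
		                               $value=contents]);
		}
	}
```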
With the pcap generated with the scapy script you gave, I do not log anymore, however if I change it to this: wrpcap("test.cap", [ Ether() / IP(dst="1.2.3.4", src="5.6.7.8") / TCP(dport=80, sport=5678, flags="S", ack=0, seq=555678), Ether() / IP(src="1.2.3.4", dst="5.6.7.8") / TCP(sport=80, dport=5678, flags="SA", seq=111234, ack=555679), Ether() / IP(dst="1.2.3.4", src="5.6.7.8") / TCP(dport=80, sport=5678, flags="A", ack=111235, seq=555679), # Ether() / IP(dst="1.2.3.4", src="5.6.7.8") / no more data sent by the client # TCP(dport=80, sport=5678, flags="PA", ack=111235, seq=555679) / "DATA", Ether() / IP(src="1.2.3.4", dst="5.6.7.8") / TCP(sport=80, dport=5678, flags="PA", seq=111235, ack=555679) / "DATA" ]) I do have an entry in the log. > Also, when `seq` equals 1, am I certain that I have not missed any > packet from the server? No idea about that, I think the answer is in Bro's TCP implementation in src/analyzer/protocol/tcp somewhere. Regards, From jeffrey.bencteux at ssi.gouv.fr Tue Feb 13 00:20:48 2018 From: jeffrey.bencteux at ssi.gouv.fr (Bencteux Jeffrey) Date: Tue, 13 Feb 2018 09:20:48 +0100 Subject: [Bro-Dev] Logging TCP server banners In-Reply-To: References: <20180212201938.2hc7xdwivkjxrupd@droids-corp.org> Message-ID: > Another thing that comes to me is what if you miss the SYN or the > SYN-ACK segment sent by your client? I meant SYN or ACK (third one in the handshake) segment sent by the client. Sorry. Regards, From p.antoine at catenacyber.fr Tue Feb 13 01:01:01 2018 From: p.antoine at catenacyber.fr (Philippe Antoine) Date: Tue, 13 Feb 2018 10:01:01 +0100 Subject: [Bro-Dev] Fuzzing In-Reply-To: <20180212201938.2hc7xdwivkjxrupd@droids-corp.org> References: <20180212201938.2hc7xdwivkjxrupd@droids-corp.org> Message-ID: <84C49883-3480-4608-A302-9B3877E065C1@catenacyber.fr> Hi bro team, Do you have plans to integrate oss-fuzz ? like https://github.com/google/oss-fuzz/issues/624 All the best, Philippe -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.icsi.berkeley.edu/pipermail/bro-dev/attachments/20180213/0209fb65/attachment.html -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP Url : http://mailman.icsi.berkeley.edu/pipermail/bro-dev/attachments/20180213/0209fb65/attachment.bin From pierre at droids-corp.org Tue Feb 13 01:47:46 2018 From: pierre at droids-corp.org (Pierre LALET) Date: Tue, 13 Feb 2018 10:47:46 +0100 Subject: [Bro-Dev] Logging TCP server banners In-Reply-To: <649F70BD-9F44-4AB8-809D-09A4EAFC01DE@corelight.com> References: <20180212201938.2hc7xdwivkjxrupd@droids-corp.org> <649F70BD-9F44-4AB8-809D-09A4EAFC01DE@corelight.com> Message-ID: <20180213094746.vtlvkmdxjacxpb2k@droids-corp.org> Hi, On Mon, Feb 12, 2018 at 05:18:05PM -0500, Seth Hall wrote: > This fits with a feature that I've been talking to several people > about for quite a while which would make a bit of the beginning of > each direction of a connection available in script-land. I think that would be great! > That would help with your problem a bit, but it sounds like since > there is a particular packet that you want, you may want to write > your own analyzer that gets the exact data that you are looking for > because you should be able to do packet level stuff easily there. I wanted to avoid that, but actually I think you're right. 
Thanks for your answer, Pierre -- Pierre http://pierre.droids-corp.org/ From pierre at droids-corp.org Tue Feb 13 02:02:12 2018 From: pierre at droids-corp.org (Pierre LALET) Date: Tue, 13 Feb 2018 11:02:12 +0100 Subject: [Bro-Dev] Logging TCP server banners In-Reply-To: References: <20180212201938.2hc7xdwivkjxrupd@droids-corp.org> Message-ID: <20180213100212.rmy4oqfrrhg2gnuy@droids-corp.org> Hi, On Tue, Feb 13, 2018 at 09:15:21AM +0100, Bencteux Jeffrey wrote: > A solution could be to blacklist such connections, i-e if there is data > sent by the client, then do not log: > > if (! is_orig && seq == 1 && c$orig$num_pkts == 2 && c$orig$size == 0) > > Another thing that comes to me is what if you miss the SYN or the > SYN-ACK segment sent by your client? You will not log the banner so I am > not sure about the second condition : c$orig$num_pkts == 2. I would > remove it. Thanks! Indeed, changing `c$orig$num_pkts == 2` to `c$orig$size == 0` is a good move, I wish I had this idea! > With the pcap generated with the scapy script you gave, I do not log > anymore, however if I change it to this: > > wrpcap("test.cap", [ > Ether() / IP(dst="1.2.3.4", src="5.6.7.8") / > TCP(dport=80, sport=5678, flags="S", ack=0, seq=555678), > Ether() / IP(src="1.2.3.4", dst="5.6.7.8") / > TCP(sport=80, dport=5678, flags="SA", seq=111234, ack=555679), > Ether() / IP(dst="1.2.3.4", src="5.6.7.8") / > TCP(dport=80, sport=5678, flags="A", ack=111235, seq=555679), > # Ether() / IP(dst="1.2.3.4", src="5.6.7.8") / no more data sent by the client > # TCP(dport=80, sport=5678, flags="PA", ack=111235, seq=555679) / "DATA", > Ether() / IP(src="1.2.3.4", dst="5.6.7.8") / > TCP(sport=80, dport=5678, flags="PA", seq=111235, ack=555679) / "DATA" > ]) > > I do have an entry in the log. > > > Also, when `seq` equals 1, am I certain that I have not missed any > > packet from the server? > > No idea about that, I think the answer is in Bro's TCP implementation in > src/analyzer/protocol/tcp somewhere. I think, as suggested by Seth Hall, that I would have to write my own analyzer for that. Thanks a lot, Pierre -- Pierre http://pierre.droids-corp.org/ From robin at icir.org Tue Feb 13 07:44:33 2018 From: robin at icir.org (Robin Sommer) Date: Tue, 13 Feb 2018 07:44:33 -0800 Subject: [Bro-Dev] 'async' update and proposal In-Reply-To: <20180208180155.avwgnnh4lgfttqud@user134.sys.ICSI.Berkeley.EDU> References: <20180126194003.GA1786@icir.org> <20180127052530.3l4vnuhjoi6rnvs2@Beezling.local> <7289248a-4c95-9729-1fd0-59010cdd84ed@gmail.com> <20180129170014.GA39249@icir.org> <20180130153842.GB39249@icir.org> <20180208180155.avwgnnh4lgfttqud@user134.sys.ICSI.Berkeley.EDU> Message-ID: <20180213154433.GL96562@icir.org> On Thu, Feb 08, 2018 at 10:01 -0800, Johanna wrote: > I just wanted to quickly chime in here to say that I generally like the > idea of having these contexts. Sounds like we all like that idea. Now the question is if we want to wait for that to materialize (which will take quite a bit more brainstorming and then implementation, obviously), or if we want to get async in in the current state and then put that on the TODO list? I can see arguments either way, curious what others think. Robin -- Robin Sommer * ICSI/LBNL * robin at icir.org * www.icir.org/robin From robin at icir.org Tue Feb 13 08:31:24 2018 From: robin at icir.org (Robin Sommer) Date: Tue, 13 Feb 2018 08:31:24 -0800 Subject: [Bro-Dev] Shipping CAF with Broker? 
Message-ID: <20180213163124.GP96562@icir.org> I was wondering the other day if we could add CAF as submodule to Broker, and then just start compiling it along with everything else. A long time ago we decided generally against shipping dependencies along with Bro, but in this case it might make people's lives quite a bit easier, as hardly any Bro user will have CAF installed already. And even if they had (say from an earlier Bro version), it might not be the right version. If we included it with Broker, things would just work. We could even go a step further and compile CAF statically into libbroker, so that in the end from a user's perspective all they deal with is Broker: if they link against it, they get everything they need. Would that make sense? Robin -- Robin Sommer * ICSI/LBNL * robin at icir.org * www.icir.org/robin From robin at icir.org Tue Feb 13 08:35:49 2018 From: robin at icir.org (Robin Sommer) Date: Tue, 13 Feb 2018 08:35:49 -0800 Subject: [Bro-Dev] Queueing in Broker? Message-ID: <20180213163549.GQ96562@icir.org> One more Broker idea: I'm thinking we should add a queuing mechanism to Broker that buffers outgoing messages for a while when a peer goes down. Once it comes back up, we'd pass them on. That way an endpoint could restart for example without us loosing data. I'm not immediately sure how/where we'd integrate that. For outgoing messages, we could add it to the transparent reconnect. However, for incoming connections, where the local endpoint doesn't have a notion of "that peer should be coming back", it might not be as straight forward? Robin -- Robin Sommer * ICSI/LBNL * robin at icir.org * www.icir.org/robin From seth at corelight.com Tue Feb 13 08:36:16 2018 From: seth at corelight.com (Seth Hall) Date: Tue, 13 Feb 2018 11:36:16 -0500 Subject: [Bro-Dev] Shipping CAF with Broker? In-Reply-To: <20180213163124.GP96562@icir.org> References: <20180213163124.GP96562@icir.org> Message-ID: <86A3BF1C-A142-47EE-93B9-C34DDFF91AF7@corelight.com> On 13 Feb 2018, at 11:31, Robin Sommer wrote: > We could even go a step further and compile CAF statically into > libbroker, so that in the end from a user's perspective all they deal > with is Broker: if they link against it, they get everything they > need. > > Would that make sense? I think we're quite a ways off from CAF being generally packaged with most operating systems (especially having the correct version to work with broker). I think that including it with broker and building libcaf into libbroker statically makes sense too. I've been concerned about the difficulty of building Bro from source too. .Seth -- Seth Hall * Corelight, Inc * www.corelight.com From jazoff at illinois.edu Tue Feb 13 08:45:31 2018 From: jazoff at illinois.edu (Azoff, Justin S) Date: Tue, 13 Feb 2018 16:45:31 +0000 Subject: [Bro-Dev] Shipping CAF with Broker? In-Reply-To: <86A3BF1C-A142-47EE-93B9-C34DDFF91AF7@corelight.com> References: <20180213163124.GP96562@icir.org> <86A3BF1C-A142-47EE-93B9-C34DDFF91AF7@corelight.com> Message-ID: > On Feb 13, 2018, at 11:36 AM, Seth Hall wrote: > > > > On 13 Feb 2018, at 11:31, Robin Sommer wrote: > >> We could even go a step further and compile CAF statically into >> libbroker, so that in the end from a user's perspective all they deal >> with is Broker: if they link against it, they get everything they >> need. >> >> Would that make sense? > > I think we're quite a ways off from CAF being generally packaged with > most operating systems (especially having the correct version to work > with broker). 
I think that including it with broker and building libcaf > into libbroker statically makes sense too. I've been concerned about > the difficulty of building Bro from source too. > > .Seth This will also make pip install pybroker feasible, which would be a huge usability improvement for integrating external systems with bro. ? Justin Azoff From jsiwek at corelight.com Tue Feb 13 09:02:48 2018 From: jsiwek at corelight.com (Jon Siwek) Date: Tue, 13 Feb 2018 11:02:48 -0600 Subject: [Bro-Dev] 'async' update and proposal In-Reply-To: <20180213154433.GL96562@icir.org> References: <20180126194003.GA1786@icir.org> <20180127052530.3l4vnuhjoi6rnvs2@Beezling.local> <7289248a-4c95-9729-1fd0-59010cdd84ed@gmail.com> <20180129170014.GA39249@icir.org> <20180130153842.GB39249@icir.org> <20180208180155.avwgnnh4lgfttqud@user134.sys.ICSI.Berkeley.EDU> <20180213154433.GL96562@icir.org> Message-ID: On Tue, Feb 13, 2018 at 9:44 AM, Robin Sommer wrote: > Sounds like we all like that idea. Now the question is if we want to > wait for that to materialize (which will take quite a bit more > brainstorming and then implementation, obviously), or if we want to > get async in in the current state and then put that on the TODO list? > I can see arguments either way, curious what others think. Releasing the async feature by itself sounds unsafe. Due to the "code understandability" concern people had, it didn't sound like there was a big demand to start using it without a solution to that anyway, and releasing async by itself risks that solution falling through or being delayed/unusable for an extended period, resulting in "weird code" being written in the meantime and possibly never rewritten later on because "it still technically works". I'd maybe even go about implementing context/scope by branching off the async branch, just in case there's details that arise that make it less suitable than originally thought and have to go back to the drawing board. - Jon From johanna at icir.org Tue Feb 13 09:03:21 2018 From: johanna at icir.org (Johanna Amann) Date: Tue, 13 Feb 2018 09:03:21 -0800 Subject: [Bro-Dev] 'async' update and proposal In-Reply-To: <20180213154433.GL96562@icir.org> References: <20180126194003.GA1786@icir.org> <20180127052530.3l4vnuhjoi6rnvs2@Beezling.local> <7289248a-4c95-9729-1fd0-59010cdd84ed@gmail.com> <20180129170014.GA39249@icir.org> <20180130153842.GB39249@icir.org> <20180208180155.avwgnnh4lgfttqud@user134.sys.ICSI.Berkeley.EDU> <20180213154433.GL96562@icir.org> Message-ID: <4334F626-1343-4588-9D33-8ED0B5098FD5@icir.org> On 13 Feb 2018, at 7:44, Robin Sommer wrote: > On Thu, Feb 08, 2018 at 10:01 -0800, Johanna wrote: > >> I just wanted to quickly chime in here to say that I generally like >> the >> idea of having these contexts. > > Sounds like we all like that idea. Now the question is if we want to > wait for that to materialize (which will take quite a bit more > brainstorming and then implementation, obviously), or if we want to > get async in in the current state and then put that on the TODO list? > I can see arguments either way, curious what others think. I have one obvious fear if we put it in now. Which is that it will be more or less put off forever :). And we might end up with a pretty annoying situation where people (and we) start using it and you end up with all these really hard to debug corner-cases with event ordering. 
This is already kind of hard to do, especially with protocols that have a bunch of events - I managed to get this wrong a few times in the case of SSL where some parts of the connections where not logged correctly in some circumstances. Basically - I fear that this will make it really hard to write scripts that work correctly in all cases. Johanna From johanna at icir.org Tue Feb 13 09:04:09 2018 From: johanna at icir.org (Johanna Amann) Date: Tue, 13 Feb 2018 09:04:09 -0800 Subject: [Bro-Dev] Shipping CAF with Broker? In-Reply-To: <86A3BF1C-A142-47EE-93B9-C34DDFF91AF7@corelight.com> References: <20180213163124.GP96562@icir.org> <86A3BF1C-A142-47EE-93B9-C34DDFF91AF7@corelight.com> Message-ID: On 13 Feb 2018, at 8:36, Seth Hall wrote: > On 13 Feb 2018, at 11:31, Robin Sommer wrote: > >> We could even go a step further and compile CAF statically into >> libbroker, so that in the end from a user's perspective all they deal >> with is Broker: if they link against it, they get everything they >> need. >> >> Would that make sense? > > I think we're quite a ways off from CAF being generally packaged with > most operating systems (especially having the correct version to work > with broker). I think that including it with broker and building > libcaf > into libbroker statically makes sense too. I've been concerned about > the difficulty of building Bro from source too. I agree - while I see the argument of not packaging external stuff, I think we should make an exception here. Johanna From jsiwek at corelight.com Tue Feb 13 09:12:07 2018 From: jsiwek at corelight.com (Jon Siwek) Date: Tue, 13 Feb 2018 11:12:07 -0600 Subject: [Bro-Dev] Shipping CAF with Broker? In-Reply-To: <20180213163124.GP96562@icir.org> References: <20180213163124.GP96562@icir.org> Message-ID: On Tue, Feb 13, 2018 at 10:31 AM, Robin Sommer wrote: > I was wondering the other day if we could add CAF as submodule to > Broker Sounds fine to me. Also means one less variable (CAF version) to get under control when troubleshooting/debugging reported issues. - Jon From seth at corelight.com Tue Feb 13 09:15:48 2018 From: seth at corelight.com (Seth Hall) Date: Tue, 13 Feb 2018 12:15:48 -0500 Subject: [Bro-Dev] Queueing in Broker? In-Reply-To: <20180213163549.GQ96562@icir.org> References: <20180213163549.GQ96562@icir.org> Message-ID: <36EC7119-02C1-489D-8E64-B8EFD05C238F@corelight.com> On 13 Feb 2018, at 11:35, Robin Sommer wrote: > One more Broker idea: I'm thinking we should add a queuing mechanism > to Broker that buffers outgoing messages for a while when a peer goes > down. Once it comes back up, we'd pass them on. That way an endpoint > could restart for example without us loosing data. Yes! > I'm not immediately sure how/where we'd integrate that. For outgoing > messages, we could add it to the transparent reconnect. However, for > incoming connections, where the local endpoint doesn't have a notion > of "that peer should be coming back", it might not be as straight > forward? I can imagine being able to define queue length and queue (byte) size for consumers and producers might be interesting. As a producer: Keep up to 1000 messages and/or 1MByte of data. As a consumer: Only be willing to receive up to the 1000 most recent message or up to 1MByte of data. I still haven't spent time with the broker API to see if these thoughts actually make sense though. 
:) .Seth -- Seth Hall * Corelight, Inc * www.corelight.com From jsiwek at corelight.com Tue Feb 13 09:41:21 2018 From: jsiwek at corelight.com (Jon Siwek) Date: Tue, 13 Feb 2018 11:41:21 -0600 Subject: [Bro-Dev] Queueing in Broker? In-Reply-To: <20180213163549.GQ96562@icir.org> References: <20180213163549.GQ96562@icir.org> Message-ID: On Tue, Feb 13, 2018 at 10:35 AM, Robin Sommer wrote: > One more Broker idea: I'm thinking we should add a queuing mechanism > to Broker that buffers outgoing messages for a while when a peer goes > down. Once it comes back up, we'd pass them on. That way an endpoint > could restart for example without us loosing data. If the goals is to prevent loss of data, then don't we need more than just buffering, like message acknowledgements from the peer? e.g. you can think your peer is up, send a message, then immediately find out it went offline and so the message got lost "in the middle". You would also need to keep the message buffered until receiving an ACK from *all* peers that are subscribed (and the subscription list is a potentially moving target) ? And if you still planned on message routing/auto-forwarding being more widely used, I think you would want to buffer the message while the longest subscribed *path* has a down node? > I'm not immediately sure how/where we'd integrate that. For outgoing > messages, we could add it to the transparent reconnect. However, for > incoming connections, where the local endpoint doesn't have a notion > of "that peer should be coming back", it might not be as straight > forward? Yeah, I'm also unclear if there's anyway you can tell if the peer is supposed to be permanent vs. transient in come cases. Last observation is that I think any of these types of changes would be to the internal messaging pattern/protocol and so maybe reasonable to change/improve in subsequent releases in a way that's transparent to users. - Jon From robin at icir.org Wed Feb 14 07:43:16 2018 From: robin at icir.org (Robin Sommer) Date: Wed, 14 Feb 2018 07:43:16 -0800 Subject: [Bro-Dev] Shipping CAF with Broker? In-Reply-To: <20180213163124.GP96562@icir.org> References: <20180213163124.GP96562@icir.org> Message-ID: <20180214154316.GA9589@icir.org> Sounds like everybody likes this idea. Jon, want to take a stab at it? Seems like something we should do before merging the branch into master so that we get everybody gets on the right track right away. Let's try the the static library approach: link CAF statically into libbroker. I'm not 100% sure if that's straight-forward to do, but I hope so ... Robin -- Robin Sommer * ICSI/LBNL * robin at icir.org * www.icir.org/robin From dominik.charousset at haw-hamburg.de Wed Feb 14 07:52:01 2018 From: dominik.charousset at haw-hamburg.de (Dominik Charousset) Date: Wed, 14 Feb 2018 16:52:01 +0100 Subject: [Bro-Dev] Shipping CAF with Broker? In-Reply-To: References: <20180213163124.GP96562@icir.org> Message-ID: > On Tue, Feb 13, 2018 at 10:31 AM, Robin Sommer wrote: >> I was wondering the other day if we could add CAF as submodule to >> Broker I can understand where you?re coming from. Dependency management in C++ is lackluster, to say the least. However, baking CAF into Broker in this way is not trivial and can break many, many things. CAF headers are included in public broker headers. This means users of libbroker must be able to find those headers in the system paths. You could install CAF headers via CMake along the Broker headers, but that would override any other local CAF installation. 
Worse, if a user installs CAF from a different source afterwards you could produce an ABI clash when compiling a libbroker application with the wrong CAF headers. It might even compile fine, as long as the API remains compatible, but produce very nasty hard-to-debug errors at runtime. Moving *all* CAF headers out of Broker headers could solve this, but it requires a lot of pimpl boilerplate code to make everything CAF opaque. Completely hiding away the CAF actor system also takes away any opportunity for integrations. For example, we are currently thinking about how we could integrate Broker with VAST. Locking away the actor system would take away many benefits of integrating Broker. For one, we would have to run two distinct actor systems in the same process instead of using one and getting all the scalability benefits from it. Two actor systems means two schedulers that either constantly fight for resources or leave performance on the table. If each scheduler gets only half of the available HW via config that would waste a lot of CPU bandwidth if one scheduler is idle but the other at capacity. Performance aside, not having access to the Broker actor system would potentially require us to duplicate a lot of code for using Broker types in VAST. Of course this is also prone to error now, because any change in Broker no longer automatically updates the serialization code in the now detached actor system. Long story short, doing this would shut a lot of doors for us. Did you consider using something like CMake?s External Project [1] feature or Conan [2]? There is a Conan recipe for CAF (contributed by a user a while ago) that could make life for Bro users easier. I?m happy to work on the recipe if it isn?t working out of the box right now, if that would work for you as an alternative. Dominik [1] https://cmake.org/cmake/help/v3.0/module/ExternalProject.html [2] https://www.conan.io/ From robin at icir.org Wed Feb 14 08:00:28 2018 From: robin at icir.org (Robin Sommer) Date: Wed, 14 Feb 2018 08:00:28 -0800 Subject: [Bro-Dev] Queueing in Broker? In-Reply-To: Message-ID: <20180214160028.GB9589@icir.org> On Tue, Feb 13, 2018 at 11:41 -0600, Jon wrote: > If the goals is to prevent loss of data, then don't we need more than > just buffering, like message acknowledgements from the peer? Yeah, I wouldn't see it as bullet-proof reliability, rather as a best effort "let's no needlesly drop stuff on the floor" kind of thing. I'm thinking less here of the cluster setting (where things can get complex and we'd usually restart everything anyways), and more of external agents streaming stuff into Bro, like with the osquery plugin. If one needs to restart the receiving-side Bro, it would be nice to not just drop any activity reported in the meantime. With that perspective, it would really just need just a bit of buffering of messages that cannot be sent out right now. And if in the end they still don't make it, that's not the end of the world. > And if you still planned on message routing/auto-forwarding being more > widely used, I think you would want to buffer the message while the > longest subscribed *path* has a down node? I was thinking to do the buffering at the routing/hop-level. The messsage would get as far as it can at first. If a peer is down that a node would have normally forwarded to, it'd buffer for a bit until that comes back (but I realize this makes it even more fuzzy which peers to wait for: in a flexible topology peers could come and go all the time; see below). 
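Just to make that a bit more concrete, here is a rough sketch of the kind of thing I have in mind. This is plain C++ with made-up names, not actual Broker/CAF API, and only meant to illustrate the best-effort semantics:

```cpp
#include <cstddef>
#include <deque>
#include <functional>
#include <string>
#include <unordered_map>

// Stand-in for whatever serialized form a queued message would take.
using Message = std::string;

// Best-effort buffer for one currently unreachable peer.
struct PendingBuffer {
    std::size_t max_messages = 1000;  // cap; a byte limit could work the same way
    std::deque<Message> messages;

    void enqueue(Message msg)
        {
        if ( messages.size() >= max_messages )
            messages.pop_front();     // best effort: drop the oldest entry
        messages.push_back(std::move(msg));
        }

    void flush(const std::function<void(const Message&)>& send)
        {
        for ( const auto& m : messages )
            send(m);                  // replay in the original order
        messages.clear();
        }
};

// Keyed by a stable peer name exchanged during the handshake, not by IP.
std::unordered_map<std::string, PendingBuffer> pending;
```

On reconnect of a named peer, the node would flush() that peer's buffer before resuming normal forwarding; anything beyond the cap just gets dropped.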
That said, I'm now wondering if such buffering functionality should really be located inside CAF, as that's in charge of low-level message propagation. > Yeah, I'm also unclear if there's anyway you can tell if the peer is > supposed to be permanent vs. transient in come cases. We could make that an explicit endpoint option: "for this peer, on disconnect buffer stuff it would normally receive until it comes back (subject to some limits)". We may need a better way to identify the same peer though, just IP probably wouldn't work well. Maybe through some ID/name sent during the handshake? One would need to configure such a name for peers when turning on the buffering. > Last observation is that I think any of these types of changes would > be to the internal messaging pattern/protocol and so maybe reasonable > to change/improve in subsequent releases in a way that's transparent > to users. Yeah, nothing to get in immediately, still needs some thinking. I'm getting the sense though that we'll need it for some applications, osquery being the main one on my mind. Robin -- Robin Sommer * ICSI/LBNL * robin at icir.org * www.icir.org/robin From robin at icir.org Wed Feb 14 08:02:45 2018 From: robin at icir.org (Robin Sommer) Date: Wed, 14 Feb 2018 08:02:45 -0800 Subject: [Bro-Dev] Queueing in Broker? In-Reply-To: <36EC7119-02C1-489D-8E64-B8EFD05C238F@corelight.com> References: <20180213163549.GQ96562@icir.org> <36EC7119-02C1-489D-8E64-B8EFD05C238F@corelight.com> Message-ID: <20180214160245.GC9589@icir.org> On Tue, Feb 13, 2018 at 12:15 -0500, Seth wrote: > As a producer: > As a consumer: Producer-side it should be easy to enforce limits, but consumer-side it seems more difficult as it would need either some kind of a handshake or a notion what data represents a buffered activity. Do you think consumer-side is important? We already can not prevent a peer from sending too much data during normal operation either. Robin -- Robin Sommer * ICSI/LBNL * robin at icir.org * www.icir.org/robin From dominik.charousset at haw-hamburg.de Wed Feb 14 08:39:47 2018 From: dominik.charousset at haw-hamburg.de (Dominik Charousset) Date: Wed, 14 Feb 2018 17:39:47 +0100 Subject: [Bro-Dev] Queueing in Broker? In-Reply-To: <20180214160028.GB9589@icir.org> References: <20180214160028.GB9589@icir.org> Message-ID: <6388A40F-1716-4656-B534-C456F7B57E6E@haw-hamburg.de> >> And if you still planned on message routing/auto-forwarding being more >> widely used, I think you would want to buffer the message while the >> longest subscribed *path* has a down node? > > I was thinking to do the buffering at the routing/hop-level. The > messsage would get as far as it can at first. If a peer is down that a > node would have normally forwarded to, it'd buffer for a bit until > that comes back (but I realize this makes it even more fuzzy which > peers to wait for: in a flexible topology peers could come and go all > the time; see below). > > That said, I'm now wondering if such buffering functionality should > really be located inside CAF, as that's in charge of low-level message > propagation. CAF already implements cumulative ACKs. Combine this with send buffers, snapshotting and a cluster manager and you have fault-tolerant pipelines with automatic redeployment/failover - in theory. That?s all up in the air of course, since we don?t have the manpower to fully flesh this out at the moment. 
However, many prerequisites are already there (such as ACKs on a per-batch level and customization points in stream mangers to deal with errors) that we could leverage for this use case. I think your use case is simple enough that we can make a few additions to CAF and then implement this in Broker-land. Let me outline a solution here: - on disconnect, keep the outbound path alive - add new data to path?s buffer up to maximum (or timeout) - include some form of unique identifier (host name? configured ID?) in handshakes - rebind and resume sends on an outbound path if a client reconnects An outbound path in a CAF stream is essentially a buffer with additional state for batch ID and credit bookkeeping. Does that outlined solution make sense? This would have "at least once" semantics, so the receiving peer can receive messages twice for anything it already processed but didn?t have the chance to ACK. Just pointing it out. Disclaimer: I?m weeks away from finishing work in my topic/streaming branch. After that point it?s straightforward to give you scaffold for this. >> Yeah, I'm also unclear if there's anyway you can tell if the peer is >> supposed to be permanent vs. transient in come cases. > > We could make that an explicit endpoint option: "for this peer, on > disconnect buffer stuff it would normally receive until it comes back > (subject to some limits)". We may need a better way to identify the > same peer though, just IP probably wouldn't work well. Maybe through > some ID/name sent during the handshake? One would need to configure > such a name for peers when turning on the buffering. Yes, I think a custom ID via the caf-application.ini is the simplest solution. Using the hostname is an option too, as long as users make sure hostnames in their network are unique. >> Last observation is that I think any of these types of changes would >> be to the internal messaging pattern/protocol and so maybe reasonable >> to change/improve in subsequent releases in a way that's transparent >> to users. > > Yeah, nothing to get in immediately, still needs some thinking. I'm > getting the sense though that we'll need it for some applications, > osquery being the main one on my mind. That?s good to know. I will keep this in mind as a topic for later, when my topic branch is merged back to master. Dominik From jsiwek at corelight.com Wed Feb 14 13:15:42 2018 From: jsiwek at corelight.com (Jon Siwek) Date: Wed, 14 Feb 2018 15:15:42 -0600 Subject: [Bro-Dev] Shipping CAF with Broker? In-Reply-To: References: <20180213163124.GP96562@icir.org> Message-ID: On Wed, Feb 14, 2018 at 9:52 AM, Dominik Charousset wrote: > CAF headers are included in public broker headers. Good point, I didn't remember that, it does complicate the situation. Though maybe it's still only more a problem for the less common use-case of someone actually wanting to use libbroker themselves. I think more commonly someone just wants to get Bro up and running and so there could possibly get away with static linking and not bothering to install headers. > Moving *all* CAF headers out of Broker headers could solve this, but it requires a lot of pimpl boilerplate code That does sound like a bunch of work someone probably would not enjoy doing :) Though Broker originally used to pimpl everything and that approach worked and seemed fine at the time. > Completely hiding away the CAF actor system also takes away any opportunity for integrations. Yeah, that all sounds less desirable and not sure what could be done about it. 
> [1] https://cmake.org/cmake/help/v3.0/module/ExternalProject.html Maybe as a last resort. Generally, I think I'd rather not -- every time I've tried to use them or seen them used there's been a "flakiness" to them that I wish I could describe in more logical terms. I also think every project where I've tried to use a CMake External Project myself has never actually made it into an actual release or at least any that are still used at all. I'm not sure there's a relation, though it also doesn't make me feel great about the prospect of using that same approach here. > [2] https://www.conan.io/ Could be something to consider/evaluate; never used it myself. At this point, I'd rather just stick to the current situation until it's actually clear what problems people will have (or take the time to complain about), if any. Then once pain points are better known, decide on what would improve the situation. - Jon From robin at icir.org Wed Feb 14 13:21:44 2018 From: robin at icir.org (Robin Sommer) Date: Wed, 14 Feb 2018 13:21:44 -0800 Subject: [Bro-Dev] 'async' update and proposal In-Reply-To: References: <20180126194003.GA1786@icir.org> <20180127052530.3l4vnuhjoi6rnvs2@Beezling.local> <7289248a-4c95-9729-1fd0-59010cdd84ed@gmail.com> <20180129170014.GA39249@icir.org> <20180130153842.GB39249@icir.org> <20180208180155.avwgnnh4lgfttqud@user134.sys.ICSI.Berkeley.EDU> <20180213154433.GL96562@icir.org> Message-ID: <20180214212144.GN53121@icir.org> Ok, agree that it's best to postpone "async". My gut isn't quite as skeptical about its safety but I see the argument. (Although then nobody will be allowed to complain about "when" anymore ;-) Jon, if you want to think more about context/scoping it would be great to keep the concurrency aspects of this in mind as well, as this could eventually get us there, too. For more context here are a couple more pointers to past ideas that eventually led to that CCS paper I had pointed to earlier: - The scope-based concurrency model was originally described in Section 5.1 of this paper: http://www.icir.org/robin/papers/cc-multi-core.pdf. - I actually e found the old concurrency code from the Bro 1.x era). I'll send you a pointer to that. "grep '&scope'" over those policy scripts yields the output at the end of this mail. - I now also remember that we indeed needed the internal tracking of the current context; just relying on event parameters wasn't sufficient. For concurrency the most tricky parts are events triggered indirectly, like through timers, as they will often need to follow the same scheduling constraints as the original one (not sure if that applies to "async"). Robin --------- cut ------------------------------------------------------- superlinear/policy//bro.init:# &scope=". superlinear/policy//conn.bro: # determine_service() runs with &scope=pair. 
superlinear/policy//demux.bro:event _demux_conn(id: conn_id, tag: string, otag: string, rtag: string) &scope=connection(id)
superlinear/policy//dns.bro: msg: dns_msg, query: string) &scope=connection(c$id)
superlinear/policy//firewall.bro:event report_violation(c: connection, r:rule) &scope=connection(c$id)
superlinear/policy//ftp.bro:event add_to_first_seen_cmds(command: string) &scope=custom(command)
superlinear/policy//hot.bro:event check_hot_event(c: connection, state: count) &scope=connection(c$id)
superlinear/policy//icmp.bro:event update_flow(icmp: icmp_conn, id: count, is_orig: bool, payload: string) &scope=hostpair(icmp$orig_h, icmp$resp_h)
superlinear/policy//interconn.bro:event _remove_from_demuxed_conn(id: conn_id) &scope=connection(id)
superlinear/policy//nfs.bro:event nfs_request_getattr(n: connection, fh: string, attrs: nfs3_attrs) &scope=custom(fh)
superlinear/policy//nfs.bro:event nfs_attempt_getattr(n: connection, status: count, fh: string) &scope=custom(fh)
superlinear/policy//nfs.bro:event nfs_request_fsstat(n: connection, root_fh: string, stat: nfs3_fsstat) &scope=custom(root_fh)
superlinear/policy//nfs.bro:event nfs_attempt_fsstat(n: connection, status: count, root_fh: string) &scope=custom(root_fh)
superlinear/policy//notice-action-filters.bro:event notice_alarm_per_orig_tally(n: notice_info, host: addr) &scope=originator(host)
superlinear/policy//notice.bro:# will run with &scope connection so that we can store notice_tags.
superlinear/policy//notice.bro:event NOTICE_conn(n: notice_info) &scope=connection(n$conn$id)
superlinear/policy//portmapper.bro:event _do_pm_request(r: connection, proc: string, addl: string, log_it: bool) &scope=originator(r$id$orig_h)
superlinear/policy//scan.bro:event do_rpts_check(c: connection, num: count) &scope=hostpair(c$id$orig_h, c$id$resp_h)
superlinear/policy//scan.bro:event check_scan(c: connection, established: bool, reverse: bool) &scope=originator(reverse? c$id$resp_h : c$id$orig_h)
superlinear/policy//scan.bro:event account_tried(c: connection, user: string, passwd: string) &scope=originator(c$id$orig_h)
superlinear/policy//signatures.bro:event _do_signature_match_notice(state: signature_state, msg: string, data: string) &scope=originator(state$conn$id$orig_h)
superlinear/policy//signatures.bro:event _do_count_per_resp(state: signature_state, msg: string, data: string) &scope=responder(state$conn$id$resp_h)
superlinear/policy//signatures.bro:event _check_alarm_once(state: signature_state, msg: string, data: string) &scope=custom(state$id)
superlinear/policy//trw-impl.bro:event add_to_friendly_remotes(a: addr) &scope=originator(a)
superlinear/policy//trw-impl.bro:event check_TRW_scan(c: connection, state: string, reverse: bool) &scope=originator(reverse? c$id$resp_h : c$id$orig_h)
superlinear/policy//weird.bro:event report_weird_conn_once(t: time, name: string, addl: string, c: connection, action: NoticeAction) &scope=custom(name)
superlinear/policy//weird.bro:event net_weird(name: string) &scope=custom(name)
superlinear/policy//worm.bro:global worm_list: table[addr] of count &default=0 &read_expire = 2 days; #&scope=originator;
superlinear/policy//worm.bro: &default=0 &read_expire = 2 days &expire_func=expi; # &scope=originator;

--
Robin Sommer * ICSI/LBNL * robin at icir.org * www.icir.org/robin

On Tue, Feb 13, 2018 at 11:02 -0600, you wrote: > On Tue, Feb 13, 2018 at 9:44 AM, Robin Sommer wrote: > > > Sounds like we all like that idea.
Now the question is if we want to > wait for that to materialize (which will take quite a bit more > brainstorming and then implementation, obviously), or if we want to > get async in in the current state and then put that on the TODO list? > I can see arguments either way, curious what others think. > > Releasing the async feature by itself sounds unsafe. Due to the "code > understandability" concern people had, it didn't sound like there was > a big demand to start using it without a solution to that anyway, and > releasing async by itself risks that solution falling through or being > delayed/unusable for an extended period, resulting in "weird code" > being written in the meantime and possibly never rewritten later on > because "it still technically works". > > I'd maybe even go about implementing context/scope by branching off > the async branch, just in case there's details that arise that make it > less suitable than originally thought and have to go back to the > drawing board. > > - Jon > -- Robin Sommer * ICSI/LBNL * robin at icir.org * www.icir.org/robin

From robin at icir.org Wed Feb 14 13:26:08 2018
From: robin at icir.org (Robin Sommer)
Date: Wed, 14 Feb 2018 13:26:08 -0800
Subject: [Bro-Dev] Queueing in Broker?
In-Reply-To: <6388A40F-1716-4656-B534-C456F7B57E6E@haw-hamburg.de>
References: <20180214160028.GB9589@icir.org> <6388A40F-1716-4656-B534-C456F7B57E6E@haw-hamburg.de>
Message-ID: <20180214212608.GO53121@icir.org>

On Wed, Feb 14, 2018 at 17:39 +0100, you wrote:

> I think your use case is simple enough that we can make a few additions to CAF and then implement this in Broker-land. Let me outline a solution here:

Yeah, that sounds like a good plan to me and should make the remaining parts on the Broker side pretty straightforward.

> This would have "at least once" semantics, so the receiving peer can > receive messages twice for anything it already processed but didn't > have the chance to ACK. Just pointing it out.

Hmm ... Need to think about that. More than once could be a problem for some use cases, we might need to add a way to recognize duplicates.

Robin

--
Robin Sommer * ICSI/LBNL * robin at icir.org * www.icir.org/robin

From robin at icir.org Wed Feb 14 13:41:56 2018
From: robin at icir.org (Robin Sommer)
Date: Wed, 14 Feb 2018 13:41:56 -0800
Subject: [Bro-Dev] Shipping CAF with Broker?
In-Reply-To:
References: <20180213163124.GP96562@icir.org>
Message-ID: <20180214214156.GQ53121@icir.org>

On Wed, Feb 14, 2018 at 15:15 -0600, you wrote:

> > CAF headers are included in public broker headers. > > Good point, I didn't remember that, it does complicate the situation.

Yeah, same here, I didn't think about that part either, it's definitely a concern. Not immediately sure if there's a middle ground that would give us the best of both worlds: ease of installation for Bro users, while remaining flexible for external Broker/CAF usages. But agree that this needs more thought before moving ahead with anything. Thanks for bringing that up, Dominik.

Robin

--
Robin Sommer * ICSI/LBNL * robin at icir.org * www.icir.org/robin

From jsiwek at corelight.com Sun Feb 18 15:50:40 2018
From: jsiwek at corelight.com (Jon Siwek)
Date: Sun, 18 Feb 2018 17:50:40 -0600
Subject: [Bro-Dev] [Bro-Commits] [git/bro] master: Update Mozilla CA list to state of NSS 3.35.
(8ea7de9) In-Reply-To: <201802161906.w1GJ6DXK011481@bro-ids.icir.org> References: <201802161906.w1GJ6DXK011481@bro-ids.icir.org> Message-ID: This update breaks a unit test: scripts.base.files.x509.signed_certificate_timestamp - Jon On Fri, Feb 16, 2018 at 12:53 PM, Johanna Amann wrote: > Repository : ssh://git at bro-ids.icir.org/bro > On branch : master > Link : https://github.com/bro/bro/commit/8ea7de9380e2045e393b3bc22f6be0fa252a94ba > >>--------------------------------------------------------------- > > commit 8ea7de9380e2045e393b3bc22f6be0fa252a94ba > Author: Johanna Amann > Date: Fri Feb 16 10:52:13 2018 -0800 > > Update Mozilla CA list to state of NSS 3.35. > > >>--------------------------------------------------------------- > > 8ea7de9380e2045e393b3bc22f6be0fa252a94ba > scripts/base/protocols/ssl/mozilla-ca-list.bro | 58 ++++++++------------------ > 1 file changed, 17 insertions(+), 41 deletions(-) > > Diff suppressed because of size. To see it, use: > > git diff-tree --root --patch-with-stat --no-color --ignore-space-at-eol --textconv --cc 8ea7de9380e2045e393b3bc22f6be0fa252a94ba > > > _______________________________________________ > bro-commits mailing list > bro-commits at bro.org > http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-commits From johanna at icir.org Sun Feb 18 19:52:01 2018 From: johanna at icir.org (Johanna Amann) Date: Sun, 18 Feb 2018 19:52:01 -0800 Subject: [Bro-Dev] [Bro-Commits] [git/bro] master: Update Mozilla CA list to state of NSS 3.35. (8ea7de9) In-Reply-To: References: <201802161906.w1GJ6DXK011481@bro-ids.icir.org> Message-ID: Sorry about that - I ran the SSL tests but forgot the x.509 tests. I will fix that by tuesday at the very latest. (At the moment I have a bit spotty Internet) Johanna On 18 Feb 2018, at 15:50, Jon Siwek wrote: > This update breaks a unit test: > scripts.base.files.x509.signed_certificate_timestamp > > - Jon > > On Fri, Feb 16, 2018 at 12:53 PM, Johanna Amann > wrote: >> Repository : ssh://git at bro-ids.icir.org/bro >> On branch : master >> Link : >> https://github.com/bro/bro/commit/8ea7de9380e2045e393b3bc22f6be0fa252a94ba >> >>> --------------------------------------------------------------- >> >> commit 8ea7de9380e2045e393b3bc22f6be0fa252a94ba >> Author: Johanna Amann >> Date: Fri Feb 16 10:52:13 2018 -0800 >> >> Update Mozilla CA list to state of NSS 3.35. >> >> >>> --------------------------------------------------------------- >> >> 8ea7de9380e2045e393b3bc22f6be0fa252a94ba >> scripts/base/protocols/ssl/mozilla-ca-list.bro | 58 >> ++++++++------------------ >> 1 file changed, 17 insertions(+), 41 deletions(-) >> >> Diff suppressed because of size. To see it, use: >> >> git diff-tree --root --patch-with-stat --no-color >> --ignore-space-at-eol --textconv --cc >> 8ea7de9380e2045e393b3bc22f6be0fa252a94ba >> >> >> _______________________________________________ >> bro-commits mailing list >> bro-commits at bro.org >> http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-commits > _______________________________________________ > bro-dev mailing list > bro-dev at bro.org > http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev From johanna at icir.org Tue Feb 20 09:04:08 2018 From: johanna at icir.org (Johanna Amann) Date: Tue, 20 Feb 2018 09:04:08 -0800 Subject: [Bro-Dev] [Bro-Commits] [git/bro] master: Update Mozilla CA list to state of NSS 3.35. (8ea7de9) In-Reply-To: References: <201802161906.w1GJ6DXK011481@bro-ids.icir.org> Message-ID: This is fixed - thanks again for pointing it out. 
Johanna

On 18 Feb 2018, at 19:52, Johanna Amann wrote: > Sorry about that - I ran the SSL tests but forgot the x.509 tests. > > I will fix that by tuesday at the very latest. (At the moment I have a > bit spotty Internet) > > Johanna > > On 18 Feb 2018, at 15:50, Jon Siwek wrote: > >> This update breaks a unit test: >> scripts.base.files.x509.signed_certificate_timestamp >> >> - Jon >> >> On Fri, Feb 16, 2018 at 12:53 PM, Johanna Amann >> wrote: >>> Repository : ssh://git at bro-ids.icir.org/bro >>> On branch : master >>> Link : >>> https://github.com/bro/bro/commit/8ea7de9380e2045e393b3bc22f6be0fa252a94ba >>> >>>> --------------------------------------------------------------- >>> >>> commit 8ea7de9380e2045e393b3bc22f6be0fa252a94ba >>> Author: Johanna Amann >>> Date: Fri Feb 16 10:52:13 2018 -0800 >>> >>> Update Mozilla CA list to state of NSS 3.35. >>> >>> >>>> --------------------------------------------------------------- >>> >>> 8ea7de9380e2045e393b3bc22f6be0fa252a94ba >>> scripts/base/protocols/ssl/mozilla-ca-list.bro | 58 >>> ++++++++------------------ >>> 1 file changed, 17 insertions(+), 41 deletions(-) >>> >>> Diff suppressed because of size. To see it, use: >>> >>> git diff-tree --root --patch-with-stat --no-color >>> --ignore-space-at-eol --textconv --cc >>> 8ea7de9380e2045e393b3bc22f6be0fa252a94ba >>> >>> >>> _______________________________________________ >>> bro-commits mailing list >>> bro-commits at bro.org >>> http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-commits >> _______________________________________________ >> bro-dev mailing list >> bro-dev at bro.org >> http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev > _______________________________________________ > bro-dev mailing list > bro-dev at bro.org > http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev

From mfernandez at mitre.org Fri Feb 23 07:09:47 2018
From: mfernandez at mitre.org (Fernandez, Mark I)
Date: Fri, 23 Feb 2018 15:09:47 +0000
Subject: [Bro-Dev] Bro SMB1 Issue in smb_cmd.log
Message-ID:

Bro-Dev Group,

ISSUE: I encountered an issue where Bro is not logging some rather significant SMB1 commands in the smb_cmd.log file. I understand that some SMB commands are deliberately omitted from the log (such as Negotiate Protocol, Session Setup, and Tree Connect); however, I observe that instances of NT Create and Delete are not being recorded. I also understand that some SMB messages are deliberately omitted based on the status code; but the status codes are STATUS_SUCCESS, so they should be logged. In this particular traffic sample, there are more than 100 SMB messages going back and forth in the TCP stream, but only the first several are recorded in smb_cmd.log, then it stops. Please help.

Bro Version: I am using the Bro v2.5.1 docker image I pulled from the following URL: https://hub.docker.com/r/rsmmr/hilti/

PCAP File: I downloaded the "smbtorture" pcap file from the Wireshark public repository, at the URL: https://wiki.wireshark.org/SampleCaptures?action=AttachFile&do=get&target=smbtorture.cap.gz

The issue I observe corresponds to stream #1 extracted from the file above, via filter: 'tcp.stream eq 1'. I attached a PCAP file containing stream #1 only.
PCAP Analysis of SMB Messages:
From the PCAP file, using Wireshark, the following sequence of SMB Messages is observed (summarized below as Request & Response pairs):

(01) Negotiate Protocol Req & Resp
(02) Session Setup AndX Req & Resp [x2]
(03) Tree Connect AndX Req & Resp
(04) Delete Req & Resp [file \torture_qfileinfo.txt]
(05) NT Create AndX Req & Resp [fid 4000, file \torture_qfileinfo.txt]
(06) Write AndX Req & Resp
(07) Trans2 Req & Resp
(08) Set Information2 Req & Resp
(09) Query Information2 Req & Resp
(10) Query Information Req & Resp
(11) Query Information2 Req & Resp
(12) Trans2 Req & Resp [x57]
(13) Close Req & Resp [fid 4000]
(14) NT Create AndX Req & Resp [fid 4001, file TORTUR~1.TXT]
(15) Close Req & Resp [fid 4001]
(16) Delete Req & Resp [file \torture_qfileinfo.txt -> formerly fid 4000]
(17) Tree Disconnect

Bro Analysis of smb_cmd.log:
The Bro smb_cmd.log records events (04) - (10). I understand that events (01) - (03) are deliberately omitted from the log, but I am concerned that nothing is logged after event (10), Query Information Req & Resp.

I think this is an important issue because the smb_cmd.log fails to record two significant events in this TCP stream:

(i) A second file is created in step (14)
(ii) The first file (created in step [05]) is deleted in step (16)

The SMB messages look well-formed in Wireshark. Nothing seems to be wrong. The SMB status code is STATUS_SUCCESS for the requests and the responses, so they should be logged.

Artifacts:
Attached are the following artifacts to help you reproduce the issue:

(a) ws_smbtorture_stream001.pcap (pcap of stream #1 only)
(b) test.bro script
(c) smb_cmd.log
(d) smb_files.log
(e) files.log
(f) conn.log
(g) packet_filter.log

Not sure what is going wrong. Please help.

Cheers,
Mark

-------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.icsi.berkeley.edu/pipermail/bro-dev/attachments/20180223/b8b29f80/attachment-0001.html -------------- next part -------------- A non-text attachment was scrubbed... Name: ws_smbtorture_stream001.pcap Type: application/octet-stream Size: 27117 bytes Desc: not available Url : http://mailman.icsi.berkeley.edu/pipermail/bro-dev/attachments/20180223/b8b29f80/attachment-0007.obj -------------- next part -------------- A non-text attachment was scrubbed... Name: test.bro Type: application/octet-stream Size: 105 bytes Desc: not available Url : http://mailman.icsi.berkeley.edu/pipermail/bro-dev/attachments/20180223/b8b29f80/attachment-0008.obj -------------- next part -------------- A non-text attachment was scrubbed... Name: smb_cmd.log Type: application/octet-stream Size: 2829 bytes Desc: not available Url : http://mailman.icsi.berkeley.edu/pipermail/bro-dev/attachments/20180223/b8b29f80/attachment-0009.obj -------------- next part -------------- A non-text attachment was scrubbed... Name: smb_files.log Type: application/octet-stream Size: 582 bytes Desc: not available Url : http://mailman.icsi.berkeley.edu/pipermail/bro-dev/attachments/20180223/b8b29f80/attachment-0010.obj -------------- next part -------------- A non-text attachment was scrubbed... Name: files.log Type: application/octet-stream Size: 726 bytes Desc: not available Url : http://mailman.icsi.berkeley.edu/pipermail/bro-dev/attachments/20180223/b8b29f80/attachment-0011.obj -------------- next part -------------- A non-text attachment was scrubbed...
Name: conn.log Type: application/octet-stream Size: 646 bytes Desc: not available Url : http://mailman.icsi.berkeley.edu/pipermail/bro-dev/attachments/20180223/b8b29f80/attachment-0012.obj -------------- next part -------------- A non-text attachment was scrubbed... Name: packet_filter.log Type: application/octet-stream Size: 253 bytes Desc: not available Url : http://mailman.icsi.berkeley.edu/pipermail/bro-dev/attachments/20180223/b8b29f80/attachment-0013.obj -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 6341 bytes Desc: not available Url : http://mailman.icsi.berkeley.edu/pipermail/bro-dev/attachments/20180223/b8b29f80/attachment-0001.bin From seth at corelight.com Fri Feb 23 15:06:53 2018 From: seth at corelight.com (Seth Hall) Date: Fri, 23 Feb 2018 18:06:53 -0500 Subject: [Bro-Dev] Bro SMB1 Issue in smb_cmd.log In-Reply-To: References: Message-ID: <20783968-2B3E-468C-90EC-1EE970DE876B@corelight.com> This is probably a bug. That smb torture pcap is a notoriously bad example (although it does exhibit some far, far edge case type of behavior). I deliberately did not use that pcap as an example while I was writing the SMB analyzer because it sent me down a lot of rabbit holes that didn't provide much benefit for the first run at the SMB analyzer. If you identify the bug, please report back. My experience is that just running down these bugs to the exact failure can take quite a while. .Seth On 23 Feb 2018, at 10:09, Fernandez, Mark I wrote: > Bro-Dev Group, > > ISSUE: I encountered an issue where Bro is not logging some rather > significant SMB1 commands in the smb_cmd.log file. I understand that > some > SMB commands are deliberately omitted from the log (such as Negotiate > Protocol, Session Setup, and Tree Connect); however, I observe that an > instance of NT Create and Delete are not being recorded. I also > understand > that some SMB messages are deliberately omitted based on the status > code; > but the status codes ire STATUS_SUCCESS, so it should be logged. In > this > particular traffic sample, there are more than 100+ SMB messages going > back > and forth in the TCP stream, but only first several are recorded in > smb_cmd.log, then it stops. Please help. > > Bro Version: > I am using the Bro v2.5.1 docker image I pulled from the following > URL: > https://hub.docker.com/r/rsmmr/hilti/ > > > PCAP File: > I downloaded the "smbtorture" pcap file from the Wireshark public > repository, at the URL: > > https://wiki.wireshark.org/SampleCaptures?action=AttachFile&do=get&target=sm > btorture.cap.gz > > The issue I observe corresponds to stream #1 extracted from the file > above, > via filter: 'tcp.stream eq 1'. I attached a PCAP file containing > stream #1 > only. 
> > PCAP Analysis of SMB Messages: >> From the PCAP file, using Wireshark, the following sequence of SMB >> Messages > are observed (summarized below as Request & Response pairs): > > (01) Negotiate Protocol Req & Resp > (02) Session Setup AndX Req & Resp [x2] > (03) Tree Connect AndX Req & Resp > (04) Delete Req & Resp [file \torture_qfileinfo.txt] > (05) NT Create AndX Req & Resp [fid 4000, file > \torture_qfileinfo.txt] > (06) Write AndX Req & Resp > (07) Trans2 Req & Resp > (08) Set Information2 Req & Resp > (09) Query Information2 Req & Resp > (10) Query Information Req & Resp > (11) Query Information2 Req & Resp > (12) Trans2 Req & Resp [x57] > (13) Close Req & Resp [fid 4000] > (14) NT Create AndX Req & Resp [fid 4001, file > TORTUR~1.TXT] > (15) Close Req & Resp [fid 4001] > (16) Delete Req & Resp [file \torture_qfileinfo.txt -> > formerly fid 4000] > (17) Tree Disconnect > > > Bro Analysis of smb_cmd.log: > The Bro smb_cmd.log records events (04) - (10). I understand that > events > (01) - (03) are deliberately omitted from the log, but I am concerned > that > nothing is logged after event (10), Query Information Req & Resp. > > I think this is an important issue because the smb_cmd.log fails to > record > two significant events in this TCP stream: > (i) A second file is created in step (14) > (ii) The first file (create in step [05]) is deleted > in step > (16) > > The SMB messages look well-formed in Wireshark. Nothing seems to be > wrong. > The SMB status code is STATUS_SUCCESS for the requests and the > responses, so > it should be logged. > > > Artifacts: > Attached are the following artifacts to help you reproduce the issue: > (a) ws_smbtorture_stream001.pcap (pcap of stream #1 > only) > (b) test.bro script > (c) smb_cmd.log > (d) smb_files.log > (e) files.log > (f) conn.log > (g) packet_filter.log > > > Not sure what is going wrong. Please help. > > Cheers, > Mark > _______________________________________________ > bro-dev mailing list > bro-dev at bro.org > http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev

--
Seth Hall * Corelight, Inc * www.corelight.com

-------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.icsi.berkeley.edu/pipermail/bro-dev/attachments/20180223/d9655684/attachment-0001.html

From 3110000099 at zju.edu.cn Mon Feb 26 17:50:48 2018
From: 3110000099 at zju.edu.cn (=?UTF-8?B?6ZmI5LqR6LaF?=)
Date: Tue, 27 Feb 2018 09:50:48 +0800 (GMT+08:00)
Subject: [Bro-Dev] Problem about BinPAC , wish to get your help
Message-ID: <60a04dc.23dae.161d4f47e57.Coremail.3110000099@zju.edu.cn>

Hi, I am very glad to write this email. I am a user of Bro and recently I started to use BinPAC, which is a subcomponent of Bro. After learning the syntax of BinPAC, I wrote some simple BinPAC programs and tested them. I ran into a problem during testing that really confused me, so I am writing to you in the hope of getting your help. I will describe my problem below.

My BinPAC version is 0.47. In the test I have two machines, A and B. One process on machine A sends messages to another process on machine B once per second. The message is in this format:

la (uint32) + lb (uint32) + s (a random string whose length is not fixed)

In the message, "la" and "lb" are both the length of the string "s". For example, la = 26, lb = 26, s = "abcdefghijklmnopqrstuvwxyz". Another example: la = 10, lb = 10, s = "helloworld". So I wrote a BinPAC program (see file_1.pac in attachment) and it worked as expected.
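[Editor's note: to make the framing above concrete, here is a minimal standalone C++ sketch of that wire format (two big-endian 32-bit length fields followed by the payload). The function name and error handling are purely illustrative; this is independent of BinPAC and of the attached .pac files.]

    #include <cstddef>
    #include <cstdint>
    #include <stdexcept>
    #include <string>

    // One message: la (uint32, big-endian) + lb (uint32, big-endian) + s (la bytes).
    // Returns the payload string; throws if the buffer does not yet hold a full message.
    std::string parse_message(const unsigned char* data, std::size_t len) {
        if (len < 8)
            throw std::runtime_error("need at least the two length fields");

        auto read_be32 = [](const unsigned char* p) -> std::uint32_t {
            return (std::uint32_t(p[0]) << 24) | (std::uint32_t(p[1]) << 16) |
                   (std::uint32_t(p[2]) << 8)  |  std::uint32_t(p[3]);
        };

        std::uint32_t la = read_be32(data);      // length field #1
        std::uint32_t lb = read_be32(data + 4);  // length field #2, should equal la

        if (la != lb)
            throw std::runtime_error("length fields disagree");
        if (len < 8 + static_cast<std::size_t>(la))
            throw std::runtime_error("truncated payload");

        return std::string(reinterpret_cast<const char*>(data + 8), la);
    }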
But when I made a small change to the BinPAC program (see file_2.pac in attachment), a bug appeared. In the first case, I defined a type "header" which contains "la : uint32" and "lb : uint32", and I defined another type "body" which only contains "s : bytestring". And I defined a high-level type which contains "header" and "body". Then I printed out "la" and the length of "s". It showed that the program worked properly; the output looked like this:

238 238
309 309
311 311
339 339
344 344
252 252
290 290
312 312
298 298
300 300
281 281
...

That is what I want. The first number in each line is "la" and the second number is the length of "s".

But when I did not define the "header" type and instead wrote "la : uint32 ; lb : uint32" directly in the high-level type, it failed to work; I mean, nothing was printed out. In "file_2.pac", I wrote this:

type trans_pdu(is_orig: bool) = record {
    la:   uint32 &byteorder=bigendian;
    lb:   uint32 &byteorder=bigendian;
    body: trans_body(x) &requires(x);
} &let {
    x = la;
} &length = x + 8;

I read the file generated by binpac ("file_2.cc"), and I added 2 lines to "file_2.cc" to debug it and see what would happen (the additional code just prints out the buffering state, the address of "t_begin_of_data", the address of "t_end_of_data", and "throw"). The following is part of the code of "file_2.cc". Only the two "printf" statements were added by me.

bool trans_pdu::ParseBuffer(flow_buffer_t t_flow_buffer, Contexttrans * t_context)
    {
    bool t_val_parsing_complete;
    t_val_parsing_complete = false;
    const_byteptr t_begin_of_data = t_flow_buffer->begin();
    const_byteptr t_end_of_data = t_flow_buffer->end();
    printf("buffering state: %d t_begin_of_data: %d t_end_of_data: %d \n", buffering_state_, (void*)t_begin_of_data, (void*)t_end_of_data);
    switch ( buffering_state_ )
        {
        case 0:
            if ( buffering_state_ == 0 )
                {
                t_flow_buffer->NewFrame(4, false);
                buffering_state_ = 1;
                }
            buffering_state_ = 1;
            break;
        case 1:
            {
            buffering_state_ = 2;
            // Checking out-of-bound for "trans_pdu:lb"
            if ( (t_begin_of_data + 4) + (4) > t_end_of_data ||
                 (t_begin_of_data + 4) + (4) < (t_begin_of_data + 4) )
                {
                printf("throw\n");
                // Handle out-of-bound condition
                throw binpac::ExceptionOutOfBound("trans_pdu:lb",
                    (4) + (4), (t_end_of_data) - (t_begin_of_data));
                }
            // Parse "la"
            la_ = FixByteOrder(bigendian, *((uint32 const *) (t_begin_of_data)));
            // Evaluate 'let' and 'withinput' fields
            x_ = la();
            t_flow_buffer->GrowFrame(x() + 8);
            }
            break;
        case 2:
            BINPAC_ASSERT(t_flow_buffer->ready());
            if ( t_flow_buffer->ready() )
                {
                // Parse "lb"
                lb_ = FixByteOrder(bigendian, *((uint32 const *) ((t_begin_of_data + 4))));
                // Evaluate 'let' and 'withinput' fields
                // Parse "body"
                body_ = new trans_body(x());
                int t_body__size;
                t_body__size = body_->Parse((t_begin_of_data + 8), t_end_of_data);
                // Evaluate 'let' and 'withinput' fields
                t_val_parsing_complete = true;
                if ( t_val_parsing_complete )
                    {
                    // Evaluate 'let' and 'withinput' fields
                    proc_ = t_context->flow()->proc_sample_message(this);
                    }
                BINPAC_ASSERT(t_val_parsing_complete);
                buffering_state_ = 0;
                }
            break;
        default:
            BINPAC_ASSERT(buffering_state_ <= 2);
            break;
        }
    return t_val_parsing_complete;
    }

void trans_flow::NewData(const_byteptr t_begin_of_data, const_byteptr t_end_of_data)
    {
    ......
    ......
    while ( ! t_dataunit_parsing_complete && flow_buffer_->ready() )
        {
        const_byteptr t_begin_of_data = flow_buffer()->begin();
        const_byteptr t_end_of_data = flow_buffer()->end();
        t_dataunit_parsing_complete = dataunit_->ParseBuffer(flow_buffer(), context_);
        if ( t_dataunit_parsing_complete )
            {
            // Evaluate 'let' and 'withinput' fields
            }
        }
    ......
    ......
    }

and the following is the output:

buffering state: 0 t_begin_of_data: 44665856 t_end_of_data: 44665856
buffering state: 1 t_begin_of_data: 44665856 t_end_of_data: 44665860
throw
buffering state: 0 t_begin_of_data: 44669328 t_end_of_data: 44669328
buffering state: 1 t_begin_of_data: 44669328 t_end_of_data: 44669332
throw
buffering state: 0 t_begin_of_data: 44687568 t_end_of_data: 44687568
buffering state: 1 t_begin_of_data: 44687568 t_end_of_data: 44687572
throw
buffering state: 0 t_begin_of_data: 44688176 t_end_of_data: 44688176
buffering state: 1 t_begin_of_data: 44688176 t_end_of_data: 44688180
throw
...

I guess that in "case 0" the statement "t_flow_buffer->NewFrame(4, false)" is used to create a 4-byte frame to parse "la", since "la" occupies the first 4 bytes of the message and BinPAC needs it to evaluate the length of the message? After this operation, "t_begin_of_data" points to the beginning of the message and "t_end_of_data" points to the end of "la". But then "case 1" takes place: "lb" is checked for being out of bounds. "t_begin_of_data + 4" is the beginning of "lb" and "4" is the length of "lb", but "t_end_of_data" still points to the end of "la". So the condition "(t_begin_of_data + 4) + (4) > t_end_of_data" was met, and the program did not go any further.

However, that is only the immediate cause, not the root cause. I guess the root cause is that I did not write the BinPAC code in the correct way somewhere, or maybe there is a small bug in BinPAC. I really hope to use BinPAC for some protocol analysis. I tried to read the source code of BinPAC but failed to understand it. I don't know what to do and I really want to get your help! Wish to get your reply! Good luck!

Yunchao Chen
2018.2.27

-------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.icsi.berkeley.edu/pipermail/bro-dev/attachments/20180227/e7d312f4/attachment-0001.html -------------- next part -------------- A non-text attachment was scrubbed... Name: file_2.pac Type: application/octet-stream Size: 1082 bytes Desc: not available Url : http://mailman.icsi.berkeley.edu/pipermail/bro-dev/attachments/20180227/e7d312f4/attachment-0002.obj -------------- next part -------------- A non-text attachment was scrubbed... Name: file_1.pac Type: application/octet-stream Size: 1150 bytes Desc: not available Url : http://mailman.icsi.berkeley.edu/pipermail/bro-dev/attachments/20180227/e7d312f4/attachment-0003.obj