From John.Odell at dell.com Tue Mar 5 10:29:06 2019 From: John.Odell at dell.com (ODell, John) Date: Tue, 5 Mar 2019 18:29:06 +0000 Subject: [Zeek-Dev] Zeek using Network attached storage Message-ID: I have a customer that will be storing PB's of data and they will be using Zeek to analyze it (not all of it at once). They would like to use a NAS (network attached storage) and have asked me to validate that it will work. I have gone thru the documents but do not see any references to NAS or external storage. Any assistance would be greatly appreciated. Thank You, John W. O'Dell -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.icsi.berkeley.edu/pipermail/zeek-dev/attachments/20190305/17063987/attachment.html From oldpopsong at qq.com Wed Mar 6 06:14:19 2019 From: oldpopsong at qq.com (=?ISO-8859-1?B?U29uZw==?=) Date: Wed, 6 Mar 2019 22:14:19 +0800 Subject: [Zeek-Dev] BinPac generated code: checking out-of-bound too early Message-ID: Hi, I'm writing a BinPac flowunit analyzer, a PDU is like below: type test_pdu = record { lenAB : uint32; # length of rest of data lenA : uint16; # length of dataA dataA : bytestring &length = lenA; dataB : bytestring &length = (lenAB - 2 - lenA); } &byteorder=bigendian &length=(lenAB + 4); There are 2 problems: 1. binpac failed to compile (cannot handle incremental input) if I remove &length=(lenAB - 2 -lenA), although the overall length of the PDU can be calculated using the 4 field length 2. the generated parser seems to check out-of-bound of lenA field too early: 1577 bool test_pdu::ParseBuffer(flow_buffer_t t_flow_buffer) 1578 { 1579 bool t_val_parsing_complete; 1580 t_val_parsing_complete = false; 1581 const_byteptr t_begin_of_data = t_flow_buffer->begin(); 1582 const_byteptr t_end_of_data = t_flow_buffer->end(); 1583 switch ( buffering_state_ ) 1584 { 1585 case 0: 1586 if ( buffering_state_ == 0 ) 1587 { 1588 t_flow_buffer->NewFrame(4, false); 1589 buffering_state_ = 1; 1590 } 1591 buffering_state_ = 1; 1592 break; 1593 case 1: 1594 { 1595 buffering_state_ = 2; 1596 // Checking out-of-bound for "test_pdu:lenA" 1597 if ( (t_begin_of_data + 4) + (2) > t_end_of_data || (t_begin_of_data + 4) + (2) < (t_begin_of_data + 4) ) 1598 { 1599 // Handle out-of-bound condition 1600 throw binpac::ExceptionOutOfBound("test_pdu:lenA", 1601 (4) + (2), 1602 (t_end_of_data) - (t_begin_of_data)); 1603 } 1604 // Parse "lenAB" 1605 lenAB_ = FixByteOrder(byteorder(), *((uint32 const *) (t_begin_of_data))); 1606 // Evaluate 'let' and 'withinput' fields 1607 t_flow_buffer->GrowFrame( ( lenAB() + 4 ) ); 1608 } 1609 break; Since we only make a new frame of length 4 in line #1588 (the flow buffer will not grow to full size until line #1607), the test in line #1597 will be evaluated to true and the parsing will fail. What did I missed? Thanks in advance. Best regards, Song -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.icsi.berkeley.edu/pipermail/zeek-dev/attachments/20190306/4548c81e/attachment.html From vlad at es.net Wed Mar 6 07:19:35 2019 From: vlad at es.net (Vlad Grigorescu) Date: Wed, 6 Mar 2019 09:19:35 -0600 Subject: [Zeek-Dev] BinPac generated code: checking out-of-bound too early In-Reply-To: References: Message-ID: I?d try reworking this, so that you have lenAB, and then another record for the rest of the data, and just setting the length of that record to lenAB. Does that work? 
?Vlad On Wed, Mar 6, 2019 at 08:17 Song wrote: > Hi, > > I'm writing a BinPac flowunit analyzer, a PDU is like below: > > type test_pdu = record { > lenAB : uint32; # length of rest of data > lenA : uint16; # length of dataA > dataA : bytestring &length = lenA; > dataB : bytestring &length = (lenAB - 2 - lenA); > } &byteorder=bigendian &length=(lenAB + 4); > > There are 2 problems: > > 1. binpac failed to compile (cannot handle incremental input) if I remove > &length=(lenAB - 2 -lenA), although the overall length of the PDU can be > calculated using the 4 field length > > 2. the generated parser seems to check out-of-bound of lenA field too > early: > > 1577 bool test_pdu::ParseBuffer(flow_buffer_t t_flow_buffer) > 1578 { > 1579 bool t_val_parsing_complete; > 1580 t_val_parsing_complete = false; > 1581 const_byteptr t_begin_of_data = t_flow_buffer->begin(); > 1582 const_byteptr t_end_of_data = t_flow_buffer->end(); > 1583 switch ( buffering_state_ ) > 1584 { > 1585 case 0: > 1586 if ( buffering_state_ == 0 ) > 1587 { > 1588 t_flow_buffer->NewFrame(4, false); > 1589 buffering_state_ = 1; > 1590 } > 1591 buffering_state_ = 1; > 1592 break; > 1593 case 1: > 1594 { > 1595 buffering_state_ = 2; > 1596 // Checking out-of-bound for "test_pdu:lenA" > 1597 if ( (t_begin_of_data + 4) + (2) > t_end_of_data || > (t_begin_of_data + 4) + (2) < (t_begin_of_data + 4) ) > 1598 { > 1599 // Handle out-of-bound condition > 1600 throw binpac::ExceptionOutOfBound("test_pdu:lenA", > 1601 (4) + (2), > 1602 (t_end_of_data) - (t_begin_of_data)); > 1603 } > 1604 // Parse "lenAB" > 1605 lenAB_ = FixByteOrder(byteorder(), *((uint32 const *) > (t_begin_of_data))); > 1606 // Evaluate 'let' and 'withinput' fields > 1607 t_flow_buffer->GrowFrame( ( lenAB() + 4 ) ); > 1608 } > 1609 break; > > Since we only make a new frame of length 4 in line #1588 (the flow buffer > will not grow to full size until line #1607), the test in line #1597 will > be evaluated to true and the parsing will fail. > > What did I missed? Thanks in advance. > > Best regards, > Song > > _______________________________________________ > zeek-dev mailing list > zeek-dev at zeek.org > http://mailman.icsi.berkeley.edu/mailman/listinfo/zeek-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.icsi.berkeley.edu/pipermail/zeek-dev/attachments/20190306/1fed8691/attachment.html From oldpopsong at qq.com Wed Mar 6 19:19:22 2019 From: oldpopsong at qq.com (=?gb18030?B?U29uZw==?=) Date: Thu, 7 Mar 2019 11:19:22 +0800 Subject: [Zeek-Dev] BinPac generated code: checking out-of-bound too early In-Reply-To: References: Message-ID: Thanks Vlad. 
I tried as you suggestd and it works: type test_pdu = record { lenAB : uint32; pduAB : test_pdu_ab(lenAB); } &byteorder=bigendian &length=(lenAB + 4); # fail to compile without &length type test_pdu_ab(len: uint32) = record { lenA : uint16; dataA : bytestring &length = lenA; dataB : bytestring &length = (len - 2 - lenA); } &byteorder=bigendian; # do not need &length here The result is correct: 1577 bool test_pdu::ParseBuffer(flow_buffer_t t_flow_buffer) 1578 { 1579 bool t_val_parsing_complete; 1580 t_val_parsing_complete = false; 1581 const_byteptr t_begin_of_data = t_flow_buffer->begin(); 1582 const_byteptr t_end_of_data = t_flow_buffer->end(); 1583 switch ( buffering_state_ ) 1584 { 1585 case 0: 1586 if ( buffering_state_ == 0 ) 1587 { 1588 t_flow_buffer->NewFrame(4, false); 1589 buffering_state_ = 1; 1590 } 1591 buffering_state_ = 1; 1592 break; 1593 case 1: 1594 { 1595 buffering_state_ = 2; 1596 // Checking out-of-bound for "test_pdu:lenAB" 1597 if ( t_begin_of_data + (4) > t_end_of_data || t_begin_of_data + (4) < t_begin_of_data ) 1598 { 1599 // Handle out-of-bound condition 1600 throw binpac::ExceptionOutOfBound("test_pdu:lenAB", 1601 (0) + (4), 1602 (t_end_of_data) - (t_begin_of_data)); 1603 } 1604 // Parse "lenAB" 1605 lenAB_ = FixByteOrder(byteorder(), *((uint32 const *) (t_begin_of_data))); 1606 // Evaluate 'let' and 'withinput' fields 1607 t_flow_buffer->GrowFrame( ( lenAB() + 4 ) ); 1608 } 1609 break; 1610 case 2: 1611 BINPAC_ASSERT(t_flow_buffer->ready()); 1612 if ( t_flow_buffer->ready() ) 1613 { 1614 1615 // Parse "pduAB" 1616 pduAB_ = new test_pdu_ab(lenAB()); 1617 int t_pduAB__size; 1618 t_pduAB__size = pduAB_->Parse((t_begin_of_data + 4), t_end_of_data); ...... ...... 1649 1650 int test_pdu_ab::Parse(const_byteptr const t_begin_of_data, const_byteptr const t_end_of_data) 1651 { 1652 // Checking out-of-bound for "test_pdu_ab:lenA" 1653 if ( t_begin_of_data + (2) > t_end_of_data || t_begin_of_data + (2) < t_begin_of_data ) 1654 { 1655 // Handle out-of-bound condition 1656 throw binpac::ExceptionOutOfBound("test_pdu_ab:lenA", 1657 (0) + (2), 1658 (t_end_of_data) - (t_begin_of_data)); 1659 } 1660 // Parse "lenA" 1661 lenA_ = FixByteOrder(byteorder(), *((uint16 const *) (t_begin_of_data))); 1662 // Evaluate 'let' and 'withinput' fields 1663 1664 // Parse "dataA" 1665 int t_dataA__size; 1666 t_dataA__size = lenA(); 1667 // Checking out-of-bound for "test_pdu_ab:dataA" 1668 if ( (t_begin_of_data + 2) + (t_dataA__size) > t_end_of_data || (t_begin_of_data + 2) + (t_dataA__size) < (t_begin_of_data + 2) ) 1669 { Does it necessary to do those out-of-bound checks like in line #1597 #1653 #1668? I think we have already got filled frame with correct size when these checks are performed. Thanks again and best regards, Song ------------------ Original ------------------ From: "Vlad Grigorescu"; Date: Wed, Mar 6, 2019 11:19 PM To: "Song"; Cc: "zeek-dev at zeek.org"; Subject: Re: [Zeek-Dev] BinPac generated code: checking out-of-bound too early I?d try reworking this, so that you have lenAB, and then another record for the rest of the data, and just setting the length of that record to lenAB. Does that work? ?Vlad On Wed, Mar 6, 2019 at 08:17 Song wrote: Hi, I'm writing a BinPac flowunit analyzer, a PDU is like below: type test_pdu = record { lenAB : uint32; # length of rest of data lenA : uint16; # length of dataA dataA : bytestring &length = lenA; dataB : bytestring &length = (lenAB - 2 - lenA); } &byteorder=bigendian &length=(lenAB + 4); There are 2 problems: 1. 
binpac failed to compile (cannot handle incremental input) if I remove &length=(lenAB - 2 -lenA), although the overall length of the PDU can be calculated using the 4 field length 2. the generated parser seems to check out-of-bound of lenA field too early: 1577 bool test_pdu::ParseBuffer(flow_buffer_t t_flow_buffer) 1578 { 1579 bool t_val_parsing_complete; 1580 t_val_parsing_complete = false; 1581 const_byteptr t_begin_of_data = t_flow_buffer->begin(); 1582 const_byteptr t_end_of_data = t_flow_buffer->end(); 1583 switch ( buffering_state_ ) 1584 { 1585 case 0: 1586 if ( buffering_state_ == 0 ) 1587 { 1588 t_flow_buffer->NewFrame(4, false); 1589 buffering_state_ = 1; 1590 } 1591 buffering_state_ = 1; 1592 break; 1593 case 1: 1594 { 1595 buffering_state_ = 2; 1596 // Checking out-of-bound for "test_pdu:lenA" 1597 if ( (t_begin_of_data + 4) + (2) > t_end_of_data || (t_begin_of_data + 4) + (2) < (t_begin_of_data + 4) ) 1598 { 1599 // Handle out-of-bound condition 1600 throw binpac::ExceptionOutOfBound("test_pdu:lenA", 1601 (4) + (2), 1602 (t_end_of_data) - (t_begin_of_data)); 1603 } 1604 // Parse "lenAB" 1605 lenAB_ = FixByteOrder(byteorder(), *((uint32 const *) (t_begin_of_data))); 1606 // Evaluate 'let' and 'withinput' fields 1607 t_flow_buffer->GrowFrame( ( lenAB() + 4 ) ); 1608 } 1609 break; Since we only make a new frame of length 4 in line #1588 (the flow buffer will not grow to full size until line #1607), the test in line #1597 will be evaluated to true and the parsing will fail. What did I missed? Thanks in advance. Best regards, Song _______________________________________________ zeek-dev mailing list zeek-dev at zeek.org http://mailman.icsi.berkeley.edu/mailman/listinfo/zeek-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.icsi.berkeley.edu/pipermail/zeek-dev/attachments/20190307/90b84d97/attachment.html From oldpopsong at qq.com Thu Mar 7 05:33:37 2019 From: oldpopsong at qq.com (=?ISO-8859-1?B?U29uZw==?=) Date: Thu, 7 Mar 2019 21:33:37 +0800 Subject: [Zeek-Dev] binpac crash triggered by exportsourcedata Message-ID: Hi, I define a PDU like below: type test_pdu = record { lenAB : uint32; pduAB : test_pdu_ab(lenAB); } &length=(lenAB + 4), &exportsourcedata; # fail to compile without &length, &exportsourcedata will cause binpac crash type test_pdu_ab(len: uint32) = record { lenA : uint16; dataA : bytestring &length = lenA; dataB : bytestring &length = (len - 2 - lenA); } &exportsourcedata; # &exportsourcedata here is OK The error message is: binpac: /home/grid/git/zeek/aux/binpac/src/pac_type.cc:857: std::__cxx11::string Type::EvalLengthExpr(Output*, Env*): Assertion `!incremental_input()' failed. Aborted (core dumped) Best regards, Song -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.icsi.berkeley.edu/pipermail/zeek-dev/attachments/20190307/ceadbbf2/attachment-0001.html From john at alt.house Thu Mar 7 08:58:17 2019 From: john at alt.house (John Althouse) Date: Thu, 7 Mar 2019 11:58:17 -0500 Subject: [Zeek-Dev] Google QUIC Protocol Analyzer Message-ID: Is there a Zeek QUIC Analyzer that anyone is aware of? I know Corelight has this: https://github.com/corelight/bro-quic but as far as I can tell, it just identifies QUIC traffic, it doesn't actually provide any metadata. There's a lot of juicy information in the packets so I may have a go at writing my first analyzer followed by a JA3-style fingerprinting method - I just wanted to check here to make sure I'm not duplicating efforts. 
Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.icsi.berkeley.edu/pipermail/zeek-dev/attachments/20190307/1e4c459b/attachment.html From dopheide at es.net Thu Mar 7 10:11:39 2019 From: dopheide at es.net (Michael Dopheide) Date: Thu, 7 Mar 2019 12:11:39 -0600 Subject: [Zeek-Dev] Google QUIC Protocol Analyzer In-Reply-To: References: Message-ID: I'm not aware of anyone else working on it. I'd originally taken a stab at identifying Google QUIC as well as the IETF draft versions, but as Jon pointed out to me, those are just draft and we'd have to keep changing them. I can also verify from doing that that we saw zero IETF quic traffic in the wild. I would initially suggest forking corelight's version and then doing a pull request with your added features rather than reinventing the wheel. -Dop On Thu, Mar 7, 2019 at 10:59 AM John Althouse wrote: > Is there a Zeek QUIC Analyzer that anyone is aware of? > > I know Corelight has this: https://github.com/corelight/bro-quic but as > far as I can tell, it just identifies QUIC traffic, it doesn't actually > provide any metadata. There's a lot of juicy information in the packets so > I may have a go at writing my first analyzer followed by a JA3-style > fingerprinting method - I just wanted to check here to make sure I'm not > duplicating efforts. > > Thanks! > _______________________________________________ > zeek-dev mailing list > zeek-dev at zeek.org > http://mailman.icsi.berkeley.edu/mailman/listinfo/zeek-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.icsi.berkeley.edu/pipermail/zeek-dev/attachments/20190307/93b1438c/attachment.html From oldpopsong at qq.com Thu Mar 7 20:00:48 2019 From: oldpopsong at qq.com (=?ISO-8859-1?B?U29uZw==?=) Date: Fri, 8 Mar 2019 12:00:48 +0800 Subject: [Zeek-Dev] Fwd: Re: Fwd: binpac crash triggered by exportsourcedata In-Reply-To: <20190307150507.E2A72F42@centrum.cz> References: <20190307150507.E2A72F42@centrum.cz> Message-ID: Thank you Ronka. It is a flowunit analyzer. I checked zeek source tree and found that there is only 1 flowunit analyzer (tls-handshake) uses exportsourcedata directive. I guess that exportsourcedata only apply to non-incremental types. Maybe these are true: - all types in a datagram analyzer can use exportsourcedata directive - only non-incremental types in a flowunit analyzer can use exportsourcedata But I'm not sure about what is non-incremental type, I have to check the generated code. The reason that I want sourcedata field is that I want to feed the whole test_pdu to another analyzer. Now as a workaround, I have to do something like this: test_rpc->DeliverStream(${data}.length() + 4, ${data}.begin() - 4, is_orig); to bring back the first 4 bytes to form the original whole PDU. Maybe I should try datagram analyzer. Song ------------------ Original ------------------ From: "ronka_mata"; Date: Thu, Mar 7, 2019 10:05 PM To: "Song"; Cc: "zeek-dev"; Subject: Fwd: Re: Fwd: [Zeek-Dev] binpac crash triggered by exportsourcedata Hi, What might help is checking how you defined the the PDU in .pac file. If it is datagram, mostly used for DNS type traffic or if it is flowunit. You can read more on it here https://github.com/zeek/binpac/blob/master/README.rst#flow You do not need to define length for datagrams. Look at other protocols for example of differences. Eg radius for datagrams and smb for flows. 
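The flowunit/datagram distinction referred to above is made in the analyzer's `flow` declaration in the .pac file. A minimal sketch of the two variants, with made-up analyzer and PDU names (the real examples to compare are Zeek's RADIUS and SMB analyzers):

```
# A minimal sketch of the two flow styles; analyzer and PDU names are
# made up. "datagram" hands each packet to the parser as one complete
# unit (no top-level &length needed), while "flowunit" buffers the TCP
# stream and drives incremental parsing from &length.
flow Example_UDP_Flow {
	datagram = Example_PDU withcontext(connection, this);
};

flow Example_TCP_Flow(is_orig: bool) {
	flowunit = Example_PDU(is_orig) withcontext(connection, this);
};
```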
Ronka ---------- Forwarded message --------- From: Song Date: Thu, Mar 7, 2019, 13:40 Subject: [Zeek-Dev] binpac crash triggered by exportsourcedata To: zeek-dev Hi,I define a PDU like below: type test_pdu = record { lenAB : uint32; pduAB : test_pdu_ab(lenAB); } &length=(lenAB + 4), &exportsourcedata; # fail to compile without &length, &exportsourcedata will cause binpac crash type test_pdu_ab(len: uint32) = record { lenA : uint16; dataA : bytestring &length = lenA; dataB : bytestring &length = (len - 2 - lenA);} &exportsourcedata; # &exportsourcedata here is OK The error message is: binpac: /home/grid/git/zeek/aux/binpac/src/pac_type.cc:857: std::__cxx11::string Type::EvalLengthExpr(Output*, Env*): Assertion `!incremental_input()!` failed. Aborted (core dumped) -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.icsi.berkeley.edu/pipermail/zeek-dev/attachments/20190308/d9d1b0bb/attachment.html From oldpopsong at qq.com Fri Mar 8 00:26:17 2019 From: oldpopsong at qq.com (=?ISO-8859-1?B?U29uZw==?=) Date: Fri, 8 Mar 2019 16:26:17 +0800 Subject: [Zeek-Dev] Fwd: Re: Fwd: binpac crash triggered byexportsourcedata In-Reply-To: References: <20190307150507.E2A72F42@centrum.cz> Message-ID: It turns out that my workaround is wrong. Every field is a copy, so field.begin() - 4 will not give you the address of the data 4 bytes before the field. It may contains random data even be an illegal address. ------------------ Original ------------------ From: "Song"; Date: Fri, Mar 8, 2019 12:00 PM To: "ronka_mata"; Cc: "zeek-dev at zeek.org"; Subject: Re: [Zeek-Dev] Fwd: Re: Fwd: binpac crash triggered byexportsourcedata Thank you Ronka. It is a flowunit analyzer. I checked zeek source tree and found that there is only 1 flowunit analyzer (tls-handshake) uses exportsourcedata directive. I guess that exportsourcedata only apply to non-incremental types. Maybe these are true: - all types in a datagram analyzer can use exportsourcedata directive - only non-incremental types in a flowunit analyzer can use exportsourcedata But I'm not sure about what is non-incremental type, I have to check the generated code. The reason that I want sourcedata field is that I want to feed the whole test_pdu to another analyzer. Now as a workaround, I have to do something like this: test_rpc->DeliverStream(${data}.length() + 4, ${data}.begin() - 4, is_orig); to bring back the first 4 bytes to form the original whole PDU. Maybe I should try datagram analyzer. Song ------------------ Original ------------------ From: "ronka_mata"; Date: Thu, Mar 7, 2019 10:05 PM To: "Song"; Cc: "zeek-dev"; Subject: Fwd: Re: Fwd: [Zeek-Dev] binpac crash triggered by exportsourcedata Hi, What might help is checking how you defined the the PDU in .pac file. If it is datagram, mostly used for DNS type traffic or if it is flowunit. You can read more on it here https://github.com/zeek/binpac/blob/master/README.rst#flow You do not need to define length for datagrams. Look at other protocols for example of differences. Eg radius for datagrams and smb for flows. 
Ronka ---------- Forwarded message --------- From: Song Date: Thu, Mar 7, 2019, 13:40 Subject: [Zeek-Dev] binpac crash triggered by exportsourcedata To: zeek-dev Hi,I define a PDU like below: type test_pdu = record { lenAB : uint32; pduAB : test_pdu_ab(lenAB); } &length=(lenAB + 4), &exportsourcedata; # fail to compile without &length, &exportsourcedata will cause binpac crash type test_pdu_ab(len: uint32) = record { lenA : uint16; dataA : bytestring &length = lenA; dataB : bytestring &length = (len - 2 - lenA);} &exportsourcedata; # &exportsourcedata here is OK The error message is: binpac: /home/grid/git/zeek/aux/binpac/src/pac_type.cc:857: std::__cxx11::string Type::EvalLengthExpr(Output*, Env*): Assertion `!incremental_input()!` failed. Aborted (core dumped) -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.icsi.berkeley.edu/pipermail/zeek-dev/attachments/20190308/a25f9f81/attachment.html From vern at corelight.com Fri Mar 8 11:24:47 2019 From: vern at corelight.com (Vern Paxson) Date: Fri, 08 Mar 2019 11:24:47 -0800 Subject: [Zeek-Dev] Zeek using Network attached storage In-Reply-To: (Tue, 05 Mar 2019 18:29:06 GMT). Message-ID: <20190308192447.278AD2C7733@rock.ICSI.Berkeley.EDU> > They would like to use a NAS (network attached storage) and have asked > me to validate that it will work. Zeek will happily read pcaps from Unix files. Assuming that's the interface that the NAS provides, sure this will work. Maybe though I'm not understanding the question, as it seems quite straightforward. Vern From michalpurzynski1 at gmail.com Fri Mar 8 11:42:58 2019 From: michalpurzynski1 at gmail.com (=?UTF-8?B?TWljaGHFgiBQdXJ6ecWEc2tp?=) Date: Fri, 8 Mar 2019 11:42:58 -0800 Subject: [Zeek-Dev] Zeek using Network attached storage In-Reply-To: <20190308192447.278AD2C7733@rock.ICSI.Berkeley.EDU> References: <20190308192447.278AD2C7733@rock.ICSI.Berkeley.EDU> Message-ID: Just make sure you do not sniff the interface that you use for packet storage. Nothing like a positive feedback loop that can happen because of some crazy network configurations ;] On Fri, Mar 8, 2019 at 11:31 AM Vern Paxson wrote: > > They would like to use a NAS (network attached storage) and have asked > > me to validate that it will work. > > Zeek will happily read pcaps from Unix files. Assuming that's the > interface > that the NAS provides, sure this will work. Maybe though I'm not > understanding > the question, as it seems quite straightforward. > > Vern > _______________________________________________ > zeek-dev mailing list > zeek-dev at zeek.org > http://mailman.icsi.berkeley.edu/mailman/listinfo/zeek-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.icsi.berkeley.edu/pipermail/zeek-dev/attachments/20190308/9bdcebb8/attachment-0001.html From John.Odell at dell.com Fri Mar 8 13:01:57 2019 From: John.Odell at dell.com (ODell, John) Date: Fri, 8 Mar 2019 21:01:57 +0000 Subject: [Zeek-Dev] Zeek using Network attached storage In-Reply-To: References: <20190308192447.278AD2C7733@rock.ICSI.Berkeley.EDU>, Message-ID: <9A998E08-3746-46E6-9F91-A1BF921DC22F@emc.com> Thank you for the feedback! I am not familiar with Zeek and my customer wanted a guarantee they can use a NAS device (nfs) to store collected data for long term analysis. Sounds like they can use it!!! Great news! Thank You, Dell/EMC -Federal - Isilon John W. ODell 614-309-3085 On Mar 8, 2019, at 11:43, Micha? 
Purzy?ski > wrote: [EXTERNAL EMAIL] Just make sure you do not sniff the interface that you use for packet storage. Nothing like a positive feedback loop that can happen because of some crazy network configurations ;] On Fri, Mar 8, 2019 at 11:31 AM Vern Paxson > wrote: > They would like to use a NAS (network attached storage) and have asked > me to validate that it will work. Zeek will happily read pcaps from Unix files. Assuming that's the interface that the NAS provides, sure this will work. Maybe though I'm not understanding the question, as it seems quite straightforward. Vern _______________________________________________ zeek-dev mailing list zeek-dev at zeek.org http://mailman.icsi.berkeley.edu/mailman/listinfo/zeek-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.icsi.berkeley.edu/pipermail/zeek-dev/attachments/20190308/5181db45/attachment.html From oldpopsong at qq.com Fri Mar 8 19:13:28 2019 From: oldpopsong at qq.com (=?utf-8?B?U29uZw==?=) Date: Sat, 9 Mar 2019 11:13:28 +0800 Subject: [Zeek-Dev] Fwd: Re: Fwd: binpac crash triggered by exportsourcedata In-Reply-To: <20190308150823.A8807891@centrum.cz> References: <20190307150507.E2A72F42@centrum.cz> <20190308150823.A8807891@centrum.cz> Message-ID: Hi Ronka, The protocol I'm trying to analyze supports multiple authentication methods, including SASL Kerberos GSSAPI. After authentication, according to the authentication method chosen and security layer negotiated, the RPC requests/responses followed could be in plain text, signed or encrypted. In the plain text form, the PDU is like: <4 bytes length field> While in signed or encrypted form, the outmost layer of PDU is like: <4 bytes length field> In the later case, the RPC requests/responses PDU (including the 4 bytes length field indicating the length of the request/response data) is encapsulated in the Wrap Tokens. It is possible that a big RPC request/response will be carried by multiple Wrap Token PDUs. So I have two analyzers: - controlling analyzer: deal with authentication and decryption, forward decrypted RPC PDU data to RPC analyzer - RPC analyzer: decode RPC request/response I need the &exportsourcedata for the plain text case in which the whole controlling analyzer PDU should be forwarded to the RPC analyzer. Today I will try to change the type of controlling analyzer to datagram. Best regards, Song ------------------ Original ------------------ From: "ronka_mata"; Date: Fri, Mar 8, 2019 10:08 PM To: "Song"; Cc: "zeek-dev at zeek.org"; Subject: Re: Fwd: Re: Fwd: [Zeek-Dev] binpac crash triggered by exportsourcedata Hi Song, Could you explain a bit more what you are trying to achieve? Do you want to deliver the same data into two analyzers? Or is it just part of it? Or deliver to one if condition is met and to second one otherwise? Do you have to wait with second analyzer after it was passed by the first interpreter or can you call second in the deliverstream function? Make the second one into child analyzer and then call ForwardStream function or some similar approach? I understand where your problem is with your current code. I was not able to get around the len problem yet, but I will give it a go a bit later today, unless someone else knows solution first hand. For delivery of parts of stream data, defined as a bytestream, you can take example from smb-pipe.pac forward_dce_rpc funct. Hope this helps a bit. 
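As a concrete illustration of the smb-pipe.pac-style forwarding mentioned above, here is a rough sketch (not the actual SMB code) of a connection refinement that hands a parsed bytestring to a second analyzer. TEST_Conn, rpc_analyzer_ and forward_rpc are illustrative names for the controlling/RPC analyzer split described in this thread, and the child analyzer is assumed to be created and attached elsewhere.

```
# Rough sketch only, not the real smb-pipe.pac: forward a parsed
# bytestring from the controlling analyzer to a child RPC analyzer.
refine connection TEST_Conn += {
	%member{
		// Assumed to be set up by the controlling analyzer.
		analyzer::Analyzer* rpc_analyzer_;
	%}

	function forward_rpc(data: bytestring, is_orig: bool): bool
		%{
		if ( rpc_analyzer_ )
			rpc_analyzer_->DeliverStream(${data}.length(),
			                             ${data}.begin(), is_orig);
		return true;
		%}
};
```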
Ronka __________________________________________________________ > Od: "Song" > Komu: "ronka_mata" > Datum: 08.03.2019 05:01 > P?edm?t: Re: Fwd: Re: Fwd: [Zeek-Dev] binpac crash triggered by exportsourcedata > > CC: Thank you Ronka.It is a flowunit analyzer. I checked zeek source tree and found that there is only 1 flowunitanalyzer (tls-handshake) uses exportsourcedata directive. I guess that exportsourcedata onlyapply to non-incremental types. Maybe these are true: - all types in a datagram analyzer can use exportsourcedata directive - only non-incremental types in a flowunit analyzer can use exportsourcedataBut I'm not sure about what is non-incremental type, I have to check the generated code.The reason that I want sourcedata field is that I want to feed the whole test_pdu to anotheranalyzer. Now as a workaround, I have to do something like this: test_rpc->DeliverStream(${data}.length() + 4, ${data}.begin() - 4, is_orig);to bring back the first 4 bytes to form the original whole PDU.Maybe I should try datagram analyzer.Song------------------ Original ------------------From: "ronka_mata";Date: Thu, Mar 7, 2019 10:05 PMTo: "Song";Cc: "zee k-dev"; Subject: Fwd: Re: Fwd: [Zeek-Dev] binpac crash triggered by exportsourcedataHi,What might help is checking how you defined the the PDU in .pac file. If it is datagram, mostly used for DNS type traffic or if it is flowunit. You can read more on it here https://github.com/zeek/binpac/blob/master/README.rst#flowYou do not need to define length for datagrams. Look at other protocols for example of differences. Eg radius for datagrams and smb for flows.Ronka---------- Forwarded message ---------From: Song Date: Thu, Mar 7, 2019, 13:40Subject: [Zeek-Dev] binpac crash triggered by exportsourcedataTo: zeek-dev Hi,I define a PDU like below:type test_pdu = record { lenAB : uint32; pduAB : test_pdu_ab(lenAB);} &length=(lenAB + 4), &exportsourcedata; # fail to compile without &length, &exportsourcedata will cause binpac crashtype test_pdu_ab(len: uint32) = record { lenA : uint 16; dataA : bytestring &length = lenA; dataB : bytestring &length = (len - 2 - lenA);} &exportsourcedata; # &exportsourcedata here is OKThe error message is:binpac: /home/grid/git/zeek/aux/binpac/src/pac_type.cc:857: std::__cxx11::string Type::EvalLengthExpr(Output*, Env*): Assertion `!incremental_input()!` failed.Aborted (core dumped) -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.icsi.berkeley.edu/pipermail/zeek-dev/attachments/20190309/90ae9a78/attachment.html From oldpopsong at qq.com Sat Mar 9 01:39:53 2019 From: oldpopsong at qq.com (=?utf-8?B?U29uZw==?=) Date: Sat, 9 Mar 2019 17:39:53 +0800 Subject: [Zeek-Dev] Fwd: Re: Fwd: binpac crash triggered byexportsourcedata In-Reply-To: References: <20190307150507.E2A72F42@centrum.cz> <20190308150823.A8807891@centrum.cz> Message-ID: I tried to set flow type of the controlling analyzer to datagram and used &exportsourcedata. Although the resulted analyzer works great against my test pcap file, but after checking the code binpac generated I think the datagram analyzer is not suited to the TCP-based protocols. 
Below are the generated codes: 2308 void TEST_Flow::NewData(const_byteptr t_begin_of_data, const_byteptr t_end_of_data) 2309 { 2310 try 2311 { 2312 dataunit_ = new TEST_PDU(is_orig()); 2313 context_ = new ContextTEST(connection(), this); 2314 int t_dataunit__size; 2315 t_dataunit__size = dataunit_->Parse(t_begin_of_data, t_end_of_data, context_); 2316 // Evaluate 'let' and 'withinput' fields 2317 delete dataunit_; 2318 dataunit_ = 0; 2319 delete context_; 2320 context_ = 0; 2321 } 2322 catch ( binpac::Exception const &e ) 2323 { 2324 delete dataunit_; 2325 dataunit_ = 0; 2326 delete context_; 2327 context_ = 0; 2328 throw; 2329 } 2330 } Notice that in line #2312, every piece of data will be treated as a new PDU which obviously is not good for TCP data stream. I think now the only option I have is to build a new bytestring from the length and data fields and to feed it to the RPC analyzer. This solution is bad from the performance point of view since we have to do 2 extra memory copy: first to generate data field, second to regenerate the original whole PDU. ------------------ Original ------------------ From: "Song"; Date: Sat, Mar 9, 2019 11:13 AM To: "ronka_mata"; Cc: "zeek-dev at zeek.org"; Subject: Re: [Zeek-Dev] Fwd: Re: Fwd: binpac crash triggered byexportsourcedata Hi Ronka, The protocol I'm trying to analyze supports multiple authentication methods, including SASL Kerberos GSSAPI. After authentication, according to the authentication method chosen and security layer negotiated, the RPC requests/responses followed could be in plain text, signed or encrypted. In the plain text form, the PDU is like: <4 bytes length field> While in signed or encrypted form, the outmost layer of PDU is like: <4 bytes length field> In the later case, the RPC requests/responses PDU (including the 4 bytes length field indicating the length of the request/response data) is encapsulated in the Wrap Tokens. It is possible that a big RPC request/response will be carried by multiple Wrap Token PDUs. So I have two analyzers: - controlling analyzer: deal with authentication and decryption, forward decrypted RPC PDU data to RPC analyzer - RPC analyzer: decode RPC request/response I need the &exportsourcedata for the plain text case in which the whole controlling analyzer PDU should be forwarded to the RPC analyzer. Today I will try to change the type of controlling analyzer to datagram. Best regards, Song ------------------ Original ------------------ From: "ronka_mata"; Date: Fri, Mar 8, 2019 10:08 PM To: "Song"; Cc: "zeek-dev at zeek.org"; Subject: Re: Fwd: Re: Fwd: [Zeek-Dev] binpac crash triggered by exportsourcedata Hi Song, Could you explain a bit more what you are trying to achieve? Do you want to deliver the same data into two analyzers? Or is it just part of it? Or deliver to one if condition is met and to second one otherwise? Do you have to wait with second analyzer after it was passed by the first interpreter or can you call second in the deliverstream function? Make the second one into child analyzer and then call ForwardStream function or some similar approach? I understand where your problem is with your current code. I was not able to get around the len problem yet, but I will give it a go a bit later today, unless someone else knows solution first hand. For delivery of parts of stream data, defined as a bytestream, you can take example from smb-pipe.pac forward_dce_rpc funct. Hope this helps a bit. 
Ronka __________________________________________________________ > Od: "Song" > Komu: "ronka_mata" > Datum: 08.03.2019 05:01 > P?edm?t: Re: Fwd: Re: Fwd: [Zeek-Dev] binpac crash triggered by exportsourcedata > > CC: Thank you Ronka.It is a flowunit analyzer. I checked zeek source tree and found that there is only 1 flowunitanalyzer (tls-handshake) uses exportsourcedata directive. I guess that exportsourcedata onlyapply to non-incremental types. Maybe these are true: - all types in a datagram analyzer can use exportsourcedata directive - only non-incremental types in a flowunit analyzer can use exportsourcedataBut I'm not sure about what is non-incremental type, I have to check the generated code.The reason that I want sourcedata field is that I want to feed the whole test_pdu to anotheranalyzer. Now as a workaround, I have to do something like this: test_rpc->DeliverStream(${data}.length() + 4, ${data}.begin() - 4, is_orig);to bring back the first 4 bytes to form the original whole PDU.Maybe I should try datagram analyzer.Song------------------ Original ------------------From: "ronka_mata";Date: Thu, Mar 7, 2019 10:05 PMTo: "Song";Cc: "zee k-dev"; Subject: Fwd: Re: Fwd: [Zeek-Dev] binpac crash triggered by exportsourcedataHi,What might help is checking how you defined the the PDU in .pac file. If it is datagram, mostly used for DNS type traffic or if it is flowunit. You can read more on it here https://github.com/zeek/binpac/blob/master/README.rst#flowYou do not need to define length for datagrams. Look at other protocols for example of differences. Eg radius for datagrams and smb for flows.Ronka---------- Forwarded message ---------From: Song Date: Thu, Mar 7, 2019, 13:40Subject: [Zeek-Dev] binpac crash triggered by exportsourcedataTo: zeek-dev Hi,I define a PDU like below:type test_pdu = record { lenAB : uint32; pduAB : test_pdu_ab(lenAB);} &length=(lenAB + 4), &exportsourcedata; # fail to compile without &length, &exportsourcedata will cause binpac crashtype test_pdu_ab(len: uint32) = record { lenA : uint 16; dataA : bytestring &length = lenA; dataB : bytestring &length = (len - 2 - lenA);} &exportsourcedata; # &exportsourcedata here is OKThe error message is:binpac: /home/grid/git/zeek/aux/binpac/src/pac_type.cc:857: std::__cxx11::string Type::EvalLengthExpr(Output*, Env*): Assertion `!incremental_input()!` failed.Aborted (core dumped) -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.icsi.berkeley.edu/pipermail/zeek-dev/attachments/20190309/92a2f136/attachment-0001.html From oldpopsong at qq.com Sat Mar 9 06:05:28 2019 From: oldpopsong at qq.com (=?utf-8?B?U29uZw==?=) Date: Sat, 9 Mar 2019 22:05:28 +0800 Subject: [Zeek-Dev] Fwd: Re: Fwd: binpac crash triggeredbyexportsourcedata In-Reply-To: References: <20190307150507.E2A72F42@centrum.cz> <20190308150823.A8807891@centrum.cz> Message-ID: Just realized that the ASSERT() in binpac's pac_type.cc is only effective for DEBUG build, the non-debug version will accept &exportsourcedata for incremental inputs happily. But I need DBG_LOG() macro, so I just comment out 2 ASSERT() in pac_type.cc, everything goes smoothly so far :) diff --git a/src/pac_type.cc b/src/pac_type.cc index 1b827ea..eb8b868 100644 --- a/src/pac_type.cc +++ b/src/pac_type.cc @@ -837,7 +837,7 @@ bool Type::AddSizeVar(Output* out_cc, Env* env) if ( StaticSize(env) >= 0 ) return false; - ASSERT(! incremental_input()); + //ASSERT(! incremental_input()); ID *size_var_id = new ID(strfmt("%s__size", value_var() ? 
value_var()->Name() : decl_id()->Name())); @@ -854,7 +854,7 @@ bool Type::AddSizeVar(Output* out_cc, Env* env) string Type::EvalLengthExpr(Output* out_cc, Env* env) { - ASSERT(!incremental_input()); + //ASSERT(!incremental_input()); ASSERT(attr_length_expr_); int static_length; if ( attr_length_expr_->ConstFold(env, &static_length) ) I have checked the generated code, everything is fine except a harmless superflours statement near the end of the type's ParseBuffer() method: sourcedata_.set_end(t_begin_of_data + t_TEST_Req_plain__size); Really I can not think of a reason why incremental types cannot have a sourcedata field. ------------------ Original ------------------ From: "Song"; Date: Sat, Mar 9, 2019 05:39 PM To: "zeek-dev at zeek.org"; Cc: "ronka_mata"; Subject: Re: [Zeek-Dev] Fwd: Re: Fwd: binpac crash triggeredbyexportsourcedata I tried to set flow type of the controlling analyzer to datagram and used &exportsourcedata. Although the resulted analyzer works great against my test pcap file, but after checking the code binpac generated I think the datagram analyzer is not suited to the TCP-based protocols. Below are the generated codes: 2308 void TEST_Flow::NewData(const_byteptr t_begin_of_data, const_byteptr t_end_of_data) 2309 { 2310 try 2311 { 2312 dataunit_ = new TEST_PDU(is_orig()); 2313 context_ = new ContextTEST(connection(), this); 2314 int t_dataunit__size; 2315 t_dataunit__size = dataunit_->Parse(t_begin_of_data, t_end_of_data, context_); 2316 // Evaluate 'let' and 'withinput' fields 2317 delete dataunit_; 2318 dataunit_ = 0; 2319 delete context_; 2320 context_ = 0; 2321 } 2322 catch ( binpac::Exception const &e ) 2323 { 2324 delete dataunit_; 2325 dataunit_ = 0; 2326 delete context_; 2327 context_ = 0; 2328 throw; 2329 } 2330 } Notice that in line #2312, every piece of data will be treated as a new PDU which obviously is not good for TCP data stream. I think now the only option I have is to build a new bytestring from the length and data fields and to feed it to the RPC analyzer. This solution is bad from the performance point of view since we have to do 2 extra memory copy: first to generate data field, second to regenerate the original whole PDU. ------------------ Original ------------------ From: "Song"; Date: Sat, Mar 9, 2019 11:13 AM To: "ronka_mata"; Cc: "zeek-dev at zeek.org"; Subject: Re: [Zeek-Dev] Fwd: Re: Fwd: binpac crash triggered byexportsourcedata Hi Ronka, The protocol I'm trying to analyze supports multiple authentication methods, including SASL Kerberos GSSAPI. After authentication, according to the authentication method chosen and security layer negotiated, the RPC requests/responses followed could be in plain text, signed or encrypted. In the plain text form, the PDU is like: <4 bytes length field> While in signed or encrypted form, the outmost layer of PDU is like: <4 bytes length field> In the later case, the RPC requests/responses PDU (including the 4 bytes length field indicating the length of the request/response data) is encapsulated in the Wrap Tokens. It is possible that a big RPC request/response will be carried by multiple Wrap Token PDUs. So I have two analyzers: - controlling analyzer: deal with authentication and decryption, forward decrypted RPC PDU data to RPC analyzer - RPC analyzer: decode RPC request/response I need the &exportsourcedata for the plain text case in which the whole controlling analyzer PDU should be forwarded to the RPC analyzer. Today I will try to change the type of controlling analyzer to datagram. 
Best regards, Song ------------------ Original ------------------ From: "ronka_mata"; Date: Fri, Mar 8, 2019 10:08 PM To: "Song"; Cc: "zeek-dev at zeek.org"; Subject: Re: Fwd: Re: Fwd: [Zeek-Dev] binpac crash triggered by exportsourcedata Hi Song, Could you explain a bit more what you are trying to achieve? Do you want to deliver the same data into two analyzers? Or is it just part of it? Or deliver to one if condition is met and to second one otherwise? Do you have to wait with second analyzer after it was passed by the first interpreter or can you call second in the deliverstream function? Make the second one into child analyzer and then call ForwardStream function or some similar approach? I understand where your problem is with your current code. I was not able to get around the len problem yet, but I will give it a go a bit later today, unless someone else knows solution first hand. For delivery of parts of stream data, defined as a bytestream, you can take example from smb-pipe.pac forward_dce_rpc funct. Hope this helps a bit. Ronka __________________________________________________________ > Od: "Song" > Komu: "ronka_mata" > Datum: 08.03.2019 05:01 > P?edm?t: Re: Fwd: Re: Fwd: [Zeek-Dev] binpac crash triggered by exportsourcedata > > CC: Thank you Ronka.It is a flowunit analyzer. I checked zeek source tree and found that there is only 1 flowunitanalyzer (tls-handshake) uses exportsourcedata directive. I guess that exportsourcedata onlyapply to non-incremental types. Maybe these are true: - all types in a datagram analyzer can use exportsourcedata directive - only non-incremental types in a flowunit analyzer can use exportsourcedataBut I'm not sure about what is non-incremental type, I have to check the generated code.The reason that I want sourcedata field is that I want to feed the whole test_pdu to anotheranalyzer. Now as a workaround, I have to do something like this: test_rpc->DeliverStream(${data}.length() + 4, ${data}.begin() - 4, is_orig);to bring back the first 4 bytes to form the original whole PDU.Maybe I should try datagram analyzer.Song------------------ Original ------------------From: "ronka_mata";Date: Thu, Mar 7, 2019 10:05 PMTo: "Song";Cc: "zee k-dev"; Subject: Fwd: Re: Fwd: [Zeek-Dev] binpac crash triggered by exportsourcedataHi,What might help is checking how you defined the the PDU in .pac file. If it is datagram, mostly used for DNS type traffic or if it is flowunit. You can read more on it here https://github.com/zeek/binpac/blob/master/README.rst#flowYou do not need to define length for datagrams. Look at other protocols for example of differences. Eg radius for datagrams and smb for flows.Ronka---------- Forwarded message ---------From: Song Date: Thu, Mar 7, 2019, 13:40Subject: [Zeek-Dev] binpac crash triggered by exportsourcedataTo: zeek-dev Hi,I define a PDU like below:type test_pdu = record { lenAB : uint32; pduAB : test_pdu_ab(lenAB);} &length=(lenAB + 4), &exportsourcedata; # fail to compile without &length, &exportsourcedata will cause binpac crashtype test_pdu_ab(len: uint32) = record { lenA : uint 16; dataA : bytestring &length = lenA; dataB : bytestring &length = (len - 2 - lenA);} &exportsourcedata; # &exportsourcedata here is OKThe error message is:binpac: /home/grid/git/zeek/aux/binpac/src/pac_type.cc:857: std::__cxx11::string Type::EvalLengthExpr(Output*, Env*): Assertion `!incremental_input()!` failed.Aborted (core dumped) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.icsi.berkeley.edu/pipermail/zeek-dev/attachments/20190309/bab4807e/attachment-0001.html

From oldpopsong at qq.com Sat Mar 9 06:36:13 2019
From: oldpopsong at qq.com (=?utf-8?B?U29uZw==?=)
Date: Sat, 9 Mar 2019 22:36:13 +0800
Subject: [Zeek-Dev] Fwd: Re: Fwd: binpac crash triggered by exportsourcedata
In-Reply-To: <20190309111805.70BE63DB@centrum.cz>
References: <20190307150507.E2A72F42@centrum.cz> <20190308150823.A8807891@centrum.cz> <20190309111805.70BE63DB@centrum.cz>
Message-ID:

Hi Ronka,

Did you mean doing everything in a single analyzer? That would make things complicated. As I said, the clear text extracted from a single Wrap Token may be just a fragment of an RPC PDU, so we need to reassemble those fragments into a complete RPC PDU and then feed the resulting RPC PDU to an RPC type. The simplest way to do the reassembly that I can think of is to delegate the work to a dedicated RPC flowunit analyzer. Please note that this is a completely different analyzer: I have 2 analyzers in 1 plugin (two AddComponent() calls in Plugin.cc).

The other solution I could think of is to do the reassembly inside the flow or connection, maybe implemented with FlowBuffer. But I think that code will not be trivial (more state to keep, more boundary checks to do, buffer management ...) and I'm too lazy ...

Song

------------------ Original ------------------
From: "ronka_mata";
Date: Sat, Mar 9, 2019 06:18 PM
To: "Song"; "zeek-dev at zeek.org";
Cc: "zeek-dev";
Subject: Re: [Zeek-Dev] Fwd: Re: Fwd: binpac crash triggered by exportsourcedata

Did you think about keeping some member variable inside dce_Conn that would be used as a switch? Once you have established whether you are getting plain text or encrypted RPC, you can set the variable appropriately and make your PDU look like:

type test() = record {
    Len  : ...;
    Body : case $context.connection.my_switch_var() of {
        PLAIN -> plaint : test_abc(Len);
        ENCR  -> encrt  : ...;
    };
};

...and then test_ab can define the RPC request/response PDU, or forward to GSSAPI. You would have to reset the variable on every gap/connection restart, but you would not need to feed the data back again. Sorry, I am currently on my phone, so a proof of concept is coming later.

Ronka
____________________________________________________________
> Od: "Song"
> Komu: "zeek-dev at zeek.org" <>
> Datum: 09.03.2019 10:41
> Předmět: Re: [Zeek-Dev] Fwd: Re: Fwd: binpac crash triggered by exportsourcedata
>
> I tried to set the flow type of the controlling analyzer to datagram and used &exportsourcedata. Although the resulting analyzer works well against my test pcap file, after checking the code binpac generated I think the datagram analyzer is not suited to TCP-based protocols.
Below are the generated codes:2308 void TEST_Flow::NewData(const_byteptr t_begin_of_data, const_byteptr t_end_of_data)2309 {2310 try2311 {2312 dataunit_ = new TEST_PDU(is_orig());2313 context_ = new ContextTEST(connection(), this);2314 int t_dataunit__size;2315 t_dataunit__size = dataunit_->Parse(t_begin_of_data, t_end_of_data, context_);2316 // Evaluate 'let' and 'withinput' fields2317 delete dataunit_;2318 dataunit_ = 0;2319 delete context_;2320 context_ = 0;2321 }2322 catch ( binpac::Exception const &e )2323 {2324 delete dataunit_;2325 dataunit_ = 0;2326 delete context_;2327 context_ = 0;2328 throw;2329 }2330 }Notice that in line #2312, every piece of data will be treated as a new PDU which obviously is not good for TCPdata stream.I think now the only option I have is to build a new bytestring from the length and data fields and to feed it to the RPC analyzer. This solution is bad from the performance point of view since we have to do 2 extra memorycopy: first to generate data field, second to regenerate the original whole PDU.------------------ Original ------------------From: "Song";Date: Sat, Mar 9, 2019 11:13 AMTo: "ronka_mata";Cc: "zeek-dev at zeek.org"; Subject: Re: [Zeek-Dev] Fwd: Re: Fwd: binpac crash triggered byexportsourcedataHi Ronka,The protocol I'm trying to analyze supports multiple authe ntication methods, including SASL Kerberos GSSAPI.After authentication, according to the authentication method chosen and security layer negotiated, the RPCrequests/responses followed could be in plain text, signed or encrypted.In the plain text form, the PDU is like: <4 bytes length field> While in signed or encrypted form, the outmost layer of PDU is like: <4 bytes length field> In the later case, the RPC requests/responses PDU (including the 4 bytes length field indicating the length of therequest/response data) is encapsulated in the Wrap Tokens. It is possible that a big RPC request/response willbe carried by multiple Wrap Token PDUs.So I have two analyzers: - controlling analyzer: deal with authentication and decryption, forward decrypted RPC PDU data to RPC analyzer - RPC analyzer: decode RPC request /responseI need the &exportsourcedata for the plain text case in which the whole controlling analyzer PDU should be forwardedto the RPC analyzer.Today I will try to change the type of controlling analyzer to datagram.Best regards,Song------------------ Original ------------------From: "ronka_mata";Date: Fri, Mar 8, 2019 10:08 PMTo: "Song";Cc: "zeek-dev at zeek.org"; Subject: Re: Fwd: Re: Fwd: [Zeek-Dev] binpac crash triggered by exportsourcedataHi Song,Could you explain a bit more what you are trying to achieve? Do you want to deliver the same data into two analyzers? Or is it just part of it? Or deliver to one if condition is met and to second one otherwise? Do you have to wait with second analyzer after it was passed by the first interpreter or can you call second in the deliverstream function? Make the second one into child analyzer and then call ForwardStream function or some similar approach?I understand where your problem is with your current code. 
I was not able to get around the len problem yet, but I will give it a go a bit later today, unless someone else knows solution first hand.For delivery of parts of stream data, defined as a bytestream, you can take example from smb-pipe.pac forward_dce_rpc funct.Hope this helps a bit.Ronka__________________________________________________________> Od: "Song" > Komu: "ronka_mata" > Datum: 08.03.2019 05:01> P?edm?t: Re: Fwd: Re: Fwd: [Zeek-Dev] binpac crash triggered by exportsourcedata>> CC: Thank you Ronka.It is a flowunit analyzer. I checked zeek source tree and found that there is only 1 flowunitanalyzer (tls-handshake) uses exportsourcedata directive. I guess that exportsourcedata onlyapply to non-incremental types. Maybe these are true: - all types in a datagram analyzer can use exportsourcedata directive - only non-incremental types in a flowunit analyzer can use exportsou rcedataBut I'm not sure about what is non-incremental type, I have to check the generated code.The reason that I want sourcedata field is that I want to feed the whole test_pdu to anotheranalyzer. Now as a workaround, I have to do something like this: test_rpc->DeliverStream(${data}.length() + 4, ${data}.begin() - 4, is_orig);to bring back the first 4 bytes to form the original whole PDU.Maybe I should try datagram analyzer.Song------------------ Original ------------------From: "ronka_mata";Date: Thu, Mar 7, 2019 10:05 PMTo: "Song";Cc: "zee k-dev"; Subject: Fwd: Re: Fwd: [Zeek-Dev] binpac crash triggered by exportsourcedataHi,What might help is checking how you defined the the PDU in .pac file. If it is datagram, mostly used for DNS type traffic or if it is flowunit. You can read more on it here https://github.com/zeek/binpac/blob/master/README.rst#flowYou do not need to define length for datagrams. Look at oth er protocols for example of differences. Eg radius for datagrams and smb for flows.Ronka---------- Forwarded message ---------From: Song Date: Thu, Mar 7, 2019, 13:40Subject: [Zeek-Dev] binpac crash triggered by exportsourcedataTo: zeek-dev Hi,I define a PDU like below:type test_pdu = record { lenAB : uint32; pduAB : test_pdu_ab(lenAB);} &length=(lenAB + 4), &exportsourcedata; # fail to compile without &length, &exportsourcedata will cause binpac crashtype test_pdu_ab(len: uint32) = record { lenA : uint 16; dataA : bytestring &length = lenA; dataB : bytestring &length = (len - 2 - lenA);} &exportsourcedata; # &exportsourcedata here is OKThe error message is:binpac: /home/grid/git/zeek/aux/binpac/src/pac_type.cc:857: std::__cxx11::string Type::EvalLengthExpr(Output*, Env*): Assertion `!incremental_input()!` failed.Aborted (core dumped) -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.icsi.berkeley.edu/pipermail/zeek-dev/attachments/20190309/c9d708e9/attachment.html From oldpopsong at qq.com Mon Mar 11 00:21:34 2019 From: oldpopsong at qq.com (=?ISO-8859-1?B?U29uZw==?=) Date: Mon, 11 Mar 2019 15:21:34 +0800 Subject: [Zeek-Dev] move krb5_init_context out of KRB.cc Message-ID: Hi, I'm trying to write a Kerberos GSSAPI decryption support analyzer. Currently krb5_init_context() is called to get a krb5_context inside KRB analyzer. I think it's a good thing to share the context among all the components that need to call KRB5 API. Is there any mechanism to do so? Or should I just call krb5_init_context() in main.cc and export the context via a new .h file? Best regards, Song -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.icsi.berkeley.edu/pipermail/zeek-dev/attachments/20190311/99fc9dff/attachment.html From jsiwek at corelight.com Mon Mar 11 10:38:48 2019 From: jsiwek at corelight.com (Jon Siwek) Date: Mon, 11 Mar 2019 10:38:48 -0700 Subject: [Zeek-Dev] move krb5_init_context out of KRB.cc In-Reply-To: References: Message-ID: On Mon, Mar 11, 2019 at 12:29 AM Song wrote: > I'm trying to write a Kerberos GSSAPI decryption support analyzer. Currently krb5_init_context() is called to get a krb5_context inside KRB > analyzer. I think it's a good thing to share the context among all the components that need to call KRB5 API. Can you elaborate why? Because I think the opposite: a context per connection/analyzer makes sense as those are logically distinct units that should have their own state instead of sharing a global state. - Jon From oldpopsong at qq.com Mon Mar 11 19:03:21 2019 From: oldpopsong at qq.com (=?ISO-8859-1?B?U29uZw==?=) Date: Tue, 12 Mar 2019 10:03:21 +0800 Subject: [Zeek-Dev] move krb5_init_context out of KRB.cc In-Reply-To: References: Message-ID: My concerns were: 1. The context returned by krb5_init_context() is a library context, not session/connection context. I was a little nervous to do multiple library initializations in a single process. 2. Performance impact. I had quickly read the source of krb5_init_context(), most work it does is irrelevant to us as a passive analyzer, such as setting security policy (allow weak encryption or not, etc.) according to KRB configuration files, seeding random number generator, adding entropy to random number generator, initializing mutex. But now I'd rather not bother to move the call out of KRB analyzer. Since: 1. From my practice it seems OK to do multiple KRB library initializations. 2. The performance impact is very limited. Currently 1 context for all KRB analyzer instances and it will not be a big deal to add one for all new support analyzer instances. Song ------------------ Original ------------------ From: "Jon Siwek"; Date: Tue, Mar 12, 2019 01:38 AM To: "Song"; Cc: "zeek-dev at zeek.org"; Subject: Re: [Zeek-Dev] move krb5_init_context out of KRB.cc On Mon, Mar 11, 2019 at 12:29 AM Song wrote: > I'm trying to write a Kerberos GSSAPI decryption support analyzer. Currently krb5_init_context() is called to get a krb5_context inside KRB > analyzer. I think it's a good thing to share the context among all the components that need to call KRB5 API. Can you elaborate why? Because I think the opposite: a context per connection/analyzer makes sense as those are logically distinct units that should have their own state instead of sharing a global state. - Jon -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.icsi.berkeley.edu/pipermail/zeek-dev/attachments/20190312/e95ee978/attachment.html From zeolla at gmail.com Tue Mar 12 09:15:34 2019 From: zeolla at gmail.com (Zeolla@GMail.com) Date: Tue, 12 Mar 2019 12:15:34 -0400 Subject: [Zeek-Dev] Using a BiF across C++ and Zeek Policy Message-ID: I am working on improving the btests for the kafka writer plugin with the goal of validating some logic in KafkaWriter::DoInit. The best approach that I have so far is to write a BiF and use it in both DoInit and the btest via Zeek policy, but I have only been able to find limited documentation[1][2] on the topic. I've looked around for examples of this approach without success, including in the past few years of the Zeek-dev mailing list archives. 
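A hypothetical sketch of what the script-side half of such a btest could look like (not taken from the plugin's actual test suite); the module and function names follow the BiF declared later in this message, and no Kafka broker is needed:

```
# Hypothetical btest sketch; assumes the plugin (and its BiF) is loaded
# via BRO_PLUGIN_PATH. Only the topic-selection logic is exercised.
# @TEST-EXEC: bro -b %INPUT > output
# @TEST-EXEC: btest-diff output

event bro_init()
	{
	print Kafka::SelectTopicName("override-topic", "default-topic", "fallback-topic");
	print Kafka::SelectTopicName("", "default-topic", "fallback-topic");
	print Kafka::SelectTopicName("", "", "fallback-topic");
	}
```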
I explicitly want to stay away from the assumption that the Manager/Logger has a kafka broker available to it at the time of testing. My BiF is fairly simple: ``` function SelectTopicName%(override: string, default: string, fallback: string%) : string %{ // Things %} ``` bifcl appears to be generating the following: ``` namespace BifFunc { namespace Kafka { extern Val* bro_SelectTopicName(Frame* frame, val_list*); } } ``` At this point I'm just randomly poking around in Zeek/src trying to find my way - any pointers regarding how to use this function in C++ (or another approach altogether) would be appreciated. Thanks, 1: https://www.zeek.org/development/howtos/bif-doc/index.html#functions 2: https://www.zeek.org/development/howtos/bif-doc/example.html Jon -- Jon Zeolla -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.icsi.berkeley.edu/pipermail/zeek-dev/attachments/20190312/d478be0f/attachment.html From anthony.kasza at gmail.com Wed Mar 13 08:16:10 2019 From: anthony.kasza at gmail.com (anthony kasza) Date: Wed, 13 Mar 2019 09:16:10 -0600 Subject: [Zeek-Dev] Writing a Protocol Analyzer Plugin Message-ID: Hello Zeek Devs, I would like to write a protocol analyzer and need some direction. I would like to write something simple which works on TCP, similar to the ConnSize analyzer. I would like my analyzer to be distributed as a plugin, similar to MITRE's HTTP2 analyzer, so I am following the docs here: https://docs.zeek.org/en/stable/devel/plugins.html However, the docs don't detail much beyond creating a built in function. A colleague pointed me at this quickstart script for binpac: https://github.com/grigorescu/binpac_quickstart The quickstart script seems to be intended for writing a protocol analyzer which gets merged into the Zeek source. This is not how plugins operate. I'm looking for some guidance on how to proceed. Thanks in advance. -AK -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.icsi.berkeley.edu/pipermail/zeek-dev/attachments/20190313/f97338fd/attachment.html From robin at corelight.com Wed Mar 13 08:29:02 2019 From: robin at corelight.com (Robin Sommer) Date: Wed, 13 Mar 2019 08:29:02 -0700 Subject: [Zeek-Dev] Writing a Protocol Analyzer Plugin In-Reply-To: References: Message-ID: <20190313152902.GD5008@corelight.com> See if this helps: https://github.com/zeek/zeek/blob/master/testing/btest/plugins/protocol.bro That may be the most compact tutorial on writing a protocol analyzer plugin. :) Robin On Wed, Mar 13, 2019 at 09:16 -0600, anthony kasza wrote: > Hello Zeek Devs, > > I would like to write a protocol analyzer and need some direction. I would > like to write something simple which works on TCP, similar to the ConnSize > analyzer. I would like my analyzer to be distributed as a plugin, similar > to MITRE's HTTP2 analyzer, so I am following the docs here: > https://docs.zeek.org/en/stable/devel/plugins.html > > However, the docs don't detail much beyond creating a built in function. A > colleague pointed me at this quickstart script for binpac: > https://github.com/grigorescu/binpac_quickstart > > The quickstart script seems to be intended for writing a protocol analyzer > which gets merged into the Zeek source. This is not how plugins operate. > > I'm looking for some guidance on how to proceed. Thanks in advance. 
> > -AK > _______________________________________________ > zeek-dev mailing list > zeek-dev at zeek.org > http://mailman.icsi.berkeley.edu/mailman/listinfo/zeek-dev -- Robin Sommer * Corelight, Inc. * robin at corelight.com * www.corelight.com From vlad at es.net Wed Mar 13 08:50:57 2019 From: vlad at es.net (Vlad Grigorescu) Date: Wed, 13 Mar 2019 10:50:57 -0500 Subject: [Zeek-Dev] Writing a Protocol Analyzer Plugin In-Reply-To: References: Message-ID: On Wed, Mar 13, 2019 at 10:17 AM anthony kasza wrote: > However, the docs don't detail much beyond creating a built in function. A > colleague pointed me at this quickstart script for binpac: > https://github.com/grigorescu/binpac_quickstart > Oops! Sorry about that. Try this one: https://github.com/esnet/binpac_quickstart That has a '--plugin' option. That will at least get the boilerplate stuff built, and then you can start digging into the protocol specifics. --Vlad -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.icsi.berkeley.edu/pipermail/zeek-dev/attachments/20190313/3c077df0/attachment.html From jsiwek at corelight.com Wed Mar 13 10:37:09 2019 From: jsiwek at corelight.com (Jon Siwek) Date: Wed, 13 Mar 2019 10:37:09 -0700 Subject: [Zeek-Dev] Using a BiF across C++ and Zeek Policy In-Reply-To: References: Message-ID: On Tue, Mar 12, 2019 at 9:23 AM Zeolla at GMail.com wrote: > function SelectTopicName%(override: string, default: string, fallback: string%) : string > %{ > // Things > %} > ``` > > bifcl appears to be generating the following: > > ``` > namespace BifFunc { namespace Kafka { extern Val* bro_SelectTopicName(Frame* frame, val_list*); } } > ``` > > At this point I'm just randomly poking around in Zeek/src trying to find my way - any pointers regarding how to use this function in C++ (or another approach altogether) would be appreciated. Thanks, If I understand what you want, I think you may treat this just like calling any other script-layer function from the C++ layer. So for an example, this would lookup the function you want to call (substitute the name of the function you want): https://github.com/zeek/zeek/blob/a36ac12e885a60ee77f9141d4c35882cb53bc1f2/src/broker/Manager.cc#L161 You can also find the definition of "get_option" in the same file in case that helps to look at. Then here's an example of calling that function: https://github.com/zeek/zeek/blob/a36ac12e885a60ee77f9141d4c35882cb53bc1f2/src/broker/Manager.cc#L543-L557 Note that the return value becomes your responsibility -- here it gets used and then Unref()'d right away to take care of the required memory management duties. - Jon From jan.grashoefer at gmail.com Wed Mar 13 11:36:37 2019 From: jan.grashoefer at gmail.com (=?UTF-8?Q?Jan_Grash=c3=b6fer?=) Date: Wed, 13 Mar 2019 19:36:37 +0100 Subject: [Zeek-Dev] Using a BiF across C++ and Zeek Policy In-Reply-To: References: Message-ID: <5f7eeb15-fb40-4a47-c806-39a031f5486a@gmail.com> On 12/03/2019 17:15, Zeolla at GMail.com wrote: > I am working on improving the btests for the kafka writer plugin with the > goal of validating some logic in KafkaWriter::DoInit. The best approach > that I have so far is to write a BiF and use it in both DoInit and the > btest via Zeek policy, but I have only been able to find limited > documentation[1][2] on the topic. If you just need to wrap some internal logic you could extract it into a normal C++ function and use a BiF to call that function out of a Bro-Script. 
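Something along these lines, for example (everything below -- the header name, the namespace and the selection rule itself -- is made up for illustration and is not the actual kafka plugin code):

```
// topic_selection.h -- hypothetical helper, names invented for this sketch
#pragma once

#include <string>

namespace kafka {

// Plain C++ function holding the topic-selection logic. It only deals in
// std::string, so it has no dependency on Zeek types and can be called from
// KafkaWriter::DoInit(), from a BiF body, or from a plain unit test.
inline std::string SelectTopicName(const std::string& override_topic,
                                   const std::string& default_topic,
                                   const std::string& fallback_topic)
    {
    // Guessed precedence: an explicit override wins, then the configured
    // default, then the fallback.
    if ( ! override_topic.empty() )
        return override_topic;

    return default_topic.empty() ? fallback_topic : default_topic;
    }

} // namespace kafka
```

DoInit() would call kafka::SelectTopicName() directly, while the BiF body just pulls the C strings out of its StringVal arguments, calls the same helper and hands the result back to the script layer as a new StringVal, so the btest ends up exercising exactly the code path DoInit() uses.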
Jan From jsiwek at corelight.com Wed Mar 13 12:14:22 2019 From: jsiwek at corelight.com (Jon Siwek) Date: Wed, 13 Mar 2019 12:14:22 -0700 Subject: [Zeek-Dev] Using a BiF across C++ and Zeek Policy In-Reply-To: <5f7eeb15-fb40-4a47-c806-39a031f5486a@gmail.com> References: <5f7eeb15-fb40-4a47-c806-39a031f5486a@gmail.com> Message-ID: On Wed, Mar 13, 2019 at 11:48 AM Jan Grash?fer wrote: > > On 12/03/2019 17:15, Zeolla at GMail.com wrote: > > I am working on improving the btests for the kafka writer plugin with the > > goal of validating some logic in KafkaWriter::DoInit. The best approach > > that I have so far is to write a BiF and use it in both DoInit and the > > btest via Zeek policy, but I have only been able to find limited > > documentation[1][2] on the topic. > If you just need to wrap some internal logic you could extract it into > a normal C++ function and use a BiF to call that function out of a > Bro-Script. Re-reading the problem statement, I agree that does seem like all that may be needed -- factor out a common C++ function that get's called from inside both the BIF and the DoInit() function. - Jon From anthony.kasza at gmail.com Wed Mar 13 12:34:24 2019 From: anthony.kasza at gmail.com (anthony kasza) Date: Wed, 13 Mar 2019 13:34:24 -0600 Subject: [Zeek-Dev] Writing a Protocol Analyzer Plugin In-Reply-To: References: Message-ID: Many thanks for the quick responses! I am receiving these errors: ``` error in /usr/local/bro/share/bro/base/init-bare.bro, line 1: plugin Demo::ConnTaste is not available fatal error in /usr/local/bro/share/bro/base/init-bare.bro, line 1: Failed to activate requested dynamic plugin(s). ``` After executing these commands: ``` git clone --recursive https://github.com/zeek/zeek.git cd zeek ./configure make DIST=`pwd` cd aux/bro-aux/plugin-support ./init-plugin -u ./conn-taste Demo ConnTaste BRO_PLUGIN_PATH=`pwd` cd ${DIST} cd ../ git clone https://github.com/esnet/binpac_quickstart.git cd binpac_quickstart pip install docopt jinja2 ./start.py ConnTaste "Connection Byte Offset Tasting" ${BRO_PLUGIN_PATH}/conn-taste/ --tcp --buffered --plugin cd ${BRO_PLUGIN_PATH}/conn-taste ./configure --bro-dist=${DIST} make cd ${DIST} ./configure make make install bro -NN Demo::ConnTaste ``` I'm guessing there is some environment variable I am missing as I tried zeek/testing/btest/plugins/protocol.bro as Robin suggested and the @TEST-EXEC statements worked as expected. -AK On Wed, Mar 13, 2019, 09:51 Vlad Grigorescu wrote: > On Wed, Mar 13, 2019 at 10:17 AM anthony kasza > wrote: > > >> However, the docs don't detail much beyond creating a built in function. >> A colleague pointed me at this quickstart script for binpac: >> https://github.com/grigorescu/binpac_quickstart >> > > Oops! Sorry about that. Try this one: > https://github.com/esnet/binpac_quickstart > > That has a '--plugin' option. That will at least get the boilerplate stuff > built, and then you can start digging into the protocol specifics. > > --Vlad > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.icsi.berkeley.edu/pipermail/zeek-dev/attachments/20190313/a9a6ed1c/attachment.html From dopheide at es.net Wed Mar 13 12:43:25 2019 From: dopheide at es.net (Michael Dopheide) Date: Wed, 13 Mar 2019 14:43:25 -0500 Subject: [Zeek-Dev] Writing a Protocol Analyzer Plugin In-Reply-To: References: Message-ID: I believe you want to change this line: ./start.py ConnTaste "Connection Byte Offset Tasting" ... 
to ./start.py Demo::ConnTaste "Connection Byte Offset Tasting" ... -Dop On Wed, Mar 13, 2019 at 2:35 PM anthony kasza wrote: > Many thanks for the quick responses! > > I am receiving these errors: > ``` > error in /usr/local/bro/share/bro/base/init-bare.bro, line 1: plugin > Demo::ConnTaste is not available > fatal error in /usr/local/bro/share/bro/base/init-bare.bro, line 1: > Failed to activate requested dynamic plugin(s). > ``` > > After executing these commands: > ``` > git clone --recursive https://github.com/zeek/zeek.git > cd zeek > ./configure > make > DIST=`pwd` > > cd aux/bro-aux/plugin-support > ./init-plugin -u ./conn-taste Demo ConnTaste > BRO_PLUGIN_PATH=`pwd` > > cd ${DIST} > cd ../ > git clone https://github.com/esnet/binpac_quickstart.git > cd binpac_quickstart > pip install docopt jinja2 > ./start.py ConnTaste "Connection Byte Offset Tasting" > ${BRO_PLUGIN_PATH}/conn-taste/ --tcp --buffered --plugin > > cd ${BRO_PLUGIN_PATH}/conn-taste > ./configure --bro-dist=${DIST} > make > > cd ${DIST} > ./configure > make > make install > > bro -NN Demo::ConnTaste > ``` > > I'm guessing there is some environment variable I am missing as I tried > zeek/testing/btest/plugins/protocol.bro as Robin suggested and the > @TEST-EXEC statements worked as expected. > > -AK > > On Wed, Mar 13, 2019, 09:51 Vlad Grigorescu wrote: > >> On Wed, Mar 13, 2019 at 10:17 AM anthony kasza >> wrote: >> >> >>> However, the docs don't detail much beyond creating a built in function. >>> A colleague pointed me at this quickstart script for binpac: >>> https://github.com/grigorescu/binpac_quickstart >>> >> >> Oops! Sorry about that. Try this one: >> https://github.com/esnet/binpac_quickstart >> >> That has a '--plugin' option. That will at least get the boilerplate >> stuff built, and then you can start digging into the protocol specifics. >> >> --Vlad >> > _______________________________________________ > zeek-dev mailing list > zeek-dev at zeek.org > http://mailman.icsi.berkeley.edu/mailman/listinfo/zeek-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.icsi.berkeley.edu/pipermail/zeek-dev/attachments/20190313/d6683a40/attachment.html From zeolla at gmail.com Wed Mar 13 13:52:07 2019 From: zeolla at gmail.com (Zeolla@GMail.com) Date: Wed, 13 Mar 2019 16:52:07 -0400 Subject: [Zeek-Dev] Using a BiF across C++ and Zeek Policy In-Reply-To: References: <5f7eeb15-fb40-4a47-c806-39a031f5486a@gmail.com> Message-ID: Ah, that sounds like a better approach! Thanks Jan Jon On Wed, Mar 13, 2019 at 3:22 PM Jon Siwek wrote: > On Wed, Mar 13, 2019 at 11:48 AM Jan Grash?fer > wrote: > > > > On 12/03/2019 17:15, Zeolla at GMail.com wrote: > > > I am working on improving the btests for the kafka writer plugin with > the > > > goal of validating some logic in KafkaWriter::DoInit. The best approach > > > that I have so far is to write a BiF and use it in both DoInit and the > > > btest via Zeek policy, but I have only been able to find limited > > > documentation[1][2] on the topic. > > > If you just need to wrap some internal logic you could extract it into > > a normal C++ function and use a BiF to call that function out of a > > Bro-Script. > > Re-reading the problem statement, I agree that does seem like all that > may be needed -- factor out a common C++ function that get's called > from inside both the BIF and the DoInit() function. 
> > - Jon > > _______________________________________________ > zeek-dev mailing list > zeek-dev at zeek.org > http://mailman.icsi.berkeley.edu/mailman/listinfo/zeek-dev > -- Jon Zeolla -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.icsi.berkeley.edu/pipermail/zeek-dev/attachments/20190313/b3271a2f/attachment-0001.html From anthony.kasza at gmail.com Wed Mar 13 14:44:11 2019 From: anthony.kasza at gmail.com (anthony kasza) Date: Wed, 13 Mar 2019 15:44:11 -0600 Subject: [Zeek-Dev] Writing a Protocol Analyzer Plugin In-Reply-To: References: Message-ID: I tried changing the name provided to the setup script as suggested. Doing so gives me many errors when I try to ./configure the plugin from within the conn-taste/ directory. CMake states that DEMO::CONNTASTE-events.bif is "reserved or not valid for for certain CMake features". It complains about many of the file names. Additionally, all the files in conn-taste/src/ look like DEMO::CONNTASTE.cc :( -AK On Wed, Mar 13, 2019, 13:43 Michael Dopheide wrote: > I believe you want to change this line: > > ./start.py ConnTaste "Connection Byte Offset Tasting" ... > > to > > ./start.py Demo::ConnTaste "Connection Byte Offset Tasting" ... > > -Dop > > > On Wed, Mar 13, 2019 at 2:35 PM anthony kasza > wrote: > >> Many thanks for the quick responses! >> >> I am receiving these errors: >> ``` >> error in /usr/local/bro/share/bro/base/init-bare.bro, line 1: plugin >> Demo::ConnTaste is not available >> fatal error in /usr/local/bro/share/bro/base/init-bare.bro, line 1: >> Failed to activate requested dynamic plugin(s). >> ``` >> >> After executing these commands: >> ``` >> git clone --recursive https://github.com/zeek/zeek.git >> cd zeek >> ./configure >> make >> DIST=`pwd` >> >> cd aux/bro-aux/plugin-support >> ./init-plugin -u ./conn-taste Demo ConnTaste >> BRO_PLUGIN_PATH=`pwd` >> >> cd ${DIST} >> cd ../ >> git clone https://github.com/esnet/binpac_quickstart.git >> cd binpac_quickstart >> pip install docopt jinja2 >> ./start.py ConnTaste "Connection Byte Offset Tasting" >> ${BRO_PLUGIN_PATH}/conn-taste/ --tcp --buffered --plugin >> >> cd ${BRO_PLUGIN_PATH}/conn-taste >> ./configure --bro-dist=${DIST} >> make >> >> cd ${DIST} >> ./configure >> make >> make install >> >> bro -NN Demo::ConnTaste >> ``` >> >> I'm guessing there is some environment variable I am missing as I tried >> zeek/testing/btest/plugins/protocol.bro as Robin suggested and the >> @TEST-EXEC statements worked as expected. >> >> -AK >> >> On Wed, Mar 13, 2019, 09:51 Vlad Grigorescu wrote: >> >>> On Wed, Mar 13, 2019 at 10:17 AM anthony kasza >>> wrote: >>> >>> >>>> However, the docs don't detail much beyond creating a built in >>>> function. A colleague pointed me at this quickstart script for binpac: >>>> https://github.com/grigorescu/binpac_quickstart >>>> >>> >>> Oops! Sorry about that. Try this one: >>> https://github.com/esnet/binpac_quickstart >>> >>> That has a '--plugin' option. That will at least get the boilerplate >>> stuff built, and then you can start digging into the protocol specifics. >>> >>> --Vlad >>> >> _______________________________________________ >> zeek-dev mailing list >> zeek-dev at zeek.org >> http://mailman.icsi.berkeley.edu/mailman/listinfo/zeek-dev >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.icsi.berkeley.edu/pipermail/zeek-dev/attachments/20190313/d8b2d73e/attachment.html From anthony.kasza at gmail.com Wed Mar 13 17:16:59 2019 From: anthony.kasza at gmail.com (anthony kasza) Date: Wed, 13 Mar 2019 18:16:59 -0600 Subject: [Zeek-Dev] Writing a Protocol Analyzer Plugin In-Reply-To: References: Message-ID: I'm sure there is at least one other Carl Sagan fan on list. I feel like if I wish to make an analyzer from scratch, I must first invent the universe. -AK On Wed, Mar 13, 2019, 15:44 anthony kasza wrote: > I tried changing the name provided to the setup script as suggested. Doing > so gives me many errors when I try to ./configure the plugin from within > the conn-taste/ directory. CMake states that DEMO::CONNTASTE-events.bif is > "reserved or not valid for for certain CMake features". It complains about > many of the file names. > > Additionally, all the files in conn-taste/src/ look like > DEMO::CONNTASTE.cc :( > > -AK > > On Wed, Mar 13, 2019, 13:43 Michael Dopheide wrote: > >> I believe you want to change this line: >> >> ./start.py ConnTaste "Connection Byte Offset Tasting" ... >> >> to >> >> ./start.py Demo::ConnTaste "Connection Byte Offset Tasting" ... >> >> -Dop >> >> >> On Wed, Mar 13, 2019 at 2:35 PM anthony kasza >> wrote: >> >>> Many thanks for the quick responses! >>> >>> I am receiving these errors: >>> ``` >>> error in /usr/local/bro/share/bro/base/init-bare.bro, line 1: plugin >>> Demo::ConnTaste is not available >>> fatal error in /usr/local/bro/share/bro/base/init-bare.bro, line 1: >>> Failed to activate requested dynamic plugin(s). >>> ``` >>> >>> After executing these commands: >>> ``` >>> git clone --recursive https://github.com/zeek/zeek.git >>> cd zeek >>> ./configure >>> make >>> DIST=`pwd` >>> >>> cd aux/bro-aux/plugin-support >>> ./init-plugin -u ./conn-taste Demo ConnTaste >>> BRO_PLUGIN_PATH=`pwd` >>> >>> cd ${DIST} >>> cd ../ >>> git clone https://github.com/esnet/binpac_quickstart.git >>> cd binpac_quickstart >>> pip install docopt jinja2 >>> ./start.py ConnTaste "Connection Byte Offset Tasting" >>> ${BRO_PLUGIN_PATH}/conn-taste/ --tcp --buffered --plugin >>> >>> cd ${BRO_PLUGIN_PATH}/conn-taste >>> ./configure --bro-dist=${DIST} >>> make >>> >>> cd ${DIST} >>> ./configure >>> make >>> make install >>> >>> bro -NN Demo::ConnTaste >>> ``` >>> >>> I'm guessing there is some environment variable I am missing as I tried >>> zeek/testing/btest/plugins/protocol.bro as Robin suggested and the >>> @TEST-EXEC statements worked as expected. >>> >>> -AK >>> >>> On Wed, Mar 13, 2019, 09:51 Vlad Grigorescu wrote: >>> >>>> On Wed, Mar 13, 2019 at 10:17 AM anthony kasza >>>> wrote: >>>> >>>> >>>>> However, the docs don't detail much beyond creating a built in >>>>> function. A colleague pointed me at this quickstart script for binpac: >>>>> https://github.com/grigorescu/binpac_quickstart >>>>> >>>> >>>> Oops! Sorry about that. Try this one: >>>> https://github.com/esnet/binpac_quickstart >>>> >>>> That has a '--plugin' option. That will at least get the boilerplate >>>> stuff built, and then you can start digging into the protocol specifics. >>>> >>>> --Vlad >>>> >>> _______________________________________________ >>> zeek-dev mailing list >>> zeek-dev at zeek.org >>> http://mailman.icsi.berkeley.edu/mailman/listinfo/zeek-dev >>> >> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.icsi.berkeley.edu/pipermail/zeek-dev/attachments/20190313/a8949e0c/attachment.html From dopheide at es.net Wed Mar 13 19:25:33 2019 From: dopheide at es.net (Michael Dopheide) Date: Wed, 13 Mar 2019 21:25:33 -0500 Subject: [Zeek-Dev] Writing a Protocol Analyzer Plugin In-Reply-To: References: Message-ID: Okay, with your original line for quickstart, this works rather than Demo::ConnTaste. bash-3.2# /usr/local/bro/bin/bro -NN Bro::CONNTASTE Bro::CONNTASTE - This thing analyzer (dynamic, no version information) [Analyzer] CONNTASTE (ANALYZER_CONNTASTE, enabled) [Event] conntaste_event So we've got some plugin naming issues to deal with, which I hope to work out tomorrow. It shouldn't be about reinventing the universe, binpac is hard enough. :) -Dop On Wed, Mar 13, 2019 at 4:44 PM anthony kasza wrote: > I tried changing the name provided to the setup script as suggested. Doing > so gives me many errors when I try to ./configure the plugin from within > the conn-taste/ directory. CMake states that DEMO::CONNTASTE-events.bif is > "reserved or not valid for for certain CMake features". It complains about > many of the file names. > > Additionally, all the files in conn-taste/src/ look like > DEMO::CONNTASTE.cc :( > > -AK > > On Wed, Mar 13, 2019, 13:43 Michael Dopheide wrote: > >> I believe you want to change this line: >> >> ./start.py ConnTaste "Connection Byte Offset Tasting" ... >> >> to >> >> ./start.py Demo::ConnTaste "Connection Byte Offset Tasting" ... >> >> -Dop >> >> >> On Wed, Mar 13, 2019 at 2:35 PM anthony kasza >> wrote: >> >>> Many thanks for the quick responses! >>> >>> I am receiving these errors: >>> ``` >>> error in /usr/local/bro/share/bro/base/init-bare.bro, line 1: plugin >>> Demo::ConnTaste is not available >>> fatal error in /usr/local/bro/share/bro/base/init-bare.bro, line 1: >>> Failed to activate requested dynamic plugin(s). >>> ``` >>> >>> After executing these commands: >>> ``` >>> git clone --recursive https://github.com/zeek/zeek.git >>> cd zeek >>> ./configure >>> make >>> DIST=`pwd` >>> >>> cd aux/bro-aux/plugin-support >>> ./init-plugin -u ./conn-taste Demo ConnTaste >>> BRO_PLUGIN_PATH=`pwd` >>> >>> cd ${DIST} >>> cd ../ >>> git clone https://github.com/esnet/binpac_quickstart.git >>> cd binpac_quickstart >>> pip install docopt jinja2 >>> ./start.py ConnTaste "Connection Byte Offset Tasting" >>> ${BRO_PLUGIN_PATH}/conn-taste/ --tcp --buffered --plugin >>> >>> cd ${BRO_PLUGIN_PATH}/conn-taste >>> ./configure --bro-dist=${DIST} >>> make >>> >>> cd ${DIST} >>> ./configure >>> make >>> make install >>> >>> bro -NN Demo::ConnTaste >>> ``` >>> >>> I'm guessing there is some environment variable I am missing as I tried >>> zeek/testing/btest/plugins/protocol.bro as Robin suggested and the >>> @TEST-EXEC statements worked as expected. >>> >>> -AK >>> >>> On Wed, Mar 13, 2019, 09:51 Vlad Grigorescu wrote: >>> >>>> On Wed, Mar 13, 2019 at 10:17 AM anthony kasza >>>> wrote: >>>> >>>> >>>>> However, the docs don't detail much beyond creating a built in >>>>> function. A colleague pointed me at this quickstart script for binpac: >>>>> https://github.com/grigorescu/binpac_quickstart >>>>> >>>> >>>> Oops! Sorry about that. Try this one: >>>> https://github.com/esnet/binpac_quickstart >>>> >>>> That has a '--plugin' option. That will at least get the boilerplate >>>> stuff built, and then you can start digging into the protocol specifics. 
>>>> >>>> --Vlad >>>> >>> _______________________________________________ >>> zeek-dev mailing list >>> zeek-dev at zeek.org >>> http://mailman.icsi.berkeley.edu/mailman/listinfo/zeek-dev >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.icsi.berkeley.edu/pipermail/zeek-dev/attachments/20190313/8267df61/attachment-0001.html From mauropalumbo75 at gmail.com Wed Mar 13 23:50:44 2019 From: mauropalumbo75 at gmail.com (Mauro Palumbo) Date: Thu, 14 Mar 2019 07:50:44 +0100 Subject: [Zeek-Dev] updating intelligence data without restarting Zeek Message-ID: <81fdf9ed-3715-01de-a706-9fa0e001f508@gmail.com> Hello Zeek Devs, I am working with the intel framework, using intel data from a file which is updated periodically. As far as I have seen in the documentation, it should be possible to update this file with new data and Zeek can adjust its behavior accordingly without restarting. The intel file must be loaded in mode=REREAD to achieve this. However, I noticed that this works fine if new fields are added to the intel data file, but NOT if some fields are removed (for example if an ip address previously believed to be malicious is removed from the intel file because it was later realized to be safe). At the script level in the intel framework, intel data are stored into global data_store: DataStore &redef; and there are some functions for removing items from the record ( remove(item: Item, purge_indicator: bool), remove_meta_data(item: Item): bool ). But I am not sure they are really called anywhere. Is anyone aware of this issue? Is it a work in progress? Thanks in advance. Mauro -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.icsi.berkeley.edu/pipermail/zeek-dev/attachments/20190314/def3c5f4/attachment.html From jan.grashoefer at gmail.com Thu Mar 14 02:43:40 2019 From: jan.grashoefer at gmail.com (=?UTF-8?Q?Jan_Grash=c3=b6fer?=) Date: Thu, 14 Mar 2019 10:43:40 +0100 Subject: [Zeek-Dev] updating intelligence data without restarting Zeek In-Reply-To: <81fdf9ed-3715-01de-a706-9fa0e001f508@gmail.com> References: <81fdf9ed-3715-01de-a706-9fa0e001f508@gmail.com> Message-ID: <990d97e1-d82a-f1ac-cd25-df8ba75eca42@gmail.com> Hi Mauro, On 14/03/2019 07:50, Mauro Palumbo wrote: > However, I noticed that this works fine if new fields are added to the > intel data file, but NOT if some fields are removed (for example if an > ip address previously believed to be malicious is removed from the intel > file because it was later realized to be safe). At the script level in > the intel framework, intel data are stored into > > global data_store: DataStore &redef; > > and there are some functions for removing items from the record ( > remove(item: Item, purge_indicator: bool), remove_meta_data(item: Item): > bool ). But I am not sure they are really called anywhere. that's correct. One has to keep in mind that the files you provide to Zeek are just "input" for the data store. However, there are different approaches to remove Intel data from the internal data store. 1. You can use item expiration to expire items. If you update the intel file periodically, rereading items will reset the expiration timeout so that items removed from the file will eventually expire while the others are kept in the data store. 2. You could define which items to remove explicitly.
Either by processing a file of items to delete, introducing a new meta data field indicating that an item should be removed or interfacing Zeek in any other way. This approach would require writing some custom script but shouldn't be too hard. There is a blog post that provides some more details: https://blog.zeek.org//2016/12/the-intelligence-framework-update.html I hope that helps! Jan From dopheide at es.net Thu Mar 14 09:48:39 2019 From: dopheide at es.net (Michael Dopheide) Date: Thu, 14 Mar 2019 11:48:39 -0500 Subject: [Zeek-Dev] Writing a Protocol Analyzer Plugin In-Reply-To: References: Message-ID: Heh.. this is what I get for not following up on a WIP merge... Try the topic/dopheide/namespace branch of github.com/esnet/binpac_quickstart. That should allow you to specify Demo::ConnTaste, but it will uppercase that to Demo::CONNTASTE, which I believe was an old convention. -Dop On Wed, Mar 13, 2019 at 9:25 PM Michael Dopheide wrote: > Okay, with your original line for quickstart, this works rather than > Demo::ConnTaste. > > bash-3.2# /usr/local/bro/bin/bro -NN Bro::CONNTASTE > Bro::CONNTASTE - This thing analyzer (dynamic, no version information) > [Analyzer] CONNTASTE (ANALYZER_CONNTASTE, enabled) > [Event] conntaste_event > > So we've got some plugin naming issues to deal with, which I hope to work > out tomorrow. It shouldn't be about reinventing the universe, binpac is > hard enough. :) > > -Dop > > On Wed, Mar 13, 2019 at 4:44 PM anthony kasza > wrote: > >> I tried changing the name provided to the setup script as suggested. >> Doing so gives me many errors when I try to ./configure the plugin from >> within the conn-taste/ directory. CMake states that >> DEMO::CONNTASTE-events.bif is "reserved or not valid for for certain CMake >> features". It complains about many of the file names. >> >> Additionally, all the files in conn-taste/src/ look like >> DEMO::CONNTASTE.cc :( >> >> -AK >> >> On Wed, Mar 13, 2019, 13:43 Michael Dopheide wrote: >> >>> I believe you want to change this line: >>> >>> ./start.py ConnTaste "Connection Byte Offset Tasting" ... >>> >>> to >>> >>> ./start.py Demo::ConnTaste "Connection Byte Offset Tasting" ... >>> >>> -Dop >>> >>> >>> On Wed, Mar 13, 2019 at 2:35 PM anthony kasza >>> wrote: >>> >>>> Many thanks for the quick responses! >>>> >>>> I am receiving these errors: >>>> ``` >>>> error in /usr/local/bro/share/bro/base/init-bare.bro, line 1: plugin >>>> Demo::ConnTaste is not available >>>> fatal error in /usr/local/bro/share/bro/base/init-bare.bro, line 1: >>>> Failed to activate requested dynamic plugin(s). 
>>>> ``` >>>> >>>> After executing these commands: >>>> ``` >>>> git clone --recursive https://github.com/zeek/zeek.git >>>> cd zeek >>>> ./configure >>>> make >>>> DIST=`pwd` >>>> >>>> cd aux/bro-aux/plugin-support >>>> ./init-plugin -u ./conn-taste Demo ConnTaste >>>> BRO_PLUGIN_PATH=`pwd` >>>> >>>> cd ${DIST} >>>> cd ../ >>>> git clone https://github.com/esnet/binpac_quickstart.git >>>> cd binpac_quickstart >>>> pip install docopt jinja2 >>>> ./start.py ConnTaste "Connection Byte Offset Tasting" >>>> ${BRO_PLUGIN_PATH}/conn-taste/ --tcp --buffered --plugin >>>> >>>> cd ${BRO_PLUGIN_PATH}/conn-taste >>>> ./configure --bro-dist=${DIST} >>>> make >>>> >>>> cd ${DIST} >>>> ./configure >>>> make >>>> make install >>>> >>>> bro -NN Demo::ConnTaste >>>> ``` >>>> >>>> I'm guessing there is some environment variable I am missing as I tried >>>> zeek/testing/btest/plugins/protocol.bro as Robin suggested and the >>>> @TEST-EXEC statements worked as expected. >>>> >>>> -AK >>>> >>>> On Wed, Mar 13, 2019, 09:51 Vlad Grigorescu wrote: >>>> >>>>> On Wed, Mar 13, 2019 at 10:17 AM anthony kasza < >>>>> anthony.kasza at gmail.com> wrote: >>>>> >>>>> >>>>>> However, the docs don't detail much beyond creating a built in >>>>>> function. A colleague pointed me at this quickstart script for binpac: >>>>>> https://github.com/grigorescu/binpac_quickstart >>>>>> >>>>> >>>>> Oops! Sorry about that. Try this one: >>>>> https://github.com/esnet/binpac_quickstart >>>>> >>>>> That has a '--plugin' option. That will at least get the boilerplate >>>>> stuff built, and then you can start digging into the protocol specifics. >>>>> >>>>> --Vlad >>>>> >>>> _______________________________________________ >>>> zeek-dev mailing list >>>> zeek-dev at zeek.org >>>> http://mailman.icsi.berkeley.edu/mailman/listinfo/zeek-dev >>>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.icsi.berkeley.edu/pipermail/zeek-dev/attachments/20190314/5dfeb1c0/attachment.html From anthony.kasza at gmail.com Thu Mar 14 12:04:46 2019 From: anthony.kasza at gmail.com (anthony kasza) Date: Thu, 14 Mar 2019 13:04:46 -0600 Subject: [Zeek-Dev] Writing a Protocol Analyzer Plugin In-Reply-To: References: Message-ID: I'll give that a whirl. Thanks again for the quick responses on this! -AK On Thu, Mar 14, 2019, 10:49 Michael Dopheide wrote: > Heh.. this is what I get for not following up on a WIP merge... Try the > topic/dopheide/namespace branch of github.com/esnet/binpac_quickstart. > > That should allow you to specify Demo::ConnTaste, but it will uppercase > that to Demo::CONNTASTE, which I believe was an old convention. > > -Dop > > On Wed, Mar 13, 2019 at 9:25 PM Michael Dopheide wrote: > >> Okay, with your original line for quickstart, this works rather than >> Demo::ConnTaste. >> >> bash-3.2# /usr/local/bro/bin/bro -NN Bro::CONNTASTE >> Bro::CONNTASTE - This thing analyzer (dynamic, no version information) >> [Analyzer] CONNTASTE (ANALYZER_CONNTASTE, enabled) >> [Event] conntaste_event >> >> So we've got some plugin naming issues to deal with, which I hope to work >> out tomorrow. It shouldn't be about reinventing the universe, binpac is >> hard enough. :) >> >> -Dop >> >> On Wed, Mar 13, 2019 at 4:44 PM anthony kasza >> wrote: >> >>> I tried changing the name provided to the setup script as suggested. >>> Doing so gives me many errors when I try to ./configure the plugin from >>> within the conn-taste/ directory. 
CMake states that >>> DEMO::CONNTASTE-events.bif is "reserved or not valid for for certain CMake >>> features". It complains about many of the file names. >>> >>> Additionally, all the files in conn-taste/src/ look like >>> DEMO::CONNTASTE.cc :( >>> >>> -AK >>> >>> On Wed, Mar 13, 2019, 13:43 Michael Dopheide wrote: >>> >>>> I believe you want to change this line: >>>> >>>> ./start.py ConnTaste "Connection Byte Offset Tasting" ... >>>> >>>> to >>>> >>>> ./start.py Demo::ConnTaste "Connection Byte Offset Tasting" ... >>>> >>>> -Dop >>>> >>>> >>>> On Wed, Mar 13, 2019 at 2:35 PM anthony kasza >>>> wrote: >>>> >>>>> Many thanks for the quick responses! >>>>> >>>>> I am receiving these errors: >>>>> ``` >>>>> error in /usr/local/bro/share/bro/base/init-bare.bro, line 1: plugin >>>>> Demo::ConnTaste is not available >>>>> fatal error in /usr/local/bro/share/bro/base/init-bare.bro, line 1: >>>>> Failed to activate requested dynamic plugin(s). >>>>> ``` >>>>> >>>>> After executing these commands: >>>>> ``` >>>>> git clone --recursive https://github.com/zeek/zeek.git >>>>> cd zeek >>>>> ./configure >>>>> make >>>>> DIST=`pwd` >>>>> >>>>> cd aux/bro-aux/plugin-support >>>>> ./init-plugin -u ./conn-taste Demo ConnTaste >>>>> BRO_PLUGIN_PATH=`pwd` >>>>> >>>>> cd ${DIST} >>>>> cd ../ >>>>> git clone https://github.com/esnet/binpac_quickstart.git >>>>> cd binpac_quickstart >>>>> pip install docopt jinja2 >>>>> ./start.py ConnTaste "Connection Byte Offset Tasting" >>>>> ${BRO_PLUGIN_PATH}/conn-taste/ --tcp --buffered --plugin >>>>> >>>>> cd ${BRO_PLUGIN_PATH}/conn-taste >>>>> ./configure --bro-dist=${DIST} >>>>> make >>>>> >>>>> cd ${DIST} >>>>> ./configure >>>>> make >>>>> make install >>>>> >>>>> bro -NN Demo::ConnTaste >>>>> ``` >>>>> >>>>> I'm guessing there is some environment variable I am missing as I >>>>> tried zeek/testing/btest/plugins/protocol.bro as Robin suggested and the >>>>> @TEST-EXEC statements worked as expected. >>>>> >>>>> -AK >>>>> >>>>> On Wed, Mar 13, 2019, 09:51 Vlad Grigorescu wrote: >>>>> >>>>>> On Wed, Mar 13, 2019 at 10:17 AM anthony kasza < >>>>>> anthony.kasza at gmail.com> wrote: >>>>>> >>>>>> >>>>>>> However, the docs don't detail much beyond creating a built in >>>>>>> function. A colleague pointed me at this quickstart script for binpac: >>>>>>> https://github.com/grigorescu/binpac_quickstart >>>>>>> >>>>>> >>>>>> Oops! Sorry about that. Try this one: >>>>>> https://github.com/esnet/binpac_quickstart >>>>>> >>>>>> That has a '--plugin' option. That will at least get the boilerplate >>>>>> stuff built, and then you can start digging into the protocol specifics. >>>>>> >>>>>> --Vlad >>>>>> >>>>> _______________________________________________ >>>>> zeek-dev mailing list >>>>> zeek-dev at zeek.org >>>>> http://mailman.icsi.berkeley.edu/mailman/listinfo/zeek-dev >>>>> >>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.icsi.berkeley.edu/pipermail/zeek-dev/attachments/20190314/50e860f5/attachment.html From mauropalumbo75 at gmail.com Tue Mar 19 02:48:57 2019 From: mauropalumbo75 at gmail.com (mpalumbo75) Date: Tue, 19 Mar 2019 10:48:57 +0100 Subject: [Zeek-Dev] measuring zeek's performance Message-ID: Dear Zeek-devs, I think it is a common experience to need to evaluate zeek's performance according to different customizations at the script level, when using an event rather than another or a new plugin. Obviously this heavily depends on the traffic zeek is analyzing. 
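At the moment the only thing I can do is add timing by hand inside the handlers I care about, roughly like this (just a sketch; connection_state_remove is only an arbitrary example of a handler to measure):

```
# Rough sketch of manual per-handler timing. The same pattern has to be
# repeated for every handler of interest.
global handler_time: interval = 0 secs;
global handler_calls = 0;

event connection_state_remove(c: connection)
    {
    local t0 = current_time();

    # ... the work normally done in this handler ...

    handler_time += current_time() - t0;
    ++handler_calls;
    }

event bro_done()
    {
    print fmt("connection_state_remove: %d calls, %s spent in handler",
              handler_calls, handler_time);
    }
```

This quickly becomes tedious and only covers my own scripts, not the time spent in other packages or in the core.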
But it would be of great help if there were a tool which could count the amount of time zeek has spent on certain events/plugins when analyzing traffic. Is something like this available? Thanks in advance. Mauro -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.icsi.berkeley.edu/pipermail/zeek-dev/attachments/20190319/17a5005b/attachment.html From jmellander at lbl.gov Wed Mar 20 12:24:56 2019 From: jmellander at lbl.gov (Jim Mellander) Date: Wed, 20 Mar 2019 12:24:56 -0700 Subject: [Zeek-Dev] measuring zeek's performance In-Reply-To: References: Message-ID: The attached may prove useful. The contents are: 1. instrument.sh - awk script that takes a bro script in stdin & outputs the script with instrumentation added. It does a passable job of adding instrumentation to entry & exits of functions/events/hooks, although at times there is manual fixup required. To do an exact job would require a full bro language parser, which was more than I wanted to tackle (although in a fit of experimentation, I did once write a recursive descent compiler-compiler in awk) 2. Instrument.bro - which prints timestamps upon function entry & exit (for production use, this probably needs to be a logfile). This needs to be @load'ed before the bro scripts that you've instrumented. By processing the log & matching up the function calls, the elapsed time in the function can be calculated. This could also be expanded to record memory usage before & after, if that is of interest. I never got around to productionizing this, but hopefully it will be of interest.... Hope this helps, Jim On Tue, Mar 19, 2019 at 2:50 AM mpalumbo75 wrote: > Dear Zeek-devs, > I think it is a common experience to need to evaluate zeek's > performance according to different customizations at the script level, when > using an event rather than another or a new plugin. Obviously this heavily > depends on the traffic zeek is analyzing. But it would be of great help if > there were a tool which could count the amount of time zeek has spent on > certain events/plugins when analyzing traffic. Is something like this > available? > > Thanks in advance. > Mauro > _______________________________________________ > zeek-dev mailing list > zeek-dev at zeek.org > http://mailman.icsi.berkeley.edu/mailman/listinfo/zeek-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.icsi.berkeley.edu/pipermail/zeek-dev/attachments/20190320/6328918b/attachment.html -------------- next part -------------- A non-text attachment was scrubbed... Name: instrument.tar Type: application/tar Size: 6144 bytes Desc: not available Url : http://mailman.icsi.berkeley.edu/pipermail/zeek-dev/attachments/20190320/6328918b/attachment.bin From robin at corelight.com Mon Mar 25 14:44:32 2019 From: robin at corelight.com (Robin Sommer) Date: Mon, 25 Mar 2019 14:44:32 -0700 Subject: [Zeek-Dev] Help wanted: Remaining renaming tasks In-Reply-To: <20190321145701.GF75142@corelight.com> References: <20190321145701.GF75142@corelight.com> Message-ID: <20190325214432.GA524@corelight.com> We have about 10 tasks left for the renaming from Bro to Zeek before the next release. Any help addressing those is appreciated, see this board: https://github.com/zeek/zeek/projects/2 We're hoping to get these in place within the next 4 weeks. If you can work on any of these, please assign the ticket to yourself. It's best to start with a short proposal on what you plan to do. 
You can also use the ticket discussion for any further clarification you might need. Thanks! Robin -- Robin Sommer * Corelight, Inc. * robin at corelight.com * www.corelight.com From seth at corelight.com Tue Mar 26 07:07:36 2019 From: seth at corelight.com (Seth Hall) Date: Tue, 26 Mar 2019 10:07:36 -0400 Subject: [Zeek-Dev] Help wanted: Remaining renaming tasks In-Reply-To: <20190325214432.GA524@corelight.com> References: <20190321145701.GF75142@corelight.com> <20190325214432.GA524@corelight.com> Message-ID: <4E86B1B8-58F0-4F2D-A26B-1C0CB762F3F3@corelight.com> On 25 Mar 2019, at 17:44, Robin Sommer wrote: > We're hoping to get these in place within the next 4 weeks. If you can > work on any of these, please assign the ticket to yourself. It's best > to start with a short proposal on what you plan to do. You can also > use the ticket discussion for any further clarification you might > need. I grabbed the one about renaming events! Thanks for setting the timeline. .Seth -- Seth Hall * Corelight, Inc * www.corelight.com