From yjohn8697 at gmail.com Tue May 1 01:07:34 2018
From: yjohn8697 at gmail.com (John Y)
Date: Tue, 01 May 2018 08:07:34 +0000
Subject: [Bro] Correct way to log record with modification
Message-ID: 

Hi all!

I am new to Bro and am trying to solve a programming problem: I am catching DNS packets from the interface, changing some fields and trying to write them to a log. For that goal, I use a record with these fields:

type Info: record {
    ts: string &log;
    src_ip: addr &log;
    query: string &log;
};

I get the DNS data from the conn log, manipulate the ts (for example changing its format), and then write the fields to a new log:

Log::write(new_dns::Log, [$ts=change_format_func(conn$ts), $src_ip=conn$src_ip, $query=conn$query]);

Here are my questions:
1. How do I handle an uninitialized field? The assignment from conn$query failed.
2. If I have a lot of fields to log, do I need to write them all in the write command, or is there some shortcut? Remember that I must modify the fields.

Would love your help,
John
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180501/b3e957c6/attachment.html

From yjohn8697 at gmail.com Tue May 1 06:05:03 2018
From: yjohn8697 at gmail.com (John Y)
Date: Tue, 01 May 2018 13:05:03 +0000
Subject: [Bro] Uninitialize field
Message-ID: 

Hi!

I am using the connection type to make custom logging. How can I check that each of its fields is initialized before I pull it?

Thanks in advance,
John
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180501/aae68786/attachment.html

From jazoff at illinois.edu Tue May 1 07:04:38 2018
From: jazoff at illinois.edu (Azoff, Justin S)
Date: Tue, 1 May 2018 14:04:38 +0000
Subject: [Bro] Regarding bro capture_loss
In-Reply-To: 
References: 
Message-ID: 

> On Apr 30, 2018, at 6:59 PM, sourav maji wrote:
> 
> Hi,
> 
> Sorry if my questions have already been answered, but it would be really helpful if anyone can provide information on the following.
> 
> 1. Does bro capture_loss indicate that packets mirrored to the server running bro using a switch's SPAN/TAP port are being dropped somewhere upstream in the mirroring process?
> 
> In our particular setting, we are seeing zero packet drops reported by "broctl netstats" but more than 40% packet loss in capture_loss. Does that imply that the server running bro is not dropping any packets but that packets are being dropped upstream? Bidirectional traffic is sent to the server running bro using SPAN ports.
> 
> 2. Is there a document that explains in detail how capture loss is computed?
> It says "Reported loss is computed in terms of the number of 'gap events' (ACKs for a sequence number that's above a gap)."
> What exactly is a gap event and how is the function call "get_gap_stats()" defined? The code in "capture-loss.bro" does not explain how acks and gaps can be used to estimate capture loss. Any detailed documentation would be useful.

Bro simply counts TCP ACKs for packets that it did not see in the first place. If it saw the ACK, but not the original packet, there was capture loss.

> Thanks and regards,
> Sourav Maji

Capture loss by itself is kind of a useless metric: when it's zero, that's great, but any number above a very small percentage just tells you there is a problem somewhere but not where it is.

It's kind of like a "Check engine" light.

You need to figure out where your loss is coming from.
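(Going back to your question 2 about how the number is computed: capture-loss.bro periodically calls get_gap_stats() and reports the gaps seen in that interval as a percentage of the ACKs seen in the same interval. A minimal sketch of the idea - the gap_info field names here are from memory, so double-check them against init-bare.bro in your Bro version:

event bro_done()
    {
    local g = get_gap_stats();
    # gap_events: ACKs that acknowledged data Bro never saw
    # ack_events: all ACKs seen
    if ( g$ack_events > 0 )
        print fmt("gaps=%d acks=%d percent_lost=%.3f",
                  g$gap_events, g$ack_events,
                  100.0 * g$gap_events / g$ack_events);
    }

The real script does this on a timer for every worker and writes the gaps, acks and percent_lost values to capture_loss.log.)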
Analyzing the "missed_bytes" column in the conn.log will help. If you install bro-doctor (https://github.com/ncsa/bro-doctor) bro-pkg install ncsa/bro-doctor broctl doctor.bro the "Checking what percentage of recent tcp connections show loss" section in the output will tell you what percentage of your recent connections is seeing loss. The number of connections seeing loss can often be a better metric than the overall loss count itself. If that is also 40% then you are missing a lot of traffic. If it's 1%, you have a small number of broken connections. A really good test (that I still haven't figure out how to add to bro-doctor) is to run something like this from somewhere on your network: for x in $(seq 1 9); do echo -e 'GET / HTTP/1.1\r\nHost: www.bro.org\r\n\r\n' | socat - tcp-connect:www.bro.org:80,sp=2000$x,reuseaddr; sleep 1; done Then see what bro logged using cat conn.log |bro-cut -d ts id.orig_h id.orig_p id.resp_h id.resp_p orig_pkts resp_pkts missed_bytes | fgrep 192.150.187.43 You should see 9 almost identical lines like 141.142.148.70 20001 192.150.187.43 80 6 4 0 ShADFadf ? Justin Azoff From jsiwek at corelight.com Tue May 1 08:02:03 2018 From: jsiwek at corelight.com (Jon Siwek) Date: Tue, 1 May 2018 10:02:03 -0500 Subject: [Bro] Correct way to log record with modification In-Reply-To: References: Message-ID: <0301202e-adc4-bd0d-3554-1281015e3b4e@corelight.com> On 5/1/18 3:07 AM, John Y wrote: > 1. How i handle with uninitialize field? The assignment conn$query failed. You can first check if it's initialized via the ?$ operator. More docs on various operators at [1]. > 2. If i have a lot of fields to log, do i need to write them all in the > write commans or there is some shortcut? Remeber that i must modify the > fields. You don't have to put the them inside the Log::write() function call, though there's no getting around the fact that you'll need to create one 'info' value per call. You can do that inline with the call like you had shown or you can create the value and store it in a local/global variable, or possibly abstract out common patterns that you find into some other custom function. You can decide whichever way fits. - Jon [1] https://www.bro.org/sphinx/script-reference/operators.html From jsiwek at corelight.com Tue May 1 08:11:23 2018 From: jsiwek at corelight.com (Jon Siwek) Date: Tue, 1 May 2018 10:11:23 -0500 Subject: [Bro] Uninitialize field In-Reply-To: References: Message-ID: On 5/1/18 8:05 AM, John Y wrote: > I am using the connection type to make custom logging. > How can i check that each of his fields are initialize before i pull them? If you want to check that a single field exists, use the ?$ operator. See [1] for operator docs. If you want to check that a set of fields exists (e.g. all of them), then you'll either need to individually check them all via the ?$ operator you use the record_fields() function [2] to introspect whether some set of fields in the record are initialized. I'm guessing the introspection route is overkill for what you need, though just mentioning it for completeness. - Jon [1] https://www.bro.org/sphinx/script-reference/operators.html [2] https://www.bro.org/sphinx/scripts/base/bif/bro.bif.bro.html?highlight=record_fields#id-record_fields From jghobrial at rice.edu Tue May 1 11:47:59 2018 From: jghobrial at rice.edu (Joseph Ghobrial) Date: Tue, 1 May 2018 13:47:59 -0500 Subject: [Bro] Bro and Splunk forwarder Message-ID: We've got a bro cluster up and running on our SciDMZ. I'm running the splunk forwarder on the head node. 
We've seen the splunk forwarder having issues after some time sending data. I'm not seeing anything in the system logs or splunk logs showing a reason. Anyone running this type of configuration and seen contention? Thanks, Joseph -- Joseph Ghobrial Systems Analyst II Office of Information Technology Rice University jghobrial @ rice.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180501/a13a966d/attachment.html From reswob10 at gmail.com Tue May 1 13:31:36 2018 From: reswob10 at gmail.com (craig bowser) Date: Tue, 01 May 2018 20:31:36 +0000 Subject: [Bro] Bro and Splunk forwarder In-Reply-To: References: Message-ID: We used syslog to send the logs to a Splunk HF. On Tue, May 1, 2018, 2:50 PM Joseph Ghobrial wrote: > We've got a bro cluster up and running on our SciDMZ. I'm running the > splunk forwarder on the head node. We've seen the splunk forwarder having > issues after some time sending data. I'm not seeing anything in the system > logs or splunk logs showing a reason. > > Anyone running this type of configuration and seen contention? > > Thanks, > Joseph > > -- > Joseph Ghobrial > Systems Analyst II > Office of Information Technology > Rice University > jghobrial @ rice.edu > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180501/995e96b2/attachment.html From fatema.bannatwala at gmail.com Tue May 1 13:52:38 2018 From: fatema.bannatwala at gmail.com (fatema bannatwala) Date: Tue, 1 May 2018 16:52:38 -0400 Subject: [Bro] Bro and Splunk forwarder Message-ID: Hi Joseph, Just wanted to get clarity, are you running Splunk forwarder on the manager of your Bro cluster? If yes, then how are you monitoring the log files generated by bro in current dir (i.e. contents of your inputs.conf of Splunk Forwarder)? I believe, Splunk monitoring should work just fine on the bro log files on manager. Fatema. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180501/755889ff/attachment.html From buysse at umn.edu Tue May 1 17:59:11 2018 From: buysse at umn.edu (Joshua Buysse) Date: Tue, 1 May 2018 19:59:11 -0500 Subject: [Bro] Bro and Splunk forwarder In-Reply-To: References: Message-ID: I've run in to something like this. It may be related to a known issue in the Splunk forwarder (SPL-99316). The forwarder appeared to lose track of files, and usually picked up the data on a delay after the log was rotated. It seemed to be volume-related, with files that grow quickly more likely to trigger it. The workaround in the docs works - I've seen it happen after pushing the workaround, but it's extremely rare. >From the forwarder known issues in the release notes: > 2015-04-07 SPL-99316 Universal Forwarders stop sending data repeatedly throughout the day > Workaround: > In limits.conf, try changing file_tracking_db_threshold_mb in the [inputproc] stanza to a lower value. Otherwise, if splunkd has a cpu core pegged, you may need to do additional tuning to enable an additional parsing pipeline. Also, splunkd has a default output limit of 256Kbit/s to the indexers and will rate-limit itself. It may fall far enough behind that it appears that it's stopped. 
For our busiest forwarders, I push these tuning values to the forwarder in a simple app: --- limits.conf --- [thruput] # unlimited output, default is 256 (kb/s) maxKBps = 0 [inputproc] # default is 100 max_fd = 256 # * workaround for SPL-99316 # default is 500; the note in "known issues" on SPL-99316 # recommends setting to a lower value. file_tracking_db_threshold_mb = 400 --- end limits.conf --- --- server.conf --- [general] # parse and read multiple files at once, significantly increases CPU usage parallelIngestionPipelines = 4 [queue] maxSize = 128MB [queue=parsingQueue] maxSize = 32MB --- end server.conf --- One note about those configs - we're load balancing the forwarder between a couple dozen clustered indexers. If you're using a standalone indexer, I'd be careful about parallelIngestionPipelines being too high. We went overkill on memory, so 256MB just for parsing queues isn't an issue, and the bro masters have plenty of available CPU. If you're stretched for resources on the box, you probably don't want to allow Splunk to push that hard. There's a lot more tuning that can be done - we switched to JSON output for the bro logs, and the amount of processing needed on the Splunk forwarder went down quite a bit (along with saving quite a bit of disk space on the indexers), at the cost of more Splunk license usage. JSON has fields extracted at search time, while the default delimited logs have all the fields extracted as the file is ingested - smaller size for _raw, but more disk usage since all the fields are stored in the indexes. Performance is actually a little better with JSON as well. Hopefully that's helpful. - J -- Joshua Buysse - Security Engineer University Information Security - University of Minnesota *"Computers are useless. They can only give you answers." - Pablo Picasso* On Tue, May 1, 2018 at 3:52 PM, fatema bannatwala < fatema.bannatwala at gmail.com> wrote: > Hi Joseph, > > Just wanted to get clarity, are you running Splunk forwarder on the > manager of your Bro cluster? > If yes, then how are you monitoring the log files generated by bro in > current dir (i.e. contents of your inputs.conf of Splunk Forwarder)? > > I believe, Splunk monitoring should work just fine on the bro log files on > manager. > > Fatema. > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180501/eb9c85fc/attachment.html From yjohn8697 at gmail.com Wed May 2 03:54:51 2018 From: yjohn8697 at gmail.com (John Y) Date: Wed, 02 May 2018 10:54:51 +0000 Subject: [Bro] Access to record file dynamiclly Message-ID: Hi ! Great community, i got helpfull advices from here. I got this record: type info: record { a: string; } And variable which his value is the record's field name: local field_name: string = 'a'; I try to assign value to a field using field_name, maby like this: local myrec = info(); myrec$field_name = "some value"; But that command try to find the field "field_name". Any suggestion? John -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180502/1b55af0c/attachment.html From jsiwek at corelight.com Wed May 2 07:36:02 2018 From: jsiwek at corelight.com (Jon Siwek) Date: Wed, 2 May 2018 09:36:02 -0500 Subject: [Bro] Access to record file dynamiclly In-Reply-To: References: Message-ID: On 5/2/18 5:54 AM, John Y wrote: > And variable which his value is the record's field name: > local field_name: string = 'a'; > > I try to assign value to a field using field_name, maby like this: > local myrec = info(); > myrec$field_name = "some value"; > > But that command try to find the field "field_name". I don't think there's currently a way to get write-access via dynamic field name. Though dynamic read-access is possible via the record_fields() function. - Jon From carlrotenan at gmail.com Wed May 2 13:13:07 2018 From: carlrotenan at gmail.com (Carl Rotenan) Date: Wed, 2 May 2018 16:13:07 -0400 Subject: [Bro] Dropped data Message-ID: Hello, Can someone give me some direction on trying to figure out why I have dropped data? This output is from a machine getting about 3G of traffic a minute or so into starting Bro 2.5.3 with PF_RING 7.0.0. How much data per worker should I expect to budget for? Ideally I'd like Bro to be able to do 10G of traffic. Has anyone used PF_RING ZC with success? worker-0-1: 1525291681.760081 recvd=564836 dropped=0 link=564836 worker-0-2: 1525291681.961074 recvd=723187 dropped=0 link=723187 worker-0-3: 1525291682.162178 recvd=682598 dropped=4619 link=682598 worker-0-4: 1525291682.364202 recvd=1094776 dropped=0 link=1094776 worker-0-5: 1525291682.566055 recvd=6722748 dropped=30902 link=6722748 worker-0-6: 1525291682.768050 recvd=2180528 dropped=0 link=2180528 worker-0-7: 1525291682.969023 recvd=3252824 dropped=0 link=3252824 worker-0-8: 1525291683.179065 recvd=414112 dropped=0 link=414112 worker-0-9: 1525291683.379083 recvd=2228892 dropped=52543 link=2228892 worker-0-10: 1525291683.579973 recvd=1735298 dropped=0 link=1735298 worker-0-11: 1525291683.780260 recvd=2720785 dropped=1437 link=2720785 worker-0-12: 1525291683.981421 recvd=5835651 dropped=7610 link=5835651 worker-0-13: 1525291684.181057 recvd=566766 dropped=0 link=566766 worker-0-14: 1525291684.381979 recvd=335114 dropped=0 link=335114 worker-0-15: 1525291684.582077 recvd=743998 dropped=0 link=743998 worker-0-16: 1525291684.782897 recvd=6124252 dropped=54604 link=6124252 worker-0-17: 1525291684.980916 recvd=3476401 dropped=17138 link=3476401 worker-0-18: 1525291685.184047 recvd=1286574 dropped=0 link=1286574 -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180502/6e25cbea/attachment.html From pcain at coopercain.com Thu May 3 08:05:09 2018 From: pcain at coopercain.com (Patrick Cain) Date: Thu, 3 May 2018 11:05:09 -0400 Subject: [Bro] ES cluster and logstash In-Reply-To: References: Message-ID: <0c1301d3e2f0$219c82d0$64d58870$@coopercain.com> Hi, >From my adventures with two ES clusters receiving bro logs: The big ?sizing? issues relate to how many bro events are being inserted into ES. One of my ES clusters does about 25k events/sec from bro. We found this is more than a simple bro ->logstash ->ES can take. We ended up putting a kafka buffer between bro and logstash, so now it?s bro -> kafka -> logstash -> ES. We tried with other buffer-type things, like syslog-ng and a rabbitmq, but ended up on kafka since there is a bro-pkg for it, too. I hear some people like redis for this funstion. 
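(The bro-pkg I'm referring to is the Apache Metron Kafka writer. Roughly, and treat this as a sketch from memory rather than gospel - check the package and option names against the plugin's README for your version:

bro-pkg install apache/metron-bro-plugin-kafka

and then in local.bro:

@load packages
redef Kafka::logs_to_send = set(Conn::LOG, DNS::LOG, HTTP::LOG);
redef Kafka::topic_name = "bro";
redef Kafka::tag_json = T;
redef Kafka::kafka_conf = table(["metadata.broker.list"] = "kafka1:9092,kafka2:9092");

Logstash then reads that topic with its kafka input and ships the events to ES.)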
So bro can now burst event logs out and kafka makes the logstash to ES process run at a more sustained pace. Kafka and logstash run on some old box we had laying around so the bro box doesn't have to do anything but slop logs out. The other issue was the rate at which ES can insert events when it?s doing other stuff. We ended up making data nodes just do data; the masters just be masters, and specifically crafting ?insertion? nodes that only talk to logstash. This takes the index loading and computational work off the data nodes. A couple of our systems have a master node and a data node on them since the masters use very little resources. Note when you search in ES it pauses its other activities since ES thinks search is its primary function in life. So insertion rate drops. What we found is that having specific insertion nodes lets ES keep taking inserts even when you do heavy searches. Keeps my OPS people happy. ? When I saw your "25TB of data per year" I chuckled... My *small* ES cluster is three Dell R380s (each with 20C, 64GB, 35TB disk). (I should have gotten more memory.) But our 95TB of disk lets us keep about 120 days of bro logs before curator deletes old logs to makes us more free disk space. This cluster has been continuously up about 4 months since I last played with the configs, so I'm content. I think your 4 data nodes is fine. If your insertion rate is high make an insertion node on one of your masters. Pat p.s. Keep your java heap under 26GB; 23GB if you hate compressed java pointers. From: bro-bounces at bro.org On Behalf Of erik clark Sent: Friday, April 27, 2018 8:48 AM To: Bro-IDS Subject: Re: [Bro] ES cluster and logstash We are looking to set up a proper ES cluster and dumping bro logs into it via logstash. The thought is to have 6 ES nodes (2 dedicated master, 4 data nodes). If we are dumping 15 TB of data into the cluster a year (possibly as high as 20 or 25TB) from Bro, is 4 data nodes sufficient? The boxen will only have 64 gigs of ram (30 for java heap, 34 for system use) and probably 16 discrete cores. I have a feeling that this is horribly insufficient for a data cluster of that size. From philosnef at gmail.com Thu May 3 08:11:35 2018 From: philosnef at gmail.com (erik clark) Date: Thu, 3 May 2018 11:11:35 -0400 Subject: [Bro] ES cluster and logstash In-Reply-To: <0c1301d3e2f0$219c82d0$64d58870$@coopercain.com> References: <0c1301d3e2f0$219c82d0$64d58870$@coopercain.com> Message-ID: Thank you for your response Patrick! To be honest, I am not sure just how large our data set is. One of the problems we have is that we just don't have enough disk space to unpack our gzip'd logs to see what a year would look like. Do you happen to have a good document on how you are interfacing kafka with logstash? Erik On Thu, May 3, 2018 at 11:05 AM, Patrick Cain wrote: > Hi, > > From my adventures with two ES clusters receiving bro logs: > The big ?sizing? issues relate to how many bro events are being inserted > into ES. One of my ES clusters does about 25k events/sec from bro. We found > this is more than a simple bro ->logstash ->ES can take. We ended up > putting a kafka buffer between bro and logstash, so now it?s bro -> kafka > -> logstash -> ES. We tried with other buffer-type things, like syslog-ng > and a rabbitmq, but ended up on kafka since there is a bro-pkg for it, too. > I hear some people like redis for this funstion. So bro can now burst > event logs out and kafka makes the logstash to ES process run at a more > sustained pace. 
Kafka and logstash run on some old box we had laying around > so the bro box doesn't have to do anything but slop logs out. > The other issue was the rate at which ES can insert events when it?s doing > other stuff. We ended up making data nodes just do data; the masters just > be masters, and specifically crafting ?insertion? nodes that only talk to > logstash. This takes the index loading and computational work off the data > nodes. A couple of our systems have a master node and a data node on them > since the masters use very little resources. Note when you search in ES it > pauses its other activities since ES thinks search is its primary function > in life. So insertion rate drops. What we found is that having specific > insertion nodes lets ES keep taking inserts even when you do heavy > searches. Keeps my OPS people happy. ? > > When I saw your "25TB of data per year" I chuckled... My *small* ES > cluster is three Dell R380s (each with 20C, 64GB, 35TB disk). (I should > have gotten more memory.) But our 95TB of disk lets us keep about 120 days > of bro logs before curator deletes old logs to makes us more free disk > space. This cluster has been continuously up about 4 months since I last > played with the configs, so I'm content. I think your 4 data nodes is fine. > If your insertion rate is high make an insertion node on one of your > masters. > > Pat > p.s. Keep your java heap under 26GB; 23GB if you hate compressed java > pointers. > > From: bro-bounces at bro.org On Behalf Of erik clark > Sent: Friday, April 27, 2018 8:48 AM > To: Bro-IDS > Subject: Re: [Bro] ES cluster and logstash > > We are looking to set up a proper ES cluster and dumping bro logs into it > via logstash. The thought is to have 6 ES nodes (2 dedicated master, 4 data > nodes). If we are dumping 15 TB of data into the cluster a year (possibly > as high as 20 or 25TB) from Bro, is 4 data nodes sufficient? The boxen will > only have 64 gigs of ram (30 for java heap, 34 for system use) and probably > 16 discrete cores. I have a feeling that this is horribly insufficient for > a data cluster of that size. > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180503/49c53b5b/attachment.html From carlrotenan at gmail.com Thu May 3 11:41:44 2018 From: carlrotenan at gmail.com (Carl Rotenan) Date: Thu, 3 May 2018 14:41:44 -0400 Subject: [Bro] Broctl netstats Message-ID: Could someone explain the the dropped and link columns in the broctl netstats output? Ex recvd=12409185 dropped=33782 link=12409185 I'm trying to figure out what is causing the drops. Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180503/a0abe0bf/attachment.html From ossamabzos at gmail.com Fri May 4 03:14:19 2018 From: ossamabzos at gmail.com (bz Os) Date: Fri, 4 May 2018 11:14:19 +0100 Subject: [Bro] can we integrate bro with ibm qradar? Message-ID: hello Evry Body i have a stage in an entreprise and it tell me to integrate bro with ibmqradar siem ,so i want to know if is possible to integrate bro with ibm qradar Siem -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180504/7792de4b/attachment.html From jghobrial at rice.edu Fri May 4 08:52:43 2018 From: jghobrial at rice.edu (Joseph Ghobrial) Date: Fri, 04 May 2018 15:52:43 +0000 Subject: [Bro] Bro and Splunk forwarder In-Reply-To: References: Message-ID: Thanks everyone. I'm passing this along to our Splunk person to see what we can do. Just to clarify I'm running the manager, logger, some of the workers and a splunk forward on the main node. The remaining nodes just run the workers. This is modeled after the 100Gb Bro cluster paper from Berkeley except I don't believe they had the splunk forwarder. However, we provided more hardware to accommodate this configuration which is the experimental part of this setup. Thanks, Joseph -- Joseph Ghobrial Systems Analyst II Office of Information Technology Rice University jghobrial @ rice.edu On Tue, May 1, 2018 at 7:59 PM Joshua Buysse wrote: > I've run in to something like this. It may be related to a known issue in > the Splunk forwarder (SPL-99316). The forwarder appeared to lose track of > files, and usually picked up the data on a delay after the log was > rotated. It seemed to be volume-related, with files that grow quickly more > likely to trigger it. The workaround in the docs works - I've seen it > happen after pushing the workaround, but it's extremely rare. > > From the forwarder known issues in the release notes: > > 2015-04-07 SPL-99316 Universal Forwarders stop sending data > repeatedly throughout the day > > Workaround: > > In limits.conf, try changing file_tracking_db_threshold_mb in the > [inputproc] stanza to a lower value. > > Otherwise, if splunkd has a cpu core pegged, you may need to do additional > tuning to enable an additional parsing pipeline. Also, splunkd has a > default output limit of 256Kbit/s to the indexers and will rate-limit > itself. It may fall far enough behind that it appears that it's stopped. > For our busiest forwarders, I push these tuning values to the forwarder in > a simple app: > > --- limits.conf --- > [thruput] > # unlimited output, default is 256 (kb/s) > maxKBps = 0 > > [inputproc] > # default is 100 > max_fd = 256 > > # * workaround for SPL-99316 > # default is 500; the note in "known issues" on SPL-99316 > # recommends setting to a lower value. > file_tracking_db_threshold_mb = 400 > --- end limits.conf --- > > --- server.conf --- > [general] > # parse and read multiple files at once, significantly increases CPU usage > parallelIngestionPipelines = 4 > > [queue] > maxSize = 128MB > > [queue=parsingQueue] > maxSize = 32MB > --- end server.conf --- > > > One note about those configs - we're load balancing the forwarder between > a couple dozen clustered indexers. If you're using a standalone indexer, > I'd be careful about parallelIngestionPipelines being too high. We went > overkill on memory, so 256MB just for parsing queues isn't an issue, and > the bro masters have plenty of available CPU. If you're stretched for > resources on the box, you probably don't want to allow Splunk to push that > hard. > > There's a lot more tuning that can be done - we switched to JSON output > for the bro logs, and the amount of processing needed on the Splunk > forwarder went down quite a bit (along with saving quite a bit of disk > space on the indexers), at the cost of more Splunk license usage. 
JSON has > fields extracted at search time, while the default delimited logs have all > the fields extracted as the file is ingested - smaller size for _raw, but > more disk usage since all the fields are stored in the indexes. > Performance is actually a little better with JSON as well. > > Hopefully that's helpful. > - J > -- > Joshua Buysse - Security Engineer > University Information Security - University of Minnesota > > > *"Computers are useless. They can only give you answers." > - Pablo Picasso* > > On Tue, May 1, 2018 at 3:52 PM, fatema bannatwala < > fatema.bannatwala at gmail.com> wrote: > >> Hi Joseph, >> >> Just wanted to get clarity, are you running Splunk forwarder on the >> manager of your Bro cluster? >> If yes, then how are you monitoring the log files generated by bro in >> current dir (i.e. contents of your inputs.conf of Splunk Forwarder)? >> >> I believe, Splunk monitoring should work just fine on the bro log files >> on manager. >> >> Fatema. >> >> _______________________________________________ >> Bro mailing list >> bro at bro-ids.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >> > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180504/b035171e/attachment.html From scars.8 at gmail.com Fri May 4 14:52:11 2018 From: scars.8 at gmail.com (steven johnson) Date: Fri, 4 May 2018 23:52:11 +0200 Subject: [Bro] module creation Message-ID: Hello, I'm writing a new module for a protocol, and there is few things that I'm not sure about : 1/ Is there anything to do other than creating __load__ & main.bro files to make my module to work ? 2/ if I want to use dpd with signature, the only thing that i have to do is to have my sig file in the module, isn't it? regards, -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180504/390f0e44/attachment.html From sm8kk at virginia.edu Sun May 6 11:10:16 2018 From: sm8kk at virginia.edu (sourav maji) Date: Sun, 6 May 2018 14:10:16 -0400 Subject: [Bro] Regarding bro capture_loss In-Reply-To: References: Message-ID: Hi Justin, Sorry for my late response. Appreciate the detailed explanation about capture loss. We will try out the experiments that you have suggested. Thanks and regards, Sourav Maji On Tue, May 1, 2018 at 10:04 AM, Azoff, Justin S wrote: > > > On Apr 30, 2018, at 6:59 PM, sourav maji wrote: > > > > Hi, > > > > Sorry if my questions have already been answered but it would be > really helpful if anyone can provide information on the following. > > > > 1. Does bro capture_loss indicate that packets that are mirrored using a > switch's SPAN/TAP port to a server running bro, drop packets in the > mirroring process somewhere upstream? > > > > In our particular setting, we are seeing zero packet drops reported by > "broctl netstats" but more than 40% packet losses in capture_loss. Does > that imply that the server running bro is not dropping any packets but that > packets are being dropped upstream? Bidirectional traffic is sent to the > server running bro using SPAN ports. > > > > 2. Is there a document that explains in detail how capture loss is > computed? > > It says "Reported loss is computed in terms of the number of ?gap > events? (ACKs for a sequence number that?s above a gap)." > > What exactly is a gap event and how is the function call > "get_gap_stats()" defined? 
The code in "capture-loss.bro" does not explain > how acks and gaps can be used to estimate capture loss. Any detailed > documentation would be useful. > > Bro simply counts tcp ACKs for packets that it did not see in the first > place. If it saw the ACK, but not the original packet, there was capture > loss. > > > Thanks and regards, > > Sourav Maji > > Capture loss by itself is kind of a useless metric.. when it's zero, > that's great, but any number above a very small percentage just tells you > there is a problem somewhere but not where it is. > > It's kind of like a "Check engine" light. > > You need to figure out where your loss is coming from. Analyzing the > "missed_bytes" column in the conn.log will help. > > If you install bro-doctor (https://github.com/ncsa/bro-doctor) > > bro-pkg install ncsa/bro-doctor > broctl doctor.bro > > the "Checking what percentage of recent tcp connections show loss" section > in the output will tell you what percentage of your recent connections is > seeing loss. > > The number of connections seeing loss can often be a better metric than > the overall loss count itself. If that is also 40% then you are missing a > lot of traffic. If it's 1%, you have a small number of broken connections. > > A really good test (that I still haven't figure out how to add to > bro-doctor) is to run something like this from somewhere on your network: > > for x in $(seq 1 9); do echo -e 'GET / HTTP/1.1\r\nHost: www.bro.org\r\n\r\n' > | socat - tcp-connect:www.bro.org:80,sp=2000$x,reuseaddr; sleep 1; done > > Then see what bro logged using > > cat conn.log |bro-cut -d ts id.orig_h id.orig_p id.resp_h id.resp_p > orig_pkts resp_pkts missed_bytes | fgrep 192.150.187.43 > > You should see 9 almost identical lines like > > 141.142.148.70 20001 192.150.187.43 80 6 4 0 > ShADFadf > > ? > Justin Azoff > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180506/8139531d/attachment.html From bill.de.ping at gmail.com Sun May 6 23:06:41 2018 From: bill.de.ping at gmail.com (william de ping) Date: Mon, 7 May 2018 09:06:41 +0300 Subject: [Bro] Bro and Splunk forwarder In-Reply-To: References: Message-ID: Hi there, I have encountered this issue, it seems to be related to inputs.conf if splunk UF is set to batch:// the directory of spool/manager, then it will delete the files it sees after processing them. meaning you could get a race condition between splunk UF and bro, since bro will try to move the file after a certain interval and gzip them. you should use monitor:// instead of batch:// in inputs.conf B On Fri, May 4, 2018 at 6:52 PM, Joseph Ghobrial wrote: > Thanks everyone. I'm passing this along to our Splunk person to see what > we can do. > > Just to clarify I'm running the manager, logger, some of the workers and a > splunk forward on the main node. The remaining nodes just run the workers. > This is modeled after the 100Gb Bro cluster paper from Berkeley except I > don't believe they had the splunk forwarder. However, we provided more > hardware to accommodate this configuration which is the experimental part > of this setup. > > Thanks, > Joseph > > -- > Joseph Ghobrial > Systems Analyst II > Office of Information Technology > Rice University > jghobrial @ rice.edu > > > On Tue, May 1, 2018 at 7:59 PM Joshua Buysse wrote: > >> I've run in to something like this. It may be related to a known issue >> in the Splunk forwarder (SPL-99316). 
The forwarder appeared to lose track >> of files, and usually picked up the data on a delay after the log was >> rotated. It seemed to be volume-related, with files that grow quickly more >> likely to trigger it. The workaround in the docs works - I've seen it >> happen after pushing the workaround, but it's extremely rare. >> >> From the forwarder known issues in the release notes: >> > 2015-04-07 SPL-99316 Universal Forwarders stop sending data >> repeatedly throughout the day >> > Workaround: >> > In limits.conf, try changing file_tracking_db_threshold_mb in the >> [inputproc] stanza to a lower value. >> >> Otherwise, if splunkd has a cpu core pegged, you may need to do >> additional tuning to enable an additional parsing pipeline. Also, splunkd >> has a default output limit of 256Kbit/s to the indexers and will rate-limit >> itself. It may fall far enough behind that it appears that it's stopped. >> For our busiest forwarders, I push these tuning values to the forwarder in >> a simple app: >> >> --- limits.conf --- >> [thruput] >> # unlimited output, default is 256 (kb/s) >> maxKBps = 0 >> >> [inputproc] >> # default is 100 >> max_fd = 256 >> >> # * workaround for SPL-99316 >> # default is 500; the note in "known issues" on SPL-99316 >> # recommends setting to a lower value. >> file_tracking_db_threshold_mb = 400 >> --- end limits.conf --- >> >> --- server.conf --- >> [general] >> # parse and read multiple files at once, significantly increases CPU usage >> parallelIngestionPipelines = 4 >> >> [queue] >> maxSize = 128MB >> >> [queue=parsingQueue] >> maxSize = 32MB >> --- end server.conf --- >> >> >> One note about those configs - we're load balancing the forwarder between >> a couple dozen clustered indexers. If you're using a standalone indexer, >> I'd be careful about parallelIngestionPipelines being too high. We went >> overkill on memory, so 256MB just for parsing queues isn't an issue, and >> the bro masters have plenty of available CPU. If you're stretched for >> resources on the box, you probably don't want to allow Splunk to push that >> hard. >> >> There's a lot more tuning that can be done - we switched to JSON output >> for the bro logs, and the amount of processing needed on the Splunk >> forwarder went down quite a bit (along with saving quite a bit of disk >> space on the indexers), at the cost of more Splunk license usage. JSON has >> fields extracted at search time, while the default delimited logs have all >> the fields extracted as the file is ingested - smaller size for _raw, but >> more disk usage since all the fields are stored in the indexes. >> Performance is actually a little better with JSON as well. >> >> Hopefully that's helpful. >> - J >> -- >> Joshua Buysse - Security Engineer >> University Information Security - University of Minnesota >> >> >> *"Computers are useless. They can only give you answers." >> - Pablo Picasso* >> >> On Tue, May 1, 2018 at 3:52 PM, fatema bannatwala < >> fatema.bannatwala at gmail.com> wrote: >> >>> Hi Joseph, >>> >>> Just wanted to get clarity, are you running Splunk forwarder on the >>> manager of your Bro cluster? >>> If yes, then how are you monitoring the log files generated by bro in >>> current dir (i.e. contents of your inputs.conf of Splunk Forwarder)? >>> >>> I believe, Splunk monitoring should work just fine on the bro log files >>> on manager. >>> >>> Fatema. 
>>> >>> _______________________________________________ >>> Bro mailing list >>> bro at bro-ids.org >>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >>> >> >> >> >> >> > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180507/82976603/attachment.html From ossamabzos at gmail.com Mon May 7 04:34:32 2018 From: ossamabzos at gmail.com (bz Os) Date: Mon, 7 May 2018 12:34:32 +0100 Subject: [Bro] bro notice framework Message-ID: hello Evry one i attempt to have a notice on my email when an scan against my network done ,i writed this script : @load policy/misc/scan.bro > hook Notice::policy(n:Notice::type){ > if(n$note==Scan::Address_Scan){ > add n$actions[Notice::ACTION_EMAIL]; > } > } but when i test scan against my network ,i had nothing in my email ,but i have a notice that a scan is made in the file notice.log how can i resolve this probleme? and how make the file notice.log to log a significant notice for example when a scan is made it wil create scan made by and adresse of the host -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180507/9ad320e9/attachment.html From assaf.morami at gmail.com Tue May 8 02:19:05 2018 From: assaf.morami at gmail.com (Assaf) Date: Tue, 08 May 2018 09:19:05 +0000 Subject: [Bro] dump_packet and dump_current_packet ignores file name Message-ID: Hi. I'm trying to dump each connection to a different file. E.g: event new_packet(c: connection, p: pkt_hdr) { dump_current_packet(c$uid + ".pcap"); } But bro writes all of the packets to the first "c$uid" and ignores the rest. Looking at the source code ( https://github.com/bro/bro/blob/091d1e163f687105bb6454d61252cbe4edae7d30/src/bro.bif#L3282-L3299), it seems that bro ignores "file_name" if "addl_pkt_dumper" already exists. Reading the changelog (https://www.bro.org/download/CHANGES.bro.txt), it seems that "rotate_file_by_name" can be used to close "addl_pkt_dumper", but it throws "can't move x.pcap to x.pcap.17946.1255209915.175512.tmp: No such file or directory". How can I solve this? -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180508/10746647/attachment.html From johanna at icir.org Tue May 8 11:28:53 2018 From: johanna at icir.org (Johanna Amann) Date: Tue, 8 May 2018 11:28:53 -0700 Subject: [Bro] dump_packet and dump_current_packet ignores file name In-Reply-To: References: Message-ID: <20180508182853.xizl5c27wpqxvy4v@Beezling.dhcp.lbnl.us> Hi, just to follow up - your pull request at https://github.com/bro/bro/pull/132 has just been merged and this should work now. Johanna On Tue, May 08, 2018 at 09:19:05AM +0000, Assaf wrote: > Hi. > > I'm trying to dump each connection to a different file. E.g: > > event new_packet(c: connection, p: pkt_hdr) { > dump_current_packet(c$uid + ".pcap"); > } > > But bro writes all of the packets to the first "c$uid" and ignores the rest. > > Looking at the source code ( > https://github.com/bro/bro/blob/091d1e163f687105bb6454d61252cbe4edae7d30/src/bro.bif#L3282-L3299), > it seems that bro ignores "file_name" if "addl_pkt_dumper" already exists. 
> > Reading the changelog (https://www.bro.org/download/CHANGES.bro.txt), it > seems that "rotate_file_by_name" can be used to close "addl_pkt_dumper", > but it throws "can't move x.pcap to x.pcap.17946.1255209915.175512.tmp: No > such file or directory". > > How can I solve this? > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From assaf.morami at gmail.com Tue May 8 12:09:03 2018 From: assaf.morami at gmail.com (Assaf) Date: Tue, 08 May 2018 19:09:03 +0000 Subject: [Bro] dump_packet and dump_current_packet ignores file name In-Reply-To: <20180508182853.xizl5c27wpqxvy4v@Beezling.dhcp.lbnl.us> References: <20180508182853.xizl5c27wpqxvy4v@Beezling.dhcp.lbnl.us> Message-ID: Thanks! On Tue, May 8, 2018 at 9:28 PM Johanna Amann wrote: > Hi, > > just to follow up - your pull request at > https://github.com/bro/bro/pull/132 has just been merged and this should > work now. > > Johanna > > On Tue, May 08, 2018 at 09:19:05AM +0000, Assaf wrote: > > Hi. > > > > I'm trying to dump each connection to a different file. E.g: > > > > event new_packet(c: connection, p: pkt_hdr) { > > dump_current_packet(c$uid + ".pcap"); > > } > > > > But bro writes all of the packets to the first "c$uid" and ignores the > rest. > > > > Looking at the source code ( > > > https://github.com/bro/bro/blob/091d1e163f687105bb6454d61252cbe4edae7d30/src/bro.bif#L3282-L3299 > ), > > it seems that bro ignores "file_name" if "addl_pkt_dumper" already > exists. > > > > Reading the changelog (https://www.bro.org/download/CHANGES.bro.txt), it > > seems that "rotate_file_by_name" can be used to close "addl_pkt_dumper", > > but it throws "can't move x.pcap to x.pcap.17946.1255209915.175512.tmp: > No > > such file or directory". > > > > How can I solve this? > > > _______________________________________________ > > Bro mailing list > > bro at bro-ids.org > > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180508/a0ffe9bf/attachment.html From johanna at icir.org Tue May 8 16:58:37 2018 From: johanna at icir.org (Johanna Amann) Date: Tue, 8 May 2018 16:58:37 -0700 Subject: [Bro] bro notice framework In-Reply-To: References: Message-ID: <20180508235837.zre6tmspnpho34pk@Beezling.dhcp.lbnl.us> On Mon, May 07, 2018 at 12:34:32PM +0100, bz Os wrote: > hello Evry one i attempt to have a notice on my email when an scan against > my network done ,i writed this script : > > @load policy/misc/scan.bro > > hook Notice::policy(n:Notice::type){ > > if(n$note==Scan::Address_Scan){ > > add n$actions[Notice::ACTION_EMAIL]; > > } > > } > > > but when i test scan against my network ,i had nothing in my email ,but i > have a notice that a scan is made in the file notice.log > how can i resolve this probleme? This might be that the path to sendmail is set incorrectly. If you use broctl, check the broctl.conf to check if the sendmail path is correct. If it is correct try if you manually can send email via sendmail and it arrives. reporter.log also might contain output. > and how make the file notice.log to log a significant notice for example > when a scan is made it wil create scan made by and adresse of the host I am not 100% sure what you mean here. Assuming you want to customize the text that the notice has. Generally you can put all information into the notice.log that you pass in the call to NOTICE. 
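As an illustration, in your own scripts you control that text directly, because you construct the Notice::Info record yourself. A minimal sketch with a made-up notice type (not the actual scan.bro code):

module MyScan;

export {
    redef enum Notice::Type += { Host_Scanned_Us };
}

event connection_attempt(c: connection)
    {
    NOTICE([$note=Host_Scanned_Us,
            $conn=c,
            $src=c$id$orig_h,
            $msg=fmt("%s attempted a connection to %s", c$id$orig_h, c$id$resp_h),
            $sub="custom text for the sub column"]);
    }

The msg and sub values end up in the msg and sub columns of notice.log, and src shows up in the src column.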
In this case the notice is raised by scan.bro, which sadly is not easily customizable. You could copy it to your own script directory, customize it and then load your own scan.bro instead. Johanna From johanna at icir.org Tue May 8 17:02:53 2018 From: johanna at icir.org (Johanna Amann) Date: Tue, 8 May 2018 17:02:53 -0700 Subject: [Bro] module creation In-Reply-To: References: Message-ID: <20180509000253.uykcwuh72m46xdwb@Beezling.local> Hi, > 1/ Is there anything to do other than creating __load__ & main.bro files to > make my module to work ? It really depends what you are talking about here. If you are talking about something that can be done purely in script-level, yes that's basically it. If you want to distribute it using the Bro package manager you also need to add a file with metainformation, but that is similarly easy. If you want to write a protocol parser you have to use C++ and things get a bit more complex. > 2/ if I want to use dpd with signature, the only thing that i have to do is > to have my sig file in the module, isn't it? And you have to load the signature file, e.g. in __load__.bro. But besides that - yup. Johanna From johanna at icir.org Tue May 8 17:04:29 2018 From: johanna at icir.org (Johanna Amann) Date: Tue, 8 May 2018 17:04:29 -0700 Subject: [Bro] can we integrate bro with ibm qradar? In-Reply-To: References: Message-ID: <20180509000429.3x763przn6iitrjj@Beezling.local> Hi, there seem to be some people that do this - here is a thread from 2016: https://www.ibm.com/developerworks/community/forums/html/topic?id=90cc72f3-3879-4994-9408-1ad3b76ce639 That being said I do not have any personal experiences with this. Johanna On Fri, May 04, 2018 at 11:14:19AM +0100, bz Os wrote: > hello Evry Body i have a stage in an entreprise and it tell me to integrate > bro with ibmqradar siem ,so i want to know if is possible to integrate bro > with ibm qradar Siem > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From johanna at icir.org Tue May 8 17:12:30 2018 From: johanna at icir.org (Johanna Amann) Date: Tue, 8 May 2018 17:12:30 -0700 Subject: [Bro] Broctl netstats In-Reply-To: References: Message-ID: <20180509001230.aran7ptjdwisyr5r@Beezling.local> Hi, On Thu, May 03, 2018 at 02:41:44PM -0400, Carl Rotenan wrote: > Could someone explain the the dropped and link columns in the broctl > netstats output? > > Ex > recvd=12409185 dropped=33782 link=12409185 The information comes out of the get_net_stats.bif. Documentation is at https://www.bro.org/sphinx/scripts/base/bif/stats.bif.bro.html#id-get_net_stats. According to this, the numbers mean: "the number of packets (i) received by Bro, (ii) dropped, and (iii) seen on the link (not always available)." For the standard pcap input method, received is incremented each time that Bro handles a packet. Dropped and link come out of pcap_stats (https://www.tcpdump.org/manpages/pcap_stats.3pcap.html) and are set to ps_recv (link) and ps_drop (dropped). ps_ifdrop does not seem to be available. I hope that helps, Johanna From johanna at icir.org Tue May 8 17:17:11 2018 From: johanna at icir.org (Johanna Amann) Date: Tue, 8 May 2018 17:17:11 -0700 Subject: [Bro] Dropped data In-Reply-To: References: Message-ID: <20180509001711.x4w23sqfn2zxmu7s@Beezling.local> Hi, this actually does not look very bad to me - on most interfaces you do not seem to have any drops. One of them has a bit over 2% which is not that pretty but also not catastrophic. 
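(For reference, from the numbers you posted: the worst worker is worker-0-9 at roughly 52543 dropped / 2228892 received, which is about 2.4%, and every other worker that shows drops is under 1%.)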
I have no experience with ZC, but generally packet loss can be caused by a number of issues. Single high-speed connections can be problematic (because they add to the normal load of a single Bro process). Microbursts also happen and can lead to a bit of packet loss. If there is a setting to increase the available buffer, that might be worth playing around with. Johanna On Wed, May 02, 2018 at 04:13:07PM -0400, Carl Rotenan wrote: > Hello, > > Can someone give me some direction on trying to figure out why I have > dropped data? > > This output is from a machine getting about 3G of traffic a minute or so > into starting Bro 2.5.3 with PF_RING 7.0.0. > > How much data per worker should I expect to budget for? Ideally I'd like > Bro to be able to do 10G of traffic. > > Has anyone used PF_RING ZC with success? > > worker-0-1: 1525291681.760081 recvd=564836 dropped=0 link=564836 > worker-0-2: 1525291681.961074 recvd=723187 dropped=0 link=723187 > worker-0-3: 1525291682.162178 recvd=682598 dropped=4619 link=682598 > worker-0-4: 1525291682.364202 recvd=1094776 dropped=0 link=1094776 > worker-0-5: 1525291682.566055 recvd=6722748 dropped=30902 link=6722748 > worker-0-6: 1525291682.768050 recvd=2180528 dropped=0 link=2180528 > worker-0-7: 1525291682.969023 recvd=3252824 dropped=0 link=3252824 > worker-0-8: 1525291683.179065 recvd=414112 dropped=0 link=414112 > worker-0-9: 1525291683.379083 recvd=2228892 dropped=52543 link=2228892 > worker-0-10: 1525291683.579973 recvd=1735298 dropped=0 link=1735298 > worker-0-11: 1525291683.780260 recvd=2720785 dropped=1437 link=2720785 > worker-0-12: 1525291683.981421 recvd=5835651 dropped=7610 link=5835651 > worker-0-13: 1525291684.181057 recvd=566766 dropped=0 link=566766 > worker-0-14: 1525291684.381979 recvd=335114 dropped=0 link=335114 > worker-0-15: 1525291684.582077 recvd=743998 dropped=0 link=743998 > worker-0-16: 1525291684.782897 recvd=6124252 dropped=54604 link=6124252 > worker-0-17: 1525291684.980916 recvd=3476401 dropped=17138 link=3476401 > worker-0-18: 1525291685.184047 recvd=1286574 dropped=0 link=1286574 > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From johanna at icir.org Tue May 8 17:23:52 2018 From: johanna at icir.org (Johanna Amann) Date: Tue, 8 May 2018 17:23:52 -0700 Subject: [Bro] what are the important configuration file in bro? In-Reply-To: References: Message-ID: <20180509002352.6j5e3v2mmuwauca7@Beezling.local> On Tue, Apr 24, 2018 at 12:27:12PM +0100, bz Os wrote: > hello wath are the configuration file in bro (such as local.bro)and there > role The important ones are documented at https://www.bro.org/sphinx/quickstart/index.html. Johanna From johanna at icir.org Tue May 8 17:43:00 2018 From: johanna at icir.org (Johanna Amann) Date: Tue, 8 May 2018 17:43:00 -0700 Subject: [Bro] internal error: file analyzer instantiation failed In-Reply-To: References: Message-ID: <20180509004300.6fp467uvlyqy2uqp@Beezling.local> Two questions: * is this an unmodified version of Bro, or did you install any 3rd party modules? * Does this also happen in 2.5.3? 2.4.1 was released in 2015 and we do not really support that anymore... Johanna On Mon, Apr 09, 2018 at 05:05:43PM +0300, Assaf wrote: > Hi to all. 
:-) > > When running: > bro -r my.pcap -b -C base/protocols/rdp > (ubuntu server 16.04 - bro version 2.4.1) > > I'm getting an error: > Error: 1521789435.202907 internal error: file analyzer instantiation failed > > I've found nothing on google or the docs. > > How can I fix this? > > Thanks, > Assaf. > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From johanna at icir.org Tue May 8 17:45:39 2018 From: johanna at icir.org (Johanna Amann) Date: Tue, 8 May 2018 17:45:39 -0700 Subject: [Bro] docker bro with pf_ring In-Reply-To: References: Message-ID: <20180509004539.dma3aykd4hflkekw@Beezling.local> On Fri, Apr 06, 2018 at 08:49:42PM +0530, ps sunu wrote: > How to configure docker bro with pfring any proper steps have to try ? I assume you found https://www.ntop.org/guides/pf_ring/vm_support/docker.html? I did not try that but it looks relatively straightforward. Johanna From johanna at icir.org Tue May 8 17:48:26 2018 From: johanna at icir.org (Johanna Amann) Date: Tue, 8 May 2018 17:48:26 -0700 Subject: [Bro] read json In-Reply-To: References: Message-ID: <20180509004826.6dgk43oksxp2qixc@Beezling.local> On Mon, Mar 19, 2018 at 05:14:45PM +0100, Rober Fern?ndez wrote: > Hi,can Bro read a json file? No (assuming you mean with the Input framework). The only way to do that currently would be to use the raw reader and parse it yourself in scriptland :) Johanna From michalpurzynski1 at gmail.com Tue May 8 18:20:08 2018 From: michalpurzynski1 at gmail.com (=?utf-8?Q?Micha=C5=82_Purzy=C5=84ski?=) Date: Tue, 8 May 2018 18:20:08 -0700 Subject: [Bro] Dropped data In-Reply-To: <20180509001711.x4w23sqfn2zxmu7s@Beezling.local> References: <20180509001711.x4w23sqfn2zxmu7s@Beezling.local> Message-ID: <0BDA67AB-349F-4626-B911-41BEDB539185@gmail.com> What kind of cards and distribution do you have? Maybe you could just switch to afpacket to avoid the problem entirely > On May 8, 2018, at 5:17 PM, Johanna Amann wrote: > > Hi, > > this actually does not look very bad to me - on most interfaces you do not > seem to have any drops. One of them has a bit over 2% which is not that > pretty but also not catastrophic. > > I have no experience with ZC, but generally packet loss can be caused by a > number of issues. Single high-speed connections can be problematic > (because they add to the normal load of a single Bro process). Microbursts > also happen and can lead to a bit of packet loss. > > If there is a setting to increase the available buffer, that might be > worth playing around with. > > Johanna > >> On Wed, May 02, 2018 at 04:13:07PM -0400, Carl Rotenan wrote: >> Hello, >> >> Can someone give me some direction on trying to figure out why I have >> dropped data? >> >> This output is from a machine getting about 3G of traffic a minute or so >> into starting Bro 2.5.3 with PF_RING 7.0.0. >> >> How much data per worker should I expect to budget for? Ideally I'd like >> Bro to be able to do 10G of traffic. >> >> Has anyone used PF_RING ZC with success? 
>> >> worker-0-1: 1525291681.760081 recvd=564836 dropped=0 link=564836 >> worker-0-2: 1525291681.961074 recvd=723187 dropped=0 link=723187 >> worker-0-3: 1525291682.162178 recvd=682598 dropped=4619 link=682598 >> worker-0-4: 1525291682.364202 recvd=1094776 dropped=0 link=1094776 >> worker-0-5: 1525291682.566055 recvd=6722748 dropped=30902 link=6722748 >> worker-0-6: 1525291682.768050 recvd=2180528 dropped=0 link=2180528 >> worker-0-7: 1525291682.969023 recvd=3252824 dropped=0 link=3252824 >> worker-0-8: 1525291683.179065 recvd=414112 dropped=0 link=414112 >> worker-0-9: 1525291683.379083 recvd=2228892 dropped=52543 link=2228892 >> worker-0-10: 1525291683.579973 recvd=1735298 dropped=0 link=1735298 >> worker-0-11: 1525291683.780260 recvd=2720785 dropped=1437 link=2720785 >> worker-0-12: 1525291683.981421 recvd=5835651 dropped=7610 link=5835651 >> worker-0-13: 1525291684.181057 recvd=566766 dropped=0 link=566766 >> worker-0-14: 1525291684.381979 recvd=335114 dropped=0 link=335114 >> worker-0-15: 1525291684.582077 recvd=743998 dropped=0 link=743998 >> worker-0-16: 1525291684.782897 recvd=6124252 dropped=54604 link=6124252 >> worker-0-17: 1525291684.980916 recvd=3476401 dropped=17138 link=3476401 >> worker-0-18: 1525291685.184047 recvd=1286574 dropped=0 link=1286574 > >> _______________________________________________ >> Bro mailing list >> bro at bro-ids.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From dnj0496 at gmail.com Tue May 8 21:05:03 2018 From: dnj0496 at gmail.com (Dk Jack) Date: Tue, 8 May 2018 21:05:03 -0700 Subject: [Bro] duplicate traffic Message-ID: Hi, I am dealing with a peculiar problem. Our bro sniffing traffic on a span port. Sometimes, we are seeing duplicate traffic. When this happens our downstream analysis has problems because of the duplicate log entries in the bro logs etc. Is there a way to detect and prevent the logging of duplicate entries in bro logs. Regards, Dk. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180508/4cc24610/attachment.html From jizquierdo at owlh.net Wed May 9 03:03:48 2018 From: jizquierdo at owlh.net (jose antonio izquierdo lopez) Date: Wed, 09 May 2018 10:03:48 +0000 Subject: [Bro] Store ASCII and JSON output format at the same time Message-ID: Hi Bro Family, We want to implement a logging configuration with Bro that will allow us to store the output in both formats at the same time: JSON and ASCII. The main idea is to have something like: .- weird.log .- weird.json As each filter seems to be able to use one writer, I can't see the way to accomplish this configuration with current plugins, configs, packets. Hopefully, I'm wrong. Does someone know if there is a configuration or packet that can help to achieve this config? Thanks a lo t, ? ?B est Regards, Jose Antonio Izquierdo m - +34 673 055 255 skype - izquierdo.lopez -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180509/ffd33e58/attachment.html From jan.grashoefer at gmail.com Wed May 9 04:38:01 2018 From: jan.grashoefer at gmail.com (=?UTF-8?Q?Jan_Grash=c3=b6fer?=) Date: Wed, 9 May 2018 13:38:01 +0200 Subject: [Bro] Store ASCII and JSON output format at the same time In-Reply-To: References: Message-ID: <0a5d55d2-12b6-1b86-911a-a73383c142fb@gmail.com> On 09/05/18 12:03, jose antonio izquierdo lopez wrote: > Does someone know if there is a configuration or packet that can help to > achieve this config? There is a package that provides more or less exactly this functionality: https://github.com/J-Gras/add-json If you have installed the Bro Package Manager: bro-pkg install add-json Jan From jizquierdo at owlh.net Wed May 9 04:51:08 2018 From: jizquierdo at owlh.net (jose antonio izquierdo lopez) Date: Wed, 09 May 2018 11:51:08 +0000 Subject: [Bro] Store ASCII and JSON output format at the same time In-Reply-To: <0a5d55d2-12b6-1b86-911a-a73383c142fb@gmail.com> References: <0a5d55d2-12b6-1b86-911a-a73383c142fb@gmail.com> Message-ID: Thanks Jan, Will check it right now. Best Regards, Jose Antonio Izquierdo m - +34 673 055 255 skype - izquierdo.lopez On Wed, May 9, 2018 at 1:46 PM Jan Grash?fer wrote: > On 09/05/18 12:03, jose antonio izquierdo lopez wrote: > > Does someone know if there is a configuration or packet that can help to > > achieve this config? > > There is a package that provides more or less exactly this > functionality: https://github.com/J-Gras/add-json > > If you have installed the Bro Package Manager: > bro-pkg install add-json > > Jan > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180509/9e654843/attachment-0001.html From jizquierdo at owlh.net Wed May 9 05:17:19 2018 From: jizquierdo at owlh.net (jose antonio izquierdo lopez) Date: Wed, 09 May 2018 12:17:19 +0000 Subject: [Bro] Store ASCII and JSON output format at the same time In-Reply-To: References: <0a5d55d2-12b6-1b86-911a-a73383c142fb@gmail.com> Message-ID: Tested... it works. Thanks Best Regards, Jose Antonio Izquierdo m - +34 673 055 255 skype - izquierdo.lopez On Wed, May 9, 2018 at 1:51 PM jose antonio izquierdo lopez < jizquierdo at owlh.net> wrote: > Thanks Jan, > > Will check it right now. > > Best Regards, > > Jose Antonio Izquierdo > m - +34 673 055 255 > skype - izquierdo.lopez > > > > > > On Wed, May 9, 2018 at 1:46 PM Jan Grash?fer > wrote: > >> On 09/05/18 12:03, jose antonio izquierdo lopez wrote: >> > Does someone know if there is a configuration or packet that can help to >> > achieve this config? >> >> There is a package that provides more or less exactly this >> functionality: https://github.com/J-Gras/add-json >> >> If you have installed the Bro Package Manager: >> bro-pkg install add-json >> >> Jan >> _______________________________________________ >> Bro mailing list >> bro at bro-ids.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180509/743a62b2/attachment.html From ossamabzos at gmail.com Wed May 9 08:23:31 2018 From: ossamabzos at gmail.com (bz Os) Date: Wed, 9 May 2018 16:23:31 +0100 Subject: [Bro] how can evaluate bro Message-ID: hello Every One can some one tel me if there is an dataset or tool that allow me for evaluation of bro ids against new attack and technic evastion and also generation of false alert and the number of droped packet -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180509/e908f1c8/attachment.html From jizquierdo at owlh.net Wed May 9 08:47:43 2018 From: jizquierdo at owlh.net (jose antonio izquierdo lopez) Date: Wed, 09 May 2018 15:47:43 +0000 Subject: [Bro] how can evaluate bro In-Reply-To: References: Message-ID: Not sure if this may help https://www.netresec.com/?page=PcapFiles Best Regards, Jose Antonio Izquierdo m - +34 673 055 255 skype - izquierdo.lopez On Wed, May 9, 2018 at 5:44 PM bz Os wrote: > hello Every One can some one tel me if there is an dataset or tool that > allow me for evaluation of bro ids against new attack and technic evastion > and also generation of false alert and the number of droped packet > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180509/738e2589/attachment.html From johanna at icir.org Wed May 9 08:54:20 2018 From: johanna at icir.org (Johanna Amann) Date: Wed, 09 May 2018 08:54:20 -0700 Subject: [Bro] how can evaluate bro In-Reply-To: References: Message-ID: Hi, > hello Every One can some one tel me if there is an dataset or tool > that > allow me for evaluation of bro ids against new attack and technic > evastion > and also generation of false alert and the number of droped packet I am not aware of anything - I think you are on yourself here. Have fun building it :) Also note that Bro mostly does not really do attack detection; by default the logs (mostly) only describe what happened on the networks without attaching any opinion to it. So - you probably also have to write the attack detection code yourself. Johanna From charles.mckee at decisivedge.com Wed May 9 09:45:35 2018 From: charles.mckee at decisivedge.com (Charles Mckee) Date: Wed, 9 May 2018 12:45:35 -0400 Subject: [Bro] How to configure Bro to listen on multiple interfaces Message-ID: <5703cb07cdb65808928b48f87381e47f@mail.gmail.com> Hello Bro, I have been researching the net and cannot find a clear way for my scenario. I have bro in standalone mode, it has been running but now we added another nic to bro and want Bro to monitor on that nic as well keeping it in standalone mode. How can I make this happen, please can one how me what and where I need to make changes ? Respectfully Yours Charles McKee *Decisiv**E**dge**, LLC* *O:* 302.299.1570 x43 <(302)%20299-1570>2 *|* *C:* 302.3 <(302)%20299-0406>20.6968 *|* *F:* 302.299.1578 <(302)%20299-1578> 131 Continental Dr | Suite 409 | Newark, DE 19713 charles.mckee at decisivedge.com *|* www.DecisivEdge.com -- This email and any files transmitted with it are considered privileged and confidential unless otherwise explicitly stated otherwise. 
If you are not the intended recipient you are notified that disclosing, copying, distributing or taking any action in reliance on the contents of this information is strictly prohibited. All email data and contents may be monitored to ensure that their use is authorized, for management of the system, to facilitate protection against unauthorized use, and to verify security procedures, survivability and operational security. Under no circumstance should the user of this email have an expectation of privacy for this correspondence. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180509/e60565a9/attachment-0001.html From patrick.kelley at criticalpathsecurity.com Wed May 9 10:31:19 2018 From: patrick.kelley at criticalpathsecurity.com (Patrick Kelley) Date: Wed, 9 May 2018 13:31:19 -0400 Subject: [Bro] How to configure Bro to listen on multiple interfaces In-Reply-To: <5703cb07cdb65808928b48f87381e47f@mail.gmail.com> References: <5703cb07cdb65808928b48f87381e47f@mail.gmail.com> Message-ID: <67269DEB-F1C8-4443-9C09-7D6CCA0AFF15@criticalpathsecurity.com> Assuming you're using broctl, add: broargs = -i eth2 to your broctl.cfg file Patrick Kelley, CISSP, C|EH, ITIL Principal Security Engineer patrick.kelley at criticalpathsecurity.com > On May 9, 2018, at 12:45 PM, Charles Mckee wrote: > > Hello Bro, > > I have been researching the net and cannot find a clear way for my scenario. > > I have bro in standalone mode, it has been running but now we added another nic to bro and want Bro to monitor on that nic as well keeping it in standalone mode. > > How can I make this happen, please can one how me what and where I need to make changes ? > > Respectfully Yours > > Charles McKee > > DecisivEdge, LLC > O: 302.299.1570 x432 | C: 302.320.6968 | F: 302.299.1578 > 131 Continental Dr | Suite 409 | Newark, DE 19713 > charles.mckee at decisivedge.com | www.DecisivEdge.com > > > This email and any files transmitted with it are considered privileged and confidential unless otherwise explicitly stated otherwise. If you are not the intended recipient you are notified that disclosing, copying, distributing or taking any action in reliance on the contents of this information is strictly prohibited. All email data and contents may be monitored to ensure that their use is authorized, for management of the system, to facilitate protection against unauthorized use, and to verify security procedures, survivability and operational security. Under no circumstance should the user of this email have an expectation of privacy for this correspondence. > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180509/64330d94/attachment.html From carlrotenan at gmail.com Wed May 9 15:06:17 2018 From: carlrotenan at gmail.com (Carl Rotenan) Date: Wed, 9 May 2018 18:06:17 -0400 Subject: [Bro] Module execution Message-ID: Hello, I'm looking to squeeze every bit of performance out of my Bro implementation, and wanted to know: 1. Is there any over head that be turned off? I'm only looking to capture HTTP, SMTP, and FTP, extract any files in transit, and calculate a SHA1 hash of those files. 2. Are there any tips for writing fast event code? Are there any known slow moving operations? 3. 
Has anyone done any time execution analysis of their code and could share the results? Thank you as always, Carl -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180509/b0e0bed1/attachment.html From zeolla at gmail.com Wed May 9 16:29:09 2018 From: zeolla at gmail.com (Zeolla@GMail.com) Date: Wed, 09 May 2018 23:29:09 +0000 Subject: [Bro] Module execution In-Reply-To: References: Message-ID: Are you optimizing for a specific piece of hardware? Knowing any constraints there will be helpful as there are some changes that can be made based off of the hardware setup or traffic composition/speeds. Jon On Wed, May 9, 2018, 18:21 Carl Rotenan wrote: > Hello, > > I'm looking to squeeze every bit of performance out of my Bro > implementation, and wanted to know: > > 1. Is there any over head that be turned off? I'm only looking to capture > HTTP, SMTP, and FTP, extract any files in transit, and calculate a SHA1 > hash of those files. > > 2. Are there any tips for writing fast event code? Are there any known > slow moving operations? > > 3. Has anyone done any time execution analysis of their code and could > share the results? > > Thank you as always, > > Carl > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -- Jon -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180509/8bd67666/attachment.html From dwdixon at umich.edu Wed May 9 17:32:32 2018 From: dwdixon at umich.edu (Drew Dixon) Date: Wed, 9 May 2018 20:32:32 -0400 Subject: [Bro] how to get not duplicated packets In-Reply-To: References: Message-ID: You should also probably configure a logger process in node.cfg to run on on the same box as your manager and proxy. -Drew On Sun, Apr 29, 2018 at 9:08 PM, Seong Hyeok Seo wrote: > thanks a lot, Mark! > it?s solved by adding ?PFRINGClusterID = 21? in the cfg file. > it works well! > > > 2018? 4? 28? ???, Mark Buchanan?? ??? ???: > > From of one of Justin's posts a while back (as I have struggled with this >> numerous times) - this may or may not be the issue, but putting it out >> there if it is as it has the same symptoms. >> >> [root at bro-dev ~]# broctl config | grep pfring >> pfringclusterid = 21 >> pfringclustertype = 4-tuple >> ringfirstappinstance = 0 >> >> if you have pfringclusterid set to 0, that's the problem that was just >> fixed. You can easily workaround that by adding >> >> PFRINGClusterID = 21 >> >> to your /usr/local/bro/etc/broctl.cfg >> >> Mark >> >> On Fri, Apr 27, 2018 at 9:59 AM Seong Hyeok Seo >> wrote: >> >>> Yes, I will do that. >>> >>> On Fri, 27 Apr 2018 at 11:54 PM Vlad Grigorescu wrote: >>> >>>> Would you mind also sending your reply to the bro mailing list? That >>>> way other people can also help you, and it will provide information to >>>> anyone else that might run into this same issue in the future. Thanks. >>>> >>>> On Fri, Apr 27, 2018 at 2:49 PM, Seong Hyeok Seo >>>> wrote: >>>> >>>>> we?re working on 2 machines. we set one worker on a single server and >>>>> a manager and a proxy on the other one. >>>>> and actually we emailed to a pfring developer and they replied this... 
>>>>> ?it seems that Bro is not setting up a pf_ring cluster to distribute >>>>> the traffic across the instances (it should call pfring_set_cluster), >>>>> please write to the Bro mailing list as we are not maintaining that >>>>> code sorry.? >>>>> >>>>> >>>>> On Fri, 27 Apr 2018 at 11:33 PM Vlad Grigorescu wrote: >>>>> >>>>>> Could you provide a bit more detail about your setup? Are the workers >>>>>> all running on a single server, or are they distributed across multiple >>>>>> servers? >>>>>> >>>>>> What I'm trying to determine is at what point the duplication is >>>>>> happening. >>>>>> >>>>>> On Fri, Apr 27, 2018 at 9:47 AM, Seong Hyeok Seo >>>>>> wrote: >>>>>> >>>>>>> Hi, we're doing a job that collecting traffic by using Bro and >>>>>>> PF_RING >>>>>>> , but we found that each Bro worker got the same full traffic >>>>>>> stream. >>>>>>> We think the packet is duplicated as much as the process number that >>>>>>> we set in a config file(bro/etc/node.cfg) >>>>>>> >>>>>>> These are OS, Bro, PF_RING Ver. that we're using. >>>>>>> >>>>>>> >>>>>>> OS: CentOS 7.4.1708 (Core) >>>>>>> Bro: 2.5.3 >>>>>>> PF RING: 7.1.0-1859 >>>>>>> >>>>>>> we installed those things, referring this page, >>>>>>> https://www.bro.org/documentation/load-balancing.html >>>>>>> and node.cfg is like this >>>>>>> ------------------------------------------ >>>>>>> >>>>>>> [manager] >>>>>>> type=manager >>>>>>> host=X.X.X.X >>>>>>> >>>>>>> [proxy-1] >>>>>>> type=proxy >>>>>>> host=X.X.X.X >>>>>>> >>>>>>> [worker-1] >>>>>>> type=worker >>>>>>> host=X.X.X.X >>>>>>> interface=eth0 >>>>>>> lb_method=pf_ring >>>>>>> lb_procs=8 >>>>>>> -------------------------------------------------- >>>>>>> >>>>>>> please, help us to fix this and thank you in advance. >>>>>>> >>>>>>> Sincerely, >>>>>>> Seonghyoek >>>>>>> >>>>>>> >>>>>>> _______________________________________________ >>>>>>> Bro mailing list >>>>>>> bro at bro-ids.org >>>>>>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >>>>>>> >>>>>> >>>>>> >>>> _______________________________________________ >>> Bro mailing list >>> bro at bro-ids.org >>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >> >> >> >> -- >> Mark Buchanan >> > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180509/3e8f9405/attachment-0001.html From carlrotenan at gmail.com Wed May 9 17:55:53 2018 From: carlrotenan at gmail.com (Carl Rotenan) Date: Thu, 10 May 2018 00:55:53 +0000 Subject: [Bro] Module execution In-Reply-To: References: Message-ID: I?m trying to get as close as I can to supporting 10Gbs. I?ve been running upwards of 18 workers supporting 1 10G Intel NIC. As I increase network traffic I?m getting packet loss, and want to explore the code as a bottleneck. On Wed, May 9, 2018 at 7:29 PM Zeolla at GMail.com wrote: > Are you optimizing for a specific piece of hardware? Knowing any > constraints there will be helpful as there are some changes that can be > made based off of the hardware setup or traffic composition/speeds. > > Jon > > On Wed, May 9, 2018, 18:21 Carl Rotenan wrote: > >> Hello, >> >> I'm looking to squeeze every bit of performance out of my Bro >> implementation, and wanted to know: >> >> 1. Is there any over head that be turned off? 
I'm only looking to capture >> HTTP, SMTP, and FTP, extract any files in transit, and calculate a SHA1 >> hash of those files. >> >> 2. Are there any tips for writing fast event code? Are there any known >> slow moving operations? >> >> 3. Has anyone done any time execution analysis of their code and could >> share the results? >> >> Thank you as always, >> >> Carl >> > _______________________________________________ >> Bro mailing list >> bro at bro-ids.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > -- > > Jon > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180510/12ae6442/attachment.html From hckim at narusec.com Thu May 10 00:56:06 2018 From: hckim at narusec.com (=?UTF-8?B?6rmA7Z2s7LKg?=) Date: Thu, 10 May 2018 16:56:06 +0900 Subject: [Bro] How to configure Bro to listen on multiple interfaces Message-ID: Hi if you are using broctl change node.cfg file [bro] type=standalone host=localhost interface='eth0 -i eth1 -i eth2 -i eth3' but there is catch, you can not use 'broctl capstats' > > ------------------------------ > > Message: 5 > Date: Wed, 9 May 2018 12:45:35 -0400 > From: Charles Mckee > Subject: [Bro] How to configure Bro to listen on multiple interfaces > To: bro at bro.org > Message-ID: <5703cb07cdb65808928b48f87381e47f at mail.gmail.com> > Content-Type: text/plain; charset="utf-8" > > Hello Bro, > > > > I have been researching the net and cannot find a clear way for my > scenario. > > > > I have bro in standalone mode, it has been running but now we added another > nic to bro and want Bro to monitor on that nic as well keeping it in > standalone mode. > > > > How can I make this happen, please can one how me what and where I need to > make changes ? > > > > Respectfully Yours > > Charles McKee > > > > *Decisiv**E**dge**, LLC* > > *O:* 302.299.1570 x43 <(302)%20299-1570>2 *|* *C:* 302.3 > <(302)%20299-0406>20.6968 *|* *F:* 302.299.1578 <(302)%20299-1578> > > 131 Continental Dr | Suite 409 | Newark, DE 19713 > A0Suite+409+%C2%A0%7C+%C2%A0Newark,+DE+19713&entry=gmail&source=g> > > charles.mckee at decisivedge.com *|* www.DecisivEdge.com > > > -- > This email and any files transmitted with it are considered privileged and > confidential unless otherwise explicitly stated otherwise. If you are not > the intended recipient you are notified that disclosing, copying, > distributing or taking any action in reliance on the contents of this > information is strictly prohibited. All email data and contents may be > monitored to ensure that their use is authorized, for management of the > system, to facilitate protection against unauthorized use, and to verify > security procedures, survivability and operational security. Under no > circumstance should the user of this email have an expectation of privacy > for this correspondence. > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/ > 20180509/e60565a9/attachment.html > > ------------------------------ > > _______________________________________________ > Bro mailing list > Bro at bro.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > > -- ------------------------------------------------------ Hichul Kim ??? ?? ??? Naru Security (?)?????? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180510/ff883987/attachment.html From carlrotenan at gmail.com Thu May 10 05:45:19 2018 From: carlrotenan at gmail.com (Carl Rotenan) Date: Thu, 10 May 2018 08:45:19 -0400 Subject: [Bro] Dropped data In-Reply-To: <0BDA67AB-349F-4626-B911-41BEDB539185@gmail.com> References: <20180509001711.x4w23sqfn2zxmu7s@Beezling.local> <0BDA67AB-349F-4626-B911-41BEDB539185@gmail.com> Message-ID: Michal, Could you explain what you meant by switching to AF_PACKET and avoiding the problem all together? Thanks, Carl On Tue, May 8, 2018 at 9:20 PM, Micha? Purzy?ski wrote: > What kind of cards and distribution do you have? Maybe you could just > switch to afpacket to avoid the problem entirely > > > On May 8, 2018, at 5:17 PM, Johanna Amann wrote: > > > > Hi, > > > > this actually does not look very bad to me - on most interfaces you do > not > > seem to have any drops. One of them has a bit over 2% which is not that > > pretty but also not catastrophic. > > > > I have no experience with ZC, but generally packet loss can be caused by > a > > number of issues. Single high-speed connections can be problematic > > (because they add to the normal load of a single Bro process). > Microbursts > > also happen and can lead to a bit of packet loss. > > > > If there is a setting to increase the available buffer, that might be > > worth playing around with. > > > > Johanna > > > >> On Wed, May 02, 2018 at 04:13:07PM -0400, Carl Rotenan wrote: > >> Hello, > >> > >> Can someone give me some direction on trying to figure out why I have > >> dropped data? > >> > >> This output is from a machine getting about 3G of traffic a minute or so > >> into starting Bro 2.5.3 with PF_RING 7.0.0. > >> > >> How much data per worker should I expect to budget for? Ideally I'd like > >> Bro to be able to do 10G of traffic. > >> > >> Has anyone used PF_RING ZC with success? 
> >> > >> worker-0-1: 1525291681.760081 recvd=564836 dropped=0 link=564836 > >> worker-0-2: 1525291681.961074 recvd=723187 dropped=0 link=723187 > >> worker-0-3: 1525291682.162178 recvd=682598 dropped=4619 link=682598 > >> worker-0-4: 1525291682.364202 recvd=1094776 dropped=0 link=1094776 > >> worker-0-5: 1525291682.566055 recvd=6722748 dropped=30902 link=6722748 > >> worker-0-6: 1525291682.768050 recvd=2180528 dropped=0 link=2180528 > >> worker-0-7: 1525291682.969023 recvd=3252824 dropped=0 link=3252824 > >> worker-0-8: 1525291683.179065 recvd=414112 dropped=0 link=414112 > >> worker-0-9: 1525291683.379083 recvd=2228892 dropped=52543 link=2228892 > >> worker-0-10: 1525291683.579973 recvd=1735298 dropped=0 link=1735298 > >> worker-0-11: 1525291683.780260 recvd=2720785 dropped=1437 link=2720785 > >> worker-0-12: 1525291683.981421 recvd=5835651 dropped=7610 link=5835651 > >> worker-0-13: 1525291684.181057 recvd=566766 dropped=0 link=566766 > >> worker-0-14: 1525291684.381979 recvd=335114 dropped=0 link=335114 > >> worker-0-15: 1525291684.582077 recvd=743998 dropped=0 link=743998 > >> worker-0-16: 1525291684.782897 recvd=6124252 dropped=54604 link=6124252 > >> worker-0-17: 1525291684.980916 recvd=3476401 dropped=17138 link=3476401 > >> worker-0-18: 1525291685.184047 recvd=1286574 dropped=0 link=1286574 > > > >> _______________________________________________ > >> Bro mailing list > >> bro at bro-ids.org > >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > > > _______________________________________________ > > Bro mailing list > > bro at bro-ids.org > > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180510/7968b01f/attachment.html From Stephen.Donnelly at endace.com Thu May 10 14:28:18 2018 From: Stephen.Donnelly at endace.com (Stephen Donnelly) Date: Thu, 10 May 2018 21:28:18 +0000 Subject: [Bro] bro-dag plugin available Message-ID: Hi, I'm pleased to announce the release of the bro-dag, a Bro packet source plugin for live capture from Endace DAG cards. https://github.com/endace/bro-dag It is available via bro-pkg; note you need to have a DAG card and software installed (available with registration at the support portal https://www.endace.com/support). With bro-pkg: bro-pkg refresh bro-pkg install endace/bro-dag bro -i endace::dag0:0 The first number is the DAG card index, and the second number is the stream number on that card. In our experience this plugin provides the best capture performance on DAG cards. The bro-dag README covers example node.cfg for hardware flow balancing across multiple workers (see github above). There are two alternative methods for live capture using DAG cards in Bro: libpcap or PF_RING. If libpcap is compiled on a system with DAG software installed, it will support capture from DAG devices with full kernel bypass. Using Bro's native pcap packet source and linking with the correct libpcap library: bro -i dag0:0 If a recent PF_RING version is installed on a system with DAG software, it dynamically supports DAG cards without any manual compilation/linking required. 
The bro-pfring plugin can then be used for high performance capture: bro -i pfring::dag:0:0 Dr Stephen Donnelly CTO www.endace.com From michalpurzynski1 at gmail.com Fri May 11 01:32:42 2018 From: michalpurzynski1 at gmail.com (=?utf-8?Q?Micha=C5=82_Purzy=C5=84ski?=) Date: Fri, 11 May 2018 01:32:42 -0700 Subject: [Bro] Dropped data In-Reply-To: References: <20180509001711.x4w23sqfn2zxmu7s@Beezling.local> <0BDA67AB-349F-4626-B911-41BEDB539185@gmail.com> Message-ID: There?s no advantage using crazy solutions that make you jump through multiple hoops when most of the time the default and built in packet capture mechanism works well. > On May 10, 2018, at 5:45 AM, Carl Rotenan wrote: > > Michal, > > Could you explain what you meant by switching to AF_PACKET and avoiding the problem all together? > > Thanks, > > Carl > >> On Tue, May 8, 2018 at 9:20 PM, Micha? Purzy?ski wrote: >> What kind of cards and distribution do you have? Maybe you could just switch to afpacket to avoid the problem entirely >> >> > On May 8, 2018, at 5:17 PM, Johanna Amann wrote: >> > >> > Hi, >> > >> > this actually does not look very bad to me - on most interfaces you do not >> > seem to have any drops. One of them has a bit over 2% which is not that >> > pretty but also not catastrophic. >> > >> > I have no experience with ZC, but generally packet loss can be caused by a >> > number of issues. Single high-speed connections can be problematic >> > (because they add to the normal load of a single Bro process). Microbursts >> > also happen and can lead to a bit of packet loss. >> > >> > If there is a setting to increase the available buffer, that might be >> > worth playing around with. >> > >> > Johanna >> > >> >> On Wed, May 02, 2018 at 04:13:07PM -0400, Carl Rotenan wrote: >> >> Hello, >> >> >> >> Can someone give me some direction on trying to figure out why I have >> >> dropped data? >> >> >> >> This output is from a machine getting about 3G of traffic a minute or so >> >> into starting Bro 2.5.3 with PF_RING 7.0.0. >> >> >> >> How much data per worker should I expect to budget for? Ideally I'd like >> >> Bro to be able to do 10G of traffic. >> >> >> >> Has anyone used PF_RING ZC with success? 
>> >> >> >> worker-0-1: 1525291681.760081 recvd=564836 dropped=0 link=564836 >> >> worker-0-2: 1525291681.961074 recvd=723187 dropped=0 link=723187 >> >> worker-0-3: 1525291682.162178 recvd=682598 dropped=4619 link=682598 >> >> worker-0-4: 1525291682.364202 recvd=1094776 dropped=0 link=1094776 >> >> worker-0-5: 1525291682.566055 recvd=6722748 dropped=30902 link=6722748 >> >> worker-0-6: 1525291682.768050 recvd=2180528 dropped=0 link=2180528 >> >> worker-0-7: 1525291682.969023 recvd=3252824 dropped=0 link=3252824 >> >> worker-0-8: 1525291683.179065 recvd=414112 dropped=0 link=414112 >> >> worker-0-9: 1525291683.379083 recvd=2228892 dropped=52543 link=2228892 >> >> worker-0-10: 1525291683.579973 recvd=1735298 dropped=0 link=1735298 >> >> worker-0-11: 1525291683.780260 recvd=2720785 dropped=1437 link=2720785 >> >> worker-0-12: 1525291683.981421 recvd=5835651 dropped=7610 link=5835651 >> >> worker-0-13: 1525291684.181057 recvd=566766 dropped=0 link=566766 >> >> worker-0-14: 1525291684.381979 recvd=335114 dropped=0 link=335114 >> >> worker-0-15: 1525291684.582077 recvd=743998 dropped=0 link=743998 >> >> worker-0-16: 1525291684.782897 recvd=6124252 dropped=54604 link=6124252 >> >> worker-0-17: 1525291684.980916 recvd=3476401 dropped=17138 link=3476401 >> >> worker-0-18: 1525291685.184047 recvd=1286574 dropped=0 link=1286574 >> > >> >> _______________________________________________ >> >> Bro mailing list >> >> bro at bro-ids.org >> >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >> > >> > _______________________________________________ >> > Bro mailing list >> > bro at bro-ids.org >> > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180511/d5a09e1b/attachment.html From ossamabzos at gmail.com Fri May 11 03:27:26 2018 From: ossamabzos at gmail.com (bz Os) Date: Fri, 11 May 2018 11:27:26 +0100 Subject: [Bro] can we configure bro logs to use standard message logging syslog ? Message-ID: hello evry one i want to know if bro logs are syslog ,and if not can we configure the bro logs to use standard message logging syslog ? -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180511/ef2a9b93/attachment.html From cgaylord at vt.edu Fri May 11 03:50:03 2018 From: cgaylord at vt.edu (Clark Gaylord) Date: Fri, 11 May 2018 06:50:03 -0400 Subject: [Bro] can we configure bro logs to use standard message logging syslog ? In-Reply-To: References: Message-ID: https://groups.google.com/forum/m/#!topic/security-onion/uhNLZCfMKwA On Fri, May 11, 2018, 06:39 bz Os wrote: > hello evry one i want to know if bro logs are syslog ,and if not can we > configure the bro logs to use standard message logging syslog ? > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -- -- Clark Gaylord cgaylord at vt.edu ... Autocorrect may have improved this message ... -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180511/a926d79c/attachment.html From cherdt at umn.edu Fri May 11 10:44:06 2018 From: cherdt at umn.edu (Chris Herdt) Date: Fri, 11 May 2018 12:44:06 -0500 Subject: [Bro] Myricom SNFv3 and CentOS 7.5 Message-ID: A heads-up for other Myricom SNFv3 users: I recently updated a couple of dev hosts to CentOS 7.5 and found that the Myricom SNFv3 drivers (I'm using 3.0.13) no longer load and fail to rebuild. I've contacted CSPi support and they are looking into it. -- Chris Herdt University of Minnesota cherdt at umn.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180511/c33306df/attachment-0001.html From dwdixon at umich.edu Fri May 11 14:00:11 2018 From: dwdixon at umich.edu (Drew Dixon) Date: Fri, 11 May 2018 17:00:11 -0400 Subject: [Bro] Myricom SNFv3 and CentOS 7.5 In-Reply-To: References: Message-ID: +1 on this...I also contacted them a week or so ago and gave them a bit of feedback......you could say. Like maybe release a dkms package, seems like that woud be nice?...and maybe actually test their latest drivers with new RHEL/CentOS release/kernel versions ahead of time to make sure they still work...If you also use Myricom cards and agree, maybe suggest these things as well to CSPi support so we can get some more traction on them doing so... The latest released SNFv3 drivers (SNFv3.0.13) kernel module will not compile against the kernel version (3.10.0-862.el7.x86_64) shipped with the RHEL/CentOS 7.5... Personally, the new Endace DAG plugin package is starting to look enticing as a motivator to maybe test out one of their cards after continued issues with CSPi IMO. -Drew On Fri, May 11, 2018 at 1:44 PM, Chris Herdt wrote: > A heads-up for other Myricom SNFv3 users: > > I recently updated a couple of dev hosts to CentOS 7.5 and found that the > Myricom SNFv3 drivers (I'm using 3.0.13) no longer load and fail to rebuild. > > I've contacted CSPi support and they are looking into it. > > > -- > Chris Herdt > University of Minnesota > cherdt at umn.edu > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180511/3e732bc9/attachment.html From assaf.morami at gmail.com Sat May 12 04:14:57 2018 From: assaf.morami at gmail.com (Assaf) Date: Sat, 12 May 2018 14:14:57 +0300 Subject: [Bro] internal error: file analyzer instantiation failed In-Reply-To: <20180509004300.6fp467uvlyqy2uqp@Beezling.local> References: <20180509004300.6fp467uvlyqy2uqp@Beezling.local> Message-ID: Thanks, I'll check on 2.5.3. On Wed, May 9, 2018 at 3:43 AM Johanna Amann wrote: > Two questions: > > * is this an unmodified version of Bro, or did you install any 3rd party > modules? > > * Does this also happen in 2.5.3? 2.4.1 was released in 2015 and we do not > really support that anymore... > > Johanna > > On Mon, Apr 09, 2018 at 05:05:43PM +0300, Assaf wrote: > > Hi to all. :-) > > > > When running: > > bro -r my.pcap -b -C base/protocols/rdp > > (ubuntu server 16.04 - bro version 2.4.1) > > > > I'm getting an error: > > Error: 1521789435.202907 internal error: file analyzer instantiation > failed > > > > I've found nothing on google or the docs. > > > > How can I fix this? > > > > Thanks, > > Assaf. 
> > > _______________________________________________ > > Bro mailing list > > bro at bro-ids.org > > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180512/6094c40e/attachment.html From greg.grasmehr at caltech.edu Sat May 12 07:53:59 2018 From: greg.grasmehr at caltech.edu (Greg Grasmehr) Date: Sat, 12 May 2018 07:53:59 -0700 Subject: [Bro] Myricom SNFv3 and CentOS 7.5 In-Reply-To: References: Message-ID: <20180512145359.daswjfvtamjztskp@dakine> This has always been encountered with the SNF driver - CSPi is generally behind the latest kernel which is not a big deal; just revert back to the previous kernel and you are good to go until the update is released. I have always found CSPi very responsive to kernel update patches; I reported this latest problem two weeks ago however, and have not received a response that a patch has been release so it may not be something simple this time around. Greg On 05/11/18 17:00:11, Drew Dixon wrote: > +1 on this...I also contacted them a week or so ago and gave them a bit of > feedback......you could say. Like maybe release a dkms package, seems like > that woud be nice?...and maybe actually test their latest drivers with new RHEL > /CentOS release/kernel versions ahead of time to make sure they still work...If > you also use Myricom cards and agree, maybe suggest these things as well to > CSPi support so we can get some more traction on them doing so... > > The latest released SNFv3 drivers (SNFv3.0.13) kernel module will not compile > against the kernel version (3.10.0-862.el7.x86_64) shipped with the RHEL/CentOS > 7.5... > > Personally, the new Endace DAG plugin package is starting to look enticing as a > motivator to maybe test out one of their cards after continued issues with CSPi > IMO. > > -Drew > > On Fri, May 11, 2018 at 1:44 PM, Chris Herdt wrote: > > A heads-up for other Myricom SNFv3 users: > > I recently updated a couple of dev hosts to CentOS 7.5 and found that the > Myricom SNFv3 drivers (I'm using 3.0.13) no longer load and fail to > rebuild. > > I've contacted CSPi support and they are looking into it. > > > -- > Chris Herdt > University of Minnesota > cherdt at umn.edu > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From seth at corelight.com Mon May 14 07:26:15 2018 From: seth at corelight.com (Seth Hall) Date: Mon, 14 May 2018 10:26:15 -0400 Subject: [Bro] internal error: file analyzer instantiation failed In-Reply-To: References: Message-ID: <4D78AB19-96E3-45DF-8479-4555E1B214B5@corelight.com> On 9 Apr 2018, at 10:05, Assaf wrote: > When running: > bro -r my.pcap -b -C base/protocols/rdp > (ubuntu server 16.04 - bro version 2.4.1) > > I'm getting an error: > Error: 1521789435.202907 internal error: file analyzer instantiation > failed Please send the pcap if you are able. We don't have enough information to understand the problem otherwise. 
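If sharing the full trace is impractical, a smaller capture limited to the RDP traffic that triggers the error can be carved out first and re-checked with the same bro -r command before sending it along. A sketch with tcpdump; the port is an assumption (standard RDP being 3389) and the file names are placeholders:

    tcpdump -r my.pcap -w rdp-only.pcap 'tcp port 3389'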
.Seth -- Seth Hall * Corelight, Inc * www.corelight.com From seth at corelight.com Mon May 14 07:28:49 2018 From: seth at corelight.com (Seth Hall) Date: Mon, 14 May 2018 10:28:49 -0400 Subject: [Bro] Dropped data In-Reply-To: References: <20180509001711.x4w23sqfn2zxmu7s@Beezling.local> <0BDA67AB-349F-4626-B911-41BEDB539185@gmail.com> Message-ID: <55AAA3A6-E90F-44EF-AC19-57574B20CFDD@corelight.com> I think he may have been looking for pointers to a next step to take. :) Carl, I think Michal might be telling you to look into the AF_Packet plugin by Jan Grashofer.... https://github.com/J-Gras/bro-af_packet-plugin That page has full instructions on how to install and use the plugin. .Seth On 11 May 2018, at 4:32, Micha? Purzy?ski wrote: > There?s no advantage using crazy solutions that make you jump > through multiple hoops when most of the time the default and built in > packet capture mechanism works well. > >> On May 10, 2018, at 5:45 AM, Carl Rotenan >> wrote: >> >> Michal, >> >> Could you explain what you meant by switching to AF_PACKET and >> avoiding the problem all together? >> >> Thanks, >> >> Carl >> >>> On Tue, May 8, 2018 at 9:20 PM, Micha? Purzy?ski >>> wrote: >>> What kind of cards and distribution do you have? Maybe you could >>> just switch to afpacket to avoid the problem entirely >>> >>>> On May 8, 2018, at 5:17 PM, Johanna Amann wrote: >>>> >>>> Hi, >>>> >>>> this actually does not look very bad to me - on most interfaces you >>>> do not >>>> seem to have any drops. One of them has a bit over 2% which is not >>>> that >>>> pretty but also not catastrophic. >>>> >>>> I have no experience with ZC, but generally packet loss can be >>>> caused by a >>>> number of issues. Single high-speed connections can be problematic >>>> (because they add to the normal load of a single Bro process). >>>> Microbursts >>>> also happen and can lead to a bit of packet loss. >>>> >>>> If there is a setting to increase the available buffer, that might >>>> be >>>> worth playing around with. >>>> >>>> Johanna >>>> >>>>> On Wed, May 02, 2018 at 04:13:07PM -0400, Carl Rotenan wrote: >>>>> Hello, >>>>> >>>>> Can someone give me some direction on trying to figure out why I >>>>> have >>>>> dropped data? >>>>> >>>>> This output is from a machine getting about 3G of traffic a minute >>>>> or so >>>>> into starting Bro 2.5.3 with PF_RING 7.0.0. >>>>> >>>>> How much data per worker should I expect to budget for? Ideally >>>>> I'd like >>>>> Bro to be able to do 10G of traffic. >>>>> >>>>> Has anyone used PF_RING ZC with success? 
>>>>> >>>>> worker-0-1: 1525291681.760081 recvd=564836 dropped=0 link=564836 >>>>> worker-0-2: 1525291681.961074 recvd=723187 dropped=0 link=723187 >>>>> worker-0-3: 1525291682.162178 recvd=682598 dropped=4619 >>>>> link=682598 >>>>> worker-0-4: 1525291682.364202 recvd=1094776 dropped=0 link=1094776 >>>>> worker-0-5: 1525291682.566055 recvd=6722748 dropped=30902 >>>>> link=6722748 >>>>> worker-0-6: 1525291682.768050 recvd=2180528 dropped=0 link=2180528 >>>>> worker-0-7: 1525291682.969023 recvd=3252824 dropped=0 link=3252824 >>>>> worker-0-8: 1525291683.179065 recvd=414112 dropped=0 link=414112 >>>>> worker-0-9: 1525291683.379083 recvd=2228892 dropped=52543 >>>>> link=2228892 >>>>> worker-0-10: 1525291683.579973 recvd=1735298 dropped=0 >>>>> link=1735298 >>>>> worker-0-11: 1525291683.780260 recvd=2720785 dropped=1437 >>>>> link=2720785 >>>>> worker-0-12: 1525291683.981421 recvd=5835651 dropped=7610 >>>>> link=5835651 >>>>> worker-0-13: 1525291684.181057 recvd=566766 dropped=0 link=566766 >>>>> worker-0-14: 1525291684.381979 recvd=335114 dropped=0 link=335114 >>>>> worker-0-15: 1525291684.582077 recvd=743998 dropped=0 link=743998 >>>>> worker-0-16: 1525291684.782897 recvd=6124252 dropped=54604 >>>>> link=6124252 >>>>> worker-0-17: 1525291684.980916 recvd=3476401 dropped=17138 >>>>> link=3476401 >>>>> worker-0-18: 1525291685.184047 recvd=1286574 dropped=0 >>>>> link=1286574 >>>> >>>>> _______________________________________________ >>>>> Bro mailing list >>>>> bro at bro-ids.org >>>>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >>>> >>>> _______________________________________________ >>>> Bro mailing list >>>> bro at bro-ids.org >>>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >> > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -- Seth Hall * Corelight, Inc * www.corelight.com -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180514/f7be81b5/attachment.html From seth at corelight.com Mon May 14 07:36:14 2018 From: seth at corelight.com (Seth Hall) Date: Mon, 14 May 2018 10:36:14 -0400 Subject: [Bro] bro-dag plugin available In-Reply-To: References: Message-ID: On 10 May 2018, at 17:28, Stephen Donnelly wrote: > Hi, I'm pleased to announce the release of the bro-dag, a Bro packet > source plugin for live capture from Endace DAG cards. Cool! Thanks for not only creating a new packet source but also making a broctl plugin and adding it to the Bro package manager! Nice job all around. :) I only have one small request. Would you mind changing the documentation to use two workers instead of the "lb_interfaces" option? I don't know if anyone actually uses that option anymore and I'd be a little worried that we might break it on accident. You should be able to break that into two separate worker stanzas, each with one of the interfaces that you've defined in your documentation. You get the additional benefit that you can specify different numbers of "lb_procs" for each one and any other differentiated configuration that you need. Thanks! 
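As a concrete illustration of the two-stanza layout suggested here, a node.cfg along these lines would replace an lb_interfaces entry. The interface names follow the endace::dagN:M form from the plugin announcement; host and worker names are placeholders, and any per-worker lb_procs or other load-balancing settings should be taken from the bro-dag README rather than from this sketch:

    [worker-dag0]
    type=worker
    host=localhost
    interface=endace::dag0:0

    [worker-dag1]
    type=worker
    host=localhost
    interface=endace::dag1:0

Each stanza can then carry its own lb_procs or other differentiated settings independently, which is the benefit described above.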
.Seth -- Seth Hall * Corelight, Inc * www.corelight.com From Stephen.Donnelly at endace.com Mon May 14 15:30:26 2018 From: Stephen.Donnelly at endace.com (Stephen Donnelly) Date: Mon, 14 May 2018 22:30:26 +0000 Subject: [Bro] bro-dag plugin available In-Reply-To: References: Message-ID: <39c3e14f-8c62-4679-89cf-17426a7198c5@aukmbx01.ad.endace.com> >From: Seth Hall > >Cool! Thanks for not only creating a new packet source but also making a >broctl plugin and adding it to the Bro package manager! Nice job all around. :) Thanks! We wanted to integrate it as smoothly as possible, so please let us know if you have any further feedback. >I only have one small request. Would you mind changing the documentation >to use two workers instead of the "lb_interfaces" option? >I don't know if anyone actually uses that option anymore and I'd be a little >worried that we might break it on accident. You should be able to break that >into two separate worker stanzas, each with one of the interfaces that you've >defined in your documentation. You get the additional benefit that you can >specify different numbers of "lb_procs" >for each one and any other differentiated configuration that you need. I didn't realise lb_interfaces was out of date, so I'll remove that section. I think it is clear enough how to use multiple workers. Stephen From franky.meier.1 at gmx.de Tue May 15 03:28:11 2018 From: franky.meier.1 at gmx.de (Frank Meier) Date: Tue, 15 May 2018 12:28:11 +0200 Subject: [Bro] ascii logger: unexpected modification to default_rotation_postprocessor_cmd and default_rotation_date_format during runtime Message-ID: <20180515122811.3160a460@NB181106> Hi! I noticed a strange behavior: my bro 2.5.3 running on Linux for about 15 days suddenly "forgot" my settings for Log::default_rotation_postprocessor_cmd and Log::default_rotation_date_format. When the change happened, the rotated files piled up, because the post-processing script was not started. Also the filenames did no longer contain the time zone. (I use Log::default_rotation_date_format = %Y-%m-%d-%H-%M-%S%Z to avoid file name collisions when switching to/from daylight-saving time). A quick look at the code was not enough to understand the way rotation works. I can spend more time, if nobody comes up with an explanation. I can only assume, that some internal error in bro resets the values without an error showing up (or the error was lost in bro's tmux session). Restarting bro helped for now. Thanks for any ideas. Franky. From brian.oberry at bluvector.io Tue May 15 06:13:40 2018 From: brian.oberry at bluvector.io (Brian OBerry) Date: Tue, 15 May 2018 13:13:40 +0000 Subject: [Bro] Manager memory requirements for the intel framework Message-ID: <0A1ED725-60B6-4781-B2B7-F04A7FEED9A7@bluvector.io> Hello, All, We?re trying to understand manager memory requirements when the intel framework is in use, after experiencing multiple manager crashes per day when using the framework on a low-bandwidth (less than 1Gbps) CentOS 6 machine running a production Bro 2.4.1 cluster. These are happening because the manager is exhausting its tcmalloc heap limit of 16G, as reported in its stderr.log. We removed the heap limit on an idle (no network traffic) Bro 2.4.1 test system, and found the parent VSize reported by ?broctl top manager? went to 27G for an intel input file of 18K unique Intel::DOMAIN items. It remained at 27G after many cycles of replacing the input file with 18K new unique items. 
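For reference while reading the numbers above, a minimal sketch of how an intel feed of this kind is typically wired in; the file path and source name are placeholders, while the Intel::read_files redef and the tab-separated file layout are the standard Intel framework mechanism:

    # local.bro
    @load frameworks/intel/seen
    redef Intel::read_files += { "/opt/bro/share/bro/site/intel-1.dat" };

    # intel-1.dat (columns are tab-separated)
    #fields	indicator	indicator_type	meta.source
    evil.example.com	Intel::DOMAIN	test-feed
    198.51.100.7	Intel::ADDR	test-feed

The manager reads the file and distributes the items to the workers, which is the distribution path being discussed in this thread.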
Restoring the heap limit and attaching gdb to the manager on the test system shows a malloc failure backtrace that comes out of RemoteSerializer::SendCall (). We commented the conditional that invokes "event Intel::new_item(item)" in base/frameworks/intel/main.bro to disable remote synchronization with the workers, and the huge VSize disappeared. We then built bro from master (version 2.5-569) and retested. The manager VSize is much lower, but is still about 15G. Any advice on how to proceed with further diagnostics to hopefully rein in the manager memory requirements for intel? It doesn't appear at first blush that upgrading Bro will fix it, at least not entirely, and we're reluctant to upgrade the production system without fully understanding the problem. Thanks, Brian -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180515/f0cced9b/attachment.html From carlrotenan at gmail.com Tue May 15 08:39:21 2018 From: carlrotenan at gmail.com (Carl Rotenan) Date: Tue, 15 May 2018 11:39:21 -0400 Subject: [Bro] Endace DAG Message-ID: Is anyone using the Endace DAG cards? I'm looking for the performance gains over using PF_RING and off-the-shelf Intel cards. Ultimately I'm looking for the best file extraction performance that can be achieved. Thanks in advance. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180515/b4d59ead/attachment.html From mike.patterson at uwaterloo.ca Tue May 15 08:59:16 2018 From: mike.patterson at uwaterloo.ca (Mike Patterson) Date: Tue, 15 May 2018 15:59:16 +0000 Subject: Re: [Bro] Endace DAG In-Reply-To: References: Message-ID: <6E283C41-5A13-49F7-897A-6D85B53D4D11 at uwaterloo.ca> I don't know how useful my contribution here is, but... Yes, I have a 9.2X2 we purchased in 2010, now in its second server and fourth or fifth Bro install. Obviously having kept it this long, I don't have many complaints. At the same time, I don't find a whole lot of difference between it and the Intel X520s we have deployed with PF_RING (and one of our newer PF_RING installations is outperforming the DAG). That said, I've spent more time playing with the X520s, so it's possible the DAG could outperform them with equivalent TLC (and also obviously this is an older card) - but X520s are older nowadays too. I haven't tried the bro-pkg for the DAG yet, although once I've got some free time (hahaha) I would very much like to give it a try. Also YMMV quite a bit depending on the hardware you're marrying to your NICs, your real-world network traffic, specific distribution/kernel version, etc etc etc. And I expect that at least one regular list contributor might suggest you try AF_PACKET with your Intels. :) Mike > On May 15, 2018, at 11:39 AM, Carl Rotenan wrote: > > Is anyone using the Endace DAG cards? I'm looking for the performance gains over using PF_RING and off-the-shelf Intel cards. Ultimately I'm looking for the best file extraction performance that can be achieved. Thanks in advance.
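On the file-extraction goal raised in this Endace DAG thread (and in the earlier "Module execution" thread), a minimal Bro script sketch for hashing and extracting files carried over HTTP, SMTP and FTP. The extraction directory is a placeholder, the f$source strings are assumed to match what appears in files.log, and no claim is made about how this behaves at 10G:

    # extract-and-hash.bro -- minimal sketch, not tuned for high rates
    const extract_dir = "/data/extracted/" &redef;

    event file_sniff(f: fa_file, meta: fa_metadata)
        {
        if ( ! f?$source )
            return;
        # Only the protocols mentioned in the thread.
        if ( f$source != "HTTP" && f$source != "SMTP" && f$source != "FTP_DATA" )
            return;
        # The SHA1 ends up in the files.log entry for this file.
        Files::add_analyzer(f, Files::ANALYZER_SHA1);
        # Write the file body out to disk under a per-file name.
        Files::add_analyzer(f, Files::ANALYZER_EXTRACT,
                            [$extract_filename=fmt("%s%s-%s", extract_dir, f$source, f$id)]);
        }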
> _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From dwdixon at umich.edu Tue May 15 09:34:46 2018 From: dwdixon at umich.edu (Drew Dixon) Date: Tue, 15 May 2018 12:34:46 -0400 Subject: [Bro] Manager memory requirements for the intel framework In-Reply-To: <0A1ED725-60B6-4781-B2B7-F04A7FEED9A7@bluvector.io> References: <0A1ED725-60B6-4781-B2B7-F04A7FEED9A7@bluvector.io> Message-ID: It may be worth upgrading your production system either way since I do not believe 2.4.x is technically supported any longer being that it's a release from 2015. Plus, lots of improvements since then... I'm not sure if this will help your specific problem but I'd be curious to know if you're also running the default scan detection script on this box? In your local.bro are you loading "misc/scan"? If so, that script is known to be a huge memory hog and could be indirectly contributing to your high memory usage...just wanted to mention that, since you mentioned you commented out the conditional that invokes ?event Intel::new_item(item)? the problem may reside more directly with the Intel Framework. How many indicators of each indicator type are you loading into the Intel Framework? -Drew On Tue, May 15, 2018 at 9:13 AM, Brian OBerry wrote: > Hello, All, > > > > We?re trying to understand manager memory requirements when the intel > framework is in use, after experiencing multiple manager crashes per day > when using the framework on a low-bandwidth (less than 1Gbps) CentOS 6 > machine running a production Bro 2.4.1 cluster. These are happening > because the manager is exhausting its tcmalloc heap limit of 16G, as > reported in its stderr.log. We removed the heap limit on an idle (no > network traffic) Bro 2.4.1 test system, and found the parent VSize reported > by ?broctl top manager? went to 27G for an intel input file of 18K unique > Intel::DOMAIN items. It remained at 27G after many cycles of replacing the > input file with 18K new unique items. > > Restoring the heap limit and attaching gdb to the manager on the test > system shows a malloc failure backtrace that comes out of > RemoteSerializer::SendCall (). We commented the conditional that invokes > ?event Intel::new_item(item)? in base/frameworks/intel/main.bro to disable > remote synchronization with the workers, and the huge VSize disappeared. > > We then built bro from master (version 2.5-569) and retested. The manager > VSize is much lower, but is still about 15G. > > Any advice on how to proceed with further diagnostics to hopefully reign > in the manager memory requirements for intel? It doesn?t appear at first > blush that upgrading Bro will fix it, at least not entirely, and we?re > reluctant to upgrade the production system without fully understanding the > problem. > > Thanks, > > Brian > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180515/921f1226/attachment.html From carlrotenan at gmail.com Tue May 15 10:06:59 2018 From: carlrotenan at gmail.com (Carl Rotenan) Date: Tue, 15 May 2018 13:06:59 -0400 Subject: [Bro] Endace DAG In-Reply-To: <6E283C41-5A13-49F7-897A-6D85B53D4D11@uwaterloo.ca> References: <6E283C41-5A13-49F7-897A-6D85B53D4D11@uwaterloo.ca> Message-ID: Would you say AF_PACKET over PF_RING? Thanks. 
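Since AF_PACKET keeps coming up in this thread: with the bro-af_packet-plugin that Seth pointed to earlier, a worker stanza typically ends up looking roughly like the following. The option names and values here are assumptions to be checked against the plugin README, not a verified configuration:

    [worker-1]
    type=worker
    host=localhost
    interface=af_packet::eth0
    lb_method=custom
    lb_procs=8
    # plugin-specific knobs, example values only:
    af_packet_fanout_id=23
    af_packet_buffer_size=128*1024*1024

The kernel then fans connections out across the lb_procs worker processes, which is what removes the need for an external load balancer such as PF_RING in this setup.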
On Tue, May 15, 2018 at 11:59 AM, Mike Patterson < mike.patterson at uwaterloo.ca> wrote: > I don't know how useful my contribution here is, but... > > Yes, I have a 9.2X2 we purchased in 2010, now in its second server and > fourth or fifth Bro install. Obviously having kept it this long, I don't > have many complaints. At the same time, I don't find a whole lot of > difference between it and the Intel X520s we have deployed with PF_RING > (and one of our newer PF_RING installations is outperforming the DAG). That > said, I've spent more time playing with the X520s, so it's possible the DAG > could outperform them with equivalent TLC (and also obviously this is an > older card) - but X520s are older nowadays too. > > I haven't tried the bro-pkg for the DAG yet, although once I've got some > free time (hahaha) I would very much like to give it a try. Also YMMV quite > a bit depending on the hardware you're marrying to your NICs, your > real-world network traffic, specific distribution/kernel version, etc etc > etc. > > And I expect that at least one regular list contributor might suggest you > try AF_PACKET with your Intels. :) > > Mike > > > > On May 15, 2018, at 11:39 AM, Carl Rotenan > wrote: > > > > Is anyone using the Endace DAG cards? I looking for the performance > gains over using PF_RING and off the shelf Intel cards. Ultimately I'm > looking for the best file extraction performance that can be achieved. > Thanks in advance. > > _______________________________________________ > > Bro mailing list > > bro at bro-ids.org > > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180515/980354a8/attachment.html From shirkdog.bsd at gmail.com Tue May 15 10:11:24 2018 From: shirkdog.bsd at gmail.com (Michael Shirk) Date: Tue, 15 May 2018 13:11:24 -0400 Subject: [Bro] Manager memory requirements for the intel framework In-Reply-To: References: <0A1ED725-60B6-4781-B2B7-F04A7FEED9A7@bluvector.io> Message-ID: You should update to at least 2.4.2 due to this vulnerability: http://blog.bro.org/2017/10/bro-252-242-release-security-update.html On Tue, May 15, 2018 at 12:34 PM, Drew Dixon wrote: > It may be worth upgrading your production system either way since I do not > believe 2.4.x is technically supported any longer being that it's a release > from 2015. Plus, lots of improvements since then... > > I'm not sure if this will help your specific problem but I'd be curious to > know if you're also running the default scan detection script on this box? > In your local.bro are you loading "misc/scan"? If so, that script is known > to be a huge memory hog and could be indirectly contributing to your high > memory usage...just wanted to mention that, since you mentioned you > commented out the conditional that invokes ?event Intel::new_item(item)? the > problem may reside more directly with the Intel Framework. > > How many indicators of each indicator type are you loading into the Intel > Framework? > > -Drew > > On Tue, May 15, 2018 at 9:13 AM, Brian OBerry > wrote: >> >> Hello, All, >> >> >> >> We?re trying to understand manager memory requirements when the intel >> framework is in use, after experiencing multiple manager crashes per day >> when using the framework on a low-bandwidth (less than 1Gbps) CentOS 6 >> machine running a production Bro 2.4.1 cluster. 
These are happening because >> the manager is exhausting its tcmalloc heap limit of 16G, as reported in its >> stderr.log. We removed the heap limit on an idle (no network traffic) Bro >> 2.4.1 test system, and found the parent VSize reported by ?broctl top >> manager? went to 27G for an intel input file of 18K unique Intel::DOMAIN >> items. It remained at 27G after many cycles of replacing the input file >> with 18K new unique items. >> >> Restoring the heap limit and attaching gdb to the manager on the test >> system shows a malloc failure backtrace that comes out of >> RemoteSerializer::SendCall (). We commented the conditional that invokes >> ?event Intel::new_item(item)? in base/frameworks/intel/main.bro to disable >> remote synchronization with the workers, and the huge VSize disappeared. >> >> We then built bro from master (version 2.5-569) and retested. The manager >> VSize is much lower, but is still about 15G. >> >> Any advice on how to proceed with further diagnostics to hopefully reign >> in the manager memory requirements for intel? It doesn?t appear at first >> blush that upgrading Bro will fix it, at least not entirely, and we?re >> reluctant to upgrade the production system without fully understanding the >> problem. >> >> Thanks, >> >> Brian >> >> >> _______________________________________________ >> Bro mailing list >> bro at bro-ids.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -- Michael Shirk Daemon Security, Inc. https://www.daemon-security.com From brian.oberry at bluvector.io Tue May 15 10:16:56 2018 From: brian.oberry at bluvector.io (Brian OBerry) Date: Tue, 15 May 2018 17:16:56 +0000 Subject: [Bro] Manager memory requirements for the intel framework Message-ID: Thanks, Drew, Agreed that it would be best to upgrade, and we moved on to Bro 2.5.3 a couple releases ago. No, misc/scan is not loaded. Unfortunately, I don?t have the production intel input file. I?m told it?s about 100K indicators of about 50% domain, 25% addr, and 25% hash, plus a few email and url indicators less than 1%. We started testing with a file of 18K domain indicators, but will be expanding to a more realistic profile next. Maybe I started this thread a little early, without the representative input file, but I wanted to query the community for any ideas or suggestions about our initial results. Thank you for your ideas! Brian From: Drew Dixon Date: Tuesday, May 15, 2018 at 12:35 PM To: Brian OBerry Cc: "bro at bro.org" Subject: EXT: Re: [Bro] Manager memory requirements for the intel framework It may be worth upgrading your production system either way since I do not believe 2.4.x is technically supported any longer being that it's a release from 2015. Plus, lots of improvements since then... I'm not sure if this will help your specific problem but I'd be curious to know if you're also running the default scan detection script on this box? In your local.bro are you loading "misc/scan"? If so, that script is known to be a huge memory hog and could be indirectly contributing to your high memory usage...just wanted to mention that, since you mentioned you commented out the conditional that invokes ?event Intel::new_item(item)? the problem may reside more directly with the Intel Framework. How many indicators of each indicator type are you loading into the Intel Framework? 
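For reference, the indicator type is just a column in the intel feed, so counting per type is straightforward. A minimal sketch of a feed plus the lines that load it is below; the file path, feed name and indicators are all made up for illustration, and the columns of a real feed must be separated by literal tabs:

    #fields   indicator                          indicator_type    meta.source   meta.desc
    bad.example.com                              Intel::DOMAIN     demo-feed     example domain
    203.0.113.50                                 Intel::ADDR       demo-feed     example address
    0123456789abcdef0123456789abcdef             Intel::FILE_HASH  demo-feed     example hash

and in local.bro:

    @load frameworks/intel/seen
    redef Intel::read_files += { "/usr/local/bro/share/bro/site/demo-feed.dat" };

Since the type sits in the second column, a quick per-type count of an existing feed is: cut -f2 demo-feed.dat | sort | uniq -c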
-Drew On Tue, May 15, 2018 at 9:13 AM, Brian OBerry > wrote: Hello, All, We?re trying to understand manager memory requirements when the intel framework is in use, after experiencing multiple manager crashes per day when using the framework on a low-bandwidth (less than 1Gbps) CentOS 6 machine running a production Bro 2.4.1 cluster. These are happening because the manager is exhausting its tcmalloc heap limit of 16G, as reported in its stderr.log. We removed the heap limit on an idle (no network traffic) Bro 2.4.1 test system, and found the parent VSize reported by ?broctl top manager? went to 27G for an intel input file of 18K unique Intel::DOMAIN items. It remained at 27G after many cycles of replacing the input file with 18K new unique items. Restoring the heap limit and attaching gdb to the manager on the test system shows a malloc failure backtrace that comes out of RemoteSerializer::SendCall (). We commented the conditional that invokes ?event Intel::new_item(item)? in base/frameworks/intel/main.bro to disable remote synchronization with the workers, and the huge VSize disappeared. We then built bro from master (version 2.5-569) and retested. The manager VSize is much lower, but is still about 15G. Any advice on how to proceed with further diagnostics to hopefully reign in the manager memory requirements for intel? It doesn?t appear at first blush that upgrading Bro will fix it, at least not entirely, and we?re reluctant to upgrade the production system without fully understanding the problem. Thanks, Brian _______________________________________________ Bro mailing list bro at bro-ids.org http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180515/89c98255/attachment-0001.html From michalpurzynski1 at gmail.com Tue May 15 10:21:05 2018 From: michalpurzynski1 at gmail.com (=?utf-8?Q?Micha=C5=82_Purzy=C5=84ski?=) Date: Tue, 15 May 2018 10:21:05 -0700 Subject: [Bro] Endace DAG In-Reply-To: References: <6E283C41-5A13-49F7-897A-6D85B53D4D11@uwaterloo.ca> Message-ID: <90A309B5-94E8-4A20-B829-19BA2187A344@gmail.com> Yes I would :) Try afpacket and maybe X710. You?re going to invest in cards that cost more than your server (DAG) do why not spend 300 usd and make an experiment. https://github.com/pevma/SEPTun https://github.com/pevma/SEPTun-Mark-II This applies to Bro as well, especially the part about hardware and OS tuning. > On May 15, 2018, at 10:06 AM, Carl Rotenan wrote: > > Would you say AF_PACKET over PF_RING? Thanks. > >> On Tue, May 15, 2018 at 11:59 AM, Mike Patterson wrote: >> I don't know how useful my contribution here is, but... >> >> Yes, I have a 9.2X2 we purchased in 2010, now in its second server and fourth or fifth Bro install. Obviously having kept it this long, I don't have many complaints. At the same time, I don't find a whole lot of difference between it and the Intel X520s we have deployed with PF_RING (and one of our newer PF_RING installations is outperforming the DAG). That said, I've spent more time playing with the X520s, so it's possible the DAG could outperform them with equivalent TLC (and also obviously this is an older card) - but X520s are older nowadays too. >> >> I haven't tried the bro-pkg for the DAG yet, although once I've got some free time (hahaha) I would very much like to give it a try. 
Also YMMV quite a bit depending on the hardware you're marrying to your NICs, your real-world network traffic, specific distribution/kernel version, etc etc etc. >> >> And I expect that at least one regular list contributor might suggest you try AF_PACKET with your Intels. :) >> >> Mike >> >> >> > On May 15, 2018, at 11:39 AM, Carl Rotenan wrote: >> > >> > Is anyone using the Endace DAG cards? I looking for the performance gains over using PF_RING and off the shelf Intel cards. Ultimately I'm looking for the best file extraction performance that can be achieved. Thanks in advance. >> > _______________________________________________ >> > Bro mailing list >> > bro at bro-ids.org >> > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >> > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180515/eb41b285/attachment.html From brian.oberry at bluvector.io Tue May 15 10:32:13 2018 From: brian.oberry at bluvector.io (Brian OBerry) Date: Tue, 15 May 2018 17:32:13 +0000 Subject: [Bro] Manager memory requirements for the intel framework Message-ID: Oh, right, thanks for pointing that out. If the decision is to stay with 2.4.x on that system, we'll definitely look to install 2.4.2. ?-----Original Message----- From: Michael Shirk Date: Tuesday, May 15, 2018 at 1:11 PM To: Drew Dixon Cc: Brian OBerry , "bro at bro.org" Subject: EXT: Re: [Bro] Manager memory requirements for the intel framework You should update to at least 2.4.2 due to this vulnerability: http://blog.bro.org/2017/10/bro-252-242-release-security-update.html On Tue, May 15, 2018 at 12:34 PM, Drew Dixon wrote: > It may be worth upgrading your production system either way since I do not > believe 2.4.x is technically supported any longer being that it's a release > from 2015. Plus, lots of improvements since then... > > I'm not sure if this will help your specific problem but I'd be curious to > know if you're also running the default scan detection script on this box? > In your local.bro are you loading "misc/scan"? If so, that script is known > to be a huge memory hog and could be indirectly contributing to your high > memory usage...just wanted to mention that, since you mentioned you > commented out the conditional that invokes ?event Intel::new_item(item)? the > problem may reside more directly with the Intel Framework. > > How many indicators of each indicator type are you loading into the Intel > Framework? > > -Drew > > On Tue, May 15, 2018 at 9:13 AM, Brian OBerry > wrote: >> >> Hello, All, >> >> >> >> We?re trying to understand manager memory requirements when the intel >> framework is in use, after experiencing multiple manager crashes per day >> when using the framework on a low-bandwidth (less than 1Gbps) CentOS 6 >> machine running a production Bro 2.4.1 cluster. These are happening because >> the manager is exhausting its tcmalloc heap limit of 16G, as reported in its >> stderr.log. We removed the heap limit on an idle (no network traffic) Bro >> 2.4.1 test system, and found the parent VSize reported by ?broctl top >> manager? went to 27G for an intel input file of 18K unique Intel::DOMAIN >> items. It remained at 27G after many cycles of replacing the input file >> with 18K new unique items. 
>> >> Restoring the heap limit and attaching gdb to the manager on the test >> system shows a malloc failure backtrace that comes out of >> RemoteSerializer::SendCall (). We commented the conditional that invokes >> ?event Intel::new_item(item)? in base/frameworks/intel/main.bro to disable >> remote synchronization with the workers, and the huge VSize disappeared. >> >> We then built bro from master (version 2.5-569) and retested. The manager >> VSize is much lower, but is still about 15G. >> >> Any advice on how to proceed with further diagnostics to hopefully reign >> in the manager memory requirements for intel? It doesn?t appear at first >> blush that upgrading Bro will fix it, at least not entirely, and we?re >> reluctant to upgrade the production system without fully understanding the >> problem. >> >> Thanks, >> >> Brian >> >> >> _______________________________________________ >> Bro mailing list >> bro at bro-ids.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -- Michael Shirk Daemon Security, Inc. https://www.daemon-security.com From michalpurzynski1 at gmail.com Tue May 15 10:50:43 2018 From: michalpurzynski1 at gmail.com (=?utf-8?Q?Micha=C5=82_Purzy=C5=84ski?=) Date: Tue, 15 May 2018 10:50:43 -0700 Subject: [Bro] Manager memory requirements for the intel framework In-Reply-To: References: Message-ID: <6462F0FE-C15D-4A93-A706-C11B919D6C96@gmail.com> 100k indicators sounds like your matches will be mostly noise anyway. Does your source have some metadata like - category (to tell CDN from CnC) - last time seen malicious And so on and so forth, so you can decide what to load, instead of loading everything? I?m not saying Bro cannot handle that, it most likely can, I?m saying you?ll have another class of problems soon. > On May 15, 2018, at 10:16 AM, Brian OBerry wrote: > > Thanks, Drew, > > Agreed that it would be best to upgrade, and we moved on to Bro 2.5.3 a couple releases ago. No, misc/scan is not loaded. Unfortunately, I don?t have the production intel input file. I?m told it?s about 100K indicators of about 50% domain, 25% addr, and 25% hash, plus a few email and url indicators less than 1%. We started testing with a file of 18K domain indicators, but will be expanding to a more realistic profile next. Maybe I started this thread a little early, without the representative input file, but I wanted to query the community for any ideas or suggestions about our initial results. Thank you for your ideas! > > Brian > > From: Drew Dixon > Date: Tuesday, May 15, 2018 at 12:35 PM > To: Brian OBerry > Cc: "bro at bro.org" > Subject: EXT: Re: [Bro] Manager memory requirements for the intel framework > > It may be worth upgrading your production system either way since I do not believe 2.4.x is technically supported any longer being that it's a release from 2015. Plus, lots of improvements since then... > > I'm not sure if this will help your specific problem but I'd be curious to know if you're also running the default scan detection script on this box? In your local.bro are you loading "misc/scan"? If so, that script is known to be a huge memory hog and could be indirectly contributing to your high memory usage...just wanted to mention that, since you mentioned you commented out the conditional that invokes ?event Intel::new_item(item)? the problem may reside more directly with the Intel Framework. 
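For anyone wanting to rule the scan script out: the stock local.bro enables it with a single @load line, so it is quick to check and, if need be, comment out (the path below is the default source-install location; adjust to your layout):

    # in /usr/local/bro/share/bro/site/local.bro
    # Scan detection; comment the line out to disable it:
    # @load misc/scan

A broctl deploy afterwards restarts the cluster with the change applied.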
> > How many indicators of each indicator type are you loading into the Intel Framework? > > -Drew > > On Tue, May 15, 2018 at 9:13 AM, Brian OBerry wrote: > Hello, All, > > We?re trying to understand manager memory requirements when the intel framework is in use, after experiencing multiple manager crashes per day when using the framework on a low-bandwidth (less than 1Gbps) CentOS 6 machine running a production Bro 2.4.1 cluster. These are happening because the manager is exhausting its tcmalloc heap limit of 16G, as reported in its stderr.log. We removed the heap limit on an idle (no network traffic) Bro 2.4.1 test system, and found the parent VSize reported by ?broctl top manager? went to 27G for an intel input file of 18K unique Intel::DOMAIN items. It remained at 27G after many cycles of replacing the input file with 18K new unique items. > Restoring the heap limit and attaching gdb to the manager on the test system shows a malloc failure backtrace that comes out of RemoteSerializer::SendCall (). We commented the conditional that invokes ?event Intel::new_item(item)? in base/frameworks/intel/main.bro to disable remote synchronization with the workers, and the huge VSize disappeared. > We then built bro from master (version 2.5-569) and retested. The manager VSize is much lower, but is still about 15G. > Any advice on how to proceed with further diagnostics to hopefully reign in the manager memory requirements for intel? It doesn?t appear at first blush that upgrading Bro will fix it, at least not entirely, and we?re reluctant to upgrade the production system without fully understanding the problem. > Thanks, > Brian > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180515/a2320962/attachment-0001.html From jazoff at illinois.edu Tue May 15 11:06:47 2018 From: jazoff at illinois.edu (Azoff, Justin S) Date: Tue, 15 May 2018 18:06:47 +0000 Subject: [Bro] Manager memory requirements for the intel framework In-Reply-To: <0A1ED725-60B6-4781-B2B7-F04A7FEED9A7@bluvector.io> References: <0A1ED725-60B6-4781-B2B7-F04A7FEED9A7@bluvector.io> Message-ID: <619669D5-C65A-4AFB-BCB4-F8E739351518@illinois.edu> On May 15, 2018, at 9:13 AM, Brian OBerry wrote: > > It remained at 27G after many cycles of replacing the input file with 18K new unique items. That is interesting because by default the intel framework doesn't expire items, so every time you replaced the file you were loading an additional 18k items.. If I get a chance I will resurrect the benchmarking code I was working on a while ago.. It would do things like create a table of hosts and add 10k,20k,30k,40k hosts to it and see what the memory usage was for each count to see what the real work data usage is for different sized data structures. I never tried it with the intel framework though. > We commented the conditional that invokes ?event Intel::new_item(item)? in base/frameworks/intel/main.bro to disable remote synchronization with the workers, and the huge VSize disappeared. > This makes more sense.. I don't think your memory usage has anything to do with the intel itself, I think the communication code is falling behind. 
How many worker processes do you have configured? Are they running on the same box or separate boxes? If you load up 18k indicators but have 100 worker nodes, the bro manager needs to send out 1,800,000 events to all the workers. if the workers can't keep up, that data just ends up buffered in memory on the manager until it can be sent out. Jon: this is the use case I had for the Cluster::relay_rr, offloading the messaging load from the manager # On the manager, the new_item event indicates a new indicator that # has to be distributed. event Intel::new_item(item: Item) &priority=5 { Broker::publish(indicator_topic, Intel::insert_indicator, item); } so that should maybe be used there, instead of the manager having to do all the communication work. ? Justin Azoff From jan.grashoefer at gmail.com Tue May 15 11:35:58 2018 From: jan.grashoefer at gmail.com (=?UTF-8?Q?Jan_Grash=c3=b6fer?=) Date: Tue, 15 May 2018 20:35:58 +0200 Subject: [Bro] Manager memory requirements for the intel framework In-Reply-To: References: Message-ID: <2b660d26-dcdc-d9fe-8176-1a17b74fd291@gmail.com> On 15/05/18 19:32, Brian OBerry wrote: > Oh, right, thanks for pointing that out. If the decision is to stay with 2.4.x on that > system, we'll definitely look to install 2.4.2. You should consider to update to Bro 2.5.3 as the intel framework was heavily refactored in Bro 2.5. If I remember correctly, there was even a bug that lead to creating new items internally, although they had been added before. Jan From seth at corelight.com Tue May 15 11:56:44 2018 From: seth at corelight.com (Seth Hall) Date: Tue, 15 May 2018 14:56:44 -0400 Subject: [Bro] bro-dag plugin available In-Reply-To: <39c3e14f-8c62-4679-89cf-17426a7198c5@aukmbx01.ad.endace.com> References: <39c3e14f-8c62-4679-89cf-17426a7198c5@aukmbx01.ad.endace.com> Message-ID: <43468A0E-4A4D-4602-B4A6-7A342F9EDF0B@corelight.com> On 14 May 2018, at 18:30, Stephen Donnelly wrote: > I didn't realise lb_interfaces was out of date, so I'll remove that > section. I think it is clear enough how to use multiple workers. Yeah, that's our bad. It's not documented anywhere that it's sort of falling out of date (I forgot that it was even a feature!). I'm just concerned that we'd have a small portion of the community that started using it and we wouldn't be aware of that usage and accidentally break it or something. Thanks! .Seth -- Seth Hall * Corelight, Inc * www.corelight.com From seth at corelight.com Tue May 15 11:59:10 2018 From: seth at corelight.com (Seth Hall) Date: Tue, 15 May 2018 14:59:10 -0400 Subject: [Bro] ascii logger: unexpected modification to default_rotation_postprocessor_cmd and default_rotation_date_format during runtime In-Reply-To: <20180515122811.3160a460@NB181106> References: <20180515122811.3160a460@NB181106> Message-ID: <533FF934-523A-4D5E-8F34-AA1F995A6A8C@corelight.com> On 15 May 2018, at 6:28, Frank Meier wrote: > I noticed a strange behavior: my bro 2.5.3 running on Linux for about > 15 > days suddenly "forgot" my settings for > Log::default_rotation_postprocessor_cmd and This has been a long standing bug that only ever seems to express itself on heavily loaded systems but generally seems to be somewhat rare. We haven't been able to find the exact bug yet, but some of us have known about it for quite a while. It would help if we could reproduce it reliably, but I suspect that in order to reproduce it we need to fully understand it first. 
:) .Seth -- Seth Hall * Corelight, Inc * www.corelight.com From brian.oberry at bluvector.io Tue May 15 12:16:42 2018 From: brian.oberry at bluvector.io (Brian OBerry) Date: Tue, 15 May 2018 19:16:42 +0000 Subject: [Bro] Manager memory requirements for the intel framework Message-ID: <4823F246-B234-4674-8E98-E7EB9EED2EE4@bluvector.io> Thanks, I don?t control the intel input and can?t answer the questions, but will forward this to those who can. What do you mean by ?? have another class of problems soon?? From: Micha? Purzy?ski Date: Tuesday, May 15, 2018 at 1:50 PM To: Brian OBerry Cc: Drew Dixon , "bro at bro.org" Subject: EXT: Re: [Bro] Manager memory requirements for the intel framework 100k indicators sounds like your matches will be mostly noise anyway. Does your source have some metadata like - category (to tell CDN from CnC) - last time seen malicious And so on and so forth, so you can decide what to load, instead of loading everything? I?m not saying Bro cannot handle that, it most likely can, I?m saying you?ll have another class of problems soon. On May 15, 2018, at 10:16 AM, Brian OBerry > wrote: Thanks, Drew, Agreed that it would be best to upgrade, and we moved on to Bro 2.5.3 a couple releases ago. No, misc/scan is not loaded. Unfortunately, I don?t have the production intel input file. I?m told it?s about 100K indicators of about 50% domain, 25% addr, and 25% hash, plus a few email and url indicators less than 1%. We started testing with a file of 18K domain indicators, but will be expanding to a more realistic profile next. Maybe I started this thread a little early, without the representative input file, but I wanted to query the community for any ideas or suggestions about our initial results. Thank you for your ideas! Brian From: Drew Dixon > Date: Tuesday, May 15, 2018 at 12:35 PM To: Brian OBerry > Cc: "bro at bro.org" > Subject: EXT: Re: [Bro] Manager memory requirements for the intel framework It may be worth upgrading your production system either way since I do not believe 2.4.x is technically supported any longer being that it's a release from 2015. Plus, lots of improvements since then... I'm not sure if this will help your specific problem but I'd be curious to know if you're also running the default scan detection script on this box? In your local.bro are you loading "misc/scan"? If so, that script is known to be a huge memory hog and could be indirectly contributing to your high memory usage...just wanted to mention that, since you mentioned you commented out the conditional that invokes ?event Intel::new_item(item)? the problem may reside more directly with the Intel Framework. How many indicators of each indicator type are you loading into the Intel Framework? -Drew On Tue, May 15, 2018 at 9:13 AM, Brian OBerry > wrote: Hello, All, We?re trying to understand manager memory requirements when the intel framework is in use, after experiencing multiple manager crashes per day when using the framework on a low-bandwidth (less than 1Gbps) CentOS 6 machine running a production Bro 2.4.1 cluster. These are happening because the manager is exhausting its tcmalloc heap limit of 16G, as reported in its stderr.log. We removed the heap limit on an idle (no network traffic) Bro 2.4.1 test system, and found the parent VSize reported by ?broctl top manager? went to 27G for an intel input file of 18K unique Intel::DOMAIN items. It remained at 27G after many cycles of replacing the input file with 18K new unique items. 
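On the metadata idea above: if the feed does carry per-indicator fields such as a category or a last-seen date, the intel framework will read additional meta.* columns once the MetaData record is extended, which at least makes that context available when deciding what to keep. A sketch, where both field names are hypothetical and would have to match the feed's meta.category / meta.last_seen columns:

    redef record Intel::MetaData += {
        ## hypothetical feed-supplied category, e.g. to tell CDN from CnC
        category: string &optional;
        ## hypothetical last-seen date, as provided by the feed
        last_seen: string &optional;
    };

Filtering or aging out entries based on those fields is still up to whoever prepares or loads the feed, but hits that do fire then carry the context needed to judge them.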
Restoring the heap limit and attaching gdb to the manager on the test system shows a malloc failure backtrace that comes out of RemoteSerializer::SendCall (). We commented the conditional that invokes ?event Intel::new_item(item)? in base/frameworks/intel/main.bro to disable remote synchronization with the workers, and the huge VSize disappeared. We then built bro from master (version 2.5-569) and retested. The manager VSize is much lower, but is still about 15G. Any advice on how to proceed with further diagnostics to hopefully reign in the manager memory requirements for intel? It doesn?t appear at first blush that upgrading Bro will fix it, at least not entirely, and we?re reluctant to upgrade the production system without fully understanding the problem. Thanks, Brian _______________________________________________ Bro mailing list bro at bro-ids.org http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro _______________________________________________ Bro mailing list bro at bro-ids.org http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180515/89e82424/attachment-0001.html From brian.oberry at bluvector.io Tue May 15 13:17:21 2018 From: brian.oberry at bluvector.io (Brian OBerry) Date: Tue, 15 May 2018 20:17:21 +0000 Subject: [Bro] Manager memory requirements for the intel framework Message-ID: <4A59EAA3-4916-405B-A68E-1FF2978B0132@bluvector.io> > It remained at 27G after many cycles of replacing the input file with 18K new unique items. That is interesting because by default the intel framework doesn't expire items, so every time you replaced the file you were loading an additional 18k items.. I took a closer look and see that the manager virt value does increase about 22M with each update, which is roughly the sum of the starting Intel::data_store and Intel::min_data_store sizes (21M) as reported by the global_sizes() table. it was masked by the 27G value before I commented out the "event Intel::new_item()" call. Makes sense, we must be adding that sum each time we run the update. If I get a chance I will resurrect the benchmarking code I was working on a while ago.. It would do things like create a table of hosts and add 10k,20k,30k,40k hosts to it and see what the memory usage was for each count to see what the real work data usage is for different sized data structures. I never tried it with the intel framework though. That would be handy! Can we help with that? I need to generate intel input files in tiered sizes for testing. If your code is written in C/C++ or python, it might make a great starting point for that. > We commented the conditional that invokes ?event Intel::new_item(item)? in base/frameworks/intel/main.bro to disable remote synchronization with the workers, and the huge VSize disappeared. This makes more sense.. I don't think your memory usage has anything to do with the intel itself, I think the communication code is falling behind. Okay. I'm curious why the heap isn't released afterwards, though. Or maybe the communication never completes? I don't know how event processing works, but wonder if the remote Intel::new_item event is "stuck" in the worker if it's processed synchronously in the same thread that's waiting for network input. I never tried this test with traffic running to the test system, but will do so. I did run a heap check on a manager running in debug mode (-m), and confirmed it wasn't leaking memory. 
How many worker processes do you have configured? Are they running on the same box or separate boxes? 32, so lots of communication for intel updates. If you load up 18k indicators but have 100 worker nodes, the bro manager needs to send out 1,800,000 events to all the workers. if the workers can't keep up, that data just ends up buffered in memory on the manager until it can be sent out. Ouch! Thanks for the help! Brian From franky.meier.1 at gmx.de Tue May 15 22:57:51 2018 From: franky.meier.1 at gmx.de (Frank Meier) Date: Wed, 16 May 2018 07:57:51 +0200 Subject: [Bro] ascii logger: unexpected modification to default_rotation_postprocessor_cmd and default_rotation_date_format during runtime In-Reply-To: <533FF934-523A-4D5E-8F34-AA1F995A6A8C@corelight.com> References: <20180515122811.3160a460@NB181106> <533FF934-523A-4D5E-8F34-AA1F995A6A8C@corelight.com> Message-ID: <20180516075751.7f986fa6@NB181106> Hi! On Tue, 15 May 2018 14:59:10 -0400 "Seth Hall" wrote: > > On 15 May 2018, at 6:28, Frank Meier wrote: > > > I noticed a strange behavior: my bro 2.5.3 running on Linux for > > about 15 > > days suddenly "forgot" my settings for > > Log::default_rotation_postprocessor_cmd and > > This has been a long standing bug that only ever seems to express > itself on heavily loaded systems but generally seems to be somewhat > rare. We haven't been able to find the exact bug yet, but some of us > have known about it for quite a while. It would help if we could > reproduce it reliably, but I suspect that in order to reproduce it we > need to fully understand it first. :) > Thanks, at least I am not alone with the problem! Perhaps it makes sense to add some logging to the exotic code paths so we get some clues when it happens again. I will have a look. Franky. From hacecky at jlab.org Wed May 16 12:25:13 2018 From: hacecky at jlab.org (Eric Hacecky) Date: Wed, 16 May 2018 15:25:13 -0400 (EDT) Subject: [Bro] Conn log shows massive file transfer inbetween normal browsing Message-ID: <2125406100.208880.1526498713837.JavaMail.zimbra@jlab.org> I'm having some anomalies in my conn.log. Scenario: Internal host on my network (10.10.10.10) is browsing autotrader (20.20.20.20) Inbetween normal bro logs for the related traffic, I have things like this showing up: // conn.log 1524177777.577777 Ccq8hi7x7jIegYyKE7 10.10.10.10 63971 20.20.20.20 443 tcp - 0.015780 1284714853 0 S0 T F 0 Sa 1 48 1 44 (empty) As I'm reading this, it shows my internal host sent ~1.2gigs of data in .015 seconds to this external host. S0 for the conn_state "Connection attempt seen, no reply." So bro thinks my host tried to send 1.2 gigs off-site but failed? (there are many more similar log entries for the same host) Any ideas what can cause this? Thanks, Eric From jazoff at illinois.edu Wed May 16 13:05:03 2018 From: jazoff at illinois.edu (Azoff, Justin S) Date: Wed, 16 May 2018 20:05:03 +0000 Subject: [Bro] Conn log shows massive file transfer inbetween normal browsing In-Reply-To: <2125406100.208880.1526498713837.JavaMail.zimbra@jlab.org> References: <2125406100.208880.1526498713837.JavaMail.zimbra@jlab.org> Message-ID: > On May 16, 2018, at 3:25 PM, Eric Hacecky wrote: > > I'm having some anomalies in my conn.log. 
> > Scenario: > > Internal host on my network (10.10.10.10) is browsing autotrader (20.20.20.20) > > Inbetween normal bro logs for the related traffic, I have things like this showing up: > > // conn.log > 1524177777.577777 Ccq8hi7x7jIegYyKE7 10.10.10.10 63971 20.20.20.20 443 tcp - 0.015780 1284714853 0 S0 T F 0 Sa 1 48 1 44 (empty) > > > As I'm reading this, it shows my internal host sent ~1.2gigs of data in .015 seconds to this external host. > > S0 for the conn_state "Connection attempt seen, no reply." > > > So bro thinks my host tried to send 1.2 gigs off-site but failed? (there are many more similar log entries for the same host) > > Any ideas what can cause this? > > Thanks, > Eric that's probably a websocket connection or something that is idle for long periods of time. Since it's idle for so long bro is assuming the connection ended and is then getting confused when they start talking again. You can fix it by redeffing this value to be higher: ## If a TCP connection is inactive, time it out after this interval. If 0 secs, ## then don't time it out. ## ## .. bro:see:: udp_inactivity_timeout icmp_inactivity_timeout set_inactivity_timeout const tcp_inactivity_timeout = 5 min &redef; You can figure out how high it needs to be based on how frequently you are seeing that connection logged. ? Justin Azoff From hacecky at jlab.org Wed May 16 13:54:54 2018 From: hacecky at jlab.org (Eric Hacecky) Date: Wed, 16 May 2018 16:54:54 -0400 (EDT) Subject: [Bro] Conn log shows massive file transfer inbetween normal browsing In-Reply-To: <163508146.227326.1526504039991.JavaMail.zimbra@jlab.org> References: <2125406100.208880.1526498713837.JavaMail.zimbra@jlab.org> Message-ID: <1507523300.227362.1526504094933.JavaMail.zimbra@jlab.org> Justin, Thanks for the response. > You can figure out how high it needs to be based on how frequently you are seeing that connection logged. Here are some more of the logs (chopped down for readability). You can see there are multiple "large transfers" in a small time window, less than 5 minutes. Does this mean setting the window higher isn't going to make a difference since I'm already seeing connections more frequently than 5 minutes? 4:38:33.098 PM - 10.10.10.10 63962 20.20.20.20 443 tcp - 0.015809 73288814 0 S0 4:38:31.815 PM - 10.10.10.10 63951 20.20.20.20 443 tcp - 0.015764 1834934747 0 S0 4:38:31.565 PM - 10.10.10.10 63949 20.20.20.20 443 tcp - 0.015718 616216164 0 S0 4:38:28.952 PM - 10.10.10.10 64031 20.20.20.20 443 tcp - 3.014994 1213244309 0 S0 4:38:28.701 PM - 10.10.10.10 64028 20.20.20.20 443 tcp - 3.023816 1777413339 0 S0 4:38:28.329 PM - 10.10.10.10 64024 20.20.20.20 443 TCP_ack_underflow_or_misorder - F worker-2 4:38:28.313 PM - 10.10.10.10 64024 20.20.20.20 443 tcp - 3.017128 0 0 S0 4:38:28.272 PM - 10.10.10.10 64022 20.20.20.20 443 TCP_ack_underflow_or_misorder - F worker-2 4:38:28.257 PM - 10.10.10.10 64022 20.20.20.20 443 tcp - 3.010362 0 0 S0 I included the tcp_ack_underflow_or_misorder from weird log too in case that sheds any light. -Eric ----- Original Message ----- From: "Justin S Azoff" To: "Eric Hacecky" Cc: bro at bro.org Sent: Wednesday, May 16, 2018 4:05:03 PM Subject: Re: [Bro] Conn log shows massive file transfer inbetween normal browsing > On May 16, 2018, at 3:25 PM, Eric Hacecky wrote: > > I'm having some anomalies in my conn.log. 
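Concretely, that override goes in local.bro as a plain redef; the one-hour value below is only an illustration and should be sized to the longest idle gap those connections actually show:

    # in local.bro; raise the idle timeout for all TCP connections
    redef tcp_inactivity_timeout = 1 hr;

If a global change feels too blunt, the set_inactivity_timeout() function mentioned in that doc comment can raise the timeout for individual connections from an event handler instead.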
> > Scenario: > > Internal host on my network (10.10.10.10) is browsing autotrader (20.20.20.20) > > Inbetween normal bro logs for the related traffic, I have things like this showing up: > > // conn.log > 1524177777.577777 Ccq8hi7x7jIegYyKE7 10.10.10.10 63971 20.20.20.20 443 tcp - 0.015780 1284714853 0 S0 T F 0 Sa 1 48 1 44 (empty) > > > As I'm reading this, it shows my internal host sent ~1.2gigs of data in .015 seconds to this external host. > > S0 for the conn_state "Connection attempt seen, no reply." > > > So bro thinks my host tried to send 1.2 gigs off-site but failed? (there are many more similar log entries for the same host) > > Any ideas what can cause this? > > Thanks, > Eric that's probably a websocket connection or something that is idle for long periods of time. Since it's idle for so long bro is assuming the connection ended and is then getting confused when they start talking again. You can fix it by redeffing this value to be higher: ## If a TCP connection is inactive, time it out after this interval. If 0 secs, ## then don't time it out. ## ## .. bro:see:: udp_inactivity_timeout icmp_inactivity_timeout set_inactivity_timeout const tcp_inactivity_timeout = 5 min &redef; You can figure out how high it needs to be based on how frequently you are seeing that connection logged. ? Justin Azoff From jazoff at illinois.edu Wed May 16 14:15:58 2018 From: jazoff at illinois.edu (Azoff, Justin S) Date: Wed, 16 May 2018 21:15:58 +0000 Subject: [Bro] Conn log shows massive file transfer inbetween normal browsing In-Reply-To: <1507523300.227362.1526504094933.JavaMail.zimbra@jlab.org> References: <2125406100.208880.1526498713837.JavaMail.zimbra@jlab.org> <1507523300.227362.1526504094933.JavaMail.zimbra@jlab.org> Message-ID: <41729817-303B-4406-AA7B-91975207698A@illinois.edu> > On May 16, 2018, at 4:54 PM, Eric Hacecky wrote: > > Justin, > > Thanks for the response. > >> You can figure out how high it needs to be based on how frequently you are seeing that connection logged. > > Here are some more of the logs (chopped down for readability). You can see there are multiple "large transfers" in a small time window, less than 5 minutes. Does this mean setting the window higher isn't going to make a difference since I'm already seeing connections more frequently than 5 minutes? > > 4:38:33.098 PM - 10.10.10.10 63962 20.20.20.20 443 tcp - 0.015809 73288814 0 S0 > 4:38:31.815 PM - 10.10.10.10 63951 20.20.20.20 443 tcp - 0.015764 1834934747 0 S0 > 4:38:31.565 PM - 10.10.10.10 63949 20.20.20.20 443 tcp - 0.015718 616216164 0 S0 > 4:38:28.952 PM - 10.10.10.10 64031 20.20.20.20 443 tcp - 3.014994 1213244309 0 S0 > 4:38:28.701 PM - 10.10.10.10 64028 20.20.20.20 443 tcp - 3.023816 1777413339 0 S0 > 4:38:28.329 PM - 10.10.10.10 64024 20.20.20.20 443 TCP_ack_underflow_or_misorder - F worker-2 > 4:38:28.313 PM - 10.10.10.10 64024 20.20.20.20 443 tcp - 3.017128 0 0 S0 > 4:38:28.272 PM - 10.10.10.10 64022 20.20.20.20 443 TCP_ack_underflow_or_misorder - F worker-2 > 4:38:28.257 PM - 10.10.10.10 64022 20.20.20.20 443 tcp - 3.010362 0 0 S0 hmm, that is showing 7 different tcp source ports, so this wouldn't be the same connection. If you search for 63949 do you find earlier matches? > > I included the tcp_ack_underflow_or_misorder from weird log too in case that sheds any light. > > -Eric It's probably realted.. ? 
Justin Azoff From Stephen.Donnelly at endace.com Wed May 16 22:45:44 2018 From: Stephen.Donnelly at endace.com (Stephen Donnelly) Date: Thu, 17 May 2018 05:45:44 +0000 Subject: [Bro] Endace DAG In-Reply-To: <90A309B5-94E8-4A20-B829-19BA2187A344@gmail.com> References: <6E283C41-5A13-49F7-897A-6D85B53D4D11@uwaterloo.ca> <90A309B5-94E8-4A20-B829-19BA2187A344@gmail.com> Message-ID: <5a5b75e9d46b4059ac197e01064f7293@endace.com> As a note, DAG cards are still not $300 (at least new!), but should not cost more than your server. You can figure out who to ask if you are interested in actual pricing I would think. Stephen From: bro-bounces at bro.org On Behalf Of Michal Purzynski Sent: Wednesday, 16 May 2018 5:21 AM To: Carl Rotenan Cc: bro Subject: Re: [Bro] Endace DAG Yes I would :) Try afpacket and maybe X710. You?re going to invest in cards that cost more than your server (DAG) do why not spend 300 usd and make an experiment. https://github.com/pevma/SEPTun https://github.com/pevma/SEPTun-Mark-II This applies to Bro as well, especially the part about hardware and OS tuning. On May 15, 2018, at 10:06 AM, Carl Rotenan > wrote: Would you say AF_PACKET over PF_RING? Thanks. On Tue, May 15, 2018 at 11:59 AM, Mike Patterson > wrote: I don't know how useful my contribution here is, but... Yes, I have a 9.2X2 we purchased in 2010, now in its second server and fourth or fifth Bro install. Obviously having kept it this long, I don't have many complaints. At the same time, I don't find a whole lot of difference between it and the Intel X520s we have deployed with PF_RING (and one of our newer PF_RING installations is outperforming the DAG). That said, I've spent more time playing with the X520s, so it's possible the DAG could outperform them with equivalent TLC (and also obviously this is an older card) - but X520s are older nowadays too. I haven't tried the bro-pkg for the DAG yet, although once I've got some free time (hahaha) I would very much like to give it a try. Also YMMV quite a bit depending on the hardware you're marrying to your NICs, your real-world network traffic, specific distribution/kernel version, etc etc etc. And I expect that at least one regular list contributor might suggest you try AF_PACKET with your Intels. :) Mike > On May 15, 2018, at 11:39 AM, Carl Rotenan > wrote: > > Is anyone using the Endace DAG cards? I looking for the performance gains over using PF_RING and off the shelf Intel cards. Ultimately I'm looking for the best file extraction performance that can be achieved. Thanks in advance. > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro _______________________________________________ Bro mailing list bro at bro-ids.org http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180517/addfde74/attachment-0001.html From hacecky at jlab.org Thu May 17 05:47:06 2018 From: hacecky at jlab.org (Eric Hacecky) Date: Thu, 17 May 2018 08:47:06 -0400 (EDT) Subject: [Bro] Conn log shows massive file transfer inbetween normal browsing In-Reply-To: <1595369299.301564.1526561200527.JavaMail.zimbra@jlab.org> References: <2125406100.208880.1526498713837.JavaMail.zimbra@jlab.org> <1507523300.227362.1526504094933.JavaMail.zimbra@jlab.org> <41729817-303B-4406-AA7B-91975207698A@illinois.edu> Message-ID: <468108940.301629.1526561226942.JavaMail.zimbra@jlab.org> Here are the matching source port entries for the last 3 connections. The other two from ports 64031 and 64028 did not have any others from those ports. The connection IDs are different throughout, I've included them this time around. 4:38:31.565 PM CmtAeYrQjXqUGW4xi 10.10.10.10 63949 20.20.20.20 443 tcp - 0.015718 616216164 0 S0 T F 0 Sa 1 48 1 44 4:38:22.566 PM CiyCGi1xwft9PDrqG9 10.10.10.10 63949 20.20.20.20 443 tcp - 3.013144 616216164 0 S0 T F 0 Sa 2 104 2 88 4:38:31.815 PM Cv2Tqo4ErGAdpsnth2 10.10.10.10 63951 20.20.20.20 443 tcp - 0.015764 1834934747 0 S0 T F 0 Sa 1 48 1 44 4:38:22.817 PM CYpYXo175dS7gtQ1p1 10.10.10.10 63951 20.20.20.20 443 tcp - 3.011727 1834934747 0 S0 T F 0 Sa 2 104 2 88 4:38:33.098 PM C3g5Yo4goIOLJEzvSh 10.10.10.10 63962 20.20.20.20 443 tcp - 0.015809 73288814 0 S0 T F 0 Sa 1 48 1 44 4:38:24.099 PM Cv9aWbc0kwMKi7BC2 10.10.10.10 63962 20.20.20.20 443 tcp - 3.007776 73288814 0 S0 T F 0 Sa 2 104 2 88 Is there any significance to the orig_bytes and S0 conn state? I'm considering filtering these log entires but not sure if I would end up filtering any 'real' traffic in the process. Eric ----- Original Message ----- From: "Justin S Azoff" To: "Eric Hacecky" Cc: bro at bro.org Sent: Wednesday, May 16, 2018 5:15:58 PM Subject: Re: [Bro] Conn log shows massive file transfer inbetween normal browsing > On May 16, 2018, at 4:54 PM, Eric Hacecky wrote: > > Justin, > > Thanks for the response. > >> You can figure out how high it needs to be based on how frequently you are seeing that connection logged. > > Here are some more of the logs (chopped down for readability). You can see there are multiple "large transfers" in a small time window, less than 5 minutes. Does this mean setting the window higher isn't going to make a difference since I'm already seeing connections more frequently than 5 minutes? > > 4:38:33.098 PM - 10.10.10.10 63962 20.20.20.20 443 tcp - 0.015809 73288814 0 S0 > 4:38:31.815 PM - 10.10.10.10 63951 20.20.20.20 443 tcp - 0.015764 1834934747 0 S0 > 4:38:31.565 PM - 10.10.10.10 63949 20.20.20.20 443 tcp - 0.015718 616216164 0 S0 > 4:38:28.952 PM - 10.10.10.10 64031 20.20.20.20 443 tcp - 3.014994 1213244309 0 S0 > 4:38:28.701 PM - 10.10.10.10 64028 20.20.20.20 443 tcp - 3.023816 1777413339 0 S0 > 4:38:28.329 PM - 10.10.10.10 64024 20.20.20.20 443 TCP_ack_underflow_or_misorder - F worker-2 > 4:38:28.313 PM - 10.10.10.10 64024 20.20.20.20 443 tcp - 3.017128 0 0 S0 > 4:38:28.272 PM - 10.10.10.10 64022 20.20.20.20 443 TCP_ack_underflow_or_misorder - F worker-2 > 4:38:28.257 PM - 10.10.10.10 64022 20.20.20.20 443 tcp - 3.010362 0 0 S0 hmm, that is showing 7 different tcp source ports, so this wouldn't be the same connection. If you search for 63949 do you find earlier matches? > > I included the tcp_ack_underflow_or_misorder from weird log too in case that sheds any light. > > -Eric It's probably realted.. ? 
Justin Azoff From jazoff at illinois.edu Thu May 17 06:00:15 2018 From: jazoff at illinois.edu (Azoff, Justin S) Date: Thu, 17 May 2018 13:00:15 +0000 Subject: [Bro] Conn log shows massive file transfer inbetween normal browsing In-Reply-To: <468108940.301629.1526561226942.JavaMail.zimbra@jlab.org> References: <2125406100.208880.1526498713837.JavaMail.zimbra@jlab.org> <1507523300.227362.1526504094933.JavaMail.zimbra@jlab.org> <41729817-303B-4406-AA7B-91975207698A@illinois.edu> <468108940.301629.1526561226942.JavaMail.zimbra@jlab.org> Message-ID: > On May 17, 2018, at 8:47 AM, Eric Hacecky wrote: > > Here are the matching source port entries for the last 3 connections. > > The other two from ports 64031 and 64028 did not have any others from those ports. > > The connection IDs are different throughout, I've included them this time around. > > 4:38:31.565 PM CmtAeYrQjXqUGW4xi 10.10.10.10 63949 20.20.20.20 443 tcp - 0.015718 616216164 0 S0 T F 0 Sa 1 48 1 44 > 4:38:22.566 PM CiyCGi1xwft9PDrqG9 10.10.10.10 63949 20.20.20.20 443 tcp - 3.013144 616216164 0 S0 T F 0 Sa 2 104 2 88 > 4:38:31.815 PM Cv2Tqo4ErGAdpsnth2 10.10.10.10 63951 20.20.20.20 443 tcp - 0.015764 1834934747 0 S0 T F 0 Sa 1 48 1 44 > 4:38:22.817 PM CYpYXo175dS7gtQ1p1 10.10.10.10 63951 20.20.20.20 443 tcp - 3.011727 1834934747 0 S0 T F 0 Sa 2 104 2 88 > 4:38:33.098 PM C3g5Yo4goIOLJEzvSh 10.10.10.10 63962 20.20.20.20 443 tcp - 0.015809 73288814 0 S0 T F 0 Sa 1 48 1 44 > 4:38:24.099 PM Cv9aWbc0kwMKi7BC2 10.10.10.10 63962 20.20.20.20 443 tcp - 3.007776 73288814 0 S0 T F 0 Sa 2 104 2 88 > > Is there any significance to the orig_bytes and S0 conn state? > > I'm considering filtering these log entires but not sure if I would end up filtering any 'real' traffic in the process. Now that I look closer at this i think my original comment was wrong, if these were long connections that bro was getting confused about, the history field would be Da (data + ack), not Sa (syn + ack). Are other connections logged properly by bro ? Connections with a full history of something like ShAdDafF? would be interesting to see a pcap of the traffic between those two hosts, then you could see if the system is even getting the full 3 way handshake or not. ? Justin Azoff From hacecky at jlab.org Thu May 17 06:47:28 2018 From: hacecky at jlab.org (Eric Hacecky) Date: Thu, 17 May 2018 09:47:28 -0400 (EDT) Subject: [Bro] Conn log shows massive file transfer inbetween normal browsing In-Reply-To: <215427909.318954.1526564833784.JavaMail.zimbra@jlab.org> References: <2125406100.208880.1526498713837.JavaMail.zimbra@jlab.org> <1507523300.227362.1526504094933.JavaMail.zimbra@jlab.org> <41729817-303B-4406-AA7B-91975207698A@illinois.edu> <468108940.301629.1526561226942.JavaMail.zimbra@jlab.org> Message-ID: <143101804.319008.1526564848805.JavaMail.zimbra@jlab.org> I sent a few screenshots and the pcap for 63949 out of band. 3 way handshake not present. For clarity, the bro conn log is coming from a sensor that is being fed a decrypted stream of 443 traffic. I run snort/sancp full packet capture on the same box (same stream) and it doesn't have any data for these connections at all. My other sensor that gets the encrypted traffic does see the connections, which is where I sourced the pcap but as you can see it doesn't show a huge chunk of data like the bro log does. > Are other connections logged properly by bro ? Connections with a full history of something like ShAdDafF? In general? Yes. I've had bro running at my site for a number of years now. 
For these specific connections no. The only conn state I have for 10.10.10.10 to 20.20.20.20 is S0. -Eric ----- Original Message ----- From: "Justin S Azoff" To: "Eric Hacecky" Cc: bro at bro.org Sent: Thursday, May 17, 2018 9:00:15 AM Subject: Re: [Bro] Conn log shows massive file transfer inbetween normal browsing > On May 17, 2018, at 8:47 AM, Eric Hacecky wrote: > > Here are the matching source port entries for the last 3 connections. > > The other two from ports 64031 and 64028 did not have any others from those ports. > > The connection IDs are different throughout, I've included them this time around. > > 4:38:31.565 PM CmtAeYrQjXqUGW4xi 10.10.10.10 63949 20.20.20.20 443 tcp - 0.015718 616216164 0 S0 T F 0 Sa 1 48 1 44 > 4:38:22.566 PM CiyCGi1xwft9PDrqG9 10.10.10.10 63949 20.20.20.20 443 tcp - 3.013144 616216164 0 S0 T F 0 Sa 2 104 2 88 > 4:38:31.815 PM Cv2Tqo4ErGAdpsnth2 10.10.10.10 63951 20.20.20.20 443 tcp - 0.015764 1834934747 0 S0 T F 0 Sa 1 48 1 44 > 4:38:22.817 PM CYpYXo175dS7gtQ1p1 10.10.10.10 63951 20.20.20.20 443 tcp - 3.011727 1834934747 0 S0 T F 0 Sa 2 104 2 88 > 4:38:33.098 PM C3g5Yo4goIOLJEzvSh 10.10.10.10 63962 20.20.20.20 443 tcp - 0.015809 73288814 0 S0 T F 0 Sa 1 48 1 44 > 4:38:24.099 PM Cv9aWbc0kwMKi7BC2 10.10.10.10 63962 20.20.20.20 443 tcp - 3.007776 73288814 0 S0 T F 0 Sa 2 104 2 88 > > Is there any significance to the orig_bytes and S0 conn state? > > I'm considering filtering these log entires but not sure if I would end up filtering any 'real' traffic in the process. Now that I look closer at this i think my original comment was wrong, if these were long connections that bro was getting confused about, the history field would be Da (data + ack), not Sa (syn + ack). Are other connections logged properly by bro ? Connections with a full history of something like ShAdDafF? would be interesting to see a pcap of the traffic between those two hosts, then you could see if the system is even getting the full 3 way handshake or not. ? Justin Azoff From jazoff at illinois.edu Thu May 17 08:30:40 2018 From: jazoff at illinois.edu (Azoff, Justin S) Date: Thu, 17 May 2018 15:30:40 +0000 Subject: [Bro] Conn log shows massive file transfer inbetween normal browsing In-Reply-To: <143101804.319008.1526564848805.JavaMail.zimbra@jlab.org> References: <2125406100.208880.1526498713837.JavaMail.zimbra@jlab.org> <1507523300.227362.1526504094933.JavaMail.zimbra@jlab.org> <41729817-303B-4406-AA7B-91975207698A@illinois.edu> <468108940.301629.1526561226942.JavaMail.zimbra@jlab.org> <143101804.319008.1526564848805.JavaMail.zimbra@jlab.org> Message-ID: <4BE8F3A4-6AE8-4380-B55D-7C9AABAACA82@illinois.edu> > On May 17, 2018, at 9:47 AM, Eric Hacecky wrote: > > I sent a few screenshots and the pcap for 63949 out of band. 3 way handshake not present. > > For clarity, the bro conn log is coming from a sensor that is being fed a decrypted stream of 443 traffic. Ah.. that explains it, seems whatever device that is decrypting the ssl traffic is sending garbage to bro. ? Justin Azoff From jlay at slave-tothe-box.net Thu May 17 09:25:22 2018 From: jlay at slave-tothe-box.net (James Lay) Date: Thu, 17 May 2018 10:25:22 -0600 Subject: [Bro] An assist with Splunk addon Message-ID: All, So I've been dabbling with Splunk, Bro, and the Corelight apps. 
I setup a listener, installed the App on the Splunk server, and installed the Universal Forwarder (just trying it out; I know I can just use rsyslog/syslog-ng) on the machine that's running bro, pointed the Universal Forwarder to a listener, and install the TA addon on the machine running bro and the Universal Forwarder. Alas, my output is...unexpected: Anyone have any hints on what the issue might be? Thank you. James -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180517/8c62db4b/attachment-0001.html -------------- next part -------------- A non-text attachment was scrubbed... Name: 2018-05-17 10_21_02-Search _ Splunk 7.1.0.png Type: image/png Size: 24445 bytes Desc: not available Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180517/8c62db4b/attachment-0001.bin From buysse at umn.edu Thu May 17 09:50:26 2018 From: buysse at umn.edu (Joshua Buysse) Date: Thu, 17 May 2018 11:50:26 -0500 Subject: [Bro] An assist with Splunk addon In-Reply-To: References: Message-ID: This looks like you?re sending ?cooked? Splunk output to a TCP input, which is suitable for syslog data or similar (though I would recommend using an intermediate like syslog-ng and picking up the files rather than having splunkd receive syslog directly). If you?re using the GUI, you want to add the input port from Settings -> Data -> Forwarding and Receiving and configure a port for receiving the cooked data there. -J -- Joshua Buysse University of Minnesota - University Information Security "On two occasions I have been asked, 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question." -- Charles Babbage > On May 17, 2018, at 11:25, James Lay wrote: > > All, > > So I've been dabbling with Splunk, Bro, and the Corelight apps. I setup a listener, installed the App on the Splunk server, and installed the Universal Forwarder (just trying it out; I know I can just use rsyslog/syslog-ng) on the machine that's running bro, pointed the Universal Forwarder to a listener, and install the TA addon on the machine running bro and the Universal Forwarder. Alas, my output is...unexpected: > > <2018-05-17 10_21_02-Search _ Splunk 7.1.0.png> > > Anyone have any hints on what the issue might be? Thank you. > > James > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180517/cd9b90a3/attachment.html From steve at brant.nu Thu May 17 10:19:58 2018 From: steve at brant.nu (Steve Brant) Date: Thu, 17 May 2018 10:19:58 -0700 Subject: [Bro] An assist with Splunk addon In-Reply-To: References: Message-ID: This is because the indexer (listener) is expecting Splunk "cooked" data. Your inputs.conf setting on the indexer is probably something like: [tcp://:9997] it should be: [splunktcp://:9997] https://docs.splunk.com/Documentation/Splunk/7.1.0/Admin/Inputsconf Steve On Thu, May 17, 2018 at 9:37 AM James Lay wrote: > All, > > So I've been dabbling with Splunk, Bro, and the Corelight apps. 
I setup a > listener, installed the App on the Splunk server, and installed the > Universal Forwarder (just trying it out; I know I can just use > rsyslog/syslog-ng) on the machine that's running bro, pointed the Universal > Forwarder to a listener, and install the TA addon on the machine running > bro and the Universal Forwarder. Alas, my output is...unexpected: > > > Anyone have any hints on what the issue might be? Thank you. > > James > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180517/3b6686e2/attachment-0001.html -------------- next part -------------- A non-text attachment was scrubbed... Name: 2018-05-17 10_21_02-Search _ Splunk 7.1.0.png Type: image/png Size: 24445 bytes Desc: not available Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180517/3b6686e2/attachment-0002.bin -------------- next part -------------- A non-text attachment was scrubbed... Name: 2018-05-17 10_21_02-Search _ Splunk 7.1.0.png Type: image/png Size: 24445 bytes Desc: not available Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180517/3b6686e2/attachment-0003.bin From jlay at slave-tothe-box.net Thu May 17 10:39:00 2018 From: jlay at slave-tothe-box.net (James Lay) Date: Thu, 17 May 2018 11:39:00 -0600 Subject: [Bro] An assist with Splunk addon In-Reply-To: References: Message-ID: Thanks all...puts me on the right track. James On 2018-05-17 11:19, Steve Brant wrote: > This is because the indexer (listener) is expecting Splunk "cooked" data. Your inputs.conf setting on the indexer is probably something like: > > [tcp://:9997] > > it should be: > > [splunktcp://:9997] > > https://docs.splunk.com/Documentation/Splunk/7.1.0/Admin/Inputsconf > > Steve > > On Thu, May 17, 2018 at 9:37 AM James Lay wrote: > >> All, >> >> So I've been dabbling with Splunk, Bro, and the Corelight apps. I setup a listener, installed the App on the Splunk server, and installed the Universal Forwarder (just trying it out; I know I can just use rsyslog/syslog-ng) on the machine that's running bro, pointed the Universal Forwarder to a listener, and install the TA addon on the machine running bro and the Universal Forwarder. Alas, my output is...unexpected: >> >> Anyone have any hints on what the issue might be? Thank you. >> >> James _______________________________________________ >> Bro mailing list >> bro at bro-ids.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180517/23cfc609/attachment-0001.html -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/png Size: 24445 bytes Desc: not available Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180517/23cfc609/attachment-0001.bin From jlay at slave-tothe-box.net Thu May 17 12:58:02 2018 From: jlay at slave-tothe-box.net (James Lay) Date: Thu, 17 May 2018 13:58:02 -0600 Subject: [Bro] An assist with Splunk addon In-Reply-To: References: Message-ID: <425134b80526c797fe9f205f5eb3c7e8@localhost> Ok....so now I see data when searching: sourcetype="conn" However the Corelight App proper shows no info....any other hints? Thank you. 
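One quick check before digging into the app itself is to confirm which index and sourcetype the forwarded logs actually landed in, since dashboards generally search a specific combination; the exact values the Corelight App expects are not spelled out in this thread, so treat the names below as placeholders:

    index=* sourcetype=conn earliest=-60m | stats count by index, sourcetype, source

If the data sits in an index the app does not search, either set index= for the Bro log inputs in the forwarder's inputs.conf or point the app's searches at what you have.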
James On 2018-05-17 11:39, James Lay wrote: > Thanks all...puts me on the right track. > > James > > On 2018-05-17 11:19, Steve Brant wrote: > This is because the indexer (listener) is expecting Splunk "cooked" data. Your inputs.conf setting on the indexer is probably something like: > > [tcp://:9997] > > it should be: > > [splunktcp://:9997] > > https://docs.splunk.com/Documentation/Splunk/7.1.0/Admin/Inputsconf > > Steve > > On Thu, May 17, 2018 at 9:37 AM James Lay wrote: > > All, > > So I've been dabbling with Splunk, Bro, and the Corelight apps. I setup a listener, installed the App on the Splunk server, and installed the Universal Forwarder (just trying it out; I know I can just use rsyslog/syslog-ng) on the machine that's running bro, pointed the Universal Forwarder to a listener, and install the TA addon on the machine running bro and the Universal Forwarder. Alas, my output is...unexpected: > > Anyone have any hints on what the issue might be? Thank you. > > James _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro _______________________________________________ Bro mailing list bro at bro-ids.org http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180517/50dfdd84/attachment.html From soehlert at es.net Fri May 18 09:41:45 2018 From: soehlert at es.net (Samuel Oehlert) Date: Fri, 18 May 2018 11:41:45 -0500 Subject: [Bro] An assist with Splunk addon In-Reply-To: <425134b80526c797fe9f205f5eb3c7e8@localhost> References: <425134b80526c797fe9f205f5eb3c7e8@localhost> Message-ID: I haven't played with the Corelight App so I'm not sure what the index names they're looking for are, but usually I've found when TAs don't show anything, yet I can see it search, it's because the index name is not the same. - Sam On Thu, May 17, 2018 at 3:00 PM James Lay wrote: > Ok....so now I see data when searching: > > sourcetype="conn" > > However the Corelight App proper shows no info....any other hints? Thank > you. > > James > > On 2018-05-17 11:39, James Lay wrote: > > Thanks all...puts me on the right track. > > James > > > > On 2018-05-17 11:19, Steve Brant wrote: > > This is because the indexer (listener) is expecting Splunk "cooked" data. > Your inputs.conf setting on the indexer is probably something like: > > [tcp://:9997] > > it should be: > > [splunktcp://:9997] > > https://docs.splunk.com/Documentation/Splunk/7.1.0/Admin/Inputsconf > > Steve > > On Thu, May 17, 2018 at 9:37 AM James Lay > wrote: > >> All, >> >> So I've been dabbling with Splunk, Bro, and the Corelight apps. I setup >> a listener, installed the App on the Splunk server, and installed the >> Universal Forwarder (just trying it out; I know I can just use >> rsyslog/syslog-ng) on the machine that's running bro, pointed the Universal >> Forwarder to a listener, and install the TA addon on the machine running >> bro and the Universal Forwarder. Alas, my output is...unexpected: >> >> >> Anyone have any hints on what the issue might be? Thank you. 
>> >> James >> _______________________________________________ >> Bro mailing list >> bro at bro-ids.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180518/93ff91a1/attachment.html From patrick.kelley at criticalpathsecurity.com Fri May 18 10:15:50 2018 From: patrick.kelley at criticalpathsecurity.com (Patrick Kelley) Date: Fri, 18 May 2018 13:15:50 -0400 Subject: [Bro] An assist with Splunk addon In-Reply-To: References: <425134b80526c797fe9f205f5eb3c7e8@localhost> Message-ID: Echoing the same, but with some additional insight. When app developers build out a TA and Splunk app, they generally make a best effort to anticipate what index and sourcetype an individual's data will claim when ingested. Sometimes that will fail. However, this generally isn't very hard to remedy. If you do the following, you should be able to associate the proper information. In the search window, you should be able to find the sourcetype and source that is correlated with your Bro data. With that information, go to the Corelight App and press the "edit" button in the top right-hand corner of the window. You should then see some magnifying glass icons on the panels. If you click on those, you can substitute the sourcetype and source data in the search query. When you press "Save", the panels should refresh and render your data. If this isn't helpful, I apologize. Feel free to reach out if you'd like more assistance. -PK On Fri, May 18, 2018 at 12:41 PM, Samuel Oehlert wrote: > I haven't played with the Corelight App so I'm not sure what the index > names they're looking for are, but usually I've found when TAs don't show > anything, yet I can see it search, it's because the index name is not the > same. > > - Sam > > On Thu, May 17, 2018 at 3:00 PM James Lay > wrote: > >> Ok....so now I see data when searching: >> >> sourcetype="conn" >> >> However the Corelight App proper shows no info....any other hints? Thank >> you. >> >> James >> >> On 2018-05-17 11:39, James Lay wrote: >> >> Thanks all...puts me on the right track. >> >> James >> >> >> >> On 2018-05-17 11:19, Steve Brant wrote: >> >> This is because the indexer (listener) is expecting Splunk "cooked" data. >> Your inputs.conf setting on the indexer is probably something like: >> >> [tcp://:9997] >> >> it should be: >> >> [splunktcp://:9997] >> >> https://docs.splunk.com/Documentation/Splunk/7.1.0/Admin/Inputsconf >> >> Steve >> >> On Thu, May 17, 2018 at 9:37 AM James Lay >> wrote: >> >>> All, >>> >>> So I've been dabbling with Splunk, Bro, and the Corelight apps. I setup >>> a listener, installed the App on the Splunk server, and installed the >>> Universal Forwarder (just trying it out; I know I can just use >>> rsyslog/syslog-ng) on the machine that's running bro, pointed the Universal >>> Forwarder to a listener, and install the TA addon on the machine running >>> bro and the Universal Forwarder. Alas, my output is...unexpected: >>> >>> >>> Anyone have any hints on what the issue might be? Thank you. 
>>> >>> James >>> _______________________________________________ >>> Bro mailing list >>> bro at bro-ids.org >>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >> >> >> >> >> _______________________________________________ >> Bro mailing list >> bro at bro-ids.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro >> >> >> >> _______________________________________________ >> Bro mailing list >> bro at bro-ids.org >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > > > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro > -- *Patrick Kelley, CISSP, C|EH, ITIL* *CTO* patrick.kelley at criticalpathsecurity.com (o) 770-224-6482 -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180518/9ca99033/attachment-0001.html From ossamabzos at gmail.com Fri May 18 19:16:18 2018 From: ossamabzos at gmail.com (bz Os) Date: Sat, 19 May 2018 03:16:18 +0100 Subject: [Bro] how can detect attack from pcap by bro Message-ID: hello evry one i I tested bro ids with tcpdump darpa 1999 I imported all the script for the detection,as results i had nodetection all results are about protocol detector ,i tested also suricata with the same tcpdump as results suricata detect large number of attack , i want to know how can i use bro for detect attack in the tcp dump to compare the number of attack detected against suricata -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180519/bd8ee5a4/attachment.html From brian.oberry at bluvector.io Mon May 21 08:53:39 2018 From: brian.oberry at bluvector.io (Brian OBerry) Date: Mon, 21 May 2018 15:53:39 +0000 Subject: [Bro] Manager memory requirements for the intel framework Message-ID: <813FEFB8-2009-4C9D-B52D-A6B51411672D@bluvector.io> All, A quick update: cluster workers apparently must be receiving traffic to process Intel::new_item events, because the rapid manager memory growth didn't occur while our test system was receiving traffic. Thanks for the help, we've moved on to generating a more representative intel input file and replaying traffic for further testing. Brian ?-----Original Message----- From: "Azoff, Justin S" Date: Tuesday, May 15, 2018 at 2:06 PM To: Brian OBerry Cc: "bro at bro.org" , Jon Siwek Subject: EXT: Re: [Bro] Manager memory requirements for the intel framework On May 15, 2018, at 9:13 AM, Brian OBerry wrote: > > It remained at 27G after many cycles of replacing the input file with 18K new unique items. That is interesting because by default the intel framework doesn't expire items, so every time you replaced the file you were loading an additional 18k items.. If I get a chance I will resurrect the benchmarking code I was working on a while ago.. It would do things like create a table of hosts and add 10k,20k,30k,40k hosts to it and see what the memory usage was for each count to see what the real work data usage is for different sized data structures. I never tried it with the intel framework though. > We commented the conditional that invokes ?event Intel::new_item(item)? in base/frameworks/intel/main.bro to disable remote synchronization with the workers, and the huge VSize disappeared. > This makes more sense.. I don't think your memory usage has anything to do with the intel itself, I think the communication code is falling behind. 
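For context, a minimal sketch of how indicators are usually fed to the manager in a setup like the one described above; the file path is only an example. Each line read from the file becomes an Intel item on the manager, which is what triggers the Intel::new_item distribution to workers being discussed here, and since items are not expired by default, repeatedly replacing the file keeps adding to the set:

@load frameworks/intel/seen

# Example path; point this at wherever the intel feed is written.
redef Intel::read_files += {
    "/opt/bro/share/intel/indicators.dat",
};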
How many worker processes do you have configured? Are they running on the same box or separate boxes? If you load up 18k indicators but have 100 worker nodes, the bro manager needs to send out 1,800,000 events to all the workers. if the workers can't keep up, that data just ends up buffered in memory on the manager until it can be sent out. Jon: this is the use case I had for the Cluster::relay_rr, offloading the messaging load from the manager # On the manager, the new_item event indicates a new indicator that # has to be distributed. event Intel::new_item(item: Item) &priority=5 { Broker::publish(indicator_topic, Intel::insert_indicator, item); } so that should maybe be used there, instead of the manager having to do all the communication work. ? Justin Azoff From rahulbroids at gmail.com Mon May 21 23:51:44 2018 From: rahulbroids at gmail.com (rahul rakesh) Date: Tue, 22 May 2018 12:21:44 +0530 Subject: [Bro] HI Message-ID: Hi all, I am trying install bro ids from source on CentOS-7-x86_64-DVD-1804.iso and after executing ./configure and make files, got below error.How it could be resolved. Error: ==================================================== [ 92%] Building CXX object src/CMakeFiles/bro.dir/plugin/Plugin.cc.o [ 92%] Building C object src/CMakeFiles/bro.dir/nb_dns.c.o Linking CXX executable bro CMakeFiles/bro.dir/Func.cc.o: In function `__RegisterBif': /root/Documents/bro/conf/bro-2.5.3/src/plugin/Manager.h:453: undefined reference to `plugin::Bro_SMB::__bif_smb2_com_tree_disconnect_init(plugin::Plugin*)' /root/Documents/bro/conf/bro-2.5.3/src/plugin/Manager.h:453: undefined reference to `plugin::Bro_SMB::__bif_smb2_com_write_init(plugin::Plugin*)' analyzer/protocol/smb/libplugin-Bro-SMB.a(smb_pac.cc.o): In function `EventHandlerPtr::operator bool() const': /root/Documents/bro/conf/bro-2.5.3/src/EventHandler.h:105: undefined reference to `smb2_write_request' analyzer/protocol/smb/libplugin-Bro-SMB.a(smb_pac.cc.o): In function `binpac::SMB::SMB_Conn::proc_smb2_write_request(binpac::SMB::SMB2_Header*, binpac::SMB::SMB2_write_request*)': /root/Documents/bro/conf/bro-2.5.3/build/src/analyzer/protocol/smb/smb_pac.cc:1357: undefined reference to `BifEvent::generate_smb2_write_request(analyzer::Analyzer*, Connection*, Val*, Val*, unsigned long, unsigned long)' collect2: error: ld returned 1 exit status make[3]: *** [src/bro] Error 1 make[3]: Leaving directory `/root/Documents/bro/conf/bro-2.5.3/build' make[2]: *** [src/CMakeFiles/bro.dir/all] Error 2 make[2]: Leaving directory `/root/Documents/bro/conf/bro-2.5.3/build' make[1]: *** [all] Error 2 make[1]: Leaving directory `/root/Documents/bro/conf/bro-2.5.3/build' make: *** [all] Error 2 ==================================================== Thank you -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180522/8902b956/attachment.html From dopheide at gmail.com Wed May 23 13:54:53 2018 From: dopheide at gmail.com (Mike Dopheide) Date: Wed, 23 May 2018 15:54:53 -0500 Subject: [Bro] Broker coding question Message-ID: Maybe jumping the gun a little here, but I've started playing with the new Broker functions a bit and run into an issue that's probably just lack of understanding on my part. I've crafted a policy specific for this discussion. Basically, I'm trying to send data from the manager to my workers and it's not showing up as I'd expect. 
In this policy you'll see a couple different ways I thought were right based on the documentation and looking at other examples. One was using Broker::auto_publish so any call to my 'manager_to_workers' event should go out automatically. The other is a straight Broker::publish. When I run this and then check with "broctl print Dop::bourbon", all I ever see is Eagle Rare, none of the published events appear to make it into the set. Thanks, Dop -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180523/c29dfb58/attachment.html -------------- next part -------------- A non-text attachment was scrubbed... Name: broker-when.bro Type: application/octet-stream Size: 877 bytes Desc: not available Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180523/c29dfb58/attachment.obj From jsiwek at corelight.com Wed May 23 20:08:05 2018 From: jsiwek at corelight.com (Jon Siwek) Date: Wed, 23 May 2018 22:08:05 -0500 Subject: [Bro] Broker coding question In-Reply-To: References: Message-ID: On Wed, May 23, 2018 at 3:54 PM, Mike Dopheide wrote: > When I run this and then check with "broctl print Dop::bourbon", all I ever > see is Eagle Rare, none of the published events appear to make it into the > set. You're running into a longstanding inconsistency in the way Bro resolves event identifiers [1], which was also a source of confusion before Broker. A general rule to follow when using event names in Bro is: if you define it inside a module/namespace, then just always use that namespace scoping when referring to the event name, so try replacing all references to "manager_to_workers" in your script with "Dop::manager_to_workers". Another thing to note about that script is that a cluster will start worker nodes after the manager node, so I expect only the scheduled "Elijah Craig" event to consistently reach workers. Since all the other events happen at bro_init() time (or very close to it), the worker has not yet connected. You should also notice that dispatching via "event" will still call any local event handlers as it did before, but Broker::publish will not. - Jon [1] https://bro-tracker.atlassian.net/browse/BIT-71 From dopheide at gmail.com Thu May 24 07:47:58 2018 From: dopheide at gmail.com (Mike Dopheide) Date: Thu, 24 May 2018 09:47:58 -0500 Subject: [Bro] Broker coding question In-Reply-To: References: Message-ID: Ah, thanks. I knew I was missing something silly and I feel like others will run into this as well. What do you think about reflecting that in the Broker docs? I'm happy to make those changes and submit a pull request. -Dop On Wed, May 23, 2018 at 10:08 PM, Jon Siwek wrote: > On Wed, May 23, 2018 at 3:54 PM, Mike Dopheide wrote: > > When I run this and then check with "broctl print Dop::bourbon", all I > ever > > see is Eagle Rare, none of the published events appear to make it into > the > > set. > > You're running into a longstanding inconsistency in the way Bro > resolves event identifiers [1], which was also a source of confusion > before Broker. > > A general rule to follow when using event names in Bro is: if you > define it inside a module/namespace, then just always use that > namespace scoping when referring to the event name, so try replacing > all references to "manager_to_workers" in your script with > "Dop::manager_to_workers". 
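To illustrate that rule, here is a minimal sketch of the manager-to-worker pattern with the event referenced by its module-scoped name everywhere, including in Broker::auto_publish and in the handler. The worker topic, the 10-second delay, and the payload value are just example choices (the delay sidesteps the worker-startup race mentioned in the next message):

module Dop;

export {
    global bourbon: set[string] = set("Eagle Rare");
    global manager_to_workers: event(b: string);
}

# Handle the event under its fully scoped name on every node type.
event Dop::manager_to_workers(b: string)
    {
    add bourbon[b];
    }

@if ( Cluster::local_node_type() == Cluster::MANAGER )
event bro_init()
    {
    # Anything raised for this event on the manager goes out to workers.
    Broker::auto_publish(Cluster::worker_topic, Dop::manager_to_workers);

    # Delay the send so workers have a chance to connect first.
    schedule 10sec { Dop::manager_to_workers("Elijah Craig") };
    }
@endif

With something along these lines, "broctl print Dop::bourbon" on a worker should eventually show both entries.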
> > Another thing to note about that script is that a cluster will start > worker nodes after the manager node, so I expect only the scheduled > "Elijah Craig" event to consistently reach workers. Since all the > other events happen at bro_init() time (or very close to it), the > worker has not yet connected. > > You should also notice that dispatching via "event" will still call > any local event handlers as it did before, but Broker::publish will > not. > > - Jon > > [1] https://bro-tracker.atlassian.net/browse/BIT-71 > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180524/48308090/attachment.html From jsiwek at corelight.com Thu May 24 08:15:29 2018 From: jsiwek at corelight.com (Jon Siwek) Date: Thu, 24 May 2018 10:15:29 -0500 Subject: [Bro] Broker coding question In-Reply-To: References: Message-ID: Yeah, that will be good if you can suggest a place where it would have helped (others have indeed run into it already). Note that it's not just Broker / remote-communication that needs to obey this event naming restriction, it's event handling/dispatching in general. - Jon On Thu, May 24, 2018 at 9:47 AM, Mike Dopheide wrote: > Ah, thanks. I knew I was missing something silly and I feel like others > will run into this as well. What do you think about reflecting that in the > Broker docs? I'm happy to make those changes and submit a pull request. > > -Dop > > On Wed, May 23, 2018 at 10:08 PM, Jon Siwek wrote: >> >> On Wed, May 23, 2018 at 3:54 PM, Mike Dopheide wrote: >> > When I run this and then check with "broctl print Dop::bourbon", all I >> > ever >> > see is Eagle Rare, none of the published events appear to make it into >> > the >> > set. >> >> You're running into a longstanding inconsistency in the way Bro >> resolves event identifiers [1], which was also a source of confusion >> before Broker. >> >> A general rule to follow when using event names in Bro is: if you >> define it inside a module/namespace, then just always use that >> namespace scoping when referring to the event name, so try replacing >> all references to "manager_to_workers" in your script with >> "Dop::manager_to_workers". >> >> Another thing to note about that script is that a cluster will start >> worker nodes after the manager node, so I expect only the scheduled >> "Elijah Craig" event to consistently reach workers. Since all the >> other events happen at bro_init() time (or very close to it), the >> worker has not yet connected. >> >> You should also notice that dispatching via "event" will still call >> any local event handlers as it did before, but Broker::publish will >> not. >> >> - Jon >> >> [1] https://bro-tracker.atlassian.net/browse/BIT-71 > > From ossamabzos at gmail.com Thu May 24 08:27:39 2018 From: ossamabzos at gmail.com (bz Os) Date: Thu, 24 May 2018 15:27:39 +0000 Subject: [Bro] bro not alert nessus attack Message-ID: hello evry one i loaded all default scrpt ,as results want to use bro in detection mode ,when i test nessus against bro i had any alert in the notice.log -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180524/b8818171/attachment.html From fatema.bannatwala at gmail.com Thu May 24 12:52:00 2018 From: fatema.bannatwala at gmail.com (fatema bannatwala) Date: Thu, 24 May 2018 15:52:00 -0400 Subject: [Bro] bro not alert nessus attack Message-ID: Hi Bz Oz, It depends on what you are testing with nessus and how are you testing it. Bro should be able to detect scanning, ssh-bruteforce, sql injection, htp-bruteforce etc. by default. Hence, if you are scanning the systems from your nessus machine, and if Bro is able to sniff that traffic, then scanning get reported in notice.log file. It might not able to detect all the attacks that you launch from nessus, unless you have custom scripts/plugins installed in Bro. Fatema. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180524/312a7b90/attachment.html From philosnef at gmail.com Fri May 25 09:35:41 2018 From: philosnef at gmail.com (erik clark) Date: Fri, 25 May 2018 12:35:41 -0400 Subject: [Bro] issues with binpac and bro 253 Message-ID: {"ts":1526476092.155226,"uid":"CLBfQGYsYuPPYghW6","id.orig_h":"10.171.248.5","id.orig_p":59860,"id.resp_h":"10.171.3.35","id.resp_p":5901,"proto":"tcp","analyzer":"RFB","failure_reason":"Binpac exception: binpac exception: out_of_bound: RFBVNCAuthenticationResponse:response: 16 > 4"} {"ts":1526902777.802284,"uid":"CRbgOr2vlXZquGHbC4","id.orig_h":"10.171.253.5","id.orig_p":51389,"id.resp_h":"209.208.26.64","id.resp_p":1883,"proto":"tcp","analyzer":"MQTT","failure_reason":"Binpac exception: binpac exception: out_of_bound: MQTT_string:str: 258 > 2"} {"ts":1526385277.166233,"uid":"Cp5ewt2gFK34Hk2vSg","id.orig_h":"128.154.164.150","id.orig_p":59357,"id.resp_h":"10.171.253.18","id.resp_p":22,"proto":"tcp","analyzer":"SSH","failure_reason":"Binpac exception: binpac exception: out_of_bound: SSH2_KEXINIT: -82 > 30"} {"ts":1526385276.305273,"uid":"CEv2fC11PlksxaS5Tf","id.orig_h":"128.154.164.150","id.orig_p":59356,"id.resp_h":"10.171.253.15","id.resp_p":22,"proto":"tcp","analyzer":"SSH","failure_reason":"Binpac exception: binpac exception: out_of_bound: SSH2_KEXINIT:cookie: 16 > 4"} {"ts":1526385714.957199,"uid":"CKBKhA2vqPokc34a43","id.orig_h":"128.154.164.150","id.orig_p":59463,"id.resp_h":"10.171.253.6","id.resp_p":22,"proto":"tcp","analyzer":"SSH","failure_reason":"Binpac exception: binpac exception: out_of_bound: SSH2_KEXINIT: -154 > 30"} The ssh analyzer and rfb analyzer are both throwing binpac exceptions; Also, so is the newly converted MQTT plugin that Seth built. Why are these failing? I do not have pcap. I would like to know why the ssh analyzer specifically would be failing. This is a new install of bro and we do not have an old version on this network to compare dpd logs on. Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180525/18a014d2/attachment.html From jdopheid at illinois.edu Fri May 25 10:49:32 2018 From: jdopheid at illinois.edu (Dopheide, Jeannette M) Date: Fri, 25 May 2018 17:49:32 +0000 Subject: [Bro] Bro Blog: Broker is Coming -- Persistent Stores Message-ID: <7EFD7D614A2BB84ABEA19B2CEDD246583B6D766D@CITESMBX5.ad.uillinois.edu> Hi All, We have a new blog post about Broker we'd like to share. It's written by guest blogger Mike Dopheide, thanks Dop! 
You can find it here: http://blog.bro.org/2018/05/broker-is-coming-persistent-stores.html ------ Jeannette M. Dopheide Sr. Education, Outreach and Training Coordinator National Center for Supercomputing Applications University of Illinois at Urbana-Champaign -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180525/0b035099/attachment.html From jsiwek at corelight.com Fri May 25 13:26:09 2018 From: jsiwek at corelight.com (Jon Siwek) Date: Fri, 25 May 2018 15:26:09 -0500 Subject: [Bro] issues with binpac and bro 253 In-Reply-To: References: Message-ID: On Fri, May 25, 2018 at 11:35 AM, erik clark wrote: > {"ts":1526476092.155226,"uid":"CLBfQGYsYuPPYghW6","id.orig_h":"10.171.248.5","id.orig_p":59860,"id.resp_h":"10.171.3.35","id.resp_p":5901,"proto":"tcp","analyzer":"RFB","failure_reason":"Binpac > exception: binpac exception: out_of_bound: > RFBVNCAuthenticationResponse:response: 16 > 4"} > {"ts":1526902777.802284,"uid":"CRbgOr2vlXZquGHbC4","id.orig_h":"10.171.253.5","id.orig_p":51389,"id.resp_h":"209.208.26.64","id.resp_p":1883,"proto":"tcp","analyzer":"MQTT","failure_reason":"Binpac > exception: binpac exception: out_of_bound: MQTT_string:str: 258 > 2"} > {"ts":1526385277.166233,"uid":"Cp5ewt2gFK34Hk2vSg","id.orig_h":"128.154.164.150","id.orig_p":59357,"id.resp_h":"10.171.253.18","id.resp_p":22,"proto":"tcp","analyzer":"SSH","failure_reason":"Binpac > exception: binpac exception: out_of_bound: SSH2_KEXINIT: -82 > 30"} > {"ts":1526385276.305273,"uid":"CEv2fC11PlksxaS5Tf","id.orig_h":"128.154.164.150","id.orig_p":59356,"id.resp_h":"10.171.253.15","id.resp_p":22,"proto":"tcp","analyzer":"SSH","failure_reason":"Binpac > exception: binpac exception: out_of_bound: SSH2_KEXINIT:cookie: 16 > 4"} > {"ts":1526385714.957199,"uid":"CKBKhA2vqPokc34a43","id.orig_h":"128.154.164.150","id.orig_p":59463,"id.resp_h":"10.171.253.6","id.resp_p":22,"proto":"tcp","analyzer":"SSH","failure_reason":"Binpac > exception: binpac exception: out_of_bound: SSH2_KEXINIT: -154 > 30"} > > > The ssh analyzer and rfb analyzer are both throwing binpac exceptions; Also, > so is the newly converted MQTT plugin that Seth built. Why are these > failing? I do not have pcap. I would like to know why the ssh analyzer > specifically would be failing. This is a new install of bro and we do not > have an old version on this network to compare dpd logs on. Thanks! The general reason for those would be that the analyzer/parser was given input that does not match its protocol definition. It's either legitimately failing to parse malformed traffic or the analyzer has not defined the protocol specification in a way that matches the actual implementation/spec. It's difficult to say which case it is without a pcap, but it's also not necessarily alarming to see these unless there's an overwhelming amount of it or you had previous logs to compare with and suddenly see a big difference. - Jon From gary.w.weasel2.civ at mail.mil Tue May 29 07:55:38 2018 From: gary.w.weasel2.civ at mail.mil (Weasel, Gary W Jr CIV DISA RE (US)) Date: Tue, 29 May 2018 14:55:38 +0000 Subject: [Bro] File Extraction Gaps Message-ID: <0C34D9CA9B9DBB45B1C51871C177B4B286D3BF38@UMECHPA68.easf.csd.disa.mil> Hey Bro List, So I seem to be running into a problem with file extraction (or perhaps just file analysis in general). 
I have a basic extraction script running pulling out EXEs that are seen coming across HTTP and for some reason, there are consistently a large number of file gaps in the file it sees. I have a custom log outputting the fuid for any file_gap event (https://www.bro.org/sphinx/scripts/base/bif/event.bif.bro.html#id-file_gap), and I seem to get a wildly varying number of file gap events for a given file. In my example, I am curling an exe to a server, where that traffic is spanned to my Bro sensor (the exe in question is 1 MB in size). If I curl repeatedly, Bro sees all the files, but the number of file gap events varies wildly (anywhere from 2 or 3 to over 100). The part that gets me, if I tcpdump alongside Bro, and pull the files out of pcap, they're all intact and hash correctly, so I know I'm getting all the packets on wire. Bro and PF_RING report 0 packet loss. Does anyone have anything that could shed light on what's going on here? Thanks, - Gary -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5577 bytes Desc: not available Url : http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180529/e3be3b4c/attachment.bin From greg.grasmehr at caltech.edu Tue May 29 09:33:53 2018 From: greg.grasmehr at caltech.edu (Greg Grasmehr) Date: Tue, 29 May 2018 09:33:53 -0700 Subject: [Bro] Myricom SNFv3 and CentOS 7.5 In-Reply-To: <20180512145359.daswjfvtamjztskp@dakine> References: <20180512145359.daswjfvtamjztskp@dakine> Message-ID: <20180529163353.647bep7uga77pft5@dakine> Hello, Was just notified that CSPi released the updated version of SNF+ to address the current kernel compile issue. Greg From jsiwek at corelight.com Tue May 29 09:51:54 2018 From: jsiwek at corelight.com (Jon Siwek) Date: Tue, 29 May 2018 11:51:54 -0500 Subject: [Bro] File Extraction Gaps In-Reply-To: <0C34D9CA9B9DBB45B1C51871C177B4B286D3BF38@UMECHPA68.easf.csd.disa.mil> References: <0C34D9CA9B9DBB45B1C51871C177B4B286D3BF38@UMECHPA68.easf.csd.disa.mil> Message-ID: On Tue, May 29, 2018 at 9:55 AM, Weasel, Gary W Jr CIV DISA RE (US) wrote: > In my example, I am curling an exe to a server, where that traffic is spanned to my Bro sensor (the exe in question is 1 MB in size). If I curl repeatedly, Bro sees all the files, but the number of file gap events varies wildly (anywhere from 2 or 3 to over 100). The part that gets me, if I tcpdump alongside Bro, and pull the files out of pcap, they're all intact and hash correctly, so I know I'm getting all the packets on wire. Bro and PF_RING report 0 packet loss. Seems either Bro behaves differently in offline vs. live usage (not what I'd expect to see in this case, though I can't rule it out for sure) or it's not actually seeing the same packet input in the live deployment as it was in offline usage. Maybe to pursue whether the later is true you could have Bro write out the packets that it actually saw with the `-w ` command line flag and examine the resulting pcap to see if it looks like you expect. - Jon From gary.w.weasel2.civ at mail.mil Tue May 29 10:22:01 2018 From: gary.w.weasel2.civ at mail.mil (Weasel, Gary W Jr CIV DISA RE (US)) Date: Tue, 29 May 2018 17:22:01 +0000 Subject: [Bro] [Non-DoD Source] Re: File Extraction Gaps In-Reply-To: References: <0C34D9CA9B9DBB45B1C51871C177B4B286D3BF38@UMECHPA68.easf.csd.disa.mil> Message-ID: <0C34D9CA9B9DBB45B1C51871C177B4B286D3D768@UMECHPA68.easf.csd.disa.mil> So I just tested running bro in command line mode (i.e. 
not using broctl), fed it my usual policy files and dumped to pcap. The extracted_files folder showed the same story, lots of file gaps, all different hashes for the same file. When I loaded up the pcap in wireshark and extracted all the files, all files hash correctly. So this tells me, at some point Bro is getting all the data. Something is just messing up for some reason when it comes to the file analysis and/or file extraction modules. -----Original Message----- From: Jon Siwek Sent: Tuesday, May 29, 2018 12:52 PM To: Weasel, Gary W Jr CIV DISA RE (US) Cc: bro at bro.org Subject: [Non-DoD Source] Re: [Bro] File Extraction Gaps On Tue, May 29, 2018 at 9:55 AM, Weasel, Gary W Jr CIV DISA RE (US) wrote: > In my example, I am curling an exe to a server, where that traffic is spanned to my Bro sensor (the exe in question is 1 MB in size). If I curl repeatedly, Bro sees all the files, but the number of file gap events varies wildly (anywhere from 2 or 3 to over 100). The part that gets me, if I tcpdump alongside Bro, and pull the files out of pcap, they're all intact and hash correctly, so I know I'm getting all the packets on wire. Bro and PF_RING report 0 packet loss. Seems either Bro behaves differently in offline vs. live usage (not what I'd expect to see in this case, though I can't rule it out for sure) or it's not actually seeing the same packet input in the live deployment as it was in offline usage. Maybe to pursue whether the later is true you could have Bro write out the packets that it actually saw with the `-w ` command line flag and examine the resulting pcap to see if it looks like you expect. - Jon From jsiwek at corelight.com Tue May 29 10:57:25 2018 From: jsiwek at corelight.com (Jon Siwek) Date: Tue, 29 May 2018 12:57:25 -0500 Subject: [Bro] [Non-DoD Source] Re: File Extraction Gaps In-Reply-To: <0C34D9CA9B9DBB45B1C51871C177B4B286D3D768@UMECHPA68.easf.csd.disa.mil> References: <0C34D9CA9B9DBB45B1C51871C177B4B286D3BF38@UMECHPA68.easf.csd.disa.mil> <0C34D9CA9B9DBB45B1C51871C177B4B286D3D768@UMECHPA68.easf.csd.disa.mil> Message-ID: Would be most helpful to get a pcap, scripts, and command that can reproduce what you see. Or else steps one could follow to try to reproduce a pcap that may exhibit the same problem. Otherwise, only guess I have is to check for anything unusual at the TCP-layer -- the file analysis/extraction is dependent on the TCP reassembly process, so if the sequence of events at the TCP level lead Bro to believe it missed part of the TCP stream, that also manifests as a gap event in any associated file analysis. You could go as far as looking at the contents/ordering of things in a tcp_packet event handler as a sanity check, though there may also be residuals in weird.log that you could simply check (I don't recall particular names to look for off the top of my head). - Jon On Tue, May 29, 2018 at 12:22 PM, Weasel, Gary W Jr CIV DISA RE (US) wrote: > So I just tested running bro in command line mode (i.e. not using broctl), fed it my usual policy files and dumped to pcap. > > The extracted_files folder showed the same story, lots of file gaps, all different hashes for the same file. > > When I loaded up the pcap in wireshark and extracted all the files, all files hash correctly. > > So this tells me, at some point Bro is getting all the data. Something is just messing up for some reason when it comes to the file analysis and/or file extraction modules. 
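As a rough illustration of the tcp_packet sanity check suggested above, a handler along these lines would print the TCP sequencing Bro actually sees for the test transfer. The port filter is just an assumption based on the HTTP curl test described earlier, and handling tcp_packet is expensive, so this is only meant for a short debugging run:

event tcp_packet(c: connection, is_orig: bool, flags: string, seq: count,
                 ack: count, len: count, payload: string)
    {
    # Only look at the HTTP test traffic; adjust the port as needed.
    if ( c$id$resp_p == 80/tcp && len > 0 )
        print fmt("%s %s seq=%d ack=%d len=%d flags=%s",
                  c$uid, is_orig ? "orig" : "resp", seq, ack, len, flags);
    }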
> > -----Original Message----- > From: Jon Siwek > Sent: Tuesday, May 29, 2018 12:52 PM > To: Weasel, Gary W Jr CIV DISA RE (US) > Cc: bro at bro.org > Subject: [Non-DoD Source] Re: [Bro] File Extraction Gaps > > On Tue, May 29, 2018 at 9:55 AM, Weasel, Gary W Jr CIV DISA RE (US) > wrote: > >> In my example, I am curling an exe to a server, where that traffic is spanned to my Bro sensor (the exe in question is 1 MB in size). If I curl repeatedly, Bro sees all the files, but the number of file gap events varies wildly (anywhere from 2 or 3 to over 100). The part that gets me, if I tcpdump alongside Bro, and pull the files out of pcap, they're all intact and hash correctly, so I know I'm getting all the packets on wire. Bro and PF_RING report 0 packet loss. > > Seems either Bro behaves differently in offline vs. live usage (not > what I'd expect to see in this case, though I can't rule it out for > sure) or it's not actually seeing the same packet input in the live > deployment as it was in offline usage. Maybe to pursue whether the > later is true you could have Bro write out the packets that it > actually saw with the `-w ` command line flag and examine > the resulting pcap to see if it looks like you expect. > > - Jon > From r.bortolameotti at utwente.nl Tue May 29 11:57:05 2018 From: r.bortolameotti at utwente.nl (BortolameottiR) Date: Tue, 29 May 2018 20:57:05 +0200 Subject: [Bro] [Non-DoD Source] Re: File Extraction Gaps In-Reply-To: References: <0C34D9CA9B9DBB45B1C51871C177B4B286D3BF38@UMECHPA68.easf.csd.disa.mil> <0C34D9CA9B9DBB45B1C51871C177B4B286D3D768@UMECHPA68.easf.csd.disa.mil> Message-ID: <492a9fbe-6409-543a-95ab-840baa653af6@utwente.nl> Bro should write in weird.log something like : inflate_failed. This error should represent (tell me if I am wrong) that bro failed to decompress (inflate/deflate of http) the file correctly. By cross-checking the connection ids you should be able to verify if that was the problem. It happened to me in different settings (with a manipulated pcap) that file reconstruction did not work properly. I was not able to fix it though. R. On 05/29/2018 07:57 PM, Jon Siwek wrote: > Would be most helpful to get a pcap, scripts, and command that can > reproduce what you see. Or else steps one could follow to try to > reproduce a pcap that may exhibit the same problem. Otherwise, only > guess I have is to check for anything unusual at the TCP-layer -- the > file analysis/extraction is dependent on the TCP reassembly process, > so if the sequence of events at the TCP level lead Bro to believe it > missed part of the TCP stream, that also manifests as a gap event in > any associated file analysis. You could go as far as looking at the > contents/ordering of things in a tcp_packet event handler as a sanity > check, though there may also be residuals in weird.log that you could > simply check (I don't recall particular names to look for off the top > of my head). > > - Jon > > On Tue, May 29, 2018 at 12:22 PM, Weasel, Gary W Jr CIV DISA RE (US) > wrote: >> So I just tested running bro in command line mode (i.e. not using broctl), fed it my usual policy files and dumped to pcap. >> >> The extracted_files folder showed the same story, lots of file gaps, all different hashes for the same file. >> >> When I loaded up the pcap in wireshark and extracted all the files, all files hash correctly. >> >> So this tells me, at some point Bro is getting all the data. 
Something is just messing up for some reason when it comes to the file analysis and/or file extraction modules. >> >> -----Original Message----- >> From: Jon Siwek >> Sent: Tuesday, May 29, 2018 12:52 PM >> To: Weasel, Gary W Jr CIV DISA RE (US) >> Cc: bro at bro.org >> Subject: [Non-DoD Source] Re: [Bro] File Extraction Gaps >> >> On Tue, May 29, 2018 at 9:55 AM, Weasel, Gary W Jr CIV DISA RE (US) >> wrote: >> >>> In my example, I am curling an exe to a server, where that traffic is spanned to my Bro sensor (the exe in question is 1 MB in size). If I curl repeatedly, Bro sees all the files, but the number of file gap events varies wildly (anywhere from 2 or 3 to over 100). The part that gets me, if I tcpdump alongside Bro, and pull the files out of pcap, they're all intact and hash correctly, so I know I'm getting all the packets on wire. Bro and PF_RING report 0 packet loss. >> Seems either Bro behaves differently in offline vs. live usage (not >> what I'd expect to see in this case, though I can't rule it out for >> sure) or it's not actually seeing the same packet input in the live >> deployment as it was in offline usage. Maybe to pursue whether the >> later is true you could have Bro write out the packets that it >> actually saw with the `-w ` command line flag and examine >> the resulting pcap to see if it looks like you expect. >> >> - Jon >> > _______________________________________________ > Bro mailing list > bro at bro-ids.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro From dwdixon at umich.edu Tue May 29 20:10:38 2018 From: dwdixon at umich.edu (Drew Dixon) Date: Tue, 29 May 2018 23:10:38 -0400 Subject: [Bro] Myricom SNFv3 and CentOS 7.5 In-Reply-To: <20180529163353.647bep7uga77pft5@dakine> References: <20180512145359.daswjfvtamjztskp@dakine> <20180529163353.647bep7uga77pft5@dakine> Message-ID: I'm not seeing an update that has been published for download on their website? Still the release(s) from this past January- SNF v3.0.13 & SNF v5.3.2.1 https://www.cspi.com/ethernet-products/support/downloads/ Am I missing something? -Drew On Tue, May 29, 2018 at 12:33 PM Greg Grasmehr wrote: > Hello, > > Was just notified that CSPi released the updated version of SNF+ to > address the current kernel compile issue. > > Greg > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180529/00532b42/attachment.html From maerzsa at ornl.gov Wed May 30 05:49:10 2018 From: maerzsa at ornl.gov (Maerz, Stefan A.) Date: Wed, 30 May 2018 12:49:10 +0000 Subject: [Bro] Myricom SNFv3 and CentOS 7.5 In-Reply-To: References: <20180512145359.daswjfvtamjztskp@dakine> <20180529163353.647bep7uga77pft5@dakine> Message-ID: <8C4A19C9-C26A-494D-B504-D727AB3FDC71@ornl.gov> Hi Drew, I noticed this discrepancy yesterday. If you register in the support portal, you can get it from there. Join the ?Sniffer v3 - LANai? group and they posted it there. Otherwise, I suppose you?ll have to wait for CSPi to update their website or contact support. Best Regards, -Stefan -- Stefan Maerz linkedin.com/in/stefanmaerz On May 29, 2018, at 11:10 PM, Drew Dixon > wrote: I'm not seeing an update that has been published for download on their website? Still the release(s) from this past January- SNF v3.0.13 & SNF v5.3.2.1 https://www.cspi.com/ethernet-products/support/downloads/ Am I missing something? 
-Drew On Tue, May 29, 2018 at 12:33 PM Greg Grasmehr > wrote: Hello, Was just notified that CSPi released the updated version of SNF+ to address the current kernel compile issue. Greg _______________________________________________ Bro mailing list bro at bro-ids.org http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/bro -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180530/df88b6e8/attachment.html From gary.w.weasel2.civ at mail.mil Wed May 30 07:43:33 2018 From: gary.w.weasel2.civ at mail.mil (Weasel, Gary W Jr CIV DISA RE (US)) Date: Wed, 30 May 2018 14:43:33 +0000 Subject: [Bro] [Non-DoD Source] Re: File Extraction Gaps In-Reply-To: References: <0C34D9CA9B9DBB45B1C51871C177B4B286D3BF38@UMECHPA68.easf.csd.disa.mil> <0C34D9CA9B9DBB45B1C51871C177B4B286D3D768@UMECHPA68.easf.csd.disa.mil> Message-ID: <0C34D9CA9B9DBB45B1C51871C177B4B286D4281F@UMECHPA68.easf.csd.disa.mil> So here's what I did for reproducibility sake. I installed CentOS 7 (1804) minimal install, and manually compiled and installed bro-2.5.3, all default settings. I then made the following policy changes listed below. Interestingly, I don't get as many corrupt files as I was previously seeing (only 25-40% instead of 80-90%), but the number of file gaps are still pretty large (well over 100). Makes me wonder if this is some sort of bug dependent on performance? Musing out loud with that. To /bro/share/bro/site/local.bro I added at the end (does not include horizontal rules made here) --- local.bro ----------------------------------------------- @load policy/custom/extract-files.bro @load policy/custom/file-gap.bro --------------------------------------------------------------- Then I created the following files --- policy/custom/file-gap.bro -------------- module FileGap; export { redef enum Log::ID += { LOG }; type Info: record { fuid: string &log; offset: count &log; len: count &log; }; global log_file_gap: event(rec: Info); } event bro_init() { Log::create_stream(FileGap::LOG, [$columns=Info, $ev=log_file_gap]); } event file_gap(f: fa_file, offset: count, len: count) { local rec: FileGap::Info = [$fuid=f$id, $offset=offset, $len=len]; Log::write(FileGap::LOG, rec); } ------------------------------------------------------ --- policy/custom/extract-files.bro -------- @load base/files/extract global ext_map: table[string] of string = { ["application/x-dosexec"] = "exe", } &default =""; event file_sniff(f: fa_file, meta: fa_metadata) { local ext = ""; if (!meta?$mime_type || meta$mime_type !in ext_map) return; else { ext = ext_map[meta$mime_type]; local fname = fmt("%s-%s.%s", f$source, f$id, ext); Files::add_analyzer(f, Files::ANALYZER_EXTRACT, [$extract_filename=fname]); } } ------------------------------------------------------ -----Original Message----- From: Jon Siwek Sent: Tuesday, May 29, 2018 1:57 PM To: Weasel, Gary W Jr CIV DISA RE (US) Cc: bro at bro.org Subject: Re: [Non-DoD Source] Re: [Bro] File Extraction Gaps Would be most helpful to get a pcap, scripts, and command that can reproduce what you see. Or else steps one could follow to try to reproduce a pcap that may exhibit the same problem. Otherwise, only guess I have is to check for anything unusual at the TCP-layer -- the file analysis/extraction is dependent on the TCP reassembly process, so if the sequence of events at the TCP level lead Bro to believe it missed part of the TCP stream, that also manifests as a gap event in any associated file analysis. 
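One way to see whether those file gaps line up with TCP-level gaps on the same connection is to handle both events side by side. A small correlation sketch, with the prints only for illustration:

event content_gap(c: connection, is_orig: bool, seq: count, length: count)
    {
    print fmt("tcp gap on %s is_orig=%s seq=%d len=%d", c$uid, is_orig, seq, length);
    }

event file_gap(f: fa_file, offset: count, len: count)
    {
    if ( f?$conns )
        for ( cid in f$conns )
            print fmt("file gap in %s at offset %d (len %d), conn %s",
                      f$id, offset, len, f$conns[cid]$uid);
    }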
You could go as far as looking at the contents/ordering of things in a tcp_packet event handler as a sanity check, though there may also be residuals in weird.log that you could simply check (I don't recall particular names to look for off the top of my head). - Jon On Tue, May 29, 2018 at 12:22 PM, Weasel, Gary W Jr CIV DISA RE (US) wrote: > So I just tested running bro in command line mode (i.e. not using broctl), fed it my usual policy files and dumped to pcap. > > The extracted_files folder showed the same story, lots of file gaps, all different hashes for the same file. > > When I loaded up the pcap in wireshark and extracted all the files, all files hash correctly. > > So this tells me, at some point Bro is getting all the data. Something is just messing up for some reason when it comes to the file analysis and/or file extraction modules. > > -----Original Message----- > From: Jon Siwek > Sent: Tuesday, May 29, 2018 12:52 PM > To: Weasel, Gary W Jr CIV DISA RE (US) > Cc: bro at bro.org > Subject: [Non-DoD Source] Re: [Bro] File Extraction Gaps > > On Tue, May 29, 2018 at 9:55 AM, Weasel, Gary W Jr CIV DISA RE (US) > wrote: > >> In my example, I am curling an exe to a server, where that traffic is spanned to my Bro sensor (the exe in question is 1 MB in size). If I curl repeatedly, Bro sees all the files, but the number of file gap events varies wildly (anywhere from 2 or 3 to over 100). The part that gets me, if I tcpdump alongside Bro, and pull the files out of pcap, they're all intact and hash correctly, so I know I'm getting all the packets on wire. Bro and PF_RING report 0 packet loss. > > Seems either Bro behaves differently in offline vs. live usage (not > what I'd expect to see in this case, though I can't rule it out for > sure) or it's not actually seeing the same packet input in the live > deployment as it was in offline usage. Maybe to pursue whether the > later is true you could have Bro write out the packets that it > actually saw with the `-w ` command line flag and examine > the resulting pcap to see if it looks like you expect. > > - Jon > From robin at icir.org Wed May 30 15:26:51 2018 From: robin at icir.org (Robin Sommer) Date: Wed, 30 May 2018 15:26:51 -0700 Subject: [Bro] BroCon 2018: Registration is open Message-ID: <20180530222651.GH1911@icir.org> Dear Bro Community, We're excited to announce that registration for BroCon 2018 is now open at https://www.brocon2018.com . BroCon 2018 will take place October 10-12, in Arlington, VA. It offers the Bro community a chance to meet face-to-face, share new ideas and developments, and better understand and secure their networks. The conference is composed of presentations from members of the community and the Bro development team. We'll post the Call for Presentations shortly. If your organization is interested in supporting BroCon, please check out the sponsorship opportunities. Robin -- Robin Sommer * ICSI/LBNL * robin at icir.org * www.icir.org/robin From mrkortek at gmail.com Thu May 31 06:35:35 2018 From: mrkortek at gmail.com (Michael Kortekaas) Date: Thu, 31 May 2018 09:35:35 -0400 Subject: [Bro] Best Way to Dynamically Update Signatures? Message-ID: Bro Community, I currently have a Bro script that downloads and dynamically updates Intel data from a central source. It is a scheduled event running in Bro so it doesn't require a check/restart. 
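For anyone wanting to copy that approach, a generic sketch of the recurring-schedule pattern described above; the module name, interval, and handler body are placeholders for whatever refresh logic is used (for example, kicking off an Input framework read):

module FeedRefresh;

export {
    const refresh_interval = 15min &redef;
    global do_refresh: event();
}

event FeedRefresh::do_refresh()
    {
    # ... fetch and apply the updated feed here ...

    # Re-schedule so the refresh keeps running without a restart.
    schedule refresh_interval { FeedRefresh::do_refresh() };
    }

event bro_init()
    {
    schedule refresh_interval { FeedRefresh::do_refresh() };
    }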
I need to create a similar type mechanism for signatures but the only documentation I can find seems to indicate that I need to use @load-sigs and have a file available at startup. However I do need the ability to update signatures on a fairly frequent basis. Although I opted not to use Intel data files which have the feature of reloading when modified, I am wondering if a similar feature exists for .sig files. Signatures appear to be the type of data that would be stored in a data structure rather than compiled as code. Is there a corresponding API for accessing (add, update, remove) the signature data from a Bro script? Another potential is to write a Python script to update the signature file and have it trigger a reload of Bro. Rather than forcing Bro to shut down and restart for a signature file update at an arbitrary time that could interfere with normal processing, is there a regular event/operation where this reload could/should be triggered for minimal impact? Or, is there another mechanism for signature updates that I have not yet considered? Any related issues or considerations regarding Bro clusters would be useful to know as well. Any help or insight into how best to dynamically update signatures would be much appreciated. Regards, Michael Kortekaas -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180531/ebc6b7c5/attachment.html From jeffrey.s.poore at bankofamerica.com Thu May 31 11:06:35 2018 From: jeffrey.s.poore at bankofamerica.com (Poore, Jeffrey S) Date: Thu, 31 May 2018 18:06:35 +0000 Subject: [Bro] bro cluster in containers Message-ID: Has anyone implemented a bro cluster in containers? The reason I ask is that we are looking to build a cluster on top of Mesos / DC/OS so that we can have high availability as we are processing tons of traffic, and it is just easier to deploy things on top of it if we do it in containers. I understand how to do most of it, but the configuration so that the cluster master knows about all the other instances is kind of my sticking point. Is there a way to utilize a tool like zookeeper so that it can dynamically manage the instances in case one of them crashes and then gets spun up on a different host? ---------------------------------------------------------------------- This message, and any attachments, is for the intended recipient(s) only, may contain information that is privileged, confidential and/or proprietary and subject to important terms and conditions available at http://www.bankofamerica.com/emaildisclaimer. If you are not the intended recipient, please delete this message. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/bro/attachments/20180531/179440a3/attachment.html From jazoff at illinois.edu Thu May 31 14:35:25 2018 From: jazoff at illinois.edu (Azoff, Justin S) Date: Thu, 31 May 2018 21:35:25 +0000 Subject: [Bro] bro cluster in containers In-Reply-To: References: Message-ID: <5F9BDAAA-DC07-4B20-8A6E-BC8DFB729017@illinois.edu> > On May 31, 2018, at 2:06 PM, Poore, Jeffrey S wrote: > > Has anyone implemented a bro cluster in containers? I've been meaning to try to build this out using k8s, just haven't had time. > The reason I ask is that we are looking to build a cluster on top of Mesos / DC/OS so that we can have high availability as we are processing tons of traffic, and it is just easier to deploy things on top of it if we do it in containers. 
To really be useful you also need to automate the configuration of the tapagg layer.

> I understand how to do most of it, but the configuration so that the cluster master knows about all the other instances is kind of my sticking point.

Right now it would break because of how this is written:

event Cluster::hello(name: string, id: string) &priority=10
	{
	if ( name !in nodes )
		{
		Reporter::error(fmt("Got Cluster::hello msg from unexpected node: %s", name));
		return;
		}

	local n = nodes[name];
	if ( n?$id )
		{
		if ( n$id != id )
			Reporter::error(fmt("Got Cluster::hello msg from duplicate node:%s", name));
		}
	else
		event Cluster::node_up(name, id);

	n$id = id;
	Cluster::log(fmt("got hello from %s (%s)", name, id));

	if ( n$node_type == WORKER )
		{
		add active_worker_ids[id];
		worker_count = |active_worker_ids|;
		}
	}

but I'm sure you could have a variation of that function that doesn't care if the node is unexpected.

> Is there a way to utilize a tool like zookeeper so that it can dynamically manage the instances in case one of them crashes and then gets spun up on a different host?

k8s and mesos should just do that for you, but what environment are you running in where that would be useful?

The deployment I was thinking of would involve a k8s operator to manage the Arista so as a cluster is created or scaled up and down it would automatically manage the tool port etherchannel groups for me. Without that it wouldn't be useful at all.

- Justin Azoff
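A rough sketch of the kind of variation mentioned above, which records nodes that aren't in the static layout instead of treating them as fatal. This is only a starting point: the stock priority-10 handler quoted above will still complain about anything missing from Cluster::nodes, so in practice the orchestrator would more likely generate cluster-layout.bro itself as instances come and go.

module DynCluster;

export {
    # Nodes that checked in but aren't in the static cluster layout:
    # node name -> Broker endpoint id.
    global dynamic_workers: table[string] of string;
}

# Runs before the stock handler and keeps its own record of unknown nodes.
event Cluster::hello(name: string, id: string) &priority=20
    {
    if ( name !in Cluster::nodes )
        dynamic_workers[name] = id;
    }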