[Zeek] zbalance_ipc and Zeek

C Blair mnmblair at hotmail.com
Sat Mar 23 05:58:24 PDT 2019


Hi Bill,
I just wanted to follow up. I have had success after disabling hyper-threading. I have also isolated the cores for the queue consumers. Zeek and Snort now reliably process over 2Gbps simultaneously with zbalance_ipc. The traffic is a vanilla enterprise profile generated by a traffic generator. I will look into tuning the Zeek analyzers. Thank you for the assist.
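For anyone finding this thread later, the isolation boiled down to something like the following (grubby on CentOS; the core list mirrors my pinning, and hyper-threading was switched off in the BIOS):

grubby --update-kernel=ALL --args="isolcpus=1-10,15"
reboot
# after the reboot the scheduler leaves cores 1-10 and 15 alone, so only
# the explicitly pinned Bro workers and zbalance_ipc run there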

CB

On Mar 18, 2019, at 6:41 AM, william de ping <bill.de.ping at gmail.com> wrote:

Hi Colin

Have you seen any difference in the per-virtual-NIC traffic rates between zbalance_ipc and RSS load balancing?
Can you send htop output while the Bro workers are running?
Drops should mean that a worker is hitting 100% CPU. If that is the case, I would dive into the world of cpuset: through this pseudo-filesystem you can see which other processes are scheduled on a core alongside a Bro instance, make that core exclusive to Bro, and let the OS use the remaining CPUs for everything else.
I would first look at the RSS-versus-zbalance_ipc difference before making CPUs exclusive, though.
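A minimal cpuset sketch, assuming the cgroup-v1 mount you get on a stock CentOS 7 (the core number and pid are illustrative):

mkdir /sys/fs/cgroup/cpuset/bro
echo 5 > /sys/fs/cgroup/cpuset/bro/cpuset.cpus          # core reserved for the worker
echo 0 > /sys/fs/cgroup/cpuset/bro/cpuset.mems          # memory node to allocate from
echo 1 > /sys/fs/cgroup/cpuset/bro/cpuset.cpu_exclusive # no sibling cpuset may overlap
echo <worker-pid> > /sys/fs/cgroup/cpuset/bro/tasks     # move the worker into the set
ps -eo pid,psr,comm | awk '$2 == 5'                     # what else runs on core 5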

There are many tweaks in Bro, but it really depends on the type of traffic and what you do with it. Using the dump-events script you get a sense of the most active events. Grep for those events and search for the Bro scripts that registered them; it may well be that a given script never generates a log file. Such scripts could be irrelevant to you, so you can switch off their analyzers (comment out loading them in init-default.bro).
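Concretely, the workflow might look like this (the interface is from your setup; the install prefix assumes a default build, and the awk field assumes dump-events prints one "timestamp event_name" line per event with include_args=F, so check your version's output):

# count the busiest events over a short live run
bro -i zc:99@0 local misc/dump-events "DumpEvents::include_args=F" > events.out
awk '{print $2}' events.out | sort | uniq -c | sort -rn | head
# find which scripts register a hot event, e.g. smb1_message
grep -rn "event smb1_message" /usr/local/bro/share/bro/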

Let me know how it's working out for you
B

________________________________
From: C Blair <mnmblair at hotmail.com>
Sent: Sunday, March 17, 2019 6:09 PM
To: bill.de.ping at gmail.com
Subject: Re: [Zeek] zbalance_ipc and Zeek

Hi Bill,

The server is a single socket. Attached is my lstopo output. I have run zbalance_ipc with the -p option, which prints per-queue packet counts to stdout so you can watch what is happening in real time. The queues receive with zero drops, and then Zeek drops packets equally across workers. I have pinned zbalance_ipc to logical core 15 and the Bro workers to logical cores 1-10. I have reserved core 0 for packet timestamping and let CentOS schedule the remaining logical cores. I have not tried turning off hyper-threading. Can you recommend core affinity for my hardware?
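In case it helps, this is how I verify the pinning took effect (the pgrep patterns are approximate):

taskset -cp $(pgrep -x zbalance_ipc)                # expect affinity list: 15
for p in $(pgrep -f bro); do taskset -cp $p; done   # workers should show 1-10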

Are there certain analyzers you recommend turning off and how do I accomplish that?

Thanks again,
CB
On Mar 17, 2019, at 1:38 PM, william de ping <bill.de.ping at gmail.com> wrote:

Hi,

I would check the following:


  *   NUMA node configuration - this server should have 2 CPU sockets. If you pinned zbalance_ipc to a NUMA node that is not directly connected to the PCI bus hosting the NIC, all traffic will cross the QPI, which could explain why it is slower. I would verify that zbalance_ipc is pinned to the CPU socket closest to the NIC (see the first sketch after this list).
  *   Per-queue line rate - check each virtual interface using PF_RING/userland/examples/pfcount, i.e. zc:99@[0,1,...,9], both with zbalance_ipc and without it (using RSS). This should tell you whether a specific worker instance is receiving significantly more traffic than the others (RSS and zbalance_ipc may balance differently). It really depends on the type of traffic, but on a 2.3 GHz processor I would expect a single Bro worker to process anything between 150 and 400 Mbps (see the second sketch after this list).
  *   Event profile - run a single instance of Bro with the local configuration and the dump-events.bro script (you can redef DumpEvents::include_args=F to get only event names, without parameters). Sort and uniq -c the output to see which events occur most often; analyzers behind the noisiest irrelevant events might be turned off to save CPU cycles.
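For the first point, a quick sanity check (assuming the interface is eth0; the sysfs path is standard Linux, not PF_RING-specific):

cat /sys/class/net/eth0/device/numa_node   # NUMA node behind the NIC's PCI slot
lscpu | grep "NUMA node"                   # which cores belong to that node
# then pass a core from that node to zbalance_ipc with -g

For the second point, a rough per-queue rate check (stop the Bro workers first, since each ZC queue allows a single consumer; cluster id 99 and ten queues match your setup):

cd PF_RING/userland/examples
for q in $(seq 0 9); do ./pfcount -i zc:99@$q > /tmp/q$q.log 2>&1 & done
sleep 10; pkill pfcount
tail -n 2 /tmp/q*.log   # compare the per-queue packet rates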

Let me know if it helps
B


________________________________
From: C Blair
Sent: Sunday, March 17, 2019 8:34 AM
To: bill.de.ping at gmail.com
Cc: zeek at zeek.org
Subject: Re: [Zeek] zbalance_ipc and Zeek

Hi Bill,
Thank you for the assist. Currently, Zeek cannot reliably capture more than 300 Mbps with this configuration. When I remove zbalance_ipc and use RSS with num_rss_queues=lb_procs, Zeek can capture up to 2 Gbps. I need zbalance_ipc because I feed a single capture interface to multiple consuming applications, i.e. Zeek and Snort. It seems obvious that a software load balancer will perform worse than hardware; however, I don't see the same significant performance drop with other consuming applications such as Snort.
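For reference, the RSS configuration that does keep up looks roughly like this (ZC ixgbe driver reloaded with ten hardware queues; the module path and single-port RSS syntax are from my notes, so double-check against the PF_RING docs):

rmmod ixgbe
insmod ./ixgbe.ko RSS=10   # ten hardware RX queues
# node.cfg then uses interface=zc:eth0 with lb_procs=10, so the workers
# open zc:eth0@0 .. zc:eth0@9 directly, with no software balancer in between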

Ingress Line speed:
I am using a traffic generator so I can regulate up to 10Gbps.

ZEEK node.cfg
[manager]
type=manager
host=localhost

[logger]
type=logger
host=localhost

[proxy-1]
type=proxy
host=localhost

[worker-1]
type=worker
host=localhost
interface=zc:99
lb_method=pf_ring
lb_procs=10
pin_cpus=1,2,3,4,5,6,7,8,9,10

ZBALANCE_IPC run config
zbalance_ipc -i zc:eth0 -c 99 -n 10 -m 4 -g 15 -S 0
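(zbalance_ipc flags, for context: -i ingress device, -c ZC cluster id, -n number of egress queues, -m hashing mode, -g core to bind the balancer thread to, -S core for the software time-pulse thread used for packet timestamping.)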

PFRING-ZC INFO
PF_RING Version    : 7.5.0 (unknown)
Total rings     : 22
Standard (non ZC) Options
Ring slots     : 65536
Slot version    : 17
Capture TX     : No [RX only]
IP Defragment    : No
Socket Mode     : Standard
Cluster Fragment Queue  : 0
Cluster Fragment Discard : 0
Name      : eth0
Index      : 40
Address      : XX:XX:XX:XX:XX:XX
Polling Mode    : NAPI/ZC
Type      : Ethernet
Family      : ixgbe
TX Queues     : 1
RX Queues     : 1
Num RX Slots    : 32768
Num TX Slots    : 32768

System Specs:
Xeon D-1587, 16 cores / 32 threads, 1.7 GHz base, 2.3 GHz turbo, 20 MB cache
128 GB DDR4-2133
8TB SSD
Intel 10GBase-T X557 ixgbe


On Mar 17, 2019, at 9:08 AM, william de ping <bill.de.ping at gmail.com> wrote:

Hi Colin,

Can you please clarify your deployment? (node.cfg file, NIC type, PF_RING version, zbalance_ipc parameters, and the ingress line rate)

Thanks
B

On Fri, Mar 15, 2019 at 12:38 AM COLIN BLAIR <mnmblair at hotmail.com> wrote:
Hi All,

Does anyone have a success story using zbalance_ipc and Zeek? We are getting very high packet loss using zbalance_ipc. When we remove zbalance_ipc, Zeek performs well on PF_RING zero copy with RSS. Any advice is appreciated.

R,
CB
_______________________________________________
Zeek mailing list
zeek at zeek.org
http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/zeek

