[Zeek] zbalance_ipc and Zeek
william de ping
bill.de.ping at gmail.com
Sun Mar 17 10:38:26 PDT 2019
Hi,
I would check the following:
- NUMA node configuration - this server should have 2 CPU sockets; if
you pinned zbalance_ipc to a NUMA node that is not directly connected to
the PCI bus hosting the NIC, all traffic goes through the QPI link, which
could explain why it is slower. I would check that the zbalance_ipc
process is pinned to the CPU socket closest to the NIC's PCI slot to avoid this.
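Something like this quick sketch can confirm the NUC/NIC locality (the
interface name eth0 is an assumption; substitute yours):

```shell
# Which NUMA node hosts the NIC's PCI slot? Prints -1 when the kernel
# exposes no NUMA info (e.g. single-node systems).
node=$(cat /sys/class/net/eth0/device/numa_node 2>/dev/null || echo -1)
echo "NIC NUMA node: ${node}"

# Cores per NUMA node, to pick a same-node core for zbalance_ipc's -g flag:
lscpu | grep -i numa || true
```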
- Check the line rate on each virtual interface using
PF_RING/userland/examples/pfcount: check zc:99@[0,1,...,9] both with
zbalance_ipc and without it (using RSS). This should tell you whether a
specific worker instance is receiving significantly more traffic than the
others (RSS and zbalance_ipc load balancing may differ). It really depends
on the type of traffic, but I would assume that on a 2.3 GHz processor a
single Bro worker can process anywhere between 150 and 400 Mbps.
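For example (the queue count of 10 matches your -n 10; the pfcount path
depends on where PF_RING was built, so that part is left commented):

```shell
# Build the list of per-queue virtual interfaces that
# "zbalance_ipc -c 99 -n 10" exposes:
queues=$(for q in $(seq 0 9); do echo "zc:99@${q}"; done)
echo "$queues"

# Then probe each one (requires pfcount built from PF_RING/userland/examples):
# for ifname in $queues; do ./pfcount -i "$ifname"; done
```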
- Run a single instance of Bro with a local configuration and the
dump-events.bro script (you can redef include_args=F to get only event
names without their parameters). Pipe the output through sort and uniq -c
to see which events occur most often. Some analyzers might then be turned
off to save CPU cycles.
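The counting pipeline I mean is just the following (the sample file here is
a toy stand-in for real dump-events output, which with include_args=F is
roughly one event name per line):

```shell
# Toy stand-in for dump-events output:
printf 'new_packet\nnew_connection\nnew_packet\nconn_stats\nnew_packet\n' \
  > /tmp/events.sample

# Rank event names by how often they fired, most frequent first:
sort /tmp/events.sample | uniq -c | sort -rn
```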
Let me know if it helps
B
On Sun, Mar 17, 2019 at 2:34 PM C Blair <mnmblair at hotmail.com> wrote:
> Hi Bill,
> Thank you for the assist. Currently, Zeek cannot reliably capture more
> than 300 Mbps with this configuration. When I remove zbalance_ipc and use
> RSS with num_rss_queues=lb_procs, Zeek can capture up to 2 Gbps. I need to
> use zbalance_ipc because I feed a single capture interface to multiple
> consuming applications, i.e. Zeek and Snort. It seems obvious that a
> software load balancer will perform worse than hardware; however, I don't
> see the same significant performance drop with other consuming applications
> like Snort.
>
> Ingress Line speed:
> I am using a traffic generator so I can regulate up to 10Gbps.
>
> ZEEK node.cfg
>
> [manager]
> type=manager
> host=localhost
>
> [logger]
> type=logger
> host=localhost
>
> [proxy-1]
> type=proxy
> host=localhost
>
> [worker-1]
> type=worker
> host=localhost
> interface=zc:99
> lb_method=pf_ring
> lb_procs=10
> pin_cpus=1,2,3,4,5,6,7,8,9,10
>
>
> ZBALANCE_IPC run config
>
> zbalance_ipc -i zc:eth0 -c 99 -n 10 -m 4 -g 15 -S 0
>
>
> PFRING-ZC INFO
>
> PF_RING Version : 7.5.0 (unknown)
> Total rings : 22
> Standard (non ZC) Options
> Ring slots : 65536
> Slot version : 17
> Capture TX : No [RX only]
> IP Defragment : No
> Socket Mode : Standard
> Cluster Fragment Queue : 0
> Cluster Fragment Discard : 0
> Name                     : eth0
> Index : 40
> Address : XX:XX:XX:XX:XX:XX
> Polling Mode : NAPI/ZC
> Type : Ethernet
> Family : ixgbe
> TX Queues : 1
> RX Queues : 1
> Num RX Slots : 32768
> Num TX Slots : 32768
>
>
> System Specs:
> Xeon D-1587: 16 cores, 32 logical, 1.7 GHz base, 2.3 GHz turbo, 20 MB cache
> 128 GB DDR4 2133 MHz
> 8 TB SSD
> Intel 10GBase-T X557 (ixgbe)
>
>
> On Mar 17, 2019, at 9:08 AM, william de ping <bill.de.ping at gmail.com>
> wrote:
>
> Hi Colin,
>
> Can you please clarify your deployment ? (node.cfg file, NIC type, PF_RING
> version, zbalance_ipc parameters and the ingress line rate )
>
> Thanks
> B
>
> On Fri, Mar 15, 2019 at 12:38 AM COLIN BLAIR <mnmblair at hotmail.com> wrote:
>
> Hi All,
>
> Does anyone have a success story using zbalance_ipc and Zeek? We are
> getting very high packet loss using zbalance_ipc. When we remove
> zbalance_ipc, Zeek performs well on PF_RING zero copy with RSS. Any advice
> is appreciated.
>
> R,
> CB
> _______________________________________________
> Zeek mailing list
> zeek at zeek.org
> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/zeek