I am using the nfacctd executable to parse NetFlow/IPFIX traffic. In one of our networks, we are receiving a high volume of traffic, exceeding 50,000 FPS. This load causes the core process to consume significant CPU and memory resources as it tries to parse all incoming flows.
To manage this situation, I need a configuration parameter to control the socket buffer size used by nfacctd. I am okay with dropping excess packets if the incoming traffic rate exceeds the buffer capacity.
Is there a parameter available in nfacctd that allows me to set the size of the socket buffer? If so, could you provide guidance or examples for configuring this?
Thank you!
Version
NetFlow Accounting Daemon, nfacctd 1.7.9 [RELEASE]
Nevertheless, if what you want is to limit CPU or memory usage, you can probably use cgroups/nice to control the maximum amount of resources the process can use, or use the nfacctd Docker container with CPU and memory limits (which uses cgroups under the hood).
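The container route could look like the sketch below. The image name (`pmacct/nfacctd`), listening port, and config path are illustrative assumptions; adjust them to your environment.

```shell
# Run nfacctd in a container with hard CPU and memory caps
# (Docker enforces both via cgroups under the hood).
# Image name, port, and config path are illustrative.
docker run -d \
  --name nfacctd \
  --cpus="2.0" \
  --memory="1g" \
  -p 2055:2055/udp \
  -v "$(pwd)/nfacctd.conf:/etc/pmacct/nfacctd.conf" \
  pmacct/nfacctd:latest
```

Note that `--memory` is a hard limit: if the process exceeds it, the kernel's OOM killer terminates it, so size it with headroom for your peak flow rate.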