I use both DPDK and SPDK in my application. I reserve 16 GB of hugepages at boot by adding `GRUB_CMDLINE_LINUX_DEFAULT="default_hugepagesz=1G hugepagesz=1G hugepages=16"` to `/etc/default/grub`.
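A quick way to confirm the reservation took effect (after running `update-grub` and rebooting) is to check the kernel's hugepage counters; the expected values below assume the GRUB line above:

```shell
# Show the kernel's view of the hugepage pools. With the GRUB line above,
# HugePages_Total should be 16 and Hugepagesize should be 1048576 kB.
grep Huge /proc/meminfo
```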
Calling `spdk_nvme_ns_cmd_write` returns `-ENOMEM`. When I execute `./scripts/setup.sh`, I get the following message:
```
0000:82:00.0 (144d a80a): Already using the vfio-pci driver
0000:06:00.0 (144d a80a): Already using the vfio-pci driver
INFO: Requested 2 hugepages but 4 already allocated on node0
"mkru" user memlock limit: 8036 MB
This is the maximum amount of memory you will be
able to use with DPDK and VFIO if run as user "mkru".
To change this, please adjust limits.conf memlock limit for user "mkru".
```
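The script's suggestion can be implemented with a persistent entry in `/etc/security/limits.conf` (a sketch; the user name is taken from the log above, and a re-login is needed for it to apply):

```
# /etc/security/limits.conf — remove the locked-memory cap for user "mkru"
mkru    soft    memlock    unlimited
mkru    hard    memlock    unlimited
```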
It looks like the memlock limit is set to 8 GB by default. I have changed it to unlimited. This time when I run `./scripts/setup.sh` I get:
```
0000:82:00.0 (144d a80a): Already using the vfio-pci driver
0000:06:00.0 (144d a80a): Already using the vfio-pci driver
INFO: Requested 2 hugepages but 4 already allocated on node0
```
It looks like there are no errors this time.
However, when I call `spdk_nvme_ns_cmd_write` I still get `-ENOMEM`, and I do not understand why. What is more, the buffer I pass to `spdk_nvme_ns_cmd_write` is already allocated in an rte mempool by the DPDK code, so why `spdk_nvme_ns_cmd_write` would need to allocate any memory is even more unclear to me.
Linux host 6.8.0-45-generic #45-Ubuntu SMP PREEMPT_DYNAMIC x86_64 x86_64 x86_64 GNU/Linux
> However, when I call `spdk_nvme_ns_cmd_write` I still get `-ENOMEM`. I do not understand why. What is more, the buffer I pass to `spdk_nvme_ns_cmd_write` is already allocated in the rte mempool by the DPDK code. Why `spdk_nvme_ns_cmd_write` would like to allocate some memory is even more unclear to me.
I don't think `spdk_nvme_ns_cmd_write()` allocates buffers for the data, but it can return `-ENOMEM` when the qpair's internal request pool is empty (i.e. too many commands are in flight and completions haven't been polled yet). You could increase the number of requests via `spdk_nvme_io_qpair_opts.io_queue_requests`.
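A minimal sketch of setting that option when allocating the I/O queue pair (assuming `ctrlr` is an already-attached controller; the value 4096 is just an illustrative choice, not a recommendation):

```c
#include "spdk/nvme.h"

/* Allocate an I/O qpair with an enlarged request pool, so that more
 * commands can be outstanding before submissions fail with -ENOMEM. */
static struct spdk_nvme_qpair *
alloc_qpair_with_larger_pool(struct spdk_nvme_ctrlr *ctrlr)
{
	struct spdk_nvme_io_qpair_opts opts;

	/* Start from the controller's defaults, then override the pool size. */
	spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &opts, sizeof(opts));
	opts.io_queue_requests = 4096; /* illustrative value, tune for your workload */

	return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &opts, sizeof(opts));
}
```

Note that `-ENOMEM` from a submission call is retryable: poll completions with `spdk_nvme_qpair_process_completions()` to free up requests, then resubmit.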