Warning: Leaking Caffe2 thread-pool after fork when using DataLoader with num_workers>0 and pin_memory=True #57273
Closed
Description
🐛 Bug
When using a DataLoader with num_workers>0 and pin_memory=True, warnings about "Leaking Caffe2 thread-pool after fork" are emitted. The warning fires multiple times and floods the screen. It does not trigger when either num_workers=0 or pin_memory=False.
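Per the behavior described above, either of the following configurations should run without the warning (a minimal sketch; the combinations come from this report and were not separately verified):

from torch.utils.data import DataLoader
from torchvision.datasets import FakeData
from torchvision.transforms import ToTensor

data = FakeData(transform=ToTensor())

# num_workers=0: no worker processes are forked, so no warning appears.
loader_no_workers = DataLoader(data, num_workers=0, pin_memory=True)

# pin_memory=False: workers are forked, but no warning appears.
loader_no_pinning = DataLoader(data, num_workers=2, pin_memory=False)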
To Reproduce
Steps to reproduce the behavior:
- Run the following:
from torch.utils.data import DataLoader
from torchvision.datasets import FakeData
from torchvision.transforms import ToTensor

def main():
    data = FakeData(transform=ToTensor())
    dataloader = DataLoader(data, num_workers=2, pin_memory=True)
    for e in range(1, 6):
        print(f'epoch {e}:')
        for _ in dataloader:
            pass

if __name__ == '__main__':
    main()
Output:
epoch 1:
epoch 2:
[W pthreadpool-cpp.cc:88] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool)
[W pthreadpool-cpp.cc:88] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool)
epoch 3:
[W pthreadpool-cpp.cc:88] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool)
[W pthreadpool-cpp.cc:88] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool)
epoch 4:
[W pthreadpool-cpp.cc:88] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool)
[W pthreadpool-cpp.cc:88] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool)
epoch 5:
[W pthreadpool-cpp.cc:88] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool)
[W pthreadpool-cpp.cc:88] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool)
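Note that the warning appears twice per epoch, matching num_workers=2, and only from the second epoch onward, presumably once per re-forked worker.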
Expected behavior
No warnings should be emitted.
Environment
PyTorch version: 1.9.0.dev20210428
Is debug build: False
CUDA used to build PyTorch: 11.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.5 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: Could not collect
Python version: 3.8 (64-bit runtime)
Is CUDA available: True
CUDA runtime version: Could not collect
GPU models and configuration:
GPU 0: GeForce RTX 2080 Ti
GPU 1: GeForce RTX 2080 Ti
GPU 2: GeForce RTX 2080 Ti
GPU 3: GeForce RTX 2080 Ti
GPU 4: GeForce RTX 2080 Ti
GPU 5: GeForce RTX 2080 Ti
GPU 6: GeForce RTX 2080 Ti
GPU 7: GeForce RTX 2080 Ti
Nvidia driver version: 460.32.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Versions of relevant libraries:
[pip3] numpy==1.20.1
[pip3] torch==1.9.0.dev20210428
[pip3] torchaudio==0.9.0a0+999026d
[pip3] torchvision==0.10.0.dev20210428
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.1.74 h6bb024c_0 nvidia
[conda] mkl 2021.2.0 h06a4308_296
[conda] mkl-service 2.3.0 py38h27cfd23_1
[conda] mkl_fft 1.3.0 py38h42c9631_2
[conda] mkl_random 1.2.1 py38ha9443f7_2
[conda] numpy 1.20.1 py38h93e21f0_0
[conda] numpy-base 1.20.1 py38h7d8b39e_0
[conda] pytorch 1.9.0.dev20210428 py3.8_cuda11.1_cudnn8.0.5_0 pytorch-nightly
[conda] torchaudio 0.9.0.dev20210428 py38 pytorch-nightly
[conda] torchvision 0.10.0.dev20210428 py38_cu111 pytorch-nightly
Additional context
It looks like this warning was introduced in #54895. I don't quite follow the details there, though.
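A possible mitigation, assuming the warning fires because worker processes are re-forked each epoch after the parent process has already spun up its thread pool (an assumption on my part, not something confirmed in this issue): keep the workers alive across epochs with persistent_workers=True. This sketch has not been verified against the warning:

from torch.utils.data import DataLoader
from torchvision.datasets import FakeData
from torchvision.transforms import ToTensor

def main():
    data = FakeData(transform=ToTensor())
    # persistent_workers=True keeps worker processes alive across epochs,
    # so they are forked once instead of once per epoch.
    # (Assumed mitigation; untested against this warning.)
    dataloader = DataLoader(data, num_workers=2, pin_memory=True,
                            persistent_workers=True)
    for e in range(1, 6):
        print(f'epoch {e}:')
        for _ in dataloader:
            pass

if __name__ == '__main__':
    main()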