MPS backend leaks memory when input sizes vary #132596
Labels
- `enhancement`: not as big of a feature, but technically not a bug; should be easy to fix
- `module: memory usage`: PyTorch is using more memory than it should, or it is leaking memory
- `module: mps`: related to the Apple Metal Performance Shaders framework
- `triaged`: this issue has been looked at by a team member, and triaged and prioritized into an appropriate module
🐛 Describe the bug
Possibly similar to old issues with the CPU backend: #27971, #32037. In my case both the CPU and CUDA backends work fine; only MPS leaks memory. Is there anything similar to `LRU_CACHE_CAPACITY` but for MPS?
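The report does not include a reproduction, but a minimal sketch of the pattern being described might look like the following. The workload (square matmuls), loop bounds, and sizes are illustrative assumptions, not taken from the report; the memory-query functions are the public `torch.mps` APIs available in PyTorch 2.x.

```python
# Hypothetical repro sketch (assumptions, not from the original report):
# run ops with varying input sizes on MPS and watch memory usage grow,
# while a fixed input size keeps it flat.
import torch

assert torch.backends.mps.is_available()
device = torch.device("mps")

for step in range(1, 501):
    # Vary the input size each iteration; the reported leak is tied to
    # size variation, so pinning n to a constant should avoid the growth.
    n = 512 + (step % 250)
    x = torch.randn(n, n, device=device)
    y = x @ x
    torch.mps.synchronize()
    if step % 50 == 0:
        # current_allocated_memory() reports bytes held by live MPS tensors;
        # driver_allocated_memory() also includes the backend's cached pools.
        alloc = torch.mps.current_allocated_memory() / 2**20
        driver = torch.mps.driver_allocated_memory() / 2**20
        print(f"step {step:4d}: allocated={alloc:.1f} MiB, driver={driver:.1f} MiB")
```

For what it's worth, `torch.mps.empty_cache()` does exist and may release cached memory between iterations, but it is a manual cache flush rather than a capacity cap like `LRU_CACHE_CAPACITY`.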
Versions
```
PyTorch version: 2.4.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A

OS: macOS 14.2 (arm64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.3.9.4)
CMake version: version 3.29.6
Libc version: N/A

Python version: 3.12.3 (main, Apr 15 2024, 17:43:11) [Clang 17.0.6 ] (64-bit runtime)
Python platform: macOS-14.2-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Apple M1

Versions of relevant libraries:
[pip3] torch==2.4.0
[conda] Could not collect
```
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen