
When using multiple GPUs, the process on the other GPU will occupy a certain amount of memory on the main GPU #214

Open
gaozhiguang opened this issue Apr 14, 2021 · 0 comments


gaozhiguang commented Apr 14, 2021

Hi, I am using the code here in another program: I replaced the BERT model with a different pretrained model, and when I train on multiple GPUs the following happens:

[screenshot of nvidia-smi output]

I am using GPUs 5 and 7, and the process running on GPU07 also occupies 777 MB of memory on GPU05, so my program stops with a "CUDA out of memory" error.

What is the 777 MB on GPU05, and how can I fix it?
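(A common cause of this symptom is that a process initializes a CUDA context on the wrong card before the training code selects its device; the stray few-hundred-MB allocation is that context. The snippet below is a minimal sketch of the usual workaround, assuming a PyTorch setup: pin each process to its physical GPU via `CUDA_VISIBLE_DEVICES` before any CUDA-using library is imported. The GPU index "7" here is only an example.)

```python
import os

# Restrict this process to a single physical GPU *before* importing
# torch (or any other CUDA-using library), so the CUDA context is
# created only on that card and no memory lands on the other GPU.
os.environ["CUDA_VISIBLE_DEVICES"] = "7"  # example index; set per process

# import torch  # import after setting the env var; the one visible
#               # card then appears inside this process as cuda:0
```

Each worker process would set its own index; inside the process, the selected card is always addressed as `cuda:0`.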
