When multiple GPUs are available, how can I use them all for inference?
My approach is to launch multiple Python processes, one per GPU, and in each process change `device` in demo_inference.py to the corresponding GPU index (e.g. cuda:0 through cuda:3).
However, I found that when the same model runs inference on the same images with `device` set to cuda:1 (or anything other than cuda:0), the detections go wrong: the result image is covered in all kinds of cluttered bounding boxes.
Refer to:
s2anet/tools/test.py
Lines 37 to 57 in 7d66620
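Besides the referenced multi-GPU test script, one common way to sidestep the cuda:1 problem entirely is to set `CUDA_VISIBLE_DEVICES` per worker process, so every process sees its assigned GPU as cuda:0 and demo_inference.py never needs its device string edited. A minimal sketch, assuming demo_inference.py accepts image paths on its command line (the actual CLI may differ):

```python
import os
import subprocess

def split_images(images, num_gpus):
    """Partition the image list round-robin so each GPU process
    gets a roughly equal share."""
    return [images[i::num_gpus] for i in range(num_gpus)]

def launch_workers(images, num_gpus):
    """Spawn one demo_inference.py process per GPU.  Setting
    CUDA_VISIBLE_DEVICES makes each process see its own GPU as
    cuda:0, so the model, checkpoint, and inputs all stay on the
    same device (hypothetical invocation; adapt to the real CLI)."""
    procs = []
    for gpu_id, shard in enumerate(split_images(images, num_gpus)):
        env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu_id))
        procs.append(subprocess.Popen(
            ["python", "demo/demo_inference.py", *shard], env=env))
    for p in procs:
        p.wait()
```

If you instead keep explicit `cuda:1`-style device strings, make sure the checkpoint is loaded with a matching `map_location` and the model and inputs are all moved to that device; cluttered boxes usually indicate weights and inputs ending up on different devices.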