
Multi-GPU inference on a single machine #103

Open
galaxyGGG opened this issue Aug 18, 2021 · 1 comment
Comments

@galaxyGGG

galaxyGGG commented Aug 18, 2021

When several GPUs are available, how can I use multiple cards for inference?

My approach is to launch several Python processes, one per GPU, and in each process change the device in demo_inference.py to the corresponding GPU index (cuda:0 through cuda:3).

However, I found that when the same model runs inference on the same images with the device set to cuda:1 (or anything other than cuda:0), the detections go wrong: the output images are covered with all kinds of spurious bounding boxes.
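One process per GPU can work, but a common pitfall with this pattern is that some part of the pipeline (e.g. a checkpoint loaded without an explicit map_location, or tensors created with a bare .cuda()) silently lands on cuda:0, which can corrupt results on any other device. A sketch that sidesteps per-device indexing entirely is to pin each worker to one physical GPU via CUDA_VISIBLE_DEVICES before CUDA is initialized, so every process addresses its card as cuda:0. The inference stub below is hypothetical; in practice it would hold the demo_inference.py logic with device="cuda:0":

```python
import os
import multiprocessing as mp

def worker(gpu_id, image_paths, result_queue):
    # Restrict this process to one physical GPU *before* any CUDA
    # library is initialized, so the model always sees it as "cuda:0".
    os.environ["CUDA_VISIBLE_DEVICES"] = str(gpu_id)
    # Hypothetical inference stub: this is where the detector would be
    # built and run on image_paths with device="cuda:0".
    result_queue.put((gpu_id, os.environ["CUDA_VISIBLE_DEVICES"], len(image_paths)))

def run_parallel(all_images, num_gpus=4):
    # Split the image list round-robin across GPUs, one process per device.
    queue = mp.Queue()
    chunks = [all_images[i::num_gpus] for i in range(num_gpus)]
    procs = []
    for gpu_id, chunk in enumerate(chunks):
        p = mp.Process(target=worker, args=(gpu_id, chunk, queue))
        p.start()
        procs.append(p)
    results = [queue.get() for _ in procs]
    for p in procs:
        p.join()
    return sorted(results)
```

Because the environment variable is set inside each child before any CUDA context exists, no code in the worker ever has to mention cuda:1 through cuda:3.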

@csuhan
Owner

csuhan commented Aug 18, 2021

See:

s2anet/tools/test.py

Lines 37 to 57 in 7d66620

def multi_gpu_test(model, data_loader, tmpdir=None):
    model.eval()
    results = []
    dataset = data_loader.dataset
    rank, world_size = get_dist_info()
    if rank == 0:
        prog_bar = mmcv.ProgressBar(len(dataset))
    for i, data in enumerate(data_loader):
        with torch.no_grad():
            result = model(return_loss=False, rescale=True, **data)
        results.append(result)
        if rank == 0:
            batch_size = data['img'][0].size(0)
            for _ in range(batch_size * world_size):
                prog_bar.update()
    # collect results from all ranks
    results = collect_results(results, len(dataset), tmpdir)
    return results
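multi_gpu_test assumes a distributed setup (get_dist_info, collect_results), so it is not called directly; it is driven by a launcher that spawns one process per GPU. In mmdetection-style repos such as this one there is typically a tools/dist_test.sh wrapper around torch.distributed.launch; the invocation below is a sketch under that assumption, with the config and checkpoint paths as placeholders:

```shell
# Hypothetical invocation: 4 GPUs on one machine, paths are placeholders.
# dist_test.sh is assumed to exist in tools/ as in mmdetection-based repos.
./tools/dist_test.sh <config.py> <checkpoint.pth> 4 --out results.pkl
```

Each spawned process gets its own rank and device, so the per-card cuda:1 editing from the question is not needed.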
