
about multi-GPU training setup #10

Open
@Jason-u

Description

Dear author,

Thank you for your work. I would like to ask why the model's computation is placed on device 1. When I make two cards visible, for example gpus=0,1, and set batch_size to 1, I notice something strange: both card 0 and card 1 run simultaneously. Could you advise how to modify the setup so that the data and model for a single batch stay on one card when using DataParallel? A minimal sketch of what I am aiming for is shown below.
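
For reference, here is a minimal sketch (not the repository's actual training script; the model and tensor names are placeholders) of one way to keep everything on a single card: with nn.DataParallel and two visible GPUs, every batch is split across devices, so even batch_size=1 can touch both cards. Either expose only one GPU to the process or drop the DataParallel wrapper and move the model and data to that device explicitly.

```python
import os
# Restrict the process to GPU 0 before torch initializes CUDA (assumption:
# the script is launched as a single process).
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import torch
import torch.nn as nn

device = torch.device("cuda:0")

# Placeholder model and input standing in for the repo's own model/data loader.
model = nn.Linear(128, 10).to(device)        # no nn.DataParallel wrapper
batch = torch.randn(1, 128, device=device)   # batch lives on the same card

output = model(batch)
print(output.device)  # cuda:0 -- model and data stay on one GPU
```
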
