Closed
Description
I would like to know whether it's possible to train a model with multiprocess parallelism on a machine with no GPU available using Lightning (a synchronous analogue of https://pytorch.org/docs/stable/notes/multiprocessing.html#hogwild). After a quick glance, I have the impression that all of the parallelism options in Trainer are GPU-based (if I'm not mistaken, torch DistributedDataParallel does support multiprocess CPU-only training via the gloo backend).
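For reference, here is a minimal sketch of what CPU-only synchronous data-parallel training looks like in plain PyTorch, which is the behavior I'd hope to get from Lightning. It uses the gloo backend (the CPU-capable process-group backend); the port number and model are arbitrary placeholders:

```python
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP

def worker(rank: int, world_size: int) -> None:
    # Rendezvous settings for the process group; the port is arbitrary.
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29500"
    # gloo is the backend that works on CPU-only machines.
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

    # Toy model; no device_ids means DDP runs on CPU.
    model = DDP(torch.nn.Linear(4, 1))
    opt = torch.optim.SGD(model.parameters(), lr=0.1)

    x = torch.randn(8, 4)
    y = torch.randn(8, 1)
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()  # gradients are all-reduced across the processes
    opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    world_size = 2
    # One Python process per "device"; all of them run on CPU.
    mp.spawn(worker, args=(world_size,), nprocs=world_size)
```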