The alternative is to use Horovod for distributed training, or to set the backend to 'mpi' when using DistributedDataParallel.
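
The following is a minimal sketch (not taken from the original page) of how the 'mpi' backend can be used with DistributedDataParallel. It assumes PyTorch was built with MPI support and that the script is launched with an MPI launcher such as `mpirun -np 4 python script.py`; the model and training step are placeholders for illustration.

```python
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # With the 'mpi' backend, rank and world size are provided by the MPI launcher,
    # so no environment variables or init_method need to be set explicitly.
    dist.init_process_group(backend="mpi")

    model = nn.Linear(10, 10)   # placeholder model for illustration
    ddp_model = DDP(model)      # gradients are averaged across all ranks

    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)

    # One illustrative training step on random data.
    inputs = torch.randn(20, 10)
    loss = ddp_model(inputs).sum()
    loss.backward()
    optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```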