
The alternative is to use Horovod to run distributed training, or to set the backend to 'mpi' when using DistributedDataParallel, as in the sketch below.
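
A minimal sketch of the second option follows; the toy model and the launch details are illustrative assumptions, and the 'mpi' backend is only available if PyTorch was built with MPI support.

import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

# With the 'mpi' backend, the rank and world size are taken from the MPI
# launcher (e.g. mpirun or srun), so no init_method, rank, or world_size
# arguments are needed here.
dist.init_process_group(backend='mpi')

model = nn.Linear(16, 4)   # placeholder model for illustration
ddp_model = DDP(model)     # gradients are averaged across ranks over MPI

print(f"Rank {dist.get_rank()} of {dist.get_world_size()} is ready")

When launched under mpirun or srun, each MPI rank runs a copy of this script. The Horovod route is analogous: the script calls hvd.init() and wraps its optimizer with hvd.DistributedOptimizer instead of using DistributedDataParallel.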