PyTorch
<languages />
[[Category:Software]][[Category:AI and Machine Learning]]
<translate>
<!--T:14-->

<!--T:61-->
If you are porting a PyTorch program to one of our clusters, you should follow [[Tutoriel Apprentissage machine/en|our tutorial on the subject]].

= Disambiguation = <!--T:62-->
To see the latest version of PyTorch that we have built:
{{Command|avail_wheels "torch*"}}
For more information, see [[Python#Available_wheels |Available wheels]].

==Installing our wheel== <!--T:15-->

<!--T:25-->
The preferred option is to install it using the Python [https://pythonwheels.com/ wheel] as follows:
:1. Load a Python [[Utiliser_des_modules/en#Sub-command_load|module]], thus <code>module load python</code>
:2. Create and start a [[Python#Creating_and_using_a_virtual_environment|virtual environment]].
:3. Install PyTorch in the virtual environment with <code>pip install</code>.

==== GPU and CPU ==== <!--T:18-->
:{{Command|prompt=(venv) [name@server ~]|pip install --no-index torch }}
<!--T:546-->
<b>Note:</b> There are known issues with PyTorch 1.10 on our clusters (except for Narval). If you encounter problems while using distributed training, or if you get an error containing <code>c10::Error</code>, we recommend installing PyTorch 1.9.1 using <code>pip install --no-index torch==1.9.1</code>.

====Extra==== <!--T:21-->
In addition to <code>torch</code>, you can install <code>torchvision</code>, <code>torchtext</code> and <code>torchaudio</code>:
{{Command|prompt=(venv) [name@server ~]|pip install --no-index torch torchvision torchtext torchaudio }}
#SBATCH --output=%N-%j.out

module load python/<select version> # Make sure to choose a version that suits your application
virtualenv --no-download $SLURM_TMPDIR/env
source $SLURM_TMPDIR/env/bin/activate
{{Command|sbatch pytorch-test.sh}}
= High performance with PyTorch = <!--T:164-->

== TF32: Performance vs numerical accuracy == <!--T:547-->

<!--T:548-->
In version 1.7.0, PyTorch introduced support for [https://blogs.nvidia.com/blog/2020/05/14/tensorfloat-32-precision-format/ Nvidia's TensorFloat-32 (TF32) Mode], which is available only on Ampere and later Nvidia GPU architectures. This mode of executing tensor operations has been shown to yield up to 20x speed-ups compared to equivalent single precision (FP32) operations, and it is enabled by default in PyTorch versions 1.7.x up to 1.11.x. However, such gains in performance come at the cost of potentially decreased accuracy, which may become problematic when dealing with ill-conditioned matrices, or when performing long sequences of tensor operations, as is common in deep learning models. Following calls from its user community, starting with PyTorch version 1.12.0, TF32 is <b>disabled by default for matrix multiplications</b> but still <b>enabled by default for convolutions</b>.
<!--T:549-->
As of October 2022, our only cluster equipped with Ampere GPUs is [[Narval]]. When using PyTorch on Narval, users should be cognizant of the following:
# You may notice a significant slowdown when running the exact same GPU-enabled code with <code>torch < 1.12.0</code> and <code>torch >= 1.12.0</code>.
# You may get different results when running the exact same GPU-enabled code with <code>torch < 1.12.0</code> and <code>torch >= 1.12.0</code>.
<!--T:550-->
To enable or disable TF32 on <code>torch >= 1.12.0</code>, set the following flags to <code>True</code> or <code>False</code> accordingly:

<!--T:551-->
 torch.backends.cuda.matmul.allow_tf32 = False # Enable/disable TF32 for matrix multiplications
 torch.backends.cudnn.allow_tf32 = False # Enable/disable TF32 for convolutions

<!--T:552-->
For more information, see [https://pytorch.org/docs/stable/notes/cuda.html#tf32-on-ampere PyTorch's official documentation].
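To see why TF32 can change numerical results, recall that TF32 keeps only 10 explicit mantissa bits, versus FP32's 23. The following pure-Python sketch (an illustration written for this page, not part of PyTorch) simulates TF32's reduced precision by truncating a float32 mantissa:

```python
import struct

def tf32_truncate(x: float) -> float:
    """Simulate TF32 storage: a float32 keeping only 10 explicit mantissa bits."""
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    bits &= ~((1 << 13) - 1)  # drop the 13 low mantissa bits that FP32 keeps
    return struct.unpack('<f', struct.pack('<I', bits))[0]

print(tf32_truncate(1.0 + 2**-10))  # representable in TF32: prints 1.0009765625
print(tf32_truncate(1.0 + 2**-11))  # not representable: truncates to 1.0
```

Differences of this size are harmless for one operation, but they accumulate over the long chains of tensor operations in a deep network, which is why results can differ between TF32 and FP32 runs.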
== PyTorch with multiple CPUs == <!--T:165-->

<!--T:166-->
PyTorch natively supports parallelizing work across multiple CPUs in two ways: intra-op parallelism and inter-op parallelism.
* <b>intra-op</b> refers to PyTorch's parallel implementations of operators commonly used in Deep Learning, such as matrix multiplication and convolution, using [https://www.openmp.org OpenMP] directly or through low-level libraries like [https://en.wikipedia.org/wiki/Math_Kernel_Library MKL] and [https://www.intel.com/content/www/us/en/develop/documentation/oneapi-programming-guide/top/api-based-programming/intel-oneapi-deep-neural-network-library-onednn.html OneDNN]. Whenever you run PyTorch code that performs such operations, they will automatically leverage multi-threading over as many CPU cores as are available to your job.
* <b>inter-op</b> parallelism, on the other hand, refers to PyTorch's ability to execute different parts of your code concurrently. This modality of parallelism typically requires that you explicitly design your program so that different parts can run in parallel. Examples include code that leverages PyTorch's Just-In-Time compiler <code>torch.jit</code> to run asynchronous tasks in a [https://pytorch.org/docs/stable/jit.html#built-in-functions-and-modules TorchScript] program.
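The inter-op idea can be illustrated without PyTorch at all. In this toy sketch (the two "branches" are invented for illustration), two independent parts of a computation are submitted concurrently and joined at the end, much as asynchronous tasks behave in a TorchScript program:

```python
from concurrent.futures import ThreadPoolExecutor

# Two independent "branches" of a computation that share no data,
# so they can run concurrently (inter-op style parallelism).
def branch_a(xs):
    return sum(x * x for x in xs)

def branch_b(xs):
    return sum(abs(x) for x in xs)

data = list(range(-1000, 1000))

with ThreadPoolExecutor(max_workers=2) as pool:
    fut_a = pool.submit(branch_a, data)      # launched immediately
    fut_b = pool.submit(branch_b, data)      # runs concurrently with branch_a
    total = fut_a.result() + fut_b.result()  # join both branches

print(total)
```

The result is identical to running the branches one after the other; only the wall-clock time changes when each branch has real work to do.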
<!--T:167-->
With small scale models, we strongly recommend using <b>multiple CPUs instead of a GPU</b>. While training will almost certainly run faster on a GPU (except when the model is very small), if your model and your dataset are not large enough, the speed-up relative to CPU will likely not be significant and your job will end up using only a small portion of the GPU's compute capabilities. This might not be an issue on your own workstation, but in a shared environment like our HPC clusters, it means you are unnecessarily blocking a resource that another user may need to run actual large scale computations! Furthermore, you would be unnecessarily using up your group's allocation and affecting the priority of your colleagues' jobs.
<!--T:168-->

import argparse
import os

parser = argparse.ArgumentParser(description='cifar10 classification models, cpu performance test')
args = parser.parse_args()

torch.set_num_threads(int(os.environ['SLURM_CPUS_PER_TASK']))
class Net(nn.Module):

transform_train = transforms.Compose([transforms.ToTensor(),transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

### This next line will attempt to download the CIFAR10 dataset from the internet if you don't already have it stored in ./data
### Run this line on a login node with "download=True" prior to submitting your job, or manually download the data from
### https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz and place it under ./data
dataset_train = CIFAR10(root='./data', train=True, download=False, transform=transform_train)
<translate>
== PyTorch with a single GPU == <!--T:205-->

<!--T:206-->
There is a common misconception that you should definitely use a GPU for model training if one is available. While this may ''almost always'' hold true (training very small models is often faster on one or more CPUs) on your own local workstation equipped with a GPU, it is not the case on our HPC clusters.

<!--T:207-->
Simply put, '''you should not ask for a GPU''' if your code is not capable of making a reasonable use of its compute capacity.
<!--T:208-->

<!--T:213-->
Of course, <code>batch_size</code> is also an important parameter with respect to a model's performance on a given task (accuracy, error, etc.) and different schools of thought have different views on the impact of using large batches. This page will not go into this subject, but if you have reason to believe that a small (relative to space in GPU memory) batch size is best for your application, skip to [[PyTorch#Data_Parallelism_with_a_single_GPU|Data Parallelism with a single GPU]] to see how to maximize GPU utilization with small inputs.
</translate>

<translate>
=== Data parallelism with a single GPU === <!--T:250-->

<!--T:251-->
In cases where a model is fairly small, such that it does not take up a large portion of GPU memory and it cannot use a reasonable amount of its compute capacity, it is <b>not advisable to use a GPU</b>. Use [[PyTorch#PyTorch_with_Multiple_CPUs|one or more CPUs]] instead. However, in a scenario where you have such a model, but have a very large dataset and wish to perform training with a small batch size, taking advantage of Data Parallelism on a GPU becomes a viable option.

<!--T:252-->
Data Parallelism, in this context, refers to methods to perform training over multiple replicas of a model in parallel, where each replica receives a different chunk of training data at each iteration. Gradients are then aggregated at the end of an iteration and the parameters of all replicas are updated in a synchronous or asynchronous fashion, depending on the method. Using this approach may provide a significant speed-up by iterating through all examples in a large dataset approximately ''N'' times faster, where ''N'' is the number of model replicas. An <b>important caveat</b> of this approach is that, in order to get a trained model that is equivalent to the same model trained without Data Parallelism, the user must scale either the learning rate or the desired batch size as a function of the number of replicas. See [https://discuss.pytorch.org/t/should-we-split-batch-size-according-to-ngpu-per-node-when-distributeddataparallel/72769/13 this discussion] for more information.
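The synchronous variant of this scheme can be sketched in plain Python (a toy 1-D model invented for illustration, not a PyTorch API): each replica computes a gradient on its own chunk, the gradients are averaged, and every replica applies the same update. With equal-sized chunks, one parallel step reproduces one full-batch serial step, which is exactly why the learning rate or batch size must be rescaled when parallelism changes the effective global batch:

```python
from statistics import mean

# Toy 1-D "model" y = w * x trained with squared error; the gradient
# of (w*x - y)**2 with respect to w is 2 * x * (w*x - y).
def grad(w, x, y):
    return 2 * x * (w * x - y)

def sync_data_parallel_step(w, chunks, lr):
    # Each "replica" computes the mean gradient on its own chunk of data;
    # the gradients are then averaged and one shared update is applied.
    replica_grads = [mean(grad(w, x, y) for x, y in chunk) for chunk in chunks]
    return w - lr * mean(replica_grads)

# Four examples of y = 3x, split across two replicas.
data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0), (4.0, 12.0)]
w_parallel = sync_data_parallel_step(0.0, [data[:2], data[2:]], lr=0.01)

# A single worker taking one step on the full batch gives the same result.
w_serial = 0.0 - 0.01 * mean(grad(0.0, x, y) for x, y in data)

print(w_parallel, w_serial)
```

Real data-parallel training differs in that each replica typically keeps the original per-replica batch size, so the effective global batch grows with ''N'', and that is what the learning-rate or batch-size rescaling compensates for.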
<!--T:253-->

<!--T:254-->
In the example that follows, we adapt the single GPU code from the previous section to use Data Parallelism. This task is fairly small - with a batch size of 512 images, our model takes up about 1GB of GPU memory space, and it uses only about 6% of its compute capacity during training. This is a model that '''should not''' be trained on our clusters. However, using Data Parallelism, we can fit up to 14 or 15 replicas of this model on a V100 GPU with 16GB memory and increase our resource usage, while getting a nice speed-up. We use Nvidia's [https://docs.nvidia.com/deploy/mps/index.html Multi-Process Service (MPS)], along with [https://docs.computecanada.ca/wiki/MPI MPI] to efficiently place multiple model replicas on one GPU:
</translate>

echo "starting training..."
time srun --cpus-per-task=$SLURM_CPUS_PER_TASK python cifar10-gpu-mps.py --batch_size=512 --num_workers=0
}}
<translate>
== PyTorch with multiple GPUs == <!--T:63-->

=== Issue with DistributedDataParallel and PyTorch 1.10 === <!--T:305-->

<!--T:306-->
There is a known issue with our PyTorch 1.10 wheel <code>torch-1.10.0+computecanada</code>. Multi-GPU code that uses [[PyTorch#Using_DistributedDataParallel|DistributedDataParallel]] running with this PyTorch version may fail unpredictably if the backend is set to <code>'nccl'</code> or <code>'gloo'</code>. We recommend using our latest PyTorch build instead of version 1.10 on all GP clusters.

=== Data parallelism with multiple GPUs === <!--T:64-->
Data Parallelism, in this context, refers to methods to perform training over multiple replicas of a model in parallel, where each replica receives a different chunk of training data at each iteration. Gradients are then aggregated at the end of an iteration and the parameters of all replicas are updated in a synchronous or asynchronous fashion, depending on the method. Using this approach may provide a significant speed-up by iterating through all examples in a large dataset approximately ''N'' times faster, where ''N'' is the number of model replicas. An important caveat of this approach is that, in order to get a trained model that is equivalent to the same model trained without Data Parallelism, the user must scale either the learning rate or the desired batch size as a function of the number of replicas. See [https://discuss.pytorch.org/t/should-we-split-batch-size-according-to-ngpu-per-node-when-distributeddataparallel/72769/13 this discussion] for more information. In the multiple-GPU case, each GPU hosts a replica of your model. Consequently, the model must be small enough to fit inside the memory of a single GPU. Refer to the [[PyTorch#Model_parallelism_with_multiple_GPUs|Model Parallelism]] section for options to train very large models that do not fit inside a single GPU.

<!--T:315-->
There are several ways to perform Data Parallelism using PyTorch. This section features tutorials on three of them: using the '''DistributedDataParallel''' class, using the '''PyTorch Lightning''' package, and using the '''Horovod''' package.
====Using DistributedDataParallel==== <!--T:65-->

<!--T:66-->
The '''DistributedDataParallel''' class is the way [https://pytorch.org/tutorials/intermediate/ddp_tutorial.html#comparison-between-dataparallel-and-distributeddataparallel recommended by PyTorch maintainers] to use multiple GPUs, whether they are all on a single node, or distributed across multiple nodes.
</translate>

|contents=
#!/bin/bash
#SBATCH --nodes 1
#SBATCH --gres=gpu:2 # Request 2 GPU "generic resources".
#SBATCH --tasks-per-node=2 # Request 1 process per GPU. You will get 1 CPU per process by default. Request more CPUs with the "cpus-per-task" parameter to enable multiple data-loader workers to load data in parallel.
#SBATCH --mem=8G
module load python # Using Default Python version - Make sure to choose a version that suits your application
srun -N $SLURM_NNODES -n $SLURM_NNODES bash << EOF
virtualenv --no-download $SLURM_TMPDIR/env
source $SLURM_TMPDIR/env/bin/activate
pip install torchvision --no-index
EOF

export TORCH_NCCL_ASYNC_ERROR_HANDLING=1
export MASTER_ADDR=$(hostname) # Store the master node’s IP address in the MASTER_ADDR environment variable.

echo "r$SLURM_NODEID Launching python script"

# The $((SLURM_NTASKS_PER_NODE * SLURM_JOB_NUM_NODES)) expression tells the script how many processes are available for this execution. "srun" executes the script <tasks-per-node * nodes> times
source $SLURM_TMPDIR/env/bin/activate
srun python pytorch-ddp-test.py --init_method tcp://$MASTER_ADDR:3456 --world_size $((SLURM_NTASKS_PER_NODE * SLURM_JOB_NUM_NODES)) --batch_size 256
}}
<translate>

|contents=
#!/bin/bash
#SBATCH --nodes 1
#SBATCH --gres=gpu:2 # Request 2 GPU "generic resources".
#SBATCH --tasks-per-node=2 # Request 1 process per GPU. You will get 1 CPU per process by default. Request more CPUs with the "cpus-per-task" parameter to enable multiple data-loader workers to load data in parallel.
#SBATCH --mem=8G
pip install torchvision pytorch-lightning --no-index

export TORCH_NCCL_ASYNC_ERROR_HANDLING=1

# PyTorch Lightning will query the environment to figure out if it is running inside a SLURM batch job
net = Net()

""" Here we initialize a Trainer() explicitly with 1 node and 2 GPUs per node.
To make this script more generic, you can use torch.cuda.device_count() to set the number of GPUs
and you can use int(os.environ.get("SLURM_JOB_NUM_NODES")) to set the number of nodes.
We also disable the progress bar to avoid writing it to the logs,
which can cause issues due to updating logs too frequently."""

trainer = pl.Trainer(accelerator="gpu", devices=2, num_nodes=1, strategy='ddp', max_epochs = args.max_epochs, enable_progress_bar=False)

transform_train = transforms.Compose([transforms.ToTensor(),transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
|contents=
#!/bin/bash
#SBATCH --nodes 1
#SBATCH --gres=gpu:2 # Request 2 GPU "generic resources".
#SBATCH --tasks-per-node=2 # Request 1 process per GPU. You will get 1 CPU per process by default. Request more CPUs with the "cpus-per-task" parameter to enable multiple data-loader workers to load data in parallel.
pip install torch torchvision horovod --no-index

export TORCH_NCCL_ASYNC_ERROR_HANDLING=1

srun python pytorch_horovod.py --batch_size 256
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

main()
}}
<translate>
=== Model parallelism with multiple GPUs === <!--T:316-->
In cases where a model is too large to fit inside a [[PyTorch#PyTorch_with_a_single_GPU|single GPU]], you can split it into multiple parts and load each one onto a separate GPU. In the example below, we revisit the code example from previous sections to illustrate how this works: we will split a Convolutional Neural Network in two parts - the convolutional/pooling layers and the densely connected feedforward layers. This job will request 2 GPUs and each of the two parts of the model will be loaded on its own GPU. We will also add code to perform [https://pytorch.org/docs/stable/pipeline.html?highlight=pipeline pipeline parallelism] and minimize as much as possible the amount of time the second GPU sits idle waiting for the outputs of the first. To do this, we will create a separate <code>nn.Module</code> for each part of our model, create a sequence of modules by wrapping our model parts with <code>nn.Sequential</code>, then use <code>torch.distributed.pipeline.sync.Pipe</code> to break each input batch into chunks and feed them in parallel to all parts of our model.
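The pipelining idea - break the batch into chunks so the second stage can start working as soon as the first chunk is ready, instead of waiting for the whole batch - can be sketched in plain Python. The two toy stages below stand in for the two model parts; this is an invented illustration, not the <code>Pipe</code> API itself:

```python
from concurrent.futures import ThreadPoolExecutor
from queue import Queue

# Toy two-stage "model": stage1 plays the role of the convolutional part
# on the first GPU, stage2 the feedforward part on the second GPU.
def stage1(x):
    return x * 2

def stage2(x):
    return x + 1

def pipeline(batch, chunks):
    size = -(-len(batch) // chunks)  # ceiling division: chunk size
    parts = [batch[i:i + size] for i in range(0, len(batch), size)]
    q, results = Queue(), []

    def producer():  # stage1 streams each finished chunk to stage2
        for p in parts:
            q.put([stage1(x) for x in p])
        q.put(None)  # sentinel: no more chunks

    def consumer():  # stage2 starts as soon as the first chunk arrives
        while (p := q.get()) is not None:
            results.extend(stage2(x) for x in p)

    with ThreadPoolExecutor(max_workers=2) as pool:
        pool.submit(producer)
        pool.submit(consumer)  # the with-block waits for both to finish
    return results

print(pipeline([1, 2, 3, 4], chunks=2))  # [3, 5, 7, 9]
```

The output is identical to feeding the whole batch through both stages sequentially; the benefit is that, with real work in each stage, both stages are busy most of the time instead of each waiting for the other.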
<!--T:317-->
{{File
|name=pytorch-modelpar-pipelined-rpc.sh

#SBATCH --time=0:10:00
#SBATCH --output=%N-%j.out
#SBATCH --account=<your account>
<!--T:318-->
module load python # Using Default Python version - Make sure to choose a version that suits your application
virtualenv --no-download $SLURM_TMPDIR/env

pip install torch torchvision --no-index

<!--T:319-->
# This is needed to initialize pytorch's RPC module, required for the Pipe class which we'll use for Pipeline Parallelism
export MASTER_ADDR=$(hostname)

}}
<!--T:320-->
{{File
|name=pytorch-modelpar-pipelined-rpc.py

import time

<!--T:321-->
import torch
import torch.nn as nn

from torch.distributed.pipeline.sync import Pipe

<!--T:322-->
import torchvision
import torchvision.transforms as transforms

from torch.utils.data import DataLoader

<!--T:323-->
import argparse

<!--T:324-->
parser = argparse.ArgumentParser(description='cifar10 classification models, single node model parallelism test')
parser.add_argument('--lr', default=0.1, help='')
parser.add_argument('--batch_size', type=int, default=512, help='')
<!--T:325-->
def main():

    <!--T:326-->
    args = parser.parse_args()

    <!--T:327-->
    # Convolutional + pooling part of the model
    class ConvPart(nn.Module):

        <!--T:328-->
        def __init__(self):
            super(ConvPart, self).__init__()

            <!--T:329-->
            self.conv1 = nn.Conv2d(3, 6, 5)
            self.pool = nn.MaxPool2d(2, 2)
            self.conv2 = nn.Conv2d(6, 16, 5)
            self.relu = nn.ReLU()

        <!--T:330-->
        def forward(self, x):
            x = self.pool(self.relu(self.conv1(x)))
            x = self.pool(self.relu(self.conv2(x)))
            x = x.view(-1, 16 * 5 * 5)

            <!--T:331-->
            return x

    <!--T:332-->
    # Dense feedforward part of the model
    class MLPPart(nn.Module):

        <!--T:333-->
        def __init__(self):
            super(MLPPart, self).__init__()

            <!--T:334-->
            self.fc1 = nn.Linear(16 * 5 * 5, 120)
            self.fc2 = nn.Linear(120, 84)
            self.fc3 = nn.Linear(84, 10)
            self.relu = nn.ReLU()

        <!--T:335-->
        def forward(self, x):
            x = self.relu(self.fc1(x))
            x = self.relu(self.fc2(x))
            x = self.fc3(x)

            <!--T:336-->
            return x

    <!--T:337-->
    torch.distributed.rpc.init_rpc('worker', rank=0, world_size=1) # initializing RPC is required by Pipe we use below

    <!--T:338-->
    part1 = ConvPart().to('cuda:0') # Load part1 on the first GPU
    part2 = MLPPart().to('cuda:1') # Load part2 on the second GPU

    <!--T:339-->
    net = nn.Sequential(part1, part2) # Pipe requires all modules be wrapped with nn.Sequential()

    <!--T:340-->
    net = Pipe(net, chunks=32) # Wrap with Pipe to perform Pipeline Parallelism

    <!--T:341-->
    criterion = nn.CrossEntropyLoss().to('cuda:1') # Load the loss function on the last GPU
    optimizer = optim.SGD(net.parameters(), lr=args.lr)

    <!--T:342-->
    transform_train = transforms.Compose([transforms.ToTensor(),transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

    <!--T:343-->
    dataset_train = CIFAR10(root='./data', train=True, download=False, transform=transform_train)

    <!--T:344-->
    train_loader = DataLoader(dataset_train, batch_size=args.batch_size, num_workers=args.num_workers)

    <!--T:345-->
    perf = []

    <!--T:346-->
    total_start = time.time()

    <!--T:347-->
    for batch_idx, (inputs, targets) in enumerate(train_loader):

        <!--T:348-->
        start = time.time()

        <!--T:349-->
        inputs = inputs.to('cuda:0')
        targets = targets.to('cuda:1')

        <!--T:350-->
        # Models wrapped with Pipe() return a RRef object. Since the example is single node, all values are local to the node and we can grab them
        outputs = net(inputs).local_value()
        loss = criterion(outputs, targets)

        <!--T:351-->
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        print(f"Loss: {loss.item()}")

        <!--T:352-->
        batch_time = time.time() - start

        <!--T:353-->
        images_per_sec = args.batch_size/batch_time

        <!--T:354-->
        perf.append(images_per_sec)

    <!--T:355-->
    total_time = time.time() - total_start

<!--T:356-->
if __name__=='__main__':
    main()
Line 1,150: | Line 1,225: | ||
}} | }} | ||
===Combining model and data parallelism=== <!--T:357-->
When a model is too large to fit in a single GPU and, additionally, must be trained on a very large training set, combining model parallelism with data parallelism becomes a viable option for achieving high performance. The idea is straightforward: split the large model into smaller parts, give each part its own GPU, and perform pipeline parallelism on the inputs; then create replicas of this whole process and train them in parallel over separate subsets of the training set. As in the [[PyTorch#Data_Parallelism_with_Multiple_GPUs|example from the previous section]], gradients are computed independently within each replica, then an aggregation of these gradients is used to update all replicas synchronously or asynchronously, depending on the method used. The main difference here is that each model replica spans more than one GPU.
==== Using Torch RPC and DDP ==== <!--T:358-->

<!--T:359-->
The following example combines the examples from the previous sections. Here we use Torch RPC and DistributedDataParallel together to split a model in two parts, then train four replicas of the model, distributed over two nodes, in parallel. In other words, we will have 2 model replicas spanning 2 GPUs each on each node. An '''important caveat''' of using Torch RPC is that it currently supports splitting models only within a single node. For very large models that do not fit inside the combined memory space of all GPUs of a single compute node, see the next section on [[PyTorch#DeepSpeed|DeepSpeed]].
<!--T:360-->
{{File
|name=pytorch-model-data-par.sh
|lang="bash"
|contents=
#!/bin/bash | |||
#SBATCH --nodes 2 | |||
#SBATCH --gres=gpu:4 # Request 4 GPUs per node | |||
#SBATCH --tasks-per-node=2 # Request one task per MODEL per node | |||
#SBATCH --cpus-per-task=1 # change this parameter to 2,4,6,... and increase "--num_workers" accordingly to see the effect on performance | |||
#SBATCH --mem=16G | |||
#SBATCH --time=0:10:00 | |||
#SBATCH --output=%N-%j.out | |||
#SBATCH --account=<your account> | |||
<!--T:361--> | |||
module load StdEnv/2020 gcc/11.3.0 | |||
module load python # Using Default Python version - Make sure to choose a version that suits your application, python/3.10.2 works with this demo | |||
module load cuda/11.8.0 | |||
virtualenv --no-download $SLURM_TMPDIR/env | |||
source $SLURM_TMPDIR/env/bin/activate | |||
pip install torch torchvision --no-index | |||
<!--T:362--> | |||
export MAIN_NODE=$(hostname) | |||
<!--T:363--> | |||
echo "starting training..." | |||
<!--T:364--> | |||
srun python pytorch-model-data-par.py --init_method tcp://$MAIN_NODE:3456 --world_size $SLURM_NTASKS --batch_size 512 | |||
}} | |||
<!--T:365--> | |||
{{File | |||
|name=pytorch-model-data-par.py | |||
|lang="python" | |||
|contents= | |||
import time | |||
import os | |||
<!--T:366--> | |||
import torch | |||
import torch.nn as nn | |||
import torch.optim as optim | |||
from torch.distributed.pipeline.sync import Pipe | |||
<!--T:367--> | |||
import torchvision | |||
import torchvision.transforms as transforms | |||
from torchvision.datasets import CIFAR10 | |||
from torch.utils.data import DataLoader | |||
<!--T:368--> | |||
import torch.distributed as dist | |||
import torch.utils.data.distributed | |||
<!--T:369--> | |||
import argparse | |||
<!--T:370--> | |||
parser = argparse.ArgumentParser(description='cifar10 classification models, distributed data & model parallel test') | |||
parser.add_argument('--lr', default=0.1, help='') | |||
parser.add_argument('--batch_size', type=int, default=768, help='') | |||
parser.add_argument('--max_epochs', type=int, default=4, help='') | |||
parser.add_argument('--num_workers', type=int, default=0, help='') | |||
<!--T:371--> | |||
parser.add_argument('--init_method', default='tcp://127.0.0.1:3456', type=str, help='') | |||
parser.add_argument('--dist-backend', default='mpi', type=str, help='') | |||
parser.add_argument('--world_size', default=1, type=int, help='') | |||
parser.add_argument('--distributed', action='store_true', help='') | |||
<!--T:372--> | |||
def main(): | |||
<!--T:373--> | |||
args = parser.parse_args() | |||
<!--T:374--> | |||
# Convolutional + pooling part of the model | |||
class ConvPart(nn.Module): | |||
<!--T:375--> | |||
def __init__(self): | |||
super(ConvPart, self).__init__() | |||
<!--T:376--> | |||
self.conv1 = nn.Conv2d(3, 6, 5) | |||
self.pool = nn.MaxPool2d(2, 2) | |||
self.conv2 = nn.Conv2d(6, 16, 5) | |||
self.relu = nn.ReLU() | |||
<!--T:377--> | |||
def forward(self, x): | |||
x = self.pool(self.relu(self.conv1(x))) | |||
x = self.pool(self.relu(self.conv2(x))) | |||
x = x.view(-1, 16 * 5 * 5) | |||
<!--T:378--> | |||
return x | |||
<!--T:379--> | |||
# Dense feedforward part of the model | |||
class MLPPart(nn.Module): | |||
<!--T:380--> | |||
def __init__(self): | |||
super(MLPPart, self).__init__() | |||
<!--T:381--> | |||
self.fc1 = nn.Linear(16 * 5 * 5, 120) | |||
self.fc2 = nn.Linear(120, 84) | |||
self.fc3 = nn.Linear(84, 10) | |||
self.relu = nn.ReLU() | |||
<!--T:382--> | |||
def forward(self, x): | |||
x = self.relu(self.fc1(x)) | |||
x = self.relu(self.fc2(x)) | |||
x = self.fc3(x) | |||
<!--T:383--> | |||
return x | |||
<!--T:384--> | |||
ngpus_per_node = torch.cuda.device_count() | |||
local_rank = int(os.environ.get("SLURM_LOCALID")) | |||
rank = int(os.environ.get("SLURM_NODEID"))*(ngpus_per_node//2) + local_rank # Divide ngpus_per_node by the number of model parts | |||
<!--T:385--> | |||
os.environ['MASTER_ADDR'] = '127.0.0.1' # Each model replica will run its own RPC server to run pipeline parallelism | |||
os.environ['MASTER_PORT'] = str(34567 + local_rank) # Make sure each RPC server starts on a different port | |||
torch.distributed.rpc.init_rpc('worker', rank=0, world_size=1) # Different replicas won't communicate through RPC, but through DDP | |||
<!--T:386--> | |||
dist.init_process_group(backend=args.dist_backend, init_method=args.init_method, world_size=args.world_size, rank=rank) # Initialize Data Parallelism communications | |||
<!--T:387--> | |||
part1 = ConvPart().cuda(local_rank) # First part of the model goes on the first GPU of each process | |||
part2 = MLPPart().cuda(local_rank + 1) # Second part goes on the second GPU of each process | |||
<!--T:388--> | |||
net = nn.Sequential(part1,part2) | |||
<!--T:389--> | |||
net = Pipe(net, chunks=32, checkpoint="never") | |||
<!--T:390--> | |||
net = torch.nn.parallel.DistributedDataParallel(net) | |||
<!--T:391--> | |||
criterion = nn.CrossEntropyLoss().cuda(local_rank + 1) # Loss function goes on the second GPU of each process | |||
optimizer = optim.SGD(net.parameters(), lr=args.lr) | |||
<!--T:392--> | |||
transform_train = transforms.Compose([transforms.ToTensor(),transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]) | |||
<!--T:393--> | |||
dataset_train = CIFAR10(root='./data', train=True, download=False, transform=transform_train) | |||
<!--T:394--> | |||
train_sampler = torch.utils.data.distributed.DistributedSampler(dataset_train) | |||
train_loader = DataLoader(dataset_train, batch_size=args.batch_size, shuffle=(train_sampler is None), num_workers=args.num_workers, sampler=train_sampler) | |||
<!--T:395--> | |||
for epoch in range(args.max_epochs): | |||
<!--T:396--> | |||
train_sampler.set_epoch(epoch) | |||
<!--T:397--> | |||
train(epoch, net, criterion, optimizer, train_loader, rank, local_rank) | |||
<!--T:398--> | |||
def train(epoch, net, criterion, optimizer, train_loader, train_rank, model_rank): | |||
<!--T:399--> | |||
train_loss = 0 | |||
correct = 0 | |||
total = 0 | |||
<!--T:400--> | |||
epoch_start = time.time() | |||
<!--T:401--> | |||
for batch_idx, (inputs, targets) in enumerate(train_loader): | |||
<!--T:402--> | |||
start = time.time() | |||
<!--T:403--> | |||
inputs = inputs.cuda(model_rank) | |||
targets = targets.cuda(model_rank + 1) | |||
<!--T:404--> | |||
outputs = net(inputs).local_value() | |||
loss = criterion(outputs, targets) | |||
<!--T:405--> | |||
optimizer.zero_grad() | |||
loss.backward() | |||
optimizer.step() | |||
print(f"From Rank {train_rank} - Loss: {loss.item()}") | |||
<!--T:406--> | |||
batch_time = time.time() - start | |||
<!--T:407--> | |||
if __name__=='__main__': | |||
main() | |||
<!--T:408--> | |||
}} | |||
=== DeepSpeed === <!--T:409--> | |||
[https://www.deepspeed.ai DeepSpeed] is a deep learning training optimization library, providing the means to train massive billion-parameter models at scale. Fully compatible with PyTorch, DeepSpeed features implementations of novel memory-efficient distributed training methods based on the [https://arxiv.org/abs/1910.02054 Zero Redundancy Optimizer (ZeRO)] concept. Through ZeRO, DeepSpeed can distribute the storage and computation of different elements of a training task - such as optimizer states, model weights, model gradients and model activations - across multiple devices, including GPUs, CPUs, local hard disks, and combinations of these. This "pooling" of resources, notably for storage, allows models with massive numbers of parameters to be trained efficiently across multiple nodes, without explicitly handling model, pipeline or data parallelism in your code. The examples below show how to take advantage of DeepSpeed and its implementations of ZeRO variants through its PyTorch Lightning interface for ease of use.
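When DeepSpeed is used directly, outside the Lightning interface shown below, ZeRO behaviour is usually expressed in a JSON configuration. The fragment below is only a sketch for orientation: the key names follow DeepSpeed's configuration schema, but the values are illustrative placeholders, not recommendations.

```python
# Sketch of a DeepSpeed-style ZeRO Stage 3 configuration with CPU offloading.
# Key names follow DeepSpeed's config schema; values here are illustrative only.
ds_config = {
    "train_micro_batch_size_per_gpu": 256,
    "zero_optimization": {
        "stage": 3,                              # shard optimizer states, gradients and parameters
        "offload_optimizer": {"device": "cpu"},  # keep optimizer states in host memory
        "offload_param": {"device": "cpu"},      # keep parameters in host memory when idle
    },
}

# In plain PyTorch, this dict (or an equivalent JSON file) would be passed
# to deepspeed.initialize() when wrapping the model.
print(ds_config["zero_optimization"])
```

The Lightning examples below express the same choices through <code>DeepSpeedStrategy</code> arguments instead of a JSON file.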
==== ZeRO on GPU ==== <!--T:410--> | |||
<!--T:455--> | |||
In the following example, we use ZeRO Stage 3 to train a model using a "pool" of 2 GPUs. Stage 3 means that all three of the optimizer states, the model parameters and the model gradients are split (sharded) across both GPUs. This is more memory-efficient than pure data parallelism, where each GPU would hold a full replica of the model. Using DeepSpeed's optimizer <code>FusedAdam</code> instead of a native PyTorch one, performance is comparable with pure data parallelism. DeepSpeed's optimizers are JIT compiled at run time, so you must load the module <code>cuda/<version></code>, where '''<version>''' matches the version used to build the PyTorch installation you are using.
<!--T:456--> | |||
{{File | |||
|name=deepspeed-stage3.sh | |||
|lang="bash" | |||
|contents= | |||
#!/bin/bash | |||
#SBATCH --nodes 1 | |||
#SBATCH --gres=gpu:2       # Request 2 GPU "generic resources".
#SBATCH --tasks-per-node=2 # Request 1 process per GPU. You will get 1 CPU per process by default. Request more CPUs with the "cpus-per-task" parameter to enable multiple data-loader workers to load data in parallel. | |||
#SBATCH --mem=32G | |||
#SBATCH --time=0-00:20 | |||
#SBATCH --output=%N-%j.out | |||
#SBATCH --account=<your account> | |||
<!--T:457--> | |||
module load python cuda # CUDA must be loaded if using a DeepSpeed optimizer | |||
virtualenv --no-download $SLURM_TMPDIR/env | |||
source $SLURM_TMPDIR/env/bin/activate | |||
pip install torchvision pytorch-lightning deepspeed --no-index | |||
<!--T:458--> | |||
export TORCH_NCCL_ASYNC_ERROR_HANDLING=1
<!--T:459--> | |||
# PyTorch Lightning will query the environment to figure out if it is running inside a SLURM batch job | |||
# If it is, it expects the user to have requested one task per GPU. | |||
# If you do not ask for 1 task per GPU, and you do not run your script with "srun", your job will fail! | |||
<!--T:460--> | |||
srun python deepspeed-stage3.py --batch_size 256 | |||
}} | |||
<!--T:461--> | |||
{{File | |||
|name=deepspeed-stage3.py | |||
|lang="python" | |||
|contents= | |||
import torch | |||
from torch import nn | |||
import torch.nn.functional as F | |||
<!--T:561--> | |||
import pytorch_lightning as pl | |||
<!--T:562--> | |||
import torchvision | |||
import torchvision.transforms as transforms | |||
from torchvision.datasets import CIFAR10 | |||
from torch.utils.data import DataLoader | |||
<!--T:563--> | |||
from deepspeed.ops.adam import FusedAdam | |||
from pytorch_lightning.strategies import DeepSpeedStrategy | |||
<!--T:564--> | |||
import argparse | |||
<!--T:565--> | |||
parser = argparse.ArgumentParser(description='cifar10 classification models, deepspeed stage 3 test')
parser.add_argument('--lr', default=0.1, help='') | |||
parser.add_argument('--max_epochs', type=int, default=2, help='') | |||
parser.add_argument('--batch_size', type=int, default=768, help='') | |||
parser.add_argument('--num_workers', type=int, default=0, help='') | |||
<!--T:566--> | |||
def main(): | |||
print("Starting...") | |||
<!--T:567--> | |||
args = parser.parse_args() | |||
<!--T:568--> | |||
class ConvPart(nn.Module): | |||
<!--T:569--> | |||
def __init__(self): | |||
super(ConvPart, self).__init__() | |||
<!--T:570--> | |||
self.conv1 = nn.Conv2d(3, 6, 5) | |||
self.pool = nn.MaxPool2d(2, 2) | |||
self.conv2 = nn.Conv2d(6, 16, 5) | |||
self.relu = nn.ReLU() | |||
<!--T:571--> | |||
def forward(self, x): | |||
x = self.pool(self.relu(self.conv1(x))) | |||
x = self.pool(self.relu(self.conv2(x))) | |||
x = x.view(-1, 16 * 5 * 5) | |||
<!--T:572--> | |||
return x | |||
<!--T:573--> | |||
# Dense feedforward part of the model | |||
class MLPPart(nn.Module): | |||
<!--T:574--> | |||
def __init__(self): | |||
super(MLPPart, self).__init__() | |||
<!--T:575--> | |||
self.fc1 = nn.Linear(16 * 5 * 5, 120) | |||
self.fc2 = nn.Linear(120, 84) | |||
self.fc3 = nn.Linear(84, 10) | |||
self.relu = nn.ReLU() | |||
<!--T:576--> | |||
def forward(self, x): | |||
x = self.relu(self.fc1(x)) | |||
x = self.relu(self.fc2(x)) | |||
x = self.fc3(x) | |||
<!--T:577--> | |||
return x | |||
<!--T:578--> | |||
class Net(pl.LightningModule): | |||
<!--T:579--> | |||
def __init__(self): | |||
super(Net, self).__init__() | |||
<!--T:580--> | |||
self.conv_part = ConvPart() | |||
self.mlp_part = MLPPart() | |||
<!--T:581--> | |||
def configure_sharded_model(self): | |||
<!--T:582--> | |||
self.block = nn.Sequential(self.conv_part, self.mlp_part) | |||
<!--T:583--> | |||
def forward(self, x): | |||
x = self.block(x) | |||
<!--T:584--> | |||
return x | |||
<!--T:585--> | |||
def training_step(self, batch, batch_idx): | |||
x, y = batch | |||
y_hat = self(x) | |||
loss = F.cross_entropy(y_hat, y) | |||
return loss | |||
<!--T:586--> | |||
def configure_optimizers(self): | |||
return FusedAdam(self.parameters()) | |||
<!--T:587--> | |||
net = Net() | |||
<!--T:588--> | |||
""" Here we initialize a Trainer() explicitly with 1 node and 2 GPUs.
To make this script more generic, you can use torch.cuda.device_count() to set the number of GPUs,
and int(os.environ.get("SLURM_JOB_NUM_NODES")) to set the number of nodes."""
<!--T:589--> | |||
trainer = pl.Trainer(accelerator="gpu", devices=2, num_nodes=1, strategy="deepspeed_stage_3", max_epochs = args.max_epochs) | |||
<!--T:590--> | |||
transform_train = transforms.Compose([transforms.ToTensor(),transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]) | |||
<!--T:591--> | |||
dataset_train = CIFAR10(root='./data', train=True, download=False, transform=transform_train) | |||
<!--T:592--> | |||
train_loader = DataLoader(dataset_train, batch_size=args.batch_size, num_workers=args.num_workers) | |||
<!--T:593--> | |||
trainer.fit(net,train_loader) | |||
<!--T:594--> | |||
if __name__=='__main__': | |||
main() | |||
<!--T:595--> | |||
}} | |||
==== ZeRO with offload to CPU ==== <!--T:411--> | |||
<!--T:498--> | |||
In this example, we again use ZeRO Stage 3, but this time we enable offloading model parameters and optimizer states to the CPU. This means that the compute node's memory will be available to store these tensors while they are not required by any GPU computation, and additionally, that optimizer steps will be computed on the CPU. For practical purposes, you can think of this as though your GPUs were gaining an extra 32GB of memory (the amount of host memory requested by this job). This takes even more pressure off GPU memory and would allow you to increase your batch size or the size of the model, for example. Using DeepSpeed's optimizer <code>DeepSpeedCPUAdam</code> instead of a native PyTorch one, performance remains on par with pure data parallelism. DeepSpeed's optimizers are JIT compiled at run time, so you must load the module <code>cuda/<version></code>, where '''<version>''' matches the version used to build the PyTorch installation you are using.
<!--T:412--> | |||
{{File | |||
|name=deepspeed-stage3-offload-cpu.sh | |||
|lang="bash" | |||
|contents= | |||
#!/bin/bash | |||
#SBATCH --nodes 1 | |||
#SBATCH --gres=gpu:2       # Request 2 GPU "generic resources".
#SBATCH --tasks-per-node=2 # Request 1 process per GPU. You will get 1 CPU per process by default. Request more CPUs with the "cpus-per-task" parameter to enable multiple data-loader workers to load data in parallel. | |||
#SBATCH --mem=32G | |||
#SBATCH --time=0-00:20 | |||
#SBATCH --output=%N-%j.out | |||
#SBATCH --account=<your account> | |||
<!--T:413--> | |||
module load python cuda # CUDA must be loaded if using ZeRO offloading to CPU or NVMe. Version must be the same used to compile PyTorch. | |||
virtualenv --no-download $SLURM_TMPDIR/env | |||
source $SLURM_TMPDIR/env/bin/activate | |||
pip install torchvision pytorch-lightning deepspeed --no-index | |||
<!--T:414--> | |||
export TORCH_NCCL_ASYNC_ERROR_HANDLING=1
<!--T:415--> | |||
# PyTorch Lightning will query the environment to figure out if it is running inside a SLURM batch job | |||
# If it is, it expects the user to have requested one task per GPU. | |||
# If you do not ask for 1 task per GPU, and you do not run your script with "srun", your job will fail! | |||
<!--T:416--> | |||
srun python deepspeed-stage3-offload-cpu.py --batch_size 256 | |||
}} | |||
<!--T:417--> | |||
{{File | |||
|name=deepspeed-stage3-offload-cpu.py | |||
|lang="python" | |||
|contents= | |||
<!--T:596--> | |||
import torch | |||
from torch import nn | |||
import torch.nn.functional as F | |||
<!--T:597--> | |||
import pytorch_lightning as pl | |||
<!--T:598--> | |||
import torchvision | |||
import torchvision.transforms as transforms | |||
from torchvision.datasets import CIFAR10 | |||
from torch.utils.data import DataLoader | |||
<!--T:599--> | |||
from deepspeed.ops.adam import DeepSpeedCPUAdam | |||
from pytorch_lightning.strategies import DeepSpeedStrategy | |||
<!--T:600--> | |||
import argparse | |||
<!--T:601--> | |||
parser = argparse.ArgumentParser(description='cifar10 classification models, deepspeed offload to cpu test') | |||
parser.add_argument('--lr', default=0.1, help='') | |||
parser.add_argument('--max_epochs', type=int, default=2, help='') | |||
parser.add_argument('--batch_size', type=int, default=768, help='') | |||
parser.add_argument('--num_workers', type=int, default=0, help='') | |||
<!--T:602--> | |||
def main(): | |||
print("Starting...") | |||
<!--T:603--> | |||
args = parser.parse_args() | |||
<!--T:604--> | |||
class ConvPart(nn.Module): | |||
<!--T:605--> | |||
def __init__(self): | |||
super(ConvPart, self).__init__() | |||
<!--T:606--> | |||
self.conv1 = nn.Conv2d(3, 6, 5) | |||
self.pool = nn.MaxPool2d(2, 2) | |||
self.conv2 = nn.Conv2d(6, 16, 5) | |||
self.relu = nn.ReLU() | |||
<!--T:607--> | |||
def forward(self, x): | |||
x = self.pool(self.relu(self.conv1(x))) | |||
x = self.pool(self.relu(self.conv2(x))) | |||
x = x.view(-1, 16 * 5 * 5) | |||
<!--T:608--> | |||
return x | |||
<!--T:609--> | |||
# Dense feedforward part of the model | |||
class MLPPart(nn.Module): | |||
<!--T:610--> | |||
def __init__(self): | |||
super(MLPPart, self).__init__() | |||
<!--T:611--> | |||
self.fc1 = nn.Linear(16 * 5 * 5, 120) | |||
self.fc2 = nn.Linear(120, 84) | |||
self.fc3 = nn.Linear(84, 10) | |||
self.relu = nn.ReLU() | |||
<!--T:612--> | |||
def forward(self, x): | |||
x = self.relu(self.fc1(x)) | |||
x = self.relu(self.fc2(x)) | |||
x = self.fc3(x) | |||
<!--T:613--> | |||
return x | |||
<!--T:614--> | |||
class Net(pl.LightningModule): | |||
<!--T:615--> | |||
def __init__(self): | |||
super(Net, self).__init__() | |||
<!--T:616--> | |||
self.conv_part = ConvPart() | |||
self.mlp_part = MLPPart() | |||
<!--T:617--> | |||
def configure_sharded_model(self): | |||
<!--T:618--> | |||
self.block = nn.Sequential(self.conv_part, self.mlp_part) | |||
<!--T:619--> | |||
def forward(self, x): | |||
x = self.block(x) | |||
<!--T:620--> | |||
return x | |||
<!--T:621--> | |||
def training_step(self, batch, batch_idx): | |||
x, y = batch | |||
y_hat = self(x) | |||
loss = F.cross_entropy(y_hat, y) | |||
return loss | |||
<!--T:622--> | |||
def configure_optimizers(self): | |||
return DeepSpeedCPUAdam(self.parameters()) | |||
<!--T:623--> | |||
net = Net() | |||
<!--T:624--> | |||
""" Here we initialize a Trainer() explicitly with 1 node and 2 GPUs.
To make this script more generic, you can use torch.cuda.device_count() to set the number of GPUs,
and int(os.environ.get("SLURM_JOB_NUM_NODES")) to set the number of nodes."""
<!--T:625--> | |||
trainer = pl.Trainer(accelerator="gpu", devices=2, num_nodes=1, strategy=DeepSpeedStrategy( | |||
stage=3, | |||
offload_optimizer=True, | |||
offload_parameters=True, | |||
), max_epochs = args.max_epochs) | |||
<!--T:626--> | |||
transform_train = transforms.Compose([transforms.ToTensor(),transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]) | |||
<!--T:627--> | |||
dataset_train = CIFAR10(root='./data', train=True, download=False, transform=transform_train) | |||
<!--T:628--> | |||
train_loader = DataLoader(dataset_train, batch_size=args.batch_size, num_workers=args.num_workers) | |||
<!--T:629--> | |||
trainer.fit(net,train_loader) | |||
<!--T:630--> | |||
if __name__=='__main__': | |||
main() | |||
<!--T:453--> | |||
}} | |||
==== ZeRO with offload to NVMe ==== <!--T:454--> | |||
In this example, we use ZeRO Stage 3 yet again, but this time we enable offloading model parameters and optimizer states to the local disk. This means that the compute node's local disk storage will be available to store these tensors while they are not required by any GPU computation. As before, optimizer steps will be computed on the CPU. Again, for practical purposes, you can think of this as extending GPU memory by however much storage is available on the local disk, though this time the performance penalty is significant. This approach works best (i.e., performance degradation is least noticeable) on NVMe-enabled drives, which have higher throughput and faster response times, but it can be used with any type of storage.
<!--T:499--> | |||
{{File | |||
|name=deepspeed-stage3-offload-nvme.sh | |||
|lang="bash" | |||
|contents= | |||
#!/bin/bash | |||
#SBATCH --nodes 1 | |||
#SBATCH --gres=gpu:2       # Request 2 GPU "generic resources".
#SBATCH --tasks-per-node=2 # Request 1 process per GPU. You will get 1 CPU per process by default. Request more CPUs with the "cpus-per-task" parameter to enable multiple data-loader workers to load data in parallel. | |||
#SBATCH --mem=32G | |||
#SBATCH --time=0-00:20 | |||
#SBATCH --output=%N-%j.out | |||
#SBATCH --account=<your account> | |||
<!--T:500--> | |||
module load python cuda # CUDA must be loaded if using ZeRO offloading to CPU or NVMe. Version must be the same used to compile PyTorch. | |||
virtualenv --no-download $SLURM_TMPDIR/env | |||
source $SLURM_TMPDIR/env/bin/activate | |||
pip install torchvision pytorch-lightning deepspeed --no-index | |||
<!--T:501--> | |||
export TORCH_NCCL_ASYNC_ERROR_HANDLING=1
<!--T:502--> | |||
# PyTorch Lightning will query the environment to figure out if it is running inside a SLURM batch job | |||
# If it is, it expects the user to have requested one task per GPU. | |||
# If you do not ask for 1 task per GPU, and you do not run your script with "srun", your job will fail! | |||
<!--T:503--> | |||
srun python deepspeed-stage3-offload-nvme.py --batch_size 256 | |||
}} | |||
<!--T:504--> | |||
{{File | |||
|name=deepspeed-stage3-offload-nvme.py | |||
|lang="python" | |||
|contents= | |||
import os | |||
<!--T:631--> | |||
import torch | |||
from torch import nn | |||
import torch.nn.functional as F | |||
<!--T:632--> | |||
import pytorch_lightning as pl | |||
<!--T:633--> | |||
import torchvision | |||
import torchvision.transforms as transforms | |||
from torchvision.datasets import CIFAR10 | |||
from torch.utils.data import DataLoader | |||
<!--T:634--> | |||
from deepspeed.ops.adam import DeepSpeedCPUAdam | |||
from pytorch_lightning.strategies import DeepSpeedStrategy | |||
<!--T:635--> | |||
import argparse | |||
<!--T:636--> | |||
parser = argparse.ArgumentParser(description='cifar10 classification models, deepspeed offload to nvme test') | |||
parser.add_argument('--lr', default=0.1, help='') | |||
parser.add_argument('--max_epochs', type=int, default=2, help='') | |||
parser.add_argument('--batch_size', type=int, default=768, help='') | |||
parser.add_argument('--num_workers', type=int, default=0, help='') | |||
<!--T:637--> | |||
def main(): | |||
print("Starting...") | |||
<!--T:638--> | |||
args = parser.parse_args() | |||
<!--T:639--> | |||
class ConvPart(nn.Module): | |||
<!--T:640--> | |||
def __init__(self): | |||
super(ConvPart, self).__init__() | |||
<!--T:641--> | |||
self.conv1 = nn.Conv2d(3, 6, 5) | |||
self.pool = nn.MaxPool2d(2, 2) | |||
self.conv2 = nn.Conv2d(6, 16, 5) | |||
self.relu = nn.ReLU() | |||
<!--T:642--> | |||
def forward(self, x): | |||
x = self.pool(self.relu(self.conv1(x))) | |||
x = self.pool(self.relu(self.conv2(x))) | |||
x = x.view(-1, 16 * 5 * 5) | |||
<!--T:643--> | |||
return x | |||
<!--T:644--> | |||
# Dense feedforward part of the model | |||
class MLPPart(nn.Module): | |||
<!--T:645--> | |||
def __init__(self): | |||
super(MLPPart, self).__init__() | |||
<!--T:646--> | |||
self.fc1 = nn.Linear(16 * 5 * 5, 120) | |||
self.fc2 = nn.Linear(120, 84) | |||
self.fc3 = nn.Linear(84, 10) | |||
self.relu = nn.ReLU() | |||
<!--T:647--> | |||
def forward(self, x): | |||
x = self.relu(self.fc1(x)) | |||
x = self.relu(self.fc2(x)) | |||
x = self.fc3(x) | |||
<!--T:648--> | |||
return x | |||
<!--T:649--> | |||
class Net(pl.LightningModule): | |||
<!--T:650--> | |||
def __init__(self): | |||
super(Net, self).__init__() | |||
<!--T:651--> | |||
self.conv_part = ConvPart() | |||
self.mlp_part = MLPPart() | |||
<!--T:652--> | |||
def configure_sharded_model(self): | |||
<!--T:653--> | |||
self.block = nn.Sequential(self.conv_part, self.mlp_part) | |||
<!--T:654--> | |||
def forward(self, x): | |||
x = self.block(x) | |||
<!--T:655--> | |||
return x | |||
<!--T:656--> | |||
def training_step(self, batch, batch_idx): | |||
x, y = batch | |||
y_hat = self(x) | |||
loss = F.cross_entropy(y_hat, y) | |||
return loss | |||
<!--T:657--> | |||
def configure_optimizers(self): | |||
return DeepSpeedCPUAdam(self.parameters()) | |||
<!--T:658--> | |||
net = Net() | |||
<!--T:659--> | |||
""" Here we initialize a Trainer() explicitly with 1 node and 2 GPUs.
To make this script more generic, you can use torch.cuda.device_count() to set the number of GPUs,
and int(os.environ.get("SLURM_JOB_NUM_NODES")) to set the number of nodes."""
<!--T:660--> | |||
local_scratch = os.environ['SLURM_TMPDIR'] # Get path where local storage is mounted | |||
<!--T:661--> | |||
print(f'Offloading to: {local_scratch}') | |||
<!--T:662--> | |||
trainer = pl.Trainer(accelerator="gpu", devices=2, num_nodes=1, strategy=DeepSpeedStrategy( | |||
stage=3, | |||
offload_optimizer=True, | |||
offload_parameters=True, | |||
remote_device="nvme", | |||
offload_params_device="nvme", | |||
offload_optimizer_device="nvme", | |||
nvme_path=local_scratch, # Pass the path stored in the variable, not the literal string "local_scratch"
), max_epochs = args.max_epochs) | |||
<!--T:663--> | |||
transform_train = transforms.Compose([transforms.ToTensor(),transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]) | |||
<!--T:664--> | |||
dataset_train = CIFAR10(root='./data', train=True, download=False, transform=transform_train) | |||
<!--T:665--> | |||
train_loader = DataLoader(dataset_train, batch_size=args.batch_size, num_workers=args.num_workers) | |||
<!--T:666--> | |||
trainer.fit(net,train_loader) | |||
<!--T:667--> | |||
if __name__=='__main__': | |||
main() | |||
<!--T:668--> | |||
}} | |||
=Creating model checkpoints= <!--T:156--> | |||
Whether or not you expect your code to run for long periods, it is a good habit to create checkpoints during training. A checkpoint is a snapshot of your model at a given point in the training process (after a certain number of iterations or epochs) that is saved to disk and can be loaded at a later time. It is a handy way of breaking a job that is expected to run for a very long time into multiple shorter jobs that may be allocated on the cluster more quickly. It is also a good way of avoiding loss of progress in case of unexpected errors in your code or node failures.
==With PyTorch Lightning== <!--T:157--> | |||
<!--T:158--> | |||
To create a checkpoint when training with <code>pytorch-lightning</code>, we recommend using the <code>callbacks</code> parameter of the <code>Trainer()</code> class. The following example shows how to instruct PyTorch Lightning to create a checkpoint at the end of every training epoch. Make sure the path where you want to create the checkpoint exists.
</translate> | |||
callbacks = [pl.callbacks.ModelCheckpoint(dirpath="./ckpt", every_n_epochs=1)]
trainer = pl.Trainer(callbacks=callbacks) | |||
trainer.fit(model) | |||
<translate> | |||
<!--T:160--> | |||
This code snippet will also load a checkpoint from <code>./ckpt</code>, if there is one, and continue training from that point. For more information, please refer to the [https://pytorch-lightning.readthedocs.io/en/stable/api/pytorch_lightning.callbacks.model_checkpoint.html official PyTorch Lightning documentation]. | |||
==With custom training loops== <!--T:161--> | |||
<!--T:162--> | |||
Please refer to the [https://pytorch.org/tutorials/recipes/recipes/saving_and_loading_a_general_checkpoint.html official PyTorch documentation] for examples on how to create and load checkpoints inside of a training loop. | |||
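As a minimal sketch of that recipe (the model, file name and epoch value below are arbitrary placeholders, not taken from this page), a checkpoint in a custom loop typically bundles the model state, the optimizer state and the current epoch into one file:

```python
import torch
import torch.nn as nn
import torch.optim as optim

# Toy stand-ins for your own model and optimizer (hypothetical).
model = nn.Linear(10, 2)
optimizer = optim.SGD(model.parameters(), lr=0.1)

# Saving: bundle everything needed to resume into a single file.
torch.save({
    "epoch": 5,
    "model_state_dict": model.state_dict(),
    "optimizer_state_dict": optimizer.state_dict(),
}, "checkpoint.pt")

# Loading: restore both states, then continue from the next epoch.
checkpoint = torch.load("checkpoint.pt")
model.load_state_dict(checkpoint["model_state_dict"])
optimizer.load_state_dict(checkpoint["optimizer_state_dict"])
start_epoch = checkpoint["epoch"] + 1
```

Saving the optimizer state alongside the model matters for optimizers with internal buffers (momentum, Adam moments): without it, resumed training is not equivalent to uninterrupted training.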
== During distributed training == <!--T:309-->

<!--T:310-->
Checkpointing can also be done while running a distributed training program. With PyTorch Lightning, no extra code is required other than using the checkpoint callback as described above. If you are using DistributedDataParallel or Horovod, however, checkpointing should be done by only one process (one of the ranks) of your program, since all ranks will have the same state at the end of each iteration. The following example uses the first process (rank 0) to create a checkpoint:

<!--T:311-->
 if global_rank == 0:
     torch.save(ddp_model.state_dict(), "./checkpoint_path")

<!--T:314-->
You must be careful when loading a checkpoint created in this manner. If a process tries to load a checkpoint that has not yet been saved by another, you may see errors or get wrong results. To avoid this, you can add a barrier to your code to make sure the process that creates the checkpoint finishes writing it to disk before the other processes attempt to load it. Also note that <code>torch.load</code> will by default attempt to load tensors to the GPU that originally saved them (<code>cuda:0</code> in this case). To avoid issues, pass <code>map_location</code> to <code>torch.load</code> to load tensors on the correct GPU for each rank.

<!--T:313-->
 torch.distributed.barrier()
 map_location = f"cuda:{local_rank}"
 ddp_model.load_state_dict(
     torch.load("./checkpoint_path", map_location=map_location))
<!-- this section is hidden until results are verified.
= Benchmarks = <!--T:32-->

<!--T:33-->
This section gives ResNet-18 benchmark results on different clusters with various configurations.

<!--T:34-->
All numbers are images per second '''per GPU''', using <code>DistributedDataParallel</code> and NCCL.

<!--T:35-->
'''These results are provisional and there is a lot of variance in their measurement. Work is being done to get a clearer picture.'''

<!--T:36-->
{| class="wikitable"
|+ Graham[P100], images per second per GPU
|-
! Batch size !! 1 node, 1 GPU !! 1 node, 2 GPUs !! 2 * (1 node, 2 GPUs) !! 3 * (1 node, 2 GPUs)
|-
| 32 || 542 || 134 || 103 || 82
|-
| 64 || 620 || 190 || 149 || 134
|-
| 128 || 646 || 241 || 197 || 180
|-
| 256 || 587 || 263 || 184 || 368
|}
-->
= Troubleshooting = <!--T:23-->

== Memory leak == <!--T:30-->
On AVX512 hardware (Béluga, Skylake or V100 nodes), older versions of PyTorch (prior to v1.0.1) using older libraries (cuDNN < v7.5 or MAGMA < v2.5) may leak a considerable amount of memory, resulting in an out-of-memory exception and the death of your tasks. Please upgrade to the latest <code>torch</code> version.

== c10::Error == <!--T:542-->

<!--T:543-->
There are cases where we get this kind of error:

<!--T:544-->
 terminate called after throwing an instance of 'c10::Error'
   what():  Given groups=1, weight of size [256, 1, 3, 3], expected input[16, 10, 16, 16] to have 1 channels, but got 10 channels instead
 Exception raised from check_shape_forward at /tmp/coulombc/pytorch_build_2021-11-09_14-57-01/avx2/python3.8/pytorch/aten/src/ATen/native/Convolution.cpp:496 (most recent call first):
 ...

<!--T:545-->
A C++ exception is thrown instead of a Python exception. This might happen when programming in C++ with LibTorch, but it is unexpected when programming in Python. Because we cannot see the Python traceback, it is difficult to pinpoint the cause of the error in our Python script. On Graham, it has been observed that using PyTorch 1.9.1 (instead of PyTorch 1.10.x) helps: it allows the Python traceback to be obtained.
= LibTorch = <!--T:38-->

<!--T:553-->
LibTorch allows one to implement both C++ extensions to PyTorch and '''pure C++ machine learning applications'''. It contains "all headers, libraries and CMake configuration files required to depend on PyTorch", as described in the [https://pytorch.org/cppdocs/installing.html documentation].

=== How to use LibTorch === <!--T:39-->

==== Setting up the environment ==== <!--T:40-->

<!--T:554-->
Load the modules required by LibTorch, then install PyTorch in a Python virtual environment:

<!--T:669-->
<tabs>
<tab name="StdEnv/2023">
 module load StdEnv/2023 gcc cuda/12.2 cmake protobuf cudnn python/3.11 abseil cusparselt opencv/4.8.1
 virtualenv --no-download --clear ~/ENV && source ~/ENV/bin/activate
 pip install --no-index torch numpy

<!--T:670-->
Note that the versions of the abseil, cusparselt and opencv modules may need to be adjusted, depending on the version of the torch package. To find out which versions of those modules were used to compile the Python wheel for torch, use the following command:

<!--T:671-->
{{Command
|prompt=$
|ldd $VIRTUAL_ENV/lib/python3.11/site-packages/torch/lib/libtorch_cuda.so {{!}} sed -n 's&^.*/\(\(opencv\{{!}}abseil\{{!}}cusparselt\)/[^/]*\).*&\1&p' {{!}} sort -u
|result=
abseil/20230125.3
cusparselt/0.5.0.1
opencv/4.8.1
}}
</tab>
<tab name="StdEnv/2020">
 module load gcc cuda/11.4 cmake protobuf cudnn python/3.10
 virtualenv --no-download --clear ~/ENV && source ~/ENV/bin/activate
 pip install --no-index torch numpy
</tab>
</tabs>
==== Compiling a minimal example ==== <!--T:44-->

<!--T:45-->
Create the following two files:

<!--T:555-->
{{File
|name=example.cpp
|lang="cpp"
|contents=
#include <torch/torch.h>
#include <iostream>

<!--T:556-->
int main()
{
    torch::Device device(torch::kCPU);
    if (torch::cuda::is_available())
    {
        std::cout << "CUDA is available! Using GPU." << std::endl;
        device = torch::Device(torch::kCUDA);
    }

<!--T:557-->
    torch::Tensor tensor = torch::rand({2, 3}).to(device);
    std::cout << tensor << std::endl;
}
}}
<!--T:558-->
{{File
|name=CMakeLists.txt
|contents=
cmake_minimum_required(VERSION 3.0 FATAL_ERROR)
project(example)

<!--T:559-->
find_package(Torch REQUIRED)

<!--T:560-->
add_executable(example example.cpp)
target_link_libraries(example "${TORCH_LIBRARIES}")
set_property(TARGET example PROPERTY CXX_STANDARD 14)
}}
<!--T:54-->
With the Python virtual environment activated, configure the project and compile the program:
<tabs>
<tab name="StdEnv/2023">
 cmake -B build -S . -DCMAKE_PREFIX_PATH=$VIRTUAL_ENV/lib/python3.11/site-packages \
       -DCMAKE_EXE_LINKER_FLAGS=-Wl,-rpath=$VIRTUAL_ENV/lib/python3.11/site-packages/torch/lib,-L$EBROOTCUDA/extras/CUPTI/lib64 \
       -DCMAKE_SKIP_RPATH=ON -DTORCH_CUDA_ARCH_LIST="6.0;7.0;7.5;8.0;9.0"
 cmake --build build
</tab>
<tab name="StdEnv/2020">
 cmake -B build -S . -DCMAKE_PREFIX_PATH=$VIRTUAL_ENV/lib/python3.10/site-packages \
       -DCMAKE_EXE_LINKER_FLAGS=-Wl,-rpath=$VIRTUAL_ENV/lib/python3.10/site-packages/torch/lib \
       -DCMAKE_SKIP_RPATH=ON
 cmake --build build
</tab>
</tabs>
<!--T:56-->
Run the program:
 build/example

<!--T:58-->
To test an application with CUDA, request an [[Running_jobs#Interactive_jobs|interactive job]] with a [[Using_GPUs_with_Slurm|GPU]].

= Resources = <!--T:59-->

<!--T:60-->
https://pytorch.org/cppdocs/

</translate>
Latest revision as of 14:28, 1 October 2024
PyTorch is a Python package that provides two high-level features:
- Tensor computation (like NumPy) with strong GPU acceleration
- Deep neural networks built on a tape-based autograd system
If you are porting a PyTorch program to one of our clusters, you should follow our tutorial on the subject.
Disambiguation
PyTorch has a distant connection with Torch, but for all practical purposes you can treat them as separate projects.
PyTorch developers also offer LibTorch, which allows one to implement extensions to PyTorch using C++, and to implement pure C++ machine learning applications. Models written in Python using PyTorch can be converted and used in pure C++ through TorchScript.
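As a small illustration of that workflow (the model and file name below are made up for the example), a Python model can be compiled with TorchScript and serialized, after which the same archive can be loaded either back in Python or from C++ via `torch::jit::load`:

```python
import torch
import torch.nn as nn

# A hypothetical small model; any nn.Module works the same way.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

# Convert the Python model to TorchScript and serialize it to disk.
scripted = torch.jit.script(model)
scripted.save("model.pt")

# Load the archive back (shown in Python; C++ uses torch::jit::load("model.pt")).
reloaded = torch.jit.load("model.pt")
output = reloaded(torch.rand(1, 4))
```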
Installation

Latest available wheels
To see the latest version of PyTorch that we have built:
[name@server ~]$ avail_wheels "torch*"
For more information, see Available wheels.
Installing our wheel

The preferred option is to install it using the Python wheel as follows:

1. Load a Python module, thus `module load python`
2. Create and start a virtual environment.
3. Install PyTorch in the virtual environment with `pip install`.
GPU and CPU

(venv) [name@server ~] pip install --no-index torch

Note: There are known issues with PyTorch 1.10 on our clusters (except for Narval). If you encounter problems while using distributed training, or if you get an error containing `c10::Error`, we recommend installing PyTorch 1.9.1 using `pip install --no-index torch==1.9.1`.
Extra

In addition to `torch`, you can install `torchvision`, `torchtext` and `torchaudio`:

(venv) [name@server ~] pip install --no-index torch torchvision torchtext torchaudio
Job submission
Here is an example of a job submission script using the python wheel, with a virtual environment inside a job:
#!/bin/bash
#SBATCH --gres=gpu:1 # Request GPU "generic resources"
#SBATCH --cpus-per-task=6 # Cores proportional to GPUs: 6 on Cedar, 16 on Graham.
#SBATCH --mem=32000M # Memory proportional to GPUs: 32000 Cedar, 64000 Graham.
#SBATCH --time=0-03:00
#SBATCH --output=%N-%j.out
module load python/<select version> # Make sure to choose a version that suits your application
virtualenv --no-download $SLURM_TMPDIR/env
source $SLURM_TMPDIR/env/bin/activate
pip install torch --no-index
python pytorch-test.py
The Python script `pytorch-test.py` has the form
import torch
x = torch.Tensor(5, 3)
print(x)
y = torch.rand(5, 3)
print(y)
# let us run the following only if CUDA is available
if torch.cuda.is_available():
x = x.cuda()
y = y.cuda()
print(x + y)
You can then submit a PyTorch job with:
[name@server ~]$ sbatch pytorch-test.sh
High performance with PyTorch

TF32: Performance vs numerical accuracy

In version 1.7.0, PyTorch introduced support for Nvidia's TensorFloat-32 (TF32) mode, which is available only on Ampere and later Nvidia GPU architectures. This mode of executing tensor operations has been shown to yield up to 20x speed-ups compared to equivalent single precision (FP32) operations, and it is enabled by default in PyTorch versions 1.7.x through 1.11.x. Such gains in performance, however, come at the cost of potentially decreased accuracy in the results of operations, which may become problematic in cases such as dealing with ill-conditioned matrices, or performing long sequences of tensor operations as is common in deep learning models. Starting with PyTorch version 1.12.0, following calls from its user community, TF32 is disabled by default for matrix multiplications but remains enabled by default for convolutions.
As of October 2022, our only cluster equipped with Ampere GPUs is Narval. When using PyTorch on Narval, users should be cognizant of the following:
- You may notice a significant slowdown when running the exact same GPU-enabled code with `torch < 1.12.0` and `torch >= 1.12.0`.
- You may get different results when running the exact same GPU-enabled code with `torch < 1.12.0` and `torch >= 1.12.0`.
To enable or disable TF32 on `torch >= 1.12.0`, set the following flags to `True` or `False` accordingly:

torch.backends.cuda.matmul.allow_tf32 = False  # Enable/disable TF32 for matrix multiplications
torch.backends.cudnn.allow_tf32 = False        # Enable/disable TF32 for convolutions
For more information, see PyTorch's official documentation.
PyTorch with multiple CPUs
PyTorch natively supports parallelizing work across multiple CPUs in two ways: intra-op parallelism and inter-op parallelism.
- intra-op refers to PyTorch's parallel implementations of operators commonly used in Deep Learning, such as matrix multiplication and convolution, using OpenMP directly or through low-level libraries like MKL and OneDNN. Whenever you run PyTorch code that performs such operations, they will automatically leverage multi-threading over as many CPU cores as are available to your job.
- inter-op parallelism, on the other hand, refers to PyTorch's ability to execute different parts of your code concurrently. This modality of parallelism typically requires that you explicitly design your program such that different parts can run in parallel. Examples include code that leverages PyTorch's Just-In-Time compiler `torch.jit` to run asynchronous tasks in a TorchScript program.
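The two modes can be sketched as follows (a minimal illustration, not from this page; the task, thread count and tensor size are arbitrary):

```python
import torch

# Intra-op: limit the thread pool used *inside* individual operators
# (matrix multiplication below will use at most 2 threads).
torch.set_num_threads(2)

def matmul_task(x):
    return x @ x

x = torch.rand(256, 256)

# Inter-op: fork an independent task to run concurrently with the main thread.
future = torch.jit.fork(matmul_task, x)  # runs asynchronously
y = matmul_task(x)                       # executes while the forked task runs
z = torch.jit.wait(future)               # collect the asynchronous result
```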
With small scale models, we strongly recommend using multiple CPUs instead of using a GPU. While training will almost certainly run faster on a GPU (except in cases where the model is very small), if your model and your dataset are not large enough, the speed up relative to CPU will likely not be very significant and your job will end up using only a small portion of the GPU's compute capabilities. This might not be an issue on your own workstation, but in a shared environment like our HPC clusters, this means you are unnecessarily blocking a resource that another user may need to run actual large scale computations! Furthermore, you would be unnecessarily using up your group's allocation and affecting the priority of your colleagues' jobs.
The code example below contains many opportunities for intra-op parallelism. By simply requesting more CPUs and without any code changes, we can observe the effect of PyTorch's native support for parallelism on performance:
#!/bin/bash
#SBATCH --nodes 1
#SBATCH --tasks-per-node=1
#SBATCH --cpus-per-task=1 # change this parameter to 2,4,6,... to see the effect on performance
#SBATCH --mem=8G
#SBATCH --time=0:05:00
#SBATCH --output=%N-%j.out
#SBATCH --account=<your account>
module load python # Using Default Python version - Make sure to choose a version that suits your application
virtualenv --no-download $SLURM_TMPDIR/env
source $SLURM_TMPDIR/env/bin/activate
pip install torch torchvision --no-index
echo "starting training..."
time python cifar10-cpu.py
import numpy as np
import time
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms
from torchvision.datasets import CIFAR10
from torch.utils.data import DataLoader
import argparse
import os
parser = argparse.ArgumentParser(description='cifar10 classification models, cpu performance test')
parser.add_argument('--lr', default=0.1, help='')
parser.add_argument('--batch_size', type=int, default=512, help='')
parser.add_argument('--num_workers', type=int, default=0, help='')
def main():
args = parser.parse_args()
torch.set_num_threads(int(os.environ['SLURM_CPUS_PER_TASK']))
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 16 * 5 * 5)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
net = Net()
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=args.lr)
transform_train = transforms.Compose([transforms.ToTensor(),transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
### This next line will attempt to download the CIFAR10 dataset from the internet if you don't already have it stored in ./data
### Run this line on a login node with "download=True" prior to submitting your job, or manually download the data from
### https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz and place it under ./data
dataset_train = CIFAR10(root='./data', train=True, download=False, transform=transform_train)
train_loader = DataLoader(dataset_train, batch_size=args.batch_size, num_workers=args.num_workers)
perf = []
total_start = time.time()
for batch_idx, (inputs, targets) in enumerate(train_loader):
start = time.time()
outputs = net(inputs)
loss = criterion(outputs, targets)
optimizer.zero_grad()
loss.backward()
optimizer.step()
batch_time = time.time() - start
images_per_sec = args.batch_size/batch_time
perf.append(images_per_sec)
total_time = time.time() - total_start
if __name__=='__main__':
main()
PyTorch with a single GPU

There is a common misconception that you should always use a GPU for model training if one is available. While this may almost always hold true on your own local workstation equipped with a GPU (training very small models is often faster on one or more CPUs), it is not the case on our HPC clusters.
Simply put, you should not ask for a GPU if your code is not capable of making a reasonable use of its compute capacity.
GPUs draw their performance advantage in Deep Learning tasks mainly from two sources:
- Their ability to parallelize the execution of certain key numerical operations, such as multiply-accumulate, over many thousands of compute cores compared to the single-digit count of cores available in most common CPUs.
- A much higher memory bandwidth than CPUs, which allows GPUs to efficiently use their massive number of cores to process much larger amounts of data per compute cycle.
Like in the multi-cpu case, PyTorch contains parallel implementations of operators commonly used in Deep Learning, such as matrix multiplication and convolution, using GPU-specific libraries like CUDNN or MIOpen, depending on the hardware platform. This means that for a learning task to be worth running on a GPU, it must be composed of elements that scale out with massive parallelism in terms of the number of operations that can be performed in parallel, the amount of data they require, or, ideally, both. Concretely this means, for example, large models (with large numbers of units and layers), large inputs, or, ideally, both.
In the example below, we adapt the multi-CPU code from the previous section to run on one GPU and examine its performance. We can observe that two parameters play an important role: `batch_size` and `num_workers`. The first influences performance by increasing the size of our inputs at each iteration, thus putting more of the GPU's capacity to use. The second influences performance by streamlining the movement of our inputs from the Host's (or the CPU's) memory to the GPU's memory, thus reducing the amount of time the GPU sits idle waiting for data to process.
Two takeaways emerge from this:

- Increase your `batch_size` to as much as you can fit in the GPU's memory to optimize your compute performance.
- Use a `DataLoader` with as many workers as you have `cpus-per-task` to streamline feeding data to the GPU.
Of course, `batch_size` is also an important parameter with respect to a model's performance on a given task (accuracy, error, etc.) and different schools of thought have different views on the impact of using large batches. This page will not go into this subject, but if you have reason to believe that a small (relative to space in GPU memory) batch size is best for your application, skip to Data Parallelism with a single GPU to see how to maximize GPU utilization with small inputs.
#!/bin/bash
#SBATCH --nodes 1
#SBATCH --gres=gpu:1 # request a GPU
#SBATCH --tasks-per-node=1
#SBATCH --cpus-per-task=1 # change this parameter to 2,4,6,... and increase "--num_workers" accordingly to see the effect on performance
#SBATCH --mem=8G
#SBATCH --time=0:05:00
#SBATCH --output=%N-%j.out
#SBATCH --account=<your account>
module load python # Using Default Python version - Make sure to choose a version that suits your application
virtualenv --no-download $SLURM_TMPDIR/env
source $SLURM_TMPDIR/env/bin/activate
pip install torch torchvision --no-index
echo "starting training..."
time python cifar10-gpu.py --batch_size=512 --num_workers=0
import numpy as np
import time
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms
from torchvision.datasets import CIFAR10
from torch.utils.data import DataLoader
import argparse
parser = argparse.ArgumentParser(description='cifar10 classification models, single gpu performance test')
parser.add_argument('--lr', default=0.1, help='')
parser.add_argument('--batch_size', type=int, default=512, help='')
parser.add_argument('--num_workers', type=int, default=0, help='')
def main():
args = parser.parse_args()
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 16 * 5 * 5)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
net = Net().cuda() # Load model on the GPU
criterion = nn.CrossEntropyLoss().cuda() # Load the loss function on the GPU
optimizer = optim.SGD(net.parameters(), lr=args.lr)
transform_train = transforms.Compose([transforms.ToTensor(),transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
dataset_train = CIFAR10(root='./data', train=True, download=False, transform=transform_train)
train_loader = DataLoader(dataset_train, batch_size=args.batch_size, num_workers=args.num_workers)
perf = []
total_start = time.time()
for batch_idx, (inputs, targets) in enumerate(train_loader):
start = time.time()
inputs = inputs.cuda()
targets = targets.cuda()
outputs = net(inputs)
loss = criterion(outputs, targets)
optimizer.zero_grad()
loss.backward()
optimizer.step()
batch_time = time.time() - start
images_per_sec = args.batch_size/batch_time
perf.append(images_per_sec)
total_time = time.time() - total_start
if __name__=='__main__':
main()
Data parallelism with a single GPU
In cases where a model is fairly small, such that it does not take up a large portion of GPU memory and it cannot use a reasonable amount of its compute capacity, it is not advisable to use a GPU. Use one or more CPUs instead. However, in a scenario where you have such a model, but have a very large dataset and wish to perform training with a small batch size, taking advantage of Data parallelism on a GPU becomes a viable option.
Data Parallelism, in this context, refers to methods to perform training over multiple replicas of a model in parallel, where each replica receives a different chunk of training data at each iteration. Gradients are then aggregated at the end of an iteration and the parameters of all replicas are updated in a synchronous or asynchronous fashion, depending on the method. Using this approach may provide a significant speed-up by iterating through all examples in a large dataset approximately N times faster, where N is the number of model replicas. An important caveat of this approach is that, in order to get a trained model that is equivalent to the same model trained without Data Parallelism, you must scale either the learning rate or the desired batch size as a function of the number of replicas. See this discussion for more information.
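The scaling caveat above is often handled with the linear scaling heuristic, sketched here with hypothetical values (other scaling schemes, such as square-root scaling, also exist):

```python
# Linear scaling rule sketch: with N replicas each processing the same
# per-replica batch, the effective batch is N times larger, so the learning
# rate tuned for a single replica is scaled by N. All values are hypothetical.
base_lr = 0.1        # learning rate tuned for a single replica
num_replicas = 4     # e.g. the world size of the distributed job
scaled_lr = base_lr * num_replicas
```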
PyTorch has implementations of Data Parallelism methods, with the `DistributedDataParallel` class being the one recommended by PyTorch maintainers for best performance. Designed to work with multiple GPUs, it can also be used with a single GPU.
In the example that follows, we adapt the single GPU code from the previous section to use Data Parallelism. This task is fairly small - with a batch size of 512 images, our model takes up about 1GB of GPU memory space, and it uses only about 6% of its compute capacity during training. This is a model that should not be trained on our clusters. However, using Data Parallelism, we can fit up to 14 or 15 replicas of this model on a V100 GPU with 16GB memory and increase our resource usage, while getting a nice speed-up. We use Nvidia's Multi-Process Service (MPS), along with MPI to efficiently place multiple model replicas on one GPU:
#!/bin/bash
#SBATCH --nodes 1
#SBATCH --gres=gpu:1 # request a GPU
#SBATCH --tasks-per-node=8 # This is the number of model replicas we will place on the GPU. Change this to 10,12,14,... to see the effect on performance
#SBATCH --cpus-per-task=1 # increase this parameter and increase "--num_workers" accordingly to see the effect on performance
#SBATCH --mem=8G
#SBATCH --time=0:05:00
#SBATCH --output=%N-%j.out
#SBATCH --account=<your account>
module load python # Using Default Python version - Make sure to choose a version that suits your application
virtualenv --no-download $SLURM_TMPDIR/env
source $SLURM_TMPDIR/env/bin/activate
pip install torch torchvision --no-index
# Activate Nvidia MPS:
export CUDA_MPS_PIPE_DIRECTORY=/tmp/nvidia-mps
export CUDA_MPS_LOG_DIRECTORY=/tmp/nvidia-log
nvidia-cuda-mps-control -d
echo "starting training..."
time srun --cpus-per-task=$SLURM_CPUS_PER_TASK python cifar10-gpu-mps.py --batch_size=512 --num_workers=0
import os
import time
import datetime
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms
from torchvision.datasets import CIFAR10
from torch.utils.data import DataLoader
import torch.distributed as dist
import torch.utils.data.distributed
import argparse
parser = argparse.ArgumentParser(description='cifar10 classification models, distributed data parallel maps test')
parser.add_argument('--lr', default=0.1, help='')
parser.add_argument('--batch_size', type=int, default=512, help='')
parser.add_argument('--num_workers', type=int, default=0, help='')
parser.add_argument('--init_method', default='tcp://127.0.0.1:3456', type=str, help='')
def main():
print("Starting...")
args = parser.parse_args()
rank = os.environ.get("SLURM_LOCALID")
current_device = 0
torch.cuda.set_device(current_device)
""" this block initializes a process group and initiate communications
between all processes that will run a model replica """
print('From Rank: {}, ==> Initializing Process Group...'.format(rank))
dist.init_process_group(backend="mpi", init_method=args.init_method) # Use backend="mpi" or "gloo". NCCL does not work on a single GPU due to a hard-coded multi-GPU topology check.
print("process group ready!")
print('From Rank: {}, ==> Making model..'.format(rank))
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 16 * 5 * 5)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
net = Net()
net.cuda()
net = torch.nn.parallel.DistributedDataParallel(net, device_ids=[current_device]) # Wrap the model with DistributedDataParallel
criterion = nn.CrossEntropyLoss().cuda()
optimizer = optim.SGD(net.parameters(), lr=args.lr)
print('From Rank: {}, ==> Preparing data..'.format(rank))
transform_train = transforms.Compose([transforms.ToTensor(),transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
dataset_train = CIFAR10(root='~/data', train=True, download=False, transform=transform_train)
train_sampler = torch.utils.data.distributed.DistributedSampler(dataset_train)
train_loader = DataLoader(dataset_train, batch_size=args.batch_size, shuffle=(train_sampler is None), num_workers=args.num_workers, sampler=train_sampler)
perf = []
total_start = time.time()
for batch_idx, (inputs, targets) in enumerate(train_loader):
start = time.time()
inputs = inputs.cuda()
targets = targets.cuda()
outputs = net(inputs)
loss = criterion(outputs, targets)
optimizer.zero_grad()
loss.backward()
optimizer.step()
batch_time = time.time() - start
images_per_sec = args.batch_size/batch_time
perf.append(images_per_sec)
total_time = time.time() - total_start
if __name__=='__main__':
main()
PyTorch with multiple GPUs

Issue with DistributedDataParallel and PyTorch 1.10

There is a known issue with our PyTorch 1.10 wheel `torch-1.10.0+computecanada`. Multi-GPU code that uses DistributedDataParallel running with this PyTorch version may fail unpredictably if the backend is set to `'nccl'` or `'gloo'`. We recommend using our latest PyTorch build instead of version 1.10 on all GP clusters.
Data parallelism with multiple GPUs

Data Parallelism, in this context, refers to methods to perform training over multiple replicas of a model in parallel, where each replica receives a different chunk of training data at each iteration. Gradients are then aggregated at the end of an iteration and the parameters of all replicas are updated in a synchronous or asynchronous fashion, depending on the method. Using this approach may provide a significant speed-up by iterating through all examples in a large dataset approximately N times faster, where N is the number of model replicas. An important caveat of this approach is that, in order to get a trained model that is equivalent to the same model trained without Data Parallelism, you must scale either the learning rate or the desired batch size as a function of the number of replicas. See this discussion for more information. In the multiple-GPU case, each GPU hosts a replica of your model. Consequently, the model must be small enough to fit inside the memory of a single GPU. Refer to the Model Parallelism section for options to train very large models that do not fit inside a single GPU.
There are several ways to perform Data Parallelism using PyTorch. This section features tutorials on three of them: using the DistributedDataParallel class, using the PyTorch Lightning package and using the Horovod package.
Using DistributedDataParallel

The DistributedDataParallel class is the way recommended by PyTorch maintainers to use multiple GPUs, whether they are all on a single node or distributed across multiple nodes.
#!/bin/bash
#SBATCH --nodes 1
#SBATCH --gres=gpu:2 # Request 2 GPU "generic resources".
#SBATCH --tasks-per-node=2 # Request 1 process per GPU. You will get 1 CPU per process by default. Request more CPUs with the "cpus-per-task" parameter to enable multiple data-loader workers to load data in parallel.
#SBATCH --mem=8G
#SBATCH --time=0-03:00
#SBATCH --output=%N-%j.out
module load python # Using Default Python version - Make sure to choose a version that suits your application
srun -N $SLURM_NNODES -n $SLURM_NNODES bash << EOF
virtualenv --no-download $SLURM_TMPDIR/env
source $SLURM_TMPDIR/env/bin/activate
pip install torchvision --no-index
EOF
export TORCH_NCCL_ASYNC_HANDLING=1
export MASTER_ADDR=$(hostname) #Store the master node’s IP address in the MASTER_ADDR environment variable.
echo "r$SLURM_NODEID master: $MASTER_ADDR"
echo "r$SLURM_NODEID Launching python script"
# The $((SLURM_NTASKS_PER_NODE * SLURM_JOB_NUM_NODES)) variable tells the script how many processes are available for this execution. “srun” executes the script <tasks-per-node * nodes> times
source $SLURM_TMPDIR/env/bin/activate
srun python pytorch-ddp-test.py --init_method tcp://$MASTER_ADDR:3456 --world_size $((SLURM_NTASKS_PER_NODE * SLURM_JOB_NUM_NODES)) --batch_size 256
The Python script pytorch-ddp-test.py has the form:
import os
import time
import datetime

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torch.backends.cudnn as cudnn

import torchvision
import torchvision.transforms as transforms
from torchvision.datasets import CIFAR10
from torch.utils.data import DataLoader

import torch.distributed as dist
import torch.utils.data.distributed

import argparse

parser = argparse.ArgumentParser(description='cifar10 classification models, distributed data parallel test')
parser.add_argument('--lr', default=0.1, help='')
parser.add_argument('--batch_size', type=int, default=768, help='')
parser.add_argument('--max_epochs', type=int, default=4, help='')
parser.add_argument('--num_workers', type=int, default=0, help='')
parser.add_argument('--init_method', default='tcp://127.0.0.1:3456', type=str, help='')
parser.add_argument('--dist-backend', default='gloo', type=str, help='')
parser.add_argument('--world_size', default=1, type=int, help='')
parser.add_argument('--distributed', action='store_true', help='')


def main():
    print("Starting...")

    args = parser.parse_args()

    ngpus_per_node = torch.cuda.device_count()

    """ This next line is the key to getting DistributedDataParallel working on SLURM:
        SLURM_NODEID is 0 or 1 in this example, SLURM_LOCALID is the id of the
        current process inside a node and is also 0 or 1 in this example."""
    local_rank = int(os.environ.get("SLURM_LOCALID"))
    rank = int(os.environ.get("SLURM_NODEID")) * ngpus_per_node + local_rank

    current_device = local_rank
    torch.cuda.set_device(current_device)

    """ This block initializes a process group and initiates communications
        between all processes running on all nodes """
    print('From Rank: {}, ==> Initializing Process Group...'.format(rank))
    # init the process group
    dist.init_process_group(backend=args.dist_backend, init_method=args.init_method, world_size=args.world_size, rank=rank)
    print("process group ready!")

    print('From Rank: {}, ==> Making model..'.format(rank))

    class Net(nn.Module):
        def __init__(self):
            super(Net, self).__init__()
            self.conv1 = nn.Conv2d(3, 6, 5)
            self.pool = nn.MaxPool2d(2, 2)
            self.conv2 = nn.Conv2d(6, 16, 5)
            self.fc1 = nn.Linear(16 * 5 * 5, 120)
            self.fc2 = nn.Linear(120, 84)
            self.fc3 = nn.Linear(84, 10)

        def forward(self, x):
            x = self.pool(F.relu(self.conv1(x)))
            x = self.pool(F.relu(self.conv2(x)))
            x = x.view(-1, 16 * 5 * 5)
            x = F.relu(self.fc1(x))
            x = F.relu(self.fc2(x))
            x = self.fc3(x)
            return x

    net = Net()
    net.cuda()
    net = torch.nn.parallel.DistributedDataParallel(net, device_ids=[current_device])

    print('From Rank: {}, ==> Preparing data..'.format(rank))
    transform_train = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
    dataset_train = CIFAR10(root='./data', train=True, download=False, transform=transform_train)
    train_sampler = torch.utils.data.distributed.DistributedSampler(dataset_train)
    train_loader = DataLoader(dataset_train, batch_size=args.batch_size, shuffle=(train_sampler is None), num_workers=args.num_workers, sampler=train_sampler)

    criterion = nn.CrossEntropyLoss().cuda()
    optimizer = optim.SGD(net.parameters(), lr=args.lr, momentum=0.9, weight_decay=1e-4)

    for epoch in range(args.max_epochs):
        train_sampler.set_epoch(epoch)
        train(epoch, net, criterion, optimizer, train_loader, rank)


def train(epoch, net, criterion, optimizer, train_loader, train_rank):
    train_loss = 0
    correct = 0
    total = 0

    epoch_start = time.time()
    for batch_idx, (inputs, targets) in enumerate(train_loader):
        start = time.time()

        inputs = inputs.cuda()
        targets = targets.cuda()
        outputs = net(inputs)
        loss = criterion(outputs, targets)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        train_loss += loss.item()
        _, predicted = outputs.max(1)
        total += targets.size(0)
        correct += predicted.eq(targets).sum().item()
        acc = 100 * correct / total
        batch_time = time.time() - start

    elapse_time = time.time() - epoch_start
    elapse_time = datetime.timedelta(seconds=elapse_time)
    print("From Rank: {}, Training time {}".format(train_rank, elapse_time))


if __name__ == '__main__':
    main()
Using PyTorch Lightning
PyTorch Lightning is a Python package that provides interfaces to PyTorch to make many common, but otherwise code-heavy, tasks more straightforward. This includes training on multiple GPUs. The following is the same tutorial as in the section above, but using PyTorch Lightning instead of explicitly leveraging the DistributedDataParallel class:
#!/bin/bash
#SBATCH --nodes 1
#SBATCH --gres=gpu:2          # Request 2 GPUs as "generic resources".
#SBATCH --tasks-per-node=2    # Request 1 process per GPU. You will get 1 CPU per process by default. Request more CPUs with the "cpus-per-task" parameter to enable multiple data-loader workers to load data in parallel.
#SBATCH --mem=8G
#SBATCH --time=0-03:00
#SBATCH --output=%N-%j.out

module load python # Using Default Python version - Make sure to choose a version that suits your application

virtualenv --no-download $SLURM_TMPDIR/env
source $SLURM_TMPDIR/env/bin/activate
pip install torchvision pytorch-lightning --no-index

export TORCH_NCCL_ASYNC_ERROR_HANDLING=1

# PyTorch Lightning will query the environment to figure out if it is running inside a SLURM batch job
# If it is, it expects the user to have requested one task per GPU.
# If you do not ask for 1 task per GPU, and you do not run your script with "srun", your job will fail!

srun python pytorch-ddp-test-pl.py --batch_size 256
import datetime

import torch
from torch import nn
import torch.nn.functional as F

import pytorch_lightning as pl

import torchvision
import torchvision.transforms as transforms
from torchvision.datasets import CIFAR10
from torch.utils.data import DataLoader

import argparse

parser = argparse.ArgumentParser(description='cifar10 classification models, pytorch-lightning parallel test')
parser.add_argument('--lr', default=0.1, help='')
parser.add_argument('--max_epochs', type=int, default=4, help='')
parser.add_argument('--batch_size', type=int, default=768, help='')
parser.add_argument('--num_workers', type=int, default=0, help='')


def main():
    print("Starting...")

    args = parser.parse_args()

    class Net(pl.LightningModule):
        def __init__(self):
            super(Net, self).__init__()
            self.conv1 = nn.Conv2d(3, 6, 5)
            self.pool = nn.MaxPool2d(2, 2)
            self.conv2 = nn.Conv2d(6, 16, 5)
            self.fc1 = nn.Linear(16 * 5 * 5, 120)
            self.fc2 = nn.Linear(120, 84)
            self.fc3 = nn.Linear(84, 10)

        def forward(self, x):
            x = self.pool(F.relu(self.conv1(x)))
            x = self.pool(F.relu(self.conv2(x)))
            x = x.view(-1, 16 * 5 * 5)
            x = F.relu(self.fc1(x))
            x = F.relu(self.fc2(x))
            x = self.fc3(x)
            return x

        def training_step(self, batch, batch_idx):
            x, y = batch
            y_hat = self(x)
            loss = F.cross_entropy(y_hat, y)
            return loss

        def configure_optimizers(self):
            return torch.optim.Adam(self.parameters(), lr=args.lr)

    net = Net()

    """ Here we initialize a Trainer() explicitly with 1 node and 2 GPUs per node.
        To make this script more generic, you can use torch.cuda.device_count() to set the number of GPUs
        and you can use int(os.environ.get("SLURM_JOB_NUM_NODES")) to set the number of nodes.
        We also set enable_progress_bar=False to avoid writing a progress bar to the logs,
        which can cause issues due to updating logs too frequently."""
    trainer = pl.Trainer(accelerator="gpu", devices=2, num_nodes=1, strategy='ddp', max_epochs=args.max_epochs, enable_progress_bar=False)

    transform_train = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
    dataset_train = CIFAR10(root='./data', train=True, download=False, transform=transform_train)
    train_loader = DataLoader(dataset_train, batch_size=args.batch_size, num_workers=args.num_workers)

    trainer.fit(net, train_loader)


if __name__ == '__main__':
    main()
Using Horovod
Horovod is a distributed deep learning training framework for TensorFlow, Keras, PyTorch, and Apache MXNet. Its API allows you to retain the level of control over your training code that DistributedDataParallel provides, but makes writing your scripts easier by abstracting away the need to directly configure process groups and to deal with the cluster scheduler's environment variables. It also features distributed optimizers, which may increase performance in some cases. The following is the same example as above, re-implemented using Horovod:
#!/bin/bash
#SBATCH --nodes 1
#SBATCH --gres=gpu:2          # Request 2 GPUs as "generic resources".
#SBATCH --tasks-per-node=2    # Request 1 process per GPU. You will get 1 CPU per process by default. Request more CPUs with the "cpus-per-task" parameter to enable multiple data-loader workers to load data in parallel.
#SBATCH --mem=8G
#SBATCH --time=0-03:00
#SBATCH --output=%N-%j.out

module load python # Using Default Python version - Make sure to choose a version that suits your application

virtualenv --no-download $SLURM_TMPDIR/env
source $SLURM_TMPDIR/env/bin/activate
pip install torch torchvision horovod --no-index

export TORCH_NCCL_ASYNC_ERROR_HANDLING=1

srun python pytorch_horovod.py --batch_size 256
import os
import time
import datetime
import numpy as np

import horovod.torch as hvd
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

import torchvision
import torchvision.transforms as transforms
from torchvision.datasets import CIFAR10
from torch.utils.data import DataLoader

import torch.distributed as dist
import torch.utils.data.distributed

import argparse

parser = argparse.ArgumentParser(description='cifar10 classification models, horovod test')
parser.add_argument('--lr', default=0.1, help='')
parser.add_argument('--batch_size', type=int, default=512, help='')
parser.add_argument('--max_epochs', type=int, default=1, help='')
parser.add_argument('--num_workers', type=int, default=0, help='')


def main():
    args = parser.parse_args()

    hvd.init()

    print("Starting...")

    local_rank = hvd.local_rank()
    global_rank = hvd.rank()

    torch.cuda.set_device(local_rank)

    class Net(nn.Module):
        def __init__(self):
            super(Net, self).__init__()
            self.conv1 = nn.Conv2d(3, 6, 5)
            self.pool = nn.MaxPool2d(2, 2)
            self.conv2 = nn.Conv2d(6, 16, 5)
            self.fc1 = nn.Linear(16 * 5 * 5, 120)
            self.fc2 = nn.Linear(120, 84)
            self.fc3 = nn.Linear(84, 10)

        def forward(self, x):
            x = self.pool(F.relu(self.conv1(x)))
            x = self.pool(F.relu(self.conv2(x)))
            x = x.view(-1, 16 * 5 * 5)
            x = F.relu(self.fc1(x))
            x = F.relu(self.fc2(x))
            x = self.fc3(x)
            return x

    net = Net()
    net.cuda()

    print('From Rank: {}, ==> Preparing data..'.format(global_rank))
    transform_train = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
    dataset_train = CIFAR10(root='./data', train=True, download=False, transform=transform_train)
    train_sampler = torch.utils.data.distributed.DistributedSampler(dataset_train, num_replicas=hvd.size(), rank=global_rank)
    train_loader = DataLoader(dataset_train, batch_size=args.batch_size, shuffle=(train_sampler is None), num_workers=args.num_workers, sampler=train_sampler)

    criterion = nn.CrossEntropyLoss().cuda()
    optimizer = optim.SGD(net.parameters(), lr=args.lr, momentum=0.9, weight_decay=1e-4)

    optimizer = hvd.DistributedOptimizer(optimizer, named_parameters=net.named_parameters())
    hvd.broadcast_parameters(net.state_dict(), root_rank=0)

    for epoch in range(args.max_epochs):
        train_sampler.set_epoch(epoch)
        train(args, epoch, net, criterion, optimizer, train_loader, global_rank)


def train(args, epoch, net, criterion, optimizer, train_loader, train_rank):
    train_loss = 0
    correct = 0
    total = 0

    epoch_start = time.time()
    for batch_idx, (inputs, targets) in enumerate(train_loader):
        start = time.time()

        inputs = inputs.cuda()
        targets = targets.cuda()
        outputs = net(inputs)
        loss = criterion(outputs, targets)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        train_loss += loss.item()
        _, predicted = outputs.max(1)
        total += targets.size(0)
        correct += predicted.eq(targets).sum().item()
        acc = 100 * correct / total
        batch_time = time.time() - start

    elapse_time = time.time() - epoch_start
    elapse_time = datetime.timedelta(seconds=elapse_time)
    print("From Rank: {}, Training time {}".format(train_rank, elapse_time))


if __name__ == '__main__':
    main()
Model parallelism with multiple GPUs
In cases where a model is too large to fit inside a single GPU, you can split it into multiple parts and load each one onto a separate GPU. In the example below, we revisit the code example from previous sections to illustrate how this works: we will split a convolutional neural network into two parts, the convolutional/pooling layers and the densely connected feedforward layers. This job requests 2 GPUs and each of the two parts of the model is loaded on its own GPU. We also add code to perform pipeline parallelism and minimize, as much as possible, the amount of time the second GPU sits idle waiting for the outputs of the first. To do this, we create a separate nn.Module for each part of the model, create a sequence of modules by wrapping the model parts with nn.Sequential, then use torch.distributed.pipeline.sync.Pipe to break each input batch into chunks and feed them in parallel to all parts of the model.
#!/bin/bash
#SBATCH --nodes 1
#SBATCH --gres=gpu:2 # request 2 GPUs
#SBATCH --tasks-per-node=1
#SBATCH --cpus-per-task=1 # change this parameter to 2,4,6,... and increase "--num_workers" accordingly to see the effect on performance
#SBATCH --mem=8G
#SBATCH --time=0:10:00
#SBATCH --output=%N-%j.out
#SBATCH --account=<your account>
module load python # Using Default Python version - Make sure to choose a version that suits your application
virtualenv --no-download $SLURM_TMPDIR/env
source $SLURM_TMPDIR/env/bin/activate
pip install torch torchvision --no-index
# This is needed to initialize PyTorch's RPC module, required for the Pipe class which we'll use for pipeline parallelism
export MASTER_ADDR=$(hostname)
export MASTER_PORT=34567
echo "starting training..."
time python pytorch-modelpar-pipelined-rpc.py --batch_size=512 --num_workers=0
import time

import torch
import torch.nn as nn
import torch.optim as optim
from torch.distributed.pipeline.sync import Pipe

import torchvision
import torchvision.transforms as transforms
from torchvision.datasets import CIFAR10
from torch.utils.data import DataLoader

import argparse

parser = argparse.ArgumentParser(description='cifar10 classification models, single node model parallelism test')
parser.add_argument('--lr', default=0.1, help='')
parser.add_argument('--batch_size', type=int, default=512, help='')
parser.add_argument('--num_workers', type=int, default=0, help='')


def main():
    args = parser.parse_args()

    # Convolutional + pooling part of the model
    class ConvPart(nn.Module):
        def __init__(self):
            super(ConvPart, self).__init__()
            self.conv1 = nn.Conv2d(3, 6, 5)
            self.pool = nn.MaxPool2d(2, 2)
            self.conv2 = nn.Conv2d(6, 16, 5)
            self.relu = nn.ReLU()

        def forward(self, x):
            x = self.pool(self.relu(self.conv1(x)))
            x = self.pool(self.relu(self.conv2(x)))
            x = x.view(-1, 16 * 5 * 5)
            return x

    # Dense feedforward part of the model
    class MLPPart(nn.Module):
        def __init__(self):
            super(MLPPart, self).__init__()
            self.fc1 = nn.Linear(16 * 5 * 5, 120)
            self.fc2 = nn.Linear(120, 84)
            self.fc3 = nn.Linear(84, 10)
            self.relu = nn.ReLU()

        def forward(self, x):
            x = self.relu(self.fc1(x))
            x = self.relu(self.fc2(x))
            x = self.fc3(x)
            return x

    torch.distributed.rpc.init_rpc('worker', rank=0, world_size=1)  # initializing RPC is required by Pipe, which we use below

    part1 = ConvPart().to('cuda:0')  # Load part1 on the first GPU
    part2 = MLPPart().to('cuda:1')   # Load part2 on the second GPU

    net = nn.Sequential(part1, part2)  # Pipe requires all modules be wrapped with nn.Sequential()
    net = Pipe(net, chunks=32)  # Wrap with Pipe to perform Pipeline Parallelism

    criterion = nn.CrossEntropyLoss().to('cuda:1')  # Load the loss function on the last GPU
    optimizer = optim.SGD(net.parameters(), lr=args.lr)

    transform_train = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
    dataset_train = CIFAR10(root='./data', train=True, download=False, transform=transform_train)
    train_loader = DataLoader(dataset_train, batch_size=args.batch_size, num_workers=args.num_workers)

    perf = []

    total_start = time.time()
    for batch_idx, (inputs, targets) in enumerate(train_loader):
        start = time.time()

        inputs = inputs.to('cuda:0')
        targets = targets.to('cuda:1')

        # Models wrapped with Pipe() return an RRef object. Since the example is
        # single node, all values are local to the node and we can grab them.
        outputs = net(inputs).local_value()
        loss = criterion(outputs, targets)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        print(f"Loss: {loss.item()}")

        batch_time = time.time() - start
        images_per_sec = args.batch_size / batch_time
        perf.append(images_per_sec)

    total_time = time.time() - total_start


if __name__ == '__main__':
    main()
Combining model and data parallelism
In cases where a model is too large to fit inside a single GPU and, additionally, the goal is to train it using a very large training set, combining model parallelism with data parallelism becomes a viable option to achieve high performance. The idea is straightforward: split a large model into smaller parts, give each part its own GPU and perform pipeline parallelism on the inputs; then, additionally, create replicas of this whole process, which are trained in parallel over separate subsets of the training set. As in the example from the previous section, gradients are computed independently within each replica, then an aggregation of these gradients is used to update all replicas synchronously or asynchronously, depending on the method used. The main difference here is that each model replica lives on more than one GPU.
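To make the rank bookkeeping concrete, the sketch below (our own illustration, not part of the job scripts) computes the data-parallel rank of each model replica when every replica spans 2 GPUs: with 4 GPUs and 2 tasks per node, each node contributes 2 replicas, and a replica's global rank is derived from its node id and local task id.

```python
def replica_rank(node_id, local_task_id, gpus_per_node=4, gpus_per_replica=2):
    """Data-parallel rank of a model replica that spans several GPUs.

    Each node hosts gpus_per_node // gpus_per_replica replicas, so the global
    rank interleaves the node id with the local task (replica) id, mirroring
    the SLURM_NODEID / SLURM_LOCALID arithmetic used in the example below.
    """
    replicas_per_node = gpus_per_node // gpus_per_replica
    return node_id * replicas_per_node + local_task_id

# 2 nodes x 4 GPUs, 2 GPUs per replica -> 4 replicas with ranks 0..3
print([replica_rank(n, t) for n in range(2) for t in range(2)])  # -> [0, 1, 2, 3]
```

This is exactly why the world size passed to the process group is the number of replicas (tasks), not the number of GPUs.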
Using Torch RPC and DDP
The following example is a reprise of the ones from previous sections. Here we combine Torch RPC and DistributedDataParallel to split a model into two parts, then train four replicas of the model in parallel, distributed over two nodes. In other words, each node hosts 2 model replicas, each spanning 2 GPUs. An important caveat of using Torch RPC is that it currently only supports splitting models inside a single node. For very large models that do not fit inside the combined memory space of all GPUs of a single compute node, see the next section on DeepSpeed.
#!/bin/bash
#SBATCH --nodes 2
#SBATCH --gres=gpu:4 # Request 4 GPUs per node
#SBATCH --tasks-per-node=2 # Request one task per MODEL per node
#SBATCH --cpus-per-task=1 # change this parameter to 2,4,6,... and increase "--num_workers" accordingly to see the effect on performance
#SBATCH --mem=16G
#SBATCH --time=0:10:00
#SBATCH --output=%N-%j.out
#SBATCH --account=<your account>
module load StdEnv/2020 gcc/11.3.0
module load python # Using Default Python version - Make sure to choose a version that suits your application, python/3.10.2 works with this demo
module load cuda/11.8.0
virtualenv --no-download $SLURM_TMPDIR/env
source $SLURM_TMPDIR/env/bin/activate
pip install torch torchvision --no-index
export MAIN_NODE=$(hostname)
echo "starting training..."
srun python pytorch-model-data-par.py --init_method tcp://$MAIN_NODE:3456 --world_size $SLURM_NTASKS --batch_size 512
import time
import os

import torch
import torch.nn as nn
import torch.optim as optim
from torch.distributed.pipeline.sync import Pipe

import torchvision
import torchvision.transforms as transforms
from torchvision.datasets import CIFAR10
from torch.utils.data import DataLoader

import torch.distributed as dist
import torch.utils.data.distributed

import argparse

parser = argparse.ArgumentParser(description='cifar10 classification models, distributed data & model parallel test')
parser.add_argument('--lr', default=0.1, help='')
parser.add_argument('--batch_size', type=int, default=768, help='')
parser.add_argument('--max_epochs', type=int, default=4, help='')
parser.add_argument('--num_workers', type=int, default=0, help='')
parser.add_argument('--init_method', default='tcp://127.0.0.1:3456', type=str, help='')
parser.add_argument('--dist-backend', default='mpi', type=str, help='')
parser.add_argument('--world_size', default=1, type=int, help='')
parser.add_argument('--distributed', action='store_true', help='')


def main():
    args = parser.parse_args()

    # Convolutional + pooling part of the model
    class ConvPart(nn.Module):
        def __init__(self):
            super(ConvPart, self).__init__()
            self.conv1 = nn.Conv2d(3, 6, 5)
            self.pool = nn.MaxPool2d(2, 2)
            self.conv2 = nn.Conv2d(6, 16, 5)
            self.relu = nn.ReLU()

        def forward(self, x):
            x = self.pool(self.relu(self.conv1(x)))
            x = self.pool(self.relu(self.conv2(x)))
            x = x.view(-1, 16 * 5 * 5)
            return x

    # Dense feedforward part of the model
    class MLPPart(nn.Module):
        def __init__(self):
            super(MLPPart, self).__init__()
            self.fc1 = nn.Linear(16 * 5 * 5, 120)
            self.fc2 = nn.Linear(120, 84)
            self.fc3 = nn.Linear(84, 10)
            self.relu = nn.ReLU()

        def forward(self, x):
            x = self.relu(self.fc1(x))
            x = self.relu(self.fc2(x))
            x = self.fc3(x)
            return x

    ngpus_per_node = torch.cuda.device_count()
    local_rank = int(os.environ.get("SLURM_LOCALID"))
    rank = int(os.environ.get("SLURM_NODEID")) * (ngpus_per_node // 2) + local_rank  # Divide ngpus_per_node by the number of model parts

    os.environ['MASTER_ADDR'] = '127.0.0.1'  # Each model replica will run its own RPC server to run pipeline parallelism
    os.environ['MASTER_PORT'] = str(34567 + local_rank)  # Make sure each RPC server starts on a different port
    torch.distributed.rpc.init_rpc('worker', rank=0, world_size=1)  # Different replicas won't communicate through RPC, but through DDP

    dist.init_process_group(backend=args.dist_backend, init_method=args.init_method, world_size=args.world_size, rank=rank)  # Initialize Data Parallelism communications

    part1 = ConvPart().cuda(local_rank)  # First part of the model goes on the first GPU of each process
    part2 = MLPPart().cuda(local_rank + 1)  # Second part goes on the second GPU of each process

    net = nn.Sequential(part1, part2)
    net = Pipe(net, chunks=32, checkpoint="never")
    net = torch.nn.parallel.DistributedDataParallel(net)

    criterion = nn.CrossEntropyLoss().cuda(local_rank + 1)  # Loss function goes on the second GPU of each process
    optimizer = optim.SGD(net.parameters(), lr=args.lr)

    transform_train = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
    dataset_train = CIFAR10(root='./data', train=True, download=False, transform=transform_train)
    train_sampler = torch.utils.data.distributed.DistributedSampler(dataset_train)
    train_loader = DataLoader(dataset_train, batch_size=args.batch_size, shuffle=(train_sampler is None), num_workers=args.num_workers, sampler=train_sampler)

    for epoch in range(args.max_epochs):
        train_sampler.set_epoch(epoch)
        train(epoch, net, criterion, optimizer, train_loader, rank, local_rank)


def train(epoch, net, criterion, optimizer, train_loader, train_rank, model_rank):
    train_loss = 0
    correct = 0
    total = 0

    epoch_start = time.time()
    for batch_idx, (inputs, targets) in enumerate(train_loader):
        start = time.time()

        inputs = inputs.cuda(model_rank)
        targets = targets.cuda(model_rank + 1)
        outputs = net(inputs).local_value()
        loss = criterion(outputs, targets)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        print(f"From Rank {train_rank} - Loss: {loss.item()}")
        batch_time = time.time() - start


if __name__ == '__main__':
    main()
DeepSpeed
DeepSpeed is a deep learning training optimization library, providing the means to train massive billion-parameter models at scale. Fully compatible with PyTorch, DeepSpeed features implementations of novel memory-efficient distributed training methods, based on the Zero Redundancy Optimizer (ZeRO) concept. Through the use of ZeRO, DeepSpeed enables distributed storage and computing of different elements of a training task, such as optimizer states, model weights, model gradients and model activations, across multiple devices, including GPU, CPU, local hard disk, and/or combinations of these devices. This "pooling" of resources, notably for storage, allows models with massive amounts of parameters to be trained efficiently, across multiple nodes, without explicitly handling Model, Pipeline or Data Parallelism in your code. The examples below show how to take advantage of DeepSpeed and its implementations of ZeRO variants through its PyTorch Lightning interface for ease of use.
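For orientation, ZeRO behaviour is declared in a DeepSpeed configuration. The Lightning examples below select it via strategy strings, but when using DeepSpeed directly you would pass a config like the following sketch; the key names follow DeepSpeed's documented config schema, while the numeric values are only illustrative placeholders, not recommendations:

```python
# A minimal DeepSpeed-style configuration dictionary showing where the ZeRO
# stage and offload targets are declared. Values here are placeholders.
ds_config = {
    "train_micro_batch_size_per_gpu": 256,
    "zero_optimization": {
        "stage": 3,                              # shard optimizer states, gradients and parameters
        "offload_optimizer": {"device": "cpu"},  # compute optimizer steps on CPU, store states there
        "offload_param": {"device": "cpu"},      # park parameters in CPU memory when not in use
    },
}

print(ds_config["zero_optimization"]["stage"])  # -> 3
```

Stages 1 and 2 shard progressively fewer of these elements (stage 1: optimizer states only; stage 2: optimizer states and gradients).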
ZeRO on GPU
In the following example, we use ZeRO Stage 3 to train a model using a "pool" of 4 GPUs. Stage 3 means that all three of the optimizer states, model parameters and model gradients will be split (sharded) between all 4 GPUs. This is more memory-efficient than pure Data Parallelism, where a full replica of the model is loaded on each GPU. Using DeepSpeed's optimizer FusedAdam instead of a native PyTorch one, performance is comparable with pure Data Parallelism. DeepSpeed's optimizers are JIT compiled at run time, and you must load the module cuda/<version>, where <version> matches the version used to build the PyTorch install you are using.
#!/bin/bash
#SBATCH --nodes 1
#SBATCH --gres=gpu:2          # Request 2 GPUs as "generic resources".
#SBATCH --tasks-per-node=2    # Request 1 process per GPU. You will get 1 CPU per process by default. Request more CPUs with the "cpus-per-task" parameter to enable multiple data-loader workers to load data in parallel.
#SBATCH --mem=32G
#SBATCH --time=0-00:20
#SBATCH --output=%N-%j.out
#SBATCH --account=<your account>

module load python cuda # CUDA must be loaded if using a DeepSpeed optimizer

virtualenv --no-download $SLURM_TMPDIR/env
source $SLURM_TMPDIR/env/bin/activate
pip install torchvision pytorch-lightning deepspeed --no-index

export TORCH_NCCL_ASYNC_ERROR_HANDLING=1

# PyTorch Lightning will query the environment to figure out if it is running inside a SLURM batch job
# If it is, it expects the user to have requested one task per GPU.
# If you do not ask for 1 task per GPU, and you do not run your script with "srun", your job will fail!

srun python deepspeed-stage3.py --batch_size 256
import torch
from torch import nn
import torch.nn.functional as F

import pytorch_lightning as pl

import torchvision
import torchvision.transforms as transforms
from torchvision.datasets import CIFAR10
from torch.utils.data import DataLoader

from deepspeed.ops.adam import FusedAdam
from pytorch_lightning.strategies import DeepSpeedStrategy

import argparse

parser = argparse.ArgumentParser(description='cifar10 classification models, deepspeed stage 3 test')
parser.add_argument('--lr', default=0.1, help='')
parser.add_argument('--max_epochs', type=int, default=2, help='')
parser.add_argument('--batch_size', type=int, default=768, help='')
parser.add_argument('--num_workers', type=int, default=0, help='')


def main():
    print("Starting...")

    args = parser.parse_args()

    class ConvPart(nn.Module):
        def __init__(self):
            super(ConvPart, self).__init__()
            self.conv1 = nn.Conv2d(3, 6, 5)
            self.pool = nn.MaxPool2d(2, 2)
            self.conv2 = nn.Conv2d(6, 16, 5)
            self.relu = nn.ReLU()

        def forward(self, x):
            x = self.pool(self.relu(self.conv1(x)))
            x = self.pool(self.relu(self.conv2(x)))
            x = x.view(-1, 16 * 5 * 5)
            return x

    # Dense feedforward part of the model
    class MLPPart(nn.Module):
        def __init__(self):
            super(MLPPart, self).__init__()
            self.fc1 = nn.Linear(16 * 5 * 5, 120)
            self.fc2 = nn.Linear(120, 84)
            self.fc3 = nn.Linear(84, 10)
            self.relu = nn.ReLU()

        def forward(self, x):
            x = self.relu(self.fc1(x))
            x = self.relu(self.fc2(x))
            x = self.fc3(x)
            return x

    class Net(pl.LightningModule):
        def __init__(self):
            super(Net, self).__init__()
            self.conv_part = ConvPart()
            self.mlp_part = MLPPart()

        def configure_sharded_model(self):
            self.block = nn.Sequential(self.conv_part, self.mlp_part)

        def forward(self, x):
            x = self.block(x)
            return x

        def training_step(self, batch, batch_idx):
            x, y = batch
            y_hat = self(x)
            loss = F.cross_entropy(y_hat, y)
            return loss

        def configure_optimizers(self):
            return FusedAdam(self.parameters())

    net = Net()

    """ Here we initialize a Trainer() explicitly with 1 node and 2 GPUs.
        To make this script more generic, you can use torch.cuda.device_count() to set the number of GPUs
        and you can use int(os.environ.get("SLURM_JOB_NUM_NODES")) to set the number of nodes.
        You can also set enable_progress_bar=False to avoid writing a progress bar to the logs,
        which can cause issues due to updating logs too frequently."""
    trainer = pl.Trainer(accelerator="gpu", devices=2, num_nodes=1, strategy="deepspeed_stage_3", max_epochs=args.max_epochs)

    transform_train = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
    dataset_train = CIFAR10(root='./data', train=True, download=False, transform=transform_train)
    train_loader = DataLoader(dataset_train, batch_size=args.batch_size, num_workers=args.num_workers)

    trainer.fit(net, train_loader)


if __name__ == '__main__':
    main()
ZeRO with offload to CPU
In this example, we again use ZeRO Stage 3, but this time we enable offloading of model parameters and optimizer states to the CPU. This means that the compute node's memory will be available to store these tensors while they are not required by any GPU computations and, additionally, optimizer steps will be computed on the CPU. For practical purposes, you can think of this as though your GPUs were gaining an extra 32GB of memory. This takes even more pressure off GPU memory and allows you to increase your batch size, for example, or increase the size of the model. Using DeepSpeed's optimizer DeepSpeedCPUAdam instead of a native PyTorch one, performance remains on par with pure Data Parallelism. DeepSpeed's optimizers are JIT compiled at run time, and you must load the module cuda/<version>, where <version> matches the version used to build the PyTorch install you are using.
#!/bin/bash
#SBATCH --nodes 1
#SBATCH --gres=gpu:2          # Request 2 GPUs as "generic resources".
#SBATCH --tasks-per-node=2    # Request 1 process per GPU. You will get 1 CPU per process by default. Request more CPUs with the "cpus-per-task" parameter to enable multiple data-loader workers to load data in parallel.
#SBATCH --mem=32G
#SBATCH --time=0-00:20
#SBATCH --output=%N-%j.out
#SBATCH --account=<your account>

module load python cuda # CUDA must be loaded if using ZeRO offloading to CPU or NVMe. Version must be the same used to compile PyTorch.

virtualenv --no-download $SLURM_TMPDIR/env
source $SLURM_TMPDIR/env/bin/activate
pip install torchvision pytorch-lightning deepspeed --no-index

export TORCH_NCCL_ASYNC_ERROR_HANDLING=1

# PyTorch Lightning will query the environment to figure out if it is running inside a SLURM batch job
# If it is, it expects the user to have requested one task per GPU.
# If you do not ask for 1 task per GPU, and you do not run your script with "srun", your job will fail!

srun python deepspeed-stage3-offload-cpu.py --batch_size 256
import torch
from torch import nn
import torch.nn.functional as F

import pytorch_lightning as pl

import torchvision
import torchvision.transforms as transforms
from torchvision.datasets import CIFAR10
from torch.utils.data import DataLoader

from deepspeed.ops.adam import DeepSpeedCPUAdam
from pytorch_lightning.strategies import DeepSpeedStrategy

import argparse

parser = argparse.ArgumentParser(description='cifar10 classification models, deepspeed offload to cpu test')
parser.add_argument('--lr', default=0.1, help='')
parser.add_argument('--max_epochs', type=int, default=2, help='')
parser.add_argument('--batch_size', type=int, default=768, help='')
parser.add_argument('--num_workers', type=int, default=0, help='')


def main():
    print("Starting...")

    args = parser.parse_args()

    class ConvPart(nn.Module):
        def __init__(self):
            super(ConvPart, self).__init__()
            self.conv1 = nn.Conv2d(3, 6, 5)
            self.pool = nn.MaxPool2d(2, 2)
            self.conv2 = nn.Conv2d(6, 16, 5)
            self.relu = nn.ReLU()

        def forward(self, x):
            x = self.pool(self.relu(self.conv1(x)))
            x = self.pool(self.relu(self.conv2(x)))
            x = x.view(-1, 16 * 5 * 5)
            return x

    # Dense feedforward part of the model
    class MLPPart(nn.Module):
        def __init__(self):
            super(MLPPart, self).__init__()
            self.fc1 = nn.Linear(16 * 5 * 5, 120)
            self.fc2 = nn.Linear(120, 84)
            self.fc3 = nn.Linear(84, 10)
            self.relu = nn.ReLU()

        def forward(self, x):
            x = self.relu(self.fc1(x))
            x = self.relu(self.fc2(x))
            x = self.fc3(x)
            return x

    class Net(pl.LightningModule):
        def __init__(self):
            super(Net, self).__init__()
            self.conv_part = ConvPart()
            self.mlp_part = MLPPart()

        def configure_sharded_model(self):
            self.block = nn.Sequential(self.conv_part, self.mlp_part)

        def forward(self, x):
            x = self.block(x)
            return x

        def training_step(self, batch, batch_idx):
            x, y = batch
            y_hat = self(x)
            loss = F.cross_entropy(y_hat, y)
            return loss

        def configure_optimizers(self):
            return DeepSpeedCPUAdam(self.parameters())

    net = Net()

    """ Here we initialize a Trainer() explicitly with 1 node and 2 GPUs.
        To make this script more generic, you can use torch.cuda.device_count() to set the number of GPUs
        and you can use int(os.environ.get("SLURM_JOB_NUM_NODES")) to set the number of nodes.
        We also set progress_bar_refresh_rate=0 to avoid writing a progress bar to the logs,
which can cause issues due to updating logs too frequently."""
trainer = pl.Trainer(accelerator="gpu", devices=2, num_nodes=1, strategy=DeepSpeedStrategy(
stage=3,
offload_optimizer=True,
offload_parameters=True,
), max_epochs = args.max_epochs)
transform_train = transforms.Compose([transforms.ToTensor(),transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
dataset_train = CIFAR10(root='./data', train=True, download=False, transform=transform_train)
train_loader = DataLoader(dataset_train, batch_size=args.batch_size, num_workers=args.num_workers)
trainer.fit(net,train_loader)
if __name__=='__main__':
main()
==== ZeRO with offload to NVMe ====
In this example, we use ZeRO stage 3 yet again, but this time we enable offloading model parameters and optimizer states to the local disk. This means that the compute node's local disk storage will be available to store these tensors while they are not required by any GPU computation. As before, optimizer steps will be computed on the CPU. Again, for practical purposes, you can think of this as extending GPU memory by however much storage is available on the local disk, though this time performance will degrade significantly. This approach works best (i.e., performance degradation is least noticeable) on NVMe-enabled drives, which have higher throughput and faster response times, but it can be used with any type of storage.
#!/bin/bash
#SBATCH --nodes 1
#SBATCH --gres=gpu:2 # Request 2 GPUs as "generic resources".
#SBATCH --tasks-per-node=2 # Request 1 process per GPU. You will get 1 CPU per process by default. Request more CPUs with the "cpus-per-task" parameter to enable multiple data-loader workers to load data in parallel.
#SBATCH --mem=32G
#SBATCH --time=0-00:20
#SBATCH --output=%N-%j.out
#SBATCH --account=<your account>
module load python cuda # CUDA must be loaded when using ZeRO offloading to CPU or NVMe. The version must match the one used to build PyTorch.
virtualenv --no-download $SLURM_TMPDIR/env
source $SLURM_TMPDIR/env/bin/activate
pip install torchvision pytorch-lightning deepspeed --no-index
export TORCH_NCCL_ASYNC_ERROR_HANDLING=1
# PyTorch Lightning will query the environment to figure out if it is running inside a SLURM batch job
# If it is, it expects the user to have requested one task per GPU.
# If you do not ask for 1 task per GPU, and you do not run your script with "srun", your job will fail!
srun python deepspeed-stage3-offload-nvme.py --batch_size 256
import os
import torch
from torch import nn
import torch.nn.functional as F
import pytorch_lightning as pl
import torchvision
import torchvision.transforms as transforms
from torchvision.datasets import CIFAR10
from torch.utils.data import DataLoader
from deepspeed.ops.adam import DeepSpeedCPUAdam
from pytorch_lightning.strategies import DeepSpeedStrategy
import argparse
parser = argparse.ArgumentParser(description='cifar10 classification models, deepspeed offload to nvme test')
parser.add_argument('--lr', default=0.1, help='')
parser.add_argument('--max_epochs', type=int, default=2, help='')
parser.add_argument('--batch_size', type=int, default=768, help='')
parser.add_argument('--num_workers', type=int, default=0, help='')
def main():
print("Starting...")
args = parser.parse_args()
class ConvPart(nn.Module):
def __init__(self):
super(ConvPart, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.relu = nn.ReLU()
def forward(self, x):
x = self.pool(self.relu(self.conv1(x)))
x = self.pool(self.relu(self.conv2(x)))
x = x.view(-1, 16 * 5 * 5)
return x
# Dense feedforward part of the model
class MLPPart(nn.Module):
def __init__(self):
super(MLPPart, self).__init__()
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
self.relu = nn.ReLU()
def forward(self, x):
x = self.relu(self.fc1(x))
x = self.relu(self.fc2(x))
x = self.fc3(x)
return x
class Net(pl.LightningModule):
def __init__(self):
super(Net, self).__init__()
self.conv_part = ConvPart()
self.mlp_part = MLPPart()
def configure_sharded_model(self):
self.block = nn.Sequential(self.conv_part, self.mlp_part)
def forward(self, x):
x = self.block(x)
return x
def training_step(self, batch, batch_idx):
x, y = batch
y_hat = self(x)
loss = F.cross_entropy(y_hat, y)
return loss
def configure_optimizers(self):
return DeepSpeedCPUAdam(self.parameters())
net = Net()
""" Here we initialize a Trainer() explicitly with 1 node and 2 GPU.
To make this script more generic, you can use torch.cuda.device_count() to set the number of GPUs
and you can use int(os.environ.get("SLURM_JOB_NUM_NODES")) to set the number of nodes.
We also set progress_bar_refresh_rate=0 to avoid writing a progress bar to the logs,
which can cause issues due to updating logs too frequently."""
local_scratch = os.environ['SLURM_TMPDIR'] # Get path where local storage is mounted
print(f'Offloading to: {local_scratch}')
trainer = pl.Trainer(accelerator="gpu", devices=2, num_nodes=1, strategy=DeepSpeedStrategy(
stage=3,
offload_optimizer=True,
offload_parameters=True,
remote_device="nvme",
offload_params_device="nvme",
offload_optimizer_device="nvme",
nvme_path="local_scratch",
), max_epochs = args.max_epochs)
transform_train = transforms.Compose([transforms.ToTensor(),transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
dataset_train = CIFAR10(root='./data', train=True, download=False, transform=transform_train)
train_loader = DataLoader(dataset_train, batch_size=args.batch_size, num_workers=args.num_workers)
trainer.fit(net,train_loader)
if __name__=='__main__':
main()
== Creating model checkpoints ==
Whether or not you expect your code to run for long periods of time, it is a good habit to create checkpoints during training. A checkpoint is a snapshot of your model at a given point during the training process (after a certain number of iterations or after a number of epochs) that is saved to disk and can be loaded at a later time. It is a handy way of breaking up jobs that are expected to run for a very long time into multiple shorter jobs that may get allocated on the cluster more quickly. It is also a good way of avoiding losing progress in case of unexpected errors in your code or node failures.
=== With PyTorch Lightning ===
To create a checkpoint when training with <code>pytorch-lightning</code>, we recommend using the <code>callbacks</code> parameter of the <code>Trainer()</code> class. The following example shows how to instruct PyTorch Lightning to create a checkpoint at the end of every training epoch. Make sure the path where you want to create the checkpoint exists.
callbacks = [pl.callbacks.ModelCheckpoint(dirpath="./ckpt", every_n_epochs=1)]
trainer = pl.Trainer(callbacks=callbacks)
trainer.fit(model)
This code snippet will also load a checkpoint from <code>./ckpt</code>, if there is one, and continue training from that point. For more information, please refer to the official PyTorch Lightning documentation.
=== With custom training loops ===
Please refer to the official PyTorch documentation for examples on how to create and load checkpoints inside of a training loop.
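The general pattern can be sketched as follows. This is a minimal illustration, not the official example: the model, optimizer, epoch count, and file name are placeholders chosen for demonstration.

```python
# Minimal checkpointing sketch for a custom training loop.
# The model, optimizer, and path below are illustrative placeholders.
import os
import torch
from torch import nn

model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
ckpt_path = "./checkpoint.pt"
start_epoch = 0

# Resume from a previous checkpoint, if one exists.
if os.path.exists(ckpt_path):
    ckpt = torch.load(ckpt_path, map_location="cpu")
    model.load_state_dict(ckpt["model_state"])
    optimizer.load_state_dict(ckpt["optimizer_state"])
    start_epoch = ckpt["epoch"] + 1

for epoch in range(start_epoch, 5):
    # ... run the training iterations for this epoch ...
    # Save a snapshot of the full training state at the end of each epoch.
    torch.save({
        "epoch": epoch,
        "model_state": model.state_dict(),
        "optimizer_state": optimizer.state_dict(),
    }, ckpt_path)
```

Saving the optimizer state alongside the model weights matters for optimizers with internal state (e.g. momentum buffers); restoring only the weights would silently change the training trajectory.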
=== During distributed training ===
Checkpointing can also be done while running a distributed training program. With PyTorch Lightning, no extra code is required beyond using the checkpoint callback as described above. If you are using DistributedDataParallel or Horovod, however, checkpointing should be done by only one process (one of the ranks) of your program, since all ranks have the same state at the end of each iteration. The following example uses the first process (rank 0) to create a checkpoint:
if global_rank == 0:
    torch.save(ddp_model.state_dict(), "./checkpoint_path")
You must be careful when loading a checkpoint created in this manner. If a process tries to load a checkpoint that has not yet been saved by another, you may see errors or get wrong results. To avoid this, you can add a barrier to your code to make sure the process that creates the checkpoint finishes writing it to disk before other processes attempt to load it. Also note that <code>torch.load</code> will by default attempt to load tensors to the GPU that saved them originally (<code>cuda:0</code> in this case). To avoid issues, pass <code>map_location</code> to <code>torch.load</code> so that tensors are loaded on the correct GPU for each rank.
torch.distributed.barrier()
map_location = f"cuda:{local_rank}"
ddp_model.load_state_dict(
    torch.load("./checkpoint_path", map_location=map_location))
== Troubleshooting ==
=== Memory leak ===
On AVX512 hardware (Béluga, Skylake or V100 nodes), older versions of PyTorch (older than v1.0.1) using older libraries (cuDNN &lt; v7.5 or MAGMA &lt; v2.5) may leak memory considerably, resulting in an out-of-memory exception and the death of your tasks. Please upgrade to the latest <code>torch</code> version.
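To check which versions are in play, you can query them directly from Python; this is a generic diagnostic, not specific to our clusters:

```python
# Print the versions of torch and its bundled libraries to verify
# that you are not running one of the affected combinations.
import torch

print("PyTorch:", torch.__version__)
print("cuDNN:", torch.backends.cudnn.version())   # None if built without cuDNN
print("CUDA available:", torch.cuda.is_available())
```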
=== c10::Error ===
There are cases where we get this kind of error:
terminate called after throwing an instance of 'c10::Error'
  what():  Given groups=1, weight of size [256, 1, 3, 3], expected input[16, 10, 16, 16] to have 1 channels, but got 10 channels instead
Exception raised from check_shape_forward at /tmp/coulombc/pytorch_build_2021-11-09_14-57-01/avx2/python3.8/pytorch/aten/src/ATen/native/Convolution.cpp:496 (most recent call first):
...
A C++ exception is thrown instead of a Python exception. This can happen when programming in C++ with libtorch, but it is unexpected when programming in Python. Since we cannot see the Python traceback, it is difficult to pinpoint the cause of the error in our Python script. On Graham, it has been observed that using PyTorch 1.9.1 (instead of PyTorch 1.10.x) helps: the Python traceback then becomes available.
== LibTorch ==
LibTorch allows one to implement both C++ extensions to PyTorch and pure C++ machine learning applications. It contains "all headers, libraries and CMake configuration files required to depend on PyTorch", as described in the documentation.
=== How to use LibTorch ===
==== Setting up the environment ====
Load the modules required by LibTorch, then install PyTorch in a Python virtual environment:
module load StdEnv/2023 gcc cuda/12.2 cmake protobuf cudnn python/3.11 abseil cusparselt opencv/4.8.1
virtualenv --no-download --clear ~/ENV && source ~/ENV/bin/activate
pip install --no-index torch numpy
Note that the versions for the abseil, cusparselt and opencv modules may need to be adjusted, depending on the version of the torch package. In order to find out which version of those modules was used to compile the Python wheel for torch, use the following command:
$ ldd $VIRTUAL_ENV/lib/python3.11/site-packages/torch/lib/libtorch_cuda.so | sed -n 's&^.*/\(\(opencv\|abseil\|cusparselt\)/[^/]*\).*&\1&p' | sort -u
abseil/20230125.3
cusparselt/0.5.0.1
opencv/4.8.1
With the older standard software environment (<code>StdEnv/2020</code>), use instead:
module load gcc cuda/11.4 cmake protobuf cudnn python/3.10
virtualenv --no-download --clear ~/ENV && source ~/ENV/bin/activate
pip install --no-index torch numpy
==== Compiling a minimal example ====
Create the following two files:
#include <torch/torch.h>
#include <iostream>
int main()
{
torch::Device device(torch::kCPU);
if (torch::cuda::is_available())
{
std::cout << "CUDA is available! Using GPU." << std::endl;
device = torch::Device(torch::kCUDA);
}
torch::Tensor tensor = torch::rand({2, 3}).to(device);
std::cout << tensor << std::endl;
}
cmake_minimum_required(VERSION 3.0 FATAL_ERROR)
project(example)
find_package(Torch REQUIRED)
add_executable(example example.cpp)
target_link_libraries(example "${TORCH_LIBRARIES}")
set_property(TARGET example PROPERTY CXX_STANDARD 14)
With the Python virtual environment activated, configure the project and compile the program:
cmake -B build -S . -DCMAKE_PREFIX_PATH=$VIRTUAL_ENV/lib/python3.11/site-packages \
    -DCMAKE_EXE_LINKER_FLAGS=-Wl,-rpath=$VIRTUAL_ENV/lib/python3.11/site-packages/torch/lib,-L$EBROOTCUDA/extras/CUPTI/lib64 \
    -DCMAKE_SKIP_RPATH=ON -DTORCH_CUDA_ARCH_LIST="6.0;7.0;7.5;8.0;9.0"
cmake --build build
With the older environment (<code>python/3.10</code>), use instead:
cmake -B build -S . -DCMAKE_PREFIX_PATH=$VIRTUAL_ENV/lib/python3.10/site-packages \
    -DCMAKE_EXE_LINKER_FLAGS=-Wl,-rpath=$VIRTUAL_ENV/lib/python3.10/site-packages/torch/lib \
    -DCMAKE_SKIP_RPATH=ON
cmake --build build
Run the program:
build/example
To test an application with CUDA, request an interactive job with a GPU.
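For example, an interactive session with a single GPU can be requested along these lines; the account name, memory, and time limits are placeholders to adapt to your allocation:

```shell
# Request an interactive job with 1 GPU; adjust account, memory and time as needed.
salloc --account=def-someuser --gres=gpu:1 --cpus-per-task=3 --mem=32G --time=1:00:00
# Once the allocation starts, run the compiled example on the GPU node:
build/example
```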