PyTorch
PyTorch is a Python package that provides two high-level features:
- Tensor computation (like NumPy) with strong GPU acceleration
- Deep neural networks built on a tape-based autograd system
PyTorch has a distant connection with Torch, but for all practical purposes you can treat them as separate packages.
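For example, the following short script (a sketch, not taken from the official documentation; it assumes PyTorch 0.4.0 or later, where torch.tensor accepts requires_grad) illustrates both features, tensor arithmetic and gradient computation through autograd:
import torch

# Tensor computation, similar to NumPy arrays
a = torch.ones(3, 3)
b = torch.rand(3, 3)
print(a + b)

# Tape-based autograd: operations on tensors created with requires_grad=True
# are recorded, so gradients can be obtained by calling backward()
x = torch.tensor([2.0, 3.0], requires_grad=True)
y = (x ** 2).sum()   # y = x1^2 + x2^2
y.backward()         # fills x.grad with dy/dx = 2*x
print(x.grad)        # tensor([4., 6.])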
Installation
Latest available wheels
To see the latest version of PyTorch that we have built:
[name@server ~]$ avail_wheels "torch*"
For more information on listing wheels, see listing available wheels.
Pre-built wheel
The preferred option is to install PyTorch using the Python wheel that we compile, as follows:
1. Load a Python module, either python/2.7, python/3.5 or python/3.6.
2. Create and start a virtual environment (see the example commands after this list).
3. Install PyTorch in the virtual environment with pip install.
- For both GPU and CPU support:
(venv) [name@server ~] pip install numpy torch_gpu --no-index
- If you only need CPU support:
(venv) [name@server ~] pip install numpy torch_cpu --no-index
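For example, the whole setup might look like this on a login node (a sketch assuming the virtual environment lives in $HOME/pytorch, matching the job script below; adjust the module version to what is available on your cluster):
[name@server ~]$ module load python/3.6
[name@server ~]$ virtualenv --no-download $HOME/pytorch
[name@server ~]$ source $HOME/pytorch/bin/activate
(venv) [name@server ~] pip install numpy torch_gpu --no-index
The --no-download option prevents virtualenv from fetching packages from the internet, and --no-index tells pip to use only the locally provided wheels.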
Extra
In addition to torch_cpu or torch_gpu, you can install torchvision, torchtext and torchaudio:
(venv) [name@server ~] pip install numpy six torch_cpu torchvision torchtext torchaudio --no-index
Note: For torchaudio, torch_cpu==0.4.0 or torch_gpu==0.4.0 is required.
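For instance, a CPU-only installation consistent with this note could pin the wheel version explicitly (a sketch; the pin is only needed if the default wheel is not already 0.4.0):
(venv) [name@server ~] pip install numpy six torch_cpu==0.4.0 torchvision torchtext torchaudio --no-index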
Job submission
Once the setup is completed, you can submit a PyTorch job with
[name@server ~]$ sbatch pytorch-test.sh
Here is an example of a job submission script using the python wheel, with a virtual environment in $HOME/pytorch:
#!/bin/bash
#SBATCH --gres=gpu:1 # Request GPU "generic resources"
#SBATCH --cpus-per-task=6 # Cores proportional to GPUs: 6 on Cedar, 16 on Graham.
#SBATCH --mem=32000M # Memory proportional to GPUs: 32000 Cedar, 64000 Graham.
#SBATCH --time=0-03:00
#SBATCH --output=%N-%j.out
module load python/3.6
source $HOME/pytorch/bin/activate
python ./pytorch-test.py
and here is an example of a job submission script using Anaconda:
#!/bin/bash
#SBATCH --gres=gpu:1 # Request GPU "generic resources"
#SBATCH --cpus-per-task=6 # Cores proportional to GPUs: 6 on Cedar, 16 on Graham.
#SBATCH --mem=32000M # Memory proportional to GPUs: 32000 Cedar, 64000 Graham.
#SBATCH --time=0-03:00
#SBATCH --output=%N-%j.out
module load miniconda3
source activate pytorch
python ./pytorch-test.py
The Python script pytorch-test.py has the form:
import torch
x = torch.Tensor(5, 3)
print(x)
y = torch.rand(5, 3)
print(y)
# let us run the following only if CUDA is available
if torch.cuda.is_available():
    x = x.cuda()
    y = y.cuda()
    print(x + y)