PyTorch
PyTorch is a Python package that provides two high-level features:
- Tensor computation (like NumPy) with strong GPU acceleration
- Deep neural networks built on a tape-based autograd system
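The tape-based autograd system records operations on tensors as they run, then replays that tape backwards to compute gradients. A minimal sketch (this uses the `requires_grad` API of recent PyTorch releases; older versions instead wrapped tensors in `torch.autograd.Variable`):

```python
import torch

# Operations on x are recorded on the autograd "tape"
# because requires_grad=True
x = torch.ones(2, 2, requires_grad=True)
y = (x * x).sum()   # y = sum over all entries of x_ij^2

# Replay the tape backwards to compute dy/dx = 2 * x
y.backward()
print(x.grad)       # a 2x2 tensor filled with 2.0
```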
Installation
There are two options to install PyTorch.
- Using Anaconda. You need to install Anaconda and then install PyTorch in a conda environment.
- 1. Load the Miniconda 2 or Miniconda 3 module.
[name@server ~]$ module load miniconda3
- 2. Create a new conda virtual environment.
[name@server ~]$ conda create --name pytorch
- 3. When conda asks you to proceed, type y.
- 4. Activate the newly created conda virtual environment.
[name@server ~]$ source activate pytorch
- 5. Install PyTorch in the conda virtual environment.
[name@server ~]$ conda install pytorch torchvision cuda80 -c soumith
- Here, we instruct conda to retrieve the packages from the soumith channel, the release channel of the main PyTorch developer, Soumith Chintala. This ensures you get the latest release.
- Using a Python wheel. You need to create and activate your virtual environment and then use the pip command to install PyTorch.
- 1. Using the module command, load your Python module with NumPy. For Python 2:
[name@server ~]$ module load python27-scipy-stack/2017a
- and for Python 3:
[name@server ~]$ module load python35-scipy-stack/2017a
- 2. Create and activate a virtual environment.
- 3. Install PyTorch in the virtual environment. For both GPU and CPU support:
[name@server ~]$ pip install torch_gpu
- and for CPU support only:
[name@server ~]$ pip install torch_cpu
- The default version of the PyTorch wheel is PyTorch 0.2.
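Whichever installation route you use, you can confirm the setup from Python by importing torch, printing the installed version, and checking whether the build can see a GPU. This is a quick sanity check, not part of the official instructions; note that `torch.cuda.is_available()` returns False both for a CPU-only build and on a node without a GPU.

```python
import torch

# Installed PyTorch version string
print(torch.__version__)

# True only if this build has CUDA support AND a GPU is visible;
# expect False with the CPU-only wheel or on a login node
print(torch.cuda.is_available())
```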
Job submission
Once the setup is complete, you can submit a PyTorch job with
[name@server ~]$ sbatch pytorch-test.sh
The job submission script has the following contents.
File : pytorch-test.sh
#!/bin/bash
#SBATCH --gres=gpu:1 # request GPU "generic resource"
#SBATCH --cpus-per-task=6 # maximum CPU cores per GPU request: 6 on Cedar, 16 on Graham
#SBATCH --mem=32000M # memory per node
#SBATCH --time=0-03:00 # time (DD-HH:MM)
#SBATCH --output=%N-%j.out # %N for node name, %j for jobID
module load miniconda3
source activate pytorch
python ./pytorch-test.py
The Python script pytorch-test.py has the following form.
File : pytorch-test.py
import torch
x = torch.Tensor(5, 3)  # uninitialized 5x3 tensor
print(x)
y = torch.rand(5, 3)
print(y)
# let us run the following only if CUDA is available
if torch.cuda.is_available():
    x = x.cuda()
    y = y.cuda()
    print(x + y)
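The same test can be written device-agnostically so it runs unchanged on both CPU-only and GPU nodes. A sketch using the torch.device API (introduced in PyTorch 0.4, so newer than the 0.2 wheel mentioned above):

```python
import torch

# Pick the GPU when one is visible, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

x = torch.rand(5, 3, device=device)
y = torch.rand(5, 3, device=device)
z = x + y  # computed on whichever device was selected

print(z.cpu())  # copy back to host memory before printing or saving
```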