Python


Description

Python is an interpreted programming language with a design philosophy stressing the readability of code. Its syntax is simple and expressive. Python has an extensive, easy-to-use standard library.

The capabilities of Python can be extended with packages developed by third parties. In general, to simplify operations, it is left up to individual users and groups to install these third-party packages in their own directories. However, most systems offer several versions of Python as well as tools to help you install the third-party packages that you need.

The following sections discuss the Python interpreter, and how to install and use packages.

Loading an interpreter

Default Python version

When you log into our clusters, a default Python version will be available, but that is generally not the one that you should use, especially if you need to install any Python packages. You should try to find out which version of Python is required to run your Python programs and load the appropriate module. If you are not sure which version you need, then it is reasonable to use the latest version available.
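
To check which interpreter and version are currently the default, you can use the standard commands:

[name@server ~]$ which python
[name@server ~]$ python --version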

Loading a Python module

To discover the versions of Python available:

[name@server ~]$ module avail python

You can then load the version of your choice using module load. For example, to load Python 3.6 you can use the command

[name@server ~]$ module load python/3.6

SciPy stack

In addition to the base Python module, the SciPy software stack is also available as an environment module. The scipy-stack module includes:

  • NumPy
  • SciPy
  • Matplotlib
    • dateutil
    • pytz
  • IPython
    • pyzmq
    • tornado
  • pandas
  • SymPy
  • nose

If you want to use any of these Python packages, load a Python version of your choice and then module load scipy-stack.
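
For example, to load Python 3.6 together with the SciPy stack:

[name@server ~]$ module load python/3.6 scipy-stack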

To get a complete list of the packages contained in scipy-stack, along with their version numbers, run module spider scipy-stack/2020a (replacing 2020a with whichever version you want to find out about).
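
For example:

[name@server ~]$ module spider scipy-stack/2020a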

Creating and using a virtual environment

With each version of Python, we provide the tool virtualenv. This tool allows users to create virtual environments within which you can easily install Python packages. These environments allow you, for example, to maintain different versions of the same package in different environments, or to compartmentalize a Python installation according to the needs of a specific project. Usually you should create your Python virtual environment(s) in your /home directory or in one of your /project directories. (See "Creating virtual environments inside of your jobs" below for a third alternative.)

To create a virtual environment, make sure you have selected a Python version with module load python as shown above in section Loading a Python module. If you expect to use any of the packages listed in section SciPy stack above, also run module load scipy-stack. Then enter the following command, where ENV is the name of the directory for your new environment:

[name@server ~]$ virtualenv --no-download ENV

Once the virtual environment has been created, it must be activated:

[name@server ~]$ source ENV/bin/activate

You should also upgrade pip in the environment:

[name@server ~]$ pip install --no-index --upgrade pip

To exit the virtual environment, simply enter the command deactivate:

(ENV) [name@server ~]$ deactivate

You can now use the same virtual environment over and over again. Each time:

  1. Load the same environment modules that you loaded when you created the virtual environment, e.g. module load python scipy-stack
  2. Activate the environment, source ENV/bin/activate
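
For example:

[name@server ~]$ module load python scipy-stack
[name@server ~]$ source ENV/bin/activate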

Installing packages

Once you have a virtual environment loaded, you will be able to run the pip command. This command takes care of compiling and installing most Python packages and their dependencies. A comprehensive index of Python packages can be found at PyPI.

All of pip's commands are explained in detail in the user guide. We will cover only the most important commands, using the NumPy package as an example.

We first load the Python interpreter:

[name@server ~]$ module load python/3.6

We then activate the virtual environment, previously created using the virtualenv command:

[name@server ~]$ source ENV/bin/activate

Finally, we install the latest stable version of NumPy:

(ENV) [name@server ~]$ pip install numpy --no-index

The pip command can install packages from a variety of sources, including PyPI and pre-built distribution packages called Python wheels. Compute Canada provides Python wheels for a number of packages. In the above example, the --no-index option tells pip to not install from PyPI, but instead to install only from locally-available packages, i.e. the Compute Canada wheels.

Whenever a Compute Canada wheel is available for a given package, we strongly recommend using it via the --no-index option. Compared to using packages from PyPI, wheels compiled by Compute Canada staff can prevent issues with missing or conflicting dependencies, and are optimized for our clusters' hardware and libraries. See Available wheels.

If you omit the --no-index option, pip will search both PyPI and local packages, and use the latest version available. If PyPI has a newer version, it will be installed instead of the Compute Canada wheel, possibly causing issues. If you are certain that you prefer to download a package from PyPI rather than use a wheel, you can use the --no-binary option, which tells pip to ignore pre-built packages entirely. Note that this will also ignore wheels that are distributed through PyPI, and will always compile the package from source.
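
For example, to force NumPy to be compiled from source (a sketch; building from source typically takes much longer than installing a wheel):

(ENV) [name@server ~]$ pip install numpy --no-binary numpy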

To see where the pip command is installing a Python package from, you can tell it to be more verbose with the -vvv option.
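
For example:

(ENV) [name@server ~]$ pip install numpy --no-index -vvv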

Installing dependent packages

In some cases, such as TensorFlow, Compute Canada provides wheels for a specific host type (CPU or GPU), suffixed with _cpu or _gpu. Since no wheel named simply tensorflow exists, packages that depend on tensorflow will then fail to install. If my_package depends on numpy and tensorflow, the following will allow us to install it:

(ENV) [name@server ~]$ pip install numpy tensorflow_cpu --no-index
(ENV) [name@server ~]$ pip install my_package --no-deps

The --no-deps option tells pip to ignore dependencies.

Creating virtual environments inside of your jobs

Parallel filesystems such as the ones used on our clusters are very good at reading or writing large chunks of data, but can perform poorly under intensive use of small files. Launching software and loading libraries, such as starting Python and loading a virtual environment, can be slow for this reason.

As a workaround for this kind of slowdown, and especially for single-node Python jobs, you can create your virtual environment inside of your job, using the compute node's local disk. It may seem counter-intuitive to recreate your environment for every job, but it can be faster than running from the parallel filesystem, and gives you some protection against filesystem performance issues. Since a node-local virtualenv is only accessible on one node, this has to be done for each node in the job. The following job submission script demonstrates how to do this for a single-node job:


File : submit_venv.sh

#!/bin/bash
#SBATCH --account=def-someuser
#SBATCH --mem-per-cpu=1.5G      # increase as needed
#SBATCH --time=1:00:00

module load python/3.6
virtualenv --no-download $SLURM_TMPDIR/env
source $SLURM_TMPDIR/env/bin/activate
pip install --no-index --upgrade pip

pip install --no-index -r requirements.txt
python ...


where the requirements.txt file will have been created from a test environment. For example, if you want to create an environment for TensorFlow, you would do the following on a login node:

[name@server ~]$ module load python/3.6
[name@server ~]$ ENVDIR=/tmp/$RANDOM
[name@server ~]$ virtualenv --no-download $ENVDIR
[name@server ~]$ source $ENVDIR/bin/activate
[name@server ~]$ pip install --no-index --upgrade pip
[name@server ~]$ pip install --no-index tensorflow_gpu
[name@server ~]$ pip freeze > requirements.txt
[name@server ~]$ deactivate
[name@server ~]$ rm -rf $ENVDIR


This will yield a file called requirements.txt, with content such as the following:

File : requirements.txt

absl-py==0.5.0
astor==0.7.1
gast==0.2.0
grpcio==1.17.1
h5py==2.8.0
Keras-Applications==1.0.6
Keras-Preprocessing==1.0.5
Markdown==2.6.11
numpy==1.16.0
protobuf==3.6.1
six==1.12.0
tensorboard==1.12.2
tensorflow-gpu==1.12.0+computecanada
termcolor==1.1.0
Werkzeug==0.14.1


This file will ensure that your environment is reproducible between jobs.

Note that the above instructions require all of the packages you need to be available in the Python wheels that we provide (see "Available wheels" below). If a wheel is not available in our wheelhouse, you can pre-download it (see the "Pre-downloading packages" section below). If you think that the missing wheel should be included in the Compute Canada wheelhouse, please contact Technical support to make a request.

Available wheels

Currently available wheels are listed on the Available Python wheels page. You can also run the command avail_wheels on the cluster. By default, it will:

  • only show you the latest version of a specific package (unless versions are given);
  • only show you versions that are compatible with the Python module (if one is loaded), otherwise all Python versions will be shown;
  • only show you versions that are compatible with the CPU architecture that you are currently running on.

To list wheels containing "cdf" (case insensitive) in their name:

[name@server ~]$ avail_wheels --name "*cdf*"
name     version    build    python    arch
-------  ---------  -------  --------  ------
netCDF4  1.4.0               cp27      avx2

Or to list all available versions:

[name@server ~]$ avail_wheels --name "*cdf*" --all_versions
name     version    build    python    arch
-------  ---------  -------  --------  ------
netCDF4  1.4.0               cp27      avx2
netCDF4  1.3.1               cp36      avx2
netCDF4  1.3.1               cp35      avx2
netCDF4  1.3.1               cp27      avx2
netCDF4  1.2.8               cp27      avx2

Or to list a specific version:

[name@server ~]$ avail_wheels --name "*cdf*" --version 1.3
name     version    build    python    arch
-------  ---------  -------  --------  ------
netCDF4  1.3.1               cp36      avx2
netCDF4  1.3.1               cp35      avx2
netCDF4  1.3.1               cp27      avx2

Or to list for a specific version of Python:

[name@server ~]$ avail_wheels --name "*cdf*" --python 3.6
name     version    build    python    arch
-------  ---------  -------  --------  ------
netCDF4  1.3.1               cp36      avx2

The python column tells us for which Python version the wheel is available, where cp36 stands for CPython 3.6.

A few other examples
  • List multiple packages and multiple versions: avail_wheels numpy biopython --version 1.15.0 1.7
  • List the wheels for specific architectures: avail_wheels --arch avx avx2
  • List the wheels specifically for GPU and display only name, version, python columns: avail_wheels --column name version python --all_versions --name "*gpu"
  • Display usage and help: avail_wheels --help

Pre-downloading packages

Here is how to pre-download a package called tensorboardX on a login node, and install it on a compute node:

  1. Run pip download --no-deps tensorboardX. This will download the package as tensorboardX-1.9-py2.py3-none-any.whl (or similar) in the working directory. The syntax of pip download is the same as pip install.
  2. If the filename does not end with none-any, and ends with something like linux_x86_64 or manylinux*_x86_64, the wheel might not function correctly. You should contact Technical support so that we can compile the wheel and make it available on our systems.
  3. Then, when installing, supply the path to the file: pip install tensorboardX-1.9-py2.py3-none-any.whl.
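
Putting these steps together (a sketch, assuming the downloaded file is named as above):

[name@server ~]$ pip download --no-deps tensorboardX
(ENV) [name@server ~]$ pip install tensorboardX-1.9-py2.py3-none-any.whl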

Parallel programming with the Python multiprocessing module

Doing parallel programming with Python can be an easy way to get results faster. A common way of doing so is to use the multiprocessing module. Of particular interest is the Pool class of this module, since it allows one to control the number of processes started in parallel, and to apply the same calculation to multiple data items. As an example, suppose we want to calculate the cube of a list of numbers. The serial code could be written in either of the following two equivalent ways:

File : cubes_sequential.py

def cube(x):
    return x**3

data = [1, 2, 3, 4, 5, 6]
cubes = [cube(x) for x in data]
print(cubes)


File : cubes_sequential.py

def cube(x):
    return x**3

data = [1, 2, 3, 4, 5, 6]
cubes = list(map(cube,data))
print(cubes)


Using the Pool class to run in parallel, the above codes become:

File : cubes_parallel.py

import multiprocessing as mp

def cube(x):
    return x**3

pool = mp.Pool(processes=4)
data = [1, 2, 3, 4, 5, 6]
results = [pool.apply_async(cube, args=(x,)) for x in data]
cubes = [p.get() for p in results]
print(cubes)


File : cubes_parallel.py

import multiprocessing as mp

def cube(x):
    return x**3

pool = mp.Pool(processes=4)
data = [1, 2, 3, 4, 5, 6]
cubes = pool.map(cube, data)
print(cubes)


However, the above examples are limited to using 4 processes. On a cluster, it is very important to use the cores that are allocated to your job. Launching more processes than the cores you requested will slow down your calculation and possibly overload the compute node; launching fewer will waste resources and leave cores idle. The correct number of cores to use in your code is determined by the amount of resources you requested from the scheduler. For example, if you have the same computation to perform on many tens of data items or more, it would make sense to use all of the cores of a node. In this case, you can write your job submission script with the following header:

File : submit.sh

#!/bin/bash
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=32

python cubes_parallel.py


and then your code would become the following:

File : cubes_parallel.py

import multiprocessing as mp
import os

def cube(x):
    return x**3

ncpus = int(os.environ.get('SLURM_CPUS_PER_TASK',default=1))
pool = mp.Pool(processes=ncpus)
data = [1, 2, 3, 4, 5, 6]
results = [pool.apply_async(cube, args=(x,)) for x in data]
cubes = [p.get() for p in results]
print(cubes)


File : cubes_parallel.py

import multiprocessing as mp
import os

def cube(x):
    return x**3

ncpus = int(os.environ.get('SLURM_CPUS_PER_TASK',default=1))
pool = mp.Pool(processes=ncpus)
data = [1, 2, 3, 4, 5, 6]
cubes = pool.map(cube, data)
print(cubes)


Note that in the above example, the function cube itself is sequential. If you are calling an external library such as numpy, it is possible that the functions called by your code are themselves parallel. If you want to distribute processes with the technique above, you should verify whether the functions you call are themselves parallel, and if they are, you need to control how many threads they themselves use. If, for example, they use all the cores available (32 in the above example), and you are yourself starting 32 processes, you will slow down your code and possibly overload the node as well.
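
A minimal sketch of one common way to do this, assuming the library respects the standard threading environment variables (many numpy builds use OpenMP, OpenBLAS or Intel MKL internally, but you should verify this for your particular library), is to set these in your job script before starting Python:

export OMP_NUM_THREADS=1        # one thread per process for OpenMP-based code
export OPENBLAS_NUM_THREADS=1   # same, for OpenBLAS
export MKL_NUM_THREADS=1        # same, for Intel MKL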

Note that the multiprocessing module is restricted to a single compute node, so the speedup achievable by your program is usually limited to the total number of CPU cores in that node. If you want to go beyond this limit and use multiple nodes, consider using mpi4py or PySpark. Other methods of parallelizing Python (not all of them necessarily supported on Compute Canada clusters) are listed here. Also note that you can often greatly improve the performance of your Python program by ensuring it is written efficiently, so do that first, before parallelizing. If you are not sure whether your Python code is efficient, please contact technical support and have them look at your code.

Anaconda

Please see Anaconda.

Jupyter

Please see Jupyter.