Python
Description
Python is an interpreted programming language with a design philosophy stressing the readability of code. Its syntax is simple and expressive. Python has an extensive, easy-to-use standard library.
The capabilities of Python can be extended with packages developed by third parties. In general, to simplify operations, it is left up to individual users and groups to install these third-party packages in their own directories. However, most systems offer several versions of Python as well as tools to help you install the third-party packages that you need.
The following sections discuss the Python interpreter, and how to install and use packages.
Loading an interpreter
Default Python version
When you log into our clusters, a default Python version will be available, but that is generally not the one that you should use, especially if you need to install any Python packages. You should try to find out which version of Python is required to run your Python programs and load the appropriate module. If you are not sure which version you need, then it is reasonable to use the latest version available.
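For example, you can check what the default environment provides before loading anything (the version shown will vary by cluster):
[name@server ~]$ python --version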
Loading a Python module
To discover the versions of Python available:
[name@server ~]$ module avail python
You can then load the version of your choice using `module load`. For example, to load Python 3.10 you can use the command
[name@server ~]$ module load python/3.10
Supported Python versions
In general in the Python ecosystem, the transition to more modern versions of Python is accelerating, with many packages only supporting the latest few versions of Python 3.x. In our case, we provide prebuilt Python packages in our wheelhouse only for the 3 most recent Python versions available on the systems. This will result in dependency issues when trying to install those packages with older versions of Python. See Troubleshooting.
Below is a table indicating when we stopped building wheels for each version of Python.
| Python version | Date |
|---|---|
| 3.10 | |
| 3.9 | |
| 3.8 | |
| 3.7 | 2022-02 |
| 3.6 | 2021-02 |
| 3.5 | 2020-02 |
| 2.7 | 2020-01 |
SciPy stack
In addition to the base Python module, the SciPy package is also available as an environment module. The `scipy-stack` module includes:
- NumPy
- SciPy
- Matplotlib
- dateutil
- pytz
- IPython
- pyzmq
- tornado
- pandas
- Sympy
- nose
If you want to use any of these Python packages, load a Python version of your choice and then `module load scipy-stack`.
To get a complete list of the packages contained in `scipy-stack`, along with their version numbers, run `module spider scipy-stack/2020a` (replacing `2020a` with whichever version you want to find out about).
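For example, once both modules are loaded, a quick sanity check that the stack is usable might look like this (the module version shown is only illustrative):
[name@server ~]$ module load python/3.10 scipy-stack
[name@server ~]$ python -c "import numpy; print(numpy.__version__)"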
Creating and using a virtual environment
With each version of Python, we provide the tool virtualenv. This tool allows users to create virtual environments within which you can easily install Python packages. These environments allow one to install many versions of the same package, for example, or to compartmentalize a Python installation according to the needs of a specific project. Usually you should create your Python virtual environment(s) in your /home directory or in one of your /project directories. (See "Creating virtual environments inside of your jobs" below for a third alternative.)
Do not create your virtual environment under $SCRATCH as it may get partially deleted. Instead, create it inside your job.
To create a virtual environment, make sure you have selected a Python version with `module load python/X.Y.Z` as shown above in section Loading a Python module. If you expect to use any of the packages listed in section SciPy stack above, also run `module load scipy-stack/X.Y.Z`. Then enter the following command, where `ENV` is the name of the directory for your new environment:
[name@server ~]$ virtualenv --no-download ENV
Once the virtual environment has been created, it must be activated:
[name@server ~]$ source ENV/bin/activate
You should also upgrade `pip` in the environment:
[name@server ~]$ pip install --no-index --upgrade pip
To exit the virtual environment, simply enter the command `deactivate`:
(ENV) [name@server ~] deactivate
You can now use the same virtual environment over and over again. Each time:
- Load the same environment modules that you loaded when you created the virtual environment, e.g. `module load python scipy-stack`
- Activate the environment with `source ENV/bin/activate` (see the example below)
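For example, a later session reusing an existing environment might look like this, where `myscript.py` stands in for a hypothetical script of your own:
[name@server ~]$ module load python/3.10 scipy-stack
[name@server ~]$ source ENV/bin/activate
(ENV) [name@server ~] python myscript.py  # your own script
(ENV) [name@server ~] deactivate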
Installing packages
Once you have a virtual environment loaded, you will be able to run the pip command. This command takes care of compiling and installing most Python packages and their dependencies. A comprehensive index of Python packages can be found at PyPI.
All of `pip`'s commands are explained in detail in the user guide. We will cover only the most important commands and use the Numpy package as an example.
We first load the Python interpreter:
[name@server ~]$ module load python/3.10
We then activate the virtual environment, previously created using the `virtualenv` command:
[name@server ~]$ source ENV/bin/activate
Finally, we install the latest stable version of Numpy:
(ENV) [name@server ~] pip install numpy --no-index
The `pip` command can install packages from a variety of sources, including PyPI and prebuilt distribution packages called Python wheels. We provide Python wheels for a number of packages. In the above example, the `--no-index` option tells `pip` to not install from PyPI, but instead to install only from locally available packages, i.e. our wheels.
Whenever we provide a wheel for a given package, we strongly recommend using it by way of the `--no-index` option. Compared to using packages from PyPI, wheels that have been compiled by our staff can prevent issues with missing or conflicting dependencies, and are optimized for our clusters' hardware and libraries. See Available wheels.
If you omit the `--no-index` option, `pip` will search both PyPI and local packages, and use the latest version available. If PyPI has a newer version, it will be installed instead of our wheel, possibly causing issues. If you are certain that you prefer to download a package from PyPI rather than use a wheel, you can use the `--no-binary` option, which tells `pip` to ignore prebuilt packages entirely. Note that this will also ignore wheels that are distributed through PyPI, and will always compile the package from source.
To see where `pip` is installing a Python package from, which can help diagnose installation issues, you can tell it to be more verbose with the `-vvv` option. It is also worth mentioning that when installing multiple packages, it is advisable to install them with one command, as this helps pip resolve dependencies.
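For example, a single verbose installation of several wheels (the package list here is only illustrative) could look like:
(ENV) [name@server ~] pip install --no-index -vvv numpy scipy pandas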
Creating virtual environments inside of your jobs
Parallel filesystems such as the ones used on our clusters are very good at reading or writing large chunks of data, but can be bad for intensive use of small files. Launching software and loading libraries, such as starting Python and loading a virtual environment, can be slow for this reason.
As a workaround for this kind of slowdown, and especially for single-node Python jobs, you can create your virtual environment inside of your job, using the compute node's local disk. It may seem counter-intuitive to recreate your environment for every job, but it can be faster than running from the parallel filesystem, and will give you some protection against filesystem performance issues. This approach of creating a node-local virtualenv has to be done for each node in the job, since the virtualenv is only accessible on one node. The following job submission script demonstrates how to do this for a single-node job:
#!/bin/bash
#SBATCH --account=def-someuser
#SBATCH --mem-per-cpu=1.5G # increase as needed
#SBATCH --time=1:00:00
module load python/3.10
virtualenv --no-download $SLURM_TMPDIR/env
source $SLURM_TMPDIR/env/bin/activate
pip install --no-index --upgrade pip
pip install --no-index -r requirements.txt
python ...
where the `requirements.txt` file will have been created from a test environment. For example, if you want to create an environment for TensorFlow, you would do the following on a login node:
[name@server ~]$ module load python/3.10
[name@server ~]$ ENVDIR=/tmp/$RANDOM
[name@server ~]$ virtualenv --no-download $ENVDIR
[name@server ~]$ source $ENVDIR/bin/activate
[name@server ~]$ pip install --no-index --upgrade pip
[name@server ~]$ pip install --no-index tensorflow
[name@server ~]$ pip freeze --local > requirements.txt
[name@server ~]$ deactivate
[name@server ~]$ rm -rf $ENVDIR
This will yield a file called `requirements.txt`, with content such as the following:
absl_py==1.2.0+computecanada
astunparse==1.6.3+computecanada
cachetools==5.2.0+computecanada
certifi==2022.6.15+computecanada
charset_normalizer==2.1.0+computecanada
flatbuffers==1.12+computecanada
gast==0.4.0+computecanada
google-pasta==0.2.0+computecanada
google_auth==2.9.1+computecanada
google_auth_oauthlib==0.4.6+computecanada
grpcio==1.47.0+computecanada
h5py==3.6.0+computecanada
idna==3.3+computecanada
keras==2.9.0+computecanada
Keras-Preprocessing==1.1.2+computecanada
libclang==14.0.1+computecanada
Markdown==3.4.1+computecanada
numpy==1.23.0+computecanada
oauthlib==3.2.0+computecanada
opt-einsum==3.3.0+computecanada
packaging==21.3+computecanada
protobuf==3.19.4+computecanada
pyasn1==0.4.8+computecanada
pyasn1-modules==0.2.8+computecanada
pyparsing==3.0.9+computecanada
requests==2.28.1+computecanada
requests_oauthlib==1.3.1+computecanada
rsa==4.8+computecanada
six==1.16.0+computecanada
tensorboard==2.9.1+computecanada
tensorboard-data-server==0.6.1+computecanada
tensorboard_plugin_wit==1.8.1+computecanada
tensorflow==2.9.0+computecanada
tensorflow_estimator==2.9.0+computecanada
tensorflow_io_gcs_filesystem==0.23.1+computecanada
termcolor==1.1.0+computecanada
typing_extensions==4.3.0+computecanada
urllib3==1.26.11+computecanada
Werkzeug==2.1.2+computecanada
wrapt==1.13.3+computecanada
This file will ensure that your environment is reproducible between jobs.
Note that the above instructions require all of the packages you need to be available in the python wheels that we provide (see "Available wheels" below). If the wheel is not available in our wheelhouse, you can pre-download it (see "Pre-downloading packages" section below). If you think that the missing wheel should be included in our wheelhouse, please contact Technical support to make a request.
Creating virtual environments inside of your jobs (multi-node)
In order to run scripts across multiple nodes, each node must have its own virtual environment activated.
1. In your submission script, create the virtual environment on each allocated node:
srun --ntasks $SLURM_NNODES --tasks-per-node=1 bash << EOF
virtualenv --no-download $SLURM_TMPDIR/env
source $SLURM_TMPDIR/env/bin/activate
pip install --no-index --upgrade pip
pip install --no-index -r requirements.txt
EOF
2. Activate the virtual environment on the main node:
source $SLURM_TMPDIR/env/bin/activate;
3. Use `srun` to run your script:
srun python myscript.py;
Example (multi-node)
#!/bin/bash
#SBATCH --account=<your account>
#SBATCH --time=00:30:00
#SBATCH --nodes=2
#SBATCH --ntasks=2
#SBATCH --mem-per-cpu=2000M
module load StdEnv/2023 python/3.11 mpi4py
# create the virtual environment on each node:
srun --ntasks $SLURM_NNODES --tasks-per-node=1 bash << EOF
virtualenv --no-download $SLURM_TMPDIR/env
source $SLURM_TMPDIR/env/bin/activate
pip install --no-index --upgrade pip
pip install --no-index -r requirements.txt
EOF
# activate only on main node
source $SLURM_TMPDIR/env/bin/activate;
# srun exports the current env, which contains $VIRTUAL_ENV and $PATH variables
srun python myscript-mpi.py;
Available wheels
Currently available wheels are listed on the Available Python wheels page. You can also run the command `avail_wheels` on the cluster.
By default, it will:
- only show you the latest version of a specific package (unless versions are given);
- only show you versions that are compatible with the python module (if one loaded) or virtual environment (if activated), otherwise all versions will be shown;
- only show you versions that are compatible with the CPU architecture and software environment (StdEnv) that you are currently running on.
Names
To list wheels containing `cdf` (case insensitive) in their name:
[name@server ~]$ avail_wheels "*cdf*"
name version python arch
-------- --------- -------- -------
h5netcdf 0.7.4 py2,py3 generic
netCDF4 1.5.8 cp39 avx2
netCDF4 1.5.8 cp38 avx2
netCDF4 1.5.8 cp310 avx2
Or an exact name:
[name@server ~]$ avail_wheels numpy
name version python arch
------ --------- -------- -------
numpy 1.23.0 cp39 generic
numpy 1.23.0 cp38 generic
numpy 1.23.0 cp310 generic
Version
To list a specific version, you can use the same format as with `pip`:
[name@server ~]$ avail_wheels numpy==1.23
name version python arch
------ --------- -------- -------
numpy 1.23.0 cp39 generic
numpy 1.23.0 cp38 generic
numpy 1.23.0 cp310 generic
Or use the long option:
[name@server ~]$ avail_wheels numpy --version 1.23
name version python arch
------ --------- -------- -------
numpy 1.23.0 cp39 generic
numpy 1.23.0 cp38 generic
numpy 1.23.0 cp310 generic
With the `pip` format, you can use different operators: `==`, `<`, `>`, `~=`, `<=`, `>=`, `!=`. For instance, to list older versions:
[name@server ~]$ avail_wheels 'numpy<1.23'
name version python arch
------ --------- -------- -------
numpy 1.22.2 cp39 generic
numpy 1.22.2 cp38 generic
numpy 1.22.2 cp310 generic
And to list all available versions:
[name@server ~]$ avail_wheels "*cdf*" --all-version
name version python arch
-------- --------- -------- -------
h5netcdf 0.7.4 py2,py3 generic
netCDF4 1.5.8 cp39 avx2
netCDF4 1.5.8 cp38 avx2
netCDF4 1.5.8 cp310 avx2
netCDF4 1.5.6 cp38 avx2
netCDF4 1.5.6 cp37 avx2
netCDF4 1.5.4 cp38 avx2
netCDF4 1.5.4 cp37 avx2
netCDF4 1.5.4 cp36 avx2
Python
You can list a specific version of Python:
[name@server ~]$ avail_wheels 'numpy<1.23' --python 3.9
name version python arch
------ --------- -------- -------
numpy 1.22.2 cp39 generic
The python column tells us for which version the wheel is available, where `cp39` stands for CPython 3.9.
Requirements file
One can list available wheels based on a `requirements.txt` file with:
[name@server ~]$ avail_wheels -r requirements.txt
name version python arch
--------- --------- -------- -------
packaging 21.3 py3 generic
tabulate 0.8.10 py3 generic
And display wheels that are not available:
[name@server ~]$ avail_wheels -r requirements.txt --not-available
name version python arch
--------- --------- -------- -------
packaging 21.3 py3 generic
pip
tabulate 0.8.10 py3 generic
Pre-downloading packages
Here is how to pre-download a package called `tensorboardX` on a login node, and install it on a compute node:
1. Run `pip download --no-deps tensorboardX`. This will download the package as `tensorboardX-1.9-py2.py3-none-any.whl` (or similar) in the working directory. The syntax of `pip download` is the same as `pip install`.
2. If the filename does not end with `none-any`, and ends with something like `linux_x86_64` or `manylinux*_x86_64`, the wheel might not function correctly. You should contact Technical support so that we compile the wheel and make it available on our systems.
3. Then, when installing, give `pip` the path to the file: `pip install tensorboardX-1.9-py2.py3-none-any.whl`.
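Putting the pieces together, a minimal sketch of the workflow might look like this, where the exact wheel filename will vary with the version that gets downloaded:
[name@server ~]$ pip download --no-deps tensorboardX    # on a login node
(ENV) [name@server ~] pip install tensorboardX-1.9-py2.py3-none-any.whl    # later, inside your activated environment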
Parallel programming with the Python multiprocessing module
Doing parallel programming with Python can be an easy way to get results faster. A common way of doing so is to use the `multiprocessing` module. Of particular interest is the `Pool` class of this module, since it allows one to control the number of processes started in parallel, and to apply the same calculation to multiple data. As an example, suppose we want to calculate the `cube` of a list of numbers. The serial code would look like this:
def cube(x):
return x**3
data = [1, 2, 3, 4, 5, 6]
cubes = [cube(x) for x in data]
print(cubes)
def cube(x):
return x**3
data = [1, 2, 3, 4, 5, 6]
cubes = list(map(cube,data))
print(cubes)
Using the `Pool` class to run in parallel, the above codes become:
import multiprocessing as mp
def cube(x):
return x**3
pool = mp.Pool(processes=4)
data = [1, 2, 3, 4, 5, 6]
results = [pool.apply_async(cube, args=(x,)) for x in data]
cubes = [p.get() for p in results]
print(cubes)
import multiprocessing as mp
def cube(x):
return x**3
pool = mp.Pool(processes=4)
data = [1, 2, 3, 4, 5, 6]
cubes = pool.map(cube, data)
print(cubes)
The above examples will however be limited to using 4 processes. On a cluster, it is very important to use the cores that are allocated to your job. Launching more processes than you have cores requested will slow down your calculation and possibly overload the compute node. Launching fewer processes than you have cores will result in wasted resources and idle cores. The correct number of cores to use in your code is determined by the amount of resources you requested from the scheduler. For example, if you have the same computation to perform on many tens of data elements or more, it would make sense to use all of the cores of a node. In this case, you can write your job submission script with the following header:
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=32
python cubes_parallel.py
and then, your code would become the following:
import multiprocessing as mp
import os
def cube(x):
return x**3
ncpus = int(os.environ.get('SLURM_CPUS_PER_TASK',default=1))
pool = mp.Pool(processes=ncpus)
data = [1, 2, 3, 4, 5, 6]
results = [pool.apply_async(cube, args=(x,)) for x in data]
cubes = [p.get() for p in results]
print(cubes)
import multiprocessing as mp
import os
def cube(x):
return x**3
ncpus = int(os.environ.get('SLURM_CPUS_PER_TASK',default=1))
pool = mp.Pool(processes=ncpus)
data = [1, 2, 3, 4, 5, 6]
cubes = pool.map(cube, data)
print(cubes)
Note that in the above example, the function `cube` itself is sequential. If you are calling some external library, such as `numpy`, it is possible that the functions called by your code are themselves parallel. If you want to distribute processes with the technique above, you should verify whether the functions you call are themselves parallel and, if they are, you need to control how many threads they use. If, for example, they take all the cores available (32 in the above example) and you are yourself starting 32 processes, this will slow down your code and possibly overload the node as well.
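For instance, if the library's threading is controlled through OpenMP (true of many NumPy builds, but an assumption you should verify for your own library), you can pin each worker to a single thread in your submission script before launching Python:
export OMP_NUM_THREADS=1   # keep library calls single-threaded in each worker process
python cubes_parallel.py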
Note that the `multiprocessing` module is restricted to using a single compute node, so the speedup achievable by your program is usually limited to the total number of CPU cores in that node. If you want to go beyond this limit and use multiple nodes, consider using mpi4py or PySpark. Other methods of parallelizing Python (not all of them necessarily supported on our clusters) are listed here. Also note that you can greatly improve the performance of your Python program by ensuring it is written efficiently, so that should be done first before parallelizing. If you are not sure if your Python code is efficient, please contact technical support and have them look at your code.
Anaconda
Please see Anaconda.
Jupyter
Please see Jupyter.
Troubleshooting
Python script is hanging
By using the faulthandler module, you can edit your script to allow dumping a traceback after a timeout. See `faulthandler.dump_traceback_later()`.
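A minimal sketch of such an edit, assuming a 3600-second timeout suits your workload:
import faulthandler

# Dump the traceback of every thread to stderr if the script is still
# running after 3600 seconds, and repeat every 3600 seconds after that.
faulthandler.dump_traceback_later(3600, repeat=True)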
You can also inspect a python process while the job is running, without modifying it beforehand, using py-spy:
1. Install py-spy in a virtualenv in your home directory.
2. Attach to the running job, using `srun --pty --jobid JOBID bash`.
3. Use `htop -u $USER` to find the process ID of your python script.
4. Activate the virtualenv where py-spy is installed.
5. Run `py-spy top --pid PID` to see live feedback about where your code is spending time.
6. Run `py-spy dump --pid PID` to get a traceback of where your code is currently at.
Package 'X' requires a different Python: X.Y.Z not in '>=X.Y'
When installing packages, you may encounter an error similar to: `ERROR: Package 'X' requires a different Python: 3.6.10 not in '>=3.7'`.
The currently loaded python module (3.6.10 in this case) is not supported by that package. You can update to a more recent version, such as the latest available module, or install an older version of package 'X'.
Package has requirement X, but you'll have Y which is incompatible
When installing packages, you may encounter an error similar to: `ERROR: Package has requirement X, but you'll have Y which is incompatible.`
Upgrade `pip` to the latest version, or at least 21.3, to use the new dependency resolver:
(ENV) [name@server ~] pip install --no-index --upgrade pip
Then rerun your install command.
No matching distribution found for X
When installing packages, you may encounter an error similar to:
(ENV) [name@server ~] pip install X
ERROR: Could not find a version that satisfies the requirement X (from versions: none)
ERROR: No matching distribution found for X
`pip` did not find a package to install that satisfies the requirements (name, version or tags). Verify that the name and version are correct. Note also that `manylinux_x_y` wheels are discarded.
You can also verify that the package is available from the wheelhouse with the avail_wheels command or by searching the Available Python wheels page.
Installing many packages
When installing multiple packages, it is best to install them in one command when possible:
(ENV) [name@server ~] pip install --upgrade pip
(ENV) [name@server ~] pip install package1 package2 package3 package4
as this helps `pip` resolve dependency issues.
My virtual environment was working yesterday but not anymore
Packages are often updated and this leads to a non-reproducible virtual environment.
Another reason might be that the virtual environment was created in $SCRATCH and part of it was deleted with the automatic purge of the filesystem; this would make the virtual environment nonfunctional.
To remedy that, freeze the specific packages and their versions with
(ENV) [name@server ~] pip install --upgrade pip
(ENV) [name@server ~] pip install --no-index 'package1==X.Y' 'package2==X.Y.Z' 'package3<X.Y' 'package4>X.Y'
and then create a requirements file that will be used to install the required packages in your job.
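For example, once the environment is working again with pinned versions, you can capture them for reuse, mirroring the freeze step shown in section Creating virtual environments inside of your jobs:
(ENV) [name@server ~] pip freeze --local > requirements.txt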
X is not a supported wheel on this platform
When installing a package, you may encounter the following error: ERROR: package-3.8.1-cp311-cp311-manylinux_2_28_x86_64.whl is not a supported wheel on this platform.
Some packages may be incompatible or not supported on our systems. Two common cases are:
- trying to install a `manylinux` package;
- installing a Python package built for a different Python version (e.g. a package built for Python 3.11 when you have Python 3.9).
Some `manylinux` packages can be made available through the wheelhouse.
AttributeError: module ‘numpy’ has no attribute ‘X’
When installing `numpy` without specifying a version number, the latest available version will be installed.
In Numpy v1.20, many attributes were set for deprecation and are now expired in v1.24.
This may result in an error, depending on the attribute accessed. For example, `AttributeError: module 'numpy' has no attribute 'bool'`.
This can be solved by installing a previous version of Numpy: `pip install --no-index 'numpy<1.24'`.
ModuleNotFoundError: No module named 'X'
When trying to import a Python module, it may not be found. Some common causes are:
- the package is not installed or is not visible to the python interpreter;
- the name of the module to import is not the same as the name of the package that provides it;
- a broken virtual environment.
To avoid such problems, do not:
- modify the `PYTHONPATH` environment variable;
- modify the `PATH` environment variable;
- load a module while a virtual environment is activated (activate your virtual environment only after loading all the required modules, as illustrated below).
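For example, a safe order of operations is (the module version is illustrative):
[name@server ~]$ module load python/3.10 scipy-stack
[name@server ~]$ source ENV/bin/activate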
When you encounter this problem, first make sure you followed the above advice. Then:
- make sure that the package is installed; run `pip list`;
- double-check the module name (upper or lower case and underscores matter);
- make sure that the module is imported at the correct level (when importing from its source directory).
When in doubt, start over with a new virtual environment.
ImportError: numpy.core.multiarray failed to import
When trying to import a Python module that depends on Numpy, one may encounter `ImportError: numpy.core.multiarray failed to import`.
This is caused by an incompatible version of Numpy being installed or used; you must install a compatible version.
This is especially true with the release of Numpy 2.0, which breaks the ABI.
In the case of a wheel that was built against Numpy 1.x while 2.x is installed, you must install a lower version with: `pip install --no-index 'numpy<2.0'`.