spaCy
Revision as of 15:43, 13 November 2018
spaCy is a Python package that provides industrial-strength natural language processing.
Installation
Latest available wheels
To see the latest version of spaCy that we have built:
[name@server ~]$ avail_wheels spacy thinc thinc_gpu_ops
For more information on listing wheels, see listing available wheels.
Pre-built wheel
The preferred option is to install spaCy from the Python wheel that we compile, as follows:
1. Load a Python module, either python/2.7, python/3.5, or python/3.6.
2. Create and start a virtual environment.
3. Install spaCy in the virtual environment with pip install.
For both GPU and CPU support:
(venv) [name@server ~]$ pip install spacy[cuda] --no-index
If you only need CPU support:
(venv) [name@server ~]$ pip install spacy --no-index
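Steps 2 and 3 above can be sketched in plain Python with the standard-library venv module (a minimal sketch, not the cluster's recommended virtualenv command; the pip install itself only succeeds on the cluster, where the local wheelhouse exists, so it is shown as a comment):

```python
# Step 2: create a virtual environment with the stdlib `venv` module.
import os
import tempfile
import venv

target = os.path.join(tempfile.mkdtemp(), "venv")
venv.create(target, with_pip=True)  # bootstraps pip inside the environment

pip = os.path.join(target, "bin", "pip")
print(os.path.exists(pip))  # → True

# Step 3 (on the cluster only, where local wheels are available):
# subprocess.run([pip, "install", "spacy", "--no-index"])
```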
GPU version: for now, you must also tell the dynamic linker where the CUDA libraries are located:
(venv) [name@server ~]$ module load gcc/5.4.0 cuda/9
(venv) [name@server ~]$ export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
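The export line above prepends $CUDA_HOME/lib64 to the dynamic linker's search path, so the CUDA shared libraries are found first. The same manipulation can be sketched in Python (the cuda_home path is hypothetical; on the cluster it is set by the cuda module):

```python
import os

# Hypothetical CUDA root; `module load cuda/9` sets $CUDA_HOME on the cluster.
cuda_home = "/opt/cuda-9.0"

# Prepend <cuda_home>/lib64 to LD_LIBRARY_PATH, keeping any existing entries.
old = os.environ.get("LD_LIBRARY_PATH", "")
new = os.path.join(cuda_home, "lib64") + ((":" + old) if old else "")
os.environ["LD_LIBRARY_PATH"] = new

print(os.environ["LD_LIBRARY_PATH"].split(":")[0])  # → /opt/cuda-9.0/lib64
```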
Note: if you want to use the thinc PyTorch wrapper, you will also need to install the torch_cpu or torch_gpu wheel.