[[Category:CVMFS]]
<languages />
<translate>
= Introduction = <!--T:1-->
We provide repositories of software and data via a file system called the [[CVMFS|CERN Virtual Machine File System]] (CVMFS). On our systems, CVMFS is already set up for you, so the repositories are automatically available for your use. For more information on using our software environment, please refer to wiki pages [[Available software]], [[Using modules]], [[Python]], [[R]] and [[Installing software in your home directory]].
<!--T:2-->
The purpose of this page is to describe how you can install and configure CVMFS on <i>your</i> computer or cluster, so that you can access the same repositories (and software environment) on your system that are available on ours.
<!--T:3-->
The software environment described on this page has been [https://ssl.linklings.net/conferences/pearc/pearc19_program/views/includes/files/pap139s3-file1.pdf presented] at Practices and Experience in Advanced Research Computing 2019 (PEARC 2019).
= Before you start = <!--T:4-->
{{Note|Note to staff: see the [https://wiki.computecanada.ca/staff/CVMFS_client_setup internal documentation].|reminder}}
</translate>
{{Panel
   |title=Important
   |panelstyle=callout
   |content=
<translate><!--T:55--> <b>Please [[Accessing_CVMFS#Subscribe_to_announcements|subscribe to announcements]] to remain informed of important changes regarding our software environment and CVMFS, and fill out the [https://docs.google.com/forms/d/1eDJEeaMgooVoc4lTkxcZ9y65iR8hl4qeXMOEU9slEck/viewform registration form]. If use of our software environment contributes to your research, please acknowledge it according to [https://alliancecan.ca/en/services/advanced-research-computing/acknowledging-alliance these guidelines].</b> (We would also appreciate it if you cite our [https://ssl.linklings.net/conferences/pearc/pearc19_program/views/includes/files/pap139s3-file1.pdf paper].) </translate>
}}
<translate>
== Subscribe to announcements == <!--T:5-->
Occasionally, changes to CVMFS or to the software and other content provided by our CVMFS repositories <b>may affect users</b> or <b>require administrators to take action</b> to ensure uninterrupted access to our CVMFS repositories. To receive these important but infrequent notifications, subscribe to the cvmfs-announce@gw.alliancecan.ca mailing list by emailing [mailto:cvmfs-announce+subscribe@gw.alliancecan.ca cvmfs-announce+subscribe@gw.alliancecan.ca] and then replying to the confirmation email you subsequently receive. (Our staff can alternatively subscribe [https://groups.google.com/u/0/a/gw.alliancecan.ca/g/cvmfs-announce/about here].)


== Terms of use and support == <!--T:6-->
The CVMFS client software is provided by CERN. Our CVMFS repositories are provided <b>without any warranty</b>. We reserve the right to limit or block your access to the CVMFS repositories and software environment if you violate applicable [https://ccdb.computecanada.ca/agreements/user_aup_2021/user_display terms of use] or at our discretion.


== CVMFS requirements == <!--T:7-->
=== For a single system ===
To install CVMFS on an individual system, such as your laptop or desktop, you will need:
* A supported operating system (see [[Accessing_CVMFS#Minimal_requirements|Minimal requirements below]]).
* Support for [https://en.wikipedia.org/wiki/Filesystem_in_Userspace FUSE].
* Approximately 50 GB of available local storage for the cache. (It is filled only based on usage, and a larger or smaller cache may suit different situations; for light use on a personal computer, ~5-10 GB may suffice. See [https://cvmfs.readthedocs.io/en/stable/cpt-configure.html#sct-cache cache settings] for more details.)
* Outbound HTTP access to the internet.
** Or at least outbound HTTP access to one or more local proxy servers.


<!--T:8-->
If your system lacks FUSE support or local storage, or has limited network connectivity or other restrictions, you may be able to use one of the [https://cvmfs.readthedocs.io/en/stable/cpt-hpc.html other options].
=== For multiple systems === <!--T:9-->
If multiple CVMFS clients are deployed, for example on a cluster, in a laboratory, campus or other site, each system must meet the above requirements, and the following considerations apply as well:
* We recommend that you deploy forward caching HTTP proxy servers at your site to improve performance and bandwidth usage, especially if you have a large number of clients. Refer to [https://cvmfs.readthedocs.io/en/stable/cpt-squid.html Setting up a Local Squid Proxy].
** Note that if you have only one such proxy server it will be a single point of failure for your site. Generally, you should have at least two local proxies at your site, and potentially additional nearby or regional proxies as backups; a configuration sketch follows this list.
* It is recommended to synchronize the identity of the <code>cvmfs</code> service account across all client nodes (e.g. using LDAP or other means).
** This facilitates use of an [https://cvmfs.readthedocs.io/en/stable/cpt-configure.html#alien-cache alien cache] and should be done <b>before</b> CVMFS is installed. Even if you do not anticipate using an alien cache at this time, it is easier to synchronize the accounts initially than to change them later.
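
As a sketch of the proxy recommendation above, the <code>CVMFS_HTTP_PROXY</code> client setting could look like the following (hostnames are hypothetical). Proxies separated by a vertical bar are load-balanced, while <code>;</code> introduces a fallback group, here a direct connection as a last resort:
{{File|name=/etc/cvmfs/default.local (excerpt)|contents=
CVMFS_HTTP_PROXY="http://proxy1.example.org:3128{{!}}http://proxy2.example.org:3128;DIRECT"
}}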


== Software environment requirements == <!--T:10-->
=== Minimal requirements ===
* Supported operating systems:
** Linux: with a kernel 2.6.32 or newer for our 2016 and 2018 environments, and 3.2 or newer for the 2020 environment.
** Windows: with Windows Subsystem for Linux version 2, with a Linux distribution that matches the requirement above.
** Mac OS: only through a virtual machine.
* CPU: an x86 CPU supporting at least one of the SSE3, AVX, AVX2 or AVX512 instruction sets.


=== Optimal requirements === <!--T:11-->
* Scheduler: Slurm or Torque, for tight integration with OpenMPI applications.
* Network interconnect: Ethernet, InfiniBand or OmniPath, for parallel applications.
* GPU: NVidia GPU with CUDA drivers (7.5 or newer) installed, for CUDA-enabled applications. (See below for caveats about CUDA.)
* As few Linux packages installed as possible (fewer packages reduce the odds of conflicts).


= Installing CVMFS = <!--T:12-->
If you wish to use [https://docs.ansible.com/ansible/latest/index.html Ansible], a [https://github.com/cvmfs-contrib/ansible-cvmfs-client CVMFS client role] is provided as-is, for basic configuration of a CVMFS client on an RPM-based system.
Some [https://github.com/ComputeCanada/CVMFS/tree/main/cvmfs-cloud-scripts scripts] may also be used to facilitate installing CVMFS on cloud instances.
Otherwise, use the following instructions.
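
For reference, on an RPM-based distribution the upstream instructions amount to something like the following; consult the guide linked below for the current package names and repository URL:
{{Command|sudo yum install https://ecsft.cern.ch/dist/cvmfs/cvmfs-release/cvmfs-release-latest.noarch.rpm}}
{{Command|sudo yum install cvmfs cvmfs-config-default}}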


== Pre-installation == <!--T:54-->
It is recommended that the local CVMFS cache (located at <code>/var/lib/cvmfs</code> by default, configurable via the <code>CVMFS_CACHE_BASE</code> setting) be on a dedicated file system so that the storage usage of CVMFS is not shared with that of other applications. Accordingly, you should provision that file system <b>before</b> installing CVMFS.
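
For example, assuming a spare block device <code>/dev/vdb</code> (a hypothetical name; substitute your own) on the client, a dedicated cache file system could be provisioned along these lines:
{{Command|sudo mkfs.ext4 /dev/vdb}}
{{Command|sudo mkdir -p /var/lib/cvmfs}}
{{Command|sudo sh -c 'echo "/dev/vdb /var/lib/cvmfs ext4 defaults 0 0" >> /etc/fstab'}}
{{Command|sudo mount /var/lib/cvmfs}}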


== Installation and configuration == <!--T:22-->
For installation instructions, refer to [https://cvmfs.readthedocs.io/en/stable/cpt-quickstart.html#getting-the-software Getting the Software].


<!--T:74-->
For standard client configuration, see [https://cvmfs.readthedocs.io/en/stable/cpt-quickstart.html#setting-up-the-software Setting up the Software] and [http://cvmfs.readthedocs.io/en/stable/apx-parameters.html#client-parameters Client parameters].


<!--T:73-->
The <code>soft.computecanada.ca</code> repository is provided by the default configuration, so no additional steps are required to access it (though you may wish to include it in <code>CVMFS_REPOSITORIES</code> in your client configuration).
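
As an illustration, a minimal client configuration in <code>/etc/cvmfs/default.local</code> might look like this. The values are examples only: the quota (in MB) matches the roughly 50 GB cache suggested above, and <code>DIRECT</code> bypasses proxies, which suits a single machine but not a large site:
{{File|name=/etc/cvmfs/default.local|contents=
CVMFS_REPOSITORIES=soft.computecanada.ca
CVMFS_QUOTA_LIMIT=50000
CVMFS_HTTP_PROXY=DIRECT
}}
After editing the configuration, run:
{{Command|sudo cvmfs_config setup}}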


== Testing == <!--T:27-->


<!--T:28-->
* First ensure that the repositories you want to test are listed in <code>CVMFS_REPOSITORIES</code>.
* Validate the configuration:
{{Command|sudo cvmfs_config chksetup}}
* Make sure to address any warnings or errors that are reported.
* Check that the repositories are OK:
{{Command|cvmfs_config probe}}
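* If the probe succeeds, you can also verify that the repository is mounted and browsable, for example:
{{Command|ls /cvmfs/soft.computecanada.ca}}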


<!--T:29-->
If you encounter problems, [https://cvmfs.readthedocs.io/en/stable/cpt-quickstart.html#troubleshooting this debugging guide] may help.


= Enabling our environment in your session = <!--T:33-->

Once you have mounted the CVMFS repository, enabling our environment in your sessions is as simple as running the bash script <code>/cvmfs/soft.computecanada.ca/config/profile/bash.sh</code>.
This will load some default modules. If you want to mimic a specific cluster exactly, set the environment variable <code>CC_CLUSTER</code> to one of <code>beluga</code>, <code>cedar</code> or <code>graham</code> before sourcing the script, for example:
{{Command|export CC_CLUSTER{{=}}beluga}}
{{Command|source /cvmfs/soft.computecanada.ca/config/profile/bash.sh}}


<!--T:34-->
The above command <b>will not run anything if your user ID is below 1000</b>. This is a safeguard, because you should not rely on our software environment for privileged operations. If you nevertheless want to enable our environment, you can first define the environment variable <code>FORCE_CC_CVMFS=1</code>, with the command
{{Command|export FORCE_CC_CVMFS{{=}}1}}
or you can create a file <code>$HOME/.force_cc_cvmfs</code> in your home folder if you want it to always be active, with
{{Command|touch $HOME/.force_cc_cvmfs}}


<!--T:35-->
If, on the contrary, you want to avoid enabling our environment, you can define <code>SKIP_CC_CVMFS=1</code> or create the file <code>$HOME/.skip_cc_cvmfs</code> to ensure that the environment is never enabled in a given account.


== Customizing your environment == <!--T:36-->
By default, enabling our environment will automatically detect a number of features of your system and load default modules. You can control this default behaviour by defining specific environment variables prior to enabling the environment; these are described below.


=== Environment variables === <!--T:37-->
==== <code>CC_CLUSTER</code> ====
This variable is used to identify a cluster. It is used to send some information to the system logs, and to define behaviour relative to licensed software. By default, its value is <code>computecanada</code>. You may want to set this variable if you want system logs tailored to the name of your system.


==== <code>RSNT_ARCH</code> ==== <!--T:38-->
This environment variable identifies the set of CPU instructions supported by the system. By default, it is detected automatically based on <code>/proc/cpuinfo</code>. However, if you want to force a specific instruction set to be used, you can define this variable before enabling the environment. The supported instruction sets for our software environment are:
* sse3
* avx
* avx2
* avx512
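
For example, to force the avx2 instruction set even on a host whose CPU supports a newer one, define this before enabling the environment:
{{Command|export RSNT_ARCH{{=}}avx2}}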


==== <code>RSNT_INTERCONNECT</code> ==== <!--T:39-->
This environment variable identifies the type of interconnect supported by the system. By default, it is detected automatically based on the presence of <code>/sys/module/opa_vnic</code> (for Intel OmniPath) or <code>/sys/module/ib_core</code> (for InfiniBand). The fall-back value is <code>ethernet</code>. The supported values are:
* omnipath
* infiniband
* ethernet


<!--T:40-->
The value of this variable determines which transport protocol options will be used by OpenMPI.
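
For example, to force OpenMPI to use Ethernet transports even on a node with an InfiniBand interface:
{{Command|export RSNT_INTERCONNECT{{=}}ethernet}}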


==== <code>RSNT_CUDA_DRIVER_VERSION</code> ==== <!--T:61-->
This environment variable is used to hide or show some versions of our CUDA modules, according to the version of the NVidia drivers they require, as documented [https://docs.nvidia.com/deploy/cuda-compatibility/index.html here]. If not defined, it is detected based on the files found under <code>/usr/lib64/nvidia</code>.

<!--T:62-->
For backward compatibility reasons, if no library is found under <code>/usr/lib64/nvidia</code>, we assume that the installed drivers are recent enough for CUDA 10.2. This is because this feature was introduced just as CUDA 11.0 was released.

<!--T:63-->
Defining <code>RSNT_CUDA_DRIVER_VERSION=0.0</code> will hide all versions of CUDA.
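
For example, if your installed drivers support up to CUDA 11.0 (an illustrative value), the following, defined before enabling the environment, would expose only the CUDA modules usable with those drivers:
{{Command|export RSNT_CUDA_DRIVER_VERSION{{=}}11.0}}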
 
==== <code>RSNT_LOCAL_MODULEPATHS</code> ==== <!--T:64-->
This environment variable allows you to define locations for local module trees, which will automatically be meshed into our central tree. To use it, define
{{Command|export RSNT_LOCAL_MODULEPATHS{{=}}/opt/software/easybuild/modules}}
and then install your [[EasyBuild]] recipe using
{{Command|eb --installpath /opt/software/easybuild <your recipe>.eb}}
 
<!--T:65-->
This will use our module naming scheme to install your recipe locally, and it will be picked up by the module hierarchy. For example, if the recipe uses the <code>iompi,2018.3</code> toolchain, the module becomes available after loading the <code>intel/2018.3</code> and <code>openmpi/3.1.2</code> modules.
 
==== <code>LMOD_SYSTEM_DEFAULT_MODULES</code> ==== <!--T:41-->
This environment variable defines which modules are loaded by default. If it is left undefined, our environment will define it to load the <code>StdEnv</code> module, which will load by default a version of the Intel compiler, and a version of OpenMPI.
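
For example, to make a specific standard environment version the default (assuming a <code>StdEnv/2020</code> module exists in the stack), define this before enabling the environment:
{{Command|export LMOD_SYSTEM_DEFAULT_MODULES{{=}}StdEnv/2020}}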
 
==== <code>MODULERCFILE</code> ==== <!--T:42-->
This is an environment variable used by Lmod to define the default version of modules and aliases. You can define your own <code>modulerc</code> file and add it to the environment variable <code>MODULERCFILE</code>. This will take precedence over what is defined in our environment.
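
As a minimal sketch (the file path is arbitrary, and the version shown is only an example), the following <code>modulerc</code> file would make <code>openmpi/3.1.2</code> the default OpenMPI module:
{{File|name=$HOME/.modulerc|contents=
module-version openmpi/3.1.2 default
}}
{{Command|export MODULERCFILE{{=}}$HOME/.modulerc}}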
 
=== System paths === <!--T:43-->
While our software environment strives to be as independent from the host operating system as possible, there are a number of system paths that are taken into account by our environment to facilitate interaction with tools installed on the host operating system. Below are some of these paths.
 
==== <code>/opt/software/modulefiles</code> ==== <!--T:44-->
If this path exists, it will automatically be added to the default <code>MODULEPATH</code>. This allows the use of our software environment while also maintaining locally installed modules.
 
==== <code>$HOME/modulefiles</code> ==== <!--T:45-->
If this path exists, it will automatically be added to the default <code>MODULEPATH</code>. This allows the use of our software environment while also allowing installation of modules inside of home directories.
 
==== <code>/opt/software/slurm/bin</code>, <code>/opt/software/bin</code>, <code>/opt/slurm/bin</code> ==== <!--T:46-->
These paths are all automatically added to the default <code>PATH</code>. This allows your own executables to be added to the search path.
 
== Installing software locally == <!--T:57-->
Since June 2020, we support installing additional modules locally and having them discovered by our central hierarchy. This was discussed and implemented in [https://github.com/ComputeCanada/software-stack/issues/11 this issue].
 
<!--T:58-->
To do so, first identify a path where you want to install local software. For example <code>/opt/software/easybuild</code>. Make sure that folder exists. Then, export the environment variable <code>RSNT_LOCAL_MODULEPATHS</code>:
{{Command|export RSNT_LOCAL_MODULEPATHS{{=}}/opt/software/easybuild/modules}}
 
<!--T:59-->
If you want this branch of the software hierarchy to be found by your users, we recommend you define this environment variable in the cluster's common profile. Then, install the software packages you want using [[EasyBuild]]:
{{Command|eb --installpath /opt/software/easybuild <some easyconfig recipe>}}
 
<!--T:60-->
This will install the software locally, using the hierarchical layout driven by our module naming scheme. It will also be found automatically when users load our compiler, MPI and CUDA modules.
 
= Caveats = <!--T:47-->
== Use of software environment by system administrators ==
If you perform privileged system operations, or operations related to CVMFS, [[Accessing_CVMFS#Enabling_our_environment_in_your_session|ensure]] that your session does <i>not</i> depend on our software environment when performing any such operations. For example, if you attempt to update CVMFS using YUM while your session uses a Python module loaded from CVMFS, YUM may run using that module and lose access to it during the update, and the update may become deadlocked. Similarly, if your environment depends on CVMFS and you reconfigure CVMFS in a way that temporarily interrupts access to CVMFS, your session may interfere with CVMFS operations, or hang. (When these precautions are taken, in most cases CVMFS can be updated and reconfigured without interrupting access to CVMFS for users, because the update or reconfiguration itself will complete successfully without encountering a circular dependency.)
 
== Software packages that are not available == <!--T:49-->
On our systems, a number of commercial software packages are made available to authorized users according to the terms of the license owners, but they are not available externally, and following the instructions on this page will not grant you access to them. This includes for example the Intel and Portland Group compilers. While the modules for the Intel and PGI compilers are available, you will only have access to the redistributable parts of these packages, usually the shared objects. These are sufficient to run software packages compiled with these compilers, but not to compile new software.
 
== CUDA location == <!--T:50-->
For CUDA-enabled software packages, our software environment relies on having driver libraries installed in the path <code>/usr/lib64/nvidia</code>. However, on some platforms, recent NVidia drivers will install libraries in <code>/usr/lib64</code> instead. Because it is not possible to add <code>/usr/lib64</code> to the <code>LD_LIBRARY_PATH</code> without also pulling in all system libraries (which may have incompatibilities with our software environment), we recommend that you create symbolic links in <code>/usr/lib64/nvidia</code> pointing to the installed NVidia libraries. The script below installs the drivers and creates the needed symbolic links; adjust the driver version as needed.
 
<!--T:56-->
{{File|name=script.sh|contents=
# Adjust to the NVidia driver version you want to install
NVIDIA_DRV_VER="410.48"
nv_pkg=( "nvidia-driver" "nvidia-driver-libs" "nvidia-driver-cuda" "nvidia-driver-cuda-libs" "nvidia-driver-NVML" "nvidia-driver-NvFBCOpenGL" "nvidia-modprobe" )
# Install the driver packages, pinned to the chosen version
yum -y install ${nv_pkg[@]/%/-${NVIDIA_DRV_VER{{)}}{{)}}
# Link every driver library installed in /usr/lib64 into /usr/lib64/nvidia
mkdir -p /usr/lib64/nvidia
for file in $(rpm -ql ${nv_pkg[@]}); do
  [ "${file%/*}" = '/usr/lib64' ] && [ ! -d "${file}" ] && \
  ln -snf "$file" "${file%/*}/nvidia/${file##*/}"
done
}}


== <code>LD_LIBRARY_PATH</code> == <!--T:51-->
Our software environment is designed to use [https://en.wikipedia.org/wiki/Rpath RUNPATH]. Defining <code>LD_LIBRARY_PATH</code> is [https://gms.tf/ld_library_path-considered-harmful.html not recommended] and can lead to the environment not working.


== Missing libraries == <!--T:52-->
Because we do not define <code>LD_LIBRARY_PATH</code>, and because our libraries are not installed in default Linux locations, binary packages such as Anaconda will often not find libraries that they would usually expect. Please see our documentation on [[Installing_software_in_your_home_directory#Installing_binary_packages|Installing binary packages]].


== dbus == <!--T:53-->
For some applications, <code>dbus</code> needs to be installed; it must be installed locally, on the host operating system.
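For example, on an RPM-based host:
{{Command|sudo yum install dbus}}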
</translate>
