Standard software environments
Revision as of 15:38, 12 February 2024

For questions about migration to different standard environments, please see Migration to the 2020 standard environment.

What are standard software environments?

Our software environments are provided through a set of modules which allow you to switch between different versions of software packages. These modules are organized in a tree structure with the trunk made up of typical utilities provided by any Linux environment. Branches are compiler versions and sub-branches are versions of MPI or CUDA.

Standard environments identify combinations of specific compiler and MPI modules that are used most commonly by our team to build other software. These combinations are grouped in modules named StdEnv.

There are four such standard environments, versioned 2016.4, 2018.3, 2020, and 2023, with each new version incorporating major improvements.

This page describes these changes and explains why you should upgrade to a more recent version.

In general, new versions of software packages will get installed with the newest software environment.

StdEnv/2023

This is the most recent iteration of our software environment with the most changes so far. It uses GCC 12.3.0, Intel 2023.1, and Open MPI 4.1.5 as defaults.

To activate this environment, use the command

[name@server ~]$ module load StdEnv/2023

Performance improvements

The minimum CPU instruction set supported by this environment is AVX2, or more generally, x86-64-v3. Even the compatibility layer which provides basic Linux commands is compiled with optimisations for this instruction set.
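Before relying on this environment on a given node, you can check whether its CPU advertises AVX2. This is a minimal sketch using the Linux-specific /proc/cpuinfo flags, not an official verification tool:

```shell
# Check whether this Linux node advertises the AVX2 instruction set,
# the minimum required by StdEnv/2023 (x86-64-v3).
if grep -q -m1 '\bavx2\b' /proc/cpuinfo; then
    echo "AVX2 supported: StdEnv/2023 binaries can run here"
else
    echo "no AVX2: StdEnv/2023 binaries will fail on this CPU"
fi
```

On clusters with mixed hardware, running this in a job rather than on a login node tells you about the compute node actually assigned to you.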

Changes of default modules

GCC becomes the default compiler, instead of Intel. We now build with Intel only those software packages known to perform better with the Intel compiler. CUDA becomes an add-on to Open MPI, rather than the other way around; that is, CUDA-aware MPI is loaded at run time only if CUDA is loaded. This allows many MPI libraries to be shared between the CUDA and non-CUDA branches.

The following core modules have seen their default version upgraded:

  • GCC 9.3 => GCC 12.3
  • OpenMPI 4.0.3 => OpenMPI 4.1.5
  • Intel compilers 2020 => 2023
  • Intel MKL 2020 => Flexiblas 3.3.1 (with MKL 2023 or BLIS 0.9.0)
  • CUDA 11 => CUDA 12

StdEnv/2020

This is the 2020 iteration of our software environment. It uses GCC 9.3.0, Intel 2020.1, and Open MPI 4.0.3 as defaults.

To activate this environment, use the command

[name@server ~]$ module load StdEnv/2020

Performance improvements

Binaries compiled with the Intel compiler now automatically support both the AVX2 and AVX512 instruction sets. In technical terms, these are multi-architecture binaries, also known as fat binaries. This means that on clusters such as Cedar and Graham, which have multiple generations of processors, you no longer have to manually load one of the arch modules when using software packages generated by the Intel compiler.

Many software packages which were previously installed either with GCC or with Intel are now installed at a lower level of the software hierarchy, which makes the same module visible, irrespective of which compiler is loaded. For example, this is the case for many bioinformatics software packages as well as the R modules, which previously required loading the gcc module. This could be done because we introduced optimizations specific to CPU architectures at a level of the software hierarchy lower than the compiler level.

We also installed a more recent version of the GNU C Library, which introduces optimizations in some mathematical functions. This has increased the requirement on the version of the Linux Kernel (see below).

Change in the compatibility layer

Another enhancement for the 2020 release was a change in tools for our compatibility layer. The compatibility layer is between the operating system and all other software packages. This layer is designed to ensure that compilers and scientific applications will work whether they run on CentOS, Ubuntu, or Fedora. For the 2016.4 and 2018.3 versions, we used the Nix package manager, while for the 2020 version, we used Gentoo Prefix.

Change in kernel requirement

Versions 2016.4 and 2018.3 required Linux kernel version 2.6.32 or more recent, which supported CentOS versions starting at CentOS 6. The 2020 version requires Linux kernel 3.10 or more recent; it therefore no longer supports CentOS 6 and requires CentOS 7 instead. Other distributions usually ship much more recent kernels, so you probably do not need to change your distribution if you are using this standard environment on something other than CentOS.
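You can check whether a machine meets the kernel requirement by comparing its running kernel release against the 3.10 minimum. A small sketch using version sort:

```shell
# Compare the running kernel release against the 3.10 minimum
# required by StdEnv/2020. 'sort -V' orders version strings numerically,
# so if the minimum sorts first (or ties), the kernel is new enough.
required=3.10
current=$(uname -r | cut -d- -f1)
if [ "$(printf '%s\n%s\n' "$required" "$current" | sort -V | head -n1)" = "$required" ]; then
    echo "kernel $current satisfies the $required minimum"
else
    echo "kernel $current is older than $required"
fi
```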

Module extensions

With the 2020 environment, we started installing more Python extensions inside of their corresponding core modules. For example, we installed PyQt5 inside of the qt/5.12.8 module so that it supports multiple versions of Python. The module system has also been adjusted so you can find such extensions. For example, if you run

[name@server ~]$ module spider pyqt5

it will tell you that you can get this by loading the qt/5.12.8 module.

StdEnv/2018.3

Deprecated: This environment is no longer supported.

This is the second version of our software environment. It was released in 2018 with the deployment of Béluga, and shortly after the deployment of Niagara. Defaults were upgraded to GCC 7.3.0, Intel 2018.3, and Open MPI 3.1.2. This is the first version to support AVX512 instructions.

To activate this environment, use the command

[name@server ~]$ module load StdEnv/2018.3

StdEnv/2016.4

Deprecated: This environment is no longer supported.

This is the initial version of our software environment, released in 2016 with the deployment of Cedar and Graham. It features GCC 5.4.0 and Intel 2016.4 as default compilers, and Open MPI 2.1.1 as its default implementation of MPI. Most of the software compiled with this environment does not support the AVX512 instructions provided by the Skylake processors on Béluga and Niagara, as well as by the most recent additions to Cedar and Graham.

To activate this environment, use the command

[name@server ~]$ module load StdEnv/2016.4