This site replaces the former Compute Canada documentation site and is now being managed by the Digital Research Alliance of Canada.
ORCA is a flexible, efficient and easy-to-use general-purpose tool for quantum chemistry with specific emphasis on spectroscopic properties of open-shell molecules. It features a wide variety of standard quantum chemical methods ranging from semiempirical methods to DFT to single- and multireference correlated ab initio methods. It can also treat environmental and relativistic effects.
If you wish to use prebuilt ORCA executables:
- You have to register at https://orcaforum.kofo.mpg.de/
- You will receive a first email to verify the email address and activate the account. Follow the instructions in that email.
- Once the registration is complete, you will get a second email stating that the "registration for ORCA download and usage has been completed".
- Contact us requesting access to ORCA with a copy of the second email.
In July 2021, version 5.0 of ORCA was released; it is a major upgrade of ORCA 4.
The first released versions, 5.0 and 5.0.1, had a few bugs that were fixed in version 5.0.2. Although version 5.0.1 is installed on our clusters, we recommend that you use 5.0.2.
To load version 5.0.2, use:
module load StdEnv/2020 gcc/10.3.0 openmpi/4.1.1 orca/5.0.2
Note: Version 5.0.1 is in our software stack but could be removed at any time.
The latest released version of ORCA 4 is 4.2.1. Other versions prior to this one are also available in our software stack.
To load version 4.2.1, use:
module load StdEnv/2020 gcc/9.3.0 openmpi/4.0.3 orca/4.2.1
or, with the older software environment:
module load nixpkgs/16.09 gcc/7.3.0 openmpi/3.1.4 orca/4.2.1
Setting ORCA input files
In addition to the different keywords required to run a given simulation, you should make sure to set two additional parameters:
- the number of CPUs, with a %pal block
- the memory per core, with the %maxcore keyword
Using the software
To see which versions of ORCA are currently available, type
module spider orca. For detailed information about a specific version, including the other modules that must be loaded first, use the module's full name. For example,
module spider orca/5.0.2.
See Using modules for general guidance.
For a general discussion about submitting jobs, see Running jobs.
NOTE: If you run into MPI errors with some of the ORCA executables, you can try to define the following variables:
export OMPI_MCA_mtl='^mxm'
export OMPI_MCA_pml='^yalla'
The following is a job script to run ORCA using MPI:
#!/bin/bash
#SBATCH --account=def-youPIs
#SBATCH --ntasks=8             # cpus, the nprocs defined in the input file
#SBATCH --mem-per-cpu=3G       # memory per cpu
#SBATCH --time=00-03:00        # time (DD-HH:MM)
#SBATCH --output=benzene.log   # output .log file
module load StdEnv/2020 gcc/9.3.0 openmpi/4.0.3
module load orca/4.2.1
$EBROOTORCA/orca benzene.inp
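One way to create and sanity-check this job script before submitting is sketched below; the file name run_orca.sh is just an example, and def-youPIs is a placeholder for your own account string:

```shell
# Write the job script shown above to a file named run_orca.sh.
# The quoted 'EOF' keeps $EBROOTORCA from being expanded now; it is
# resolved at job run time, after the orca module is loaded.
cat > run_orca.sh <<'EOF'
#!/bin/bash
#SBATCH --account=def-youPIs
#SBATCH --ntasks=8
#SBATCH --mem-per-cpu=3G
#SBATCH --time=00-03:00
#SBATCH --output=benzene.log
module load StdEnv/2020 gcc/9.3.0 openmpi/4.0.3
module load orca/4.2.1
$EBROOTORCA/orca benzene.inp
EOF

# Parse without executing to catch shell syntax errors (#SBATCH lines
# are comments to bash and are not validated by this check):
bash -n run_orca.sh && echo "syntax OK"

# sbatch run_orca.sh   # submit from a login node on the cluster
```

The sbatch line is left commented here since it only makes sense on a cluster login node.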
Example of the input file, benzene.inp:
# Benzene RHF Opt Calculation
%pal nprocs 8 end
! RHF TightSCF PModel
! opt
* xyz 0 1
C   0.000000000000    1.398696930758    0.000000000000
C   0.000000000000   -1.398696930758    0.000000000000
C   1.211265339156    0.699329968382    0.000000000000
C   1.211265339156   -0.699329968382    0.000000000000
C  -1.211265339156    0.699329968382    0.000000000000
C  -1.211265339156   -0.699329968382    0.000000000000
H   0.000000000000    2.491406946734    0.000000000000
H   0.000000000000   -2.491406946734    0.000000000000
H   2.157597486829    1.245660462400    0.000000000000
H   2.157597486829   -1.245660462400    0.000000000000
H  -2.157597486829    1.245660462400    0.000000000000
H  -2.157597486829   -1.245660462400    0.000000000000
*
- To make sure that the program runs efficiently and uses all of the cores requested in your job script, add the line %pal nprocs <ncores> end to your input file, as shown in the example above. Replace <ncores> with the number of cores requested in your script.
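One way to keep the nprocs value in sync with the Slurm allocation is to patch the input file at job time from the SLURM_NTASKS variable, which Slurm sets inside a job. A minimal sketch (the two-line input written here stands in for your full ORCA input file):

```shell
# Demonstration on a minimal input; in a real job, benzene.inp is your
# full ORCA input file and the printf line is not needed.
printf '%%pal nprocs 8 end\n! RHF TightSCF\n' > benzene.inp

# SLURM_NTASKS is set by Slurm inside a job; fall back to 1 elsewhere.
ntasks="${SLURM_NTASKS:-1}"

# Rewrite the number after "nprocs" so it matches the allocation:
sed -i -E "s/(nprocs[[:space:]]+)[0-9]+/\1${ntasks}/" benzene.inp

grep nprocs benzene.inp   # prints the updated %pal line
```

This way the input file never disagrees with the --ntasks value in the job script.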
(Sep. 6 2019) Temporary fix to OpenMPI version inconsistency issue
For some types of calculations (DLPNO-STEOM-CCSD in particular), you may receive unexplained OpenMPI-related fatal errors. This can be caused by using an older version of OpenMPI (3.1.2, as suggested by module for both orca/4.1.0 and orca/4.2.0) than the officially recommended one (3.1.3 for orca/4.1.0 and 3.1.4 for orca/4.2.0). To work around this issue, you can build a custom version of OpenMPI.
The following two commands prepare a custom openmpi/3.1.4 for orca/4.2.0:
module load gcc/7.3.0
eb OpenMPI-3.1.2-GCC-7.3.0.eb --try-software-version=3.1.4
When the build is finished, you can load the custom OpenMPI module:
module load openmpi/3.1.4
At this point, once you have registered on the official ORCA forum and been granted access to ORCA on our clusters, you can manually install the orca/4.2.0 binaries downloaded from the forum under your home directory.
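A minimal sketch of such a manual install is below; the archive and directory names are examples only, so substitute the name of the file you actually downloaded from the forum:

```shell
# Unpack the prebuilt ORCA 4.2.0 binaries (downloaded from the ORCA
# forum after registration) into a directory under $HOME.
# ARCHIVE is an assumed example name -- use your downloaded file's name.
ARCHIVE="orca_4_2_0_linux_x86-64_shared_openmpi314.tar.xz"
DEST="$HOME/orca_4_2_0"

mkdir -p "$DEST"
if [ -f "$ARCHIVE" ]; then
    tar -xf "$ARCHIVE" -C "$DEST" --strip-components=1
fi

# Put the unpacked binaries on PATH for this session:
export PATH="$DEST:$PATH"
```

Note that for parallel runs ORCA is usually invoked via its full path (e.g. $DEST/orca benzene.inp), not just as orca.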
Additional notes from the contributor:
This is a temporary fix prior to the official upgrade of openmpi on our clusters. Please remember to delete the manually installed orca binaries once the official openmpi version is up to date.
The compiling command does not seem to apply to openmpi/2.1.x.