ORCA
Introduction
ORCA is a flexible, efficient and easy-to-use general-purpose tool for quantum chemistry, with specific emphasis on the spectroscopic properties of open-shell molecules. It features a wide variety of standard quantum chemical methods, ranging from semiempirical methods to density functional theory (DFT) to single- and multireference correlated ab initio methods. It can also treat environmental and relativistic effects.
Licensing
If you wish to use pre-built ORCA executables:
- You have to register at https://orcaforum.kofo.mpg.de/
- You will receive a first email to verify the email address and activate the account. Follow the instructions in that email.
- Once the registration is complete you will get a second email stating that the "registration for ORCA download and usage has been completed".
- Contact us to request access to ORCA, and include a copy of the registration email mentioned above.
Using the software
To see which versions of ORCA are currently available, type module spider orca. For detailed information about a specific version, including what other modules must be loaded first, use the module's full name, for example module spider orca/4.0.1.2.
See Using modules for general guidance.
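For example, on a cluster where orca/4.0.1.2 requires openmpi/2.0.2 (as in the job script below), a typical sequence might look like the following; the exact prerequisite modules are whatever module spider reports for the version you choose:
module spider orca                 # list all available ORCA versions
module spider orca/4.0.1.2         # show the modules that must be loaded first
module load openmpi/2.0.2          # prerequisite reported by module spider
module load orca/4.0.1.2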
Job submission
For a general discussion about submitting jobs, see Running jobs.
NOTE: If you run into MPI errors with some of the ORCA executables, you can try defining the following environment variables:
export OMPI_MCA_mtl='^mxm'
export OMPI_MCA_pml='^yalla'
The following is a job script to run ORCA using MPI:
#!/bin/bash
#SBATCH --ntasks=8 # number of MPI tasks; must match the nprocs value in the input file
#SBATCH --mem-per-cpu=3G # memory per cpu
#SBATCH --time=00-03:00 # time (DD-HH:MM)
#SBATCH --output=benzene.log # output .log file
module load openmpi/2.0.2   # MPI module required by this ORCA version
module load orca/4.0.1.2
$EBROOTORCA/orca benzene.inp   # ORCA must be called with its full path for parallel runs
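The script can then be submitted with sbatch; the file name benzene_job.sh is only an illustrative choice:
sbatch benzene_job.sh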
An example of the input file, benzene.inp:
# Benzene RHF Opt Calculation
%pal nprocs 8 end
! RHF TightSCF PModel
! opt
* xyz 0 1
C 0.000000000000 1.398696930758 0.000000000000
C 0.000000000000 -1.398696930758 0.000000000000
C 1.211265339156 0.699329968382 0.000000000000
C 1.211265339156 -0.699329968382 0.000000000000
C -1.211265339156 0.699329968382 0.000000000000
C -1.211265339156 -0.699329968382 0.000000000000
H 0.000000000000 2.491406946734 0.000000000000
H 0.000000000000 -2.491406946734 0.000000000000
H 2.157597486829 1.245660462400 0.000000000000
H 2.157597486829 -1.245660462400 0.000000000000
H -2.157597486829 1.245660462400 0.000000000000
H -2.157597486829 -1.245660462400 0.000000000000
*
Notes
- To make sure that the program runs efficiently and uses all of the cores requested in your job script, add the line
%pal nprocs <ncores> end
to your input file, as shown in the example above. Replace <ncores> with the number of cores requested in your job script, as illustrated below.
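For instance, a job requesting 16 cores (an arbitrary illustrative value) would use, in the job script:
#SBATCH --ntasks=16
and, in the input file:
%pal nprocs 16 end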
(Sep. 6 2019) Temporary fix to OpenMPI version inconsistency issue
For some types of calculations (DLPNO-STEOM-CCSD in particular), one may receive otherwise unexplained OpenMPI-related fatal errors. This can be caused by using an older version of OpenMPI (i.e. 3.1.2, as suggested by 'module' for both orca/4.1.0 and orca/4.2.0) than the officially recommended one (3.1.3 for orca/4.1.0 and 3.1.4 for orca/4.2.0). To work around this issue temporarily, one can build a custom version of OpenMPI.
The following two commands prepare a custom openmpi/3.1.4 for orca/4.2.0:
module load gcc/7.3.0
eb OpenMPI-3.1.2-GCC-7.3.0.eb --try-software-version=3.1.4
When the build is finished, one can load the custom OpenMPI module:
module load openmpi/3.1.4
At this point, after completing the registration on the official ORCA forum and being granted access to ORCA on Compute Canada clusters, one can manually install the orca/4.2.0 binaries from the forum under one's home directory.
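A minimal sketch of such a manual installation, assuming the archive downloaded from the forum is named orca_4_2_0_linux_x86-64_openmpi314.tar.xz (a hypothetical file name; use the actual name of your download), could look like this:
# load the custom OpenMPI built above
module load gcc/7.3.0 openmpi/3.1.4
# unpack the pre-built ORCA binaries under the home directory
mkdir -p $HOME/orca_4.2.0
tar -xf orca_4_2_0_linux_x86-64_openmpi314.tar.xz -C $HOME/orca_4.2.0 --strip-components=1
# in the job script, call ORCA with its full path, e.g.
$HOME/orca_4.2.0/orca benzene.inp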
Additional notes from the contributor:
This is a temporary fix until the official OpenMPI modules on Compute Canada clusters are upgraded. Please remember to delete the manually installed ORCA binaries once the official OpenMPI version is up to date.
The eb build command above does not seem to work for openmpi/2.1.x.