Gaussian
Gaussian is a computational chemistry application produced by Gaussian, Inc.
License limitations
Compute Canada currently supports Gaussian only on Graham and Cedar, as well as on certain legacy systems.
In order to use Gaussian you must agree to certain conditions. Send an email with a copy of the following statement to support@computecanada.ca.
- I am not a member of a research group developing software competitive to Gaussian.
- I will not copy the Gaussian software, nor make it available to anyone else.
- I will properly acknowledge Gaussian Inc. and Compute Canada in publications.
- I will notify Compute Canada of any change in the above acknowledgement.
We will then grant you access to Gaussian.
Running Gaussian on Graham and Cedar
Gaussian g16.b01, g16.a03, g09.e01 and g03.d01 are installed on the Graham cluster and are available through the modules system. Load the required version in your job script, as shown in the example job scripts below.
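If you are unsure which versions are installed, you can list the available Gaussian modules first; this is a quick check assuming the standard Lmod commands used on these clusters:
[name@server ~]$ module spider gaussian        # list installed Gaussian versions
[name@server ~]$ module load gaussian/g16.b01  # load a specific version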
Job submission
Graham uses the Slurm scheduler; for details about submitting jobs, see Running jobs.
Besides your input file (name.com in our example), you must prepare a job script that defines the compute resources for the job; both the input file and the job script must be in the same directory.
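For reference, a minimal illustrative name.com input might look like the following; the route section, title and geometry are placeholders only, and %mem and %nprocshared should match the resources you request in the job script. Note that Gaussian expects a blank line at the end of the input.
%mem=8GB
%nprocshared=16
#P B3LYP/6-31G(d) Opt

water geometry optimization (placeholder example)

0 1
O   0.000000   0.000000   0.117300
H   0.000000   0.757200  -0.469200
H   0.000000  -0.757200  -0.469200
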
There are two options for running your Gaussian job on Graham, depending on where the default runtime files are stored and on the size of the job.
G16 (G09, G03)
This option saves the default runtime files (unnamed .rwf, .inp, .d2e, .int, .skr files) to /scratch/username/jobid/. Those files remain there if the job is unfinished or fails for any reason, so you can later locate the .rwf file to restart the calculation.
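For example, after a failed job you can look for the .rwf file under your scratch space; this is a sketch assuming the jobid directory layout described above, with 12345678 replaced by your actual job ID:
[name@server ~]$ ls -lh /scratch/$USER/12345678/   # look for the .rwf file left by the job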
The following example is a G16 job script:
Note that for consistency, we use the same base name for all files, changing only the extension (name.sh, name.com, name.log).
#!/bin/bash
#SBATCH --mem=16G             # memory; roughly 2 times the %mem value defined in the input file name.com
#SBATCH --time=02-00:00       # expected run time (DD-HH:MM)
#SBATCH --cpus-per-task=16    # number of CPUs for the job, matching %nprocs in name.com
module load gaussian/g16.b01
G16 name.com # G16 command, input: name.com, output: name.log
To use Gaussian 09 or Gaussian 03, simply change module load gaussian/g16.b01 to module load gaussian/g09.e01 or module load gaussian/g03.d01, and change G16 to G09 or G03. You can modify --mem, --time, and --cpus-per-task to match your job's compute resource requirements.
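For example, an equivalent Gaussian 09 job script, shown as a minimal sketch assuming the same name.com input and resource requirements, would be:
#!/bin/bash
#SBATCH --mem=16G             # memory; roughly 2 times the %mem value in name.com
#SBATCH --time=02-00:00       # expected run time (DD-HH:MM)
#SBATCH --cpus-per-task=16    # number of CPUs, matching %nprocs in name.com
module load gaussian/g09.e01
G09 name.com                  # G09 command, input: name.com, output: name.log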
g16 (g09, g03)
This option saves the default runtime files (unnamed .rwf, .inp, .d2e, .int, .skr files) temporarily in $SLURM_TMPDIR (/localscratch/username.jobid.0/) on the compute node to which the job was scheduled. The files are removed by the scheduler when the job finishes, whether it succeeds or fails. If you do not expect to use the .rwf file to restart the calculation at a later time, you can use this option.
/localscratch is about 800 GB, shared by all jobs running on the same node. If your job files could approach or exceed that size, use the G16 (G09, G03) option instead.
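If you are unsure how much local space is free on the node your job landed on, you can check from within the job; this is a sketch assuming the $SLURM_TMPDIR variable set by Slurm:
df -h $SLURM_TMPDIR    # show the size and free space of local scratch on the compute node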
The following example is a g16 job script:
Note that for consistency, we use the same base name for all files, changing only the extension (name.sh, name.com, name.log).
#!/bin/bash
#SBATCH --mem=16G             # memory; roughly 2 times the %mem value defined in the input file name.com
#SBATCH --time=02-00:00       # expected run time (DD-HH:MM)
#SBATCH --cpus-per-task=16    # number of CPUs for the job, matching %nprocs in name.com
module load gaussian/g16.b01
g16 < name.com >& name.log # g16 command, input: name.com, output: name.log by default
Submit the job
sbatch name.sh
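After submitting, you can monitor the job with the usual Slurm commands, shown here as examples; see Running jobs for details. Replace 12345678 with your actual job ID.
[name@server ~]$ squeue -u $USER      # list your pending and running jobs
[name@server ~]$ sacct -j 12345678    # show accounting information for a completed job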
Interactive jobs
You can run an interactive Gaussian job on Graham for testing purposes. It is not good practice to run interactive Gaussian jobs on a login node; instead, start an interactive session on a compute node with salloc. For example, for a one-hour interactive job using 8 CPUs and 10 GB of memory, go to the input file directory first, then use the salloc command:
[name@server ~]$ salloc --time=1:0:0 --cpus-per-task=8 --mem=10g
Then use either
[name@server ~]$ module load gaussian/g16.b01
[name@server ~]$ G16 g16_test2.com # G16 saves runtime file (.rwf etc.) to /scratch/yourid/93288/
or
[name@server ~]$ module load gaussian/g16.b01
[name@server ~]$ g16 < g16_test2.com >& g16_test2.log & # g16 saves runtime file to /localscratch/yourid/
Examples
Sample job scripts (*.sh) and input files can be found on Graham under
/home/jemmyhu/tests/test_Gaussian/
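You can copy these examples into your own space to try them; this is an illustrative command, so adjust the destination directory as needed:
[name@server ~]$ cp -r /home/jemmyhu/tests/test_Gaussian ~/gaussian_tests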
Errors
Some of the error messages produced by Gaussian have been collected, with suggestions for their resolution. See Gaussian error messages.