Gaussian
Gaussian is a computational chemistry application produced by Gaussian, Inc.
License limitations
Compute Canada currently supports Gaussian only on Graham and certain legacy systems.
In order to use Gaussian you must agree to certain conditions. Send an email with a copy of the following statement to support@computecanada.ca.
- I am not a member of a research group developing software competitive to Gaussian.
- I will not copy the Gaussian software, nor make it available to anyone else.
- I will properly acknowledge Gaussian Inc. and Compute Canada in publications.
- I will notify Compute Canada of any change in the above acknowledgement.
We will then grant you access to Gaussian.
Running Gaussian on Graham
Gaussian g16.a03, g09.e01 and g03.d01 are installed on the Graham cluster and available through the modules system. You can load them using one of
[name@server ~]$ module load gaussian/g16.a03
[name@server ~]$ module load gaussian/g09.e01
[name@server ~]$ module load gaussian/g03.d01
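If you are unsure which versions are installed, you can first list the available Gaussian modules; this is standard module-system usage rather than anything Gaussian-specific:

[name@server ~]$ module avail gaussian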
Job submission
Graham uses the Slurm scheduler; for details about submitting jobs, see Running jobs.
Besides your input file (in our example, name.com), you have to prepare a job script to define the compute resources for the job; both the input file and the job script must be in the same directory.
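For reference, the %mem and %nprocs values that the job scripts below refer to are Link 0 commands at the top of the Gaussian input file. The following is a minimal sketch of a hypothetical name.com (a water geometry optimization); the route section, geometry, and resource values are placeholders to adapt to your own calculation, and the file must end with a blank line:

%mem=8GB
%nprocs=16
#p B3LYP/6-31G(d) Opt

water geometry optimization

0 1
O   0.000000   0.000000   0.117300
H   0.000000   0.757200  -0.469200
H   0.000000  -0.757200  -0.469200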
There are two options for running your Gaussian job on Graham, depending on the size of your job files:
- g16, g09, g03 for regular size jobs
- G16, G09, G03 for large jobs
g16 (g09 or g03) for regular size jobs
This option saves the runtime files (.rwf, .inp, .d2e, .int, .skr) to local scratch (/localscratch/username/) on the compute node where the job was scheduled. The files on local scratch will be deleted by the scheduler afterwards; to keep track of them, we recommend that users note the compute node number.
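Since files in /localscratch are removed when the job ends, one way to record which node a job ran on is to query the scheduler afterwards; this is a generic Slurm command, shown here with a hypothetical job ID:

[name@server ~]$ sacct -j 12345 --format=JobID,NodeList   # replace 12345 with your job ID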
The following example is a g16 job script; for a g09 or g03 job, simply change the module load line and g16 to g09 or g03.
Note that for consistency, we use the same name for each file, changing only the extension (name.sh, name.com, name.log).
#!/bin/bash
#SBATCH --mem=16G # memory, roughly 2 times %mem defined in the input name.com file
#SBATCH --time=02-00:00 # expect run time (DD-HH:MM)
#SBATCH --cpus-per-task=16 # No. of cpus for the job as defined by %nprocs in the name.com file
module load gaussian/g16.a03
g16 < name.com >& name.log # g16 command, input: name.com, output: name.log
You can modify the script to fit your job's requirements for compute resources.
G16 (G09 or G03) for large size jobs
/localscratch is ~800G, shared by all jobs running on the node. If your job files are bigger than or close to that size, use this option instead to save the files to your /scratch space. It is hard to define exactly what counts as a large job, however, because we cannot predict how many jobs will be running on a node at a given time, how many of them will save files to /localscratch, or how large those files will be; multiple Gaussian jobs running on the same node share the ~800G space.
G16 provides a better way to manage your files, as they are located within the /scratch/username/jobid/ directory, and it is easier to locate the .rwf file to restart a job at a later time.
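As a sketch of such a restart, assuming the original job was a frequency calculation whose .rwf was kept under /scratch/username/jobid/, the restart input can point %rwf at the saved file and use the Restart route keyword; the path and names below are placeholders, and you should check the Gaussian documentation for the restart options that apply to your job type:

%rwf=/scratch/username/jobid/name.rwf
%mem=8GB
%nprocs=16
#p Restart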
The following example is a G16 job script; for a G09 or G03 job, simply change the module load line and G16 to G09 or G03.
Note that for consistency, we use the same name for each file, changing only the extension (name.sh, name.com, name.log).
#!/bin/bash
#SBATCH --mem=16G # memory, roughly 2 times %mem defined in the input name.com file
#SBATCH --time=02-00:00 # expect run time (DD-HH:MM)
#SBATCH --cpus-per-task=16 # No. of cpus for the job as defined by %nprocs in the name.com file
module load gaussian/g16.a03
G16 name.com # G16 command, input: name.com, output: name.log by default
Submit the job
sbatch name.sh
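Once the job is submitted, you can monitor it with the usual Slurm commands, for example:

[name@server ~]$ squeue -u $USER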
Interactive jobs
You can run interactive Gaussian jobs on Graham for testing purposes. It is not good practice to run interactive Gaussian jobs on a login node; instead, start an interactive session on a compute node with salloc. For example, for a one-hour Gaussian job with 8 CPUs and 10G of memory, go to the input file directory first, then use the salloc command:
[name@server ~]$ salloc --time=1:0:0 --cpus-per-task=8 --mem=10g
Then use either
[name@server ~]$ module load gaussian/g16.a03
[name@server ~]$ G16 g16_test2.com # G16 saves runtime file (.rwf etc.) to /scratch/yourid/93288/
or
[name@server ~]$ module load gaussian/g16.a03
[name@server ~]$ g16 < g16_test2.com >& g16_test2.log & # g16 saves runtime file to /localscratch/yourid/
Examples
Sample scripts (*.sh) and input files can be found on Graham under
/home/jemmyhu/tests/test_Gaussian/