Gaussian

Gaussian is a computational chemistry application produced by [http://gaussian.com/ Gaussian, Inc.]


== License limitations ==


Compute Canada currently supports Gaussian only on [[Graham]] and certain legacy systems.


In order to use Gaussian you must agree to the following conditions:
# You are not a member of a research group developing software competitive to Gaussian.
# You will not copy the Gaussian software, nor make it available to anyone else.
# You will properly acknowledge Gaussian, Inc. and Compute Canada in publications.
# You will notify us of any change in the above acknowledgement.


If you agree, please send an email to support@computecanada.ca with a copy of these conditions, stating that you agree to them. We will then grant you access to Gaussian.


== Running Gaussian on Graham ==
Gaussian g09.e01 and g16.a03 are installed as modules on the Graham cluster. You can load them using either of
{{Commands
|module load gaussian/g16.a03
}}
or
{{Commands
|module load gaussian/g09.e01
}}
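
If you are not sure which Gaussian versions are installed, you can ask the module system; module spider is a generic Lmod command on Graham, not something specific to Gaussian:
{{Commands
|module spider gaussian
}}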


=== Job submission ===
Graham uses the Slurm scheduler. For details about submitting jobs, see [[Running jobs]].


Besides the input file name.com, you must prepare a job script in the same directory to define the compute resources for the job.
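
If you are new to Gaussian, a minimal name.com might look like the sketch below; the route section, title, and geometry are placeholders, and %nprocshared is the full spelling of the %nprocs directive referenced in the job scripts on this page. Note that Gaussian requires a blank line after each section, including one after the geometry:
{{File
|name=name.com
|contents=
%mem=8GB
%nprocshared=16
#P B3LYP/6-31G(d) Opt

Water geometry optimization (placeholder example)

0 1
O   0.000000   0.000000   0.117300
H   0.000000   0.757200  -0.469200
H   0.000000  -0.757200  -0.469200

}}
With %mem=8GB here, a job script request of --mem=16G matches the "roughly 2 times %mem" rule of thumb noted in the script comments below.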


There are two options for running your Gaussian job on Graham, depending on the size of your job files.


==== g16 (or g09) for regular-size jobs ====


This option saves the unnamed runtime files (.rwf, .inp, .d2e, .int, .skr) to local scratch (/localscratch/yourid/) on the compute node where the job is scheduled. The scheduler deletes these files when the job ends, and since users do not usually track which compute node a job ran on, the files are easily lost. If you do not expect to use the .rwf file to restart the job later, this is the option to choose.


An example g16 job script, mysub.sh, looks like this (simply change g16 to g09 for a g09 job):
{{File
|name=mysub.sh
|contents=
#!/bin/bash
#SBATCH --mem=16G             # memory, roughly 2 times the %mem defined in the input name.com file
#SBATCH --time=02-00:00       # expected run time (DD-HH:MM)
#SBATCH --cpus-per-task=16    # number of CPUs, as defined by %nprocs in the name.com file
module load gaussian/g16.a03
g16 < name.com >& name.log    # run g16 in the foreground; input: name.com, output: name.log
}}
You can modify the script to fit your job's requirements for compute resources.
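
Assuming the script above is saved as mysub.sh in the same directory as name.com, a typical submit-and-check sequence is standard Slurm usage, nothing Gaussian-specific:
{{Commands
|sbatch mysub.sh
|squeue -u $USER
}}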


==== G16 (or G09) for large jobs ====


/localscratch is about 800 GB, shared by all jobs running on the node. If your job files would be larger than, or close to, that size, use this option instead to save the files to your /scratch space. It is hard to define exactly what counts as a large job: we cannot predict how many jobs will be running on a node at a given time, how many of them will write files to /localscratch, or how large those files will be, and multiple Gaussian jobs may end up on the same node sharing the ~800 GB.


G16 provides a better way to manage your files, since they are kept in a per-job directory, /scratch/yourid/jobid/, making it easier to locate the .rwf file when restarting a job later.
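
For example, to locate the .rwf file of an earlier run, list the job's directory under /scratch; a sketch using the hypothetical job ID 93288 that also appears in the interactive example below:
{{Commands
|ls -lh /scratch/yourid/93288/*.rwf
}}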


An example G16 job script looks like this (simply change G16 to G09 for a G09 job):
{{File
|name=mysub.sh
|contents=
#!/bin/bash
#SBATCH --mem=16G             # memory, roughly 2 times the %mem defined in the input name.com file
#SBATCH --time=02-00:00       # expected run time (DD-HH:MM)
#SBATCH --cpus-per-task=16    # number of CPUs, as defined by %nprocs in the name.com file
module load gaussian/g16.a03
G16 name.com                  # run G16; input: name.com, output: name.log by default
}}
=== Interactive jobs ===
You can run an interactive Gaussian job on Graham for testing purposes. It is not good practice to run interactive Gaussian jobs on a login node; instead, start an interactive session on a compute node with salloc. For example, for a one-hour job with 8 CPUs and 10 GB of memory, go to the input file directory first, then use the salloc command:
{{Commands
|salloc --time=1:0:0 --cpus-per-task=8 --mem=10g
}}

Then use either
{{Commands
|module load gaussian/g16.a03
|G16 g16_test2.com    # G16 saves runtime files (.rwf, etc.) to /scratch/yourid/93288/, where 93288 is the job ID
}}


or
{{Commands
|module load gaussian/g16.a03
|g16 < g16_test2.com >& g16_test2.log &  # g16 saves runtime files to /localscratch/yourid/
}}
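
Since the g16 form runs in the background, your terminal stays free; you can follow the calculation's progress with a standard shell command such as:
{{Commands
|tail -f g16_test2.log
}}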
