GAMESS-US

From Alliance Doc
Revision as of 15:30, 8 March 2018 by Stuekero (talk | contribs) (managing memory)


This article is a draft

This is not a complete article: it is a work in progress intended to be published as an article, and it may not yet be ready for inclusion in the main wiki. It should not necessarily be considered factual or authoritative.

The General Atomic and Molecular Electronic Structure System (GAMESS) [1] is a general ab initio quantum chemistry package.


Running GAMESS

Job Submission

Our clusters use the Slurm scheduler; for details about submitting jobs, see Running jobs.

The first step is to prepare a GAMESS input file containing the molecular geometry and run parameters. Please refer to the GAMESS Documentation [2], and particularly Chapter 2 "Input Description" [3], for a description of the file format and all available keywords.

Besides your input file (name.inp in our example), you need to prepare a job script that defines the compute resources for the job; both the input file and the job script must be in the same directory.
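As an illustration only (the keywords and geometry below are chosen for demonstration, not as a recommended setup), a minimal name.inp for a single-point RHF/STO-3G energy of water might look like this:

```
 $CONTRL SCFTYP=RHF RUNTYP=ENERGY $END
 $BASIS  GBASIS=STO NGAUSS=3 $END
 $DATA
Water single-point energy, RHF/STO-3G
C1
O   8.0    0.000000    0.000000    0.000000
H   1.0    0.000000    0.757160    0.586260
H   1.0    0.000000   -0.757160    0.586260
 $END
```

Note that GAMESS group names like $CONTRL must not start in the first column; see the Input Description [3] for the full syntax rules.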


File : gamess_job.sh

#!/bin/bash
#SBATCH --cpus-per-task=1       # Number of CPUs
#SBATCH --mem-per-cpu=4000M     # memory per CPU in MB
#SBATCH --time=0-00:30          # time (DD-HH:MM)

export SLURM_CPUS_PER_TASK      # make the requested CPU count visible to the rungms wrapper
## uncomment the following 2 lines to use network $SCRATCH for GAMESS scratch files
#export USRSCR="$SCRATCH/gamess_${SLURM_JOB_ID}/"
#mkdir -p $USRSCR

module load gamess-us/20170420-R1

rungms name.inp  &>  name.out


Use the following command to submit the job to the scheduler:

 sbatch gamess_job.sh

Running GAMESS on multiple CPUs

GAMESS calculations can make use of more than one CPU. The number of CPUs used for a calculation is controlled by the --cpus-per-task setting in the submission script. As GAMESS has been built using the "sockets" parallelization, it can only use CPU cores that are located on the same compute node, and therefore the maximum number of CPU cores that can be used for a job is dictated by the node configuration of the system (e.g. 32 CPU cores per node on Graham).
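For example (the value 8 is purely illustrative), running the same job on 8 CPU cores of a single node only requires changing the resource request in the job script shown above:

```shell
#SBATCH --cpus-per-task=8       # number of CPU cores; "sockets" parallelization
                                # requires that all of them are on the same node
#SBATCH --mem-per-cpu=4000M     # memory per CPU core
```

All other lines of the script, including the export of SLURM_CPUS_PER_TASK, stay the same.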

Quantum chemistry calculations are known not to scale as well across many CPUs as, for example, classical molecular mechanics, which means that they cannot use large numbers of CPUs efficiently. Exactly how many CPUs can be utilized efficiently depends on the size of the system (i.e. the number of atoms, the number of basis functions and the level of theory).

To determine a reasonable number of CPUs to use, one needs to run a scaling test - that is, running the same input file with different numbers of CPUs and comparing the execution times. Ideally the execution time should be cut in half when using twice as many CPUs (= 100% speedup). Obviously it is not a good use of resources when a calculation runs only 30% faster after doubling the number of CPUs, and in extreme cases calculations can even become slower when further increasing the number of CPUs.
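Speedup and parallel efficiency from such a scaling test can be computed with a short shell helper; the wall-clock timings below are invented placeholders, not measurements:

```shell
#!/bin/bash
# Given the serial time t1 (seconds) and the time tp measured on p CPUs,
# print the speedup (t1/tp) and the parallel efficiency (t1 / (p * tp)).
efficiency() {
    local t1=$1 p=$2 tp=$3
    awk -v t1="$t1" -v p="$p" -v tp="$tp" \
        'BEGIN { printf "%d CPUs: speedup %.2fx, efficiency %.0f%%\n", p, t1/tp, 100*t1/(p*tp) }'
}

# Hypothetical timings from running the same name.inp with different --cpus-per-task:
efficiency 3600 2 1900   # -> 2 CPUs: speedup 1.89x, efficiency 95%
efficiency 3600 4 1100   # -> 4 CPUs: speedup 3.27x, efficiency 82%
efficiency 3600 8  800   # -> 8 CPUs: speedup 4.50x, efficiency 56%
```

In this made-up example, going from 4 to 8 CPUs drops the efficiency from 82% to 56%, so 4 CPUs would be the more reasonable choice.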

Memory

Quantum chemistry calculations are often "memory bound" - that means that larger molecules at high levels of theory need a lot of memory (RAM), often much more than is available in a typical computer. Therefore QM packages like GAMESS will use disk storage (SCRATCH) to store intermediate results, freeing up memory, and load them back at a later time.

As even our fastest SCRATCH storage is several orders of magnitude slower than memory, one should make sure to assign sufficient memory to GAMESS. This is a two-step process:

1. First one needs to request memory for the job via the Slurm submission script. Using --mem-per-cpu=4000M is a reasonable value, as it is compatible with the memory-to-CPU-core ratio on the base nodes. Requesting more than that will either cause the job to wait to be started on a large-memory node or result in being charged for CPUs it did not actually use.

2. In the $SYSTEM group of the input file one needs to define the MWORDS and MEMDDI options, which tell GAMESS how much memory it is allowed to use. MWORDS is the maximum replicated memory which a job can use, on every core. It is given in units of 1,000,000 words (as opposed to 1024*1024 words), where a word is defined as 64 bits. MEMDDI is the grand total memory needed for the distributed data interface (DDI) storage, also given in units of 1,000,000 words. The memory required on each processor core for a run using p CPU cores is therefore MEMDDI/p + MWORDS. Please refer to the $SYSTEM group section in the GAMESS documentation [3].
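Since one unit of MWORDS or MEMDDI is 1,000,000 words of 8 bytes, i.e. 8 MB, the per-core formula above can be turned into megabytes with a small shell helper; the MWORDS/MEMDDI values below are illustrative, not recommendations:

```shell
#!/bin/bash
# Per-core GAMESS memory in MB (decimal): one "word" is 8 bytes and MWORDS/MEMDDI
# are given in units of 1,000,000 words, so each unit corresponds to 8 MB.
# Usage: gamess_mem_mb MWORDS MEMDDI P
gamess_mem_mb() {
    local mwords=$1 memddi=$2 p=$3
    awk -v w="$mwords" -v d="$memddi" -v p="$p" \
        'BEGIN { printf "%.0f\n", 8 * (w + d / p) }'
}

# Hypothetical example: MWORDS=100 and MEMDDI=400 on 4 cores gives
# 8 * (100 + 400/4) = 1600 MB per core, which fits within
# --mem-per-cpu=4000M with a comfortable safety margin.
gamess_mem_mb 100 400 4   # -> 1600
```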

It is important to leave a safety margin of a few hundred MB between the memory requested from the scheduler and the memory that GAMESS is allowed to use. If the slurm-{JOBID}.out file contains a message like "slurmstepd: error: Exceeded step/job memory limit at some point", then Slurm has terminated the job for trying to use more memory than was requested for it. In that case one needs to either reduce MWORDS or MEMDDI in the input file, or increase --mem-per-cpu in the submission script.
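A quick way to find jobs that were terminated for this reason is to scan the Slurm output files for the message quoted above; the file name slurm-12345.out below is a made-up example:

```shell
#!/bin/bash
# List Slurm output files that contain the out-of-memory message.
check_oom() {
    grep -l "Exceeded step/job memory limit" "$@" 2>/dev/null
}

# Hypothetical demonstration: create a sample output file and scan it.
printf 'slurmstepd: error: Exceeded step/job memory limit at some point.\n' > slurm-12345.out
check_oom slurm-*.out    # prints: slurm-12345.out
```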

References