<!--T:14-->
Graham uses the Slurm scheduler; for details about submitting jobs, see [[Running jobs]].
<b>Because Graham's scratch file system is aging, large AMS jobs may run slowly on the cluster.
For now, we recommend running AMS jobs on CPUs within a single node (one full or partial node) and using the local disk for SCM_TMPDIR.
Each compute node has about 950GB of local disk space; see the example job scripts below.</b>


====Example scripts for an ADF job==== <!--T:16-->
module unload openmpi
module load ams/2024.102
export SCM_TMPDIR=$SLURM_TMPDIR      # for CPUs within one node and file sizes under 900GB; comment out this line for large multi-node jobs
bash H2O_adf.run                     # run the input script
}}
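The lines above show only the AMS module setup, the SCM_TMPDIR setting, and the launch command. A complete submission script also needs the usual Slurm directives. Below is a minimal single-node sketch; the file name (mysub.sh), the account, and the resource values (cores, memory, walltime) are illustrative placeholders to adapt to your own job, not values prescribed by this page.

{{File
  |name=mysub.sh
  |lang="bash"
  |contents=
#!/bin/bash
#SBATCH --account=def-someuser      # replace with your own account/allocation
#SBATCH --nodes=1                   # single node, as recommended above
#SBATCH --ntasks-per-node=32        # number of CPU cores; placeholder value
#SBATCH --mem-per-cpu=4000M         # memory per core; placeholder value
#SBATCH --time=12:00:00             # walltime; placeholder value

module unload openmpi
module load ams/2024.102
export SCM_TMPDIR=$SLURM_TMPDIR      # keep AMS scratch files on the ~950GB node-local disk
bash H2O_adf.run                     # run the input script
}}

Submit the job with <code>sbatch mysub.sh</code>. Because SCM_TMPDIR points at the node-local $SLURM_TMPDIR, the AMS temporary files never touch the shared scratch file system, which is the point of the recommendation above.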
module unload openmpi
module load ams/2024.102
export SCM_TMPDIR=$SLURM_TMPDIR      # for CPUs within one node and file sizes under 900GB; comment out this line for large multi-node jobs
bash SnO_EFG_band.run                # run the input file
}}
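This second fragment comes from the BAND example later on the page (an SnO EFG calculation). The same Slurm preamble and SCM_TMPDIR setting sketched above apply; only the launch line (<code>bash SnO_EFG_band.run</code>) and the resource values you choose differ.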