= Environment modules = <!--T:4-->
The latest version of NAMD is 2.14 and it has been installed on all clusters. We recommend that users run the newest version.
Older versions 2.13 and 2.12 are also available.
To run jobs that span nodes, use the OFI versions on Cedar and the UCX versions on the other clusters.
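In practice the choice amounts to loading a different module depending on the cluster (a minimal sketch, using the same module names as the submission scripts below):

 # On Cedar, whose interconnect is different, load the OFI build:
 module load StdEnv/2020 namd-ofi/2.14
 # On the other clusters, load the UCX build:
 module load StdEnv/2020 namd-ucx/2.14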
= Submission scripts = <!--T:13-->
#SBATCH --account=def-specifyaccount
module load StdEnv/2020 namd-ucx/2.14
srun --mpi=pmi2 namd2 apoa1.namd
}}
== OFI jobs ==
'''NOTE''': OFI versions will run '''ONLY''' on Cedar because of its different interconnect.
{{File
|name=ofi_namd_job.sh
|lang="sh"
|contents=
#!/bin/bash
#SBATCH --account=def-specifyaccount
#SBATCH --ntasks 64            # number of tasks
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=32
#SBATCH -t 0:05:00             # time (D-HH:MM)
#SBATCH --mem=0                # memory per node, 0 means all memory
#SBATCH -o slurm.%N.%j.out     # STDOUT
module load StdEnv/2020 namd-ofi/2.14
srun --mpi=pmi2 namd2 stmv.namd
}}
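The script is submitted with the usual Slurm command from a login node (the file name here is the one assumed above):

 sbatch ofi_namd_job.sh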
== OFI GPU jobs ==
'''NOTE''': OFI versions will run '''ONLY''' on Cedar because of its different interconnect.
{{File
|name=ofi_namd_gpu_job.sh
|lang="sh"
|contents=
#!/bin/bash
#SBATCH --account=def-specifyaccount
#SBATCH --ntasks 8             # number of tasks
#SBATCH --nodes=2
#SBATCH --cpus-per-task=6
#SBATCH --gres=gpu:4
#SBATCH -t 0:05:00             # time (D-HH:MM)
#SBATCH --mem=0                # memory per node, 0 means all memory
module load StdEnv/2020 cuda/11.0 namd-ofi-smp/2.14
NUM_PES=$(expr $SLURM_CPUS_PER_TASK - 1 )   # leave one core per task for the communication thread
srun --mpi=pmi2 namd2 ++ppn $NUM_PES stmv.namd
}}
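In this layout the 8 tasks are spread as 4 per node, each with 6 cores, and <code>--gres=gpu:4</code> requests 4 GPUs on each node, i.e. one per task. Because the SMP build dedicates one thread per task to communication, <code>++ppn</code> is set to <code>$SLURM_CPUS_PER_TASK - 1</code> = 5 worker threads, so all 6 cores of each task stay busy.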
== MPI jobs == <!--T:18-->
'''NOTE''': MPI should not be used. Instead, use OFI on Cedar and UCX on the other clusters.
</translate>
<translate>
== GPU jobs == <!--T:19-->
== Verbs-GPU jobs == <!--T:20-->
'''NOTE''': For NAMD 2.14, use OFI GPU on Cedar and UCX GPU on the other clusters. The instructions below apply only to NAMD versions 2.13 and 2.12.
This example uses 64 processes in total on 2 nodes, each node running 32 processes, thus fully using its 32 cores. Each node uses 2 GPUs, so the job uses 4 GPUs in total. This script assumes full nodes are used, so <code>ntasks-per-node</code> should be 32 (on Graham). For best performance, NAMD jobs should use full nodes.
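Put into a script, that description could look like the following minimal sketch. '''Caution''': the module name <code>namd-verbs/2.13</code> and the <code>slurm_hl2hl.py</code> helper used to build the Charm++ nodelist are assumptions, not confirmed on this page; check <code>module avail namd</code> on your cluster. The <code>charmrun</code> options (<code>++p</code>, <code>++nodelist</code>) and the <code>+idlepoll</code> flag are standard Charm++/NAMD options.

{{File
|name=verbsgpu_namd_job.sh
|lang="sh"
|contents=
#!/bin/bash
#SBATCH --account=def-specifyaccount
#SBATCH --ntasks 64            # number of tasks
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=32
#SBATCH --gres=gpu:2           # 2 GPUs per node, 4 in total
#SBATCH --mem=0                # memory per node, 0 means all memory
#SBATCH -t 0:05:00             # time (D-HH:MM)
#SBATCH -o slurm.%N.%j.out     # STDOUT
module load cuda namd-verbs/2.13    # assumed module names
# Build a Charm++ nodelist from the Slurm allocation (assumed helper)
slurm_hl2hl.py --format CHARM > nodefile.dat
CHARMRUN=$(which charmrun)
NAMD2=$(which namd2)
$CHARMRUN ++p $SLURM_NTASKS ++nodelist nodefile.dat $NAMD2 +idlepoll apoa1.namd
}}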