<translate>


== Parallel CPU jobs ==
 
=== MPI jobs === <!--T:18-->
'''NOTE''': the MPI version of NAMD should not be used. Instead, use the OFI version on Cedar and the UCX version on other clusters.
 
=== Verbs jobs === <!--T:16-->


<!--T:51-->
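A Verbs job is launched with <code>charmrun</code> and a Charm++ nodelist built from the Slurm allocation. The script below is a sketch: the resource requests, the <code>namd-verbs/2.14</code> module name and the <code>slurm_hl2hl.py</code> nodelist helper are assumptions to adapt for your cluster.
</translate>
{{File
  |name=verbs_namd_job.sh
  |lang="sh"
  |contents=
#!/bin/bash
#SBATCH --account=def-specifyaccount
#SBATCH --ntasks 64            # number of tasks (sketch; adjust to your needs)
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=32
#SBATCH -t 0:05:00            # time limit (HH:MM:SS)
#SBATCH --mem=0               # memory per node, 0 means all memory
#SBATCH -o slurm.%N.%j.out    # STDOUT

# Build a Charm++ nodelist from the Slurm allocation (assumed helper script)
NODEFILE=nodefile.dat
slurm_hl2hl.py --format CHARM > $NODEFILE
P=$SLURM_NTASKS

# Assumed module name for the Verbs build of NAMD
module load namd-verbs/2.14
CHARMRUN=$(which charmrun)
NAMD2=$(which namd2)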
$CHARMRUN ++p $P ++nodelist $NODEFILE  $NAMD2  +idlepoll apoa1.namd
}}
<translate>


=== UCX jobs === <!--T:42-->
This example uses 80 processes in total on 2 nodes, with each node running 40 processes, thus fully utilizing its 40 cores.  The script assumes full nodes are used, so <code>ntasks-per-node</code> should be 40 (on Béluga).  For best performance, NAMD jobs should use full nodes.
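A submission script matching this layout is sketched below; the <code>namd-ucx/2.14</code> module name and the PMI2 launch line are assumptions to verify on your cluster.
</translate>
{{File
  |name=ucx_namd_job.sh
  |lang="sh"
  |contents=
#!/bin/bash
#SBATCH --account=def-specifyaccount
#SBATCH --ntasks 80            # number of tasks
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=40
#SBATCH -t 0:05:00            # time limit (HH:MM:SS)
#SBATCH --mem=0               # memory per node, 0 means all memory
#SBATCH -o slurm.%N.%j.out    # STDOUT

# Assumed module name for the UCX build of NAMD
module load StdEnv/2020 namd-ucx/2.14
srun --mpi=pmi2 namd2 stmv.namd
}}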


<translate>


=== OFI jobs === <!--T:53-->
 
<!--T:54-->
'''NOTE''': OFI versions will run '''ONLY''' on Cedar because of its different interconnect.
</translate>
{{File
  |name=ofi_namd_job.sh
  |lang="sh"
  |contents=
#!/bin/bash
#SBATCH --account=def-specifyaccount
#SBATCH --ntasks 64            # number of tasks
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=32
#SBATCH -t 0:05:00            # time limit (HH:MM:SS)
#SBATCH --mem=0            # memory per node, 0 means all memory
#SBATCH -o slurm.%N.%j.out    # STDOUT
 
<!--T:55-->
module load StdEnv/2020 namd-ofi/2.14
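# srun starts one namd2 process per Slurm task; --mpi=pmi2 selects the PMI2 interface for process startup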
srun --mpi=pmi2 namd2 stmv.namd
}}
<translate>
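Submit the job with <code>sbatch ofi_namd_job.sh</code> from the directory containing your NAMD input files (here <code>stmv.namd</code>).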
 
== Single GPU jobs == <!--T:19-->
This example uses 8 CPU cores and 1 GPU on a single node.
</translate>
{{File
  |name=multicore_gpu_namd_job.sh
  |lang="sh"
  |contents=
#!/bin/bash
#
#SBATCH --cpus-per-task=8
#SBATCH --mem 2048            # memory per node in MB
#SBATCH -o slurm.%N.%j.out    # STDOUT
#SBATCH -t 0:05:00            # time limit (HH:MM:SS)
#SBATCH --gres=gpu:1
#SBATCH --account=def-specifyaccount
 
 
module load StdEnv/2020
module load cuda/11.0
module load namd-multicore/2.14
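# +p sets the number of worker threads to the allocated cores; +idlepoll keeps idle threads polling, which is recommended for GPU runs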
namd2 +p$SLURM_CPUS_PER_TASK  +idlepoll apoa1.namd
}}
 
<translate>
== Parallel GPU jobs ==
=== UCX GPU jobs === <!--T:44-->
This example is for Béluga and assumes that full nodes are used, which gives the best performance for NAMD jobs. It uses 8 processes (tasks) in total on 2 nodes, each process using 10 threads and 1 GPU.  This fully utilizes Béluga's GPU nodes, which have 40 cores and 4 GPUs per node.  Note that 1 core per task has to be reserved for a communication thread, so NAMD will report that only 72 cores are being used; this is normal.
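A submission script matching this layout is sketched below; the <code>namd-ucx-smp/2.14</code> module name, the <code>srun --mpi=pmi2</code> launch and the <code>+ppn</code> option are assumptions to verify against your cluster's documentation.
</translate>
{{File
  |name=ucx_gpu_namd_job.sh
  |lang="sh"
  |contents=
#!/bin/bash
#SBATCH --account=def-specifyaccount
#SBATCH --ntasks 8             # number of tasks (processes)
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --cpus-per-task=10     # threads per task
#SBATCH --gres=gpu:4           # GPUs per node
#SBATCH -t 0:05:00            # time limit (HH:MM:SS)
#SBATCH --mem=0               # memory per node, 0 means all memory
#SBATCH -o slurm.%N.%j.out    # STDOUT

# Assumed module name for the UCX SMP (GPU) build of NAMD
module load StdEnv/2020 cuda/11.0 namd-ucx-smp/2.14

# One core per task is reserved for the communication thread,
# so each process runs cpus-per-task - 1 worker threads (9 here, 72 in total)
NUM_PES=$((SLURM_CPUS_PER_TASK - 1))
srun --mpi=pmi2 namd2 +ppn $NUM_PES +idlepoll stmv.namd
}}
<translate>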




=== OFI GPU jobs === <!--T:56-->
 


<!--T:57-->


=== Verbs-GPU jobs === <!--T:20-->


<!--T:59-->