NAMD

Revision as of 18:09, 16 January 2018


NAMD is a parallel, object-oriented molecular dynamics code designed for high-performance simulation of large biomolecular systems. Simulation preparation and analysis is integrated into the VMD visualization package.


Installation

NAMD is installed by the Compute Canada software team and is available as a module. If you require a new version, or for some reason need to do your own installation, please contact Technical support. You can also ask for details of how our NAMD modules were compiled.

Environment modules

The following modules providing NAMD are available on Graham and Cedar.

  • compiled without CUDA support:
      • namd-multicore/2.12
      • namd-verbs/2.12
  • compiled with CUDA support:
      • namd-multicore/2.12
      • namd-verbs-smp/2.12

To use the modules compiled with CUDA support, first execute

module load cuda/8.0.44

Note: Using a verbs library is more efficient than using OpenMPI, hence only verbs versions are provided (an MPI build is also available for Cedar, whose interconnect does not support verbs).
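After loading the modules, a quick sanity check is to confirm that the NAMD launcher and binary resolve on the PATH. A minimal sketch, using the module names from the list above; the paths printed by which are system-dependent:

```shell
# Load CUDA first, then the CUDA-enabled SMP build (names from the module list above).
module load cuda/8.0.44
module load namd-verbs-smp/2.12

# Confirm the Charm++ launcher and the NAMD binary are now on PATH.
which charmrun
which namd2
```

This mirrors what the job scripts below do with `which`; if either command prints nothing, the module did not load correctly.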

Submission scripts

Please refer to the Running jobs page for help on using the SLURM workload manager.

Serial jobs

Here is a simple job script for a serial simulation:

File : serial_namd_job.sh

#!/bin/bash
#
#SBATCH --ntasks 1            # number of tasks
#SBATCH --mem 1024            # memory (MB)
#SBATCH -o slurm.%N.%j.out    # STDOUT
#SBATCH -t 0:20:00            # time (HH:MM:SS)
#SBATCH --account=def-specifyaccount


module load namd-multicore/2.12
namd2 +p1 +idlepoll apoa1.namd


Verbs jobs

These provisional instructions will be refined once this configuration can be fully tested on the new clusters. This example uses 64 processes in total on 2 nodes, each node running 32 processes, thus fully utilizing its 32 cores. The script assumes full nodes are used, so ntasks divided by nodes should be 32 on Graham. For best performance, NAMD jobs should use full nodes.

NOTE: Verbs versions will not run on Cedar because of its different interconnect. Use the MPI version instead.
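The process layout described above can be sanity-checked with a little shell arithmetic before submitting. A sketch using the same numbers as the script below:

```shell
# Layout used in the verbs example: 2 full Graham nodes, 32 cores each.
nodes=2
ntasks_per_node=32

# For full-node packing, --ntasks must equal --nodes times --ntasks-per-node.
ntasks=$((nodes * ntasks_per_node))
echo "$ntasks"    # prints 64, matching --ntasks in the script
```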

File : verbs_namd_job.sh

#!/bin/bash
#
#SBATCH --ntasks 64            # number of tasks
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=32
#SBATCH --mem 0            # memory per node, 0 means all memory
#SBATCH -o slurm.%N.%j.out    # STDOUT
#SBATCH -t 0:05:00            # time (HH:MM:SS)
#SBATCH --account=def-specifyaccount

slurm_hl2hl.py --format CHARM > nodefile.dat
NODEFILE=nodefile.dat
P=$SLURM_NTASKS

module load namd-verbs/2.12
CHARMRUN=`which charmrun`
NAMD2=`which namd2`
$CHARMRUN ++p $P ++nodelist $NODEFILE  $NAMD2  +idlepoll apoa1.namd
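For reference, the file written by slurm_hl2hl.py is a Charm++ nodelist: a "group main" line followed by one "host" entry per allocated node. Roughly as below, with hypothetical Graham hostnames:

```
group main
host gra100
host gra101
```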


MPI jobs

NOTE: Use this only on Cedar, where verbs versions will not work.

File : mpi_namd_job.sh

#!/bin/bash
#
#SBATCH --ntasks 64            # number of tasks
#SBATCH --nodes=2
#SBATCH --mem 0            # memory per node, 0 means all memory
#SBATCH -o slurm.%N.%j.out    # STDOUT
#SBATCH -t 0:05:00            # time (HH:MM:SS)
#SBATCH --account=def-specifyaccount

module load namd-mpi/2.12
NAMD2=`which namd2`
srun $NAMD2 apoa1.namd


GPU jobs

This example uses 8 CPU cores and 1 GPU on a single node.

File : multicore_gpu_namd_job.sh

#!/bin/bash
#
#SBATCH --ntasks 8            # number of tasks
#SBATCH --mem 2048            # memory (MB)
#SBATCH -o slurm.%N.%j.out    # STDOUT
#SBATCH -t 0:05:00            # time (HH:MM:SS)
#SBATCH --gres=gpu:1
#SBATCH --account=def-specifyaccount


module load cuda/8.0.44
module load namd-multicore/2.12
namd2 +p8 +idlepoll apoa1.namd


Verbs-GPU jobs

These provisional instructions will be refined once this configuration can be fully tested on the new clusters. This example uses 64 processes in total on 2 nodes, each node running 32 processes, thus fully utilizing its 32 cores. Each node uses 2 GPUs, so the job uses 4 GPUs in total. The script assumes full nodes are used, so ntasks divided by nodes should be 32 on Graham. For best performance, NAMD jobs should use full nodes.

NOTE: Verbs versions will not run on Cedar because of its different interconnect.

File : verbsgpu_namd_job.sh

#!/bin/bash
#
#SBATCH --ntasks 64            # number of tasks
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=32
#SBATCH --mem 0            # memory per node, 0 means all memory
#SBATCH --gres=gpu:2
#SBATCH -o slurm.%N.%j.out    # STDOUT
#SBATCH -t 0:05:00            # time (HH:MM:SS)
#SBATCH --account=def-specifyaccount

slurm_hl2hl.py --format CHARM > nodefile.dat
NODEFILE=nodefile.dat
OMP_NUM_THREADS=32
P=$SLURM_NTASKS

module load cuda/8.0.44
module load namd-verbs-smp/2.12
CHARMRUN=`which charmrun`
NAMD2=`which namd2`
$CHARMRUN ++p $P ++ppn $OMP_NUM_THREADS ++nodelist $NODEFILE  $NAMD2  +idlepoll apoa1.namd

