NAMD



NAMD is a parallel, object-oriented molecular dynamics code designed for high-performance simulation of large biomolecular systems. Simulation preparation and analysis are integrated into the VMD visualization package.


Installation

NAMD is installed by the Compute Canada software team and is available as a module. If a new version is required or if for some reason you need to do your own installation, please contact Technical support. You can also ask for details of how our NAMD modules were compiled.
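
To see which NAMD versions are available as modules on the cluster you are using, you can query the module system, for example with
module spider namd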

Environment modules

The following modules are available:

  • compiled without CUDA support:
      • namd-multicore/2.12
      • namd-verbs/2.12 (disabled on Cedar)
      • namd-mpi/2.12 (disabled on Graham)
  • compiled with CUDA support:
      • namd-multicore/2.12
      • namd-verbs-smp/2.12 (disabled on Cedar)
To access the modules that require CUDA, first execute
module load cuda/8.0.44
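
For example, to use the CUDA-enabled multicore build, load the two modules in this order:
module load cuda/8.0.44
module load namd-multicore/2.12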

Note: The verbs library is more efficient than OpenMPI, so only verbs versions are provided on systems that support it. The verbs versions currently do not work on Cedar because they are incompatible with its communications fabric, so use the MPI version there instead.

Submission scripts

Please refer to the Running jobs page for help on using the SLURM workload manager.
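
Once you have saved one of the scripts below, for example as serial_namd_job.sh, you can submit it and check its status with the standard SLURM commands:
sbatch serial_namd_job.sh
squeue -u $USER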

Serial jobs

Here is a simple job script for a serial simulation:

File : serial_namd_job.sh

#!/bin/bash
#
#SBATCH --ntasks 1            # number of tasks
#SBATCH --mem 1024            # memory per node (MB)
#SBATCH -o slurm.%N.%j.out    # STDOUT
#SBATCH -t 0:20:00            # time (HH:MM:SS)
#SBATCH --account=def-specifyaccount


module load namd-multicore/2.12
namd2 +p1 +idlepoll apoa1.namd


Verbs jobs

These provisional instructions will be refined once this configuration can be fully tested on the new clusters. This example runs 64 processes in total on 2 nodes, with 32 processes per node, thus fully using the 32 cores of each node. The script assumes that full nodes are used, so ntasks divided by nodes should equal 32 (on Graham). For best performance, NAMD jobs should use full nodes.

NOTE: Verbs versions will not run on Cedar because of its different interconnect. Use the MPI version instead.

File : verbs_namd_job.sh

#!/bin/bash
#
#SBATCH --ntasks 64            # number of tasks
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=32
#SBATCH --mem=0            # memory per node, 0 means all memory
#SBATCH -o slurm.%N.%j.out    # STDOUT
#SBATCH -t 0:05:00            # time (HH:MM:SS)
#SBATCH --account=def-specifyaccount

# Generate a Charm++ nodelist from the Slurm allocation (see the example after this script)
NODEFILE=nodefile.dat
slurm_hl2hl.py --format CHARM > $NODEFILE
P=$SLURM_NTASKS             # passed to charmrun below as ++p

module load namd-verbs/2.12
CHARMRUN=`which charmrun`
NAMD2=`which namd2`
$CHARMRUN ++p $P ++nodelist $NODEFILE  $NAMD2  +idlepoll apoa1.namd
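
The exact contents written by slurm_hl2hl.py are not shown here, but a nodelist in the format expected by charmrun generally contains one host line per allocated node, for example (hostnames purely illustrative):
group main
host node-0001
host node-0002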


MPI jobs

NOTE: Use this only on Cedar, where verbs versions will not work.

File : mpi_namd_job.sh

#!/bin/bash
#
#SBATCH --ntasks 64            # number of tasks
#SBATCH --nodes=2
#SBATCH --mem 0            # memory per node, 0 means all memory
#SBATCH -o slurm.%N.%j.out    # STDOUT
#SBATCH -t 0:05:00            # time (HH:MM:SS)
#SBATCH --account=def-specifyaccount

module load namd-mpi/2.12
NAMD2=`which namd2`
srun $NAMD2 apoa1.namd


GPU jobs

This example uses 8 CPU cores and 1 GPU on a single node.

File : multicore_gpu_namd_job.sh

#!/bin/bash
#
#SBATCH --ntasks 8            # number of tasks
#SBATCH --mem 2048            # memory per node (MB)
#SBATCH -o slurm.%N.%j.out    # STDOUT
#SBATCH -t 0:05:00            # time (HH:MM:SS)
#SBATCH --gres=gpu:1
#SBATCH --account=def-specifyaccount


module load cuda/8.0.44
module load namd-multicore/2.12
namd2 +p8 +idlepoll apoa1.namd
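
The +p value should match the number of cores requested from SLURM. If you change --ntasks, one way to keep the two in sync (a suggestion, not part of the original script) is to use the SLURM_NTASKS environment variable that SLURM sets inside the job:
namd2 +p${SLURM_NTASKS} +idlepoll apoa1.namd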


Verbs-GPU jobs

These provisional instructions will be refined once this configuration can be fully tested on the new clusters. This example runs 64 processes in total on 2 nodes, with 32 processes per node, thus fully using the 32 cores of each node. Each node uses 2 GPUs, so the job uses 4 GPUs in total. The script assumes that full nodes are used, so ntasks divided by nodes should equal 32 (on Graham). For best performance, NAMD jobs should use full nodes.

NOTE: Verbs versions will not run on Cedar because of its different interconnect.

File : verbsgpu_namd_job.sh

#!/bin/bash
#
#SBATCH --ntasks 64            # number of tasks
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=32
#SBATCH --mem 0            # memory per node, 0 means all memory
#SBATCH --gres=gpu:2
#SBATCH -o slurm.%N.%j.out    # STDOUT
#SBATCH -t 0:05:00            # time (HH:MM:SS)
#SBATCH --account=def-specifyaccount

# Generate a Charm++ nodelist from the Slurm allocation
slurm_hl2hl.py --format CHARM > nodefile.dat
NODEFILE=nodefile.dat
OMP_NUM_THREADS=32          # passed to charmrun below as ++ppn
P=$SLURM_NTASKS             # passed to charmrun below as ++p

module load cuda/8.0.44
module load namd-verbs-smp/2.12
CHARMRUN=`which charmrun`
NAMD2=`which namd2`
$CHARMRUN ++p $P ++ppn $OMP_NUM_THREADS ++nodelist $NODEFILE  $NAMD2  +idlepoll apoa1.namd

