NAMD



This article is a draft

This is not a complete article: it is a draft, a work in progress intended to become a full article. It may or may not be ready for inclusion in the main wiki and should not be considered factual or authoritative.




General

NAMD is a parallel, object-oriented molecular dynamics code designed for high-performance simulation of large biomolecular systems. Simulation preparation and analysis are integrated into the visualization package VMD.

Registration is required to download the software.

Release notes:

NAMD Wiki, How to compile: https://proteusmaster.urcf.drexel.edu/urcfwiki/index.php/Compiling_NAMD

Strengths

Weak points

GPU support

Quickstart Guide

This section summarizes configuration details.

Environment Modules

The following modules providing NAMD are available on graham and cedar.

Compiled without CUDA support:

  • namd-multicore/2.12
  • namd-verbs/2.12

Compiled with CUDA support:

  • namd-multicore/2.12
  • namd-verbs-smp/2.12

To access the modules that require CUDA, first execute:

module load cuda/8.0.44

Note: using the verbs library is more efficient than using OpenMPI, so only verbs versions are provided.
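
For example, a CUDA-enabled build would be made available as follows (a minimal sketch using the module names listed above; the which check simply confirms that the namd2 binary supplied by the module is on your PATH):

module load cuda/8.0.44
module load namd-multicore/2.12
which namd2    # confirm the namd2 binary is found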

Submission Scripts

These example submission scripts still have to be tested once the national systems are available for testing.

Please refer to the page "Running jobs" for help on using the SLURM workload manager.

Serial Job

Here is a simple job script for a serial simulation:

File : serial_namd_job.sh

#!/bin/bash
#
#SBATCH --ntasks 1            # number of tasks
#SBATCH --mem 1024            # memory per node (MB)
#SBATCH -o slurm.%N.%j.out    # STDOUT
#SBATCH -t 0:20:00            # time (HH:MM:SS)

module load namd-multicore/2.12
namd2 +p1 +idlepoll apoa1.namd
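
Assuming the script above is saved as serial_namd_job.sh next to the input file apoa1.namd, it is submitted to the scheduler in the usual Slurm way, for example:

sbatch serial_namd_job.sh
squeue -u $USER    # check the status of your jobs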


Verbs Job

These provisional instructions will be refined once this configuration can be fully tested on the new clusters. This example uses 64 processes in total on 2 nodes, with each node running 32 processes, thus fully utilizing its 32 cores. The script assumes full nodes are used, so ntasks divided by nodes should equal 32 (on graham). For best performance, NAMD jobs should use full nodes.


File : verbs_namd_job.sh

#!/bin/bash
#
#SBATCH --ntasks 64            # number of tasks
#SBATCH --nodes=2
#SBATCH --mem 1024            # memory per node (MB)
#SBATCH -o slurm.%N.%j.out    # STDOUT
#SBATCH -t 0:05:00            # time (HH:MM:SS)

cat << EOF > nodefile.py
#!/usr/bin/python
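# nodefile.py: expand the compact Slurm node list passed as argv[1]
# (e.g. gra[101-102,105]) into the "host <nodename>" lines expected by
# charmrun's ++nodelist file. Assumes a three-character node-name prefix
# such as "gra" on graham.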
import sys
a=sys.argv[1]
nodefile=open("nodefile.dat","w")

cluster=a[0:3]
for st in a.lstrip(cluster+"[").rstrip("]").split(","):
    d=st.split("-")
    start=int(d[0])
    finish=start
    if(len(d)==2):
        finish=int(d[1])

    for i in range(start,finish+1):
        nodefile.write("host "+cluster+str(i)+"\n")

nodefile.close()

EOF

python nodefile.py "$SLURM_NODELIST"    # quoted so the bracketed node list is not glob-expanded
NODEFILE=nodefile.dat
OMP_NUM_THREADS=32
P=$SLURM_NTASKS

module load namd-verbs/2.12
CHARMRUN=`which charmrun`
NAMD2=`which namd2`
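# Launch namd2 through charmrun: $P processes in total, $OMP_NUM_THREADS processes
# per node, distributed over the hosts listed in nodefile.dat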
$CHARMRUN ++p $P ++ppn $OMP_NUM_THREADS ++nodelist $NODEFILE  $NAMD2  +idlepoll apoa1.namd
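
For illustration, if the job were allocated two graham nodes and SLURM_NODELIST expanded to gra[101-102] (a hypothetical allocation), the generated nodefile.dat would contain one line per node:

host gra101
host gra102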


GPU Job

This example uses 8 CPU cores and 1 GPU on a single node.

File : multicore_gpu_namd_job.sh

#!/bin/bash
#
#SBATCH --ntasks 8            # number of tasks
#SBATCH --mem 1024            # memory per node (MB)
#SBATCH -o slurm.%N.%j.out    # STDOUT
#SBATCH -t 0:05:00            # time (HH:MM:SS)
#SBATCH --gres=gpu:1

module load cuda/8.0.44
module load namd-multicore/2.12
namd2 +p8 +idlepoll apoa1.namd
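
The same multicore build should also run with more cores and GPUs on a single node. The following untested sketch assumes two GPUs are requested; the core count, memory, and NAMD's +devices flag (which selects the CUDA device indices to use) are illustrative assumptions rather than recommended settings.

#!/bin/bash
#
#SBATCH --ntasks 16           # number of tasks
#SBATCH --mem 2048            # memory per node (MB)
#SBATCH -o slurm.%N.%j.out    # STDOUT
#SBATCH -t 0:05:00            # time (HH:MM:SS)
#SBATCH --gres=gpu:2

module load cuda/8.0.44
module load namd-multicore/2.12
namd2 +p16 +devices 0,1 +idlepoll apoa1.namd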


Usage

Installation

Links