Revision as of 20:58, 15 January 2018
General
NAMD is a parallel, object-oriented molecular dynamics code designed for high-performance simulation of large biomolecular systems. Simulation preparation and analysis are integrated into the visualization package VMD.
- Project web site: http://www.ks.uiuc.edu/Research/namd/
- Manual: http://www.ks.uiuc.edu/Research/namd/current/ug/
- Downloads: http://www.ks.uiuc.edu/Development/Download/download.cgi?PackageName=NAMD
- Tutorials: http://www.ks.uiuc.edu/Training/Tutorials/
Registration is required to download the software.
Quickstart Guide
This section summarizes configuration details.
Environment Modules
The following NAMD modules are available on graham and cedar.
Compiled without CUDA support:
- namd-multicore/2.12
- namd-verbs/2.12
Compiled with CUDA support:
- namd-multicore/2.12
- namd-verbs-smp/2.12
To access the modules that require CUDA, first execute:
module load cuda/8.0.44
Note: the verbs library is more efficient than OpenMPI, so only verbs versions are provided.
Submission Scripts
Please refer to the page "Running jobs" for help on using the SLURM workload manager.
Serial Job
Here is a simple job script for a serial simulation:
#!/bin/bash
#
#SBATCH --ntasks 1 # number of tasks
#SBATCH --mem 1024 # memory pool per process
#SBATCH -o slurm.%N.%j.out # STDOUT
#SBATCH -t 0:20:00 # time (HH:MM:SS)
#SBATCH --account=def-specifyaccount
module load namd-multicore/2.12
namd2 +p1 +idlepoll apoa1.namd
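A script like the one above is submitted from a login node with sbatch. A minimal sketch, assuming the script has been saved under the hypothetical name serial_namd.sh:

```shell
# Submit the serial job script (serial_namd.sh is an assumed filename)
sbatch serial_namd.sh
# Monitor your queued and running jobs
squeue -u $USER
```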
Verbs Job
These provisional instructions will be refined once this configuration can be fully tested on the new clusters. This example runs 64 processes in total on 2 nodes, with 32 processes per node, fully using each node's 32 cores. The script assumes full nodes are used, so ntasks divided by nodes should equal 32 (on graham). For best performance, NAMD jobs should use full nodes.
NOTE: The verbs version will not run on cedar because of its different interconnect. Use the MPI version instead.
#!/bin/bash
#
#SBATCH --ntasks 64 # number of tasks
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=32
#SBATCH --mem 0 # memory per node, 0 means all memory
#SBATCH -o slurm.%N.%j.out # STDOUT
#SBATCH -t 0:05:00 # time (HH:MM:SS)
#SBATCH --account=def-specifyaccount
slurm_hl2hl.py --format CHARM > nodefile.dat  # build a Charm++ nodelist from the Slurm allocation
NODEFILE=nodefile.dat
P=$SLURM_NTASKS                               # total number of charmrun processes
module load namd-verbs/2.12
CHARMRUN=$(which charmrun)
NAMD2=$(which namd2)
$CHARMRUN ++p $P ++nodelist $NODEFILE $NAMD2 +idlepoll apoa1.namd
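As a sanity check on the layout described above, the ++p value passed to charmrun should equal the number of nodes times the tasks per node. A minimal sketch; the values are hard-coded to mirror the example script, whereas inside a real job they would come from $SLURM_JOB_NUM_NODES and $SLURM_NTASKS_PER_NODE:

```shell
# Sanity-check sketch: charmrun's ++p must match the Slurm layout.
NODES=2
NTASKS_PER_NODE=32
P=$((NODES * NTASKS_PER_NODE))  # total processes across the allocation
echo "++p $P"                   # prints: ++p 64
```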
MPI Job
NOTE: Use this only on cedar, where the verbs version will not work.
#!/bin/bash
#
#SBATCH --ntasks 64 # number of tasks
#SBATCH --nodes=2
#SBATCH --mem 0 # memory per node, 0 means all memory
#SBATCH -o slurm.%N.%j.out # STDOUT
#SBATCH -t 0:05:00 # time (HH:MM:SS)
#SBATCH --account=def-specifyaccount
module load namd-mpi/2.12
NAMD2=$(which namd2)
srun $NAMD2 apoa1.namd   # srun launches one namd2 process per Slurm task
GPU Job
This example uses 8 CPU cores and 1 GPU on a single node.
#!/bin/bash
#
#SBATCH --ntasks 8 # number of tasks
#SBATCH --mem 2048 # memory pool per process
#SBATCH -o slurm.%N.%j.out # STDOUT
#SBATCH -t 0:05:00 # time (HH:MM:SS)
#SBATCH --gres=gpu:1
#SBATCH --account=def-specifyaccount
module load cuda/8.0.44
module load namd-multicore/2.12
namd2 +p8 +idlepoll apoa1.namd
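The +p value given to namd2 should stay in step with the task count requested with --ntasks. A small sketch of that bookkeeping; NTASKS is hard-coded here to mirror the script, whereas inside the job you could read the SLURM_NTASKS environment variable instead:

```shell
# Sketch: keep namd2's +p in sync with #SBATCH --ntasks.
NTASKS=8   # assumed value; Slurm exports SLURM_NTASKS inside the job
NAMD_ARGS="+p${NTASKS} +idlepoll apoa1.namd"
echo "$NAMD_ARGS"   # prints: +p8 +idlepoll apoa1.namd
```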
Verbs-GPU Job
These provisional instructions will be refined once this configuration can be fully tested on the new clusters. This example runs 64 processes in total on 2 nodes, with 32 processes per node, fully using each node's 32 cores. Each node uses 2 GPUs, so the job uses 4 GPUs in total. The script assumes full nodes are used, so ntasks divided by nodes should equal 32 (on graham). For best performance, NAMD jobs should use full nodes.
NOTE: The verbs version will not run on cedar because of its different interconnect.
#!/bin/bash
#
#SBATCH --ntasks 64 # number of tasks
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=32
#SBATCH --mem 0 # memory per node, 0 means all memory
#SBATCH --gres=gpu:2
#SBATCH -o slurm.%N.%j.out # STDOUT
#SBATCH -t 0:05:00 # time (HH:MM:SS)
#SBATCH --account=def-specifyaccount
slurm_hl2hl.py --format CHARM > nodefile.dat  # build a Charm++ nodelist from the Slurm allocation
NODEFILE=nodefile.dat
OMP_NUM_THREADS=32                            # processes per node, passed to charmrun as ++ppn
P=$SLURM_NTASKS                               # total number of charmrun processes
module load cuda/8.0.44
module load namd-verbs-smp/2.12
CHARMRUN=$(which charmrun)
NAMD2=$(which namd2)
$CHARMRUN ++p $P ++ppn $OMP_NUM_THREADS ++nodelist $NODEFILE $NAMD2 +idlepoll apoa1.namd
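The resource bookkeeping in the script above can be sketched as follows: the total GPU count is nodes times the per-node gres request, and charmrun's ++p is ++ppn times the node count. Values are hard-coded to mirror the example; a real job would derive them from Slurm's environment variables:

```shell
# Sketch of the resource arithmetic behind the Verbs-GPU script.
NODES=2
GPUS_PER_NODE=2                      # from #SBATCH --gres=gpu:2
PPN=32                               # charmrun's ++ppn, processes per node
TOTAL_GPUS=$((NODES * GPUS_PER_NODE))
P=$((PPN * NODES))                   # charmrun's ++p
echo "$TOTAL_GPUS GPUs, ++p $P"      # prints: 4 GPUs, ++p 64
```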
Installation
NAMD is installed by the Compute Canada software team and is available as a module. If a new version is required, please email technical support to request it. If for some reason you need to do your own installation, please contact technical support for advice and help; you can also ask for details of how our NAMD modules were compiled.
Links
- NAMD website at www.ks.uiuc.edu: http://www.ks.uiuc.edu/Research/namd/
- NAMD User's Guide for version 2.12: http://www.ks.uiuc.edu/Research/namd/2.12/ug/