MrBayes
MrBayes is a program for Bayesian inference and model choice across a wide range of phylogenetic and evolutionary models. MrBayes uses Markov chain Monte Carlo (MCMC) methods to estimate the posterior distribution of model parameters.
Finding available modules
[name@server ~]$ module spider mrbayes
For more on finding and selecting a version of MrBayes using module commands, see Using modules.
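For example, to check what a specific version requires and then load it (3.2.7 is the version used in the examples below; the versions available to you may differ):

[name@server ~]$ module spider mrbayes/3.2.7
[name@server ~]$ module load mrbayes/3.2.7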
Examples
Sequential
The following job script uses only one CPU core (--cpus-per-task=1). The example uses an input file (primates.nex) distributed with MrBayes.
#!/bin/bash
#SBATCH --account=def-someuser # replace with your PI account
#SBATCH --cpus-per-task=1
#SBATCH --mem-per-cpu=3G # increase as needed
#SBATCH --time=1:00:00 # increase as needed
module load mrbayes/3.2.7
cd $SCRATCH
cp -v $EBROOTMRBAYES/share/examples/mrbayes/primates.nex .
mb primates.nex
The job script can be submitted with
[name@server ~]$ sbatch submit-mrbayes-seq.sh
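Once submitted, you can follow the job with standard Slurm commands. A minimal check, assuming the default output file name pattern slurm-<jobid>.out (replace <jobid> with the ID printed by sbatch):

[name@server ~]$ squeue -u $USER
[name@server ~]$ tail slurm-<jobid>.out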
Parallel
MrBayes can be run on multiple cores, on multiple nodes, and on GPUs.
MPI
The following job script will use 8 CPU cores in total, on one or more nodes. Like the previous example, it uses an input file (primates.nex) distributed with MrBayes.
#!/bin/bash
#SBATCH --account=def-someuser # replace with your PI account
#SBATCH --ntasks=8 # increase as needed
#SBATCH --mem-per-cpu=3G # increase as needed
#SBATCH --time=1:00:00 # increase as needed
module load mrbayes/3.2.7
cd $SCRATCH
cp -v $EBROOTMRBAYES/share/examples/mrbayes/primates.nex .
srun mb primates.nex
The job script can be submitted with
[name@server ~]$ sbatch submit-mrbayes-parallel.sh
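The MPI build of MrBayes distributes the Metropolis-coupled chains across tasks, so there is little benefit in requesting more tasks than nruns × nchains; with the MrBayes defaults (2 runs of 4 chains) that is 8, which is why --ntasks=8 is used above. If you raise the chain count in your own command file, you can raise --ntasks to match. The sketch below is only an illustration with assumed file and parameter values:

# Hypothetical command file with 2 runs x 8 chains = 16 chains in total.
cat > morechains.nex << 'EOF'
begin mrbayes;
    execute primates.nex;
    mcmc nruns=2 nchains=8 ngen=1000000;
end;
EOF
# In the job script, request one MPI task per chain, e.g. #SBATCH --ntasks=16
srun mb morechains.nex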
GPU
The following job script will use a GPU. Like the previous examples, it uses an input file (primates.nex) distributed with MrBayes.
#!/bin/bash
#SBATCH --account=def-someuser # replace with your PI account
#SBATCH --cpus-per-task=1
#SBATCH --gpus=1
#SBATCH --mem-per-cpu=3G # increase as needed
#SBATCH --time=1:00:00 # increase as needed
module load gcc cuda/12 mrbayes/3.2.7
cd $SCRATCH
cp -v $EBROOTMRBAYES/share/examples/mrbayes/primates.nex .
srun mb primates.nex
The job script can be submitted with
[name@server ~]$ sbatch submit-mrbayes-gpu.sh
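If you need a particular GPU model, the --gpus request can also name a type. This is only an illustration; v100 is an assumed example, and the types on offer depend on the cluster:

#SBATCH --gpus=v100:1        # one GPU of a specific (assumed) type; available types vary by cluster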
Checkpointing
If you need very long runs of MrBayes, we suggest you break up the work into several small jobs rather than one very long job. Long jobs are more likely to be interrupted by hardware failure or a maintenance outage. Fortunately, MrBayes has a checkpointing mechanism, with which progress can be saved from one job and continued in a subsequent job.
Here is an example of how to split a calculation into two Slurm jobs which will run one after the other. Create two files, job1.nex and job2.nex, as shown below. Notice that the key difference between them is the presence of the append keyword in the second.
job1.nex:
execute primates.nex;
mcmc ngen=10000000 nruns=2 temp=0.02 mcmcdiag=yes samplefreq=1000
stoprule=yes stopval=0.005 relburnin=yes burninfrac=0.1 printfreq=1000
checkfreq=1000;
job2.nex:
execute primates.nex;
mcmc ngen=20000000 nruns=2 temp=0.02 mcmcdiag=yes samplefreq=1000
stoprule=yes stopval=0.005 relburnin=yes burninfrac=0.1 printfreq=1000
append=yes checkfreq=1000;
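Because both command files set checkfreq=1000, MrBayes periodically writes a checkpoint file alongside its other output, and append=yes in job2.nex tells the second job to continue from it. A quick way to confirm that the first job left a checkpoint behind (the .ckp file name is an assumption based on the input file name used here):

[name@server ~]$ ls $SCRATCH/primates.nex.ckp*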
Then create a job script. This example is a job array, which means that one script and one sbatch command will be sufficient to launch two Slurm jobs, and therefore both parts of the calculation. See Job arrays for more about the --array parameter and the $SLURM_ARRAY_TASK_ID variable used here.
#!/bin/bash
#SBATCH --account=def-someuser # replace with your PI account
#SBATCH --ntasks=8 # increase as needed
#SBATCH --mem-per-cpu=3G # increase as needed
#SBATCH --time=1:00:00 # increase as needed
#SBATCH --array=1-2%1 # match the number of sub-jobs, only 1 at a time
module load gcc mrbayes/3.2.7
cd $SCRATCH
cp -v $EBROOTMRBAYES/share/examples/mrbayes/primates.nex .
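# Note: job1.nex and job2.nex must also be present in $SCRATCH, since mb runs from there.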
srun mb job${SLURM_ARRAY_TASK_ID}.nex
The example can be submitted with
[name@server ~]$ sbatch submit-mrbayes-cp.sh