MrBayes
MrBayes is a program for Bayesian inference and model choice across a wide range of phylogenetic and evolutionary models. MrBayes uses Markov chain Monte Carlo (MCMC) methods to estimate the posterior distribution of model parameters.
Finding available modules
[name@server ~]$ module spider mrbayes
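Once you have identified a version, load it before running; the examples below all use version 3.2.7:
[name@server ~]$ module load mrbayes/3.2.7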
Examples
Sequential
The following job script uses only one CPU core (--cpus-per-task=1). The example uses an input file (primates.nex) distributed with MrBayes.

File: submit-mrbayes-seq.sh
#!/bin/bash
#SBATCH --account=def-someuser # replace with your PI account
#SBATCH --cpus-per-task=1
#SBATCH --mem-per-cpu=3G # increase as needed
#SBATCH --time=1:00:00 # increase as needed
module load mrbayes/3.2.7
cd $SCRATCH
cp -v $EBROOTMRBAYES/share/examples/mrbayes/primates.nex .
mb primates.nex
The job script can be submitted with
[name@server ~]$ sbatch submit-mrbayes-seq.sh
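To check on the job and follow its output once it is running, the usual Slurm commands apply; nothing here is specific to MrBayes, 12345678 stands in for your actual job ID, and slurm-<jobID>.out is Slurm's default output file, written in the directory you submitted from:
[name@server ~]$ squeue -u $USER
[name@server ~]$ tail -f slurm-12345678.out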
Parallel
MrBayes can be run on multiple cores, on multiple nodes, and on GPUs.
MPI
The following job script will use 8 CPU cores in total, on one or more nodes. Like the previous example, it uses an input file (primates.nex) distributed with MrBayes.

File: submit-mrbayes-parallel.sh
#!/bin/bash
#SBATCH --account=def-someuser # replace with your PI account
#SBATCH --ntasks=8 # increase as needed
#SBATCH --mem-per-cpu=3G # increase as needed
#SBATCH --time=1:00:00 # increase as needed
module load mrbayes/3.2.7
cd $SCRATCH
cp -v $EBROOTMRBAYES/share/examples/mrbayes/primates.nex .
srun mb primates.nex
The job script can be submitted with
[name@server ~]$ sbatch submit-mrbayes-parallel.sh
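A point worth keeping in mind, drawn from general MrBayes documentation rather than this page: the MPI build distributes the Metropolis-coupled chains across tasks, so there is no benefit in requesting more tasks than nruns × nchains (with the defaults of 2 runs and 4 chains, that upper limit is the 8 tasks used above). If you prefer to set the chain counts explicitly, a minimal command file along the following lines could be used; mpi-example.nex is a hypothetical name and the values are illustrative only:
#NEXUS
begin mrbayes;
    [2 runs x 4 chains = 8 chains, one per MPI task in the job script above]
    execute primates.nex;
    mcmc ngen=100000 nruns=2 nchains=4 samplefreq=100;
end;
It would be run by replacing srun mb primates.nex with srun mb mpi-example.nex in the job script.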
GPU
The following job script will use a GPU. Like the previous examples, it uses an input file (primates.nex) distributed with MrBayes.

File: submit-mrbayes-gpu.sh
#!/bin/bash
#SBATCH --account=def-someuser # replace with your PI account
#SBATCH --cpus-per-task=1
#SBATCH --gpus=1
#SBATCH --mem-per-cpu=3G # increase as needed
#SBATCH --time=1:00:00 # increase as needed
module load gcc cuda/12 mrbayes/3.2.7
cd $SCRATCH
cp -v $EBROOTMRBAYES/share/examples/mrbayes/primates.nex .
srun mb primates.nex
The job script can be submitted with
[name@server ~]$ sbatch submit-mrbayes-gpu.sh
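GPU acceleration in MrBayes goes through the BEAGLE library. Depending on how the module was built, BEAGLE may already be used by default; if you want to request the GPU explicitly, the set command accepts the options sketched below. These are standard MrBayes 3.2 options, but run help set inside MrBayes to confirm they are available in your build; gpu-example.nex is a hypothetical file name:
#NEXUS
begin mrbayes;
    [Ask MrBayes to compute likelihoods with BEAGLE on the GPU]
    set usebeagle=yes beagledevice=gpu beagleprecision=single;
    execute primates.nex;
    mcmc ngen=100000;
end;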
Checkpointing
For users needing very long runs of MrBayes, we suggest breaking the work up into several smaller jobs rather than one very long job. Long jobs are more likely to be interrupted by a hardware failure or a maintenance outage. Fortunately, MrBayes has a built-in checkpointing mechanism, so progress saved in one job can be continued in a subsequent job.
1. Create the first MrBayes command file, job1.nex (the name must match the array script in step 3). It runs the first 10 million generations and writes a checkpoint every 1000 generations (checkfreq=1000).
execute primates.nex;
mcmc ngen=10000000 nruns=2 temp=0.02 mcmcdiag=yes samplefreq=1000
stoprule=yes stopval=0.005 relburnin=yes burninfrac=0.1 printfreq=1000
checkfreq=1000;
2. Create a second command file, job2.nex, which resumes from the checkpoint (append=yes) and continues until a total of 20 million generations has been reached.
execute primates.nex;
mcmc ngen=20000000 nruns=2 temp=0.02 mcmcdiag=yes samplefreq=1000
stoprule=yes stopval=0.005 relburnin=yes burninfrac=0.1 printfreq=1000
append=yes checkfreq=1000;
3. Create the submission script, submit-mrbayes-cp.sh, which runs the two command files one after the other as a job array (--array=1-2%1 matches the number of sub-jobs and allows only one to run at a time).
#!/bin/bash
#SBATCH --account=def-someuser # replace with your PI account
#SBATCH --ntasks=8 # increase as needed
#SBATCH --mem-per-cpu=3G # increase as needed
#SBATCH --time=1:00:00 # increase as needed
#SBATCH --array=1-2%1 # match the number of sub-jobs, only 1 at a time
module load gcc mrbayes/3.2.7
cd $SCRATCH
# Copy the example data file distributed with MrBayes to the current directory
cp -v $EBROOTMRBAYES/share/examples/mrbayes/primates.nex .
# Run the command file for this array task with $SLURM_NTASKS MPI processes
srun mb job${SLURM_ARRAY_TASK_ID}.nex
4. Submit the jobs
[name@server ~]$ sbatch submit-mrbayes-cp.sh
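If you later find that more generations are needed, the same pattern extends naturally: MrBayes keeps its checkpoint in a file named after the data file (here primates.nex.ckp, the default), and any further command file with append=yes and a larger ngen continues from it. As a sketch, a third command file job3.nex (together with --array=1-3%1 in the submission script) could look like:
execute primates.nex;
mcmc ngen=30000000 nruns=2 temp=0.02 mcmcdiag=yes samplefreq=1000
stoprule=yes stopval=0.005 relburnin=yes burninfrac=0.1 printfreq=1000
append=yes checkfreq=1000;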