MrBayes
MrBayes is a program for Bayesian inference and model choice across a wide range of phylogenetic and evolutionary models. MrBayes uses Markov chain Monte Carlo (MCMC) methods to estimate the posterior distribution of model parameters.
Finding available modules
[name@server ~]$ module spider mrbayes
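To see how to load a particular version, including any modules that must be loaded first, query that version directly; for example, for the version used in the examples below:
[name@server ~]$ module spider mrbayes/3.2.7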
Example
Sequential
1. Write the submission script
#!/bin/bash
#SBATCH --account=def-someuser # replace with your PI account
#SBATCH --cpus-per-task=1
#SBATCH --mem-per-cpu=3G # increase as needed
#SBATCH --time=1:00:00 # increase as needed
module load mrbayes/3.2.7
cd $SCRATCH
# Copy one of the provided examples to the working directory
cp -v $EBROOTMRBAYES/share/examples/mrbayes/primates.nex .
# Run using 1 core
mb primates.nex
2. Submit the sequential job
[name@server ~]$ sbatch submit-mrbayes-seq.sh
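Once submitted, the job can be followed in the queue; the MrBayes screen output is captured in the Slurm output file (slurm-<jobid>.out by default) in the directory where sbatch was run. For example, to list your queued and running jobs:
[name@server ~]$ squeue -u $USER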
Parallel
MrBayes supports running on multiple cores, on multiple nodes, and on GPUs.
MPI
1. Write the submission script
#!/bin/bash
#SBATCH --account=def-someuser # replace with your PI account
#SBATCH --ntasks=8 # increase as needed
#SBATCH --mem-per-cpu=3G # increase as needed
#SBATCH --time=1:00:00 # increase as needed
module load mrbayes/3.2.7
cd $SCRATCH
# Copy one of the provided examples to the working directory
cp -v $EBROOTMRBAYES/share/examples/mrbayes/primates.nex .
# Run using $SLURM_NTASKS cores
srun mb primates.nex
2. Submit the parallel job
[name@server ~]$ sbatch submit-mrbayes-parallel.sh
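The MPI build of MrBayes parallelizes over the Metropolis-coupled chains, so there is little benefit in requesting more tasks than nruns × nchains (8 with the default 2 runs of 4 chains). For larger analyses the tasks can also be spread across nodes; a minimal sketch of such a resource request, with illustrative node and task counts (the rest of the script is unchanged):
#!/bin/bash
#SBATCH --account=def-someuser   # replace with your PI account
#SBATCH --nodes=2                # illustrative; adjust to your analysis
#SBATCH --ntasks-per-node=4      # total ranks should not exceed nruns x nchains
#SBATCH --mem-per-cpu=3G         # increase as needed
#SBATCH --time=1:00:00           # increase as needed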
GPU
1. Write the submission script
#!/bin/bash
#SBATCH --account=def-someuser # replace with your PI account
#SBATCH --cpus-per-task=1
#SBATCH --gpus=1
#SBATCH --mem-per-cpu=3G # increase as needed
#SBATCH --time=1:00:00 # increase as needed
module load gcc cuda/12 mrbayes/3.2.7
cd $SCRATCH
# Copy one of the provided examples to the working directory
cp -v $EBROOTMRBAYES/share/examples/mrbayes/primates.nex .
# Run using 1 core and 1 GPU
srun mb primates.nex
2. Submit the GPU job
[name@server ~]$ sbatch submit-mrbayes-gpu.sh
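To check that the running job is actually using the GPU, you can attach a short interactive step to it and inspect the device (replace <jobid> with the job ID reported by sbatch); this assumes your Slurm version allows overlapping job steps:
[name@server ~]$ srun --jobid=<jobid> --overlap --pty nvidia-smi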
Checkpointing
For very long runs of MrBayes, it is best to break the work into several smaller jobs rather than one very long job, since long jobs have a higher probability of being interrupted by maintenance windows or unforeseen problems. Fortunately, MrBayes has a built-in checkpointing mechanism, so progress saved in one job can be continued in a subsequent job.
1. Create the first script (job1.nex).
execute primates.nex;
mcmc ngen=10000000 nruns=2 temp=0.02 mcmcdiag=yes samplefreq=1000
stoprule=yes stopval=0.005 relburnin=yes burninfrac=0.1 printfreq=1000
checkfreq=1000;
2. Create the second script (job2.nex); with append=yes, ngen is the total number of generations, so this job continues the first run up to 20 000 000 generations.
execute primates.nex;
mcmc ngen=20000000 nruns=2 temp=0.02 mcmcdiag=yes samplefreq=1000
stoprule=yes stopval=0.005 relburnin=yes burninfrac=0.1 printfreq=1000
append=yes checkfreq=1000;
3. Create the submission script to run the smaller jobs
#!/bin/bash
#SBATCH --account=def-someuser # replace with your PI account
#SBATCH --ntasks=8 # increase as needed
#SBATCH --mem-per-cpu=3G # increase as needed
#SBATCH --time=1:00:00 # increase as needed
#SBATCH --array=1-2%1 # match the number of sub-jobs, only 1 at a time
module load gcc mrbayes/3.2.7
cd $SCRATCH
# Copy one of the provided examples to the working directory
cp -v $EBROOTMRBAYES/share/examples/mrbayes/primates.nex .
# Run using $SLURM_NTASKS cores
srun mb job${SLURM_ARRAY_TASK_ID}.nex
4. Submit the jobs
[name@server ~]$ sbatch submit-mrbayes-cp.sh
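MrBayes writes its restart information to a checkpoint file next to the data file (primates.nex.ckp with the default names) every checkfreq generations, and append=yes in job2.nex resumes from it. Between array tasks you can confirm that the checkpoint was written, assuming the default output names are kept:
[name@server ~]$ ls -l $SCRATCH/primates.nex.ckp*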