Checkpoints
This is not a complete article: it is a draft, a work in progress intended for eventual publication as an article, and it may or may not be ready for inclusion in the main wiki. It should not necessarily be considered factual or authoritative.
The execution time of a program sometimes exceeds the maximum duration of a job permitted by the schedulers on Compute Canada clusters. Long-running jobs are also exposed to the risks of system instability caused by power outages, hardware defects and so forth. A program with a short execution time can easily be restarted with little concern, but for long-running software it is preferable to use checkpoints to minimize the risk of losing several days' worth of computation. These checkpoints take the form of binary files on disk from which the program can be restarted at the point in the computation where the checkpoint file was created.
Creating and Loading a Checkpoint
The creation and loading of a checkpoint may already be taken care of by the application you're using. In this case you simply need to read the relevant documentation about how to use this functionality.
However, if you have access to the source code of the software, or if you are its author, you can implement checkpoint/restart functionality in the program yourself. The essential steps are:
- A checkpoint file is created periodically; a frequency of every 2 to 24 hours is suggested.
- While writing the checkpoint file, remember that the program could be interrupted at any moment, for a variety of reasons. As a consequence:
  - It is preferable not to delete the preceding checkpoint when creating the new one.
  - The creation of the checkpoint file can be made atomic by performing an operation that confirms the end of the checkpoint process. For example, the checkpoint file can initially be named based on the date and time and, as the final step, a symbolic link latest-version is pointed at this new checkpoint file (see the sketch after this list). Another, more advanced, method is to create a second file containing a hash of the checkpoint file's content, by means of which the restart function can verify the integrity of the checkpoint when it is loaded.
  - Once the atomic write has been completed, one can choose whether or not to delete any older checkpoints.
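As an illustration, here is a minimal shell sketch of such an atomic checkpoint update. The program name my_program and its --write-checkpoint option are hypothetical placeholders for your own code's checkpoint routine.

#!/bin/bash
# Write the new checkpoint under a unique, timestamped name; the
# preceding checkpoint files are left in place.
ckpt="checkpoint-$(date +%Y%m%d-%H%M%S).dat"
my_program --write-checkpoint "$ckpt"

# Record a hash so that the restart code can verify the integrity of
# the checkpoint when it is loaded.
sha256sum "$ckpt" > "${ckpt}.sha256"

# Atomically repoint the latest-version symbolic link: create the link
# under a temporary name, then rename it over the old one. The rename
# is atomic, so latest-version always refers to a complete checkpoint.
ln -s "$ckpt" latest-version.tmp
mv -T latest-version.tmp latest-version

# Only now is it safe to delete older checkpoints, if desired.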
So as not to re-invent the wheel, particularly in situations where modifying the source code isn't an option, an alternative solution is to use the software DMTCP (Distributed MultiThreaded Checkpointing), which can checkpoint and restart a running program without any change to its code.
The following job script illustrates its use: if DMTCP has already written a checkpoint, the dmtcp_restart_script.sh script it generates is used to resume the program; otherwise, the program is launched under dmtcp_launch, which then writes a checkpoint at regular intervals.
#!/bin/bash
# ---------------------------------------------------------------------
# SLURM script for a DMTCP checkpoint/restart job on a Compute Canada cluster.
# ---------------------------------------------------------------------
#SBATCH --job-name=job_chain
#SBATCH --account=def-someuser
#SBATCH --cpus-per-task=1
#SBATCH --time=0-10:00
#SBATCH --mem=100M
# ---------------------------------------------------------------------
echo "Current working directory: $(pwd)"
echo "Starting run at: $(date)"
# ---------------------------------------------------------------------
# Run your simulation step here...
if test -e "dmtcp_restart_script.sh"; then
    # There is a checkpoint file: restart from it, using a DMTCP
    # coordinator on the current node.
    ./dmtcp_restart_script.sh -h "$(hostname)"
else
    # There is no checkpoint file: start a new simulation. --rm enables
    # resource-manager (batch scheduler) support, -i 3600 writes a
    # checkpoint every hour and -q reduces DMTCP's own output.
    dmtcp_launch --rm -i 3600 -q <program> <arg1> ... <argn>
fi
# ---------------------------------------------------------------------
echo "Job finished with exit code $? at: $(date)"
# ---------------------------------------------------------------------
Resubmitting a Job for Long-Running Computations
If you plan on breaking up a lengthy computation into several Slurm jobs, there are two recommended methods:
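One possibility, sketched below purely as an illustration, is a Slurm job array in which only one task runs at a time: each task checks for an existing checkpoint and resumes from it. The my_program command and its --restart-from option are hypothetical placeholders for your program's own restart mechanism.

#!/bin/bash
#SBATCH --job-name=job_chain_array
#SBATCH --account=def-someuser
#SBATCH --cpus-per-task=1
#SBATCH --time=0-10:00
#SBATCH --mem=100M
# Run 10 tasks in sequence; the %1 suffix allows only one at a time.
#SBATCH --array=1-10%1

if test -e "latest-version"; then
    # A previous task left a checkpoint behind: resume from it.
    my_program --restart-from latest-version
else
    # First task of the array: start the computation from scratch.
    my_program
fi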