Nextflow
Revision as of 22:00, 27 March 2023
Nextflow is software for running reproducible scientific workflows. The term Nextflow is used to describe both the domain-specific language (DSL) the pipelines are written in, and the software used to interpret those workflows.
Usage
On our systems, Nextflow is provided as a module you can load with module load nextflow.
While you can build your own workflow, you can also rely on the published nf-core pipelines. We will describe here a simple configuration that will let you run nf-core pipelines on our systems and help you to configure Nextflow properly for your own pipelines.
Our example uses the nf-core/smrnaseq pipeline.
Installation
The following procedure is to be run on a login node.
Start by installing a pip package to help with the setup; please note that the nf-core tools can be slow to install.
module purge # make sure that previously loaded packages do not pollute the installation
module load python/3.8
python -m venv nf-core-env
source nf-core-env/bin/activate
python -m pip install nf_core
Set the name of the pipeline to be tested, and load Nextflow and Apptainer, which is the new name for the Singularity container utility. Nextflow integrates well with Apptainer/Singularity.
export NFCORE_PL=smrnaseq
export PL_VERSION=1.1.0
module load nextflow/22.04.3
module load apptainer/1.1.3
An important step is to download all the Singularity images that will be used to run the pipeline at the same time as we download the workflow itself. If this isn't done, Nextflow will try to download the images from the compute nodes, just before each step is executed. This would not work on most of our clusters since there is no Internet connection on the compute nodes.
Create a folder where Singularity images will be stored and set the environment variable NXF_SINGULARITY_CACHEDIR to it. Workflow images tend to be big, so do not store them in your $HOME space because it has a small quota; instead, store them in the /project space.
mkdir /project/<def-group>/NFX_SINGULARITY_CACHEDIR
export NXF_SINGULARITY_CACHEDIR=/project/<def-group>/NFX_SINGULARITY_CACHEDIR
You can add the export line to your .bashrc as a convenience, and you should share this folder with other members of your group who are planning to use Nextflow with Singularity.
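As an alternative to the environment variable, the cache location can also be set in a Nextflow config file. This is a sketch only; the path below is a placeholder for your own group's /project folder:

```groovy
// Sketch of a config-file alternative to the NXF_SINGULARITY_CACHEDIR
// environment variable. Replace def-group with your own group name.
singularity {
    enabled    = true
    autoMounts = true
    cacheDir   = '/project/def-group/NFX_SINGULARITY_CACHEDIR'
}
```

The environment variable works everywhere; setting singularity.cacheDir is convenient when you prefer to keep all run settings in a single config file.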
The following command downloads the smrnaseq pipeline to your /scratch directory and puts all the Apptainer/Singularity containers in the cache directory.
cd ~/scratch
nf-core download --singularity-cache-only --container singularity --compress none -r ${PL_VERSION} -p 6 ${NFCORE_PL}
This command downloads 18 containers for a total of about 4 GB and creates an nf-core-${NFCORE_PL}-${PL_VERSION} folder with the workflow and config subfolders. The config subfolder includes the institutional configurations, while the workflow itself is in the workflow subfolder.
This is what a typical nf-core pipeline looks like:
$ ls nf-core-${NFCORE_PL}-${PL_VERSION}/workflow
assets bin CHANGELOG.md CODE_OF_CONDUCT.md conf Dockerfile docs environment.yml lib LICENSE main.nf nextflow.config nextflow_schema.json README.md
Once the pipeline is ready to be launched, Nextflow will look at the nextflow.config file and also at the ~/.nextflow/config file (if it exists) to control how to run the workflow. The nf-core pipelines all have a default config, a test config, and container configs (singularity, podman, etc.). You also need a custom config for the cluster (Narval, Béluga, Cedar or Graham) you are running on. Nextflow pipelines can also run on Niagara if they were designed with that specific cluster in mind, but we generally discourage running an nf-core or any other generic Nextflow pipeline there.
A config for our clusters
You can use the following config by changing the default values for nf-core processes and entering the correct information for the Béluga and Narval clusters. The cluster-specific settings are saved in profile blocks that are loaded at runtime.
process {
  executor = 'slurm'
  pollInterval = '60 sec'
  clusterOptions = '--account=<my-account>'
  submitRateLimit = '60/1min'
  queueSize = 100
  errorStrategy = { task.exitStatus in [125,139] ? 'retry' : 'finish' }
  maxRetries = 1
  memory = { check_max( 4.GB * task.attempt, 'memory' ) }
  cpus = 1
  time = '3h'
}
profiles {
  beluga {
    params {
      max_memory = '186G'
      max_cpus = 40
      max_time = '168h'
    }
  }
  narval {
    params {
      max_memory = '249G'
      max_cpus = 64
      max_time = '168h'
    }
  }
}
Replace <my-account> with your own account, which looks like def-pname.
This configuration ensures that there are no more than 100 jobs in the Slurm queue and that only 60 jobs are submitted per minute. It indicates that Béluga nodes have 40 cores and 186G of RAM, with a maximum walltime of one week (168 hours).
The config is linked to the system you are running on, but it is also related to the pipeline itself. For example, here cpus = 1 is the default value, but steps in the pipeline can request more than that. This can get quite complicated, so labels in the workflow/config/base.config file are used to match a step with a specific configuration; labels are not covered on this page.
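To illustrate the label mechanism just mentioned, a per-label override can be sketched as follows. The label name process_high is an nf-core convention, and the resource values here are examples only:

```groovy
// Illustrative sketch: override resources for every step carrying a label.
// 'process_high' is a label used by nf-core pipelines; values are examples.
process {
    withLabel: process_high {
        cpus   = 8
        memory = { check_max( 32.GB * task.attempt, 'memory' ) }
        time   = '12h'
    }
}
```

Selectors like withLabel let one configuration line apply to many steps at once, which is how nf-core pipelines keep their base.config manageable.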
The config implements a default restart behaviour that automatically adds memory to failed steps that return exit code 125 (out of memory) or 139 (killed because the process used more memory than allowed by cgroups).
Running the pipeline
Use the two profiles provided by nf-core (test and singularity) and the profile we have just created for Béluga. Note that Nextflow is mainly written in Java, which tends to use a lot of virtual memory. On the Narval cluster that won't be a problem, but on the Béluga login node you will need to raise the virtual memory limit to run most workflows. To set the virtual memory limit to 40 GB, use the ulimit -v 40000000 command. We also used a terminal multiplexer, so the Nextflow pipeline will keep running if you are disconnected, and you will be able to reconnect to the controller process. Note that running Nextflow on login nodes is easy on Béluga and Narval, but harder on Graham and Cedar since the login node virtual memory limit cannot be changed on those clusters; there, we recommend launching Nextflow from a compute node, where virtual memory is not limited.
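The ulimit step above can be sketched as follows; the value passed to ulimit -v is in kilobytes, so 40000000 corresponds to roughly 40 GB:

```shell
# Raise this shell's virtual memory limit to ~40 GB (ulimit -v takes KB).
ulimit -v 40000000
# Confirm the new limit before launching Nextflow; prints 40000000.
ulimit -v
```

The limit only applies to the current shell session, so run this in the same terminal (or tmux session) from which you will launch Nextflow.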
nextflow run nf-core-${NFCORE_PL}-${PL_VERSION}/workflow -profile test,singularity,beluga --outdir ${NFCORE_PL}_OUTPUT
You have now started the Nextflow sub-scheduler on the login node. This process sends jobs to Slurm when they are ready to be processed.
You can see the progression of the pipeline in your terminal. You can also open a new session on the cluster, or detach from the tmux session, and have a look at the jobs in the Slurm queue with squeue -u $USER.