BLAST

From Alliance Doc

Latest revision as of 22:02, 24 January 2024



BLAST ("Basic Local Alignment Search Tool") finds regions of similarity between biological sequences. The program compares nucleotide or protein sequences to sequence databases and calculates the statistical significance of the matches.

User manual

You can find more information on its arguments in the user manual or with

[name@server ~]$ blastn -help
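For reference, a typical invocation looks like the sketch below; the database and file names are placeholders, and `-outfmt 6` requests BLAST's standard tab-separated output. It is wrapped in a function here because `blastn` is only available after the module is loaded:

```shell
# Sketch of a typical tabular search; database and file names are placeholders.
# -outfmt 6 : tab-separated output (query id, subject id, % identity, ...)
run_blastn_example() {
    blastn -db ref.fa -query seq.fa -outfmt 6 -out seq.tsv
}
```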

Databases

Some frequently used sequence databases are installed on the clusters in /cvmfs/bio.data.computecanada.ca/content/databases/Core/blast_dbs/2022_03_23/. Examine that directory and its subdirectories, e.g. with

[name@server ~]$ ls /cvmfs/bio.data.computecanada.ca/content/databases/Core/blast_dbs/2022_03_23/
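Rather than typing full paths, you can point BLAST at this directory with the `BLASTDB` environment variable (a standard NCBI BLAST+ variable); names passed to `-db` are then resolved inside it. The database name `nt` below is an assumption — check the `ls` output for what is actually installed:

```shell
# Let blastn resolve database names inside the pre-installed directory.
export BLASTDB=/cvmfs/bio.data.computecanada.ca/content/databases/Core/blast_dbs/2022_03_23
# e.g.: blastn -db nt -query seq.fa -outfmt 6    # "nt" is an assumed name
```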


Accelerating the search

For the examples below, the file ref.fa will be used as the reference database in FASTA format, and seq.fa as the queries.

makeblastdb

Before running a search, we must build the database. This can be a preprocessing job, where the other jobs are dependent on the completion of the makeblastdb job. Here is an example of a submission script:

File : makeblastdb.sh

#!/bin/bash

#SBATCH --account=def-<user>  # The account to use
#SBATCH --time=00:02:00       # The duration in HH:MM:SS format
#SBATCH --cpus-per-task=1     # The number of cores
#SBATCH --mem=512M            # Total memory for this task

module load gcc/7.3.0 blast+/2.9.0

# Create the nucleotide database based on `ref.fa`.
makeblastdb -in ref.fa -title reference -dbtype nucl -out ref.fa


Task array

A BLAST search can benefit greatly from data parallelism: split the query file into multiple smaller query files and run each of them against the database.

Preprocessing

To accelerate the search, the seq.fa file must be split into smaller chunks. Each chunk should be at least 1MB; many smaller files can hurt the performance of the parallel filesystem.

Using the faSplit utility:

[name@server ~]$ module load kentutils/20180716
[name@server ~]$ faSplit sequence seq.fa 10 seq

will create 10 files named seqN.fa, where N is in the range [0..9] (one file per query when there are 10 sequences).
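If kentutils is not available, a plain-shell round-robin splitter can do the same job. This is a sketch (the `split_fasta` helper and the `chunk*.fa` names are ours, not part of BLAST); unlike a naive `split -l`, it keeps multi-line sequences intact by starting a new record at each `>` header:

```shell
# Round-robin FASTA splitter (sketch): distributes whole records (header line
# plus its sequence lines) across n output files chunk0.fa .. chunk<n-1>.fa.
split_fasta() {  # usage: split_fasta input.fa n
    awk -v n="$2" '/^>/ { f = "chunk" (i++ % n) ".fa" } { print > f }' "$1"
}
```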

Job submission

Once our queries are split, we can create a task for each seqN.fa file using a job array. The task ID from the array will map to the file name containing the query to run.

This solution allows the scheduler to fit the smaller jobs from the array where there are resources available in the cluster.

File : blastn_array.sh

#!/bin/bash

#SBATCH --account=def-<user>  # The account to use
#SBATCH --time=00:02:00       # The duration in HH:MM:SS format of each task in the array
#SBATCH --cpus-per-task=1     # The number of cores for each task in the array
#SBATCH --mem-per-cpu=512M    # The memory per core for each task in the array
#SBATCH --array=0-9           # The number of tasks: 10

module load gcc/7.3.0 blast+/2.9.0

# Using the index of the current task, given by `$SLURM_ARRAY_TASK_ID`, run the corresponding query and write the result
blastn -db ref.fa -query seq${SLURM_ARRAY_TASK_ID}.fa > seq.ref.${SLURM_ARRAY_TASK_ID}


With the above submission script, we can submit our search and it will run after the database has been created.

[name@server ~]$ sbatch --dependency=afterok:$(sbatch --parsable makeblastdb.sh) blastn_array.sh

Once all the tasks from the array are done, the results can be concatenated using

[name@server ~]$ cat seq.ref.{0..9} > seq.ref

where the 10 result files are concatenated into the file seq.ref. This could be done from the login node or as a dependent job upon completion of all the tasks from the array.
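A slightly more defensive version of the merge is sketched below as a helper of ours (`merge_results` is not a BLAST command): it refuses to concatenate if any task's output is missing or empty, which catches a silently failed array task:

```shell
# Merge the 10 array results, but fail loudly if any part is missing or empty.
merge_results() {
    local i
    for i in {0..9}; do
        [ -s "seq.ref.$i" ] || { echo "missing or empty seq.ref.$i" >&2; return 1; }
    done
    cat seq.ref.{0..9} > seq.ref
}
```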

GNU Parallel

GNU Parallel is a great tool for packing many small tasks into a single job and running them in parallel. This approach alleviates the issue of too many small files in a parallel filesystem: it reads fixed-size chunks of seq.fa and processes them on one node using multiple cores.

As an example, if your seq.fa file is 3MB, you could read blocks of 1MB; GNU Parallel would then create 3 jobs and use 3 cores. If we had requested 10 cores for the task, 7 of them would have been wasted, so the block size matters. We can also let GNU Parallel choose the block size, as done below.
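If you prefer to set `--block` explicitly rather than use `-1`, a suitable size can be derived from the file size and the core count. Below is a small helper of ours (not part of GNU Parallel); the integer division rounds up by one byte so the file splits into at most `ncores` chunks:

```shell
# Print a block size (in bytes) so that `file` splits into about `ncores` chunks.
block_size() {  # usage: block_size file ncores
    local bytes
    bytes=$(wc -c < "$1")
    echo $(( bytes / $2 + 1 ))
}
```

It could then be used as, e.g., `parallel --block $(block_size seq.fa $SLURM_CPUS_PER_TASK) ...`.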

See also Handling large files in the GNU Parallel page.

Running with multiple cores on one node

File : blastn_gnu.sh

#!/bin/bash

#SBATCH --account=def-<user>  # The account to use
#SBATCH --time=00:02:00       # The duration in HH:MM:SS format
#SBATCH --cpus-per-task=4     # The number of cores
#SBATCH --mem-per-cpu=512M    # The memory per core

module load gcc/7.3.0 blast+/2.9.0

cmd='blastn -db ref.fa -query - '

# Using the `::::` notation, give the sequence file to GNU Parallel, where
#   --jobs       number of cores to use, equal to $SLURM_CPUS_PER_TASK (the number of cores requested)
#   --keep-order keep the output in the same order as the input
#   --block -1   let GNU Parallel choose and adapt the block size
#   --recstart   record start, here the sequence identifier `>`
#   --pipepart   pipe chunks of the (physical) input file to $cmd;
#                faster than `--pipe` (limited to about 500 MB/s), reaching
#                about 5 GB/s according to GNU Parallel's author, Ole Tange.
# The results are redirected to `seq.ref`.
parallel --jobs $SLURM_CPUS_PER_TASK --keep-order --block -1 --recstart '>' --pipepart $cmd :::: seq.fa > seq.ref


Note: the input file must not be compressed; `--pipepart` requires a regular (seekable) file.

Job submission

With the above submission script, we can submit our search and it will run after the database has been created.

[name@server ~]$ sbatch --dependency=afterok:$(sbatch --parsable makeblastdb.sh) blastn_gnu.sh


Additional tips

  • If it fits into the node's local storage, copy your FASTA database to the local scratch space ($SLURM_TMPDIR).
  • Reduce the number of hits returned (-max_target_seqs, -max_hsps can help), if it is reasonable for your research.
  • Limit your hit list to nearly identical hits using -evalue filters, if it is reasonable for your research.
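The first tip can be sketched as a job-script fragment. It is wrapped in a function here because it only makes sense inside a Slurm job; the `ref.fa*` files are the FASTA file plus the index files produced by `makeblastdb` above, and the `-evalue`/`-max_target_seqs` values are illustrative only:

```shell
# Stage the database on node-local storage, then query it there (sketch only;
# requires a Slurm allocation, so the helper is merely defined here).
stage_and_search() {
    cp ref.fa* "$SLURM_TMPDIR/"                    # FASTA + makeblastdb index files
    blastn -db "$SLURM_TMPDIR/ref.fa" -query seq.fa \
           -evalue 1e-30 -max_target_seqs 5 > seq.ref
}
```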