Best practices for job submission

==Memory consumption==


* Your <tt>Memory Efficiency</tt> in the output from the <tt>seff</tt> command '''should be at least 80% to 85%''' in most cases.
** Much like with the duration of your job, the goal when requesting memory is to ensure that the amount is sufficient, with a certain margin of error.
* If you plan on using an '''entire node''' for your job, it is natural to also '''use all of its available memory''' which you can express using the line <tt>#SBATCH --mem=0</tt> in your job submission script.
** Note, however, that most of our clusters offer nodes with varying amounts of memory, so using this approach means your job will likely be assigned a node with less memory than the largest ones.
* If your testing has shown that you need a '''high-memory node''', use a line like <tt>#SBATCH --mem=1500G</tt> to request a node with 1500 GB (or 1.46 TB) of memory, as in the example script after this list.
** There are relatively few of these high-memory nodes, so your job will wait much longer to run; make sure your job really needs all this extra memory.
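
A minimal sketch of a job script requesting a high-memory node, assuming a hypothetical account name <tt>def-someuser</tt> and a placeholder program name; adjust both for your own job:

<pre>
#!/bin/bash
#SBATCH --account=def-someuser   # hypothetical account name
#SBATCH --time=3:00:00           # requested run time
#SBATCH --mem=1500G              # request a node with 1500 GB of memory
# To use all the memory of whichever node is assigned instead, replace the
# --mem line above with the (currently commented-out) directive below:
##SBATCH --mem=0

./my_memory_hungry_program       # placeholder for your application
</pre>

Once the job has finished, run <tt>seff</tt> with the job ID reported by <tt>sbatch</tt> (for example, <tt>seff 1234567</tt>) and check that the <tt>Memory Efficiency</tt> line in its output is at least 80% to 85%.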


==Parallelism==