* Your <tt>Memory Efficiency</tt> in the output from the <tt>seff</tt> command '''should be at least 80% to 85%''' in most cases.
** Much like with the duration of your job, the goal when requesting memory is to ensure that the amount is sufficient, with a certain margin of error.
* If you plan on using a '''whole node''' for your job, it is natural to also '''use all of its available memory''', which you can express with the line <tt>#SBATCH --mem=0</tt> in your job submission script.
** Note however that most of our clusters offer nodes with variable amounts of memory, so with this approach your job will likely be assigned one of the nodes with less memory.
* If your testing has shown that you need a '''large memory node''', then you will want to use a line like <tt>#SBATCH --mem=1500G</tt>, for example, to request a node with 1500 GB (or 1.46 TB) of memory.
** There are relatively few of these large memory nodes, so your job will wait much longer to run; make sure your job really needs all this extra memory.
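The options above can be combined in a job submission script. The following is a minimal sketch, not a prescribed template; the account name, time limit, and program name are placeholders you would replace with your own values.

```bash
#!/bin/bash
# Illustrative Slurm job script; account and workload are placeholders.
#SBATCH --account=def-someuser
#SBATCH --time=01:00:00
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --mem=0            # whole-node job: request all memory available on the assigned node
# For a large-memory job, request an explicit amount instead of --mem=0:
##SBATCH --mem=1500G       # commented out; uncomment and remove --mem=0 to use

./my_program               # placeholder for your actual workload
```

After the job completes, run <tt>seff &lt;jobid&gt;</tt> to check the <tt>Memory Efficiency</tt> figure discussed above, then adjust the memory request for future runs accordingly.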
==Parallelism== <!--T:13-->