Frequently Asked Questions

You may see this message when the load on the [[Running jobs|Slurm]] manager or scheduler process is too high. We are working both to improve Slurm's tolerance of that and to identify and eliminate the sources of load spikes, but that is a long-term project. The best advice we have currently is to wait a minute or so. Then run <code>squeue -u $USER</code> and see if the job you were trying to submit appears: in some cases the error message is delivered even though the job was accepted by Slurm. If it doesn't appear, simply submit it again.
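If this happens repeatedly, the check can be scripted. Below is a minimal sketch (not an official recommendation), assuming a job script called <code>myjob.sh</code> submitted under the placeholder job name <code>myjob</code>: it waits a minute after a failed submission and resubmits only if the job did not in fact appear in the queue.

<syntaxhighlight lang="bash">
#!/bin/bash
# Sketch only: "myjob.sh" and the job name "myjob" are placeholders.
if ! sbatch --job-name=myjob myjob.sh; then
    # Submission reported an error; give the Slurm controller a minute.
    sleep 60
    # Check whether the job was accepted despite the error message.
    if [ -z "$(squeue -u "$USER" --name=myjob --noheader)" ]; then
        # The job is not in the queue, so submit it again.
        sbatch --job-name=myjob myjob.sh
    fi
fi
</syntaxhighlight>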


== Why are my jobs taking so long to start? == <!--T:20-->
You can see the reason why your jobs are in the <tt>PD</tt> (pending) state in the <tt>(REASON)</tt> column, which will typically show either <tt>Resources</tt> or <tt>Priority</tt>. In the former case, the cluster is simply very busy and you will have to be patient, or perhaps consider submitting a job that asks for fewer resources (e.g. nodes, memory, time). In the latter case, your job is waiting to start because of its lower priority. This happens when you and other members of your research group have used more than your fair share of the cluster's resources in the recent past, something you can track using the command <tt>sshare</tt> as explained in [[Job scheduling policies]]. The <tt>LevelFS</tt> column indicates your over- or under-consumption of cluster resources: when <tt>LevelFS</tt> is greater than one you are consuming less than your fair share, while if it is less than one you are consuming more. The closer <tt>LevelFS</tt> gets to zero, the more you are over-consuming resources and the more your jobs' priority is reduced. There is a memory effect to this calculation, so the scheduler gradually forgets any over- or under-consumption from months past. Finally, note that this priority is specific to each cluster: your <tt>LevelFS</tt> on one cluster is independent of its value on another.
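For example (a sketch only; the account name <code>def-someprof</code> is a placeholder for your own allocation), you can list the pending reason for your jobs and then inspect your group's <tt>LevelFS</tt>:

<syntaxhighlight lang="bash">
# List your pending jobs with their reason ("Resources" or "Priority").
squeue -u "$USER" -t PD -o "%.12i %.20j %.8T %.20r"

# Show fair-share information for your account, including the LevelFS column;
# replace def-someprof with your own account name.
sshare -l -A def-someprof -a
</syntaxhighlight>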
</translate>