* <tt>Priority</tt>: Your job is waiting to start due to its lower priority. This is because you and other members of your research group have been over-consuming your fair share of the cluster resources in the recent past, something you can track using the command <tt>sshare</tt>, as explained in [[Job scheduling policies]] and illustrated in the example below. The <tt>LevelFS</tt> column gives you information about your over- or under-consumption of cluster resources: when <tt>LevelFS</tt> is greater than one, you are consuming fewer resources than your fair share, while if it is less than one you are consuming more. The more you over-consume resources, the closer the value gets to zero and the more your pending jobs decrease in priority. There is a memory effect to this calculation, so the scheduler gradually "forgets" about any over- or under-consumption of resources from months past. Finally, note that the value of <tt>LevelFS</tt> is specific to each cluster.
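As a minimal sketch, the following command shows the <tt>LevelFS</tt> values for an account and for your own usage within it; the account name <tt>def-someprof_cpu</tt> is only a placeholder and should be replaced with your own allocation. The <tt>-l</tt> option asks <tt>sshare</tt> for the long output, which includes the <tt>LevelFS</tt> column.

<pre>
# Placeholder account name: replace def-someprof_cpu with your own allocation.
# -l : long output, which includes the LevelFS column
# -A : restrict the report to the given account
# -u : also show the per-user row for the given user
sshare -l -A def-someprof_cpu -u $USER
</pre>

The output contains one row for the account as a whole and one for your user within that account, each with its own <tt>LevelFS</tt> value.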
== Why do my jobs show "Nodes required for job are DOWN, DRAINED or RESERVED for jobs in higher priority partitions" or "ReqNodeNotAvailable"? == <!--T:58-->
<!--T:59-->
It means just what it says: one or more of the nodes Slurm considered for the job are down, deliberately taken offline, or reserved for other jobs. On a large, busy cluster there will almost always be such nodes. The message means effectively the same thing as the reason "Resources" that appeared in Slurm version 17.11.
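If you want to check this yourself, <tt>squeue</tt> can print the reason a job is still pending, and <tt>sinfo</tt> can list the nodes that are currently down or drained along with the reason recorded for each. The commands below are only a sketch; the job ID <tt>12345678</tt> is a placeholder.

<pre>
# Show the job ID (%i), state (%T) and pending reason (%r) for one of your jobs
# (12345678 is a placeholder job ID)
squeue -j 12345678 -o "%i %T %r"

# List nodes that are down, drained or failing, with the reason recorded for each
sinfo -R
</pre>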
== How accurate is START_TIME in <tt>squeue</tt> output? == <!--T:33-->