Known issues
* Slurm can report: "Exceeded step memory limit at some point" which may be surprising and can cause a problem with dependent jobs.
** File I/O uses memory and Slurm is correctly reporting this usage.  This usage (primarily delayed writes) was not as visible in previous systems.  The kernel usually resolves such memory shortages by flushing writes to the filesystem.
** Memory shortage can cause the kernel to kill processes ("OOM kill"), which results in the same message but affects the exit code differently.
** A job that reports a DerivedExitCode of 0:125 hit the memory limit but was not OOM-killed (see the sketch after this list for checking this with sacct).
** Note that a step with 0:125 will *not* enable a job which has an afterok dependency.  This is a Slurm bug that will be [https://bugs.schedmd.com/show_bug.cgi?id=3820 fixed in 17.11.3], so that Slurm can distinguish between the warning condition and actual kernel OOM-kill events.  Slurm will continue to limit memory usage through cgroups, so I/O memory will still be counted and reported when it exceeds the job's requested memory.
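A minimal sketch of how to check for this condition and, until the fix arrives, work around the dependency issue; the job ID 1234567 and the script name next_step.sh are placeholders:

<pre>
# Inspect how a finished job ended.  A DerivedExitCode of 0:125, with no
# OUT_OF_MEMORY state, means the step exceeded its memory limit but was
# not killed by the kernel.
sacct -j 1234567 --format=JobID,State,ExitCode,DerivedExitCode,MaxRSS

# One possible workaround: an afterany dependency releases the dependent
# job regardless of the parent's exit status.  Note that, unlike afterok,
# it also fires after genuine failures, so check results yourself.
sbatch --dependency=afterany:1234567 next_step.sh
</pre>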


* The CC Slurm configuration encourages whole-node jobs. When appropriate, users should request whole-node rather than per-core resources. Read about [[Job_scheduling_policies#Whole_nodes_versus_cores|whole node scheduling]]. A minimal example follows.
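A minimal sketch of a whole-node request in a job script; the 32-core count, the time limit, and the program name are placeholders to be matched to the node type on your cluster:

<pre>
#!/bin/bash
#SBATCH --nodes=1              # request one whole node
#SBATCH --ntasks-per-node=32   # placeholder: match the node's core count
#SBATCH --mem=0                # --mem=0 requests all memory on the node
#SBATCH --time=1:00:00         # placeholder time limit
srun ./my_program              # placeholder program
</pre>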