Known issues

Report an issue

Shared issues

  • The status page at http://status.computecanada.ca/ is not yet updated automatically, so it may lag behind the current status.
  • CC clusters are affected by the recent Meltdown/Spectre vulnerabilities and will be patched, which involves updating the operating system and CPU microcode. Read more at Meltdown and Spectre bugs.

Scheduler issues

  • Interactive jobs started via salloc did not support X11 forwarding (reported 13 December 2017 18:37 UTC; FIXED as of 2017-12-13).
  • "Exceeded step memory limit" reported at some point during a job
    • I/O uses memory, and Slurm is correctly reporting these cases.
    • I/O (and other things) can also trigger an actual OOM kill, which produces the same message but affects the exit status differently.
    • A derived exit status of 0:125 is the signature of the memory-use-but-not-OOM case.
    • In the absence of any other action, a step ending with 0:125 will *not* release a dependent job submitted with afterok. The latter is a Slurm bug that will be fixed, so that Slurm can clearly distinguish its handling of the "out of memory" warning from actual "kernel OOM killed" errors. Slurm will continue to correctly report memory usage from cgroups, so I/O memory will still be counted. See the sketch after this list for a way to inspect these exit codes.
    • This is a reported bug (https://bugs.schedmd.com/show_bug.cgi?id=3820), which is reportedly fixed in an upcoming version of Slurm.
  • The CC Slurm configuration encourages whole-node jobs. When appropriate, users should request whole-node rather than per-core resources. See Job Scheduling - Whole Node Scheduling.
  • By default, the job receives the environment settings of the submitting shell. This can lead to irreproducible results if that is not what you expect. To force the job to run with a fresh, login-like environment, submit with --export=none or add #SBATCH --export=NONE to your job script (also shown in the sketch after this list).
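
The sketch below is a hedged illustration of the last few items: a minimal job script that requests a whole node and discards the submitting shell's environment, a dependent submission using afterok, and an sacct query to inspect exit codes afterwards. The script names (count.sh, next.sh), account name (def-someuser), job ID (123456), and core count are placeholders rather than values from this page; check your cluster's node types before copying the resource request.

  #!/bin/bash
  # count.sh -- example job script; account, time, and core count are placeholders
  #SBATCH --account=def-someuser
  #SBATCH --time=00:10:00
  #SBATCH --nodes=1                  # whole-node request, as encouraged above
  #SBATCH --ntasks-per-node=32       # adjust the core count to the node type
  #SBATCH --export=NONE              # start from a fresh, login-like environment
  srun ./my_program                  # placeholder for the real workload

  # Chain a second job so that it only starts if the first one succeeds:
  #   $ sbatch count.sh
  #   Submitted batch job 123456
  #   $ sbatch --dependency=afterok:123456 next.sh
  # Because of the bug described above, a job whose derived exit status is
  # 0:125 will not release the afterok-dependent job.

  # After the job finishes, inspect its state and exit codes
  # (DerivedExitCode is the relevant sacct field):
  #   $ sacct -j 123456 --format=JobID,State,ExitCode,DerivedExitCode,MaxRSS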

Quota and filesystem problems

Quota errors on /project filesystem

Nearline

Missing symbolic links to project folders

Cedar only

Nothing to report at this time.

Graham only

  • We are currently updating the compute and login nodes due to the recent Meltdown/Spectre issue. Nodes will be rebooted in succession so that service will not be interrupted.
  • Compute nodes cannot access the Internet.
    • Solution: contact technical support to request an exception; describe what you need to access and why.
  • Crontab is not offered on Graham.

Other issues

  1. Modules don't work for shells other than bash(sh) and tcsh.
    • Workaround (this appears to work but has not been tested extensively; see the sketch after this list):
      • source $LMOD_PKG/init/zsh
      • source $LMOD_PKG/init/ksh
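
For zsh, a minimal sketch of the workaround above, assuming Lmod is installed and $LMOD_PKG is defined on the cluster; placing it in ~/.zshrc is an assumption about where you want module commands available, not a documented recommendation.

  # ~/.zshrc -- load Lmod's zsh initialization so that `module` works in zsh
  if [ -n "$LMOD_PKG" ] && [ -f "$LMOD_PKG/init/zsh" ]; then
      source "$LMOD_PKG/init/zsh"
  fi

  # In a new zsh session, module commands should then be available, e.g.:
  #   $ module avail
  #   $ module list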