Known issues

From Alliance Doc
Revision as of 16:41, 31 January 2018 by FuzzyBot (talk | contribs) (Updating to match new version of source page)

Report an issue

Shared issues

  • The status page at http://status.computecanada.ca/ is not yet updated automatically, so it may lag behind the actual current status.
  • CC clusters are vulnerable to the recently disclosed Meltdown and Spectre vulnerabilities and will be patched, which involves updating both the operating system and the CPU microcode. Read more at Meltdown and Spectre bugs.

Scheduler issues

  • Slurm can report "Exceeded step memory limit at some point", which may be surprising and can break dependent jobs.
    • File I/O uses memory, and Slurm correctly reports this usage. Such usage (primarily delayed writes) was less visible on previous systems. The kernel usually resolves these memory shortages by flushing pending writes to the filesystem.
    • A genuine memory shortage can cause the kernel to kill processes ("OOM kill"); this produces the same message but affects the exit code differently.
    • A job that reports DerivedExitStatus 0:125 hit the memory limit but was not OOM-killed.
    • Note that a step with 0:125 will *not* satisfy an afterok dependency on a subsequent job. This is a Slurm bug, fixed in 17.11.3, after which Slurm distinguishes this warning condition from an actual kernel OOM-kill event. Slurm will continue to limit memory usage via cgroups, so I/O memory will still be counted and reported when it exceeds the job's requested memory.
  • The CC Slurm configuration encourages whole-node jobs. When appropriate, users should request whole-node rather than per-core resources. Read about whole node scheduling.
  • By default, a job inherits the environment of the submitting shell, which can lead to irreproducible results if that is not what you expect. To force the job to run in a clean, login-like environment, submit with --export=none or add #SBATCH --export=NONE to your job script.
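One way to see whether a finished job hit its memory limit without being OOM-killed is to inspect its derived exit code with sacct. A minimal sketch; the job ID 12345 is a placeholder for your own job's ID:

```shell
# Show the exit codes and peak memory of a completed job (12345 is a
# placeholder job ID). A DerivedExitCode of 0:125 indicates the step
# exceeded its memory limit but was not OOM-killed by the kernel.
sacct -j 12345 --format=JobID,State,ExitCode,DerivedExitCode,MaxRSS
```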
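The --export=NONE directive mentioned above can be placed directly in a job script. A minimal sketch; the module name and program are placeholders for your own:

```shell
#!/bin/bash
#SBATCH --export=NONE       # do not inherit the submitting shell's environment
#SBATCH --time=00:10:00
#SBATCH --mem=1G

# With --export=NONE the job starts from a clean, login-like environment,
# so load any required modules explicitly here (gcc is a placeholder).
module load gcc

# Run your program (placeholder name).
./my_program
```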

Quota and filesystem problems

Quota errors on /project filesystem

Nearline

Missing symbolic links to project folders

Cedar only

Nothing to report at this time.

Graham only

  • We are currently updating the compute and login nodes to address the recent Meltdown/Spectre vulnerabilities. Nodes will be rebooted in succession so that service is not interrupted.
  • Compute nodes cannot access the Internet.
    • Solution: Contact technical support to request exceptions to be made; describe what you need to access and why.
  • Crontab is not offered on Graham.

Other issues

  1. Modules do not work in shells other than bash (sh) and tcsh.
    • Workaround (appears to work but has not been tested extensively):
      • For zsh: source $LMOD_PKG/init/zsh
      • For ksh: source $LMOD_PKG/init/ksh
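The workaround above can be made automatic by sourcing the Lmod initialization file from your shell's startup file. An untested sketch for zsh, assuming $LMOD_PKG is set on the system as it is on these clusters:

```shell
# Add to ~/.zshrc: initialize Lmod so the `module` command works in zsh.
# The guard keeps the startup file safe on systems without Lmod.
if [ -n "$LMOD_PKG" ] && [ -r "$LMOD_PKG/init/zsh" ]; then
    source "$LMOD_PKG/init/zsh"
fi
```

For ksh, the same pattern applies with $LMOD_PKG/init/ksh in ~/.kshrc.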