Known issues: Difference between revisions

Revision as of 18:31, 5 January 2018

Shared issues

  • The status page at http://status.computecanada.ca/ is not yet updated automatically, so it may lag behind the current status.
  • CC clusters are affected by the recent Meltdown/Spectre vulnerabilities. Patching requires an OS update, and the nodes will be rebooted progressively.

Scheduler issues

  • Interactive jobs started via salloc do not support X11 forwarding. (reported 13 December 2017 18:37 UTC; FIXED as of 2017-12-13)
  • The CC Slurm configuration encourages whole-node jobs. When appropriate, users should request whole-node rather than per-core resources. See Job Scheduling - Whole Node Scheduling.
  • By default, a job inherits the environment settings of the shell it was submitted from. This can lead to irreproducible results if it is not what you expect. To force the job to run with a clean, login-like environment, submit with --export=none or add #SBATCH --export=NONE to your job script.
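The whole-node and clean-environment recommendations above can be combined in a single job script. This is only a sketch: the account name and executable are placeholders, and the core count (32) matches Graham's base nodes — adjust for the cluster and node type you are using.

```shell
#!/bin/bash
# Hypothetical job script illustrating the scheduler notes above.
#SBATCH --account=def-someuser      # placeholder account; use your own
#SBATCH --nodes=1                   # request a whole node...
#SBATCH --ntasks-per-node=32        # ...and all of its cores (32 on Graham base nodes)
#SBATCH --mem=0                     # --mem=0 requests all memory on the node
#SBATCH --time=01:00:00
#SBATCH --export=NONE               # start from a clean, login-like environment

# --export=NONE discards the submitting shell's environment,
# so re-load any modules the job needs here.
module load gcc

srun ./my_program                   # placeholder executable
```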

Quota and filesystem problems

Quota errors on /project filesystem

Nearline

Missing symbolic links to project folders

Cedar only

Nothing to report at this time.

Graham only

  • Graham nodes will be updated to address the recent Meltdown/Spectre issue. Compute and login nodes will receive rolling updates, so there will be no outage; however, the rate at which compute jobs start will be reduced, and each of the four login nodes will be rebooted in turn.
  • Compute nodes cannot access the Internet.
    • Solution: Contact technical support to request exceptions to be made; describe what you need to access and why.
  • Crontab is not offered on Graham.

Other issues

  1. Modules don't work for shells other than bash (sh) and tcsh.
    • Workaround (appears to work, but not extensively tested):
      • source $LMOD_PKG/init/zsh
      • source $LMOD_PKG/init/ksh
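The workaround above can be made persistent by sourcing the matching Lmod init file from the shell's startup file. A sketch for zsh follows; the guard is an added precaution in case $LMOD_PKG is unset, and — as noted above — this workaround is not extensively tested.

```shell
# In ~/.zshrc: initialize Lmod for zsh so the `module` command works.
# $LMOD_PKG is normally set by the system profile; guard in case it is not.
if [ -n "$LMOD_PKG" ] && [ -f "$LMOD_PKG/init/zsh" ]; then
    source "$LMOD_PKG/init/zsh"
fi
# For ksh, source "$LMOD_PKG/init/ksh" from ~/.kshrc instead.
```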