Known issues

Shared issues

  • The status page at http://status.computecanada.ca/ is not yet updated automatically, so it may lag behind the actual current status.
  • CC clusters are affected by the recent Meltdown and Spectre vulnerabilities and will be patched, which involves updating both the operating system and the CPU microcode. Read more at Meltdown and Spectre bugs. A quick way to check what a node's kernel reports about its mitigation status is sketched below.
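
As a minimal sketch, assuming the node's kernel is recent enough (roughly 4.15 or later) to expose the /sys/devices/system/cpu/vulnerabilities interface, you can ask the kernel directly what it reports for these issues:

    # Print the kernel's own report of its Meltdown/Spectre mitigation status.
    # Each file contains "Vulnerable", "Mitigation: ...", or "Not affected".
    grep . /sys/devices/system/cpu/vulnerabilities/*

An empty or absent directory only means the kernel predates this reporting interface, not that the node is patched.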

Scheduler issues

No known issues at this time.

Quota and filesystem problems

Nearline

Missing symbolic links to project folders

Cedar only

Nothing to report at this time.

Graham only

  • Compute nodes cannot access the Internet.
    • Solution: contact technical support to request an exception; describe what you need to access, and why. A quick way to confirm the restriction from a job is sketched after this list.
  • Crontab is not offered on Graham.
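
As a minimal sketch (example.com is only an illustrative target, and the 5-second timeout is an arbitrary choice), the following can be run inside a job, for instance in a batch script or an interactive salloc session, to confirm whether a compute node can reach an outside host:

    # Try to reach an external host; report success or failure.
    if curl --silent --max-time 5 --output /dev/null http://example.com; then
        echo "External network reachable from this node"
    else
        echo "No external network access from this node"
    fi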

Other issues

  1. Modules do not work for shells other than bash (sh) and tcsh.
    • Workaround (this appears to work, but has not been tested extensively): source the Lmod initialization script for your shell; a persistent version is sketched after this list.
      • source $LMOD_PKG/init/zsh
      • source $LMOD_PKG/init/ksh
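
To make the workaround persistent, a minimal sketch for zsh (assuming $LMOD_PKG is exported by the system profile; the guard skips hosts where it is not) is to add the source line to your ~/.zshrc:

    # In ~/.zshrc: initialize Lmod so the `module` command works in zsh.
    # Guarded so the line is harmless on systems without Lmod.
    if [ -n "$LMOD_PKG" ] && [ -f "$LMOD_PKG/init/zsh" ]; then
        source "$LMOD_PKG/init/zsh"
    fi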