Known issues
Report an issue
- Please report issues to the technical support team.
Shared issues
- The status page at http://status.computecanada.ca/ is not yet updated automatically, so it may lag behind the current status.
- CC clusters are affected by the recent Meltdown and Spectre vulnerabilities and will be patched, which involves updating the operating system. Read more at Meltdown and Spectre bugs.
Scheduler issues
- Interactive jobs started via salloc do not support X11 forwarding. (reported 13 December 2017 18:37 UTC) (FIXED as of 2017-12-13)
- The CC Slurm configuration encourages whole-node jobs. When appropriate, users should request whole-node rather than per-core resources. See Job Scheduling - Whole Node Scheduling and the example job script after this list.
- By default, the job inherits the environment settings of the submitting shell. This can lead to irreproducible results if that environment is not what you expect. To force the job to run with a fresh, login-like environment, submit with --export=none or add #SBATCH --export=NONE to your job script, as in the example below.
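For illustration, here is a minimal job script sketch combining both points above: it requests a whole node and discards the submitting shell's environment. The core count, memory, and time values are placeholders to be adjusted to the node type on your cluster, and my_program stands in for your own executable.

    #!/bin/bash
    #SBATCH --export=NONE          # start with a fresh, login-like environment
    #SBATCH --nodes=1              # request a whole node rather than individual cores
    #SBATCH --ntasks-per-node=32   # placeholder: match the core count of the node type
    #SBATCH --mem=0                # placeholder: ask for all of the node's memory
    #SBATCH --time=01:00:00        # placeholder wall time

    srun ./my_program              # hypothetical executable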
Quota and filesystem problems
Quota errors on /project filesystem
- This topic has been moved to Frequently Asked Questions.
Nearline
- Nearline capabilities are not yet available; see https://docs.computecanada.ca/wiki/National_Data_Cyberinfrastructure for a brief description of the intended functionality.
- July 17 update: still not working. If you need your nearline RAC2017 quota, contact technical support.
Missing symbolic links to project folders
- Upon login to the new clusters, the symbolic links described in Project layout are not always created in the user's home directory. If this happens to you, please verify that your access to the cluster is enabled on https://ccdb.computecanada.ca/services/resources. A quick check is shown below.
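As a quick check, you can list the links that Project layout says should appear in your home directory; the names used below ($HOME/projects and $HOME/scratch) are assumptions based on that layout and may differ on your cluster:

    # List the expected links in the home directory (names assumed from Project layout)
    ls -ld $HOME/projects $HOME/scratch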
Cedar only
Nothing to report at this time.
Graham only
- We are currently updating the compute and login nodes due to the recent Meltdown/Spectre issue. Nodes will be rebooted in succession so that service will not be interrupted.
- Compute nodes cannot access the Internet.
  - Solution: contact technical support to request an exception; describe what you need to access and why.
- Crontab is not offered on Graham.
Other issues
- Modules don't work for shells other than bash (sh) and tcsh.
  - Workaround (this appears to work but has not been tested extensively; see below for making it persistent): source the Lmod initialization script for your shell.
    - For zsh: source $LMOD_PKG/init/zsh
    - For ksh: source $LMOD_PKG/init/ksh
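If the workaround above works for you under zsh, one way to make it persistent (an untested sketch, assuming $LMOD_PKG is already defined in your zsh sessions) is to append the source line to your startup file:

    # Add the Lmod initialization for zsh to the startup file; single quotes defer expansion of $LMOD_PKG
    echo 'source $LMOD_PKG/init/zsh' >> ~/.zshrc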