<languages />

Problems that affect many users and are being investigated, such as a cluster-wide malfunction or outage, are listed on [https://status.alliancecan.ca/ the Alliance status page]. This "Known issues" page describes problems that affect many users but that may take some time to repair, or that are not planned for repair at this time. Problems that affect only a specific software package are described on the wiki page for that software package.

==Report an issue==
Please report issues to the [[Technical Support|technical support]] team.

==Shared issues==
The [https://status.alliancecan.ca/ status page] is updated manually, so there may be a delay between when a problem begins and when it is posted there.

===Scheduler issues===
No known issues.

===Quota and filesystem issues===

====Missing project folder====
Upon creation of a new account for a Principal Investigator, the [[Project layout|<code>/project</code>]] storage space might not be allocated until the next business day.
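If you want to check whether the space has appeared yet, a quick look from a login node is sketched below. This is a minimal example that assumes the usual layout described in [[Project layout]]; <code>def-someuser</code> is a placeholder for your own group name.
<pre>
# Placeholder group name: replace def-someuser with your own group (shown by the "id" command).
ls -ld /project/def-someuser   # the group's project space, once it has been allocated
ls -l "$HOME/projects"         # per-user symbolic links pointing into /project
</pre>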

==Cluster-specific issues==

===Béluga===
No known issues.

===Cedar===
No known issues.

===Graham===
Graham's /scratch is often slow; it will be replaced soon.

===Narval===
No known issues.