<languages />
<translate>
<!--T:15-->
Problems that affect many users and are being investigated, such as a cluster-wide malfunction or outage, are on [https://status.alliancecan.ca/ the Alliance Status page]. This "Known issues" page describes problems that affect many users but that may take some time to repair, or are not planned for repair at this time. Problems that only affect a specific software package are described on the wiki page for that software package.

==Report an issue== <!--T:1-->
Please report issues to the [[Technical Support|technical support]] team.

==Shared issues== <!--T:2-->
The [https://status.alliancecan.ca/ status page] is updated manually, so there may be a delay between when a problem begins and when it is posted to the status page.

===Scheduler issues=== <!--T:6-->

<!--T:14-->
No known issues.

===Quota and filesystem issues=== <!--T:7-->

====Missing project folder==== <!--T:11-->
Upon creation of a new account for a Principal Investigator, the [[Project layout|<code>/project</code>]] storage space might not be allocated until the next business day.
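A quick way to check whether the project space has appeared is to list the project links under your home directory; this is only a sketch, assuming the usual layout in which each project is linked under <code>$HOME/projects</code>, and the link names will differ for each allocation:

{{Command|ls -l $HOME/projects/}}

If the expected link is still missing after the next business day, contact [[Technical Support|technical support]].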

==Cluster-specific issues== <!--T:17-->

===Béluga=== <!--T:16-->
No known issues.

===Cedar=== <!--T:3-->
No known issues.

===Graham=== <!--T:4-->
Graham's /scratch is often slow; it will be replaced soon.
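If you suspect slow /scratch performance is affecting your jobs, one rough, unofficial check is to time a small direct-I/O write and note the rate that <code>dd</code> reports; this sketch assumes the <code>$SCRATCH</code> environment variable points to your scratch space (verify with <code>echo $SCRATCH</code>), and the test file name is arbitrary:

{{Command|1=dd if=/dev/zero of=$SCRATCH/dd_speed_test bs=1M count=1024 oflag=direct && rm $SCRATCH/dd_speed_test}}

The reported rate varies with overall filesystem load, so treat it as a point-in-time indication rather than a benchmark.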

===Narval=== <!--T:18-->
No known issues.

</translate>