Known issues
Intro
- Please report issues to support@computecanada.ca
Shared issues

- The status page at http://status.computecanada.ca/ is not updated automatically yet, so it does not necessarily show the correct, current status.
Scheduler errors
- The CC Slurm configuration preferentially encourages whole-node jobs. Where possible, users should request whole nodes rather than per-core resources. See Job Scheduling - Whole Node Scheduling (Patrick Mann 20:15, 17 July 2017 (UTC))
  - CPU and GPU backfill partitions have been created on both clusters. If a job is submitted with a runtime of less than 24 hours, it is automatically entered into the cluster-wide backfill partition. This partition has a low priority, but allows increased utilization of the cluster by serial jobs (a job-script sketch follows this list). (Nathan Wielenga)
- SLURM epilog does not fully clean up processes from ended jobs, especially if the job did not exit normally. (Greg Newby, Fri Jul 14 19:32:48 UTC 2017)
- Operations will occasionally time out with a message like "Socket timed out on send/recv operation" or "Unable to contact slurm controller (connect failure)". As a temporary workaround, resubmit your jobs/commands; they should go through in a few seconds (see the retry sketch after this list). (Nathan Wielenga, 08:50, 18 July 2017 (MDT))
  - Should be resolved after a VHD migration to a new backend for slurmctl. (NW)
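The following is a minimal job-script sketch that requests a whole node and stays under the 24-hour backfill threshold. The 32-core node size, the account name, and the program name are placeholders, not values from this page; adjust them for your allocation and the hardware you run on.

#!/bin/bash
# Request the whole node rather than per-core resources.
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=32
# A time limit under 24 hours makes the job eligible for the backfill partition.
#SBATCH --time=23:00:00
#SBATCH --account=def-someuser
srun ./my_program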
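For the timeout errors above, a simple retry sketch, assuming a job script named job.sh (a placeholder): it resubmits a few times with a short pause between attempts, stopping at the first success.

[name@server ~]$ for i in 1 2 3 4 5; do sbatch job.sh && break; sleep 5; done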
Quota and filesystem problems
Quota errors on /project filesystem
Sometimes, users will see a quota error on their project folders. This happens because the group that owns the files is not the project group. You can change the group that owns the files with the command
[name@server ~]$ chgrp -R <group> <folder>
To see what <group> should be, run the following command:
[name@server ~]$ stat -c %G $HOME/projects/*/
Only the owner of the files can run the chgrp command. To ask us to correct the group ownership for many users, write to support@computecanada.ca.
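As a worked example, suppose stat reports the project group def-professor (a placeholder name) and the mis-owned files live in a folder called data; the two commands together would look like:

[name@server ~]$ stat -c %G $HOME/projects/*/
def-professor
[name@server ~]$ chgrp -R def-professor $HOME/projects/def-professor/data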
Nearline
- "Nearline" capabilities are not yet available (see https://docs.computecanada.ca/wiki/National_Data_Cyberinfrastructure for a brief description of the intended functionality)
- Update July 17: still not working. If you need your nearline RAC2017 quota then please ask CC support. (Patrick Mann 20:45, 17 July 2017 (UTC))
Cedar only
Graham only
- Custom file ACLs do not work on /home
  - Solution/workaround: use the /project or /scratch filesystems instead (see the setfacl sketch after this list)
- Compute nodes cannot access the Internet
  - Solution: request an exception at support@computecanada.ca, describing what you need to access and why
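A minimal sketch of the ACL workaround on /project, granting a colleague read access; the username and path are placeholders. The capital X grants execute permission only on directories, so traversal works without making files executable.

[name@server ~]$ setfacl -R -m u:colleague:rX /project/def-someuser/shared
[name@server ~]$ getfacl /project/def-someuser/shared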