<languages />
<translate>

= Report an issue = <!--T:1-->

* Please report issues to [mailto:support@computecanada.ca support@computecanada.ca].

= Shared issues = <!--T:2-->

* The status page at http://status.computecanada.ca/ is not yet updated automatically, so it may lag in showing the current status.
* Utilization accounting is not currently being forwarded to the [https://ccdb.computecanada.ca Compute Canada database].

== Scheduler issues == <!--T:6-->

* Some users are experiencing excessively long wait times for jobs to run. Staff are analyzing the causes and plan to introduce scheduler changes in the next few weeks that should alleviate the problem. (16:48, 27 September 2017 (UTC))
* The CC Slurm configuration encourages whole-node jobs. When appropriate, users should request whole-node rather than per-core resources. See [[Job_scheduling_policies#Whole_nodes_versus_cores;|Job Scheduling - Whole Node Scheduling]].
* The Slurm epilog does not fully clean up processes from ended jobs, especially if the job did not exit normally.
** This has been greatly improved by the addition of the epilog.clean script, but nodes are still occasionally marked down for epilog failure.
* By default, a job receives the environment settings of the submitting shell. This can lead to irreproducible results if that is not what you expect. To force the job to run with a fresh, login-like environment, submit with <tt>--export=none</tt> or add <tt>#SBATCH --export=NONE</tt> to your job script; see the sketch after this list.
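
As a rough sketch of the last two points above (not an official template; the node size, time limit, module, and program names are placeholders), a whole-node job script that starts from a clean environment might look like this:

<pre>
#!/bin/bash
#SBATCH --export=NONE           # start from a fresh, login-like environment
#SBATCH --nodes=1               # ask for a whole node rather than individual cores
#SBATCH --ntasks-per-node=32    # placeholder: match the core count of the node type you want
#SBATCH --time=01:00:00         # placeholder time limit

# Since --export=NONE discards the submitting shell's settings,
# load the modules your program needs inside the script.
module load gcc                 # placeholder module
srun ./my_application           # placeholder executable
</pre>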

== Quota and filesystem problems == <!--T:7-->

=== Quota errors on /project filesystem ===

Users will sometimes see a quota error on their project folders. This may happen when files are owned by a group other than the project group. You can change the group which owns files using the command

{{Command|chgrp -R <group> <folder>}}

If the project directories are configured as intended, new files and directories created in them will automatically take on the project group ownership. One way an unexpected group ownership can arise is by transferring files into a project directory with a program option that preserves group ownership. So, if you have a recurring problem with ownership, check the options used by your file transfer program; a sketch of one workaround follows this paragraph.
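
One way to avoid this with rsync, for example, is to disable group preservation so that files copied into the project directory pick up the project group from the destination; the host, user, group, and path names below are placeholders only:

{{Command|rsync -av --no-g mydata/ someuser@cedar.computecanada.ca:projects/def-someprof/mydata/}}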

<!--T:8-->
To see what the value of <group> should be, run the following command:

{{Command|stat -c %G $HOME/projects/*/}}
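
If you want to locate files that already have the wrong group before running <tt>chgrp</tt>, a command along these lines should work (<tt>def-someprof</tt> stands in for your project group):

{{Command|find $HOME/projects/def-someprof/ ! -group def-someprof -ls}}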

<!--T:9-->
Only the owner of the files can run the <tt>chgrp</tt> command. To ask us to correct the group owner for many users, write to [mailto:support@computecanada.ca CC support].

=== Nearline === <!--T:10-->

* Nearline capabilities are not yet available; see https://docs.computecanada.ca/wiki/National_Data_Cyberinfrastructure for a brief description of the intended functionality.
** July 17 update: still not working. If you need your nearline RAC2017 quota, please ask [mailto:support@computecanada.ca CC support].

=== Missing symbolic links to project folders === <!--T:11-->

* Upon login to the new clusters, symbolic links are not always created in the user's account, as described in [[Project layout]]. If this is the case, please verify that your access to the cluster is enabled at [https://ccdb.computecanada.ca/services/resources https://ccdb.computecanada.ca/services/resources].

= Cedar only = <!--T:3-->

Nothing to report at this time.

= Graham only = <!--T:4-->

* /home is on an NFS appliance that does not support ACLs, so setfacl/getfacl does not work there.
** Workaround: use the /project or /scratch filesystems instead; see the sketch after this list.
** This might be resolved by a future update or reconfiguration.
* diskusage_report (and its alias 'quota') does not report on Graham /home.
** Workaround: <tt>du -sh ~</tt> shows your usage; the quota limit is 50G.
* Compute nodes cannot access the Internet.
** Solution: request an exception from [mailto:support@computecanada.ca CC support], describing what you need to access and why.

<!--T:12-->
* Crontab is not offered on Graham.
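
As an illustration of the ACL workaround above, access that you would normally grant with setfacl on /home can be granted on a project directory instead; the user name and path are placeholders:

{{Command|setfacl -R -m u:alice:rX /project/def-someprof/shared_data}}

The capital <tt>X</tt> applies execute permission only to directories (and to files that are already executable), so the named user can traverse the tree and read the files.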

= Other issues = <!--T:5-->

# Modules don't work for shells other than bash (sh) and tcsh.
#* Workaround (this appears to work but has not been tested extensively): source the Lmod initialization script for your shell, as shown below.
#** <tt>source $LMOD_PKG/init/zsh</tt>
#** <tt>source $LMOD_PKG/init/ksh</tt>
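
For example, zsh users could add the corresponding line to their shell start-up file so that module commands are available in every new session (an untested sketch, per the caveat above; the module name is only an illustration):

<pre>
# ~/.zshrc
source $LMOD_PKG/init/zsh   # defines the "module" shell function for zsh

# module commands should then behave as they do in bash, e.g.:
#   module load gcc
</pre>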
</translate>