Known issues
Report an issue
- Please report issues to support@computecanada.ca.
- The status page at http://status.computecanada.ca/ is not updated automatically yet, so it may lag behind the current status.
- Utilization accounting is not currently being forwarded to the Compute Canada database.
Scheduler issues
- Some users are experiencing excessively long wait times for jobs to run. Staff are analyzing the causes and plan to introduce scheduler changes in the next few weeks that should alleviate the problem. (16:48, 27 September 2017 (UTC))
- The CC Slurm configuration encourages whole-node jobs. When appropriate, users should request whole-node rather than per-core resources; see Job Scheduling - Whole Node Scheduling and the example script after this list.
- The Slurm epilog does not fully clean up processes from ended jobs, especially if the job did not exit normally.
- This has been greatly improved by the addition of the epilog.clean script, but nodes are still occasionally marked down for epilog failure.
- By default, a job inherits the environment settings of the submitting shell, which can lead to irreproducible results if that is not what you expect. To force the job to run with a clean, login-like environment, submit with --export=NONE or add #SBATCH --export=NONE to your job script (see the example script after this list).
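As a minimal sketch of both points above (assuming a cluster with 32-core nodes; the core count, time limit, and program name are placeholders to adjust for your system), a whole-node job script with a clean environment might look like:

#!/bin/bash
#SBATCH --nodes=1              # request a whole node rather than individual cores
#SBATCH --ntasks-per-node=32   # assumption: 32 cores per node; check your cluster
#SBATCH --mem=0                # request all of the node's memory
#SBATCH --time=01:00:00        # placeholder time limit
#SBATCH --export=NONE          # start from a clean, login-like environment
srun ./my_program              # hypothetical executable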
Quota and filesystem problems
Quota errors on /project filesystem
Users will sometimes see a quota error on their project folders. This may happen when files are owned by a group other than the project group. You can change the group that owns the files using the command
[name@server ~]$ chgrp -R <group> <folder>
If the project directories are configured as intended, new files and directories created in them will automatically assume the project group ownership. One way unexpected group ownership can arise is by transferring files into the project directories with a transfer program option that preserves group ownership. So, if you have a recurring problem with ownership, check the options used by your file transfer program.
To see what the value of <group> should be, run the following command:
[name@server ~]$ stat -c %G $HOME/projects/*/
Only the owner of the files can run the chgrp command. To ask us to correct the group owner for many users, write to CC support.
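For example, the following sketch locates and fixes files with the wrong group (here <project> and def-profname are placeholders for your project directory and project group; substitute the values reported by the stat command above):

[name@server ~]$ find $HOME/projects/<project>/ ! -group def-profname -ls
[name@server ~]$ find $HOME/projects/<project>/ ! -group def-profname -exec chgrp def-profname {} +

The first command only lists the offending files; the second changes their group, and will succeed only for files you own.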
Nearline
- Nearline capabilities are not yet available; see https://docs.computecanada.ca/wiki/National_Data_Cyberinfrastructure for a brief description of the intended functionality.
- July 17 update: still not working. If you need your nearline RAC2017 quota, please ask CC support.
Missing symbolic links to project folders
- Upon login to the new clusters, symbolic links are not always created in the user's account, as described in Project layout. If the links are missing, please verify that your access to the cluster is enabled on this page: https://ccdb.computecanada.ca/services/resources.
Cedar only
Nothing to report at this time.
Graham only
- /home is on an NFS appliance that does not support ACLs, so setfacl/getfacl do not work there.
- Workaround: use the /project or /scratch filesystems instead (see the example after this list).
- This might be resolved by a future update or reconfiguration.
- diskusage_report (and its alias 'quota') does not report usage on Graham's /home.
- Workaround: 'du -sh ~' shows usage; the quota limit is 50 GB.
- Compute nodes cannot access the Internet.
- Solution: ask CC support for an exception, describing what you need to access and why.
- Crontab is not offered on Graham.
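As an illustration of the ACL workaround above (a sketch; the username someuser and the path /project/def-profname/shared are placeholders), you could grant another user read access to a directory under /project with:

[name@server ~]$ setfacl -R -m u:someuser:rX /project/def-profname/shared
[name@server ~]$ getfacl /project/def-profname/shared

The capital X grants execute (traversal) permission only on directories and on files that are already executable.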
Other issues
- Modules don't work for shells other than bash (sh) and tcsh.
- Workaround (this appears to work, but has not been tested extensively):
- source $LMOD_PKG/init/zsh
- source $LMOD_PKG/init/ksh
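To make modules available in every new zsh session (a sketch, assuming $LMOD_PKG is set by the system profile; the same pattern applies to ksh with its startup file), you could add the source line to your shell startup file:

[name@server ~]$ echo 'source $LMOD_PKG/init/zsh' >> ~/.zshrc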