Known issues

Report an issue

Shared issues

Scheduler issues

  • Interactive jobs started via salloc require X11 forwarding to be enabled when you connect to the cluster. On Linux and macOS, you can typically enable X11 forwarding by adding the option -X or -Y to your ssh command. If you do not have X11 forwarding available, you can use srun --pty bash as the command to be run by salloc, for example: salloc --time=1:00:00 --ntasks=1 srun --pty bash (see the example commands after this list; reported by Maxime Boissonneault, 12:26, 13 December 2017 (UTC))
  • The CC Slurm configuration encourages whole-node jobs. When appropriate, users should request whole-node rather than per-core resources. See Job Scheduling - Whole Node Scheduling.
  • By default, the job receives environment settings from the submitting shell. This can lead to irreproducible results if it is not what you expect. To force the job to run with a fresh, login-like environment, you can submit with --export=NONE on the command line or add #SBATCH --export=NONE to your job script (a minimal job-script sketch follows this list).
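
For illustration, the commands below restate the interactive-job workaround from the first item above. This is a sketch only: the cluster hostname and username are placeholders, and the one-hour, single-task request is an arbitrary example, not a recommendation.

    # Connect with X11 forwarding enabled (or use -Y for trusted forwarding):
    ssh -X someuser@graham.computecanada.ca

    # Without X11 forwarding, have salloc run an interactive shell through srun:
    salloc --time=1:00:00 --ntasks=1 srun --pty bash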
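
Likewise, here is a minimal job-script sketch showing where #SBATCH --export=NONE goes; the account name, time limit, and workload are placeholders:

    #!/bin/bash
    #SBATCH --export=NONE            # do not inherit the submitting shell's environment
    #SBATCH --time=0:10:00           # placeholder time limit
    #SBATCH --account=def-someuser   # placeholder account name
    hostname                         # placeholder workload; replace with the real command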

Quota and filesystem problems

Quota errors on /project filesystem

Nearline

Missing symbolic links to project folders

Cedar only

Nothing to report at this time.

Graham only

  • /home is on an NFS appliance that does not support ACLs, so setfacl/getfacl do not work there.
    • Workaround: use the /project or /scratch filesystems instead (see the sketch after this list).
    • This may be resolved by a future update or reconfiguration of the appliance.
  • diskusage_report (and its alias 'quota') does not report usage for /home on Graham (FIXED as of 2017-11-27).
  • Compute nodes cannot access the Internet.
    • Solution: contact technical support to request an exception; describe what you need to access and why.
  • Crontab is not offered on Graham.
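
As a sketch of the ACL workaround above, the commands below grant a collaborator read access to a directory under /project; the path and username are placeholders:

    # ACLs work on /project and /scratch, but not on Graham's /home:
    setfacl -m u:someuser:rX /project/def-sponsor/shared    # grant read (and directory-traverse) access
    getfacl /project/def-sponsor/shared                     # inspect the resulting ACL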

Other issues

  1. Modules do not work in shells other than bash (sh) and tcsh.
    • Workaround (this appears to work but has not been tested extensively; see the example after this list):
      • source $LMOD_PKG/init/zsh
      • source $LMOD_PKG/init/ksh
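
For example, a zsh user could source the Lmod initialization file in their shell startup and then use module as usual; this follows the untested workaround above, and the module name is only an example:

    # In ~/.zshrc (or run in the current shell):
    source $LMOD_PKG/init/zsh
    module load gcc    # example module; load whatever you actually need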