Known issues

== Shared issues == <!--T:2-->
# The CC Slurm configuration preferentially encourages whole-node jobs. Where possible, users should request whole nodes rather than per-core resources. See [[Job_scheduling_policies#Whole_nodes_versus_cores;|Job Scheduling - Whole Node Scheduling]] and the job-script sketch after this list. ([[User:Pjmann|Patrick Mann]] ([[User talk:Pjmann|talk]]) 20:15, 17 July 2017 (UTC))
# Quotas on <code>/project</code> are all 1 TB. The Storage National team is working on a project/RAC-based schema. Fortunately, Lustre has announced group-based quotas, but that support will still need to be installed; a quota-checking sketch follows this list. ([[User:Pjmann|Patrick Mann]] ([[User talk:Pjmann|talk]]) 20:12, 17 July 2017 (UTC))
# SLURM epilog does not fully clean up processes from ended jobs, especially if the job did not exit normally; the process-check sketch after this list shows one way to look for stragglers. ([[User:Gbnewby|Greg Newby]] Fri Jul 14 19:32:48 UTC 2017)
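
For the whole-node issue, a minimal job-script sketch, not a definitive recipe: the directives below ask Slurm for an entire node instead of individual cores. The core count is a placeholder and should match the node type on the target cluster, and <code>my_program</code> stands in for your own executable.

<source lang="bash">
#!/bin/bash
#SBATCH --nodes=1              # request one whole node rather than per-core resources
#SBATCH --ntasks-per-node=32   # placeholder: set to the core count of the target node
#SBATCH --mem=0                # 0 = request all available memory on the node
#SBATCH --time=01:00:00
srun ./my_program              # placeholder executable
</source>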
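For the quota issue, a hedged sketch of checking a group's current usage against the 1 TB <code>/project</code> limit with the standard Lustre client tool; <code>def-mygroup</code> is a placeholder for your own project group:

<source lang="bash">
# Show block/file usage and limits for one group on /project.
lfs quota -g def-mygroup /project
</source>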
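For the epilog issue, one way to look for leftover processes is sketched below. It assumes you can still reach the compute node over ssh after the job ends; <code>node123</code> is a placeholder for the node name reported by <code>squeue</code> or <code>sacct</code>.

<source lang="bash">
# List any of your processes still running on the node after the job ended.
ssh node123 ps -u "$USER" -o pid,etime,comm
</source>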