<languages />
<translate>

<!--T:15-->
Problems that affect many users and are being investigated, such as a cluster-wide malfunction or outage, are on [https://status.alliancecan.ca/ the Alliance Status page]. This "Known issues" page describes problems that affect many users but that may take some time to repair, or are not planned for repair at this time. Problems that only affect a specific software package are described on the wiki page for that software package.

==Report an issue== <!--T:1-->
Please report issues to the [[Technical Support|technical support]] team.

==Shared issues== <!--T:2-->
The [https://status.alliancecan.ca/ status page] is updated manually, so there may be a delay between when a problem begins and when it is posted to the status page.

===Scheduler issues=== <!--T:6-->

<!--T:14-->
No known issues.

===Quota and filesystem issues=== <!--T:7-->

====Missing project folder==== <!--T:11-->
Upon creation of a new account for a Principal Investigator, the [[Project layout|<code>/project</code>]] storage space might not be allocated until the next business day.
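If you want to check whether the space has appeared yet, you can test for the directory from a login node. This is a minimal sketch, assuming the standard <code>/project/&lt;group&gt;</code> layout described in [[Project layout]]; the group name <code>def-yourpi</code> is a placeholder for your own allocation group.

<source lang="bash">
#!/bin/bash
# Minimal check for whether a new project space has been allocated yet.
# "def-yourpi" is a placeholder; substitute your own allocation group.
GROUP=def-yourpi
if [ -d "/project/$GROUP" ]; then
    echo "/project/$GROUP exists: the space has been allocated."
else
    echo "/project/$GROUP not found: if more than one business day has passed, contact support."
fi
</source>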

==Cluster-specific issues== <!--T:17-->

===Béluga=== <!--T:16-->
No known issues.

===Cedar=== <!--T:3-->
No known issues.

===Graham=== <!--T:4-->
Graham's /scratch is often slow; it will be replaced soon.
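Until the replacement arrives, I/O-intensive jobs may run faster if they stage their files through node-local storage, which Slurm exposes as <code>$SLURM_TMPDIR</code>. The job script below is a minimal sketch of that pattern; <code>my_program</code> and the file names are placeholders, and the resource requests are only examples.

<source lang="bash">
#!/bin/bash
#SBATCH --time=01:00:00
#SBATCH --mem=4G

# Stage input from the shared (currently slow) /scratch to fast node-local disk.
cp ~/scratch/input.dat "$SLURM_TMPDIR/"

# Run against the local copy; my_program is a placeholder for your application.
cd "$SLURM_TMPDIR"
my_program input.dat > output.dat

# Copy results back before the job ends; $SLURM_TMPDIR is cleaned up afterwards.
cp output.dat ~/scratch/
</source>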

===Narval=== <!--T:18-->
No known issues.
</translate>