RAC transition FAQ
This article is a draft: a work in progress that is not yet complete and may not be ready for inclusion in the main wiki. It should not necessarily be considered factual or authoritative.
Allocations from the 2019 Resource Allocation Competition come into effect on 2019 April 4.
Here are some notes on how we expect the transition from 2018 to 2019 allocations to go.
Storage
- There will be 30 days of overlap between 2018 and 2019 storage allocations, starting on 2019 April 4.
- On a given system, the larger of the two quotas (2018 or 2019) will be adopted during the transition period.
- If an allocation has moved from one site to another, users are expected to transfer the data themselves (via Globus, scp, rsync, etc.; see Transferring data); a sketch is given after this list. For large amounts of data (e.g., 200TB or more), please contact support for advice or assistance in managing the transfer.
- Groups whose allocation has been moved to Béluga are encouraged to start migrating their data now; Béluga storage is already accessible via Globus.
- Contributed storage systems have different dates of activation and decommissioning. For these, the quota will be the sum of the 2018 and 2019 quotas during the 30-day transition period.
- For all other PIs, default quotas apply.
- After the transition period, the quotas on the original sites from which data has been migrated will also be reset to the defaults. Users whose usage exceeds the new (default) quota are expected to delete data from those original sites; if usage remains above the quota after the overlap period, staff may choose to delete everything.
- Reasonable requests for extension of the overlap period will be honored, but please bear in mind that such an extension may be impossible or severely constrained if the original cluster is being defunded.
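For groups moving data themselves, here is a minimal sketch, assuming a hypothetical group def-someprof migrating its project directory to Béluga; the hostname, paths, and group name are placeholders, and for very large transfers Globus is generally preferable (see Transferring data):

 # Check current usage against quotas on each cluster; diskusage_report
 # is the usual quota-reporting command on the national systems.
 diskusage_report

 # Dry run first (-n) to preview what would be copied, then transfer:
 rsync -avn /project/def-someprof/ beluga.computecanada.ca:/project/def-someprof/
 rsync -av  /project/def-someprof/ beluga.computecanada.ca:/project/def-someprof/

 # Verify the data at the destination before deleting anything at the
 # original site.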
Job scheduling
- The scheduler team plans to archive and compact the Slurm database on April 3, before implementing the new allocations on April 4. We hope to schedule this maintenance during off-peak hours. While it runs, the database, and hence commands such as sacct and sacctmgr, may be unresponsive.
- We expect to begin replacing 2018 allocations with 2019 allocations on April 4.
- Job priority may be inconsistent during the allocation cutover; in particular, jobs charged to default allocations may see decreased priority.
- Jobs already in the system will be retained. Running jobs will not be stopped. Waiting jobs may be held.
- Waiting jobs attributed to an allocation which has been moved or not renewed may not be scheduled after the cutover. Advice on how to detect and handle such jobs will be forthcoming; in the meantime, a sketch using standard Slurm commands follows this list.
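Until that advice is published, one way to spot such jobs is to list your pending jobs together with the account they are charged to; this is only a sketch using standard Slurm commands, and the job ID and account name below are placeholders:

 # Show your pending jobs with job ID, account, state, and name:
 squeue -u $USER -t PENDING -o "%.10i %.20a %.8T %.30j"

 # If a pending job is charged to a moved or non-renewed allocation,
 # either point it at a valid account or cancel and resubmit it:
 scontrol update jobid=12345 account=def-someprof
 scancel 12345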