=== Storage ===
* There will be 30 days of overlap between the 2018 and 2019 storage allocations, starting April 4, 2019.
* On a given system, the larger of the two quotas (2018 or 2019) will be in effect during the transition period.
* If an allocation has moved from one site to another, users are expected to transfer the data themselves (via Globus, scp, rsync, ''etc.''; see [[Transferring data]]). For large amounts of data (''e.g.'', 200TB or more), please [[Technical support|contact support]] for advice or assistance with the transfer; a minimal example is sketched after this list.
* Groups with an allocation that has moved to [[Béluga]] are encouraged to start migrating their data '''now'''; Béluga storage is already active.
* Contributed storage systems have different dates of activation and decommissioning. For these, the quota during the 30-day transition period will be the sum of the 2018 and 2019 quotas.
* For every other PI, the default quotas apply.
* After the transition period, quotas on the original sites from which data has been migrated will also be reset to the defaults. Users are expected to delete data from those original sites as well if their usage is above the default quota; otherwise, staff will delete everything.
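As an illustration, a minimal transfer from an old site to Béluga could look like the sketch below. The paths, user name, and host name here (<tt>/project/def-professor/dataset</tt>, <tt>username</tt>, <tt>beluga.computecanada.ca</tt>) are placeholders for illustration only; see [[Transferring data]] for the recommended methods, including Globus, which is generally preferred for transfers of this size.
<pre>
# Hypothetical example: copy a project directory to Béluga.
# Run from a login node at the old site; substitute your own paths and names.
rsync -avP /project/def-professor/dataset/ \
    username@beluga.computecanada.ca:/project/def-professor/dataset/

# Dry run (-n) to verify the copy is complete before deleting anything
# at the original site.
rsync -avn /project/def-professor/dataset/ \
    username@beluga.computecanada.ca:/project/def-professor/dataset/
</pre>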
=== Job scheduling ===
* The scheduler team plans to archive and compact the Slurm database on April 4, before activating the new allocations. We hope to schedule this during off-peak hours. While this is underway the database may be unresponsive; in particular, <tt>sacct</tt> and <tt>sacctmgr</tt> may not respond.
* Once the database has been compacted, 2018 allocations will be replaced with 2019 allocations.
* We are not sure how long these steps (database archiving and compaction, then the allocation cutover) will take; we hope a few hours.
* Job priority may be inconsistent during the allocation cutover. In particular, jobs charged to default allocations may see decreased priority.
* Jobs already in the system will be retained. Running jobs will not be stopped; queued jobs may be held.
* Waiting jobs attributed to an allocation which has been moved or not renewed may not be scheduled after the cutover. Advice on how to detect and handle such jobs will be forthcoming; see the sketch after this list for a stopgap.
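Until that advice is published, one stopgap is to list your pending jobs together with the account each is charged to, then compare those accounts against your 2019 allocations. This is an assumption about what will be useful, not an official procedure; <tt>squeue</tt> and <tt>sacctmgr</tt> are standard Slurm commands, but the field selections below are only one reasonable choice.
<pre>
# List your pending jobs with the account (allocation) each is charged to.
squeue -u $USER -t PENDING -o "%.10i %.20a %.10T %.30j"

# List the accounts your user can charge jobs to
# (once the database is responsive again).
sacctmgr show associations user=$USER format=Account -n
</pre>
If a pending job is charged to an account that no longer exists at that site, it can be cancelled with <tt>scancel</tt> and, where appropriate, resubmitted under a valid account with <tt>--account=...</tt>.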