Migrating between clusters

=Software=


The collection of [[Utiliser_des_modules/en|globally installed modules]] is the same across all of our general purpose clusters, distributed using CVMFS. For this reason, you should not notice substantial differences in the available modules, provided you are using the same [[Standard_software_environments|standard software environment]]. However, any [[Python#Creating_and_using_a_virtual_environment|Python virtual environments]] or [[R#Installing_R_packages|R]] and [[Perl#Installing_Packages|Perl]] packages that you installed in your home directory on one cluster will need to be re-installed on the new cluster, using the same steps you employed on the original cluster. Likewise, if you modified your <code>$HOME/.bashrc</code> file on one cluster to customize your environment, you will need to make the same changes on the new cluster. Any program you installed in your home directory will also need to be re-installed on the new cluster since, as mentioned in the previous section, the filesystems are independent between clusters.
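For example, a Python virtual environment can be rebuilt on the new cluster with steps along these lines; this is a minimal sketch, where the environment name and the <code>requirements.txt</code> file are placeholders for whatever you used originally:

<pre>
# Load a Python module on the new cluster (choose the version you need)
module load python

# Recreate the virtual environment in your home directory
python -m venv ~/env_name
source ~/env_name/bin/activate

# Reinstall the same packages you had on the original cluster,
# e.g. from a requirements.txt exported there with "pip freeze"
pip install --upgrade pip
pip install -r requirements.txt
</pre>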
 
=Job submission=
 
All of our clusters use Slurm for job submission, so many parts of a job submission script will work across clusters. However, the number of CPU cores per node varies significantly across clusters, from 24 up to 64 cores, so check the documentation page of the cluster you are using to verify how many cores are available per node. The amount of memory per node also varies, so you may need to adapt your script to account for this as well, in addition to differences among the GPUs that are available, if any.
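As a rough sketch, the resource requests below are the lines of a Slurm script most likely to need adjustment when moving to a different cluster; the account name, core count, memory, and program are placeholders, not recommendations:

<pre>
#!/bin/bash
#SBATCH --account=def-someuser    # replace with your own allocation
#SBATCH --time=0-03:00            # walltime; the maximum allowed differs by cluster
#SBATCH --cpus-per-task=24        # match the core count of the target cluster's nodes
#SBATCH --mem-per-cpu=2G          # adjust to the memory available per core
##SBATCH --gpus-per-node=1        # uncomment only if the cluster offers a GPU you need

srun ./my_program                 # placeholder for your own application
</pre>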
 
On [[Cedar]], you may not submit jobs from your home directory, and its compute nodes have direct Internet access; on [[Graham]], [[Béluga/en|Béluga]] and [[Narval/en|Narval]], the compute nodes do not have Internet access. The maximum job duration is seven days on Béluga and Narval but 28 days on Cedar and Graham. All of the clusters except Cedar also restrict the number of jobs per user, both running and queued, to no more than 1000.
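To see how close you are to this per-user limit, you can count your own running and pending jobs with a standard Slurm command, for example:

<pre>
# Number of jobs (running and queued) you currently have on this cluster
squeue -u $USER -h | wc -l
</pre>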