*[https://software.intel.com/en-us/intel-mpi-library Intel MPI] is specified with option <tt>-mpi intel</tt>.
Neither IBM Platform MPI nor Intel MPI is tightly coupled with our scheduler; you must therefore tell <tt>starccm+</tt> which hosts to use by means of a file containing the list of available hosts. To produce this file, we provide the <tt>slurm_hl2hl.py</tt> script, which will output the list of hosts when called with the option <tt>--format STAR-CCM+</tt>. This list can then be written to a file and read by STAR-CCM+. Also, because these MPI distributions are not tightly integrated with our scheduler, you should use the options <tt>--ntasks-per-node=1</tt> and <tt>--cpus-per-task=32</tt> when submitting a job, as in the sketch below.
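For example, a minimal Slurm job script might look like the following sketch. The module name, node count, walltime, and simulation file name (<tt>mysim.sim</tt>) are placeholders to adapt to your own setup:

<pre>
#!/bin/bash
#SBATCH --time=0-01:00            # walltime, adjust as needed
#SBATCH --nodes=2                 # placeholder node count
#SBATCH --ntasks-per-node=1      # one task per node, as noted above
#SBATCH --cpus-per-task=32       # use all 32 cores on each node

module load starccm               # placeholder module name/version

# Write the list of allocated hosts in the format STAR-CCM+ expects
slurm_hl2hl.py --format STAR-CCM+ > machinefile

# Run in batch mode with Intel MPI across the listed hosts
starccm+ -batch -mpi intel -machinefile machinefile \
         -np $((SLURM_NNODES * SLURM_CPUS_PER_TASK)) mysim.sim
</pre>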
You will also need to set up your job environment to use your license. If you are using Adapco's online "pay-on-usage" server, the configuration is rather simple. If you are using an internal license server, please [mailto:support@computecanada.ca contact us] so that we can help you set up access to it.
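For the pay-on-usage case, the setup typically amounts to pointing the standard FlexLM-style environment variables at the remote server before launching <tt>starccm+</tt>; the server address and project key below are illustrative placeholders, not values to copy verbatim:

<pre>
# Point STAR-CCM+ at the pay-on-usage license server (placeholder address)
export CDLMD_LICENSE_FILE="1999@licenseserver.example.com"
# Project key issued with your pay-on-usage account (placeholder value)
export LM_PROJECT="YOUR_PROJECT_KEY"
</pre>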