Neither IBM MPI nor Intel MPI is tightly coupled with our scheduler; you must therefore tell <tt>starccm+</tt> which hosts to use by means of a file containing the list of available hosts. To produce this file, we provide the <tt>slurm_hl2hl.py</tt> script, which will output the list of hosts when called with the option <tt>--format STAR-CCM+</tt>. This list can then be written to a file and read by Star-CCM+. Also, because these distributions of MPI are not tightly integrated with our scheduler, you should use the option <tt>--ntasks-per-node=1</tt> and set <tt>--cpus-per-task</tt> to use all cores, as shown in the scripts.
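For example, the relevant lines of a job script might look like the following. This is only a minimal sketch: the node and core counts, the file name <tt>machinefile</tt>, the simulation file name, and the exact <tt>starccm+</tt> launch options are assumptions to be adapted to your cluster and to your version of Star-CCM+.
<pre>
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=32    # use all cores on the node (adjust to the node type)

# Write the list of allocated hosts in the format Star-CCM+ expects
slurm_hl2hl.py --format STAR-CCM+ > machinefile

# Hypothetical launch line; check the flags against your Star-CCM+ version
starccm+ -batch -np $((SLURM_NNODES * SLURM_CPUS_PER_TASK)) -machinefile machinefile your_simulation.sim
</pre>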
You will also need to set up your job environment to use your license. If you are using CD-adapco's online <i>pay-on-usage</i> server, the configuration is rather simple. If you are using an internal license server, please contact [[technical support]] so that we can help you set up access to it.
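For the pay-on-usage case, the setup typically amounts to pointing Star-CCM+ at the remote license server and supplying your project key, for example through environment variables in the job script. The sketch below assumes the variable names commonly used by Star-CCM+; the project ID, server address, and port are placeholders for the details supplied with your own license.
<pre>
# Placeholders: replace with the values supplied with your pay-on-usage license
export LM_PROJECT='YOUR_CD_ADAPCO_PROJECT_ID'
export CDLMD_LICENSE_FILE='<port>@<license-server>'
</pre>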
Note that at [[Niagara]], the compute nodes mount the <tt>$HOME</tt> filesystem as <i>read-only</i>. Therefore it is important to define the environment variable <tt>$STARCCM_TMP</tt> and point it to a location on <tt>$SCRATCH</tt> which is unique to the version of StarCCM+. Otherwise, StarCCM+ will try to create such a directory in <tt>$HOME</tt> and crash in the process.
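For example, the following sketch sets <tt>$STARCCM_TMP</tt> to a version-specific directory under <tt>$SCRATCH</tt> and creates it before launching Star-CCM+; the version string is a placeholder for the version of the module you actually load.
<pre>
# "19.04.007" is a placeholder; match it to the starccm module version you loaded
export STARCCM_TMP="${SCRATCH}/.starccm-19.04.007"
mkdir -p "$STARCCM_TMP"
</pre>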