Compute Canada has the authorization to host STAR-CCM+ binaries on its servers, but does not provide licenses to users. You will need to have your own license in order to use this software.
== Configuring your account ==
In order to configure your account to use your own license server with our Star-CCM+ module, create a license file <tt>$HOME/.licenses/starccm.lic</tt> with the following content:
{{File|name=starccm.lic|contents=SERVER IP ANY PORT
USE_SERVER}}
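For example, assuming a hypothetical license server reachable at <tt>192.168.5.11</tt> on port <tt>1999</tt> (both values are placeholders; substitute your own server's address and port):

{{File|name=starccm.lic|contents=SERVER 192.168.5.11 ANY 1999
USE_SERVER}}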
Neither IBM Platform MPI nor Intel MPI is tightly coupled with our scheduler; you must therefore tell <tt>starccm+</tt> which hosts to use by means of a file containing the list of available hosts. To produce this file, we provide the <tt>slurm_hl2hl.py</tt> script, which will output the list of hosts when called with the option <tt>--format STAR-CCM+</tt>. This list can then be written to a file and read by Star-CCM+. Also, because these distributions of MPI are not tightly integrated with our scheduler, you should use the options <tt>--ntasks-per-node=1</tt> and <tt>--cpus-per-task=32</tt> when submitting a job. As a special case, when submitting jobs with version 14.02.012 modules on Cedar, you must add <code>-fabric psm2</code> to the <tt>starccm+</tt> command line (the last line in the Cedar tab of the <tt>starccm_job.sh</tt> Slurm script below) for multi-node jobs to run properly; otherwise no output will be obtained.
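For example, inside a job script the host list might be generated and passed to <tt>starccm+</tt> like this (a minimal sketch; the file name <tt>machinefile</tt>, the core count of 64, and <tt>your-simulation.sim</tt> are placeholders):

 slurm_hl2hl.py --format STAR-CCM+ > machinefile
 starccm+ -batch -np 64 -machinefile machinefile your-simulation.sim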
You will also need to set up your job environment to use your license. If you are using CD-adapco's online "pay-on-usage" server, the configuration is rather simple. If you are using an internal license server, please [mailto:support@computecanada.ca contact us] so that we can help you set up access to it. When all is done, your submit script should look like the examples below, where 2 nodes are used for 1 hour; you can adjust these numbers to fit your needs.
Note that at [[Niagara]] the compute nodes mount the <tt>$HOME</tt> filesystem as read-only. It is therefore important to define the environment variable <tt>$STARCCM_TMP</tt> and point it to a location on <tt>$SCRATCH</tt> that is unique to the version of StarCCM+; otherwise StarCCM+ will try to create such a directory in <tt>$HOME</tt> and crash in the process.
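For example (a sketch; the version-suffixed directory name is an assumption, chosen here to match the module version loaded in the scripts below):

 export STARCCM_TMP="$SCRATCH/.starccm-14.06.013"
 mkdir -p "$STARCCM_TMP"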
module load starccm-mixed/14.06.013
export LM_PROJECT='YOUR CD-ADAPCO PROJECT ID GOES HERE'
export CDLMD_LICENSE_FILE="1999@flex.cd-adapco.com"
module load starccm-mixed/14.06.013
export LM_PROJECT='YOUR CD-ADAPCO PROJECT ID GOES HERE'
export CDLMD_LICENSE_FILE="1999@flex.cd-adapco.com"
module load starccm/14.06.013-R8
export LM_PROJECT='YOUR CD-ADAPCO PROJECT ID GOES HERE'
export CDLMD_LICENSE_FILE="1999@localhost"
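# Reach the license server through an SSH tunnel via the Niagara gateway,
# since compute nodes have no direct internet access (hence 1999@localhost above)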
ssh nia-gw -L 1999:flex.cd-adapco.com:1999 -L 2099:flex.cd-adapco.com:2099 -N -f
}}</tab>
</tabs>
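Putting the pieces above together, a minimal sketch of such a submit script is shown below. The node and core counts, the module version, and the simulation file name <tt>your-simulation.sim</tt> are placeholders, and the <code>-power</code> flag assumes a pay-on-usage (POD) license; adapt everything to your cluster and license setup.

{{File|name=starccm_job.sh|contents=#!/bin/bash
#SBATCH --time=01:00:00        # 2 nodes for 1 hour, as discussed above
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=1    # required since this MPI is not scheduler-integrated
#SBATCH --cpus-per-task=32

module load starccm-mixed/14.06.013
export LM_PROJECT='YOUR CD-ADAPCO PROJECT ID GOES HERE'
export CDLMD_LICENSE_FILE="1999@flex.cd-adapco.com"

# Generate the list of hosts allocated to this job for starccm+
slurm_hl2hl.py --format STAR-CCM+ > machinefile

# One MPI rank per core across all allocated nodes
NCORE=$((SLURM_NNODES * SLURM_CPUS_PER_TASK))
starccm+ -batch -power -np $NCORE -machinefile machinefile your-simulation.sim}}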
= Remote Visualization = <!--T:25-->
To prepare your account for remote visualization on a cluster node or on the Graham VDI nodes, first specify your license details:
* Set up your <code>~/.licenses/starccm.lic</code> license file as described above
* POD key users should also set <code>export LM_PROJECT='CD-ADAPCO PROJECT ID'</code>
== Cluster Nodes ==
Using Compute Canada cluster modules:
# Connect to a compute or login node with [https://docs.computecanada.ca/wiki/VNC#Connect TigerVNC]
# <code>module load starccm-mixed</code> (or <code>starccm</code>)
# <code>starccm+ -np 4 inputfile.sim</code>
== VDI Nodes ==
Using Compute Canada cluster modules:
# Connect to gra-vdi with [https://docs.computecanada.ca/wiki/VNC#VDI_Nodes TigerVNC]
# <code>module load CcEnv StdEnv</code>
# <code>module load starccm-mixed</code> (or <code>starccm</code>)
# <code>starccm+ -np 4 inputfile.sim</code>
Using the local gra-vdi graphics-optimized modules:
# Connect to gra-vdi with [https://docs.computecanada.ca/wiki/VNC#VDI_Nodes TigerVNC]
# <code>export CDLMD_LICENSE_FILE=~/.licenses/starccm.lic</code>
# <code>module load SnEnv</code>
# <code>module load starccm/mixed</code> (or <code>starccm/r8</code>)
# <code>starccm+ -np 4 inputfile.sim</code>