STAR-CCM+

STAR-CCM+ is a multidisciplinary engineering simulation suite, supporting the modelling of acoustics, fluid dynamics, heat transfer, rheology, multiphase flows, particle flows, solid mechanics, reacting flows, electrochemistry, and electromagnetics. It is developed by Siemens.

License limitations

Compute Canada is authorized to host STAR-CCM+ binaries on its servers, but does not provide licenses to users. You will need your own license in order to use this software.

Configuring your account

To configure your account to use your own license server with our Star-CCM+ module, create a license file $HOME/.licenses/starccm.lic with the following content:

File : starccm.lic

SERVER IP ANY PORT
USE_SERVER


where you replace IP and PORT with the IP address and port of your license server.
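
For example, if your license server were reachable at the (hypothetical) address 10.100.64.5 on port 1999, the file would contain:

SERVER 10.100.64.5 ANY 1999
USE_SERVER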

Cluster Batch Job Submission

Select one of the available modules:

  • starccm for the double-precision flavour,
  • starccm-mixed for the mixed-precision flavour.
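
If you are unsure which versions are installed on a given cluster, you can query the module system before loading one; for example:

module spider starccm-mixed
module load starccm-mixed/14.06.013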

Star-CCM+ comes bundled with two different distributions of MPI:

  • IBM Platform MPI is the default distribution, but does not work on Cedar's Intel OmniPath network fabric;
  • Intel MPI is specified with option -mpi intel.

Neither IBM Platform MPI nor Intel MPI is tightly coupled with our scheduler; you must therefore tell starccm+ which hosts to use by means of a file containing the list of available hosts. To produce this file, we provide the slurm_hl2hl.py script, which will output the list of hosts when called with the option --format STAR-CCM+. This list can then be written to a file and read by Star-CCM+. Also, because these MPI distributions are not tightly integrated with our scheduler, you should use --ntasks-per-node=1 and set --cpus-per-task to the number of cores per node (as in the sample scripts below) when submitting a job. As a special case, when submitting jobs with the version 14.02.012 modules on Cedar, you must add -fabric psm2 to the starccm+ command line (the last line of the Cedar starccm_job.sh script below) for multi-node jobs to run properly; otherwise, no output will be obtained.
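
As a sketch of that special case, adapting the Cedar script shown below, the final line of the script would become:

starccm+ -power -np $NCORE -podkey $LM_PROJECT -licpath $CDLMD_LICENSE_FILE -machinefile `pwd`/machinefile -mpi intel -fabric psm2 -batch `pwd`/your-simulation-file.sim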

You will also need to set up your job environment to use your license. If you are using CD-adapco's online "pay-on-usage" server, the configuration is rather simple. If you are using an internal license server, please contact us so that we can help you set up access to it. When all is done, your submit script should look like one of the following, where 2 nodes are used for 1 hour; you can adjust these numbers to fit your needs.

Note that on Niagara the compute nodes mount the $HOME filesystem as read-only. It is therefore important to define the environment variable $STARCCM_TMP and point it to a location on $SCRATCH that is unique to the version of StarCCM+; otherwise, StarCCM+ will try to create such a directory under $HOME and crash in the process.

File : starccm_job.sh

#!/bin/bash
#SBATCH --account=def-group   # Specify your account
#SBATCH --time=00-01:00       # Time limit: dd-hh:mm
#SBATCH --nodes=2             # Specify 1 or more nodes
#SBATCH --cpus-per-task=32    # Request all cores per node
#SBATCH --mem=0               # Request all memory per node
#SBATCH --ntasks-per-node=1   # Do not change this value

# Pick an appropriate STARCCM version and precision
# module load starccm/14.06.013-R8
module load starccm-mixed/14.06.013

export LM_PROJECT='YOUR CD-ADAPCO PROJECT ID GOES HERE'
export CDLMD_LICENSE_FILE="1999@flex.cd-adapco.com"

export STARCCM_TMP="${SCRATCH}/.starccm-${EBVERSIONSTARCCM}"
mkdir -p "$STARCCM_TMP"

slurm_hl2hl.py --format STAR-CCM+ > machinefile

NCORE=$((SLURM_NTASKS * SLURM_CPUS_PER_TASK))

starccm+ -power -np $NCORE -podkey $LM_PROJECT -licpath $CDLMD_LICENSE_FILE -machinefile $SLURM_SUBMIT_DIR/machinefile -batch /path/to/your/simulation/file
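
Once saved, the script is submitted to the scheduler from the directory containing it with:

sbatch starccm_job.sh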
File : starccm_job.sh (Cedar)

#!/bin/bash
#SBATCH --account=def-group   # Specify your account
#SBATCH --time=00-01:00       # Time limit: dd-hh:mm
#SBATCH --nodes=2             # Specify 1 or more nodes
#SBATCH --cpus-per-task=48    # or 32 for smaller full nodes
#SBATCH --mem=0               # Request all memory per node
#SBATCH --ntasks-per-node=1   # Do not change this value

# Pick an appropriate STARCCM module/version and precision
# module load starccm/12.04.011-R8
module load starccm-mixed/14.06.013

export LM_PROJECT='YOUR CD-ADAPCO PROJECT ID GOES HERE'
export CDLMD_LICENSE_FILE="1999@flex.cd-adapco.com"

export STARCCM_TMP="${SCRATCH}/.starccm-${EBVERSIONSTARCCM}"
mkdir -p "$STARCCM_TMP"

slurm_hl2hl.py --format STAR-CCM+ > machinefile

NCORE=$((SLURM_NTASKS * SLURM_CPUS_PER_TASK))

starccm+ -power -np $NCORE -podkey $LM_PROJECT -licpath $CDLMD_LICENSE_FILE -machinefile `pwd`/machinefile -mpi intel -batch `pwd`/your-simulation-file.sim
File : starccm_job.sh (Niagara)

#!/bin/bash
#SBATCH --account=def-group   # Specify your account
#SBATCH --time=00-01:00       # Time limit: dd-hh:mm
#SBATCH --nodes=2             # Specify 1 or more nodes
#SBATCH --cpus-per-task=40    # or 80 to use HyperThreading
#SBATCH --mem=0               # Request all memory per node
#SBATCH --ntasks-per-node=1   # Do not change this value

cd $SLURM_SUBMIT_DIR

module purge --force
module load CCEnv
module load StdEnv
module load starccm/14.06.013-R8

export LM_PROJECT='YOUR CD-ADAPCO PROJECT ID GOES HERE'
export CDLMD_LICENSE_FILE="1999@localhost"
ssh nia-gw -L 1999:flex.cd-adapco.com:1999 -L 2099:flex.cd-adapco.com:2099 -N -f

export STARCCM_TMP="${SCRATCH}/.starccm-${EBVERSIONSTARCCM}"
mkdir -p "$STARCCM_TMP"

# ln -s $STARCCM_TMP $HOME  ### only the first time you run the script

slurm_hl2hl.py --format STAR-CCM+ > machinefile

NCORE=$((SLURM_NTASKS * SLURM_CPUS_PER_TASK))

starccm+ -power -np $NCORE -podkey $LM_PROJECT -machinefile $SLURM_SUBMIT_DIR/machinefile -batch /path/to/your/simulation/file
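
The ssh line in this script forwards the license server ports through the nia-gw proxy so that the job can reach flex.cd-adapco.com via localhost. If license checkout fails, you can verify the forwarding manually from a login node; a minimal check, assuming the nc utility is available:

ssh nia-gw -L 1999:flex.cd-adapco.com:1999 -L 2099:flex.cd-adapco.com:2099 -N -f
nc -z localhost 1999 && echo "license port reachable"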

Remote Visualization

To prepare your account for remote visualization on a cluster node or on the Graham VDI nodes, first specify your license details:

  • Set up your ~/.licenses/starccm.lic license file as described above
  • Podkey users should also set export LM_PROJECT='CD-ADAPCO PROJECT ID'

Cluster Nodes

• Using Compute Canada cluster modules

Connect to a compute or login node with TigerVNC
module load starccm-mixed (or starccm)
starccm+ -np 4 inputfile.sim

VDI Nodes

• Using Compute Canada cluster modules

Connect to gra-vdi with TigerVNC
module load CcEnv StdEnv
module load starccm-mixed (or starccm)
starccm+ -np 4 inputfile.sim

• Local gra-vdi graphics optimized modules

Connect to gra-vdi with TigerVNC
export CDLMD_LICENSE_FILE=~/.licenses/starccm.lic
module load SnEnv
module load starccm/mixed (or starccm/r8)
starccm+ -np 4 inputfile.sim