Star-CCM+


STAR-CCM+ is a multidisciplinary engineering simulation suite, supporting the modelling of acoustics, fluid dynamics, heat transfer, rheology, multiphase flows, particle flows, solid mechanics, reacting flows, electrochemistry, and electromagnetics. It is developed by Siemens.

License limitations

Compute Canada is authorized to host STAR-CCM+ binaries on its servers, but does not provide licenses to users. You will need your own license in order to use this software.

Configuring your account

To configure your account to use your own license server with our Star-CCM+ module, create a license file $HOME/.licenses/starccm.lic with the following content:

File : starccm.lic

SERVER IP ANY PORT
USE_SERVER


where you replace IP and PORT with the IP address and the port used by the license server.
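
For example, assuming a hypothetical license server at IP address 10.0.0.1 listening on port 27000, the file would read:

SERVER 10.0.0.1 ANY 27000
USE_SERVER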

Cluster batch job submission

Select one of the available modules:

  • starccm for the double-precision flavour,
  • starccm-mixed for the mixed precision flavour.
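
For example, to list the installed versions and load one of them (the version shown is only illustrative):

module avail starccm
module load starccm-mixed/15.04.010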

Star-CCM+ comes bundled with two different distributions of MPI:

  • IBM Platform MPI is the default distribution, but does not work on Cedar's Intel OmniPath network fabric;
  • Intel MPI is specified with option -mpi intel.

Neither IBM Platform MPI nor Intel MPI is tightly coupled with our scheduler; you must therefore tell starccm+ which hosts to use by means of a file containing the list of available hosts. To produce this file, we provide the slurm_hl2hl.py script, which will output the list of hosts when called with the option --format STAR-CCM+. This list can then be written to a file and read by Star-CCM+. Also, because these distributions of MPI are not tightly integrated with our scheduler, you should use the option --ntasks-per-node=1 and set --cpus-per-task to use all cores, as shown in the scripts below.

As a special case, when submitting jobs with the version 14.02.012 or 14.04.013 modules on Cedar, you must add -fabric psm2 to the starccm+ command line (the last line in the Cedar tab of the starccm_job.sh script below) for multi-node jobs to run properly; otherwise no output will be obtained.
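
For reference, the host-list step looks like this in practice (the same commands appear in the job scripts below):

# Write the list of hosts allocated by Slurm, in STAR-CCM+ format
slurm_hl2hl.py --format STAR-CCM+ > machinefile
# Total core count: one task per node times all cores of each node
NCORE=$((SLURM_NTASKS * SLURM_CPUS_PER_TASK))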

You will also need to set up your job environment to use your license. If you are using CD-adapco's online "pay-on-usage" server, the configuration is rather simple. If you are using an internal license server, please contact us so that we can help you set up access to it.
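
For the pay-on-usage server, the setup amounts to two environment variables, as in the job scripts below (replace the placeholder project ID with your own):

export LM_PROJECT='YOUR CD-ADAPCO PROJECT ID GOES HERE'
export CDLMD_LICENSE_FILE="1999@flex.cd-adapco.com"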

Note that at Niagara the compute nodes mount the $HOME filesystem as read-only. It is therefore important to define the environment variable $STARCCM_TMP and point it to a location on $SCRATCH that is unique to the version of StarCCM+; otherwise StarCCM+ will try to create such a directory under $HOME and crash in the process.
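
A minimal sketch of this setup, as used in the job scripts below ($EBVERSIONSTARCCM is set when the starccm module is loaded):

export STARCCM_TMP="${SCRATCH}/.starccm-${EBVERSIONSTARCCM}"
mkdir -p "$STARCCM_TMP"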

File : starccm_job.sh

#!/bin/bash
#SBATCH --account=def-group   # Specify some account
#SBATCH --time=00-01:00       # Time limit: dd-hh:mm
#SBATCH --nodes=2             # Specify 1 or more nodes
#SBATCH --cpus-per-task=40    # Request all cores per node
#SBATCH --mem=0               # Request all memory per node
#SBATCH --ntasks-per-node=1   # Do not change this value

module load StdEnv/2020       # Comment this line for versions older than 15.04.010

# module load starccm/14.06.013-R8
# module load starccm-mixed/14.06.013
# module load starccm/15.04.010-R8
module load starccm-mixed/15.04.010

export LM_PROJECT='YOUR CD-ADAPCO PROJECT ID GOES HERE'
export CDLMD_LICENSE_FILE="1999@flex.cd-adapco.com"

export STARCCM_TMP="${SCRATCH}/.starccm-${EBVERSIONSTARCCM}"
mkdir -p "$STARCCM_TMP"

slurm_hl2hl.py --format STAR-CCM+ > machinefile

NCORE=$((SLURM_NTASKS * SLURM_CPUS_PER_TASK))

starccm+ -batch -power -np $NCORE -podkey $LM_PROJECT -licpath $CDLMD_LICENSE_FILE -machinefile $PWD/machinefile $PWD/your-file.sim
File : starccm_job.sh

#!/bin/bash
#SBATCH --account=def-group   # Specify some account
#SBATCH --time=00-01:00       # Time limit: dd-hh:mm
#SBATCH --nodes=2             # Specify 1 or more nodes
#SBATCH --cpus-per-task=48    # or 32 for smaller full nodes
#SBATCH --mem=0               # Request all memory per node
#SBATCH --ntasks-per-node=1   # Do not change this value

module load StdEnv/2020       # Comment this line for versions older than 15.04.010

# module load starccm/14.06.013-R8
# module load starccm-mixed/14.06.013
# module load starccm/15.04.010-R8
module load starccm-mixed/15.04.010

export LM_PROJECT='YOUR CD-ADAPCO PROJECT ID GOES HERE'
export CDLMD_LICENSE_FILE="1999@flex.cd-adapco.com"

export STARCCM_TMP="${SCRATCH}/.starccm-${EBVERSIONSTARCCM}"
mkdir -p "$STARCCM_TMP"

slurm_hl2hl.py --format STAR-CCM+ > machinefile

NCORE=$((SLURM_NTASKS * SLURM_CPUS_PER_TASK))

starccm+ -batch -power -np $NCORE -podkey $LM_PROJECT -licpath $CDLMD_LICENSE_FILE -machinefile $PWD/machinefile $PWD/your-file.sim -mpi intel
File : starccm_job.sh

#!/bin/bash
#SBATCH --account=def-group   # Specify some account
#SBATCH --time=00-01:00       # Time limit: dd-hh:mm
#SBATCH --nodes=2             # Specify 1 or more nodes
#SBATCH --cpus-per-task=32    # or 44; request all cores per node
#SBATCH --mem=0               # Request all memory per node
#SBATCH --ntasks-per-node=1   # Do not change this value

module load StdEnv/2020       # Comment this line for versions older than 15.04.010

# module load starccm/14.06.013-R8
# module load starccm-mixed/14.06.013
# module load starccm/15.04.010-R8
module load starccm-mixed/15.04.010

export LM_PROJECT='YOUR CD-ADAPCO PROJECT ID GOES HERE'
export CDLMD_LICENSE_FILE="1999@flex.cd-adapco.com"

export STARCCM_TMP="${SCRATCH}/.starccm-${EBVERSIONSTARCCM}"
mkdir -p "$STARCCM_TMP"

slurm_hl2hl.py --format STAR-CCM+ > machinefile-$SLURM_JOB_ID

NCORE=$((SLURM_NTASKS * SLURM_CPUS_PER_TASK))

# Append -fabric psm2 to the next line when using module versions 15.04.010 or newer, e.g.:

starccm+ -jvmargs -Xmx4G -jvmargs -Djava.io.tmpdir=$SLURM_TMPDIR -batch -power -np $NCORE -podkey $LM_PROJECT -licpath $CDLMD_LICENSE_FILE -machinefile $PWD/machinefile-$SLURM_JOB_ID $PWD/your-file.sim -mpi intel -fabric psm2
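
Once you have adapted one of the above scripts, submit the job in the usual way:

sbatch starccm_job.sh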

Remote visualization

Preparation

To set up your account for remote visualization:

  1. Create ~/.licenses/starccm.lic as described above.
  2. Users with a Power-on-demand (POD) license should also:
     set export LM_PROJECT='CD-ADAPCO PROJECT ID' and
     add -power to the other command line options shown below.
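
For example, a POD user's session would combine these settings with the options shown in the next section (the project ID is a placeholder):

export LM_PROJECT='CD-ADAPCO PROJECT ID'
starccm+ -power -mesa -np 4 input-file.sim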

Cluster nodes

• Using global Compute Canada cluster modules:

  1. Connect to a cluster compute or login node with TigerVNC
  2. module load starccm-mixed/X.Y.Z **OR** starccm/X.Y.Z-R8
    starccm+ -mesa -np 4 input-file.sim

Under the old StdEnv/2016 and StdEnv/2018 environments, the following module versions are broken for interactive graphics use on cluster compute and cluster login nodes: starccm-mixed/11.06.011, starccm/11.06.011-R8, starccm-mixed/12.04.011, starccm/12.04.011-R8, starccm-mixed/12.06.011, starccm-mixed/13.06.012 and starccm/13.06.012-R8. Please use one of the other module versions instead.

VDI nodes

• Using global Compute Canada cluster modules:

  1. Connect to gra-vdi with TigerVNC
  2. module load CcEnv StdEnv
  3. module avail starccm
  4. module load starccm-mixed/14.X.Y **OR** starccm/14.X.Y-R8
    starccm+ -clientldpreload /usr/lib64/VirtualGL/libvglfaker.so -np 4 input-file.sim
  5. module load starccm-mixed/11.X.Y|12.X.Y|13.X.Y **OR** starccm/11.X.Y-R8|12.X.Y-R8|13.X.Y-R8
    starccm+ -mesa -np 4 input-file.sim

• Using local gra-vdi modules (may provide better graphics performance):

  1. Connect to gra-vdi with TigerVNC
  2. export CDLMD_LICENSE_FILE=~/.licenses/starccm.lic
  3. module load SnEnv
  4. module load starccm/mixed/14.04.013 **OR** starccm/r8/14.04.013
    starccm+ -np 4 input-file.sim