Star-CCM+

STAR-CCM+ is a multidisciplinary engineering simulation suite used to model acoustics, fluid dynamics, heat transfer, rheology, multiphase flows, particle flows, solid mechanics, reacting flows, electrochemistry, and electromagnetics. It is developed by Siemens.

License limitations

We are authorized to host STAR-CCM+ binaries on our servers, but we do not provide licenses; you will need your own license in order to use this software. A remote POD license can be purchased directly from Siemens. Alternatively, a local license hosted at your institution can be used, provided it can be reached through the firewall from the cluster where jobs are to be run.
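
For example, you can verify from a login node that a license server port is reachable before submitting jobs. The host and port below are the POD server and port used elsewhere on this page; substitute your own values, and note that this sketch assumes the nc utility is available:

  nc -zv flex.cd-adapco.com 1999   # prints "succeeded" if the port is reachable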

Configuring your account

To configure your account to use your own license server with our Star-CCM+ module, create a license file $HOME/.licenses/starccm.lic containing

File : starccm.lic

SERVER IP ANY PORT
USE_SERVER


where IP and PORT are replaced with the IP address and the port used by the license server at your institution.
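
For example, with a hypothetical license server at IP address 10.20.30.40 serving on port 27000, the file would read:

SERVER 10.20.30.40 ANY 27000
USE_SERVER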

POD license file

Researchers who have purchased a POD license from Siemens may simply configure the following $HOME/.licenses/starccm.lic file on any of our clusters where Star-CCM+ jobs are to be run.

File : starccm.lic

SERVER flex.cd-adapco.com ANY 1999
USE_SERVER


Cluster batch job submission

Select one of the available modules:

  • starccm for the double-precision flavour,
  • starccm-mixed for the mixed-precision flavour.
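
To see which versions of either flavour are available under a given standard environment, you can query the module system, for example:

  module load StdEnv/2023
  module avail starccm-mixed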

You will also need to set up your job environment to use your license. If you are using CD-adapco's online pay-on-usage server, the configuration is rather simple. If you are using an internal license server, please contact technical support so that we can help you set up access to it.

Note that on Niagara the compute nodes mount the $HOME filesystem as read-only. It is therefore important to define the environment variable $STARCCM_TMP and point it to a location on $SCRATCH that is unique to the version of StarCCM+; otherwise, StarCCM+ will try to create such a directory under $HOME and crash in the process.
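
A minimal way to do this, as used in the job scripts below (the EBVERSIONSTARCCM variable is set automatically when a starccm module is loaded):

  export STARCCM_TMP="${SCRATCH}/.starccm-${EBVERSIONSTARCCM}"
  mkdir -p "$STARCCM_TMP"

The example scripts that follow differ mainly in the number of cores requested per node and in the MPI options passed to starccm+; pick the one that matches the cluster where your jobs will run.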


File : starccm_job.sh (40 cores per node)

#!/bin/bash

#SBATCH --account=def-group   # Specify some account
#SBATCH --time=00-01:00       # Time limit: dd-hh:mm
#SBATCH --nodes=1             # Specify 1 or more nodes
#SBATCH --cpus-per-task=40    # Request all cores per node
#SBATCH --mem=0               # Request all memory per node
#SBATCH --ntasks-per-node=1   # Do not change this value

#module load StdEnv/2020      # Versions < 18.06.006
module load StdEnv/2023

#module load starccm/18.06.006-R8
module load starccm-mixed/18.06.006

export LM_PROJECT='YOUR CD-ADAPCO PROJECT ID GOES HERE'
export CDLMD_LICENSE_FILE="1999@flex.cd-adapco.com"

export STARCCM_TMP="${SCRATCH}/.starccm-${EBVERSIONSTARCCM}"
mkdir -p "$STARCCM_TMP"

# Write the hosts allocated to this job into a machine file in STAR-CCM+ format
slurm_hl2hl.py --format STAR-CCM+ > $STARCCM_TMP/machinefile-$SLURM_JOB_ID

# Total number of cores across all allocated nodes
NCORE=$((SLURM_NNODES * SLURM_CPUS_PER_TASK * SLURM_NTASKS_PER_NODE))

# Remove "-power -podkey $LM_PROJECT" when using an institutional license server.
starccm+ -batch -power -podkey $LM_PROJECT -np $NCORE -licpath $CDLMD_LICENSE_FILE -machinefile $STARCCM_TMP/machinefile-$SLURM_JOB_ID $PWD/your-file.sim
File : starccm_job.sh (48 or 32 cores per node, Intel MPI over the psm2 fabric)

#!/bin/bash

#SBATCH --account=def-group   # Specify some account
#SBATCH --time=00-01:00       # Time limit: dd-hh:mm
#SBATCH --nodes=1             # Specify 1 or more nodes
#SBATCH --cpus-per-task=48    # Request all cores per node (48 or 32)
#SBATCH --mem=0               # Request all memory per node
#SBATCH --ntasks-per-node=1   # Do not change this value

#module load StdEnv/2020      # Versions < 18.06.006
module load StdEnv/2023

#module load starccm/18.06.006-R8
module load starccm-mixed/18.06.006

export LM_PROJECT='YOUR CD-ADAPCO PROJECT ID GOES HERE'
export CDLMD_LICENSE_FILE="1999@flex.cd-adapco.com"

export STARCCM_TMP="${SCRATCH}/.starccm-${EBVERSIONSTARCCM}"
mkdir -p "$STARCCM_TMP"

slurm_hl2hl.py --format STAR-CCM+ > $STARCCM_TMP/machinefile-$SLURM_JOB_ID

NCORE=$((SLURM_NNODES * SLURM_CPUS_PER_TASK * SLURM_NTASKS_PER_NODE))

# Remove "-power -podkey $LM_PROJECT" when using an institutional license server.
starccm+ -batch -power -podkey $LM_PROJECT -np $NCORE -licpath $CDLMD_LICENSE_FILE -machinefile $STARCCM_TMP/machinefile-$SLURM_JOB_ID $PWD/your-file.sim -mpi intel -fabric psm2
File : starccm_job.sh (32 or 44 cores per node, extra JVM options, Intel MPI over the psm2 fabric)

#!/bin/bash

#SBATCH --account=def-group   # Specify some account
#SBATCH --time=00-01:00       # Time limit: dd-hh:mm
#SBATCH --nodes=1             # Specify 1 or more nodes
#SBATCH --cpus-per-task=32    # Request all cores per node (32 or 44)
#SBATCH --mem=0               # Request all memory per node
#SBATCH --ntasks-per-node=1   # Do not change this value

#module load StdEnv/2020      # Versions < 18.06.006
module load StdEnv/2023

#module load starccm/18.06.006-R8
module load starccm-mixed/18.06.006

export LM_PROJECT='YOUR CD-ADAPCO PROJECT ID GOES HERE'
export CDLMD_LICENSE_FILE="1999@flex.cd-adapco.com"

export STARCCM_TMP="${SCRATCH}/.starccm-${EBVERSIONSTARCCM}"
mkdir -p "$STARCCM_TMP"

slurm_hl2hl.py --format STAR-CCM+ > $STARCCM_TMP/machinefile-$SLURM_JOB_ID

NCORE=$((SLURM_NNODES * SLURM_CPUS_PER_TASK * SLURM_NTASKS_PER_NODE))

# Remove "-power -podkey $LM_PROJECT" when using an institutional license server.
starccm+ -jvmargs -Xmx4G -jvmargs -Djava.io.tmpdir=$SLURM_TMPDIR -batch -power -podkey $LM_PROJECT -np $NCORE -licpath $CDLMD_LICENSE_FILE -machinefile $STARCCM_TMP/machinefile-$SLURM_JOB_ID $PWD/your-file.sim -mpi intel -fabric psm2
File : starccm_job.sh (64 cores per node, Open MPI)

#!/bin/bash

#SBATCH --account=def-group   # Specify some account
#SBATCH --time=00-01:00       # Time limit: dd-hh:mm
#SBATCH --nodes=1             # Specify 1 or more nodes
#SBATCH --cpus-per-task=64    # Request all cores per node
#SBATCH --mem=0               # Request all memory per node
#SBATCH --ntasks-per-node=1   # Do not change this value

#module load StdEnv/2020      # Versions < 18.06.006
module load StdEnv/2023

#module load starccm/18.06.006-R8
module load starccm-mixed/18.06.006

export LM_PROJECT='YOUR CD-ADAPCO PROJECT ID GOES HERE'
export CDLMD_LICENSE_FILE="1999@flex.cd-adapco.com"

export STARCCM_TMP="${SCRATCH}/.starccm-${EBVERSIONSTARCCM}"
mkdir -p "$STARCCM_TMP"

slurm_hl2hl.py --format STAR-CCM+ > $STARCCM_TMP/machinefile-$SLURM_JOB_ID

NCORE=$((SLURM_NNODES * SLURM_CPUS_PER_TASK * SLURM_NTASKS_PER_NODE))

# Remove "-power -podkey $LM_PROJECT" when using an institutional license server.
starccm+ -jvmargs -Xmx4G -jvmargs -Djava.io.tmpdir=$SLURM_TMPDIR -batch -power -podkey $LM_PROJECT -np $NCORE -licpath $CDLMD_LICENSE_FILE -machinefile $STARCCM_TMP/machinefile-$SLURM_JOB_ID $PWD/your-file.sim -mpi openmpi


File : starccm_job.sh (Niagara, POD license tunnelled through nia-gw)

#!/bin/bash

#SBATCH --account=def-group   # Specify some account
#SBATCH --time=00-01:00       # Time limit: dd-hh:mm
#SBATCH --nodes=1             # Specify 1 or more nodes
#SBATCH --cpus-per-task=40    # Request all cores per node
#SBATCH --mem=0               # Request all memory per node
#SBATCH --ntasks-per-node=1   # Do not change this value
#SBATCH --mail-type=BEGIN,END # Email notifications at job start and end
 
module load CCEnv

#module load StdEnv/2020      # Versions < 18.06.006
module load StdEnv/2023

#module load starccm/18.06.006-R8
module load starccm-mixed/18.06.006

export LM_PROJECT='YOUR CD-ADAPCO PROJECT ID GOES HERE'
export CDLMD_LICENSE_FILE="1999@127.0.0.1"

# Tunnel the POD license ports through the Niagara login gateway,
# since the compute nodes cannot reach the internet directly
ssh nia-gw -L 1999:flex.cd-adapco.com:1999 -L 2099:flex.cd-adapco.com:2099 -N -f

cd $SLURM_SUBMIT_DIR
slurm_hl2hl.py --format STAR-CCM+ > $SLURM_SUBMIT_DIR/machinefile_$SLURM_JOB_ID
 
NCORE=$((SLURM_NNODES * SLURM_CPUS_PER_TASK * SLURM_NTASKS_PER_NODE))
 
# Workaround for license failures:
# try to start Star-CCM+ up to 5 times until the exit status is 0.
i=1
RET=-1
while [ $i -le 5 ] && [ $RET -ne 0 ]; do
    [ $i -eq 1 ] || sleep 5
    echo "Attempt number: $i"
    # Remove "-power -podkey $LM_PROJECT" when using an institutional license server.
    starccm+ -batch -power -podkey $LM_PROJECT -np $NCORE -machinefile $SLURM_SUBMIT_DIR/machinefile_$SLURM_JOB_ID $SLURM_SUBMIT_DIR/your-simulation-file.java $SLURM_SUBMIT_DIR/your-simulation-file.sim > $SLURM_JOB_ID.results
    RET=$?
    i=$((i+1))
done
exit $RET
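
After adapting one of the scripts above (account, module version, core count, and simulation file name), submit it to the scheduler in the usual way:

  sbatch starccm_job.sh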


Remote visualization

Preparation

To set up your account for remote visualization:

  1. Create ~/.licenses/starccm.lic as described above.
  2. Users with a POD license should also set export LM_PROJECT='CD-ADAPCO PROJECT ID' and add -power to the other command line options shown below.
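
For example, a POD user would run something like the following, where -mesa is included only when the sections below call for it and the project ID is a placeholder:

  export LM_PROJECT='CD-ADAPCO PROJECT ID'
  starccm+ -power -mesa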

Compute nodes

Connect to a compute node with TigerVNC, open a terminal window, and load the version you need as shown below:

STAR-CCM+ 15.04.010 (or newer versions):

  module load StdEnv/2020
  module load starccm-mixed/17.02.007 **OR** starccm/17.02.007-R8
  starccm+

STAR-CCM+ 14.06.010, 14.04.013, 14.02.012:

  module load StdEnv/2016
  module load starccm-mixed/14.06.010 **OR** starccm/14.06.010-R8
  starccm+

STAR-CCM+ 13.06.012 (or older versions):

  module load StdEnv/2016
  module load starccm-mixed/13.06.012 **OR** starccm/13.06.012-R8
  starccm+ -mesa

VDI nodes

Connect to gra-vdi.alliancecan.ca with TigerVNC and log in. Once the remote desktop appears, click Applications -> System Tools -> MATE Terminal to open a terminal window, then load the starccm version you want as shown below. After you have loaded a StdEnv, you may use the module avail starccm-mixed command to display which starccm versions are available. Note that currently only the MESA implementation of OpenGL is usable with starccm on gra-vdi, due to VirtualGL issues; VirtualGL would otherwise provide local GPU hardware acceleration for OpenGL-driven graphics.

STAR-CCM+ 18.04.008 (or newer versions):

  module load CcEnv StdEnv/2023
  module load starccm-mixed/18.04.008 **OR** starccm/18.04.008-R8
  starccm+ -mesa

STAR-CCM+ 15.04.010 to 18.02.008:

  module load CcEnv StdEnv/2020
  module load starccm-mixed/15.04.010 **OR** starccm/15.04.010-R8
  starccm+ -mesa

STAR-CCM+ 13.06.012 (or older versions):

  module load CcEnv StdEnv/2016
  module load starccm-mixed/13.06.012 **OR** starccm/13.06.012-R8
  starccm+ -mesa