Star-CCM+

STAR-CCM+ is a multidisciplinary engineering simulation suite developed by Siemens. It can model acoustics, fluid dynamics, heat transfer, rheology, multiphase flows, particle flows, solid mechanics, reacting flows, electrochemistry, and electromagnetics.

License limitations

We are authorized to host STAR-CCM+ binaries on our servers, but we do not provide licenses; you must have your own license in order to use this software. A remote POD license can be purchased directly from Siemens. Alternatively, a local license hosted at your institution can be used, provided it can be reached through the firewall from the cluster where jobs are to be run.

Configuring your account

To configure your account to use a license server with our STAR-CCM+ module, create a license file $HOME/.licenses/starccm.lic with the following layout:

File : starccm.lic

SERVER IP ANY PORT
USE_SERVER


where IP and PORT should be replaced by the IP address and the static starccm port used by the license server.
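
For example, if your institutional license server were reachable at 172.20.1.50 with its starccm port pinned to 1999 (hypothetical values, for illustration only), the file would read:

SERVER 172.20.1.50 ANY 1999
USE_SERVER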

POD license file

Researchers who have purchased a POD license from Siemens may simply configure their license file on any cluster as follows:

File : starccm.lic

SERVER flex.cd-adapco.com ANY 1999
USE_SERVER
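
Before submitting jobs, you can optionally confirm that the POD server is reachable from a login node; this quick check assumes the nc (netcat) utility is available on the system:

# Prints a "succeeded" message if port 1999 is reachable through the firewall
nc -zv flex.cd-adapco.com 1999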


Cluster batch job submission

Select one of the available modules:

  • starccm for the double-precision flavour (e.g., module load starccm/19.04.007-R8),
  • starccm-mixed for the mixed-precision flavour (e.g., module load starccm-mixed/19.04.007).

When submitting jobs on a cluster for the first time, you must set up the environment to use your license. If you are using CD-adapco's online pay-on-usage server, simply create a ~/.licenses/starccm.lic file as shown in the POD license file section above and license checkouts should immediately work. If, however, you are using an internal license server, then after creating ~/.licenses/starccm.lic you must submit a problem ticket to technical support so we can help coordinate the one-time network firewall changes required to access it. If you still have problems getting the licensing to work, try removing or renaming the file ~/.flexlmrc, since search paths and/or license server settings from previous runs may be stored in it and conflict with your current starccm.lic settings. Note that files from previous runs may have accumulated a significant amount of disk space in hidden directories named .star-version_number if you have run many long jobs in the past using older Slurm scripts, or if you regularly run starccm+ in GUI mode. These can be removed periodically by carefully running rm -ri ~/.starccm* and replying yes when prompted.
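
Before deleting anything, you can check how much space these hidden directories are actually using; this one-liner assumes standard GNU coreutils:

# Summarize the disk usage of hidden STAR-CCM+ directories in your home
du -sh ~/.star* 2>/dev/null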

File : starccm_job.sh

#!/bin/bash

#SBATCH --account=def-group   # Specify some account
#SBATCH --time=00-01:00       # Time limit: dd-hh:mm
#SBATCH --nodes=1             # Specify 1 or more nodes
#SBATCH --cpus-per-task=40    # Request all cores per node
#SBATCH --mem=0               # Request all memory per node
#SBATCH --ntasks-per-node=1   # Do not change this value

#module load StdEnv/2020      # Versions < 18.06.006
module load StdEnv/2023

#module load starccm/18.06.006-R8
module load starccm-mixed/18.06.006

export LM_PROJECT='YOUR CD-ADAPCO PROJECT ID GOES HERE'
export CDLMD_LICENSE_FILE="1999@flex.cd-adapco.com"

export STARCCM_TMP="${SCRATCH}/.starccm-${EBVERSIONSTARCCM}"
mkdir -p "$STARCCM_TMP"

slurm_hl2hl.py --format STAR-CCM+ > $STARCCM_TMP/machinefile-$SLURM_JOB_ID

NCORE=$((SLURM_NNODES * SLURM_CPUS_PER_TASK * SLURM_NTASKS_PER_NODE))

# Remove "-power -podkey $LM_PROJECT" when using an institutional license server
starccm+ -batch -power -podkey $LM_PROJECT -np $NCORE -licpath $CDLMD_LICENSE_FILE -nbuserdir $SLURM_TMPDIR -machinefile $STARCCM_TMP/machinefile-$SLURM_JOB_ID $PWD/your-file.sim
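
The NCORE line above simply multiplies the three quantities requested in the Slurm header. As a quick sanity check for this particular script:

# --nodes=1 x --cpus-per-task=40 x --ntasks-per-node=1
echo $((1 * 40 * 1))    # prints 40, i.e. every core of one 40-core node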
File : starccm_job.sh

#!/bin/bash

#SBATCH --account=def-group   # Specify some account
#SBATCH --time=00-01:00       # Time limit: dd-hh:mm
#SBATCH --nodes=1             # Specify 1 or more nodes
#SBATCH --cpus-per-task=48    # or 32; request all cores per node
#SBATCH --mem=0               # Request all memory per node
#SBATCH --ntasks-per-node=1   # Do not change this value

#module load StdEnv/2020      # Versions < 18.06.006
module load StdEnv/2023

#module load starccm/18.06.006-R8
module load starccm-mixed/18.06.006

export LM_PROJECT='YOUR CD-ADAPCO PROJECT ID GOES HERE'
export CDLMD_LICENSE_FILE="1999@flex.cd-adapco.com"

export STARCCM_TMP="${SCRATCH}/.starccm-${EBVERSIONSTARCCM}"
mkdir -p "$STARCCM_TMP"

slurm_hl2hl.py --format STAR-CCM+ > $STARCCM_TMP/machinefile-$SLURM_JOB_ID

NCORE=$((SLURM_NNODES * SLURM_CPUS_PER_TASK * SLURM_NTASKS_PER_NODE))

# Remove "-power -podkey $LM_PROJECT" when using an institutional license server
starccm+ -batch -power -podkey $LM_PROJECT -np $NCORE -licpath $CDLMD_LICENSE_FILE -nbuserdir $SLURM_TMPDIR -machinefile $STARCCM_TMP/machinefile-$SLURM_JOB_ID $PWD/your-file.sim -mpi intel -fabric psm2
File : starccm_job.sh

#!/bin/bash

#SBATCH --account=def-group   # Specify some account
#SBATCH --time=00-01:00       # Time limit: dd-hh:mm
#SBATCH --nodes=1             # Specify 1 or more nodes
#SBATCH --cpus-per-task=32    # or 44; request all cores per node
#SBATCH --mem=0               # Request all memory per node
#SBATCH --ntasks-per-node=1   # Do not change this value

#module load StdEnv/2020      # Versions < 18.06.006
module load StdEnv/2023

#module load starccm/18.06.006-R8
module load starccm-mixed/18.06.006

export LM_PROJECT='YOUR CD-ADAPCO PROJECT ID GOES HERE'
export CDLMD_LICENSE_FILE="1999@flex.cd-adapco.com"

slurm_hl2hl.py --format STAR-CCM+ > $SLURM_TMPDIR/machinefile-$SLURM_JOB_ID

NCORE=$((SLURM_NNODES * SLURM_CPUS_PER_TASK * SLURM_NTASKS_PER_NODE))

# Remove "-power -podkey $LM_PROJECT" when using an institutional license server
starccm+ -jvmargs -Xmx4G -jvmargs -Djava.io.tmpdir=$SLURM_TMPDIR -batch -power -podkey $LM_PROJECT -np $NCORE -licpath $CDLMD_LICENSE_FILE -nbuserdir $SLURM_TMPDIR -machinefile $SLURM_TMPDIR/machinefile-$SLURM_JOB_ID $SLURM_SUBMIT_DIR/your-file.sim -mpi intel -fabric psm2
File : starccm_job.sh

#!/bin/bash

#SBATCH --account=def-group   # Specify some account
#SBATCH --time=00-01:00       # Time limit: dd-hh:mm
#SBATCH --nodes=1             # Specify 1 or more nodes
#SBATCH --cpus-per-task=64    # Request all cores per node
#SBATCH --mem=0               # Request all memory per node
#SBATCH --ntasks-per-node=1   # Do not change this value

#module load StdEnv/2020      # Versions < 18.06.006
module load StdEnv/2023

#module load starccm/18.06.006-R8
module load starccm-mixed/18.06.006

export LM_PROJECT='YOUR CD-ADAPCO PROJECT ID GOES HERE'
export CDLMD_LICENSE_FILE="1999@flex.cd-adapco.com"

export STARCCM_TMP="${SCRATCH}/.starccm-${EBVERSIONSTARCCM}"
mkdir -p "$STARCCM_TMP"

slurm_hl2hl.py --format STAR-CCM+ > $STARCCM_TMP/machinefile-$SLURM_JOB_ID

NCORE=$((SLURM_NNODES * SLURM_CPUS_PER_TASK * SLURM_NTASKS_PER_NODE))

# Remove "-power -podkey $LM_PROJECT" when using an institutional license server
starccm+ -jvmargs -Xmx4G -jvmargs -Djava.io.tmpdir=$SLURM_TMPDIR -batch -power -podkey $LM_PROJECT -np $NCORE -licpath $CDLMD_LICENSE_FILE -nbuserdir $SLURM_TMPDIR -machinefile $STARCCM_TMP/machinefile-$SLURM_JOB_ID $PWD/your-file.sim -mpi openmpi


File : starccm_job.sh

#!/bin/bash

#SBATCH --account=def-group   # Specify some account
#SBATCH --time=00-01:00       # Time limit: dd-hh:mm
#SBATCH --nodes=1             # Specify 1 or more nodes
#SBATCH --cpus-per-task=40    # Request all cores per node
#SBATCH --mem=0               # Request all memory per node
#SBATCH --ntasks-per-node=1   # Do not change this value
#SBATCH --mail-type=BEGIN
#SBATCH --mail-type=END
 
module load CCEnv

#module load StdEnv/2020      # Versions < 18.06.006
module load StdEnv/2023

#module load starccm/18.06.006-R8
module load starccm-mixed/18.06.006

export LM_PROJECT='YOUR CD-ADAPCO PROJECT ID GOES HERE'
export CDLMD_LICENSE_FILE="1999@127.0.0.1"

ssh nia-gw -L 1999:flex.cd-adapco.com:1999 -L 2099:flex.cd-adapco.com:2099 -N -f
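# The ssh command above forwards license ports 1999 and 2099 through the
# Niagara login gateway (nia-gw) so that the loopback address set in
# CDLMD_LICENSE_FILE can reach flex.cd-adapco.com from the compute node.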

cd $SLURM_SUBMIT_DIR
slurm_hl2hl.py --format STAR-CCM+ > $SLURM_SUBMIT_DIR/machinefile-$SLURM_JOB_ID
 
NCORE=$((SLURM_NNODES * SLURM_CPUS_PER_TASK * SLURM_NTASKS_PER_NODE))
 
# Workaround for license failures:
# try to start STAR-CCM+ until the exit status equals 0 (up to 5 attempts).
i=1
RET=-1
while [ $i -le 5 ] && [ $RET -ne 0 ]; do
        [ $i -eq 1 ] || sleep 5
        echo "Attempt number: "$I
        # Remove "-power -podkey $LM_PROJECT" when using an institutional license server…
        starccm+ -batch -power -podkey $LM_PROJECT -np $NCORE -nbuserdir $SLURM_TMPDIR -machinefile $SLURM_SUBMIT_DIR/machinefile-$SLURM_JOB_ID $SLURM_SUBMIT_DIR/your-simulation-file.java $SLURM_SUBMIT_DIR/your-simulation-file.sim > $SLURM_JOB_ID.results
        RET=$?
        i=$((i+1))
done
exit $RET


Remote visualization

Preparation

To set up your account for remote visualization:

  1. Create ~/.licenses/starccm.lic as described above.
  2. Users with a POD license should also set export LM_PROJECT='CD-ADAPCO PROJECT ID' and add -power to the other command line options shown below.
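
Putting both steps together, a POD user would typically run something like the following before the version-specific commands in the sections below (the project ID is a placeholder):

export LM_PROJECT='CD-ADAPCO PROJECT ID'
starccm+ -power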

Compute nodes

Connect to a compute node with TigerVNC, open a terminal window, and load the version you require as shown below:

STAR-CCM+ 15.04.010 (or newer versions)
module load StdEnv/2020
module load starccm-mixed/17.02.007 OR starccm/17.02.007-R8
starccm+
STAR-CCM+ 14.06.010, 14.04.013, 14.02.012
module load StdEnv/2016
module load starccm-mixed/14.06.010 OR starccm/14.06.010-R8
starccm+
STAR-CCM+ 13.06.012 (or older versions)
module load StdEnv/2016
module load starccm-mixed/13.06.012 OR starccm/13.06.012-R8
starccm+ -mesa

VDI nodes

Connect to gra-vdi.alliancecan.ca with TigerVNC and log in. Once the remote desktop appears, click Applications -> System Tools -> Mate Terminal to open a terminal window, then load the starccm version of your choice as shown below. After you have loaded a StdEnv, you can run the module avail starccm-mixed command to display which starccm versions are available. Currently only the MESA implementation of OpenGL is usable with starccm on gra-vdi, due to issues with VirtualGL, which would otherwise provide local GPU hardware acceleration for OpenGL-driven graphics.
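
For example, to list the mixed-precision versions available under the 2023 standard environment:

module load CcEnv StdEnv/2023
module avail starccm-mixed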

STAR-CCM+ 18.04.008 (or newer versions)
module load CcEnv StdEnv/2023
module load starccm-mixed/18.04.008 OR starccm/18.04.008-R8
starccm+ -mesa
STAR-CCM+ 15.04.010 --> 18.02.008 (version range)
module load CcEnv StdEnv/2020
module load starccm-mixed/15.04.010 OR starccm/15.04.010-R8
starccm+ -mesa
STAR-CCM+ 13.06.012 (or older versions)
module load CcEnv StdEnv/2016
module load starccm-mixed/13.06.012 OR starccm/13.06.012-R8
starccm+ -mesa