Star-CCM+

STAR-CCM+ is a multidisciplinary engineering simulation suite to model acoustics, fluid dynamics, heat transfer, rheology, multiphase flows, particle flows, solid mechanics, reacting flows, electrochemistry, and electromagnetics. It is developed by Siemens.

License limitations

We have the authorization to host STAR-CCM+ binaries on our servers, but we do not provide licenses; you will need your own license to use this software. A remote POD license can be purchased directly from Siemens. Alternatively, a local license hosted at your institution can be used, provided it can be accessed through the firewall from the cluster where jobs are to be run.

Configuring your account

To configure your account to use your own license server with our Star-CCM+ module, create a license file $HOME/.licenses/starccm.lic containing

File : starccm.lic

SERVER IP ANY PORT
USE_SERVER


where you replace IP and PORT with the IP address and the port used by the license server at your institution.
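
For example, here is a minimal sketch that creates the file for a hypothetical institutional server at 172.16.0.5 listening on port 27000; both values are placeholders that you must replace with your own:

# The IP address and port below are hypothetical examples.
mkdir -p $HOME/.licenses
cat > $HOME/.licenses/starccm.lic << 'EOF'
SERVER 172.16.0.5 ANY 27000
USE_SERVER
EOF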

POD license file

Researchers who have purchased a POD license from Siemens may simply configure the following $HOME/.licenses/starccm.lic file on any of our clusters where Star-CCM+ jobs are to be run.

File : starccm.lic

SERVER flex.cd-adapco.com ANY 1999
USE_SERVER


Cluster batch job submission

Select one of the available modules:

  • starccm for the double-precision flavour,
  • starccm-mixed for the mixed-precision flavour.

You will also need to set up your job environment to use your license. If you are using CD-adapco's online pay-on-usage server, the configuration is rather simple. If you are using an internal license server, please contact technical support so that we can help you set up the access to it.
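
To see which versions of each flavour are installed on a given cluster, query the module system with the standard Lmod commands:

module spider starccm          # double-precision versions
module spider starccm-mixed    # mixed-precision versions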

Note that at Niagara, the compute nodes mount the $HOME filesystem as read-only. It is therefore important to define the environment variable $STARCCM_TMP and point it to a location on $SCRATCH that is unique to the version of STAR-CCM+; otherwise, STAR-CCM+ will try to create that directory in $HOME and crash in the process.
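
The scripts below handle this with two lines of the following form, which create a per-version directory on $SCRATCH before STAR-CCM+ starts ($EBVERSIONSTARCCM is set when the module is loaded):

export STARCCM_TMP="${SCRATCH}/.starccm-${EBVERSIONSTARCCM}"
mkdir -p "$STARCCM_TMP"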


File : starccm_job.sh

#!/bin/bash

#SBATCH --account=def-group   # Specify some account
#SBATCH --time=00-01:00       # Time limit: dd-hh:mm
#SBATCH --nodes=1             # Specify 1 or more nodes
#SBATCH --cpus-per-task=40    # Request all cores per node
#SBATCH --mem=0               # Request all memory per node
#SBATCH --ntasks-per-node=1   # Do not change this value

#module load StdEnv/2020      # Versions < 18.06.006
module load StdEnv/2023

#module load starccm/18.06.006-R8
module load starccm-mixed/18.06.006

export LM_PROJECT='YOUR CD-ADAPCO PROJECT ID GOES HERE'
export CDLMD_LICENSE_FILE="1999@flex.cd-adapco.com"

export STARCCM_TMP="${SCRATCH}/.starccm-${EBVERSIONSTARCCM}"
mkdir -p "$STARCCM_TMP"

slurm_hl2hl.py --format STAR-CCM+ > $STARCCM_TMP/machinefile-$SLURM_JOB_ID

NCORE=$((SLURM_NNODES * SLURM_CPUS_PER_TASK * SLURM_NTASKS_PER_NODE))

# Remove "-power -podkey $LM_PROJECT" when using an institutional license server…
starccm+ -batch -power -podkey $LM_PROJECT -np $NCORE -licpath $CDLMD_LICENSE_FILE -machinefile $STARCCM_TMP/machinefile-$SLURM_JOB_ID $PWD/your-file.sim
File : starccm_job.sh

#!/bin/bash

#SBATCH --account=def-group   # Specify some account
#SBATCH --time=00-01:00       # Time limit: dd-hh:mm
#SBATCH --nodes=1             # Specify 1 or more nodes
#SBATCH --cpus-per-task=48    # Request all cores per node (32 or 48)
#SBATCH --mem=0               # Request all memory per node
#SBATCH --ntasks-per-node=1   # Do not change this value

#module load StdEnv/2020      # Versions < 18.06.006
module load StdEnv/2023

#module load starccm/18.06.006-R8
module load starccm-mixed/18.06.006

export LM_PROJECT='YOUR CD-ADAPCO PROJECT ID GOES HERE'
export CDLMD_LICENSE_FILE="1999@flex.cd-adapco.com"

export STARCCM_TMP="${SCRATCH}/.starccm-${EBVERSIONSTARCCM}"
mkdir -p "$STARCCM_TMP"

slurm_hl2hl.py --format STAR-CCM+ > $STARCCM_TMP/machinefile-$SLURM_JOB_ID

NCORE=$((SLURM_NNODES * SLURM_CPUS_PER_TASK * SLURM_NTASKS_PER_NODE))

# Remove "-power -podkey $LM_PROJECT" when using an institutional license server…
starccm+ -batch -power -podkey $LM_PROJECT -np $NCORE -licpath $CDLMD_LICENSE_FILE -machinefile $STARCCM_TMP/machinefile-$SLURM_JOB_ID $PWD/your-file.sim -mpi intel -fabric psm2
File : starccm_job.sh

#!/bin/bash

#SBATCH --account=def-group   # Specify some account
#SBATCH --time=00-01:00       # Time limit: dd-hh:mm
#SBATCH --nodes=1             # Specify 1 or more nodes
#SBATCH --cpus-per-task=32    # Request all cores per node (32 or 44)
#SBATCH --mem=0               # Request all memory per node
#SBATCH --ntasks-per-node=1   # Do not change this value

#module load StdEnv/2020      # Versions < 18.06.006
module load StdEnv/2023

#module load starccm/18.06.006-R8
module load starccm-mixed/18.06.006

export LM_PROJECT='YOUR CD-ADAPCO PROJECT ID GOES HERE'
export CDLMD_LICENSE_FILE="1999@flex.cd-adapco.com"

export STARCCM_TMP="${SCRATCH}/.starccm-${EBVERSIONSTARCCM}"
mkdir -p "$STARCCM_TMP"

slurm_hl2hl.py --format STAR-CCM+ > $STARCCM_TMP/machinefile-$SLURM_JOB_ID

NCORE=$((SLURM_NNODES * SLURM_CPUS_PER_TASK * SLURM_NTASKS_PER_NODE))

# Remove "-power -podkey $LM_PROJECT" when using an institutional license server…
starccm+ -jvmargs -Xmx4G -jvmargs -Djava.io.tmpdir=$SLURM_TMPDIR -batch -power -podkey $LM_PROJECT -np $NCORE -licpath $CDLMD_LICENSE_FILE -machinefile $STARCCM_TMP/machinefile-$SLURM_JOB_ID $PWD/your-file.sim -mpi intel -fabric psm2
File : starccm_job.sh

#!/bin/bash

#SBATCH --account=def-group   # Specify some account
#SBATCH --time=00-01:00       # Time limit: dd-hh:mm
#SBATCH --nodes=1             # Specify 1 or more nodes
#SBATCH --cpus-per-task=64    # Request all cores per node
#SBATCH --mem=0               # Request all memory per node
#SBATCH --ntasks-per-node=1   # Do not change this value

#module load StdEnv/2020      # Versions < 18.06.006
module load StdEnv/2023

#module load starccm/18.06.006-R8
module load starccm-mixed/18.06.006

export LM_PROJECT='YOUR CD-ADAPCO PROJECT ID GOES HERE'
export CDLMD_LICENSE_FILE="1999@flex.cd-adapco.com"

export STARCCM_TMP="${SCRATCH}/.starccm-${EBVERSIONSTARCCM}"
mkdir -p "$STARCCM_TMP"

slurm_hl2hl.py --format STAR-CCM+ > $STARCCM_TMP/machinefile-$SLURM_JOB_ID

NCORE=$((SLURM_NNODES * SLURM_CPUS_PER_TASK * SLURM_NTASKS_PER_NODE))

# Remove "-power -podkey $LM_PROJECT" when using an institutional license server…
starccm+ -jvmargs -Xmx4G -jvmargs -Djava.io.tmpdir=$SLURM_TMPDIR -batch -power -podkey $LM_PROJECT -np $NCORE -licpath $CDLMD_LICENSE_FILE -machinefile $STARCCM_TMP/machinefile-$SLURM_JOB_ID $PWD/your-file.sim -mpi openmpi


File : starccm_job.sh

#!/bin/bash

#SBATCH --account=def-group   # Specify some account
#SBATCH --time=00-01:00       # Time limit: dd-hh:mm
#SBATCH --nodes=1             # Specify 1 or more nodes
#SBATCH --cpus-per-task=40    # Request all cores per node
#SBATCH --mem=0               # Request all memory per node
#SBATCH --ntasks-per-node=1   # Do not change this value
#SBATCH --mail-type=BEGIN
#SBATCH --mail-type=END
 
module load CCEnv

#module load StdEnv/2020      # Versions < 18.06.006
module load StdEnv/2023

#module load starccm/18.06.006-R8
module load starccm-mixed/18.06.006

export LM_PROJECT='YOUR CD-ADAPCO PROJECT ID GOES HERE'
export CDLMD_LICENSE_FILE="1999@127.0.0.1"

ssh nia-gw -L 1999:flex.cd-adapco.com:1999 -L 2099:flex.cd-adapco.com:2099 -N -f

cd $SLURM_SUBMIT_DIR
slurm_hl2hl.py --format STAR-CCM+ > $SLURM_SUBMIT_DIR/machinefile_$SLURM_JOB_ID
 
NCORE=$((SLURM_NNODES * SLURM_CPUS_PER_TASK * SLURM_NTASKS_PER_NODE))
 
# Workaround for license failures:
# retry launching Star-CCM+ until the exit status equals 0 (up to 5 attempts).
i=1
RET=-1
while [ $i -le 5 ] && [ $RET -ne 0 ]; do
        [ $i -eq 1 ] || sleep 5
        echo "Attempt number: "$I
        # Remove "-power -podkey $LM_PROJECT" when using an institutional license server…
        starccm+ -batch -power -podkey $LM_PROJECT -np $NCORE -machinefile $SLURM_SUBMIT_DIR/machinefile_$SLURM_JOB_ID $SLURM_SUBMIT_DIR/your-simulation-file.java $SLURM_SUBMIT_DIR/your-simulation-file.sim > $SLURM_JOB_ID.results
        RET=$?
        i=$((i+1))
done
exit $RET
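
Once you have adapted one of the scripts above (account, time limit, module version, and simulation file name), submit it from the directory containing your simulation file:

sbatch starccm_job.sh     # submit the job
squeue -u $USER           # check its status in the queue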


Remote visualization

Preparation

To set up your account for remote visualization:

  1. Create ~/.licenses/starccm.lic as described above.
  2. Users with a POD license should also set export LM_PROJECT='CD-ADAPCO PROJECT ID' and add -power to the other command line options shown below (see the sketch after this list).
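
For instance, a POD license holder launching the client interactively might use something like the following sketch (the project ID is a placeholder):

export LM_PROJECT='CD-ADAPCO PROJECT ID'   # placeholder POD project ID
starccm+ -power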

Compute nodes

Connect with TigerVNC and open a terminal window…

STAR-CCM+ 15.04.010 (or newer versions)
module load StdEnv/2020
module load starccm-mixed/17.02.007 OR starccm/17.02.007-R8
starccm+
STAR-CCM+ 14.06.010, 14.04.013, 14.02.012
module load StdEnv/2016
module load starccm-mixed/14.06.010 OR starccm/14.06.010-R8
starccm+
STAR-CCM+ 13.06.012 (or older versions)
module load StdEnv/2016
module load starccm-mixed/13.06.012 OR starccm/13.06.012-R8
starccm+ -mesa

VDI nodes

Connect with TigerVNC and open a terminal window (Applications -> Systems Tools -> Mate Terminal)…

STAR-CCM+ 15.04.010 (or newer versions)
module load CcEnv StdEnv/2020
module load starccm-mixed/17.02.007 OR starccm/17.02.007-R8
starccm+ -clientldpreload /usr/lib64/VirtualGL/libvglfaker.so
STAR-CCM+ 14.06.013 (this version only)
module load CcEnv StdEnv/2016
module load starccm-mixed/14.06.013 OR starccm/14.06.013-R8
starccm+ -clientldpreload /usr/lib64/VirtualGL/libvglfaker.so
STAR-CCM+ 13.06.012 (or older versions)
module load CcEnv StdEnv/2016
module load starccm-mixed/13.06.012 OR starccm/13.06.012-R8
starccm+ -mesa