Star-CCM+
Latest revision as of 18:02, 31 October 2024
STAR-CCM+ is a multidisciplinary engineering simulation suite to model acoustics, fluid dynamics, heat transfer, rheology, multiphase flows, particle flows, solid mechanics, reacting flows, electrochemistry, and electromagnetics. It is developed by Siemens.
License limitations
We have the authorization to host STAR-CCM+ binaries on our servers, but we do not provide licenses. You will need your own license in order to use this software. A remote POD license can be purchased directly from Siemens. Alternatively, a local license hosted at your institution can be used, provided it can be accessed through the firewall from the cluster where jobs are to be run.
Configuring your account
In order to configure your account to use a license server with our STAR-CCM+ module, create a license file $HOME/.licenses/starccm.lic with the following layout:
SERVER <server> ANY <port>
USE_SERVER
where <server> and <port> should be changed to specify the hostname (or IP address) and the static vendor port of the license server, respectively.
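For example, the file could be created from the command line as follows. The hostname lic.example.ca and port 27001 below are purely illustrative placeholders; substitute the values provided by your license administrator.

```shell
# Create the license file; hostname and port are hypothetical examples
mkdir -p "$HOME/.licenses"
cat > "$HOME/.licenses/starccm.lic" << 'EOF'
SERVER lic.example.ca ANY 27001
USE_SERVER
EOF
cat "$HOME/.licenses/starccm.lic"
```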
POD license file
Researchers with a POD license purchased from Siemens can specify it by creating a ~/.licenses/starccm.lic text file as follows:
SERVER flex.cd-adapco.com ANY 1999
USE_SERVER
on any cluster (except Niagara), as well as setting LM_PROJECT to your CD-ADAPCO PROJECT ID in your Slurm script. Please note that manually setting CDLMD_LICENSE_FILE="<port>@<server>" in your Slurm script is no longer required.
Cluster batch job submission
Select one of the available modules:
- starccm for the double-precision flavour (i.e., module load starccm/19.04.007-R8),
- starccm-mixed for the mixed-precision flavour (i.e., module load starccm-mixed/19.04.007).
When submitting jobs on a cluster for the first time, you must set up the environment to use your license. If you are using the Siemens remote pay-on-usage license server, create a ~/.licenses/starccm.lic file as shown in the Configuring your account - POD license file section above, and license checkouts should immediately work. If, however, you are using an institutional license server, then after creating your ~/.licenses/starccm.lic file you must also submit a problem ticket to technical support so we can help coordinate the necessary one-time network firewall changes required to access it (assuming the server has never been set up to be accessed from the Alliance cluster you will be using). If you still have problems getting the licensing to work, try removing or renaming the file ~/.flexlmrc, since previous search paths and/or license server settings may be stored in it. Note that temporary output files from STAR-CCM+ job runs may accumulate in hidden directories named ~/.star-version_number, consuming valuable quota space. These can be removed by periodically running rm -ri ~/.starccm* and replying yes when prompted.
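Before removing anything, it can be useful to see how much quota these hidden scratch directories actually consume. A minimal sketch using standard tools (the glob simply matches any .star* entries in your home directory, if present):

```shell
# Report the size of any STAR-CCM+ scratch directories in $HOME, if present
du -sh "$HOME"/.star* 2> /dev/null || echo "no .star* directories found"
# Then remove them interactively, answering yes when prompted:
# rm -ri ~/.starccm*
```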
Slurm Scripts
#!/bin/bash
#SBATCH --account=def-group # Specify some account
#SBATCH --time=00-01:00 # Time limit: dd-hh:mm
#SBATCH --nodes=1 # Specify 1 or more nodes
#SBATCH --cpus-per-task=40 # Request all cores per node
#SBATCH --mem=0 # Request all memory per node
#SBATCH --ntasks-per-node=1 # Do not change this value
#module load StdEnv/2020 # Versions < 18.06.006
module load StdEnv/2023
#module load starccm/18.06.006-R8
module load starccm-mixed/18.06.006
SIM_FILE='mysample.sim' # Specify your input sim filename
#JAVA_FILE='mymacros.java' # Uncomment to specify an input java filename
# Comment the next line when using an institutional license server
LM_PROJECT='my22digitpodkey' # Specify your Siemens Power on Demand (PoD) Key
# ------- no changes required below this line --------
slurm_hl2hl.py --format STAR-CCM+ > $SLURM_TMPDIR/machinefile
NCORE=$((SLURM_NNODES * SLURM_CPUS_PER_TASK * SLURM_NTASKS_PER_NODE))
if [ -n "$LM_PROJECT" ]; then
# Siemens PoD license server
starccm+ -jvmargs -Xmx4G -jvmargs -Djava.io.tmpdir=$SLURM_TMPDIR -batch -power -podkey $LM_PROJECT -np $NCORE -nbuserdir $SLURM_TMPDIR -machinefile $SLURM_TMPDIR/machinefile $JAVA_FILE $SIM_FILE
else
# Institutional license server
starccm+ -jvmargs -Xmx4G -jvmargs -Djava.io.tmpdir=$SLURM_TMPDIR -batch -np $NCORE -nbuserdir $SLURM_TMPDIR -machinefile $SLURM_TMPDIR/machinefile $JAVA_FILE $SIM_FILE
fi
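With the directives in the script above (--nodes=1, --cpus-per-task=40, --ntasks-per-node=1), the NCORE arithmetic expands as shown below; the values are hard-coded here purely for illustration, since Slurm exports them automatically inside a job.

```shell
# Illustrative values matching the sbatch directives above
SLURM_NNODES=1
SLURM_CPUS_PER_TASK=40
SLURM_NTASKS_PER_NODE=1
NCORE=$((SLURM_NNODES * SLURM_CPUS_PER_TASK * SLURM_NTASKS_PER_NODE))
echo "$NCORE"   # prints 40; with --nodes=2 it would be 80
```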
#!/bin/bash
#SBATCH --account=def-group # Specify some account
#SBATCH --time=00-01:00 # Time limit: dd-hh:mm
#SBATCH --nodes=1 # Specify 1 or more nodes
#SBATCH --cpus-per-task=48 # Request all cores per node (32 or 48)
#SBATCH --mem=0 # Request all memory per node
#SBATCH --ntasks-per-node=1 # Do not change this value
#module load StdEnv/2020 # Versions < 18.06.006
module load StdEnv/2023
#module load starccm/18.06.006-R8
module load starccm-mixed/18.06.006
SIM_FILE='mysample.sim' # Specify your input sim filename
#JAVA_FILE='mymacros.java' # Uncomment to specify an input java filename
# Comment the next line when using an institutional license server
LM_PROJECT='my22digitpodkey' # Specify your Siemens Power on Demand (PoD) Key
# ------- no changes required below this line --------
slurm_hl2hl.py --format STAR-CCM+ > $SLURM_TMPDIR/machinefile
NCORE=$((SLURM_NNODES * SLURM_CPUS_PER_TASK * SLURM_NTASKS_PER_NODE))
if [ -n "$LM_PROJECT" ]; then
# Siemens PoD license server
starccm+ -jvmargs -Xmx4G -jvmargs -Djava.io.tmpdir=$SLURM_TMPDIR -batch -power -podkey $LM_PROJECT -np $NCORE -nbuserdir $SLURM_TMPDIR -machinefile $SLURM_TMPDIR/machinefile $JAVA_FILE $SIM_FILE -mpi intel -fabric psm2
else
# Institutional license server
starccm+ -jvmargs -Xmx4G -jvmargs -Djava.io.tmpdir=$SLURM_TMPDIR -batch -np $NCORE -nbuserdir $SLURM_TMPDIR -machinefile $SLURM_TMPDIR/machinefile $JAVA_FILE $SIM_FILE -mpi intel -fabric psm2
fi
#!/bin/bash
#SBATCH --account=def-group # Specify some account
#SBATCH --time=00-01:00 # Time limit: dd-hh:mm
#SBATCH --nodes=1 # Specify 1 or more nodes
#SBATCH --cpus-per-task=32 # Request all cores per node (32 or 44)
#SBATCH --mem=0 # Request all memory per node
#SBATCH --ntasks-per-node=1 # Do not change this value
#module load StdEnv/2020 # Versions < 18.06.006
module load StdEnv/2023
#module load starccm/18.06.006-R8
module load starccm-mixed/18.06.006
SIM_FILE='mysample.sim' # Specify your input sim filename
#JAVA_FILE='mymacros.java' # Uncomment to specify an input java filename
# Comment the next line when using an institutional license server
LM_PROJECT='my22digitpodkey' # Specify your Siemens Power on Demand (PoD) Key
# ------- no changes required below this line --------
slurm_hl2hl.py --format STAR-CCM+ > $SLURM_TMPDIR/machinefile
NCORE=$((SLURM_NNODES * SLURM_CPUS_PER_TASK * SLURM_NTASKS_PER_NODE))
if [ -n "$LM_PROJECT" ]; then
# Siemens PoD license server
starccm+ -jvmargs -Xmx4G -jvmargs -Djava.io.tmpdir=$SLURM_TMPDIR -batch -power -podkey $LM_PROJECT -np $NCORE -nbuserdir $SLURM_TMPDIR -machinefile $SLURM_TMPDIR/machinefile $JAVA_FILE $SIM_FILE -mpi intel -fabric psm2
else
# Institutional license server
starccm+ -jvmargs -Xmx4G -jvmargs -Djava.io.tmpdir=$SLURM_TMPDIR -batch -np $NCORE -nbuserdir $SLURM_TMPDIR -machinefile $SLURM_TMPDIR/machinefile $JAVA_FILE $SIM_FILE -mpi intel -fabric psm2
fi
#!/bin/bash
#SBATCH --account=def-group # Specify some account
#SBATCH --time=00-01:00 # Time limit: dd-hh:mm
#SBATCH --nodes=1 # Specify 1 or more nodes
#SBATCH --cpus-per-task=64 # Request all cores per node
#SBATCH --mem=0 # Request all memory per node
#SBATCH --ntasks-per-node=1 # Do not change this value
#module load StdEnv/2020 # Versions < 18.06.006
module load StdEnv/2023
#module load starccm/18.06.006-R8
module load starccm-mixed/18.06.006
SIM_FILE='mysample.sim' # Specify your input sim filename
#JAVA_FILE='mymacros.java' # Uncomment to specify an input java filename
# Comment the next line when using an institutional license server
LM_PROJECT='my22digitpodkey' # Specify your Siemens Power on Demand (PoD) Key
# ------- no changes required below this line --------
slurm_hl2hl.py --format STAR-CCM+ > $SLURM_TMPDIR/machinefile
NCORE=$((SLURM_NNODES * SLURM_CPUS_PER_TASK * SLURM_NTASKS_PER_NODE))
if [ -n "$LM_PROJECT" ]; then
# Siemens PoD license server
starccm+ -jvmargs -Xmx4G -jvmargs -Djava.io.tmpdir=$SLURM_TMPDIR -batch -power -podkey $LM_PROJECT -np $NCORE -nbuserdir $SLURM_TMPDIR -machinefile $SLURM_TMPDIR/machinefile $JAVA_FILE $SIM_FILE -mpi openmpi
else
# Institutional license server
starccm+ -jvmargs -Xmx4G -jvmargs -Djava.io.tmpdir=$SLURM_TMPDIR -batch -np $NCORE -nbuserdir $SLURM_TMPDIR -machinefile $SLURM_TMPDIR/machinefile $JAVA_FILE $SIM_FILE -mpi openmpi
fi
#!/bin/bash
#SBATCH --account=def-group # Specify some account
#SBATCH --time=00-01:00 # Time limit: dd-hh:mm
#SBATCH --nodes=1 # Specify 1 or more nodes
#SBATCH --cpus-per-task=40 # Request all cores per node
#SBATCH --mem=0 # Request all memory per node
#SBATCH --ntasks-per-node=1 # Do not change this value
module load CCEnv
#module load StdEnv/2020 # Versions < 18.06.006
module load StdEnv/2023
#module load starccm/18.06.006-R8
module load starccm-mixed/18.06.006
SIM_FILE='mysample.sim' # Specify input sim filename
#JAVA_FILE='mymacros.java' # Uncomment to specify an input java filename
# Comment the next line when using an institutional license server
LM_PROJECT='my22digitpodkey' # Specify your Siemens Power on Demand (PoD) Key
# These settings are used instead of your ~/.licenses/starccm.lic
# (settings shown will use the cd-adapco pod license server)
FLEXPORT=1999 # Specify server static flex port
VENDPORT=2099 # Specify server static vendor port
LICSERVER=flex.cd-adapco.com # Specify license server hostname
# ------- no changes required below this line --------
export CDLMD_LICENSE_FILE="$FLEXPORT@127.0.0.1"
ssh nia-gw -L $FLEXPORT:$LICSERVER:$FLEXPORT -L $VENDPORT:$LICSERVER:$VENDPORT -N -f
slurm_hl2hl.py --format STAR-CCM+ > $SLURM_TMPDIR/machinefile
NCORE=$((SLURM_NNODES * SLURM_CPUS_PER_TASK * SLURM_NTASKS_PER_NODE))
# Workaround for license failures:
# retry starting STAR-CCM+ until the exit status is 0, up to 5 attempts.
i=1
RET=-1
while [ $i -le 5 ] && [ $RET -ne 0 ]; do
[ $i -eq 1 ] || sleep 5
echo "Attempt number: $i"
if [ -n "$LM_PROJECT" ]; then
# Siemens PoD license server
starccm+ -jvmargs -Xmx4G -jvmargs -Djava.io.tmpdir=$SLURM_TMPDIR -batch -power -podkey $LM_PROJECT -np $NCORE -nbuserdir $SLURM_TMPDIR -machinefile $SLURM_TMPDIR/machinefile $JAVA_FILE $SIM_FILE
else
# Institutional license server
starccm+ -jvmargs -Xmx4G -jvmargs -Djava.io.tmpdir=$SLURM_TMPDIR -batch -np $NCORE -nbuserdir $SLURM_TMPDIR -machinefile $SLURM_TMPDIR/machinefile $JAVA_FILE $SIM_FILE
fi
RET=$?
i=$((i+1))
done
exit $RET
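The retry loop above can be exercised in isolation with a stand-in command in place of starccm+; here [ $i -ge 3 ] simply simulates a license checkout that fails on the first two attempts and succeeds on the third.

```shell
i=1
RET=-1
while [ $i -le 5 ] && [ $RET -ne 0 ]; do
    [ $i -eq 1 ] || sleep 1
    [ $i -ge 3 ]          # stand-in for starccm+; fails until attempt 3
    RET=$?
    i=$((i+1))
done
echo "finished with status $RET after $((i-1)) attempts"
```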
Remote visualization
Preparation
To set up your account for remote visualization:
- Create ~/.licenses/starccm.lic as described above.
- Users with a POD license should also:
  - set export LM_PROJECT='CD-ADAPCO PROJECT ID', and
  - add -power to the other command line options shown below.
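As a sketch, the POD setting amounts to exporting the variable before launching STAR-CCM+; the value below is a placeholder for your actual project ID.

```shell
# Placeholder value; replace with your actual CD-adapco project ID
export LM_PROJECT='CD-ADAPCO PROJECT ID'
echo "$LM_PROJECT"
```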
Compute nodes
Connect with TigerVNC and open a terminal window…
- STAR-CCM+ 15.04.010 (or newer versions)
  module load StdEnv/2020
  module load starccm-mixed/17.02.007 OR starccm/17.02.007-R8
  starccm+
- STAR-CCM+ 14.06.010, 14.04.013, 14.02.012
  module load StdEnv/2016
  module load starccm-mixed/14.06.010 OR starccm/14.06.010-R8
  starccm+
- STAR-CCM+ 13.06.012 (or older versions)
  module load StdEnv/2016
  module load starccm-mixed/13.06.012 OR starccm/13.06.012-R8
  starccm+ -mesa
VDI nodes
Connect to gra-vdi.alliancecan.ca with TigerVNC and log in. Once the remote desktop appears, click Applications -> System Tools -> MATE Terminal to open a terminal window, then load the STAR-CCM+ version you want as shown below. Note that after you have loaded a StdEnv you may use the module avail starccm-mixed command to display which versions are available. Also note that currently only the MESA implementation of OpenGL is usable with STAR-CCM+ on gra-vdi, due to issues with VirtualGL, which is what otherwise provides local GPU hardware acceleration for OpenGL-driven graphics.
- STAR-CCM+ 18.04.008 (or newer versions)
  module load CcEnv StdEnv/2023
  module load starccm-mixed/18.04.008 OR starccm/18.04.008-R8
  starccm+ -mesa
- STAR-CCM+ 15.04.010 to 18.02.008 (version range)
  module load CcEnv StdEnv/2020
  module load starccm-mixed/15.04.010 OR starccm/15.04.010-R8
  starccm+ -mesa
- STAR-CCM+ 13.06.012 (or older versions)
  module load CcEnv StdEnv/2016
  module load starccm-mixed/13.06.012 OR starccm/13.06.012-R8
  starccm+ -mesa