[[Category:Software]]
[https://mdx.plm.automation.siemens.com/star-ccm-plus STAR-CCM+] is a multidisciplinary engineering simulation suite to model acoustics, fluid dynamics, heat transfer, rheology, multiphase flows, particle flows, solid mechanics, reacting flows, electrochemistry, and electromagnetics. It is developed by Siemens.

= License limitations =
We have the authorization to host STAR-CCM+ binaries on our servers, but we do not provide licenses. You will need your own license in order to use this software. A remote POD license can be purchased directly from [https://www.plm.automation.siemens.com/global/en/buy/ Siemens]. Alternatively, a local license hosted at your institution can be used, provided it can be accessed through the firewall from the cluster where jobs are to be run.

== Configuring your account ==
In order to configure your account to use a license server with our Star-CCM+ module, create a license file <code>$HOME/.licenses/starccm.lic</code> with the following layout:
{{File|name=starccm.lic|contents=SERVER <server> ANY <port>
USE_SERVER}}
where <code><server></code> and <code><port></code> should be changed to specify the hostname (or IP address) and the static vendor port of the license server, respectively.
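
For example, if your institutional license server were the (hypothetical) host <code>flex.example.ca</code> serving on port <code>27000</code>, the file would read:
{{File|name=starccm.lic|contents=SERVER flex.example.ca ANY 27000
USE_SERVER}}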
 
=== POD license file ===
 
Researchers with a POD license purchased from [https://www.plm.automation.siemens.com/global/en/buy/ Siemens] can specify it by creating a <code>~/.licenses/starccm.lic</code> text file as follows:
{{File|name=starccm.lic|contents=SERVER flex.cd-adapco.com ANY 1999
USE_SERVER}}
on any cluster (except Niagara), as well as setting <code>LM_PROJECT</code> to your CD-ADAPCO project ID in your Slurm script. Please note that manually setting <code>CDLMD_LICENSE_FILE="<port>@<server>"</code> in your Slurm script is no longer required.

= Cluster batch job submission =
Select one of the available modules:
* <code>starccm</code> for the double-precision flavour (e.g., <code>module load starccm/19.04.007-R8</code>),
* <code>starccm-mixed</code> for the mixed-precision flavour (e.g., <code>module load starccm-mixed/19.04.007</code>).
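
The available versions change over time; after loading a standard environment you can query the module system yourself, for example:
<pre>
module avail starccm-mixed    # list installed mixed-precision versions
module spider starccm         # show all versions and how to load them
</pre>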
 
When submitting jobs on a cluster for the first time, you must set up the environment to use your license. If you are using the Siemens remote <i>pay-on-usage</i> license server, create a <code>~/.licenses/starccm.lic</code> file as shown in the <b>POD license file</b> section above, and license checkouts should immediately work. If, however, you are using an institutional license server, then after creating your <code>~/.licenses/starccm.lic</code> file you must also submit a problem ticket to [[technical support]] so we can help coordinate the one-time network firewall changes required to access it (assuming the server has never been set up to be accessed from the Alliance cluster you will be using). If you still have problems getting the licensing to work, try removing or renaming the file <code>~/.flexlmrc</code>, since previous search paths and/or license server settings may be stored in it. Note that temporary output files from STAR-CCM+ job runs may accumulate in hidden directories named <code>~/.star-version_number</code>, consuming valuable quota space. These can be removed by periodically running <code>rm -ri ~/.starccm*</code> and replying yes when prompted.
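
For an institutional server, a quick sanity check that the firewall allows access is to test the connection from a login node of the cluster (a sketch; substitute the hostname and port from your own <code>starccm.lic</code>):
<pre>
nc -zv licenseserver.example.ca 27000   # reports success if the flex port is reachable
</pre>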
== Slurm Scripts ==


<tabs>
<tab name="Beluga">
{{File
|name=starccm_job.sh
|lang="bash"
|contents=
#!/bin/bash
#SBATCH --account=def-group   # Specify some account
#SBATCH --time=00-01:00       # Time limit: dd-hh:mm
#SBATCH --nodes=1             # Specify 1 or more nodes
#SBATCH --cpus-per-task=40    # Request all cores per node
#SBATCH --mem=0               # Request all memory per node
#SBATCH --ntasks-per-node=1   # Do not change this value

#module load StdEnv/2020      # Versions < 18.06.006
module load StdEnv/2023

#module load starccm/18.06.006-R8
module load starccm-mixed/18.06.006

SIM_FILE='mysample.sim'       # Specify your input sim filename
#JAVA_FILE='mymacros.java'    # Uncomment to specify an input java filename

# Comment the next line when using an institutional license server
LM_PROJECT='my22digitpodkey'  # Specify your Siemens Power on Demand (PoD) Key

# ------- no changes required below this line --------

slurm_hl2hl.py --format STAR-CCM+ > $SLURM_TMPDIR/machinefile
NCORE=$((SLURM_NNODES * SLURM_CPUS_PER_TASK * SLURM_NTASKS_PER_NODE))

if [ -n "$LM_PROJECT" ]; then
  # Siemens PoD license server
  starccm+ -jvmargs -Xmx4G -jvmargs -Djava.io.tmpdir=$SLURM_TMPDIR -batch -power -podkey $LM_PROJECT -np $NCORE -nbuserdir $SLURM_TMPDIR -machinefile $SLURM_TMPDIR/machinefile $JAVA_FILE $SIM_FILE
else
  # Institutional license server
  starccm+ -jvmargs -Xmx4G -jvmargs -Djava.io.tmpdir=$SLURM_TMPDIR -batch -np $NCORE -nbuserdir $SLURM_TMPDIR -machinefile $SLURM_TMPDIR/machinefile $JAVA_FILE $SIM_FILE
fi
}}
</tab>
<tab name="Cedar" >
<tab name="Cedar" >
{{File
{{File
Line 68: Line 76:
|contents=
|contents=
#!/bin/bash
#SBATCH --account=def-group   # Specify some account
#SBATCH --time=00-01:00       # Time limit: dd-hh:mm
#SBATCH --nodes=1             # Specify 1 or more nodes
#SBATCH --cpus-per-task=48    # Request all cores per node (32 or 48)
#SBATCH --mem=0               # Request all memory per node
#SBATCH --ntasks-per-node=1   # Do not change this value

#module load StdEnv/2020      # Versions < 18.06.006
module load StdEnv/2023

#module load starccm/18.06.006-R8
module load starccm-mixed/18.06.006

SIM_FILE='mysample.sim'       # Specify your input sim filename
#JAVA_FILE='mymacros.java'    # Uncomment to specify an input java filename

# Comment the next line when using an institutional license server
LM_PROJECT='my22digitpodkey'  # Specify your Siemens Power on Demand (PoD) Key

# ------- no changes required below this line --------

slurm_hl2hl.py --format STAR-CCM+ > $SLURM_TMPDIR/machinefile
NCORE=$((SLURM_NNODES * SLURM_CPUS_PER_TASK * SLURM_NTASKS_PER_NODE))

if [ -n "$LM_PROJECT" ]; then
  # Siemens PoD license server
  starccm+ -jvmargs -Xmx4G -jvmargs -Djava.io.tmpdir=$SLURM_TMPDIR -batch -power -podkey $LM_PROJECT -np $NCORE -nbuserdir $SLURM_TMPDIR -machinefile $SLURM_TMPDIR/machinefile $JAVA_FILE $SIM_FILE -mpi intel -fabric psm2
else
  # Institutional license server
  starccm+ -jvmargs -Xmx4G -jvmargs -Djava.io.tmpdir=$SLURM_TMPDIR -batch -np $NCORE -nbuserdir $SLURM_TMPDIR -machinefile $SLURM_TMPDIR/machinefile $JAVA_FILE $SIM_FILE -mpi intel -fabric psm2
fi
}}
</tab>
<tab name="Graham" >
<tab name="Graham" >
{{File
{{File
Line 101: Line 116:
|contents=
|contents=
#!/bin/bash
#SBATCH --account=def-group   # Specify some account
#SBATCH --time=00-01:00       # Time limit: dd-hh:mm
#SBATCH --nodes=1             # Specify 1 or more nodes
#SBATCH --cpus-per-task=32    # Request all cores per node (32 or 44)
#SBATCH --mem=0               # Request all memory per node
#SBATCH --ntasks-per-node=1   # Do not change this value

#module load StdEnv/2020      # Versions < 18.06.006
module load StdEnv/2023

#module load starccm/18.06.006-R8
module load starccm-mixed/18.06.006

SIM_FILE='mysample.sim'       # Specify your input sim filename
#JAVA_FILE='mymacros.java'    # Uncomment to specify an input java filename

# Comment the next line when using an institutional license server
LM_PROJECT='my22digitpodkey'  # Specify your Siemens Power on Demand (PoD) Key

# ------- no changes required below this line --------

slurm_hl2hl.py --format STAR-CCM+ > $SLURM_TMPDIR/machinefile
NCORE=$((SLURM_NNODES * SLURM_CPUS_PER_TASK * SLURM_NTASKS_PER_NODE))

if [ -n "$LM_PROJECT" ]; then
  # Siemens PoD license server
  starccm+ -jvmargs -Xmx4G -jvmargs -Djava.io.tmpdir=$SLURM_TMPDIR -batch -power -podkey $LM_PROJECT -np $NCORE -nbuserdir $SLURM_TMPDIR -machinefile $SLURM_TMPDIR/machinefile $JAVA_FILE $SIM_FILE -mpi intel -fabric psm2
else
  # Institutional license server
  starccm+ -jvmargs -Xmx4G -jvmargs -Djava.io.tmpdir=$SLURM_TMPDIR -batch -np $NCORE -nbuserdir $SLURM_TMPDIR -machinefile $SLURM_TMPDIR/machinefile $JAVA_FILE $SIM_FILE -mpi intel -fabric psm2
fi
}}
</tab>
<tab name="Narval" >
<tab name="Narval" >
{{File
{{File
Line 136: Line 156:
|contents=
|contents=
#!/bin/bash
#SBATCH --account=def-group   # Specify some account
#SBATCH --time=00-01:00       # Time limit: dd-hh:mm
#SBATCH --nodes=1             # Specify 1 or more nodes
#SBATCH --cpus-per-task=64    # Request all cores per node
#SBATCH --mem=0               # Request all memory per node
#SBATCH --ntasks-per-node=1   # Do not change this value

#module load StdEnv/2020      # Versions < 18.06.006
module load StdEnv/2023

#module load starccm/18.06.006-R8
module load starccm-mixed/18.06.006

SIM_FILE='mysample.sim'       # Specify your input sim filename
#JAVA_FILE='mymacros.java'    # Uncomment to specify an input java filename

# Comment the next line when using an institutional license server
LM_PROJECT='my22digitpodkey'  # Specify your Siemens Power on Demand (PoD) Key

# ------- no changes required below this line --------

slurm_hl2hl.py --format STAR-CCM+ > $SLURM_TMPDIR/machinefile
NCORE=$((SLURM_NNODES * SLURM_CPUS_PER_TASK * SLURM_NTASKS_PER_NODE))

if [ -n "$LM_PROJECT" ]; then
  # Siemens PoD license server
  starccm+ -jvmargs -Xmx4G -jvmargs -Djava.io.tmpdir=$SLURM_TMPDIR -batch -power -podkey $LM_PROJECT -np $NCORE -nbuserdir $SLURM_TMPDIR -machinefile $SLURM_TMPDIR/machinefile $JAVA_FILE $SIM_FILE -mpi openmpi
else
  # Institutional license server
  starccm+ -jvmargs -Xmx4G -jvmargs -Djava.io.tmpdir=$SLURM_TMPDIR -batch -np $NCORE -nbuserdir $SLURM_TMPDIR -machinefile $SLURM_TMPDIR/machinefile $JAVA_FILE $SIM_FILE -mpi openmpi
fi
}}
</tab>
<tab name="Niagara" >
<tab name="Niagara" >
{{File
{{File
Line 168: Line 195:
|lang="bash"
|lang="bash"
|contents=
|contents=
#!/bin/bash

#SBATCH --account=def-group   # Specify some account
#SBATCH --time=00-01:00       # Time limit: dd-hh:mm
#SBATCH --nodes=1             # Specify 1 or more nodes
#SBATCH --cpus-per-task=40    # Request all cores per node
#SBATCH --mem=0               # Request all memory per node
#SBATCH --ntasks-per-node=1   # Do not change this value

module load CCEnv

#module load StdEnv/2020      # Versions < 18.06.006
module load StdEnv/2023

#module load starccm/18.06.006-R8
module load starccm-mixed/18.06.006

SIM_FILE='mysample.sim'       # Specify input sim filename
#JAVA_FILE='mymacros.java'    # Uncomment to specify an input java filename

# Comment the next line when using an institutional license server
LM_PROJECT='my22digitpodkey'  # Specify your Siemens Power on Demand (PoD) Key

# These settings are used instead of your ~/.licenses/starccm.lic
# (settings shown will use the cd-adapco pod license server)
FLEXPORT=1999                 # Specify server static flex port
VENDPORT=2099                 # Specify server static vendor port
LICSERVER=flex.cd-adapco.com  # Specify license server hostname

# ------- no changes required below this line --------

export CDLMD_LICENSE_FILE="$FLEXPORT@127.0.0.1"
ssh nia-gw -L $FLEXPORT:$LICSERVER:$FLEXPORT -L $VENDPORT:$LICSERVER:$VENDPORT -N -f

slurm_hl2hl.py --format STAR-CCM+ > $SLURM_TMPDIR/machinefile
NCORE=$((SLURM_NNODES * SLURM_CPUS_PER_TASK * SLURM_NTASKS_PER_NODE))

# Workaround for license failures:
# until the exit status is equal to 0, keep trying to start Star-CCM+ (up to 5 times).
i=1
RET=-1
while [ $i -le 5 ] && [ $RET -ne 0 ]; do
        [ $i -eq 1 ] {{!}}{{!}} sleep 5
        echo "Attempt number: "$i
        if [ -n "$LM_PROJECT" ]; then
          # Siemens PoD license server
          starccm+ -jvmargs -Xmx4G -jvmargs -Djava.io.tmpdir=$SLURM_TMPDIR -batch -power -podkey $LM_PROJECT -np $NCORE -nbuserdir $SLURM_TMPDIR -machinefile $SLURM_TMPDIR/machinefile $JAVA_FILE $SIM_FILE
        else
          # Institutional license server
          starccm+ -jvmargs -Xmx4G -jvmargs -Djava.io.tmpdir=$SLURM_TMPDIR -batch -np $NCORE -nbuserdir $SLURM_TMPDIR -machinefile $SLURM_TMPDIR/machinefile $JAVA_FILE $SIM_FILE
        fi
        RET=$?
        i=$((i+1))
done
exit $RET
}}
</tab>
</tabs>
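
Any of the above scripts can then be submitted from a login node of the corresponding cluster, for example:
<pre>
sbatch starccm_job.sh    # submit the job to the scheduler
squeue -u $USER          # check the status of your jobs
</pre>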


= Remote visualization =

== Preparation ==

To set up your account for remote visualization:
# Create <code>~/.licenses/starccm.lic</code> as described above.<br>
# Users with a POD license should also
:: set: <code>export LM_PROJECT='CD-ADAPCO PROJECT ID'</code> and
:: add: <b>-power</b> to the other command line options shown below (see the sketch after this list).
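
For example, a POD license holder preparing an interactive session would run something like (the key shown is a placeholder):
<pre>
export LM_PROJECT='my22digitpodkey'   # your Siemens PoD key
starccm+ -power                       # -power added to the options shown below
</pre>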


== Compute nodes ==

Connect to a compute node with [[VNC#Compute_Nodes|TigerVNC]] and open a terminal window, then do one of the following:
: <b>STAR-CCM+ 15.04.010 (or newer versions)</b>
:: <code>module load StdEnv/2020</code>
:: <code>module load starccm-mixed/17.02.007</code> <b>OR</b> <code>starccm/17.02.007-R8</code>
:: <code>starccm+</code>
: <b>STAR-CCM+ 14.06.010, 14.04.013, 14.02.012</b>
:: <code>module load StdEnv/2016</code>
:: <code>module load starccm-mixed/14.06.010</code> <b>OR</b> <code>starccm/14.06.010-R8</code>
:: <code>starccm+</code>
: <b>STAR-CCM+ 13.06.012 (or older versions)</b>
:: <code>module load StdEnv/2016</code>
:: <code>module load starccm-mixed/13.06.012</code> <b>OR</b> <code>starccm/13.06.012-R8</code>
:: <code>starccm+ -mesa</code>

== VDI nodes ==


Connect to gra-vdi.alliancecan.ca with [[VNC#VDI_Nodes|TigerVNC]] and log in. Once the remote desktop appears, click <i>Applications -> System Tools -> Mate Terminal</i> to open a terminal window, then load the STAR-CCM+ version of your choice as shown below. Note that after you have loaded a StdEnv, you may use the <code>module avail starccm-mixed</code> command to display the available versions. Currently, only the MESA implementation of OpenGL is usable with STAR-CCM+ on gra-vdi, due to issues with VirtualGL, which would otherwise provide local GPU hardware acceleration for OpenGL-driven graphics.
: <b>STAR-CCM+ 18.04.008 (or newer versions)</b>
:: <code>module load CcEnv StdEnv/2023</code>
:: <code>module load starccm-mixed/18.04.008</code> <b>OR</b> <code>starccm/18.04.008-R8</code>
:: <code>starccm+ -mesa</code>
: <b>STAR-CCM+ 15.04.010 to 18.02.008 (version range)</b>
:: <code>module load CcEnv StdEnv/2020</code>
:: <code>module load starccm-mixed/15.04.010</code> <b>OR</b> <code>starccm/15.04.010-R8</code>
:: <code>starccm+ -mesa</code>
: <b>STAR-CCM+ 13.06.012 (or older versions)</b>
:: <code>module load CcEnv StdEnv/2016</code>
:: <code>module load starccm-mixed/13.06.012</code> <b>OR</b> <code>starccm/13.06.012-R8</code>
:: <code>starccm+ -mesa</code>
