Star-CCM+
STAR-CCM+ is a multidisciplinary engineering simulation suite developed by Siemens, used to model acoustics, fluid dynamics, heat transfer, rheology, multiphase flows, particle flows, solid mechanics, reacting flows, electrochemistry, and electromagnetics.
License limitations
We are authorized to host STAR-CCM+ binaries on our servers, but we do not provide licenses. You will need your own license in order to use this software. A remote POD license can be purchased directly from Siemens. Alternatively, a local license hosted at your institution can be used, provided it can be accessed through the firewall from the cluster where jobs are to be run.
Configuring your account
To configure your account to use your own license server with our Star-CCM+ module, create a license file $HOME/.licenses/starccm.lic with
SERVER IP ANY PORT
USE_SERVER
where you replace IP and PORT with the IP address and the port used by the license server at your institution.
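For example, assuming a hypothetical institutional server at address 10.0.0.1 listening on port 27000, the file would read:
SERVER 10.0.0.1 ANY 27000
USE_SERVER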
POD license file
Researchers who have purchased a POD license from Siemens may simply configure the following $HOME/.licenses/starccm.lic file on any of our clusters where Star-CCM+ jobs are to be run.
SERVER flex.cd-adapco.com ANY 1999
USE_SERVER
Cluster batch job submission
Select one of the available modules:
- starccm for the double-precision flavour,
- starccm-mixed for the mixed-precision flavour.
Star-CCM+ comes bundled with two different distributions of MPI:
- IBM Platform MPI is the default distribution, but does not work on Cedar's Intel OmniPath network fabric;
- Intel MPI is specified with option -mpi intel.
Neither IBM MPI nor Intel MPI is tightly coupled with our scheduler, so you must tell starccm+ which hosts to use by means of a file containing the list of available hosts. To produce this file, we provide the slurm_hl2hl.py script, which outputs the list of hosts when called with the option --format STAR-CCM+; this list can then be written to a file and read by Star-CCM+. For the same reason, you should use the option --ntasks-per-node=1 and set --cpus-per-task to use all cores, as shown in the scripts below.
You will also need to set up your job environment to use your license. If you are using CD-adapco's online pay-on-usage server, the configuration is rather simple. If you are using an internal license server, please contact us so that we can help you set up access to it.
Note that at Niagara, the compute nodes mount the $HOME filesystem as read-only. It is therefore important to define the environment variable $STARCCM_TMP and point it to a location on $SCRATCH that is unique to the version of Star-CCM+; otherwise, Star-CCM+ will try to create such a directory in $HOME and crash in the process.
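The example scripts below are written for specific clusters; the requested cores per node and the MPI options differ accordingly.

Béluga (40 cores per node):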
#!/bin/bash
#SBATCH --account=def-group # Specify some account
#SBATCH --time=00-01:00 # Time limit: dd-hh:mm
#SBATCH --nodes=1 # Specify 1 or more nodes
#SBATCH --cpus-per-task=40 # Request all cores per node
#SBATCH --mem=0 # Request all memory per node
#SBATCH --ntasks-per-node=1 # Do not change this value
module load StdEnv/2020 # Do not change
# module load starccm/18.02.008-R8
module load starccm-mixed/18.02.008
export LM_PROJECT='YOUR CD-ADAPCO PROJECT ID GOES HERE'
export CDLMD_LICENSE_FILE="1999@flex.cd-adapco.com"
export STARCCM_TMP="${SCRATCH}/.starccm-${EBVERSIONSTARCCM}"
mkdir -p "$STARCCM_TMP"
slurm_hl2hl.py --format STAR-CCM+ > machinefile-$SLURM_JOB_ID
NCORE=$((SLURM_NTASKS * SLURM_CPUS_PER_TASK))
starccm+ -batch -power -np $NCORE -podkey $LM_PROJECT -licpath $CDLMD_LICENSE_FILE -machinefile $PWD/machinefile-$SLURM_JOB_ID $PWD/your-file.sim
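Cedar (48 cores per node; the -mpi intel -fabric psm2 options are needed because IBM Platform MPI does not work on Cedar's OmniPath fabric):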
#!/bin/bash
#SBATCH --account=def-group # Specify some account
#SBATCH --time=00-01:00 # Time limit: dd-hh:mm
#SBATCH --nodes=1 # Specify 1 or more nodes
#SBATCH --cpus-per-task=48 # Request all cores per node (48 or 32)
#SBATCH --mem=0 # Request all memory per node
#SBATCH --ntasks-per-node=1 # Do not change this value
module load StdEnv/2020 # Do not change
# module load starccm/18.02.008-R8
module load starccm-mixed/18.02.008
export LM_PROJECT='YOUR CD-ADAPCO PROJECT ID GOES HERE'
export CDLMD_LICENSE_FILE="1999@flex.cd-adapco.com"
export STARCCM_TMP="${SCRATCH}/.starccm-${EBVERSIONSTARCCM}"
mkdir -p "$STARCCM_TMP"
slurm_hl2hl.py --format STAR-CCM+ > machinefile-$SLURM_JOB_ID
NCORE=$((SLURM_NTASKS * SLURM_CPUS_PER_TASK))
starccm+ -batch -power -np $NCORE -podkey $LM_PROJECT -licpath $CDLMD_LICENSE_FILE -machinefile $PWD/machinefile-$SLURM_JOB_ID $PWD/your-file.sim -mpi intel -fabric psm2
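Graham (32 or 44 cores per node):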
#!/bin/bash
#SBATCH --account=def-group # Specify some account
#SBATCH --time=00-01:00 # Time limit: dd-hh:mm
#SBATCH --nodes=1 # Specify 1 or more nodes
#SBATCH --cpus-per-task=32 # Request all cores per node (32 or 44)
#SBATCH --mem=0 # Request all memory per node
#SBATCH --ntasks-per-node=1 # Do not change this value
module load StdEnv/2020 # Do not change
# module load starccm/18.02.008-R8
module load starccm-mixed/18.02.008
export LM_PROJECT='YOUR CD-ADAPCO PROJECT ID GOES HERE'
export CDLMD_LICENSE_FILE="1999@flex.cd-adapco.com"
export STARCCM_TMP="${SCRATCH}/.starccm-${EBVERSIONSTARCCM}"
mkdir -p "$STARCCM_TMP"
slurm_hl2hl.py --format STAR-CCM+ > machinefile-$SLURM_JOB_ID
NCORE=$((SLURM_NTASKS * SLURM_CPUS_PER_TASK))
# Remove "-podkey $LM_PROJECT" from next line if using an institutional server
starccm+ -jvmargs -Xmx4G -jvmargs -Djava.io.tmpdir=$SLURM_TMPDIR -batch -power -np $NCORE -podkey $LM_PROJECT -licpath $CDLMD_LICENSE_FILE -machinefile $PWD/machinefile-$SLURM_JOB_ID $PWD/your-file.sim -mpi intel -fabric psm2
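Narval (64 cores per node; uses Open MPI via -mpi openmpi):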
#!/bin/bash
#SBATCH --account=def-group # Specify some account
#SBATCH --time=00-01:00 # Time limit: dd-hh:mm
#SBATCH --nodes=1 # Specify 1 or more nodes
#SBATCH --cpus-per-task=64 # Request all cores per node
#SBATCH --mem=0 # Request all memory per node
#SBATCH --ntasks-per-node=1 # Do not change this value
module load StdEnv/2020 # Do not change
# module load starccm/18.02.008-R8
module load starccm-mixed/18.02.008
export LM_PROJECT='YOUR CD-ADAPCO PROJECT ID GOES HERE'
export CDLMD_LICENSE_FILE="1999@flex.cd-adapco.com"
export STARCCM_TMP="${SCRATCH}/.starccm-${EBVERSIONSTARCCM}"
mkdir -p "$STARCCM_TMP"
slurm_hl2hl.py --format STAR-CCM+ > machinefile-$SLURM_JOB_ID
NCORE=$((SLURM_NTASKS * SLURM_CPUS_PER_TASK))
starccm+ -jvmargs -Xmx4G -jvmargs -Djava.io.tmpdir=$SLURM_TMPDIR -batch -power -np $NCORE -podkey $LM_PROJECT -licpath $CDLMD_LICENSE_FILE -machinefile $PWD/machinefile-$SLURM_JOB_ID $PWD/your-file.sim -mpi openmpi
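Niagara (the license server ports are forwarded through the nia-gw login node):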
#!/bin/bash
#SBATCH --time=0-00:30 # Time limit: d-hh:mm
#SBATCH --nodes=1 # Specify 1 or more nodes
#SBATCH --cpus-per-task=40 # Request all cores per node
#SBATCH --ntasks-per-node=1 # Do not change this value
#SBATCH --mail-type=BEGIN
#SBATCH --mail-type=END
cd $SLURM_SUBMIT_DIR
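# Forward license ports 1999 and 2099 to flex.cd-adapco.com through nia-gw,
# so that CDLMD_LICENSE_FILE below can point at localhost (127.0.0.1).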
ssh nia-gw -L 1999:flex.cd-adapco.com:1999 -L 2099:flex.cd-adapco.com:2099 -N -f
export LM_PROJECT='YOUR CD-ADAPCO PROJECT ID GOES HERE'
export CDLMD_LICENSE_FILE="1999@127.0.0.1"
module load CCEnv
module load StdEnv/2018.3
module load starccm/13.06.012-R8
slurm_hl2hl.py --format STAR-CCM+ > $SLURM_SUBMIT_DIR/machinefile_$SLURM_JOB_ID
NCORE=$((SLURM_NTASKS * SLURM_CPUS_PER_TASK))
# Workaround for license failures.
# Try up to 5 times to start Star-CCM+, checking the exit status (143 on a license failure, 0 on success).
i=1
RET=-1
while [ $i -le 5 ] && [ $RET -ne 0 ]; do
[ $i -eq 1 ] || sleep 5
echo "Attempt number: "$i
starccm+ -batch -power -np $NCORE -podkey $LM_PROJECT -machinefile $SLURM_SUBMIT_DIR/machinefile_$SLURM_JOB_ID $SLURM_SUBMIT_DIR/your-simulation-file.java $SLURM_SUBMIT_DIR/your-simulation-file.sim > $SLURM_JOB_ID.results
RET=$?
i=$((i+1))
done
exit $RET
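A completed script (saved, for example, as starccm_job.sh; the name here is only illustrative) is submitted to the scheduler with:

sbatch starccm_job.sh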
Remote visualization
Preparation
To set up your account for remote visualization:
1. Create ~/.licenses/starccm.lic as described above.
2. Users with a POD license should also
   - set: export LM_PROJECT='CD-ADAPCO PROJECT ID' and
   - add: -power to the other command line options shown below
Compute nodes
Connect with TigerVNC and open a terminal window ...
- STAR-CCM+ 15.04.010 (or newer versions)
  - module load StdEnv/2020
  - module load starccm-mixed/17.02.007 OR starccm/17.02.007-R8
  - starccm+
- STAR-CCM+ 14.06.010, 14.04.013, 14.02.012
  - module load StdEnv/2016
  - module load starccm-mixed/14.06.010 OR starccm/14.06.010-R8
  - starccm+
- STAR-CCM+ 13.06.012 (or older versions)
  - module load StdEnv/2016
  - module load starccm-mixed/13.06.012 OR starccm/13.06.012-R8
  - starccm+ -mesa
VDI nodes
Connect with TigerVNC and open a terminal window (Applications -> Systems Tools -> Mate Terminal) ...
- STAR-CCM+ 15.04.010 (or newer versions)
  - module load CcEnv StdEnv/2020
  - module load starccm-mixed/17.02.007 OR starccm/17.02.007-R8
  - starccm+ -clientldpreload /usr/lib64/VirtualGL/libvglfaker.so
- STAR-CCM+ 14.06.013 (this version only)
  - module load CcEnv StdEnv/2016
  - module load starccm-mixed/14.06.013 OR starccm/14.06.013-R8
  - starccm+ -clientldpreload /usr/lib64/VirtualGL/libvglfaker.so
- STAR-CCM+ 13.06.012 (or older versions)
  - module load CcEnv StdEnv/2016
  - module load starccm-mixed/13.06.012 OR starccm/13.06.012-R8
  - starccm+ -mesa