Star-CCM+
STAR-CCM+ is a multidisciplinary engineering simulation suite used to model acoustics, fluid dynamics, heat transfer, rheology, multiphase flows, particle flows, solid mechanics, reacting flows, electrochemistry, and electromagnetics. It is developed by Siemens.
License limitations
We have the authorization to host STAR-CCM+ binaries on our servers, but we do not provide licenses; you will need your own license in order to use this software. A remote POD license can be purchased directly from Siemens. Alternatively, a local license hosted at your institution can be used, provided it can be reached through the firewall from the cluster where jobs are to be run.
Configuring your account
To configure your account to use your own license server with our Star-CCM+ module, create a license file $HOME/.licenses/starccm.lic containing
SERVER IP ANY PORT
USE_SERVER
where IP and PORT are replaced with the IP address and the port used by the license server at your institution.
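For example, if the license server at your institution were reachable at IP address 10.0.0.1 on port 27000 (both values hypothetical), the file would contain:
SERVER 10.0.0.1 ANY 27000
USE_SERVER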
POD license file
Researchers who have purchased a POD license from Siemens may simply configure the following $HOME/.licenses/starccm.lic file on any of our clusters where Star-CCM+ jobs are to be run.
SERVER flex.cd-adapco.com ANY 1999
USE_SERVER
Cluster batch job submission
Select one of the available modules:
- starccm for the double-precision flavour,
- starccm-mixed for the mixed-precision flavour.
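On clusters using the Lmod module system, the available versions of either flavour can be listed with, for example:
module spider starccm
module spider starccm-mixed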
Star-CCM+ comes bundled with two different distributions of MPI:
- IBM Platform MPI is the default distribution, but does not work on Cedar's Intel OmniPath network fabric;
- Intel MPI is specified with option -mpi intel.
Neither IBM Platform MPI nor Intel MPI is tightly integrated with our scheduler; you must therefore tell starccm+ which hosts to use by means of a file containing the list of available hosts. To produce this file, we provide the slurm_hl2hl.py script, which outputs the list of hosts when called with the option --format STAR-CCM+. This list can then be written to a file and read by Star-CCM+. For the same reason, you should use the option --ntasks-per-node=1 and set --cpus-per-task to use all cores, as shown in the scripts below.
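For example, with --nodes=2, --ntasks-per-node=1 and --cpus-per-task=40, Slurm sets SLURM_NTASKS=2 and SLURM_CPUS_PER_TASK=40, so the scripts below compute the total core count as
NCORE=$((SLURM_NTASKS * SLURM_CPUS_PER_TASK))   # 2 * 40 = 80 cores in this example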
You will also need to set up your job environment to use your license. If you are using CD-adapco's online pay-on-usage server, the configuration is rather simple. If you are using an internal license server, please contact [[technical support]] so that we can help you set up access to it.
Note that on Niagara, the compute nodes mount the $HOME filesystem as read-only. It is therefore important to define the environment variable $STARCCM_TMP and point it to a location on $SCRATCH that is unique to the version of StarCCM+ in use. Otherwise, StarCCM+ will try to create such a directory under $HOME and crash in the process.
Béluga:
#!/bin/bash
#SBATCH --account=def-group # Specify your account
#SBATCH --time=00-01:00 # Time limit: dd-hh:mm
#SBATCH --nodes=1 # Specify 1 or more nodes
#SBATCH --cpus-per-task=40 # Request all cores per node
#SBATCH --mem=0 # Request all memory per node
#SBATCH --ntasks-per-node=1 # Do not change this value
module load StdEnv/2020 # Do not change
# module load starccm/18.02.008-R8
module load starccm-mixed/18.02.008
export LM_PROJECT='YOUR CD-ADAPCO PROJECT ID GOES HERE'
export CDLMD_LICENSE_FILE="1999@flex.cd-adapco.com"
export STARCCM_TMP="${SCRATCH}/.starccm-${EBVERSIONSTARCCM}"
mkdir -p "$STARCCM_TMP"
slurm_hl2hl.py --format STAR-CCM+ > machinefile-$SLURM_JOB_ID
NCORE=$((SLURM_NTASKS * SLURM_CPUS_PER_TASK))
starccm+ -batch -power -np $NCORE -podkey $LM_PROJECT -licpath $CDLMD_LICENSE_FILE -machinefile $PWD/machinefile-$SLURM_JOB_ID $PWD/your-file.sim
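The script can then be submitted in the usual way, for example (script name hypothetical):
sbatch starccm_job.sh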
Cedar:
#!/bin/bash
#SBATCH --account=def-group # Specify your account
#SBATCH --time=00-01:00 # Time limit: dd-hh:mm
#SBATCH --nodes=1 # Specify 1 or more nodes
#SBATCH --cpus-per-task=48 # Request all cores per node (48 or 32, depending on node type)
#SBATCH --mem=0 # Request all memory per node
#SBATCH --ntasks-per-node=1 # Do not change this value
module load StdEnv/2020 # Do not change
# module load starccm/18.02.008-R8
module load starccm-mixed/18.02.008
export LM_PROJECT='YOUR CD-ADAPCO PROJECT ID GOES HERE'
export CDLMD_LICENSE_FILE="1999@flex.cd-adapco.com"
export STARCCM_TMP="${SCRATCH}/.starccm-${EBVERSIONSTARCCM}"
mkdir -p "$STARCCM_TMP"
slurm_hl2hl.py --format STAR-CCM+ > machinefile-$SLURM_JOB_ID
NCORE=$((SLURM_NTASKS * SLURM_CPUS_PER_TASK))
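# Use Intel MPI (-mpi intel) over the OmniPath fabric (-fabric psm2), since
# IBM Platform MPI does not work on an OmniPath network (see above)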
starccm+ -batch -power -np $NCORE -podkey $LM_PROJECT -licpath $CDLMD_LICENSE_FILE -machinefile $PWD/machinefile-$SLURM_JOB_ID $PWD/your-file.sim -mpi intel -fabric psm2
Graham:
#!/bin/bash
#SBATCH --account=def-group # Specify your account
#SBATCH --time=00-01:00 # Time limit: dd-hh:mm
#SBATCH --nodes=1 # Specify 1 or more nodes
#SBATCH --cpus-per-task=32 # Request all cores per node (32 or 44, depending on node type)
#SBATCH --mem=0 # Request all memory per node
#SBATCH --ntasks-per-node=1 # Do not change this value
module load StdEnv/2020 # Do not change
# module load starccm/18.02.008-R8
module load starccm-mixed/18.02.008
export LM_PROJECT='YOUR CD-ADAPCO PROJECT ID GOES HERE'
export CDLMD_LICENSE_FILE="1999@flex.cd-adapco.com"
export STARCCM_TMP="${SCRATCH}/.starccm-${EBVERSIONSTARCCM}"
mkdir -p "$STARCCM_TMP"
slurm_hl2hl.py --format STAR-CCM+ > machinefile-$SLURM_JOB_ID
NCORE=$((SLURM_NTASKS * SLURM_CPUS_PER_TASK))
# Remove "-podkey $LM_PROJECT" from next line if using an institutional server
starccm+ -jvmargs -Xmx4G -jvmargs -Djava.io.tmpdir=$SLURM_TMPDIR -batch -power -np $NCORE -podkey $LM_PROJECT -licpath $CDLMD_LICENSE_FILE -machinefile $PWD/machinefile-$SLURM_JOB_ID $PWD/your-file.sim -mpi intel -fabric psm2
Narval:
#!/bin/bash
#SBATCH --account=def-group # Specify your account
#SBATCH --time=00-01:00 # Time limit: dd-hh:mm
#SBATCH --nodes=1 # Specify 1 or more nodes
#SBATCH --cpus-per-task=64 # Request all cores per node
#SBATCH --mem=0 # Request all memory per node
#SBATCH --ntasks-per-node=1 # Do not change this value
module load StdEnv/2020 # Do not change
# module load starccm/18.02.008-R8
module load starccm-mixed/18.02.008
export LM_PROJECT='YOUR CD-ADAPCO PROJECT ID GOES HERE'
export CDLMD_LICENSE_FILE="1999@flex.cd-adapco.com"
export STARCCM_TMP="${SCRATCH}/.starccm-${EBVERSIONSTARCCM}"
mkdir -p "$STARCCM_TMP"
slurm_hl2hl.py --format STAR-CCM+ > machinefile-$SLURM_JOB_ID
NCORE=$((SLURM_NTASKS * SLURM_CPUS_PER_TASK))
starccm+ -jvmargs -Xmx4G -jvmargs -Djava.io.tmpdir=$SLURM_TMPDIR -batch -power -np $NCORE -podkey $LM_PROJECT -licpath $CDLMD_LICENSE_FILE -machinefile $PWD/machinefile-$SLURM_JOB_ID $PWD/your-file.sim -mpi openmpi
Niagara:
#!/bin/bash
#SBATCH --time=0-00:30 # Time limit: d-hh:mm
#SBATCH --nodes=1 # Specify 1 or more nodes
#SBATCH --cpus-per-task=40 # Request all cores per node
#SBATCH --ntasks-per-node=1 # Do not change this value
#SBATCH --mail-type=BEGIN
#SBATCH --mail-type=END
cd $SLURM_SUBMIT_DIR
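# Niagara compute nodes have no direct internet access; forward the license
# server ports (1999 and 2099) through the nia-gw node so that the POD server
# is reachable at 127.0.0.1: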
ssh nia-gw -L 1999:flex.cd-adapco.com:1999 -L 2099:flex.cd-adapco.com:2099 -N -f
export LM_PROJECT='YOUR CD-ADAPCO PROJECT ID GOES HERE'
export CDLMD_LICENSE_FILE="1999@127.0.0.1"
module load CCEnv
module load StdEnv/2018.3
module load starccm/13.06.012-R8
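# As noted above, $HOME is mounted read-only on Niagara compute nodes, so
# point STAR-CCM+ temporary files to a version-specific directory on $SCRATCH
export STARCCM_TMP="${SCRATCH}/.starccm-${EBVERSIONSTARCCM}"
mkdir -p "$STARCCM_TMP"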
slurm_hl2hl.py --format STAR-CCM+ > $SLURM_SUBMIT_DIR/machinefile_$SLURM_JOB_ID
NCORE=$((SLURM_NTASKS * SLURM_CPUS_PER_TASK))
# Workaround for intermittent license check-out failures:
# try up to 5 times to start Star-CCM+, checking the exit status (143 on failure, 0 on success).
i=1
RET=-1
while [ $i -le 5 ] && [ $RET -ne 0 ]; do
[ $i -eq 1 ] || sleep 5
echo "Attempt number: "$i
starccm+ -batch -power -np $NCORE -podkey $LM_PROJECT -machinefile $SLURM_SUBMIT_DIR/machinefile_$SLURM_JOB_ID $SLURM_SUBMIT_DIR/your-simulation-file.java $SLURM_SUBMIT_DIR/your-simulation-file.sim > $SLURM_JOB_ID.results
RET=$?
i=$((i+1))
done
exit $RET
Remote visualization
Preparation
To set up your account for remote visualization:
- Create ~/.licenses/starccm.lic as described above.
- Users with a POD license should also:
  - set export LM_PROJECT='CD-ADAPCO PROJECT ID', and
  - add -power to the other command line options shown below.
Compute nodes
Connect with TigerVNC and open a terminal window ...
- STAR-CCM+ 15.04.010 (or newer versions)
module load StdEnv/2020
module load starccm-mixed/17.02.007 (or starccm/17.02.007-R8)
- starccm+
- STAR-CCM+ 14.06.010, 14.04.013, 14.02.012
module load StdEnv/2016
module load starccm-mixed/14.06.010 (or starccm/14.06.010-R8)
- starccm+
- STAR-CCM+ 13.06.012 (or older versions)
module load StdEnv/2016
module load starccm-mixed/13.06.012 (or starccm/13.06.012-R8)
- starccm+ -mesa
VDI nodes
Connect with TigerVNC and open a terminal window (Applications -> System Tools -> MATE Terminal) ...
- STAR-CCM+ 15.04.010 (or newer versions)
module load CcEnv StdEnv/2020
module load starccm-mixed/17.02.007 (or starccm/17.02.007-R8)
- starccm+ -clientldpreload /usr/lib64/VirtualGL/libvglfaker.so
- STAR-CCM+ 14.06.013 (this version only)
module load CcEnv StdEnv/2016
module load starccm-mixed/14.06.013 (or starccm/14.06.013-R8)
- starccm+ -clientldpreload /usr/lib64/VirtualGL/libvglfaker.so
- STAR-CCM+ 13.06.012 (or older versions)
module load CcEnv StdEnv/2016
module load starccm-mixed/13.06.012 (or starccm/13.06.012-R8)
- starccm+ -mesa