Star-CCM+

Revision as of 19:59, 25 July 2023


STAR-CCM+ is a multidisciplinary engineering simulation suite to model acoustics, fluid dynamics, heat transfer, rheology, multiphase flows, particle flows, solid mechanics, reacting flows, electrochemistry, and electromagnetics. It is developed by Siemens.

License limitations

We have the authorization to host STAR-CCM+ binaries on our servers, but we do not provide licenses. You will need your own license in order to use this software. A remote POD license can be purchased directly from Siemens. Alternatively, a local license hosted at your institution can be used, provided it can be reached through the firewall from the cluster where jobs are to be run.

Configuring your account

To configure your account to use your own license server with our Star-CCM+ module, create a license file $HOME/.licenses/starccm.lic with

File : starccm.lic

SERVER IP ANY PORT
USE_SERVER


where you replace IP and PORT with the IP address and the port used by the license server at your institution.
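As a concrete sketch, the file can be created from a shell as follows; the server address 10.20.30.40 and port 1999 below are placeholder values, not a real server.

```shell
# Create ~/.licenses/starccm.lic pointing at an institutional license server.
# 10.20.30.40 and 1999 are placeholders; substitute your server's IP and port.
mkdir -p "$HOME/.licenses"
cat > "$HOME/.licenses/starccm.lic" <<'EOF'
SERVER 10.20.30.40 ANY 1999
USE_SERVER
EOF
```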

POD license file

Researchers who have purchased a POD license from Siemens may simply configure the following $HOME/.licenses/starccm.lic file on any of our clusters where Star-CCM+ jobs are to be run.

File : starccm.lic

SERVER flex.cd-adapco.com ANY 1999
USE_SERVER


Cluster batch job submission

Select one of the available modules:

  • starccm for the double-precision flavour,
  • starccm-mixed for the mixed-precision flavour.

You will also need to set up your job environment to use your license. If you are using CD-adapco's online pay-on-usage server, the configuration is rather simple. If you are using an internal license server, please contact technical support so that we can help you set up the access to it.
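For the pay-on-usage case, the setup reduces to the two environment variables that also appear in the job scripts below; the LM_PROJECT value shown is a placeholder for your own Siemens POD project ID.

```shell
# Point Star-CCM+ at the Siemens pay-on-usage license server.
# The LM_PROJECT value is a placeholder; use your own POD project ID.
export LM_PROJECT='YOUR CD-ADAPCO PROJECT ID GOES HERE'
export CDLMD_LICENSE_FILE="1999@flex.cd-adapco.com"
```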

Note that at Niagara, the compute nodes mount the $HOME filesystem as read-only. Therefore it is important to define the environment variable $STARCCM_TMP and point it to a location on $SCRATCH, which is unique to the version of StarCCM+. Otherwise, StarCCM+ will try to create such a directory in $HOME and crash in the process.
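The corresponding lines from the job scripts below illustrate this. The fallback values here exist only so the snippet runs outside the cluster, where $SCRATCH and $EBVERSIONSTARCCM (normally set by the starccm module) are undefined.

```shell
# Fallbacks for running outside the cluster; on the cluster these are preset.
SCRATCH="${SCRATCH:-/tmp/scratch-demo}"
EBVERSIONSTARCCM="${EBVERSIONSTARCCM:-18.02.008}"

# Give Star-CCM+ a writable, version-specific scratch directory,
# since $HOME is mounted read-only on Niagara compute nodes.
export STARCCM_TMP="${SCRATCH}/.starccm-${EBVERSIONSTARCCM}"
mkdir -p "$STARCCM_TMP"
```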


File : starccm_job.sh

#!/bin/bash
#SBATCH --account=def-group   # Specify some account
#SBATCH --time=00-01:00       # Time limit: dd-hh:mm
#SBATCH --nodes=1             # Specify 1 or more nodes
#SBATCH --cpus-per-task=40    # Request all cores per node
#SBATCH --mem=0               # Request all memory per node
#SBATCH --ntasks-per-node=1   # Do not change this value

module load StdEnv/2020       # Do not change

# module load starccm/18.02.008-R8
module load starccm-mixed/18.02.008

export LM_PROJECT='YOUR CD-ADAPCO PROJECT ID GOES HERE'
export CDLMD_LICENSE_FILE="1999@flex.cd-adapco.com"

export STARCCM_TMP="${SCRATCH}/.starccm-${EBVERSIONSTARCCM}"
mkdir -p "$STARCCM_TMP"

slurm_hl2hl.py --format STAR-CCM+ > $STARCCM_TMP/machinefile-$SLURM_JOB_ID

NCORE=$((SLURM_NTASKS * SLURM_CPUS_PER_TASK))

# Remove "-power -podkey $LM_PROJECT" when using an institutional license server…
starccm+ -batch -power -podkey $LM_PROJECT -np $NCORE -licpath $CDLMD_LICENSE_FILE -machinefile $STARCCM_TMP/machinefile-$SLURM_JOB_ID $PWD/your-file.sim
File : starccm_job.sh

#!/bin/bash
#SBATCH --account=def-group   # Specify some account
#SBATCH --time=00-01:00       # Time limit: dd-hh:mm
#SBATCH --nodes=1             # Specify 1 or more nodes
#SBATCH --cpus-per-task=48    # Request all cores per node (48 or 32, depending on node type)
#SBATCH --mem=0               # Request all memory per node
#SBATCH --ntasks-per-node=1   # Do not change this value

module load StdEnv/2020       # Do not change

# module load starccm/18.02.008-R8
module load starccm-mixed/18.02.008

export LM_PROJECT='YOUR CD-ADAPCO PROJECT ID GOES HERE'
export CDLMD_LICENSE_FILE="1999@flex.cd-adapco.com"

export STARCCM_TMP="${SCRATCH}/.starccm-${EBVERSIONSTARCCM}"
mkdir -p "$STARCCM_TMP"

slurm_hl2hl.py --format STAR-CCM+ > $STARCCM_TMP/machinefile-$SLURM_JOB_ID

NCORE=$((SLURM_NNODES * SLURM_CPUS_PER_TASK * SLURM_NTASKS_PER_NODE))

# Remove "-power -podkey $LM_PROJECT" when using an institutional license server…
starccm+ -batch -power -podkey $LM_PROJECT -np $NCORE -licpath $CDLMD_LICENSE_FILE -machinefile $STARCCM_TMP/machinefile-$SLURM_JOB_ID $PWD/your-file.sim -mpi intel -fabric psm2
File : starccm_job.sh

#!/bin/bash
#SBATCH --account=def-group   # Specify some account
#SBATCH --time=00-01:00       # Time limit: dd-hh:mm
#SBATCH --nodes=1             # Specify 1 or more nodes
#SBATCH --cpus-per-task=32    # Request all cores per node (32 or 44, depending on node type)
#SBATCH --mem=0               # Request all memory per node
#SBATCH --ntasks-per-node=1   # Do not change this value

module load StdEnv/2020       # Do not change

# module load starccm/18.02.008-R8
module load starccm-mixed/18.02.008

export LM_PROJECT='YOUR CD-ADAPCO PROJECT ID GOES HERE'
export CDLMD_LICENSE_FILE="1999@flex.cd-adapco.com"

export STARCCM_TMP="${SCRATCH}/.starccm-${EBVERSIONSTARCCM}"
mkdir -p "$STARCCM_TMP"

slurm_hl2hl.py --format STAR-CCM+ > $STARCCM_TMP/machinefile-$SLURM_JOB_ID

NCORE=$((SLURM_NTASKS * SLURM_CPUS_PER_TASK))

# Remove "-power -podkey $LM_PROJECT" when using an institutional license server…
starccm+ -jvmargs -Xmx4G -jvmargs -Djava.io.tmpdir=$SLURM_TMPDIR -batch -power -podkey $LM_PROJECT -np $NCORE -licpath $CDLMD_LICENSE_FILE -machinefile $STARCCM_TMP/machinefile-$SLURM_JOB_ID $PWD/your-file.sim -mpi intel -fabric psm2
File : starccm_job.sh

#!/bin/bash
#SBATCH --account=def-group   # Specify some account
#SBATCH --time=00-01:00       # Time limit: dd-hh:mm
#SBATCH --nodes=1             # Specify 1 or more nodes
#SBATCH --cpus-per-task=64    # Request all cores per node
#SBATCH --mem=0               # Request all memory per node
#SBATCH --ntasks-per-node=1   # Do not change this value

module load StdEnv/2020       # Do not change

# module load starccm/18.02.008-R8
module load starccm-mixed/18.02.008

export LM_PROJECT='YOUR CD-ADAPCO PROJECT ID GOES HERE'
export CDLMD_LICENSE_FILE="1999@flex.cd-adapco.com"

export STARCCM_TMP="${SCRATCH}/.starccm-${EBVERSIONSTARCCM}"
mkdir -p "$STARCCM_TMP"

slurm_hl2hl.py --format STAR-CCM+ > $STARCCM_TMP/machinefile-$SLURM_JOB_ID

NCORE=$((SLURM_NTASKS * SLURM_CPUS_PER_TASK))

# Remove "-power -podkey $LM_PROJECT" when using an institutional license server…
starccm+ -jvmargs -Xmx4G -jvmargs -Djava.io.tmpdir=$SLURM_TMPDIR -batch -power -podkey $LM_PROJECT -np $NCORE -licpath $CDLMD_LICENSE_FILE -machinefile $STARCCM_TMP/machinefile-$SLURM_JOB_ID $PWD/your-file.sim -mpi openmpi


File : starccm_job.sh

#!/bin/bash

#SBATCH --time=0-00:30        # Time limit: d-hh:mm
#SBATCH --nodes=1             # Specify 1 or more nodes
#SBATCH --cpus-per-task=40    # Request all cores per node
#SBATCH --ntasks-per-node=1   # Do not change this value
#SBATCH --mail-type=BEGIN
#SBATCH --mail-type=END
 
cd $SLURM_SUBMIT_DIR

ssh nia-gw -L 1999:flex.cd-adapco.com:1999 -L 2099:flex.cd-adapco.com:2099 -N -f
export LM_PROJECT='YOUR CD-ADAPCO PROJECT ID GOES HERE'
export CDLMD_LICENSE_FILE="1999@127.0.0.1"
 
module load CCEnv
module load StdEnv/2018.3
module load starccm/13.06.012-R8

slurm_hl2hl.py --format STAR-CCM+ > $SLURM_SUBMIT_DIR/machinefile_$SLURM_JOB_ID
 
NCORE=$((SLURM_NTASKS * SLURM_CPUS_PER_TASK))
 
# Workaround for license failures: 
# until the exit status is equal to 0, we try to get Star-CCM+ to start (here, for at least 5 times).
i=1
RET=-1
while [ $i -le 5 ] && [ $RET -ne 0 ]; do
        [ $i -eq 1 ] || sleep 5
        echo "Attempt number: $i"
        # Remove "-power -podkey $LM_PROJECT" when using an institutional license server…
        starccm+ -batch -power -podkey $LM_PROJECT -np $NCORE -machinefile $SLURM_SUBMIT_DIR/machinefile_$SLURM_JOB_ID $SLURM_SUBMIT_DIR/your-simulation-file.java $SLURM_SUBMIT_DIR/your-simulation-file.sim > $SLURM_JOB_ID.results
        RET=$?
        i=$((i+1))
done
exit $RET


Remote visualization

Preparation

To set up your account for remote visualization:

  1. Create ~/.licenses/starccm.lic as described above.
  2. Users with a POD license should also
     set: export LM_PROJECT='CD-ADAPCO PROJECT ID' and
     add: -power to the other command line options shown below.

Compute nodes

Connect with TigerVNC and open a terminal window…

STAR-CCM+ 15.04.010 (or newer versions)
module load StdEnv/2020
module load starccm-mixed/17.02.007 **OR** starccm/17.02.007-R8
starccm+
STAR-CCM+ 14.06.010, 14.04.013, 14.02.012
module load StdEnv/2016
module load starccm-mixed/14.06.010 **OR** starccm/14.06.010-R8
starccm+
STAR-CCM+ 13.06.012 (or older versions)
module load StdEnv/2016
module load starccm-mixed/13.06.012 **OR** starccm/13.06.012-R8
starccm+ -mesa

VDI nodes

Connect with TigerVNC and open a terminal window (Applications -> Systems Tools -> Mate Terminal)…

STAR-CCM+ 15.04.010 (or newer versions)
module load CcEnv StdEnv/2020
module load starccm-mixed/17.02.007 **OR** starccm/17.02.007-R8
starccm+ -clientldpreload /usr/lib64/VirtualGL/libvglfaker.so
STAR-CCM+ 14.06.013 (this version only)
module load CcEnv StdEnv/2016
module load starccm-mixed/14.06.013 **OR** starccm/14.06.013-R8
starccm+ -clientldpreload /usr/lib64/VirtualGL/libvglfaker.so
STAR-CCM+ 13.06.012 (or older versions)
module load CcEnv StdEnv/2016
module load starccm-mixed/13.06.012 **OR** starccm/13.06.012-R8
starccm+ -mesa