Star-CCM+

STAR-CCM+ is a multidisciplinary engineering simulation suite, supporting the modelling of acoustics, fluid dynamics, heat transfer, rheology, multiphase flows, particle flows, solid mechanics, reacting flows, electrochemistry, and electromagnetics. It is developed by Siemens.

License limitations

Compute Canada has the authorization to host STAR-CCM+ binaries on its servers, but does not provide licenses to users. You will need your own license in order to use this software. A remote pod license can be purchased directly from Siemens. Alternatively, a local license hosted at your institution can be used, provided it can be accessed through the firewall from the cluster where jobs are to be run.

Configuring your account

To configure your account to use your own license server with our STAR-CCM+ module, create a license file $HOME/.licenses/starccm.lic with the following content:

File : starccm.lic

SERVER IP ANY PORT
USE_SERVER


where you replace IP and PORT with the IP address and port used by the license server at your institution.
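For example, a minimal file assuming a hypothetical license server at address 10.10.10.10 listening on port 27000 would be:

SERVER 10.10.10.10 ANY 27000
USE_SERVER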

Pod license file

Researchers who have purchased a pod license from Siemens may simply configure the following $HOME/.licenses/starccm.lic file on any Alliance cluster where STAR-CCM+ jobs are to be run:

File : starccm.lic

SERVER flex.cd-adapco.com ANY 1999
USE_SERVER


Cluster batch job submission

Select one of the available modules:

  • starccm for the double-precision flavour,
  • starccm-mixed for the mixed precision flavour.
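
For instance, to list the available versions of either module and load one (a sketch using the Lmod commands available on our clusters and the 18.02.008 version used in the scripts below):

module spider starccm-mixed           # list available mixed-precision versions
module load starccm-mixed/18.02.008   # load one of them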

Star-CCM+ comes bundled with two different distributions of MPI:

  • IBM Platform MPI is the default distribution, but does not work on Cedar's Intel OmniPath network fabric;
  • Intel MPI is specified with option -mpi intel.

Neither IBM Platform MPI nor Intel MPI is tightly coupled with our scheduler; you must therefore tell starccm+ which hosts to use by means of a file containing the list of available hosts. To produce this file, we provide the slurm_hl2hl.py script, which outputs the list of hosts when called with the option --format STAR-CCM+. This list can then be written to a file and read by STAR-CCM+. Also, because these MPI distributions are not tightly integrated with our scheduler, you should use the option --ntasks-per-node=1 and set --cpus-per-task to use all cores, as shown in the scripts below.
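
In outline, this mechanism reduces to the following lines, taken from the full job scripts below (license and JVM options omitted here):

slurm_hl2hl.py --format STAR-CCM+ > machinefile-$SLURM_JOB_ID    # write the host list
NCORE=$((SLURM_NTASKS * SLURM_CPUS_PER_TASK))                    # 1 task per node x cores per task
starccm+ -batch -np $NCORE -machinefile $PWD/machinefile-$SLURM_JOB_ID $PWD/your-file.sim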

You will also need to set up your job environment to use your license. If you are using CD-adapco's online "pay-on-usage" server, the configuration is rather simple. If you are using an internal license server, please contact us so that we can help you set up access to it.

Note that on Niagara the compute nodes mount the $HOME filesystem as read-only. It is therefore important to define the environment variable $STARCCM_TMP and point it to a location on $SCRATCH that is unique to the version of STAR-CCM+; otherwise STAR-CCM+ will try to create such a directory under $HOME and crash in the process.
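
The corresponding lines, as they appear in the job scripts below:

export STARCCM_TMP="${SCRATCH}/.starccm-${EBVERSIONSTARCCM}"
mkdir -p "$STARCCM_TMP"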

File : starccm_job.sh

#!/bin/bash
#SBATCH --account=def-group   # Specify some account
#SBATCH --time=00-01:00       # Time limit: dd-hh:mm
#SBATCH --nodes=1             # Specify 1 or more nodes
#SBATCH --cpus-per-task=40    # Request all cores per node
#SBATCH --mem=0               # Request all memory per node
#SBATCH --ntasks-per-node=1   # Do not change this value

module load StdEnv/2020       # Do not change

# module load starccm/18.02.008-R8
module load starccm-mixed/18.02.008

export LM_PROJECT='YOUR CD-ADAPCO PROJECT ID GOES HERE'
export CDLMD_LICENSE_FILE="1999@flex.cd-adapco.com"

export STARCCM_TMP="${SCRATCH}/.starccm-${EBVERSIONSTARCCM}"
mkdir -p "$STARCCM_TMP"

slurm_hl2hl.py --format STAR-CCM+ > machinefile-$SLURM_JOB_ID

NCORE=$((SLURM_NTASKS * SLURM_CPUS_PER_TASK))

starccm+ -batch -power -np $NCORE -podkey $LM_PROJECT -licpath $CDLMD_LICENSE_FILE -machinefile $PWD/machinefile-$SLURM_JOB_ID $PWD/your-file.sim
File : starccm_job.sh

#!/bin/bash
#SBATCH --account=def-group   # Specify some account
#SBATCH --time=00-01:00       # Time limit: dd-hh:mm
#SBATCH --nodes=1             # Specify 1 or more nodes
#SBATCH --cpus-per-task=48    # Request all cores per node (48 or 32)
#SBATCH --mem=0               # Request all memory per node
#SBATCH --ntasks-per-node=1   # Do not change this value

module load StdEnv/2020       # Do not change

# module load starccm/18.02.008-R8
module load starccm-mixed/18.02.008

export LM_PROJECT='YOUR CD-ADAPCO PROJECT ID GOES HERE'
export CDLMD_LICENSE_FILE="1999@flex.cd-adapco.com"

export STARCCM_TMP="${SCRATCH}/.starccm-${EBVERSIONSTARCCM}"
mkdir -p "$STARCCM_TMP"

slurm_hl2hl.py --format STAR-CCM+ > machinefile-$SLURM_JOB_ID

NCORE=$((SLURM_NTASKS * SLURM_CPUS_PER_TASK))

starccm+ -batch -power -np $NCORE -podkey $LM_PROJECT -licpath $CDLMD_LICENSE_FILE -machinefile $PWD/machinefile-$SLURM_JOB_ID $PWD/your-file.sim -mpi intel -fabric psm2
File : starccm_job.sh

#!/bin/bash
#SBATCH --account=def-group   # Specify some account
#SBATCH --time=00-01:00       # Time limit: dd-hh:mm
#SBATCH --nodes=1             # Specify 1 or more nodes
#SBATCH --cpus-per-task=32    # Request all cores per node (32 or 44)
#SBATCH --mem=0               # Request all memory per node
#SBATCH --ntasks-per-node=1   # Do not change this value

module load StdEnv/2020       # Do not change

# module load starccm/18.02.008-R8
module load starccm-mixed/18.02.008

export LM_PROJECT='YOUR CD-ADAPCO PROJECT ID GOES HERE'
export CDLMD_LICENSE_FILE="1999@flex.cd-adapco.com"

export STARCCM_TMP="${SCRATCH}/.starccm-${EBVERSIONSTARCCM}"
mkdir -p "$STARCCM_TMP"

slurm_hl2hl.py --format STAR-CCM+ > machinefile-$SLURM_JOB_ID

NCORE=$((SLURM_NTASKS * SLURM_CPUS_PER_TASK))

# Remove "-podkey $LM_PROJECT" from next line if using an institutional server

starccm+ -jvmargs -Xmx4G -jvmargs -Djava.io.tmpdir=$SLURM_TMPDIR -batch -power -np $NCORE -podkey $LM_PROJECT -licpath $CDLMD_LICENSE_FILE -machinefile $PWD/machinefile-$SLURM_JOB_ID $PWD/your-file.sim -mpi intel -fabric psm2
File : starccm_job.sh

#!/bin/bash
#SBATCH --account=def-group   # Specify some account
#SBATCH --time=00-01:00       # Time limit: dd-hh:mm
#SBATCH --nodes=1             # Specify 1 or more nodes
#SBATCH --cpus-per-task=64    # Request all cores per node
#SBATCH --mem=0               # Request all memory per node
#SBATCH --ntasks-per-node=1   # Do not change this value

module load StdEnv/2020       # Do not change

# module load starccm/18.02.008-R8
module load starccm-mixed/18.02.008

export LM_PROJECT='YOUR CD-ADAPCO PROJECT ID GOES HERE'
export CDLMD_LICENSE_FILE="1999@flex.cd-adapco.com"

export STARCCM_TMP="${SCRATCH}/.starccm-${EBVERSIONSTARCCM}"
mkdir -p "$STARCCM_TMP"

slurm_hl2hl.py --format STAR-CCM+ > machinefile-$SLURM_JOB_ID

NCORE=$((SLURM_NTASKS * SLURM_CPUS_PER_TASK))

starccm+ -jvmargs -Xmx4G -jvmargs -Djava.io.tmpdir=$SLURM_TMPDIR -batch -power -np $NCORE -podkey $LM_PROJECT -licpath $CDLMD_LICENSE_FILE -machinefile $PWD/machinefile-$SLURM_JOB_ID $PWD/your-file.sim -mpi openmpi


File : starccm_job.sh (Niagara)

#!/bin/bash

#SBATCH --time=0-00:30        # Time limit: d-hh:mm
#SBATCH --nodes=1             # Specify 1 or more nodes
#SBATCH --cpus-per-task=40    # Request all cores per node
#SBATCH --ntasks-per-node=1   # Do not change this value
#SBATCH --mail-type=BEGIN
#SBATCH --mail-type=END
 
cd $SLURM_SUBMIT_DIR

ssh nia-gw -L 1999:flex.cd-adapco.com:1999 -L 2099:flex.cd-adapco.com:2099 -N -f
export LM_PROJECT='YOUR CD-ADAPCO PROJECT ID GOES HERE'
export CDLMD_LICENSE_FILE="1999@127.0.0.1"
 
module load CCEnv
module load StdEnv/2018.3
module load starccm/13.06.012-R8

slurm_hl2hl.py --format STAR-CCM+ > $SLURM_SUBMIT_DIR/machinefile_$SLURM_JOB_ID
 
NCORE=$((SLURM_NTASKS * SLURM_CPUS_PER_TASK))
 
# Workaround for license failures:
# try up to 5 times to start starccm+, checking the exit status (143 on license failure, 0 on success).
i=1
RET=-1
while [ $i -le 5 ] && [ $RET -ne 0 ]; do
        [ $i -eq 1 ] || sleep 5
        echo "Attempt number: "$i
        starccm+ -batch -power -np $NCORE -podkey $LM_PROJECT -machinefile $SLURM_SUBMIT_DIR/machinefile_$SLURM_JOB_ID $SLURM_SUBMIT_DIR/your-simulation-file.java $SLURM_SUBMIT_DIR/your-simulation-file.sim > $SLURM_JOB_ID.results
        RET=$?
        i=$((i+1))
   done
exit $RET
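
After saving the script for your cluster as starccm_job.sh and replacing the placeholder simulation file name with your own, submit it with the standard Slurm command:

sbatch starccm_job.sh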


Remote visualization

Preparation

To set up your account for remote visualization:

  1. Create ~/.licenses/starccm.lic as described above.
  2. Users with a Power-on-demand (POD) license should also:
     • set export LM_PROJECT='CD-ADAPCO PROJECT ID', and
     • add -power to the other command line options shown below.
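
Put together, a POD user's session would start along these lines (the project ID is a placeholder):

export LM_PROJECT='CD-ADAPCO PROJECT ID'   # your POD project ID
starccm+ -power                            # plus the options shown below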

Compute nodes

Connect with TigerVNC and open a terminal window ...

STAR-CCM+ 15.04.010 (or newer versions)
module load StdEnv/2020
module load starccm-mixed/17.02.007 OR starccm/17.02.007-R8
starccm+
STAR-CCM+ 14.06.010, 14.04.013, 14.02.012
module load StdEnv/2016
module load starccm-mixed/14.06.010 OR starccm/14.06.010-R8
starccm+
STAR-CCM+ 13.06.012 (or older versions)
module load StdEnv/2016
module load starccm-mixed/13.06.012 OR starccm/13.06.012-R8
starccm+ -mesa

VDI nodes

Connect with TigerVNC and open a terminal window (Applications -> System Tools -> MATE Terminal) ...

STAR-CCM+ 15.04.010 (or newer versions)
module load CcEnv StdEnv/2020
module load starccm-mixed/17.02.007 OR starccm/17.02.007-R8
starccm+ -clientldpreload /usr/lib64/VirtualGL/libvglfaker.so
STAR-CCM+ 14.06.013 (this version only)
module load CcEnv StdEnv/2016
module load starccm-mixed/14.06.013 OR starccm/14.06.013-R8
starccm+ -clientldpreload /usr/lib64/VirtualGL/libvglfaker.so
STAR-CCM+ 13.06.012 (or older versions)
module load CcEnv StdEnv/2016
module load starccm-mixed/13.06.012 OR starccm/13.06.012-R8
starccm+ -mesa