STAR-CCM+ is a multidisciplinary engineering simulation suite, supporting the modelling of acoustics, fluid dynamics, heat transfer, rheology, multiphase flows, particle flows, solid mechanics, reacting flows, electrochemistry, and electromagnetics. It is developed by Siemens.
License limitations
Compute Canada is authorized to host STAR-CCM+ binaries on its servers, but does not provide licenses to users; you will need your own license in order to use this software.
Configuring your account to use your own license server
To configure your account to use your own license server with our Star-CCM+ module, create a file $HOME/.licenses/starccm.lic with the following content:
SERVER IP ANY PORT
USE_SERVER
where you replace IP and PORT with the IP address and the port used by the license server.
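For example, for a (hypothetical) license server at 10.0.0.5 listening on port 1999, the file would read:

SERVER 10.0.0.5 ANY 1999
USE_SERVER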
Running Star-CCM+ on Compute Canada systems
Select one of the available modules:
- starccm for the double-precision flavour,
- starccm-mixed for the mixed precision flavour.
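For example, to load the mixed-precision flavour (the version shown is illustrative; run module avail to see the versions actually installed):

module avail starccm          # lists both flavours and their installed versions
module load starccm-mixed/13.04.010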
Star-CCM+ comes bundled with two different distributions of MPI:
- IBM Platform MPI is the default distribution, but does not work on Cedar's Intel OmniPath network fabric;
- Intel MPI is specified with option -mpi intel.
Neither IBM Platform MPI nor Intel MPI is tightly coupled with our scheduler, so you must tell starccm+ which hosts to use by means of a file containing the list of available hosts. To produce this file, we provide the slurm_hl2hl.py script, which outputs the list of hosts in Star-CCM+ format when called with the option --format STAR-CCM+; this list can then be written to a file and read by Star-CCM+. For the same reason, you should submit jobs with --ntasks-per-node=1 and set --cpus-per-task to the number of cores per node, as in the sketch below.
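For instance, on 2 nodes of 32 cores each (Graham's node size; adjust to your cluster), the job environment yields the host list and total core count like this:

# --nodes=2 --ntasks-per-node=1 --cpus-per-task=32 gives
# SLURM_NTASKS=2 and SLURM_CPUS_PER_TASK=32
slurm_hl2hl.py --format STAR-CCM+ > machinefile   # hosts allocated to this job
NCORE=$((SLURM_NTASKS * SLURM_CPUS_PER_TASK))     # 2 * 32 = 64 cores, passed to -np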
You will also need to set up your job environment to use your license. If you are using Adapco's online pay-on-usage server, the configuration is rather simple; if you are using an internal license server, please contact us so that we can help you set up access to it. Once this is done, your submit script should look like one of the examples below (for Graham, Cedar, and Niagara respectively), where 2 nodes are used for 1 hour; you can adjust these numbers to fit your needs.
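For the pay-on-usage case, the two relevant environment variables, as used in the scripts below, are your confidential project key and the address of Adapco's license server:

export LM_PROJECT='YOUR ADAPCO PROJECT ID GOES HERE'   # pay-on-usage (POD) key
export CDLMD_LICENSE_FILE="1999@flex.cd-adapco.com"    # port@host of the license server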
Note that on Niagara the compute nodes mount the $HOME filesystem as read-only. It is therefore important to define the environment variable $STARCCM_TMP and point it to a location on $SCRATCH that is unique to the version of StarCCM+; otherwise StarCCM+ will try to create such a directory under $HOME and crash in the process.
Graham:
#!/bin/bash
#SBATCH --time=0-01:00 # Time limit: d-hh:mm
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=32
#SBATCH --mem=0 # Request all available MEM on full nodes
# Pick an appropriate STARCCM module/version and precision;
# module load starccm/12.04.011-R8
module load starccm-mixed/13.04.010
export LM_PROJECT='YOUR ADAPCO PROJECT ID GOES HERE'
export CDLMD_LICENSE_FILE="1999@flex.cd-adapco.com"
export STARCCM_TMP="${SCRATCH}/.starccm-${EBVERSIONSTARCCM}"   # EBVERSIONSTARCCM is set by the starccm module
mkdir -p "$STARCCM_TMP"
slurm_hl2hl.py --format STAR-CCM+ > machinefile
NCORE=$((SLURM_NTASKS * SLURM_CPUS_PER_TASK))
starccm+ -power -np $NCORE -podkey $LM_PROJECT -machinefile machinefile -batch /path/to/your/simulation/file
Cedar:
#!/bin/bash
#SBATCH --time=0-01:00 # Time limit: d-hh:mm
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=48
#SBATCH --mem=0 # Request all available MEM on full nodes
# Pick an appropriate STARCCM module/version and precision;
# module load starccm/12.04.011-R8
module load starccm-mixed/13.04.010
export LM_PROJECT='YOUR ADAPCO PROJECT ID GOES HERE'
export CDLMD_LICENSE_FILE="1999@flex.cd-adapco.com"
export STARCCM_TMP="${SCRATCH}/.starccm-${EBVERSIONSTARCCM}"
mkdir -p "$STARCCM_TMP"
slurm_hl2hl.py --format STAR-CCM+ > machinefile
NCORE=$((SLURM_NTASKS * SLURM_CPUS_PER_TASK))
starccm+ -power -np $NCORE -podkey $LM_PROJECT -licpath $CDLMD_LICENSE_FILE -machinefile `pwd`/machinefile -mpi intel -batch `pwd`/your-simulation-file.sim
Niagara:
#!/bin/bash
#SBATCH --time=0-01:00 # Time limit: d-hh:mm
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=40 # or 80 to use HyperThreading
#SBATCH --mem=0 # Request all available MEM on full nodes
cd $SLURM_SUBMIT_DIR
# Switch from Niagara's default software stack to the Compute Canada stack
module purge --force
module load CCEnv
module load StdEnv
module load starccm/12.04.011-R8
export LM_PROJECT='YOUR ADAPCO PROJECT ID GOES HERE'
export CDLMD_LICENSE_FILE="1999@localhost"
# Compute nodes cannot reach the internet; tunnel the license ports
# through the gateway node nia-gw to Adapco's server
ssh nia-gw -L 1999:flex.cd-adapco.com:1999 -L 2099:flex.cd-adapco.com:2099 -N -f
export STARCCM_TMP="${SCRATCH}/.starccm-${EBVERSIONSTARCCM}"
mkdir -p "$STARCCM_TMP"
# ln -s $STARCCM_TMP $HOME ### only the first time you run the script
slurm_hl2hl.py --format STAR-CCM+ > machinefile
NCORE=$((SLURM_NTASKS * SLURM_CPUS_PER_TASK))
starccm+ -power -np $NCORE -podkey $LM_PROJECT -machinefile machinefile -batch /path/to/your/simulation/file
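Assuming one of the scripts above is saved as starccm_job.sh (an illustrative name), submit and monitor it with the usual Slurm commands:

sbatch starccm_job.sh
squeue -u $USER    # check the job's status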