Star-CCM+



<!--T:4-->
Neither IBM Platform MPI nor Intel MPI is tightly coupled with our scheduler; you must therefore tell <tt>starccm+</tt> which hosts to use by means of a file containing the list of available hosts. To produce this file, we provide the <tt>slurm_hl2hl.py</tt> script, which will output the list of hosts when called with the option <tt>--format STAR-CCM+</tt>. This list can then be written to a file and read by Star-CCM+. Also, because these MPI distributions are not tightly integrated with our scheduler, you should use the options <tt>--ntasks-per-node=1</tt> and <tt>--cpus-per-task=32</tt> when submitting a job. As a special case, when submitting jobs with version 14.02.012 modules on Cedar, you must add <code>-fabric psm2</code> to the <tt>starccm+</tt> command line (the last line in the Cedar tab of the <tt>starccm_job.sh</tt> Slurm script below) for multi-node jobs to run properly; otherwise no output will be obtained. All module versions on Graham require this option for multi-node jobs.
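For example, inside a job script the host file can be produced as follows (a minimal sketch; the file name <tt>machinefile</tt> is arbitrary):

<pre>
slurm_hl2hl.py --format STAR-CCM+ > machinefile
</pre>

The resulting file is then passed to <tt>starccm+</tt> via its <tt>-machinefile</tt> option, as sketched at the end of the job script below.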


<!--T:5-->
|contents=
#!/bin/bash
#SBATCH --account=def-group       # Specify some account
#SBATCH --time=00-01:00           # Time limit: dd-hh:mm
#SBATCH --nodes=2                 # Specify 1 or more nodes
#SBATCH --cpus-per-task=32        # Or 44; request all cores per node
#SBATCH --mem=0                   # Request all memory per node
#SBATCH --ntasks-per-node=1       # Do not change this value
 
module load StdEnv/2020            # Required for version 15.04.010 or newer


# Pick an appropriate STAR-CCM+ module version and precision
# module load starccm/14.06.013-R8
# module load starccm-mixed/14.06.013
# module load starccm/15.04.010-R8
module load starccm-mixed/15.04.010


export LM_PROJECT='YOUR CD-ADAPCO PROJECT ID GOES HERE'
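
# The lines below are an illustrative sketch of the remainder of the
# script; the simulation file name mysim.sim is a placeholder.

# Write the list of allocated hosts in STAR-CCM+ format:
slurm_hl2hl.py --format STAR-CCM+ > machinefile

# Total core count = nodes x cores per node:
NCORE=$((SLURM_NNODES * SLURM_CPUS_PER_TASK))

# Run in batch mode with Power-on-Demand licensing; for multi-node jobs
# on Graham (all versions) or on Cedar with 14.02.012, append -fabric psm2:
starccm+ -batch -power -np $NCORE -podkey $LM_PROJECT -machinefile machinefile mysim.sim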