Star-CCM+
* <tt>starccm</tt> for the double-precision flavour,
* <tt>starccm-mixed</tt> for the mixed-precision flavour.
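For example, the mixed-precision flavour could be loaded as shown below; append a specific version if <tt>module avail starccm-mixed</tt> lists more than one.

<syntaxhighlight lang="bash">
# Load the mixed-precision flavour of Star-CCM+; append a version
# (as listed by "module avail starccm-mixed") if required.
module load starccm-mixed
</syntaxhighlight>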
<!--T:6-->
Star-CCM+ comes bundled with two different distributions of MPI:
*[https://www.ibm.com/developerworks/downloads/im/mpi/index.html IBM Platform MPI] is the default distribution, but does not work on [[Cedar]]'s Intel OmniPath network fabric;
*[https://software.intel.com/en-us/intel-mpi-library Intel MPI] is specified with the option <tt>-mpi intel</tt>, as in the example below.
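For example, on [[Cedar]], where the default IBM Platform MPI does not work, the Intel distribution could be requested directly on the command line. This is only a sketch: the core count and simulation file name are placeholders, and any licensing options your site requires are omitted.

<syntaxhighlight lang="bash">
# Illustrative only: run a simulation in batch mode with the Intel MPI distribution.
# "-np 64" and "mysim.sim" are placeholders.
starccm+ -np 64 -mpi intel -batch mysim.sim
</syntaxhighlight>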
<!--T:4-->
Neither IBM Platform MPI nor Intel MPI is tightly coupled with our scheduler, so you must tell <tt>starccm+</tt> which hosts to use by means of a file containing the list of available hosts. To produce this file, we provide the <tt>slurm_hl2hl.py</tt> script, which outputs the list of hosts when called with the option <tt>--format STAR-CCM+</tt>; this list can then be written to a file and read by Star-CCM+. For the same reason, use the option <tt>--ntasks-per-node=1</tt> and set <tt>--cpus-per-task</tt> to use all cores, as shown in the scripts.
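The sketch below shows how these pieces could fit together in a submission script; the account name, time limit, node count, and cores-per-node value are placeholders, and the complete scripts elsewhere on this page remain the reference.

<syntaxhighlight lang="bash">
#!/bin/bash
#SBATCH --account=def-someuser   # placeholder account name
#SBATCH --time=01:00:00          # placeholder time limit
#SBATCH --nodes=2                # placeholder node count
#SBATCH --ntasks-per-node=1      # MPI is not scheduler-integrated: one task per node
#SBATCH --cpus-per-task=48       # use all cores of each node (placeholder value)

module load starccm-mixed

# Write the list of hosts allocated by Slurm in the format expected by Star-CCM+.
slurm_hl2hl.py --format STAR-CCM+ > $PWD/machinefile

# Total number of cores across the allocated nodes.
NCORE=$((SLURM_NNODES * SLURM_CPUS_PER_TASK))

# Launch the solver against the host list; on Cedar add "-mpi intel" as shown above,
# and include any licensing options your site requires. "mysim.sim" is a placeholder.
starccm+ -np $NCORE -machinefile $PWD/machinefile -batch mysim.sim
</syntaxhighlight>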
<!--T:5-->