Star-CCM+



<!--T:4-->
Neither IBM Platform MPI nor Intel MPI is tightly coupled with our scheduler; you must therefore tell <tt>starccm+</tt> which hosts to use by means of a file containing the list of available hosts. To produce this file, we provide the <tt>slurm_hl2hl.py</tt> script, which will output the list of hosts when called with the option <tt>--format STAR-CCM+</tt>. This list can then be written to a file and read by Star-CCM+. Also, because these distributions of MPI are not tightly integrated with our scheduler, you should use the options <tt>--ntasks-per-node=1</tt> and <tt>--cpus-per-task=32</tt> when submitting a job. As a special case, when submitting jobs with the version 14.02.012 modules on Cedar, you must add <code>-fabric psm2</code> to the <tt>starccm+</tt> command line (the last line in the Cedar tab of the <tt>starccm_job.sh</tt> Slurm script below) for multi-node jobs to run properly; otherwise, no output will be obtained.
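For example, from within your job script the host file can be generated like this (a minimal sketch; <tt>machinefile</tt> is an arbitrary file name):

slurm_hl2hl.py --format STAR-CCM+ > machinefile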


<!--T:5-->
You will also need to set up your job environment to use your license. If you are using Adapco's online "pay-on-usage" server, the configuration is rather simple. If you are using an internal license server, please [mailto:support@computecanada.ca contact us] so that we can help you set up access to it.
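For the pay-on-usage server, the setup typically amounts to exporting two environment variables in your job script, along these lines (a sketch; the POD key is a placeholder, and you should confirm the server address and port with Siemens):

export LM_PROJECT='YOUR CD-ADAPCO PROJECT ID GOES HERE'
export CDLMD_LICENSE_FILE="1999@flex.cd-adapco.com"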
When all is done, your submit script should look like this, where 2 nodes are used for 1 hour; you can adjust these numbers to fit your needs.


<!--T:8-->
#!/bin/bash
#SBATCH --time=0-01:00        # Time limit: d-hh:mm
#SBATCH --nodes=2             # Specify 1 or more nodes
#SBATCH --cpus-per-task=48    # or 32 for smaller full nodes
#SBATCH --mem=0               # Request all available MEM on full nodes
#SBATCH --ntasks-per-node=1   # Do not change this value


# Pick an appropriate STARCCM module/version and precision;
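# for example (the version shown is a placeholder; run "module avail starccm"
# to list the installed versions, where "-R8" variants are double precision):
module load starccm-mixed/17.02.007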
#!/bin/bash
#SBATCH --time=0-01:00        # Time limit: d-hh:mm
#SBATCH --nodes=2             # Specify 1 or more nodes
#SBATCH --cpus-per-task=40    # or 80 to use HyperThreading
#SBATCH --mem=0               # Request all available MEM on full nodes
#SBATCH --ntasks-per-node=1   # Do not change this value


cd $SLURM_SUBMIT_DIR
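# Generate the host list and launch STAR-CCM+ (a sketch: LM_PROJECT is the
# POD key exported in the license setup above, and yourfile.sim is a
# placeholder for your own simulation file)
slurm_hl2hl.py --format STAR-CCM+ > machinefile
NCORE=$((SLURM_NNODES * SLURM_CPUS_PER_TASK))
starccm+ -batch -power -np $NCORE -podkey $LM_PROJECT -machinefile machinefile yourfile.sim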