Star-CCM+

*[https://software.intel.com/en-us/intel-mpi-library Intel MPI] is specified with option <tt>-mpi intel</tt>.


Neither IBM Platform MPI nor Intel MPI is tightly coupled with our scheduler; you must therefore tell <tt>starccm+</tt> which hosts to use by means of a file containing the list of available hosts. To produce this file, we provide the <tt>slurm_hl2hl.py</tt> script, which will output the list of hosts when called with the option <tt>--format STAR-CCM+</tt>. This list can then be written to a file and read by Star-CCM+. Also, because these distributions of MPI are not tightly integrated with our scheduler, you should use the option <tt>--ntasks-per-node=1</tt> and set <tt>--cpus-per-task</tt> to use all cores, as shown in the scripts. As a special case, when submitting jobs with the version 14.02.012 or 14.04.013 modules on Cedar, one must add <code>-fabric psm2</code> to the <tt>starccm+</tt> command line (the last line in the Cedar tab of the <tt>starccm_job.sh</tt> script below) for multi-node jobs to run properly; otherwise no output will be obtained.
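For example, the job scripts in the tabs below produce the host list and pass it to <tt>starccm+</tt> along these lines (an abridged sketch; the license-related options shown in the full scripts are omitted here):
<pre>
# Write the list of allocated hosts in STAR-CCM+ format
slurm_hl2hl.py --format STAR-CCM+ > machinefile

# Total cores = number of tasks (one per node) x cores per task
NCORE=$((SLURM_NTASKS * SLURM_CPUS_PER_TASK))

# Run in batch mode on the listed hosts, using Intel MPI
starccm+ -batch -np $NCORE -machinefile $PWD/machinefile -mpi intel $PWD/your-file.sim
</pre>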


You will also need to set up your job environment to use your license. If you are using CD-adapco's online "pay-on-usage" server, the configuration is rather simple. If you are using an internal license server, please [mailto:support@computecanada.ca contact us] so that we can help you set up access to it. When all is done, your submit script should look like one of the examples below, where 2 nodes are used for 1 hour; you can adjust these numbers to fit your needs.
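For the pay-on-usage server, the license is selected through two environment variables set in the job scripts below; the project ID is a placeholder that you must replace with your own:
<pre>
export LM_PROJECT='YOUR CD-ADAPCO PROJECT ID GOES HERE'   # project (POD) ID used by the -podkey option
export CDLMD_LICENSE_FILE="1999@flex.cd-adapco.com"       # port@host of the CD-adapco license server
</pre>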


Note that on [[Niagara]] the compute nodes mount the <tt>$HOME</tt> filesystem as read-only. It is therefore important to define the environment variable <tt>$STARCCM_TMP</tt> and point it to a location on <tt>$SCRATCH</tt> that is unique to the version of StarCCM+; otherwise StarCCM+ will try to create such a directory in <tt>$HOME</tt> and crash in the process.
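The scripts below do this with two lines such as the following, where <tt>$EBVERSIONSTARCCM</tt> is set by the loaded StarCCM+ module to its version number:
<pre>
export STARCCM_TMP="${SCRATCH}/.starccm-${EBVERSIONSTARCCM}"   # version-specific temporary directory on scratch
mkdir -p "$STARCCM_TMP"                                        # create it before StarCCM+ starts
</pre>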


<tabs>
<tab name="Beluga" >
{{File
|name=starccm_job.sh
|lang="bash"
|contents=
#!/bin/bash
#SBATCH --account=def-group  # Specify some account
#SBATCH --time=00-01:00      # Time limit: dd-hh:mm
#SBATCH --nodes=2            # Specify 1 or more nodes
#SBATCH --cpus-per-task=40   # Request all cores per node
#SBATCH --mem=0              # Request all memory per node
#SBATCH --ntasks-per-node=1  # Do not change this value


module load StdEnv/2020      # Uncomment for 15.04.010 or newer versions

# module load starccm/14.06.013-R8
# module load starccm-mixed/14.06.013
# module load starccm/15.04.010-R8
module load starccm-mixed/15.04.010


export LM_PROJECT='YOUR CD-ADAPCO PROJECT ID GOES HERE'
export CDLMD_LICENSE_FILE="1999@flex.cd-adapco.com"

export STARCCM_TMP="${SCRATCH}/.starccm-${EBVERSIONSTARCCM}"
mkdir -p "$STARCCM_TMP"

slurm_hl2hl.py --format STAR-CCM+ > machinefile
NCORE=$((SLURM_NTASKS * SLURM_CPUS_PER_TASK))


starccm+ -batch -power -np $NCORE -podkey $LM_PROJECT -licpath $CDLMD_LICENSE_FILE -machinefile $PWD/machinefile $PWD/your-file.sim


}}</tab>
<tab name="Cedar" >
{{File
|name=starccm_job.sh
|lang="bash"
|contents=
#!/bin/bash
#SBATCH --account=def-group  # Specify some account
#SBATCH --time=00-01:00      # Time limit: dd-hh:mm
#SBATCH --nodes=2            # Specify 1 or more nodes
#SBATCH --cpus-per-task=48   # or 32 to request all cores per node
#SBATCH --mem=0              # Request all memory per node
#SBATCH --ntasks-per-node=1  # Do not change this value


module load StdEnv/2020      # Uncomment for 15.04.010 or newer versions

# module load starccm/14.06.013-R8
# module load starccm-mixed/14.06.013
# module load starccm/15.04.010-R8
module load starccm-mixed/15.04.010


export LM_PROJECT='YOUR CD-ADAPCO PROJECT ID GOES HERE'
export CDLMD_LICENSE_FILE="1999@flex.cd-adapco.com"

export STARCCM_TMP="${SCRATCH}/.starccm-${EBVERSIONSTARCCM}"
mkdir -p "$STARCCM_TMP"

slurm_hl2hl.py --format STAR-CCM+ > machinefile
NCORE=$((SLURM_NTASKS * SLURM_CPUS_PER_TASK))


starccm+ -batch -power -np $NCORE -podkey $LM_PROJECT -licpath $CDLMD_LICENSE_FILE -machinefile $PWD/machinefile $PWD/your-file.sim -mpi intel
 
}}</tab>
<tab name="Graham" >
{{File
|name=starccm_job.sh
|lang="bash"
|contents=
#!/bin/bash
#SBATCH --account=def-group  # Specify some account
#SBATCH --time=00-01:00      # Time limit: dd-hh:mm
#SBATCH --nodes=2            # Specify 1 or more nodes
#SBATCH --cpus-per-task=32   # or 44 to request all cores per node
#SBATCH --mem=0              # Request all memory per node
#SBATCH --ntasks-per-node=1  # Do not change this value
 
module load StdEnv/2020      # Uncomment for 15.04.010 or newer versions
 
# module load starccm/14.06.013-R8
# module load starccm-mixed/14.06.013
# module load starccm/15.04.010-R8
module load starccm-mixed/15.04.010
 
export LM_PROJECT='YOUR CD-ADAPCO PROJECT ID GOES HERE'
export CDLMD_LICENSE_FILE="1999@flex.cd-adapco.com"
 
export STARCCM_TMP="${SCRATCH}/.starccm-${EBVERSIONSTARCCM}"
mkdir -p "$STARCCM_TMP"
 
slurm_hl2hl.py --format STAR-CCM+ > machinefile
 
NCORE=$((SLURM_NTASKS * SLURM_CPUS_PER_TASK))
 
# Append -fabric psm2 to next line when using module versions 15.04.010 or newer ...
 
starccm+ -jvmargs -Xmx4G -jvmargs -Djava.io.tmpdir=$SLURM_TMPDIR -batch -power -np $NCORE -podkey $LM_PROJECT -licpath $CDLMD_LICENSE_FILE -machinefile $PWD/machinefile $PWD/your-file.sim -mpi intel -fabric psm2


}}</tab>
<tab name="Niagara" >
{{File
|name=starccm_job.sh
|lang="bash"
|contents=
#!/bin/bash
#SBATCH --account=def-group  # Specify some account
#SBATCH --time=00-01:00      # Time limit: dd-hh:mm
#SBATCH --nodes=2            # Specify 1 or more nodes
NCORE=$((SLURM_NTASKS * SLURM_CPUS_PER_TASK))


starccm+ -batch -power -np $NCORE -podkey $LM_PROJECT -licpath $CDLMD_LICENSE_FILE -machinefile $PWD/machinefile $PWD/your-file.sim


}}</tab>
</tabs>