Star-CCM+

STAR-CCM+ is a multidisciplinary engineering simulation suite, supporting the modelling of acoustics, fluid dynamics, heat transfer, rheology, multiphase flows, particle flows, solid mechanics, reacting flows, electrochemistry, and electromagnetics. It is developed by Siemens.

License limitations

Compute Canada is authorized to host STAR-CCM+ binaries on its servers, but does not provide licenses to users; you will need your own license in order to use this software.

Configuring your account

To configure your account to use your own license server with our STAR-CCM+ module, create a license file $HOME/.licenses/starccm.lic with the following content:

File : starccm.lic

SERVER IP ANY PORT
USE_SERVER


where you replace IP and PORT with the IP address and the port used by your license server.
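
For example, for a hypothetical license server at address 10.11.12.13 listening on port 1999 (replace both values with those of your own server), the file would read:

SERVER 10.11.12.13 ANY 1999
USE_SERVER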

Cluster batch job submission

Select one of the available modules:

  • starccm for the double-precision flavour,
  • starccm-mixed for the mixed precision flavour.
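
To see which versions of either flavour are installed on the cluster you are logged into, you can query the module system, for example:

module spider starccm
module spider starccm-mixed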

Star-CCM+ comes bundled with two different distributions of MPI:

  • IBM Platform MPI is the default distribution, but does not work on Cedar's Intel OmniPath network fabric;
  • Intel MPI is specified with option -mpi intel.

Neither IBM Platform MPI nor Intel MPI is tightly coupled with our scheduler, so you must tell starccm+ which hosts to use by means of a file containing the list of available hosts. To produce this file, we provide the slurm_hl2hl.py script; when called with the option --format STAR-CCM+, it outputs the list of hosts, which can then be written to a file and read by STAR-CCM+. For the same reason, you should use the option --ntasks-per-node=1 and set --cpus-per-task to use all cores, as shown in the scripts below. As a special case, when submitting multi-node jobs with the version 14.02.012 or 14.04.013 modules on Cedar, you must add -fabric psm2 to the starccm+ command line (the last line in the Cedar tab of the starccm_job.sh script below), otherwise the job will produce no output.
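
In practice this amounts to two lines, as used in the scripts below: one to generate the machinefile and one to compute the total core count from the Slurm environment variables:

slurm_hl2hl.py --format STAR-CCM+ > machinefile-$SLURM_JOB_ID
NCORE=$((SLURM_NTASKS * SLURM_CPUS_PER_TASK))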

You will also need to set up your job environment to use your license. If you are using CD-adapco's online "pay-on-usage" server, the configuration is rather simple. If you are using an internal license server, please contact us so that we can help you set up access to it.

Note that on Niagara the compute nodes mount the $HOME filesystem as read-only. It is therefore important to define the environment variable $STARCCM_TMP and point it to a location on $SCRATCH that is unique to the version of STAR-CCM+; otherwise STAR-CCM+ will try to create such a directory in $HOME and crash in the process.
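
The scripts below accomplish this with the following two lines; EBVERSIONSTARCCM is an environment variable set by the starccm module, which makes the directory unique to each version:

export STARCCM_TMP="${SCRATCH}/.starccm-${EBVERSIONSTARCCM}"
mkdir -p "$STARCCM_TMP"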

File : starccm_job.sh (Béluga)

#!/bin/bash
#SBATCH --account=def-group   # Specify some account
#SBATCH --time=00-01:00       # Time limit: dd-hh:mm
#SBATCH --nodes=2             # Specify 1 or more nodes
#SBATCH --cpus-per-task=40    # Request all cores per node
#SBATCH --mem=0               # Request all memory per node
#SBATCH --ntasks-per-node=1   # Do not change this value

# module load StdEnv/2016     # Uncomment for version 14.06.013 or older

# module load starccm/14.06.013-R8
# module load starccm-mixed/14.06.013
# module load starccm/17.02.007-R8
module load starccm-mixed/17.02.007

export LM_PROJECT='YOUR CD-ADAPCO PROJECT ID GOES HERE'
export CDLMD_LICENSE_FILE="1999@flex.cd-adapco.com"

export STARCCM_TMP="${SCRATCH}/.starccm-${EBVERSIONSTARCCM}"
mkdir -p "$STARCCM_TMP"

slurm_hl2hl.py --format STAR-CCM+ > machinefile

NCORE=$((SLURM_NTASKS * SLURM_CPUS_PER_TASK))

starccm+ -batch -power -np $NCORE -podkey $LM_PROJECT -licpath $CDLMD_LICENSE_FILE -machinefile $PWD/machinefile $PWD/your-file.sim
File : starccm_job.sh (Cedar)

#!/bin/bash
#SBATCH --account=def-group   # Specify some account
#SBATCH --time=00-01:00       # Time limit: dd-hh:mm
#SBATCH --nodes=2             # Specify 1 or more nodes
#SBATCH --cpus-per-task=48    # Request all cores per node (or 32 on smaller nodes)
#SBATCH --mem=0               # Request all memory per node
#SBATCH --ntasks-per-node=1   # Do not change this value

# module load StdEnv/2016     # Uncomment for version 14.06.013 or older

# module load starccm/14.06.013-R8
# module load starccm-mixed/14.06.013
# module load starccm/17.02.007-R8
module load starccm-mixed/17.02.007

export LM_PROJECT='YOUR CD-ADAPCO PROJECT ID GOES HERE'
export CDLMD_LICENSE_FILE="1999@flex.cd-adapco.com"

export STARCCM_TMP="${SCRATCH}/.starccm-${EBVERSIONSTARCCM}"
mkdir -p "$STARCCM_TMP"

slurm_hl2hl.py --format STAR-CCM+ > machinefile

NCORE=$((SLURM_NTASKS * SLURM_CPUS_PER_TASK))

starccm+ -batch -power -np $NCORE -podkey $LM_PROJECT -licpath $CDLMD_LICENSE_FILE -machinefile $PWD/machinefile $PWD/your-file.sim -mpi intel
File : starccm_job.sh (Graham)

#!/bin/bash
#SBATCH --account=def-group   # Specify some account
#SBATCH --time=00-01:00       # Time limit: dd-hh:mm
#SBATCH --nodes=2             # Specify 1 or more nodes
#SBATCH --cpus-per-task=32    # Request all cores per node (or 44 on some nodes)
#SBATCH --mem=0               # Request all memory per node
#SBATCH --ntasks-per-node=1   # Do not change this value

# module load StdEnv/2016     # Uncomment for version 14.06.013 or older

# module load starccm/14.06.013-R8
# module load starccm-mixed/14.06.013
# module load starccm/17.02.007-R8
module load starccm-mixed/17.02.007

export LM_PROJECT='YOUR CD-ADAPCO PROJECT ID GOES HERE'
export CDLMD_LICENSE_FILE="1999@flex.cd-adapco.com"

export STARCCM_TMP="${SCRATCH}/.starccm-${EBVERSIONSTARCCM}"
mkdir -p "$STARCCM_TMP"

slurm_hl2hl.py --format STAR-CCM+ > machinefile-$SLURM_JOB_ID

NCORE=$((SLURM_NTASKS * SLURM_CPUS_PER_TASK))

# Append -fabric psm2 to the next line when loading module versions 15.04.010 or newer, as done here:

starccm+ -jvmargs -Xmx4G -jvmargs -Djava.io.tmpdir=$SLURM_TMPDIR -batch -power -np $NCORE -podkey $LM_PROJECT -licpath $CDLMD_LICENSE_FILE -machinefile $PWD/machinefile-$SLURM_JOB_ID $PWD/your-file.sim -mpi intel -fabric psm2
File : starccm_job.sh (Narval)

#!/bin/bash
#SBATCH --account=def-group   # Specify some account
#SBATCH --time=00-01:00       # Time limit: dd-hh:mm
#SBATCH --nodes=2             # Specify 1 or more nodes
#SBATCH --cpus-per-task=64    # Request all cores per node
#SBATCH --mem=0               # Request all memory per node
#SBATCH --ntasks-per-node=1   # Do not change this value

module load StdEnv/2020       # Do not change

# module load starccm/17.02.007-R8
module load starccm-mixed/17.02.007

export LM_PROJECT='YOUR CD-ADAPCO PROJECT ID GOES HERE'
export CDLMD_LICENSE_FILE="1999@flex.cd-adapco.com"

export STARCCM_TMP="${SCRATCH}/.starccm-${EBVERSIONSTARCCM}"
mkdir -p "$STARCCM_TMP"

slurm_hl2hl.py --format STAR-CCM+ > machinefile-$SLURM_JOB_ID

NCORE=$((SLURM_NTASKS * SLURM_CPUS_PER_TASK))

# Append -fabric ucx to the next line when loading module versions 15.04.010 or newer

starccm+ -jvmargs -Xmx4G -jvmargs -Djava.io.tmpdir=$SLURM_TMPDIR -batch -power -np $NCORE -podkey $LM_PROJECT -licpath $CDLMD_LICENSE_FILE -machinefile $PWD/machinefile-$SLURM_JOB_ID $PWD/your-file.sim -mpi intel
File : starccm_job.sh (Niagara)

#!/bin/bash
#SBATCH --time=0-00:30        # Time limit: d-hh:mm
#SBATCH --nodes=2             # Specify 1 or more nodes
#SBATCH --cpus-per-task=40    # Request all cores per node
#SBATCH --ntasks-per-node=1   # Do not change this value
#SBATCH --mail-type=BEGIN
#SBATCH --mail-type=END

module load CCEnv StdEnv/2020 # Load the standard software environment

# module load starccm/17.02.007-R8
module load starccm-mixed/17.02.007

export LM_PROJECT='YOUR CD-ADAPCO PROJECT ID GOES HERE'
export CDLMD_LICENSE_FILE="1999@flex.cd-adapco.com"

export STARCCM_TMP="${SCRATCH}/.starccm-${EBVERSIONSTARCCM}"
mkdir -p "$STARCCM_TMP"

slurm_hl2hl.py --format STAR-CCM+ > machinefile-$SLURM_JOB_ID

NCORE=$((SLURM_NTASKS * SLURM_CPUS_PER_TASK))

starccm+ -batch -power -np $NCORE -podkey $LM_PROJECT -licpath $CDLMD_LICENSE_FILE -machinefile $PWD/machinefile-$SLURM_JOB_ID $PWD/your-file.sim

Remote visualization

Preparation

To set up your account for remote visualization:

  1. Create ~/.licenses/starccm.lic as described above.
  2. Users with a Power-on-demand (POD) license should also set export LM_PROJECT='CD-ADAPCO PROJECT ID' and add -power to the other command line options shown below, as in the example that follows.
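
For example, with the placeholder project ID shown, a POD user launching the GUI would run:

export LM_PROJECT='CD-ADAPCO PROJECT ID'  # placeholder; replace with your POD project ID
starccm+ -power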

Compute nodes

Connect to a compute node with TigerVNC and open a terminal window, then do one of the following:

STAR-CCM+ 15.04.010 (or newer versions)
module load StdEnv/2020
module load starccm-mixed/17.02.007 OR starccm/17.02.007-R8
starccm+
STAR-CCM+ 14.06.010, 14.04.013, 14.02.012
module load StdEnv/2016
module load starccm-mixed/14.06.010 OR starccm/14.06.010-R8
starccm+
STAR-CCM+ 13.06.012 (or older versions)
module load StdEnv/2016
module load starccm-mixed/13.06.012 OR starccm/13.06.012-R8
starccm+ -mesa

VDI nodes

Connect to gra-vdi with TigerVNC and open a terminal window (Applications -> System Tools -> MATE Terminal), then do one of the following:

STAR-CCM+ 15.04.010 (or newer versions)
module load CcEnv StdEnv/2020
module load starccm-mixed/17.02.007 OR starccm/17.02.007-R8
starccm+ -clientldpreload /usr/lib64/VirtualGL/libvglfaker.so
STAR-CCM+ 14.06.013 (this version only)
module load CcEnv StdEnv/2016
module load starccm-mixed/14.06.013 OR starccm/14.06.013-R8
starccm+ -clientldpreload /usr/lib64/VirtualGL/libvglfaker.so
STAR-CCM+ 13.06.012 (or older versions)
module load CcEnv StdEnv/2016
module load starccm-mixed/13.06.012 OR starccm/13.06.012-R8
starccm+ -mesa