<translate>
<!--T:2-->
[http://www.ansys.com/ Ansys] is a software suite for engineering simulation and 3-D design. It includes packages such as [http://www.ansys.com/Products/Fluids/ANSYS-Fluent Ansys Fluent] and [http://www.ansys.com/products/fluids/ansys-cfx Ansys CFX].


= Licensing = <!--T:4-->
We are a hosting provider for Ansys. This means that we have the software installed on our clusters, but we do not provide a generic license accessible to everyone. However, many institutions, faculties, and departments already have licenses that can be used on our clusters.  Once the legal aspects of licensing are worked out, some technical steps remain: the license server on your end will need to be reachable by our compute nodes, which requires our technical team to get in touch with the people managing your license software. In some cases, this has already been done. You should then be able to load the Ansys module and it should find its license automatically. If this is not the case, please contact our [[technical support]] so that they can arrange this for you.


== Configuring your license file == <!--T:10-->
Our module for Ansys is designed to look for license information in a few places, one of which is your /home folder. You can specify your license server by creating a file named <code>$HOME/.licenses/ansys.lic</code> consisting of the two lines shown below.  Customize the file by replacing FLEXPORT, INTEPORT and LICSERVER with the appropriate values for your server.
 


<!--T:12-->
{| class="wikitable" style="text-align:left; border:1px solid #BBB; background-color:#F9F9F9; width:50%;"
|+ style="text-align:left; background-color:#F2F2F2; font-size:110%" | FILE: ansys.lic
|-
| style="border-style: none none none none; font-size: 100%; padding-left:10%; padding-bottom:0;" | setenv("ANSYSLMD_LICENSE_FILE", "<b>FLEXPORT@LICSERVER</b>")
|-
| style="border-style: none none none none; font-size: 100%; padding-left:10%; padding-top:0;" | setenv("ANSYSLI_SERVERS", "<b>INTEPORT@LICSERVER</b>")
|}


<!--T:21-->
The following table provides established values for the CMC and SHARCNET license servers.  To use a different server, locate the corresponding values as explained in [[#Local_license_servers|Local license servers]].


<!--T:14-->
{| class="wikitable"
|+ style="text-align:left; background-color:#F2F2F2; font-size:110%" | TABLE: Preconfigured license servers
! License
! System/Cluster
! LICSERVER
! FLEXPORT
! INTEPORT
! VENDPORT
! NOTICES
|-
| CMC
| beluga
| <code>10.20.73.21</code>
| <code>6624</code>
| <code>2325</code>
| n/a
| None
|-
| CMC
| cedar
| <code>172.16.0.101</code>
| <code>6624</code>
| <code>2325</code>
| n/a
| None
|-
| CMC
| graham
| <code>199.241.167.222</code>
| <code>6624</code>
| <code>2325</code>
| n/a
| None
|-
| CMC
| narval
| <code>10.100.64.10</code>
| <code>6624</code>
| <code>2325</code>
| n/a
| None
|-
| SHARCNET
| beluga/cedar/graham/gra-vdi/narval
| <code>license3.sharcnet.ca</code>
| <code>1055</code>
| <code>2325</code>
| n/a
| None
|-
| SHARCNET
| niagara
| <code>localhost</code>
| <code>1055</code>
| <code>2325</code>
| <code>1793</code>
| None
|}
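
<!--T:21a-->
For example, to check out licenses from the SHARCNET server on any of the listed clusters, your <code>~/.licenses/ansys.lic</code> file would contain the following two lines, taking the values from the SHARCNET row of the table above:
{{File
|name=ansys.lic
|lang="lua"
|contents=
setenv("ANSYSLMD_LICENSE_FILE", "1055@license3.sharcnet.ca")
setenv("ANSYSLI_SERVERS", "2325@license3.sharcnet.ca")
}}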


<!--T:15-->
Researchers who purchase a CMC license subscription must send their Alliance account username to <cmcsupport@cmc.ca>, otherwise license checkouts will fail. The number of cores that can be used with a CMC license is described in the <i>Other Tricks and Tips</i> sections of the [https://www.cmc.ca/?s=Other+Tricks+and+Tips&lang=en/ Ansys Electronics Desktop and Ansys Mechanical/Fluids quick start guides].


=== Local license servers === <!--T:16-->


<!--T:17-->
Before a local Ansys license server can be reached from an Alliance cluster, firewall changes will need to be made on both the server side and the Alliance side.  For many local institutional servers this work has already been done; in such cases you simply need to contact your local Ansys license server administrator and request 1) the fully qualified hostname (LICSERVER) of the server, 2) the Ansys flex port, commonly 1055 (FLEXPORT), and 3) the Ansys licensing interconnect port, commonly 2325 (INTEPORT).  With this information you can then immediately configure your <code>ansys.lic</code> file as described above and, in principle, begin submitting jobs.
 
<!--T:18-->
If however your local license server has never been set up for use with the Alliance, you will additionally need to request 4) the static vendor port number (VENDPORT) from your local Ansys server administrator.  Once you have gathered all four pieces of information, send it to [[technical support]], being sure to mention which Alliance cluster(s) you want to run Ansys on.  We will then arrange for the Alliance firewall to be opened so that license requests from the cluster(s) can reach your server.  You will also receive a range of IP addresses to pass to your server administrator so the local firewall can likewise be opened, allowing inbound license connections from the requested Alliance system(s) to reach your server on the three ports (FLEXPORT, INTEPORT, VENDPORT).
 
== Checking license usage == <!--T:283-->
 
<!--T:2830-->
Ansys comes with an <code>lmutil</code> tool that can be used to check your license usage.  Before using it, verify that your <code>ansys.lic</code> file is configured.  Then run the following two commands on a cluster that you are set up to use: </translate>
{{Commands2
|module load ansys/2023R2
|$EBROOTANSYS/v232/licensingclient/linx64/lmutil lmstat -c $ANSYSLMD_LICENSE_FILE -S ansyslmd
}}<translate>


<!--T:4776-->
If you load a different version of the Ansys module, you will need to modify the path to the <code>lmutil</code> command.
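For example, with the <code>ansys/2024R1</code> module the internal version directory would presumably change from <code>v232</code> to <code>v241</code> (assuming the same <code>licensingclient</code> directory layout as in 2023R2):
{{Commands2
|module load ansys/2024R1
|$EBROOTANSYS/v241/licensingclient/linx64/lmutil lmstat -c $ANSYSLMD_LICENSE_FILE -S ansyslmd
}}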


= Version compatibility = <!--T:26-->


== Platform Support == <!--T:19-->
The Ansys [https://www.ansys.com/it-solutions/platform-support Platform Support] page states that "the current release has been tested to read and open databases from the five previous releases".  This implies that simulations developed with older versions of Ansys should generally work with newer module versions (forward compatible by five releases); the reverse, however, cannot be assumed.  The Platform Support page also provides version-based software and hardware compatibility information to determine the optimal (supported) platform infrastructure on which Ansys can be run.  The features supported on Windows vs Linux systems can be displayed by clicking the <i>Platform Support by Application / Product</i> link.  Similar information for earlier releases may be found by clicking the <i>Previous Releases</i> link in the lower left corner of the Platform Support page.


== What's New == <!--T:6740-->

Ansys posts [https://www.ansys.com/products/release-highlights Product Release and Updates] for the latest releases.  Similar information for previous releases can generally be found for various application topics by visiting the Ansys [https://www.ansys.com/blog blog] page and using the FILTERS search bar.  For example, searching for <code>What's New Fluent 2024 gpu</code> pulls up a document titled <code>[https://www.ansys.com/blog/fluent-2024-r1 What's New for Ansys Fluent in 2024 R1?]</code> containing a wealth of the latest GPU support information.  Specifying a version number in the [https://www.ansys.com/news-center/press-releases Press Release] search bar is also a good way to find new release information.  At the time of this writing, Ansys 2024R2 is the current release and will be installed when interest is expressed or there is an evident need to support newer hardware or solver capabilities.  To request that a new version be installed, submit a problem ticket.
== Service Packs == <!--T:6741-->
 
<!--T:6742-->
Ansys regularly releases service packs to fix and enhance various issues with its major releases.  Therefore, starting with Ansys 2024, a separate Ansys module will appear on the clusters with a decimal and two digits after the release number whenever a service pack has been installed over the initial release.  For example, the initial 2024 release without any service pack applied may be loaded with <code>module load ansys/2024R1</code>, while a module with Service Pack 3 applied may be loaded with <code>module load ansys/2024R1.03</code> instead.  If a service pack is already available by the time a new release is to be installed, then most likely only a module for that service pack number will be installed, unless a request to also install the initial release is received.
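
For example, assuming a module with Service Pack 3 has been installed for 2024R1, you could list the installed Ansys versions and then load that specific service pack as follows:
{{Commands2
|module avail ansys
|module load ansys/2024R1.03
}}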
 
<!--T:6743-->
Most users will likely want to load the latest module version equipped with the latest installed service pack, which can be achieved by simply doing <code>module load ansys</code>.  While service packs are not expected to impact numerical results, the changes they make are extensive, and so if computations have already been done with the initial release or an earlier service pack, some groups may prefer to continue using it.  Having separate modules for each service pack makes this possible.  Starting with Ansys 2024R1, a detailed description of what each service pack does can be found by searching this [https://storage.ansys.com/staticfiles/cp/Readme/release2024R1/info_combined.pdf link] for <i>Service Pack Details</i>.  Future versions will presumably be similarly searchable by manually modifying the version number contained in the link.
 
= Cluster batch job submission = <!--T:20-->
The Ansys software suite comes with multiple implementations of MPI to support parallel computation. Unfortunately, none of them support our [[Running jobs|Slurm scheduler]]. For this reason, we need special instructions for each Ansys package on how to start a parallel job. In the sections below, we give examples of submission scripts for some of the packages.  While the Slurm scripts should work on all clusters, Niagara users may need to make some additional changes, covered [https://docs.scinet.utoronto.ca/index.php here].
 
== Ansys Fluent == <!--T:30-->
Typically, you would use the following procedure to run Fluent on one of our clusters:


<!--T:31-->
# Prepare your Fluent job using Fluent from the Ansys Workbench on your desktop machine up to the point where you would run the calculation.
# Export the "case" file with <i>File > Export > Case…</i> or find the folder where Fluent saves your project's files. The case file will often have a name like <code>FFF-1.cas.gz</code>.
# If you already have data from a previous calculation which you want to continue, export a "data" file as well (<i>File > Export > Data…</i>) or find it in the same project folder (<code>FFF-1.dat.gz</code>).
# [[Transferring_data|Transfer]] the case file (and if needed the data file) to a directory on the [[Project_layout|/project]] or [[Storage_and_file_management#Storage_types|/scratch]] filesystem on the cluster.  When exporting, you can save the file(s) under a more instructive name than <code>FFF-1.*</code> or rename them when they are uploaded.
# Now you need to create a "journal" file. Its purpose is to load the case file (and optionally the data file), run the solver and finally write the results.  See the examples below and remember to adjust the filenames and desired number of iterations.
# If jobs frequently fail to start due to license shortages and manual resubmission of failed jobs is not convenient, consider modifying your script to requeue your job (up to 4 times) as shown under the <i>by node + requeue</i> tab further below.  Be aware that doing this will also requeue simulations that fail due to non-license related issues (such as divergence), resulting in lost compute time.  Therefore it is strongly recommended to monitor and inspect each Slurm output file to confirm each requeue attempt is license related.  When it is determined that a job was requeued due to a simulation issue, immediately kill the job progression with <code>scancel jobid</code> and correct the problem.
# After [[Running_jobs|running the job]], you can download the data file and import it back into Fluent with <i>File > Import > Data…</i>.


=== Slurm scripts === <!--T:220-->
 
==== General purpose ==== <!--T:221-->


<!--T:222-->
Most Fluent jobs should use the following <i>by node</i> script to minimize solution latency and maximize performance over as few nodes as possible. Very large jobs, however, might wait less in the queue if they use a <i>by core</i> script; on the other hand, the startup time of a job using many nodes can be significantly longer, offsetting some of the benefit. In addition, be aware that running large jobs over an unspecified number of potentially very many nodes makes them far more vulnerable to crashing if any of the compute nodes fail during the simulation. The scripts will ensure Fluent uses shared memory for communication when run on a single node, or distributed memory (utilizing MPI and the appropriate HPC interconnect) when run over multiple nodes.  The two Narval tabs may provide a more robust alternative if Fluent hangs during the initial auto mesh partitioning phase when the standard Intel-based scripts are used with the parallel solver.  The other option is to manually perform the mesh partitioning in the Fluent GUI and then run the job again on the cluster with the Intel scripts; doing so allows you to inspect the partition statistics and specify the partitioning method to obtain an optimal result.  The number of mesh partitions should be an integral multiple of the number of cores.  For optimal efficiency, ensure there are at least 10,000 cells per core; for example, a 2-million-cell mesh should not be spread over more than about 200 cores.  Specifying too many cores will eventually result in poor performance as the scaling drops off.
<!--T:2300-->
<tabs>
 
<!--T:6736-->
<tab name="Multinode (by node)">
{{File
|name=script-flu-bynode-intel.sh
|lang="bash"
|contents=
#!/bin/bash
 
<!--T:2302-->
#SBATCH --account=def-group  # Specify account name
#SBATCH --time=00-03:00      # Specify time limit dd-hh:mm
#SBATCH --nodes=1            # Specify number of compute nodes (narval 1 node max)
#SBATCH --ntasks-per-node=32  # Specify number of cores per node (graham 32 or 44, cedar 48, beluga 40, narval 64, or less)
#SBATCH --mem=0              # Do not change (allocates all memory per compute node)
#SBATCH --cpus-per-task=1    # Do not change
 
<!--T:2306-->
module load StdEnv/2023      # Do not change
module load ansys/2023R2      # Specify version (beluga, cedar, graham, narval)
 
<!--T:2305-->
#module load StdEnv/2020      # no longer supported
#module load ansys/2019R3    # or newer versions (narval only)
#module load ansys/2021R2    # or newer versions (beluga, cedar, graham)
 
<!--T:4733-->
MYJOURNALFILE=sample.jou      # Specify your journal file name
MYVERSION=3d                  # Specify 2d, 2ddp, 3d or 3ddp
 
<!--T:501-->
# ------- do not change any lines below --------
 
<!--T:4734-->
if [[ "${CC_CLUSTER}" == narval ]]; then
if [ "$EBVERSIONGENTOO" == 2020 ]; then
  module load intel/2021 intelmpi
  export INTELMPI_ROOT=$I_MPI_ROOT/mpi/latest
  export HCOLL_RCACHE=^ucs
elif [ "$EBVERSIONGENTOO" == 2023 ]; then
  module load intel/2023 intelmpi
  export INTELMPI_ROOT=$I_MPI_ROOT
fi
unset I_MPI_HYDRA_BOOTSTRAP_EXEC_EXTRA_ARGS
unset I_MPI_ROOT
fi
 
<!--T:4735-->
slurm_hl2hl.py --format ANSYS-FLUENT > /tmp/machinefile-$SLURM_JOB_ID
NCORES=$((SLURM_NNODES * SLURM_NTASKS_PER_NODE * SLURM_CPUS_PER_TASK))
 
<!--T:2310-->
if [ "$SLURM_NNODES" == 1 ]; then
fluent -g $MYVERSION -t $NCORES -affinity=0 -mpi=intel -pshmem -i $MYJOURNALFILE
else
fluent -g $MYVERSION -t $NCORES -affinity=0 -mpi=intel -pib -cnf=/tmp/machinefile-$SLURM_JOB_ID -i $MYJOURNALFILE
fi
}}
</tab>
 
<!--T:2200-->
<tab name="Multinode (by core)">
{{File
|name=script-flu-bycore-intel.sh
|lang="bash"
|contents=
#!/bin/bash

<!--T:2202-->
#SBATCH --account=def-group  # Specify account
#SBATCH --time=00-03:00      # Specify time limit dd-hh:mm
##SBATCH --nodes=1            # Uncomment to specify (narval 1 node max)
#SBATCH --ntasks=16          # Specify total number of cores
#SBATCH --mem-per-cpu=4G      # Specify memory per core
#SBATCH --cpus-per-task=1    # Do not change

<!--T:2206-->
module load StdEnv/2023      # Do not change
module load ansys/2023R2      # Specify version (beluga, cedar, graham, narval)

<!--T:2205-->
#module load StdEnv/2020      # no longer supported
#module load ansys/2019R3     # or newer versions (narval only)
#module load ansys/2021R2     # or newer versions (beluga, cedar, graham)

<!--T:4736-->
MYJOURNALFILE=sample.jou      # Specify your journal file name
MYVERSION=3d                  # Specify 2d, 2ddp, 3d or 3ddp

<!--T:502-->
# ------- do not change any lines below --------

<!--T:4737-->
if [[ "${CC_CLUSTER}" == narval ]]; then
if [ "$EBVERSIONGENTOO" == 2020 ]; then
  module load intel/2021 intelmpi
  export INTELMPI_ROOT=$I_MPI_ROOT/mpi/latest
  export HCOLL_RCACHE=^ucs
elif [ "$EBVERSIONGENTOO" == 2023 ]; then
  module load intel/2023 intelmpi
  export INTELMPI_ROOT=$I_MPI_ROOT
fi
unset I_MPI_HYDRA_BOOTSTRAP_EXEC_EXTRA_ARGS
unset I_MPI_ROOT
fi

<!--T:4738-->
slurm_hl2hl.py --format ANSYS-FLUENT > /tmp/machinefile-$SLURM_JOB_ID
NCORES=$((SLURM_NTASKS * SLURM_CPUS_PER_TASK))

<!--T:2210-->
if [ "$SLURM_NNODES" == 1 ]; then
fluent -g $MYVERSION -t $NCORES -affinity=0 -mpi=intel -pshmem -i $MYJOURNALFILE
else
fluent -g $MYVERSION -t $NCORES -affinity=0 -mpi=intel -pib -cnf=/tmp/machinefile-$SLURM_JOB_ID -i $MYJOURNALFILE
fi
}}
</tab>


<!--T:6737-->
<tab name="Multinode (by node, narval)">
{{File
|name=script-flu-bynode-openmpi.sh
|lang="bash"
|contents=
#!/bin/bash

<!--T:5302-->
#SBATCH --account=def-group  # Specify account name
#SBATCH --time=00-03:00      # Specify time limit dd-hh:mm
#SBATCH --nodes=1            # Specify number of compute nodes
#SBATCH --ntasks-per-node=64  # Specify number of cores per node (narval 64 or less)
#SBATCH --mem=0              # Do not change (allocates all memory per compute node)
#SBATCH --cpus-per-task=1    # Do not change
 
<!--T:5306-->
module load StdEnv/2023      # Do not change
module load ansys/2023R2      # Specify version (narval only)
 
<!--T:5733-->
MYJOURNALFILE=sample.jou      # Specify your journal file name
MYVERSION=3d                  # Specify 2d, 2ddp, 3d or 3ddp
 
<!--T:503-->
# ------- do not change any lines below --------
 
<!--T:5735-->
export OPENMPI_ROOT=$EBROOTOPENMPI
export OMPI_MCA_hwloc_base_binding_policy=core
slurm_hl2hl.py --format ANSYS-FLUENT > /tmp/mf-$SLURM_JOB_ID
for i in `cat /tmp/mf-$SLURM_JOB_ID {{!}} uniq`; do echo "${i}:$(cat /tmp/mf-$SLURM_JOB_ID {{!}} grep $i {{!}} wc -l)" >> /tmp/machinefile-$SLURM_JOB_ID; done
NCORES=$((SLURM_NNODES * SLURM_NTASKS_PER_NODE * SLURM_CPUS_PER_TASK))
 
<!--T:5310-->
if [ "$SLURM_NNODES" == 1 ]; then
fluent -g $MYVERSION -t $NCORES -affinity=0 -mpi=openmpi -pshmem -i $MYJOURNALFILE
else
fluent -g $MYVERSION -t $NCORES -affinity=0 -mpi=openmpi -pib -cnf=/tmp/machinefile-$SLURM_JOB_ID -i $MYJOURNALFILE
fi
}}
</tab>
 
<!--T:6738-->
<tab name="Multinode (by core, narval)">
{{File
|name=script-flu-bycore-openmpi.sh
|lang="bash"
|contents=
#!/bin/bash
 
<!--T:6302-->
#SBATCH --account=def-group  # Specify account name
#SBATCH --time=00-03:00      # Specify time limit dd-hh:mm
##SBATCH --nodes=1            # Uncomment to specify number of compute nodes (optional)
#SBATCH --ntasks=16          # Specify total number of cores
#SBATCH --mem-per-cpu=4G      # Specify memory per core
#SBATCH --cpus-per-task=1     # Do not change
 
<!--T:6306-->
module load StdEnv/2023      # Do not change    
module load ansys/2023R2      # Specify version (narval only)


<!--T:6733-->
MYJOURNALFILE=sample.jou      # Specify your journal file name
MYVERSION=3d                  # Specify 2d, 2ddp, 3d or 3ddp

<!--T:504-->
# ------- do not change any lines below --------

<!--T:6735-->
export OPENMPI_ROOT=$EBROOTOPENMPI
export OMPI_MCA_hwloc_base_binding_policy=core
slurm_hl2hl.py --format ANSYS-FLUENT > /tmp/mf-$SLURM_JOB_ID
for i in `cat /tmp/mf-$SLURM_JOB_ID {{!}} uniq`; do echo "${i}:$(cat /tmp/mf-$SLURM_JOB_ID {{!}} grep $i {{!}} wc -l)" >> /tmp/machinefile-$SLURM_JOB_ID; done
NCORES=$((SLURM_NTASKS * SLURM_CPUS_PER_TASK))

<!--T:6310-->
if [ "$SLURM_NNODES" == 1 ]; then
fluent -g $MYVERSION -t $NCORES -affinity=0 -mpi=openmpi -pshmem -i $MYJOURNALFILE
else
fluent -g $MYVERSION -t $NCORES -affinity=0 -mpi=openmpi -pib -cnf=/tmp/machinefile-$SLURM_JOB_ID -i $MYJOURNALFILE
fi
}}
</tab>
 
<!--T:6739-->
<tab name="Multinode (by node, niagara)">
{{File
|name=script-flu-bynode-intel-nia.sh
|lang="bash"
|contents=
#!/bin/bash
 
<!--T:6750-->
#SBATCH --account=def-group      # Specify account name
#SBATCH --time=00-03:00          # Specify time limit dd-hh:mm
#SBATCH --nodes=1                # Specify number of compute nodes
#SBATCH --ntasks-per-node=80    # Specify number of cores per node (niagara 80 or less)
#SBATCH --mem=0                  # Do not change (allocate all memory per compute node)
#SBATCH --cpus-per-task=1        # Do not change (required parameter)
 
<!--T:6752-->
module load CCEnv StdEnv/2023    # Do not change
module load ansys/2023R2        # Specify version (niagara only)
 
<!--T:6751-->
MYJOURNALFILE=sample.jou        # Specify your journal file name
MYVERSION=3d                    # Specify 2d, 2ddp, 3d or 3ddp
 
<!--T:6753-->
# These settings are used instead of your ~/.licenses/ansys.lic
LICSERVER=license3.sharcnet.ca  # Specify license server hostname
FLEXPORT=1055                    # Specify server flex port
INTEPORT=2325                    # Specify server interconnect port
VENDPORT=1793                    # Specify server vendor port
 
<!--T:505-->
# ------- do not change any lines below --------
 
<!--T:6744-->
ssh nia-gw -fNL $FLEXPORT:$LICSERVER:$FLEXPORT      # Do not change
ssh nia-gw -fNL $INTEPORT:$LICSERVER:$INTEPORT      # Do not change
ssh nia-gw -fNL $VENDPORT:$LICSERVER:$VENDPORT      # Do not change
export ANSYSLMD_LICENSE_FILE=$FLEXPORT@localhost    # Do not change
export ANSYSLI_SERVERS=$INTEPORT@localhost          # Do not change
 
<!--T:6745-->
slurm_hl2hl.py --format ANSYS-FLUENT > /tmp/machinefile-$SLURM_JOB_ID
NCORES=$((SLURM_NNODES * SLURM_NTASKS_PER_NODE * SLURM_CPUS_PER_TASK))
 
<!--T:6746-->
if [ ! -L "$HOME/.ansys" ]; then
  echo "ERROR: A link to a writable .ansys directory does not exist."
  echo 'Remove ~/.ansys if one exists and then run: ln -s $SCRATCH/.ansys ~/.ansys'
  echo "Then try submitting your job again. Aborting the current job now!"
elif [ ! -L "$HOME/.fluentconf" ]; then
  echo "ERROR: A link to a writable .fluentconf directory does not exist."
  echo 'Remove ~/.fluentconf if one exists and run: ln -s $SCRATCH/.fluentconf ~/.fluentconf'
  echo "Then try submitting your job again. Aborting the current job now!"
elif [ ! -L "$HOME/.flrecent" ]; then
  echo "ERROR: A link to a writable .flrecent file does not exist."
  echo 'Remove ~/.flrecent if one exists and then run: ln -s $SCRATCH/.flrecent ~/.flrecent'
  echo "Then try submitting your job again. Aborting the current job now!"
else
  mkdir -pv $SCRATCH/.ansys
  mkdir -pv $SCRATCH/.fluentconf
  touch $SCRATCH/.flrecent
  if [ "$SLURM_NNODES" == 1 ]; then
  fluent -g $MYVERSION -t $NCORES -affinity=0 -mpi=intel -pshmem -i $MYJOURNALFILE
  else
  fluent -g $MYVERSION -t $NCORES -affinity=0 -mpi=intel -pib -cnf=/tmp/machinefile-$SLURM_JOB_ID -i $MYJOURNALFILE
  fi
fi
}}
</tab>


<!--T:6754-->
</tabs>
 
==== License requeue ==== <!--T:223-->
 
<!--T:224-->
The scripts in this section should only be used with Fluent jobs that are known to complete normally without generating any errors in the output, but that typically require multiple requeue attempts to check out licenses.  They are not recommended for Fluent jobs that may 1) run for a long time before crashing, or 2) run to completion but contain unresolved journal file warnings, since in both cases the simulations will be repeated from the beginning until the maximum number of requeue attempts specified by the <code>array</code> value is reached.  For these types of jobs, the general purpose scripts above should be used instead.
 
<!--T:2400-->
<tabs>
<tab name="Multinode (by node + requeue)">
{{File
|name=script-flu-bynode+requeue.sh
|lang="bash"
|contents=
#!/bin/bash

<!--T:2402-->
#SBATCH --account=def-group  # Specify account
#SBATCH --time=00-03:00      # Specify time limit dd-hh:mm
#SBATCH --nodes=1            # Specify number of compute nodes (narval 1 node max)
#SBATCH --ntasks-per-node=32 # Specify number of cores per node (graham 32 or 44, cedar 48, beluga 40, or less)
#SBATCH --mem=0              # Do not change (allocates all memory per compute node)
#SBATCH --cpus-per-task=1    # Do not change
#SBATCH --array=1-5%1        # Specify number of requeue attempts (2 or more, 5 is shown)
 
<!--T:2406-->
module load StdEnv/2023      # Do not change
module load ansys/2023R2      # Specify version (beluga, cedar, graham, narval)
 
<!--T:2405-->
#module load StdEnv/2020      # no longer supported
#module load ansys/2019R3    # or newer versions (narval only)
#module load ansys/2021R2    # or newer versions (beluga, cedar, graham)
 
<!--T:4739-->
MYJOURNALFILE=sample.jou      # Specify your journal file name
MYVERSION=3d                  # Specify 2d, 2ddp, 3d or 3ddp
 
<!--T:506-->
# ------- do not change any lines below --------
 
<!--T:4740-->
if [[ "${CC_CLUSTER}" == narval ]]; then
if [ "$EBVERSIONGENTOO" == 2020 ]; then
  module load intel/2021 intelmpi
  export INTELMPI_ROOT=$I_MPI_ROOT/mpi/latest
  export HCOLL_RCACHE=^ucs
elif [ "$EBVERSIONGENTOO" == 2023 ]; then
  module load intel/2023 intelmpi
  export INTELMPI_ROOT=$I_MPI_ROOT
fi
unset I_MPI_HYDRA_BOOTSTRAP_EXEC_EXTRA_ARGS
unset I_MPI_ROOT
fi
 
<!--T:4741-->
slurm_hl2hl.py --format ANSYS-FLUENT > /tmp/machinefile-$SLURM_JOB_ID
NCORES=$((SLURM_NNODES * SLURM_NTASKS_PER_NODE * SLURM_CPUS_PER_TASK))
 
<!--T:2410-->
if [ "$SLURM_NNODES" == 1 ]; then
fluent -g $MYVERSION -t $NCORES -affinity=0 -mpi=intel -pshmem -i $MYJOURNALFILE
else
fluent -g $MYVERSION -t $NCORES -affinity=0 -mpi=intel -pib -cnf=/tmp/machinefile-$SLURM_JOB_ID -i $MYJOURNALFILE
fi
if [ $? -eq 0 ]; then
    echo "Job completed successfully! Exiting now."
    scancel $SLURM_ARRAY_JOB_ID
else
    echo "Job attempt $SLURM_ARRAY_TASK_ID of $SLURM_ARRAY_TASK_COUNT failed due to license or simulation issue!"
    if [ $SLURM_ARRAY_TASK_ID -lt $SLURM_ARRAY_TASK_COUNT ]; then
      echo "Resubmitting job now …"
    else
      echo "All job attempts failed exiting now."
    fi
fi
}}
</tab>
 
<!--T:2900-->
<tab name="Multinode (by core + requeue)">
{{File
|name=script-flu-bycore+requeue.sh
|lang="bash"
|contents=
#!/bin/bash
 
<!--T:2902-->
#SBATCH --account=def-group  # Specify account
#SBATCH --time=00-03:00      # Specify time limit dd-hh:mm
##SBATCH --nodes=1            # Uncomment to specify (narval 1 node max)
#SBATCH --ntasks=16          # Specify total number of cores
#SBATCH --mem-per-cpu=4G      # Specify memory per core
#SBATCH --cpus-per-task=1    # Do not change
#SBATCH --array=1-5%1        # Specify number of requeue attempts (2 or more, 5 is shown)
 
<!--T:2906-->
module load StdEnv/2023      # Do not change
module load ansys/2023R2      # Specify version (beluga, cedar, graham, narval)
 
<!--T:2905-->
#module load StdEnv/2020      # no longer supported
#module load ansys/2019R3    # or newer versions (narval only)
#module load ansys/2021R2    # or newer versions (beluga, cedar, graham)
 
<!--T:4742-->
MYJOURNALFILE=sample.jou      # Specify your journal file name
MYVERSION=3d                  # Specify 2d, 2ddp, 3d or 3ddp


<!--T:507-->
# ------- do not change any lines below --------


<!--T:4743-->
if [[ "${CC_CLUSTER}" == narval ]]; then
if [ "$EBVERSIONGENTOO" == 2020 ]; then
  module load intel/2021 intelmpi
  export INTELMPI_ROOT=$I_MPI_ROOT/mpi/latest
  export HCOLL_RCACHE=^ucs
elif [ "$EBVERSIONGENTOO" == 2023 ]; then
  module load intel/2023 intelmpi
  export INTELMPI_ROOT=$I_MPI_ROOT
fi
unset I_MPI_HYDRA_BOOTSTRAP_EXEC_EXTRA_ARGS
unset I_MPI_ROOT
fi


<!--T:4744-->
slurm_hl2hl.py --format ANSYS-FLUENT > /tmp/machinefile-$SLURM_JOB_ID
NCORES=$((SLURM_NTASKS * SLURM_CPUS_PER_TASK))


<!--T:2910-->
if [ "$SLURM_NNODES" == 1 ]; then
fluent -g $MYVERSION -t $NCORES -affinity=0 -mpi=intel -pshmem -i $MYJOURNALFILE
else
fluent -g $MYVERSION -t $NCORES -affinity=0 -mpi=intel -pib -cnf=/tmp/machinefile-$SLURM_JOB_ID -i $MYJOURNALFILE
fi
if [ $? -eq 0 ]; then
     echo "Job completed successfully! Exiting now."
     scancel $SLURM_ARRAY_JOB_ID
else
     echo "Job attempt $SLURM_ARRAY_TASK_ID of $SLURM_ARRAY_TASK_COUNT failed due to license or simulation issue!"
     if [ $SLURM_ARRAY_TASK_ID -lt $SLURM_ARRAY_TASK_COUNT ]; then
       echo "Resubmitting job now ..."
     else
       echo "All job attempts failed, exiting now."
     fi
fi
}}
</tab>
</tabs>


==== Solution restart ==== <!--T:225-->
 
<!--T:226-->
The following two scripts are provided to automate restarting very large jobs that require more than the typical seven-day maximum runtime window available on most clusters. Jobs are restarted from the most recently saved time step files. A fundamental requirement is that the first time step can be completed within the requested job array time limit (specified at the top of your Slurm script) when starting a simulation from an initialized solution field. It is assumed that a standard fixed time step size is being used. To begin, a working set of sample.cas, sample.dat and sample.jou files must be present. Next, edit your sample.jou file to contain <code>/solve/dual-time-iterate 1</code> and <code>/file/auto-save/data-frequency 1</code>. Then create a restart journal file by doing <code>cp sample.jou sample-restart.jou</code> and edit the sample-restart.jou file to contain <code>/file/read-cas-data sample-restart</code> instead of <code>/file/read-cas-data sample</code>, and comment out the initialization line with a semicolon, for instance <code>;/solve/initialize/initialize-flow</code>. If your second and subsequent time steps are known to run twice as fast as the initial time step, edit sample-restart.jou to specify <code>/solve/dual-time-iterate 2</code>. By doing this, the solution will only be restarted after 2 time steps are completed following the initial time step. An output file for each time step will still be saved in the output subdirectory. The value 2 is arbitrary but should be chosen such that the time for 2 steps fits within the job array time limit; doing this will minimize the number of solution restarts, which are computationally expensive. If the first time step performed by sample.jou starts from a converged (previous) solution, choose 1 instead of 2, since all time steps will then likely require a similar amount of wall time to complete. Assuming 2 is chosen, the total simulated time will be 1*Dt+2*Nrestart*Dt, where Nrestart is the number of solution restarts specified in the script. The total number of time steps (and hence the number of output files generated) will therefore be 1+2*Nrestart. The value for the time resource request should be chosen so the initial time step and subsequent time steps complete comfortably within the Slurm time window, which can be specified up to a maximum of <code>#SBATCH --time=07-00:00</code> (seven days).
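
<!--T:226a-->
As an illustration, here is a minimal sketch of what the edited lines in <code>sample-restart.jou</code> might look like, following the modifications described above; all other solver settings are assumed identical to <code>sample.jou</code>:
{{File
|name=sample-restart.jou
|contents=
; Read the most recent case and data files (linked by the restart script)
/file/read-cas-data sample-restart

; Initialization is commented out since the restart continues from saved data
;/solve/initialize/initialize-flow

; Save a data file after every time step
/file/auto-save/data-frequency 1

; Advance the solution by 2 time steps before the next restart
/solve/dual-time-iterate 2
}}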
 
<!--T:3400-->
<tabs>
<tab name="Multinode (by node + restart)">
{{File
|name=script-flu-bynode+restart.sh
|lang="bash"
|contents=
#!/bin/bash
 
<!--T:3402-->
#SBATCH --account=def-group  # Specify account
#SBATCH --time=07-00:00      # Specify time limit dd-hh:mm
#SBATCH --nodes=1            # Specify number of compute nodes (narval 1 node max)
#SBATCH --ntasks-per-node=32  # Specify number of cores per node (graham 32 or 44, cedar 48, beluga 40, narval 64, or less)
#SBATCH --mem=0              # Do not change (allocates all memory per compute node)
#SBATCH --cpus-per-task=1    # Do not change
#SBATCH --array=1-5%1        # Specify number of solution restarts (2 or more, 5 is shown)
 
<!--T:2407-->
module load StdEnv/2023      # Do not change
module load ansys/2023R2      # Specify version (beluga, cedar, graham, narval)
 
<!--T:2408-->
#module load StdEnv/2020      # no longer supported
#module load ansys/2019R3    # or newer versions (narval only)
#module load ansys/2021R2    # or newer versions (beluga, cedar, graham)
 
<!--T:4403-->
MYVERSION=3d                        # Specify 2d, 2ddp, 3d or 3ddp
MYJOUFILE=sample.jou                # Specify your journal filename
MYJOUFILERES=sample-restart.jou    # Specify journal restart filename
MYCASFILERES=sample-restart.cas.h5  # Specify cas restart filename
MYDATFILERES=sample-restart.dat.h5  # Specify dat restart filename
 
<!--T:508-->
# ------- do not change any lines below --------
 
<!--T:4745-->
if [[ "${CC_CLUSTER}" == narval ]]; then
if [ "$EBVERSIONGENTOO" == 2020 ]; then
  module load intel/2021 intelmpi
  export INTELMPI_ROOT=$I_MPI_ROOT/mpi/latest
  export HCOLL_RCACHE=^ucs
elif [ "$EBVERSIONGENTOO" == 2023 ]; then
  module load intel/2023 intelmpi
  export INTELMPI_ROOT=$I_MPI_ROOT
fi
unset I_MPI_HYDRA_BOOTSTRAP_EXEC_EXTRA_ARGS
unset I_MPI_ROOT
fi
 
<!--T:4746-->
slurm_hl2hl.py --format ANSYS-FLUENT > /tmp/machinefile-$SLURM_JOB_ID
NCORES=$((SLURM_NNODES * SLURM_NTASKS_PER_NODE * SLURM_CPUS_PER_TASK))
 
<!--T:3408-->
# Launch fluent using the version and journal file specified above
if [ "$SLURM_NNODES" == 1 ]; then
  if [ "$SLURM_ARRAY_TASK_ID" == 1 ]; then
    fluent -g $MYVERSION -t $NCORES -affinity=0 -i $MYJOUFILE
  else
    fluent -g $MYVERSION -t $NCORES -affinity=0 -i $MYJOUFILERES
  fi
else
  if [ "$SLURM_ARRAY_TASK_ID" == 1 ]; then
    fluent -g $MYVERSION -t $NCORES -affinity=0 -cnf=/tmp/machinefile-$SLURM_JOB_ID -mpi=intel -ssh -i $MYJOUFILE
  else
    fluent -g $MYVERSION -t $NCORES -affinity=0 -cnf=/tmp/machinefile-$SLURM_JOB_ID -mpi=intel -ssh -i $MYJOUFILERES
  fi
fi
if [ $? -eq 0 ]; then
    echo
    echo "SLURM_ARRAY_TASK_ID  = $SLURM_ARRAY_TASK_ID"
    echo "SLURM_ARRAY_TASK_COUNT = $SLURM_ARRAY_TASK_COUNT"
    echo
    if [ $SLURM_ARRAY_TASK_ID -lt $SLURM_ARRAY_TASK_COUNT ]; then
      echo "Restarting job with the most recent output dat file …"
      ln -sfv output/$(ls -ltr output {{!}} grep .cas {{!}} tail -n1 {{!}} awk '{print $9}') $MYCASFILERES
      ln -sfv output/$(ls -ltr output {{!}} grep .dat {{!}} tail -n1 {{!}} awk '{print $9}') $MYDATFILERES
      ls -lh cavity* output/*
    else
      echo "Job completed successfully! Exiting now."
      scancel $SLURM_ARRAY_JOB_ID
    fi
else
    echo "Simulation failed. Exiting …"
fi
}}
</tab>
 
<!--T:3900-->
<tab name="Multinode (by core + restart)">
{{File
|name=script-flu-bycore+restart.sh
|lang="bash"
|contents=
#!/bin/bash
 
<!--T:3902-->
#SBATCH --account=def-group  # Specify account
#SBATCH --time=00-03:00      # Specify time limit dd-hh:mm
##SBATCH --nodes=1            # Uncomment to specify (narval 1 node max)
#SBATCH --ntasks=16          # Specify total number of cores
#SBATCH --mem-per-cpu=4G      # Specify memory per core
#SBATCH --cpus-per-task=1    # Do not change
#SBATCH --array=1-5%1        # Specify number of restart aka time steps (2 or more, 5 is shown)
 
<!--T:3906-->
module load StdEnv/2023      # Do not change
module load ansys/2023R2      # Specify version (beluga, cedar, graham, narval)
 
<!--T:3905-->
#module load StdEnv/2020      # no longer supported
#module load ansys/2019R3    # or newer versions (narval only)
#module load ansys/2021R2    # or newer versions (beluga, cedar, graham)
 
<!--T:4747-->
MYVERSION=3d                        # Specify 2d, 2ddp, 3d or 3ddp
MYJOUFILE=sample.jou                # Specify your journal filename
MYJOUFILERES=sample-restart.jou    # Specify journal restart filename
MYCASFILERES=sample-restart.cas.h5  # Specify cas restart filename
MYDATFILERES=sample-restart.dat.h5  # Specify dat restart filename
 
<!--T:509-->
# ------- do not change any lines below --------
 
<!--T:4748-->
if [[ "${CC_CLUSTER}" == narval ]]; then
if [ "$EBVERSIONGENTOO" == 2020 ]; then
  module load intel/2021 intelmpi
  export INTELMPI_ROOT=$I_MPI_ROOT/mpi/latest
  export HCOLL_RCACHE=^ucs
elif [ "$EBVERSIONGENTOO" == 2023 ]; then
  module load intel/2023 intelmpi
  export INTELMPI_ROOT=$I_MPI_ROOT
fi
unset I_MPI_HYDRA_BOOTSTRAP_EXEC_EXTRA_ARGS
unset I_MPI_ROOT
fi
 
<!--T:4749-->
slurm_hl2hl.py --format ANSYS-FLUENT > /tmp/machinefile-$SLURM_JOB_ID
NCORES=$((SLURM_NTASKS * SLURM_CPUS_PER_TASK))
 
<!--T:3910-->
if [ "$SLURM_NNODES" == 1 ]; then
  if [ "$SLURM_ARRAY_TASK_ID" == 1 ]; then
    fluent -g $MYVERSION -t $NCORES -affinity=0 -mpi=intel -pshmem -i $MYJOUFILE
  else
    fluent -g $MYVERSION -t $NCORES -affinity=0 -mpi=intel -pshmem -i $MYJOUFILERES
  fi
else
  if [ "$SLURM_ARRAY_TASK_ID" == 1 ]; then
    fluent -g $MYVERSION -t $NCORES -affinity=0 -mpi=intel -pib -cnf=/tmp/machinefile-$SLURM_JOB_ID -i $MYJOUFILE
  else
    fluent -g $MYVERSION -t $NCORES -affinity=0 -mpi=intel -pib -cnf=/tmp/machinefile-$SLURM_JOB_ID -i $MYJOUFILERES
  fi
fi
if [ $? -eq 0 ]; then
    echo
    echo "SLURM_ARRAY_TASK_ID  = $SLURM_ARRAY_TASK_ID"
    echo "SLURM_ARRAY_TASK_COUNT = $SLURM_ARRAY_TASK_COUNT"
    echo
    if [ $SLURM_ARRAY_TASK_ID -lt $SLURM_ARRAY_TASK_COUNT ]; then
      echo "Restarting job with the most recent output dat file"
      ln -sfv output/$(ls -ltr output {{!}} grep .cas {{!}} tail -n1 {{!}} awk '{print $9}') $MYCASFILERES
      ln -sfv output/$(ls -ltr output {{!}} grep .dat {{!}} tail -n1 {{!}} awk '{print $9}') $MYDATFILERES
      ls -lh cavity* output/*
    else
      echo "Job completed successfully! Exiting now."
      scancel $SLURM_ARRAY_JOB_ID
    fi
else
    echo "Simulation failed. Exiting now."
fi
}}
</tab>
</tabs>
 
=== Journal files === <!--T:250-->


<!--T:2502-->
Fluent journal files can include basically any command from Fluent's Text-User-Interface (TUI); commands can be used to change simulation parameters like temperature, pressure and flow speed. With this you can run a series of simulations under different conditions with a single case file, by only changing the parameters in the journal file. Refer to the Fluent User's Guide for more information and a list of all commands that can be used.  The following journal files are set up with <code>/file/cff-files no</code> to use the legacy .cas/.dat file format (the default in module versions 2019R3 or older).  Set this instead to <code>/file/cff-files yes</code> to use the more efficient .cas.h5/.dat.h5 file format (the default in module versions 2020R1 or newer).


<!--T:2503-->
<tabs>
<tab name="Journal file (steady, case)">
{{File
|name=sample1.jou
|contents=
; lines beginning with a semicolon are comments


<!--T:2501-->
; Overwrite files by default
/file/confirm-overwrite no

<!--T:2825-->
; Preferentially read/write files in legacy format
/file/cff-files no

<!--T:2507-->
; Read input case and data files
/file/read-case-data FFF-in

<!--T:2508-->
; Run the solver for this many iterations
/solve/iterate 1000

<!--T:2511-->
; Overwrite output files by default
/file/confirm-overwrite n

<!--T:2513-->
; Write final output case and data files
/file/write-case-data FFF-out

<!--T:2515-->
; Write simulation report to file (optional)
/report/summary y "My_Simulation_Report.txt"

<!--T:2517-->
; Cleanly shutdown fluent
/exit
}}
</tab>


<!--T:3600-->
<tab name="Journal file (steady, case + data)">
{{File
|name=sample2.jou
|contents=
; lines beginning with a semicolon are comments


<!--T:3601-->
; Overwrite files by default
/file/confirm-overwrite no

<!--T:3602-->
; Preferentially read/write files in legacy format
/file/cff-files no

<!--T:3604-->
; Read input files
/file/read-case-data FFF-in

<!--T:3606-->
; Write a data file every 100 iterations
/file/auto-save/data-frequency 100

<!--T:3608-->
; Retain data files from 5 most recent iterations
/file/auto-save/retain-most-recent-files y

<!--T:3610-->
; Write data files to output sub-directory (appends iteration)
/file/auto-save/root-name output/FFF-out

<!--T:3612-->
; Run the solver for this many iterations
/solve/iterate 1000

<!--T:3614-->
; Write final output case and data files
/file/write-case-data FFF-out

<!--T:3616-->
; Write simulation report to file (optional)
/report/summary y "My_Simulation_Report.txt"

<!--T:3618-->
; Cleanly shutdown fluent
/exit
}}
</tab>


<!--T:3700-->
<tab name="Journal file (transient)">
{{File
|name=sample3.jou
|contents=
; lines beginning with a semicolon are comments


<!--T:3701-->
; Overwrite files by default
/file/confirm-overwrite no

<!--T:3702-->
; Preferentially read/write files in legacy format
/file/cff-files no

<!--T:3704-->
; Read the input case file
/file/read-case FFF-transient-inp

<!--T:3706-->
; For continuation (restart) read in both case and data input files
;/file/read-case-data FFF-transient-inp

<!--T:3708-->
; Write a data (and maybe case) file every 100 time steps
/file/auto-save/data-frequency 100
/file/auto-save/case-frequency if-case-is-modified

<!--T:3710-->
; Retain only the most recent 5 data (and maybe case) files
; [saves disk space if only a recent continuation file is needed]
/file/auto-save/retain-most-recent-files y

<!--T:3712-->
; Write to output sub-directory (appends flowtime and timestep)
/file/auto-save/root-name output/FFF-transient-out-%10.6f

<!--T:3714-->
; ##### Settings for Transient simulation #####

<!--T:3716-->
; Set the physical time step size
/solve/set/time-step 0.0001

<!--T:3720-->
; Set the number of iterations for which convergence monitors are reported
/solve/set/reporting-interval 1

<!--T:3722-->
; ##### End of settings for Transient simulation #####

<!--T:3724-->
; Initialize using the hybrid initialization method
/solve/initialize/hyb-initialization

<!--T:3718-->
; Set max number of iters per time step and number of time steps
;/solve/set/max-iterations-per-time-step 75
;/solve/dual-time-iterate 1000 ,
/solve/dual-time-iterate 1000 75

<!--T:3728-->
; Write final case and data output files
/file/write-case-data FFF-transient-out

<!--T:3730-->
; Write simulation report to file (optional)
/report/summary y Report_Transient_Simulation.txt

<!--T:3732-->
; Cleanly shutdown fluent
/exit
}}
</tab>

<!--T:6769-->
</tabs>


=== UDFs === <!--T:520-->
 
<!--T:6770-->
The first step is to transfer your User-Defined Function (UDF), namely the sampleudf.c source file and any additional dependency files, to the cluster.  When uploading from a Windows machine, be sure your transfer client uses text mode; otherwise Fluent won't be able to read the file properly on the cluster, since the cluster runs Linux.  The UDF should be placed in the directory where your journal, cas and dat files reside.  Next, add one of the following commands into your journal file before the commands that read in your simulation cas/dat files.  Regardless of whether you use the Interpreted or Compiled UDF approach, before uploading your cas file onto the Alliance please check that neither the Interpreted UDFs dialog box nor the UDF Library Manager dialog box is configured to use any UDF; this will ensure that when jobs are submitted only the journal file commands will be in control.
 
==== Interpreted ==== <!--T:521-->
 
<!--T:6771-->
To tell Fluent to interpret your UDF at runtime, add the following command line to your journal file before the cas/dat files are read or initialized.  The filename sampleudf.c should be replaced with the name of your source file.  The command remains the same regardless of whether the simulation is run in serial or parallel.  To ensure the UDF can be found in the same directory as the journal file, remove any managed definitions from the cas file by opening it in the gui and resaving it, either before uploading to the Alliance or by opening it in the gui on a compute node or gra-vdi and then resaving it.  Doing this ensures that only the following command/method is in control when Fluent runs.  To use an interpreted UDF with parallel jobs, it will need to be parallelized as described in the section below.
 
<!--T:6772-->
define/user-defined/interpreted-functions "sampleudf.c" "cpp" 10000 no
 
==== Compiled ==== <!--T:522-->
 
<!--T:6773-->
To use this approach your UDF must be compiled on an Alliance cluster at least once.  Doing so creates a libudf subdirectory structure containing the required <code>libudf.so</code> shared library.  The libudf directory cannot simply be copied from a remote system (such as your laptop) to the Alliance, since the library dependencies of the shared library will not be satisfied and Fluent will crash on startup.  That said, once you have compiled your UDF on one Alliance cluster you can transfer the newly created libudf directory to any other Alliance cluster, provided your account there loads the same StdEnv environment module version.  Once copied, the UDF can be used by uncommenting the second (load) libudf line below in your journal file when submitting jobs to the cluster.  Do not leave both (compile and load) libudf lines uncommented in your journal file when submitting jobs, otherwise your UDF will be automatically (re)compiled for each and every job.  Not only is this highly inefficient, it will also lead to race-like build conflicts if multiple jobs are run from the same directory.  Besides configuring your journal file to build your UDF, the Fluent gui (run on any cluster compute node or gra-vdi) may also be used.  To do this, navigate to the Compiled UDFs Dialog Box, add the UDF source file and click Build.  When using a compiled UDF with parallel jobs, your source file should be parallelized as discussed in the section below.
 
<!--T:6774-->
define/user-defined/compiled-functions compile libudf yes sampleudf.c "" ""
 
<!--T:6775-->
and/or
 
<!--T:6776-->
define/user-defined/compiled-functions load libudf
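As a rough sketch of moving a previously built libudf between clusters (the hostname and path below are placeholders; confirm the same StdEnv module version is used on both systems):

<source lang="bash">
# On the cluster where the UDF was compiled, from the directory containing libudf
module list 2>&1 | grep StdEnv      # note the StdEnv version used for the build
tar czf libudf.tar.gz libudf        # bundle the compiled shared library tree
scp libudf.tar.gz username@cluster.alliancecan.ca:/project/my-case-directory/
# On the destination cluster, unpack next to your journal/cas/dat files
tar xzf libudf.tar.gz
</source>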
 
==== Parallel ==== <!--T:523-->
 
<!--T:6777-->
Before a UDF can be used with a Fluent parallel job (single node SMP and multi node MPI) it must be parallelized.  By doing this we control how and on which processes (host and/or compute) specific parts of the UDF code run when Fluent is run in parallel on the cluster.  The instrumenting procedure involves adding compiler directives, predicates and reduction macros to your working serial UDF.  Failure to do so will result in Fluent running slowly at best or crashing immediately at worst.  The end result is a single UDF that runs efficiently when Fluent is used in both serial and parallel mode.  The subject is described in detail under <I>Part I: Chapter 7: Parallel Considerations</I> of the Ansys 2024 <I>Fluent Customization Manual</I> which can be accessed [https://docs.alliancecan.ca/wiki/Ansys#Online_documentation here].
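A crude but quick sanity check for whether a UDF source file has already been instrumented is to search for the usual parallel construct names (the RP_HOST and RP_NODE directives and the PRF_ reduction macro family); for example:

<source lang="bash">
grep -nE "RP_HOST|RP_NODE|PRF_G" sampleudf.c   # no matches suggests the UDF has not been parallelized
</source>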
 
==== DPM ==== <!--T:524-->
UDFs can be used to customize Discrete Phase Models (DPM) as described in <I>Part III: Solution Mode | Chapter 24: Modeling Discrete Phase | 24.2 Steps for Using the Discrete Phase Models | 24.2.6 User-Defined Functions</I> of the <I>2024R2 Fluent Users Guide</I> and section <I>Part I: Creating and Using User Defined Functions | Chapter 2: DEFINE Macros | 2.5 Discrete Phase Model (DPM) DEFINE Macros</I> of the <I>2024R2 Fluent Customization Manual</I> available [https://ansyshelp.ansys.com/account/secured?returnurl=/Views/Secured/prod_page.html?pn=Fluent&pid=Fluent&lang=en here].

Before a DPM based UDF can be worked into a simulation, the injection of a set of particles must be defined by specifying <i>Point Properties</i> with variables such as source position, initial trajectory, mass flow rate, time duration, temperature and so forth, depending on the injection type.  This can be done in the gui by clicking the Physics panel, then Discrete Phase to open the <I>Discrete Phase Model</I> box, and then clicking the <I>Injections</I> button.  Doing so opens an <I>Injections</I> dialog box where one or more injections can be created by clicking the <I>Create</I> button.  The <i>Set Injection Properties</i> dialog which appears contains an <i>Injection Type</i> pulldown; the first four types available are single, group, surface and flat-fan-atomizer.  If you select any of these, the <i>Point Properties</i> tab can then be selected to input the corresponding Value fields.  Another way to specify the Point Properties is to read an injection text file.  To do this, select <i>file</i> from the Injection Type pulldown, specify the Injection Name to be created and then click the <I>File</I> button (located beside the <I>OK</I> button at the bottom of the <i>Set Injection Properties</i> dialog).  Here either an Injection Sample File (with .dpm extension) or a manually created injection text file can be selected.  To select the file in the Select File dialog box, change the <i>Files of type</i> pulldown to All Files (*), highlight the file (which could have any arbitrary name but most commonly has a .inj extension) and click the OK button.  Assuming there are no problems with the file, no Console error or warning message will appear in Fluent.  You will be returned to the <i>Injections</i> dialog box, where you should see the same Injection name that you specified in the <i>Set Injection Properties</i> dialog and be able to List its Particles and Properties in the console.

Next, open the Discrete Phase Model Dialog Box and select Interaction with Continuous Phase, which will enable updating DPM source terms every flow iteration.  This setting can be saved in your cas file or added via the journal file as shown.  Once the injection is confirmed working in the gui, the steps can be automated by adding commands to the journal file after solution initialization, for example:
/define/models/dpm/interaction/coupled-calculations yes
/define/models/dpm/injections/delete-injection injection-0:1
/define/models/dpm/injections/create injection-0:1 no yes file no zinjection01.inj no no no no
/define/models/dpm/injections/list-particles injection-0:1
/define/models/dpm/injections/list-injection-properties injection-0:1
where a basic manually created steady-format injection file might look like:
  $ cat  zinjection01.inj
  (z=4 12)
  ( x          y        z    u        v    w    diameter  t        mass-flow  mass  frequency  time name )
  (( 2.90e-02  5.00e-03 0.0 -1.00e-03  0.0  0.0  1.00e-04  2.93e+02  1.00e-06  0.0  0.0        0.0 ) injection-0:1 )
noting that injection files for DPM simulations are generally set up for either steady or unsteady particle tracking, where the format of the former is described in subsection <I>Part III: Solution Mode | Chapter 24: Modeling Discrete Phase | 24.3. Setting Initial Conditions for the Discrete Phase | 24.3.13 Point Properties for File Injections | 24.3.13.1 Steady File Format</I> of the <I>2024R2 Fluent Customization Manual</I>.
 
== Ansys CFX == <!--T:78-->
 
=== Slurm scripts === <!--T:781-->
 
<!--T:2832-->
<tabs>
<tab name="Multinode">
{{File
|name=script-cfx-dist.sh
|lang="bash"
|contents=
#!/bin/bash
<!--T:1643-->
#SBATCH --account=def-group  # Specify account name
#SBATCH --time=00-03:00      # Specify time limit dd-hh:mm
#SBATCH --nodes=2             # Specify multiple (1 or more) compute nodes
#SBATCH --ntasks-per-node=32 # Specify cores per node (graham 32 or 44, cedar 32 or 48, beluga 40, narval 64)
#SBATCH --mem=0              # Allocate all memory per compute node
#SBATCH --cpus-per-task=1     # Do not change
 
<!--T:166-->
module load StdEnv/2020       # Applies to: beluga, cedar, graham, narval
module load ansys/2021R1      # Or newer module versions


<!--T:4771-->
NNODES=$(slurm_hl2hl.py --format ANSYS-CFX)


<!--T:1644-->
# append additional command line options as required
if [ "$CC_CLUSTER" = cedar ]; then
  cfx5solve -def YOURFILE.def -start-method "Open MPI Distributed Parallel" -par-dist $NNODES
else
  cfx5solve -def YOURFILE.def -start-method "Intel MPI Distributed Parallel" -par-dist $NNODES
fi


<!--T:82-->
}}</tab>
<!--T:2833-->
<tab name="Single node">
{{File
|name=script-cfx-local.sh
|lang="bash"
|contents=
#!/bin/bash
<!--T:1647-->
#SBATCH --account=def-group  # Specify account name
#SBATCH --time=00-03:00      # Specify time limit dd-hh:mm
#SBATCH --nodes=1            # Specify single compute node (do not change)
#SBATCH --ntasks-per-node=4  # Specify total cores (narval up to 64)
#SBATCH --mem=16G            # Specify 0 to use all node memory
#SBATCH --cpus-per-task=1    # Do not change
<!--T:167-->
module load StdEnv/2020      # Applies to: beluga, cedar, graham, narval
module load ansys/2021R1      # Or newer module versions
<!--T:1646-->
# append additional command line options as required
if [ "$CC_CLUSTER" = cedar ]; then
  cfx5solve -def YOURFILE.def -start-method "Open MPI Local Parallel" -part $SLURM_CPUS_ON_NODE
else
  cfx5solve -def YOURFILE.def -start-method "Intel MPI Local Parallel" -part $SLURM_CPUS_ON_NODE
fi
}}</tab>
</tabs>


<!--T:84-->
Note: You may get the following error in your output file which does not seem to affect the computation: <i>/etc/tmi.conf: No such file or directory</i>.
 
== Workbench == <!--T:280-->
 
<!--T:2801-->
Before submitting a project file to the queue on a cluster (for the first time) follow these steps to initialize it.<br>
# Connect to the cluster with [[VNC#Compute_nodes|TigerVNC]].
# Switch to the directory where the project file is located (YOURPROJECT.wbpj) and [[Ansys#Workbench_3|start Workbench]] with the same Ansys module you used to create your project.
# In Workbench, open the project with <I>File -> Open</I>.
# In the main window, right-click on <i>Setup</i> and select <I>Clear All Generated Data</I>.
# In the top menu bar pulldown, select <I>File -> Exit</I> to exit Workbench.
# In the Ansys Workbench popup, when asked <I>The current project has been modified. Do you want to save it?</I>, click on the <i>No</i> button.
# Quit Workbench and submit your job using one of the Slurm scripts shown below.
 
<!--T:2845-->
To avoid writing the solution when a running job successfully completes, remove <code>;Save(Overwrite=True)</code> from the last line of your script.  Doing this makes it easier to run multiple test jobs (for scaling purposes when changing ntasks), since the initialized solution will not be overwritten each time.  Alternatively, keep a copy of the initialized YOURPROJECT.wbpj file and YOURPROJECT_files subdirectory and restore them after the solution is written.
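For example, with the save disabled the last line of the script below would simply become:

<source lang="bash">
runwb2 -B -E "Update()" -F YOURPROJECT.wbpj
</source>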


=== Slurm scripts === <!--T:2814-->
 
<!--T:4772-->
A project file can be submitted to the queue by customizing one of the following scripts and then running the <code>sbatch script-wbpj.sh</code> command:


<!--T:2802-->
<tabs>
<!--T:2803-->
<tab name="Single node (StdEnv/2020)">
{{File
|name=script-wbpj-2020.sh
|lang="bash"
|contents=
#!/bin/bash
#SBATCH --account=def-account
#SBATCH --time=00-03:00               # Time (DD-HH:MM)
#SBATCH --mem=16G                      # Total Memory (set to 0 for all node memory)
#SBATCH --ntasks=4                    # Number of cores
#SBATCH --nodes=1                     # Do not change (multi-node not supported)
##SBATCH --exclusive                  # Uncomment for scaling testing
##SBATCH --constraint=broadwell        # Applicable to graham or cedar
 
<!--T:2804-->
module load StdEnv/2020 ansys/2021R2   # OR newer Ansys modules


<!--T:2805-->
unset SLURM_GTIDS
if [ "$SLURM_NNODES" == 1 ]; then
  MEMPAR=0                              # Set to 0 for SMP (shared memory parallel)
else
  MEMPAR=1                              # Set to 1 for DMP (distributed memory parallel)
fi


<!--T:2849-->
rm -fv *_files/.lock
MWFILE=~/.mw/Application\ Data/Ansys/`basename $(find $EBROOTANSYS/v* -maxdepth 0 -type d)`/SolveHandlers.xml
sed -re "s/(.AnsysSolution>+)[a-zA-Z0-9]*(<\/Distribute.)/\1$MEMPAR\2/" -i "$MWFILE"
sed -re "s/(.Processors>+)[a-zA-Z0-9]*(<\/MaxNumber.)/\1$SLURM_NTASKS\2/" -i "$MWFILE"
sed -i "s!UserConfigured=\"0\"!UserConfigured=\"1\"!g" "$MWFILE"


<!--T:2923-->
export KMP_AFFINITY=disabled
export I_MPI_HYDRA_BOOTSTRAP=ssh


<!--T:2835-->
runwb2 -B -E "Update();Save(Overwrite=True)" -F YOURPROJECT.wbpj
}}
</tab>
</tabs>


== Mechanical == <!--T:108-->


<!--T:1081-->
The input file can be generated from within your interactive Workbench Mechanical session by clicking <i>Solution -> Tools -> Write Input Files</i>, then specifying <code>File name:</code> YOURAPDLFILE.inp and <code>Save as type:</code> APDL Input Files (*.inp).  APDL jobs can then be submitted to the queue by running the <code>sbatch script-name.sh</code> command.
 
=== Slurm scripts === <!--T:1083-->
 
<!--T:4770-->
The Ansys modules used in each of the following scripts have been tested on Graham and should work without issue (uncomment one).  Once the scripts have been tested on other clusters, they will be updated if required.


<!--T:1659-->
<tabs>
<tab name="Single node (stdenv/2020)">
{{File
|name=script-smp-2020.sh
|lang="bash"
|contents=
#!/bin/bash
#SBATCH --account=def-account  # Specify your account
#SBATCH --time=00-03:00        # Specify time (DD-HH:MM)
#SBATCH --mem=16G              # Specify memory for all cores
#SBATCH --ntasks=8             # Specify number of cores (1 or more)
#SBATCH --nodes=1              # Specify one node (do not change)


<!--T:4755-->
unset SLURM_GTIDS


<!--T:4756-->
module load StdEnv/2020


<!--T:4757-->
#module load ansys/2021R2
#module load ansys/2022R1
module load ansys/2022R2

<!--T:4758-->
mapdl -smp -b nolist -np $SLURM_NTASKS -dir $SLURM_TMPDIR -I YOURAPDLFILE.inp

<!--T:4759-->
rm -rf results-*
mkdir results-$SLURM_JOB_ID
cp -a --no-preserve=ownership $SLURM_TMPDIR/* results-$SLURM_JOB_ID
}}
</tab>
<tab name="DIS Script (stdenv/2020)">
<tab name="Multinode script (stdenv/2020)">
{{File
{{File
|name=script-dis-2020.sh
|name=script-dis-2020.sh
Line 550: Line 1,179:
##SBATCH --ntasks-per-node=4  # Specify cores per node (optional)
##SBATCH --ntasks-per-node=4  # Specify cores per node (optional)


<!--T:4765-->
unset SLURM_GTIDS


<!--T:4766-->
module load StdEnv/2020


<!--T:4767-->
module load ansys/2022R2


<!--T:4768-->
mapdl -dis -mpi openmpi -b nolist -np $SLURM_NTASKS -dir $SLURM_TMPDIR -I YOURAPDLFILE.inp
 
<!--T:4769-->
rm -rf results-*
mkdir results-$SLURM_JOB_ID
cp -a --no-preserve=ownership $SLURM_TMPDIR/* results-$SLURM_JOB_ID
}}
</tab>
</tabs>


<!--T:1082-->
Ansys allocates 1024 MB total memory and 1024 MB database memory by default for APDL jobs. These values can be manually specified (or changed) by adding arguments <code>-m 1024</code> and/or <code>-db 1024</code> to the mapdl command line in the above scripts. When using a remote institutional license server with multiple Ansys licenses, it may be necessary to add <code>-p aa_r</code> or <code>-ppf anshpc</code>, depending on which Ansys module you are using. As always, perform detailed scaling tests before running production jobs to ensure that the optimal number of cores and the minimum amount of memory are specified in your scripts. The <i>single node</i> (SMP Shared Memory Parallel) scripts will typically perform better than the <i>multinode</i> (DIS Distributed Memory Parallel) scripts and therefore should be used whenever possible. To help avoid compatibility issues, the Ansys module loaded in your script should ideally match the version used to generate the input file:


 <!--T:1669-->
 ! ANSYS input file written by Workbench version 2019 R3
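Referring back to the memory and license arguments mentioned above, a hypothetical mapdl command line (based on the single node script, with example values only) could look like:

<source lang="bash">
mapdl -smp -b nolist -np $SLURM_NTASKS -dir $SLURM_TMPDIR -m 1024 -db 1024 -p aa_r -I YOURAPDLFILE.inp
</source>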


== Ansys EDT == <!--T:109-->


<!--T:1091-->
Ansys EDT can be run interactively in batch (non-gui) mode by first starting an salloc session with options <code>salloc --time=3:00:00 --tasks=8 --mem=16G --account=def-account</code> and then copy-pasting the full <code>ansysedt</code> command found in the last line of <i>script-local-cmd.sh</i>, being sure to manually specify $YOUR_AEDT_FILE.
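Putting those pieces together, such an interactive batch-mode run might look roughly as follows (the module version and file name are placeholders taken from the scripts below):

<source lang="bash">
salloc --time=3:00:00 --tasks=8 --mem=16G --account=def-account
module load StdEnv/2020 ansysedt/2021R2
ansysedt -monitor -UseElectronicsPPE -ng -distributed -machinelist list=localhost:1:$SLURM_NTASKS \
-batchoptions "TempDirectory=$SLURM_TMPDIR HPCLicenseType=pool HFSS/EnableGPU=0" -batchsolve "YOUR_AEDT_FILE.aedt"
</source>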


=== Slurm scripts === <!--T:1092-->
 
<!--T:1093-->
Ansys Electronic Desktop jobs may be submitted to a cluster queue with the <code>sbatch script-name.sh</code> command using either of the following single node scripts.  As of January 2023, the scripts had only been tested on Graham and therefore may be updated in the future as required to support other clusters.  Before using them, specify the simulation time, memory, number of cores and replace YOUR_AEDT_FILE with your input file name.  A full listing of command line options can be obtained by starting Ansys EDT in [[ANSYS#Graphical_use|graphical mode]] with commands <code>ansysedt -help</code> or <code>ansysedt -Batchoptionhelp</code> to obtain scrollable graphical popups. 
 
<!--T:1094-->
<tabs>
<tab name="Single Node Script - Command Line">
<tab name="Single node (command line)">
{{File
|name=script-local-cmd.sh
|lang="bash"
|contents=
#!/bin/bash


<!--T:1095-->
#SBATCH --account=account      # Specify your account (def or rrg)
#SBATCH --time=00-01:00        # Specify time (DD-HH:MM)
#SBATCH --mem=16G              # Specify memory (set to 0 to use all compute node memory)
#SBATCH --ntasks=8            # Specify cores (beluga 40, cedar 32 or 48, graham 32 or 44, narval 64)
#SBATCH --nodes=1              # Request one node (Do Not Change)


<!--T:1096-->
module load StdEnv/2020
module load ansysedt/2021R2


<!--T:1097-->
# Uncomment next line to run a test example:
cp -f $EBROOTANSYSEDT/AnsysEM21.2/Linux64/Examples/HFSS/Antennas/TransientGeoRadar.aedt .
# Specify input file name (for example the test case copied above)
YOUR_AEDT_FILE="TransientGeoRadar.aedt"
# ---- do not change anything below this line ---- #


<!--T:2840-->
echo -e "\nANSYSLI_SERVERS= $ANSYSLI_SERVERS"
echo "ANSYSLMD_LICENSE_FILE= $ANSYSLMD_LICENSE_FILE"
echo -e "SLURM_TMPDIR= $SLURM_TMPDIR on $SLURMD_NODENAME\n"


<!--T:2841-->
export KMP_AFFINITY=disabled
ansysedt -monitor -UseElectronicsPPE -ng -distributed -machinelist list=localhost:1:$SLURM_NTASKS \
-batchoptions "TempDirectory=$SLURM_TMPDIR HPCLicenseType=pool HFSS/EnableGPU=0" -batchsolve "$YOUR_AEDT_FILE"
}}
</tab>
<tab name="Single Node Script - Options File">
<tab name="Single node (options file)">
{{File
|name=script-local-opt.sh
|lang="bash"
|contents=
#!/bin/bash
#SBATCH --account=account      # Specify your account (def or rrg)
#SBATCH --time=00-01:00        # Specify time (DD-HH:MM)
#SBATCH --mem=16G              # Specify memory (set to 0 to allocate all compute node memory)
#SBATCH --ntasks=8            # Specify cores (beluga 40, cedar 32 or 48, graham 32 or 44, narval 64)
#SBATCH --nodes=1              # Request one node (Do Not Change)


# ---- do not change anything below this line ---- #


<!--T:2842-->
echo -e "\nANSYSLI_SERVERS= $ANSYSLI_SERVERS"
echo "ANSYSLMD_LICENSE_FILE= $ANSYSLMD_LICENSE_FILE"
echo -e "SLURM_TMPDIR= $SLURM_TMPDIR on $SLURMD_NODENAME\n"


<!--T:2843-->
export KMP_AFFINITY=disabled
ansysedt -monitor -UseElectronicsPPE -ng -distributed -machinelist list=localhost:1:$SLURM_NTASKS \
-batchoptions $OPTIONS_TXT -batchsolve "$YOUR_AEDT_FILE"
}}
</tab>
</tabs>
 
== Ansys ROCKY == <!--T:110-->
 
<!--T:1101-->
Besides being able to run simulations in gui mode (as discussed in the Graphical use section below), [https://www.ansys.com/products/fluids/ansys-rocky Ansys Rocky] can also run simulations in non-gui (or headless) mode.  Both modes support running Rocky with cpus only or with cpus and [https://www.ansys.com/blog/mastering-multi-gpu-ansys-rocky-software-enhancing-its-performance gpus].  In the section below, two sample Slurm scripts are provided; each would be submitted to the Graham queue with the sbatch command as usual.  At the time of this writing neither script has been tested, so some customization will likely be required.  It is important to note that these scripts are only usable on Graham, since the rocky module which they both load is (at present) installed locally on Graham only.
 
=== Slurm scripts === <!--T:1102-->
 
<!--T:1103-->
To get a full listing of command line options, run <code>Rocky -h</code> after loading any rocky module (currently only rocky/2023R2 is available on Graham, with 2024R1 and 2024R2 to be added as soon as possible).  When using Rocky with gpus to solve coupled problems, the number of cpus requested from Slurm (on the same node) should be increased until the scalability limit of the coupled application is reached.  On the other hand, if Rocky is being run with gpus to solve standalone uncoupled problems, then only a minimal number of cpus should be requested, just enough for Rocky to still run optimally; for instance, only 2 or possibly 3 cpus may be required.  Finally, when Rocky is run with more than 4 cpus, <I>rocky_hpc</I> licenses will be required; these are provided by the SHARCNET license.
 
<!--T:1104-->
<tabs>
<tab name="CPU only">
{{File
|name=script-rocky-cpu.sh
|lang="bash"
|contents=
#!/bin/bash
 
<!--T:1105-->
#SBATCH --account=account      # Specify your account (def or rrg)
#SBATCH --time=00-02:00        # Specify time (DD-HH:MM)
#SBATCH --mem=24G              # Specify memory (set to 0 to use all node memory)
#SBATCH --cpus-per-task=6      # Specify cores (graham 32 or 44 to use all cores)
#SBATCH --nodes=1              # Request one node (do not change)
 
<!--T:1106-->
module load StdEnv/2023
module load rocky/2023R2 ansys/2023R2  # only available on graham (do not change) 
 
<!--T:6778-->
Rocky --simulate "mysim.rocky" --resume=0 --ncpus=$SLURM_CPUS_PER_TASK --use-gpu=0
}}
</tab>
<tab name="GPU based">
{{File
|name=script-rocky-gpu.sh
|lang="bash"
|contents=
#!/bin/bash
 
<!--T:1107-->
#SBATCH --account=account      # Specify your account (def or rrg)
#SBATCH --time=00-01:00        # Specify time (DD-HH:MM)
#SBATCH --mem=24G              # Specify memory (set to 0 to use all node memory)
#SBATCH --cpus-per-task=6      # Specify cores (graham 32 or 44 to use all cores)
#SBATCH --gres=gpu:v100:2      # Specify gpu type : gpu quantity
#SBATCH --nodes=1              # Request one node (do not change)
 
<!--T:1108-->
module load StdEnv/2023
module load rocky/2023R2 ansys/2023R2  # only available on graham (do not change)
 
<!--T:6779-->
Rocky --simulate "mysim.rocky" --resume=0 --ncpus=$SLURM_CPUS_PER_TASK --use-gpu=1 --gpu-num=$SLURM_GPUS_ON_NODE
}}
</tab>
</tabs>


= Graphical use = <!--T:94-->


<!--T:941-->
Ansys programs may be run interactively in GUI mode on cluster compute nodes or Graham VDI nodes.


== Compute nodes == <!--T:943-->


<!--T:201-->
Ansys can be run interactively on a single compute node for up to 24 hours.  This approach is ideal for testing large simulations since all cores and memory can be requested with salloc as described in [[VNC#Compute_Nodes|TigerVNC]].  Once connected with vncviewer, any of the following program versions can be started after loading the required modules as shown below.
 
=== Fluids === <!--T:1670-->
::: <code>module load StdEnv/2020</code>
::: <code>module load ansys/2021R1</code> (or newer versions)
::: <code>fluent -mpi=intel</code>, or,
::: <code>QTWEBENGINE_DISABLE_SANDBOX=1 cfx5</code>


=== Mapdl === <!--T:1680-->
::: <code>module load StdEnv/2020</code>
::: <code>module load ansys/2021R2</code> (or newer versions)
::: <code>mapdl -g</code>, or via launcher,
::: <code>launcher</code> --> click RUN button


=== Workbench === <!--T:1671-->
::: <code>module load StdEnv/2020</code>
::: <code>module load ansys/2021R2</code> (or newer versions)
::: <code>xfwm4 --replace &</code> (only needed if using Ansys Mechanical)
::: <code>export QTWEBENGINE_DISABLE_SANDBOX=1</code> (only needed if using CFD-Post)
::: <code>runwb2</code>
::: <br>
::: NOTES: When running an Analysis Program such as Mechanical or Fluent in parallel on a single node, untick <i>Distributed</i> and specify a value of cores equal to your <b>salloc session setting minus 1</b>.  The pulldown menus in the Ansys Mechanical workbench do not respond properly; as a workaround, run <code>xfwm4 --replace</code> on the command line before starting workbench as shown.  To make xfwm4 your default, edit <code>$HOME/.vnc/xstartup</code> and change <code>mate-session</code> to <code>xfce4-session</code>.  Lastly, fluent from ansys/2022R2 does not currently work on compute nodes; please use a different version.


=== Ansys EDT === <!--T:1672-->
::: Start an interactive session using the following form of the salloc command (to specify cores and available memory):
::: <code>salloc --time=3:00:00 --nodes=1 --cores=8 --mem=16G --account=def-group</code>
::: <code>rm -rf ~/.mw</code>   (optionally force First-time configuration)
::: <code>xfwm4 --replace &</code> (then hit enter twice)
::: <code>module load StdEnv/2020 ansysedt/2021R2</code>, or
::: <code>module load StdEnv/2020 ansysedt/2023R2</code>, or
::: <code>module load StdEnv/2023 ansysedt/2023R2</code>, or newer
::: <code>ansysedt</code>
::: o Click <code>Tools -> Options -> HPC and Analysis Options -> Edit</code> then :
:::: 1) untick Use Automatic Settings box (required one time only)
:::: 2) under Machines tab do not change Cores (auto-detected from slurm)
::: o To run interactive analysis click:  <code>Project -> Analyze All</code>


=== Ensight === <!--T:1673-->
::: <code>module load StdEnv/2020 ansys/2022R2; A=222; B=5.12.6</code>, or
::: <code>module load StdEnv/2020 ansys/2022R1; A=221; B=5.12.6</code>, or
::: <code>module load StdEnv/2020 ansys/2021R2; A=212; B=5.12.6</code>, or
::: <code>module load StdEnv/2020 ansys/2021R1; A=211; B=5.12.6</code>
::: <code>export LD_LIBRARY_PATH=$EBROOTANSYS/v$A/CEI/apex$A/machines/linux_2.6_64/qt-$B/lib</code>
::: <code>ensight -X</code>
Note: ansys/2022R2 Ensight is lightly tested on compute nodes. Please let us know if you find any problems using it.


=== Rocky === <!--T:1682-->
::: <code>module load rocky/2023R2 ansys/2023R2</code> (or newer versions)
::: <code>Rocky</code> (reads ~/licenses/ansys.lic if present, otherwise defaults to SHARCNET server), or<br>
::: <code>Rocky-int</code> (interactively select CMC or SHARCNET server, also reads ~/licenses/ansys.lic)<br>
::: <code>RockySolver</code> (run rocky from the command line, currently untested, specify "-h" for help)
::: <code>RockySchedular</code> (resource manager to submit multiple jobs on present node)
::: o Rocky is (currently) only available on gra-vdi and graham cluster (no workbench support on linux)
::: o Release pdfs can be found under /opt/software/rocky/2023R2/docs (read them with <code>mupdf</code>)
::: o Rocky supports gpu accelerated computing, however this capability has not yet been tested
::: o To request a graham compute node with gpus for computations use, for example:
:::  <code>salloc --time=04:00:00 --nodes=1 --cpus-per-task=6 --gres=gpu:v100:2 --mem=32G --account=someaccount</code>
::: o The SHARCNET license now includes Rocky (free for all researchers to use)


== VDI nodes == <!--T:947-->


<!--T:125-->
Ansys programs can be run for up to 7 days on Graham's VDI nodes (gra-vdi.alliancecan.ca) using 8 cores (16 cores max) and 128GB memory.  The VDI system provides GPU OpenGL acceleration, therefore it is ideal for performing tasks that benefit from high performance graphics.  One might use VDI to create or modify simulation input files, post-process data or visualize simulation results.  To log in, connect with [[VNC#VDI_Nodes|TigerVNC]], then open a new terminal window and start one of the program versions shown below.  The vertical bar <code>|</code> notation is used to separate the various commands.  The maximum size for any parallel job run on gra-vdi should be limited to 16 cores to avoid overloading the servers and impacting other users.  To run two simultaneous gui jobs (16 cores max each), connect once with vnc to gra-vdi3.sharcnet.ca and then connect again to gra-vdi4.sharcnet.ca, likewise with vnc; next, start an interactive gui session for the Ansys program you are using in the desktop on each machine.  Note that simultaneous simulations should in general be run in different directories to avoid file conflict issues.  Unlike vnc connections to compute nodes (which impose Slurm limits through salloc), there is no time limit constraint on gra-vdi when running simulations.


=== Fluids === <!--T:1675-->
::: <code>module load CcEnv StdEnv/2020</code>
::: <code>module load ansys/2021R1</code> (or newer versions)
::: <code>unset SESSION_MANAGER</code>
::: <code>fluent | cfx5 | icemcfd</code>
::: o Where unsetting SESSION_MANAGER prevents the following Qt message from appearing when starting fluent:
::: [<span style="Color:#ff7f50">Qt: Session management error: None of the authentication protocols specified are supported</span>]
::: o In the event the following message appears in a popup window when starting icemcfd ...
::: [<span style="Color:#ff7f50">Error segmentation violation - exiting after doing an emergency save</span>]
::: ... do not click the popup OK button otherwise icemcfd will crash.  Instead do the following (one time only):
::: click the Settings Tab -> Display -> tick X11 -> Apply -> OK -> File -> Exit
::: The error popup should no longer appear when icemcfd is restarted.
 
=== Mapdl === <!--T:1681-->
::: <code>module load CcEnv StdEnv/2020</code>
::: <code>module load ansys/2021R1</code> (or newer versions)
::: <code>mapdl -g</code>, or via launcher,
::: <code>unset SESSION_MANAGER; launcher</code> --> click RUN button
 
=== Workbench === <!--T:1676-->
::: <code>module load SnEnv</code>
::: <code>module load ansys/2020R2</code> (or newer versions)
::: <code>export HOOPS_PICTURE=opengl</code>
::: <code>runwb2</code>
::: o The export line avoids the following tui Warning from appearing when fluent starts:
:::: [<span style="Color:#ff7f50">Software rasterizer found, hardware acceleration will be disabled.</span>]
::: Alternatively the HOOPS_PICTURE environment variable can be set inside workbench by doing:
:::: Fluent Launcher --> Environment Tab --> HOOPS_PICTURE=opengl (without the export)


<!--T:4777-->
::: NOTE1: When running Mechanical in Workbench on gra-vdi, be sure to <b>tick</b> <i>Distributed</i> in the upper ribbon Solver panel and specify a maximum value of <b>24</b> cores.  When running Fluent on gra-vdi, instead <b>untick</b> <i>Distributed</i> and specify a maximum value of <b>12</b> cores.  Do not attempt to use more than 128GB memory, otherwise Ansys will hit the hard limit and be killed.  If you need more cores or memory, please use a cluster compute node to run your graphical session (as described in the Compute nodes section above).  When doing only pre-processing or post-processing work with Ansys on gra-vdi and not running calculations, please use only <b>4</b> cores, otherwise hpc licenses will be checked out unnecessarily.
::: NOTE2: On very rare occasions the Ansys Workbench gui will freeze or become unresponsive in some way.  If this happens, open a new terminal window and run <code>pkill -9 -e -u $USER -f "ansys|fluent|mwrpcss|mwfwrapper|ENGINE|mono"</code> to fully kill off Ansys.  Likewise, if Ansys crashes or vncviewer disconnects before Ansys could be shut down cleanly, try running the pkill command if Ansys does not run normally afterwards.  In general, if Ansys is not behaving properly and you suspect one of the aforementioned causes, try pkill before opening a problem ticket.


=== Ansys EDT === <!--T:1677-->
::: Open a terminal window and load the module:
:::: <code>module load SnEnv ansysedt/2023R2</code>, or
:::: <code>module load SnEnv ansysedt/2021R2</code>
::: Type <code>ansysedt</code> in the terminal and wait for the gui to start
::: The following only needs to be done once:
:::: click <code>Tools -> Options -> HPC and Analysis Options -> Options</code>
:::: change <code>HPC License</code> pulldown to <b>Pool</b> (allows > 4 cores to be used)
:::: click <code>OK</code>
::: ----------  EXAMPLES  ----------
::: To copy the 2023R2 Antennas examples directory into your account:
:::: login to a cluster such as graham
:::: <code>module load ansysedt/2023R2</code>
:::: <code>mkdir -p ~/Ansoft/$EBVERSIONANSYSEDT; cd ~/Ansoft/$EBVERSIONANSYSEDT; rm -rf Antennas</code>
:::: <code>cp -a $EBROOTANSYSEDT/v232/Linux64/Examples/HFSS/Antennas ~/Ansoft/$EBVERSIONANSYSEDT</code>
::: To run an example:
:::: open a simulation .aedt file then click <code>HFSS -> Validation Check</code>
:::: (if errors are reported by the validation check, close then reopen the simulation and repeat as required)
::::  to run simulation click <code>Project -> Analyze All</code>
:::: to quit without saving the converged solution click <code>File -> Close -> No </code>
::: If the program crashes and won't restart try running the following commands:
:::: <code>pkill -9 -u $USER -f "ansys*|mono|mwrpcss|apip-standalone-service"</code>
:::: <code>rm -rf ~/.mw</code> (ansysedt will re-run first-time configuration on startup)


=== Ensight === <!--T:1678-->
::: <code>module load SnEnv ansys/2019R2</code> (or newer)
::: <code>ensight</code><br>


=== Rocky === <!--T:1679-->
::: <code>module load clumod rocky/2023R2 CcEnv StdEnv/2020 ansys/2023R2</code> (or newer versions)
::: <code>Rocky</code> (reads ~/licenses/ansys.lic if present, otherwise defaults to SHARCNET server), or<br>
::: <code>Rocky-int</code> (interactively select CMC or SHARCNET server, also reads ~/licenses/ansys.lic)<br>
::: <code>RockySolver</code> (run rocky from the command line, currently untested, specify "-h" for help)
::: <code>RockySchedular</code> (resource manager to submit multiple jobs on present node)
::: o Rocky is (currently) only available on gra-vdi and graham cluster (no workbench support on linux)
::: o Release pdfs can be found under /opt/software/rocky/2023R2/docs (read them with <code>mupdf</code>)
::: o Rocky can only use cpus on gra-vdi since it currently only has one gpu (dedicated to graphics)
::: o The SHARCNET license now includes Rocky (free for all researchers to use)
 
== SSH issues == <!--T:1674-->
::: Some Ansys GUI programs can be run remotely on a cluster compute node by X forwarding over SSH to your local desktop.  Unlike VNC, this approach is untested and unsupported since it relies on a properly setup X display server for your particular operating system OR the selection, installation and configuration of a suitable X client emulator package such as MobaXterm.  Most users will find interactive response times unacceptably slow for basic menu tasks let alone performing more complex tasks such as those involving graphics rendering.  Startup times for GUI programs can also be very slow depending on your Internet connection. For example, in one test it took 40 minutes to fully start <i>ansysedt</I> over SSH while starting it with vncviewer required only 34 seconds.  Despite the potential slowness when connecting over SSH to run GUI programs, doing so may still be of interest if your only goal is to open a simulation and perform some basic menu operations or run some calculations. The basic steps are given here as a starting point: 1) ssh -Y username@graham.computecanada.ca; 2) salloc --x11 --time=1:00:00 --mem=16G --cpus-per-task=4 [--gpus-per-node=1] --account=def-mygroup; 3) once connected onto a compute node try running <code>xclock</code>.  If the clock appears on your desktop, proceed to load the desired Ansys module and try running the program.
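::: For convenience, the steps just described might be typed roughly as follows (the cluster name, resources and module version are placeholders):
<source lang="bash">
ssh -Y username@graham.computecanada.ca
salloc --x11 --time=1:00:00 --mem=16G --cpus-per-task=4 --account=def-mygroup
xclock                                # confirm X11 forwarding works before anything heavier
module load StdEnv/2020 ansys/2021R2  # then start the desired Ansys gui program, for example
fluent
</source>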
 
= Site-specific usage = <!--T:86-->


== SHARCNET license == <!--T:118-->


<!--T:90-->
The SHARCNET Ansys license is free for academic use by <b>any</b> Alliance researcher on <b>any</b> Alliance system.  The installed software does not have any solver or geometry limits.  The SHARCNET license may <b>only</b> be used for the purpose of <b><i>Publishable Academic Research</i></b>.  Producing results for private commercial purposes is strictly prohibited.  The SHARCNET license was upgraded from CFD to MCS (Multiphysics Campus Solution) in May of 2020. It includes the following products: HF, EM, Electronics HPC, Mechanical and CFD as described [https://www.ansys.com/academic/educator-tools/academic-product-portfolio here]. In 2023 Rocky for Linux (no Workbench support) was also added. Neither LS-DYNA nor Lumerical is included in the SHARCNET license. Note that since all the Alliance clusters are Linux based, SpaceClaim cannot be used on our systems. In July of 2021 an additional 1024 anshpc licenses were added to the previous 512 pool.  Before running large parallel jobs, scaling tests should be run for any given simulation.  Parallel jobs that do not achieve at least 50% CPU utilization may be flagged by the system for a follow-up by our support team.
 
<!--T:2852-->
As of December 2022, each researcher can run 4 jobs using a total of 252 anshpc (plus 4 anshpc per job).  Thus any of the following uniform job size combinations are possible: one 256 core job, two 130 core jobs, three 88 core jobs, or four 67 core jobs, according to ( (252 + 4*num_jobs) / num_jobs ).  UPDATE: as of October 2024, the license limit has been increased to 8 jobs and 512 hpc cores per researcher (collectively across all clusters for all applications) for a testing period, to allow some researchers more flexibility for parameter explorations and running larger problems.  As the license will be far more oversubscribed, some instances of job failures on startup may occasionally occur, in which case the jobs will need to be resubmitted.  Nevertheless, assuming most researchers continue with a pattern of running one or two jobs using 128 cores in total on average, this is not expected to be an issue.  That said, it is helpful to close Ansys applications immediately upon completion of any gui-related tasks, to release any licenses that may be consumed while the application is otherwise idle, so others can use them.
 
<!--T:2854-->
Since the best parallel performance is usually achieved by using all cores on packed compute nodes (aka full nodes), one can determine the number of full nodes by dividing the total anshpc cores by the compute node size.  For example, consider Graham, which has many 32-core (Broadwell) and some 44-core (Cascade) compute nodes: the maximum number of nodes that could be requested when running various size jobs on 32-core nodes, assuming a 252 hpc core limit, would be 256/32=8, 130/32=~4, 88/32=~2 or 67/32=~2 for 1, 2, 3 or 4 simultaneous jobs respectively.  To express this in equation form, for a given compute node size on any cluster, the number of compute nodes can be calculated as ( 252 + (4*num_jobs) ) / (num_jobs*cores_per_node), rounded down; the total cores to request is then that whole number of nodes multiplied by cores_per_node.
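As a quick illustration of that arithmetic, the following shell snippet (a sketch only; adjust the core limit and node size to your situation) computes the node count and total cores per job:

<source lang="bash">
limit=252; num_jobs=2; cores_per_node=32
nodes=$(( (limit + 4*num_jobs) / (num_jobs*cores_per_node) ))   # integer division rounds down
echo "each of the $num_jobs jobs can request $nodes full nodes = $(( nodes*cores_per_node )) cores"
</source>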


<!--T:2853-->
The SHARCNET Ansys license is made available on a first come, first served basis.  Should an unusually large number of Ansys jobs be submitted on a given day, some jobs could fail on startup if insufficient licenses are available.  If this occurs, resubmit your job as soon as possible.  If your research requires more than 512 hpc cores (the recent new maximum limit), then open a ticket to let us know.  Most likely you will need to purchase (and host) your own Ansys license at your local institution if it is urgently needed; in that case, contact your local [https://www.simutechgroup.com SimuTech] office for a quote.  If however over time enough researchers express the same need, acquiring a larger Ansys license on the next renewal cycle may be possible.
 
<!--T:4775-->
Researchers can also purchase their own Ansys license subscription from [https://www.cmc.ca/subscriptions/ CMC] and use their remote license servers.  Doing so has several benefits: 1) a local institutional license server is not needed; 2) a physical license does not need to be obtained upon each renewal; 3) the license can be used [https://www.cmc.ca/ansys-campus-solutions-cmc-00200-04847/ almost] anywhere, including at home, at institutions, or on any Alliance cluster across Canada; and 4) download and installation instructions for the Windows version of Ansys are provided, so researchers can run SpaceClaim on their own computer (not possible on the Alliance since all systems are Linux based).  There is however one potentially serious limitation: according to the CMC [https://www.cmc.ca/qsg-ansys-cadpass-r20/ Ansys Quick Start Guides] there may be a 64-core limit per user.
 
==== License server file ==== <!--T:92-->


<!--T:920-->
To use the SHARCNET Ansys license on any Alliance cluster, simply configure your <code>ansys.lic</code> file as follows:
<source lang="bash">
[username@cluster:~] cat ~/.licenses/ansys.lic
setenv("ANSYSLMD_LICENSE_FILE", "1055@license3.sharcnet.ca")
setenv("ANSYSLMD_LICENSE_FILE", "1055@license3.sharcnet.ca")
setenv("ANSYSLI_SERVERS", "2325@license3.sharcnet.ca")
setenv("ANSYSLI_SERVERS", "2325@license3.sharcnet.ca")
</source>


==== Query license server ==== <!--T:95-->


<!--T:930-->
To show the number of licenses in use by your username and the total in use by all users, run:


<!--T:1645-->
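The exact command block lives in the <i>Checking license usage</i> section near the end of this page; as a brief sketch (assuming the <code>ansys/2023R2</code> module and a configured <code>~/.licenses/ansys.lic</code>; the internal <code>vNNN</code> path differs for other versions):
<source lang="bash">
module load ansys/2023R2
$EBROOTANSYS/v232/licensingclient/linx64/lmutil lmstat -c $ANSYSLMD_LICENSE_FILE -S ansyslmd
</source>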


<!--T:933-->
If you discover any licenses unexpectedly in use by your username (usually due to ansys not exiting cleanly on gra-vdi), connect to the node where it is running, open a terminal window and run the following command to terminate the rogue processes: <code>pkill -9 -e -u $USER -f "ansys"</code>; your licenses should then be freed.  Note that gra-vdi consists of two nodes (gra-vdi3 and gra-vdi4) onto which researchers are randomly placed when connecting to gra-vdi.computecanada.ca with [[VNC#VDI_Nodes|TigerVNC]].  Therefore it's necessary to specify the full hostname (gra-vdi3.sharcnet.ca or gra-vdi4.sharcnet.ca) when connecting with tigervnc, to ensure you log in to the correct node before running pkill.


=== Local VDI modules === <!--T:93-->


<!--T:124-->
When using gra-vdi, researchers have the choice of loading Ansys modules from our global environment (after loading CcEnv) or loading Ansys modules installed locally on the machine itself (after loading SnEnv).  The local modules may be of interest as they include some Ansys programs and versions not yet supported by our standard environment.  When starting programs from local Ansys modules, you can select the CMC license server or accept the default SHARCNET license server.  Presently, the settings from <code>~/.licenses/ansys.lic</code> are not used by the local Ansys modules except when starting <code>runwb2</code>, where they will override the default SHARCNET license server settings. Suitable usage of Ansys programs on gra-vdi includes running a single test job interactively with up to 8 cores and/or 128G RAM, creating or modifying simulation input files, and post-processing or visualizing data.


==== Ansys modules ==== <!--T:950-->


<!--T:952-->
# Connect to gra-vdi.computecanada.ca with [[VNC#VDI_Nodes|TigerVNC]].
# Open a new terminal window and load a module:
#; <code>module load SnEnv ansys/2021R2</code>, or
#; <code>module load SnEnv ansys/2020R1</code>, or
#; <code>module load SnEnv ansys/2019R3</code>
# Start an Ansys program by issuing one of the following:
#; <code>runwb2|fluent|<b>cfx5</b>|icemcfd|apdl</code>
# Press <code>y</code> and <code>Enter</code> to accept the conditions
# Press <code>Enter</code> to accept the <code>n</code> option and use the SHARCNET license server by default (in the case of runwb2, <i>~/.licenses/ansysedt.lic</i> will be used if present, otherwise ANSYSLI_SERVERS and ANSYSLMD_LICENSE_FILE will be used if set in your environment, for example to some other remote license server).  If you change <code>n</code> to <code>y</code> and hit <code>Enter</code>, the CMC license server will be used.


<!--T:953-->
     4) CFX-Solver    (cfx5solve)


==== ansysedt modules==== <!--T:954-->


<!--T:955-->
# Connect to gra-vdi.computecanada.ca with [[VNC#VDI_Nodes|TigerVNC]].
# Open a new terminal window and load a module:
#; <code>module load SnEnv ansysedt/2021R2</code>, or
#; <code>module load SnEnv ansysedt/2021R1</code>
# Start the Ansys Electromagnetics Desktop program by typing the following command: <code>ansysedt</code>
# Press <code>y</code> and <code>Enter</code> to accept the conditions.  
# Press <code>Enter</code> to accept the <code>n</code> option and use the SHARCNET license server by default (note that <i>~/.licenses/ansysedt.lic</i> will be used if present, otherwise ANSYSLI_SERVERS and ANSYSLMD_LICENSE_FILE will be used if set in your environment, for example to some other remote license server).  If you change <code>n</code> to <code>y</code> and hit <code>Enter</code>, the CMC license server will be used.


<!--T:956-->
License feature preferences previously setup with <i>anslic_admin</i> are no longer supported following the recent SHARCNET license server update (2021-09-09).  If a license problem occurs, try removing the <code>~/.ansys</code> directory in your /home account to clear the settings.  If problems persist please contact our [[technical support]] and provide the contents of your <code>~/.licenses/ansys.lic</code> file.
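For example, from a terminal on the gra-vdi node where the problem occurred, the settings can be cleared as follows:
<source lang="bash">
# Remove cached Ansys license/interconnect settings; they are recreated on the next startup
rm -rf ~/.ansys
</source>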


= Additive Manufacturing = <!--T:96-->


<!--T:960-->
To get started configure your <code>~/.licenses/ansys.lic</code> file to point to a license server that has a valid Ansys Mechanical License.  This must be done on all systems where you plan to run the software.   


== Enable Additive == <!--T:97-->


<!--T:970-->
This section describes how to make the Ansys Additive Manufacturing ACT extension available for use in your project. The steps must be performed on each cluster, for each ansys module version where the extension will be used. Any extensions needed by your project will also need to be installed on the cluster as described below.  If you get warnings about missing but unneeded extensions (such as ANSYSMotion), uninstall them from your project.
 
=== Download Extension === <!--T:4773-->
* download AdditiveWizard.wbex from https://catalog.ansys.com/
* upload AdditiveWizard.wbex to the cluster where it will be used


=== Start Workbench === <!--T:98-->
* follow the Workbench section in [[ANSYS#Graphical_use|Graphical use above]].
* File -> Open your project file (ending in .wbpj) into Workbench gui


<!--T:981-->
===  Open Extension Manager === <!--T:99-->
* click ACT Start Page and the ACT Home page tab will open
* click Manage Extensions and the Extension Manager will open


=== Install Extension === <!--T:154-->
* click the box with the large + sign under the search bar
* navigate to select and install your AdditiveWizard.wbex file


=== Load Extension === <!--T:155-->
* click to highlight the AdditiveWizard box (loads the AdditiveWizard extension for current session only)
* click lower right corner arrow in the AdditiveWizard box and select <i>Load extension</i> (loads the extension for current AND future sessions)
 
=== Unload Extension === <!--T:156-->
* click to un-highlight the AdditiveWizard box (unloads extension for the current session only)
* click lower right corner arrow in the AdditiveWizard box and select <I>Do not load as default</i> (extension will not load for future sessions)


== Run Additive == <!--T:128-->


<!--T:132-->
A user can run a single Ansys Additive Manufacturing job on gra-vdi with up to 16 cores as follows:  


<!--T:134-->
* Start Workbench on Gra-vdi as described above in <b>Enable Additive</b>.
* click File -> Open and select <i>test.wbpj</i> then click Open
* click View -> reset workspace if you get a grey screen
<!--T:157-->
Check utilization:
* open another terminal and run: <code>top -u $USER</code> or <code>ps u -u $USER | grep ansys</code>
* kill rogue processes from previous runs: <code>pkill -9 -e -u $USER -f "ansys|mwrpcss|mwfwrapper|ENGINE"</code>
 
<!--T:2851-->
Please note that rogue processes can persistently tie up licenses between gra-vdi login sessions or cause other unusual errors when trying to start gui programs on gra-vdi.  Although rare, rogue processes can occur if an ansys gui session (fluent, workbench, etc.) is not cleanly terminated by the user before vncviewer is terminated, either manually or unexpectedly, for instance due to a transient network outage or hung filesystem.  If the latter is to blame, the processes may not be killable until normal disk access is restored.


===Cluster=== <!--T:141-->


<!--T:142-->
Before submitting a newly uploaded Additive project to a cluster queue (with <code>sbatch scriptname</code>), certain preparations must be done.  To begin, open your simulation with the Workbench gui (as described in the <code>Enable Additive</code> section above) in the same directory that your job will be submitted from, and then save it again. Be sure to use the same ansys module version that will be used for the job.  Next create a Slurm script (as explained in the <i>Cluster Batch Job Submission - WORKBENCH</i> section above). To perform parametric studies, change <code>Update()</code> to <code>UpdateAllDesignPoints()</code> in the Slurm script.  Determine the optimal number of cores and memory by submitting several short test jobs. To avoid needing to manually clear the solution <b>and</b> recreate all the design points in Workbench between each test run, either 1) change <code>Save(Overwrite=True)</code> to <code>Save(Overwrite=False)</code>, or 2) save a copy of the original YOURPROJECT.wbpj file and corresponding YOURPROJECT_files directory.  Optionally, create and then manually run a replay file on the cluster in the respective test case directory between each run, noting that a single replay file can be used in different directories by opening it in a text editor and changing the internal FilePath setting.


<!--T:144-->
module load ansys/2019R3
  rm -f test_files/.lock
  runwb2 -R myreplay.wbjn


<!--T:148-->
Once your additive job has been running for a few minutes a snapshot of its resource utilization on the compute node(s) can be obtained with the following srun command.  Sample output corresponding to an eight core submission script is shown next.  It can be seen that two nodes were selected by the scheduler:


<!--T:149-->
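The command block itself was not carried over here; a minimal sketch of the idea is shown below (the <code>jobid</code> value is a placeholder for your own job number, and the exact options you prefer may differ):
<source lang="bash">
# Attach an interactive top to the compute node(s) of a running job
srun --jobid=jobid --pty top -u $USER
</source>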


<!--T:151-->
After a job completes, its "Job Wall-clock time" can be obtained from <code>seff myjobid</code>.  Using this value, scaling tests can be performed by submitting short test jobs with an increasing number of cores.  If the Wall-clock time decreases by ~50% when the number of cores is doubled, then additional cores may be considered.
 
= Online documentation = <!--T:8-->
The full Ansys documentation for versions back to 19.2 can be accessed by following these steps:
# Connect to <b>gra-vdi.computecanada.ca</b> with tigervnc as described [[VNC#VDI_Nodes|here]].
# If the Firefox browser or the Ansys Workbench is open, close it now.
# Start Firefox by clicking <I>Applications -> Internet -> Firefox</I>.
# Open a <b><i>new</I></b> terminal window by clicking  <I>Applications -> System Tools -> Mate Terminal</I>.
# Start Workbench by typing the following in your terminal: <i>module load CcEnv StdEnv/2023 ansys; runwb2</i>
# Go to the upper Workbench menu bar and click  <I>Help -> ANSYS Workbench Help</I>.  The <b>Workbench Users' Guide</b> should appear loaded in Firefox.
# At this point Workbench is no longer needed, so close it by clicking the <I>Unsaved Project - Workbench</I> tab located along the bottom frame (doing this will bring Workbench into focus) and then click <I>File -> Exit</I>.
# In the top middle of the Ansys documentation page, click the word <I>HOME</I> located just left of <I>API DOCS</I>.
# Now scroll down and you should see a list of Ansys product icons and/or alphabetical ranges.
# Select a product to view its documentation.  The documentation for the latest release version will be displayed by default.  Change the version by clicking the <I>Release Year</I> pull down located above and just to the right of the Ansys documentation page search bar.
# To search for documentation corresponding to a different Ansys product, click <I>HOME</I> again.
 
</translate>


Configuring your license file

Our module for Ansys is designed to look for license information in a few places. One of those places is your /home folder. You can specify your license server by creating a file named $HOME/.licenses/ansys.lic consisting of two lines as shown. Customize the file by replacing FLEXPORT, INTEPORT and LICSERVER with appropriate values for your server.

FILE: ansys.lic
setenv("ANSYSLMD_LICENSE_FILE", "FLEXPORT@LICSERVER")
setenv("ANSYSLI_SERVERS", "INTEPORT@LICSERVER")

The following table provides established values for the CMC and SHARCNET license servers. To use a different server, locate the corresponding values as explained in Local license servers.

TABLE: Preconfigured license servers
License  | System/Cluster                     | LICSERVER            | FLEXPORT | INTEPORT | VENDPORT | NOTICES
CMC      | beluga                             | 10.20.73.21          | 6624     | 2325     | n/a      | None
CMC      | cedar                              | 172.16.0.101         | 6624     | 2325     | n/a      | None
CMC      | graham                             | 199.241.167.222      | 6624     | 2325     | n/a      | None
CMC      | narval                             | 10.100.64.10         | 6624     | 2325     | n/a      | None
SHARCNET | beluga/cedar/graham/gra-vdi/narval | license3.sharcnet.ca | 1055     | 2325     | n/a      | None
SHARCNET | niagara                            | localhost            | 1055     | 2325     | 1793     | None

Researchers who purchase a CMC license subscription must send their Alliance account username to <cmcsupport@cmc.ca>, otherwise license checkouts will fail. The number of cores that can be used with a CMC license is described in the Other Tricks and Tips sections of the Ansys Electronics Desktop and Ansys Mechanical/Fluids quick start guides.

Local license servers

Before a local Ansys license server can be reached from an Alliance cluster, firewall changes will need to be done on both the server side and the Alliance side. For many local institutional servers this work has already been done. In such cases you simply need to contact your local Ansys license server administrator and request 1) the fully qualified hostname (LICSERVER) of the server; 2) the Ansys flex port commonly 1055 (FLEXPORT); and 3) the Ansys licensing interconnect port commonly 2325 (INTEPORT). With this information you can then immediately configure your ansys.lic file as described above and theoretically begin submitting jobs.

If however your local license server has never been set up for use on the Alliance, you will additionally need to request 4) the static vendor port number (VENDPORT) from your local Ansys server administrator. Once you have gathered all four pieces of information, send it to technical support, being sure to mention which Alliance cluster(s) you want to run Ansys on. We will then arrange for the Alliance firewall to be opened so that license requests from the cluster(s) can reach your server. You will then also receive a range of IP addresses to pass to your server administrator so the local firewall can likewise be opened to allow inbound license connections to reach your server on the 3 ports (FLEXPORT, INTEPORT, VENDPORT) from the requested Alliance system(s).

Checking license usage

Ansys comes with an lmutil tool that can be used to check your license usage. Before using it verify your ansys.lic is configured. Then run the following two commands on a cluster that you are set up to use:

[name@server ~]$ module load ansys/2023R2
[name@server ~]$ $EBROOTANSYS/v232/licensingclient/linx64/lmutil lmstat -c $ANSYSLMD_LICENSE_FILE -S ansyslmd

If you load a different version of the Ansys module, you will need to modify the path to the lmutil command.
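For example, the internal vNNN directory follows the release number (v232 for 2023R2 above, v195 for 2019R3 elsewhere on this page), so for a hypothetical ansys/2024R1 module the equivalent commands would likely be:

[name@server ~]$ module load ansys/2024R1
[name@server ~]$ $EBROOTANSYS/v241/licensingclient/linx64/lmutil lmstat -c $ANSYSLMD_LICENSE_FILE -S ansyslmd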

Version compatibility

Platform Support

The Ansys Platform Support page states "the current release has been tested to read and open databases from the five previous releases". This implies that simulations developed with older versions of Ansys should generally work with newer module versions (forward compatible for five releases). The reverse, however, cannot be assumed to be true. The Platform Support link also provides version-based software and hardware compatibility information to determine the optimal (supported) platform infrastructure that Ansys can be run on. The features supported under Windows vs Linux systems can be displayed by clicking the Platform Support by Application / Product link. Similar information for all of the above may be found by clicking the Previous Releases link located at the lower left corner of the Platform Support page.

What's New

Ansys posts Product Release and Updates for the latest releases. Similar information for previous releases can generally be pulled up for various application topics by visiting the Ansys blog page and using the FILTERS search bar. For example, searching on What’s New Fluent 2024 gpu pulls up a document titled What’s New for Ansys Fluent in 2024 R1? containing a wealth of the latest gpu support information. Specifying a version number in the Press Release search bar is also a good way to find new release information. At the time of this writing, Ansys 2024R2 is the current release; it will be installed when interest is expressed or there is an evident need to support newer hardware or solver capabilities. To request that a new version be installed, submit a problem ticket.

Service Packs

Ansys regularly releases service packs to fix and enhance various issues with its major releases. Therefore, starting with Ansys 2024, a separate ansys module will appear on the clusters, with a decimal and two digits after the release number, whenever a service pack has been installed over the initial release. For example, the initial 2024 release without any service pack applied may be loaded by doing module load ansys/2024R1, while a module with Service Pack 3 applied may be loaded by doing module load ansys/2024R1.03 instead. If a service pack is already available by the time a new release is to be installed, then most likely only a module for that service pack number will be installed, unless a request to also install the initial release is received.

Most users will likely want to load the latest module version equipped with the latest installed service pack, which can be achieved by simply doing module load ansys. While it's not expected that service packs will impact numerical results, the changes they make are extensive, so if computations have already been done with the initial release or an earlier service pack then some groups may prefer to continue using it. Having separate modules for each service pack makes this possible. Starting with Ansys 2024R1, a detailed description of what each service pack does can be found by searching this link for Service Pack Details. Future versions will presumably be similarly searchable by manually modifying the version number contained in the link.
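As a concrete illustration, the naming scheme described above translates into module commands such as:

[name@server ~]$ module load ansys/2024R1      # initial 2024R1 release (no service pack)
[name@server ~]$ module load ansys/2024R1.03   # 2024R1 with Service Pack 3 applied
[name@server ~]$ module load ansys             # latest installed release and service pack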

Cluster batch job submission

The Ansys software suite comes with multiple implementations of MPI to support parallel computation. Unfortunately, none of them support our Slurm scheduler. For this reason, we need special instructions for each Ansys package on how to start a parallel job. In the sections below, we give examples of submission scripts for some of the packages. While the Slurm scripts should work on all clusters, Niagara users may need to make some additional changes, covered here.

Ansys Fluent

Typically, you would use the following procedure to run Fluent on one of our clusters:

  1. Prepare your Fluent job using Fluent from the Ansys Workbench on your desktop machine up to the point where you would run the calculation.
  2. Export the "case" file with File > Export > Case… or find the folder where Fluent saves your project's files. The case file will often have a name like FFF-1.cas.gz.
  3. If you already have data from a previous calculation, which you want to continue, export a "data" file as well (File > Export > Data…) or find it in the same project folder (FFF-1.dat.gz).
  4. Transfer the case file (and if needed the data file) to a directory on the /project or /scratch filesystem on the cluster. When exporting, you can save the file(s) under a more instructive name than FFF-1.* or rename them when they are uploaded.
  5. Now you need to create a "journal" file. Its purpose is to load the case file (and optionally the data file), run the solver and finally write the results. See examples below and remember to adjust the filenames and desired number of iterations.
  6. If jobs frequently fail to start due to license shortages and manual resubmission of failed jobs is not convenient, consider modifying your script to requeue your job (up to 4 times) as shown under the by node + requeue tab further below. Be aware that doing this will also requeue simulations that fail due to non-license related issues (such as divergence), resulting in lost compute time. Therefore it is strongly recommended to monitor and inspect each Slurm output file to confirm each requeue attempt is license related. When it is determined that a job is requeued due to a simulation issue, immediately manually kill the job progression with scancel jobid and correct the problem.
  7. After running the job, you can download the data file and import it back into Fluent with File > Import > Data….

Slurm scripts

General purpose

Most Fluent jobs should use the following by node script to minimize solution latency and maximize performance over as few nodes as possible. Very large jobs, however, might wait less in the queue if they use a by core script; on the other hand, the startup time of a job using many nodes can be significantly longer, thus offsetting some of the benefits. In addition, be aware that running large jobs over an unspecified number of potentially very many nodes will make them far more vulnerable to crashing if any of the compute nodes fail during the simulation. The scripts will ensure Fluent uses shared memory for communication when run on a single node, or distributed memory (utilizing MPI and the appropriate HPC interconnect) when run over multiple nodes. The two narval tabs may be useful as a more robust alternative if Fluent hangs during the initial auto mesh partitioning phase when using the standard intel-based scripts with the parallel solver. The other option would be to manually perform the mesh partitioning in the Fluent gui and then try to run the job again on the cluster with the intel scripts. Doing so will allow you to inspect the partition statistics and specify the partitioning method to obtain an optimal result. The number of mesh partitions should be an integral multiple of the number of cores. For optimal efficiency, ensure there are at least 10000 cells per core (for example, a mesh with 2 million cells should be run on at most about 200 cores), otherwise specifying too many cores will eventually result in poor performance as the scaling drops off.

File : script-flu-bynode-intel.sh

#!/bin/bash

#SBATCH --account=def-group   # Specify account name
#SBATCH --time=00-03:00       # Specify time limit dd-hh:mm
#SBATCH --nodes=1             # Specify number of compute nodes (narval 1 node max)
#SBATCH --ntasks-per-node=32  # Specify number of cores per node (graham 32 or 44, cedar 48, beluga 40, narval 64, or less)
#SBATCH --mem=0               # Do not change (allocates all memory per compute node)
#SBATCH --cpus-per-task=1     # Do not change

module load StdEnv/2023       # Do not change
module load ansys/2023R2      # Specify version (beluga, cedar, graham, narval)

#module load StdEnv/2020      # no longer supported
#module load ansys/2019R3     # or newer versions (narval only)
#module load ansys/2021R2     # or newer versions (beluga, cedar, graham)

MYJOURNALFILE=sample.jou      # Specify your journal file name
MYVERSION=3d                  # Specify 2d, 2ddp, 3d or 3ddp

# ------- do not change any lines below --------

if [[ "${CC_CLUSTER}" == narval ]]; then
 if [ "$EBVERSIONGENTOO" == 2020 ]; then
   module load intel/2021 intelmpi
   export INTELMPI_ROOT=$I_MPI_ROOT/mpi/latest
   export HCOLL_RCACHE=^ucs
 elif [ "$EBVERSIONGENTOO" == 2023 ]; then
   module load intel/2023 intelmpi
   export INTELMPI_ROOT=$I_MPI_ROOT
 fi
 unset I_MPI_HYDRA_BOOTSTRAP_EXEC_EXTRA_ARGS
 unset I_MPI_ROOT
fi

slurm_hl2hl.py --format ANSYS-FLUENT > /tmp/machinefile-$SLURM_JOB_ID
NCORES=$((SLURM_NNODES * SLURM_NTASKS_PER_NODE * SLURM_CPUS_PER_TASK))

if [ "$SLURM_NNODES" == 1 ]; then
 fluent -g $MYVERSION -t $NCORES -affinity=0 -mpi=intel -pshmem -i $MYJOURNALFILE
else
 fluent -g $MYVERSION -t $NCORES -affinity=0 -mpi=intel -pib -cnf=/tmp/machinefile-$SLURM_JOB_ID -i $MYJOURNALFILE
fi


File : script-flu-bycore-intel.sh

#!/bin/bash

#SBATCH --account=def-group   # Specify account
#SBATCH --time=00-03:00       # Specify time limit dd-hh:mm
##SBATCH --nodes=1            # Uncomment to specify (narval 1 node max)
#SBATCH --ntasks=16           # Specify total number of cores
#SBATCH --mem-per-cpu=4G      # Specify memory per core
#SBATCH --cpus-per-task=1     # Do not change

module load StdEnv/2023       # Do not change
module load ansys/2023R2      # Specify version (beluga, cedar, graham, narval)

#module load StdEnv/2020      # no longer supported
#module load ansys/2019R3     # or newer versions (narval only)
#module load ansys/2021R2     # or newer versions (beluga, cedar, graham)

MYJOURNALFILE=sample.jou      # Specify your journal file name
MYVERSION=3d                  # Specify 2d, 2ddp, 3d or 3ddp

# ------- do not change any lines below --------

if [[ "${CC_CLUSTER}" == narval ]]; then
 if [ "$EBVERSIONGENTOO" == 2020 ]; then
   module load intel/2021 intelmpi
   export INTELMPI_ROOT=$I_MPI_ROOT/mpi/latest
   export HCOLL_RCACHE=^ucs
 elif [ "$EBVERSIONGENTOO" == 2023 ]; then
   module load intel/2023 intelmpi
   export INTELMPI_ROOT=$I_MPI_ROOT
 fi
 unset I_MPI_HYDRA_BOOTSTRAP_EXEC_EXTRA_ARGS
 unset I_MPI_ROOT
fi

slurm_hl2hl.py --format ANSYS-FLUENT > /tmp/machinefile-$SLURM_JOB_ID
NCORES=$((SLURM_NTASKS * SLURM_CPUS_PER_TASK))

if [ "$SLURM_NNODES" == 1 ]; then
 fluent -g $MYVERSION -t $NCORES -affinity=0 -mpi=intel -pshmem -i $MYJOURNALFILE
else
 fluent -g $MYVERSION -t $NCORES -affinity=0 -mpi=intel -pib -cnf=/tmp/machinefile-$SLURM_JOB_ID -i $MYJOURNALFILE
fi


File : script-flu-bynode-openmpi.sh

#!/bin/bash

#SBATCH --account=def-group   # Specify account name
#SBATCH --time=00-03:00       # Specify time limit dd-hh:mm
#SBATCH --nodes=1             # Specify number of compute nodes
#SBATCH --ntasks-per-node=64  # Specify number of cores per node (narval 64 or less)
#SBATCH --mem=0               # Do not change (allocates all memory per compute node)
#SBATCH --cpus-per-task=1     # Do not change

module load StdEnv/2023       # Do not change
module load ansys/2023R2      # Specify version (narval only)

MYJOURNALFILE=sample.jou      # Specify your journal file name
MYVERSION=3d                  # Specify 2d, 2ddp, 3d or 3ddp

# ------- do not change any lines below --------

export OPENMPI_ROOT=$EBROOTOPENMPI
export OMPI_MCA_hwloc_base_binding_policy=core
slurm_hl2hl.py --format ANSYS-FLUENT > /tmp/mf-$SLURM_JOB_ID
for i in `cat /tmp/mf-$SLURM_JOB_ID | uniq`; do echo "${i}:$(cat /tmp/mf-$SLURM_JOB_ID | grep $i | wc -l)" >> /tmp/machinefile-$SLURM_JOB_ID; done
NCORES=$((SLURM_NNODES * SLURM_NTASKS_PER_NODE * SLURM_CPUS_PER_TASK))

if [ "$SLURM_NNODES" == 1 ]; then
 fluent -g $MYVERSION -t $NCORES -affinity=0 -mpi=openmpi -pshmem -i $MYJOURNALFILE
else
 fluent -g $MYVERSION -t $NCORES -affinity=0 -mpi=openmpi -pib -cnf=/tmp/machinefile-$SLURM_JOB_ID -i $MYJOURNALFILE
fi


File : script-flu-bycore-openmpi.sh

#!/bin/bash

#SBATCH --account=def-group   # Specify account name
#SBATCH --time=00-03:00       # Specify time limit dd-hh:mm
##SBATCH --nodes=1            # Uncomment to specify number of compute nodes (optional)
#SBATCH --ntasks=16           # Specify total number of cores
#SBATCH --mem-per-cpu=4G      # Specify memory per core
#SBATCH --cpus-per-task=1     # Do not change

module load StdEnv/2023       # Do not change     
module load ansys/2023R2      # Specify version (narval only)

MYJOURNALFILE=sample.jou      # Specify your journal file name
MYVERSION=3d                  # Specify 2d, 2ddp, 3d or 3ddp

# ------- do not change any lines below --------

export OPENMPI_ROOT=$EBROOTOPENMPI
export OMPI_MCA_hwloc_base_binding_policy=core
slurm_hl2hl.py --format ANSYS-FLUENT > /tmp/mf-$SLURM_JOB_ID
for i in `cat /tmp/mf-$SLURM_JOB_ID | uniq`; do echo "${i}:$(cat /tmp/mf-$SLURM_JOB_ID | grep $i | wc -l)" >> /tmp/machinefile-$SLURM_JOB_ID; done
NCORES=$((SLURM_NTASKS * SLURM_CPUS_PER_TASK))

if [ "$SLURM_NNODES" == 1 ]; then
 fluent -g $MYVERSION -t $NCORES -affinity=0 -mpi=openmpi -pshmem -i $MYJOURNALFILE
else
 fluent -g $MYVERSION -t $NCORES -affinity=0 -mpi=openmpi -pib -cnf=/tmp/machinefile-$SLURM_JOB_ID -i $MYJOURNALFILE
fi


File : script-flu-bynode-intel-nia.sh

#!/bin/bash

#SBATCH --account=def-group      # Specify account name
#SBATCH --time=00-03:00          # Specify time limit dd-hh:mm
#SBATCH --nodes=1                # Specify number of compute nodes
#SBATCH --ntasks-per-node=80     # Specify number cores per node (niagara 80 or less)
#SBATCH --mem=0                  # Do not change (allocate all memory per compute node)
#SBATCH --cpus-per-task=1        # Do not change (required parameter)

module load CCEnv StdEnv/2023    # Do not change
module load ansys/2023R2         # Specify version (niagara only)

MYJOURNALFILE=sample.jou         # Specify your journal file name
MYVERSION=3d                     # Specify 2d, 2ddp, 3d or 3ddp

# These settings are used instead of your ~/.licenses/ansys.lic
LICSERVER=license3.sharcnet.ca   # Specify license server hostname
FLEXPORT=1055                    # Specify server flex port
INTEPORT=2325                    # Specify server interconnect port
VENDPORT=1793                    # Specify server vendor port

# ------- do not change any lines below --------

ssh nia-gw -fNL $FLEXPORT:$LICSERVER:$FLEXPORT      # Do not change
ssh nia-gw -fNL $INTEPORT:$LICSERVER:$INTEPORT      # Do not change
ssh nia-gw -fNL $VENDPORT:$LICSERVER:$VENDPORT      # Do not change
export ANSYSLMD_LICENSE_FILE=$FLEXPORT@localhost    # Do not change
export ANSYSLI_SERVERS=$INTEPORT@localhost          # Do not change

slurm_hl2hl.py --format ANSYS-FLUENT > /tmp/machinefile-$SLURM_JOB_ID
NCORES=$((SLURM_NNODES * SLURM_NTASKS_PER_NODE * SLURM_CPUS_PER_TASK))

if [ ! -L "$HOME/.ansys" ]; then
  echo "ERROR: A link to a writable .ansys directory does not exist."
  echo 'Remove ~/.ansys if one exists and then run: ln -s $SCRATCH/.ansys ~/.ansys'
  echo "Then try submitting your job again. Aborting the current job now!"
elif [ ! -L "$HOME/.fluentconf" ]; then
  echo "ERROR: A link to a writable .fluentconf directory does not exist."
  echo 'Remove ~/.fluentconf if one exists and run: ln -s $SCRATCH/.fluentconf ~/.fluentconf'
  echo "Then try submitting your job again. Aborting the current job now!"
elif [ ! -L "$HOME/.flrecent" ]; then
  echo "ERROR: A link to a writable .flrecent file does not exist."
  echo 'Remove ~/.flrecent if one exists and then run: ln -s $SCRATCH/.flrecent ~/.flrecent'
  echo "Then try submitting your job again. Aborting the current job now!"
else
  mkdir -pv $SCRATCH/.ansys
  mkdir -pv $SCRATCH/.fluentconf
  touch $SCRATCH/.flrecent
  if [ "$SLURM_NNODES" == 1 ]; then
   fluent -g $MYVERSION -t $NCORES -affinity=0 -mpi=intel -pshmem -i $MYJOURNALFILE
  else
   fluent -g $MYVERSION -t $NCORES -affinity=0 -mpi=intel -pib -cnf=/tmp/machinefile-$SLURM_JOB_ID -i $MYJOURNALFILE
  fi
fi


License requeue

The scripts in this section should only be used with Fluent jobs that are known to complete normally without generating any errors in the output, but that typically require multiple requeue attempts to check out licenses. They are not recommended for Fluent jobs that may 1) run for a long time before crashing, or 2) run to completion but contain unresolved journal file warnings, since in both cases the simulations will be repeated from the beginning until the maximum number of requeue attempts specified by the array value is reached. For these types of jobs, the general purpose scripts above should be used instead.

File : script-flu-bynode+requeue.sh

#!/bin/bash

#SBATCH --account=def-group   # Specify account
#SBATCH --time=00-03:00       # Specify time limit dd-hh:mm
#SBATCH --nodes=1             # Specify number of compute nodes (narval 1 node max)
#SBATCH --ntasks-per-node=32  # Specify number of cores per node (graham 32 or 44, cedar 48, beluga 40, or less)
#SBATCH --mem=0               # Do not change (allocates all memory per compute node)
#SBATCH --cpus-per-task=1     # Do not change
#SBATCH --array=1-5%1         # Specify number of requeue attempts (2 or more, 5 is shown)

module load StdEnv/2023       # Do not change
module load ansys/2023R2      # Specify version (beluga, cedar, graham, narval)

#module load StdEnv/2020      # no longer supported
#module load ansys/2019R3     # or newer versions (narval only)
#module load ansys/2021R2     # or newer versions (beluga, cedar, graham)

MYJOURNALFILE=sample.jou      # Specify your journal file name
MYVERSION=3d                  # Specify 2d, 2ddp, 3d or 3ddp

# ------- do not change any lines below --------

if [[ "${CC_CLUSTER}" == narval ]]; then
 if [ "$EBVERSIONGENTOO" == 2020 ]; then
   module load intel/2021 intelmpi
   export INTELMPI_ROOT=$I_MPI_ROOT/mpi/latest
   export HCOLL_RCACHE=^ucs
 elif [ "$EBVERSIONGENTOO" == 2023 ]; then
   module load intel/2023 intelmpi
   export INTELMPI_ROOT=$I_MPI_ROOT
 fi
 unset I_MPI_HYDRA_BOOTSTRAP_EXEC_EXTRA_ARGS
 unset I_MPI_ROOT
fi

slurm_hl2hl.py --format ANSYS-FLUENT > /tmp/machinefile-$SLURM_JOB_ID
NCORES=$((SLURM_NNODES * SLURM_NTASKS_PER_NODE * SLURM_CPUS_PER_TASK))

if [ "$SLURM_NNODES" == 1 ]; then
 fluent -g $MYVERSION -t $NCORES -affinity=0 -mpi=intel -pshmem -i $MYJOURNALFILE
else
 fluent -g $MYVERSION -t $NCORES -affinity=0 -mpi=intel -pib -cnf=/tmp/machinefile-$SLURM_JOB_ID -i $MYJOURNALFILE
fi
if [ $? -eq 0 ]; then
    echo "Job completed successfully! Exiting now."
    scancel $SLURM_ARRAY_JOB_ID
else
    echo "Job attempt $SLURM_ARRAY_TASK_ID of $SLURM_ARRAY_TASK_COUNT failed due to license or simulation issue!"
    if [ $SLURM_ARRAY_TASK_ID -lt $SLURM_ARRAY_TASK_COUNT ]; then
       echo "Resubmitting job now …"
    else
       echo "All job attempts failed exiting now."
    fi
fi


File : script-flu-bycore+requeue.sh

#!/bin/bash

#SBATCH --account=def-group   # Specify account
#SBATCH --time=00-03:00       # Specify time limit dd-hh:mm
##SBATCH --nodes=1            # Uncomment to specify (narval 1 node max) 
#SBATCH --ntasks=16           # Specify total number of cores
#SBATCH --mem-per-cpu=4G      # Specify memory per core
#SBATCH --cpus-per-task=1     # Do not change
#SBATCH --array=1-5%1         # Specify number of requeue attempts (2 or more, 5 is shown)

module load StdEnv/2023       # Do not change
module load ansys/2023R2      # Specify version (beluga, cedar, graham, narval)

#module load StdEnv/2020      # no longer supported
#module load ansys/2019R3     # or newer versions (narval only)
#module load ansys/2021R2     # or newer versions (beluga, cedar, graham)

MYJOURNALFILE=sample.jou      # Specify your journal file name
MYVERSION=3d                  # Specify 2d, 2ddp, 3d or 3ddp

# ------- do not change any lines below --------

if [[ "${CC_CLUSTER}" == narval ]]; then
 if [ "$EBVERSIONGENTOO" == 2020 ]; then
   module load intel/2021 intelmpi
   export INTELMPI_ROOT=$I_MPI_ROOT/mpi/latest
   export HCOLL_RCACHE=^ucs
 elif [ "$EBVERSIONGENTOO" == 2023 ]; then
   module load intel/2023 intelmpi
   export INTELMPI_ROOT=$I_MPI_ROOT
 fi
 unset I_MPI_HYDRA_BOOTSTRAP_EXEC_EXTRA_ARGS
 unset I_MPI_ROOT
fi

slurm_hl2hl.py --format ANSYS-FLUENT > /tmp/machinefile-$SLURM_JOB_ID
NCORES=$((SLURM_NTASKS * SLURM_CPUS_PER_TASK))

if [ "$SLURM_NNODES" == 1 ]; then
 fluent -g $MYVERSION -t $NCORES -affinity=0 -mpi=intel -pshmem -i $MYJOURNALFILE
else
 fluent -g $MYVERSION -t $NCORES -affinity=0 -mpi=intel -pib -cnf=/tmp/machinefile-$SLURM_JOB_ID -i $MYJOURNALFILE
fi
if [ $? -eq 0 ]; then
    echo "Job completed successfully! Exiting now."
    scancel $SLURM_ARRAY_JOB_ID
else
    echo "Job attempt $SLURM_ARRAY_TASK_ID of $SLURM_ARRAY_TASK_COUNT failed due to license or simulation issue!"
    if [ $SLURM_ARRAY_TASK_ID -lt $SLURM_ARRAY_TASK_COUNT ]; then
       echo "Resubmitting job now …"
    else
       echo "All job attempts failed exiting now."
    fi
fi


Solution restart

The following two scripts are provided to automate restarting very large jobs that require more than the typical seven-day maximum runtime window available on most clusters. Jobs are restarted from the most recent saved time step files. A fundamental requirement is that the first time step can be completed within the requested job array time limit (specified at the top of your Slurm script) when starting a simulation from an initialized solution field. It is assumed that a standard fixed time step size is being used. To begin, a working set of sample.cas, sample.dat and sample.jou files must be present. Next edit your sample.jou file to contain /solve/dual-time-iterate 1 and /file/auto-save/data-frequency 1. Then create a restart journal file by doing cp sample.jou sample-restart.jou and edit the sample-restart.jou file to contain /file/read-cas-data sample-restart instead of /file/read-cas-data sample, and comment out the initialization line with a semicolon, for instance ;/solve/initialize/initialize-flow. If your 2nd and subsequent time steps are known to run twice as fast as the initial time step, edit sample-restart.jou to specify /solve/dual-time-iterate 2. By doing this, the solution will only be restarted after two time steps are completed following the initial time step. An output file for each time step will still be saved in the output subdirectory. The value 2 is arbitrary but should be chosen such that the time for 2 steps fits within the job array time limit. Doing this will minimize the number of solution restarts, which are computationally expensive. If your first time step performed by sample.jou starts from a converged (previous) solution, choose 1 instead of 2, since likely all time steps will require a similar amount of wall time to complete. Assuming 2 is chosen, the total simulated time will be 1*Dt+2*Nrestart*Dt, where Nrestart is the number of solution restarts specified in the script. The total number of time steps (and hence the number of output files generated) will therefore be 1+2*Nrestart. The value for the time resource request should be chosen so the initial time step and subsequent time steps will complete comfortably within the Slurm time window, specifiable up to a maximum of "#SBATCH --time=07-00:00" days.
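A minimal sketch of the key journal lines implied by the description above (using the sample/sample-restart names from this section and the /file/read-case-data form used by the journal examples further below; your actual journals will contain additional commands such as auto-save root names and a final write):

; sample.jou - initial run: read case/data, initialize, take one time step
/file/read-case-data sample
/solve/initialize/initialize-flow
/file/auto-save/data-frequency 1
/solve/dual-time-iterate 1

; sample-restart.jou - restart runs: read the latest saved files, skip initialization, take two time steps
/file/read-case-data sample-restart
;/solve/initialize/initialize-flow
/file/auto-save/data-frequency 1
/solve/dual-time-iterate 2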

File : script-flu-bynode+restart.sh

#!/bin/bash

#SBATCH --account=def-group   # Specify account
#SBATCH --time=07-00:00       # Specify time limit dd-hh:mm
#SBATCH --nodes=1             # Specify number of compute nodes (narval 1 node max)
#SBATCH --ntasks-per-node=32  # Specify number of cores per node (graham 32 or 44, cedar 48, beluga 40, narval 64, or less)
#SBATCH --mem=0               # Do not change (allocates all memory per compute node)
#SBATCH --cpus-per-task=1     # Do not change
#SBATCH --array=1-5%1         # Specify number of solution restarts (2 or more, 5 is shown)

module load StdEnv/2023       # Do not change
module load ansys/2023R2      # Specify version (beluga, cedar, graham, narval)

#module load StdEnv/2020      # no longer supported
#module load ansys/2019R3     # or newer versions (narval only)
#module load ansys/2021R2     # or newer versions (beluga, cedar, graham)

MYVERSION=3d                        # Specify 2d, 2ddp, 3d or 3ddp
MYJOUFILE=sample.jou                # Specify your journal filename
MYJOUFILERES=sample-restart.jou     # Specify journal restart filename
MYCASFILERES=sample-restart.cas.h5  # Specify cas restart filename
MYDATFILERES=sample-restart.dat.h5  # Specify dat restart filename

# ------- do not change any lines below --------

if [[ "${CC_CLUSTER}" == narval ]]; then
 if [ "$EBVERSIONGENTOO" == 2020 ]; then
   module load intel/2021 intelmpi
   export INTELMPI_ROOT=$I_MPI_ROOT/mpi/latest
   export HCOLL_RCACHE=^ucs
 elif [ "$EBVERSIONGENTOO" == 2023 ]; then
   module load intel/2023 intelmpi
   export INTELMPI_ROOT=$I_MPI_ROOT
 fi
 unset I_MPI_HYDRA_BOOTSTRAP_EXEC_EXTRA_ARGS
 unset I_MPI_ROOT
fi

slurm_hl2hl.py --format ANSYS-FLUENT > /tmp/machinefile-$SLURM_JOB_ID
NCORES=$((SLURM_NNODES * SLURM_NTASKS_PER_NODE * SLURM_CPUS_PER_TASK))

# Run Fluent, using the restart journal ($MYJOUFILERES) for array tasks after the first
if [ "$SLURM_NNODES" == 1 ]; then
  if [ "$SLURM_ARRAY_TASK_ID" == 1 ]; then
    fluent -g $MYVERSION -t $NCORES -affinity=0 -i $MYJOUFILE
  else
    fluent -g $MYVERSION -t $NCORES -affinity=0 -i $MYJOUFILERES
  fi
else 
  if [ "$SLURM_ARRAY_TASK_ID" == 1 ]; then
    fluent -g $MYVERSION -t $NCORES -affinity=0 -cnf=/tmp/machinefile-$SLURM_JOB_ID -mpi=intel -ssh -i $MYJOUFILE
  else
    fluent -g $MYVERSION -t $NCORES -affinity=0 -cnf=/tmp/machinefile-$SLURM_JOB_ID -mpi=intel -ssh -i $MYJOUFILERES
  fi
fi
if [ $? -eq 0 ]; then
    echo
    echo "SLURM_ARRAY_TASK_ID  = $SLURM_ARRAY_TASK_ID"
    echo "SLURM_ARRAY_TASK_COUNT = $SLURM_ARRAY_TASK_COUNT"
    echo
    if [ $SLURM_ARRAY_TASK_ID -lt $SLURM_ARRAY_TASK_COUNT ]; then
      echo "Restarting job with the most recent output dat file …"
      ln -sfv output/$(ls -ltr output | grep .cas | tail -n1 | awk '{print $9}') $MYCASFILERES
      ln -sfv output/$(ls -ltr output | grep .dat | tail -n1 | awk '{print $9}') $MYDATFILERES
      ls -lh sample* output/*
    else
      echo "Job completed successfully! Exiting now."
      scancel $SLURM_ARRAY_JOB_ID
     fi
else
     echo "Simulation failed. Exiting …"
fi


File : script-flu-bycore+restart.sh

#!/bin/bash

#SBATCH --account=def-group   # Specify account
#SBATCH --time=00-03:00       # Specify time limit dd-hh:mm
##SBATCH --nodes=1            # Uncomment to specify (narval 1 node max)
#SBATCH --ntasks=16           # Specify total number of cores
#SBATCH --mem-per-cpu=4G      # Specify memory per core
#SBATCH --cpus-per-task=1     # Do not change
#SBATCH --array=1-5%1         # Specify number of restart aka time steps (2 or more, 5 is shown)

module load StdEnv/2023       # Do not change
module load ansys/2023R2      # Specify version (beluga, cedar, graham, narval)

#module load StdEnv/2020      # no longer supported
#module load ansys/2019R3     # or newer versions (narval only)
#module load ansys/2021R2     # or newer versions (beluga, cedar, graham)

MYVERSION=3d                        # Specify 2d, 2ddp, 3d or 3ddp
MYJOUFILE=sample.jou                # Specify your journal filename
MYJOUFILERES=sample-restart.jou     # Specify journal restart filename
MYCASFILERES=sample-restart.cas.h5  # Specify cas restart filename
MYDATFILERES=sample-restart.dat.h5  # Specify dat restart filename

# ------- do not change any lines below --------

if [[ "${CC_CLUSTER}" == narval ]]; then
 if [ "$EBVERSIONGENTOO" == 2020 ]; then
   module load intel/2021 intelmpi
   export INTELMPI_ROOT=$I_MPI_ROOT/mpi/latest
   export HCOLL_RCACHE=^ucs
 elif [ "$EBVERSIONGENTOO" == 2023 ]; then
   module load intel/2023 intelmpi
   export INTELMPI_ROOT=$I_MPI_ROOT
 fi
 unset I_MPI_HYDRA_BOOTSTRAP_EXEC_EXTRA_ARGS
 unset I_MPI_ROOT
fi

slurm_hl2hl.py --format ANSYS-FLUENT > /tmp/machinefile-$SLURM_JOB_ID
NCORES=$((SLURM_NTASKS * SLURM_CPUS_PER_TASK))

if [ "$SLURM_NNODES" == 1 ]; then
  if [ "$SLURM_ARRAY_TASK_ID" == 1 ]; then
    fluent -g $MYVERSION -t $NCORES -affinity=0 -mpi=intel -pshmem -i $MYJOUFILE
  else
    fluent -g $MYVERSION -t $NCORES -affinity=0 -mpi=intel -pshmem -i $MYJOUFILERES
  fi
else 
  if [ "$SLURM_ARRAY_TASK_ID" == 1 ]; then
    fluent -g $MYVERSION -t $NCORES -affinity=0 -mpi=intel -pib -cnf=/tmp/machinefile-$SLURM_JOB_ID -i $MYJOUFILE
  else
    fluent -g $MYVERSION -t $NCORES -affinity=0 -mpi=intel -pib -cnf=/tmp/machinefile-$SLURM_JOB_ID -i $MYJOUFILERES
  fi
fi
if [ $? -eq 0 ]; then
    echo
    echo "SLURM_ARRAY_TASK_ID  = $SLURM_ARRAY_TASK_ID"
    echo "SLURM_ARRAY_TASK_COUNT = $SLURM_ARRAY_TASK_COUNT"
    echo
    if [ $SLURM_ARRAY_TASK_ID -lt $SLURM_ARRAY_TASK_COUNT ]; then
      echo "Restarting job with the most recent output dat file"
      ln -sfv output/$(ls -ltr output | grep .cas | tail -n1 | awk '{print $9}') $MYCASFILERES
      ln -sfv output/$(ls -ltr output | grep .dat | tail -n1 | awk '{print $9}') $MYDATFILERES
      ls -lh sample* output/*
    else
      echo "Job completed successfully! Exiting now."
      scancel $SLURM_ARRAY_JOB_ID
     fi
else
     echo "Simulation failed. Exiting now."
fi


Journal files

Fluent journal files can include basically any command from Fluent's Text-User-Interface (TUI); commands can be used to change simulation parameters like temperature, pressure and flow speed. With this you can run a series of simulations under different conditions with a single case file, by only changing the parameters in the journal file. Refer to the Fluent User's Guide for more information and a list of all commands that can be used. The following journal files are set up with /file/cff-files no to use the legacy .cas/.dat file format (the default in module versions 2019R3 or older). Set this instead to /file/cff-files yes to use the more efficient .cas.h5/.dat.h5 file format (the default in module versions 2020R1 or newer).

File : sample1.jou

; SAMPLE FLUENT JOURNAL FILE - STEADY SIMULATION
; ----------------------------------------------
; lines beginning with a semicolon are comments

; Overwrite files by default
/file/confirm-overwrite no

; Preferentially read/write files in legacy format
/file/cff-files no

; Read input case and data files
/file/read-case-data FFF-in

; Run the solver for this many iterations
/solve/iterate 1000

; Overwrite output files by default
/file/confirm-overwrite n

; Write final output data file
/file/write-case-data FFF-out

; Write simulation report to file (optional)
/report/summary y "My_Simulation_Report.txt"

; Cleanly shutdown fluent
/exit


File : sample2.jou

; SAMPLE FLUENT JOURNAL FILE - STEADY SIMULATION
; ----------------------------------------------
; lines beginning with a semicolon are comments

; Overwrite files by default
/file/confirm-overwrite no

; Preferentially read/write files in legacy format
/file/cff-files no

; Read input files
/file/read-case-data FFF-in

; Write a data file every 100 iterations
/file/auto-save/data-frequency 100

; Retain data files from 5 most recent iterations
/file/auto-save/retain-most-recent-files y

; Write data files to output sub-directory (appends iteration)
/file/auto-save/root-name output/FFF-out

; Run the solver for this many iterations
/solve/iterate 1000

; Write final output case and data files
/file/write-case-data FFF-out

; Write simulation report to file (optional)
/report/summary y "My_Simulation_Report.txt"

; Cleanly shutdown fluent
/exit


File : sample3.jou

; SAMPLE FLUENT JOURNAL FILE - TRANSIENT SIMULATION
; -------------------------------------------------
; lines beginning with a semicolon are comments

; Overwrite files by default
/file/confirm-overwrite no

; Preferentially read/write files in legacy format
/file/cff-files no

; Read the input case file
/file/read-case FFF-transient-inp

; For continuation (restart) read in both case and data input files
;/file/read-case-data FFF-transient-inp

; Write a data (and maybe case) file every 100 time steps
/file/auto-save/data-frequency 100
/file/auto-save/case-frequency if-case-is-modified

; Retain only the most recent 5 data (and maybe case) files
/file/auto-save/retain-most-recent-files y

; Write to output sub-directory (appends flowtime and timestep)
/file/auto-save/root-name output/FFF-transient-out-%10.6f

; ##### Settings for Transient simulation :  #####

; Set the physical time step size
/solve/set/time-step 0.0001

; Set the number of iterations for which convergence monitors are reported
/solve/set/reporting-interval 1

; ##### End of settings for Transient simulation #####

; Initialize using the hybrid initialization method
/solve/initialize/hyb-initialization

; Set max number of iters per time step and number of time steps
;/solve/set/max-iterations-per-time-step 75
;/solve/dual-time-iterate 1000 ,
/solve/dual-time-iterate 1000 75

; Write final case and data output files
/file/write-case-data FFF-transient-out

; Write simulation report to file (optional)
/report/summary y Report_Transient_Simulation.txt

; Cleanly shutdown fluent
/exit
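
Because all run-time parameters live in the journal file, a series of runs under different conditions can be scripted by generating one journal file per case from a template. The following is only a minimal sketch: template.jou, the XX_VELOCITY_XX placeholder and your-fluent-script.sh are hypothetical names standing in for your own template journal file, parameter marker and Fluent Slurm script.

 #!/bin/bash
 # Sketch: create one working directory and journal file per inlet velocity, then submit each case.
 for vel in 1.0 2.0 5.0; do
   mkdir -p case-vel-$vel
   sed "s/XX_VELOCITY_XX/$vel/" template.jou > case-vel-$vel/sample.jou   # substitute the parameter value
   cp FFF-in.cas FFF-in.dat case-vel-$vel/         # copy (or symlink) whichever case/data files the journal reads
   (cd case-vel-$vel && sbatch ../your-fluent-script.sh)                  # submit with your usual Fluent Slurm script
 done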


UDFs

The first step is to transfer your user-defined function (UDF), namely the sampleudf.c source file and any additional dependency files, to the cluster. When uploading from a Windows machine, be sure your transfer client uses its text mode setting; otherwise Fluent will not be able to read the file properly on the cluster, which runs Linux. The UDF should be placed in the directory where your journal, cas and dat files reside. Next, add one of the following commands into your journal file before the commands that read in your simulation cas/dat files. Regardless of whether you use the interpreted or compiled UDF approach, before uploading your cas file to the Alliance please check that neither the Interpreted UDFs dialog box nor the UDF Library Manager dialog box is configured to use any UDF; this ensures that only the journal file commands will be in control when jobs are submitted.
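
If you are unsure whether the file was transferred in text mode, the line endings can be checked and, assuming the dos2unix utility is available on the cluster, converted in place. A small sketch, with sampleudf.c standing in for your own source file:

 file sampleudf.c       # reports "with CRLF line terminators" if Windows line endings remain
 dos2unix sampleudf.c   # converts CRLF (Windows) line endings to LF (Unix) in place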

Interpreted

To tell Fluent to interpret your UDF at runtime, add the following command line into your journal file before the cas/dat files are read or initialized. Replace the filename sampleudf.c with the name of your source file; the command remains the same whether the simulation is run in serial or parallel. To ensure the UDF can be found in the same directory as the journal file, remove any managed UDF definitions from the cas file by opening it in the GUI and resaving it, either before uploading to the Alliance or afterwards on a compute node or gra-vdi. Doing this ensures that only the following command will be in control when Fluent runs. To use an interpreted UDF with parallel jobs, it will need to be parallelized as described in the Parallel section below.

define/user-defined/interpreted-functions "sampleudf.c" "cpp" 10000 no

Compiled

To use this approach your UDF must be compiled on an Alliance cluster at least once. Doing so creates a libudf subdirectory structure containing the required libudf.so shared library. The libudf directory cannot simply be copied from a remote system (such as your laptop) to the Alliance, since the library dependencies of the shared library will not be satisfied and Fluent will crash on startup. However, once you have compiled your UDF on one Alliance cluster, you can transfer the newly created libudf to any other Alliance cluster, provided your account there loads the same StdEnv environment module version. Once copied, the UDF can be used by uncommenting the second (load) libudf line below in your journal file when submitting jobs to the cluster. Do not leave both the compile and load libudf lines uncommented in your journal file when submitting jobs, otherwise your UDF will automatically be (re)compiled for each and every job. Not only is this highly inefficient, it will also lead to race-like build conflicts if multiple jobs are run from the same directory. Besides configuring your journal file to build your UDF, the Fluent GUI (run on any cluster compute node or gra-vdi) may also be used: navigate to the Compiled UDFs dialog box, add the UDF source file and click Build. When using a compiled UDF with parallel jobs, your source file should be parallelized as discussed in the section below.

define/user-defined/compiled-functions compile libudf yes sampleudf.c "" ""

and/or

define/user-defined/compiled-functions load libudf
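
As a rough sketch of reusing a compiled UDF on a second cluster, assuming the same StdEnv and Ansys module versions are loaded there (CLUSTER and the destination path are placeholders for your own target system):

 module list                                                      # note the StdEnv and ansys module versions in use
 rsync -a libudf/ CLUSTER.alliancecan.ca:scratch/mysim/libudf/    # copy the whole libudf tree, not just libudf.so
 # on CLUSTER, load the same module versions and keep only the "load libudf" journal line uncommented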

Parallel

Before a UDF can be used with a Fluent parallel job (single-node SMP or multi-node MPI), it must be parallelized. This controls how and which processes (host and/or compute) run specific parts of the UDF code when Fluent is run in parallel on the cluster. The instrumenting procedure involves adding compiler directives, predicates and reduction macros to your working serial UDF. Failure to do so will result in Fluent running slowly at best or crashing immediately at worst. The end result is a single UDF that runs efficiently when Fluent is used in both serial and parallel mode. The subject is described in detail under Part I: Chapter 7: Parallel Considerations of the Ansys 2024 Fluent Customization Manual, which can be accessed here.

DPM

UDFs can be used to customize Discrete Phase Models (DPM) as described in Part III: Solution Mode | Chapter 24: Modeling Discrete Phase | 24.2 Steps for Using the Discrete Phase Models | 24.2.6 User-Defined Functions of the 2024R2 Fluent Users Guide, and in Part I: Creating and Using User Defined Functions | Chapter 2: DEFINE Macros | 2.5 Discrete Phase Model (DPM) DEFINE Macros of the 2024R2 Fluent Customization Manual, both available here. Before a DPM-based UDF can be worked into a simulation, the injection of a set of particles must be defined by specifying Point Properties with variables such as source position, initial trajectory, mass flow rate, time duration, temperature and so forth, depending on the injection type. This can be done in the GUI by clicking the Physics panel, then Discrete Phase to open the Discrete Phase Model box, and then clicking the Injections button. This opens an Injections dialog box where one or more injections can be created by clicking the Create button. The Set Injection Properties dialog which appears contains an Injection Type pulldown whose first four types are single, group, surface and flat-fan-atomizer. If you select any of these, the Point Properties tab can then be selected to fill in the corresponding Value fields. Another way to specify the Point Properties is to read an injection text file. To do this, select file from the Injection Type pulldown, specify the Injection Name to be created and then click the File button (located beside the OK button at the bottom of the Set Injection Properties dialog). Here either an Injection Sample File (with .dpm extension) or a manually created injection text file can be selected. In the Select File dialog box, change the Files of type pulldown to All Files (*), highlight the file (which can have any name but commonly has a .inj extension) and click the OK button. Assuming there are no problems with the file, no Console error or warning message will appear in Fluent. You will be returned to the Injections dialog box, where you should see the Injection name you specified in the Set Injection Properties dialog and be able to list its Particles and Properties in the console. Next, open the Discrete Phase Model dialog box and select Interaction with Continuous Phase, which enables updating DPM source terms every flow iteration. This setting can be saved in your cas file or added via the journal file as shown. Once the injection is confirmed working in the GUI, the steps can be automated by adding commands to the journal file after solution initialization, for example:

/define/models/dpm/interaction/coupled-calculations yes
/define/models/dpm/injections/delete-injection injection-0:1
/define/models/dpm/injections/create injection-0:1 no yes file no zinjection01.inj no no no no
/define/models/dpm/injections/list-particles injection-0:1
/define/models/dpm/injections/list-injection-properties injection-0:1

where a basic manually created injection steady file format might look like:

 $ cat  zinjection01.inj
 (z=4 12)
 ( x          y        z    u         v    w    diameter  t         mass-flow  mass  frequency  time name )
 (( 2.90e-02  5.00e-03 0.0 -1.00e-03  0.0  0.0  1.00e-04  2.93e+02  1.00e-06   0.0   0.0        0.0 ) injection-0:1 )

Note that injection files for DPM simulations are generally set up for either steady or unsteady particle tracking; the format of the former is described in Part III: Solution Mode | Chapter 24: Modeling Discrete Phase | 24.3 Setting Initial Conditions for the Discrete Phase | 24.3.13 Point Properties for File Injections | 24.3.13.1 Steady File Format of the 2024R2 Fluent Users Guide.

Ansys CFX

Slurm scripts

File : script-cfx-dist.sh

#!/bin/bash

#SBATCH --account=def-group   # Specify account name
#SBATCH --time=00-03:00       # Specify time limit dd-hh:mm
#SBATCH --nodes=2             # Specify multiple (1 or more) compute nodes
#SBATCH --ntasks-per-node=32  # Specify cores per node (graham 32 or 44, cedar 32 or 48, beluga 40, narval 64)
#SBATCH --mem=0               # Allocate all memory per compute node
#SBATCH --cpus-per-task=1     # Do not change

module load StdEnv/2020       # Applies to: beluga, cedar, graham, narval
module load ansys/2021R1      # Or newer module versions

NNODES=$(slurm_hl2hl.py --format ANSYS-CFX)

# append additional command line options as required
if [ "$CC_CLUSTER" = cedar ]; then
  cfx5solve -def YOURFILE.def -start-method "Open MPI Distributed Parallel" -par-dist $NNODES
else
  cfx5solve -def YOURFILE.def -start-method "Intel MPI Distributed Parallel" -par-dist $NNODES
fi
File : script-cfx-local.sh

#!/bin/bash

#SBATCH --account=def-group   # Specify account name
#SBATCH --time=00-03:00       # Specify time limit dd-hh:mm
#SBATCH --nodes=1             # Specify single compute node (do not change)
#SBATCH --ntasks-per-node=4   # Specify total cores (narval up to 64)
#SBATCH --mem=16G             # Specify 0 to use all node memory
#SBATCH --cpus-per-task=1     # Do not change

module load StdEnv/2020       # Applies to: beluga, cedar, graham, narval
module load ansys/2021R1      # Or newer module versions

# append additional command line options as required
if [ "$CC_CLUSTER" = cedar ]; then
  cfx5solve -def YOURFILE.def -start-method "Open MPI Local Parallel" -part $SLURM_CPUS_ON_NODE
else
  cfx5solve -def YOURFILE.def -start-method "Intel MPI Local Parallel" -part $SLURM_CPUS_ON_NODE
fi

Note: You may get the following error in your output file; it does not seem to affect the computation: /etc/tmi.conf: No such file or directory.

Workbench

Before submitting a project file to the queue on a cluster (for the first time) follow these steps to initialize it.

  1. Connect to the cluster with TigerVNC.
  2. Switch to the directory where the project file is located (YOURPROJECT.wbpj) and start Workbench with the same Ansys module you used to create your project.
  3. In Workbench, open the project with File -> Open.
  4. In the main window, right-click on Setup and select Clear All Generated Data.
  5. In the top menu bar pulldown, select File -> Exit to exit Workbench.
  6. In the Ansys Workbench popup, when asked The current project has been modified. Do you want to save it?, click on the No button.
  7. Quit Workbench and submit your job using one of the Slurm scripts shown below.

To avoid writing the solution when a running job successfully completes, remove ;Save(Overwrite=True) from the last line of your script. Doing this will make it easier to run multiple test jobs (for scaling purposes when changing ntasks), since the initialized solution will not be overwritten each time. Alternatively, keep a copy of the initialized YOURPROJECT.wbpj file and YOURPROJECT_files subdirectory and restore them after the solution is written.
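
A minimal sketch of the backup-and-restore alternative, with YOURPROJECT standing in for your own project name as in the scripts below:

 cp -a YOURPROJECT.wbpj YOURPROJECT.wbpj.init     # keep a pristine copy of the initialized project
 cp -a YOURPROJECT_files YOURPROJECT_files.init
 # after each test job, restore the initialized state before resubmitting:
 rm -rf YOURPROJECT.wbpj YOURPROJECT_files
 cp -a YOURPROJECT.wbpj.init YOURPROJECT.wbpj
 cp -a YOURPROJECT_files.init YOURPROJECT_files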

Slurm scripts

A project file can be submitted to the queue by customizing one of the following scripts and then running the sbatch script-wbpj.sh command:

File : script-wbpj-2020.sh

#!/bin/bash

#SBATCH --account=def-account
#SBATCH --time=00-03:00                # Time (DD-HH:MM)
#SBATCH --mem=16G                      # Total Memory (set to 0 for all node memory)
#SBATCH --ntasks=4                     # Number of cores
#SBATCH --nodes=1                      # Do not change (multi-node not supported)
##SBATCH --exclusive                   # Uncomment for scaling testing
##SBATCH --constraint=broadwell        # Applicable to graham or cedar

module load StdEnv/2020 ansys/2021R2   # OR newer Ansys modules

if [ "$SLURM_NNODES" == 1 ]; then
  MEMPAR=0                               # Set to 0 for SMP (shared memory parallel)
else
  MEMPAR=1                               # Set to 1 for DMP (distributed memory parallel)
fi

rm -fv *_files/.lock
MWFILE=~/.mw/Application\ Data/Ansys/`basename $(find $EBROOTANSYS/v* -maxdepth 0 -type d)`/SolveHandlers.xml
sed -re "s/(.AnsysSolution>+)[a-zA-Z0-9]*(<\/Distribute.)/\1$MEMPAR\2/" -i "$MWFILE"
sed -re "s/(.Processors>+)[a-zA-Z0-9]*(<\/MaxNumber.)/\1$SLURM_NTASKS\2/" -i "$MWFILE"
sed -i "s!UserConfigured=\"0\"!UserConfigured=\"1\"!g" "$MWFILE"

export KMP_AFFINITY=disabled
export I_MPI_HYDRA_BOOTSTRAP=ssh

runwb2 -B -E "Update();Save(Overwrite=True)" -F YOURPROJECT.wbpj


Mechanical

The input file can be generated from within your interactive Workbench Mechanical session by clicking Solution -> Tools -> Write Input Files, then specifying File name: YOURAPDLFILE.inp and Save as type: APDL Input Files (*.inp). APDL jobs can then be submitted to the queue by running the sbatch script-name.sh command.

Slurm scripts

The Ansys modules used in each of the following scripts have been tested on Graham and should work without issue (uncomment one). Once the scripts have been tested on other clusters, they will be updated if required.

File : script-smp-2020.sh

#!/bin/bash
#SBATCH --account=def-account  # Specify your account
#SBATCH --time=00-03:00        # Specify time (DD-HH:MM)
#SBATCH --mem=16G              # Specify memory for all cores
#SBATCH --ntasks=8             # Specify number of cores (1 or more)
#SBATCH --nodes=1              # Specify one node (do not change)

unset SLURM_GTIDS

module load StdEnv/2020

#module load ansys/2021R2
#module load ansys/2022R1
module load ansys/2022R2

mapdl -smp -b nolist -np $SLURM_NTASKS -dir $SLURM_TMPDIR -I YOURAPDLFILE.inp

rm -rf results-*
mkdir results-$SLURM_JOB_ID
cp -a --no-preserve=ownership $SLURM_TMPDIR/* results-$SLURM_JOB_ID


File : script-dis-2020.sh

#!/bin/bash
#SBATCH --account=def-account  # Specify your account
#SBATCH --time=00-03:00        # Specify time (DD-HH:MM)
#SBATCH --mem-per-cpu=2G       # Specify memory per core
#SBATCH --ntasks=8             # Specify number of cores (2 or more)
##SBATCH --nodes=2             # Specify number of nodes (optional)
##SBATCH --ntasks-per-node=4   # Specify cores per node (optional)

unset SLURM_GTIDS

module load StdEnv/2020

module load ansys/2022R2

mapdl -dis -mpi openmpi -b nolist -np $SLURM_NTASKS -dir $SLURM_TMPDIR -I YOURAPDLFILE.inp

rm -rf results-*
mkdir results-$SLURM_JOB_ID
cp -a --no-preserve=ownership $SLURM_TMPDIR/* results-$SLURM_JOB_ID


Ansys allocates 1024 MB of total memory and 1024 MB of database memory by default for APDL jobs. These values can be manually specified (or changed) by adding the arguments -m 1024 and/or -db 1024 to the mapdl command line in the above scripts. When using a remote institutional license server with multiple Ansys licenses, it may be necessary to add -p aa_r or -ppf anshpc, depending on which Ansys module you are using. As always, perform detailed scaling tests before running production jobs to ensure that the optimal number of cores and the minimum amount of memory are specified in your scripts. The single-node (SMP, Shared Memory Parallel) scripts will typically perform better than the multi-node (DIS, Distributed Memory Parallel) scripts and should therefore be used whenever possible. To help avoid compatibility issues, the Ansys module loaded in your script should ideally match the version used to generate the input file:

 [gra-login2:~/ansys/mechanical/demo] cat YOURAPDLFILE.inp | grep version
! ANSYS input file written by Workbench version 2019 R3
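
For example, the mapdl line in the SMP script above could be extended with explicit memory settings; the values below are illustrative only and should be sized to your own model after scaling tests:

 mapdl -smp -b nolist -np $SLURM_NTASKS -m 2048 -db 1024 -dir $SLURM_TMPDIR -I YOURAPDLFILE.inp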

Ansys EDT

Ansys EDT can be run interactively in batch (non-GUI) mode by first starting an salloc session with options salloc --time=3:00:00 --tasks=8 --mem=16G --account=def-account and then copying and pasting the full ansysedt command found in the last line of script-local-cmd.sh, being sure to manually specify $YOUR_AEDT_FILE.
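
A condensed sketch of that interactive workflow, using the module versions and command line from script-local-cmd.sh below (replace YOUR_AEDT_FILE.aedt with your own input file):

 salloc --time=3:00:00 --tasks=8 --mem=16G --account=def-account
 module load StdEnv/2020 ansysedt/2021R2
 ansysedt -monitor -UseElectronicsPPE -ng -distributed -machinelist list=localhost:1:$SLURM_NTASKS \
   -batchoptions "TempDirectory=$SLURM_TMPDIR HPCLicenseType=pool HFSS/EnableGPU=0" -batchsolve "YOUR_AEDT_FILE.aedt"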

Slurm scripts

Ansys Electronic Desktop jobs may be submitted to a cluster queue with the sbatch script-name.sh command using either of the following single node scripts. As of January 2023, the scripts had only been tested on Graham and therefore may be updated in the future as required to support other clusters. Before using them, specify the simulation time, memory, number of cores and replace YOUR_AEDT_FILE with your input file name. A full listing of command line options can be obtained by starting Ansys EDT in graphical mode with commands ansysedt -help or ansysedt -Batchoptionhelp to obtain scrollable graphical popups.

File : script-local-cmd.sh

#!/bin/bash

#SBATCH --account=account      # Specify your account (def or rrg)
#SBATCH --time=00-01:00        # Specify time (DD-HH:MM)
#SBATCH --mem=16G              # Specify memory (set to 0 to use all compute node memory)
#SBATCH --ntasks=8             # Specify cores (beluga 40, cedar 32 or 48, graham 32 or 44, narval 64)
#SBATCH --nodes=1              # Request one node (Do Not Change)

module load StdEnv/2020
module load ansysedt/2021R2

# Copy a test example into the current directory (remove these two lines to run your own simulation):
cp -f $EBROOTANSYSEDT/AnsysEM21.2/Linux64/Examples/HFSS/Antennas/TransientGeoRadar.aedt .

# Specify input file such as:
YOUR_AEDT_FILE="TransientGeoRadar.aedt"

# Remove previous output:
rm -rf $YOUR_AEDT_FILE.* ${YOUR_AEDT_FILE}results

# ---- do not change anything below this line ---- #

echo -e "\nANSYSLI_SERVERS= $ANSYSLI_SERVERS"
echo "ANSYSLMD_LICENSE_FILE= $ANSYSLMD_LICENSE_FILE"
echo -e "SLURM_TMPDIR= $SLURM_TMPDIR on $SLURMD_NODENAME\n"

export KMP_AFFINITY=disabled
ansysedt -monitor -UseElectronicsPPE -ng -distributed -machinelist list=localhost:1:$SLURM_NTASKS \
-batchoptions "TempDirectory=$SLURM_TMPDIR HPCLicenseType=pool HFSS/EnableGPU=0" -batchsolve "$YOUR_AEDT_FILE"


File : script-local-opt.sh

#!/bin/bash

#SBATCH --account=account      # Specify your account (def or rrg)
#SBATCH --time=00-01:00        # Specify time (DD-HH:MM)
#SBATCH --mem=16G              # Specify memory (set to 0 to allocate all compute node memory)
#SBATCH --ntasks=8             # Specify cores (beluga 40, cedar 32 or 48, graham 32 or 44, narval 64)
#SBATCH --nodes=1              # Request one node (Do Not Change)

module load StdEnv/2020
module load ansysedt/2021R2

# Copy a test example into the current directory (remove these two lines to run your own simulation):
cp -f $EBROOTANSYSEDT/AnsysEM21.2/Linux64/Examples/HFSS/Antennas/TransientGeoRadar.aedt .

# Specify input filename such as:
YOUR_AEDT_FILE="TransientGeoRadar.aedt"

# Remove previous output:
rm -rf $YOUR_AEDT_FILE.* ${YOUR_AEDT_FILE}results

# Specify options filename:
OPTIONS_TXT="Options.txt"

# Write sample options file
rm -f $OPTIONS_TXT
cat > $OPTIONS_TXT <<EOF
\$begin 'Config'
'TempDirectory'='$SLURM_TMPDIR'
'HPCLicenseType'='pool'
'HFSS/EnableGPU'=0
\$end 'Config'
EOF

# ---- do not change anything below this line ---- #

echo -e "\nANSYSLI_SERVERS= $ANSYSLI_SERVERS"
echo "ANSYSLMD_LICENSE_FILE= $ANSYSLMD_LICENSE_FILE"
echo -e "SLURM_TMPDIR= $SLURM_TMPDIR on $SLURMD_NODENAME\n"

export KMP_AFFINITY=disabled
ansysedt -monitor -UseElectronicsPPE -ng -distributed -machinelist list=localhost:1:$SLURM_NTASKS \
-batchoptions $OPTIONS_TXT -batchsolve "$YOUR_AEDT_FILE"


Ansys ROCKY

Besides running simulations in GUI mode (as discussed in the Graphical use section below), Ansys Rocky can also run simulations in non-GUI (headless) mode. Both modes support running Rocky with CPUs only or with CPUs and GPUs. The section below provides two sample Slurm scripts, each of which would be submitted to the Graham queue with the sbatch command as usual. At the time of this writing neither script has been tested, so extensive customization will likely be required. It is important to note that these scripts are only usable on Graham, since the rocky module they both load is at present installed only locally on Graham.

Slurm scripts

To get a full listing of command line options, run Rocky -h on the command line after loading any rocky module (currently only rocky/2023R2 is available on Graham, with 2024R1 and 2024R2 to be added as soon as possible). When Rocky is used with GPUs to solve coupled problems, the number of CPUs requested from Slurm (on the same node) should be increased until the scalability limit of the coupled application is reached. On the other hand, if Rocky is run with GPUs to solve standalone uncoupled problems, then only a minimal number of CPUs should be requested, just enough for Rocky to still run optimally; for instance, only 2 or possibly 3 CPUs may be required. Finally, when Rocky is run with more than 4 CPUs, rocky_hpc licenses are required; the SHARCNET license does provide these.
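
For example, to list the available options on Graham (module names as in the scripts below):

 module load StdEnv/2023
 module load rocky/2023R2 ansys/2023R2
 Rocky -h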

File : script-rocky-cpu.sh

#!/bin/bash

#SBATCH --account=account      # Specify your account (def or rrg)
#SBATCH --time=00-02:00        # Specify time (DD-HH:MM)
#SBATCH --mem=24G              # Specify memory (set to 0 to use all node memory)
#SBATCH --cpus-per-task=6      # Specify cores (graham 32 or 44 to use all cores)
#SBATCH --nodes=1              # Request one node (do not change)

module load StdEnv/2023
module load rocky/2023R2 ansys/2023R2   # only available on graham (do not change)   

Rocky --simulate "mysim.rocky" --resume=0 --ncpus=$SLURM_CPUS_PER_TASK --use-gpu=0


File : script-rocky-gpu.sh

#!/bin/bash

#SBATCH --account=account      # Specify your account (def or rrg)
#SBATCH --time=00-01:00        # Specify time (DD-HH:MM)
#SBATCH --mem=24G              # Specify memory (set to 0 to use all node memory)
#SBATCH --cpus-per-task=6      # Specify cores (graham 32 or 44 to use all cores)
#SBATCH --gres=gpu:v100:2      # Specify gpu type : gpu quantity
#SBATCH --nodes=1              # Request one node (do not change)

module load StdEnv/2023
module load rocky/2023R2 ansys/2023R2   # only available on graham (do not change)

Rocky --simulate "mysim.rocky" --resume=0 --ncpus=$SLURM_CPUS_PER_TASK --use-gpu=1 --gpu-num=$SLURM_GPUS_ON_NODE


Graphical use

Ansys programs may be run interactively in GUI mode on cluster compute nodes or Graham VDI Nodes.

Compute nodes

Ansys can be run interactively on a single compute node for up to 24 hours. This approach is ideal for testing large simulations since all cores and memory can be requested with salloc as described in TigerVNC. Once connected with vncviewer, any of the following program versions can be started after loading the required modules as shown below.
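
For example, a whole 32-core Graham node could be requested with a command of the following form; this is only a sketch, so adjust the time, core count and account to your cluster and allocation. Here --mem=0 requests all of the memory on the node.

 salloc --time=06:00:00 --nodes=1 --ntasks-per-node=32 --mem=0 --account=def-group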

Fluids

module load StdEnv/2020
module load ansys/2021R1 (or newer versions)
fluent -mpi=intel, or,
QTWEBENGINE_DISABLE_SANDBOX=1 cfx5

Mapdl

module load StdEnv/2020
module load ansys/2021R2 (or newer versions)
mapdl -g, or via launcher,
launcher --> click RUN button

Workbench

module load StdEnv/2020
module load ansys/2021R2 (or newer versions)
xfwm4 --replace & (only needed if using Ansys Mechanical)
export QTWEBENGINE_DISABLE_SANDBOX=1 (only needed if using CFD-Post)
runwb2

NOTES: When running an analysis program such as Mechanical or Fluent in parallel on a single node, untick Distributed and specify a number of cores equal to your salloc session setting minus 1. The pulldown menus in Ansys Mechanical under Workbench do not respond properly; as a workaround, run xfwm4 --replace on the command line before starting Workbench as shown. To make xfwm4 your default, edit $HOME/.vnc/xstartup and change mate-session to xfce4-session. Lastly, Fluent from ansys/2022R2 does not currently work on compute nodes; please use a different version.

Ansys EDT

Start an interactive session using the following form of the salloc command (to specify cores and available memory):
salloc --time=3:00:00 --nodes=1 --cores=8 --mem=16G --account=def-group
xfwm4 --replace & (then hit enter twice)
module load StdEnv/2020 ansysedt/2021R2, or
module load StdEnv/2020 ansysedt/2023R2, or
module load StdEnv/2023 ansysedt/2023R2, or newer
ansysedt
o Click Tools -> Options -> HPC and Analysis Options -> Edit then :
1) untick Use Automatic Settings box (required one time only)
2) under Machines tab do not change Cores (auto-detected from slurm)
o To run interactive analysis click: Project -> Analyze All

Ensight

module load StdEnv/2020 ansys/2022R2; A=222; B=5.12.6, or
module load StdEnv/2020 ansys/2022R1; A=221; B=5.12.6, or
module load StdEnv/2020 ansys/2021R2; A=212; B=5.12.6, or
module load StdEnv/2020 ansys/2021R1; A=211; B=5.12.6, or
export LD_LIBRARY_PATH=$EBROOTANSYS/v$A/CEI/apex$A/machines/linux_2.6_64/qt-$B/lib
ensight -X

Note: ansys/2022R2 Ensight is lightly tested on compute nodes. Please let us know if you find any problems using it.

Rocky

module load rocky/2023R2 ansys/2023R2 (or newer versions)
Rocky (reads ~/.licenses/ansys.lic if present, otherwise defaults to the SHARCNET server), or
Rocky-int (interactively select the CMC or SHARCNET server, also reads ~/.licenses/ansys.lic)
RockySolver (run Rocky from the command line, currently untested, specify "-h" for help)
RockySchedular (resource manager to submit multiple jobs on the present node)
o Rocky is (currently) only available on gra-vdi and the Graham cluster (no Workbench support on Linux)
o Release pdfs can be found under /opt/software/rocky/2023R2/docs (read them with mupdf)
o Rocky supports GPU-accelerated computing; however, this capability has not been tested
o To request a Graham compute node with GPUs for computations use, for example:
salloc --time=04:00:00 --nodes=1 --cpus-per-task=6 --gres=gpu:v100:2 --mem=32G --account=someaccount
o The SHARCNET license now includes Rocky (free for all researchers to use)

VDI nodes

Ansys programs can be run for up to 7 days on Graham's VDI nodes (gra-vdi.alliancecan.ca) using 8 cores (16 cores maximum) and 128GB memory. The VDI system provides GPU OpenGL acceleration and is therefore ideal for tasks that benefit from high-performance graphics. One might use VDI to create or modify simulation input files, post-process data or visualize simulation results. To log in, connect with TigerVNC, then open a new terminal window and start one of the program versions shown below. The vertical bar | notation is used to separate the various commands. The maximum job size for any parallel job run on gra-vdi should be limited to 16 cores to avoid overloading the servers and impacting other users. To run two simultaneous GUI jobs (16 cores maximum each), connect with VNC once to gra-vdi3.sharcnet.ca and again to gra-vdi4.sharcnet.ca, then start an interactive GUI session for the Ansys program you are using in the desktop of each machine. Note that simultaneous simulations should in general be run in different directories to avoid file conflict issues. Unlike VNC connections to compute nodes (which impose Slurm limits through salloc), there is no time limit constraint on gra-vdi when running simulations.

Fluids

module load CcEnv StdEnv/2020
module load ansys/2021R1 (or newer versions)
unset SESSION_MANAGER
fluent | cfx5 | icemcfd
o Where unsetting SESSION_MANAGER prevents the following Qt message from appearing when starting fluent:
[Qt: Session management error: None of the authentication protocols specified are supported]
o In the event the following message appears in a popup window when starting icemcfd ...
[Error segmentation violation - exiting after doing an emergency save]
... do not click the popup OK button otherwise icemcfd will crash. Instead do the following (one time only):
click the Settings Tab -> Display -> tick X11 -> Apply -> OK -> File -> Exit
The error popup should no longer appear when icemcfd is restarted.

Mapdl

module load CcEnv StdEnv/2020
module load ansys/2021R1 (or newer versions)
mapdl -g, or via launcher,
unset SESSION_MANAGER; launcher --> click RUN button

Workbench

module load SnEnv
module load ansys/2020R2 (or newer versions)
export HOOPS_PICTURE=opengl
runwb2
o The export line prevents the following TUI warning from appearing when Fluent starts:
[Software rasterizer found, hardware acceleration will be disabled.]
Alternatively the HOOPS_PICTURE environment variable can be set inside workbench by doing:
Fluent Launcher --> Environment Tab --> HOOPS_PICTURE=opengl (without the export)
NOTE1: When running Mechanical in Workbench on gra-vdi, be sure to tick Distributed in the upper ribbon Solver panel and specify a maximum value of 24 cores. When running Fluent on gra-vdi, instead untick Distributed and specify a maximum value of 12 cores. Do not attempt to use more than 128GB memory, otherwise Ansys will hit the hard limit and be killed. If you need more cores or memory, use a cluster compute node to run your graphical session (as described in the Compute nodes section above). When only doing pre-processing or post-processing work with Ansys on gra-vdi and not running calculations, please use only 4 cores, otherwise HPC licenses will be checked out unnecessarily.
NOTE2: On very rare occasions the Ansys Workbench GUI will freeze or become unresponsive. If this happens, open a new terminal window and run pkill -9 -e -u $USER -f "ansys|fluent|mwrpcss|mwfwrapper|ENGINE|mono" to fully kill off Ansys. Likewise, if Ansys crashes or vncviewer disconnects before Ansys can be shut down cleanly, try running the pkill command if Ansys does not run normally afterwards. In general, if Ansys is not behaving properly and you suspect one of the aforementioned causes, try pkill before opening a problem ticket.

Ansys EDT

Open a terminal window and load the module:
module load SnEnv ansysedt/2023R2, or
module load SnEnv ansysedt/2021R2
Type ansysedt in the terminal and wait for the gui to start
The following only needs to be done once:
click Tools -> Options -> HPC and Analysis Options -> Options
change HPC License pulldown to Pool (allows > 4 cores to be used)
click OK
---------- EXAMPLES ----------
To copy the 2023R2 Antennas examples directory into your account:
login to a cluster such as graham
module load ansysedt/2023R2
mkdir -p ~/Ansoft/$EBVERSIONANSYSEDT; cd ~/Ansoft/$EBVERSIONANSYSEDT; rm -rf Antennas
cp -a $EBROOTANSYSEDT/v232/Linux64/Examples/HFSS/Antennas ~/Ansoft/$EBVERSIONANSYSEDT
To run an example:
open a simulation .aedt file then click HFSS -> Validation Check
(if errors are reported by the validation check, close then reopen the simulation and repeat as required)
to run simulation click Project -> Analyze All
to quit without saving the converged solution click File -> Close -> No
If the program crashes and won't restart try running the following commands:
pkill -9 -u $USER -f "ansys*|mono|mwrpcss|apip-standalone-service"
rm -rf ~/.mw (ansysedt will re-run first-time configuration on startup)

Ensight

module load SnEnv ansys/2019R2 (or newer)
ensight

Rocky

module load clumod rocky/2023R2 CcEnv StdEnv/2020 ansys/2023R2 (or newer versions)
Rocky (reads ~/.licenses/ansys.lic if present, otherwise defaults to the SHARCNET server), or
Rocky-int (interactively select the CMC or SHARCNET server, also reads ~/.licenses/ansys.lic)
RockySolver (run Rocky from the command line, currently untested, specify "-h" for help)
RockySchedular (resource manager to submit multiple jobs on the present node)
o Rocky is (currently) only available on gra-vdi and the Graham cluster (no Workbench support on Linux)
o Release pdfs can be found under /opt/software/rocky/2023R2/docs (read them with mupdf)
o Rocky can only use CPUs on gra-vdi since it currently has only one GPU (dedicated to graphics)
o The SHARCNET license now includes Rocky (free for all researchers to use)

SSH issues

Some Ansys GUI programs can be run remotely on a cluster compute node by X forwarding over SSH to your local desktop. Unlike VNC, this approach is untested and unsupported, since it relies on a properly set up X display server for your particular operating system, or on the selection, installation and configuration of a suitable X client emulator package such as MobaXterm. Most users will find interactive response times unacceptably slow for basic menu tasks, let alone for more complex tasks such as those involving graphics rendering. Startup times for GUI programs can also be very slow, depending on your Internet connection. For example, in one test it took 40 minutes to fully start ansysedt over SSH, while starting it with vncviewer required only 34 seconds. Despite the potential slowness of running GUI programs over SSH, doing so may still be of interest if your only goal is to open a simulation and perform some basic menu operations or run some calculations. The basic steps are given here as a starting point: 1) ssh -Y username@graham.computecanada.ca; 2) salloc --x11 --time=1:00:00 --mem=16G --cpus-per-task=4 [--gpus-per-node=1] --account=def-mygroup; 3) once connected to a compute node, try running xclock. If the clock appears on your desktop, proceed to load the desired Ansys module and try running the program.

Site-specific usage

SHARCNET license

The SHARCNET Ansys license is free for academic use by any Alliance researcher on any Alliance system. The installed software does not have any solver or geometry limits. The SHARCNET license may only be used for the purpose of publishable academic research; producing results for private commercial purposes is strictly prohibited. The SHARCNET license was upgraded from CFD to MCS (Multiphysics Campus Solution) in May of 2020. It includes the following products: HF, EM, Electronics HPC, Mechanical and CFD, as described here. In 2023, Rocky for Linux (no Workbench support) was also added. Neither LS-DYNA nor Lumerical is included in the SHARCNET license. Note that since all the Alliance clusters are Linux based, SpaceClaim cannot be used on our systems. In July of 2021, an additional 1024 anshpc licenses were added to the previous pool of 512. Before running large parallel jobs, scaling tests should be run for any given simulation; parallel jobs that do not achieve at least 50% CPU utilization may be flagged by the system for a follow-up by our support team.

As of December 2022, each researcher can run 4 jobs using a total of 252 anshpc (plus 4 anshpc per job). Thus any of the following uniform job size combinations are possible: one 256-core job, two 130-core jobs, three 88-core jobs, or four 67-core jobs, according to (252 + 4*num_jobs) / num_jobs. UPDATE: as of October 2024, the license limit has been increased to 8 jobs and 512 HPC cores per researcher (collectively across all clusters for all applications) for a testing period, to allow some researchers more flexibility for parameter explorations and for running larger problems. As the license will be far more oversubscribed, some jobs may occasionally fail on startup, in which case they will need to be resubmitted. Nevertheless, assuming most researchers continue with a pattern of running one or two jobs using about 128 cores in total, this is not expected to be an issue. That said, it is helpful to close Ansys applications immediately upon completion of any GUI-related tasks, to release any licenses that may be consumed while the application is otherwise idle, so that others can use them.

Since the best parallel performance is usually achieved by using all cores on packed compute nodes (aka full nodes), one can determine the number of full nodes by dividing the total anshpc cores by the compute node size. For example, consider Graham, which has many 32-core (Broadwell) and some 44-core (Cascade) compute nodes: the maximum number of 32-core nodes that could be requested, assuming a 252 hpc-core limit, would be 256/32=8, 130/32=~4, 88/32=~2 or 67/32=~2 to run 1, 2, 3 or 4 simultaneous jobs respectively. To express this in equation form, for a given compute node size on any cluster, the number of compute nodes per job can be calculated as ( 252 + 4*num_jobs ) / ( num_jobs * cores_per_node ), rounded down; the total cores to request is then this whole number of nodes multiplied by cores_per_node.
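
As a worked example of that formula, here is a short shell sketch using the 252-core limit above and Graham's 44-core nodes (substitute your own cluster's node size):

 num_jobs=2; cores_per_node=44
 nodes_per_job=$(( (252 + 4*num_jobs) / (num_jobs*cores_per_node) ))  # 260/88 = 2.95, integer division rounds down to 2
 cores_per_job=$(( nodes_per_job * cores_per_node ))                  # 2 x 44 = 88 cores to request per job
 echo "$nodes_per_job full nodes, $cores_per_job cores per job"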

The SHARCNET Ansys license is made available on a first-come, first-served basis. Should an unusually large number of Ansys jobs be submitted on a given day, some jobs could fail on startup if insufficient licenses are available; if this occurs, resubmit your job as soon as possible. If your research requires more than 512 HPC cores (the recent new maximum limit), open a ticket to let us know. Most likely you will need to purchase (and host) your own Ansys license at your local institution; if this is urgently needed, contact your local SimuTech office for a quote. If, however, enough researchers express the same need over time, acquiring a larger Ansys license on the next renewal cycle may be possible.

Researchers can also purchase their own Ansys license subscription from CMC and use their remote license servers. Doing so has several benefits: 1) a local institutional license server is not needed; 2) a physical license does not need to be obtained upon each renewal; 3) the license can be used almost anywhere, including at home, at institutions, or on any Alliance cluster across Canada; and 4) download and installation instructions for the Windows version of Ansys are provided, so researchers can run SpaceClaim on their own computer (not possible on the Alliance since all systems are Linux based). There is, however, one potentially serious limitation: according to the CMC Ansys Quick Start Guides, there may be a 64-core limit per user.

License server file

To use the SHARCNET Ansys license on any Alliance cluster, simply configure your ansys.lic file as follows:

[username@cluster:~] cat ~/.licenses/ansys.lic
setenv("ANSYSLMD_LICENSE_FILE", "1055@license3.sharcnet.ca")
setenv("ANSYSLI_SERVERS", "2325@license3.sharcnet.ca")

Query license server

To show the number of licenses in use by your username and the total in use by all users, run:

ssh graham.computecanada.ca
module load ansys
lmutil lmstat -c $ANSYSLMD_LICENSE_FILE -a | grep "Users of\|$USER"

If you discover any licenses unexpectedly in use by your username (usually due to Ansys not exiting cleanly on gra-vdi), connect to the node where it is running, open a terminal window and run the command pkill -9 -e -u $USER -f "ansys" to terminate the rogue processes, after which your licenses should be freed. Note that gra-vdi consists of two nodes (gra-vdi3 and gra-vdi4) onto which researchers are randomly placed when connecting to gra-vdi.computecanada.ca with TigerVNC. Therefore it is necessary to specify the full hostname (gra-vdi3.sharcnet.ca or gra-vdi4.sharcnet.ca) when connecting with TigerVNC, to ensure you log in to the correct node before running pkill.

Local VDI modules

When using gra-vdi, researchers have the choice of loading Ansys modules from our global environment (after loading CcEnv) or loading Ansys modules installed locally on the machine itself (after loading SnEnv). The local modules may be of interest as they include some Ansys programs and versions not yet supported by our standard environment. When starting programs from local Ansys modules, you can select the CMC license server or accept the default SHARCNET license server. Presently, the settings from ~/.licenses/ansys.lic are not used by the local Ansys modules, except when starting runwb2, where they will override the default SHARCNET license server settings. Suitable usage of Ansys programs on gra-vdi includes running a single test job interactively with up to 8 cores and/or 128G RAM, creating or modifying simulation input files, and post-processing or visualizing data.

Ansys modules

  1. Connect to gra-vdi.computecanada.ca with TigerVNC.
  2. Open a new terminal window and load a module:
    module load SnEnv ansys/2021R2, or
    module load SnEnv ansys/2021R1, or
    module load SnEnv ansys/2020R2, or
    module load SnEnv ansys/2020R1, or
    module load SnEnv ansys/2019R3
  3. Start an Ansys program by issuing one of the following:
    runwb2|fluent|cfx5|icemcfd|apdl
  4. Press y and Enter to accept the conditions
  5. Press Enter to accept the n option and use the SHARCNET license server by default (in the case of runwb2, ~/.licenses/ansys.lic will be used if present; otherwise ANSYSLI_SERVERS and ANSYSLMD_LICENSE_FILE will be used if set in your environment, for example to point to some other remote license server). If you change n to y and press Enter, the CMC license server will be used.

where cfx5 from step 3. above provides the option to start the following components:

   1) CFX-Launcher  (cfx5 -> cfx5launch)
   2) CFX-Pre       (cfx5pre)
   3) CFD-Post      (cfdpost -> cfx5post)
   4) CFX-Solver    (cfx5solve)

ansysedt modules

  1. Connect to gra-vdi.computecanada.ca with TigerVNC.
  2. Open a new terminal window and load a module:
    module load SnEnv ansysedt/2021R2, or
    module load SnEnv ansysedt/2021R1
  3. Start the Ansys Electromagnetics Desktop program by typing the following command: ansysedt
  4. Press y and Enter to accept the conditions.
  5. Press Enter to accept the n option and use the SHARCNET license server by default (note that ~/.licenses/ansysedt.lic will be used if present, otherwise ANSYSLI_SERVERS and ANSYSLMD_LICENSE_FILE will be used if set in your environment for example to some other remote license server). If you change n to y and hit enter, the CMC license server will be used.

License feature preferences previously set up with anslic_admin are no longer supported following the SHARCNET license server update (2021-09-09). If a license problem occurs, try removing the ~/.ansys directory in your /home account to clear the settings. If problems persist, please contact our technical support and provide the contents of your ~/.licenses/ansys.lic file.

Additive Manufacturing

To get started configure your ~/.licenses/ansys.lic file to point to a license server that has a valid Ansys Mechanical License. This must be done on all systems where you plan to run the software.

Enable Additive

This section describes how to make the Ansys Additive Manufacturing ACT extension available for use in your project. The steps must be performed on each cluster, for each Ansys module version with which the extension will be used. Any extensions needed by your project will also need to be installed on the cluster as described below. If you get warnings about missing extensions that are not actually needed (such as ANSYSMotion), uninstall them from your project.

Download Extension

Start Workbench

  • follow the Workbench section in Graphical use above.
  • open your project file (ending in .wbpj) with File -> Open in the Workbench GUI

Open Extension Manager

  • click ACT Start Page and the ACT Home page tab will open
  • click Manage Extensions and the Extension Manager will open

Install Extension

  • click the box with the large + sign under the search bar
  • navigate to select and install your AdditiveWizard.wbex file

Load Extension

  • click to highlight the AdditiveWizard box (loads the AdditiveWizard extension for current session only)
  • click lower right corner arrow in the AdditiveWizard box and select Load extension (loads the extension for current AND future sessions)

Unload Extension

  • click to un-highlight the AdditiveWizard box (unloads extension for the current session only)
  • click lower right corner arrow in the AdditiveWizard box and select Do not load as default (extension will not load for future sessions)

Run Additive

Gra-vdi

A user can run a single Ansys Additive Manufacturing job on gra-vdi with up to 16 cores as follows:

  • Start Workbench on Gra-vdi as described above in Enable Additive.
  • click File -> Open and select test.wbpj then click Open
  • click View -> reset workspace if you get a grey screen
  • start Mechanical, Clear Generated Data, tick Distributed, specify Cores
  • click File -> Save Project -> Solve

Check utilization:

  • open another terminal and run: top -u $USER **OR** ps u -u $USER | grep ansys
  • kill rogue processes from previous runs: pkill -9 -e -u $USER -f "ansys|mwrpcss|mwfwrapper|ENGINE"

Please note that rogue processes can persistently tie up licenses between gra-vdi login sessions or cause other unusual errors when trying to start GUI programs on gra-vdi. Although rare, rogue processes can occur if an Ansys GUI session (fluent, workbench, etc.) is not cleanly terminated by the user before vncviewer is terminated, either manually or unexpectedly, for instance due to a transient network outage or a hung filesystem. If the latter is to blame, the processes may not be killable until normal disk access is restored.

Cluster

Project preparation:

Before submitting a newly uploaded Additive project to a cluster queue (with sbatch scriptname), some preparation is required. To begin, open your simulation with the Workbench GUI (as described in the Enable Additive section above) in the same directory from which your job will be submitted, and then save it again. Be sure to use the same Ansys module version that will be used for the job. Next, create a Slurm script (as explained in the Workbench Slurm scripts section above). To perform parametric studies, change Update() to UpdateAllDesignPoints() in the Slurm script. Determine the optimal number of cores and memory by submitting several short test jobs. To avoid having to manually clear the solution and recreate all the design points in Workbench between test runs, either 1) change Save(Overwrite=True) to Save(Overwrite=False), or 2) save a copy of the original YOURPROJECT.wbpj file and the corresponding YOURPROJECT_files directory. Optionally, create and then manually run a replay file on the cluster in the respective test case directory between runs, noting that a single replay file can be used in different directories by opening it in a text editor and changing the internal FilePath setting.

module load ansys/2019R3
rm -f test_files/.lock
runwb2 -R myreplay.wbjn
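
For the Save(Overwrite=False) approach, only the batch line of the Workbench Slurm script above needs to change; a sketch, with YOURPROJECT.wbpj standing in for your project file:

 runwb2 -B -E "Update();Save(Overwrite=False)" -F YOURPROJECT.wbpj                 # single design point test run
 runwb2 -B -E "UpdateAllDesignPoints();Save(Overwrite=True)" -F YOURPROJECT.wbpj   # parametric study (production)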

Resource utilization:

Once your Additive job has been running for a few minutes, a snapshot of its resource utilization on the compute node(s) can be obtained with the following srun command. Sample output corresponding to an eight-core submission script is shown next; it can be seen that two nodes were selected by the scheduler:

[gra-login1:~] srun --jobid=myjobid top -bn1 -u $USER | grep R | grep -v top
  PID USER   PR  NI    VIRT    RES    SHR S  %CPU %MEM    TIME+  COMMAND
22843 demo   20   0 2272124 256048  72796 R  88.0  0.2  1:06.24  ansys.e
22849 demo   20   0 2272118 256024  72822 R  99.0  0.2  1:06.37  ansys.e
22838 demo   20   0 2272362 255086  76644 R  96.0  0.2  1:06.37  ansys.e
  PID USER   PR  NI    VIRT    RES    SHR S  %CPU %MEM    TIME+  COMMAND
 4310 demo   20   0 2740212 271096 101892 R 101.0  0.2  1:06.26  ansys.e
 4311 demo   20   0 2740416 284552  98084 R  98.0  0.2  1:06.55  ansys.e
 4304 demo   20   0 2729516 268824 100388 R 100.0  0.2  1:06.12  ansys.e
 4305 demo   20   0 2729436 263204 100932 R 100.0  0.2  1:06.88  ansys.e
 4306 demo   20   0 2734720 431532  95180 R 100.0  0.3  1:06.57  ansys.e

Scaling tests:

After a job completes, its "Job Wall-clock time" can be obtained from seff myjobid. Using this value, scaling tests can be performed by submitting short test jobs with an increasing number of cores. If the wall-clock time decreases by ~50% when the number of cores is doubled, then additional cores may be considered.
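
A small sketch of such a scaling sweep, assuming a hypothetical submission script test-wbpj.sh whose core count can be overridden from the sbatch command line (keep the test jobs short):

 for n in 2 4 8 16; do
   sbatch --ntasks=$n --job-name=scale-$n test-wbpj.sh   # one short test job per core count
 done
 # after the jobs finish, compare their wall-clock times:
 seff myjobid                                            # repeat for each job ID reported by sbatch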

Online documentation

The full Ansys documentation for versions back to 19.2 can be accessed by following these steps:

  1. Connect to gra-vdi.computecanada.ca with tigervnc as described here.
  2. If the Firefox browser or the Ansys Workbench is open, close it now.
  3. Start Firefox by clicking Applications -> Internet -> Firefox.
  4. Open a new terminal window by clicking Applications -> System Tools -> Mate Terminal.
  5. Start Workbench by typing the following in your terminal: module load CcEnv StdEnv/2023 ansys; runwb2
  6. Go to the upper Workbench menu bar and click Help -> ANSYS Workbench Help. The Workbench Users' Guide should appear loaded in Firefox.
  7. At this point Workbench is no longer needed so close it by clicking the >Unsaved Project - Workbench tab located along the bottom frame (doing this will bring Workbench into focus) and then click File -> Exit.
  8. In the top middle of the Ansys documentation page, click the word HOME located just left of API DOCS.
  9. Now scroll down and you should see a list of Ansys product icons and/or alphabetical ranges.
  10. Select a product to view its documentation. The documentation for the latest release version will be displayed by default. Change the version by clicking the Release Year pull down located above and just to the right of the Ansys documentation page search bar.
  11. To search for documentation corresponding to a different Ansys product, click HOME again.