Ansys


Introduction

ANSYS is a software suite for engineering simulation and 3-D design. It includes packages such as ANSYS Fluent and ANSYS CFX.

Licensing

Compute Canada is a hosting provider for ANSYS. This means that we have ANSYS software installed on our clusters, but we do not provide a generic license accessible to everyone. However, many institutions, faculties, and departments already have licenses that can be used on our clusters. Once the legal aspects of licensing are worked out, some technical configuration remains: the license server on your end must be reachable from our compute nodes. This requires our technical team to get in touch with the technical people managing your license software; in some cases, this has already been done. You should then be able to load the ANSYS modules, and the software should find its license automatically. If this is not the case, please contact our Technical support so that we can arrange this for you.

Available modules are: fluent/16.1, ansys/16.2.3, ansys/17.2, ansys/18.1, ansys/18.2, ansys/19.1, ansys/19.2, ansys/2019R2, ansys/2019R3.

Documentation

The full ANSYS documentation for versions back to 19.2 can be accessed by following these steps:

  1. connect to gra-vdi.computecanada.ca with tigervnc as described in VDI Nodes
  2. start the firefox browser by clicking: Applications -> Internet -> Firefox
  3. open a terminal window by clicking: Applications -> System Tools -> Mate Terminal
  4. start the latest version of workbench by running: module load CcEnv StdEnv ansys; runwb2
  5. in the upper right workbench menu bar click: Help -> ANSYS Workbench Help
  6. once the ANSYS documentation page has loaded into firefox do one of the following:
    • scroll down and select a topic for the currently loaded ANSYS module version, or,
    • enter a topic and ANSYS version in the search bar such as: fluent 2019 R3, apdl 2020 R2


Configuring your own license file

Our module for ANSYS is designed to look for license information in a few places. One of those places is your home folder. If you have your own license server, write the access information into the file $HOME/.licenses/ansys.lic using the following format:


File : ansys.lic

setenv("ANSYSLMD_LICENSE_FILE", "port@hostname")
setenv("ANSYSLI_SERVERS", "port@hostname")


Cluster specific settings for port@hostname are given in the following table:

License   Cluster                      ANSYSLMD_LICENSE_FILE      ANSYSLI_SERVERS            Notices
CMC       beluga                       6624@132.219.136.89        2325@132.219.136.89        None
CMC       cedar                        6624@206.12.126.25         2325@206.12.126.25         None
CMC       graham                       6624@199.241.167.222       2325@199.241.167.222       New IP Nov 1/21
CMC       narval                       6624@10.100.64.10          2325@10.100.64.10          None
SHARCNET  beluga/cedar/graham/gra-vdi  1055@license3.sharcnet.ca  2325@license3.sharcnet.ca  None
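
For example, a researcher with a CMC license subscription running jobs on graham would use the corresponding values from the table above:


File : ansys.lic

setenv("ANSYSLMD_LICENSE_FILE", "6624@199.241.167.222")
setenv("ANSYSLI_SERVERS", "2325@199.241.167.222")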

Researchers who purchase a CMC license subscription must send their Compute Canada username to <cmcsupport@cmc.ca>, otherwise license checkouts will fail. The number of cores that can be used with a CMC license is described in the Other Tricks and Tips section found here.

Local License Servers

Before a local institutional ANSYS license server can be reached from Compute Canada systems, firewall configuration changes will need to be made on both the institution side and the Compute Canada side. To start this process, contact your local ANSYS license server administrator and obtain the following information:

  1. the fully qualified hostname of the local ANSYS license server
  2. the ANSYS flex port (commonly 1055)
  3. the ANSYS licensing interconnect port (commonly 2325)
  4. the ANSYS static vendor port (site specific)

Ensure the administrator is willing to open the firewall on these three ports to accept license checkout requests from your ANSYS jobs running on Compute Canada systems. Next, open a ticket with <support@computecanada.ca>, send us the four pieces of information, and indicate which system(s) you want to run ANSYS on, for example Cedar, Beluga, Graham/Gra-vdi or Niagara.

Version Compatibility

As explained in ANSYS Platform Support, the current release (2021R2) was tested to read and open databases from the five previous releases. In addition, some products can read and open databases from releases prior to Ansys 18.1.

Cluster Batch Job Submission

The ANSYS software suite comes with multiple implementations of MPI to support parallel computation. Unfortunately, none of them supports our Slurm scheduler. For this reason, we need special instructions for each ANSYS package on how to start a parallel job. In the sections below, we give examples of submission scripts for some of the packages. If one is not covered and you want us to investigate and help you start it, please contact our Technical support.

ANSYS Fluent

Typically you would use the following procedure for running Fluent on one of the Compute Canada clusters:

  • Prepare your Fluent job using Fluent from the "ANSYS Workbench" on your Desktop machine up to the point where you would run the calculation.
  • Export the "case" file "File > Export > Case..." or find the folder where Fluent saves your project's files. The "case" file will often have a name like FFF-1.cas.gz.
  • If you already have data from a previous calculation which you want to continue, export a "data" file as well (File > Export > Data...) or find it in the same project folder (FFF-1.dat.gz).
  • Transfer the "case" file (and if needed the "data" file) to a directory on the project or scratch filesystem on the cluster. When exporting, you can save the file(s) under a more descriptive name than FFF-1.*, or rename them when uploading (see the transfer sketch after this list).
  • Now you need to create a "journal" file. Its purpose is to load the case file (and optionally the data file), run the solver and finally write the results. See the examples below and remember to adjust the filenames and the desired number of iterations.
  • If jobs frequently fail to start due to license shortages (and manual resubmission of failed jobs is not convenient), consider modifying your slurm script to requeue your job (up to 4 times) as shown in the following "Fluent Slurm Script (by node + requeue)" tab. Be aware that doing this will also requeue simulations that fail due to non-license related issues (such as divergence), resulting in lost compute time. It is therefore strongly recommended to monitor and inspect each slurm output file to confirm that each requeue attempt is license related. If you determine that a job was requeued due to a simulation issue, immediately kill the job with scancel jobid and correct the problem.
  • After running the job you can download the "data" file and import it back into Fluent with File > Import > Data....
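
The transfer and submission steps look roughly like the following (a minimal sketch; the directory, file and script names are placeholders to adapt to your own project):

# on your desktop, copy the exported case (and data) file to the cluster
scp FFF-in.cas.gz username@graham.computecanada.ca:projects/def-group/username/fluentcase/

# on a cluster login node, submit the job from the same directory
cd ~/projects/def-group/username/fluentcase
sbatch script-flu-bycore.sh
squeue -u $USER     # check the status of the queued job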

Slurm Scripts

File : script-flu-bycore.sh

#!/bin/bash

#SBATCH --account=def-group   # Specify account
#SBATCH --time=00-06:00       # Specify time limit dd-hh:mm
#SBATCH --ntasks=16           # Specify total number cores
#SBATCH --mem-per-cpu=4G      # Specify memory per core
#SBATCH --cpus-per-task=1     # Do not change

#module load StdEnv/2016
#module load ansys/2020R2     # Or older module versions

module load StdEnv/2020
module load ansys/2021R1      # Or newer module versions

slurm_hl2hl.py --format ANSYS-FLUENT > machinefile
NCORES=$((SLURM_NTASKS * SLURM_CPUS_PER_TASK))

fluent 3d -t $NCORES -cnf=machinefile -mpi=intel -affinity=0 -g -i sample.jou


File : script-flu-bynode.sh

#!/bin/bash

#SBATCH --account=def-group   # Specify account
#SBATCH --time=00-06:00       # Specify time limit dd-hh:mm
#SBATCH --nodes=1             # Specify number compute nodes (1 or more)
#SBATCH --cpus-per-task=32    # Specify number cores per node (graham 32 or 44, cedar 32 or 48, beluga 40)
#SBATCH --mem=0               # Do not change (allocates all memory per compute node)
#SBATCH --ntasks-per-node=1   # Do not change

#module load StdEnv/2016
#module load ansys/2020R2     # Or older module versions

module load StdEnv/2020
module load ansys/2021R1      # Or newer module versions

slurm_hl2hl.py --format ANSYS-FLUENT > machinefile
NCORES=$((SLURM_NTASKS * SLURM_CPUS_PER_TASK))

fluent 3d -t $NCORES -cnf=machinefile -mpi=intel -affinity=0 -g -i sample.jou


File : script-flu-bynode+requeue.sh

#!/bin/bash

#SBATCH --account=def-group   # Specify account
#SBATCH --time=00-06:00       # Specify time limit dd-hh:mm
#SBATCH --nodes=1             # Specify number compute nodes (1 or more)
#SBATCH --cpus-per-task=32    # Specify number cores per node (graham 32 or 44, cedar 32 or 48, beluga 40)
#SBATCH --array=1-4%1         # Specify number requeue attempts (2 or more)
#SBATCH --mem=0               # Do not change (allocates all memory per compute node)
#SBATCH --ntasks-per-node=1   # Do not change

#module load StdEnv/2016
#module load ansys/2020R2     # Or older module versions

module load StdEnv/2020
module load ansys/2021R1      # Or newer module versions

slurm_hl2hl.py --format ANSYS-FLUENT > machinefile
NCORES=$((SLURM_NTASKS * SLURM_CPUS_PER_TASK))

fluent 3d -t $NCORES -cnf=machinefile -mpi=intel -affinity=0 -g -i sample.jou
if [ $? -eq 0 ]; then
    echo "Job completed successfully! Exiting now."
    scancel $SLURM_ARRAY_JOB_ID
else
    echo "Job failed due to license or simulation issue!"
    if [ $SLURM_ARRAY_TASK_ID -lt $SLURM_ARRAY_TASK_COUNT ]; then
       echo "Resubmitting now ..."
    else
       echo "Exiting now."
    fi
fi


Journal Files

Fluent Journal files can include basically any command from Fluent's Text-User-Interface (TUI); commands can be used to change simulation parameters like temperature, pressure and flow speed. With this you can run a series of simulations under different conditions with a single case file, by only changing the parameters in the Journal file. Refer to the Fluent User's Guide for more information and a list of all commands that can be used.

File : sample1.jou

; SAMPLE FLUENT JOURNAL FILE - STEADY SIMULATION
; ----------------------------------------------
; lines beginning with a semicolon are comments

; Read input file (FFF-in.cas):
/file/read-case  FFF-in

; Run the solver for this many iterations:
/solve/iterate 1000

; Overwrite output files by default:
/file/confirm-overwrite n

; Write final output file (FFF-out.dat):
/file/write-data  FFF-out

; Write simulation report to file (optional):
/report/summary y "My_Simulation_Report.txt"

; Exit fluent:
exit


File : sample2.jou

; SAMPLE FLUENT JOURNAL FILE - STEADY SIMULATION
; ----------------------------------------------
; lines beginning with a semicolon are comments

; Read compressed input files (FFF-in.cas.gz & FFF-in.dat.gz):
/file/read-case-data  FFF-in.gz

; Write a compressed data file every 100 iterations:
/file/auto-save/data-frequency 100

; Retain data files from 5 most recent iterations:
/file/auto-save/retain-most-recent-files y

; Write data files to output sub-directory (appends iteration)
/file/auto-save/root-name output/FFF-out.gz

; Run the solver for this many iterations:
/solve/iterate 1000

; Write final compressed output files (FFF-out.cas.gz & FFF-out.dat.gz):
/file/write-case-data  FFF-out.gz

; Write simulation report to file (optional):
/report/summary y "My_Simulation_Report.txt"

; Exit fluent:
exit


File : sample3.jou

; SAMPLE FLUENT JOURNAL FILE - TRANSIENT SIMULATION
; -------------------------------------------------
; lines beginning with a semicolon are comments

; Read only the input case file:
/file/read-case         "FFF-transient-inp.gz"

; For continuation (restart) read in both case and data input files:
;/file/read-case-data  "FFF-transient-inp.gz"

; Write a data (and maybe case) file every 100 time steps:
/file/auto-save/data-frequency 100
/file/auto-save/case-frequency if-case-is-modified

; Retain only the most recent 5 data (and maybe case) files:
; [saves disk space if only a recent continuation file is needed]
/file/auto-save/retain-most-recent-files y

; Write to output sub-directory (appends flowtime and timestep)
/file/auto-save/root-name output/FFF-transient-out-%10.6f.gz

; ##### settings for Transient simulation :  ######
; Set the magnitude of the (physical) time step (delta-t)
/solve/set/time-step   0.0001

; Set the number of time steps for a transient simulation:
/solve/set/max-iterations-per-time-step   20

; Set the number of iterations for which convergence monitors are reported:
/solve/set/reporting-interval   1

; ##### End of settings for Transient simulation. ######

; Initialize using the hybrid initialization method:
/solve/initialize/hyb-initialization

; Perform unsteady iterations for a specified number of time steps:
/solve/dual-time-iterate   1000

; Write final case and data output files:
/file/write-case-data  "FFF-transient-out.gz"

; Write simulation report to file (optional):
/report/summary y "Report_Transient_Simulation.txt"

; Exit fluent:
exit


ANSYS CFX

File : script-cfx.sh

#!/bin/bash

#SBATCH --account=def-group   # Specify account name
#SBATCH --time=00-06:00       # Specify time limit dd-hh:mm
#SBATCH --nodes=1             # Specify number compute nodes (1 or more)
#SBATCH --cpus-per-task=32    # Specify number cores per node (graham 32 or 44, cedar 32 or 48, beluga 40)
#SBATCH --mem=0               # Do not change (allocate all memory per compute node)
#SBATCH --ntasks-per-node=1   # Do not change

#module load StdEnv/2016
#module load ansys/2020R2     # Or older module versions

module load StdEnv/2020
module load ansys/2021R1      # Or newer module versions

NNODES=$(slurm_hl2hl.py --format ANSYS-CFX)

# other options may be added to the following command line as needed
cfx5solve -def YOURFILE.def -start-method "Intel MPI Distributed Parallel" -par-dist $NNODES

Note: You may get errors such as /etc/tmi.conf: No such file or directory in your output file; they do not seem to affect the computation.

WORKBENCH

Before submitting a job to the queue with sbatch script-wbpj.sh, several settings in the YOURPROJECT.wbpj file must be initialized to be compatible with the settings in your slurm script. To do this:

  1. start the ANSYS workbench gui with the runwb2 command as described in the Graphical Use section
  2. click File -> Open to load your workbench project file
  3. double click Solution in the center window to start Mechanical
  4. under the Solution tab (found in the top menu bar) locate the Solve panel *OR* click File -> Solve Process Settings -> My Computer -> Advanced
  5. tick the Distributed box
  6. specify a numeric value for Cores equal to the number of ntasks specified in your slurm script (if using AUTODYN specify Cores=ntasks-1 instead)
  7. click File -> Clear Generated Data -> Yes when asked "Do you want to clear all results and any restart points?" in the popup that appears
  8. click File -> Save Project
  9. click File -> Close Mechanical
  10. click File -> Exit to end your workbench gui session

NOTE 1: You can avoid saving the solution when the simulation completes by removing ;Save(Overwrite=True) from the last line of the slurm script. Doing so allows running test jobs without overwriting the initialized solution.
NOTE 2: For APDL-based simulations, the line with nodes=1 may be removed from the slurm script (or its value changed to be greater than 1) to permit computations across multiple nodes.

File : script-wbpj.sh

#!/bin/bash
#SBATCH --account=def-account
#SBATCH --time=00-03:00                # Time (DD-HH:MM)
#SBATCH --mem-per-cpu=2G               # Memory per core
#SBATCH --ntasks=8                     # Number of cores
#SBATCH --nodes=1                      # Do not change

unset SLURM_GTIDS

rm -f YOURPROJECT_files/.lock          # remove a stale lock file left by a previous session (directory name matches your project)

module load StdEnv/2016 ansys/2019R3   # Do not change

export I_MPI_HYDRA_BOOTSTRAP=ssh; export KMP_AFFINITY=balanced
export PATH=/cvmfs/soft.computecanada.ca/nix/var/nix/profiles/16.09/bin:$PATH
runwb2 -B -E "Update();Save(Overwrite=True)" -F YOURPROJECT.wbpj


MECHANICAL

The input file can be generated from within your interactive Workbench Mechanical session by clicking Solution -> Tools -> Write Input Files then specify File name: YOURAPDLFILE.inp and Save as type: APDL Input Files (*.inp). APDL jobs can then be submitted to the queue by running the sbatch script-name.sh command. The ANSYS modules given in each script were tested on graham and should work without issue (uncomment one). Once the scripts are tested on other clusters they will be updated below if required.

File : script-smp-2016.sh

#!/bin/bash
#SBATCH --account=def-account  # Specify your account
#SBATCH --time=00-03:00        # Specify time (DD-HH:MM)
#SBATCH --mem=16G              # Specify memory for all cores
#SBATCH --ntasks=8             # Specify number of cores (1 or more)
#SBATCH --nodes=1              # Specify one node (do not change)

unset SLURM_GTIDS

module load StdEnv/2016

#module load ansys/19.1
#module load ansys/19.2
#module load ansys/2019R2
#module load ansys/2019R3
#module load ansys/2020R1
module load ansys/2020R2

mapdl -smp -b nolist -np $SLURM_NTASKS -dir $SLURM_TMPDIR -i YOURAPDLFILE.inp


File : script-smp-2020.sh

#!/bin/bash
#SBATCH --account=def-account  # Specify your account
#SBATCH --time=00-03:00        # Specify time (DD-HH:MM)
#SBATCH --mem=16G              # Specify memory for all cores
#SBATCH --ntasks=8             # Specify number of cores (1 or more)
#SBATCH --nodes=1              # Specify one node (do not change)

unset SLURM_GTIDS

module load StdEnv/2020

#module load ansys/2021R1
module load ansys/2021R2

mapdl -smp -b nolist -np $SLURM_NTASKS -dir $SLURM_TMPDIR -i YOURAPDLFILE.inp


File : script-dis-2016.sh

#!/bin/bash
#SBATCH --account=def-account  # Specify your account
#SBATCH --time=00-03:00        # Specify time (DD-HH:MM)
#SBATCH --mem-per-cpu=2G       # Specify memory per core
#SBATCH --ntasks=8             # Specify number of cores (2 or more)
##SBATCH --nodes=2             # Specify number of nodes (optional)
##SBATCH --ntasks-per-node=4   # Specify cores per node (optional)

unset SLURM_GTIDS

module load StdEnv/2016

#module load ansys/2019R3
module load ansys/2020R1

export I_MPI_HYDRA_BOOTSTRAP=ssh; export KMP_AFFINITY=compact
mapdl -dis -mpi intelmpi -b nolist -np $SLURM_NTASKS -dir $SLURM_TMPDIR -i YOURAPDLFILE.inp


File : script-dis-2020.sh

#!/bin/bash
#SBATCH --account=def-account  # Specify your account
#SBATCH --time=00-03:00        # Specify time (DD-HH:MM)
#SBATCH --mem-per-cpu=2G       # Specify memory per core
#SBATCH --ntasks=8             # Specify number of cores (2 or more)
##SBATCH --nodes=2             # Specify number of nodes (optional)
##SBATCH --ntasks-per-node=4   # Specify cores per node (optional)

unset SLURM_GTIDS

module load StdEnv/2020

#module load ansys/2021R1
module load ansys/2021R2

mapdl -dis -mpi openmpi -b nolist -np $SLURM_NTASKS -dir $SLURM_TMPDIR -i YOURAPDLFILE.inp


ANSYS allocates 1024 MB of total memory and 1024 MB of database memory by default for APDL jobs. These values can be manually specified (or changed) by adding the arguments -m 1024 and/or -db 1024 to the last mapdl command line in the above slurm scripts; an example command line is shown after the version check below. When using a remote institutional license server with multiple ANSYS licenses, it may be necessary to add arguments such as -p aa_r or -ppf anshpc. As always, perform detailed scaling tests before running production jobs to ensure the optimal number of cores and the minimum amount of memory are specified in your slurm scripts. The single node SMP (Shared Memory Parallel) script will perform better than the multiple node DIS (Distributed Memory Parallel) script and therefore should be used whenever possible. To help avoid compatibility issues, the ansys module loaded in your slurm script should ideally match the version used to generate the input file:

 [gra-login2:~/ansys/mechanical/demo] cat YOURAPDLFILE.inp | grep version
! ANSYS input file written by Workbench version 2019 R3
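
For example, to set these memory values explicitly in the distributed 2020 script above, the last line could read (the 1024 MB values match the defaults mentioned above; adjust them based on your own tests):

mapdl -dis -mpi openmpi -b nolist -np $SLURM_NTASKS -m 1024 -db 1024 -dir $SLURM_TMPDIR -i YOURAPDLFILE.inp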

ANSYS EDT

Ansys Electronic Desktop jobs can be submitted to the cluster queue by running the sbatch script-name.sh command. The following script allows running a job with up to all the cores and memory on a single node and was tested on graham. To use it, specify the simulation time, memory and number of cores, and replace YOUR_AEDT_FILE with your input file name. A full listing of ansysedt command line options can be obtained by starting ansysedt in Graphical Mode (see https://docs.computecanada.ca/wiki/ANSYS#Graphical_Use) with the command ansysedt -help, or ansysedt -Batchoptionhelp to obtain scrollable graphical popups. Additional tabs containing slurm scripts for submitting distributed jobs over multiple nodes will be added to the following table as soon as possible. At present only ansysedt/2021R2 is installed (newer versions will be installed when released). Ansysedt can also be run interactively by starting a salloc session on a compute node (request sufficient memory and cores) and then issuing the command found in the last line of the following script-local-cmd.sh slurm script; a short interactive sketch is given after that script.

File : script-local-cmd.sh

#!/bin/bash

#SBATCH --account=account      # Specify your account (def or rrg)
#SBATCH --time=00-01:00        # Specify time (DD-HH:MM)
#SBATCH --mem=16G              # Specify memory (0 to use all memory on the node)
#SBATCH --ntasks=16            # Specify cores (graham 32 or 44, cedar 32 or 48, beluga 40)
#SBATCH --nodes=1              # Request one node (Do Not Change)

module load StdEnv/2020
module load ansysedt/2021R2

# The next line copies a test example into the working directory (remove it when using your own input file):
cp -f $EBROOTANSYSEDT/AnsysEM21.2/Linux64/Examples/HFSS/Antennas/TransientGeoRadar.aedt .

# Specify input file such as:
YOUR_AEDT_FILE="TransientGeoRadar.aedt"

# Remove previous output:
rm -rf $YOUR_AEDT_FILE.* ${YOUR_AEDT_FILE}results

# ---- do not change anything below this line ---- #

echo -e "\nANSYSLI_SERVERS= $ANSYSLI_SERVERS"
echo "ANSYSLMD_LICENSE_FILE= $ANSYSLMD_LICENSE_FILE"
echo -e "SLURM_TMPDIR= $SLURM_TMPDIR on $SLURMD_NODENAME\n"

ansysedt -monitor -UseElectronicsPPE -ng -distributed -machinelist list=localhost:1:$SLURM_NTASKS -batchoptions \
       "'TempDirectory'=$SLURM_TMPDIR 'HPCLicenseType'='pool' 'HFSS/EnableGPU'=0" -batchsolve $YOUR_AEDT_FILE

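A minimal sketch of the interactive approach mentioned above (the salloc resource values are placeholders; request what your simulation actually needs, and run the commands from the directory containing your .aedt file):

salloc --time=3:00:00 --ntasks=8 --mem=16G --account=def-group
module load StdEnv/2020 ansysedt/2021R2
ansysedt -monitor -UseElectronicsPPE -ng -distributed -machinelist list=localhost:1:$SLURM_NTASKS -batchoptions \
       "'TempDirectory'=$SLURM_TMPDIR 'HPCLicenseType'='pool' 'HFSS/EnableGPU'=0" -batchsolve YOUR_AEDT_FILE.aedt
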

File : script-local-opt.sh

#!/bin/bash

#SBATCH --account=account      # Specify your account (def or rrg)
#SBATCH --time=00-01:00        # Specify time (DD-HH:MM)
#SBATCH --mem=16G              # Specify memory (0 to use all memory on the node)
#SBATCH --ntasks=16            # Specify cores (graham 32 or 44, cedar 32 or 48, beluga 40)
#SBATCH --nodes=1              # Request one node (Do Not Change)

module load StdEnv/2020
module load ansysedt/2021R2

# The next line copies a test example into the working directory (remove it when using your own input file):
cp -f $EBROOTANSYSEDT/AnsysEM21.2/Linux64/Examples/HFSS/Antennas/TransientGeoRadar.aedt .

# Specify input filename such as:
YOUR_AEDT_FILE="TransientGeoRadar.aedt"

# Remove previous output:
rm -rf $YOUR_AEDT_FILE.* ${YOUR_AEDT_FILE}results

# Specify options filename:
OPTIONS_TXT="Options.txt"

# Write sample options file
rm -f $OPTIONS_TXT
cat > $OPTIONS_TXT <<EOF
\$begin 'Config'
'TempDirectory'='$SLURM_TMPDIR'
'HPCLicenseType'='pool'
'HFSS/EnableGPU'=0
\$end 'Config'
EOF

# ---- do not change anything below this line ---- #

echo -e "\nANSYSLI_SERVERS= $ANSYSLI_SERVERS"
echo "ANSYSLMD_LICENSE_FILE= $ANSYSLMD_LICENSE_FILE"
echo -e "SLURM_TMPDIR= $SLURM_TMPDIR on $SLURMD_NODENAME\n"

ansysedt -monitor -UseElectronicsPPE -ng -distributed -machinelist list=localhost:1:$SLURM_NTASKS \
              -batchoptions $OPTIONS_TXT -batchsolve $YOUR_AEDT_FILE


Graphical Use

ANSYS programs may be run interactively in gui mode on cluster Compute Nodes or on graham VDI Nodes.

Compute Nodes

ANSYS can be run interactively on cluster compute nodes for up to 24 hours using TigerVNC. This approach is ideal for testing computationally intensive simulations since all available cores and memory can be used. Once connected, open a terminal window and start one of the following programs:

FLUIDS
module load StdEnv/2016 ansys/2020R2 (or older versions)
module load StdEnv/2020 ansys/2021R1 (or newer versions)
fluent|cfx5
WORKBENCH
module load StdEnv/2016 ansys/2019R3 (other versions are being tested)
export KMP_AFFINITY=balanced; export I_MPI_HYDRA_BOOTSTRAP=ssh
export PATH=$EBROOTNIXPKGS/bin:$PATH
runwb2
ELECTRONICS DESKTOP
module load CcEnv StdEnv/2020 ansysedt/2021R2
rm -rf ~/.mw (optionally force First-time configuration)
ansysedt
ENSIGHT
module load StdEnv/2016 ansys/2019R3; A=195; B=5.10.1, or
module load StdEnv/2016 ansys/2020R1; A=201; B=5.10.1, or
module load StdEnv/2016 ansys/2020R2; A=202; B=5.12.6, or
module load StdEnv/2020 ansys/2021R1; A=211; B=5.12.6, or
module load StdEnv/2020 ansys/2021R2; A=212; B=5.12.6
export LD_LIBRARY_PATH=$EBROOTANSYS/v$A/CEI/apex$A/machines/linux_2.6_64/qt-$B/lib
ensight -X

ASIDE: Some ANSYS gui programs can be run remotely on a cluster compute node by X forwarding over ssh to your local desktop. Unlike VNC, this approach is untested and unsupported since it relies on a properly set up X display server for your particular operating system OR the selection, installation and configuration of a suitable X client emulator package such as MobaXterm. Most users will find interactive response times unacceptably slow for basic menu tasks, let alone for more complex tasks such as those involving graphics rendering. Startup times for gui programs can also be very slow, depending on your internet connection. For example, in one test it took 40 minutes to fully start ansysedt over ssh, while starting it with vncviewer required only 34 seconds. Despite this, the approach may be of interest if the only goal is to open a simulation and run some calculations in the gui. The basic steps are therefore given here as a starting point: 1) ssh -Y username@graham.computecanada.ca 2) salloc --x11 --time=1:00:00 --cpus-per-task=1 --mem=16000 --account=def-mygroup 3) once on a compute node try running xclock to check that the analog clock appears on your desktop; if it does, then 4) load the needed modules and try running the program.
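
The same steps in command form (a minimal sketch; the salloc resource values are only examples):

ssh -Y username@graham.computecanada.ca
salloc --x11 --time=1:00:00 --cpus-per-task=1 --mem=16000 --account=def-mygroup
xclock                                 # confirm the analog clock appears on your local desktop
module load StdEnv/2020 ansys/2021R1   # or another module combination listed above
fluent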

VDI Nodes

ANSYS programs can be run for up to 24 hours on graham VDI Nodes using a maximum of 8 cores and 128 GB of memory. The VDI system provides GPU OpenGL acceleration and is therefore ideal for tasks that benefit from high-performance graphics. One might use VDI to create or modify simulation input files, post-process data, or visualize simulation results. To get started, log in to gra-vdi.computecanada.ca with TigerVNC, then open a new terminal window and start one of the following supported program versions exactly as shown below:

FLUIDS
module load CcEnv StdEnv/2020 ansys/2021R2, or
module load CcEnv StdEnv/2020 ansys/2021R1, or
module load CcEnv StdEnv/2016 ansys/2020R2, or
module load CcEnv StdEnv/2016 ansys/2020R1, or
module load CcEnv StdEnv/2016 ansys/2019R3
export HOOPS_PICTURE=opengl
fluent|cfx5|icemcfd
WORKBENCH
module load SnEnv ansys/2021R2, or
module load SnEnv ansys/2021R1, or
module load SnEnv ansys/2020R2
runwb2
------------------------------------------------------------------------------------
module load CcEnv StdEnv/2016 ansys/2020R1, or
module load CcEnv StdEnv/2016 ansys/2019R3
export PATH=$EBROOTNIXPKGS/bin:$PATH
runwb2
ELECTRONICS DESKTOP
module load CcEnv StdEnv/2020 ansysedt/2021R2
rm -rf ~/.mw (optionally force First-time configuration)
ansysedt
ENSIGHT
module load SnEnv ansys/2021R2, or
module load SnEnv ansys/2021R1
ensight
------------------------------------------------------------------------------------
module load CcEnv StdEnv/2016 ansys/2020R2
ensight

Site Specific Usage

Sharcnet License

The SHARCNET Ansys license is free for use by any Compute Canada user on any Compute Canada system. Like the commercial version, the software has no solver or geometry limits; however, it may only be used for the purpose of publishable academic research. The license was upgraded from CFD to MCS (Multiphysics Campus Solution) in May of 2020 and includes the following ANSYS products: HF, EM, Electronics HPC, Mechanical and CFD as described here. Neither LS-DYNA nor Lumerical is included.

In July of 2021 an additional 1024 anshpc licenses were added; as a result, researchers can now start up to 4 jobs using a total of 124 anshpc (approximately double that of 2020), plus 4 anshpc per job. Therefore a single 128-core job OR two 64-core jobs can be submitted to run on four OR two full 32-core Graham Broadwell nodes, respectively. A further limit increase to 172 anshpc is presently being considered to support launching 176-core jobs onto four full 44-core Graham Cascade Lake nodes.

The SHARCNET Ansys license is made available on a first-come, first-served basis. Therefore, if a larger than usual number of ANSYS jobs are submitted on a given day, some jobs could fail on startup should insufficient licenses be available. Such events are expected to be rare given the recent increase in anshpc licenses. If your research requires more licenses than are available from SHARCNET, a dedicated researcher-purchased license will be required. Researchers can purchase an Ansys license directly from Simutech, where an extra 20% country-wide uplift fee must be added if the cluster where the license will be used is not co-located at your institution. A dedicated Ansys license can be hosted on a local institutional license server OR transferred in part or in full to the SHARCNET Ansys license server. In the former case, the researcher would simply need to reconfigure their ~/.licenses/ansys.lic file on graham (or beluga or cedar) as described at the top of this page. In the latter case, the researcher should instead open a ticket with SHARCNET to inquire about starting the license transfer process.

Depending on SHARCNET Ansys license utilization, the per-user feature limits may need to be changed; notice will be posted here a minimum of 2 weeks in advance. Large jobs that do not achieve an effective CPU utilization of at least 30% will be flagged by the system, and you will likely be contacted by a Compute Canada analyst.
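
As a rough worked example of these core limits (assuming the 4 anshpc granted per job cover the first 4 cores of that job, so that each additional core consumes one anshpc):

single 128-core job:   128 - 4 = 124 anshpc        (at the current 124 anshpc limit)
two 64-core jobs:      2 x (64 - 4) = 120 anshpc   (within the current limit)
single 176-core job:   176 - 4 = 172 anshpc        (would require the proposed 172 anshpc limit)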

License Server File

To use the Sharcnet ANSYS license configure your ansys.lic file as follows:

[gra-login1:~/.licenses] cat ansys.lic
setenv("ANSYSLMD_LICENSE_FILE", "1055@license3.sharcnet.ca")
setenv("ANSYSLI_SERVERS", "2325@license3.sharcnet.ca")

Query License Server

To show the number of licenses in use by your username and the total in use by all users, run:

ssh graham.computecanada.ca
module load ansys
lmutil lmstat -c $ANSYSLMD_LICENSE_FILE -a | grep "Users of\|$USER"

If you discover any licenses unexpectedly in use by your username (usually due to ANSYS not exiting cleanly on gra-vdi), connect to the node where it is running, open a terminal window and run the following command to terminate the rogue processes: pkill -9 -e -u $USER -f "ansys". Your licenses should then be freed. Note that gra-vdi consists of two nodes (gra-vdi3 and gra-vdi4) onto which researchers are randomly placed when connecting to gra-vdi.computecanada.ca with TigerVNC. It is therefore necessary to specify the full hostname (gra-vdi3.sharcnet.ca or gra-vdi4.sharcnet.ca) when connecting with TigerVNC, to ensure you log in to the correct node before running pkill.

Local VDI Modules

When using gra-vdi, researchers have the choice of loading ANSYS modules from the global Compute Canada environment (after loading CcEnv) or loading ANSYS modules installed locally on the machine itself (after loading SnEnv). The local modules may be of interest as they include some Ansys programs and versions not yet supported by the Compute Canada environment for graphics use on gra-vdi or the clusters. When starting programs from the local Ansys modules, users can select the CMC license server or accept the default SHARCNET license server. Presently the settings from ~/.licenses/ansys.lic are not used by the local Ansys modules, except when starting runwb2, where they will override the default SHARCNET license server settings. Suitable usage of Ansys programs on gra-vdi includes: running a single test job interactively with up to 8 cores and/or 128 GB of RAM, creating or modifying simulation input files, and post-processing or visualizing data.

ansys Modules

  1. Connect to gra-vdi.computecanada.ca with TigerVNC
  2. Open a new terminal window and load a module:
    module load SnEnv ansys/2021R2, or
    module load SnEnv ansys/2021R1, or
    module load SnEnv ansys/2020R2, or
    module load SnEnv ansys/2020R1, or
    module load SnEnv ansys/2019R3
  3. Start an ANSYS program by issuing one of the following:
    runwb2|fluent|cfx5|icemcfd|apdl
  4. Press y then enter to accept the conditions
  5. Press enter to accept the n option and use the SHARCNET license server by default (in the case of runwb2 ~/.licenses/ansysedt.lic will be used if present otherwise ANSYSLI_SERVERS and ANSYSLMD_LICENSE_FILE will be used if set in your environment for example to some other remote license server). If you change n to y and hit enter the CMC license server will be used.

where cfx5 from step 3. above provides the option to start the following components:

   1) CFX-Launcher  (cfx5 -> cfx5launch)
   2) CFX-Pre       (cfx5pre)
   3) CFD-Post      (cfdpost -> cfx5post)
   4) CFX-Solver    (cfx5solve)

ansysedt Modules

  1. Connect to gra-vdi.computecanada.ca with TigerVNC
  2. Open a new terminal window and load a module:
    module load SnEnv ansysedt/2021R2, or
    module load SnEnv ansysedt/2021R1
  3. Start the ANSYS Electromagnetics Desktop program by typing the following command: ansysedt
  4. Press y then enter to accept the conditions.
  5. Press enter to accept the n option and use the SHARCNET license server by default (note that ~/.licenses/ansysedt.lic will be used if present otherwise ANSYSLI_SERVERS and ANSYSLMD_LICENSE_FILE will be used if set in your environment for example to some other remote license server). If you change n to y and hit enter then the CMC license server will be used.

License feature preferences previously set up with anslic_admin are no longer supported following the recent SHARCNET license server update (Sept 9/2021). If a license problem occurs, try removing the ~/.ansys directory in your home account to clear the settings. If problems persist, please open a problem ticket with <support@computecanada.ca> and provide the contents of your ~/.licenses/ansys.lic file.
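
For example, the settings can be cleared from a terminal on the system where the problem occurs with:

rm -rf ~/.ansys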

Additive Manufacturing

To get started configure your ~/.licenses/ansys.lic file to point to a license server that has a valid ANSYS Mechanical License. This must be done on all systems where you plan to run the software.

Enable Additive

To enable ANSYS Additive Manufacturing in your project do the following 3 steps:

Start Workbench

  • start workbench as described in the Graphical Use - WORKBENCH section found above.

Install Extension

  • click Extensions -> Install Extension
  • specify the following /path/to/AdditiveWizard.wbex then click Open: /cvmfs/restricted.computecanada.ca/easybuild/software/2017/Core/ansys/2019R3/v195/aisol/WBAddins/MechanicalExtensions/AdditiveWizard.wbex

Load Extension

  • click Extensions -> Manage Extensions and tick Additive Wizard
  • click the ACT Start Page tab X to return to your Project tab

Run Additive

Gra-vdi

A user can run a single ANSYS Additive Manufacturing job on gra-vdi with up to 16 cores as follows:

  • Start Workbench On Gra-vdi as described above in Enable Additive
  • click File -> Open and select test.wbpj then click Open
  • click View -> reset workspace if you get a grey screen
  • start Mechanical, Clear Generated Data, tick Distributed, specify Cores
  • click File -> Save Project -> Solve

Check utilization:

  • open another terminal and run: top -u $USER
  • kill rogue processes from previous runs if required: pkill -9 -e -u $USER -f "ansys"

Cluster

Project preparation:

To submit an Additive job to a cluster queue, you must first prepare your additive simulation to run on a Compute Canada cluster. To do this, open your simulation as described in the Enable Additive section above, then save it. Next, create a slurm script as explained in the Cluster Batch Job Submission - WORKBENCH section above. For parametric studies, change Update() to UpdateAllDesignPoints() in your script (an example command line follows the replay commands below) and submit the job to the queue with the sbatch scriptname command. For initial performance testing, you can avoid writing the solution by specifying Overwrite=False in the slurm script, so further runs can be conducted without needing to reopen the simulation in workbench (and mechanical) to clear the solution and recreate the design points. Another option is to create a replay script to perform these tasks and then manually run it on the cluster between runs as follows. The replay file can be modified for use in different directories by using a text editor to manually change its internal FilePath setting.

module load ansys/2019R3
rm -f test_files/.lock
runwb2 -R myreplay.wbjn
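
For example, for a parametric study the last line of the script-wbpj.sh slurm script shown in the WORKBENCH section above might become one of the following (a sketch; the second form avoids overwriting the initialized solution during performance testing):

runwb2 -B -E "UpdateAllDesignPoints();Save(Overwrite=True)" -F YOURPROJECT.wbpj
runwb2 -B -E "UpdateAllDesignPoints();Save(Overwrite=False)" -F YOURPROJECT.wbpj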

Resource utilization:

Once your additive job has been running for a few minutes, a snapshot of its resource utilization on the compute node(s) can be obtained with the following srun command. Sample output corresponding to an eight-core submission script is shown next; it can be seen that two nodes were selected by the scheduler:

[gra-login1:~] srun --jobid=myjobid top -bn1 -u $USER | grep R | grep -v top
  PID USER   PR  NI    VIRT    RES    SHR S  %CPU %MEM    TIME+  COMMAND
22843 demo   20   0 2272124 256048  72796 R  88.0  0.2  1:06.24  ansys.e
22849 demo   20   0 2272118 256024  72822 R  99.0  0.2  1:06.37  ansys.e
22838 demo   20   0 2272362 255086  76644 R  96.0  0.2  1:06.37  ansys.e
  PID USER   PR  NI    VIRT    RES    SHR S  %CPU %MEM    TIME+  COMMAND
 4310 demo   20   0 2740212 271096 101892 R 101.0  0.2  1:06.26  ansys.e
 4311 demo   20   0 2740416 284552  98084 R  98.0  0.2  1:06.55  ansys.e
 4304 demo   20   0 2729516 268824 100388 R 100.0  0.2  1:06.12  ansys.e
 4305 demo   20   0 2729436 263204 100932 R 100.0  0.2  1:06.88  ansys.e
 4306 demo   20   0 2734720 431532  95180 R 100.0  0.3  1:06.57  ansys.e

Scaling tests:

After a job completes, its "Job Wall-clock time" can be obtained from seff myjobid. Using this value, scaling tests can be performed by submitting short test jobs with an increasing number of cores. If the wall-clock time decreases by roughly 50% when the number of cores is doubled, then additional cores may be considered.
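
For example, the wall-clock time can be extracted with:

seff myjobid | grep "Job Wall-clock time"

As a hypothetical illustration (made-up timings): if an 8-core test job reports 2:00:00 and a 16-core test job reports 1:05:00, doubling the cores reduced the time by roughly 46%, close to the ideal 50%, so a 32-core test is worthwhile. If 32 cores only brings the time down to about 0:55:00, the additional cores are poorly used and 16 cores is the better choice for production runs.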