ANSYS
Introduction
ANSYS is a software suite for engineering simulation and 3-D design. It includes packages such as ANSYS Fluent and ANSYS CFX.
Licensing
Compute Canada is a hosting provider for ANSYS. This means that we have ANSYS software installed on our clusters, but we do not provide a generic license accessible to everyone. However, many institutions, faculties, and departments already have licenses that can be used on our clusters. Once the legal aspects of licensing are worked out, some technical work remains: the license server on your end must be reachable by our compute nodes, which requires our technical team to get in touch with the technical people managing your license software. In some cases, this has already been done. You should then be able to load the ANSYS modules and they should find their license automatically. If this is not the case, please contact our Technical support so that we can arrange this for you.
Available modules are: fluent/16.1, ansys/16.2.3, ansys/17.2, ansys/18.1, ansys/18.2, ansys/19.1, ansys/19.2, ansys/2019R2, ansys/2019R3.
Documentation
The full ANSYS documentation (for the latest version) can be accessed by following these steps:
- connect to gra-vdi.computecanada.ca with tigervnc as described in VDI Nodes
- open a terminal window and start workbench:
- module load CcEnv StdEnv ansys
- runwb2
- in the upper pulldown menu click the sequence:
- Help -> ANSYS Workbench Help
- once the ANSYS Help page appears click:
- Home
Configuring your own license file
Our module for ANSYS is designed to look for license information in a few places. One of those places is your home folder. If you have your own license server, write the access information into the file $HOME/.licenses/ansys.lic using the following format:
setenv("ANSYSLMD_LICENSE_FILE", "<port>@<hostname>")
setenv("ANSYSLI_SERVERS", "<port>@<hostname>")
The CMC license server or the free SHARCNET license server can be specified using the cluster-specific settings shown in the following table:
License | Cluster | ANSYSLMD_LICENSE_FILE | ANSYSLI_SERVERS | anshpc | Notices
--- | --- | --- | --- | --- | ---
CMC | beluga | 6624@132.219.136.89 | 2325@132.219.136.89 | 60 | None
CMC | cedar | 6624@206.12.126.25 | 2325@206.12.126.25 | 60 | None
CMC | graham | 6624@199.241.162.97 | 2325@199.241.162.97 | 60 | Down for CADpass Upgrade
SHARCNET | beluga/cedar/graham/gra-vdi | 1055@license3.sharcnet.ca | 2325@license3.sharcnet.ca | 124 | None
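For example, to use the CMC license server from beluga, ~/.licenses/ansys.lic would contain the two lines below (values taken from the table above):
setenv("ANSYSLMD_LICENSE_FILE", "6624@132.219.136.89")
setenv("ANSYSLI_SERVERS", "2325@132.219.136.89")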
Researchers who purchase a CMC license subscription must send their Compute Canada username to <cmcsupport@cmc.ca>; otherwise, license checkouts will likely fail.
Local License Servers
Before a local institutional ANSYS license server can be reached from Compute Canada systems, firewall configuration changes will need to be made on both the institution side and the Compute Canada side. To start this process, contact your local ANSYS license server administrator and obtain the following information: 1) the fully qualified hostname of the local ANSYS license server, 2) the ANSYS flex port (commonly 1055), 3) the ANSYS licensing interconnect port (commonly 2325), and 4) the ANSYS static vendor port (site specific). Ensure the administrator is willing to open the firewall on these three ports to accept license checkout requests from your ANSYS jobs running on Compute Canada systems. Next, open a ticket with <support@computecanada.ca>, send us the four pieces of information, and indicate which system(s) you want to run ANSYS on, for example Cedar, Beluga, Graham/Gra-vdi or Niagara.
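A sketch of the information to gather before opening the ticket (the hostname and systems shown are hypothetical placeholders; use the values supplied by your license server administrator):
ANSYS license server hostname : flexlm.example-university.ca
ANSYS flex port               : 1055
ANSYS licensing interconnect  : 2325
ANSYS static vendor port      : <site specific>
Target systems                : Graham, Cedar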
Cluster Batch Job Submission
The ANSYS software suite comes with multiple implementations of MPI to support parallel computation. Unfortunately, none of them supports our Slurm scheduler. For this reason, we need special instructions for each ANSYS package on how to start a parallel job. In the sections below, we give examples of submission scripts for some of the packages. If one is not covered and you want us to investigate and help you start it, please contact our Technical support.
ANSYS Fluent
Typically you would use the following procedure for running Fluent on one of the Compute Canada clusters:
- Prepare your Fluent job using Fluent from the "ANSYS Workbench" on your Desktop machine up to the point where you would run the calculation.
- Export the "case" file "File > Export > Case..." or find the folder where Fluent saves your project's files. The "case" file will often have a name like FFF-1.cas.gz.
- If you already have data from a previous calculation which you want to continue, export a "data" file as well (File > Export > Data...) or find it in the same project folder (FFF-1.dat.gz).
- Transfer the "case" file (and if needed the "data" file) to a directory on the project or scratch filesystem on the cluster. When exporting, you can save the file(s) under a more descriptive name than FFF-1.*, or rename them when uploading them (see the transfer sketch after this list).
- Now you need to create a "journal" file. Its purpose is to load the case file (and optionally the data file), run the solver and finally write the results. See examples below and remember to adjust the filenames and desired number of iterations.
- If jobs frequently fail to start due to license shortages (and manual resubmission of failed jobs is not convenient) consider modifying your slurm script to requeue your job (up to 4 times) as shown in the following "Fluent Slurm Script (by node + requeue)" tab. Be aware that doing this will also requeue simulations that fail due to non-license related issues (such as divergence), resulting in lost compute time. Therefore it is strongly recommended to monitor and inspect each slurm output file to confirm each requeue attempt is license related. When it is determined a job requeued due to a simulation issue, immediately kill the job progression manually with
scancel jobid
and correct the problem.
- After running the job you can download the "data" file and import it back into Fluent with File > Import > Data....
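A minimal transfer sketch for the upload step above, assuming the exported case file was renamed sample.cas.gz and is copied into a hypothetical scratch directory on graham (adjust the username, host and path to your own; the target directory must already exist on the cluster):
scp sample.cas.gz username@graham.computecanada.ca:scratch/fluent-run/sample.cas.gz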
Slurm Scripts
#!/bin/bash
#SBATCH --account=def-group # Specify account
#SBATCH --time=00-06:00 # Specify time limit dd-hh:mm
#SBATCH --ntasks=16 # Specify total number cores
#SBATCH --mem-per-cpu=4G # Specify memory per core
#SBATCH --cpus-per-task=1 # Do not change
module load ansys/2020R1
slurm_hl2hl.py --format ANSYS-FLUENT > machinefile
NCORES=$((SLURM_NTASKS * SLURM_CPUS_PER_TASK))
fluent 3d -t $NCORES -cnf=machinefile -mpi=intel -affinity=0 -g -i sample.jou
#!/bin/bash
#SBATCH --account=def-group # Specify account
#SBATCH --time=00-06:00 # Specify time limit dd-hh:mm
#SBATCH --nodes=1 # Specify number compute nodes (1 or more)
#SBATCH --cpus-per-task=32 # Specify number cores per node (graham 32 or 44, cedar 32 or 48, beluga 40)
#SBATCH --mem=0 # Do not change (allocates all memory per compute node)
#SBATCH --ntasks-per-node=1 # Do not change
module load ansys/2020R1
slurm_hl2hl.py --format ANSYS-FLUENT > machinefile
NCORES=$((SLURM_NTASKS * SLURM_CPUS_PER_TASK))
fluent 3d -t $NCORES -cnf=machinefile -mpi=intel -affinity=0 -g -i sample.jou
#!/bin/bash
#SBATCH --account=def-group # Specify account
#SBATCH --time=00-06:00 # Specify time limit dd-hh:mm
#SBATCH --nodes=1 # Specify number compute nodes (1 or more)
#SBATCH --cpus-per-task=32 # Specify number cores per node (graham 32 or 44, cedar 32 or 48, beluga 40)
#SBATCH --array=1-4%1 # Specify number requeue attempts (2 or more)
#SBATCH --mem=0 # Do not change (allocates all memory per compute node)
#SBATCH --ntasks-per-node=1 # Do not change
module load ansys/2020R1
slurm_hl2hl.py --format ANSYS-FLUENT > machinefile
NCORES=$((SLURM_NTASKS * SLURM_CPUS_PER_TASK))
fluent 3d -t $NCORES -cnf=machinefile -mpi=intel -affinity=0 -g -i sample.jou
if [ $? -eq 0 ]; then
echo "Job completed successfully! Exiting now."
scancel $SLURM_ARRAY_JOB_ID
else
echo "Job failed due to license or simulation issue!"
if [ $SLURM_ARRAY_TASK_ID -lt $SLURM_ARRAY_TASK_COUNT ]; then
echo "Resubmitting now ..."
else
echo "Exiting now."
fi
fi
Journal Files
Fluent Journal files can include basically any command from Fluent's Text-User-Interface (TUI); commands can be used to change simulation parameters like temperature, pressure and flow speed. With this you can run a series of simulations under different conditions with a single case file, by only changing the parameters in the Journal file. Refer to the Fluent User's Guide for more information and a list of all commands that can be used.
; SAMPLE FLUENT JOURNAL FILE - STEADY SIMULATION
; ----------------------------------------------
; lines beginning with a semicolon are comments
; Read input file (FFF-in.cas):
/file/read-case FFF-in
; Run the solver for this many iterations:
/solve/iterate 1000
; Overwrite output files by default:
/file/confirm-overwrite n
; Write final output file (FFF-out.dat):
/file/write-data FFF-out
; Write simulation report to file (optional):
/report/summary y "My_Simulation_Report.txt"
; Exit fluent:
exit
; SAMPLE FLUENT JOURNAL FILE - STEADY SIMULATION
; ----------------------------------------------
; lines beginning with a semicolon are comments
; Read compressed input files (FFF-in.cas.gz & FFF-in.dat.gz):
/file/read-case-data FFF-in.gz
; Write a compressed data file every 100 iterations:
/file/auto-save/data-frequency 100
; Retain data files from 5 most recent iterations:
/file/auto-save/retain-most-recent-files y
; Write data files to output sub-directory (appends iteration)
/file/auto-save/root-name output/FFF-out.gz
; Run the solver for this many iterations:
/solve/iterate 1000
; Write final compressed output files (FFF-out.cas.gz & FFF-out.dat.gz):
/file/write-case-data FFF-out.gz
; Write simulation report to file (optional):
/report/summary y "My_Simulation_Report.txt"
; Exit fluent:
exit
; SAMPLE FLUENT JOURNAL FILE - TRANSIENT SIMULATION
; -------------------------------------------------
; lines beginning with a semicolon are comments
; Read only the input case file:
/file/read-case "FFF-transient-inp.gz"
; For continuation (restart) read in both case and data input files:
;/file/read-case-data "FFF-transient-inp.gz"
; Write a data (and maybe case) file every 100 time steps:
/file/auto-save/data-frequency 100
/file/auto-save/case-frequency if-case-is-modified
; Retain only the most recent 5 data (and maybe case) files:
; [saves disk space if only a recent continuation file is needed]
/file/auto-save/retain-most-recent-files y
; Write to output sub-directory (appends flowtime and timestep)
/file/auto-save/root-name output/FFF-transient-out-%10.6f.gz
; ##### settings for Transient simulation : ######
; Set the magnitude of the (physical) time step (delta-t)
/solve/set/time-step 0.0001
; Set the number of time steps for a transient simulation:
/solve/set/max-iterations-per-time-step 20
; Set the number of iterations for which convergence monitors are reported:
/solve/set/reporting-interval 1
; ##### End of settings for Transient simulation. ######
; Initialize using the hybrid initialization method:
/solve/initialize/hyb-initialization
; Perform unsteady iterations for a specified number of time steps:
/solve/dual-time-iterate 1000
; Write final case and data output files:
/file/write-case-data "FFF-transient-out.gz"
; Write simulation report to file (optional):
/report/summary y "Report_Transient_Simulation.txt"
; Exit fluent:
exit
ANSYS CFX
#!/bin/bash
#SBATCH --account=def-group # Specify account name
#SBATCH --time=00-06:00 # Specify time limit dd-hh:mm
#SBATCH --nodes=1 # Specify number compute nodes (1 or more)
#SBATCH --cpus-per-task=32 # Specify number cores per node (graham 32 or 44, cedar 32 or 48, beluga 40)
#SBATCH --mem=0 # Do not change (allocates all memory per compute node)
#SBATCH --ntasks-per-node=1 # Do not change
module load ansys/2020R1
NNODES=$(slurm_hl2hl.py --format ANSYS-CFX)
cfx5solve -def YOURFILE.def -start-method "Intel MPI Distributed Parallel" -par-dist $NNODES <other options>
Note: You may get the following error in your output file: /etc/tmi.conf: No such file or directory. It does not seem to affect the computation.
WORKBENCH
Before submitting a job to the queue with sbatch script-wbpj.sh, several settings in the YOURPROJECT.wbpj file must be initialized to be compatible with the settings in your slurm script. To do this, 1) load your project into the Workbench GUI (as described in the Graphical Use section below), 2) tick Distributed, 3) set Cores equal to ntasks, 4) clear any generated data, and finally 5) save the project. The initialized settings can be preserved by temporarily changing Save(Overwrite=True) to Save(Overwrite=False) when submitting short test jobs. The following script can be submitted to the queue with the sbatch script-wbpj.sh command.
#!/bin/bash
#SBATCH --account=def-account
#SBATCH --time=00-03:00 # Time (DD-HH:MM)
#SBATCH --mem-per-cpu=2G # Memory per core
#SBATCH --ntasks=8 # Number of cores
# SBATCH --nodes=1 # Number of nodes (optional)
# SBATCH --ntasks-per-node=8 # Cores per node (optional)
unset SLURM_GTIDS
rm -f mytest1_files/.lock
module load StdEnv/2016 ansys/2019R3
export I_MPI_HYDRA_BOOTSTRAP=ssh; export KMP_AFFINITY=balanced
export PATH=/cvmfs/soft.computecanada.ca/nix/var/nix/profiles/16.09/bin:$PATH
runwb2 -B -E "Update();Save(Overwrite=True)" -F YOURPROJECT.wbpj
MECHANICAL
The input file can be generated from within your interactive Workbench Mechanical session by clicking Solution -> Tools -> Write Input Files, then specifying File name: YOURAPDLFILE.inp and Save as type: APDL Input Files (*.inp). APDL jobs can then be submitted to the queue by running the sbatch script-name.sh command. The ANSYS modules given in each script were tested on Graham and should work without issue (uncomment one). Once the scripts are tested on other clusters they will be updated below if required.
#!/bin/bash
#SBATCH --account=def-account # Specify your account
#SBATCH --time=00-03:00 # Specify time (DD-HH:MM)
#SBATCH --mem=16G # Specify memory for all cores
#SBATCH --ntasks=8 # Specify number of cores (1 or more)
#SBATCH --nodes=1 # Specify one node (do not change)
unset SLURM_GTIDS
module load StdEnv/2016
#module load ansys/19.1
#module load ansys/19.2
#module load ansys/2019R2
#module load ansys/2019R3
#module load ansys/2020R1
module load ansys/2020R2
mapdl -smp -b nolist -np $SLURM_NTASKS -dir $SLURM_TMPDIR -i YOURAPDLFILE.inp
#!/bin/bash
#SBATCH --account=def-account # Specify your account
#SBATCH --time=00-03:00 # Specify time (DD-HH:MM)
#SBATCH --mem=16G # Specify memory for all cores
#SBATCH --ntasks=8 # Specify number of cores (1 or more)
#SBATCH --nodes=1 # Specify one node (do not change)
unset SLURM_GTIDS
module load StdEnv/2020
module load ansys/2021R1
mapdl -smp -b nolist -np $SLURM_NTASKS -dir $SLURM_TMPDIR -i YOURAPDLFILE.inp
#!/bin/bash
#SBATCH --account=def-account # Specify your account
#SBATCH --time=00-03:00 # Specify time (DD-HH:MM)
#SBATCH --mem-per-cpu=2G # Specify memory per core
#SBATCH --ntasks=8 # Specify number of cores (2 or more)
##SBATCH --nodes=2 # Specify number of nodes (optional)
##SBATCH --ntasks-per-node=4 # Specify cores per node (optional)
unset SLURM_GTIDS
module load StdEnv/2016
#module load ansys/2019R3
module load ansys/2020R1
export I_MPI_HYDRA_BOOTSTRAP=ssh; export KMP_AFFINITY=compact
mapdl -dis -mpi intelmpi -b nolist -np $SLURM_NTASKS -dir $SLURM_TMPDIR -i YOURAPDLFILE.inp
#!/bin/bash
#SBATCH --account=def-account # Specify your account
#SBATCH --time=00-03:00 # Specify time (DD-HH:MM)
#SBATCH --mem-per-cpu=2G # Specify memory per core
#SBATCH --ntasks=8 # Specify number of cores (2 or more)
##SBATCH --nodes=2 # Specify number of nodes (optional)
##SBATCH --ntasks-per-node=4 # Specify cores per node (optional)
unset SLURM_GTIDS
module load StdEnv/2020
module load ansys/2021R1
mapdl -dis -mpi openmpi -b nolist -np $SLURM_NTASKS -dir $SLURM_TMPDIR -i YOURAPDLFILE.inp
For APDL jobs ANSYS allocates 1024 MB total memory and 1024 MB database memory by default. These values can be manually specified (or changed) by adding the arguments -m 1024 and/or -db 1024 to the last mapdl command line in the above slurm scripts. When using a remote institutional license server with multiple ANSYS licenses it may be necessary to add arguments such as -p aa_r or -ppf anshpc. As always, perform detailed scaling tests before running production jobs to ensure the optimal number of cores and the minimum amount of memory are specified in your slurm scripts. The single-node SMP (Shared Memory Parallel) script will perform better than the multiple-node DIS (Distributed Memory Parallel) script and therefore should be used whenever possible. To help avoid compatibility issues, the ansys module loaded in your slurm script should ideally match the version used to generate the input file:
[gra-login2:~/ansys/mechanical/demo] cat YOURAPDLFILE.inp | grep version
! ANSYS input file written by Workbench version 2019 R3
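As an illustration of the memory and license arguments described above, the last line of the single-node SMP script could be extended as follows (a sketch only; the values and license feature names depend on your simulation and license server):
mapdl -smp -b nolist -np $SLURM_NTASKS -dir $SLURM_TMPDIR -m 1024 -db 1024 -p aa_r -i YOURAPDLFILE.inp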
Graphical Use
ANSYS or ANSYSEDT (ANSYS Electronics Desktop) programs can be run on Cluster or VDI nodes using the Compute Canada modules as follows:
Compute Nodes
Interactively run a computationally intensive test job requiring up to all the cores or memory on a single cluster node.
- 1) Connect to a cluster compute node (3hr time limit) with TigerVNC then open a terminal window ...
- 2) FLUIDS
module load StdEnv/2016 ansys/2020R2 (or older versions)
module load StdEnv/2020 ansys/2021R1 (or newer versions)
fluent|cfx5|icemcfd
- 3) WORKBENCH
module load StdEnv/2016 ansys/2019R3 (other version being tested)
export KMP_AFFINITY=balanced; export I_MPI_HYDRA_BOOTSTRAP=ssh
export PATH=$EBROOTNIXPKGS/bin:$PATH
runwb2
- 4) ELECTRONICS
module load StdEnv/2020 ansysedt/2021R1
ansysedt
- 5) ENSIGHT
module load StdEnv/2016 ansys/2020R2
ensight -X
VDI Nodes
Interactively run a single test job with up to 8 cores, create or modify simulation input files, post-process or visualize data.
- 1) CONNECT
- Login to gra-vdi.computecanada.ca with TigerVNC then open a new terminal window ...
- 2) FLUIDS
module load CcEnv StdEnv/2020 ansys/2021R1, or
module load CcEnv StdEnv/2016 ansys/2020R2, or
module load CcEnv StdEnv/2016 ansys/2020R1, or
module load CcEnv StdEnv/2016 ansys/2019R3
export HOOPS_PICTURE=opengl
fluent|cfx5|icemcfd
- 3) WORKBENCH
module load SnEnv ansys/2021R1, or
module load SnEnv ansys/2020R2
runwb2
- ------------------------------------------------------------------------------------
module load CcEnv StdEnv/2016 ansys/2020R1, or
module load CcEnv StdEnv/2016 ansys/2019R3
export PATH=$EBROOTNIXPKGS/bin:$PATH
runwb2
- 4) ELECTRONICS DESKTOP
module load CcEnv StdEnv/2020 ansysedt/2021R1
ansysedt
- 5) ENSIGHT
module load SnEnv ansys/2021R1
/opt/sharcnet/ansys/2021R1/v211/CEI/bin/ensight
- ------------------------------------------------------------------------------------
module load CcEnv StdEnv/2016 ansys/2020R2
ensight
Site Specific Usage
Sharcnet License
On 31 May 2020 the SHARCNET license was upgraded from a Research CFD-only license to an MCS (Multiphysics Campus Solution) license, which includes the following ANSYS Academic Research products: HF, EM, Electronics HPC, Mechanical and CFD. The SHARCNET ANSYS license supports a total of 275 running jobs, consisting of 25 aa_r unlimited-simulation-size Research tasks and 250 aa_t_a limited-simulation-size Teaching tasks. There is no limit to the number of jobs a researcher can run using the Teaching tasks; there is, however, a 2-job limit when using the Research tasks. A total of 384 aa_r_hpc cores are available to all running ANSYS jobs, with a limit of 64 cores per user. Researchers are asked to use Teaching tasks whenever possible, as described in the License Preferences section above. This license has been renewed for over 10 years and there is no reason to expect it will not be renewed again in coming years.
The SHARCNET license can be used by any Compute Canada user on any Compute Canada system for the purpose of publishable academic research. The license is made available on a first-come, first-served basis. Should a large number of ANSYS jobs attempt to start on a given day, it is possible some jobs may fail to start due to insufficient tokens being available; such jobs will need to be resubmitted. If guaranteed (dedicated) token access is required for your research to progress, open a ticket and request a quote for the quantity of hpc tokens needed. Up to 2 aa_r tokens will be reserved from the main license per group to start jobs reliably if the group purchases their own dedicated aa_r_hpc tokens (128 or more). Should you need to reliably start/run more than 2 jobs, you would also want to purchase a block of 5 aa_r or the much cheaper aa_r_cfd tokens. The quote will be obtained from Simutech to ensure compatibility with the existing license (customer #446422). Prices would be at cost plus applicable taxes, and the actual purchase would be done directly by the PI with Simutech after that point. Neither LS-DYNA nor Lumerical is included with the SHARCNET ANSYS license. Tokens for these products may be added to the SHARCNET server for dedicated use by similarly opening a ticket and requesting a quote.
License Server File
To use the SHARCNET ANSYS license, configure your ansys.lic file as follows:
[gra-login1:~/.licenses] cat ansys.lic
setenv("ANSYSLMD_LICENSE_FILE", "1055@license3.sharcnet.ca")
setenv("ANSYSLI_SERVERS", "2325@license3.sharcnet.ca")
Query License Server
Check how many licenses your username currently has in use from all features:
ssh graham.computecanada.ca
module load ansys
lmutil lmstat -c $ANSYSLMD_LICENSE_FILE -a | grep "Users of\|$USER"
Check how many jobs are running and the total cores currently in use from the global pool:
ssh graham.computecanada.ca
module load ansys
lmutil lmstat -c $ANSYSLMD_LICENSE_FILE -a | grep "aa_r:\|aa_r_hpc:"
where lines beginning with ...
- your username (if any) represent the licenses currently in use by your running jobs
- Users of aa_r: represents the total number of ANSYS Academic Research tasks in use by all users (maximum 25 jobs running)
- Users of aa_t_a: represents the total number of ANSYS Academic Teaching tasks in use by all users (maximum 250 jobs running)
- Users of aa_r_hpc: represents the total number of ANSYS hpc licenses in use by all users (maximum 384 hpc cores = 640 total - 256 reserved). Please note the total number of aa_r_hpc licenses required to run a parallel job is calculated by subtracting 16 from the total requested in your slurm script. Therefore lmutil will report (for example) that only 16 aa_r_hpc are in use for a running 32-core job.
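For reference, the feature lines in the lmutil output resemble the following (an illustrative sketch only; the usage counts shown are placeholders, not real values):
Users of aa_r:  (Total of 25 licenses issued;  Total of 2 licenses in use)
Users of aa_r_hpc:  (Total of 640 licenses issued;  Total of 48 licenses in use)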
If you discover any licenses unexpectedly in use by your username (usually due to ANSYS not exiting cleanly on gra-vdi), connect to the node where it is running, open a terminal window and run the following command to terminate the rogue processes: pkill -9 -e -u $USER -f "ansys", after which your licenses should be freed. Note that gra-vdi consists of two nodes (gra-vdi3 and gra-vdi4) onto which researchers are randomly placed when connecting to gra-vdi.computecanada.ca with TigerVNC. Therefore it is necessary to specify the full hostname (gra-vdi3.sharcnet.ca or gra-vdi4.sharcnet.ca) when connecting with TigerVNC to ensure you log in to the correct node before running pkill.
Local VDI Modules
Gra-vdi also provides locally installed ANSYS modules that may offer better graphics performance and/or stability for remote visualization. The programs installed in these modules are started using legacy wrapper scripts that default to the SHARCNET license, but allow the user to either select CMC as their license or specify any other license server in their environment, as an alternative to specifying the server information under ~/.licenses. Suitable usage of ANSYS programs on gra-vdi includes running a single test job interactively with up to 8 cores and/or 128G RAM, creating or modifying simulation input files, and post-processing or visualizing data.
ansys
- Connect to gra-vdi.computecanada.ca with TigerVNC
- Open a new terminal window and load a module:
module load SnEnv ansys/2019R3
module load SnEnv ansys/2020R1
module load SnEnv ansys/2020R2
module load SnEnv ansys/2021R1
- Start an ANSYS program by issuing one of the following:
runwb2|fluent|cfx5|icemcfd|apdl
- Press y then enter to accept the conditions
- Next press enter to accept the n option and use the SHARCNET license server by default (in the case of runwb2, note that ~/.licenses/ansysedt.lic will first be used if present; otherwise ANSYSLI_SERVERS and ANSYSLMD_LICENSE_FILE will be used if set in your environment). If you change n to y and hit enter, the CMC license server will be used regardless of other settings.
where the cfx5 command above provides the option to start the following components:
1) CFX-Launcher (cfx5launcher) 2) CFX-Pre (start cfx5pre directly) 3) CFD-Post (start cfx5post directly) 4) CFX-Solver (start cfx5solve directly)
ansysedt
- Connect to gra-vdi.computecanada.ca with TigerVNC
- Open a new terminal window and load a module:
module load SnEnv ansysedt/2021R1
- Start the ANSYS Electromagnetics Desktop program by typing the following command:
ansysedt
- Press y then enter to accept the conditions.
- Next press enter to accept the n option and use the SHARCNET license server by default (note that ~/.licenses/ansysedt.lic will first be used if present; otherwise ANSYSLI_SERVERS and ANSYSLMD_LICENSE_FILE will be used if set in your environment). If you change n to y and hit enter, the CMC license server will be used regardless of other settings.
Additive Manufacturing
To get started, configure your ~/.licenses/ansys.lic file to point to a license server that has a valid ANSYS Mechanical license. This must be done on all systems where you plan to run the software.
Enable Additive
To enable ANSYS Additive Manufacturing in your project do the following 3 steps:
Start Workbench
- start workbench as described in the Graphical Use - WORKBENCH section found above.
Install Extension
- click Extensions -> Install Extension
- specify the following path to AdditiveWizard.wbex then click Open: /cvmfs/restricted.computecanada.ca/easybuild/software/2017/Core/ansys/2019R3/v195/aisol/WBAddins/MechanicalExtensions/AdditiveWizard.wbex
Load Extension
- click Extensions -> Manage Extensions and tick Additive Wizard
- click the ACT Start Page tab X to return to your Project tab
Run Additive
Gra-vdi
A user can run a single ANSYS Additive Manufacturing job on gra-vdi with up to 16 cores as follows:
- Start Workbench on Gra-vdi as described above in Enable Additive
- click File -> Open and select test.wbpj then click Open
- click View -> reset workspace if you get a grey screen
- start Mechanical, Clear Generated Data, tick Distributed, specify Cores
- click File -> Save Project -> Solve
Check utilization:
- open another terminal and run:
top -u $USER
- kill rogue processes from previous runs if required:
pkill -9 -e -u $USER -f "ansys"
Cluster
Project preparation:
To submit an Additive job to a cluster queue, you must first prepare your additive simulation to run on a Compute Canada cluster. To do this, open your simulation as described in the Enable Additive section above, then save it. Next create a slurm script as explained in the Cluster Batch Job Submission - WORKBENCH section above. For parametric studies change Update() to UpdateAllDesignPoints() in your script and submit a job to the queue with the sbatch scriptname command. For initial performance testing you can avoid the solution being written by specifying Overwrite=False in the slurm script, so further runs can be conducted without needing to reopen the simulation in Workbench (and Mechanical) to clear the solution and recreate the design points. Another option is to create a replay script to perform these tasks and then manually run it on the cluster between runs as follows. The replay file can be modified for use in different directories by using an editor to manually change its internal FilePath setting.
module load ansys/2019R3
rm -f test_files/.lock
runwb2 -R myreplay.wbjn
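Combining the parametric-study and test-run changes described above, the runwb2 line of the WORKBENCH slurm script might become (a sketch only; YOURPROJECT.wbpj is a placeholder name):
runwb2 -B -E "UpdateAllDesignPoints();Save(Overwrite=False)" -F YOURPROJECT.wbpj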
Resource utilization:
Once your additive job has been running for a few minutes, a snapshot of its resource utilization on the compute node(s) can be obtained with the following srun command. Sample output corresponding to an eight-core submission script is shown next. It can be seen that two nodes were selected by the scheduler:
[gra-login1:~] srun --jobid=myjobid top -bn1 -u $USER | grep R | grep -v top
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
22843 demo 20 0 2272124 256048 72796 R 88.0 0.2 1:06.24 ansys.e
22849 demo 20 0 2272118 256024 72822 R 99.0 0.2 1:06.37 ansys.e
22838 demo 20 0 2272362 255086 76644 R 96.0 0.2 1:06.37 ansys.e
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
4310 demo 20 0 2740212 271096 101892 R 101.0 0.2 1:06.26 ansys.e
4311 demo 20 0 2740416 284552 98084 R 98.0 0.2 1:06.55 ansys.e
4304 demo 20 0 2729516 268824 100388 R 100.0 0.2 1:06.12 ansys.e
4305 demo 20 0 2729436 263204 100932 R 100.0 0.2 1:06.88 ansys.e
4306 demo 20 0 2734720 431532 95180 R 100.0 0.3 1:06.57 ansys.e
Scaling tests:
After a job completes, its "Job Wall-clock time" can be obtained from seff myjobid. Using this value, scaling tests can be performed by submitting short test jobs with an increasing number of cores. If the wall-clock time decreases by ~50% when the number of cores is doubled, then additional cores may be considered.
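A hedged sketch of such a scaling test, assuming sbatch command-line options are used to override the core request in the submission script (for Workbench jobs the Cores setting saved in the project must also be kept consistent, as noted in the WORKBENCH section above); the job IDs shown are placeholders:
sbatch --ntasks=8 script-wbpj.sh     # first short test job
sbatch --ntasks=16 script-wbpj.sh    # repeat with double the cores
seff jobid_8core                     # compare the reported "Job Wall-clock time"
seff jobid_16core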