Abaqus
Abaqus FEA is a software suite for finite element analysis and computer-aided engineering.
Using your own license
Abaqus is available on Compute Canada clusters, but you must provide your own license. To configure your cluster account, create a file named $HOME/.licenses/abaqus.lic corresponding to the module version you want to use (otherwise abaqus jobs will fail). This must be done on each cluster where you plan to run abaqus as follows:
Module Version 2020
prepend_path("ABAQUSLM_LICENSE_FILE","port@server")
Module Version 6.14.1
prepend_path("LM_LICENSE_FILE","port@server")
Replace port@server with the port number and hostname of your Abaqus license server. Your license server must be reachable from our compute nodes, so your firewall will need to be configured appropriately; this usually requires our technical team to get in touch with the technical people managing your license software. Please contact our technical support and we will provide a list of IP addresses used by our clusters and obtain the information we need on the port and IP address of your server.
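As a quick sanity check before or after these arrangements, you can confirm that the license server port at least answers from a login node; the hostname and port below are placeholders for your own server, and the nc utility is assumed to be available:

nc -zv license.example.com 27000   # placeholder host and port; success means the port answers from this node

Note that a login node being able to reach the server does not guarantee that compute nodes can; firewall openings for the compute node IP ranges are still arranged through technical support as described above.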
Cluster job submission
Below is a sample Slurm script to submit a parallel job to a single compute node using 4 cores:
#!/bin/bash
#SBATCH --time=00-06:00 # days-hrs:mins
#SBATCH --mem=8G # node memory > 5G
#SBATCH --cpus-per-task=4 # number cores > 1
module load abaqus/6.14.1 # (or abaqus/2020)
unset SLURM_GTIDS
export MPI_IC_ORDER='tcp'
abaqus job=test input=sample.inp scratch=$SCRATCH cpus=$SLURM_CPUS_ON_NODE \
interactive mp_mode=threads memory="$((${SLURM_MEM_PER_NODE}-3072))MB"
A listing of Abaqus command-line options can be obtained by loading an Abaqus module and running: abaqus -help | less
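Assuming the script above is saved as a file named, for example, abaqus-job.sh, a typical submit-and-check sequence is:

sbatch abaqus-job.sh    # submit the script to the Slurm scheduler
squeue -u $USER         # list your queued and running jobs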
Node memory
An estimate of the total Slurm node memory (--mem=) required for a simulation to run fully in RAM (without being virtualized to scratch disk) can be obtained by examining the Abaqus output file test.dat. For example, a simulation that requires a fairly large amount of memory might show:
                 M E M O R Y   E S T I M A T E

 PROCESS      FLOATING PT         MINIMUM MEMORY        MEMORY TO
              OPERATIONS             REQUIRED          MINIMIZE I/O
             PER ITERATION             (MB)                (MB)

     1         1.89E+14                3612               96345
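To extract this table from your own output, one quick approach (assuming the job name test used in the sample script, so the output file is test.dat) is:

grep -A 6 "M E M O R Y" test.dat   # print the memory estimate table from the Abaqus .dat file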
To run your simulation interactively and monitor the memory consumption do the following:
1) ssh into a Compute Canada cluster, obtain an allocation on a compute node (such as gra100), and run Abaqus interactively, i.e.
salloc --time=0:30:00 --cpus-per-task=8 --mem=64G --account=def-piname
module load abaqus/6.14.1 OR module load abaqus/2020
unset SLURM_GTIDS
abaqus job=test input=Sample.inp scratch=$SCRATCH cpus=8 mp_mode=threads interactive
2) ssh into the Compute Canada cluster again, then ssh into the compute node holding the allocation and run top, i.e.
ssh gra100
top -u $USER
3) watch the VIRT and RES columns until steady memory values are observed
To completely satisfy the recommended "MEMORY TO MINIMIZE I/O" (MRMIO) value, at least the same amount of non-swapped physical memory (RES) must be available to Abaqus. Since RES will in general be less than the virtual memory (VIRT) by some relatively constant amount for a given simulation, it is necessary to slightly over-allocate the requested Slurm node memory --mem=. In the above sample Slurm script this over-allocation has been hardcoded to a conservative value of 3072MB based on initial testing of the standard Abaqus solver. To avoid the long queue wait times associated with large values of MRMIO, it may be worth investigating the simulation performance impact of reducing the RES memory made available to Abaqus significantly below the MRMIO. This can be done by lowering the --mem= value, which in turn sets an artificially low value of memory= in the abaqus command (found in the last line of the Slurm script). In doing this, be careful that RES does not dip below the "MINIMUM MEMORY REQUIRED" (MMR), otherwise Abaqus will exit due to "Out Of Memory" (OOM). As an example, if your MRMIO is 96GB, try running a series of short test jobs with #SBATCH --mem=8G, 16G, 32G, 64G until an acceptably small performance impact is found, noting that smaller values will result in increasingly larger scratch space use by tmpdir files.
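Such a series of test jobs can be submitted without editing the script each time by overriding the memory request on the command line; the script name below is hypothetical:

for m in 8G 16G 32G 64G; do
    sbatch --mem=$m --job-name=memtest-$m abaqus-job.sh   # command-line --mem overrides the value in the script
done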
Cluster graphical use
Abaqus/2020 can be run interactively in graphical mode on a cluster compute node (3hr time limit) over TigerVNC with these steps:
- Install TigerVNC client on your desktop
- Connect to a cluster compute node with vncviewer
module load abaqus/2020
abaqus cae -mesa
Gra-vdi graphical use
NOTE: gra-vdi is currently OFFLINE for upgrading, with a return-to-service date expected sometime in June.
Abaqus/2020 can be run interactively in graphical mode on gra-vdi (no connection time limit) over TigerVNC with these steps:
- Install TigerVNC client on your desktop
- Connect to gra-vdi.computecanada.ca with vncviewer
module load SnEnv
module load abaqus/2020
abaqus licensing lmstat -c $LM_LICENSE_FILE -a | grep "Users of cae"
abaqus cae
Note: If all 4 cae licenses are currently in use (reserved plus in use) the following error message will occur:
[gra-vdi:~] abaqus cae
No socket connection to license server manager.
Feature: cae
License path: 27050@license3.sharcnet.ca:
FLEXnet Licensing error:-7,96
For further information, refer to the FLEXnet Licensing documentation,
or contact your local Abaqus representative.
Number of requested licenses: 1
Number of total licenses: 4
Number of licenses in use: 2
Number of available licenses: 2
Abaqus Error: Abaqus/CAE Kernel exited with an error.
Site-specific usage
SHARCNET license
SHARCNET provides a small but free license consisting of 2 cae and 21 execute tokens, with usage limits of 10 tokens per user and 15 tokens per group. For groups that have purchased tokens, the free token usage limits are added to their reservation. The free tokens are available on a first-come, first-served basis and are mainly intended for testing and light usage before deciding whether or not to purchase dedicated tokens. The license can be used by any Compute Canada member, but only on SHARCNET hardware. Groups that purchase dedicated tokens to run on the SHARCNET license server may likewise only use them on SHARCNET hardware; such hardware includes gra-vdi for running Abaqus in full graphical mode and the Graham cluster for submitting compute batch jobs to the queue. Before you can use the license you must open a ticket at <support@computecanada.ca> and request access. In your email 1) mention that it is for use on SHARCNET systems and 2) include a copy/paste of the following License Agreement statement with your full name and Compute Canada username entered in the indicated locations. Please note that every user must do this individually; it cannot be done once for an entire group (including PIs who have purchased their own dedicated tokens).
o License agreement
----------------------------------------------------------------------------------
Subject: Abaqus Sharcnet Academic License User Agreement

This email is to confirm that i "_____________" with username "___________" will
only use “SIMULIA Academic Software” with tokens from the SHARCNET license server
for the following purposes:
1) on SHARCNET hardware where the software is already installed
2) in affiliation with a canadian degree-granting academic institution
3) for education, institutional or instruction purposes and not for any commercial
   or contract related purposes where results are not publishable
4) for experimental, theoretical and/or digital research work, undertaken primarily
   to acquire new knowledge of the underlying foundations of phenomena and
   observable facts, up to the point of proof-of-concept in a laboratory
----------------------------------------------------------------------------------
o Configure license file
The configuration of your abaqus license on each cluster depends on the module version being used:
Module Version 2020
[gra-login1:~] cat ~/.licenses/abaqus.lic
prepend_path("ABAQUSLM_LICENSE_FILE","27050@license3.sharcnet.ca")
Module Version 6.14.1
[gra-login1:~] cat ~/.licenses/abaqus.lic
prepend_path("LM_LICENSE_FILE","27050@license3.sharcnet.ca")
If your Abaqus jobs fail with the error message [*** ABAQUS/eliT_CheckLicense rank 0 terminated by signal 11 (Segmentation fault)] in the Slurm output file, verify that your abaqus.lic file contains ABAQUSLM_LICENSE_FILE to use abaqus/2020. If your Abaqus jobs fail with an error message starting [License server machine is down or not responding etc] in the output file, verify that your abaqus.lic file contains LM_LICENSE_FILE to use abaqus/6.14.1, as shown above.
o Check license status
To query the SHARCNET license server for started jobs, queued jobs, and reservations by purchasing groups, run:
ssh graham.computecanada.ca
module load abaqus
abaqus licensing lmstat -c $LM_LICENSE_FILE -a | grep "Users\|start\|queued\|RESERVATIONs"
When abaqus licensing lmstat shows your job as "queued", the job has entered the "R"unning state from the perspective of the squeue -j jobid or sacct -j jobid commands but is waiting for a license (not started) and is therefore idle. This has the same impact on your account priority as if the job were consuming cputime and thus should be avoided. When abaqus licensing lmstat indicates the job is in the "start" state, it has acquired the required Abaqus tokens from the license server and is consuming cputime.
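One way to recognize this situation is to compare the scheduler's view of the job with the license server's view (the job ID below is only an example):

squeue -j 29821580                                              # Slurm reports the job as R (running)
abaqus licensing lmstat -c $LM_LICENSE_FILE -a | grep queued    # but the license server still shows it queued for tokens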
o Specify job resources
To ensure optimal usage of both your Abaqus tokens and the Compute Canada resources, it is important to carefully specify the required memory and number of cpus in your Slurm script. The values can be determined by submitting a few short test jobs to the queue and then checking their utilization. For completed jobs, use seff JobNumber to show the total "Memory Utilized" and "Memory Efficiency"; if the "Memory Efficiency" is less than ~90%, decrease the "#SBATCH --mem=" value in your Slurm script accordingly. The seff JobNumber command also shows the total "CPU (time) Utilized" and "CPU Efficiency"; if the "CPU Efficiency" is less than ~90%, perform scaling tests to determine the optimal number of cpus and then update the "#SBATCH --cpus-per-task=" value in your Slurm script. For running jobs, use the srun --jobid=29821580 --pty top -d 5 -u $USER command to watch the %CPU, %MEM and RES for each Abaqus parent process on the compute node; the %CPU and %MEM columns display the percent usage relative to the total available on the node, while the RES column shows the per-process resident memory size (in human-readable format for values over 1GB). Further information on how to Monitor Jobs is available in the Compute Canada wiki.
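As a concrete sketch, with a hypothetical job ID, the two checks described above are:

seff 29821580                                    # completed job: summary of memory and CPU efficiency
srun --jobid=29821580 --pty top -d 5 -u $USER    # running job: live %CPU, %MEM and RES per process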
o Core token mapping
TOKENS   5   6   7   8  10  12  14  16  19  21  25  28  34   38
CORES    1   2   3   4   6   8  12  16  24  32  48  64  96  128
where TOKENS = floor[5 X CORES^0.422]
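For a core count not listed in the table, the token requirement can be computed directly from this formula, for example with awk:

awk -v cores=16 'BEGIN { printf "%d\n", int(5 * cores^0.422) }'   # prints 16, the tokens needed for a 16-core job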
o Using your license
To use your own license server (instead of the default SHARCNET license) on gra-vdi as described in the "Gra-vdi graphical use" section above, run the command export ABAQUSLM_LICENSE_FILE="port@server" after loading the abaqus module and before running abaqus cae.
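Putting this together, a gra-vdi session that uses your own license server (port@server is a placeholder for your values) would look roughly like:

module load SnEnv
module load abaqus/2020
export ABAQUSLM_LICENSE_FILE="port@server"   # your own license server instead of the SHARCNET default
abaqus cae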
Western license
The Western site license may only be used by Western researchers on hardware located at Western's campus, such as the Dusky legacy cluster. Graham and gra-vdi are excluded since they are located at Waterloo (use the SHARCNET license for these systems as described above). Contact the Western Abaqus license server administrator (located in Robarts) to make arrangements before attempting to use the Western Abaqus license; submit a ticket to Compute Canada support to request the administrator's contact information if necessary. You will need to provide your Compute Canada username and will likely need to make arrangements to purchase tokens. If you are granted access, request the port and server values and enter them into your abaqus.lic file as shown in the "Using your own license" section near the top of this wiki; these values will in turn be used by the Compute Canada module on Dusky when it loads.