Abaqus

Abaqus FEA is a software suite for finite element analysis and computer-aided engineering.

Using your own license

Abaqus is available on Compute Canada clusters, but you must provide your own license. To configure your cluster account, create a file named $HOME/.licenses/abaqus.lic containing the following two lines, which support versions 202X and 6.14.1 respectively. This must be done on each cluster where you plan to run Abaqus:


File : abaqus.lic

prepend_path("ABAQUSLM_LICENSE_FILE","port@server")
prepend_path("LM_LICENSE_FILE","port@server")


Replace port@server with the port number and name of your Abaqus license server. Your license server must be reachable by our compute nodes, so your firewall will need to be configured appropriately. This usually requires our technical team to get in touch with the technical people managing your license software. Please contact our technical support and we will provide a list of IP addresses used by our clusters and obtain the information we need on the port and IP address of your server.

Online Documentation

The full ABAQUS documentation (latest version) can be accessed on gra-vdi as shown in the following steps.

Account Preparation:

  1. connect to gra-vdi.computecanada.ca with tigervnc as described in VDI Nodes
  2. open a terminal window on gra-vdi and type firefox (hit enter)
  3. in the address bar type about:config (hit enter) -> click the Accept the risk button
  4. in the search bar type unique then double click privacy.file_unique_origin to change true to false

View Documentation:

  1. connect to gra-vdi.computecanada.ca with tigervnc as described in VDI Nodes
  2. open a terminal window on gra-vdi and type firefox (hit enter)
  3. in the address bar copy/paste one of the following:
    file:///opt/sharcnet/abaqus/2020/doc/English/DSSIMULIA_Established.htm, or
    file:///opt/sharcnet/abaqus/2021/doc/English/DSSIMULIA_Established.htm
  4. find a topic by clicking for example: Abaqus -> Analysis -> Analysis Techniques -> Analysis Continuation Techniques

Cluster job submission

Below are prototype Slurm scripts for submitting thread-based and MPI-based parallel simulations to single or multiple compute nodes. Most users will find it sufficient to use one of the project directory scripts provided in the Single Node Computing sections. The optional "memory=" argument found in the last line of the scripts is intended for larger memory or problematic jobs, where the 3072MB offset value may require tuning. A listing of all Abaqus command line arguments can be obtained by loading an Abaqus module and running: abaqus -help | less. Single node jobs that run for less than one day can use the project directory script found in the first tab, while single node jobs that run for more than a day should use one of the restart scripts. Jobs that create large restart files will benefit from writing to local disk through the SLURM_TMPDIR environment variable, which is utilized in the temporary directory scripts provided in the two rightmost tabs of the Single Node standard and explicit analysis sections. The restart scripts shown here will continue jobs that have terminated early for some reason; such failures can occur when a job reaches its maximum requested runtime before completing and is killed by the queue, or when the compute node the job was running on crashes due to an unexpected hardware failure. Other restart types are possible by further tailoring the input file (not shown here) to continue a job with additional steps or to change the analysis; see the documentation for version-specific details. Jobs that require more memory or compute resources than a single compute node can provide should use the MPI scripts in the Multiple Node Computing sections below to distribute computing over an arbitrary range of nodes, determined automatically by the scheduler. Before running any long jobs, short scaling tests should be run to determine wall clock times (and memory requirements) as a function of the number of cores (2, 4, 8, etc.) in order to find the optimal core count.
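
As a sketch of such a scaling test (assuming the thread-based script scriptsp1.txt from the next section, which reads its core count from SLURM_CPUS_ON_NODE), the core count can be overridden on the sbatch command line; since every run writes the same testsp1* files, launch each test from its own copy of the input directory:

for n in 2 4 8; do                          # candidate core counts
   sbatch --cpus-per-task=$n scriptsp1.txt  # command line overrides the #SBATCH value
done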

Standard Analysis

Abaqus solvers support thread-based and MPI-based parallelization. Scripts for each type are provided below for running Standard Analysis jobs on single or multiple nodes respectively. Scripts to restart multiple node jobs are not currently provided.

Single Node Computing

File : "scriptsp1.txt"

#!/bin/bash
#SBATCH --account=def-group    # Specify account
#SBATCH --time=00-06:00        # Specify days-hrs:mins
#SBATCH --cpus-per-task=4      # Specify number of cores
#SBATCH --mem=8G               # Specify total memory > 5G
#SBATCH --nodes=1              # Do not change !

module load StdEnv/2020        # Latest version
module load abaqus/2021        # Latest version

#module load StdEnv/2016       # Uncomment to use
#module load abaqus/2020       # Uncomment to use

unset SLURM_GTIDS
export MPI_IC_ORDER='tcp'
echo "LM_LICENSE_FILE=$LM_LICENSE_FILE"
echo "ABAQUSLM_LICENSE_FILE=$ABAQUSLM_LICENSE_FILE"

rm -f testsp1* testsp2*
abaqus job=testsp1 input=mystd-sim.inp \
   scratch=$SCRATCH cpus=$SLURM_CPUS_ON_NODE interactive \
   mp_mode=threads memory="$((${SLURM_MEM_PER_NODE}-3072))MB"


To write restart data every N=12 time increments and at the end of each step of the analysis, your input file should contain:

*RESTART, WRITE, FREQUENCY=12

To disable writing restart data (into the res, mdl and stt files) instead specify:

*RESTART, WRITE, FREQUENCY=0

To check the completed restart information do:

cat testsp1.msg | grep "STARTS\|COMPLETED\|WRITTEN"

Some simulations may benefit from adding the following argument to the abaqus command at the bottom of the script:

order_parallel=OFF
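
For example, with this argument appended the solver command at the bottom of scriptsp1.txt would read:

abaqus job=testsp1 input=mystd-sim.inp \
   scratch=$SCRATCH cpus=$SLURM_CPUS_ON_NODE interactive \
   mp_mode=threads memory="$((${SLURM_MEM_PER_NODE}-3072))MB" \
   order_parallel=OFF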
File : "scriptsp2.txt"

#!/bin/bash
#SBATCH --account=def-group    # Specify account
#SBATCH --time=00-06:00        # Specify days-hrs:mins
#SBATCH --cpus-per-task=4      # Specify number of cores
#SBATCH --mem=8G               # Specify total memory > 5G
#SBATCH --nodes=1              # Do not change !

module load abaqus/2021

unset SLURM_GTIDS
export MPI_IC_ORDER='tcp'
echo "LM_LICENSE_FILE=$LM_LICENSE_FILE"
echo "ABAQUSLM_LICENSE_FILE=$ABAQUSLM_LICENSE_FILE"

rm -f testsp2*
abaqus job=testsp2 oldjob=testsp1 input=mystd-sim-restart.inp \
   scratch=$SCRATCH cpus=$SLURM_CPUS_ON_NODE interactive \
   mp_mode=threads memory="$((${SLURM_MEM_PER_NODE}-3072))MB"


For the restart job to read the previous results, its input file should contain:

*RESTART, READ
File : "scriptst1.txt"

#!/bin/bash
#SBATCH --account=def-group    # Specify account
#SBATCH --time=00-06:00        # Specify days-hrs:mins
#SBATCH --cpus-per-task=4      # Specify number of cores
#SBATCH --mem=8G               # Specify total memory > 5G
#SBATCH --nodes=1              # Do not change !

module load abaqus/2021

unset SLURM_GTIDS
export MPI_IC_ORDER='tcp'
echo "LM_LICENSE_FILE=$LM_LICENSE_FILE"
echo "ABAQUSLM_LICENSE_FILE=$ABAQUSLM_LICENSE_FILE"
echo "SLURM_SUBMIT_DIR =" $SLURM_SUBMIT_DIR
echo "SLURM_TMPDIR = " $SLURM_TMPDIR

rm -f testst1* testst2*
cd $SLURM_TMPDIR
while sleep 6h; do
   cp -f * $SLURM_SUBMIT_DIR 2>/dev/null
done &
WPID=$!
abaqus job=testst1 input=$SLURM_SUBMIT_DIR/mystd-sim.inp \
   scratch=$SCRATCH cpus=$SLURM_CPUS_ON_NODE interactive \
   mp_mode=threads memory="$((${SLURM_MEM_PER_NODE}-3072))MB"
{ kill $WPID && wait $WPID; } 2>/dev/null
cp -f * $SLURM_SUBMIT_DIR


To write restart data every N=12 time increments and at the end of each step of the analysis, your input file should contain:

*RESTART, WRITE, FREQUENCY=12

To disable writing restart data (into the res, mdl and stt files) instead specify:

*RESTART, WRITE, FREQUENCY=0

To check the completed restart information do:

cat testst1.msg | grep "STARTS\|COMPLETED\|WRITTEN"
File : "scriptst2.txt"

#!/bin/bash
#SBATCH --account=def-group    # Specify account
#SBATCH --time=00-06:00        # Specify days-hrs:mins
#SBATCH --cpus-per-task=4      # Specify number of cores
#SBATCH --mem=8G               # Specify total memory > 5G
#SBATCH --nodes=1              # Do not change !

module load abaqus/2021

unset SLURM_GTIDS
export MPI_IC_ORDER='tcp'
echo "LM_LICENSE_FILE=$LM_LICENSE_FILE"
echo "ABAQUSLM_LICENSE_FILE=$ABAQUSLM_LICENSE_FILE"
echo "SLURM_SUBMIT_DIR =" $SLURM_SUBMIT_DIR
echo "SLURM_TMPDIR = " $SLURM_TMPDIR

rm -f testst2*
cp testst1* $SLURM_TMPDIR
cd $SLURM_TMPDIR
while sleep 3h; do
   cp -f testst2* $SLURM_SUBMIT_DIR 2>/dev/null
done &
WPID=$!
abaqus job=testst2 oldjob=testst1 input=$SLURM_SUBMIT_DIR/mystd-sim-restart.inp \
   scratch=$SCRATCH cpus=$SLURM_CPUS_ON_NODE interactive \
   mp_mode=threads memory="$((${SLURM_MEM_PER_NODE}-3072))MB"
{ kill $WPID && wait $WPID; } 2>/dev/null
cp -f testst2* $SLURM_SUBMIT_DIR


For the restart job to read the restart file, its input file should contain:

*RESTART, READ

Multiple Node Computing

Users with large memory or compute needs (and correspondingly large licenses) can use the following script to perform MPI-based computing over an arbitrary range of nodes, ideally left to the scheduler to determine automatically. A companion template script for restarting multiple node jobs is not currently provided due to additional limitations on when restarts can be used.


File : "scriptsp1-mpi.txt"

#!/bin/bash
#SBATCH --account=def-group    # Specify account
#SBATCH --time=00-06:00        # Specify days-hrs:mins
# SBATCH --nodes=2             # Best to leave commented
#SBATCH --ntasks=8             # Specify number of cores
#SBATCH --mem-per-cpu=16G      # Specify memory per core
#SBATCH --cpus-per-task=1      # Do not change !

module load abaqus/2021

unset SLURM_GTIDS
export MPI_IC_ORDER='tcp'
echo "LM_LICENSE_FILE=$LM_LICENSE_FILE"
echo "ABAQUSLM_LICENSE_FILE=$ABAQUSLM_LICENSE_FILE"

rm -f testsp1-mpi*

# Build mp_host_list=[['node1',n1],['node2',n2],...] from the Slurm host list
# so abaqus knows how many mpi ranks to start on each allocated node
unset hostlist
nodes="$(slurm_hl2hl.py --format MPIHOSTLIST | xargs)"
for i in `echo "$nodes" | xargs -n1 | uniq`; do
   hostlist=${hostlist}$(echo "['${i}',$(echo "$nodes" | xargs -n1 | grep $i | wc -l)],")
done
hostlist="$(echo "$hostlist" | sed 's/,$//g')"
mphostlist="mp_host_list=[$(echo "$hostlist")]"
export $mphostlist
echo "$mphostlist" > abaqus_v6.env

abaqus job=testsp1-mpi input=mystd-sim.inp \
  scratch=$SCRATCH cpus=$SLURM_NTASKS interactive mp_mode=mpi


Explicit Analysis

Abaqus solvers support thread-based and MPI-based parallelization. Scripts for each type are provided below for running Explicit Analysis jobs on single or multiple nodes respectively. Template scripts to restart multiple node jobs are not currently provided pending further testing.

Single Node Computing

File : "scriptep1.txt"

#!/bin/bash
#SBATCH --account=def-group    # specify account
#SBATCH --time=00-06:00        # days-hrs:mins
#SBATCH --mem=8G               # node memory > 5G
#SBATCH --cpus-per-task=4      # number cores > 1
#SBATCH --nodes=1              # do not change

module load abaqus/2021

unset SLURM_GTIDS
export MPI_IC_ORDER='tcp'
echo "LM_LICENSE_FILE=$LM_LICENSE_FILE"
echo "ABAQUSLM_LICENSE_FILE=$ABAQUSLM_LICENSE_FILE"

rm -f testep1* testep2*
abaqus job=testep1 input=myexp-sim.inp \
   scratch=$SCRATCH cpus=$SLURM_CPUS_ON_NODE interactive \
   mp_mode=threads memory="$((${SLURM_MEM_PER_NODE}-3072))MB"


To write restart output at n=12 time intervals (at the beginning of the step and at increments ending immediately after each time interval), your input file should contain:

*RESTART, WRITE, NUMBER INTERVAL=12, TIME MARKS=NO

To disable writing restart output (into the abq and sta files) instead specify:

*RESTART, WRITE, NUMBER INTERVAL=0

To check the completed restart information do:

cat testep1.sta | grep Restart
File : "scriptep2.txt"

#!/bin/bash
#SBATCH --account=def-group    # specify account
#SBATCH --time=00-06:00        # days-hrs:mins
#SBATCH --mem=8G               # node memory > 5G
#SBATCH --cpus-per-task=4      # number cores > 1
#SBATCH --nodes=1              # do not change

module load abaqus/2021

unset SLURM_GTIDS
export MPI_IC_ORDER='tcp'
echo "LM_LICENSE_FILE=$LM_LICENSE_FILE"
echo "ABAQUSLM_LICENSE_FILE=$ABAQUSLM_LICENSE_FILE"

rm -f testep2*
for f in testep1*; do [[ -f ${f} ]] && cp -a "$f" "testep2${f#testep1}"; done
abaqus job=testep2 input=myexp-sim-restart.inp recover \
   scratch=$SCRATCH cpus=$SLURM_CPUS_ON_NODE interactive \
   mp_mode=threads memory="$((${SLURM_MEM_PER_NODE}-3072))MB"


No input file modifications are required to restart the analysis.

File : "scriptet1.txt"

#!/bin/bash
#SBATCH --account=def-group    # specify account
#SBATCH --time=00-06:00        # days-hrs:mins
#SBATCH --mem=8G               # node memory > 5G
#SBATCH --cpus-per-task=4      # number cores > 1
#SBATCH --nodes=1              # do not change

module load abaqus/2021

unset SLURM_GTIDS
export MPI_IC_ORDER='tcp'
echo "LM_LICENSE_FILE=$LM_LICENSE_FILE"
echo "ABAQUSLM_LICENSE_FILE=$ABAQUSLM_LICENSE_FILE"
echo "SLURM_SUBMIT_DIR =" $SLURM_SUBMIT_DIR
echo "SLURM_TMPDIR = " $SLURM_TMPDIR

rm -f testet1* testet2*
cd $SLURM_TMPDIR
while sleep 6h; do
   cp -f * $SLURM_SUBMIT_DIR 2>/dev/null
done &
WPID=$!
abaqus job=testet1 input=$SLURM_SUBMIT_DIR/myexp-sim.inp \
   scratch=$SCRATCH cpus=$SLURM_CPUS_ON_NODE interactive \
   mp_mode=threads memory="$((${SLURM_MEM_PER_NODE}-3072))MB"
{ kill $WPID && wait $WPID; } 2>/dev/null
cp -f * $SLURM_SUBMIT_DIR


To write restart output at n=12 time intervals (at the beginning of the step and at increments ending immediately after each time interval), your input file should contain:

*RESTART, WRITE, NUMBER INTERVAL=12, TIME MARKS=NO

To disable writing restart output (into the abq and sta files) instead specify:

*RESTART, WRITE, NUMBER INTERVAL=0

To check the completed restart information do:

cat testet1.sta | grep Restart
File : "scriptet2.txt"

#!/bin/bash
#SBATCH --account=def-group    # specify account
#SBATCH --time=00-06:00        # days-hrs:mins
#SBATCH --mem=8G               # node memory > 5G
#SBATCH --cpus-per-task=4      # number cores > 1
#SBATCH --nodes=1              # do not change

module load abaqus/2021

unset SLURM_GTIDS
export MPI_IC_ORDER='tcp'
echo "LM_LICENSE_FILE=$LM_LICENSE_FILE"
echo "ABAQUSLM_LICENSE_FILE=$ABAQUSLM_LICENSE_FILE"
echo "SLURM_SUBMIT_DIR =" $SLURM_SUBMIT_DIR
echo "SLURM_TMPDIR = " $SLURM_TMPDIR

rm -f testet2*
for f in testet1*; do cp -a "$f" $SLURM_TMPDIR/"testet2${f#testet1}"; done
cd $SLURM_TMPDIR
while sleep 3h; do
   cp -f * $SLURM_SUBMIT_DIR 2>/dev/null
done &
WPID=$!
abaqus job=testet2 input=$SLURM_SUBMIT_DIR/myexp-sim-restart.inp recover \
   scratch=$SCRATCH cpus=$SLURM_CPUS_ON_NODE interactive \
   mp_mode=threads memory="$((${SLURM_MEM_PER_NODE}-3072))MB"
{ kill $WPID && wait $WPID; } 2>/dev/null
cp -f * $SLURM_SUBMIT_DIR


No input file modifications are required to restart the analysis.

Multiple Node Computing

File : "scriptep1-mpi.txt"

#!/bin/bash
#SBATCH --account=def-group    # Specify account
#SBATCH --time=00-06:00        # Specify days-hrs:mins
# SBATCH --nodes=2             # Best to leave commented
#SBATCH --ntasks=8             # Specify number of cores
#SBATCH --mem-per-cpu=16G      # Specify memory per core
#SBATCH --cpus-per-task=1      # Do not change !

module load abaqus/2021

unset SLURM_GTIDS
export MPI_IC_ORDER='tcp'
echo "LM_LICENSE_FILE=$LM_LICENSE_FILE"
echo "ABAQUSLM_LICENSE_FILE=$ABAQUSLM_LICENSE_FILE"

rm -f testep1-mpi*

# Build mp_host_list=[['node1',n1],['node2',n2],...] from the Slurm host list
# so abaqus knows how many mpi ranks to start on each allocated node
unset hostlist
nodes="$(slurm_hl2hl.py --format MPIHOSTLIST | xargs)"
for i in `echo "$nodes" | xargs -n1 | uniq`; do
   hostlist=${hostlist}$(echo "['${i}',$(echo "$nodes" | xargs -n1 | grep $i | wc -l)],")
done
hostlist="$(echo "$hostlist" | sed 's/,$//g')"
mphostlist="mp_host_list=[$(echo "$hostlist")]"
export $mphostlist
echo "$mphostlist" > abaqus_v6.env

abaqus job=testep1-mpi input=myexp-sim.inp \
  scratch=$SCRATCH cpus=$SLURM_NTASKS interactive mp_mode=mpi


Node memory

An estimate of the total Slurm node memory (--mem=) required for a simulation to run fully in RAM (without being virtualized to scratch disk) can be obtained by examining the Abaqus output test.dat file. For example, a simulation that requires a fairly large amount of memory might show:

                   M E M O R Y   E S T I M A T E
  
 PROCESS      FLOATING PT       MINIMUM MEMORY        MEMORY TO
              OPERATIONS           REQUIRED          MINIMIZE I/O
             PER ITERATION           (MB)               (MB)
  
     1          1.89E+14             3612              96345
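
The same table can be pulled from the .dat file of a running or completed job with grep, for example (assuming the job name test used in the interactive example below):

grep -A 7 "M E M O R Y   E S T I M A T E" test.dat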

To run your simulation interactively and monitor the memory consumption, do the following:

 1) ssh into a Compute Canada cluster, obtain an allocation on a compute node (such as gra100) and run abaqus, i.e.
    salloc --time=0:30:00 --cpus-per-task=8 --mem=64G --account=def-piname
    module load abaqus/6.14.1  OR  module load abaqus/2020
    unset SLURM_GTIDS
    abaqus job=test input=Sample.inp scratch=$SCRATCH cpus=8 mp_mode=threads interactive
 2) ssh into the Compute Canada cluster again, then ssh into the compute node with the allocation and run top, i.e.
    ssh gra100
    top -u $USER
 3) watch the VIRT and RES columns until steady peak memory values are observed

To fully satisfy the recommended "MEMORY TO MINIMIZE I/O" (MRMIO) value, at least the same amount of non-swapped physical memory (RES) must be available to Abaqus. Since RES will in general be less than the virtual memory (VIRT) by some relatively constant amount for a given simulation, it is necessary to slightly over-allocate the requested Slurm node memory --mem=. In the above sample Slurm script, this over-allocation has been hardcoded to a conservative value of 3072MB based on initial testing of the standard Abaqus solver. To avoid the long queue wait times associated with large values of MRMIO, it may be worth investigating the simulation performance impact of reducing the RES memory made available to Abaqus significantly below the MRMIO. This can be done by lowering the --mem= value, which in turn will set an artificially low value of memory= in the abaqus command (found in the last line of the Slurm script). In doing this, be careful that RES does not dip below the "MINIMUM MEMORY REQUIRED" (MMR); otherwise Abaqus will exit due to an "Out Of Memory" (OOM) error. As an example, if your MRMIO is 96GB, try running a series of short test jobs with #SBATCH --mem=8G, 16G, 32G, 64G until an acceptable minimal performance impact is found, noting that smaller values will result in increasingly larger scratch space used by temporary files.
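
Such a series of memory tests can be launched without editing the script; the sbatch option overrides the #SBATCH --mem= directive, and since the script derives the abaqus memory= argument from SLURM_MEM_PER_NODE, each test automatically passes the lowered value to the solver (a sketch, assuming scriptsp1.txt and one directory per test since the jobs write the same files):

for m in 8G 16G 32G 64G; do      # candidate memory requests
   sbatch --mem=$m scriptsp1.txt # command line overrides #SBATCH --mem=
done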

Graphical use

Abaqus/2020 can be run interactively in graphical mode on a cluster or gra-vdi using VNC by following these steps:

On a cluster

  1. Connect to a compute node (3hr salloc time limit) with TigerVNC
  2. Open a new terminal window and enter one of the following:
    module load StdEnv/2016 abaqus/6.14.1, or,
    module load StdEnv/2016 abaqus/2020, or,
    module load StdEnv/2020 abaqus/2021
  3. abaqus cae -mesa

On gra-vdi

  1. Connect to gra-vdi (24hr abaqus runtime limit) with TigerVNC
  2. Open a new terminal window and enter one of the following:
    module load CcEnv StdEnv/2016 abaqus/6.14.1, or,
    module load CcEnv StdEnv/2016 abaqus/2020, or,
    module load CcEnv StdEnv/2020 abaqus/2021
  3. abaqus cae

o Checking license availability

There must be at least 1 license free (not in use) for abaqus cae to start, according to:

abaqus licensing lmstat -c $ABAQUSLM_LICENSE_FILE -a | grep "Users of cae"

The SHARCNET license has 2 free and 2 reserved licenses. If all 4 are in use the following error message will occur:

[gra-vdi3:~] abaqus licensing lmstat -c $ABAQUSLM_LICENSE_FILE -a | grep "Users of cae"
Users of cae:  (Total of 4 licenses issued;  Total of 4 licenses in use)

[gra-vdi3:~] abaqus cae
ABAQUSLM_LICENSE_FILE=27050@license3.sharcnet.ca
/opt/sharcnet/abaqus/2020/Commands/abaqus cae
No socket connection to license server manager.
Feature:       cae
License path:  27050@license3.sharcnet.ca:
FLEXnet Licensing error:-7,96
For further information, refer to the FLEXnet Licensing documentation,
or contact your local Abaqus representative.
Number of requested licenses: 1
Number of total licenses:     4
Number of licenses in use:    2
Number of available licenses: 2
Abaqus Error: Abaqus/CAE Kernel exited with an error.

Site specific use

Sharcnet license

SHARCNET provides a small but free license consisting of 2 cae and 21 execute tokens, with usage limits of 10 tokens/user and 15 tokens/group. For groups that have purchased dedicated tokens, the free token usage limits are added to their reservation. The free tokens are available on a first-come, first-served basis and are mainly intended for testing and light usage before deciding whether or not to purchase dedicated tokens. The costs for dedicated tokens are approximately $110 (CAD) per compute token and $400 (CAD) per GUI token; submit a ticket to request an official quote. The license can be used by any Compute Canada member, but only on SHARCNET hardware. Groups that purchase dedicated tokens to run on the SHARCNET license server may likewise only use them on SHARCNET hardware, including gra-vdi (for running Abaqus in full graphical mode) and the graham or dusky clusters (for submitting compute batch jobs to the queue). Before you can use the license, you must open a ticket at <support@computecanada.ca> and request access. In your email 1) mention that it is for use on SHARCNET systems and 2) include a copy/paste of the following License Agreement statement with your full name and Compute Canada username entered in the indicated locations. Please note that every user must do this; it cannot be done one time only for a group (this includes PIs who have purchased their own dedicated tokens).

o License agreement

----------------------------------------------------------------------------------
Subject: Abaqus Sharcnet Academic License User Agreement

This email is to confirm that I "_____________" with username "___________" will
only use “SIMULIA Academic Software” with tokens from the SHARCNET license server
for the following purposes:

1) on SHARCNET hardware where the software is already installed
2) in affiliation with a Canadian degree-granting academic institution
3) for education, institutional or instruction purposes and not for any commercial
   or contract related purposes where results are not publishable
4) for experimental, theoretical and/or digital research work, undertaken primarily
   to acquire new knowledge of the underlying foundations of phenomena and observable
   facts, up to the point of proof-of-concept in a laboratory    
-----------------------------------------------------------------------------------

o Configure license file

Configure your license file as follows, noting that it is only usable on SHARCNET systems: graham, gra-vdi and dusky.

[gra-login1:~] cat ~/.licenses/abaqus.lic
prepend_path("LM_LICENSE_FILE","27050@license3.sharcnet.ca")
prepend_path("ABAQUSLM_LICENSE_FILE","27050@license3.sharcnet.ca")

If your Abaqus jobs fail with the error message [*** ABAQUS/eliT_CheckLicense rank 0 terminated by signal 11 (Segmentation fault)] in the Slurm output file, verify that your abaqus.lic file contains ABAQUSLM_LICENSE_FILE for use with abaqus/2020. If your Abaqus jobs fail with an error message starting [License server machine is down or not responding etc] in the output file, verify that your abaqus.lic file contains LM_LICENSE_FILE for use with abaqus/6.14.1, as shown. The abaqus.lic file shown above contains both, so you should not see either problem.

o Query license server

I) To check the Sharcnet license server for started and queued jobs by username run:

ssh graham.computecanada.ca
module load StdEnv/2016.4
module load abaqus
abaqus licensing lmstat -c $LM_LICENSE_FILE -a | grep "Users\|start\|queued\|RESERVATIONs"

II) To check the Sharcnet license server for reservations of products by purchasing groups run:

ssh graham.computecanada.ca
module load StdEnv/2016.4
module load abaqus
abaqus licensing lmstat -c $LM_LICENSE_FILE -a | grep "Users\|RESERVATIONs"

III) To check the Sharcnet license server for license usage of the cae, standard and explicit products run:

ssh graham.computecanada.ca
module load StdEnv/2016.4
module load abaqus
abaqus licensing lmstat -c $LM_LICENSE_FILE -a | grep "Users of" | grep "cae\|standard\|explicit"

When the output of query I) above indicates that a job for a particular username is "queued", this means the job has entered the "R"unning state from the perspective of squeue -j jobid or sacct -j jobid and is therefore sitting idle on a compute node waiting for a license. This will have the same impact on your account priority as if the job were performing computations and consuming cputime. Eventually, when sufficient licenses become available, the "queued" job will "start". To demonstrate, the following shows the license server and queue output for a situation where a user submits two jobs, but only the first job acquires enough licenses to start:

 [roberpj@dus241:~] sq
         JOBID     USER      ACCOUNT           NAME  ST  TIME_LEFT  NODES  CPUS  MIN_MEM  NODELIST (REASON) 
         29801  roberpj  def-roberpj  scriptep1.txt   R    2:59:18      1    12       8G   dus47 (None) 
         29802  roberpj  def-roberpj  scriptsp1.txt   R    2:59:33      1    12       8G   dus28 (None) 
 [roberpj@dus241:~] abaqus licensing lmstat -c $LM_LICENSE_FILE -a | grep "Users\|start\|queued\|RESERVATIONs"
 Users of abaqus:  (Total of 78 licenses issued;  Total of 71 licenses in use)
     roberpj dus47 /dev/tty (v62.2) (license3.sharcnet.ca/27050 275), start Thu 8/27 5:45, 14 licenses
     roberpj dus28 /dev/tty (v62.2) (license3.sharcnet.ca/27050 729) queued for 14 licenses

o Specify job resources

To ensure optimal usage of both your Abaqus tokens and Compute Canada resources, it is important to carefully specify the required memory and ncpus in your Slurm script. The values can be determined by submitting a few short test jobs to the queue and then checking their utilization. For completed jobs use seff JobNumber to show the total "Memory Utilized" and "Memory Efficiency"; if the "Memory Efficiency" is less than ~90%, decrease the value of the "#SBATCH --mem=" setting in your Slurm script accordingly. Notice that the seff JobNumber command also shows the total "CPU (time) Utilized" and "CPU Efficiency"; if the "CPU Efficiency" is less than ~90%, perform scaling tests to determine the optimal number of CPUs and then update the value of "#SBATCH --cpus-per-task=" in your Slurm script. For running jobs, use the srun --jobid=29821580 --pty top -d 5 -u $USER command to watch the %CPU, %MEM and RES for each Abaqus parent process on the compute node; the %CPU and %MEM columns display the percent usage relative to the total available on the node, while the RES column shows the per-process resident memory size (in human readable format for values over 1GB). Further information on how to monitor jobs is available in the Compute Canada wiki.
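
For example, to check a completed job and then watch a running one (the job ID shown is a placeholder):

seff 29821580                                  # completed job: Memory/CPU Utilized and Efficiency
srun --jobid=29821580 --pty top -d 5 -u $USER  # running job: %CPU, %MEM and RES per process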

o Core token mapping

TOKENS 5  6  7  8  10  12  14  16  19  21  25  28  34  38
CORES  1  2  3  4   6   8  12  16  24  32  48  64  96 128

where TOKENS = floor[5 × CORES^0.422]
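
The mapping can be verified with a one-line awk calculation, for example for 8 cores (which prints 12, in agreement with the table):

awk -v cores=8 'BEGIN{printf "%d\n", int(5*cores^0.422)}'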

Western license

The Western site license may only be used by Western researchers on hardware located at Western's campus. Currently, the Dusky cluster is the only system that satisfies these conditions; Graham and gra-vdi are excluded since they are located at Waterloo's campus. Contact the Western Abaqus license server administrator <jmilner@robarts.ca> to inquire about using the Western Abaqus license. You will need to provide your Compute Canada username and possibly make arrangements to purchase tokens. If you are granted access, you may then proceed to configure your abaqus.lic file to point to the Western license server as follows:

o Configure license file

Configure your license file as follows, noting that it is only usable on dusky.

[dus241:~] cat .licenses/abaqus.lic
prepend_path("LM_LICENSE_FILE","27000@license4.sharcnet.ca")
prepend_path("ABAQUSLM_LICENSE_FILE","27000@license4.sharcnet.ca")

Once configured, submit your jobs as described above in the Cluster job submission section. If there are any problems, submit a problem ticket to technical support. Specify that you are using the Abaqus Western license on dusky and provide the failed job number, along with a paste of any error message, as applicable.