Abaqus FEA is a commercial software package for finite element analysis and computer-aided engineering.
Your license
Abaqus modules are available on our clusters, but you must bring your own license. To configure your account on the clusters you want to use, log in to each of them and create a file $HOME/.licenses/abaqus.lic containing the following two lines, which apply to versions 202X and 6.14.1 respectively. Then replace port@server with the flexlm port number and the IP address (or fully qualified domain name) of your Abaqus license server.
prepend_path("ABAQUSLM_LICENSE_FILE","port@server")
prepend_path("LM_LICENSE_FILE","port@server")
If your license is not set up for use on a particular cluster, system administrators on both sides will need to make some changes. This is required so that the flexlm and TCP ports of your Abaqus server can be reached by all compute nodes when your queued jobs run. So that we may help you with this, write to technical support and provide the following:
- the flexlm port number
- the static port number
- the IP address of your Abaqus license server
In return you will receive a list of IP addresses, and your system administrator can then open your local server firewalls so that the cluster can connect through both ports. A special agreement must usually be negotiated and signed with SIMULIA before such a license can be used remotely on our hardware.
Cluster job submission
Below are prototype Slurm scripts for submitting thread-based and MPI parallel simulations to one or more compute nodes. In most cases it will be sufficient to use one of the project-directory scripts from the single node sections. In the last line of each script, the memory= argument is optional; it is intended for jobs that need a lot of memory or that run into problems, and the 3072 MB offset value may need adjustment. For a listing of command line arguments, load an Abaqus module and run abaqus -help | less.
The project-directory script under the first tab should be sufficient for single node jobs that run for less than one day. For single node jobs that run for more than one day, however, you should use one of the restart scripts. Jobs that create large restart files are best written to local disk through the SLURM_TMPDIR environment variable, which is used in the scripts under the rightmost tabs of the standard and explicit analysis sections. The restart scripts will continue jobs that were terminated for any reason. This can happen if a job reaches its maximum requested runtime before finishing and is killed by the queue, or if a compute node crashes due to an unexpected hardware failure. Other types of restarts are possible by further tailoring the input file (not documented here) to continue a job with additional steps or to change the analysis (see the version-specific documentation for details).
Jobs that require large amounts of memory or compute resources (beyond the capacity of a single node) should use the MPI scripts in the multiple node sections to distribute the computation across an arbitrary range of nodes, automatically determined by the scheduler. Before launching long-running jobs, it is recommended to run short low-scalability tests to determine the actual runtime (and memory requirements) as a function of the optimal number of cores (2, 4, 8, etc.).
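As an illustration, such a series of short tests could be submitted with a simple loop; this is a minimal sketch in which script.sh, the core counts and the one-hour limit are placeholders to adapt:
for c in 2 4 8; do
  sbatch --cpus-per-task=$c --time=1:00:00 --job-name=scale-$c script.sh
done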
Standard analysis
Abaqus solvers support both thread-based and MPI-based parallelization. Scripts for each mode are provided under the single node and multiple node tabs. Scripts to restart multiple node jobs are not currently provided.
Single node computing
#!/bin/bash
#SBATCH --account=def-group # Specify account
#SBATCH --time=00-06:00 # Specify days-hrs:mins
#SBATCH --cpus-per-task=4 # Specify number of cores
#SBATCH --mem=8G # Specify total memory > 5G
#SBATCH --nodes=1 # Do not change !
module load StdEnv/2020 # Latest version
module load abaqus/2021 # Latest version
#module load StdEnv/2016 # Uncomment to use
#module load abaqus/2020 # Uncomment to use
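# The next two lines work around known Slurm/Abaqus interaction issues:
# Abaqus's bundled MPI can fail to start when SLURM_GTIDS is set, and
# MPI_IC_ORDER='tcp' makes it fall back to the TCP interconnect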
unset SLURM_GTIDS
export MPI_IC_ORDER='tcp'
echo "LM_LICENSE_FILE=$LM_LICENSE_FILE"
echo "ABAQUSLM_LICENSE_FILE=$ABAQUSLM_LICENSE_FILE"
rm -f testsp1* testsp2*
abaqus job=testsp1 input=mystd-sim.inp \
scratch=$SCRATCH cpus=$SLURM_CPUS_ON_NODE interactive \
mp_mode=threads memory="$((${SLURM_MEM_PER_NODE}-3072))MB"
To write restart data every N=12 time increments, the input file must contain:
*RESTART, WRITE, OVERLAY, FREQUENCY=12
To write restart data for a total of 12 time increments, instead specify:
*RESTART, WRITE, OVERLAY, NUMBER INTERVAL=12, TIME MARKS=NO
Check for completed restart information in relevant output files:
egrep -i "step|start" testsp*.com testsp*.msg testsp*.sta
Some simulations can be improved by adding the following Abaqus command at the bottom of the script:
order_parallel=OFF
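Concretely, the option is appended to the abaqus invocation in the last line of the script; a sketch based on the single node command above:
abaqus job=testsp1 input=mystd-sim.inp \
 scratch=$SCRATCH cpus=$SLURM_CPUS_ON_NODE interactive \
 mp_mode=threads memory="$((${SLURM_MEM_PER_NODE}-3072))MB" \
 order_parallel=OFF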
#!/bin/bash
#SBATCH --account=def-group # Specify account
#SBATCH --time=00-06:00 # Specify days-hrs:mins
#SBATCH --cpus-per-task=4 # Specify number of cores
#SBATCH --mem=8G # Specify total memory > 5G
#SBATCH --nodes=1 # Do not change !
module load abaqus/2021
unset SLURM_GTIDS
export MPI_IC_ORDER='tcp'
echo "LM_LICENSE_FILE=$LM_LICENSE_FILE"
echo "ABAQUSLM_LICENSE_FILE=$ABAQUSLM_LICENSE_FILE"
rm -f testsp2* testsp1.lck
abaqus job=testsp2 oldjob=testsp1 input=mystd-sim-restart.inp \
scratch=$SCRATCH cpus=$SLURM_CPUS_ON_NODE interactive \
mp_mode=threads memory="$((${SLURM_MEM_PER_NODE}-3072))MB"
The restart input file must contain:
*HEADING
*RESTART, READ
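For illustration, a minimal hypothetical restart input (mystd-sim-restart.inp) might look as follows, with the continuation step appended after the *RESTART line (the step contents are placeholders):
*HEADING
Restart of mystd-sim
*RESTART, READ
*STEP, NAME=Step-2
*STATIC
*END STEP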
#!/bin/bash
#SBATCH --account=def-group # Specify account
#SBATCH --time=00-06:00 # Specify days-hrs:mins
#SBATCH --cpus-per-task=4 # Specify number of cores
#SBATCH --mem=8G # Specify total memory > 5G
#SBATCH --nodes=1 # Do not change !
module load abaqus/2021
unset SLURM_GTIDS
export MPI_IC_ORDER='tcp'
echo "LM_LICENSE_FILE=$LM_LICENSE_FILE"
echo "ABAQUSLM_LICENSE_FILE=$ABAQUSLM_LICENSE_FILE"
echo "SLURM_SUBMIT_DIR =" $SLURM_SUBMIT_DIR
echo "SLURM_TMPDIR = " $SLURM_TMPDIR
rm -f testst1* testst2*
cd $SLURM_TMPDIR
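# Background watchdog: every 6 hours, copy results from node-local disk back
# to the submission directory so restart data survives a node failure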
while sleep 6h; do
cp -f * $SLURM_SUBMIT_DIR 2>/dev/null
done &
WPID=$!
abaqus job=testst1 input=$SLURM_SUBMIT_DIR/mystd-sim.inp \
scratch=$SCRATCH cpus=$SLURM_CPUS_ON_NODE interactive \
mp_mode=threads memory="$((${SLURM_MEM_PER_NODE}-3072))MB"
{ kill $WPID && wait $WPID; } 2>/dev/null
cp -f * $SLURM_SUBMIT_DIR
To write restart data every N=12 time increments, the input file must contain:
*RESTART, WRITE, OVERLAY, FREQUENCY=12
To write restart data for a total of 12 time increments, instead specify:
*RESTART, WRITE, OVERLAY, NUMBER INTERVAL=12, TIME MARKS=NO
Check for completed restart information in relevant output files:
egrep -i "step|start" testst*.com testst*.msg testst*.sta
#!/bin/bash
#SBATCH --account=def-group # Specify account
#SBATCH --time=00-06:00 # Specify days-hrs:mins
#SBATCH --cpus-per-task=4 # Specify number of cores
#SBATCH --mem=8G # Specify total memory > 5G
#SBATCH --nodes=1 # Do not change !
module load abaqus/2021
unset SLURM_GTIDS
export MPI_IC_ORDER='tcp'
echo "LM_LICENSE_FILE=$LM_LICENSE_FILE"
echo "ABAQUSLM_LICENSE_FILE=$ABAQUSLM_LICENSE_FILE"
echo "SLURM_SUBMIT_DIR =" $SLURM_SUBMIT_DIR
echo "SLURM_TMPDIR = " $SLURM_TMPDIR
rm -f testst2* testst1.lck
cp testst1* $SLURM_TMPDIR
cd $SLURM_TMPDIR
while sleep 3h; do
cp -f testst2* $SLURM_SUBMIT_DIR 2>/dev/null
done &
WPID=$!
abaqus job=testst2 oldjob=testst1 input=$SLURM_SUBMIT_DIR/mystd-sim-restart.inp \
scratch=$SCRATCH cpus=$SLURM_CPUS_ON_NODE interactive \
mp_mode=threads memory="$((${SLURM_MEM_PER_NODE}-3072))MB"
{ kill $WPID && wait $WPID; } 2>/dev/null
cp -f testst2* $SLURM_SUBMIT_DIR
The restart input file must contain:
*HEADING
*RESTART, READ
Multiple node computing
If you have a license that permits jobs with large memory and compute requirements, the following script will perform the computation with MPI across an arbitrary set of nodes, ideally determined automatically by the scheduler. A template script for restarting multiple node jobs is not provided because its use comes with additional limitations.
#!/bin/bash
#SBATCH --account=def-group # Specify account
#SBATCH --time=00-06:00 # Specify days-hrs:mins
# SBATCH --nodes=2 # Best to leave commented
#SBATCH --ntasks=8 # Specify number of cores
#SBATCH --mem-per-cpu=16G # Specify memory per core
#SBATCH --cpus-per-task=1 # Do not change !
module load abaqus/2021
unset SLURM_GTIDS
export MPI_IC_ORDER='tcp'
echo "LM_LICENSE_FILE=$LM_LICENSE_FILE"
echo "ABAQUSLM_LICENSE_FILE=$ABAQUSLM_LICENSE_FILE"
rm -f testsp1-mpi*
unset hostlist
nodes="$(slurm_hl2hl.py --format MPIHOSTLIST | xargs)"
for i in `echo "$nodes" | xargs -n1 | uniq`; do hostlist=${hostlist}$(echo "['${i}',$(echo "$nodes" | xargs -n1 | grep $i | wc -l)],"); done
hostlist="$(echo "$hostlist" | sed 's/,$//g')"
mphostlist="mp_host_list=[$(echo "$hostlist")]"
export $mphostlist
echo "$mphostlist" > abaqus_v6.env
abaqus job=testsp1-mpi input=mystd-sim.inp \
scratch=$SCRATCH cpus=$SLURM_NTASKS interactive mp_mode=mpi
Explicit analysis
Abaqus solvers support both thread-based and MPI-based parallelization. Scripts for each mode are provided under the single node and multiple node tabs. Template scripts to restart multiple node jobs require further testing and are not currently provided.
Single node computing
#!/bin/bash
#SBATCH --account=def-group # specify account
#SBATCH --time=00-06:00 # days-hrs:mins
#SBATCH --mem=8G # node memory > 5G
#SBATCH --cpus-per-task=4 # number cores > 1
#SBATCH --nodes=1 # do not change
module load abaqus/2021
unset SLURM_GTIDS
export MPI_IC_ORDER='tcp'
echo "LM_LICENSE_FILE=$LM_LICENSE_FILE"
echo "ABAQUSLM_LICENSE_FILE=$ABAQUSLM_LICENSE_FILE"
rm -f testep1* testep2*
abaqus job=testep1 input=myexp-sim.inp \
scratch=$SCRATCH cpus=$SLURM_CPUS_ON_NODE interactive \
mp_mode=threads memory="$((${SLURM_MEM_PER_NODE}-3072))MB"
To write restart data for a total of 12 time increments, the input file must contain:
*RESTART, WRITE, OVERLAY, NUMBER INTERVAL=12, TIME MARKS=NO
Check for completed restart information in relevant output files:
egrep -i "step|restart" testep*.com testep*.msg testep*.sta
#!/bin/bash
#SBATCH --account=def-group # specify account
#SBATCH --time=00-06:00 # days-hrs:mins
#SBATCH --mem=8G # node memory > 5G
#SBATCH --cpus-per-task=4 # number cores > 1
#SBATCH --nodes=1 # do not change
module load abaqus/2021
unset SLURM_GTIDS
export MPI_IC_ORDER='tcp'
echo "LM_LICENSE_FILE=$LM_LICENSE_FILE"
echo "ABAQUSLM_LICENSE_FILE=$ABAQUSLM_LICENSE_FILE"
rm -f testep2* testep1.lck
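# Copy the output files of the previous run, renaming testep1* to testep2*
# so the recover step below can find them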
for f in testep1*; do [[ -f ${f} ]] && cp -a "$f" "testep2${f#testep1}"; done
abaqus job=testep2 input=myexp-sim.inp recover \
scratch=$SCRATCH cpus=$SLURM_CPUS_ON_NODE interactive \
mp_mode=threads memory="$((${SLURM_MEM_PER_NODE}-3072))MB"
No input file modifications are required to restart the analysis.
#!/bin/bash
#SBATCH --account=def-group # specify account
#SBATCH --time=00-06:00 # days-hrs:mins
#SBATCH --mem=8G # node memory > 5G
#SBATCH --cpus-per-task=4 # number cores > 1
#SBATCH --nodes=1 # do not change
module load abaqus/2021
unset SLURM_GTIDS
export MPI_IC_ORDER='tcp'
echo "LM_LICENSE_FILE=$LM_LICENSE_FILE"
echo "ABAQUSLM_LICENSE_FILE=$ABAQUSLM_LICENSE_FILE"
echo "SLURM_SUBMIT_DIR =" $SLURM_SUBMIT_DIR
echo "SLURM_TMPDIR = " $SLURM_TMPDIR
rm -f testet1* testet2*
cd $SLURM_TMPDIR
while sleep 6h; do
cp -f * $SLURM_SUBMIT_DIR 2>/dev/null
done &
WPID=$!
abaqus job=testet1 input=$SLURM_SUBMIT_DIR/myexp-sim.inp \
scratch=$SCRATCH cpus=$SLURM_CPUS_ON_NODE interactive \
mp_mode=threads memory="$((${SLURM_MEM_PER_NODE}-3072))MB"
{ kill $WPID && wait $WPID; } 2>/dev/null
cp -f * $SLURM_SUBMIT_DIR
To write restart data for a total of 12 time increments specify in the input file:
*RESTART, WRITE, OVERLAY, NUMBER INTERVAL=12, TIME MARKS=NO
Check for completed restart information in relevant output files:
egrep -i "step|restart" testet*.com testet*.msg testet*.sta
#!/bin/bash
#SBATCH --account=def-group # specify account
#SBATCH --time=00-06:00 # days-hrs:mins
#SBATCH --mem=8G # node memory > 5G
#SBATCH --cpus-per-task=4 # number cores > 1
#SBATCH --nodes=1 # do not change
module load abaqus/2021
unset SLURM_GTIDS
export MPI_IC_ORDER='tcp'
echo "LM_LICENSE_FILE=$LM_LICENSE_FILE"
echo "ABAQUSLM_LICENSE_FILE=$ABAQUSLM_LICENSE_FILE"
echo "SLURM_SUBMIT_DIR =" $SLURM_SUBMIT_DIR
echo "SLURM_TMPDIR = " $SLURM_TMPDIR
rm -f testet2* testet1.lck
for f in testet1*; do cp -a "$f" $SLURM_TMPDIR/"testet2${f#testet1}"; done
cd $SLURM_TMPDIR
while sleep 3h; do
cp -f * $SLURM_SUBMIT_DIR 2>/dev/null
done &
WPID=$!
abaqus job=testet2 input=$SLURM_SUBMIT_DIR/myexp-sim.inp recover \
scratch=$SCRATCH cpus=$SLURM_CPUS_ON_NODE interactive \
mp_mode=threads memory="$((${SLURM_MEM_PER_NODE}-3072))MB"
{ kill $WPID && wait $WPID; } 2>/dev/null
cp -f * $SLURM_SUBMIT_DIR
No input file modifications are required to restart the analysis.
Multiple node computing
#!/bin/bash
#SBATCH --account=def-group # Specify account
#SBATCH --time=00-06:00 # Specify days-hrs:mins
# SBATCH --nodes=2 # Best to leave commented
#SBATCH --ntasks=8 # Specify number of cores
#SBATCH --mem-per-cpu=16G # Specify memory per core
#SBATCH --cpus-per-task=1 # Do not change !
module load abaqus/2021
unset SLURM_GTIDS
export MPI_IC_ORDER='tcp'
echo "LM_LICENSE_FILE=$LM_LICENSE_FILE"
echo "ABAQUSLM_LICENSE_FILE=$ABAQUSLM_LICENSE_FILE"
rm -f testep1-mpi*
unset hostlist
nodes="$(slurm_hl2hl.py --format MPIHOSTLIST | xargs)"
for i in `echo "$nodes" | xargs -n1 | uniq`; do hostlist=${hostlist}$(echo "['${i}',$(echo "$nodes" | xargs -n1 | grep $i | wc -l)],"); done
hostlist="$(echo "$hostlist" | sed 's/,$//g')"
mphostlist="mp_host_list=[$(echo "$hostlist")]"
export $mphostlist
echo "$mphostlist" > abaqus_v6.env
abaqus job=testep1-mpi input=myexp-sim.inp \
scratch=$SCRATCH cpus=$SLURM_NTASKS interactive mp_mode=mpi
Node memory
An estimate of the total Slurm node memory (--mem=) required for a simulation to run fully in RAM (without being virtualized to scratch disk) can be obtained by examining the Abaqus output test.dat file. For example, a simulation that requires a fairly large amount of memory might show:
M E M O R Y E S T I M A T E
PROCESS FLOATING PT MINIMUM MEMORY MEMORY TO
OPERATIONS REQUIRED MINIMIZE I/O
PER ITERATION (MB) (MB)
1 1.89E+14 3612 96345
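A quick way to locate this block in the output file (assuming the job name test) is:
grep -A5 "M E M O R Y" test.dat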
To run your simulation interactively and monitor the memory consumption, do the following:
1) ssh into a cluster, obtain an allocation on a compute node (such as gra100), and run Abaqus:
[name@server ~]$ module load abaqus/6.14.1 OR module load abaqus/2020
[name@server ~]$ unset SLURM_GTIDS
2) ssh into the cluster again, then ssh into the compute node holding the allocation and run top:
[name@server ~]$ ssh gra100
[name@server ~]$ top -u $USER
3) watch the VIRT and RES columns until steady peak memory values are observed
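For step 1, the allocation could be obtained with something like the following (account name, time and resources are placeholders):
salloc --time=3:00:00 --cpus-per-task=4 --mem=8G --account=def-group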
To fully satisfy the recommended MEMORY TO MINIMIZE I/O (MRMIO) value, at least the same amount of non-swapped physical memory (RES) must be available to Abaqus. Since RES will in general be less than the virtual memory (VIRT) by some relatively constant amount for a given simulation, it is necessary to slightly over-allocate the requested Slurm node memory --mem=. In the above sample Slurm scripts, this over-allocation has been hardcoded to a conservative value of 3072 MB based on initial testing of the standard Abaqus solver. To avoid the long queue wait times associated with large values of MRMIO, it may be worth investigating the simulation performance impact of reducing the RES memory available to Abaqus to significantly below the MRMIO. This can be done by lowering the --mem= value, which in turn will set an artificially low value of memory= in the Abaqus command (found in the last line of the Slurm script). In doing so, take care that RES does not dip below the MINIMUM MEMORY REQUIRED (MMR); otherwise Abaqus will exit due to an Out of Memory (OOM) error. As an example, if your MRMIO is 96 GB, try running a series of short test jobs with #SBATCH --mem=8G, 16G, 32G, 64G until an acceptable minimal performance impact is found, noting that smaller values will result in increasingly larger scratch space being used for temporary files.
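Such a series could be submitted with a short loop; a sketch where myscript.sh stands for one of the single node scripts above:
for m in 8G 16G 32G 64G; do
  sbatch --mem=$m myscript.sh
done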
Graphical use
Abaqus/2020 can be run interactively in graphical mode on a cluster or gra-vdi using VNC by following these steps:
On a cluster
- Connect to a compute node (3hr salloc time limit) with TigerVNC
- Open a new terminal window and enter one of the following:
module load StdEnv/2016 abaqus/6.14.1
module load StdEnv/2016 abaqus/2020
module load StdEnv/2020 abaqus/2021
- Start the application with:
abaqus cae -mesa
On gra-vdi
- Connect to gra-vdi (24hr abaqus runtime limit) with TigerVNC
- Open a new terminal window and enter one of the following:
module load CcEnv StdEnv/2016 abaqus/6.14.1
module load CcEnv StdEnv/2016 abaqus/2020
module load CcEnv StdEnv/2020 abaqus/2021
- Start the application with:
abaqus cae
Checking license availability
There must be at least 1 license free (not in use) for abaqus cae to start, according to:
abaqus licensing lmstat -c $ABAQUSLM_LICENSE_FILE -a | grep "Users of cae"
The SHARCNET license has 2 free and 2 reserved licenses. If all 4 are in use, the following error message will occur:
[gra-vdi3:~] abaqus licensing lmstat -c $ABAQUSLM_LICENSE_FILE -a | grep "Users of cae"
Users of cae: (Total of 4 licenses issued; Total of 4 licenses in use)
[gra-vdi3:~] abaqus cae
ABAQUSLM_LICENSE_FILE=27050@license3.sharcnet.ca
/opt/sharcnet/abaqus/2020/Commands/abaqus cae
No socket connection to license server manager.
Feature: cae
License path: 27050@license3.sharcnet.ca:
FLEXnet Licensing error:-7,96
For further information, refer to the FLEXnet Licensing documentation,
or contact your local Abaqus representative.
Number of requested licenses: 1
Number of total licenses: 4
Number of licenses in use: 2
Number of available licenses: 2
Abaqus Error: Abaqus/CAE Kernel exited with an error.
Site-specific use
SHARCNET license
SHARCNET provides a small but free license consisting of 2 cae and 35 execute tokens, with usage limits of 10 tokens per user and 15 tokens per group. For groups that have purchased dedicated tokens, the free token usage limits are added to their reservation. The free tokens are available on a first-come, first-served basis and are mainly intended for testing and light usage before deciding whether or not to purchase dedicated tokens. The costs for dedicated tokens are approximately CAD$110 per compute token and CAD$400 per GUI token: submit a ticket to request an official quote. The license can be used by any Alliance researcher, but only on SHARCNET hardware. Groups that purchase dedicated tokens to run on the SHARCNET license server may likewise only use them on SHARCNET hardware, including gra-vdi (for running Abaqus in full graphical mode) and the Graham or Dusky clusters (for submitting compute batch jobs to the queue). Before you can use the license you must contact technical support and request access. In your email, 1) mention that it is for use on SHARCNET systems and 2) include a copy/paste of the following License Agreement statement with your full name and username entered in the indicated locations. Please note that every user must do this; it cannot be done once for an entire group, and this includes PIs who have purchased their own dedicated tokens.
License agreement
----------------------------------------------------------------------------------
Subject: Abaqus SHARCNET Academic License User Agreement

This email is to confirm that i "_____________" with username "___________" will
only use “SIMULIA Academic Software” with tokens from the SHARCNET license server
for the following purposes:
1) on SHARCNET hardware where the software is already installed
2) in affiliation with a Canadian degree-granting academic institution
3) for education, institutional or instruction purposes and not for any commercial
   or contract-related purposes where results are not publishable
4) for experimental, theoretical and/or digital research work, undertaken primarily
   to acquire new knowledge of the underlying foundations of phenomena and
   observable facts, up to the point of proof-of-concept in a laboratory
----------------------------------------------------------------------------------
Configure license file
Configure your license file as follows, noting that it is only usable on SHARCNET systems: Graham, gra-vdi and Dusky.
[gra-login1:~] cat ~/.licenses/abaqus.lic
prepend_path("LM_LICENSE_FILE","27050@license3.sharcnet.ca")
prepend_path("ABAQUSLM_LICENSE_FILE","27050@license3.sharcnet.ca")
If your Abaqus jobs fail with the error message [*** ABAQUS/eliT_CheckLicense rank 0 terminated by signal 11 (Segmentation fault)] in the Slurm output file, verify that your abaqus.lic file contains ABAQUSLM_LICENSE_FILE, which abaqus/2020 uses. If your Abaqus jobs fail with an error message starting [License server machine is down or not responding, etc.] in the output file, verify that your abaqus.lic file contains LM_LICENSE_FILE, which abaqus/6.14.1 uses, as shown. The abaqus.lic file shown above contains both, so you should not see either problem.
Query license server
I) To check the SHARCNET license server for started and queued jobs by username, run:
ssh graham.computecanada.ca
module load StdEnv/2016.4
module load abaqus
abaqus licensing lmstat -c $LM_LICENSE_FILE -a | grep "Users\|start\|queued\|RESERVATIONs"
II) To check the SHARCNET license server for reservations of products by purchasing groups, run:
ssh graham.computecanada.ca
module load StdEnv/2016.4
module load abaqus
abaqus licensing lmstat -c $LM_LICENSE_FILE -a | grep "Users\|RESERVATIONs"
III) To check the SHARCNET license server for license usage of the cae, standard and explicit products, run:
ssh graham.computecanada.ca
module load StdEnv/2016.4
module load abaqus
abaqus licensing lmstat -c $LM_LICENSE_FILE -a | grep "Users of" | grep "cae\|standard\|explicit"
When the output of query I) above indicates that a job for a particular username is queued, this means the job has entered the "R"unning state from the perspective of squeue -j jobid or sacct -j jobid, and is therefore idle on a compute node waiting for a license. This will have the same impact on your account priority as if the job were performing computations and consuming CPU time. Eventually, when sufficient licenses become available, the queued job will start. To demonstrate, the following shows the license server and queue output for a situation where a user submits two jobs, but only the first job acquires enough licenses to start:
[roberpj@dus241:~] sq
  JOBID     USER      ACCOUNT           NAME  ST  TIME_LEFT  NODES  CPUS  MIN_MEM  NODELIST (REASON)
  29801  roberpj  def-roberpj  scriptep1.txt   R    2:59:18      1    12       8G  dus47 (None)
  29802  roberpj  def-roberpj  scriptsp1.txt   R    2:59:33      1    12       8G  dus28 (None)

[roberpj@dus241:~] abaqus licensing lmstat -c $LM_LICENSE_FILE -a | grep "Users\|start\|queued\|RESERVATIONs"
Users of abaqus:  (Total of 78 licenses issued;  Total of 71 licenses in use)
    roberpj dus47 /dev/tty (v62.2) (license3.sharcnet.ca/27050 275), start Thu 8/27 5:45, 14 licenses
    roberpj dus28 /dev/tty (v62.2) (license3.sharcnet.ca/27050 729) queued for 14 licenses
Specify job resources
To ensure optimal usage of both your Abaqus tokens and our resources, it's important to carefully specify the required memory and ncpus in your Slurm script. The values can be determined by submitting a few short test jobs to the queue then checking their utilization. For completed jobs use seff JobNumber
to show the total Memory Utilized and Memory Efficiency. If the Memory Efficiency is less than ~90%, decrease the value of the #SBATCH --mem=
setting in your Slurm script accordingly. Notice that the seff JobNumber
command also shows the total CPU (time) Utilized and CPU Efficiency. If the CPU Efficiency is less than ~90%, perform scaling tests to determine the optimal number of CPUs for optimal performance and then update the value of #SBATCH --cpus-per-task=
in your Slurm script. For running jobs, use the srun --jobid=29821580 --pty top -d 5 -u $USER
command to watch the %CPU, %MEM and RES for each Abaqus parent process on the compute node. The %CPU and %MEM columns display the percent usage relative to the total available on the node, while the RES column shows the per-process resident memory size (in human readable format for values over 1 GB). Further information on how to monitor jobs is available on our documentation wiki.
Core token mapping
TOKENS |  5 |  6 |  7 |  8 | 10 | 12 | 14 | 16 | 19 | 21 | 25 | 28 | 34 |  38
CORES  |  1 |  2 |  3 |  4 |  6 |  8 | 12 | 16 | 24 | 32 | 48 | 64 | 96 | 128
where TOKENS = floor(5 × CORES^0.422)
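For example, with CORES = 4 the formula gives floor(5 × 4^0.422) = floor(8.97) = 8 tokens, in agreement with the table.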
Western license
The Western site license may only be used by Western researchers on hardware located at Western's campus. Currently, the Dusky cluster is the only system that satisfies these conditions. Graham and gra-vdi are excluded since they are located on Waterloo's campus. Contact the Western Abaqus license server administrator <jmilner@robarts.ca> to inquire about using the Western Abaqus license. You will need to provide your username and possibly make arrangements to purchase tokens. If you are granted access then you may proceed to configure your abaqus.lic
file to point to the Western license server as follows:
Configure license file
Configure your license file as follows, noting that it is only usable on Dusky.
[dus241:~] cat .licenses/abaqus.lic
prepend_path("LM_LICENSE_FILE","27000@license4.sharcnet.ca")
prepend_path("ABAQUSLM_LICENSE_FILE","27000@license4.sharcnet.ca")
Once configured, submit your job as described in the Cluster job submission section above. If there are any problems, submit a problem ticket to technical support; specify that you are using the Abaqus Western license on Dusky and provide the failed job number, along with a paste of any error messages as applicable.
Online documentation
The full Abaqus documentation (latest version) can be accessed on gra-vdi as shown in the following steps.
Account preparation:
- connect to gra-vdi.computecanada.ca with TigerVNC as described in VDI nodes
- open a terminal window on gra-vdi and run: firefox
- in the address bar, enter: about:config and then click the I accept the risk! button
- in the search bar, type: unique and then double-click privacy.file_unique_origin to change true to false
View documentation:
- connect to gra-vdi.computecanada.ca with TigerVNC as described in VDI nodes
- open a terminal window on gra-vdi and run: firefox
- in the address bar, paste one of the following:
file:///opt/sharcnet/abaqus/2020/doc/English/DSSIMULIA_Established.htm
file:///opt/sharcnet/abaqus/2021/doc/English/DSSIMULIA_Established.htm
- find a topic by clicking, for example: Abaqus -> Analysis -> Analysis Techniques -> Analysis Continuation Techniques