
= Using your own license =
Abaqus is available on Compute Canada clusters, but you must provide your own license.  To configure your cluster account, create a file named <tt>$HOME/.licenses/abaqus.lic</tt> containing the following two lines, which support versions 2020 and 6.14.1 respectively (otherwise abaqus jobs will fail).  This must be done on each cluster where you plan to run abaqus:


{{File
|name=abaqus.lic
|contents=
prepend_path("LM_LICENSE_FILE","port@server")
prepend_path("ABAQUSLM_LICENSE_FILE","port@server")
}}
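The file can be created in one step from a login shell; a minimal sketch (keep the <code>port@server</code> placeholders until you have your own license server's port and hostname):

```shell
# Create the per-user license directory and file in one step.
# NOTE: port@server is a placeholder -- substitute your own license
# server values before running real jobs, otherwise jobs will fail.
mkdir -p "$HOME/.licenses"
cat > "$HOME/.licenses/abaqus.lic" <<'EOF'
prepend_path("LM_LICENSE_FILE","port@server")
prepend_path("ABAQUSLM_LICENSE_FILE","port@server")
EOF
```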




= Cluster job submission =
Below is a sample slurm script for submitting a parallel 4-core job to a single compute node with the command <code>sbatch myscript.sh</code>:
{{File
   |name="myscript.sh"
   |lang="sh"
   |contents=
#SBATCH --cpus-per-task=4      # number of cores > 1
   
module load abaqus/6.14.1      # or abaqus/2020
unset SLURM_GTIDS
export MPI_IC_ORDER='tcp'

abaqus job=test input=sample.inp scratch=$SCRATCH cpus=$SLURM_CPUS_ON_NODE \
   interactive mp_mode=threads \
  memory="$((${SLURM_MEM_PER_NODE}-3072))MB"
}}
 
Note that the last line containing the <code>memory=</code> setting is optional.  It is mainly intended for larger-memory or problematic jobs and may require tuning of the 3072MB default value.  A listing of abaqus command-line arguments can be obtained by loading an abaqus module and running <code>abaqus -help | less</code>.
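The <code>memory=</code> value is simply the Slurm node memory minus the hardcoded 3072MB overhead; a small sketch of the arithmetic (SLURM_MEM_PER_NODE is set by Slurm inside a job, so an example value standing in for a <code>--mem=16384</code> request is used here):

```shell
# Outside a running job SLURM_MEM_PER_NODE is unset, so assign an
# example value corresponding to a --mem=16384 (16GB) request.
SLURM_MEM_PER_NODE=16384
memory="$((${SLURM_MEM_PER_NODE}-3072))MB"
echo "$memory"    # prints 13312MB
```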


== Node memory ==
     ssh gra100
     top -u $USER
  3) watch the VIRT and RES columns until steady peak memory values are observed
</source>
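The <code>top</code> snapshot above can also be taken non-interactively; a sketch (not from the original page) that sums the resident memory of all your processes on the node, to be run repeatedly until the value plateaus:

```shell
# Sum resident memory (the RES column in top, reported by ps as rss
# in KB) across all of your processes and print it in MB; repeat
# (or wrap in watch) until a steady peak value is observed.
user="${USER:-$(id -un)}"
ps -u "$user" -o rss= | awk '{sum += $1} END {printf "%d MB\n", sum/1024}'
```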


To completely satisfy the recommended "MEMORY TO OPERATIONS REQUIRED MINIMIZE I/O" (MRMIO) value, at least the same amount of non-swapped physical memory (RES) must be available to abaqus.  Since RES will in general be less than the virtual memory (VIRT) by some relatively constant amount for a given simulation, it is necessary to slightly over-allocate the requested slurm node memory <code>--mem=</code>.  In the above sample slurm script, this over-allocation has been hardcoded to a conservative value of 3072MB based on initial testing of the standard abaqus solver.  To avoid the long queue wait times associated with large values of MRMIO, it may be worth investigating the simulation performance impact of reducing the RES memory made available to abaqus significantly below the MRMIO.  This can be done by lowering the <code>--mem=</code> value, which in turn will set an artificially low value of <code>memory=</code> in the abaqus command (found in the last line of the slurm script).  In doing so, be careful that RES does not dip below the "MINIMUM MEMORY REQUIRED" (MMR), otherwise abaqus will exit due to "Out Of Memory" (OOM).  As an example, if your MRMIO is 96GB, try running a series of short test jobs with <code>#SBATCH --mem=8G, 16G, 32G, 64G</code> until an acceptable minimal performance impact is found, noting that smaller values will result in increasingly larger scratch space use by tmpdir files.
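The sweep of short test jobs described above can be scripted; a sketch that only prints the submission commands (swap <code>echo</code> for a real <code>sbatch</code> call, and <code>myscript.sh</code> for your own script name):

```shell
# Print a series of short test-job submissions with decreasing memory
# requests; myscript.sh stands in for the sample script from the
# cluster job submission section.
for mem in 64G 32G 16G 8G; do
  echo "sbatch --mem=${mem} myscript.sh"
done
```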


= Graphical use =


Abaqus/2020 can be run interactively in graphical mode on a cluster or on gra-vdi using VNC by following these steps:


== On a cluster ==

# Connect to a compute node (3hr time limit) with [https://docs.computecanada.ca/wiki/VNC#Compute_Nodes TigerVNC]
# <code>module load abaqus/2020</code>
# <code>abaqus cae -mesa</code>


== On gra-vdi ==

# Connect to gra-vdi (no time limit) with [https://docs.computecanada.ca/wiki/VNC#VDI_Nodes TigerVNC]
# <code>module load SnEnv</code>
# <code>module load abaqus/2020</code>
# <code>abaqus cae</code>

<b>o How to check license availability</b>

There must be at least 1 license not in use for <code>abaqus cae</code> to start, according to:
 abaqus licensing lmstat -c $ABAQUSLM_LICENSE_FILE -a | grep "Users of cae"

For example, the SHARCNET license has 2 free and 2 reserved licenses.  If all 4 are in use, the following error message will occur:

<source lang="bash">
[gra-vdi3:~] abaqus licensing lmstat -c $ABAQUSLM_LICENSE_FILE -a | grep "Users of cae"
Users of cae:  (Total of 4 licenses issued;  Total of 4 licenses in use)
[gra-vdi3:~] abaqus cae
ABAQUSLM_LICENSE_FILE=27050@license3.sharcnet.ca
/opt/sharcnet/abaqus/2020/Commands/abaqus cae
No socket connection to license server manager.
Feature:      cae
License path:  27050@license3.sharcnet.ca:
FLEXnet Licensing error:-7,96
For further information, refer to the FLEXnet Licensing documentation,
or contact your local Abaqus representative.
Number of requested licenses: 1
Number of total licenses:    4
Number of licenses in use:    2
Number of available licenses: 2
Abaqus Error: Abaqus/CAE Kernel exited with an error.
</source>
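The free-license count can be extracted from the summary line with a short pipe; a sketch (not from the original page) that assumes the FlexNet summary format shown above, using a sample line in place of live <code>lmstat</code> output:

```shell
# In the FlexNet summary line, whitespace-separated field 6 is the
# number of licenses issued and field 11 the number in use; their
# difference is the number of free cae tokens.
line='Users of cae:  (Total of 4 licenses issued;  Total of 4 licenses in use)'
free=$(echo "$line" | awk '{print $6 - $11}')
echo "$free"    # prints 0: no cae license is free, so abaqus cae will not start
```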


= Site specific use =


== Sharcnet license ==


SHARCNET provides a small but free license consisting of 2 cae and 21 execute tokens, with usage limits of 10 tokens/user and 15 tokens/group.  For groups that have purchased dedicated tokens, the free token usage limits are added to their reservation.  The free tokens are available on a first-come, first-served basis and are mainly intended for testing and light usage before deciding whether or not to purchase dedicated tokens.  The cost of dedicated tokens is approximately CAD $110 per compute token and CAD $400 per GUI token; submit a ticket to request an official quote.  The license can be used by any Compute Canada member, but only on SHARCNET hardware.  Groups that purchase dedicated tokens to run on the SHARCNET license server may likewise only use them on SHARCNET hardware.  Such hardware includes gra-vdi for running abaqus in full graphical mode and the graham cluster for submitting batch compute jobs to the queue.  Before you can use the license you must open a ticket at <support@computecanada.ca> and request access.  In your email 1) mention that it is for use on SHARCNET systems and 2) include a copy/paste of the following <tt>License Agreement</tt> statement with your full name and Compute Canada username entered in the indicated locations.  Please note that every user must do this; it cannot be done once for an entire group (including PIs who have purchased their own dedicated tokens).


<b>o License agreement</b>
<b>o Configure license file</b>


Configure your license file as follows, noting that it is only usable on SHARCNET systems: graham, gra-vdi and dusky.


<source lang="bash">
[gra-login1:~] cat ~/.licenses/abaqus.lic
prepend_path("LM_LICENSE_FILE","27050@license3.sharcnet.ca")
prepend_path("ABAQUSLM_LICENSE_FILE","27050@license3.sharcnet.ca")
</source>


If your abaqus jobs fail with the error message [*** ABAQUS/eliT_CheckLicense rank 0 terminated by signal 11 (Segmentation fault)] in the slurm output file, verify that your <code>abaqus.lic</code> file contains ABAQUSLM_LICENSE_FILE, which is needed by abaqus/2020.  If your abaqus jobs fail with an error message starting [License server machine is down or not responding] in the output file, verify that your <code>abaqus.lic</code> file contains LM_LICENSE_FILE, which is needed by abaqus/6.14.1.  The <code>abaqus.lic</code> file shown above contains both lines, so you should not see either problem.
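The two failure modes can be told apart by grepping the job's Slurm output file; a sketch using a temporary file with fabricated log text standing in for a real output file:

```shell
# Fabricated log text standing in for a real failed-job output file
# (normally something like slurm-12345.out in the submit directory).
log=$(mktemp)
echo '*** ABAQUS/eliT_CheckLicense rank 0 terminated by signal 11 (Segmentation fault)' > "$log"

if grep -q 'Segmentation fault' "$log"; then
  echo 'missing ABAQUSLM_LICENSE_FILE line (needed by abaqus/2020)'
elif grep -q 'License server machine is down' "$log"; then
  echo 'missing LM_LICENSE_FILE line (needed by abaqus/6.14.1)'
fi
rm -f "$log"
```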


<b>o Check license status</b>


== Western license ==
The Western site license may only be used by Western researchers on hardware located at Western's campus.  Currently, the Dusky cluster is the only system that satisfies these conditions; graham and gra-vdi are excluded since they are located on Waterloo's campus.  Contact the Western abaqus license server administrator <jmilner@robarts.ca> to inquire about using the Western abaqus license.  You will need to provide your Compute Canada username and possibly make arrangements to purchase tokens.  If you are granted access, you may then configure your <code>abaqus.lic</code> file to point to the Western license server as follows:
 
<b>o Configure license file</b>
 
Configure your license file as follows, noting that it is only usable on dusky.
 
<source lang="bash">
[dus241:~] cat .licenses/abaqus.lic
prepend_path("LM_LICENSE_FILE","27000@license4.sharcnet.ca")
prepend_path("ABAQUSLM_LICENSE_FILE","27000@license4.sharcnet.ca")
</source>
 
Once configured, submit your jobs as described above in the Cluster job submission section.  If there are any problems, submit a ticket to [[Technical support|technical support]].  Specify that you are using the abaqus Western license on dusky, and include the failed job number along with a paste of any error messages as applicable.