__FORCETOC__
<translate>
<!--T:1-->
[https://www.3ds.com/products-services/simulia/products/abaqus/ Abaqus FEA] is a software suite for finite element analysis and computer-aided engineering.

= Using your own license = <!--T:2-->
Abaqus software modules are available on our clusters; however, you must provide your own license. To configure your account on a cluster, log in and create a file named <code>$HOME/.licenses/abaqus.lic</code> containing the following two lines, which support versions 202X and 6.14.1 respectively. Then replace <code>port@server</code> with the flexlm port number and the IP address (or fully qualified hostname) of your Abaqus license server.

<!--T:3-->
{{File
|name=abaqus.lic
|contents=
prepend_path("ABAQUSLM_LICENSE_FILE","port@server")
prepend_path("LM_LICENSE_FILE","port@server")
}}

<!--T:4-->
If your license has not been set up for use on an Alliance cluster, some additional configuration changes will need to be made by the Alliance system administrator and by your local system administrator. These changes are necessary to ensure that the flexlm and vendor TCP ports of your Abaqus server are reachable from all cluster compute nodes when jobs are run via the queue. To arrange this, write to [[Technical support|technical support]] and include the following three items:
* flexlm port number
* vendor port number
* static IP address of your license server
You will then be sent a list of cluster IP addresses so that your administrator can open the local server firewall to allow connections from the cluster on both ports. Please note that a special license agreement must generally be negotiated and signed by SIMULIA and your institution before a local license may be used remotely on Alliance hardware.

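Once both ports are open, reachability can be spot-checked from a cluster login node. A minimal sketch, assuming <code>nc</code> (netcat) is available, with <code>server</code> and <code>port</code> replaced by your license server values:
  nc -zv -w 5 server port
Run it once with the flexlm port and once with the vendor port; both must succeed before jobs on the cluster can check out licenses.
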
= Cluster job submission = <!--T:5-->
Below are prototype Slurm scripts for submitting thread-based and mpi-based parallel simulations to single or multiple compute nodes. Most users will find it sufficient to use one of the <b>project directory scripts</b> provided in the <i>Single node computing</i> sections. The optional <code>memory=</code> argument found in the last line of the scripts is intended for larger memory or problematic jobs, where the 3072 MB offset value may require tuning. A listing of all Abaqus command line arguments can be obtained by loading an Abaqus module and running <code>abaqus -help | less</code>.

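For reference, the value passed to <code>memory=</code> in these scripts is derived from the Slurm allocation by subtracting a fixed overhead; a minimal sketch of the idiom (the 3072 is the offset mentioned above):
  memory="$((${SLURM_MEM_PER_NODE}-3072))MB"
Jobs that fail with memory errors may need a larger offset (leaving more memory outside the solver) or a larger <code>--mem</code> request.
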
<!--T:6-->
Single node jobs that run for less than one day should find the <i>project directory script</i> located in the first tab sufficient. However, single node jobs that run for more than a day should use one of the restart scripts. Jobs that create large restart files will benefit from writing to local disk through the use of the SLURM_TMPDIR environment variable, utilized in the <b>temporary directory scripts</b> provided in the two rightmost tabs of the single node standard and explicit analysis sections. The restart scripts shown here will continue jobs that have been terminated early for some reason. Such job failures can occur if a job reaches its maximum requested runtime before completing and is killed by the queue, or if the compute node the job was running on crashes due to an unexpected hardware failure. Other restart types are possible by further tailoring the input file (not shown here) to continue a job with additional steps or to change the analysis (see the documentation for version-specific details).

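In outline, the temporary directory scripts below implement the following pattern (a minimal sketch with hypothetical file names, not a replacement for the full scripts):
  cd $SLURM_TMPDIR                     # run on fast node-local disk
  cp $SLURM_SUBMIT_DIR/myjob.inp .     # stage in the input file
  abaqus job=myjob input=myjob.inp scratch=$SLURM_TMPDIR interactive
  cp -f myjob* $SLURM_SUBMIT_DIR       # copy results back before the job ends
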
<!--T:7-->
Jobs that require large memory or more compute resources (beyond what a single compute node can provide) should use the mpi scripts in the <b>multiple node sections</b> below to distribute computing over arbitrary node ranges determined automatically by the scheduler. Short scaling test jobs should be run to determine wall-clock times (and memory requirements) as a function of the number of cores (2, 4, 8, etc.) in order to identify the optimal number before running any long jobs.

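For example, a short standard analysis could be resubmitted at several core counts with a loop such as the following (a sketch assuming a single node script named <code>scriptsp1.txt</code> that takes its core count from the Slurm environment; command line options override matching #SBATCH directives):
  for ncores in 2 4 8; do
    sbatch --cpus-per-task=$ncores --job-name=scale-$ncores scriptsp1.txt
  done
The elapsed time of each run can then be compared (for example with <code>seff jobid</code>) to pick the optimal core count.
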
== Standard analysis == <!--T:8-->
Abaqus solvers support thread-based and mpi-based parallelization. Scripts for each type are provided below for running standard analysis jobs on single or multiple nodes respectively. Scripts to perform multiple node job restarts are not currently provided.

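The parallelization mode is selected with the <code>mp_mode=</code> argument on the abaqus command line, as the scripts below illustrate:
  abaqus job=test input=mysim.inp interactive mp_mode=threads   # thread-based, single node
  abaqus job=test input=mysim.inp interactive mp_mode=mpi       # mpi-based, one or more nodes
(Hypothetical job and input names; the remaining arguments are omitted for brevity.)
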
=== Single node computing === <!--T:9-->

<!--T:10-->
<tabs>
<tab name="project directory script">
##SBATCH --gres=gpu:a100:1    # Uncomment to specify gpu

<!--T:11-->
module load StdEnv/2020        # Latest installed version
module load abaqus/2021        # Latest installed version

<!--T:12-->
#module load StdEnv/2016      # Uncomment to use
#module load abaqus/2020      # Uncomment to use

<!--T:13-->
unset SLURM_GTIDS
export MPI_IC_ORDER='tcp'
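# the above avoids Slurm task-ID variables confusing the solver's built-in MPI and prefers the tcp interconnect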
echo "ABAQUSLM_LICENSE_FILE=$ABAQUSLM_LICENSE_FILE"

<!--T:14-->
rm -f testsp1* testsp2*
abaqus job=testsp1 input=mystd-sim.inp \
   scratch=$SLURM_TMPDIR cpus=$SLURM_CPUS_ON_NODE interactive \
   mp_mode=threads memory="$((${SLURM_MEM_PER_NODE}-3072))MB"
}}

<!--T:15-->
To write restart data every N=12 time increments, specify in the input file:
  *RESTART, WRITE, OVERLAY, FREQUENCY=12
  order_parallel=OFF

<!--T:16-->
</tab>
<tab name="project directory restart script">
##SBATCH --gres=gpu:a100:1    # Uncomment to specify gpu

<!--T:17-->
module load StdEnv/2020        # Latest installed version
module load abaqus/2021        # Latest installed version

<!--T:18-->
unset SLURM_GTIDS
export MPI_IC_ORDER='tcp'
echo "ABAQUSLM_LICENSE_FILE=$ABAQUSLM_LICENSE_FILE"

<!--T:19-->
rm -f testsp2* testsp1.lck
abaqus job=testsp2 oldjob=testsp1 input=mystd-sim-restart.inp \
   scratch=$SLURM_TMPDIR cpus=$SLURM_CPUS_ON_NODE interactive \
   mp_mode=threads memory="$((${SLURM_MEM_PER_NODE}-3072))MB"
}}

<!--T:20-->
The restart input file should contain:
  *HEADING
  *RESTART, READ

<!--T:21-->
</tab>
<tab name="temporary directory script">
##SBATCH --gres=gpu:a100:1    # Uncomment to specify gpu

<!--T:22-->
module load StdEnv/2020        # Latest installed version
module load abaqus/2021        # Latest installed version

<!--T:23-->
unset SLURM_GTIDS
export MPI_IC_ORDER='tcp'
echo "SLURM_TMPDIR = " $SLURM_TMPDIR

<!--T:24-->
rm -f testst1* testst2*
mkdir $SLURM_TMPDIR/scratch
}}

<!--T:25-->
To write restart data every N=12 time increments, specify in the input file:
  *RESTART, WRITE, OVERLAY, FREQUENCY=12
To check the completed status of steps and increments, run:
  egrep -i "step|start" testst*.com testst*.msg testst*.sta

<!--T:26-->
</tab>
<tab name="temporary directory restart script">
##SBATCH --gres=gpu:a100:1    # Uncomment to specify gpu

<!--T:27-->
module load StdEnv/2020        # Latest installed version
module load abaqus/2021        # Latest installed version

<!--T:28-->
unset SLURM_GTIDS
export MPI_IC_ORDER='tcp'
echo "SLURM_TMPDIR = " $SLURM_TMPDIR

<!--T:29-->
rm -f testst2* testst1.lck
cp testst1* $SLURM_TMPDIR
}}

<!--T:30-->
The restart input file should contain:
  *HEADING
  *RESTART, READ
</tab>
</tabs>

=== Multiple node computing === <!--T:31-->

Users with large memory or compute needs (and correspondingly large licenses) can use the following script to perform mpi-based computing over an arbitrary range of nodes, ideally left to the scheduler to determine automatically. A companion template script to perform restarts of multinode jobs is not provided, due to additional limitations on when they can be used.

<!--T:32-->
{{File
   |name="scriptsp1-mpi.txt"
#SBATCH --cpus-per-task=1      # Do not change !

<!--T:33-->
module load StdEnv/2020        # Latest installed version
module load abaqus/2021        # Latest installed version

<!--T:34-->
unset SLURM_GTIDS
#export MPI_IC_ORDER='tcp'
echo "ABAQUSLM_LICENSE_FILE=$ABAQUSLM_LICENSE_FILE"

<!--T:35-->
rm -f testsp1-mpi*

<!--T:36-->
unset hostlist
nodes="$(slurm_hl2hl.py --format MPIHOSTLIST {{!}} xargs)"
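# the slurm_hl2hl.py call above converts Slurm's compact hostlist into a plain MPI host list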
echo "$mphostlist" > abaqus_v6.env

<!--T:37-->
abaqus job=testsp1-mpi input=mystd-sim.inp \
   scratch=$SLURM_TMPDIR cpus=$SLURM_NTASKS interactive mp_mode=mpi \
   memory="$((${SLURM_MEM_PER_NODE}-3072))MB"
}}

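Because the script takes its core count from <code>$SLURM_NTASKS</code> and leaves node placement to the scheduler, the job size can be chosen at submission time. A hypothetical example (command line options override matching #SBATCH directives in the script):
  sbatch --ntasks=16 scriptsp1-mpi.txt
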
== Explicit analysis == <!--T:38-->
Abaqus solvers support thread-based and mpi-based parallelization. Scripts for each type are provided below for running explicit analysis jobs on single or multiple nodes respectively. Template scripts to perform multinode job restarts are not currently provided pending further testing.

=== Single node computing === <!--T:39-->

<!--T:40-->
<tabs>
<tab name="project directory script">
#SBATCH --nodes=1              # do not change

<!--T:41-->
module load StdEnv/2020
module load abaqus/2021

<!--T:42-->
unset SLURM_GTIDS
export MPI_IC_ORDER='tcp'
echo "ABAQUSLM_LICENSE_FILE=$ABAQUSLM_LICENSE_FILE"

<!--T:43-->
rm -f testep1* testep2*
abaqus job=testep1 input=myexp-sim.inp \
   scratch=$SLURM_TMPDIR cpus=$SLURM_CPUS_ON_NODE interactive \
   mp_mode=threads memory="$((${SLURM_MEM_PER_NODE}-3072))MB"
}}

<!--T:44-->
To write restart data for a total of 12 time increments, specify in the input file:
  *RESTART, WRITE, OVERLAY, NUMBER INTERVAL=12, TIME MARKS=NO
To check the completed status of steps and increments, run:
  egrep -i "step|restart" testep*.com testep*.msg testep*.sta

<!--T:45-->
</tab>
<tab name="project directory restart script">
#SBATCH --nodes=1              # do not change

<!--T:46-->
module load StdEnv/2020
module load abaqus/2021

<!--T:47-->
unset SLURM_GTIDS
export MPI_IC_ORDER='tcp'
echo "ABAQUSLM_LICENSE_FILE=$ABAQUSLM_LICENSE_FILE"

<!--T:48-->
rm -f testep2* testep1.lck
for f in testep1*; do [[ -f ${f} ]] && cp -a "$f" "testep2${f#testep1}"; done
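# the loop above copies each testep1 result file to a matching testep2 name so the restart run can find it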
}}

<!--T:49-->
No input file modifications are required to restart the analysis.

<!--T:50-->
</tab>
<tab name="temporary directory script">
#SBATCH --nodes=1              # do not change

<!--T:51-->
module load StdEnv/2020
module load abaqus/2021

<!--T:52-->
unset SLURM_GTIDS
export MPI_IC_ORDER='tcp'
echo "SLURM_TMPDIR = " $SLURM_TMPDIR

<!--T:53-->
rm -f testet1* testet2*
cd $SLURM_TMPDIR
}}

<!--T:54-->
To write restart data for a total of 12 time increments, specify in the input file:
  *RESTART, WRITE, OVERLAY, NUMBER INTERVAL=12, TIME MARKS=NO
To check the completed status of steps and increments, run:
  egrep -i "step|restart" testet*.com testet*.msg testet*.sta

<!--T:55-->
</tab>
<tab name="temporary directory restart script">
#SBATCH --nodes=1              # do not change

<!--T:56-->
module load StdEnv/2020
module load abaqus/2021

<!--T:57-->
unset SLURM_GTIDS
export MPI_IC_ORDER='tcp'
echo "SLURM_TMPDIR = " $SLURM_TMPDIR

<!--T:58-->
rm -f testet2* testet1.lck
for f in testet1*; do cp -a "$f" $SLURM_TMPDIR/"testet2${f#testet1}"; done
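# the loop above stages each testet1 result file into SLURM_TMPDIR under a matching testet2 name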
}}

<!--T:59-->
No input file modifications are required to restart the analysis.

<!--T:60-->
</tab>
</tabs>