}}


If your license has not been set up for a cluster, some additional configuration changes will need to be made by the system administrators.  Such changes are necessary to ensure the flexlm and vendor TCP ports of your abaqus server are reachable from all cluster compute nodes when jobs are run in the queue.  So that we may help you get this done, open a ticket with [[Technical support|technical support]].  Please be sure to include the following three items: the flexlm port number, the static vendor port number, and the IP address of your abaqus license server.  You will then be sent a list of cluster IP addresses so your administrator can open the local server firewall to allow connections from the cluster on both ports.  Please note that a special license agreement must generally be negotiated and signed with SIMULIA before a local institutional license may be used remotely on our hardware.  Your local abaqus license server administrator can confirm whether such an agreement is in place; otherwise, additional costs may be required.
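Once your administrator has opened the firewall, reachability of both ports can be verified from a cluster login node with a standard tool such as <code>nc</code>.  The address and port numbers below are placeholders only; substitute the values for your own license server.

<syntaxhighlight lang="bash">
# Probe the flexlm and static vendor ports of your abaqus license server.
# Replace the placeholder address and ports with your own values.
LICSERVER=192.0.2.10   # IP address of your abaqus license server (placeholder)
FLEXPORT=27000         # flexlm port (placeholder)
VENDORPORT=53000       # static vendor port (placeholder)

for port in "$FLEXPORT" "$VENDORPORT"; do
  if nc -zv -w 5 "$LICSERVER" "$port"; then
    echo "port $port is reachable"
  else
    echo "port $port is blocked or closed"
  fi
done
</syntaxhighlight>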


= Cluster job submission =


Below are prototype Slurm scripts for submitting thread- and MPI-based parallel simulations to single or multiple compute nodes.  Most users will find it sufficient to use one of the <i>project directory scripts</i> provided in the Single Node Computing sections.  The optional "memory=" argument found in the last line of the scripts is intended for larger memory or problematic jobs, where the 3072MB offset value may require tuning.  A listing of all abaqus command line arguments can be obtained by loading an abaqus module and running <code>abaqus -help | less</code>.  Single node jobs that run for less than one day should find the <i>project directory script</i> located in the first tab sufficient.  Single node jobs that run for more than a day, however, should use one of the restart scripts.  Jobs that create large restart files will benefit from writing to local disk through the use of the SLURM_TMPDIR environment variable, utilized in the <i>temporary directory scripts</i> provided in the two rightmost tabs of the Single Node standard and explicit analysis sections.  The restart scripts shown here will continue jobs that have been terminated early for some reason.  Such job failures can occur if a job reaches its maximum requested runtime before completing and is killed by the queue, or if the compute node the job was running on crashes due to an unexpected hardware failure.  Other restart types are possible by further tailoring of the input file (not shown here) to continue a job with additional steps or change the analysis (see the documentation for version-specific details).  Jobs that require large memory or compute resources beyond what a single compute node can provide should use the MPI scripts in the Multiple Node sections below to distribute computing over arbitrary node ranges determined automatically by the scheduler.  Short scaling test jobs should be run to determine wall clock times (and memory requirements) as a function of the number of cores (2, 4, 8, etc.) so the optimal number can be chosen before running any long jobs.
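For reference, the following is a minimal sketch of a single node, thread-based <i>project directory script</i> of the kind described above.  The account name, module version, job name and input file are placeholders; the requested time, memory and core count should be adjusted to match your simulation, and the memory= argument may be omitted entirely for routine jobs.

<syntaxhighlight lang="bash">
#!/bin/bash
#SBATCH --account=def-someuser   # placeholder account name
#SBATCH --time=0-06:00           # D-HH:MM, under one day for this script
#SBATCH --cpus-per-task=4        # number of solver threads
#SBATCH --mem=8G                 # total node memory for the job

module load abaqus/2021          # placeholder version; run "module avail abaqus" to list versions

# Thread-based standard analysis run entirely in the project directory.
# The optional memory= argument passes the job memory minus a 3072MB offset
# to the solver; tune the offset for larger memory or problematic jobs.
abaqus job=my-job input=my-input.inp \
  cpus=$SLURM_CPUS_PER_TASK mp_mode=threads interactive \
  memory="$(( SLURM_MEM_PER_NODE - 3072 ))MB"
</syntaxhighlight>

Such a script would be submitted with <code>sbatch</code> from the directory containing the input file.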


== Standard Analysis ==
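As an illustration of the restart mechanism described above, the sketch below shows how a standard analysis could be continued after an early termination.  It assumes the original input file requested restart data with the <code>*RESTART, WRITE</code> keyword and that the original job's restart files are still present in the project directory; the job and file names are placeholders, and the exact keyword options are version specific (consult the documentation).

<syntaxhighlight lang="bash">
# Continuation of a standard analysis that was killed before finishing.
# Run inside a Slurm script like the project directory script above,
# from the same directory that holds the original job's restart files.
# The restart input file (my-restart.inp) typically contains little more
# than a *HEADING line followed by *RESTART, READ.
abaqus job=my-job-restart oldjob=my-job input=my-restart.inp \
  cpus=$SLURM_CPUS_PER_TASK mp_mode=threads interactive \
  memory="$(( SLURM_MEM_PER_NODE - 3072 ))MB"
</syntaxhighlight>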